From Try-Ons to Treasure Hunts: The Spectacular Impact of AR and WebXR on Shopping

This article was written by our CEO Olga Kryvchenko and originally published on LinkedIn. For biweekly updates about extended reality, subscribe to Olga’s XR Frontiers LinkedIn newsletter.

Picture yourself trying on a fabulous pair of shoes, testing out a trendy new hairstyle, or zipping through a gigantic mall without breaking a sweat, all from the comfort of your own home! With the magic of Augmented Reality (AR) and WebXR, shoppers and shop owners are about to embark on an exciting escapade into the future of shopping. But before we plunge into this virtual wonderland, let’s take a playful peek at the current state of AR and WebXR technologies and how they are shaping the future of retail.

A Whirlwind Tour of AR and WebXR Technologies for the Shopper Experience:

  • AR fitting apps: Apps like Wanna Kicks by Wannaby and the IKEA Place app allow customers to virtually try on shoes or place furniture in their homes, providing a more satisfying shopping experience and reducing returns. L'Oréal Group offers the Maybelline Virtual Try-On, which makes online makeup shopping more convenient.
  • AR scanning apps: Google Lens enables users to identify products, read labels, and gather detailed information, making product searches and comparisons a breeze.
  • AR maps and wayfinding apps: The Aisle411 app helps customers navigate the maze of large shopping malls by providing indoor maps and turn-by-turn directions.
  • AR menus and loyalty programs: KabaQ is an app that showcases 3D models of menu items, while Snatch, an AR-based loyalty program, allows users to hunt for virtual prizes, rewards, and discounts.
  • AR entertainment apps: The Leo AR app delivers an array of captivating games and virtual experiences to be enjoyed in shopping centers and other public spaces.

The Exciting State of AR and WebXR in Shopping:

Good news! The cost of developing AR and WebXR solutions is on the decline, thanks to the availability of pre-built solutions, plugins, and libraries. A treasure trove of resources with 3D assets is already available for use or customization in AR shopping apps. Loads of talented companies can whip up custom 3D models to bring products to life in AR. Plus, businesses can purchase or lease 3D scanners to help digitize their products for AR integration. And, to top it all off, many existing apps are ready and waiting for easy integration with AR and WebXR technologies.

Embracing AR and WebXR technologies can have a positive impact on the environment, helping to create a more sustainable shopping experience. By allowing customers to virtually try on products or see how items would look in their homes, these technologies can significantly reduce the need for returns. This, in turn, leads to a decrease in packaging waste and transportation emissions associated with shipping products back and forth. For example, ASOS, an online fashion retailer, implemented a virtual fitting room feature called “See My Fit”, which allows customers to see how clothing items would look on different body types, reducing the likelihood of returns. Similarly, the IKEA Place app enables users to visualize furniture items in their homes before making a purchase, helping to minimize unnecessary returns and their environmental impact. By integrating AR and WebXR technologies into the shopping experience, businesses can contribute to a more eco-friendly retail landscape while still delivering a satisfying customer experience.

As AR and WebXR technologies continue to advance, they hold immense potential to revolutionize the online shopping experience even further. One exciting development is the creation of virtual showrooms, where customers can explore a 3D representation of a physical store and interact with products in a more immersive manner. This could transform the way consumers shop online, making it feel more like an in-person experience. For example, Shopify has introduced 3D modeling and AR solutions that enable merchants to showcase their products in 3D, allowing customers to examine items from every angle and visualize them in real-world environments.
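To make the virtual-showroom idea concrete, here is a minimal TypeScript sketch of how a web storefront might let a shopper drop a product onto their floor using the standard WebXR hit-test module. It is illustrative only, assuming WebXR type definitions (e.g. @types/webxr) are available; the session setup is stripped down, real rendering is omitted, and drawProductAt is a hypothetical placeholder for the shop's own renderer.

```typescript
// Hypothetical renderer hook: draws the product model at a real-world pose.
declare function drawProductAt(pose: XRRigidTransform): void;

async function startProductPlacement(gl: WebGLRenderingContext): Promise<void> {
  // Request an AR session with the hit-test feature enabled.
  const session = await navigator.xr!.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test'],
  });

  // Tie WebXR rendering to the page's WebGL context.
  await gl.makeXRCompatible();
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  // 'local' space keeps content fixed in the room; 'viewer' space follows the camera.
  const localSpace = await session.requestReferenceSpace('local');
  const viewerSpace = await session.requestReferenceSpace('viewer');

  // Continuously cast a ray from the center of the view onto real surfaces.
  const hitTestSource = await session.requestHitTestSource!({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const hits = frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      // Pose of the detected surface, e.g. the shopper's floor.
      const pose = hits[0].getPose(localSpace);
      if (pose) drawProductAt(pose.transform);
    }
    session.requestAnimationFrame(onFrame);
  });
}
```

In a real storefront the same loop would also render the scene each frame; the point here is simply that a handful of standard WebXR calls is enough to anchor a product model to a real surface.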

Another promising direction is the personalization of shopping experiences using AR and WebXR. By combining these technologies with artificial intelligence and customer data, retailers can offer tailored recommendations, virtual styling assistance, and customized product presentations. For instance, the Sephora Virtual Artist app utilizes AR to allow users to virtually try on makeup products and receive personalized recommendations based on their facial features and preferences.

Moreover, AR and WebXR can be integrated with social media platforms to create shared shopping experiences, allowing users to virtually shop together, seek opinions from friends, and even attend live virtual events, such as fashion shows or product launches. This would further blur the line between e-commerce and social media, making online shopping a more interactive and engaging experience.

But, as with any great adventure, there are a few hiccups to keep in mind. Size measurement in AR apps can be a bit of a mixed bag: LiDAR-equipped iOS devices boast impressive measurement accuracy, while Android devices have yet to achieve the same level of precision, leaving room for improvement. Not every device supports AR in the first place: only those running ARKit (iOS) or ARCore (Android) can join in on the fun, so it pays to detect support up front, as the sketch below shows. Outdoor GPS accuracy can be a challenge for AR guide apps. And using AR and WebXR apps may require a little extra setup, support, and maintenance, which could bump up marketing budgets.
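Because support is so uneven, a storefront can check for AR capability before advertising it. Here is a minimal sketch using the standard WebXR Device API; the 'view-in-ar' element id is illustrative, not taken from any particular shop.

```typescript
// Show the "View in AR" button only on devices that can actually run AR.
async function arSupported(): Promise<boolean> {
  if (!navigator.xr) return false; // WebXR Device API not available at all
  try {
    // Resolves true on ARCore/ARKit-backed browsers that allow immersive AR.
    return await navigator.xr.isSessionSupported('immersive-ar');
  } catch {
    return false; // e.g. blocked by the page's permissions policy
  }
}

arSupported().then((ok) => {
  const button = document.getElementById('view-in-ar'); // illustrative id
  if (button) button.hidden = !ok;
});
```

Falling back to a plain product page when this check fails keeps the experience graceful on unsupported devices.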

However, with technology continually evolving, these limitations are expected to fade away, paving the way for a seamless and delightful AR shopping experience. A spectacular shopping spree of the future is just around the corner, with fully immersive AR experiences waiting for shoppers and shop owners alike. By embracing these cutting-edge technologies, businesses can elevate their customers’ shopping adventures to thrilling new heights. So, hold onto your hats, folks, because the astonishing and transformative shopping extravaganza is about to begin!

Image: Pixabay
