AR and VR podcasts you should pay attention to right now

Are you looking for ways to stay up to date with the latest extended reality news? If you prefer listening to audio news and interviews, we can recommend some captivating AR and VR podcasts.

Previously, we wrote an article about trendy VR YouTube channels.

VR Download

VR Download is a YouTube XR podcast by Upload VR published twice a week, on Mondays and Thursdays. It is also available in audio format on Apple Podcasts, Google Podcasts, Spotify, and other streaming services.

The main host of the podcast is Kyle Riesenbeck, Operations Manager at Upload VR.

The podcast has two main formats: 

  • Weekly news episodes, usually released on Mondays at noon PST. Riesenbeck and his co-hosts discuss the latest hardware developments and current XR industry trends.
  • Gamescast episodes, released every Thursday at 10:30 PST (20:30 Kyiv time). In these episodes, the hosts discuss recently released VR games, narrative experiences, and more.

An interesting fact: the podcast is recorded entirely in virtual reality, with the hosts appearing as digital avatars built with the Oculus Avatars SDK.

The AR Show

The AR Show is an augmented reality podcast produced in collaboration with AR Insider that primarily highlights the latest trends in smart glasses development. New episodes come out at least once a week and are available for listening and downloading on the podcast’s official site.

The AR Show’s host, Jason McDowell, is VP of Product and Head of Visual Experience at Ostendo Technologies and a mentor at Amplify.LA.

McDowell believes that AR glasses will have a much bigger impact on individuals and society than smartphones, PCs, or the internet. He thinks smart glasses can transform how the human brain interacts with the physical world and reduce the time it takes to find information on the spot.

In every episode, the host interviews representatives of companies that develop smart glasses and other AR-related technologies. They discuss various applications of augmented reality in education, sports, healthcare, and other fields.

In addition, AR Insider and Upload VR are included in the list of online media we recommend reading.

Ruff Talk VR

Ruff Talk VR is a virtual reality gaming podcast that comes out once or twice a week. You can listen to it on the Ruff Talk VR site and on many streaming services, including Apple Podcasts, Spotify, Overcast, and iHeartRadio. The podcast is also available on YouTube in video format.

The Ruff Talk VR hosts are enthusiast bloggers known by the nicknames Dscruffles (son) and Stratus2k1 (father).

In every episode, the hosts review the latest VR games and other apps from the Oculus Store and App Lab. Dscruffles and Stratus2k1 walk through the gameplay and discuss each game’s advantages and disadvantages, with small tangents along the way. At the end, they announce a final review score.

The Ruff Talk VR hosts also interview game developers and post VR morning news episodes every Monday.

Everything AR & VR

Everything AR & VR is a weekly VR/AR Association podcast available on the VR/AR Association website and on streaming platforms like Apple Podcasts.

The hosts of Everything AR & VR are Tyler Gates and Sophia Moshasha. Gates is General Manager at Brightline Interactive, Chief Futurist at the Glimpse Group, and President of the VRARA DC Chapter. Moshasha is a metaverse and Web 3.0 strategist and Vice President of the VRARA DC Chapter.

The hosts interview technologists and company representatives about their experience working with virtual and augmented reality in fields such as gaming, entertainment, healthcare, education, the military, and hardware and software development. Guests and hosts also discuss the latest trends in XR adoption by governments and enterprises.

By the way, a few months ago, Qualium Systems officially joined the VR/AR Association, which currently unites more than 4300 organizations. 

Voices Of VR

Voices of VR is a weekly podcast about virtual and augmented reality development, founded in May 2014. The podcast is available on its official site and on streaming platforms like Apple Podcasts.

The podcast’s host, Kent Bye, is a philosopher, experimental journalist, and oral historian who tries to identify patterns in immersive storytelling, experiential design, and the potential of extended reality development in general.

In every episode, Bye interviews developers, storytellers, designers, and other specialists who work on XR and related technologies such as the metaverse. The podcast also covers the latest hardware and software development news.

Full Dive Gaming: a Virtual Reality Podcast

Full Dive Gaming is a virtual reality news podcast available on streaming services such as Apple Podcasts, Spotify, and Deezer. It is also available on YouTube as a video channel.

In every Full Dive Gaming episode, blogger Jay Bratt highlights the latest virtual reality news, usually discussing it with the hosts of other podcasts.

The podcast is divided into three sections: VR gaming news, game reviews, and discussions about VR games. 

Field of View Podcast

Field of View Podcast is a project of AIXR (The Academy of International Extended Reality), a non-profit VR/AR/MR organization, developed in collaboration with Accenture. 

You can listen to every episode on the podcast’s page on the academy’s site. Field of View Podcast is also available on Spotify, Apple Podcasts, and YouTube, and transcripts can be found on Accenture’s site.

Daniel Colaianni and Nicola Rosa host the podcast. Colaianni is the CEO of AIXR, and Rosa is XR Lead for Europe and Immersive Learning Lead at Accenture.

The podcast’s main topics are the metaverse, virtual and augmented reality, Web 3.0, and other technologies. Rosa and Colaianni periodically interview representatives of world-known companies such as Disney, HTC, Audi, and NVIDIA.

In one of the latest episodes, the hosts talked to Louis Rosenberg about artificial intelligence in the metaverse. You can read about his views on the metaverse’s potential in one of our previous articles.


