Med Tech Standards: Why DICOM is Stuck in the 90s and What Needs to Change

You probably don’t think much about medical scan data. But it’s everywhere.

If you’ve ever had an X-ray or an MRI, your images were almost certainly stored and shared using DICOM (Digital Imaging and Communications in Medicine), the globally accepted standard for exchanging medical imaging data such as X-rays, MRIs, and CT scans between hospitals, clinics, and research institutions since the late 80s and early 90s.

But there’s a problem: while medical technology has made incredible leaps in the last 30 years, DICOM hasn’t kept up.

What is DICOM anyway?

DICOM still operates in ways that feel more suited to a 1990s environment of local networks and limited computing power. Despite updates, the system doesn’t meet the demands of cloud computing, AI-driven diagnostics, and real-time collaboration. It lacks cloud-native support, relies on rigid file structures, and suffers from inconsistencies between different manufacturers’ implementations.

If your doctor still hands you a CD with your scan on it in 2025 (!), DICOM is a big part of that story.


The DICOM Legacy

How DICOM Came to Be

When DICOM was developed in the 1980s, the focus was on solving some big problems in medical imaging, and honestly, it did the job brilliantly for its time.

The initial idea was to create a universal language for different hardware and software platforms to communicate with each other, sort of like building a shared language for technology. They also had to make sure it was compatible with older devices already in use.

At that time, the most practical option was to rely on local networks since cloud-based solutions simply didn’t exist yet.

These decisions helped DICOM become the go-to standard, but they also locked it into an outdated framework that’s now tough to update.

Why It’s Hard to Change DICOM

Medical standards don’t evolve as fast as consumer technology like phones or computers. Changing something like DICOM doesn’t happen overnight. It’s a slow, complicated process, bogged down by layers of regulatory approvals and opinions from a tangled web of organizations and stakeholders.

What’s more, hospitals have decades of patient data tied to these systems, and making big changes that may break compatibility isn’t easy.

And to top it all off, device manufacturers have different ways of interpreting and implementing DICOM, so it’s nearly impossible to enforce consistency.

The Trouble With Staying Backwards Compatible

DICOM’s focus on working perfectly with old systems was smart at the time, but it’s created some long-term problems.

Technology has moved on to AI, cloud storage, and real-time diagnostic tools, and these advances have quickly exposed how hard it is for DICOM to catch up. Also, vendor-specific implementations have created quirks that make devices less compatible with one another than they should be.

And don’t even get started on trying to link DICOM with modern healthcare systems like electronic records or telemedicine platforms. It would be like trying to plug a 1980s gadget into a smart technology ecosystem — not impossible, but far from seamless.


Why Your CT Scanner and MRI Machine Aren’t Speaking the Same Language

Interoperability in medical imaging sounds great in theory: everything just works, no matter the device or manufacturer. In practice, though, things get messy. Some of the issues sound abstract, but for doctors and hospitals they mean delays, misinterpretations, and extra burden. So, why don’t devices always play nice?

The Problem With “Standards” That Aren’t Very Standard

You’d think having a universal standard like DICOM would ensure easy interoperability because everybody follows the same rules.

Not exactly. Device manufacturers implement it differently, and this leads to:

  • Private tags. These are proprietary pieces of data that only specific software can understand. If your software doesn’t understand them, you’re out of luck.
  • Missing or vague fields. Some devices leave out crucial metadata or define it differently.
  • File structure issues. Small differences in how data is formatted sometimes make files unreadable.

The idea of a universal standard is nice, but the way it’s applied leaves a lot to be desired.
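
To make private tags less abstract, here is a minimal sketch, assuming the open-source pydicom library and a hypothetical file name, that lists every private element in a file; these are exactly the vendor-specific fields another vendor’s software may silently ignore.

```python
# Sketch: list vendor-specific (private) tags in a DICOM file.
# Assumes the pydicom library; "scan.dcm" is a hypothetical file path.
import pydicom

ds = pydicom.dcmread("scan.dcm")

for elem in ds:
    # Private tags live in odd-numbered groups; pydicom exposes this check directly.
    if elem.tag.is_private:
        print(f"{elem.tag} {elem.VR} {elem.name}: {elem.value!r}")
```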

Metadata and Tag Interpretation Issues

DICOM images contain extensive metadata to describe details like how the patient was positioned during the scan or how the images fit together. But when this metadata isn’t recorded or interpreted consistently, problems start to pile up.

For example, inconsistencies in slice spacing or image order can throw off 3D reconstructions, leaving scans misaligned. As a result, when doctors try to compare scans over time or across different systems, they often have to deal with mismatched or incomplete data.

These inconsistencies make what should be straightforward tasks unnecessarily complicated and create challenges for accurate diagnoses and proper patient care.
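
To see the slice-spacing problem in code, here is a small sketch, assuming pydicom and NumPy and hypothetical file paths, that compares the nominal SliceThickness tag with the spacing actually implied by consecutive ImagePositionPatient values; when the two disagree, naive 3D reconstructions come out distorted.

```python
# Sketch: compare nominal SliceThickness with the spacing implied by slice positions.
# Assumes pydicom + NumPy and an axial series; the paths are hypothetical.
import glob
import numpy as np
import pydicom

datasets = [pydicom.dcmread(p) for p in glob.glob("series/*.dcm")]

# For an axial series, the z component of ImagePositionPatient orders the slices.
# (Tilted series would need a projection onto the slice normal instead.)
z_positions = sorted(float(ds.ImagePositionPatient[2]) for ds in datasets)
gaps = np.diff(z_positions)

print("Nominal SliceThickness:", datasets[0].get("SliceThickness"))
print("Computed spacing min/max:", gaps.min(), gaps.max())
if gaps.max() - gaps.min() > 1e-3:
    print("Uneven spacing detected; a naive 3D reconstruction will be misaligned.")
```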

File Structure and Storage Inconsistencies

The way images are stored varies so much between devices that it often causes problems.

Some scanners save each image slice separately. Others put them together in one file. Then there are slight differences in DICOM implementations that make it difficult to read images on some systems. Compression adds another layer of complexity — it’s not the same across the board. File sizes and levels of quality vary widely.
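
One quick way to see this variability is to inspect how each file is packaged. The sketch below, assuming pydicom and a hypothetical folder of incoming studies, prints the transfer syntax, whether the pixel data is compressed, and whether the object is single-frame or multi-frame; across a mixed fleet of scanners the answers are rarely uniform.

```python
# Sketch: inspect how DICOM objects from different devices are packaged.
# Assumes pydicom; the directory pattern is hypothetical.
import glob
import pydicom

for path in glob.glob("incoming/**/*.dcm", recursive=True):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    ts = ds.file_meta.TransferSyntaxUID
    frames = int(ds.get("NumberOfFrames", 1))
    packaging = f"multi-frame ({frames} frames)" if frames > 1 else "single-frame"
    compression = "compressed" if ts.is_compressed else "uncompressed"
    print(f"{path}: {ts.name}, {compression}, {packaging}")
```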

All these mismatches and inconsistencies make everything harder for hospitals and doctors trying to work together.


Orientation and Interpretation Issues

Medical imaging is incredible, but working with scans can slow things down when time matters most and make it harder to get accurate insights for patient care.

There are several reasons for this.

Different Coordinate Systems

DICOM permits the use of different coordinate systems, and that causes confusion.

For instance, patient-based coordinates relate to the patient’s body, like top-to-bottom (head-to-feet) or side-to-side (left-to-right). Scanner-based coordinates, on the other hand, are based on the imaging device itself.

When these systems don’t match up, it creates misalignment issues in multi-modal imaging studies, where scans from different devices need to work together.
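
The patient-based frame is recoverable from two standard tags, ImageOrientationPatient and ImagePositionPatient. The sketch below, assuming pydicom and NumPy and a hypothetical file, maps a pixel’s row and column index into patient coordinates in millimetres; when a device fills these tags inconsistently, this mapping, and any multi-modal alignment built on it, quietly goes wrong.

```python
# Sketch: map a pixel (row, column) index to patient coordinates in millimetres.
# Assumes pydicom + NumPy; "slice.dcm" is a hypothetical file.
import numpy as np
import pydicom

ds = pydicom.dcmread("slice.dcm")

iop = np.array(ds.ImageOrientationPatient, dtype=float)  # row, then column direction cosines
ipp = np.array(ds.ImagePositionPatient, dtype=float)     # position of the first pixel (mm)
row_spacing, col_spacing = map(float, ds.PixelSpacing)   # spacing between rows / columns (mm)

def pixel_to_patient(row, col):
    # Patient position = origin + (column index along the row direction)
    #                           + (row index along the column direction)
    return ipp + col * col_spacing * iop[:3] + row * row_spacing * iop[3:]

print(pixel_to_patient(0, 0))      # equals ImagePositionPatient
print(pixel_to_patient(255, 255))  # opposite corner of a 256 x 256 slice
```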

Slice Ordering Problems

Scans like MRIs and CTs are made up of thin cross-sectional images called slices. But not every scanner orders or numbers these slices in the same way.

Slices can be stored top-to-bottom or bottom-to-top. If the order isn’t clear, reconstructing 3D models becomes harder. Some scanners also use inconsistent slice numbering, making volume alignment challenging.
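
Because stored numbering can’t always be trusted, a common defensive move is to sort slices geometrically, by projecting each slice’s position onto the slice normal. Here is a minimal sketch of that idea, assuming pydicom and NumPy and hypothetical paths; it also flags when the geometric order disagrees with InstanceNumber.

```python
# Sketch: order slices geometrically instead of trusting InstanceNumber.
# Assumes pydicom + NumPy; the directory path is hypothetical.
import glob
import numpy as np
import pydicom

datasets = [pydicom.dcmread(p) for p in glob.glob("series/*.dcm")]

iop = np.array(datasets[0].ImageOrientationPatient, dtype=float)
normal = np.cross(iop[:3], iop[3:])  # slice normal = row direction x column direction

def along_normal(ds):
    return float(np.dot(normal, np.array(ds.ImagePositionPatient, dtype=float)))

geometric = sorted(datasets, key=along_normal)
numbered = sorted(datasets, key=lambda ds: int(ds.InstanceNumber))

if [id(d) for d in geometric] != [id(d) for d in numbered]:
    print("InstanceNumber order disagrees with geometry; trust the geometry.")
```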

Display Inconsistencies Across Viewers

It’s weird to think that a medical scan can look completely different depending on which viewer you open it with, but that’s exactly what happens with DICOM viewers: there’s no universal approach to how images should be presented.

For example, the brightness and contrast look perfect in one viewer but totally off on another because they each interpret presets differently. Or images can be flipped or rotated because one system handles orientation metadata in a different way. There are also cross-platform compatibility issues when a scan looks perfect on one viewer but appears distorted or altered when opened on another platform.
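
Much of the brightness and contrast mismatch comes down to how each viewer applies, or ignores, the stored windowing tags. The sketch below, assuming pydicom and NumPy and a hypothetical CT slice, applies RescaleSlope/RescaleIntercept and a WindowCenter/WindowWidth preset explicitly, so the same file renders the same way wherever the code runs.

```python
# Sketch: apply rescale and window/level explicitly so display doesn't depend on the viewer.
# Assumes pydicom + NumPy; "ct_slice.dcm" is a hypothetical file.
import numpy as np
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")
pixels = ds.pixel_array.astype(np.float64)

# Convert stored values to modality units (e.g., Hounsfield units for CT).
pixels = pixels * float(ds.get("RescaleSlope", 1)) + float(ds.get("RescaleIntercept", 0))

def first(value):
    # WindowCenter/WindowWidth can hold several presets; take the first one.
    try:
        return float(value[0])
    except TypeError:
        return float(value)

center, width = first(ds.WindowCenter), first(ds.WindowWidth)

# Map the chosen window onto an 8-bit grayscale range for display.
low, high = center - width / 2, center + width / 2
display = (np.clip((pixels - low) / (high - low), 0, 1) * 255).astype(np.uint8)
print(display.shape, display.min(), display.max())
```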

All these inconsistencies add up, and interpretation becomes more complicated than it should be.


Interoperability: Why It’s Breaking Down

When you think about healthcare, the last thing you want is for technology to get in the way of patient care. However, that’s exactly what happens when systems can’t talk to each other.

Interoperability challenges slow down workflows, add stress to healthcare systems, and impact how quickly patients get the care they need.

Interoperability Isn’t Optional Anymore

Interoperability is absolutely critical in healthcare, and it’s not hard to see why.

Hospitals use equipment from different manufacturers, and everything should work seamlessly together. Doctors at different facilities need to share images for second opinions and collaboration. Finally, AI tools and cloud-based services only work well when they have clean data to analyze.

These connections break down without interoperability, and it becomes harder for healthcare teams to give their patients the proper care.

Common Stumbling Blocks

When each vendor configures the DICOM standard to suit their devices, you get broken compatibility between systems. Moreover, trying to connect DICOM systems to modern cloud platforms is a pain because there aren’t enough standard APIs to make it simple.

And for hospitals, it’s even worse. They often feel stuck with a specific PACS vendor. The software is so locked down and proprietary that switching to something else feels almost impossible.

Problems With Cloud and AI Integration

Large file sizes and inefficient compression drag down cloud-based workflows and make everything slower than it needs to be.

Then there’s real-time remote diagnostics. It becomes harder to manage without native streaming support, and delays are almost guaranteed.

And what about AI? It’s one of the most powerful tools we have, but it relies on consistent, clean metadata to really perform. The problem is that DICOM data is often inconsistent. So, instead of letting AI do what it’s designed to do, like analyzing and automating tasks, it hits a wall because the metadata isn’t aligned, and you end up spending more time making the data compatible.
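
One small illustration of the cleanup work this implies: a pipeline will often gate files on the metadata it depends on before any model ever sees them. A minimal pre-flight check, assuming pydicom and an illustrative (not standardized) list of required tags, might look like this:

```python
# Sketch: gate files on the metadata an AI pipeline depends on.
# Assumes pydicom; the list of required tags is an illustrative assumption, not a standard.
import pydicom

REQUIRED_KEYWORDS = [
    "PatientID", "Modality", "PixelSpacing",
    "ImageOrientationPatient", "ImagePositionPatient",
]

def is_usable(path):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    missing = [kw for kw in REQUIRED_KEYWORDS if kw not in ds]
    if missing:
        print(f"{path}: skipped, missing {missing}")
        return False
    return True
```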

These are real challenges that get in the way of what could be a more efficient system. The sooner we address these issues, the sooner the system flows like it’s meant to.


Efforts to Update Medical Imaging Standards

Medical imaging is going through some much-needed upgrades, and honestly, it’s about time. Systems that have been around forever, like DICOM, are finally getting the updates they need to keep up with the pace of healthcare.

For example, with DICOMweb, you can now pull up imaging files using RESTful APIs. FHIR (Fast Healthcare Interoperability Resources) helps DICOM work better with newer healthcare systems.
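
To give a flavour of what RESTful access means here, the sketch below uses the QIDO-RS search endpoint defined by DICOMweb to look up a patient’s studies. The base URL, patient ID, and the use of the requests library are assumptions for illustration; the query pattern itself comes from the DICOMweb specification.

```python
# Sketch: search for a patient's studies over DICOMweb (QIDO-RS).
# The server URL and PatientID are hypothetical; requests is an assumed dependency.
import requests

BASE_URL = "https://pacs.example.org/dicom-web"  # hypothetical DICOMweb endpoint

response = requests.get(
    f"{BASE_URL}/studies",
    params={"PatientID": "12345", "includefield": "StudyDescription"},
    headers={"Accept": "application/dicom+json"},
    timeout=30,
)
response.raise_for_status()

# DICOM JSON keys are tag numbers; 0020000D is StudyInstanceUID, 00081030 is StudyDescription.
for study in response.json():
    uid = study["0020000D"]["Value"][0]
    description = study.get("00081030", {}).get("Value", ["(no description)"])[0]
    print(uid, "-", description)
```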

Of course, DICOM isn’t the only option. The NIfTI format works well for 3D volumetric imaging and is a favorite in neuroscience research. FHIR-based imaging workflows offer cloud-native alternatives. The whole point is to give people alternatives that fit the way the world works now, not the way it worked 20 years ago.
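
Converting a DICOM series into NIfTI is a common research step, and it shows why the geometric metadata discussed earlier matters: the NIfTI affine has to be rebuilt from those same tags. Below is a deliberately simplified sketch, assuming pydicom, NumPy, and nibabel with hypothetical paths; dedicated converters such as dcm2niix handle the many edge cases this ignores.

```python
# Sketch: stack an axial DICOM series into a NIfTI volume (simplified).
# Assumes pydicom + NumPy + nibabel; ignores orientation edge cases real converters handle.
import glob
import nibabel as nib
import numpy as np
import pydicom

datasets = [pydicom.dcmread(p) for p in glob.glob("series/*.dcm")]
datasets.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))  # axial assumption

volume = np.stack([ds.pixel_array for ds in datasets], axis=-1).astype(np.int16)

row_spacing, col_spacing = map(float, datasets[0].PixelSpacing)
slice_gap = float(datasets[1].ImagePositionPatient[2]) - float(datasets[0].ImagePositionPatient[2])

# Scale-only affine; a faithful affine would also use ImageOrientationPatient.
affine = np.diag([col_spacing, row_spacing, slice_gap, 1.0])

nib.save(nib.Nifti1Image(volume, affine), "series.nii.gz")
```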

AI and cloud computing are also making an impact, but they do have some big requirements. AI is powerful, no doubt, but it needs well-organized image data to produce accurate results. It’s simply held back when the data isn’t consistent. On the cloud side of things, PACS systems depend on strong DICOM support to run properly. Metadata is a key element that ties all this together. It powers automation, speeds up workflows, and ensures precision in every process.

Piece by piece, all these changes are helping medical imaging to keep up with what modern healthcare actually needs.


What’s next for medical imaging?

DICOM was never built for today’s needs like real-time collaboration or AI-powered analysis; those simply weren’t on the radar when DICOM was created. It’s an old system trying to function in a new reality.

What medical imaging systems need is a fresh approach. There should be clearer rules to enforce standardization and stop vendors from going rogue with custom modifications.

Modern systems would also benefit from cloud-native imaging formats that handle modern needs and don’t break old data. Moreover, smart APIs can simplify bridging imaging systems with EHRs and the many other tools healthcare depends on.
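
As a rough sketch of what that bridging can look like, here is a minimal FHIR R4-style ImagingStudy resource, written as a Python dict and posted to a hypothetical FHIR server with the requests library. Every identifier and URL is a made-up placeholder; only the general resource shape follows the published FHIR definition.

```python
# Sketch: register an imaging study in an EHR via a FHIR R4 ImagingStudy resource.
# The server URL, patient reference, and UIDs are hypothetical placeholders.
import requests

imaging_study = {
    "resourceType": "ImagingStudy",
    "status": "available",
    "subject": {"reference": "Patient/12345"},
    "started": "2025-01-15T09:30:00Z",
    "series": [{
        "uid": "2.25.111111111111111111111111111111111",  # placeholder series UID
        "modality": {
            "system": "http://dicom.nema.org/resources/ontology/DCM",
            "code": "CT",
        },
    }],
}

response = requests.post(
    "https://fhir.example.org/ImagingStudy",  # hypothetical FHIR endpoint
    json=imaging_study,
    headers={"Content-Type": "application/fhir+json"},
    timeout=30,
)
print(response.status_code)
```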

If imaging systems can make these changes, they could finally start working the way modern medicine really needs them to.

Final Thoughts

Medical imaging deserves better. DICOM had its time, but modern medicine needs systems that keep pace with advancements.

Change is possible, but it’s going to take teamwork between healthcare professionals, software developers, and policymakers to move on from the 90s and build something that works for the challenges of today and tomorrow.

References & Resources

Latest Articles

June 27, 2025
Methodology of VR/MR/AR and AI Project Estimation

Estimation of IT projects based on VR, XR, MR, or AI requires both a deep technical understanding of advanced technologies and the ability to predict future market tendencies, potential risks, and opportunities. In this document, we aim to thoroughly examine estimation methodologies that allow for the most accurate prediction of project results in such innovative fields as VR/MR/AR and AI by describing unique approaches and strategies developed by Qualium Systems. We strive to cover existing estimation techniques used at our company and delve into the strategies and approaches that ensure high efficiency and accuracy of the estimation process. While focusing on different estimation types, we analyze the choice of methods and alternative approaches available. Due attention is paid to risk assessment being the key element of a successful IT project implementation, especially in such innovative fields as VR/MR/AR and AI. Moreover, the last chapter covers the demo of a project of ours, the Chemistry education app. We will show how the given approaches practically affect the final project estimation. Read

June 27, 2025
What Are Spatial Anchors and Why They Matter

Breaking Down Spatial Anchors in AR/MR Augmented Reality (AR) and Mixed Reality (MR) depend on accurate understanding of the physical environment to create realistic experiences, and they hit this target with the concept of spatial anchors. These anchors act like markers, either geometric or based on features, that help virtual objects stay in the same spot in the real world — even when users move around. Sounds simple, but the way spatial anchors are implemented varies a lot depending on the platform; for example, Apple’s ARKit, Google’s ARCore, and Microsoft’s Azure Spatial Anchors (ASA) all approach them differently. If you want to know how these anchors are used in practical scenarios or what challenges developers often face when working with them, this article dives into these insights too. What Are Spatial Anchors and Why They Matter A spatial anchor is like a marker in the real world, tied to a specific point or group of features. Once you create one, it allows for some important capabilities: Persistence. Virtual objects stay exactly where you placed them in the real-world, even if you close and restart the app. Multi-user synchronization. Multiple devices can share the same anchor, so everyone sees virtual objects aligned to the same physical space. Cross-session continuity. You can leave a space and come back later, and all the virtual elements will still be in the right place. In AR/MR, your device builds a point cloud or feature map by using the camera and built-in sensors like the IMU (inertial measurement unit). Spatial anchors are then tied to those features, and without them, virtual objects can drift or float around as you move, shattering the sense of immersion. Technical Mechanics of Spatial Anchors At a high level, creating and using spatial anchors involves a series of steps: Feature Detection & Mapping To start, the device needs to understand its surroundings: it scans the environment to identify stable visual features (e.g., corners, edges). Over time, these features are triangulated, forming a sparse map or mesh of the space. This feature map is what the system relies on to anchor virtual objects. Anchor Creation Next, anchors are placed at specific 3D locations in the environment in two possible ways: Hit-testing. The system casts a virtual ray from a camera to a user-tapped point, then drops an anchor on the detected surface. Manual placement. Sometimes, developers need precise control, so they manually specify the exact location of an anchor using known coordinates, like ensuring it perfectly fits on the floor or another predefined plane. Persistence & Serialization Anchors aren’t temporary — they can persist, and here’s how systems make that possible: Locally stored anchors. Frameworks save the anchor’s data, like feature descriptors and transforms, in a package called a “world map” or “anchor payload”. Cloud-based anchors. Cloud services like Azure Spatial Anchors (ASA) upload this anchor data to a remote server to let the same anchor be accessed across multiple devices. Synchronization & Restoration When you’re reopening the app or accessing the anchor on a different device, the system uses the saved data to restore the anchor’s location. It compares stored feature descriptors to what the camera sees in real time, and if there’s a good enough match, the system confidently snaps the anchor into position, and your virtual content shows up right where it’s supposed to. 
However, using spatial anchors isn’t perfect, like using any other technology, and there are some tricky issues to figure out: Low latency. Matching saved data to real-time visuals has to be quick; otherwise, the user experience feels clunky. Robustness in feature-scarce environments. Blank walls or textureless areas don’t give the system much to work with and make tracking tougher. Scale drift. Little errors in the system’s tracking add up over time to big discrepancies. When everything falls into place and the challenges are handled well, spatial anchors make augmented and virtual reality experiences feel seamless and truly real. ARKit’s Spatial Anchors (Apple) Apple’s ARKit, rolled out with iOS 11, brought powerful features to developers working on AR apps, and one of them is spatial anchoring, which allows virtual objects to stay fixed in the real world as if they belong there. To do this, ARKit provides two main APIs that developers rely on to achieve anchor-based persistence. ARAnchor & ARPlaneAnchor The simplest kind of anchor in ARKit is the ARAnchor, which represents a single 3D point in the real-world environment and acts as a kind of “pin” in space that ARKit can track. Building on this, ARPlaneAnchor identifies flat surfaces like tables, floors, and walls, allowing developers to tie virtual objects to these surfaces. ARWorldMap ARWorldMap makes ARKit robust for persistence and acts as a snapshot of the environment being tracked by ARKit. It captures the current session, including all detected anchors and their surrounding feature points, into a compact file. There are a few constraints developers need to keep in mind: World maps are iOS-only, which means they cannot be shared directly with Android. There must be enough overlapping features between the saved environment and the current physical space, and textured structures are especially valuable for this, as they help ARKit identify key points for alignment. Large world maps, especially those with many anchors or detailed environments, can be slow to serialize and deserialize, causing higher application latency when loading or saving. ARKit anchors are ideal for single-user persistence, but sharing AR experiences across multiple devices poses additional issues, and developers often employ custom server logic (uploading ARWorldMap data to a backend), enabling users to download and use the same map. However, this approach comes with caveats: it requires extra development work and doesn’t offer native support for sharing across platforms like iOS and Android. ARCore’s Spatial Anchors (Google) Google’s ARCore is a solid toolkit for building AR apps, and one of its best features is how it handles spatial anchors: Anchors & Hit-Testing ARCore offers two ways to create anchors. You can use Session.createAnchor(Pose) if you already know the anchor’s position, or…

June 2, 2025
Extended Reality in Industry 4.0: Transforming Industrial Processes

Understanding XR in Industry 4.0 Industry 4.0 marks a turning point in making industry systems smarter and more interconnected: it integrates digital and physical technologies like IoT, automation, and AI, into them. And you’ve probably heard about Extended Reality (XR), the umbrella for Virtual Reality, Augmented Reality, and Mixed Reality. It isn’t an add-on. XR is one of the primary technologies making the industry system change possible. XR has made a huge splash in Industry 4.0, and recent research shows how impactful it has become. For example, a 2023 study by Gattullo et al. points out that AR and VR are becoming a must-have in industrial settings. It makes sense — they improve productivity and enhance human-machine interactions (Gattullo et al., 2023). Meanwhile, research by Azuma et al. (2024) focuses on how XR makes workspaces safer and training more effective in industrial environments. One thing is clear: the integration of XR into Industry 4.0 closes the gap between what we imagine in digital simulations and what actually happens in the real world. Companies use XR to work smarter — it tightens up workflows, streamlines training, and improves safety measures. The uniqueness of XR is in its immersive nature. It allows teams to make better decisions, monitor operations with pinpoint accuracy, and effectively collaborate, even if team members are on opposite sides of the planet. XR Applications in Key Industrial Sectors Manufacturing and Production One of the most significant uses of XR in Industry 4.0 is in manufacturing, where it enhances design, production, and quality control processes. Engineers now utilize digital twins, virtual prototypes, and AR-assisted assembly lines, to catch possible defects before production even starts. Research by Mourtzis et al. (2024) shows how effective digital twin models powered by XR are in smart factories: for example, studies reveal that adopting XR-driven digital twins saves design cycle times by up to 40% and greatly speeds up product development. Besides, real-time monitoring with these tools has decreased system downtimes by 25% (Mourtzis et al., 2024). Training and Workforce Development The use of XR in employee training has changed how industrial workers acquire knowledge and grow skills. Hands-on XR-based simulations allow them to practice in realistic settings without any of the risks tied to operating heavy machinery, whereas traditional training methods usually involve lengthy hours, high expenses, and the need to set aside physical equipment, disrupting operations. A study published on ResearchGate titled ‘Immersive Virtual Reality Training in Industrial Settings: Effects on Memory Retention and Learning Outcomes’ offers interesting insights on XR’s use in workforce training. It was carried out by Jan Kubr, Alena Lochmannova, and Petr Horejsi, researchers from the University of West Bohemia in Pilsen, Czech Republic, specializing in industrial engineering and public health. The study focused on fire suppression training to show how different levels of immersion in VR affect training for industrial safety procedures. The findings were astounding. People trained in VR remembered 45% more information compared to those who went through traditional training. VR also led to a 35% jump in task accuracy and cut real-world errors by 50%. On top of that, companies using VR in their training programs noticed that new employees reached full productivity 25% faster. 
The study uncovered a key insight: while high-immersion VR training improves short-term memory retention and operational efficiency, excessive immersion — for example, using both audio navigation and visual cues at the same time — can overwhelm learners and hurt their ability to absorb information. These results showed how important it is to find the right balance when creating VR training programs to ensure they’re truly effective. XR-based simulations let industrial workers safely engage in realistic and hands-on scenarios without the hazards or costs of operating heavy machinery, changing the way they acquire new skills. Way better than sluggish, costly, and time-consuming traditional training methods that require physical equipment and significant downtime. Maintenance and Remote Assistance XR is also transforming equipment maintenance and troubleshooting. In place of physical manuals, technicians using AR-powered smart glasses can view real-time schematics, follow guided diagnostics, and connect with remote experts, reducing downtime. Recent research by Javier Gonzalez-Argote highlights how significantly AR-assisted maintenance has grown in the automotive industry. The study finds that AR, mostly mediated via portable devices, is widely used in maintenance, evaluation, diagnosis, repair, and inspection processes, improving work performance, productivity, and efficiency. AR-based guidance in product assembly and disassembly has also been found to boost task performance by up to 30%, substantially improving accuracy and lowering human errors. These advancements are streamlining industrial maintenance workflows, reducing downtime and increasing operational efficiency across the board (González-Argote et al., 2024). Industrial IMMERSIVE 2025: Advancing XR in Industry 4.0 At the Industrial IMMERSIVE Week 2025, top industry leaders came together to discuss the latest breakthroughs in XR technology for industrial use. One of the main topics of discussion was XR’s growing impact on workplace safety and immersive training environments. During the event, Kevin O’Donovan, a prominent technology evangelist and co-chair of the Industrial Metaverse & Digital Twin committee at VRARA, interviewed Annie Eaton, a trailblazing XR developer and CEO of Futurus. She shared exciting details about a groundbreaking safety training initiative, saying: “We have created a solution called XR Industrial, which has a collection of safety-themed lessons in VR … anything from hazards identification, like slips, trips, and falls, to pedestrian safety and interaction with mobile work equipment like forklifts or even autonomous vehicles in a manufacturing site.” By letting workers practice handling high-risk scenarios in a risk-free virtual setting, this initiative shows how XR makes workplaces safer. No wonder more companies are beginning to see the value in using such simulations to improve safety across operations and avoid accidents. Rethinking how manufacturing, training, and maintenance are done, extended reality is rapidly becoming necessary for Industry 4.0. The combination of rising academic study and practical experiences, like those shared during Industrial IMMERSIVE 2025, highlights how really strong this technology is. XR will always play a big role in optimizing efficiency, protecting workers, and…



Let's discuss your ideas

Contact us