EU AI Act Meets MDR: What Medical SaMD Vendors Should Be Doing Right Now
The EU AI Act entered into force on 1 August 2024 with phased applicability: most high-risk obligations apply from 2 August 2026, and those for Article 6(1) high-risk systems, the category most regulated medical devices fall into, from 2 August 2027. For medical Software as a Medical Device vendors, the relevant question is not "do we need to comply?" — most medical AI software is high-risk because MDR and IVDR are listed in Annex I and the devices undergo third-party conformity assessment — but "how does this layer onto the MDR or IVDR compliance we already have?" The answer is more nuanced than the published guidance suggests.
This piece is the operational mapping we use with medical SaMD vendors planning their combined MDR + AI Act posture. It is opinionated about what the overlap saves and where the gaps will surface compliance debt that vendors are not currently budgeting for.
The classification interaction that confuses most vendors
A medical AI device under EU MDR is classified by its medical-device risk class (I, IIa, IIb, III) and conformity-assessed accordingly. Under the AI Act, the same device is classified by AI risk tier (prohibited, high-risk, limited-risk, minimal-risk). Article 6(1) makes most medical SaMD automatically high-risk under the AI Act: MDR and IVDR are listed in Annex I, and a device that undergoes notified-body conformity assessment under them is high-risk by definition.
This produces a specific obligation pattern: the AI Act's high-risk obligations apply, but the conformity assessment can often be integrated into the existing MDR/IVDR conformity assessment rather than performed as a separate exercise, with a single notified body assessing both regimes together. This is the structural overlap most vendors do not yet exploit.
The question to ask is not "do we need a separate AI Act conformity assessment?" It is "is our notified body equipped to assess both regimes, and have we structured the technical file to support that integration?"
What the AI Act adds beyond MDR
There are five substantive obligations in the AI Act that do not have direct equivalents in MDR, and these are where most vendors will incur additional work.
1. Risk management for AI-specific harms (Article 9). The AI Act's risk management process covers harms specific to AI systems: bias in training data and outputs, automation bias and overreliance among human operators, performance degradation under distribution shift. These map onto ISO 14971 risk management but extend it. Vendors with mature 14971 processes can layer on the AI-specific extensions; vendors who have treated AI risks as out of scope will need to add them.
2. Data and data governance (Article 10). Specific requirements on training, validation, and testing datasets: provenance, representativeness, examination for biases, statistical properties. The medical-device design controls (21 CFR 820.30 / ISO 13485 7.3) require design verification but not at this level of dataset-specific analysis.
3. Logging and traceability of AI decisions (Article 12). Automated event logs of the AI system's operation throughout its lifecycle, with retention requirements. Beyond standard medical-device audit logs.
4. Transparency and human oversight (Articles 13, 14). Explicit user-facing transparency about the AI system's capabilities and limitations, plus mandatory human oversight provisions. Some of this is covered by IFU and labelling requirements under MDR; some is new.
5. Post-market monitoring tied to AI performance metrics (Article 72). The AI Act requires ongoing monitoring of the AI system's performance on the population it is actually serving, with documented response to drift. MDR requires post-market clinical follow-up; the AI Act adds the AI-specific dimension.
The combined posture covers all five plus the existing MDR obligations. The overlap is real but partial.
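Of the five, the logging obligation (Article 12) is the easiest to make concrete in code. Below is a minimal sketch of what an inference event log might look like in a Python service; the schema, field names, and the choice to hash inputs rather than store them are all illustrative assumptions on our part, since the Act mandates logging capability and retention, not a specific format.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class InferenceEvent:
    # Illustrative fields only; the AI Act does not prescribe a schema.
    timestamp: float
    model_version: str
    input_sha256: str   # hash rather than raw data, to keep PHI out of logs
    output_label: str
    confidence: float
    operator_id: str    # supports human-oversight traceability (Article 14)

def log_event(path, model_version, input_bytes, output_label, confidence, operator_id):
    """Append one inference event as a JSON line to an append-only log file."""
    event = InferenceEvent(
        timestamp=time.time(),
        model_version=model_version,
        input_sha256=hashlib.sha256(input_bytes).hexdigest(),
        output_label=output_label,
        confidence=confidence,
        operator_id=operator_id,
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")
```

The design point worth carrying into a real system is the pairing of model version and input hash: it is what lets you reconstruct, months later, which model produced which output for which case.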
The technical file structure that supports both
The MDR technical documentation (Annexes II and III for devices, with similarly structured requirements under IVDR) has a defined structure. The AI Act's Annex IV technical documentation has its own. The two are not identical, but they overlap substantially and can be mapped onto each other section by section.
The structural pattern that works: extend the MDR technical file with explicit AI Act sections rather than maintain two parallel files. Sections that already address both regimes (clinical evaluation, risk management, design controls) get cross-referenced. Sections specific to one regime become explicit subsections.
For a typical medical SaMD with AI components, the file structure we recommend:
- Device description and intended use — single section, addresses both MDR and AI Act intended-purpose requirements.
- Risk management — ISO 14971 framework extended with AI Act Article 9 elements (bias, distribution shift, automation bias). Single risk file, AI-specific risks tagged.
- Design controls and software lifecycle — IEC 62304 lifecycle covers both regimes' development-process expectations.
- Data governance — new section addressing AI Act Article 10. Substantially additional work for most vendors.
- Performance evaluation — clinical evaluation per MDR plus AI Act Article 15 performance metrics (accuracy, robustness, cybersecurity).
- Post-market plan — PMS plan per MDR extended with AI Act Article 72 post-market monitoring. Single plan, AI-specific KPIs added.
- Cybersecurity — covered as in our 510(k) cybersecurity piece; EU regulators expect substantively similar evidence to the FDA's.
This structure produces a single technical file that can be assessed against both regimes by a notified body equipped to do so, which most major notified bodies will be by the time the AI Act high-risk obligations apply.
What vendors should be doing in 2026
For medical SaMD vendors who are not yet prepared, the practical sequence:
Now: identify whether your device is high-risk under the AI Act (almost certainly yes for any notified-body-assessed medical AI), and confirm with your notified body that it is preparing to assess both regimes.
Next 60 days: run a gap analysis between your existing MDR technical file and the AI Act Annex IV requirements. Most of our clients find 5–15 specific documentation gaps in this exercise.
Next 6 months: build out the data governance documentation per Article 10. This is the gap most vendors are not currently budgeting for, and it requires work that touches the engineering team rather than just the regulatory team.
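One concrete shape that Article 10 engineering work can take is an automated representativeness check over the training dataset, run and archived as part of the data-governance file. A minimal sketch, assuming subgroup labels are available for the dataset; the subgroup names, the tolerance, and the function itself are illustrative assumptions, not anything the Act specifies.

```python
def representativeness_gaps(dataset_counts, population_shares, tolerance=0.05):
    """Flag subgroups whose share of the training data deviates from the
    intended target population by more than `tolerance` (absolute share).

    dataset_counts:    {subgroup: number of training samples}
    population_shares: {subgroup: share in the intended-use population}
    The 5% default tolerance is an illustrative policy choice.
    """
    total = sum(dataset_counts.values())
    gaps = {}
    for group, target in population_shares.items():
        actual = dataset_counts.get(group, 0) / total
        if abs(actual - target) > tolerance:
            gaps[group] = {
                "dataset_share": round(actual, 3),
                "population_share": target,
            }
    return gaps
```

The output of a check like this is exactly the kind of documented "examination for biases" evidence that the Annex IV file needs to point at, with each flagged gap either remediated or justified.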
Next 12 months: extend post-market monitoring per Article 72. The first year of post-market data under the new framework will be examined more closely than later years; the monitoring infrastructure has to be operational from launch.
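For the drift dimension of that monitoring, one widely used technique (our illustrative choice; the Act does not mandate any particular statistic) is a population stability index comparing a production sample of an input feature or model score against the validation-set baseline. A minimal sketch:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample (e.g. the validation set) and a
    production sample of one scalar feature or model score.

    A common rule of thumb, which is an industry convention rather than
    an AI Act requirement: PSI > 0.2 suggests meaningful drift that
    warrants a documented investigation.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    # Open the outer edges so out-of-range production values are still counted.
    edges[0], edges[-1] = float("-inf"), float("inf")

    def bin_shares(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        # Floor empty bins to avoid log(0).
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = bin_shares(expected), bin_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Wiring a statistic like this into a scheduled job, with thresholds and response actions written into the post-market plan, is what "operational from launch" means in practice.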
The high-risk obligations for standalone Annex III systems apply from 2 August 2026; for devices that are high-risk via Article 6(1) and MDR/IVDR conformity assessment, from 2 August 2027. Devices already on the market before the relevant date benefit from transitional provisions, broadly triggered into full compliance only by significant design changes, but new placements after that date face full applicability immediately.
Where this connects to our practice
Pelican Tech's MedTech Regulatory practice builds combined MDR + AI Act compliance postures for medical SaMD, working with notified bodies that are equipped to assess both regimes in a single technical-file review. We work with our regulatory affairs team on the parallel FDA submissions where applicable, and with our AI Solutions practice when the underlying AI engineering work needs to support the data-governance and performance-evaluation requirements the Act introduces.
If you are within 9 months of an MDR submission for an AI-containing medical SaMD and the AI Act dimension is not yet incorporated into the technical file, that is the engagement to start with before the file goes to your notified body.