Impact of the AI Act on medical devices


The AI Act is the first-ever comprehensive legal framework addressing the risks of Artificial Intelligence (AI), positioning Europe to take a leading role in global AI regulation. It officially entered into force on 1 August 2024 and becomes applicable in stages, with specific timelines for different provisions:

  • Prohibitions take effect after 6 months.
  • Governance rules and obligations for general-purpose AI models apply after 12 months.
  • Rules for AI systems embedded into regulated products (e.g. medical devices) apply after 36 months.
Scope of the AI Act

The AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy. Such a system may adapt after deployment and infers from its inputs how to generate outputs (such as predictions, content, recommendations, or decisions) that can influence physical or virtual environments.

The Act has a broad scope, applying to all AI systems and operators in the supply chain, including:

  • Providers
  • Deployers: a term new to the medical device industry, referring to any entity using an AI system under its authority, except for personal, non-professional activities
  • Product manufacturers
  • Authorized representatives
  • Importers and distributors
Key provisions of the AI Act

The AI Act introduces:

  1. Harmonized rules for placing AI systems on the market and their use in the EU.
  2. Prohibitions on certain AI practices.
  3. Specific requirements and obligations for operators of high-risk AI systems.
  4. Transparency rules for certain AI systems.
  5. Rules for general-purpose AI models.
  6. Market monitoring, surveillance, and enforcement mechanisms.
  7. Measures to support innovation, especially for small and medium-sized enterprises (SMEs), including start-ups.
AI and medical devices

AI is increasingly used in various industries, including medical devices. The AI Act takes a risk-based approach to regulating AI, classifying systems into four risk levels (a simplified sketch of this tiering follows the list):

  • Unacceptable risk: prohibited from being placed on the market (Chapter II, e.g. certain remote biometric identification systems)
  • High risk: subject to strict regulatory controls (Chapter III, including AI-enabled medical devices and IVDs)
  • Limited risk: subject to transparency obligations
  • Minimal risk: free use
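As a rough, non-authoritative illustration of how this tiering drives obligations, the sketch below maps simplified system characteristics to a risk tier. The dictionary keys, rules, and values are assumptions made for this example only; they are not the Act's actual legal tests.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited from the EU market"
    HIGH = "strict controls: conformity assessment, QMS, technical documentation"
    LIMITED = "transparency obligations"
    MINIMAL = "free use"

def tier_for(system: dict) -> RiskTier:
    """Toy triage of an AI system into an AI Act risk tier.

    The keys and rules here are simplified stand-ins, not the Act's
    legal criteria (see Chapters II and III for those).
    """
    if system.get("prohibited_practice"):      # e.g. certain remote biometric ID
        return RiskTier.UNACCEPTABLE
    if system.get("safety_component") or system.get("notified_body_required"):
        return RiskTier.HIGH                   # captures most MDR/IVDR AI devices
    if system.get("interacts_with_humans"):    # e.g. chatbots, generated content
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: an AI feature acting as a safety component of a medical device
print(tier_for({"safety_component": True}).value)
```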

This blog focuses on the impact of the AI Act on medical devices and in vitro diagnostic medical devices (IVDs) from a medical device manufacturer’s perspective, particularly for devices classified as high-risk. It highlights the additional work required for compliance on top of existing quality management systems (QMS) and technical documentation.

High-risk AI systems in medical devices

As outlined in Article 6 of the AI Act, high-risk AI systems are those that serve as safety components of products, or stand-alone AI systems for which the conformity assessment procedure requires the involvement of a notified body.

In the context of medical devices, any non-class I device that either incorporates AI as a safety component or functions as an AI system itself is classified as high-risk. However, not all high-risk AI systems equate to high-risk medical devices: medium-risk devices under the EU-MDR or EU-IVDR may still require a notified body’s involvement in the conformity assessment of the AI component if it affects the safety of users or patients, as in robot-assisted surgery.

Key requirements for high-risk AI systems

To place high-risk AI systems on the market, providers or product manufacturers must comply with several mandatory requirements under the AI Act, in addition to those outlined in the EU-MDR or EU-IVDR. The AI Act offers flexibility here: providers and product manufacturers may streamline compliance by integrating AI-related requirements into existing technical documentation and QMS, avoiding redundant administrative work.

Some key requirements include:

  1. Risk assessment and mitigation (Article 9)
    The AI risk management process can be combined with existing medical device risk management, taking into account AI-specific risks, such as potential adverse impacts on vulnerable groups and persons under the age of 18.
  2. Data and data governance (Article 10)
    High-quality training, validation, and testing datasets, relevant and sufficiently representative, must be used to minimize risks and prevent discriminatory outcomes.
  3. Technical documentation (Article 11)
    Detailed documentation must include information on system capabilities, algorithms, data, training, testing, validation processes, and risk management. This can be integrated with the device’s existing technical documentation.
  4. Record keeping (Article 12)
    Automatic event logging ensures traceability throughout the AI system’s lifecycle (a minimal logging sketch follows this list).
  5. Transparency and information for deployers (Article 13)
    Clear and adequate instructions for use (IFU) must be provided to deployers, supporting informed use and decision-making.
  6. Human oversight (Article 14)
    Providers must implement appropriate human oversight mechanisms to minimize risks, ensuring that AI systems cannot override human control (see the oversight sketch after this list).
  7. Robustness, security, and accuracy (Article 15)
    High-risk AI systems must be robust, resilient, and secure. Providers must declare the system’s accuracy in the accompanying IFU and ensure protection against cybersecurity threats.
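To make the record-keeping requirement in item 4 concrete, here is a minimal sketch of automatic, structured event logging for an AI-enabled device function. The record schema and field names are assumptions for illustration; Article 12 requires logging appropriate to the system’s lifecycle but does not prescribe this format.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger; the logger name and record schema are assumptions.
logger = logging.getLogger("ai_device.audit")
logging.basicConfig(level=logging.INFO)

def log_inference_event(model_version: str, input_ref: str,
                        output: str, confidence: float) -> None:
    """Record one inference as a structured, timestamped audit event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the output to a released model
        "input_ref": input_ref,          # a reference, not raw patient data
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(event))

log_inference_event("1.4.2", "study-0042", "lesion detected", 0.93)
```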
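The human oversight requirement in item 6 is often realized as a human-in-the-loop gate: the AI proposes, and a qualified user confirms or overrides. The sketch below shows one such pattern; the class, threshold, and flow are assumptions, not the only design Article 14 permits.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """An AI-proposed finding awaiting clinician review (illustrative)."""
    finding: str
    confidence: float

def apply_with_oversight(suggestion: Suggestion, clinician_approves: bool) -> str:
    """Human-in-the-loop gate: the AI output never takes effect on its own."""
    if not clinician_approves:
        return "rejected by clinician; AI suggestion discarded"
    if suggestion.confidence < 0.80:  # assumed threshold triggering extra review
        return f"accepted with low-confidence flag: {suggestion.finding}"
    return f"accepted: {suggestion.finding}"

print(apply_with_oversight(Suggestion("lesion detected", 0.93), clinician_approves=True))
```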
Overlap with EU-MDR and EU-IVDR

There is significant overlap between the AI Act and the existing medical device regulations (EU-MDR and EU-IVDR). Given that the AI Act allows for a single conformity assessment, medical device manufacturers are encouraged to conduct a gap analysis between these regulations to ensure compliance without duplicating efforts.

Limited risk AI systems

AI systems classified as limited risk typically raise transparency concerns, such as ensuring humans are aware when they are interacting with AI. For example (a short disclosure sketch follows the list):

  • Chatbots must inform users that they are interacting with AI.
  • AI-generated content for public dissemination must be clearly labeled as artificially generated.
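Because these obligations are essentially disclosure steps, they translate naturally into code. The sketch below shows one way a chatbot front end and a content pipeline might satisfy them; the wording, function names, and the stubbed model call are all assumptions for illustration.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def generate_answer(message: str) -> str:
    # Stand-in for a real model call; assumed for this sketch.
    return f"(model answer to: {message})"

def chatbot_reply(user_message: str) -> str:
    """Prepend the AI disclosure so users know they are talking to AI."""
    return f"{AI_DISCLOSURE}\n{generate_answer(user_message)}"

def label_generated_content(text: str) -> str:
    """Mark content intended for publication as artificially generated."""
    return f"{text}\n\n[Generated by AI]"

print(chatbot_reply("What does my lab result mean?"))
print(label_generated_content("Draft summary of trial results..."))
```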
Conclusion

The AI Act introduces new regulatory obligations for medical device manufacturers, particularly those utilizing AI in high-risk applications. Manufacturers must integrate AI-specific requirements into their existing compliance frameworks to ensure smooth market access while maintaining robust safety and transparency standards.
