As artificial intelligence (AI) and machine learning (ML) technologies rapidly evolve, their integration into medical devices is transforming healthcare. From diagnostic tools to treatment algorithms, AI/ML-enabled devices offer significant benefits but also pose unique regulatory challenges.
Technology in medical device applications is shifting, and digital health is becoming part of everyday healthcare by improving the mode of care, augmenting treatments, and assisting in diagnoses, making this an exciting time in the advancement of personal care.
We’ve become familiar with, and almost desensitized to, the AI around us. It used to be said that computers are only as smart as the humans inputting information. Today, AI technology goes beyond that input, mimicking human analytical thought patterns by imitating intelligence and making predictions based on trends. Chatbot platforms that write college term papers, lane-assist features that adjust driving performance, customer service models replacing human interaction, remotely controlled surgical robots, and wearable devices providing medical information have become routine and somewhat indiscriminately trusted.
While AI/ML technology propels us forward, with complex algorithms that aid decision-making through the use of real-world data and identify unforeseen contraindications, there are ethical matters to consider. Cybersecurity and privacy are major concerns with any automation: machines are increasingly relied upon to govern personal data (e.g., banking and identity information), trust with life-affecting decisions in medicine is not easily earned, and societal predilections can influence the creation and training of these applications. To ensure the proper use and control of AI, there is an imperative need for rigorous testing and validation in any and all applications, along with a robust regulatory framework to ensure strict oversight and compliance with policies and guidelines.
The Importance of Regulation
Regulatory bodies are convening to both promote innovation and protect public health and safety. In October 2021, the FDA, Health Canada, and the UK Medicines and Healthcare products Regulatory Agency (MHRA) jointly published 10 guiding principles that serve as the basis for developing good machine learning practices (GMLPs). Working with stakeholders (users, manufacturers, healthcare providers) is paramount to improving the use of AI/ML-enabled medical devices and defining the boundaries of their control.
Related Reading: Guiding Principles on Transparency for Machine Learning Medical Devices – Alvamed
Key Regulatory Frameworks
1. United States
The Food and Drug Administration (FDA) has taken a proactive approach to regulating AI/ML-enabled medical devices. The FDA’s Digital Health Innovation Action Plan emphasizes a framework for the oversight of software as a medical device (SaMD).
- Pre-certification Program: This initiative allows developers to demonstrate their software’s reliability and quality through a streamlined review process, with the goal of promoting innovation while ensuring safety. During the initial review, manufacturers may submit a predetermined change control plan (PCCP) to the FDA that includes SaMD pre-specifications (SPS) describing the intended modifications to the AI/ML-based SaMD, along with an algorithm change protocol (ACP) describing the methodology used to control patient risk and to ensure the safety and effectiveness of the SaMD after the proposed modifications.
- Real-World Evidence (RWE): The FDA recognizes the importance of real-world data in post-market surveillance, enabling continuous monitoring and improvements of AI algorithms after their initial approval.
- In a joint effort, the FDA, Health Canada, and the MHRA released guidance on the predetermined change control plan (PCCP), noting that a PCCP should be focused, bounded, risk-based, evidence-based, transparent, and created using a total product lifecycle (TPLC) approach.
Read more about the PCCP Draft Guidance Released by FDA
2. European Union
The European Union’s (EU) Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) provide comprehensive guidelines for AI/ML-enabled devices.
- Classification and Conformity Assessment: Devices are classified based on risk, with higher-risk devices requiring more rigorous assessment. The emphasis is on transparency and clinical evaluation to ensure safety and efficacy.
- EU AI Act: The European Council formally approved the EU Artificial Intelligence Act (EU AI Act), which became legally binding in August 2024. This legislation specifically targets AI technologies, classifying AI systems based on risk level. It establishes requirements for high-risk applications, including transparency, accountability, and data governance, applicable to any device employing AI technology that is placed on the EU market.
Read more about The EU AI ACT: Implications for the Medical Device Industry – Alvamed
3. Canada
- Health Canada’s Digital Health Division: A new group within the government, created specifically to manage the growth and development of AI/ML-enabled devices. Health Canada partnered with the US FDA and the UK MHRA to publish guidance documents on GMLP and PCCPs, and in 2023 published a draft premarket guidance on machine learning-enabled medical devices containing information pertinent to applications for medical device licences for machine learning medical devices (MLMDs).
- Artificial Intelligence and Data Act (AIDA): Today, there is no legislation governing the regulation of AI-enabled medical devices in Canada. The Canadian government has therefore proposed AIDA for the development and use of AI systems, with the primary aim of safeguarding against harm that could result from the use of AI in high-impact systems, placing the ultimate responsibility on the organization deploying such systems.
The Artificial Intelligence and Data Act (AIDA) – Companion document (canada.ca)
4. United Kingdom
The United Kingdom’s (UK) regulations for medical devices are defined in the Medical Devices Regulations 2002 (UK MDR 2002), and AI as a medical device (AIaMD) must also conform to the UK MDR 2002. The MHRA continues to reform its regulatory platform, taking a risk-based approach to incorporating AI/ML-enabled devices. Within the reform, the UK has adopted guiding principles to form the basis of these changes, including cybersecurity, transparency, explainability, fairness, and accountability and governance.
- AI Airlock: Launched by the MHRA in May 2024, the program was piloted as a ‘regulatory sandbox’ to foster collaboration between stakeholders (manufacturers, regulators, etc.) with regard to the challenges posed by AIaMDs.
Read our article MHRA Launches AI Airlock: Pioneering Innovation in Regulatory Oversight – Alvamed
5. Singapore
The Health Sciences Authority (HSA), using a lifecycle approach, is developing its regulatory guidelines for SaMD and AI-enabled medical devices and published a document containing these principles in March 2024. The document organizes premarket registration requirements into distinct categories for submission and provides information pertaining to training, human intervention, and continuous learning. It also requires a robust post-market surveillance and evaluation system that employs real-world performance data and a change notification process to ensure proper regulatory oversight.
6. The International Medical Device Regulators Forum (IMDRF)
- The Artificial Intelligence Medical Devices Working Group: The group has published a technical document aimed at promoting public safety and standardizing regulations pertaining to MLMDs, noting that such devices have the ability to change their performance over time based on available data.
- The Artificial Intelligence/Machine Learning-Enabled Working Group (AI/ML-WG): In alignment with the work done by the FDA, Health Canada, and the MHRA, the group has published a new guidance document on GMLP and is working with global regulators to harmonize guidance on AI/ML-enabled SaMD.
Read our recap of the International Medical Device Regulators Forum session, which took place on March 12, 2024: IMDRF: Updates on Current Working Groups – Alvamed
7. Global Approaches
Countries such as Australia, Korea, Brazil, China, Hong Kong, and Japan are also developing regulatory frameworks for AI/ML-enabled devices. The Australian Therapeutic Goods Administration (TGA) has guidelines similar to the FDA’s, focusing on risk-based classification and continuous monitoring, while the Japanese Pharmaceuticals and Medical Devices Agency (PMDA) is also adapting its regulatory approach to incorporate AI technologies.
Challenges in Regulation
1. Rapid Technological Advancements
AI and ML technologies evolve quickly, often outpacing existing regulatory frameworks. Regulators must find a balance between fostering innovation and ensuring patient safety.
2. Algorithmic Bias and Transparency
Bias in training data can lead to unfair outcomes in AI/ML applications. Regulators are tasked with ensuring that developers provide transparency regarding the data used to train algorithms and the measures taken to mitigate bias.
3. Data Privacy and Security
As AI/ML systems rely on vast amounts of data, ensuring the privacy and security of patient information is paramount. Regulatory frameworks must address these concerns without stifling innovation.
The Path Forward
1. Collaborative Approaches
Regulators, industry stakeholders, and academic researchers must collaborate to create adaptive regulatory frameworks that can keep pace with technological advancements. Public-private partnerships can facilitate knowledge sharing and best practices.
2. International Harmonization
Efforts to harmonize regulations across jurisdictions can simplify the approval process for AI/ML-enabled medical devices. Organizations such as the International Medical Device Regulators Forum (IMDRF) are working towards this goal, aiming for consistency in regulatory requirements.
3. Emphasizing Post-Market Surveillance
Continuous monitoring of AI/ML devices post-approval is critical to ensure long-term safety and efficacy. Establishing robust post-market surveillance systems can help identify issues early and facilitate timely interventions.
Conclusion
The global regulatory landscape for AI/ML-enabled medical devices is evolving rapidly to meet the challenges and opportunities presented by these technologies. By fostering collaboration, pursuing harmonization, and emphasizing ongoing surveillance, regulators can ensure that innovation thrives while maintaining patient safety. As AI continues to shape the future of healthcare, a well-defined regulatory framework will be essential in guiding its responsible and effective integration into medical practice.