
MHRA Issues Transparency Guidance for Machine Learning Medical Devices

The Medicines and Healthcare products Regulatory Agency (MHRA) has published a set of guiding principles for machine learning‑enabled medical devices, with a focus on transparency. These principles highlight key considerations for fostering safe, effective, and high‑quality medical devices that incorporate artificial intelligence and machine learning (AI/ML).

The guidance emphasizes the importance of understanding who uses the device, who receives care facilitated by the device, and all other stakeholders involved in decision‑making. It also outlines expectations for the type, placement, and timing of information needed to ensure transparency. This may include details about a device’s purpose, capabilities, clinical evidence, and performance.

Attention is also given to enhancing transparency within the software interface itself, promoting personalized, adaptive, and reciprocal information delivery. The MHRA stresses that information needs should be evaluated throughout the entire product lifecycle, with timely updates provided as new insights emerge—such as on‑screen alerts, warnings, or instructions.

The 10 guiding principles for good machine learning practice (GMLP) include:

1. Multi‑Disciplinary Expertise Across the Product Lifecycle:

A strong understanding of how a model fits into the clinical workflow, including expected benefits and potential patient risks, is essential to developing safe, effective ML‑enabled medical devices.

2. Adoption of Robust Software Engineering and Security Practices:

Sound engineering methods, high‑quality data management, and reliable cybersecurity measures are needed to support model development. Thorough risk management and documentation help maintain data authenticity and device integrity.

3. Representative Clinical Studies and Datasets:

Clinical studies and datasets should reflect the characteristics of the intended patient population—including factors such as age, sex, race, and ethnicity—to minimize bias and ensure generalizable performance.
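To make this representativeness check concrete, a development team might compare subgroup proportions in its dataset against the intended patient population and flag any subgroup that is markedly under‑represented. The sketch below is illustrative only; the subgroup names, proportions, and the 50% threshold are hypothetical, not part of the MHRA guidance.

```python
# Sketch: flag subgroups that are under-represented in a study dataset
# relative to the intended patient population (all figures hypothetical).
population = {"under_40": 0.30, "40_to_65": 0.45, "over_65": 0.25}
dataset    = {"under_40": 0.50, "40_to_65": 0.40, "over_65": 0.10}

def under_represented(dataset, population, ratio=0.5):
    """Return subgroups whose share of the dataset is below `ratio`
    times their share of the intended population."""
    return [g for g, p in population.items() if dataset.get(g, 0.0) < ratio * p]

print(under_represented(dataset, population))  # -> ['over_65']
```

In practice such checks would span many more characteristics (sex, race, ethnicity, comorbidities, acquisition sites) and feed into the bias assessment documented for the device.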

4. Separation of Training and Test Data:

Training and test datasets must remain independent. Potential dependencies—such as patient overlap or shared acquisition methods—should be identified and addressed to maintain dataset integrity.
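One common way to enforce this independence is a group‑aware split, in which all records from a given patient land entirely on one side of the train/test boundary so patient overlap cannot leak information. The following is a minimal stdlib Python sketch with hypothetical patient IDs; in practice, teams often rely on library utilities such as scikit‑learn's GroupShuffleSplit.

```python
import random

def group_split(patient_ids, test_frac=0.4, seed=0):
    """Split record indices so that every patient's records fall wholly
    into either the training set or the test set, never both."""
    patients = sorted(set(patient_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    train = [i for i, p in enumerate(patient_ids) if p not in test_patients]
    test = [i for i, p in enumerate(patient_ids) if p in test_patients]
    return train, test

# Ten hypothetical records, two per patient.
patient_ids = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
train_idx, test_idx = group_split(patient_ids)
assert {patient_ids[i] for i in train_idx}.isdisjoint(
    {patient_ids[i] for i in test_idx}
)  # no patient appears in both sets
```

Similar grouping logic applies to other dependencies the guidance mentions, such as records sharing an acquisition site or device.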

5. Use of High‑Quality Reference Datasets:

Reference datasets should be developed using accepted, well‑characterized methods. When available, established reference standards should be used to support device robustness and generalizability.

6. Model Design Aligned with Data and Intended Use:

Model architecture must be suited to the available data and designed to mitigate risks like overfitting or performance degradation. Clear performance goals should be set to confirm the model can safely achieve its intended clinical purpose.

7. Emphasis on Human‑AI Team Performance:

For systems that involve human oversight, both human factors and interpretability must be considered. The guidance encourages evaluating the combined performance of the Human‑AI team rather than the model alone.

8. Testing Under Clinically Relevant Conditions:

Statistically sound testing must demonstrate real‑world performance. This includes evaluation across relevant patient subgroups, clinical environments, measurement inputs, and possible confounding variables.

9. Delivery of Clear, Essential Information to Users:

Healthcare professionals and patients must have access to clear, contextual information regarding intended use, performance characteristics, limitations, acceptable inputs, and workflow integration. Updates based on real‑world monitoring should also be communicated.

10. Monitoring Deployed Models and Managing Re‑Training Risks:

ML models should be monitored continuously in real‑world use. When models are updated or retrained, safeguards must exist to prevent issues like bias, overfitting, or performance decline caused by changing datasets.
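As a minimal illustration of post‑deployment monitoring, the sketch below tracks a rolling window of per‑case outcomes and raises a flag when observed accuracy falls more than a tolerance below the validated baseline. The class name, window size, and thresholds are hypothetical; real‑world monitoring would cover many more metrics, patient subgroups, and input‑distribution checks.

```python
from collections import deque

class PerformanceMonitor:
    """Minimal sketch of post-deployment monitoring: keep a rolling
    window of per-case outcomes (1 = correct, 0 = incorrect) and flag
    when accuracy drops more than `tolerance` below the baseline."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def degraded(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough real-world data yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.90, window=10)
for outcome in [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]:  # hypothetical case stream
    monitor.record(outcome)
print(monitor.degraded())  # -> True (observed 0.50 < 0.90 - 0.05)
```

A degradation flag like this would typically trigger investigation and, where retraining follows, the same bias and overfitting safeguards applied during initial development.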

AI/ML technologies provide significant opportunities to advance healthcare by extracting meaningful insights from the vast amount of data generated daily. However, due to their complexity and iterative nature, they require rigorous oversight and thoughtful governance.

The MHRA’s guiding principles highlight areas where international organizations—such as the International Medical Device Regulators Forum (IMDRF) and standards bodies—can collaborate to further develop best practices in GMLP. Collaborative efforts may include creating educational resources, supporting research, and developing harmonized standards to inform future regulatory approaches.

The guiding principles can be used to:

  • Adopt proven practices from other industries 
  • Adapt external best practices for the healthcare sector 
  • Develop new, healthcare‑specific practices 

As AI/ML medical technologies evolve, so must global best practices and regulatory expectations. Continued collaboration with international partners will be key in supporting responsible innovation within the medical device ecosystem.
