Keeping Up with the Machine: Predetermined Change Control Plans for Medical Devices

Machine learning (ML) is revolutionizing the medical field by enabling faster disease detection, more accurate diagnoses, and personalized treatments. These advances rely heavily on the vast amounts of medical data generated daily. With the growth of digital care, healthcare analytics, and imaging technology, ML-enabled medical devices (MLMDs) can interpret large volumes of data. However, ensuring these devices remain safe and effective as they continuously learn and adapt poses a unique challenge.

Enter Predetermined Change Control Plans (PCCPs). Developed by the US Food and Drug Administration (FDA) in collaboration with international regulatory bodies, PCCPs offer a framework for managing changes to MLMDs.

What are Predetermined Change Control Plans (PCCPs)?

Think of a PCCP as a roadmap for anticipated modifications to an MLMD. It outlines the changes a manufacturer expects to make, the rationale for each, and the process for ensuring the device remains safe and effective once they are implemented.

In other words, a PCCP documents in advance both the planned modifications and the methods that will be used to verify that those changes do not compromise device safety or effectiveness.

Importance of PCCPs in ML-Enabled Medical Devices

  • Rapid Updates: Manage ongoing improvements in MLMDs efficiently.
  • Risk Management: Conduct proactive risk assessments to ensure patient safety.
  • Regulatory Alignment: Streamline regulatory processes to promote innovation and maintain high standards.

Benefits of developing a PCCP include more efficient management of changes to Software as a Medical Device (SaMD), improved software quality, and reduced risk of regulatory non-compliance.

The 5 Guiding Principles of PCCPs

The FDA has outlined five key principles for developing robust PCCPs:

Focused and Bounded

PCCPs should target specific, predetermined changes that stay within the original intended use of the MLMD. A direct way to demonstrate that the intended use is unchanged is to show "substantial equivalence" for each automatically implemented modification. When presenting the PCCP to the FDA, the manufacturer should clearly describe each modification and how it will be implemented, documented, and validated against predetermined acceptance criteria, including post-market surveillance activities and usability studies. If the device's intended use changes, the manufacturer must proceed with a separate regulatory submission.
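To make this concrete, the sketch below shows one way a manufacturer might capture a single planned modification and its predetermined acceptance criteria as structured data. The field names and thresholds are illustrative assumptions, not terms defined in FDA guidance; a real modification protocol would follow the manufacturer's own quality-system templates.

```python
from dataclasses import dataclass, field

@dataclass
class PlannedModification:
    """Illustrative record of one pre-specified change in a PCCP.

    Field names are hypothetical, not taken from FDA guidance.
    """
    description: str               # what will change (e.g., retraining cadence)
    rationale: str                 # why the change is needed
    within_intended_use: bool      # change must stay inside the cleared intended use
    validation_method: str         # how the change will be verified
    acceptance_criteria: dict = field(default_factory=dict)  # predetermined pass/fail thresholds

# Example: a retraining update that must not degrade key performance metrics.
mod = PlannedModification(
    description="Quarterly retraining on newly collected, labeled images",
    rationale="Maintain performance as imaging hardware and patient mix evolve",
    within_intended_use=True,
    validation_method="Evaluation on a held-out test set locked before retraining",
    acceptance_criteria={"sensitivity": 0.92, "specificity": 0.90},
)

print(mod.acceptance_criteria)
```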

Risk-Based

The design and implementation of a PCCP should be driven by a thorough risk assessment of the planned modifications. The risks associated with AI-based medical devices differ from those of traditional devices. For instance, AI devices can learn continuously, which helps optimize performance, but this adaptability challenges regulatory requirements such as management processes, risk and quality management, clinical evaluation, design controls, and post-market surveillance. These controls exist to ensure that real-world data does not cause the AI to behave in unintended ways that could harm or endanger patients.

Evidence-Based

Data and evidence gathered throughout the device's lifecycle, from development to post-market use, should inform PCCP decisions. From the start of the premarket submission, the manufacturer must conduct a risk-benefit analysis comparing the modified and unmodified versions of the device and anticipate the cumulative impact of the planned modifications over time. They also need to consider whether the activities outlined in the Modification Protocol would raise any safety or effectiveness concerns. Even after the device reaches the market, manufacturers must conduct post-market surveillance to assess its performance, intended use, and user complaints.
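One way to operationalize such a comparison is to pre-specify how the modified version's metrics will be checked against both the unmodified baseline and the acceptance thresholds. The sketch below is a minimal illustration of that idea; the metric names, the regression tolerance, and the pass/fail logic are assumptions for demonstration, not prescribed by the guidance.

```python
def meets_acceptance_criteria(baseline: dict, modified: dict, criteria: dict,
                              max_drop: float = 0.01) -> bool:
    """Sketch of a pre-specified comparison between device versions.

    `baseline` and `modified` map metric names to values measured on the
    same held-out data; `criteria` holds predetermined minimum thresholds.
    The metric names and `max_drop` tolerance are illustrative assumptions.
    """
    for metric, threshold in criteria.items():
        new, old = modified[metric], baseline[metric]
        # The modified version must meet the absolute threshold and must not
        # regress against the unmodified version by more than the tolerance.
        if new < threshold or (old - new) > max_drop:
            return False
    return True

baseline = {"sensitivity": 0.94, "specificity": 0.91}
modified = {"sensitivity": 0.95, "specificity": 0.92}
criteria = {"sensitivity": 0.92, "specificity": 0.90}
print(meets_acceptance_criteria(baseline, modified, criteria))  # True in this toy example
```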

Transparency

Clear communication with regulatory bodies and target users about the PCCP and its implementation is crucial. The integration of AI into healthcare has amplified existing challenges and sparked new debates, particularly around trust. Policymakers recognize transparency as a critical element of trustworthy AI applications. Transparency encompasses efforts to make AI's inner workings more accessible, and these efforts often focus on explainability and traceability.

Typically, AI models process inputs like patient data or medical images to generate predictions or classifications. However, the internal decision-making processes often remain opaque to physicians, hindering trust in the AI’s output. This lack of understanding creates a “black box” problem. There’s an acknowledged trade-off: high-performing models are often less explainable, while easier-to-understand models might perform poorly.

Explainability and traceability in AI-powered medical devices (AI-MDs) are crucial not only for fostering trust among doctors and patients but also for troubleshooting (identifying and tracing errors) and assigning liability (determining who is responsible for mistakes), both of which help minimize risk and promote AI adoption. Transparency is also vital for clarifying the AI's functionality, its learning method (batch or continuous), and how it evolves over time.
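Traceability, in particular, often comes down to keeping an auditable link between each prediction, the model version that produced it, and the input it saw. The sketch below shows one minimal way to record that link; the record fields, the JSONL file format, and the function name are hypothetical, not part of any regulatory specification.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_prediction(model_version: str, patient_input: dict, output: dict,
                   audit_path: str = "audit_log.jsonl") -> None:
    """Append one traceability record per inference (illustrative only).

    The record ties a prediction to the exact model version and a hash of
    the input, so errors can later be traced back to the data and model
    that produced them.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(patient_input, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(audit_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_prediction("v1.3.0", {"age": 61, "lesion_mm": 7.2}, {"malignant_prob": 0.18})
```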

Total Product Lifecycle Management

The PCCP should consider how changes will affect the device throughout its entire lifespan. Existing SaMD-focused regulatory frameworks should be leveraged and adapted to regulate AI-based digital health products. For instance, during validation, a widely accepted method for validating the ML component is to expose it to a set-aside validation data set containing data that has not been used to train the model. The model's performance is then evaluated with sensitivity, specificity, and accuracy metrics. From the ML perspective, the goal of this validation is to select the most appropriate model and hyperparameters; from the software validation perspective, the goal is to confirm that the software fulfills its intended purpose.
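As a minimal sketch of that held-out evaluation, the snippet below computes sensitivity, specificity, and accuracy from a binary confusion matrix using scikit-learn. It assumes a binary task with labels {0, 1} and toy data; in practice the split, labels, and reporting format come from the manufacturer's own validation protocol.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def holdout_performance(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute sensitivity, specificity, and accuracy on a set-aside split."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate
        "specificity": tn / (tn + fp),            # true negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Toy example: labels and predictions on data never used for training.
y_val = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_hat = np.array([1, 0, 1, 0, 0, 0, 1, 1])
print(holdout_performance(y_val, y_hat))
```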

The Future of Machine Learning in Medicine

PCCPs are a significant step forward in regulating MLMDs. By establishing a framework for safe and controlled modifications, they pave the way for the continued development and integration of these powerful tools into the medical field. As ML technology evolves, PCCPs will likely adapt and improve, ensuring patients benefit from the latest advancements while safeguarding their well-being.

In summary, PCCPs are crucial for the safe and effective operation of ML-enabled medical devices. By following FDA guidelines and partnering with experts like RQS, manufacturers can navigate the complexities of PCCP development and implementation.

How RQS Can Assist in Developing PCCPs

Developing and implementing a robust PCCP can be a complex undertaking. This is where RQS can be a valuable partner, assisting with:

  • PCCP Development: Help draft and refine your PCCP, ensuring it aligns with FDA guidelines and effectively addresses the specific risks associated with your MLMD.
  • Risk Management: Guide you in identifying potential risks associated with ML algorithms and in developing strategies to mitigate them.
  • Regulatory Expertise: Help navigate the ever-evolving regulatory landscape for MLMDs and ensure your PCCP meets all FDA requirements.
  • Documentation and Training: Help create clear and comprehensive documentation of your PCCP and train your team on its implementation.

By following the FDA’s guiding principles and partnering with RQS, you can ensure the safe and effective use of your MLMD, paving the way for a future of improved healthcare powered by cutting-edge technology.