Over the past decade, interest in innovating and commercializing medical devices has significantly increased. In the current MedTech landscape, Software as a Medical Device (SaMD) is one of the hottest fields, driven by the rise of Machine Learning (ML), Cloud Computing, and the Internet of Things (IoT). SaMD needs to be verified and validated (V&V) rigorously to comply with regulatory requirements. As a software engineer at Rook Quality Systems, I will describe the key challenges below, based on my prior software validation experience.
What to Verify and Validate?
Many companies are unsure about the scope of testing and hope to keep it to a minimum. Most are still in the early stages of development, which means they often lack the resources to complete the necessary verification and validation work. I am often asked questions like “Do we need to validate the firmware of our hardware device?”, “Do we need to create a comprehensive suite of unit test cases for the AWS S3 bucket?”, and “Do we need to validate the Trello tool?” In the SaMD industry, it is important to understand and account for Off-The-Shelf (OTS) software and Software of Unknown Provenance (SOUP). Without relevant software validation experience and compliance knowledge, it is hard to define the minimum scope that keeps the business moving quickly.
How to Verify and Validate?
In its final guidance, General Principles of Software Validation, the FDA introduced the concepts of software verification, software validation, and IQ/OQ/PQ (Installation, Operational, and Performance Qualification).
Software Verification follows a three-level approach – Unit Testing, Integration Testing, and System Testing. It is the comparatively easy piece, since it follows general industry practice. When it comes to software validation, however, the complexity of defining the proper scope increases immensely. According to the guidance, software validation is confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.
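As a minimal illustration of the unit-testing level, a verification test case can be tied back to a software requirement so the test evidence is traceable. Everything in this sketch – the function, the threshold, and the requirement ID `SRS-042` – is hypothetical:

```python
# Hedged sketch of unit-level software verification: one test case that
# exercises a single function and documents which requirement it covers.
# The function, thresholds, and requirement ID are all hypothetical.

def classify_risk(score: float) -> str:
    """Map a hypothetical algorithm output score in [0, 1] to a risk label."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    return "high" if score >= 0.7 else "low"

def test_classify_risk_boundaries():
    """Covers hypothetical requirement SRS-042: threshold boundary behaviour."""
    assert classify_risk(0.7) == "high"   # at the threshold
    assert classify_risk(0.69) == "low"   # just below the threshold
    assert classify_risk(0.0) == "low"    # lower bound of the valid range
```

Tests like this are typically collected by a runner such as pytest; the point is that each case maps to a requirement, which is what makes unit testing usable as verification evidence.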
In the current trend, software products usually contain AI-driven automation, components built on ML models, and Deep Learning algorithms. Performance validation is what reviewers care about most. It is by far the hardest part, since evaluating an algorithm's performance requires the relevant domain experts, a properly chosen set of evaluation metrics, and the resources to execute the evaluation.
For example, I once helped a client perform a non-clinical performance validation of their 3D vascular segmentation algorithm. The process included:
- Defining the image formats and evaluation metrics to use (the Dice coefficient and the Hausdorff distance).
- Engaging a third-party 3D scanning company to scan the vascular models and provide reference 3D images.
- Designing and implementing a custom Python script to register the two sets of 3D images and calculate the Dice coefficient and Hausdorff distance.
- Verifying the script with test cases, and drafting a validation package for the script, including its specification, validation plan, and validation report.
- Running the validated script to calculate the Dice coefficients and Hausdorff distances between the 3D images generated by the algorithm and the reference 3D images.
- Drafting the validation report to present objective evidence of the algorithm's performance.
That workload would be unimaginable to any software quality assurance engineer who doesn't work in the medical device industry. In addition, no Python library was available on the market for calculating the percentile Hausdorff distance in 3D space, so a custom function had to be implemented and verified. To validate algorithms that are heuristic or ML-based, there are many questions to answer, and the validation work differs from case to case. These difficult problems become even harder when there is no relevant prior validation work to learn from.
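The two metrics mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not the client's actual validated script: the function names are mine, the 95th-percentile default is an assumed choice, and it presumes the two segmentations have already been registered into the same coordinate space:

```python
# Hedged sketch: Dice coefficient for two registered binary 3D masks,
# and a percentile Hausdorff distance between two 3D point clouds
# (e.g. voxel surface coordinates). Function names and the q=95 default
# are illustrative assumptions, not the client's actual script.
import numpy as np
from scipy.spatial import cKDTree

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two boolean 3D volumes of the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

def percentile_hausdorff(points_a: np.ndarray, points_b: np.ndarray,
                         q: float = 95.0) -> float:
    """Symmetric q-th percentile Hausdorff distance between two (N, 3)
    point clouds, using nearest-neighbour queries in each direction."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # each A point -> nearest B
    d_ba, _ = cKDTree(points_a).query(points_b)  # each B point -> nearest A
    return max(np.percentile(d_ab, q), np.percentile(d_ba, q))
```

The percentile variant is only a small extension of the directed nearest-neighbour distances, which is why implementing and then formally verifying a custom function was a practical path.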
When and Who to Verify and Validate?
Software validation activities usually come with the Quality Management System and the regulatory submission requirements. The timing depends on the nature, scale, and functionality of the software. For large-scale projects, or projects with ML-based software algorithms, it is advisable to plan the activities as early as possible and receive the relevant training before development kicks off, since both the software validation process and the review of the submission may take much longer.
It also depends on the resources and size of the company. For small development teams, there might be no dedicated QA team or testers to perform the V&V activities. It is important to know that the basic quality principle – independence of review – applies to software validation. Some firms may contract out for third-party independent verification and validation; however, this might not be feasible for early-stage companies. In such a case, the development team needs to plan the validation activities carefully to maintain that principle internally. Companies with a dedicated DevOps team may also need to consider how to integrate software validation activities into their existing CI/CD (Continuous Integration/Continuous Deployment) pipeline.
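One way such integration can look is a CI step that runs the verification suite and archives a timestamped evidence record alongside the build. The sketch below is an illustrative assumption – the file name, record fields, and function name are mine, and this is not a regulatory requirement:

```python
# Hedged sketch: running a verification test suite inside a CI step and
# archiving a minimal, timestamped evidence record. The record fields
# and file name are illustrative assumptions, not a mandated format.
import json
import subprocess
import sys
from datetime import datetime, timezone

def run_verification(command: list,
                     report_path: str = "verification_evidence.json") -> int:
    """Run a test command (e.g. the pytest suite) and write an evidence
    record suitable for attaching to the validation file."""
    result = subprocess.run(command, capture_output=True, text=True)
    evidence = {
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "command": " ".join(command),
        "exit_code": result.returncode,
        "stdout_tail": result.stdout[-2000:],  # keep the summary lines
    }
    with open(report_path, "w") as f:
        json.dump(evidence, f, indent=2)
    return result.returncode
```

In a CI pipeline this might be invoked as `run_verification([sys.executable, "-m", "pytest", "-q"])`, with the JSON file uploaded as a build artifact so each run leaves traceable objective evidence.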
V&V with Rook Quality Systems
Rook has experts in SaMD software verification and validation ready to help with projects at any stage. We have all the necessary qualifications to serve as a third-party V&V consultant for your software device. We have prior experience verifying and validating many popular SaMD software components, such as web applications, cloud backends, Docker-based solutions, medical imaging algorithms, and machine learning algorithms. We can help you plan, execute, and work with your development team to GET IT DONE.