Clinical evaluation of software | RAPS

Software has become an increasingly critical area of healthcare product development. Rapid technological advancement has resulted in substantial changes to software function and acceptance, leading to a growing number of novel medical devices capable of informing, driving, or replacing clinical decisions, or directly providing therapy. Regulatory authorities expect clinical evidence for software as a medical device (SaMD) to be generated with a level of scientific rigor commensurate with the product's risk and impact, and to provide assurance of safety, effectiveness, and performance. As the range and scale of potential uses of artificial intelligence (AI) and machine learning (ML) in healthcare grow, new issues related to premarket clinical evaluation become apparent. Regulators across the globe are focusing their efforts on developing robust regulatory frameworks and unified industry standards for risk classification and evaluation of medical AI that safeguard against potential risks while balancing innovation.
Keywords – AI/ML-based SaMD, clinical evidence, SaMD, software evaluation


Advances in medical software applications align with a persistent trend toward automating tasks and minimizing human interventions to increase efficiency, reduce costs, and prevent errors.1 However, the growing role of software in medical decision-making also warrants caution about potential risks arising from usability issues, automation bias, lack of operational transparency, or simply erroneous output, which can lead to catastrophic consequences.2,3 For example, when a skin- and mole-examining application (app) incorrectly tells its user that a detected melanoma is harmless, it will likely cause a delay in treatment. Therefore, whether developing new medical software for more precise diagnosis and targeted treatment or a healthcare app for individuals concerned about their health, manufacturers are tasked with convincingly demonstrating to authorities the safety, quality, and effectiveness of their products. The evaluation of clinical safety and performance, as well as the overall benefit‑risk profile of the product, through a critical assessment of clinical data generated from its use, is one of the key requirements for manufacturers of software as a medical device (SaMD) and, consequently, a focus area for regulators.

General principles of clinical evaluation

Many misunderstandings arise from differing interpretations, by different stakeholders, of the term and the legal requirements for clinical evaluation. Software developers often refer to it in terms of validation of the product's intended use in clinical settings. While demonstration of the intended performance prior to commercial release lies at the center of clinical evaluation, in the regulatory context the notion refers to a much broader concept: a systematic and ongoing process conducted throughout the entire lifecycle of a medical device. Clinical evaluation requirements have been embedded in various national and regional medical device laws. A voluntary collaboration between national regulatory authorities within the International Medical Device Regulators Forum (IMDRF) resulted in the release of guidance on clinical evaluation4 (and closely related documents addressing clinical investigations5 and clinical evidence6). The IMDRF is careful to avoid binding its recommendations to any national regulation and offers guidance to all those involved in the generation, compilation, and review of clinical evidence sufficient to support the marketing of medical devices.
Clinical evaluation is first performed during the development of a device to identify data that need to be generated for regulatory purposes and will determine if one or more clinical investigation(s) are necessary, with their progress captured in a clinical development plan (CDP). Many crucial parameters, such as the study design, the objective(s), and the endpoint(s), will be based on the evidence gaps identified during evaluation of nonclinical data and accompanying state of the art literature review. Clinical evaluation is repeated periodically as new safety, clinical performance, or effectiveness information about the medical device is obtained during its use. Clinical data is often generated through clinical investigations of the concerned device. Data also can be retrieved from the scientific literature or postmarket surveillance and, in particular, from the postmarket clinical follow-up (PMCF). This process is performed in discrete stages (Figure 1), and results are documented in a clinical evaluation plan (CEP) and a clinical evaluation report (CER). The output is fed into the ongoing risk management process and may result in changes to the manufacturer’s risk assessment, clinical investigation documents, Instructions for Use, and postmarket activities.7

All medical devices require clinical evaluation regardless of their risk classification. However, risks associated with the software application, in addition to other factors, should be considered when determining the type and amount of data needed to sufficiently support the intended medical purpose and (individual) clinical claims. Manufacturers must justify the level of clinical evidence provided and demonstrate that the device has been tested appropriately in the intended clinical setting, that any claims are backed up by evidence, and that the benefit-risk ratio that emerges is clear and acceptable in the light of the state of the art. The latter is defined during a comprehensive and objective literature review, an integral part of the clinical evaluation process.
Specific guidelines for SaMD clinical evaluation
The clinical evaluation of SaMD is based on the same legal requirements as the clinical evaluation of more traditional medical devices. However, the majority of SaMD products are vastly different—they can be used across a broad range of technology platforms, do not come in direct contact with the patient, and frequently fulfill their role through the information they provide rather than any direct action on the patient. Given the unique features of SaMD that extend beyond a traditional medical device or hardware, regulators recognized the need to converge on a common approach for clinical evaluation of SaMD to promote safe innovation and protect patient safety. Another dedicated IMDRF guidance considers these distinguishing differences and closely ties clinical evaluation to a broader software lifecycle management effort.8 Recognized as the state-of-the-art approach for clinical evaluation of SaMD, this guidance has been adopted by the US Food and Drug Administration (FDA)9 and, with minor modifications reflecting the specific expectations of European regulators, within the European Union (EU).10
How extensively a manufacturer or developer should conduct a clinical evaluation depends on the following factors:

  • The SaMD’s underlying algorithm
  • The algorithm’s degree of transparency
  • Characteristics of the SaMD’s intended use
  • Platform it is deployed on
  • Target population
  • Intended users

Before a clinical evaluation is undertaken, the manufacturer should define its scope, based on the relevant safety and performance requirements that need to be addressed from a clinical perspective, along with the medical intended purpose and intended clinical benefits. This is referred to as the scoping or planning stage of the clinical evaluation. Its outcome will determine the type and amount of data supporting the evidence goals, defined in the IMDRF guidance, such as demonstration of valid clinical association (also known as scientific validity), analytical or technical validity, and valid clinical performance of a SaMD (Figure 2).11

A valid clinical association refers to the association between the software output and the disease or physiological state defined in the intended purpose of the device, based on the inputs and selected algorithms. This association is fundamental to demonstrating the clinical significance of the SaMD to the intended healthcare situation in the real world. The need to generate such evidence depends on whether the link between a SaMD and a clinical condition or state is already well known, i.e., supported by the existing scientific framework or body of evidence. If this is the case, evidence may be provided from sources such as clinical guidelines, scientific (peer-reviewed) literature, consensus statements, international reference materials, or product standards. For example:

  • Calculating the estimated glomerular filtration rate (eGFR) with the Bedside Schwartz formula to measure the level of kidney function in pediatric patients in an eGFR calculator app.12
  • Implementing a linear interpolation algorithm for the zooming feature in medical image processing software.13
  • Triggering an alert for a drug-drug interaction by the automated clinical decision support (CDS) within the clinical information system when quinidine is prescribed to a patient taking digoxin.14
  • Applying recognized practice parameters to ensure the highest image quality for software supporting breast cancer screening.15-17
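As an illustration of the first example, the Bedside Schwartz formula is a published, well-established equation (eGFR = 0.413 × height in cm ÷ serum creatinine in mg/dL), which is exactly why its clinical association needs no new evidence. A minimal sketch of how an eGFR calculator app might implement it (the function and variable names are ours, not taken from any cited product):

```python
def egfr_bedside_schwartz(height_cm: float, serum_creatinine_mg_dl: float) -> float:
    """Estimate GFR (mL/min/1.73 m^2) in children using the Bedside Schwartz
    formula: eGFR = 0.413 * height (cm) / serum creatinine (mg/dL)."""
    if height_cm <= 0 or serum_creatinine_mg_dl <= 0:
        raise ValueError("height and serum creatinine must be positive")
    return 0.413 * height_cm / serum_creatinine_mg_dl

# Example: a 140 cm child with serum creatinine of 0.7 mg/dL
print(round(egfr_bedside_schwartz(140, 0.7), 1))  # 82.6
```

The formula itself is the scientific validity claim; the software merely has to implement it faithfully, which shifts the evidentiary burden toward analytical validation.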

Where the valid clinical association cannot be demonstrated by referencing the evidence-based strategies from the relevant clinical or technical guidelines, the manufacturer must generate evidence of the scientific validity of the hypothesis linking the software input, output, algorithms, or knowledge base to the healthcare situation intended, either by repurposing existing data or initiating new clinical investigations.
Validation of analytical or technical performance refers to the demonstration of the software's ability to correctly process data the same way, every time. Manufacturers are expected to provide "objective evidence" of sound software development processes, including verification of an effective software design for the data processing supporting the SaMD's intended use. This type of validation is also needed to confirm the safety, performance, and effectiveness of the software application, safeguarded by such aspects as availability, integrity, accuracy, and data confidentiality. In practice, the medical device software verification and validation requirements of the IEC 62304 and IEC 82304-1 software development standards can feed into the clinical evaluation.18-20 Gaps identified during the clinical evaluation process will require generation of new evidence, for example, to demonstrate generalizability with real-life datasets or to extend the usability evaluation to omitted user groups.
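The expectation that software "correctly process data the same way, every time" is commonly evidenced with deterministic regression tests against fixed reference outputs. A minimal sketch, using a hypothetical min-max normalization step as the unit under test (the function and fixture values are illustrative assumptions, not from any standard):

```python
def normalize(values):
    """Hypothetical data-processing step: min-max scale a signal to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Regression fixture: identical input must always yield the identical,
# pre-approved reference output (repeatability + correctness in one check).
reference_input = [2.0, 4.0, 6.0, 10.0]
reference_output = [0.0, 0.25, 0.5, 1.0]

assert normalize(reference_input) == reference_output
assert normalize(reference_input) == normalize(reference_input)  # same way, every time
```

Such fixtures, maintained under the IEC 62304 verification process, can then be cited as part of the analytical-validation evidence in the clinical evaluation.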
Validation of clinical performance aims to demonstrate the "clinically meaningful" performance of a SaMD application. This concept relates to the ability of the SaMD to provide the intended clinical benefit. For some applications, it may require measuring the impact of the SaMD on the individual, for example, reducing the risk of revisions and improving mobility after total knee arthroplasty with computer-assisted navigation. For other software, the benefit may lie in providing accurate medical information, assessed against medical information obtained with other diagnostic options and technologies, while the final clinical outcome for the patient depends on the further diagnostic or therapeutic options available—for example, an application that sends the ECG rate, walking speed, heart rate, elapsed distance, and location of an exercise-based cardiac rehabilitation patient to a server for monitoring by a physician. In this case, the device's ability to yield clinically relevant output is linked to its accurate and reliable technical performance. In each case, the manufacturer must demonstrate that the device has been tested in the correct clinical setting. Still, the methodology will depend on the device's intended purpose, clinical claims, and associated risks, requiring, at a minimum, evidence of the product's usability assessment, from the perspective of IEC 62366‑1, to assure elimination of use-related hazards and usability problems.21 The remarkably high cost and time burden of the clinical investigation that may be required to establish scientific validity or validate clinical performance may become a major obstacle for a company developing a new algorithm. Therefore, the commercialization strategy might require a staged approach and careful weighing of specific clinical claims against the data needed to support them.

Considerations for artificial intelligence

Devices that rely on artificial intelligence (AI), or more specifically, machine learning (ML), have unique characteristics that pose a new set of problems for manufacturers and regulators, including unpredictability of system behavior in response to new inputs, and in some cases, changes in system performance through continual learning. Just like any other medical software, AI/ML-based SaMD require sufficient clinical evidence. However, machine learning models are often complex and act like a black box, making them difficult to interpret and explain.22 This affects the ability to demonstrate a medical device’s safety and effectiveness according to the current guidelines.
First, even though understanding the logic behind an AI/ML‐driven output is important to build trust in the model, given the complex nature of machine learning, it is not always possible to establish the extent to which the output adheres to proven scientific knowledge. Evaluation of the input features' clinical relevance to the predicted output, consistent with established diagnostic criteria, has been proposed in the literature to support the scientific validity of AI/ML-based devices and provide general information about the software's logic.23 In some situations, a peer-reviewed article may provide the necessary evidence, for example, to explain the link between certain imaging features and tumor categorization. However, determining the relative importance of clinical factors or features is not possible in cases of complex pattern recognition or where intricate interactions between features weigh on the individual prediction, as in artificial neural networks. Ultimately, for many new AI/ML-based devices, the validity of the scientific knowledge implicitly learned can only be deduced from the algorithm's performance metrics on previously unseen test data. While evidence of scientific validity (valid clinical association) is neither considered mandatory in the IMDRF clinical evaluation guidance nor called for by the FDA in its discussion paper on a proposed regulatory framework for AI/ML-based SaMD,24 scientific validity must be demonstrated for SaMD that qualifies as an in vitro diagnostic (IVD) medical device under the EU In Vitro Diagnostic Medical Devices Regulation (EU IVDR).25
Like any other medical software, AI/ML-based SaMD requires demonstration of reliable and accurate performance within the context of clinical evaluation. However, AI/ML systems have features that make them hard to test using conventional verification methods. Verification of complex AI software may inevitably be limited to testing the interface components between the user and the model. Given the breadth of AI/ML uses across healthcare, the variety of techniques, and the extent of data and human involvement, the requirements for any given AI/ML-based SaMD depends heavily on context.26 Either way, a manufacturer should define a verification approach to detect anomalies, eliminate errors, and build confidence into the system.27
Validation, understood as confirmation by examination and provision of objective evidence that the designed system conforms to user needs and intended uses,28 measures the AI system's performance against an independent reference standard. The reference standard can come from many sources, including a well-defined ground truth, the consensus of experts in the field, or the clinical decisions made by clinicians.29,30 In addition, validation of AI algorithms requires manufacturers to pay special attention to many other factors, such as validating machine learning data31 or implementing proper controls over the training and testing data to avoid bias in the datasets.32 A clinical utility assessment or user studies can be a necessary part of validation to reveal when an AI's decisions need to be explained or made traceable to minimize risks by bringing human judgment into the loop.
Clinically meaningful performance of an AI/ML-based application implies achieving human or superhuman capabilities and measured performance consistent with clinical goals (appropriate true-positive and false-negative rates). Observational cohort studies are best suited to assess the initial feasibility of machine learning algorithms, given the requirement to both develop and validate their efficacy.33 Nonetheless, while validation studies are routinely performed retrospectively, algorithm performance in a clinical setting may be lower than its retrospective performance. The new setting in which the model is implemented may differ from the setting in which the model was derived or validated. Local practices may differ in terms of both medical care services and patient populations. If these differences are large, the prediction model may yield inaccurate risk predictions, lead to improper decisions, and thus compromise patient outcomes in the new setting.34 Using external test data to demonstrate that model performance generalizes sufficiently across the intended use is frequently proposed to support clinical validity.35 However, this is only possible when the test data and the trained model use a common data representation. This is much easier to achieve in medical imaging, with the widely used DICOM format; demonstrating generalizability becomes more difficult in scenarios where semantic interoperability is also required, such as electronic health records (EHRs) using the same coding systems. In addition, a narrow focus on generalizability may come at the price of the clinical utility of a model in a specific clinical context, impacting the relevance and usefulness of an intervention for a patient.
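The retrospective-versus-external performance gap discussed above is usually quantified by computing the same basic metrics on each dataset. A sketch in plain Python (the toy datasets and the observed drop are illustrative, not drawn from the cited studies):

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary labels and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Development-site test set vs. an external site (toy data):
internal = sensitivity_specificity([1, 1, 1, 0, 0, 0, 1, 0],
                                   [1, 1, 1, 0, 0, 1, 1, 0])
external = sensitivity_specificity([1, 1, 0, 0, 1, 0, 1, 0],
                                   [1, 0, 0, 1, 1, 0, 0, 0])
print(internal)  # (1.0, 0.75) — retrospective performance
print(external)  # (0.5, 0.75) — sensitivity drops at the new site
```

A drop of this kind on external data is the quantitative signal that the model may not generalize to the new setting and may need recalibration before routine use.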
Optimizing clinical performance may require recalibration of the model with individual patient data from the new setting before routine use, especially as there are no clear guidelines regarding the number of external validations needed before use in daily practice.36,37
Notably, only the feasibility and performance of an ML model can be tested on retrospective datasets, not its prospective practical implications. Metrics of accuracy do not address the clinical value of a model, as models can accurately predict an elevated risk, for example, of postsurgical complications, without offering any opportunity for reducing that risk. This limited view of adoptability and clinical utility may lead to under- or over-representation of risks in the premarket assessment of the device design and its interaction with the user. Postmarket clinical follow-up is therefore a particularly crucial step to ensure adequate characterization of the device's real-world clinical use. Recording, assessing, and integrating data from clinical use into the software algorithms—potentially through a stepwise approval phasing—feeds into the total product lifecycle approach advocated by the IMDRF.
The ability of the model to continually retrain to improve performance creates another set of problems for manufacturers and regulators. This kind of dynamic change does not fit well within the current change control processes for medical devices.38,39 Continuous learning systems require continuous monitoring to ensure the system is operating within its prespecified operational parameters, and performing ongoing safety reviews and a continuous benefit-risk analysis that compares the model’s performance to its human counterpart (if relevant), as both change over time, with sufficient transparency towards users and regulators.40,41
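Monitoring a continuously learning system against prespecified operational parameters can be sketched as a rolling performance check; the lower bound and window size below are illustrative assumptions, not regulatory values:

```python
from collections import deque

class PerformanceMonitor:
    """Flags when the rolling accuracy of a continuously learning model
    drifts below a prespecified lower bound."""

    def __init__(self, lower_bound: float = 0.90, window: int = 100):
        self.lower_bound = lower_bound        # prespecified operational limit
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def in_spec(self) -> bool:
        # Not enough observations yet: withhold judgment rather than alarm.
        if len(self.outcomes) < self.outcomes.maxlen:
            return True
        return sum(self.outcomes) / len(self.outcomes) >= self.lower_bound

monitor = PerformanceMonitor(lower_bound=0.9, window=10)
for correct in [True] * 8 + [False] * 2:
    monitor.record(correct)
print(monitor.in_spec())  # False: rolling accuracy 0.8 < 0.9 bound
```

In practice, such a check would feed into the ongoing safety review and benefit-risk analysis, triggering escalation (and transparency toward users and regulators) when the system leaves its prespecified envelope.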


The requirement to perform a clinical evaluation (medical devices) or performance evaluation (IVD medical devices) and demonstrate sufficient clinical evidence is defined in EU law.42,43 A clinical (or performance) evaluation report is a mandatory element of the technical documentation for every medical device destined for the European market. CE marking medical devices under the EU Medical Devices Regulation (EU MDR) always requires clinical data, with a mandatory clinical investigation for high-risk devices unless a robust justification is given. Data for similar devices can be used only if equivalence to the subject device is demonstrated according to the strict criteria of the EU MDR and if sufficient access to the data relating to the device to which equivalence is claimed can be demonstrated. The guidance describing the state-of-the-art methodology for performing clinical evaluations of medical devices44 is being revised to reflect changes in the EU legal basis. However, the formal requirements, developed in anticipation of the EU MDR, will likely not be significantly affected. Guidance dedicated to the clinical evaluation of software that qualifies as a medical device or IVD medical device was published in the EU, drawing on the IMDRF guidance on clinical evaluation of SaMD.45 The guidance helps manufacturers draft clinical evaluation plans for their software products. The principles described in this guidance are also generally considered suitable for AI-based software.46
While developing an EU MDR-compliant CER requires substantial efforts, manufacturers can use the guidance document as a basis for regulatory submissions in other countries, especially those that have revised their national laws under the strong influence of the EU MDR, such as Saudi Arabia. The Table below provides an overview of EU, US, and international guidelines and standards for generation of clinical evidence and clinical evaluation of SaMD.

FDA released guidance on clinical evaluation of SaMD with the same text and the same presentation as the IMDRF guidance.47 The guidance does not provide recommendations on specific regulatory situations, nor does it change expectations for regulatory submissions. Still, its principles are relevant for the development of future regulatory approaches for SaMD. With these limitations in mind, the guidance provides substantial assistance in the determination of data necessary to assure safety and effectiveness during the premarket review, especially for manufacturers of low- or medium-risk SaMD where no predicate exists (see de novo and premarket assessments).
Like the EU, FDA extends the adopted IMDRF principles to AI/ML-based devices, with the specific types of data necessary to assure safety and effectiveness during the premarket review, including study design, depending on the function of the AI/ML, the risk it poses to users, and its intended use.48
In China, clinical evaluation reports are required for Class II and III medical devices. China's National Medical Products Administration (NMPA) published its own guidelines (Medical Device Clinical Evaluation Technical Guidance),49 somewhat similar to the European MEDDEV 2.7/1 but with different priorities, which usually leads to rejection when a manufacturer attempts to submit an EU CER to the NMPA. To help manufacturers better determine their regulatory burden upfront, the NMPA implemented a device exemption list. For devices on the list, only a simplified CER is needed; for all other Class II and III devices, a full China CER is required. The NMPA also may require specific technical information in CERs for certain products. Under the NMPA's classification system, a SaMD is either Class II or III, which means a CER is always required for a SaMD; however, not all SaMD require a clinical trial, as some may fit into clinical exemptions in China (e.g., medical image processing software). As with FDA's 510(k) process, comparator devices (approved in China) can be used to support the evaluation. Clinical investigations are required if no equivalent devices can be found and safety and efficacy cannot be proven with other clinical and nonclinical data. While China does not require a PMCF, a design change submission requires a CER update. China also requires Chinese-language literature sources and clinical data from the Chinese population.
The NMPA has demonstrated a faster-maturing approach to regulating AI/ML-based devices than the US or EU, with several standards published or drafted. In June 2019, the NMPA released the Key Review Points of Software for Deep Learning Decision Support Medical Devices, addressing one of the areas of AI/ML application where the risk is considerably higher, in advance of a general guideline on the review of AI. A more specific guideline, with a focus on pneumonia triage and evaluation, follows the general approach in the deep learning decision support guideline but gives more detailed requirements on clinical evaluation related to the software’s function. Different validation requirements are proposed—clinical validation is needed for decision support software, while non-clinical performance assessment could be adopted for software intended for workflow optimization or medical image post processing.50


The rapid development of SaMD applications brings new opportunities and new challenges for both manufacturers and regulators, as they struggle to balance the drive to foster innovation with the need to protect patient safety. Adopting the same technical documents, standards, and scientific principles of clinical evaluation across countries and regions leads to more similar or aligned regulatory requirements and approaches that promote modern technologies without losing focus on patient safety.
Regulation of AI/ML-based devices is an area in which legislators pilot new paradigms. A robust regulatory framework for AI-based devices is still missing, and the lack of unified industry standards for risk classification and evaluation of medical AI is still causing delays in new product registrations. Future clinical evaluation approaches should consider using established standards of clinical benefit to clarify the added value accompanying these complex algorithms.
About the author
Zuzanna Kwade, MSc, PhD, is a clinical evaluation lead at Dedalus. She has more than 15 years’ experience in clinical safety management and clinical evaluation of a broad range of medical devices. As the industry representative in the EU Task Force on Clinical Evaluation of Medical Device Software, Kwade played an active role in the development of the MDCG2020-1 guidance. She has a PhD in molecular biology from the University of Antwerp, Belgium. Kwade can be reached at [email protected] 
Citation Kwade Z. Clinical evaluation of software. RF Quarterly. 2022;2(1):23-34. Published online 1 April 2022.
Acknowledgment This article is based on a chapter in Software as a Medical Device – Regulatory and Market Access Implications. The full citation for the chapter is:
Kwade Z. Clinical evaluation of software. In: Cobbaert K, Bos G, eds. Software as a medical device: Regulatory and market access implications. 1st ed. Regulatory Affairs Professionals Society; 2021:63-74.

  1. Sutton RT, Pincock D, Baumgart DC, et al. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. Accessed 2 February 2021.
  2. Sittig DF, Wright A, Osheroff JA, et al. Grand challenges in clinical decision support. J Biomed Inform. NCBI website. Accessed 2 February 2021.
  3. Goddard K, Roudsari A, Wyatt JC. Automation bias—a hidden issue for clinical decision support system use. Stud Health Technol Inform. 2011. 164:17-22.
  4. Clinical Evaluation. IMDRF MDCE WG/N56 FINAL:2019. IMDRF website. Accessed 2 February 2021.
  5. Clinical Investigation. IMDRF MDCE WG/N57 FINAL:2019. IMDRF website. Accessed 2 February 2021.
  6. Clinical Evidence: Key Definitions and Concepts. IMDRF MDCE WG/N55 FINAL:2019. IMDRF website. Accessed 2 February 2021.
  7. Clinical Evaluation. IMDRF MDCE WG/N56 FINAL:2019. IMDRF website. Accessed 2 February 2021.
  8. Software as a Medical Device (SaMD): Clinical Evaluation. IMDRF/SaMD WG/N41 FINAL:2017. IMDRF website. Accessed 2 February 2021.
  9. Software as a Medical Device (SaMD): Clinical Evaluation: Guidance for Industry and Food and Drug Administration Staff. Final. 8 December 2017. FDA website. Accessed 2 February 2021.
  10. Guidance on Clinical Evaluation (MDR)/Performance Evaluation (IVDR) of Medical Device Software. MDCG 2020‑1. European Commission website. Accessed 2 February 2021.
  11. Op cit 8.
  12. Schwartz GJ and Work DF. Measurement and estimation of GFR in children and adolescents. J Am Soc Nephrol. CJASN website. Accessed 2 February 2021.
  13. Lehmann TM, Gönner C, Spitzer K. Survey: interpolation methods in medical image processing. IEEE Transactions on Medical Imaging. 1999. Nov;18(11):1049-1075.
  14. Bigger JT Jr, Leahey EB Jr. Quinidine and digoxin. An important interaction. Drugs. 1982. Sep; 24(3):229-39.
  15. Kanal KM, Krupinski E, Berns EA, et al. ACR-AAPM-SIIM practice guideline for determinants of image quality in digital mammography. J Digit Imaging. NCBI website. Accessed 2 February 2021.
  16. IEC 62304:2006 Medical device software: Software lifecycle processes. ISO website. Accessed 2 February 2021.
  17. IEC 62304:2006/AMD 1:2015 Medical device software: Software lifecycle processes: Amendment 1. ISO website. Accessed 2 February 2021.
  18. IEC 82304-1:2016 Health software: Part 1: General requirements for product safety. ISO website. Accessed 2 February 2021. 
  19. IEC 62366-1:2015 Medical devices—Part 1: Application of usability engineering to medical devices. ISO website. Accessed 2 February 2021.
  20. Cutillo CM, Sharma KR, Foschini L, et al. Machine intelligence in healthcare—perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digit Med. Nature website. Accessed 2 February 2021.
  21. Ordish, J, Murfet H, Hall A. Algorithms as medical devices. 2019. PHG Foundation website. Accessed 2 February 2021.
  22. How is the FDA Considering Regulation of Artificial Intelligence and Machine Learning Medical Devices? FDA website. Accessed 2 February 2021.
  23. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU. Last updated May 2017. EUR-Lex website. Accessed 2 February 2021.
  24. Op cit 22.
  25. Op cit 23.
  26. Op cit 21.
  27. Perspectives and Best Practices for Artificial Intelligence and Continuously Learning Systems in Healthcare. Xavier Health Artificial Intelligence Summit. 23-24 August 2018. Xavier Health website. Accessed 2 February 2021.
  28. General Principles of Software Validation; Final Guidance for Industry and FDA Staff. January 2002. FDA website. Accessed 2 February 2021.
  29. Op cit 27.
  30. Guidance for Industry and FDA Staff: Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device Data – Premarket Notification [510(k)] Submissions. July 2012. FDA website. Accessed 2 February 2021.
  31. Breck E, Zinkevich M, Polyzotis N, et al. Data Validation for Machine Learning. Proceedings of SysML. 2019. Google Research website. Accessed 2 February 2021.
  32. Op cit 27.
  33. Motwani M, Dey D, Berman DS, et al. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis. Eur Heart J. Accessed 2 February 2021.
  34. Kappen TH, van Klei WA, van Wolfswinkel L, et al. Evaluating the impact of prediction models: lessons learned, challenges, and recommendations. Diagn Progn Res. Accessed 2 February 2021.
  35. Op cit 21.
  36. Futoma J, Simons M, Panch T, Doshi-Velez F, Celi LA. The myth of generalisability in clinical research and machine learning in health care. Lancet Digit Health. NCBI website. Accessed 2 February 2021.
  37. Artificial intelligence in EU medical device legislation. September 2020. COCIR website. Accessed 2 February 2021.
  38. Op cit 22.
  39. Op cit 37.
  40. Op cit 27.
  41. Op cit 37.
  42. Op cit 23.
  43. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No. 178/2002 and Regulation (EC) No. 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC. Accessed 2 February 2021.
  44. MEDDEV 2.7/1 rev.4. Clinical evaluation: Guide for manufacturers and notified bodies. June 2016. EC website. Accessed 2 February 2021.
  45. Op cit 10.
  46. Op cit 37.
  47. Op cit 8.
  48. Op cit 22.
  49. China CER (Technical Guidance on Clinical Evaluation of Medical Devices). Translation available. Michaelyan website. Accessed 2 February 2021.
  50. In one day, two AI diagnostic software go into NMPA fast-track channel. China Med Device website. Accessed 2 February 2021.

© 2022 Regulatory Affairs Professionals Society.

