New Draft Guidance on the Use of Artificial Intelligence in Regulatory Decision-Making for Drug and Biological Products
In January 2025, the Food and Drug Administration (FDA) issued draft guidance on the use of artificial intelligence (AI) models that support regulatory submissions for drug and biological products. Notably, the guidance does not address the use of AI models in drug discovery or those used for operational efficiencies (e.g., internal workflows, resourcing, drafting/writing regulatory submissions). The FDA noted that the use of AI in the drug development life cycle is growing quickly, and it issued the guidance out of concern about the accuracy, reliability, introduced bias, methodological transparency, and stability of AI models used in regulatory submissions.
Windshire agrees with the FDA’s assessment, but we have been struck by the level of interest on the operational side. Our industry has been very risk averse and has historically moved slowly to adopt new technologies. The great interest in AI tells us two things: the industry is keenly interested in cost savings and in improving quality proactively. This dual interest has probably never been higher as companies look to become more competitive and to reduce costs. As always, a risk-based approach to accomplishing this is advised. While this guidance addresses using AI in regulatory filings, it provides a framework for thinking about AI in other applications.
The guidance centers on a risk-based credibility assessment framework for AI models included in regulatory submissions. It also discusses the importance of life-cycle maintenance of the credibility of AI model outputs. In addition, the FDA recommends that sponsors engage with the agency early in the AI model development process and gives them different options to do so.
The guidance lays out the following seven steps of the risk-based credibility assessment, with examples provided for steps 1-3:
- Define the Question of Interest: Specify the question or decision addressed by the AI model.
- Define the Context of Use (COU): Describe the role and scope of the AI model.
- Assess the AI Model Risk: Evaluate model influence and decision consequence to determine model risk.
- Develop a Plan to Establish AI Model Credibility: Create a credibility assessment plan based on model risk and COU.
- Execute the Plan: Implement the credibility assessment plan.
- Document the Results: Record the outcomes and any deviations from the plan.
- Determine the Adequacy of the AI Model: Assess if the model is appropriate for the COU.
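To make step 3 concrete, the relationship between model influence, decision consequence, and overall model risk can be sketched as a simple lookup. Note that the three-level scales and the matrix values below are illustrative assumptions for discussion purposes, not values taken from the guidance itself:

```python
# Illustrative sketch: the guidance determines AI model risk from two
# factors, model influence and decision consequence. The three-level
# scales and the lookup matrix here are assumptions, not from the guidance.
from enum import IntEnum

class Level(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

# Hypothetical risk matrix: rows = model influence, columns = decision consequence.
RISK_MATRIX = [
    # consequence: LOW     MEDIUM        HIGH
    [Level.LOW,    Level.LOW,    Level.MEDIUM],  # influence LOW
    [Level.LOW,    Level.MEDIUM, Level.HIGH],    # influence MEDIUM
    [Level.MEDIUM, Level.HIGH,   Level.HIGH],    # influence HIGH
]

def model_risk(influence: Level, consequence: Level) -> Level:
    """Look up overall model risk from model influence and decision consequence."""
    return RISK_MATRIX[influence][consequence]

# Example: a model that heavily influences a high-consequence decision
# would be high risk under this illustrative matrix.
print(model_risk(Level.HIGH, Level.HIGH).name)
```

In practice, the influence and consequence ratings, the scales used, and the resulting risk determination would be documented in the credibility assessment plan (step 4) and discussed with the agency early in development.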
The FDA expects interactive feedback concerning the assessment of AI model risk and, again, strongly encourages sponsors to engage early with the agency to discuss AI model risk, the appropriate risk-based credibility assessment activities for the proposed model, and the COU.
The comment period for the draft guidance closed on April 7th, and Windshire encourages interested parties to review the draft guidance at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological We will continue to monitor this draft guidance and post any updates in our blog.
Do you need help with regulatory compliance or other drug development issues? Please email info@windshire.com or call +1 844-686-5750 if you need our experts to assist with your needs.