NEWS
Publishing Good AI Practice
January 14 was a significant day for those involved in manufacturing pharmaceuticals, as the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) jointly released guiding principles for "Good AI Practice" that cover Research and Development (R&D), manufacturing, and patient safety. The objective is to prevent a fragmented regulatory landscape for Artificial Intelligence (AI) and to give manufacturers confidence that processes implemented in one part of the world can be replicated elsewhere.
IMPACT
Use Enterprise Systems and Make Sure You Provide Context
One of the motivations for publishing the principles is to stop manufacturing personnel from using consumer-grade AI tools for tasks such as summarizing data or handling regulatory documents. Manufacturers must now identify content that has not been created via enterprise AI systems, because such content is likely to lack human oversight. The risk is that documents and decisions are influenced by poor-quality training data.
An important principle is that the use of an AI application must have a clear Context of Use (CoU). This means that as part of deploying AI at a facility, the manufacturer must document the intention for using AI and the model’s involvement in decision-making—especially as some manufacturers are beginning to deploy Agentic AI systems. Any use of AI that could impact patient safety or medical interventions will be considered high risk. In other words, the onus will be on manufacturers to have a human-in-the-loop to confirm changes to plans on the production line before implementation.
CoU will require manufacturers to have robust data governance policies that provide evidence of how data were collected, processed, and utilized, as well as scrutiny from different stakeholders in the business (not just manufacturing teams, but also clinical scientists and ethicists). Models must be transparent in their use and explainable to regulators in the same manner as when adhering to Good Practice (GxP) requirements. Regulators will require that models be tested and examined on a scheduled, rather than ad hoc, basis to make sure they continue to perform as intended. However, by publishing guidance on Predetermined Change Control Plans (PCCPs), the regulators appear conscious of not deterring innovation, as manufacturers do not have to submit every single model adjustment for review.
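To make the CoU idea concrete, the documentation described above could be captured as a structured record that ties a model’s intended use to its risk classification and human-in-the-loop requirement. The sketch below is purely illustrative: the field names, the risk rule, and the class itself are assumptions for this example, not structures defined by the FDA/EMA principles.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical Context of Use (CoU) record. All names and rules here are
# illustrative assumptions, not taken from the FDA/EMA guiding principles.
@dataclass
class ContextOfUse:
    model_name: str
    intended_use: str            # documented purpose of the AI application
    decision_role: str           # e.g., "advisory" or "autonomous"
    impacts_patient_safety: bool
    next_scheduled_review: date  # scheduled (not ad hoc) performance check

    def risk_level(self) -> str:
        # Uses that could affect patient safety or medical interventions
        # are treated as high risk under the principles.
        return "high" if self.impacts_patient_safety else "standard"

    def requires_human_in_loop(self) -> bool:
        # High-risk uses need a human to confirm production line changes
        # before implementation.
        return self.risk_level() == "high"

cou = ContextOfUse(
    model_name="line-anomaly-detector",
    intended_use="flag deviations in fill-finish line telemetry",
    decision_role="advisory",
    impacts_patient_safety=True,
    next_scheduled_review=date(2026, 1, 14),
)
print(cou.risk_level(), cou.requires_human_in_loop())  # high True
```

A record like this makes the governance evidence auditable: the intended use, decision role, and review schedule are explicit, and the human-in-the-loop requirement falls out of the risk classification rather than being decided case by case.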
RECOMMENDATIONS
Good Practices for Others to Follow
ABI Research has evaluated the digital transformation strategies, including the use of AI, of 15 of the largest pharmaceutical manufacturers (see ABI Research’s Digital Transformation Benchmarking Index for Pharmaceutical Manufacturers presentation (PT-6041)). AI is well established in the drug discovery phase of drug development and is increasingly part of manufacturing operations.
The research revealed that Pfizer is advanced in developing Large Language Models (LLMs) and Small Language Models (SLMs) on its internal data for insights. The company is also investigating how Agentic AI can help automate production line tasks like anomaly detection. Pfizer is already adhering to the principles, especially CoU, with experts validating AI model outputs in critical areas like quality control and regulatory submissions. Other manufacturers already working in the desired manner include Johnson & Johnson, whose engineers validate AI outputs for predictive analytics covering yield optimization and process adjustments, and Eli Lilly, where scientists and operators validate AI recommendations from monitoring and simulating production processes.
Manufacturers, not just those producing pharmaceuticals, need to develop AI governance frameworks. Transparency, explainability, and monitoring for model drift should be part of manufacturers’ strategies for implementing AI to support their operations.