The Column spoke to Evelien Dejaegere from Waters Corporation about the method life cycle management approach and its importance in the pharmaceutical sector.
Q. What does it mean to have a good method?
A: The key word is risk assessment: what is the risk of method failure, and how can you minimize it? The best method requires proper identification of the requirements, such as detection of the main compound(s) and/or related impurities, good intermediate results, and reliable overall performance of the instruments and chemistries used. In addition, a good method reduces the occurrence of aberrant events such as out of specification (OOS) and out of trend (OOT) results. This is highly desirable because it avoids the set of actions that are triggered when OOS and OOT results occur. An excellent understanding of the method also helps to pinpoint the cause of OOS and OOT results during investigations. Ultimately, a good method generates good-quality data that reduce analytical uncertainty, improve method performance, and decrease the cost of poor quality (COPQ).
Q. Why is method life cycle management (MLCM) important in the pharmaceutical sector?
A: The main reason MLCM is important in pharmaceuticals is to ensure drugs are safe for patients. Implementing the MLCM approach leads to better method understanding and improved method performance. This results in robust methods that reduce analytical uncertainty, facilitate and improve method transfer, and reduce the need for post-approval changes. Overall, there is higher confidence in data quality.
Q. Please could you outline the MLCM approach.
A: Method life cycle management promotes a more structured approach to analytical method development and establishes connections from development through validation and transfer to routine use.
When implementing an MLCM approach, we start by defining the analytical target profile (ATP). The ATP defines the accuracy and precision goals of the method, and sometimes it will also include sensitivity and specificity goals that the method must achieve. The approach likewise begins with a depth of understanding: understanding the analyte, the goals, and the areas of risk, all of which must be carefully documented. The ATP is overarching across all the stages, playing a key role in facilitating continuous improvement of the analytical method.
Stage 1-Method Design and Development: The ATP can be used to decide on method selection, design, and development activities. Risk assessment is used to decide which variables may affect the performance of the method. With the help of analytical quality by design (AQbD) principles and design of experiments (DOE), the impact of those variables is tested, helping to identify risks that affect method performance. Once the risks have been evaluated, the control strategies can be formulated.
Stage 2-Method Qualification: Stage 2 is where we review whether our analytical method or procedure meets the criteria previously established in the ATP. It is the stage that is traditionally known as method validation. However, instead of a checkbox method validation approach, we are now validating the method against predefined method goals set in the ATP and ensuring we meet those goals as we validate the method (a simple illustration of this kind of check follows the three stages below).
Stage 3-Method Performance Verification: This is the stage where the method is deemed suitable for routine use in quality control. In this stage we monitor whether the method continues to be fit for its purpose and look for potential improvements. If we’ve done our job well, we should have a method that consistently performs well and is suitable for its intended use. In the spirit of continuous improvement, changes are allowed, provided the risk is evaluated, method qualification is demonstrated, and the performance characteristics remain in line with the ATP. If necessary, you can decide to take the method back to one of the previous stages. Personally, I think Stage 3 is where the full potential and the beauty of the MLCM approach becomes real.
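To make the idea of checking a qualified method against its predefined ATP goals more concrete, here is a minimal Python sketch. The attributes, acceptance limits, and result values are hypothetical and chosen purely for illustration; they are not taken from the interview.

```python
# Hypothetical check of method qualification results against predefined ATP goals.
# All attributes, limits, and values below are illustrative assumptions only.

ATP_GOALS = {
    "accuracy_recovery_pct": (98.0, 102.0),        # acceptable mean recovery range
    "precision_rsd_pct": (0.0, 2.0),               # acceptable repeatability (%RSD)
    "specificity_resolution": (1.5, float("inf")), # minimum resolution to the nearest peak
}

qualification_results = {
    "accuracy_recovery_pct": 99.4,
    "precision_rsd_pct": 1.1,
    "specificity_resolution": 2.3,
}

def meets_atp(results: dict, goals: dict) -> bool:
    """Return True only if every reported result falls within its ATP limits."""
    all_pass = True
    for attribute, (low, high) in goals.items():
        value = results[attribute]
        passed = low <= value <= high
        print(f"{attribute}: {value} -> {'pass' if passed else 'FAIL'}")
        all_pass = all_pass and passed
    return all_pass

print("Method meets ATP:", meets_atp(qualification_results, ATP_GOALS))
```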
Q. What advice can you offer to scientists thinking of implementing MLCM in their research?
A: Start with identifying the method requirements (ATP) and use those to drive method selection. Experts say pick one method and start there. I couldn’t agree more! From my experience, the impact of controlling sample handling is typically underestimated, as is the importance of making sure method instructions are written in a controlled way.
Using quality consumables also provides an immediate contribution to the quality and control strategy for method performance; it all comes back to using consumables you can rely on, regardless of where the method is run.
All of these points are easily overlooked in today’s world, where everything must move very fast, but attending to them can save you a lot of time when you start transferring methods to different laboratories and locations.
Q. Are there any misconceptions among scientists about where the analytical procedure life cycle sits within their method?
A: MLCM as a concept is still in the adoption phase. Laboratory personnel are learning about the requirements and benefits of the MLCM approach, and regulators are, too. Today there are still some barriers; I would not call them misconceptions. These barriers are linked to many different aspects, such as limited time, lack of training and education, and an unwillingness to disrupt current workflows. On the other hand, we see a lot of very enthusiastic scientists who are looking for tools and resources to move it forward. There are also guidelines supporting the approach, such as United States Pharmacopeia (USP) general chapters <1220> and <1210>, the upcoming revision of the International Council for Harmonisation (ICH) Q2(R2), and the creation of a new guidance on method development, Q14 (1,2).
Q. What are the benefits of adopting a more modern approach to methods?
A: The implementation of a more modern approach generates clear method procedures while increasing traceability and compliance for method investigations. These improvements promote reliable and consistent results and ease method transfers. The implementation of the MLCM concept ultimately leads to better quality methods, with enhanced understanding of the variables associated with the method, leading to better data, fewer OOS and OOT results, and better quality products and medicines that are safer for the patient.
Q. In Stage 1 of the process, you mention statistical methods and risk assessment tools. Could you provide more detail on what these approaches might be?
A: Gaining insight into the ATP requirements and the chemical and physical properties of the analytes will help with the risk assessment. Often this will start with gathering knowledge using specific tools such as fishbone diagrams, scoring matrices, and risk templates. Researching the compounds is also part of the process: drawing on what you already know, but also acquiring additional knowledge using software tools to calculate chemical properties such as pKa. These activities are typically performed by getting a team together to consider which factors may pose a risk.
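As one way to picture a scoring-matrix style risk assessment, the sketch below ranks a few hypothetical method variables by an FMEA-style risk priority number (severity x occurrence x detectability). The variables and scores are assumptions made for illustration only, not values from the interview.

```python
# Hypothetical FMEA-style scoring matrix for method variables.
# Scores (1 = low risk, 5 = high risk) are illustrative assumptions only.

variables = [
    # (variable, severity, occurrence, detectability)
    ("mobile phase pH", 5, 3, 2),
    ("column temperature", 3, 2, 2),
    ("sample preparation and handling", 4, 4, 3),
    ("flow rate", 2, 1, 1),
]

def risk_priority(severity: int, occurrence: int, detectability: int) -> int:
    """Risk priority number: higher values suggest tighter control is needed."""
    return severity * occurrence * detectability

ranked = sorted(
    ((name, risk_priority(s, o, d)) for name, s, o, d in variables),
    key=lambda item: item[1],
    reverse=True,
)

for name, rpn in ranked:
    print(f"{name:35s} RPN = {rpn}")
```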
Q. What analytical tools or processes do you recommend for the Stage 1 method design and development phase?
A: The goal of Stage 1 within the MLCM approach is to provide the analyst with a robust method operable design region (MODR). This will give the scientist a broad knowledge of the factors that affect the performance of their method. Knowing these factors will allow the scientist to have a final method that is “proven” to be robust within the space of the MODR. There are statistical software packages available to help design the space. Using chromatographic software and DOE software can also be valuable and speed the process.
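As a simple illustration of how DOE output can be screened to map a method operable design region, the Python sketch below evaluates a full-factorial grid of two hypothetical factors against a resolution criterion. The toy response model, factor levels, and acceptance limit are assumptions made for illustration; in practice, measured responses and dedicated DOE or chromatographic modelling software would be used.

```python
# Illustrative full-factorial screen of two hypothetical method factors.
# The response function, levels, and limit are invented for illustration;
# real work would rely on measured responses or modelling software.

from itertools import product

ph_levels = [2.8, 3.0, 3.2]     # mobile phase pH levels
temp_levels = [30, 35, 40]      # column temperature (deg C)

MIN_RESOLUTION = 1.8            # acceptance criterion for the critical peak pair

def predicted_resolution(ph: float, temp: float) -> float:
    """Toy response model standing in for a measured or modelled resolution."""
    return 2.0 + 1.5 * (ph - 3.0) - 0.03 * (temp - 35)

modr_points = []
for ph, temp in product(ph_levels, temp_levels):
    rs = predicted_resolution(ph, temp)
    inside = rs >= MIN_RESOLUTION
    print(f"pH {ph}, {temp} C -> Rs = {rs:.2f}" + (" (within MODR)" if inside else ""))
    if inside:
        modr_points.append((ph, temp))

print("Factor combinations inside the (toy) MODR:", modr_points)
```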
Q. Can you give more detail on how a good method can help identify causation for OOS or OOT observations?
A: Having an understanding of the impact of variables on method performance (from Stage 1) can be invaluable. For example, in chromatographic testing an OOS or OOT result can be related to inadequately resolved peaks. Unresolved or coeluting peaks can be the root cause of variable or OOS results. By reviewing the data generated in Stage 1, it may be possible to adjust the chromatographic conditions to achieve the necessary separation.
In that respect, good methods separate known impurities and leave space for unknown impurities. When combined with a detection technique such as photodiode array (PDA) or mass spectrometry (MS) to confirm identity and peak homogeneity, they will reveal coeluting peaks before you have incorrectly assigned their response to your impurity or even your assay peak.
If a method is developed with risk-based approaches and good control strategies are in place, the method should be robust and OOS results should be infrequent. However, methods can change over time, and to ensure OOS results are kept to a minimum, it is recommended to monitor (via control charts) the critical variables, such as column performance, system performance, and even stability data, to ensure there are no OOT patterns (a simple sketch of such a check is shown below). By monitoring, you can start to see any trends, such as results trending towards failure, and take action to prevent future OOS and OOT results.
A good method allows the scientist to know those critical variables that might significantly affect the performance, and this helps them to confidently identify the probable cause for OOS and OOT results.
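In its simplest form, the kind of control-chart monitoring described above could look like the following Python sketch, which checks recent results against control limits derived from an established baseline and flags a run of consecutive decreasing values. The data and rules are hypothetical and purely illustrative.

```python
# Minimal, hypothetical trend check on a monitored method variable
# (for example, resolution from routine system suitability runs).
# All values and rules are illustrative assumptions only.

import statistics

baseline = [2.31, 2.28, 2.30, 2.29, 2.32, 2.27]   # established performance
recent = [2.25, 2.21, 2.18, 2.14, 2.10]           # latest runs to evaluate

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
lcl, ucl = mean - 3 * sd, mean + 3 * sd
print(f"baseline mean = {mean:.2f}, control limits = [{lcl:.2f}, {ucl:.2f}]")

# Shewhart-style rule: a point outside the 3-sigma limits is out of control.
for value in recent:
    if not (lcl <= value <= ucl):
        print(f"{value:.2f} is outside the control limits: investigate before an OOS result occurs.")

# Simple trend rule: a run of consecutive decreasing results suggests an OOT pattern.
if all(later < earlier for earlier, later in zip(recent, recent[1:])):
    print("Consecutive decreasing results: possible OOT trend, take preventive action.")
```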
References
Evelien Dejaegere joined Waters in 2005 and is currently a market development manager in the European Chemistry Team. Her main focus is to work on customers’ pharmaceutical and biopharmaceutical workflows, aiming to better understand the full processes from sample to results.
E-mail: evelien_dejaegere@waters.com
Website: www.waters.com