Special Issues
A snapshot of key trends and developments in the data handling sector according to selected panelists from companies exhibiting at Analytica 2018.
LCGC: What is currently the biggest problem in data management for chromatographers?
Andrew Anderson: While the extent of this problem depends on the discrete responsibilities of each chromatographer, we would posit that the “oppression of transcription” between different foundational systems presents the greatest challenge. We define this as the effort of having to use different IT systems (and consequently transcribe information between them) to work through a set of chromatographer tasks.
Consider the new paradigm of quality-by-design (QbD) for chromatographic method development. Different data handling systems are used at different stages of this process. For example, compositional data (chemical information), definition of the statistical design of experiments (DoE), experiment execution, and project reporting all require different pieces of software. Significant human effort is spent transcribing information between these systems.
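To make the DoE step concrete, the following minimal Python sketch enumerates a full-factorial design over hypothetical method factors (pH, gradient time, column temperature). The factor names and levels are purely illustrative; a real QbD study would derive them from compositional data and risk assessment, and would typically use a dedicated DoE package.

```python
# Minimal sketch: enumerate a full-factorial design of experiments for three
# hypothetical chromatographic method parameters. Factor names and levels are
# assumptions made for this example only.
from itertools import product

factors = {
    "pH": [2.5, 3.5, 4.5],              # mobile-phase pH levels (assumed)
    "gradient_time_min": [10, 20, 30],  # gradient length in minutes (assumed)
    "column_temp_C": [30, 40],          # column temperature (assumed)
}

# Each run is one combination of factor levels.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for run_id, run in enumerate(design, start=1):
    print(f"Run {run_id:02d}: {run}")

print(f"Total experiments: {len(design)}")
```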
The core mission of a chromatography data system (CDS), the informatics system most widely used by chromatographers, is the use of methods for instrument control and data acquisition. Rather than attempting to extend CDS capabilities beyond those essentials, it is logical that a CDS should be able to interface with other informatics systems that consolidate and assemble data appropriately from multiple CDS systems and experimental studies.
We believe that transcription and documentation time exceeds the time chromatographers are able to spend doing what they trained for: applying their expertise to design, experimentation, and analysis.
John Sadler: Chromatographers generate a large volume of data, and vendors have responded with tools that improve their ability to find and share the results required for their work. Today, we see data review as a bigger challenge. Historically, chromatographers have reviewed every peak in a chromatogram or every compound in a target list. Innovative data analysis tools, designed specifically to present chromatographic data in a format optimized for visual inspection, allow rapid detection of anomalies, enabling the chromatographer to review by exception and dramatically improving the speed of data review.
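As a rough illustration of review by exception, the sketch below flags only those peaks whose areas deviate from an expected value by more than an assumed tolerance; the peak names, areas, and tolerances are invented for the example and are not tied to any particular vendor's tool.

```python
# Illustrative sketch of "review by exception": only peaks whose results fall
# outside pre-set tolerances are flagged for manual review. All values below
# are hypothetical.
peaks = [
    {"name": "API",        "area": 1523.4, "expected": 1500.0, "tol_pct": 5.0},
    {"name": "Impurity A", "area":   12.7, "expected":   10.0, "tol_pct": 15.0},
    {"name": "Impurity B", "area":   45.9, "expected":   11.0, "tol_pct": 15.0},
]

def needs_review(peak):
    """Return True when the peak area deviates from its expected value
    by more than the allowed percentage tolerance."""
    deviation_pct = abs(peak["area"] - peak["expected"]) / peak["expected"] * 100
    return deviation_pct > peak["tol_pct"]

flagged = [p["name"] for p in peaks if needs_review(p)]
print("Peaks requiring analyst review:", flagged or "none")
```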
Heather Longden: The biggest problem in data management is dealing with chromatographic data in multiple proprietary formats. Companies and regulators are looking for a way to compare data across multiple analytical techniques, but are struggling even to find a common format for a single technique such as chromatography. Current solutions that rely on printed reports or exported data (in some common, generic, human-readable format) suffer from incompleteness (missing data, missing versions of data, methods, and audit trails) or from a lack of security around the file format (how do you know that the exported data has not been altered, especially if it is human readable?). The only current solution is to print or export the data, verify it once it lands in its final location, and make sure that the values match the original and that nothing was lost in the conversion.
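The verification step described here could look something like the following sketch, assuming results exported to a plain CSV file. The file name, result values, and use of a SHA-256 checksum are illustrative choices, not features of any particular CDS.

```python
# Rough sketch of export verification: re-read the exported human-readable
# file, confirm the values match the originals, and record a checksum so any
# later alteration of the file can be detected. Values are placeholders.
import csv
import hashlib

original_results = {"peak_1": 1523.4, "peak_2": 12.7}  # values from the source system (assumed)

# Export to CSV (the "common generic human-readable format").
with open("exported_results.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["peak", "area"])
    for name, area in original_results.items():
        writer.writerow([name, area])

# Verify: re-read the export and compare against the source values.
with open("exported_results.csv", newline="") as fh:
    exported = {row["peak"]: float(row["area"]) for row in csv.DictReader(fh)}
assert exported == original_results, "Exported values do not match the originals"

# Record a checksum of the file; any later edit changes the digest.
with open("exported_results.csv", "rb") as fh:
    digest = hashlib.sha256(fh.read()).hexdigest()
print("SHA-256 of export:", digest)
```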
LCGC: What is the future of data handling solutions for chromatographers?
Andrew Anderson: We anticipate that increased standardization in method ontologies, IT system interoperability, and integrated decision support interfaces will provide chromatographers with much-needed productivity enhancements.
John Sadler: Further automation of the analytical tasks. Automated data review and analysis will continue to reduce the need for manual data interaction. The systems will continue to improve their ability to recognize peaks and patterns. The interfaces will alert the analyst to review a specific compound and provide guidance when necessary. Ultimately, this will not only reduce the time to results, but will also improve confidence that test results are accurate.
Heather Longden: The future of data handling is making the data review process complete, guided, and documented to prevent errors and omissions. Review processes should be streamlined to increase efficiency and focus on the major areas of concern. Leveraging “across laboratory” analytics to understand the overall quality of the data generation cycle should be designed into this review process.
LCGC: What one recent development in Big Data is most important for chromatographers from a practical perspective?
Andrew Anderson: We anticipate that integrating Big Data repositories with machine learning or deep learning systems will give chromatographers insights into retention or separation phenomena. We believe that this may afford a reduction in the number of physical experiments required for design space mapping and, correspondingly, method robustness validation experiments.
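As a toy illustration of this idea, the sketch below fits a simple least-squares model that predicts retention time from two assumed method factors and then predicts an untried condition instead of running it physically. The data are synthetic, and a real application would use far richer descriptors and a proper machine-learning framework.

```python
# Toy retention-prediction sketch: fit a linear model to a handful of
# synthetic experiments, then predict an untried condition.
import numpy as np

# Columns: mobile-phase pH, % organic modifier (assumed factors).
X = np.array([
    [2.5, 20.0],
    [2.5, 40.0],
    [3.5, 20.0],
    [3.5, 40.0],
    [4.5, 30.0],
])
retention_min = np.array([12.1, 7.8, 10.5, 6.9, 8.4])  # synthetic observations

# Ordinary least-squares fit with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, retention_min, rcond=None)

# Predict retention for an untried condition (intercept, pH 3.0, 25% organic).
candidate = np.array([1.0, 3.0, 25.0])
print(f"Predicted retention time: {candidate @ coef:.2f} min")
```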
John Sadler: I believe there are existing aspects of data analytics that are very powerful, but not commonly used today. Pattern recognition and peak deconvolution are two that come to mind. The use of data fusion may enable deeper insight by combining multiple chromatographic techniques with mass spectrometry and spectroscopy. However, the chromatography industry has been slow to adopt new data science-based solutions.
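A minimal sketch of one of the techniques mentioned, peak deconvolution, is shown below: two overlapping synthetic Gaussian peaks are separated by least-squares fitting with SciPy so each component's area can be estimated. Real chromatograms would need baseline handling and more realistic peak models; this is only a simplified illustration.

```python
# Hedged sketch of peak deconvolution: separate two overlapping Gaussian
# peaks by least-squares fitting and report each component's center and area.
# The signal is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian peaks with amplitudes a, centers c, widths w."""
    return (a1 * np.exp(-((t - c1) ** 2) / (2 * w1 ** 2))
            + a2 * np.exp(-((t - c2) ** 2) / (2 * w2 ** 2)))

t = np.linspace(0, 10, 500)
# Synthetic co-eluting peaks plus a little noise.
rng = np.random.default_rng(0)
signal = two_gaussians(t, 1.0, 4.8, 0.30, 0.6, 5.4, 0.35) + rng.normal(0, 0.01, t.size)

# Initial guesses for the two components (assumed from visual inspection).
p0 = [1.0, 4.7, 0.3, 0.5, 5.5, 0.3]
params, _ = curve_fit(two_gaussians, t, signal, p0=p0)

a1, c1, w1, a2, c2, w2 = params
print(f"Peak 1: center {c1:.2f} min, area {a1 * w1 * np.sqrt(2 * np.pi):.3f}")
print(f"Peak 2: center {c2:.2f} min, area {a2 * w2 * np.sqrt(2 * np.pi):.3f}")
```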
Heather Longden: The biggest recent development is the focus on results trending across large data sets, especially as it applies to continually monitoring not only “out of specification” but also “out of trend” results. This focus requires re-examining tools such as control charting and other metrics-gathering approaches. As mentioned above, to gain meaningful metrics the data needs to reside in a single application or location. Solutions that are cloud-deployable allow data from multiple chromatographic laboratories to be managed in a single location, whether inside one regulated company or across company borders. In a world where the global supply chain is increasingly fragmented, gathering the data from contract research organizations (CROs), contract manufacturing organizations (CMOs), and contract testing organizations (CTOs) into one data pool is essential before trends can be observed.
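A simple sketch of the control-charting idea follows: limits are derived from pooled historical batch results (mean ± 3σ), and new results are flagged as out of trend when they fall outside those limits even though they may remain within specification. All values are invented for illustration.

```python
# Control-charting sketch: derive limits from historical results, then flag
# new batches that are out of trend even when they remain in specification.
# All numbers below are hypothetical.
import statistics

historical_assay_pct = [99.8, 100.1, 99.9, 100.2, 100.0, 99.7, 100.1, 99.9]
new_results = {"Batch 21": 100.0, "Batch 22": 101.2}  # hypothetical new data
spec_limits = (98.0, 102.0)                           # assumed specification

mean = statistics.mean(historical_assay_pct)
sigma = statistics.stdev(historical_assay_pct)
lcl, ucl = mean - 3 * sigma, mean + 3 * sigma

print(f"Control limits: {lcl:.2f}% to {ucl:.2f}% (mean {mean:.2f}%)")
for batch, value in new_results.items():
    in_spec = spec_limits[0] <= value <= spec_limits[1]
    in_trend = lcl <= value <= ucl
    print(f"{batch}: {value:.1f}%  in spec: {in_spec}  in trend: {in_trend}")
```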
LCGC: What obstacles do you think stand in the way of chromatographers adopting new data solutions?
Andrew Anderson: Across a variety of industries, chromatographic methods serve a fundamental purpose in ensuring product quality. With this purpose in mind, we must recognize that overall quality assurance and regulatory compliance come with a significant documentation and validation effort. While new technological advances will create productivity and innovation opportunities, we must be mindful to also provide documentation and validation capabilities to ensure efficient implementation. The main obstacle to adopting these new data solutions, which will reduce data transcription and reporting efforts for separation scientists while providing the scientific tools essential to method development, will likely be the ease of integration into the current informatics environment. A typical separations laboratory includes a variety of instruments from different vendors with disparate software, on top of all the other informatics systems that support R&D. Integrating these systems into a seamless workflow will be a challenge many organizations will need to overcome.
John Sadler: Unlike spectroscopists, who have embraced mathematical data transformation for decades, chromatographers have been reluctant to broadly adopt these techniques. To unlock the power of new technology, chromatographers are going to need to change their mindset and embrace these advances.
Heather Longden: In regulated companies, the challenge of any change to registered methodologies is one significant obstacle to adopting new data solutions. Not only does this often require validation overhead, it also necessitates completely reworking standard operating procedures (SOPs) and retraining both users and reviewers in the new process. If software applications already in use can be adopted for wider, more globally harmonized deployment (potentially even in business-partner environments), then the validation, SOP, and training burdens can be minimized.
Andrew Anderson is the Vice President of Innovation and Informatics Strategy at ACD/Labs.
John Sadler is the VP/GM, Software & Informatics Division at Agilent Technologies.
Heather Longden is the Senior Marketing Manager, Informatics and Regulatory Compliance, at Waters Corporation.