We live in a world with ever-increasing automation, from the simple (hello, automatic coffee maker!) to the complex (I think I’ll just let the car do the driving...or at least the parking). Pervasive, almost-intelligent machines have seemingly always been part of the analytical chemist’s world. Are we ready to let programs and instruments take over the art of separations?
When I joined Bristol Myers Squibb (BMS), I was introduced to some very cool software programs that aid liquid chromatography (LC) method development, mainly in the achiral reversed-phase LC domain: Molnar Institute’s DryLab, S-Matrix’s Fusion, and ACD’s LC Simulator, to name a few. With only a couple of empirical LC runs, these modeling programs could map out predicted separations in silico across a wide design space. With a click of a button, we could potentially let the program “choose the optimum” separation, which we would then quickly discard. Why? Mainly because the programs at the time did not know what we humans really wanted. For example, it was difficult to specify needs such as “Only make gradient changes in this region,” “Allow a step gradient here,” or “Keep the retention factors between 1 and 20,” all while maintaining baseline resolution in an under-5-min runtime...with re-equilibration. Usually, we were faster at defining these boundary conditions in our heads, mapping them into the program’s gradient, and seeing what popped out visually, followed by empirical verification. These programs are evolving significantly and rapidly, however: better optimal-design options; more dimensionality, growing from one variable (for example, gradient) to two (such as added temperature) to three (for example, added pH); more separation modes, so that non-reversed-phase separations can be modeled with the same facility as reversed-phase LC; and back-end robustness analysis, sometimes with pre-templated reports. Regardless, the human has traditionally been the conductor of the in silico express train: picking the input packages, keeping them on the rails, and deciding on the final destination. Sometimes wholly iterative method development will arrive at the same place as the modeling, albeit at a slower pace.
However, sometimes the modeling program can reveal better operating spaces, while your iterative work finds only a locally ideal space that is not globally optimal.
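The boundary conditions described above (retention factors between 1 and 20, baseline resolution, an under-5-min runtime with re-equilibration) amount to an explicit filter over the modeled design space. As a minimal sketch, with hypothetical data structures rather than any vendor’s API, such constraints could be expressed directly in code:

```python
# Hypothetical sketch: encoding human-defined boundary conditions as a filter
# over in silico predicted separations. PredictedRun and its fields are
# illustrative assumptions, not any modeling program's actual interface.

from dataclasses import dataclass, field

@dataclass
class PredictedRun:
    runtime_min: float            # total runtime, including re-equilibration
    min_resolution: float         # worst-case critical-pair resolution
    retention_factors: list = field(default_factory=list)  # predicted k per analyte

def meets_constraints(run: PredictedRun) -> bool:
    """Keep only conditions satisfying the scientist's boundary conditions."""
    return (
        run.runtime_min < 5.0                              # under 5 min
        and run.min_resolution >= 1.5                      # baseline resolution
        and all(1.0 <= k <= 20.0 for k in run.retention_factors)
    )

candidates = [
    PredictedRun(4.5, 1.8, [1.2, 3.4, 7.9]),   # passes all constraints
    PredictedRun(4.0, 1.1, [1.5, 2.2, 9.0]),   # fails: resolution too low
    PredictedRun(6.0, 2.0, [1.1, 4.0, 12.0]),  # fails: runtime too long
]
kept = [r for r in candidates if meets_constraints(r)]
print(len(kept))  # → 1
```

The point is not the code itself but that each constraint is a simple, checkable predicate; the difficulty the column describes is getting commercial software to accept such predicates as first-class inputs.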
At BMS, we have a comprehensive reversed-phase LC system capable of automatically running well over 100 conditions. In these cases, automation results in single-button data acquisition with an overnight run. Although processing can be automated to some degree, at this stage data interpretation can be the primary bottleneck. Some efforts have been made in current commercial chromatography data systems (frequently as an add-on purchase) to facilitate this evaluation, but it is still up to the human to decide what “optimal” actually means. Do we prioritize peak count? Efficiency? Resolution? Tailing factors? Does the answer change if I tell you the sample has 20 components, but only 10 are relevant? The scientist is accountable for picking the right samples to inject, determining the relevance of the results, and usually defining the final separation conditions. Also, without human engagement with the data, potentially critical situations may be missed, such as a clean sample turning up a new unknown during screening (those pesky peak shoulders!). My fellow scientists and I are extremely grateful that we don’t have to serially churn through setting up all these conditions, giving us the time back to delve deeper into the data or work on other, more rewarding activities.
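To make the “what does optimal mean” question concrete, one could imagine reducing it to a weighted score computed only over the peaks the scientist has flagged as relevant. The sketch below is purely illustrative (the function, field names, and weights are assumptions, not part of any chromatography data system):

```python
# Hypothetical sketch: scoring a screening condition by weighted criteria,
# considering only the subset of peaks the scientist deems relevant.

def score_condition(peaks, relevant_ids, w_res=0.6, w_tail=0.4):
    """peaks: list of dicts with 'id', 'resolution' (to nearest neighbor),
    and 'tailing' factor. Returns 0.0 if any relevant peak is missing."""
    relevant = [p for p in peaks if p["id"] in relevant_ids]
    if len(relevant) < len(relevant_ids):
        return 0.0  # a relevant peak was lost: condition disqualified
    worst_res = min(p["resolution"] for p in relevant)
    worst_tail = max(p["tailing"] for p in relevant)
    # reward worst-case resolution; penalize tailing beyond the ideal of 1.0
    return w_res * worst_res - w_tail * max(0.0, worst_tail - 1.0)

peaks = [
    {"id": "A", "resolution": 2.0, "tailing": 1.1},
    {"id": "B", "resolution": 1.6, "tailing": 1.4},
    {"id": "X", "resolution": 0.8, "tailing": 2.0},  # irrelevant impurity peak
]
score = score_condition(peaks, relevant_ids={"A", "B"})
print(round(score, 2))  # → 0.8
```

Even this toy version shows why the human stays in the loop: the choice of weights, of which peaks count as relevant, and of what disqualifies a condition are all judgment calls the software cannot make on its own.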
A more recent LC method automation system available today is feedback-optimized screening, such as ACD’s Autochrom. In this implementation, we let the software choose the best screening condition, but also allow it to perform its own final optimization on that condition through an automated modeling exercise. Will this perfect marriage of screening and modeling result in bench scientists hanging up their lab coats? I hope the previous paragraphs have already convinced you otherwise. Beyond those arguments, there are simply too many separation problems that require serious intervention beyond what current systems can handle. If my automated system relies on mass spectrometry to track peaks, what will it do when I need low-UV detection? What if the optimal condition is missed because of chelation effects that I could mitigate with a mobile-phase additive? Buffer concentration? Ionic strength? Alternative ion pairs? Uncommon columns? On-column degradation? New technology? Lest we forget, the human is also in the driver’s seat for proper sample preparation (it’s more than just a separation!). We are both blessed and cursed to have so many selectivity levers in LC separations. There’s still plenty of opportunity for artistry by those who study the field. Let automation give you the freedom to explore the many less-common ways of doing your separations. You may just find the next big thing.