Readers’ questions regarding problems related to internal standard calibration of liquid chromatography methods are addressed.
I have recently had several email inquiries from readers regarding calibration issues for liquid chromatography (LC) methods. I try to reply directly to the queries as quickly as possible so that the readers sending the questions can get back up and running, and I collect the questions until there are enough on a particular topic to share in LCGC. Last month (1) we looked at one of these inquiries in a case study format. For this month’s “LC Troubleshooting” discussion, I will look at two additional questions that centre on issues relating to calibration using the internal standard technique.
Over-Curve Samples with Internal Standardization
The first question comes from a reader whose samples occasionally occur at concentrations that exceed the range of the calibration curve. The question concerns how to dilute these samples so that they can be analyzed at concentrations within the calibration range. The problem is complicated by the fact that the method uses the internal standard method for calibration.
External Versus Internal Standardization: Before we get into the problem itself, let’s review the difference between external standardization and internal standardization. With external standardization, calibration samples (calibrators) are made at concentrations covering the desired calibration range, such as 1, 2, 5, 10, 20, 50, 100, 500, and 1000 ng/mL. A fixed volume (for example, 10 µL) is injected for each sample and the response (usually peak area) is recorded. For calibration, a plot of X = concentration versus Y = area is made, and the equation for the regression line is used to determine the concentration of unknown samples. The same volume of the unknown sample is injected and the area response is used to determine the concentration of the injected sample. Corrections are then made for any concentration changes during sample preparation, and the final sample concentration is reported.
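To make the arithmetic concrete, here is a minimal sketch of external-standard calibration in Python; the concentrations, areas, and assumed linear response are illustrative placeholders, not data from a real method:

```python
import numpy as np

# Calibrator concentrations (ng/mL) and placeholder peak areas; use real data in practice
conc = np.array([1, 2, 5, 10, 20, 50, 100, 500, 1000], dtype=float)
area = 150.0 * conc + 20.0  # assumed linear detector response, for illustration only

# Fit the regression line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Back-calculate the concentration of an unknown from its measured peak area
unknown_area = 75_000.0
unknown_conc = (unknown_area - intercept) / slope
print(f"unknown ~= {unknown_conc:.1f} ng/mL")
```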
Internal standardization follows a process that is similar to external standardization, except another compound, the internal standard (IS), is added at the same concentration in every sample and calibration standard early in the sample preparation process. For example, a 100-µL aliquot of sample might be mixed with 10 µL of IS, then processed. The calibration curve is constructed as a plot of X = ratio of the concentration of analyte to concentration of internal standard versus Y = ratio of area of analyte to IS. For application, the ratio of analyte to IS area is determined for unknowns and the equation for the regression line then allows determination of sample concentration. The IS method is especially useful when the sample preparation process has many steps or it is likely that volumetric losses might take place. This commonly occurs with biological samples, such as plasma, where sample preparation may involve several transfer steps, evaporation to dryness, reconstitution in a new solvent, and so forth. Any physical losses in sample are compensated for by tracking the ratio of the analyte to IS, rather than the absolute area of the analyte, because the analyte–IS ratio should stay the same, even with sample volume changes.
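The same idea can be sketched in code with both axes expressed as analyte-to-IS ratios; again, all numbers are hypothetical and a simple unweighted linear fit is assumed:

```python
import numpy as np

# Hypothetical calibrators: analyte concentration varies, IS concentration is fixed
analyte_conc = np.array([1.0, 10.0, 100.0, 1000.0])   # ng/mL
is_conc = 50.0                                        # ng/mL, same in every sample
analyte_area = 200.0 * analyte_conc                   # placeholder responses
is_area = np.full_like(analyte_conc, 10_000.0)

# Calibration plot: X = concentration ratio, Y = area ratio
x = analyte_conc / is_conc
y = analyte_area / is_area
slope, intercept = np.polyfit(x, y, 1)

# For an unknown, form the area ratio and invert the regression line;
# the ratio, not the absolute area, absorbs any volumetric losses
unk_ratio = 3_500.0 / 9_800.0   # analyte area / IS area
unk_conc = ((unk_ratio - intercept) / slope) * is_conc
print(f"unknown ~= {unk_conc:.1f} ng/mL")
```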
Over-Curve Samples: Now, what happens if the analyte concentration is greater than the upper end of the calibration curve (we’ll refer to these as “over-curve” samples)? It is not a good analytical practice to extrapolate the calibration curve above the upper end or below the lower end of the actual calibrator concentrations. This is because it may not be a valid conclusion that the analyte response is the same outside the test range as inside it. For example, many detectors will show a nonlinear drop-off in response at high concentrations; at low concentrations, nonlinear changes in response may be seen because of absorptive losses or other factors.
Unfortunately, such changes in response are seldom predictable, so the risk of reporting a faulty analysis result is large if the concentration lies outside the calibration range.
For external standardization, the solution to over-curve samples is quite simple - just dilute the sample until its concentration is within the calibration range. For example, let’s consider a calibration curve that covers 1–1000 ng/mL, as in the example above, and a sample that is estimated to contain ~1500 ng/mL based on an extrapolated curve. We could dilute this sample by a factor of two with injection solvent and reinject the sample. If it now assays as 725 ng/mL, we would correct for the dilution factor and report 1450 ng/mL as the sample concentration. Of course, this could not be done without proper supporting evidence that dilution was an acceptable solution to such problems. You could show this during the validation process by preparing test samples at known over-curve concentrations, then diluting them after sample preparation and showing that you obtained the appropriate results. This information would be included in the validation report and the method would be written so that over-curve samples could be diluted by twofold (or 10-fold, or whatever had been demonstrated) to obtain accurate results.
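The dilution correction itself is simple multiplication, as this small sketch shows for the numbers above:

```python
def corrected_concentration(assayed_conc: float, dilution_factor: float) -> float:
    """Return the reported concentration after a validated dilution."""
    return assayed_conc * dilution_factor

# The ~1500-ng/mL example: diluted twofold, it assays at 725 ng/mL
print(corrected_concentration(725.0, 2.0))  # -> 1450.0 ng/mL
```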
The internal standard method doesn’t deal as simply with over-curve samples. If we use the same example as for the external standardization above, the problem is quickly apparent. Diluting the ~1500-ng/mL sample by a factor of two does not change the analyte to IS ratio at all! That is, when the sample is diluted twofold, both the analyte and IS peaks halve in size, but the ratio doesn’t change and therefore the sample would still be over-curve. This is the reason the IS was added in the first place - to compensate for unintended or uncontrolled sample loss or dilution - and now it seems to defeat us. We’ll have to try a different approach.
One simple way to handle the over-curve problem with the IS method is to dilute the sample before adding the IS. For example, if an over-curve sample is found or is anticipated, the sample could be diluted twofold with blank matrix before adding IS. Alternatively, twice the concentration of IS could be added to the undiluted sample. Either technique would effectively dilute the analyte-to-IS ratio twofold and bring it back into the calibration range for the ~1500-ng/mL example used here. As with the external standard method, the effectiveness of sample dilution would have to be demonstrated as part of the validation process, and the analytical method would need to be written to allow this procedure. For example, when our laboratory was analyzing plasma samples using internal standardization, for validation we would often prepare plasma spiked at five and 10 times the concentration of approximately 80% of the upper point on the calibration curve. These samples were frozen to mimic normal sample handling, then thawed and diluted five- or 10-fold with blank plasma and treated as normal samples. If the diluted samples then gave acceptable assay results (following correction for dilution), we had demonstrated that dilution was a valid way to analyze such over-curve samples. Our methods were then written so that any dilution up to 10-fold was permissible for over-curve samples.
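A few lines of arithmetic make the difference between the two dilution strategies clear; the peak areas below are invented for illustration:

```python
analyte_area = 15_000.0   # over-curve sample (invented areas)
is_area = 10_000.0

# Diluting the finished sample twofold halves both peaks; the ratio is unchanged
print((analyte_area / 2) / (is_area / 2))   # 1.5 -- still over-curve

# Diluting twofold with blank matrix BEFORE adding IS halves only the analyte peak
print((analyte_area / 2) / is_area)         # 0.75 -- back within the curve

# Adding twice the usual IS concentration to the undiluted sample is equivalent
print(analyte_area / (2 * is_area))         # 0.75 -- same effect
```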
Documentation is Vital: Whenever you devise a method for diluting over-curve samples, whether external or internal standardization is used, you need to be careful to document it properly. At least three steps need to be documented:
- the validation experiments demonstrating that diluted samples give accurate results after correction for the dilution factor;
- the analytical method itself, written to state which dilutions (for example, up to 10-fold) are permitted for over-curve samples; and
- the dilution actually applied to each sample, recorded so that the reported concentration can be corrected appropriately.
One last item needs to be documented, which is how to deal with multiple analyses of the same sample, because usually you don’t want to report a value that is known to be or could potentially be wrong. The acceptable procedure is usually recorded as part of a standard operating procedure (SOP) or as part of the method document. For example, if the initial analysis of the sample gave over-curve results and the sample was diluted 10-fold and reanalyzed, the data table might include two entries. The one for the initial analysis might list “o/c” (over-curve) instead of an assay value, and the reanalysis might include the assayed value with a footnote that the sample was diluted before analysis. This documentation acknowledges that the sample was analyzed twice and that the over-curve result should be ignored.
Practical Application: Finally, how would you implement dilution of IS-calibrated samples on a practical basis? If you are analyzing random-concentration samples and the over-curve problem occurs only occasionally, such as ≤10% of the time, it may be most efficient to analyze all the samples normally. The over-curve samples could then be diluted and reanalyzed with the next batch of samples. If, on the other hand, over-curve samples were common (half the time, for example), it might be worthwhile to prepare every sample both in the normal manner and in the diluted form. Your analytical method would then describe how to handle the results; a possible decision rule is sketched below. For example, if the normally prepared sample is within range, its value would be reported and the diluted version ignored; for an over-curve sample, the dilution-corrected value would be reported instead and the normal preparation ignored. This, of course, would mean that every sample would have to be injected twice, adding time and expense to the analysis process. Other analytical strategies may be more appropriate. Economics, run times, and other factors will help you decide which technique is best for your laboratory.
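Here is one way that reporting rule might be encoded; the upper limit, the dilution factor, and the use of None to flag an over-curve neat result are all assumptions for illustration:

```python
def value_to_report(neat_conc, diluted_conc, upper_limit=1000.0, dilution_factor=10.0):
    """Report the neat result if in range; otherwise the dilution-corrected one.

    neat_conc is None when the neat injection was over-curve ("o/c").
    """
    if neat_conc is not None and neat_conc <= upper_limit:
        return neat_conc                       # neat result in range: report it
    return diluted_conc * dilution_factor      # otherwise report corrected diluted value

print(value_to_report(640.0, 63.5))   # in-range neat sample -> 640.0
print(value_to_report(None, 150.0))   # over-curve neat result -> 1500.0
```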
Other alternatives to avoid the problem of over-curve samples may be possible. If the detector is sufficiently sensitive at low concentrations, it may be better to dilute all samples and, if necessary, extend the lower end of the calibration curve so that over-curve samples are effectively eliminated. Or, if the detector has acceptable response characteristics above the range of the calibration curve, extend the calibration curve to sufficiently high concentrations so that over-curve samples are rarely, if ever, encountered.
I had a related question from a reader who is analyzing drugs in urine or cerebrospinal fluid (CSF) using an internal standardization scheme. The problem with the target drugs is that they stick to container walls, pipette tips, and other surfaces. To counteract adsorption, surfactants and other additives are mixed into study samples, calibration standards, and control samples. The question is how to report the concentration of drug in the original samples.
This problem would be fairly simple if every sample had the same volume, but I suspect that is not the case. CSF is probably collected with a syringe, and an arbitrary volume of urine is obtained, so we cannot assume a known volume. Because the sample arrives at the laboratory already in a container, we must assume that adsorption on the container walls has already occurred. This means that we need to determine the volume of the original sample, because the adsorptive loss, and thus the amount of analyte desorbed by the additives, will be related to the sample volume.
There are at least three ways to determine the sample volume, and none is perfect. One could use a graduated pipette to withdraw all the sample from the sample tube and measure it in the pipette; this would work, but the pipette would then need to be rinsed with surfactant to release any adsorbed sample. Another option would be to pour the sample from the original container into a calibrated tube for further sample pretreatment, which would add the cost of a calibrated tube to each sample. Still another option would be to pour the sample into a tared tube and determine the volume by weight, or to pour it into an empty tube and weigh the original container before and after pouring to determine the volume by difference.
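For the gravimetric option, the calculation is a simple mass-to-volume conversion; the density used here is an assumption (urine is typically about 1.00–1.03 g/mL) and would need to be established for the matrix at hand:

```python
def volume_from_mass(gross_mass_g: float, tare_mass_g: float,
                     density_g_per_mL: float = 1.02) -> float:
    """Estimate sample volume (mL) from the net weight of a tared tube."""
    return (gross_mass_g - tare_mass_g) / density_g_per_mL

# A 5.15-g net weight corresponds to roughly 5.05 mL at the assumed density
print(f"{volume_from_mass(18.75, 13.60):.2f} mL")
```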
After the sample volume had been determined, the surfactant-additive mixture could be added. I would favour spiking this mixture with the internal standard, then adding an aliquot in proportion to the measured sample volume. The walls of the original container (as well as any pipettes or other surfaces that contacted the sample) would need to be rinsed with this IS mixture, and the rinse transferred to the holding tube, to remove any adsorbed analyte and mix it with the sample. This step would surely add uncertainty to the measurements because of the approximation of the original volume. With bioanalytical samples such as these, however, the analysis guidelines from the United States Food and Drug Administration (2) allow uncertainty (%RSD) in precision and accuracy of ±15% at all levels above the lower limit of quantification (LLOQ) and ±20% at the LLOQ. It is unlikely that these additional transfer and measurement steps would degrade the precision and accuracy beyond these limits if the rest of the method is performing well. Study samples, calibration standards, and control samples would all have to be treated in the same manner, and the IS calibration technique should allow acceptable quantification of drug concentrations. Of course, the success of this procedure could not be assumed; it would require validation through demonstration of acceptable recoveries from blank matrix spiked with known concentrations of drug.
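A proportional IS spike might be calculated as below; the 10-µL-per-100-µL ratio is invented for illustration and is not from the reader’s method:

```python
SPIKE_RATIO = 10.0 / 100.0   # µL of IS-surfactant mixture per µL of sample (assumed)

def is_aliquot_uL(sample_volume_uL: float) -> float:
    """Volume of IS-surfactant mixture giving a constant IS concentration."""
    return SPIKE_RATIO * sample_volume_uL

for v in (250.0, 1000.0, 5050.0):
    print(f"{v:.0f} µL sample -> add {is_aliquot_uL(v):.0f} µL IS mixture")
```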
Conclusions
Internal standardization is the most common standardization technique in some laboratories, such as those analyzing drugs in biological matrices. For analyses that require little sample manipulation, such as simple dissolution or dilution of samples before injection, external standardization is often preferred. Most of us tend to work in a laboratory where one or the other standardization technique is used almost exclusively. If this is your situation, be especially careful if you switch standardization techniques, because practices that aren’t problematic with one technique may create problems when the other calibration scheme is used.
References
1. J.W. Dolan, LCGC Europe 28(5), 278–281 (2015).
2. United States Food and Drug Administration, Guidance for Industry: Bioanalytical Method Validation (FDA, Rockville, Maryland, USA, 2001).
John Dolan is vice president of LC Resources, Walnut Creek, California, USA. He is also a member of LCGC Europe’s editorial advisory board. Direct correspondence about this column should go to: “LC Troubleshooting”, LCGC Europe, Honeycomb West, Chester Business Park, Wrexham Road, Chester, CH4 9QH, UK, or e-mail the editor-in-chief, Alasdair Matheson, at amatheson@advanstar.com