The concept of the limit of detection (LOD) has been, and still is, one of the most controversial in analytical chemistry. The multiple definitions and calculation methods proposed have contributed to this situation. Although in recent years several international organizations, such as ISO and IUPAC, have tried to reach a consensus on its definition and have issued guidelines for the estimation of this important parameter in chemical analysis, the subject is still a matter of scientific debate. In this article, we try to clarify the definition and provide guidelines for estimating the LOD in chromatographic methods of analysis.
The limit of detection (LOD) is usually defined as the lowest quantity or concentration of a component that can be reliably detected with a given analytical method. Intuitively, the LOD would be the lowest concentration obtained from the measurement of a sample (containing the component) that we are able to discriminate from the concentration obtained from the measurement of a blank sample (a sample not containing the component). The first problem appears with the word reliably, because it implies the need for statistics. We will try to explain this in what follows.
Figure 1
Let us imagine a given analytical procedure and mentally place ourselves in the concentration domain. Suppose the analytical procedure has a known precision at the different concentration levels and that the results it provides follow a normal distribution.
If we analyse many blank samples, we would obtain a distribution of values resembling the one shown in Figure 1.
The concentration values (in the absence of bias in the procedure) would be distributed around zero with a given standard deviation, σ0. This means that, as a result of the experimental errors of the procedure (captured in σ0), the measurement of a blank could yield a non-zero concentration. As those responsible for the results provided by the laboratory, we would like to limit the distribution at some point. This point is the critical level, LC, and it allows us, once a sample has been measured, to decide whether the component is present or not. If the concentration obtained is higher than LC, it probably does not correspond to a blank and we could state that the component is present in the sample. However, by limiting the distribution at LC we run a risk: there is a certain probability that the analysis of a blank sample gives a concentration value higher than LC. In this case we would falsely conclude that the component is present. This probability, α, is called the type I error or, more commonly, the probability of committing a false positive.
Choosing the value of α is our decision, depending on the risk of being wrong we are willing to accept. We could, for example, fix LC at a concentration level of zero. The risk of committing a false positive would then be 50% (any concentration value above zero found in a sample would be taken as a positive detection). Defining LC in such a way that the risk is limited to, for instance, 5% (α = 0.05) seems a more logical decision in most situations.
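As an illustration, the following short simulation sketch (in Python; the σ0 of 1 and the number of simulated blanks are assumed values, not part of the procedures discussed here) shows how the choice of LC controls the false-positive rate:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
sigma0 = 1.0                                # assumed SD of blank results (arbitrary units)
blanks = rng.normal(0.0, sigma0, 100_000)   # simulated blank concentrations

# Fixing L_C at zero: any value above zero is declared "detected" -> ~50% false positives
print((blanks > 0.0).mean())                # ~0.50

# Fixing L_C at 1.645*sigma0 (alpha = 0.05) -> ~5% false positives
print((blanks > 1.645 * sigma0).mean())     # ~0.05
```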
However, can we take the value of LC in Figure 1 as the LOD of our analytical procedure? Suppose this is the case and that LC has a concentration value of 2 ppb. In doing so, the laboratory assumes that it is able to detect components in samples at a concentration level equal to or greater than 2 ppb. Imagine the laboratory receives a batch of many samples, each of them containing 2 ppb of the component. If the laboratory analyses these samples, it would probably obtain (in the absence of bias in the procedure) a distribution of values like the one shown in Figure 2 (in red).
Figure 2
The concentrations are approximately distributed around 2 ppb (the critical level). What is the consequence? In about half of the cases the laboratory would declare that the component is not present in the sample (i.e., it is not detected), because the concentrations found are below the critical level, LC. This is contradictory: the laboratory stated that it is able to detect at a level of 2 ppb but, at this very concentration, it decides incorrectly in half of the cases. There is thus a risk of falsely concluding that the component is not detected when in fact it is present. This probability, β, is called the type II error or, more commonly, the probability of committing a false negative.
It is again the decision of the laboratory or the analyst to choose an acceptable value of β. Can the laboratory afford a 50% error rate? If not (which is the case in practice), the only alternative for reducing the risk of false negatives is to increase the LOD, as shown in Figure 3.
By increasing the LOD, the laboratory protects itself against false negatives. For an LOD of 4 ppb, if the laboratory analyses samples containing 4 ppb of the component, it clearly runs a much lower risk, β, of stating that the analyte is not detected when in fact it is present.
Figure 3
The International Organization for Standardization (ISO)1 defines the LOD as the true net concentration (or quantity) of a component in the material subject to analysis that will lead, with a probability (1−β), to the conclusion that the concentration (or quantity) of the component in the material analysed is greater than that of a blank sample. The International Union of Pure and Applied Chemistry (IUPAC),2 in an earlier document, provided a similar definition and adopted the term "minimum detectable (true) value" as the equivalent of the LOD.
Including the probability of false negatives in the definition of the LOD leads to a performance characteristic that informs the analyst of the (minimum) analyte level the method is capable of detecting with (at least) a probability of (1−β). It is a parameter, defined a priori, that can be used to select a method or to optimize a method that is already in use. As soon as the method is being used (as intended), it no longer plays a role (or should no longer play a role) in the detection decision,3 which is taken once the result of the measurement is known, that is, a posteriori.
Figure 4
The decision whether a given component is present or not in a sample is based on the comparison of the predicted concentration with the critical level, LC, defined as:

$$L_C = z_{1-\alpha}\,\sigma_0 \qquad \text{(Equation 1)}$$
where z1–α is the value of the one-sided standardized normal distribution at a given significance level, α, and σ0 is the standard deviation (SD) of the net concentration when the component is not present in the sample. LC fixes a decision limit above which a predicted concentration is considered to be due to the presence of the component. In this way, we run a risk α of committing a false positive. However, if we want to keep the risk β of committing a false negative small, the LOD of the method, LD, has to be larger:

$$L_D = L_C + z_{1-\beta}\,\sigma_D \qquad \text{(Equation 2)}$$
where z1–β is the value of the one-sided standardized normal distribution at a given significance level, β, and σD is the standard deviation of the net concentration when the component is present in the sample at the level of the LOD. It is assumed in Equations 1 and 2 that the predicted concentrations follow a normal (Gaussian) distribution with known variances. If we take α = β = 0.05 (so that z1–α = z1–β = 1.645) and assume that the SD is constant between c = 0 and c = LD (i.e., σ0 = σD), Equation 2 can be rewritten as:

$$L_D = 2\,z_{1-\alpha}\,\sigma_0 = 3.29\,\sigma_0 \qquad \text{(Equation 3)}$$
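As a minimal sketch of Equations 1 to 3 (in Python, using scipy for the normal quantiles; the value of σ0 is assumed):

```python
from scipy.stats import norm

alpha = beta = 0.05
sigma0 = 1.2                              # assumed SD of the blank, e.g., in ppb

L_C = norm.ppf(1 - alpha) * sigma0        # Equation 1, z_{1-alpha} ~= 1.645
L_D = L_C + norm.ppf(1 - beta) * sigma0   # Equation 2, with sigma_D = sigma_0
print(L_C, L_D)                           # L_D ~= 3.29*sigma0, as in Equation 3
```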
If σ0 and σD are unknown and have to be estimated from replicate measurements, they must be replaced by the corresponding estimates, s0 and sD. Likewise, the z-values, based on the standardized normal distribution, have to be replaced by the values of Student's t-distribution with ν degrees of freedom. Taking α = β, the expressions for LC and LD (assuming constant standard deviations) become:

$$L_C = t_{1-\alpha,\nu}\,s_0 \qquad \text{(Equation 4)}$$

$$L_D = 2\,t_{1-\alpha,\nu}\,s_0 \qquad \text{(Equation 5)}$$
The equations are quite simple. The main difficulty comes from the estimation of σ0. A practical procedure for estimating the critical level and the LOD is the following:
1. Take a test sample (ideally real but could also be artificially composed), where the concentration of the component is low (close to the expected detection limit).
2. Analyse a minimum of 10 portions following the complete analytical procedure. The precision conditions (repeatability or intermediate precision) have to be specified.
3. Convert the responses to concentrations by subtracting the blank signal and dividing by the slope of the analytical calibration curve.
4. Calculate from the data the SD in concentration units. Remember that the critical level and LOD are defined in terms of concentration.
5. Compute the critical level and the LOD using Equations 4 and 5, as illustrated in the sketch after this list. Alternatively, if a statistically sufficient number of replicates has been performed, calculate the critical level and the detection limit as LC = 1.64σ0 and LD = 3.3σ0, respectively.
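The following sketch works through steps 4 and 5 in Python; the replicate concentrations are hypothetical and assumed to be already blank-corrected and converted with the calibration slope (step 3):

```python
import numpy as np
from scipy.stats import t

# Hypothetical net concentrations (ppb) of 10 replicates of a low-level test sample
conc = np.array([1.8, 2.3, 1.6, 2.1, 2.6, 1.9, 2.2, 1.7, 2.4, 2.0])

alpha = 0.05
s0 = conc.std(ddof=1)                        # estimate of sigma_0 (step 4)
t_val = t.ppf(1 - alpha, df=conc.size - 1)   # one-sided t-value, nu = n - 1

L_C = t_val * s0                             # Equation 4
L_D = 2 * t_val * s0                         # Equation 5 (alpha = beta, constant SD)
print(f"s0 = {s0:.2f} ppb, L_C = {L_C:.2f} ppb, L_D = {L_D:.2f} ppb")
```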
There is also a common practice in chromatographic analysis, which consists of calculating the LOD as the concentration of the component that provides a signal-to-noise ratio (S/N) of three. Working at the minimum attenuation of the chromatographic signal, standard solutions (ideally spiked samples) of decreasing concentration are measured until a peak is found whose height is three times the maximum height of the baseline noise (measured at both sides of the chromatographic peak). The concentration corresponding to that peak is taken as the LOD.
The procedure recommended by the SFSTP4 consists of determining the maximum amplitude of the baseline noise in a time interval equivalent to 20 times the width at half height of the peak of the component. The LOD is expressed as:

$$\mathrm{LOD} = 3\,h_{\mathrm{noise}}\,R$$
where hnoise is half of the maximum amplitude of the noise and R is the response factor (concentration/peak height). This procedure is only valid when chromatographic peak heights are used for quantification. If peak areas are used instead, the LOD has to be estimated by other means, such as those described above.
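A minimal sketch of this calculation (in Python; the noise amplitude and response factor are hypothetical values):

```python
# Hypothetical values for an SFSTP-style estimate with peak-height quantification
max_noise_amplitude = 0.4          # maximum baseline amplitude over the interval (mAU)
h_noise = max_noise_amplitude / 2  # half of the maximum noise amplitude
R = 5.0                            # response factor: concentration/peak height (ppb/mAU)

LOD = 3 * h_noise * R              # LOD = 3 * h_noise * R
print(f"LOD = {LOD:.1f} ppb")      # 3.0 ppb with these numbers
```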
In pharmaceutical analysis, guidelines and regulatory documents followed more frequently than those of the SFSTP are the ICH documents,5 the United States Pharmacopeia (USP)6 and the European Pharmacopoeia.7 They also use the signal-to-noise ratio (S/N) to estimate the LOD. The ICH documents and the USP describe, as a common approach, one that compares the measured signals of samples with low concentrations of the component with those of blank samples. Other approaches are allowed, but it is stated that, whatever method is used, the detection limit should subsequently be validated by the analysis of a suitable number of samples known to be near, or prepared at, the detection limit.
The European Pharmacopoeia7 defines the signal-to-noise ratio (S/N) as:

$$S/N = \frac{2H}{h}$$
where H is the height of the peak corresponding to the component concerned in the chromatogram obtained with the prescribed (low-concentration) reference solution, measured from the maximum of the peak to the extrapolated baseline of the signal observed over a distance equal to 20 times the width at half height, and h is the range (maximum amplitude) of the background noise obtained after injection of a blank, observed over the above-mentioned interval and situated around the time where the peak would be found. In chromatography, h is often measured in the chromatogram of a sample in a region where no substance peaks occur (Figure 4).
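As a quick numerical sketch of this definition (in Python; the chromatogram values are hypothetical):

```python
# Hypothetical chromatogram values for the European Pharmacopoeia S/N
H = 1.2     # peak height above the extrapolated baseline (mAU)
h = 0.3     # maximum noise amplitude in a blank over 20x the width at half height (mAU)

s_over_n = 2 * H / h            # S/N = 2H/h
print(f"S/N = {s_over_n:.0f}")  # 8 with these values; an S/N of 3 is the usual detection criterion
```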
Finally, it is highly recommended that when reporting the LOD of an analytical method, the approach used to estimate it is specified.
When an analytical method is used for trace analysis or in cases (e.g., food, drugs, doping substances) where legislation requires the absence of certain components, the LOD has to be estimated in a rigorous way (e.g., as described in the ISO or IUPAC documents).1,2 Its calculation has to include the whole analytical process and consider the probabilities α and β of taking erroneous decisions.
1. ISO 11843-1, Capability of Detection. Part 1: Terms and Definitions, ISO, Geneva, Switzerland (1997).
2. IUPAC. Nomenclature in Evaluation of Analytical Methods including Detection and Quantification Capabilities, Pure & Appl. Chem., 67, 1699–1723 (1995).
3. N.M. Faber, Accred. Qual. Assur., 13, 277–278 (2008).
4. Guide de validation analytique: Rapport SFSTP. Méthodologie et exemples, STP Pharma Pratiques, 2(4), 205–226 (1992).
5. ICH Q2(R1). Validation of Analytical Procedures: Text and Methodology. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (2005).
6. United States Pharmacopeia 30, National Formulary 25, General Chapter 1225, Validation of compendial methods, The United States Pharmacopeial Convention, Rockville, Maryland, USA (2007).
7. European Pharmacopoeia 6.2, Council of Europe, Strasbourg, France (2008).
Ricard Boqué is an associate professor at the Universitat Rovira i Virgili, Tarragona, Spain, working on chemometrics and qualimetrics. Yvan Vander Heyden is a professor at the Vrije Universiteit Brussel, Brussels, Belgium, in the Department of Analytical Chemistry and Pharmaceutical Technology, where he heads a research group on chemometrics and separation science.