Why Does Metrological Terminology Matter?
In the world of laboratory analysis, linguistic precision is as important as measurement precision. Misunderstanding basic metrological concepts can lead to misinterpreted results, accreditation problems, or confusion with laboratory clients.
When a laboratory receives a reference certificate with a value "detected below the limit of quantification," every team member must understand exactly what this means and how it differs from "not detected." When an auditor asks about the difference between repeatability and reproducibility, the answer must be unambiguous.
Consequences of Misunderstandings
Interlaboratory studies have repeatedly shown that results for the same sample can differ by several orders of magnitude. The cause is often not technical error but a fundamental misunderstanding of what is actually being measured and how method limits are interpreted.
Verification vs Validation - Similar, but Not the Same
These two terms are often used interchangeably, which leads to confusion. Meanwhile, they represent two fundamentally different processes in the laboratory.
Method Verification
Verification answers the question: "Does this known method work correctly in our laboratory?"
When a laboratory implements a standard method, for example an ISO standard or pharmacopeial method, it is not creating something new. The method has already been developed and tested. Verification consists of confirming that in the specific conditions of a given laboratory - with its equipment, reagents, personnel, and procedures - this method works as expected.
Verification Example
A laboratory implements a standard spectrophotometric method for determining iron in water, described in a standard. It performs a series of tests with certified reference materials and checks whether the results fall within acceptable limits. This is verification - confirming that a known method works for us as it should.
Method Validation
Validation is a much more complex process, answering the question: "Is this new or modified method suitable for its intended purpose?"
Validation is performed when a laboratory develops its own method, modifies an existing one, or applies a known method to a new type of sample. The validation process requires systematic investigation of all parameters characterizing the method and proving that it meets requirements.
Verification:
- Standard method
- Confirmation of performance
- Shorter process
- Fewer parameters

Validation:
- New or modified method
- Full characterization
- Comprehensive process
- All parameters
Precision and Accuracy - The Heart of Measurement Quality
Understanding the difference between precision and accuracy is crucial for interpreting laboratory results. Both concepts describe measurement quality, but from completely different perspectives.
Precision - Repeatability of Results
Precision tells us how consistent the results of repeated measurements of the same sample are. A precise method gives results clustered close together, even if they are all shifted from the true value.
Imagine shooting at a target - precision is the clustering of all shots in a small area. They may hit away from the center, but if they are close to each other, precision is high.
Repeatability and Reproducibility
Precision has two aspects that are often a source of misunderstandings:
Repeatability is the consistency of results obtained under identical conditions: the same analyst, the same equipment, the same day, the same reagents. This is the best precision we can achieve.
Reproducibility is the consistency of results obtained under different conditions: different analysts, different laboratories, different equipment, different times. This is a more realistic picture of what we can expect in practice.
Practical Significance
When a laboratory reports measurement uncertainty, it must include not only repeatability (which is easy to measure), but also factors affecting reproducibility. A result may be very repeatable within one day, but significantly more variable over a longer period.
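As a rough numeric illustration: repeatability and intermediate precision (within-laboratory reproducibility) are both commonly expressed as a relative standard deviation of replicate results. The data below are invented; only the pattern matters - the spread grows as conditions are allowed to vary:

```python
import statistics

# Hypothetical replicate results (mg/L) for the same control sample.
# Within-day: one analyst, one instrument, one day.
within_day = [10.1, 10.0, 10.2, 9.9, 10.1]
# Across weeks: different days and analysts, same sample.
across_days = [10.1, 9.7, 10.4, 10.0, 9.6, 10.5]

# Repeatability: relative standard deviation under identical conditions.
rsd_repeat = statistics.stdev(within_day) / statistics.mean(within_day) * 100
# Intermediate precision: RSD under varied within-lab conditions.
rsd_interm = statistics.stdev(across_days) / statistics.mean(across_days) * 100

print(f"Repeatability RSD: {rsd_repeat:.1f}%")
print(f"Intermediate precision RSD: {rsd_interm:.1f}%")
```

With these invented numbers, the intermediate-precision RSD comes out roughly three times larger than the repeatability RSD - exactly the gap an uncertainty budget must cover.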
Accuracy - Closeness to Truth
Accuracy determines how close a measurement result is to the true value. An accurate method gives results close to the actual content of a substance in the sample.
Returning to the target analogy - accuracy is hitting the center. We can have high accuracy (the average of many shots is in the center), even if individual shots are quite dispersed.
Four Scenarios of Measurement Quality
- High precision and high accuracy: Ideal situation - results clustered and close to the true value
- High precision, low accuracy: Repeatable results, but systematically shifted (systematic error)
- Low precision, high accuracy: Dispersed results, but on average correct
- Low precision and low accuracy: Chaotic and incorrect results - method requires intervention
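These four scenarios can be told apart numerically from two statistics: the relative standard deviation (precision) and the bias against a reference value (accuracy). A minimal sketch with a hypothetical `assess_method` helper and arbitrary 5% acceptance limits - real limits depend on the method:

```python
import statistics

def assess_method(results, true_value, rsd_limit=5.0, bias_limit=5.0):
    """Classify replicate results against illustrative acceptance limits (%)."""
    mean = statistics.mean(results)
    rsd = statistics.stdev(results) / mean * 100       # precision indicator
    bias = (mean - true_value) / true_value * 100      # accuracy indicator
    precise = abs(rsd) <= rsd_limit
    accurate = abs(bias) <= bias_limit
    return {
        (True, True): "high precision, high accuracy",
        (True, False): "high precision, low accuracy (systematic error)",
        (False, True): "low precision, high accuracy",
        (False, False): "low precision, low accuracy",
    }[(precise, accurate)]

# Tightly clustered results, but 10% above the reference value:
print(assess_method([10.9, 11.0, 11.1], true_value=10.0))
```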
Limit of Detection and Quantification - Where Our Vision Ends
Every analytical method has its limitations. Below a certain concentration level, we are not able to detect or measure anything with appropriate certainty. This is where two key concepts come into play.
Limit of Detection (LOD)
Limit of detection is the lowest concentration of a substance that we can still reliably detect - distinguish from background noise - but not necessarily accurately measure.
It's like looking at stars at night - you can see a very faint point of light and say "yes, something is there," but you cannot determine how brightly it shines or how large the star is.
Limit of Quantification (LOQ)
Limit of quantification is the lowest concentration that we can not only detect, but also measure quantitatively with appropriate precision and accuracy.
Continuing the analogy - it's the brightness level at which you can no longer just confirm the presence of a star, but also estimate its brightness and size with reasonable certainty.
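A common way to estimate both limits (used, for example, in ICH Q2 guidance) is from the standard deviation of blank measurements and the calibration slope: LOD ≈ 3.3·σ/S and LOQ ≈ 10·σ/S. A sketch with invented blank signals and an assumed slope:

```python
import statistics

# Hypothetical blank measurements (signal units) and an assumed calibration slope.
blank_signals = [0.010, 0.012, 0.009, 0.011, 0.013, 0.010]
slope = 0.50  # signal units per µg/L

sigma_blank = statistics.stdev(blank_signals)

# Standard-deviation-based estimates of the two limits:
lod = 3.3 * sigma_blank / slope   # limit of detection
loq = 10.0 * sigma_blank / slope  # limit of quantification

print(f"LOD ≈ {lod:.4f} µg/L, LOQ ≈ {loq:.4f} µg/L")
```

In practice these estimates are then verified experimentally, e.g. by analyzing samples spiked near the calculated LOQ and checking precision and accuracy there.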
Practical Implications
The difference between LOD and LOQ is not just theoretical. When a laboratory detects a pesticide at a level between LOD and LOQ, it can inform the client of its presence, but cannot provide a reliable numerical value. This has enormous significance in the context of interpreting results relative to legal limits or quality standards.
Practical Scenario
A method for determining pesticides in drinking water has an LOD of 0.01 µg/L and an LOQ of 0.03 µg/L. The legal limit is 0.1 µg/L.
A result of 0.02 µg/L means the pesticide was detected, but the value cannot be treated as a reliable quantitative result. The report should state "detected below the limit of quantification," not a specific number.
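The reporting rule in this scenario can be captured in a few lines. The thresholds come from the scenario above; the `report` helper is hypothetical:

```python
LOD = 0.01  # µg/L, from the scenario above
LOQ = 0.03  # µg/L

def report(result_ug_per_l):
    """Turn a raw instrument result into the wording the report should use."""
    if result_ug_per_l < LOD:
        return "not detected (< LOD)"
    if result_ug_per_l < LOQ:
        return "detected, below the limit of quantification (< LOQ)"
    return f"{result_ug_per_l} µg/L"

print(report(0.005))  # below LOD: nothing can be claimed
print(report(0.02))   # between LOD and LOQ: presence only, no number
print(report(0.08))   # above LOQ: a quantitative value may be reported
```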
Linearity and Range - The Method's Comfort Zone
Method Linearity
Linearity is the ability of a method to provide results directly proportional to the concentration of analyte in the sample. In simple terms - when you double the concentration, the signal should also double.
Perfect linearity is the ideal we strive for. In practice, there are always some deviations from the ideal straight line. The key question is: are these deviations small enough that we can accept them?
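Linearity is typically assessed by fitting a least-squares line to calibration standards and inspecting the coefficient of determination R² together with the residuals. A self-contained sketch with invented calibration data:

```python
# Hypothetical calibration standards and measured responses.
conc = [0.0, 1.0, 2.0, 4.0, 8.0]            # concentrations
signal = [0.02, 0.51, 1.00, 2.03, 3.98]     # instrument signals

n = len(conc)
mx = sum(conc) / n
my = sum(signal) / n

# Least-squares slope and intercept.
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
slope = sxy / sxx
intercept = my - slope * mx

# R² as a (rough) linearity indicator; residual plots matter just as much.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, signal))
ss_tot = sum((y - my) ** 2 for y in signal)
r_squared = 1 - ss_res / ss_tot

print(f"slope={slope:.3f}, intercept={intercept:.3f}, R²={r_squared:.5f}")
```

A high R² alone does not prove linearity - a curved residual pattern can hide behind R² > 0.999 - which is why guidelines also ask for residual inspection.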
Method Range
Method range defines the concentration interval in which the method operates with appropriate linearity, precision, and accuracy. It's the method's "comfort zone" - the area where we can trust it.
Outside this range, the method may still give some signal, but its interpretation becomes uncertain. Too low concentrations - we approach the limits of detection and quantification. Too high - we may exceed the linear range of the detector or saturate the system.
Danger of Extrapolation
The most common error in laboratory practice is extrapolation beyond the established method range. The result may seem sensible, but its reliability is questionable. It is always better to dilute or concentrate the sample to fit within the range than to risk uncertain calculations.
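A simple guard in data-processing code can enforce this rule. The range limits and wording below are illustrative:

```python
def check_range(result, low, high):
    """Guard against reporting a result outside the validated method range."""
    if result < low:
        return "below range - consider concentrating the sample and re-analysing"
    if result > high:
        return "above range - dilute the sample and re-analyse"
    return "within validated range - result may be reported"

# Hypothetical validated range of 1-100 units:
print(check_range(150.0, low=1.0, high=100.0))
```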
Selectivity and Specificity - Are We Measuring What We Think?
These two terms are often used interchangeably, although technically they describe slightly different aspects of the same method characteristic.
Specificity
Specificity is the ideal situation in which the analyte alone generates a signal: no other substance present in the sample affects the result. This is the highest level of selectivity.
Selectivity
Selectivity is a more pragmatic approach - the method can unambiguously determine the analyte of interest in the presence of other substances that may occur in the sample. These may be other matrix components, contaminants, or substances with similar structure.
In laboratory practice, we rarely achieve ideal specificity. Therefore, we speak rather of selective methods - good enough to distinguish our analyte from other sample components.
How to Test Selectivity?
The laboratory tests selectivity by analyzing samples containing the analyte (positive test) and samples without the analyte, but with potential interfering substances (negative test). If the method is selective, the negative test should not give a false positive result.
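That positive/negative test logic can be sketched as a simple comparison, with invented signals and an illustrative decision threshold:

```python
def selectivity_ok(negative_test_signal, detection_threshold):
    """A selective method gives no false positive on an analyte-free sample
    that contains the potential interfering substances."""
    return negative_test_signal < detection_threshold

# Hypothetical signals from negative tests (no analyte, interferents present).
negative_tests = [0.002, 0.004, 0.003]
threshold = 0.010  # illustrative decision threshold, e.g. derived from the LOD

ok = all(selectivity_ok(s, threshold) for s in negative_tests)
print(f"no false positives for the tested interferents: {ok}")
```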
Recovery - Are We Getting Everything We Added?
Recovery is one of the simplest and most intuitive indicators of method quality. It shows what percentage of a known amount of analyte added to a sample we are able to measure.
The procedure is simple: we add a known amount of analyte to a sample, then perform the analysis and check how much we managed to measure. Ideal recovery is 100% - we recovered exactly as much as we added.
What Affects Recovery?
- Losses during sample preparation: Extraction, evaporation, filtration - each stage can lead to losses
- Matrix effect: Sample components can affect determination efficiency
- Analyte degradation: Some substances decompose during preparation or analysis
- Systematic errors: Incorrect calibration or interferences
Acceptable Recovery Values
For major components, we expect recovery in the range of 95-105%. For trace analytes, we accept a wider range of 80-120%. These limits account for the greater technical difficulty of measurements at very low concentrations.
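A spike-recovery calculation and acceptance check might look like this; the limits encode the ranges given above, and the helper names are our own:

```python
def recovery_percent(spiked_result, unspiked_result, amount_added):
    """Recovery from a spike experiment: fraction of the added analyte found."""
    return (spiked_result - unspiked_result) / amount_added * 100

def recovery_acceptable(rec, trace=False):
    # Illustrative limits from the text: 95-105% for major components,
    # 80-120% for trace analytes.
    low, high = (80, 120) if trace else (95, 105)
    return low <= rec <= high

# Hypothetical experiment: sample at 10.0 units, spiked with 5.0 units,
# spiked sample measured at 14.6 units.
rec = recovery_percent(spiked_result=14.6, unspiked_result=10.0, amount_added=5.0)
print(f"recovery = {rec:.0f}%, acceptable for trace work: {recovery_acceptable(rec, trace=True)}")
```

With these invented numbers the recovery is 92% - acceptable for a trace analyte, but not for a major component.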
Method Stability - Can I Trust It Tomorrow?
Stability - more commonly called robustness or ruggedness in validation guidelines - determines how sensitive a method is to small, deliberate changes in measurement conditions.
In an ideal world, everything would always be identical - the same temperature, the same pH, the same reaction time. Laboratory reality is more chaotic. Temperature may differ by a degree, pH may be one-tenth higher, reaction time may extend by a few seconds.
Why Test Stability?
Testing stability allows us to answer key questions:
- Which method parameters require strict control?
- How precisely must I control temperature, pH, reaction time?
- Can I introduce minor modifications without losing result quality?
- What will happen when something doesn't go as planned?
Example of Stability Testing
A method requires pH of 7.0. The laboratory tests what happens at pH 6.9 and 7.1. If results remain acceptable, the method is stable with respect to small pH changes. If results already change significantly at pH 6.9, pH requires very strict control and should be precisely described in the procedure.
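This pH experiment reduces to a simple comparison. The results and the ±2% acceptance criterion below are invented for illustration:

```python
# Hypothetical mean results (% of expected value) at the nominal pH
# and at the deliberately offset pH values.
results_by_ph = {6.9: 99.1, 7.0: 100.0, 7.1: 100.6}
nominal = results_by_ph[7.0]
tolerance = 2.0  # illustrative acceptance criterion: within ±2% of nominal

stable = all(abs(v - nominal) <= tolerance for v in results_by_ph.values())
print(f"method tolerates ±0.1 pH: {stable}")
```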
Measurement Uncertainty - Honesty with the Client
Measurement uncertainty is an estimate of the range of values within which, with a specified probability, the true value of the measured quantity lies. It is the way a laboratory communicates how certain it is of the result.
Every measurement carries uncertainty. Even with the best equipment, the best methods, and the best personnel, we cannot report a result as a single, absolutely certain number. We must admit: "the result is X, but the true value may be somewhat higher or lower."
Sources of Uncertainty
Measurement uncertainty comes from many sources:
- Method precision (scatter of repeated measurement results)
- Accuracy of standards and reference materials
- Equipment tolerances (pipettes, balances, volumetric flasks)
- Environmental conditions (temperature, humidity)
- Personnel competence
- Sample matrix effects
Why Is Uncertainty Important?
When a laboratory reports a result of 10 mg/L ± 2 mg/L, the client knows that the true value most likely falls between 8 and 12 mg/L. This allows them to make informed decisions, especially when the result is close to a legal limit or quality standard.
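When the individual contributions are expressed as standard uncertainties and can be treated as uncorrelated, they combine as a root sum of squares, and the expanded uncertainty is obtained with a coverage factor (typically k = 2 for roughly 95% coverage). A sketch with invented contributions:

```python
import math

# Illustrative standard uncertainty contributions for one result (mg/L).
u_precision = 0.6    # scatter of repeated measurements
u_calibration = 0.5  # standards and reference materials
u_volumetric = 0.3   # pipettes, flasks, balances

# Uncorrelated contributions combine as a root sum of squares.
u_combined = math.sqrt(u_precision**2 + u_calibration**2 + u_volumetric**2)

# Expanded uncertainty with coverage factor k = 2 (~95% coverage).
U = 2 * u_combined
result = 10.0
print(f"result: {result} ± {U:.1f} mg/L (k=2)")
```

Note how the largest contribution dominates: halving the smallest term would barely change the combined value, which is why uncertainty budgets focus effort on the biggest sources.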
Quality Control - Monitoring Result Consistency
Even the best validated method requires continuous supervision. Laboratory conditions change - equipment ages, new batches of reagents arrive, employees gain experience or make mistakes.
Blank Samples
A blank sample contains no analyte but goes through the entire analytical procedure. It should give a result of zero or very close to zero. A significantly higher result indicates contamination of reagents, glassware, or the laboratory environment.
Reference Materials
Certified reference materials (CRMs) are samples with a known, certified composition. Regular analysis of CRMs allows the laboratory to check whether the method still works correctly and gives results consistent with the certified values.
Control Charts
Control charts are a graphical tool for tracking control results over time. They show trends and sudden changes, warning of problems before they become serious.
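A minimal Shewhart-style control chart check places warning limits at ±2s and action limits at ±3s around the historical mean of control results. The control history below is invented:

```python
import statistics

# Hypothetical CRM control results over 20 runs (mg/L).
history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.0,
           9.7, 10.1, 10.2, 9.9, 10.0, 10.1, 9.8, 10.0, 10.2, 9.9]

mean = statistics.mean(history)
sd = statistics.stdev(history)

# Classic Shewhart limits around the historical mean.
warning = (mean - 2 * sd, mean + 2 * sd)
action = (mean - 3 * sd, mean + 3 * sd)

new_result = 10.7
if not (action[0] <= new_result <= action[1]):
    status = "action limit exceeded - stop and investigate"
elif not (warning[0] <= new_result <= warning[1]):
    status = "warning limit exceeded - watch the next runs"
else:
    status = "in control"
print(status)
```

Real chart rules (e.g. Westgard rules) also flag trends and runs on one side of the mean, not just single out-of-limit points.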
Philosophy of Quality Control
Quality control is not bureaucracy - it's protection against the consequences of erroneous results. It's better to detect a problem during analysis of a control sample than to receive a complaint from a client or, worse, issue an incorrect report with legal or health consequences.
Summary - The Language of Quality
Metrological terminology is not a set of dry definitions to memorize for an exam. It's a common language of communication between laboratories, auditors, clients, and regulatory bodies. Precise use of these concepts is a sign of professionalism and a guarantee of laboratory work quality.
Understanding these terms allows a laboratory to:
- Properly design and implement analytical methods
- Communicate limitations and capabilities of methods
- Interpret results in the context of their uncertainty
- Meet standard and accreditation requirements
- Build client trust through transparency
Key Conclusion
In an analytical laboratory, it's not enough to measure well - you must also understand and communicate well what is being measured, how well it's being done, and what the method's limitations are. This is what distinguishes a professional laboratory from an amateur approach to analysis.