A guide through the process of verification of instrument, method and control material performance

Any clinical laboratory knows the drill – you get a new instrument or bring in a new test and you have to prove it works as expected. While documented proof is required by various regulatory1 and accreditation bodies, remember the intent and ultimate goal of this exercise is ensuring the new results are reliable for patient care.

There are many expert resources available to guide you through this process, including those in conventional laboratory medicine textbooks2-6 and many diverse CLSI documents (most of the “EP documents”).7-11

Validation is Not Verification

Let’s start by understanding the difference between verification and validation. While the terms are often used interchangeably, they are not the same. Validation is “the process of testing a measurement procedure to assess its performance and determine whether that performance is acceptable” and is typically a manufacturer’s activity.

Verification, in contrast, is simply verifying the manufacturer’s claims for performance specifications. Verification is what is typically performed in a clinical laboratory wanting to implement an FDA-approved instrument or method. It is a much simpler, streamlined process than validation.

One caveat: Laboratory-developed tests (LDTs), by definition, are not FDA-approved. Many LDTs were developed to meet a particular need within a single healthcare organization, or developed in highly specialized laboratories to address rare disorders for which there were no commercially available tests (e.g., genetic testing for “orphan” diseases). The FDA has not historically reviewed or approved LDTs, and has used and continues to use “enforcement discretion” to choose which LDTs to scrutinize. This is an evolving area, and if your laboratory performs LDTs, it would behoove you to characterize your LDT through as much of a full validation process as possible. While your laboratory may not be able to perform the extensive validations achievable by large manufacturers, you have the ultimate advantage of demonstrating the positive influence of your LDT on patient care, safety and outcomes.

New Instrument or Method Verification

When considering a new test method or instrument, first find out what is commercially available, then review their relative performance as published in peer-reviewed literature, paying special attention to performance at medical decision thresholds – whether evidence-based or unique to your healthcare organization. It’s a good idea to develop a ranked preference based on individual instrument/method characteristics (e.g., accuracy, precision, LoD, LoQ, AMR, CRR, etc.).

Next, review proficiency testing (PT) summary booklets to determine the “peer group” size of the various methods/instruments you’re considering. Ideally, the favored instrument or method has a peer group of substantial size and hopefully the majority of PT participants. Next, determine who in your local area has your favored instrument or method – you will need to take advantage of their expertise early in your “go live” phase and/or “borrow” reagents in times of unexpected shortage. If there is no user in your local area, you may consider eliminating the instrument/method choice. Lastly, narrow the decision to at least two vendors so you have negotiation leverage.

When it comes to verifying a new instrument or method performance, you must verify the manufacturer’s claims for precision, accuracy, reference range and reportable range before putting an instrument into use. The lab must also notify and educate laboratory users about any change to units of measure or reference ranges that will occur.

Once the instrument is installed and calibrated, and testing personnel are comfortable with the instrument or method, it’s time to verify the manufacturer’s claims. Use as many people over as many days as possible to predict maximum instrument/method variability in your laboratory.

To verify precision, test the same specimen 20 times over 5 – 10 days and determine the mean and the variation: the standard deviation (SD) and the coefficient of variation (CV). You can use quality control (QC) material, calibrators (a different lot than used to calibrate your instrument) or patient specimen(s). Matrix-appropriate specimens are desirable. The concentration(s) tested should be the same as reported by the manufacturer in their instructions for use (i.e., normal versus abnormal concentration).
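As an illustration, the precision statistics described above take only a few lines of Python; the replicate values and the manufacturer’s claimed CV below are hypothetical, not acceptance criteria from any real product insert:

```python
# Hypothetical precision verification: 20 replicate results for one level.
import statistics

replicates = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1, 5.0, 5.0,
              4.8, 5.2, 5.1, 4.9, 5.0, 5.1, 4.9, 5.0, 5.2, 4.8]  # mmol/L

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)      # sample SD (n - 1 denominator)
cv_percent = 100 * sd / mean           # coefficient of variation, %

claimed_cv = 3.0                       # assumed manufacturer's claimed CV, %
print(f"mean={mean:.2f}  SD={sd:.3f}  CV={cv_percent:.1f}%")
print("precision verified" if cv_percent <= claimed_cv else "investigate")
```

If the observed CV exceeds the claim, repeat the study before concluding the method fails – a single outlier replicate can inflate the SD substantially at this sample size.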

As well, test patient specimens over 5 – 10 days in parallel with the current method.

For a quantitative test, at least 40 specimens, ideally spanning the AMR, should be tested in parallel. The more you test, the better your statistical confidence around measured values.

For a qualitative test, the number of specimens is not as simple to specify, as there are different categories of qualitative tests and different expectations of rigor. Decide how confident you need to be in the result, guided by your clinical knowledge of the impact of a false negative or false positive result on patient management, then decide on the number of specimens to be tested.
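One simple way to connect specimen count to confidence is the statistical “rule of three”: if zero discordant results are observed among n specimens, the upper 95% confidence bound on the true error rate is approximately 3/n. The specimen counts below are illustrative only, not a recommendation for any particular test:

```python
# "Rule of three": with zero discordant results among n specimens, the
# upper 95% confidence bound on the true error rate is roughly 3/n.
def rule_of_three_upper_bound(n):
    return 3.0 / n

for n in (20, 40, 100):
    bound = rule_of_three_upper_bound(n)
    print(f"n={n}: upper 95% bound on error rate ≈ {bound:.1%}")
```

For example, 20 concordant specimens only rule out an error rate above about 15%, while 100 concordant specimens tighten that bound to about 3% – which is why higher-impact qualitative tests warrant more specimens.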

For the reference range, test at least 20 specimens from healthy individuals representative of your patient population. The laboratory may obtain specimens from healthy outpatients who are having laboratory work performed for clinical indications not related to the disease states for which the new method/instrument would be used.
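A common acceptance rule for this step (described in CLSI reference-interval guidance) is to accept the manufacturer’s range if no more than 2 of 20 healthy-subject results fall outside it. A sketch, with a hypothetical reference interval and hypothetical healthy-subject results:

```python
# Reference range verification sketch (all values hypothetical).
# Rule: accept the claimed range if <= 2 of 20 results fall outside it.
low, high = 3.5, 5.1   # assumed manufacturer's reference interval, mmol/L

healthy_results = [4.0, 4.2, 3.8, 4.5, 4.9, 3.6, 4.1, 4.7, 4.4, 5.0,
                   3.9, 4.3, 4.6, 4.8, 3.7, 4.2, 4.5, 5.2, 4.0, 4.4]

outside = [x for x in healthy_results if not (low <= x <= high)]
print(f"{len(outside)} of {len(healthy_results)} results outside the claimed range")
print("range verified" if len(outside) <= 2 else "range not verified")
```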

To determine the reportable range, test specimens with analyte concentrations spanning the entire manufacturer’s reportable range. Specimens can be patient specimens, QC material or calibrators (if a different lot than used to calibrate the instrument).

If the replicate results yield an SD and CV in accordance with the manufacturer’s specifications, the precision is assumed to be acceptable. The reference range is assessed by verifying that results from healthy individuals are within the manufacturer’s recommended reference range. Over time, review all patient results and adjust the reference range accordingly, if necessary.12 Assess the reportable range using calibration material, QC material and/or patient specimens. If the results do not span the manufacturer’s entire AMR, restrict the AMR to reflect the extremes of what you’re able to measure with acceptable precision.
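The AMR restriction described above can be sketched as a simple recovery check at each tested level; the assigned values, measured means and ±10% allowance below are illustrative assumptions, not universal criteria:

```python
# Reportable range (AMR) verification sketch with hypothetical data:
# keep only levels whose measured result recovers the assigned value
# within an assumed +/-10% allowance.
assigned = [2, 50, 200, 500, 1000]     # assigned concentrations
measured = [2.1, 49, 204, 491, 870]    # replicate means at each level

allowance = 0.10                       # assumed acceptance criterion
ok = [(a, m) for a, m in zip(assigned, measured)
      if abs(m - a) / a <= allowance]

print("levels within allowance:", [a for a, _ in ok])
low, high = ok[0][0], ok[-1][0]
print(f"verified reportable range: {low} - {high}")
```

In this hypothetical case the top level under-recovers (870 vs. 1000), so the laboratory would restrict its reportable range to 2 – 500 rather than claim the manufacturer’s full range.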

Additional Strategies

Spend the most time reviewing the accuracy (method comparison) data. Do a correlation analysis using commercial software such as Excel.14 The lab should also assess the difference between paired values obtained from the same specimen tested by the current and new methods and review the corresponding difference plot.13 Look for systematic biases of the new method compared with the current method, knowing you may have to adjust reference ranges if a bias is detected, including reference ranges for calculated values dependent on the measured analyte (e.g., the anion gap calculated from measured Na+, K+, Cl- and CO2).
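As a sketch, the regression and difference-plot (Bland-Altman) statistics can be computed directly; the paired results below are hypothetical:

```python
# Method comparison sketch with hypothetical paired results:
# ordinary least-squares regression (new vs. current) plus
# Bland-Altman mean bias and 95% limits of agreement.
import statistics

current = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
new     = [1.1, 2.1, 3.0, 4.2, 5.1, 6.1, 7.2, 8.1]

mx, my = statistics.mean(current), statistics.mean(new)
sxx = sum((x - mx) ** 2 for x in current)
sxy = sum((x - mx) * (y - my) for x, y in zip(current, new))
slope = sxy / sxx
intercept = my - slope * mx

diffs = [y - x for x, y in zip(current, new)]
bias = statistics.mean(diffs)                      # mean difference (new - current)
sd_diff = statistics.stdev(diffs)
loa = (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)  # 95% limits of agreement

print(f"slope={slope:.3f}  intercept={intercept:.3f}")
print(f"mean bias={bias:.3f}  limits of agreement={loa[0]:.3f} to {loa[1]:.3f}")
```

A slope near 1.0 and an intercept near zero with a small mean bias support acceptable agreement; a consistent positive or negative bias is the signal to consider adjusting reference ranges.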

If all looks acceptable, document the entire process in a “new instrument” and/or “new method” verification study summary report. The summary should include the methods compared, dates of comparison, who performed testing, experimental design, findings (including summary of statistical analyses) and conclusions. To facilitate reviews for inspections, it is a good idea to create a report format that includes and prompts you to address each of the required new instrument/test method verification elements (such as the nine required by CAP “Common” checklist items COM.40200 to COM.50100).

Notify the medical staff shortly before or on the day of “go live” if there are reference range changes, or if the new methodology differs significantly from the previous one and the incidence of positive or negative results is expected to change substantially (e.g., enzyme immunoassay versus nucleic acid amplification methods for C. difficile toxin B detection).

Also, try to “go live” at 10 a.m. on a Tuesday not following a three- or four-day weekend. This allows staff to resolve issues occurring over the preceding weekend and leaves an entire (Monday) regular workday to iron out any last-minute hospital-laboratory information technology (IT) or system (HIS/LIS) interface issue(s). IT/IS items typically tested in advance include assuring the test is orderable in the HIS, the order crosses the interface from the HIS to the LIS, the LIS communicates the order accurately to the testing instrument, the instrument reliably transmits the result to the LIS, the LIS reliably transmits the result to the HIS, and a charge appears in the financial IS when the test is complete. If applicable, verify critical results are flagged in both the HIS and LIS, and that “>” and “<” signs transmit from the testing instrument and appear correctly in the LIS and HIS.

New QC Material Verification

Selection of new QC material follows a process very similar to selecting a new method and/or instrument, but has some unique considerations. Stability and shelf-life are important, especially the ability to freeze-thaw multiple times without compromising performance for QC material that must be stored frozen. Ideally, QC material should be independent (third-party) and matrix-appropriate, with analyte concentrations at the LoD and AMR extremes. Consider sourcing matrix-appropriate QC material exceeding your AMR, thereby allowing you to extend the AMR. Real-time access to peer performance data (same instrument and reagent lot) is desirable so statistical comparisons can be made and data easily submitted. This may not be as simple as you might think due to organizational firewalls and LIS middleware, and will require IT/IS coordination.

Verification of QC material requires verification of the manufacturer’s claim for the target mean. The target mean should fall within the range published in the manufacturer’s product insert. Run the new QC material multiple times over 5 – 10 days and calculate the mean, SD and CV. If the calculated mean is within the manufacturer’s range, the product is verified. The laboratory may also use the manufacturer’s interlaboratory comparison program to verify the SD and CV%.
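A minimal sketch of this check, with hypothetical QC results and an assumed product-insert range:

```python
# New-QC-lot verification sketch: run the material over 5-10 days and
# check the observed mean against the manufacturer's published range.
import statistics

qc_results = [98, 101, 100, 99, 102, 100, 97, 101, 100, 99,
              103, 100, 98, 101, 100, 99, 100, 102, 99, 101]  # e.g., mg/dL

insert_low, insert_high = 95, 105   # assumed product-insert target range

mean = statistics.mean(qc_results)
sd = statistics.stdev(qc_results)
cv = 100 * sd / mean

print(f"mean={mean:.1f}  SD={sd:.2f}  CV={cv:.1f}%")
print("target mean verified" if insert_low <= mean <= insert_high
      else "target mean not verified")
```

The observed mean and SD from this study also become the starting point for the laboratory’s own QC limits, which are then refined as in-use data accumulate.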


  1. Clinical Laboratory Improvement Amendments. 42 CFR §493.1253. Available at (accessed 02/04/2013).
  2. Coleman M. Method evaluation and preanalytical variables. In Clinical Chemistry. Concepts & applications. 2nd edition. Anderson SC, Cockayne S (eds). Waveland Press, Inc., Long Grove IL, 2007.
  3. Linnet K, Boyd JC. Selection and analytical evaluation of methods – with statistical techniques. In Tietz Textbook of Clinical Chemistry and Molecular Diagnostics. 5th edition. Burtis CA, Ashwood ER, Bruns DE (eds), Saunders Elsevier, St. Louis MO, 2012.
  4. Cembrowski GS, Martindale RA. Quality Control and Statistics. In Clinical Chemistry. Principles, Procedures, Correlations. 5th edition. Bishop ML, Fody EP, Schoeff L (eds). Lippincott Williams & Wilkins, Baltimore MD, 2005.
  5. Lewandrowski K. Clinical Chemistry. Laboratory Management & Clinical Correlations. Lippincott Williams & Wilkins, Philadelphia PA, 2002.
  6. Westgard JO. Basic Method Validation. Third Edition. Westgard Quality Corporation, Madison WI, 2008.
  7. NCCLS. Evaluation of Precision Performance of Quantitative Measurement Methods; Approved Guidelines – Second edition. NCCLS document EP5-A2 [ISBN 1-56238-542-9]. NCCLS, 940 West Valley Road, Suite 1400, Wayne PA 19087-1898 USA, 2004.
  8. CLSI. Method Comparison and Bias Estimation Using Patient Samples; Approved Guideline – Second Edition (Interim Revision). CLSI document EP09-A2-IR (ISBN 1-56238-731-6). Clinical and Laboratory Standards Institute, 940 West Valley Road, Suite 1400, Wayne PA 19087-1898 USA, 2010.
  9. Clinical and Laboratory Standards Institute. Preliminary Evaluation of Quantitative Clinical Laboratory Measurement Procedures; Approved Guideline – Third Edition. CLSI document EP10-A3 [ISBN 1-56238-622-0]. Clinical and Laboratory Standards Institute, 940 West Valley Road, Suite 1400, Wayne, PA 19087-1898 USA, 2006.
  10. CLSI. User Protocol for Evaluation of Qualitative Test Performance; Approved Guideline – Second Edition. CLSI document EP12-A2. Wayne, PA: Clinical and Laboratory Standards Institute; 2008.
  11. Clinical and Laboratory Standards Institute (CLSI). User Verification of Performance for Precision and Trueness; Approved Guideline – Second edition. CLSI document EP15-A2 [ISBN 1-56238-574-7]. Clinical and Laboratory Standards Institute, 940 West Valley Road, Suite 1400, Wayne, PA 19087-1898 USA, 2005.
  12. Horn PS, Pesce AJ. Reference Intervals. A user’s guide. AACC Press, Washington DC, 2005.
  13. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 327 (8476):307-10, 1986.
  14. Excel, Microsoft, Redmond WA.

About the Author

Valerie Ng, PhD, MD

Valerie Ng, PhD, MD, is Chair, Laboratory Medicine & Pathology, Highland General Hospital, Alameda Health System, Oakland, Calif.
