Calibration Verification & Linearity

How to interpret, understand & troubleshoot results

The Clinical Laboratory Improvement Amendments of 1988 (CLIA '88) and subsequent regulatory updates require that laboratories perform calibration and calibration verification procedures to substantiate the continued accuracy of their testing systems. This requirement applies to every laboratory and testing site in the U.S., whether the site performs only a few basic tests as part of physical examinations or hundreds of thousands of tests for the diagnosis, prevention or treatment of disease.

To meet CLIA requirements, testing sites must conduct approved studies to prove that their testing equipment and the test results it produces are consistently accurate. These surveys provide a standardized approach to measurement and verification across testing sites and document that regulatory standards have been met.

The College of American Pathologists (CAP) is one of several vendors that offer a comprehensive menu of Calibration Verification/Linearity Surveys (CVLs) designed to satisfy the requirements of CLIA '88. Use of CAP CVLs also meets the requirements of the CAP Laboratory Accreditation Program, offering dual benefits for accredited laboratories. Many experienced administrators and lab personnel, however, have expressed confusion when reading the output statistics in the CVL survey completion reports. Questions posed include: Why did I receive a "Different" rating, and what can I do to fix this? What does it mean if the report indicates my samples are "Linear" but not "Verified"?

In this article we outline how calibration verification and linearity testing work in the CAP CVLs, how to interpret the survey results (including results of the analytical measuring range, or "AMR," verification), how to troubleshoot survey results when needed, and what the evaluation ratings mean. Thanks go to my colleagues, Linda Prust and Laura Hughes, and the Vanderbilt University laboratory team; together we developed a training module for employees and produced this article.

Calibration Verification: It's All About the Peer Group (TE/2)

For calibration verification, CAP CVL results receive a Verified or Different rating. To understand these ratings, it is important to observe the pattern of your data versus the peer group. You can have a bias versus the peer group and still be Verified; you can also be rated Different because of a bias. Likewise, results can be Linear but not Verified, or Verified but not Linear. These ratings are based on the Goal for Total Error (TE), which compares your laboratory's results to peer data using CAP Total Error limits (about 2SD for the calibration verification evaluation versus about 1SD for the linearity evaluation).
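To make the TE arithmetic concrete, here is a minimal sketch of a TE/2-style check in Python. The function name, the 10% goal and the glucose values are hypothetical, and CAP's actual evaluation algorithm is more involved; treat this as an illustration of the idea only.

```python
# A minimal sketch of a TE/2-style check, using invented numbers;
# CAP's actual evaluation algorithm is more involved.

def calibration_verification_rating(observed, peer_target, te_goal_pct):
    """Classify one specimen as Verified or Different against its peer target."""
    # Per the article, calibration verification uses about half the total
    # error goal (TE/2), roughly a 2SD window around the peer target.
    allowable = peer_target * (te_goal_pct / 100.0) / 2.0
    bias = observed - peer_target
    return "Verified" if abs(bias) <= allowable else "Different"

# Hypothetical glucose results (mg/dL) with a 10% total error goal:
for observed, target in [(98.0, 100.0), (265.0, 250.0), (455.0, 500.0)]:
    print(observed, target, calibration_verification_rating(observed, target, 10.0))
```

Note how the second and third specimens are rated Different even though their percent bias differs: the window scales with the target value.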

When interpreting results that do not meet assay performance expectations, troubleshooting the errors may seem difficult. A systematic approach, with particular attention to the pattern of your data, makes the process more straightforward. The most useful place to begin in the survey output is the peer results summary table. A quick study of this table shows the percentage of samples rated Verified and Different, as well as the percentages rated Linear, Nonlinear and Imprecise. Pay attention to the range of specimens: the first line includes the whole range of data (every specimen), the second line excludes the highest specimen, the third line excludes the lowest, and so on. Reading this table first gives you an idea of how the peer group handled the survey specimens.

For example, if either the highest or lowest (or both) specimen results are outside allowable error limits, you may be Linear but not Verified. According to CAP, you should evaluate your results versus the target values (which are determined by peer value in this case) and document if there is a possible problem with the target or peer value. This may be especially important if the survey included diluted specimens. For diluted samples, you will also want to verify the dilution technique or methods, such as ensuring pipettors used to dilute are accurate.

Likewise, a "Different (bias is present)" evaluation report is normally generated when results are consistently above or below the mean, indicating a calibration issue. Recalibration is often all that is needed to remedy the issue and pass calibration verification on a subsequent run. Remember, your results reflect the calibration in place on the instrument at the time of the survey.

If an evaluation result is "Different" and the peer group is generally Linear and Verified, your specimen results fall outside allowable error limits from the peer group. To troubleshoot, evaluate whether there is a pattern in your results compared to the peer group. Often, closely verifying that sample handling protocols are being followed and that the instrument is performing properly (i.e., verifying system operation and reviewing daily quality control performance) will pinpoint any issues that need to be fixed. Calibration verification on a subsequent run will then document the resolution.

In some cases, such as an evaluation rating of "Different, peer group also Different," the laboratory may need to call CAP to discuss the results. There may be an issue with the survey itself, since few or no participants met survey expectations under this rating.

Reading Linearity Reports: It's All About You (TE/4)

Linearity verification is done to ensure that the results you see when running testing are the results you expect to see. It indirectly verifies the AMR (the range of numeric results a method can produce under the usual analytic process, with no special specimen pre-treatment) by plotting the expected values against the observed values for a given analyte. As with calibration verification, the ratings are based on the Goal for Total Error (TE), in this case about 1SD. Patterns are important here, too.

CAP Linearity evaluation results can be Linear, Nonlinear or Imprecise. According to CAP, a Linear classification indicates that results meet the criteria for acceptable linearity in a specified range. While there may be evidence of small deviations from linearity (it is acceptable to have one specimen slightly outside the gray TE/4 area), results are within acceptable limits for nonlinearity and imprecision; these acceptable ranges are indicated on the evaluation report (in both table and plot formats).

As shown in Figs. 1 and 2, Linearity Plot 1 reports results against a best-fit line (or best-fit curve in Nonlinear cases). Linearity Plot 2 reports results relative to acceptable imprecision ranges. Linearity Plot 2, therefore, illustrates the difference between your laboratory's individual results and the best-fit target value determined from the linear regression line in Linearity Plot 1; so, it's all about your lab.
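The relationship between the two plots can be sketched in a few lines of Python. The data here are invented, and CAP's actual regression and flagging rules may differ; the point is simply that Plot 2 is the residuals of Plot 1.

```python
# Sketch of how Plot 2 values derive from the Plot 1 best-fit line
# (invented data; CAP's actual regression may differ).
import numpy as np

expected = np.array([10.0, 50.0, 100.0, 200.0, 400.0])  # assigned levels
observed = np.array([11.0, 49.0, 103.0, 198.0, 405.0])  # your lab's results

# Plot 1: best-fit line through observed vs. expected values.
slope, intercept = np.polyfit(expected, observed, 1)
fitted = slope * expected + intercept

# Plot 2: each result's difference from its best-fit target value.
residuals = observed - fitted
for e, o, r in zip(expected, observed, residuals):
    print(f"expected {e:6.1f}  observed {o:6.1f}  difference {r:+6.2f}")
```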

Linearity Plot 2 causes the most confusion when reading evaluation results. CAP shades an area equal to a quarter of the total error goal (the random error budget, or 25% of total error), approximately 1SD from the sample mean. Per CAP, limits of a quarter to a half of the total error goal are practical and useful goals for assessing precision with the linearity survey. This means values outside the shaded area, indicating results more than 1SD away from the mean, may still be acceptable.
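As a concrete sense of scale, here is the arithmetic with an entirely made-up total error goal and target value:

```python
# Made-up numbers illustrating the quarter-to-half-of-TE precision band.
te_goal_pct = 10.0   # hypothetical total error goal, as a percent
target = 200.0       # hypothetical best-fit target value

shaded_band = target * (te_goal_pct / 100.0) / 4.0  # TE/4, roughly 1SD
outer_limit = target * (te_goal_pct / 100.0) / 2.0  # TE/2, a practical outer bound

print(f"shaded area: +/-{shaded_band:.1f}")  # +/-5.0
print(f"outer limit: +/-{outer_limit:.1f}")  # +/-10.0
```

A point at 206 would plot outside the shaded area yet still sit inside the wider TE/2 bound, which is why "outside the gray" does not automatically mean "unacceptable."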

A Nonlinear evaluation (Fig. 2) means your reported results do not meet the criteria for acceptable linearity in any of the ranges evaluated by the algorithm. On Linearity Plot 1 of Fig. 2, the result shows a nonlinear curve where a best-fit line would be expected. Instrument performance is most often the suspect in Nonlinear situations; S- or U-shaped patterns are not normal.
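One rough way to see curvature numerically is to compare a straight-line fit against a second-order fit. This sketch is not CAP's algorithm; the data are invented (generated from a curve that bends at the top end) purely to show the effect.

```python
# Rough curvature check with invented data (not CAP's actual algorithm).
import numpy as np

expected = np.array([10.0, 50.0, 100.0, 200.0, 400.0])
observed = np.array([13.0, 55.8, 107.0, 202.0, 362.0])  # bends at the top end

for degree in (1, 2):
    coeffs = np.polyfit(expected, observed, degree)
    rms = np.sqrt(np.mean((observed - np.polyval(coeffs, expected)) ** 2))
    print(f"degree {degree}: RMS residual {rms:.2f}")
# A much smaller RMS residual at degree 2 hints that a curve, not a
# straight line, describes the data.
```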

An Imprecise evaluation indicates that the submitted data have too much variability: either the results scatter both above and below the midline, or there are large differences between the two replicates, and there is no way to reliably assess linearity. An Imprecise Linearity Plot 1 may still produce a straight line, but the plotted points will show an irregular pattern or an unacceptably wide range. These types of precision issues also point to hardware: check your instrument.
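A quick replicate-scatter check captures the idea. The values below are invented, and any pass/fail threshold would come from your assay's precision goals, not from this sketch.

```python
# Illustrative replicate-scatter check (invented values).
rep1 = [10.2, 51.0, 99.0, 207.0, 396.0]   # first replicate of each level
rep2 = [9.6, 48.5, 104.5, 193.0, 409.0]   # second replicate of each level

for a, b in zip(rep1, rep2):
    spread = abs(a - b)
    print(f"mean {(a + b) / 2:6.1f}  replicate difference {spread:5.1f}")
# Consistently large replicate differences point toward an Imprecise
# evaluation rather than a bias or linearity problem.
```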

Troubleshooting to Improve Results

As with calibration verification, there are specific steps laboratories can take to troubleshoot and remedy the process or instrument issues behind unwanted linearity evaluation results. Running a system check and verifying performance results, reviewing calibration and quality control (QC) reports and recalibrating as necessary, and closely verifying that sample handling protocols are being followed will often both identify and cure the cause of an unwanted evaluation report.

It is also important to determine whether the precision goals on Linearity Plot 2 are acceptable. Remember that Linearity Plot 2 graphs the difference between your laboratory's individual results and the best-fit target value, which was determined from your results in Linearity Plot 1. Evaluate the limits for acceptable precision for the assay (CAP recommends using a quarter to a half of the total error goal). Running an n=20 precision study to verify instrument performance, followed by appropriate data reduction, will often provide reports outlining linearity and precision verification information to help you troubleshoot any remaining irregularities.
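The data reduction for such a study amounts to a mean, SD and CV. A minimal sketch follows, using simulated data and a hypothetical 10% TE goal; your instrument or middleware may report these statistics directly.

```python
# Sketch of reducing an n=20 precision run (simulated data).
import random
import statistics

random.seed(1)
results = [random.gauss(100.0, 1.5) for _ in range(20)]  # simulated replicates

mean = statistics.mean(results)
sd = statistics.stdev(results)            # observed imprecision (1SD)
cv_pct = 100.0 * sd / mean

te_goal_pct = 10.0                        # hypothetical total error goal
print(f"mean {mean:.1f}, SD {sd:.2f}, CV {cv_pct:.1f}%")
# Compare the observed CV against a quarter to a half of the TE goal
# (here 2.5% to 5.0%), per the CAP recommendation cited above.
```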

It is also possible to receive a Linear evaluation report even though results are not linear throughout the entire range. In this case, start by reviewing peer group data to determine whether the issue was seen across study participants or only in your laboratory. If the issue appears to occur at only some sites, start troubleshooting by reviewing the QC performance that corresponds to the survey samples determined to be outside the linear range. Also verify instrument performance through a system check or calibration review, and repeat the linearity study after troubleshooting.

Failure to indicate that a sample was diluted will result in the sample being analyzed as "neat," or undiluted. If you denote that the sample was diluted, the sample will be called out as diluted in your survey result report. A key included with the survey results shows which sample or samples were diluted. When reviewing linearity results, keep the assay's upper measuring limit in mind and determine whether samples may have been diluted but not indicated as such in the results. Refer to the sample values to determine if samples are within the stated undiluted measuring range.
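The bookkeeping here is simple multiplication, as the sketch below shows; the AMR limit, result and dilution factor are all invented for illustration.

```python
# Illustrative dilution bookkeeping (limit and values are invented).
amr_upper = 500.0        # hypothetical upper limit of the undiluted AMR

measured = 320.0         # result obtained on a 1:2 diluted aliquot
dilution_factor = 2.0
corrected = measured * dilution_factor   # value implied for the neat sample

if corrected > amr_upper:
    print(f"{corrected:.0f} exceeds the undiluted AMR; flag the sample as diluted")
```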

Sidebar:

Calibration Verification: Confirms the accuracy of your measurement of patient samples by proving that the values you obtain are the values you expect. (CLIA rules define calibration verification as the determination of analyte concentration in materials with a matrix similar to that of patient samples.)

Linearity: Along with supporting measurement accuracy, linearity testing verifies that the assay response is a straight-line, rather than curved, relationship. (CAP defines linearity as a straight-line relationship between observed values and expected values.)

Analytical Measuring Range (AMR): The range of numeric results a method can produce without any special specimen pre-treatment, such as dilution, that is not part of the usual analytic process. (The AMR is also referred to as the reportable range by CLIA.) For many assays, the AMR is the range of results between the lowest calibrator (e.g., S0) and the highest calibrator (e.g., S5 or S6). In ISO and CLSI documents this is referred to as an assay's measuring interval.


About Author

Chris White, PhD

Dr. White is scientific affairs manager, tactical marketing, North American Commercial Operations for Beckman Coulter Inc., Brea, CA.