(Editor's note: This is a multi-part series on risk management for the laboratory. Click here to read Part 1. Click here to read Part 2. Click here to read Part 3)
Welcome back. In this installment, we begin our discussion of the risks in the analytical phase, from handling patient samples through acquiring the result from the instrument. We have divided this portion into three parts: 1) method validation, 2) linearity of the method and 3) the reference range (reference interval, "normal" range). Each of these has potential pitfalls that could put physicians and their patients at risk. We will look at these pitfalls so that you can build a set of checks to avoid them.
There are two facets of a method that must be studied and found acceptable before a new method replaces the current method -- accuracy and precision. Let's begin with precision. There are those who feel that, within limits, consistency, reproducibility or precision is more important than accuracy.
For example, hemoglobin values of 11.2 and 10.8 on consecutive days (the "real" value is 11.1) are more useful than values of 9.7 and 12.5 -- which, when averaged, also give 11.1.
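The point can be seen with a little arithmetic. The snippet below (a minimal sketch using the invented hemoglobin values from the example above) shows that both pairs average close to the true value, but their day-to-day spreads differ greatly:

```python
from statistics import mean

# Hypothetical hemoglobin results (g/dL) from the example above.
true_value = 11.1
precise = [11.2, 10.8]    # consecutive-day results close to the true value
imprecise = [9.7, 12.5]   # results that average to 11.1 but scatter widely

for name, pair in (("precise", precise), ("imprecise", imprecise)):
    spread = max(pair) - min(pair)
    print(f"{name:9s} mean = {mean(pair):.2f}  day-to-day spread = {spread:.2f}")
```

The means are nearly identical, but the imprecise pair spans 2.8 g/dL against 0.4 g/dL for the precise pair -- the clinician tracking this patient day to day is far better served by the precise method.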
In doing a method validation, we suggest either or both of the following two approaches to measuring precision:
- One, analyze four or more samples representing the range of the method 3-5 times each, preferably with both the new and the current method. The F-ratio is the statistic we prefer for detecting a statistical difference in the precision of the two methods. Obviously, if the new method is more precise, that is good. If its imprecision turns out to be statistically significantly larger, you will need to proceed with caution. Study Table Ia.
- Two, analyze eight or more patient samples in duplicate on four or more days. You can use different samples each day. Preferably do this with both the new method and the current method. See Table Ib.
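The F-ratio comparison from the first approach can be sketched as follows. This is a minimal illustration with invented replicate data (one sample measured five times by each method); in practice you would compare the resulting F-ratio against the critical value for your degrees of freedom from an F table or statistics package:

```python
from statistics import variance

# Hypothetical replicates: one sample measured 5 times by each method.
current_method = [11.0, 11.2, 10.9, 11.1, 11.0]
new_method = [11.1, 11.0, 11.2, 11.1, 11.0]

# F-ratio: the larger sample variance over the smaller, so F >= 1.
var_cur = variance(current_method)
var_new = variance(new_method)
f_ratio = max(var_cur, var_new) / min(var_cur, var_new)
print(f"variance (current) = {var_cur:.4f}")
print(f"variance (new)     = {var_new:.4f}")
print(f"F-ratio            = {f_ratio:.2f}")
```

If the F-ratio exceeds the critical value for the chosen significance level, the two methods differ in precision -- and, because the F-ratio is built from each method's own variance, you can see at a glance which method contributes the imprecision.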
The second component in method validation is accuracy. We recommend three ways to determine accuracy. First, read the package insert to see whether the comparison of the proposed method with another method indicates that the proposed method is clinically accurate.
Second, look in the literature (e.g., Clinical Chemistry or Clinical Biochemistry) for published articles evaluating the proposed method. Third, perform a method comparison with your instrument and your patients.
We suggest analyzing 30 or more samples representing the linearity of the method over a period of 4-5 days. Table IIa is an example of this approach. Figures 1 and 2 illustrate useful charts for a method validation. Figure 1 is a plot of the samples by both methods; you should look at how close the points are to the line. [Note that the line is not extended below the lowest value nor above the highest one.]
The scatter, or lack thereof, gives you an idea of the precision of the methods as well as how the methods compare. Keep in mind that even when the two methods agree, the new method may simply be mimicking the "accuracy" of the current method.
In other words, if the two methods do not agree, it may be that the new method is indeed more accurate. Of course, the new method may not be as accurate. This is where reading the package insert and surveying the literature for additional information is helpful.
The statistics we like when comparing methods are 1) the difference between the two methods, 2) the percent difference between the two methods and 3) the unpaired t-test. See Table IIa.
We are not enamored of the slope-intercept nor the r-value in studying method comparisons. The slope and intercept are prone to being mis-read -- giving the impression of a difference between the methods that is not there. As for the r value, the precision study we suggested gives us more information: r speaks only to precision, but it does not tell us which method (or whether both) is contributing imprecision, nor how much from each. The F-ratio does that for us.
Once you have carried out your studies on precision and accuracy and are satisfied with the results, you will avoid most risks in how your clinicians interpret the data you have collected.
In our next installment, we continue the discussion of avoiding risks in the analytical phase as we look at evaluating linearity and the reference range.