(Editor's note: This is a multi-part series on risk management for the laboratory.)
It is useful to divide the steps a patient's sample goes through, from drawing the sample until the physician sees the result, into three phases. The approximate proportion of errors that occur in each phase is:
- Preanalytical -- 60-70%
- Analytical -- 10-15%
- Postanalytical -- 20-25%
In each of the three phases, it is possible that an error goes undetected. Even bar codes are not perfect, nor are the people who use them. There are no perfect instruments, nor are there perfect error detectors for those instruments. We discuss this in more detail shortly.
Postanalytical errors occur when a result is transferred from the instrument sheet to the report sheet, or when a result is phoned from the lab to the physician. Our concern here is aimed especially at errors that occur in the analytical step.
There are several error detectors at work during the analytical step: the quality control (QC) system, the instrument's built-in checks (which detect many malfunctions, but not all) and the person operating the instrument. It has been repeatedly found that of the three general phases, the analytical phase is the least likely source of reported errors. But, as in the other two phases, errors do occur.
We will assume for this discussion that the QC system would detect an error in the instrument before results are sent to the physician (the instrument's own checks being used only to identify the source of a detected error).
Given that assumption and two others -- that the sample analyzed is from the right patient, in the right tube and without hemolysis, and that the report sent to the physician is what the instrument reported -- the QC system remains the only source of error for this discussion.
The QC system is quite simply made from two parts: 1) a known sample (the control) that by and large mimics the patient samples and is analyzed at least once a day, usually before any patients are analyzed; 2) a "rule" that determines whether the analysis of the control is to be accepted ("in") or rejected ("out"). If the control is accepted, the patient data are sent to the physician. If the control is rejected, patient data are not reported until the error is found and corrected.
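As an illustration, the accept/reject rule described above can be sketched in a few lines of code. The control mean, standard deviation and the +/- 2 SD limit used here are hypothetical values for illustration only, not figures from a real laboratory:

```python
# Hypothetical illustration of a simple QC accept/reject rule.
# The control's target mean and standard deviation (SD) would come
# from the laboratory's own data; these numbers are made up.
CONTROL_MEAN = 100.0   # target value of the control material
CONTROL_SD = 2.0       # observed SD of repeated control runs
LIMIT_IN_SD = 2.0      # accept if within +/- 2 SD of the mean

def control_accepted(measured_value: float) -> bool:
    """Return True ('in') if the control result falls within the
    accept limits, False ('out') if it must be rejected."""
    deviation = abs(measured_value - CONTROL_MEAN)
    return deviation <= LIMIT_IN_SD * CONTROL_SD

# If the control is 'in', patient results are released;
# if 'out', patient results are held until the error is corrected.
print(control_accepted(101.5))  # within 2 SD -> True
print(control_accepted(105.0))  # 2.5 SD away -> False
```

In practice the limit (and the rule itself) is chosen per test, as discussed below.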
Since the instrument is not perfect, it rarely reports the exactly correct answer for either the control or the patient samples. Based on well-known statistics, the accept/reject rules are chosen to provide useful data nearly always. It is common to set the limits so that the control will reject 0.3% to 5% of values that are actually acceptable. These are called False Rejects (FR). In this case, as we said, patient data are not reported. FRs cost time and money, but they are acceptable when compared to the opposite -- accepting data that would be misleading to the physician. These are called False Accepts (FA) and are also costly. The assumption (probably true) is that FAs are much more expensive than FRs. The laboratory understands this and sets the accept/reject limits in an ongoing effort to avoid any FAs.
In addition, the laboratory is guided by a set of values issued by the federal government termed "allowable errors" (total allowable error, or TEa). These are usually set wider than the range within which most instruments function, which gives the QC program a bit more room for errors.
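One common way to quantify the "room" the allowable error leaves is the sigma metric, which compares the TEa to the instrument's bias and imprecision. This calculation is not from the article itself, and the TEa, bias and SD values below are made up for illustration:

```python
# Hypothetical illustration of the 'room' an allowable error (TEa)
# leaves an instrument, using the widely used sigma-metric formula:
#     sigma = (TEa - |bias|) / SD   (all in the same units, or all in %)
def sigma_metric(tea_pct: float, bias_pct: float, sd_pct: float) -> float:
    """Higher sigma means more room between typical instrument
    performance and the allowable-error limit."""
    return (tea_pct - abs(bias_pct)) / sd_pct

# e.g. TEa = 10%, bias = 1%, SD (as CV) = 1.5%
print(round(sigma_metric(10.0, 1.0, 1.5), 1))  # 6.0 -- ample room
```

A higher sigma lets the laboratory use simpler, less FR-prone rules; a lower sigma demands tighter rules.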
The question to be asked after this rather long preamble is whether a patient can be reported as "normal" when they are not well, or as not well when they are indeed healthy. The answer to both parts is "Yes, but..." There are three reasons this can happen: 1) the range for the apparently healthy person is not correct for your patient; this person simply isn't like the 95% of apparently healthy people who made up the "normal range." That is not a laboratory error by our definition; 2) the TEa is not a "one size fits all" number. Again, this is not a laboratory error; 3) there is the possibility that the instrument check did not detect the error and the accept/reject point was in the wrong place at the time the patient sample was analyzed. This can happen, but it is quite rare -- though not 0%.
In conclusion, it is our view that because the accept/reject point (also known as a QC rule) can be customized for each test, and because the TEa can guide the choice of the proper rule, the chance of the laboratory reporting an erroneous result, while not 0%, is quite low.
Once you are satisfied with one of these approaches, or another that fits your needs, you will find it helpful to build a chart showing all the factors that contribute to the risk. Using our example of turnaround time (TAT), we have prepared a chart for you to consider and adapt to any risk you choose to attack. We will address this chart in detail in the next installment.
David Plaut is a chemist and statistician in Plano, TX. Nathalie Lepage is a clinical biochemist and a biochemical geneticist at the Children's Hospital of Eastern Ontario and an associate professor in the Department of Pathology and Laboratory Medicine at the University of Ottawa, Ontario, Canada.