
Autoverification Strategies for Clinical Chemistry

Vol. 14 • Issue 8 • Page 36

Autoverification promises to significantly reduce errors, decrease turnaround time and increase productivity.

Experienced technologists have for many years either actively or passively verified results before reporting. This has been done by a variety of processes—comparing simultaneously run analytes (e.g., urea nitrogen and creatinine), checking electrolyte results with the anion gap or using manually recorded results to delta check analytes such as pediatric total bilirubin. As the training and experience of laboratory personnel decrease and testing workloads increase, more computer-assisted autoverification and autorelease of results are needed to ensure quality, improve productivity and decrease turnaround times for normal or reasonable results.

Autoverification can be used to:

• direct autorelease of results with no technologist interaction,

• trigger automated retesting or automated reflex testing,

• notify laboratory personnel of a need for further interaction with the specimen (e.g., checking for proper labeling, monitoring stability and storage concerns, determining interference possibilities or concurrent medications issues),

• indicate recommended or required responses to flagged results and

• provide specialized reporting schemes to alert clinicians to unexpected abnormal results, significant changes in an analyte or specimen integrity issues that might affect the clinical usefulness of the result.

Personnel Issues

Autoverification strategies involve not only formulation of the expert rules for evaluating results, but also the response rules that apply when a result fails the autoverification process. Attempts to initiate autoverification protocols may be met with resistance from bench technologists, who may see it as a means of eliminating their direct involvement with the release of results and, thus, diminishing their job security. However, autoverification allows technologists to be more productive, applying their time and experience to truly abnormal or unexpected results instead of manually reviewing and releasing large numbers of otherwise acceptable results.

Obtaining techs' input for the development of the autoverification and response rules may alleviate their concerns, obtain buy-in from all affected parties and provide the necessary knowledge base to make the best decisions.

Picking the Team

Successful implementation of autoverification strategies will necessarily involve a large number of individuals, including:

• bench technologists,

• operations supervisors,

• technical specialists,

• laboratory directors,

• information technology (IT) personnel and

• clinical staff.

Team selection is important, as each participant brings a different perspective to the challenge of developing useful rules and responses.

In addition, a project manager should be appointed to organize the team and set timelines. While some decisions will have to be modified as the project moves along and testing and validation is accomplished, it is important to keep the process going forward.

Experienced bench and lead technologists from all shifts should be asked to describe how they currently manually review results, what criteria they use to hold up a result for retest, recollection or other remediation, and what they usually do to resolve issues of questioned results. Use of questionnaires may allow more technologists to be involved. Involve IT personnel early in this project so they can get a feel for the manual processes they will be asked to convert into computer-based rules.

Lead technologists, technical specialists or scientific directors can help provide instrument and analyte information such as instrument flags for sample integrity and analytical range, reference range and imprecision data. Additional information may include biological variation data for calculation of reference change values, which is useful in determining valid delta checks. Clinicians can help determine what pattern or change in a result triggers a change in diagnosis or therapy.
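Where biological variation data are available, the reference change value (RCV) can be calculated from analytical imprecision and within-subject biological variation. Below is a minimal sketch of that calculation; the sodium CV figures are illustrative placeholders, not values from any particular instrument or study.

```python
import math

def reference_change_value(cv_analytical: float, cv_biological: float,
                           z: float = 1.96) -> float:
    """Two-sided reference change value (%) at ~95% probability.

    RCV = sqrt(2) * Z * sqrt(CVa^2 + CVi^2), with CVs given in percent.
    """
    return math.sqrt(2) * z * math.sqrt(cv_analytical ** 2 + cv_biological ** 2)

# Illustrative placeholder CVs for sodium (percent), not instrument-specific data
print(f"Sodium RCV: {reference_change_value(1.0, 0.6):.1f}%")  # -> about 3.2%
```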

Start Simple

In every case, the laboratory should start by picking a single analyte to determine the autoverification and response rules. It is especially important to simultaneously develop response rules for every autoverification rule so that bench technologists have clear direction on what to do if a result does not verify. The tendency is to reflexively repeat all results that do not autoverify, but this may not be the most economical approach. Rather, more appropriate responses should be developed, such as releasing the result after consideration of the issues but without retesting.

Actual rule writing can proceed once the instrument flags, specimen integrity flags and limits, medically critical results, acceptable delta checks and other verification parameters have been determined. Rules are generally written as "IF" statements, evaluated as "true" or "false." These IF statements may be individual or nested to provide the necessary discrimination. An example of rule format and response for sodium is given in Table 1.
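To make the rule format concrete, here is a minimal sketch of Table 1's sodium rules expressed as nested IF statements. The function name, return convention and flag handling are illustrative assumptions, not any vendor's rule syntax.

```python
def verify_sodium(result, analyzer_flags, previous_result=None):
    """Return (verified, response) for a sodium result in mmol/L.

    Thresholds mirror Table 1; the return convention is illustrative.
    """
    if analyzer_flags:                       # any instrument flag fails the rule
        return False, "Address individual flag and repeat"
    if result < 130 or result > 150:         # outside acceptable range
        return False, "Check previous results; release if delta < 5 mmol/L, else recheck"
    if previous_result is not None and abs(result - previous_result) > 8:
        return False, ("Check earlier results for a trend; check for a clotted or "
                       "mislabeled specimen; recheck; redraw if unresolved")
    return True, "Autorelease"

print(verify_sodium(142, [], previous_result=140))  # -> (True, 'Autorelease')
print(verify_sodium(128, []))                       # held: below 130 mmol/L
```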

When a rule evaluates "false," the result is held pending technologist response as indicated under the response column. As experience is gained, more complex rules may be applied to individual analytes and panels.

For the example in Table 1, an additional rule that releases results outside the 130-150 mmol/L range if the previous result was out of the range and if the delta check is < 8 mmol/L could be included. This may be helpful for critical care units where turnaround time is of the essence and frequent monitoring is usual.
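A hedged sketch of that critical-care exception, building on the sodium example above; it assumes a sufficiently recent prior result is available, which is not checked here.

```python
def verify_sodium_ccu(result, previous_result):
    """Illustrative critical-care exception: release a stable abnormal sodium.

    Assumes both values are in mmol/L and the previous result falls inside a
    valid look-back window (not checked here).
    """
    out_now = result < 130 or result > 150
    out_before = previous_result < 130 or previous_result > 150
    if out_now and out_before and abs(result - previous_result) < 8:
        return True, "Autorelease (stable abnormal under frequent monitoring)"
    return None, "Fall through to the standard sodium rules"

print(verify_sodium_ccu(127, 125))  # released despite being below 130 mmol/L
```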

Also, panels of analytes—sodium, potassium, chloride and bicarbonate—could be worked on as a group after the rules for each analyte are determined. The anion gap calculation could then be used as a first-pass test for acceptability of the electrolyte panel, mimicking the manual review used by many experienced technologists. If the anion gap rule fails, all results would be held for review. If the anion gap rule passes, each analyte would be evaluated individually.
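A sketch of that two-pass panel evaluation follows, using the common anion gap formula Na − (Cl + HCO3); the acceptability window is an illustrative placeholder that each laboratory would set from its own data.

```python
def anion_gap(na, cl, hco3):
    """Anion gap in mmol/L using the common formula Na - (Cl + HCO3)."""
    return na - (cl + hco3)

def verify_electrolyte_panel(na, k, cl, hco3, gap_range=(5, 20)):
    """First-pass panel check; the 5-20 mmol/L window is a placeholder only.

    Potassium is accepted for completeness but not used by this gap formula.
    """
    gap = anion_gap(na, cl, hco3)
    if not gap_range[0] <= gap <= gap_range[1]:
        return False, f"Anion gap {gap} mmol/L out of window; hold entire panel"
    return True, "Gap acceptable; evaluate each analyte individually"

print(verify_electrolyte_panel(140, 4.0, 104, 24))  # gap = 12, first pass OK
```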

Delta checking confirms the validity of the result but requires several considerations such as acceptable delta value and period of look-back (i.e., how far back to look for a valid comparison result). Acceptable delta checks need to consider both analytic and biologic variability.

Deciding on the period to look back for a previous result is necessary to set up delta checks. Looking back too far can tie up the computer; not looking back far enough can lead to no delta check value. For some analytes, how far to look back may be determined by whether it is an inpatient or outpatient sample. For inpatient electrolyte results, for example, a look-back of hours on sodium may be appropriate if frequent electrolyte panels are being monitored on the patient. Outpatient cholesterol results may require a look-back of several months.
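One way to encode those patient-class-dependent look-back windows is sketched below; the window table and the shape of the result history are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Illustrative look-back windows drawn from the examples in the text
LOOK_BACK = {
    ("sodium", "inpatient"): timedelta(hours=24),
    ("cholesterol", "outpatient"): timedelta(days=180),
}

def find_delta_comparison(history, analyte, patient_class, now=None):
    """Return the most recent prior result inside the look-back window, or None.

    `history` is assumed to be a list of (timestamp, value) pairs, oldest first.
    """
    now = now or datetime.now()
    window = LOOK_BACK.get((analyte, patient_class))
    if window is None:
        return None                         # no delta check configured
    for timestamp, value in reversed(history):
        if now - timestamp <= window:
            return value
    return None                             # nothing recent enough to compare

history = [(datetime.now() - timedelta(hours=30), 138),
           (datetime.now() - timedelta(hours=2), 141)]
print(find_delta_comparison(history, "sodium", "inpatient"))  # -> 141
```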

Complications can sometimes arise in determining look-back intervals. PSA, for example, is usually monitored in otherwise healthy men every year or two and might require a look-back of one year to compare previous results, while a post-prostatectomy patient might be tested weekly after surgery to monitor for recurrence or the appearance of metastatic disease. Also, the delta value in an otherwise healthy man might not be the same as in a post-prostatectomy patient. In more sophisticated autoverification protocols, the source of the specimen—inpatient vs. outpatient, dialysis unit vs. CCU, etc.—may lead to a different set of rules and responses.
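Extending the same idea, a more sophisticated protocol might key both the look-back and the acceptable delta on the clinical context. The PSA intervals and delta limits below are hypothetical placeholders, not clinical recommendations.

```python
from datetime import timedelta

# Hypothetical configuration: both the windows and the delta limits (ng/mL)
# are placeholders for illustration only
DELTA_RULES = {
    ("psa", "screening"):          {"look_back": timedelta(days=365), "max_delta": 0.75},
    ("psa", "post_prostatectomy"): {"look_back": timedelta(days=7),   "max_delta": 0.2},
}

def delta_check_passes(analyte, context, current, previous):
    """True/False if a delta rule applies; None means hold for manual review."""
    rule = DELTA_RULES.get((analyte, context))
    if rule is None or previous is None:
        return None
    return abs(current - previous) <= rule["max_delta"]

print(delta_check_passes("psa", "post_prostatectomy", 0.5, 0.1))  # -> False, hold
```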

Where to Perform Rules

A decision must be made as to where each rule is actually evaluated. Choices include the analytical instrument, a programmable interface, middleware or the laboratory information system (LIS). Some instrument systems are able to do rule checking in addition to evaluating instrument flags. Based on certain instrument flags, the instrument can rerun the specimen, autodilute and rerun the specimen, or perform automated reflex testing.

Some programmable interfaces and middleware systems can be used extensively to evaluate rules and to direct retesting, dilution with retesting and other automated procedures, including notifying technologists of possible responses.

Finally, autoverification rules can be evaluated in the LIS, which can then notify personnel of possible responses. The LIS also can notify end-users that results have been retested, that an integrity issue may affect the usefulness of the result or that a significant change has been noted since the last analysis. Where each rule is evaluated will have to be determined based on each laboratory's instrument and IT capabilities.

Validation and Regulatory Issues

Next, a thorough testing and validation process must be done. Depending on how the rules are implemented, a simulation of possible scenarios should be used to challenge the system and verify its performance. The rules should first be evaluated manually against as many data scenarios as possible.

If this test is successful, proceed to a computer scenario test where artificial scenarios are entered into the middleware or LIS and evaluated by the autoverification protocol. Finally, real data simulations (preferably using a test system) should be undertaken. These validation procedures should be carefully documented for regulatory purposes and will need to be repeated on a periodic basis for regulatory compliance.
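As a hedged illustration of the artificial-scenario stage, the sketch below replays a small scenario table through a stand-in sodium rule and collects mismatches; a real validation would document every scenario and outcome for regulatory purposes.

```python
def sodium_rule(value, flags, previous):
    """Stand-in rule mirroring the earlier sodium sketch."""
    if flags:
        return False, "flagged"
    if value < 130 or value > 150:
        return False, "out of range"
    if previous is not None and abs(value - previous) > 8:
        return False, "delta failure"
    return True, "autorelease"

# Each artificial scenario pairs an input with the outcome the rule should give
SCENARIOS = [
    {"value": 142, "flags": [], "previous": 140, "expect_verified": True},
    {"value": 128, "flags": [], "previous": None, "expect_verified": False},
    {"value": 142, "flags": ["SHORT_SAMPLE"], "previous": None, "expect_verified": False},
]

def run_scenarios(rule, scenarios):
    """Replay scenarios through a rule and collect any mismatches."""
    failures = []
    for s in scenarios:
        verified, response = rule(s["value"], s["flags"], s["previous"])
        if verified != s["expect_verified"]:
            failures.append((s, verified, response))
    return failures

print(f"{len(run_scenarios(sodium_rule, SCENARIOS))} scenario(s) failed")  # expect 0
```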

Even with appropriate and thorough testing, validation issues may arise after go-live. Intense and close supervision by the project manager for the initial hours or days is necessary. An example from the author's laboratory involved an autoverification protocol that compared TSH and fT4 results ordered on the same specimen. All testing and validation had gone smoothly, but on go-live all TSH verifications failed with an undetermined error message. It was discovered that the fT4 result posted about 15 minutes after the TSH result: when the TSH result was received by the computer, the autoverification rule looked for the fT4 result, which had not yet been received. A back-out procedure to remove the expert rules from the interface, middleware or LIS should be anticipated in case a rule has to be removed or modified after go-live.
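One way to guard against that kind of timing failure, sketched under the assumption that held results can be requeued for re-evaluation, is to wait for the paired result up to a timeout before escalating to manual review. Nothing here describes a specific LIS or middleware API.

```python
from datetime import datetime, timedelta

WAIT_FOR_PARTNER = timedelta(minutes=30)    # illustrative timeout, not a standard

def verify_tsh(tsh_result, ft4_lookup, received_at, now=None):
    """Hold a TSH result until the same-specimen fT4 arrives or a timeout passes.

    `ft4_lookup` is an assumed callable returning the fT4 value or None; the
    requeue mechanism itself would live in the interface, middleware or LIS.
    """
    now = now or datetime.now()
    ft4 = ft4_lookup()
    if ft4 is None:
        if now - received_at < WAIT_FOR_PARTNER:
            return "REQUEUE"                # fT4 may still be resulting
        return "HOLD_FOR_REVIEW"            # partner never arrived
    return "EVALUATE_PAIR"                  # both present: apply comparison rule

print(verify_tsh(8.2, lambda: None, datetime.now()))  # -> 'REQUEUE'
```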

Conclusion

Autoverification has been used in hematology for several years, probably because of the close interrelationship between hematology parameters. Autoverification in clinical chemistry, however, has lagged behind due to the increasing complexity of evaluating the results, especially in panels where there may be no direct interrelationship between the parameters. But with improved instrument monitoring capabilities for sample integrity and autodilution, new sophisticated interface and middleware solutions and improvements in LIS capabilities, autoverification in clinical chemistry offers the promise of significantly reducing errors, decreasing turnaround time and increasing productivity.

Dr. Fowler is laboratory director, Medpace Laboratories, Cincinnati, OH.


Table 1: Example Rule Format for Sodium

RULE | RESPONSE
IF "Analyzer Flag" then FALSE | Address the individual flag and repeat
IF Na <130 or >150 mmol/L then FALSE | Check previous results; if delta <5 mmol/L, release; if >5 mmol/L, recheck
IF Delta Check >8 mmol/L then FALSE | Check earlier results for a trend; check for a clotted sample; check for a mislabeled specimen; recheck; redraw for retest if unable to resolve

Table courtesy of Michael W. Fowler, PhD

Information Needed to Develop Rules and Responses

• Current Manual Review and Response Processes

• Specimen Parameters — specimen type and storage requirements

• Instrument Flags — sampling checks, integrity checks, linearity

• Analytical Performance — imprecision, linearity, interferences

• Reference Ranges — age, gender, ethnicity

• Biological Variation

• Clinical Practice Guidelines and Information

• Concurrent Test Information




     
