
Introduction of QUIP (Quality Information Program) as a Semi-automated Quality Assessment Endeavor Allowing Retrospective Review of Errors in Cross-sectional Abdominal Imaging

Rationale and Objectives

The aims of this study were to evaluate the role of a quality information program (QUIP) as a semiautomated educational feedback mechanism and to review common errors in cross-sectional abdominal and pelvic studies as an initiative for continuing medical education and improved patient care.

Materials and Methods

Abdominal and pelvic errors identified by QUIP and cases collected from morbidity and mortality conferences were reviewed. Errors were classified and graded to levels of morbidity.

Results

There were 222 errors in 218 patients over the 4 years of this study. One hundred thirteen (51%) were identified after the introduction of QUIP (January to December 2009). One hundred thirty-eight studies (61%) were read independently, while 84 (39%) were double-read. Sixty-five percent of errors (145 of 222) were false-negatives, of which 45 (31%) were “satisfaction-of-search” errors. There were 62 cognitive errors (28%), nine technical errors (4%), eight communication errors (4%), six ordering errors (3%), and five false-positives (2%). Seventy-six percent of errors were identified on computed tomography ( n = 168); fewer cases involved ultrasound ( n = 20 [9%]) and magnetic resonance ( n = 34 [15%]). Forty-one percent resulted in no changes to patient outcomes. Forty percent caused minor patient morbidity, and 19% caused major patient morbidity, including three cases (1%) that likely contributed to patient death.

Conclusions

Most abdominopelvic errors in this study were classified as false-negatives, and many can be attributed to satisfaction-of-search errors. Implementing a simple, semiautomated QUIP gives radiologists timely feedback regarding errors. This may improve the quality of health care while allowing radiologists the opportunity to learn from each case in which they are involved.

With the increasing use of diagnostic imaging for patient diagnosis, management, and follow-up, the role and scope of radiologists in image interpretation are expanding. Identifying relevant findings pertaining to patient symptoms, as well as reporting significant incidental or unexpected findings, are among the key responsibilities of our specialty.

At our teaching institution, we have developed a quality information program (QUIP), a confidential, semiautomated method of documenting and following up on subsequently identified errors. QUIP includes an e-mail template accessible from a desktop Outlook (Microsoft Corporation, Redmond, WA) public e-mail folder or anonymously through a transcriptionist (Fig 1). The Outlook folder resides on a virtual PC that is accessible on the third screen of our picture archiving and communication system (PACS; McKesson Corporation, San Francisco, CA) but is a separate system: a shortcut to the virtual machine servers opens a virtual session as if on a regular PC, without using any of the PACS's computing resources. The e-mail template is preaddressed to the appropriate recipients, including the section chief (eg, of abdominopelvic radiology) and the administrative assistant. It has fields that can be quickly filled in with case identifiers, the date of the study in question, and a brief account of the event requiring attention (such as inappropriate protocolling, interpretation issues, or transcription errors). Finally, each QUIP case is also addressed to the radiologist (or radiologists) who created the original report and would therefore most benefit from this information (Fig 1). It is sent as an e-mail titled “QUIP” for easy identification. The recipient radiologist is required to reply to the section chief by checking off one of four options at the bottom of the standardized QUIP form: (1) reviewing the case, (2) making an addendum to the original report, (3) contacting the referring clinician (if the finding is of sufficient clinical importance), or (4) other, for rare cases involving extenuating circumstances.
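The template fields and four reply options described above can be sketched as a simple data model. This is an illustrative sketch only: the class, field, and enum names are assumptions, not part of the actual Outlook-based system.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class QuipAction(Enum):
    """The four reply options at the bottom of the standardized QUIP form."""
    CASE_REVIEWED = 1
    ADDENDUM_ISSUED = 2
    CLINICIAN_CONTACTED = 3   # used when the finding is of sufficient clinical importance
    OTHER = 4                 # rare cases involving extenuating circumstances

@dataclass
class QuipReport:
    """Fields of the preaddressed QUIP e-mail template (names hypothetical)."""
    case_identifier: str
    study_date: str
    event_description: str                      # eg, protocolling, interpretation, or transcription issue
    original_radiologist: str                   # recipient who most benefits from the feedback
    action_taken: Optional[QuipAction] = None   # filled in by the recipient's required reply

# Example lifecycle: a report is filed, then the recipient replies with an action.
report = QuipReport(
    case_identifier="ANON-0001",
    study_date="2009-03-15",
    event_description="Subtle pancreatic lesion not described on CT",
    original_radiologist="dr.example",
)
report.action_taken = QuipAction.ADDENDUM_ISSUED
```

Modeling the reply as a required enum mirrors the workflow's closed loop: a QUIP case is not complete until the recipient radiologist commits to one of the four actions.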


Figure 1

Screen-captured image of a standardized quality information program e-mail demonstrating the information usually included, with standardized wording and editable variables.


Materials and Methods


Results


Table 1

Rates of QUIP Reporting by Quarter During the First Year of Implementation (2009)

| Quarter | Number of QUIP Reports |
| --- | --- |
| January to March | 24 |
| April to June | 23 |
| July to September | 29 |
| October to December | 37 |

QUIP, quality information program.


Figure 2, A 58-year-old man with intermittent hypertension and sweating. (a) Contrast-enhanced computed tomographic scan through the head of the pancreas. A subtle neuroendocrine neoplasm was not identified ( arrows ). (b) Follow-up fat-saturated T1-weighted magnetic resonance imaging 2 months later demonstrated a low–signal intensity area in the head of the pancreas ( arrows ). This was considered a false-negative finding. This error was classified as causing minor morbidity, because the diagnosis was delayed but there was no significant change in the size of the lesion in the interval and no development of metastases. The patient went on to have surgical resection and did well clinically.

Figure 3, (a) Follow-up contrast-enhanced computed tomography for evaluation of carcinoid metastases in a 64-year-old woman. The report indicated the presence of an inferior vena cava (IVC) thrombus ( black arrow ), in addition to the known liver metastasis ( white arrow ). The scan was acquired in an early portal venous phase, and the apparent filling defect was due to admixture of unopacified blood from the lower extremities with contrast-opacified venous blood returning from the kidneys. This is an example of a false-positive error. Although the patient may have had anxiety related to the original report, follow-up Doppler ultrasound 4 days later (b) confirmed patency of the IVC ( white arrow ). This error was therefore felt to be of no clinical consequence.

Table 2

Classification and Rates of Error Types

| Error Type | n | % |
| --- | --- | --- |
| Perceptual: false-negative | 145 | 65 |
| Satisfaction of search | 45 | 20 (31% of false-negatives) |
| Cognitive | 62 | 28 |
| Technical or other | 9 | 4 |
| Communication | 8 | 4 |
| Ordering issue | 6 | 3 |
| Perceptual: false-positive | 5 | 2 |

The sum of the errors by type exceeds the number of error cases identified, because several cases involved more than one type of error.
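The arithmetic behind this footnote can be checked directly from the counts in Table 2 (excluding satisfaction-of-search errors, which are a subset of the false-negatives): the per-type counts sum to more than the 222 error cases because some cases carried more than one error type.

```python
# Counts from Table 2; satisfaction-of-search (n = 45) is a subset of
# false-negatives, so it is excluded to avoid double-counting a subcategory.
error_cases = 222
counts = {
    "false-negative": 145,
    "cognitive": 62,
    "technical/other": 9,
    "communication": 8,
    "ordering": 6,
    "false-positive": 5,
}

# Type counts sum to 235, exceeding the 222 cases, because several
# cases involved more than one type of error.
total_by_type = sum(counts.values())

# Percentages are computed against the number of error cases, matching Table 2.
percentages = {k: round(100 * v / error_cases) for k, v in counts.items()}
```

Running this reproduces the table's percentages (eg, 145/222 ≈ 65% for false-negatives) and shows `total_by_type` coming out at 235, 13 more than the 222 cases.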

Figure 4, A 47-year-old woman with cervical cancer. (a) On this T2-weighted coronal magnetic resonance image, there are para-aortic lymph nodes that were not described. The report did describe the primary cervical cancer with local staging, as well as other incidental findings in the kidneys, but these lymph nodes were not reported ( arrow ). This was classified as a satisfaction-of-search error (a type of false-negative error) because only a few sequences of the upper abdomen were included in the field of view on this magnetic resonance imaging study, and most images were focused on the pelvis for local staging. The subsequent unenhanced axial computed tomographic scan performed 1 week later (b) as part of the overall staging shows the para-aortic lymphadenopathy ( arrow ).


Table 3

Rates of Reported Errors by Organ or Anatomic Structure in the Abdomen and Pelvis

| Organ | n | % |
| --- | --- | --- |
| Liver | 37 | 17 |
| Kidneys | 29 | 13 |
| Lymph nodes | 26 | 12 |
| Large bowel/appendix | 26 | 12 |
| Bones | 23 | 10 |
| Omentum/peritoneum | 16 | 7 |
| Vascular | 14 | 6 |
| Gynecologic organs | 14 | 6 |
| Pancreas | 13 | 6 |
| Subcutaneous tissue | 9 | 4 |
| Gallbladder/biliary system | 7 | 3 |
| Small bowel | 6 | 3 |
| Adrenal | 6 | 3 |
| Lung bases | 5 | 2 |
| Stomach | 4 | 2 |
| Spleen | 2 | 1 |
| Bladder | 1 | <1 |
| Esophagus | 1 | <1 |
| Scrotum | 1 | <1 |
| Prostate | 1 | <1 |


Table 4

Evaluation of Errors by Patient Morbidity and Mortality

| Degree of Morbidity and Mortality | n | % |
| --- | --- | --- |
| No change to patient outcome | 91 | 41 |
| Minor morbidity | 89 | 40 |
| Moderate morbidity | 39 | 18 |
| Mortality | 3 | 1 |


Discussion


Conclusions



This post is licensed under CC BY 4.0 by the author.