
The Influence of a Vocalized Checklist on Detection of Multiple Abnormalities in Chest Radiography

Rationale and Objectives

Although a checklist has been recommended for preventing satisfaction of search (SOS) errors, a previous study did not demonstrate that benefit. However, observers in that study had to turn away from the image display to use the checklist. The current study tested a vocalized checklist to remove this constraint.

Materials and Methods

A total of 64 chest computed radiographs, half containing various “test” abnormalities, were read twice by 20 radiologists, once with and once without the addition of a simulated pulmonary nodule. Readers used a vocalized checklist to direct their search. Receiver operating characteristic (ROC) detection accuracy and decision thresholds were analyzed to study the effects of adding the nodule on detection of the test abnormalities.

Results

Adding nodules induced a substantial reluctance to report the other abnormalities (P < 0.001), as had been the case in the most recent study of the SOS effect in radiography.

Conclusions

The vocalized checklist neither reduced nor eliminated the SOS effect on readiness to report further abnormalities. Although useful for organizing search and reporting, particularly among students, a vocalized checklist does not prevent SOS effects.

Introduction

Laboratory studies conducted in 1990 and 2000 demonstrated a satisfaction of search (SOS) effect in chest radiography, with reduced accuracy in detecting native abnormalities on chest radiographs in the presence of simulated pulmonary nodules. A more recent study suggested that adding nodules may not always reduce detection accuracy for the other abnormalities, but rather induces a reluctance to report them. The authors suggested that this change may reflect changes in the practice and training of radiologists related to the ascendance of three-dimensional imaging modalities.

Checklists have been recommended to counteract SOS errors in radiology. Using the same radiographs as the 1990 and 2000 papers, a 2006 experiment studied whether self-prompting can prevent reader errors due to SOS. A printed checklist was produced as a booklet containing a page for each case, which required the reader to explicitly report on each checklist item (e.g., neck; mediastinum; heart/vessels; lungs; pleura; abdomen; bones). The results indicated no SOS effect on detection accuracy; instead, detection accuracy appeared to be reduced even when the added nodule was not present. The authors suggested that the checklist may have interfered with the radiologists’ visual search, because for some readers the order of elements in the printed checklist differed from the order they prefer in the clinic. The checklist may also have interrupted search because readers had to take their eyes off the display and look at the booklet to follow the checklist.

Materials and Methods

Experimental Conditions

Figure 1, Constructs for the experimental conditions. The satisfaction of search (SOS) condition presents with a pulmonary nodule (a), and the non-SOS condition presents without a pulmonary nodule (b). The same native abnormality, a gallstone, appears in both (a) (black arrow) and (b). A simulated pulmonary nodule has been digitally placed in (a) (black arrow). In all other respects, the two examinations are identical.

Computed Radiography Examinations

Vocalized Checklist

Table 1

Standard Checklist Order with Variants

Standard checklist: Neck; mediastinum; heart/vessels; lungs; pleura; abdomen; bones
Variant 1: Lungs; pleura; heart; mediastinum; vessels; abdomen; bones
Variant 2: Abdomen; bones; airway; heart/mediastinum; vessels; diaphragms; lungs/pleura
Variant 3: Trachea; lungs/pleura; heart; mediastinum; vessels; bones; abdomen
Variant 4: Neck/trachea; mediastinum/vessels; pleura; lungs; abdomen; bones
Variant 5: Bones/soft tissue; abdomen; neck; mediastinum; heart; pleura; lungs
Variant 6: Airway; neck; heart/vessels; mediastinum; lungs; pleura; abdomen; bones
Variant 7: Diaphragms; abdomen; soft tissue; bones; trachea; mediastinum; heart; lungs
Variant 8: Lungs; trachea; heart/mediastinum; abdomen; bones; soft tissue
Variant 9: Neck; mediastinum; heart/vessels; abdomen; lungs; pleura; bones
Variant 10: Abdomen; soft tissue; bones; heart; mediastinum; vessels; trachea; lungs
Variant 11: Airway; bones; soft tissue/pleura; diaphragms; heart; lungs

Image Display

Research Participants

Procedure

Scoring

Statistical Analysis

Terminology

Shift in Decision Thresholds

Detection Accuracy

Inspection Time

Results

Figure 2, (a) Each point is the average of 20 readers, and the false-positive coordinates are based only on false-positive responses that were not nodules. Open circles represent receiver operating characteristic (ROC) points from the non-satisfaction of search (SOS) condition, and open diamonds represent ROC points from the SOS condition. (b) A magnified view of the operating points in (a), highlighting differences between the treatment conditions. These points suggest a major threshold shift toward more conservative reporting in the SOS condition.

Figure 3, (a) Each point is the average of 20 readers, and the false-positive coordinates are based on all false-positive responses regardless of the type of abnormality reported. Open circles represent receiver operating characteristic (ROC) points from the non-satisfaction of search (SOS) condition, and open diamonds represent ROC points from the SOS condition. (b) A magnified view of the operating points in (a), highlighting differences between the treatment conditions. These points suggest a major threshold shift toward more conservative reporting in the SOS condition.

Decision Thresholds

Table 2

Analysis of Thresholds

| Measure | Non-SOS Condition | SOS Condition | F(1,18) | P |
| --- | --- | --- | --- | --- |
| **True-positive fractions** | | | | |
| Most conservative threshold | 0.25 | 0.22 | 0.98 | 0.34 |
| Most lenient threshold | 0.67 | 0.57 | 10.18 | 0.005 |
| Center of range | 0.46 | 0.40 | 6.87 | 0.017 |
| Width of range | 0.42 | 0.35 | 4.90 | 0.040 |
| **False-positive fractions (non-nodule abnormality only)** | | | | |
| Most conservative threshold | 0.03 | 0.02 | 0.24 | 0.63 |
| Most lenient threshold | 0.24 | 0.18 | 6.50 | 0.020 |
| Center of range | 0.14 | 0.10 | 5.76 | 0.027 |
| Width of range | 0.21 | 0.15 | 5.51 | 0.031 |
| **False-positive fractions (any abnormality)** | | | | |
| Most conservative threshold | 0.04 | 0.03 | 1.38 | 0.26 |
| Most lenient threshold | 0.31 | 0.23 | 5.50 | 0.031 |
| Center of range | 0.18 | 0.13 | 6.53 | 0.020 |
| Width of range | 0.27 | 0.21 | 3.20 | 0.090 |

SOS, satisfaction of search.
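The center and width measures in Table 2 are simple functions of each reader's most conservative and most lenient operating points. A minimal sketch (illustrative only, not the study's code) of how these quantities would be derived on the true-positive axis:

```python
# Illustrative sketch: center and width of a reader's operating range,
# the quantities summarized in Table 2. Not the study's actual code.

def operating_range(tpf_conservative, tpf_lenient):
    """Return (center, width) of the operating range on the TPF axis,
    given the TPFs at the most conservative and most lenient thresholds."""
    center = (tpf_conservative + tpf_lenient) / 2
    width = tpf_lenient - tpf_conservative
    return center, width

# Group means from Table 2, non-SOS condition: conservative 0.25, lenient 0.67.
center, width = operating_range(0.25, 0.67)
# center is ~0.46 and width is ~0.42, matching the table's center/width rows.
```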

Table 3

Analysis of ROC Accuracy Using True-positive Fraction at False-positive Fraction = 0.1

| Method | Non-SOS Condition | SOS Condition | Difference | F(1,19) | P |
| --- | --- | --- | --- | --- | --- |
| **Using all false-positive responses** | | | | | |
| Empirical method | 0.45 | 0.42 | 0.03 | 0.79 | 0.39 |
| Contaminated binormal model | 0.45 | 0.41 | 0.04 | 2.75 | 0.11 |
| **Using only non-nodule false-positive responses** | | | | | |
| Empirical method | 0.50 | 0.45 | 0.05 | 2.72 | 0.12 |
| Contaminated binormal model | 0.49 | 0.44 | 0.05 | 5.47 | 0.03 |

ROC, receiver operating characteristic; SOS, satisfaction of search.
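The "empirical method" in Table 3 refers to reading the true-positive fraction directly off the empirical ROC curve at a fixed false-positive fraction of 0.1. One common way to do this is linear interpolation between adjacent operating points; the sketch below uses hypothetical operating points, not the study's data:

```python
# Illustrative sketch of the empirical-ROC reading in Table 3:
# interpolate TPF at a target FPF from a reader's operating points.
# The points below are hypothetical, not taken from the study.

def empirical_tpf_at(fpf_target, points):
    """Linearly interpolate TPF at fpf_target from (FPF, TPF) operating
    points sorted by increasing FPF."""
    for (f0, t0), (f1, t1) in zip(points, points[1:]):
        if f0 <= fpf_target <= f1:
            return t0 + (fpf_target - f0) / (f1 - f0) * (t1 - t0)
    raise ValueError("target FPF outside the range of the operating points")

# Hypothetical reader: operating points from most conservative to most
# lenient threshold, bracketed by the trivial (0, 0) and (1, 1) corners.
points = [(0.0, 0.0), (0.03, 0.25), (0.14, 0.46), (0.24, 0.67), (1.0, 1.0)]
tpf_at_01 = empirical_tpf_at(0.1, points)  # falls between 0.25 and 0.46
```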

Diagnostic Accuracy

Inspection Time

Table 4

Wilcoxon Signed-rank Test Comparison of Non-SOS Counts and SOS Counts in Different Phases of Search

| Response Type, Search Phase | Non-SOS Condition (Average Count) | SOS Condition (Average Count) | P |
| --- | --- | --- | --- |
| **True-positives** | | | |
| All phases | 21 | 17 | 0.0003 |
| Gestalt phase | 7 | 6 | 0.0059 |
| Checklist item, prompted | 10 | 9 | 0.21 |
| Checklist item, unprompted | 4 | 3 | 0.09 |
| **False-positives naming non-nodule abnormalities** | | | |
| All phases | 17 | 13 | 0.0015 |
| Gestalt phase | 4 | 3 | 0.21 |
| Checklist item, prompted | 11 | 8 | 0.0063 |
| Checklist item, unprompted | 3 | 3 | 0.67 |

SOS, satisfaction of search.
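The Wilcoxon signed-rank test used for Table 4 compares paired per-reader counts between the two conditions. The sketch below implements an exact small-sample version from scratch, under the simplifying assumptions of no zero differences and no tied absolute differences, on hypothetical per-reader true-positive counts; it is illustrative only and not the study's analysis code:

```python
from itertools import product

def wilcoxon_signed_rank_exact(x, y):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.
    Simplifying assumptions (illustration only): no zero differences and
    no ties among the absolute differences."""
    diffs = [a - b for a, b in zip(x, y)]
    # Rank the differences 1..n by absolute magnitude.
    ranks = {d: r for r, d in enumerate(sorted(diffs, key=abs), start=1)}
    w_plus = sum(ranks[d] for d in diffs if d > 0)  # sum of positive ranks
    n = len(diffs)
    mean_w = n * (n + 1) / 4  # null mean of the W+ statistic
    # Null distribution: enumerate all 2^n equally likely sign patterns.
    dist = [sum(r for s, r in zip(signs, range(1, n + 1)) if s)
            for signs in product([0, 1], repeat=n)]
    extreme = sum(1 for w in dist if abs(w - mean_w) >= abs(w_plus - mean_w))
    return w_plus, extreme / len(dist)  # statistic, two-sided p-value

# Hypothetical per-reader true-positive counts (NOT the study's data).
non_sos = [22, 19, 24, 18, 21, 23, 20, 25]
sos = [17, 16, 16, 19, 15, 21, 16, 18]
w_plus, p = wilcoxon_signed_rank_exact(non_sos, sos)
# Most readers report more true positives without the nodule, so p is small.
```

In practice a library routine such as `scipy.stats.wilcoxon` would be used instead of hand-enumerating sign patterns; the explicit enumeration is shown only to make the test's logic visible.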

Discussion

Acknowledgment

References

  • 1. Berbaum K.S., Franken E.A., Dorfman D.D., et. al.: Satisfaction of search in diagnostic radiology. Invest Radiol 1990; 25: pp. 133-140.

  • 2. Berbaum K.S., Dorfman D.D., Franken E.A., et. al.: Proper ROC analysis and joint ROC analysis of the satisfaction of search effect in chest radiography. Acad Radiol 2000; 7: pp. 945-958.

  • 3. Berbaum K.S., Krupinski E.A., Schartz K.M., et. al.: Satisfaction of search in chest radiography 2015. Acad Radiol 2015; 22: pp. 1457-1465.

  • 4. Kinard R.E., Orrison W.W., Brogdon B.G.: The value of a worksheet in reporting body-CT examinations. AJR Am J Roentgenol 1986; 147: pp. 848-849.

  • 5. Samuel S., Kundel H.L., Nodine C.F., et. al.: Mechanism of satisfaction of search: eye position recordings in the reading of chest radiographs. Radiology 1995; 194: pp. 895-902.

  • 6. Berbaum K.S., Franken E.A., Caldwell R.T., et. al.: Can a checklist reduce SOS errors in chest radiography?. Acad Radiol 2006; 13: pp. 296-304.

  • 7. Sistrom C.L., Langlotz C.P.: A framework for improved radiology reporting. J Am Coll Radiol 2005; 2: pp. 61-67.

  • 8. Sistrom C.: Radiology checklists, satisfaction of search, and the talking template concept. Acad Radiol 2006; 13: pp. 922-923.

  • 9. Schartz K.M., Berbaum K.S., Caldwell R.T., et. al.: WorkstationJ: workstation emulation software for medical image perception and technology evaluation research. 2007. SPIE, Bellingham, WA: pp. 1-11.

  • 10. Schartz K.M., Berbaum K.S., Caldwell R.T., et. al.: WorkstationJ as ImageJ plugin for medical image studies. Annual Meeting of the Society for Imaging Informatics in Medicine (SIIM)—9th Annual SIIM Research and Development Symposium, Charlotte, NC. http://www.siimweb.org/assets/FCBE219A-C30B-4003-9892-FACA9230AB91.pdf

  • 11. Swets J.A., Pickett R.M.: Evaluation of diagnostic systems: methods from signal detection theory. 1982. Academic Press, New York: pp. 39.

  • 12. BMDP2V: release 8.0. Copyright 1993 by BMDP Statistical Software, Inc. Statistical Solutions Ltd., Cork, Ireland. http://www.statsol.ie

  • 13. Dorfman D.D., Berbaum K.S.: A contaminated binormal model for ROC data—Part II. A formal model. Acad Radiol 2000; 7: pp. 427-437.

  • 14. Dorfman D.D., Berbaum K.S., Metz C.E.: Receiver operating characteristic rating analysis: generalization to the population of readers and patients with the jackknife method. Invest Radiol 1992; 27: pp. 723-731.

  • 15. Hillis S.L., Berbaum K.S., Metz C.E.: Recent developments in the Dorfman-Berbaum-Metz procedure for multireader ROC study analysis. Acad Radiol 2008; 15: pp. 647-661.

  • 16. Berbaum K.S., Franken E.A., Dorfman D.D., et. al.: Time course of satisfaction of search. Invest Radiol 1991; 26: pp. 640-648.

This post is licensed under CC BY 4.0 by the author.