
Increasing Prevalence Expectation in Thoracic Radiology Leads to Overcall

Rationale and Objectives

The aim of this study was to measure the effect of prevalence expectation, as determined by clinical history, on radiologists' diagnostic performance during pulmonary nodule detection on adult chest radiographs.

Materials and Methods

A multi-observer, counterbalanced study (half of the readers in each group read under a different condition first) was performed to assess the effect of abnormality expectation on experienced radiologists' performance. A total of 33 board-certified radiologists, divided into three groups, searched for evidence of malignancy on a single set of 47 posteroanterior (PA) chest radiographs, 10 of which contained a single pulmonary nodule. The radiologists were unaware of the disease prevalence. Each group read the same dataset twice, once under each of two of the three conditions defined by differing clinical information (previous cancer, no history, visa applicant). Localization sensitivity, specificity, and the jackknife free-response receiver operating characteristic (JAFROC) figure of merit were used to compare radiologist performance between conditions.
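
The full Methods are not included in this excerpt, so the study's exact scoring rules are not shown here. The following is a minimal Python sketch of how per-reader localization sensitivity and image-based specificity could be computed from free-response marks, assuming an acceptance-radius rule; the radius value, data structures, and function names are illustrative and not taken from the study.

```python
import math

# Minimal scoring sketch (not the authors' code). Each reader mark and each
# true nodule is represented as (image_id, x, y). ACCEPTANCE_RADIUS is an
# assumed value used only for illustration.
ACCEPTANCE_RADIUS = 50  # pixels (assumption, not from the study)

def score_reader(marks, nodule_truth, normal_image_ids):
    """Return (localization sensitivity, specificity) for one reader."""
    # A nodule counts as localized if any mark on its image lies within the
    # acceptance radius of the true nodule centre.
    localized = 0
    for img, tx, ty in nodule_truth:
        if any(m_img == img and math.hypot(mx - tx, my - ty) <= ACCEPTANCE_RADIUS
               for m_img, mx, my in marks):
            localized += 1
    localization_sensitivity = localized / len(nodule_truth)

    # Image-based specificity: a normal image is a true negative only if the
    # reader placed no marks on it.
    marked_images = {m_img for m_img, _, _ in marks}
    true_negatives = sum(1 for img in normal_image_ids if img not in marked_images)
    specificity = true_negatives / len(normal_image_ids)
    return localization_sensitivity, specificity
```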

Results

A significant reduction in specificity was shown for the cancer condition compared to the visa condition (W = −41, P = 0.02). No other significant differences were found for this or the other condition comparisons, and no significant difference in performance was noted between radiologists viewing images under the same condition.

Conclusions

This study suggests that specificity is reduced under a high compared to a low prevalence expectation created by specific radiological contexts. A reduction in specificity can have important clinical consequences, leading to unnecessary interventions. The results and their implications emphasize the care that should be taken to provide accurate clinical information at referral.

Introduction

Previous research exploring the effect of making clinical information available during the radiologist's interpretation process has produced mixed results. Berbaum et al. and White et al. suggested that clinical information increases diagnostic accuracy, whereas other researchers concluded that it has no effect. Berbaum et al. further suggested that clinical prompts can influence search patterns, which may aid the perception of certain abnormalities but hinder the perception of others.

Although a number of studies, within both medical and nonmedical domains, have indicated that target prevalence can affect performance, less research has been undertaken on the effect of prevalence expectation. Expectation bias occurs when expectations about an outcome influence a subject's behavior, and in radiology such expectations are a factor in almost every diagnosis. They may also be shaped by the reading task, for example reporting a routine session of chest radiographs versus interpreting images from a chest cancer clinic. One paper by Reed et al. explored this prevalence-expectation issue using 30 posteroanterior (PA) chest radiographs with a fixed lung nodule prevalence of 50%. A total of 22 board-certified radiologists read the same image set twice, each time after being given explicit prior information about the prevalence: either the true prevalence (15 of 30) or a falsely stated high (22 of 30) or low (9 of 30) prevalence rate. This varying prior information had little effect on diagnostic performance in terms of receiver operating characteristic (ROC) values, although the number of fixations and the time spent interpreting each image increased at higher stated prevalence rates. However, Reed et al. informed the radiologists of the prevalence of abnormal images before each read, which, unlike the present study, does not reflect the clinical situation, where radiologists cannot know the true prevalence of abnormalities in the cases they are about to report; it therefore does not address the problem of expectation.


Materials and Methods

Subjects


Table 1

Details of Participating Radiologists

| Group | Number | Mean Years Post-ABR Certification | Range of Years Post-ABR Certification |
|---|---|---|---|
| A | 10 | 26 | 10–38 |
| B | 10 | 22 | 8–36 |
| C | 13 | 23 | 6–37 |

ABR, American Board of Radiology.


Image Bank


Table 2

Location and Size of Nodules on the 10 Abnormal Images

| Case | Conspicuity | Size (mm) | Size (pixels) | Location |
|---|---|---|---|---|
| 1 | 4 | 10 | 35.70 | Lt lower lobe |
| 2 | 3 | 26 | 92.82 | Lt lower lobe |
| 3 | 3 | 14 | 49.98 | Lingula |
| 4 | 3 | 15 | 53.55 | Lt lower lobe |
| 5 | 3 | 23 | 82.11 | Rt lower lobe |
| 6 | 3 | 8 | 28.56 | Rt upper lobe |
| 7 | 3 | 13 | 46.41 | Lingula |
| 8 | 3 | 26 | 92.82 | Rt upper lobe |
| 9 | 3 | 25 | 89.25 | Rt middle lobe |
| 10 | 3 | 12 | 42.84 | Lt upper lobe |
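
A quick consistency check on Table 2 (an inference from the table itself, not a value stated in this excerpt): the pixel sizes are exactly 3.57 times the millimetre sizes, which implies a pixel spacing of roughly 0.28 mm. A short Python sketch verifying this:

```python
# Consistency check of Table 2: size in pixels = size in mm * 3.57, which
# implies an apparent pixel spacing of ~0.28 mm (inferred from the table,
# not stated in this excerpt).
sizes_mm = [10, 26, 14, 15, 23, 8, 13, 26, 25, 12]
sizes_px = [35.70, 92.82, 49.98, 53.55, 82.11, 28.56, 46.41, 92.82, 89.25, 42.84]

for mm, px in zip(sizes_mm, sizes_px):
    assert abs(mm * 3.57 - px) < 0.01

print(f"implied pixel spacing ≈ {1 / 3.57:.2f} mm")  # 0.28 mm
```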


Viewing


Readers and Groups


Reader Instructions


Analysis


Results


Table 3

Comparison A: Cancer versus No History. Localization Sensitivity, Specificity, and Performance Data Are Shown with P and W Values for the Intergroup Comparisons

| Group A | Localization sensitivity, Cancer | Localization sensitivity, No History | Specificity, Cancer | Specificity, No History | JAFROC, Cancer | JAFROC, No History |
|---|---|---|---|---|---|---|
| Reader 1 | 0.70 | 0.70 | 0.59 | 0.59 | 0.69 | 0.69 |
| Reader 2 | 0.80 | 0.40 | 0.65 | 0.73 | 0.81 | 0.61 |
| Reader 3 | 0.40 | 0.50 | 0.22 | 0.35 | 0.36 | 0.51 |
| Reader 4 | 0.80 | 0.60 | 0.38 | 0.46 | 0.70 | 0.65 |
| Reader 5 | 0.80 | 0.40 | 0.62 | 0.49 | 0.81 | 0.50 |
| Reader 6 | 0.50 | 0.70 | 0.19 | 0.24 | 0.48 | 0.56 |
| Reader 7 | 0.90 | 1.00 | 0.49 | 0.46 | 0.84 | 0.92 |
| Reader 8 | 0.50 | 0.70 | 0.30 | 0.35 | 0.59 | 0.58 |
| Reader 9 | 0.30 | 0.50 | 0.67 | 0.51 | 0.52 | 0.47 |
| Reader 10 | 0.40 | 0.60 | 0.62 | 0.56 | 0.55 | 0.64 |
| Median | 0.60 | 0.60 | 0.54 | 0.47 | 0.64 | 0.60 |
| P value | 0.95 | | 0.95 | | 0.66 | |
| Test statistic | W = −2.0 | | W = −2.0 | | F(1,9) = 0.21 | |

JAFROC, jackknife free-response receiver operating characteristic.


Table 4

Comparison B: Cancer versus Visa. Localization Sensitivity, Specificity, and Performance Data Are Shown with P and W Values for the Intergroup Comparisons. The Asterisk Demonstrates the Significant Difference

| Group B | Localization sensitivity, Cancer | Localization sensitivity, Visa | Specificity, Cancer | Specificity, Visa | JAFROC, Cancer | JAFROC, Visa |
|---|---|---|---|---|---|---|
| Reader 1 | 0.70 | 0.80 | 0.59 | 0.51 | 0.69 | 0.82 |
| Reader 2 | 0.70 | 0.70 | 0.70 | 0.81 | 0.72 | 0.75 |
| Reader 3 | 0.40 | 0.80 | 0.22 | 0.73 | 0.36 | 0.85 |
| Reader 4 | 0.80 | 0.70 | 0.38 | 0.68 | 0.69 | 0.83 |
| Reader 5 | 0.80 | 0.60 | 0.38 | 0.46 | 0.70 | 0.64 |
| Reader 6 | 0.80 | 0.60 | 0.65 | 0.76 | 0.73 | 0.78 |
| Reader 7 | 0.90 | 0.80 | 0.49 | 0.49 | 0.84 | 0.77 |
| Reader 8 | 0.40 | 0.10 | 0.78 | 0.81 | 0.61 | 0.49 |
| Reader 9 | 0.50 | 0.40 | 0.30 | 0.43 | 0.59 | 0.43 |
| Reader 10 | 0.30 | 0.30 | 0.67 | 0.81 | 0.51 | 0.55 |
| Median | 0.70 | 0.65 | 0.54 | 0.70 | 0.70 | 0.76 |
| P value | 0.73 | | 0.02\* | | 0.84 | |
| Test statistic | W = 5.0 | | W = −41.00 | | F(1,9) = 0.07 | |

JAFROC, jackknife free-response receiver operating characteristic.
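
The W statistics reported in Tables 3-5 are consistent with a paired Wilcoxon signed-rank comparison of per-reader values between the two conditions (rank the non-zero absolute differences, reattach the signs, and sum). The sketch below applies that calculation to the Group B specificity columns using the rounded values printed in Table 4; because of that rounding it yields W = −40 rather than the published −41, and the variable names and use of SciPy are illustrative assumptions rather than the authors' analysis code.

```python
from scipy.stats import rankdata, wilcoxon

# Per-reader specificity under the two conditions (rounded values from Table 4).
spec_cancer = [0.59, 0.70, 0.22, 0.38, 0.38, 0.65, 0.49, 0.78, 0.30, 0.67]
spec_visa   = [0.51, 0.81, 0.73, 0.68, 0.46, 0.76, 0.49, 0.81, 0.43, 0.81]

# Signed-rank sum: drop zero differences, rank the absolute differences,
# reattach the signs, and sum.
diffs = [round(c - v, 2) for c, v in zip(spec_cancer, spec_visa)]
diffs = [d for d in diffs if d != 0.0]
ranks = rankdata([abs(d) for d in diffs])
W = sum(r if d > 0 else -r for d, r in zip(diffs, ranks))
print("signed-rank W =", W)  # -40.0 with these rounded table values

# Two-sided P value from the standard Wilcoxon signed-rank test.
print(wilcoxon(spec_cancer, spec_visa))
```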


Table 5

Comparison C: No History versus Visa. Localization Sensitivity, Specificity, and Performance Data Are Shown with P and W Values for the Intergroup Comparisons

| Group C | Localization sensitivity, Visa | Localization sensitivity, No History | Specificity, Visa | Specificity, No History | JAFROC, Visa | JAFROC, No History |
|---|---|---|---|---|---|---|
| Reader 1 | 0.80 | 0.70 | 0.51 | 0.59 | 0.82 | 0.69 |
| Reader 2 | 0.40 | 0.40 | 0.27 | 0.62 | 0.36 | 0.50 |
| Reader 3 | 0.40 | 0.80 | 0.51 | 0.46 | 0.44 | 0.74 |
| Reader 4 | 0.40 | 0.60 | 0.76 | 0.76 | 0.57 | 0.73 |
| Reader 5 | 0.80 | 0.50 | 0.73 | 0.35 | 0.85 | 0.58 |
| Reader 6 | 0.40 | 0.60 | 0.70 | 0.70 | 0.55 | 0.67 |
| Reader 7 | 0.40 | 0.40 | 0.43 | 0.43 | 0.49 | 0.47 |
| Reader 8 | 0.60 | 0.60 | 0.46 | 0.46 | 0.64 | 0.65 |
| Reader 9 | 0.90 | 1.00 | 0.49 | 0.46 | 0.78 | 0.92 |
| Reader 10 | 0.50 | 0.20 | 0.65 | 0.76 | 0.64 | 0.48 |
| Reader 11 | 0.40 | 0.70 | 0.43 | 0.35 | 0.42 | 0.58 |
| Reader 12 | 0.80 | 0.70 | 0.46 | 0.65 | 0.69 | 0.69 |
| Reader 13 | 0.30 | 0.50 | 0.81 | 0.51 | 0.55 | 0.47 |
| Median | 0.40 | 0.60 | 0.51 | 0.51 | 0.57 | 0.65 |
| P value | 0.47 | | 1.00 | | 0.57 | |
| Test statistic | W = 15.00 | | W = 1.00 | | F(1,26) = 0.33 | |

JAFROC, jackknife free-response receiver operating characteristic.


Discussion


References

  • 1. Berbaum K.S., Franken E.A., Dorfman D.D., et. al.: Influence of clinical history upon detection of nodules and other lesions. Invest Radiol 1988; 23: pp. 48-55.

  • 2. Aideyan O.U., Berbaum K., Smith W.L.: Influence of prior radiologic information on the interpretation of radiographic examinations. Acad Radiol 1995; 2: pp. 205-208.

  • 3. Berbaum K.S., Franken E.A.: Commentary: does clinical history affect perception?. Acad Radiol 2006; 13: pp. 402-403.

  • 4. White K., Berbaum K.S., Smith W.L.: The role of previous radiographs and reports in the interpretation of current radiographs. Invest Radiol 1994; 29: pp. 263-265.

  • 5. Cooperstein L.A., Good B.C., Eelkema E.A., et. al.: The effect of clinical history on chest radiograph interpretation in a PACS environment. Invest Radiol 1990; 25: pp. 670-674.

  • 6. Good B.C., Cooperstein L.A., DeMarino G.B.: Does knowledge of the clinical history affect the accuracy of chest radiograph interpretation?. AJR Am J Roentgenol 1990; 154: pp. 709-712.

  • 7. Swensson R.G.: The effects of clinical information on film interpretation: another perspective. Invest Radiol 1988; 23: pp. 56-61.

  • 8. Kundel H.L.: Disease prevalence and the index of detectability: a survey of studies of lung cancer detection by chest radiography. Proceedings of SPIE 2000.

  • 9. Gur D., Rockette H.E., Armfield D.R., et. al.: Prevalence effect in a laboratory environment. Radiology 2007; 228: pp. 10-14.

  • 10. Wolfe J.M., Horowitz T.S.: Low target prevalence is a stubborn source of errors in visual search tasks. J Exp Psychol 2007; 136: pp. 623-638.

  • 11. Reed W.M., Ryan J.T., McEntee M.F., et. al.: The effect of abnormality-prevalence expectation on expert observer performance and visual search. Radiology 2011; 258: pp. 938-943.

  • 12. Nocum D.J., Brennan P.C., Huang R.T., et. al.: The effect of abnormality-prevalence expectation on naive observer performance and visual search. Radiography 2013; 19: pp. 196-199.

  • 13. Reed W.M., Chow S.C., Chew L.E., et. al.: Can prevalence expectations drive radiologists’ behaviour?. Acad Radiol 2014; 21: pp. 450-456.

  • 14. Popp D., Williams J.B.W., Boehm P., et. al.: The role of expectation bias in neurocognition clinical trials. Alzheimers Dement 2012; 8: pp. 589-590.

  • 15. Larson D.B.: Changing radiologists’ expectation: false information versus years of experience. Radiology 2011; 261: pp. 327.

  • 16. Beutel J., Kundel H.L., Van Metter R.L.: Handbook of medical imaging. USA: SPIE Press, 2000; pp. 838.

  • 17. Shiraishi J., Katsuragawa S., Ikezoe J., et. al.: Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists’ detection of pulmonary nodules. AJR Am J Roentgenol 2000; 174: pp. 71-74.

  • 18. Samei E., Badano A., Chakraborty D., et. al.: Assessment of display performance for medical imaging systems. American Association of Physicists in Medicine Task Group 18. Available at: http://www.aapm.org/pubs/reports/OR_03.pdf. Accessed June 21, 2015.

  • 19. Haygood T.M., Ryan J., Brennan P.C., et. al.: On the choice of acceptance radius in free-response observer performance studies. Br J Radiol 2012; 86: pp. 42313554.

  • 20. Chakraborty D.P.: Recent advances in observer performance methodology: jackknife free-response ROC (JAFROC). Radiat Prot Dosimetry 2005; 114: pp. 26-31.

  • 21. Kahneman D., Tversky A.: On the psychology of prediction. Psychol Rev 1973; 80: pp. 237-251.

  • 22. Massad C.M., Hubbard M., Newtson D.: Selective perception of events. J Exp Psychol 1979; 15: pp. 513-532.

  • 23. Sterzer P., Frith C., Petrovic P.: Believing is seeing: expectations alter visual awareness. Curr Biol 2008; 18: pp. R697-R698.

  • 24. Pines J.M.: Profiles in patient safety: confirmation bias in emergency medicine. Acad Emerg Med 2008; 13: pp. 90-94.

  • 25. Deyo R.A.: Cascade effects of medical technology. Annu Rev Public Health 2002; 23: pp. 23-44.

  • 26. Mold J.W., Stein H.F.: The cascade effect in the clinical care of patients. N Engl J Med 1986; 314: pp. 512-514.

  • 27. Elmore J.G., Taplin S.H., Barlow W.E., et. al.: Does litigation influence medical practice? the influence of community radiologists’ medical malpractice perceptions and experience on screening mammography. Radiology 2005; 236: pp. 37-46.

  • 28. Mohammed T.L.H., White C.S., Pugatch R.D.: The imaging manifestations of lung cancer. Semin Roentgenol 2005; 40: pp. 98-108.

  • 29. Haygood T.M., Qing Liu M.A., Galvan E.M., et. al.: Memory for previously viewed radiographs and the effect of prior knowledge of memory task. Acad Radiol 2013; 20: pp. 1598-1603.

  • 30. Soh B.P., Lee W., McEntee M.F., et. al.: Screening mammography: test set data can reasonably describe actual clinical reporting. Radiology 2013; 268: pp. 46-53.
