A Review of Research Into the Development of Radiologic Expertise: Implications for Computer-Based Training

Rationale and Objectives

Studies of radiologic error reveal high levels of variation between radiologists. Although it is known that experts outperform novices, we have only limited knowledge about radiologic expertise and how it is acquired.

Materials and Methods

This review identifies three areas of research: studies of the impact of experience and related factors on the accuracy of decision-making; studies of the organization of expert knowledge; and studies of radiologists’ perceptual processes.

Results and Conclusion

Interpreting evidence from these three paradigms in the light of recent research into perceptual learning and studies of the visual pathway yields a number of conclusions for the training of radiologists, particularly for the design of computer-based learning programs that can illustrate the similarities and differences between diagnoses, provide access to large numbers of cases, and help identify weaknesses in the way trainees build up a global representation from fixated regions.

In 1999, the Institute of Medicine’s Committee on Quality of Health Care in America produced the report “To Err is Human,” which contained the headline-grabbing statistic that as many as 98,000 Americans might be dying each year as a result of medical error (1). The report was highly influential and led professional organizations and government agencies, not just in the United States, to pay increased attention to the problem of medical error and the concept of patient safety.

Most studies of error in radiology report significant rates of “observer variability” or disagreement with a gold standard. A review by Goddard et al suggests a range of 2%–20% for clinically significant or major error across radiologic investigations (2). More recently, van Rijn et al compared spiral computed tomography (CT) with magnetic resonance imaging (MRI) in a series of patients suspected of having herniated lumbar discs and found only moderate levels of agreement between observers, who, for example, disagreed on herniation at CT evaluation in 12% of discs (3). Halligan et al asked experts, consultants, and experienced trainees to report a consecutive series of 20 double-contrast barium enema studies (4). They found that experts misclassified 23% of cases, whereas consultants misclassified 31% and trainees 34%. Manning et al report a missed lesion rate of 27.2% in a study in which four radiologists viewed 120 chest x-rays containing 81 lung nodules (5). Monnier-Cholley et al carried out a receiver operating characteristic (ROC) study of lung cancer detection from a selected set of chest x-rays and noted that overall consistency in observer detection of lung nodules was poor (the LROC area, A_z, being 0.54 ± 0.024 for residents and 0.561 ± 0.033 for staff radiologists) (6). These findings should be considered in the light of others suggesting that clinically insignificant errors predominate over significant ones and that audits of large numbers of routine cases reveal lower error rates than laboratory studies based on selected cases ( ).
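Several of the studies above summarize performance as the area under an ROC (or LROC) curve. One way to build intuition for such figures is that the nonparametric estimate of the ROC area equals the probability that a randomly chosen abnormal case receives a higher suspicion rating than a randomly chosen normal case. The Python sketch below illustrates this with invented ratings; the `roc_area` helper and its data are hypothetical and are not taken from the cited studies.

```python
# Minimal sketch: nonparametric ROC area from confidence ratings, via its
# equivalence to the Mann-Whitney U statistic. Illustrative values only.

def roc_area(abnormal, normal):
    """Probability that a random abnormal case is rated more suspicious
    than a random normal case, counting ties as one half."""
    wins = 0.0
    for a in abnormal:
        for n in normal:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(abnormal) * len(normal))

# Hypothetical 5-point suspicion ratings from a single reader.
abnormal = [5, 4, 4, 3, 2]  # cases containing a lesion
normal = [3, 2, 2, 1, 1]    # lesion-free cases
print(f"Estimated ROC area: {roc_area(abnormal, normal):.3f}")  # 0.900
```

For an ordinary ROC curve an area of 0.5 corresponds to chance, which gives a sense of how weak the detection performance reported above is.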

Many possible solutions to the problem have been proposed, including the use of computer aids, multiple human readers, and improvements in working conditions ( ). The evidence about the effectiveness of computer aids is increasingly negative ( ), constraints on manpower limit the scope for double reading, and there is such variety in working conditions that general suggestions for improvement are of little help. Another key variable that needs to be addressed is radiologist skill, or expertise. We know that some radiologists are better than others and that experts do better than novices. But what do we know about the nature of this difference? And does what we know help us to understand how training can best enhance radiologists’ abilities? In the rest of this article, I first review the research into radiologic expertise and then summarize some more general findings about perception and experience. In the conclusion, the reviewed research on expertise is interpreted in the light of this more general work in order to draw some conclusions for radiologic training.

The science of radiologic expertise

Studies of Medical Decision-Making

Figure 1. The 95% confidence intervals for false-positive rate and sensitivity of radiologists classified by age, years since qualification (greater or less than 5 years), annual volume, and whether or not they focussed on screening, for (a) first and (b) subsequent screening visits. (Reprinted by permission of Oxford University Press.)
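As a reminder of how interval estimates like those in Figure 1 are typically obtained for proportions such as sensitivity, here is a minimal sketch of a 95% Wilson score interval; the counts are hypothetical and the method is a standard choice, not necessarily the one used in the underlying study.

```python
# Minimal sketch: 95% Wilson score interval for a proportion such as
# sensitivity (detected cancers / all cancers). Hypothetical counts.
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

lo, hi = wilson_interval(85, 100)  # e.g. a reader detecting 85 of 100 cancers
print(f"Sensitivity 0.85, 95% CI ({lo:.3f}, {hi:.3f})")
```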

Studies of the Organization and Structure of Radiologic Knowledge

Studies of Radiologists’ Perceptual Processes

Figure 2. Example of the scan paths of two radiologists reading the same case. The small circles show the locations of the fixations; the light circles show the location of a malignant lesion, and the dark circles the areas that attracted more than 1 s of visual dwell. (© 2003 IEEE.)
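The dark circles in Figure 2 mark regions attracting more than 1 s of visual dwell. To make the measure concrete, the sketch below sums fixation durations falling within a fixed radius of a point of interest; the fixation format, radius, and data are assumptions for illustration, not details of the cited study.

```python
# Minimal sketch: total visual dwell near a region of interest, from a
# list of fixations given as (x, y, duration_ms). Hypothetical data.
from math import hypot

def dwell_ms(fixations, cx, cy, radius=50.0):
    """Sum the durations (ms) of fixations within `radius` pixels of (cx, cy)."""
    return sum(d for x, y, d in fixations if hypot(x - cx, y - cy) <= radius)

fixations = [(410, 300, 500), (420, 310, 550), (800, 620, 180)]
lesion_dwell = dwell_ms(fixations, cx=415, cy=305)  # 1050 ms
print(f"Dwell on lesion region: {lesion_dwell} ms "
      f"({'more' if lesion_dwell > 1000 else 'not more'} than 1 s)")
```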

Understanding the nature of perceptual learning

Figure 3. The reverse hierarchy theory of perceptual learning. The processing of visual information moves through a hierarchy of levels, with lower levels being responsible for detecting simple image features such as lines or edges, and higher levels performing the recognition or classification of objects. If perceptual learning resulted from changes in the processing of low-level features, we would expect the learning to be specific to the low-level features of the stimulus (such as contrast or orientation). Such low-level learning seems to occur, but only after higher-level learning has taken place and only if task demands are high (for example, if the images are noisy).

The relationship between cognition and perception in radiology

Implications for radiologic training

Figure 4. This diagram shows how innovative presentation can help teach classification. The tool displays a two-dimensional projection of a conceptual space, created by characterising cases along different axes. This can help trainees get a sense of how instances of a disease are distributed around a typical case. The distance between a case of unknown diagnosis and the typical presentations of candidate diagnoses can then be used to aid classification.
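The distance-based idea in Figure 4 can be made concrete with a small sketch: if each case is characterised along numeric axes, the "typical case" for a diagnosis can be taken as the mean of its examples, and an unknown case assigned to the nearest such centroid. Everything below (feature values, diagnosis names) is invented for illustration and is not drawn from the tool described.

```python
# Minimal sketch of nearest-centroid classification over a 2D conceptual
# space: each diagnosis's "typical case" is the mean of its examples.
from math import dist

training = {
    "diagnosis_A": [(0.9, 0.2), (0.8, 0.3), (1.0, 0.1)],  # hypothetical features
    "diagnosis_B": [(0.2, 0.8), (0.3, 0.9), (0.1, 0.7)],
}

# Typical case per diagnosis: the component-wise mean of its examples.
centroids = {
    dx: tuple(sum(axis) / len(cases) for axis in zip(*cases))
    for dx, cases in training.items()
}

def classify(case):
    """Assign `case` to the diagnosis whose typical case is nearest."""
    return min(centroids, key=lambda dx: dist(case, centroids[dx]))

print(classify((0.85, 0.25)))  # -> diagnosis_A
```

In practice the axes would come from characterising real cases (and a projection such as PCA could reduce them to two dimensions for display), but the distance computation is the same.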

Conclusions

Acknowledgment

References

  • 1. Kohn L.T., Corrigan J.M., Donaldson M.S.: Executive summary. In: To err is human: building a safer health system. Washington, DC: Institute of Medicine, National Academy Press; 2000.

  • 2. Goddard P., Leslie A., Jones A., et al.: Error in radiology. Br J Radiol 2001; 74: pp. 949-951.

  • 3. van Rijn J.C., Klemetso N., Reitsma J.B., et al.: Observer variation in the evaluation of lumbar herniated discs and root compression: spiral CT compared with MRI. Br J Radiol 2006; 79: pp. 372-377.

  • 4. Halligan S., Marshall M., Taylor S., et al.: Observer variation in the detection of colorectal neoplasia on double-contrast barium enema: implications for colorectal cancer screening and training. Clin Radiol 2003; 58: pp. 948-954.

  • 5. Manning D.J., Ethell S.C., Donovan T.: Detection or decision errors? Br J Radiol 2004; 77: pp. 231-235.

  • 6. Monnier-Cholley L., Carrat F., Cholley B.P., et al.: Detection of lung cancer on radiographs: receiver operating characteristic analyses of radiologists’, pulmonologists’, and anesthesiologists’ performance. Radiology 2004; 233: pp. 799-805.

  • 7. Erly W.K., Berger W.G., Krupinski E., et al.: Radiology resident evaluation of head CT scan orders in the emergency department. Am J Neuroradiol 2002; 23: pp. 103-107.

  • 8. Lal N.R., Murray U.M., Eldevik O.P., et al.: Clinical consequences of misinterpretations of neuroradiologic CT scans by on-call radiology residents. Am J Neuroradiol 2000; 21: pp. 124-129.

  • 9. Roehrig J.: The manufacturer’s perspective. Br J Radiol 2005; 78: pp. S41-S45.

  • 10. Johnston K., Brown J.: Two view mammography at incident screens: cost effectiveness analysis of policy options. BMJ 1999; 319: pp. 1097-1102.

  • 11. Laming D., Warren R.: Improving the detection of cancer in the screening of mammograms. J Med Screen 2000; 7: pp. 24-33.

  • 12. Fenton J.J., Taplin S.H., Carney P.A., et al.: Influence of computer-aided detection on performance of screening mammography. N Engl J Med 2007; 356: pp. 1399-1409.

  • 13. Wigton R.: Social judgement theory and medical judgement. Thinking and Reasoning 1996; 2: pp. 175-190.

  • 14. Tversky A., Kahneman D.: Judgment under uncertainty: heuristics and biases. Science 1974; 185: pp. 1124-1131.

  • 15. Harries C., Evans J. St. B.T., Dennis I.: Measuring doctors’ self-insight into their treatment decisions. Appl Cognit Psychol 2000; 14: pp. 455-477.

  • 16. Graber M., Gordon R., Franklin N.: Reducing diagnostic errors in medicine: what’s the goal? Acad Med 2002; 77: pp. 981-992.

  • 17. Swensson R.G., Hessel S.J., Herman P.G.: The value of searching films without specific preconceptions. Invest Radiol 1985; 20: pp. 100-114.

  • 18. Berbaum K.S., Franken E.A., Dorfman D.D., et al.: Influence of clinical history upon detection of nodules and other lesions. Invest Radiol 1988; 23: pp. 48-55.

  • 19. Loy C.T., Irwig L.: Accuracy of diagnostic tests read with and without clinical information: a systematic review. JAMA 2004; 292: pp. 1602-1609.

  • 20. Dawes T.J., Vowler S.L., Allen C.M., et al.: Training improves medical student performance in image interpretation. Br J Radiol 2004; 77: pp. 775-776.

  • 21. Nodine C.F., Kundel H.L., Mello-Thoms C., et al.: How experience and training influence mammography expertise. Acad Radiol 1999; 6: pp. 575-585.

  • 22. Mello-Thoms C.: Perception of breast cancer: eye-position analysis of mammogram interpretation. Acad Radiol 2003; 10: pp. 4-12.

  • 23. Beam C.A., Conant E.F., Sickles E.A.: Association of volume and volume-independent factors with accuracy in screening mammogram interpretation. J Natl Cancer Inst 2003; 95: pp. 282-290.

  • 24. Barlow W.E., Chi C., Carney P.A., et al.: Accuracy of screening mammography interpretation by characteristics of radiologists. J Natl Cancer Inst 2004; 96: pp. 1840-1850.

  • 25. Smith-Bindman R., Chu P., Miglioretti D.L., et al.: Physician predictors of mammographic accuracy. J Natl Cancer Inst 2005; 97: pp. 358-367.

  • 26. Esserman L., Cowley H., Eberle C., et al.: Improving the accuracy of mammography: volume and outcome relationships. J Natl Cancer Inst 2002; 94: pp. 369-375.

  • 27. Wagner R.F., Beam C.A., Beiden S.V.: Reader variability in mammography and its implications for expected utility over the population of readers and cases. Med Decis Making 2004; 24: pp. 561-572.

  • 28. Bartlett F.C.: Remembering: a study in experimental and social psychology. Cambridge, UK: Cambridge University Press; 1932.

  • 29. Schank R., Abelson R.: Scripts, plans, goals and understanding. Hillsdale, NJ: Erlbaum; 1977.

  • 30. Lesgold A., Rubinson H., Feltovich P., et al.: Expertise in a complex skill: diagnosing X-ray pictures. In: Chi M.T.H., Glaser R., Farr M.J. (eds): The nature of expertise. Hillsdale, NJ: LEA; 1988.

  • 31. Norman G.R.: The epistemology of clinical reasoning: perspectives from philosophy, psychology, and neuroscience. Acad Med 2000; 75: pp. S127-S135.

  • 32. Norman G.: Research in clinical reasoning: past history and current trends. Med Educ 2005; 39: pp. 418-427.

  • 33. Kulatunga-Moruzi C., Brooks L.R., Norman G.R.: Coordination of analytic and similarity-based processing strategies and expertise in dermatological diagnosis. Teach Learn Med 2001; 13: pp. 110-116.

  • 34. Ark T.K., Brooks L.R., Eva K.W.: Giving learners the best of both worlds: do clinical teachers need to guard against teaching pattern recognition to novices? Acad Med 2006; 81: pp. 405-409.

  • 35. Kulatunga-Moruzi C., Brooks L.R., Norman G.R.: Using comprehensive feature lists to bias medical diagnosis. J Exp Psychol Learn Mem Cogn 2004; 30: pp. 563-572.

  • 36. Kundel H.: Visual search in medical images. In: Beutel J., Kundel H.L., Van Metter R.L. (eds): The handbook of medical imaging, volume 1: physics and psychophysics. Bellingham, WA: SPIE Press; 2000.

  • 37. Nodine C.F., Mello-Thoms C.: The nature of expertise in radiology. In: Beutel J., Kundel H.L., Van Metter R.L. (eds): The handbook of medical imaging, volume 1: physics and psychophysics. Bellingham, WA: SPIE Press; 2000.

  • 38. Kundel H.L., Nodine C.F., Carmody D.P.: Visual scanning, pattern recognition and decision making in pulmonary nodule detection. Invest Radiol 1978; 13: pp. 175-181.

  • 39. Kundel H., La Follette P.S.: Visual search patterns and experience with radiological images. Radiology 1972; 103: pp. 523-528.

  • 40. Mello-Thoms C., Hardesty L., Sumkin J., et al.: Effects of lesion conspicuity on visual search in mammogram reading. Acad Radiol 2005; 12: pp. 830-840.

  • 41. Kundel H., Nodine C.F.: Interpreting chest radiographs without visual search. Radiology 1975; 116: pp. 527-532.

  • 42. Mugglestone M., Gale A.G., Cowley H.C., et al.: Diagnostic performance on briefly presented mammographic images. Proc SPIE Med Imaging 1995; 2436: pp. 106-115.

  • 43. Christensen E.E., Murry R.C., Holland K., et al.: The effect of search time on perception. Radiology 1981; 138: pp. 361-365.

  • 44. Nodine C.F., Mello-Thoms C., Kundel H.L., et al.: Time course of perception and decision making during mammographic interpretation. AJR Am J Roentgenol 2002; 179: pp. 917-923.

  • 45. Mello-Thoms C., Dunn S.M., Nodine C.F., et al.: The perception of breast cancers: a spatial frequency analysis of what differentiates missed from reported cancers. IEEE Trans Med Imaging 2003; 22: pp. 1297-1306.

  • 46. Marr D.: Vision. San Francisco: W. H. Freeman; 1982.

  • 47. Ahissar M., Hochstein S.: The reverse hierarchy theory of visual perceptual learning. Trends Cogn Sci 2004; 8: pp. 457-464.

  • 48. Sowden P.T., Davies I.R., Roling P.: Perceptual learning of the detection of features in X-ray images: a functional role for improvements in adults’ visual sensitivity? J Exp Psychol Hum Percept Perform 2000; 26: pp. 379-390.

  • 49. Haller S., Radue E.W.: What is different about a radiologist’s brain? Radiology 2005; 236: pp. 983-989.

  • 50. Bar M., Kassam K.S., Ghuman A.S., et al.: Top-down facilitation of visual recognition. Proc Natl Acad Sci U S A 2006; 103: pp. 449-454.

  • 51. Thompson K.G., Biscoe K.L., Sato T.R.: Neuronal basis of covert spatial attention in the frontal eye field. J Neurosci 2005; 25: pp. 9479-9487.

  • 52. Treue S.: Visual attention: the where, what, how and why of saliency. Curr Opin Neurobiol 2003; 13: pp. 428-432.

  • 53. Thompson K.G., Bichot N.P., Sato T.R.: Frontal eye field activity before visual search errors reveals the integration of bottom-up and top-down salience. J Neurophysiol 2005; 93: pp. 337-351.

  • 54. Collins J.: Medical education research: challenges and opportunities. Radiology 2006; 240: pp. 639-647.

  • 55. Sharples M., Jeffery N.P., du Boulay B., et al.: Structured computer-based training in the interpretation of neuroradiological images. Int J Med Inform 2000; 60: pp. 263-280.

  • 56. Ericsson K.A.: Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004; 79: pp. S70-S81.

  • 57. Lloyd S., Jirotka M., Simpson A.C., et al.: Digital mammography: a world without film? Methods Inf Med 2005; 44: pp. 168-171.

  • 58. Gentili A., Chung C.B., Hughes T.: Informatics in radiology: use of the MIRC DICOM service for clinical trials to automatically create teaching file cases from PACS. Radiographics 2007; 27: pp. 269-275.

  • 59. Kundel H.L., Nodine C.F., Conant E.F., Weinstein S.P.: Holistic component of image perception in mammogram interpretation: gaze-tracking study. Radiology 2007; 242: pp. 396-402.
