Assessing Resident Performance in Screening Mammography

Rationale and Objectives

This study aims to provide objective performance data and feedback, including examination volumes, recall rates, and concordance with faculty interpretations, for residents performing independent interpretation of screening mammography examinations.

Method and Materials

Residents (r) and faculty (f) interpret screening mammograms separately and classify each examination as a callback (CB) or non-callback (NCB). Residents review all discordant results. The numbers of concordant interpretations (fCB-rCB and fNCB-rNCB) and discordant interpretations (fCB-rNCB and fNCB-rCB) are entered into a macro-driven spreadsheet. The macros weight these data according to the perceived clinical impact of the resident's decision, and the weighted outcomes are combined with examination volumes to generate a weighted mammography performance score. Rotation-specific goals are assigned for the weighted score, screening volume, recall rate relative to faculty, and concordance rate; residents receive one point for achieving each goal.
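
To make the spreadsheet's bookkeeping concrete, the sketch below turns per-rotation counts of the four faculty/resident outcome categories into concordance, recall-ratio, and weighted-score values. The clinical-impact weights and the volume normalization are illustrative assumptions only; the actual weights built into the study's macros are not given in this excerpt, and all names are hypothetical.

```python
# Sketch of the per-rotation arithmetic described above. The WEIGHTS are
# illustrative stand-ins for the clinical-impact weights used in the
# study's spreadsheet macros, which are not specified in this excerpt.
from dataclasses import dataclass

WEIGHTS = {
    "fCB_rCB": 2.0,    # concordant callback
    "fNCB_rNCB": 2.0,  # concordant non-callback
    "fNCB_rCB": 1.0,   # resident recalls a case faculty would not (over-call)
    "fCB_rNCB": 0.0,   # resident misses a faculty callback (highest clinical impact)
}

@dataclass
class RotationCounts:
    fCB_rCB: int    # faculty callback, resident callback
    fNCB_rNCB: int  # faculty non-callback, resident non-callback
    fNCB_rCB: int   # faculty non-callback, resident callback
    fCB_rNCB: int   # faculty callback, resident non-callback

    @property
    def volume(self) -> int:
        return self.fCB_rCB + self.fNCB_rNCB + self.fNCB_rCB + self.fCB_rNCB

def concordance_rate(c: RotationCounts) -> float:
    """Fraction of resident reads that match the faculty interpretation."""
    return (c.fCB_rCB + c.fNCB_rNCB) / c.volume

def recall_ratio(c: RotationCounts) -> float:
    """Resident callbacks relative to faculty callbacks (rCB:fCB)."""
    return (c.fCB_rCB + c.fNCB_rCB) / (c.fCB_rCB + c.fCB_rNCB)

def weighted_performance_score(c: RotationCounts) -> float:
    """Impact-weighted outcomes normalized by examination volume."""
    total = sum(WEIGHTS[k] * getattr(c, k) for k in WEIGHTS)
    return total / c.volume

# Hypothetical rotation: 300 screens, 25 resident callbacks, 18 faculty callbacks.
rot = RotationCounts(fCB_rCB=14, fNCB_rNCB=271, fNCB_rCB=11, fCB_rNCB=4)
print(round(concordance_rate(rot), 2), round(recall_ratio(rot), 2))  # 0.95 1.39
```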

Results

Between July 2013 and May 2017, 18,747 mammography examinations were reviewed by 31 residents over 71 resident rotations and 246 resident weeks. The mean resident recall rate was 9.9% and decreased significantly with resident level (R2 = 11.3%, R3 = 9.4%, R4 = 9.2%). The mean resident-faculty discordance rate was 10% and decreased significantly from 12% (R2) to 9.6% (R4). Weighted performance scores ranged from 1.1 to 2.0 (mean 1.6, standard deviation 0.17) but did not change with rotation experience. Residents had a mean goal achievement score of 2.6 (standard deviation 0.47).

Conclusions

This method provides residents with easily accessible, individualized, case-by-case screening outcome data over the longitudinal course of their residency and offers an objective way to assess resident screening mammography performance.

Introduction

Resident training in screening mammography is challenging from both the resident learner and the faculty educator perspectives. Unlike in other subspecialties of diagnostic radiology, residents rarely view mammograms outside of their dedicated mammography rotation and therefore tend to have less familiarity with the range of normal mammographic findings and manifestations of pathology. Furthermore, workstation requirements are tightly regulated and expensive, which often limits residents' access to mammograms for review. Teaching screening mammography to diagnostic radiology residents poses unique challenges for the faculty. Screening paradigms in academic settings vary, often including some combination of batch interpretation, real-time reading (while patients wait), or interpretation interspersed with diagnostic breast imaging examinations. Most high-volume academic screening practices rely on batch reading. Integrating residents into the batch-reading process is particularly challenging for both residents and staff; in order for staff to maintain high levels of accuracy and efficiency, residents may be relegated to a more passive observer role. To maintain concentration, accuracy, and efficiency, staff may also be less motivated or less able to provide adequate instruction.

Assessing resident performance in screening mammography also poses challenges. What metric or combination of metrics is most important—examination volumes, interpretative speed, recall rates, overall accuracy, or false-positive and false-negative rates? In light of these challenges, evaluation of resident screening mammography skills tends to be subjective or, at most, to include only minimal performance measures such as examination volume.

Materials and Methods

Resident Screening Protocol

TABLE 1

Resident Screening Mammography Goals by Resident Year

Goal                            R2           R3           R4
Volume per week                 60           70           80
Resident/Faculty recall ratio   100%–300%    100%–250%    75%–200%
Concordance rates               >70%         >80%         >90%
Weighted score                  >1.4         >1.5         >1.6
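
As a worked illustration of the goal achievement score described in the Methods (one point per rotation-specific goal met), the sketch below checks a rotation's metrics against the Table 1 thresholds. Only the thresholds come from the table; the function, field names, and example values are hypothetical.

```python
# Illustrative tally of the goal achievement score (0-4): one point per
# rotation-specific goal achieved, using the thresholds from Table 1.

GOALS = {
    # level: (min screens/week, recall-ratio range, min concordance, min weighted score)
    "R2": (60, (1.00, 3.00), 0.70, 1.4),
    "R3": (70, (1.00, 2.50), 0.80, 1.5),
    "R4": (80, (0.75, 2.00), 0.90, 1.6),
}

def goal_achievement_score(level: str,
                           screens_per_week: float,
                           recall_ratio: float,   # resident CB rate / faculty CB rate
                           concordance: float,    # fraction of reads matching faculty
                           weighted_score: float) -> int:
    min_volume, (ratio_lo, ratio_hi), min_concordance, min_weighted = GOALS[level]
    points = 0
    points += screens_per_week >= min_volume
    points += ratio_lo <= recall_ratio <= ratio_hi
    points += concordance > min_concordance
    points += weighted_score > min_weighted
    return points

# Hypothetical R3 rotation: 75 screens/week, recall ratio 1.7,
# 89% concordance, weighted score 1.62 -> all four goals met.
print(goal_achievement_score("R3", 75, 1.7, 0.89, 1.62))  # -> 4
```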

Figure 1. The worksheet used by residents to record their screening interpretations.

Spreadsheet Development

Weighted Mammography Performance Score

Goal Achievement Score

Results

TABLE 2

Resident Screening Results per 1–4-Week Rotation

Factor                          Total     Mean     SD       Range
Resident weeks                  246       3.5      1.1      1–7
Total screening examinations    18,747    264      106      36–510
Screens per week                          79       30.9     18–181
Resident recall rate                      9.9%     4.1%     2.8%–27.9%
Staff recall rate *                       6.6%     2.6%     1.5%–13.6%
rCB:fCB ratio                             1.7      1.2      0.5–8
Resident concordance                      89%      4%       74%–99%
Resident discordance                      10%      4%       1%–26% †
Weighted performance score                1.60     0.17     1.11–2.02 ‡
Goal achievement score                    2.61     0.47     0–4

f, faculty; r, resident.

TABLE 3

Mean Resident Screening Outcomes by Resident Year; All Rotations Were Between 1 and 4 Weeks in Duration

Resident Level                         R2            R3            R4
Total resident weeks                   70            70            105

Factor                                 Mean (SD)     Mean (SD)     Mean (SD)
Total screening examinations read      286 (94)      280 (107)     241 (108)
Screening examinations per week        79 (30)       73 (32)       82 (30)
Resident recall rate                   11.3% (5)     9.4% (4.2)    9.2% (3) *
rCB:fCB ratio                          1.8 (0.9)     1.8 (0.9)     1.6 (1.2)
Resident concordance                   88% (5)       89% (3.8)     90.2% (3.7)
Resident discordance                   12% (5)       10.1% (3.8)   9.6% (3.3) ‡
Weighted performance score             1.5 (0.2)     1.6 (0.15)    1.6 (0.15)

f, faculty; r, resident; SD, standard deviation.

Bold values indicate significant differences found.

TABLE 4

Screening Goals Achieved (% Residents)

Goal                                  R2     R3     R4        All
No. of resident rotations             21     18     32        71
Volume per week                       76     50     31 *      49
Resident/Staff callback rate ratio    81     78     69        75
Concordance rates                     100    100    47 †      76
Weighted score                        76     72     56        66
All goals achieved                    48     33     6 *,†     25

Discussion

In summary, we present a model for resident education in screening mammography that appears to have great benefits for both resident learners and faculty teacher-clinicians. In a practical sense, the separation of screening interpretation into independent faculty and resident review sessions allows more efficient and less distracted faculty interpretation, and most importantly, for active resident learning. This quantitative system provides real-time feedback of resident screening mammography performance to residents as they progress through mammography rotations, and promotes self-learning and professionalism. It also affords faculty quantitative measures to better evaluate resident screening mammography skills.

References

  • 1. American College of Radiology: Mammographic accreditation requirements. American College of Radiology [Online]. August 15; Available at http://www.acraccreditation.org/~/media/ACRAccreditation/Documents/Mammography/Requirements.pdf?la=en

  • 2. Hawley J.R., Taylor C.R., Cubbison A.M., et. al.: Influences of radiology trainees on screening mammography interpretation. J Am Coll Radiol 2016; 13: pp. 554-561.

  • 3. Lewis P.J.: Breast imaging resident handbook. 2017. Department of Radiology, Geisel Medical School at Dartmouth [Online]; Available at https://geiselmed.dartmouth.edu/radiology/pdf/breast_imaging_guide_2017.pdf

  • 4. Miglioretti D.L., Gard C.C., Carney P.A., et. al.: When radiologists perform best: the learning curve in screening mammogram interpretation. Radiology 2009; 253: pp. 632-640.

  • 5. American College of Radiology: ACR BI-RADS atlas. 5th ed. 2014. American College of Radiology [Online]; Available at https://www.acr.org/Quality-Safety/Resources/BIRADS

  • 6. Grimm L.J., Zhang J., Lo J.Y., et. al.: Radiology trainee performance in digital breast tomosynthesis: relationship between difficulty and error-making patterns. J Am Coll Radiol 2016; 13: pp. 198-202.

  • 7. Collins J.: Education techniques for lifelong learning: lifelong learning in the 21st century and beyond. Radiographics 2009; 29: pp. 613-622.

  • 8. Slotnik H.B.: How doctors learn: physicians’ self-directed learning episodes. Acad Med 1999; 74: pp. 1106-1117.

  • 9. Murad M.H., Coto-Yglesias F., Varkey P., et. al.: The effectiveness of self-directed learning in health professions education: a systematic review. Med Educ 2010; 44: pp. 1057-1068.

  • 10. Nothangle M., Anandarajah G., Goldman R.E., et. al.: Struggling to be self-directed: residents’ paradoxical beliefs about learning. Acad Med 2011; 86: pp. 1539-1544.

  • 11. Shute V.J.: Focus on formative feedback. Rev Educ Res 2008; 78: pp. 153-189.

  • 12. Geller B.M., Bowles E.J., Sohng H.Y., et. al.: Radiologists’ performance and their enjoyment of interpreting screening mammograms. Am J Roentgenol 2009; 192: pp. 361-369. http://www.ajronline.org/doi/abs/10.2214/AJR.08.1647

  • 13. Baxi S.S., Snow J.G., Liberman L., et. al.: The future of mammography: radiology residents’ experiences, attitudes, and opinions. Am J Roentgenol 2010; 196: pp. 1680-1686.

  • 14. D’Orsi C., Tu S.P., Nakano C., et. al.: Current realities of delivering mammography services in the community: do challenges with staffing and scheduling exist?. Radiology 2005; 235: pp. 391-395.

  • 15. Bassett L.W., Monsees B.S., Smith R.A., et. al.: Survey of radiology residents: breast imaging training and attitudes. Radiology 2003; 227: pp. 862-869.

  • 16. ACGME: Diagnostic radiology milestones. ACGME.org [Online]. July; Available at http://www.acgme.org/Portals/0/PDFs/Milestones/DiagnosticRadiologyMilestones.pdf

  • 17. Harvey J.A., Nicholson B.T., Rochman C.M., et. al.: A milestone-based approach to breast imaging instruction for residents. J Am Coll Radiol 2014; 11: pp. 600-605.
