Are Radiologists’ Goals for Mammography Accuracy Consistent with Published Recommendations?

Rationale and Objectives

Mammography quality assurance programs have been in place for more than a decade. We studied radiologists’ self-reported performance goals for accuracy in screening mammography and compared them to published recommendations.

Materials and Methods

A mailed survey of radiologists at mammography registries in seven states within the Breast Cancer Surveillance Consortium (BCSC) assessed radiologists’ performance goals for interpreting screening mammograms. Self-reported goals were compared with the desirable ranges published by the American College of Radiology (ACR) for recall rate, false-positive rate, positive predictive value of biopsy recommendation (PPV2), and cancer detection rate. Whether radiologists’ goals fell within the desirable ranges was evaluated for associations with their demographic characteristics, clinical experience, and receipt of audit reports.

Results

The survey response rate was 71% (257 of 364 radiologists). The percentage of radiologists reporting goals within the desirable ranges was 79% for recall rate, 22% for false-positive rate, 39% for PPV2, and 61% for cancer detection rate. Reported goals ranged from 0% to 100% for both false-positive rate and PPV2. Primary academic affiliation, more hours of breast imaging continuing medical education, and receipt of audit reports at least annually were associated with desirable PPV2 goals. Radiologists reporting desirable cancer detection rate goals were more likely to have interpreted mammograms for 10 or more years and to interpret more than 1000 mammograms per year.

Conclusion

Many radiologists report goals for their accuracy in interpreting screening mammograms that fall outside published desirable benchmarks, particularly for false-positive rate and PPV2, indicating an opportunity for education.

Of all the specialties within radiology, breast imaging lends itself particularly well to objective assessment of interpretive performance. As information technology infrastructure in medicine develops, such assessment may become feasible in more specialties. Benchmarks for desirable interpretation in breast imaging have been published for the United States and Europe. Many countries now mandate that audit performance data be collected and reviewed so that administrators and radiologists know how well they are performing. It is not clear, however, what impact these efforts to collect and review audit data are having on individual radiologists, or whether radiologists have goals for their own performance that align with published benchmarks.

Educational studies have shown that when clinicians understand that a gap exists between their performance and national targets, they can be predisposed to change their behavior. Interpretive accuracy in mammography could be improved if radiologists are motivated by recognizing a gap between their individual performance and desirable benchmarks. One web-based continuing medical education (CME) intervention presented individual radiologists with their own recall rate data alongside the rates of a large cohort of their peers. Radiologists with inappropriately high recall rates were able to formulate specific plans to improve their recall rates based on this recognition of a need to improve. For improvements to occur, radiologists must recognize the difference between their own performance and desired targets, which is potentially feasible given the collection and review of Mammography Quality Standards Act audit data. However, it is not clear whether radiologists are aware of common desirable performance goal ranges.
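
To make the gap-recognition idea concrete, the sketch below compares a single radiologist's audit metrics against the desirable goal ranges used in this study and reports which measures fall outside them. This is a minimal illustration only: the `flag_gaps` helper and all numeric inputs are hypothetical, and the ranges are those listed in Table 1, not output from the BCSC.

```python
# Illustrative sketch only: compare one radiologist's audit metrics against
# the desirable goal ranges listed in Table 1 and flag any gaps.
# All metric values below are hypothetical; flag_gaps is not part of any
# real audit software.

DESIRABLE_RANGES = {  # percent; from Table 1
    "recall_rate": (2.0, 10.0),
    "false_positive_rate": (2.0, 10.0),
    "ppv2": (25.0, 40.0),
}

def flag_gaps(metrics: dict[str, float]) -> dict[str, str]:
    """Classify each metric as below, within, or above its desirable range."""
    verdicts = {}
    for name, value in metrics.items():
        low, high = DESIRABLE_RANGES[name]
        if value < low:
            verdicts[name] = f"{value:.1f}% is below the desirable {low:g}-{high:g}% range"
        elif value > high:
            verdicts[name] = f"{value:.1f}% is above the desirable {low:g}-{high:g}% range"
        else:
            verdicts[name] = f"{value:.1f}% is within the desirable {low:g}-{high:g}% range"
    return verdicts

# A hypothetical radiologist with an inappropriately high recall rate:
for metric, verdict in flag_gaps(
    {"recall_rate": 14.2, "false_positive_rate": 12.9, "ppv2": 31.0}
).items():
    print(f"{metric}: {verdict}")
```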

Materials and methods

Design, Testing, and Radiologist Survey Administration

Table 1

Definitions of Mammography Performance Measures and Goals

| Performance Measure | Survey Definition | BI-RADS Manual Definition∗ | 25th–75th Percentile Performance Range in the BCSC† | Desirable Performance Goals (%)‡ |
| --- | --- | --- | --- | --- |
| Recall rate | Percentage of all screens with a positive assessment leading to immediate additional workup | Percentage of examinations interpreted as positive (for screening exams, BI-RADS categories 0, 4, and 5; for diagnostic exams, BI-RADS categories 4 and 5). RR = positive examinations / all examinations | 6.4–13.3 | 2–10§ |
| False-positive rate | Percentage of all screens interpreted as positive in which no cancer is present | Not available | 7.5–14.0 | 2–10§ |
| Positive predictive value of biopsy recommendation (PPV2) | Percentage of all screens with a biopsy or surgical consultation recommendation that resulted in cancer | Percentage of all screening or diagnostic examinations recommended for biopsy or surgical consultation (BI-RADS categories 4 and 5) that resulted in a tissue diagnosis of cancer within 1 year. PPV2 = true positives / examinations recommended for biopsy | 18.8–32.0 | 25–40 |
| Cancer detection rate | Number of cancers detected by mammography per 1000 screens | Number of cancers correctly detected at mammography per 1000 patients examined at mammography | 3.2–5.8 | 2–10 |

BI-RADS, Breast Imaging Reporting and Data System; RR, recall rate.
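
For readers who want to see the audit arithmetic behind these definitions, here is a minimal sketch that computes the four measures from per-screen records. The `Screen` record layout and the `audit` function are assumptions made for illustration; they are not the BCSC's actual data model or software.

```python
# A minimal sketch of the audit arithmetic behind Table 1's definitions.
# The Screen record layout is assumed for illustration only.

from dataclasses import dataclass

POSITIVE_SCREENING = {0, 4, 5}  # BI-RADS categories counted as positive at screening

@dataclass
class Screen:
    birads: int               # BI-RADS assessment category for the screen
    biopsy_recommended: bool  # biopsy or surgical consultation recommended
    cancer_within_1yr: bool   # tissue diagnosis of cancer within 1 year

def audit(screens: list[Screen]) -> dict[str, float]:
    """Compute the four performance measures defined in Table 1.

    Assumes at least one screen is supplied.
    """
    n = len(screens)
    positives = [s for s in screens if s.birads in POSITIVE_SCREENING]
    biopsies = [s for s in screens if s.biopsy_recommended]
    return {
        # RR = positive examinations / all examinations
        "recall_rate_pct": 100 * len(positives) / n,
        # positive screens in which no cancer is present, over all screens
        "false_positive_rate_pct": 100 * sum(not s.cancer_within_1yr for s in positives) / n,
        # PPV2 = true positives / examinations recommended for biopsy
        "ppv2_pct": 100 * sum(s.cancer_within_1yr for s in biopsies) / len(biopsies) if biopsies else 0.0,
        # cancers detected at a positive screen, per 1000 screens
        "cancer_detection_per_1000": 1000 * sum(s.cancer_within_1yr for s in positives) / n,
    }
```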

Definition of Desirable Performance Goals

Statistical Analysis

Results

Figure 1. Radiologists’ self-reported goals for performance measures: (a) recall rate, (b) false-positive rate, (c) positive predictive value of biopsy, and (d) cancer detection rate per 1000 screening mammograms. Vertical lines indicate the desirable goal ranges.

Figure 2. Radiologists’ reported performance goals for recall rate, false-positive rate, positive predictive value of biopsy recommendation (PPV2), and cancer detection rate (CDR) relative to desirable goal ranges and peer cohort benchmarks. (a) Goals relative to American College of Radiology desirable goal ranges, categorized as no response, less than the desirable range, greater than the desirable range, or within the desirable range. (b) Goals relative to peer cohort benchmark quartiles, categorized as no response, lowest quartile (0%–24%), average performance (25%–75%), or highest quartile (76%–100%).

Table 2

Demographic and Clinical Experience Characteristics of Radiologists and Their Self-reported Mammography Performance Goals within Desirable Goal Ranges

Values are the percentage of radiologists whose self-reported performance goals fell within the desirable goal range.∗

| Characteristic | n | Recall | False Positive | PPV2 | Cancer Detection |
| --- | --- | --- | --- | --- | --- |
| Total | 257 | 79 | 22 | 39 | 61 |
| *Age at survey (y)* | | | | | |
| 34–44 | 69 | 72 | 20 | 36 | 52 |
| 45–54 | 89 | 85 | 26 | 45 | 74 |
| ≥55 | 99 | 77 | 19 | 35 | 55 |
| *Sex* | | | | | |
| Male | 184 | 78 | 22 | 37 | 62 |
| Female | 73 | 79 | 22 | 44 | 58 |
| *Affiliation with academic medical center* | | | | | |
| No | 208 | 77 | 21 | 38 | 61 |
| Adjunct | 24 | 75 | 29 | 25 | 50 |
| Primary | 22 | 95 | 27 | 64 | 73 |
| *Fellowship training in breast imaging* | | | | | |
| No | 236 | 77 | 21 | 38 | 60 |
| Yes | 21 | 95 | 29 | 48 | 71 |
| *Years of mammography interpretation* | | | | | |
| <10 | 56 | 73 | 16 | 36 | 50 |
| 10–19 | 91 | 82 | 20 | 44 | 69 |
| ≥20 | 109 | 79 | 27 | 37 | 60 |
| *Hours working in breast imaging* | | | | | |
| 0–30 | 42 | 79 | 24 | 38 | 62 |
| 31–40 | 57 | 82 | 21 | 37 | 63 |
| >40 | 155 | 77 | 21 | 40 | 59 |
| *Breast imaging CME hours per 3-y reporting period* | | | | | |
| 15-hr minimum | 65 | 71 | 20 | 28 | 54 |
| >15 but <30 hr | 116 | 78 | 21 | 38 | 61 |
| ≥30 hr | 74 | 86 | 26 | 50 | 66 |
| *Self-reported average mammograms/y over the past 5 y* | | | | | |
| ≤1000 | 25 | 76 | 20 | 24 | 40 |
| 1001–2000 | 81 | 75 | 23 | 37 | 65 |
| ≥2000 | 134 | 84 | 22 | 42 | 65 |
| *Frequency of receiving audit reports* | | | | | |
| Never | 22 | 82 | 14 | 14 | 50 |
| Once/y | 157 | 84 | 24 | 41 | 64 |
| >Once/y | 57 | 75 | 21 | 51 | 68 |

CME, continuing medical education; PPV, positive predictive value.

Bold indicates statistical significance at the 0.05 level.

Discussion

Acknowledgment

References

  • 1. Feig S.A.: Auditing and benchmarks in screening and diagnostic mammography. Radiol Clin North Am 2007; 45: pp. 791-800.

  • 2. American College of Radiology: ACR BI-RADS - Mammography. 4th ed. Reston, VA: American College of Radiology; 2003.

  • 3. Perry N., Broeders M., de Wolf C., et al.: European guidelines for quality assurance in breast cancer screening and diagnosis. Fourth edition–summary document. Ann Oncol 2008; 19: pp. 614-622.

  • 4. US Food and Drug Administration/Center for Devices and Radiological Health: Mammography Program. Available at: http://www.fda.gov/cdrh/mammography. Accessed May 2, 2011.

  • 5. EUREF European Reference Organisation for Quality Assured Breast Screening and Diagnostic Services. Available at: http://www.euref.org/. Accessed May 2, 2011.

  • 6. Speck M.: Best practice in professional development for sustained educational change. ERS Spectrum 1996; 14: pp. 33-41.

  • 7. Laidley T.L., Braddock I.C.: Role of adult learning theory in evaluating and designing strategies for teaching residents in ambulatory settings. Adv Health Sci Educ Theory Pract 2000; 5: pp. 43-54.

  • 8. Carney P.A., Bowles E.J., Sickles E.A., et al.: Using a tailored web-based intervention to set goals to reduce unnecessary recall. Acad Radiol 2011; 18: pp. 495-503.

  • 9. Ballard-Barbash R., Taplin S.H., Yankaskas B.C., et al.: Breast Cancer Surveillance Consortium: a national mammography screening and outcomes database. AJR Am J Roentgenol 1997; 169: pp. 1001-1008.

  • 10. Breast Cancer Surveillance Consortium (NCI): BCSC Collaborations: FAVOR. Available at: http://www54.imsweb.com/collaborations/favor.html. Accessed May 2, 2011.

  • 11. Elmore J.G., Aiello Bowles E.J., Geller B., et al.: Radiologists’ attitudes and use of mammography audit reports. Acad Radiol 2010; 17: pp. 752-760.

  • 12. Elmore J.G., Jackson S.L., Abraham L., et al.: Variability in interpretive performance at screening mammography and radiologists’ characteristics associated with accuracy. Radiology 2009; 253: pp. 641-651.

  • 13. Bassett L.W., Hendrick R.E., Bassford T.L., et al.: Quality determinants of mammography. Clinical practice guideline No. 13. Rockville, MD: Agency for Health Care Policy and Research; 1994.

  • 14. Rosenberg R.D., Yankaskas B.C., Abraham L.A., et al.: Performance benchmarks for screening mammography. Radiology 2006; 241: pp. 55-66.

  • 15. Breast Cancer Surveillance Consortium: BCSC Screening Performance Benchmarks. 2009. Available at: http://breastscreening.cancer.gov/data/benchmarks/screening. Accessed May 2, 2011.

  • 16. Institute of Medicine: Improving Breast Imaging Quality Standards. Washington, DC: The National Academies Press; 2005.

  • 17. Miglioretti D.L., Gard C.C., Carney P.A., et al.: When radiologists perform best: the learning curve in screening mammogram interpretation. Radiology 2009; 253: pp. 632-640.

  • 18. Perry N.: Interpretive skills in the National Health Service Breast Screening Programme: performance indicators and remedial measures. Semin Breast Dis 2003; 6: pp. 108-113.

  • 19. Buist D.S., Anderson M.L., Haneuse S.J., et al.: Influence of annual interpretive volume on screening mammography performance in the United States. Radiology 2011; 259: pp. 72-84.

  • 20. Hebert-Croteau N., Roberge D., Brisson J., et al.: Provider’s volume and quality of breast cancer detection and treatment. Breast Cancer Res Treat 2007; 105: pp. 117-132.

  • 21. Fletcher S.W., Elmore J.G.: False-positive mammograms - can the USA learn from Europe? Lancet 2005; 365: pp. 7-8.

  • 22. Lindfors K.K., O’Connor J., Parker R.A., et al.: False-positive screening mammograms: effect of immediate versus later work-up on patient stress. Radiology 2001; 218: pp. 247-253.

  • 23. Brewer N.T., Salz T., Lillie S.E., et al.: Systematic review: the long-term effects of false-positive mammograms. Ann Intern Med 2007; 146: pp. 502-510.

  • 24. Allen S.: Cancer scares grow as screening rises; better tests sought to reduce anxiety. The Boston Globe 2007.

  • 25. Nelson H.D., Tyne K., Naik A., et al.: Screening for breast cancer: an update for the U.S. Preventive Services Task Force. Ann Intern Med 2009; 151: pp. 727-737, W237-W242.

  • 26. Asch D.A., Jedrziewski M.K., Christakis N.A., et al.: Response rates to mail surveys published in medical journals. J Clin Epidemiol 1997; 50: pp. 1129-1136.

This post is licensed under CC BY 4.0 by the author.