
Impact of an Educational Intervention Designed to Reduce Unnecessary Recall during Screening Mammography

Rationale and Objectives

The aim of this study was to describe the impact of a tailored Web-based educational program designed to reduce excessive screening mammography recall.

Materials and Methods

Radiologists enrolled in one of four mammography registries in the United States were invited to participate and were randomly assigned either to receive the intervention or to serve as controls. Controls were offered the intervention at the end of the study, and their clinical practice data were also collected. The intervention provided each radiologist with individual audit data (sensitivity, specificity, recall rate, positive predictive value, and cancer detection rate) alongside national benchmarks and peer comparisons for the same measures; profiled breast cancer risk in each radiologist's patient population to illustrate how low breast cancer risk is in population-based settings; and evaluated the possible influence of medical malpractice concerns on recall rates. Participants' recall rates in actual practice were evaluated for three time periods: the 9 months before the intervention was delivered to the intervention group (baseline), the 9 months between delivery of the intervention to the intervention group and its delivery to the control group (T1), and the 9 months after the controls completed the intervention (T2). Logistic regression models of the probability that a mammogram was recalled included indicators for study group (intervention vs control) and time period (baseline, T1, and T2), plus group-by-period interactions to determine whether the association between time period and the probability of recall differed across groups.
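The core modeling idea — a logistic regression with group, period, and their interaction — can be sketched as follows. This is a simplified illustration on synthetic data (a plain logistic model, not the study's actual analysis, which also adjusted for radiologist random effects and mammogram-level covariates); all variable names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 6000

# Synthetic mammogram-level data: study group and time period,
# with a roughly 10% overall recall probability.
df = pd.DataFrame({
    "group": rng.choice(["control", "intervention"], size=n),
    "period": rng.choice(["baseline", "T1", "T2"], size=n),
})
df["recalled"] = rng.binomial(1, 0.10, size=n)

# Logistic model with group, period, and their interaction; the
# interaction terms test whether the change in recall odds over
# time differs between the intervention and control groups.
model = smf.logit("recalled ~ C(group) * C(period)", data=df).fit(disp=0)
print(model.params)
```

With two groups and three periods, the model estimates six parameters: an intercept, one group effect, two period effects, and two group-by-period interactions; the interactions carry the comparison of interest.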

Results

Thirty-one radiologists who completed the continuing medical education intervention were included in the adjusted model comparing radiologists in the intervention group (n = 22) to radiologists who completed the intervention in the control group (n = 9). At T1, the intervention group had 12% higher odds of positive mammographic results compared to the controls, after controlling for baseline (odds ratio, 1.12; 95% confidence interval, 1.00–1.27; P = .0569). At T2, a similar association was found, but it was not statistically significant (odds ratio, 1.10; 95% confidence interval, 0.96–1.25). No associations were found among radiologists in the control group when comparing those who completed the continuing medical education intervention (n = 9) to those who did not (n = 10). In addition, no associations were found between time period and recall rate among radiologists who set realistic goals.

Conclusions

The study found a null effect, which may indicate that a single 1-hour intervention is not sufficient to reduce excessive recall among radiologists who undertook the intervention being tested.

Recall rates for screening mammography are higher in the United States compared to those in other countries. Identification of the reasons for this difference has been complex. The harms associated with unnecessary workup are now well recognized and were part of the rationale for changing the US Preventive Services Task Force screening mammography guidelines. If unnecessary recall rates could be diminished and recall brought below minimally acceptable cut points, the number of false-positive examinations could be reduced by 880 per 100,000 women screened. Although several studies have illustrated improved interpretive performance, they combined several strategies, such as audit data review, participation in a self-assessment and case review program, and increasing interpretive volume. In two of these studies, the intervention content ranged from 8 to 32 hours, which is a significant time commitment for busy clinicians. The extent to which a single interactive audit component may assist in improving performance has not been well evaluated.

We developed an interactive, Web-based intervention designed to provide peer comparison audit data and to explore individualized factors that may increase recall rates without improving cancer detection. The intervention was implemented using a randomized wait-list study design to assess its impact on reducing excessive recall. The purpose of this paper is to report the findings of this study.

Methods

Performance Data

Study Participants and Intervention Development

Data Analyses

Figure 1. Recall rate for radiologists who completed the continuing medical education (CME) course. Baseline, 9 months prior to consent; T1 intervention, 0 to 9 months after completion; T1 control, 0 to 9 months after consent; T2 intervention, 9 to 18 months after completion; T2 control, 0 to 9 months after completion.

Figure 2. Recall rate for radiologists who did not complete the continuing medical education course. Baseline, 9 months prior to consent; T1, 0 to 9 months after consent; T2, 9 to 18 months after consent.

Results

Table 1

Radiologist Characteristics for 54 Radiologists According to Assigned Study Group

Values are column percentages unless otherwise noted. The first three data columns refer to radiologists who consented to but did not complete the CME course; the last three refer to radiologists who consented to and completed the CME course.

| Characteristic | Intervention Group (n = 10) | Control or Late Intervention Group (n = 12) | P | Intervention Group (n = 23) | Control or Late Intervention Group (n = 9) | P |
| --- | --- | --- | --- | --- | --- | --- |
| Sex | | | | | | |
| Male | 70.0 | 75.0 | .79 | 52.2 | 44.4 | .69 |
| Female | 30.0 | 25.0 | | 47.8 | 55.6 | |
| Primary affiliation with academic medical center | | | | | | |
| No | 70.0 | 83.3 | .71 | 87.0 | 77.8 | .22 |
| Adjunct | 10.0 | 8.3 | | 8.7 | 0.0 | |
| Primary | 20.0 | 8.3 | | 4.3 | 22.2 | |
| Fellowship training in breast imaging | | | | | | |
| No | 80.0 | 100.0 | .10 | 95.7 | 100.0 | .53 |
| Yes | 20.0 | 0.0 | | 4.3 | 0.0 | |
| Years interpreting mammography | | | | | | |
| <10 | 30.0 | 8.3 | .34 | 13.0 | 44.4 | .15 |
| 10–19 | 40.0 | 66.7 | | 43.5 | 33.3 | |
| ≥20 | 30.0 | 25.0 | | 43.5 | 22.2 | |
| % of time spent in breast imaging | | | | | | |
| <20% | 33.3 | 10.0 | .23 | 26.1 | 11.1 | .13 |
| 20%–39% | 22.2 | 30.0 | | 39.1 | 22.2 | |
| 40%–79% | 0.0 | 30.0 | | 26.1 | 22.2 | |
| 80%–100% | 44.4 | 30.0 | | 8.7 | 44.4 | |
| Missing (n) | 1 | 2 | | | | |
| % of mammograms that are screening | | | | | | |
| <85% | 55.6 | 66.7 | .60 | 59.1 | 44.4 | .46 |
| 85%–100% | 44.4 | 33.3 | | 40.9 | 55.6 | |
| Missing (n) | 1 | | | 1 | | |
| Recall rate at baseline (mean ± standard error) | 11.0 ± 1.7 | 9.6 ± 1.2 | .49 | 11.2 ± 0.9 | 8.7 ± 1.5 | .11 |

CME, continuing medical education.

Table 2

Modeling the Probability of Being Recalled Among Study Groups

Values are ratios of odds ratios (ORs) with 95% confidence intervals (CIs). Sample sizes (n) are the numbers of radiologists included in Models 1, 2, and 3, respectively.

| Adjustment | Relative Change | Model 1: Intervention vs Controls | Model 2: Intervention Early vs Intervention Late (Controls) | Model 3: Controls Who Did Intervention Late vs Those Who Did Not |
| --- | --- | --- | --- | --- |
| Radiologist random effect (n = 44 / 32 / 21) | Baseline to T1 | 1.11 (1.01–1.23) | 1.13 (1.01–1.27) | 0.97 (0.83–1.12) |
| | Baseline to T2 | 1.08 (0.98–1.19) | 1.12 (0.99–1.27) | 0.93 (0.79–1.09) |
| Mammography registry ∗ (n = 44 / 32 / 21) | Baseline to T1 | 1.11 (1.01–1.23) | 1.13 (1.01–1.27) | 0.97 (0.83–1.12) |
| | Baseline to T2 | 1.08 (0.98–1.19) | 1.12 (0.99–1.27) | 0.93 (0.79–1.09) |
| Mammogram-level variables † (n = 44 / 32 / 21) | Baseline to T1 | 1.12 (1.02–1.23) | 1.14 (1.01–1.28) | 0.97 (0.83–1.13) |
| | Baseline to T2 | 1.08 (0.98–1.20) | 1.11 (0.98–1.26) | 0.94 (0.80–1.11) |
| Mammogram-level † and radiologist-level ‡ variables (n = 41 / 31 / 19) | Baseline to T1 | 1.11 (1.00–1.23) | 1.12 (1.00–1.27) | 0.97 (0.83–1.14) |
| | Baseline to T2 | 1.09 (0.98–1.21) | 1.10 (0.96–1.25) | 0.98 (0.83–1.16) |

CME, continuing medical education.

Baseline, 9 months prior to consenting; T1 no CME, 0 to 9 months after consent; T1 CME/early, 0 to 9 months after completion; T1 CME/late, 0 to 9 months after consent; T2 no CME, 9 to 18 months after consent; T2 CME/early, 9 to 18 months after completion; T2 CME/late, 0 to 9 months after completion.
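A "ratio of ORs" is, in effect, the exponentiated group-by-period interaction: it compares the change in recall odds from baseline to follow-up in one group against the same change in the other. A minimal sketch with hypothetical recall counts (not the study's data):

```python
# Hypothetical (recalled, not recalled) counts per group and period.
counts = {
    ("intervention", "baseline"): (112, 888),
    ("intervention", "T1"):       (118, 882),
    ("control", "baseline"):      (87, 913),
    ("control", "T1"):            (82, 918),
}

def odds(recalled, not_recalled):
    return recalled / not_recalled

# Within-group odds ratios (T1 vs baseline), then their ratio:
or_intervention = (odds(*counts[("intervention", "T1")])
                   / odds(*counts[("intervention", "baseline")]))
or_control = (odds(*counts[("control", "T1")])
              / odds(*counts[("control", "baseline")]))
ratio_of_ors = or_intervention / or_control
print(round(ratio_of_ors, 2))  # → 1.13
```

A ratio above 1 means recall odds rose more (or fell less) over time in the first group than in the second; a ratio of exactly 1 means the two groups changed in parallel.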

Discussion

Conclusions

Acknowledgments

References

  • 1. Smith-Bindman R., Chu P.W., Miglioretti D.L., et. al.: Comparison of screening mammography in the United States and the United Kingdom. JAMA 2003; 290: pp. 2129-2137.

  • 2. Hofvind S., Vacek P.M., Skelly J., et. al.: Comparing screening mammography for early breast cancer detection in Vermont and Norway. J Natl Cancer Inst 2008; 100: pp. 1082-1091.

  • 3. Elmore J.G., Wells C.K., Lee C.H., et. al.: Variability in radiologists’ interpretations of mammograms. N Engl J Med 1994; 331: pp. 1493-1499.

  • 4. Beam C.A., Layde P.M., Sullivan D.C.: Variability in the interpretation of screening mammograms by US radiologists. Arch Intern Med 1996; 156: pp. 209-213.

  • 5. Elmore J.G., Miglioretti D.L., Reisch L.M., et. al.: Screening mammograms by community radiologists: variability in false-positive rates. J Natl Cancer Inst 2002; 94: pp. 1373-1380.

  • 6. Smith-Bindman R., Chu P., Miglioretti D.L., et. al.: Physician predictors of mammographic accuracy. J Natl Cancer Inst 2005; 97: pp. 358-367.

  • 7. Woloshin S., Schwartz L.M.: The benefits and harms of mammography screening: understanding the trade-offs. JAMA 2010; 303: pp. 164-165.

  • 8. Jørgensen K.J., Klahn A., Gøtzsche P.C.: Are benefits and harms in mammography screening given equal attention in scientific articles? A cross-sectional study. BMC Med 2007; 5: pp. 12.

  • 9. Agency for Healthcare Research and Quality: U.S. Preventive Services Task Force: screening for breast cancer: systematic evidence review update for the US Preventive Services Task Force. Evidence Syntheses, No. 74. Rockville, MD: Agency for Healthcare Research and Quality; 2009.

  • 10. Carney P.A., Sickles E., Monsees B., et. al.: Identifying minimally acceptable interpretive performance criteria for screening mammography. Radiology 2010; 255: pp. 354-361.

  • 11. Perry N.M.: Breast cancer screening—the European experience. Int J Fertil Womens Med 2004; 49: pp. 228-230.

  • 12. Adcock K.A.: Initiative to improve mammogram interpretation. Permanente J 2004; 8: pp. 12-18.

  • 13. Linver M.N., Paster S., Rosenberg R.D., et. al.: Improvements in mammography interpretation skills in a community radiology practice after dedicated courses: 2-year medical audit of 38,633 cases. Radiology 1992; 184: pp. 39-43.

  • 14. Berg W.A., D’Orsi C.J., Jackson V.P., et. al.: Does training in Breast Imaging Reporting and Data System (BI-RADS) improve biopsy recommendations of feature analysis agreement with experienced breast imagers at mammography?. Radiology 2002; 224: pp. 871-880.

  • 15. American College of Radiology: Breast Imaging Reporting and Data System (BI-RADS). Reston, VA: American College of Radiology; 2004.

  • 16. Carney P.A., Geller B.M., Moffett H., et. al.: Current medico-legal and confidentiality issues in large multi-center research programs. Am J Epidemiol 2000; 152: pp. 371-378.

  • 17. Carney P.A., Geller B.M., Sickles E.A., et. al.: Feasibility and satisfaction associated with using a tailored Web-based intervention for recalibrating radiologists’ thresholds for conducting additional work-up. Acad Radiol 2011; 18: pp. 369-376.

  • 18. Carney P.A., Aiello Bowles E., Sickles E.A., et. al.: Using a tailored Web-based intervention to set goals to reduce unnecessary recall. Acad Radiol 2011; 18: pp. 495-503.

  • 19. Elmore J.G., Jackson S.L., Abraham L., et. al.: Variability in interpretive performance of screening mammography and radiologist characteristics associated with accuracy. Radiology 2009; 253: pp. 641-651.

  • 20. Elmore J.G., Taplin S., Barlow W.E., et. al.: Does litigation influence medical practice? The influence of community radiologists’ medical malpractice perceptions and experience on screening mammography. Radiology 2005; 236: pp. 37-46.

  • 21. Egger J.R., Cutter G.R., Carney P.A., et. al.: Mammographers’ perception of women’s breast cancer risk. Med Decis Making 2005; 25: pp. 283-289.

  • 22. Rosenberg R.D., Yankaskas B.C., Abraham L., et. al.: Performance benchmarks for screening mammography. Radiology 2006; 241: pp. 55-66.

  • 23. Davis D.A., Thomson M.A., Oxman A.D., et. al.: Evidence for the effectiveness of CME: a review of 50 randomized controlled trials. JAMA 1992; 268: pp. 1111-1117.

  • 24. Flocke S.A., Litaker D.: Physician practice patterns and variation in the delivery of preventive services. J Gen Intern Med 2007; 22: pp. 191-196.

  • 25. Greco P.J., Eisenberg J.M.: Changing physicians’ practices. N Engl J Med 1993; 329: pp. 1271-1273.

  • 26. Smith W.R.: Evidence for the effectiveness of techniques to change physician behavior. Chest 2000; 118: pp. 8S-17S.

  • 27. Eisenberg J.M.: Physician utilization: the state of research about physicians’ practice patterns: HSR 84: planning for the third decade of health services research. Med Care 1985; 23: pp. 461-483.

  • 28. UK National Health Services Breast Screening Programme. Home page. Available at: http://www.cancerscreening.nhs.uk/breastscreen/index.html . Accessed 2011.

  • 29. National Radiographers Quality Assurance Coordinating Group. NHSBSP 63: quality assurance guidelines for mammography: including radiographic quality control. Available at: http://www.cancerscreening.nhs.uk/breastscreen/publications/nhsbsp63.html . Accessed 2011.

  • 30. Center for Information and Study on Clinical Research Participation. Clinical trials facts & figures. Available at: http://www.ciscrp.org/professional/facts_pat.html . Accessed April 25, 2012.
