Rationale and Objectives
To examine the feasibility of and satisfaction with a tailored web-based intervention designed to decrease radiologists’ recommendation of inappropriate additional work-up after a screening mammogram.
Materials and Methods
We developed a web-based educational intervention designed to reduce inappropriate recall. Radiologists were randomly assigned to an early intervention group or a late (control) intervention group; the latter served as a control during a 9-month follow-up period, after which its members were invited to complete the intervention. Intervention content was derived from our prior research and included three modules: 1) an introduction to audit statistics for mammography performance; 2) a review of data showing radiologists’ inflated perceptions of medical malpractice risks related to breast imaging; and 3) a review of data on breast cancer risk among women seen in their practices. Embedded within the intervention were individualized audit data for each participating radiologist, obtained from the national Breast Cancer Surveillance Consortium.
Results
Seventy-four of 196 radiologists (37.8%) consented to the intervention, which was completed by 67.5% (27/40) of those randomized to the early intervention group and 41.2% (14/34) of those randomized to the late (control) group; thus, a total of 41 of 74 (55.4%) completed the intervention. On average, radiologists used three log-ins to complete the program (range, 1–14), which took approximately 1 hour. Ninety-five percent found the program moderately to very helpful in understanding how to calculate basic performance measures, 93% found viewing their own performance measures moderately to very helpful, and 83% reported that it was moderately to very important to learn that the breast cancer risk in their screening population was lower than they had perceived. The percentage of radiologists who reported that the risk of medical malpractice influences their recall rates dropped from 36.3% before the intervention to 17.8% after it, with a similar drop in the perceived influence of malpractice risk on recommendations for breast biopsy (36.4% to 17.3%). More than 75% of radiologists answered the postintervention knowledge questions correctly, and the percentage of time spent in breast imaging did not appear to influence responses. The majority (>92%) correctly responded that the target recall rate in the United States is 9%. Mean self-reported recall rates were 14.9% for radiologists spending <40% of their time in breast imaging and 13.0% for those spending ≥40%, and self-reported rates were highly correlated with actual recall rates (r = 0.991; P < .001).
Conclusions
Radiologists who begin an internet-based tailored intervention designed to reduce unnecessary recall will likely complete it, although only 55% of those who consented to the study actually undertook the intervention. Participants found the program useful in helping them understand why their recall rates may be elevated.
Several studies have shown extensive variability in recall rates in screening mammography (7%–17%), even among high-volume readers. Similarly, recall rates in the United States are nearly twice as high as those reported in other countries with comparable cancer detection rates. Studies have not fully explained the reasons for high recall rates in the United States. However, the 1992 Mammography Quality Standards Act (MQSA) sought to improve the practice of mammography in the United States by requiring a system to track outcomes associated with mammograms that result in a recommendation for biopsy. MQSA requires that each facility track pathology outcomes when biopsy is recommended on the basis of mammography, but it does not require the tracking of clinical outcomes when additional imaging is recommended. Rather, in the United States, the approaches used to implement tracking systems and to review additional audit data are left to the discretion of each mammography facility.
Very little research has focused on whether auditing systems alone affect recall rates or improve performance. A recent observational study found that among 255 radiologists across the United States who completed a survey about medical audits, 91% reported receiving individualized audit reports (in paper format) provided by mammography registries participating in the National Cancer Institute-funded Breast Cancer Surveillance Consortium (BCSC). Of these 255 radiologists, 83% reported paying close attention to their audit numbers, 87% found the reports valuable, and 75% felt that audit reports prompted them to improve interpretive performance. However, how radiologists actually use audit data to improve their performance is not known.
Methods
Radiologist Eligibility and Survey Data Collection
Web-based Tailored Educational Intervention Data System
Data Analyses
Results
Table 1
Radiologist Characteristics According to Intervention Status
| Characteristic | Consenting BCSC Radiologists Who Completed Program (n = 41), n (%)∗ | Consenting BCSC Radiologists Who Did Not Complete Program (n = 33), n (%)† | Nonconsenting BCSC Radiologists (n = 55), n (%) | P Value |
|---|---|---|---|---|
| Demographics | | | | |
| Sex | | | | .03 |
| Male | 21 (52.5) | 17 (70.8) | 43 (78.2) | |
| Female | 19 (47.5) | 7 (29.2) | 12 (21.8) | |
| Mean age (SD) | | | | |
| Practice type | | | | |
| Primary affiliation with academic medical center | | | | .36 |
| No | 34 (85.0) | 18 (75.0) | 46 (83.6) | |
| Adjunct | 2 (5.0) | 2 (8.3) | 5 (9.1) | |
| Primary | 4 (10.0) | 4 (16.7) | 2 (3.6) | |
| Breast imaging experience | | | | |
| Fellowship training | | | | .53 |
| No | 39 (97.5) | 22 (91.7) | 51 (92.7) | |
| Yes | 1 (2.5) | 2 (8.3) | 4 (7.3) | |
| Years of mammography interpretation | | | | .88 |
| <10 | 8 (20.0) | 4 (16.7) | 12 (22.2) | |
| 10–19 | 16 (40.0) | 9 (37.5) | 24 (44.4) | |
| ≥20 | 16 (40.0) | 11 (45.8) | 18 (33.3) | |
| Percent of time spent in breast imaging | | | | .47 |
| <20% | 10 (25.0) | 5 (23.8) | 11 (21.6) | |
| 20–39% | 15 (37.5) | 6 (28.6) | 20 (39.2) | |
| 40–79% | 9 (22.5) | 3 (14.3) | 5 (9.8) | |
| 80–100% | 6 (15.0) | 7 (33.3) | 15 (29.4) | |
| Preferences/attitudes toward CME‡ | | | | |
| Prefer instructor-led activities (such as lectures or instructor-led teleconferences) | | | | .87 |
| Strongly agree/agree | 34 (87.2) | 19 (79.2) | 46 (83.6) | |
| Neutral | 3 (7.7) | 4 (16.7) | 6 (10.9) | |
| Strongly disagree/disagree | 2 (5.1) | 1 (4.2) | 3 (5.5) | |
| Prefer self-directed activities (reading professional journal articles with CME exercises) | | | | .13 |
| Strongly agree/agree | 13 (33.3) | 6 (27.3) | 17 (30.9) | |
| Neutral | 14 (35.9) | 4 (18.2) | 24 (43.6) | |
| Strongly disagree/disagree | 12 (30.8) | 12 (54.5) | 14 (25.5) | |
| Prefer interactive activities | | | | .45 |
| Strongly agree/agree | 22 (59.5) | 11 (45.8) | 28 (51.9) | |
| Neutral | 11 (29.7) | 11 (45.8) | 24 (44.4) | |
| Strongly disagree/disagree | 4 (10.8) | 2 (8.3) | 2 (3.7) | |
| CME improves interpretive performance | | | | .58 |
| Strongly agree/agree | 32 (80.0) | 21 (87.5) | 41 (74.5) | |
| Neutral | 7 (17.5) | 2 (8.3) | 13 (23.6) | |
| Strongly disagree/disagree | 1 (2.5) | 1 (4.2) | 1 (1.8) | |
| Would be interested in a free Category 1 CME program on use of audit reports to help with mammography interpretation | | | | .33 |
| Strongly agree/agree | 36 (90.0) | 22 (91.7) | 44 (80.0) | |
| Neutral | 4 (10.0) | 2 (8.3) | 8 (14.5) | |
| Strongly disagree/disagree | 0 (0) | 0 (0) | 3 (5.5) | |
| Would take a free CME course over the internet | | | | .43 |
| Strongly agree/agree | 35 (87.5) | 22 (91.7) | 46 (83.6) | |
| Neutral | 4 (10.0) | 1 (4.2) | 3 (5.5) | |
| Strongly disagree/disagree | 1 (2.5) | 1 (4.2) | 6 (10.9) | |
BCSC, Breast Cancer Surveillance Consortium; CME, continuing medical education.
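Table 1's P values are consistent with standard chi-square tests of independence across the three groups, although the article as excerpted does not name the test; that is an assumption. As a check, the minimal sketch below reproduces the P value of .03 for the sex distribution using the counts from Table 1.

```python
# Minimal sketch: reproducing a Table 1 P value with a chi-square test of
# independence. Whether the authors used exactly this test is an assumption;
# the counts are the sex distribution from Table 1.
from scipy.stats import chi2_contingency

# Rows: male, female.
# Columns: completed program, did not complete program, nonconsenting.
counts = [[21, 17, 43],
          [19, 7, 12]]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")  # P comes out near .03
```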
Table 2
Completion Features and Satisfaction with Web-based Intervention by Percent of Time Spent in Breast Imaging
| Feature | Overall (n = 40)∗ | Spend <40% of Time in Breast Imaging (n = 25) | Spend ≥40% of Time in Breast Imaging (n = 15) | P Value |
|---|---|---|---|---|
| Completion features | | | | |
| Number of log-ins, mean (SD), range | 3.2 (2.6), 1–14 | 3.4 (3.1), 1–14 | 3.1 (1.6), 1–6 | .74 |
| Total completion time, minutes, mean (SD), range | 53.8 (19.2), 23.4–106.7 | 55.4 (22.8), 23.4–106.7 | 50.6 (12.0), 34.6–74.4 | .46 |
| Satisfaction/usefulness features, n (%) | | | | |
| How helpful was this exercise in helping you understand how to calculate basic audit data? | | | | 1.00 |
| A little | 2 (5.0) | 1 (4.0) | 1 (6.7) | |
| Moderately | 15 (37.5) | 9 (36.0) | 6 (40.0) | |
| Very | 23 (57.5) | 15 (60.0) | 8 (53.3) | |
| How helpful was the 2 × 2 table in helping you understand recall and biopsy yield? | | | | 1.00 |
| A little | 1 (2.5) | 1 (4.0) | 0 (0) | |
| Moderately | 17 (42.5) | 10 (40.0) | 7 (46.7) | |
| Very | 22 (55.0) | 14 (56.0) | 8 (53.3) | |
| How helpful was it to see your own data? | | | | .58 |
| A little | 3 (7.5) | 3 (12.0) | 0 (0) | |
| Moderately | 9 (22.5) | 5 (20.0) | 4 (26.7) | |
| Very | 28 (70.0) | 17 (68.0) | 11 (73.3) | |
| How important is it to learn that breast cancer risk is small in your patient population? | | | | .44 |
| A little | 7 (17.5) | 6 (24.0) | 1 (6.7) | |
| Moderately | 17 (42.5) | 10 (40.0) | 7 (46.7) | |
| Very | 16 (40.0) | 9 (36.0) | 7 (46.7) | |
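To make the 2 × 2 audit exercise referenced in Table 2 concrete: a positive screening assessment (recall) is cross-tabulated against cancer status within the follow-up period, and the basic performance measures follow directly from the four cells. The sketch below uses standard audit definitions with hypothetical counts; it is illustrative only and is not taken from the intervention itself.

```python
# Minimal sketch: basic screening-mammography audit measures from a 2 x 2 table.
# All counts below are hypothetical, chosen only for illustration.

def audit_measures(tp, fp, fn, tn):
    """Compute basic audit statistics from 2 x 2 screening outcomes.

    tp: positive exam (recall), cancer within follow-up period
    fp: positive exam (recall), no cancer
    fn: negative exam, cancer within follow-up period
    tn: negative exam, no cancer
    """
    total = tp + fp + fn + tn
    return {
        "recall_rate_pct": 100 * (tp + fp) / total,   # abnormal interpretation rate
        "cancer_detection_per_1000": 1000 * tp / total,
        "sensitivity_pct": 100 * tp / (tp + fn),
        "specificity_pct": 100 * tn / (fp + tn),
        "ppv1_pct": 100 * tp / (tp + fp),             # biopsy/recall yield (PPV of recall)
    }

# Hypothetical example: 10,000 screens, 900 recalls, 40 screen-detected cancers,
# 8 interval cancers. Recall rate works out to 9.0%, i.e., the US target rate.
print(audit_measures(tp=40, fp=860, fn=8, tn=9092))
```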
Table 3
Knowledge Assessment Associated with Web-based Intervention Content
| Knowledge Question | Overall (n = 40)∗ | Spend <40% of Time in Breast Imaging (n = 25) | Spend ≥40% of Time in Breast Imaging (n = 15) | P Value |
|---|---|---|---|---|
| What is the average recall rate of US radiologists? (correct answer = 13%) | | | | .71 |
| 10% | 7 (17.5) | 4 (16.0) | 3 (20.0) | |
| 13% | 31 (77.5) | 19 (76.0) | 12 (80.0) | |
| 15% | 2 (5.0) | 2 (8.0) | 0 (0.0) | |
| What is the target or benchmark recall rate in the United States? (correct answer = 9%) | | | | 1.00 |
| 6% | 2 (5.0) | 1 (4.0) | 1 (6.7) | |
| 9% | 37 (92.5) | 23 (92.0) | 14 (93.3) | |
| 13% | 1 (2.5) | 1 (4.0) | 0 (0.0) | |
| What is your recall rate? mean (SD, range) | 14.0 (8.1, 4.0–50.0) | 14.9 (9.9, 4.0–50.0) | 13.0 (3.1, 8.7–19.0) | .48 |
| What is the optimal range of recall (where the plateau exists between recall and cancer detection)? (correct answer = 5–9%) | | | | .35 |
| 5–9% | 35 (87.5) | 23 (92.0) | 12 (80.0) | |
| 10–14% | 5 (12.5) | 2 (8.0) | 3 (20.0) | |
| Please rate your general understanding of breast cancer risk as a result of this exercise | | | | .79 |
| Not improved | 4 (10.0) | 3 (12.0) | 1 (6.7) | |
| Modestly improved | 25 (62.5) | 16 (64.0) | 9 (60.0) | |
| Greatly improved | 11 (27.5) | 6 (24.0) | 5 (33.3) | |
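The Results report that self-reported recall rates (the "What is your recall rate?" item in Table 3) were highly correlated with actual, registry-derived recall rates (r = 0.991; P < .001). For readers who want to reproduce that kind of comparison, here is a minimal sketch of a Pearson correlation on hypothetical paired values; the numbers are illustrative assumptions, not study data.

```python
# Minimal sketch: correlating self-reported with actual (registry-derived)
# recall rates, analogous to the comparison reported in the Results.
# The paired values below are hypothetical, for illustration only.
from scipy import stats

self_reported = [8.7, 10.0, 12.5, 14.0, 19.0]  # percent, from survey responses
actual        = [8.9, 10.3, 12.1, 14.6, 18.5]  # percent, from BCSC audit data

# Closely tracking pairs like these yield r near 1, as in the study.
r, p = stats.pearsonr(self_reported, actual)
print(f"Pearson r = {r:.3f}, P = {p:.4f}")
```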
Discussion
References
1. Gur D., Sumkin J.H., Hardesty L.A., et. al.: Recall and detection rates in screening mammography: A review of clinical experience: Implications for practice guidelines. Cancer 2004; 100: pp. 1590-1594.
2. Smith-Bindman R., Chu P.W., Miglioretti D.L., et. al.: Comparison of screening mammography in the United States and the United Kingdom. JAMA 2003; 290: pp. 2129-2137.
3. Hofvind S., Vacek P.M., Skelly J., et. al.: Comparing screening mammography for early breast cancer detection in Vermont and Norway. J Natl Cancer Inst 2008; 100: pp. 1082-1091.
4. Carney P.A., Miglioretti D.L., Yankaskas B.C., et. al.: Individual and combined effects of breast density, age, and hormone replacement therapy use on the accuracy of screening mammography. Ann Intern Med 2003; 138: pp. 168-175.
5. Elmore J.G., Carney P.A., Abraham L.A., et. al.: The association between obesity and screening mammography accuracy. Arch Intern Med 2004; 164: pp. 1140-1147.
6. Carney P.A., Kasales C.J., Tosteson A.N.A., et. al.: Types of additional work-up among women undergoing routine screening mammography: the impact of age, breast density and hormone therapy use. Prevent Med 2004; 39: pp. 48-55.
7. Barlow W.E., Chi C., Carney P.A., et. al.: Accuracy of screening mammography interpretation by characteristics of radiologists. JNCI 2004; 96: pp. 1840-1850.
8. Elmore J.G., Taplin S., Barlow W.E., et. al.: Does litigation influence medical practice? The influence of community radiologists’ medical malpractice perceptions and experience on screening mammography. Radiology 2005; 236: pp. 37-46.
9. Fenton J.J., Egger J., Carney P.A., et. al.: Reality check: perceived versus actual performance of community mammographers. AJR Am J Roentgenol 2006; 187: pp. 42-46.
10. Carney P.A., Abraham L.A., Miglioretti D.L., et. al.: Factors associated with imaging and procedural events used to detect breast cancer following screening mammography. AJR Am J Roentgenol 2007; 188: pp. 385-392.
11. US Food and Drug Administration. Mammography quality standards act regulations. Available at: http://www.fda.gov/cdrh/mammography/frmamcom2.html . Accessed March 12, 2009.
12. Birdwell R.L., Wilcox P.A.: The mammography quality standards act: benefits and burdens. Breast Dis 2001; 13: pp. 97-107.
13. Monsees B.S.: The Mammography Quality Standards Act. An overview of the regulations and guidance. Radiol Clin North Am 2000; 38: pp. 759-772.
14. US Food and Drug Administration. Medical outcomes audit general requirement. Available at: http://www.fda.gov/CDRH/mammography/robohelp/med_outcomes_audit_gen_req.htm . Accessed May 4, 2009.
15. Elmore J.E., Aiello-Bowles E., Geller B.M., et. al.: Radiologists’ attitudes and use of mammography audit reports. Acad Radiol 2010; 17: pp. 752-760.
16. Ballard-Barbash R., Taplin S.H., Yankaskas B.C., et. al.: Breast Cancer Surveillance Consortium: a national mammography screening and outcomes database. AJR Am J Roentgenol 1997; 169: pp. 1001-1008.
17. Cook D.A., Levinson A.L., Garside S., et. al.: Internet-based learning in the health professions. JAMA 2008; 300: pp. 1181-1196.
18. Taylor P.M.: A review of research into the development of radiologic expertise: implications for computer-based training. Acad Radiol 2007; 14: pp. 1252-1263.
19. Egger J.R., Cutter G.R., Carney P.A., et. al.: Mammographers’ perception of women’s breast cancer risk. Med Decis Making 2005; 25: pp. 283-289.
20. American College of Radiology: Breast Imaging Reporting and Data System (BI-RADS). Copyright 2004.
21. Carney P.A., Geller B.M., Moffett H., et. al.: Current medico-legal and confidentiality issues in large multi-center research programs. Am J Epidemiol 2000; 152: pp. 371-378.
22. Elmore J.G., Jackson S.L., Abraham L., et. al.: Variability in interpretive performance of screening mammography and radiologist characteristics associated with accuracy. Radiology 2009; 253: pp. 641-651.
23. Laidley T.L., Braddock C.H.: Role of adult learning theory in evaluating and designing strategies for teaching residents in ambulatory settings. Adv Health Sci Educ 2000; 5: pp. 43-54.
24. Speck M.: Best practice in professional development for sustained educational change. ERS Spectrum, Spring 1996: pp. 33-41.
25. Ruby on Rails. David Heinemeier Hansson, et al. Available at: http://rubyonrails.org/ . Ruby Programming Language. Yukihiro Matsumoto, et al. Available at: http://ruby-lang.org/ .
26. Extensible Markup Language (XML). Available at: http://www.w3.org/XML .
27. Dick J.F., Yi J.P., Gallagher T., et. al.: Predictors of radiologists’ perceived risk of malpractice lawsuits in breast imaging. AJR Am J Roentgenol 2009; 192: pp. 327-333.
28. Rosenberg R.D., Yankaskas B.C., Abraham L., et. al.: Performance benchmarks for screening mammography. Radiology 2006; 241: pp. 55-66.
29. Bassett L.W., Hendrick R.E., Bassford T.L., et. al.: Quality determinants of mammography: Clinical Practice Guideline No. 13. AHCPR Publication No. 95-0632. October 1994. Agency for Health Care Policy and Research, Public Health Service, US Department of Health and Human Services, Rockville, MD.
30. Barlow W.E., White E., Ballard-Barbash R., et. al.: Prospective breast cancer risk prediction model for women undergoing screening mammography. JNCI 2006; 98: pp. 1204-1214.