
Computer-Assisted Mammography Feedback Program (CAMFP)

Rationale and Objectives

Our goal was to develop and evaluate software to support a computer-assisted mammography feedback program (CAMFP) for use in continuing medical education (CME).

Materials and Methods

Thirty-five radiologists from our region signed consent to participate in an institutional review board–approved film-reading study. The radiologists primarily assessed digitized mammograms and received feedback in five film interpretation sessions. A bivariate analysis was used to evaluate the joint effects of the training on sensitivity and specificity, and the effects of image quality on reading performance were explored.
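
The full methods are available only through the article link. As a minimal sketch of how per-reader sensitivity and specificity could be tabulated from case-level readings, the Python snippet below assumes a hypothetical record layout of (reader, case, truth, call) with a binary positive/negative call and biopsy-proven truth; none of these names or thresholds comes from the paper.

```python
# Minimal sketch: per-reader sensitivity/specificity from case-level readings.
# Assumed (hypothetical) record layout: (reader_id, case_id, truth, call),
# where truth is 1 for a cancer case and call is 1 for a positive
# interpretation; the layout and threshold are assumptions, not the authors' method.
from collections import defaultdict

def reader_accuracy(readings):
    """Return {reader_id: (sensitivity, specificity)}."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for reader, case, truth, call in readings:
        c = counts[reader]
        if truth == 1:
            c["tp" if call == 1 else "fn"] += 1
        else:
            c["fp" if call == 1 else "tn"] += 1
    result = {}
    for reader, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else float("nan")
        spec = c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else float("nan")
        result[reader] = (sens, spec)
    return result

# Toy usage with made-up readings
example = [("r1", 1, 1, 1), ("r1", 2, 1, 0), ("r1", 3, 0, 0), ("r1", 4, 0, 1)]
print(reader_accuracy(example))  # {'r1': (0.5, 0.5)}
```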

Results

Interpretation was influenced by the CAMFP intervention: sensitivity increased (Δ sensitivity = 0.086, P < .001) and specificity decreased (Δ specificity = −0.057, P = .04). Variability in interpretation among radiologists also decreased after the training sessions (P = .035).
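
The bivariate model behind these numbers is described only in the full text. One standard way to test a joint mean change in two paired measures is a one-sample Hotelling's T² on the per-reader (Δ sensitivity, Δ specificity) vectors; the sketch below illustrates that approach on simulated data and should not be read as the authors' implementation.

```python
# Hedged sketch: one-sample Hotelling's T^2 on per-reader (d_sens, d_spec)
# difference vectors. A standard bivariate paired test, not necessarily the
# model used in the paper; the input array below is simulated.
import numpy as np
from scipy.stats import f

def hotelling_t2_one_sample(diffs):
    """diffs: (n, 2) array of (delta sensitivity, delta specificity) per reader."""
    diffs = np.asarray(diffs, dtype=float)
    n, p = diffs.shape
    mean = diffs.mean(axis=0)
    cov = np.cov(diffs, rowvar=False)           # sample covariance
    t2 = n * mean @ np.linalg.solve(cov, mean)  # T^2 statistic
    f_stat = (n - p) / (p * (n - 1)) * t2       # equivalent F statistic
    p_value = f.sf(f_stat, p, n - p)
    return mean, t2, p_value

# Toy usage: simulated difference vectors for 35 readers
rng = np.random.default_rng(0)
sim = rng.normal(loc=[0.086, -0.057], scale=[0.14, 0.16], size=(35, 2))
print(hotelling_t2_one_sample(sim))
```

The reported drop in between-reader variability (P = .035) would need a separate variance-comparison procedure; the reference list includes Box's (1949) likelihood-ratio criterion for comparing covariance structures, which is not implemented here.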

Conclusion

The CAMFP intervention improved sensitivity and decreased variability among radiologists’ interpretations. Although this improvement was partially offset by decreased specificity, the program is potentially useful as a component of continuing medical education for radiologists. Dissemination via the Web may be possible using digital mammography.

Mammography is used widely in developed countries to detect breast cancer early enough for successful treatment. Screening accuracy could be enhanced either by improving image quality or by improving the accuracy of the radiologists who interpret mammograms ( ). Differences in radiologists’ training, experience, and reading volume may affect their clinical recommendations and interpretations ( ). Regulations set forth under the Food and Drug Administration’s (FDA) Mammography Quality Standards Act (MQSA) ( ) require continuing medical education (CME) for all radiologists who interpret mammograms ( ). Few studies have evaluated the effectiveness of CME in improving physician performance in reading mammography since the implementation of MQSA; however, results suggest modest gains in performance among 23 practicing radiologists who attended a 1-day lecture on Breast Imaging Reporting and Data System (BI-RADS) assessment ( ). In general, CME has been shown to enhance overall physician performance in many fields of medicine, with the greatest benefit from learning sessions that allow hands-on practice in contextually relevant or difficult areas ( ). This type of learning format is more beneficial than traditional lecture-based programs because it keeps physicians interactive and engaged during CME ( ).

The FDA does not specify the format in which CME units are obtained and allows radiologists to complete Web-based or computer-based programs ( ). Missed cancers and variation in mammography interpretation might therefore be reduced by using the computer-assisted mammography feedback program (CAMFP) during CME to provide practice in reading difficult films together with feedback on reader accuracy. CAMFP was initially designed as an ongoing teaching tool for radiologists in low-volume, isolated practice areas. However, because it is a computer-based format that can be delivered easily over the Internet ( ), it could also contribute to CME programs used by radiologists with varying levels of experience.


Materials and methods

Subjects


Software


Figure 1, Sample mammogram image reviewed by the radiologists.


Case Selection


Composition of the Film Sets


Table 1

Composition of the Film Sets

|                                      | Test Sessions ( )⁎, N (%) | Education Sessions ( )⁎, N (%) |
|--------------------------------------|---------------------------|--------------------------------|
| Outcome: Malignant (Case)            | 40/90 (44.4%)             | 49/90 (54.4%)                  |
| Outcome: Control                     | 50/90 (55.6%)             | 41/90 (45.6%)                  |
| Lesion Type: Mass                    | 25/40 (62.5%)             | 31/49 (63.3%)                  |
| Lesion Type: Calcification           | 14/40 (35.0%)             | 16/49 (32.7%)                  |
| Lesion Type: Density⁎⁎               | 0                         | 0                              |
| Lesion Type: Architectural Distortion| 1/40 (2.5%)               | 2/49 (4.1%)                    |
| Patient Age, Mean (SD)               | 61 (13.3)                 | 58 (11.3)                      |


Sessions


Mammography Accuracy Measures


Radiologists’ Characteristics and Variability in Reading Accuracy


Table 2

Characteristics of Participating Radiologists as Reported in 1996

| Characteristic                                        | Group I (N = 17), Mean (SD) | Group II (N = 18), Mean (SD) |
|-------------------------------------------------------|-----------------------------|------------------------------|
| Years practicing radiology                            | 15.5 (8.3)                  | 16.3 (8.5)                   |
| Years reading mammograms                              | 15.1 (7.8)                  | 13.6 (6.3)                   |
| Number of CME credits for mammography in past 3 years | 26.6 (18.4)                 | 37.9 (39.2)                  |
| Year obtained certification                           | 1983 (8.3)                  | 1982 (7.7)                   |
| Average number of mammograms read in past year        | 135.9 (93.5)                | 288.9 (336.6)                |
| Average number of mammograms read per year            | 133.5 (96.9)                | 248.3 (297.3)                |


Data Analysis


Results

Characteristics of Participating Radiologists


Change in Sensitivity and Specificity Attributable to the Intervention


Table 3

Effects of CAMFP: Change (Δ) in Sensitivity and Specificity from Baseline to Follow-up by Reader Group ≠

|          | Sensitivity, Session 1, Mean (SD) | Sensitivity, Session 4, Mean (SD) | Δ Sensitivity, Mean (SD) | Specificity, Session 1, Mean (SD) | Specificity, Session 4, Mean (SD) | Δ Specificity, Mean (SD) |
|----------|-----------------------------------|-----------------------------------|--------------------------|-----------------------------------|-----------------------------------|--------------------------|
| Group I  | 0.75 (0.112)                      | 0.85 (0.094)                      | 0.103 (0.139)            | 0.82 (0.127)                      | 0.76 (0.073)                      | −0.061 (0.159)           |
| Group II | 0.77 (0.148)                      | 0.84 (0.105)                      | 0.069 (0.134)            | 0.81 (0.117)                      | 0.76 (0.096)                      | −0.053 (0.166)           |
| Combined | 0.76 (0.13)                       | 0.84 (0.099)                      | 0.086⁎ (0.02)            | 0.81 (0.12)                       | 0.76 (0.085)                      | −0.057⁎⁎ (0.028)         |

Group II read set B then set A.
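
As a quick arithmetic check (not part of the original analysis), the combined changes in Table 3 are reproduced, to rounding, by sample-size-weighted averages of the two group means (Group I, n = 17; Group II, n = 18):

```python
# Sanity check (not from the paper's code): the combined deltas in Table 3
# match sample-size-weighted averages of the group means (n1 = 17, n2 = 18).
n1, n2 = 17, 18
d_sens = (n1 * 0.103 + n2 * 0.069) / (n1 + n2)
d_spec = (n1 * -0.061 + n2 * -0.053) / (n1 + n2)
print(round(d_sens, 3), round(d_spec, 3))  # 0.086 -0.057
```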


Figure 2, Change in sensitivity versus change in specificity, stratified by reader group (film set order). Readers from the two groups are distinguished by different symbol types, and joint confidence regions were calculated separately for the two strata.
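
Figure 2 itself is available only through the article link. The kind of joint confidence region its caption describes can be constructed from each group's sample mean and covariance of the per-reader (Δ sensitivity, Δ specificity) vectors; the sketch below does this with a Hotelling's T²-based 95% ellipse on simulated data, purely as an illustration of the plotting idea, not the authors' figure code.

```python
# Hedged sketch: 95% joint confidence ellipses for the mean
# (delta sensitivity, delta specificity) of each reader group, based on a
# Hotelling's T^2 region. The per-reader differences are simulated.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from scipy.stats import f

def mean_confidence_ellipse(diffs, level=0.95, **kwargs):
    """Confidence ellipse for the mean of bivariate observations diffs (n x 2)."""
    diffs = np.asarray(diffs, dtype=float)
    n, p = diffs.shape
    center = diffs.mean(axis=0)
    cov = np.cov(diffs, rowvar=False) / n                # covariance of the mean
    scale = p * (n - 1) / (n - p) * f.ppf(level, p, n - p)
    vals, vecs = np.linalg.eigh(cov)                     # ascending eigenvalues
    angle = np.degrees(np.arctan2(vecs[1, -1], vecs[0, -1]))
    width = 2 * np.sqrt(vals[-1] * scale)                # major axis length
    height = 2 * np.sqrt(vals[0] * scale)                # minor axis length
    return Ellipse(center, width, height, angle=angle, fill=False, **kwargs)

rng = np.random.default_rng(1)
fig, ax = plt.subplots()
for name, n_readers, marker in (("Group I", 17, "o"), ("Group II", 18, "s")):
    diffs = rng.normal(loc=[0.09, -0.06], scale=[0.14, 0.16], size=(n_readers, 2))
    ax.scatter(diffs[:, 0], diffs[:, 1], marker=marker, s=18, label=name)
    ax.add_patch(mean_confidence_ellipse(diffs))
ax.set_xlabel("Change in sensitivity")
ax.set_ylabel("Change in specificity")
ax.legend()
plt.show()
```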


Variability in Sensitivity and Specificity among Radiologists


Figure 3, Comparison of sensitivity versus specificity by session. The smallest symbol size represents an individual reader; larger symbols denote multiple readers with the same sensitivity/specificity coordinate. The plus sign (+) represents the observed joint mean sensitivity and specificity for each session. The vertical and horizontal lines are provided for reference (at 80 and 90%).
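
Figure 3 is likewise available only through the article link. Its symbol-size convention (larger markers where several readers share the same sensitivity/specificity coordinate, a plus sign at the joint mean, and reference lines) can be reproduced roughly as below; the reader values and the assignment of the 80% and 90% reference lines to particular axes are hypothetical.

```python
# Hedged sketch of the Figure 3 plotting convention: marker area scales with
# the number of readers sharing the same (specificity, sensitivity) point,
# and "+" marks the joint mean. The per-reader values are hypothetical.
from collections import Counter
import matplotlib.pyplot as plt

readers = [(0.82, 0.75), (0.82, 0.75), (0.80, 0.78),
           (0.76, 0.85), (0.76, 0.85), (0.70, 0.90)]   # (specificity, sensitivity)
counts = Counter(readers)

fig, ax = plt.subplots()
xs = [spec for spec, _ in counts]
ys = [sens for _, sens in counts]
sizes = [40 * n for n in counts.values()]              # area grows with reader count
ax.scatter(xs, ys, s=sizes)
mean_spec = sum(s for s, _ in readers) / len(readers)
mean_sens = sum(s for _, s in readers) / len(readers)
ax.plot(mean_spec, mean_sens, "+", markersize=14)      # observed joint mean
ax.axvline(0.80, linewidth=0.5)                        # reference lines; which axis
ax.axhline(0.90, linewidth=0.5)                        # carries 80% vs 90% is assumed
ax.set_xlabel("Specificity")
ax.set_ylabel("Sensitivity")
plt.show()
```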


Discussion


Acknowledgements


References

  • 1. Pepe M.S., et al.: Design of a study to improve accuracy in reading mammograms. J Clin Epidemiol 1997; 50: pp. 1327-1338.

  • 2. Elmore J., Wells C., Howard D.: Does diagnostic accuracy in mammography depend on radiologists’ experience? J Womens Health 1998; 7: pp. 443-449.

  • 3. Ciatto S., et al.: Proficiency test for screening mammography: results for 117 volunteer Italian radiologists. J Med Screen 1999; 6: pp. 149-151.

  • 4. Rutter C.M., Taplin S.: Assessing mammographers’ accuracy: a comparison of clinical and test performance. J Clin Epidemiol 2000; 53: pp. 443-450.

  • 5. Smith-Bindman R., et al.: Physician predictors of mammographic accuracy. J Natl Cancer Inst 2005; 97: pp. 358-367.

  • 6. Beam C.A., Conant E.F., Sickles E.A.: Association of volume and volume-independent factors with accuracy in screening mammogram interpretation. J Natl Cancer Inst 2003; 95: pp. 282-290.

  • 7. Mammography Quality Standards Act. Public Law 101-359, 1992. Amended October 28, 1997.

  • 8. Linver M., Newman J.: MQSA: the final rule. Radiol Technol 1999; 70: pp. 338-353.

  • 9. Berg W.A., et al.: Does training in the Breast Imaging Reporting and Data System (BI-RADS) improve biopsy recommendations or feature analysis agreement with experienced breast imagers at mammography? Radiology 2002; 224: pp. 871-880.

  • 10. Nass S., Ball J.: Improving breast imaging quality standards. Washington, DC: National Academy Press; 2005: pp. 24-71.

  • 11. Thomson O’Brien M.A., et al.: Audit and feedback versus alternative strategies: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2000; CD000260.

  • 12. Robertson M.K., Umble K.E., Cervero R.M.: Impact studies in continuing education for health professions: update. J Contin Educ Health Prof 2003; 23: pp. 146-156.

  • 13. LoRusso A.P., Bassignani M.J., Harvey J.A.: Enhanced teaching of screening mammography using an electronic format. Acad Radiol 2006; 13: pp. 782-788.

  • 14. Pisano E.D., Yaffe M.J.: Digital mammography. Radiology 2005; 234: pp. 353-362.

  • 15. Pisano E.D., McLelland R.: Implementation of breast cancer screening. Curr Opin Radiol 1991; 3: pp. 579-587.

  • 16. Box G.: A general distribution theory for a class of likelihood criteria. Biometrika 1949; 36: pp. 317-346.

  • 17. Ikeda D., et al.: Analysis of 172 subtle findings on prior normal mammograms in women with breast cancer detected at follow-up screening. Radiology 2003; 226: pp. 494-503.

  • 18. Pisano E.D., et al.: Diagnostic performance of digital versus film mammography for breast-cancer screening. N Engl J Med 2005; 353: pp. 1773-1783.

  • 19. Rosenberg R.D., et al.: Effect of variations in operational definitions on performance estimates for screening mammography. Acad Radiol 2000; 7: pp. 1058-1068.

  • 20. Institute of Medicine: To err is human: building a safer health system. Washington, DC: National Academy Press; 1999.
