Rationale and Objectives
To determine the relationship between reader performance and reader practice in terms of the number of cases read and previous experience.
Materials and Methods
A test set of 50 mammographic cases was developed, comprising 15 abnormal cases (biopsy proven) and 35 normal cases (confirmed at subsequent rescreen). Sixty-nine breast image readers reviewed the cases independently, and the performance of each was measured by recording the receiver operating characteristic score (area under the curve), sensitivity, and specificity. These performance measures were then compared with a range of reader-related factors, such as years of certification and reporting, number of cases read per year, previous experience, and satisfaction levels. Spearman correlation analyses were performed, along with Mann-Whitney tests to detect differences in performance between specific reader groups.
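As an illustration of these performance measures, the following Python sketch computes Az, sensitivity, and specificity for one hypothetical reader of a 50-case set; the confidence ratings are simulated stand-ins, not the study's data.

```python
# Minimal sketch (not the authors' code): per-reader Az, sensitivity, and
# specificity for a 50-case test set of 15 abnormal and 35 normal cases.
# The 1-5 confidence ratings below are hypothetical stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
truth = np.array([1] * 15 + [0] * 35)  # 1 = abnormal (biopsy proven), 0 = normal

# Simulate one reader's confidence ratings (abnormal cases tend to score higher).
ratings = np.clip(truth * 2 + rng.integers(1, 4, size=50), 1, 5)

# Az: area under the ROC curve traced out by sweeping the rating threshold.
az = roc_auc_score(truth, ratings)

# Sensitivity and specificity require a single operating point; here, rating >= 3.
positive = ratings >= 3
sensitivity = (positive & (truth == 1)).sum() / (truth == 1).sum()
specificity = (~positive & (truth == 0)).sum() / (truth == 0).sum()
print(f"Az = {az:.2f}, sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```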
Results
Reader performance improved with years certified (P = .004), years of experience (P = .0001), and hours reading per week (P = .003), as shown by statistically significant positive relationships with Az values (area under the receiver operating characteristic curve). Comparisons of Az values between individuals with different annual case loads showed that readers reporting 5000 or more mammographic cases per year (P = .03), or between 2000 and 4999 (P = .05), scored statistically significantly higher than those reading fewer than 1000 cases per year.
Conclusion
The results of this study show variations in reader performance related to parameters of reader practice and experience. Levels of variance are presented, and potential acceptance levels for diagnostic efficacy are proposed that may inform policy makers, judicial systems, and public debate.
Breast cancer is a global problem, with 1 in 11 women in Australia, and 1 in 9 women in the United States and the United Kingdom, diagnosed with the disease during their lifetime. It is the second most common cancer, with half a million new cases worldwide each year, and is the most common cause of cancer death amongst females in Australia. Mammography is still the most common diagnostic procedure for both symptomatic patients and screening participants, particularly those older than 50 years, who make up approximately 80% of breast cancer cases. Although image perception research is now considered crucially important to promoting diagnostic efficacy, and much work has been done to explore the impact of technical features, such as acquisition devices, displays, and environmental conditions, on lesion detection, relatively little work has focused on the relationship between diagnostic performance and expert reader practice and experience. The variability that may exist between expert readers, and the possibility of establishing “acceptable” levels of diagnostic efficacy from a large group of experts reading a test set of clinically relevant breast images, also remain underexplored.
A recent questionnaire-based study involving 83 mammography readers employed by BreastScreen Services in Australia demonstrated the heterogeneity of mammographic screen readers: age varied between 26 and >56 years; years of experience reporting mammograms ranged from <1 year to >10 years; some radiologists reported only breast images, whereas others reported a variety of image types; individual reporting sessions ranged from <1 hour up to and including 5 hours; and the number of cases read per year ranged from 501–1000 to >10,000. Although this previous work demonstrated the variability that exists within reader practice and considered the impact this may have on levels of concentration, no attempt was made to explore whether correlations exist between reader practice or experience and reader performance in terms of receiver operating characteristic (ROC), sensitivity, or specificity scores.
Materials and Methods
Table 1
Details on Participating Readers
Parameters Investigated                                                       Value    Min    Max
1. Years certified as a radiologist                                           11∗      <1     30
2. Years of experience reading mammograms                                     8∗       <1     20
3. Hours reading breast images per week                                       12∗      <1     47.5
4. Percentage of readers who have undergone a breast screening fellowship     34       —      —
5. Percentage of readers who screen read for BreastScreen Australia           52       —      —
6. Mean satisfaction score                                                    7.7∗∗    —      —
7. Number of cases per year                                                   3500∗    <100   20,000
8. Percentage of readers reporting 5000 or more cases per year                38       —      —
9. Percentage of readers reporting 2000 or more cases per year                68       —      —
10. Percentage of readers reporting 1000 or more cases per year               85       —      —
Table 2
Specifications for Monitors used in the Study
                      Barco 1        Barco 2        Eizo 1         Eizo 2
Maximum luminance∗    475 cd/m²      486 cd/m²      427 cd/m²      436 cd/m²
Minimum luminance∗    1.3 cd/m²      1.4 cd/m²      1.1 cd/m²      1.2 cd/m²
Contrast ratio∗       365:1          347:1          388:1          363:1
Display resolution    2048 × 2560    2048 × 2560    2048 × 2560    2048 × 2560
Screen type           LCD            LCD            LCD            LCD
Screen size           54 cm          54 cm          54 cm          54 cm
LCD, liquid crystal display.
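The listed contrast ratios are consistent with the ratio of maximum to minimum luminance, as the following quick check (Python) shows for the first Barco display:

```python
# Contrast ratio = maximum luminance / minimum luminance (values from Table 2).
l_max, l_min = 475, 1.3          # cd/m², Barco 1
print(f"{l_max / l_min:.0f}:1")  # prints 365:1
```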
Table 3
Correlation Analysis of Az Value with Reader Parameters
Parameters Investigated                           r Value    P Value
1. Years certified as a radiologist               0.32       .004∗
2. Years of experience reading mammograms         0.43       .0001∗
3. Hours reading breast images per week           0.34       .003∗
4. Experience of a breast screening fellowship    −0.08      .25
5. Screen reader for BreastScreen Australia       0.18       .06
6. Mean satisfaction score                        0.04       .39
7. Number of cases per year                       0.18       .07
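For illustration, a Spearman rank correlation of the kind reported in Table 3 can be computed with SciPy as follows; the per-reader arrays are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of a Spearman rank correlation between per-reader Az
# values and a practice parameter (here, years of experience).
# The arrays below are illustrative only.
from scipy.stats import spearmanr

az_values = [0.84, 0.79, 0.90, 0.75, 0.86, 0.83, 0.88, 0.72]  # one entry per reader
years_experience = [8, 4, 15, 1, 10, 6, 12, 2]                # illustrative only

# spearmanr returns the rank correlation coefficient r and its P value.
r, p = spearmanr(az_values, years_experience)
print(f"r = {r:.2f}, P = {p:.4f}")
```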
Results
Table 4
Median ROC, Sensitivity, and Specificity Scores with Inter-quartile Ranges
Score Type         Median    First Quartile    Third Quartile
ROC (Az value)     0.84      0.79              0.90
Sensitivity (%)    87        73                93
Specificity (%)    83        74                89
ROC, receiver operating characteristic.
Table 5
Nonparametric Comparisons of Az Values between Readers who Report Varying Levels of Cases per Year
Number of Cases Read per Year    Median Az Value    First Quartile    Third Quartile
1. 5000–20,000                   0.85∗ (P = .03)    0.75              0.89
2. 2000–4999                     0.83∗ (P = .05)    0.76              0.86
3. 1000–1999                     0.86               0.74              0.92
4. <1000                         0.75               0.66              0.83
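For readers wanting to reproduce this kind of group comparison, the sketch below applies SciPy's Mann-Whitney U test to hypothetical per-reader Az values for two case-load groups; the numbers are illustrative, not the study's data.

```python
# Minimal sketch of a nonparametric comparison of Az values between two
# case-load groups, as in Table 5. The values below are illustrative only.
from scipy.stats import mannwhitneyu

az_high_volume = [0.85, 0.89, 0.82, 0.87, 0.84]  # >= 5000 cases/year (hypothetical)
az_low_volume = [0.75, 0.66, 0.71, 0.78, 0.69]   # < 1000 cases/year (hypothetical)

# mannwhitneyu tests whether one group's Az values tend to exceed the other's.
u, p = mannwhitneyu(az_high_volume, az_low_volume, alternative="two-sided")
print(f"U = {u}, P = {p:.3f}")
```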
Discussion
Acknowledgments
References
1. NLM and NIH. Breast cancer, 2009. Available online at: http://www.nlm.nih.gov/medlineplus/breastcancer.html . Accessed July 5, 2010.
2. Australian Institute of Health and Welfare. Australian Cancer Incidence and Mortality books, breast, 2007. Available online at: http://www.womhealth.org.au/studentfactsheets/breastcancer.htm . Accessed July 5, 2010.
3. Office for National Statistics. Female breast cancer incidence and mortality, England, 1971-2005. Available online at: http://www.statistics.gov.uk/cci/nugget.asp?id=575 . Accessed July 5, 2010.
4. WHO International Agency for Research on Cancer. World Cancer Report, 2003.
5. Australian Institute of Health and Welfare (AIHW), Cancer Australia & Australasian Association of Cancer Registries. Cancer survival and prevalence in Australia: cancers diagnosed from 1982 to 2004. Cancer Series no. 42. Cat. no. CAN 38. Canberra, Australia: AIHW; 2008.
6. Cancer Research UK. CancerStats key facts on breast cancer, 2008. Available online at: http://info.cancerresearchuk.org/cancerstats/types/breast . Accessed July 5, 2010.
7. Office for National Statistics. Cancer statistics registrations: registrations of cancer diagnosed in 2005, England. Series MB1 no. 36. London: Office for National Statistics.
8. Beam C.A., Krupinski E.A., Kundel H.L., et. al.: The place of medical image perception in 21st-century health care. J Am Coll Radiol 2006; 3: pp. 409-412. [review]
9. McCarthy E., Brennan P.C.: Viewing conditions for diagnostic images in three major Dublin hospitals: a comparison with WHO and CEC recommendations. Br J Radiol 2003; 76: pp. 94-97.
10. Brennan P.C., McEntee M., Evanoff M., et. al.: Ambient lighting: effect of illumination on soft-copy viewing of radiographs of the wrist. AJR Am J Roentgenol 2007; 188: pp. W177-W180.
11. Reed W., Poulos A., Rickard M., et. al.: Reader practice in mammography screen reporting in Australia. J Med Imaging Radiat Oncol 2009; 53: pp. 530-537.
12. Hart D, Wall BF. Radiation exposure of the UK population from medical and dental x-ray examinations. NRPB-W4 Report. Chilton, UK: National Radiological Protection Board.
13. International Commission on Radiological Protection, Publication 103. Recommendations of the ICRP. Ann ICRP 2007; 37: pp. 2-4.
14. Law J., Faulkner K., Young K.C., et. al.: Risk factors for induction of breast cancer by X-rays and their implications for breast screening. Br J Radiol 2007; 80: pp. 261-266.
15. Saarenmaa I., Salminen T., Geiger U., et. al.: The visibility of cancer on previous mammograms in retrospective review. Clin Radiol 2001; 56: pp. 40-43.
16. Department of Health and Human Services, Food and Drug Administration. Quality Mammography Standards. 1997 Final Rule 21 CFR parts 16 & 900 [Docket no. 95N-0192]. RIN 0910-AA24 ed. Washington, DC: Department of Health and Human Services; 1997.
17. National Accreditation Committee. National Program for the Early Detection of Breast Cancer: National Accreditation Requirements. Canberra, Australia: Commonwealth Department of Human Services and Health; 1994.
18. Pritchard J. Quality assurance guidelines for mammography. National Health Service Breast Screening Programme. London, UK; 1989.
19. National Health Service Breast Screening Programme. Quality assurance guidelines for radiologists. Publication No. 15. Sheffield, UK: National Health Service Breast Screening Programme; 1997.
20. Kan L., Olivotto I.A., Warren Burhenne L.J., et. al.: Standardised abnormal interpretation and cancer detection ratios to assess reading volume and reader performance in a breast screening program. Radiology 2000; 215: pp. 563-567.
21. Rickard M., Taylor R., Page A., et. al.: Cancer detection and mammogram volume of radiologists in a population-based screening programme. Breast 2006; 15: pp. 39-43.
22. Mushlin A.I., Kouides R.W., Shapiro D.E.: Estimating the accuracy of screening mammography: a meta-analysis. Am J Prev Med 1998; 14: pp. 143-153.
23. Pisano E.D., Gatsonis C., Hendrick E., et. al.: Digital Mammographic Imaging Screening Trial (DMIST) Investigators Group. Diagnostic performance of digital versus film mammography for breast-cancer screening. N Engl J Med 2006; 355: pp. 1840.
24. Rutter C.M., Taplin S.: Assessing mammographers’ accuracy. A comparison of clinical and test performance. J Clin Epidemiol 2000; 53: pp. 443-450.