
How Does the Display Luminance Level Affect Detectability of Breast Microcalcifications and Spiculated Lesions in Digital Breast Tomosynthesis (DBT) Images?

Rationale and Objectives

This study evaluates the influence of the calibrated luminance level of medical displays on the detectability of microcalcifications and spiculated lesions in digital breast tomosynthesis (DBT) images.

Materials and Methods

Four models of medical displays with calibrated maximum and minimum luminance ranging, respectively, from 500 to 1000 cd/m² and from 0.5 to 1.0 cd/m² were investigated. Forty-eight studies were selected by a senior radiologist: 16 with microcalcifications, 16 with spiculated lesions, and 16 without lesions. All images were anonymized and blindly evaluated by one senior and two junior radiologists. For each study, the presence or absence of a lesion, its localization, the level of interpretative difficulty, and the overall image quality were reported. Cohen's kappa statistic was computed between monitors and within or between radiologists to estimate the reproducibility of correctly identifying lesions; for the multireader-multicase analysis, the weighted jackknife alternative free-response receiver operating characteristic (wJAFROC) statistical tool was applied.
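
As an illustration of the agreement metric used here, the short Python sketch below computes Cohen's kappa for two readers' lesion-present/absent calls with scikit-learn; the reader vectors are hypothetical placeholders, not data from this study.

```python
# Minimal sketch (illustrative data only): agreement between two readers'
# lesion-present/absent calls for the same set of studies on one monitor.
from sklearn.metrics import cohen_kappa_score

calls_junior = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = lesion reported, 0 = no lesion
calls_senior = [1, 0, 1, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(calls_junior, calls_senior)
print(f"Cohen's kappa: {kappa:.3f}")
```

The multireader-multicase wJAFROC analysis itself is typically run with dedicated tools such as the RJafroc package cited in the references, rather than reimplemented by hand.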

Results

Intraradiologist reproducibility ranged from 0.75 to 1.00. Interreader and reader-truth agreement values were >0.80 and, for each radiologist, were higher with the two 1000 cd/m² displays than with the lower-luminance displays. Detection performance for breast lesions was significantly greater with the 1000 cd/m² displays than with the display with the lowest luminance (P < 0.001).

Conclusions

Our findings highlight the role of the display luminance level in the accuracy of detecting breast lesions.

Introduction

The interpretation of mammographic images may be challenging due to the subtle differences in soft tissue density between normal and pathological structures of the breast, and diagnosis benefits from the detection of microcalcifications and the assessment of mass borders. Mammographic interpretation is based on the evaluation of morphologic descriptors well established by the Breast Imaging Reporting And Data System (BI-RADS) lexicon. However, these descriptors can be applied only if a lesion is perceived on the mammographic study. The radiologist's expertise may be hampered if mammographic images are of poor quality or are improperly visualized. For this reason, great attention must be given to the devices used to present mammographic images, both film view boxes and softcopy displays, because a wrong choice or an improper setup can compromise the overall quality of the mammographic examination. Regarding mammographic displays, the ACR-AAPM-SIIM Practice Guideline for Determinants of Image Quality in Digital Mammography recommends that monitors used for interpretation be specifically approved for digital mammography use by the Food and Drug Administration (FDA), with a spatial resolution of 5 megapixels (MP) in a 21" panel. Regarding the luminance level, a calibrated maximum luminance of at least 400 cd/m² is required and a value greater than 450 cd/m² is recommended; an even higher display luminance is desirable, because the human eye's ability to detect differences in contrast and fine detail depends on the overall brightness of the scene. As for the calibrated minimum luminance level, no value is specified in these guidelines.
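
To make the quoted display requirements concrete, the brief sketch below encodes the figures mentioned above (5 MP resolution, calibrated maximum luminance of at least 400 cd/m², with more than 450 cd/m² recommended) as a simple check. The helper function is hypothetical and only restates the guideline numbers quoted in this paragraph.

```python
# Hypothetical helper that restates the guideline figures quoted above;
# it is not an official ACR-AAPM-SIIM conformance test.
def check_mammography_display(megapixels: float, max_luminance_cd_m2: float) -> str:
    if megapixels < 5 or max_luminance_cd_m2 < 400:
        return "below the quoted minimum requirements"
    if max_luminance_cd_m2 > 450:
        return "meets the requirements, including the recommended luminance"
    return "meets the minimum requirements only"

# Example: a 5 MP panel with 500 cd/m2 calibrated maximum luminance
print(check_mammography_display(5, 500))
```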

In the literature, several observer performance studies have compared diagnostic accuracy among displays of different spatial resolutions, but only a few authors have investigated the role of the display luminance level. Furthermore, to our knowledge, the diagnostic performance of medical displays dedicated to mammography with different setups has been poorly investigated. In the paper by Kimpe and Xthona, an increased calibrated maximum luminance was shown to increase the detection probability of breast microcalcifications. Briefly, increasing the calibrated maximum luminance of a medical display from 500 cd/m² to 1000 cd/m² increases the detection probability of microcalcifications by between 13% and 20%, depending on the parameter used in the Weibull psychometric function. The results presented in that article were obtained solely on the basis of Barten's model of the contrast sensitivity curve, and the authors themselves, in the conclusion of their study, suggested that an observational study be carried out to support their results.
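
For readers unfamiliar with the psychometric model mentioned above, the sketch below evaluates a standard two-parameter Weibull psychometric function, P(c) = 1 − exp(−(c/α)^β), with arbitrary α and β. It is only meant to illustrate how a higher effective contrast (as might result from a brighter, properly calibrated display) maps to a higher predicted detection probability; it does not reproduce Kimpe and Xthona's Barten-model calculation or their 13%–20% figures.

```python
import math

def weibull_detection_probability(contrast: float, alpha: float, beta: float) -> float:
    """Standard two-parameter Weibull psychometric function:
    P(detect) = 1 - exp(-(contrast / alpha) ** beta)."""
    return 1.0 - math.exp(-((contrast / alpha) ** beta))

# Arbitrary threshold (alpha) and slope (beta) parameters, for illustration only.
alpha, beta = 0.02, 1.5
for contrast in (0.015, 0.020, 0.030):
    p = weibull_detection_probability(contrast, alpha, beta)
    print(f"effective contrast {contrast:.3f} -> P(detect) = {p:.2f}")
```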


Materials and Methods


Equipment and Images


Table 1

Physical Properties of the Four Medical Displays Used in Our Study

| Display Model and Study Code | A (Barco Nio 5MP MDNG-5221) | B (Barco Coronis 5MP MDCG-5221) | C (Barco Tomosynthesis 5MP MDMG-5221) | D (Barco Coronis UNITI 12MP MDMC-12133) |
| --- | --- | --- | --- | --- |
| Panel | LCD monochrome | LCD monochrome | LCD monochrome | LCD color |
| Diagonal size (cm) | 54.1 | 54.1 | 54.1 | 85.3 |
| Pixel pitch (mm) | 0.165 | 0.165 | 0.165 | 0.169 |
| Maximum luminance (cd/m²) | 500 | 600 | 1000 | 1000 |
| Minimum luminance (cd/m²) | 0.5 | 0.6 | 1 | 1 |

LCD, liquid crystal display.


Table 2

Absolute Frequencies or Median Values (Range) for Different Variables, Grouped by Kind of Breast Lesion

| Kind of Breast Lesion | Microcalcification | Spiculated Lesion | No Suspicious Lesion |
| --- | --- | --- | --- |
| No. of lesions | 18 | 16 | 16 |
| Age (years) | 57 (48–76) | 55 (36–69) | 63 (48–73) |
| Breast thickness (mm) | 49 (31–70) | 58 (48–79) | 50 (26–77) |
| BI-RADS assessment – 1 | — | — | 10 |
| BI-RADS assessment – 2 | 7 | — | 6 |
| BI-RADS assessment – 3 | — | — | — |
| BI-RADS assessment – 4 | 9 | 10 | — |
| BI-RADS assessment – 5 | 2 | 6 | — |
| ACR breast density – a | — | — | — |
| ACR breast density – b | 4 | 2 | 5 |
| ACR breast density – c | 12 | 10 | 9 |
| ACR breast density – d | 2 | 4 | 2 |

ACR, American College of Radiology; BI-RADS, Breast Imaging Reporting And Data System.


Statistical Analysis


Results

Concordance Study


Table 3

Interobserver Reproducibility at Second Reading Session

| | Junior-1 Versus Junior-2 | Junior-1 Versus Senior | Junior-2 Versus Senior |
| --- | --- | --- | --- |
| Monitor A | 0.813 (0.686; 0.940) | 0.868 (0.759; 0.977) | 0.738 (0.593; 0.884) |
| Monitor B | 0.794 (0.665; 0.923) | 0.871 (0.763; 0.978) | 0.797 (0.670; 0.924) |
| Monitor C | 0.850 (0.741; 0.960) | 0.950 (0.883; 1.000) | 0.851 (0.742; 0.960) |
| Monitor D | 0.849 (0.738; 0.961) | 0.876 (0.774; 0.978) | 0.825 (0.707; 0.943) |

Each cell reports the κc value together with its 95% confidence interval (95% CI).


Table 4

Observer-Truth Reproducibility at Second Reading Session

| | Junior-1 Versus REF | Junior-2 Versus REF | Senior Versus REF |
| --- | --- | --- | --- |
| Monitor A | 0.794 (0.663; 0.924) | 0.744 (0.606; 0.883) | 0.823 (0.700; 0.945) |
| Monitor B | 0.821 (0.698; 0.944) | 0.750 (0.614; 0.885) | 0.849 (0.735; 0.963) |
| Monitor C | 0.975 (0.926; 1.000) | 0.875 (0.773; 0.976) | 0.975 (0.927; 1.000) |
| Monitor D | 0.925 (0.843; 1.000) | 0.874 (0.771; 0.977) | 0.950 (0.883; 1.000) |

REF, reference evaluation.

Each cell reports the κc value together with its 95% confidence interval (95% CI).


Performance Evaluation


Table 5

Performance Indicators for Each Observer and Each Display at Second Reading Session

| wJAFROC | Monitor A | Monitor B | Monitor C | Monitor D |
| --- | --- | --- | --- | --- |
| Junior-1 | 0.877 | 0.891 | 0.981 | 0.955 |
| Junior-2 | 0.812 | 0.818 | 0.938 | 0.922 |
| Senior | 0.872 | 0.918 | 0.996 | 0.952 |
| Average | 0.854 (0.774; 0.933) | 0.876 (0.778; 0.974) | 0.972 (0.896; 1.000) | 0.943 (0.906; 0.981) |

FoM, figure of merit; wJAFROC, weighted jackknife alternative free-response receiver operating characteristic.

Each cell reports the wJAFROC FoM value; for the average row, the 95% confidence interval is given in parentheses.


Figure 1. Tomosynthesis reconstruction as displayed on the 500 cd/m² monochrome monitor (left side, 1 and 1′ for the entire field and the detail, respectively) and on the 1000 cd/m² color monitor (right side, 2 and 2′). The images were captured with a digital camera positioned at the observer's location.


Additional Information


Discussion


References

  • 1. Sickles E.A., D'Orsi C.J., Bassett L.W., et al.: ACR BI-RADS® Mammography. Reston, VA: American College of Radiology; 2013.

  • 2. Kanal K.M., Krupinski E., Berns E.A., et al.: ACR-AAPM-SIIM practice guideline for determinants of image quality in digital mammography. J Digit Imaging 2013; 26: pp. 10-25.

  • 3. Yamada T., Suzuki A., Uchiyama N., et al.: Diagnostic performance of detecting breast cancer on computed radiographic (CR) mammograms: comparison of hard copy film, 3-megapixel liquid-crystal-display (LCD) monitor and 5-megapixel LCD monitor. Eur Radiol 2008; 18: pp. 2363-2369.

  • 4. Yin J., Guo Q., Sha X., et al.: Influence of liquid crystal displays (LCDs) with different resolutions on the detection of pulmonary nodules: an observer performance study. Eur J Radiol 2011; 80: pp. e153-e156.

  • 5. Awan O., Safdar N.M., Siddiqui K.M., et al.: Detection of cervical spine fracture on computed radiography images: a monitor resolution study. Acad Radiol 2011; 18: pp. 353-358.

  • 6. Toomey R.J., Ryan J.T., McEntee M.F., et al.: Diagnostic efficacy of handheld devices for emergency radiologic consultation. AJR 2010; 194: pp. 469-474.

  • 7. Ekpo U.E., McEntee M.F.: An evaluation of performance characteristics of primary display devices. J Digit Imaging 2015.

  • 8. Marchessoux C., de Paepe L., Vanovermeire O., et al.: Clinical evaluation of a medical high dynamic range display. Med Phys 2016; 43: pp. 4023-4031.

  • 9. Kimpe T., Xthona A.: Quantification of detection probability of microcalcifications at increased display luminance levels. 2012.

  • 10. Kopans D.B.: Digital breast tomosynthesis from concept to clinical care. AJR 2014; 202: pp. 299-308.

  • 11. Samei E., Badano A., Chakraborty D., et al.: Assessment of display performance for medical imaging systems: executive summary of AAPM TG18 report. Med Phys 2005; 32: pp. 1205-1225.

  • 12. IEC 62563-1:2009: Medical electrical equipment - Medical image display systems - Part 1: Evaluation methods. Available at: http://webstore.iec.ch/publication/7209. Accessed July 19, 2016.

  • 13. Cohen J.: A coefficient of agreement for nominal scales. Educ Psychol Meas 1960; 20: pp. 37-46.

  • 14. Landis J.R., Koch G.G.: The measurement of observer agreement for categorical data. Biometrics 1977; 33: pp. 159-174.

  • 15. Dendumrongsup T., Plumb A.A., Halligan S., et al.: Multi-reader multi-case studies using the area under the receiver operator characteristic curve as a measure of diagnostic accuracy: systematic review with a focus on quality of data reporting. PLoS ONE 2014; 9: pp. e116018.

  • 16. Obuchowski N.A., Beiden S.V., Berbaum K.S., et al.: Multireader, multicase receiver operating characteristic analysis: an empirical comparison of five methods. Acad Radiol 2004; 11: pp. 980-995.

  • 17. Wunderlich A., Abbey C.K.: Utility as a rationale for choosing observer performance assessment paradigms for detection tasks in medical imaging. Med Phys 2013; 40: pp. 111903.

  • 18. Hillis S.L.: A comparison of denominator degrees of freedom methods for multiple observer ROC analysis. Stat Med 2007; 26: pp. 596-619.

  • 19. Chakraborty D.P., Zhai X.: Analysis of data acquired using ROC paradigm and its extensions. Available at: https://cran.r-project.org/web/packages/RJafroc/vignettes/RJafroc.pdf. Accessed July 19, 2016.

  • 20. Agresti A.: Categorical Data Analysis. 3rd ed. New York: John Wiley and Sons; 1990.

  • 21. Wang J., Langer S.: A brief review of human perception factors in digital displays for picture archiving and communications system. J Digit Imaging 1997; 10: pp. 158-168.
