
Improving Accuracy in Reporting CT Scans of Oncology Patients

Rationale and Objectives

In February 2010, our radiology department adopted the Response Evaluation Criteria in Solid Tumors (RECIST) 1.1 for newly diagnosed oncology patients; prior to this, staff used RECIST 1.0. We hypothesized that education and feedback interventions could help clarify differences between the RECIST 1.0 and the newly adopted RECIST 1.1 guidelines and result in appropriate and accurate utilization of both reporting systems. This study evaluates the effect of education and feedback interventions on the accuracy of computed tomography (CT) reporting using RECIST criteria.

Materials and Methods

Consecutive CT scan reports and images were retrospectively reviewed during three different periods to assess compliance with RECIST guidelines. Data collected included the interpreting faculty member, the resident (if any), and the type and total number of errors per report. Significance testing of differences between cohorts was performed using an unequal variance t-test. Group A (baseline): RECIST 1.0 in use, prior to adoption of the RECIST 1.1 criteria. Group B (post distributed educational materials [DEM]): following adoption of the RECIST 1.1 criteria and distribution of educational materials. Group C (post audit and feedback [A & F]): following the audit and feedback intervention.

Results

The percentage of reports with errors decreased from 30% (baseline, Group A) to 28% (Group B) to 22% (Group C). Only the difference in error rate between baseline and Group C was significant (P = .03).

Conclusion

The combination of distributed educational materials and audit and feedback interventions improved the quality of radiology reports requiring RECIST criteria by reducing the number of studies with errors.

The quality and accuracy of the radiology report is critical for the appropriate management of oncology patients, both on and off clinical trials. The Response Evaluation Criteria in Solid Tumors (RECIST 1.0), generated by a multidisciplinary group of physicians, were initially published in 2000 to provide a standardized, simplified set of rules for measuring and reporting tumor burden in oncology patients, facilitating accurate determination of tumor response to therapy to direct future treatment decisions. The RECIST 1.0 criteria were widely adopted by academic institutions, cooperative groups, and the pharmaceutical industry. It was subsequently determined that tumor response could be assessed accurately using fewer lesions, and in an effort to improve the accuracy of choosing and measuring appropriate lymph node target lesions, the original criteria were revised by the RECIST Working Group, yielding the RECIST 1.1 criteria (Table 1).

Table 1

Summary Comparison of Guideline Characteristics in RECIST 1.0 and RECIST 1.1

| Guideline Characteristic | RECIST 1.0 | RECIST 1.1 |
| --- | --- | --- |
| Maximum number of target lesions | 10 | 5 |
| Maximum number of target lesions per organ | 5 | 2 |
| Axis to measure lymph nodes | Long | Short |
| Minimal lymph node size for target lesion (mm) | 10 | 15 |
| Minimum size for target lesion (non-lymph node) (mm) | 10 | 10 |

RECIST, Response Evaluation Criteria in Solid Tumors.
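
Because the Table 1 differences are simple size thresholds and count caps, they can be written down as explicit checks. The following Python sketch is a hypothetical illustration of the RECIST 1.1 rules above; the `Lesion` type and the `select_targets` helper are invented for this example and are not part of the study's workflow.

```python
from dataclasses import dataclass

# Hypothetical sketch only: names below are invented for illustration.

@dataclass
class Lesion:
    organ: str
    long_axis_mm: float
    short_axis_mm: float
    is_lymph_node: bool

def measured_axis_mm(lesion: Lesion) -> float:
    # RECIST 1.1 measures lymph nodes in the short axis, other lesions in the long axis.
    return lesion.short_axis_mm if lesion.is_lymph_node else lesion.long_axis_mm

def is_measurable_target(lesion: Lesion) -> bool:
    # Size thresholds from Table 1: nodes >= 15 mm short axis, non-nodes >= 10 mm.
    threshold = 15.0 if lesion.is_lymph_node else 10.0
    return measured_axis_mm(lesion) >= threshold

def select_targets(lesions: list[Lesion]) -> list[Lesion]:
    # Count caps from Table 1: at most 5 target lesions overall, 2 per organ.
    targets: list[Lesion] = []
    per_organ: dict[str, int] = {}
    for lesion in sorted(lesions, key=measured_axis_mm, reverse=True):
        if not is_measurable_target(lesion):
            continue
        if len(targets) >= 5 or per_organ.get(lesion.organ, 0) >= 2:
            continue
        targets.append(lesion)
        per_organ[lesion.organ] = per_organ.get(lesion.organ, 0) + 1
    return targets
```

Selecting the largest measurable lesions first is a simplification: RECIST 1.1 also asks that target lesions be suitable for reproducible repeated measurement, which no size rule alone captures.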


Materials and Methods


Scan Review


Table 2

Types of Errors Recorded for RECIST 1.0 and RECIST 1.1

| RECIST 1.0 Errors | RECIST 1.1 Errors | Error Type (Major∗ vs. Minor†) |
| --- | --- | --- |
| RECIST table omitted inappropriately | RECIST table omitted inappropriately | 1 |
| Inappropriate choice of indicator lesion(s) in first scan | Inappropriate choice of indicator lesion(s) in first scan | 2 |
| >10 total lesions reported | >5 total lesions reported | 1 |
| >5 lesions/organ reported | >2 lesions per organ reported | 1 |
| Lesion <10 mm measured | Lesion <10 mm measured | 2 |
| Measurement(s) not saved in PACS | Measurement(s) not saved in PACS | 1 |
| Incorrect measurement unit (eg, report centimeters [cm] instead of millimeters [mm]) | Incorrect measurement unit (eg, cm) | 1 |
| Inconsistent measurement angle | Inconsistent measurement angle | 2 |
| Measurement not accurate | Measurement not accurate | 2 |
| Poor lesion conspicuity‡ | Poor lesion conspicuity‡ | 2 |
| LN <10 mm measured | LN <15 mm measured | 2 |
| LN measured in short axis | LN measured in long axis | 2 |
| Non-measurable disease used | Non-measurable disease used | 2 |
| Indicator lesion numbering altered from previous RECIST report | Indicator lesion numbering altered from previous RECIST report | 1 |
| Indicator lesion dropped | Indicator lesion dropped | 2 |
| Nonaxial plane used | Nonaxial plane used | 1 |
| Wrong window used | Wrong window used | 1 |
| Wrong slice thickness used | Wrong slice thickness used | 1 |
| Measured through intervening bowel or vessel | Measured through intervening bowel or vessel | 2 |
| Multiple nodes grouped when measuring | Multiple nodes grouped when measuring | 2 |
| Impression used the terms: complete response, partial response, progressive disease, stable disease | Impression used the terms: complete response, partial response, progressive disease, stable disease | 2 |
| Other error (ie, incorrect series or slice number referenced for a lesion, failure to add new lesion to RECIST table, providing both short and long axis measurements for a lesion) | Other error (ie, incorrect series or slice number referenced for a lesion, failure to add new lesion to RECIST table, providing both short and long axis measurements for a lesion) | 1 |
| n/a | Incorrect RECIST version used | 1 |
| n/a | RECIST version used not indicated | 1 |

LN, lymph node; PACS, picture archive and communication system; RECIST, Response Evaluation Criteria in Solid Tumors.
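
In audit terms, Table 2 defines the per-report checklist: each report either triggers an error type or does not, and per-cohort counts (Table 3) follow by tallying. Below is a minimal, hypothetical sketch of such a tally; the error keys and report structure are invented here, and the study's actual audit tooling is not shown.

```python
from collections import Counter

def audit(reports: list[dict]) -> Counter:
    # Each report carries the Table 2 error types found on secondary review.
    tally: Counter = Counter()
    for report in reports:
        tally.update(report.get("errors", []))
    return tally

cohort = [
    {"accession": "X1", "errors": ["table_omitted", "version_not_indicated"]},
    {"accession": "X2", "errors": []},
    {"accession": "X3", "errors": ["ln_incorrect_axis"]},
]
print(audit(cohort))
# Counter({'table_omitted': 1, 'version_not_indicated': 1, 'ln_incorrect_axis': 1})
```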


Educational Interventions

DEM (Distributed Educational Materials)


A & F (Audit and Feedback)


Cohorts


Group A, baseline


Group B, post DEM


Group C, post A & F


Statistical Analysis

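As stated in the abstract, significance testing between cohorts used an unequal variance (Welch) t-test. A minimal sketch of that comparison follows, with invented per-study error counts standing in for the real data; only the choice of test comes from the Methods.

```python
from scipy import stats

# Invented per-study error counts (real data not reproduced here).
group_a = [1, 0, 0, 2, 1, 0, 0, 0, 1, 0]   # errors per RECIST-applicable study
group_c = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]

# equal_var=False gives Welch's unequal variance t-test.
t_stat, p_value = stats.ttest_ind(group_a, group_c, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```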

Results

Total Number of Errors by Cohort


Table 3

RECIST Errors Committed by Cohort

| RECIST Error Type | Group A (n = 246) | Group B (n = 246) | Group C (n = 218) |
| --- | --- | --- | --- |
| RECIST version not indicated | n/a | 15 | 4 |
| RECIST table omitted inappropriately | 34 | 37 | 10 |
| Incorrect RECIST version used | n/a | 4 | 1 |
| Inappropriate choice of indicator lesions in first scan | n/a | 0 | 0 |
| Too many total lesions used | 0 | 1 | 0 |
| Too many lesions/organ | 0 | 2 | 0 |
| Lesion <10 mm used | 0 | 1 | 2 |
| Measurements not saved | 5 | 4 | 5 |
| Incorrect measurement unit (eg, cm) | 3 | 1 | 0 |
| Inconsistent measurement angle | 0 | 0 | 0 |
| Measurement not accurate | 3 | 1 | 3 |
| Poor lesion conspicuity | 1 | 1 | 0 |
| LN too small measured | 0 | 1 | 0 |
| LN measured in incorrect axis | 5 | 5 | 12 |
| Non-measurable disease used | 0 | 2 | 0 |
| Indicator lesions numbered wrong | 0 | 2 | 1 |
| Indicator lesion dropped | 2 | 0 | 4 |
| Nonaxial plane used | 0 | 0 | 0 |
| Wrong window | 4 | 0 | 4 |
| Wrong slice thickness | 2 | 1 | 1 |
| Measured through intervening bowel or vessel | 0 | 2 | 0 |
| Multiple nodes grouped | 2 | 2 | 0 |
| Impression used complete response, partial response or progressive disease, stable disease | 12 | 4 | 2 |
| Other error | 23 | 7 | 19 |
| Total errors | 96 | 93 | 68 |

LN, lymph node; RECIST, Response Evaluation Criteria in Solid Tumors.

Table 4

Error Rates by Cohort

|  | Group A (Baseline) | Group B (Post DEM) | Group C (Post A & F) |
| --- | --- | --- | --- |
| Total number of studies interpreted | 246 | 246 | 218 |
| Number of studies requiring use of RECIST | 103 | 113 | 83 |
| Number of RECIST-requiring studies with error | 75 | 67 | 47 |
| Percent of studies with error (95% CI) | 30% (25%–36%) | 28% (22%–34%) | 22% (16%–27%)∗ |
| Total number of errors | 96 | 93 | 68 |
| Mean number of errors in RECIST-applicable studies (SD) | 0.93 (0.72) | 0.80 (1.03) | 0.82 (0.93) |
| Major errors, n (% of all errors) | 23 (24%) | 19 (20%) | 22 (32%) |

A & F, audit and feedback; CI, confidence interval; RECIST, Response Evaluation Criteria in Solid Tumors; SD, standard deviation.
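
The percent-with-error rows can be recomputed from the counts above. As an illustration, the snippet below reproduces the Group A interval using a normal-approximation (Wald) 95% CI; the paper does not state which interval method was used, so the method choice is an assumption (it happens to match the reported 30%, 25%–36%).

```python
import math

# Group A counts from Table 4; Wald interval is an assumption, not confirmed.
with_error, interpreted = 75, 246          # studies with error / studies interpreted
p = with_error / interpreted
half = 1.96 * math.sqrt(p * (1 - p) / interpreted)
print(f"{p:.0%} (95% CI {p - half:.0%}-{p + half:.0%})")   # 30% (95% CI 25%-36%)
```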


Error Rate and Mean Errors per Report by Cohort


Table 5

Number of Major and Minor Errors Committed by Cohort

|  | Major Errors Committed | Minor Errors Committed | Most Common Major Error |
| --- | --- | --- | --- |
| Group A | 23 | 73 | Impression used “complete response, partial response or progressive disease, stable disease” |
| Group B | 19 | 74 | Impression used “complete response, partial response or progressive disease, stable disease” |
| Group C | 22 | 46 | Lymph node measured in incorrect axis |


Error Rates between Staff-Only and Staff plus Resident/Fellow Reads


Table 6

Comparison of Error Rates in Resident/Fellow vs. Attending-only Reports

|  | Resident/Fellow | Attending Only |
| --- | --- | --- |
| Total number of studies | 206 | 504 |
| Number of studies requiring RECIST | 79 | 219 |
| Number of RECIST-requiring studies with errors | 54 | 135 |
| Percent of studies read with error(s) (95% CI) | 26% (21%–33%) | 27% (23%–31%) |
| Total number of errors | 81 | 176 |
| Mean number of errors in RECIST-applicable studies (SD) | 1.01 (1.15) | 0.79 (0.79) |

CI, confidence interval; RECIST, Response Evaluation Criteria in Solid Tumors; SD, standard deviation.


Discussion


Conclusion


References

  • 1. Therasse P., Arbuck S.G., Eisenhauer E.A., et al.: New guidelines to evaluate the response to treatment in solid tumors (RECIST guidelines). J Natl Cancer Inst 2000; 92: pp. 205-216.

  • 2. Eisenhauer E.A., Therasse P., Bogaerts J., et al.: New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer 2009; 45: pp. 228-247.

  • 3. Grimshaw J., Eccles M., Thomas R., et al.: Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966-1998. J Gen Intern Med 2006; 21: pp. S14-S20.

  • 4. Farmer A.P., Légaré F., Turcot L., et al.: Printed educational materials: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2008; Art. No.: CD004398.

  • 5. Jamtvedt G., Young J.M., Kristoffersen D.T., et al.: Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2006; Art. No.: CD000259.

  • 6. Foy R., Eccles M., Jamtvedt G., et al.: What do we know about how to do audit and feedback? Pitfalls in applying evidence from a systematic review. BMC Health Serv Res 2005; 5: pp. 50.

  • 7. Kielar A.Z., McInnes M., Quan M., et al.: Introduction of QUIP (quality information program) as a semi-automated quality assessment endeavor allowing retrospective review of errors in cross-sectional abdominal imaging. Acad Radiol 2011; 18: pp. 1358-1364.

  • 8. Carney P.A., Abraham L., Cook A., et al.: Impact of an educational intervention designed to reduce unnecessary recall during screening mammography. Acad Radiol 2012; 19: pp. 1114-1120.

  • 9. Carney P.A., Geller B.M., Sickles E.A., et al.: Feasibility and satisfaction with a tailored web-based audit intervention for recalibrating radiologists’ thresholds for conducting additional work-up. Acad Radiol 2011; 18: pp. 369-376.

  • 10. Chabi M.L., Borget I., Ardiles R., et al.: Evaluation of the accuracy of a computer-aided diagnosis (CAD) system in breast ultrasound according to the radiologist’s experience. Acad Radiol 2012; 19: pp. 311-319.
