Response Evaluation Criteria in Solid Tumors (RECIST) is a standardized methodology for determining therapeutic response to anticancer therapy using changes in lesion appearance on imaging studies. Many radiologists are now using RECIST in their routine clinical workflow, as part of consultative arrangements, or within dedicated imaging core laboratories. Although basic RECIST methodology is well described in published articles and online resources, inexperienced readers may encounter difficulties with certain nuances and subtleties of RECIST. This article illustrates a set of pitfalls in RECIST assessment considered to be “beyond the basics.” These pitfalls were uncovered during a quality improvement review of a recently established cancer imaging core laboratory staffed by radiologists with limited prior RECIST experience. Pitfalls are presented in four categories: (1) baseline selection of lesions, (2) reassessment of target lesions, (3) reassessment of nontarget lesions, and (4) identification of new lesions. Educational and operational strategies for addressing these pitfalls are suggested. Attention to these pitfalls and strategies may improve the overall quality of RECIST assessments performed by radiologists.
The Response Evaluation Criteria in Solid Tumors (RECIST) have become broadly accepted by the oncology community, industry sponsors, and regulatory bodies for assessing the therapeutic efficacy of new anticancer agents. Most clinical trials for solid malignancies use RECIST for response assessment. RECIST guidelines, maintained by the European Organization for Research and Treatment of Cancer (EORTC), specify how to identify and measure lesions, how to evaluate disease burden at follow-up imaging, and how to place patients into response categories at successive time points during a trial. Trial-level composite end points derived from RECIST, including overall response rate, time to progression, and progression-free survival (PFS), form the basic vocabulary for reporting the efficacy of investigational new agents and for comparing the efficacy of different treatment regimens.
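The placement of patients into response categories that these end points depend on follows fixed numeric rules. Below is a minimal sketch, in Python, of RECIST 1.1 target-lesion response categorization using the published thresholds (complete response when all target lesions resolve and residual nodes measure less than 10 mm in short axis; partial response at a ≥30% decrease in the sum of diameters from baseline; progression at a ≥20% increase from the smallest sum on study plus an absolute increase of at least 5 mm). The function and parameter names are our own illustration, not part of any RECIST software.

```python
def target_response(baseline_sum_mm, nadir_sum_mm, current_sum_mm,
                    all_target_lesions_resolved):
    """Assign a RECIST 1.1 target-lesion response category.

    Sums are sums of diameters in millimeters (long axis for non-nodal
    lesions, short axis for lymph nodes). `all_target_lesions_resolved`
    means every non-nodal target lesion has disappeared and every target
    node has a short axis < 10 mm (the RECIST 1.1 CR rule).
    """
    if all_target_lesions_resolved:
        return "CR"
    # Progressive disease: >=20% increase over the smallest sum on study
    # (the nadir) AND an absolute increase of at least 5 mm.
    increase = current_sum_mm - nadir_sum_mm
    if increase >= 0.20 * nadir_sum_mm and increase >= 5:
        return "PD"
    # Partial response: >=30% decrease relative to the baseline sum.
    if baseline_sum_mm - current_sum_mm >= 0.30 * baseline_sum_mm:
        return "PR"
    return "SD"
```

Note the asymmetry the sketch makes explicit: partial response is judged against the baseline sum, whereas progression is judged against the nadir, which is a frequent source of reader error.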
Although RECIST was developed in the oncology community, many radiologists are now involved in performing RECIST assessments, especially in the academic setting. Radiologists may use RECIST in different contexts, including within their routine clinical workflow, as part of consultative arrangements with local research colleagues or industry sponsors, or within dedicated imaging core laboratories. A key objective when using RECIST is to apply the technique in a standardized fashion, adhering as closely as possible to established methodology and thereby minimizing the interreader variability that undermines reproducibility of results.
Although basic RECIST methodology is described in published articles and online resources, certain nuances and subtleties of the technique may be problematic for inexperienced readers. The purpose of this article is to illustrate a set of RECIST pitfalls considered to be “beyond the basics.” Examples were drawn from a 6-month quality improvement review of a cancer imaging core laboratory established at our institution in 2012; the radiologists staffing this new core laboratory were board certified and subspecialty trained but had little to no prior dedicated experience with RECIST methodology. (Please see the Appendix for a description of our core laboratory, an overview of our quality improvement review methods, and a downloadable version of our educational materials.) Pitfalls are presented in four categories: (1) baseline selection of lesions, (2) reassessment of target lesions, (3) reassessment of nontarget lesions, and (4) identification of new lesions. These categories mirror the major evaluative steps established by RECIST for an individual patient on a clinical trial. For each pitfall, we suggest educational or operational support strategies for minimizing or preventing errors.
Baseline selection of lesions
Pitfalls in this category include (1) inappropriate selection of a target lesion when it is not unequivocally a metastasis, (2) selection of too many target lesions at baseline, (3) inappropriate selection of a small lesion as a target lesion, and (4) inappropriate selection of a target lesion from within a radiation field.
Inappropriate Selection of Target Lesion When Not Unequivocally a Metastasis
Because inadvertently selecting a benign lesion as a target lesion can lead to a false assessment of response or stability over time, it is crucial that only unequivocally metastatic lesions be chosen as target lesions ( Fig 1 ). Educational materials presented to new RECIST readers should emphasize this point. Prior imaging studies may be useful to confirm interval growth of an otherwise indeterminate lesion, although these may not always be available (especially in a centralized review setting). Review of clinic notes, operative reports, or surgical pathology reports may be useful to confirm sites of recent biopsy or surgery, thus preventing the inadvertent selection of a postoperative seroma or granulation tissue as a target lesion.
Selection of Too Many Target Lesions at Baseline
Inappropriate Selection of Small Lesions as Target Lesions
Inappropriate Selection of Target Lesion from within a Radiation Field
Reassessment of target lesions
Remeasurement of Lesions in a Different Phase of Contrast than Baseline
Failure to Change Measurement Axis with Changes in Lesion Orientation
Reassessment of nontarget lesions
Premature Assignment of PD for Nontarget Lesions
Incorrect Designation of PR for Nontarget Lesions
Comparison to the Incorrect Prior Scan
Failure to Assign CR When Nontarget Lymph Nodes Fall Below 10 mm
Identification of new lesions
Premature Assessment of New Disease on Anatomic Imaging
Premature Assessment of New Disease on FDG-PET Studies
Discussion
Table 1
Response Evaluation Criteria in Solid Tumors (RECIST) Data Extraction “Pearls”
Baseline selection of lesions
Reassessment of target lesions
Reassessment of nontarget lesions
Identification of new lesions
CR, complete response; CT, computed tomography; FDG-PET, 18F-fluorodeoxyglucose positron emission tomography; PD, progressive disease; PR, partial response.
Supplementary data
Appendix
Table 2
Figure