
How to Report a Research Study

Incomplete reporting hampers the evaluation of results and of potential bias in clinical research studies. Guidelines for reporting study design and methods have been developed to encourage authors and journals to include the required elements, and recent efforts have been made to standardize the reporting of clinical health research, including clinical guidelines. In this article, the reporting of diagnostic test accuracy studies, screening studies, therapeutic studies, systematic reviews and meta-analyses, cost-effectiveness assessments (CEAs), recommendations and/or guidelines, and medical education studies is discussed. We also discuss the available reporting guidelines for these different types of health research, many of which can be found through the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network. We hope that this article can also be used in academic programs to educate faculty and trainees about the resources available to improve health research.

This article is the first in a series of two articles that will review how to report and how to critically appraise research in health care. In this article, the reporting of diagnostic test accuracy and screening studies, therapeutic studies, systematic reviews and meta-analyses, cost-effectiveness studies, recommendations and/or guidelines, and medical education studies is discussed. The available guidelines on how to report these different types of health research are also discussed. The second article will review the evolution of standardization of critical appraisal techniques for health research.

Recent efforts have been made to standardize both the reporting and the critical appraisal of clinical health research, including clinical guidelines. In 2006, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network was formed to improve the quality of health research reporting. Recognizing the need to critically assess the methodological quality of studies, as well as the widespread deficiencies and lack of standardization in primary research reporting, the network brought together international stakeholders, including editors, peer reviewers, and guideline developers, to improve both the quality of research publications and the quality of the research itself. Many of the presentations at the joint Radiological Alliance for Health Service Research/Alliance of Clinical-Educators in Radiology session at the 2013 Association of University Radiologists annual meeting highlighted reporting guidelines available through the EQUATOR network. The EQUATOR network's goals are to raise awareness of the crucial importance of accurate and complete reporting of research; to become the recognized global center providing resources, education, and training related to the reporting of health research and the use of reporting guidelines; to assist in the development, dissemination, and implementation of reporting guidelines; to monitor the quality of reporting across the health research literature; and to conduct research evaluating or pertaining to the quality of reporting. The desired result of these goals is to improve the quality of health care research reporting, which in turn improves patient care.

The EQUATOR Network Resource Centre provides up-to-date resources on health research reporting, mainly for authors of research articles, journal editors and peer reviewers, and reporting guideline developers, to enable better reporting, reviewing, and editing. The network has developed and maintains a digital Library for Health Research Reporting that provides publications related to writing research articles; reporting guidelines and guidance on scientific writing; the use of reporting guidelines in editorial and peer review processes; the development of reporting guidelines; and evaluations of the quality of reporting. The library contains comprehensive lists of the available reporting guidelines, organized by study type: experimental studies, observational studies, diagnostic accuracy studies, biospecimen reporting, reliability and agreement studies, systematic reviews and meta-analyses, qualitative research, mixed-methods studies, economic evaluations, and quality improvement studies. Standards catalogued there include the consolidated standards of reporting trials (CONSORT) checklist and flow diagram for randomized controlled trials (RCTs); the transparent reporting of evaluations with nonrandomized designs (TREND) checklist for nonrandomized trials; the standards for the reporting of diagnostic accuracy studies (STARD) checklist and flow diagram for diagnostic test accuracy studies; the strengthening the reporting of observational studies in epidemiology (STROBE) checklists for cohort, case–control, and cross-sectional studies; the preferred reporting items of systematic reviews and meta-analyses (PRISMA) checklist and flow diagram for systematic reviews and meta-analyses; the consolidated criteria for reporting qualitative research (COREQ) and enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) checklists for qualitative research; the standards for quality improvement reporting excellence (SQUIRE) checklist for quality improvement studies; the consolidated health economic evaluation reporting standards (CHEERS) for health economics studies; and the statement on reporting of evaluation studies in health informatics (STARE-HI) for health informatics studies.

How to report studies

Screening Studies and Diagnostic Test Accuracy Studies

There are no specific EQUATOR network recommendations for reporting screening studies; in general, however, reports of screening studies should incorporate the items important to diagnostic test accuracy studies. Screening is the application of a test to detect a disease in an individual who has no known signs or symptoms of it. The purpose of screening is to prevent or delay the development of advanced disease through earlier detection, enabling treatment that is both less morbid and more effective.

Screening is distinct from other diagnostic testing in that patients undergoing screening are asymptomatic, and the chance of having the disease of interest is lower in asymptomatic patients than in those presenting with symptoms. For a screening study, the most important factors to consider are the characteristics of the population to be screened, the screening regimens being compared, the diagnostic performance of the screening test or tests, and the outcome measure selected. Additional considerations are the diagnostic consequences that occur during a patient's screening episode, such as additional downstream testing for those with positive results and follow-up monitoring of those with negative results.
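To make the prevalence point concrete, the sketch below applies Bayes' theorem to the same hypothetical test (90% sensitivity, 95% specificity; both values and both prevalences are illustrative assumptions, not data from any actual screening program) in a symptomatic referral population and in an asymptomatic screening population. The positive predictive value collapses at screening prevalences, which is why the downstream consequences of positive screens deserve explicit reporting.

```python
# Illustrative sketch: how prevalence drives positive predictive value (PPV).
# All numbers below are hypothetical, chosen only for illustration.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Apply Bayes' theorem to a test used in a population with a given prevalence."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Same hypothetical test (90% sensitive, 95% specific) in two settings:
for label, prev in [("symptomatic referral population", 0.30),
                    ("asymptomatic screening population", 0.005)]:
    ppv, npv = predictive_values(0.90, 0.95, prev)
    print(f"{label}: prevalence={prev:.1%}  PPV={ppv:.1%}  NPV={npv:.2%}")
# PPV falls from roughly 89% to roughly 8%: at screening prevalence,
# most positive results are false positives.
```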


Table 1

STARD Items and Explanation

Title/Abstract/Keywords

Item 1. Identify the article as a study of diagnostic accuracy. Use the term "diagnostic accuracy" in the title or abstract. In 1991, the National Library of Medicine's MEDLINE database introduced a specific keyword (MeSH heading) for diagnostic studies: "Sensitivity and Specificity."

Introduction

Item 2. State the research questions or study aims, such as estimating diagnostic accuracy or comparing accuracy between tests or across participant groups. Describe the scientific background, previous work on the subject, the remaining uncertainty, and, hence, the rationale for the study. Clearly specified research questions help readers judge the appropriateness of the study design and data analysis.

Methods: Participants

Item 3. Describe the study population: the inclusion and exclusion criteria and the settings and locations where data were collected. Diagnostic accuracy studies describe the behavior of a test under particular circumstances and should report the inclusion and exclusion criteria used to select the study population. The spectrum of the target disease can vary and affect test performance.

Item 4. Describe participant recruitment: how eligible patients were identified. Was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had received the index tests or the reference standard? Describe how eligible subjects were identified and whether the study enrolled a consecutive or random sample of patients. Study designs are likely to influence the spectrum of disease represented.

Item 5. Describe participant sampling: was the study population a consecutive series of participants defined by the selection criteria in items 3 and 4? If not, specify how participants were further selected.

Item 6. Describe data collection: was data collection planned before the index test and reference standard were performed (prospective study) or after (retrospective study)? Prospective data collection has many advantages: better data control, additional checks for data integrity and consistency, and a level of clinical detail appropriate to the problem. As a result, there will be fewer missing or uninterpretable data items. Retrospective data collection starts after patients have undergone the index test and the reference standard and often relies on chart review. Studies with retrospective data collection may reflect routine clinical practice better than a prospective study, but they also may fail to identify all eligible patients or to provide data of high quality.

Methods: Test methods

Item 7. Describe the reference standard and its rationale. The reference standard is used to distinguish patients with and without disease. When it is not possible to subject all patients to the reference standard for practical or ethical reasons, a composite reference standard is an alternative; its components may reflect different definitions of, or strategies for, disease diagnosis.

Item 8. Describe technical specifications of the materials and methods involved, including how and when measurements were taken, and/or cite references for the index tests and reference standard. Describe the execution of the index test and reference standard in sufficient detail to allow other researchers to replicate the study; differences in execution are a potential source of variation in diagnostic accuracy. The description should cover the full test protocol, including the specification of materials and instruments together with their instructions for use.

Item 9. Describe the definitions of and rationale for the units, thresholds, and/or categories of the index tests and reference standard. Test results can be truly dichotomous (eg, present or absent), have multiple categories, or be continuous. Clearly describe how and when category boundaries are used.

Item 10. Describe the number, training, and expertise of the persons executing and reading the index tests and the reference standard. Variability in the manipulation, processing, or reading of the index test or reference standard will affect diagnostic accuracy. Professional background, expertise, and prior training to improve interpretation and to reduce interobserver variation all affect the quality of reading.

Item 11. Describe whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test, and describe any other clinical information available to the readers. Knowledge of the results of the reference standard can influence the reading of the index test, and vice versa, leading to inflated measures of diagnostic accuracy.

Methods: Statistical methods

Item 12. Describe methods for calculating or comparing measures of diagnostic accuracy (eg, sensitivity, specificity, PPV, NPV, ROC curves, likelihood ratios, and odds ratios) and the statistical methods used to quantify uncertainty (eg, 95% confidence intervals).

Item 13. Describe methods for calculating test reproducibility, if done. Reproducibility of the index test and reference standard varies, and poor reproducibility adversely affects diagnostic accuracy. If possible, authors should evaluate the reproducibility of the test methods used in their study and report the procedure for doing so.

Results: Participants

Item 14. Report when the study was performed, including the beginning and end dates of recruitment. The technology behind many tests advances continuously, leading to improvements in diagnostic accuracy, and there may be a considerable gap between the dates of the study and the publication date of the report.

Item 15. Report the clinical and demographic characteristics of the study population. These are usually presented in a table and include age, sex, spectrum of presenting symptoms, comorbidity, current treatments, and recruitment centers.

Item 16. Report the number of participants satisfying the criteria for inclusion who did or did not undergo the index tests and/or the reference standard, and describe why participants failed to receive either test. A flow diagram is strongly recommended.

Results: Test results

Item 17. Report the time interval between the index tests and the reference standard, and any treatment administered in between. When delay occurs between the index test and the reference standard, the condition of the patient may change, leading to worsening or improvement of the disease. Similar concerns apply if treatment is started after the index test but before the reference standard.

Item 18. Report the distribution of severity of disease (define criteria). Demographic and clinical features of the study population can affect measures of diagnostic accuracy. Many diseases are not pure dichotomous states but cover a continuum, ranging from minute pathological changes to advanced clinical disease. Test sensitivity is often higher in studies with a higher proportion of patients with more advanced stages of the target condition.

Item 19. Report a cross tabulation of the results of the index tests (including indeterminate and missing results) by the results of the reference standard; for continuous results, report the distribution of the test results by the results of the reference standard. Cross tabulations of test results in categories and graphs of distributions of continuous results are essential to allow scientific colleagues to (re)calculate measures of diagnostic accuracy or to perform alternative analyses, including meta-analysis.

Item 20. Report any adverse events from performing the index tests or the reference standard. Not all tests are safe; measuring and reporting adverse events in studies of diagnostic accuracy can provide additional information about the clinical usefulness of a particular test.

Results: Estimates

Item 21. Report estimates of diagnostic accuracy and measures of statistical uncertainty (eg, 95% confidence intervals). Report how well the test results correspond with the reference standard; the values presented should be taken as estimates with some variation. Many journals require or strongly encourage confidence intervals as measures of precision, and a 95% confidence interval is conventional.

Item 22. Report how indeterminate results, missing data, and outliers of the index tests were handled. Uninterpretable, indeterminate, and intermediate test results pose a problem in the assessment of a diagnostic test. The frequency of such results is itself an important indicator of the overall usefulness of the test, and ignoring them can produce biased estimates of diagnostic accuracy if they occur more frequently in patients with disease than in those without, or vice versa.

Item 23. Report estimates of variability of diagnostic accuracy between subgroups of participants, readers, or centers, if done. Because variability is the rule rather than the exception, researchers should explore possible sources of heterogeneity in results, within the limits of the available sample size. The best practice is to plan subgroup analyses before the start of the study.

Item 24. Report estimates of test reproducibility, if done. Report all measures of test reproducibility performed during the study; for quantitative analytical methods, report the coefficient of variation (CV).

Discussion

Item 25. Discuss the clinical applicability of the study findings. Provide a general interpretation of the results in the context of current evidence and their applicability in practice. Clearly define the methodological shortcomings of the study, how they potentially affected the results, and approaches to limit their significance. Discuss differences between the context of the study and other settings and patient groups in which the test is likely to be used, and provide future direction for this work in advancing clinical practice or research in this field.

NPV, negative predictive value; PPV, positive predictive value; ROC, receiver operating characteristic.
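Items 12, 19, and 21 all revolve around the 2×2 cross tabulation of index test results against the reference standard. As a minimal sketch, assuming purely hypothetical cell counts, the following Python computes the usual accuracy measures with Wilson score 95% confidence intervals; the same arithmetic underlies any STARD-compliant results section.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical 2x2 cross tabulation (index test vs. reference standard), per item 19.
tp, fp, fn, tn = 90, 15, 10, 185

sens, spec = tp / (tp + fn), tn / (tn + fp)
lr_pos = sens / (1 - spec)        # positive likelihood ratio
lr_neg = (1 - sens) / spec        # negative likelihood ratio
dor = lr_pos / lr_neg             # diagnostic odds ratio

for name, x, n in [("sensitivity", tp, tp + fn), ("specificity", tn, tn + fp),
                   ("PPV", tp, tp + fp), ("NPV", tn, tn + fn)]:
    lo, hi = wilson_ci(x, n)
    print(f"{name}: {x/n:.3f} (95% CI {lo:.3f}-{hi:.3f})")
print(f"LR+: {lr_pos:.2f}  LR-: {lr_neg:.2f}  DOR: {dor:.1f}")
```

Reporting the full cross tabulation, and not just the derived ratios, is what lets readers recalculate these estimates or pool them in a meta-analysis.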


Therapeutic Studies


Meta-analyses


Cost-Effectiveness Assessments


Table 2

Checklist for Reporting the Reference Case Cost-Effectiveness Analysis

Framework
1. Background of the problem
2. General framing and design of the analysis
3. Target population for intervention
4. Other program descriptors (eg, care setting, model of delivery, timing of intervention)
5. Description of comparator programs
6. Boundaries of the analysis
7. Time horizon
8. Statement of the perspective of the analysis

Data and Methods
9. Description of event pathway
10. Identification of outcomes of interest in analysis
11. Description of model used
12. Modeling assumptions
13. Diagram of event pathway (model)
14. Software used
15. Complete description of estimates of effectiveness, resource use, unit costs, health states, and quality-of-life weights and their sources
16. Methods for obtaining estimates of effectiveness, costs, and preferences
17. Critique of data quality
18. Statement of year of costs
19. Statement of method used to adjust costs for inflation
20. Statement of type of currency
21. Source and methods for obtaining expert judgment
22. Statement of discount rates

Results
23. Results of model validation
24. Reference case results (discounted at 3% and undiscounted): total costs and effectiveness, incremental costs and effectiveness, and incremental cost-effectiveness ratios
25. Results of sensitivity analyses
26. Other estimates of uncertainty, if available
27. Graphical representation of cost-effectiveness results
28. Aggregate cost and effectiveness information
29. Disaggregated results, as relevant
30. Secondary analyses using 5% discount rate
31. Other secondary analyses, as relevant

Discussion
32. Summary of reference case results
33. Summary of sensitivity of results to assumptions and uncertainties in the analysis
34. Discussion of analysis assumptions having important ethical implications
35. Limitations of the study
36. Relevance of study results for specific policy questions or decisions
37. Results of related cost-effectiveness analyses
38. Distributive implications of an intervention
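Items 22, 24, and 30 of this checklist concern discounting: the reference case is reported both discounted at 3% and undiscounted, with a 5% rate as a secondary analysis. A minimal sketch of that arithmetic is shown below, using hypothetical yearly cost and QALY streams for an intervention and a comparator (all numbers are assumptions for illustration); the incremental cost-effectiveness ratio (ICER) is the ratio of discounted incremental costs to discounted incremental effectiveness.

```python
# Hypothetical annual costs (dollars) and effectiveness (QALYs) over a 5-year horizon.
intervention = {"costs": [12000, 2000, 2000, 2000, 2000],
                "qalys": [0.80, 0.82, 0.82, 0.81, 0.80]}
comparator   = {"costs": [3000, 3000, 3000, 3000, 3000],
                "qalys": [0.78, 0.77, 0.76, 0.75, 0.74]}

def present_value(stream, rate):
    """Discount a yearly stream to present value; year 0 is undiscounted."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

def icer(a, b, rate):
    d_cost = present_value(a["costs"], rate) - present_value(b["costs"], rate)
    d_qaly = present_value(a["qalys"], rate) - present_value(b["qalys"], rate)
    return d_cost, d_qaly, d_cost / d_qaly

for rate in (0.0, 0.03, 0.05):  # undiscounted, 3% reference case, 5% secondary analysis
    d_cost, d_qaly, ratio = icer(intervention, comparator, rate)
    print(f"discount {rate:.0%}: incremental cost ${d_cost:,.0f}, "
          f"incremental QALYs {d_qaly:.3f}, ICER ${ratio:,.0f}/QALY")
```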


Table 3

International Society for Pharmacoeconomics and Outcomes Research Randomized Control Trial Cost-Effectiveness Analysis (ISPOR RCT-CEA) Task Force Report of Core Recommendations for Conducting Economic Analyses Alongside Clinical Trials

Trial design
1. Trial design should reflect effectiveness rather than efficacy when possible.
2. Full follow-up of all patients is encouraged.
3. Describe power and ability to test hypotheses, given the trial sample size.
4. Clinical end points used in economic evaluations should be disaggregated.
5. Direct measures of outcome are preferred to use of intermediate end points.

Data elements
6. Obtain information to derive health state utilities directly from the study population.
7. Collect all resources that may substantially influence overall costs; these include those related and unrelated to the intervention.

Database design and management
8. Collection and management of the economic data should be fully integrated into the clinical data.
9. Consent forms should include wording permitting the collection of economic data, particularly when it will be gathered from third-party databases and may include pre- and/or post-trial records.

Analysis
10. The analysis of economic measures should be guided by a data analysis plan and hypotheses that are drafted prior to the onset of the study.
11. All cost-effectiveness analyses should include the following: an intention-to-treat analysis; common time horizon(s) for accumulating costs and outcomes; a within-trial assessment of costs and outcomes; an assessment of uncertainty; a common discount rate applied to future costs and outcomes; and an accounting for missing and/or censored data.
12. Incremental costs and outcomes should be measured as differences in arithmetic means, with statistical testing accounting for issues specific to these data (eg, skewness, mass at zero, censoring, construction of QALYs).
13. Imputation is desirable if there is a substantial amount of missing data. Censoring, if present, should also be addressed.
14. One or more summary measures should be used to characterize the relative value of the intervention.
15. Examples include ratio measures, difference measures, and probability measures (eg, cost-effectiveness acceptability curves).
16. Uncertainty should be characterized. Account for uncertainty that stems from sampling, fixed parameters such as unit costs and the discount rate, and methods to address missing data.
17. Threats to external validity (including protocol-driven resource use, unrepresentative recruiting centers, restrictive inclusion and exclusion criteria, and artificially enhanced compliance) are best addressed at the design phase.
18. Multinational trials require special consideration to address intercountry differences in population characteristics and treatment patterns.
19. When models are used to estimate costs and outcomes beyond the time horizon of the trial, good modeling practices should be followed. Models should reflect the expected duration of the intervention on costs and outcomes.
20. Subgroup analyses based on prespecified clinical and economic interactions, when found to be significant ex post, are appropriate. Ad hoc subgroup analysis is discouraged.

Reporting the results
21. Minimum reporting standards for cost-effectiveness analyses should be adhered to for those conducted alongside clinical trials.
22. The cost-effectiveness report should include a general description of the clinical trial and key clinical findings.
23. Reporting should distinguish economic data collected as part of the trial vs. data not collected as part of the trial.
24. The amount of missing data should be reported. If imputation methods are used, the method should be described.
25. Methods used to construct and compare costs and outcomes, and to project costs and outcomes beyond the trial period, should be described.
26. The results section should include summaries of resource use, costs, and outcome measures, including point estimates and measures of uncertainty. Results should be reported for the time horizon of the trial and for projections beyond the trial (if conducted).
27. Graphical displays are recommended for results not easily reported in tabular form (eg, cost-effectiveness acceptability curves, joint density of incremental costs and outcomes).

QALYs, quality-adjusted life years.
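Recommendations 12, 15, and 16 above ask for incremental costs and outcomes expressed as differences in arithmetic means, probability-based summary measures such as cost-effectiveness acceptability curves, and an explicit characterization of sampling uncertainty. One standard way to satisfy all three at once is a nonparametric bootstrap of patient-level data. The sketch below uses simulated patient-level costs and QALYs (an assumption for illustration only, since no trial data accompany this article):

```python
import random

random.seed(1)

# Simulated patient-level data (hypothetical): (total cost, QALYs) per patient per arm.
arm_a = [(random.gauss(21000, 6000), random.gauss(4.0, 0.6)) for _ in range(150)]  # intervention
arm_b = [(random.gauss(16000, 5000), random.gauss(3.8, 0.6)) for _ in range(150)]  # comparator

def mean_diff(a, b):
    """Incremental cost and effectiveness as differences in arithmetic means."""
    d_cost = sum(c for c, _ in a) / len(a) - sum(c for c, _ in b) / len(b)
    d_qaly = sum(q for _, q in a) / len(a) - sum(q for _, q in b) / len(b)
    return d_cost, d_qaly

def bootstrap_ceac(a, b, thresholds, reps=2000):
    """P(intervention is cost-effective) at each willingness-to-pay threshold."""
    wins = {w: 0 for w in thresholds}
    for _ in range(reps):
        ra = random.choices(a, k=len(a))  # resample patients with replacement
        rb = random.choices(b, k=len(b))
        d_cost, d_qaly = mean_diff(ra, rb)
        for w in thresholds:
            if w * d_qaly - d_cost > 0:   # positive incremental net monetary benefit
                wins[w] += 1
    return {w: n / reps for w, n in wins.items()}

for w, p in bootstrap_ceac(arm_a, arm_b, [10000, 25000, 50000, 100000]).items():
    print(f"willingness to pay ${w:,}/QALY: P(cost-effective) = {p:.2f}")
```

Plotting these probabilities against the threshold gives the cost-effectiveness acceptability curve named in recommendation 15.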


Recommendations and/or Guidelines


Practice guidelines are systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances.


Table 4

Standards for Developing Trustworthy Clinical Practice Guidelines

Standard 1: Establishing transparency
Standard 2: Management of conflict of interest
Standard 3: Guideline development group composition
Standard 4: Clinical practice guideline-systematic review intersection
Standard 5: Establishing evidence foundations for and rating strength of recommendations
Standard 6: Articulation of recommendations
Standard 7: External review
Standard 8: Updating

From Clinical Practice Guidelines We Can Trust, Institute of Medicine, National Academies Press, 2011 (64).


Medical Education Studies


Conclusions


Table 5

Reporting Guidelines by Research Study Design, Acronym, Web site URL, and Bibliographic Reference

Each entry below gives the research study design, what the reporting guideline covers (with its acronym), the guideline Web site URL and available full text, and the full bibliographic reference.

Diagnostic test accuracy: studies of diagnostic accuracy (STARD). http://www.stard-statement.org/ (full-text PDF documents of the STARD Statement, checklist, flow diagram, and the explanation and elaboration document). Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, Lijmer JG, Moher D, Rennie D, de Vet HC. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Standards for Reporting of Diagnostic Accuracy. Clin Chem. 2003;49(1):1–6. PMID: 12507953. Also published in:

BMJ. 2003;326(7379):41–44. PMID: 12511463.
Radiology. 2003;226(1):24–28. PMID: 12511664.
Ann Intern Med. 2003;138(1):40–44. PMID: 12513043.
Am J Clin Pathol. 2003;119(1):18–22. PMID: 12520693.
Clin Biochem. 2003;36(1):2–7. PMID: 12554053.

Clin Chem Lab Med. 2003;41(1):68–73. PMID: 12636052.

Clinical trials, experimental studies: parallel group randomised trials (CONSORT). http://www.consort-statement.org/ (full-text PDF documents of the CONSORT 2010 Statement, CONSORT 2010 checklist, CONSORT 2010 flow diagram, and the CONSORT 2010 Explanation and Elaboration document). Schulz KF, Altman DG, Moher D, for the CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. Ann Int Med. 2010;152(11):726–732. PMID: 20335313. Also published in:

BMC Medicine. 2010;8:18. PMID: 20334633.
BMJ. 2010;340:c332. PMID: 20332509.
J Clin Epidemiol. 2010;63(8):834–840. PMID: 20346629.
Lancet. 2010;375(9721):1136 (supplementary web appendix).
Obstet Gynecol. 2010;115(5):1063–1070. PMID: 20410783.
Open Med. 2010;4(1):60–68.
PLoS Med. 2010;7(3):e1000251. PMID: 20352064.
Trials. 2010;11:32. PMID: 20334632.

Clinical trials, experimental studies: trials assessing nonpharmacologic treatments (CONSORT extension for nonpharmacologic treatment interventions). http://www.consort-statement.org/extensions/interventions/non-pharmacologic-treatment-interventions/ (full text of the extension). Boutron I, Moher D, Altman DG, Schulz K, Ravaud P, for the CONSORT group. Methods and processes of the CONSORT Group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med. 2008:W60–W67. PMID: 18283201.

Clinical trials, experimental studies: cluster randomised trials (CONSORT Cluster). http://www.consort-statement.org/extensions/designs/cluster-trials/ (full text of the extension). Campbell MK, Piaggio G, Elbourne DR, Altman DG, for the CONSORT Group. Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012;345:e5661. PMID: 22951546.

Clinical trials, experimental studies: reporting randomised trials in journal and conference abstracts (CONSORT for Abstracts). http://www.consort-statement.org/extensions/data/abstracts/ (full text of the extension). Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, Schulz KF, the CONSORT Group. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet. 2008;371(9609):281–283. PMID: 18221781.

Clinical trials, experimental studies: pragmatic trials in health care (CONSORT pragmatic trials). http://www.consort-statement.org/extensions/designs/pragmatic-trials/ (full text of the extension). Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, Oxman AD, Moher D; CONSORT group; Pragmatic Trials in Healthcare (Practihc) group. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390. PMID: 19001484.

Clinical trials, experimental studies: reporting of harms in randomized trials (CONSORT Harms). http://www.consort-statement.org/extensions/data/harms/ Ioannidis JPA, Evans SJW, Gotzsche PC, O'Neill RT, Altman DG, Schulz K, Moher D, for the CONSORT Group. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med. 2004;141(10):781–788. PMID: 15545678.

Clinical trials, experimental studies: patient-reported outcomes in randomized trials (CONSORT-PRO). http://www.consort-statement.org/extensions/data/pro/ (full text of the extension). Calvert M, Blazeby J, Altman DG, Revicki DA, Moher D, Brundage MD; CONSORT PRO Group. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA. 2013;309(8):814–822. PMID: 23443445.

Clinical trials, experimental studies: noninferiority and equivalence randomized trials (CONSORT noninferiority). http://www.consort-statement.org/extensions/designs/non-inferiority-and-equivalence-trials/ (full text of the extension). Piaggio G, Elbourne DR, Pocock SJ, Evans SJ, Altman DG; CONSORT Group. Reporting of noninferiority and equivalence randomized trials: extension of the CONSORT 2010 statement. JAMA. 2012;308(24):2594–2604. PMID: 23268518.

Clinical trials, experimental studies: defining standard protocol items for clinical trials (SPIRIT). http://www.spirit-statement.org/ (full text of the SPIRIT 2013 Statement). Chan A-W, Tetzlaff JM, Altman DG, Laupacis A, Gøtzsche PC, Krleža-Jerić K, Hróbjartsson A, Mann H, Dickersin K, Berlin J, Doré C, Parulekar W, Summerskill W, Groves T, Schulz K, Sox H, Rockhold FW, Rennie D, Moher D. SPIRIT 2013 Statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158(3):200–207. PMID: 23295957.
Systematic reviews/meta-analyses/HTA: systematic reviews and meta-analyses (PRISMA). http://www.prisma-statement.org/ (full-text PDF documents of the PRISMA Statement, checklist, flow diagram, and the PRISMA Explanation and Elaboration document). Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. PMID: 19621072. Also published in:

BMJ. 2009;339:b2535. PMID: 19622551.
Ann Intern Med. 2009;151(4):264–269, W64. PMID: 19622511.
J Clin Epidemiol. 2009;62(10):1006–1012. PMID: 19631508.

Open Med. 2009;3(3):123–130.

Systematic reviews/meta-analyses/HTA: reporting systematic reviews in journal and conference abstracts (PRISMA for Abstracts). Beller EM, Glasziou PP, Altman DG, Hopewell S, Bastian H, Chalmers I, Gøtzsche PC, Lasserson T, Tovey D; PRISMA for Abstracts Group. PRISMA for Abstracts: reporting systematic reviews in journal and conference abstracts. PLoS Med. 2013;10(4):e1001419. PMID: 23585737.

Systematic reviews/meta-analyses/HTA: meta-analysis of individual participant data. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ. 2010;340:c221. PMID: 20139215.

Economic evaluations: economic evaluations of health interventions (CHEERS). http://www.ispor.org/taskforces/EconomicPubGuidelines.asp (information about the CHEERS Statement and a full-text PDF copy of the CHEERS checklist). Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, Augustovski F, Briggs AH, Mauskopf J, Loder E. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Eur J Health Econ. 2013;14(3):367–372. PMID: 23526140. Also published in:

Value Health. 2013;16(2):e1–e5. PMID: 23538200.
Clin Ther. 2013;35(4):356–363. PMID: 23537754.
Cost Eff Resour Alloc. 2013;11(1):6. PMID: 23531194.
BMC Med. 2013;11:80. PMID: 23531108.
BMJ. 2013;346:f1049. PMID: 23529982.
Pharmacoeconomics. 2013;31(5):361–367. PMID: 23529207.
J Med Econ. 2013;16(6):713–719. PMID: 23521434.
Int J Technol Assess Health Care. 2013;29(2):117–122. PMID: 23587340.

BJOG. 2013;120(6):765–770. PMID: 23565948.

Clinical trials, diagnostic accuracy studies, experimental studies, observational studies: narrative in reports of medical research. Schriger DL. Suggestions for improving the reporting of clinical research: the role of narrative. Ann Emerg Med. 2005;45(4):437–443. PMID: 15795727.

Observational studies: completeness, transparency, and data analysis in case reports and data from the point of care (CARE). http://www.care-statement.org/ (the CARE checklist and the CARE writing template for authors). Gagnier JJ, Kienle G, Altman DA, Moher D, Sox H, Riley D; the CARE Group. The CARE guidelines: consensus-based clinical case reporting guideline development. BMJ Case Rep. 2013; http://dx.doi.org/10.1136/bcr-2013-201554 PMID: 24155002. Also published in:

Global Adv Health Med. 2013; doi:10.7453/gahmj.2013.008.
Dtsch Arztebl Int. 2013;110(37):603–608. PMID: 24078847 (full text in English and German).
J Clin Epidemiol. 2013; Epub ahead of print. PMID: 24035173.
J Med Case Rep. 2013;7(1):223. PMID: 24228906.

J Diet Suppl. 2013;10(4):381–390. PMID: 24237192.

Reliability and agreement studies: reliability and agreement studies (GRRAS). Kottner J, Audigé L, Brorson S, Donner A, Gajeweski BJ, Hróbjartsson A, Roberts C, Shoukri M, Streiner DL. Guidelines for reporting reliability and agreement studies (GRRAS) were proposed. J Clin Epidemiol. 2011;64(1):96–106. PMID: 21130355. Also published in:

Int J Nurs Stud. 2011;48(6):661–671. PMID: 21514934.

Qualitative research, systematic reviews/meta-analyses/HTA: synthesis of qualitative research (ENTREQ). Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12(1):181. PMID: 23185978.

Qualitative research: qualitative research interviews and focus groups (COREQ). http://intqhc.oxfordjournals.org/content/19/6/349.long (full text). Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–357. PMID: 17872937.

Mixed-methods studies: mixed methods studies in health services research (GRAMMS). O'Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. J Health Serv Res Policy. 2008;13(2):92–98. PMID: 18416914.

Quality improvement studies: quality improvement in health care (SQUIRE). http://squire-statement.org/ Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney S. Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Qual Saf Health Care. 2008;17 Suppl 1:i3–i9. PMID: 18836063. Also published in:

BMJ. 2009;338:a3152. PMID: 19153129.
Jt Comm J Qual Patient Saf. 2008;34(11):681–687. PMID: 19025090.
Ann Intern Med. 2008;149(9):670–676. PMID: 18981488.

J Gen Intern Med. 2008;23(12):2125–2130. PMID: 18830766.

Health informatics: evaluation studies in health informatics (STARE-HI). Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykanen P, Rigby M. STARE-HI: statement on reporting of evaluation studies in health informatics. Int J Med Inform. 2009;78(1):1–9. PMID: 18930696.

CARE, case reports; CHEERS, consolidated health economic evaluation reporting standards; CONSORT, consolidated standards of reporting trials; COREQ, consolidated criteria for reporting qualitative research; ENTREQ, enhancing transparency in reporting the synthesis of qualitative research; GRAMMS, good reporting of a mixed-methods study; GRRAS, guidelines for reporting reliability and agreement studies; HTA, health technology assessment; PRISMA, preferred reporting items for systematic reviews and meta-analyses; SPIRIT, standard protocol items: recommendations for interventional trials; SQUIRE, standards for quality improvement reporting excellence; STARD, standards for reporting of diagnostic accuracy; STARE-HI, statement on reporting of evaluation studies in health informatics.


Appendix


Appendix Table 1

STARD checklist for reporting of studies of diagnostic accuracy.

For each item, authors record the page on which it is reported.

Title/Abstract/Keywords
1. Identify the article as a study of diagnostic accuracy (recommend MeSH heading "sensitivity and specificity").

Introduction
2. State the research questions or study aims, such as estimating diagnostic accuracy or comparing accuracy between tests or across participant groups.

Methods: Participants
3. The study population: the inclusion and exclusion criteria, setting, and locations where data were collected.
4. Participant recruitment: was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had received the index tests or the reference standard?
5. Participant sampling: was the study population a consecutive series of participants defined by the selection criteria in items 3 and 4? If not, specify how participants were further selected.
6. Data collection: was data collection planned before the index test and reference standard were performed (prospective study) or after (retrospective study)?

Methods: Test methods
7. The reference standard and its rationale.
8. Technical specifications of material and methods involved, including how and when measurements were taken, and/or cite references for index tests and reference standard.
9. Definition of and rationale for the units, cut-offs, and/or categories of the results of the index tests and the reference standard.
10. The number, training, and expertise of the persons executing and reading the index tests and the reference standard.
11. Whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test, and any other clinical information available to the readers.

Methods: Statistical methods
12. Methods for calculating or comparing measures of diagnostic accuracy, and the statistical methods used to quantify uncertainty (e.g., 95% confidence intervals).
13. Methods for calculating test reproducibility, if done.

Results: Participants
14. When the study was performed, including beginning and end dates of recruitment.
15. Clinical and demographic characteristics of the study population (at least information on age, gender, spectrum of presenting symptoms).
16. The number of participants satisfying the criteria for inclusion who did or did not undergo the index tests and/or the reference standard; describe why participants failed to undergo either test (a flow diagram is strongly recommended).

Results: Test results
17. Time interval between the index tests and the reference standard, and any treatment administered in between.
18. Distribution of severity of disease (define criteria) in those with the target condition; other diagnoses in participants without the target condition.
19. A cross tabulation of the results of the index tests (including indeterminate and missing results) by the results of the reference standard; for continuous results, the distribution of the test results by the results of the reference standard.
20. Any adverse events from performing the index tests or the reference standard.

Results: Estimates
21. Estimates of diagnostic accuracy and measures of statistical uncertainty (e.g., 95% confidence intervals).
22. How indeterminate results, missing data, and outliers of the index tests were handled.
23. Estimates of variability of diagnostic accuracy between subgroups of participants, readers, or centers, if done.
24. Estimates of test reproducibility, if done.

Discussion
25. Discuss the clinical applicability of the study findings.

STARD, standards for reporting of diagnostic accuracy.

STARD flow diagram


Appendix Table 2

CONSORT 2010 checklist of information to include when reporting a randomized trial.

For each item, authors record the page on which it is reported.

Title and abstract
1a. Identification as a randomised trial in the title.
1b. Structured summary of trial design, methods, results, and conclusions.

Introduction
Background and objectives
2a. Scientific background and explanation of rationale.
2b. Specific objectives or hypotheses.

Methods
Trial design
3a. Description of trial design (such as parallel, factorial) including allocation ratio.
3b. Important changes to methods after trial commencement (such as eligibility criteria), with reasons.
Participants
4a. Eligibility criteria for participants.
4b. Settings and locations where the data were collected.
Interventions
5. The interventions for each group with sufficient details to allow replication, including how and when they were actually administered.
Outcomes
6a. Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed.
6b. Any changes to trial outcomes after the trial commenced, with reasons.
Sample size
7a. How sample size was determined.
7b. When applicable, explanation of any interim analyses and stopping guidelines.
Randomisation: sequence generation
8a. Method used to generate the random allocation sequence.
8b. Type of randomisation; details of any restriction (such as blocking and block size).
Allocation concealment mechanism
9. Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned.
Implementation
10. Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions.
Blinding
11a. If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how.
11b. If relevant, description of the similarity of interventions.
Statistical methods
12a. Statistical methods used to compare groups for primary and secondary outcomes.
12b. Methods for additional analyses, such as subgroup analyses and adjusted analyses.

Results
Participant flow (a diagram is strongly recommended)
13a. For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome.
13b. For each group, losses and exclusions after randomisation, together with reasons.
Recruitment
14a. Dates defining the periods of recruitment and follow-up.
14b. Why the trial ended or was stopped.
Baseline data
15. A table showing baseline demographic and clinical characteristics for each group.
Numbers analysed
16. For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups.
Outcomes and estimation
17a. For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval).
17b. For binary outcomes, presentation of both absolute and relative effect sizes is recommended.
Ancillary analyses
18. Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory.
Harms
19. All important harms or unintended effects in each group (for specific guidance see CONSORT for harms [28]).

Discussion
Limitations
20. Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses.
Generalisability
21. Generalisability (external validity, applicability) of the trial findings.
Interpretation
22. Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence.

Other information
Registration
23. Registration number and name of trial registry.
Protocol
24. Where the full trial protocol can be accessed, if available.
Funding
25. Sources of funding and other support (such as supply of drugs), role of funders.
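Item 7a asks how the sample size was determined. For a parallel-group trial with a binary primary outcome, this typically reduces to the standard two-proportion formula; the sketch below reproduces that calculation in Python, with the control and intervention event rates chosen purely for illustration.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1: float, p2: float,
                                alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm sample size to detect p1 vs. p2 with a two-sided test
    (normal approximation to the two-proportion comparison)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical trial: control event rate 30%, intervention expected to lower it to 20%.
n = sample_size_two_proportions(0.30, 0.20)
print(f"{n} per arm ({2 * n} total) for 80% power at two-sided alpha = 0.05")
# Reporting this calculation, with the assumed rates, is what item 7a asks for.
```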

CONSORT 2010 flow diagram


Appendix Table 3

The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

For each item, authors record the page on which it is reported.

Title
1. Title: Identify the report as a systematic review, meta-analysis, or both.

Abstract
2. Structured summary: Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; systematic review registration number.

Introduction
3. Rationale: Describe the rationale for the review in the context of what is already known.
4. Objectives: Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).

Methods
5. Protocol and registration: Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address), and, if available, provide registration information including registration number.
6. Eligibility criteria: Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language, publication status) used as criteria for eligibility, giving rationale.
7. Information sources: Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched.
8. Search: Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated.
9. Study selection: State the process for selecting studies (i.e., screening, eligibility, included in systematic review, and, if applicable, included in the meta-analysis).
10. Data collection process: Describe method of data extraction from reports (e.g., piloted forms, independently, in duplicate) and any processes for obtaining and confirming data from investigators.
11. Data items: List and define all variables for which data were sought (e.g., PICOS, funding sources) and any assumptions and simplifications made.
12. Risk of bias in individual studies: Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level), and how this information is to be used in any data synthesis.
13. Summary measures: State the principal summary measures (e.g., risk ratio, difference in means).
14. Synthesis of results: Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I²) for each meta-analysis.
15. Risk of bias across studies: Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies).
16. Additional analyses: Describe methods of additional analyses (e.g., sensitivity or subgroup analyses, meta-regression), if done, indicating which were pre-specified.

Results
17. Study selection: Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.
18. Study characteristics: For each study, present characteristics for which data were extracted (e.g., study size, PICOS, follow-up period) and provide the citations.
19. Risk of bias within studies: Present data on risk of bias of each study and, if available, any outcome-level assessment (see item 12).
20. Results of individual studies: For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group and (b) effect estimates and confidence intervals, ideally with a forest plot.
21. Synthesis of results: Present results of each meta-analysis done, including confidence intervals and measures of consistency.
22. Risk of bias across studies: Present results of any assessment of risk of bias across studies (see item 15).
23. Additional analysis: Give results of additional analyses, if done (e.g., sensitivity or subgroup analyses, meta-regression [see item 16]).

Discussion
24. Summary of evidence: Summarize the main findings including the strength of evidence for each main outcome; consider their relevance to key groups (e.g., healthcare providers, users, and policy makers).
25. Limitations: Discuss limitations at study and outcome level (e.g., risk of bias) and at review level (e.g., incomplete retrieval of identified research, reporting bias).
26. Conclusions: Provide a general interpretation of the results in the context of other evidence, and implications for future research.

Funding
27. Funding: Describe sources of funding for the systematic review and other support (e.g., supply of data); role of funders for the systematic review.
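Item 14 asks for the synthesis method and a measure of consistency such as I². As a minimal sketch, assuming hypothetical per-study effect estimates (log odds ratios with standard errors), the following shows inverse-variance (fixed-effect) pooling together with Cochran's Q and the derived I² statistic:

```python
import math

# Hypothetical per-study effects: (log odds ratio, standard error) pairs.
studies = [(-0.40, 0.20), (-0.15, 0.15), (-0.55, 0.25), (-0.30, 0.18), (0.05, 0.22)]

# Inverse-variance (fixed-effect) pooling.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# Cochran's Q and the I^2 measure of inconsistency across studies.
q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100  # percentage of variation beyond chance

lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled log OR: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
print(f"pooled OR: {math.exp(pooled):.2f}")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

High I² values signal heterogeneity that a PRISMA-compliant report should explore through the subgroup or sensitivity analyses of item 16.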

PRISMA 2009 flow diagram


Appendix Table 4

Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist: items to include when reporting economic evaluations of health interventions.

For each item, authors record the page and line on which it is reported.

Title and abstract
1. Title: Identify the study as an economic evaluation or use more specific terms such as "cost-effectiveness analysis", and describe the interventions compared.
2. Abstract: Provide a structured summary of objectives, perspective, setting, methods (including study design and inputs), results (including base case and uncertainty analyses), and conclusions.

Introduction
3. Background and objectives: Provide an explicit statement of the broader context for the study. Present the study question and its relevance for health policy or practice decisions.

Methods
4. Target population and subgroups: Describe characteristics of the base case population and subgroups analysed, including why they were chosen.
5. Setting and location: State relevant aspects of the system(s) in which the decision(s) need(s) to be made.
6. Study perspective: Describe the perspective of the study and relate this to the costs being evaluated.
7. Comparators: Describe the interventions or strategies being compared and state why they were chosen.
8. Time horizon: State the time horizon(s) over which costs and consequences are being evaluated and say why appropriate.
9. Discount rate: Report the choice of discount rate(s) used for costs and outcomes and say why appropriate.
10. Choice of health outcomes: Describe what outcomes were used as the measure(s) of benefit in the evaluation and their relevance for the type of analysis performed.
11a. Measurement of effectiveness (single study-based estimates): Describe fully the design features of the single effectiveness study and why the single study was a sufficient source of clinical effectiveness data.
11b. Measurement of effectiveness (synthesis-based estimates): Describe fully the methods used for identification of included studies and synthesis of clinical effectiveness data.
12. Measurement and valuation of preference-based outcomes: If applicable, describe the population and methods used to elicit preferences for outcomes.
13a. Estimating resources and costs (single study-based economic evaluation): Describe approaches used to estimate resource use associated with the alternative interventions. Describe primary or secondary research methods for valuing each resource item in terms of its unit cost. Describe any adjustments made to approximate to opportunity costs.
13b. Estimating resources and costs (model-based economic evaluation): Describe approaches and data sources used to estimate resource use associated with model health states. Describe primary or secondary research methods for valuing each resource item in terms of its unit cost. Describe any adjustments made to approximate to opportunity costs.
14. Currency, price date, and conversion: Report the dates of the estimated resource quantities and unit costs. Describe methods for adjusting estimated unit costs to the year of reported costs if necessary. Describe methods for converting costs into a common currency base and the exchange rate.
15. Choice of model: Describe and give reasons for the specific type of decision-analytical model used. Providing a figure to show model structure is strongly recommended.
16. Assumptions: Describe all structural or other assumptions underpinning the decision-analytical model.
17. Analytical methods: Describe all analytical methods supporting the evaluation. This could include methods for dealing with skewed, missing, or censored data; extrapolation methods; methods for pooling data; approaches to validate or make adjustments (such as half-cycle corrections) to a model; and methods for handling population heterogeneity and uncertainty.

Results
18. Study parameters: Report the values, ranges, references, and, if used, probability distributions for all parameters. Report reasons or sources for distributions used to represent uncertainty where appropriate. Providing a table to show the input values is strongly recommended.
19. Incremental costs and outcomes: For each intervention, report mean values for the main categories of estimated costs and outcomes of interest, as well as mean differences between the comparator groups. If applicable, report incremental cost-effectiveness ratios.
20a. Characterising uncertainty (single study-based economic evaluation): Describe the effects of sampling uncertainty for the estimated incremental cost and incremental effectiveness parameters, together with the impact of methodological assumptions (such as discount rate, study perspective).
20b. Characterising uncertainty (model-based economic evaluation): Describe the effects on the results of uncertainty for all input parameters, and uncertainty related to the structure of the model and assumptions.
21. Characterising heterogeneity: If applicable, report differences in costs, outcomes, or cost-effectiveness that can be explained by variations between subgroups of patients with different baseline characteristics or other observed variability in effects that are not reducible by more information.

Discussion
22. Study findings, limitations, generalisability, and current knowledge: Summarise key study findings and describe how they support the conclusions reached. Discuss limitations and the generalisability of the findings and how the findings fit with current knowledge.

Other
23. Source of funding: Describe how the study was funded and the role of the funder in the identification, design, conduct, and reporting of the analysis. Describe other non-monetary sources of support.
24. Conflicts of interest: Describe any potential for conflict of interest of study contributors in accordance with journal policy. In the absence of a journal policy, we recommend authors comply with International Committee of Medical Journal Editors recommendations.

Appendix Table 5

Consolidated criteria for reporting qualitative studies (COREQ): 32-item checklist.

Domain 1: Research team and reflexivity
Personal characteristics
1. Interviewer/facilitator: Which author/s conducted the interview or focus group?
2. Credentials: What were the researcher's credentials? (e.g., PhD, MD)
3. Occupation: What was their occupation at the time of the study?
4. Gender: Was the researcher male or female?
5. Experience and training: What experience or training did the researcher have?
Relationship with participants
6. Relationship established: Was a relationship established prior to study commencement?
7. Participant knowledge of the interviewer: What did the participants know about the researcher? (e.g., personal goals, reasons for doing the research)
8. Interviewer characteristics: What characteristics were reported about the interviewer/facilitator? (e.g., bias, assumptions, reasons and interests in the research topic)

Domain 2: Study design
Theoretical framework
9. Methodological orientation and theory: What methodological orientation was stated to underpin the study? (e.g., grounded theory, discourse analysis, ethnography, phenomenology, content analysis)
Participant selection
10. Sampling: How were participants selected? (e.g., purposive, convenience, consecutive, snowball)
11. Method of approach: How were participants approached? (e.g., face-to-face, telephone, mail, email)
12. Sample size: How many participants were in the study?
13. Non-participation: How many people refused to participate or dropped out? Reasons?
Setting
14. Setting of data collection: Where was the data collected? (e.g., home, clinic, workplace)
15. Presence of non-participants: Was anyone else present besides the participants and researchers?
16. Description of sample: What are the important characteristics of the sample? (e.g., demographic data, date)
Data collection
17. Interview guide: Were questions, prompts, or guides provided by the authors? Was it pilot tested?
18. Repeat interviews: Were repeat interviews carried out? If yes, how many?
19. Audio/visual recording: Did the research use audio or visual recording to collect the data?
20. Field notes: Were field notes made during and/or after the interview or focus group?
21. Duration: What was the duration of the interviews or focus group?
22. Data saturation: Was data saturation discussed?
23. Transcripts returned: Were transcripts returned to participants for comment and/or correction?

Domain 3: Analysis and findings
Data analysis
24. Number of data coders: How many data coders coded the data?
25. Description of the coding tree: Did authors provide a description of the coding tree?
26. Derivation of themes: Were themes identified in advance or derived from the data?
27. Software: What software, if applicable, was used to manage the data?
28. Participant checking: Did participants provide feedback on the findings?
Reporting
29. Quotations presented: Were participant quotations presented to illustrate the themes/findings? Was each quotation identified? (e.g., participant number)
30. Data and findings consistent: Was there consistency between the data presented and the findings?
31. Clarity of major themes: Were major themes clearly presented in the findings?
32. Clarity of minor themes: Is there a description of diverse cases or discussion of minor themes?

Appendix Table 6

Enhancing transparency in reporting the synthesis of qualitative research: the ENTREQ statement.

1. Aim: State the research question the synthesis addresses.
2. Synthesis methodology: Identify the synthesis methodology or theoretical framework which underpins the synthesis, and describe the rationale for the choice of methodology (e.g. meta-ethnography, thematic synthesis, critical interpretive synthesis, grounded theory synthesis, realist synthesis, meta-aggregation, meta-study, framework synthesis).
3. Approach to searching: Indicate whether the search was pre-planned (comprehensive search strategies to seek all available studies) or iterative (to seek all available concepts until theoretical saturation is achieved).
4. Inclusion criteria: Specify the inclusion/exclusion criteria (e.g. in terms of population, language, year limits, type of publication, study type).
5. Data sources: Describe the information sources used (e.g. electronic databases (MEDLINE, EMBASE, CINAHL, PsycINFO, Econlit), grey literature databases (digital theses, policy reports), relevant organisational websites, experts, information specialists, generic web searches (Google Scholar), hand searching, reference lists) and when the searches were conducted; provide the rationale for using the data sources.
6. Electronic search strategy: Describe the literature search (e.g. provide electronic search strategies with population terms, clinical or health topic terms, experiential or social phenomena related terms, filters for qualitative research, and search limits).
7. Study screening methods: Describe the process of study screening and sifting (e.g. title, abstract and full text review, number of independent reviewers who screened studies).
8. Study characteristics: Present the characteristics of the included studies (e.g. year of publication, country, population, number of participants, data collection, methodology, analysis, research questions).
9. Study selection results: Identify the number of studies screened and provide reasons for study exclusion (e.g. for comprehensive searching, provide numbers of studies screened and reasons for exclusion indicated in a figure/flowchart; for iterative searching, describe reasons for study exclusion and inclusion based on modifications to the research question and/or contribution to theory development).
10. Rationale for appraisal: Describe the rationale and approach used to appraise the included studies or selected findings (e.g. assessment of conduct (validity and robustness), assessment of reporting (transparency), assessment of content and utility of the findings).
11. Appraisal items: State the tools, frameworks and criteria used to appraise the studies or selected findings (e.g. existing tools: CASP, QARI, COREQ, Mays and Pope; reviewer-developed tools; describe the domains assessed: research team, study design, data analysis and interpretations, reporting).
12. Appraisal process: Indicate whether the appraisal was conducted independently by more than one reviewer and if consensus was required.
13. Appraisal results: Present results of the quality assessment and indicate which articles, if any, were weighted/excluded based on the assessment, and give the rationale.
14. Data extraction: Indicate which sections of the primary studies were analysed and how the data were extracted from the primary studies (e.g. all text under the headings “results/conclusions” was extracted electronically and entered into computer software).
15. Software: State the computer software used, if any.
16. Number of reviewers: Identify who was involved in coding and analysis.
17. Coding: Describe the process for coding of data (e.g. line-by-line coding to search for concepts).
18. Study comparison: Describe how comparisons were made within and across studies (e.g. subsequent studies were coded into pre-existing concepts, and new concepts were created when deemed necessary).
19. Derivation of themes: Explain whether the process of deriving the themes or constructs was inductive or deductive.
20. Quotations: Provide quotations from the primary studies to illustrate themes/constructs, and identify whether the quotations were participant quotations or the author's interpretation.
21. Synthesis output: Present rich, compelling and useful results that go beyond a summary of the primary studies (e.g. new interpretation, models of evidence, conceptual models, analytical framework, development of a new theory or construct).
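
For ENTREQ item 9, reviewers typically tally the records screened and the reasons for exclusion before drawing the flow diagram. A trivial sketch of that bookkeeping follows; all counts and exclusion reasons are hypothetical, invented only to show the shape of the summary that feeds a flowchart.

```python
# Hypothetical screening counts for an ENTREQ item 9 style flow summary.
screened = 412                      # records identified after deduplication
excluded = {                        # reasons for exclusion at each stage
    "title/abstract: not qualitative": 118,
    "title/abstract: wrong population": 96,
    "full text: no primary data": 57,
    "full text: conference abstract only": 24,
}
included = screened - sum(excluded.values())

print(f"Records screened: {screened}")
for reason, count in excluded.items():
    print(f"  Excluded ({reason}): {count}")
print(f"Studies included in synthesis: {included}")
```

Keeping these counts as the review progresses, rather than reconstructing them at write-up, makes the required figure straightforward to produce and audit.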


References

  • 1. The EQUATOR Network website. http://www.equator-network.org/. Accessed December 21, 2013.

  • 2. Moher D., Hopewell S., Schulz K.F., et. al.: CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340: pp. c869.

  • 3. Moher D., Hopewell S., Schulz K.F., et. al.: CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol 2010; 63: pp. e1-37.

  • 4. Moher D., Hopewell S., Schulz K.F., et. al.: CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg 2012; 10: pp. 28-55.

  • 5. Schulz K.F., Altman D.G., Moher D.: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340: pp. c332.

  • 6. Schulz K.F., Altman D.G., Moher D.: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. Trials 2010; 11: pp. 32.

  • 7. Schulz K.F., Altman D.G., Moher D.: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMC Med 2010; 8: pp. 18.

  • 8. Schulz K.F., Altman D.G., Moher D.: CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med 2010; 152: pp. 726-732.

  • 9. Schulz K.F., Altman D.G., Moher D.: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol 2010; 63: pp. 834-840.

  • 10. Schulz K.F., Altman D.G., Moher D.: CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Obstet Gynecol 2010; 115: pp. 1063-1070.

  • 11. Schulz K.F., Altman D.G., Moher D.: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. Int J Surg 2011; 9: pp. 672-677.

  • 12. Schulz K.F., Altman D.G., Moher D.: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. PLoS Med 2010; 7: pp. e1000251.

  • 13. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative. Radiology 2003; 226: pp. 24-28.

  • 14. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. BMJ 2003; 326: pp. 41-44.

  • 15. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Toward complete and accurate reporting of studies of diagnostic accuracy. The STARD initiative. Am J Clin Pathol 2003; 119: pp. 18-22.

  • 16. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Biochem 2003; 36: pp. 2-7.

  • 17. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Chem Lab Med 2003; 41: pp. 68-73.

  • 18. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: [Reporting studies of diagnostic accuracy according to a standard method; the Standards for Reporting of Diagnostic Accuracy (STARD)]. Ned Tijdschr Geneeskd 2003; 147: pp. 336-340.

  • 19. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Toward complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Acad Radiol 2003; 10: pp. 664-669.

  • 20. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. AJR Am J Roentgenol 2003; 181: pp. 51-55.

  • 21. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Ann Clin Biochem 2003; 40: pp. 357-363.

  • 22. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Radiol 2003; 58: pp. 575-580.

  • 23. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. The Standards for Reporting of Diagnostic Accuracy Group. Croat Med J 2003; 44: pp. 635-638.

  • 24. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Fam Pract 2004; 21: pp. 4-10.

  • 25. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem 2003; 49: pp. 7-18.

  • 26. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med 2003; 138: pp. W1-W12.

  • 27. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative. Ann Intern Med 2003; 138: pp. 40-44.

  • 28. Pai M., Sharma S.: Better reporting of studies of diagnostic accuracy. Indian J Med Microbiol 2005; 23: pp. 210-213.

  • 29. Liberati A., Altman D.G., Tetzlaff J., et. al.: The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 2009; 6: pp. e1000100.

  • 30. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Eur J Health Econ 2013; 14: pp. 367-372.

  • 31. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Value Health 2013; 16: pp. e1-e5.

  • 32. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Clin Ther 2013; 35: pp. 356-363.

  • 33. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Cost Eff Resour Alloc 2013; 11: pp. 6.

  • 34. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMC Med 2013; 11: pp. 80.

  • 35. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMJ 2013; 346: pp. f1049.

  • 36. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Pharmacoeconomics 2013; 31: pp. 361-367.

  • 37. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. J Med Econ 2013; 16: pp. 713-719.

  • 38. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Int J Technol Assess Health Care 2013; 29: pp. 117-122.

  • 39. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BJOG 2013; 120: pp. 765-770.

  • 40. Stein P.D., Fowler S.E., Goodman L.R., et. al.: Multidetector computed tomography for acute pulmonary embolism. N Engl J Med 2006; 354: pp. 2317-2327.

  • 41. The STARD website. http://www.stard-statement.org/. Accessed February 7, 2014.

  • 42. Johnston K.C., Holloway R.G.: There is nothing staid about STARD: progress in the reporting of diagnostic accuracy studies. Neurology 2006; 67: pp. 740-741.

  • 43. The CONSORT website. http://www.consort-statement.org/. Accessed February 7, 2014.

  • 44. Moher D., Jones A., Lepage L., et. al.: Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA 2001; 285: pp. 1992-1995.

  • 45. Reeves B.C., Gaus W.: Guidelines for reporting non-randomised studies. Forsch Komplementarmed Klass Naturheilkd 2004; 11: pp. 46-52.

  • 46. Boutron I., Moher D., Altman D.G., et. al.: Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med 2008; 148: pp. 295-309.

  • 47. Reeves B.C.: A framework for classifying study designs to evaluate health care interventions. Forsch Komplementarmed Klass Naturheilkd 2004; 11: pp. 13-17.

  • 48. Salem R., Lewandowski R.J., Gates V.L., et. al.: Research reporting standards for radioembolization of hepatic malignancies. J Vasc Interv Radiol 2011; 22: pp. 265-278.

  • 49. Kallmes D.F., Comstock B.A., Heagerty P.J., et. al.: A randomized trial of vertebroplasty for osteoporotic spinal fractures. N Engl J Med 2009; 361: pp. 569-579.

  • 50. Jacquier I., Boutron I., Moher D., et. al.: The reporting of randomized clinical trials using a surgical intervention is in need of immediate improvement: a systematic review. Ann Surg 2006; 244: pp. 677-683.

  • 51. Campbell M.K., Piaggio G., Elbourne D.R., et. al.: Consort 2010 statement: extension to cluster randomised trials. BMJ 2012; 345: pp. e5661.

  • 52. Davey J., Turner R.M., Clarke M.J., et. al.: Characteristics of meta-analyses and their component studies in the Cochrane database of systematic reviews: a cross-sectional, descriptive analysis. BMC Med Res Methodol 2011; 11: pp. 160.

  • 53. Oxman A.D., Cook D.J., Guyatt G.H.: Users’ guides to the medical literature. VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA 1994; 272: pp. 1367-1371.

  • 54. Swingler G.H., Volmink J., Ioannidis J.P.: Number of published systematic reviews and global burden of disease: database analysis. BMJ 2003; 327: pp. 1083-1084.

  • 55. Moher D., Cook D.J., Eastwood S., et. al.: Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of reporting of meta-analyses. Lancet 1999; 354: pp. 1896-1900.

  • 56. Green S., Higgins J. (eds.): Glossary. Cochrane handbook for systematic reviews of interventions 4.2.5. The Cochrane Collaboration, 2005. Available at: http://www.cochrane.org/resources/glossary.htm. Accessed February 21, 2014.

  • 57. Moher D., Liberati A., Tetzlaff J., et. al.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009; 339: pp. b2535.

  • 58. Otero H.J., Rybicki F.J., Greenberg D., et. al.: Twenty years of cost-effectiveness analysis in medical imaging: are we improving?. Radiology 2008; 249: pp. 917-925.

  • 59. Cost-effectiveness analysis registry. https://research.tufts-nemc.org/cear4/. Accessed March 2, 2014.

  • 60. Elixhauser A., Luce B.R., Taylor W.R., et. al.: Health care CBA/CEA: an update on the growth and composition of the literature. Med Care 1993; 31: pp. JS1-11, JS8-149.

  • 61. Siegel J.E., Weinstein M.C., Russell L.B., et. al.: Recommendations for reporting cost-effectiveness analyses. Panel on Cost-Effectiveness in Health and Medicine. JAMA 1996; 276: pp. 1339-1341.

  • 62. Ramsey S., Willke R., Briggs A., et. al.: Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA Task Force report. Value Health 2005; 8: pp. 521-533.

  • 63. Institute of Medicine website. http://www.nap.edu/catalog.php?record_id=13058. Accessed February 7, 2014.

  • 64. National Guidelines Clearinghouse. http://www.guideline.gov. Accessed May 3, 2010.

  • 65. Yarris L.M., Deiorio N.M.: Education research: a primer for educators in emergency medicine. Acad Emerg Med 2011; 18: pp. S27-S35.

  • 66. Chen F.M., Bauchner H., Burstin H.: A call for outcomes research in medical education. Acad Med 2004; 79: pp. 955-960.

  • 67. Carney P.A., Nierenberg D.W., Pipas C.F., et. al.: Educational epidemiology: applying population-based design and analytic approaches to study medical education. JAMA 2004; 292: pp. 1044-1050.

  • 68. Lynch D.C., Whitley T.W., Willis S.E.: A rationale for using synthetic designs in medical education research. Adv Health Sci Educ Theory Pract 2000; 5: pp. 93-103.

  • 69. Prystowsky J.B., Bordage G.: An outcomes research perspective on medical education: the predominance of trainee assessment and satisfaction. Med Educ 2001; 35: pp. 331-336.

  • 70. Bordage G.: Conceptual frameworks to illuminate and magnify. Med Educ 2009; 43: pp. 312-319.

  • 71. Bordage G.: Reasons reviewers reject and accept manuscripts: the strengths and weaknesses in medical education reports. Acad Med 2001; 76: pp. 889-896.

  • 72. Stiles C.R., Biondo P.D., Cummings G., et. al.: Clinical trials focusing on cancer pain educational interventions: core components to include during planning and reporting. J Pain Symptom Manage 2010; 40: pp. 301-308.

  • 73. Tong A., Sainsbury P., Craig J.: Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007; 19: pp. 349-357.

  • 74. Tong A., Flemming K., McInnes E., et. al.: Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol 2012; 12: pp. 181.

  • 75. Boutron I., Moher D., Altman D.G., et. al.: Methods and processes of the CONSORT Group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med 2008; 148: pp. W60-W66.

  • 76. Hopewell S., Clarke M., Moher D., et. al.: CONSORT for reporting randomised trials in journal and conference abstracts. Lancet 2008; 371: pp. 281-283.

  • 77. Zwarenstein M., Treweek S., Gagnier J.J., et. al.: Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008; 337: pp. a2390.

  • 78. Ioannidis J.P., Evans S.J., Gotzsche P.C., et. al.: Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med 2004; 141: pp. 781-788.

  • 79. Calvert M., Blazeby J., Altman D.G., et. al.: Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA 2013; 309: pp. 814-822.

  • 80. Piaggio G., Elbourne D.R., Pocock S.J., et. al.: Reporting of noninferiority and equivalence randomized trials: extension of the CONSORT 2010 statement. JAMA 2012; 308: pp. 2594-2604.

  • 81. Chan A.W., Tetzlaff J.M., Altman D.G., et. al.: SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med 2013; 158: pp. 200-207.

  • 82. Moher D., Liberati A., Tetzlaff J., et. al.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6: pp. e1000097.

  • 83. Moher D., Liberati A., Tetzlaff J., et. al.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 2009; 151: pp. 264-269, W64.

  • 84. Moher D., Liberati A., Tetzlaff J., et. al.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol 2009; 62: pp. 1006-1012.

  • 85. Beller E.M., Glasziou P.P., Altman D.G., et. al.: PRISMA for abstracts: reporting systematic reviews in journal and conference abstracts. PLoS Med 2013; 10: pp. e1001419.

  • 86. Riley R.D., Lambert P.C., Abo-Zaid G.: Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ 2010; 340: pp. c221.

  • 87. Schriger D.L.: Suggestions for improving the reporting of clinical research: the role of narrative. Ann Emerg Med 2005; 45: pp. 437-443.

  • 88. Gagnier J.J., Kienle G., Altman D.G., et. al.: The CARE guidelines: consensus-based clinical case reporting guideline development. BMJ Case Rep 2013; 2013.

  • 89. Gagnier J.J., Kienle G., Altman D.G., et. al.: The CARE guidelines: consensus-based clinical case report guideline development. J Clin Epidemiol 2014; 67: pp. 46-51.

  • 90. Gagnier J.J., Kienle G., Altman D.G., et. al.: The CARE guidelines: consensus-based clinical case reporting guideline development. J Med Case Rep 2013; 7: pp. 223.

  • 91. Gagnier J.J., Kienle G., Altman D.G., et. al.: The CARE guidelines: consensus-based clinical case report guideline development. J Diet Suppl 2013; 10: pp. 381-390.

  • 92. Kottner J., Audige L., Brorson S., et. al.: Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. J Clin Epidemiol 2011; 64: pp. 96-106.

  • 93. Kottner J., Audige L., Brorson S., et. al.: Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. Int J Nurs Stud 2011; 48: pp. 661-671.

  • 94. O’Cathain A., Murphy E., Nicholl J.: The quality of mixed methods studies in health services research. J Health Serv Res Policy 2008; 13: pp. 92-98.

  • 95. Davidoff F., Batalden P., Stevens D., et. al.: Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Qual Saf Health Care 2008; 17: pp. i3-i9.

  • 96. Davidoff F., Batalden P., Stevens D., et. al.: Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project. BMJ 2009; 338: pp. a3152.

  • 97. Davidoff F., Batalden P.B., Stevens D.P., et. al.: Development of the SQUIRE Publication Guidelines: evolution of the SQUIRE project. Jt Comm J Qual Patient Saf 2008; 34: pp. 681-687.

  • 98. Davidoff F., Batalden P., Stevens D., et. al.: Publication guidelines for improvement studies in health care: evolution of the SQUIRE Project. Ann Intern Med 2008; 149: pp. 670-676.

  • 99. Davidoff F., Batalden P., Stevens D., et. al.: Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project. J Gen Intern Med 2008; 23: pp. 2125-2130.

  • 100. Talmon J., Ammenwerth E., Brender J., et. al.: STARE-HI: statement on reporting of evaluation studies in health informatics. Int J Med Inform 2009; 78: pp. 1-9.

This post is licensed under CC BY 4.0 by the author.