
How to Critically Appraise the Clinical Literature

Recent efforts have been made to standardize the critical appraisal of clinical health care research. In this article, we discuss the critical appraisal of diagnostic test accuracy studies, screening studies, therapeutic studies, systematic reviews and meta-analyses, cost-effectiveness studies, recommendations and/or guidelines, and medical education studies, along with the instruments available to appraise the literature. With standard appraisal instruments, these studies can be appraised more easily for completeness, bias, and applicability for implementation. Each type of research requires a different appraisal instrument, designed for that study design. We also hope that this article can be used in academic programs to educate faculty and trainees about the available resources for improving the critical appraisal of health research.

This article is the second in a series of two articles that review how to report and how to critically appraise research in health care. The first article reviewed the reporting of health research and the guidelines available for doing so. In this article, we discuss the critical appraisal of screening studies and diagnostic test accuracy studies, therapeutic studies, systematic reviews and meta-analyses, cost-effectiveness studies, recommendations and/or guidelines, and medical education studies, along with the instruments available to appraise the literature.

Recent efforts have been made to standardize both the reporting and the appraisal of clinical health research, including clinical guidelines. Many of the presentations at the joint Radiological Alliance for Health Service Research/Alliance of Clinical-Educators in Radiology session at the 2013 Association of University Radiologists annual meeting highlighted the appraisal instruments available. By having standard formats for reporting research findings and guidelines, these studies can be appraised more easily for completeness, bias, and applicability for implementation. Each type of research requires a different appraisal instrument, designed for that study design. The Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool was initially published in 2003 and has been used to evaluate the quality of diagnostic accuracy studies. Recent revisions produced QUADAS-2. Similarly, the Assessment of Multiple Systematic Reviews (AMSTAR) tool was published in 2007 and uses 11 yes-or-no questions to assess the methodological quality of systematic reviews. The Appraisal of Guidelines for Research and Evaluation (AGREE) instrument evaluates the process of practice guideline development and the quality of reporting.

How to appraise literature

Screening Studies and Diagnostic Test Accuracy Studies

There are no specific recommendations from the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network or elsewhere for appraising screening studies. However, when reading a screening study, we suggest considering the following questions:


How Was the Study Designed?


What Are the Results?


Will These Results Help Me Care for My Patients?


Therapeutic Studies


Meta-analyses


Cost-Effectiveness Analyses


Recommendations and/or Guidelines


Medical Education Studies


Conclusions


Table 1

Appraisal Tools by Research Study Design, Acronym, Web Site URL, and Bibliographic Reference

Research study design: Diagnostic test accuracy (studies of diagnostic accuracy)
Appraisal tool(s): QUADAS, QUADAS-2
Web site: http://www.bris.ac.uk/quadas/quadas-2/ (appraisal tool and background document)
References:
- Whiting P, Rutjes A, Reitsma J, Bossuyt P, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Medical Research Methodology 2003; 3:25.
- Evidence report prepared for the QUADAS-2 face-to-face meeting: Updating QUADAS: evidence to inform the development of QUADAS-2.
- Whiting PF, Rutjes AWS, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, Leeflang MM, Sterne JAC, Bossuyt PMM. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011; 155(8):529-536.

Research study design: Systematic reviews/meta-analyses
Appraisal tool(s): AMSTAR
Web site: http://amstar.ca/index.php (appraisal tool)
References:
- Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol 2007; 7:10. PMID: 17302989.
- Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol 2009; 62(10):1013-1020. PMID: 19230606.
- Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, Ramsay T, Bai A, Shukla VK, Grimshaw JM. External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS ONE 2007; 2(12):e1350. PMCID: PMC2131785.
- Shea B, Andersson N, Henry D. Increasing the demand for childhood vaccination in developing countries: a systematic review. BMC Int Health Hum Rights 2009; 9(Suppl 1):S5. PMID: 19828063.

Research study design: Recommendations and/or guidelines
Appraisal tool(s): AGREE, AGREE II
Web site: http://www.agreetrust.org/ (appraisal tool)
References:
- Brouwers MC, Kho ME, Browman GP, Burgers J, Cluzeau F, Feder G, Fervers B, Graham ID, Hanna SE, Makarski J, on behalf of the AGREE Next Steps Consortium. Performance, usefulness and areas for improvement: development steps towards the AGREE II, part 1. Can Med Assoc J 2010; 182:1045-1052.
- Brouwers MC, Kho ME, Browman GP, Burgers J, Cluzeau F, Feder G, Fervers B, Graham ID, Hanna SE, Makarski J, on behalf of the AGREE Next Steps Consortium. Validity assessment of items and tools to support application: development steps towards the AGREE II, part 2. Can Med Assoc J 2010; 182:E472-E478.
- Brouwers M, Kho ME, Browman GP, Cluzeau F, Feder G, Fervers B, Hanna S, Makarski J, on behalf of the AGREE Next Steps Consortium. AGREE II: advancing guideline development, reporting and evaluation in healthcare. Can Med Assoc J 2010; 182:E839-E842. doi:10.1503/cmaj.090449.
- Brouwers M, Kho ME, Browman GP, Cluzeau F, Feder G, Fervers B, Hanna S, Makarski J, on behalf of the AGREE Next Steps Consortium. AGREE II: advancing guideline development, reporting and evaluation in healthcare. J Clin Epidemiol 2010; 63(12):1308-1311.
- Brouwers M, Kho ME, Browman GP, Cluzeau F, Feder G, Fervers B, Hanna S, Makarski J, on behalf of the AGREE Next Steps Consortium. AGREE II: advancing guideline development, reporting and evaluation in healthcare. Preventive Medicine 2010; 51(5):421-424.

AGREE, Appraisal of Guidelines for Research and Evaluation; AMSTAR, A Measurement Tool to Assess Systematic Reviews; QUADAS, Quality Assessment of Diagnostic Accuracy Studies.


Appendix


Appendix Table 1

The 14 Quality Items in the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) Tool.

Each item is answered Yes, No, or Unclear:

1. Was the spectrum of patients representative of the patients who will receive the test in practice?
2. Were selection criteria clearly described?
3. Is the reference standard likely to correctly classify the target condition?
4. Is the time period between reference standard and index test short enough to be reasonably sure that the target condition did not change between the two tests?
5. Did the whole sample, or a random selection of the sample, receive verification using a reference standard of diagnosis?
6. Did patients receive the same reference standard regardless of the index test result?
7. Was the reference standard independent of the index test (i.e., the index test did not form part of the reference standard)?
8. Was the execution of the index test described in sufficient detail to permit replication of the test?
9. Was the execution of the reference standard described in sufficient detail to permit its replication?
10. Were the index test results interpreted without knowledge of the results of the reference standard?
11. Were the reference standard results interpreted without knowledge of the results of the index test?
12. Were the same clinical data available when test results were interpreted as would be available when the test is used in practice?
13. Were uninterpretable/intermediate test results reported?
14. Were withdrawals from the study explained?

Appendix Table 2

Quality Items in the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) Tool.

Phase 1: State the review question.
- Patients (setting, intended use of index test, presentation, prior testing): __
- Index test(s): __
- Reference standard and target condition: __

Phase 2: Draw a flow diagram for the primary study.

Phase 3: Risk of bias and applicability judgments. QUADAS-2 is structured so that four key domains are each rated in terms of the risk of bias and the concern regarding applicability to the review question (as defined above). Each key domain has a set of signalling questions to help reach the judgments regarding bias and applicability.

DOMAIN 1: PATIENT SELECTION
A. Risk of bias. Describe methods of patient selection.
- Was a consecutive or random sample of patients enrolled? Yes/No/Unclear
- Was a case-control design avoided? Yes/No/Unclear
- Did the study avoid inappropriate exclusions? Yes/No/Unclear
Could the selection of patients have introduced bias? RISK: LOW/HIGH/UNCLEAR
B. Concerns regarding applicability. Describe included patients (prior testing, presentation, intended use of index test, and setting). Is there concern that the included patients do not match the review question? CONCERN: LOW/HIGH/UNCLEAR

DOMAIN 2: INDEX TEST(S) (if more than one index test was used, complete for each test)
A. Risk of bias. Describe the index test and how it was conducted and interpreted.
- Were the index test results interpreted without knowledge of the results of the reference standard? Yes/No/Unclear
- If a threshold was used, was it pre-specified? Yes/No/Unclear
Could the conduct or interpretation of the index test have introduced bias? RISK: LOW/HIGH/UNCLEAR
B. Concerns regarding applicability. Is there concern that the index test, its conduct, or interpretation differ from the review question? CONCERN: LOW/HIGH/UNCLEAR

DOMAIN 3: REFERENCE STANDARD
A. Risk of bias. Describe the reference standard and how it was conducted and interpreted.
- Is the reference standard likely to correctly classify the target condition? Yes/No/Unclear
- Were the reference standard results interpreted without knowledge of the results of the index test? Yes/No/Unclear
Could the reference standard, its conduct, or its interpretation have introduced bias? RISK: LOW/HIGH/UNCLEAR
B. Concerns regarding applicability. Is there concern that the target condition as defined by the reference standard does not match the review question? CONCERN: LOW/HIGH/UNCLEAR

DOMAIN 4: FLOW AND TIMING
A. Risk of bias. Describe any patients who did not receive the index test(s) and/or reference standard or who were excluded from the 2x2 table (refer to flow diagram). Describe the time interval and any interventions between index test(s) and reference standard.
- Was there an appropriate interval between index test(s) and reference standard? Yes/No/Unclear
- Did all patients receive a reference standard? Yes/No/Unclear
- Did patients receive the same reference standard? Yes/No/Unclear
- Were all patients included in the analysis? Yes/No/Unclear
Could the patient flow have introduced bias? RISK: LOW/HIGH/UNCLEAR
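The 2x2 table referenced in Domain 4 is the basis for the accuracy estimates a diagnostic study reports. As a minimal sketch, with counts that are entirely hypothetical and drawn from no study discussed here, the usual quantities are computed as:

```python
# Hypothetical 2x2 table of an index test against a reference standard.
# These counts are invented for illustration only.
tp, fp = 90, 30    # index test positive: true positives, false positives
fn, tn = 10, 870   # index test negative: false negatives, true negatives

sensitivity = tp / (tp + fn)  # proportion of diseased patients the test detects
specificity = tn / (tn + fp)  # proportion of non-diseased patients testing negative
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}")
```

Verification bias distorts exactly these quantities, which is why the flow-and-timing questions ask whether all patients received, and were verified by, the same reference standard.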

Appendix Table 3

The AMSTAR (Assessment of Multiple Systematic Reviews) Tool

Each item is answered: □ Yes □ No □ Can't answer □ Not applicable

1. Was an 'a priori' design provided?
The research question and inclusion criteria should be established before the conduct of the review.

2. Was there duplicate study selection and data extraction?
There should be at least two independent data extractors, and a consensus procedure for disagreements should be in place.

3. Was a comprehensive literature search performed?
At least two electronic sources should be searched. The report must include years and databases used (e.g., CENTRAL, EMBASE, and MEDLINE). Key words and/or MeSH terms must be stated and, where feasible, the search strategy should be provided. All searches should be supplemented by consulting current contents, reviews, textbooks, specialized registers, or experts in the particular field of study, and by reviewing the references in the studies found.

4. Was the status of publication (i.e., grey literature) used as an inclusion criterion?
The authors should state that they searched for reports regardless of their publication type. The authors should state whether or not they excluded any reports (from the systematic review) based on their publication status, language, etc.

5. Was a list of studies (included and excluded) provided?
A list of included and excluded studies should be provided.

6. Were the characteristics of the included studies provided?
In an aggregated form, such as a table, data from the original studies should be provided on the participants, interventions, and outcomes. The ranges of characteristics in all the studies analyzed (e.g., age, race, sex, relevant socioeconomic data, disease status, duration, severity, or other diseases) should be reported.

7. Was the scientific quality of the included studies assessed and documented?
'A priori' methods of assessment should be provided (e.g., for effectiveness studies, whether the author(s) chose to include only randomized, double-blind, placebo-controlled studies, or allocation concealment as inclusion criteria); for other types of studies, alternative items will be relevant.

8. Was the scientific quality of the included studies used appropriately in formulating conclusions?
The results of the methodological rigor and scientific quality should be considered in the analysis and the conclusions of the review, and explicitly stated in formulating recommendations.

9. Were the methods used to combine the findings of studies appropriate?
For the pooled results, a test should be done to ensure the studies were combinable, to assess their homogeneity (i.e., chi-squared test for homogeneity, I²). If heterogeneity exists, a random-effects model should be used and/or the clinical appropriateness of combining should be taken into consideration (i.e., is it sensible to combine?).

10. Was the likelihood of publication bias assessed?
An assessment of publication bias should include a combination of graphical aids (e.g., funnel plot, other available tests) and/or statistical tests (e.g., Egger regression test).

11. Was the conflict of interest stated?
Potential sources of support should be clearly acknowledged in both the systematic review and the included studies.
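Items 9 and 10 of the AMSTAR tool name statistical checks that readers can reproduce for themselves. As a rough sketch of the homogeneity assessment in item 9, with effect estimates and variances that are entirely hypothetical, Cochran's Q and I² can be computed from per-study results:

```python
# Sketch of the homogeneity check named in AMSTAR item 9: Cochran's Q
# and the I^2 statistic. The per-study effect estimates (e.g., log odds
# ratios) and variances below are hypothetical, not from any cited review.
effects = [0.10, 0.80, 0.25, 1.20, 0.40]
variances = [0.04, 0.09, 0.05, 0.16, 0.06]

# Fixed-effect (inverse-variance) pooled estimate.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pool,
# compared against a chi-squared distribution with k - 1 degrees of freedom.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: the share of variability beyond what chance alone would produce.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled estimate {pooled:.3f}; Q = {q:.2f} on {df} df; I^2 = {i_squared:.1f}%")
```

A large I² (values above roughly 50% are commonly read as substantial heterogeneity) argues for the random-effects model that item 9 advises, or for not pooling at all; item 10's funnel plot and Egger regression probe a different problem, publication bias.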

Appendix Table 4

The AGREE (Appraisal of Guidelines for Research & Evaluation) Instrument.

Each original AGREE item is listed with the corresponding AGREE II item.

Domain 1. Scope and Purpose
1. The overall objective(s) of the guideline is (are) specifically described. AGREE II: no change.
2. The clinical question(s) covered by the guideline is (are) specifically described. AGREE II: the health question(s) covered by the guideline is (are) specifically described.
3. The patients to whom the guideline is meant to apply are specifically described. AGREE II: the population (patients, public, etc.) to whom the guideline is meant to apply is specifically described.

Domain 2. Stakeholder Involvement
4. The guideline development group includes individuals from all the relevant professional groups. AGREE II: no change.
5. The patients' views and preferences have been sought. AGREE II: the views and preferences of the target population (patients, public, etc.) have been sought.
6. The target users of the guideline are clearly defined. AGREE II: no change.
7. The guideline has been piloted among end users. AGREE II: item deleted; incorporated into the user guide description of item 19.

Domain 3. Rigour of Development
8. Systematic methods were used to search for evidence. AGREE II: no change; renumbered to 7.
9. The criteria for selecting the evidence are clearly described. AGREE II: no change; renumbered to 8.
New AGREE II item 9: the strengths and limitations of the body of evidence are clearly described.
10. The methods for formulating the recommendations are clearly described. AGREE II: no change.
11. The health benefits, side effects, and risks have been considered in formulating the recommendations. AGREE II: no change.
12. There is an explicit link between the recommendations and the supporting evidence. AGREE II: no change.
13. The guideline has been externally reviewed by experts prior to its publication. AGREE II: no change.
14. A procedure for updating the guideline is provided. AGREE II: no change.

Domain 4. Clarity of Presentation
15. The recommendations are specific and unambiguous. AGREE II: no change.
16. The different options for management of the condition are clearly presented. AGREE II: the different options for management of the condition or health issue are clearly presented.
17. Key recommendations are easily identifiable. AGREE II: no change.

Domain 5. Applicability
18. The guideline is supported with tools for application. AGREE II: the guideline provides advice and/or tools on how the recommendations can be put into practice; moved from Clarity of Presentation and renumbered to 19.
19. The potential organizational barriers in applying the recommendations have been discussed. AGREE II: the guideline describes facilitators and barriers to its application; renumbered to 18.
20. The potential cost implications of applying the recommendations have been considered. AGREE II: the potential resource implications of applying the recommendations have been considered.
21. The guideline presents key review criteria for monitoring and/or audit purposes. AGREE II: the guideline presents monitoring and/or auditing criteria.

Domain 6. Editorial Independence
22. The guideline is editorially independent from the funding body. AGREE II: the views of the funding body have not influenced the content of the guideline.
23. Conflicts of interest of guideline development members have been recorded. AGREE II: competing interests of guideline development group members have been recorded and addressed.

Whiting et al. BMC Medical Research Methodology 2003.


References

  • 1. Whiting P., Rutjes A.W., Reitsma J.B., et. al.: The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol 2003; 3: pp. 25.

  • 2. Whiting P., Rutjes A.W., Dinnes J., et. al.: Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technol Assess 2004; 8: iii, 1-234

  • 3. Whiting P.F., Rutjes A.W., Westwood M.E., et. al.: QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011; 155: pp. 529-536.

  • 4. Shea B.J., Grimshaw J.M., Wells G.A., et. al.: Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol 2007; 7: pp. 10.

  • 5. The AGREE Collaboration. Appraisal of Guidelines for Research & Evaluation (AGREE) Instrument. www.agreecollaboration.org . Accessed February 25, 2011.

  • 6. Skaane P., Bandos A.I., Gullien R., et. al.: Comparison of digital mammography alone and digital mammography plus tomosynthesis in a population-based screening program. Radiology 2013; 267: pp. 47-56.

  • 7. Ciatto S., Houssami N., Bernardi D., et. al.: Integration of 3D digital mammography with tomosynthesis for population breast-cancer screening (STORM): a prospective comparison study. Lancet Oncol 2013; 14: pp. 583-589.

  • 8. Haas B.M., Kalra V., Geisel J., et. al.: Comparison of tomosynthesis plus digital mammography and digital mammography alone for breast cancer screening. Radiology 2013; 269: pp. 694-700.

  • 9. Rose S.L., Tidwell A.L., Bujnoch L.J., et. al.: Implementation of breast tomosynthesis in a routine screening practice: an observational study. AJR Am J Roentgenol 2013; 200: pp. 1401-1408.

  • 10. Blackmore C., Medina L., Ravenel J., et. al.: Critically assessing the literature: understanding error and bias (Chapter 2). In: Medina L.S., Blackmore C.C., eds. Evidence-based imaging: optimizing imaging in patient care. New York, NY: Springer Science+Business Media, Inc; 2006.

  • 11. Lee J.M.: Screening issues for radiologists. Acad Radiol 2004; 11: pp. 162-168.

  • 12. The STARD website. http://www.stard-statement.org/ . Accessed February 7, 2014.

  • 13. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem 2003; 49: pp. 7-18.

  • 14. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med 2003; 138: pp. W1-W12.

  • 15. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD Initiative. Radiology 2003; 226: pp. 24-28.

  • 16. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. BMJ 2003; 326: pp. 41-44.

  • 17. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Toward complete and accurate reporting of studies of diagnostic accuracy. The STARD initiative. Am J Clin Pathol 2003; 119: pp. 18-22.

  • 18. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Biochem 2003; 36: pp. 2-7.

  • 19. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Chem Lab Med 2003; 41: pp. 68-73.

  • 20. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: [Reporting studies of diagnostic accuracy according to a standard method; the Standards for Reporting of Diagnostic Accuracy (STARD)]. Ned Tijdschr Geneeskd 2003; 147: pp. 336-340.

  • 21. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Toward complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Acad Radiol 2003; 10: pp. 664-669.

  • 22. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. AJR Am J Roentgenol 2003; 181: pp. 51-55.

  • 23. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Ann Clin Biochem 2003; 40: pp. 357-363.

  • 24. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Radiol 2003; 58: pp. 575-580.

  • 25. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. The Standards for Reporting of Diagnostic Accuracy Group. Croat Med J 2003; 44: pp. 635-638.

  • 26. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Fam Pract 2004; 21: pp. 4-10.

  • 27. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative. Ann Intern Med 2003; 138: pp. 40-44.

  • 28. Pai M., Sharma S.: Better reporting of studies of diagnostic accuracy. Indian J Med Microbiol 2005; 23: pp. 210-213.

  • 29. Reeves B.C., Gaus W.: Guidelines for reporting non-randomised studies. Forsch Komplementarmed Klass Naturheilkd 2004; 11: pp. 46-52.

  • 30. Salem R., Lewandowski R.J., Gates V.L., et. al.: Research reporting standards for radioembolization of hepatic malignancies. J Vasc Interv Radiol 2011; 22: pp. 265-278.

  • 31. Kallmes D.F., Comstock B.A., Heagerty P.J., et. al.: A randomized trial of vertebroplasty for osteoporotic spinal fractures. N Engl J Med 2009; 361: pp. 569-579.

  • 32. Jacquier I., Boutron I., Moher D., et. al.: The reporting of randomized clinical trials using a surgical intervention is in need of immediate improvement: a systematic review. Ann Surg 2006; 244: pp. 677-683.

  • 33. Boutron I., Moher D., Altman D.G., et. al.: Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration. Annals of internal medicine 2008; 148: pp. 295-309.

  • 34. Boutron I., Moher D., Altman D.G., et. al.: Methods and processes of the CONSORT Group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med 2008; 148: pp. W60-W66.

  • 35. The AMSTAR website. http://amstar.ca/About_Amstar.php . Accessed February 7, 2014.

  • 36. Shea B.J., Hamel C., Wells G.A., et. al.: AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol 2009; 62: pp. 1013-1020.

  • 37. Shea B.J., Bouter L.M., Peterson J., et. al.: External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS One 2007; 2: pp. e1350.

  • 38. The AMSTAR website. http://amstar.ca/Publications.php . Accessed April 26, 2014.

  • 39. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Eur J Health Econ 2013; 14: pp. 367-372.

  • 40. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Value Health 2013; 16: pp. e1-e5.

  • 41. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Clin Ther 2013; 35: pp. 356-363.

  • 42. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Cost Eff Resour Alloc 2013; 11: pp. 6.

  • 43. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMC Med 2013; 11: pp. 80.

  • 44. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMJ 2013; 346: pp. f1049.

  • 45. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Pharmacoeconomics 2013; 31: pp. 361-367.

  • 46. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. J Med Econ 2013; 16: pp. 713-719.

  • 47. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Int J Technol Assess Health Care 2013; 29: pp. 117-122.

  • 48. Husereau D., Drummond M., Petrou S., et. al.: Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BJOG 2013; 120: pp. 765-770.

  • 49. Singer M.E., Applegate K.E.: Cost-effectiveness analysis in radiology. Radiology 2001; 219: pp. 611-620.

  • 50. Briggs A., Sculpher M., Claxton K.: Decision modelling for health economic evaluation. Oxford: Oxford University Press; 2006.

  • 51. National Institute for Health and Clinical Excellence (NICE): Guide to the methods of technology appraisal. London: NICE; 2008. http://www.nice.org.uk/media/B52/A7/TAMethodsGuideUpdatedJune2008.pdf . Accessed March 7, 2014.

  • 52. Hirth R.A., Chernew M.E., Miller E., et. al.: Willingness to pay for a quality-adjusted life year: in search of a standard. Med Decis Making 2000; 20: pp. 332-342.

  • 53. Weinstein M.C.: How much are Americans willing to pay for a quality-adjusted life year?. Med Care 2008; 46: pp. 343-345.

  • 54. Braithwaite R.S., Meltzer D.O., King J.T., et. al.: What does the value of modern medicine say about the $50,000 per quality-adjusted life-year decision rule?. Med Care 2008; 46: pp. 349-356.

  • 55. World Health Organization. http://www.who.int/choice/costs/CER_levels/en/ . Accessed March 7, 2014.

  • 56. Graham I.D., Calder L.A., Hebert P.C., et. al.: A comparison of clinical practice guideline appraisal instruments. Int J Technol Assess Health Care 2000; 16: pp. 1024-1038.

  • 57. Brouwers M.C., Kho M.E., Browman G.P., et. al.: Development of the AGREE II, part 1: performance, usefulness and areas for improvement. CMAJ 2010; 182: pp. 1045-1052.

  • 58. Brouwers M.C., Kho M.E., Browman G.P., et. al.: Development of the AGREE II, part 2: assessment of validity of items and tools to support application. CMAJ 2010; 182: pp. E472-E478.

  • 59. Brouwers M.C., Kho M.E., Browman G.P., et. al.: AGREE II: advancing guideline development, reporting and evaluation in health care. CMAJ 2010; 182: pp. E839-E842.

  • 60. Brouwers M.C., Kho M.E., Browman G.P., et. al.: AGREE II: advancing guideline development, reporting and evaluation in health care. J Clin Epidemiol 2010; 63: pp. 1308-1311.

  • 61. Brouwers M.C., Kho M.E., Browman G.P., et. al.: AGREE II: advancing guideline development, reporting, and evaluation in health care. Prev Med 2010; 51: pp. 421-424.

  • 62. Institute of Medicine website. http://www.nap.edu/catalog.php?record_id=13058 . Accessed February 7, 2014.

  • 63. Blackmore C.C.: Using evidence to inform coverage decisions: the Washington State experience. Acad Radiol 2012; 19: pp. 1055-1059.

  • 64. ICER website. http://www.icer-review.org/ . Accessed April 29, 2014.

  • 65. Bordage G., Caelleigh A.S., Steinecke A., et. al.: Review criteria for research manuscripts. Acad Med 2001; 76: pp. 897-978.

  • 66. Shea B., Andersson N., Henry D.: Increasing the demand for childhood vaccination in developing countries: a systematic review. BMC Int Health Hum Rights 2009; 9: pp. S5.

This post is licensed under CC BY 4.0 by the author.