
Setting Up, Maintaining and Evaluating an Evidence Based Radiology Journal Club

The authors outline the steps involved in setting up, maintaining, and evaluating an evidence-based imaging journal club, drawing on their collective experience at the University of Michigan. The article opens with a background on journal clubs in general and describes how their purpose and role have changed in recent decades; it is intended as a practical framework, or “how-to” guide, for getting a journal club started. Different journal club formats are discussed, and the pros and cons of each are outlined. Suggestions for obtaining feedback from residents and for evaluating the journal club are also provided, and references and links to useful resources are given throughout the article. Finally, the authors share the positive (and negative) experiences of setting up, maintaining, and evaluating the University of Michigan’s journal club, now in its third year. The authors welcome feedback from readers who have been involved in evidence-based imaging journal clubs and invite them to share their own experiences, good and bad.

The first journal clubs are thought to have started in Europe in the 1880s, with the first recorded one established at McGill University in 1885 by Sir William Osler, with the aim of distributing and paying for periodicals to which he could not afford to subscribe. In 1889, Osler established a book and journal club at Johns Hopkins University, which served as a forum to review the latest medical research and to suggest additions to the medical library. Since then, journal clubs have evolved a great deal and are now found in most medical schools and residency programs and in every medical field and specialty. In the 1960s and 1970s, the major aim of journal clubs was to keep their attendees up to date with the latest medical literature. More recently, in the 1990s and 2000s, journal clubs have become a forum for teaching critical appraisal skills, improving understanding of biostatistical and epidemiologic methods, and promoting the practice of evidence-based medicine. Learning evidence-based medicine, biostatistics, epidemiology, and critical appraisal skills falls under practice-based learning and improvement, one of the six core general competencies that the Accreditation Council for Graduate Medical Education requires of all residency programs.

Although the aims and purposes of journal clubs are well established, and journal clubs are here to stay, selecting the right format and setting to keep members stimulated and educated remains a challenge. In this article, we outline the steps needed to set up and maintain an evidence-based medicine journal club, drawing on the existing literature. We also provide pointers for the evaluation and continual refinement of journal clubs to ensure their continued success. Finally, we share the format used for the University of Michigan’s evidence-based radiology journal club.

How to set up an evidence-based journal club

Select a Director or Moderator

The first and most critical step is to select a director or moderator for the journal club. This individual should be interested in and committed to the success of the journal club. The director can be a faculty member, a resident, or a combination of both; having a designated leader correlated with effectiveness in one study. The organizers need a strong belief in resident education and in the purpose of the journal club. At our institution, two junior faculty members with a strong interest in critical appraisal and evidence-based radiology acted as moderators for the journal club.

Define the Goals

Next, the goals of the journal club need to be defined. Most journal club goals encompass some of the following aims: to improve clinical practice, to keep up to date with the latest medical literature, to teach techniques in critical appraisal, to teach techniques in biostatistics and epidemiology, and to teach evidence-based medicine.

The overall aim of our journal club is to improve clinical practice. The individual goals are threefold: to review a variety of articles from the radiology literature, to teach critical appraisal techniques, and to teach basic statistical methodology. The two faculty moderators have backgrounds in clinical research design and basic statistical analysis through the On Job/On Campus program at the University of Michigan School of Public Health. They have also taken the course on teaching evidence-based practice at the Centre for Evidence-Based Medicine (Oxford, United Kingdom). A similar course on teaching evidence-based practice is run by many of the same faculty members at McMaster University.

Optimize Attendance

Generate Resident Interest

Designing the Curriculum

Proposed journal club format

Preparation

Learning Methods

Defining the outcomes of an evidence-based journal club

Evaluating the performance of the evidence-based journal club

Journal club format: University of Michigan Radiology Department

Acknowledgment

Appendix

Worksheet for Critical Appraisal

| Item | Question | Yes/No | Page # |
| --- | --- | --- | --- |
| 1 | Is it a study of diagnostic accuracy? Did they assess sensitivity and specificity? Did they assess for disease presence or absence? | | |
| 2 | Is the study aim stated? Are the research aims described? | | |
| 3 and 15 | Study population: Did they describe inclusion and exclusion criteria? What was the setting? What were the locations of the study? What was the stage of disease? What was the sex and age breakdown of the participants? What treatments did they receive? | | |
| 4 | Recruitment: How were the study participants recruited? | | |
| 5 | Participant sampling: Were the study participants consecutive or not? | | |
| 6 | Data collection: Was there prospective enrollment of patients versus a retrospective review of patients? | | |
| 7 | What was the reference standard? What was the rationale for using that reference standard? | | |
| 8 | Technical specifications: Was the imaging described in sufficient detail, such that you could carry out the same (imaging) study at your institution? | | |
| 9 | What was the rationale for the units used, the cutoff/threshold values, or the categories of results? | | |
| 10 | How many readers were there? What was the expertise level of the readers? How many years of experience? Were they subspecialists? | | |
| 11 | Were the readers of the gold standard (reference) test and of the test being assessed (index test) blinded to the findings of the other test? | | |
| 12 and 21 | How was diagnostic accuracy measured? Sensitivity and specificity? Disease or no disease? How was uncertainty quantified? Did they use standard deviation? Did they report confidence intervals? | | |
| 13 and 24 | Did they calculate test reproducibility? | | |
| 14 | Start and end dates for recruitment? | | |
| 16 | Did they use a flowchart? How many of the participants who were eligible did not undergo the tests? Why did eligible participants not undergo the test? | | |
| 17 | What was the time interval between the tests? Did participants receive any treatment between the index test and reference test? | | |
| 18 | What was the distribution of disease severity in the study population? How did they define disease presence? | | |
| 19 | Did they use a 2x2 table of results of the index test by results of the reference test? | | |
| 20 | Were there any adverse events reported from the index and reference tests? | | |
| 22 | How were indeterminate results, subjects lost to follow-up, and outliers handled? | | |
| 23 | Did they look at subgroups of participants? Did they compare centers in multicenter studies? | | |
| 25 | What is the clinical applicability of the study findings? | | |

Based on the Standards for the Reporting of Diagnostic Accuracy Studies (STARD); item numbers correspond to the STARD checklist items.
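
The quantitative items on this worksheet lend themselves to a short worked example. The Python sketch below is not part of the original article and uses invented counts throughout: it lays out the 2x2 table of item 19, computes the sensitivity, specificity, and confidence intervals of items 12 and 21, and computes Cohen’s kappa as one common way to address the reproducibility question of items 13 and 24.

```python
# A minimal sketch (not from the article) of worksheet items 12, 13, 19, 21,
# and 24: build a 2x2 table of index-test results against the reference
# standard, compute sensitivity and specificity with 95% Wilson score
# confidence intervals, and compute Cohen's kappa as one common measure of
# inter-reader reproducibility. All counts are hypothetical.

import math


def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion (95% by default)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half


# Hypothetical 2x2 table (worksheet item 19):
#                     reference positive   reference negative
# index positive            tp = 45              fp = 5
# index negative            fn = 10              tn = 90
tp, fp, fn, tn = 45, 5, 10, 90

sensitivity = tp / (tp + fn)  # proportion of diseased correctly detected
specificity = tn / (tn + fp)  # proportion of non-diseased correctly excluded

sens_lo, sens_hi = wilson_ci(tp, tp + fn)
spec_lo, spec_hi = wilson_ci(tn, tn + fp)

print(f"Sensitivity {sensitivity:.2f} (95% CI {sens_lo:.2f}-{sens_hi:.2f})")
print(f"Specificity {specificity:.2f} (95% CI {spec_lo:.2f}-{spec_hi:.2f})")

# Reproducibility (worksheet items 13 and 24): Cohen's kappa for two readers
# scoring the same 150 studies as positive or negative. a = both positive,
# b and c = readers disagree, d = both negative. Counts are again invented.
a, b, c, d = 60, 10, 5, 75
n = a + b + c + d
p_observed = (a + d) / n                         # raw agreement
p_reader1, p_reader2 = (a + b) / n, (a + c) / n  # each reader's positive rate
p_expected = p_reader1 * p_reader2 + (1 - p_reader1) * (1 - p_reader2)
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"Cohen's kappa {kappa:.2f} (observed agreement {p_observed:.2f})")
```

The Wilson score interval is used here rather than the simpler Wald interval because it behaves better when a proportion is close to 0 or 1, which is common for the high sensitivities and specificities reported in imaging studies.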

References

  • 1. Linzer M.: The journal club and medical education: over one hundred years of unrecorded history. Postgrad Med J 1987; 63: pp. 475-478.

  • 2. Sleeman W.: Dr. Osler and the book and journal club of the medical and surgical faculty of Maryland. MD Med J 1990; 39: pp. 1111-1113.

  • 3. Valentini R.P., Daniels S.R.: The journal club. Postgrad Med J 1997; 73: pp. 81-85.

  • 4. Accreditation Council for Graduate Medical Education. General competencies. Available at: http://www.acgme.org/outcome/comp/compMin.asp . Accessed December 21, 2009.

  • 5. Heiligman P.M., Wollitzer O.W.: A survey of journal clubs in US family practice residencies. J Med Educ 1987; 62: pp. 928-931.

  • 6. Kelly A.M., Cronin P.: Evidence-based practice journal club: how we do it. Semin Roentgenol 2009; 44: pp. 209-213.

  • 7. University of Michigan School of Public Health. Clinical research design and statistical analysis (M.S.). Available at: http://www.sph.umich.edu/biostat/programs/clinical-stat/ . Accessed December 21, 2009.

  • 8. Centre for Evidence-Based Medicine. Teaching evidence-based practice. Available at: http://www.cebm.net/index.aspx?o=4912 . Accessed December 21, 2009.

  • 9. Centre for Evidence-Based Medicine Toronto. Teaching EBM. Available at: http://www.cebm.utoronto.ca/teach/workshops/ . Accessed December 21, 2009.

  • 10. Siderov J.: How are internal medicine residency journal clubs organized, and what makes them successful? Arch Intern Med 1995; 155: pp. 1193-1197.

  • 11. Rajpal S., Resnick D.K., Baskaya M.K.: The role of the journal club in neurosurgical training. Neurosurgery 2007; 61: pp. 397-403.

  • 12. How to read clinical journals: I. Why to read them and how to start reading them critically. CMAJ 1981; 124: pp. 555-558.

  • 13. How to read clinical journals: II. To learn about a diagnostic test. CMAJ 1981; 124: pp. 703-710.

  • 14. How to read clinical journals: III. To learn the clinical course and prognosis of disease. CMAJ 1981; 124: pp. 869-872.

  • 15. How to read clinical journals: IV. To determine etiology or causation. CMAJ 1981; 124: pp. 985-990.

  • 16. How to read clinical journals: V. To distinguish useful from useless or even harmful therapy. CMAJ 1981; 124: pp. 1156-1162.

  • 17. American Medical Association. [List of JAMA “Users’ Guides to the Medical Literature”]. Available at: http://jama.ama-assn.org/cgi/search?fulltext=users+guides&quicksearch_submit.x=17&quicksearch_submit.y=6 . Accessed January 16, 2010.

  • 18. Guyatt G., Rennie D., Meade M.O., Cook D.J.: Users’ guides to the medical literature: essentials of evidence-based clinical practice. Chicago: AMA Press, 2002.

  • 19. American Roentgen Ray Society. Archive of 1994 online issues [American Journal of Roentgenology]. Available at: http://www.ajronline.org/contents-by-date.1994.shtml . Accessed January 16, 2010.

  • 20. Canadian Medical Association. Archive of 1995 online issues [Canadian Medical Association Journal]. Available at: http://www.cmaj.ca/contents-by-date.1995.dtl . Accessed January 16, 2010.

  • 21. Association of University Radiologists. Academic Radiology: journal search. Available at: http://www.academicradiology.org/search . Accessed January 16, 2010.

  • 22. Radiological Society of North America. [List of articles in Radiology on statistical concepts, 2002–2004]. Available at: http://radiology.rsna.org/search?fulltext=statistical+concepts+series&submit=yes&x=11&y=11 . Accessed January 16, 2010.

  • 23. Straus S.E., Richardson W.S., Glasziou P., Haynes R.B.: Evidence-based medicine: how to practice and teach EBM. 3rd ed. New York: Elsevier, 2005.

  • 24. Radiological Society of North America. [List of articles in Radiology on evidence-based imaging]. Available at: http://radiology.rsna.org/search?fulltext=evidence+based+imaging+series&submit=yes&x=12&y=7 . Accessed January 16, 2010.

  • 25. Medina L.S., Blackmore C.C.: Evidence-based imaging: optimizing imaging in patient care. New York: Springer, 2006.

  • 26. McMaster University Health Information Research Unit. Home page. Available at: http://hiru.mcmaster.ca/hiru/ . Accessed January 16, 2010.

  • 27. Centre for Evidence-Based Medicine. Home page. Available at: http://www.cebm.net . Accessed January 16, 2010.

  • 28. Evidence-based radiology website. Available at: evidencebasedradiology. Accessed April 15, 2009.

  • 29. Bossuyt P.M., Reitsma J.B., Bruns D.E., et. al.: Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Ann Intern Med 2003; 138: pp. 40-44.

  • 30. Begg C., Cho M., Eastwood S., et. al.: Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA 1996; 276: pp. 637-639.

  • 31. von Elm E., Altman D.G., Egger M., Pocock S.J., Gøtzsche P.C., Vandenbroucke J.P.: STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 2007; 370: pp. 1453-1457.

  • 32. Davidoff F., Batalden P., Stevens D., Ogrinc G., Mooney S.: Standards for Quality Improvement Reporting Excellence Development Group. Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project [Published erratum appears in J Gen Intern Med 2009; 24:147]. J Gen Intern Med 2008; 23: pp. 2125-2130.
