
Improving Abnormality Detection on Chest Radiography Using Game-Like Reinforcement Mechanics

Rationale and Objectives

Despite their increasing prevalence, online textbooks, question banks, and digital references focus primarily on explicit knowledge. Implicit skills such as abnormality detection require repeated practice on clinical service and have few digital substitutes. Using mechanics traditionally deployed in video games such as clearly defined goals, rapid-fire levels, and narrow time constraints may be an effective way to teach implicit skills.

Materials and Methods

We created a freely available online module to evaluate the ability of individuals to differentiate between normal and abnormal chest radiographs by implementing mechanics including instantaneous feedback, rapid-fire cases, and 15-second timers. Volunteer subjects completed the modules and were separated based on formal experience with chest radiography. Performance on the training and testing sets was measured for each group, and a survey was administered after each session.

Results

The module contained 74 cases and took approximately 20 minutes to complete. Thirty-two cases were normal radiographs and 56 cases were abnormal. Of the 60 volunteers recruited, 25 were “never trained” and 35 were “previously trained.” “Never trained” users scored 21.9 out of 37 during training and 24.0 out of 37 during testing (59.1% vs 64.9%, P value <.001). “Previously trained” users scored 28.0 out of 37 during training and 28.3 out of 37 during testing phases (75.6% vs 76.4%, P value = .56). Survey results showed that 87% of all subjects agreed the module is an efficient way of learning, and 83% agreed the rapid-fire module is valuable for medical students.

Conclusions

A gamified online module may improve the abnormality detection rates of novice interpreters of chest radiography, although experienced interpreters are less likely to derive similar benefits. Users reviewed the educational module favorably.

Background

Reinforcement learning theory posits that the performance of a learner increases proportionally to the discrepancy between the learner’s predicted outcome and the actual outcome, as measured in reward or punishment. In radiology, teachers using the Socratic method during clinical service and traditional “hot-seat” style conferences are applying this reinforcement feedback mechanism to education.

The mechanism of reinforcement learning in humans is tied to the dopamine D1 receptor and is best examined in addiction disorders. These theories are an important part of software engineering, responsible for generating interest in otherwise mundane tasks such as stacking nondescript square quartets in endless layers (also known as Tetris), using Newtonian physics to destroy wooden structures occupied by porcine antagonists (Angry Birds), or playing “first-person shooter” video games. The neuropsychology literature suggests that video games act on the reward pathway through striatal dopamine release, a phenomenon demonstrable on positron emission tomography. The patterns of goal-directed, reinforced behavior and dopamine release are similar to those seen in addiction and gambling.

Reinforcement learning is also a salient form of information learning. The literature suggests that two primary modes of knowledge acquisition comprise the learning process: explicit and implicit learning. In explicit knowledge acquisition, a trainee consciously studies a textbook or attends didactic lectures. In implicit learning, a trainee acquires skills without trying to learn, instead relying on processes of repetitive stimulus–response binding. For example, within radiology, listing the differential diagnosis of a solitary pulmonary nodule requires explicit knowledge, whereas identifying a solitary pulmonary nodule when reviewing a chest radiograph requires implicit skill.
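The prediction-error principle described above can be illustrated with a minimal sketch of the classic "delta rule" update (an illustrative simplification, not a model used in this study):

```python
# Minimal sketch of the prediction-error ("delta rule") update central to
# reinforcement learning theory: learning is driven by the gap between the
# learner's predicted outcome and the actual outcome.
def update_prediction(predicted, actual, learning_rate=0.1):
    """One reinforcement-learning step: move the prediction toward the
    actual outcome in proportion to the prediction error."""
    prediction_error = actual - predicted
    return predicted + learning_rate * prediction_error
```

A large prediction error (surprising feedback) shifts the learner's expectation more than a small one, which is one rationale for giving immediate right/wrong feedback after every case.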


Materials and Methods

Study Population


Software Creation


Figure 1, Screenshot of the module interface. The user begins from the top screen and must choose a response before moving to the next case. Either an incorrect answer choice (lower right) or a correct choice (lower left) triggers the appropriate feedback from the interface, including annotation of the case image.
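The per-case answer-and-feedback loop shown in Figure 1 could be sketched as follows. This is a hypothetical reconstruction of the mechanics described (15-second timer, instantaneous feedback); the actual module was a web application, and how late responses were scored is an assumption here:

```python
# Sketch of one case in the rapid-fire module: the user must classify the
# radiograph as "normal" or "abnormal" within a 15-second window, and
# feedback is shown immediately.
TIME_LIMIT_S = 15  # per-case timer described in Materials and Methods

def score_response(truth, answer, elapsed_s, time_limit_s=TIME_LIMIT_S):
    """Return (correct, feedback) for a single case.

    A response arriving after the timer expires is treated as incorrect
    (an assumption; the paper does not specify how timeouts were scored).
    """
    if elapsed_s > time_limit_s:
        return False, "Time expired - case marked incorrect"
    if answer == truth:
        return True, "Correct!"
    return False, f"Incorrect - this case was {truth}"
```

The immediate right/wrong message, paired with an annotated image, supplies the rapid reinforcement signal the module is built around.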


Figure 2, Example score card at the end of the module. User feedback is provided using all 74 available cases, with the first case serving as a tutorial and the remaining 73 scored and separated by diagnosis.
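The score card's grouping of results by diagnosis could be implemented with a simple tally like the one below (a hypothetical sketch; the diagnosis labels are placeholders, since the paper only states that scored cases are separated by diagnosis):

```python
from collections import defaultdict

def build_scorecard(results):
    """Aggregate per-case results into a score card keyed by diagnosis.

    `results` is a list of (diagnosis, correct) tuples for the 73 scored
    cases (the tutorial case is excluded before calling this).
    Returns {diagnosis: (num_correct, num_total)}.
    """
    card = defaultdict(lambda: [0, 0])
    for diagnosis, correct in results:
        card[diagnosis][1] += 1  # count every attempted case
        if correct:
            card[diagnosis][0] += 1  # count correct responses
    return {dx: tuple(counts) for dx, counts in card.items()}
```

For example, `build_scorecard([("normal", True), ("normal", False), ("pneumothorax", True)])` yields per-diagnosis tallies that can be rendered as the end-of-module summary.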


Implementation of the Chest Radiography Module


Evaluation


Table 1

Survey Questions Presented Before and After Web Module

Before Starting the Module

1. Please indicate your level of medical training.
   - No medical training
   - Medical student
   - Medical/surgical resident
   - Radiology resident
   - Radiology fellow
   - Radiology attending

   If “Radiology resident” is selected:
   - Transitional/preliminary year
   - 1st year
   - 2nd year
   - 3rd year
   - 4th year

2. Approximately how many months of training/practice do you have in the interpretation of chest radiographs?
   - None
   - 1 month
   - 2 months
   - 3–4 months
   - 5+ months

After Completing the Module

1. Did you find this module helpful? Interesting? Write a short comment below.

Please rate your agreement with the following statements (Strongly disagree / Somewhat disagree / Neutral / Somewhat agree / Strongly agree):

- Traditional methods of learning to differentiate normal from abnormal chest radiographs (ie, textbooks, workstation readouts, conferences, etc) are an efficient means of learning.
- Traditional methods of learning to differentiate normal from abnormal chest radiographs (ie, textbooks, workstation readouts, conferences, etc) can be improved upon.

The following questions pertain to modules such as the one you just completed, including normal and abnormal radiographs with rapid feedback:

- These modules are an efficient means of learning.
- These modules are helpful for training medical students learning to differentiate normal from abnormal chest radiographs.
- This style could be used, and would be helpful, for learning normal vs abnormal findings in other radiology modalities (for example, ventilation–perfusion (VQ) scans and different forms of intracranial hemorrhage).


Results


Figure 3, Histogram of the study participants’ prior experience with dedicated chest radiography training.


Table 2

Results From Anonymous Participant Surveys

Total responses: n = 60

Prior experience with chest radiography: None: 42% (25); 1 mo: 32% (19); 2 mo: 2% (1); 3–4 mo: 10% (6); 5+ mo: 15% (9)

| Statement | Strongly disagree | Somewhat disagree | Neutral | Somewhat agree | Strongly agree |
| --- | --- | --- | --- | --- | --- |
| Traditional methods are an efficient means of learning. | 3 (5%) | 10 (17%) | 18 (30%) | 26 (43%) | 3 (5%) |
| Traditional methods can be improved upon. | 0 | 0 | 6 (10%) | 22 (37%) | 32 (53%) |
| Rapid-fire modules are an efficient means of learning. | 0 | 0 | 8 (13%) | 22 (37%) | 30 (50%) |
| Rapid-fire modules can be used to teach medical students. | 0 | 1 (2%) | 9 (15%) | 24 (40%) | 26 (43%) |
| Rapid-fire modules can be used in other radiology modalities. | 1 (2%) | 0 | 7 (12%) | 20 (33%) | 32 (53%) |


Figure 4, Comparison of users without prior chest radiography training vs those with at least 1 month of dedicated chest radiography training, with the training and testing phases separated (a) vs cumulatively stacked (b).


Discussion


Conclusion


References

  • 1. Krigolson O.E., Hassall C.D., Handy T.C.: How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans. J Cogn Neurosci 2014; 26: pp. 635-644.

  • 2. Steinberg E.E., Keiflin R., Boivin J.R., et. al.: A causal link between prediction errors, dopamine neurons and learning. Nat Neurosci 2013; 16: pp. 966-973.

  • 3. Zack M.H., Lobo D.S., Biback C., et. al.: Parallel role for the dopamine D1 receptor in gambling and amphetamine reinforcement in healthy volunteers. J Psychopharmacol 2016; 31: pp. 31-42. Available at: http://jop.sagepub.com/cgi/doi/10.1177/0269881116665329. Accessed September 16, 2016.

  • 4. Holroyd C.B., Coles M.G.H.: The neural basis of human error processing: reinforcement learning, dopamine, and the error-related negativity. Psychol Rev 2002; 109: pp. 679-709.

  • 5. Mathiak K.A., Klasen M., Weber R., et. al.: Reward system and temporal pole contributions to affective evaluation during a first person shooter video game. BMC Neurosci 2011; 12: pp. 66.

  • 6. Kühn S., Romanowski A., Schilling C., et. al.: The neural basis of video gaming. Transl Psychiatry 2011; 1: pp. e53.

  • 7. Koepp M.J., Gunn R.N., Lawrence A.D., et. al.: Evidence for striatal dopamine release during a video game. Nature 1998; 393: pp. 266-268.

  • 8. Delfabbro P., King D.: On finding the C in CBT: the challenges of applying gambling-related cognitive approaches to video-gaming. J Gambl Stud 2015; 31: pp. 315-329.

  • 9. Lorenz R.C., Gleich T., Gallinat J., et. al.: Video game training and the reward system. Front Hum Neurosci 2015; 9: pp. 40.

  • 10. Haider H., Eberhardt K., Esser S., et. al.: Implicit visual learning: how the task set modulates learning by determining the stimulus-response binding. Conscious Cogn 2014; 26C: pp. 145-161.

  • 11. Williamson K.B., Gunderman R.B., Cohen M.D., et. al.: Learning theory in radiology education. Radiology 2004; 233: pp. 15-18.

  • 12. Seaborn K., Fels D.I.: Gamification in theory and action: a survey. Int J Hum Comput Stud 2015; 74: pp. 14-31.

  • 13. Blohm I., Leimeister J.M.: Gamification: design of IT-based enhancing services for motivational support and behavioral change. Bus Inform Syst Eng 2013; 5: pp. 275-278.

  • 14. Seguin P., Le Bouquin V., Aguillon D., et. al.: [Testing nasogastric tube placement: evaluation of three different methods in intensive care unit]. Ann Fr Anesth Reanim 2005; 24: pp. 594-599.

  • 15. Ahn J.Y., Lee J.S., Lee G.H., et. al.: The efficacy of a newly designed, easy-to-manufacture training simulator for endoscopic biopsy of the stomach. Gut Liver 2016; 10: pp. 764-772.

  • 16. Chen M.-J., Wang H.-Y., Chang C.-W., et. al.: A novel artificial tissue simulator for endoscopic submucosal resection training—a pilot study. BMC Gastroenterol 2016; 16: pp. 112.

  • 17. Rock B.G., Leonard A.P., Freeman S.J.: A training simulator for ultrasound-guided percutaneous nephrostomy insertion. Br J Radiol 2010; 83: pp. 612-614.

  • 18. Isaacson G., Ianacone D.C., Soliman A.M.S.: Ex vivo ovine model for suspension microlaryngoscopy training. J Laryngol Otol 2016; 130: pp. 939-942.

  • 19. Poot J.D., Chetlen A.L.: A simulation screening mammography module created for instruction and assessment: radiology residents vs national benchmarks. Acad Radiol 2016; 23: pp. 1454-1462.

  • 20. Back S.J., Darge K., Bedoya M.A., et. al.: Ultrasound tutorials in under 10 minutes: experience and results. AJR Am J Roentgenol 2016; 207: pp. 653-660.

  • 21. Benya E.C., Wyers M.R., O’Brien E.K.: Evaluation of a pediatric fluoroscopy training module to improve performance of upper gastrointestinal procedures in neonates with bilious emesis. Pediatr Radiol 2016; 46: pp. 1680-1683.

  • 22. Courtier J., Webb E.M., Phelps A.S., et. al.: Assessing the learning potential of an interactive digital game versus an interactive-style didactic lecture: the continued importance of didactic teaching in medical student education. Pediatr Radiol 2016; 46: pp. 1787-1796.

  • 23. Mesko B., Győrffy Z., Kollár J.: Digital literacy in the medical curriculum: a course with social media tools and gamification. JMIR Med Educ 2015; 1: pp. e6.

  • 24. Bhargava P., Lackey A.E., Dhand S., et. al.: Radiology education 2.0—on the cusp of change: part 1. Tablet computers, online curriculums, remote meeting tools and audience response systems. Acad Radiol 2013; 20: pp. 364-372.

  • 25. Bhargava P., Dhand S., Lackey A.E., et. al.: Radiology education 2.0—on the cusp of change: part 2. eBooks; file sharing and synchronization tools; websites/teaching files; reference management tools and note taking applications. Acad Radiol 2013; 20: pp. 373-381.

  • 26. Farkhondeh A., Geist J.R.: Evaluation of web-based interactive instruction in intraoral and panoramic radiographic anatomy. J Mich Dent Assoc 2015; 97: pp. 34-38.

This post is licensed under CC BY 4.0 by the author.