Memory for Previously Viewed Radiographs and the Effect of Prior Knowledge of Memory Task

Rationale and Objectives

To investigate whether forewarning radiologists that they would be asked to identify repeated images affects their recognition of previously interpreted versus new chest radiographs.

Materials and Methods

Thirteen radiologists viewed 60 distinct posterior-anterior chest radiographs, 31 with and 29 without nodules, presented in two sets of 40 images each. Eight radiologists were forewarned of the memory task and five were not. Twenty images in each set were unique to that set, and 20 images occurred in both sets. The readers indicated the presence or absence of any nodules during both readings, and in the second reading session they also indicated whether they thought each image had occurred in the first reading.

Results

There was no significant difference in recognition memory performance between forewarned and not-forewarned readers. Overall accuracy in distinguishing previously viewed from new images was 60.7%.

Conclusions

Being forewarned of the memory task did not improve recognition memory.

Imaging researchers commonly compare two different imaging techniques or viewing situations by having radiologists read two or more sets of images and then comparing their accuracy with one set versus another. Most authors use various techniques to avoid bias attributable to memory—in other words, to prevent the observers from recognizing the images in the second set and linking them to images in the first set, so that each set of interpretations is made as independently of the other as possible. Common methods include waiting for a predetermined period of time between readings, showing the images in different order at the separate readings, and counterbalancing the viewings, so that, in an example in which two different conditions are being tested, equal numbers of readers encounter condition A first and B second versus B first and A second.

Creating a time lapse between readings increases the time required to complete a performance study, and depending on when, where, and with what readers a study is carried out, there may be distinct practical limits to how much time can pass between readings. There has been little advice regarding such time lapses in the published literature. Charles Metz advised that researchers wait as long as possible between readings (1). In practice, the time allowed to elapse between readings can vary dramatically: Graf et al (2) waited 2 days between readings in a pulmonary nodule detection task, whereas Fuhrman et al (3) waited 2 years between readings in a rib fracture detection study. There has been relatively little work exploring radiologists' actual memory for medical images they have encountered, although four articles have suggested that radiologists' recognition memory for radiographs may be fairly limited (4-7).

Materials and methods

Readers and Experimental Setting

Images

Table 1

Characteristics of Images Used in This Study and of Patients from Whom the Images Were Obtained

| Set | Female | Male | Age, Mean (y) | Age, SD | One Nodule | Two Nodules | Without Nodules | Nodule Size, Mean (mm) | Nodule Size, SD | Nodule Subtlety, Mean | Nodule Subtlety, SD |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Unique to set 1 | 12 | 8 | 53.9 | 11.7 | 7 | 4 | 9 | 16.1 | 8.8 | 1.5 | 0.8 |
| Unique to set 2 | 11 | 9 | 61.5 | 13.0 | 9 | 1 | 10 | 18.3 | 9.9 | 1.8 | 0.6 |
| Repeats | 9 | 11 | 51.8 | 22.1 | 8 | 2 | 10 | 25.2 | 8.9 | 1.9 | 0.8 |
| 1 plus repeats | 21 | 19 | 53.0 | 17.6 | 15 | 6 | 19 | 19.8 | 10.0 | 1.7 | 0.8 |
| 2 plus repeats | 20 | 20 | 56.7 | 18.8 | 17 | 3 | 20 | 21.8 | 10.0 | 1.9 | 0.7 |

Statistical Analysis

Results

Table 2

Results from Univariate Logistic Regression Models to Predict the Likelihood of Correctly Classifying an Image as Included or Not Included in the Previously Encountered Group (Information from Both Repeated Images and Images Unique to the Second Group)

Likelihood of Correct Classification: All Images Seen in the Second Reading

| Factor | No. of Images | No. Correct (%) | Odds Ratio | 95% CI for Odds Ratio | P Value |
|---|---|---|---|---|---|
| Not forewarned | 199 | 129 (64.8) | | | 0.13 |
| Forewarned | 320 | 186 (58.1) | 0.753 | 0.522, 1.087 | |
| No nodules | 253 | 148 (58.5) | | | 0.32 |
| Nodules | 266 | 167 (62.8) | 1.198 | 0.841, 1.706 | |
| No nodules marked | 196 | 122 (62.2) | | | 0.54 |
| Nodules marked | 323 | 193 (59.8) | 0.893 | 0.619, 1.287 | |
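The forewarned-versus-not-forewarned odds ratio in Table 2 can be reproduced directly from the reported counts. The sketch below (plain Python; the function name and the Wald large-sample interval are assumptions about how the confidence limits were derived, not the authors' stated method) recomputes that comparison from the table's cell counts:

```python
import math

def odds_ratio_ci(correct_ref, total_ref, correct_cmp, total_cmp, z=1.96):
    """Odds ratio of the comparison group vs. the reference group,
    with a Wald 95% CI: SE(ln OR) = sqrt(1/a + 1/b + 1/c + 1/d)
    over the four cell counts of the 2x2 table."""
    a, b = correct_ref, total_ref - correct_ref    # reference: correct, incorrect
    c, d = correct_cmp, total_cmp - correct_cmp    # comparison: correct, incorrect
    or_ = (c / d) / (a / b)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts from Table 2: not forewarned 129/199 correct, forewarned 186/320.
or_, lo, hi = odds_ratio_ci(129, 199, 186, 320)
print(f"OR = {or_:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# Closely reproduces the table's 0.753 (0.522, 1.087)
```

The same function applied to the nodule rows (148/253 vs. 167/266) recovers the table's 1.198 (0.841, 1.706), which suggests these are unadjusted univariate estimates.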

Table 3

Results from Univariate Logistic Regression Models to Predict the Likelihood of Correctly Classifying an Image as Included or Not Included in the Previously Encountered Group (N = 20 Repeated Images)

Likelihood of Correct Classification: Only Repeated Images

| Factor | No. of Images | No. Correct (%) | Odds Ratio | 95% CI for Odds Ratio | P Value |
|---|---|---|---|---|---|
| Not forewarned | 100 | 69 (69.0) | | | 0.11 |
| Forewarned | 160 | 80 (50.0) | 0.444 | 0.162, 1.214 | |
| No nodules | 130 | 62 (47.7) | | | 0.001 |
| Nodules | 130 | 87 (66.9) | 2.445 | 1.431, 4.184 | |
| No nodules marked | 97 | 49 (50.5) | | | 0.06 |
| Nodules marked | 163 | 100 (61.3) | 1.695 | 0.978, 2.941 | |

Figure 1. Receiver operating characteristic curves showing each reader's performance in distinguishing new from previously viewed radiographs. Readers who were forewarned of the memory task and those who were not performed quite similarly.
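The per-reader ROC analysis in Figure 1 treats each reader's "seen before" confidence ratings as a discrimination score between repeated and new images. As a minimal sketch (plain Python; the ratings below are hypothetical illustration, not the study's data), the area under such a curve can be computed directly as the Mann-Whitney probability:

```python
def recognition_auc(old_scores, new_scores):
    """Area under the empirical ROC curve for old-vs-new discrimination,
    computed as the probability that a randomly chosen previously seen
    image receives a higher 'seen before' rating than a randomly chosen
    new image, with ties counting one half."""
    wins = 0.0
    for o in old_scores:
        for n in new_scores:
            if o > n:
                wins += 1.0
            elif o == n:
                wins += 0.5
    return wins / (len(old_scores) * len(new_scores))

# Hypothetical ratings (1 = surely new, 5 = surely seen before)
old = [4, 5, 3, 2, 4, 5, 3, 4]   # ratings given to repeated images
new = [2, 1, 3, 2, 4, 1, 2, 3]   # ratings given to new images
auc = recognition_auc(old, new)   # 0.84375 for these ratings
```

An AUC of 0.5 corresponds to chance performance, which is the benchmark against which the modest recognition accuracy reported here should be read.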

Discussion

References

  • 1. Metz C.E.: Some practical issues of experimental design and data analysis in radiological ROC studies. Invest Radiol 1989; 24: pp. 234-245.

  • 2. Graf B., Simon U., Eickmeyer F., et. al.: 1K versus 2K monitor: a clinical alternative free-response receiver operating characteristic study of observer performance using pulmonary nodules. AJR Am J Roentgenol 2000; 174: pp. 1067-1074.

  • 3. Fuhrman C.R., Britton C.A., Bender T., et. al.: Observer performance studies: detection of single versus multiple abnormalities of the chest. AJR Am J Roentgenol 2002; 179: pp. 1551-1553.

  • 4. Hardesty L.A., Ganott M.A., Hakim C.M., et. al.: “Memory effect” in observer performance studies of mammograms. Acad Radiol 2005; 12: pp. 286-290.

  • 5. Ryan J.T., Haygood T.M., Yamal J.M., et. al.: The “memory effect” for repeated radiologic observations. AJR Am J Roentgenol 2011; 197: pp. W985-W991.

  • 6. Hillard A., Myles-Worsley M., Johnston W., et. al.: The development of radiologic schemata through training and experience. Invest Radiol 1985; 20: pp. 422-425.

  • 7. Evans K.K., Cohen M.A., Tambouret R., et. al.: Does visual expertise improve visual recognition memory?. Atten Percept Psychophys 2011; 73: pp. 30-35.

  • 8. Hermann K.A., Bonél H.M., Stäbler A., et. al.: [Time needs in evaluating digital thoracic images on the monitor in comparison with alternator.]. Rontgenpraxis 2001; 53: pp. 260-265.

  • 9. Razavi M., Sayre J.W., Taira R.K., et. al.: Receiver-operating-characteristic study of chest radiographs in children: digital hard-copy film vs 2Kx2K soft-copy images. AJR Am J Roentgenol 1992; 158: pp. 443-448.

  • 10. Haygood T.M., Ryan J., Brennan P.C., et. al.: On the choice of acceptance radius in free-response observer performance studies. Br J Radiol 2013; 86: pp. 42313554.

  • 11. Bellhouse-King M.W., Standing L.G.: Recognition memory for concrete, regular abstract, and diverse abstract pictures. Percept Mot Skills 2007; 104: pp. 758-762.

  • 12. Brady T.F., Konkle T., Alvarez G.A., et. al.: Visual long-term memory has a massive storage capacity of object details. Proc Natl Acad Sci U S A 2008; 105: pp. 14325-14329.

  • 13. Kallergi M., Pianou N., Georgakopoulos A., et. al.: Quantitative evaluation of the memory bias effect in ROC studies with PET/CT. Proc SPIE 2012; 8318: pp. 83180D-83181D.

This post is licensed under CC BY 4.0 by the author.