
Simple and Efficient Method for Region of Interest Value Extraction from Picture Archiving and Communication System Viewer with Optical Character Recognition Software and Macro Program

Rationale and Objectives

The objectives were: (1) to introduce a simple and efficient method for extracting region of interest (ROI) values from a Picture Archiving and Communication System (PACS) viewer using optical character recognition (OCR) software and a macro program, and (2) to evaluate the accuracy of this method on a PACS workstation.

Materials and Methods

This module was designed to extract the ROI values from images displayed on the PACS and was created as a development tool using open-source OCR software and an open-source macro program. The principal processes are as follows: (1) capture the region displaying the ROI values as a graphic file for OCR, (2) recognize the text from the captured image with the OCR software, (3) perform error correction, (4) extract the values, including the area, average, standard deviation, maximum, and minimum, from the text, (5) reformat the values into temporary strings with tab separators, and (6) paste the temporary strings into a spreadsheet. This process was repeated for each ROI. The accuracy of the module was evaluated on 1040 recognitions from 280 randomly selected ROIs on magnetic resonance images. The input times for the ROIs were compared between the conventional manual method and the module-assisted input method.
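The six steps above can be sketched in a few lines of code. The published module was written in AutoHotkey with GOCR as the OCR engine; the Python sketch below only illustrates the same flow, and the capture coordinates, the character-substitution table, and the field labels are hypothetical placeholders rather than values from the paper.

```python
"""Illustrative Python sketch of the six-step ROI extraction pipeline.

Not the authors' implementation (which used AutoHotkey + GOCR); coordinates,
labels, and the confusion table below are assumptions for demonstration.
"""
import re
import subprocess

from PIL import ImageGrab  # pip install pillow

# (1) Capture the screen region that shows the ROI statistics.
ROI_TEXT_REGION = (100, 100, 400, 220)  # hypothetical (left, top, right, bottom) pixels

def capture_region(bbox, path="roi_text.ppm"):
    ImageGrab.grab(bbox=bbox).save(path)
    return path

# (2) Recognize the text with an external OCR engine (GOCR shown as one CLI option).
def run_ocr(image_path):
    result = subprocess.run(["gocr", image_path], capture_output=True, text=True)
    return result.stdout

# (3) Correct typical OCR digit confusions and remove stray spaces.
SUBSTITUTIONS = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1"})

def correct(text):
    return text.translate(SUBSTITUTIONS).replace(" ", "")

# (4) Extract area, average, SD, max, and min values from the corrected text.
FIELDS = ("Area", "Average", "SD", "Max", "Min")  # labels as assumed for this sketch

def extract_values(text):
    values = {}
    for field in FIELDS:
        match = re.search(rf"{field}[:=]?(-?\d+\.?\d*)", text, re.IGNORECASE)
        values[field] = match.group(1) if match else ""
    return values

# (5) Reformat into a tab-separated string, (6) ready to paste into a spreadsheet.
def to_row(values):
    return "\t".join(values[f] for f in FIELDS)

if __name__ == "__main__":
    raw = run_ocr(capture_region(ROI_TEXT_REGION))
    print(to_row(extract_values(correct(raw))))  # repeat once per ROI
```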

Results

The module for extracting ROI values operated successfully using the OCR and macro programs. The values of the area, average, standard deviation, maximum, and minimum could be recognized and error-corrected with the AutoHotkey-coded module. The average input times using the conventional method and the proposed module-assisted method were 34.97 seconds and 7.87 seconds, respectively.

Conclusions

A simple and efficient method for ROI value extraction was developed with open-source OCR software and a macro program. With this module, the various numeric values of each ROI can be input accurately. The proposed module could be applied to the next generation of PACS or to existing PACS that have not yet been upgraded.

Key points:


Figure 1

Screenshots of the Picture Archiving and Communication System (PACS). (a) Region of interest (ROI) values are displayed on the PACS screen. In large images, the ROI values are overlaid on the background signal. (b) By zooming out, the ROI values are displayed with no background. SD, standard deviation.
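Figure 1(b) shows the display condition the capture step relies on: zooming out so the ROI statistics are drawn on a plain background before the text region is captured. As an illustrative aside (an assumption, not part of the published module), a simple grayscale threshold on the captured region can suppress the underlying image signal before OCR when zooming out is not practical.

```python
from PIL import Image  # pip install pillow

def binarize(path_in, path_out="roi_text_bw.ppm", cutoff=200):
    """Keep bright overlay text white and push the darker image background to black."""
    gray = Image.open(path_in).convert("L")               # grayscale copy
    bw = gray.point(lambda p: 255 if p >= cutoff else 0)  # hard threshold; cutoff is a guess
    bw.save(path_out)
    return path_out
```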


Materials and Methods

Hardware and Software


Main Module for ROI Extraction


Figure 3. OCR of the ROI values. (a) Captured image of the ROI values, (b) unprocessed raw text after OCR, and (c) error correction and space removal of the raw text. OCR, optical character recognition; ROI, region of interest; SD, standard deviation.
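As a concrete illustration of panels (b) and (c), the snippet below applies a character-substitution table and space removal to a raw OCR string. Both the raw string and the confusion table are made-up examples of typical digit confusions, not the actual output shown in the figure.

```python
# Hypothetical raw OCR output: 'O' read for '0', 'l' read for '1', stray spaces.
raw = "Area : 1O2. 5 mm2\nAvg : 3l4.7  SD : 12. O"

# Map commonly confused characters back to digits (illustrative table).
CONFUSIONS = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1"})

corrected = raw.translate(CONFUSIONS).replace(" ", "")
print(corrected)  # 'Area:102.5mm2\nAvg:314.7SD:12.0'
```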


Figure 2. Flowchart of the ROI extracting module. OCR, optical character recognition; PACS, Picture Archiving and Communication System; ROI, region of interest.


Figure 4. Spreadsheet. The extracted region of interest values are successfully inserted into a spreadsheet with proper cell separation.
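The cell separation seen in Figure 4 follows from the formatting in step (5): tab characters delimit columns and newlines delimit rows, so a single paste fills one cell per value. The sketch below shows the idea using the pyperclip library and made-up numbers; the published module used the macro program's own clipboard and paste commands.

```python
import pyperclip  # pip install pyperclip

# Hypothetical extracted rows: area, average, SD, max, min for two ROIs.
rows = [
    ["102.5", "314.7", "12.0", "350", "280"],
    ["98.1",  "305.2", "10.4", "341", "275"],
]

# Tabs separate columns, newlines separate rows; pasting into a spreadsheet
# (Ctrl+V) then places each value in its own cell.
pyperclip.copy("\n".join("\t".join(r) for r in rows))
```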


Accuracy Evaluation and Comparison


Results


Discussion


References

  • 1. Optical character recognition. Wikipedia. Available at: http://en.wikipedia.org/wiki/Optical_character_recognition . Accessed August 24, 2013.

  • 2. Cook T.S., Zimmerman S., Maidment A.D., et al.: Automated extraction of radiation dose information for CT examinations. J Am Coll Radiol 2010; 7: pp. 871-877.

  • 3. Li X., Zhang D., Liu B.: Automated extraction of radiation dose information from CT dose report images. AJR Am J Roentgenol 2011; 196: pp. W781-W783.

  • 4. Lee Y.H., Song H.T., Suh J.S.: Quantitative computed tomography (QCT) as a radiology reporting tool by using optical character recognition (OCR) and macro program. J Digit Imaging 2012; 25: pp. 815-818.

  • 5. AutoHotkey Web site. Available at: http://www.autohotkey.com . Accessed August 24, 2013.

  • 6. Robson M.D., Gatehouse P.D., Bydder M., et al.: Magnetic resonance: an introduction to ultrashort TE (UTE) imaging. J Comput Assist Tomogr 2003; 27: pp. 825-846.

  • 7. GOCR Web site. Available at: http://jocr.sourceforge.net/api . Accessed August 24, 2013.

  • 8. Goyal N., Jain N., Rachapalli V.: Ergonomics in radiology. Clin Radiol 2009; 64: pp. 119-126.

  • 9. Harisinghani M.G., Blake M.A., Saksena M., et al.: Importance and effects of altered workplace ergonomics in modern radiology suites. Radiographics 2004; 24: pp. 615-627.
