Deep learning-based NLP data pipeline for EHR-scanned document information extraction.
Hsu, Enshuo; Malagaris, Ioannis; Kuo, Yong-Fang; Sultana, Rizwana; Roberts, Kirk.
Affiliation
  • Hsu E; Department of Biostatistics and Data Science, University of Texas Medical Branch, Galveston, Texas, USA.
  • Malagaris I; Department of Biostatistics and Data Science, University of Texas Medical Branch, Galveston, Texas, USA.
  • Kuo YF; Department of Biostatistics and Data Science, University of Texas Medical Branch, Galveston, Texas, USA.
  • Sultana R; Division of Pulmonary, Critical Care and Sleep Medicine, Department of Internal Medicine, University of Texas Medical Branch, Galveston, Texas, USA.
  • Roberts K; School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA.
JAMIA Open ; 5(2): ooac045, 2022 Jul.
Article in En | MEDLINE | ID: mdl-35702624
ABSTRACT

Objective:

Scanned documents in electronic health records (EHR) have been a challenge for decades, and are expected to stay in the foreseeable future. Current approaches for processing include image preprocessing, optical character recognition (OCR), and natural language processing (NLP). However, there is limited work evaluating the interaction of image preprocessing methods, NLP models, and document layout.

Materials and Methods:

We evaluated 2 key indicators for sleep apnea, the apnea-hypopnea index (AHI) and oxygen saturation (SaO2), from 955 scanned sleep study reports. Image preprocessing methods included gray-scaling, dilation, erosion, and contrast adjustment. OCR was implemented with Tesseract. Seven traditional machine learning models and 3 deep learning models were evaluated. We also evaluated combinations of image preprocessing methods with 2 deep learning architectures (with and without structured input providing document layout information), with the goal of optimizing end-to-end performance.
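The preprocessing steps named above (gray-scaling, dilation, erosion) are standard morphological operations typically applied before OCR. As a minimal illustrative sketch (not the authors' implementation), the two morphological operators can be written in plain NumPy; in practice one would use a library such as OpenCV (`cv2.dilate`, `cv2.erode`) and then pass the cleaned page image to Tesseract, e.g. via `pytesseract.image_to_string`:

```python
import numpy as np

def dilate(binary, k=3):
    # Morphological dilation with a k x k square structuring element:
    # a pixel becomes 1 if any pixel in its window is 1 (thickens strokes).
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")
    h, w = binary.shape
    out = np.zeros_like(binary)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def erode(binary, k=3):
    # Morphological erosion: a pixel stays 1 only if its whole window
    # is 1 (thins strokes, removes speckle noise).
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")
    h, w = binary.shape
    out = np.ones_like(binary)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

# Toy binarized "page": a single on-pixel grows under dilation
# and is wiped out (as noise would be) under erosion.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 1
print(dilate(img).sum())  # 9 (3x3 block)
print(erode(img).sum())   # 0 (isolated pixel removed)
```

The erode-then-dilate combination (morphological opening) is a common way to remove scanner speckle before OCR, which is one reason these operators matter for end-to-end extraction quality.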

Results:

Our proposed method using ClinicalBERT reached an AUROC of 0.9743 and document accuracy of 94.76% for AHI, and an AUROC of 0.9523 and document accuracy of 91.61% for SaO2.

Discussion:

Extracting meaningful information from scanned reports involves multiple, inter-related steps. While it would be infeasible to experiment with all possible option combinations, we experimented with several of the most critical steps for information extraction, including image processing and NLP. Given that scanned documents will likely be part of healthcare for years to come, it is critical to develop NLP systems to extract key information from these data.

Conclusion:

We demonstrated that the proper use of image preprocessing and document layout information can benefit scanned document processing.

Full text: 1 Collection: 01-internacional Database: MEDLINE Type of study: Qualitative_research Language: En Journal: JAMIA Open Year: 2022 Document type: Article Affiliation country: United States
