Understanding spatial language in radiology: Representation framework, annotation, and spatial relation extraction from chest X-ray reports using deep learning.
Datta, Surabhi; Si, Yuqi; Rodriguez, Laritza; Shooshan, Sonya E; Demner-Fushman, Dina; Roberts, Kirk.
Affiliation
  • Datta S; School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, United States. Electronic address: surabhi.datta@uth.tmc.edu.
  • Si Y; School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, United States.
  • Rodriguez L; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Shooshan SE; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Demner-Fushman D; National Library of Medicine, National Institutes of Health, Bethesda, MD, United States.
  • Roberts K; School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, United States. Electronic address: kirk.roberts@uth.tmc.edu.
J Biomed Inform; 108: 103473, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32562898
ABSTRACT
Radiology reports contain a radiologist's interpretations of images, and these interpretations frequently describe spatial relations. Important radiographic findings are most often described in reference to an anatomical location through spatial prepositions. Such spatial relationships are also linked to various differential diagnoses and are often qualified by uncertainty phrases. A structured representation of this clinically significant spatial information could support a variety of downstream clinical informatics applications. Our focus is to extract these spatial representations from the reports. To this end, we first define a representation framework based on the Spatial Role Labeling (SpRL) scheme, which we refer to as Rad-SpRL. In Rad-SpRL, common radiological entities tied to spatial relations are encoded through four spatial roles: Trajector, Landmark, Diagnosis, and Hedge, all identified in relation to a spatial preposition (or Spatial Indicator). We annotated a total of 2,000 chest X-ray reports following Rad-SpRL. We then propose a deep learning-based natural language processing (NLP) method that uses word- and character-level encodings to first extract the Spatial Indicators and then identify the corresponding spatial roles. Specifically, we use a bidirectional long short-term memory (Bi-LSTM) conditional random field (CRF) neural network as the baseline model. Additionally, we incorporate contextualized word representations from pre-trained language models (BERT and XLNet) for extracting the spatial information. We evaluate spatial role extraction with both gold and predicted Spatial Indicators. The results are promising: the highest average F1 measure for Spatial Indicator extraction is 91.29 (XLNet); the highest average overall F1 measure across all four spatial roles is 92.9 using gold Indicators (XLNet) and 85.6 using predicted Indicators (BERT pre-trained on MIMIC notes). The corpus is available on Mendeley Data at http://dx.doi.org/10.17632/yhb26hfz8n.1 and on GitHub at https://github.com/krobertslab/datasets/blob/master/Rad-SpRL.xml.
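As an illustration of the tagging setup the abstract describes, the sketch below (not the authors' released code) frames spatial role extraction as BIO sequence labeling over the Spatial Indicator and the four Rad-SpRL roles, with a word-level Bi-LSTM feeding a CRF layer. It collapses the paper's two steps (Indicator extraction, then role labeling) into a single tagging pass, omits the character-level and BERT/XLNet encodings, and assumes the third-party pytorch-crf package; the label inventory and all hyperparameters are illustrative.

import torch
import torch.nn as nn
from torchcrf import CRF  # third-party pytorch-crf package (assumption)

# BIO tags over the Spatial Indicator plus the four Rad-SpRL roles
# (hypothetical label inventory for illustration).
TAGS = ["O"] + [f"{p}-{r}"
                for r in ("INDICATOR", "TRAJECTOR", "LANDMARK",
                          "DIAGNOSIS", "HEDGE")
                for p in ("B", "I")]

class BiLstmCrfTagger(nn.Module):
    """Word-level Bi-LSTM emitting per-token scores, decoded by a CRF."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden_dim, len(TAGS))
        self.crf = CRF(len(TAGS), batch_first=True)

    def forward(self, token_ids, tags=None):
        emissions = self.emit(self.lstm(self.embed(token_ids))[0])
        if tags is not None:
            return -self.crf(emissions, tags)  # training: negative log-likelihood
        return self.crf.decode(emissions)      # inference: best tag paths

# Toy usage: a batch of two sentences, 12 token ids each.
model = BiLstmCrfTagger(vocab_size=5000)
x = torch.randint(1, 5000, (2, 12))
loss = model(x, tags=torch.zeros(2, 12, dtype=torch.long))  # all-"O" targets
pred = model(x)  # list of two BIO tag-index sequences

In the paper's actual pipeline, Indicator extraction runs first and role labeling is conditioned on each predicted Indicator; the joint formulation above is only a compact stand-in for that two-stage design.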

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Radiology / Deep Learning Study type: Prognostic_studies Language: En Journal: J Biomed Inform Journal subject: MEDICAL INFORMATICS Year: 2020 Document type: Article