Exploiting hierarchy in medical concept embedding.
Finch, Anthony; Crowell, Alexander; Bhatia, Mamta; Parameshwarappa, Pooja; Chang, Yung-Chieh; Martinez, Jose; Horberg, Michael.
Affiliations
  • Finch A; Kaiser Permanente Mid-Atlantic Permanente Medical Group, Rockville, Maryland, USA.
  • Crowell A; Kaiser Permanente Mid-Atlantic Permanente Medical Group, Rockville, Maryland, USA.
  • Bhatia M; Kaiser Permanente Mid-Atlantic Permanente Medical Group, Rockville, Maryland, USA.
  • Parameshwarappa P; Kaiser Permanente Mid-Atlantic Permanente Research Institute, Rockville, Maryland, USA.
  • Chang YC; Kaiser Permanente Mid-Atlantic Permanente Medical Group, Rockville, Maryland, USA.
  • Martinez J; Kaiser Permanente Mid-Atlantic Permanente Medical Group, Rockville, Maryland, USA.
  • Horberg M; Kaiser Permanente Mid-Atlantic Permanente Medical Group, Rockville, Maryland, USA.
JAMIA Open ; 4(1): ooab022, 2021 Jan.
Article in En | MEDLINE | ID: mdl-33748691
ABSTRACT

OBJECTIVE:

To construct and publicly release a set of medical concept embeddings for codes following the ICD-10 coding standard which explicitly incorporate hierarchical information from medical codes into the embedding formulation.

MATERIALS AND METHODS:

We trained concept embeddings using several new extensions to the Word2Vec algorithm on a dataset of approximately 600,000 patients from a major integrated healthcare organization in the Mid-Atlantic US. Our concept embeddings included additional entities to account for the medical categories assigned to codes by the Clinical Classifications Software Refined (CCSR) dataset. We compared these results to sets of publicly released pretrained embeddings and alternative training methodologies.
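One simple way to realize the co-training idea described above is to interleave each ICD-10 code in a patient's sequence with its CCSR category token, so that a Word2Vec-style model learns vectors for both kinds of entities in a shared space. The sketch below illustrates that preprocessing step only; the code-to-category mapping is hypothetical and is not the actual CCSR dataset, and the paper's exact training procedure may differ.

```python
# Sketch: augment patient code sequences with CCSR category tokens so that
# Word2Vec-style training produces embeddings for both codes and categories.
# The mapping below is illustrative, not the real CCSR crosswalk.

CODE_TO_CATEGORY = {
    "E11.9": "END010",    # hypothetical category for type 2 diabetes
    "I10": "CIR007",      # hypothetical category for essential hypertension
    "J45.909": "RSP009",  # hypothetical category for unspecified asthma
}

def augment_with_categories(visits):
    """Interleave each ICD-10 code with its CCSR category token,
    producing training 'sentences' that contain both entity types."""
    sentences = []
    for visit in visits:
        sentence = []
        for code in visit:
            sentence.append(code)
            category = CODE_TO_CATEGORY.get(code)
            if category is not None:
                sentence.append(category)
        sentences.append(sentence)
    return sentences

patient_visits = [["E11.9", "I10"], ["J45.909"]]
print(augment_with_categories(patient_visits))
# [['E11.9', 'END010', 'I10', 'CIR007'], ['J45.909', 'RSP009']]
```

The augmented sentences can then be fed to any Skip-Gram or CBOW implementation, which will place each category vector near the vectors of the codes it contains.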

RESULTS:

We found that Word2Vec models which included hierarchical data outperformed ordinary Word2Vec alternatives on tasks which compared naïve clusters to canonical ones provided by CCSR. Our Skip-Gram model with both codes and categories achieved 61.4% normalized mutual information with canonical labels in comparison to 57.5% with traditional Skip-Gram. In models operating on two different outcomes, we found that including hierarchical embedding data improved classification performance 96.2% of the time. When controlling for all other variables, we found that co-training embeddings improved classification performance 66.7% of the time. We found that all models outperformed our competitive benchmarks.
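The normalized mutual information (NMI) figures reported above measure how well clusters derived from the embeddings agree with the canonical CCSR categories. As a point of reference, the toy example below computes NMI (with the common arithmetic-mean normalization) between a set of canonical labels and a noisy clustering of the same items; both label sets are made up for illustration and do not reproduce the paper's evaluation.

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """Normalized mutual information with arithmetic normalization:
    NMI = 2 * I(A; B) / (H(A) + H(B))."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    cab = Counter(zip(labels_a, labels_b))
    h_a = -sum((c / n) * log(c / n) for c in ca.values())
    h_b = -sum((c / n) * log(c / n) for c in cb.values())
    mi = sum((c / n) * log((c / n) / ((ca[a] / n) * (cb[b] / n)))
             for (a, b), c in cab.items())
    return 2 * mi / (h_a + h_b)

# Toy data: canonical category labels vs. a naive clustering that
# misassigns one item. Perfect agreement would give NMI = 1.0.
canonical = [0, 0, 1, 1, 2, 2]
clustered = [0, 0, 1, 2, 2, 2]
print(round(nmi(canonical, clustered), 3))
```

An NMI of 1.0 indicates that the clustering recovers the canonical partition exactly, so the reported jump from 57.5% to 61.4% reflects clusters that align measurably better with CCSR's hierarchy.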

DISCUSSION:

We found significant evidence that our proposed algorithms can express the hierarchical structure of medical codes more fully than ordinary Word2Vec models, and that this improvement carries forward into classification tasks. As part of this publication, we have released several sets of pretrained medical concept embeddings using the ICD-10 standard which significantly outperform other well-known pretrained vectors on our tested outcomes.

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: JAMIA Open Publication year: 2021 Document type: Article Country of affiliation: United States