Attention-based multimodal fusion with contrast for robust clinical prediction in the face of missing modalities.
Liu, Jinghui; Capurro, Daniel; Nguyen, Anthony; Verspoor, Karin.
Affiliation
  • Liu J; Australian e-Health Research Centre, CSIRO, Queensland, Australia; School of Computing and Information Systems, The University of Melbourne, Victoria, Australia.
  • Capurro D; School of Computing and Information Systems, The University of Melbourne, Victoria, Australia; Centre for Digital Transformation of Health, The University of Melbourne, Victoria, Australia.
  • Nguyen A; Australian e-Health Research Centre, CSIRO, Queensland, Australia.
  • Verspoor K; School of Computing and Information Systems, The University of Melbourne, Victoria, Australia; School of Computing Technologies, RMIT University, Victoria, Australia. Electronic address: karin.verspoor@rmit.edu.au.
J Biomed Inform; 145: 104466, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37549722
ABSTRACT

OBJECTIVE:

With the increasing amount and growing variety of healthcare data, multimodal machine learning supporting integrated modeling of structured and unstructured data is an increasingly important tool for clinical machine learning tasks. However, it is non-trivial to manage differences in the dimensionality, volume, and temporal characteristics of data modalities in the context of a shared target task. Furthermore, patients can vary substantially in which data are available, while existing multimodal modeling methods typically assume data completeness and lack a mechanism for handling missing modalities.

METHODS:

We propose a Transformer-based fusion model with modality-specific tokens that summarize the corresponding modalities, achieving effective cross-modal interaction while accommodating missing modalities in the clinical context. The model is further refined by inter-modal, inter-sample contrastive learning to improve the representations for better predictive performance. We denote the model as Attention-based cRoss-MOdal fUsion with contRast (ARMOUR). We evaluate ARMOUR using two input modalities (structured measurements and unstructured text), six clinical prediction tasks, and two evaluation regimes, either including or excluding samples with missing modalities.
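The core mechanisms described above can be sketched roughly as follows. This is an illustrative NumPy toy based only on the abstract, not the authors' implementation: the function names (`fuse`, `info_nce`), the single-head attention, and all dimensions are our assumptions. The key ideas are that each modality has a learnable summary token that attends over whatever inputs are present (so a missing modality still contributes its token), and that an inter-modal, inter-sample InfoNCE-style contrastive loss pulls together representations from the same patient.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys):
    # Single-head scaled dot-product attention (simplified sketch:
    # keys double as values, no learned projections).
    d = query.shape[-1]
    weights = softmax(query @ keys.T / np.sqrt(d))
    return weights @ keys

def fuse(mod_tokens, struct_feats=None, text_feats=None):
    """Hypothetical fusion step: two learnable modality-specific tokens
    (rows of mod_tokens) attend over the concatenated input sequence.
    A missing modality is simply omitted; its token still participates,
    so downstream layers always see the same number of summary vectors."""
    parts = [mod_tokens] + [m for m in (struct_feats, text_feats) if m is not None]
    seq = np.vstack(parts)
    return np.vstack([attend(mod_tokens[i], seq) for i in range(2)])

def info_nce(z1, z2, temp=0.1):
    # Inter-modal, inter-sample contrastive loss: row i of z1 (one
    # modality) pairs with row i of z2 (the other modality) for the
    # same patient; other rows in the batch act as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))
```

In this sketch, calling `fuse(tokens, struct_feats=s, text_feats=None)` for a patient with no clinical notes still yields two fixed-size summary vectors, which is what allows a single prediction head to serve all patients regardless of modality availability.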

RESULTS:

Our model shows improved performance over unimodal and multimodal baselines in both evaluation regimes, whether patients with missing modalities are included in or excluded from the input. Contrastive learning improves the representation power and proves essential for the best results. The simple setup of modality-specific tokens enables ARMOUR to handle patients with missing modalities and allows comparison with existing unimodal benchmark results.

CONCLUSION:

We propose a multimodal model for robust clinical prediction that achieves improved performance while accommodating patients with missing modalities. This work could inspire future research on effectively incorporating multiple, more complex modalities of clinical data into a single model.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Benchmarking / Machine Learning Study type: Prognostic_studies / Risk_factors_studies Limits: Humans Language: English Journal: J Biomed Inform Journal subject: MEDICAL INFORMATICS Publication year: 2023 Document type: Article Country of affiliation: Australia