A multi-center study on the adaptability of a shared foundation model for electronic health records.
Guo, Lin Lawrence; Fries, Jason; Steinberg, Ethan; Fleming, Scott Lanyon; Morse, Keith; Aftandilian, Catherine; Posada, Jose; Shah, Nigam; Sung, Lillian.
Affiliations
  • Guo LL; Program in Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada.
  • Fries J; Stanford Center for Biomedical Informatics Research, Stanford University, Palo Alto, CA, USA.
  • Steinberg E; Stanford Center for Biomedical Informatics Research, Stanford University, Palo Alto, CA, USA.
  • Fleming SL; Stanford Center for Biomedical Informatics Research, Stanford University, Palo Alto, CA, USA.
  • Morse K; Division of Pediatric Hospital Medicine, Department of Pediatrics, Stanford University, Palo Alto, CA, USA.
  • Aftandilian C; Division of Hematology/Oncology, Department of Pediatrics, Stanford University, Palo Alto, CA, USA.
  • Posada J; Universidad del Norte, Barranquilla, Colombia.
  • Shah N; Stanford Center for Biomedical Informatics Research, Stanford University, Palo Alto, CA, USA.
  • Sung L; Program in Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada. Lillian.sung@sickkids.ca.
NPJ Digit Med; 7(1): 171, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937550
ABSTRACT
Foundation models are transforming artificial intelligence (AI) in healthcare by providing modular components that can be adapted for a variety of downstream tasks, making AI development more scalable and cost-effective. Foundation models for structured electronic health records (EHR), trained on coded medical records from millions of patients, have demonstrated benefits including improved performance with fewer training labels and greater robustness to distribution shifts. However, questions remain about the feasibility of sharing these models across hospitals and their performance on local tasks. This multi-center study examined the adaptability of a publicly accessible structured EHR foundation model (FMSM), trained on 2.57 million patient records from Stanford Medicine. Experiments used EHR data from The Hospital for Sick Children (SickKids) and the Medical Information Mart for Intensive Care (MIMIC-IV). We assessed adaptability both via continued pretraining on local data and via task adaptation, compared against baselines of locally training models from scratch, including a local foundation model. Evaluations on 8 clinical prediction tasks showed that adapting the off-the-shelf FMSM matched the performance of gradient boosting machines (GBM) locally trained on all data, while providing a 13% improvement in settings with few task-specific training labels. With continued pretraining on local data, FMSM required fewer than 1% of training examples to match the fully trained GBM's performance and was 60 to 90% more sample-efficient than training a local foundation model from scratch. Our findings demonstrate that adapting EHR foundation models across hospitals provides improved prediction performance at lower cost, underscoring the utility of base foundation models as modular components that streamline the development of healthcare AI.

Full text: 1 | Collection: 01-international | Database: MEDLINE | Language: English | Journal: NPJ Digit Med | Year: 2024 | Document type: Article | Country of affiliation: Canada