Transparent medical image AI via an image-text foundation model grounded in medical literature.
Nat Med; 30(4): 1154-1165, 2024 Apr. Article in En | MEDLINE | ID: mdl-38627560
ABSTRACT
Building trustworthy and transparent image-based medical artificial intelligence (AI) systems requires the ability to interrogate data and models at all stages of the development pipeline, from training models to post-deployment monitoring. Ideally, the data and associated AI systems could be described using terms already familiar to physicians, but this requires medical datasets densely annotated with semantically meaningful concepts. In the present study, we introduce a foundation model approach, named MONET (medical concept retriever), which learns how to connect medical images with text and densely scores images on concept presence to enable important tasks in medical AI development and deployment, such as data auditing, model auditing and model interpretation. Dermatology provides a demanding use case for the versatility of MONET, owing to the heterogeneity in diseases, skin tones and imaging modalities. We trained MONET on 105,550 dermatological images paired with natural language descriptions from a large collection of medical literature. MONET can accurately annotate concepts across dermatology images, as verified by board-certified dermatologists, performing competitively with supervised models built on previously concept-annotated dermatology datasets of clinical images. We demonstrate how MONET enables AI transparency across the entire AI system development pipeline, from building inherently interpretable models to dataset and model auditing, including a case study dissecting the results of an AI clinical trial.
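The abstract describes an image-text model that densely scores images on concept presence. As a hedged sketch only (MONET's actual architecture and weights are not given in this record), the sketch below illustrates the general CLIP-style pattern such models follow: embed the image and each concept phrase into a shared vector space, then rank concepts by cosine similarity. The embeddings here are random stand-ins; the function names and dimensions are illustrative assumptions, not MONET's API.

```python
import numpy as np

def concept_scores(image_emb: np.ndarray, concept_embs: np.ndarray) -> np.ndarray:
    """Score concept presence for one image: cosine similarity between the
    image embedding and each row of the concept-text embedding matrix."""
    img = image_emb / np.linalg.norm(image_emb)
    con = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    return con @ img  # one similarity score per concept

# Toy example with random stand-in embeddings; a real system would obtain
# these from image and text encoders trained on paired images and captions
# (e.g. figures and descriptions from the medical literature).
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
concepts = ["erythema", "scale", "ulceration"]  # hypothetical concept terms
concept_embs = rng.normal(size=(len(concepts), 512))

scores = concept_scores(image_emb, concept_embs)
for name, s in zip(concepts, scores):
    print(f"{name}: {s:+.3f}")
```

Dense annotation then amounts to running this scoring over every image in a dataset and every concept in a vocabulary, which is what makes downstream auditing and interpretable modeling possible.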
Full text: 1
Collections: 01-internacional
Database: MEDLINE
Main subject: Physicians / Artificial Intelligence
Limits: Humans
Language: En
Journal: Nat Med
Journal subject: MOLECULAR BIOLOGY / MEDICINE
Publication year: 2024
Document type: Article
Country of affiliation: United States