Results 1 - 3 of 3
1.
Crit Care Explor ; 6(5): e1087, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38709088

ABSTRACT

Large randomized trials in sepsis have generally failed to find effective novel treatments. This is increasingly attributed to patient heterogeneity, including heterogeneous cardiovascular changes in septic shock. We discuss the potential for machine learning systems to personalize cardiovascular resuscitation in sepsis. While the literature is replete with proofs of concept, the technological readiness of current systems is low, with a paucity of clinical trials and proven patient benefit. Systems may be vulnerable to confounding and poor generalization to new patient populations or contemporary patterns of care. Typical electronic health records do not capture rich enough data, at sufficient temporal resolution, to produce systems that make actionable treatment suggestions. To resolve these issues, we recommend a simultaneous focus on technical challenges and removing barriers to translation. This will involve improving data quality, adopting causally grounded models, prioritizing safety assessment and integration into healthcare workflows, conducting randomized clinical trials and aligning with regulatory requirements.


Subject(s)
Machine Learning , Precision Medicine , Sepsis , Humans , Sepsis/therapy , Precision Medicine/methods , Resuscitation/methods
2.
NPJ Digit Med ; 6(1): 206, 2023 Nov 07.
Article in English | MEDLINE | ID: mdl-37935953

ABSTRACT

The influence of AI recommendations on physician behaviour remains poorly characterised. We assess how clinicians' decisions may be influenced by additional information more broadly, and how this influence is modified by the source of the information (human peers or AI) and by the presence or absence of an AI explanation (XAI, here using simple feature importance). We used a modified between-subjects design in which intensive care doctors (N = 86) were presented, on a computer, with a patient case in each of 16 trials and prompted to prescribe continuous values for two drugs. We used a multi-factorial experimental design with four arms, where each clinician experienced all four arms on different subsets of our 24 patients. The four arms were (i) baseline (control), (ii) a peer-clinician scenario showing what doses other doctors had prescribed, (iii) an AI suggestion and (iv) an XAI suggestion. We found that additional information (peer, AI or XAI) had a strong influence on prescriptions (significantly so for AI, but not for peers), yet simple XAI did not exert greater influence than AI alone. There was no correlation between attitudes to AI or clinical experience and the AI-supported decisions, nor between how useful doctors self-reported finding the XAI and whether the XAI actually influenced their prescriptions. Our findings suggest that the marginal impact of simple XAI was low in this setting, and they cast doubt on the utility of self-reports as a valid metric for assessing XAI in clinical experts.
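The "simple feature importance" explanation described above can be illustrated with a minimal sketch. This is not the study's actual system: the model, the clinical feature names, and the synthetic data below are all illustrative assumptions, showing only the generic pattern of presenting a ranked importance list alongside a model's suggestion.

```python
# Hypothetical sketch of a simple feature-importance XAI display.
# Features, model, and data are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["MAP", "lactate", "heart_rate", "urine_output"]  # assumed inputs
X = rng.normal(size=(500, len(features)))
# Synthetic target: the "dose" depends mostly on the first two features.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def explain(model, feature_names):
    """Rank features by the model's impurity-based importance scores."""
    return sorted(zip(feature_names, model.feature_importances_),
                  key=lambda kv: kv[1], reverse=True)

for name, score in explain(model, features):
    print(f"{name}: {score:.2f}")
```

Whether such a ranked list changes a clinician's prescription is precisely the question the study tests; its result suggests this form of explanation added little beyond the bare AI suggestion.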

3.
BMJ Health Care Inform ; 29(1)2022 Jul.
Article in English | MEDLINE | ID: mdl-35851286

ABSTRACT

OBJECTIVES: Establishing confidence in the safety of Artificial Intelligence (AI)-based clinical decision support systems is important prior to clinical deployment and regulatory approval for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis.
METHODS: As part of the safety assurance, we defined four clinical hazards in sepsis resuscitation based on clinical expert opinion and the existing literature. We then identified a set of unsafe scenarios, intended to limit the action space of the AI agent with the goal of reducing the likelihood of hazardous decisions.
RESULTS: Using a subset of the Medical Information Mart for Intensive Care (MIMIC-III) database, we demonstrated that our previously published AI Clinician recommended fewer hazardous decisions than human clinicians in three of our four predefined clinical scenarios, while the difference was not statistically significant in the fourth. We then modified the reward function to satisfy our safety constraints and trained a new AI Clinician agent. The retrained model showed enhanced safety without negatively impacting model performance.
DISCUSSION: While some contextual patient information absent from the data may have pushed human clinicians to take hazardous actions, the data were curated to limit the impact of this confounder.
CONCLUSION: These advances provide a use case for the systematic safety assurance of AI-based clinical systems towards the generation of explicit safety evidence, which could be replicated for other AI applications or other clinical contexts, and inform medical device regulatory bodies.
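The reward-function modification described in the results can be sketched generically. This is not the AI Clinician's actual reward function: the hazard definition, thresholds, and penalty value below are hypothetical, illustrating only the common pattern of subtracting a penalty from the reward whenever the agent selects a predefined unsafe state-action pair.

```python
# Illustrative sketch of safety-constrained reward shaping for an RL agent.
# The hazard rule, thresholds, and penalty are assumptions, not the paper's.
def is_hazardous(state, action):
    """Example hazard: a large fluid bolus in an already fluid-overloaded
    patient (both thresholds here are hypothetical)."""
    return state["fluid_balance_ml"] > 5000 and action["fluid_bolus_ml"] >= 500

def shaped_reward(base_reward, state, action, penalty=-10.0):
    """Apply a fixed penalty whenever the chosen action is unsafe."""
    return base_reward + (penalty if is_hazardous(state, action) else 0.0)

safe = shaped_reward(1.0, {"fluid_balance_ml": 1000}, {"fluid_bolus_ml": 500})
unsafe = shaped_reward(1.0, {"fluid_balance_ml": 6000}, {"fluid_bolus_ml": 500})
print(safe, unsafe)  # 1.0 -9.0
```

An agent retrained against a reward shaped this way is discouraged from the predefined hazardous actions, which is the mechanism by which the retrained model's safety improved.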


Subject(s)
Decision Support Systems, Clinical , Sepsis , Artificial Intelligence , Critical Care , Humans , Sepsis/therapy