Results 1-3 of 3
1.
Artif Intell Med; 137: 102498, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36868690

ABSTRACT

Medical experts may place greater trust in Artificial Intelligence (AI) systems when these are supported by 'contextual explanations' that let the practitioner connect system inferences to their context of use. However, the importance of such explanations in improving model usage and understanding has not been extensively studied. We therefore consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state, AI predictions about their risk of complications, and algorithmic explanations supporting the predictions. We explore how relevant information for these dimensions can be extracted from medical guidelines to answer typical questions from clinical practitioners. We frame this as a question answering (QA) task and employ several state-of-the-art large language models (LLMs) to present contexts around risk prediction model inferences and evaluate their acceptability. Finally, we study the benefits of contextual explanations by building an end-to-end AI pipeline including data cohorting, AI risk modeling, and post-hoc model explanations, and by prototyping a visual dashboard that presents the combined insights from the different context dimensions and data sources while predicting and identifying the drivers of risk of Chronic Kidney Disease (CKD), a common comorbidity of type 2 diabetes mellitus (T2DM). All of these steps were performed in deep engagement with medical experts, including a final evaluation of the dashboard results by an expert medical panel. We show that LLMs, in particular BERT and SciBERT, can be readily deployed to extract some relevant explanations to support clinical usage. To understand the value added by the contextual explanations, the expert panel evaluated them for actionable insights in the relevant clinical setting. Overall, our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case. Our findings can help improve clinicians' usage of AI models.
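The abstract does not include code; purely as an illustration of the "context dimensions" idea it describes, the sketch below combines a patient's clinical state, a model risk score, post-hoc risk drivers, and a guideline snippet into one explanation record, the kind of combined view the prototyped dashboard would present per patient. All names, fields, and values are invented, not taken from the paper.

```python
from dataclasses import dataclass, asdict, field

@dataclass
class ContextualExplanation:
    """One combined view: clinical state, AI prediction, supporting context."""
    patient_state: dict          # e.g. labs relevant to CKD risk (hypothetical)
    risk_score: float            # model-predicted probability of the comorbidity
    top_drivers: list = field(default_factory=list)  # post-hoc explanation output
    guideline_snippet: str = ""  # answer extracted from guidelines (the QA step)

def build_explanation(state, score, drivers, snippet):
    # A dashboard tile would render one of these records per patient.
    return ContextualExplanation(state, score, drivers, snippet)

exp = build_explanation(
    {"eGFR": 52, "HbA1c": 8.1},
    0.71,
    ["eGFR", "HbA1c"],
    "In T2DM, regular eGFR screening is recommended to monitor CKD progression.",
)
print(asdict(exp)["risk_score"])  # prints 0.71
```

In the paper the snippet field would be filled by the BERT/SciBERT QA step over medical guidelines rather than hard-coded.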


Subjects
Artificial Intelligence; Diabetes Mellitus, Type 2; Humans; Trust
2.
JAMIA Open; 4(3): ooab077, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34568771

ABSTRACT

OBJECTIVE: We help identify subpopulations underrepresented in randomized clinical trial (RCT) cohorts with respect to national, community-based, or health-system target populations by formulating population representativeness of RCTs as a machine learning (ML) fairness problem, deriving new representation metrics, and deploying them in easy-to-understand interactive visualization tools. MATERIALS AND METHODS: We represent RCT cohort enrollment as a random binary classification fairness problem, and then show how ML fairness metrics based on enrollment fraction can be efficiently calculated from easily computed rates of subpopulations in RCT cohorts and target populations. We propose standardized versions of these metrics and deploy them in an interactive tool to analyze 3 RCTs with respect to type 2 diabetes and hypertension target populations in the National Health and Nutrition Examination Survey. RESULTS: We demonstrate how the proposed metrics and associated statistics enable users to rapidly examine the representativeness of all subpopulations in an RCT defined by a set of categorical traits (eg, gender, race, ethnicity, smoking status, and blood pressure) with respect to target populations. DISCUSSION: The normalized metrics provide an intuitive standardized scale for evaluating representation across subgroups, which may have vastly different enrollment fractions and rates in RCT study cohorts. The metrics are beneficial complements to other approaches (eg, enrollment fractions) used to assess the generalizability and health equity of RCTs. CONCLUSION: By quantifying the gaps between RCT and target populations, the proposed methods can support generalizability evaluation of existing RCT cohorts. The interactive visualization tool can be readily applied to identify underrepresented subgroups with respect to any desired source or target populations.
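The abstract states that representation metrics can be computed from easily obtained subpopulation rates in the RCT cohort and the target population. As a minimal sketch of that idea only, the ratio below compares a subgroup's share of the cohort to its share of the target population; this is an illustrative metric, not necessarily one of the exact standardized metrics proposed in the paper, and the numbers are hypothetical.

```python
def representation_ratio(cohort_rate, target_rate):
    """Ratio of a subgroup's share of the RCT cohort to its share of the
    target population. 1.0 means proportional representation; values below
    1.0 mean the subgroup is underrepresented in the trial."""
    if target_rate == 0:
        raise ValueError("subgroup absent from target population")
    return cohort_rate / target_rate

# Hypothetical numbers: a subgroup makes up 12% of the RCT cohort
# but 20% of the NHANES-derived target population.
r = representation_ratio(0.12, 0.20)
print(round(r, 2))  # prints 0.6 -> underrepresented
```

An interactive tool like the one described would compute this kind of quantity for every subgroup defined by the chosen categorical traits and display the results on a common scale.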

3.
AMIA Annu Symp Proc; 2020: 462-471, 2020.
Article in English | MEDLINE | ID: mdl-33936419

ABSTRACT

When healthcare providers review the results of a clinical trial study to understand its applicability to their practice, they typically analyze how well the characteristics of the study cohort correspond to those of the patients they see. We have previously created a study cohort ontology to standardize this information and make it accessible for knowledge-based decision support. The extraction of this information from research publications is challenging, however, given the wide variance in reporting cohort characteristics in a tabular representation. To address this issue, we have developed an ontology-enabled knowledge extraction pipeline for automatically constructing knowledge graphs from the cohort characteristics found in PDF-formatted research papers. We evaluated our approach using a training and test set of 41 research publications and found an overall accuracy of 83.3% in correctly assembling the knowledge graphs. Our research provides a promising approach for extracting knowledge more broadly from tabular information in research publications.
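The pipeline described above turns tabular cohort characteristics into knowledge graphs. As a toy sketch of that output format only, the function below maps one parsed table row to subject-predicate-object triples; the predicate names and study identifier are invented for illustration and are not the actual terms of the study cohort ontology.

```python
def row_to_triples(study, arm, characteristic, value):
    """Turn one row of a cohort-characteristics table into triples
    for a knowledge graph (predicate names are hypothetical)."""
    node = f"{study}/{arm}/{characteristic}"
    return [
        (node, "partOfStudy", study),
        (node, "describesArm", arm),
        (node, "hasCharacteristic", characteristic),
        (node, "hasValue", value),
    ]

# Hypothetical row: the "ArmA" arm of "ExampleStudy" reports a mean age of 62.2.
triples = row_to_triples("ExampleStudy", "ArmA", "Age (mean)", "62.2")
for t in triples:
    print(t)
```

In the actual pipeline, the row fields would come from table extraction over PDF-formatted publications and the predicates from the study cohort ontology, which is where the reported 83.3% assembly accuracy is measured.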


Subjects
Artificial Intelligence; Knowledge Bases; Publications; Cohort Studies; Databases, Factual; Decision Support Systems, Management; Health Personnel; Humans; Research Design