Results 1 - 4 of 4
1.
Artif Intell Med. 2023 Sep;143:102616.
Article in English | MEDLINE | ID: mdl-37673561

ABSTRACT

BACKGROUND: Medical use cases for machine learning (ML) are growing exponentially, and the first hospitals already use ML systems as decision support in their daily routine. At the same time, most ML systems remain opaque: it is not clear how they arrive at their predictions.
METHODS: In this paper, we provide a brief overview of the taxonomy of explainability methods and review popular methods. In addition, we conduct a systematic literature search on PubMed to investigate which explainable artificial intelligence (XAI) methods are used in 450 specific medical supervised ML use cases, how the use of XAI methods has developed recently, and how the precision of describing ML pipelines has evolved over the past 20 years.
RESULTS: A large fraction of publications with ML use cases do not use XAI methods at all to explain ML predictions. When XAI methods are used, open-source and model-agnostic explanation methods dominate, led by SHapley Additive exPlanations (SHAP) for tabular data and Gradient-weighted Class Activation Mapping (Grad-CAM) for image data. ML pipelines have been described in increasing detail and uniformity in recent years, but the willingness to share data and code has stagnated at about one-quarter.
CONCLUSIONS: XAI methods are mainly used when their application requires little effort. The homogenization of reporting in ML use cases facilitates the comparability of work and should be advanced in the coming years. Owing to the high complexity of the domain, experts who can mediate between the worlds of informatics and medicine will be increasingly in demand when ML systems are used.
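SHAP, the most commonly used tabular-data method identified above, attributes a prediction to individual features via Shapley values. For a toy model with only a few features, these can be computed exactly by enumerating feature coalitions. The sketch below is illustrative only (it does not use the `shap` library; the linear "risk model" and baseline values are invented for the example):

```python
from itertools import combinations
import math

def shapley_values(predict, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    predict:  function mapping a feature vector (list) to a scalar score.
    x:        the instance being explained.
    baseline: reference values substituted for "absent" features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S| = k.
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy linear model: for linear models, the Shapley value of feature j
# reduces to w_j * (x_j - baseline_j).
w = [0.5, -1.0, 2.0]
predict = lambda v: sum(wj * vj for wj, vj in zip(w, v))
phi = shapley_values(predict, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

By the efficiency property, the attributions sum to the difference between the prediction for `x` and the prediction for the baseline; exact enumeration is exponential in the number of features, which is why the `shap` library uses sampling and model-specific approximations.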


Subjects
Artificial Intelligence; Machine Learning; Hospitals; Supervised Machine Learning; Delivery of Health Care
2.
Artif Intell Med. 2022 Oct;132:102372.
Article in English | MEDLINE | ID: mdl-36207074

ABSTRACT

Understanding model predictions is critical in healthcare, to facilitate rapid verification of model correctness and to guard against use of models that exploit confounding variables. We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images, in which a model must indicate the regions used to predict each abnormality. To solve this task, we propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality. Next we incorporate HiResCAM, an attention mechanism, to identify sub-slice regions. We prove that for AxialNet, HiResCAM explanations are guaranteed to reflect the locations the model used, unlike Grad-CAM which sometimes highlights irrelevant locations. Armed with a model that produces faithful explanations, we then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions to encourage the model to predict abnormalities based only on the organs in which those abnormalities appear. The 3D allowed regions are obtained automatically through a new approach, PARTITION, that combines location information extracted from radiology reports with organ segmentation maps obtained through morphological image processing. Overall, we propose the first model for explainable multi-abnormality prediction in volumetric medical images, and then use the mask loss to achieve a 33% improvement in organ localization of multiple abnormalities in the RAD-ChestCT dataset of 36,316 scans, representing the state of the art. This work advances the clinical applicability of multiple abnormality modeling in chest CT volumes.
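The difference between Grad-CAM and HiResCAM described above comes down to how channel gradients are aggregated into a spatial map. A minimal NumPy sketch of just that aggregation step, with random arrays standing in for a CNN's feature maps and their gradients (no actual network or backpropagation, and the optional ReLU on the final map is omitted):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: average each channel's gradient over space into one weight,
    then form a weighted sum of the feature maps. Spatial gradient detail is lost."""
    weights = gradients.mean(axis=(1, 2))                 # one scalar per channel
    return np.einsum("c,chw->hw", weights, activations)

def hirescam(activations, gradients):
    """HiResCAM: elementwise gradient-activation product, summed over channels.
    This preserves where (not just how much) each channel contributed."""
    return (gradients * activations).sum(axis=0)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 14, 14))   # feature maps, shape (C, H, W)
G = rng.standard_normal((64, 14, 14))   # d(class score) / d(feature maps)
cam_g, cam_h = grad_cam(A, G), hirescam(A, G)
```

When the gradients are spatially constant per channel the two maps coincide; with spatially varying gradients, Grad-CAM's averaging can highlight locations the model did not actually use, which is the failure mode the abstract's faithfulness guarantee addresses.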


Subjects
Abnormalities, Multiple; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods
3.
J Am Med Dir Assoc. 2021 Feb;22(2):291-296.
Article in English | MEDLINE | ID: mdl-33132014

ABSTRACT

OBJECTIVES: To evaluate a machine learning model designed to predict mortality for Medicare beneficiaries aged >65 years treated for hip fracture in Inpatient Rehabilitation Facilities (IRFs).
DESIGN: Retrospective cohort analysis of Centers for Medicare & Medicaid Services Inpatient Rehabilitation Facility-Patient Assessment Instrument data.
SETTING AND PARTICIPANTS: A total of 17,140 persons admitted to Medicare-certified IRFs in 2015 following hospitalization for hip fracture.
MEASURES: Patient characteristics included sociodemographic factors (age, gender, race, and social support), clinical factors (functional status at admission, chronic conditions), and IRF length of stay. Outcomes were 30-day and 1-year all-cause mortality. We trained two classification models, logistic regression and a multilayer perceptron (MLP), to predict the probability of 30-day and 1-year mortality, and evaluated the calibration, discrimination, and precision of each model.
RESULTS: For 30-day mortality, the MLP performed well (accuracy = 0.74, area under the receiver operating characteristic curve (AUROC) = 0.76, average precision = 0.10, calibration slope = 1.14), as did logistic regression (accuracy = 0.78, AUROC = 0.76, average precision = 0.09, slope = 1.20). For 1-year mortality, performance was similar for the MLP (accuracy = 0.68, AUROC = 0.75, average precision = 0.32, slope = 0.96) and logistic regression (accuracy = 0.68, AUROC = 0.75, average precision = 0.32, slope = 0.95).
CONCLUSION AND IMPLICATIONS: A scoring system based on logistic regression may be more feasible to run in current electronic medical records. MLP models, however, may reduce cognitive burden and be easier to calibrate to local data, yielding clinically specific mortality predictions so that palliative care resources can be allocated more effectively.
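The evaluation pipeline above (discrimination via AUROC, precision via average precision, calibration via slope) can be sketched on synthetic data. This is a hypothetical stand-in, not the CMS dataset or the paper's models; the class imbalance is invented to mimic a rare mortality outcome, and the calibration slope is computed by refitting the outcome on the logit of the predicted risk, where a slope near 1 indicates a well-calibrated spread:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced cohort (~10% positive, like a rare mortality outcome).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = np.clip(model.predict_proba(X_te)[:, 1], 1e-6, 1 - 1e-6)

auroc = roc_auc_score(y_te, p)                 # discrimination
avg_prec = average_precision_score(y_te, p)    # precision under imbalance

# Calibration slope: regress the observed outcome on logit(predicted risk).
logit_p = np.log(p / (1 - p)).reshape(-1, 1)
slope = LogisticRegression().fit(logit_p, y_te).coef_[0, 0]
```

Under heavy class imbalance, average precision is far more informative than accuracy, which is why the abstract's average-precision values (0.10 for the rare 30-day outcome) look low despite good AUROC.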


Subjects
Palliative Care; Rehabilitation Centers; Aged; Algorithms; Humans; Machine Learning; Medicare; Retrospective Studies; United States/epidemiology
4.
Med Image Anal. 2021 Jan;67:101857.
Article in English | MEDLINE | ID: mdl-33129142

ABSTRACT

Machine learning models for radiology benefit from large-scale data sets with high quality labels for abnormalities. We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients. This is the largest multiply-annotated volumetric medical imaging data set reported. To annotate this data set, we developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports with an average F-score of 0.976 (min 0.941, max 1.0). We also developed a model for multi-organ, multi-disease classification of chest CT volumes that uses a deep convolutional neural network (CNN). This model reached a classification performance of AUROC >0.90 for 18 abnormalities, with an average AUROC of 0.773 for all 83 abnormalities, demonstrating the feasibility of learning from unfiltered whole volume CT data. We show that training on more labels improves performance significantly: for a subset of 9 labels - nodule, opacity, atelectasis, pleural effusion, consolidation, mass, pericardial effusion, cardiomegaly, and pneumothorax - the model's average AUROC increased by 10% when the number of training labels was increased from 9 to all 83. All code for volume preprocessing, automated label extraction, and the volume abnormality prediction model is publicly available. The 36,316 CT volumes and labels will also be made publicly available pending institutional approval.
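The rule-based extraction of abnormality labels from free-text reports can be illustrated with a much-simplified sketch. The vocabulary, negation cues, and sample report below are invented for the example and are not the paper's actual pipeline, which handles far more terms and linguistic patterns to reach its reported F-scores:

```python
import re

# Toy term vocabulary: label -> phrases that signal it in a report sentence.
VOCAB = {
    "nodule": ["nodule", "nodular"],
    "pleural_effusion": ["pleural effusion"],
    "cardiomegaly": ["cardiomegaly", "enlarged heart"],
}
# Toy negation cues; a real system also handles scope and uncertainty.
NEGATIONS = ("no ", "without ", "absence of ", "negative for ")

def extract_labels(report):
    """Keyword match per sentence, skipping sentences that contain a negation cue."""
    labels = {label: 0 for label in VOCAB}
    for sentence in re.split(r"[.;]", report.lower()):
        negated = any(cue in sentence for cue in NEGATIONS)
        for label, terms in VOCAB.items():
            if not negated and any(term in sentence for term in terms):
                labels[label] = 1
    return labels

report = ("There is a 4 mm nodule in the right lower lobe. "
          "No pleural effusion. Cardiomegaly is present.")
labels = extract_labels(report)
```

Sentence-level negation handling is the key design choice: matching keywords over the whole report would wrongly label "No pleural effusion" as positive, which is exactly the kind of error that drags down the F-score of naive extractors.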


Subjects
Lung Diseases; Machine Learning; Humans; Neural Networks, Computer; Radiography; Tomography, X-Ray Computed