Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-38942737

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) models trained using medical images for clinical tasks often exhibit bias in the form of subgroup performance disparities. However, since not all sources of bias in real-world medical imaging data are easily identifiable, it is challenging to comprehensively assess their impacts. In this article, we introduce an analysis framework for systematically and objectively investigating the impact of biases in medical images on AI models. MATERIALS AND METHODS: Our framework utilizes synthetic neuroimages with known disease effects and sources of bias. We evaluated the impact of bias effects and the efficacy of 3 bias mitigation strategies in counterfactual data scenarios on a convolutional neural network (CNN) classifier. RESULTS: The analysis revealed that training a CNN model on the datasets containing bias effects resulted in expected subgroup performance disparities. Moreover, reweighing was the most successful bias mitigation strategy for this setup. Finally, we demonstrated that explainable AI methods can aid in investigating the manifestation of bias in the model using this framework. DISCUSSION: The value of this framework is showcased in our findings on the impact of bias scenarios and efficacy of bias mitigation in a deep learning model pipeline. This systematic analysis can be easily expanded to conduct further controlled in silico trials in other investigations of bias in medical imaging AI. CONCLUSION: Our novel methodology for objectively studying bias in medical imaging AI can help support the development of clinical decision-support tools that are robust and responsible.
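The reweighing strategy that this abstract reports as most successful can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a standard PyTorch training loop and uses the classic formulation in which each (subgroup, label) combination is weighted by P(subgroup) * P(label) / P(subgroup, label), so no combination dominates the loss.

```python
# Hypothetical sketch of reweighing as a bias-mitigation step (not the paper's code):
# weight each sample so that subgroup and label behave as if they were independent.
import numpy as np
import torch

def reweighing_weights(subgroups: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return one weight per sample: w = P(s) * P(y) / P(s, y)."""
    n = len(labels)
    weights = np.ones(n, dtype=np.float64)
    for s in np.unique(subgroups):
        for y in np.unique(labels):
            mask = (subgroups == s) & (labels == y)
            if mask.any():
                p_joint = mask.sum() / n
                p_expected = (subgroups == s).mean() * (labels == y).mean()
                weights[mask] = p_expected / p_joint
    return weights

# The weights then scale a per-sample cross-entropy loss during CNN training;
# `logits`, `targets`, and `sample_weights` come from the usual training loop.
def weighted_loss(logits: torch.Tensor, targets: torch.Tensor,
                  sample_weights: torch.Tensor) -> torch.Tensor:
    per_sample = torch.nn.functional.cross_entropy(logits, targets, reduction="none")
    return (sample_weights * per_sample).mean()
```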

2.
Front Artif Intell ; 7: 1301997, 2024.
Article in English | MEDLINE | ID: mdl-38384277

ABSTRACT

Distributed learning is a promising alternative to central learning for machine learning (ML) model training, overcoming data-sharing problems in healthcare. Previous studies exploring federated learning (FL) or the traveling model (TM) setup for medical image-based disease classification often relied on large databases with a limited number of centers, or on simulated artificial centers, raising doubts about real-world applicability. This study develops and evaluates a convolutional neural network (CNN) for Parkinson's disease classification using data acquired by 83 diverse real centers around the world, most of which contributed small training samples. Our approach specifically makes use of the TM setup, which has proven effective in scenarios with limited data availability but has never been used for image-based disease classification. Our findings reveal that TM is effective for training CNN models, even in complex real-world scenarios with variable data distributions. After sufficient training cycles, the TM-trained CNN matches or slightly surpasses the performance of its centrally trained counterpart (AUROC of 83% vs. 80%). Our study highlights, for the first time, the effectiveness of TM in 3D medical image classification, especially in scenarios with limited training samples and heterogeneous distributed data. These insights are relevant for situations where ML models are to be trained on data from small or remote medical centers, or for rare diseases with sparse cases. The simplicity of this approach enables broad application to many deep learning tasks, enhancing its clinical utility across various contexts and medical facilities.
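A minimal sketch of the traveling model (TM) idea described above, assuming each center exposes its local data as a PyTorch DataLoader; the cycle count, local epoch count, and optimizer choice are illustrative placeholders rather than the study's actual configuration.

```python
# Sketch of a traveling-model loop: one shared model is passed sequentially through
# all centers and trained locally at each stop, for several complete cycles.
import torch

def train_traveling_model(model: torch.nn.Module,
                          centers: list,            # list of torch.utils.data.DataLoader
                          cycles: int = 10,
                          local_epochs: int = 1,
                          lr: float = 1e-4,
                          device: str = "cpu") -> torch.nn.Module:
    model.to(device)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(cycles):                  # the model repeatedly visits every center
        for loader in centers:               # sequential "travel" from center to center
            optimizer = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(local_epochs):
                for images, labels in loader:
                    images, labels = images.to(device), labels.to(device)
                    optimizer.zero_grad()
                    loss = criterion(model(images), labels)
                    loss.backward()
                    optimizer.step()
    return model
```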

3.
IEEE J Biomed Health Inform ; 28(4): 2047-2054, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38198251

ABSTRACT

Sharing multicenter imaging datasets can be advantageous to increase data diversity and size but may lead to spurious correlations between site-related biological and non-biological image features and target labels, which machine learning (ML) models may exploit as shortcuts. To date, studies analyzing whether and how deep learning models use such effects as shortcuts are scarce. Thus, the aim of this work was to investigate whether site-related effects are encoded in the feature space of an established deep learning model designed for Parkinson's disease (PD) classification based on T1-weighted MRI datasets. To this end, all layers of the PD classifier were frozen, except for the last layer of the network, which was replaced by a linear layer that was exclusively re-trained to predict three potential bias types (biological sex, scanner type, and originating site). Our findings, based on a large database consisting of 1880 MRI scans collected across 41 centers, show that the feature space of the established PD model (74% accuracy) can be used to classify sex (75% accuracy), scanner type (79% accuracy), and originating site (71% accuracy) with high accuracy, despite this information never being explicitly provided to the PD model during original training. Overall, the results of this study suggest that trained image-based classifiers may use unwanted shortcuts that are not meaningful for the actual clinical task at hand. This finding may explain why many image-based deep learning models do not perform well when applied to data from centers that did not contribute to the training set.


Subject(s)
Parkinson Disease, Humans, Parkinson Disease/diagnostic imaging, Magnetic Resonance Imaging/methods, Machine Learning, Support Vector Machine
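The probing setup described in this record, freezing the trained PD classifier and re-training only a fresh linear head to predict a potential bias attribute (sex, scanner type, or originating site), can be sketched as follows. The attribute name `model.fc` for the final layer is an assumption for illustration; the record does not specify the network architecture.

```python
# Sketch of a linear probe on a frozen classifier: only the new head is trainable,
# so the probe tests what the fixed feature space encodes about a bias attribute.
import torch

def make_bias_probe(model: torch.nn.Module, n_bias_classes: int) -> torch.nn.Module:
    for param in model.parameters():          # freeze every layer of the trained classifier
        param.requires_grad = False
    in_features = model.fc.in_features        # assumes the final layer is exposed as `model.fc`
    model.fc = torch.nn.Linear(in_features, n_bias_classes)  # new trainable head
    return model

# Only the new head's parameters are handed to the optimizer, so the feature space
# learned for the clinical task stays fixed while the probe is re-trained.
def probe_optimizer(model: torch.nn.Module, lr: float = 1e-3) -> torch.optim.Optimizer:
    return torch.optim.Adam(model.fc.parameters(), lr=lr)
```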
4.
J Med Imaging (Bellingham) ; 9(6): 061102, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36046104

ABSTRACT

Purpose: Explainability and fairness are two key factors for the effective and ethical clinical implementation of deep learning-based machine learning models in healthcare settings. However, there has been limited work on investigating how unfair performance manifests in explainable artificial intelligence (XAI) methods, and how XAI can be used to investigate potential reasons for unfairness. Thus, the aim of this work was to analyze the effects of previously established sociodemographic-related confounders on classifier performance and explainability methods. Approach: A convolutional neural network (CNN) was trained to predict biological sex from T1-weighted brain MRI datasets of 4547 9- to 10-year-old adolescents from the Adolescent Brain Cognitive Development study. Performance disparities of the trained CNN between White and Black subjects were analyzed and saliency maps were generated for each subgroup at the intersection of sex and race. Results: The classification model demonstrated a significant difference in the percentage of correctly classified White male (90.3% ± 1.7%) and Black male (81.1% ± 4.5%) children. Conversely, slightly higher performance was found for Black female (89.3% ± 4.8%) compared with White female (86.5% ± 2.0%) children. Saliency maps showed subgroup-specific differences, corresponding to brain regions previously associated with pubertal development. In line with this finding, average pubertal development scores of subjects used in this study were significantly different between Black and White females (p < 0.001) and males (p < 0.001). Conclusions: We demonstrate that a CNN with significantly different sex classification performance between Black and White adolescents can identify different important brain regions when comparing subgroup saliency maps. Importance scores vary substantially between subgroups within brain structures associated with pubertal development, a race-associated confounder for predicting sex. We illustrate that unfair models can produce different XAI results between subgroups and that these results may explain potential reasons for biased performance.
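The subgroup saliency comparison described above can be approximated with a short sketch. Plain input gradients are used here only as a stand-in for whichever XAI method the authors applied, and the dataset and subgroup interfaces are assumptions for illustration.

```python
# Sketch of subgroup-averaged saliency: compute a gradient-based saliency map per scan,
# then average the maps within each subgroup (e.g. the intersection of sex and race).
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor, target: int) -> torch.Tensor:
    model.eval()
    image = image.detach().clone().unsqueeze(0).requires_grad_(True)  # batch of one, track grads
    score = model(image)[0, target]
    score.backward()
    return image.grad.abs().squeeze(0)                                # voxel-wise importance

def subgroup_average_saliency(model, dataset, subgroup_ids):
    """Average saliency maps per subgroup; `dataset` yields (image, label) pairs."""
    sums, counts = {}, {}
    for (image, label), group in zip(dataset, subgroup_ids):
        s = saliency_map(model, image, label)
        sums[group] = sums.get(group, torch.zeros_like(s)) + s
        counts[group] = counts.get(group, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}
```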
