1.
Curr Probl Diagn Radiol; 53(3): 346-352, 2024.
Article in English | MEDLINE | ID: mdl-38302303

ABSTRACT

Breast cancer is the most common type of cancer in women, and early abnormality detection using mammography can significantly improve breast cancer survival rates. Diverse datasets are required to improve the training and validation of deep learning (DL) systems for autonomous breast cancer diagnosis. However, only a small number of mammography datasets are publicly available. This constraint has created challenges when comparing different DL models using the same dataset. The primary contribution of this study is the comprehensive description of a selection of currently available public mammography datasets. The information available on publicly accessible datasets is summarized and their usability reviewed to enable more effective models to be developed for breast cancer detection and to improve understanding of existing models trained using these datasets. This study aims to bridge the existing knowledge gap by offering researchers and practitioners a valuable resource to develop and assess DL models in breast cancer diagnosis.


Subject(s)
Breast Neoplasms, Deep Learning, Female, Humans, Mammography, Breast Neoplasms/diagnostic imaging, Early Detection of Cancer
2.
Ann Am Thorac Soc; 21(2): 287-295, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38029405

ABSTRACT

Rationale: Outcomes for people with respiratory failure in the United States vary by patient race and ethnicity. Invasive ventilation is an important treatment initiated based on expert opinion. It is unknown whether the use of invasive ventilation varies by patient race and ethnicity. Objectives: To measure 1) the association between patient race and ethnicity and the use of invasive ventilation; and 2) the change in 28-day mortality mediated by any association. Methods: We performed a multicenter cohort study of nonintubated adults receiving oxygen within 24 hours of intensive care admission using the Medical Information Mart for Intensive Care IV (MIMIC-IV, 2008-2019) and Philips eICU (eICU, 2014-2015) databases from the United States. We modeled the association between patient race and ethnicity (Asian, Black, Hispanic, White) and invasive ventilation rate using a Bayesian multistate model that adjusted for baseline and time-varying covariates, calculated hazard ratios (HRs), and estimated 28-day hospital mortality changes mediated by differential invasive ventilation use. We reported posterior means and 95% credible intervals (CrIs). Results: We studied 38,258 patients, 52% (20,032) from MIMIC-IV and 48% (18,226) from eICU: 2% Asian (892), 11% Black (4,289), 5% Hispanic (1,964), and 81% White (31,113). Invasive ventilation occurred in 9.2% (3,511), and 7.5% (2,869) died. The adjusted rate of invasive ventilation was lower in Asian (HR, 0.82; CrI, 0.70-0.95), Black (HR, 0.78; CrI, 0.71-0.86), and Hispanic (HR, 0.70; CrI, 0.61-0.79) patients compared with White patients. For the average patient, lower rates of invasive ventilation did not mediate differences in 28-day mortality. For a patient on high-flow nasal cannula with inspired oxygen fraction of 1.0, the odds ratios for mortality if invasive ventilation rates were equal to the rate for White patients were 0.97 (CrI, 0.91-1.03) for Asian patients, 0.96 (CrI, 0.91-1.03) for Black patients, and 0.94 (CrI, 0.89-1.01) for Hispanic patients. Conclusions: Asian, Black, and Hispanic patients had lower rates of invasive ventilation than White patients. These decreases did not mediate harm for the average patient, but we could not rule out harm for patients with more severe hypoxemia.
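The study itself fits a Bayesian multistate model; as a rough, non-authoritative illustration of estimating adjusted group-specific rates of invasive ventilation, the sketch below uses a frequentist Cox proportional-hazards model (lifelines) as a stand-in. The file name and column names (time_to_vent_or_censor, ventilated, age, sofa, race) are hypothetical placeholders, not the study's actual schema.

```python
# Minimal sketch, not the paper's method: a Cox proportional-hazards model as a
# rough frequentist stand-in for the Bayesian multistate analysis of time to
# invasive ventilation. All column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("icu_cohort.csv")  # hypothetical extract from MIMIC-IV / eICU

# One-hot encode race; omitting race_White from the covariates makes White the reference.
df = pd.get_dummies(df, columns=["race"], dtype=float)
covariates = ["age", "sofa", "race_Asian", "race_Black", "race_Hispanic"]

cph = CoxPHFitter()
cph.fit(
    df[covariates + ["time_to_vent_or_censor", "ventilated"]],
    duration_col="time_to_vent_or_censor",
    event_col="ventilated",
)
# exp(coef) for each race indicator approximates the hazard ratio vs. White patients.
cph.print_summary()
```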


Subject(s)
Ethnicity, Noninvasive Ventilation, Adult, Humans, United States/epidemiology, Cohort Studies, Bayes Theorem, Oxygen, White
3.
Radiol Artif Intell; 5(5): e220270, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37795140

ABSTRACT

Purpose: To externally test four chest radiograph classifiers on a large, diverse, real-world dataset with robust subgroup analysis. Materials and Methods: In this retrospective study, adult posteroanterior chest radiographs (January 2016-December 2020) and associated radiology reports from Trillium Health Partners in Ontario, Canada, were extracted and de-identified. An open-source natural language processing tool was locally validated and used to generate ground truth labels for the 197,540-image dataset based on the associated radiology report. Four classifiers generated predictions on each chest radiograph. Performance was evaluated using accuracy, positive predictive value, negative predictive value, sensitivity, specificity, F1 score, and Matthews correlation coefficient for the overall dataset and for patient, setting, and pathology subgroups. Results: Classifiers demonstrated 68%-77% accuracy, 64%-75% sensitivity, and 82%-94% specificity on the external testing dataset. Algorithms showed decreased sensitivity for solitary findings (43%-65%), patients younger than 40 years (27%-39%), and patients in the emergency department (38%-60%) and decreased specificity on normal chest radiographs with support devices (59%-85%). Differences in sex and ancestry represented movements along an algorithm's receiver operating characteristic curve. Conclusion: Performance of deep learning chest radiograph classifiers was subject to patient, setting, and pathology factors, demonstrating that subgroup analysis is necessary to inform implementation and monitor ongoing performance to ensure optimal quality, safety, and equity. Keywords: Conventional Radiography, Thorax, Ethics, Supervised Learning, Convolutional Neural Network (CNN), Machine Learning Algorithms. Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Huisman and Hannink in this issue.
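A minimal sketch of the kind of per-subgroup evaluation this abstract describes, using scikit-learn. The input file and column names (label, pred, age_group) are hypothetical placeholders, not the study's actual schema.

```python
# Per-subgroup classification metrics: accuracy, PPV, NPV, sensitivity,
# specificity, F1, and Matthews correlation coefficient, computed from a table
# of binary labels and predictions. Column names are hypothetical.
import pandas as pd
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score, f1_score,
    matthews_corrcoef, confusion_matrix,
)

def subgroup_report(df, group_col):
    rows = []
    for group, g in df.groupby(group_col):
        tn, fp, fn, tp = confusion_matrix(g["label"], g["pred"], labels=[0, 1]).ravel()
        rows.append({
            group_col: group,
            "accuracy": accuracy_score(g["label"], g["pred"]),
            "ppv": precision_score(g["label"], g["pred"], zero_division=0),
            "npv": tn / (tn + fn) if (tn + fn) else float("nan"),
            "sensitivity": recall_score(g["label"], g["pred"], zero_division=0),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "f1": f1_score(g["label"], g["pred"], zero_division=0),
            "mcc": matthews_corrcoef(g["label"], g["pred"]),
        })
    return pd.DataFrame(rows)

preds = pd.read_csv("cxr_predictions.csv")   # hypothetical per-image predictions
print(subgroup_report(preds, "age_group"))   # repeat for setting, sex, pathology, ...
```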

4.
J Am Coll Radiol; 20(9): 842-851, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37506964

ABSTRACT

Despite the expert-level performance of artificial intelligence (AI) models for various medical imaging tasks, real-world performance failures with disparate outputs for various subgroups limit the usefulness of AI in improving patients' lives. Many definitions of fairness have been proposed, with discussions of various tensions that arise in the choice of an appropriate metric to use to evaluate bias; for example, should one aim for individual or group fairness? One central observation is that AI models apply "shortcut learning" whereby spurious features (such as chest tubes and portable radiographic markers on intensive care unit chest radiography) on medical images are used for prediction instead of identifying true pathology. Moreover, AI has been shown to have a remarkable ability to detect protected attributes of age, sex, and race, while the same models demonstrate bias against historically underserved subgroups of age, sex, and race in disease diagnosis. Therefore, an AI model may take shortcut predictions from these correlations and subsequently generate an outcome that is biased toward certain subgroups even when protected attributes are not explicitly used as inputs into the model. As a result, these subgroups become nonprivileged subgroups. In this review, the authors discuss the various types of bias from shortcut learning that may occur at different phases of AI model development, including data bias, modeling bias, and inference bias. The authors thereafter summarize various toolkits that can be used to evaluate and mitigate bias and note that these have largely been applied to nonmedical domains and require more evaluation for medical AI. The authors then summarize current techniques for mitigating bias at preprocessing (data-centric solutions), during model development (computational solutions), and at postprocessing (recalibration of learning). Ongoing legal changes, under which the use of a biased model may be penalized, highlight the necessity of understanding, detecting, and mitigating biases from shortcut learning and will require diverse research teams looking at the whole AI pipeline.
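As an illustration of one data-centric (preprocessing) mitigation of the kind this review surveys, the sketch below implements a simple reweighing scheme in the spirit of Kamiran and Calders: each training sample is weighted so that protected group and label look statistically independent in the weighted set. The file and column names are hypothetical, and this is only one of many possible mitigations, not the review's prescribed method.

```python
# Reweighing-style preprocessing sketch: w(g, y) = P(g) * P(y) / P(g, y),
# which up-weights under-represented (group, label) cells. Column names are hypothetical.
import pandas as pd

def reweighing_weights(df, group_col="race", label_col="label"):
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return p_group[g] * p_label[y] / p_joint[(g, y)]

    return df.apply(weight, axis=1)

train = pd.read_csv("train_labels.csv")            # hypothetical training metadata
train["sample_weight"] = reweighing_weights(train)
# These weights can then be passed to a per-sample weighted loss during training.
```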


Subject(s)
Artificial Intelligence, Radiology, Humans, Radiography, Causality, Bias
5.
Front Radiol; 3: 1181190, 2023.
Article in English | MEDLINE | ID: mdl-37588666

ABSTRACT

Introduction: To date, most mammography-related AI models have been trained using either film or digital mammogram datasets with little overlap. We investigated whether combining film and digital mammography during training would help or hinder modern models designed for use on digital mammograms. Methods: To this end, a total of six binary classifiers were trained for comparison. The first three classifiers were trained using images only from the Emory Breast Imaging Dataset (EMBED) using ResNet50, ResNet101, and ResNet152 architectures. The next three classifiers were trained using images from the EMBED, Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM), and Digital Database for Screening Mammography (DDSM) datasets. All six models were tested only on digital mammograms from EMBED. Results: The results showed that the performance degradation of the customized ResNet models was statistically significant overall when the EMBED dataset was augmented with CBIS-DDSM/DDSM. While the performance degradation was observed in all racial subgroups, some subgroups were subject to a more severe performance drop than others. Discussion: The degradation may potentially be due to (1) a mismatch in features between film-based and digital mammograms or (2) a mismatch in pathologic and radiological information. In conclusion, the use of both film and digital mammography during training may hinder modern models designed for breast cancer screening. Caution is required when combining film-based and digital mammograms or when utilizing pathologic and radiological information simultaneously.
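A minimal sketch of the kind of binary classifier described above: a torchvision ResNet50 (recent torchvision) with its final layer replaced by a single-logit output for a cancer vs. no-cancer decision. The dataset wiring, preprocessing, and training schedule used in the study are not reproduced here; this is only an architectural outline under stated assumptions.

```python
# ResNet50 binary classifier sketch (PyTorch). Assumes mammogram crops have
# been converted to 3-channel tensors; all hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 1)   # single logit for binary classification

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # images: (B, 3, H, W) mammogram crops; labels: (B,) tensor of 0/1
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```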

6.
Br J Radiol; 96(1150): 20230023, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37698583

ABSTRACT

Various forms of artificial intelligence (AI) applications are being deployed and used in many healthcare systems. As the use of these applications increases, we are learning about the failures of these models and how they can perpetuate bias. With these new lessons, we need to prioritize bias evaluation and mitigation for radiology applications, while not ignoring changes in the larger enterprise AI deployment that may have a downstream impact on the performance of AI models. In this paper, we provide an updated review of known pitfalls causing AI bias and discuss strategies for mitigating these biases within the context of AI deployment in the larger healthcare enterprise. We describe these pitfalls by framing them in the larger AI lifecycle, from problem definition through dataset selection and curation to model training and deployment, emphasizing that bias exists across a spectrum and is a sequela of a combination of human and machine factors.


Subject(s)
Artificial Intelligence, Radiology, Humans, Bias, Disease Progression, Learning
7.
Lancet Digit Health; 4(6): e406-e414, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35568690

ABSTRACT

BACKGROUND: Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images. METHODS: Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, performance quantification of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding of anatomic and phenotypic population features by assessing the ability of these hypothesised confounders to detect race in isolation using regression models, and by re-evaluating the deep learning models by testing them on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race. FINDINGS: In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). Finally, we provide evidence to show that the ability of AI deep learning models persisted over all anatomical regions and frequency spectrums of the images, suggesting the efforts to control this behaviour when it is undesirable will be challenging and demand further study. INTERPRETATION: The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging. FUNDING: National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.
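A rough, non-authoritative sketch of the corruption experiment this abstract describes: degrade images with increasing Gaussian noise and check whether race-prediction AUC persists. The `model_predict_proba` callable and the data arrays are hypothetical stand-ins for the study's trained models and test sets, and the noise levels are arbitrary.

```python
# Re-evaluating a (hypothetical) race classifier under increasing Gaussian
# image noise; persistently high AUC would mirror the paper's finding that the
# signal survives heavy corruption.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_under_noise(images, race_labels, model_predict_proba, sigmas=(0, 5, 10, 20, 40)):
    rng = np.random.default_rng(0)
    results = {}
    for sigma in sigmas:
        noisy = np.clip(images + rng.normal(0, sigma, images.shape), 0, 255)
        probs = model_predict_proba(noisy)          # shape (N, n_classes)
        results[sigma] = roc_auc_score(race_labels, probs, multi_class="ovr")
    return results
```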


Asunto(s)
Aprendizaje Profundo , Neoplasias Pulmonares , Inteligencia Artificial , Detección Precoz del Cáncer , Humanos , Estudios Retrospectivos
8.
Nat Med; 27(12): 2176-2182, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34893776

ABSTRACT

Artificial intelligence (AI) systems have increasingly achieved expert-level performance in medical imaging applications. However, there is growing concern that such AI systems may reflect and amplify human bias, and reduce the quality of their performance in historically under-served populations such as female patients, Black patients, or patients of low socioeconomic status. Such biases are especially troubling in the context of underdiagnosis, whereby the AI algorithm would inaccurately label an individual with a disease as healthy, potentially delaying access to care. Here, we examine algorithmic underdiagnosis in chest X-ray pathology classification across three large chest X-ray datasets, as well as one multi-source dataset. We find that classifiers produced using state-of-the-art computer vision techniques consistently and selectively underdiagnosed under-served patient populations and that the underdiagnosis rate was higher for intersectional under-served subpopulations, for example, Hispanic female patients. Deployment of AI systems using medical imaging for disease diagnosis with such biases risks exacerbation of existing care biases and can potentially lead to unequal access to medical treatment, thereby raising ethical concerns for the use of these models in the clinic.
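A minimal sketch of the underdiagnosis measurement described above: the rate at which patients who do have findings are nonetheless predicted as healthy ("No Finding"), computed per subgroup and per intersectional subgroup. The file and column names (has_finding, no_finding_pred, sex, race) are hypothetical placeholders.

```python
# Underdiagnosis rate per subgroup: among patients with findings, the fraction
# the model labels "No Finding". Column names are hypothetical.
import pandas as pd

def underdiagnosis_rate(df):
    sick = df[df["has_finding"] == 1]
    return (sick["no_finding_pred"] == 1).mean() if len(sick) else float("nan")

preds = pd.read_csv("cxr_no_finding_predictions.csv")  # hypothetical per-patient predictions

by_sex = preds.groupby("sex").apply(underdiagnosis_rate)
by_race = preds.groupby("race").apply(underdiagnosis_rate)
intersectional = preds.groupby(["race", "sex"]).apply(underdiagnosis_rate)
print(by_sex, by_race, intersectional, sep="\n")
```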


Subject(s)
Artificial Intelligence, Radiography, Thoracic, Vulnerable Populations, Adolescent, Algorithms, Child, Child, Preschool, Datasets as Topic, Female, Humans, Infant, Infant, Newborn, Male, Young Adult
9.
Pac Symp Biocomput; 26: 232-243, 2021.
Article in English | MEDLINE | ID: mdl-33691020

ABSTRACT

Machine learning systems have received much attention recently for their ability to achieve expert-level performance on clinical tasks, particularly in medical imaging. Here, we examine the extent to which state-of-the-art deep learning classifiers trained to yield diagnostic labels from X-ray images are biased with respect to protected attributes. We train convolutional neural networks to predict 14 diagnostic labels in three prominent public chest X-ray datasets (MIMIC-CXR, Chest-Xray8, and CheXpert), as well as a multi-site aggregation of all those datasets. We evaluate the TPR disparity, the difference in true positive rates (TPR), among different protected attributes such as patient sex, age, race, and insurance type as a proxy for socioeconomic status. We demonstrate that TPR disparities exist in the state-of-the-art classifiers in all datasets, for all clinical tasks, and all subgroups. A multi-source dataset corresponds to the smallest disparities, suggesting one way to reduce bias. We find that TPR disparities are not significantly correlated with a subgroup's proportional disease burden. As clinical models move from papers to products, we encourage clinical decision makers to carefully audit for algorithmic disparities prior to deployment. Our supplementary materials can be found at http://www.marzyehghassemi.com/chexclusion-supp-3/.
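A short sketch of the TPR-disparity evaluation described above: per-attribute true positive rates and their signed gap from the overall TPR. The file and column names (label_true, label_pred, sex, age, race, insurance) are hypothetical placeholders, not the paper's actual data layout.

```python
# TPR disparity per protected attribute: subgroup TPR minus overall TPR for a
# given diagnostic label. Column names are hypothetical.
import pandas as pd

def tpr(df):
    positives = df[df["label_true"] == 1]
    return (positives["label_pred"] == 1).mean() if len(positives) else float("nan")

def tpr_disparity(df, attribute):
    overall = tpr(df)
    per_group = df.groupby(attribute).apply(tpr)
    return per_group - overall              # signed TPR gap for each subgroup

preds = pd.read_csv("cxr_label_predictions.csv")   # hypothetical: one row per (image, label)
for attribute in ["sex", "age", "race", "insurance"]:
    print(attribute, tpr_disparity(preds, attribute), sep="\n")
```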


Subject(s)
Computational Biology, Neural Networks, Computer, Humans, Machine Learning, X-Rays