Results 1 - 20 of 33
1.
Radiology ; 311(2): e230750, 2024 May.
Article in English | MEDLINE | ID: mdl-38713024

ABSTRACT

Background Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency. Artificial intelligence (AI) models can assist in mpMRI interpretation, but large training data sets and extensive model testing are required. Purpose To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results. Materials and Methods This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC). Results A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. 
In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29. Conclusion The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination. ClinicalTrials.gov Identifier: NCT03354416 © RSNA, 2024 Supplemental material is available for this article.


Subject(s)
Deep Learning , Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Aged , Prospective Studies , Multiparametric Magnetic Resonance Imaging/methods , Middle Aged , Algorithms , Prostate/diagnostic imaging , Prostate/pathology , Image-Guided Biopsy/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
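Several of the studies listed here score segmentations with the Dice similarity coefficient (DSC). As a minimal illustrative sketch (not the study's actual code, and with toy 1D masks standing in for 3D lesion volumes), the metric can be computed from two binary masks as follows:

```python
# Hypothetical sketch of the Dice similarity coefficient (DSC) used to score
# lesion segmentations against expert contours; mask data are illustrative.

def dice_coefficient(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two equal-length binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy flattened "masks": 2 overlapping voxels, 3 + 4 labeled voxels in total
pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+4) ≈ 0.571
```

A DSC of 1.0 means perfect overlap and 0 means none, which is why the lesion-level DSC of 0.29 above indicates only modest boundary agreement despite strong participant-level detection.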
2.
Abdom Radiol (NY) ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512516

ABSTRACT

OBJECTIVE: Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS: A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominal perineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice Similarity Coefficient (DSC) was calculated against the expert segmentations. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed effects (LMER) model. RESULTS: 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with the DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factor, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced the DL models (p < 0.001), while signal factors influenced all models (p < 0.001).
CONCLUSION: Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.

3.
Acad Radiol ; 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38262813

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs and to compare its performance with radiologists. MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. Segmentation AI (3DAISeg) was developed using the lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36). RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving a 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1 for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false positive of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated approach AI 40.0% vs. Juniors 32.0% vs. Seniors 50.0%) and NPV (AI 96.2% vs. Juniors 95.7% vs. Seniors 91.9%). 
When using 3DAISeg, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. Juniors 32.0% vs. Seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. Juniors 95.7% vs. Seniors 91.9%). CONCLUSION: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.

4.
Med Image Anal ; 88: 102833, 2023 08.
Article in English | MEDLINE | ID: mdl-37267773

ABSTRACT

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm performed significantly better than the other submissions, and it used an asymmetrical U-Net network architecture. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.


Subject(s)
Image Processing, Computer-Assisted , White Matter , Pregnancy , Female , Humans , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging , Head , Fetus/diagnostic imaging , Algorithms , Magnetic Resonance Imaging/methods
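Four of the top five FeTA teams used ensemble learning. One common way to ensemble segmentation models is a per-voxel majority vote over the candidate masks; the sketch below is illustrative only (not any team's actual method) and uses flat binary lists in place of 3D label volumes:

```python
# Illustrative per-voxel majority-vote ensemble of binary segmentation masks.
# Real challenge entries typically average softmax probabilities instead.

def majority_vote(masks):
    """Combine equal-length binary masks: a voxel is 1 if most models agree."""
    n = len(masks)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*masks)]

m1 = [1, 1, 0, 0]
m2 = [1, 0, 1, 0]
m3 = [1, 1, 0, 0]
print(majority_vote([m1, m2, m3]))  # [1, 1, 0, 0]
```

Voting suppresses idiosyncratic errors of any single model, which is consistent with the challenge's finding that ensembles clustered near the top of the leaderboard.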
5.
IEEE Trans Med Imaging ; 42(7): 2044-2056, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37021996

ABSTRACT

Federated learning (FL) allows the collaborative training of AI models without needing to share raw data. This capability makes it especially interesting for healthcare applications where patient and data privacy is of utmost concern. However, recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data. In this work, we show that these attacks presented in the literature are impractical in FL use-cases where the clients' training involves updating the Batch Normalization (BN) statistics and provide a new baseline attack that works for such scenarios. Furthermore, we present new ways to measure and visualize potential data leakage in FL. Our work is a step towards establishing reproducible methods of measuring data leakage in FL and could help determine the optimal tradeoffs between privacy-preserving techniques, such as differential privacy, and model accuracy based on quantifiable metrics.


Subject(s)
Neural Networks, Computer , Supervised Machine Learning , Humans , Privacy , Medical Informatics
6.
Med Image Anal ; 82: 102605, 2022 11.
Article in English | MEDLINE | ID: mdl-36156419

ABSTRACT

Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board Certified Radiologists annotated 295 public images from two sources (A and B) for algorithms training (n=199, source A), validation (n=50, source A) and testing (n=23, source A; n=23, source B). There were 1,096 registered teams of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.


Subject(s)
COVID-19 , Pandemics , Humans , COVID-19/diagnostic imaging , Artificial Intelligence , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging
7.
Abdom Radiol (NY) ; 47(4): 1425-1434, 2022 04.
Article in English | MEDLINE | ID: mdl-35099572

ABSTRACT

PURPOSE: To present a fully automated DL-based prostate cancer detection system for prostate MRI. METHODS: MRI scans from two institutions were used for algorithm training, validation, and testing. MRI-visible lesions were contoured by an experienced radiologist. All lesions were biopsied using MRI-TRUS guidance. Lesion masks and histopathological results were used as ground truth labels to train UNet and AH-Net architectures for prostate cancer lesion detection and segmentation. The algorithm was trained to detect any prostate cancer ≥ ISUP1. Detection sensitivity, positive predictive value (PPV), and mean number of false positive lesions per patient were used as performance metrics. RESULTS: 525 patients were included for training, validation, and testing of the algorithm. The dataset was split into training (n = 368, 70%), validation (n = 79, 15%), and test (n = 78, 15%) cohorts. Dice coefficients in the training and validation sets were 0.403 and 0.307, respectively, for the AH-Net model, compared with 0.372 and 0.287, respectively, for the UNet model. In the validation set, detection sensitivity was 70.9%, PPV was 35.5%, and the mean number of false positive lesions per patient was 1.41 (range 0-6) for the UNet model, compared with a detection sensitivity of 74.4%, PPV of 47.8%, and 0.87 false positive lesions per patient (range 0-5) for the AH-Net model. In the test set, detection sensitivity for UNet was 72.8% compared with 63.0% for AH-Net, and the mean number of false positive lesions per patient was 1.90 (range 0-7) and 1.40 (range 0-6) for the UNet and AH-Net models, respectively. CONCLUSION: We developed a DL-based AI approach that predicts prostate cancer lesions at biparametric MRI with reasonable performance metrics. While false positive lesion calls remain a challenge for AI-assisted detection algorithms, this system can be utilized as an adjunct tool by radiologists.


Subject(s)
Deep Learning , Prostatic Neoplasms , Artificial Intelligence , Humans , Magnetic Resonance Imaging/methods , Male , Prostate/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology
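The detection metrics reported in the abstract above (sensitivity, PPV, false positives per patient) all derive from per-patient lesion counts. A minimal sketch, with made-up counts and assuming detections have already been matched to ground-truth lesions:

```python
# Hedged sketch of lesion-level detection metrics; the (tp, fp, fn) counts
# per patient are illustrative stand-ins for matched detections.

def detection_metrics(per_patient):
    """per_patient: list of (tp, fp, fn) lesion counts, one tuple per patient."""
    tp = sum(p[0] for p in per_patient)
    fp = sum(p[1] for p in per_patient)
    fn = sum(p[2] for p in per_patient)
    sensitivity = tp / (tp + fn)   # detected fraction of true lesions
    ppv = tp / (tp + fp)           # fraction of calls that are real lesions
    mean_fp = fp / len(per_patient)
    return sensitivity, ppv, mean_fp

counts = [(2, 1, 0), (1, 0, 1), (0, 1, 1)]  # three toy patients
sens, ppv, mean_fp = detection_metrics(counts)
print(round(sens, 2), round(ppv, 2), round(mean_fp, 2))  # 0.6 0.6 0.67
```

Note the trade-off visible in the abstract: AH-Net's higher PPV and fewer false positives per patient in validation came with slightly lower test-set sensitivity than UNet.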
8.
Acad Radiol ; 29(8): 1159-1168, 2022 08.
Article in English | MEDLINE | ID: mdl-34598869

ABSTRACT

RATIONALE AND OBJECTIVES: Prostate MRI improves detection of clinically significant prostate cancer; however, its diagnostic performance varies widely. Artificial intelligence (AI) has the potential to assist radiologists in the detection and classification of prostatic lesions. Herein, we aimed to develop and test a cascaded deep learning detection and classification system, trained on biparametric prostate MRI using PI-RADS, for assisting radiologists during prostate MRI read-out. MATERIALS AND METHODS: T2-weighted and diffusion-weighted (ADC maps, high b value DWI) MRI scans obtained at 3 Tesla from two institutions (n = 1043 in-house and n = 347 Prostate-X, respectively), acquired between 2015 and 2019, were used for model training, validation, and testing. All scans were retrospectively reevaluated by one radiologist. Suspicious lesions were contoured and assigned a PI-RADS category. A 3D U-Net-based deep neural network was used to train an algorithm for automated detection and segmentation of prostate MRI lesions. Two 3D residual neural networks were used for a 4-class classification task to predict PI-RADS categories 2 to 5 and BPH. Training and validation used 89% (n = 1290 scans) of the data with 5-fold cross-validation; the remaining 11% (n = 150 scans) were used for independent testing. Algorithm performance at the lesion level was assessed using sensitivity, positive predictive value (PPV), false discovery rate (FDR), classification accuracy, and Dice similarity coefficient (DSC). Additional analysis was conducted to compare the AI algorithm's lesion detection performance with targeted biopsy results. RESULTS: Median age was 66 years (IQR = 60-71) and median PSA was 6.7 ng/ml (IQR = 4.7-9.9) in the in-house cohort. In the independent test set, the algorithm correctly detected 111 of 198 lesions, yielding a sensitivity of 56.1% (95% CI 49.3%-62.6%). PPV was 62.7% (95% CI 54.7%-70.7%) with an FDR of 37.3% (95% CI 29.3%-45.3%).
Of 79 true positive lesions, 82.3% were tumor positive at targeted biopsy, whereas of 57 false negative lesions, 50.9% were benign at targeted biopsy. Median DSC for lesion segmentation was 0.359. Overall PI-RADS classification accuracy was 30.8% (95% CI 24.6%-37.8%). CONCLUSION: Our cascaded U-Net and residual network architecture can detect and classify cancer-suspicious lesions at prostate MRI with good detection and reasonable classification performance.


Subject(s)
Deep Learning , Prostatic Neoplasms , Aged , Algorithms , Artificial Intelligence , Humans , Magnetic Resonance Imaging , Male , Prostate/diagnostic imaging , Prostate/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Retrospective Studies
9.
Nat Med ; 27(10): 1735-1743, 2021 10.
Article in English | MEDLINE | ID: mdl-34526699

ABSTRACT

Federated learning (FL) is a method used for training artificial intelligence models with data from multiple sources while maintaining data anonymity, thus removing many barriers to data sharing. Here we used data from 20 institutes across the globe to train a FL model, called EXAM (electronic medical record (EMR) chest X-ray AI model), that predicts the future oxygen requirements of symptomatic patients with COVID-19 using inputs of vital signs, laboratory data and chest X-rays. EXAM achieved an average area under the curve (AUC) >0.92 for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room, and it provided 16% improvement in average AUC measured across all participating sites and an average increase in generalizability of 38% when compared with models trained at a single site using that site's data. For prediction of mechanical ventilation treatment or death at 24 h at the largest independent test site, EXAM achieved a sensitivity of 0.950 and specificity of 0.882. In this study, FL facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.


Subject(s)
COVID-19/physiopathology , Machine Learning , Outcome Assessment, Health Care , COVID-19/therapy , COVID-19/virology , Electronic Health Records , Humans , Prognosis , SARS-CoV-2/isolation & purification
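The EXAM study above aggregates locally trained models without moving patient data. A minimal FedAvg-style sketch, assuming models are plain dicts of parameter lists weighted by each site's sample count (illustrative only, not the EXAM implementation, which trained deep networks over many communication rounds):

```python
# Minimal federated-averaging (FedAvg-style) sketch: the server averages
# client model parameters weighted by local dataset size; raw data never moves.

def federated_average(client_weights, client_sizes):
    """client_weights: list of {param_name: [floats]}; client_sizes: samples per site."""
    total = sum(client_sizes)
    avg = {}
    for name in client_weights[0]:
        avg[name] = [
            sum(w[name][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0][name]))
        ]
    return avg

site_a = {"layer1": [1.0, 2.0]}   # hypothetical site with 100 local samples
site_b = {"layer1": [3.0, 4.0]}   # hypothetical site with 300 local samples
print(federated_average([site_a, site_b], [100, 300]))  # {'layer1': [2.5, 3.5]}
```

Weighting by sample count keeps large sites from being diluted by small ones, while every site still contributes, which is one plausible mechanism for the generalizability gains reported above.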
10.
Med Image Anal ; 73: 102166, 2021 10.
Article in English | MEDLINE | ID: mdl-34340104

ABSTRACT

Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared and 4505 vertebrae have individually been annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.


Subject(s)
Benchmarking , Tomography, X-Ray Computed , Algorithms , Humans , Image Processing, Computer-Assisted , Spine/diagnostic imaging
11.
Res Sq ; 2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34100010

ABSTRACT

Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board Certified Radiologists annotated 295 public images from two sources (A and B) for algorithms training (n=199, source A), validation (n=50, source A) and testing (n=23, source A; n=23, source B). There were 1,096 registered teams of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.

12.
Sci Rep ; 11(1): 6940, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33767213

ABSTRACT

A better understanding of temporal relationships between chest CT and labs may provide a reference for disease severity over the disease course. Generalized curves of lung opacity volume and density over time can be used as standardized references from well before symptoms develop to over a month after recovery, when residual lung opacities remain. 739 patients with COVID-19 underwent CT and RT-PCR in an outbreak setting between January 21st and April 12th, 2020. 29 of 739 patients had serial exams (121 CTs and 279 laboratory measurements) over 50 ± 16 days, with an average of 4.2 sequential CTs each. Sequential volumes of total lung, overall opacity, and opacity subtypes (ground glass opacity [GGO] and consolidation) were extracted using deep learning and manual segmentation. Generalized temporal curves of CT and laboratory measurements were correlated. Lung opacities appeared 3.4 ± 2.2 days prior to symptom onset. Opacity peaked 1 day after symptom onset. GGO onset was earlier and resolved later than consolidation. Lactate dehydrogenase and C-reactive protein peaked earlier than procalcitonin and leukopenia. The temporal relationships of quantitative CT features and clinical labs have distinctive patterns and peaks in relation to symptom onset, which may inform early clinical course in patients with mild COVID-19 pneumonia, or may shed light upon chronic lung effects or mechanisms of medical countermeasures in clinical trials.


Subject(s)
COVID-19/diagnostic imaging , Clinical Chemistry Tests , Hematologic Tests , Thorax/diagnostic imaging , Adult , COVID-19/blood , COVID-19/virology , Female , Humans , Male , Middle Aged , Retrospective Studies , SARS-CoV-2/isolation & purification , Severity of Illness Index , Thorax/pathology , Tomography, X-Ray Computed
13.
Med Image Anal ; 70: 101992, 2021 05.
Article in English | MEDLINE | ID: mdl-33601166

ABSTRACT

The recent outbreak of Coronavirus Disease 2019 (COVID-19) has led to urgent needs for reliable diagnosis and management of SARS-CoV-2 infection. The current guideline recommends RT-PCR for testing. As a complementary diagnostic imaging tool, chest Computed Tomography (CT) has been shown to reveal visual patterns characteristic of COVID-19, which has definite value at several stages during the disease course. To facilitate CT analysis, recent efforts have focused on computer-aided characterization and diagnosis with chest CT scans, which has shown promising results. However, domain shift of data across clinical data centers poses a serious challenge when deploying learning-based models. A common way to alleviate this issue is to fine-tune the model locally with the target domain's local data and annotations. Unfortunately, the availability and quality of local annotations usually vary due to heterogeneity in equipment and distribution of medical resources across the globe. This impact may be pronounced in the detection of COVID-19, since the relevant patterns vary in size, shape, and texture. In this work, we attempt to find a solution to this challenge via federated and semi-supervised learning. A multi-national database consisting of 1704 scans from three countries is adopted to study the performance gap when training a model with one dataset and applying it to another. Expert radiologists manually delineated 945 scans for COVID-19 findings. To handle the variability in both the data and annotations, a novel federated semi-supervised learning technique is proposed to fully utilize all available data (with or without annotations). Federated learning avoids the need for sensitive data-sharing, which makes it favorable for institutions and nations with strict regulatory policies on data privacy. Moreover, semi-supervision potentially reduces the annotation burden in a distributed setting.
The proposed framework is shown to be effective compared to fully supervised scenarios with conventional data sharing instead of model weight sharing.


Subject(s)
COVID-19/diagnostic imaging , Supervised Machine Learning , Tomography, X-Ray Computed , China , Humans , Italy , Japan
14.
J Am Med Inform Assoc ; 28(6): 1259-1264, 2021 06 12.
Article in English | MEDLINE | ID: mdl-33537772

ABSTRACT

OBJECTIVE: To demonstrate enabling multi-institutional training without centralizing or sharing the underlying physical data via federated learning (FL). MATERIALS AND METHODS: Deep learning models were trained at each participating institution using local clinical data, and an additional model was trained using FL across all of the institutions. RESULTS: We found that the FL model exhibited superior performance and generalizability to the models trained at single institutions, with an overall performance level that was significantly better than that of any of the institutional models alone when evaluated on held-out test sets from each institution and an outside challenge dataset. DISCUSSION: The power of FL was successfully demonstrated across 3 academic institutions while avoiding the privacy risk associated with the transfer and pooling of patient data. CONCLUSION: Federated learning is an effective methodology that merits further study to enable accelerated development of models across institutions, enabling greater generalizability in clinical use.


Subject(s)
Deep Learning , Information Dissemination , Humans , Privacy
15.
Res Sq ; 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33442676

ABSTRACT

'Federated Learning' (FL) is a method to train Artificial Intelligence (AI) models with data from multiple sources while maintaining anonymity of the data, thus removing many barriers to data sharing. During the SARS-COV-2 pandemic, 20 institutes collaborated on a healthcare FL study to predict future oxygen requirements of infected patients using inputs of vital signs, laboratory data, and chest x-rays, constituting the "EXAM" (EMR CXR AI Model) model. EXAM achieved an average Area Under the Curve (AUC) of over 0.92, an average improvement of 16%, and a 38% increase in generalisability over local models. The FL paradigm was successfully applied to facilitate a rapid data science collaboration without data exchange, resulting in a model that generalised across heterogeneous, unharmonized datasets. This provided the broader healthcare community with a validated model to respond to COVID-19 challenges, as well as set the stage for broader use of FL in healthcare.

16.
Diagn Interv Radiol ; 27(1): 20-27, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32815519

ABSTRACT

PURPOSE: Chest X-ray plays a key role in the diagnosis and management of COVID-19 patients, and imaging features associated with clinical elements may assist with the development or validation of automated image analysis tools. We aimed to identify associations between clinical and radiographic features, as well as to assess the feasibility of deep learning applied to chest X-rays in the setting of an acute COVID-19 outbreak. METHODS: A retrospective study of X-rays, clinical, and laboratory data was performed on 48 SARS-CoV-2 RT-PCR positive patients (age 60±17 years, 15 women) between February 22 and March 6, 2020 from a tertiary care hospital in Milan, Italy. Sixty-five chest X-rays were reviewed by two radiologists for alveolar and interstitial opacities and classified by severity on a scale from 0 to 3. Clinical factors (age, symptoms, comorbidities) were investigated for association with opacity severity and also with placement of central line or endotracheal tube. Deep learning models were then trained for two tasks: lung segmentation and opacity detection. Imaging characteristics were compared to clinical datapoints using the unpaired Student's t-test or Mann-Whitney U test. Cohen's kappa analysis was used to evaluate the concordance of deep learning with conventional radiologist interpretation. RESULTS: Fifty-six percent of patients presented with alveolar opacities, 73% had interstitial opacities, and 23% had normal X-rays. The presence of alveolar or interstitial opacities was statistically correlated with age (P = 0.008) and comorbidities (P = 0.005). The extent of alveolar or interstitial opacities on baseline X-ray was significantly associated with the presence of endotracheal tube (P = 0.0008 and P = 0.049) or central line (P = 0.003 and P = 0.007). In comparison to human interpretation, the deep learning model achieved a kappa concordance of 0.51 for alveolar opacities and 0.71 for interstitial opacities.
CONCLUSION: Chest X-ray analysis in an acute COVID-19 outbreak showed that the severity of opacities was associated with advanced age, comorbidities, as well as acuity of care. Artificial intelligence tools based upon deep learning of COVID-19 chest X-rays are feasible in the acute outbreak setting.


Subject(s)
COVID-19/diagnosis , Deep Learning/statistics & numerical data , Radiography, Thoracic/methods , SARS-CoV-2/genetics , Thorax/diagnostic imaging , Adult , Age Factors , Aged , COVID-19/epidemiology , COVID-19/therapy , COVID-19/virology , Comorbidity , Feasibility Studies , Female , Humans , Italy/epidemiology , Male , Middle Aged , Radiography, Thoracic/classification , Radiologists , Retrospective Studies , Severity of Illness Index , Thorax/pathology
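The study above reports model-to-radiologist agreement as Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A generic sketch with synthetic binary labels (not the study's code):

```python
# Sketch of Cohen's kappa for two raters' labels; data are synthetic and this
# is a generic implementation, not the study's analysis script.

def cohens_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e) for two equal-length label sequences."""
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_e = sum((a.count(k) / n) * (b.count(k) / n) for k in labels)  # chance
    return (p_o - p_e) / (1 - p_e)

reader    = [1, 1, 0, 0, 1, 0, 1, 0]  # toy radiologist labels (1 = opacity)
algorithm = [1, 1, 0, 1, 1, 0, 0, 0]  # toy model labels
print(round(cohens_kappa(reader, algorithm), 2))  # 0.5
```

By common rules of thumb, the reported kappa of 0.51 (alveolar) is moderate agreement and 0.71 (interstitial) is substantial, which supports the feasibility claim while leaving room for improvement.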
17.
Eur Radiol ; 31(5): 3165-3176, 2021 May.
Article in English | MEDLINE | ID: mdl-33146796

ABSTRACT

OBJECTIVES: The early infection dynamics of patients with SARS-CoV-2 are not well understood. We aimed to investigate and characterize associations between clinical, laboratory, and imaging features of asymptomatic and pre-symptomatic patients with SARS-CoV-2. METHODS: Seventy-four patients with RT-PCR-proven SARS-CoV-2 infection were asymptomatic at presentation. All were retrospectively identified from 825 patients with chest CT scans and positive RT-PCR following exposure or travel risks in outbreak settings in Japan and China. CTs were obtained for every patient within a day of admission and were reviewed for infiltrate subtypes and percent with assistance from a deep learning tool. Correlations of clinical, laboratory, and imaging features were analyzed and comparisons were performed using univariate and multivariate logistic regression. RESULTS: Forty-eight of 74 (65%) initially asymptomatic patients had CT infiltrates that pre-dated symptom onset by 3.8 days. The most common CT infiltrates were ground glass opacities (45/48; 94%) and consolidation (22/48; 46%). Patient body temperature (p < 0.01), CRP (p < 0.01), and KL-6 (p = 0.02) were associated with the presence of CT infiltrates. Infiltrate volume (p = 0.01), percent lung involvement (p = 0.01), and consolidation (p = 0.043) were associated with subsequent development of symptoms. CONCLUSIONS: COVID-19 CT infiltrates pre-dated symptoms in two-thirds of patients. Body temperature elevation and laboratory evaluations may identify asymptomatic patients with SARS-CoV-2 CT infiltrates at presentation, and the characteristics of CT infiltrates could help identify asymptomatic SARS-CoV-2 patients who subsequently develop symptoms. The role of chest CT in COVID-19 may be illuminated by a better understanding of CT infiltrates in patients with early disease or SARS-CoV-2 exposure. KEY POINTS: • Forty-eight of 74 (65%) pre-selected asymptomatic patients with SARS-CoV-2 had abnormal chest CT findings. 
• CT infiltrates pre-dated symptom onset by 3.8 days (range 1-5). • KL-6, CRP, and elevated body temperature identified patients with CT infiltrates. Higher infiltrate volume, percent lung involvement, and pulmonary consolidation identified patients who developed symptoms.
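The association analysis described above, univariate and multivariate logistic regression of clinical/laboratory features against CT findings, can be sketched on synthetic data. Everything below (feature names, effect sizes, sample size) is an illustrative stand-in, not the study's data or code:

```python
import numpy as np

def fit_logistic(features, y, lr=0.1, n_iter=5000):
    """Plain gradient-descent logistic regression (intercept added internally)."""
    X = np.column_stack([np.ones(len(features)), features])
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on the log-loss
    return w

# Entirely synthetic stand-ins for body temperature and CRP.
rng = np.random.default_rng(0)
n = 200
temp = rng.normal(37.0, 0.6, n)
crp = rng.normal(5.0, 2.0, n)
true_logit = 3.0 * (temp - 37.0) + 0.4 * (crp - 5.0)
infiltrate = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Multivariate model with both features at once; a positive fitted weight
# indicates a positive association with the presence of CT infiltrates.
w = fit_logistic(np.column_stack([temp - 37.0, crp - 5.0]), infiltrate)
```

In the actual study the outcome variables were infiltrate presence and symptom development; the sketch only shows the shape of the regression step.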


Subject(s)
COVID-19 , SARS-CoV-2 , China/epidemiology , Disease Outbreaks , Humans , Japan , Retrospective Studies , Tomography, X-Ray Computed
18.
IEEE Trans Med Imaging ; 40(4): 1113-1122, 2021 04.
Article in English | MEDLINE | ID: mdl-33351753

ABSTRACT

Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources in practice, which makes it challenging to build a universal model with a varied set of input data. To tackle this problem, we propose a general approach to completing randomly missing domain data in real applications. Specifically, we develop a novel multi-domain image completion method that utilizes a generative adversarial network (GAN) with a representational disentanglement scheme to extract shared content encoding and separate style encoding across multiple domains. We further illustrate that the learned representation in multi-domain image completion could be leveraged for high-level tasks, e.g., segmentation, by introducing a unified framework consisting of image completion and segmentation with a shared content encoder. The experiments demonstrate consistent performance improvement on three datasets for brain tumor segmentation, prostate segmentation, and facial expression image completion, respectively.
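The shared-content/separate-style idea behind the completion method can be illustrated with a purely linear toy: if each domain's image is modeled as shared content plus a domain-specific style offset, a missing domain can be synthesized from any available one. The real method learns these codes with a GAN; the linear form, dimensions, and values below are illustrative stand-ins only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear stand-in: "domain image" = shared content + domain style offset.
n_domains, dim = 3, 8
styles = rng.normal(0.0, 1.0, (n_domains, dim))   # domain-specific style codes
content = rng.normal(0.0, 1.0, dim)               # subject-specific shared content

images = content + styles                          # one "image" per domain

# Suppose domain 2 is missing: recover the shared content from an available
# domain, then synthesize the missing domain by re-attaching its style code.
recovered_content = images[0] - styles[0]
completed = recovered_content + styles[2]
```

The disentanglement is what makes the completion work: any available domain yields the same content code, so the model tolerates a varying set of inputs.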


Subject(s)
Brain Neoplasms , Image Processing, Computer-Assisted , Brain Neoplasms/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male
19.
IEEE Trans Med Imaging ; 40(10): 2534-2547, 2021 10.
Article in English | MEDLINE | ID: mdl-33373298

ABSTRACT

Active learning is a unique abstraction of machine learning techniques in which, unlike passive machine learning, the model/algorithm guides users to annotate the data points that would be most beneficial to the model. The primary advantage is that active learning frameworks select data points that can accelerate the learning process of a model and can reduce the amount of data needed to achieve full accuracy compared with a model trained on a randomly acquired data set. Multiple frameworks for active learning combined with deep learning have been proposed, and the majority of them are dedicated to classification tasks. Herein, we explore active learning for the task of segmentation of medical imaging data sets. We investigate our proposed framework using two datasets: 1.) MRI scans of the hippocampus; 2.) CT scans of the pancreas and tumors. This work presents a query-by-committee approach for active learning in which a joint optimizer is used for the committee. At the same time, we propose three new strategies for active learning: 1.) increasing the frequency of uncertain data to bias the training data set; 2.) using mutual information among the input images as a regularizer for acquisition to ensure diversity in the training data set; 3.) adapting the Dice log-likelihood for Stein variational gradient descent (SVGD). The results indicate an improvement in terms of data reduction, achieving full accuracy while using only 22.69% and 48.85% of the available data for each dataset, respectively.
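The query-by-committee acquisition at the heart of the approach above can be sketched with vote entropy: points on which the committee disagrees most are queried first. The committee here is a set of hypothetical 1-D threshold classifiers, not the paper's jointly optimized deep committee; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled pool of 1-D points; a committee of hypothetical threshold
# classifiers stands in for the paper's jointly optimized committee.
pool = rng.uniform(-1.0, 1.0, 100)
thresholds = rng.normal(0.0, 0.1, 5)       # members disagree only near zero

votes = (pool[None, :] > thresholds[:, None]).astype(float)  # (member, point)
p_pos = votes.mean(axis=0)                 # fraction of members voting positive

# Vote entropy: highest where the committee is most split, i.e. the points
# whose labels would be most informative to acquire next.
eps = 1e-12
entropy = -(p_pos * np.log(p_pos + eps) + (1 - p_pos) * np.log(1 - p_pos + eps))
query_order = np.argsort(-entropy)         # most uncertain points first
```

Biasing the training set toward the front of `query_order` is the "increasing frequency of uncertain data" strategy in miniature; the paper's mutual-information regularizer would additionally penalize querying near-duplicate points.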


Subject(s)
Magnetic Resonance Imaging , Algorithms , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Uncertainty