Results 1 - 7 of 7
1.
Radiology ; 311(2): e230750, 2024 May.
Article in English | MEDLINE | ID: mdl-38713024

ABSTRACT

Background: Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency.
Purpose: To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results.
Materials and Methods: This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC).
Results: A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29.
Conclusion: The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination.
ClinicalTrials.gov Identifier: NCT03354416. © RSNA, 2024. Supplemental material is available for this article.
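For orientation, the metrics reported above follow from standard definitions; a minimal Python sketch (illustrative only, not the study's code — the mask arrays are hypothetical stand-ins):

    import numpy as np

    def dice_coefficient(pred_mask, ref_mask):
        """Dice similarity coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
        pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
        denom = pred.sum() + ref.sum()
        return 1.0 if denom == 0 else 2.0 * (pred & ref).sum() / denom

    # Lesion-level counts reported in the abstract:
    sensitivity = 569 / 1029  # detected reference lesions / all reference lesions ≈ 0.55
    ppv = 535 / 934           # true-positive detections / all AI detections ≈ 0.57

    # Hypothetical 3D masks standing in for an AI and a radiologist segmentation.
    rng = np.random.default_rng(0)
    ai_mask = rng.random((32, 32, 32)) > 0.5
    rad_mask = rng.random((32, 32, 32)) > 0.5
    print(sensitivity, ppv, dice_coefficient(ai_mask, rad_mask))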


Subject(s)
Deep Learning , Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Aged , Prospective Studies , Multiparametric Magnetic Resonance Imaging/methods , Middle Aged , Algorithms , Prostate/diagnostic imaging , Prostate/pathology , Image-Guided Biopsy/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
2.
Acad Radiol ; 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38670874

ABSTRACT

RATIONALE AND OBJECTIVES: Extraprostatic extension (EPE) is well established as a significant predictor of prostate cancer aggressiveness and recurrence, and accurate EPE assessment prior to radical prostatectomy can impact surgical approach. We aimed to use a deep learning-based AI workflow for automated EPE grading from prostate T2W MRI, ADC map, and high b-value DWI.
MATERIALS AND METHODS: An expert genitourinary radiologist conducted prospective clinical assessments of MRI scans for 634 patients and assigned risk for EPE using a grading technique. The training set and held-out independent test set consisted of 507 and 127 patients, respectively. Existing deep learning AI models for prostate organ and lesion segmentation were leveraged to extract area and distance features for random forest classification models. Model performance was evaluated using balanced accuracy and ROC AUCs for each EPE grade, as well as sensitivity, specificity, and accuracy compared with EPE on histopathology.
RESULTS: A balanced accuracy score of 0.390 ± 0.078 was achieved using a lesion detection probability threshold of 0.45 and distance features. On the test set, ROC AUCs for AI-assigned EPE grades 0-3 were 0.70, 0.65, 0.68, and 0.55, respectively. When using EPE ≥ 1 as the threshold for positive EPE, the model achieved a sensitivity of 0.67, specificity of 0.73, and accuracy of 0.72, compared with radiologist sensitivity of 0.81, specificity of 0.62, and accuracy of 0.66, using histopathology as the ground truth.
CONCLUSION: Our AI workflow for assigning imaging-based EPE grades achieves an accuracy for predicting histologic EPE approaching that of physicians. Owing to its consistency and automation, this workflow has the potential to enhance physician decision-making when assessing the risk of EPE in patients undergoing treatment for prostate cancer.
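The classification stage described here (a random forest over area and distance features derived from AI segmentations) can be sketched as below; the feature set, thresholds, and data are placeholders rather than the study's pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import balanced_accuracy_score

    rng = np.random.default_rng(0)
    # Hypothetical per-patient features from the segmentation models, e.g.
    # capsular contact length, lesion-capsule distance, lesion and gland areas.
    X_train, X_test = rng.random((507, 4)), rng.random((127, 4))
    y_train = rng.integers(0, 4, 507)  # radiologist-assigned EPE grades 0-3
    y_test = rng.integers(0, 4, 127)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))

    # A binary EPE call can then be thresholded on the predicted grade (EPE >= 1).
    binary_pred = clf.predict(X_test) >= 1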

3.
Abdom Radiol (NY) ; 49(5): 1545-1556, 2024 May.
Article in English | MEDLINE | ID: mdl-38512516

ABSTRACT

OBJECTIVE: Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters.
MATERIALS AND METHODS: A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominoperineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole-gland tumor burden, magnet strength, noted poor quality, and various scanners (outside institutions/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice similarity coefficient (DSC) was calculated against the expert segmentation. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed-effects (LMER) model.
RESULTS: 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [range, 9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with the DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced the DL models (p < 0.001), while signal factors influenced all models (p < 0.001).
CONCLUSION: Factors affecting the anatomical and signal conditions of the prostate gland can adversely impact both DL-based and non-DL-based segmentation models.
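The statistical comparison described in the methods can be sketched as follows: a paired Wilcoxon signed-rank test on per-scan DSC between models, plus a linear mixed-effects model of DSC against scan factors. All values below are simulated placeholders, not the study's data:

    import numpy as np
    import pandas as pd
    from scipy.stats import wilcoxon
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 683  # scans
    dsc_dl = np.clip(rng.normal(0.92, 0.05, n), 0, 1)     # DL-based model
    dsc_shape = np.clip(rng.normal(0.87, 0.08, n), 0, 1)  # shape-based model

    # Paired comparison of per-scan DSC between two models.
    stat, p = wilcoxon(dsc_dl, dsc_shape)
    print(f"Wilcoxon signed-rank: W={stat:.0f}, p={p:.3g}")

    # Mixed-effects model: DSC vs. scan factors, with a random intercept per scan.
    df = pd.DataFrame({
        "dsc": np.concatenate([dsc_dl, dsc_shape]),
        "volume_cc": np.tile(rng.uniform(9, 329, n), 2),
        "poor_quality": np.tile(rng.integers(0, 2, n), 2),
        "scan": np.tile(np.arange(n), 2),
    })
    print(smf.mixedlm("dsc ~ volume_cc + poor_quality", df, groups=df["scan"]).fit().summary())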


Subject(s)
Algorithms , Artificial Intelligence , Magnetic Resonance Imaging , Prostatic Neoplasms , Humans , Male , Retrospective Studies , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Prostatic Neoplasms/pathology , Image Interpretation, Computer-Assisted/methods , Middle Aged , Aged , Prostate/diagnostic imaging , Deep Learning
4.
Acad Radiol ; 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38262813

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care, but it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for (1) bone lesion detection and segmentation and (2) benign vs. metastatic lesion classification on staging CTs, and to compare its performance with that of radiologists.
MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. The segmentation AI (3DAISeg) was developed using lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with the F1-score and accuracy. A training/validation/testing data partition of 70:15:15 was used. A multi-reader study was performed with two junior and two senior radiologists on a subset of the testing dataset (n = 36).
RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% (266/364) of metastatic and 72.4% (484/669) of benign lesions. Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1-score for 3DAIClass was 94.8% with radiologist contours and 92.4% with 3DAISeg contours, with a median of 0 false positives per scan (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated AI 40.0% vs. juniors 32.0% vs. seniors 50.0%) and NPV (AI 96.2% vs. juniors 95.7% vs. seniors 91.9%). When using 3DAISeg contours, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. juniors 32.0% vs. seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. juniors 95.7% vs. seniors 91.9%).
CONCLUSION: Our lesion detection and classification AI models perform on par with junior and senior radiologists in discerning benign from metastatic lesions on staging CTs obtained for PCa.
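The reported PPV, NPV, and F1-score follow directly from a 2x2 confusion matrix (metastatic = positive). A short sketch; the counts below are hypothetical, chosen only to yield rates of the same order as those in the abstract:

    def classification_stats(tp, fp, tn, fn):
        """PPV, NPV, and F1-score from a 2x2 confusion matrix."""
        ppv = tp / (tp + fp) if tp + fp else 0.0
        npv = tn / (tn + fn) if tn + fn else 0.0
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * ppv * sensitivity / (ppv + sensitivity) if ppv + sensitivity else 0.0
        return ppv, npv, f1

    # Hypothetical counts for a 36-scan reader-study subset.
    ppv, npv, f1 = classification_stats(tp=4, fp=6, tn=25, fn=1)
    print(f"PPV={ppv:.1%}, NPV={npv:.1%}, F1={f1:.1%}")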

5.
Nat Med ; 27(10): 1735-1743, 2021 10.
Article in English | MEDLINE | ID: mdl-34526699

ABSTRACT

Federated learning (FL) is a method for training artificial intelligence models with data from multiple sources while maintaining data anonymity, thus removing many barriers to data sharing. Here we used data from 20 institutes across the globe to train an FL model, called EXAM (electronic medical record (EMR) chest X-ray AI model), that predicts the future oxygen requirements of symptomatic patients with COVID-19 using inputs of vital signs, laboratory data and chest X-rays. EXAM achieved an average area under the curve (AUC) >0.92 for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room, and it provided a 16% improvement in average AUC measured across all participating sites and an average increase in generalizability of 38% compared with models trained at a single site using that site's data. For prediction of mechanical ventilation treatment or death at 24 h at the largest independent test site, EXAM achieved a sensitivity of 0.950 and a specificity of 0.882. In this study, FL facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.
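At the heart of most FL systems is an aggregation step commonly known as federated averaging. A minimal sketch of that step (not the EXAM implementation; the parameter arrays and site sizes are made up):

    import numpy as np

    def federated_average(site_params, site_sizes):
        """Weight each site's model parameters by its local sample count and average."""
        total = sum(site_sizes)
        avg = [np.zeros_like(p) for p in site_params[0]]
        for params, n in zip(site_params, site_sizes):
            for i, p in enumerate(params):
                avg[i] += (n / total) * p
        return avg

    # One communication round: sites train locally, then share only parameters
    # (never patient data) with the server, which broadcasts the average back.
    site_a = [np.array([0.2, 0.4]), np.array([0.1])]
    site_b = [np.array([0.6, 0.0]), np.array([0.3])]
    print(federated_average([site_a, site_b], site_sizes=[100, 300]))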


Subject(s)
COVID-19/physiopathology , Machine Learning , Outcome Assessment, Health Care , COVID-19/therapy , COVID-19/virology , Electronic Health Records , Humans , Prognosis , SARS-CoV-2/isolation & purification
6.
J Am Med Inform Assoc ; 28(6): 1259-1264, 2021 06 12.
Article in English | MEDLINE | ID: mdl-33537772

ABSTRACT

OBJECTIVE: To demonstrate that multi-institutional model training is feasible via federated learning (FL), without centralizing or sharing the underlying physical data.
MATERIALS AND METHODS: Deep learning models were trained at each participating institution using local clinical data, and an additional model was trained using FL across all of the institutions.
RESULTS: The FL model exhibited superior performance and generalizability compared with the models trained at single institutions, with an overall performance level that was significantly better than that of any of the institutional models alone when evaluated on held-out test sets from each institution and an outside challenge dataset.
DISCUSSION: The power of FL was successfully demonstrated across 3 academic institutions while avoiding the privacy risks associated with the transfer and pooling of patient data.
CONCLUSION: Federated learning is an effective methodology that merits further study: it enables accelerated development of models across institutions while providing greater generalizability in clinical use.
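One way to quantify the generalizability comparison described here is to score every locally trained model, plus the FL model, on each institution's held-out test set. A toy sketch with stand-in models and data (a "model" here is just a decision threshold, purely for illustration):

    def generalizability_matrix(models, test_sets, score_fn):
        """Rows: models (each local model plus the FL model); columns: test sites."""
        return [[score_fn(m, ts) for ts in test_sets] for m in models]

    # Toy stand-ins: a "test set" is (scores, labels); the score is plain accuracy.
    def accuracy(threshold, test_set):
        scores, labels = test_set
        return sum((s >= threshold) == y for s, y in zip(scores, labels)) / len(labels)

    site1 = ([0.2, 0.7, 0.9], [False, True, True])
    site2 = ([0.1, 0.4, 0.8], [False, False, True])
    for row in generalizability_matrix([0.3, 0.5, 0.6], [site1, site2], accuracy):
        print(row)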


Subject(s)
Deep Learning , Information Dissemination , Humans , Privacy
7.
Res Sq ; 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33442676

ABSTRACT

'Federated Learning' (FL) is a method to train Artificial Intelligence (AI) models with data from multiple sources while maintaining anonymity of the data, thus removing many barriers to data sharing. During the SARS-CoV-2 pandemic, 20 institutes collaborated on a healthcare FL study to predict the future oxygen requirements of infected patients using inputs of vital signs, laboratory data, and chest x-rays, constituting the "EXAM" (EMR CXR AI Model) model. EXAM achieved an average Area Under the Curve (AUC) of over 0.92, an average improvement of 16%, and a 38% increase in generalisability over local models. The FL paradigm was successfully applied to facilitate a rapid data science collaboration without data exchange, resulting in a model that generalised across heterogeneous, unharmonized datasets. This provided the broader healthcare community with a validated model to respond to COVID-19 challenges, and set the stage for broader use of FL in healthcare.
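The headline AUC metric can be reproduced per site from model scores and observed outcomes; a minimal sketch with made-up values (not the study's data):

    from statistics import mean
    from sklearn.metrics import roc_auc_score

    # Hypothetical per-site predictions and observed oxygen-requirement outcomes.
    sites = {
        "site_a": ([0, 0, 1, 1, 1], [0.10, 0.35, 0.80, 0.65, 0.92]),
        "site_b": ([0, 1, 0, 1], [0.20, 0.70, 0.40, 0.90]),
    }
    aucs = {name: roc_auc_score(y, s) for name, (y, s) in sites.items()}
    print(aucs, "average AUC:", mean(aucs.values()))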
