Results 1 - 8 of 8
1.
Radiol Imaging Cancer; 6(6): e240050, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39400232

ABSTRACT

Purpose To evaluate the performance of an artificial intelligence (AI) model in detecting overall and clinically significant prostate cancer (csPCa)-positive lesions on paired external and in-house biparametric MRI (bpMRI) scans and assess performance differences between each dataset. Materials and Methods This single-center retrospective study included patients who underwent prostate MRI at an external institution and were rescanned at the authors' institution between May 2015 and May 2022. A genitourinary radiologist performed prospective readouts on in-house MRI scans following the Prostate Imaging Reporting and Data System (PI-RADS) version 2.0 or 2.1 and retrospective image quality assessments for all scans. A subgroup of patients underwent an MRI/US fusion-guided biopsy. A bpMRI-based lesion detection AI model previously developed using a completely separate dataset was tested on both MRI datasets. Detection rates were compared between external and in-house datasets using paired comparison permutation tests. Factors associated with AI detection performance were assessed using multivariable generalized mixed-effects models, incorporating features selected through forward stepwise regression based on the Akaike information criterion. Results The study included 201 male patients (median age, 66 years [IQR, 62-70 years]; prostate-specific antigen density, 0.14 ng/mL² [IQR, 0.10-0.22 ng/mL²]) with a median interval between external and in-house MRI scans of 182 days (IQR, 97-383 days). For intraprostatic lesions, AI detected 39.7% (149 of 375) on external and 56.0% (210 of 375) on in-house MRI scans (P < .001). For csPCa-positive lesions, AI detected 61% (54 of 89) on external and 79% (70 of 89) on in-house MRI scans (P < .001).
On external MRI scans, better overall lesion detection was associated with a higher PI-RADS score (odds ratio [OR] = 1.57; P = .005), larger lesion diameter (OR = 3.96; P < .001), better diffusion-weighted MRI quality (OR = 1.53; P = .02), and fewer lesions at MRI (OR = 0.78; P = .045). Better csPCa detection was associated with a shorter MRI interval between external and in-house scans (OR = 0.58; P = .03) and larger lesion size (OR = 10.19; P < .001). Conclusion The AI model exhibited modest performance in identifying both overall and csPCa-positive lesions on external bpMRI scans. Keywords: MR Imaging, Urinary, Prostate Supplemental material is available for this article. © RSNA, 2024.
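The paired comparison permutation test used above to contrast per-lesion detection rates can be sketched in a few lines. This is a toy illustration with made-up detection flags, not the study's code; function and variable names are my own:

```python
import random

def paired_permutation_test(hits_a, hits_b, n_perm=10_000, seed=0):
    """Two-sided paired permutation test on per-lesion detection outcomes.

    hits_a, hits_b: parallel lists of 0/1 detection flags for the same
    lesions on two scan sets. Under the null hypothesis the labels are
    exchangeable within each pair, so we randomly swap them per pair and
    recompute the difference in detection rates.
    """
    rng = random.Random(seed)
    n = len(hits_a)
    observed = abs(sum(hits_a) - sum(hits_b)) / n
    extreme = 0
    for _ in range(n_perm):
        diff = 0
        for a, b in zip(hits_a, hits_b):
            if rng.random() < 0.5:  # swap labels within this pair
                a, b = b, a
            diff += a - b
        if abs(diff) / n >= observed:
            extreme += 1
    return extreme / n_perm

# Toy example: detection flags for 20 paired lesions on two scan sets.
external = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
in_house = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1]
p = paired_permutation_test(external, in_house)
```

Because the lesions are paired (same lesion imaged twice), labels are swapped within pairs rather than shuffled globally, which is what makes this a paired rather than independent-samples test.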


MeSH Terms
Deep Learning; Magnetic Resonance Imaging; Prostatic Neoplasms; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Retrospective Studies; Aged; Magnetic Resonance Imaging/methods; Middle Aged; Algorithms; Prostate/diagnostic imaging; Prostate/pathology; Image Interpretation, Computer-Assisted/methods; Image-Guided Biopsy/methods
2.
Radiology; 311(2): e230750, 2024 May.
Article in English | MEDLINE | ID: mdl-38713024

ABSTRACT

Background Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency. Artificial intelligence (AI) models can assist in mpMRI interpretation, but large training data sets and extensive model testing are required. Purpose To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results. Materials and Methods This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC). Results A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. 
In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29. Conclusion The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination. ClinicalTrials.gov Identifier: NCT03354416 © RSNA, 2024 Supplemental material is available for this article.
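The Dice similarity coefficient (DSC) reported for lesion segmentation is twice the overlap between two masks divided by their summed sizes. A minimal sketch on binary masks (illustrative only, not the study's implementation):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient for two binary masks (flat 0/1 sequences).

    DSC = 2|A ∩ B| / (|A| + |B|): 1.0 is perfect overlap, 0.0 is none.
    """
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0  # both masks empty -> agreement

# Toy 1D example: the prediction overlaps half of the reference lesion.
ref  = [0, 1, 1, 1, 1, 0, 0, 0]
pred = [0, 0, 0, 1, 1, 1, 1, 0]
print(dice(ref, pred))  # 2*2 / (4+4) = 0.5
```

In practice the masks are 3D voxel volumes flattened the same way; a mean DSC of 0.29, as reported here, indicates lesions are found but their extents only partially agree with the radiologist's contours.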


MeSH Terms
Deep Learning; Multiparametric Magnetic Resonance Imaging; Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Aged; Prospective Studies; Multiparametric Magnetic Resonance Imaging/methods; Middle Aged; Algorithms; Prostate/diagnostic imaging; Prostate/pathology; Image-Guided Biopsy/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods
3.
Acad Radiol; 31(10): 4096-4106, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38670874

ABSTRACT

RATIONALE AND OBJECTIVES: Extraprostatic extension (EPE) is well established as a significant predictor of prostate cancer aggressiveness and recurrence. Accurate EPE assessment prior to radical prostatectomy can impact surgical approach. We aimed to utilize a deep learning-based AI workflow for automated EPE grading from prostate T2W MRI, ADC map, and high-b-value DWI. MATERIALS AND METHODS: An expert genitourinary radiologist conducted prospective clinical assessments of MRI scans for 634 patients and assigned risk for EPE using a grading technique. The training set and held-out independent test set consisted of 507 patients and 127 patients, respectively. Existing deep-learning AI models for prostate organ and lesion segmentation were leveraged to extract area and distance features for random forest classification models. Model performance was evaluated using balanced accuracy, ROC AUCs for each EPE grade, as well as sensitivity, specificity, and accuracy compared to EPE on histopathology. RESULTS: A balanced accuracy score of 0.390 ± 0.078 was achieved using a lesion detection probability threshold of 0.45 and distance features. Using the test set, ROC AUCs for AI-assigned EPE grades 0-3 were 0.70, 0.65, 0.68, and 0.55, respectively. When using EPE ≥ 1 as the threshold for positive EPE, the model achieved a sensitivity of 0.67, specificity of 0.73, and accuracy of 0.72, compared to radiologist sensitivity of 0.81, specificity of 0.62, and accuracy of 0.66, using histopathology as the ground truth. CONCLUSION: Our AI workflow for assigning imaging-based EPE grades achieves an accuracy for predicting histologic EPE approaching that of physicians. This automated workflow has the potential to enhance physician decision-making for assessing the risk of EPE in patients undergoing treatment for prostate cancer due to its consistency and automation.
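Balanced accuracy, used above to summarize the four-grade EPE classifier, is the mean of per-class recalls, which keeps a rare grade from being swamped by the majority class. A quick sketch with hypothetical labels (not the study's data):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: robust to class imbalance, unlike plain accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# EPE grades 0-3; higher grades are rare, so plain accuracy would be
# dominated by grade 0 performance.
y_true = [0, 0, 0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 0, 1, 1, 2, 2, 2, 3, 0]
print(balanced_accuracy(y_true, y_pred))  # (3/4 + 1/2 + 2/2 + 1/2) / 4 = 0.6875
```

For a four-class problem, chance level is 0.25, which puts the reported 0.390 modestly above chance.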


MeSH Terms
Deep Learning; Magnetic Resonance Imaging; Neoplasm Grading; Prostatic Neoplasms; Sensitivity and Specificity; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Prostatic Neoplasms/surgery; Magnetic Resonance Imaging/methods; Prospective Studies; Middle Aged; Aged; Image Interpretation, Computer-Assisted/methods; Prostatectomy; Random Forest
4.
Abdom Radiol (NY); 49(5): 1545-1556, 2024 May.
Article in English | MEDLINE | ID: mdl-38512516

ABSTRACT

OBJECTIVE: Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS: A single institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominoperineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of axial T2-weighted (T2W) sequence and corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) deep learning (DL)-based model, (2) commercially available shape-based model, and (3) federated DL-based model. The Dice similarity coefficient (DSC) was calculated against the expert segmentations. DSC by model and scan factors were evaluated with the Wilcoxon signed-rank test and a linear mixed effects (LMER) model. RESULTS: 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced DL models (p < 0.001) while signal factors influenced all (p < 0.001).
CONCLUSION: Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.
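The Wilcoxon signed-rank test used here to compare per-scan DSC between models can be sketched as follows. This is a minimal normal-approximation version without tie or continuity corrections, on toy DSC values rather than the study's data:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided paired Wilcoxon signed-rank test (normal approximation,
    no tie or continuity correction), e.g. for per-scan DSC of two models."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over exact ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Toy per-scan DSC for two segmentation models on the same 8 scans.
dsc_a = [0.92, 0.90, 0.95, 0.88, 0.91, 0.93, 0.89, 0.94]
dsc_b = [0.87, 0.85, 0.94, 0.80, 0.86, 0.90, 0.84, 0.95]
p_value = wilcoxon_signed_rank(dsc_a, dsc_b)
```

The test is paired on scans, so it asks whether one model is consistently better scan by scan, not whether the pooled DSC distributions differ. Production analyses would use a library implementation (e.g. an exact or tie-corrected version) rather than this sketch.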


MeSH Terms
Algorithms; Artificial Intelligence; Magnetic Resonance Imaging; Prostatic Neoplasms; Humans; Male; Retrospective Studies; Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/surgery; Prostatic Neoplasms/pathology; Image Interpretation, Computer-Assisted/methods; Middle Aged; Aged; Prostate/diagnostic imaging; Deep Learning
5.
Acad Radiol; 31(6): 2424-2433, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38262813

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs and to compare its performance with radiologists. MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. Segmentation AI (3DAISeg) was developed using the lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36). RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving a 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1 for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false positive of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated approach AI 40.0% vs. Juniors 32.0% vs. Seniors 50.0%) and NPV (AI 96.2% vs. Juniors 95.7% vs. Seniors 91.9%). 
When using 3DAISeg, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. Juniors 32.0% vs. Seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. Juniors 95.7% vs. Seniors 91.9%). CONCLUSION: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.
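The PPV, NPV, and F1 figures quoted above follow directly from confusion-matrix counts. A small sketch with toy counts (not the study's numbers):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    ppv = tp / (tp + fp)               # precision: metastatic calls that are real
    npv = tn / (tn + fn)               # benign calls that are truly benign
    sens = tp / (tp + fn)              # sensitivity / recall
    f1 = 2 * ppv * sens / (ppv + sens) # harmonic mean of precision and recall
    return {"PPV": ppv, "NPV": npv, "sensitivity": sens, "F1": f1}

# Toy counts for metastatic-vs-benign lesion classification.
m = classification_metrics(tp=40, fp=60, tn=500, fn=20)
print(round(m["PPV"], 3), round(m["NPV"], 3))  # 0.4 0.962
```

Note how a high NPV can coexist with a modest PPV when benign lesions vastly outnumber metastases, which is the pattern both the AI and the radiologists show in this abstract.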


MeSH Terms
Bone Neoplasms; Deep Learning; Neoplasm Staging; Prostatic Neoplasms; Tomography, X-Ray Computed; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Bone Neoplasms/diagnostic imaging; Bone Neoplasms/secondary; Retrospective Studies; Tomography, X-Ray Computed/methods; Aged; Middle Aged; Radiographic Image Interpretation, Computer-Assisted/methods
6.
Nat Med; 27(10): 1735-1743, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34526699

ABSTRACT

Federated learning (FL) is a method used for training artificial intelligence models with data from multiple sources while maintaining data anonymity, thus removing many barriers to data sharing. Here we used data from 20 institutes across the globe to train a FL model, called EXAM (electronic medical record (EMR) chest X-ray AI model), that predicts the future oxygen requirements of symptomatic patients with COVID-19 using inputs of vital signs, laboratory data and chest X-rays. EXAM achieved an average area under the curve (AUC) >0.92 for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room, and it provided 16% improvement in average AUC measured across all participating sites and an average increase in generalizability of 38% when compared with models trained at a single site using that site's data. For prediction of mechanical ventilation treatment or death at 24 h at the largest independent test site, EXAM achieved a sensitivity of 0.950 and specificity of 0.882. In this study, FL facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.
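The aggregation step at the heart of federated learning as described above can be illustrated with a FedAvg-style weighted parameter average: each site trains locally, and only model weights travel to the server. This sketch uses toy weight vectors and is not the EXAM implementation:

```python
def federated_average(site_weights, site_sizes):
    """One FedAvg aggregation step: average each parameter across sites,
    weighted by the number of local training samples. Raw patient data
    never leaves a site; only these weight vectors are exchanged.
    """
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * s for w, s in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]

# Three sites with different data volumes contribute local model weights.
weights = [[0.2, 1.0], [0.4, 0.0], [0.6, 2.0]]
sizes = [100, 300, 600]
avg = federated_average(weights, sizes)
# Param-wise: (0.2*100 + 0.4*300 + 0.6*600)/1000 = 0.5, and
#             (1.0*100 + 0.0*300 + 2.0*600)/1000 = 1.3
```

In a real deployment this step repeats over many rounds, with the averaged model broadcast back to sites for further local training; weighting by site size is what lets large and small institutions contribute proportionately.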


MeSH Terms
COVID-19/physiopathology; Machine Learning; Outcome Assessment, Health Care; COVID-19/therapy; COVID-19/virology; Electronic Health Records; Humans; Prognosis; SARS-CoV-2/isolation & purification
7.
J Am Med Inform Assoc; 28(6): 1259-1264, 2021 Jun 12.
Article in English | MEDLINE | ID: mdl-33537772

ABSTRACT

OBJECTIVE: To demonstrate that multi-institutional model training is feasible without centralizing or sharing the underlying physical data, using federated learning (FL). MATERIALS AND METHODS: Deep learning models were trained at each participating institution using local clinical data, and an additional model was trained using FL across all of the institutions. RESULTS: We found that the FL model exhibited superior performance and generalizability compared with the models trained at single institutions, with an overall performance level that was significantly better than that of any of the institutional models alone when evaluated on held-out test sets from each institution and an outside challenge dataset. DISCUSSION: The power of FL was successfully demonstrated across 3 academic institutions while avoiding the privacy risk associated with the transfer and pooling of patient data. CONCLUSION: Federated learning is an effective methodology that merits further study to enable accelerated development of models across institutions, enabling greater generalizability in clinical use.


MeSH Terms
Deep Learning; Information Dissemination; Humans; Privacy
8.
Res Sq; 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33442676

ABSTRACT

'Federated Learning' (FL) is a method to train Artificial Intelligence (AI) models with data from multiple sources while maintaining anonymity of the data, thus removing many barriers to data sharing. During the SARS-CoV-2 pandemic, 20 institutes collaborated on a healthcare FL study to predict future oxygen requirements of infected patients using inputs of vital signs, laboratory data, and chest x-rays, constituting the "EXAM" (EMR CXR AI Model) model. EXAM achieved an average Area Under the Curve (AUC) of over 0.92, an average improvement of 16%, and a 38% increase in generalisability over local models. The FL paradigm was successfully applied to facilitate a rapid data science collaboration without data exchange, resulting in a model that generalised across heterogeneous, unharmonized datasets. This provided the broader healthcare community with a validated model to respond to COVID-19 challenges, as well as set the stage for broader use of FL in healthcare.
