Results 1 - 6 of 6
1.
Radiology; 312(2): e232635, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39105640

ABSTRACT

Background Multiparametric MRI can help identify clinically significant prostate cancer (csPCa) (Gleason score ≥7) but is limited by reader experience and interobserver variability. In contrast, deep learning (DL) produces deterministic outputs. Purpose To develop a DL model to predict the presence of csPCa by using patient-level labels without information about tumor location and to compare its performance with that of radiologists. Materials and Methods Data from patients without known csPCa who underwent MRI from January 2017 to December 2019 at one of multiple sites of a single academic institution were retrospectively reviewed. A convolutional neural network was trained to predict csPCa from T2-weighted images, diffusion-weighted images, apparent diffusion coefficient maps, and T1-weighted contrast-enhanced images. The reference standard was pathologic diagnosis. Radiologist performance was evaluated as follows: Radiology reports were used for the internal test set, and four radiologists' PI-RADS ratings were used for the external (ProstateX) test set. The performance was compared using areas under the receiver operating characteristic curves (AUCs) and the DeLong test. Gradient-weighted class activation maps (Grad-CAMs) were used to show tumor localization. Results Among 5735 examinations in 5215 patients (mean age, 66 years ± 8 [SD]; all male), 1514 examinations (1454 patients) showed csPCa. In the internal test set (400 examinations), the AUC was 0.89 and 0.89 for the DL classifier and radiologists, respectively (P = .88). In the external test set (204 examinations), the AUC was 0.86 and 0.84 for the DL classifier and radiologists, respectively (P = .68). DL classifier plus radiologists had an AUC of 0.89 (P < .001). Grad-CAMs demonstrated activation over the csPCa lesion in 35 of 38 and 56 of 58 true-positive examinations in internal and external test sets, respectively. 
Conclusion The performance of a DL model was not different from that of radiologists in the detection of csPCa at MRI, and Grad-CAMs localized the tumor. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Johnson and Chandarana in this issue.


Subjects
Deep Learning , Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Retrospective Studies , Aged , Middle Aged , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Multiparametric Magnetic Resonance Imaging/methods , Prostate/diagnostic imaging , Prostate/pathology
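The AUC comparison at the core of this study has a simple probabilistic reading: the AUC is the probability that a randomly chosen positive exam receives a higher classifier score than a randomly chosen negative one (the Mann-Whitney interpretation). A minimal sketch, with hypothetical scores that are not from the study:

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability that a positive case
    outscores a negative one; ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier outputs for csPCa-positive and csPCa-negative exams.
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.4, 0.65, 0.3, 0.2]
model_auc = auc(pos, neg)  # 0.9375: one of 16 positive/negative pairs is mis-ordered
```

In practice the study compares two such AUCs (model vs. radiologists) with the DeLong test, which accounts for the correlation between paired AUC estimates on the same exams.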
2.
Eur Radiol; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842692

ABSTRACT

OBJECTIVES: To develop an automated pipeline for extracting prostate cancer-related information from clinical notes. MATERIALS AND METHODS: This retrospective study included 23,225 patients who underwent prostate MRI between 2017 and 2022. Cancer risk factors (family history of cancer and digital rectal exam findings), pre-MRI prostate pathology, and treatment history of prostate cancer were extracted from free-text clinical notes in English as binary or multi-class classification tasks. Any sentence containing pre-defined keywords was extracted from clinical notes written within one year before the MRI. After manually creating sentence-level datasets with ground truth, Bidirectional Encoder Representations from Transformers (BERT)-based sentence-level models were fine-tuned using the extracted sentence as input and the category as output. The patient-level output was determined by compiling multiple sentence-level outputs using tree-based models. Sentence-level classification performance was evaluated using the area under the receiver operating characteristic curve (AUC) on 15% of the sentence-level dataset (sentence-level test set). Patient-level classification performance was evaluated on a patient-level test set created by radiologists who reviewed the clinical notes of 603 patients. Accuracy and sensitivity were compared between the pipeline and radiologists. RESULTS: Sentence-level AUCs were ≥ 0.94. The pipeline showed higher patient-level sensitivity than radiologists for extracting cancer risk factors (e.g., family history of prostate cancer, 96.5% vs. 77.9%, p < 0.001), but lower accuracy in classifying pre-MRI prostate pathology (92.5% vs. 95.9%, p = 0.002) and treatment history of prostate cancer (95.5% vs. 97.7%, p = 0.03). CONCLUSION: The proposed pipeline showed promising performance, especially for extracting cancer risk factors from patients' clinical notes.
CLINICAL RELEVANCE STATEMENT: The natural language processing pipeline showed a higher sensitivity for extracting prostate cancer risk factors than radiologists and may help efficiently gather relevant text information when interpreting prostate MRI. KEY POINTS: When interpreting prostate MRI, it is necessary to extract prostate cancer-related information from clinical notes. This pipeline extracted the presence of prostate cancer risk factors with higher sensitivity than radiologists. Natural language processing may help radiologists efficiently gather relevant prostate cancer-related text information.
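The first stage of the pipeline, extracting keyword-containing sentences from free-text notes before sentence-level classification, can be sketched as below. The keyword list and the example note are hypothetical; the paper's actual keyword lists are not specified here.

```python
import re

# Hypothetical keywords for a risk-factor task (not the paper's actual lists).
KEYWORDS = {"family history", "prostate cancer", "psa"}

def extract_candidate_sentences(note: str, keywords=KEYWORDS):
    """Split a free-text note into sentences and keep those containing
    any pre-defined keyword (case-insensitive). Kept sentences would be
    passed to the fine-tuned BERT sentence-level classifier."""
    sentences = re.split(r"(?<=[.!?])\s+", note.strip())
    return [s for s in sentences
            if any(k in s.lower() for k in keywords)]

note = ("Patient denies urinary symptoms. Family history of prostate "
        "cancer in father. PSA 4.2 ng/mL last month.")
candidates = extract_candidate_sentences(note)  # keeps the last two sentences
```

The patient-level label is then produced by a second, tree-based model that compiles the per-sentence predictions.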

3.
Abdom Radiol (NY); 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896250

ABSTRACT

PURPOSE: To develop a deep learning (DL) zonal segmentation model of the prostate from T2-weighted MR images and to evaluate transition zone PSA density (TZ-PSAD) for prediction of the presence of csPCa (Gleason score of 7 or higher) compared to whole-gland PSA density (PSAD). METHODS: 1020 patients with a prostate MRI were randomly selected to develop a DL zonal segmentation model. The test dataset included 20 cases in which two radiologists manually segmented both the peripheral zone (PZ) and the transition zone (TZ). The pair-wise Dice index was calculated for each zone. For the prediction of csPCa using PSAD and TZ-PSAD, we used 3461 consecutive MRI exams performed in patients without a history of prostate cancer, with pathological confirmation and available PSA values, but not used in the development of the segmentation model, as the internal test set, and 1460 MRI exams from the PI-CAI challenge as the external test set. PSAD and TZ-PSAD were calculated from the segmentation model output. The area under the receiver operating characteristic curve (AUC) was compared between PSAD and TZ-PSAD using univariate and multivariate (age-adjusted) analyses with the DeLong test. RESULTS: Dice scores of the model against the two radiologists were 0.87/0.87 and 0.74/0.72 for the TZ and PZ, while those between the two radiologists were 0.88 for the TZ and 0.75 for the PZ. For the prediction of csPCa, the AUCs of TZ-PSAD were significantly higher than those of PSAD in both the internal test set (univariate analysis, 0.75 vs. 0.73, p < 0.001; multivariate analysis, 0.80 vs. 0.78, p < 0.001) and the external test set (univariate analysis, 0.76 vs. 0.74, p < 0.001; multivariate analysis, 0.77 vs. 0.75, p < 0.001). CONCLUSION: DL model-derived zonal segmentation facilitates the practical measurement of TZ-PSAD and shows it to be a slightly better predictor of csPCa than the conventional PSAD. Use of TZ-PSAD may increase the sensitivity of detecting csPCa by 2-5% at a commonly used specificity level.
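The two quantities the segmentation model makes practical, the Dice overlap used to validate the zonal masks and the PSA density ratios, are arithmetically simple. A sketch with hypothetical values (the PSA level and volumes below are illustrative, not from the study):

```python
def dice(a: set, b: set) -> float:
    """Dice similarity between two voxel sets (segmentation masks):
    twice the overlap divided by the total size of both masks."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def psa_density(psa_ng_ml: float, volume_ml: float) -> float:
    """PSA density: serum PSA divided by a segmented volume. Dividing by
    the whole-gland volume gives PSAD; by the TZ volume, TZ-PSAD."""
    if volume_ml <= 0:
        raise ValueError("volume must be positive")
    return psa_ng_ml / volume_ml

# Hypothetical exam: PSA 6.0 ng/mL, whole-gland volume 40 mL, TZ volume 25 mL.
psad = psa_density(6.0, 40.0)     # 0.15 ng/mL per mL
tz_psad = psa_density(6.0, 25.0)  # 0.24 ng/mL per mL
```

Because the TZ volume is always smaller than the whole-gland volume, TZ-PSAD is systematically higher than PSAD for the same exam; the study's finding is that it also discriminates csPCa slightly better.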

4.
J Imaging Inform Med; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38587766

ABSTRACT

Automated segmentation tools often encounter accuracy and adaptability issues when applied to images with different pathologies. The purpose of this study was to explore the feasibility of building a workflow that efficiently routes images to specifically trained segmentation models. By implementing a deep learning classifier that automatically classifies images and routes them to the appropriate segmentation model, the workflow aims to segment images with different pathologies accurately. The data used in this study were 350 CT images from patients affected by polycystic liver disease and 350 CT images from patients presenting with liver metastases from colorectal cancer. All images had the liver manually segmented by trained imaging analysts. Our proposed adaptive segmentation workflow achieved a statistically significant improvement for the task of total liver segmentation compared to the generic single-segmentation model (non-parametric Wilcoxon signed-rank test, n = 100, p-value << 0.001). This approach is applicable in a wide range of scenarios and should prove useful in clinical implementations of segmentation pipelines.
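The classify-then-route idea can be sketched with stand-in components; the classifier and the two segmentation functions below are hypothetical placeholders for the trained CNNs, and the dict-based "image" is a dummy:

```python
# Stand-in for the deep learning pathology classifier: here it simply
# reads a label from a dummy dict-based image.
def classify_pathology(image) -> str:
    return image["pathology"]

# Placeholders for the pathology-specific segmentation models.
def segment_polycystic(image):
    return f"mask from polycystic-liver model for {image['id']}"

def segment_metastasis(image):
    return f"mask from metastasis model for {image['id']}"

# Routing table: one specifically trained model per pathology class.
ROUTES = {
    "polycystic": segment_polycystic,
    "metastasis": segment_metastasis,
}

def adaptive_segment(image):
    """Route an image to the segmentation model trained on its pathology."""
    label = classify_pathology(image)
    return ROUTES[label](image)

result = adaptive_segment({"id": "ct_001", "pathology": "polycystic"})
```

The design choice is that each downstream model only ever sees the pathology it was trained on, which is what the study credits for the accuracy gain over a single generic model.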

5.
Neuroscience; 546: 178-187, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38518925

ABSTRACT

Automatic identification of brachial plexus (BP) abnormality from magnetic resonance imaging (MRI), to localize and identify a neurologic injury in clinical practice, is still a novel topic in brachial plexopathy. This study developed and evaluated an approach to differentiate abnormal BP with artificial intelligence (AI) over three commonly used MRI sequences, i.e., T1, fluid-sensitive, and post-gadolinium sequences. A BP dataset was collected by radiological experts, and a semi-supervised AI method (based on nnU-Net) was used to segment the BP. Thereafter, a radiomics method was utilized to extract 107 shape and texture features from these ROIs. From various machine learning methods, we selected six widely recognized classifiers for training our BP models and assessing their efficacy. To optimize these models, we introduced a dynamic feature selection approach aimed at discarding redundant and less informative features. Our experimental findings demonstrated that, in identifying abnormal BP cases, shape features displayed heightened sensitivity compared to texture features. Notably, both the logistic classifier and the bagging classifier outperformed the other methods in our study. The model trained on fluid-sensitive sequences performed best, notably exceeding the results of both the T1 and post-gadolinium sequences: both its classification accuracy and its AUC (area under the receiver operating characteristic curve) on the fluid-sensitive sequence exceeded 90%. This outcome served as a robust experimental validation, affirming the substantial potential and strong feasibility of integrating AI into clinical practice.


Subjects
Artificial Intelligence , Brachial Plexus , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brachial Plexus/diagnostic imaging , Brachial Plexus Neuropathies/diagnostic imaging , Machine Learning , Female , Male , Adult
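The "dynamic feature selection" step, discarding redundant radiomic features before classifier training, can be illustrated with a simple greedy correlation filter. This is a simplified stand-in, not the paper's exact method; the threshold and the toy feature table are hypothetical:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def drop_redundant(features: dict, threshold: float = 0.95):
    """Greedy filter: keep a feature only if its |r| with every
    already-kept feature is below the threshold."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Hypothetical radiomic features measured over five patients.
feats = {
    "shape_volume":    [10, 12, 11, 15, 14],
    "shape_diameter":  [20, 24, 22, 30, 28],  # exact multiple of volume: redundant
    "texture_entropy": [5, 1, 4, 2, 6],
}
selected = drop_redundant(feats)  # drops the perfectly correlated duplicate
```

The surviving feature subset would then be fed to the six candidate classifiers (e.g., the logistic and bagging classifiers highlighted in the abstract).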
6.
Abdom Radiol (NY); 49(3): 964-974, 2024 03.
Article in English | MEDLINE | ID: mdl-38175255

ABSTRACT

PURPOSE: To evaluate the robustness of a radiomics-based support vector machine (SVM) model for detection of visually occult PDA on pre-diagnostic CTs by simulating common variations in image acquisition and radiomics workflow using image perturbation methods. METHODS: Eighteen algorithmically generated perturbations, which simulated variations in image noise levels (σ, 2σ, 3σ, 5σ), image rotation [both the CT image and the corresponding pancreas segmentation mask, by 45° and 90° in the axial plane], voxel resampling (isotropic and anisotropic), gray-level discretization [bin width (BW) 32 and 64], and pancreas segmentation (sequential erosions by 3, 4, 6, and 8 pixels and dilations by 3, 4, and 6 pixels from the boundary), were introduced to the original (unperturbed) test subset (n = 128; 45 pre-diagnostic CTs, 83 control CTs with normal pancreas). Radiomic features were extracted from the pancreas masks of these additional test subsets, and the model's performance was compared vis-a-vis the unperturbed test subset. RESULTS: The model correctly classified 43 of 45 pre-diagnostic CTs and 75 of 83 control CTs in the unperturbed test subset, achieving 92.2% accuracy and an AUC of 0.98. The model's performance was unaffected by a three-fold increase in noise level except for sensitivity declining to 80% at 3σ (p = 0.02). Performance remained comparable vis-a-vis the unperturbed test subset despite variations in image rotation (p = 0.99), voxel resampling (p = 0.25-0.31), change in gray-level BW to 32 (p = 0.31-0.99), and erosions/dilations up to 4 pixels from the pancreas boundary (p = 0.12-0.34). CONCLUSION: The model's high performance for detection of visually occult PDA was robust within a broad range of clinically relevant variations in image acquisition and radiomics workflow.


Subjects
Adenocarcinoma , Pancreatic Neoplasms , Resilience, Psychological , Humans , Adenocarcinoma/diagnostic imaging , Pancreatic Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Radiomics , Workflow , Image Processing, Computer-Assisted/methods , Machine Learning , Retrospective Studies
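The erosion/dilation perturbations applied to the pancreas segmentation mask can be illustrated with toy 4-neighbour morphology on a 2D pixel set. This is a deliberate simplification: the study perturbs 3D masks by 3-8 pixels, whereas the sketch below grows or shrinks a small 2D mask by one step:

```python
NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def dilate(mask: set, steps: int = 1) -> set:
    """Grow a 2D pixel mask: each step adds every 4-neighbour of the mask."""
    out = set(mask)
    for _ in range(steps):
        out |= {(x + dx, y + dy)
                for (x, y) in out for dx, dy in NEIGHBOURS}
    return out

def erode(mask: set, steps: int = 1) -> set:
    """Shrink a 2D pixel mask: keep only pixels whose 4-neighbours are
    all inside the mask (boundary pixels are removed each step)."""
    out = set(mask)
    for _ in range(steps):
        out = {(x, y) for (x, y) in out
               if all((x + dx, y + dy) in out for dx, dy in NEIGHBOURS)}
    return out

square = {(x, y) for x in range(5) for y in range(5)}  # 5x5 toy "pancreas" mask
shrunk = erode(square, 1)   # 3x3 interior remains
grown = dilate(square, 1)   # original 25 pixels plus a one-pixel rim
```

In the study, radiomic features are re-extracted from each perturbed mask and the SVM's classifications are compared against those from the unperturbed masks to quantify robustness.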