Results 1 - 4 of 4
1.
Eur Radiol ; 34(10): 6229-6240, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38538841

ABSTRACT

OBJECTIVES: To develop and test zone-specific prostate-specific antigen density (sPSAD) combined with PI-RADS to guide prostate biopsy decision strategies (BDS).

METHODS: This retrospective study included consecutive patients who underwent prostate MRI and biopsy (01/2012-10/2018). The whole gland and transition zone (TZ) were segmented at MRI using a retrained deep learning system (DLS; nnU-Net) to calculate PSAD and sPSAD, respectively. Additionally, sPSAD and PI-RADS were combined in a BDS, and diagnostic performances for detecting Grade Group ≥ 2 (GG ≥ 2) prostate cancer were compared. Patient-based cancer detection using sPSAD was assessed by bootstrapping with 1000 repetitions and reported as area under the curve (AUC). Clinical utility of the BDS was tested in the hold-out test set using decision curve analysis. Statistics included the nonparametric DeLong test for AUCs and the Fisher-Yates test for the remaining performance metrics.

RESULTS: A total of 1604 patients (median age, 67 years; interquartile range, 61-73 years) with a 48% GG ≥ 2 prevalence (774/1604) were evaluated. Using DLS-based prostate and TZ volumes (Dice coefficients of 0.89 (95% confidence interval, 0.80-0.97) and 0.84 (0.70-0.99)), GG ≥ 2 detection using PSAD was inferior to sPSAD (AUC, 0.71 (0.68-0.74) vs. 0.73 (0.70-0.76); p < 0.001). Combining PI-RADS with sPSAD, GG ≥ 2 detection specificity doubled from 18% (10-20%) to 43% (30-44%; p < 0.001) with similar sensitivity (93% (89-96%) vs. 97% (94-99%); p = 0.052) when biopsies were taken in PI-RADS 4-5 cases and in PI-RADS 3 cases only if sPSAD was ≥ 0.42 ng/mL/cc, as compared to all PI-RADS 3-5 cases. Additionally, using the sPSAD-based BDS, false positives were reduced by 25% (123 (104-142) vs. 165 (146-185); p < 0.001).

CONCLUSION: Using sPSAD to guide biopsy decisions in PI-RADS 3 lesions can reduce false positives at MRI while maintaining high sensitivity for GG ≥ 2 cancers.

CLINICAL RELEVANCE STATEMENT: Transition zone-specific prostate-specific antigen density can improve the accuracy of prostate cancer detection compared with MRI assessment alone by lowering the number of false-positive cases without significantly missing men with ISUP GG ≥ 2 cancers.

KEY POINTS:
• Prostate biopsy decision strategies using PI-RADS at MRI are limited by a substantial proportion of false positives that do not yield grade group ≥ 2 prostate cancer.
• PI-RADS combined with transition zone (TZ)-specific prostate-specific antigen density (PSAD) decreased the number of unproductive biopsies by 25% compared with PI-RADS alone.
• TZ-specific PSAD also improved the specificity of MRI-directed biopsies by 9% compared with whole-gland PSAD, while showing identical sensitivity.
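The core of the proposed BDS is simple arithmetic: PSAD divides serum PSA by the whole-gland volume, sPSAD divides it by the transition-zone volume, and PI-RADS 3 cases are biopsied only when sPSAD reaches the 0.42 ng/mL/cc cutoff reported above. The following sketch illustrates that rule; the class, function names, and example values are illustrative assumptions rather than the authors' code, and the segmentation volumes are taken as given instead of being computed with nnU-Net.

```python
# A minimal sketch of the sPSAD-based biopsy decision strategy, assuming segmentation
# volumes are already available; class and function names, and the example values,
# are illustrative, while the 0.42 ng/mL/cc cutoff comes from the abstract.
from dataclasses import dataclass

@dataclass
class ProstateCase:
    psa_ng_ml: float        # serum PSA in ng/mL
    gland_volume_cc: float  # whole-gland volume from MRI segmentation, in cc
    tz_volume_cc: float     # transition-zone (TZ) volume from MRI segmentation, in cc
    pirads: int             # overall PI-RADS assessment category (1-5)

def psad(case: ProstateCase) -> float:
    """Conventional PSA density: PSA divided by whole-gland volume."""
    return case.psa_ng_ml / case.gland_volume_cc

def spsad(case: ProstateCase) -> float:
    """Zone-specific PSA density: PSA divided by transition-zone volume."""
    return case.psa_ng_ml / case.tz_volume_cc

def biopsy_recommended(case: ProstateCase, spsad_cutoff: float = 0.42) -> bool:
    """Biopsy decision strategy from the abstract: biopsy all PI-RADS 4-5 cases,
    and PI-RADS 3 cases only when sPSAD is at or above the cutoff (ng/mL/cc)."""
    if case.pirads >= 4:
        return True
    if case.pirads == 3:
        return spsad(case) >= spsad_cutoff
    return False

# Example: a PI-RADS 3 case with low sPSAD would be spared biopsy under this rule
example = ProstateCase(psa_ng_ml=6.2, gland_volume_cc=55.0, tz_volume_cc=28.0, pirads=3)
print(round(psad(example), 2), round(spsad(example), 2), biopsy_recommended(example))
```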


Subject(s)
Image-Guided Biopsy; Magnetic Resonance Imaging; Prostate-Specific Antigen; Prostatic Neoplasms; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Prostate-Specific Antigen/blood; Retrospective Studies; Aged; Magnetic Resonance Imaging/methods; Middle Aged; Image-Guided Biopsy/methods; False Positive Reactions; Prostate/pathology; Prostate/diagnostic imaging
2.
Eur J Radiol ; 166: 110964, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37453274

ABSTRACT

PURPOSE: The ever-increasing volume of medical imaging data and the interest in Big Data research bring challenges to data organization, categorization, and retrieval. Although the radiological value chain is almost entirely digital, data structuring has largely been performed pragmatically, with naming and metadata standards that are insufficient for the stringent needs of image analysis. To enable automated data management independent of naming and metadata, this study focused on developing a convolutional neural network (CNN) that classifies medical images based solely on voxel data.

METHOD: A 3D CNN (3D-ResNet18) was trained on a dataset of 31,602 prostate MRI volumes covering 10 different sequence types from 1243 patients. A five-fold cross-validation approach with patient-based splits was used for training and testing. Training was repeated with gradually reduced training data, and classification accuracies were assessed to determine the minimum training data required for sufficient performance. The trained model and the developed method were tested on three external datasets.

RESULTS: The model achieved an overall accuracy of 99.88% ± 0.13% in classifying typical prostate MRI sequence types. When trained with approximately 10% of the original cohort (112 patients), the CNN still achieved an accuracy of 97.43% ± 2.10%. In external testing, the model achieved sensitivities of > 90% for 10/15 tested sequence types.

CONCLUSIONS: The CNN developed here enabled automatic and reliable sequence identification in prostate MRI. Ultimately, such CNN models for voxel-based sequence identification could substantially enhance the management of medical imaging data, improve workflow efficiency and data quality, and allow for robust clinical AI workflows.
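As a concrete illustration of the architecture named in the abstract, the sketch below adapts a 3D ResNet-18 to single-channel MRI volumes with a 10-class output. It assumes a recent PyTorch/torchvision install; the input size, the single-channel stem replacement, and the dummy forward pass are illustrative assumptions rather than the authors' training setup.

```python
# A minimal sketch of a 3D-ResNet18 sequence-type classifier, assuming PyTorch and a
# recent torchvision, single-channel MRI volumes, and 10 sequence classes as in the
# abstract; the input size and the stem replacement are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

NUM_SEQUENCE_TYPES = 10  # e.g., T2w planes, DWI b-values, ADC, DCE (exact classes assumed)

model = r3d_18(weights=None)  # train from scratch on MRI volumes
# Replace the 3-channel video stem with a single-channel 3D convolution for MRI
model.stem[0] = nn.Conv3d(1, 64, kernel_size=(3, 7, 7),
                          stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
# Replace the classification head with one output per sequence type
model.fc = nn.Linear(model.fc.in_features, NUM_SEQUENCE_TYPES)

# Forward pass on a dummy volume: (batch, channel, slices, height, width)
volume = torch.randn(1, 1, 24, 128, 128)
logits = model(volume)
predicted_sequence_type = logits.argmax(dim=1)
```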


Subject(s)
Metadata; Prostate; Male; Humans; Magnetic Resonance Imaging; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
3.
Radiology ; 307(4): e222276, 2023 May.
Article in English | MEDLINE | ID: mdl-37039688

ABSTRACT

Background: Clinically significant prostate cancer (PCa) diagnosis at MRI requires accurate and efficient radiologic interpretation. Although artificial intelligence may assist in this task, a lack of transparency has limited clinical translation.

Purpose: To develop an explainable artificial intelligence (XAI) model for clinically significant PCa diagnosis at biparametric MRI, using Prostate Imaging Reporting and Data System (PI-RADS) features to justify its classifications.

Materials and Methods: This retrospective study included consecutive patients with histopathologically proven prostatic lesions who underwent biparametric MRI and biopsy between January 2012 and December 2017. After image annotation by two radiologists, a deep learning model was trained to detect the index lesion; to classify PCa, clinically significant PCa (Gleason score ≥ 7), and benign lesions (e.g., prostatitis); and to justify classifications using PI-RADS features. Lesion- and patient-based performance were assessed using fivefold cross-validation and areas under the receiver operating characteristic curve (AUC). Clinical feasibility was tested in a multireader study and with the external PROSTATEx dataset. Statistical evaluation of the multireader study included the Mann-Whitney U test and the exact Fisher-Yates test.

Results: Overall, 1224 men (median age, 67 years; IQR, 62-73 years) had 3260 prostatic lesions (372 lesions with Gleason score 6; 743 lesions with Gleason score ≥ 7; 2145 benign lesions). XAI reliably detected clinically significant PCa in the internal (AUC, 0.89) and external test sets (AUC, 0.87), with a sensitivity of 93% (95% CI: 87, 98) and an average of one false-positive finding per patient. The accuracy of the visual and textual explanations of XAI classifications was 80% (1080 of 1352), as confirmed by experts. XAI-assisted readings improved the confidence of nonexperts in assessing PI-RADS 3 lesions (4.1 vs. 3.4 on a five-point Likert scale; P = .007) and reduced reading time by 58 seconds (P = .009).

Conclusion: The explainable AI model reliably detected and classified clinically significant prostate cancer and improved the confidence and reading time of nonexperts while providing visual and textual explanations based on well-established imaging features.

© RSNA, 2023. Supplemental material is available for this article. See also the editorial by Chapiro in this issue.
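One way to realize the "classification plus justification" idea described above is a shared lesion embedding feeding two heads: one predicting the lesion class and one predicting the presence of PI-RADS descriptors that can be rendered as a textual explanation. The sketch below shows that multi-task pattern in PyTorch; the descriptor list, class names, feature dimension, and the omission of the detection backbone are all assumptions, not the authors' implementation.

```python
# A hedged sketch of the multi-task pattern behind the explainable model: a shared
# lesion embedding (from a detection backbone, not shown) feeds one head for the
# lesion class and one for PI-RADS descriptors used as textual justification.
# Descriptor list, class names, and feature dimension are assumptions.
import torch
import torch.nn as nn

PIRADS_DESCRIPTORS = ["t2w_marked_hypointensity", "dwi_focal_restriction", "size_ge_15mm"]
LESION_CLASSES = ["benign", "PCa_GS6", "csPCa_GS_ge7"]

class ExplainableLesionHead(nn.Module):
    def __init__(self, feature_dim: int = 256):
        super().__init__()
        self.class_head = nn.Linear(feature_dim, len(LESION_CLASSES))
        self.descriptor_head = nn.Linear(feature_dim, len(PIRADS_DESCRIPTORS))

    def forward(self, lesion_embedding: torch.Tensor):
        class_logits = self.class_head(lesion_embedding)             # benign / PCa / csPCa
        descriptor_logits = self.descriptor_head(lesion_embedding)   # PI-RADS features
        return class_logits, descriptor_logits

def textual_justification(descriptor_logits: torch.Tensor, threshold: float = 0.5):
    """Render predicted PI-RADS descriptors as a short textual explanation."""
    present = torch.sigmoid(descriptor_logits) >= threshold
    return [name for name, flag in zip(PIRADS_DESCRIPTORS, present.flatten()) if flag]

# Usage on a dummy lesion embedding
head = ExplainableLesionHead()
class_logits, descriptor_logits = head(torch.randn(1, 256))
print(LESION_CLASSES[class_logits.argmax(dim=1).item()], textual_justification(descriptor_logits))
```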


Subject(s)
Deep Learning; Prostatic Neoplasms; Male; Humans; Aged; Prostate/pathology; Prostatic Neoplasms/pathology; Magnetic Resonance Imaging/methods; Artificial Intelligence; Retrospective Studies
4.
Eur J Nucl Med Mol Imaging ; 50(7): 2140-2151, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36820890

ABSTRACT

BACKGROUND: In patients with non-small cell lung cancer (NSCLC), the accuracy of [18F]FDG-PET/CT for pretherapeutic lymph node (LN) staging is limited by false-positive findings. Our aim was to evaluate whether machine learning with routinely obtainable variables can improve accuracy over standard visual image assessment.

METHODS: This was a monocentric retrospective analysis of pretherapeutic [18F]FDG-PET/CT in 491 consecutive patients with NSCLC scanned on an analog PET/CT scanner (training + test cohort, n = 385) or a digital scanner (validation, n = 106). Forty clinical variables, tumor characteristics, and image variables (e.g., primary tumor and LN SUVmax and size) were collected. Different combinations of machine learning methods for feature selection and for classification of N0/1 vs. N2/3 disease were compared. Ten-fold nested cross-validation was used to derive the mean area under the ROC curve across the ten test folds ("test AUC") and the AUC in the validation cohort. The reference standard was the final N stage from interdisciplinary consensus (histological results for N2/3 LNs in 96%).

RESULTS: N2/3 disease was present in 190 patients (39%; training + test, 37%; validation, 46%; p = 0.09). A gradient boosting classifier (GBM) with 10 features was selected as the final model based on a test AUC of 0.91 (95% confidence interval, 0.87-0.94). The validation AUC was 0.94 (0.89-0.98). At a target sensitivity of approximately 90%, the test/validation accuracy of the GBM was 0.78/0.87. This was significantly higher than the accuracy based on "mediastinal LN uptake > mediastinum" (0.70/0.75; each p < 0.05) or on combined PET/CT criteria (PET positive and/or LN short-axis diameter > 10 mm; 0.68/0.75; each p < 0.001). Harmonization of PET images between the two scanners affected SUVmax and visual assessment of the LNs but did not diminish the AUC of the GBM.

CONCLUSIONS: A machine learning model based on routinely available variables from [18F]FDG-PET/CT improved accuracy in mediastinal LN staging compared with established visual assessment criteria. A web application implementing this model was made available.
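The evaluation pipeline described above (feature selection, a gradient boosting classifier, ten-fold nested cross-validation, and an operating point tuned to roughly 90% sensitivity) can be sketched with scikit-learn as follows. The feature matrix, the number of selected features, the hyperparameter grid, and the helper names are assumptions for illustration; only the overall structure mirrors the abstract.

```python
# A minimal sketch, assuming scikit-learn and a tabular feature matrix X (clinical,
# tumor, and image variables) with binary labels y (0 = N0/1, 1 = N2/3).
# Feature count, hyperparameter grid, and helper names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_curve
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

def nested_cv_test_auc(X: np.ndarray, y: np.ndarray, n_features: int = 10) -> float:
    """Ten-fold nested cross-validation: an inner loop tunes the GBM, the outer
    loop yields the mean 'test AUC' analogous to the one reported in the abstract."""
    pipeline = Pipeline([
        ("select", SelectKBest(f_classif, k=n_features)),     # univariate feature selection
        ("gbm", GradientBoostingClassifier(random_state=0)),  # N0/1 vs. N2/3 classifier
    ])
    param_grid = {"gbm__n_estimators": [100, 300], "gbm__max_depth": [2, 3]}
    inner_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    outer_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
    tuned = GridSearchCV(pipeline, param_grid, cv=inner_cv, scoring="roc_auc")
    fold_aucs = cross_val_score(tuned, X, y, cv=outer_cv, scoring="roc_auc")
    return float(fold_aucs.mean())

def threshold_for_sensitivity(y_true, y_score, target_sensitivity: float = 0.90) -> float:
    """Pick the decision threshold that first reaches the target sensitivity,
    mirroring the ~90% sensitivity operating point used in the study."""
    _, tpr, thresholds = roc_curve(y_true, y_score)
    return float(thresholds[np.argmax(tpr >= target_sensitivity)])
```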


Subject(s)
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Humans; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/pathology; Mediastinum/diagnostic imaging; Positron Emission Tomography Computed Tomography/methods; Fluorodeoxyglucose F18; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Lymphatic Metastasis/diagnostic imaging; Lymphatic Metastasis/pathology; Retrospective Studies; Lymph Nodes/pathology; Neoplasm Staging