1.
J Urol; 211(2): 256-265, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37889957

ABSTRACT

PURPOSE: Given the shortcomings of current stone burden characterization (maximum diameter or ellipsoid formulas), we sought to investigate the diagnostic accuracy and precision of a University of California, Irvine-developed artificial intelligence (AI) algorithm for determining stone volume. MATERIALS AND METHODS: A total of 322 noncontrast CT scans were retrospectively obtained from patients with a diagnosis of urolithiasis. The largest stone in each noncontrast CT scan was designated the "index stone." The 3D volume of the index stone was determined by a validated reviewer using 3D Slicer software; this was considered the "ground truth" volume. The AI-calculated index stone volume was subsequently compared with the ground truth volume as well as with the volumes estimated by the scalene, prolate, and oblate ellipsoid formulas. RESULTS: There was a nearly perfect correlation between the AI-determined volume and the ground truth (R=0.98). While the AI algorithm was efficient at determining stone volume across all sizes, its accuracy improved with larger stone size. Moreover, the AI stone volume produced an excellent 3D pixel overlap with the ground truth (Dice score=0.90). In comparison, the ellipsoid formula-based volumes correlated less well with the ground truth (R range: 0.79-0.82) than the AI algorithm, and their accuracy decreased as stone size increased (mean overestimation: 27%-89%). Lastly, for all stone sizes, the maximum linear stone measurement had the poorest correlation with the ground truth (R range: 0.41-0.82). CONCLUSIONS: The University of California, Irvine AI algorithm is an accurate, precise, and time-efficient tool for determining stone volume. Expanding the clinical availability of this program could enable urologists to establish better guidelines for both the metabolic and surgical management of their urolithiasis patients.
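As a rough illustration of the comparison this abstract describes, the sketch below contrasts a voxel-count "ground truth" volume from a binary segmentation mask with the scalene, prolate, and oblate ellipsoid estimates computed from linear measurements. This is not the UCI algorithm or the study's code; the toy mask, spacing, and diameters are assumptions.

```python
# Illustrative only: voxel-count volume from a binary mask versus ellipsoid
# volume formulas. Axis lengths, spacing, and the toy mask are assumptions.
import numpy as np

def voxel_volume_mm3(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Segmentation-based volume: number of segmented voxels times voxel volume."""
    return float(mask.sum()) * float(np.prod(spacing_mm))

def scalene_ellipsoid_mm3(d1: float, d2: float, d3: float) -> float:
    """Scalene ellipsoid: pi/6 times the three orthogonal diameters."""
    return (np.pi / 6.0) * d1 * d2 * d3

def prolate_ellipsoid_mm3(long_d: float, short_d: float) -> float:
    """Prolate ellipsoid: short diameter used twice."""
    return (np.pi / 6.0) * long_d * short_d * short_d

def oblate_ellipsoid_mm3(long_d: float, short_d: float) -> float:
    """Oblate ellipsoid: long diameter used twice."""
    return (np.pi / 6.0) * long_d * long_d * short_d

# Toy 10 x 8 x 6 mm block in a 1 mm isotropic grid stands in for an index stone.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 6:14, 7:13] = True
print(voxel_volume_mm3(mask))            # 480.0 mm^3 (voxel-count volume)
print(scalene_ellipsoid_mm3(10, 8, 6))   # ~251 mm^3 (formula estimate)
print(prolate_ellipsoid_mm3(10, 6))      # ~188 mm^3
print(oblate_ellipsoid_mm3(10, 6))       # ~314 mm^3
```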


Subject(s)
Kidney Calculi , Urolithiasis , Humans , Artificial Intelligence , Kidney Calculi/diagnostic imaging , Retrospective Studies , Algorithms , Tomography, X-Ray Computed , Urolithiasis/diagnostic imaging
2.
AJR Am J Roentgenol; 216(1): 111-116, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32812797

ABSTRACT

OBJECTIVE: Prostate cancer is the most commonly diagnosed cancer in men in the United States, with more than 200,000 new cases in 2018. Multiparametric MRI (mpMRI) is increasingly used for prostate cancer evaluation. Prostate organ segmentation is an essential step of surgical planning for prostate fusion biopsies. Deep learning convolutional neural networks (CNNs) are the predominant method of machine learning for medical image recognition. In this study, we describe a deep learning approach, a subset of artificial intelligence, for automatic localization and segmentation of the prostate on mpMRI. MATERIALS AND METHODS: This retrospective study included patients who underwent prostate MRI and ultrasound-MRI fusion transrectal biopsy between September 2014 and December 2016. Axial T2-weighted images were manually segmented by two abdominal radiologists and served as the ground truth. These manually segmented images were used to train a customized hybrid 3D-2D U-Net CNN architecture in a fivefold cross-validation paradigm. The Dice score, a measure of overlap between manually segmented and automatically derived segmentations, and the Pearson linear correlation coefficient of prostate volume were used for statistical evaluation. RESULTS: The CNN was trained on 299 MRI examinations (total number of MR images = 7774) of 287 patients. The customized hybrid 3D-2D U-Net had a mean Dice score of 0.898 (range, 0.890-0.908) and a Pearson correlation coefficient for prostate volume of 0.974. CONCLUSION: A deep learning CNN can automatically segment the prostate organ from clinical MR images. Further studies should examine developing pattern recognition for lesion localization and quantification.
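The two evaluation metrics named in this abstract, Dice overlap and Pearson correlation of volumes, can be sketched as follows. This is an illustrative outline, not the study's evaluation code; the masks and volume values are made up.

```python
# Illustrative only: Dice overlap between a manual and an automatic binary mask,
# plus Pearson correlation of per-case volumes. Masks and volumes are made up.
import numpy as np

def dice_score(manual: np.ndarray, auto: np.ndarray) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for boolean masks."""
    intersection = np.logical_and(manual, auto).sum()
    denom = manual.sum() + auto.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def pearson_r(x, y) -> float:
    """Pearson linear correlation coefficient between paired volume lists."""
    return float(np.corrcoef(x, y)[0, 1])

# Toy example: two overlapping square masks and a handful of paired volumes (cc).
manual = np.zeros((64, 64), dtype=bool)
manual[10:40, 10:40] = True
auto = np.zeros((64, 64), dtype=bool)
auto[12:42, 12:42] = True
print(dice_score(manual, auto))                 # ~0.87 for this toy pair

manual_volumes = [31.2, 45.8, 27.5, 60.1]
auto_volumes = [30.8, 46.5, 26.9, 61.0]
print(pearson_r(manual_volumes, auto_volumes))  # close to 1.0
```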


Subject(s)
Deep Learning , Imaging, Three-Dimensional , Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms/diagnostic imaging , Humans , Image-Guided Biopsy , Male , Prostatic Neoplasms/pathology , Retrospective Studies
3.
Curr Probl Diagn Radiol; 52(6): 501-504, 2023.
Article in English | MEDLINE | ID: mdl-37277270

ABSTRACT

Hepatosplenomegaly is commonly diagnosed by radiologists based on single-dimension measurements and heuristic cut-offs. Volumetric measurements may be more accurate for diagnosing organ enlargement. Artificial intelligence techniques may be able to automatically calculate liver and spleen volume and facilitate more accurate diagnosis. After IRB approval, 2 convolutional neural networks (CNNs) were developed to automatically segment the liver and spleen on a training dataset comprising 500 single-phase, contrast-enhanced CT abdomen and pelvis examinations. A separate dataset of 10,000 sequential examinations at a single institution was segmented with these CNNs. Performance was evaluated on a 1% subset and compared with manual segmentations using Sørensen-Dice coefficients and Pearson correlation coefficients. Radiologist reports were reviewed for diagnoses of hepatomegaly and splenomegaly and compared with the calculated volumes. Abnormal enlargement was defined as greater than 2 standard deviations above the mean. Median Dice coefficients for liver and spleen segmentation were 0.988 and 0.981, respectively. Pearson correlation coefficients of CNN-derived estimates of organ volume against the gold-standard manual annotation were 0.999 for both the liver and spleen (P < 0.001). Average liver volume was 1556.8 ± 498.7 cc and average spleen volume was 194.6 ± 123.0 cc. There were significant differences in average liver and spleen volumes between male and female patients, so the volume thresholds for ground-truth determination of hepatomegaly and splenomegaly were determined separately for each sex. Radiologist classification of hepatomegaly was 65% sensitive and 91% specific, with a positive predictive value (PPV) of 23% and a negative predictive value (NPV) of 98%. Radiologist classification of splenomegaly was 68% sensitive and 97% specific, with a PPV of 50% and an NPV of 99%. Convolutional neural networks can accurately segment the liver and spleen and may help improve radiologist accuracy in the diagnosis of hepatomegaly and splenomegaly.
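A minimal sketch of the ground-truth definition and the diagnostic metrics reported in this abstract: enlargement defined as more than 2 standard deviations above the mean (computed per sex), then radiologist calls scored for sensitivity, specificity, PPV, and NPV. The volumes and report labels below are invented, not the study's data or pipeline.

```python
# Illustrative only: mean + 2 SD enlargement threshold for one sex, and
# sensitivity/specificity/PPV/NPV of report labels against that ground truth.
import numpy as np

def enlargement_threshold(volumes_cc: np.ndarray) -> float:
    """Abnormal enlargement cutoff: more than 2 standard deviations above the mean."""
    return float(volumes_cc.mean() + 2.0 * volumes_cc.std())

def diagnostic_metrics(report_pos: np.ndarray, truth_pos: np.ndarray) -> dict:
    """Sensitivity, specificity, PPV, and NPV from paired boolean labels."""
    tp = np.sum(report_pos & truth_pos)
    tn = np.sum(~report_pos & ~truth_pos)
    fp = np.sum(report_pos & ~truth_pos)
    fn = np.sum(~report_pos & truth_pos)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Invented liver volumes (cc) for one sex; the radiologist flags the two largest.
liver_volumes = np.array([1200, 1350, 1400, 1500, 1550, 1600, 1650, 1700, 2900, 3100], dtype=float)
threshold = enlargement_threshold(liver_volumes)
truth = liver_volumes > threshold
reports = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=bool)
print(round(threshold, 1))
print(diagnostic_metrics(reports, truth))
```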

4.
Kidney360; 3(1): 83-90, 2022 Jan 27.
Article in English | MEDLINE | ID: mdl-35368566

ABSTRACT

Background: The goal of the Artificial Intelligence in Renal Scarring (AIRS) study is to develop machine learning tools for noninvasive quantification of kidney fibrosis from imaging scans. Methods: We conducted a retrospective analysis of patients who had one or more abdominal computed tomography (CT) scans within 6 months of a kidney biopsy. The final cohort encompassed 152 CT scans from 92 patients, which included images of 300 native kidneys and 76 transplant kidneys. Two different convolutional neural networks (slice-level and voxel-level classifiers) were tested to differentiate severe versus mild/moderate kidney fibrosis (≥50% versus <50%). Interstitial fibrosis and tubular atrophy scores from kidney biopsy reports were used as the ground truth. Results: The two machine learning models demonstrated similar positive predictive value (0.886 versus 0.935) and accuracy (0.831 versus 0.879). Conclusions: Machine learning algorithms are a promising noninvasive diagnostic tool for quantifying kidney fibrosis from CT scans. The clinical utility of these prediction tools, in terms of avoiding renal biopsy and the associated bleeding risks in patients with severe fibrosis, remains to be validated in prospective clinical trials.
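A small sketch of how the binary ground truth described here (severe fibrosis, interstitial fibrosis and tubular atrophy of at least 50%, versus mild/moderate) and the two reported metrics (PPV and accuracy) could be computed. The scores and predictions below are invented, not AIRS data or the study's classifiers.

```python
# Illustrative only: ground truth from biopsy IFTA scores and PPV/accuracy of
# per-scan classifier calls. All values are invented.
import numpy as np

def severe_fibrosis_labels(ifta_percent: np.ndarray) -> np.ndarray:
    """Ground truth: interstitial fibrosis/tubular atrophy of at least 50%."""
    return ifta_percent >= 50

def ppv_and_accuracy(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    ppv = tp / (tp + fp) if (tp + fp) > 0 else float("nan")
    accuracy = float(np.mean(pred == truth))
    return float(ppv), accuracy

ifta = np.array([10, 30, 55, 70, 45, 80, 20, 65])            # invented biopsy scores (%)
pred = np.array([0, 0, 1, 1, 1, 1, 0, 1], dtype=bool)        # invented per-scan CNN calls
print(ppv_and_accuracy(pred, severe_fibrosis_labels(ifta)))  # (0.8, 0.875)
```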


Subject(s)
Artificial Intelligence , Kidney Diseases , Cicatrix/diagnosis , Humans , Kidney Diseases/pathology , Prospective Studies , Retrospective Studies
5.
Radiol Imaging Cancer; 3(3): e200024, 2021 May.
Article in English | MEDLINE | ID: mdl-33929265

ABSTRACT

Purpose To develop a deep learning model to delineate the transition zone (TZ) and peripheral zone (PZ) of the prostate on MR images. Materials and Methods This retrospective study included patients who underwent multiparametric prostate MRI and MRI/transrectal US fusion biopsy between January 2013 and May 2016. A board-certified abdominal radiologist manually segmented the prostate, TZ, and PZ on the entire data set. Included accessions were split into 60% training, 20% validation, and 20% test data sets for model development. Three convolutional neural networks with a U-Net architecture were trained for automatic recognition of the prostate organ, TZ, and PZ. Segmentation performance was assessed using Dice scores and Pearson correlation coefficients. Results A total of 242 patients were included (242 MR images; 6292 total images). Models for prostate organ segmentation, TZ segmentation, and PZ segmentation were trained and validated. On the test data set, the mean Dice score for prostate organ segmentation was 0.940 (interquartile range, 0.930-0.961), and the Pearson correlation coefficient for volume was 0.981 (95% CI: 0.966, 0.989). For TZ segmentation, the mean Dice score was 0.910 (interquartile range, 0.894-0.938), and the Pearson correlation coefficient for volume was 0.992 (95% CI: 0.985, 0.995). For PZ segmentation, the mean Dice score was 0.774 (interquartile range, 0.727-0.832), and the Pearson correlation coefficient for volume was 0.927 (95% CI: 0.870, 0.957). Conclusion Deep learning with an architecture composed of three U-Nets can accurately segment the prostate, TZ, and PZ. Keywords: MRI, Genital/Reproductive, Prostate, Neural Networks. Supplemental material is available for this article. © RSNA, 2021.
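A minimal, hypothetical sketch of the 60%/20%/20% accession split described in this abstract; the accession identifiers, helper name, and seed are placeholders, not the study's actual data handling.

```python
# Illustrative only: a reproducible 60/20/20 split of accession identifiers
# into training, validation, and test sets before model development.
import random

def split_accessions(accessions, seed=42, train_frac=0.6, val_frac=0.2):
    """Shuffle once with a fixed seed, then carve out train/validation/test sets."""
    items = list(accessions)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

accessions = [f"ACC{i:04d}" for i in range(242)]     # 242 cases, as in the abstract
train_set, val_set, test_set = split_accessions(accessions)
print(len(train_set), len(val_set), len(test_set))   # 145 48 49
```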


Subject(s)
Deep Learning , Prostatic Neoplasms , Humans , Magnetic Resonance Imaging , Male , Prostatic Neoplasms/diagnostic imaging , Retrospective Studies
6.
Cancers (Basel); 11(6), 2019 Jun 14.
Article in English | MEDLINE | ID: mdl-31207930

ABSTRACT

Radiographic assessment with magnetic resonance imaging (MRI) is widely used to characterize gliomas, which represent 80% of all primary malignant brain tumors. Unfortunately, glioma biology is marked by heterogeneous angiogenesis, cellular proliferation, cellular invasion, and apoptosis. This translates into varying degrees of enhancement, edema, and necrosis, making reliable imaging assessment challenging. Deep learning, a subset of machine learning and artificial intelligence, has gained traction as an effective method for solving image-based problems, including those in medical imaging. This review summarizes current deep learning applications in glioma detection and outcome prediction, focusing on (1) pre- and postoperative tumor segmentation, (2) genetic characterization of tissue, and (3) prognostication. We demonstrate that deep learning methods for segmenting, characterizing, grading, and predicting survival in gliomas are promising opportunities that may enhance both research and clinical activities.
