Results 1 - 3 of 3
1.
Med Image Anal; 94: 103157, 2024 May.
Article in English | MEDLINE | ID: mdl-38574544

ABSTRACT

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained on high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality depends heavily on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. This so-called domain gap between the data used to develop a system and the data it encounters after deployment, and its impact on the performance of the deep neural networks (DNNs) supporting endoscopic CAD systems, remains largely unexplored. As many such systems, e.g., for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and operator experience vary considerably. This study therefore aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. To this end, we leverage two publicly available datasets (KVASIR-SEG and GIANA) and two in-house datasets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset of 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable decline in performance of 11.6% (±1.5) relative to the reference within the clinically calibrated boundaries of image degradation. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training effectively mitigates this drop to 7.7% (±2.03). These enhancements also yield the highest performance on the manually collected test set of images with lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy and propose strategies to mitigate performance loss.


Subject(s)
Diagnosis, Computer-Assisted; Neural Networks, Computer; Humans; Diagnosis, Computer-Assisted/methods; Endoscopy, Gastrointestinal; Image Processing, Computer-Assisted/methods
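To make the notion of "synthetic, clinically calibrated image degradations" concrete, here is a minimal sketch of two such perturbations (horizontal motion blur and globally reduced illumination) applied at increasing severities. This assumes OpenCV and NumPy; the function names, the severity grid, and the random placeholder frame are illustrative and are not the study's calibrated settings.

```python
# Minimal sketch (assumed names, not the paper's pipeline): synthetic,
# severity-controlled degradations of the kind the study calibrates
# clinically -- motion blur and reduced illumination.
import cv2
import numpy as np

def motion_blur(img: np.ndarray, ksize: int = 9) -> np.ndarray:
    """Horizontal motion blur; larger ksize means stronger blur."""
    kernel = np.zeros((ksize, ksize), dtype=np.float32)
    kernel[ksize // 2, :] = 1.0 / ksize  # average along one row
    return cv2.filter2D(img, -1, kernel)

def reduce_illumination(img: np.ndarray, gain: float = 0.6) -> np.ndarray:
    """Globally darken the frame; gain < 1 simulates poor illumination."""
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

# Placeholder frame; in practice this would be an endoscopic image.
frame = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# Sweep an (illustrative) severity grid and collect degraded inputs for
# evaluating a trained model against its score on the clean reference.
degraded = [reduce_illumination(motion_blur(frame, k), g)
            for k, g in [(5, 0.8), (9, 0.6), (13, 0.4)]]
```
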
2.
Invest Radiol; 2024 May 01.
Article in English | MEDLINE | ID: mdl-38687025

ABSTRACT

OBJECTIVES: Dark-blood late gadolinium enhancement (DB-LGE) cardiac magnetic resonance has been proposed as an alternative to standard white-blood LGE (WB-LGE) imaging protocols to enhance scar-to-blood contrast without compromising scar-to-myocardium contrast. In practice, both DB and WB contrasts may have clinical utility, but acquiring both comes at the cost of additional acquisition time. The aim of this study was to develop and evaluate a deep learning method to generate synthetic WB-LGE images from DB-LGE images, allowing assessment of both contrasts without additional scan time.
MATERIALS AND METHODS: DB-LGE and WB-LGE data from 215 patients were used to train 2 types of unpaired image-to-image translation deep learning models, cycle-consistent generative adversarial network (CycleGAN) and contrastive unpaired translation, each with 5 different loss-function hyperparameter settings. Initially, the best hyperparameter setting was determined for each model type based on the Fréchet inception distance and visual assessment by expert readers. The CycleGAN and contrastive unpaired translation models with the optimal hyperparameters were then compared directly. Finally, with the best model chosen, scar quantification based on the synthetic WB-LGE images was compared with that on the truly acquired WB-LGE images.
RESULTS: The CycleGAN architecture for unpaired image-to-image translation provided the most realistic synthetic WB-LGE images from DB-LGE images. Visual readers found it difficult to distinguish true from synthetic images (55% correctly classified). In addition, scar burden quantification on the synthetic data was highly correlated with the analysis of the truly acquired images. Bland-Altman analysis found a mean bias in percentage scar burden of 0.44% between quantification on the real and synthetic WB images, with limits of agreement from -10.85% to 11.74%. The mean image quality of the real WB images (3.53/5) was scored higher than that of the synthetic WB images (3.03/5), P = 0.009.
CONCLUSIONS: This study proposed a CycleGAN model to generate synthetic WB-LGE images from DB-LGE images, allowing assessment of both image contrasts without additional scan time. This work represents a clinically focused assessment of synthetic medical images generated by artificial intelligence, a topic with significant potential for a multitude of applications. However, further evaluation is warranted before clinical adoption.
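The Bland-Altman quantities reported above (mean bias and 95% limits of agreement) follow directly from the paired per-patient differences: bias ± 1.96 × SD of the differences. A minimal sketch, with hypothetical scar-burden values rather than the study's data:

```python
# Minimal sketch of the reported Bland-Altman statistics: the mean bias
# and the 95% limits of agreement (bias +/- 1.96 * SD of differences).
import numpy as np

def bland_altman(real: np.ndarray, synthetic: np.ndarray):
    """Return (bias, lower limit, upper limit) for paired measurements."""
    diff = synthetic - real          # per-patient difference in % scar burden
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))     # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired scar-burden measurements (% of myocardium),
# for illustration only:
real = np.array([12.1, 5.3, 20.7, 8.9, 15.2])
synth = np.array([13.0, 4.8, 21.5, 9.6, 14.7])
bias, lo, hi = bland_altman(real, synth)
print(f"bias={bias:.2f}%, limits of agreement [{lo:.2f}%, {hi:.2f}%]")
```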

3.
J Endourol; 2024 Apr 13.
Article in English | MEDLINE | ID: mdl-38613819

ABSTRACT

Objective: To construct a convolutional neural network (CNN) model that can recognize and delineate anatomic structures on intraoperative video frames of robot-assisted radical prostatectomy (RARP), and to use these annotations to predict the surgical urethral length (SUL).
Background: Urethral dissection during RARP affects patient urinary incontinence (UI) outcomes and requires extensive training. Incontinence outcomes differ widely between urologists and hospitals, and surgeon experience and education are critical to optimal outcomes; new approaches are therefore warranted. SUL is associated with UI. Artificial intelligence (AI) surgical image segmentation using a CNN could automate SUL estimation and contribute to future AI-assisted RARP and surgeon guidance.
Methods: Eighty-eight intraoperative RARP videos recorded between June 2009 and September 2014 were collected from a single center. A total of 264 frames were annotated with four classes: prostate, urethra, ligated plexus, and catheter. Thirty annotated images from different RARP videos were used as a test dataset. The Dice coefficient (DSC) and 95th-percentile Hausdorff distance (Hd95) were used to determine model performance. SUL was calculated using the catheter as a reference.
Results: The DSCs of the best-performing model were 0.735 and 0.755 for the catheter and urethra classes, respectively, with Hd95 values of 29.27 and 72.62, respectively. The model performed moderately on the ligated plexus and prostate classes. The predicted SUL showed a mean difference of 0.64-1.86 mm versus human annotators, but with considerable deviation (SD 3.28-3.56).
Conclusion: This study shows that an AI image segmentation model can predict vital structures during RARP urethral dissection with moderate to fair accuracy. SUL estimation derived from it showed large deviations and outliers compared with human annotators, but with a very small mean difference (<2 mm). This is a promising development for further research on AI-assisted RARP.
Keywords: Prostate cancer, Anatomy recognition, Artificial intelligence, Continence, Urethral length.
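Both reported metrics are standard. Below is a minimal sketch, assuming NumPy and SciPy; for brevity, Hd95 is computed over all foreground mask pixels rather than extracted boundary points (the conventional surface-based variant), and the toy masks are illustrative.

```python
# Minimal sketch of the two reported segmentation metrics: the Dice
# coefficient (region overlap) and the 95th-percentile Hausdorff distance
# (boundary agreement, robust to outlier pixels).
import numpy as np
from scipy.spatial.distance import cdist

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) on boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance in pixels."""
    a, b = np.argwhere(pred), np.argwhere(gt)  # foreground coordinates
    d = cdist(a, b)                            # pairwise Euclidean distances
    return max(np.percentile(d.min(axis=1), 95),   # pred -> gt
               np.percentile(d.min(axis=0), 95))   # gt -> pred

# Toy masks for illustration:
pred = np.zeros((64, 64), bool); pred[10:30, 10:30] = True
gt = np.zeros((64, 64), bool); gt[12:32, 12:32] = True
print(dice(pred, gt), hd95(pred, gt))
```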
