Results 1 - 4 of 4
1.
Lancet Digit Health; 5(12): e905-e916, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38000874

ABSTRACT

BACKGROUND: Computer-aided detection (CADe) systems could assist endoscopists in detecting early neoplasia in Barrett's oesophagus, which can be difficult to detect in endoscopic images. The aim of this study was to develop, test, and benchmark a CADe system for early neoplasia in Barrett's oesophagus.

METHODS: The CADe system was first pretrained with ImageNet, followed by domain-specific pretraining with GastroNet. We trained the CADe system on a dataset of 14 046 images (2506 patients) of confirmed Barrett's oesophagus neoplasia and non-dysplastic Barrett's oesophagus from 15 centres. Neoplasia was delineated by 14 Barrett's oesophagus experts for all datasets. We tested the performance of the CADe system on two independent test sets. The all-comers test set comprised 327 non-dysplastic Barrett's oesophagus images (73 patients), 82 neoplastic images (46 patients), 180 non-dysplastic Barrett's oesophagus videos (66 of the same patients), and 71 neoplastic videos (45 of the same patients). The benchmarking test set comprised 100 neoplastic images (50 patients), 300 non-dysplastic images (125 patients), 47 neoplastic videos (47 of the same patients), and 141 non-dysplastic videos (82 of the same patients), and was enriched with subtle neoplasia cases. The benchmarking test set was evaluated by 112 endoscopists from six countries (first without CADe and, after 6 weeks, with CADe) and by 28 external international Barrett's oesophagus experts. The primary outcome was the sensitivity of Barrett's neoplasia detection by general endoscopists without versus with CADe assistance on the benchmarking test set. We compared sensitivity using a mixed-effects logistic regression model with conditional odds ratios (ORs; likelihood profile 95% CIs).

FINDINGS: Sensitivity for neoplasia detection among endoscopists increased from 74% to 88% with CADe assistance (OR 2·04; 95% CI 1·73-2·42; p<0·0001 for images and from 67% to 79% [2·35; 1·90-2·94; p<0·0001] for video), without compromising specificity (from 89% to 90% [1·07; 0·96-1·19; p=0·20] for images and from 96% to 94% [0·94; 0·79-1·11; p=0·46] for video). In the all-comers test set, CADe detected neoplastic lesions in 95% (88-98) of images and 97% (90-99) of videos. In the benchmarking test set, the CADe system was superior to endoscopists in detecting neoplasia (90% vs 74% [OR 3·75; 95% CI 1·93-8·05; p=0·0002] for images and 91% vs 67% [11·68; 3·85-47·53; p<0·0001] for video) and non-inferior to Barrett's oesophagus experts (90% vs 87% [OR 1·74; 95% CI 0·83-3·65] for images and 91% vs 86% [2·94; 0·99-11·40] for video).

INTERPRETATION: CADe outperformed endoscopists in detecting Barrett's oesophagus neoplasia and, when used as an assistive tool, improved their detection rate. CADe detected virtually all neoplasia in a test set of consecutive cases.

FUNDING: Olympus.
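As a simplified illustration of the primary comparison, the sketch below computes sensitivity with and without CADe assistance and a crude odds ratio with a Wald-type 95% CI from aggregated counts. The counts are hypothetical, and the calculation deliberately ignores the repeated-measures structure (per endoscopist, per case) that the study's mixed-effects logistic regression accounts for.

```python
import math

# Hypothetical aggregated counts for neoplastic images read by endoscopists
# without and with CADe assistance (detected lesions vs missed lesions).
tp_without, fn_without = 740, 260   # sensitivity ~74%
tp_with, fn_with = 880, 120         # sensitivity ~88%

sens_without = tp_without / (tp_without + fn_without)
sens_with = tp_with / (tp_with + fn_with)

# Crude odds ratio for detection with vs without CADe (2x2 table),
# with a Wald-type 95% confidence interval computed on the log scale.
odds_ratio = (tp_with * fn_without) / (fn_with * tp_without)
se_log_or = math.sqrt(1/tp_with + 1/fn_with + 1/tp_without + 1/fn_without)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"Sensitivity without CADe: {sens_without:.0%}")
print(f"Sensitivity with CADe:    {sens_with:.0%}")
print(f"Crude OR {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

Because the crude OR pools all readings, it will generally differ from the conditional OR reported in the study, which conditions on endoscopist and case.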


Subject(s)
Barrett Esophagus, Deep Learning, Esophageal Neoplasms, Humans, Barrett Esophagus/diagnosis, Esophageal Neoplasms/diagnosis, Esophageal Neoplasms/pathology, Esophagoscopy/methods, Odds Ratio
2.
Surg Endosc; 37(7): 5164-5175, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36947221

ABSTRACT

OBJECTIVE: To develop a deep learning algorithm for anatomy recognition in thoracoscopic video frames from robot-assisted minimally invasive esophagectomy (RAMIE) procedures.

BACKGROUND: RAMIE is a complex operation with substantial perioperative morbidity and a considerable learning curve. Automatic anatomy recognition may improve surgical orientation and recognition of anatomical structures, and might contribute to reducing morbidity or shortening learning curves. Studies regarding anatomy recognition in complex surgical procedures are currently lacking.

METHODS: Eighty-three videos of consecutive RAMIE procedures performed between 2018 and 2022 were retrospectively collected at University Medical Center Utrecht. A surgical PhD candidate and an expert surgeon annotated the azygos vein and vena cava, aorta, and right lung on 1050 thoracoscopic frames. Of these, 850 frames were used to train a convolutional neural network (CNN) to segment the anatomical structures, and the remaining 200 frames were used for testing. The Dice coefficient and 95% Hausdorff distance (95HD) were calculated to assess algorithm accuracy.

RESULTS: The median Dice coefficient of the algorithm was 0.79 (IQR = 0.20) for segmentation of the azygos vein and/or vena cava, 0.74 (IQR = 0.86) for the aorta, and 0.89 (IQR = 0.30) for the lung. Inference time was 0.026 s per frame (39 Hz). Compared with the expert surgeon's annotations, the algorithm's predictions reached a median Dice of 0.70 (IQR = 0.19), 0.88 (IQR = 0.07), and 0.90 (IQR = 0.10) for the vena cava and/or azygos vein, aorta, and lung, respectively.

CONCLUSION: This study shows that deep learning-based semantic segmentation has potential for anatomy recognition in RAMIE video frames. The inference time of the algorithm allows real-time anatomy recognition. Clinical applicability should be assessed in prospective clinical studies.
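The two evaluation metrics named above can be computed from binary masks as in the sketch below. This is a generic implementation written for this listing, not the authors' code; it assumes 2D masks with unit pixel spacing (multiply by the physical spacing for distances in millimetres).

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def hausdorff_95(pred: np.ndarray, target: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between mask boundaries
    (in pixels; scale by the pixel spacing for physical units)."""
    def boundary(mask):
        mask = mask.astype(bool)
        eroded = ndimage.binary_erosion(mask)
        return np.argwhere(mask & ~eroded)   # boundary pixel coordinates
    pts_pred, pts_target = boundary(pred), boundary(target)
    dists = cdist(pts_pred, pts_target)      # pairwise boundary distances
    d_forward = dists.min(axis=1)            # pred boundary -> target boundary
    d_backward = dists.min(axis=0)           # target boundary -> pred boundary
    return float(np.percentile(np.concatenate([d_forward, d_backward]), 95))

# Toy example on two overlapping squares.
a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[15:45, 15:45] = 1
print(dice_coefficient(a, b), hausdorff_95(a, b))
```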


Subject(s)
Deep Learning, Robotics, Humans, Esophagectomy/methods, Retrospective Studies, Prospective Studies, Minimally Invasive Surgical Procedures/methods
3.
Eur Radiol Exp; 5(1): 31, 2021 Jul 29.
Article in English | MEDLINE | ID: mdl-34322765

ABSTRACT

BACKGROUND: Two-dimensional (2D) ultrasound is well established for thyroid nodule assessment and treatment guidance. However, it is hampered by a limited field of view and observer variability that may lead to inaccurate nodule classification and treatment. To address these limitations, we investigated the use of real-time three-dimensional (3D) ultrasound to improve the accuracy of volume estimation and needle placement during radiofrequency ablation. We assessed a new 3D matrix transducer for nodule volume estimation and image-guided radiofrequency ablation.

METHODS: Thirty thyroid nodule phantoms containing thermochromic dye underwent volume estimation and ablation guided by a 2D linear transducer, a mechanically swept 3D transducer, and a 3D matrix transducer.

RESULTS: The 3D matrix volume estimates showed a lower median difference from the ground truth (0.4 mL) than the standard 2D approach (2.2 mL, p < 0.001) and the mechanically swept 3D transducer (2.0 mL, p = 0.016). The 3D matrix-guided ablations achieved nodule ablation coverage similar to 2D guidance (76.7% versus 80.8%, p = 0.542), whereas the mechanically swept 3D transducer performed worse (60.1%, p = 0.015). However, ablations under 3D matrix and 2D guidance led to a larger ablated volume outside the nodule than mechanically swept 3D guidance (5.1 mL for 3D matrix, 4.2 mL for 2D (p = 0.274), and 0.5 mL for mechanically swept 3D (p < 0.001)). The 3D matrix and mechanically swept approaches were faster (80 and 72.5 s per mL ablated, respectively) than 2D guidance (105.5 s per mL ablated).

CONCLUSIONS: The 3D matrix transducer estimates volumes more accurately and can facilitate accurate needle placement while reducing procedure time.
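Volume estimation from a segmented 3D ultrasound acquisition reduces to counting voxels and multiplying by the voxel size; the short sketch below illustrates that calculation and the comparison against a known phantom volume. The mask and spacing values are hypothetical and not taken from the study.

```python
import numpy as np

# Hypothetical segmentation mask from a 3D ultrasound volume and its
# voxel spacing in millimetres (depth, height, width).
mask = np.zeros((160, 200, 200), dtype=bool)
mask[40:120, 60:140, 60:140] = True           # stand-in for a segmented nodule
spacing_mm = (0.30, 0.25, 0.25)

voxel_volume_ml = np.prod(spacing_mm) / 1000.0   # mm^3 per voxel -> mL
estimated_volume_ml = mask.sum() * voxel_volume_ml

ground_truth_ml = 10.0                        # known phantom nodule volume
print(f"Estimated volume: {estimated_volume_ml:.1f} mL "
      f"(difference {estimated_volume_ml - ground_truth_ml:+.1f} mL)")
```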


Subject(s)
Catheter Ablation, Radiofrequency Ablation, Thyroid Nodule, Humans, Phantoms, Imaging, Thyroid Nodule/diagnostic imaging, Thyroid Nodule/surgery, Ultrasonography
4.
Phys Med Biol; 65(6): 065002, 2020 Mar 11.
Article in English | MEDLINE | ID: mdl-31978921

ABSTRACT

The increasing incidence of pancreatic cancer is expected to make it the second deadliest cancer by 2030. Imaging-based early diagnosis and image-guided treatment are emerging potential solutions. Artificial intelligence (AI) can help provide and improve widespread diagnostic expertise and accurate interventional image interpretation. Accurate segmentation of the pancreas is essential both to create annotated datasets for training AI and for computer-assisted interventional guidance. Automated deep learning segmentation performance in pancreas computed tomography (CT) imaging is low due to poor grey-value contrast and complex anatomy. A recent interactive deep learning segmentation framework for brain CT, which strongly improved initial automated segmentations with minimal user input, appeared to be a good solution; however, it yielded no satisfactory results for pancreas CT, possibly due to a sub-optimal neural network architecture. We hypothesize that a state-of-the-art U-net architecture is better suited because it can produce a better initial segmentation and can likely be extended to a similar interactive approach. We implemented the existing interactive method, iFCN, and developed an interactive version of U-net, which we call iUnet. iUnet is fully trained to produce the best possible initial segmentation; in interactive mode, a partial set of its layers is additionally trained on user-generated scribbles. We compared the initial segmentation performance of iFCN and iUnet on a dataset of 100 CT scans using the Dice similarity coefficient. Secondly, we assessed the performance gain in interactive use with three observers, in terms of segmentation quality and time. Average automated baseline performance was 78% (iUnet) versus 72% (iFCN). Manual segmentation reached 87% in 15 min, while semi-automatic segmentation with iUnet reached 86% in 8 min. We conclude that iUnet provides a better baseline than iFCN and can reach expert manual performance significantly faster than manual segmentation for pancreas CT. Our iUnet architecture is modality and organ agnostic and can be a potential novel solution for semi-automatic medical image segmentation in general.
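The interactive step described above (additionally training a partial set of layers on user scribbles) can be sketched as a masked cross-entropy fine-tuning loop. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the `unet` model and its `decoder` attribute are assumed names, and only the voxels touched by scribbles contribute to the loss.

```python
import torch
import torch.nn as nn

def interactive_finetune(unet: nn.Module, image: torch.Tensor,
                         scribbles: torch.Tensor, steps: int = 20) -> None:
    """Fine-tune only the decoder of a (hypothetical) U-net on user scribbles.

    image:     1 x C x D x H x W input volume
    scribbles: 1 x D x H x W integer label map; -1 = unlabelled voxel,
               0/1/... = class drawn by the user
    """
    # Freeze everything, then unfreeze a partial set of layers (here: decoder).
    for p in unet.parameters():
        p.requires_grad = False
    for p in unet.decoder.parameters():    # assumed attribute name
        p.requires_grad = True

    optimiser = torch.optim.Adam(unet.decoder.parameters(), lr=1e-4)
    # ignore_index skips voxels the user did not scribble on.
    criterion = nn.CrossEntropyLoss(ignore_index=-1)

    unet.train()
    for _ in range(steps):
        optimiser.zero_grad()
        logits = unet(image)               # 1 x num_classes x D x H x W
        loss = criterion(logits, scribbles)
        loss.backward()
        optimiser.step()
```

Restricting the update to a subset of layers keeps each correction step fast enough for interactive use while preserving the fully trained initial segmentation behaviour elsewhere in the network.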


Subject(s)
Imaging, Three-Dimensional/methods, Pancreas/diagnostic imaging, Tomography, X-Ray Computed, Deep Learning, Humans