Results 1 - 4 of 4
1.
J Surg Res; 296: 325-336, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38306938

ABSTRACT

INTRODUCTION: Minimally invasive surgery uses electrosurgical tools that generate smoke. This smoke reduces the visibility of the surgical site and spreads harmful substances with potential hazards for the surgical staff. Automatic image analysis may provide assistance. However, the existing studies are restricted to simple clear versus smoky image classification. MATERIALS AND METHODS: We propose a novel approach using surgical image analysis with machine learning, including deep neural networks. We address three tasks: 1) smoke quantification, which estimates the visual level of smoke, 2) smoke evacuation confidence, which estimates the level of confidence to evacuate smoke, and 3) smoke evacuation recommendation, which estimates the evacuation decision. We collected three datasets with expert annotations. We trained end-to-end neural networks for the three tasks. We also created indirect predictors, using task 1 followed by linear regression to solve task 2, and using task 2 followed by binary classification to solve task 3. RESULTS: We observe a reasonable inter-expert variability for task 1 and a large one for tasks 2 and 3. For task 1, the expert error is 17.61 percentage points (pp) and the neural network error is 18.45 pp. For task 2, the best results are obtained from the indirect predictor based on task 1. For this task, the expert error is 27.35 pp and the predictor error is 23.60 pp. For task 3, the expert accuracy is 76.78% and the predictor accuracy is 81.30%. CONCLUSIONS: Smoke quantification, evacuation confidence, and evacuation recommendation can be achieved by automatic surgical image analysis with accuracy similar to or better than that of the experts.
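
As a rough illustration of the indirect prediction chain described in this abstract (task 1 feeding a linear regression for task 2, then a binary decision for task 3), the sketch below chains a stubbed smoke-level predictor, a linear regression, and a threshold. The function names, toy training values, and threshold are hypothetical placeholders, not the authors' implementation.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def predict_smoke_level(image: np.ndarray) -> float:
        """Task 1 stub: a trained CNN would return the visual smoke level in [0, 100]."""
        return float(image.mean())  # placeholder for the neural network output

    # Task 2: linear regression from the predicted smoke level to the evacuation confidence.
    levels = np.array([[5.0], [20.0], [45.0], [70.0], [90.0]])  # toy smoke levels
    confidence = np.array([2.0, 15.0, 50.0, 80.0, 95.0])        # toy expert confidence values
    task2_model = LinearRegression().fit(levels, confidence)

    def recommend_evacuation(image: np.ndarray, threshold: float = 50.0) -> bool:
        """Task 3: binary evacuation decision obtained by thresholding the task 2 output."""
        level = predict_smoke_level(image)
        conf = task2_model.predict([[level]])[0]
        return conf >= threshold

    # Example usage on a random stand-in frame.
    frame = np.random.rand(256, 256) * 100
    print(recommend_evacuation(frame))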


Subjects
Image Processing, Computer-Assisted; Minimally Invasive Surgical Procedures; Smoke; Humans; Machine Learning; Neural Networks, Computer; Nicotiana; Smoke/analysis
2.
J Minim Invasive Gynecol; 30(5): 397-405, 2023 May.
Article in English | MEDLINE | ID: mdl-36720429

ABSTRACT

STUDY OBJECTIVE: We focus on explaining the concepts underlying artificial intelligence (AI), using Uteraug, a laparoscopic surgery guidance application based on Augmented Reality (AR), to provide concrete examples. AI can be used to automatically interpret the surgical images. We are specifically interested in the tasks of uterus segmentation and uterus contouring in laparoscopic images. A major difficulty with AI methods is their requirement for a massive amount of annotated data. We propose SurgAI3.8K, the first gynaecological dataset with annotated anatomy. We study the impact of AI on automating key steps of Uteraug. DESIGN: We constructed the SurgAI3.8K dataset with 3800 images extracted from 79 laparoscopy videos. We created the following annotations: the uterus segmentation, the uterus contours, and the regions of the left and right fallopian tube junctions. We divided our dataset into a training set and a test set. Our engineers trained a neural network on the training set. We then compared the performance of the neural network to that of the experts on the test set. In particular, we established the relationship between the size of the training set and the performance by creating size-performance graphs. SETTING: University. PATIENTS: Not available. INTERVENTION: Not available. MEASUREMENTS AND MAIN RESULTS: The size-performance graphs show a performance plateau at 700 images for uterus segmentation and 2000 images for uterus contouring. The final segmentation scores on the training and test sets were 94.6% and 84.9% (the higher, the better) and the final contour errors were 19.5% and 47.3% (the lower, the better). These results allowed us to bootstrap Uteraug, achieving AR performance equivalent to that of its current manual setup. CONCLUSION: We describe a concrete AI system in laparoscopic surgery, covering all steps from data collection, data annotation, neural network training, and performance evaluation to the final application.
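
A size-performance graph like the one described in this abstract can be sketched as below: train on growing subsets of the annotated images and record the score on a fixed test set. The training and evaluation routines are hypothetical stubs (the stub score simply saturates), not the authors' code.

    import numpy as np
    import matplotlib.pyplot as plt

    def train_segmentation_model(train_subset):
        """Placeholder: a real implementation would fit a segmentation network here."""
        return {"n_images": len(train_subset)}

    def evaluate_overlap(model, test_set):
        """Placeholder: a real implementation would compute the mean test-set overlap score."""
        # Toy saturating curve standing in for the plateau behaviour reported above.
        return 0.85 * (1.0 - np.exp(-model["n_images"] / 500.0))

    train_images = list(range(3300))  # indices standing in for annotated training images
    test_images = list(range(500))    # indices standing in for test images
    sizes = [100, 300, 700, 1000, 2000, 3300]
    scores = [evaluate_overlap(train_segmentation_model(train_images[:n]), test_images)
              for n in sizes]

    plt.plot(sizes, scores, marker="o")
    plt.xlabel("number of training images")
    plt.ylabel("test score")
    plt.show()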


Subjects
Augmented Reality; Laparoscopy; Humans; Female; Artificial Intelligence; Neural Networks, Computer; Uterus/surgery; Laparoscopy/methods
3.
Surg Endosc; 34(12): 5377-5383, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31996995

ABSTRACT

BACKGROUND: In laparoscopy, the digital camera offers surgeons the opportunity to receive support from image-guided surgery systems. Such systems require image understanding, the ability for a computer to understand what the laparoscope sees. Image understanding has recently progressed owing to the emergence of artificial intelligence and especially deep learning techniques. However, the state of the art of deep learning in gynaecology only offers image-based detection, reporting the presence or absence of an anatomical structure, without finding its location. A solution to the localisation problem is given by the concept of semantic segmentation, which provides both the detection and the pixel-level location of a structure in an image. The state-of-the-art results in semantic segmentation are achieved by deep learning, whose usage requires a massive amount of annotated data. We propose the first dataset dedicated to this task and the first evaluation of deep learning-based semantic segmentation in gynaecology. METHODS: We used the deep learning method called Mask R-CNN. Our dataset has 461 laparoscopic images manually annotated with three classes: uterus, ovaries and surgical tools. We split our dataset into 361 images to train Mask R-CNN and 100 images to evaluate its performance. RESULTS: The segmentation accuracy is reported in terms of the percentage of overlap between the regions segmented by Mask R-CNN and the manually annotated ones. The accuracy is 84.5%, 29.6% and 54.5% for the uterus, ovaries and surgical tools, respectively. An automatic detection of these structures was then inferred from the semantic segmentation results, which led to state-of-the-art detection performance, except for the ovaries. Specifically, the detection accuracy is 97%, 24% and 86% for the uterus, ovaries and surgical tools, respectively. CONCLUSION: Our preliminary results are very promising, given the relatively small size of our initial dataset. The creation of an international surgical database seems essential.
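
The per-class overlap score reported in this abstract can be computed as in the sketch below, here using intersection over union between the predicted and manually annotated masks; the paper's exact overlap definition is not given in the abstract, and the class ids and toy masks are illustrative only.

    import numpy as np

    def class_overlap(pred_mask: np.ndarray, gt_mask: np.ndarray, class_id: int) -> float:
        """Percentage of overlap (IoU) for one class between prediction and annotation."""
        pred = pred_mask == class_id
        gt = gt_mask == class_id
        union = np.logical_or(pred, gt).sum()
        if union == 0:
            return float("nan")  # class absent from both masks
        return 100.0 * np.logical_and(pred, gt).sum() / union

    # Toy example with class ids 0 = background, 1 = uterus, 2 = ovaries, 3 = tools.
    gt = np.zeros((8, 8), dtype=int)
    gt[2:6, 2:6] = 1
    pred = np.zeros((8, 8), dtype=int)
    pred[3:7, 2:6] = 1
    print(class_overlap(pred, gt, class_id=1))  # 60.0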


Subjects
Deep Learning/standards; Gynecology/methods; Laparoscopy/methods; Female; Humans
4.
Int J Comput Assist Radiol Surg; 15(7): 1177-1186, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32372385

ABSTRACT

PURPOSE: The registration of a preoperative 3D model, reconstructed for example from MRI, to intraoperative 2D laparoscopy images is the main challenge in achieving augmented reality in laparoscopy. The current systems have a major limitation: they require that the surgeon manually marks the occluding contours during surgery. This requires the surgeon to fully comprehend the non-trivial concept of occluding contours and consumes surgeon time, directly impacting acceptance and usability. To overcome this limitation, we propose a complete framework for object-class occluding contour detection (OC2D), with application to uterus surgery. METHODS: Our first contribution is a new distance-based evaluation score complying with all the relevant performance criteria. Our second contribution is a loss function combining cross-entropy and two new penalties designed to boost 1-pixel-thick responses. This allows us to train a U-Net end to end, outperforming all competing methods, which tend to produce thick responses. Our third contribution is a dataset of 3818 carefully labelled laparoscopy images of the uterus, which was used to train and evaluate our detector. RESULTS: Evaluation shows that the proposed detector has a false-negative rate similar to existing methods but substantially reduces both the false-positive rate and the response thickness. Finally, we ran a user study to evaluate the impact of OC2D against manually marked occluding contours in augmented laparoscopy. We used 10 recorded gynecologic laparoscopies and involved 5 surgeons. Using OC2D led to a reduction of 3 min and 53 s in surgeon time without sacrificing registration accuracy. CONCLUSIONS: We provide a new set of criteria and a distance-based measure to evaluate an OC2D method. We propose an OC2D method which outperforms the state-of-the-art methods. The results obtained from the user study indicate that fully automatic augmented laparoscopy is feasible.
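
The abstract does not specify the two thinness penalties, so the sketch below only illustrates the general idea of combining cross-entropy with a term that discourages thick contour responses; the neighbourhood-based penalty and its weighting are assumptions, not the authors' loss.

    import torch
    import torch.nn.functional as F

    def contour_loss(pred_logits: torch.Tensor, target: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
        """Binary cross-entropy plus an assumed penalty against thick contour responses.

        pred_logits, target: tensors of shape (N, 1, H, W); target is a thin contour mask.
        """
        bce = F.binary_cross_entropy_with_logits(pred_logits, target)
        prob = torch.sigmoid(pred_logits)
        # A pixel whose 3x3 neighbourhood is also strongly active indicates a thick response.
        neighbourhood = F.avg_pool2d(prob, kernel_size=3, stride=1, padding=1)
        thickness_penalty = (prob * neighbourhood).mean()
        return bce + lam * thickness_penalty

    # Example usage with random stand-in tensors.
    pred = torch.randn(2, 1, 64, 64)
    gt = (torch.rand(2, 1, 64, 64) > 0.95).float()
    print(contour_loss(pred, gt))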


Subjects
Deep Learning; Gynecologic Surgical Procedures/methods; Laparoscopy/methods; Uterus/surgery; Augmented Reality; Female; Humans; Magnetic Resonance Imaging