Results 1 - 20 of 61
1.
J Surg Res; 296: 612-620, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38354617

ABSTRACT

INTRODUCTION: Augmented reality (AR) in laparoscopic liver resection (LLR) can improve intrahepatic navigation by creating a virtual liver transparency. Our team has recently developed Hepataug, an AR software system that projects the invisible intrahepatic tumors onto the laparoscopic images, allowing the surgeon to localize them precisely. However, the accuracy of registration according to the location and size of the tumors, as well as the influence of the projection axis, has never been measured. The aim of this work was to measure the three-dimensional (3D) tumor prediction error of Hepataug. METHODS: Eight 3D virtual livers were created from the computed tomography scan of a healthy human liver. Reference markers with known coordinates were virtually placed on the anterior surface. The virtual livers were then deformed and 3D printed, forming 3D liver phantoms. After placing each phantom inside a pelvic trainer, registration allowed Hepataug to project virtual tumors along two axes: the laparoscope axis and the operator port axis. The surgeons had to point to the center of eight virtual tumors per liver using a pointing tool whose coordinates were precisely computed. RESULTS: We obtained 128 pointing experiments. The average pointing error was 29.4 ± 17.1 mm and 9.2 ± 5.1 mm for the laparoscope and operator port axes, respectively (P = 0.001). The pointing errors tended to increase with tumor depth (correlation coefficients greater than 0.5, with P < 0.001). There was no significant dependency of the pointing error on tumor size for either projection axis. CONCLUSIONS: Tumor visualization by projection toward the operator port improves the accuracy of AR guidance and partially solves the problem of the two-dimensional visual interface of monocular laparoscopy. Despite the lower precision of AR for tumors located in the posterior part of the liver, it could allow surgeons to access these lesions without completely mobilizing the liver, hence decreasing surgical trauma.
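A minimal sketch of how a 3D pointing error and its correlation with tumor depth could be computed for such an evaluation (simulated data; Pearson correlation is an assumption, as the study does not name the coefficient; this is not the authors' Hepataug code):

```python
import numpy as np
from scipy import stats

# Hypothetical data layout: one row per pointing experiment (128 in the study).
rng = np.random.default_rng(0)
tumor_xyz = rng.uniform(0, 100, (128, 3))      # ground-truth tumor centers (mm)
depth_mm = rng.uniform(5, 60, 128)             # tumor depth below the surface
pointed_xyz = tumor_xyz + rng.normal(0, 5, (128, 3))  # surgeon's pointed locations

# 3D pointing error: Euclidean distance between pointed location and tumor center.
error_mm = np.linalg.norm(pointed_xyz - tumor_xyz, axis=1)
print(f"pointing error: {error_mm.mean():.1f} ± {error_mm.std(ddof=1):.1f} mm")

# Correlation between pointing error and tumor depth, as analysed in the study.
r, p = stats.pearsonr(depth_mm, error_mm)
print(f"correlation with depth: r={r:.2f}, p={p:.3g}")
```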


Subjects
Augmented Reality, Laparoscopy, Neoplasms, Computer-Assisted Surgery, Humans, Laparoscopy/methods, Imaging Phantoms, Three-Dimensional Imaging/methods, Liver/diagnostic imaging, Liver/surgery, Computer-Assisted Surgery/methods
2.
J Surg Res; 296: 325-336, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38306938

ABSTRACT

INTRODUCTION: Minimally invasive surgery uses electrosurgical tools that generate smoke. This smoke reduces the visibility of the surgical site and spreads harmful substances with potential hazards for the surgical staff. Automatic image analysis may provide assistance. However, existing studies are restricted to simple clear-versus-smoky image classification. MATERIALS AND METHODS: We propose a novel approach using surgical image analysis with machine learning, including deep neural networks. We address three tasks: 1) smoke quantification, which estimates the visual level of smoke, 2) smoke evacuation confidence, which estimates the level of confidence to evacuate smoke, and 3) smoke evacuation recommendation, which estimates the evacuation decision. We collected three datasets with expert annotations. We trained end-to-end neural networks for the three tasks. We also created indirect predictors, using task 1 followed by linear regression to solve task 2, and using task 2 followed by binary classification to solve task 3. RESULTS: We observe a reasonable inter-expert variability for task 1 and a large one for tasks 2 and 3. For task 1, the expert error is 17.61 percentage points (pp) and the neural network error is 18.45 pp. For task 2, the best results are obtained from the indirect predictor based on task 1. For this task, the expert error is 27.35 pp and the predictor error is 23.60 pp. For task 3, the expert accuracy is 76.78% and the predictor accuracy is 81.30%. CONCLUSIONS: Smoke quantification, evacuation confidence, and evacuation recommendation can be achieved by automatic surgical image analysis with accuracy similar to or better than that of the experts.
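A minimal sketch of the indirect-predictor idea: a task 1 estimate feeds a linear regression for task 2, whose output feeds a binary classifier for task 3 (simulated data and scikit-learn stand-ins, not the paper's networks):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)

# Simulated stand-ins for the annotations of the three tasks.
smoke_level = rng.uniform(0, 100, 500)                        # task 1 target
evac_confidence = 0.8 * smoke_level + rng.normal(0, 10, 500)  # task 2 target
evac_decision = (evac_confidence > 50).astype(int)            # task 3 target

# Task 1 estimates would come from the end-to-end network; simulated here.
task1_pred = (smoke_level + rng.normal(0, 18, 500)).reshape(-1, 1)

# Indirect predictor for task 2: linear regression on top of the task 1 output.
task2_pred = LinearRegression().fit(task1_pred, evac_confidence) \
                               .predict(task1_pred).reshape(-1, 1)

# Indirect predictor for task 3: binary classification on top of the task 2 output.
task3_pred = LogisticRegression().fit(task2_pred, evac_decision).predict(task2_pred)
print("task 3 accuracy:", (task3_pred == evac_decision).mean())
```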


Subjects
Computer-Assisted Image Processing, Minimally Invasive Surgical Procedures, Smoke, Humans, Machine Learning, Neural Networks (Computer), Nicotiana, Smoke/analysis
3.
World J Urol; 41(2): 335-343, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35776173

ABSTRACT

INTRODUCTION: Minimally invasive partial nephrectomy (MIPN) has become the standard of care for localized kidney tumors over the past decade. The characteristics of each tumor, in particular its size and its relationship with the excretory tract and vessels, make it possible to judge its complexity and to predict the risk of complications. The recent development of virtual 3D model reconstruction and computer vision has opened the way to image-guided surgery and augmented reality (AR). OBJECTIVE: Our objective was to perform a systematic review to list and describe the different AR techniques proposed to support partial nephrectomy (PN). MATERIALS AND METHODS: The systematic review of the literature was performed on 12/04/22, using the keywords "nephrectomy" and "augmented reality" on Embase and Medline. Articles were considered if they reported surgical outcomes when using AR with virtual image overlay on the real view, during ex vivo or in vivo MIPN. We classified them according to the registration technique used. RESULTS: We found 16 articles describing an AR technique during MIPN procedures that met the eligibility criteria. A moderate to high risk of bias was recorded for all the studies. We classified the registration methods into three main families, of which the most promising seems to be surface-based registration. CONCLUSION: Despite promising results, no study has yet shown an improvement in clinical outcomes with AR. The ideal AR technique is probably yet to be established, as several designs are still being actively explored. More clinical data will be required to establish the potential contribution of this technology to MIPN.


Subjects
Kidney Neoplasms, Computer-Assisted Surgery, Humans, Nephrectomy/methods, Kidney Neoplasms/surgery, Computer-Assisted Surgery/methods
4.
J Minim Invasive Gynecol; 30(5): 397-405, 2023 May.
Article in English | MEDLINE | ID: mdl-36720429

ABSTRACT

STUDY OBJECTIVE: We focus on explaining the concepts underlying artificial intelligence (AI), using Uteraug, a laparoscopic surgery guidance application based on augmented reality (AR), to provide concrete examples. AI can be used to automatically interpret surgical images. We are specifically interested in the tasks of uterus segmentation and uterus contouring in laparoscopic images. A major difficulty with AI methods is their requirement for a massive amount of annotated data. We propose SurgAI3.8K, the first gynaecological dataset with annotated anatomy. We study the impact of AI on automating key steps of Uteraug. DESIGN: We constructed the SurgAI3.8K dataset with 3800 images extracted from 79 laparoscopy videos. We created the following annotations: the uterus segmentation, the uterus contours and the regions of the left and right fallopian tube junctions. We divided our dataset into a training set and a test set. Our engineers trained a neural network on the training set. We then compared the performance of the neural network with that of the experts on the test set. In particular, we established the relationship between training set size and performance by creating size-performance graphs. SETTING: University. PATIENTS: Not available. INTERVENTION: Not available. MEASUREMENTS AND MAIN RESULTS: The size-performance graphs show a performance plateau at 700 images for uterus segmentation and 2000 images for uterus contouring. The final segmentation scores on the training and test sets were 94.6% and 84.9% (the higher, the better) and the final contour errors were 19.5% and 47.3% (the lower, the better). These results allowed us to bootstrap Uteraug, achieving AR performance equivalent to its current manual setup. CONCLUSION: We describe a concrete AI system in laparoscopic surgery, covering all steps from data collection, data annotation and neural network training to performance evaluation and the final application.
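A sketch of how a size-performance graph could be produced: train on subsets of increasing size and plot the test score against the subset size. Here a saturating stand-in function simulates the reported plateau (illustrative values only, not the study's pipeline):

```python
import numpy as np
import matplotlib.pyplot as plt

def test_score_after_training(n_images, plateau=84.9, rate=1 / 500):
    """Stand-in for 'train on n images, evaluate on the test set'; a saturating
    curve mimics the plateau behaviour reported in the study."""
    rng = np.random.default_rng(n_images)
    return plateau * (1 - np.exp(-rate * n_images)) + rng.normal(0, 1)

sizes = [100, 300, 700, 1000, 2000, 3000, 3800]
scores = [test_score_after_training(n) for n in sizes]

plt.plot(sizes, scores, marker="o")
plt.xlabel("training set size (images)")
plt.ylabel("test score (%)")
plt.title("Size-performance graph: the plateau indicates data sufficiency")
plt.show()
```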


Subjects
Augmented Reality, Laparoscopy, Humans, Female, Artificial Intelligence, Neural Networks (Computer), Uterus/surgery, Laparoscopy/methods
5.
Surg Endosc; 36(1): 833-843, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34734305

ABSTRACT

BACKGROUND: The aim of this study was to assess the performance of our augmented reality (AR) software (Hepataug) during laparoscopic resection of liver tumours and to compare it to standard ultrasonography (US). MATERIALS AND METHODS: Ninety pseudo-tumours ranging from 10 to 20 mm were created in sheep cadaveric livers by injection of alginate. CT scans were then performed and 3D models reconstructed using medical image segmentation software (MITK). The livers were placed in a pelvic trainer on an inclined plane, approximately perpendicular to the laparoscope. The aim was to obtain free resection margins, as close as possible to 1 cm. Laparoscopic resection was performed using US alone (n = 30, US group), AR alone (n = 30, AR group) and both US and AR (n = 30, ARUS group). R0 resection, maximal margins, minimal margins and mean margins were assessed after histopathologic examination, adjusted for tumour depth and for a zone-wise liver difficulty level. RESULTS: The minimal margins did not differ between the three groups (8.8, 8.0 and 6.9 mm in the US, AR and ARUS groups, respectively). The maximal margins were larger in the US group than in the AR and ARUS groups after adjustment for depth and zone difficulty (21 vs. 18 mm, p = 0.001 and 21 vs. 19.5 mm, p = 0.037, respectively). The mean margins, which reflect the variability of the measurements, were larger in the US group than in the ARUS group after adjustment for depth and zone difficulty (15.2 vs. 12.8 mm, p < 0.001). When considering only the most difficult zone (difficulty 3), there were more R1/R2 resections in the US group than in the pooled AR and ARUS groups (50% vs. 21%, p = 0.019). CONCLUSION: Laparoscopic liver resection using AR seems to provide more accurate resection margins, with less variability, than the gold-standard US navigation, particularly in difficult-to-access liver zones with deep tumours.
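A sketch of a group comparison of margins adjusted for depth and zone difficulty, in the spirit of the adjusted analysis above (simulated data and an ordinary least squares model via statsmodels; the study's actual statistical procedure is not specified here, so this is an assumption):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated margin data: one row per resection, with the adjustment covariates.
n = 90
df = pd.DataFrame({
    "margin_mm": rng.normal(15, 4, n),
    "group": rng.choice(["US", "AR", "ARUS"], n),
    "depth_mm": rng.uniform(5, 40, n),
    "zone_difficulty": rng.integers(1, 4, n),
})

# Group comparison adjusted for tumour depth and zone difficulty.
model = smf.ols("margin_mm ~ C(group) + depth_mm + C(zone_difficulty)", data=df).fit()
print(model.summary())
```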


Subjects
Augmented Reality, Laparoscopy, Liver Neoplasms, Animals, Animal Disease Models, Three-Dimensional Imaging, Laparoscopy/methods, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/surgery, Sheep
6.
Surg Endosc; 34(12): 5377-5383, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31996995

ABSTRACT

BACKGROUND: In laparoscopy, the digital camera offers surgeons the opportunity to receive support from image-guided surgery systems. Such systems require image understanding: the ability for a computer to understand what the laparoscope sees. Image understanding has recently progressed owing to the emergence of artificial intelligence and especially deep learning techniques. However, the state of the art of deep learning in gynaecology only offers image-based detection, reporting the presence or absence of an anatomical structure without finding its location. A solution to the localisation problem is given by the concept of semantic segmentation, which provides both the detection and the pixel-level location of a structure in an image. The state-of-the-art results in semantic segmentation are achieved by deep learning, whose usage requires a massive amount of annotated data. We propose the first dataset dedicated to this task and the first evaluation of deep learning-based semantic segmentation in gynaecology. METHODS: We used the deep learning method called Mask R-CNN. Our dataset has 461 laparoscopic images manually annotated with three classes: uterus, ovaries and surgical tools. We split our dataset into 361 images to train Mask R-CNN and 100 images to evaluate its performance. RESULTS: The segmentation accuracy is reported in terms of the percentage of overlap between the regions segmented by Mask R-CNN and the manually annotated ones. The accuracy is 84.5%, 29.6% and 54.5% for uterus, ovaries and surgical tools, respectively. An automatic detection of these structures was then inferred from the semantic segmentation results, which led to state-of-the-art detection performance, except for the ovaries. Specifically, the detection accuracy is 97%, 24% and 86% for uterus, ovaries and surgical tools, respectively. CONCLUSION: Our preliminary results are very promising, given the relatively small size of our initial dataset. The creation of an international surgical database seems essential.
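The reported overlap metric can be sketched as intersection-over-union expressed as a percentage (simulated masks; not the study's evaluation code):

```python
import numpy as np

def overlap_percent(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Percentage of overlap (intersection over union) between a predicted
    binary mask and a manually annotated one."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return 100.0 * inter / union if union else 100.0

# Simulated masks for one image and one class (e.g. uterus).
rng = np.random.default_rng(3)
gt = rng.random((480, 640)) > 0.7
pred = gt.copy()
pred[:40] = False  # simulate an imperfect prediction
print(f"overlap: {overlap_percent(pred, gt):.1f}%")
```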


Subjects
Deep Learning/standards, Gynecology/methods, Laparoscopy/methods, Female, Humans
7.
Surg Endosc; 34(12): 5642-5648, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32691206

ABSTRACT

BACKGROUND: Previous work on augmented reality (AR) guidance in monocular laparoscopic hepatectomy requires the surgeon to manually overlay a rigid preoperative model onto a laparoscopy image. This may be fairly inaccurate because of significant liver deformation. We have proposed a technique which semi-automatically overlays a deformable preoperative model onto a laparoscopic image, using new software called Hepataug. The aim of this study is to show the feasibility of using Hepataug to perform AR with a deformable model in laparoscopic hepatectomy. METHODS: We ran Hepataug during the procedures, alongside the usual means of laparoscopic ultrasonography (LUS) and visual inspection of the preoperative CT or MRI. The primary objective was to assess the feasibility of Hepataug in terms of minimal disruption of the surgical workflow. The secondary objective was to assess the potential benefit of Hepataug by subjective comparison with LUS. RESULTS: From July 2017 to March 2019, 17 consecutive patients were included in this study. AR was feasible in all procedures, with good correlation with LUS. However, for 2 patients, LUS did not reveal the location of the tumors. Hepataug gave a prediction of the tumor locations, which was confirmed and refined by careful inspection of the preoperative CT or MRI. CONCLUSION: Hepataug caused minimal disruption of the surgical workflow and can thus feasibly be used in real hepatectomy procedures. Thanks to its new mechanism of semi-automatic deformable alignment, Hepataug also showed good agreement with LUS and visual CT or MRI inspection in subsurface tumor localization. Importantly, Hepataug yields reproducible results. It is easy to use and could be deployed in any existing operating room. Nevertheless, comparative prospective studies are needed to study its efficacy.


Subjects
Augmented Reality, Laparoscopy, Liver/surgery, Biological Models, Preoperative Care, Adult, Aged, Aged 80 and over, Female, Hepatectomy, Humans, Three-Dimensional Imaging, Liver/diagnostic imaging, Magnetic Resonance Imaging, Male, Middle Aged, X-Ray Computed Tomography, Ultrasonography
8.
J Minim Invasive Gynecol; 27(4): 973-976, 2020.
Article in English | MEDLINE | ID: mdl-31765829

ABSTRACT

Augmented reality is a technology that allows a surgeon to see key hidden subsurface structures in an endoscopic video in real time. It works by overlaying information obtained from preoperative imaging and fusing it in real time with the endoscopic image. Magnetic resonance diffusion tensor imaging (DTI) and fiber tractography are known to provide information additional to that obtained from standard structural magnetic resonance imaging (MRI). Here, we report the first 2 cases of the use of real-time augmented reality during laparoscopic myomectomies, with visualization of uterine muscle fibers after DTI tractography-MRI to help the surgeon decide the starting incision point. In the first case, a 31-year-old patient was undergoing laparoscopic surgery for a 6-cm FIGO type V myoma. In the second case, a 38-year-old patient was undergoing a laparoscopic myomectomy for a single 6-cm FIGO type VI myoma. Signed consent forms were obtained from both patients, including clauses stating that the surgery itself would not be modified. Before surgery, MRI was performed. The external surface of the uterus, the uterine cavity, and the surface of the myomas were delimited on the basis of the findings of preoperative MRI. A fiber tracking algorithm was used to extrapolate the architecture of the uterine muscle fibers. The aligned models were blended with each video frame to give the impression that the uterus is almost transparent, enabling the surgeon to localize the myomas and uterine cavity exactly. The uterine muscle fibers were also displayed, and their visualization helped us decide the starting incision point for the myomectomies. The myomectomies were then performed using a classic laparoscopic technique. These case reports show that augmented reality and DTI fiber tracking in a uterus with myomas are possible, providing fiber direction and helping the surgeon visualize and decide the starting incision point for laparoscopic myomectomy. Respecting the fibers' orientation could improve the quality of the scar and decrease the architectural disorganization of the uterus.
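The model-video blending step described above can be sketched with simple alpha compositing (OpenCV; in the real system the render is produced by projecting the registered preoperative model on every frame, so this is only illustrative):

```python
import cv2
import numpy as np

def blend_overlay(frame: np.ndarray, model_render: np.ndarray,
                  alpha: float = 0.4) -> np.ndarray:
    """Blend an already-aligned model render into a video frame so the organ
    appears semi-transparent (simple alpha compositing)."""
    return cv2.addWeighted(model_render, alpha, frame, 1.0 - alpha, 0.0)

# Dummy frame and render of the same size (BGR) for demonstration.
frame = np.zeros((480, 640, 3), np.uint8)
render = np.zeros_like(frame)
cv2.circle(render, (320, 240), 80, (0, 0, 255), -1)  # e.g. a myoma shown in red
augmented = blend_overlay(frame, render)
```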


Subjects
Augmented Reality, Laparoscopy, Leiomyoma, Myoma, Uterine Myomectomy, Uterine Neoplasms, Adult, Diffusion Tensor Imaging, Female, Humans, Laparoscopy/methods, Leiomyoma/diagnostic imaging, Leiomyoma/pathology, Leiomyoma/surgery, Myoma/surgery, Uterine Myomectomy/methods, Uterine Neoplasms/diagnostic imaging, Uterine Neoplasms/pathology, Uterine Neoplasms/surgery
9.
J Minim Invasive Gynecol; 26(6): 1177-1180, 2019.
Article in English | MEDLINE | ID: mdl-30965117

ABSTRACT

Augmented reality (AR) is a surgical guidance technology that allows key hidden subsurface structures to be visualized by endoscopic imaging. We report here 2 cases of patients with adenomyoma selected for the AR technique. The adenomyomas were localized using AR during laparoscopy. Three-dimensional models of the uterus, uterine cavity, and adenomyoma were constructed before surgery from T2-weighted magnetic resonance imaging, allowing an intraoperative 3-dimensional shape of the uterus to be obtained. These models were automatically aligned and "fused" with the laparoscopic video in real time, giving the uterus a semitransparent appearance and allowing the surgeon to both locate the position of the adenomyoma and uterine cavity in real time and rapidly decide how best to access the adenomyoma. In conclusion, the use of our AR system designed for gynecologic surgery leads to improvements in laparoscopic adenomyomectomy and surgical safety.


Subjects
Adenomyoma/diagnosis, Adenomyoma/surgery, Augmented Reality, Gynecologic Surgical Procedures/methods, Computer-Assisted Surgery/methods, Uterine Neoplasms/diagnosis, Uterine Neoplasms/surgery, Adult, Feasibility Studies, Female, Humans, Laparoscopy/methods, Magnetic Resonance Imaging/methods
10.
Surg Endosc; 32(1): 514-515, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28791423

ABSTRACT

BACKGROUND: Laparoscopic liver surgery is seldom performed, mainly because of the risk of hepatic vein bleeding or of incomplete resection of the tumour. This risk may be reduced by means of an augmented reality guidance system (ARGS), which has the potential to help find the position of intrahepatic tumours and hepatic veins, thus facilitating oncological resection and limiting the risk of operative bleeding. METHODS: We report the case of an 81-year-old man who was diagnosed with a hepatocellular carcinoma after an intra-abdominal bleeding episode. The preoperative CT scan did not show metastases. We describe our preferred approach for laparoscopic left hepatectomy with initial control of the left hepatic vein, together with preliminary results of our novel ARGS obtained postoperatively. In our ARGS, a 3D virtual anatomical model is created from the abdominal CT scan and manually registered to selected laparoscopic images. For this patient, the virtual model was composed of the segmented left liver, right liver, tumour and middle hepatic vein. RESULTS: The operating time was 205 min, with a recorded blood loss of 300 cc. The postoperative course was uneventful. Histopathological analysis revealed a hepatocellular carcinoma with free margins. Our intrahepatic visualization results suggest that ARGS can be beneficial for locating the tumour, the transection plane and the middle hepatic vein prior to parenchymal transection; once transection has begun, however, the system no longer works, owing to the substantial changes in the liver's shape. CONCLUSIONS: As of today, we have performed eight similar left hepatectomies, with good results. Our ARGS has shown promising results and should now be attempted intraoperatively.


Subjects
Hepatectomy/methods, Laparoscopy/methods, Liver Neoplasms/surgery, Computer-Assisted Surgery, Virtual Reality, Aged 80 and over, Surgical Blood Loss/prevention & control, Hepatocellular Carcinoma/surgery, Humans, Male, Operative Time
11.
Surg Endosc; 32(3): 1192-1201, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28812157

ABSTRACT

BACKGROUND: Augmented reality (AR) guidance is a technology that allows a surgeon to see subsurface structures by overlaying preoperative imaging data on a live laparoscopic video. Our objective was to evaluate a state-of-the-art AR guidance system in a surgical tumor resection model, comparing the accuracy of resection with and without the system. Our system has three phases. Phase 1: using the MRI images, the surfaces of the kidney and pseudotumor are segmented to construct a 3D model. Phase 2: the intraoperative 3D model of the kidney is computed. Phase 3: the preoperative and intraoperative models are registered, and the laparoscopic view is augmented with the preoperative data. METHODS: We performed a prospective experimental study on ex vivo porcine kidneys. Alginate was injected into the parenchyma to create pseudotumors measuring 4-10 mm. The kidneys were then analyzed by MRI. Next, the kidneys were placed into pelvic trainers, and the pseudotumors were laparoscopically resected. The AR guidance system allows the surgeon to see tumors and margins using classical laparoscopic instruments and a classical screen. The resection margins were measured microscopically to evaluate the accuracy of resection. RESULTS: Ninety tumors were segmented: 28 were used to optimize the AR software, and 62 were used for a randomized comparison of surgical resection: 29 tumors were resected using AR and 33 without AR. The analysis of our pathological results showed 4 failures (tumors with positive margins; 13.8%) in the AR group and 10 (30.3%) in the non-AR group. There was no complete miss in the AR group, whereas there were 4 complete misses in the non-AR group. In total, 14 (42.4%) tumors were completely missed or had a positive margin in the non-AR group. CONCLUSIONS: Our AR system enhances the accuracy of surgical resection, particularly for small tumors. Crucial information such as resection margins and vascularization could also be displayed.
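A sketch of how the reported positive-margin rates (4/29 with AR vs. 10/33 without) could be compared with Fisher's exact test (the abstract does not state which test was used, so the choice of test is an assumption):

```python
from scipy.stats import fisher_exact

# Contingency table reconstructed from the reported rates:
# rows = AR / non-AR group, columns = positive margins / clear margins.
table = [[4, 29 - 4],     # AR group: 4 failures out of 29 resections
         [10, 33 - 10]]   # non-AR group: 10 failures out of 33 resections
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```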


Subjects
Kidney Neoplasms/pathology, Kidney Neoplasms/surgery, Kidney/pathology, Kidney/surgery, Margins of Excision, Animal Models, Animals, Humans, Three-Dimensional Imaging/methods, Kidney Neoplasms/diagnostic imaging, Laparoscopy/methods, Magnetic Resonance Imaging, Prospective Studies, Computer-Assisted Radiographic Image Interpretation, Swine
15.
Surg Endosc; 31(1): 456-461, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27129565

ABSTRACT

BACKGROUND: Augmented reality (AR) is a technology that can allow a surgeon to see subsurface structures. It works by overlaying information from another modality, such as MRI, and fusing it in real time with the endoscopic images. AR has never been developed for a very mobile organ like the uterus and has never been applied in gynecology. Myomas are not always easy to localize in laparoscopic surgery when they do not significantly change the surface of the uterus or are at multiple locations. OBJECTIVE: To study the accuracy of myoma localization using a new AR system compared to MRI-only localization. METHODS: Ten residents were asked to localize six myomas (on a uterine model placed in a laparoscopic box trainer) either using AR or under conditions simulating the standard method (only the MRI was available). Myomas were randomly divided into two groups: the control group (MRI only, AR not activated) and the AR group (AR activated). Software was used to automatically measure the distance between the point of contact on the uterine surface and the myoma. We compared these distances to the true shortest distance to obtain accuracy measures. The time taken to perform the task was measured, and the complexity of the task was assessed. RESULTS: The mean localization error in the control group was 16.80 mm [0.1-52.2] versus 0.64 mm [0.01-4.71] with AR. In the control group, the mean time to perform the task was 18.68 [6.4-47.1] s compared to 19.6 [3.9-77.5] s with AR. The mean difficulty score (evaluated for each myoma) was 2.36 [1-4] versus 0.87 [0-4] for the control and AR groups, respectively. DISCUSSION: We developed an AR system for a very mobile organ. This is the first user study to quantitatively evaluate an AR system for improving a surgical task. In our model, AR improves localization accuracy.


Subjects
Laparoscopy/methods, Leiomyoma/surgery, Anatomic Models, Computer-Assisted Surgery/methods, Uterine Myomectomy/methods, Uterine Neoplasms/surgery, Female, Gynecologic Surgical Procedures/methods, Gynecology/education, Humans, Internship and Residency, Leiomyoma/diagnostic imaging, Magnetic Resonance Imaging, Software, User-Computer Interface, Uterine Neoplasms/diagnostic imaging
16.
Int J Comput Assist Radiol Surg; 19(6): 1157-1163, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38609735

ABSTRACT

PURPOSE: We investigate whether foundation models pretrained on diverse visual data can benefit surgical computer vision. We use instrument and uterus segmentation in minimally invasive procedures as benchmarks. We propose multiple supervised, unsupervised and few-shot supervised adaptations of foundation models, including two novel adaptation methods. METHODS: We use DINOv1, DINOv2, DINOv2 with registers, and SAM backbones, with the ART-Net surgical instrument and the SurgAI3.8K uterus segmentation datasets. We investigate five approaches: DINO unsupervised, few-shot learning with a linear decoder, supervised learning with the proposed DINO-UNet adaptation, DPT with a DINO encoder, and unsupervised learning with the proposed SAM adaptation. RESULTS: We evaluate 17 models for instrument segmentation and 7 models for uterus segmentation, and compare them to existing ad hoc models for the tasks at hand. We show that the linear decoder can be learned with few shots. The unsupervised and linear decoder methods obtain slightly subpar results but could be considered useful in data-scarce settings. The unsupervised SAM model produces finer edges but has inconsistent outputs. However, DPT and DINO-UNet obtain strikingly good results, defining a new state of the art by outperforming the previous best by 5.6 and 4.1 percentage points (pp) for instrument segmentation and by 4.4 and 1.5 pp for uterus segmentation. Both methods obtain semantic and spatial precision, accurately segmenting intricate details. CONCLUSION: Our results show the huge potential of using DINO and SAM for surgical computer vision, indicating a promising role for visual foundation models in medical image analysis, particularly in scenarios with limited or complex data.
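A sketch of the few-shot linear decoder idea on frozen DINOv2 features (the torch.hub entry point is real; the decoder head, image size and training details are illustrative assumptions, not the paper's exact setup):

```python
import torch
import torch.nn as nn

# Frozen DINOv2 backbone; ViT-S/14 gives 384-d patch tokens on a 14-pixel grid.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

decoder = nn.Conv2d(384, 1, kernel_size=1)  # linear per-patch classifier

def predict(image: torch.Tensor) -> torch.Tensor:
    """image: (B, 3, H, W) with H, W multiples of 14; returns patch-level logits."""
    B, _, H, W = image.shape
    with torch.no_grad():
        feats = backbone.forward_features(image)["x_norm_patchtokens"]  # (B, N, 384)
    feats = feats.transpose(1, 2).reshape(B, 384, H // 14, W // 14)
    return decoder(feats)  # upsampling + sigmoid would follow for a full mask

# Few-shot training would optimise only `decoder` (e.g. BCEWithLogitsLoss) on a
# handful of annotated images, keeping the backbone frozen.
print(predict(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1, 16, 16])
```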


Subjects
Computer-Assisted Surgery, Humans, Female, Computer-Assisted Surgery/methods, Uterus/surgery, Uterus/diagnostic imaging
17.
Article in English | MEDLINE | ID: mdl-39058410

ABSTRACT

PURPOSE: A stereoscopic surgical video stream consists of left-right image pairs provided by a stereo endoscope. While the surgical display shows these image pairs synchronised, most capture cards cause de-synchronisation. This means that the paired left and right images may no longer correspond once used in downstream tasks such as stereo depth computation. The stereo synchronisation problem is to recover the corresponding left-right images. This is particularly challenging in the surgical setting, owing to the moist tissues, rapid camera motion, quasi-staticity and the real-time processing requirement. Existing methods exploit image cues from the diffuse reflection component and are defeated by the above challenges. METHODS: We propose to exploit the specular reflection. Specifically, we propose a powerful left-right comparison score (LRCS) using the specular highlights commonly occurring on moist tissues. We detect the highlights using a neural network, characterise them with invariant descriptors, match them, and use the number of matches to form the proposed LRCS. We evaluate it against 147 existing LRCSs in 44 challenging robotic partial nephrectomy and robot-assisted hepatic resection video sequences with simulated and real de-synchronisation. RESULTS: The proposed LRCS outperforms the existing ones, with average and maximum offsets of 0.055 and 1 frames and 94.1±3.6% of frames successfully synchronised. In contrast, the best existing LRCS achieves average and maximum offsets of 0.3 and 3 frames and 81.2±6.4% of frames successfully synchronised. CONCLUSION: The use of specular reflection brings a tremendous boost to the real-time surgical stereo synchronisation problem.
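A sketch of the LRCS idea: match descriptors restricted to specular-highlight regions and count the matches. An intensity threshold stands in for the paper's neural highlight detector, and OpenCV's ORB is an illustrative choice of invariant descriptor, not the paper's:

```python
import cv2
import numpy as np

def lrcs(left: np.ndarray, right: np.ndarray) -> int:
    """Left-right comparison score: number of descriptor matches restricted to
    specular-highlight regions (crude threshold stands in for the paper's
    neural highlight detector)."""
    orb = cv2.ORB_create()
    descs = []
    for img in (left, right):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        mask = (gray > 230).astype(np.uint8) * 255  # highlight detection stand-in
        _, desc = orb.detectAndCompute(gray, mask)
        descs.append(desc)
    if descs[0] is None or descs[1] is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(matcher.match(descs[0], descs[1]))

def synchronise(left_frames, right_frames, max_offset=3):
    """Recover the right-stream offset maximising the total LRCS."""
    def total(off):
        return sum(lrcs(l, r) for l, r in zip(left_frames, right_frames[off:]))
    return max(range(max_offset + 1), key=total)
```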

18.
Article in English | MEDLINE | ID: mdl-39014177

ABSTRACT

PURPOSE: Augmented reality guidance in laparoscopic liver resection requires the registration of a preoperative 3D model to the intraoperative 2D image. However, 3D-2D liver registration poses challenges owing to the liver's flexibility, particularly in the limited visibility conditions of laparoscopy. Although promising, the current registration methods are computationally expensive and often necessitate manual initialisation. METHODS: We propose the first neural model (NM) to predict the registration, represented as 3D model deformation coefficients, from image landmarks. The strategy consists of training a patient-specific model on synthetic data generated automatically from the patient's preoperative model. A liver shape modelling technique, which further reduces time complexity, is also proposed. RESULTS: The NM method was evaluated using the target registration error measure, showing accuracy on par with existing methods, which are all based on numerical optimisation. Notably, NM runs much faster, offering the possibility of achieving real-time inference, a significant step ahead in this field. CONCLUSION: The proposed method is the first neural method for 3D-2D liver registration. Preliminary experimental findings show performance comparable to existing methods, with superior computational efficiency. These results suggest a potential to deeply impact liver registration techniques.
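A sketch of a patient-specific neural registration model trained on synthetic data, in the spirit of the method above (the network architecture, sizes and the stand-in deform-and-project operator are assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

N_LANDMARKS, N_COEFFS = 20, 32  # illustrative sizes

# MLP mapping 2D image landmarks to deformation coefficients of the 3D model.
net = nn.Sequential(
    nn.Linear(N_LANDMARKS * 2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_COEFFS),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Fixed stand-in for the operator that, in the real method, deforms the
# preoperative model and projects its landmarks into the image.
proj = torch.randn(N_COEFFS, N_LANDMARKS * 2) * 0.1

for step in range(1000):  # patient-specific training on synthetic pairs
    coeffs = torch.randn(64, N_COEFFS)     # sampled synthetic deformations
    landmarks = coeffs @ proj              # corresponding image landmarks
    loss = nn.functional.mse_loss(net(landmarks), coeffs)
    opt.zero_grad(); loss.backward(); opt.step()

# At inference, landmarks detected in the laparoscopic image are fed to `net`,
# giving the deformation coefficients in a single fast forward pass.
```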

19.
Comput Methods Programs Biomed; 245: 108038, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271792

ABSTRACT

BACKGROUND AND OBJECTIVE: Image segmentation is an essential component of medical image analysis. The case of 3D images such as MRI is particularly challenging and time-consuming. Interactive or semi-automatic methods are thus highly desirable. However, existing methods do not exploit the typical sequentiality of real user interactions, because the interaction memory used in these systems discards ordering. In contrast, we argue that the order of the user corrections should be used for training, leading to performance improvements. METHODS: We contribute to solving this problem by proposing a general multi-class deep learning-based interactive framework for image segmentation, which embeds a base network in a user interaction loop with a user feedback memory. We model the memory explicitly as a sequence of consecutive system states from which features can be learned, so that the system learns from the segmentation refinement process in general. Training is a major difficulty because the network's input depends on its previous output. We adapt the network to this loop by introducing a virtual user in the training process, modelled by dynamically simulating the iterative user feedback. RESULTS: We evaluated our framework against existing methods on the complex task of multi-class semantic instance segmentation of female pelvis MRI with 5 classes, including up to 27 tumour instances, using a segmentation dataset collected in our hospital, and on liver and pancreas CT segmentation, using public datasets. We conducted a user evaluation involving both senior and junior medical personnel in matching and adjacent areas of expertise. We observed a reduction in annotation time: 5 min 56 s with our framework versus 25 min on average with classical tools. We systematically evaluated the influence of the number of clicks on segmentation accuracy. With a single interaction round, our framework outperforms existing automatic systems with a comparable setup. We provide an ablation study and show that our framework outperforms existing interactive systems. CONCLUSIONS: Our framework largely outperforms existing systems in accuracy, with the largest impact on the smallest, most difficult classes, and drastically reduces the average user segmentation time, with fast inference at 47.2±6.2 ms per image.
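A sketch of the user interaction loop with a virtual user: each simulated correction is appended to a feedback memory that conditions the next prediction (`segment` is an assumed base-network callable, and the click policy is a deliberately simplified stand-in):

```python
import numpy as np

def virtual_user_click(pred: np.ndarray, gt: np.ndarray):
    """Virtual user: pick a mislabelled pixel (a simplified stand-in for
    clicking the most significant remaining error region)."""
    errors = np.argwhere(pred != gt)
    return tuple(errors[len(errors) // 2]) if len(errors) else None

def interaction_loop(segment, image, gt, rounds=5):
    """Embed a base network `segment(image, click_map)` in a user loop: each
    round, a (virtual) user correction is appended to the feedback memory,
    preserving the order of the corrections."""
    click_map = np.zeros(image.shape[:2], np.float32)
    pred = segment(image, click_map)
    for _ in range(rounds):
        click = virtual_user_click(pred, gt)
        if click is None:
            break                       # segmentation already matches
        click_map[click] = 1.0          # memory: sequence of system states
        pred = segment(image, click_map)
    return pred
```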


Subjects
Deep Learning, Female, Humans, X-Ray Computed Tomography/methods, Three-Dimensional Imaging/methods, Liver, Magnetic Resonance Imaging, Computer-Assisted Image Processing
20.
Int J Comput Assist Radiol Surg; 19(7): 1385-1389, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38775903

ABSTRACT

PURPOSE: We present a novel method for augmented reality in endoscopic endonasal surgery. Our method does not require the use of external tracking devices and can show hidden anatomical structures relevant to the surgical intervention. METHODS: Our method registers a preoperative 3D model of the nasal cavity to an intraoperative 3D model by estimating a scaled-rigid transformation. Registration is based on a two-stage iterative closest point (ICP) approach on the reconstructed nasal cavity. The hidden structures are then transferred from the preoperative 3D model to the intraoperative one using the estimated transformation, and projected and overlaid onto the endoscopic images to obtain the augmented reality. RESULTS: We performed qualitative and quantitative validation of our method on 12 clinical cases. Qualitative results were obtained by an ENT surgeon through visual inspection of the hidden structures in the augmented images. Quantitative results were obtained by measuring the target registration error using a novel transillumination-based approach. The results show that the hidden structures of interest are augmented at the expected locations in most cases. CONCLUSION: Our method was able to augment the endoscopic images in a sufficiently precise manner when the intraoperative nasal cavity did not deform considerably with respect to its preoperative state. This is a promising step towards trackerless augmented reality in endonasal surgery.
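A sketch of the scaled-rigid (similarity) transformation estimation at the core of such a registration, in Umeyama-style closed form; within an ICP stage this fit would alternate with nearest-neighbour correspondence search (illustrative, not the authors' implementation):

```python
import numpy as np

def scaled_rigid_fit(P: np.ndarray, Q: np.ndarray):
    """Closed-form estimate of scale s, rotation R and translation t
    minimising ||s * R @ p + t - q|| over corresponding point sets
    P, Q of shape (N, 3)."""
    mp, mq = P.mean(0), Q.mean(0)
    Pc, Qc = P - mp, Q - mq
    U, S, Vt = np.linalg.svd(Qc.T @ Pc / len(P))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / Pc.var(0).sum()
    t = mq - s * R @ mp
    return s, R, t

# Hidden preoperative structures are then transferred with the same transform
# before projection into the endoscopic image:
# hidden_intraop = s * hidden_preop @ R.T + t
```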


Subjects
Augmented Reality, Three-Dimensional Imaging, Nasal Cavity, Humans, Three-Dimensional Imaging/methods, Nasal Cavity/surgery, Nasal Cavity/diagnostic imaging, Endoscopy/methods, Computer-Assisted Surgery/methods