1.
Surg Endosc; 36(1): 833-843, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34734305

ABSTRACT

BACKGROUND: The aim of this study was to assess the performance of our augmented reality (AR) software (Hepataug) during laparoscopic resection of liver tumours and compare it to standard ultrasonography (US). MATERIALS AND METHODS: Ninety pseudo-tumours ranging from 10 to 20 mm were created in sheep cadaveric livers by injection of alginate. CT scans were then performed and 3D models reconstructed using medical image segmentation software (MITK). The livers were placed in a pelvi-trainer on an inclined plane, approximately perpendicular to the laparoscope. The aim was to obtain free resection margins as close as possible to 1 cm. Laparoscopic resection was performed using US alone (n = 30, US group), AR alone (n = 30, AR group), and both US and AR (n = 30, ARUS group). R0 resection and the maximal, minimal and mean margins were assessed after histopathologic examination, adjusted for tumour depth and for a zone-wise liver difficulty level. RESULTS: The minimal margins did not differ between the three groups (8.8, 8.0 and 6.9 mm in the US, AR and ARUS groups, respectively). The maximal margins were larger in the US group than in the AR and ARUS groups after adjustment for depth and zone difficulty (21 vs. 18 mm, p = 0.001 and 21 vs. 19.5 mm, p = 0.037, respectively). The mean margins, which reflect the variability of the measurements, were larger in the US group than in the ARUS group after adjustment for depth and zone difficulty (15.2 vs. 12.8 mm, p < 0.001). When considering only the most difficult zone (difficulty 3), there were more R1/R2 resections in the US group than in the AR + ARUS group (50% vs. 21%, p = 0.019). CONCLUSION: Laparoscopic liver resection using AR seems to provide more accurate resection margins with less variability than the gold-standard US navigation, particularly in difficult-to-access liver zones with deep tumours.


Subject(s)
Augmented Reality; Laparoscopy; Liver Neoplasms; Animals; Disease Models, Animal; Imaging, Three-Dimensional; Laparoscopy/methods; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/surgery; Sheep
2.
Surg Endosc; 34(12): 5377-5383, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31996995

ABSTRACT

BACKGROUND: In laparoscopy, the digital camera offers surgeons the opportunity to receive support from image-guided surgery systems. Such systems require image understanding: the ability of a computer to understand what the laparoscope sees. Image understanding has recently progressed owing to the emergence of artificial intelligence and especially deep learning techniques. However, the state of the art of deep learning in gynaecology only offers image-based detection, reporting the presence or absence of an anatomical structure without finding its location. A solution to the localisation problem is given by the concept of semantic segmentation, which provides both the detection and the pixel-level location of a structure in an image. The state-of-the-art results in semantic segmentation are achieved by deep learning, whose usage requires a massive amount of annotated data. We propose the first dataset dedicated to this task and the first evaluation of deep learning-based semantic segmentation in gynaecology. METHODS: We used the deep learning method called Mask R-CNN. Our dataset has 461 laparoscopic images manually annotated with three classes: uterus, ovaries and surgical tools. We split our dataset into 361 images to train Mask R-CNN and 100 images to evaluate its performance. RESULTS: The segmentation accuracy is reported as the percentage of overlap between the regions segmented by Mask R-CNN and the manually annotated ones. The accuracy is 84.5%, 29.6% and 54.5% for uterus, ovaries and surgical tools, respectively. An automatic detection of these structures was then inferred from the semantic segmentation results, which led to state-of-the-art detection performance except for the ovaries. Specifically, the detection accuracy is 97%, 24% and 86% for uterus, ovaries and surgical tools, respectively. CONCLUSION: Our preliminary results are very promising, given the relatively small size of our initial dataset. The creation of an international surgical database seems essential.
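The percentage-of-overlap measure used in this kind of segmentation evaluation is commonly computed as Intersection over Union between the predicted and manually annotated binary masks. A minimal sketch (NumPy; the toy masks are illustrative, and this is not the paper's evaluation code):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(pred, gt).sum() / union)

# Toy example: two overlapping 4x4 masks
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1  # 4 pixels
gt = np.zeros((4, 4)); gt[1:3, 1:4] = 1      # 6 pixels
print(iou(pred, gt))  # intersection 4, union 6 -> 0.6666666666666666
```

Averaging this score over a class's test images gives the per-class accuracy reported above.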


Subject(s)
Deep Learning/standards; Gynecology/methods; Laparoscopy/methods; Female; Humans
3.
Surg Endosc; 34(12): 5642-5648, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32691206

ABSTRACT

BACKGROUND: Previous work on augmented reality (AR) guidance in monocular laparoscopic hepatectomy requires the surgeon to manually overlay a rigid preoperative model onto a laparoscopy image. This may be fairly inaccurate because of significant liver deformation. We have proposed a technique which overlays a deformable preoperative model semi-automatically onto a laparoscopic image using new software called Hepataug. The aim of this study was to show the feasibility of using Hepataug to perform AR with a deformable model in laparoscopic hepatectomy. METHODS: We ran Hepataug during the procedures, alongside the usual means of laparoscopic ultrasonography (LUS) and visual inspection of the preoperative CT or MRI. The primary objective was to assess the feasibility of Hepataug in terms of minimal disruption of the surgical workflow. The secondary objective was to assess the potential benefit of Hepataug by subjective comparison with LUS. RESULTS: From July 2017 to March 2019, 17 consecutive patients were included in this study. AR was feasible in all procedures, with good correlation with LUS. However, for 2 patients, LUS did not reveal the location of the tumors. Hepataug gave a prediction of the tumor locations, which was confirmed and refined by careful inspection of the preoperative CT or MRI. CONCLUSION: Hepataug caused minimal disruption of the surgical workflow and can thus feasibly be used in real hepatectomy procedures. Thanks to its new mechanism of semi-automatic deformable alignment, Hepataug also showed good agreement with LUS and with visual CT or MRI inspection in subsurface tumor localization. Importantly, Hepataug yields reproducible results. It is easy to use and could be deployed in any existing operating room. Nevertheless, comparative prospective studies are needed to study its efficacy.


Subject(s)
Augmented Reality; Laparoscopy; Liver/surgery; Models, Biological; Preoperative Care; Adult; Aged; Aged, 80 and over; Female; Hepatectomy; Humans; Imaging, Three-Dimensional; Liver/diagnostic imaging; Magnetic Resonance Imaging; Male; Middle Aged; Tomography, X-Ray Computed; Ultrasonography
4.
J Minim Invasive Gynecol; 27(4): 973-976, 2020.
Article in English | MEDLINE | ID: mdl-31765829

ABSTRACT

Augmented reality is a technology that allows a surgeon to see key hidden subsurface structures in an endoscopic video in real time. It works by overlaying information obtained from preoperative imaging and fusing it in real time with the endoscopic image. Magnetic resonance diffusion tensor imaging (DTI) and fiber tractography are known to provide information additional to that obtained from standard structural magnetic resonance imaging (MRI). Here, we report the first 2 cases of the use of real-time augmented reality during laparoscopic myomectomies, with visualization of uterine muscle fibers from DTI tractography-MRI to help the surgeon decide the starting incision point. In the first case, a 31-year-old patient was undergoing laparoscopic surgery for a 6-cm FIGO type V myoma. In the second case, a 38-year-old patient was undergoing a laparoscopic myomectomy for a single 6-cm FIGO type VI myoma. Signed consent forms were obtained from both patients, including clauses stating that the surgery would not be modified. Before surgery, MRI was performed. The external surface of the uterus, the uterine cavity, and the surface of the myomas were delimited on the basis of the findings of preoperative MRI. A fiber-tracking algorithm was used to extrapolate the architecture of the uterine muscle fibers. The aligned models were blended with each video frame to give the impression that the uterus is almost transparent, enabling the surgeon to localize the myomas and uterine cavity exactly. The uterine muscle fibers were also displayed, and their visualization helped us decide the starting incision point for the myomectomies. The myomectomies were then performed using a classic laparoscopic technique. These case reports show that augmented reality and DTI fiber tracking in a uterus with myomas are possible, providing fiber direction and helping the surgeon visualize and decide the starting incision point for laparoscopic myomectomy. Respecting the fibers' orientation could improve the quality of the scar and decrease the architectural disorganization of the uterus.
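The "almost transparent" uterus described here is, at its core, alpha blending of a rendered model layer with each video frame. A minimal sketch of that compositing step (NumPy; the 0.4 opacity and toy images are illustrative assumptions, not Hepataug's actual rendering pipeline):

```python
import numpy as np

def blend(frame: np.ndarray, render: np.ndarray, mask: np.ndarray,
          alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend a rendered model layer onto a video frame.

    frame, render: HxWx3 uint8 images; mask: HxW bool, True where the
    rendered model covers the frame; alpha: model opacity.
    """
    out = frame.astype(np.float32)
    m = mask[..., None]  # broadcast the mask over the colour channels
    out = np.where(m, alpha * render + (1 - alpha) * out, out)
    return out.astype(np.uint8)

# Toy frame (grey 100) with the model (grey 200) covering one pixel
frame = np.full((2, 2, 3), 100, dtype=np.uint8)
render = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
print(blend(frame, render, mask)[0, 0, 0])  # 0.4*200 + 0.6*100 -> 140
```

Pixels outside the mask keep the original video content, which is why the surrounding anatomy stays untouched in the augmented view.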


Subject(s)
Augmented Reality; Laparoscopy; Leiomyoma; Myoma; Uterine Myomectomy; Uterine Neoplasms; Adult; Diffusion Tensor Imaging; Female; Humans; Laparoscopy/methods; Leiomyoma/diagnostic imaging; Leiomyoma/pathology; Leiomyoma/surgery; Myoma/surgery; Uterine Myomectomy/methods; Uterine Neoplasms/diagnostic imaging; Uterine Neoplasms/pathology; Uterine Neoplasms/surgery
5.
J Minim Invasive Gynecol; 26(6): 1177-1180, 2019.
Article in English | MEDLINE | ID: mdl-30965117

ABSTRACT

Augmented reality (AR) is a surgical guidance technology that allows key hidden subsurface structures to be visualized in the endoscopic image. We report here 2 cases of patients with adenomyoma selected for the AR technique. The adenomyomas were localized using AR during laparoscopy. Three-dimensional models of the uterus, uterine cavity, and adenomyoma were constructed before surgery from T2-weighted magnetic resonance imaging, allowing an intraoperative three-dimensional shape of the uterus to be obtained. These models were automatically aligned and "fused" with the laparoscopic video in real time, giving the uterus a semitransparent appearance and allowing the surgeon both to locate the position of the adenomyoma and uterine cavity and to rapidly decide how best to access the adenomyoma. In conclusion, the use of our AR system designed for gynecologic surgery leads to improvements in laparoscopic adenomyomectomy and surgical safety.


Subject(s)
Adenomyoma/diagnosis; Adenomyoma/surgery; Augmented Reality; Gynecologic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Uterine Neoplasms/diagnosis; Uterine Neoplasms/surgery; Adult; Feasibility Studies; Female; Humans; Laparoscopy/methods; Magnetic Resonance Imaging/methods
6.
Int J Comput Assist Radiol Surg; 17(12): 2211-2219, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36253604

ABSTRACT

PURPOSE: Laparoscopic liver resection is a challenging procedure because of the difficulty of localising inner structures such as tumours and vessels. Augmented reality overcomes this problem by overlaying preoperative 3D models on the laparoscopic views. It requires deformable registration of the preoperative 3D models to the laparoscopic views, which is a challenging task due to the liver's flexibility and partial visibility. METHODS: We propose several multi-view registration methods that exploit information from multiple views simultaneously in order to improve registration accuracy. They are designed to work in two scenarios: on rigidly related views and on non-rigidly related views. These methods exploit the liver's anatomical landmarks and the texture information available in all the views to constrain registration. RESULTS: We evaluated the registration accuracy of our methods quantitatively on synthetic and phantom data, and qualitatively on patient data. We measured 3D target registration errors in mm over the whole liver for the quantitative case, and 2D reprojection errors in pixels for the qualitative case. CONCLUSION: The proposed rigidly related multi-view methods improve registration accuracy compared to the baseline single-view method. They comply with the 1 cm oncologic resection margin advised for hepatocellular carcinoma interventions, depending on the available registration constraints. The non-rigidly related multi-view method does not provide a noticeable improvement. This means that using multiple views with the rigidity assumption achieves the best overall registration error.
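The two error measures named above can be sketched directly: 3D target registration error is a mean point-to-point distance in mm, and 2D reprojection error is a mean pixel distance after projecting 3D points through the camera. A minimal illustration (NumPy; the toy camera and points are made up, and this is not the authors' evaluation code):

```python
import numpy as np

def target_registration_error(pts_reg: np.ndarray, pts_true: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between registered and true 3D targets (Nx3)."""
    return float(np.linalg.norm(pts_reg - pts_true, axis=1).mean())

def reprojection_error(P: np.ndarray, pts3d: np.ndarray, pts2d: np.ndarray) -> float:
    """Mean pixel distance between projected 3D points and observed 2D points.

    P: 3x4 camera projection matrix; pts3d: Nx3; pts2d: Nx2.
    """
    homog = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    proj = (P @ homog.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division
    return float(np.linalg.norm(proj - pts2d, axis=1).mean())

# Toy example: canonical camera P = [I | 0], points on the z = 2 plane
P = np.hstack([np.eye(3), np.zeros((3, 1))])
pts3d = np.array([[0.0, 0.0, 2.0], [2.0, 2.0, 2.0]])
pts2d = np.array([[0.0, 0.0], [1.0, 1.0]])
print(reprojection_error(P, pts3d, pts2d))  # exact projection -> 0.0
```

In practice `P` comes from laparoscope calibration, and the 3D targets are landmarks embedded in the phantom or synthetic liver.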


Subject(s)
Laparoscopy; Surgery, Computer-Assisted; Humans; Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted/methods; Laparoscopy/methods; Liver/diagnostic imaging; Liver/surgery; Tomography, X-Ray Computed/methods
7.
Med Image Anal; 70: 101994, 2021 May.
Article in English | MEDLINE | ID: mdl-33611053

ABSTRACT

BACKGROUND AND OBJECTIVE: Surgical tool detection, segmentation, and 3D pose estimation are crucial components in Computer-Assisted Laparoscopy (CAL). The existing frameworks have two main limitations. First, they do not integrate all three components. Integration is critical; for instance, one should not attempt to compute pose if detection is negative. Second, they have highly specific requirements, such as the availability of a CAD model. We propose an integrated and generic framework whose sole requirement for the 3D pose is that the tool shaft is cylindrical. Our framework makes the most of deep learning and geometric 3D vision by combining a proposed Convolutional Neural Network (CNN) with algebraic geometry. We show two applications of our framework in CAL: tool-aware rendering in Augmented Reality (AR) and tool-based 3D measurement. METHODS: We name our CNN ART-Net (Augmented Reality Tool Network). It has a Single Input Multiple Output (SIMO) architecture with one encoder and multiple decoders to achieve detection, segmentation, and geometric primitive extraction. These primitives are the tool edge-lines, mid-line, and tip. They allow the tool's 3D pose to be estimated by a fast algebraic procedure. The framework only proceeds if a tool is detected. The accuracy of segmentation and geometric primitive extraction is boosted by a new Full resolution feature map Generator (FrG). We extensively evaluate the proposed framework on the EndoVis dataset and on newly proposed datasets. We compare the segmentation results against several variants of the Fully Convolutional Network (FCN) and U-Net. Several ablation studies are provided for detection, segmentation, and geometric primitive extraction. The proposed datasets are surgery videos of different patients. RESULTS: In detection, ART-Net achieves 100.0% in both average precision and accuracy. In segmentation, it achieves 81.0% in mean Intersection over Union (mIoU) on the robotic EndoVis dataset (articulated tool), where it outperforms both FCN and U-Net by 4.5 pp and 2.9 pp, respectively. It achieves 88.2% in mIoU on the remaining datasets (non-articulated tool). In geometric primitive extraction, ART-Net achieves 2.45° and 2.23° in mean Arc Length (mAL) error for the edge-lines and mid-line, respectively, and 9.3 pixels in mean Euclidean distance error for the tool-tip. Finally, in terms of 3D pose evaluated on animal data, our framework achieves 1.87 mm, 0.70 mm, and 4.80 mm mean absolute errors on the X, Y, and Z coordinates, respectively, and 5.94° angular error on the shaft orientation. It achieves 2.59 mm and 1.99 mm in mean and median location error of the tool head evaluated on patient data. CONCLUSIONS: The proposed framework outperforms existing ones in detection and segmentation. Compared to separate networks, integrating the tasks in a single network preserves accuracy in detection and segmentation but substantially improves accuracy in geometric primitive extraction. Overall, our framework has similar or better accuracy in 3D pose estimation while largely improving robustness against the very challenging imaging conditions of laparoscopy. The source code of our framework and our annotated dataset will be made publicly available at https://github.com/kamruleee51/ART-Net.
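The pose metrics reported here (per-coordinate mean absolute error and angular error on the shaft orientation) are straightforward to define. A minimal sketch (NumPy; toy values, not the paper's evaluation code; the sign-ambiguity handling for the shaft direction is an assumption typical of axis comparisons):

```python
import numpy as np

def per_axis_mae(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Mean absolute error on X, Y, Z (mm). est, gt: Nx3 tool positions."""
    return np.abs(est - gt).mean(axis=0)

def shaft_angle_error_deg(d_est: np.ndarray, d_gt: np.ndarray) -> float:
    """Angle (degrees) between estimated and true shaft direction vectors.

    abs() makes the measure invariant to the direction's sign ambiguity.
    """
    c = np.dot(d_est, d_gt) / (np.linalg.norm(d_est) * np.linalg.norm(d_gt))
    return float(np.degrees(np.arccos(np.clip(abs(c), 0.0, 1.0))))

est = np.array([[1.0, 2.0, 3.0], [2.0, 2.0, 2.0]])
gt = np.array([[1.0, 2.0, 4.0], [2.0, 3.0, 2.0]])
print(per_axis_mae(est, gt))  # MAE of 0, 0.5 and 0.5 mm on X, Y, Z
print(shaft_angle_error_deg(np.array([1.0, 0.0, 0.0]),
                            np.array([0.0, 1.0, 0.0])))  # orthogonal -> 90.0
```

The actual shaft direction in the paper's pipeline comes from the algebraic fit to the detected edge-lines, which this sketch does not reproduce.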


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans
8.
Ann Biomed Eng; 48(6): 1712-1727, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32112344

ABSTRACT

Augmented Reality (AR) in monocular liver laparoscopy requires one to register a preoperative 3D liver model to a laparoscopy image. This is a difficult problem because the preoperative shape may differ significantly from the unknown intraoperative shape and the liver is only partially visible in the laparoscopy image. Previous approaches are either manual, using a rigid model, or automatic, using visual cues and a biomechanical model. We propose a new approach, called the hybrid approach, combining the best of both worlds. The visual cues capture the machine's perception, while user interaction allows us to take advantage of the surgeon's prior knowledge and spatial understanding of the patient's anatomy. The registration accuracy and repeatability were evaluated on phantom, ex vivo animal and patient data. The proposed registration outperforms the state-of-the-art methods in terms of both accuracy and repeatability. An average registration error below the 1 cm oncologic margin advised in the literature for tumour resection in laparoscopic hepatectomy was obtained.


Subject(s)
Laparoscopy/methods; Liver Neoplasms/surgery; Liver/surgery; Models, Biological; Animals; Augmented Reality; Humans; Sheep
9.
Int J Comput Assist Radiol Surg; 15(7): 1177-1186, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32372385

ABSTRACT

PURPOSE: The registration of a preoperative 3D model, reconstructed for example from MRI, to intraoperative 2D laparoscopy images is the main challenge in achieving augmented reality in laparoscopy. The current systems have a major limitation: they require the surgeon to manually mark the occluding contours during surgery. This requires the surgeon to fully comprehend the non-trivial concept of occluding contours and consumes surgeon time, directly impacting acceptance and usability. To overcome this limitation, we propose a complete framework for object-class occluding contour detection (OC2D), with application to uterus surgery. METHODS: Our first contribution is a new distance-based evaluation score complying with all the relevant performance criteria. Our second contribution is a loss function combining cross-entropy with two new penalties designed to encourage 1-pixel-thick responses. This allows us to train a U-Net end to end, outperforming all competing methods, which tend to produce thick responses. Our third contribution is a dataset of 3818 carefully labelled laparoscopy images of the uterus, which was used to train and evaluate our detector. RESULTS: Evaluation shows that the proposed detector has a false-negative rate similar to existing methods but substantially reduces both the false-positive rate and the response thickness. Finally, we ran a user study to evaluate the impact of OC2D against manually marked occluding contours in augmented laparoscopy. We used 10 recorded gynecologic laparoscopies and involved 5 surgeons. Using OC2D led to a reduction of 3 min and 53 s in surgeon time without sacrificing registration accuracy. CONCLUSIONS: We provide a new set of criteria and a distance-based measure to evaluate an OC2D method. We propose an OC2D method which outperforms the state-of-the-art methods. The results obtained from the user study indicate that fully automatic augmented laparoscopy is feasible.
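A distance-based score for thin contour detections can be illustrated with a symmetric mean nearest-neighbour distance between predicted and ground-truth contour pixels. This is a simplified stand-in, not the paper's actual score (whose exact definition and criteria differ), shown only to make the idea of a pixel-distance-based contour measure concrete:

```python
import numpy as np

def contour_distance_score(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric mean nearest-neighbour distance (pixels) between two
    sets of contour pixel coordinates (each Nx2). Lower is better."""
    # Pairwise distances: pred along axis 0, ground truth along axis 1
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=2)
    forward = d.min(axis=1).mean()   # pred -> nearest ground truth
    backward = d.min(axis=0).mean()  # ground truth -> nearest pred
    return float(0.5 * (forward + backward))

# Toy contours: a vertical 3-pixel line, and the same line shifted by 1 px
pred = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
gt = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
print(contour_distance_score(pred, gt))  # every pixel is 1 px off -> 1.0
```

Unlike pixel-wise overlap, a distance-based measure does not collapse to zero when a correct but slightly offset 1-pixel-thick response misses the ground-truth pixels exactly.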


Subject(s)
Deep Learning; Gynecologic Surgical Procedures/methods; Laparoscopy/methods; Uterus/surgery; Augmented Reality; Female; Humans; Magnetic Resonance Imaging
10.
IEEE Trans Biomed Eng; 65(12): 2769-2780, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29993424

ABSTRACT

Cardiac disease can reduce the ability of the ventricles to sustain long-term pumping efficiency. Recent advances in cardiac motion tracking have led to improvements in the analysis of cardiac function. We propose a method to study age-related cohort effects on cardiac function. The proposed approach builds on a recent method that describes the cardiac motion of a given subject with a polyaffine model, giving a compact parameterization that reliably and accurately describes cardiac motion across populations. Using this method, a data tensor of motion parameters is extracted for a given population. The partial least squares method for higher-order arrays is used to build a model describing the motion parameters with respect to age, from which a model of motion given age is derived. Based on this cross-sectional statistical analysis, with the data tensor of each subject treated as an observation along time, the left-ventricular motion over time of Tetralogy of Fallot patients is analysed to understand the temporal evolution of functional abnormalities in this population compared to healthy motion dynamics.
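The "model of motion given age" idea can be sketched on synthetic data: unfold each subject's motion-parameter tensor into a vector, regress it on age, and predict the parameters for a new age. This sketch uses ordinary least squares as a much simpler stand-in for the higher-order partial least squares used in the paper, and the tensor shape (5 regions × 12 affine parameters) and cohort are entirely made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: 40 subjects, each with a (regions=5, params=12)
# motion-parameter tensor, unfolded to a 60-vector per subject.
n, shape = 40, (5, 12)
age = rng.uniform(10, 70, n)
basis = rng.standard_normal(np.prod(shape))  # hypothetical age-related mode
X = age[:, None] * basis[None, :] + 0.01 * rng.standard_normal((n, np.prod(shape)))

# Regress the unfolded tensors on age (OLS stand-in for higher-order PLS)
A = np.column_stack([age, np.ones(n)])  # design matrix [age, intercept]
coef, *_ = np.linalg.lstsq(A, X, rcond=None)

def motion_given_age(a: float) -> np.ndarray:
    """Predicted motion parameters for age a, folded back to (regions, params)."""
    return (coef[0] * a + coef[1]).reshape(shape)

print(motion_given_age(30.0).shape)  # (5, 12)
```

The real method keeps the tensor structure instead of unfolding it, which is precisely what the higher-order partial least squares machinery provides.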


Subject(s)
Heart/diagnostic imaging; Image Processing, Computer-Assisted/methods; Models, Cardiovascular; Movement/physiology; Adolescent; Adult; Algorithms; Child; Female; Heart Ventricles/diagnostic imaging; Humans; Magnetic Resonance Imaging, Cine; Male; Tetralogy of Fallot/diagnostic imaging; Young Adult