1.
Endosc Int Open ; 12(7): E924-E931, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39055264

ABSTRACT

Background and study aims: Accurate endoscopic characterization of colorectal lesions is essential for predicting histology but is difficult even for experts. Simple criteria could help endoscopists detect and predict malignancy. The aim of this study was to evaluate the value of the green sign and chicken skin aspects in the detection of malignant colorectal neoplasia. Patients and methods: We prospectively characterized and evaluated the histology of all consecutive colorectal lesions detected during screening or referred for endoscopic resection (Pro-CONECCT study). We evaluated the diagnostic accuracy of the green sign and chicken skin aspects for the detection of superficial and deep invasive lesions. Results: 461 patients with 803 colorectal lesions were included. The green sign had a negative predictive value of 89.6% (95% confidence interval [CI] 87.1%-91.8%) and 98.1% (95% CI 96.7%-99.0%) for superficial and deep invasive lesions, respectively. In contrast to chicken skin, the green sign showed additional value for the detection of both lesion types compared with the CONECCT classification and chicken skin (adjusted odds ratio [OR] for superficial lesions 5.9, 95% CI 3.4-10.2, P < 0.001; adjusted OR for deep lesions 9.0, 95% CI 3.9-21.1, P < 0.001). Conclusions: The green sign may be associated with malignant colorectal neoplasia. Targeting these areas before precise analysis of the lesion could improve the detection of focal malignancies and the prediction of the most severe histology.

2.
Article in English | MEDLINE | ID: mdl-39058410

ABSTRACT

PURPOSE: A stereoscopic surgical video stream consists of left-right image pairs provided by a stereo endoscope. While the surgical display shows these image pairs synchronised, most capture cards cause de-synchronisation. This means that the paired left and right images may not correspond once used in downstream tasks such as stereo depth computation. The stereo synchronisation problem is to recover the corresponding left-right images. This is particularly challenging in the surgical setting, owing to the moist tissues, rapid camera motion, quasi-staticity and the real-time processing requirement. Existing methods exploit image cues from the diffuse reflection component and are defeated by the above challenges. METHODS: We propose to exploit the specular reflection. Specifically, we propose a powerful left-right comparison score (LRCS) using the specular highlights commonly occurring on moist tissues. We detect the highlights using a neural network, characterise them with invariant descriptors, match them, and use the number of matches to form the proposed LRCS. We evaluate against 147 existing LRCSs in 44 challenging robotic partial nephrectomy and robotic-assisted hepatic resection video sequences with simulated and real de-synchronisation. RESULTS: The proposed LRCS outperforms the alternatives, with average and maximum offsets of 0.055 and 1 frames and 94.1±3.6% successfully synchronised frames. In contrast, the best existing LRCS achieves average and maximum offsets of 0.3 and 3 frames and 81.2±6.4% successfully synchronised frames. CONCLUSION: The use of specular reflection brings a tremendous boost to the real-time surgical stereo synchronisation problem.
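The matching idea behind the LRCS can be sketched in a few lines of Python. This is not the authors' pipeline: the paper detects highlights with a neural network and characterises them with invariant descriptors, whereas this sketch substitutes a brightness/saturation threshold and ORB features, purely to illustrate how a highlight match count can drive synchronisation.

```python
import cv2
import numpy as np

def highlight_mask(img_bgr, v_min=240, s_max=40):
    # Crude specular-highlight detector: bright, low-saturation pixels.
    # (The paper uses a neural network for this step.)
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return ((hsv[..., 2] >= v_min) & (hsv[..., 1] <= s_max)).astype(np.uint8) * 255

def lrcs(left_bgr, right_bgr):
    # Left-right comparison score: number of cross-checked descriptor
    # matches between the specular highlights of the two views.
    orb = cv2.ORB_create()
    gl = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    _, dl = orb.detectAndCompute(gl, highlight_mask(left_bgr))
    _, dr = orb.detectAndCompute(gr, highlight_mask(right_bgr))
    if dl is None or dr is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(matcher.match(dl, dr))

def synchronise(left_frames, right_frames, max_offset=5):
    # Recover the temporal offset of the right stream by maximising
    # the summed LRCS over candidate offsets.
    def total(o):
        return sum(lrcs(l, right_frames[i + o])
                   for i, l in enumerate(left_frames)
                   if 0 <= i + o < len(right_frames))
    return max(range(-max_offset, max_offset + 1), key=total)
```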

3.
Article in English | MEDLINE | ID: mdl-39014177

ABSTRACT

PURPOSE: Augmented reality guidance in laparoscopic liver resection requires the registration of a preoperative 3D model to the intraoperative 2D image. However, 3D-2D liver registration poses challenges owing to the liver's flexibility, particularly in the limited visibility conditions of laparoscopy. Although promising, the current registration methods are computationally expensive and often necessitate manual initialisation. METHODS: We propose the first neural model (NM) that predicts the registration, represented as 3D model deformation coefficients, from image landmarks. The strategy consists of training a patient-specific model on synthetic data generated automatically from the patient's preoperative model. We also propose a liver shape modelling technique, which further reduces time complexity. RESULTS: The NM method was evaluated using the target registration error measure, showing accuracy on par with existing methods, which are all based on numerical optimisation. Notably, NM runs much faster, offering the possibility of real-time inference, a significant step ahead in this field. CONCLUSION: The proposed method is the first neural method for 3D-2D liver registration. Preliminary experimental findings show performance comparable to existing methods, with superior computational efficiency. These results suggest a potential to deeply impact liver registration techniques.
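As a rough illustration of the patient-specific training strategy, the sketch below regresses deformation coefficients from 2D landmarks with a small MLP trained on synthetic pairs. The landmark count, coefficient count, architecture and the toy linear forward model are all assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn

N_LANDMARKS, N_COEFFS = 20, 32   # hypothetical sizes, not from the paper

class RegistrationNet(nn.Module):
    # Maps 2D image landmarks to 3D-model deformation coefficients.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_LANDMARKS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_COEFFS))

    def forward(self, landmarks):            # (B, N_LANDMARKS, 2)
        return self.net(landmarks.flatten(1))

# Toy stand-in for the synthetic-data generator: the real pipeline deforms
# the patient's preoperative model with the coefficients and renders the
# landmarks; here a fixed random linear map plays the forward model.
forward_map = torch.randn(N_COEFFS, 2 * N_LANDMARKS)

def synthesise_landmarks(coeffs):
    return (coeffs @ forward_map).view(-1, N_LANDMARKS, 2)

model = RegistrationNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):                     # patient-specific training
    coeffs = torch.randn(64, N_COEFFS)       # sampled ground-truth deformations
    loss = nn.functional.mse_loss(model(synthesise_landmarks(coeffs)), coeffs)
    opt.zero_grad(); loss.backward(); opt.step()
```

At inference, a single forward pass replaces the numerical optimisation of previous methods, which is what makes real-time registration plausible.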

4.
Int J Comput Assist Radiol Surg ; 19(7): 1385-1389, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38775903

ABSTRACT

PURPOSE: We present a novel method for augmented reality in endoscopic endonasal surgery. Our method does not require external tracking devices and can show hidden anatomical structures relevant to the surgical intervention. METHODS: Our method registers a preoperative 3D model of the nasal cavity to an intraoperative 3D model by estimating a scaled-rigid transformation. Registration is based on a two-stage ICP approach on the reconstructed nasal cavity. The hidden structures are then transferred from the preoperative 3D model to the intraoperative one using the estimated transformation, then projected and overlaid onto the endoscopic images to obtain the augmented reality. RESULTS: We performed qualitative and quantitative validation of our method on 12 clinical cases. Qualitative results were obtained by an ENT surgeon's visual inspection of the hidden structures in the augmented images. Quantitative results were obtained by measuring the target registration error using a novel transillumination-based approach. The results show that the hidden structures of interest are augmented at the expected locations in most cases. CONCLUSION: Our method was able to augment the endoscopic images sufficiently precisely when the intraoperative nasal cavity did not deform considerably with respect to its preoperative state. This is a promising step towards trackerless augmented reality in endonasal surgery.
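The scaled-rigid estimation inside such an ICP loop has a classical closed-form solution (Umeyama's method). The sketch below shows that single step under the assumption of known point correspondences; the paper's two-stage ICP would re-estimate correspondences around it.

```python
import numpy as np

def scaled_rigid_registration(src, dst):
    # Umeyama alignment: least-squares scale s, rotation R and translation t
    # mapping src points onto dst (both of shape (N, 3)).
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                  # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# One ICP iteration: match each transformed src point to its nearest
# neighbour in dst, then re-run scaled_rigid_registration on the pairs.
```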


Subjects
Augmented Reality, Three-Dimensional Imaging, Nasal Cavity, Humans, Three-Dimensional Imaging/methods, Nasal Cavity/surgery, Nasal Cavity/diagnostic imaging, Endoscopy/methods, Computer-Assisted Surgery/methods
5.
Int J Comput Assist Radiol Surg ; 19(7): 1285-1290, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38684560

ABSTRACT

PURPOSE: This research endeavors to improve tumor localization in minimally invasive surgery, a challenging task primarily attributable to the absence of tactile feedback and limited visibility. The conventional solution uses laparoscopic ultrasound (LUS), which has a long learning curve and is operator-dependent. METHODS: The proposed approach augments LUS images onto laparoscopic images to improve the surgeon's ability to estimate tumor and internal organ anatomy. This augmentation relies on LUS pose estimation and filtering. RESULTS: Experiments conducted with clinical data show successful registration and augmentation of LUS images onto laparoscopic images. Filtering also gives noteworthy results, reducing flickering in the augmentations. CONCLUSION: The outcomes are promising, suggesting the potential of LUS augmentation in surgical images to assist surgeons and to serve as a training tool. We used the LUS probe's shaft to disambiguate the rotational symmetry; in the long run, however, it would be desirable to find more convenient solutions.
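The abstract does not state which filter is used, so as a minimal sketch one can assume a first-order low-pass on the LUS pose: exponential smoothing of the translation and normalised linear interpolation (nlerp) of the rotation quaternion, a common way to reduce augmentation flicker.

```python
import numpy as np

class PoseFilter:
    # Exponential smoothing of a rigid pose to reduce augmentation flicker.
    # The filter choice is an assumption; the paper only states that the
    # augmentation relies on pose estimation and filtering.
    def __init__(self, alpha=0.3):
        self.alpha = alpha            # 0 = frozen, 1 = no filtering
        self.t = None                 # translation (3,)
        self.q = None                 # unit quaternion (w, x, y, z)

    def update(self, t_new, q_new):
        t_new, q_new = np.asarray(t_new, float), np.asarray(q_new, float)
        if self.t is None:
            self.t, self.q = t_new, q_new
        else:
            self.t = (1 - self.alpha) * self.t + self.alpha * t_new
            if np.dot(self.q, q_new) < 0:
                q_new = -q_new        # keep quaternions in the same hemisphere
            q = (1 - self.alpha) * self.q + self.alpha * q_new   # nlerp
            self.q = q / np.linalg.norm(q)
        return self.t, self.q
```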


Subjects
Augmented Reality, Laparoscopy, Humans, Laparoscopy/methods, Ultrasonography/methods
6.
Med Image Anal ; 94: 103161, 2024 May.
Article in English | MEDLINE | ID: mdl-38574543

ABSTRACT

Augmented Reality (AR) from preoperative data is a promising approach to improve intraoperative tumour localisation in Laparoscopic Liver Resection (LLR). Existing systems register the preoperative tumour model with the laparoscopic images and render it by direct camera projection, as if the organ were transparent. However, simple geometric reasoning shows that this may seriously misguide the surgeon, because the tools enter through a different keyhole than the laparoscope. As AR is particularly important for deep tumours, this problem potentially undermines the whole benefit of AR guidance. A remedy is to project the tumour from its internal position to the liver surface towards the tool keyhole, and only then to the camera. This raises the problem of estimating the tool keyhole position in laparoscope coordinates. We propose a keyhole-aware pipeline which resolves the problem by using the observed tool to probe the keyhole position and by showing a keyhole-aware visualisation of the tumour. We assess the benefits of our pipeline quantitatively on a geometric in silico model and on a liver phantom model, as well as qualitatively on data from three patients.
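A minimal sketch of the keyhole-aware projection geometry, assuming the liver surface is available as a triangle mesh and the keyhole position has already been probed with the observed tool (trimesh handles the ray-mesh intersection; the paper's implementation details are not given in the abstract):

```python
import numpy as np
import trimesh

def keyhole_aware_pixel(liver_mesh, tumour_pt, keyhole_pt, K):
    # Cast a ray from the internal tumour point towards the tool keyhole,
    # take its first hit on the liver surface, then project that surface
    # point with the camera intrinsics K (3x3). All coordinates are in the
    # laparoscope frame.
    d = keyhole_pt - tumour_pt
    d = d / np.linalg.norm(d)
    hits, _, _ = liver_mesh.ray.intersects_location(
        ray_origins=tumour_pt[None], ray_directions=d[None])
    if len(hits) == 0:
        return None
    # Closest intersection along the ray = exit point on the liver surface.
    surface_pt = hits[np.argmin(np.linalg.norm(hits - tumour_pt, axis=1))]
    u, v, w = K @ surface_pt
    return np.array([u / w, v / w])   # pixel coordinates
```

Rendering the tumour at this pixel, rather than at its direct camera projection, is what aligns the overlay with the path the tool will actually take.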


Subjects
Augmented Reality, Laparoscopy, Neoplasms, Computer-Assisted Surgery, Humans, Laparoscopy/methods, Computer Simulation, Liver, Computer-Assisted Surgery/methods
7.
Int J Comput Assist Radiol Surg ; 19(6): 1157-1163, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38609735

ABSTRACT

PURPOSE: We investigate whether foundation models pretrained on diverse visual data can benefit surgical computer vision, using instrument and uterus segmentation in mini-invasive procedures as benchmarks. We propose multiple supervised, unsupervised and few-shot supervised adaptations of foundation models, including two novel adaptation methods. METHODS: We use DINOv1, DINOv2, DINOv2 with registers, and SAM backbones, with the ART-Net surgical instrument and the SurgAI3.8K uterus segmentation datasets. We investigate five approaches: DINO unsupervised, few-shot learning with a linear decoder, supervised learning with the proposed DINO-UNet adaptation, DPT with a DINO encoder, and unsupervised learning with the proposed SAM adaptation. RESULTS: We evaluate 17 models for instrument segmentation and 7 models for uterus segmentation, and compare them to existing ad hoc models for the tasks at hand. We show that the linear decoder can be learned with few shots. The unsupervised and linear decoder methods obtain slightly subpar results but could be useful in data-scarce settings. The unsupervised SAM model produces finer edges but has inconsistent outputs. However, DPT and DINO-UNet obtain strikingly good results, defining a new state of the art by outperforming the previous best by 5.6 and 4.1 pp for instrument segmentation and by 4.4 and 1.5 pp for uterus segmentation. Both methods obtain semantic and spatial precision, accurately segmenting intricate details. CONCLUSION: Our results show the strong potential of DINO and SAM for surgical computer vision, indicating a promising role for visual foundation models in medical image analysis, particularly in scenarios with limited or complex data.
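The few-shot linear-decoder baseline can be sketched with a frozen DINOv2 backbone and a 1x1 convolution over patch tokens, as below. This illustrates the linear-decoder approach only, not the DINO-UNet or DPT adaptations; the input size and two-class setup are assumptions.

```python
import torch
import torch.nn as nn

# Frozen DINOv2 ViT-S/14 backbone; only the linear decoder is trained.
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

decoder = nn.Conv2d(384, 2, kernel_size=1)   # 384 = ViT-S/14 embedding dim

def segment(images):                          # (B, 3, 518, 518), normalised
    with torch.no_grad():
        tokens = backbone.forward_features(images)["x_norm_patchtokens"]
    B, N, C = tokens.shape                    # N = (518 / 14)^2 = 37 * 37
    feat = tokens.transpose(1, 2).reshape(B, C, 37, 37)
    logits = decoder(feat)
    return nn.functional.interpolate(         # back to full resolution
        logits, size=images.shape[-2:], mode='bilinear', align_corners=False)

# Few-shot training: with the backbone frozen, a handful of annotated
# frames and cross-entropy on segment(x) suffice to fit the 1x1 conv.
```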


Subjects
Computer-Assisted Surgery, Humans, Female, Computer-Assisted Surgery/methods, Uterus/surgery, Uterus/diagnostic imaging
8.
J Surg Res ; 296: 325-336, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38306938

ABSTRACT

INTRODUCTION: Minimally invasive surgery uses electrosurgical tools that generate smoke. This smoke reduces the visibility of the surgical site and spreads harmful substances with potential hazards for the surgical staff. Automatic image analysis may provide assistance. However, existing studies are restricted to simple clear-versus-smoky image classification. MATERIALS AND METHODS: We propose a novel approach using surgical image analysis with machine learning, including deep neural networks. We address three tasks: 1) smoke quantification, which estimates the visual level of smoke; 2) smoke evacuation confidence, which estimates the level of confidence to evacuate smoke; and 3) smoke evacuation recommendation, which estimates the evacuation decision. We collected three datasets with expert annotations. We trained end-to-end neural networks for the three tasks. We also created indirect predictors, using task 1 followed by linear regression to solve task 2, and using task 2 followed by binary classification to solve task 3. RESULTS: We observe a reasonable inter-expert variability for task 1 and a large one for tasks 2 and 3. For task 1, the expert error is 17.61 percentage points (pp) and the neural network error is 18.45 pp. For task 2, the best results are obtained from the indirect predictor based on task 1. For this task, the expert error is 27.35 pp and the predictor error is 23.60 pp. For task 3, the expert accuracy is 76.78% and the predictor accuracy is 81.30%. CONCLUSIONS: Smoke quantification, evacuation confidence and evacuation recommendation can be achieved by automatic surgical image analysis with accuracy similar to or better than the experts'.
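The indirect predictors simply chain the tasks. A sketch, with random arrays standing in for the expert-annotated datasets and a hypothetical decision threshold:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Task 1 output (smoke level, 0-100) feeds a linear regression for task 2
# (evacuation confidence), whose output feeds a binary classifier for
# task 3 (evacuate or not). The data below is synthetic placeholder material.
rng = np.random.default_rng(0)
smoke_level = rng.uniform(0, 100, (500, 1))
confidence = np.clip(smoke_level.ravel() + rng.normal(0, 10, 500), 0, 100)
evacuate = (confidence > 50).astype(int)      # hypothetical threshold

task2 = LinearRegression().fit(smoke_level, confidence)
task3 = LogisticRegression().fit(task2.predict(smoke_level)[:, None], evacuate)

new_level = np.array([[72.0]])
conf_pred = task2.predict(new_level)
print("confidence:", conf_pred[0], "evacuate:", task3.predict(conf_pred[:, None])[0])
```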


Subjects
Computer-Assisted Image Processing, Minimally Invasive Surgical Procedures, Smoke, Humans, Machine Learning, Neural Networks (Computer), Nicotiana, Smoke/analysis
9.
J Surg Res ; 296: 612-620, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38354617

ABSTRACT

INTRODUCTION: Augmented reality (AR) in laparoscopic liver resection (LLR) can improve intrahepatic navigation by creating a virtual liver transparency. Our team has recently developed Hepataug, an AR software that projects the invisible intrahepatic tumors onto the laparoscopic images and allows the surgeon to localize them precisely. However, the accuracy of registration according to the location and size of the tumors, as well as the influence of the projection axis, had never been measured. The aim of this work was to measure the three-dimensional (3D) tumor prediction error of Hepataug. METHODS: Eight 3D virtual livers were created from the computed tomography scan of a healthy human liver. Reference markers with known coordinates were virtually placed on the anterior surface. The virtual livers were then deformed and 3D printed, forming 3D liver phantoms. After placing each 3D phantom inside a pelvitrainer, registration allowed Hepataug to project virtual tumors along two axes: the laparoscope axis and the operator port axis. The surgeons had to point at the center of eight virtual tumors per liver with a pointing tool whose coordinates were precisely calculated. RESULTS: We obtained 128 pointing experiments. The average pointing error was 29.4 ± 17.1 mm and 9.2 ± 5.1 mm for the laparoscope and operator port axes, respectively (P = 0.001). The pointing errors tended to increase with tumor depth (correlation coefficients greater than 0.5, with P < 0.001). There was no significant dependency of the pointing error on tumor size for either projection axis. CONCLUSIONS: Tumor visualization by projection toward the operator port improves the accuracy of AR guidance and partially solves the problem of the two-dimensional visual interface of monocular laparoscopy. Despite a lower precision of AR for tumors located in the posterior part of the liver, it could allow surgeons to access these lesions without completely mobilizing the liver, hence decreasing the surgical trauma.


Subjects
Augmented Reality, Laparoscopy, Neoplasms, Computer-Assisted Surgery, Humans, Laparoscopy/methods, Imaging Phantoms, Three-Dimensional Imaging/methods, Liver/diagnostic imaging, Liver/surgery, Computer-Assisted Surgery/methods
10.
Comput Methods Programs Biomed ; 245: 108038, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271792

ABSTRACT

BACKGROUND AND OBJECTIVE: Image segmentation is an essential component of medical image analysis. The case of 3D images such as MRI is particularly challenging and time consuming. Interactive or semi-automatic methods are thus highly desirable. However, existing methods do not exploit the typical sequentiality of real user interactions, because the interaction memory used in these systems discards ordering. In contrast, we argue that the order of the user corrections should be used for training, leading to performance improvements. METHODS: We contribute to solving this problem by proposing a general multi-class deep learning-based interactive framework for image segmentation, which embeds a base network in a user interaction loop with a user feedback memory. We model the memory explicitly as a sequence of consecutive system states, from which the features can be learned, thereby learning from the segmentation refinement process. Training is a major difficulty owing to the network's input being dependent on the previous output. We adapt the network to this loop by introducing a virtual user in the training process, modelled by dynamically simulating the iterative user feedback. RESULTS: We evaluated our framework against existing methods on the complex task of multi-class semantic instance female pelvis MRI segmentation with 5 classes, including up to 27 tumour instances, using a segmentation dataset collected in our hospital, and on liver and pancreas CT segmentation, using public datasets. We conducted a user evaluation, involving both senior and junior medical personnel in matching and adjacent areas of expertise. We observed an annotation time reduction to 5'56" with our framework, against 25' on average for classical tools. We systematically evaluated the influence of the number of clicks on the segmentation accuracy. With a single interaction round, our framework outperforms existing automatic systems with a comparable setup. We provide an ablation study and show that our framework outperforms existing interactive systems. CONCLUSIONS: Our framework largely outperforms existing systems in accuracy, with the largest impact on the smallest, most difficult classes, and drastically reduces the average user segmentation time with fast inference at 47.2±6.2 ms per image.
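The virtual-user training loop can be sketched as follows. The tiny network, the click encoding (one corrective click per round, placed on a mislabelled pixel) and the number of rounds are illustrative assumptions; only the loop structure, in which each input depends on the previous output and on simulated feedback, mirrors the described method.

```python
import torch
import torch.nn as nn

class InteractiveSeg(nn.Module):
    # Input channels: image (1) + previous prediction (n_classes) + clicks (1).
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + n_classes + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1))

    def forward(self, img, prev_pred, clicks):
        return self.net(torch.cat([img, prev_pred, clicks], dim=1))

def virtual_user_click(pred, target):
    # Simulated feedback: one corrective click on a mislabelled pixel.
    wrong = (pred.argmax(1) != target).float()
    clicks = torch.zeros_like(wrong)
    for b in range(wrong.shape[0]):
        idx = wrong[b].flatten().nonzero()
        if len(idx):
            clicks[b].view(-1)[idx[torch.randint(len(idx), (1,))]] = 1.0
    return clicks.unsqueeze(1)

n_classes = 5
model = InteractiveSeg(n_classes)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
img = torch.randn(2, 1, 64, 64)
target = torch.randint(0, n_classes, (2, 64, 64))
pred = torch.zeros(2, n_classes, 64, 64)
for round_ in range(3):                       # simulated interaction rounds
    clicks = virtual_user_click(pred, target)
    pred = model(img, pred.detach(), clicks)  # input depends on previous output
    loss = nn.functional.cross_entropy(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
```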


Subjects
Deep Learning, Female, Humans, X-Ray Computed Tomography/methods, Three-Dimensional Imaging/methods, Liver, Magnetic Resonance Imaging, Computer-Assisted Image Processing
11.
Int J Comput Assist Radiol Surg ; 18(7): 1323-1328, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37142809

ABSTRACT

PURPOSE: To detect specularities as elliptical blobs in endoscopy. The rationale is that in the endoscopic setting, specularities are generally small, and that knowing the ellipse coefficients allows one to reconstruct the surface normal. In contrast, previous works detect specular masks as free-form shapes and treat the specular pixels as a nuisance. METHODS: A pipeline combining deep learning with handcrafted steps for specularity detection, general and accurate in the context of endoscopic applications involving multiple organs and moist tissues. A fully convolutional network produces an initial mask which specifically finds specular pixels, mainly composed of sparsely distributed blobs. Standard ellipse fitting follows, for local segmentation refinement, in order to keep only the blobs fulfilling the conditions for successful normal reconstruction. RESULTS: Convincing results in detection and reconstruction on synthetic and real images, showing that the elliptical shape prior improves the detection itself in both colonoscopy and kidney laparoscopy. The pipeline achieved a mean Dice of 84% and 87%, respectively, on test data for these two use cases, and allows one to exploit the specularities as useful information for inferring sparse surface geometry. The reconstructed normals are in good quantitative agreement with external learning-based depth reconstruction methods, as shown by an average angular discrepancy of [Formula: see text] in colonoscopy. CONCLUSION: This is the first fully automatic method to exploit specularities in endoscopic 3D reconstruction. Because the design of current reconstruction methods can vary considerably across applications, our elliptical specularity detection could be of interest in clinical practice thanks to its simplicity and generalisability. In particular, the obtained results are promising towards future integration with learning-based depth inference and SfM methods.
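The handcrafted refinement stage can be sketched as contour-wise ellipse fitting with a goodness-of-fit filter (thresholds illustrative; the network producing the initial mask is not reproduced):

```python
import cv2
import numpy as np

def elliptical_specularities(mask, min_area=10, max_area=2000, tol=0.25):
    # Refine a binary specularity mask into elliptical blobs, keeping only
    # contours whose fitted ellipse agrees with the blob area.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    ellipses = []
    for c in contours:
        area = cv2.contourArea(c)
        if len(c) < 5 or not (min_area <= area <= max_area):
            continue                          # fitEllipse needs >= 5 points
        (cx, cy), (w, h), angle = cv2.fitEllipse(c)
        fit_area = np.pi * w * h / 4.0        # ellipse area from full axes
        if abs(fit_area - area) <= tol * area:
            ellipses.append(((cx, cy), (w, h), angle))
    return ellipses
```

Only the retained ellipses, whose coefficients are trustworthy, would then be passed to the surface-normal reconstruction step.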


Subjects
Colonoscopy, Computer-Assisted Image Processing, Humans, Computer-Assisted Image Processing/methods
12.
J Minim Invasive Gynecol ; 30(5): 397-405, 2023 05.
Article in English | MEDLINE | ID: mdl-36720429

ABSTRACT

STUDY OBJECTIVE: We explain the concepts underlying artificial intelligence (AI), using Uteraug, a laparoscopic surgery guidance application based on Augmented Reality (AR), to provide concrete examples. AI can be used to automatically interpret the surgical images. We are specifically interested in the tasks of uterus segmentation and uterus contouring in laparoscopic images. A major difficulty with AI methods is their requirement for a massive amount of annotated data. We propose SurgAI3.8K, the first gynaecological dataset with annotated anatomy. We study the impact of AI on automating key steps of Uteraug. DESIGN: We constructed the SurgAI3.8K dataset with 3800 images extracted from 79 laparoscopy videos. We created the following annotations: the uterus segmentation, the uterus contours, and the regions of the left and right fallopian tube junctions. We divided our dataset into a training and a test dataset. Our engineers trained a neural network on the training dataset. We then compared the performance of the neural network to that of the experts on the test dataset. In particular, we established the relationship between the size of the training dataset and the performance by creating size-performance graphs. SETTING: University. PATIENTS: Not available. INTERVENTION: Not available. MEASUREMENTS AND MAIN RESULTS: The size-performance graphs show a performance plateau at 700 images for uterus segmentation and 2000 images for uterus contouring. The final segmentation scores on the training and test datasets were 94.6% and 84.9% (the higher, the better) and the final contour errors were 19.5% and 47.3% (the lower, the better). These results allowed us to bootstrap Uteraug, achieving AR performance equivalent to its current manual setup. CONCLUSION: We describe a concrete AI system in laparoscopic surgery, covering all steps from data collection, data annotation and neural network training to performance evaluation and the final application.


Subjects
Augmented Reality, Laparoscopy, Humans, Female, Artificial Intelligence, Neural Networks (Computer), Uterus/surgery, Laparoscopy/methods
13.
World J Urol ; 41(2): 335-343, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35776173

ABSTRACT

INTRODUCTION: Minimally invasive partial nephrectomy (MIPN) has become the standard of care for localized kidney tumors over the past decade. The characteristics of each tumor, in particular its size and its relationship with the excretory tract and vessels, allow one to judge its complexity and to attempt to predict the risk of complications. The recent development of virtual 3D model reconstruction and computer vision has opened the way to image-guided surgery and augmented reality (AR). OBJECTIVE: Our objective was to perform a systematic review to list and describe the different AR techniques proposed to support PN. MATERIALS AND METHODS: The systematic review of the literature was performed on 12/04/22, using the keywords "nephrectomy" and "augmented reality" on Embase and Medline. Articles were considered if they reported surgical outcomes when using AR with virtual image overlay on real vision, during ex vivo or in vivo MIPN. We classified them according to the registration technique they use. RESULTS: We found 16 articles describing an AR technique during MIPN procedures that met the eligibility criteria. A moderate to high risk of bias was recorded for all the studies. We classified registration methods into three main families, of which the most promising seems to be surface-based registration. CONCLUSION: Despite promising results, no studies have yet shown an improvement in clinical outcomes with AR. The ideal AR technique is probably yet to be established, as several designs are still being actively explored. More clinical data will be required to establish the potential contribution of this technology to MIPN.


Subjects
Kidney Neoplasms, Computer-Assisted Surgery, Humans, Nephrectomy/methods, Kidney Neoplasms/surgery, Computer-Assisted Surgery/methods
14.
IEEE Trans Pattern Anal Mach Intell ; 45(5): 6428-6444, 2023 May.
Article in English | MEDLINE | ID: mdl-36260583

ABSTRACT

We give an effective solution to the regularized optimization problem $g(x) + h(x)$, where $x$ is constrained on the unit sphere $\|x\|_2 = 1$. Here $g(\cdot)$ is a smooth cost with Lipschitz continuous gradient within the unit ball $\{x : \|x\|_2 \le 1\}$, whereas $h(\cdot)$ is typically non-smooth but convex and absolutely homogeneous, e.g., norm regularizers and their combinations. Our solution is based on the Riemannian proximal gradient, using an idea we call the proxy step-size: a scalar variable which we prove is monotone with respect to the actual step-size within an interval. The proxy step-size exists ubiquitously for convex and absolutely homogeneous $h(\cdot)$, and determines the actual step-size and the tangent update in closed form, and thus the complete proximal gradient iteration. Based on these insights, we design a Riemannian proximal gradient method using the proxy step-size. We prove that our method converges to a critical point, guided by a line-search technique based on the $g(\cdot)$ cost only. The proposed method can be implemented in a couple of lines of code. We show its usefulness by applying nuclear norm, $\ell_1$ norm, and nuclear-spectral norm regularization to three classical computer vision problems. The improvements are consistent and backed by numerical experiments. Code is available at https://bitbucket.org/FangBai/proxystepsize-pgs.
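For intuition, a simplified Riemannian proximal gradient for the $\ell_1$-regularized case can be sketched as: a Euclidean gradient step on $g$, soft-thresholding as the proximal map of $h$, then retraction onto the sphere by renormalisation. The paper's proxy step-size construction and line search, which give the closed-form tangent update and the convergence guarantee, are deliberately omitted here.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal map of t * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sphere_prox_grad(grad_g, x0, lam=0.1, step=0.1, iters=200):
    # Minimise g(x) + lam * ||x||_1 subject to ||x||_2 = 1 (heuristic sketch).
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = soft_threshold(x - step * grad_g(x), step * lam)
        n = np.linalg.norm(y)
        if n < 1e-12:                 # prox collapsed to zero: shrink the step
            step *= 0.5
            continue
        x = y / n                     # retract back onto the unit sphere
    return x

# Toy use: g(x) = 0.5 * x^T A x with symmetric A gives a sparse
# approximation of the eigenvector with the smallest eigenvalue.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20)); A = (A + A.T) / 2
x = sphere_prox_grad(lambda v: A @ v, rng.standard_normal(20))
print(np.round(x, 2))
```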

15.
Int J Comput Assist Radiol Surg ; 17(12): 2211-2219, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36253604

ABSTRACT

PURPOSE: Laparoscopic liver resection is a challenging procedure because of the difficulty of localising inner structures such as tumours and vessels. Augmented reality overcomes this problem by overlaying preoperative 3D models on the laparoscopic views. It requires deformable registration of the preoperative 3D models to the laparoscopic views, which is a challenging task due to the liver's flexibility and partial visibility. METHODS: We propose several multi-view registration methods exploiting information from multiple views simultaneously in order to improve registration accuracy. They are designed to work in two scenarios: on rigidly related views and on non-rigidly related views. These methods exploit the liver's anatomical landmarks and the texture information available in all the views to constrain registration. RESULTS: We evaluated the registration accuracy of our methods quantitatively on synthetic and phantom data, and qualitatively on patient data. We measured 3D target registration errors in mm over the whole liver for the quantitative case, and 2D reprojection errors in pixels for the qualitative case. CONCLUSION: The proposed rigidly related multi-view methods improve registration accuracy compared to the baseline single-view method. They comply with the 1 cm oncologic resection margin advised for hepatocellular carcinoma interventions, depending on the available registration constraints. The non-rigidly related multi-view method does not provide a noticeable improvement. This means that using multiple views under the rigidity assumption achieves the best overall registration error.


Subjects
Laparoscopy, Computer-Assisted Surgery, Humans, Three-Dimensional Imaging/methods, Computer-Assisted Surgery/methods, Laparoscopy/methods, Liver/diagnostic imaging, Liver/surgery, X-Ray Computed Tomography/methods
16.
Int J Comput Assist Radiol Surg ; 17(10): 1867-1877, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35650345

ABSTRACT

PURPOSE: Immunotherapy has dramatically improved the prognosis of patients with metastatic melanoma (MM). Yet, there is a lack of biomarkers to predict whether a patient will benefit from immunotherapy. Our aim was to create radiomics models on pretreatment computed tomography (CT) to predict overall survival (OS) and treatment response in patients with MM treated with anti-PD-1 immunotherapy. METHODS: We performed a monocentric retrospective analysis of 503 metastatic lesions in 71 patients, with 46 radiomics features extracted after lesion segmentation. Predictive accuracies for OS < 1 year versus > 1 year and for treatment response versus no response were compared across five feature selection methods (sequential forward selection, recursive, Boruta, relief, random forest) and four classifiers (support vector machine (SVM), random forest, K-nearest neighbour, logistic regression (LR)), used with or without SMOTE data augmentation. A fivefold cross-validation was performed at the patient level, with a tumour-based classification. RESULTS: The highest accuracy for OS prediction was obtained with 3D lesions (0.91), without clinical data integration, when combining Boruta feature selection and the LR classifier. The highest accuracy for treatment response prediction was obtained with 3D lesions (0.88), without clinical data integration, when combining Boruta feature selection, the LR classifier and SMOTE data augmentation. Accuracy was significantly higher for OS prediction with 3D segmentation (0.91 vs 0.86), while clinical data integration improved accuracy for treatment response prediction, notably with 2D lesions (0.76 vs 0.87). Skewness was the only feature found to be an independent predictor of OS (HR (95% CI) 1.34, p-value 0.001). CONCLUSION: This is the first study to investigate CT texture parameter selection and classification methods for predicting MM prognosis under immunotherapy treatment. Combining pretreatment CT radiomics features from a single tumour with data selection and classifiers may accurately predict OS and treatment response in MM treated with anti-PD-1.
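The best-performing combination (Boruta selection, SMOTE augmentation, LR classifier) maps onto standard Python packages as sketched below, with random arrays standing in for the 46 radiomics features; note that this toy split ignores the patient-level folds used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from imblearn.over_sampling import SMOTE       # pip install imbalanced-learn
from imblearn.pipeline import Pipeline
from boruta import BorutaPy                    # pip install Boruta

rng = np.random.default_rng(0)
X = rng.standard_normal((503, 46))             # placeholder radiomics features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.standard_normal(503) > 1).astype(int)

# Boruta keeps features that beat their shadow (permuted) copies.
boruta = BorutaPy(RandomForestClassifier(max_depth=5, n_jobs=-1),
                  n_estimators='auto', random_state=0)
boruta.fit(X, y)
X_sel = X[:, boruta.support_]

# SMOTE inside the pipeline so oversampling happens per training fold only.
pipe = Pipeline([('smote', SMOTE(random_state=0)),
                 ('clf', LogisticRegression(max_iter=1000))])
scores = cross_val_score(pipe, X_sel, y,
                         cv=StratifiedKFold(5, shuffle=True, random_state=0))
print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```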


Subjects
Melanoma, Humans, Immunotherapy, Melanoma/diagnostic imaging, Melanoma/therapy, Prognosis, Retrospective Studies, X-Ray Computed Tomography/methods
17.
Int J Comput Assist Radiol Surg ; 17(8): 1507-1511, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35527303

ABSTRACT

PURPOSE: We present a novel automatic system for markerless real-time augmented reality. Our system uses a dynamic keyframe database, which is required to track previously unseen or appearance-changing anatomical structures. Our main objective is to track the organ more accurately and over a longer time frame throughout the surgery. METHODS: Our system comprises an offline stage, which constructs the initial keyframe database, and an online stage, which dynamically updates the database with new keyframes automatically selected from the video stream. We propose five keyframe selection criteria ensuring tracking stability and a database management scheme ensuring real-time performance. RESULTS: Experimental results show that our automatic keyframe selection system based on a dynamic keyframe database outperforms the baseline system with a static keyframe database. We observe an increase in the number of tracked frames without requiring surgeon input, with an average improvement margin over the baseline of 11.9%. The frame rate is kept at the same values as the baseline, close to 50 FPS, and rendering remains smooth. CONCLUSION: Our software-based tracking system copes with new viewpoints and appearance changes during surgery, improving surgical organ tracking performance. Its criterion-based architecture allows a high degree of flexibility in the implementation, hence compatibility with various use cases.
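The abstract does not name the five selection criteria, so the sketch below illustrates the database mechanics with two hypothetical criteria (minimum tracking inliers, minimum viewpoint novelty) and a capacity bound that keeps lookup real-time.

```python
import numpy as np

class KeyframeDB:
    # Dynamic keyframe database with criterion-based admission. The two
    # criteria and all thresholds here are hypothetical illustrations;
    # the paper uses five unstated criteria.
    def __init__(self, capacity=50, min_inliers=80, min_baseline=0.02):
        self.capacity = capacity
        self.min_inliers = min_inliers
        self.min_baseline = min_baseline   # metres (assumed unit)
        self.frames = []                   # (descriptors, camera_centre) pairs

    def maybe_add(self, descriptors, camera_centre, n_inliers):
        if n_inliers < self.min_inliers:   # criterion 1: stable tracking
            return False
        if any(np.linalg.norm(camera_centre - c) < self.min_baseline
               for _, c in self.frames):   # criterion 2: novel viewpoint
            return False
        if len(self.frames) >= self.capacity:
            self.frames.pop(0)             # bound memory for real-time use
        self.frames.append((descriptors, camera_centre))
        return True
```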


Subjects
Augmented Reality, Laparoscopy, Computer-Assisted Surgery, Humans, Three-Dimensional Imaging/methods, Laparoscopy/methods, Computer-Assisted Surgery/methods
20.
Surg Endosc ; 36(1): 833-843, 2022 01.
Article in English | MEDLINE | ID: mdl-34734305

ABSTRACT

BACKGROUND: The aim of this study was to assess the performance of our augmented reality (AR) software (Hepataug) during laparoscopic resection of liver tumours and to compare it to standard ultrasonography (US). MATERIALS AND METHODS: Ninety pseudo-tumours ranging from 10 to 20 mm were created in sheep cadaveric livers by injection of alginate. CT scans were then performed and 3D models reconstructed using a medical image segmentation software (MITK). The livers were placed in a pelvi-trainer on an inclined plane, approximately perpendicular to the laparoscope. The aim was to obtain free resection margins, as close as possible to 1 cm. Laparoscopic resection was performed using US alone (n = 30, US group), AR alone (n = 30, AR group), and both US and AR (n = 30, ARUS group). R0 resection, maximal margins, minimal margins and mean margins were assessed after histopathologic examination, adjusted for tumour depth and for a liver zone-wise difficulty level. RESULTS: The minimal margins did not differ between the three groups (8.8, 8.0 and 6.9 mm in the US, AR and ARUS groups, respectively). The maximal margins were larger in the US group than in the AR and ARUS groups after adjustment for depth and zone difficulty (21 vs. 18 mm, p = 0.001 and 21 vs. 19.5 mm, p = 0.037, respectively). The mean margins, which reflect the variability of the measurements, were larger in the US group than in the ARUS group after adjustment for depth and zone difficulty (15.2 vs. 12.8 mm, p < 0.001). When considering only the most difficult zone (difficulty 3), there were more R1/R2 resections in the US group than in the AR + ARUS groups (50% vs. 21%, p = 0.019). CONCLUSION: Laparoscopic liver resection using AR seems to provide more accurate resection margins, with less variability, than the gold-standard US navigation, particularly in difficult-to-access liver zones with deep tumours.


Subjects
Augmented Reality, Laparoscopy, Liver Neoplasms, Animals, Animal Disease Models, Three-Dimensional Imaging, Laparoscopy/methods, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/surgery, Sheep