Results 1 - 20 of 28
1.
Surg Endosc ; 2024 May 24.
Article in English | MEDLINE | ID: mdl-38789623

ABSTRACT

BACKGROUND: Hyperspectral imaging (HSI), combined with machine learning, can help to identify characteristic tissue signatures, enabling automatic tissue recognition during surgery. This study aims to develop the first HSI-based automatic abdominal tissue recognition system using human data in a prospective bi-center setting. METHODS: Data were collected from patients undergoing elective open abdominal surgery at two international tertiary referral hospitals from September 2020 to June 2021. HS images were captured at various time points throughout the surgical procedure. The resulting RGB images were annotated with 13 distinct organ labels. Convolutional neural networks (CNNs) were employed for the analysis, with both external and internal validation settings utilized. RESULTS: A total of 169 patients were included, 73 (43.2%) from Strasbourg and 96 (56.8%) from Verona. The internal validation setting combined patients from both centers into a single cohort, randomly allocated to the training (127 patients, 75.1%, 585 images) and test (42 patients, 24.9%, 181 images) sets. This validation setting showed the best performance. The highest true positive rate was achieved for the skin (100%) and the liver (97%). Misclassifications involved tissues with a similar embryological origin (omentum and mesentery: 32%) or with overlapping boundaries (liver and hepatic ligament: 22%). The median Dice score for ten tissue classes exceeded 80%. CONCLUSION: To improve automatic surgical scene segmentation and to drive clinical translation, accurate multicenter HSI datasets are essential, but further work is needed to quantify the clinical value of HSI. HSI might be included in a new omics science, namely surgical optomics, which uses light to extract quantifiable tissue features during surgery.
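The per-class true positive rates reported above (e.g., 100% for skin, 97% for liver) correspond to class-wise recall computed from a confusion matrix of annotated versus predicted tissue labels. A minimal illustrative sketch of that computation with scikit-learn follows; the label names and toy data are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical example: ground-truth and predicted organ labels for a set of
# annotated regions (13 classes in the study; 3 shown here for brevity).
classes = ["skin", "liver", "omentum"]
y_true = np.array(["skin", "liver", "liver", "omentum", "skin", "omentum"])
y_pred = np.array(["skin", "liver", "omentum", "omentum", "skin", "liver"])

cm = confusion_matrix(y_true, y_pred, labels=classes)

# True positive rate (recall) per class: diagonal / row sum.
tpr = cm.diagonal() / cm.sum(axis=1)
for name, rate in zip(classes, tpr):
    print(f"{name}: TPR = {rate:.0%}")
```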

2.
Surg Endosc ; 38(1): 229-239, 2024 01.
Article in English | MEDLINE | ID: mdl-37973639

ABSTRACT

BACKGROUND: The large amount of heterogeneous data collected in surgical/endoscopic practice calls for data-driven approaches such as machine learning (ML) models. The aim of this study was to develop ML models to predict endoscopic sleeve gastroplasty (ESG) efficacy at 12 months, defined by total weight loss (TWL) % and excess weight loss (EWL) % achievement. Multicenter data were used to enhance generalizability, to evaluate consistency among different centers of ESG practice, and to assess the reproducibility of the models and their possible clinical application. Models were designed to be dynamic and to integrate follow-up clinical data into more accurate predictions, possibly assisting management and decision-making. METHODS: ML models were developed using data from 404 ESG procedures performed at 12 centers across Europe. Collected data included clinical and demographic variables at the time of ESG and at follow-up. Multicenter/external, single-center/internal, and temporal validations were performed. Training and evaluation of the models were performed with Python's scikit-learn library. Model performance was quantified as the area under the receiver operating characteristic curve (ROC-AUC), sensitivity, specificity, and calibration plots. RESULTS: Multicenter external validation: ML models using preoperative data only showed poor performance. The best performances were reached by linear regression (LR) and support vector machine models for TWL% and EWL%, respectively (ROC-AUC: TWL% 0.87, EWL% 0.86), with the addition of 6-month follow-up data. Single-center internal validation: ML models using preoperative data only showed suboptimal performance. Adding early (3-month) follow-up data led to ROC-AUCs of 0.79 (random forest classifier model) and 0.81 (LR model) for TWL% and EWL% achievement prediction, respectively. Single-center temporal validation showed similar results. CONCLUSIONS: Although preoperative data alone may not be sufficient for accurate postoperative predictions, the ability of ML models to adapt and evolve as patient data change could assist in providing effective and personalized postoperative care. The improvement of the models' predictive capacity with follow-up data is encouraging and may become a valuable support in patient management and decision-making.
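The abstract above describes classifiers built with Python's scikit-learn and evaluated by ROC-AUC, with performance improving once follow-up data are added as features. The sketch below illustrates that general pattern on synthetic data; the feature names, the 12-month target definition, and the use of logistic regression are assumptions for illustration, not the authors' exact models.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 404  # number of ESG procedures in the study; the data here are synthetic

# Hypothetical features: preoperative variables plus a 6-month follow-up value.
df = pd.DataFrame({
    "age": rng.integers(25, 65, n),
    "baseline_bmi": rng.normal(38, 5, n),
    "twl_6m": rng.normal(12, 4, n),          # 6-month total weight loss %
})
y = (df["twl_6m"] + rng.normal(0, 3, n) > 12).astype(int)  # toy 12-month TWL% target

for feats in (["age", "baseline_bmi"], ["age", "baseline_bmi", "twl_6m"]):
    X_tr, X_te, y_tr, y_te = train_test_split(df[feats], y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"features={feats}: ROC-AUC={auc:.2f}")
```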


Subject(s)
Gastroplasty , Obesity, Morbid , Humans , Gastroplasty/methods , Obesity/surgery , Reproducibility of Results , Treatment Outcome , Weight Loss , Machine Learning , Obesity, Morbid/surgery
3.
Cancers (Basel) ; 15(8)2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37190325

ABSTRACT

INTRODUCTION: The changes occurring in the liver in cases of outflow deprivation have rarely been investigated, and no measurements of this phenomenon are available. This investigation explored outflow occlusion in a pig model using a hyperspectral camera. METHODS: Six pigs were enrolled. The right hepatic vein was clamped for 30 min. Oxygen saturation (StO2%), deoxygenated hemoglobin level (de-Hb), near-infrared perfusion (NIR), and the total hemoglobin index (THI) were investigated at different time points in four perfused lobes using a hyperspectral camera measuring light absorbance between 500 nm and 995 nm. Differences among lobes at different time points were estimated by mixed-effect linear regression. RESULTS: StO2% decreased over time in the right lateral lobe (RLL, totally occluded) when compared to the left lateral (LLL, outflow preserved) and right medial (RML, partially occluded) lobes (p < 0.05). De-Hb increased significantly after clamping in the RLL when compared to the RML and LLL (p < 0.05). The RML was further analyzed considering the right portion (totally occluded) and the left portion of the lobe (with an autonomous draining vein): StO2% decreased and de-Hb increased more gradually than in the totally occluded RLL (p < 0.05). CONCLUSIONS: The variations of StO2% and de-Hb could be considered good markers of venous liver congestion.

4.
Surg Endosc ; 37(6): 4525-4534, 2023 06.
Article in English | MEDLINE | ID: mdl-36828887

ABSTRACT

BACKGROUND: Visualization of key anatomical landmarks is required during surgical transabdominal preperitoneal (TAPP) repair of inguinal hernia. The Critical View of the MyoPectineal Orifice (CVMPO) was proposed to ensure correct dissection. An artificial intelligence (AI) system that automatically validates the presence of key landmarks during the procedure is a critical step towards automatic dissection quality assessment and video-based competency evaluation. The aim of this study was to develop an AI system that automatically recognizes the key CVMPO landmarks in TAPP hernia repair videos. METHODS: Surgical videos of 160 TAPP procedures were used in this single-center study. A deep neural network-based object detector was developed to automatically recognize the pubic symphysis, direct hernia orifice, Cooper's ligament, iliac vein, triangle of Doom, deep inguinal ring, and iliopsoas muscle. The system was trained using 130 videos, annotated and verified by two board-certified surgeons. Performance was evaluated in 30 videos of new patients excluded from the training data. RESULTS: Performance was validated in two ways: first, single-image validation, in which the AI model detected landmarks in a single laparoscopic image (mean average precision (mAP) of 51.2%); second, video evaluation, in which the model detected landmarks throughout the myopectineal orifice visual inspection phase (mean accuracy and F-score of 77.1% and 75.4%, respectively). Annotation objectivity was assessed between the two surgeons in the video evaluation, showing a high agreement of 88.3%. CONCLUSION: This study establishes the first AI-based automated recognition of critical structures in TAPP surgical videos and is a major step towards automatic CVMPO validation with AI. Strong performance was achieved in the video evaluation. The high inter-rater agreement confirms annotation quality and task objectivity.


Subject(s)
Hernia, Inguinal , Laparoscopy , Surgeons , Humans , Artificial Intelligence , Laparoscopy/methods , Peritoneum , Hernia, Inguinal/surgery
5.
Med Image Anal ; 84: 102680, 2023 02.
Article in English | MEDLINE | ID: mdl-36481607

ABSTRACT

In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed an additional analysis of liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both the data and the online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
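The Dice scores used to rank LiTS submissions quantify the volumetric overlap between a predicted and a reference segmentation mask. Below is a minimal sketch of the per-volume Dice coefficient on binary masks; it is a generic illustration, not the official LiTS evaluation code.

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))

# Toy 2D example; in LiTS the masks are 3D CT label volumes.
ref = np.zeros((8, 8), dtype=np.uint8); ref[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=np.uint8); pred[3:7, 3:7] = 1
print(f"Dice = {dice(pred, ref):.3f}")  # ≈ 0.56 for this toy overlap
```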


Subject(s)
Benchmarking , Liver Neoplasms , Humans , Retrospective Studies , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Liver/diagnostic imaging , Liver/pathology , Algorithms , Image Processing, Computer-Assisted/methods
6.
Cancers (Basel) ; 14(22)2022 Nov 14.
Article in English | MEDLINE | ID: mdl-36428685

ABSTRACT

Ischemia-reperfusion injury during major hepatic resections is associated with high rates of post-operative complications and liver failure. Real-time intra-operative detection of liver dysfunction could provide great insight into clinical outcomes. In the present study, we demonstrate the intra-operative application of a novel optical technology, hyperspectral imaging (HSI), to predict short-term post-operative outcomes after major hepatectomy. We considered fifteen consecutive patients undergoing major hepatic resection for malignant liver lesions from January 2020 to June 2021. HSI measures included the tissue water index (TWI), organ hemoglobin index (OHI), tissue oxygenation (StO2%), and near-infrared perfusion index (NIR). Pre-operative, intra-operative, and post-operative serum values and clinical outcomes were collected. NIR values were higher in unhealthy liver tissue (p = 0.003). StO2% negatively correlated with post-operative serum ALT values (r = -0.602), while ΔStO2% positively correlated with ALP (r = 0.594). TWI significantly correlated with post-operative reintervention, and OHI with post-operative sepsis and liver failure. In conclusion, in this first clinical trial the HSI system proved accurate and precise in translating from pre-clinical to human studies, and the HSI indices are related to serum and outcome metrics. Further experimental and clinical studies are necessary to determine the clinical value of this technology.

7.
Diagnostics (Basel) ; 12(9)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36140626

ABSTRACT

Complete mesocolic excision (CME), which involves the adequate resection of the tumor-bearing colonic segment with "en bloc" removal of its mesocolon along embryological fascial planes, is associated with superior oncological outcomes. However, CME presents a higher complication rate compared to non-CME resections due to a higher risk of vascular injury. Hyperspectral imaging (HSI) is a contrast-free optical imaging technology which facilitates the quantitative imaging of physiological tissue parameters and the visualization of anatomical structures. This study evaluates the accuracy of HSI combined with deep learning (DL) to differentiate the colon and its mesenteric tissue from retroperitoneal tissue. In an animal study including 20 pig models, intraoperative hyperspectral images of the sigmoid colon, sigmoid mesentery, and retroperitoneum were recorded. A convolutional neural network (CNN) was trained to distinguish the two tissue classes using HSI data and validated with a leave-one-out cross-validation process. The overall recognition sensitivities for the tissues to be preserved (retroperitoneum) and the tissues to be resected (colon and mesentery) were 79.0 ± 21.0% and 86.0 ± 16.0%, respectively. Automatic classification based on HSI and CNNs is a promising tool to automatically, non-invasively, and objectively differentiate the colon and its mesentery from retroperitoneal tissue.

8.
Med Image Anal ; 77: 102380, 2022 04.
Article in English | MEDLINE | ID: mdl-35139482

ABSTRACT

Developing accurate and real-time algorithms for the non-invasive three-dimensional representation and reconstruction of internal patient structures is one of the main research fields in computer-assisted surgery and endoscopy. Mono and stereo endoscopic images of soft tissues are converted into a three-dimensional representation through the estimation of depth maps. However, automatic, detailed, accurate, and robust depth map estimation is a challenging problem that, in the stereo setting, is strictly dependent on a robust estimate of the disparity map. Many traditional algorithms are often inefficient or not accurate. In this work, novel self-supervised stacked and Siamese encoder/decoder neural networks are proposed to compute accurate disparity maps for 3D laparoscopic depth estimation. These networks run in real time on standard GPU-equipped desktop computers, and their outputs may be used for depth map estimation using a known camera calibration. We compare performance on three different public datasets and on a new challenging simulated dataset, and our solutions outperform state-of-the-art mono and stereo depth estimation methods. Extensive robustness and sensitivity analyses on more than 30,000 frames have been performed. This work leads to important improvements in mono and stereo real-time depth map estimation of soft tissues and organs, with a very low average mean absolute disparity reconstruction error with respect to ground truth.
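The abstract above converts estimated disparity maps into depth using a known stereo camera calibration. For a rectified stereo pair the standard relation is depth = focal length × baseline / disparity; a brief sketch under those common assumptions (not the authors' code) is shown below.

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray,
                       focal_px: float,
                       baseline_m: float) -> np.ndarray:
    """Depth map (metres) from a rectified-stereo disparity map (pixels).

    Assumes horizontally rectified cameras; invalid/zero disparities map to NaN.
    """
    depth = np.full_like(disparity, np.nan, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical laparoscope parameters: 800 px focal length, 4 mm stereo baseline.
disp = np.array([[32.0, 16.0], [0.0, 8.0]])
print(disparity_to_depth(disp, focal_px=800.0, baseline_m=0.004))
```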


Subject(s)
Laparoscopy , Surgery, Computer-Assisted , Algorithms , Humans , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Surgery, Computer-Assisted/methods
9.
Diagnostics (Basel) ; 11(10)2021 Sep 30.
Article in English | MEDLINE | ID: mdl-34679508

ABSTRACT

There are approximately 1.8 million diagnoses of colorectal cancer, 1 million diagnoses of stomach cancer, and 0.6 million diagnoses of esophageal cancer each year globally. An automatic computer-assisted diagnostic (CAD) tool to rapidly detect colorectal and esophagogastric cancer tissue in optical images would be hugely valuable to a surgeon during an intervention. Based on a colon dataset with 12 patients and an esophagogastric dataset with 10 patients, several state-of-the-art machine learning methods were trained to detect cancer tissue using hyperspectral imaging (HSI), including Support Vector Machines (SVM) with radial basis function kernels, Multi-Layer Perceptrons (MLP), and 3D Convolutional Neural Networks (3DCNN). A leave-one-patient-out cross-validation (LOPOCV), with and without combining these datasets, was performed. The ROC-AUC score of the 3DCNN was slightly higher than those of the MLP and SVM, with a difference of 0.04 AUC. The best performance was achieved with the 3DCNN for both colon and esophagogastric cancer detection, with a high ROC-AUC of 0.93. The 3DCNN also achieved the best Dice scores of 0.49 and 0.41 on the colon and esophagogastric datasets, respectively. These scores improved to 0.58 and 0.51, respectively, when a patient-specific decision threshold was used. This indicates that, in practical use, an HSI-based CAD system using an interactive decision threshold is likely to be valuable. Experiments were also performed to measure the benefits of combining the colorectal and esophagogastric datasets (22 patients), and this yielded significantly better results with the MLP and SVM models.
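The leave-one-patient-out cross-validation (LOPOCV) described above keeps every spectrum from one patient out of training and evaluates on that patient, so the reported scores reflect generalization to unseen patients. A compact sketch with scikit-learn's LeaveOneGroupOut and an RBF-kernel SVM follows; the synthetic spectra, labels, and feature sizes are assumptions.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_spectra, n_bands = 600, 100                 # synthetic HSI spectra
X = rng.normal(size=(n_spectra, n_bands))
y = rng.integers(0, 2, n_spectra)             # 1 = cancer tissue, 0 = healthy (toy labels)
patients = rng.integers(0, 12, n_spectra)     # patient ID per spectrum

aucs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patients):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"LOPOCV mean ROC-AUC: {np.mean(aucs):.2f}")
```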

10.
Sensors (Basel) ; 21(20)2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34696147

ABSTRACT

Thermal ablation is an acceptable alternative treatment for primary liver cancer, of which laser ablation (LA) is one of the least invasive approaches, especially for tumors in high-risk locations. Precise control of the LA effect is required to safely destroy the tumor. Although temperature imaging techniques provide an indirect measurement of thermal damage, a degree of uncertainty remains about the treatment effect. Optical techniques are currently emerging as tools to directly assess tissue thermal damage. Among them, hyperspectral imaging (HSI) has shown promising results in image-guided surgery and in the thermal ablation field. The highly informative data provided by HSI, combined with deep learning, enable the implementation of non-invasive prediction models to be used intraoperatively. Here we present a novel paradigm, the "peak temperature prediction model" (PTPM), a convolutional neural network (CNN) trained with HSI and infrared imaging to predict LA-induced damage in the liver. The PTPM demonstrated optimal agreement with tissue damage classification, providing a consistent threshold (50.6 ± 1.5 °C) for the damage margins with high accuracy (~0.90). The high correlation with the histology score (r = 0.9085) and the comparison with the measured peak temperature confirmed that the PTPM preserves temperature information in accordance with the histopathological assessment.
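The PTPM described above predicts a per-pixel peak temperature, from which ablation damage margins are derived with a consistent threshold of about 50.6 °C. The sketch below shows only that final thresholding step and an accuracy check against a reference mask; the arrays are synthetic placeholders and the CNN producing the temperature map is omitted.

```python
import numpy as np

DAMAGE_THRESHOLD_C = 50.6  # threshold reported in the abstract (± 1.5 °C)

# Hypothetical predicted peak-temperature map (°C) and a reference damage mask
# standing in for the histopathological assessment; both are synthetic here.
rng = np.random.default_rng(2)
peak_temp = 37.0 + 30.0 * np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(0) / 400.0)
reference = peak_temp + rng.normal(0, 0.5, peak_temp.shape) >= DAMAGE_THRESHOLD_C

predicted_damage = peak_temp >= DAMAGE_THRESHOLD_C
accuracy = (predicted_damage == reference).mean()
print(f"damaged pixels: {predicted_damage.sum()}, accuracy vs. reference: {accuracy:.2f}")
```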


Subject(s)
Deep Learning , Laser Therapy , Hyperspectral Imaging , Lasers , Neural Networks, Computer
11.
Diagnostics (Basel) ; 11(9)2021 Aug 24.
Article in English | MEDLINE | ID: mdl-34573869

ABSTRACT

Hyperspectral imaging (HSI) is a non-invasive imaging modality already applied to evaluate hepatic oxygenation and to discriminate between different models of hepatic ischemia. Nevertheless, the ability of HSI to detect and predict reperfusion damage intraoperatively has not yet been assessed. Hypoxia caused by hepatic artery occlusion (HAO) in the liver brings about dreadful vascular complications known as ischemia-reperfusion injury (IRI). Here, we show the evaluation of liver viability in an HAO model with an artificial intelligence-based analysis of HSI. We combined the potential of HSI to extract quantitative optical tissue properties with a deep learning-based model using convolutional neural networks. The artificial intelligence (AI) score of liver viability showed a significant correlation with capillary lactate from the liver surface (r = -0.78, p = 0.0320) and with Suzuki's score (r = -0.96, p = 0.0012). CD31 immunostaining confirmed the microvascular damage, in accordance with the AI score. Our results show the potential of an HSI-AI-based analysis to predict liver viability, prompting the development of an intraoperative tool to explore its application in a clinical setting.

12.
Diagnostics (Basel) ; 11(8)2021 Aug 21.
Article in English | MEDLINE | ID: mdl-34441442

ABSTRACT

Nerves are critical structures that may be difficult to recognize during surgery. Inadvertent nerve injuries can have catastrophic consequences for the patient and lead to life-long pain and a reduced quality of life. Hyperspectral imaging (HSI) is a non-invasive technique combining photography with spectroscopy, allowing intraoperative quantification of biological tissue properties. We show, for the first time, that HSI combined with deep learning allows nerves and other tissue types to be automatically recognized in in vivo hyperspectral images. An animal model was used: eight anesthetized pigs underwent neck midline incisions, exposing several structures (nerve, artery, vein, muscle, fat, skin). State-of-the-art machine learning models were trained to recognize these tissue types in HSI data. The best model was a convolutional neural network (CNN), achieving an overall average sensitivity of 0.91 and a specificity of 1.0, validated with leave-one-patient-out cross-validation. For the nerve, the CNN achieved an average sensitivity of 0.76 and a specificity of 0.99. In conclusion, HSI combined with a CNN model is suitable for in vivo nerve recognition.

13.
Sci Rep ; 11(1): 9650, 2021 05 06.
Article in English | MEDLINE | ID: mdl-33958693

ABSTRACT

Intraoperative indocyanine green (ICG) fluorescence angiography has gained popularity and acceptance in many surgical fields for the real-time assessment of tissue perfusion. Although vasopressors have the potential to preclude an accurate assessment of tissue perfusion, there is a lack of literature with regard to their effect on ICG fluorescence angiography. An experimental porcine model was used to expose the small bowel for quantitative tissue perfusion assessment. Three increasing doses of norepinephrine infusion (0.1, 0.5, and 1.0 µg/kg/min) were administered intravenously over a 25-min interval. Time-to-peak fluorescence intensity (TTP) was the primary outcome. Secondary outcomes included absolute fluorescence intensity and local capillary lactate (LCL) levels. Five large pigs (mean weight: 40.3 ± 4.24 kg) were included. There was no significant difference in mean TTP (in seconds) at baseline (4.23) compared to the second (3.90), third (4.41), fourth (4.60), and fifth ICG assessments (5.99). As a result of ICG accumulation, the mean and maximum absolute fluorescence intensities were significantly different from the baseline assessment. There was no significant difference in LCL levels (in mmol/L) at baseline (0.74) compared to the second (0.82), third (0.64), fourth (0.60), and fifth assessments (0.62). Increasing doses of norepinephrine infusion had no significant influence on bowel perfusion as assessed by ICG fluorescence angiography.
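Time-to-peak fluorescence intensity (TTP), the primary outcome above, is the delay between the onset of the fluorescence rise and the moment the intensity curve reaches its maximum. A short sketch of that computation on a sampled intensity curve follows; the synthetic curve, frame rate, and 10%-of-peak rise criterion are assumptions for illustration.

```python
import numpy as np

def time_to_peak(intensity: np.ndarray, fps: float, rise_fraction: float = 0.1) -> float:
    """TTP in seconds: from the first frame exceeding baseline by a set
    fraction of the peak rise, to the frame of maximum intensity."""
    baseline = intensity[:int(fps)].mean()            # first second as baseline
    peak_idx = int(np.argmax(intensity))
    threshold = baseline + rise_fraction * (intensity[peak_idx] - baseline)
    start_idx = int(np.argmax(intensity >= threshold))
    return (peak_idx - start_idx) / fps

# Synthetic ICG curve: flat baseline, then a rise starting near t = 5 s and peaking at t = 9 s.
fps = 25.0
t = np.arange(0, 20, 1 / fps)
curve = 10 + 90 * np.exp(-((t - 9.0) ** 2) / 4.0) * (t > 5)
print(f"TTP = {time_to_peak(curve, fps):.2f} s")
```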


Subject(s)
Fluorescein Angiography/methods , Indocyanine Green , Norepinephrine/pharmacology , Vasoconstrictor Agents/pharmacology , Animals , Disease Models, Animal , Female , Infusions, Intravenous , Injections, Intravenous , Intestines/blood supply , Intraoperative Period , Laparotomy , Norepinephrine/administration & dosage , Swine , Vasoconstrictor Agents/administration & dosage
14.
Sci Rep ; 10(1): 15441, 2020 09 22.
Article in English | MEDLINE | ID: mdl-32963333

ABSTRACT

Liver ischaemia-reperfusion injury (IRI) is a dreaded pathophysiological complication which may lead to impaired liver function. The level of oxygen hypoperfusion affects the level of cellular damage during the reperfusion phase. Consequently, intraoperative localisation and quantification of oxygen impairment would help in the early detection of liver ischaemia. To date, there is no real-time, non-invasive, intraoperative tool which can compute an organ oxygenation map and quantify and discriminate between different types of vascular occlusion. Hyperspectral imaging (HSI) is a non-invasive optical methodology which can quantify tissue oxygenation and which has recently been applied to the medical field. A hyperspectral camera detects the relative reflectance of a tissue in the range of 500 to 1000 nm, allowing the quantification of organic compounds such as oxygenated and deoxygenated haemoglobin at different depths. Here, we show the first comparative study of liver oxygenation by means of HSI quantification in a model of total vascular inflow occlusion (VIO) vs. hepatic artery occlusion (HAO), correlating optical properties with capillary lactate and histopathological evaluation. We found that liver HSI could discriminate between VIO and HAO. These results were confirmed via cross-validation of the HSI measurements, which detected and quantified intestinal congestion in VIO. A significant correlation between the near-infrared spectra and capillary lactate was found (r = -0.8645, p = 0.0003 for VIO; r = -0.7113, p = 0.0120 for HAO). Finally, a statistically significant negative correlation was found between the histology score and the near-infrared parameter index (NIR) (r = -0.88, p = 0.004). We infer that HSI, by predicting capillary lactate and the histopathological score, would be a suitable non-invasive tool for intraoperative liver perfusion assessment.
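The tissue oxygen saturation quantified by HSI above is conventionally defined as the fraction of oxygenated haemoglobin over total haemoglobin, StO2 = HbO2 / (HbO2 + Hb), evaluated per pixel once the chromophore contributions have been estimated from the 500-1000 nm reflectance spectrum. A minimal sketch of that last step is shown below; the concentration maps are assumed to be already available and the spectral unmixing itself is not shown.

```python
import numpy as np

def sto2_map(hbo2: np.ndarray, hb: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Tissue oxygen saturation (%) per pixel from estimated oxygenated (HbO2)
    and deoxygenated (Hb) haemoglobin contributions."""
    return 100.0 * hbo2 / (hbo2 + hb + eps)

# Hypothetical 2x2 concentration maps (arbitrary units).
hbo2 = np.array([[0.8, 0.6], [0.3, 0.1]])
hb = np.array([[0.2, 0.4], [0.7, 0.9]])
print(sto2_map(hbo2, hb))  # e.g. 80% in well-perfused pixels, 10% in congested ones
```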


Subject(s)
Disease Models, Animal , Hepatic Artery/physiopathology , Ischemia/physiopathology , Liver Diseases/physiopathology , Oxygen/metabolism , Perfusion Imaging/methods , Reperfusion Injury/physiopathology , Animals , Intestines/physiopathology , Male , Oxygen Consumption , Swine
15.
Int J Comput Assist Radiol Surg ; 15(9): 1585-1595, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32592068

ABSTRACT

PURPOSE: Inexpensive benchtop training systems offer significant advantages to meet the increasing demand for training surgeons and gastroenterologists in flexible endoscopy. Established scoring systems exist, based on task duration and mistake evaluation. However, they require trained human raters, which limits broad and low-cost adoption. There is an unmet and important need to automate rating with machine learning. METHOD: We present a general and robust approach for recognizing training tasks from endoscopic training video, which consequently automates task duration computation. Our main technical novelty is to show that the performance of state-of-the-art CNN-based approaches can be improved significantly with a novel semi-supervised learning approach using both labelled and unlabelled videos. In the latter case, we assume only the task execution order is known a priori. RESULTS: Two video datasets are presented: the first has 19 videos recorded in examination conditions, where the participants complete their tasks in a predetermined order. The second has 17 h of videos recorded in self-assessment conditions, where participants complete one or more tasks in any order. For the first dataset, we obtain a mean task duration estimation error of 3.65 s, with a mean task duration of 159 s ([Formula: see text] relative error). For the second dataset, we obtain a mean task duration estimation error of 3.67 s. Our semi-supervised learning approach reduces the average error from 5.63% to 3.67%. CONCLUSION: This work is the first significant step towards automated rating of flexible endoscopy students using a low-cost benchtop trainer. Thanks to our semi-supervised learning approach, we can scale easily to much larger unlabelled training datasets. The approach can also be used for other phase recognition tasks.
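Once each video frame has been labelled with a training task, as described above, task duration follows directly from the frame count and frame rate, and the duration error is the absolute difference from a reference timing. Below is a short sketch of that bookkeeping (not the CNN or the semi-supervised training); the task labels and frame rate are assumptions.

```python
import numpy as np

def task_durations(frame_labels: np.ndarray, fps: float) -> dict[int, float]:
    """Seconds spent in each task, from per-frame task labels."""
    labels, counts = np.unique(frame_labels, return_counts=True)
    return {int(l): float(c) / fps for l, c in zip(labels, counts)}

fps = 25.0
# Hypothetical predicted and reference labels for a 3-task session (task ids 0, 1, 2).
pred = np.repeat([0, 1, 2], [4000, 2500, 1500])
ref = np.repeat([0, 1, 2], [3950, 2600, 1450])

pred_d, ref_d = task_durations(pred, fps), task_durations(ref, fps)
errors = [abs(pred_d[t] - ref_d[t]) for t in ref_d]
print(f"mean task duration error: {np.mean(errors):.2f} s")
```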


Subject(s)
Endoscopes , Endoscopy/education , Gastroenterology/education , Machine Learning , Pattern Recognition, Automated , Supervised Machine Learning , Algorithms , Diagnosis, Computer-Assisted , Equipment Design , Gastroenterology/instrumentation , Humans , Internship and Residency , Reproducibility of Results , Task Performance and Analysis , Video Recording
16.
Int J Comput Assist Radiol Surg ; 15(5): 859-866, 2020 May.
Article in English | MEDLINE | ID: mdl-32347463

ABSTRACT

PURPOSE: A better understanding of photometry in laparoscopic images can increase the reliability of computer-assisted surgery applications. Photometry requires modelling illumination, tissue reflectance and camera response. There exists a large variety of light models, but no systematic and reproducible evaluation. We present a review of light models in laparoscopic surgery, a unified calibration approach, an evaluation methodology, and a practical use of photometry. METHOD: We use images of a calibration checkerboard to calibrate the light models. We then use these models in a proposed dense stereo algorithm exploiting the shading and simultaneously extracting the tissue albedo, which we call dense shading stereo. The approach works with a broad range of light models, giving us a way to test their respective merits. RESULTS: We show that overly complex light models are usually not needed and that the light source position must be calibrated. We also show that dense shading stereo outperforms existing methods, in terms of both geometric and photometric errors, and achieves sub-millimeter accuracy. CONCLUSION: This work demonstrates the importance of careful light modelling and calibration for computer-assisted surgical applications. It gives guidelines on choosing the best performing light model.
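A typical light model of the kind reviewed above combines a point light source with inverse-square falloff, a Lambertian tissue reflectance term, and a camera gain. The sketch below evaluates the predicted pixel intensity under those common assumptions; it is a generic illustration, not one of the specific calibrated models compared in the paper.

```python
import numpy as np

def predicted_intensity(point: np.ndarray, normal: np.ndarray,
                        light_pos: np.ndarray, albedo: float, gain: float) -> float:
    """Lambertian shading with inverse-square falloff for a point light source.

    point, normal, light_pos: 3D vectors in the camera frame (normal unit-length).
    """
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    cos_theta = max(0.0, float(np.dot(normal, to_light / dist)))
    return gain * albedo * cos_theta / dist**2

# Hypothetical surface point 60 mm in front of the scope, light at the scope tip.
p = np.array([0.0, 0.0, 0.06])
n = np.array([0.0, 0.0, -1.0])
print(predicted_intensity(p, n, light_pos=np.zeros(3), albedo=0.7, gain=1e-3))
```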


Subject(s)
Laparoscopy/methods , Photometry/methods , Surgery, Computer-Assisted/methods , Algorithms , Calibration , Humans , Photogrammetry , Reproducibility of Results
18.
Ann Surg Open ; 1(2): e021, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33392607

ABSTRACT

OBJECTIVE: To develop consensus definitions of image-guided surgery, computer-assisted surgery, hybrid operating room, and surgical navigation systems. SUMMARY BACKGROUND DATA: The use of minimally invasive procedures has increased tremendously over the past 2 decades, but terminology related to image-guided minimally invasive procedures has not been standardized, which is a barrier to clear communication. METHODS: Experts in image-guided techniques and specialized engineers were invited to engage in a systematic process to develop consensus definitions of the key terms listed above. The process was designed following a review of common consensus-development methodologies and included participation in 4 online surveys and a post-survey face-to-face panel meeting held in Strasbourg, France. RESULTS: The experts settled on the terms computer-assisted surgery and intervention, image-guided surgery and intervention, hybrid operating room, and guidance systems, and agreed upon definitions of these terms, with rates of consensus of more than 80% for each term. The methodology used proved to be a compelling strategy to overcome the current difficulties related to data growth rates and technological convergence in this field. CONCLUSIONS: Our multidisciplinary collaborative approach resulted in consensus definitions that may improve communication, knowledge transfer, collaboration, and research in the rapidly changing field of image-guided minimally invasive techniques.

19.
Surg Endosc ; 34(1): 226-230, 2020 01.
Article in English | MEDLINE | ID: mdl-30911919

ABSTRACT

Image-guided surgery is growing in importance every year, and various imaging technologies are used. The objective of this study was to test whether a new mixed reality navigation system (MRNS) improved percutaneous punctures. This system allows the needle tip, needle orientation, ultrasound (US) probe, and puncture target to be visualized clearly and simultaneously through an interactive 3D computer user interface. This was a prospective pre-clinical comparative study. An opaque ballistic gel phantom containing grapes of different sizes was used to simulate puncture targets. The evaluation consisted of US-guided needle punctures divided into two groups: the standard group, with punctures performed using the standard US-guided approach, and the assisted navigation group, with punctures performed using the MRNS. Once a puncture was completed, a computed tomography scan of the phantom and needle was made. The distance between the needle tip and the center of the target was measured. The time required to complete the puncture and the number of puncture attempts were also recorded. A total of 23 participants were included, comprising surgeons, medical technicians, and radiologists. The participants were divided into novices (without experience, 69.6%) and experienced operators (experience of > 25 procedures, 30.4%). Each participant performed punctures of six targets. For puncture completion time, the assisted navigation group was faster (42.1%) than the standard group (57.9%) (28.3 ± 24.7 s vs. 39.3 ± 46.3 s, p = 0.775). The total number of puncture attempts was lower in the assisted navigation group (35.4%) than in the standard group (64.6%) (1.0 ± 0.2 vs. 1.8 ± 1.1, p < 0.001). The assisted navigation group was also more accurate than the standard group (4.2 ± 2.9 mm vs. 6.5 ± 4.7 mm, p = 0.003), a finding observed in both the novice and experienced groups. The use of the MRNS improved US-guided percutaneous puncture parameters compared to the standard approach.


Subject(s)
Augmented Reality , Punctures/methods , Surgery, Computer-Assisted/methods , Ultrasonography, Interventional/methods , Virtual Reality , Algorithms , Humans , Needles , Phantoms, Imaging , Prospective Studies , Punctures/instrumentation , Surgery, Computer-Assisted/instrumentation
20.
Int J Comput Assist Radiol Surg ; 14(7): 1237-1245, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31147817

ABSTRACT

PURPOSE: The registration of preoperative 3D images to intra-operative laparoscopic 2D images is one of the main concerns for augmented reality in computer-assisted surgery. For laparoscopic liver surgery, while several algorithms have been proposed, there is neither a public dataset nor a systematic evaluation methodology to quantitatively evaluate registration accuracy. METHOD: Our main contribution is to provide such a dataset with an in vivo porcine model. It is used to evaluate a state-of-the-art registration algorithm that is capable of simultaneous registration and soft-body collision reasoning. RESULTS: The dataset consists of 13 deformed liver states, with corresponding exploration videos and interventional CT acquisitions with 60 small artificial fiducials located on the surface of the liver and distributed within the parenchyma, where a precise registration is crucial for augmented reality. This dataset will be made public. Using this dataset, we show that collision reasoning improves performance of registration for strong deformation and independent lobe motion. CONCLUSION: This dataset addresses the lack of public datasets in this field. As an example of use, we present and evaluate a state-of-the-art energy-based approach and a novel extension that handles self-collisions.
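The fiducials in the dataset described above allow registration accuracy to be quantified as a target registration error: the distance between each fiducial after applying the estimated registration and its ground-truth position in the deformed state. A brief sketch of that metric follows; the point arrays are synthetic placeholders, not data from the dataset.

```python
import numpy as np

def target_registration_error(registered_pts: np.ndarray,
                              ground_truth_pts: np.ndarray) -> tuple[float, float]:
    """Mean and max Euclidean distance (same units as input, e.g. mm)
    between registered fiducial positions and their ground-truth positions."""
    d = np.linalg.norm(registered_pts - ground_truth_pts, axis=1)
    return float(d.mean()), float(d.max())

# Hypothetical fiducial coordinates in mm (N x 3 arrays).
rng = np.random.default_rng(3)
gt = rng.uniform(0, 100, size=(60, 3))
reg = gt + rng.normal(0, 2.0, size=gt.shape)   # simulated residual error
mean_tre, max_tre = target_registration_error(reg, gt)
print(f"TRE: mean {mean_tre:.1f} mm, max {max_tre:.1f} mm")
```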


Subject(s)
Imaging, Three-Dimensional/methods , Laparoscopy/methods , Liver/surgery , Surgery, Computer-Assisted/methods , Algorithms , Animals , Datasets as Topic , Organ Motion , Swine