ABSTRACT
BACKGROUND: The large amount of heterogeneous data collected in surgical/endoscopic practice calls for data-driven approaches such as machine learning (ML) models. The aim of this study was to develop ML models to predict endoscopic sleeve gastroplasty (ESG) efficacy at 12 months, defined by total weight loss (TWL) % and excess weight loss (EWL) % achievement. Multicentre data were used to enhance generalizability, evaluate consistency among different centers of ESG practice, and assess the reproducibility of the models and their possible clinical application. Models were designed to be dynamic and to integrate follow-up clinical data into more accurate predictions, possibly assisting management and decision-making. METHODS: ML models were developed using data from 404 ESG procedures performed at 12 centers across Europe. Collected data included clinical and demographic variables at the time of ESG and at follow-up. Multicentre/external, single-center/internal, and temporal validation were performed. Training and evaluation of the models were performed with Python's scikit-learn library. Model performance was quantified as the area under the receiver operating characteristic curve (ROC-AUC), sensitivity, specificity, and calibration plots. RESULTS: Multicenter external validation: ML models using preoperative data only showed poor performance. The best performances were reached by linear regression (LR) and support vector machine models for TWL% and EWL%, respectively (ROC-AUC: TWL% 0.87, EWL% 0.86), with the addition of 6-month follow-up data. Single-center internal validation: ML models using preoperative data only showed suboptimal performance. The addition of early, i.e., 3-month, follow-up data led to ROC-AUCs of 0.79 (random forest classifier model) and 0.81 (LR model) for TWL% and EWL% achievement prediction, respectively. Single-center temporal validation showed similar results.
CONCLUSIONS: Although preoperative data alone may not be sufficient for accurate postoperative predictions, the ability of ML models to adapt and evolve with patients' changes could assist in providing effective and personalized postoperative care. The improvement in the ML models' predictive capacity with follow-up data is encouraging and may become a valuable support in patient management and decision-making.
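The evaluation pipeline described above can be sketched in a few lines of scikit-learn. This is a hypothetical illustration, not the study's code: the synthetic features and the logistic-regression classifier stand in for the real clinical variables and models, and the metrics mirror the reported ones (ROC-AUC, sensitivity, specificity).

```python
# Hypothetical sketch of the evaluation pipeline, not the study's code.
# Synthetic features stand in for clinical variables (e.g. baseline BMI,
# age, 6-month weight loss); a logistic-regression classifier predicts a
# binary outcome such as "TWL% target reached at 12 months".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 2] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Temporal-style split: earlier cases train, later cases test.
clf = LogisticRegression().fit(X[:150], y[:150])
proba = clf.predict_proba(X[150:])[:, 1]
pred = (proba >= 0.5).astype(int)

auc = roc_auc_score(y[150:], proba)
tn, fp, fn, tp = confusion_matrix(y[150:], pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f} Se={sensitivity:.2f} Sp={specificity:.2f}")
```

The same three metrics can then be recomputed per validation setting (external, internal, temporal) by changing only the split.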
Subject(s)
Gastroplasty, Obesity, Morbid, Humans, Gastroplasty/methods, Obesity/surgery, Reproducibility of Results, Treatment Outcome, Weight Loss, Machine Learning, Obesity, Morbid/surgery
ABSTRACT
BACKGROUND: Hyperspectral imaging (HSI), combined with machine learning, can help to identify characteristic tissue signatures enabling automatic tissue recognition during surgery. This study aims to develop the first HSI-based automatic abdominal tissue recognition with human data in a prospective bi-center setting. METHODS: Data were collected from patients undergoing elective open abdominal surgery at two international tertiary referral hospitals from September 2020 to June 2021. HS images were captured at various time points throughout the surgical procedure. Resulting RGB images were annotated with 13 distinct organ labels. Convolutional Neural Networks (CNNs) were employed for the analysis, with both external and internal validation settings utilized. RESULTS: A total of 169 patients were included, 73 (43.2%) from Strasbourg and 96 (56.8%) from Verona. The internal validation within centers combined patients from both centers into a single cohort, randomly allocated to the training (127 patients, 75.1%, 585 images) and test sets (42 patients, 24.9%, 181 images). This validation setting showed the best performance. The highest true positive rate was achieved for the skin (100%) and the liver (97%). Misclassifications included tissues with a similar embryological origin (omentum and mesentery: 32%) or with overlaying boundaries (liver and hepatic ligament: 22%). The median DICE score for ten tissue classes exceeded 80%. CONCLUSION: To improve automatic surgical scene segmentation and to drive clinical translation, multicenter accurate HSI datasets are essential, but further work is needed to quantify the clinical value of HSI. HSI might be included in a new omics science, namely surgical optomics, which uses light to extract quantifiable tissue features during surgery.
Subject(s)
Deep Learning, Hyperspectral Imaging, Humans, Prospective Studies, Hyperspectral Imaging/methods, Male, Female, Middle Aged, Aged, Abdomen/surgery, Abdomen/diagnostic imaging, Surgery, Computer-Assisted/methods
ABSTRACT
BACKGROUND: Visualization of key anatomical landmarks is required during surgical transabdominal preperitoneal (TAPP) repair of inguinal hernia. The Critical View of the MyoPectineal Orifice (CVMPO) was proposed to ensure correct dissection. An artificial intelligence (AI) system that automatically validates the presence of key landmarks during the procedure is a critical step towards automatic dissection quality assessment and video-based competency evaluation. The aim of this study was to develop an AI system that automatically recognizes the key CVMPO landmarks in TAPP hernia repair videos. METHODS: Surgical videos of 160 TAPP procedures were used in this single-center study. A deep neural network-based object detector was developed to automatically recognize the pubic symphysis, direct hernia orifice, Cooper's ligament, iliac vein, triangle of Doom, deep inguinal ring, and iliopsoas muscle. The system was trained using 130 videos, annotated and verified by two board-certified surgeons. Performance was evaluated on 30 videos of new patients excluded from the training data. RESULTS: Performance was validated in two ways. The first was single-image validation, where the AI model detected landmarks in single laparoscopic images (mean average precision of 51.2%). The second was video evaluation, where the model detected landmarks throughout the myopectineal orifice visual inspection phase (mean accuracy and F-score of 77.1% and 75.4%, respectively). Annotation objectivity was assessed between the two surgeons in the video evaluation, showing a high agreement of 88.3%. CONCLUSION: This study establishes the first AI-based automated recognition of critical structures in TAPP surgical videos, a major step towards automatic CVMPO validation with AI. Strong performance was achieved in the video evaluation, and the high inter-rater agreement confirms annotation quality and task objectivity.
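Detection metrics such as mean average precision rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. The following is a minimal IoU sketch, not the study's code, for axis-aligned boxes given as (x1, y1, x2, y2):

```python
# Minimal sketch (not the study's code): intersection-over-union between two
# axis-aligned boxes (x1, y1, x2, y2), the overlap criterion underlying
# detection metrics such as mean average precision.
def iou(a, b):
    # Intersection rectangle corners
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 6))  # 1/7 ≈ 0.142857
```

A prediction is typically counted as a true positive when its IoU with a same-class ground-truth box exceeds a threshold such as 0.5.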
Subject(s)
Hernia, Inguinal, Laparoscopy, Surgeons, Humans, Artificial Intelligence, Laparoscopy/methods, Peritoneum, Hernia, Inguinal/surgery
ABSTRACT
Thermal ablation is an accepted alternative treatment for primary liver cancer, of which laser ablation (LA) is one of the least invasive approaches, especially for tumors in high-risk locations. Precise control of the LA effect is required to safely destroy the tumor. Although temperature imaging techniques provide an indirect measurement of the thermal damage, a degree of uncertainty remains about the treatment effect. Optical techniques are currently emerging as tools to directly assess tissue thermal damage. Among them, hyperspectral imaging (HSI) has shown promising results in image-guided surgery and in the thermal ablation field. The highly informative data provided by HSI, combined with deep learning, enable the implementation of non-invasive prediction models to be used intraoperatively. Here we present a novel paradigm, a convolutional neural network (CNN)-based "peak temperature prediction model" (PTPM), trained with HSI and infrared imaging to predict LA-induced damage in the liver. The PTPM demonstrated optimal agreement with the tissue damage classification, providing a consistent threshold (50.6 ± 1.5 °C) for the damage margins with high accuracy (~0.90). The high correlation with the histology score (r = 0.9085) and the comparison with the measured peak temperature confirmed that the PTPM preserves temperature information in accordance with the histopathological assessment.
Subject(s)
Deep Learning, Laser Therapy, Hyperspectral Imaging, Lasers, Neural Networks, Computer
ABSTRACT
Image-guided surgery is growing in importance each year, and various imaging technologies are used. The objective of this study was to test whether a new mixed reality navigation system (MRNS) improved percutaneous punctures. The system allows the needle tip, needle orientation, US probe, and puncture target to be clearly visualized simultaneously through an interactive 3D computer user interface. This was a prospective pre-clinical comparative study. An opaque ballistic gel phantom containing grapes of different sizes was used to simulate puncture targets. The evaluation consisted of ultrasound-guided (US-guided) needle punctures divided into two groups: the standard group consisted of punctures using the standard US-guided approach, and the assisted navigation group consisted of punctures using the MRNS. Once a puncture was completed, a computed tomography scan of the phantom and needle was acquired. The distance between the needle tip and the center of the target was measured. The time required to complete the puncture and the number of puncture attempts were also recorded. A total of 23 participants were included, comprising surgeons, medical technicians, and radiologists. Participants were divided into novices (no experience, 69.6%) and experienced operators (> 25 procedures, 30.4%). Each participant punctured six targets. For puncture completion time, the assisted navigation group was faster (42.1%) compared to the standard group (57.9%) (28.3 s ± 24.7 vs. 39.3 s ± 46.3, p = 0.775). The total number of puncture attempts was lower in the assisted navigation group (35.4%) compared to the standard group (64.6%) (1.0 ± 0.2 vs. 1.8 ± 1.1, p < 0.001). The assisted navigation group was more accurate than the standard group (4.2 mm ± 2.9 vs. 6.5 mm ± 4.7, p = 0.003), observed in both the novice and experienced groups. The use of the MRNS improved ultrasound-guided percutaneous puncture parameters compared to the standard approach.
Subject(s)
Augmented Reality, Punctures/methods, Surgery, Computer-Assisted/methods, Ultrasonography, Interventional/methods, Virtual Reality, Algorithms, Humans, Needles, Phantoms, Imaging, Prospective Studies, Punctures/instrumentation, Surgery, Computer-Assisted/instrumentation
ABSTRACT
Orthognathic surgery belongs to the scope of maxillofacial surgery. It treats dentofacial deformities consisting of a discrepancy between the facial bones (upper and lower jaws). Such impairment affects chewing, talking, and breathing and can ultimately result in the loss of teeth. Orthognathic surgery restores facial harmony and dental occlusion through bone cutting, repositioning, and fixation. However, in routine practice, we face the limitations of conventional tools and the lack of intraoperative assistance. These limitations occur at every step of the surgical workflow: preoperative planning, simulation, and intraoperative navigation. The aim of this research was to provide novel tools to improve simulation and navigation. We first developed a semiautomated segmentation pipeline allowing accurate and time-efficient patient-specific 3D modeling from computed tomography scans, which is mandatory for surgical planning. This step improved processing time by a factor of 6 compared with interactive segmentation, with a 1.5-mm distance error. Next, we developed software to simulate the postoperative outcome on facial soft tissues. Volume meshes were processed from segmented DICOM images, and the Bullet open-source mechanical engine was used together with a mass-spring model to reach a postoperative simulation accuracy <1 mm. Our toolset was completed by the development of a real-time navigation system using minimally invasive electromagnetic sensors. This navigation system featured a novel user-friendly interface based on augmented virtuality that improved surgical accuracy and operative time, especially for trainee surgeons, thereby demonstrating its educational benefits. The resulting software suite could enhance operative accuracy and surgeon education for improved patient care.
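The mass-spring soft-tissue model mentioned above can be illustrated with a toy explicit-integration step. This is a minimal sketch under invented parameters (stiffness, mass, time step), not the thesis implementation, which runs inside the Bullet engine on volume meshes:

```python
# Toy illustration, not the thesis code: one explicit-Euler step of a
# two-node mass-spring system, the model class used for the soft-tissue
# simulation. All parameter values are invented.
import math

k, mass, dt = 10.0, 1.0, 0.01      # stiffness, node mass, time step (assumed)
rest = 1.0                         # spring rest length
pos = [[0.0, 0.0], [1.5, 0.0]]     # spring stretched by 0.5
vel = [[0.0, 0.0], [0.0, 0.0]]

d = [pos[1][i] - pos[0][i] for i in range(2)]
length = math.hypot(d[0], d[1])
f = k * (length - rest)            # Hooke's law; positive pulls nodes together
unit = [c / length for c in d]
for i in range(2):
    vel[0][i] += dt * f * unit[i] / mass
    vel[1][i] -= dt * f * unit[i] / mass
    pos[0][i] += dt * vel[0][i]
    pos[1][i] += dt * vel[1][i]

print(round(pos[1][0] - pos[0][0], 4))  # distance shrinks toward rest length
```

A full simulator repeats this step over thousands of springs per frame, which is why a fast mechanical engine matters for interactive accuracy.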
Subject(s)
Computer Simulation, Imaging, Three-Dimensional, Orthognathic Surgical Procedures/methods, Patient-Specific Modeling, Software, Surgery, Computer-Assisted/methods, France, Hospitals, University, Humans, Maxillofacial Abnormalities/diagnostic imaging, Maxillofacial Abnormalities/surgery, Orthognathic Surgery/standards, Orthognathic Surgery/trends, Orthognathic Surgical Procedures/instrumentation, Sensitivity and Specificity
ABSTRACT
Imaging is one of the pillars for the ongoing evolution of surgical oncology toward a precision paradigm. In the present overview, some established or emerging intraoperative imaging technologies are described in light of the vision and experience of our group in image-guided surgery, focusing on digestive surgical oncology.
Subject(s)
Neoplasms/diagnostic imaging, Neoplasms/surgery, Surgery, Computer-Assisted/instrumentation, Surgery, Computer-Assisted/methods, Humans, Image Processing, Computer-Assisted/instrumentation, Image Processing, Computer-Assisted/methods, Monitoring, Intraoperative/instrumentation, Monitoring, Intraoperative/methods
ABSTRACT
PURPOSE: Automatic registration between abdominal ultrasound (US) and computed tomography (CT) images is needed to enhance interventional guidance of renal procedures, but it remains an open research challenge. We propose a novel method that does not require an initial registration estimate (a global method) and also handles the registration ambiguity caused by the organ's natural symmetry. Combined with a registration refinement algorithm, this method achieves robust and accurate kidney registration while avoiding manual initialization. METHODS: We propose solving global registration in a three-step approach: (1) automatic anatomical landmark localization, where two deep neural networks (DNNs) localize a set of landmarks in each modality; (2) registration hypothesis generation, where potential registrations are computed from the landmarks with a deterministic variant of RANSAC. Due to the kidney's strong bilateral symmetry, there are usually two compatible solutions. Finally, in step (3), the correct solution is determined automatically, using a DNN classifier that resolves the geometric ambiguity. The registration may then be iteratively improved with a registration refinement method. Results are presented with a state-of-the-art surface-based refinement, Bayesian coherent point drift (BCPD). RESULTS: This automatic global registration approach gives better results than various competitive state-of-the-art methods, which, additionally, require organ segmentation. The results obtained on 59 pairs of 3D US/CT kidney images show that the proposed method, combined with BCPD refinement, achieves a target registration error (TRE) at an internal kidney landmark (the renal pelvis) of 5.78 mm and an average nearest-neighbor surface distance of 2.42 mm. CONCLUSION: This work presents the first approach for automatic kidney registration in US and CT images that does not require an initial manual registration estimate to be known a priori. The results show that a fully automatic registration approach with performance comparable to manual methods is feasible.
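The landmark-based hypothesis generation in step (2) rests on computing a rigid transform from point correspondences. Below is a hedged sketch of the closed-form Kabsch/Procrustes solution on synthetic landmarks, not the paper's code:

```python
# Hedged sketch, not the paper's code: a rigid transform from 3D point
# correspondences via the Kabsch/Procrustes solution, the kind of
# closed-form step landmark-based hypotheses rely on. Landmarks synthetic.
import numpy as np

def rigid_from_landmarks(P, Q):
    """R, t minimizing ||R @ P + t - Q|| over corresponding 3D points."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((P - cp) @ (Q - cq).T)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # exclude reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(1)
P = rng.normal(size=(3, 6))                        # landmarks, US frame
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
Q = R_true @ P + t_true                            # same landmarks, CT frame
R, t = rigid_from_landmarks(P, Q)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # True
```

A RANSAC-style variant would repeat this on landmark subsets and score each hypothesis, which is why near-symmetric organs can yield two compatible solutions.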
ABSTRACT
INTRODUCTION: The changes occurring in the liver in cases of outflow deprivation have rarely been investigated, and no measurements of this phenomenon are available. This investigation explored outflow occlusion in a pig model using a hyperspectral camera. METHODS: Six pigs were enrolled. The right hepatic vein was clamped for 30 min. The oxygen saturation (StO2%), deoxygenated hemoglobin level (de-Hb), near-infrared perfusion (NIR), and total hemoglobin index (THI) were investigated at different time points in four perfused lobes using a hyperspectral camera measuring light absorbance between 500 nm and 995 nm. Differences among lobes at different time points were estimated by mixed-effect linear regression. RESULTS: StO2% decreased over time in the right lateral lobe (RLL, totally occluded) when compared to the left lateral (LLL, outflow preserved) and right medial (RML, partially occluded) lobes (p < 0.05). De-Hb significantly increased after clamping in the RLL when compared to the RML and LLL (p < 0.05). The RML was further analyzed considering the right portion (totally occluded) and the left portion of the lobe (with an autonomous draining vein). StO2% decreased and de-Hb increased more smoothly when compared to the totally occluded RLL (p < 0.05). CONCLUSIONS: The variations of StO2% and de-Hb could be considered good markers of venous liver congestion.
ABSTRACT
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
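The Dice score used to rank submissions can be sketched on toy 2D binary masks (the benchmark itself evaluates 3D CT volumes); this is an illustration, not the benchmark's evaluation code:

```python
# Minimal sketch of the Dice overlap score on binary masks. Toy 2D arrays
# here; LiTS evaluates 3D CT volumes with the same formula.
import numpy as np

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1                     # 4-pixel "lesion"
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1                   # 6-pixel prediction, 4 pixels overlap
print(dice(pred, gt))                # 2*4 / (6+4) = 0.8
```

Lesion-wise recall, the detection metric also reported above, instead counts a ground-truth lesion as found when some prediction overlaps it sufficiently.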
Subject(s)
Benchmarking, Liver Neoplasms, Humans, Retrospective Studies, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology, Liver/diagnostic imaging, Liver/pathology, Algorithms, Image Processing, Computer-Assisted/methods
ABSTRACT
Developing accurate and real-time algorithms for the non-invasive three-dimensional representation and reconstruction of internal patient structures is one of the main research fields in computer-assisted surgery and endoscopy. Mono and stereo endoscopic images of soft tissues are converted into a three-dimensional representation by the estimation of depth maps. However, automatic, detailed, accurate, and robust depth map estimation is a challenging problem that, in the stereo setting, is strictly dependent on a robust estimate of the disparity map. Many traditional algorithms are inefficient or inaccurate. In this work, novel self-supervised stacked and Siamese encoder/decoder neural networks are proposed to compute accurate disparity maps for 3D laparoscopy depth estimation. These networks run in real time on standard GPU-equipped desktop computers, and their outputs may be used for depth map estimation using a known camera calibration. We compare performance on three different public datasets and on a new challenging simulated dataset, and our solutions outperform state-of-the-art mono and stereo depth estimation methods. Extensive robustness and sensitivity analyses on more than 30,000 frames have been performed. This work leads to important improvements in mono and stereo real-time depth map estimation of soft tissues and organs, with a very low average mean absolute disparity reconstruction error with respect to ground truth.
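The final step mentioned above, turning a disparity map into depth with a known calibration, follows depth = f·B/d for focal length f (in pixels) and stereo baseline B. A sketch with invented rig parameters, not values from the paper:

```python
# Sketch of disparity-to-depth conversion with a known stereo calibration:
# depth = f * B / d. The focal length and baseline below are invented,
# roughly plausible values for a stereo laparoscope.
import numpy as np

f_px = 700.0            # focal length in pixels (assumed)
baseline_mm = 4.0       # stereo baseline in mm (assumed)
disparity = np.array([[14.0, 28.0],
                      [ 7.0, 56.0]])   # per-pixel disparities from a network
depth_mm = f_px * baseline_mm / disparity
print(depth_mm.min(), depth_mm.max())  # 50.0 400.0
```

Large disparities map to near tissue and small disparities to far tissue, which is why disparity accuracy directly bounds depth accuracy.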
Subject(s)
Laparoscopy, Surgery, Computer-Assisted, Algorithms, Humans, Imaging, Three-Dimensional/methods, Neural Networks, Computer, Surgery, Computer-Assisted/methods
ABSTRACT
Complete mesocolic excision (CME), which involves the adequate resection of the tumor-bearing colonic segment with "en bloc" removal of its mesocolon along embryological fascial planes, is associated with superior oncological outcomes. However, CME presents a higher complication rate compared to non-CME resections due to a higher risk of vascular injury. Hyperspectral imaging (HSI) is a contrast-free optical imaging technology, which facilitates the quantitative imaging of physiological tissue parameters and the visualization of anatomical structures. This study evaluates the accuracy of HSI combined with deep learning (DL) to differentiate the colon and its mesenteric tissue from retroperitoneal tissue. In an animal study including 20 pigs, intraoperative hyperspectral images of the sigmoid colon, sigmoid mesentery, and retroperitoneum were recorded. A convolutional neural network (CNN) was trained to distinguish the two tissue classes using HSI data, validated with a leave-one-out cross-validation process. The overall recognition sensitivity of the tissues to be preserved (retroperitoneum) and the tissues to be resected (colon and mesentery) was 79.0 ± 21.0% and 86.0 ± 16.0%, respectively. Automatic classification based on HSI and CNNs is a promising tool to automatically, non-invasively, and objectively differentiate the colon and its mesentery from retroperitoneal tissue.
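The leave-one-out cross-validation at animal level can be sketched with scikit-learn's LeaveOneGroupOut, ensuring no pig contributes samples to both training and test folds. The data below are synthetic, and a logistic regression stands in for the CNN:

```python
# Hedged sketch of the validation scheme: leave-one-animal-out
# cross-validation via scikit-learn's LeaveOneGroupOut, so no pig
# contributes samples to both training and test folds. The data are
# synthetic and a logistic regression stands in for the CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))            # stand-in for HSI spectra
y = (X[:, 0] > 0).astype(int)           # stand-in for tissue class
groups = np.repeat(np.arange(6), 10)    # 6 "pigs", 10 samples each

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    model = LogisticRegression().fit(X[train], y[train])
    scores.append(model.score(X[test], y[test]))
print(len(scores))  # one accuracy per held-out animal: 6
```

Grouping by animal matters because spectra from the same pig are correlated; a random per-sample split would inflate the apparent accuracy.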
ABSTRACT
Ischemia-reperfusion injury during major hepatic resections is associated with high rates of post-operative complications and liver failure. Real-time intra-operative detection of liver dysfunction could provide great insight into clinical outcomes. In the present study, we demonstrate the intra-operative application of a novel optical technology, hyperspectral imaging (HSI), to predict short-term post-operative outcomes after major hepatectomy. We considered fifteen consecutive patients undergoing major hepatic resection for malignant liver lesions from January 2020 to June 2021. HSI measures included the tissue water index (TWI), organ hemoglobin index (OHI), tissue oxygenation (StO2%), and near-infrared perfusion (NIR). Pre-operative, intra-operative, and post-operative serum and clinical outcomes were collected. NIR values were higher in unhealthy liver tissue (p = 0.003). StO2% negatively correlated with post-operative serum ALT values (r = -0.602), while ΔStO2% positively correlated with ALP (r = 0.594). TWI significantly correlated with post-operative reintervention, and OHI with post-operative sepsis and liver failure. In conclusion, in this first clinical trial, the HSI imaging system proved accurate and precise in translating from pre-clinical to human studies. HSI indices are related to serum and outcome metrics. Further experimental and clinical studies are necessary to determine the clinical value of this technology.
ABSTRACT
Nerves are critical structures that may be difficult to recognize during surgery. Inadvertent nerve injuries can have catastrophic consequences for the patient, leading to life-long pain and a reduced quality of life. Hyperspectral imaging (HSI) is a non-invasive technique combining photography with spectroscopy, allowing intraoperative quantification of biological tissue properties. We show, for the first time, that HSI combined with deep learning allows nerves and other tissue types to be automatically recognized in in vivo hyperspectral images. An animal model was used: eight anesthetized pigs underwent neck midline incisions, exposing several structures (nerve, artery, vein, muscle, fat, skin). State-of-the-art machine learning models were trained to recognize these tissue types in HSI data. The best model was a convolutional neural network (CNN), achieving an overall average sensitivity of 0.91 and a specificity of 1.0, validated with leave-one-patient-out cross-validation. For the nerve, the CNN achieved an average sensitivity of 0.76 and a specificity of 0.99. In conclusion, HSI combined with a CNN model is suitable for in vivo nerve recognition.
ABSTRACT
There are approximately 1.8 million diagnoses of colorectal cancer, 1 million diagnoses of stomach cancer, and 0.6 million diagnoses of esophageal cancer each year globally. An automatic computer-assisted diagnostic (CAD) tool to rapidly detect colorectal and esophagogastric cancer tissue in optical images would be hugely valuable to a surgeon during an intervention. Based on a colon dataset with 12 patients and an esophagogastric dataset of 10 patients, several state-of-the-art machine learning methods have been trained to detect cancer tissue using hyperspectral imaging (HSI), including Support Vector Machines (SVM) with radial basis function kernels, Multi-Layer Perceptrons (MLP) and 3D Convolutional Neural Networks (3DCNN). A leave-one-patient-out cross-validation (LOPOCV) with and without combining these sets was performed. The ROC-AUC score of the 3DCNN was slightly higher than the MLP and SVM with a difference of 0.04 AUC. The best performance was achieved with the 3DCNN for colon cancer and esophagogastric cancer detection with a high ROC-AUC of 0.93. The 3DCNN also achieved the best DICE scores of 0.49 and 0.41 on the colon and esophagogastric datasets, respectively. These scores were significantly improved using a patient-specific decision threshold to 0.58 and 0.51, respectively. This indicates that, in practical use, an HSI-based CAD system using an interactive decision threshold is likely to be valuable. Experiments were also performed to measure the benefits of combining the colorectal and esophagogastric datasets (22 patients), and this yielded significantly better results with the MLP and SVM models.
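A patient-specific decision threshold of the kind proposed above can be chosen by sweeping the probability cut-off on a patient's annotated frames and keeping the value that maximizes Dice. This is a hedged sketch with synthetic scores, not the study's code:

```python
# Illustrative sketch of a patient-specific decision threshold: sweep the
# probability cut-off on one patient's annotated data and keep the value
# maximizing Dice. Scores and masks are synthetic, not the study's data.
import numpy as np

def dice(pred, gt):
    s = pred.sum() + gt.sum()
    return 2.0 * (pred & gt).sum() / s if s else 1.0

rng = np.random.default_rng(0)
gt = rng.random(500) < 0.3                               # cancer pixels
prob = np.clip(gt * 0.6 + 0.5 * rng.random(500), 0, 1)   # model scores

thresholds = np.linspace(0.05, 0.95, 19)
scores = [dice(prob >= t, gt) for t in thresholds]
best = float(thresholds[int(np.argmax(scores))])
print(f"patient-specific threshold: {best:.2f}")
```

In an interactive CAD setting, the clinician could adjust this cut-off per case, trading sensitivity against false highlights.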
ABSTRACT
Intraoperative indocyanine green (ICG) fluorescence angiography has gained popularity and acceptance in many surgical fields for the real-time assessment of tissue perfusion. Although vasopressors have the potential to preclude an accurate assessment of tissue perfusion, there is a lack of literature with regard to their effect on ICG fluorescence angiography. An experimental porcine model was used to expose the small bowel for quantitative tissue perfusion assessment. Three increasing doses of norepinephrine infusion (0.1, 0.5, and 1.0 µg/kg/min) were administered intravenously over a 25-min interval. Time-to-peak fluorescence intensity (TTP) was the primary outcome. Secondary outcomes included absolute fluorescence intensity and local capillary lactate (LCL) levels. Five large pigs (mean weight: 40.3 ± 4.24 kg) were included. There was no significant difference in mean TTP (in seconds) at baseline (4.23) compared to the second (3.90), third (4.41), fourth (4.60), and fifth ICG assessments (5.99). As a result of ICG accumulation, the mean and maximum absolute fluorescence intensities were significantly different from the baseline assessment. There was no significant difference in LCL levels (in mmol/L) at baseline (0.74) compared to the second (0.82), third (0.64), fourth (0.60), and fifth assessments (0.62). Increasing doses of norepinephrine infusion had no significant influence on bowel perfusion as assessed with ICG fluorescence angiography.
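Time-to-peak can be read directly off a sampled fluorescence intensity curve. A minimal sketch with a synthetic inflow curve, not the study's measurement code:

```python
# Minimal sketch: time-to-peak (TTP) fluorescence, the study's primary
# outcome, read off a sampled intensity curve. The curve is synthetic.
import numpy as np

t = np.linspace(0.0, 20.0, 201)              # seconds, 10 Hz sampling
intensity = np.exp(-((t - 4.2) ** 2) / 3.0)  # toy ICG inflow/washout curve
ttp = float(t[np.argmax(intensity)])
print(round(ttp, 2))
```

Absolute intensity, the secondary outcome, would instead be read from the curve's maximum value, which is why dye accumulation shifts it across repeated assessments while TTP stays stable.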
Subject(s)
Fluorescein Angiography/methods, Indocyanine Green, Norepinephrine/pharmacology, Vasoconstrictor Agents/pharmacology, Animals, Disease Models, Animal, Female, Infusions, Intravenous, Injections, Intravenous, Intestines/blood supply, Intraoperative Period, Laparotomy, Norepinephrine/administration & dosage, Swine, Vasoconstrictor Agents/administration & dosage
ABSTRACT
Hyperspectral imaging (HSI) is a non-invasive imaging modality already applied to evaluate hepatic oxygenation and to discriminate different models of hepatic ischemia. Nevertheless, the ability of HSI to detect and predict reperfusion damage intraoperatively has not yet been assessed. Hypoxia caused by hepatic artery occlusion (HAO) in the liver brings about severe vascular complications known as ischemia-reperfusion injury (IRI). Here, we show the evaluation of liver viability in an HAO model with an artificial intelligence-based analysis of HSI. We have combined the potential of HSI to extract quantitative optical tissue properties with a deep learning-based model using convolutional neural networks. The artificial intelligence (AI) score of liver viability showed a significant correlation with capillary lactate from the liver surface (r = -0.78, p = 0.0320) and with Suzuki's score (r = -0.96, p = 0.0012). CD31 immunostaining confirmed the microvascular damage in accordance with the AI score. Our results ultimately show the potential of an HSI-AI-based analysis to predict liver viability, prompting the development of an intraoperative tool to explore its application in a clinical setting.
ABSTRACT
PURPOSE: A better understanding of photometry in laparoscopic images can increase the reliability of computer-assisted surgery applications. Photometry requires modelling illumination, tissue reflectance and camera response. There exists a large variety of light models, but no systematic and reproducible evaluation. We present a review of light models in laparoscopic surgery, a unified calibration approach, an evaluation methodology, and a practical use of photometry. METHOD: We use images of a calibration checkerboard to calibrate the light models. We then use these models in a proposed dense stereo algorithm exploiting the shading and simultaneously extracting the tissue albedo, which we call dense shading stereo. The approach works with a broad range of light models, giving us a way to test their respective merits. RESULTS: We show that overly complex light models are usually not needed and that the light source position must be calibrated. We also show that dense shading stereo outperforms existing methods, in terms of both geometric and photometric errors, and achieves sub-millimeter accuracy. CONCLUSION: This work demonstrates the importance of careful light modelling and calibration for computer-assisted surgical applications. It gives guidelines on choosing the best performing light model.
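As a concrete example of one simple light model of the kind such a review calibrates, consider a Lambertian surface lit by a point source co-located with the camera, with inverse-square distance falloff and a gamma-style camera response. All parameter values below are invented for illustration; this is not the paper's model set:

```python
# Hedged sketch of a simple photometric model: Lambertian shading from a
# point light co-located with the camera, inverse-square falloff, and a
# gamma-style camera response. All parameter values are invented.
def predicted_intensity(albedo, normal, light_dir, dist_mm, gamma=2.2):
    # Lambertian shading term, clamped to zero past grazing angles
    cos_theta = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    irradiance = albedo * cos_theta / dist_mm ** 2   # inverse-square falloff
    return irradiance ** (1.0 / gamma)               # camera response

# A frontal patch 50 mm from the scope, lit head-on, vs. one facing away:
print(predicted_intensity(0.8, (0, 0, 1), (0, 0, 1), 50.0) >
      predicted_intensity(0.8, (0, 0, 1), (0, 0, -1), 50.0))  # True
```

Shading-based stereo inverts a model of this kind: given the observed intensity and a calibrated light, it constrains the surface normal and depth while the albedo is estimated jointly.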
Subject(s)
Laparoscopy/methods, Photometry/methods, Surgery, Computer-Assisted/methods, Algorithms, Calibration, Humans, Photogrammetry, Reproducibility of Results
ABSTRACT
PURPOSE: Inexpensive benchtop training systems offer significant advantages to meet the increasing demand for training surgeons and gastroenterologists in flexible endoscopy. Established scoring systems exist, based on task duration and mistake evaluation. However, they require trained human raters, which limits broad and low-cost adoption. There is an unmet and important need to automate rating with machine learning. METHOD: We present a general and robust approach for recognizing training tasks from endoscopic training video, which consequently automates task duration computation. Our main technical novelty is to show that the performance of state-of-the-art CNN-based approaches can be improved significantly with a novel semi-supervised learning approach, using both labelled and unlabelled videos. In the latter case, we assume only the task execution order is known a priori. RESULTS: Two video datasets are presented: the first has 19 videos recorded in examination conditions, where the participants complete their tasks in a predetermined order. The second has 17 h of videos recorded in self-assessment conditions, where participants complete one or more tasks in any order. For the first dataset, we obtain a mean task duration estimation error of 3.65 s, with a mean task duration of 159 s ([Formula: see text] relative error). For the second dataset, we obtain a mean task duration estimation error of 3.67 s. Our semi-supervised learning approach reduces the mean relative error from 5.63% to 3.67%. CONCLUSION: This work is the first significant step towards automated rating of flexible endoscopy students using a low-cost benchtop trainer. Thanks to our semi-supervised learning approach, we can scale easily to much larger unlabelled training datasets. The approach can also be used for other phase recognition tasks.
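Once per-frame task labels are available, the task duration computation mentioned above reduces to counting frames at the video frame rate. An illustrative sketch with synthetic labels standing in for the recognizer's output:

```python
# Illustrative sketch, not the paper's pipeline: given per-frame task labels
# (synthetic stand-ins for the recognizer's output here), each task's
# duration is its frame count divided by the video frame rate.
from collections import Counter

fps = 25  # assumed frame rate
frame_labels = ["idle"] * 50 + ["task_A"] * 250 + ["task_B"] * 100

counts = Counter(frame_labels)
durations_s = {task: n / fps for task, n in counts.items()}
print(durations_s["task_A"])  # 250 frames / 25 fps = 10.0
```

The rating error then follows by comparing these per-task durations against a human rater's stopwatch times.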
Subject(s)
Endoscopes, Endoscopy/education, Gastroenterology/education, Machine Learning, Pattern Recognition, Automated, Supervised Machine Learning, Algorithms, Diagnosis, Computer-Assisted, Equipment Design, Gastroenterology/instrumentation, Humans, Internship and Residency, Reproducibility of Results, Task Performance and Analysis, Video Recording
ABSTRACT
Liver ischaemia reperfusion injury (IRI) is a dreaded pathophysiological complication which may lead to impaired liver function. The level of oxygen hypoperfusion affects the level of cellular damage during the reperfusion phase. Consequently, intraoperative localisation and quantification of oxygen impairment would help in the early detection of liver ischaemia. To date, there is no real-time, non-invasive, intraoperative tool which can compute an organ oxygenation map and quantify and discriminate different types of vascular occlusions intraoperatively. Hyperspectral imaging (HSI) is a non-invasive optical methodology which can quantify tissue oxygenation and which has recently been applied to the medical field. A hyperspectral camera detects the relative reflectance of a tissue in the range of 500 to 1000 nm, allowing the quantification of organic compounds such as oxygenated and deoxygenated haemoglobin at different depths. Here, we show the first comparative study of liver oxygenation by means of HSI quantification in a model of total vascular inflow occlusion (VIO) vs. hepatic artery occlusion (HAO), correlating optical properties with capillary lactate and histopathological evaluation. We found that liver HSI could discriminate between VIO and HAO. These results were confirmed via cross-validation of HSI, which detected and quantified intestinal congestion in VIO. A significant correlation between the near-infrared spectra and capillary lactate was found (r = -0.8645, p = 0.0003 for VIO; r = -0.7113, p = 0.0120 for HAO). Finally, a statistically significant negative correlation was found between the histology score and the near-infrared parameter index (NIR) (r = -0.88, p = 0.004). We infer that HSI, by predicting capillary lactate and the histopathological score, would be a suitable non-invasive tool for intraoperative liver perfusion assessment.