Results 1 - 20 of 78
1.
Int J Comput Assist Radiol Surg; 19(4): 757-766, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38386176

ABSTRACT

PURPOSE: Intracardiac transcatheter interventions reduce trauma and hospitalization stays compared to standard surgery. In the treatment of mitral regurgitation, the most widely adopted transcatheter approach consists of deploying a clip on the mitral valve leaflets by means of a catheter that is run through the veins from a peripheral access to the left atrium. However, precise manipulation of the catheter from outside the body while coping with the path constraints imposed by the vessels remains challenging. METHODS: We propose a path tracking control framework that provides adequate motion commands to a robotic steerable catheter for autonomous navigation through vascular lumens. The proposed work implements a catheter kinematic model featuring nonholonomic constraints. Relying on real-time measurements from an electromagnetic sensor and a fiber Bragg grating sensor, a two-level feedback controller was designed to control the catheter. RESULTS: The proposed method was tested in a patient-specific vessel phantom. The median position error between the vessel centerline and the catheter tip trajectory was below 2 mm, with a maximum error below 3 mm. Statistical testing confirmed that the performance of the proposed method exhibited no significant difference between free space and the contact region. CONCLUSION: The preliminary in vitro studies presented in this paper showed promising accuracy in navigating the catheter within the vessel. The proposed approach enables autonomous control of a steerable catheter for transcatheter cardiology interventions without requiring manual parameter calibration or the acquisition of a training dataset.
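
As a concrete illustration of centerline-based path tracking, the sketch below computes a single proportional correction step toward the nearest point of a sampled vessel centerline. The gain, data, and function names are illustrative assumptions, not the two-level controller described above.

```python
# Illustrative sketch (not the authors' controller): one proportional
# path-tracking step that steers a catheter tip toward the closest point
# of a sampled vessel centerline.
import numpy as np

def tracking_command(tip_pos, centerline, k_p=0.5):
    """Return a corrective displacement toward the centerline.

    tip_pos    : (3,) current tip position, e.g. from the EM sensor
    centerline : (N, 3) sampled centerline points of the vessel
    k_p        : proportional gain (illustrative value)
    """
    dists = np.linalg.norm(centerline - tip_pos, axis=1)
    nearest = centerline[np.argmin(dists)]          # closest centerline point
    error = nearest - tip_pos                       # cross-track error vector
    return k_p * error, float(np.linalg.norm(error))

# Example: straight centerline along x, tip offset 1.5 mm laterally.
centerline = np.stack([np.linspace(0.0, 50.0, 200),
                       np.zeros(200), np.zeros(200)], axis=1)
command, err = tracking_command(np.array([10.0, 1.5, 0.0]), centerline)
print(err)   # about 1.5 mm cross-track error before correction
```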


Subjects
Cardiology, Mitral Valve Insufficiency, Robotics, Humans, Catheters, Mitral Valve
2.
Int J Comput Assist Radiol Surg; 19(3): 481-492, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38066354

ABSTRACT

PURPOSE: In twin-to-twin transfusion syndrome (TTTS), abnormal vascular anastomoses in the monochorionic placenta can produce uneven blood flow between the two fetuses. In current practice, TTTS is treated surgically by closing the abnormal anastomoses using laser ablation. This surgery is minimally invasive and relies on fetoscopy. The limited field of view makes anastomosis identification a challenging task for the surgeon. METHODS: To tackle this challenge, we propose a learning-based framework for field-of-view expansion via in vivo fetoscopic frame registration. The novelty of this framework lies in a learning-based keypoint proposal network and an encoding strategy that filters (i) irrelevant keypoints, based on fetoscopic semantic image segmentation, and (ii) inconsistent homographies. RESULTS: We validate our framework on a dataset of six intraoperative sequences from six TTTS surgeries on six different women, against the most recent state-of-the-art algorithm, which relies on the segmentation of placental vessels. CONCLUSION: The proposed framework achieves higher performance than the state of the art, paving the way for robust mosaicking to provide surgeons with context awareness during TTTS surgery.
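
To make the registration step concrete, here is a minimal sketch of keypoint-based homography estimation between two frames with a crude consistency check. Classical ORB features are used purely as a stand-in for the learned keypoint proposal and segmentation-based filtering described above, and the scale threshold is an illustrative assumption.

```python
# Minimal sketch of pairwise frame registration with a simple homography
# consistency check (ORB as a classical stand-in for learned keypoints).
import cv2
import numpy as np

def register_pair(frame_a, frame_b, max_scale_change=1.5):
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    if len(matches) < 4:
        return None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return None
    # Reject "inconsistent" homographies whose scale change is implausible
    # between consecutive fetoscopic frames (threshold is illustrative).
    scale = np.sqrt(abs(np.linalg.det(H[:2, :2])))
    return H if 1 / max_scale_change < scale < max_scale_change else None
```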


Subjects
Fetofetal Transfusion, Laser Therapy, Pregnancy, Female, Humans, Fetoscopy/methods, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Placenta/surgery, Placenta/blood supply, Laser Therapy/methods, Algorithms
3.
Comput Methods Programs Biomed; 244: 107937, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38006707

ABSTRACT

BACKGROUND AND OBJECTIVE: The safety of robotic surgery can be enhanced through augmented vision or artificial constraints on robot motion, and intra-operative depth estimation is the cornerstone of these applications because it provides precise position information about the surgical scene in 3D space. High-quality depth estimation of endoscopic scenes remains an open issue, and the development of deep learning offers new possibilities to address it. METHODS: In this paper, a deep learning-based approach is proposed to recover the 3D information of intra-operative scenes. To this aim, a fully 3D encoder-decoder network integrating spatio-temporal layers is designed; it adopts hierarchical prediction and progressive learning to enhance prediction accuracy and shorten training time. RESULTS: Our network achieves a depth estimation accuracy of 2.55±1.51 mm (MAE) and 5.23±1.40 mm (RMSE) on 8 surgical videos with a resolution of 1280×1024, outperforming six other state-of-the-art methods trained on the same data. CONCLUSIONS: Our network achieves promising depth estimation performance in intra-operative scenes using stereo images, allowing integration into robot-assisted surgery to enhance safety.
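
A minimal sketch of how the reported MAE and RMSE depth metrics can be computed per frame is given below; the arrays are synthetic placeholders for predicted and ground-truth depth maps.

```python
# Sketch of the evaluation metrics: per-frame MAE and RMSE (in mm) between
# predicted and ground-truth depth maps, averaged over valid pixels.
import numpy as np

def depth_errors(pred, gt):
    valid = gt > 0                        # ignore pixels without ground truth
    diff = pred[valid] - gt[valid]
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    return mae, rmse

rng = np.random.default_rng(0)
gt = rng.uniform(20, 80, (1024, 1280))            # synthetic depth map, mm
pred = gt + rng.normal(0, 2.5, gt.shape)          # synthetic prediction
print(depth_errors(pred, gt))
```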


Subjects
Robotic Surgical Procedures, Motion (Physics)
4.
Med Image Anal; 92: 103066, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141453

ABSTRACT

Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange among the twins. The procedure is particularly challenging, from the surgeon's side, due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, together with 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. For the segmentation task, overall, the baseline was the top performer (aggregated mIoU of 0.6763) and was the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline performed better than team SANO overall, with a mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team outperformed the others on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
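
For reference, a compact sketch of the per-class mean IoU metric used to score the segmentation task (vessel, tool, fetus, background) on integer-labelled masks:

```python
# Sketch of per-class mean IoU on integer-labelled segmentation masks;
# classes with an empty union are skipped.
import numpy as np

def mean_iou(pred, gt, num_classes=4):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```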


Subjects
Fetofetal Transfusion, Placenta, Female, Humans, Pregnancy, Algorithms, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Fetofetal Transfusion/pathology, Fetoscopy/methods, Fetus, Placenta/diagnostic imaging
5.
IEEE Int Conf Rehabil Robot; 2023: 1-6, 2023 09.
Article in English | MEDLINE | ID: mdl-37941270

ABSTRACT

Robotic rehabilitation has demonstrated slight positive effects compared to traditional care, but the current state of the art still lacks targeted high-level control strategies for minimizing pathological motor behaviors. In this study, we analyzed upper-limb motion capture data from healthy subjects performing a pick-and-place task to identify task-specific variability in postural patterns. The results revealed consistent behaviors among subjects, presenting an opportunity to develop a novel extraction method for variable-volume references based solely on observations from healthy individuals. These human-centered references were tested on a simulated 4-degrees-of-freedom upper-limb exoskeleton, showing compliant adaptation to the path that accounts for the variance in healthy subjects' motor behavior.


Subjects
Exoskeleton Device, Robotic Surgical Procedures, Robotics, Humans, Upper Extremity, Biomechanical Phenomena
6.
Int J Comput Assist Radiol Surg; 18(12): 2349-2356, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37587389

ABSTRACT

PURPOSE: Fetoscopic laser photocoagulation of placental anastomoses is the most effective treatment for twin-to-twin transfusion syndrome (TTTS). A robust mosaic of the placenta and its vascular network could support the surgeon's exploration of the placenta by enlarging the fetoscope field of view. In this work, we propose a learning-based framework for field-of-view expansion from intra-operative video frames. METHODS: While the current state of the art for fetoscopic mosaicking builds upon the registration of anatomical landmarks, which may not always be visible, our framework relies on learning-based features and keypoints, as well as robust transformer-based image-feature matching, without requiring any anatomical priors. We further address the problem of occlusion recovery and frame relocalization, relying on the computed features and their descriptors. RESULTS: Experiments were conducted on 10 in-vivo TTTS videos from two different fetal surgery centers. The proposed framework was compared with several state-of-the-art approaches, achieving higher [Formula: see text] on 7 out of 10 videos and a success rate of [Formula: see text] in occlusion recovery. CONCLUSION: This work introduces a learning-based framework for placental mosaicking with occlusion recovery from intra-operative videos using a keypoint-based strategy and learned features. The proposed framework can compute the placental panorama and recover it even in cases of camera tracking loss where other methods fail. The results suggest that the proposed framework has great potential to pave the way toward a surgical navigation system for TTTS by providing robust field-of-view expansion.


Subjects
Fetofetal Transfusion, Fetoscopy, Female, Humans, Pregnancy, Fetofetal Transfusion/surgery, Fetoscopy/methods, Light Coagulation, Placenta/surgery
7.
Comput Biol Med; 163: 107121, 2023 09.
Article in English | MEDLINE | ID: mdl-37311383

ABSTRACT

3D reconstruction of intra-operative scenes provides precise position information, which is the foundation of various safety-related applications in robot-assisted surgery, such as augmented reality. Herein, a framework that can be integrated into a known surgical system is proposed to enhance the safety of robotic surgery. In this paper, we present a scene reconstruction framework that restores the 3D information of the surgical site in real time. In particular, a lightweight encoder-decoder network is designed to perform disparity estimation, which is the key component of the scene reconstruction framework. The stereo endoscope of the da Vinci Research Kit (dVRK) is adopted to explore the feasibility of the proposed approach, and its weak dependence on specific hardware opens the possibility of migration to other Robot Operating System (ROS)-based robot platforms. The framework is evaluated on three different scenarios: a public dataset (3018 pairs of endoscopic images), scenes from the dVRK endoscope in our lab, and a self-made clinical dataset captured in an oncology hospital. Experimental results show that the proposed framework can reconstruct 3D surgical scenes in real time (25 FPS) with high accuracy (MAE of 2.69 ± 1.48 mm, RMSE of 5.47 ± 1.34 mm and SRE of 0.41 ± 0.23). This demonstrates that our framework can reconstruct intra-operative scenes with high reliability in both accuracy and speed, and the validation on clinical data also shows its potential in surgery. This work advances the state of the art in 3D intra-operative scene reconstruction on medical robot platforms. The clinical dataset has been released to promote the development of scene reconstruction in the medical imaging community.
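
The geometric core of stereo scene reconstruction is the disparity-to-depth conversion sketched below (depth = focal length × baseline / disparity for a rectified pair); the focal length and baseline values are placeholders, not the dVRK calibration.

```python
# Sketch of the disparity-to-depth step behind stereo scene reconstruction.
# Calibration values below are illustrative placeholders.
import numpy as np

def disparity_to_depth(disparity, focal_px=1000.0, baseline_mm=4.0):
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0                              # ignore invalid disparities
    depth[valid] = focal_px * baseline_mm / disparity[valid]   # depth in mm
    return depth
```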


Subjects
Robotics, Computer-Assisted Surgery, Computer-Assisted Surgery/methods, Reproducibility of Results, Three-Dimensional Imaging/methods, Minimally Invasive Surgical Procedures
8.
IEEE Trans Biomed Eng; 70(10): 2822-2833, 2023 10.
Article in English | MEDLINE | ID: mdl-37037233

ABSTRACT

OBJECTIVE: Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides diverse visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e., there is no exact equivalent for every image in both NBI and WLI domains. METHOD: We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN to perform unpaired image-to-image translation; and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN, we performed a detailed quantitative and qualitative analysis with the help of specialists. CONCLUSION: The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89, respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94, respectively. The quality of the generated images is reliable enough to deceive specialists. SIGNIFICANCE: This study shows the potential of semi-supervised GAN-based bladder tissue classification when annotations are limited in multi-domain data.


Subjects
Urinary Bladder Neoplasms, Urinary Bladder, Humans, Urinary Bladder/diagnostic imaging, Endoscopy, Light, Urinary Bladder Neoplasms/diagnostic imaging, Urinary Bladder Neoplasms/pathology, Narrow Band Imaging/methods
9.
Int J Comput Assist Radiol Surg; 18(10): 1849-1856, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37083973

ABSTRACT

PURPOSE: Primary central nervous system lymphoma (PCNSL) is a rare, aggressive form of extranodal non-Hodgkin lymphoma. Predicting overall survival (OS) in advance is of utmost importance, as it has the potential to aid clinical decision-making. Although radiomics-based machine learning (ML) has demonstrated promising performance in PCNSL, it demands large amounts of manual feature extraction effort from magnetic resonance images beforehand. Deep learning (DL) overcomes this limitation. METHODS: In this paper, we tailored a 3D ResNet to predict the OS of patients with PCNSL. To overcome the limitation of data sparsity, we introduced data augmentation and transfer learning, and we evaluated the results using stratified k-fold cross-validation. To explain the results of our model, gradient-weighted class activation mapping was applied. RESULTS: We obtained the best performance (reported with standard error) on post-contrast T1-weighted (T1Gd) images: area under the curve [Formula: see text], accuracy [Formula: see text], precision [Formula: see text], recall [Formula: see text] and F1-score [Formula: see text], compared with ML-based models on clinical data and radiomics data, respectively, further confirming the stability of our model. We also observed that PCNSL is a whole-brain disease and that, in cases where the OS is less than 1 year, it is more difficult to distinguish the tumor boundary from the normal part of the brain, which is consistent with the clinical outcome. CONCLUSIONS: All these findings indicate that T1Gd can improve prognosis prediction for patients with PCNSL. To the best of our knowledge, this is the first work to use DL to explain model patterns in OS classification of patients with PCNSL. Future work will involve collecting more data from patients with PCNSL, or additional retrospective studies on different patient populations with rare diseases, to further promote the clinical role of our model.
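
A minimal sketch of stratified k-fold evaluation of a binary OS classifier follows; the logistic regression and random features are placeholders standing in for the 3D ResNet and MRI volumes, and k = 5 is an assumed fold count.

```python
# Sketch of stratified k-fold cross-validation for a binary outcome classifier.
# Features and model are placeholders, not the paper's 3D ResNet pipeline.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 16))     # placeholder features (not MRI volumes)
y = rng.integers(0, 2, 60)            # OS class, e.g. shorter vs longer than 1 year

aucs = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
print(f"AUC: {np.mean(aucs):.3f} ± {np.std(aucs) / np.sqrt(len(aucs)):.3f}")  # mean ± SE
```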


Subjects
Brain Neoplasms, Central Nervous System Neoplasms, Deep Learning, Lymphoma, Humans, Retrospective Studies, Lymphoma/diagnostic imaging, Central Nervous System, Central Nervous System Neoplasms/diagnostic imaging, Central Nervous System Neoplasms/therapy
10.
Bioengineering (Basel); 10(3), 2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36978676

ABSTRACT

Primary Central Nervous System Lymphoma (PCNSL) is an aggressive neoplasm with a poor prognosis. Although therapeutic progress has significantly improved Overall Survival (OS), a number of patients do not respond to HD-MTX-based chemotherapy (15-25%) or experience relapse (25-50%) after an initial response. The reasons underlying this poor response to therapy are unknown. Thus, there is an urgent need to develop improved predictive models for PCNSL. In this study, we investigated whether radiomics features can improve outcome prediction in patients with PCNSL. A total of 80 patients diagnosed with PCNSL were enrolled. A patient sub-group with complete Magnetic Resonance Imaging (MRI) series was selected for the stratification analysis. Following radiomics feature extraction and selection, different Machine Learning (ML) models were tested for OS and Progression-Free Survival (PFS) prediction. To assess the stability of the selected features, images from 23 patients scanned at three different time points were used to compute the Intraclass Correlation Coefficient (ICC) and to evaluate the reproducibility of each feature for both original and normalized images. Features extracted from Z-score normalized images were significantly more stable than those extracted from non-normalized images, with an improvement of about 38% on average (p-value < 10⁻¹²). The area under the ROC curve (AUC) showed that radiomics-based prediction outperformed prediction based on current clinical prognostic factors, with an improvement of 23% for OS and 50% for PFS, respectively. These results indicate that radiomics features extracted from normalized MR images can improve prognosis stratification of PCNSL patients and pave the way for further study of their potential role in driving treatment choice.
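
Below is a minimal sketch of the Z-score intensity normalization applied to MR images before feature extraction; restricting the statistics to a brain or lesion mask is an assumption on our part.

```python
# Sketch of Z-score normalization of an MR volume; the optional mask limits
# the mean/std computation to a region of interest (an assumption here).
import numpy as np

def zscore_normalize(volume, mask=None):
    region = volume[mask > 0] if mask is not None else volume
    mu, sigma = region.mean(), region.std()
    return (volume - mu) / (sigma + 1e-8)
```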

11.
Med Image Anal; 85: 102751, 2023 04.
Article in English | MEDLINE | ID: mdl-36716700

ABSTRACT

Automatic surgical instrument segmentation of endoscopic images is a crucial building block of many computer-assistance applications for minimally invasive surgery. So far, state-of-the-art approaches completely rely on the availability of a ground-truth supervision signal, obtained via manual annotation and thus expensive to collect at large scale. In this paper, we present FUN-SIS, a Fully-UNsupervised approach for binary Surgical Instrument Segmentation. FUN-SIS trains a per-frame segmentation model on completely unlabelled endoscopic videos, relying solely on implicit motion information and instrument shape-priors. We define shape-priors as realistic segmentation masks of the instruments, not necessarily coming from the same dataset/domain as the videos. The shape-priors can be collected in various and convenient ways, such as recycling existing annotations from other datasets. We leverage them as part of a novel generative-adversarial approach, which allows unsupervised instrument segmentation of optical-flow images during training. We then use the obtained instrument masks as pseudo-labels to train a per-frame segmentation model; to this aim, we develop a learning-from-noisy-labels architecture designed to extract a clean supervision signal from these pseudo-labels, leveraging their peculiar noise properties. We validate the proposed contributions on three surgical datasets, including the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge dataset. The obtained fully-unsupervised results for surgical instrument segmentation are almost on par with those of fully-supervised state-of-the-art approaches. This suggests the tremendous potential of the proposed method to leverage the great amount of unlabelled data produced in the context of minimally invasive surgery.
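
As a concrete illustration of the motion cue such an approach can start from, here is a minimal sketch that thresholds dense optical flow between two consecutive grayscale frames into a rough moving-instrument mask. It shows only the motion cue, not the FUN-SIS shape-prior or noisy-label components, and the threshold is an assumption.

```python
# Sketch of a motion cue: dense Farneback optical flow thresholded by
# magnitude into a rough mask of moving pixels (e.g. the instrument).
import cv2
import numpy as np

def motion_mask(prev_gray, curr_gray, mag_thresh=2.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)            # per-pixel flow magnitude
    return (mag > mag_thresh).astype(np.uint8)
```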


Subjects
Computer-Assisted Image Processing, Robotics, Humans, Computer-Assisted Image Processing/methods, Endoscopy, Surgical Instruments
12.
Med Eng Phys; 110: 103920, 2022 12.
Article in English | MEDLINE | ID: mdl-36564143

ABSTRACT

A major challenge during autonomous navigation in endovascular interventions is the complexity of operating with an instrument in a deformable but constrained workspace. Simulating these deformations can provide a cost-effective training platform for path planning. The aim of this study is to develop a realistic, auto-adaptive, and visually plausible simulator to predict the vessels' global deformation induced by the robotic catheter's contact and by cyclic heartbeat motion. Based on a Position-Based Dynamics (PBD) approach for vessel modeling, a Particle Swarm Optimization (PSO) algorithm is employed for auto-adaptive calibration of the PBD deformation parameters and of the vessel movement due to the heartbeat. In-vitro experiments were conducted and compared with in-silico results. End-user evaluation results were reported through quantitative performance metrics and a 5-point Likert scale questionnaire. Compared with the literature, this simulator has an error of 0.23±0.13% for deformation and 0.30±0.85 mm for the aortic root displacement. In-vitro experiments show an error of 1.35±1.38 mm for deformation prediction. The end-user evaluation shows that novices are more accustomed to using joystick controllers, while cardiologists are more satisfied with the visual authenticity. The real-time and accurate performance of the simulator makes this framework suitable for creating a dynamic environment for autonomous navigation of robotic catheters.
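
To illustrate the calibration idea, here is a minimal particle swarm optimization loop that minimizes a scalar objective; the toy objective stands in for the simulated-versus-observed deformation error, and all hyperparameters are illustrative assumptions.

```python
# Minimal PSO sketch of the kind used to auto-calibrate simulation parameters:
# particles explore parameter space to minimize an error function.
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

# Toy objective standing in for the simulated-vs-observed deformation error.
print(pso(lambda p: np.sum((p - 0.3) ** 2), dim=3))
```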


Subjects
Robotic Surgical Procedures, Robotics, Catheterization, Catheters, Computer Simulation
13.
Int J Comput Assist Radiol Surg; 17(12): 2315-2323, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35802223

ABSTRACT

PURPOSE: Advanced developments in the medical field have gradually increased the demand for surgical skill evaluation. However, this assessment still depends on the direct observation of experienced surgeons, which is time-consuming and variable. The introduction of robot-assisted surgery provides a new possibility for this evaluation paradigm. This paper aims at evaluating surgeon performance automatically, with novel evaluation metrics based on different surgical data. METHODS: Urologists ([Formula: see text]) from a hospital were asked to perform a simplified neobladder reconstruction on an ex vivo setup twice, with different camera modalities ([Formula: see text]) assigned randomly. They were divided into novices and experts ([Formula: see text], respectively) according to their experience in robot-assisted surgery. Different performance metrics ([Formula: see text]) are proposed for surgical skill evaluation, considering both the instruments and the endoscope. Nonparametric tests are then adopted to check whether there are significant differences in the surgeons' performance. RESULTS: When grouping by the four stages of neobladder reconstruction, statistically significant differences can be appreciated in phase 1 ([Formula: see text]) and phase 2 ([Formula: see text]) with normalized time-related metrics and camera movement-related metrics, respectively. On the other hand, grouping by experience shows that both metrics are able to highlight statistically significant differences between novice and expert performance in the control protocol. It also shows that the camera-related performance of experts differs significantly ([Formula: see text]) between manual and automatic handling of the endoscope. CONCLUSION: Surgical skill evaluation using the approach in this paper can effectively measure the surgical performance of surgeons with different levels of experience. Preliminary results demonstrate that different surgical data can be fully exploited to improve the reliability of surgical evaluation. The approach also demonstrates its versatility and potential for the quantitative assessment of various surgical operations.
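
As an example of the kind of nonparametric comparison used between groups, the sketch below runs a two-sided Mann-Whitney U test on a time-related metric; the numbers are synthetic placeholders, not study data.

```python
# Sketch of a nonparametric group comparison (Mann-Whitney U test) between
# novice and expert values of a performance metric; data are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

novices = np.array([412.0, 388.5, 455.2, 430.1, 401.3])   # e.g. phase times (s)
experts = np.array([301.2, 295.4, 330.8, 310.6, 289.9])
stat, p = mannwhitneyu(novices, experts, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```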


Subjects
Robotic Surgical Procedures, Robotics, Surgeons, Humans, Reproducibility of Results, Clinical Competence, Robotic Surgical Procedures/methods
14.
Int J Comput Assist Radiol Surg; 17(8): 1419-1427, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35503394

ABSTRACT

PURPOSE: Automation of sub-tasks during robotic surgery is challenging due to the high variability of surgical scenes within and across patients. For example, the pick-and-place task can be executed several times during the same operation and for distinct purposes. Hence, designing automation solutions that can generalise a skill over different contexts becomes hard. All experiments are conducted using the Pneumatic Attachable Flexible (PAF) rail, a novel surgical tool designed for robot-assisted intraoperative organ manipulation. METHODS: We build upon a previous open-source surgical Reinforcement Learning (RL) training environment to develop a new RL framework for manipulation skills, rlman. In rlman, contextual RL agents are trained to solve different aspects of the pick-and-place task using the PAF rail system. rlman supports both low- and high-dimensional state information for solving surgical sub-tasks in a simulation environment. RESULTS: We use rlman to train state-of-the-art RL agents to solve four different surgical sub-tasks involving manipulation skills with the PAF rail, and we compare the results with state-of-the-art benchmarks from the literature. We evaluate the agents' ability to generalise over different aspects of the targeted surgical environment. CONCLUSION: We have shown that the rlman framework can support the training of different RL algorithms for solving surgical sub-tasks, analysing the importance of context information for generalisation. We aim to deploy the trained policy on the real da Vinci using the dVRK and to show that the generalisation of the trained policy transfers to the real world.
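
For orientation, the sketch below shows the generic Gym-style interaction loop that such RL training environments expose; the CartPole environment and random policy are placeholders only and do not reflect the rlman API or the PAF-rail sub-tasks (requires the gymnasium package).

```python
# Generic Gym-style environment interaction loop; environment and policy are
# stand-ins, not the rlman framework or its surgical sub-tasks.
import gymnasium as gym

env = gym.make("CartPole-v1")              # placeholder task
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()     # random policy placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()
print(total_reward)
```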


Subjects
Learning, Robotic Surgical Procedures, Algorithms, Computer Simulation, Humans, Robotic Surgical Procedures/education
15.
Int J Comput Assist Radiol Surg; 17(6): 1069-1077, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35296950

ABSTRACT

PURPOSE: Complications related to vascular damage, such as intra-operative bleeding, must be avoided during neurosurgical procedures such as petroclival meningioma surgery. To address this and improve patient safety, we designed a real-time blood vessel avoidance strategy that enables operation on deformable tissue during petroclival meningioma surgery using Micron, a handheld surgical robotic tool. METHODS: We integrated real-time intra-operative segmentation of brain vasculature using deep learning with a 3D reconstruction algorithm to obtain the vessel point cloud in real time. We then implemented a virtual-fixture-based strategy that prevents Micron's tooltip from entering a forbidden region around the vessel, thus avoiding damage to it. RESULTS: We achieved a median Dice similarity coefficient of 0.97, 0.86, 0.87 and 0.77 on datasets of phantom blood vessels, the petrosal vein, the internal carotid artery and superficial vessels, respectively. We conducted trials with deformable clay vessel phantoms, keeping the forbidden region 400 [Formula: see text]m outside and 400 [Formula: see text]m inside the vessel. Micron's tip entered the forbidden region with a median penetration of just 8.84 [Formula: see text]m and 9.63 [Formula: see text]m, compared to 148.74 [Formula: see text]m and 117.17 [Formula: see text]m without our strategy, for the former and latter trials, respectively. CONCLUSION: Real-time control of Micron was achieved at 33.3 fps. We achieved improvements in real-time segmentation of brain vasculature from intra-operative images and showed that our approach works even on non-stationary vessel phantoms. The results suggest that by enabling precise, real-time control, we are one step closer to using Micron in real neurosurgical procedures.
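
A minimal sketch of the virtual-fixture idea follows: if a commanded tip position falls within a forbidden margin of the vessel point cloud, it is projected back to the margin boundary. The margin value and data are illustrative, not Micron's control parameters.

```python
# Sketch of a virtual-fixture check against a vessel point cloud: commands
# that fall inside the forbidden margin are pushed back to its boundary.
import numpy as np

def enforce_fixture(tip_cmd, vessel_points, margin_mm=0.4):
    dists = np.linalg.norm(vessel_points - tip_cmd, axis=1)
    i = np.argmin(dists)
    if dists[i] >= margin_mm:
        return tip_cmd                                   # outside forbidden region
    direction = (tip_cmd - vessel_points[i]) / (dists[i] + 1e-9)
    return vessel_points[i] + margin_mm * direction      # project to the boundary
```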


Subjects
Meningeal Neoplasms, Meningioma, Algorithms, Humans, Meningeal Neoplasms/diagnostic imaging, Meningeal Neoplasms/surgery, Meningioma/diagnostic imaging, Meningioma/surgery, Neurosurgical Procedures, Imaging Phantoms
16.
Sci Robot; 7(62): eabn6522, 2022 01 26.
Article in English | MEDLINE | ID: mdl-35080900

ABSTRACT

An autonomous robotic laparoscopic surgical technique is capable of tracking tissue motion and offers consistency in suturing for the anastomosis of the small bowel.


Subjects
Laparoscopy, Robotic Surgical Procedures, Robotics, Surgical Anastomosis, Robotics/instrumentation, Suture Techniques
17.
Front Robot AI; 8: 707704, 2021.
Article in English | MEDLINE | ID: mdl-34901168

ABSTRACT

Robots for minimally invasive surgery introduce many advantages, but still require the surgeon to alternately control the surgical instruments and the endoscope. This work aims at providing autonomous navigation of the endoscope during a surgical procedure. The autonomous endoscope motion was based on kinematic tracking of the surgical instruments and integrated with the da Vinci Research Kit. A preclinical usability study was conducted with 10 urologists. They carried out an ex vivo orthotopic neobladder reconstruction twice, using both traditional and autonomous endoscope control. The usability of the system was tested by asking participants to fill in the standard System Usability Scale. Moreover, the effectiveness of the method was assessed by analyzing the total procedure time and the time spent with the instruments out of the field of view. The average system usability score exceeded the threshold usually identified as the limit for good usability (average score = 73.25 > 68). The average total procedure time with autonomous endoscope navigation was comparable to that with classic control (p = 0.85 > 0.05), yet it significantly reduced the time out of the field of view (p = 0.022 < 0.05). Based on our findings, autonomous endoscope navigation improves the usability of the surgical system, and it has the potential to be an additional, customizable tool for the surgeon, who can always take control of the endoscope or leave it to move autonomously.
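
For reference, the standard System Usability Scale scoring behind figures such as the 73.25 average is sketched below: ten 1-5 items, where odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. The example responses are made up.

```python
# Standard SUS scoring: ten Likert items (1-5), scaled to a 0-100 score.
def sus_score(responses):
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # index 0,2,... = items 1,3,...
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))     # 75.0, above the 68 benchmark
```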

18.
Cancers (Basel); 13(16), 2021 Aug 20.
Article in English | MEDLINE | ID: mdl-34439355

ABSTRACT

Isocitrate dehydrogenase (IDH) mutational status is pivotal in the management of gliomas. Patients with IDH-mutated (IDH-MUT) tumors have a better prognosis and benefit more from extended surgical resection than those with IDH wild-type (IDH-WT) tumors. Raman spectroscopy (RS) is a minimally invasive optical technique with great potential for intraoperative diagnosis. We evaluated the ability of RS to characterize the IDH mutational status of unprocessed glioma biopsies. We extracted 2073 Raman spectra from thirty-eight unprocessed samples. Classification performance was assessed using eXtreme Gradient Boosted trees (XGB) and a Support Vector Machine with a Radial Basis Function kernel (RBF-SVM). The measured Raman spectra displayed differences between IDH-MUT and IDH-WT tumor tissue. From the 103 Raman shifts screened as input features, the cross-validation loop identified 52 shifts with the highest performance in distinguishing the two groups. Raman analysis showed differences in spectral features of lipids, collagen, DNA and cholesterol/phospholipids. We were able to distinguish between IDH-MUT and IDH-WT tumors with an accuracy and precision of 87%. RS is a valuable and accurate tool for characterizing the IDH mutational status of unprocessed glioma samples. This study extends RS knowledge toward future personalized surgical strategies or in situ targeted therapies for glioma.
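
As a concrete sketch of the classifier side, below is a cross-validated RBF-kernel SVM on per-spectrum feature vectors; the synthetic data are placeholders for the 52 selected Raman-shift intensities, and the XGB model would be evaluated analogously.

```python
# Sketch of cross-validated RBF-SVM classification of Raman spectra;
# features and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 52))    # 52 selected Raman-shift intensities (placeholder)
y = rng.integers(0, 2, 200)           # 0 = IDH-WT, 1 = IDH-MUT
scores = cross_val_score(SVC(kernel="rbf", C=1.0, gamma="scale"),
                         X, y, cv=5, scoring="accuracy")
print(scores.mean())
```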

19.
Cancers (Basel); 13(5), 2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33802369

ABSTRACT

Identifying tumor cells infiltrating normal-appearing brain tissue is critical to achieving a total glioma resection. Raman spectroscopy (RS) is an optical technique with potential for real-time glioma detection. Most RS reports are based on formalin-fixed or frozen samples, with only a few studies performed on fresh, untreated tissue. We aimed to apply RS to untreated brain biopsies, exploring novel Raman bands useful in distinguishing glioma from normal brain tissue. Sixty-three fresh tissue biopsies were analyzed within a few minutes after resection. A total of 3450 spectra were collected, with 1377 labelled as Healthy and 2073 as Tumor. Machine learning methods were used to classify the spectra against the histo-pathological standard. The algorithms extracted information from 60 different Raman peaks identified as the most representative among the 135 peaks screened. We were able to distinguish between tumor and healthy brain tissue with an accuracy and precision of 83% and 82%, respectively. We identified 19 new Raman shifts with known biological significance. Raman spectroscopy was effective and accurate in discriminating glioma tissue from healthy brain tissue ex vivo in fresh samples. This study adds new spectroscopic data that can contribute to the further development of Raman spectroscopy as an intraoperative tool for in-vivo glioma detection.

20.
Front Oncol; 11: 626602, 2021.
Article in English | MEDLINE | ID: mdl-33842330

ABSTRACT

INTRODUCTION: Fully convolutional neural networks (FCNNs) applied to video analysis are of particular interest in the field of head and neck oncology, given that endoscopic examination is a crucial step in the diagnosis, staging, and follow-up of patients affected by upper aero-digestive tract cancers. The aim of this study was to test FCNN-based methods for semantic segmentation of squamous cell carcinoma (SCC) of the oral cavity (OC) and oropharynx (OP). MATERIALS AND METHODS: Two datasets were retrieved from the institutional registry of a tertiary academic hospital, analyzing 34 and 45 NBI endoscopic videos of OC and OP lesions, respectively. The OC dataset was composed of 110 frames, while 116 frames composed the OP dataset. Three FCNNs (U-Net, U-Net 3, and ResNet) were investigated to segment the neoplastic images. FCNN performance was evaluated for each tested network and compared to the gold standard, represented by manual annotation performed by expert clinicians. RESULTS: For FCNN-based segmentation of the OC dataset, the best results in terms of Dice Similarity Coefficient (Dsc) were achieved by ResNet with 5(×2) blocks and 16 filters, with a median value of 0.6559. For the OP dataset, the best results in terms of Dsc were achieved by ResNet with 4(×2) blocks and 16 filters, with a median value of 0.7603. All tested FCNNs presented very high variance, leading to very low minima for all metrics evaluated. CONCLUSIONS: FCNNs have promising potential in the analysis and segmentation of OC and OP video-endoscopic images. All tested FCNN architectures demonstrated satisfactory outcomes in terms of diagnostic accuracy. The inference time of the processing networks was particularly short, ranging between 14 and 115 ms, thus showing the possibility of real-time application.
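
A compact sketch of the Dice Similarity Coefficient used to score segmentations against manual annotations (binary masks, with a small epsilon guarding empty masks):

```python
# Dice Similarity Coefficient between binary prediction and ground-truth masks.
import numpy as np

def dice(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```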
