Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38558289

ABSTRACT

Purpose: Surgical workflow recognition is a challenging task that requires understanding multiple aspects of surgery, such as gestures, phases, and steps. Most existing methods, however, focus on single-task or single-modal models and rely on costly annotations for training. To address these limitations, we propose a novel semi-supervised learning approach that leverages multimodal data and self-supervision to create meaningful representations for various surgical tasks.

Methods: Our representation learning approach proceeds in two stages. In the first stage, time-contrastive learning is used to learn spatiotemporal visual features from video data, without any labels. In the second stage, a multimodal variational autoencoder (VAE) fuses the visual features with kinematic data to obtain a shared representation, which is fed into recurrent neural networks for online recognition.

Results: Our method is evaluated on two datasets, JIGSAWS and MISAW, where it achieves performance comparable to or better than fully supervised models specialized for each task in multi-granularity workflow recognition. On the JIGSAWS Suturing dataset, we achieve a gesture recognition accuracy of 83.3%. In addition, our model is more annotation-efficient, maintaining high performance with only half of the labels. On the MISAW dataset, we achieve 84.0% AD-Accuracy in phase recognition and 56.8% AD-Accuracy in step recognition.

Conclusion: Our multimodal representation is versatile across various surgical tasks and enhances annotation efficiency. This work has significant implications for real-time decision-making systems within the operating room.
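
The first stage above relies on time-contrastive learning, in which frames that are close in time are treated as positive pairs and all other frames as negatives. A minimal sketch of that idea follows; the InfoNCE-style loss, window size, and temperature are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def time_contrastive_loss(embeddings, window=2, temperature=0.1):
    """InfoNCE-style time-contrastive loss over a sequence of per-frame
    features. Frames within `window` timesteps of an anchor are treated
    as positives, all other frames as negatives. `embeddings` is a
    (T, D) array of L2-normalized features."""
    T = embeddings.shape[0]
    sims = embeddings @ embeddings.T / temperature   # (T, T) similarities
    losses = []
    for t in range(T):
        logits = np.delete(sims[t], t)               # drop self-similarity
        # positive indices, remapped after deleting the anchor entry
        pos = [j if j < t else j - 1
               for j in range(max(0, t - window), min(T, t + window + 1))
               if j != t]
        log_denom = np.log(np.exp(logits).sum())
        # -log softmax probability of each positive, averaged
        losses.append(np.mean([log_denom - logits[p] for p in pos]))
    return float(np.mean(losses))
```

Minimizing this loss pulls temporally adjacent frames together in feature space, which is what yields label-free spatiotemporal features from video.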

2.
Sensors (Basel); 23(24), 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38139711

ABSTRACT

In minimally invasive surgery, surgeons rely mainly on visual feedback during operations. In common procedures such as tissue resection, automating endoscopic control is crucial yet challenging, particularly because of the interactive dynamics of multi-agent operations and the need for real-time adaptation. This paper introduces a novel framework that unites a hierarchical quadratic programming (HQP) controller with an advanced interactive perception module. This integration addresses the need for adaptive visual-field control and robust tool tracking in the operating scene, ensuring that surgeons and assistants have an optimal viewpoint throughout the surgical task. The proposed framework handles multiple objectives within predefined thresholds, maintaining efficient tracking even amid changing operating backgrounds, varying lighting conditions, and partial occlusions. Empirical validation in scenarios involving single, double, and quadruple tool tracking during tissue resection tasks underscores the system's robustness and adaptability. Positive feedback from user studies, together with the low cognitive and physical strain reported by surgeons and assistants, highlights the system's potential for real-world application.
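
The core of a hierarchical quadratic programming controller is lexicographic prioritization: each lower-priority objective is optimized only within the freedom left by the objectives above it. A minimal equality-only sketch of that idea follows; a real HQP controller also handles inequality constraints and thresholds, and all names here are illustrative:

```python
import numpy as np

def hierarchical_least_squares(tasks):
    """Solve a stack of prioritized linear tasks A_i x ~ b_i
    lexicographically: each level is solved in the nullspace of all
    higher-priority levels, so lower-priority objectives can never
    degrade higher-priority ones. `tasks` is a list of (A, b) pairs,
    highest priority first."""
    n = tasks[0][0].shape[1]
    x = np.zeros(n)
    N = np.eye(n)                      # basis of the remaining free space
    for A, b in tasks:
        if N.shape[1] == 0:            # no freedom left for lower levels
            break
        AN = A @ N
        # best correction for this level inside the remaining free space
        z, *_ = np.linalg.lstsq(AN, b - A @ x, rcond=None)
        x = x + N @ z
        # remove this level's row space from the free space
        _, s, Vt = np.linalg.svd(AN)
        rank = int((s > 1e-10).sum())
        N = N @ Vt[rank:].T
    return x
```

For example, with a top-priority task pinning one coordinate and a lower-priority task pulling the whole vector toward a target, the solver satisfies the pin exactly and fits the target only in the remaining direction.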


Subjects
Endoscopes, Minimally Invasive Surgical Procedures, Minimally Invasive Surgical Procedures/methods, Endoscopy/methods, Automation, Perception
3.
Sensors (Basel); 23(6), 2023 Mar 22.
Article in English | MEDLINE | ID: mdl-36992038

ABSTRACT

Minimally invasive surgery has undergone significant advancements in recent years, transforming various surgical procedures by minimizing patient trauma, postoperative pain, and recovery time. However, the use of robotic systems in minimally invasive surgery introduces significant challenges related to the control of the robot's motion and the accuracy of its movements. In particular, the inverse kinematics (IK) problem is critical for robot-assisted minimally invasive surgery (RMIS), where satisfying the remote center of motion (RCM) constraint is essential to prevent tissue damage at the incision point. Several IK strategies have been proposed for RMIS, including classical inverse Jacobian IK and optimization-based approaches. However, these methods have limitations and perform differently depending on the kinematic configuration. To address these challenges, we propose a novel concurrent IK framework that combines the strengths of both approaches and explicitly incorporates RCM constraints and joint limits into the optimization process. In this paper, we present the design and implementation of concurrent inverse kinematics solvers, as well as experimental validation in both simulation and real-world scenarios. Concurrent IK solvers outperform single-method solvers, achieving a 100% solve rate and reducing the IK solving time by up to 85% for an endoscope positioning task and 37% for a tool pose control task. In particular, the combination of an iterative inverse Jacobian method with a hierarchical quadratic programming method showed the highest average solve rate and lowest computation time in real-world experiments. Our results demonstrate that concurrent IK solving provides a novel and effective solution to the constrained IK problem in RMIS applications.
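
The concurrent-solver idea (racing several IK strategies and keeping the first solution that converges) can be sketched generically. The solver callables, signature, and timeout below are illustrative assumptions, not the paper's implementation:

```python
import concurrent.futures as cf

def solve_ik_concurrently(solvers, target, timeout=1.0):
    """Run several IK strategies in parallel and return the first
    non-None joint solution. `solvers` is a list of callables
    solver(target) -> joint_vector or None, e.g. an inverse-Jacobian
    iteration and an optimization-based solver."""
    with cf.ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = [pool.submit(s, target) for s in solvers]
        try:
            for fut in cf.as_completed(futures, timeout=timeout):
                result = fut.result()
                if result is not None:
                    for other in futures:
                        other.cancel()   # drop solvers still queued
                    return result
        except cf.TimeoutError:
            pass                         # no solver converged in time
    return None
```

Racing solvers this way trades a little extra compute for the best-case solve time of whichever strategy happens to suit the current kinematic configuration, which is the mechanism behind the reported 100% solve rate.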


Subjects
Robotic Surgical Procedures, Robotics, Humans, Biomechanical Phenomena, Robotic Surgical Procedures/methods, Motion (Physics), Minimally Invasive Surgical Procedures/methods