1.
Front Hum Neurosci ; 9: 526, 2015.
Article in English | MEDLINE | ID: mdl-26528160

ABSTRACT

Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a "Collaborative Gaze Channel" (CGC), which detects the trainer's gaze behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration, with either verbal guidance (control condition) or visual guidance via the CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT), and gaze behavior was evaluated using video-oculography. Performance during gaze assistance was significantly superior [biopsy number (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm, median (IQR): control = 2.5 (12.0) vs. CGC = 0.63 (11.2); p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel thus enhances performance, modulates visual search, and alleviates the burden on brain centers subserving visual attention, without inducing changes in the trainee's O-P functional network observable with the current OT technique. The results imply that visual guidance may liberate attentional resources, potentially improving the trainees' capability to attend to other safety-critical events during the procedure.
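The global efficiency reported above is a standard graph-theoretic measure: the mean inverse shortest-path length over all node pairs of the functional network. A minimal sketch (not the authors' implementation; the small adjacency matrix and unit edge weights are illustrative assumptions):

```python
from itertools import product

def global_efficiency(adj):
    """Global efficiency: mean inverse shortest-path length over all
    ordered node pairs. adj[i][j] is the edge weight, 0 for no edge."""
    n = len(adj)
    INF = float("inf")
    # Initialise the distance matrix, then run Floyd-Warshall.
    d = [[0 if i == j else (adj[i][j] if adj[i][j] else INF)
          for j in range(n)] for i in range(n)]
    for k, i, j in product(range(n), repeat=3):
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    total = sum(1.0 / d[i][j] for i in range(n) for j in range(n)
                if i != j and d[i][j] < INF)
    return total / (n * (n - 1))
```

For a three-node path graph (0-1-2), the pairwise distances are 1, 1, and 2, giving a global efficiency of 5/6.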

2.
Ann Biomed Eng ; 40(10): 2156-67, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22581476

ABSTRACT

The use of multiple robots for performing complex tasks is becoming common practice in many robot applications. When different operators are involved, effective cooperation with anticipated manoeuvres is important for seamless, synergistic control of all the end-effectors. In this paper, the concept of Collaborative Gaze Channelling (CGC) is presented for improved control of surgical robots during a shared task. Through eye tracking, the fixations of each operator are monitored and presented in a shared surgical workspace. CGC permits remote or physically separated collaborators to share their intention by visualising the eye gaze of their counterparts, and thus recovers, to a certain extent, the information of mutual intent that we rely upon in a vis-à-vis working setting. In this study, the efficiency of surgical manipulation with and without CGC for controlling a pair of bimanual surgical robots is evaluated by analysing the level of coordination of two independent operators. Fitts' law is used to compare the quality of movement with and without CGC. A total of 40 subjects were recruited for this study, and the results show that the proposed CGC framework yields significant improvement (p < 0.05) in all the motion indices used for quality assessment. This study demonstrates that visual guidance is an implicit yet effective means of communication during collaborative tasks in robotic surgery. Detailed experimental validation demonstrates the potential clinical value of the proposed CGC framework.
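Fitts' law models aimed-movement difficulty as a function of target distance and width. The abstract does not detail which motion indices were derived, so the following is a minimal sketch of the standard Shannon formulation only:

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation: ID = log2(D / W + 1), in bits
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    # Fitts throughput: bits transmitted per second for one aimed movement
    return index_of_difficulty(distance, width) / movement_time
```

For a target 7 units away and 1 unit wide, ID = log2(8) = 3 bits; completing the movement in 1.5 s gives a throughput of 2 bits/s.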


Subjects
Robotics/instrumentation, Robotics/methods, Video-Assisted Surgery/instrumentation, Video-Assisted Surgery/methods, Humans, Male
3.
Surg Endosc ; 26(7): 2003-9, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22258302

ABSTRACT

BACKGROUND: Eye-tracking technology has been shown to improve trainee performance in the aircraft industry, radiology, and surgery. The ability to track a supervisor's point of regard and reflect it onto a subject's laparoscopic screen to aid instruction in a simulated task is attractive, particularly given the multilingual makeup of modern surgical teams and the development of collaborative surgical techniques. We aimed to develop a bespoke interface to project a supervisor's point of regard onto a subject's laparoscopic screen and to investigate whether the supervisor's eye gaze could serve as a tool to aid identification of a target during a surgically simulated task. METHODS: We developed software to project a supervisor's point of regard onto a subject's screen while surgically related laparoscopic tasks were undertaken. Twenty-eight subjects with varying levels of operative experience and proficiency in English undertook a series of surgically minded laparoscopic tasks. Subjects were instructed with verbal cues (V), a cursor reflecting the supervisor's eye gaze (E), or both (VE). Performance metrics included time to complete tasks, eye-gaze latency, and number of errors. RESULTS: Completion times and number of errors were significantly reduced when eye-gaze instruction was employed (VE, E). In addition, the time taken for the subject to correctly focus on the target (latency) was significantly reduced. CONCLUSIONS: We have successfully demonstrated the effectiveness of a novel framework enabling a supervisor's eye gaze to be projected onto a trainee's laparoscopic screen. Furthermore, we have shown that using eye-tracking technology to provide visual instruction improves completion times and reduces errors in a simulated environment. Although this technology requires significant development, the potential applications are wide-ranging.
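Eye-gaze latency, as used above, is the delay between an instruction and the subject's gaze first landing on the target. A minimal sketch, assuming gaze samples arrive as time-ordered (t, x, y) tuples and the target is a circular screen region (both illustrative assumptions, not the study's actual pipeline):

```python
def gaze_latency(samples, target, radius, cue_time):
    """Return the delay from `cue_time` until gaze first enters the
    circular target region, or None if it never does.
    samples: time-ordered list of (t, x, y) gaze points."""
    tx, ty = target
    for t, x, y in samples:
        # Squared-distance test avoids a square root per sample.
        if t >= cue_time and (x - tx) ** 2 + (y - ty) ** 2 <= radius ** 2:
            return t - cue_time
    return None
```

A per-trial latency like this, averaged over trials, would give the group-level metric compared across the V, E, and VE conditions.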


Subjects
Computer Simulation, Medical Education/methods, Eye Movements, Ocular Fixation, Laparoscopy/education, Teaching Materials, Analysis of Variance, Equipment Design, Female, Humans, Laparoscopy/instrumentation, Male, Verbal Reinforcement, Software
4.
Article in English | MEDLINE | ID: mdl-20426014

ABSTRACT

In robot-assisted procedures, the surgeon's ability can be enhanced by navigation guidance through the use of virtual fixtures or active constraints. This paper presents a real-time modeling scheme for dynamic active constraints with fast and simple mesh adaptation under cardiac deformation and changes in anatomic structure. A smooth tubular pathway is constructed which provides assistance for a flexible hyper-redundant robot to circumnavigate the heart with the aim of undertaking bilateral pulmonary vein isolation as part of a modified maze procedure for the treatment of debilitating arrhythmia and atrial fibrillation. In contrast to existing approaches, the method incorporates detailed geometrical constraints with explicit manipulation margins of the forbidden region for an entire articulated surgical instrument, rather than just the end-effector itself. Detailed experimental validation is conducted to demonstrate the speed and accuracy of the instrument navigation with and without the use of the proposed dynamic constraints.
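The tubular pathway described above can be thought of as an active constraint that keeps the instrument within a fixed radius of a centreline. A minimal static sketch (the paper's constraint is dynamic and mesh-based under cardiac deformation; the polyline centreline and single-point check here are simplifying assumptions):

```python
import math

def point_segment_dist(p, a, b):
    # Distance from 3-D point p to the line segment from a to b.
    ax, ay, az = a
    bx, by, bz = b
    px, py, pz = p
    abv = (bx - ax, by - ay, bz - az)
    apv = (px - ax, py - ay, pz - az)
    denom = sum(c * c for c in abv)
    # Clamp the projection parameter to [0, 1] so the closest point
    # stays on the segment.
    t = 0.0 if denom == 0 else max(0.0, min(1.0,
        sum(u * v for u, v in zip(apv, abv)) / denom))
    closest = (ax + t * abv[0], ay + t * abv[1], az + t * abv[2])
    return math.dist(p, closest)

def inside_tube(point, centerline, radius):
    # The constraint is satisfied if the point lies within `radius`
    # of any segment of the polyline centreline.
    return any(point_segment_dist(point, a, b) <= radius
               for a, b in zip(centerline, centerline[1:]))
```

Applying a check like this to every link of an articulated instrument, rather than only the end-effector, mirrors the whole-instrument constraint described above.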


Subjects
Cardiovascular Surgical Procedures/methods, Computer Graphics, Three-Dimensional Imaging/methods, Man-Machine Systems, Robotics/methods, Computer-Assisted Surgery/methods, User-Computer Interface
5.
Rep U S ; 2009: 2783-2788, 2009 Oct 15.
Article in English | MEDLINE | ID: mdl-24748996

ABSTRACT

This paper presents a human-robot interface with perceptual docking to allow control of multiple microrobots. The aim is to demonstrate that real-time eye tracking can be used to empower robots with human vision by using knowledge acquired in situ. Several microrobots can be directly controlled through a combination of manual and eye control. The novel control environment is demonstrated on a virtual biopsy of a gastric lesion through an endoluminal approach. Twenty-one subjects were recruited to test the control environment. Statistical analysis was conducted on task completion time using keyboard control and the proposed eye-tracking framework. System integration with the perceptual docking concept demonstrated a statistically significant improvement in task execution.
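The comparison of completion times under the two control modes can be illustrated with a paired t statistic, since each subject performs the task under both conditions (the abstract does not name the exact test, so this choice is an assumption):

```python
import math

def paired_t(a, b):
    """Paired t statistic for matched samples, e.g. each subject's task
    completion time under keyboard vs. eye-tracking control."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)  # t with n - 1 degrees of freedom
```

The resulting t value would be compared against the t distribution with n - 1 degrees of freedom to obtain a p-value.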
