Results 1 - 6 of 6
1.
Surg Endosc; 37(5): 3557-3566, 2023 May.
Article in English | MEDLINE | ID: mdl-36609924

ABSTRACT

BACKGROUND: In minimally invasive surgery (MIS), trainees need to learn how to interpret the operative field displayed on the laparoscopic screen. Experts currently guide trainees mainly verbally during laparoscopic procedures. A newly developed augmented reality telestration system (iSurgeon) allows the instructor to display hand gestures in real time on the laparoscopic screen to provide visual expert guidance (telestration). This study analysed the effect of telestration-guided instruction on gaze behaviour during MIS training. METHODS: In a randomized controlled crossover study, 40 MIS-naive medical students performed 8 laparoscopic tasks with telestration or with verbal instructions only. Pupil Core eye-tracking glasses were used to capture the instructor's and trainees' gazes. Gaze behaviour measures for tasks 1-7 were gaze latency, gaze convergence and collaborative gaze convergence. Performance measures included the number of errors in tasks 1-7 and trainees' ratings on structured and standardized performance scores in task 8 (ex vivo porcine laparoscopic cholecystectomy). RESULTS: In tasks 1-7, instruction with iSurgeon yielded significant improvements in gaze latency [F(1,39) = 762.5, p < 0.01, ηp² = 0.95], gaze convergence [F(1,39) = 482.8, p < 0.01, ηp² = 0.93] and collaborative gaze convergence [F(1,39) = 408.4, p < 0.01, ηp² = 0.91]. The number of errors was significantly lower in tasks 1-7 (0.18 ± 0.56 vs. 1.94 ± 1.80, p < 0.01) and score ratings for laparoscopic cholecystectomy were significantly higher with telestration (global OSATS: 29 ± 2.5 vs. 25 ± 5.5, p < 0.01; task-specific OSATS: 60 ± 3 vs. 50 ± 6, p < 0.01). CONCLUSIONS: Telestration with augmented reality successfully improved surgical performance. Trainees' gaze behaviour improved: the time from instruction to fixation on the target decreased, and the instructor's and trainees' gazes converged more closely. The convergence of trainees' gaze and target areas also increased with telestration. This confirms that augmented reality-based telestration works by means of gaze guidance in MIS and could be used to improve training outcomes.
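
The reported effect sizes can be checked by hand: for a repeated-measures contrast, partial eta squared follows directly from the F statistic and its degrees of freedom. A minimal sketch (an illustration, not code from the study):

```python
# Partial eta squared from an F statistic:
# eta_p^2 = (F * df_effect) / (F * df_effect + df_error)

def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Recover partial eta squared from F and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Reported effects, all with df = (1, 39):
for label, f in [("gaze latency", 762.5),
                 ("gaze convergence", 482.8),
                 ("collaborative gaze convergence", 408.4)]:
    print(f"{label}: eta_p^2 = {partial_eta_squared(f, 1, 39):.2f}")
# Prints 0.95, 0.93 and 0.91, matching the reported values.
```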


Subject(s)
Augmented Reality , Education, Medical , Learning , Animals , Cholecystectomy, Laparoscopic/education , Cholecystectomy, Laparoscopic/methods , Clinical Competence , Cross-Over Studies , Laparoscopy/education , Swine , Students, Medical , Education, Medical/methods , Humans
2.
Dig Endosc; 35(3): 314-322, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36281784

ABSTRACT

What an endoscopist looks at during luminal endoscopy, that is, the endoscopist's visual patterns, is an area of growing interest with an evolving evidence base. The tools required for gaze analysis have become cheaper and more easily accessible. A comprehensive literature search identified 19 relevant papers. Gaze analysis has been used to identify visual patterns associated with higher polyp detection rates. It has also increasingly been applied as an objective study tool to compare the effectiveness of endoscopic imaging technologies, and it has the potential to be incorporated into endoscopic training. Eye movements have also been used to control and steer a robotic endoscope. This review presents the current evidence in this novel and evolving field of endoscopic research.
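
Gaze-pattern studies of this kind typically reduce raw gaze samples to per-region metrics such as dwell time in areas of interest (AOIs). A hypothetical sketch of that reduction step (names, the AOI layout and the sampling rate are assumptions, not taken from any reviewed study):

```python
from typing import Iterable

def dwell_times(samples: Iterable[tuple[float, float, float]],
                aois: dict[str, tuple[float, float, float, float]],
                sample_period_s: float = 1 / 60) -> dict[str, float]:
    """Sum the time gaze spends in each rectangular AOI.

    samples: (t, x, y) gaze points in screen coordinates.
    aois: name -> (x_min, y_min, x_max, y_max).
    """
    totals = {name: 0.0 for name in aois}
    for _t, x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += sample_period_s
    return totals
```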


Subject(s)
Colonic Polyps , Eye-Tracking Technology , Humans , Colonoscopy/methods , Eye Movements , Endoscopy, Gastrointestinal
3.
J Surg Res; 280: 258-272, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36030601

ABSTRACT

INTRODUCTION: Increased cognitive workload (CWL) is a well-established entity that can impair surgical performance and increase the likelihood of surgical error. Pupil and gaze-tracking data are increasingly used to measure CWL objectively in surgery. The aim of this review is to summarize and synthesize the existing evidence. METHODS: A systematic review was undertaken in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A search of OVID MEDLINE, IEEE Xplore, Web of Science, Google Scholar, APA PsycInfo, and EMBASE was conducted for articles published in English between 1990 and January 2021. In total, 6791 articles were screened and 32 full-text articles were selected based on the inclusion criteria. A narrative analysis was undertaken in view of the heterogeneity of the studies. RESULTS: Seventy-eight percent of the selected studies were deemed high quality. The most frequently studied surgical environment and task were surgical simulation (75%) and performance of laparoscopic skills (56%), respectively. The current literature can be broadly categorized into pupil, blink, and gaze metrics used in the assessment of CWL. These can be further categorized according to their use in the context of CWL: (1) direct measurement of CWL (n = 16), (2) determination of expertise level (n = 14), and (3) prediction of performance (n = 2). CONCLUSIONS: Eye-tracking data provide a wealth of information; however, there is marked study heterogeneity. Pupil diameter and gaze entropy show promise for CWL assessment. Future work will entail the use of artificial intelligence in the form of deep learning and the use of a multisensor platform to measure CWL accurately.
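
One of the metrics the review highlights, gaze entropy, is commonly computed as the Shannon entropy of the distribution of fixations over spatial bins (stationary gaze entropy). A minimal sketch; the binning scheme and names are assumptions rather than the method of any single reviewed study:

```python
import math
from collections import Counter

def stationary_gaze_entropy(fixations: list[tuple[float, float]],
                            bin_size: float = 0.1) -> float:
    """Shannon entropy (bits) of fixation locations over a spatial grid.

    Higher values indicate gaze spread over more locations, which some
    studies associate with changes in cognitive workload.
    """
    if not fixations:
        return 0.0
    bins = Counter((int(x // bin_size), int(y // bin_size))
                   for x, y in fixations)
    n = sum(bins.values())
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```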


Subject(s)
Benchmarking , Pupil , Artificial Intelligence , Workload/psychology , Cognition
4.
Surg Endosc; 35(9): 5381-5391, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34101012

ABSTRACT

BACKGROUND: Within surgery, assistive robotic devices (ARD) have been associated with improved patient outcomes. ARD can offer the surgical team a "third hand" to perform a wider range of tasks with more degrees of motion than conventional laparoscopy. We tested an eye-tracking-based robotic scrub nurse (RSN) in a simulated operating room, built on a novel real-time framework for mobile, theatre-wide 3D gaze localization. METHODS: Surgeons performed segmental resection of pig colon and handsewn end-to-end anastomosis while wearing eye-tracking glasses (ETG), assisted by distributed RGB-D motion sensors. To select instruments, surgeons (ST) fixed their gaze on a screen, prompting the RSN to pick up and transfer the item. The task performed with the assistance of a human scrub nurse (HSNt) was compared against the task performed with the assistance of both robotic and human scrub nurses (R&HSNt). Task load (NASA-TLX), technology acceptance (Van der Laan's), performance metrics and team communication were measured. RESULTS: Overall, 10 ST participated. NASA-TLX feedback for ST on HSNt vs. R&HSNt usage revealed no significant difference in mental, physical or temporal demands and no change in task performance. ST reported a significantly higher frustration score with R&HSNt. Van der Laan's scores showed positive usefulness and satisfaction scores for the RSN. No significant difference in operating time was observed. CONCLUSIONS: We report initial findings on our eye-tracking-based RSN. It enables mobile, unrestricted, hands-free human-robot interaction intra-operatively. Importantly, the platform was deemed non-inferior to HSNt and was accepted by ST and HSN test users.
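
The gaze-based selection described is, in essence, a dwell trigger: an instrument is selected once gaze stays inside its on-screen region for a continuous period. A hypothetical sketch of such a trigger (threshold, names and the region layout are assumptions, not the study's implementation):

```python
def dwell_select(gaze_stream, regions, dwell_s: float = 1.0):
    """Yield a region name once gaze has dwelt in it for dwell_s seconds.

    gaze_stream: iterable of (t, x, y) samples in screen coordinates.
    regions: name -> (x_min, y_min, x_max, y_max).
    """
    current, since = None, None
    for t, x, y in gaze_stream:
        hit = next((name for name, (x0, y0, x1, y1) in regions.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != current:
            current, since = hit, t        # gaze moved: restart the timer
        elif hit is not None and t - since >= dwell_s:
            yield hit                      # e.g. command the RSN to fetch this item
            current, since = None, None    # reset so the trigger fires once
```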


Subject(s)
Laparoscopy , Robotic Surgical Procedures , Robotics , Animals , Eye-Tracking Technology , Swine , Task Performance and Analysis
5.
Surg Endosc; 35(8): 4890-4899, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34028606

ABSTRACT

BACKGROUND: Interventional endoluminal therapy is rapidly advancing as a minimally invasive surgical technique. The expanding remit of endoscopic therapy necessitates precision control. Eye tracking is an emerging technology that allows intuitive control of devices. This feasibility study assessed whether a novel eye gaze-controlled endoscopic system could be used to intuitively control an endoscope. METHODS: An eye gaze-control system consisting of eye-tracking glasses, specialist cameras and a joystick was used to control a robotically driven endoscope, allowing steering, advancement, withdrawal and retroflexion. Eight experienced endoscopists and eight non-endoscopists used both the eye gaze system and a conventional endoscope to identify ten targets in two simulated environments: a sphere and an upper gastrointestinal (UGI) model. Task completion was timed. Subjective feedback was collected from each participant on task load (NASA Task Load Index) and acceptance of technology (Van der Laan scale). RESULTS: Non-endoscopists were significantly quicker with gaze control than with conventional endoscopy (sphere task 3:54 ± 1:17 vs. 9:05 ± 5:40 min, p = 0.012; UGI model task 1:59 ± 0:24 vs. 3:45 ± 0:53 min, p < 0.001). Non-endoscopists reported significantly higher NASA-TLX total workload scores with conventional endoscopy than with gaze control (80.6 ± 11.3 vs. 22.5 ± 13.8, p < 0.001). Endoscopists reported significantly higher total NASA-TLX workload scores with gaze control than with conventional endoscopy (54.2 ± 16 vs. 26.9 ± 15.3, p = 0.012). All subjects gave the gaze-control system positive "usefulness" and "satisfaction" scores of 0.56 ± 0.83 and 1.43 ± 0.51, respectively. CONCLUSIONS: The novel eye gaze-control system was significantly quicker to use and subjectively lower in workload for non-endoscopists. Further work is needed to establish whether this translates into a shallower learning curve to proficiency versus conventional endoscopy. The eye gaze-control system appears feasible as an intuitive endoscope control system. Hybrid gaze and hand control may prove a beneficial technology for evolving endoscopic platforms.
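
For context on how the acceptance scores are derived: the published Van der Laan scale (Van der Laan et al., 1997) uses nine 5-point items rated -2 to +2, with odd items averaged into a "usefulness" sub-score and even items into a "satisfying" sub-score. A sketch of that scoring, hedged against the instrument actually administered in the study:

```python
def van_der_laan(items: list[int]) -> tuple[float, float]:
    """Return (usefulness, satisfying) sub-scores from 9 item ratings.

    Inputs are assumed already recoded: in the published scale, the
    mirrored items (3, 6 and 8) are reverse-scored before averaging.
    """
    assert len(items) == 9 and all(-2 <= i <= 2 for i in items)
    usefulness = sum(items[0::2]) / 5   # items 1, 3, 5, 7, 9
    satisfying = sum(items[1::2]) / 4   # items 2, 4, 6, 8
    return usefulness, satisfying

# Positive means on both sub-scores, as reported here (0.56 and 1.43),
# indicate the system was judged both useful and satisfying.
```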


Subject(s)
Endoscopes , Workload , Endoscopy , Humans , Minimally Invasive Surgical Procedures
6.
Int J Comput Assist Radiol Surg; 12(7): 1131-1140, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28397111

ABSTRACT

PURPOSE: Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information, especially perceptually enabled information, from multiple sources could help to meet these goals. This paper presents core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment. METHODS: The synergy of wearable eye tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework's possible functionalities, an articulated collaborative robotic arm with a laser pointer is integrated, and the set-up is used to project the surgeon's fixation point into 3D space. RESULTS: The implementation is evaluated over 60 fixations on predefined targets, with subject-to-target distances of 92-212 cm and robot-to-target distances of 42-193 cm. The median overall system error is currently 3.98 cm. Its real-time potential is also highlighted. CONCLUSIONS: The work presented here is an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre.
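
The core geometric step in such a system is lifting a 2D fixation into 3D: the fixation pixel is back-projected along a ray from the scene camera, scaled by a depth estimate, and transformed into the world frame using the camera pose (e.g. from SLAM). A minimal sketch under those assumptions; the names and the pose/depth sources are illustrative, not the paper's implementation:

```python
import numpy as np

def fixation_to_3d(u: float, v: float, depth_m: float,
                   K: np.ndarray, T_world_cam: np.ndarray) -> np.ndarray:
    """Return the 3D world point for fixation pixel (u, v) at a known depth.

    K: 3x3 scene-camera intrinsics.
    T_world_cam: 4x4 camera-to-world pose, e.g. estimated by SLAM.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    p_cam = ray_cam * (depth_m / ray_cam[2])            # scale so z = depth
    p_world = T_world_cam @ np.append(p_cam, 1.0)       # homogeneous transform
    return p_world[:3]
```

The resulting world point can then be handed to the robot arm as the target for the laser pointer; the 3.98 cm median error reported above reflects the whole chain (eye tracking, pose estimation and robot positioning), not this projection step alone.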


Subject(s)
Eye Movement Measurements , Fixation, Ocular , Operating Rooms , Robotics/methods , Workflow , Humans , Image Interpretation, Computer-Assisted , Minimally Invasive Surgical Procedures