1.
Int J Comput Assist Radiol Surg ; 14(11): 2005-2020, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31037493

ABSTRACT

PURPOSE: Automatically segmenting and classifying surgical activities is an important prerequisite to providing automated, targeted assessment and feedback during surgical training. Prior work has focused almost exclusively on recognizing gestures, or short, atomic units of activity such as pushing a needle through tissue, whereas we also focus on recognizing higher-level maneuvers, such as a suture throw. Maneuvers exhibit more complexity and variability than the gestures from which they are composed; however, working at this granularity has the benefit of being consistent with existing training curricula.

METHODS: Prior work has focused on hidden Markov model and conditional-random-field-based methods, which typically leverage unary terms that are local in time and linear in model parameters. Because maneuvers are governed by long-term, nonlinear dynamics, we argue that the more expressive unary terms offered by recurrent neural networks (RNNs) are better suited for this task. Four RNN architectures are compared for recognizing activities from kinematics: simple RNNs, long short-term memory, gated recurrent units, and mixed history RNNs. We report performance in terms of error rate and edit distance, and we use a functional analysis-of-variance framework to assess hyperparameter sensitivity for each architecture.

RESULTS: We obtain state-of-the-art performance for both maneuver recognition from kinematics (4 maneuvers; error rate of [Formula: see text]; normalized edit distance of [Formula: see text]) and gesture recognition from kinematics (10 gestures; error rate of [Formula: see text]; normalized edit distance of [Formula: see text]).

CONCLUSIONS: Automated maneuver recognition is feasible with RNNs, an exciting result that offers the opportunity to provide targeted assessment and feedback at a higher level of granularity. In addition, we show that multiple hyperparameters are important for achieving good performance, and our hyperparameter analysis serves to aid future work in RNN-based activity recognition.
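The edit-distance metric reported above is a generic sequence metric; as an illustration, here is a minimal sketch of edit distance over predicted vs. ground-truth segment-label sequences. Normalizing by the longer sequence is an assumption for illustration; the paper's exact normalization convention is not stated here.

```python
def edit_distance(pred, ref):
    """Levenshtein distance between two label sequences (rolling-array DP)."""
    m, n = len(pred), len(ref)
    dp = list(range(n + 1))  # dp[j] = distance between pred[:i] and ref[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,       # deletion from pred
                        dp[j - 1] + 1,   # insertion into pred
                        prev + (pred[i - 1] != ref[j - 1]))  # substitution
            prev = cur
    return dp[n]

def normalized_edit_distance(pred, ref):
    """Edit distance as a fraction of the longer sequence length."""
    return edit_distance(pred, ref) / max(len(pred), len(ref), 1)
```

With this normalization the score lies in [0, 1], where 0 means the predicted segment-label sequence matches the ground truth exactly.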


Subject(s)
Education, Medical, Graduate/methods , General Surgery/education , Neural Networks, Computer , Pattern Recognition, Automated/methods , Robotics/education , Suture Techniques/education , Gestures , Humans , Robotics/methods
2.
Surg Endosc ; 32(1): 62-72, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28634632

ABSTRACT

BACKGROUND: While it is often claimed that virtual reality (VR) training systems can offer self-directed, mentor-free skill learning via the system's performance metrics (PM), no studies have yet provided evidence-based confirmation. This experimental study investigated the extent to which trainees achieved self-directed learning with a current VR simulator, and whether additional mentoring improved skill learning, skill transfer, and cognitive workload in robotic surgery simulation training.

METHODS: Thirty-two surgical trainees were randomly assigned to either the Control Group (CG) or the Experiment Group (EG). While the CG participants reviewed the PM at their discretion, the EG participants received explanations of the PM and instructions on how to improve their scores. Each subject completed 5 weeks of training using four simulation tasks. Pre- and post-training data were collected using both a simulator and a robot; peri-training data were collected after each session. Skill learning, time spent on PM (TPM), and cognitive workloads were compared between groups.

RESULTS: After the simulation training, the CG showed substantially lower simulation task scores (82.9 ± 6.0) than the EG (93.2 ± 4.8). Both groups improved their performance on physical model tasks with the actual robot, but the EG showed greater improvement in two tasks. The EG exhibited lower global mental workload/distress, higher engagement, and a better understanding of how to use the PM to improve performance. The EG's TPM was initially long but shortened substantially as the group became familiar with the PM.

CONCLUSION: Our study demonstrated that the current VR simulator offered limited self-directed skill learning, and that additional mentoring still played an important role in improving robotic surgery simulation training.


Subject(s)
Clinical Competence/statistics & numerical data , Internship and Residency/methods , Robotic Surgical Procedures/education , Simulation Training/methods , Virtual Reality , Adult , Cognition , Humans , Mentoring/methods , Mentors , Surveys and Questionnaires , Workload
3.
Int J Comput Assist Radiol Surg ; 11(6): 987-96, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27072835

ABSTRACT

PURPOSE: Easy acquisition of surgical data opens many opportunities to automate skill evaluation and teaching. Current technology for searching tool motion data for surgical activity segments of interest is limited by the need for manual pre-processing, which can be prohibitive at scale. We developed a content-based information retrieval method, query-by-example (QBE), to automatically detect activity segments that match a query within long surgical data recordings.

METHODS: The example segment of interest (query) and the surgical data recording (target trial) are time series of kinematics. Our approach includes an unsupervised feature learning module using a stacked denoising autoencoder (SDAE), two scoring modules based on asymmetric subsequence dynamic time warping (AS-DTW) and template matching, respectively, and a detection module. A distance matrix of the query against the trial is computed using the SDAE features, followed by AS-DTW combined with template scoring, to generate a ranked list of candidate subsequences (substrings). To evaluate the quality of the ranked list against the ground truth, thresholding of conventional DTW distances and bipartite matching are applied. We computed the recall, precision, F1-score, and a Jaccard index-based score in three experimental setups. We evaluated our QBE method using a suture throw maneuver as the query, on two tool motion datasets (JIGSAWS and MISTIC-SL) captured in a training laboratory.

RESULTS: We observed recall of 93, 90, and 87% and precision of 93, 91, and 88% in the same surgeon, same trial (SSST), same surgeon, different trial (SSDT), and different surgeon (DS) experimental setups on JIGSAWS, and recall of 87, 81, and 75% and precision of 72, 61, and 53% in the SSST, SSDT, and DS setups on MISTIC-SL, respectively.

CONCLUSION: We developed a novel, content-based information retrieval method to automatically detect multiple instances of an activity within long surgical recordings. Our method demonstrated adequate recall across datasets of differing complexity and across experimental conditions.
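The subsequence-matching idea behind the scoring above can be sketched as follows. This is a generic subsequence DTW, not the paper's exact asymmetric variant or its SDAE feature space: because a match may start and end anywhere in the target, the first query frame is allowed to align with any target frame (zero entry cost along the first row), and the best match ends at the target column with minimal accumulated cost.

```python
import numpy as np

def subsequence_dtw(query, target):
    """Subsequence DTW: align `query` (m x d) against any span of
    `target` (n x d). Returns (best accumulated cost, end index in target)."""
    q = np.asarray(query, dtype=float)
    t = np.asarray(target, dtype=float)
    m, n = len(q), len(t)
    # Pairwise Euclidean distances between query frames and target frames.
    dist = np.linalg.norm(q[:, None, :] - t[None, :, :], axis=-1)
    acc = np.full((m, n), np.inf)
    acc[0] = dist[0]  # free start: the query may begin anywhere in the target
    for i in range(1, m):
        acc[i, 0] = acc[i - 1, 0] + dist[i, 0]
        for j in range(1, n):
            acc[i, j] = dist[i, j] + min(acc[i - 1, j],      # stretch query
                                         acc[i, j - 1],      # stretch target
                                         acc[i - 1, j - 1])  # diagonal step
    end = int(np.argmin(acc[-1]))
    return float(acc[-1, end]), end
```

Scanning the minimum of the last row over all end columns is what lets a short query be ranked against every candidate subsequence of a long trial in one pass.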


Subject(s)
Algorithms , Information Storage and Retrieval/methods , Surgical Procedures, Operative , Humans
5.
Surg Endosc ; 28(2): 456-65, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24196542

ABSTRACT

BACKGROUND: We conducted this study to investigate how physical and cognitive ergonomic workloads differ between robotic and laparoscopic surgery, and whether any ergonomic differences are related to surgeons' robotic surgery skill level. Our hypothesis was that the unique features of robotic surgery would yield skill-related benefits: substantially lower physical and cognitive workload with uncompromised task performance.

METHODS: Thirteen MIS surgeons were recruited for this institutional review board-approved study and divided into three groups based on their robotic surgery experience: laparoscopy experts with no robotic experience, novices with no or little robotic experience, and robotic experts. Each participant performed six surgical training tasks using both traditional laparoscopy and robotic surgery. Physical workload was assessed using surface electromyography from eight muscles (biceps, triceps, deltoid, trapezius, flexor carpi ulnaris, extensor digitorum, thenar compartment, and erector spinae). Mental workload was assessed using the NASA-TLX.

RESULTS: The cumulative muscular workload (CMW) from the biceps and the flexor carpi ulnaris was significantly lower with robotic surgery than with laparoscopy (p < 0.05). Interestingly, the CMW from the trapezius was significantly higher with robotic surgery than with laparoscopy (p < 0.05), but this difference was observed only in laparoscopic experts (LEs) and robotic surgery novices. NASA-TLX analysis showed that both robotic surgery novices and experts reported lower global workload with robotic surgery than with laparoscopy, whereas LEs showed higher global workload with robotic surgery (p > 0.05). Robotic surgery experts and novices had significantly higher performance scores with robotic surgery than with laparoscopy (p < 0.05).

CONCLUSIONS: This study demonstrated that the physical and cognitive ergonomics of robotic surgery were significantly less challenging. Additionally, several ergonomic components were skill-related: robotic experts could benefit the most from the ergonomic advantages of robotic surgery. These results emphasize the need for well-structured training and well-defined ergonomic guidelines to maximize the benefits of robotic surgery.
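The NASA-TLX global workload score referenced above is conventionally a weighted mean of six subscale ratings, with weights derived from 15 pairwise comparisons between the subscales. A sketch of that standard scoring follows; whether this study used the weighted version or the unweighted "raw TLX" variant is not stated in the abstract.

```python
def nasa_tlx_global(ratings, weights):
    """Weighted NASA-TLX global workload score.

    ratings: dict mapping each of the six subscales to a 0-100 rating.
    weights: dict mapping each subscale to its tally (0-5) from the
             15 pairwise comparisons; the tallies must sum to 15.
    """
    scales = ("mental", "physical", "temporal",
              "performance", "effort", "frustration")
    assert sum(weights[s] for s in scales) == 15, "pairwise tallies must sum to 15"
    # Weighted mean: each rating counts in proportion to how often its
    # subscale was judged the more important contributor to workload.
    return sum(ratings[s] * weights[s] for s in scales) / 15
```

When all six tallies are equal the weighted score reduces to the plain mean of the ratings, which is why raw TLX and weighted TLX often agree closely in practice.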


Subject(s)
Cognition/physiology , Ergonomics/standards , Forearm/physiology , Laparoscopy/instrumentation , Muscle, Skeletal/physiology , Robotics/standards , Workload , Electromyography , Equipment Design , Humans , Laparoscopy/standards