Results 1 - 6 of 6
1.
Comput Methods Programs Biomed ; 212: 106452, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34688174

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic surgical workflow recognition is an essential step in developing context-aware computer-assisted surgical systems. Video recordings of surgeries are becoming widely accessible, as the operative field view is captured during laparoscopic surgeries. Head- and ceiling-mounted cameras are also increasingly used to record videos in open surgeries. This makes video a common choice for surgical workflow recognition. Additional modalities, such as kinematic data captured during robot-assisted surgeries, could also improve workflow recognition. This paper presents the design and results of the MIcro-Surgical Anastomose Workflow recognition on training sessions (MISAW) challenge, whose objective was to develop workflow recognition models based on kinematic data and/or videos. METHODS: The MISAW challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels, composed of videos, kinematics, and workflow annotations. The annotations described the sequences at three granularity levels: phase, step, and activity. Four tasks were proposed to the participants: three addressed the recognition of surgical workflow at a single granularity level, while the fourth addressed the recognition of all granularity levels within one model. We used the average application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric, which takes unbalanced classes into account and is more clinically relevant than a frame-by-frame score. RESULTS: Six teams participated in at least one task. All submissions employed deep learning models, such as convolutional neural networks (CNN), recurrent neural networks (RNN), or a combination of both. The best models achieved accuracy above 95%, 80%, 60%, and 75% for phase, step, activity, and multi-granularity recognition, respectively. RNN-based models outperformed CNN-based ones, and models dedicated to a single granularity level outperformed the multi-granularity model, except for activity recognition. CONCLUSION: For the higher granularity levels, the best models achieved recognition rates that may be sufficient for applications such as prediction of remaining surgical time. For activities, however, the recognition rate remains too low for clinical applications. The MISAW data set is publicly available at http://www.synapse.org/MISAW to encourage further research in surgical workflow recognition.
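The evaluation metric mentioned above, AD-Accuracy, averages per-class performance so that rare phases weigh as much as frequent ones. The following sketch illustrates only the balanced-accuracy core of such a metric on a toy frame-by-frame phase sequence; the function name, the toy labels, and the omission of the challenge's application-dependent transition tolerance are assumptions, not the official MISAW implementation.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall over the labels present in the ground truth.

    Hypothetical sketch: the official AD-Accuracy additionally applies an
    application-dependent tolerance around transitions, not reproduced here.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = []
    for label in np.unique(y_true):
        mask = y_true == label                 # frames belonging to this class
        recalls.append(np.mean(y_pred[mask] == label))
    return float(np.mean(recalls))

# Toy frame-by-frame phase sequence (labels are illustrative).
gt   = ["suture"] * 8 + ["knot"] * 2
pred = ["suture"] * 7 + ["knot"] * 3
print(balanced_accuracy(gt, pred))  # 0.9375 = (7/8 + 2/2) / 2
```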


Subject(s)
Laparoscopy , Robotic Surgical Procedures , Anastomosis, Surgical , Humans , Neural Networks, Computer , Workflow
2.
Simul Healthc ; 16(1): 67-72, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-32502122

ABSTRACT

INTRODUCTION: The objective of this study was to identify objective metrics that capture the effect of a sonographer's expertise on ultrasound probe trajectories during obstetric ultrasound training procedures. METHODS: This prospective observational study was conducted in the Department of Obstetrics and Gynecology at Rennes University Hospital. We evaluated a panel of sonographers (expert, intermediate, and novice) performing 3 tasks (brain, heart, and spine) on an obstetric ultrasound simulator (Scantrainer; Medaphor, Cardiff, UK). The trajectories of the probe were logged and recorded by custom data acquisition software. We computed trajectory metrics (duration, path length, average velocity, average acceleration, jerk, working volume) to compare the 3 groups and identify discriminating metrics. RESULTS: A total of 33 participants were enrolled: 5 experts, 12 intermediates, and 16 novices. Duration, velocity, acceleration, and jerk discriminated among the 3 levels of expertise for the brain and spine tasks. Working volume was discriminatory for the brain and heart tasks. Path length was discriminatory for the brain task. CONCLUSIONS: Our results suggest a relationship between a sonographer's level of expertise and probe trajectory metrics. Such measurements could serve as indicators of sonographer proficiency and contribute to automatic analysis of probe trajectories to evaluate the quality of both the sonography and the sonographer.
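The trajectory metrics listed in this abstract are standard kinematic quantities. The sketch below shows one plausible way to compute them from a sampled probe trajectory; the finite-difference formulas and the bounding-box proxy for working volume are assumptions, not the study's exact definitions.

```python
import numpy as np

def probe_trajectory_metrics(positions, timestamps):
    """Illustrative metrics over a 3-D probe trajectory.

    positions: (N, 3) array of probe positions in metres.
    timestamps: length-N array of sample times in seconds.
    """
    positions = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    dt = np.diff(t)
    steps = np.diff(positions, axis=0)                     # per-sample displacement
    velocity = steps / dt[:, None]                         # m/s
    acceleration = np.diff(velocity, axis=0) / dt[1:, None]
    jerk = np.diff(acceleration, axis=0) / dt[2:, None]
    return {
        "duration_s": float(t[-1] - t[0]),
        "path_length_m": float(np.linalg.norm(steps, axis=1).sum()),
        "avg_velocity_m_s": float(np.linalg.norm(velocity, axis=1).mean()),
        "avg_acceleration_m_s2": float(np.linalg.norm(acceleration, axis=1).mean()),
        "avg_jerk_m_s3": float(np.linalg.norm(jerk, axis=1).mean()),
        # Axis-aligned bounding-box volume as a simple working-volume proxy.
        "working_volume_m3": float(np.prod(positions.max(0) - positions.min(0))),
    }

# Toy trajectory: 5 samples taken 0.1 s apart.
pos = [[0, 0, 0], [0.01, 0, 0], [0.02, 0, 0.01], [0.03, 0.01, 0.01], [0.04, 0.01, 0.02]]
ts = [0.0, 0.1, 0.2, 0.3, 0.4]
print(probe_trajectory_metrics(pos, ts))
```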


Subject(s)
Gynecology , Physicians , Clinical Competence , Female , Humans , Pregnancy , Ultrasonography , Ultrasonography, Prenatal
3.
Int J Comput Assist Radiol Surg ; 14(10): 1663-1671, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31177422

ABSTRACT

PURPOSE: Annotation of surgical activities is increasingly important for many recent applications, such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, in particular for training machine learning methods to provide intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is extremely costly and time-consuming, creating a major bottleneck for the above-mentioned technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process. METHODS: Meaningful information about interactions between objects is inherently available in virtual reality environments. We propose a strategy to automatically convert this information into annotations and output individual surgical process models. VALIDATION: We implemented our approach in a peg-transfer task simulator and compared it to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability. RESULTS AND CONCLUSION: On average, manual annotation took more than 12 min per minute of video to achieve low-level physical activity annotation, whereas automatic annotation was achieved in less than a second for the same video period. We also show that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method eliminates thanks to its high precision and reproducibility.
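The core idea, converting simulator interaction events into activity annotations, can be illustrated with a few lines of code. The event schema, action vocabulary, and tuple format below are hypothetical; the authors' simulator logs and surgical process model format are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A contact event logged by a hypothetical VR peg-transfer simulator."""
    t_start: float
    t_end: float
    instrument: str
    target: str
    kind: str  # e.g. "grasp" or "release"

def events_to_annotations(events):
    """Turn simulator events into <start, stop, action, instrument, target> tuples."""
    actions = {"grasp": "hold", "release": "drop"}       # illustrative mapping
    return [
        (e.t_start, e.t_end, actions.get(e.kind, e.kind), e.instrument, e.target)
        for e in sorted(events, key=lambda e: e.t_start)
    ]

log = [
    Event(2.4, 5.1, "left grasper", "peg 3", "grasp"),
    Event(5.1, 5.3, "left grasper", "peg 3", "release"),
]
for row in events_to_annotations(log):
    print(row)
```

Because every interaction is already known to the simulator, such a conversion is deterministic and instantaneous, which is what allows the sub-second annotation time reported in the abstract.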


Subject(s)
Machine Learning , Models, Anatomic , Surgery, Computer-Assisted/methods , Humans , Operating Rooms , Reproducibility of Results , Virtual Reality
4.
Artif Intell Med ; 91: 3-11, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30172445

ABSTRACT

OBJECTIVE: The analysis of surgical motion has received growing interest with the development of devices that allow its automatic capture. In this context, advanced surgical training systems make automated assessment of surgical trainees possible. Automatic and quantitative evaluation of surgical skills is an important step toward improving surgical patient care. MATERIAL AND METHOD: In this paper, we present an approach for the discovery and ranking of discriminative and interpretable patterns of surgical practice from recordings of surgical motion. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach decomposes continuous kinematic data into a set of overlapping gestures represented as strings (bag of words), for which we compute a comparative numerical statistic (tf-idf) that enables the discovery of discriminative gestures via their relative occurrence frequency. RESULTS: We carried out experiments on three surgical motion datasets. The results show that the patterns identified by the proposed method can be used to accurately classify individual gestures, skill levels, and surgical interfaces. We also show how the patterns provide detailed feedback for trainee skill assessment. CONCLUSIONS: The proposed approach is a useful addition to existing learning tools for surgery, as it indicates which parts of an exercise were used to classify the attempt as correct or incorrect.
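The bag-of-words plus tf-idf step described above is a standard text-mining statistic applied to discretised gestures. A minimal sketch follows, assuming each trial has already been converted into a sequence of gesture symbols; the symbols and the unsmoothed tf-idf formula are illustrative choices, not the paper's exact pipeline.

```python
import math
from collections import Counter

def tf_idf(documents):
    """tf-idf over bag-of-words documents.

    Here each 'document' is one trial encoded as a list of gesture symbols.
    Returns one {symbol: score} dict per document.
    """
    n_docs = len(documents)
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))                  # in how many trials each symbol appears
    scores = []
    for doc in documents:
        tf = Counter(doc)
        total = len(doc)
        scores.append({
            word: (count / total) * math.log(n_docs / doc_freq[word])
            for word, count in tf.items()
        })
    return scores

# Two toy trials encoded as strings of gesture symbols.
trials = [["reach", "grasp", "pull", "grasp"], ["reach", "push", "push"]]
for trial_scores in tf_idf(trials):
    print(trial_scores)
```

Symbols that occur frequently in one group of trials but rarely elsewhere receive high scores, which is what makes the discovered patterns discriminative and interpretable.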


Subject(s)
Gestures , Pattern Recognition, Automated/methods , Surgical Procedures, Operative/education , Algorithms , Biomechanical Phenomena , Clinical Competence , Formative Feedback , Humans , Task Performance and Analysis , Time and Motion Studies
5.
Int J Comput Assist Radiol Surg ; 13(1): 13-24, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28914409

ABSTRACT

PURPOSE: Teleoperated robotic systems are nowadays routinely used for specific interventions. The benefits of robotic training courses are already acknowledged by the community, since manipulation of such systems requires dedicated training. However, robotic surgical simulators remain expensive and require a dedicated human-machine interface. METHODS: We present a low-cost contactless optical sensor, the Leap Motion, as a novel control device for the RAVEN-II robot. We compare peg manipulations during a training task with a contact-based device, the electro-mechanical Sigma.7. We perform two complementary analyses to quantitatively assess the performance of each control method: a metric-based comparison and a novel unsupervised spatiotemporal trajectory clustering. RESULTS: We show that contactless control does not offer manipulability as good as the contact-based device. While part of the metric-based evaluation favors the mechanical control over the contactless one, the unsupervised spatiotemporal clustering of surgical tool motions highlights specific signatures induced by each human-machine interface. CONCLUSIONS: Even though the current implementation of contactless control does not surpass manipulation with a high-standard mechanical interface, we demonstrate that complete control of the surgical instruments using the optical sensor is feasible. The proposed method allows fine tracking of the trainee's hands to execute dexterous laparoscopic training gestures. This work is promising for the development of future human-machine interfaces dedicated to robotic surgical training systems.
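To give a feel for how tool-motion "signatures" of different interfaces can emerge from unsupervised analysis, the sketch below resamples trajectories to a common length and groups them with generic k-means. This is a stand-in illustration only, not the paper's novel spatiotemporal clustering method; the resampling scheme, feature flattening, and toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def resample(traj, n_points=50):
    """Resample an (N, 3) tool trajectory to n_points by linear interpolation
    over normalised arc length."""
    traj = np.asarray(traj, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s = s / s[-1]
    grid = np.linspace(0.0, 1.0, n_points)
    return np.column_stack([np.interp(grid, s, traj[:, k]) for k in range(3)])

def cluster_trajectories(trajectories, n_clusters=2):
    """Generic k-means over resampled, flattened trajectories."""
    features = np.stack([resample(t).ravel() for t in trajectories])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

# Toy data: two noisy straight-line motions and two noisy arcs.
rng = np.random.default_rng(0)
line = lambda: np.column_stack([np.linspace(0, 1, 30), np.zeros(30), np.zeros(30)]) + rng.normal(0, 0.01, (30, 3))
arc = lambda: np.column_stack([np.linspace(0, 1, 30), np.sin(np.linspace(0, np.pi, 30)), np.zeros(30)]) + rng.normal(0, 0.01, (30, 3))
print(cluster_trajectories([line(), line(), arc(), arc()]))  # two groups (labels may be permuted)
```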


Subject(s)
Robotic Surgical Procedures/education , User-Computer Interface , Gestures , Humans , Robotic Surgical Procedures/methods
6.
IEEE Trans Biomed Eng ; 63(6): 1280-91, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26513773

ABSTRACT

Dexterity and procedural knowledge are two critical skills that surgeons need to master to perform accurate and safe surgical interventions. However, current training systems do not provide an in-depth analysis of surgical gestures that would allow these skills to be precisely assessed. Our objective is to develop a method for the automatic and quantitative assessment of surgical gestures. To reach this goal, we propose a new unsupervised algorithm that automatically segments kinematic data from robotic training sessions. Without relying on any prior information or model, the algorithm detects critical points in the kinematic data that define relevant spatio-temporal segments. By associating these segments, we obtain an accurate recognition of the gestures involved in the surgical training task. We then perform an advanced analysis and assess our algorithm using datasets recorded during real expert training sessions. Comparing our approach with manual annotations of the surgical gestures, we observe 97.4% accuracy for the learning purpose and an average matching score of 81.9% for the fully automated gesture recognition process. Our results show that trainees' workflows can be followed and surgical gestures can be automatically evaluated against an expert database. This approach aims to improve training efficiency by minimizing the learning curve.
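One simple intuition behind critical-point segmentation of kinematic data is that gesture boundaries often coincide with moments where the tool nearly stops. The sketch below segments a recording at local minima of tool speed; the speed criterion, threshold, and toy data are illustrative assumptions, not the unsupervised algorithm proposed in the paper.

```python
import numpy as np

def segment_by_velocity_minima(positions, timestamps, min_speed=0.005):
    """Split a kinematic recording at low-speed local minima of the tool speed.

    Returns a list of (start_index, end_index) segments over the samples.
    """
    positions = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / np.diff(t)
    # A sample is a critical point if it is a strict local minimum of speed below the threshold.
    is_min = (speed[1:-1] < speed[:-2]) & (speed[1:-1] < speed[2:]) & (speed[1:-1] < min_speed)
    boundaries = np.flatnonzero(is_min) + 1
    cut_points = [0, *boundaries.tolist(), len(positions) - 1]
    return [(cut_points[i], cut_points[i + 1]) for i in range(len(cut_points) - 1)]

# Toy 1-D recording along x: fast move, brief near-stop, fast move (dt = 1 s).
step = np.concatenate([np.full(10, 0.10),
                       np.linspace(0.10, 0.001, 5),   # decelerate to a near-stop
                       np.linspace(0.02, 0.10, 5),    # accelerate again
                       np.full(10, 0.10)])
pos = np.column_stack([np.concatenate([[0.0], np.cumsum(step)]),
                       np.zeros(31), np.zeros(31)])
print(segment_by_velocity_minima(pos, np.arange(31.0)))  # [(0, 14), (14, 30)]
```

In the paper's setting, the detected segments would then be associated and matched against an expert database to recognise the underlying gestures.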


Subject(s)
Gestures , Pattern Recognition, Automated/methods , Robotic Surgical Procedures/education , Robotic Surgical Procedures/methods , Unsupervised Machine Learning , Humans