Results 1 - 18 of 18
1.
Eur J Obstet Gynecol Reprod Biol ; 298: 13-17, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38705008

ABSTRACT

INTRODUCTION: This study aims to investigate probe motion during full mid-trimester anomaly scans. METHODS: We undertook a prospective, observational study of obstetric sonographers at a UK University Teaching Hospital. We prospectively collected full-length video recordings of routine second-trimester anomaly scans synchronized with probe trajectory tracking data acquired during the scan. Videos were reviewed and trajectories analyzed using duration, path metrics (path length, velocity, acceleration, jerk, and volume) and angular metrics (spectral arc, angular area, angular velocity, angular acceleration, and angular jerk). These trajectories were then compared according to the participant's level of expertise, fetal presentation, and patient BMI. RESULTS: A total of 17 anomaly scans were recorded. The average velocity of the probe was 12.9 ± 3.4 mm/s for the consultants versus 24.6 ± 5.7 mm/s for the fellows (p = 0.02), the average acceleration 170.4 ± 26.3 mm/s² versus 328.9 ± 62.7 mm/s² (p = 0.02), the average jerk 7491.7 ± 1056.1 mm/s³ versus 14944.1 ± 3146.3 mm/s³ (p = 0.02), and the working volume 9×10⁶ ± 4×10⁶ mm³ versus 29×10⁶ ± 11×10⁶ mm³ (p = 0.03), respectively. The angular metrics did not differ significantly according to the participant's level of expertise, fetal presentation, or patient BMI. CONCLUSION: Some differences in the probe path metrics (velocity, acceleration, jerk and working volume) were observed according to the operator's level of expertise.
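
As an illustration of how such path metrics can be derived from tracked probe positions, the sketch below computes path length, mean velocity, acceleration, jerk, and a convex-hull working volume with NumPy/SciPy; the function name, sampling rate, and the convex-hull definition of working volume are assumptions for this example, not details taken from the study.

```python
import numpy as np
from scipy.spatial import ConvexHull

def path_metrics(positions, fs=30.0):
    """Compute simple path metrics from an (N, 3) array of probe positions in mm.

    fs is the tracking sample rate in Hz (assumed value, not from the study).
    """
    dt = 1.0 / fs
    steps = np.diff(positions, axis=0)                 # displacement per sample
    path_length = np.linalg.norm(steps, axis=1).sum()  # total path length (mm)

    velocity = np.gradient(positions, dt, axis=0)      # mm/s
    acceleration = np.gradient(velocity, dt, axis=0)   # mm/s^2
    jerk = np.gradient(acceleration, dt, axis=0)       # mm/s^3

    speed = np.linalg.norm(velocity, axis=1)
    return {
        "path_length_mm": path_length,
        "mean_velocity_mm_s": speed.mean(),
        "mean_acceleration_mm_s2": np.linalg.norm(acceleration, axis=1).mean(),
        "mean_jerk_mm_s3": np.linalg.norm(jerk, axis=1).mean(),
        # working volume approximated by the convex hull of the visited positions
        "working_volume_mm3": ConvexHull(positions).volume,
    }

# Illustrative run on a synthetic 10 s trajectory sampled at 30 Hz
rng = np.random.default_rng(0)
demo = np.cumsum(rng.normal(0, 1.0, (300, 3)), axis=0)
print(path_metrics(demo))
```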

2.
J Exp Orthop ; 10(1): 138, 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38095746

ABSTRACT

PURPOSE: Limited data exist on the actual transfer of skills learned using a virtual reality (VR) simulator for arthroscopy training, because studies have mainly focused on VR performance improvement and not on transfer to the real world (transfer validity). The purpose of this single-blinded, controlled trial was to objectively investigate transfer validity in the context of initial knee arthroscopy training. METHODS: For this study, 36 junior resident orthopaedic surgeons (postgraduate year one and year two) without prior experience in arthroscopic surgery were enrolled to receive standard knee arthroscopy surgery training (NON-VR group) or standard training plus training on a hybrid virtual reality knee arthroscopy simulator (1 h/month) (VR group). At inclusion, all participants completed a questionnaire on their current arthroscopic technical skills. After 6 months of training, both groups performed three exercises that were evaluated independently by two blinded trainers: i) arthroscopic partial meniscectomy on a bench-top knee simulator; ii) supervised diagnostic knee arthroscopy on a cadaveric knee; and iii) supervised knee partial meniscectomy on a cadaveric knee. Training level was determined with the Arthroscopic Surgical Skill Evaluation Tool (ASSET) score. RESULTS: Overall, performance (ASSET scores) was better in the VR group than in the NON-VR group (difference in global scores: p < 0.001; in bench-top meniscectomy scores: p = 0.03; in diagnostic knee arthroscopy on a cadaveric knee scores: p = 0.04; and in partial meniscectomy on a cadaveric knee scores: p = 0.02). Subgroup analysis by postgraduate year showed that the year-one NON-VR subgroup performed worse than the other subgroups, regardless of the exercise. CONCLUSION: This study showed the transferability of the technical skills acquired by novice residents on a hybrid virtual reality simulator to bench-top and cadaveric models. Surgical skills acquired with a VR arthroscopy simulator might safely improve arthroscopy competence in the operating room, while also helping to standardise resident training and follow residents' progress.

3.
Article in English | MEDLINE | ID: mdl-38083107

ABSTRACT

Robotic surgery represents a major breakthrough in the evolution of medical technology. Accordingly, efficient skill training and assessment methods should be developed to meet surgeons' need to acquire such robotic skills safely over a relatively short learning curve. Unlike conventional training and assessment methods, we aim to explore the surface electromyography (sEMG) signal during the training process in order to obtain semantic and interpretable information that helps trainees better understand and improve their training performance. As a preliminary study, motion primitive recognition based on the sEMG signal is studied in this work. Using machine learning (ML) techniques, we show that sEMG-based motion recognition is feasible and promising for hand motions along the 3 Cartesian axes in the virtual reality (VR) environment of a commercial robotic surgery training platform, and it will hence serve as the basis for a new robotic surgical skill assessment criterion and for training guidance based on muscle activity information. Because certain motion patterns were recognized less accurately than others, more data collection and deep learning-based analysis will be carried out to further improve recognition accuracy in future research.
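
As a rough illustration of the approach described above, the sketch below extracts classic time-domain sEMG features per window and trains a classifier to recognize motions along the three Cartesian axes; the feature set, window length, channel count, and choice of a random forest are assumptions, not details from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def semg_features(window):
    """Classic time-domain sEMG features for one (samples, channels) window."""
    mav = np.mean(np.abs(window), axis=0)                 # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))           # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)  # waveform length
    zc = np.sum(np.diff(np.signbit(window).astype(np.int8), axis=0) != 0, axis=0)  # zero crossings
    return np.concatenate([mav, rms, wl, zc])

def make_dataset(windows, labels):
    """windows: list of (samples, channels) arrays; labels: motion class per window
    (e.g. translations along the 3 Cartesian axes)."""
    X = np.vstack([semg_features(w) for w in windows])
    y = np.asarray(labels)
    return X, y

# Example on synthetic 8-channel sEMG windows (illustration only)
rng = np.random.default_rng(0)
windows = [rng.standard_normal((200, 8)) for _ in range(90)]
labels = np.repeat(["x", "y", "z"], 30)
X, y = make_dataset(windows, labels)
scores = cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```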


Subject(s)
Robotic Surgical Procedures , Robotics , Virtual Reality , Female , Male , Humans , Robotic Surgical Procedures/education , Electromyography/methods , Motion
4.
Article in English | MEDLINE | ID: mdl-37406465

ABSTRACT

INTRODUCTION: Environmental factors in the operating room during cesarean sections are likely important for both women/birthing people and their babies, but there is currently a lack of rigorous literature evaluating them. The principal aim of this study was to systematically examine published studies on the physical environment of the obstetrical operating room during cesarean sections and its impact on maternal and neonatal outcomes. The secondary objective was to identify the sensors used to investigate the operating room environment during cesarean sections. METHODS: In this literature review, we searched the MEDLINE database using the following keywords: Cesarean section AND (operating room environment OR Noise OR Music OR Video recording OR Light level OR Gentle OR Temperature OR Motion Data). Eligible studies had to be published in English or French within the past 10 years and had to investigate the operating room environment during cesarean sections in women. For each study, we reported which aspects of the physical environment were investigated in the OR (i.e., noise, music, movement, light, or temperature) and the sensors involved. RESULTS: Of a total of 105 studies screened, we selected 8 articles from title and abstract in PubMed. This small number shows that the field is poorly investigated. The most evaluated environmental factors to date are operating room noise and temperature, and the presence of music. Few studies used advanced sensors in the operating room to evaluate environmental factors in a more nuanced and complete way. Two studies concern sound level, four concern music, one concerns temperature, and one analyzed the number of entrances/exits into the OR. No study analyzed light level or more fine-grained movement data. CONCLUSIONS: Main findings include an increase in noise and motion at specific time-points, for example during delivery or anaesthesia; a positive impact of music on parents and staff alike; and that a warmer theatre is better for babies but more uncomfortable for surgeons.


Subject(s)
Cesarean Section , Obstetrics , Infant, Newborn , Pregnancy , Humans , Female , Operating Rooms , Temperature , Mothers
5.
Int J Comput Assist Radiol Surg ; 18(9): 1697-1705, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37286642

ABSTRACT

PURPOSE: Simulation-based training allows surgical skills to be learned safely. Most virtual reality-based surgical simulators address technical skills without considering non-technical skills, such as gaze use. In this study, we investigated surgeons' visual behavior during virtual reality-based surgical training in which visual guidance is provided. Our hypothesis was that gaze distribution in the environment is correlated with the simulator's technical skills assessment. METHODS: We recorded 25 surgical training sessions on an arthroscopic simulator. Trainees were equipped with a head-mounted eye-tracking device. A U-Net was trained on two sessions to segment three simulator-specific areas of interest (AoI) and the background, in order to quantify gaze distribution. We tested whether the percentage of gazes in those areas was correlated with the simulator's scores. RESULTS: The neural network was able to segment all AoI with a mean Intersection over Union above 94% for each area. The gaze percentage in the AoI differed among trainees. Despite several sources of data loss, we found significant correlations between gaze position and the simulator scores. For instance, trainees obtained better procedural scores when their gaze focused on the virtual assistance (Spearman correlation test, N = 7, r = 0.800, p = 0.031). CONCLUSION: Our findings suggest that visual behavior should be quantified when assessing surgical expertise in simulation-based training environments, especially when visual guidance is provided. Ultimately, visual behavior could be used to quantitatively assess surgeons' learning curves and expertise while training on VR simulators, in a way that complements existing metrics.
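
The reported association can be illustrated with a short sketch correlating, per trainee, the percentage of gaze samples falling in an area of interest with the simulator score using Spearman's rank correlation; the numbers below are made up for illustration.

```python
from scipy.stats import spearmanr

# Percentage of gaze samples falling in the "virtual assistance" area of interest
# and the simulator's procedural score, one value per trainee (illustrative numbers;
# the study reports r = 0.800, p = 0.031 for N = 7 trainees).
gaze_in_assistance_pct = [12.0, 25.3, 8.1, 30.2, 18.7, 22.4, 27.9]
procedural_score = [55, 72, 48, 80, 63, 69, 76]

r, p = spearmanr(gaze_in_assistance_pct, procedural_score)
print(f"Spearman r = {r:.3f}, p = {p:.3f}")
```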


Subject(s)
Simulation Training , Surgeons , Virtual Reality , Humans , Clinical Competence , Education, Medical, Graduate , Learning Curve , Surgeons/education , Computer Simulation , User-Computer Interface
6.
Surg Endosc ; 37(6): 4298-4314, 2023 06.
Article in English | MEDLINE | ID: mdl-37157035

ABSTRACT

BACKGROUND: Annotated data are foundational to applications of supervised machine learning. However, there seems to be a lack of common language in the field of surgical data science. The aim of this study is to review the annotation process and the semantics used in the creation of surgical process models (SPM) for minimally invasive surgery videos. METHODS: For this systematic review, we reviewed articles indexed in the MEDLINE database from January 2000 until March 2022. We selected articles using surgical video annotations to describe a surgical process model in the field of minimally invasive surgery. We excluded studies focusing only on instrument detection or recognition of anatomical areas. The risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were presented visually in a table using the SPIDER tool. RESULTS: Of the 2806 articles identified, 34 were selected for review. Twenty-two were in the field of digestive surgery, six in ophthalmologic surgery, one in neurosurgery, three in gynecologic surgery, and two in mixed fields. Thirty-one studies (88.2%) were dedicated to phase, step, or action recognition and mainly relied on a very simple formalization (29, 85.2%). Clinical information was lacking in the datasets of studies using available public datasets. The annotation process for the surgical process models was poorly described, and the description of the surgical procedures was highly variable between studies. CONCLUSION: Surgical video annotation lacks a rigorous and reproducible framework. This leads to difficulties in sharing videos between institutions and hospitals because of the different languages used. There is a need to develop and use a common ontology to improve libraries of annotated surgical videos.


Subject(s)
Gynecologic Surgical Procedures , Minimally Invasive Surgical Procedures , Humans , Female , Minimally Invasive Surgical Procedures/methods
7.
Comput Methods Programs Biomed ; 236: 107561, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37119774

ABSTRACT

BACKGROUND AND OBJECTIVE: In order to be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. But with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS: The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations, which described the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric; it takes class imbalance into account and is more clinically relevant than a frame-by-frame score. RESULTS: Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION: The improvement of surgical workflow recognition methods using multiple modalities compared with unimodal methods was significant for all teams. However, the longer execution time required for video/kinematic-based methods (compared to kinematic-only methods) must be considered. Indeed, one must ask whether it is wise to increase computing time by 2000 to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
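
A simplified sketch of the evaluation idea: the challenge's AD-Accuracy averages a balanced accuracy over the phase/step/activity granularities; the exact application-dependent weighting is defined by the challenge organizers, so the plain averaging below is an assumption.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def ad_accuracy(references, predictions):
    """Average balanced accuracy over the phase/step/activity granularities.

    references / predictions: dicts mapping a granularity name to a per-frame
    label sequence. The simple mean over granularities is an assumption; the
    challenge paper defines the exact application-dependent weighting.
    """
    per_granularity = [
        balanced_accuracy_score(references[g], predictions[g])
        for g in ("phase", "step", "activity")
    ]
    return float(np.mean(per_granularity))

# Illustrative call with tiny made-up label sequences
ref = {"phase": [0, 0, 1, 1], "step": [0, 1, 1, 2], "activity": [3, 3, 2, 2]}
pred = {"phase": [0, 1, 1, 1], "step": [0, 1, 2, 2], "activity": [3, 2, 2, 2]}
print(f"AD-Accuracy (simplified): {ad_accuracy(ref, pred):.2f}")
```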


Subject(s)
Algorithms , Robotic Surgical Procedures , Humans , Workflow , Robotic Surgical Procedures/methods
8.
Int J Comput Assist Radiol Surg ; 18(2): 279-288, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36197605

ABSTRACT

PURPOSE: Surgery simulators can be used to learn technical and non-technical skills and to analyse posture. Ergonomic skill can be automatically assessed with a human pose estimation algorithm to help improve the surgeon's work quality. The objective of this study was to analyse the postural behaviour of surgeons and identify expertise-dependent movements. Our hypothesis was that hesitation and the occurrence of surgical instruments interfering with movement (defined as interfering movements) decrease with expertise. MATERIAL AND METHODS: Sixty surgeons with three expertise levels (novice, intermediate, and expert) were recruited. During a training session using an arthroscopic simulator, each participant's movements were video-recorded with an RGB camera. A modified OpenPose algorithm was used to detect the surgeon's joints. The detection frequency of each joint in a specific area was visualized with a heatmap-like approach and used to calculate a mobility score. RESULTS: This analysis allowed surgical movements to be quantified. Overall, the mean mobility score was 0.823, 0.816, and 0.820 for novice, intermediate, and expert surgeons, respectively. The mobility score alone was not enough to identify differences in postural behaviour. A visual analysis of each participant's movements highlighted expertise-dependent interfering movements. CONCLUSION: Video recording and analysis of surgeons' movements is a non-invasive approach to obtain quantitative and qualitative ergonomic information in order to provide feedback during training. Our findings suggest that interfering movements do not decrease with expertise but differ according to the surgeon's level.
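
To illustrate the heatmap-like approach, the sketch below bins 2-D joint detections from a pose estimator into an occupancy grid and derives a simple mobility score; the grid size and the score definition are assumptions and do not reproduce the paper's exact formula.

```python
import numpy as np

def joint_heatmap(keypoints, frame_shape, grid=(32, 32)):
    """Bin 2-D joint detections into an occupancy grid.

    keypoints: (frames, joints, 2) array of (x, y) pixel coordinates from a
    pose estimator such as OpenPose; frame_shape: (height, width).
    """
    h, w = frame_shape
    pts = keypoints.reshape(-1, 2)
    pts = pts[~np.isnan(pts).any(axis=1)]           # drop missed detections
    hist, _, _ = np.histogram2d(
        pts[:, 1], pts[:, 0], bins=grid, range=[[0, h], [0, w]]
    )
    return hist / hist.sum()                        # detection frequency per cell

def mobility_score(heatmap):
    """Illustrative score: fraction of grid cells never visited.
    This is an assumed definition, not the one used in the paper."""
    return float((heatmap == 0).mean())

# Example with random keypoints (30 fps x 60 s, 25 joints, 1280x720 frames)
rng = np.random.default_rng(1)
kps = rng.uniform([0, 0], [1280, 720], size=(1800, 25, 2))
hm = joint_heatmap(kps, frame_shape=(720, 1280))
print(f"mobility score: {mobility_score(hm):.3f}")
```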


Subject(s)
Orthopedic Procedures , Surgeons , Humans , Surgical Instruments , Movement , Ergonomics , Clinical Competence
9.
Surg Endosc ; 36(2): 853-870, 2022 02.
Article in English | MEDLINE | ID: mdl-34750700

ABSTRACT

INTRODUCTION: Robot-assisted laparoscopy is a safe surgical approach, with several studies suggesting correlations between complication rates and the surgeon's technical skills. Surgical skills are usually assessed by questionnaires completed by an expert observer. With the advent of surgical robots, automated surgical performance metrics (APMs), objective measures related to instrument movements, can be computed. The aim of this systematic review was thus to assess APM use in robot-assisted laparoscopic procedures. The primary outcome was the assessment of surgical skills by APMs; the secondary outcomes were the association between APMs and surgeon parameters and the prediction of clinical outcomes. METHODS: A systematic review following the PRISMA guidelines was conducted. The PubMed and Scopus electronic databases were screened with the query "robot-assisted surgery OR robotic surgery AND performance metrics" between January 2010 and January 2021. The quality of the studies was assessed with the Medical Education Research Study Quality Instrument. The study settings, metrics, and applications were analysed. RESULTS: The initial search yielded 341 citations, of which 16 studies were finally included. The study settings were either a simulated virtual reality (VR) environment (4 studies) or a real clinical environment (12 studies). The data used to compute APMs were kinematics (motion tracking) and system and specific event data (actions from the robot console). APMs were used to differentiate expertise levels, and thus validate VR modules, predict outcomes, and integrate datasets for automatic recognition models. APMs were correlated with clinical outcomes in some studies. CONCLUSIONS: APMs constitute an objective approach for assessing technical skills. Evidence of associations between APMs and clinical outcomes remains to be confirmed by further studies, particularly for non-urological procedures. Concurrent validation is also required.


Subject(s)
Laparoscopy , Robotic Surgical Procedures , Robotics , Virtual Reality , Benchmarking , Clinical Competence , Humans , Robotic Surgical Procedures/methods
10.
Orthop Traumatol Surg Res ; 107(8): 103079, 2021 12.
Article in English | MEDLINE | ID: mdl-34597826

ABSTRACT

BACKGROUND: Virtual reality (VR) simulation is particularly suitable for learning arthroscopy skills. Despite significant research, one drawback often outlined is the difficulty in distinguishing performance levels (construct validity) in experienced surgeons. It therefore seems appropriate to explore new performance measures based on probe trajectories instead of the commonly used metrics. HYPOTHESIS: It was hypothesized that greater experience in surgical shoulder arthroscopy would be correlated with better performance on a VR shoulder arthroscopy simulator, and that experienced operators would share similar probe trajectories. MATERIALS & METHODS: After answering standardized questionnaires, 104 trajectories from 52 surgeons divided into 2 cohorts (26 intermediates and 26 experts) were recorded on a shoulder arthroscopy simulator. The procedure analysed was "loose body removal" in a right shoulder joint. Ten metrics were computed from the trajectories, including procedure duration, overall path length, economy of motion, and smoothness. Additionally, Dynamic Time Warping (DTW) was computed on the trajectories for unsupervised hierarchical clustering of the surgeons. RESULTS: Experts were significantly faster (median 70.9 s, interquartile range [56.4-86.3], vs. 116.1 s [82.8-154.2], p < 0.01), more fluid (4.6×10⁵ mm·s⁻³ [3.1×10⁵-7.2×10⁵] vs. 1.5×10⁶ mm·s⁻³ [2.6×10⁶-3.5×10⁶], p = 0.05), and more economical in their motion (19.3 mm² [9.1-25.9] vs. 33.8 mm² [14.8-50.5], p < 0.01), but there was no significant difference in path length (671.4 mm [503.8-846.1] vs. 694.6 mm [467.0-1090.1], p = 0.62). The DTW clustering differentiated two expertise-related groups of trajectories with performance similarities, comprising 48 expert trajectories in the first group, and 52 intermediate and 4 expert trajectories in the second group (sensitivity 92%, specificity 100%). Hierarchical clustering with DTW significantly distinguished expert operators from intermediate operators and found trajectory similarities among 24/26 experts. CONCLUSION: This study demonstrated the construct validity of the VR shoulder arthroscopy simulator within groups of experienced surgeons. With new types of metrics based simply on the simulator's raw trajectories, it was possible to significantly distinguish levels of expertise. We demonstrated that clustering analysis with Dynamic Time Warping was able to reliably discriminate between expert and intermediate operators. CLINICAL RELEVANCE: The results have implications for the future of arthroscopic surgical training and post-graduate accreditation programs using virtual reality simulation. LEVEL OF EVIDENCE: III; prospective comparative study.
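
The trajectory clustering idea can be sketched as follows: pairwise DTW distances between raw probe trajectories feed a hierarchical clustering. The plain DTW implementation, the linkage method, and the synthetic data are illustrative choices, not the study's exact pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    """Plain Dynamic Time Warping between two (N, 3) and (M, 3) trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def cluster_trajectories(trajectories, n_clusters=2):
    """Hierarchical clustering of trajectories from their pairwise DTW distances."""
    k = len(trajectories)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            dist[i, j] = dist[j, i] = dtw_distance(trajectories[i], trajectories[j])
    z = linkage(squareform(dist), method="average")   # linkage choice is an assumption
    return fcluster(z, t=n_clusters, criterion="maxclust")

# Illustrative use with two synthetic "styles" of trajectories
rng = np.random.default_rng(2)
smooth = [np.cumsum(rng.normal(0, 0.5, (80, 3)), axis=0) for _ in range(3)]
jerky = [np.cumsum(rng.normal(0, 3.0, (80, 3)), axis=0) for _ in range(3)]
print(cluster_trajectories(smooth + jerky))
```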


Subject(s)
Simulation Training , Surgeons , Virtual Reality , Arthroscopy/education , Clinical Competence , Computer Simulation , Humans , Prospective Studies , Simulation Training/methods
11.
Comput Methods Programs Biomed ; 212: 106452, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34688174

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic surgical workflow recognition is an essential step in developing context-aware computer-assisted surgical systems. Video recordings of surgeries are becoming widely accessible, as the operative field view is captured during laparoscopic surgeries. Head- and ceiling-mounted cameras are also increasingly being used to record videos in open surgeries. This makes video a common choice for surgical workflow recognition. Additional modalities, such as kinematic data captured during robot-assisted surgeries, could also improve workflow recognition. This paper presents the design and results of the MIcro-Surgical Anastomose Workflow recognition on training sessions (MISAW) challenge, whose objective was to develop workflow recognition models based on kinematic data and/or videos. METHODS: The MISAW challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels. This data set was composed of videos, kinematics, and workflow annotations. The latter described the sequences at three different granularity levels: phase, step, and activity. Four tasks were proposed to the participants: three were related to the recognition of the surgical workflow at one of the three granularity levels, while the last addressed the recognition of all granularity levels in the same model. We used the average application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric; it takes unbalanced classes into account and is more clinically relevant than a frame-by-frame score. RESULTS: Six teams participated in at least one task. All teams employed deep learning models, such as convolutional neural networks (CNN), recurrent neural networks (RNN), or a combination of both. The best models achieved accuracy above 95%, 80%, 60%, and 75%, respectively, for the recognition of phases, steps, activities, and multi-granularity. The RNN-based models outperformed the CNN-based ones, and the models dedicated to a single granularity outperformed the multi-granularity models, except for activity recognition. CONCLUSION: For the higher levels of granularity, the best models had a recognition rate that may be sufficient for applications such as prediction of the remaining surgical time. However, for activities, the recognition rate was still too low for clinical applications. The MISAW data set is publicly available at http://www.synapse.org/MISAW to encourage further research in surgical workflow recognition.
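
As a minimal sketch of an RNN-based recognizer of the kind entered in such challenges, the model below runs a GRU over per-frame kinematic features with one output head per granularity; the feature dimension, hidden size, and class counts are assumptions, not the challenge's specifications.

```python
import torch
import torch.nn as nn

class KinematicWorkflowNet(nn.Module):
    """GRU encoder over per-frame kinematic features with one head per granularity."""
    def __init__(self, n_features=28, hidden=128, n_phases=3, n_steps=7, n_activities=20):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.phase_head = nn.Linear(hidden, n_phases)
        self.step_head = nn.Linear(hidden, n_steps)
        self.activity_head = nn.Linear(hidden, n_activities)

    def forward(self, x):                      # x: (batch, frames, n_features)
        h, _ = self.encoder(x)                 # per-frame hidden states
        return self.phase_head(h), self.step_head(h), self.activity_head(h)

# Illustrative forward pass on random kinematic data
model = KinematicWorkflowNet()
kinematics = torch.randn(2, 500, 28)           # 2 sequences, 500 frames each
phase_logits, step_logits, activity_logits = model(kinematics)
print(phase_logits.shape, step_logits.shape, activity_logits.shape)
```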


Subject(s)
Laparoscopy , Robotic Surgical Procedures , Anastomosis, Surgical , Humans , Neural Networks, Computer , Workflow
12.
Artif Intell Med ; 104: 101837, 2020 04.
Article in English | MEDLINE | ID: mdl-32499005

ABSTRACT

OBJECTIVE: According to a meta-analysis of 7 studies, the median proportion of patients with at least one adverse event during surgery is 14.4%, and a third of those adverse events were preventable. The occurrence of adverse events forces surgeons to implement corrective strategies and thus deviate from the standard surgical process. Therefore, it is clear that the automatic identification of adverse events is a major challenge for patient safety. In this paper, we propose a method enabling us to identify such deviations. We focus on identifying surgeons' deviations from standard surgical processes due to surgical events rather than anatomic specificities. This is particularly challenging, given the high variability in typical surgical procedure workflows. METHODS: We introduce a new approach designed to automatically detect and distinguish surgical process deviations based on multi-dimensional non-linear temporal scaling with a hidden semi-Markov model, using manual annotation of surgical processes. The approach was then evaluated using cross-validation. RESULTS: The best results achieved over 90% accuracy. Recall and precision for event deviations, i.e. those related to adverse events, were below 80% and 40%, respectively. To understand these results, we provide a detailed analysis of the incorrectly detected observations. CONCLUSION: Multi-dimensional non-linear temporal scaling with a hidden semi-Markov model provides promising results for detecting deviations. Our error analysis of the incorrectly detected observations offers several leads for further improving the method. SIGNIFICANCE: Our method demonstrates the feasibility of automatically detecting surgical deviations, which could be used both for skill analysis and for developing situation awareness-based computer-assisted surgical systems.
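
For a rough flavour of state-based deviation detection, the sketch below decodes hidden states from activity-derived features with a standard Gaussian HMM (hmmlearn has no semi-Markov model, so this is a simplified stand-in for the paper's hidden semi-Markov approach) and flags the minority state as a candidate deviation; the data and the flagging rule are illustrative.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic feature vectors: a "standard" workflow segment interrupted by a deviation.
rng = np.random.default_rng(3)
normal = rng.normal(0.0, 1.0, size=(300, 4))      # features of the standard workflow
deviated = rng.normal(3.0, 1.0, size=(40, 4))     # features during a deviation
X = np.vstack([normal[:200], deviated, normal[200:]])

# Simplified stand-in: a 2-state Gaussian HMM instead of a hidden semi-Markov model.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
model.fit(X)
states = model.predict(X)                          # decoded state per observation

# Flag the minority state as a candidate "deviation" segment (illustrative rule only).
deviation_state = np.argmin(np.bincount(states))
print("flagged frames:", np.flatnonzero(states == deviation_state)[:10], "...")
```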


Subject(s)
Laparoscopy , Surgeons , Computer Systems , Humans , Workflow
13.
Int J Comput Assist Radiol Surg ; 14(10): 1663-1671, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31177422

ABSTRACT

PURPOSE: Annotation of surgical activities is becoming increasingly important for many recent applications, such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, especially for training machine learning methods in order to develop intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is extremely costly and time-consuming, creating a major bottleneck for the above-mentioned technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process. METHODS: Meaningful information about interactions between objects is inherently available in virtual reality environments. We propose a strategy to automatically convert this information into annotations in order to produce individual surgical process models as output. VALIDATION: We implemented our approach in a peg-transfer task simulator and compared it to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability. RESULTS AND CONCLUSION: On average, manual annotation took more than 12 min per 1 min of video to achieve low-level physical activity annotation, whereas automatic annotation was achieved in less than a second for the same video duration. We also demonstrated that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method is able to suppress thanks to its high precision and reproducibility.
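
The conversion of simulator interaction events into annotations can be sketched as follows: each grasp event is paired with the next release of the same instrument to yield a timed activity annotation. The event schema and the resulting annotation format are invented for the example and are not the paper's.

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float      # seconds from the start of the exercise
    instrument: str  # e.g. "left_grasper"
    kind: str        # "grasp" or "release"
    target: str      # e.g. "peg_3"

def events_to_annotations(events):
    """Pair each grasp with the next release of the same instrument to produce
    (start, end, instrument, action, target) activity annotations."""
    open_grasps, annotations = {}, []
    for e in sorted(events, key=lambda e: e.time):
        if e.kind == "grasp":
            open_grasps[e.instrument] = e
        elif e.kind == "release" and e.instrument in open_grasps:
            start = open_grasps.pop(e.instrument)
            annotations.append((start.time, e.time, e.instrument, "transfer", start.target))
    return annotations

# Illustrative event log from a peg-transfer exercise
log = [Event(1.2, "left_grasper", "grasp", "peg_3"),
       Event(4.8, "left_grasper", "release", "peg_3"),
       Event(5.1, "right_grasper", "grasp", "peg_1"),
       Event(9.0, "right_grasper", "release", "peg_1")]
print(events_to_annotations(log))
```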


Subject(s)
Machine Learning , Models, Anatomic , Surgery, Computer-Assisted/methods , Humans , Operating Rooms , Reproducibility of Results , Virtual Reality
14.
Int J Comput Assist Radiol Surg ; 14(8): 1449-1459, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31119486

ABSTRACT

PURPOSE: To assess surgical skills in robot-assisted partial nephrectomy (RAPN) with and without surgical navigation (SN). METHODS: We employed an SN system that synchronizes the real-time endoscopic image with a virtual reality three-dimensional (3D) model for RAPN and evaluated the skills of two expert surgeons with regard to the identification and dissection of the renal artery (non-SN group, n = 21 [first surgeon n = 9, second surgeon n = 12]; SN group, n = 32 [first surgeon n = 11, second surgeon n = 21]). We converted all movements of the robotic forceps during RAPN into a dedicated vocabulary. Using RAPN videos, we classified all movements of the robotic forceps into direct actions (defined as movements of the robotic forceps that directly affect tissues) and connected motions (defined as movements that link actions). In addition, we analyzed the frequency, duration, and occupancy rate of the connected motions. RESULTS: In the SN group, the R.E.N.A.L nephrometry score was lower (7 vs. 6, P = 0.019) and the time to identify and dissect the renal artery was significantly shorter (16 vs. 9 min, P = 0.008). The inefficient "insert," "pull," and "rotate" connected motions were significantly improved by SN. SN significantly improved the frequency, duration, and occupancy rate of the connected motions of the right hand of the first surgeon and of both hands of the second surgeon. The improvements in connected motions were positively associated with SN for both surgeons. CONCLUSION: This is the first study to investigate SN for nephron-sparing surgery. SN with 3D models might help improve the connected motions of expert surgeons to ensure efficient RAPN.
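
The frequency, duration, and occupancy rate of connected motions can be computed from a labeled movement timeline as in the sketch below; the segment format and label names are assumptions for illustration, not the paper's data format.

```python
def connected_motion_stats(segments):
    """segments: list of (start_s, end_s, label) tuples where label is either
    "direct_action" or "connected_motion" (illustrative schema)."""
    total = sum(end - start for start, end, _ in segments)
    connected = [(s, e) for s, e, label in segments if label == "connected_motion"]
    duration = sum(e - s for s, e in connected)
    return {
        "frequency": len(connected),                            # number of connected motions
        "total_duration_s": duration,                           # summed duration
        "occupancy_rate": duration / total if total else 0.0,   # share of procedure time
    }

# Illustrative timeline of one instrument's movements
timeline = [(0.0, 4.0, "direct_action"),
            (4.0, 5.5, "connected_motion"),
            (5.5, 12.0, "direct_action"),
            (12.0, 14.0, "connected_motion")]
print(connected_motion_stats(timeline))
```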


Subject(s)
Kidney Neoplasms/surgery , Nephrectomy , Robotic Surgical Procedures , Surgeons , Surgery, Computer-Assisted , Aged , Female , Humans , Imaging, Three-Dimensional , Male , Middle Aged , Professional Competence , Prospective Studies , Renal Artery , Reproducibility of Results , Retrospective Studies , Treatment Outcome
15.
Artif Intell Med ; 91: 3-11, 2018 09.
Article in English | MEDLINE | ID: mdl-30172445

ABSTRACT

OBJECTIVE: The analysis of surgical motion has received growing interest with the development of devices allowing its automatic capture. In this context, the use of advanced surgical training systems makes an automated assessment of surgical trainees possible. Automatic and quantitative evaluation of surgical skills is a very important step in improving surgical patient care. MATERIAL AND METHOD: In this paper, we present an approach for the discovery and ranking of discriminative and interpretable patterns of surgical practice from recordings of surgical motions. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach is based on the decomposition of continuous kinematic data into a set of overlapping gestures represented by strings (bag of words), for which we compute a comparative numerical statistic (tf-idf) enabling discriminative gesture discovery via relative occurrence frequency. RESULTS: We carried out experiments on three surgical motion datasets. The results show that the patterns identified by the proposed method can be used to accurately classify individual gestures, skill levels, and surgical interfaces. We also show how the patterns provide detailed feedback for trainee skill assessment. CONCLUSIONS: The proposed approach is an interesting addition to existing learning tools for surgery, as it provides a way to obtain feedback on which parts of an exercise were used to classify the attempt as correct or incorrect.
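
A minimal sketch of the bag-of-words/tf-idf idea: gesture sequences encoded as strings of symbols are vectorized with scikit-learn's TfidfVectorizer, and per-group average weights point to symbols that discriminate one group from another; the symbol alphabet and the toy trials are invented for the example.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Each trial is a "document": a sequence of discretized motion symbols (invented alphabet).
novice_trials = ["a b b c c c b a a", "b b c c b b a c c"]
expert_trials = ["a d d e e a d e d", "d e e d a d d e e"]
docs = novice_trials + expert_trials

vectorizer = TfidfVectorizer(token_pattern=r"(?u)\b\w+\b")  # allow single-character tokens
tfidf = vectorizer.fit_transform(docs)                       # (trials, vocabulary) sparse matrix

# Average tf-idf weight per symbol within each group: high values point to
# motion symbols characteristic of that group.
vocab = vectorizer.get_feature_names_out()
novice_profile = np.asarray(tfidf[:2].mean(axis=0)).ravel()
expert_profile = np.asarray(tfidf[2:].mean(axis=0)).ravel()
for sym, n, e in zip(vocab, novice_profile, expert_profile):
    print(f"{sym}: novice={n:.2f} expert={e:.2f}")
```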


Subject(s)
Gestures , Pattern Recognition, Automated/methods , Surgical Procedures, Operative/education , Algorithms , Biomechanical Phenomena , Clinical Competence , Formative Feedback , Humans , Task Performance and Analysis , Time and Motion Studies
16.
Int J Comput Assist Radiol Surg ; 13(9): 1419-1428, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29752636

ABSTRACT

PURPOSE: Surgical processes are generally studied only by identifying differences between populations, such as participants or levels of expertise. But the similarities within such a population are also important for understanding the process. We therefore propose to study both aspects. METHODS: In this article, we show how similarities in process workflow within a population can be identified as sequential surgical signatures. To this end, we propose a pattern mining approach to identify these signatures. VALIDATION: We validated our method with a data set composed of seventeen micro-surgical suturing tasks performed by four participants with two levels of expertise. RESULTS: We identified sequential surgical signatures specific to each participant, as well as signatures shared between participants with and without the same level of expertise. These signatures were also able to perfectly determine the level of expertise of the participant who performed a new micro-surgical suturing task. However, determining who the participant is proved more difficult, and the method correctly identified this in only 64% of cases. CONCLUSION: We introduce, for the first time, the concept of the sequential surgical signature. This new concept has the potential to further the understanding of surgical procedures and to provide useful knowledge for defining future CAS systems.


Subject(s)
Pattern Recognition, Automated , Surgery, Computer-Assisted , Suture Techniques , Workflow , Humans
17.
J Biomed Inform ; 67: 34-41, 2017 03.
Article in English | MEDLINE | ID: mdl-28179119

ABSTRACT

OBJECTIVE: Each surgical procedure is unique due to the patient's, and also the surgeon's, particularities. In this study, we propose a new approach to distinguish surgical behaviors between surgical sites, levels of expertise, and individual surgeons using a pattern discovery method. METHODS: The developed approach aims to distinguish surgical behaviors based on the longest frequent sequential patterns shared between surgical process models. To allow clustering, we propose a new metric called SLFSP. The approach is validated by comparison with a clustering method using Dynamic Time Warping as the metric characterizing the similarity between surgical process models. RESULTS: Our method outperformed the existing approach. It was able to make a perfect distinction between surgical sites (accuracy of 100%). We reached accuracies above 90% and 85% for distinguishing levels of expertise and individual surgeons, respectively. CONCLUSION: Clustering based on shared longest frequent sequential patterns outperformed the previous study based on time analysis. SIGNIFICANCE: The proposed method shows the feasibility of comparing surgical process models not only by their duration but also by their structure of activities. Furthermore, patterns may reveal risky behaviors, which could be valuable information for surgical training to prevent adverse events.


Subject(s)
Clinical Competence , Cluster Analysis , General Surgery/education , General Surgery/methods , Surgical Procedures, Operative , Humans , Models, Anatomic , Risk , Time Factors
18.
Int J Comput Assist Radiol Surg ; 11(6): 1081-9, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26995598

ABSTRACT

PURPOSE: With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest in context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for the assessment of workflow detection. METHODS: The segmentation and recognition are based on a four-stage process. Firstly, during the learning phase, a surgical process model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from the others. Finally, the AdaBoost responses are used as input to a hidden semi-Markov model in order to obtain the final decision. RESULTS: On the MICCAI EndoVis challenge laparoscopic dataset, we achieved a precision and a recall of 91% in the classification of 7 phases. CONCLUSION: Compared to an analysis based on only one data type, the combination of visual features and instrument signals allows better segmentation, reduction of the detection delay, and discovery of the correct phase order.
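
The per-phase classification stage can be sketched with scikit-learn as one binary AdaBoost classifier per phase; the features here are random placeholders, and the downstream hidden semi-Markov smoothing of the AdaBoost responses is only mentioned in a comment, not implemented.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_phase_classifiers(X, phase_labels, n_phases=7):
    """Train one binary AdaBoost classifier per phase (phase vs. all others),
    mirroring the one-vs-rest stage described above. X: (frames, features)."""
    classifiers = []
    for phase in range(n_phases):
        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        clf.fit(X, (phase_labels == phase).astype(int))
        classifiers.append(clf)
    return classifiers

def phase_scores(classifiers, X):
    """Per-frame confidence of each phase; in the paper these responses feed a
    hidden semi-Markov model for the final temporally smoothed decision."""
    return np.column_stack([clf.predict_proba(X)[:, 1] for clf in classifiers])

# Illustrative run on random stand-ins for visual/instrument features
rng = np.random.default_rng(4)
X = rng.standard_normal((700, 12))
y = rng.integers(0, 7, size=700)
clfs = train_phase_classifiers(X, y)
print(phase_scores(clfs, X[:5]).round(2))
```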


Subject(s)
Algorithms , Laparoscopy , Surgery, Computer-Assisted/methods , Task Performance and Analysis , Workflow , Data Collection , Humans , Models, Anatomic , Operating Rooms , Video Recording