Results 1 - 5 of 5
1.
Sci Rep ; 14(1): 15130, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956112

ABSTRACT

Trainees develop surgical technical skills by learning from experts who provide context for successful task completion, identify potential risks, and guide correct instrument handling. This expert-guided training faces significant limitations in objectively assessing skills in real time and tracking learning. It is unknown whether AI systems can effectively replicate the nuanced real-time feedback, risk identification, and guidance that expert instructors offer while trainees master surgical technical skills. This randomized controlled trial compared real-time AI feedback to in-person expert instruction. Ninety-seven medical trainees completed a 90-min simulation training with five practice tumor resections followed by a realistic brain tumor resection. They were randomly assigned to one of three groups: (1) real-time AI feedback, (2) in-person expert instruction, or (3) no real-time feedback. Performance was assessed using a composite score and an Objective Structured Assessment of Technical Skills (OSATS) rating, rated by blinded experts. Training with real-time AI feedback (n = 33) resulted in significantly better performance outcomes than no real-time feedback (n = 32) and in-person instruction (n = 32) (.266, 95% CI [.107, .425], p < .001 and .332, 95% CI [.173, .491], p = .005, respectively). Learning from AI resulted in OSATS ratings similar to those of in-person training with expert instruction (4.30 vs 4.11, p = 1). Intelligent systems may refine the way operating skills are taught, providing tailored, quantifiable feedback and actionable instructions in real time.
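The trial's three-arm comparison of composite scores lends itself to a small worked example. The sketch below is not the authors' analysis pipeline; it only illustrates how pairwise mean differences with 95% confidence intervals could be computed in Python, using simulated placeholder scores and assumed group means.

```python
# Illustrative sketch: comparing three training arms on a composite
# performance score with pairwise mean differences and 95% CIs.
# The simulated scores and group means are hypothetical placeholders,
# not the trial's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "ai_feedback": rng.normal(0.70, 0.15, 33),        # n = 33 in the trial
    "expert_instruction": rng.normal(0.45, 0.15, 32),  # n = 32
    "no_feedback": rng.normal(0.40, 0.15, 32),         # n = 32
}

# Omnibus test across the three arms
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

def mean_diff_ci(a, b, alpha=0.05):
    """Welch-style mean difference with a 95% confidence interval."""
    diff = np.mean(a) - np.mean(b)
    va, vb = np.var(a, ddof=1) / len(a), np.var(b, ddof=1) / len(b)
    se = np.sqrt(va + vb)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return diff, diff - t_crit * se, diff + t_crit * se

for other in ("no_feedback", "expert_instruction"):
    d, lo, hi = mean_diff_ci(groups["ai_feedback"], groups[other])
    print(f"AI vs {other}: diff = {d:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```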


Subject(s)
Artificial Intelligence , Clinical Competence , Humans , Female , Male , Adult , Simulation Training/methods
2.
Comput Biol Med ; 179: 108809, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38944904

ABSTRACT

BACKGROUND: Virtual and augmented reality surgical simulators, integrated with machine learning, are becoming essential for training psychomotor skills and analyzing surgical performance. Despite the promise of methods like the Connection Weights Algorithm, the small sample sizes (small numbers of participants) typical of these trials limit the generalizability and robustness of models. Approaches like data augmentation and transfer learning from models trained on similar surgical tasks help address these limitations. OBJECTIVE: To demonstrate the efficacy of artificial neural network (ANN) and transfer learning algorithms in evaluating virtual surgical performance, applied to a simulated oblique lateral lumbar interbody fusion technique in an augmented and virtual reality simulator. DESIGN: The study developed and integrated artificial neural network algorithms within a novel simulator platform, using data from the simulated tasks to generate 276 performance metrics across motion, safety, and efficiency. It then applied transfer learning from a pre-trained ANN model developed for a similar spinal simulator, enhancing the training process and addressing the challenge of small datasets. SETTING: Musculoskeletal Biomechanics Research Lab; Neurosurgical Simulation and Artificial Intelligence Learning Centre, McGill University, Montreal, Canada. PARTICIPANTS: Twenty-seven participants divided into three groups: 9 post-residents, 6 senior residents, and 12 junior residents. RESULTS: Two models, a stand-alone model trained from scratch and another leveraging transfer learning, were trained on nine selected surgical metrics, achieving 75% and 87.5% testing accuracy, respectively. CONCLUSIONS: This study presents a novel blueprint for addressing limited datasets in surgical simulations through the strategic use of transfer learning and data augmentation. It also evaluates and reinforces the application of the Connection Weights Algorithm from our previous publication. Together, these methodologies not only enhance the precision of performance classification but also advance the validation of surgical training platforms.
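As a rough illustration of the transfer-learning idea described above (not the authors' implementation), the sketch below reuses the hidden layers of a small feed-forward network trained on a similar simulator task and fine-tunes only the output layer on nine selected metrics. The layer sizes, the three-class output, and the stand-in "pretrained" network are assumptions for illustration.

```python
# Illustrative sketch: transfer learning for a small surgical-metrics
# classifier. Architecture and hyperparameters are assumptions, not the
# published model.
import torch
import torch.nn as nn

class MetricClassifier(nn.Module):
    def __init__(self, n_metrics=9, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(n_metrics, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)  # post-, senior-, junior-resident

    def forward(self, x):
        return self.head(self.features(x))

# Stand-in for the ANN trained on the earlier spinal simulator; in
# practice its trained weights would be loaded from a checkpoint.
pretrained = MetricClassifier()

# Transfer the feature layers, freeze them, and train only the new head.
model = MetricClassifier()
model.features.load_state_dict(pretrained.features.state_dict())
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(x_batch, y_batch):
    """One fine-tuning step on the small target dataset."""
    optimizer.zero_grad()
    loss = loss_fn(model(x_batch), y_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call on simulated data: 27 participants x 9 metrics
x = torch.randn(27, 9)
y = torch.randint(0, 3, (27,))
print("loss:", fine_tune_step(x, y))
```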


Subject(s)
Machine Learning , Humans , Virtual Reality , Neural Networks, Computer , Algorithms , Spinal Fusion/methods , Augmented Reality , Male , Female , Clinical Competence
3.
Med Biol Eng Comput ; 62(6): 1887-1897, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38403863

ABSTRACT

Mixed-reality surgical simulators are seen as more objective than conventional training, but their utility in training must be established through validation studies. The objective of this study was to establish face, content, and construct validity of a novel mixed-reality surgical simulator developed by McGill University, CAE Healthcare, and DePuy Synthes. The study, approved by a Research Ethics Board, examined a simulated L4-L5 oblique lateral lumbar interbody fusion (OLLIF) scenario. A 5-point Likert scale questionnaire was used, and a chi-square test verified consensus on the validity ratings. Construct validity was investigated across 276 surgical performance metrics in three groups, using ANOVA, Welch-ANOVA, or Kruskal-Wallis tests; a post-hoc Dunn's test with a Bonferroni correction was used for further analysis of significant metrics. The study took place at the Musculoskeletal Biomechanics Research Lab, McGill University, Montreal, Canada, and a DePuy Synthes (Johnson & Johnson Family of Companies) research lab. Thirty-four participants were recruited: spine surgeons, fellows, and neurosurgical and orthopedic residents. Only seven surgeons out of the 34 participants were recruited into a side-by-side cadaver trial, in which they completed an OLLIF surgery first on a cadaver and then immediately on the simulator. Participants were separated a priori into three groups: post-, senior-, and junior-residents. Post-residents rated validity (median > 3) for 13/20 face-validity and 9/25 content-validity statements; seven face-validity and 12 content-validity statements were rated neutral. The chi-square test indicated agreement between group responses. Construct-validity analysis found eight metrics with significant differences (p < 0.05) between the three groups. Validity was established: most face-validity statements were rated positively, with the few neutral ratings pertaining to the simulation's graphics. Although fewer content-validity statements were validated, most were rated neutral (only four were rated negatively). The findings underscored the importance of using realistic physics-based forces in surgical simulations, and construct validity demonstrated the simulator's capacity to differentiate surgical expertise.
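For readers unfamiliar with the statistical workflow named above, the sketch below shows one way a single construct-validity metric could be tested across the three experience groups with a Kruskal-Wallis test followed by Dunn's post-hoc test with Bonferroni correction. It uses simulated placeholder values and assumed group sizes with the scikit-posthocs package, not the study's data or code.

```python
# Illustrative sketch: metric-by-metric construct-validity testing.
# Metric values and group sizes below are simulated placeholders.
import numpy as np
import pandas as pd
from scipy import stats
import scikit_posthocs as sp

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["post"] * 10 + ["senior"] * 12 + ["junior"] * 12,
    "metric": np.concatenate([
        rng.normal(2.0, 0.4, 10),   # e.g. a force- or motion-based metric
        rng.normal(2.6, 0.4, 12),
        rng.normal(3.1, 0.4, 12),
    ]),
})

# Omnibus nonparametric test across the three experience groups
h, p = stats.kruskal(*[g["metric"].values for _, g in df.groupby("group")])
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

if p < 0.05:
    # Pairwise Dunn's test with Bonferroni-adjusted p-values
    pvals = sp.posthoc_dunn(df, val_col="metric", group_col="group",
                            p_adjust="bonferroni")
    print(pvals)
```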


Subject(s)
Minimally Invasive Surgical Procedures , Humans , Minimally Invasive Surgical Procedures/education , Spinal Fusion/methods , Reproducibility of Results , Virtual Reality , Female , Male , Surveys and Questionnaires , Computer Simulation , Spine/surgery , Adult , Augmented Reality
4.
Article in English | MEDLINE | ID: mdl-38190098

ABSTRACT

BACKGROUND AND OBJECTIVES: Subpial corticectomy, which involves complete lesion resection while preserving pial membranes and avoiding injury to adjacent normal tissue, is an essential bimanual task for neurosurgical trainees to master. We sought to develop an ex vivo calf brain corticectomy simulation model with continuous assessment of surgical instrument movement during the simulation. A case series study was performed to assess face and content validity, to gain insights into the utility of this training platform, and to determine whether skilled and less skilled participants differed statistically in their validity assessments. METHODS: An ex vivo calf brain simulation model was developed in which trainees performed a subpial corticectomy of three defined areas. A case series study assessed face and content validity of the model using 7-point Likert scale questionnaires. RESULTS: Twelve skilled and 11 less skilled participants were included in this investigation. Overall median scores of 6.0 (range 4.0-6.0) for face validity and 6.0 (range 3.5-7.0) for content validity were obtained on the 7-point Likert scale, with no statistical differences identified between the skilled and less skilled groups. CONCLUSION: A novel ex vivo calf brain simulator was developed to replicate the subpial resection procedure and demonstrated face and content validity.
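The validity comparison can be illustrated with a brief sketch: overall median Likert ratings plus a nonparametric comparison between skilled and less skilled raters on a single questionnaire item. The abstract does not name the test used; a Mann-Whitney U test is assumed here purely for illustration, and the ratings are simulated placeholders on the 7-point scale.

```python
# Illustrative sketch: median Likert ratings and a group comparison for
# one face-validity item. Ratings are simulated, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
skilled = rng.integers(4, 8, size=12)       # 12 skilled participants, scores 4-7
less_skilled = rng.integers(3, 8, size=11)  # 11 less skilled participants, scores 3-7

print("skilled median:", np.median(skilled))
print("less skilled median:", np.median(less_skilled))

# Two-sided Mann-Whitney U test for a group difference on this item
u, p = stats.mannwhitneyu(skilled, less_skilled, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```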

5.
J Surg Educ ; 81(2): 275-287, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38160107

ABSTRACT

OBJECTIVE: To explore optimal feedback methodologies to enhance trainee skill acquisition in simulated surgical bimanual skills learning during brain tumor resections. HYPOTHESES: (1) Providing feedback results in better learning outcomes in teaching surgical technical skills than practice alone with no tailored performance feedback. (2) Providing more visual and visuospatial feedback results in better learning outcomes than providing numerical feedback. DESIGN: A prospective 4-parallel-arm randomized controlled trial. SETTING: Neurosurgical Simulation and Artificial Intelligence Learning Centre, McGill University, Canada. PARTICIPANTS: Medical students (n = 120) from 4 Quebec medical schools. RESULTS: Participants completed a virtually simulated tumor resection task 5 times while receiving 1 of 4 types of feedback based on their group allocation: (1) practice alone without feedback, (2) numerical feedback, (3) visual feedback, and (4) visuospatial feedback. Outcome measures were participants' scores on 14 performance metrics and the number of expert benchmarks achieved during each task. There were no significant differences in the first task, which established baseline performance. A statistically significant interaction between feedback allocation and task repetition was found for the number of benchmarks achieved, F(10.558, 408.257) = 3.220, p < 0.001. Participants in all feedback groups significantly improved their performance compared to baseline. The visual feedback group achieved a significantly higher number of benchmarks than the practice-alone group by the third repetition of the task, p = 0.005, 95% CI [0.42, 3.25]. Visual feedback and visuospatial feedback improved performance significantly by the second repetition of the task, p = 0.016, 95% CI [0.19, 2.71] and p = 0.003, 95% CI [0.40, 2.57], respectively. CONCLUSION: Simulations with autonomous visual computer assistance may be effective pedagogical tools for teaching bimanual operative skills via visual and visuospatial feedback information delivery.
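The reported interaction between feedback allocation and task repetition corresponds to a mixed-design ANOVA (between-subject feedback group, within-subject repetition). The sketch below shows how such an analysis could be run on long-format data with the pingouin package; the benchmark counts, group sizes, and learning-curve shapes are simulated assumptions, not the trial's data or analysis code.

```python
# Illustrative sketch: mixed ANOVA on benchmarks achieved across five
# task repetitions for four feedback groups (simulated data).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
groups = ["practice_alone", "numerical", "visual", "visuospatial"]
rows = []
for g_idx, group in enumerate(groups):
    for subj in range(30):                  # ~30 participants per arm (assumed)
        for rep in range(1, 6):             # 5 task repetitions
            # Feedback groups improve faster across repetitions in this toy model
            benchmarks = rng.poisson(3 + 0.3 * rep * (g_idx > 0))
            rows.append({"subject": f"{group}_{subj}", "group": group,
                         "repetition": rep, "benchmarks": benchmarks})
df = pd.DataFrame(rows)

# The group x repetition interaction tests whether learning curves differ
# across feedback types.
aov = pg.mixed_anova(data=df, dv="benchmarks", within="repetition",
                     subject="subject", between="group")
print(aov)
```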


Subject(s)
Artificial Intelligence , Simulation Training , Humans , Feedback , Prospective Studies , Simulation Training/methods , Computer Simulation , Clinical Competence