1.
J Robot Surg ; 18(1): 245, 2024 Jun 07.
Article En | MEDLINE | ID: mdl-38847926

Previously, our group established a surgical gesture classification system that deconstructs robotic tissue dissection into basic surgical maneuvers. Here, we evaluate gestures by correlating the metric with surgeon experience and technical skill assessment scores in the apical dissection (AD) of robotic-assisted radical prostatectomy (RARP). Additionally, we explore the association between AD performance and early continence recovery following RARP. 78 AD surgical videos from 2016 to 2018 across two international institutions were included. Surgeons were grouped by median robotic caseload (range 80-5,800 cases) into a less experienced group (< 475 cases) and a more experienced group (≥ 475 cases). Videos were annotated with gestures and assessed using the Dissection Assessment for Robotic Technique (DART). More experienced surgeons (n = 10) used greater proportions of cold cut (p = 0.008) and smaller proportions of peel/push, spread, and two-hand spread (p < 0.05) than less experienced surgeons (n = 10). Correlations between gestures and technical skills assessments ranged from -0.397 to 0.316 (p < 0.05). Surgeons utilizing more retraction gestures had lower total DART scores (p < 0.01), suggesting less dissection proficiency. Those who used more gestures and spent more time per gesture had lower efficiency scores (p < 0.01). More coagulation and hook gestures were found in cases of patients with continence recovery compared to those with ongoing incontinence (p < 0.04). Gestures performed during AD vary based on surgeon experience level and patient continence recovery duration. Significant correlations were demonstrated between gestures and dissection technical skills. Gestures can serve as a novel method to objectively evaluate dissection performance and anticipate outcomes.


Clinical Competence , Dissection , Prostatectomy , Robotic Surgical Procedures , Prostatectomy/methods , Humans , Robotic Surgical Procedures/methods , Male , Dissection/methods , Gestures , Prostatic Neoplasms/surgery , Surgeons
2.
NPJ Digit Med ; 7(1): 152, 2024 Jun 11.
Article En | MEDLINE | ID: mdl-38862627

Suturing skill scores have demonstrated strong predictive capability for patient functional recovery. Suturing can be broken down into several substep components, including needle repositioning, needle entry angle, and others. Artificial intelligence (AI) systems have been explored to automate suturing skill scoring. Traditional approaches to skill assessment typically evaluate the individual sub-skills required for particular substeps in isolation. However, surgical procedures require the integration and coordination of multiple sub-skills to achieve successful outcomes, and existing studies have established significant associations among these technical sub-skills. In this paper, we propose a framework for joint skill assessment that accounts for the interconnected nature of sub-skills required in surgery. Prior known relationships among sub-skills are first identified. Our proposed AI system then leverages these known relationships to score suturing skill in each sub-skill domain simultaneously. Our approach effectively improves skill assessment performance through the prior known relationships among sub-skills. Through this approach to joint skill assessment, we aspire to enhance the evaluation of surgical proficiency and ultimately improve patient outcomes in surgery.

3.
J Robot Surg ; 18(1): 102, 2024 Mar 01.
Article En | MEDLINE | ID: mdl-38427094

Artificial intelligence (AI) is revolutionizing nearly every aspect of modern life. In the medical field, robotic surgery is the sector with some of the most innovative and impactful advancements. In this narrative review, we outline recent contributions of AI to the field of robotic surgery with a particular focus on intraoperative enhancement. AI modeling is giving surgeons advanced intraoperative metrics such as force and tactile measurements, enhancing detection of positive surgical margins, and even enabling the complete automation of certain steps in surgical procedures. AI is also revolutionizing the field of surgical education. AI modeling applied to intraoperative surgical video feeds and instrument kinematics data is allowing for the generation of automated skills assessments. AI also shows promise for the generation and delivery of highly specialized intraoperative surgical feedback for training surgeons. Although the adoption and integration of AI show promise in robotic surgery, they raise important, complex ethical questions. Frameworks for thinking through the ethical dilemmas raised by AI are outlined in this review. AI enhancement of robotic surgery is among the most groundbreaking research happening today, and the studies outlined in this review represent some of the most exciting innovations of recent years.


Artificial Intelligence , Robotic Surgical Procedures , Humans , Automation , Benchmarking , Robotic Surgical Procedures/methods , Surgeons
4.
J Surg Educ ; 81(3): 422-430, 2024 Mar.
Article En | MEDLINE | ID: mdl-38290967

OBJECTIVE: Surgical skill assessment tools such as the End-to-End Assessment of Suturing Expertise (EASE) can differentiate a surgeon's experience level. In this simulation-based study, we define a competency benchmark for intraoperative robotic suturing using EASE as a validated measure of performance. DESIGN: Participants conducted a dry-lab vesicourethral anastomosis (VUA) exercise. Videos were each independently scored by 2 trained, blinded reviewers using EASE. Inter-rater reliability was measured with prevalence-adjusted bias-adjusted kappa (PABAK) using 2 example videos. All videos were reviewed by an expert surgeon, who determined whether the suturing skills exhibited were at the competency level expected for residency graduation (pass or fail). The Contrasting Groups (CG) method was then used to set a pass/fail score at the intercept of the pass and fail cohorts' EASE score distributions. SETTING: Keck School of Medicine, University of Southern California. PARTICIPANTS: Twenty-six participants: 8 medical students, 8 junior residents (PGY 1-2), 7 senior residents (PGY 3-5), and 3 attending urologists. RESULTS: After 1 round of consensus-building, average PABAK across EASE subskills was 0.90 (range 0.67-1.0). The CG method produced a competency benchmark EASE score of >35/39, with a pass rate of 10/26 (38%); 27% were deemed competent by expert evaluation. False positives and false negatives were defined as medical students who passed and attendings who failed the assessment, respectively. This pass/fail score produced no false positives or negatives, and fewer junior residents than senior residents were considered competent by both the expert and the CG benchmark. CONCLUSIONS: Using an absolute standard-setting method, competency scores were set to identify trainees who could competently execute a standardized dry-lab robotic suturing exercise. This standard can be used for high-stakes decisions regarding a trainee's technical readiness for independent practice.
Future work includes validation of this standard in the clinical environment through correlation with clinical outcomes.
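For readers unfamiliar with the agreement statistic used above: PABAK reduces to a simple linear function of the raw proportion of agreement, PABAK = 2·p_o − 1. A minimal sketch (illustrative only, not the study's code):

```python
def pabak(rater1, rater2):
    """Prevalence- and bias-adjusted kappa for two raters:
    PABAK = 2 * p_o - 1, where p_o is the raw proportion of
    items on which the raters agree."""
    assert len(rater1) == len(rater2) and rater1
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)
    return 2 * p_o - 1
```

Unlike Cohen's kappa, PABAK does not depend on the marginal label frequencies, which is why it is preferred when one category dominates (e.g., mostly "ideal" subskill ratings).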


Internship and Residency , Robotic Surgical Procedures , Robotics , Surgeons , Humans , Robotic Surgical Procedures/education , Reproducibility of Results , Clinical Competence
5.
Nat Rev Urol ; 21(1): 50-59, 2024 01.
Article En | MEDLINE | ID: mdl-37524914

The use of artificial intelligence (AI) in medicine and in urology specifically has increased over the past few years, during which time it has enabled optimization of patient workflow, increased diagnostic accuracy and enhanced computer analysis of radiological and pathological images. However, before further use of AI is undertaken, possible ethical issues need to be evaluated to improve understanding of this technology and to protect patients and providers. Possible ethical issues that require consideration when applying AI in clinical practice include patient safety, cybersecurity, transparency and interpretability of the data, inclusivity and equity, fostering responsibility and accountability, and the preservation of providers' decision-making and autonomy. Ethical principles for the application of AI to health care and in urology are proposed to guide urologists, patients and regulators to improve use of AI technologies and guide policy-making.


Artificial Intelligence , Urology , Humans , Urologists
6.
J Endourol ; 2024 Jan 29.
Article En | MEDLINE | ID: mdl-37905524

Introduction: Automated skills assessment can provide surgical trainees with objective, personalized feedback during training. Here, we measure the efficacy of artificial intelligence (AI)-based feedback on a robotic suturing task. Materials and Methods: Forty-two participants with no robotic surgical experience were randomized to a control or feedback group and video-recorded while completing two rounds (R1 and R2) of suturing tasks on a da Vinci surgical robot. Participants were assessed on needle handling and needle driving, and feedback was provided via a visual interface after R1. For the feedback group, participants were informed of their AI-based skill assessment and presented with specific video clips from R1. For the control group, participants were presented with randomly selected video clips from R1 as a placebo. Participants from each group were further labeled as underperformers or innate performers based on a median split of their technical skill scores from R1. Results: Demographic features were similar between the control (n = 20) and feedback (n = 22) groups (p > 0.05). From R1 to R2, the feedback group showed a significantly larger improvement in needle handling score than the control group (0.30 vs -0.02, p = 0.018), although the improvement in needle driving score was not significant (0.17 vs -0.40, p = 0.074). All innate performers exhibited similar improvements across rounds, regardless of feedback (p > 0.05). In contrast, underperformers in the feedback group improved more than those in the control group in needle handling (p = 0.02). Conclusion: AI-based feedback facilitates surgical trainees' acquisition of robotic technical skills, especially for underperformers. Future research will extend AI-based feedback to additional suturing skills, surgical tasks, and experience groups.
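The underperformer/innate-performer labeling above is a median split of R1 skill scores. A minimal sketch of that grouping (the study's exact tie-handling is not stated; here scores at or above the median count as "innate"):

```python
from statistics import median

def median_split(scores):
    """Label each participant by a median split of baseline skill scores:
    'innate' if at or above the cohort median, 'under' otherwise."""
    m = median(scores)
    return ["innate" if s >= m else "under" for s in scores]
```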

7.
Curr Opin Urol ; 34(1): 37-42, 2024 Jan 01.
Article En | MEDLINE | ID: mdl-37909886

PURPOSE OF REVIEW: This review outlines recent innovations in simulation technology as it applies to urology. It is essential for the next generation of urologists to attain a solid foundation of technical and nontechnical skills, and simulation technology provides a variety of safe, controlled environments to acquire this baseline knowledge. RECENT FINDINGS: With a focus on urology, this review first outlines the evidence to support surgical simulation, then discusses the strides being made in the development of 3D-printed models for surgical skill training and preoperative planning, virtual reality models for different urologic procedures, surgical skill assessment for simulation, and integration of simulation into urology residency curricula. SUMMARY: Simulation continues to be an integral part of the journey towards the mastery of skills necessary for becoming an expert urologist. Clinicians and researchers should consider how to further incorporate simulation technology into residency training and help future generations of urologists throughout their career.


Internship and Residency , Simulation Training , Urology , Humans , Urology/education , Clinical Competence , Simulation Training/methods , Computer Simulation , Urologic Surgical Procedures
8.
JAMA Netw Open ; 6(6): e2320702, 2023 06 01.
Article En | MEDLINE | ID: mdl-37378981

Importance: Live feedback in the operating room is essential in surgical training. Despite the role this feedback plays in developing surgical skills, an accepted methodology to characterize the salient features of feedback has not been defined. Objective: To quantify the intraoperative feedback provided to trainees during live surgical cases and propose a standardized deconstruction for feedback. Design, Setting, and Participants: In this qualitative study using a mixed methods analysis, surgeons at a single academic tertiary care hospital were audio and video recorded in the operating room from April to October 2022. Urological residents, fellows, and faculty attending surgeons involved in robotic teaching cases during which trainees had active control of the robotic console for at least some portion of a surgery were eligible to voluntarily participate. Feedback was time stamped and transcribed verbatim. An iterative coding process was performed using recordings and transcript data until recurring themes emerged. Exposure: Feedback in audiovisual recorded surgery. Main Outcomes and Measures: The primary outcomes were the reliability and generalizability of a feedback classification system in characterizing surgical feedback. Secondary outcomes included assessing the utility of our system. Results: In 29 surgical procedures that were recorded and analyzed, 4 attending surgeons, 6 minimally invasive surgery fellows, and 5 residents (postgraduate years 3-5) were involved. For the reliability of the system, 3 trained raters achieved moderate to substantial interrater reliability in coding cases using 5 types of triggers, 6 types of feedback, and 9 types of responses (prevalence-adjusted and bias-adjusted κ ranging from 0.56 [95% CI, 0.45-0.68] for triggers to 0.99 [95% CI, 0.97-1.00] for feedback and responses).
For the generalizability of the system, 6 types of surgical procedures and 3711 instances of feedback were analyzed and coded with types of triggers, feedback, and responses. Significant differences in triggers, feedback, and responses reflected surgeon experience level and surgical task being performed. For example, as a response, attending surgeons took over for safety concerns more often for fellows than residents (prevalence rate ratio [RR], 3.97 [95% CI, 3.12-4.82]; P = .002), and suturing involved more errors that triggered feedback than dissection (RR, 1.65 [95% CI, 1.03-3.33]; P = .007). For the utility of the system, different combinations of trainer feedback had associations with rates of different trainee responses. For example, technical feedback with a visual component was associated with an increased rate of trainee behavioral change or verbal acknowledgment responses (RR, 1.11 [95% CI, 1.03-1.20]; P = .02). Conclusions and Relevance: These findings suggest that identifying different types of triggers, feedback, and responses may be a feasible and reliable method for classifying surgical feedback across several robotic procedures. Outcomes suggest that a system that can be generalized across surgical specialties and for trainees of different experience levels may help galvanize novel surgical education strategies.
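The prevalence rate ratios above compare how often a given response or trigger occurs between two groups. As a generic illustration (not the study's exact estimator, which reported model-based confidence intervals), a rate ratio with a Wald-type CI on the log scale can be computed as:

```python
import math

def rate_ratio_ci(a, n1, b, n2, z=1.96):
    """Rate ratio (a/n1) / (b/n2) between two groups, with an
    approximate Wald 95% CI computed on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```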


Specialties, Surgical , Surgeons , Humans , Feedback , Reproducibility of Results , Neoplasm Recurrence, Local , Surgeons/education
9.
Eur Urol Focus ; 9(6): 1044-1051, 2023 11.
Article En | MEDLINE | ID: mdl-37277274

BACKGROUND: Virtual reality (VR) simulators are increasingly being used for surgical skills training. It is unclear what skills are best improved via VR, translate to live surgical skills, and influence patient outcomes. OBJECTIVE: To assess surgeons in VR and live surgery using a suturing assessment tool and evaluate the association between technical skills and a clinical outcome. DESIGN, SETTING, AND PARTICIPANTS: This prospective five-center study enrolled participants who completed VR suturing exercises and provided live surgical video. Graders provided skill assessments using the validated End-To-End Assessment of Suturing Expertise (EASE) suturing evaluation tool. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: A hierarchical Poisson model was used to compare skill scores among cohorts and evaluate the association of scores with clinical outcomes. Spearman's method was used to assess correlation between VR and live skills. RESULTS AND LIMITATIONS: Ten novices, ten surgeons with intermediate expertise (median 64 cases, interquartile range [IQR] 6-80), and 26 expert surgeons (median 850 cases, IQR 375-3000) participated in this study. Intermediate and expert surgeons were significantly more likely to have ideal scores in comparison to novices for the subskills needle hold angle, wrist rotation, and wrist rotation needle withdrawal (p < 0.01). For both intermediate and expert surgeons, there was positive correlation between VR and live skills for needle hold angle (p < 0.05). For expert surgeons, there was a positive association between ideal scores for VR needle hold angle and driving smoothness subskills and 3-mo continence recovery (p < 0.05). Limitations include the size of the intermediate surgeon sample and clinical data limited to expert surgeons. CONCLUSIONS: EASE can be used in VR to identify skills to improve for trainee surgeons. Technical skills that influence postoperative outcomes may be assessable in VR. 
PATIENT SUMMARY: This study provides insights into surgical skills that translate from virtual simulation to live surgery and that have an impact on urinary continence after robot-assisted removal of the prostate. We also highlight the usefulness of virtual reality in surgical education.
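The VR-to-live correlations reported above use Spearman's method, i.e., Pearson correlation applied to ranks. A self-contained sketch (ties receive average ranks):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of 1-based rank positions
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```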


Robotics , Virtual Reality , Male , Humans , Prostate , Prospective Studies , Prostatectomy/methods
10.
Commun Med (Lond) ; 3(1): 42, 2023 Mar 30.
Article En | MEDLINE | ID: mdl-36997578

BACKGROUND: Surgeons who receive reliable feedback on their performance quickly master the skills necessary for surgery. Such performance-based feedback can be provided by a recently developed artificial intelligence (AI) system that assesses a surgeon's skills based on a surgical video while simultaneously highlighting aspects of the video most pertinent to the assessment. However, it remains an open question whether these highlights, or explanations, are equally reliable for all surgeons. METHODS: Here, we systematically quantify the reliability of AI-based explanations on surgical videos from three hospitals across two continents by comparing them to explanations generated by human experts. To improve the reliability of AI-based explanations, we propose the strategy of training with explanations (TWIX), which uses human explanations as supervision to explicitly teach an AI system to highlight important video frames. RESULTS: We show that while AI-based explanations often align with human explanations, they are not equally reliable for different sub-cohorts of surgeons (e.g., novices vs. experts), a phenomenon we refer to as an explanation bias. We also show that TWIX enhances the reliability of AI-based explanations, mitigates the explanation bias, and improves the performance of AI systems across hospitals. These findings extend to a training environment where medical students can be provided with feedback today. CONCLUSIONS: Our study informs the impending implementation of AI-augmented surgical training and surgeon credentialing programs, and contributes to the safe and fair democratization of surgery.


Surgeons aim to master skills necessary for surgery. One such skill is suturing which involves connecting objects together through a series of stitches. Mastering these surgical skills can be improved by providing surgeons with feedback on the quality of their performance. However, such feedback is often absent from surgical practice. Although performance-based feedback can be provided, in theory, by recently-developed artificial intelligence (AI) systems that use a computational model to assess a surgeon's skill, the reliability of this feedback remains unknown. Here, we compare AI-based feedback to that provided by human experts and demonstrate that they often overlap with one another. We also show that explicitly teaching an AI system to align with human feedback further improves the reliability of AI-based feedback on new videos of surgery. Our findings outline the potential of AI systems to support the training of surgeons by providing feedback that is reliable and focused on a particular skill, and guide programs that give surgeons qualifications by complementing skill assessments with explanations that increase the trustworthiness of such assessments.

11.
NPJ Digit Med ; 6(1): 54, 2023 Mar 30.
Article En | MEDLINE | ID: mdl-36997642

Artificial intelligence (AI) systems can now reliably assess surgeon skills through videos of intraoperative surgical activity. With such systems informing future high-stakes decisions, such as whether to credential surgeons and grant them the privilege to operate on patients, it is critical that they treat all surgeons fairly. However, it remains an open question whether surgical AI systems exhibit bias against surgeon sub-cohorts, and, if so, whether such bias can be mitigated. Here, we examine and mitigate the bias exhibited by a family of surgical AI systems (SAIS) deployed on videos of robotic surgeries from three geographically diverse hospitals (USA and EU). We show that SAIS exhibits an underskilling bias, erroneously downgrading surgical performance, and an overskilling bias, erroneously upgrading surgical performance, at different rates across surgeon sub-cohorts. To mitigate such bias, we leverage a strategy, TWIX, which teaches an AI system to provide a visual explanation for its skill assessment that otherwise would have been provided by human experts. We show that whereas baseline strategies inconsistently mitigate algorithmic bias, TWIX can effectively mitigate the underskilling and overskilling bias while simultaneously improving the performance of these AI systems across hospitals. We discovered that these findings carry over to the training environment where we assess medical students' skills today. Our study is a critical prerequisite to the eventual implementation of AI-augmented global surgeon credentialing programs, ensuring that all surgeons are treated fairly.

12.
Nat Biomed Eng ; 7(6): 780-796, 2023 06.
Article En | MEDLINE | ID: mdl-36997732

The intraoperative activity of a surgeon has substantial impact on postoperative outcomes. However, for most surgical procedures, the details of intraoperative surgical actions, which can vary widely, are not well understood. Here we report a machine learning system leveraging a vision transformer and supervised contrastive learning for the decoding of elements of intraoperative surgical activity from videos commonly collected during robotic surgeries. The system accurately identified surgical steps, actions performed by the surgeon, the quality of these actions and the relative contribution of individual video frames to the decoding of the actions. Through extensive testing on data from three different hospitals located in two different continents, we show that the system generalizes across videos, surgeons, hospitals and surgical procedures, and that it can provide information on surgical gestures and skills from unannotated videos. Decoding intraoperative activity via accurate machine learning systems could be used to provide surgeons with feedback on their operating skills, and may allow for the identification of optimal surgical behaviour and for the study of relationships between intraoperative factors and postoperative outcomes.


Robotic Surgical Procedures , Surgeons , Humans , Robotic Surgical Procedures/methods
13.
Curr Urol Rep ; 24(5): 231-240, 2023 May.
Article En | MEDLINE | ID: mdl-36808595

PURPOSE OF REVIEW: This review aims to explore the current state of research on the use of artificial intelligence (AI) in the management of prostate cancer. We examine the various applications of AI in prostate cancer, including image analysis, prediction of treatment outcomes, and patient stratification. Additionally, the review will evaluate the current limitations and challenges faced in the implementation of AI in prostate cancer management. RECENT FINDINGS: Recent literature has focused particularly on the use of AI in radiomics, pathomics, the evaluation of surgical skills, and patient outcomes. AI has the potential to revolutionize the future of prostate cancer management by improving diagnostic accuracy, treatment planning, and patient outcomes. Studies have shown improved accuracy and efficiency of AI models in the detection and treatment of prostate cancer, but further research is needed to understand its full potential as well as limitations.


Artificial Intelligence , Prostatic Neoplasms , Male , Humans , Image Processing, Computer-Assisted
14.
J Clin Med ; 12(4)2023 Feb 20.
Article En | MEDLINE | ID: mdl-36836223

Intraoperative adverse events (iAEs) impact the outcomes of surgery, and yet are not routinely collected, graded, and reported. Advancements in artificial intelligence (AI) have the potential to power real-time, automatic detection of these events and disrupt the landscape of surgical safety through the prediction and mitigation of iAEs. We sought to understand the current implementation of AI in this space. A literature review was performed according to PRISMA-DTA standards. Included articles were from all surgical specialties and reported the automatic identification of iAEs in real time. Details on surgical specialty, adverse events, technology used for detecting iAEs, AI algorithm/validation, and reference standards/conventional parameters were extracted. A meta-analysis of algorithms with available data was conducted using a hierarchical summary receiver operating characteristic (ROC) curve. The QUADAS-2 tool was used to assess article risk of bias and clinical applicability. A total of 2982 studies were identified by searching PubMed, Scopus, Web of Science, and IEEE Xplore, with 13 articles included for data extraction. The AI algorithms detected bleeding (n = 7), vessel injury (n = 1), perfusion deficiencies (n = 1), thermal damage (n = 1), and EMG abnormalities (n = 1), among other iAEs. Nine of the thirteen articles described at least one validation method for the detection system; five used cross-validation and seven divided the dataset into training and validation cohorts. Meta-analysis showed the algorithms were both sensitive and specific across included iAEs (detection OR 14.74, CI 4.7-46.2). There was heterogeneity in reported outcome statistics and article risk of bias. There is a need for standardization of iAE definitions, detection, and reporting to enhance surgical care for all patients. The heterogeneous applications of AI in the literature highlight the pluripotent nature of this technology.
Applications of these algorithms across a breadth of urologic procedures should be investigated to assess the generalizability of these data.
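The pooled detection odds ratio above summarizes each algorithm's 2x2 confusion table. As a reminder of how those per-study inputs are derived (a generic sketch, not the meta-analytic model itself):

```python
def dx_summary(tp, fp, fn, tn):
    """Summarize a detection algorithm's 2x2 table:
    sensitivity, specificity, and the diagnostic odds ratio
    DOR = (TP*TN)/(FP*FN)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)
    return sens, spec, dor
```

A hierarchical summary ROC meta-analysis then pools these per-study pairs of sensitivity and specificity while modeling between-study variation.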

16.
J Robot Surg ; 17(2): 597-603, 2023 Apr.
Article En | MEDLINE | ID: mdl-36149590

Our group previously defined a dissection gesture classification system that deconstructs robotic tissue dissection into its most elemental yet meaningful movements. The purpose of this study was to expand upon this framework by adding an assessment of gesture efficacy (ineffective, effective, or erroneous) and to analyze dissection patterns between groups of surgeons of varying experience. We defined three possible gesture efficacies: ineffective (no meaningful effect on the tissue), effective (intended effect on the tissue), and erroneous (unintended disruption of the tissue). Novices (0 prior robotic cases), intermediates (1-99 cases), and experts (≥ 100 cases) completed a robotic dissection task in a dry-lab training environment. Video recordings were reviewed to classify each gesture and determine its efficacy, then dissection patterns between groups were analyzed. 23 participants completed the task: 9 novices, 8 intermediates with median caseload 60 (IQR 41-80), and 6 experts with median caseload 525 (IQR 413-900). For gesture selection, increasing experience was associated with an increasing proportion of overall dissection gestures (p = 0.009) and a decreasing proportion of retraction gestures (p = 0.009). For gesture efficacy, novices performed the greatest proportion of ineffective gestures (9.8%, p < 0.001), intermediates committed the greatest proportion of erroneous gestures (26.8%, p < 0.001), and the three groups performed similar proportions of overall effective gestures, though experts performed the greatest proportion of effective retraction gestures (85.6%, p < 0.001). Between experience groups, we found significant differences in gesture selection and gesture efficacy. These relationships may provide insight into further improving surgical training.


Robotic Surgical Procedures , Robotics , Humans , Robotic Surgical Procedures/methods , Gestures , Movement
17.
Int J Comput Assist Radiol Surg ; 18(3): 545-552, 2023 Mar.
Article En | MEDLINE | ID: mdl-36282465

OBJECTIVES: Manually-collected suturing technical skill scores are strong predictors of continence recovery after robotic radical prostatectomy. Herein, we automate suturing technical skill scoring through computer vision (CV) methods as a scalable method to provide feedback. METHODS: Twenty-two surgeons completed a suturing exercise three times on the Mimic™ Flex VR simulator. Instrument kinematic data (XYZ coordinates of each instrument and pose) were captured at 30 Hz. After standardized training, three human raters manually video segmented suturing task into four sub-stitch phases (Needle handling, Needle targeting, Needle driving, Needle withdrawal) and labeled the corresponding technical skill domains (Needle positioning, Needle entry, Needle driving, and Needle withdrawal). The CV framework extracted RGB features and optical flow frames using a pre-trained AlexNet. Additional CV strategies including auxiliary supervision (using kinematic data during training only) and attention mechanisms were implemented to improve performance. RESULTS: This study included data from 15 expert surgeons (median caseload 300 [IQR 165-750]) and 7 training surgeons (0 [IQR 0-8]). In all, 226 virtual sutures were captured. Automated assessments for Needle positioning performed best with the simplest approach (1 s video; AUC 0.749). Remaining skill domains exhibited improvements with the implementation of auxiliary supervision and attention mechanisms when deployed separately (AUC 0.604-0.794). All techniques combined produced the best performance, particularly for Needle driving and Needle withdrawal (AUC 0.959 and 0.879, respectively). CONCLUSIONS: This study demonstrated the best performance of automated suturing technical skills assessment to date using advanced CV techniques. Future work will determine if a "human in the loop" is necessary to verify surgeon evaluations.
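The AUC values reported above have a direct rank-based interpretation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch of that computation (illustrative; real evaluations use vectorized library routines):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney interpretation: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))
```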


Robotic Surgical Procedures , Robotics , Surgeons , Male , Humans , Surgeons/education , Automation , Neurosurgical Procedures , Sutures , Clinical Competence , Suture Techniques/education , Robotic Surgical Procedures/methods
18.
JU Open Plus ; 1(8)2023 Aug.
Article En | MEDLINE | ID: mdl-38187460

Purpose: To examine the association between the quality of neurovascular bundle dissection and urinary continence recovery after robotic-assisted radical prostatectomy (RARP). Materials and Methods: Patients who underwent RARP from 2016 to 2018 at two institutions with ≥1 year of postoperative follow-up were included. The primary outcome was time to urinary continence recovery. Surgical videos were independently assessed by 3 blinded raters using the validated Dissection Assessment for Robotic Technique (DART) tool after standardized training. Cox regression was used to test the association between DART scores and urinary continence recovery while adjusting for relevant patient features. Results: 121 RARPs performed by 23 surgeons with various experience levels were included. The median follow-up was 24 months (95% CI 20-28 months). The median time to continence recovery was 7.3 months (95% CI 4.7-9.8 months). After adjusting for patient age, higher scores on certain DART domains, specifically tissue retraction and efficiency, were significantly associated with increased odds of continence recovery (p < 0.05). Conclusions: Technical skill scores of neurovascular bundle dissection vary among surgeons and correlate with urinary continence recovery. Unveiling the specific robotic dissection skillsets that impact patient outcomes has the potential to focus surgical training.
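The median time to continence recovery above is the kind of quantity read off a Kaplan-Meier curve, the survival-analysis companion to the Cox model used here. A minimal Kaplan-Meier sketch (illustrative only; the study's analysis was a covariate-adjusted Cox regression):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. times: follow-up (e.g., months
    to continence recovery); events: 1 = event observed, 0 = censored.
    Returns [(t, S(t))], with S stepping down at each event time."""
    n_at_risk = len(times)
    s, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for tt, e in zip(times, events) if tt == t and e)
        n_t = sum(1 for tt in times if tt == t)
        if d:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= n_t
    return curve
```

The median time to event is the first t at which S(t) drops to 0.5 or below.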

19.
Eur Urol Open Sci ; 46: 15-21, 2022 Dec.
Article En | MEDLINE | ID: mdl-36506257

Background: There is no standard for the feedback that an attending surgeon provides to a training surgeon, which may lead to variable outcomes in teaching cases. Objective: To create and administer standardized feedback to medical students in an attempt to improve performance and learning. Design, setting, and participants: A cohort of 45 medical students was recruited from a single medical school. Participants were randomly assigned to two groups. Both completed two rounds of a robotic surgical dissection task on a da Vinci Xi surgical system. The first round was the baseline assessment. In the second round, one group received feedback and the other served as the control (no feedback). Outcome measurements and statistical analysis: Video from each round was retrospectively reviewed by four blinded raters and given a total error tally (primary outcome) and a technical skills score (Global Evaluative Assessment of Robotic Surgery [GEARS]). Generalized linear models were used for statistical modeling. According to their initial performance, each participant was categorized as an innate performer or an underperformer, depending on whether their baseline error tally was below or above the median, respectively. Results and limitations: In round 2, the intervention group had a larger decrease in error rate than the control group, with a risk ratio (RR) of 1.51 (95% confidence interval [CI] 1.07-2.14; p = 0.02). The intervention group also had a greater increase in GEARS score in comparison to the control group, with a mean group difference of 2.15 (95% CI 0.81-3.49; p < 0.01). The interaction effect between innate performers versus underperformers and the intervention was statistically significant for the error rates, at F(1,38) = 5.16 (p = 0.03). Specifically, the intervention had a statistically significant effect on the error rate for underperformers (RR 2.23, 95% CI 1.37-3.62; p < 0.01) but not for innate performers (RR 1.03, 95% CI 0.63-1.68; p = 0.91).
Conclusions: Real-time feedback improved performance globally compared to the control. The benefit of real-time feedback was stronger for underperformers than for trainees with innate skill. Patient summary: We found that real-time feedback during a training task using a surgical robot improved the performance of trainees when the task was repeated. This feedback approach could help in training doctors in robotic surgery.
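The risk ratios reported above can be illustrated with a back-of-the-envelope sketch: a risk ratio from a two-by-two table with a Wald-style 95% CI on the log scale. The counts below are made up for illustration; the study itself estimated RRs via generalized linear models, not this simple formula:

```python
import math

def risk_ratio(a, n1, b, n2, z=1.96):
    """Risk ratio of two event proportions with a Wald 95% CI.

    a / n1: events and total in the intervention group
    b / n2: events and total in the control group
    The standard error applies to log(RR), so the CI is built on the
    log scale and exponentiated back.
    """
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 10/20 improved with feedback vs 5/20 controls
rr, lo, hi = risk_ratio(10, 20, 5, 20)
```

Note the asymmetry of the interval around the point estimate, a consequence of working on the log scale.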

20.
NPJ Digit Med ; 5(1): 187, 2022 Dec 22.
Article En | MEDLINE | ID: mdl-36550203

How well a surgery is performed impacts a patient's outcomes; however, objective quantification of performance remains an unsolved challenge. Deconstructing a procedure into discrete instrument-tissue "gestures" is an emerging way to understand surgery. To establish this paradigm in a procedure where performance is the most important factor for patient outcomes, we identify 34,323 individual gestures performed in 80 nerve-sparing robot-assisted radical prostatectomies from two international medical centers. Gestures are classified into nine distinct dissection gestures (e.g., hot cut) and four supporting gestures (e.g., retraction). Our primary outcome is to identify factors impacting a patient's 1-year erectile function (EF) recovery after radical prostatectomy. We find that less use of hot cut and more use of peel/push are statistically associated with a better chance of 1-year EF recovery. Our results also show interactions between surgeon experience and gesture types: similar gesture selection resulted in different EF recovery rates depending on surgeon experience. To further validate this framework, two teams independently constructed distinct machine learning models using gesture sequences vs. traditional clinical features to predict 1-year EF. In both models, gesture sequences are able to better predict 1-year EF (Team 1: AUC 0.77, 95% CI 0.73-0.81; Team 2: AUC 0.68, 95% CI 0.66-0.70) than traditional clinical features (Team 1: AUC 0.69, 95% CI 0.65-0.73; Team 2: AUC 0.65, 95% CI 0.62-0.68). Our results suggest that gestures provide a granular method to objectively indicate surgical performance and outcomes. Application of this methodology to other surgeries may lead to discoveries on methods to improve surgery.
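A minimal sketch of the gesture-sequence idea: represent each case as a gesture-frequency vector and score its discrimination of EF recovery with a Mann-Whitney AUC. The gesture alphabet, cases, and labels below are invented for illustration and are not the study's data or its sequence models:

```python
import numpy as np

# Toy alphabet standing in for the paper's 13 gesture classes:
# "c" = cold cut, "h" = hot cut, "p" = peel/push, "r" = retraction.
GESTURES = ["c", "h", "p", "r"]

def gesture_freqs(seq):
    """Normalized gesture-frequency vector for one case's gesture string."""
    counts = np.array([seq.count(g) for g in GESTURES], float)
    return counts / counts.sum()

def auc(scores, labels):
    """Mann-Whitney AUC: P(a recovered case scores above a non-recovered one)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical cases scored by their proportion of peel/push ("p") gestures,
# labeled by 1-year EF recovery, echoing the reported association.
cases = ["ppprch", "pprpc", "hhcrp", "hchcr"]
labels = [1, 1, 0, 0]
scores = [gesture_freqs(s)[GESTURES.index("p")] for s in cases]
```

The study's models consumed full gesture sequences rather than frequency vectors, but even this bag-of-gestures view shows how a single gesture proportion can be evaluated as a predictor.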

...