Results 1 - 3 of 3
1.
Br J Surg; 111(1), 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37951600

ABSTRACT

BACKGROUND: There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula.

METHODS: A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. The inclusion criterion was robotic surgery technical skills tools; the exclusion criteria were tools covering non-technical skills, or laparoscopic or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias.

RESULTS: Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools, reaching LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical) with higher accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, than in real surgery, where reported accuracies ranged from 67 to 100 per cent.

CONCLUSIONS: Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
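To make the review's synthesis step concrete, here is a minimal, purely illustrative Python sketch of how assessment tools might be tallied by OCEBM level of recommendation and how reported AI accuracies might be summarised by setting. The data structure, field names and all values are assumptions for illustration, not the authors' extraction sheet or results.

```python
# Illustrative sketch only: a hypothetical extraction table and the kind of
# tallying a review like this might use to summarise evaluation statuses.
# Field names (name, category, ocebm_lor) are assumed, not the authors' schema.
from collections import Counter

tools = [
    {"name": "GEARS", "category": "global rating scale", "ocebm_lor": 1},
    {"name": "da Vinci Skills Simulator", "category": "simulator", "ocebm_lor": 1},
    {"name": "Example procedure-specific tool", "category": "procedure-specific", "ocebm_lor": 2},
]

# Count how many tools sit at each OCEBM level of recommendation.
by_lor = Counter(tool["ocebm_lor"] for tool in tools)
for lor, n in sorted(by_lor.items()):
    print(f"LoR {lor}: {n} tool(s)")

# Summarise reported AI accuracies by setting (values here are made up).
lab_accuracies = [0.95, 0.88, 0.92]   # laboratory studies
real_accuracies = [0.67, 0.83, 1.00]  # real-surgery studies
share_over_90 = sum(a > 0.90 for a in lab_accuracies) / len(lab_accuracies)
print(f"Laboratory methods over 90%: {share_over_90:.0%}")
print(f"Real-surgery range: {min(real_accuracies):.0%}-{max(real_accuracies):.0%}")
```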


BACKGROUND: Robotic surgery is increasingly used worldwide to treat many different diseases. The robot is controlled by a surgeon, which may give greater precision and better outcomes for patients. However, surgeons' robotic skills should be assessed properly to make sure patients are safe, to improve feedback, and to support certification exams that indicate competency. This should be done by experts, using assessment tools that have been agreed upon and proven to work.

AIM: This review aimed to find and explain which training and examination tools are best for assessing surgeons' robotic skills, and to identify the gaps that remain for future research.

METHOD: This review searched for all available studies looking at assessment tools in robotic surgery and summarized their findings using several different methods.

FINDINGS AND CONCLUSION: Two hundred and forty-seven studies were reviewed, identifying many assessment tools. Further research is needed on operation-specific and automated assessment tools before they are used in the clinical setting.


Subjects
Laparoscopy, Robotic Surgical Procedures, Robotics, Humans, Robotic Surgical Procedures/education, Artificial Intelligence, Clinical Competence, Laparoscopy/education
2.
Surg Endosc; 38(1): 116-128, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37932602

ABSTRACT

BACKGROUND: Using a validated, objective and standardised assessment tool to assess progression and competency is essential for basic robotic surgical training programmes. Objective clinical human reliability analysis (OCHRA) is an error-based assessment tool that provides in-depth analysis of individual technical errors. We conducted a feasibility study to assess the concurrent validity and reliability of OCHRA when applied to basic, generic robotic technical skills assessment.

METHODS: Selected basic robotic surgical skill tasks, in virtual reality (VR) and dry lab equivalents, were performed by novice robotic surgeons during an intensive 5-day robotic surgical skills course on da Vinci® X and Xi surgical systems. A hierarchical task analysis was described for each task. Our robotic surgery-specific OCHRA methodology was applied to error events in recorded videos using a standardised definition. Statistical analyses were performed to assess concurrent validity with existing tools and inter-rater reliability.

RESULTS: The OCHRA methodology was applied to 272 basic robotic surgical skills tasks performed by 20 novice robotic surgeons. Performance scores improved from the start of the course to the end on all three assessment tools: Global Evaluative Assessment of Robotic Skills (GEARS) [VR: t(19) = -9.33, p < 0.001] [dry lab: t(19) = -10.17, p < 0.001], OCHRA [VR: t(19) = 6.33, p < 0.001] [dry lab: t(19) = 10.69, p < 0.001] and automated VR scores [VR: t(19) = -8.26, p < 0.001]. Correlation analysis of OCHRA against GEARS and automated VR scores showed a significant and strong inverse correlation in every VR and dry lab task: OCHRA vs GEARS [VR: mean r = -0.78, p < 0.001] [dry lab: mean r = -0.82, p < 0.001] and OCHRA vs automated VR [VR: mean r = -0.77, p < 0.001]. Inter-rater reliability between two independent reviewers was very strong and significant (r = 0.926, p < 0.001).

CONCLUSION: The OCHRA methodology provides a detailed error analysis tool for basic robotic surgical skills, with high reliability and concurrent validity against existing tools. OCHRA requires further evaluation in more advanced robotic surgical procedures.
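The abstract reports paired t-tests for improvement, Pearson-style correlations between tools, and an inter-rater correlation. The short Python sketch below shows the general shape of such a concurrent-validity analysis using SciPy; the arrays are invented example data, not the study's measurements, and the exact tests the authors used may differ.

```python
# Minimal sketch of concurrent-validity statistics of the kind reported:
# paired t-test for pre/post improvement, correlation between assessment
# tools, and inter-rater correlation. All values below are invented.
import numpy as np
from scipy import stats

gears_start = np.array([10, 12, 9, 11, 13])   # GEARS scores at course start
gears_end = np.array([18, 20, 17, 19, 22])    # GEARS scores at course end
t_stat, p_val = stats.ttest_rel(gears_start, gears_end)
print(f"Paired t-test (GEARS start vs end): t = {t_stat:.2f}, p = {p_val:.4f}")

ochra_errors = np.array([14, 9, 16, 11, 6])   # OCHRA error counts per task
gears_scores = np.array([12, 18, 10, 15, 21]) # GEARS scores for the same tasks
r, p = stats.pearsonr(ochra_errors, gears_scores)
print(f"OCHRA vs GEARS: r = {r:.2f}, p = {p:.4f}")  # expect a strong inverse correlation

rater_a = np.array([14, 9, 16, 11, 6])        # error counts from reviewer A
rater_b = np.array([13, 10, 15, 12, 6])       # error counts from reviewer B
r_ir, p_ir = stats.pearsonr(rater_a, rater_b)
print(f"Inter-rater correlation: r = {r_ir:.2f}, p = {p_ir:.4f}")
```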


Subjects
Robotic Surgical Procedures, Robotics, Virtual Reality, Humans, Robotic Surgical Procedures/education, Reproducibility of Results, Clinical Competence, Robotics/education, Computer Simulation
3.
J Reconstr Microsurg; 39(8): 589-600, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36564051

ABSTRACT

BACKGROUND: Microsurgery is one of the most challenging areas of surgery, with a steep learning curve. To address this educational need, microsurgery curricula have been developed and validated, with the majority focusing on technical skills only. The aim of this study was to report on the evaluation of a well-established curriculum using the Kirkpatrick model.

METHODS: A training curriculum was delivered over 5 days between 2017 and 2020, focusing on (1) microscopic field manipulation, (2) knot tying and nondominant hand usage, (3) 3-D models/anastomosis, and (4) tissue experience. The Kirkpatrick model was applied to evaluate the curriculum at four levels: (1) participants' feedback; (2) skills development, assessed using a validated, objective assessment tool (Global Assessment Score form), with CUSUM charts constructed to model proficiency gain; and (3) and (4) skill retention and long-term impact.

RESULTS: In total, 155 participants undertook the curriculum, totaling 5,425 hours of training. More than 75% of students rated the course as excellent, with the remainder rating it as good. All participants agreed that the curriculum met expectations and would recommend it. Anastomosis attainment scores improved significantly between days 1-3 (median score 4) and days 4-5 (median score 5) (W = 494.5, p = 0.00170). The frequency of errors reduced with successive attempts (chi square = 9.81, p = 0.00174). The steepest learning curve was in the anastomosis and patency domains, requiring 11 attempts on average to reach proficiency. In total, 88.5% of survey respondents could apply the skills learnt, and 76.9% had applied them within 6 months. Key areas for improvement were identified from this evaluation, and actions to address them were implemented in the following programs.

CONCLUSION: Robust curriculum evaluation can be applied to microsurgery training, demonstrating its efficacy in reducing surgical errors and improving overall technical skills in ways that can extend to clinical practice. It also allows the identification of areas for improvement, driving the refinement of training programs.
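For readers unfamiliar with CUSUM charts for proficiency gain, here is a hedged Python sketch of one simple formulation: the cumulative sum of (failure indicator minus an acceptable failure rate) per attempt. The attempt outcomes and the acceptable failure rate are invented for illustration and are not the course's data or necessarily the authors' exact CUSUM specification.

```python
# Simple CUSUM learning curve: cumulative sum of (failure - acceptable rate).
# All values below are illustrative assumptions.
import numpy as np

attempts = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0])  # 1 = failed anastomosis, 0 = success
p0 = 0.3  # assumed acceptable failure rate

cusum = np.cumsum(attempts - p0)
for i, value in enumerate(cusum, start=1):
    print(f"attempt {i:2d}: CUSUM = {value:+.1f}")

# A persistently falling CUSUM indicates the trainee is failing less often
# than the acceptable rate, i.e. approaching proficiency.
```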


Subjects
Internship and Residency, Microsurgery, Humans, Microsurgery/education, Clinical Competence, Curriculum, Learning Curve