1.
J Endourol; 29(11): 1295-301, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26057232

ABSTRACT

BACKGROUND: A surgeon's skill in the operating room has been shown to correlate with a patient's clinical outcome. Prompt, accurate assessment of surgical skill remains a challenge, in part because expert faculty reviewers are often unavailable. By harnessing large, readily available crowds through the Internet, rapid, accurate, and low-cost assessments may be achieved. We hypothesized that assessments provided by crowd workers correlate highly with expert surgeons' assessments. MATERIALS AND METHODS: A group of 49 surgeons from two hospitals performed two dry-laboratory robotic surgical skill assessment tasks. The performances were video recorded and posted online for evaluation via Amazon Mechanical Turk. The surgical tasks in each video were graded by crowd workers (n=30) and experts (n=3) using a modified Global Evaluative Assessment of Robotic Skills (GEARS) grading tool, and the mean scores were compared using the Cronbach's alpha statistic. RESULTS: GEARS evaluations from the crowd were obtained for each video and task and compared with the GEARS ratings from the expert surgeons. The crowd-based performance scores agreed with the experts' assessments, with a Cronbach's alpha of 0.84 and 0.92 for the two tasks, respectively. CONCLUSION: Crowd workers' assessments of basic robotic surgical dry-laboratory tasks agreed closely with the scores provided by expert surgeons. Crowd responses also cost less and were much faster to acquire. This study provides evidence that crowds may offer an adjunctive method for rapidly providing skills feedback to training and practicing surgeons.


Subject(s)
Clinical Competence; Crowdsourcing; Internet; Robotic Surgical Procedures/standards; Video Recording; Adult; Female; General Surgery/education; General Surgery/standards; Gynecology/education; Gynecology/standards; Humans; Internship and Residency; Male; Middle Aged; Obstetrics/education; Obstetrics/standards; Reproducibility of Results; Robotic Surgical Procedures/education; Urology/education; Urology/standards
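The agreement statistic used throughout these studies is Cronbach's alpha. A minimal sketch of how it can be computed for an (observations x raters) score matrix, here treating the per-video mean crowd score and mean expert score as two "raters"; the data are simulated stand-ins, not the study's:

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for an (observations x raters) score matrix."""
        k = scores.shape[1]                          # number of raters/columns
        rater_vars = scores.var(axis=0, ddof=1)      # variance of each rater's scores
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of the row sums
        return (k / (k - 1)) * (1.0 - rater_vars.sum() / total_var)

    # Simulated example: 49 videos, each with a mean crowd score and a mean
    # expert score on a GEARS-like scale (real per-video scores are not given).
    rng = np.random.default_rng(1)
    expert = rng.uniform(10, 25, size=49)
    crowd = expert + rng.normal(0.0, 1.5, size=49)
    print(f"alpha = {cronbach_alpha(np.column_stack([crowd, expert])):.2f}")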
2.
J Endourol; 29(10): 1183-8, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25867006

ABSTRACT

BACKGROUND: Objective quantification of surgical skill is imperative as we enter a healthcare environment of quality improvement and performance-based reimbursement. The gold-standard tools are infrequently used because of their time intensiveness, cost inefficiency, and lack of standard practices. We hypothesized that valid performance scores of surgical skill can be obtained through crowdsourcing. METHODS: Twelve surgeons of varying robotic surgical experience performed live porcine robot-assisted urinary bladder closures. Blinded video-recorded performances were scored by expert surgeon graders and by crowd workers on Amazon's Mechanical Turk using the Global Evaluative Assessment of Robotic Skills tool, which assesses five technical skill domains. Seven expert graders and 50 unique Mechanical Turk workers (each paid $0.75 per survey) evaluated each video. Global assessment scores were analyzed for correlation and agreement. RESULTS: Six hundred Mechanical Turk workers completed the surveys in less than 5 hours, whereas the seven surgeon graders took 14 days. The duration of the video clips ranged from 2 to 11 minutes. The correlation coefficient between the crowd workers' and expert graders' scores was 0.95, and Cronbach's alpha was 0.93. Inter-rater reliability among the surgeon graders was 0.89. CONCLUSION: Crowdsourced surgical skills assessment rapidly and inexpensively produced scores that agreed with the global performance scores given by expert surgeon graders. The crowdsourcing method may provide surgical educators and medical institutions with a boundless pool of procedural skills assessors to efficiently quantify technical skills for use in trainee advancement and hospital quality improvement.


Subject(s)
Clinical Competence/standards; Crowdsourcing/methods; Laparoscopy/methods; Robotic Surgical Procedures/methods; Urinary Bladder/surgery; Urologic Surgical Procedures/methods; Animals; Humans; Reproducibility of Results; Surveys and Questionnaires; Swine; Video Recording
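The correlation reported here is a standard Pearson test on the two groups' per-video scores. A brief sketch using scipy; the twelve simulated per-video means below stand in for the study's data:

    import numpy as np
    from scipy.stats import pearsonr

    # Simulated per-video mean GEARS scores: one mean over the 50 crowd
    # workers and one over the 7 expert graders for each of 12 videos.
    rng = np.random.default_rng(42)
    expert_means = rng.uniform(10, 25, size=12)
    crowd_means = expert_means + rng.normal(0.0, 1.0, size=12)

    r, p = pearsonr(crowd_means, expert_means)
    print(f"Pearson r = {r:.2f} (p = {p:.1e})")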
3.
J Surg Res; 196(2): 302-6, 2015 Jun 15.
Article in English | MEDLINE | ID: mdl-25888499

ABSTRACT

BACKGROUND: Objective assessment of surgical skills is resource intensive and requires valuable time from expert surgeons. The goal of this study was to assess the ability of a large group of laypersons, using a crowdsourcing tool, to grade a surgical procedure (cricothyrotomy) performed on a simulator. Grading included assessment of the entire procedure via an objective assessment of technical skills survey. MATERIALS AND METHODS: Two groups of graders were recruited: (1) Amazon Mechanical Turk users and (2) three expert surgeons from the University of Washington Department of Otolaryngology. Graders were presented with videos of participants performing the procedure on the simulator and were asked to grade each video using the objective assessment of technical skills questions. Mechanical Turk users were paid $0.50 for each completed survey. It took 10 hours to obtain all responses from 30 Mechanical Turk users for 26 training participants (26 videos/tasks), whereas it took 60 days for the three expert surgeons to complete the same 26 tasks. RESULTS: The assessment of surgical performance by the group of laypersons (n = 30) matched the assessment by the group of expert surgeons (n = 3), with a good level of agreement (Cronbach alpha coefficient = 0.83). CONCLUSIONS: We found crowdsourcing to be an efficient, accurate, and inexpensive method for skills assessment, with a good level of agreement with experts' grading.


Subject(s)
Clinical Competence/standards; Crowdsourcing; Surgical Procedures, Operative/standards; Humans; Surgical Procedures, Operative/education
4.
J Endourol; 29(5): 604-9, 2015 May.
Article in English | MEDLINE | ID: mdl-25356517

ABSTRACT

BACKGROUND: Crowdsourcing is the practice of obtaining services from a large group of people, typically an online community. Validated methods of evaluating surgical video are time intensive, expensive, and require the participation of multiple expert surgeons. We sought to obtain valid performance scores of urologic trainees and faculty on a dry-laboratory robotic surgery task module by crowdsourcing through a web-based grading tool called Crowd-Sourced Assessment of Technical Skills (CSATS). METHODS: IRB approval was granted to test the technical skills grading accuracy of Amazon.com Mechanical Turk™ crowd workers against three expert faculty surgeon graders. The two groups assessed dry-laboratory robotic surgical suturing performances of three urology residents (PGY-2, -4, -5) and two faculty members using three performance domains from the validated Global Evaluative Assessment of Robotic Skills assessment tool. RESULTS: After an average of 2 hours 50 minutes, each of the five videos had received 50 crowd-worker assessments. The inter-rater reliability (IRR) between the surgeons and the crowd was 0.91 by Cronbach's alpha (confidence interval = 0.20-0.92), indicating "excellent" agreement between the two groups. The crowd was able to discriminate surgical level, and both the crowd and the expert faculty surgeon graders scored one senior trainee's performance above a faculty member's performance. CONCLUSION: Surgery-naive crowd workers can rapidly and accurately assess varying levels of surgical skill relative to a panel of faculty raters. The crowd's feedback was rapid and inexpensive. CSATS may be a valuable adjunct to surgical simulation training as requirements for more granular and iterative performance tracking of trainees become mandated and commonplace.


Subject(s)
Clinical Competence; Internship and Residency; Robotic Surgical Procedures/education; Simulation Training; Suture Techniques/education; Urology/education; Video Recording; Crowdsourcing/methods; Educational Measurement/methods; Humans; Physicians; Reproducibility of Results; Urologic Surgical Procedures/education
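The wide confidence interval reported here (0.20-0.92) is unsurprising with only five graded videos. One common way to obtain such an interval is a percentile bootstrap over the videos; a sketch under that assumption (the abstract does not state which method was actually used), again with simulated stand-in scores:

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for an (observations x raters) matrix."""
        k = scores.shape[1]
        return (k / (k - 1)) * (
            1.0 - scores.var(axis=0, ddof=1).sum() / scores.sum(axis=1).var(ddof=1)
        )

    def bootstrap_alpha_ci(scores, n_boot=10_000, level=0.95, seed=0):
        """Percentile-bootstrap CI for alpha, resampling videos with replacement."""
        rng = np.random.default_rng(seed)
        n = scores.shape[0]
        with np.errstate(divide="ignore", invalid="ignore"):
            boot = np.array([
                cronbach_alpha(scores[rng.integers(0, n, size=n)])
                for _ in range(n_boot)
            ])
        boot = boot[np.isfinite(boot)]  # drop degenerate (zero-variance) resamples
        tail = (1.0 - level) / 2.0 * 100.0
        return tuple(np.percentile(boot, [tail, 100.0 - tail]))

    # Simulated stand-in: 5 videos, 2 columns (crowd mean, expert mean).
    rng = np.random.default_rng(3)
    expert = rng.uniform(10, 25, size=5)
    scores = np.column_stack([expert + rng.normal(0, 2, 5), expert])
    print(bootstrap_alpha_ci(scores))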
5.
J Minim Invasive Gynecol; 22(3): 483-8, 2015.
Article in English | MEDLINE | ID: mdl-25543068

ABSTRACT

OBJECTIVE: To compare the efficacy of simulation-based training between the Mimic dV-Trainer and traditional dry-lab da Vinci robot training. DESIGN: A prospective randomized study analyzing the performance of 20 robotics-naive participants. Participants completed an online da Vinci Intuitive Surgical didactic training module, followed by training in the use of the da Vinci standard surgical robot; spatial ability tests were also administered. Participants were randomly assigned to one of two training conditions: performing three Fundamentals of Laparoscopic Surgery dry-lab tasks on the da Vinci, or performing four dV-Trainer tasks. Participants in both groups performed all tasks to an empirically established proficiency criterion. Participants then performed the transfer task, a cystotomy closure using the da Vinci robot on a live animal (swine) model. Performance of the robotic tasks was blindly assessed by a panel of experienced surgeons using objective tracking data and the validated Global Evaluative Assessment of Robotic Skills (GEARS) structured assessment tool. RESULTS: No statistically significant difference in surgeon performance was found between the two training conditions, dV-Trainer and da Vinci robot. A 95% confidence interval for the difference in means (-0.803 to 0.543) indicated that the two methods are unlikely to differ to an extent that would be clinically meaningful. CONCLUSION: Based on these results, a curriculum on the dV-Trainer is comparable to traditional da Vinci robot training. Training on a virtual reality system may therefore be an alternative to live animal training for future robotic surgeons.


Subject(s)
Computer Simulation; Laparoscopy; Robotics; Adult; Animals; Clinical Competence; Curriculum; Cystotomy/methods; Educational Measurement; Humans; Laparoscopy/education; Laparoscopy/methods; Models, Animal; Pilot Projects; Program Evaluation; Prospective Studies; Swine; Task Performance and Analysis; User-Computer Interface
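The interval quoted for the difference in means can be produced by a standard two-sample t procedure. A sketch using Welch's method (an assumption, since the paper does not name its approach), with invented GEARS-style transfer-task scores for the two arms:

    import numpy as np
    from scipy import stats

    def mean_diff_ci(a, b, level=0.95):
        """Welch two-sample confidence interval for mean(a) - mean(b)."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
        diff, se = a.mean() - b.mean(), np.sqrt(va + vb)
        # Welch-Satterthwaite degrees of freedom
        df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
        t = stats.t.ppf((1.0 + level) / 2.0, df)
        return diff - t * se, diff + t * se

    # Invented transfer-task scores for the two arms (n = 10 each).
    dv_trainer = [3.1, 3.5, 2.8, 3.9, 3.2, 3.6, 2.9, 3.4, 3.0, 3.7]
    da_vinci = [3.3, 3.4, 3.0, 3.8, 3.1, 3.5, 3.2, 3.6, 2.9, 3.5]
    print(mean_diff_ci(dv_trainer, da_vinci))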
6.
J Surg Res; 192(2): 329-38, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25108691

ABSTRACT

BACKGROUND: Laparoscopic psychomotor skills are challenging to learn and to evaluate objectively. The Fundamentals of Laparoscopic Surgery (FLS) program provides a popular, inexpensive, and widely studied method for evaluating basic laparoscopic skills. With an emphasis on training safety before efficiency, we present data that explore the metrics in the FLS curriculum. MATERIALS AND METHODS: A multi-institutional (n = 3) cross-sectional study enrolled subjects (n = 98) of all laparoscopic skill levels to perform FLS tasks in an instrumented box trainer. Recorded task videos were subsequently evaluated by faculty reviewers (n = 2) blinded to subject identity using a modified Objective Structured Assessment of Technical Skills (OSATS) protocol. FLS scores were computed for each completed task and compared with demographically established skill levels (training level and number of procedures), video review scores, and objective performance metrics including path length, economy of motion, and peak grasping force. RESULTS: The three criteria used to determine expert skill (training and experience level, blinded faculty review via OSATS, and FLS scores) disagreed, failing to establish concurrent validity for identifying "true experts" in FLS tasks. FLS scores exhibited near-perfect correlation with task time for all three tasks (Pearson r = 0.99, 1.00, and 1.00; p < 0.00000001). FLS error penalties had a negligible effect on FLS scores. Peak grasping force did not correlate with task time or FLS scores. CONCLUSIONS: FLS technical skills scores offered negligible benefit beyond the measurement of task time. FLS scoring is weighted more toward speed than precision and may not adequately address poor tissue-handling skills, especially excessive grasping force. Categories of experience or training level may not form a suitable basis for establishing proficiency thresholds or for construct validity studies of technical skills.


Subject(s)
Computer-Assisted Instruction/instrumentation; Education, Medical/methods; Laparoscopy/education; Psychomotor Performance; Surgeons/education; Computer-Assisted Instruction/methods; Computer-Assisted Instruction/standards; Education, Medical/standards; Educational Measurement; Humans; Reproducibility of Results; Students, Medical; Suture Techniques/education; Time and Motion Studies; User-Computer Interface
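The near-perfect score-time correlation is what one would expect of any composite score dominated by a time term. The actual FLS formula is not public, so the sketch below uses a generic "time cutoff minus penalty" score, with simulated data, purely to illustrate the effect the authors describe:

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)

    # Simulated cohort: 98 task times (s) and small error penalties.
    times = rng.uniform(60.0, 300.0, size=98)
    penalties = rng.uniform(0.0, 10.0, size=98)  # small relative to time spread

    # Generic time-dominated composite: faster (and cleaner) is better.
    scores = (300.0 - times) - penalties

    r, _ = pearsonr(scores, times)
    print(f"score vs. time: r = {r:.3f}")  # |r| near 1: penalties barely matter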
7.
J Urol; 188(3): 919-23, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22819403

ABSTRACT

PURPOSE: Rapid adoption of robot-assisted surgery has outpaced our ability to train novice roboticists. Objective metrics are required to adequately assess robotic surgical skills, yet surrogates for proficiency, such as economy of motion and tool path metrics, are not readily accessible directly from the da Vinci® robot system. The trakSTAR™ Tool Tip Tracker is a widely available, cost-effective electromagnetic position-sensing device with which objective proficiency metrics can be quantified. We validated a robotic surgery curriculum using the trakSTAR device to objectively capture robotic task proficiency metrics. MATERIALS AND METHODS: In an institutional review board-approved study, 10 subjects were recruited from 2 surgical experience groups (novice and experienced). All subjects completed 3 technical skills modules (block transfer, intracorporeal suturing/knot tying from the Fundamentals of Laparoscopic Surgery, and ring tower transfer) using the da Vinci robot with the trakSTAR device affixed to the robotic instruments. Recorded objective metrics included task time and path length, which were used to calculate economy of motion. Student's t tests were performed using STATA®. RESULTS: The novice and experienced groups consisted of 5 subjects each. The experienced group outperformed the novice group in all 3 tasks. Experienced surgeons described the simulator platform as useful for training and agreed with incorporating it into a residency curriculum. CONCLUSIONS: Robotic surgery curricula can be validated with an off-the-shelf instrument tracking system. This platform allows surgical educators to objectively assess trainees and may provide credentialing offices with a means of objectively assessing any surgical staff member seeking robotic surgery privileges at an institution.


Subject(s)
Curriculum; Minimally Invasive Surgical Procedures/education; Robotics/education; Urologic Surgical Procedures/education; Urologic Surgical Procedures/methods; Electromagnetic Phenomena; Humans; Prospective Studies
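A sketch of the two kinematic metrics named in this study, computed from a stream of tracked tool-tip positions. The abstract says only that economy of motion was calculated from task time and path length, so the path-length-per-second convention below is an assumption, as is the simulated tracker stream:

    import numpy as np

    def path_length(positions: np.ndarray) -> float:
        """Total tool-tip path length for an (n_samples x 3) array of
        x, y, z positions (e.g., in mm) from an electromagnetic tracker."""
        return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

    def economy_of_motion(positions: np.ndarray, task_time_s: float) -> float:
        """Path length per unit task time (mm/s); lower suggests more
        economical movement for the same completed task."""
        return path_length(positions) / task_time_s

    # Simulated random-walk stream standing in for trakSTAR samples.
    rng = np.random.default_rng(7)
    pts = np.cumsum(rng.normal(0.0, 0.5, size=(2000, 3)), axis=0)
    print(f"path length: {path_length(pts):.1f} mm")
    print(f"economy of motion: {economy_of_motion(pts, 120.0):.2f} mm/s")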