Results 1 - 2 of 2
1.
Surg Endosc; 37(9): 7170-7177, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37336843

ABSTRACT

BACKGROUND: Laparoscopic training remains inaccessible to surgeons in low- and middle-income countries, limiting its widespread adoption. We developed a novel tool for assessing laparoscopic appendectomy skills through ALL-SAFE, a low-cost laparoscopy training system.

METHODS: This pilot study in Ethiopia, Cameroon, and the USA assessed appendectomy skills using the ALL-SAFE training system. Performance was measured with the ALL-SAFE verification of proficiency tool (APPY-VOP), consisting of a checklist, a modified Objective Structured Assessment of Technical Skills (m-OSATS), and a final rating. Twenty participants (novice, n = 11; intermediate, n = 8; expert, n = 1) completed an online module covering appendicitis management and psychomotor skills in laparoscopic appendectomy. After viewing an expert skills demonstration video, participants recorded their own performance within ALL-SAFE. Using the APPY-VOP, participants rated their own video and three peer videos. We used the Kruskal-Wallis test and a Many-Facet Rasch Model to evaluate (i) the capacity of APPY-VOP to differentiate performance levels, (ii) correlation among the three APPY-VOP components, and (iii) rating differences across groups.

RESULTS: Checklist scores increased from novice (M = 21.02) to intermediate (M = 23.64) and expert (M = 28.25), with differentiation between experts and novices, P = 0.005. All five m-OSATS domains, as well as the global summed, total summed, and final ratings, discriminated across all performance levels (P < 0.001). APPY-VOP final ratings adequately discriminated Competent (M = 2.0), Borderline (M = 1.8), and Not Competent (M = 1.4) performances, χ2(2, 85) = 32.3, P = 0.001. There was a positive correlation between ALL-SAFE checklist and m-OSATS summed scores, r(83) = 0.63, P < 0.001. Comparison of ratings suggested no differences across expertise levels (P = 0.69) or locations (P = 0.66).
CONCLUSION: APPY-VOP effectively discriminated between novice and expert performance of laparoscopic appendectomy skills in a simulated setting. Scoring alignment across raters suggests consistent evaluation, independent of expertise, supporting the use of APPY-VOP across all skill levels within a peer rating system. Future studies will focus on correlating proficiency with clinical practice and scaling ALL-SAFE to other settings.
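The group comparison and correlation analyses described in the abstract can be sketched as follows. This is a minimal illustration using SciPy; the score arrays are synthetic stand-ins, not study data, and the exact test configuration in the study may differ.

```python
# Sketch of the abstract's statistical tests on synthetic scores
# (all arrays below are illustrative stand-ins, not study data).
from scipy.stats import kruskal, pearsonr

# Checklist scores by experience level (synthetic).
novice = [20, 21, 22, 21, 20, 22, 21, 20, 21, 22, 21]
intermediate = [23, 24, 23, 24, 23, 24, 23, 25]
expert = [28]

# Kruskal-Wallis: do checklist scores differ across experience levels?
h_stat, p_value = kruskal(novice, intermediate, expert)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")

# Correlation between checklist and m-OSATS summed scores (synthetic pairs).
checklist = [20, 22, 24, 26, 28, 30]
osats = [15, 16, 20, 22, 25, 27]
r, p = pearsonr(checklist, osats)
print(f"r = {r:.2f}, p = {p:.4f}")
```

The Kruskal-Wallis test is nonparametric, which suits small ordinal rating samples like these; the Many-Facet Rasch Model mentioned in the abstract requires specialized software and is not sketched here.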


Subject(s)
Laparoscopy, Surgeons, Humans, Pilot Projects, Appendectomy, Laparoscopy/education, Surgeons/education, Clinical Competence
2.
J Surg Educ ; 81(2): 267-274, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38160118

ABSTRACT

OBJECTIVE: Laparoscopic surgical skill assessment and machine learning are often inaccessible in low- and middle-income countries (LMICs). Our team developed a low-cost laparoscopic training system to teach and assess the psychomotor skills required in laparoscopic salpingostomy in LMICs, and performed video review using artificial intelligence (AI) to assess global surgical technique. The objective of this study was to assess the validity of AI-generated scores for laparoscopic simulation videos by comparing their accuracy against human-generated scores.

DESIGN: Seventy-four surgical simulation videos were collected and graded by human raters using a modified Objective Structured Assessment of Technical Skills (OSATS). The videos were then analyzed via AI using three time- and distance-based measures of the laparoscopic instruments: path length, dimensionless jerk, and standard deviation of tool position. Predicted scores were generated using 5-fold cross-validation with K-Nearest-Neighbors classifiers.

SETTING: Surgical novices and experts from a variety of hospitals in Ethiopia, Cameroon, Kenya, and the United States contributed 74 laparoscopic salpingostomy simulation videos.

RESULTS: Overall accuracy of AI compared to human assessment ranged from 65% to 77%. There were no statistical differences in rank mean scores for three domains (Flow of Operation, Respect for Tissue, and Economy of Motion), while there were significant differences in ratings for Instrument Handling, Overall Performance, and the total summed score of all five domains (Summed). Estimated effect sizes were all less than 0.11, indicating a very small practical effect. The estimated intraclass correlation coefficient (ICC) for Summed was 0.72, indicating moderate agreement between AI and human scores.

CONCLUSIONS: AI-based video review of global performance characteristics was similar to human review in our laparoscopic training system. Machine learning may help fill an educational gap in LMICs where direct apprenticeship may not be feasible.
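The pipeline described in the design section, motion features (path length, dimensionless jerk, standard deviation of tool position) fed to a 5-fold cross-validated K-Nearest-Neighbors classifier, can be sketched roughly as below. The trajectories, labels, sampling interval, and the dimensionless-jerk normalization are all illustrative assumptions, not the study's actual data or implementation.

```python
# Hedged sketch of motion-feature extraction plus 5-fold CV KNN, as
# described in the abstract. All data here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def motion_features(positions, dt=1 / 30):
    """Path length, dimensionless jerk, and positional std for one
    tracked instrument-tip trajectory (N x 2 array). dt is an assumed
    sampling interval."""
    diffs = np.diff(positions, axis=0)
    path_length = np.linalg.norm(diffs, axis=1).sum()
    vel = diffs / dt
    jerk = np.diff(vel, n=2, axis=0) / dt**2  # second diff of velocity
    duration = len(positions) * dt
    # One common dimensionless-jerk normalization (an assumption here):
    # integral of squared jerk, scaled by duration^5 / path_length^2.
    dj = (jerk**2).sum() * dt * duration**5 / path_length**2
    pos_std = positions.std(axis=0).mean()
    return [path_length, dj, pos_std]

rng = np.random.default_rng(0)
# Synthetic trajectories: label 0 is smoother ("expert"), label 1 jerkier.
X, y = [], []
for label, noise in [(0, 0.02), (1, 0.2)]:
    for _ in range(20):
        t = np.linspace(0, 1, 120)
        base = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]
        traj = base + rng.normal(0, noise, base.shape)
        X.append(motion_features(traj))
        y.append(label)

# 5-fold cross-validated KNN, mirroring the study's evaluation design.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In practice the features would be computed from instrument positions tracked in the video frames, and the unscaled features have very different magnitudes, so a real pipeline would likely standardize them before the distance-based KNN step.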


Subject(s)
Internship and Residency, Laparoscopy, Female, Humans, Artificial Intelligence, Laparoscopy/education, Computer Simulation, Educational Measurement/methods, Clinical Competence