Results 1 - 3 of 3
1.
Laryngoscope ; 134(8): 3664-3672, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38651539

ABSTRACT

OBJECTIVE: Accurate prediction of hospital length of stay (LOS) following surgical management of oral cavity cancer (OCC) may improve patient counseling, hospital resource utilization, and cost. The objective of this study was to compare the performance of statistical models, a machine learning (ML) model, and the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) calculator in predicting LOS following surgery for OCC.
MATERIALS AND METHODS: A retrospective multicenter database study was performed at two major academic head and neck cancer centers. Patients with OCC who underwent major free flap reconstructive surgery between January 2008 and June 2019 were selected. Data were pooled and split into training and validation datasets. Statistical and ML models were developed, and performance was evaluated by comparing predicted and actual LOS using correlation coefficients and percent accuracy.
RESULTS: In total, 837 patients were selected; mean age was 62.5 ± 11.7 [SD] years, and 67% were male. The ML model demonstrated the best accuracy (validation correlation 0.48, 4-day accuracy 70%), compared with the statistical models: multivariate analysis (0.45, 67%) and least absolute shrinkage and selection operator (0.42, 70%). All were superior to the ACS-NSQIP calculator (0.23, 59%).
CONCLUSION: We developed statistical and ML models that predicted LOS following major free flap reconstructive surgery for OCC. Our models demonstrated superior predictive performance to the ACS-NSQIP calculator. The ML model identified several novel predictors of LOS. These models must be validated at other institutions before being used in clinical practice.
LEVEL OF EVIDENCE: 3 Laryngoscope, 134:3664-3672, 2024.
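The study's two evaluation metrics, the correlation between predicted and actual LOS and the percentage of predictions falling within 4 days of the true value, can be sketched as follows. The function name and the example data are illustrative only, not taken from the study:

```python
import numpy as np

def evaluate_los_predictions(actual, predicted, tolerance_days=4):
    """Score LOS predictions with the study's two reported metrics:
    Pearson correlation and percent of predictions within a tolerance."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # Pearson correlation between predicted and actual LOS
    r = np.corrcoef(actual, predicted)[0, 1]
    # Fraction of predictions within `tolerance_days` of the true LOS
    accuracy = np.mean(np.abs(actual - predicted) <= tolerance_days)
    return r, accuracy

# Hypothetical predicted vs. actual LOS values, in days
actual = [10, 12, 8, 15, 20, 9]
predicted = [11, 10, 9, 18, 16, 10]
r, acc = evaluate_los_predictions(actual, predicted)
```

Reporting both metrics matters because a model can correlate well with outcomes while still being off by clinically meaningful margins; the 4-day accuracy captures that directly.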


Subjects
Length of Stay , Machine Learning , Models, Statistical , Mouth Neoplasms , Humans , Male , Retrospective Studies , Female , Mouth Neoplasms/surgery , Middle Aged , Length of Stay/statistics & numerical data , Aged , Quality Improvement , Plastic Surgery Procedures/statistics & numerical data , Plastic Surgery Procedures/methods , Free Tissue Flaps
2.
JAMA Netw Open ; 3(3): e201664, 2020 03 02.
Article in English | MEDLINE | ID: mdl-32227178

ABSTRACT

Importance: When evaluating surgeons in the operating room, experienced physicians must rely on live or recorded video to assess the surgeon's technical performance, an approach prone to subjectivity and error. Owing to the large number of surgical procedures performed daily, it is infeasible to review every procedure; as a result, a tremendous amount of invaluable performance data that could otherwise improve surgical safety is lost.
Objective: To evaluate a framework for assessing surgical video clips by categorizing them according to the surgical step being performed and the surgeon's level of competence.
Design, Setting, and Participants: This quality improvement study assessed 103 video clips of 8 surgeons of various levels performing knot tying, suturing, and needle passing from the Johns Hopkins University-Intuitive Surgical Gesture and Skill Assessment Working Set. Data were collected before 2015, and data analysis took place from March to July 2019.
Main Outcomes and Measures: Deep learning models were trained to estimate categorical outputs such as performance level (ie, novice, intermediate, and expert) and surgical actions (ie, knot tying, suturing, and needle passing). The efficacy of these models was measured using precision, recall, and model accuracy.
Results: The proposed architectures accurately recognized surgical actions and assessed performance using only video input. The embedding representation had a mean (root mean square error [RMSE]) precision of 1.00 (0) for suturing, 0.99 (0.01) for knot tying, and 0.91 (0.11) for needle passing, resulting in a mean (RMSE) precision of 0.97 (0.01). Its mean (RMSE) recall was 0.94 (0.08) for suturing, 1.00 (0) for knot tying, and 0.99 (0.01) for needle passing, resulting in a mean (RMSE) recall of 0.98 (0.01). It also estimated scores on the Objective Structured Assessment of Technical Skill Global Rating Scale categories, with a mean (RMSE) precision of 0.85 (0.09) for novice level, 0.67 (0.07) for intermediate level, and 0.79 (0.12) for expert level, resulting in a mean (RMSE) precision of 0.77 (0.04). Its mean (RMSE) recall was 0.85 (0.05) for novice level, 0.69 (0.14) for intermediate level, and 0.80 (0.13) for expert level, resulting in a mean (RMSE) recall of 0.78 (0.03).
Conclusions and Relevance: The proposed models and the accompanying results illustrate that deep machine learning can identify associations in surgical video clips. These are the first steps toward creating a feedback mechanism that would allow surgeons to learn from their experiences and refine their skills.
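The per-class precision and recall reported above (computed for each surgical action, then averaged) can be illustrated with a minimal sketch. The clip labels below are hypothetical and the function is not the study's implementation:

```python
from collections import Counter

def precision_recall_per_class(y_true, y_pred):
    """Per-class precision (TP / predicted-as-class) and recall
    (TP / actually-in-class) over categorical labels, as used to
    report action-recognition results per surgical gesture."""
    classes = sorted(set(y_true) | set(y_pred))
    tp = Counter()                 # true positives per class
    pred_count = Counter(y_pred)   # how often each class was predicted
    true_count = Counter(y_true)   # how often each class truly occurred
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
    precision = {c: tp[c] / pred_count[c] if pred_count[c] else 0.0
                 for c in classes}
    recall = {c: tp[c] / true_count[c] if true_count[c] else 0.0
              for c in classes}
    return precision, recall

# Hypothetical clip-level labels for three surgical actions
y_true = ["suturing", "knot_tying", "needle", "suturing", "knot_tying", "needle"]
y_pred = ["suturing", "knot_tying", "needle", "suturing", "needle", "needle"]
prec, rec = precision_recall_per_class(y_true, y_pred)
```

Reporting precision and recall per class, rather than overall accuracy alone, exposes class-specific weaknesses such as the lower needle-passing precision seen in the study.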


Subjects
Clinical Competence/standards , Deep Learning , Image Processing, Computer-Assisted/methods , Surgeons , Surgical Procedures, Operative , Algorithms , Humans , Surgeons/classification , Surgeons/education , Surgeons/standards , Surgical Instruments , Surgical Procedures, Operative/classification , Surgical Procedures, Operative/education , Surgical Procedures, Operative/standards , Video Recording
3.
J Surg Educ ; 76(6): 1629-1639, 2019.
Article in English | MEDLINE | ID: mdl-31272846

ABSTRACT

OBJECTIVE: The goal of this study was to systematically review the literature on automated methods for evaluating technical skill in surgery.
BACKGROUND: The classic apprenticeship model of surgical training relies on subjective assessments of technical skill. Automated methods of evaluating surgical technical skill, however, have recently been studied. These methods offer a more objective, versatile, and analytical way to evaluate a surgical trainee's technical skill.
STUDY DESIGN: A literature search of the Ovid Medline, Web of Science, and EMBASE Classic databases was performed. Articles evaluating automated methods for surgical technical skill assessment were abstracted. The quality of all included studies was assessed using the Medical Education Research Study Quality Instrument (MERSQI).
RESULTS: A total of 1715 articles were identified, 76 of which were selected for final analysis. An automated-methods pathway was defined that included kinematic and computer vision data extraction methods. Automated methods included tool motion tracking, hand motion tracking, eye motion tracking, and muscle contraction analysis. Machine learning, deep learning, and performance classification were then used to analyze these data. These methods of surgical skill assessment were applied both in the operating room and in simulated environments. The average MERSQI score across all studies was 10.86 (maximum possible score, 18).
CONCLUSIONS: Automated technical skill assessment is a growing field in surgical education. We found quality studies evaluating these techniques across many environments and surgeries. More research is needed to further validate these techniques and implement them in surgical curricula.
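As a rough illustration of the pathway described above, kinematic features extracted from motion data and then fed to a classifier, here is a minimal sketch using synthetic 2-D hand trajectories and a nearest-centroid classifier. The features, noise levels, and labels are all invented for illustration and do not correspond to any specific reviewed study:

```python
import numpy as np

rng = np.random.default_rng(0)

def kinematic_features(trajectory):
    """Two simple motion metrics often used in automated skill
    assessment: total path length and mean step speed."""
    steps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    return np.array([steps.sum(), steps.mean()])

def make_trajectory(noise):
    """Synthetic hand path: a straight line plus Gaussian jitter.
    Smaller noise stands in for smoother, more economical motion."""
    t = np.linspace(0, 1, 50)[:, None]
    return np.hstack([t, t]) + rng.normal(0, noise, (50, 2))

# Experts produce shorter, smoother paths than novices (assumption)
novice = [kinematic_features(make_trajectory(0.05)) for _ in range(20)]
expert = [kinematic_features(make_trajectory(0.005)) for _ in range(20)]

# Classify a held-out noisy trajectory by nearest feature centroid
centroids = {"novice": np.mean(novice, axis=0),
             "expert": np.mean(expert, axis=0)}
test_feat = kinematic_features(make_trajectory(0.05))
label = min(centroids, key=lambda c: np.linalg.norm(test_feat - centroids[c]))
```

Real systems in the reviewed literature replace the synthetic trajectories with tool, hand, or eye tracking data and the centroid rule with trained machine learning or deep learning models, but the feature-extraction-then-classification structure is the same.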


Subjects
Automation , Clinical Competence , Education, Medical, Graduate/methods , Educational Measurement/methods , General Surgery/education