Results 1 - 9 of 9
1.
Surg Endosc; 38(1): 158-170, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37945709

ABSTRACT

BACKGROUND: Video-based review is paramount for operative performance assessment but can be laborious when performed manually. Hierarchical Task Analysis (HTA) is a well-known method that divides any procedure into phases, steps, and tasks. HTA requires large datasets of videos with consistent definitions at each level. Our aim was to develop an AI model for automated segmentation of phases, steps, and tasks in laparoscopic cholecystectomy videos using a standardized HTA. METHODS: A total of 160 laparoscopic cholecystectomy videos were collected from the publicly available cholec80 dataset and from our own institution. All videos were annotated for the beginning and end of a predefined set of phases, steps, and tasks. Deep learning (DL) models were then separately developed and trained for the three levels using a 3D Convolutional Neural Network architecture. RESULTS: Four phases, eight steps, and nineteen tasks were defined through expert consensus. The training set for the DL models contained 100 videos, with an additional 20 videos for hyperparameter optimization and tuning. The remaining 40 videos were used to test performance. Overall accuracies for phases, steps, and tasks were 0.90, 0.81, and 0.65, with average F1 scores of 0.86, 0.76, and 0.48, respectively. Control-of-bleeding and bile-spillage tasks were the most variable in definition, operative management, and clinical relevance. CONCLUSION: Hierarchical task analysis for surgical video analysis has numerous applications in AI-based automated systems. Our results show that this tiered method of task analysis can successfully be used to train a DL model.
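A minimal sketch of the kind of clip-level classifier this abstract describes: a small 3D CNN in PyTorch mapping a short video clip to one of n classes, with one model per hierarchy level (4 phases, 8 steps, or 19 tasks). The layer sizes, clip length, and resolution are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Toy 3D CNN: classifies a short video clip into one of n_classes
    (e.g., 4 phases, 8 steps, or 19 tasks, per the abstract)."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling over (T, H, W)
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):  # clip: (batch, 3, frames, H, W)
        x = self.features(clip).flatten(1)
        return self.classifier(x)

phase_model = Simple3DCNN(n_classes=4)              # one model per level
logits = phase_model(torch.randn(2, 3, 16, 112, 112))
print(logits.shape)                                 # torch.Size([2, 4])
```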


Subject(s)
Cholecystectomy, Laparoscopic; Deep Learning; Humans; Neural Networks, Computer; Cholecystectomy
2.
Surg Endosc; 38(6): 3241-3252, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38653899

ABSTRACT

BACKGROUND: The learning curve in minimally invasive surgery (MIS) is longer than in open surgery. Structured feedback and training in teams of two trainees have been reported to improve MIS training and performance, and annotation of surgical images and videos may prove beneficial for surgical training. This study investigated whether structured feedback and video debriefing, including annotation of the critical view of safety (CVS), have beneficial learning effects in a predefined, multi-modal MIS training curriculum in teams of two trainees. METHODS: This randomized-controlled single-center study included medical students without MIS experience (n = 80). Participants first completed a standardized, structured multi-modal MIS training curriculum. They were then randomly divided into two groups (n = 40 each), and each participant performed four laparoscopic cholecystectomies (LCs) on ex-vivo porcine livers. Students in the intervention group received structured feedback after each LC, consisting of LC performance evaluation through tutor-trainee joint video debriefing and CVS video annotation. Performance was evaluated using global and LC-specific Objective Structured Assessment of Technical Skills (OSATS) and Global Operative Assessment of Laparoscopic Skills (GOALS) scores. RESULTS: Participants in the intervention group had higher global and LC-specific OSATS as well as global and LC-specific GOALS scores than participants in the control group (25.5 ± 7.3 vs. 23.4 ± 5.1, p = 0.003; 47.6 ± 12.9 vs. 36 ± 12.8, p < 0.001; 17.5 ± 4.4 vs. 16 ± 3.8, p < 0.001; 6.6 ± 2.3 vs. 5.9 ± 2.1, p = 0.005). The intervention group achieved CVS more often than the control group (LC 1: 20 vs. 10 participants, p = 0.037; LC 2: 24 vs. 8, p = 0.001; LC 3: 31 vs. 8, p < 0.001; LC 4: 31 vs. 10, p < 0.001). CONCLUSIONS: Structured feedback and video debriefing with CVS annotation improve CVS achievement and ex-vivo porcine LC training performance as measured by OSATS and GOALS scores.
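A hedged sketch of how one of the reported group comparisons (mean ± SD with a p-value) could be reproduced. The abstract does not state which statistical test was used; Welch's t-test is assumed here purely for illustration, and the sampled scores are synthetic stand-ins for the real OSATS data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intervention = rng.normal(25.5, 7.3, size=40)   # illustrative global OSATS scores
control = rng.normal(23.4, 5.1, size=40)

t, p = stats.ttest_ind(intervention, control, equal_var=False)  # Welch's t-test
print(f"{intervention.mean():.1f} ± {intervention.std(ddof=1):.1f} vs. "
      f"{control.mean():.1f} ± {control.std(ddof=1):.1f}, p = {p:.3f}")
```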


Subject(s)
Cholecystectomy, Laparoscopic; Clinical Competence; Video Recording; Cholecystectomy, Laparoscopic/education; Humans; Swine; Animals; Female; Male; Learning Curve; Curriculum; Adult; Students, Medical; Formative Feedback; Young Adult; Feedback
3.
Surg Endosc; 38(5): 2553-2561, 2024 May.
Article in English | MEDLINE | ID: mdl-38488870

ABSTRACT

BACKGROUND: Minimally invasive surgery provides an unprecedented opportunity to review video for assessing surgical performance, but surgical video analysis is time-consuming and expensive. Deep learning provides an alternative for analysis. Robotic pancreaticoduodenectomy (RPD) is a complex and morbid operation, and surgeon technical performance of the pancreaticojejunostomy (PJ) has been associated with postoperative pancreatic fistula. In this work, we aimed to use deep learning to automatically segment PJ videos from RPD. METHODS: This was a retrospective review of prospectively collected videos (2011-2022) held in libraries at tertiary referral centers, including 111 PJ videos. Each frame of a robotic PJ video was categorized into one of 6 tasks. A 3D convolutional neural network was trained for frame-level visual feature extraction and classification. All videos were manually annotated for the start and end of each task. RESULTS: Of the 100 videos assessed, 60 were used to train the model, 10 for hyperparameter optimization, and 30 to test performance. All frames were extracted (6 frames/second) and annotated. Task-level accuracy and mean per-class F1 score were 88.01% and 85.34%, respectively. CONCLUSION: The deep learning model performed well for automated segmentation of PJ videos. Future work will focus on skills assessment and outcome prediction.
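A sketch of the frame-sampling step the abstract mentions (frames extracted at 6 frames/second) using OpenCV. The file name, target rate, and downstream handling are illustrative assumptions; this is not the authors' pipeline code.

```python
import cv2

def extract_frames(video_path: str, target_fps: float = 6.0):
    """Sample frames from a video at roughly target_fps."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if metadata missing
    step = max(int(round(native_fps / target_fps)), 1)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)   # BGR ndarray; resize/convert before the CNN
        idx += 1
    cap.release()
    return frames

frames = extract_frames("pj_video_001.mp4")   # hypothetical filename
print(len(frames), "frames sampled")
```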


Subject(s)
Deep Learning; Pancreaticojejunostomy; Robotic Surgical Procedures; Humans; Robotic Surgical Procedures/methods; Pancreaticojejunostomy/methods; Retrospective Studies; Pancreaticoduodenectomy/methods; Video Recording
4.
Ann Vasc Surg; 99: 96-104, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37914075

ABSTRACT

BACKGROUND: Adverse events during surgery can occur in part due to errors in visual perception and judgment. Deep learning, a branch of artificial intelligence (AI), has shown promise in providing real-time intraoperative guidance. This study aims to train and test the performance of a deep learning model that can identify inappropriate landing zones during endovascular aneurysm repair (EVAR). METHODS: A deep learning model was trained to identify a "No-Go" landing zone during EVAR, defined by coverage of the lowest renal artery by the stent graft. Fluoroscopic images from elective EVAR procedures performed at a single institution and from open-access sources were selected. Annotations of the "No-Go" zone were performed by trained annotators. A 10-fold cross-validation technique was used to evaluate the performance of the model against human annotations. Primary outcomes were intersection-over-union (IoU) and F1 score; secondary outcomes were pixel-wise accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS: The AI model was trained using 369 images procured from 110 different patients/videos, including 18 patients/videos (44 images) from open-access sources. For the primary outcomes, IoU and F1 were 0.43 (standard deviation ± 0.29) and 0.53 (±0.32), respectively. For the secondary outcomes, accuracy, sensitivity, specificity, NPV, and PPV were 0.97 (±0.002), 0.51 (±0.34), 0.99 (±0.001), 0.99 (±0.002), and 0.62 (±0.34), respectively. CONCLUSIONS: AI can effectively identify suboptimal areas of stent deployment during EVAR. Further directions include validating the model on datasets from other institutions and assessing its ability to predict optimal stent graft placement and clinical outcomes.
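A sketch of the study's primary outcome metrics: pixel-wise intersection-over-union (IoU) and F1/Dice between a predicted binary "No-Go" mask and a reference annotation. These are standard definitions; the masks below are illustrative only.

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray):
    """IoU and Dice/F1 between two binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0     # both masks empty: perfect agreement
    dice = 2 * inter / total if total else 1.0
    return iou, dice

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[20:50, 20:50] = True
print(iou_and_dice(pred, truth))   # IoU ≈ 0.29, Dice ≈ 0.44
```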


Subject(s)
Aortic Aneurysm, Abdominal; Blood Vessel Prosthesis Implantation; Endovascular Procedures; Humans; Aortic Aneurysm, Abdominal/diagnostic imaging; Aortic Aneurysm, Abdominal/surgery; Aortic Aneurysm, Abdominal/etiology; Blood Vessel Prosthesis Implantation/adverse effects; Blood Vessel Prosthesis Implantation/methods; Treatment Outcome; Artificial Intelligence; Endovascular Procedures/adverse effects; Endovascular Procedures/methods; Stents; Retrospective Studies; Blood Vessel Prosthesis
5.
Surg Endosc; 37(1): 402-411, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35982284

ABSTRACT

BACKGROUND: Early introduction and distributed learning have been shown to improve student comfort with basic requisite suturing skills. The need for more frequent and directed feedback, however, remains an enduring concern for both remote and in-person training. A previous in-person curriculum for our second-year medical students transitioning to clerkships was adapted to an at-home video-based assessment model due to the social distancing implications of COVID-19. We aimed to develop an artificial intelligence (AI) model to perform this video-based assessment. METHODS: Second-year medical students were asked to submit a video of a simple interrupted knot tied on a Penrose drain with instrument-tying technique after self-training to proficiency. Proficiency was defined as performing the task in under two minutes with no critical errors. All videos were first manually given a pass/fail rating and then underwent task segmentation. We developed and trained two AI models based on convolutional neural networks to identify errors (instrument holding and knot tying) and provide automated ratings. RESULTS: A total of 229 medical student videos were reviewed (150 pass, 79 fail). Among those who failed, the critical-error distribution was 15 knot-tying, 47 instrument-holding, and 17 multiple errors. After excluding low-quality videos, 216 videos were used to train the models, with k-fold cross-validation (k = 10). The accuracy of the instrument-holding model was 89% with an F1 score of 74%; for the knot-tying model, accuracy was 91% with an F1 score of 54%. CONCLUSIONS: Medical students require assessment and directed feedback to acquire surgical skills, but providing this is often time-consuming and inadequately done. AI techniques can instead be employed to perform automated surgical video analysis. Future work will optimize the current model to identify discrete errors in order to supplement video-based ratings with specific feedback.
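A sketch of the k-fold cross-validation (k = 10) protocol the abstract describes, applied to a generic pass/fail classifier with scikit-learn. The features, labels, and stand-in model are placeholders; the actual study used convolutional neural networks on video.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.linear_model import LogisticRegression

X = np.random.rand(216, 32)        # e.g., 216 videos x 32 extracted features
y = np.random.randint(0, 2, 216)   # pass/fail labels (illustrative)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="f1")
print(f"mean F1 over 10 folds: {scores.mean():.2f}")
```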


Subject(s)
COVID-19; Mentoring; Students, Medical; Humans; Artificial Intelligence; Clinical Competence; Suture Techniques/education; Videotape Recording
6.
Surg Endosc; 37(3): 2260-2268, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35918549

ABSTRACT

BACKGROUND: Many surgical adverse events, such as bile duct injuries during laparoscopic cholecystectomy (LC), occur due to errors in visual perception and judgment. Artificial intelligence (AI) can potentially improve the quality and safety of surgery, such as through real-time intraoperative decision support. GoNoGoNet is a novel AI model capable of identifying safe ("Go") and dangerous ("No-Go") zones of dissection on surgical videos of LC, yet it is unknown how GoNoGoNet performs in comparison to expert surgeons. This study aims to evaluate GoNoGoNet's ability to identify Go and No-Go zones against an external panel of expert surgeons. METHODS: A panel of high-volume surgeons from the SAGES Safe Cholecystectomy Task Force was recruited to draw free-hand annotations on frames of prospectively collected LC videos to identify the Go and No-Go zones. Expert consensus on the location of the zones was established using Visual Concordance Test pixel agreement. GoNoGoNet's identification of Go and No-Go zones was compared to the expert-derived consensus using mean F1 Dice score and pixel-wise accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS: A total of 47 frames from 25 LC videos, procured from 3 countries and 9 surgeons, were annotated simultaneously by an expert panel of 6 surgeons and by GoNoGoNet. Mean (± standard deviation) F1 Dice scores were 0.58 (0.22) and 0.80 (0.12) for Go and No-Go zones, respectively. Mean (± standard deviation) accuracy, sensitivity, specificity, PPV, and NPV for the Go zones were 0.92 (0.05), 0.52 (0.24), 0.97 (0.03), 0.70 (0.21), and 0.94 (0.04), respectively. For No-Go zones, these metrics were 0.92 (0.05), 0.80 (0.17), 0.95 (0.04), 0.84 (0.13), and 0.95 (0.05), respectively. CONCLUSIONS: AI can be used to identify safe and dangerous zones of dissection within the surgical field, with high specificity/PPV for Go zones and high sensitivity/NPV for No-Go zones. Overall, model prediction was better for No-Go zones than for Go zones. This technology may eventually be used to provide real-time guidance and minimize the risk of adverse events.
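A sketch of the pixel-agreement metrics reported here: sensitivity, specificity, PPV, and NPV computed by comparing a predicted binary zone mask against an expert-consensus mask. These are standard confusion-matrix definitions; the masks below are random placeholders.

```python
import numpy as np

def pixel_agreement(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Confusion-matrix metrics between two binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # pixels both call in-zone
    tn = np.sum(~pred & ~truth)   # pixels both call background
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

rng = np.random.default_rng(0)
pred = rng.random((120, 160)) > 0.7
truth = rng.random((120, 160)) > 0.7
print(pixel_agreement(pred, truth))
```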


Subject(s)
Cholecystectomy, Laparoscopic; Surgeons; Humans; Cholecystectomy, Laparoscopic/adverse effects; Artificial Intelligence; Data Collection; Cholecystectomy
7.
Ann Surg; 276(2): 363-369, 2022 Aug 1.
Article in English | MEDLINE | ID: mdl-33196488

ABSTRACT

OBJECTIVE: The aim of this study was to develop and evaluate the performance of artificial intelligence (AI) models that can identify safe and dangerous zones of dissection, and anatomical landmarks, during laparoscopic cholecystectomy (LC). SUMMARY BACKGROUND DATA: Many adverse events during surgery occur due to errors in visual perception and judgment leading to misinterpretation of anatomy. Deep learning, a subfield of AI, can potentially be used to provide real-time guidance intraoperatively. METHODS: Deep learning models were developed and trained to identify safe (Go) and dangerous (No-Go) zones of dissection, the liver, the gallbladder, and the hepatocystic triangle during LC. Annotations were performed by 4 high-volume surgeons. AI predictions were evaluated using 10-fold cross-validation against annotations by expert surgeons. Primary outcomes were intersection-over-union (IoU) and F1 score (validated spatial correlation indices); secondary outcomes were pixel-wise accuracy, sensitivity, and specificity, reported ± standard deviation. RESULTS: AI models were trained on 2627 random frames from 290 LC videos, procured from 37 countries, 136 institutions, and 153 surgeons. Mean IoU, F1 score, accuracy, sensitivity, and specificity for the AI to identify Go zones were 0.53 (±0.24), 0.70 (±0.28), 0.94 (±0.05), 0.69 (±0.20), and 0.94 (±0.03), respectively. For No-Go zones, these metrics were 0.71 (±0.29), 0.83 (±0.31), 0.95 (±0.06), 0.80 (±0.21), and 0.98 (±0.05), respectively. Mean IoU for identification of the liver, gallbladder, and hepatocystic triangle were 0.86 (±0.12), 0.72 (±0.19), and 0.65 (±0.22), respectively. CONCLUSIONS: AI can be used to identify anatomy within the surgical field. This technology may eventually be used to provide real-time guidance and minimize the risk of adverse events.
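A hedged sketch of a generic semantic-segmentation setup of the kind this abstract describes, assigning each pixel of a frame to one of several classes (Go zone, No-Go zone, liver, gallbladder, hepatocystic triangle, background). The off-the-shelf FCN-ResNet50 used here is a stand-in, not the authors' model, and the class count and frame size are assumptions.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, num_classes=6)   # 5 structures + background
model.eval()

frame = torch.randn(1, 3, 480, 854)                 # one video frame (illustrative size)
with torch.no_grad():
    mask_logits = model(frame)["out"]               # (1, 6, 480, 854)
pred = mask_logits.argmax(dim=1)                    # per-pixel class labels
print(pred.shape)                                   # torch.Size([1, 480, 854])
```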


Subject(s)
Cholecystectomy, Laparoscopic; Surgeons; Artificial Intelligence; Cholecystectomy, Laparoscopic/adverse effects; Gallbladder/surgery; Humans; Semantics
8.
Surg Endosc; 36(1): 679-688, 2022 Jan.
Article in English | MEDLINE | ID: mdl-33559057

ABSTRACT

BACKGROUND: The complexity of laparoscopy requires special training and assessment. Analyzing the video streamed during surgery can potentially improve surgical education, and the tedium and cost of such analysis can be dramatically reduced by, among other things, an automated tool-detection system. We propose a new multilabel classifier, called LapTool-Net, to detect the presence of surgical tools in each frame of a laparoscopic video. METHODS: The novelty of LapTool-Net is its exploitation of the correlations among the usage of different tools, and between tools and tasks, i.e., the context of tool usage. Toward this goal, the pattern of co-occurrence of the tools is used to design a decision policy for the multilabel classifier, which is based on a Recurrent Convolutional Neural Network (RCNN) trained in an end-to-end manner. In a post-processing step, the predictions are corrected by modeling the long-term order of tasks with an RNN. RESULTS: LapTool-Net was trained using the publicly available laparoscopic cholecystectomy datasets M2CAI16 and Cholec80. For M2CAI16, our exact-match accuracies (all tools in a frame predicted correctly) in online and offline modes were 80.95% and 81.84%, with per-class F1 scores of 88.29% and 90.53%. For Cholec80, the accuracies were 85.77% and 91.92%, with F1 scores of 93.10% and 96.11% for online and offline modes, respectively. CONCLUSIONS: The results show that LapTool-Net significantly outperformed state-of-the-art methods, even while using fewer training samples and a shallower architecture. Our context-aware model does not require experts' domain-specific knowledge, and its simple architecture can potentially improve all existing methods.
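A toy sketch of the multilabel decision step: per-frame tool probabilities are thresholded, then an implausible tool combination is suppressed using a co-occurrence rule. The probabilities, threshold, and rule are illustrative assumptions; LapTool-Net's actual RCNN and learned decision policy are considerably more involved.

```python
import numpy as np

# The seven Cholec80 tool classes
TOOLS = ["grasper", "bipolar", "hook", "scissors", "clipper", "irrigator", "bag"]
probs = np.array([0.91, 0.05, 0.72, 0.58, 0.64, 0.10, 0.02])   # one frame

present = probs >= 0.5   # naive per-class thresholding

# Example co-occurrence rule (hypothetical): clipper and scissors rarely
# appear together, so if both fire, keep only the more confident one.
if present[TOOLS.index("clipper")] and present[TOOLS.index("scissors")]:
    weaker = min(("clipper", "scissors"), key=lambda t: probs[TOOLS.index(t)])
    present[TOOLS.index(weaker)] = False

print([t for t, on in zip(TOOLS, present) if on])
```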


Subject(s)
Deep Learning; Laparoscopy; Humans; Neural Networks, Computer
9.
West J Emerg Med; 22(2): 244-251, 2021 Mar 4.
Article in English | MEDLINE | ID: mdl-33856307

ABSTRACT

INTRODUCTION: Within a few months, coronavirus disease 2019 (COVID-19) evolved into a pandemic causing millions of cases worldwide, yet diagnosing the disease in a timely fashion in the emergency department (ED) remains challenging. In this study we aimed to construct machine-learning (ML) models to predict severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection based on the clinical features of patients visiting an ED during the early COVID-19 pandemic. METHODS: We retrospectively collected the data of all patients who received reverse transcriptase polymerase chain reaction (RT-PCR) testing for SARS-CoV-2 at the ED of Baylor Scott & White All Saints Medical Center, Fort Worth, from February 23 to May 12, 2020. The variables collected included patient demographics, ED triage data, clinical symptoms, and past medical history. The primary outcome was a confirmed diagnosis of COVID-19 (SARS-CoV-2 infection) by a positive RT-PCR result, which was used as the label for the ML tasks. We used univariate analyses for feature selection; variables with P < 0.1 were selected for model construction. Samples were split chronologically into training and testing cohorts at a 60:40 ratio. We evaluated various ML algorithms to construct the best predictive model, assessing performance with the area under the receiver operating characteristic curve (AUC) in the testing cohort. RESULTS: A total of 580 ED patients were tested for SARS-CoV-2 during the study period, and 98 (16.9%) were identified as having SARS-CoV-2 infection based on the RT-PCR results. Univariate analyses selected 21 features for model construction. Of the three ML methods assessed, random forest outperformed the others with the best AUC (0.86), followed by gradient boosting (0.83) and an extra-trees classifier (0.82). CONCLUSION: This study shows that it is feasible to use ML models as an initial screening tool for identifying patients with SARS-CoV-2 infection. Further validation will be necessary to determine how effectively this prediction model can be used prospectively in clinical practice.
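A sketch of the modeling approach the abstract describes: a chronological 60:40 train/test split and a random forest scored by AUC. The features and labels are synthetic placeholders standing in for the 21 selected clinical variables, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X = np.random.rand(580, 21)              # rows ordered by ED visit date
y = np.random.binomial(1, 0.17, 580)     # RT-PCR positive at roughly 16.9%

split = int(0.6 * len(X))                # chronological split, not shuffled
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.2f}")            # the paper reports 0.86 on real data
```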


Subject(s)
Algorithms; COVID-19/diagnosis; Emergency Service, Hospital; Machine Learning; Adult; COVID-19 Testing; Cohort Studies; Female; Humans; Male; Middle Aged; Pandemics; Retrospective Studies