Results 1 - 4 of 4
1.
Gastric Cancer ; 27(1): 187-196, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38038811

ABSTRACT

BACKGROUND: Gastric surgery involves numerous surgical phases; however, its steps can be clearly defined. Deep learning-based surgical phase recognition can promote the stylization of gastric surgery, with applications in automatic surgical skill assessment. This study aimed to develop a deep learning-based surgical phase-recognition model using multicenter videos of laparoscopic distal gastrectomy and to examine the feasibility of automatic surgical skill assessment using the developed model. METHODS: Surgical videos from 20 hospitals were used. Laparoscopic distal gastrectomy was defined and annotated into nine phases, and a deep learning-based image classification model was developed for phase recognition. We examined whether the developed model's outputs, including the number of frames in each phase and the adequacy of surgical field development during the supra-pancreatic lymphadenectomy phase, correlated with the manually assigned skill assessment score. RESULTS: The overall accuracy of phase recognition was 88.8%. Regarding surgical skill assessment based on the number of frames during the phases of lymphadenectomy of the left greater curvature and reconstruction, the number of frames in the high-score group was significantly lower than that in the low-score group (829 vs. 1,152, P < 0.01; 1,208 vs. 1,586, P = 0.01, respectively). The model's output score for the adequacy of surgical field development was significantly higher in the high-score group than in the low-score group (0.975 vs. 0.970, P = 0.04). CONCLUSION: The developed model showed high accuracy in phase-recognition tasks and has potential for application in automatic surgical skill assessment systems.


Subjects
Laparoscopy, Stomach Neoplasms, Humans, Stomach Neoplasms/surgery, Laparoscopy/methods, Gastroenterostomy, Gastrectomy/methods
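
The study above describes its phase-recognition component only as a deep learning-based image classification model over nine annotated phases, with per-phase frame counts then used for skill assessment. The following minimal sketch (not the authors' implementation) shows one way such a per-frame classifier and frame-count summary could be written; the ResNet-18 backbone, input size, and helper names are assumptions for illustration only.

    # Illustrative sketch only: per-frame phase recognition for laparoscopic
    # distal gastrectomy. The backbone choice and helper names are assumptions;
    # the abstract specifies only a "deep learning-based image classification model".
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_PHASES = 9  # the study annotates nine surgical phases

    class PhaseClassifier(nn.Module):
        def __init__(self, num_phases: int = NUM_PHASES):
            super().__init__()
            self.backbone = models.resnet18(weights=None)  # assumed backbone
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_phases)

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            # frames: (batch, 3, H, W) -> per-frame phase logits
            return self.backbone(frames)

    def count_frames_per_phase(logits: torch.Tensor) -> torch.Tensor:
        # Frame count per predicted phase; the study compares such counts
        # between high- and low-skill-score groups.
        return torch.bincount(logits.argmax(dim=1), minlength=NUM_PHASES)

    model = PhaseClassifier().eval()
    dummy_frames = torch.randn(16, 3, 224, 224)  # placeholder for video frames
    with torch.no_grad():
        print(count_frames_per_phase(model(dummy_frames)))

In practice the per-phase frame counts and the model's confidence during the supra-pancreatic lymphadenectomy phase would then be compared against manually assigned skill scores, as reported in the abstract.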
2.
JAMA Surg ; 158(8): e231131, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37285142

ABSTRACT

Importance: Automatic surgical skill assessment with artificial intelligence (AI) is more objective than manual video review-based skill assessment and can reduce human burden. Standardization of surgical field development is an important aspect of this skill assessment. Objective: To develop a deep learning model that can recognize the standardized surgical fields in laparoscopic sigmoid colon resection and to evaluate the feasibility of automatic surgical skill assessment based on the concordance of the standardized surgical field development using the proposed deep learning model. Design, Setting, and Participants: This retrospective diagnostic study used intraoperative videos of laparoscopic colorectal surgery submitted to the Japan Society for Endoscopic Surgery between August 2016 and November 2017. Data were analyzed from April 2020 to September 2022. Interventions: Videos of surgery performed by expert surgeons with Endoscopic Surgical Skill Qualification System (ESSQS) scores higher than 75 were used to construct a deep learning model able to recognize a standardized surgical field and output its similarity to standardized surgical field development as an AI confidence score (AICS). Other videos were extracted as the validation set. Main Outcomes and Measures: Videos with scores more than 2 SDs below the mean were defined as the low-score group, and those with scores more than 2 SDs above the mean as the high-score group. The correlation between AICS and ESSQS score and the screening performance using AICS for the low- and high-score groups were analyzed. Results: The sample included 650 intraoperative videos, 60 of which were used for model construction and 60 for validation. The Spearman rank correlation coefficient between the AICS and ESSQS score was 0.81. The receiver operating characteristic (ROC) curves for the screening of the low- and high-score groups were plotted, and the areas under the ROC curve for the low- and high-score group screening were 0.93 and 0.94, respectively. Conclusions and Relevance: The AICS from the developed model strongly correlated with the ESSQS score, demonstrating the model's feasibility for use as a method of automatic surgical skill assessment. The findings also suggest the feasibility of the proposed model for creating an automated screening system for surgical skills and its potential application to other types of endoscopic procedures.


Subjects
Digestive System Surgical Procedures, Laparoscopy, Humans, Artificial Intelligence, Retrospective Studies, Laparoscopy/methods, ROC Curve
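
The entry above reports a Spearman rank correlation of 0.81 between the AI confidence score (AICS) and the ESSQS score, and areas under the ROC curve of 0.93 and 0.94 for screening the low- and high-score groups. A minimal sketch of these two analyses, using synthetic placeholder scores rather than the study's data, could look like this:

    # Illustrative sketch only: correlation and screening analyses of an AI
    # confidence score (AICS) against ESSQS scores. All score arrays below are
    # synthetic placeholders, not study data.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Placeholder ESSQS scores for 60 videos, including a few low outliers.
    essqs = np.concatenate([rng.normal(75, 8, 55), rng.normal(40, 5, 5)])
    # Placeholder AICS output that loosely tracks the manual score.
    aics = essqs / 100 + rng.normal(0, 0.05, 60)

    # Rank correlation between AICS and ESSQS (the study reports rho = 0.81).
    rho, p_value = spearmanr(aics, essqs)
    print(f"Spearman rho = {rho:.2f} (P = {p_value:.3g})")

    # Screening the low-score group (ESSQS below mean - 2 SD) using AICS.
    low_group = (essqs < essqs.mean() - 2 * essqs.std()).astype(int)
    if low_group.any():
        # Negate AICS so larger values indicate likely low-group membership.
        print("AUC for low-score screening:", roc_auc_score(low_group, -aics))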
3.
BJS Open ; 7(2)2023 03 07.
Article in English | MEDLINE | ID: mdl-36882082

ABSTRACT

BACKGROUND: Purse-string suture in transanal total mesorectal excision is a key procedural step. The aims of this study were to develop an automatic skill assessment system for purse-string suture in transanal total mesorectal excision using deep learning and to evaluate the reliability of the score output by the proposed system. METHODS: Purse-string suturing extracted from consecutive transanal total mesorectal excision videos was manually scored using a performance rubric scale and used as training data for a deep learning model. Deep learning-based image regression analysis was performed, and the purse-string suture skill scores predicted by the trained deep learning model (artificial intelligence score) were output as continuous variables. The outcomes of interest were the correlations, assessed using Spearman's rank correlation coefficient, between the artificial intelligence score and the manual score, purse-string suture time, and surgeon's experience. RESULTS: Forty-five videos obtained from five surgeons were evaluated. The mean(s.d.) total manual score was 9.2(2.7) points, the mean(s.d.) total artificial intelligence score was 10.2(3.9) points, and the mean(s.d.) absolute error between the artificial intelligence and manual scores was 0.42(0.39). Further, the artificial intelligence score correlated significantly with the purse-string suture time (correlation coefficient = -0.728) and the surgeon's experience (P < 0.001). CONCLUSION: An automatic purse-string suture skill assessment system using deep learning-based video analysis was shown to be feasible, and the results indicated that the artificial intelligence score was reliable. This application could be expanded to other endoscopic surgeries and procedures.


Subjects
Deep Learning, Rectal Neoplasms, Humans, Artificial Intelligence, Reproducibility of Results, Sutures
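
The entry above describes deep learning-based image regression that maps purse-string suture video frames to a continuous skill score trained against manually assigned rubric scores. A minimal sketch of such a regressor, assuming a ResNet-18 backbone and mean-squared-error training (neither is specified in the abstract), is shown below; all tensors are random placeholders.

    # Illustrative sketch only: image regression producing a continuous
    # purse-string suture skill score. Backbone, learning rate, and score
    # scale are assumptions for demonstration.
    import torch
    import torch.nn as nn
    from torchvision import models

    class SutureSkillRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = models.resnet18(weights=None)  # assumed backbone
            # Single continuous output: the predicted skill score.
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.backbone(x).squeeze(1)

    model = SutureSkillRegressor()
    criterion = nn.MSELoss()  # regression against manual rubric scores
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One toy training step on random tensors standing in for annotated frames.
    frames = torch.randn(8, 3, 224, 224)
    manual_scores = torch.rand(8) * 12  # placeholder rubric scale
    loss = criterion(model(frames), manual_scores)
    loss.backward()
    optimizer.step()
    print(f"toy MSE loss: {loss.item():.3f}")

The trained model's continuous output would then be compared with the manual score, suture time, and surgeon's experience, as in the correlations reported above.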
4.
JAMA Netw Open ; 4(8): e2120786, 2021 08 02.
Article in English | MEDLINE | ID: mdl-34387676

ABSTRACT

Importance: A high level of surgical skill is essential to prevent intraoperative problems. One important aspect of surgical education is surgical skill assessment, with pertinent feedback facilitating efficient skill acquisition by novices. Objectives: To develop a 3-dimensional (3-D) convolutional neural network (CNN) model for automatic surgical skill assessment and to evaluate the performance of the model in classification tasks by using laparoscopic colorectal surgical videos. Design, Setting, and Participants: This prognostic study used surgical videos acquired prior to 2017. In total, 650 laparoscopic colorectal surgical videos were provided for study purposes by the Japan Society for Endoscopic Surgery, and 74 were randomly extracted. Every video had highly reliable scores based on the Endoscopic Surgical Skill Qualification System (ESSQS, range 1-100, with higher scores indicating greater surgical skill) established by the society. Data were analyzed June to December 2020. Main Outcomes and Measures: From the groups with scores less than the mean minus 2 SDs, within 1 SD of the mean, and greater than the mean plus 2 SDs, 17, 26, and 31 videos, respectively, were randomly extracted. In total, 1480 video clips with a length of 40 seconds each were extracted for each surgical step (medial mobilization, lateral mobilization, inferior mesenteric artery transection, and mesorectal transection) and separated into 1184 training sets and 296 test sets. Automatic surgical skill classification was performed based on spatiotemporal video analysis using the fully automated 3-D CNN model, and classification accuracies and screening accuracies for the groups with scores less than the mean minus 2 SDs and greater than the mean plus 2 SDs were calculated. Results: The mean (SD) ESSQS score of all 650 intraoperative videos was 66.2 (8.6) points and, for the 74 videos used in the study, 67.6 (16.1) points. The proposed 3-D CNN model automatically classified video clips into groups with scores less than the mean minus 2 SDs, within 1 SD of the mean, and greater than the mean plus 2 SDs with a mean (SD) accuracy of 75.0% (6.3%). The highest accuracy was 83.8%, for the inferior mesenteric artery transection. The model also screened the group with scores less than the mean minus 2 SDs with 94.1% sensitivity and 96.5% specificity, and the group with scores greater than the mean plus 2 SDs with 87.1% sensitivity and 86.0% specificity. Conclusions and Relevance: The results of this prognostic study showed that the proposed 3-D CNN model classified laparoscopic colorectal surgical videos with sufficient accuracy to be used for screening groups with scores greater than the mean plus 2 SDs and less than the mean minus 2 SDs. The proposed approach was fully automatic and easy to use for various types of surgery, and no special annotations or kinetics data extraction were required, indicating that this approach warrants further development for application to automatic surgical skill assessment.


Subjects
Clinical Competence, Colorectal Surgery/standards, Laparoscopy/standards, Neural Networks, Computer, Video Recording, Humans, Japan
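
The entry above describes a fully automated 3-D CNN that classifies 40-second laparoscopic video clips into three ESSQS-based groups and reports screening sensitivity and specificity for the lowest- and highest-scoring groups. A rough sketch, assuming an r3d_18 backbone and 32-frame subsampled clips (neither is stated in the abstract), could look like the following:

    # Illustrative sketch only: a 3-D CNN classifying surgical video clips into
    # three skill groups (below mean - 2 SD, within 1 SD of the mean, above
    # mean + 2 SD). Backbone, clip subsampling, and input size are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models.video import r3d_18

    NUM_GROUPS = 3

    model = r3d_18(weights=None)  # assumed spatiotemporal backbone
    model.fc = nn.Linear(model.fc.in_features, NUM_GROUPS)
    model.eval()

    # Dummy clips: (batch, channels, frames, height, width); 32 frames stand in
    # for a temporally subsampled 40-second clip.
    clips = torch.randn(2, 3, 32, 112, 112)
    with torch.no_grad():
        print(model(clips).argmax(dim=1))  # predicted skill group per clip

    def sensitivity_specificity(y_true: torch.Tensor, y_pred: torch.Tensor):
        # Screening metrics for one group versus the rest, as reported above.
        tp = ((y_true == 1) & (y_pred == 1)).sum()
        fn = ((y_true == 1) & (y_pred == 0)).sum()
        tn = ((y_true == 0) & (y_pred == 0)).sum()
        fp = ((y_true == 0) & (y_pred == 1)).sum()
        return (tp / (tp + fn)).item(), (tn / (tn + fp)).item()

    y_true = torch.tensor([1, 0, 1, 0, 0, 1])  # placeholder group labels
    y_pred = torch.tensor([1, 0, 0, 0, 1, 1])  # placeholder predictions
    print(sensitivity_specificity(y_true, y_pred))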