Results 1 - 4 of 4

1.
J Urol; 211(4): 575-584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38265365

ABSTRACT

PURPOSE: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, which limits its use. We developed a novel computer vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP).

MATERIALS AND METHODS: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then used to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated annotation was determined by comparison with the manual human annotations as the reference standard.

RESULTS: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 were used for internal validation, and 113 formed a separate test cohort for evaluating algorithm accuracy. Concordance between AI-enabled automated video analysis and manual human annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%).

CONCLUSIONS: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.


Subjects
Prostatectomy, Robotic Surgical Procedures, Humans, Male, Artificial Intelligence, Educational Status, Prostate/surgery, Prostatectomy/methods, Robotic Surgical Procedures/methods, Video Recording
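The concordance metric reported above compares two aligned sequences of step labels. Below is a minimal sketch, not the authors' code, of how overall and per-step agreement between AI-predicted and human-annotated frame labels could be computed; the function name and label encoding are illustrative assumptions, while the step names come from the abstract.

```python
# Sketch: frame-level concordance between reference (human) and
# predicted (AI) surgical-step labels. Hypothetical helper, not the
# published evaluation code.
from collections import defaultdict

def step_concordance(reference: list[str], predicted: list[str]):
    """Overall and per-step agreement between two aligned label sequences."""
    assert len(reference) == len(predicted), "sequences must be aligned"
    agree = 0
    per_step_total = defaultdict(int)
    per_step_agree = defaultdict(int)
    for ref, pred in zip(reference, predicted):
        per_step_total[ref] += 1
        if ref == pred:
            agree += 1
            per_step_agree[ref] += 1
    overall = agree / len(reference)
    per_step = {s: per_step_agree[s] / per_step_total[s] for s in per_step_total}
    return overall, per_step

# Toy example using two RARP steps named in the abstract:
ref  = ["vesicourethral anastomosis"] * 3 + ["final inspection and extraction"] * 2
pred = ["vesicourethral anastomosis"] * 3 + ["final inspection and extraction",
                                             "vesicourethral anastomosis"]
overall, per_step = step_concordance(ref, pred)
print(f"overall concordance: {overall:.1%}")  # 80.0%
for step, acc in per_step.items():
    print(f"{step}: {acc:.1%}")
```

In practice such a metric would be computed per video at a fixed sampling rate (e.g., one label per second) and then aggregated across the test cohort.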
2.
Surg Endosc; 37(11): 8818-8828, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37626236

ABSTRACT

INTRODUCTION: Artificial intelligence and computer vision are revolutionizing video analysis in minimally invasive surgery. This emerging technology has increasingly been leveraged for video segmentation, documentation, education, and formative assessment. New platforms allow predetermined segments chosen by surgeons to be presented automatically, without the need to review entire videos. This study aimed to validate and demonstrate the accuracy of the first reported AI-based computer vision algorithm that automatically recognizes surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair.

METHODS: Videos of TEP procedures were manually labeled by a team of annotators trained to identify and label surgical workflow according to six major steps. For bilateral hernias, an additional "change of focus" step was also included. The videos were then used to train a computer vision AI algorithm, and its accuracy was assessed against the manual annotations.

RESULTS: A total of 619 full-length TEP videos were analyzed: 371 were used to train the model, 93 for internal validation, and the remaining 155 as a test set to evaluate algorithm accuracy. Overall accuracy for the complete procedure was 88.8%. Per-step accuracy was highest for the hernia sac reduction step (94.3%) and lowest for the preperitoneal dissection step (72.2%).

CONCLUSIONS: These results indicate that the novel AI model provided fully automated video analysis with high accuracy. High-accuracy AI models that automate surgical video analysis make it possible to identify and monitor surgical performance, yielding quantitative metrics that can be stored, evaluated, and compared. As such, the proposed model can enable data-driven insights to improve surgical quality and demonstrate best practices in TEP procedures.


Subjects
Inguinal Hernia, Laparoscopy, Humans, Inguinal Hernia/surgery, Laparoscopy/methods, Artificial Intelligence, Workflow, Minimally Invasive Surgical Procedures, Herniorrhaphy/methods, Surgical Mesh
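A step-recognition model of this kind is, at its core, a frame-level classifier over a fixed step vocabulary. The sketch below shows one plausible shape for such a model in PyTorch; the backbone choice is an assumption, and of the seven step names only "preperitoneal dissection", "hernia sac reduction", and "change of focus" appear in the abstract, so the rest are hypothetical placeholders.

```python
# Sketch of a per-frame TEP step classifier (assumed architecture, not
# the published model): a CNN backbone with a 7-way classification head.
import torch
import torch.nn as nn
from torchvision import models

TEP_STEPS = [
    "access and port placement",   # hypothetical name
    "preperitoneal dissection",    # named in the abstract
    "hernia sac reduction",        # named in the abstract
    "mesh placement",              # hypothetical name
    "deflation under vision",      # hypothetical name
    "closure",                     # hypothetical name
    "change of focus",             # extra step for bilateral hernias
]

class StepClassifier(nn.Module):
    def __init__(self, num_steps: int = len(TEP_STEPS)):
        super().__init__()
        backbone = models.resnet18(weights=None)  # would be pretrained in practice
        backbone.fc = nn.Linear(backbone.fc.in_features, num_steps)
        self.backbone = backbone

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) -> per-frame step logits
        return self.backbone(frames)

model = StepClassifier()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 7])
```

Real systems typically add a temporal smoothing or sequence model on top of per-frame logits, since surgical steps are long, contiguous segments rather than independent frames.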
3.
Front Artif Intell; 7: 1375482, 2024.
Article in English | MEDLINE | ID: mdl-38525302

ABSTRACT

Objective: Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has largely been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements.

Materials and methods: Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then used to train a novel AI computer vision algorithm to perform automated annotation of TURBT surgical video, using a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison with the human annotations as the reference standard.

Results: A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. Of these, 179 videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%).

Conclusion: We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models to surgical endoscopy, demonstrating the promise of this approach for adapting to new procedure types.
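The transfer-learning idea described here, pre-training on laparoscopy and adapting to TURBT, can be sketched as follows. This is a minimal illustration under stated assumptions, not the published pipeline: the backbone, the 7-class source head, and the checkpoint path "lap_ssr.pt" are all hypothetical; only the three TURBT step names come from the abstract.

```python
# Sketch: adapt a laparoscopy-trained step-recognition backbone to the
# three TURBT steps, fine-tuning only the later layers. Hypothetical
# checkpoint and class counts; illustration only.
import torch
import torch.nn as nn
from torchvision import models

TURBT_STEPS = ["primary endoscopic evaluation",
               "resection of bladder tumor",
               "surface coagulation"]

# 1. Rebuild the source architecture and load laparoscopy-trained
#    weights ("lap_ssr.pt" is a hypothetical checkpoint path).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 7)  # e.g., 7 laparoscopic steps
model.load_state_dict(torch.load("lap_ssr.pt"))

# 2. Replace the classification head for the 3 TURBT steps.
model.fc = nn.Linear(model.fc.in_features, len(TURBT_STEPS))

# 3. Freeze early layers so only the head and the last block adapt,
#    which is what reduces the amount of TURBT video needed.
for name, param in model.named_parameters():
    if not name.startswith(("fc", "layer4")):
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # targets: per-frame step indices
```

Freezing most of the backbone preserves the visual features learned from the much larger laparoscopic dataset, which is the mechanism by which transfer learning lowers the dataset requirement for a new procedure type.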

4.
Acta Med Acad; 49 Suppl 1: 70-77, 2020.
Article in English | MEDLINE | ID: mdl-33543633

ABSTRACT

OBJECTIVE: Brain parenchyma retraction is often necessary to reach deep brain lesions during surgery. To minimise the incidence of brain retraction injury, an endoport system may be employed. We report the use of a navigated endoport system in conjunction with purely endoscopic microsurgery in a patient with a deep-seated subependymoma.

CASE REPORT: A navigated endoport with purely endoscopic microsurgery was used in a patient with a tumour located in the frontal horn of the left lateral ventricle. The endoport channel was made of a polyvinyl sheet cut into a 7 cm square, rolled into a tubular structure wrapped around the neuronavigational probe, and inserted along the access trajectory to the tumour. The endoport tube was then expanded with a balloon to a diameter of 7 mm, forming a surgical corridor. During the purely endoscopic microsurgical lesionectomy, the tumour was completely removed from the frontal horn. The foramen of Monro was released and the septum pellucidum was perforated for better cerebrospinal fluid circulation. Histopathological examination confirmed the tumour as a subependymoma. The patient's recovery was unremarkable.

CONCLUSION: The expandable endoport system supplemented with neuronavigation is a safe and efficient option for removal of deep-seated tumours. The tubular shape of the retractor enables standard microsurgical techniques through minimally invasive approaches and offers excellent visualization of the underlying lesion.


Subjects
Microsurgery, Neuronavigation, Brain/surgery, Humans