Results 1 - 6 of 6
1.
Ann Surg ; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842169

ABSTRACT

OBJECTIVE: To examine the use of surgical intelligence for automatically monitoring the critical view of safety (CVS) in laparoscopic cholecystectomy (LC) in a real-world quality initiative. BACKGROUND: Surgical intelligence encompasses routine, AI-based capture and analysis of surgical video and connection of the derived data with patient and outcomes data. These capabilities are applied to continuously assess and improve surgical quality and efficiency in real-world settings. METHODS: LCs conducted at two general surgery departments between December 2022 and August 2023 were routinely captured by a surgical intelligence platform, which identified and continuously presented CVS adoption, surgery duration, complexity, and negative events. In March 2023, the departments launched a quality initiative aiming for 75% CVS adoption. RESULTS: A total of 279 procedures were performed during the study. Adoption increased from 39.2% in the 3 pre-intervention months to 69.2% in the final 3 months (P < .001); monthly adoption rose from 33.3% to 75.7%. Visualization of the cystic duct and artery accounted for most of the improvement, as the other two CVS components had high adoption throughout. Procedures with full CVS were shorter (P = .007) and had fewer negative events (P = .011) than those without, and OR time decreased following the intervention (P = .033). CONCLUSION: Surgical intelligence facilitated a steady increase in CVS adoption, reaching the goal within 6 months. Low initial adoption stemmed from a single CVS component, and increased adoption was associated with improved OR efficiency. Real-world use of surgical intelligence can uncover new insights, modify surgeon behavior, and support best practices to improve surgical quality and efficiency.
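The monthly adoption trend reported above can be derived from simple per-procedure records. Below is a minimal Python sketch of that bookkeeping, assuming a hypothetical record layout (procedure date plus a boolean for whether all CVS criteria were met); it is an illustration, not the platform's actual code.

```python
# Illustrative sketch only: monthly CVS-adoption rates from per-procedure
# records. The record layout is hypothetical, not the platform's real schema.
from collections import defaultdict
from datetime import date

# Each record: (procedure date, whether all three CVS criteria were met)
procedures = [
    (date(2022, 12, 5), False),
    (date(2023, 3, 14), True),
    (date(2023, 8, 2), True),
]

by_month = defaultdict(list)
for day, cvs_achieved in procedures:
    by_month[(day.year, day.month)].append(cvs_achieved)

for month in sorted(by_month):
    cases = by_month[month]
    rate = 100.0 * sum(cases) / len(cases)
    print(f"{month[0]}-{month[1]:02d}: {rate:.1f}% CVS adoption ({len(cases)} cases)")
```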

2.
J Urol ; 211(4): 575-584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38265365

ABSTRACT

PURPOSE: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, limiting its use. We developed a novel computer vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP). MATERIALS AND METHODS: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then used to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparison with the manual human annotations as the reference standard. RESULTS: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence-enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%). CONCLUSIONS: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.
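The concordance figure above is, in essence, frame-level agreement between the model's step labels and the human reference. Here is a minimal sketch of that metric; the step names, counts, and helper function are invented for illustration, not the study's evaluation code.

```python
# Minimal sketch of frame-level concordance between automated and manual
# step labels. Labels and counts are invented examples.
def concordance(ai_labels, human_labels):
    """Fraction of frames where the AI step label matches the human label."""
    assert len(ai_labels) == len(human_labels)
    matches = sum(a == h for a, h in zip(ai_labels, human_labels))
    return matches / len(ai_labels)

human = ["dissection"] * 40 + ["anastomosis"] * 50 + ["extraction"] * 10
ai    = ["dissection"] * 38 + ["anastomosis"] * 52 + ["extraction"] * 10
print(f"Concordance: {concordance(ai, human):.1%}")  # 98.0% on this toy data
```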


Subjects
Prostatectomy , Robotic Surgical Procedures , Humans , Male , Artificial Intelligence , Educational Status , Prostate/surgery , Prostatectomy/methods , Robotic Surgical Procedures/methods , Video Recording
3.
Article in English | MEDLINE | ID: mdl-38848990

ABSTRACT

OBJECTIVE: To demonstrate the use of surgical intelligence to routinely and automatically assess the proportion of procedure time spent outside of the patient's body (out of body; OOB) in laparoscopic gynecological procedures, as a potential basis for clinical and efficiency-related insights. DESIGN: A retrospective analysis of videos of laparoscopic gynecological procedures. SETTING: Two operating rooms at the Gynecology Department of a tertiary medical center. PARTICIPANTS: All patients who underwent laparoscopic gynecological procedures in those two rooms between January 1, 2021 and December 31, 2022. INTERVENTIONS: A surgical intelligence platform installed in the two rooms routinely captured and analyzed surgical video, using AI to identify and document procedure duration and the amount and percentage of time per procedure that the laparoscope was withdrawn from the patient's body. RESULTS: A total of 634 surgical videos were included in the final dataset. The cumulative time for all procedures was 639 hours, of which 48 hours (7.5%) were OOB segments. The average OOB percentage was 8.7% (SD = 8.7%) across all procedures and differed significantly between procedure types (p < .001), with unilateral and bilateral salpingo-oophorectomies showing the highest percentages at 15.6% (SD = 13.3%) and 13.3% (SD = 11.3%), respectively. Hysterectomy and myomectomy, which do not require the endoscope to be removed for specimen extraction, showed a lower percentage (mean = 4.2%, SD = 5.2%) than the other procedures (mean = 11.1%, SD = 9.3%; p < .001). Percentages were lower when the operating team included a senior surgeon (mean = 8.4%, SD = 9.2%) than when it did not (mean = 10.1%, SD = 6.9%; p < .001). CONCLUSION: Surgical intelligence revealed a substantial percentage of OOB segments in laparoscopic gynecological procedures, along with associations with surgeon seniority and procedure type. Further research is needed to evaluate how laparoscope removal affects postoperative outcomes and operational efficiency in surgery.
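The core OOB metric above reduces to summing detected out-of-body segments and dividing by procedure duration. A minimal sketch, assuming hypothetical segment timestamps (the real platform derives these segments from AI analysis of the video itself):

```python
# Sketch of the out-of-body (OOB) percentage computation, with invented
# segment timestamps standing in for AI-detected scope withdrawals.
def oob_percentage(procedure_minutes, oob_segments):
    """oob_segments: list of (start_min, end_min) with the scope out of body."""
    oob_time = sum(end - start for start, end in oob_segments)
    return 100.0 * oob_time / procedure_minutes

# e.g. a 90-minute case with two scope withdrawals totalling 7 minutes
print(f"{oob_percentage(90, [(30.0, 34.0), (70.0, 73.0)]):.1f}% OOB")  # 7.8% OOB
```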

4.
Surg Endosc ; 37(11): 8818-8828, 2023 11.
Article in English | MEDLINE | ID: mdl-37626236

ABSTRACT

INTRODUCTION: Artificial intelligence and computer vision are revolutionizing video analysis in minimally invasive surgery. This emerging technology has increasingly been applied, with success, to video segmentation, documentation, education, and formative assessment. New, sophisticated platforms allow predetermined segments chosen by surgeons to be presented automatically, without the need to review entire videos. This study aimed to validate and demonstrate the accuracy of the first reported AI-based computer vision algorithm that automatically recognizes surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair. METHODS: Videos of TEP procedures were manually labeled by a team of annotators trained to identify and label surgical workflow according to six major steps. For bilateral hernias, an additional change-of-focus step was also included. The videos were then used to train a computer vision AI algorithm. Performance accuracy was assessed against the manual annotations. RESULTS: A total of 619 full-length TEP videos were analyzed: 371 were used to train the model, 93 for internal validation, and the remaining 155 as a test set to evaluate algorithm accuracy. Overall accuracy for the complete procedure was 88.8%. Per-step accuracy was highest for the hernia sac reduction step (94.3%) and lowest for the preperitoneal dissection step (72.2%). CONCLUSIONS: These results indicate that the novel AI model provided fully automated video analysis with a high level of accuracy. High-accuracy models that automate surgical video analysis make it possible to identify and monitor surgical performance, providing objective metrics that can be stored, evaluated, and compared. As such, the proposed model can enable data-driven insights to improve surgical quality and demonstrate best practices in TEP procedures.
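Per-step accuracies like those above can be computed by grouping frame-level agreement by the human-annotated step. A short, hedged sketch with invented TEP labels (not the study's data or code):

```python
# Per-step accuracy: for each surgical step, the fraction of frames the human
# labeled as that step which the model also labeled as that step.
from collections import Counter

def per_step_accuracy(ai_labels, human_labels):
    correct, total = Counter(), Counter()
    for a, h in zip(ai_labels, human_labels):
        total[h] += 1
        correct[h] += (a == h)
    return {step: correct[step] / total[step] for step in total}

human = ["sac reduction"] * 30 + ["preperitoneal dissection"] * 30
ai    = ["sac reduction"] * 34 + ["preperitoneal dissection"] * 26
for step, acc in per_step_accuracy(ai, human).items():
    print(f"{step}: {acc:.1%}")  # 100.0% and 86.7% on this toy data
```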


Subjects
Inguinal Hernia , Laparoscopy , Humans , Inguinal Hernia/surgery , Laparoscopy/methods , Artificial Intelligence , Workflow , Minimally Invasive Surgical Procedures , Herniorrhaphy/methods , Surgical Mesh
5.
Front Artif Intell ; 7: 1375482, 2024.
Article in English | MEDLINE | ID: mdl-38525302

ABSTRACT

Objective: Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements. Materials and methods: Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then used to train a novel AI computer vision algorithm to perform automated annotation of TURBT surgical video, using a transfer learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison with human annotations as the reference standard. Results: A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%). Conclusion: We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models to surgical endoscopy, demonstrating the promise of this approach for adapting to new procedure types.
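The transfer-learning recipe above (pre-train on a data-rich video domain, then adapt to a smaller TURBT dataset) can be sketched in PyTorch. This is a schematic only: generic ImageNet weights stand in for a laparoscopy-pretrained backbone, the three step labels come from the abstract, and the frames are random stand-in tensors rather than real video.

```python
# Hedged transfer-learning sketch: freeze a pretrained backbone and fine-tune
# only a new classification head for the three TURBT steps.
import torch
import torch.nn as nn
from torchvision import models

TURBT_STEPS = ["primary endoscopic evaluation", "resection of bladder tumor",
               "surface coagulation"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(TURBT_STEPS))  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

frames = torch.randn(8, 3, 224, 224)      # stand-in batch of video frames
labels = torch.randint(0, len(TURBT_STEPS), (8,))
logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss = {loss.item():.3f}")
```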

6.
Sci Rep ; 10(1): 22208, 2020 12 17.
Article in English | MEDLINE | ID: mdl-33335191

ABSTRACT

AI is becoming ubiquitous, revolutionizing many aspects of our lives. In surgery, however, it remains a promise. AI has the potential to improve surgeon performance and impact patient care, from post-operative debrief to real-time decision support. But how much data does an AI-based system need to learn surgical context with high fidelity? To answer this question, we leveraged a large-scale, diverse cholecystectomy video dataset. We assessed surgical workflow recognition and report a deep learning system that not only detects surgical phases but does so with high accuracy and generalizes to new settings and unseen medical centers. Our findings provide a solid foundation for translating AI applications from research to practice, ushering in a new era of surgical intelligence.
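One way to probe the generalization claim above is to score phase-recognition accuracy separately per medical center, including centers held out of training. A minimal sketch with invented records (the center names, phase labels, and record layout are illustrative only):

```python
# Per-center phase-recognition accuracy from (center, ai, human) records.
from collections import defaultdict

# One tuple per evaluated clip: (center, predicted phase, reference phase)
records = [
    ("center_A", "dissection", "dissection"),
    ("center_A", "clipping", "dissection"),
    ("unseen_center_B", "clipping", "clipping"),
    ("unseen_center_B", "extraction", "extraction"),
]

hits, totals = defaultdict(int), defaultdict(int)
for center, ai, human in records:
    totals[center] += 1
    hits[center] += (ai == human)

for center in totals:
    print(f"{center}: {hits[center] / totals[center]:.0%} phase accuracy")
```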
