Results 1 - 4 of 4
1.
J Urol; 211(4): 575-584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38265365

ABSTRACT

PURPOSE: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP).

MATERIALS AND METHODS: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then utilized to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparison with manual human annotations as the reference standard.

RESULTS: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence-enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%).

CONCLUSIONS: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.
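The accuracy metric described in this abstract (frame-level concordance between AI and human annotations, reported both overall and per surgical step) can be sketched roughly as follows. The step labels and the ten-frame example below are hypothetical illustrations, not data from the study:

```python
from collections import defaultdict

def annotation_concordance(ai_labels, human_labels):
    """Frame-level agreement between AI and human step annotations.

    Each input is a sequence of step labels, one per video frame, with
    the human annotation treated as the reference standard. Returns the
    overall concordance and a per-step accuracy breakdown.
    """
    assert len(ai_labels) == len(human_labels)
    per_step = defaultdict(lambda: [0, 0])  # step -> [correct, total]
    for ai, ref in zip(ai_labels, human_labels):
        per_step[ref][1] += 1
        if ai == ref:
            per_step[ref][0] += 1
    overall = sum(c for c, _ in per_step.values()) / len(human_labels)
    return overall, {s: c / t for s, (c, t) in per_step.items()}

# Hypothetical 10-frame example with two of the RARP steps
human = ["anastomosis"] * 6 + ["extraction"] * 4
ai    = ["anastomosis"] * 6 + ["extraction"] * 3 + ["anastomosis"]
overall, by_step = annotation_concordance(ai, human)
# overall is 0.9; "extraction" scores 0.75 because one of its four
# reference frames was mislabeled by the AI
```

The per-step breakdown is what lets a study report both an overall concordance (92.8% here) and step-specific extremes (97.3% vs. 76.8%).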


Subject(s)
Prostatectomy , Robotic Surgical Procedures , Humans , Male , Artificial Intelligence , Educational Status , Prostate/surgery , Prostatectomy/methods , Robotic Surgical Procedures/methods , Video Recording
2.
Matern Child Health J; 27(4): 719-727, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36670306

ABSTRACT

OBJECTIVES: While the rates of maternal mortality in developed countries have remained low in recent years, rates of severe maternal morbidity (SMM) are still increasing in high-income countries. As a result, SMM is currently used as a measure of maternity care level. The aim of this study was to investigate the prevalence and risk factors of SMM surrounding childbirth.

METHODS: A nested case-control study was performed between 2013 and 2018. SMM was defined as peripartum hospitalization involving the intensive care unit (ICU). Parturients with SMM were compared with those without, randomly matched for delivery mode and date of birth in a 1:1 ratio. Multivariable logistic regression models were used to evaluate the independent association between SMM and different maternal and pregnancy characteristics.

RESULTS: During the study period, 96,017 live births took place, of which 144 (1.5 per 1,000 live births; 0.15%) involved SMM with ICU admission. Parturients with SMM were more likely to have a history of 2 or more pregnancy losses (18.2% vs. 8.3%, p = 0.004), to deliver preterm (48.9% vs. 8.8%, p < 0.001), and to suffer from placenta previa (11.9% vs. 1.5%, p < 0.001) and/or placenta accreta (9.7% vs. 1.5%, p = 0.003). Several significant and independent risk factors for SMM were noted in the multivariable regression models: preterm delivery, history of ≥ 2 pregnancy losses, grand multiparity, Jewish ethnicity, and abnormal placentation (previa or accreta).

CONCLUSIONS FOR PRACTICE: SMM rates in our cohort were lower than those reported in developed countries. An independent association exists between peripartum maternal ICU admission and several demographic and clinical risk factors, including preterm birth and abnormal placentation.
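The case-control comparison in this abstract reports exposure proportions in cases versus controls. A crude (unadjusted) odds ratio can be derived from such proportions; the sketch below uses the reported figures for ≥ 2 prior pregnancy losses, but the function is illustrative and ignores the 1:1 matching and multivariable adjustment the study actually performed:

```python
def odds_ratio(p_exposed_cases, p_exposed_controls):
    """Crude odds ratio from exposure proportions in cases vs. controls."""
    odds_cases = p_exposed_cases / (1 - p_exposed_cases)
    odds_controls = p_exposed_controls / (1 - p_exposed_controls)
    return odds_cases / odds_controls

# Proportions reported in the abstract: history of >= 2 pregnancy losses
# in 18.2% of SMM cases vs. 8.3% of matched controls
or_losses = odds_ratio(0.182, 0.083)  # roughly 2.5
```

In the study itself, the adjusted estimates come from the multivariable logistic regression models, which is why a crude ratio like this is only a rough starting point.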


Subject(s)
Abortion, Spontaneous , Maternal Health Services , Premature Birth , Pregnancy , Female , Infant, Newborn , Humans , Case-Control Studies , Cohort Studies , Peripartum Period , Premature Birth/epidemiology , Risk Factors , Live Birth , Retrospective Studies , Morbidity
3.
Front Artif Intell; 7: 1375482, 2024.
Article in English | MEDLINE | ID: mdl-38525302

ABSTRACT

Objective: Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements.

Materials and methods: Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. Manually annotated videos were then utilized to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, utilizing a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison with human annotations as the reference standard.

Results: A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%).

Conclusion: We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models into surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.

4.
Article in English | MEDLINE | ID: mdl-38546527

ABSTRACT

OBJECTIVE: The analysis of surgical videos using artificial intelligence holds great promise for the future of surgery by facilitating the development of surgical best practices, identifying key pitfalls, enhancing situational awareness, and disseminating that information via real-time, intraoperative decision-making. The objective of the present study was to examine the feasibility and accuracy of a novel computer vision algorithm for hysterectomy surgical step identification.

METHODS: This was a retrospective study conducted on surgical videos of laparoscopic hysterectomies performed in 277 patients in five medical centers. We used a surgical intelligence platform (Theator Inc.) that employs advanced computer vision and AI technology to automatically capture video data during surgery, deidentify it, and upload procedures to a secure cloud infrastructure. Videos were manually annotated with sequential steps of surgery by a team of annotation specialists. Subsequently, a computer vision system was trained to perform automated step detection in hysterectomy. Accuracy was determined by comparing automated video annotations with manual human annotations.

RESULTS: The mean duration of the videos was 103 ± 43 min. Accuracy between AI-based predictions and manual human annotations was 93.1% on average. Accuracy was highest for the dissection and mobilization step (96.9%) and lowest for the adhesiolysis step (70.3%).

CONCLUSION: The results of the present study demonstrate that a novel AI-based model achieves high accuracy for automated step identification in hysterectomy. This lays the foundations for the next phase of AI, focused on real-time clinical decision support and prediction of outcome measures, to optimize surgeon workflow and elevate patient care.
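One way to see how a 70.3% adhesiolysis accuracy can coexist with a 93.1% overall accuracy: overall frame-level agreement effectively weights each step by its duration, so a short, poorly recognized step drags the average down far less than a long one. The step durations below are hypothetical, not from the study:

```python
# Per-step accuracies from the abstract, plus a hypothetical duration split
step_accuracy = {"dissection_and_mobilization": 0.969,
                 "adhesiolysis": 0.703,
                 "other_steps": 0.93}
step_minutes  = {"dissection_and_mobilization": 60,
                 "adhesiolysis": 5,
                 "other_steps": 38}

total = sum(step_minutes.values())

# Duration-weighted accuracy (what overall frame-level agreement measures)
weighted = sum(step_accuracy[s] * step_minutes[s] / total for s in step_accuracy)

# Macro accuracy (simple mean over steps, each step counted equally)
macro = sum(step_accuracy.values()) / len(step_accuracy)
```

With these assumed durations the weighted figure lands near the reported overall accuracy while the macro mean sits noticeably lower, which is why per-step breakdowns are worth reporting alongside the overall number.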
