Results 1 - 5 of 5
1.
J Urol; 211(4): 575-584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38265365

ABSTRACT

PURPOSE: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP).

MATERIALS AND METHODS: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then utilized to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparing to manual human annotations as the reference standard.

RESULTS: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence-enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%).

CONCLUSIONS: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.
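For readers curious how a concordance figure like the 92.8% reported above is typically computed, here is a minimal sketch that samples an AI-generated step timeline against the manual reference at fixed intervals. The annotation layout, sampling interval, and function names are assumptions for illustration, not details taken from the paper.

```python
from collections import defaultdict

def step_concordance(ai_steps, human_steps, interval_s=1.0):
    """Frame-level agreement between AI and human step annotations.

    Each annotation list holds (start_s, end_s, step_name) tuples covering
    the full video. Returns overall concordance and per-step accuracy.
    Hypothetical data layout; the published pipeline may differ.
    """
    def label_at(annotations, t):
        for start, end, name in annotations:
            if start <= t < end:
                return name
        return None

    duration_s = max(end for _, end, _ in human_steps)
    agree, total = 0, 0
    per_step = defaultdict(lambda: [0, 0])  # step -> [agreeing samples, total samples]
    t = 0.0
    while t < duration_s:
        truth = label_at(human_steps, t)
        if truth is not None:
            total += 1
            per_step[truth][1] += 1
            if label_at(ai_steps, t) == truth:
                agree += 1
                per_step[truth][0] += 1
        t += interval_s

    overall = agree / total if total else 0.0
    per_step_acc = {step: a / n for step, (a, n) in per_step.items() if n}
    return overall, per_step_acc
```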


Subjects
Prostatectomy, Robotic Surgical Procedures, Humans, Male, Artificial Intelligence, Educational Status, Prostate/surgery, Prostatectomy/methods, Robotic Surgical Procedures/methods, Video Recording
2.
Article in English | MEDLINE | ID: mdl-38848990

ABSTRACT

OBJECTIVE: To demonstrate the use of surgical intelligence to routinely and automatically assess the proportion of time spent outside of the patient's body (out-of-body [OOB]) in laparoscopic gynecological procedures, as a potential basis for clinical and efficiency-related insights.

DESIGN: A retrospective analysis of videos of laparoscopic gynecological procedures.

SETTING: Two operating rooms at the Gynecology Department of a tertiary medical center.

PARTICIPANTS: All patients who underwent laparoscopic gynecological procedures between January 1, 2021 and December 31, 2022 in those two rooms.

INTERVENTIONS: A surgical intelligence platform installed in the two rooms routinely captured and analyzed surgical video, using AI to identify and document procedure duration and the amount and percentage of time that the laparoscope was withdrawn from the patient's body per procedure.

RESULTS: A total of 634 surgical videos were included in the final dataset. The cumulative time for all procedures was 639 hours, of which 48 hours (7.5%) were OOB segments. Average OOB percentage was 8.7% (SD = 8.7%) for all procedures and differed significantly between procedure types (p < .001), with unilateral and bilateral salpingo-oophorectomies showing the highest percentages at 15.6% (SD = 13.3%) and 13.3% (SD = 11.3%), respectively. Hysterectomy and myomectomy, which do not require the endoscope to be removed for specimen extraction, showed a lower percentage (mean = 4.2%, SD = 5.2%) than the other procedures (mean = 11.1%, SD = 9.3%; p < .001). Percentages were lower when the operating team included a senior surgeon (mean = 8.4%, SD = 9.2%) than when it did not (mean = 10.1%, SD = 6.9%; p < .001).

CONCLUSION: Surgical intelligence revealed a substantial percentage of OOB segments in laparoscopic gynecological procedures, alongside associations with surgeon seniority and procedure type. Further research is needed to evaluate how laparoscope removal affects postoperative outcomes and operational efficiency in surgery.
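The central metric here is straightforward. A minimal sketch of the per-procedure OOB-percentage calculation, assuming the platform exports out-of-body intervals as simple time segments (a hypothetical layout, not the vendor's actual API), might look like this:

```python
def oob_percentage(procedure_duration_s, oob_segments):
    """Percentage of procedure time the laparoscope spent outside the body.

    `oob_segments` is a list of (start_s, end_s) intervals flagged as
    out-of-body by the video-analysis model. Illustrative only; the
    platform's internal representation is not described in the abstract.
    """
    oob_time = sum(end - start for start, end in oob_segments)
    return 100.0 * oob_time / procedure_duration_s


# Example: a 90-minute procedure with two out-of-body withdrawals.
segments = [(1200.0, 1500.0), (3600.0, 3780.0)]  # seconds
print(f"OOB: {oob_percentage(90 * 60, segments):.1f}%")  # OOB: 8.9%
```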

3.
Int J Mol Sci; 25(2), 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38256266

ABSTRACT

Autism spectrum disorder (ASD) is a common condition with lifelong implications. The last decade has seen dramatic improvements in DNA sequencing and related bioinformatics and databases. We analyzed the raw DNA sequencing files on the Variantyx® bioinformatics platform for the last 50 ASD patients evaluated with trio whole-genome sequencing (trio-WGS). "Qualified" variants were defined as coding, rare, and evolutionarily conserved. Primary diagnostic variants (PDVs), in addition, were present in genes directly linked to ASD and showed clinical correlation. A PDV was identified in 34/50 (68%) of cases, including 25 (50%) cases with heterozygous de novo variants and 10 (20%) with inherited variants. De novo variants in genes directly associated with ASD were far more likely to be Qualified than non-Qualified compared with a control group of genes (p = 0.0002), validating that most are indeed disease related. Sequence reanalysis increased the diagnostic yield from 28% to 68%, mostly through the inclusion of de novo PDVs in genes not yet reported as ASD associated. Thirty-three subjects (66%) had treatment recommendation(s) based on the DNA analyses. Our results demonstrate a high yield of trio-WGS for revealing molecular diagnoses in ASD, which is greatly enhanced by reanalyzing DNA sequencing files. In contrast to previous reports, de novo variants dominate the findings, mostly representing novel conditions. This has implications for the cause and rising prevalence of autism.
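As a rough illustration of the variant triage described above, the sketch below filters variants to the "Qualified" criteria (coding, rare, evolutionarily conserved) and flags candidate PDVs in ASD-linked genes. The field names, consequence categories, and thresholds are assumptions for illustration, not the study's actual Variantyx pipeline settings.

```python
# Hypothetical variant records: dicts with gene, consequence, population
# allele frequency, and a conservation score.
def is_qualified(variant, max_pop_af=0.001, min_conservation=2.0):
    """Coding, rare, and evolutionarily conserved (illustrative thresholds)."""
    return (
        variant["consequence"] in {"missense", "frameshift", "stop_gained", "splice"}
        and variant["pop_af"] <= max_pop_af
        and variant["conservation_score"] >= min_conservation
    )

def candidate_pdvs(variants, asd_genes):
    """Qualified variants in genes directly linked to ASD; clinical correlation
    would still need expert review before calling them diagnostic."""
    return [v for v in variants if is_qualified(v) and v["gene"] in asd_genes]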


Subjects
Autism Spectrum Disorder, Autistic Disorder, Humans, Autism Spectrum Disorder/genetics, Whole Genome Sequencing, DNA Sequence Analysis, Computational Biology
4.
Int J Gynaecol Obstet; 166(3): 1273-1278, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38546527

ABSTRACT

OBJECTIVE: The analysis of surgical videos using artificial intelligence holds great promise for the future of surgery by facilitating the development of surgical best practices, identifying key pitfalls, enhancing situational awareness, and disseminating that information via real-time, intraoperative decision support. The objective of the present study was to examine the feasibility and accuracy of a novel computer vision algorithm for hysterectomy surgical step identification.

METHODS: This was a retrospective study of surgical videos of laparoscopic hysterectomies performed in 277 patients at five medical centers. We used a surgical intelligence platform (Theator Inc.) that employs advanced computer vision and AI technology to automatically capture video data during surgery, deidentify it, and upload procedures to a secure cloud infrastructure. Videos were manually annotated with the sequential steps of surgery by a team of annotation specialists. Subsequently, a computer vision system was trained to perform automated step detection in hysterectomy. Accuracy was determined by comparing automated video annotations with manual human annotations.

RESULTS: The mean duration of the videos was 103 ± 43 min. Accuracy between AI-based predictions and manual human annotations was 93.1% on average. Accuracy was highest for the dissection and mobilization step (96.9%) and lowest for the adhesiolysis step (70.3%).

CONCLUSION: The results of the present study demonstrate that a novel AI-based model achieves high accuracy for automated step identification in hysterectomy. This lays the foundation for the next phase of AI, focused on real-time clinical decision support and prediction of outcome measures, to optimize surgeon workflow and elevate patient care.
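Frame-level step predictions are typically noisy, so a common post-processing pattern (not necessarily the one used in this study) is to smooth per-frame classifier outputs into a stable step timeline. The sketch below shows a sliding majority vote over an assumed hysterectomy step vocabulary; the label set and window size are illustrative assumptions.

```python
import numpy as np

# Assumed label set for illustration; the paper's full step list is not given here.
STEPS = ["adhesiolysis", "dissection and mobilization", "colpotomy", "cuff closure"]

def smooth_step_predictions(frame_probs, window=31):
    """Turn noisy per-frame step probabilities into a stable step timeline.

    `frame_probs` is an (n_frames, len(STEPS)) array from any frame-level
    classifier; a sliding majority vote suppresses brief misclassifications.
    """
    raw = frame_probs.argmax(axis=1)          # most likely step per frame
    half = window // 2
    smoothed = np.empty_like(raw)
    for i in range(len(raw)):
        lo, hi = max(0, i - half), min(len(raw), i + half + 1)
        smoothed[i] = np.bincount(raw[lo:hi]).argmax()  # majority vote in window
    return [STEPS[k] for k in smoothed]
```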


Subjects
Artificial Intelligence, Hysterectomy, Laparoscopy, Humans, Female, Hysterectomy/methods, Retrospective Studies, Laparoscopy/methods, Video Recording, Feasibility Studies, Gynecology, Algorithms, Adult
5.
Front Artif Intell; 7: 1375482, 2024.
Article in English | MEDLINE | ID: mdl-38525302

ABSTRACT

Objective: Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements.

Materials and methods: Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then used to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, using a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison to human annotations as the reference standard.

Results: A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%).

Conclusion: We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models to surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
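The transfer-learning idea described here can be sketched as follows: reuse a frame encoder pre-trained on laparoscopic step recognition and fine-tune only a small classification head for the three TURBT steps. The checkpoint name, backbone choice, and frozen-backbone strategy below are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

N_TURBT_STEPS = 3  # primary endoscopic evaluation, tumor resection, surface coagulation

# Frame encoder assumed to have been trained on laparoscopic step recognition.
encoder = models.resnet50(weights=None)
encoder.fc = nn.Identity()  # keep the 2048-d frame features
encoder.load_state_dict(torch.load("laparoscopy_pretrained.pt"), strict=False)  # hypothetical checkpoint

for p in encoder.parameters():  # freeze the pretrained backbone
    p.requires_grad = False

head = nn.Linear(2048, N_TURBT_STEPS)      # new head for the TURBT step vocabulary
model = nn.Sequential(encoder, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def training_step(frames, step_labels):
    """One fine-tuning step on a batch of TURBT frames shaped (B, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(frames), step_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```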
