Results 1 - 13 of 13
1.
Article in English | MEDLINE | ID: mdl-38848990

ABSTRACT

OBJECTIVE: To demonstrate the use of surgical intelligence to routinely and automatically assess the proportion of time spent outside of the patient's body (out-of-body, OOB) in laparoscopic gynecological procedures, as a potential basis for clinical and efficiency-related insights. DESIGN: A retrospective analysis of videos of laparoscopic gynecological procedures. SETTING: Two operating rooms at the Gynecology Department of a tertiary medical center. PARTICIPANTS: All patients who underwent laparoscopic gynecological procedures between January 1, 2021 and December 31, 2022 in those two rooms. INTERVENTIONS: A surgical intelligence platform installed in the two rooms routinely captured and analyzed surgical video, using AI to identify and document procedure duration and the amount and percentage of time that the laparoscope was withdrawn from the patient's body per procedure. RESULTS: A total of 634 surgical videos were included in the final dataset. The cumulative time for all procedures was 639 hours, of which 48 hours (7.5%) were OOB segments. The average OOB percentage was 8.7% (SD = 8.7%) across all procedures and differed significantly between procedure types (p < .001), with unilateral and bilateral salpingo-oophorectomies showing the highest percentages at 15.6% (SD = 13.3%) and 13.3% (SD = 11.3%), respectively. Hysterectomy and myomectomy, which do not require the endoscope to be removed for specimen extraction, showed a lower percentage (mean = 4.2%, SD = 5.2%) than the other procedures (mean = 11.1%, SD = 9.3%; p < .001). Percentages were lower when the operating team included a senior surgeon (mean = 8.4%, SD = 9.2%) than when it did not (mean = 10.1%, SD = 6.9%; p < .001). CONCLUSION: Surgical intelligence revealed a substantial percentage of OOB segments in laparoscopic gynecological procedures, alongside associations with surgeon seniority and procedure type. Further research is needed to evaluate how laparoscope removal affects postoperative outcomes and operational efficiency in surgery.

2.
Article in English | MEDLINE | ID: mdl-38546527

ABSTRACT

OBJECTIVE: The analysis of surgical videos using artificial intelligence holds great promise for the future of surgery by facilitating the development of surgical best practices, identifying key pitfalls, enhancing situational awareness, and disseminating that information via real-time, intraoperative decision-making. The objective of the present study was to examine the feasibility and accuracy of a novel computer vision algorithm for hysterectomy surgical step identification. METHODS: This was a retrospective study conducted on surgical videos of laparoscopic hysterectomies performed in 277 patients in five medical centers. We used a surgical intelligence platform (Theator Inc.) that employs advanced computer vision and AI technology to automatically capture video data during surgery, deidentify, and upload procedures to a secure cloud infrastructure. Videos were manually annotated with sequential steps of surgery by a team of annotation specialists. Subsequently, a computer vision system was trained to perform automated step detection in hysterectomy. Analyzing automated video annotations in comparison to manual human annotations was used to determine accuracy. RESULTS: The mean duration of the videos was 103 ± 43 min. Accuracy between AI-based predictions and manual human annotations was 93.1% on average. Accuracy was highest for the dissection and mobilization step (96.9%) and lowest for the adhesiolysis step (70.3%). CONCLUSION: The results of the present study demonstrate that a novel AI-based model achieves high accuracy for automated steps identification in hysterectomy. This lays the foundations for the next phase of AI, focused on real-time clinical decision support and prediction of outcome measures, to optimize surgeon workflow and elevate patient care.
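The accuracy metric in this study (agreement between AI-based and manual step annotations) can be illustrated with a minimal sketch; the function and step labels below are hypothetical and not taken from the Theator platform:

```python
def step_accuracy(predicted, manual):
    """Fraction of frames whose predicted surgical step matches the manual annotation."""
    if len(predicted) != len(manual):
        raise ValueError("annotation sequences must cover the same frames")
    return sum(p == m for p, m in zip(predicted, manual)) / len(manual)

# Hypothetical per-second step labels for a short clip
manual    = ["dissection", "dissection", "adhesiolysis", "adhesiolysis", "closure"]
predicted = ["dissection", "dissection", "adhesiolysis", "closure", "closure"]
print(step_accuracy(predicted, manual))  # 0.8
```

Averaging this per-video score over a test set yields an overall figure comparable to the 93.1% reported above.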

3.
Front Artif Intell ; 7: 1375482, 2024.
Article in English | MEDLINE | ID: mdl-38525302

ABSTRACT

Objective: Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements. Materials and methods: Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then used to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, using a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison to human annotations as the reference standard. Results: A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%). Conclusion: We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer-learning from laparoscopy-based computer vision models into surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
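The transfer-learning idea in this study, reusing a pretrained backbone so that only a small task-specific head is trained on limited new data, can be sketched as follows. Everything here (the fixed random "backbone", the synthetic frames and labels) is an illustrative stand-in, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed stand-in for a backbone pretrained on laparoscopic video:
# its weights are frozen, so fine-tuning updates only the small head.
W_backbone = rng.normal(size=(16, 8))

def features(x):
    return np.tanh(x @ W_backbone)  # frozen pretrained representation

# Tiny synthetic stand-in for annotated TURBT frames: 40 frames, 3 steps
X = rng.normal(size=(40, 16))
y = rng.integers(0, 3, size=40)
onehot = np.eye(3)[y]

def softmax_loss(W_head):
    """Cross-entropy of a linear softmax head on the frozen features."""
    logits = features(X) @ W_head
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p = p / p.sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(y)), y])), p

W_head = np.zeros((8, 3))  # only these weights are trained
initial_loss, _ = softmax_loss(W_head)
for _ in range(300):
    _, p = softmax_loss(W_head)
    W_head -= 0.1 * features(X).T @ (p - onehot) / len(X)  # gradient step on the head only
final_loss, _ = softmax_loss(W_head)
print(initial_loss, final_loss)
```

Because the backbone stays frozen, only the small head is optimized, which is the reason far fewer annotated videos are needed than when training end to end.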

4.
J Urol ; 211(4): 575-584, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38265365

ABSTRACT

PURPOSE: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP). MATERIALS AND METHODS: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then utilized to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparing to manual human annotations as the reference standard. RESULTS: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence‒enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%). CONCLUSIONS: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.


Subjects
Prostatectomy, Robotic Surgical Procedures, Humans, Male, Artificial Intelligence, Educational Status, Prostate/surgery, Prostatectomy/methods, Robotic Surgical Procedures/methods, Video Recording
5.
Int J Mol Sci ; 25(2)2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38256266

ABSTRACT

Autism spectrum disorder (ASD) is a common condition with lifelong implications. The last decade has seen dramatic improvements in DNA sequencing and related bioinformatics and databases. We analyzed the raw DNA sequencing files on the Variantyx® bioinformatics platform for the last 50 ASD patients evaluated with trio whole-genome sequencing (trio-WGS). "Qualified" variants were defined as coding, rare, and evolutionarily conserved. Primary Diagnostic Variants (PDVs) were additionally required to be in genes directly linked to ASD and to match the clinical presentation. A PDV was identified in 34/50 (68%) of cases, including 25 (50%) cases with heterozygous de novo and 10 (20%) with inherited variants. De novo variants in genes directly associated with ASD were far more likely to be Qualifying than non-Qualifying versus a control group of genes (p = 0.0002), validating that most are indeed disease related. Sequence reanalysis increased diagnostic yield from 28% to 68%, mostly through inclusion of de novo PDVs in genes not yet reported as ASD associated. Thirty-three subjects (66%) had treatment recommendation(s) based on DNA analyses. Our results demonstrate a high yield of trio-WGS for revealing molecular diagnoses in ASD, which is greatly enhanced by reanalyzing DNA sequencing files. In contrast to previous reports, de novo variants dominate the findings, mostly representing novel conditions. This has implications for the cause and rising prevalence of autism.
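The "Qualified" filter described above (coding, rare, evolutionarily conserved) amounts to a simple predicate over annotated variants. A minimal sketch with hypothetical records and an assumed rarity threshold (the abstract does not state its exact cutoffs, and the field names are not Variantyx's schema):

```python
# Hypothetical annotated variant records
variants = [
    {"gene": "CHD8",  "coding": True,  "allele_freq": 0.00001, "conserved": True},
    {"gene": "SCN2A", "coding": True,  "allele_freq": 0.02,    "conserved": True},
    {"gene": "BRCA2", "coding": False, "allele_freq": 0.00005, "conserved": False},
]

RARE_THRESHOLD = 0.001  # assumed population allele-frequency cutoff

def is_qualified(v):
    """'Qualified' per the abstract: coding, rare, and evolutionarily conserved."""
    return v["coding"] and v["allele_freq"] < RARE_THRESHOLD and v["conserved"]

qualified = [v["gene"] for v in variants if is_qualified(v)]
print(qualified)  # ['CHD8']
```

A Primary Diagnostic Variant would then add further conditions (gene directly linked to ASD, clinical match) on top of this predicate.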


Subjects
Autism Spectrum Disorder, Autistic Disorder, Humans, Autism Spectrum Disorder/genetics, Whole Genome Sequencing, DNA Sequence Analysis, Computational Biology
6.
Surg Endosc ; 37(11): 8818-8828, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37626236

ABSTRACT

INTRODUCTION: Artificial intelligence and computer vision are revolutionizing the way we perceive video analysis in minimally invasive surgery. This emerging technology has increasingly been leveraged successfully for video segmentation, documentation, education, and formative assessment. New, sophisticated platforms allow pre-determined segments chosen by surgeons to be automatically presented without the need to review entire videos. This study aimed to validate and demonstrate the accuracy of the first reported AI-based computer vision algorithm that automatically recognizes surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair. METHODS: Videos of TEP procedures were manually labeled by a team of annotators trained to identify and label surgical workflow according to six major steps. For bilateral hernias, an additional change of focus step was also included. The videos were then used to train a computer vision AI algorithm. Performance accuracy was assessed in comparison to the manual annotations. RESULTS: A total of 619 full-length TEP videos were analyzed: 371 were used to train the model, 93 for internal validation, and the remaining 155 as a test set to evaluate algorithm accuracy. The overall accuracy for the complete procedure was 88.8%. Per-step accuracy reached the highest value for the hernia sac reduction step (94.3%) and the lowest for the preperitoneal dissection step (72.2%). CONCLUSIONS: These results indicate that the novel AI model was able to provide fully automated video analysis with a high accuracy level. High-accuracy models leveraging AI to enable automation of surgical video analysis allow us to identify and monitor surgical performance, providing mathematical metrics that can be stored, evaluated, and compared. As such, the proposed model is capable of enabling data-driven insights to improve surgical quality and demonstrate best practices in TEP procedures.


Subjects
Inguinal Hernia, Laparoscopy, Humans, Inguinal Hernia/surgery, Laparoscopy/methods, Artificial Intelligence, Workflow, Minimally Invasive Surgical Procedures, Herniorrhaphy/methods, Surgical Mesh
7.
Front Neurol ; 14: 1151835, 2023.
Article in English | MEDLINE | ID: mdl-37234784

ABSTRACT

Objective: To utilize whole exome or genome sequencing and the scientific literature to identify candidate genes for cyclic vomiting syndrome (CVS), an idiopathic migraine variant with paroxysmal nausea and vomiting. Methods: A retrospective chart review of 80 unrelated participants, ascertained by a quaternary care CVS specialist, was conducted. Genes associated with paroxysmal symptoms were identified by querying the literature for genes associated with dominant cases of intermittent vomiting or with both discomfort and disability; the raw genetic sequence was then reviewed for these genes. "Qualifying" variants were defined as coding, rare, and conserved. Additionally, "Key Qualifying" variants were Pathogenic/Likely Pathogenic, or "Clinical" based upon the presence of a corresponding diagnosis. Candidate association to CVS was based on a point system. Results: Thirty-five paroxysmal genes were identified in the literature review. Among these, 12 genes were scored as "Highly likely" (SCN4A, CACNA1A, CACNA1S, RYR2, TRAP1, MEFV) or "Likely" (SCN9A, TNFRSF1A, POLG, SCN10A, POGZ, TRPA1) to be CVS related. Nine additional genes (OTC, ATP1A3, ATP1A2, GFAP, SLC2A1, TUBB3, PPM1D, CHAMP1, HMBS) had sufficient evidence in the literature but not from our study participants. Candidate status for mitochondrial DNA was confirmed by the literature and our study data. Among the above-listed 22 CVS candidate genes, a Key Qualifying variant was identified in 31/80 (34%) of participants, and any Qualifying variant was present in 61/80 (76%). These findings were highly statistically significant (p < 0.0001 and p = 0.004, respectively) compared with an alternative hypothesis/control group of brain neurotransmitter receptor genes. An additional, less-intensive post hoc review of all genes in the exome outside our paroxysmal gene set identified 13 more genes as "Possibly" CVS related.
Conclusion: All 22 CVS candidate genes are associated with either cation transport or energy metabolism (14 directly, 8 indirectly). Our findings suggest a cellular model in which aberrant ion gradients lead to mitochondrial dysfunction, or vice versa, in a pathogenic vicious cycle of cellular hyperexcitability. Among the non-paroxysmal genes identified, 5 are known causes of peripheral neuropathy. Our model is consistent with multiple current hypotheses of CVS.

8.
Curr Top Med Chem ; 22(8): 686-698, 2022.
Article in English | MEDLINE | ID: mdl-35139798

ABSTRACT

An urgent need exists for a rapid, cost-effective, facile, and reliable nucleic acid assay for mass screening to control and prevent the spread of emerging pandemic diseases; current diagnostic tools do not fully meet this need. In this review, we summarize the current state-of-the-art research in novel nucleic acid amplification and detection that could be applied to point-of-care (POC) diagnosis and mass screening of diseases. The critical technological breakthroughs are discussed in terms of their advantages and disadvantages. Finally, we discuss the future challenges of developing nucleic acid-based POC diagnosis.


Subjects
Nucleic Acids, Nucleic Acid Amplification Techniques, Pandemics, Point-of-Care Systems
9.
Sci Rep ; 10(1): 22208, 2020 Dec 17.
Article in English | MEDLINE | ID: mdl-33335191

ABSTRACT

AI is becoming ubiquitous, revolutionizing many aspects of our lives. In surgery, it is still a promise. AI has the potential to improve surgeon performance and impact patient care, from post-operative debrief to real-time decision support. But how much data does an AI-based system need to learn surgical context with high fidelity? To answer this question, we leveraged a large-scale, diverse cholecystectomy video dataset. We assessed surgical workflow recognition and report a deep learning system that not only detects surgical phases but does so with high accuracy and generalizes to new settings and unseen medical centers. Our findings provide a solid foundation for translating AI applications from research to practice, ushering in a new era of surgical intelligence.

10.
Genet Med ; 21(12): 2807-2814, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31164752

ABSTRACT

PURPOSE: Phenotype information is crucial for the interpretation of genomic variants. So far it has only been accessible for bioinformatics workflows after encoding into clinical terms by expert dysmorphologists. METHODS: Here, we introduce an approach driven by artificial intelligence that uses portrait photographs for the interpretation of clinical exome data. We measured the value added by computer-assisted image analysis to the diagnostic yield on a cohort consisting of 679 individuals with 105 different monogenic disorders. For each case in the cohort we compiled frontal photos, clinical features, and the disease-causing variants, and simulated multiple exomes of different ethnic backgrounds. RESULTS: The additional use of similarity scores from computer-assisted analysis of frontal photos improved the top 1 accuracy rate by more than 20-89% and the top 10 accuracy rate by more than 5-99% for the disease-causing gene. CONCLUSION: Image analysis by deep-learning algorithms can be used to quantify the phenotypic similarity (PP4 criterion of the American College of Medical Genetics and Genomics guidelines) and to advance the performance of bioinformatics pipelines for exome analysis.


Subjects
Computational Biology/methods, Computer-Assisted Image Processing/methods, DNA Sequence Analysis/methods, Algorithms, Genetic Databases, Deep Learning, Exome/genetics, Female, Genomics, Humans, Male, Phenotype, Software
11.
Nat Med ; 25(1): 60-64, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30617323

ABSTRACT

Syndromic genetic conditions, in aggregate, affect 8% of the population [1]. Many syndromes have recognizable facial features [2] that are highly informative to clinical geneticists [3-5]. Recent studies show that facial analysis technologies measured up to the capabilities of expert clinicians in syndrome identification [6-9]. However, these technologies identified only a few disease phenotypes, limiting their role in clinical settings, where hundreds of diagnoses must be considered. Here we present a facial image analysis framework, DeepGestalt, using computer vision and deep-learning algorithms, that quantifies similarities to hundreds of syndromes. DeepGestalt outperformed clinicians in three initial experiments, two with the goal of distinguishing subjects with a target syndrome from other syndromes, and one of separating different genetic subtypes in Noonan syndrome. In the final experiment, reflecting a real clinical setting, DeepGestalt achieved 91% top-10 accuracy in identifying the correct syndrome on 502 different images. The model was trained on a dataset of over 17,000 images representing more than 200 syndromes, curated through a community-driven phenotyping platform. DeepGestalt potentially adds considerable value to phenotypic evaluations in clinical genetics, genetic testing, research and precision medicine.
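Top-10 accuracy, the headline metric here, counts a case as correct when the true syndrome appears anywhere in the model's ten highest-ranked suggestions. A minimal sketch with hypothetical rankings (k=3 for brevity):

```python
def top_k_accuracy(ranked_predictions, true_labels, k=10):
    """Fraction of cases where the correct label appears in the model's top-k list."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(ranked_predictions, true_labels))
    return hits / len(true_labels)

# Toy example: three cases, hypothetical ranked syndrome suggestions
ranked = [
    ["Noonan", "Cornelia de Lange", "Angelman"],
    ["Williams", "Noonan", "Kabuki"],
    ["Angelman", "Kabuki", "Williams"],
]
truth = ["Noonan", "Kabuki", "Prader-Willi"]
print(top_k_accuracy(ranked, truth, k=3))  # 2/3
```

Applied with k=10 over the 502 test images, this is the computation behind the reported 91%.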


Subjects
Deep Learning, Facies, Inborn Genetic Diseases/diagnosis, Algorithms, Genotype, Humans, Computer-Assisted Image Processing, Phenotype, Syndrome
12.
J Gastroenterol ; 51(3): 214-21, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26112122

ABSTRACT

BACKGROUND: Early detection of colorectal cancer (CRC) can reduce mortality and morbidity. Current screening methods include colonoscopy and stool tests, but a simple low-cost blood test would increase compliance. This preliminary study assessed the utility of analyzing the entire bio-molecular profile of peripheral blood mononuclear cells (PBMCs) and plasma using Fourier transform infrared (FTIR) spectroscopy for early detection of CRC. METHODS: Blood samples were prospectively collected from 62 candidates for CRC screening/diagnostic colonoscopy or surgery for colonic neoplasia. PBMCs and plasma were separated by Ficoll gradient, dried on zinc selenide slides, and placed under a FTIR microscope. FTIR spectra were analyzed for biomarkers and classified by principal component and discriminant analyses. Findings were compared among diagnostic groups. RESULTS: Significant changes in multiple bands that can serve as CRC biomarkers were observed in PBMCs (p = ~0.01) and plasma (p = ~0.0001) spectra. There were minor but statistically significant differences in both blood components between healthy individuals and patients with benign polyps. Following multivariate analysis, the healthy individuals could be well distinguished from patients with CRC, and the patients with benign polyps were mostly distributed as a distinct subgroup within the overlap region. Leave-one-out cross-validation for evaluating method performance yielded an area under the receiver operating characteristics curve of 0.77, with sensitivity 81.5% and specificity 71.4%. CONCLUSIONS: Joint analysis of the biochemical profile of two blood components rather than a single biomarker is a promising strategy for early detection of CRC. Additional studies are required to validate our preliminary clinical results.
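The evaluation pipeline described above, dimensionality reduction of the spectra followed by a discriminant classifier and scored with leave-one-out cross-validation, can be sketched on synthetic data. The nearest-centroid classifier below is a simplified stand-in for the paper's discriminant analysis, and the "spectra" are random:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for FTIR spectra: 20 samples x 50 wavenumber bands,
# with a small class-dependent shift so the two groups are separable.
X = rng.normal(size=(20, 50))
y = np.array([0] * 10 + [1] * 10)   # 0 = no cancer, 1 = CRC (illustrative)
X[y == 1, :5] += 1.5                # class signal confined to a few bands

def pca_project(train, test, n_components=3):
    """Fit PCA on the training spectra only, then project both sets."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    comps = vt[:n_components]
    return (train - mean) @ comps.T, (test - mean) @ comps.T

# Leave-one-out cross-validation with a nearest-centroid classifier
correct = 0
for i in range(len(X)):
    train_idx = np.arange(len(X)) != i
    Xtr, Xte = pca_project(X[train_idx], X[i:i + 1])
    centroids = [Xtr[y[train_idx] == c].mean(axis=0) for c in (0, 1)]
    pred = int(np.argmin([np.linalg.norm(Xte[0] - c) for c in centroids]))
    correct += int(pred == y[i])

loocv_accuracy = correct / len(X)
print(loocv_accuracy)
```

Fitting the PCA inside each leave-one-out fold, as done here, is what keeps the held-out spectrum from leaking into the model, mirroring the paper's validation of its sensitivity, specificity, and AUC figures.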


Subjects
Tumor Biomarkers/blood, Colorectal Neoplasms/diagnosis, Fourier Transform Infrared Spectroscopy/methods, Adult, Aged, Aged 80 and over, Blood Specimen Collection/methods, Colonoscopy, Early Detection of Cancer/methods, Female, Humans, Mononuclear Leukocytes/chemistry, Male, Middle Aged, Prospective Studies, Young Adult
13.
BMC Cancer ; 15: 408, 2015 May 15.
Article in English | MEDLINE | ID: mdl-25975566

ABSTRACT

BACKGROUND: Most blood tests aimed at breast cancer screening rely on quantification of a single biomarker or a few biomarkers. The aim of this study was to evaluate the feasibility of detecting breast cancer by analyzing the total biochemical composition of plasma as well as peripheral blood mononuclear cells (PBMCs) using infrared spectroscopy. METHODS: Blood was collected from 29 patients with confirmed breast cancer and 30 controls with benign or no breast tumors, undergoing screening for breast cancer. PBMCs and plasma were isolated and dried on a zinc selenide slide and measured under a Fourier transform infrared (FTIR) microscope to obtain their infrared absorption spectra. Differences in the spectra of PBMCs and plasma between the groups were analyzed, as was the specific influence of the relevant pathological characteristics of the cancer patients. RESULTS: Several bands in the FTIR spectra of both blood components significantly distinguished patients with and without cancer. Employing feature extraction with quadratic discriminant analysis, a sensitivity of ~90% and a specificity of ~80% for breast cancer detection was achieved. These results were confirmed by Monte Carlo cross-validation. Further analysis of the cancer group revealed an influence of several clinical parameters, such as the involvement of lymph nodes, on the infrared spectra, with each blood component affected by different parameters. CONCLUSION: The present preliminary study suggests that FTIR spectroscopy of PBMCs and plasma is a potentially feasible and efficient tool for the early detection of breast neoplasms. An important application of our study is the distinction between benign lesions (considered part of the non-cancer group) and malignant tumors, thus reducing false positive results at screening. Furthermore, the correlation of specific spectral changes with clinical parameters of cancer patients indicates a possible contribution to diagnosis and prognosis.


Subjects
Breast Neoplasms/diagnosis, Breast Neoplasms/metabolism, Early Detection of Cancer, Adult, Aged, Aged 80 and over, Tumor Biomarkers, Biopsy, Blood Chemical Analysis, Breast Neoplasms/blood, Case-Control Studies, Early Detection of Cancer/methods, Female, Humans, Mononuclear Leukocytes/metabolism, Middle Aged, ROC Curve, Risk Factors, Fourier Transform Infrared Spectroscopy, Young Adult