Results 1 - 20 of 29

1.
Nature ; 595(7866): 283-288, 2021 07.
Article in English | MEDLINE | ID: mdl-34010947

ABSTRACT

COVID-19 manifests with a wide spectrum of clinical phenotypes that are characterized by exaggerated and misdirected host immune responses [1-6]. Although pathological innate immune activation is well-documented in severe disease [1], the effect of autoantibodies on disease progression is less well-defined. Here we use a high-throughput autoantibody discovery technique known as rapid extracellular antigen profiling [7] to screen a cohort of 194 individuals infected with SARS-CoV-2, comprising 172 patients with COVID-19 and 22 healthcare workers with mild disease or asymptomatic infection, for autoantibodies against 2,770 extracellular and secreted proteins (members of the exoproteome). We found that patients with COVID-19 exhibit marked increases in autoantibody reactivities as compared to uninfected individuals, and show a high prevalence of autoantibodies against immunomodulatory proteins (including cytokines, chemokines, complement components and cell-surface proteins). We established that these autoantibodies perturb immune function and impair virological control by inhibiting immunoreceptor signalling and by altering peripheral immune cell composition, and found that mouse surrogates of these autoantibodies increase disease severity in a mouse model of SARS-CoV-2 infection. Our analysis of autoantibodies against tissue-associated antigens revealed associations with specific clinical characteristics. Our findings suggest a pathological role for exoproteome-directed autoantibodies in COVID-19, with diverse effects on immune functionality and associations with clinical outcomes.


Subjects
Autoantibodies/analysis , Autoantibodies/immunology , COVID-19/immunology , COVID-19/metabolism , Proteome/immunology , Proteome/metabolism , Animals , Antigens, Surface/immunology , COVID-19/pathology , COVID-19/physiopathology , Case-Control Studies , Complement System Proteins/immunology , Cytokines/immunology , Disease Models, Animal , Disease Progression , Female , Humans , Male , Mice , Organ Specificity/immunology
2.
Hum Genomics ; 17(1): 80, 2023 08 29.
Article in English | MEDLINE | ID: mdl-37641126

ABSTRACT

Over the last century, outbreaks and pandemics have occurred with disturbing regularity, necessitating advance preparation and large-scale, coordinated response. Here, we developed a machine learning predictive model of disease severity and length of hospitalization for COVID-19, which can be utilized as a platform for future unknown viral outbreaks. We combined untargeted plasma metabolomics data obtained from COVID-19 patients (n = 111) during hospitalization and healthy controls (n = 342) with clinical and comorbidity data (n = 508) to build this patient triage platform, which consists of three parts: (i) a clinical decision tree, which, among other biomarkers, showed that increased eosinophils indicate worse disease prognosis and can serve as a new potential biomarker with high accuracy (AUC = 0.974); (ii) estimation of patient hospitalization length with ± 5 days error (R2 = 0.9765); and (iii) prediction of disease severity and the need for patient transfer to the intensive care unit. We report a significant decrease in serotonin levels in patients who needed positive airway pressure oxygen and/or were intubated. Furthermore, 5-hydroxytryptophan, allantoin, and glucuronic acid metabolites were increased in COVID-19 patients, and collectively they can serve as biomarkers to predict disease progression. The ability to quickly identify which patients will develop life-threatening illness would allow the efficient allocation of medical resources and implementation of the most effective medical interventions. We advocate that the same approach could be utilized in future viral outbreaks to help hospitals triage patients more effectively and improve patient outcomes while optimizing healthcare resources.


Subjects
COVID-19 , Humans , COVID-19/epidemiology , Triage , Allantoin , Disease Outbreaks , Machine Learning
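The triage platform above combines a clinical decision tree with severity and length-of-stay prediction. As a minimal, hypothetical sketch of such a threshold-based triage rule (the cutoffs, units, and branch order below are illustrative assumptions, not the authors' fitted decision tree):

```python
def triage(eosinophils_k_ul, serotonin_ng_ml, needs_oxygen):
    """Toy three-way triage rule inspired by the study's biomarkers.

    All thresholds are invented for illustration: the study reports that
    low serotonin tracked with the need for oxygen support/intubation and
    that elevated eosinophils predicted worse prognosis.
    """
    if needs_oxygen or serotonin_ng_ml < 50:
        # Oxygen requirement or markedly low serotonin: escalate first.
        return "ICU-evaluation"
    if eosinophils_k_ul > 0.3:
        # Elevated eosinophils flagged as a poor-prognosis biomarker.
        return "admit-ward"
    return "monitor-outpatient"
```

A real implementation would learn these splits from data rather than hard-code them; this sketch only shows the shape of the resulting decision logic.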
3.
Eur Heart J ; 44(43): 4592-4604, 2023 11 14.
Article in English | MEDLINE | ID: mdl-37611002

ABSTRACT

BACKGROUND AND AIMS: Early diagnosis of aortic stenosis (AS) is critical to prevent morbidity and mortality but requires skilled examination with Doppler imaging. This study reports the development and validation of a novel deep learning model that relies on two-dimensional (2D) parasternal long axis videos from transthoracic echocardiography without Doppler imaging to identify severe AS, suitable for point-of-care ultrasonography. METHODS AND RESULTS: In a training set of 5257 studies (17 570 videos) from 2016 to 2020 [Yale-New Haven Hospital (YNHH), Connecticut], an ensemble of three-dimensional convolutional neural networks was developed to detect severe AS, leveraging self-supervised contrastive pretraining for label-efficient model development. This deep learning model was validated in a temporally distinct set of 2040 consecutive studies from 2021 from YNHH as well as two geographically distinct cohorts of 4226 and 3072 studies, from California and other hospitals in New England, respectively. The deep learning model achieved an area under the receiver operating characteristic curve (AUROC) of 0.978 (95% CI: 0.966, 0.988) for detecting severe AS in the temporally distinct test set, maintaining its diagnostic performance in geographically distinct cohorts [0.952 AUROC (95% CI: 0.941, 0.963) in California and 0.942 AUROC (95% CI: 0.909, 0.966) in New England]. The model was interpretable with saliency maps identifying the aortic valve, mitral annulus, and left atrium as the predictive regions. Among non-severe AS cases, predicted probabilities were associated with worse quantitative metrics of AS suggesting an association with various stages of AS severity. CONCLUSION: This study developed and externally validated an automated approach for severe AS detection using single-view 2D echocardiography, with potential utility for point-of-care screening.


Subjects
Aortic Valve Stenosis , Deep Learning , Humans , Echocardiography , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/complications , Aortic Valve/diagnostic imaging , Ultrasonography
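The AUROC values reported above measure how well the model's continuous output separates severe-AS from non-severe studies. A self-contained way to compute AUROC from raw scores is the Mann-Whitney formulation (the scores in the usage line are made up; real evaluation would use the model's per-study probabilities):

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability that a randomly chosen positive scores
    above a randomly chosen negative (Mann-Whitney U / c-statistic).
    Ties count as 0.5. O(n*m); fine for a sketch, use ranking for scale."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs: severe-AS studies vs. controls
score = auroc([0.9, 0.8, 0.7], [0.2, 0.75])
```

Confidence intervals like the 95% CIs quoted in the abstract are typically obtained by bootstrapping this statistic over resampled studies.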
4.
J Infect Dis ; 227(5): 663-674, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36408616

ABSTRACT

BACKGROUND: The impact variant-specific immune evasion and waning protection have on declining coronavirus disease 2019 (COVID-19) vaccine effectiveness (VE) remains unclear. Using whole-genome sequencing (WGS), we examined the contribution these factors had on the decline that followed the introduction of the Delta variant. Furthermore, we evaluated calendar-period-based classification as a WGS alternative. METHODS: We conducted a test-negative case-control study among people tested for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) between 1 April and 24 August 2021. Variants were classified using WGS and calendar period. RESULTS: We included 2029 cases (positive, sequenced samples) and 343 727 controls (negative tests). VE 14-89 days after second dose was significantly higher against Alpha (84.4%; 95% confidence interval [CI], 75.6%-90.0%) than Delta infection (68.9%; 95% CI, 58.0%-77.1%). The odds of Delta infection were significantly higher 90-149 than 14-89 days after second dose (P value = .003). Calendar-period-classified VE estimates approximated WGS-classified estimates; however, calendar-period-based classification was subject to misclassification (35% Alpha, 4% Delta). CONCLUSIONS: Both waning protection and variant-specific immune evasion contributed to the lower effectiveness. While calendar-period-classified VE estimates mirrored WGS-classified estimates, our analysis highlights the need for WGS when variants are cocirculating and misclassification is likely.


Subjects
COVID-19 , Hepatitis D , Humans , COVID-19 Vaccines , Case-Control Studies , Immune Evasion , SARS-CoV-2 , Vaccine Efficacy
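In a test-negative design like the one above, VE is estimated as (1 − OR) × 100, where the OR compares the odds of vaccination among cases (positive tests) and controls (negative tests). The study adjusts the OR via logistic regression; the sketch below shows only the unadjusted 2×2 version with invented counts and a Wald CI on the log-OR scale:

```python
import math

def ve_from_counts(vax_cases, unvax_cases, vax_controls, unvax_controls):
    """Unadjusted test-negative VE with a 95% CI.

    OR = odds of vaccination in cases / odds in controls;
    VE = (1 - OR) * 100. The CI uses the standard Wald SE of log(OR).
    Counts are illustrative, not from the study.
    """
    or_ = (vax_cases / unvax_cases) / (vax_controls / unvax_controls)
    se = math.sqrt(1/vax_cases + 1/unvax_cases + 1/vax_controls + 1/unvax_controls)
    or_lo = math.exp(math.log(or_) - 1.96 * se)
    or_hi = math.exp(math.log(or_) + 1.96 * se)
    # Note: the upper OR bound maps to the lower VE bound, and vice versa.
    return (1 - or_) * 100, (1 - or_hi) * 100, (1 - or_lo) * 100

ve, ve_lo, ve_hi = ve_from_counts(30, 70, 60, 40)  # hypothetical counts
```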
5.
PLoS Med ; 19(12): e1004136, 2022 12.
Article in English | MEDLINE | ID: mdl-36454733

ABSTRACT

BACKGROUND: The benefit of primary and booster vaccination in people who experienced a prior Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection remains unclear. The objective of this study was to estimate the effectiveness of primary (two-dose series) and booster (third dose) mRNA vaccination against Omicron (lineage BA.1) infection among people with a prior documented infection. METHODS AND FINDINGS: We conducted a test-negative case-control study of reverse transcription PCRs (RT-PCRs) analyzed with the TaqPath (Thermo Fisher Scientific) assay and recorded in the Yale New Haven Health system from November 1, 2021, to April 30, 2022. Overall, 11,307 cases (positive TaqPath analyzed RT-PCRs with S-gene target failure [SGTF]) and 130,041 controls (negative TaqPath analyzed RT-PCRs) were included (median age: cases: 35 years, controls: 39 years). Among cases and controls, 5.9% and 8.1% had a documented prior infection (positive SARS-CoV-2 test record ≥90 days prior to the included test), respectively. We estimated the effectiveness of primary and booster vaccination relative to SGTF-defined Omicron (lineage BA.1) variant infection using a logistic regression adjusted for date of test, age, sex, race/ethnicity, insurance, comorbidities, social vulnerability index, municipality, and healthcare utilization. The effectiveness of primary vaccination 14 to 149 days after the second dose was 41.0% (95% confidence interval (CI): 14.1% to 59.4%, p = 0.006) and 27.1% (95% CI: 18.7% to 34.6%, p < 0.001) for people with and without a documented prior infection, respectively. The effectiveness of booster vaccination (≥14 days after booster dose) was 47.1% (95% CI: 22.4% to 63.9%, p = 0.001) and 54.1% (95% CI: 49.2% to 58.4%, p < 0.001) in people with and without a documented prior infection, respectively.
To test whether booster vaccination reduced the risk of infection beyond that of the primary series, we compared the odds of infection among boosted (≥14 days after booster dose) and booster-eligible people (≥150 days after second dose). The odds ratio (OR) comparing boosted and booster-eligible people with a documented prior infection was 0.79 (95% CI: 0.54 to 1.16, p = 0.222), whereas the OR comparing boosted and booster-eligible people without a documented prior infection was 0.54 (95% CI: 0.49 to 0.59, p < 0.001). This study's limitations include the risk of residual confounding, the use of data from a single system, and the reliance on TaqPath analyzed RT-PCR results. CONCLUSIONS: In this study, we observed that primary vaccination provided significant but limited protection against Omicron (lineage BA.1) infection among people with and without a documented prior infection. While booster vaccination was associated with additional protection against Omicron BA.1 infection in people without a documented prior infection, it was not found to be associated with additional protection among people with a documented prior infection. These findings support primary vaccination in people regardless of documented prior infection status but suggest that infection history may impact the relative benefit of booster doses.


Subjects
COVID-19 , Humans , Adult , COVID-19/epidemiology , COVID-19/prevention & control , SARS-CoV-2/genetics , Case-Control Studies , Odds Ratio , Vaccination
6.
BMC Med Inform Decis Mak ; 21(1): 61, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33596898

ABSTRACT

BACKGROUND: The electronic health record (EHR) holds the prospect of providing more complete and timely access to clinical information for biomedical research, quality assessments, and quality improvement compared to other data sources, such as administrative claims. In this study, we sought to assess the completeness and timeliness of structured diagnoses in the EHR compared to computed diagnoses for hypertension (HTN), hyperlipidemia (HLD), and diabetes mellitus (DM). METHODS: We determined the amount of time for a structured diagnosis to be recorded in the EHR from when an equivalent diagnosis could be computed from other structured data elements, such as vital signs and laboratory results. We used EHR data for encounters from January 1, 2012 through February 10, 2019 from an academic health system. Diagnoses for HTN, HLD, and DM were computed for patients with at least two observations above threshold separated by at least 30 days, where the thresholds were outpatient blood pressure of ≥ 140/90 mmHg, any low-density lipoprotein ≥ 130 mg/dl, or any hemoglobin A1c ≥ 6.5%, respectively. The primary measure was the length of time between the computed diagnosis and the time at which a structured diagnosis could be identified within the EHR history or problem list. RESULTS: We found that 39.8% of those with HTN, 21.6% with HLD, and 5.2% with DM did not receive a corresponding structured diagnosis recorded in the EHR. For those who received a structured diagnosis, a mean of 389, 198, and 166 days elapsed before the patient had the corresponding diagnosis of HTN, HLD, or DM, respectively, recorded in the EHR. CONCLUSIONS: We found a marked temporal delay between when a diagnosis can be computed or inferred and when an equivalent structured diagnosis is recorded within the EHR. 
These findings demonstrate the continued need for additional study of the EHR to avoid bias when using observational data and reinforce the need for computational approaches to identify clinical phenotypes.


Subjects
Diabetes Mellitus , Hypertension , Diabetes Mellitus/diagnosis , Diabetes Mellitus/epidemiology , Electronic Health Records , Humans , Hypertension/diagnosis , Hypertension/epidemiology , Information Storage and Retrieval , Outpatients
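The computed-diagnosis rule above (two above-threshold observations separated by at least 30 days) translates directly into code. A sketch using the study's HTN systolic threshold of 140 mmHg, with invented observation dates (the real rule also checks diastolic ≥90 and restricts to outpatient readings):

```python
from datetime import date

def computed_diagnosis_date(observations, threshold, min_gap_days=30):
    """Return the date a diagnosis first becomes computable: the earliest
    above-threshold observation that falls at least `min_gap_days` after
    another above-threshold observation. Returns None if never met.

    `observations` is a list of (date, value) pairs; values and dates
    below are illustrative.
    """
    above = sorted(d for d, value in observations if value >= threshold)
    for first in above:
        for second in above:
            if (second - first).days >= min_gap_days:
                return second
    return None

# Hypothetical systolic BP readings (mmHg), threshold 140 as in the study
obs = [(date(2018, 1, 5), 132), (date(2018, 2, 1), 148), (date(2018, 3, 20), 151)]
```

The study's primary measure is then the elapsed time from this computed date to the first structured EHR diagnosis.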
7.
J Med Internet Res ; 21(4): e13043, 2019 04 09.
Article in English | MEDLINE | ID: mdl-30964441

ABSTRACT

BACKGROUND: Health care data are increasing in volume and complexity. Storing and analyzing these data to implement precision medicine initiatives and data-driven research has exceeded the capabilities of traditional computer systems. Modern big data platforms must be adapted to the specific demands of health care and designed for scalability and growth. OBJECTIVE: The objectives of our study were to (1) demonstrate the implementation of a data science platform built on open source technology within a large, academic health care system and (2) describe 2 computational health care applications built on such a platform. METHODS: We deployed a data science platform based on several open source technologies to support real-time, big data workloads. We developed data-acquisition workflows for Apache Storm and NiFi in Java and Python to capture patient monitoring and laboratory data for downstream analytics. RESULTS: Emerging data management approaches, along with open source technologies such as Hadoop, can be used to create integrated data lakes to store large, real-time datasets. This infrastructure also provides a robust analytics platform where health care and biomedical research data can be analyzed in near real time for precision medicine and computational health care use cases. CONCLUSIONS: The implementation and use of integrated data science platforms offer organizations the opportunity to combine traditional datasets, including data from the electronic health record, with emerging big data sources, such as continuous patient monitoring and real-time laboratory results. These platforms can enable cost-effective and scalable analytics for the information that will be key to the delivery of precision medicine initiatives. Organizations that can take advantage of the technical advances found in data science platforms will have the opportunity to provide comprehensive access to health care data for computational health care and precision medicine research.


Subjects
Data Science/methods , Delivery of Health Care/methods , Medical Informatics/methods , Precision Medicine/methods , Humans
8.
Pharmacoepidemiol Drug Saf ; 27(8): 848-856, 2018 08.
Article in English | MEDLINE | ID: mdl-29896873

ABSTRACT

PURPOSE: To estimate medical device utilization needed to detect safety differences among implantable cardioverter defibrillators (ICDs) generator models and compare these estimates to utilization in practice. METHODS: We conducted repeated sample size estimates to calculate the medical device utilization needed, systematically varying device-specific safety event rate ratios and significance levels while maintaining 80% power, testing 3 average adverse event rates (3.9, 6.1, and 12.6 events per 100 person-years) estimated from the American College of Cardiology's 2006 to 2010 National Cardiovascular Data Registry of ICDs. We then compared with actual medical device utilization. RESULTS: At significance level 0.05 and 80% power, 34% or fewer ICD models accrued sufficient utilization in practice to detect safety differences for rate ratios <1.15 and an average event rate of 12.6 events per 100 person-years. For average event rates of 3.9 and 12.6 events per 100 person-years, 30% and 50% of ICD models, respectively, accrued sufficient utilization for a rate ratio of 1.25, whereas 52% and 67% for a rate ratio of 1.50. Because actual ICD utilization was not uniformly distributed across ICD models, the proportion of individuals receiving any ICD that accrued sufficient utilization in practice was 0% to 21%, 32% to 70%, and 67% to 84% for rate ratios of 1.05, 1.15, and 1.25, respectively, for the range of 3 average adverse event rates. CONCLUSIONS: Small safety differences among ICD generator models are unlikely to be detected through routine surveillance given current ICD utilization in practice, but large safety differences can be detected for most patients at anticipated average adverse event rates.


Subjects
Databases, Factual/statistics & numerical data , Defibrillators, Implantable/statistics & numerical data , Product Surveillance, Postmarketing/statistics & numerical data , Prosthesis Failure , Registries/statistics & numerical data , Cardiac Surgical Procedures/instrumentation , Cardiac Surgical Procedures/statistics & numerical data , Data Interpretation, Statistical , Death, Sudden, Cardiac , Defibrillators, Implantable/adverse effects , Heart Failure/surgery , Humans , Primary Prevention , Product Surveillance, Postmarketing/methods , Prosthesis Implantation/instrumentation , Prosthesis Implantation/statistics & numerical data , Sample Size , United States
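The repeated sample-size estimates in this study can be approximated with a standard two-group calculation on the log rate-ratio scale: power a test of Var(log RR) ≈ 1/E1 + 1/E2, where E1 and E2 are the expected event counts in each device group. This is a generic textbook approximation with illustrative inputs, not the authors' exact procedure:

```python
import math

def person_years_per_group(base_rate, rate_ratio, alpha=0.05, power=0.80):
    """Person-years of utilization per device group needed so a two-sided
    test of the log rate ratio attains the target power.

    Assumes equal person-time per group, events ~ Poisson, and
    Var(log RR) ~ 1/E1 + 1/E2 with E1 = base_rate*T, E2 = base_rate*RR*T.
    """
    z_a, z_b = 1.959964, 0.841621  # normal quantiles for alpha=0.05, power=0.80
    # (1/E1 + 1/E2) factored as var_per_py / T
    var_per_py = (1 / base_rate) * (1 + 1 / rate_ratio)
    return (z_a + z_b) ** 2 * var_per_py / math.log(rate_ratio) ** 2

# e.g. average rate of 12.6 events per 100 person-years, target RR = 1.25
t = person_years_per_group(0.126, 1.25)
```

As the abstract's results illustrate, the required person-time grows steeply as the detectable rate ratio shrinks toward 1, which is why small safety differences exceed realistic utilization.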
9.
medRxiv ; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562897

ABSTRACT

Background: Risk stratification strategies for cancer therapeutics-related cardiac dysfunction (CTRCD) rely on serial monitoring by specialized imaging, limiting their scalability. Objectives: To examine an artificial intelligence (AI)-enhanced electrocardiographic (AI-ECG) surrogate for imaging risk biomarkers, and its association with CTRCD. Methods: Across a five-hospital U.S.-based health system (2013-2023), we identified patients with breast cancer or non-Hodgkin lymphoma (NHL) who received anthracyclines (AC) and/or trastuzumab (TZM), and a control cohort receiving immune checkpoint inhibitors (ICI). We deployed a validated AI model of left ventricular systolic dysfunction (LVSD) to ECG images (≥0.1, positive screen) and explored its association with i) global longitudinal strain (GLS) measured within 15 days (n=7,271 pairs); ii) future CTRCD (new cardiomyopathy, heart failure, or left ventricular ejection fraction [LVEF]<50%), and LVEF<40%. In the ICI cohort we correlated baseline AI-ECG-LVSD predictions with downstream myocarditis. Results: Higher AI-ECG LVSD predictions were associated with worse GLS (-18% [IQR:-20 to -17%] for predictions<0.1, to -12% [IQR:-15 to -9%] for ≥0.5 (p<0.001)). In 1,308 patients receiving AC/TZM (age 59 [IQR:49-67] years, 999 [76.4%] women, 80 [IQR:42-115] follow-up months) a positive baseline AI-ECG LVSD screen was associated with ~2-fold and ~4.8-fold increase in the incidence of the composite CTRCD endpoint (adj.HR 2.22 [95%CI:1.63-3.02]), and LVEF<40% (adj.HR 4.76 [95%CI:2.62-8.66]), respectively. Among 2,056 patients receiving ICI (age 65 [IQR:57-73] years, 913 [44.4%] women, follow-up 63 [IQR:28-99] months) AI-ECG predictions were not associated with ICI myocarditis (adj.HR 1.36 [95%CI:0.47-3.93]). Conclusion: AI applied to baseline ECG images can stratify the risk of CTRCD associated with anthracycline or trastuzumab exposure.

10.
Article in English | MEDLINE | ID: mdl-39221857

ABSTRACT

Background: Risk stratification strategies for cancer therapeutics-related cardiac dysfunction (CTRCD) rely on serial monitoring by specialized imaging, limiting their scalability. We aimed to examine an application of artificial intelligence (AI) to electrocardiographic (ECG) images as a surrogate for imaging risk biomarkers, and its association with early CTRCD. Methods: Across a U.S.-based health system (2013-2023), we identified 1,550 patients (age 60 [IQR:51-69] years, 1223 [78.9%] women) without cardiomyopathy who received anthracyclines and/or trastuzumab for breast cancer or non-Hodgkin lymphoma and had ECG performed ≤12 months before treatment. We deployed a validated AI model of left ventricular systolic dysfunction (LVSD) to baseline ECG images and defined low, intermediate, and high-risk groups based on AI-ECG LVSD probabilities of <0.01, 0.01 to 0.1, and ≥0.1 (positive screen), respectively. We explored the association with early CTRCD (new cardiomyopathy, heart failure, or left ventricular ejection fraction [LVEF]<50%), or LVEF<40%, up to 12 months post-treatment. In a mechanistic analysis, we assessed the association between global longitudinal strain (GLS) and AI-ECG LVSD probabilities in studies performed within 15 days of each other. Results: Among 1,550 patients without known cardiomyopathy (median follow-up: 14.1 [IQR:13.4-17.1] months), 83 (5.4%), 562 (36.3%) and 905 (58.4%) were classified as high, intermediate, and low risk by baseline AI-ECG. A high- vs low-risk AI-ECG screen (≥0.1 vs <0.01) was associated with a 3.4-fold and 13.5-fold higher incidence of CTRCD (adj.HR 3.35 [95%CI:2.25-4.99]) and LVEF<40% (adj.HR 13.52 [95%CI:5.06-36.10]), respectively. Post-hoc analyses supported longitudinal increases in AI-ECG probabilities within 6-to-12 months of a CTRCD event. 
Among 1,428 temporally-linked echocardiograms and ECGs, AI-ECG LVSD probabilities were associated with worse GLS (GLS -19% [IQR:-21 to -17%] for probabilities <0.1, to -15% [IQR:-15 to -9%] for ≥0.5 [p<0.001]). Conclusions: AI applied to baseline ECG images can stratify the risk of early CTRCD associated with anthracycline or trastuzumab exposure in the setting of breast cancer or non-Hodgkin lymphoma therapy.
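The three risk strata above are defined purely by thresholds on the AI-ECG LVSD probability, so the group assignment can be sketched directly from the cutoffs stated in the abstract (<0.01 low, 0.01 to 0.1 intermediate, ≥0.1 high / positive screen):

```python
def aiecg_risk_group(lvsd_probability):
    """Map an AI-ECG LVSD probability to the study's risk strata.

    Cutoffs are taken from the abstract; boundary handling (>= at each
    cutoff) is an assumption consistent with ">=0.1 (positive screen)".
    """
    if lvsd_probability >= 0.1:
        return "high"        # positive screen
    if lvsd_probability >= 0.01:
        return "intermediate"
    return "low"
```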

11.
medRxiv ; 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38559021

ABSTRACT

Background: Point-of-care ultrasonography (POCUS) enables cardiac imaging at the bedside and in communities but is limited by abbreviated protocols and variation in quality. We developed and tested artificial intelligence (AI) models to automate the detection of underdiagnosed cardiomyopathies from cardiac POCUS. Methods: In a development set of 290,245 transthoracic echocardiographic videos across the Yale-New Haven Health System (YNHHS), we used augmentation approaches and a customized loss function weighted for view quality to derive a POCUS-adapted, multi-label, video-based convolutional neural network (CNN) that discriminates HCM (hypertrophic cardiomyopathy) and ATTR-CM (transthyretin amyloid cardiomyopathy) from controls without known disease. We evaluated the final model across independent, internal and external, retrospective cohorts of individuals who underwent cardiac POCUS across YNHHS and Mount Sinai Health System (MSHS) emergency departments (EDs) (2011-2024) to prioritize key views and validate the diagnostic and prognostic performance of single-view screening protocols. Findings: We identified 33,127 patients (median age 61 [IQR: 45-75] years, n=17,276 [52·2%] female) at YNHHS and 5,624 (57 [IQR: 39-71] years, n=1,953 [34·7%] female) at MSHS with 78,054 and 13,796 eligible cardiac POCUS videos, respectively. An AI-enabled single-view screening approach successfully discriminated HCM (AUROC of 0·90 [YNHHS] & 0·89 [MSHS]) and ATTR-CM (AUROC of 0·92 [YNHHS] & 0·99 [MSHS]). In YNHHS, 40 (58·0%) HCM and 23 (47·9%) ATTR-CM cases had a positive screen at a median of 2·1 [IQR: 0·9-4·5] and 1·9 [IQR: 1·0-3·4] years before clinical diagnosis.
Moreover, among 24,448 participants without known cardiomyopathy followed over 2·2 [IQR: 1·1-5·8] years, AI-POCUS probabilities in the highest (vs lowest) quintile for HCM and ATTR-CM conferred a 15% (adj.HR 1·15 [95%CI: 1·02-1·29]) and 39% (adj.HR 1·39 [95%CI: 1·22-1·59]) higher age- and sex-adjusted mortality risk, respectively. Interpretation: We developed and validated an AI framework that enables scalable, opportunistic screening of treatable cardiomyopathies wherever POCUS is used. Funding: National Heart, Lung and Blood Institute, Doris Duke Charitable Foundation, BridgeBio.

12.
medRxiv ; 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39252891

ABSTRACT

Background and Aims: Diagnosing transthyretin amyloid cardiomyopathy (ATTR-CM) requires advanced imaging, precluding large-scale testing for pre-clinical disease. We examined the application of artificial intelligence (AI) to echocardiography (TTE) and electrocardiography (ECG) as a scalable strategy to quantify pre-clinical trends in ATTR-CM. Methods: Across age/sex-matched case-control datasets in the Yale-New Haven Health System (YNHHS) we trained deep learning models to identify ATTR-CM-specific signatures on TTE videos and ECG images (area under the curve of 0.93 and 0.91, respectively). We deployed these across all studies of individuals referred for cardiac nuclear amyloid imaging in an independent population at YNHHS and an external population from the Houston Methodist Hospitals (HMH) to define longitudinal trends in AI-defined probabilities for ATTR-CM using age/sex-adjusted linear mixed models, and describe discrimination metrics during the early pre-clinical stage. Results: Among 984 participants referred for cardiac nuclear amyloid imaging at YNHHS (median age 74 years, 44.3% female) and 806 at HMH (69 years, 34.5% female), 112 (11.4%) and 174 (21.6%) tested positive for ATTR-CM, respectively. Across both cohorts and modalities, AI-defined ATTR-CM probabilities derived from 7,423 TTEs and 32,205 ECGs showed significantly faster progression rates in the years before clinical diagnosis in cases versus controls (p for time × group interaction ≤0.004). In the one-to-three-year window before cardiac nuclear amyloid imaging, sensitivity/specificity metrics were estimated at 86.2%/44.2% [YNHHS] vs 65.7%/65.5% [HMH] for AI-Echo, and 89.8%/40.6% [YNHHS] vs 88.5%/35.1% [HMH] for AI-ECG. Conclusions: We demonstrate that AI tools for echocardiographic videos and ECG images can enable scalable identification of pre-clinical ATTR-CM, flagging individuals who may benefit from risk-modifying therapies.

13.
medRxiv ; 2024 Feb 18.
Article in English | MEDLINE | ID: mdl-38405776

ABSTRACT

Timely and accurate assessment of electrocardiograms (ECGs) is crucial for diagnosing, triaging, and clinically managing patients. Current workflows rely on a computerized ECG interpretation using rule-based tools built into the ECG signal acquisition systems with limited accuracy and flexibility. In low-resource settings, specialists must review every single ECG for such decisions, as these computerized interpretations are not available. Additionally, high-quality interpretations are even more essential in such low-resource settings as there is a higher burden of accuracy for automated reads when access to experts is limited. Artificial Intelligence (AI)-based systems have the prospect of greater accuracy yet are frequently limited to a narrow range of conditions and do not replicate the full diagnostic range. Moreover, these models often require raw signal data, which are unavailable to physicians and necessitate costly technical integrations that are currently limited. To overcome these challenges, we developed and validated a format-independent vision encoder-decoder model - ECG-GPT - that can generate free-text, expert-level diagnosis statements directly from ECG images. The model shows robust performance, validated on 2.6 million ECGs across 6 geographically distinct health settings: (1) 2 large and diverse US health systems- Yale-New Haven and Mount Sinai Health Systems, (2) a consecutive ECG dataset from a central ECG repository from Minas Gerais, Brazil, (3) the prospective cohort study, UK Biobank, (4) a Germany-based, publicly available repository, PTB-XL, and (5) a community hospital in Missouri. The model demonstrated consistently high performance (AUROC≥0.81) across a wide range of rhythm and conduction disorders. 
This can be easily accessed via a web-based application capable of receiving ECG images and represents a scalable and accessible strategy for generating accurate, expert-level reports from images of ECGs, enabling accurate triage of patients globally, especially in low-resource settings.

14.
medRxiv ; 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-37808685

ABSTRACT

Importance: Aortic stenosis (AS) is a major public health challenge with a growing therapeutic landscape, but current biomarkers do not inform personalized screening and follow-up. Objective: A video-based artificial intelligence (AI) biomarker (Digital AS Severity index [DASSi]) can detect severe AS using single-view long-axis echocardiography without Doppler. Here, we deploy DASSi to patients with no or mild/moderate AS at baseline to identify AS development and progression. Design, Setting, and Participants: We defined two cohorts of patients without severe AS undergoing echocardiography in the Yale-New Haven Health System (YNHHS) (2015-2021, 4.1 [IQR:2.4-5.4] follow-up years) and Cedars-Sinai Medical Center (CSMC) (2018-2019, 3.4 [IQR:2.8-3.9] follow-up years). We further developed a novel computational pipeline for the cross-modality translation of DASSi into cardiac magnetic resonance (CMR) imaging in the UK Biobank (2.5 [IQR:1.6-3.9] follow-up years). Analyses were performed between August 2023 and February 2024. Exposure: DASSi (range: 0-1) derived from AI applied to echocardiography and CMR videos. Main Outcomes and Measures: Annualized change in peak aortic valve velocity (AV-Vmax) and late (>6 months) aortic valve replacement (AVR). Results: A total of 12,599 participants were included in the echocardiographic study (YNHHS: n=8,798, median age of 71 [IQR (interquartile range):60-80] years, 4250 [48.3%] women, and CSMC: n=3,801, 67 [IQR:54-78] years, 1685 [44.3%] women). Higher baseline DASSi was associated with faster progression in AV-Vmax (per 0.1 DASSi increments: YNHHS: +0.033 m/s/year [95%CI:0.028-0.038], n=5,483, and CSMC: +0.082 m/s/year [0.053-0.111], n=1,292), with levels ≥ vs <0.2 linked to a 4-to-5-fold higher AVR risk (715 events in YNHHS; adj.HR 4.97 [95%CI: 2.71-5.82], 56 events in CSMC: 4.04 [0.92-17.7]), independent of age, sex, ethnicity/race, ejection fraction and AV-Vmax.
This was reproduced across 45,474 participants (median age 65 [IQR:59-71] years, 23,559 [51.8%] women) undergoing CMR in the UK Biobank (adj.HR 11.4 [95%CI:2.56-50.60] for DASSi ≥vs<0.2). Saliency maps and phenome-wide association studies supported links with traditional cardiovascular risk factors and diastolic dysfunction. Conclusions and Relevance: In this cohort study of patients without severe AS undergoing echocardiography or CMR imaging, a new AI-based video biomarker is independently associated with AS development and progression, enabling opportunistic risk stratification across cardiovascular imaging modalities as well as potential application on handheld devices.
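The reported coefficient (+0.033 m/s per year per 0.1 DASSi increment in YNHHS) implies a simple linear projection of AV-Vmax over time. The sketch below uses that published slope, but the projection itself is an illustrative extrapolation, not a validated calculator:

```python
def projected_av_vmax(baseline_vmax, dassi, years, slope_per_0p1=0.033):
    """Project peak aortic valve velocity (m/s) forward by `years`.

    Assumes progression scales linearly with baseline DASSi at the
    YNHHS-reported rate of +0.033 m/s/year per 0.1 DASSi increment.
    Illustrative only: the study models annualized change, not
    individual-level forecasts.
    """
    annual_change = slope_per_0p1 * (dassi / 0.1)
    return baseline_vmax + annual_change * years

# Hypothetical patient: baseline AV-Vmax 2.5 m/s, DASSi 0.3, five years out
v = projected_av_vmax(2.5, 0.3, 5)
```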

15.
JAMA Cardiol ; 9(6): 534-544, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38581644

ABSTRACT

Importance: Aortic stenosis (AS) is a major public health challenge with a growing therapeutic landscape, but current biomarkers do not inform personalized screening and follow-up. A video-based artificial intelligence (AI) biomarker (Digital AS Severity index [DASSi]) can detect severe AS using single-view long-axis echocardiography without Doppler characterization. Objective: To deploy DASSi to patients with no AS or with mild or moderate AS at baseline to identify AS development and progression. Design, Setting, and Participants: This is a cohort study that examined 2 cohorts of patients without severe AS undergoing echocardiography in the Yale New Haven Health System (YNHHS; 2015-2021) and Cedars-Sinai Medical Center (CSMC; 2018-2019). A novel computational pipeline for the cross-modal translation of DASSi into cardiac magnetic resonance (CMR) imaging was further developed in the UK Biobank. Analyses were performed between August 2023 and February 2024. Exposure: DASSi (range, 0-1) derived from AI applied to echocardiography and CMR videos. Main Outcomes and Measures: Annualized change in peak aortic valve velocity (AV-Vmax) and late (>6 months) aortic valve replacement (AVR). Results: A total of 12 599 participants were included in the echocardiographic study (YNHHS: n = 8798; median [IQR] age, 71 [60-80] years; 4250 [48.3%] women; median [IQR] follow-up, 4.1 [2.4-5.4] years; and CSMC: n = 3801; median [IQR] age, 67 [54-78] years; 1685 [44.3%] women; median [IQR] follow-up, 3.4 [2.8-3.9] years). 
Higher baseline DASSi was associated with faster progression in AV-Vmax (per 0.1 DASSi increment: YNHHS, 0.033 m/s per year [95% CI, 0.028-0.038] among 5483 participants; CSMC, 0.082 m/s per year [95% CI, 0.053-0.111] among 1292 participants), with values of 0.2 or greater associated with a 4- to 5-fold higher AVR risk than values less than 0.2 (YNHHS: 715 events; adjusted hazard ratio [HR], 4.97 [95% CI, 2.71-5.82]; CSMC: 56 events; adjusted HR, 4.04 [95% CI, 0.92-17.70]), independent of age, sex, race, ethnicity, ejection fraction, and AV-Vmax. This was reproduced across 45 474 participants (median [IQR] age, 65 [59-71] years; 23 559 [51.8%] women; median [IQR] follow-up, 2.5 [1.6-3.9] years) undergoing CMR imaging in the UK Biobank (for participants with DASSi ≥0.2 vs those with DASSi <0.2, adjusted HR, 11.38 [95% CI, 2.56-50.57]). Saliency maps and phenome-wide association studies supported associations with cardiac structure and function and traditional cardiovascular risk factors. Conclusions and Relevance: In this cohort study of patients without severe AS undergoing echocardiography or CMR imaging, a new AI-based video biomarker was independently associated with AS development and progression, enabling opportunistic risk stratification across cardiovascular imaging modalities as well as potential application on handheld devices.
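The reported effect size lends itself to a quick back-of-envelope projection. The sketch below is illustrative only: the function name and the linear extrapolation are my own simplifications, not the study's statistical model, and it applies the YNHHS point estimate of +0.033 m/s per year per 0.1-unit DASSi increment.

```python
def projected_vmax(baseline_vmax, dassi, years, slope_per_increment=0.033):
    """Back-of-envelope linear projection of peak aortic valve velocity (m/s).

    Hypothetical helper using the reported YNHHS association
    (+0.033 m/s/year per 0.1 DASSi increment); illustrative only,
    not the study's model, which adjusts for age, sex, and other covariates.
    """
    annual_change = slope_per_increment * dassi / 0.1  # m/s per year
    return baseline_vmax + annual_change * years


# Example: a patient with AV-Vmax 2.0 m/s and DASSi 0.3, projected 5 years out
print(projected_vmax(2.0, 0.3, 5))
```

A patient at the DASSi threshold of 0.2 would, under this crude extrapolation, progress roughly twice as fast as one at 0.1, which is the intuition behind the threshold-based AVR risk comparison above.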


Subjects
Aortic Valve Stenosis , Artificial Intelligence , Disease Progression , Echocardiography , Severity of Illness Index , Humans , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/surgery , Aortic Valve Stenosis/physiopathology , Female , Male , Aged , Echocardiography/methods , Middle Aged , Biomarkers , Aged, 80 and over , Cohort Studies , Video Recording , Multimodal Imaging/methods , Magnetic Resonance Imaging/methods
16.
Am J Med ; 2024 May 10.
Article in English | MEDLINE | ID: mdl-38735354

ABSTRACT

BACKGROUND: Individuals with long COVID lack evidence-based treatments and have difficulty participating in traditional site-based trials. Our digital, decentralized trial investigates the efficacy and safety of nirmatrelvir/ritonavir, targeting viral persistence as a potential cause of long COVID. METHODS: The PAX LC trial (NCT05668091) is a Phase 2, 1:1 randomized, double-blind, superiority, placebo-controlled trial in 100 community-dwelling, highly symptomatic adult participants with long COVID residing in the 48 contiguous US states to determine the efficacy, safety, and tolerability of 15 days of nirmatrelvir/ritonavir compared with placebo/ritonavir. Participants are recruited via patient groups, cultural ambassadors, and social media platforms. Medical records are reviewed through a platform facilitating participant-mediated data acquisition from electronic health records nationwide. During drug treatment, participants complete daily digital diaries using a web-based application. Blood draws for eligibility and safety assessments are conducted at or near participants' homes. The study drug is shipped directly to participants' homes. The primary endpoint is the PROMIS-29 Physical Health Summary Score difference between baseline and Day 28, evaluated by a mixed model repeated measures analysis. Secondary endpoints include PROMIS-29 (Mental Health Summary Score and all items), Modified GSQ-30 with supplemental symptoms questionnaire, COVID Core Outcome Measures for Recovery, EQ-5D-5L (Utility Score and all items), PGIS 1 and 2, PGIC 1 and 2, and healthcare utilization. The trial incorporates immunophenotyping to identify long COVID biomarkers and treatment responders. CONCLUSION: The PAX LC trial uses a novel decentralized design and a participant-centric approach to test a 15-day regimen of nirmatrelvir/ritonavir for long COVID.
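The trial's primary analysis is a mixed model for repeated measures, which needs a full statistical package. As a much simpler illustration of the estimand only, the sketch below (hypothetical helper, complete-case data, no covariate adjustment) compares the mean baseline-to-Day-28 change in the PROMIS-29 Physical Health Summary Score between arms.

```python
def mean_change_difference(treatment, placebo):
    """Between-arm difference in mean baseline-to-Day-28 change.

    Each argument is a list of (baseline, day28) PROMIS-29 Physical Health
    Summary Scores. Complete-case illustration only: the trial's primary
    analysis is a mixed model for repeated measures, which also handles
    missing visits and covariates.
    """
    def mean_change(pairs):
        return sum(day28 - baseline for baseline, day28 in pairs) / len(pairs)

    return mean_change(treatment) - mean_change(placebo)


# Toy data: treatment arm improves by 4 points on average, placebo by 1
print(mean_change_difference([(40, 46), (42, 44)], [(41, 42), (39, 40)]))
```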

17.
PLoS One ; 18(9): e0291572, 2023.
Article in English | MEDLINE | ID: mdl-37713393

ABSTRACT

OBJECTIVE: We aimed to discover computationally derived phenotypes of opioid-related patient presentations to the ED via clinical notes and structured electronic health record (EHR) data. METHODS: This was a retrospective study of ED visits from 2013-2020 across ten sites within a regional healthcare network. We derived phenotypes from visits for patients ≥18 years of age with at least one prior or current documentation of an opioid-related diagnosis. Natural language processing was used to extract clinical entities from notes, which were combined with structured data within the EHR to create a set of features. We performed latent Dirichlet allocation to identify topics within these features. Groups of patient presentations with similar attributes were identified by cluster analysis. RESULTS: In total, 82,577 ED visits met inclusion criteria. Thirty topics were discovered, ranging from those related to substance use disorder, chronic conditions, mental health, and medical management. Clustering on these topics identified nine unique cohorts with one-year survival ranging from 84.2-96.8%, rates of one-year ED returns from 9-34%, rates of one-year opioid-related events from 10-17%, rates of medications for opioid use disorder from 17-43%, and median Charlson comorbidity indices of 2-8. Two broad groups of phenotypes were identified, related to chronic substance use disorder or acute overdose. CONCLUSIONS: Our results indicate distinct phenotypic clusters with varying patient-oriented outcomes, which provide future targets for better allocation of resources and therapeutics. This highlights the heterogeneity of the overall population and the need to develop targeted interventions for each subpopulation.
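The clustering step described above, grouping visits by their per-visit topic mixtures, can be illustrated with a minimal k-means sketch. This is a generic stand-in for the study's (unspecified) cluster analysis, and the deterministic seeding from the first k points is a simplification of standard random initialization.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means over topic-proportion vectors (lists of floats).

    Generic illustration, not the study's pipeline. Centres are seeded
    deterministically with the first k points for reproducibility.
    """
    centers = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre
        # (squared Euclidean distance).
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: each centre moves to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = [sum(vals) / len(members) for vals in zip(*members)]
    return labels, centers


# Toy example: visits dominated by topic 0 vs topic 1 separate into 2 clusters
visits = [[0.9, 0.05], [0.1, 0.9], [0.92, 0.06],
          [0.12, 0.88], [0.88, 0.04], [0.08, 0.92]]
labels, _ = kmeans(visits, 2)
print(labels)
```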


Subjects
Analgesics, Opioid , Opioid-Related Disorders , Humans , Analgesics, Opioid/adverse effects , Retrospective Studies , Emergency Service, Hospital , Phenotype
18.
NPJ Digit Med ; 6(1): 124, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37433874

ABSTRACT

Artificial intelligence (AI) can detect left ventricular systolic dysfunction (LVSD) from electrocardiograms (ECGs). Wearable devices could allow for broad AI-based screening but frequently obtain noisy ECGs. We report a novel strategy that automates the detection of hidden cardiovascular diseases, such as LVSD, adapted for noisy single-lead ECGs obtained on wearable and portable devices. We use 385,601 ECGs for development of a standard and a noise-adapted model. For the noise-adapted model, ECGs are augmented during training with random Gaussian noise within four distinct frequency ranges, each emulating real-world noise sources. Both models perform comparably on standard ECGs, with an AUROC of 0.90. The noise-adapted model performs significantly better on the same test set augmented with four distinct real-world noise recordings at multiple signal-to-noise ratios (SNRs), including noise isolated from a portable ECG device. The standard and noise-adapted models have AUROCs of 0.72 and 0.87, respectively, when evaluated on ECGs augmented with portable ECG device noise at an SNR of 0.5. This approach represents a novel strategy for the development of wearable-adapted tools from clinical ECG repositories.
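The core augmentation idea, adding Gaussian noise scaled to a target SNR, can be sketched as below. This broadband version omits the band-limiting to four frequency ranges described in the abstract (which would require filtering), and the SNR convention (signal power divided by noise power) is an assumption.

```python
import math
import random


def add_gaussian_noise(signal, snr, seed=None):
    """Augment a 1-D ECG signal with Gaussian noise at a target SNR.

    SNR is taken as signal power / noise power, so the noise standard
    deviation is sqrt(signal_power / snr). Broadband sketch only: the
    paper's augmentation restricts noise to four frequency bands.
    """
    rng = random.Random(seed)
    n = len(signal)
    signal_power = sum(x * x for x in signal) / n
    noise_sd = math.sqrt(signal_power / snr)
    return [x + rng.gauss(0.0, noise_sd) for x in signal]


# Example: degrade a synthetic 1 Hz-like waveform to SNR = 0.5,
# the hardest setting reported above
clean = [math.sin(2 * math.pi * i / 100.0) for i in range(1000)]
noisy = add_gaussian_noise(clean, snr=0.5, seed=0)
```

At SNR 0.5 the injected noise carries twice the power of the signal itself, which is the regime where the standard model's AUROC drops to 0.72 while the noise-adapted model retains 0.87.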

19.
medRxiv ; 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37745445

ABSTRACT

Background: The lack of automated tools for measuring care quality has limited the implementation of a national program to assess and improve guideline-directed care in heart failure with reduced ejection fraction (HFrEF). A key challenge for constructing such a tool has been an accurate, accessible approach for identifying patients with HFrEF at hospital discharge, an opportunity to evaluate and improve the quality of care. Methods: We developed a novel deep learning-based language model for identifying patients with HFrEF from discharge summaries using a semi-supervised learning framework. For this purpose, hospitalizations with heart failure at Yale New Haven Hospital (YNHH) between 2015 and 2019 were labeled as HFrEF if the left ventricular ejection fraction was under 40% on antecedent echocardiography. The model was internally validated with model-based net reclassification improvement (NRI) assessed against chart-based diagnosis codes. We externally validated the model on discharge summaries from hospitalizations with heart failure at Northwestern Medicine, community hospitals of Yale New Haven Health in Connecticut and Rhode Island, and the publicly accessible MIMIC-III database, confirmed with chart abstraction. Results: A total of 13,251 notes from 5,392 unique individuals (mean age 73 ± 14 years, 48% female), including 2,487 patients with HFrEF (46.1%), were used for model development (train/held-out test split: 70%/30%). The deep learning model achieved an area under the receiver operating characteristic curve (AUROC) of 0.97 and an area under the precision-recall curve (AUPRC) of 0.97 in detecting HFrEF on the held-out set. In external validation, the model had high performance in identifying HFrEF from discharge summaries, with an AUROC of 0.94 and AUPRC of 0.91 on 19,242 notes from Northwestern Medicine, AUROC of 0.95 and AUPRC of 0.96 on 139 manually abstracted notes from Yale community hospitals, and AUROC of 0.91 and AUPRC of 0.92 on 146 manually reviewed notes from MIMIC-III.
Model-based prediction of HFrEF corresponded to an overall NRI of 60.2 ± 1.9% compared with the chart diagnosis codes (p < 0.001) and an increase in AUROC from 0.61 [95% CI: 0.60-0.63] to 0.91 [95% CI: 0.90-0.92]. Conclusions: We developed and externally validated a deep learning language model that automatically identifies HFrEF from clinical notes with high precision and accuracy, representing a key element in automating quality assessment and improvement for individuals with HFrEF.
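The categorical net reclassification improvement used to compare the model against chart diagnosis codes can be computed from paired binary classifications. A minimal sketch of the standard NRI formula follows (generic textbook definition, not the study's code).

```python
def net_reclassification_improvement(old_pred, new_pred, outcome):
    """Categorical NRI for two binary classifiers against a binary truth.

    NRI = [P(up | event) - P(down | event)]
        + [P(down | non-event) - P(up | non-event)],
    where "up" means the new classifier upgraded the call relative to
    the old one and "down" the reverse. Generic formula, not the
    study's implementation.
    """
    up_e = down_e = up_ne = down_ne = n_e = n_ne = 0
    for old, new, y in zip(old_pred, new_pred, outcome):
        if y:  # true HFrEF case (event)
            n_e += 1
            up_e += new > old
            down_e += new < old
        else:  # non-event
            n_ne += 1
            up_ne += new > old
            down_ne += new < old
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne


# Toy example: the new classifier catches one missed event and drops
# one false positive among 4 events and 4 non-events -> NRI = 0.5
old = [0, 0, 1, 1, 1, 1, 0, 0]
new = [1, 0, 1, 1, 0, 1, 0, 0]
y = [1, 1, 1, 1, 0, 0, 0, 0]
print(net_reclassification_improvement(old, new, y))
```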

20.
Med ; 3(5): 325-334.e4, 2022 05 13.
Article in English | MEDLINE | ID: mdl-35399324

ABSTRACT

Background: The SARS-CoV-2 Omicron variant became a global concern due to its rapid spread and displacement of the dominant Delta variant. We hypothesized that part of Omicron's rapid rise was based on its increased ability, compared with Delta, to cause infections in vaccinated persons. Methods: We analyzed nasal swab PCR tests for samples collected between December 12 and 16, 2021, in Connecticut, when the proportions of the Delta and Omicron variants were relatively equal. We used spike gene target failure (SGTF) to classify probable Delta and Omicron infections. We fitted an exponential curve to the estimated infections to determine the doubling time for each variant. We compared the test positivity rates for each variant by vaccination status, number of doses, and vaccine manufacturer. Generalized linear models were used to assess factors associated with the odds of infection with each variant among persons testing positive for SARS-CoV-2. Findings: For infections with high virus copies (Ct < 30) among vaccinated persons, we found higher odds that they were infected with Omicron than with Delta, and the odds increased with the number of vaccine doses. Compared with unvaccinated persons, we found significant reductions in Delta positivity rates after two (43.4%-49.1%) and three vaccine doses (81.1%), whereas we found a significant reduction in Omicron positivity rates only after three doses (62.3%). Conclusion: The rapid rise in Omicron infections was likely driven by Omicron's escape from vaccine-induced immunity. Funding: This work was supported by the Centers for Disease Control and Prevention (CDC).
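Fitting an exponential curve and reporting a doubling time reduces to a log-linear regression: if infections grow as N(t) = N0 * exp(r t), then the doubling time is ln 2 / r, with r the slope of log counts against time. A minimal sketch (illustrative only, assuming strictly positive daily infection estimates and no weighting):

```python
import math


def doubling_time(days, counts):
    """Estimate doubling time via least-squares fit of log(counts) vs time.

    Simple stand-in for an exponential curve fit; assumes counts > 0.
    Returns the doubling time in the same units as `days`.
    """
    logs = [math.log(c) for c in counts]
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(logs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, logs))
             / sum((x - mean_x) ** 2 for x in days))
    return math.log(2) / slope


# Example: counts that exactly double every 3 days recover a 3-day estimate
days = [0, 1, 2, 3, 4]
counts = [100 * 2 ** (t / 3) for t in days]
print(doubling_time(days, counts))
```

A shorter doubling time for SGTF-classified (probable Omicron) infections than for Delta over the same window is what "rapid rise" quantifies here.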


Subjects
COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , COVID-19 Vaccines , Hospitalization , Humans , SARS-CoV-2/genetics