Results 1 - 20 of 32
1.
Nature ; 595(7866): 283-288, 2021 07.
Article in English | MEDLINE | ID: mdl-34010947

ABSTRACT

COVID-19 manifests with a wide spectrum of clinical phenotypes that are characterized by exaggerated and misdirected host immune responses1-6. Although pathological innate immune activation is well-documented in severe disease1, the effect of autoantibodies on disease progression is less well-defined. Here we use a high-throughput autoantibody discovery technique known as rapid extracellular antigen profiling7 to screen a cohort of 194 individuals infected with SARS-CoV-2, comprising 172 patients with COVID-19 and 22 healthcare workers with mild disease or asymptomatic infection, for autoantibodies against 2,770 extracellular and secreted proteins (members of the exoproteome). We found that patients with COVID-19 exhibit marked increases in autoantibody reactivities as compared to uninfected individuals, and show a high prevalence of autoantibodies against immunomodulatory proteins (including cytokines, chemokines, complement components and cell-surface proteins). We established that these autoantibodies perturb immune function and impair virological control by inhibiting immunoreceptor signalling and by altering peripheral immune cell composition, and found that mouse surrogates of these autoantibodies increase disease severity in a mouse model of SARS-CoV-2 infection. Our analysis of autoantibodies against tissue-associated antigens revealed associations with specific clinical characteristics. Our findings suggest a pathological role for exoproteome-directed autoantibodies in COVID-19, with diverse effects on immune functionality and associations with clinical outcomes.


Subject(s)
Autoantibodies/analysis , Autoantibodies/immunology , COVID-19/immunology , COVID-19/metabolism , Proteome/immunology , Proteome/metabolism , Animals , Antigens, Surface/immunology , COVID-19/pathology , COVID-19/physiopathology , Case-Control Studies , Complement System Proteins/immunology , Cytokines/immunology , Disease Models, Animal , Disease Progression , Female , Humans , Male , Mice , Organ Specificity/immunology
2.
Hum Genomics ; 17(1): 80, 2023 08 29.
Article in English | MEDLINE | ID: mdl-37641126

ABSTRACT

Over the last century, outbreaks and pandemics have occurred with disturbing regularity, necessitating advance preparation and a large-scale, coordinated response. Here, we developed a machine learning predictive model of disease severity and length of hospitalization for COVID-19, which can be utilized as a platform for future unknown viral outbreaks. We combined untargeted plasma metabolomics data obtained from COVID-19 patients (n = 111) during hospitalization and healthy controls (n = 342) with clinical and comorbidity data (n = 508) to build this patient triage platform, which consists of three parts: (i) a clinical decision tree, which showed, among other biomarkers, that increased eosinophil counts are associated with worse disease prognosis and can serve as a new potential biomarker with high accuracy (AUC = 0.974); (ii) estimation of patient hospitalization length with ± 5 days error (R2 = 0.9765); and (iii) prediction of disease severity and the need for patient transfer to the intensive care unit. We report a significant decrease in serotonin levels in patients who needed positive airway pressure oxygen and/or were intubated. Furthermore, 5-hydroxytryptophan, allantoin, and glucuronic acid metabolites were increased in COVID-19 patients and collectively can serve as biomarkers to predict disease progression. The ability to quickly identify which patients will develop life-threatening illness would allow the efficient allocation of medical resources and implementation of the most effective medical interventions. We advocate that the same approach could be utilized in future viral outbreaks to help hospitals triage patients more effectively and improve patient outcomes while optimizing healthcare resources.
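The platform's headline biomarker result is an AUC. As a minimal sketch (with hypothetical eosinophil values, not the study's data), the empirical AUROC for a single biomarker can be computed from ranks alone via the Mann-Whitney statistic:

```python
def auroc(positives, negatives):
    """Empirical AUROC via the Mann-Whitney statistic: the probability that
    a randomly chosen positive case outranks a randomly chosen negative."""
    wins = sum((p > n) + 0.5 * (p == n) for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Hypothetical eosinophil counts (cells/uL); illustrative only.
severe = [610, 540, 720, 480, 650]            # patients with severe disease
non_severe = [220, 310, 180, 260, 400, 290]   # milder course
print(auroc(severe, non_severe))  # → 1.0 here, since the groups do not overlap
```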


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Triage , Allantoin , Disease Outbreaks , Machine Learning
3.
Eur Heart J ; 44(43): 4592-4604, 2023 11 14.
Article in English | MEDLINE | ID: mdl-37611002

ABSTRACT

BACKGROUND AND AIMS: Early diagnosis of aortic stenosis (AS) is critical to prevent morbidity and mortality but requires skilled examination with Doppler imaging. This study reports the development and validation of a novel deep learning model that relies on two-dimensional (2D) parasternal long axis videos from transthoracic echocardiography without Doppler imaging to identify severe AS, suitable for point-of-care ultrasonography. METHODS AND RESULTS: In a training set of 5257 studies (17 570 videos) from 2016 to 2020 [Yale-New Haven Hospital (YNHH), Connecticut], an ensemble of three-dimensional convolutional neural networks was developed to detect severe AS, leveraging self-supervised contrastive pretraining for label-efficient model development. This deep learning model was validated in a temporally distinct set of 2040 consecutive studies from 2021 from YNHH as well as two geographically distinct cohorts of 4226 and 3072 studies, from California and other hospitals in New England, respectively. The deep learning model achieved an area under the receiver operating characteristic curve (AUROC) of 0.978 (95% CI: 0.966, 0.988) for detecting severe AS in the temporally distinct test set, maintaining its diagnostic performance in geographically distinct cohorts [0.952 AUROC (95% CI: 0.941, 0.963) in California and 0.942 AUROC (95% CI: 0.909, 0.966) in New England]. The model was interpretable with saliency maps identifying the aortic valve, mitral annulus, and left atrium as the predictive regions. Among non-severe AS cases, predicted probabilities were associated with worse quantitative metrics of AS suggesting an association with various stages of AS severity. CONCLUSION: This study developed and externally validated an automated approach for severe AS detection using single-view 2D echocardiography, with potential utility for point-of-care screening.
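The abstract reports AUROCs with 95% confidence intervals. One common way to obtain such an interval is a percentile bootstrap over the test set; the paper's exact CI method is not stated here, so the following is an illustrative sketch on hypothetical scores and labels:

```python
import random

def auroc(scores, labels):
    # empirical AUROC: probability a random positive outranks a random negative
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auroc_ci(scores, labels, n_boot=1000, seed=0):
    # percentile 95% CI: resample (score, label) pairs with replacement
    rng = random.Random(seed)
    pairs = list(zip(scores, labels))
    stats = []
    while len(stats) < n_boot:
        sample = [rng.choice(pairs) for _ in pairs]
        s, y = zip(*sample)
        if any(y) and not all(y):  # the resample must contain both classes
            stats.append(auroc(s, y))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot) - 1]

# Hypothetical model probabilities and severe-AS labels; illustrative only.
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2, 0.65, 0.35]
labels = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
lo, hi = bootstrap_auroc_ci(scores, labels)
print(round(auroc(scores, labels), 2), (lo, hi))
```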


Subject(s)
Aortic Valve Stenosis , Deep Learning , Humans , Echocardiography , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/complications , Aortic Valve/diagnostic imaging , Ultrasonography
4.
J Infect Dis ; 227(5): 663-674, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36408616

ABSTRACT

BACKGROUND: The impact variant-specific immune evasion and waning protection have on declining coronavirus disease 2019 (COVID-19) vaccine effectiveness (VE) remains unclear. Using whole-genome sequencing (WGS), we examined the contribution these factors had on the decline that followed the introduction of the Delta variant. Furthermore, we evaluated calendar-period-based classification as a WGS alternative. METHODS: We conducted a test-negative case-control study among people tested for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) between 1 April and 24 August 2021. Variants were classified using WGS and calendar period. RESULTS: We included 2029 cases (positive, sequenced samples) and 343 727 controls (negative tests). VE 14-89 days after second dose was significantly higher against Alpha (84.4%; 95% confidence interval [CI], 75.6%-90.0%) than Delta infection (68.9%; 95% CI, 58.0%-77.1%). The odds of Delta infection were significantly higher 90-149 than 14-89 days after second dose (P value = .003). Calendar-period-classified VE estimates approximated WGS-classified estimates; however, calendar-period-based classification was subject to misclassification (35% Alpha, 4% Delta). CONCLUSIONS: Both waning protection and variant-specific immune evasion contributed to the lower effectiveness. While calendar-period-classified VE estimates mirrored WGS-classified estimates, our analysis highlights the need for WGS when variants are cocirculating and misclassification is likely.
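In a test-negative design like this one, vaccine effectiveness is typically estimated as VE = (1 − OR) × 100%, with the odds ratio taken from a (usually covariate-adjusted) logistic model. A bare-bones unadjusted sketch with hypothetical counts:

```python
def vaccine_effectiveness(vax_cases, unvax_cases, vax_controls, unvax_controls):
    """VE = (1 - OR) * 100%, with the odds ratio from the 2x2 table of
    vaccination status among test-positive cases and test-negative controls."""
    odds_ratio = (vax_cases * unvax_controls) / (unvax_cases * vax_controls)
    return (1.0 - odds_ratio) * 100.0

# Hypothetical counts, NOT the study's data: OR = 0.25, so VE = 75%.
print(vaccine_effectiveness(100, 200, 1000, 500))  # → 75.0
```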


Asunto(s)
COVID-19 , Hepatitis D , Humanos , Vacunas contra la COVID-19 , Estudios de Casos y Controles , Evasión Inmune , SARS-CoV-2 , Eficacia de las Vacunas
5.
PLoS Med ; 19(12): e1004136, 2022 12.
Article in English | MEDLINE | ID: mdl-36454733

ABSTRACT

BACKGROUND: The benefit of primary and booster vaccination in people who experienced a prior Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection remains unclear. The objective of this study was to estimate the effectiveness of primary (two-dose series) and booster (third dose) mRNA vaccination against Omicron (lineage BA.1) infection among people with a prior documented infection. METHODS AND FINDINGS: We conducted a test-negative case-control study of reverse transcription PCRs (RT-PCRs) analyzed with the TaqPath (Thermo Fisher Scientific) assay and recorded in the Yale New Haven Health system from November 1, 2021, to April 30, 2022. Overall, 11,307 cases (positive TaqPath analyzed RT-PCRs with S-gene target failure [SGTF]) and 130,041 controls (negative TaqPath analyzed RT-PCRs) were included (median age: cases: 35 years, controls: 39 years). Among cases and controls, 5.9% and 8.1% had a documented prior infection (positive SARS-CoV-2 test record ≥90 days prior to the included test), respectively. We estimated the effectiveness of primary and booster vaccination relative to SGTF-defined Omicron (lineage BA.1) variant infection using a logistic regression adjusted for date of test, age, sex, race/ethnicity, insurance, comorbidities, social vulnerability index, municipality, and healthcare utilization. The effectiveness of primary vaccination 14 to 149 days after the second dose was 41.0% (95% confidence interval (CI): 14.1% to 59.4%, p = 0.006) and 27.1% (95% CI: 18.7% to 34.6%, p < 0.001) for people with and without a documented prior infection, respectively. The effectiveness of booster vaccination (≥14 days after booster dose) was 47.1% (95% CI: 22.4% to 63.9%, p = 0.001) and 54.1% (95% CI: 49.2% to 58.4%, p < 0.001) in people with and without a documented prior infection, respectively. To test whether booster vaccination reduced the risk of infection beyond that of the primary series, we compared the odds of infection among boosted (≥14 days after booster dose) and booster-eligible people (≥150 days after second dose). The odds ratio (OR) comparing boosted and booster-eligible people with a documented prior infection was 0.79 (95% CI: 0.54 to 1.16, p = 0.222), whereas the OR comparing boosted and booster-eligible people without a documented prior infection was 0.54 (95% CI: 0.49 to 0.59, p < 0.001). This study's limitations include the risk of residual confounding, the use of data from a single system, and the reliance on TaqPath analyzed RT-PCR results. CONCLUSIONS: In this study, we observed that primary vaccination provided significant but limited protection against Omicron (lineage BA.1) infection among people with and without a documented prior infection. While booster vaccination was associated with additional protection against Omicron BA.1 infection in people without a documented prior infection, it was not found to be associated with additional protection among people with a documented prior infection. These findings support primary vaccination in people regardless of documented prior infection status but suggest that infection history may impact the relative benefit of booster doses.


Asunto(s)
COVID-19 , Humanos , Adulto , COVID-19/epidemiología , COVID-19/prevención & control , SARS-CoV-2/genética , Estudios de Casos y Controles , Oportunidad Relativa , Vacunación
6.
BMC Med Inform Decis Mak ; 21(1): 61, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33596898

ABSTRACT

BACKGROUND: The electronic health record (EHR) holds the prospect of providing more complete and timely access to clinical information for biomedical research, quality assessments, and quality improvement compared to other data sources, such as administrative claims. In this study, we sought to assess the completeness and timeliness of structured diagnoses in the EHR compared to computed diagnoses for hypertension (HTN), hyperlipidemia (HLD), and diabetes mellitus (DM). METHODS: We determined the amount of time for a structured diagnosis to be recorded in the EHR from when an equivalent diagnosis could be computed from other structured data elements, such as vital signs and laboratory results. We used EHR data for encounters from January 1, 2012 through February 10, 2019 from an academic health system. Diagnoses for HTN, HLD, and DM were computed for patients with at least two observations above threshold separated by at least 30 days, where the thresholds were outpatient blood pressure of ≥ 140/90 mmHg, any low-density lipoprotein ≥ 130 mg/dl, or any hemoglobin A1c ≥ 6.5%, respectively. The primary measure was the length of time between the computed diagnosis and the time at which a structured diagnosis could be identified within the EHR history or problem list. RESULTS: We found that 39.8% of those with HTN, 21.6% with HLD, and 5.2% with DM did not receive a corresponding structured diagnosis recorded in the EHR. For those who received a structured diagnosis, a mean of 389, 198, and 166 days elapsed before the patient had the corresponding diagnosis of HTN, HLD, or DM, respectively, recorded in the EHR. CONCLUSIONS: We found a marked temporal delay between when a diagnosis can be computed or inferred and when an equivalent structured diagnosis is recorded within the EHR. 
These findings demonstrate the continued need for additional study of the EHR to avoid bias when using observational data and reinforce the need for computational approaches to identify clinical phenotypes.
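The computed-diagnosis rule described in the Methods (two above-threshold observations at least 30 days apart, with the diagnosis dated at the second) translates directly into code. A sketch with hypothetical systolic blood-pressure readings (the function and data names are ours, and the study's full HTN criterion also considers diastolic pressure):

```python
from datetime import date

def computed_diagnosis_date(observations, threshold, min_gap_days=30):
    """Date at which a diagnosis can be computed: the earliest above-threshold
    observation at least `min_gap_days` after the first above-threshold one."""
    above = sorted(d for d, value in observations if value >= threshold)
    for d in above[1:]:
        if (d - above[0]).days >= min_gap_days:
            return d
    return None  # criterion never met

# Hypothetical outpatient systolic BP readings (mmHg); HTN threshold 140.
bp = [(date(2018, 1, 5), 150), (date(2018, 1, 20), 145),
      (date(2018, 2, 10), 128), (date(2018, 3, 1), 160)]
print(computed_diagnosis_date(bp, 140))  # → 2018-03-01
```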


Asunto(s)
Diabetes Mellitus , Hipertensión , Diabetes Mellitus/diagnóstico , Diabetes Mellitus/epidemiología , Registros Electrónicos de Salud , Humanos , Hipertensión/diagnóstico , Hipertensión/epidemiología , Almacenamiento y Recuperación de la Información , Pacientes Ambulatorios
7.
J Med Internet Res ; 21(4): e13043, 2019 04 09.
Article in English | MEDLINE | ID: mdl-30964441

ABSTRACT

BACKGROUND: Health care data are increasing in volume and complexity. Storing and analyzing these data to implement precision medicine initiatives and data-driven research has exceeded the capabilities of traditional computer systems. Modern big data platforms must be adapted to the specific demands of health care and designed for scalability and growth. OBJECTIVE: The objectives of our study were to (1) demonstrate the implementation of a data science platform built on open source technology within a large, academic health care system and (2) describe 2 computational health care applications built on such a platform. METHODS: We deployed a data science platform based on several open source technologies to support real-time, big data workloads. We developed data-acquisition workflows for Apache Storm and NiFi in Java and Python to capture patient monitoring and laboratory data for downstream analytics. RESULTS: Emerging data management approaches, along with open source technologies such as Hadoop, can be used to create integrated data lakes to store large, real-time datasets. This infrastructure also provides a robust analytics platform where health care and biomedical research data can be analyzed in near real time for precision medicine and computational health care use cases. CONCLUSIONS: The implementation and use of integrated data science platforms offer organizations the opportunity to combine traditional datasets, including data from the electronic health record, with emerging big data sources, such as continuous patient monitoring and real-time laboratory results. These platforms can enable cost-effective and scalable analytics for the information that will be key to the delivery of precision medicine initiatives. Organizations that can take advantage of the technical advances found in data science platforms will have the opportunity to provide comprehensive access to health care data for computational health care and precision medicine research.


Asunto(s)
Ciencia de los Datos/métodos , Atención a la Salud/métodos , Informática Médica/métodos , Medicina de Precisión/métodos , Humanos
8.
Pharmacoepidemiol Drug Saf ; 27(8): 848-856, 2018 08.
Article in English | MEDLINE | ID: mdl-29896873

ABSTRACT

PURPOSE: To estimate medical device utilization needed to detect safety differences among implantable cardioverter defibrillators (ICDs) generator models and compare these estimates to utilization in practice. METHODS: We conducted repeated sample size estimates to calculate the medical device utilization needed, systematically varying device-specific safety event rate ratios and significance levels while maintaining 80% power, testing 3 average adverse event rates (3.9, 6.1, and 12.6 events per 100 person-years) estimated from the American College of Cardiology's 2006 to 2010 National Cardiovascular Data Registry of ICDs. We then compared with actual medical device utilization. RESULTS: At significance level 0.05 and 80% power, 34% or fewer ICD models accrued sufficient utilization in practice to detect safety differences for rate ratios <1.15 and an average event rate of 12.6 events per 100 person-years. For average event rates of 3.9 and 12.6 events per 100 person-years, 30% and 50% of ICD models, respectively, accrued sufficient utilization for a rate ratio of 1.25, whereas 52% and 67% for a rate ratio of 1.50. Because actual ICD utilization was not uniformly distributed across ICD models, the proportion of individuals receiving any ICD that accrued sufficient utilization in practice was 0% to 21%, 32% to 70%, and 67% to 84% for rate ratios of 1.05, 1.15, and 1.25, respectively, for the range of 3 average adverse event rates. CONCLUSIONS: Small safety differences among ICD generator models are unlikely to be detected through routine surveillance given current ICD utilization in practice, but large safety differences can be detected for most patients at anticipated average adverse event rates.
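The repeated sample-size estimates described here follow standard power calculations for comparing two event rates. A normal-approximation sketch on the log rate-ratio scale (this formula choice is ours; the paper's exact method may differ):

```python
from math import log
from statistics import NormalDist

def person_years_per_group(base_rate, rate_ratio, alpha=0.05, power=0.80):
    """Person-time needed per device model to detect a given adverse-event
    rate ratio, via a two-sided normal test on the log rate ratio."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    comparator_rate = base_rate * rate_ratio
    # variance of log(RR) scales as 1/expected events in each group
    var_per_py = 1 / base_rate + 1 / comparator_rate
    return (z_a + z_b) ** 2 * var_per_py / log(rate_ratio) ** 2

# e.g. 12.6 events per 100 person-years, rate ratio 1.25
print(round(person_years_per_group(0.126, 1.25)))  # ≈ 2252 person-years/group
```

As the abstract's results illustrate, the required person-time grows rapidly as the rate ratio approaches 1, which is why small safety differences are hard to detect at observed utilization.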


Asunto(s)
Bases de Datos Factuales/estadística & datos numéricos , Desfibriladores Implantables/estadística & datos numéricos , Vigilancia de Productos Comercializados/estadística & datos numéricos , Falla de Prótesis , Sistema de Registros/estadística & datos numéricos , Procedimientos Quirúrgicos Cardíacos/instrumentación , Procedimientos Quirúrgicos Cardíacos/estadística & datos numéricos , Interpretación Estadística de Datos , Muerte Súbita Cardíaca , Desfibriladores Implantables/efectos adversos , Insuficiencia Cardíaca/cirugía , Humanos , Prevención Primaria , Vigilancia de Productos Comercializados/métodos , Implantación de Prótesis/instrumentación , Implantación de Prótesis/estadística & datos numéricos , Tamaño de la Muestra , Estados Unidos
9.
Article in English | MEDLINE | ID: mdl-39221857

ABSTRACT

Background: Risk stratification strategies for cancer therapeutics-related cardiac dysfunction (CTRCD) rely on serial monitoring by specialized imaging, limiting their scalability. We aimed to examine an application of artificial intelligence (AI) to electrocardiographic (ECG) images as a surrogate for imaging risk biomarkers, and its association with early CTRCD. Methods: Across a U.S.-based health system (2013-2023), we identified 1,550 patients (age 60 [IQR:51-69] years, 1223 [78.9%] women) without cardiomyopathy who received anthracyclines and/or trastuzumab for breast cancer or non-Hodgkin lymphoma and had ECG performed ≤12 months before treatment. We deployed a validated AI model of left ventricular systolic dysfunction (LVSD) to baseline ECG images and defined low, intermediate, and high-risk groups based on AI-ECG LVSD probabilities of <0.01, 0.01 to 0.1, and ≥0.1 (positive screen), respectively. We explored the association with early CTRCD (new cardiomyopathy, heart failure, or left ventricular ejection fraction [LVEF]<50%), or LVEF<40%, up to 12 months post-treatment. In a mechanistic analysis, we assessed the association between global longitudinal strain (GLS) and AI-ECG LVSD probabilities in studies performed within 15 days of each other. Results: Among 1,550 patients without known cardiomyopathy (median follow-up: 14.1 [IQR:13.4-17.1] months), 83 (5.4%), 562 (36.3%) and 905 (58.4%) were classified as high, intermediate, and low risk by baseline AI-ECG. A high- vs low-risk AI-ECG screen (≥0.1 vs <0.01) was associated with a 3.4-fold and 13.5-fold higher incidence of CTRCD (adj.HR 3.35 [95%CI:2.25-4.99]) and LVEF<40% (adj.HR 13.52 [95%CI:5.06-36.10]), respectively. Post-hoc analyses supported longitudinal increases in AI-ECG probabilities within 6-to-12 months of a CTRCD event. 
Among 1,428 temporally linked echocardiograms and ECGs, AI-ECG LVSD probabilities were associated with worse GLS (from -19% [IQR: -21 to -17%] for probabilities <0.1 to -15% [IQR: -15 to -9%] for probabilities ≥0.5; p<0.001). Conclusions: AI applied to baseline ECG images can stratify the risk of early CTRCD associated with anthracycline or trastuzumab exposure in the setting of breast cancer or non-Hodgkin lymphoma therapy.
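The three risk tiers defined in the Methods map directly onto probability cutoffs; a trivial sketch of that binning (thresholds taken from the abstract):

```python
def ai_ecg_risk_group(lvsd_probability):
    """Risk tier from the AI-ECG LVSD probability, using the study's cutoffs:
    <0.01 low, 0.01 to <0.1 intermediate, >=0.1 high (positive screen)."""
    if lvsd_probability >= 0.1:
        return "high"
    if lvsd_probability >= 0.01:
        return "intermediate"
    return "low"

print(ai_ecg_risk_group(0.25))  # → high
```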

10.
medRxiv ; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562897

ABSTRACT

Background: Risk stratification strategies for cancer therapeutics-related cardiac dysfunction (CTRCD) rely on serial monitoring by specialized imaging, limiting their scalability. Objectives: To examine an artificial intelligence (AI)-enhanced electrocardiographic (AI-ECG) surrogate for imaging risk biomarkers, and its association with CTRCD. Methods: Across a five-hospital U.S.-based health system (2013-2023), we identified patients with breast cancer or non-Hodgkin lymphoma (NHL) who received anthracyclines (AC) and/or trastuzumab (TZM), and a control cohort receiving immune checkpoint inhibitors (ICI). We deployed a validated AI model of left ventricular systolic dysfunction (LVSD) to ECG images (≥0.1, positive screen) and explored its association with (i) global longitudinal strain (GLS) measured within 15 days (n=7,271 pairs); and (ii) future CTRCD (new cardiomyopathy, heart failure, or left ventricular ejection fraction [LVEF]<50%), and LVEF<40%. In the ICI cohort we correlated baseline AI-ECG LVSD predictions with downstream myocarditis. Results: Higher AI-ECG LVSD predictions were associated with worse GLS (from -18% [IQR: -20 to -17%] for predictions <0.1 to -12% [IQR: -15 to -9%] for predictions ≥0.5; p<0.001). In 1,308 patients receiving AC/TZM (age 59 [IQR: 49-67] years, 999 [76.4%] women, 80 [IQR: 42-115] follow-up months) a positive baseline AI-ECG LVSD screen was associated with ~2-fold and ~4.8-fold increases in the incidence of the composite CTRCD endpoint (adj.HR 2.22 [95%CI: 1.63-3.02]) and LVEF<40% (adj.HR 4.76 [95%CI: 2.62-8.66]), respectively. Among 2,056 patients receiving ICI (age 65 [IQR: 57-73] years, 913 [44.4%] women, follow-up 63 [IQR: 28-99] months) AI-ECG predictions were not associated with ICI myocarditis (adj.HR 1.36 [95%CI: 0.47-3.93]). Conclusion: AI applied to baseline ECG images can stratify the risk of CTRCD associated with anthracycline or trastuzumab exposure.

11.
medRxiv ; 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39252891

ABSTRACT

Background and Aims: Diagnosing transthyretin amyloid cardiomyopathy (ATTR-CM) requires advanced imaging, precluding large-scale testing for pre-clinical disease. We examined the application of artificial intelligence (AI) to echocardiography (TTE) and electrocardiography (ECG) as a scalable strategy to quantify pre-clinical trends in ATTR-CM. Methods: Across age/sex-matched case-control datasets in the Yale-New Haven Health System (YNHHS) we trained deep learning models to identify ATTR-CM-specific signatures on TTE videos and ECG images (area under the curve of 0.93 and 0.91, respectively). We deployed these across all studies of individuals referred for cardiac nuclear amyloid imaging in an independent population at YNHHS and an external population from the Houston Methodist Hospitals (HMH) to define longitudinal trends in AI-defined probabilities for ATTR-CM using age/sex-adjusted linear mixed models, and describe discrimination metrics during the early pre-clinical stage. Results: Among 984 participants referred for cardiac nuclear amyloid imaging at YNHHS (median age 74 years, 44.3% female) and 806 at HMH (69 years, 34.5% female), 112 (11.4%) and 174 (21.6%) tested positive for ATTR-CM, respectively. Across both cohorts and modalities, AI-defined ATTR-CM probabilities derived from 7,423 TTEs and 32,205 ECGs showed significantly faster progression rates in the years before clinical diagnosis in cases versus controls (p time × group interaction ≤0.004). In the one-to-three-year window before cardiac nuclear amyloid imaging sensitivity/specificity metrics were estimated at 86.2%/44.2% [YNHHS] vs 65.7%/65.5% [HMH] for AI-Echo, and 89.8%/40.6% [YNHHS] vs 88.5%/35.1% [HMH] for AI-ECG. Conclusions: We demonstrate that AI tools for echocardiographic videos and ECG images can enable scalable identification of pre-clinical ATTR-CM, flagging individuals who may benefit from risk-modifying therapies.

12.
medRxiv ; 2024 Feb 18.
Article in English | MEDLINE | ID: mdl-38405776

ABSTRACT

Timely and accurate assessment of electrocardiograms (ECGs) is crucial for diagnosing, triaging, and clinically managing patients. Current workflows rely on a computerized ECG interpretation using rule-based tools built into the ECG signal acquisition systems with limited accuracy and flexibility. In low-resource settings, specialists must review every single ECG for such decisions, as these computerized interpretations are not available. Additionally, high-quality interpretations are even more essential in such low-resource settings as there is a higher burden of accuracy for automated reads when access to experts is limited. Artificial Intelligence (AI)-based systems have the prospect of greater accuracy yet are frequently limited to a narrow range of conditions and do not replicate the full diagnostic range. Moreover, these models often require raw signal data, which are unavailable to physicians and necessitate costly technical integrations that are currently limited. To overcome these challenges, we developed and validated a format-independent vision encoder-decoder model - ECG-GPT - that can generate free-text, expert-level diagnosis statements directly from ECG images. The model shows robust performance, validated on 2.6 million ECGs across 6 geographically distinct health settings: (1) 2 large and diverse US health systems- Yale-New Haven and Mount Sinai Health Systems, (2) a consecutive ECG dataset from a central ECG repository from Minas Gerais, Brazil, (3) the prospective cohort study, UK Biobank, (4) a Germany-based, publicly available repository, PTB-XL, and (5) a community hospital in Missouri. The model demonstrated consistently high performance (AUROC≥0.81) across a wide range of rhythm and conduction disorders. 
This can be easily accessed via a web-based application capable of receiving ECG images and represents a scalable and accessible strategy for generating accurate, expert-level reports from images of ECGs, enabling accurate triage of patients globally, especially in low-resource settings.

13.
medRxiv ; 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38559021

ABSTRACT

Background: Point-of-care ultrasonography (POCUS) enables cardiac imaging at the bedside and in communities but is limited by abbreviated protocols and variation in quality. We developed and tested artificial intelligence (AI) models to automate the detection of underdiagnosed cardiomyopathies from cardiac POCUS. Methods: In a development set of 290,245 transthoracic echocardiographic videos across the Yale-New Haven Health System (YNHHS), we used augmentation approaches and a customized loss function weighted for view quality to derive a POCUS-adapted, multi-label, video-based convolutional neural network (CNN) that discriminates HCM (hypertrophic cardiomyopathy) and ATTR-CM (transthyretin amyloid cardiomyopathy) from controls without known disease. We evaluated the final model across independent, internal and external, retrospective cohorts of individuals who underwent cardiac POCUS across YNHHS and Mount Sinai Health System (MSHS) emergency departments (EDs) (2011-2024) to prioritize key views and validate the diagnostic and prognostic performance of single-view screening protocols. Findings: We identified 33,127 patients (median age 61 [IQR: 45-75] years, n=17,276 [52·2%] female) at YNHHS and 5,624 (57 [IQR: 39-71] years, n=1,953 [34·7%] female) at MSHS with 78,054 and 13,796 eligible cardiac POCUS videos, respectively. An AI-enabled single-view screening approach successfully discriminated HCM (AUROC of 0·90 [YNHHS] & 0·89 [MSHS]) and ATTR-CM (AUROC of 0·92 [YNHHS] & 0·99 [MSHS]). In YNHHS, 40 (58·0%) HCM and 23 (47·9%) ATTR-CM cases had a positive screen at median of 2·1 [IQR: 0·9-4·5] and 1·9 [IQR: 1·0-3·4] years before clinical diagnosis.
Moreover, among 24,448 participants without known cardiomyopathy followed over 2·2 [IQR: 1·1-5·8] years, AI-POCUS probabilities in the highest (vs lowest) quintile for HCM and ATTR-CM conferred a 15% (adj.HR 1·15 [95%CI: 1·02-1·29]) and 39% (adj.HR 1·39 [95%CI: 1·22-1·59]) higher age- and sex-adjusted mortality risk, respectively. Interpretation: We developed and validated an AI framework that enables scalable, opportunistic screening of treatable cardiomyopathies wherever POCUS is used. Funding: National Heart, Lung and Blood Institute, Doris Duke Charitable Foundation, BridgeBio.

14.
JACC Heart Fail ; 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39453355

ABSTRACT

BACKGROUND: The lack of automated tools for measuring care quality limits the implementation of a national program to assess guideline-directed care in heart failure with reduced ejection fraction (HFrEF). OBJECTIVES: The authors aimed to automate the identification of patients with HFrEF at hospital discharge, an opportunity to evaluate and improve the quality of care. METHODS: The authors developed a novel deep-learning language model for identifying patients with HFrEF from discharge summaries of hospitalizations with heart failure at Yale New Haven Hospital during 2015 to 2019. HFrEF was defined by left ventricular ejection fraction <40% on antecedent echocardiography. The authors externally validated the model at Northwestern Medicine, community hospitals of Yale, and the MIMIC-III (Medical Information Mart for Intensive Care III) database. RESULTS: A total of 13,251 notes from 5,392 unique individuals (age 73 ± 14 years, 48% women), including 2,487 patients with HFrEF (46.1%), were used for model development (train/held-out: 70%/30%). The model achieved an area under receiver-operating characteristic curve (AUROC) of 0.97 and area under precision recall curve (AUPRC) of 0.97 in detecting HFrEF on the held-out set. The model had high performance in identifying HFrEF with AUROC = 0.94 and AUPRC = 0.91 on 19,242 notes from Northwestern Medicine, AUROC = 0.95 and AUPRC = 0.96 on 139 manually abstracted notes from Yale community hospitals, and AUROC = 0.91 and AUPRC = 0.92 on 146 manually reviewed notes from MIMIC-III. Model-based predictions of HFrEF corresponded to a net reclassification improvement of 60.2 ± 1.9% compared with diagnosis codes (P < 0.001). CONCLUSIONS: The authors developed a language model that identifies HFrEF from clinical notes with high precision and accuracy, representing a key element in automating quality assessment for individuals with HFrEF.
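The reported net reclassification improvement compares the language model against diagnosis codes. For binary classifications, NRI reduces to the gain in sensitivity plus the gain in specificity; a sketch on hypothetical labels (not the study's data):

```python
def binary_nri(old_pred, new_pred, truth):
    """Net reclassification improvement for binary classifiers:
    (sensitivity gain) + (specificity gain) of new over old."""
    events = [i for i, t in enumerate(truth) if t]
    nonevents = [i for i, t in enumerate(truth) if not t]
    sens = lambda pred: sum(pred[i] for i in events) / len(events)
    spec = lambda pred: sum(1 - pred[i] for i in nonevents) / len(nonevents)
    return (sens(new_pred) - sens(old_pred)) + (spec(new_pred) - spec(old_pred))

truth = [1, 1, 1, 1, 0, 0, 0, 0]        # HFrEF ground truth (hypothetical)
codes = [1, 0, 0, 0, 0, 0, 1, 1]        # diagnosis-code flags (hypothetical)
model = [1, 1, 1, 0, 0, 0, 0, 1]        # language-model flags (hypothetical)
print(binary_nri(codes, model, truth))  # → 0.75
```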

15.
medRxiv ; 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39417103

ABSTRACT

Background and Aims: AI-enhanced 12-lead ECG can detect a range of structural heart diseases (SHDs) but has a limited role in community-based screening. We developed and externally validated a noise-resilient single-lead AI-ECG algorithm that can detect SHD and predict the risk of its development using wearable/portable devices. Methods: Using 266,740 ECGs from 99,205 patients with paired echocardiographic data at Yale New Haven Hospital, we developed ADAPT-HEART, a noise-resilient deep-learning algorithm, to detect SHD using lead I ECG. SHD was defined as a composite of LVEF <40%, moderate or severe left-sided valvular disease, and severe LVH. ADAPT-HEART was validated in four community hospitals in the US and in the population-based ELSA-Brasil cohort. We assessed the model's performance as a predictive biomarker among those without baseline SHD across hospital-based sites and the UK Biobank. Results: The development population had a median age of 66 [IQR, 54-77] years and included 49,947 (50.3%) women, with 18,896 (19.0%) having any SHD. ADAPT-HEART had an AUROC of 0.879 (95% CI, 0.870-0.888) with good calibration for detecting SHD in the test set, and consistent performance in hospital-based external sites (AUROC: 0.852-0.891) and ELSA-Brasil (AUROC: 0.859). Among those without baseline SHD, high vs. low ADAPT-HEART probability conferred a 2.8- to 5.7-fold increase in the risk of future SHD across data sources (all P<0.05). Conclusions: We propose a novel model that detects and predicts a range of SHDs from noisy single-lead ECGs obtainable on portable/wearable devices, providing a scalable strategy for community-based screening and risk stratification for SHD.
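The reported 2.8- to 5.7-fold increase is a relative risk: the observed event rate among those flagged high-probability by the model divided by the rate among those flagged low. A minimal sketch with hypothetical counts (not study data):

```python
# Relative risk of incident SHD between model-defined high- and low-probability
# groups. All counts below are made up for illustration.

def risk_ratio(events_high, n_high, events_low, n_low):
    return (events_high / n_high) / (events_low / n_low)

# e.g. 56 events among 400 high-probability vs 30 among 1,200 low-probability
print(risk_ratio(56, 400, 30, 1200))  # 0.14 / 0.025 = 5.6-fold
```

In the study itself such contrasts would typically be adjusted for covariates (e.g. via a Cox model) rather than taken as crude ratios.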

16.
medRxiv ; 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39417095

ABSTRACT

Background: Identifying structural heart diseases (SHDs) early can change the course of the disease, but their diagnosis requires cardiac imaging, which is limited in accessibility. Objective: To leverage 12-lead ECG images for automated detection and prediction of multiple SHDs using a novel deep learning model. Methods: We developed a series of convolutional neural network models for detecting a range of individual SHDs from images of ECGs, with SHDs defined by transthoracic echocardiograms (TTEs) performed within 30 days of the ECG at the Yale New Haven Hospital (YNHH). SHDs were defined based on TTEs with LV ejection fraction <40%, moderate-to-severe left-sided valvular disease (aortic/mitral stenosis or regurgitation), or severe left ventricular hypertrophy (IVSd >1.5 cm and diastolic dysfunction). We developed an ensemble XGBoost model, PRESENT-SHD, as a composite screen across all SHDs. We validated PRESENT-SHD at 4 US hospitals and in a prospective population-based cohort study, the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil), with concurrent protocolized ECGs and TTEs. We also used PRESENT-SHD for risk stratification of new-onset SHD or heart failure (HF) in clinical cohorts and the population-based UK Biobank (UKB). Results: The models were developed using 261,228 ECGs from 93,693 YNHH patients and evaluated on a single ECG from 11,023 individuals at YNHH (19% with SHD), 44,591 across external hospitals (20-27% with SHD), and 3,014 in ELSA-Brasil (3% with SHD). In the held-out test set, PRESENT-SHD demonstrated an AUROC of 0.886 (0.877-0.894), sensitivity of 89%, and specificity of 66%. At hospital-based sites, PRESENT-SHD had AUROCs ranging from 0.854 to 0.900, with sensitivities and specificities of 93-96% and 51-56%, respectively. The model generalized well to ELSA-Brasil (AUROC, 0.853 [0.811-0.897]; sensitivity, 88%; specificity, 62%). PRESENT-SHD performance was consistent across demographic subgroups. A positive PRESENT-SHD screen portended a 2- to 4-fold higher risk of new-onset SHD/HF, independent of demographics, comorbidities, and the competing risk of death across clinical sites and UKB, with high predictive discrimination. Conclusion: We developed and validated PRESENT-SHD, an AI-ECG tool identifying a range of SHDs using images of 12-lead ECGs, representing a robust, scalable, and accessible modality for automated SHD screening and risk stratification.
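The composite-screen design is a form of stacking: per-condition probabilities from the individual CNNs become features for a second-stage classifier that predicts "any SHD." The paper uses XGBoost for that stage; the sketch below substitutes a plain logistic combiner trained by gradient descent so it stays dependency-free, and all data are synthetic.

```python
import math

# Stacking sketch: combine per-condition probabilities (low LVEF, valvular
# disease, severe LVH) into one composite "any SHD" screen. A logistic
# combiner stands in for the XGBoost ensemble used in the study.

def train_combiner(X, y, lr=0.5, epochs=1000):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def screen(x, w, b, threshold=0.5):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z)) >= threshold

# columns: P(low LVEF), P(valvular disease), P(severe LVH) from upstream models
X = [[0.9, 0.1, 0.1], [0.1, 0.8, 0.2], [0.1, 0.1, 0.9],
     [0.1, 0.1, 0.1], [0.2, 0.2, 0.1]]
y = [1, 1, 1, 0, 0]  # any SHD present
w, b = train_combiner(X, y)
print([screen(x, w, b) for x in X])
```

A gradient-boosted combiner like XGBoost can additionally capture non-additive interactions between the per-condition probabilities, which a linear combiner cannot.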

17.
medRxiv ; 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-37808685

ABSTRACT

Importance: Aortic stenosis (AS) is a major public health challenge with a growing therapeutic landscape, but current biomarkers do not inform personalized screening and follow-up. Objective: A video-based artificial intelligence (AI) biomarker (Digital AS Severity index [DASSi]) can detect severe AS using single-view long-axis echocardiography without Doppler. Here, we deploy DASSi to patients with no or mild/moderate AS at baseline to identify AS development and progression. Design, Setting, and Participants: We defined two cohorts of patients without severe AS undergoing echocardiography in the Yale-New Haven Health System (YNHHS) (2015-2021, 4.1 [IQR: 2.4-5.4] follow-up years) and Cedars-Sinai Medical Center (CSMC) (2018-2019, 3.4 [IQR: 2.8-3.9] follow-up years). We further developed a novel computational pipeline for the cross-modality translation of DASSi into cardiac magnetic resonance (CMR) imaging in the UK Biobank (2.5 [IQR: 1.6-3.9] follow-up years). Analyses were performed between August 2023 and February 2024. Exposure: DASSi (range: 0-1) derived from AI applied to echocardiography and CMR videos. Main Outcomes and Measures: Annualized change in peak aortic valve velocity (AV-Vmax) and late (>6 months) aortic valve replacement (AVR). Results: A total of 12,599 participants were included in the echocardiographic study (YNHHS: n=8,798, median age of 71 [IQR (interquartile range): 60-80] years, 4,250 [48.3%] women; CSMC: n=3,801, 67 [IQR: 54-78] years, 1,685 [44.3%] women). Higher baseline DASSi was associated with faster progression in AV-Vmax (per 0.1 DASSi increment: YNHHS: +0.033 m/s/year [95% CI: 0.028-0.038], n=5,483; CSMC: +0.082 m/s/year [0.053-0.111], n=1,292), with levels ≥0.2 vs <0.2 linked to a 4- to 5-fold higher AVR risk (715 events in YNHHS: adj. HR 4.97 [95% CI: 2.71-5.82]; 56 events in CSMC: adj. HR 4.04 [0.92-17.7]), independent of age, sex, ethnicity/race, ejection fraction, and AV-Vmax. This was reproduced across 45,474 participants (median age 65 [IQR: 59-71] years, 23,559 [51.8%] women) undergoing CMR in the UK Biobank (adj. HR 11.4 [95% CI: 2.56-50.60] for DASSi ≥0.2 vs <0.2). Saliency maps and phenome-wide association studies supported links with traditional cardiovascular risk factors and diastolic dysfunction. Conclusions and Relevance: In this cohort study of patients without severe AS undergoing echocardiography or CMR imaging, a new AI-based video biomarker is independently associated with AS development and progression, enabling opportunistic risk stratification across cardiovascular imaging modalities as well as potential application on handheld devices.

18.
JAMA Cardiol ; 9(6): 534-544, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38581644

ABSTRACT

Importance: Aortic stenosis (AS) is a major public health challenge with a growing therapeutic landscape, but current biomarkers do not inform personalized screening and follow-up. A video-based artificial intelligence (AI) biomarker (Digital AS Severity index [DASSi]) can detect severe AS using single-view long-axis echocardiography without Doppler characterization. Objective: To deploy DASSi to patients with no AS or with mild or moderate AS at baseline to identify AS development and progression. Design, Setting, and Participants: This is a cohort study that examined 2 cohorts of patients without severe AS undergoing echocardiography in the Yale New Haven Health System (YNHHS; 2015-2021) and Cedars-Sinai Medical Center (CSMC; 2018-2019). A novel computational pipeline for the cross-modal translation of DASSi into cardiac magnetic resonance (CMR) imaging was further developed in the UK Biobank. Analyses were performed between August 2023 and February 2024. Exposure: DASSi (range, 0-1) derived from AI applied to echocardiography and CMR videos. Main Outcomes and Measures: Annualized change in peak aortic valve velocity (AV-Vmax) and late (>6 months) aortic valve replacement (AVR). Results: A total of 12 599 participants were included in the echocardiographic study (YNHHS: n = 8798; median [IQR] age, 71 [60-80] years; 4250 [48.3%] women; median [IQR] follow-up, 4.1 [2.4-5.4] years; and CSMC: n = 3801; median [IQR] age, 67 [54-78] years; 1685 [44.3%] women; median [IQR] follow-up, 3.4 [2.8-3.9] years). 
Higher baseline DASSi was associated with faster progression in AV-Vmax (per 0.1 DASSi increment: YNHHS, 0.033 m/s per year [95% CI, 0.028-0.038] among 5483 participants; CSMC, 0.082 m/s per year [95% CI, 0.053-0.111] among 1292 participants), with values of 0.2 or greater associated with a 4- to 5-fold higher AVR risk than values less than 0.2 (YNHHS: 715 events; adjusted hazard ratio [HR], 4.97 [95% CI, 2.71-5.82]; CSMC: 56 events; adjusted HR, 4.04 [95% CI, 0.92-17.70]), independent of age, sex, race, ethnicity, ejection fraction, and AV-Vmax. This was reproduced across 45 474 participants (median [IQR] age, 65 [59-71] years; 23 559 [51.8%] women; median [IQR] follow-up, 2.5 [1.6-3.9] years) undergoing CMR imaging in the UK Biobank (for participants with DASSi ≥0.2 vs those with DASSi <0.2, adjusted HR, 11.38 [95% CI, 2.56-50.57]). Saliency maps and phenome-wide association studies supported associations with cardiac structure and function and traditional cardiovascular risk factors. Conclusions and Relevance: In this cohort study of patients without severe AS undergoing echocardiography or CMR imaging, a new AI-based video biomarker was independently associated with AS development and progression, enabling opportunistic risk stratification across cardiovascular imaging modalities as well as potential application on handheld devices.


Subject(s)
Aortic Valve Stenosis , Artificial Intelligence , Disease Progression , Echocardiography , Severity of Illness Index , Humans , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/surgery , Aortic Valve Stenosis/physiopathology , Female , Male , Aged , Echocardiography/methods , Middle Aged , Biomarkers , Aged, 80 and over , Cohort Studies , Video Recording , Multimodal Imaging/methods , Magnetic Resonance Imaging/methods
19.
Am J Med ; 2024 May 10.
Article in English | MEDLINE | ID: mdl-38735354

ABSTRACT

BACKGROUND: Individuals with long COVID lack evidence-based treatments and have difficulty participating in traditional site-based trials. Our digital, decentralized trial investigates the efficacy and safety of nirmatrelvir/ritonavir, targeting viral persistence as a potential cause of long COVID. METHODS: The PAX LC trial (NCT05668091) is a Phase 2, 1:1 randomized, double-blind, superiority, placebo-controlled trial in 100 community-dwelling, highly symptomatic adult participants with long COVID residing in the 48 contiguous US states to determine the efficacy, safety, and tolerability of 15 days of nirmatrelvir/ritonavir compared with placebo/ritonavir. Participants are recruited via patient groups, cultural ambassadors, and social media platforms. Medical records are reviewed through a platform facilitating participant-mediated data acquisition from electronic health records nationwide. During the drug treatment, participants complete daily digital diaries using a web-based application. Blood draws for eligibility and safety assessments are conducted at or near participants' homes. The study drug is shipped directly to participants' homes. The primary endpoint is the difference in the PROMIS-29 Physical Health Summary Score between baseline and Day 28, evaluated by a mixed-model repeated-measures analysis. Secondary endpoints include PROMIS-29 (Mental Health Summary Score and all items), Modified GSQ-30 with supplemental symptoms questionnaire, COVID Core Outcome Measures for Recovery, EQ-5D-5L (Utility Score and all items), PGIS 1 and 2, PGIC 1 and 2, and healthcare utilization. The trial incorporates immunophenotyping to identify long COVID biomarkers and treatment responders. CONCLUSION: The PAX LC trial uses a novel decentralized design and a participant-centric approach to test a 15-day regimen of nirmatrelvir/ritonavir for long COVID.
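The primary-endpoint contrast is the baseline-to-Day-28 change in the PROMIS-29 Physical Health Summary Score, compared between arms. The trial analyzes this with a mixed-model repeated-measures analysis; the sketch below is a much-simplified, complete-case version (a plain difference in mean changes) with hypothetical scores, just to make the estimand concrete.

```python
# Simplified stand-in for the primary endpoint: mean change from baseline to
# Day 28 in PROMIS-29 Physical Health Summary Score, treatment minus placebo.
# All scores are hypothetical; the real analysis is a mixed model that also
# handles missing visits and covariates.

def mean_change(pairs):
    return sum(day28 - baseline for baseline, day28 in pairs) / len(pairs)

treatment = [(38.0, 44.0), (40.0, 45.0), (35.0, 42.0)]  # (baseline, day 28)
placebo = [(39.0, 41.0), (37.0, 38.0), (36.0, 38.0)]

effect = mean_change(treatment) - mean_change(placebo)
print(round(effect, 2))
```

The mixed-model approach is preferred in the trial because it uses all observed time points per participant and remains valid under data missing at random, which a complete-case mean difference does not.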

20.
PLoS One ; 18(9): e0291572, 2023.
Article in English | MEDLINE | ID: mdl-37713393

ABSTRACT

OBJECTIVE: We aimed to discover computationally derived phenotypes of opioid-related patient presentations to the ED via clinical notes and structured electronic health record (EHR) data. METHODS: This was a retrospective study of ED visits from 2013-2020 across ten sites within a regional healthcare network. We derived phenotypes from visits for patients ≥18 years of age with at least one prior or current documentation of an opioid-related diagnosis. Natural language processing was used to extract clinical entities from notes, which were combined with structured data within the EHR to create a set of features. We performed latent Dirichlet allocation to identify topics within these features. Groups of patient presentations with similar attributes were identified by cluster analysis. RESULTS: In total, 82,577 ED visits met inclusion criteria. Thirty topics were discovered, ranging from those related to substance use disorder, chronic conditions, mental health, and medical management. Clustering on these topics identified nine unique cohorts with one-year survival ranging from 84.2-96.8%, rates of one-year ED returns from 9-34%, rates of one-year opioid-related events from 10-17%, rates of medications for opioid use disorder from 17-43%, and a median Charlson comorbidity index of 2-8. Two clusters of phenotypes were identified, related to chronic substance use disorder or acute overdose. CONCLUSIONS: Our results indicate distinct phenotypic clusters with varying patient-oriented outcomes, which provide future targets for better allocation of resources and therapeutics. This highlights the heterogeneity of the overall population and the need to develop targeted interventions for each population.
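In this pipeline, LDA turns each ED visit into a vector of topic proportions, and the cluster analysis then groups visits with similar topic mixes. A minimal k-means (Lloyd's algorithm) sketch of that clustering step is below; the 2-topic vectors are hypothetical, and the study's actual LDA fitting and cluster method are not reproduced here.

```python
# k-means sketch of the clustering step: each ED visit is a vector of LDA
# topic proportions; visits with similar topic mixes end up in one cluster.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50):
    cents = [list(p) for p in points[:k]]  # deterministic init for the sketch
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid
        labels = [min(range(k), key=lambda j: dist2(p, cents[j]))
                  for p in points]
        # update step: centroid = mean of assigned points
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                cents[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# hypothetical 2-topic mixes, e.g. "substance use" vs "chronic disease" topics
visits = [[0.9, 0.1], [0.1, 0.9], [0.8, 0.2],
          [0.2, 0.8], [0.85, 0.15], [0.15, 0.85]]
print(kmeans(visits, 2))
```

In practice one would use random restarts and a criterion such as silhouette score to choose k, which is how analyses like this typically settle on the number of cohorts.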


Subject(s)
Analgesics, Opioid , Opioid-Related Disorders , Humans , Analgesics, Opioid/adverse effects , Retrospective Studies , Emergency Service, Hospital , Phenotype