1.
Front Health Serv ; 4: 1365785, 2024.
Article En | MEDLINE | ID: mdl-38807747

Introduction: During the COVID-19 pandemic, individuals with mental illnesses faced challenges accessing psychiatric care. Our study aimed to describe patient characteristics and compare admissions and length of stay (LOS) for psychiatric-related hospitalizations before and during the COVID-19 pandemic. Methods: We conducted a retrospective analysis using health administrative data comparing individuals with an acute psychiatric admission between two time periods: 1st March 2019 to 31st December 2019 (pre-COVID) and 1st March 2020 to 31st December 2020 (during-COVID). Multivariable negative binomial regression was used to model the association of the most responsible diagnosis type and the two time periods with hospital LOS, reporting the rate ratio (RR) as the measure of effect. Results: The cohort comprised 939 individuals who were predominantly male (60.3%) with a severe mental illness (schizophrenia or mood-affective disorder) (72.7%) and a median age of 38 (IQR: 28.0, 52.0) years. In the multivariable analysis, anxiety disorders (RR: 0.63, CI: 0.4, 0.99) and personality disorders (RR: 0.52, CI: 0.32, 0.85) were significantly associated with a shorter LOS when compared to individuals without those disorders. Additionally, when compared to hospital admissions for non-substance-related disorders, the LOS for patients with substance-related disorders was significantly shorter during the COVID period (RR: 0.45, CI: 0.30, 0.67) and the pre-COVID period (RR: 0.31, CI: 0.21, 0.46). Conclusions: We observed a significant difference in the type and length of admissions for various psychiatric disorders during the COVID-19 period. These findings can support systems of care in adapting to utilization changes during pandemics or other global health events.
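
A minimal sketch of the kind of negative binomial LOS model described above, fit with statsmodels on synthetic data; the column names (los_days, diagnosis, period, age, sex) are hypothetical, and the exponentiated coefficients are read as rate ratios (RR).

```python
# Hedged sketch: multivariable negative binomial model of hospital length of stay.
# Synthetic data and hypothetical column names; not the study's dataset or code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "los_days": rng.negative_binomial(3, 0.2, n),                    # synthetic LOS counts
    "diagnosis": rng.choice(["mood", "anxiety", "personality", "substance"], n),
    "period": rng.choice(["pre_covid", "during_covid"], n),
    "age": rng.integers(18, 80, n),
    "sex": rng.choice(["M", "F"], n),
})

# Negative binomial GLM with log link; exponentiated coefficients are rate ratios.
model = smf.glm(
    "los_days ~ C(diagnosis) + C(period) + age + C(sex)",
    data=df,
    family=sm.families.NegativeBinomial(alpha=1.0),
).fit()

rr = np.exp(model.params)          # rate ratios
ci = np.exp(model.conf_int())      # 95% confidence intervals on the RR scale
print(pd.concat([rr.rename("RR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```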

2.
Nat Med ; 2024 May 28.
Article En | MEDLINE | ID: mdl-38806679

Fibrotic diseases affect multiple organs and are associated with morbidity and mortality. To examine organ-specific and shared biologic mechanisms that underlie fibrosis in different organs, we developed machine learning models to quantify T1 time, a marker of interstitial fibrosis, in the liver, pancreas, heart and kidney among 43,881 UK Biobank participants who underwent magnetic resonance imaging. In phenome-wide association analyses, we demonstrate the association of increased organ-specific T1 time, reflecting increased interstitial fibrosis, with prevalent diseases across multiple organ systems. In genome-wide association analyses, we identified 27, 18, 11 and 10 independent genetic loci associated with liver, pancreas, myocardial and renal cortex T1 time, respectively. There was a modest genetic correlation between the examined organs. Several loci overlapped across the examined organs implicating genes involved in a myriad of biologic pathways including metal ion transport (SLC39A8, HFE and TMPRSS6), glucose metabolism (PCK2), blood group antigens (ABO and FUT2), immune function (BANK1 and PPP3CA), inflammation (NFKB1) and mitosis (CENPE). Finally, we found that an increasing number of organs with T1 time falling in the top quintile was associated with increased mortality in the population. Individuals with a high burden of fibrosis in ≥3 organs had a 3-fold increase in mortality compared to those with a low burden of fibrosis across all examined organs in multivariable-adjusted analysis (hazard ratio = 3.31, 95% confidence interval 1.77-6.19; P = 1.78 × 10⁻⁴). By leveraging machine learning to quantify T1 time across multiple organs at scale, we uncovered new organ-specific and shared biologic pathways underlying fibrosis that may provide therapeutic targets.
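
The mortality analysis above relates the number of organs with high T1 time to survival with a multivariable Cox model; a hedged sketch of that kind of model, using lifelines on synthetic data with illustrative covariates, is below.

```python
# Hedged sketch: Cox model of all-cause mortality by the number of organs with
# T1 time in the top quintile. Variable names and data are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "n_organs_top_quintile": rng.integers(0, 5, n),   # 0-4 organs with high T1 time
    "age": rng.normal(60, 8, n),
    "sex_male": rng.integers(0, 2, n),
    "follow_up_years": rng.uniform(1, 10, n),
    "died": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="died")
# Hazard ratios with 95% confidence intervals for each covariate.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```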

3.
JAMA Cardiol ; 9(2): 174-181, 2024 Feb 01.
Article En | MEDLINE | ID: mdl-37950744

Importance: The gold standard for outcome adjudication in clinical trials is medical record review by a physician clinical events committee (CEC), which requires substantial time and expertise. Automated adjudication of medical records by natural language processing (NLP) may offer a more resource-efficient alternative, but this approach has not been validated in a multicenter setting. Objective: To externally validate the Community Care Cohort Project (C3PO) NLP model for heart failure (HF) hospitalization adjudication, which was previously developed and tested within one health care system, compared to gold-standard CEC adjudication in a multicenter clinical trial. Design, Setting, and Participants: This was a retrospective analysis of the Influenza Vaccine to Effectively Stop Cardio Thoracic Events and Decompensated Heart Failure (INVESTED) trial, which compared 2 influenza vaccines in 5260 participants with cardiovascular disease at 157 sites in the US and Canada between September 2016 and January 2019. Analysis was performed from November 2022 to October 2023. Exposures: Individual sites submitted medical records for each hospitalization. The central INVESTED CEC and the C3PO NLP model independently adjudicated whether the cause of hospitalization was HF using the prepared hospitalization dossier. The C3PO NLP model was fine-tuned (C3PO + INVESTED) and a de novo NLP model was trained using half the INVESTED hospitalizations. Main Outcomes and Measures: Concordance between the C3PO NLP model HF adjudication and the gold-standard INVESTED CEC adjudication was measured by raw agreement, κ, sensitivity, and specificity. The fine-tuned and de novo INVESTED NLP models were evaluated in an internal validation cohort not used for training. Results: Among 4060 hospitalizations in 1973 patients (mean [SD] age, 66.4 [13.2] years; 514 [27.4%] female and 1432 [72.6%] male), 1074 hospitalizations (26%) were adjudicated as HF by the CEC. There was good agreement between the C3PO NLP and CEC HF adjudications (raw agreement, 87% [95% CI, 86-88]; κ, 0.69 [95% CI, 0.66-0.72]). C3PO NLP model sensitivity was 94% (95% CI, 92-95) and specificity was 84% (95% CI, 83-85). The fine-tuned C3PO and de novo NLP models demonstrated agreement of 93% (95% CI, 92-94) and κ of 0.82 (95% CI, 0.77-0.86) and 0.83 (95% CI, 0.79-0.87), respectively, vs the CEC. CEC reviewer interrater reproducibility was 94% (95% CI, 93-95; κ, 0.85 [95% CI, 0.80-0.89]). Conclusions and Relevance: The C3PO NLP model developed within 1 health care system identified HF events with good agreement relative to the gold-standard CEC in an external multicenter clinical trial. Fine-tuning the model improved agreement and approximated human reproducibility. Further study is needed to determine whether NLP will improve the efficiency of future multicenter clinical trials by identifying clinical events at scale.
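
A brief sketch of the concordance metrics reported above (raw agreement, κ, sensitivity, specificity), computed with scikit-learn on synthetic adjudication labels rather than the INVESTED records.

```python
# Hedged sketch: concordance between NLP and clinical events committee (CEC)
# heart failure adjudications - raw agreement, Cohen's kappa, sensitivity,
# specificity. Labels here are synthetic; this is not the C3PO model itself.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(2)
cec = rng.integers(0, 2, 4060)                        # gold-standard CEC labels (1 = HF)
nlp = np.where(rng.random(4060) < 0.9, cec, 1 - cec)  # NLP labels, ~90% concordant

raw_agreement = (cec == nlp).mean()
kappa = cohen_kappa_score(cec, nlp)
tn, fp, fn, tp = confusion_matrix(cec, nlp).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"agreement={raw_agreement:.2f} kappa={kappa:.2f} "
      f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```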

4.
Semin Dial ; 37(1): 79-82, 2024.
Article En | MEDLINE | ID: mdl-37968773

A central venous catheter (CVC) provides ready vascular access and is widely used for hemodialysis. CVC use is associated with many complications, and one life-threatening complication is central venous injury. We describe an unusual case of central venous injury in a 69-year-old woman with a poorly functioning left internal jugular vein catheter, which was in situ at the time of attempted insertion of a replacement right internal jugular catheter. Management included initial stabilization, urgent hemodialysis, imaging, and an endovascular approach to mitigate the iatrogenic venous injury. The case highlights several learning points. The operator needs to be vigilant for anatomical abnormalities such as stenosis in patients who have had a previous CVC. In those with central venous perforation, the CVC should be left in situ until a definitive management plan is formulated. An endovascular approach, when feasible, is a minimally invasive and effective management strategy.


Catheterization, Central Venous , Central Venous Catheters , Female , Humans , Aged , Catheterization, Central Venous/adverse effects , Catheterization, Central Venous/methods , Renal Dialysis/adverse effects , Central Venous Catheters/adverse effects , Jugular Veins/diagnostic imaging , Jugular Veins/surgery , Iatrogenic Disease
5.
Eur J Prev Cardiol ; 31(2): 252-262, 2024 Jan 25.
Article En | MEDLINE | ID: mdl-37798122

AIMS: To leverage deep learning on the resting 12-lead electrocardiogram (ECG) to estimate peak oxygen consumption (V˙O2peak) without cardiopulmonary exercise testing (CPET). METHODS AND RESULTS: V˙O2peak estimation models were developed in 1891 individuals undergoing CPET at Massachusetts General Hospital (age 45 ± 19 years, 38% female) and validated in a separate test set (MGH Test, n = 448) and external sample (BWH Test, n = 1076). Three penalized linear models were compared: (i) age, sex, and body mass index ('Basic'), (ii) Basic plus standard ECG measurements ('Basic + ECG Parameters'), and (iii) Basic plus 320 deep learning-derived ECG variables instead of ECG measurements ('Deep ECG-V˙O2'). Associations between estimated V˙O2peak and incident disease were assessed using proportional hazards models within 84 718 primary care patients without CPET. Inference ECGs preceded CPET by 7 days (median, interquartile range 27-0 days). Among models, Deep ECG-V˙O2 was most accurate in MGH Test [r = 0.845, 95% confidence interval (CI) 0.817-0.870; mean absolute error (MAE) 5.84, 95% CI 5.39-6.29] and BWH Test (r = 0.552, 95% CI 0.509-0.592, MAE 6.49, 95% CI 6.21-6.67). Deep ECG-V˙O2 also outperformed the Wasserman, Jones, and FRIEND reference equations (P < 0.01 for comparisons of correlation). Performance was higher in BWH Test when individuals with heart failure (HF) were excluded (r = 0.628, 95% CI 0.567-0.682; MAE 5.97, 95% CI 5.57-6.37). Deep ECG-V˙O2-estimated V˙O2peak <14 mL/kg/min was associated with increased risks of incident atrial fibrillation [hazard ratio 1.36 (95% CI 1.21-1.54)], myocardial infarction [1.21 (1.02-1.45)], HF [1.67 (1.49-1.88)], and death [1.84 (1.68-2.03)]. CONCLUSION: Deep learning-enabled analysis of the resting 12-lead ECG can estimate exercise capacity (V˙O2peak) at scale to enable efficient cardiovascular risk stratification.
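
A hedged sketch of a penalized linear model estimating V˙O2peak from age, sex, BMI and deep learning-derived ECG variables, evaluated by Pearson r and mean absolute error; the features and outcome below are simulated stand-ins, not the study data.

```python
# Hedged sketch: penalized linear regression of peak VO2 on basic covariates plus
# deep-learning-derived ECG variables, with Pearson r and MAE on a held-out split.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1891
basic = np.column_stack([rng.normal(45, 19, n),             # age
                         rng.integers(0, 2, n),             # sex
                         rng.normal(27, 5, n)])             # body mass index
ecg_embed = rng.normal(size=(n, 320))                       # deep ECG variables (stand-ins)
X = np.hstack([basic, ecg_embed])
y = 40 - 0.2 * basic[:, 0] + ecg_embed[:, 0] + rng.normal(0, 5, n)  # synthetic peak VO2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = ElasticNetCV(cv=5).fit(X_tr, y_tr)
pred = model.predict(X_te)
r, _ = pearsonr(y_te, pred)
print(f"r={r:.3f}  MAE={mean_absolute_error(y_te, pred):.2f} mL/kg/min")
```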


Researchers here present data describing a method of estimating exercise capacity from the resting electrocardiogram. Electrocardiogram-based estimation of exercise capacity was accurate and was found to predict the onset of a wide range of cardiovascular diseases, including heart attacks, heart failure, arrhythmia, and death. This approach offers the ability to estimate exercise capacity without dedicated exercise testing and may enable efficient risk stratification of cardiac patients at scale.


Electrocardiography , Heart Failure , Humans , Female , Adult , Middle Aged , Male , Prognosis , Exercise Test/methods , Oxygen Consumption
6.
J Am Coll Cardiol ; 82(20): 1936-1948, 2023 11 14.
Article En | MEDLINE | ID: mdl-37940231

BACKGROUND: Deep learning interpretation of echocardiographic images may facilitate automated assessment of cardiac structure and function. OBJECTIVES: We developed a deep learning model to interpret echocardiograms and examined the association of deep learning-derived echocardiographic measures with incident outcomes. METHODS: We trained and validated a 3-dimensional convolutional neural network model for echocardiographic view classification and quantification of left atrial dimension, left ventricular wall thickness, chamber diameter, and ejection fraction. The training sample comprised 64,028 echocardiograms (n = 27,135) from a retrospective multi-institutional ambulatory cardiology electronic health record sample. Validation was performed in a separate longitudinal primary care sample and an external health care system data set. Cox models evaluated the association of model-derived left heart measures with incident outcomes. RESULTS: Deep learning discriminated echocardiographic views (area under the receiver operating curve >0.97 for parasternal long axis, apical 4-chamber, and apical 2-chamber views vs human expert annotation) and quantified standard left heart measures (R2 range = 0.53 to 0.91 vs study report values). Model performance was similar in 2 external validation samples. Model-derived left heart measures predicted incident heart failure, atrial fibrillation, myocardial infarction, and death. A 1-SD lower model-left ventricular ejection fraction was associated with 43% greater risk of heart failure (HR: 1.43; 95% CI: 1.23-1.66) and 17% greater risk of death (HR: 1.17; 95% CI: 1.06-1.30). Similar results were observed for other model-derived left heart measures. CONCLUSIONS: Deep learning echocardiographic interpretation accurately quantified standard measures of left heart structure and function, which in turn were associated with future clinical outcomes. Deep learning may enable automated echocardiogram interpretation and disease prediction at scale.
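
A minimal sketch of a 3-dimensional convolutional network for echocardiographic view classification, as one illustration of the model class described above; the architecture, input shape and class labels are assumptions, not the published model.

```python
# Hedged sketch: a small 3D CNN that classifies echo clips into views
# (e.g., parasternal long axis vs. apical 4- / 2-chamber). Illustrative only.
import torch
import torch.nn as nn

class EchoViewNet3D(nn.Module):
    def __init__(self, n_views: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # operates on (frames, H, W) volumes
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                       # global pooling over time and space
        )
        self.classifier = nn.Linear(32, n_views)

    def forward(self, x):                                  # x: (batch, 1, frames, H, W)
        return self.classifier(self.features(x).flatten(1))

clip = torch.randn(2, 1, 16, 112, 112)                     # two synthetic echo clips
logits = EchoViewNet3D()(clip)
print(logits.shape)                                        # torch.Size([2, 3])
```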


Atrial Fibrillation , Deep Learning , Heart Failure , Humans , Stroke Volume , Ventricular Function, Left , Retrospective Studies
7.
Med J Armed Forces India ; 79(6): 694-701, 2023.
Article En | MEDLINE | ID: mdl-37981932

Background: Amongst the infections in kidney transplant recipients, brain abscess represents an uncommon, life-threatening complication. Mortality continues to be high despite improvements in diagnostics and therapeutics. Method: We conducted an observational study describing the incidence, presentation, implicated pathogen, management and outcome of brain abscess following kidney transplantation at our centre. Result: Amongst the 1492 patients who underwent kidney transplantation at our centre between June 1991 and January 2023 (cumulative follow-up: 4936 patient-years), nine patients (five females and four males) developed brain abscesses. The incidence proportion (risk) was 0.6%, with an incidence rate of 6.03 cases per 1000 patient-years. The median duration from transplant to development of brain abscess was 5 weeks (range: 4 weeks to 9 years). The commonest presentation was headache. A definitive microbiological diagnosis was established in eight of nine patients. The commonest implicated organism was a dematiaceous fungus, Cladophialophora bantiana (3 patients, 33.3%). Despite the reduction in immunosuppression, surgical evacuation and optimal medical therapy, five (55.55%) patients succumbed to their illness. Conclusions: Brain abscess following kidney transplantation is an uncommon, life-threatening condition. It usually occurs in the early post-transplant period, and the presentation is often subtle. Unlike in immunocompetent individuals, a fungus is the most common causative organism in solid organ transplant recipients. Management includes a reduction in immunosuppression, early antimicrobial therapy, and surgical decompression.

8.
Med J Armed Forces India ; 79(6): 665-671, 2023.
Article En | MEDLINE | ID: mdl-37981933

Background: Parvovirus B19 is an uncommon cause of anaemia in kidney transplant recipients (KTRs). The study aims to determine the incidence, clinical presentation, laboratory findings and outcome of parvovirus B19-related anaemia in KTRs. Method: We conducted a 12-year retrospective, single-centre study describing the clinical profile of KTRs with parvovirus B19-related anaemia. Result: Amongst the 714 patients who underwent kidney transplantation between January 2011 and January 2023 (cumulative follow-up: 1287 patient-years), seven patients (six females and one male) developed parvovirus B19-related anaemia. The incidence proportion (risk) was 0.98%, with an incidence rate of 5.43 cases per 1000 patient-years. The median duration from transplant to development of anaemia was 6 weeks (range: 4-40 weeks). The mean fall in haemoglobin was 2.88 ± 1.55 g/dl; concomitant leukopenia and thrombocytopenia were observed in 57.1% and 28.6% of patients, respectively. Three patients responded to a reduction in immunosuppression; the four non-responders required the administration of low-dose intravenous immunoglobulin. The mean duration from initiation of therapy to a sustained rise in haemoglobin was 7.71 ± 2.62 weeks. None of the patients had a relapse of the infection. Conclusions: Parvovirus B19 infection is an uncommon cause of post-transplant refractory anaemia. The keys to successfully managing such patients include a high index of suspicion, early diagnosis and reduction of immunosuppression, with or without administration of intravenous immunoglobulin.
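
The incidence figures above follow from the cohort size, event count and cumulative follow-up; a small worked example using the numbers reported in the abstract.

```python
# Hedged sketch: incidence proportion and incidence rate per 1000 patient-years,
# computed from the cohort size, event count and follow-up quoted in the abstract.
n_recipients = 714
n_events = 7                       # parvovirus B19-related anaemia cases
follow_up_patient_years = 1287

incidence_proportion = n_events / n_recipients * 100          # percent of recipients
incidence_rate = n_events / follow_up_patient_years * 1000    # per 1000 patient-years
print(f"incidence proportion: {incidence_proportion:.2f}%")           # ~0.98%
print(f"incidence rate: {incidence_rate:.2f} per 1000 patient-years") # ~5.4
```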

9.
medRxiv ; 2023 Aug 23.
Article En | MEDLINE | ID: mdl-37662283

Background: The gold standard for outcome adjudication in clinical trials is chart review by a physician clinical events committee (CEC), which requires substantial time and expertise. Automated adjudication by natural language processing (NLP) may offer a more resource-efficient alternative. We previously showed that the Community Care Cohort Project (C3PO) NLP model adjudicates heart failure (HF) hospitalizations accurately within one healthcare system. Methods: This study externally validated the C3PO NLP model against CEC adjudication in the INVESTED trial. INVESTED compared influenza vaccination formulations in 5260 patients with cardiovascular disease at 157 North American sites. A central CEC adjudicated the cause of hospitalizations from medical records. We applied the C3PO NLP model to medical records from 4060 INVESTED hospitalizations and evaluated agreement between the NLP and final consensus CEC HF adjudications. We then fine-tuned the C3PO NLP model (C3PO+INVESTED) and trained a de novo model using half the INVESTED hospitalizations, and evaluated these models in the other half. NLP performance was benchmarked to CEC reviewer inter-rater reproducibility. Results: 1074 hospitalizations (26%) were adjudicated as HF by the CEC. There was high agreement between the C3PO NLP and CEC HF adjudications (agreement 87%, kappa statistic 0.69). C3PO NLP model sensitivity was 94% and specificity was 84%. The fine-tuned C3PO and de novo NLP models demonstrated agreement of 93% and kappa of 0.82 and 0.83, respectively. CEC reviewer inter-rater reproducibility was 94% (kappa 0.85). Conclusion: Our NLP model developed within a single healthcare system accurately identified HF events relative to the gold-standard CEC in an external multi-center clinical trial. Fine-tuning the model improved agreement and approximated human reproducibility. NLP may improve the efficiency of future multi-center clinical trials by accurately identifying clinical events at scale.

10.
Circ Genom Precis Med ; 16(4): 340-349, 2023 08.
Article En | MEDLINE | ID: mdl-37278238

BACKGROUND: Artificial intelligence (AI) models applied to 12-lead ECG waveforms can predict atrial fibrillation (AF), a heritable and morbid arrhythmia. However, the factors forming the basis of risk predictions from AI models are usually not well understood. We hypothesized that there might be a genetic basis for the risk estimates produced by an AI algorithm that predicts the 5-year risk of new-onset AF from 12-lead ECGs (ECG-AI). METHODS: We applied a validated ECG-AI model for predicting incident AF to ECGs from 39 986 UK Biobank participants without AF. We then performed a genome-wide association study (GWAS) of the predicted AF risk and compared it with an AF GWAS and a GWAS of risk estimates from a clinical variable model. RESULTS: In the ECG-AI GWAS, we identified 3 signals (P < 5 × 10⁻⁸) at established AF susceptibility loci marked by the sarcomeric gene TTN and the sodium channel genes SCN5A and SCN10A. We also identified 2 novel loci near the genes VGLL2 and EXT1. In contrast, the clinical variable model prediction GWAS indicated a different genetic profile. In genetic correlation analysis, the prediction from the ECG-AI model was estimated to have a higher correlation with AF than that from the clinical variable model. CONCLUSIONS: Predicted AF risk from an ECG-AI model is influenced by genetic variation implicating sarcomeric, ion channel and body height pathways. ECG-AI models may identify individuals at risk for disease via specific biological pathways.
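
A toy illustration of a genome-wide-style association scan of a continuous phenotype (here a model-predicted AF risk) against genotype dosages, with the conventional P < 5 × 10⁻⁸ threshold; real GWAS pipelines add covariates, relatedness adjustment and quality control, and this sketch uses simulated data only.

```python
# Hedged sketch: per-variant association scan of a continuous phenotype against
# genotype dosages, thresholded at the conventional genome-wide P < 5e-8.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_people, n_snps = 5000, 200
dosages = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)  # 0/1/2 alleles
predicted_risk = 0.5 * dosages[:, 0] + rng.normal(size=n_people)     # SNP 0 is causal

p_values = np.array([
    stats.linregress(dosages[:, j], predicted_risk).pvalue for j in range(n_snps)
])
hits = np.where(p_values < 5e-8)[0]
print("genome-wide significant variants:", hits, p_values[hits])
```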


Atrial Fibrillation , Deep Learning , Humans , Atrial Fibrillation/diagnosis , Atrial Fibrillation/genetics , Genetic Predisposition to Disease , Artificial Intelligence , Genome-Wide Association Study , Electrocardiography
12.
Indian J Nephrol ; 33(2): 119-124, 2023.
Article En | MEDLINE | ID: mdl-37234443

Introduction: The clinical practice guidelines for peritoneal access state that no particular peritoneal dialysis catheter (PDC) type has been proven superior to another. We present our experience with the use of different PDC tip designs. Method: The study is a retrospective, real-world, observational outcome analysis correlating PDC tip design (straight vs. coiled-tip) with technique survival. The primary outcome was technique survival, and the secondary outcomes included catheter migration and infectious complications. Result: A total of 50 PDCs (28 coiled-tip and 22 straight-tip) were implanted between March 2017 and April 2019 using a guided percutaneous approach. The 1-month and 1-year technique survival in the coiled-tip PDC group was 96.4% and 92.8%, respectively. Of the two coiled-tip catheters lost, one was a consequence of the patient having undergone live related kidney transplantation. The corresponding 1-month and 1-year technique survival with straight-tip PDC was 86.4% and 77.3%, respectively. When compared to straight-tip PDC, the use of coiled-tip PDC was associated with fewer early migrations (3.6% vs. 31.8%; odds ratio (OR): 12.6; 95% confidence interval (CI): 1.41-112.39; P = 0.02) and a trend toward favorable 1-year technique survival (P = 0.07; number needed to treat = 11). Therapy-related complications noted in the study included peri-catheter leak and PD peritonitis. The PD peritonitis rate in the coiled-tip and straight-tip groups was 0.14 and 0.11 events per patient-year, respectively. Conclusion: The use of coiled-tip PDC, when placed using a guided percutaneous approach, reduces early catheter migration and shows a trend toward favorable long-term technique survival.
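
The reported odds ratio for early migration can be reproduced from the 2 × 2 counts implied by the abstract (7 of 22 straight-tip vs. 1 of 28 coiled-tip catheters); a short sketch using statsmodels.

```python
# Hedged sketch: odds ratio and 95% CI for early catheter migration, straight-tip
# vs. coiled-tip, from the counts implied by the abstract (7/22 vs. 1/28).
import numpy as np
import statsmodels.api as sm

#                  migrated   not migrated
table = np.array([[7, 22 - 7],               # straight-tip PDC
                  [1, 28 - 1]])              # coiled-tip PDC

t22 = sm.stats.Table2x2(table)
lo, hi = t22.oddsratio_confint()
print(f"OR = {t22.oddsratio:.1f} (95% CI {lo:.2f}-{hi:.2f})")   # ~12.6 (1.41-112)
```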

13.
Cardiovasc Digit Health J ; 4(2): 48-59, 2023 Apr.
Article En | MEDLINE | ID: mdl-37101945

Background: Differentiating among cardiac diseases associated with left ventricular hypertrophy (LVH) informs diagnosis and clinical care. Objective: To evaluate if artificial intelligence-enabled analysis of the 12-lead electrocardiogram (ECG) facilitates automated detection and classification of LVH. Methods: We used a pretrained convolutional neural network to derive numerical representations of 12-lead ECG waveforms from patients in a multi-institutional healthcare system who had cardiac diseases associated with LVH (n = 50,709), including cardiac amyloidosis (n = 304), hypertrophic cardiomyopathy (n = 1056), hypertension (n = 20,802), aortic stenosis (n = 446), and other causes (n = 4766). We then regressed LVH etiologies relative to no LVH on age, sex, and the numerical 12-lead representations using logistic regression ("LVH-Net"). To assess deep learning model performance on single-lead data analogous to mobile ECGs, we also developed 2 single-lead deep learning models by training models on lead I ("LVH-Net Lead I") or lead II ("LVH-Net Lead II") from the 12-lead ECG. We compared the performance of the LVH-Net models to alternative models fit on (1) age, sex, and standard ECG measures, and (2) clinical ECG-based rules for diagnosing LVH. Results: The areas under the receiver operator characteristic curve of LVH-Net by specific LVH etiology were cardiac amyloidosis 0.95 [95% CI, 0.93-0.97], hypertrophic cardiomyopathy 0.92 [95% CI, 0.90-0.94], aortic stenosis LVH 0.90 [95% CI, 0.88-0.92], hypertensive LVH 0.76 [95% CI, 0.76-0.77], and other LVH 0.69 [95% CI 0.68-0.71]. The single-lead models also discriminated LVH etiologies well. Conclusion: An artificial intelligence-enabled ECG model is favorable for detection and classification of LVH and outperforms clinical ECG-based rules.
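
A hedged sketch of the modelling approach named above: multinomial logistic regression on pretrained ECG embeddings plus age and sex to discriminate LVH aetiologies, scored by one-vs-rest AUROC; the embeddings, labels and class names are synthetic stand-ins, not the LVH-Net training data.

```python
# Hedged sketch: logistic regression on pretrained ECG embeddings plus age and sex
# to classify LVH aetiology; evaluated with macro one-vs-rest AUROC. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n, d = 5000, 64                                     # patients, embedding dimension
X = np.hstack([rng.normal(size=(n, d)),             # pretrained ECG waveform embeddings
               rng.normal(60, 12, (n, 1)),          # age
               rng.integers(0, 2, (n, 1))])         # sex
y = rng.integers(0, 4, n)                           # 0 = no LVH, 1 = amyloid, 2 = HCM, 3 = hypertensive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)
print("macro one-vs-rest AUROC:",
      round(roc_auc_score(y_te, proba, multi_class="ovr", average="macro"), 3))
```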

15.
Geobiology ; 21(2): 175-192, 2023 03.
Article En | MEDLINE | ID: mdl-36329603

The end-Triassic biodiversity crisis was one of the most severe mass extinctions in the history of animal life. However, the extent to which the loss of taxonomic diversity was coupled with a reduction in organismal abundance remains to be quantified. Further, the temporal relationship between organismal abundance and local marine redox conditions is lacking in carbonate sections. To address these questions, we measured skeletal grain abundance in shallow-marine limestones by point counting 293 thin sections from four stratigraphic sections across the Triassic/Jurassic boundary in the Lombardy Basin and Apennine Platform of western Tethys. Skeletal abundance decreased abruptly across the Triassic/Jurassic boundary in all stratigraphic sections. The abundance of skeletal organisms remained low throughout the lower-middle Hettangian strata and began to rebound during the late Hettangian and early Sinemurian. A two-way ANOVA indicates that sample age (p < .01, η² = 0.30) explains more of the variation in skeletal abundance than the depositional environment or paleobathymetry (p < .01, η² = 0.15). Measured I/Ca ratios, a proxy for local shallow-marine redox conditions, show this same pattern with the lowest I/Ca ratios occurring in the early Hettangian. The close correspondence between oceanic water column oxygen levels and skeletal abundance indicates a connection between redox conditions and benthic organismal abundance across the Triassic/Jurassic boundary. These findings indicate that the end-Triassic mass extinction reduced not only the biodiversity but also the carrying capacity for skeletal organisms in early Hettangian ecosystems, adding to evidence that mass extinction of species generally leads to mass rarity among survivors.
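
A small sketch of a two-way ANOVA with eta-squared (η²) computed from the sums of squares, in the spirit of the analysis above; the data frame, factor levels and column names are illustrative, not the measured point counts.

```python
# Hedged sketch: two-way ANOVA of skeletal grain abundance against sample age and
# depositional environment, with eta-squared derived from the sums of squares.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 293                                              # one row per thin section
df = pd.DataFrame({
    "skeletal_abundance": rng.uniform(0, 40, n),     # % skeletal grains (point counts)
    "age_bin": rng.choice(["Rhaetian", "early_Hettangian", "late_Hettangian"], n),
    "environment": rng.choice(["inner_ramp", "mid_ramp", "outer_ramp"], n),
})

model = smf.ols("skeletal_abundance ~ C(age_bin) + C(environment)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
anova["eta_sq"] = anova["sum_sq"] / anova["sum_sq"].sum()   # share of total variation
print(anova[["sum_sq", "F", "PR(>F)", "eta_sq"]])
```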


Ecosystem , Extinction, Biological , Animals , Fossils , Oxygen , Biodiversity , Biological Evolution
16.
Cardiovasc Digit Health J ; 3(4): 161-170, 2022 Aug.
Article En | MEDLINE | ID: mdl-36046430

Background and Objective: Postexercise heart rate recovery (HRR) is an important indicator of cardiac autonomic function, and abnormal HRR is associated with adverse outcomes. We hypothesized that deep learning on resting electrocardiogram (ECG) tracings may identify individuals with impaired HRR. Methods: We trained a deep learning model (convolutional neural network) to infer HRR based on resting ECG waveforms (HRRpred) among UK Biobank participants who had undergone exercise testing. We examined the association of HRRpred with incident cardiovascular disease using Cox models, and investigated the genetic architecture of HRRpred in genome-wide association analysis. Results: Among 56,793 individuals (mean age 57 years, 51% women), the HRRpred model was moderately correlated with actual HRR (r = 0.48, 95% confidence interval [CI] 0.47-0.48). Over a median follow-up of 10 years, we observed 2060 incident diabetes mellitus (DM) events, 862 heart failure events, and 2065 deaths. Higher HRRpred was associated with lower risk of DM (hazard ratio [HR] 0.79 per 1 standard deviation change, 95% CI 0.76-0.83), heart failure (HR 0.89, 95% CI 0.83-0.95), and death (HR 0.83, 95% CI 0.79-0.86). After accounting for resting heart rate, the associations of HRRpred with incident DM and all-cause mortality were similar. Genetic determinants of HRRpred included known heart rate, cardiac conduction system, cardiomyopathy, and metabolic trait loci. Conclusion: Deep learning-derived estimates of HRR using the resting ECG were independently associated with future clinical outcomes, including new-onset DM and all-cause mortality. Inferring the postexercise heart rate response from a resting ECG may have clinical implications, and its impact on preventive strategies warrants future study.

17.
JMIR Med Inform ; 10(9): e38178, 2022 Sep 16.
Article En | MEDLINE | ID: mdl-35960155

BACKGROUND: Cardiac magnetic resonance imaging (CMR) is a powerful diagnostic modality that provides detailed quantitative assessment of cardiac anatomy and function. Automated extraction of CMR measurements from clinical reports that are typically stored as unstructured text in electronic health record systems would facilitate their use in research. Existing machine learning approaches either rely on large quantities of expert annotation or require the development of engineered rules that are time-consuming and are specific to the setting in which they were developed. OBJECTIVE: We hypothesize that the use of pretrained transformer-based language models may enable label-efficient numerical extraction from clinical text without the need for heuristics or large quantities of expert annotations. Here, we fine-tuned pretrained transformer-based language models on a small quantity of CMR annotations to extract 21 CMR measurements. We assessed the effect of clinical pretraining to reduce labeling needs and explored alternative representations of numerical inputs to improve performance. METHODS: Our study sample comprised 99,252 patients that received longitudinal cardiology care in a multi-institutional health care system. There were 12,720 available CMR reports from 9280 patients. We adapted PRAnCER (Platform Enabling Rapid Annotation for Clinical Entity Recognition), an annotation tool for clinical text, to collect annotations from a study clinician on 370 reports. We experimented with 5 different representations of numerical quantities and several model weight initializations. We evaluated extraction performance using macroaveraged F1-scores across the measurements of interest. We applied the best-performing model to extract measurements from the remaining CMR reports in the study sample and evaluated established associations between selected extracted measures with clinical outcomes to demonstrate validity. RESULTS: All combinations of weight initializations and numerical representations obtained excellent performance on the gold-standard test set, suggesting that transformer models fine-tuned on a small set of annotations can effectively extract numerical quantities. Our results further indicate that custom numerical representations did not appear to have a significant impact on extraction performance. The best-performing model achieved a macroaveraged F1-score of 0.957 across the evaluated CMR measurements (range 0.92 for the lowest-performing measure of left atrial anterior-posterior dimension to 1.0 for the highest-performing measures of left ventricular end systolic volume index and left ventricular end systolic diameter). Application of the best-performing model to the study cohort yielded 136,407 measurements from all available reports in the study sample. We observed expected associations between extracted left ventricular mass index, left ventricular ejection fraction, and right ventricular ejection fraction with clinical outcomes like atrial fibrillation, heart failure, and mortality. CONCLUSIONS: This study demonstrated that a domain-agnostic pretrained transformer model is able to effectively extract quantitative clinical measurements from diagnostic reports with a relatively small number of gold-standard annotations. The proposed workflow may serve as a roadmap for other quantitative entity extraction.
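
A brief sketch of the macro-averaged F1 evaluation described above, reducing extraction to a per-measurement correct/incorrect label; the measurement names and labels are illustrative, not the study annotations.

```python
# Hedged sketch: macro-averaged F1 across report-level measurements. Extraction is
# reduced to "was the correct value recovered for this measurement" (1/0); the
# measurement names, report count and labels are illustrative stand-ins.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(7)
measurements = ["LVEF", "LV_mass_index", "RVEF", "LA_AP_dimension"]  # a few of the 21
n_reports = 370

per_measure_f1 = []
for m in measurements:
    truth = rng.integers(0, 2, n_reports)                             # gold-standard label
    pred = np.where(rng.random(n_reports) < 0.95, truth, 1 - truth)   # model output
    per_measure_f1.append(f1_score(truth, pred))

print({m: round(f, 3) for m, f in zip(measurements, per_measure_f1)})
print("macro-averaged F1:", round(np.mean(per_measure_f1), 3))
```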

18.
NPJ Digit Med ; 5(1): 47, 2022 Apr 08.
Article En | MEDLINE | ID: mdl-35396454

Electronic health record (EHR) datasets are statistically powerful but are subject to ascertainment bias and missingness. Using the Mass General Brigham multi-institutional EHR, we approximated a community-based cohort by sampling patients receiving longitudinal primary care between 2001 and 2018 (Community Care Cohort Project [C3PO], n = 520,868). We utilized natural language processing (NLP) to recover vital signs from unstructured notes. We assessed the validity of C3PO by deploying established risk models for myocardial infarction/stroke and atrial fibrillation. We then compared C3PO to Convenience Samples including all individuals from the same EHR with complete data, but without a longitudinal primary care requirement. NLP reduced the missingness of vital signs by 31%. NLP-recovered vital signs were highly correlated with values derived from structured fields (Pearson r range 0.95-0.99). Atrial fibrillation and myocardial infarction/stroke incidence were lower, and risk models were better calibrated, in C3PO than in the Convenience Samples (calibration error range for myocardial infarction/stroke: 0.012-0.030 in C3PO vs. 0.028-0.046 in Convenience Samples; calibration error for atrial fibrillation 0.028 in C3PO vs. 0.036 in Convenience Samples). Sampling patients receiving regular primary care and using NLP to recover missing data may reduce bias and maximize the generalizability of EHR research.
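
A hedged sketch of a decile-based calibration error (mean absolute gap between predicted risk and observed event rate across risk deciles), similar in spirit to the calibration metric quoted above; the predictions and outcomes are simulated.

```python
# Hedged sketch: calibration error as the mean absolute difference between
# predicted risk and observed event rate within risk deciles. Synthetic data.
import numpy as np

rng = np.random.default_rng(8)
predicted_risk = rng.uniform(0, 0.2, 50_000)                 # model-predicted risk
observed = rng.random(50_000) < predicted_risk               # synthetic outcomes

deciles = np.quantile(predicted_risk, np.linspace(0, 1, 11)) # decile edges
bins = np.clip(np.digitize(predicted_risk, deciles[1:-1]), 0, 9)

calibration_error = np.mean([
    abs(predicted_risk[bins == b].mean() - observed[bins == b].mean())
    for b in range(10)
])
print(f"calibration error: {calibration_error:.4f}")
```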

20.
Circulation ; 145(2): 122-133, 2022 01 11.
Article En | MEDLINE | ID: mdl-34743566

BACKGROUND: Artificial intelligence (AI)-enabled analysis of 12-lead ECGs may facilitate efficient estimation of incident atrial fibrillation (AF) risk. However, it remains unclear whether AI provides meaningful and generalizable improvement in predictive accuracy beyond clinical risk factors for AF. METHODS: We trained a convolutional neural network (ECG-AI) to infer 5-year incident AF risk using 12-lead ECGs in patients receiving longitudinal primary care at Massachusetts General Hospital (MGH). We then fit 3 Cox proportional hazards models, composed of ECG-AI 5-year AF probability, CHARGE-AF clinical risk score (Cohorts for Heart and Aging in Genomic Epidemiology-Atrial Fibrillation), and terms for both ECG-AI and CHARGE-AF (CH-AI), respectively. We assessed model performance by calculating discrimination (area under the receiver operating characteristic curve) and calibration in an internal test set and 2 external test sets (Brigham and Women's Hospital [BWH] and UK Biobank). Models were recalibrated to estimate 2-year AF risk in the UK Biobank given limited available follow-up. We used saliency mapping to identify ECG features most influential on ECG-AI risk predictions and assessed correlation between ECG-AI and CHARGE-AF linear predictors. RESULTS: The training set comprised 45 770 individuals (age 55±17 years, 53% women, 2171 AF events) and the test sets comprised 83 162 individuals (age 59±13 years, 56% women, 2424 AF events). Area under the receiver operating characteristic curve was comparable using CHARGE-AF (MGH, 0.802 [95% CI, 0.767-0.836]; BWH, 0.752 [95% CI, 0.741-0.763]; UK Biobank, 0.732 [95% CI, 0.704-0.759]) and ECG-AI (MGH, 0.823 [95% CI, 0.790-0.856]; BWH, 0.747 [95% CI, 0.736-0.759]; UK Biobank, 0.705 [95% CI, 0.673-0.737]). Area under the receiver operating characteristic curve was highest using CH-AI (MGH, 0.838 [95% CI, 0.807 to 0.869]; BWH, 0.777 [95% CI, 0.766 to 0.788]; UK Biobank, 0.746 [95% CI, 0.716 to 0.776]). Calibration error was low using ECG-AI (MGH, 0.0212; BWH, 0.0129; UK Biobank, 0.0035) and CH-AI (MGH, 0.012; BWH, 0.0108; UK Biobank, 0.0001). In saliency analyses, the ECG P-wave had the greatest influence on AI model predictions. ECG-AI and CHARGE-AF linear predictors were correlated (Pearson r: MGH, 0.61; BWH, 0.66; UK Biobank, 0.41). CONCLUSIONS: AI-based analysis of 12-lead ECGs has similar predictive usefulness to a clinical risk factor model for incident AF and the approaches are complementary. ECG-AI may enable efficient quantification of future AF risk.
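
A toy comparison of discrimination for a clinical risk score, an ECG-based probability, and their combination, measured by AUROC; the scores below are simulated and stand in for CHARGE-AF and ECG-AI rather than reproducing them.

```python
# Hedged sketch: AUROC of a clinical score, an ECG-based probability, and a
# logistic combination of both for an incident-AF outcome. All values simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
n = 20_000
latent = rng.normal(size=n)                            # unobserved true AF propensity
charge_af = latent + rng.normal(scale=1.0, size=n)     # clinical score (noisy view)
ecg_ai = latent + rng.normal(scale=1.0, size=n)        # ECG-AI probability (noisy view)
af_event = (latent + rng.normal(scale=1.0, size=n)) > 1.5

# Combine the two predictors; complementary noisy views usually add discrimination.
features = np.column_stack([charge_af, ecg_ai])
combined = LogisticRegression().fit(features, af_event).decision_function(features)

for name, score in [("CHARGE-AF", charge_af), ("ECG-AI", ecg_ai), ("CH-AI", combined)]:
    print(f"{name}: AUROC = {roc_auc_score(af_event, score):.3f}")
```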


Atrial Fibrillation/diagnosis , Deep Learning/standards , Electrocardiography/methods , Atrial Fibrillation/pathology , Female , Humans , Male , Middle Aged , Risk Factors
...