Results 1 - 20 of 46
1.
Eur Heart J ; 45(22): 2002-2012, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38503537

ABSTRACT

BACKGROUND AND AIMS: Early identification of cardiac structural abnormalities indicative of heart failure is crucial to improving patient outcomes. Chest X-rays (CXRs) are routinely conducted on a broad population of patients, presenting an opportunity to build scalable screening tools for structural abnormalities indicative of Stage B or worse heart failure with deep learning methods. In this study, a model was developed to identify severe left ventricular hypertrophy (SLVH) and dilated left ventricle (DLV) using CXRs. METHODS: A total of 71 589 unique CXRs from 24 689 different patients completed within 1 year of echocardiograms were identified. Labels for SLVH, DLV, and a composite label indicating the presence of either were extracted from echocardiograms. A deep learning model was developed and evaluated using area under the receiver operating characteristic curve (AUROC). Performance was additionally validated on 8003 CXRs from an external site and compared against visual assessment by 15 board-certified radiologists. RESULTS: The model yielded an AUROC of 0.79 (0.76-0.81) for SLVH, 0.80 (0.77-0.84) for DLV, and 0.80 (0.78-0.83) for the composite label, with similar performance on an external data set. The model outperformed all 15 individual radiologists for predicting the composite label and achieved a sensitivity of 71% vs. 66% against the consensus vote across all radiologists at a fixed specificity of 73%. CONCLUSIONS: Deep learning analysis of CXRs can accurately detect the presence of certain structural abnormalities and may be useful in early identification of patients with LV hypertrophy and dilation. As a resource to promote further innovation, 71 589 CXRs with adjoining echocardiographic labels have been made publicly available.
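For readers unfamiliar with the reported metrics, the sketch below shows how an AUROC and the sensitivity at a fixed specificity (as in the 71% sensitivity at 73% specificity comparison above) can be computed with scikit-learn. The labels and scores are synthetic placeholders, not the study's data or the authors' code.

```python
# Minimal sketch (synthetic labels/scores): AUROC and sensitivity at a fixed specificity.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)             # hypothetical composite SLVH/DLV labels
y_score = y_true * 0.3 + rng.random(1000) * 0.7    # hypothetical model probabilities

auroc = roc_auc_score(y_true, y_score)

# Sensitivity at a fixed specificity (e.g., 73%): pick the ROC point whose
# false-positive rate is closest to 1 - specificity.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
target_specificity = 0.73
idx = np.argmin(np.abs((1 - fpr) - target_specificity))
print(f"AUROC={auroc:.2f}, sensitivity={tpr[idx]:.2f} at specificity={1 - fpr[idx]:.2f}")
```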


Subject(s)
Deep Learning , Hypertrophy, Left Ventricular , Radiography, Thoracic , Humans , Hypertrophy, Left Ventricular/diagnostic imaging , Radiography, Thoracic/methods , Female , Male , Middle Aged , Echocardiography/methods , Aged , Heart Failure/diagnostic imaging , Heart Ventricles/diagnostic imaging , ROC Curve
2.
Pediatr Crit Care Med ; 25(1): 54-61, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37966346

ABSTRACT

OBJECTIVES: Patient vital sign data charted in the electronic health record (EHR) are used for time-sensitive decisions, yet little is known about when these data become nominally available compared with when the vital sign was actually measured. The objective of this study was to determine the magnitude of any delay between when a vital sign was actually measured in a patient and when it nominally appears in the EHR. DESIGN: We performed a single-center retrospective cohort study. SETTING: Tertiary academic children's hospital. PATIENTS: A total of 5,458 patients were admitted to a PICU from January 2014 to December 2018. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: We analyzed entry and display times of all vital signs entered in the EHR. The primary outcome measurement was the time between vital sign occurrence and the nominal timing of the vital sign in the EHR. An additional outcome measurement was the frequency of batch charting. A total of 9,818,901 vital sign recordings occurred during the study period. Across the entire cohort, the median (interquartile range [IQR]) difference between time of occurrence and nominal time in the EHR was 00:41:58 (IQR, 00:13:42-01:44:10) in hours:minutes:seconds. Lag in the first 24 hours of PICU admission was 00:47:34 (IQR, 00:15:23-02:19:00), and lag in the last 24 hours was 00:38:49 (IQR, 00:13:09-01:29:22; p < 0.001). There were 1,892,143 occurrences of batch charting. CONCLUSIONS: This retrospective study shows a lag between vital sign occurrence and its appearance in the EHR, as well as a frequent practice of batch charting. The magnitude of the delay (median of approximately 40 minutes) suggests that vital signs available in the EHR for clinical review and incorporation into clinical alerts may be outdated by the time they are available.
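As an illustration of the primary outcome measurement, the sketch below computes the median and IQR of the lag between measurement time and charting time, plus a crude count of batch-charting events, using pandas. The column names, timestamps, and batch-charting heuristic are assumptions for illustration, not the study's data or definitions.

```python
# Minimal sketch (assumed column names): median and IQR of the lag between
# when a vital sign was measured and when it was charted in the EHR.
import pandas as pd

# Hypothetical table with one row per charted vital sign.
vitals = pd.DataFrame({
    "measured_time": pd.to_datetime(["2014-01-01 08:00", "2014-01-01 08:05", "2014-01-01 09:00"]),
    "charted_time":  pd.to_datetime(["2014-01-01 08:40", "2014-01-01 09:50", "2014-01-01 09:05"]),
})

lag = vitals["charted_time"] - vitals["measured_time"]
q1, median, q3 = lag.quantile([0.25, 0.5, 0.75])
print(f"median lag {median}, IQR {q1} - {q3}")

# One plausible way to flag batch charting: multiple vital signs entered at the
# same charting timestamp are grouped here (a simplification of the study's definition).
batch_sizes = vitals.groupby("charted_time").size()
print((batch_sizes > 1).sum(), "charting times with more than one entry")
```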


Subject(s)
Electronic Health Records , Vital Signs , Child , Humans , Retrospective Studies , Time Factors , Intensive Care Units, Pediatric
3.
Lancet Digit Health ; 6(1): e70-e78, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38065778

ABSTRACT

BACKGROUND: Preoperative risk assessments used in clinical practice are insufficient in their ability to identify risk for postoperative mortality. Deep-learning analysis of electrocardiography can identify hidden risk markers that can help to prognosticate postoperative mortality. We aimed to develop a prognostic model that accurately predicts postoperative mortality in patients undergoing medical procedures and who had received preoperative electrocardiographic diagnostic testing. METHODS: In a derivation cohort of preoperative patients with available electrocardiograms (ECGs) from Cedars-Sinai Medical Center (Los Angeles, CA, USA) between Jan 1, 2015 and Dec 31, 2019, a deep-learning algorithm was developed to leverage waveform signals to discriminate postoperative mortality. We randomly split patients (8:1:1) into subsets for training, internal validation, and final algorithm test analyses. Model performance was assessed using area under the receiver operating characteristic curve (AUC) values in the hold-out test dataset and in two external hospital cohorts and compared with the established Revised Cardiac Risk Index (RCRI) score. The primary outcome was post-procedural mortality across three health-care systems. FINDINGS: 45 969 patients had a complete ECG waveform image available for at least one 12-lead ECG performed within the 30 days before the procedure date (59 975 inpatient procedures and 112 794 ECGs): 36 839 patients in the training dataset, 4549 in the internal validation dataset, and 4581 in the internal test dataset. In the held-out internal test cohort, the algorithm discriminates mortality with an AUC value of 0·83 (95% CI 0·79-0·87), surpassing the discrimination of the RCRI score with an AUC of 0·67 (0·61-0·72). The algorithm similarly discriminated risk for mortality in two independent US health-care systems, with AUCs of 0·79 (0·75-0·83) and 0·75 (0·74-0·76), respectively. Patients determined to be high risk by the deep-learning model had an unadjusted odds ratio (OR) of 8·83 (5·57-13·20) for postoperative mortality compared with an unadjusted OR of 2·08 (0·77-3·50) for postoperative mortality for RCRI scores of more than 2. The deep-learning algorithm performed similarly for patients undergoing cardiac surgery (AUC 0·85 [0·77-0·92]), non-cardiac surgery (AUC 0·83 [0·79-0·88]), and catheterisation or endoscopy suite procedures (AUC 0·76 [0·72-0·81]). INTERPRETATION: A deep-learning algorithm interpreting preoperative ECGs can improve discrimination of postoperative mortality. The deep-learning algorithm worked equally well for risk stratification of cardiac surgeries, non-cardiac surgeries, and catheterisation laboratory procedures, and was validated in three independent health-care systems. This algorithm can provide additional information to clinicians making the decision to perform medical procedures and stratify the risk of future complications. FUNDING: National Heart, Lung, and Blood Institute.
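The unadjusted odds ratios quoted above (8.83 for model-flagged high-risk patients, 2.08 for RCRI scores above 2) are the kind of quantity computed from a 2x2 table; a minimal sketch of that calculation, with made-up counts rather than the study's data, is below.

```python
# Minimal sketch with made-up counts: unadjusted odds ratio and 95% Wald CI
# from a 2x2 table (high-risk flag vs. postoperative mortality).
import math

a, b = 40, 460    # hypothetical: high-risk & died, high-risk & survived
c, d = 45, 4036   # hypothetical: low-risk & died,  low-risk & survived

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```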


Subject(s)
Deep Learning , Humans , Risk Assessment/methods , Algorithms , Prognosis , Electrocardiography
4.
J Am Med Inform Assoc ; 30(6): 1022-1031, 2023 05 19.
Article in English | MEDLINE | ID: mdl-36921288

ABSTRACT

OBJECTIVE: To develop a computable representation for medical evidence and to contribute a gold standard dataset of annotated randomized controlled trial (RCT) abstracts, along with a natural language processing (NLP) pipeline for transforming free-text RCT evidence in PubMed into the structured representation. MATERIALS AND METHODS: Our representation, EvidenceMap, consists of 3 levels of abstraction: Medical Evidence Entity, Proposition, and Map, to represent the hierarchical structure of medical evidence composition. Randomly selected RCT abstracts were annotated following EvidenceMap based on the consensus of 2 independent annotators to train an NLP pipeline. Via a user study, we measured how EvidenceMap improved evidence comprehension, and we analyzed its representational capacity by comparing evidence annotation performed with the EvidenceMap representation against freeform annotation performed without any specific guidelines. RESULTS: Two corpora including 229 disease-agnostic and 80 COVID-19 RCT abstracts were annotated, yielding 12 725 entities and 1602 propositions. EvidenceMap saves users 51.9% of the time compared to reading raw-text abstracts. Most evidence elements identified during the freeform annotation were successfully represented by EvidenceMap, and users gave the enrollment, study design, and results sections mean 5-point Likert ratings of 4.85, 4.70, and 4.20, respectively. End-to-end evaluations of the pipeline show that the evidence proposition formulation achieves F1 scores of 0.84 and 0.86 on the adjusted Rand index. CONCLUSIONS: EvidenceMap extends the participant, intervention, comparator, and outcome framework into 3 levels of abstraction for transforming free-text evidence from the clinical literature into a computable structure. It can be used as an interoperable format for better evidence retrieval and synthesis and an interpretable representation to efficiently comprehend RCT findings.
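The end-to-end evaluation relies on standard metrics; the sketch below shows how an F1 score and an adjusted Rand index are computed with scikit-learn. The toy labels and groupings are invented, not the study corpora or the pipeline's outputs.

```python
# Minimal sketch: F1 for extracted propositions (toy labels) and the adjusted
# Rand index between a predicted grouping and a reference grouping.
from sklearn.metrics import f1_score, adjusted_rand_score

y_true = [1, 1, 0, 1, 0, 0, 1]          # hypothetical gold proposition labels
y_pred = [1, 0, 0, 1, 0, 1, 1]          # hypothetical pipeline output
print("F1:", round(f1_score(y_true, y_pred), 2))

gold_groups = [0, 0, 1, 1, 2, 2]        # hypothetical gold grouping of evidence elements
pred_groups = [0, 0, 1, 2, 2, 2]        # hypothetical predicted grouping
print("ARI:", round(adjusted_rand_score(gold_groups, pred_groups), 2))
```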


Subject(s)
COVID-19 , Comprehension , Humans , Natural Language Processing , PubMed
5.
AMIA Annu Symp Proc ; 2023: 289-298, 2023.
Article in English | MEDLINE | ID: mdl-38222422

ABSTRACT

Complete and accurate race and ethnicity (RE) patient information is important for many areas of biomedical informatics research, such as defining and characterizing cohorts, performing quality assessments, and identifying health inequities. Patient-level RE data is often inaccurate or missing in structured sources, but can be supplemented through clinical notes and natural language processing (NLP). While NLP has made many improvements in recent years with large language models, bias remains an often-unaddressed concern, with research showing that harmful and negative language is more often used for certain racial/ethnic groups than others. We present an approach to audit the learned associations of models trained to identify RE information in clinical text by measuring the concordance between model-derived salient features and manually identified RE-related spans of text. We show that while models perform well on the surface, there exist concerning learned associations and potential for future harms from RE-identification models if left unaddressed.
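One simple way to operationalize the concordance measure described above is token-level overlap between model-salient tokens and manually annotated RE-related spans; the sketch below illustrates that idea with hypothetical token indices. The function and inputs are ours, not the paper's implementation.

```python
# Minimal sketch of one way to score concordance: overlap between the tokens a
# saliency method highlights and the tokens inside manually annotated RE spans.
def concordance(salient_tokens: set[int], annotated_tokens: set[int]) -> float:
    """Fraction of salient tokens that fall inside annotated RE-related spans."""
    if not salient_tokens:
        return 0.0
    return len(salient_tokens & annotated_tokens) / len(salient_tokens)

# Hypothetical token indices for one note.
salient = {3, 4, 10, 11}        # e.g., top-k tokens by attribution score
annotated = {3, 4, 5}           # tokens inside human-marked RE mentions
print(f"concordance = {concordance(salient, annotated):.2f}")   # 0.50
```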


Subject(s)
Deep Learning , Ethnicity , Humans , Language , Natural Language Processing
6.
J Am Coll Cardiol ; 80(6): 613-626, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35926935

ABSTRACT

BACKGROUND: Valvular heart disease is an important contributor to cardiovascular morbidity and mortality and remains underdiagnosed. Deep learning analysis of electrocardiography (ECG) may be useful in detecting aortic stenosis (AS), aortic regurgitation (AR), and mitral regurgitation (MR). OBJECTIVES: This study aimed to develop ECG deep learning algorithms to identify moderate or severe AS, AR, and MR alone and in combination. METHODS: A total of 77,163 patients undergoing ECG within 1 year before echocardiography from 2005-2021 were identified and split into train (n = 43,165), validation (n = 12,950), and test sets (n = 21,048; 7.8% with any of AS, AR, or MR). Model performance was assessed using area under the receiver-operating characteristic (AU-ROC) and precision-recall curves. Outside validation was conducted on an independent data set. Test accuracy was modeled using different disease prevalence levels to simulate screening efficacy using the deep learning model. RESULTS: The deep learning algorithm model accuracy was as follows: AS (AU-ROC: 0.88), AR (AU-ROC: 0.77), MR (AU-ROC: 0.83), and any of AS, AR, or MR (AU-ROC: 0.84; sensitivity 78%, specificity 73%) with similar accuracy in external validation. In screening program modeling, test characteristics were dependent on underlying prevalence and selected sensitivity levels. At a prevalence of 7.8%, the positive and negative predictive values were 20% and 97.6%, respectively. CONCLUSIONS: Deep learning analysis of the ECG can accurately detect AS, AR, and MR in this multicenter cohort and may serve as the basis for the development of a valvular heart disease screening program.
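The screening-program figures above (positive predictive value of 20% and negative predictive value of 97.6% at 7.8% prevalence) follow directly from the reported sensitivity and specificity; the short sketch below reproduces them approximately. The inputs are the rounded values from the abstract, so the outputs differ slightly from the published figures.

```python
# Worked sketch: predictive values implied by the reported sensitivity,
# specificity, and prevalence (values are rounded, so results are approximate).
def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = predictive_values(sens=0.78, spec=0.73, prev=0.078)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # roughly 20% and 97.5%
```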


Subject(s)
Aortic Valve Insufficiency , Aortic Valve Stenosis , Deep Learning , Heart Valve Diseases , Mitral Valve Insufficiency , Aortic Valve Insufficiency/diagnosis , Aortic Valve Stenosis/diagnosis , Electrocardiography , Heart Valve Diseases/diagnosis , Heart Valve Diseases/epidemiology , Humans , Mitral Valve Insufficiency/diagnosis , Mitral Valve Insufficiency/epidemiology
7.
Am J Transplant ; 22(5): 1372-1381, 2022 05.
Article in English | MEDLINE | ID: mdl-35000284

ABSTRACT

Deceased donor kidney allocation follows a ranked match-run of potential recipients. Organ procurement organizations (OPOs) are permitted to deviate from the mandated match-run in exceptional circumstances. Using match-run data for all deceased donor kidney transplants (Ktx) in the US between 2015 and 2019, we identified 1544 kidneys transplanted from 933 donors with an OPO-initiated allocation exception. Most OPOs (55/58) used this process at least once, but 3 OPOs performed 64% of the exceptions and just 2 transplant centers received 25% of allocation exception Ktx. At 2 of the 3 outlier OPOs, these transplants increased 136% and 141% between 2015 and 2019 compared to only a 35% increase in all Ktx. Allocation exception donors had less favorable characteristics (median KDPI 70, 41% with history of hypertension), but only 29% had KDPI ≥ 85% and the majority did not meet the traditional threshold for marginal kidneys. Allocation exception kidneys went to larger centers with higher offer acceptance ratios and to recipients with 2 fewer priority points, equivalent to 2 fewer years of waiting time. OPO-initiated exceptions for kidney allocation are growing increasingly frequent and more concentrated at a few outlier centers. Increasing pressure to improve organ utilization risks increasing out-of-sequence allocations, potentially exacerbating disparities in access to transplantation.


Subject(s)
Kidney Transplantation , Tissue and Organ Procurement , Transplants , Humans , Kidney , Tissue Donors
8.
J Am Med Inform Assoc ; 28(9): 1955-1963, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34270710

ABSTRACT

OBJECTIVE: To propose an algorithm that uses only the timestamps of longitudinal electronic health record data to classify clinical deterioration events. MATERIALS AND METHODS: This retrospective study explores the efficacy of machine learning algorithms in classifying clinical deterioration events among patients in intensive care units using sequences of timestamps of vital sign measurements, flowsheet comments, order entries, and nursing notes. We designed a data pipeline to partition events into discrete, regular time bins that we refer to as timesteps. Logistic regression, random forest, and recurrent neural network models were trained on datasets with different timestep lengths against a composite outcome of death, cardiac arrest, and Rapid Response Team calls, and were then validated on a holdout dataset. RESULTS: A total of 6720 intensive care unit encounters met the criteria, and the final dataset includes 830 578 timestamps. The gated recurrent unit model, using timestamps of vital signs, order entries, flowsheet comments, and nursing notes, achieved the best performance on the time-to-outcome dataset, with an area under the precision-recall curve of 0.101 (0.06, 0.137), a sensitivity of 0.443, and a positive predictive value of 0.092 at a threshold of 0.6. DISCUSSION AND CONCLUSION: This study demonstrates that recurrent neural network models using only the timestamps of longitudinal electronic health record data, which reflect healthcare processes, achieve good discriminative performance.
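The timestep construction described in the methods can be illustrated with pandas: the sketch below bins hypothetical event timestamps into fixed-width windows and counts events per bin and type. Column names, the bin width, and the data are assumptions for illustration only.

```python
# Minimal sketch (assumed column names): partitioning event timestamps into
# regular, discrete time bins ("timesteps") and counting events per bin per type.
import pandas as pd

events = pd.DataFrame({
    "encounter_id": [1, 1, 1, 1],
    "event_type": ["vital_sign", "order_entry", "vital_sign", "nursing_note"],
    "timestamp": pd.to_datetime([
        "2018-03-01 00:05", "2018-03-01 00:40", "2018-03-01 01:10", "2018-03-01 01:55",
    ]),
})

events["timestep"] = events["timestamp"].dt.floor("1h")   # 1-hour bins, for illustration
counts = (events
          .groupby(["encounter_id", "timestep", "event_type"])
          .size()
          .unstack(fill_value=0))
print(counts)   # one row per timestep, one column per event type
```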


Subject(s)
Clinical Deterioration , Electronic Health Records , Humans , Machine Learning , Retrospective Studies , Vital Signs
9.
J Biomed Inform ; 121: 103870, 2021 09.
Article in English | MEDLINE | ID: mdl-34302957

ABSTRACT

Evidence-Based Medicine (EBM) encourages clinicians to seek the most reputable evidence. The quality of evidence is organized in a hierarchy in which randomized controlled trials (RCTs) are regarded as least biased. However, RCTs are plagued by poor generalizability, impeding the translation of clinical research to practice. Though the presence of poor external validity is known, the factors that contribute to poor generalizability have not been summarized and placed in a framework. We propose a new population-oriented conceptual framework to facilitate consistent and comprehensive evaluation of generalizability, replicability, and assessment of RCT study quality.


Subject(s)
Evidence-Based Medicine , Randomized Controlled Trials as Topic , Research Design
10.
J Am Med Inform Assoc ; 28(9): 1970-1976, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34151966

ABSTRACT

Clinical notes present a wealth of information for applications in the clinical domain, but heterogeneity across clinical institutions and settings presents challenges for their processing. The clinical natural language processing field has made strides in overcoming domain heterogeneity, while pretrained deep learning models present opportunities to transfer knowledge from one task to another. Pretrained models have performed well when transferred to new tasks; however, it is not well understood if these models generalize across differences in institutions and settings within the clinical domain. We explore if institution or setting specific pretraining is necessary for pretrained models to perform well when transferred to new tasks. We find no significant performance difference between models pretrained across institutions and settings, indicating that clinically pretrained models transfer well across such boundaries. Given a clinically pretrained model, clinical natural language processing researchers may forgo the time-consuming pretraining step without a significant performance drop.
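As a rough illustration of reusing a pretrained model on a new task, the sketch below loads a pretrained checkpoint with the Hugging Face transformers API and prepares it for sequence classification. The checkpoint shown is a generic placeholder; in practice one would substitute a clinically pretrained model of the kind studied above.

```python
# Minimal sketch: reusing a pretrained language model on a new classification task.
# "bert-base-uncased" is a generic placeholder; per the finding above, a clinically
# pretrained checkpoint from any institution/setting could be substituted.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

batch = tokenizer(["Patient admitted with chest pain."], return_tensors="pt",
                  truncation=True, padding=True)
outputs = model(**batch)          # fine-tune on the downstream task as usual
print(outputs.logits.shape)       # (1, 2)
```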


Subject(s)
Deep Learning , Humans , Natural Language Processing , Research Personnel
11.
Transpl Int ; 34(7): 1239-1250, 2021 07.
Article in English | MEDLINE | ID: mdl-33964036

ABSTRACT

Unfavourable procurement biopsy findings are the most common reason for deceased donor kidney discard in the United States. We sought to assess the association between biopsy findings and post-transplant outcomes when donor characteristics are accounted for. We used registry data to identify 1566 deceased donors of 3132 transplanted kidneys (2015-2020) with discordant right/left procurement biopsy classification and performed time-to-event analyses to determine the association between optimal histology and hazard of death-censored graft failure or death. We then repeated all analyses using a local cohort of 147 donors of kidney pairs with detailed procurement histology data available (2006-2016). Among transplanted kidney pairs in the national cohort, there were no significant differences in incidence of delayed graft function or primary nonfunction. Time to death-censored graft failure was not significantly different between recipients of optimal versus suboptimal kidneys. Results were similar in analyses using the local cohort. Regarding recipient survival, analysis of the national, but not local, cohort showed optimal kidneys were associated with a lower hazard of death (adjusted HR 0.68, 95% CI 0.52-0.90, P = 0.006). In conclusion, in a large national cohort of deceased donor kidney pairs with discordant right/left procurement biopsy findings, we found no association between histology and death-censored graft survival.
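The time-to-event analysis described above can be sketched with the lifelines library: the example below fits a Cox proportional hazards model for death-censored graft failure on synthetic data. Variable names, covariates, and values are invented; this is not the registry analysis.

```python
# Minimal sketch (synthetic data): Cox model for death-censored graft failure
# with an indicator for "optimal" procurement histology.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_to_event": [1.2, 3.4, 0.8, 5.0, 2.2, 4.1],   # hypothetical follow-up times
    "graft_failure":  [1, 0, 1, 0, 0, 1],               # 1 = death-censored graft failure
    "optimal_histology": [1, 1, 0, 1, 0, 0],
    "kdpi": [40, 55, 80, 30, 85, 70],                   # example donor covariate
})

cph = CoxPHFitter(penalizer=0.1)   # small ridge penalty for the tiny toy sample
cph.fit(df, duration_col="years_to_event", event_col="graft_failure")
cph.print_summary()                # hazard ratios with 95% CIs
```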


Subject(s)
Kidney Transplantation , Tissue and Organ Procurement , Biopsy , Donor Selection , Graft Survival , Humans , Kidney , Tissue Donors , Treatment Outcome , United States
12.
J Am Med Inform Assoc ; 28(8): 1703-1711, 2021 07 30.
Article in English | MEDLINE | ID: mdl-33956981

ABSTRACT

OBJECTIVE: We introduce Medical evidence Dependency (MD)-informed attention, a novel neuro-symbolic model for understanding free-text clinical trial publications with generalizability and interpretability. MATERIALS AND METHODS: We trained one head in the multi-head self-attention model to attend to the Medical evidence Dependency (MD) and to pass linguistic and domain knowledge on to later layers (MD-informed). This MD-informed attention model was integrated into BioBERT and tested on 2 public machine reading comprehension benchmarks for clinical trial publications: Evidence Inference 2.0 and PubMedQA. We also curated a small set of recently published articles reporting randomized controlled trials on COVID-19 (coronavirus disease 2019), following the Evidence Inference 2.0 guidelines, to evaluate the model's robustness to unseen data. RESULTS: The integration of the MD-informed attention head improves BioBERT substantially in both benchmark tasks (by as much as +30% in F1 score) and achieves new state-of-the-art performance on Evidence Inference 2.0. It achieves 84% overall accuracy and an 82% F1 score on the unseen COVID-19 data. CONCLUSIONS: MD-informed attention empowers neural reading comprehension models with interpretability and generalizability via reusable domain knowledge. Its compositionality can benefit any transformer-based architecture for machine reading comprehension of free-text medical evidence.
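The abstract does not give implementation detail, but the general mechanism of constraining one attention head with a symbolic dependency structure can be sketched as below: a single head's attention scores are masked by a Medical evidence Dependency adjacency matrix while the other heads remain unconstrained. This is a rough, assumed reading of the approach, not the authors' code; all tensor names and the masking scheme are ours.

```python
# Rough sketch (not the authors' code): biasing a single attention head so that
# it attends along a dependency graph, leaving the other heads free.
import torch

def md_informed_attention(q, k, v, md_adj, informed_head=0, penalty=-1e9):
    """q, k, v: (heads, seq, dim); md_adj: (seq, seq) with 1 where a dependency holds."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)   # (heads, seq, seq)
    # For the designated head, disallow attention outside dependency links.
    scores[informed_head] = scores[informed_head].masked_fill(md_adj == 0, penalty)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

heads, seq, dim = 4, 5, 8
q, k, v = (torch.randn(heads, seq, dim) for _ in range(3))
md_adj = torch.eye(seq)                 # hypothetical dependency matrix (self-links only)
out = md_informed_attention(q, k, v, md_adj)
print(out.shape)                        # torch.Size([4, 5, 8])
```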


Subject(s)
Artificial Intelligence , Clinical Trials as Topic , Information Storage and Retrieval/methods , Models, Neurological , Natural Language Processing , COVID-19 , Computer Simulation , Data Mining , Humans , Software
13.
J Am Med Inform Assoc ; 28(7): 1480-1488, 2021 07 14.
Article in English | MEDLINE | ID: mdl-33706377

ABSTRACT

OBJECTIVE: Coronavirus disease 2019 (COVID-19) patients are at risk for resource-intensive outcomes including mechanical ventilation (MV), renal replacement therapy (RRT), and readmission. Accurate outcome prognostication could facilitate hospital resource allocation. We develop and validate predictive models for each outcome using retrospective electronic health record data for COVID-19 patients treated between March 2 and May 6, 2020. MATERIALS AND METHODS: For each outcome, we trained 3 classes of prediction models using clinical data for a cohort of SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2)-positive patients (n = 2256). Cross-validation was used to select the best-performing models per the areas under the receiver-operating characteristic and precision-recall curves. Models were validated using a held-out cohort (n = 855). We measured each model's calibration and evaluated feature importances to interpret model output. RESULTS: The predictive performance for our selected models on the held-out cohort was as follows: area under the receiver-operating characteristic curve-MV 0.743 (95% CI, 0.682-0.812), RRT 0.847 (95% CI, 0.772-0.936), readmission 0.871 (95% CI, 0.830-0.917); area under the precision-recall curve-MV 0.137 (95% CI, 0.047-0.175), RRT 0.325 (95% CI, 0.117-0.497), readmission 0.504 (95% CI, 0.388-0.604). Predictions were well calibrated, and the most important features within each model were consistent with clinical intuition. DISCUSSION: Our models produce performant, well-calibrated, and interpretable predictions for COVID-19 patients at risk for the target outcomes. They demonstrate the potential to accurately estimate outcome prognosis in resource-constrained care sites managing COVID-19 patients. CONCLUSIONS: We develop and validate prognostic models targeting MV, RRT, and readmission for hospitalized COVID-19 patients which produce accurate, interpretable predictions. Additional external validation studies are needed to further verify the generalizability of our results.
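The evaluation loop described above (cross-validated model selection by AUROC and AUPRC, followed by a calibration check) can be sketched with scikit-learn; the data below are synthetic and the logistic regression is a stand-in for the model classes compared in the study.

```python
# Minimal sketch (synthetic data): cross-validated AUROC/AUPRC and a calibration check.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.calibration import calibration_curve

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

model = LogisticRegression(max_iter=1000)
auroc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
auprc = cross_val_score(model, X, y, cv=5, scoring="average_precision").mean()
print(f"CV AUROC {auroc:.3f}, CV AUPRC {auprc:.3f}")

# Reliability curve (here refit on all data for brevity; a held-out split is preferable).
proba = model.fit(X, y).predict_proba(X)[:, 1]
frac_pos, mean_pred = calibration_curve(y, proba, n_bins=10)
print(np.round(frac_pos, 2), np.round(mean_pred, 2))
```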


Subject(s)
COVID-19/therapy , Models, Statistical , Patient Readmission , Renal Replacement Therapy , Respiration, Artificial , Adolescent , Adult , Aged , Aged, 80 and over , Area Under Curve , COVID-19/complications , Electronic Health Records , Female , Humans , Logistic Models , Male , Middle Aged , Prognosis , ROC Curve , Retrospective Studies , Statistics, Nonparametric , Young Adult
14.
Adv Neural Inf Process Syst ; 34: 2160-2172, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35859987

ABSTRACT

Deep models trained through maximum likelihood have achieved state-of-the-art results for survival analysis. Despite this training scheme, practitioners evaluate models under other criteria, such as binary classification losses at a chosen set of time horizons, e.g. Brier score (BS) and Bernoulli log likelihood (BLL). Models trained with maximum likelihood may have poor BS or BLL since maximum likelihood does not directly optimize these criteria. Directly optimizing criteria like BS requires inverse-weighting by the censoring distribution. However, estimating the censoring model under these metrics requires inverse-weighting by the failure distribution. The objective for each model requires the other, but neither are known. To resolve this dilemma, we introduce Inverse-Weighted Survival Games. In these games, objectives for each model are built from re-weighted estimates featuring the other model, where the latter is held fixed during training. When the loss is proper, we show that the games always have the true failure and censoring distributions as a stationary point. This means models in the game do not leave the correct distributions once reached. We construct one case where this stationary point is unique. We show that these games optimize BS on simulations and then apply these principles on real world cancer and critically-ill patient data.
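The inverse-weighting at the center of the dilemma described above appears in the standard IPCW Brier score: an estimate of the censoring survival function G is needed to weight observed outcomes. The sketch below computes that quantity on toy data with lifelines; it illustrates the weighting only and is not the authors' game-training procedure.

```python
# Minimal sketch: inverse-probability-of-censoring-weighted (IPCW) Brier score at horizon t.
import numpy as np
from lifelines import KaplanMeierFitter

def ipcw_brier(times, events, surv_at_t, t):
    """times/events: observed times and event indicators; surv_at_t: model-predicted S(t|x)."""
    # Censoring survival function G, estimated by Kaplan-Meier on the censoring indicator.
    km_cens = KaplanMeierFitter().fit(times, event_observed=1 - events)
    G = lambda s: np.clip(km_cens.survival_function_at_times(s).values, 1e-8, None)

    died_by_t = (times <= t) & (events == 1)
    alive_at_t = times > t
    score = np.zeros_like(surv_at_t, dtype=float)
    score[died_by_t] = surv_at_t[died_by_t] ** 2 / G(times[died_by_t])
    score[alive_at_t] = (1 - surv_at_t[alive_at_t]) ** 2 / G(np.array([t]))[0]
    return score.mean()   # censored-before-t subjects contribute zero

times = np.array([2.0, 5.0, 3.0, 8.0, 6.0])
events = np.array([1, 0, 1, 1, 0])
surv = np.array([0.6, 0.8, 0.5, 0.9, 0.7])   # hypothetical predictions S(t=4 | x)
print(round(ipcw_brier(times, events, surv, t=4.0), 3))
```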

15.
J Am Med Inform Assoc ; 28(4): 812-823, 2021 03 18.
Article in English | MEDLINE | ID: mdl-33367705

ABSTRACT

OBJECTIVE: The study sought to develop and evaluate a knowledge-based data augmentation method to improve the performance of deep learning models for biomedical natural language processing by overcoming training data scarcity. MATERIALS AND METHODS: We extended the easy data augmentation (EDA) method for biomedical named entity recognition (NER) by incorporating the Unified Medical Language System (UMLS) knowledge and called this method UMLS-EDA. We designed experiments to systematically evaluate the effect of UMLS-EDA on popular deep learning architectures for both NER and classification. We also compared UMLS-EDA to BERT. RESULTS: UMLS-EDA enables substantial improvement for NER tasks from the original long short-term memory conditional random fields (LSTM-CRF) model (micro-F1 score: +5%, + 17%, and +15%), helps the LSTM-CRF model (micro-F1 score: 0.66) outperform LSTM-CRF with transfer learning by BERT (0.63), and improves the performance of the state-of-the-art sentence classification model. The largest gain on micro-F1 score is 9%, from 0.75 to 0.84, better than classifiers with BERT pretraining (0.82). CONCLUSIONS: This study presents a UMLS-based data augmentation method, UMLS-EDA. It is effective at improving deep learning models for both NER and sentence classification, and contributes original insights for designing new, superior deep learning approaches for low-resource biomedical domains.
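UMLS-EDA builds on the synonym-replacement operation of EDA; the sketch below shows the general idea with a hard-coded synonym dictionary standing in for UMLS lookups. The dictionary, probability, and function are illustrative assumptions, not the published implementation.

```python
# Rough sketch of UMLS-guided synonym replacement for text augmentation.
# A real implementation would query the UMLS; this dictionary is made up.
import random

UMLS_SYNONYMS = {
    "myocardial infarction": ["heart attack", "MI"],
    "hypertension": ["high blood pressure", "HTN"],
}

def augment(sentence: str, p: float = 1.0, seed: int = 0) -> str:
    """Replace known terms with a randomly chosen synonym with probability p."""
    random.seed(seed)
    for term, synonyms in UMLS_SYNONYMS.items():
        if term in sentence and random.random() < p:
            sentence = sentence.replace(term, random.choice(synonyms))
    return sentence

print(augment("patient with hypertension and prior myocardial infarction"))
```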


Subject(s)
Biomedical Research , Information Storage and Retrieval/methods , Natural Language Processing , Unified Medical Language System , Data Management
16.
J Am Heart Assoc ; 10(1): e018476, 2021 01 05.
Article in English | MEDLINE | ID: mdl-33169643

ABSTRACT

Background Cardiovascular involvement in coronavirus disease 2019 (COVID-19) is common and leads to worsened mortality. Diagnostic cardiovascular studies may be helpful for resource appropriation and identifying patients at increased risk for death. Methods and Results We analyzed 887 patients (aged 64±17 years) admitted with COVID-19 from March 1 to April 3, 2020 in New York City with 12-lead electrocardiography within 2 days of diagnosis. Demographics, comorbidities, and laboratory testing, including high sensitivity cardiac troponin T (hs-cTnT), were abstracted. At 30-day follow-up, 556 patients (63%) were living without requiring mechanical ventilation, 123 (14%) were living and required mechanical ventilation, and 203 (23%) had expired. Electrocardiography findings included atrial fibrillation or atrial flutter (AF/AFL) in 46 (5%) and ST-T wave changes in 306 (38%). Twenty-seven (59%) patients with AF/AFL expired, compared with 181 (21%) of 841 with other non-life-threatening rhythms (P<0.001). Multivariable analysis incorporating age, comorbidities, AF/AFL, QRS abnormalities, ST-T wave changes, and initial hs-cTnT ≥20 ng/L showed that increased age (HR 1.04/year), elevated hs-cTnT (HR 4.57), AF/AFL (HR 2.07), and a history of coronary artery disease (HR 1.56) and active cancer (HR 1.87) were associated with increased mortality. Conclusions Myocardial injury with hs-cTnT ≥20 ng/L, in addition to cardiac conduction perturbations, especially AF/AFL, upon hospital admission for COVID-19 infection is associated with a markedly greater risk of mortality than either diagnostic abnormality alone.


Subject(s)
Atrial Fibrillation/diagnosis , COVID-19/epidemiology , Electrocardiography , Heart Rate/physiology , Risk Assessment/methods , SARS-CoV-2 , Troponin T/blood , Atrial Fibrillation/blood , Atrial Fibrillation/epidemiology , Biomarkers/blood , COVID-19/blood , Comorbidity , Female , Follow-Up Studies , Humans , Male , Middle Aged , New York City/epidemiology , Prognosis , Retrospective Studies , Risk Factors
17.
JACC Cardiovasc Imaging ; 14(6): 1221-1231, 2021 06.
Article in English | MEDLINE | ID: mdl-33221204

ABSTRACT

OBJECTIVES: This study aimed to characterize trends in technetium Tc 99m pyrophosphate (99mTc-PYP) scanning for amyloid transthyretin cardiac amyloidosis (ATTR-CA) diagnosis, to determine whether patients underwent appropriate assessment with monoclonal protein and genetic testing, to evaluate use of single-photon emission computed tomography (SPECT) in addition to planar imaging, and to identify predictive factors for ATTR-CA. BACKGROUND: 99mTc-PYP scintigraphy has been repurposed for noninvasive diagnosis of ATTR-CA. Increasing use of 99mTc-PYP can facilitate identification of ATTR-CA, but appropriate use is critical for accurate diagnosis in an era of high-cost targeted therapeutics. METHODS: Patients undergoing 99mTc-PYP scanning 1 h after injection at a quaternary care center from 2010 to 2019 were analyzed; clinical information was abstracted; and SPECT results were analyzed. RESULTS: Over the decade, endomyocardial biopsy rates remained stable, while scanning rates peaked at 132 in 2019 (p < 0.001). Among 753 patients (516 men, mean age 77 years), 307 (41%) had a visual score of 0, 177 (23%) of 1, and 269 (36%) of 2 or 3. Of 751 patients with analyzable heart-to-contralateral-chest ratios, 249 (33%) had a ratio ≥1.5. Monoclonal protein testing status was assessed in 550 patients; of these, 174 (32%) did not undergo both serum immunofixation and serum free light chain analysis tests, and 331 (60%) did not undergo all 3 tests (serum immunofixation, serum free light chain analysis, and urine protein electrophoresis). Of 196 patients with confirmed ATTR-CA, 143 (73%) had genetic testing for transthyretin mutations. In 103 patients undergoing cardiac biopsy, grades 2 and 3 99mTc-PYP had a sensitivity of 94% and a specificity of 89% for ATTR-CA, with 100% specificity for grade 3 scans. With SPECT as the reference standard, planar imaging had false positive results in 16 of 25 (64%) grade 2 scans. CONCLUSIONS: Use of noninvasive testing with 99mTc-PYP scanning for evaluation of ATTR-CA is increasing, and the inclusion of monoclonal protein testing and SPECT imaging is crucial to rule out amyloid light chain amyloidosis and distinguish myocardial retention from blood pooling.


Subject(s)
Amyloidosis , Prealbumin , Aged , Amyloidosis/diagnostic imaging , Amyloidosis/genetics , Female , Humans , Male , Prealbumin/genetics , Predictive Value of Tests , Technetium Tc 99m Pyrophosphate
18.
PLoS One ; 15(12): e0244131, 2020.
Article in English | MEDLINE | ID: mdl-33370368

ABSTRACT

INTRODUCTION: A large proportion of patients with COVID-19 develop acute kidney injury (AKI). While the most severe of these cases require renal replacement therapy (RRT), little is known about their clinical course. METHODS: We describe the clinical characteristics of COVID-19 patients in the ICU with AKI requiring RRT at an academic medical center in New York City and followed patients for the outcomes of death and renal recovery using time-to-event analyses. RESULTS: Our cohort of 115 patients represented 23% of all ICU admissions at our center, with a peak prevalence of 29%. Patients were followed for a median of 29 days (2542 total patient-RRT-days; median 54 days for survivors). Mechanical ventilation and vasopressor use were common (99% and 84%, respectively), and the median Sequential Organ Failure Assessment (SOFA) score was 14. By the end of follow-up, 51% had died, 41% had recovered kidney function (84% of survivors), and 8% still needed RRT (survival probability at 60 days: 0.46 [95% CI: 0.36-0.56]). In an adjusted Cox model, coronary artery disease and chronic obstructive pulmonary disease were associated with increased mortality (HRs: 3.99 [95% CI 1.46-10.90] and 3.10 [95% CI 1.25-7.66]), as were angiotensin-converting-enzyme inhibitors (HR 2.33 [95% CI 1.21-4.47]) and a SOFA score >15 (HR 3.46 [95% CI 1.65-7.25]). CONCLUSIONS AND RELEVANCE: Our analysis demonstrates a high prevalence of AKI requiring RRT among critically ill patients with COVID-19, which is associated with high mortality; however, the rate of renal recovery among survivors is high, and this should inform shared decision-making.
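The 60-day survival probability quoted above is a Kaplan-Meier estimate; the sketch below shows that computation with lifelines on invented follow-up times, not the study cohort.

```python
# Minimal sketch (synthetic data): Kaplan-Meier estimate of survival at 60 days.
from lifelines import KaplanMeierFitter

days_to_event = [10, 25, 29, 54, 60, 75, 80, 90]   # hypothetical follow-up, in days
died =          [1,  1,  0,  0,  1,  0,  1,  0]    # 1 = death, 0 = censored

km = KaplanMeierFitter().fit(days_to_event, event_observed=died)
print(km.survival_function_at_times(60))            # point estimate of S(60)
# 95% CI bounds are stored on the fitted object:
print(km.confidence_interval_survival_function_.loc[:60].tail(1))
```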


Subject(s)
Acute Kidney Injury/etiology , Acute Kidney Injury/pathology , COVID-19/complications , Kidney/pathology , Acute Kidney Injury/virology , Aged , Critical Illness/mortality , Female , Humans , Intensive Care Units , Kidney/virology , Male , Middle Aged , New York City , Proportional Hazards Models , Renal Replacement Therapy/methods , Retrospective Studies , SARS-CoV-2/pathogenicity , Survivors
19.
Kidney Int Rep ; 5(11): 1906-1913, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33163711

ABSTRACT

INTRODUCTION: The factors that influence deceased donor kidney procurement biopsy reliability are not well established. We examined the impact of biopsy technique and pathologist training on procurement biopsy accuracy. METHODS: We retrospectively identified all deceased donor kidney-only transplants at our center from 2006 to 2016 with both procurement and reperfusion biopsies performed and information available on procurement biopsy technique and pathologist (n = 392). Biopsies were scored using a previously validated system, classifying "suboptimal" histology as the presence of at least 1 of the following: glomerulosclerosis ≥11%, moderate/severe interstitial fibrosis/tubular atrophy, or moderate/severe vascular disease. We calculated relative risk ratios (RRR) to determine the influence of technique (core vs. wedge) and pathologist (renal vs. nonrenal) on concordance between procurement and reperfusion biopsy histologic classification. RESULTS: A total of 171 (44%) procurement biopsies used wedge technique, and 221 (56%) used core technique. Results of only 36 biopsies (9%) were interpreted by renal pathologists. Correlation between procurement and reperfusion glomerulosclerosis was poor for both wedge (r² = 0.11) and core (r² = 0.14) biopsies. Overall, 34% of kidneys had discordant classification on procurement versus reperfusion biopsy. Neither biopsy technique nor pathologist training was associated with concordance between procurement and reperfusion histology, but a larger number of sampled glomeruli was associated with a higher likelihood of concordance (adjusted RRR = 1.12 per 10 glomeruli, 95% confidence interval = 1.04-1.22). CONCLUSIONS: Biopsy technique and pathologist training were not associated with procurement biopsy histologic accuracy in this retrospective study. Prospective trials are needed to determine how to optimize procurement biopsy practices.
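The correlation and concordance measures reported above can be illustrated with a few lines of NumPy; the glomerulosclerosis percentages below are made up, and the ≥11% threshold mirrors the "suboptimal" criterion described in the methods.

```python
# Minimal sketch (made-up values): r² between procurement and reperfusion
# glomerulosclerosis, and the rate of concordant histologic classification.
import numpy as np

procurement_gs = np.array([5, 12, 8, 20, 3, 15, 10, 25])   # hypothetical % glomerulosclerosis
reperfusion_gs = np.array([8, 6, 14, 18, 5, 9, 22, 12])

r = np.corrcoef(procurement_gs, reperfusion_gs)[0, 1]
print(f"r² = {r**2:.2f}")

# Concordance on the ≥11% glomerulosclerosis criterion (one of the "suboptimal" criteria).
concordant = (procurement_gs >= 11) == (reperfusion_gs >= 11)
print(f"concordance = {concordant.mean():.0%}")
```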

20.
JAMA Netw Open ; 3(11): e2025134, 2020 11 02.
Article in English | MEDLINE | ID: mdl-33175177

ABSTRACT

Importance: Adults who belong to racial/ethnic minority groups are more likely than White adults to receive a diagnosis of chronic disease in the United States. Objective: To evaluate which health indicators have improved or become worse among Black and Hispanic middle-aged and older adults since the Minority Health and Health Disparities Research and Education Act of 2000. Design, Setting, and Participants: In this repeated cross-sectional study, a total of 4 856 326 records were extracted from the Behavioral Risk Factor Surveillance System from January 1999 through December 2018 of persons who self-identified as Black (non-Hispanic), Hispanic (non-White), or White and who were 45 years or older. Exposure: The 1999 legislation to reduce racial/ethnic health disparities. Main Outcomes and Measures: Poor health indicators and disparities including major chronic diseases, physical inactivity, uninsured status, and overall poor health. Results: Among the 4 856 326 participants (2 958 041 [60.9%] women; mean [SD] age, 60.4 [11.8] years), Black adults showed an overall decrease indicating improvement in uninsured status (β = -0.40%; P < .001) and physical inactivity (β = -0.29%; P < .001), while they showed an overall increase indicating deterioration in hypertension (β = 0.88%; P < .001), diabetes (β = 0.52%; P < .001), asthma (β = 0.25%; P < .001), and stroke (β = 0.15%; P < .001) during the last 20 years. The Black-White gap (ie, the change in β between groups) showed improvement (2 trend lines converging) in uninsured status (-0.20%; P < .001) and physical inactivity (-0.29%; P < .001), while the Black-White gap worsened (2 trend lines diverging) in diabetes (0.14%; P < .001), hypertension (0.15%; P < .001), coronary heart disease (0.07%; P < .001), stroke (0.07%; P < .001), and asthma (0.11%; P < .001). Hispanic adults showed improvement in physical inactivity (β = -0.28%; P = .02) and perceived poor health (β = -0.22%; P = .001), while they showed overall deterioration in hypertension (β = 0.79%; P < .001) and diabetes (β = 0.50%; P < .001). The Hispanic-White gap showed improvement in coronary heart disease (-0.15%; P < .001), stroke (-0.04%; P < .001), kidney disease (-0.06%; P < .001), asthma (-0.06%; P = .02), arthritis (-0.26%; P < .001), depression (-0.23%; P < .001), and physical inactivity (-0.10%; P = .001), while the Hispanic-White gap worsened in diabetes (0.15%; P < .001), hypertension (0.05%; P = .03), and uninsured status (0.09%; P < .001). Conclusions and Relevance: This study suggests that Black-White disparities increased in diabetes, hypertension, and asthma, while Hispanic-White disparities remained in diabetes, hypertension, and uninsured status.
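The per-group trends (β per year) and the "gap" between trend lines described above correspond to a slope and a slope difference; the sketch below estimates both from synthetic prevalence data using an ordinary least squares model with a year-by-group interaction in statsmodels. All numbers are invented.

```python
# Minimal sketch (synthetic data): per-year trend within a group and the
# between-group "gap" as a slope difference, via a year-by-group interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
years = np.tile(np.arange(1999, 2019), 2)
group = np.repeat(["Black", "White"], 20)
# Hypothetical hypertension prevalence (%) rising faster in one group.
prev = np.where(group == "Black", 30 + 0.9 * (years - 1999), 28 + 0.6 * (years - 1999))
prev = prev + rng.normal(0, 0.5, size=prev.size)

df = pd.DataFrame({"year": years - 1999, "group": group, "prevalence": prev})
fit = smf.ols("prevalence ~ year * C(group, Treatment(reference='White'))", data=df).fit()
# 'year' is the slope for the reference group; the interaction term is the slope gap.
print(fit.params.filter(like="year"))
```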


Subject(s)
Asthma/ethnology , Diabetes Mellitus/ethnology , Health Status Disparities , Hypertension/ethnology , Medically Uninsured/ethnology , Minority Health/trends , Sedentary Behavior/ethnology , Black or African American/statistics & numerical data , Aged , Arthritis/ethnology , Coronary Disease/ethnology , Cross-Sectional Studies , Depression/ethnology , Female , Health Status Indicators , Hispanic or Latino/statistics & numerical data , Humans , Insurance, Health/trends , Kidney Diseases/ethnology , Male , Middle Aged , Stroke/ethnology , United States/epidemiology , White People/statistics & numerical data