Results 1 - 11 of 11
1.
Eur Heart J ; 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38503537

ABSTRACT

BACKGROUND AND AIMS: Early identification of cardiac structural abnormalities indicative of heart failure is crucial to improving patient outcomes. Chest X-rays (CXRs) are routinely conducted on a broad population of patients, presenting an opportunity to build scalable screening tools for structural abnormalities indicative of Stage B or worse heart failure with deep learning methods. In this study, a model was developed to identify severe left ventricular hypertrophy (SLVH) and dilated left ventricle (DLV) using CXRs. METHODS: A total of 71 589 unique CXRs from 24 689 different patients completed within 1 year of echocardiograms were identified. Labels for SLVH, DLV, and a composite label indicating the presence of either were extracted from echocardiograms. A deep learning model was developed and evaluated using area under the receiver operating characteristic curve (AUROC). Performance was additionally validated on 8003 CXRs from an external site and compared against visual assessment by 15 board-certified radiologists. RESULTS: The model yielded an AUROC of 0.79 (0.76-0.81) for SLVH, 0.80 (0.77-0.84) for DLV, and 0.80 (0.78-0.83) for the composite label, with similar performance on an external data set. The model outperformed all 15 individual radiologists for predicting the composite label and achieved a sensitivity of 71% vs. 66% against the consensus vote across all radiologists at a fixed specificity of 73%. CONCLUSIONS: Deep learning analysis of CXRs can accurately detect the presence of certain structural abnormalities and may be useful in early identification of patients with LV hypertrophy and dilation. As a resource to promote further innovation, 71 589 CXRs with adjoining echocardiographic labels have been made publicly available.
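
As an illustration of the evaluation reported above, the sketch below computes AUROC and the sensitivity at a fixed specificity of roughly 73% with scikit-learn. The label and score arrays are hypothetical stand-ins, not the study's data or code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical labels (1 = composite SLVH/DLV abnormality present) and model scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, size=1000), 0.0, 1.0)

# AUROC, the headline metric in the abstract.
print("AUROC:", roc_auc_score(y_true, y_score))

# Sensitivity at a fixed specificity (the abstract fixes specificity at 73%).
fpr, tpr, _ = roc_curve(y_true, y_score)
idx = np.argmin(np.abs((1 - fpr) - 0.73))
print("Sensitivity at ~73% specificity:", tpr[idx])
```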

2.
J Am Coll Cardiol ; 80(6): 613-626, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35926935

ABSTRACT

BACKGROUND: Valvular heart disease is an important contributor to cardiovascular morbidity and mortality and remains underdiagnosed. Deep learning analysis of electrocardiography (ECG) may be useful in detecting aortic stenosis (AS), aortic regurgitation (AR), and mitral regurgitation (MR). OBJECTIVES: This study aimed to develop ECG deep learning algorithms to identify moderate or severe AS, AR, and MR alone and in combination. METHODS: A total of 77,163 patients undergoing ECG within 1 year before echocardiography from 2005-2021 were identified and split into train (n = 43,165), validation (n = 12,950), and test sets (n = 21,048; 7.8% with any of AS, AR, or MR). Model performance was assessed using area under the receiver-operating characteristic (AU-ROC) and precision-recall curves. External validation was conducted on an independent data set. Test accuracy was modeled at different disease prevalence levels to simulate screening efficacy with the deep learning model. RESULTS: Deep learning model accuracy was as follows: AS (AU-ROC: 0.88), AR (AU-ROC: 0.77), MR (AU-ROC: 0.83), and any of AS, AR, or MR (AU-ROC: 0.84; sensitivity 78%, specificity 73%), with similar accuracy in external validation. In screening program modeling, test characteristics were dependent on underlying prevalence and selected sensitivity levels. At a prevalence of 7.8%, the positive and negative predictive values were 20% and 97.6%, respectively. CONCLUSIONS: Deep learning analysis of the ECG can accurately detect AS, AR, and MR in this multicenter cohort and may serve as the basis for the development of a valvular heart disease screening program.
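
The reported predictive values follow directly from Bayes' rule applied to the stated sensitivity (78%), specificity (73%), and prevalence (7.8%). A small worked example, illustrative rather than the authors' code:

```python
def screening_predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from test characteristics via Bayes' rule."""
    tp = sensitivity * prevalence              # true positives per screened person
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Values reported in the abstract: sensitivity 78%, specificity 73%, prevalence 7.8%.
ppv, npv = screening_predictive_values(0.78, 0.73, 0.078)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # ~19.6% and ~97.5%, matching the reported 20% and 97.6%
```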


Subject(s)
Aortic Valve Insufficiency , Aortic Valve Stenosis , Deep Learning , Heart Valve Diseases , Mitral Valve Insufficiency , Aortic Valve Insufficiency/diagnosis , Aortic Valve Stenosis/diagnosis , Electrocardiography , Heart Valve Diseases/diagnosis , Heart Valve Diseases/epidemiology , Humans , Mitral Valve Insufficiency/diagnosis , Mitral Valve Insufficiency/epidemiology
3.
Adv Neural Inf Process Syst ; 34: 2160-2172, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35859987

ABSTRACT

Deep models trained through maximum likelihood have achieved state-of-the-art results for survival analysis. Despite this training scheme, practitioners evaluate models under other criteria, such as binary classification losses at a chosen set of time horizons, e.g. Brier score (BS) and Bernoulli log likelihood (BLL). Models trained with maximum likelihood may have poor BS or BLL since maximum likelihood does not directly optimize these criteria. Directly optimizing criteria like BS requires inverse-weighting by the censoring distribution. However, estimating the censoring model under these metrics requires inverse-weighting by the failure distribution. The objective for each model requires the other, but neither are known. To resolve this dilemma, we introduce Inverse-Weighted Survival Games. In these games, objectives for each model are built from re-weighted estimates featuring the other model, where the latter is held fixed during training. When the loss is proper, we show that the games always have the true failure and censoring distributions as a stationary point. This means models in the game do not leave the correct distributions once reached. We construct one case where this stationary point is unique. We show that these games optimize BS on simulations and then apply these principles on real world cancer and critically-ill patient data.
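
The inverse-weighting referred to here is the standard IPCW construction: a classification loss at a horizon t is reweighted by the estimated probability of remaining uncensored. Below is a minimal sketch of an IPCW Brier score using a Kaplan-Meier estimate of the censoring distribution; it illustrates the weighting only, not the paper's game-based training, and all names are illustrative.

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier survival curve; `events` = 1 means the event of interest occurred."""
    times, events = np.asarray(times, dtype=float), np.asarray(events, dtype=int)
    order = np.argsort(times)
    t_sorted, e_sorted = times[order], events[order]
    steps, s = [], 1.0
    for u in np.unique(t_sorted[e_sorted == 1]):
        at_risk = np.sum(t_sorted >= u)
        d = np.sum((t_sorted == u) & (e_sorted == 1))
        s *= 1.0 - d / at_risk
        steps.append((u, s))
    return lambda t: next((sv for u, sv in reversed(steps) if u <= t), 1.0)

def ipcw_brier(t_horizon, times, events, pred_surv_at_t):
    """Brier score at t_horizon, inverse-weighted by the censoring distribution G.

    pred_surv_at_t[i] is the model's predicted P(T_i > t_horizon) for subject i.
    """
    times, events = np.asarray(times, dtype=float), np.asarray(events, dtype=int)
    G = km_survival(times, 1 - events)  # censoring curve: treat censoring as the "event"
    score = 0.0
    for t_i, e_i, s_i in zip(times, events, pred_surv_at_t):
        if t_i <= t_horizon and e_i == 1:     # event observed by the horizon: label 0
            score += (0.0 - s_i) ** 2 / max(G(t_i), 1e-8)
        elif t_i > t_horizon:                 # still at risk at the horizon: label 1
            score += (1.0 - s_i) ** 2 / max(G(t_horizon), 1e-8)
        # Subjects censored before the horizon contribute nothing; the weights account for them.
    return score / len(times)
```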

4.
J Biomed Inform ; 109: 103515, 2020 09.
Article in English | MEDLINE | ID: mdl-32771540

ABSTRACT

Causal inference often relies on the counterfactual framework, which requires that treatment assignment be independent of the potential outcomes given measured covariates, a condition known as strong ignorability. Approaches to enforcing strong ignorability in causal analyses of observational data include weighting and matching methods. Effects, such as the average treatment effect (ATE), are then estimated as expectations under the re-weighted or matched distribution, P. The choice of P is important and can impact both the interpretation and the variance of the effect estimate. In this work, instead of specifying P, we learn a distribution that simultaneously maximizes coverage and minimizes the variance of ATE estimates. To learn this distribution, this research proposes a generative adversarial network (GAN)-based model called the Counterfactual χ-GAN (cGAN), which also learns feature-balancing weights and supports unbiased causal estimation in the absence of unobserved confounding. Our model minimizes the Pearson χ²-divergence, which we show simultaneously maximizes coverage and minimizes the variance of importance sampling estimates. To our knowledge, this is the first such application of the Pearson χ²-divergence. We demonstrate the effectiveness of cGAN in achieving feature balance relative to established weighting methods in simulation and with real-world medical data.
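
The link between the χ²-divergence and importance sampling can be seen directly: when the importance weights w = p/q have mean one under q, the Pearson χ²-divergence equals E_q[w²] - 1 = Var_q(w), so driving the divergence down drives down the variance of importance-sampling (and hence weighted ATE) estimates. A small numeric illustration with Gaussians chosen for convenience, not the cGAN model itself:

```python
import numpy as np
from scipy.stats import norm

# Proposal Q = N(0, 1), target P = N(mu, 1); w(x) = p(x) / q(x) are importance weights.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200_000)  # samples from Q

for mu in [0.1, 0.5, 1.0]:
    w = norm.pdf(x, loc=mu) / norm.pdf(x, loc=0.0)
    chi2 = np.mean(w ** 2) - 1.0  # Monte Carlo estimate of the Pearson chi^2 divergence
    print(f"mu={mu}: chi2 ~ {chi2:.3f}, Var(w) ~ {np.var(w):.3f}, "
          f"closed form = {np.exp(mu ** 2) - 1:.3f}")  # exact value for unit-variance Gaussians
```

The two estimates agree, and both grow as P and Q drift apart, which is why a feature-balancing distribution with small χ²-divergence yields low-variance effect estimates.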


Subject(s)
Causality , Computer Simulation , Humans
5.
JAMIA Open ; 3(1): 77-86, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32607490

ABSTRACT

INTRODUCTION: The opioid epidemic is a modern public health emergency. Common interventions to alleviate the opioid epidemic aim to discourage excessive prescription of opioids. However, these methods often operate over large geographic areas (state level) and may fail to address the diversity that exists within each opioid case (individual level). An intervention to combat the opioid epidemic that takes place at the individual level would be preferable. METHODS: This research leverages computational tools and methods to characterize the opioid epidemic at the individual level using electronic health record data from a large academic medical center. To better understand the characteristics of patients with opioid use disorder (OUD), we used a self-controlled analysis to compare the healthcare encounters before and after an individual's first overdose event recorded within the data. We further contrast these patients with matched, non-OUD controls to demonstrate the unique qualities of the OUD cohort. RESULTS: Our research confirms that the rate of opioid overdoses in our hospital significantly increased between 2006 and 2015 (P < 0.001), at an average rate of 9% per year. We further found that the period just prior to the first overdose is marked by conditions of pain or malignancy, which may suggest that overdose stems from pharmaceutical opioids prescribed for these conditions. CONCLUSIONS: Informatics-based methodologies, like those presented here, may play a role in better understanding the individuals who suffer from opioid dependency and overdose, and may lead to future research and interventions that could successfully prevent morbidity and mortality associated with this epidemic.
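
The headline trend above (a roughly 9% average yearly increase, P < 0.001) is the kind of figure obtainable from a Poisson regression of annual overdose counts on calendar year. The sketch below shows that calculation on made-up counts; it is not the study's data or its self-controlled analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical yearly overdose counts over the study window (2006-2015).
years = np.arange(2006, 2016)
counts = np.array([110, 118, 131, 140, 155, 170, 182, 200, 215, 238])

# Poisson regression of count on year: exp(beta_year) - 1 is the average yearly growth rate.
X = sm.add_constant(years - years[0])
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(f"Estimated yearly increase: {np.exp(fit.params[1]) - 1:.1%}, p-value: {fit.pvalues[1]:.2g}")
```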

6.
Adv Neural Inf Process Syst ; 33: 5115-5125, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33953524

ABSTRACT

Causal inference relies on two fundamental assumptions: ignorability and positivity. We study causal inference when the true confounder value can be expressed as a function of the observed data; we call this setting estimation with functional confounders (EFC). In this setting ignorability is satisfied, however positivity is violated, and causal inference is impossible in general. We consider two scenarios where causal effects are estimable. First, we discuss interventions on a part of the treatment called functional interventions and a sufficient condition for effect estimation of these interventions called functional positivity. Second, we develop conditions for nonparametric effect estimation based on the gradient fields of the functional confounder and the true outcome function. To estimate effects under these conditions, we develop Level-set Orthogonal Descent Estimation (LODE). Further, we prove error bounds on LODE's effect estimates, evaluate our methods on simulated and real data, and empirically demonstrate the value of EFC.
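
A toy simulation of the EFC setting may make the positivity issue concrete: when the confounder is a deterministic function of the treatment, then conditional on a confounder value the treatment is confined to a level set, so overlap across treatment values fails even though ignorability holds. All names and the functional form below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-dimensional treatment T and a confounder that is a function of it: U = g(T) = T1 + T2.
T = rng.normal(size=(10_000, 2))
U = T.sum(axis=1)
Y = 2.0 * T[:, 0] - 1.0 * T[:, 1] + 3.0 * U + rng.normal(scale=0.1, size=10_000)  # outcome depends on T and U

# Within a thin slice of confounder values, T1 + T2 is pinned (positivity fails across level sets),
# while movement along the level set remains free -- the kind of functional intervention that is estimable.
sl = np.abs(U - 1.0) < 0.05
print("std of T1 + T2 within the slice:", T[sl].sum(axis=1).std())  # ~0: degenerate across level sets
print("std of T1 within the slice:     ", T[sl, 0].std())           # clearly > 0: variation along the level set
```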

7.
Adv Neural Inf Process Syst ; 33: 18296-18307, 2020 Dec.
Article in English | MEDLINE | ID: mdl-34017160

ABSTRACT

Survival analysis models the distribution of time until an event of interest, such as discharge from the hospital or admission to the ICU. When a model's predicted number of events within any time interval is similar to the observed number, it is called well-calibrated. A survival model's calibration can be measured using, for instance, distributional calibration (D-CALIBRATION) [Haider et al., 2020], which computes the squared difference between the observed and predicted number of events within different time intervals. Classically, calibration is addressed in post-training analysis. We develop explicit calibration (X-CAL), which turns D-CALIBRATION into a differentiable objective that can be used in survival modeling alongside maximum likelihood estimation and other objectives. X-CAL allows practitioners to directly optimize calibration and strike a desired balance between predictive power and calibration. In our experiments, we fit a variety of shallow and deep models on simulated data, on a survival dataset based on MNIST, on length-of-stay prediction using MIMIC-III data, and on brain cancer data from The Cancer Genome Atlas. We show that the models we study can be miscalibrated. We give experimental evidence on these datasets that X-CAL improves D-CALIBRATION without a large decrease in concordance or likelihood.
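
D-CALIBRATION checks that the model's predicted CDF values at the observed event times are uniformly distributed, so each probability bin should capture its share of events. A simplified sketch for the uncensored case (the paper and Haider et al. handle censoring as well), with hypothetical inputs:

```python
import numpy as np

def d_calibration(cdf_at_event_times, n_bins=10):
    """Simplified D-calibration statistic (uncensored case).

    cdf_at_event_times[i] = F_i(t_i), the model's predicted CDF for subject i at that
    subject's observed event time. Under perfect calibration these values are uniform
    on [0, 1], so each bin should hold 1 / n_bins of the subjects; the statistic sums
    the squared deviations from that ideal.
    """
    vals = np.asarray(cdf_at_event_times)
    bins = np.clip((vals * n_bins).astype(int), 0, n_bins - 1)
    observed = np.bincount(bins, minlength=n_bins) / len(vals)
    return np.sum((observed - 1.0 / n_bins) ** 2)

# Toy check: uniform CDF values (well calibrated) vs. values bunched near 0 (miscalibrated).
rng = np.random.default_rng(0)
print(d_calibration(rng.uniform(size=5000)))     # close to 0
print(d_calibration(rng.beta(1, 5, size=5000)))  # substantially larger
```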

9.
PLoS One ; 12(2): e0172348, 2017.
Article in English | MEDLINE | ID: mdl-28212433

ABSTRACT

Parkinson's disease (PD) is a common neurodegenerative disease whose pathological hallmark is the accumulation of intracellular α-synuclein aggregates in Lewy bodies. Lipid metabolism dysregulation may play a significant role in PD pathogenesis; however, large plasma lipidomic studies in PD are lacking. In the current study, we analyzed the lipidomic profile of plasma obtained from 150 idiopathic PD patients and 100 controls, taken from the 'Spot' study at Columbia University Medical Center in New York. Our mass spectrometry based analytical panel consisted of 520 lipid species from 39 lipid subclasses including all major classes of glycerophospholipids, sphingolipids, glycerolipids and sterols. Each lipid species was analyzed using a logistic regression model. The plasma concentrations of two lipid subclasses, triglycerides and monosialodihexosylganglioside (GM3), were different between PD and control participants. GM3 ganglioside concentration had the most significant difference between PD and controls (1.531±0.037 pmol/µl versus 1.337±0.040 pmol/µl respectively; p-value = 5.96E-04; q-value = 0.048; when normalized to total lipid: p-value = 2.890E-05; q-value = 2.933E-03). Next, we used a collection of 20 GM3 and glucosylceramide (GlcCer) species concentrations normalized to total lipid to perform a ROC curve analysis, and found that these lipids compare favorably with biomarkers reported in previous studies (AUC = 0.742 for males, AUC = 0.644 for females). Our results suggest that higher plasma GM3 levels are associated with PD. GM3 lies in the same glycosphingolipid metabolic pathway as GlcCer, a substrate of the enzyme glucocerebrosidase, which has been associated with PD. These findings are consistent with previous reports implicating lower glucocerebrosidase activity with PD risk.
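
The analysis pipeline described here (a logistic regression per lipid species with multiple-testing correction, followed by a ROC analysis of a multi-species panel) can be sketched as follows. The data frame, species names, and numbers are placeholders, not the Columbia 'Spot' study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder data: rows = participants, columns = lipid species normalized to total lipid.
rng = np.random.default_rng(0)
n, n_species = 250, 20
lipids = pd.DataFrame(rng.normal(size=(n, n_species)),
                      columns=[f"species_{i}" for i in range(n_species)])
pd_status = rng.integers(0, 2, size=n)  # 1 = Parkinson's disease, 0 = control

# One logistic regression per lipid species, then Benjamini-Hochberg q-values.
pvals = [sm.Logit(pd_status, sm.add_constant(lipids[[c]])).fit(disp=0).pvalues[c]
         for c in lipids.columns]
qvals = multipletests(pvals, method="fdr_bh")[1]

# ROC analysis of a multi-species panel, analogous to the GM3/GlcCer classifier.
clf = LogisticRegression(max_iter=1000).fit(lipids, pd_status)
print("Panel AUC:", roc_auc_score(pd_status, clf.predict_proba(lipids)[:, 1]))
```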


Subject(s)
G(M3) Ganglioside/blood , Parkinson Disease/blood , Aged , Aged 80 and over , Biomarkers/blood , Case-Control Studies , Female , Humans , Male , Middle Aged , Parkinson Disease/physiopathology , Sex Characteristics
10.
J Biomed Inform ; 58: 156-165, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26464024

ABSTRACT

We present the Unsupervised Phenome Model (UPhenome), a probabilistic graphical model for large-scale discovery of computational models of disease, or phenotypes. We tackle this challenge through the joint modeling of a large set of diseases and a large set of clinical observations. The observations are drawn directly from heterogeneous patient record data (notes, laboratory tests, medications, and diagnosis codes), and the diseases are modeled in an unsupervised fashion. We apply UPhenome to two qualitatively different mixtures of patients and diseases: records of extremely sick patients in the intensive care unit with constant monitoring, and records of outpatients regularly followed by care providers over multiple years. We demonstrate that the UPhenome model can learn from these different care settings, without any additional adaptation. Our experiments show that (i) the learned phenotypes combine the heterogeneous data types more coherently than baseline LDA-based phenotypes; (ii) they each represent single diseases rather than a mix of diseases more often than the baseline ones; and (iii) when applied to unseen patient records, they are correlated with the patients' ground-truth disorders. Code for training, inference, and quantitative evaluation is made available to the research community.
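
The LDA-based baseline mentioned above can be sketched by flattening each patient record into a bag of clinical observations and fitting a topic model, with each topic acting as a candidate phenotype. The records below are toy strings; this illustrates the baseline, not the UPhenome model itself.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy patient records: each is a bag of note tokens, lab flags, medications, and diagnosis codes.
patient_records = [
    "dyspnea furosemide bnp_high icd_i50 echo_low_ef",
    "cough azithromycin wbc_high icd_j18 cxr_infiltrate",
    "polyuria metformin hba1c_high icd_e11 glucose_high",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(patient_records)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each LDA component is a distribution over observations: a candidate "phenotype".
vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"phenotype {k}:", ", ".join(vocab[topic.argsort()[::-1][:3]]))
```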


Subject(s)
Electronic Health Records , Learning , Probability , Humans , Phenotype
11.
Neural Netw ; 18(9): 1212-28, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16260116

ABSTRACT

The stability-plasticity problem (i.e. how the brain incorporates new information into its model of the world, while at the same time preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories and selective punishment of competing memories; and preventing catastrophic forgetting in the case of non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems: First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm.
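
For reference, the CPCA Hebbian rule used as a comparison algorithm updates a unit's weights toward the current input whenever the unit is active, Δw = ε·y·(x − w), so the weights converge toward the expected input given that the unit fires. A minimal sketch of that baseline rule (not the oscillation-based algorithm the paper proposes):

```python
import numpy as np

def cpca_hebbian_update(w, x, y, lr=0.05):
    """CPCA Hebbian rule: when the unit is active (y > 0), pull its weights toward the input."""
    return w + lr * y * (x - w)

# Toy demo: one unit repeatedly exposed to two overlapping binary patterns.
rng = np.random.default_rng(0)
patterns = np.array([[1, 1, 1, 0, 0],
                     [0, 0, 1, 1, 1]], dtype=float)
w = rng.uniform(0.4, 0.6, size=5)         # weights start near mid-range
for _ in range(500):
    x = patterns[rng.integers(0, 2)]
    w = cpca_hebbian_update(w, x, y=1.0)  # assume the unit wins the competition every trial
print(np.round(w, 2))  # ~[0.5, 0.5, 1.0, 0.5, 0.5]: conditional probability of each input being on
```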


Subject(s)
Algorithms , Learning/physiology , Neurological Models , Psychological Models , Neocortex/physiology , Neural Networks (Computer) , Animals , Hippocampus/physiology , Humans , Memory/physiology , Nerve Net , Neuronal Plasticity , Recognition (Psychology)/physiology , REM Sleep/physiology , Theta Rhythm