Results 1 - 20 of 84
1.
Circulation ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38860364

ABSTRACT

BACKGROUND: The majority of out-of-hospital cardiac arrests (OHCAs) occur among individuals in the general population, for whom there is no established strategy to identify risk. In this study, we assess the use of electronic health record (EHR) data to identify OHCA in the general population and define salient factors contributing to OHCA risk. METHODS: The analytical cohort included 2366 individuals with OHCA and 23 660 age- and sex-matched controls receiving health care at the University of Washington. Comorbidities, electrocardiographic measures, vital signs, and medication prescriptions were abstracted from the EHR. The primary outcome was OHCA. Secondary outcomes included shockable and nonshockable OHCA. Model performance, including the area under the receiver operating characteristic curve (AUROC) and positive predictive value, was assessed and adjusted for the observed rate of OHCA across the health system. RESULTS: There were significant differences in demographic characteristics, vital signs, electrocardiographic measures, comorbidities, and medication distribution between individuals with OHCA and controls. In external validation, discrimination in machine learning models (AUROC 0.80-0.85) was superior to a baseline model with conventional cardiovascular risk factors (AUROC 0.66). At a specificity threshold of 99%, correcting for baseline OHCA incidence across the health system, positive predictive value was 2.5% to 3.1% in machine learning models compared with 0.8% for the baseline model. Longer corrected QT interval, substance abuse disorder, fluid and electrolyte disorder, alcohol abuse, and higher heart rate were identified as salient predictors of OHCA risk across all machine learning models.
Established cardiovascular risk factors retained predictive importance for shockable OHCA, but demographic characteristics (minority race, single marital status) and noncardiovascular comorbidities (substance abuse disorder) also contributed to risk prediction. For nonshockable OHCA, a range of salient predictors was identified, including comorbidities, habits, vital signs, demographic characteristics, and electrocardiographic measures. CONCLUSIONS: In a population-based case-control study, machine learning models incorporating readily available EHR data showed reasonable discrimination and risk enrichment for OHCA in the general population. Salient factors associated with OHCA risk spanned the cardiovascular and noncardiovascular spectrum. Public health and tailored strategies for OHCA prediction and prevention will require incorporation of this complexity.
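The prevalence correction applied to positive predictive value in this study follows directly from Bayes' theorem. A minimal sketch, using purely illustrative sensitivity and incidence numbers rather than the study's actual values:

```python
def adjusted_ppv(sensitivity, specificity, prevalence):
    """Positive predictive value corrected for the outcome rate in the
    target population (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: at a 99% specificity threshold, a rare outcome
# keeps PPV in the low single digits even for a well-discriminating model.
ppv = adjusted_ppv(sensitivity=0.30, specificity=0.99, prevalence=0.001)
print(round(100 * ppv, 1))  # PPV as a percentage
```

This is why the reported PPVs of a few percent can still represent substantial risk enrichment over a baseline incidence of roughly 0.1%.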

2.
Am J Epidemiol ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38918020

ABSTRACT

Development of new therapeutics for a rare disease such as cystic fibrosis (CF) is hindered by challenges in accruing enough patients for clinical trials. Using external controls from well-matched historical trials can reduce prospective trial sizes, and this approach has supported regulatory approval of new interventions for other rare diseases. We consider three statistical methods that incorporate external controls into a hypothetical clinical trial of a new treatment to reduce pulmonary exacerbations in CF patients: 1) inverse probability weighting, 2) Bayesian modeling with propensity score-based power priors, and 3) hierarchical Bayesian modeling with commensurate priors. We compare the methods via simulation study and in a real clinical trial data setting. Simulations showed that bias in the treatment effect was <4% using any of the methods, with type 1 error (or in the Bayesian cases, posterior probability of the null hypothesis) usually <5%. Inverse probability weighting was sensitive to similarity in prevalence of the covariates between historical and prospective trial populations. The commensurate prior method performed best with real clinical trial data. Using external controls to reduce trial size in future clinical trials holds promise and can advance the therapeutic pipeline for rare diseases.
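The first of the three methods, inverse probability weighting, can be sketched as follows; the covariates, trial sizes, and distribution shift below are invented for illustration and are not from any CF trial:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical baseline covariates; the historical trial is deliberately
# shifted so that reweighting has something to correct.
X_current = rng.normal(0.3, 1.0, size=(100, 3))     # prospective controls
X_historical = rng.normal(0.0, 1.0, size=(200, 3))  # external controls

# Model the propensity of belonging to the current trial given covariates.
X = np.vstack([X_current, X_historical])
in_current = np.r_[np.ones(100), np.zeros(200)]
model = LogisticRegression().fit(X, in_current)

# Odds weights pull the historical controls toward the current population.
p = model.predict_proba(X_historical)[:, 1]
weights = p / (1 - p)

# Weighted covariate means of historical controls move toward the current trial.
w_mean = (weights[:, None] * X_historical).sum(axis=0) / weights.sum()
print(X_historical.mean(axis=0).round(2), w_mean.round(2))
```

The abstract's finding that this method is sensitive to covariate prevalence differences is visible here: the more the two populations diverge, the more extreme (and unstable) the weights become.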

3.
Sci Rep ; 14(1): 12436, 2024 05 30.
Article in English | MEDLINE | ID: mdl-38816422

ABSTRACT

We construct non-linear machine learning (ML) prediction models for systolic and diastolic blood pressure (SBP, DBP) using demographic and clinical variables and polygenic risk scores (PRSs). We developed a two-model ensemble, consisting of a baseline model, where prediction is based on demographic and clinical variables only, and a genetic model, where we also include PRSs. We evaluate the use of a linear versus a non-linear model at both the baseline and the genetic model levels and assess the improvement in performance when incorporating multiple PRSs. We report the ensemble model's performance as percentage variance explained (PVE) on a held-out test dataset. A non-linear baseline model improved the PVE from 28.1% to 30.1% (SBP) and from 14.3% to 17.4% (DBP) compared with a linear baseline model. Including seven PRSs computed based on the largest available GWAS of SBP/DBP improved the genetic model PVE from 4.8% to 5.1% (SBP) and from 4.7% to 5.0% (DBP) compared to using a single PRS. Adding an additional 14 PRSs computed based on two independent GWASs further increased the genetic model PVE to 6.3% (SBP) and 5.7% (DBP). PVE differed across self-reported race/ethnicity groups, with primarily non-White groups benefitting from the inclusion of additional PRSs. In summary, non-linear ML models improve BP prediction in models incorporating diverse populations.


Subject(s)
Blood Pressure , Genome-Wide Association Study , Machine Learning , Multifactorial Inheritance , Phenotype , Humans , Blood Pressure/genetics , Multifactorial Inheritance/genetics , Genome-Wide Association Study/methods , Risk Factors , Male , Female , Genetic Predisposition to Disease , Models, Genetic , Hypertension/genetics , Hypertension/physiopathology , Middle Aged , Genetic Risk Score
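The two-model ensemble described above can be sketched as a two-stage fit: a non-linear baseline model on clinical variables, then a genetic model on its residuals. All data below are synthetic, and GradientBoostingRegressor stands in for the unspecified non-linear learner:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n, n_train = 2000, 1500
clinical = rng.normal(size=(n, 4))   # stand-ins for demographic/clinical variables
prs = rng.normal(size=(n, 2))        # stand-ins for polygenic risk scores
sbp = clinical @ np.array([5.0, -3.0, 2.0, 1.0]) + 1.5 * prs[:, 0] \
      + rng.normal(0, 8, n)

# Stage 1: baseline (non-linear) model on clinical variables only.
base = GradientBoostingRegressor(random_state=0).fit(clinical[:n_train],
                                                     sbp[:n_train])

# Stage 2: genetic model explains the baseline model's residual with the PRSs.
resid = sbp[:n_train] - base.predict(clinical[:n_train])
genetic = LinearRegression().fit(prs[:n_train], resid)

# Ensemble prediction and percentage variance explained on held-out data.
pred = base.predict(clinical[n_train:]) + genetic.predict(prs[n_train:])
pve = 100 * r2_score(sbp[n_train:], pred)
print(round(pve, 1))
```

The held-out PVE metric used in the abstract is simply 100 times the out-of-sample R².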
4.
J Med Screen ; : 9691413241228041, 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38304990

ABSTRACT

OBJECTIVES: Designing cancer screening trials for multi-cancer early detection (MCED) tests presents a significant methodological challenge, as the natural histories of cell-free DNA-shedding cancers are not yet known. A microsimulation model was developed to project the performance and utility of an MCED test in cancer screening trials. METHODS: Individual natural histories of preclinical progression through cancer stages for 23 cancer classes were simulated by a stage-transition model under a broad range of cancer latency parameters. Cancer incidences and stage distributions at clinical presentation in simulated trials were set to match data from the Surveillance, Epidemiology, and End Results (SEER) program. One or multiple rounds of annual screening using a targeted methylation-based MCED test (Galleri®) were conducted to detect preclinical cancers. The mortality benefit of early detection was simulated by a stage-shift model. RESULTS: In simulated trials, accounting for the healthy volunteer effect and varying test sensitivity, positive predictive value in the prevalence screening round reached 48% to 61% in 6 natural history scenarios. After 3 rounds of annual screening, the cumulative proportion of stage I/II cancers increased by approximately 9% to 14%, the incidence of stage IV cancers was reduced by 37% to 46%, the reduction of stage III and IV cancer incidence was 9% to 24%, and the reduction of mortality reached 13% to 16%. Greater reductions of late-stage cancers and cancer mortality were achieved by five rounds of MCED screening. CONCLUSIONS: Simulation results guide trial design and suggest that adding this MCED test to routine screening in the United States may shift cancer detection to earlier stages and potentially save lives.
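A toy version of the stage-transition and stage-shift logic might look like the sketch below; the dwell times and screening sensitivity are invented, not the model's calibrated parameters:

```python
import random

random.seed(0)
STAGES = ["I", "II", "III", "IV"]

def detected_stage(dwell_years=2, screened=False, sensitivity=0.5):
    """Toy stage-transition model: a preclinical cancer spends `dwell_years`
    in each stage; annual screening may intercept it before stage IV."""
    for stage in (s for s in STAGES for _ in range(dwell_years)):
        if screened and random.random() < sensitivity:
            return stage            # screen-detected at the current stage
    return "IV"                     # clinical presentation at a late stage

def stage_iv_rate(screened, n=10_000):
    return sum(detected_stage(screened=screened) == "IV" for _ in range(n)) / n

# The stage-shift effect: screening moves detection away from stage IV.
print(stage_iv_rate(screened=False), stage_iv_rate(screened=True))
```

A mortality benefit then follows by mapping each person's detected stage to stage-specific survival, which is the essence of a stage-shift model.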

5.
J Cyst Fibros ; 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38388235

ABSTRACT

BACKGROUND: In 2017, the US Food and Drug Administration initiated expansion of drug labels for the treatment of cystic fibrosis (CF) to include CF transmembrane conductance regulator (CFTR) gene variants based on in vitro functional studies. This study aims to identify CFTR variants that result in increased chloride (Cl-) transport function by the CFTR protein after treatment with the CFTR modulator combination elexacaftor/tezacaftor/ivacaftor (ELX/TEZ/IVA). These data may benefit people with CF (pwCF) who are not currently eligible for modulator therapies. METHODS: Plasmid DNA encoding 655 CFTR variants and wild-type (WT) CFTR were transfected into Fisher Rat Thyroid cells that do not natively express CFTR. After 24 h of incubation with control or TEZ and ELX, and acute addition of IVA, CFTR function was assessed using the transepithelial current clamp conductance assay. Each variant's forskolin/cAMP-induced baseline Cl- transport activity, responsiveness to IVA alone, and responsiveness to the TEZ/ELX/IVA combination were measured in three different laboratories. Western blots were conducted to evaluate CFTR protein maturation and complement the functional data. RESULTS AND CONCLUSIONS: 253 variants not currently approved for CFTR modulator therapy showed low baseline activity (<10% of normal CFTR Cl- transport activity). For 152 of these variants, treatment with ELX/TEZ/IVA improved the Cl- transport activity by ≥10% of normal CFTR function, which is suggestive of clinical benefit. ELX/TEZ/IVA increased CFTR function by ≥10 percentage points for an additional 140 unapproved variants with ≥10% but <50% of normal CFTR function at baseline. These findings significantly expand the number of rare CFTR variants for which ELX/TEZ/IVA treatment should result in clinical benefit.

6.
bioRxiv ; 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-37503246

ABSTRACT

A key goal of evolutionary genomics is to harness molecular data to draw inferences about selective forces that have acted on genomes. The field progresses in large part through the development of advanced molecular-evolution analysis methods. Here we explored the intersection between classical sequence-based tests for selection and an empirical expression-based approach, using stem cells from Mus musculus subspecies as a model. Using a test of directional, cis-regulatory evolution across genes in pathways, we discovered a unique program of induction of translation genes in stem cells of the Southeast Asian mouse M. m. castaneus relative to its sister taxa. As a complement, we used sequence analyses to find population-genomic signatures of selection in M. m. castaneus, at the upstream regions of the translation genes, including at transcription factor binding sites. We interpret our data under a model of changes in lineage-specific pressures across Mus musculus in stem cells with high translational capacity. Together, our findings underscore the rigor of integrating expression and sequence-based methods to generate hypotheses about evolutionary events from long ago.

7.
J Am Stat Assoc ; 118(543): 1645-1658, 2023.
Article in English | MEDLINE | ID: mdl-37982008

ABSTRACT

In many applications, it is of interest to assess the relative contribution of features (or subsets of features) toward the goal of predicting a response; in other words, to gauge the variable importance of features. Most recent work on variable importance assessment has focused on describing the importance of features within the confines of a given prediction algorithm. However, such assessment does not necessarily characterize the prediction potential of features, and may provide a misleading reflection of the intrinsic value of these features. To address this limitation, we propose a general framework for nonparametric inference on interpretable, algorithm-agnostic variable importance. We define variable importance as a population-level contrast between the oracle predictiveness of all available features versus all features except those under consideration. We propose a nonparametric efficient estimation procedure that allows the construction of valid confidence intervals, even when machine learning techniques are used. We also outline a valid strategy for testing the null importance hypothesis. Through simulations, we show that our proposal has good operating characteristics, and we illustrate its use with data from a study of an antibody against HIV-1 infection.
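The full-versus-reduced predictiveness contrast can be illustrated with a naive plug-in estimate on simulated data. This sketch omits the paper's actual contributions (efficient estimation and valid confidence intervals) and just shows the population-level contrast being estimated:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 5))
y = 3 * X[:, 0] + X[:, 1] + rng.normal(0, 1, n)  # feature 0 matters most

tr, te = slice(0, 700), slice(700, None)

def predictiveness(cols):
    """Held-out R^2 using only the listed feature columns: a plug-in
    stand-in for oracle predictiveness."""
    m = RandomForestRegressor(random_state=0).fit(X[tr][:, cols], y[tr])
    return r2_score(y[te], m.predict(X[te][:, cols]))

full = predictiveness([0, 1, 2, 3, 4])
without_0 = predictiveness([1, 2, 3, 4])
importance = full - without_0   # contrast for feature 0, estimated naively
print(round(importance, 2))
```

Because the contrast is defined at the population level, any sufficiently flexible learner can be plugged in; the framework's point is to make inference on this quantity valid regardless of that choice.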

8.
Stat Sin ; 33(SI): 1507-1532, 2023 May.
Article in English | MEDLINE | ID: mdl-37409184

ABSTRACT

In Bayesian data analysis, it is often important to evaluate quantiles of the posterior distribution of a parameter of interest (e.g., to form posterior intervals). In multi-dimensional problems, when non-conjugate priors are used, this is often difficult, generally requiring an analytic or sampling-based approximation such as Markov chain Monte Carlo (MCMC), approximate Bayesian computation (ABC), or variational inference. We discuss a general approach that reframes this as a multi-task learning problem and uses recurrent deep neural networks (RNNs) to approximately evaluate posterior quantiles. As RNNs carry information along a sequence, this application is particularly useful for time-series data. An advantage of this risk-minimization approach is that we do not need to sample from the posterior or calculate the likelihood. We illustrate the proposed approach in several examples.
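The risk-minimization idea rests on a standard fact: the minimizer of the quantile ("pinball") loss is the quantile itself, so a network trained on this loss emits quantiles without sampling or likelihood evaluation. A plain grid search stands in for the RNN in this check:

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    """Quantile loss whose expected-risk minimizer over q_hat is the
    tau-quantile of y; training a network on this loss makes it emit
    quantiles directly."""
    e = y - q_hat
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(3)
y = rng.normal(10, 2, size=100_000)   # stand-in for draws from a posterior

tau = 0.9
grid = np.linspace(5, 15, 401)
best = grid[np.argmin([pinball_loss(y, q, tau) for q in grid])]
print(round(best, 2))  # close to the true 0.9-quantile, 10 + 2 * 1.2816
```

Replacing the grid with a network that maps data to `q_hat`, and summing the loss over several values of `tau`, gives the multi-task formulation the abstract describes.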

9.
Clin Trials ; 20(4): 362-369, 2023 08.
Article in English | MEDLINE | ID: mdl-37269222

ABSTRACT

Adaptive Enrichment Trials aim to make efficient use of data in a pivotal trial of a new targeted therapy to both (a) more precisely identify who benefits from that therapy and (b) improve the likelihood of successfully concluding that the drug is effective, while controlling the probability of false positives. There are a number of frameworks for conducting such a trial and decisions that must be made regarding how to identify that target subgroup. Among those decisions, one must choose how aggressively to restrict enrollment criteria based on the accumulating evidence in the trial. In this article, we empirically evaluate the impact of aggressive versus conservative enrollment restrictions on the power of the trial to detect an effect of treatment. We identify that, in some cases, a more aggressive strategy can substantially improve power. This additionally raises an important question regarding label indication: To what degree do we need a formal test of the hypothesis of no treatment effect in the exact population implied by the label indication? We discuss this question and evaluate how our answer for adaptive enrichment trials may relate to the answer implied by current practice for broad eligibility trials.


Subject(s)
Adaptive Clinical Trials as Topic , Research Design , Humans
10.
Stat Sin ; 33(1): 127-148, 2023 Jan.
Article in English | MEDLINE | ID: mdl-37153711

ABSTRACT

The goal of nonparametric regression is to recover an underlying regression function from noisy observations, under the assumption that the regression function belongs to a prespecified infinite-dimensional function space. In the online setting, in which the observations come in a stream, it is generally computationally infeasible to refit the whole model repeatedly. As yet, there are no methods that are both computationally efficient and statistically rate optimal. In this paper, we propose an estimator for online nonparametric regression. Notably, our estimator is an empirical risk minimizer in a deterministic linear space, which is quite different from existing methods that use random features and a functional stochastic gradient. Our theoretical analysis shows that this estimator obtains a rate-optimal generalization error when the regression function is known to live in a reproducing kernel Hilbert space. We also show, theoretically and empirically, that the computational cost of our estimator is much lower than that of other rate-optimal estimators proposed for this online setting.
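The flavor of an online empirical risk minimizer in a deterministic (non-random) linear space can be sketched with recursive least squares in a fixed Fourier basis. This is an illustration of the general idea only, not the paper's estimator or its rate-optimal construction:

```python
import numpy as np

rng = np.random.default_rng(5)

def basis(x, k=8):
    """Deterministic basis (no random features): constant plus the first k
    Fourier pairs on [0, 2*pi]."""
    j = np.arange(1, k + 1)
    return np.concatenate([[1.0], np.sin(j * x), np.cos(j * x)])

d = 17                   # 1 constant + 8 sines + 8 cosines
A = 1e-2 * np.eye(d)     # regularized Gram matrix, updated online
b = np.zeros(d)

# Stream of noisy observations of f(x) = sin(2x); each update costs O(d^2),
# with no refitting of the whole model as data arrive.
for _ in range(5000):
    x = rng.uniform(0, 2 * np.pi)
    y = np.sin(2 * x) + rng.normal(0, 0.1)
    phi = basis(x)
    A += np.outer(phi, phi)
    b += y * phi

theta = np.linalg.solve(A, b)        # empirical risk minimizer in the basis
print(round(theta @ basis(1.0), 2))  # ≈ sin(2.0)
```

The contrast with functional stochastic gradient methods is that the estimator here is always the exact minimizer over the fixed finite-dimensional space, maintained incrementally.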

11.
Elife ; 12, 2023 05 25.
Article in English | MEDLINE | ID: mdl-37227256

ABSTRACT

To appropriately defend against a wide array of pathogens, humans somatically generate highly diverse repertoires of B cell and T cell receptors (BCRs and TCRs) through a random process called V(D)J recombination. Receptor diversity is achieved during this process through both the combinatorial assembly of V(D)J-genes and the junctional deletion and insertion of nucleotides. While the Artemis protein is often regarded as the main nuclease involved in V(D)J recombination, the exact mechanism of nucleotide trimming is not understood. Using a previously published TCRβ repertoire sequencing data set, we have designed a flexible probabilistic model of nucleotide trimming that allows us to explore various mechanistically interpretable sequence-level features. We show that local sequence context, length, and GC nucleotide content in both directions of the wider sequence, together, can most accurately predict the trimming probabilities of a given V-gene sequence. Because GC nucleotide content is predictive of sequence-breathing, this model provides quantitative statistical evidence regarding the extent to which double-stranded DNA may need to be able to breathe for trimming to occur. We also see evidence of a sequence motif that appears to get preferentially trimmed, independent of GC-content-related effects. Further, we find that the inferred coefficients from this model provide accurate prediction for V- and J-gene sequences from other adaptive immune receptor loci. These results refine our understanding of how the Artemis nuclease may function to trim nucleotides during V(D)J recombination and provide another step toward understanding how V(D)J recombination generates diverse receptors and supports a powerful, unique immune response in healthy humans.


Subject(s)
Nucleotides , V(D)J Recombination , Humans , Nucleotides/metabolism , Base Composition
12.
Crit Care Med ; 51(4): 503-512, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36752628

ABSTRACT

OBJECTIVES: Withdrawal of life-sustaining therapies for perceived poor neurologic prognosis (WLST-N) is common after resuscitation from cardiac arrest and may bias outcome estimates from models trained using observational data. We compared several approaches to outcome prediction with the goal of identifying strategies to quantify and reduce this bias. DESIGN: Retrospective observational cohort study. SETTING: Two academic medical centers ("UPMC" and "University of Alabama Birmingham" [UAB]). PATIENTS: Comatose adults resuscitated from cardiac arrest. INTERVENTION: None. MEASUREMENTS AND MAIN RESULTS: As potential predictors, we considered clinical, laboratory, imaging, and quantitative electroencephalography data available early after hospital arrival. We followed patients until death, discharge, or awakening from coma. We used penalized Cox regression with a least absolute shrinkage and selection operator penalty and five-fold cross-validation to predict time to awakening in UPMC patients and then externally validated the model in UAB patients. This model censored patients after WLST-N, considering subsequent potential for awakening to be unknown. Next, we developed a penalized logistic model predicting awakening, which treated failure to awaken after WLST-N as a true observed outcome, and a separate logistic model predicting WLST-N. We scaled and centered individual patients' Cox and logistic predictions for awakening to allow direct comparison and then explored the difference in predictions across probabilities of WLST-N. Overall, 1,254 patients were included, and 29% awakened. Cox models performed well (mean area under the curve was 0.93 in the UPMC test sets and 0.83 in external validation). 
Logistic predictions of awakening were systematically more pessimistic than Cox-based predictions for patients at higher risk of WLST-N, suggesting potential for self-fulfilling prophecies to arise when failure to awaken after WLST-N is considered as the ground truth outcome. CONCLUSIONS: Compared with traditional binary outcome prediction, censoring outcomes after WLST-N may reduce potential for bias and self-fulfilling prophecies.


Subject(s)
Heart Arrest , Adult , Humans , Retrospective Studies , Heart Arrest/therapy , Coma/therapy , Time Factors , Prognosis
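The core contrast in this study, censoring at WLST-N versus treating non-awakening as an observed failure, can be seen in a minimal Kaplan-Meier sketch on an invented toy cohort (not study data, and far simpler than the penalized Cox models actually used):

```python
import numpy as np

def km_awakening_prob(times, awake, horizon):
    """Kaplan-Meier probability of awakening by `horizon`, treating exits
    without awakening (death, WLST-N) as right-censoring rather than as
    observed failures."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(awake)[order]
    at_risk, surv = len(t), 1.0
    for ti, ei in zip(t, e):
        if ti > horizon:
            break
        if ei:
            surv *= 1 - 1 / at_risk
        at_risk -= 1
    return 1 - surv

# Invented toy cohort: days to exit; 1 = awakened, 0 = died or WLST-N.
times = [2, 3, 3, 5, 6, 8, 10]
awake = [1, 0, 1, 0, 1, 0, 1]
horizon = 9

# Treating non-awakening after WLST-N as a hard outcome is more pessimistic.
naive = sum(a for t, a in zip(times, awake) if t <= horizon) / len(times)
print(round(naive, 2), round(km_awakening_prob(times, awake, horizon), 2))
```

The censored estimate is higher because it leaves open the possibility that patients who exited via WLST-N might have awakened, which is exactly the self-fulfilling-prophecy mechanism the abstract warns about.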
13.
BMC Med Res Methodol ; 23(1): 33, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36721082

ABSTRACT

BACKGROUND: There is increasing interest in clinical prediction models for rare outcomes such as suicide, psychiatric hospitalizations, and opioid overdose. Accurate model validation is needed to guide model selection and decisions about whether and how prediction models should be used. Split-sample estimation and validation of clinical prediction models, in which data are divided into training and testing sets, may reduce predictive accuracy and precision of validation. Using all data for estimation and validation increases sample size for both procedures, but validation must account for overfitting, or optimism. Our study compared split-sample and entire-sample methods for estimating and validating a suicide prediction model. METHODS: We compared performance of random forest models estimated in a sample of 9,610,318 mental health visits ("entire-sample") and in a 50% subset ("split-sample") as evaluated in a prospective validation sample of 3,754,137 visits. We assessed optimism of three internal validation approaches: for the split-sample prediction model, validation in the held-out testing set and, for the entire-sample model, cross-validation and bootstrap optimism correction. RESULTS: The split-sample and entire-sample prediction models showed similar prospective performance; the area under the curve (AUC, with 95% confidence interval) was 0.81 (0.77-0.85) for both. Performance estimates evaluated in the testing set for the split-sample model (AUC = 0.85 [0.82-0.87]) and via cross-validation for the entire-sample model (AUC = 0.83 [0.81-0.85]) accurately reflected prospective performance. Validation of the entire-sample model with bootstrap optimism correction overestimated prospective performance (AUC = 0.88 [0.86-0.89]).
Measures of classification accuracy, including sensitivity and positive predictive value at the 99th, 95th, 90th, and 75th percentiles of the risk score distribution, indicated similar conclusions: bootstrap optimism correction overestimated classification accuracy in the prospective validation set. CONCLUSIONS: While previous literature demonstrated the validity of bootstrap optimism correction for parametric models in small samples, this approach did not accurately validate performance of a rare-event prediction model estimated with random forests in a large clinical dataset. Cross-validation of prediction models estimated with all available data provides accurate independent validation while maximizing sample size.


Subject(s)
Research Design , Suicide , Humans , Sample Size , Risk Factors , Mental Health
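The two estimation-and-validation strategies compared above can be sketched on a synthetic rare-outcome task; the data, learner settings, and scale are illustrative only, and the bootstrap optimism correction itself is not reproduced:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a rare-outcome task (about 5% positives).
X, y = make_classification(n_samples=4000, weights=[0.95], random_state=0)

# Split-sample: estimate on half the data, validate on the held-out half.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
split_auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])

# Entire-sample: use all data for estimation, validate by cross-validation.
cv_auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=5, scoring="roc_auc").mean()

print(round(split_auc, 2), round(cv_auc, 2))
```

The study's conclusion is that for rare events the entire-sample model with cross-validated internal validation keeps all the data while still producing honest performance estimates.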
14.
Biometrics ; 79(2): 811-825, 2023 06.
Article in English | MEDLINE | ID: mdl-34854476

ABSTRACT

The current approach to using machine learning (ML) algorithms in healthcare is to either require clinician oversight for every use case or use their predictions without any human oversight. We explore a middle ground that lets ML algorithms abstain from making a prediction to simultaneously improve their reliability and reduce the burden placed on human experts. To this end, we present a general penalized loss minimization framework for training selective prediction-set (SPS) models, which choose to either output a prediction set or abstain. The resulting models abstain when the outcome is difficult to predict accurately, such as on subjects who are too different from the training data, and achieve higher accuracy on those they do give predictions for. We then introduce a model-agnostic, statistical inference procedure for the coverage rate of an SPS model that ensembles individual models trained using K-fold cross-validation. We find that SPS ensembles attain prediction-set coverage rates closer to the nominal level and have narrower confidence intervals for their marginal coverage rates. We apply our method to train neural networks that abstain more for out-of-sample images on the MNIST digit prediction task and achieve higher predictive accuracy for ICU patients compared to existing approaches.


Subject(s)
Machine Learning , Neural Networks, Computer , Humans , Reproducibility of Results , Algorithms , Research Design
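A much simpler cousin of the SPS idea, abstaining when the model's confidence is low, shows the accuracy/coverage trade-off on synthetic data; the paper's penalized prediction-set training and ensemble inference are not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, random_state=0)
clf = LogisticRegression().fit(X[:2000], y[:2000])

proba = clf.predict_proba(X[2000:])
pred = proba.argmax(axis=1)
keep = proba.max(axis=1) >= 0.8     # abstain below this confidence threshold

acc_all = (pred == y[2000:]).mean()
acc_kept = (pred[keep] == y[2000:][keep]).mean()
coverage = keep.mean()              # fraction of cases the model answers
print(round(acc_all, 2), round(acc_kept, 2), round(coverage, 2))
```

Accuracy on the retained cases is typically higher than overall accuracy, at the cost of answering fewer cases; the SPS framework makes that trade-off part of the training objective rather than a post hoc threshold.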
15.
Nephrol Dial Transplant ; 38(4): 834-844, 2023 03 31.
Article in English | MEDLINE | ID: mdl-35022767

ABSTRACT

Acute kidney injury (AKI) is a growing epidemic and is independently associated with increased risk of death, chronic kidney disease (CKD) and cardiovascular events. Randomized controlled trials (RCTs) in this domain are notoriously challenging, and many clinical studies in AKI have yielded inconclusive findings. Underlying this conundrum is the inherent heterogeneity of AKI in its etiology, presentation and course. AKI is best understood as a syndrome, and identification of AKI subphenotypes is needed to elucidate the disease's myriad etiologies and to tailor effective prevention and treatment strategies. Conventional RCTs are logistically cumbersome and often feature highly selected patient populations that limit external generalizability; alternative trial designs should therefore be considered when appropriate. In this narrative review of recent developments in AKI trials based on the Kidney Disease Clinical Trialists (KDCT) 2020 meeting, we discuss barriers to and strategies for improved design and implementation of clinical trials for AKI patients, including predictive and prognostic enrichment techniques, the use of pragmatic trials and adaptive trials.


Subject(s)
Acute Kidney Injury , Humans , Acute Kidney Injury/diagnosis , Acute Kidney Injury/etiology , Acute Kidney Injury/therapy , Prognosis
16.
Article in English | MEDLINE | ID: mdl-35254989

ABSTRACT

In life sciences, high-throughput techniques typically lead to high-dimensional data, and often the number of covariates is much larger than the number of observations. This inherently brings multicollinearity, which challenges statistical analysis in a linear regression framework. Penalization methods such as the lasso, ridge regression, the group lasso, and convex combinations thereof, which introduce additional conditions on regression variables, have proven effective. In this study, we introduce a novel approach combining the lasso and the standardized group lasso, leading to meaningful weighting of the predicted ("fitted") outcome, which is of primary importance, e.g., in breeding populations. This "fitted" sparse-group lasso was implemented as a proximal-averaged gradient descent method and is part of the R package "seagull", available on CRAN. For evaluation of the novel method, we executed an extensive simulation study. We simulated genotypes and phenotypes resembling data of a dairy cattle population. Genotypes at thousands of genomic markers were used as covariates to fit a quantitative response. The proximity of markers on a chromosome determined grouping. In the majority of simulated scenarios, the new method revealed improved prediction ability compared to other penalization approaches and was able to localize the signals of simulated features.


Subject(s)
Genome , Animals , Cattle , Genome/genetics , Genotype , Computer Simulation , Linear Models , Phenotype
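The combined lasso/group-lasso penalty can be sketched through its proximal operator, the core step of a proximal gradient method like the one "seagull" implements in R; this Python version is illustrative only and is not the package's algorithm:

```python
import numpy as np

def prox_sparse_group(beta, groups, lam=0.1, alpha=0.5):
    """Proximal operator of alpha*lam*||b||_1 + (1-alpha)*lam*sum_g ||b_g||_2:
    soft-threshold each coordinate (lasso part), then shrink each group's
    norm (group-lasso part). One such step per proximal-gradient iteration."""
    groups = np.asarray(groups)
    # Lasso part: elementwise soft-thresholding.
    b = np.sign(beta) * np.maximum(np.abs(beta) - alpha * lam, 0.0)
    # Group-lasso part: shrink or zero out each group's norm.
    out = np.zeros_like(b)
    for g in np.unique(groups):
        idx = groups == g
        norm = np.linalg.norm(b[idx])
        if norm > (1 - alpha) * lam:
            out[idx] = b[idx] * (1 - (1 - alpha) * lam / norm)
    return out

# Small coefficients are zeroed; surviving groups are shrunk toward zero.
print(prox_sparse_group(np.array([0.9, -0.02, 0.03, 0.5]), [0, 0, 1, 1]))
```

Grouping here plays the role of marker proximity on a chromosome: the group term lets whole blocks of markers enter or leave the model together, while the lasso term keeps individual coefficients sparse within a block.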
17.
Surv Ophthalmol ; 68(3): 539-555, 2023.
Article in English | MEDLINE | ID: mdl-35970232

ABSTRACT

Every year, millions of children are exposed to general anesthesia while undergoing surgical and diagnostic procedures. In the field of ophthalmology, 44,000 children are exposed to general anesthesia annually for strabismus surgery alone. While it is clear that general anesthesia is necessary for sedation and pain minimization during surgical procedures, the possibility of neurotoxic impairment from exposure is of concern. In animals, there is strong evidence linking early anesthesia exposure to abnormal neural development, but in humans the effects of anesthesia are debated. Many aspects of human vision develop within the first year of life, making the visual system vulnerable to early adverse experiences and potentially to early exposure to general anesthesia. We attempt to address whether the visual system is affected by early postnatal exposure to general anesthesia. We first summarize key mechanisms that could account for the neurotoxic effects of general anesthesia on the developing brain, then review existing literature on the effects of early anesthesia exposure on the visual system in both animals and humans and on neurocognitive development in humans. Finally, we conclude by proposing future directions for research that could address unanswered questions regarding the impact of general anesthesia on visual development.


Subject(s)
Anesthesia, General , Brain , Child , Animals , Humans , Anesthesia, General/adverse effects
18.
medRxiv ; 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38168328

ABSTRACT

We construct non-linear machine learning (ML) prediction models for systolic and diastolic blood pressure (SBP, DBP) using demographic and clinical variables and polygenic risk scores (PRSs). We developed a two-model ensemble, consisting of a baseline model, where prediction is based on demographic and clinical variables only, and a genetic model, where we also include PRSs. We evaluate the use of a linear versus a non-linear model at both the baseline and the genetic model levels and assess the improvement in performance when incorporating multiple PRSs. We report the ensemble model's performance as percentage variance explained (PVE) on a held-out test dataset. A non-linear baseline model improved the PVE from 28.1% to 30.1% (SBP) and from 14.3% to 17.4% (DBP) compared with a linear baseline model. Including seven PRSs computed based on the largest available GWAS of SBP/DBP improved the genetic model PVE from 4.8% to 5.1% (SBP) and from 4.7% to 5.0% (DBP) compared to using a single PRS. Adding an additional 14 PRSs computed based on two independent GWASs further increased the genetic model PVE to 6.3% (SBP) and 5.7% (DBP). PVE differed across self-reported race/ethnicity groups, with primarily non-White groups benefitting from the inclusion of additional PRSs.

19.
JMIR Cardio ; 6(2): e38040, 2022 Nov 02.
Article in English | MEDLINE | ID: mdl-36322114

ABSTRACT

BACKGROUND: Many machine learning approaches are limited to classification of outcomes rather than longitudinal prediction. One strategy to use machine learning in clinical risk prediction is to classify outcomes over a given time horizon. However, it is not well known how to identify the optimal time horizon for risk prediction. OBJECTIVE: In this study, we aim to identify an optimal time horizon for classification of incident myocardial infarction (MI) using machine learning approaches looped over outcomes with increasing time horizons. Additionally, we sought to compare the performance of these models with the traditional Framingham Heart Study (FHS) coronary heart disease gender-specific Cox proportional hazards regression model. METHODS: We analyzed data from a single clinic visit of 5201 participants of a cardiovascular health study. We examined 61 variables collected from this baseline exam, including demographic and biologic data, medical history, medications, serum biomarkers, and electrocardiographic and echocardiographic data. We compared several machine learning methods (eg, random forest, L1 regression, gradient boosted decision tree, support vector machine, and k-nearest neighbor) trained to predict incident MI that occurred within time horizons ranging from 500-10,000 days of follow-up. Models were compared on a 20% held-out testing set using area under the receiver operating characteristic curve (AUROC). Variable importance was performed for random forest and L1 regression models across time points. We compared results with the FHS coronary heart disease gender-specific Cox proportional hazards regression functions. RESULTS: There were 4190 participants included in the analysis, with 2522 (60.2%) female participants and an average age of 72.6 years. Over 10,000 days of follow-up, there were 813 incident MI events. The machine learning models were most predictive over moderate follow-up time horizons (ie, 1500-2500 days).
Overall, the L1 (lasso) logistic regression demonstrated the strongest classification accuracy across all time horizons. This model was most predictive at 1500 days of follow-up, with an AUROC of 0.71. The most influential variables differed by follow-up time and model, with gender being the most important feature for the L1 regression and weight for the random forest model across all time frames. Compared with the Framingham Cox function, the L1 and random forest models performed better across all time frames beyond 1500 days. CONCLUSIONS: In a population free of coronary heart disease, machine learning techniques can be used to predict incident MI at varying time horizons with reasonable accuracy, with the strongest prediction accuracy in moderate follow-up periods. Validation across additional populations is needed to confirm the validity of this approach in risk prediction.
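The looped-horizon setup can be sketched with simulated time-to-event data and a single learner; the study itself compares several ML methods on real cohort data, and every number below is invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 4000
X = rng.normal(size=(n, 5))
risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))  # latent event risk
event_day = rng.exponential(5000 / risk)             # higher risk -> earlier MI

tr, te = slice(0, 3000), slice(3000, None)

def auc_by_horizon(horizons=(500, 1500, 2500, 5000, 10000)):
    """Recast 'incident MI within each horizon' as a binary label and
    report held-out AUROC per horizon."""
    out = {}
    for h in horizons:
        y = (event_day <= h).astype(int)
        if y[tr].sum() < 10 or y[te].sum() < 10:
            continue                                 # too few events to fit
        m = LogisticRegression().fit(X[tr], y[tr])
        out[h] = roc_auc_score(y[te], m.predict_proba(X[te])[:, 1])
    return out

for h, auc in auc_by_horizon().items():
    print(h, round(auc, 2))
```

Sweeping the horizon like this exposes the trade-off the abstract describes: very short horizons have few events to learn from, while very long horizons dilute the association between baseline features and the outcome.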

20.
Br J Community Nurs ; 27(11): 540-544, 2022 Nov 02.
Article in English | MEDLINE | ID: mdl-36327210

ABSTRACT

Multimorbidity is increasingly common and inevitably results in uncertainties about health, care and the future. Such uncertainties may be experienced by patients, carers and health professionals. Given the ubiquitous presence of uncertainty, all professionals should be prepared to approach and address it in clinical practice. Uncertainty in multimorbidity can rarely be eliminated, and so, must be carefully addressed and communicated; however, there is little evidence on how to approach it. Key areas are: recognising the existence of uncertainty, acknowledging it, and communicating to achieve a shared understanding. Evaluation of what has been discussed, and preparedness to repeat such conversations are also important. Future research should explore optimal communication of uncertainty in multimorbidity.


Subject(s)
Caregivers , Multimorbidity , Humans , Uncertainty , Health Personnel , Communication