Results 1 - 15 of 15
1.
Am J Respir Crit Care Med; 207(12): 1602-1611, 2023 Jun 15.
Article in English | MEDLINE | ID: mdl-36877594

ABSTRACT

Rationale: A recent randomized trial found that using a bougie did not increase the incidence of successful intubation on first attempt in critically ill adults. The average effect of treatment in a trial population, however, may differ from effects for individuals. Objective: We hypothesized that application of a machine learning model to data from a clinical trial could estimate the effect of treatment (bougie vs. stylet) for individual patients based on their baseline characteristics ("individualized treatment effects"). Methods: This was a secondary analysis of the BOUGIE (Bougie or Stylet in Patients Undergoing Intubation Emergently) trial. A causal forest algorithm was used to model differences in outcome probabilities by randomized group assignment (bougie vs. stylet) for each patient in the first half of the trial (training cohort). This model was used to predict individualized treatment effects for each patient in the second half (validation cohort). Measurements and Main Results: Of 1,102 patients in the BOUGIE trial, 558 (50.6%) were the training cohort, and 544 (49.4%) were the validation cohort. In the validation cohort, individualized treatment effects predicted by the model significantly modified the effect of trial group assignment on the primary outcome (P value for interaction = 0.02; adjusted Qini coefficient, 2.46). The most important model variables were difficult airway characteristics, body mass index, and Acute Physiology and Chronic Health Evaluation II score. Conclusions: In this hypothesis-generating secondary analysis of a randomized trial with no average treatment effect and no treatment effect in any prespecified subgroups, a causal forest machine learning algorithm identified patients who appeared to benefit from the use of a bougie over a stylet and from the use of a stylet over a bougie using complex interactions between baseline patient and operator characteristics.
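The validation step described above scores predicted individualized treatment effects against observed outcomes, and a Qini-style uplift curve is one common way to do so: rank patients by predicted benefit, then track the incremental number of successes attributable to treating the top-ranked group. Below is a minimal plain-Python sketch of that idea (illustrative only; the BOUGIE analysis used a causal forest and an adjusted Qini coefficient, which are more involved):

```python
def qini_curve(ite_pred, treated, success):
    # Sort patients by predicted benefit of treatment (descending).
    order = sorted(range(len(ite_pred)), key=lambda i: -ite_pred[i])
    qini = [0.0]
    nt = nc = st = sc = 0  # counts and successes per arm so far
    for i in order:
        if treated[i]:
            nt += 1
            st += success[i]
        else:
            nc += 1
            sc += success[i]
        # Incremental successes attributable to treating the top-k group:
        # treated successes minus control successes rescaled to arm size.
        qini.append(st - sc * (nt / nc) if nc else float(st))
    return qini

# Toy validation cohort: 4 patients, predicted effect, arm, outcome.
qini_curve([0.9, 0.8, 0.2, 0.1], [1, 0, 1, 0], [1, 0, 0, 0])
# -> [0.0, 1.0, 1.0, 1.0, 1.0]
```

The Qini coefficient itself is the area between this curve and the diagonal "random targeting" line; a curve that rises early indicates the model ranked true responders first.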


Subjects
Critical Illness; Intubation, Intratracheal; Adult; Humans; Critical Illness/therapy; Intubation, Intratracheal/adverse effects; Calibration; Laryngoscopy
2.
JAMA; 331(14): 1195-1204, 2024 Apr 9.
Article in English | MEDLINE | ID: mdl-38501205

ABSTRACT

Importance: Among critically ill adults, randomized trials have not found oxygenation targets to affect outcomes overall. Whether the effects of oxygenation targets differ based on an individual's characteristics is unknown. Objective: To determine whether an individual's characteristics modify the effect of lower vs higher peripheral oxygen saturation (SpO2) targets on mortality. Design, Setting, and Participants: A machine learning model to predict the effect of treatment with a lower vs higher SpO2 target on mortality for individual patients was derived in the Pragmatic Investigation of Optimal Oxygen Targets (PILOT) trial and externally validated in the Intensive Care Unit Randomized Trial Comparing Two Approaches to Oxygen Therapy (ICU-ROX) trial. Critically ill adults received invasive mechanical ventilation in an intensive care unit (ICU) in the United States between July 2018 and August 2021 for PILOT (n = 1682) and in 21 ICUs in Australia and New Zealand between September 2015 and May 2018 for ICU-ROX (n = 965). Exposures: Randomization to a lower vs higher SpO2 target group. Main Outcomes and Measures: 28-day mortality. Results: In the ICU-ROX validation cohort, the predicted effect of treatment with a lower vs higher SpO2 target for individual patients ranged from a 27.2% absolute reduction to a 34.4% absolute increase in 28-day mortality. For example, patients predicted to benefit from a lower SpO2 target had a higher prevalence of acute brain injury, whereas patients predicted to benefit from a higher SpO2 target had a higher prevalence of sepsis and abnormally elevated vital signs. Patients predicted to benefit from a lower SpO2 target experienced lower mortality when randomized to the lower SpO2 group, whereas patients predicted to benefit from a higher SpO2 target experienced lower mortality when randomized to the higher SpO2 group (likelihood ratio test for effect modification P = .02). The use of the SpO2 target predicted to be best for each patient, instead of the randomized SpO2 target, would have reduced the absolute overall mortality by 6.4% (95% CI, 1.9%-10.9%). Conclusions and Relevance: Oxygenation targets that are individualized using machine learning analyses of randomized trials may reduce mortality for critically ill adults. A prospective trial evaluating the use of individualized oxygenation targets is needed.
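The 6.4% figure above compares observed mortality with estimated mortality under model-recommended targets. Under 1:1 randomization, a simple unbiased estimate of mortality under an individualized policy uses only the patients whose random assignment happened to match the model's recommendation. A hedged sketch of that estimator (illustrative, not the authors' code; names are made up):

```python
def policy_mortality(pred_best, assigned, died):
    """Estimate mortality if every patient received their predicted-best
    target, using a 1:1 randomized cohort.

    Patients whose random assignment matched the model recommendation are
    a representative sample of outcomes under the individualized policy.
    """
    matched = [d for p, a, d in zip(pred_best, assigned, died) if p == a]
    return sum(matched) / len(matched)

# Toy cohort: recommended arm, randomized arm, 28-day death indicator.
pred_best = [0, 0, 1, 1]
assigned = [0, 1, 1, 0]
died = [0, 1, 0, 1]
policy_mortality(pred_best, assigned, died)  # 0.0, vs 0.5 observed overall
```

The estimated benefit of individualization is the observed overall mortality minus this policy estimate, and its uncertainty would be quantified by bootstrap or an analytic CI, as in the paper's 95% CI.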


Subjects
Critical Illness; Oxygen; Adult; Humans; Oxygen/therapeutic use; Critical Illness/therapy; Respiration, Artificial; Prospective Studies; Oxygen Inhalation Therapy; Intensive Care Units
3.
J Physiol; 601(11): 2139-2163, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36086823

ABSTRACT

Low-protein (LP) diets are associated with a decreased risk of diabetes in humans, and promote leanness and glycaemic control in both rodents and humans. While the effects of an LP diet on glycaemic control are mediated by reduced levels of the branched-chain amino acids, we have observed that reducing dietary levels of the other six essential amino acids leads to changes in body composition. Here, we find that dietary histidine plays a key role in the response to an LP diet in male C57BL/6J mice. Specifically, reducing dietary histidine by 67% reduces weight gain in young, lean male mice, lowering both adipose and lean mass without altering glucose metabolism, and rapidly reverses diet-induced obesity and hepatic steatosis in diet-induced obese male mice while increasing insulin sensitivity. This normalization of metabolic health was associated not with caloric restriction or increased activity, but with increased energy expenditure. Surprisingly, the effects of histidine restriction do not require the energy balance hormone Fgf21. Histidine restriction begun in midlife promoted leanness and glucose tolerance in aged males but not females, and did not affect frailty or lifespan in either sex. Finally, we demonstrate that variation in dietary histidine levels helps to explain body mass index differences in humans. Overall, our findings demonstrate that dietary histidine is a key regulator of weight and body composition in male mice and in humans, and suggest that reducing dietary histidine may be a translatable option for the treatment of obesity. KEY POINTS: Protein restriction (PR) promotes metabolic health in rodents and humans and extends rodent lifespan. Restriction of specific individual essential amino acids can recapitulate the benefits of PR. Reduced histidine promotes leanness and increased energy expenditure in male mice. Reduced histidine does not extend the lifespan of mice when begun in midlife.
Dietary levels of histidine are positively associated with body mass index in humans.


Subjects
Histidine; Thinness; Male; Humans; Animals; Mice; Aged; Histidine/metabolism; Mice, Inbred C57BL; Diet; Obesity/metabolism; Proteins; Energy Metabolism
4.
Crit Care Med; 51(12): 1697-1705, 2023 Dec 1.
Article in English | MEDLINE | ID: mdl-37378460

ABSTRACT

OBJECTIVES: To identify and validate novel COVID-19 subphenotypes with potential heterogeneous treatment effects (HTEs) using electronic health record (EHR) data and 33 unique biomarkers. DESIGN: Retrospective cohort study of adults presenting for acute care, with analysis of biomarkers from residual blood collected during routine clinical care. Latent profile analysis (LPA) of biomarker and EHR data identified subphenotypes of COVID-19 inpatients, which were validated using a separate cohort of patients. HTE for glucocorticoid use among subphenotypes was evaluated using both an adjusted logistic regression model and propensity matching analysis for in-hospital mortality. SETTING: Emergency departments from four medical centers. PATIENTS: Patients diagnosed with COVID-19 based on International Classification of Diseases, 10th Revision codes and laboratory test results. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Biomarker levels generally paralleled illness severity, with higher levels among more severely ill patients. LPA of 522 COVID-19 inpatients from three sites identified two profiles: profile 1 (n = 332), with higher levels of albumin and bicarbonate, and profile 2 (n = 190), with higher inflammatory markers. Profile 2 patients had higher median length of stay (7.4 vs 4.1 d; p < 0.001) and in-hospital mortality compared with profile 1 patients (25.8% vs 4.8%; p < 0.001). These were validated in a separate, single-site cohort (n = 192), which demonstrated similar outcome differences. HTE was observed (p = 0.03), with glucocorticoid treatment associated with increased mortality for profile 1 patients (odds ratio = 4.54). CONCLUSIONS: In this multicenter study combining EHR data with research biomarker analysis of patients with COVID-19, we identified novel profiles with divergent clinical outcomes and differential treatment responses.


Subjects
COVID-19; Adult; Humans; Retrospective Studies; Glucocorticoids/therapeutic use; Biomarkers; Hospital Mortality
5.
Am J Respir Crit Care Med; 204: 403-411, 2021 Aug 15.
Article in English | MEDLINE | ID: mdl-33891529

ABSTRACT

RATIONALE: Variation in hospital mortality has been described for coronavirus disease 2019 (COVID-19), but the factors that explain these differences remain unclear. OBJECTIVE: Our objective was to utilize a large, nationally representative dataset of critically ill adults with COVID-19 to determine which factors explain mortality variability. METHODS: In this multicenter cohort study, we examined adults hospitalized in intensive care units with COVID-19 at 70 United States hospitals between March and June 2020. The primary outcome was 28-day mortality. We examined patient-level and hospital-level variables. Mixed-effects logistic regression was used to identify factors associated with interhospital variation. The median odds ratio (OR) was calculated to compare outcomes in higher- vs. lower-mortality hospitals. A gradient boosted machine algorithm was developed for individual-level mortality models. MEASUREMENTS AND MAIN RESULTS: A total of 4,019 patients were included, 1,537 (38%) of whom died by 28 days. Mortality varied considerably across hospitals (0-82%). After adjustment for patient- and hospital-level domains, interhospital variation was attenuated (OR decline from 2.06 [95% CI, 1.73-2.37] to 1.22 [95% CI, 1.00-1.38]), with the greatest changes occurring with adjustment for acute physiology, socioeconomic status, and strain. For individual patients, the relative contribution of each domain to mortality risk was: acute physiology (49%), demographics and comorbidities (20%), socioeconomic status (12%), strain (9%), hospital quality (8%), and treatments (3%). CONCLUSION: There is considerable interhospital variation in mortality for critically ill patients with COVID-19, which is mostly explained by hospital-level socioeconomic status, strain, and acute physiologic differences. Individual mortality is driven mostly by patient-level factors.
This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/).
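The median odds ratio reported above summarizes interhospital variation from the mixed-effects model's random-intercept variance via MOR = exp(sqrt(2·sigma²)·Phi⁻¹(0.75)). A small stdlib-only sketch of that formula (the variance value in the example is back-calculated for illustration, not taken from the paper):

```python
import math
from statistics import NormalDist

def median_odds_ratio(between_hospital_variance):
    # MOR = exp(sqrt(2 * sigma^2) * Phi^-1(0.75)), where sigma^2 is the
    # random-intercept (between-hospital) variance on the log-odds scale.
    z75 = NormalDist().inv_cdf(0.75)  # ~0.6745
    return math.exp(math.sqrt(2.0 * between_hospital_variance) * z75)

# A variance of ~0.574 corresponds to the unadjusted MOR of 2.06 quoted
# above (back-calculated for illustration).
median_odds_ratio(0.574)  # ~2.06
```

An MOR of 1.0 means identical hospitals; the reported drop from 2.06 to 1.22 after adjustment reflects how much of the between-hospital log-odds variance the patient- and hospital-level covariates absorbed.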


Subjects
Algorithms; COVID-19/epidemiology; Critical Illness/therapy; Intensive Care Units/statistics & numerical data; Aged; Comorbidity; Critical Illness/epidemiology; Female; Follow-Up Studies; Hospital Mortality/trends; Humans; Incidence; Male; Middle Aged; Prognosis; Retrospective Studies; Risk Factors; SARS-CoV-2; Survival Rate/trends; United States/epidemiology
6.
Crit Care Med; 49(10): 1694-1705, 2021 Oct 1.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03).
CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists that could be targeted for more timely therapy.
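The per-hour odds ratios above compound multiplicatively on the odds scale, not the risk scale, so translating a multi-hour delay into a change in mortality risk requires converting risk to odds and back. A short illustrative sketch (the baseline risk below is invented for the example; only the OR of 1.04 per hour comes from the abstract):

```python
def risk_after_delay(baseline_risk, odds_ratio_per_hour, hours):
    # Odds-scale effects multiply: risk -> odds, scale by OR^hours,
    # then convert back to a probability.
    odds = baseline_risk / (1.0 - baseline_risk)
    odds *= odds_ratio_per_hour ** hours
    return odds / (1.0 + odds)

# A hypothetical 20% baseline mortality with OR 1.04 per hour of
# order delay rises to ~21.9% after a 3-hour delay.
risk_after_delay(0.20, 1.04, 3)
```

This also shows why the high-risk subgroup (OR 1.07/hour) accumulates harm faster: the same delay multiplies its odds by 1.07 raised to the number of hours rather than 1.04.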


Subjects
Anti-Bacterial Agents/administration & dosage; Phenotype; Sepsis/genetics; Time-to-Treatment/statistics & numerical data; Aged; Aged, 80 and over; Anti-Bacterial Agents/therapeutic use; Emergency Service, Hospital/organization & administration; Emergency Service, Hospital/statistics & numerical data; Female; Hospitalization/statistics & numerical data; Humans; Illinois/epidemiology; Male; Middle Aged; Prospective Studies; Retrospective Studies; Sepsis/drug therapy; Sepsis/physiopathology; Time Factors
7.
J Addict Med; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38776423

ABSTRACT

OBJECTIVE: A trial comparing extended-release naltrexone and sublingual buprenorphine-naloxone demonstrated higher relapse rates in individuals randomized to extended-release naltrexone. The effectiveness of treatment might vary based on patient characteristics. We hypothesized that causal machine learning would identify individualized treatment effects for each medication. METHODS: This is a secondary analysis of a multicenter randomized trial that compared the effectiveness of extended-release naltrexone versus buprenorphine-naloxone for preventing relapse of opioid misuse. Three machine learning models were derived using all trial participants with 50% randomly selected for training (n = 285) and the remaining 50% for validation. Individualized treatment effect was measured by the Qini value and c-for-benefit, with the absence of relapse denoting treatment success. Patients were grouped into quartiles by predicted individualized treatment effect to examine differences in characteristics and the observed treatment effects. RESULTS: The best-performing model had a Qini value of 4.45 (95% confidence interval, 1.02-7.83) and a c-for-benefit of 0.63 (95% confidence interval, 0.53-0.68). The quartile most likely to benefit from buprenorphine-naloxone had a 35% absolute benefit from this treatment, and at study entry, they had a high median opioid withdrawal score (P < 0.001), used cocaine on more days over the prior 30 days than other quartiles (P < 0.001), and had highest proportions with alcohol and cocaine use disorder (P ≤ 0.02). Quartile 4 individuals were predicted to be most likely to benefit from extended-release naltrexone, with the greatest proportion having heroin drug preference (P = 0.02) and all experiencing homelessness (P < 0.001). CONCLUSIONS: Causal machine learning identified differing individualized treatment effects between medications based on characteristics associated with preventing relapse.
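The quartile analysis described above checks whether the model's predicted ranking of individualized treatment effects tracks observed arm differences. A plain-Python sketch of that grouping step (illustrative only; the study's primary metrics were the Qini value and c-for-benefit, and the cohort size below is a toy example assuming it divides evenly into quartiles):

```python
def observed_effect_by_quartile(ite_pred, treated, success):
    # Sort by predicted benefit, split into 4 equal groups, and compare
    # observed success rates between randomized arms within each group.
    order = sorted(range(len(ite_pred)), key=lambda i: ite_pred[i])
    q = len(order) // 4  # assumes n divisible by 4 for simplicity
    effects = []
    for g in range(4):
        idx = order[g * q:(g + 1) * q]
        t = [success[i] for i in idx if treated[i]]
        c = [success[i] for i in idx if not treated[i]]
        effects.append(sum(t) / len(t) - sum(c) / len(c))
    return effects

# Toy data: 8 patients, one treated and one control per quartile; only
# the top two quartiles show an observed benefit of treatment.
observed_effect_by_quartile(
    [1, 2, 3, 4, 5, 6, 7, 8],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 1, 0, 1, 0],
)  # -> [0.0, 0.0, 1.0, 1.0]
```

If the model ranks patients well, observed effects should increase across quartiles, as the 35% absolute benefit in the top buprenorphine-naloxone quartile illustrates.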

8.
Am J Med Qual; 38(3): 147-153, 2023.
Article in English | MEDLINE | ID: mdl-37125670

ABSTRACT

Early warning scores are algorithms designed to identify clinical deterioration. Current literature is predominantly in non-Veteran populations; studies in Veterans are lacking. This study was a prospective quality improvement project deploying and assessing the National Early Warning Score (NEWS) at the Kansas City VA Medical Center. Performance of NEWS was assessed by its discrimination for predicting a composite outcome of intensive care unit transfer or mortality within 24 hours, measured by the area under the receiver operating characteristic curve. A total of 4,781 Veterans with 142,375 NEWS values were included. The NEWS area under the receiver operating characteristic curve for the composite outcome was 0.72 (95% CI, 0.71-0.74), indicating acceptable predictive accuracy. A NEWS of ≥7 was more likely to be associated with the composite outcome than a NEWS of <7 (13.6% vs 0.8%; P < 0.001). This is one of the first studies to demonstrate successful deployment of NEWS in a Veteran population, with important implications across the Veterans Health Administration.
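The area under the receiver operating characteristic curve quoted above is equivalent to the probability that a randomly chosen patient who had the outcome received a higher NEWS than one who did not (the Mann-Whitney formulation, with ties counted half). A minimal sketch of that computation (O(n²), written for clarity rather than speed):

```python
def auroc(scores, labels):
    # Probability a random positive case outranks a random negative case,
    # ties counting half: Mann-Whitney U / (n_pos * n_neg).
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.72 therefore means that, for 72% of outcome/no-outcome patient pairs, the patient who deteriorated had the higher score.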


Subjects
Early Warning Score; Humans; Retrospective Studies; Prospective Studies; Quality Improvement; ROC Curve; Risk Assessment; Intensive Care Units; Hospital Mortality
9.
JAMIA Open; 6(4): ooad109, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38144168

ABSTRACT

Objectives: To develop and externally validate machine learning models using structured and unstructured electronic health record data to predict postoperative acute kidney injury (AKI) across inpatient settings. Materials and Methods: Data for adult postoperative admissions to the Loyola University Medical Center (2009-2017) were used for model development and admissions to the University of Wisconsin-Madison (2009-2020) were used for validation. Structured features included demographics, vital signs, laboratory results, and nurse-documented scores. Unstructured text from clinical notes was converted into concept unique identifiers (CUIs) using the clinical Text Analysis and Knowledge Extraction System. The primary outcome was the development of Kidney Disease: Improving Global Outcomes (KDIGO) stage 2 AKI within 7 days after leaving the operating room. We derived unimodal extreme gradient boosting machine (XGBoost) and elastic net logistic regression (GLMNET) models using structured-only data and multimodal models combining structured data with CUI features. Model comparison was performed using the area under the receiver operating characteristic curve (AUROC), with DeLong's test for statistical differences. Results: The study cohort included 138,389 adult patient admissions (mean [SD] age 58 [16] years; 11,506 [8%] African American; and 70,826 [51%] female) across the 2 sites. Of those, 2,959 (2.1%) developed stage 2 AKI or higher. Across all data types, XGBoost outperformed GLMNET (mean AUROC 0.81 [95% confidence interval (CI), 0.80-0.82] vs 0.78 [95% CI, 0.77-0.79]). The multimodal XGBoost model incorporating CUIs parameterized as term frequency-inverse document frequency (TF-IDF) showed the highest discrimination performance (AUROC 0.82 [95% CI, 0.81-0.83]) over unimodal models (AUROC 0.79 [95% CI, 0.78-0.80]). Discussion: A multimodality approach with structured data and TF-IDF weighting of CUIs increased model performance over structured data-only models.
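Term frequency-inverse document frequency weighting, as applied to the CUI counts above, downweights concepts that appear in most admissions so that rarer, more discriminative concepts dominate. A simplified sketch of one common TF-IDF variant (the study's exact weighting scheme may differ):

```python
import math

def tfidf(docs):
    # docs: list of dicts mapping CUI -> count within one admission's notes.
    n = len(docs)
    df = {}  # document frequency: in how many admissions each CUI appears
    for d in docs:
        for cui in d:
            df[cui] = df.get(cui, 0) + 1
    weighted = []
    for d in docs:
        total = sum(d.values())
        # term frequency (count / doc length) times log inverse doc freq
        weighted.append({cui: (c / total) * math.log(n / df[cui])
                         for cui, c in d.items()})
    return weighted

# A CUI present in every admission gets weight 0; a rarer CUI keeps weight.
tfidf([{"C001": 2, "C002": 2}, {"C001": 1}])
```

The resulting weighted CUI vectors are what get concatenated with the structured features for the multimodal XGBoost model.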
Conclusion: These findings highlight the predictive power of CUIs when merged with structured data for clinical prediction models, which may improve the detection of postoperative AKI.

10.
WMJ; 121(4): 263-268, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36637835

ABSTRACT

INTRODUCTION: Tobacco dependence treatment is usually offered in primary care settings. Yet, if many patients who smoke do not access primary care, cessation interventions may be missing those who most need them. This study describes Wisconsin adults' health care utilization by smoking status. METHODS: Data were analyzed from 1,726 individuals participating in a population-based, cross-sectional, in-person health survey of Wisconsin residents (2014-2016). Demographic characteristics were compared across smoking status using Wald chi-square tests weighted for the complex survey design. Odds ratios were calculated using multivariate logistic regression models. RESULTS: Of 1,726 respondents, 15.3% reported current smoking, 25.4% former smoking, and 59.4% never smoking. Those currently smoking were more likely than former- or never-smoking respondents to report emergency departments as their "usual place to go when sick" (12% vs 3%) or to report they had "no place to go when sick" (16% vs 7%). People who currently smoke also reported more emergency department visits during the past year (mean = 1.4 visits) than did others (mean = 0.4, P < 0.01). Among those currently smoking, 18% reported that they "needed health care but didn't get it" over the past year, compared to 6% of others (P < 0.01). Those currently smoking also were more likely to report a "delay in getting care" (16% vs 9%, P = 0.02) and were less likely to have had a "general health checkup" within the past year (58% vs 70%, P < 0.02). These relationships persisted in logistic regression models controlling for variables related to smoking status and health care utilization, including health insurance. CONCLUSIONS: These findings suggest that more than a quarter of Wisconsin adults who smoke do not receive primary care every year and that they delay care or seek care in emergency departments more frequently than do those who never smoked or who quit smoking.
As a result, such individuals may be missing out on evidence-based tobacco cessation treatment.


Subjects
Nicotiana; Smoking Cessation; Adult; Humans; Wisconsin/epidemiology; Cross-Sectional Studies; Primary Health Care; Smoking/epidemiology
11.
EBioMedicine; 74: 103697, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34861492

ABSTRACT

BACKGROUND: Heterogeneity in Acute Respiratory Distress Syndrome (ARDS), as a consequence of its non-specific definition, has led to a multitude of negative randomised controlled trials (RCTs). Investigators have sought to identify heterogeneity of treatment effect (HTE) in RCTs using clustering algorithms. We evaluated the proficiency of several commonly used machine-learning algorithms to identify clusters where HTE may be detected. METHODS: Five unsupervised algorithms (latent class analysis [LCA], K-means, partition around medoids, hierarchical clustering, and spectral clustering) and four supervised algorithms (model-based recursive partitioning, causal forest [CF], and the X-learner with random forest [XL-RF] or Bayesian additive regression trees) were individually applied to three prior ARDS RCTs. Clinical data and research protein biomarkers were used as partitioning variables, with the latter excluded for secondary analyses. For a given clustering schema, HTE was evaluated based on the interaction term of treatment group and cluster with day-90 mortality as the dependent variable. FINDINGS: No single algorithm identified clusters with significant HTE in all three trials. LCA, XL-RF, and CF identified HTE most frequently (2/3 RCTs). Important partitioning variables in the unsupervised approaches were consistent across algorithms and RCTs. In supervised models, important partitioning variables varied between algorithms and across RCTs. In algorithms where clusters demonstrated HTE in the same trial, patients frequently interchanged clusters from treatment-benefit to treatment-harm clusters across algorithms. LCA aside, results from all other algorithms were subject to significant alteration in cluster composition and HTE with random seed change. Removing research biomarkers as partitioning variables greatly reduced the chances of detecting HTE across all algorithms. INTERPRETATION: Machine-learning algorithms were inconsistent in their abilities to identify clusters with significant HTE.
Protein biomarkers were essential in identifying clusters with HTE. Investigations using machine-learning approaches to identify clusters to seek HTE require cautious interpretation. FUNDING: NIGMS R35 GM142992 (PS), NHLBI R35 HL140026 (CSC); NIGMS R01 GM123193, Department of Defense W81XWH-21-1-0009, NIA R21 AG068720, NIDA R01 DA051464 (MMC).
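The seed sensitivity noted above can be quantified by rerunning a clustering algorithm with different random seeds and comparing the resulting labelings with a label-invariant agreement measure such as the Rand index (the fraction of patient pairs on which two partitions agree). A small sketch (illustrative; the study's stability assessment may have used other measures):

```python
from itertools import combinations

def rand_index(a, b):
    # Fraction of pairs grouped the same way by both labelings; invariant
    # to relabeling of clusters (0,1 vs 1,0 is the same partition).
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

# Relabeled but identical partitions agree perfectly:
rand_index([0, 0, 1, 1], [1, 1, 0, 0])  # 1.0
```

A seed-stable algorithm should give Rand indices near 1.0 across runs; values well below that flag the kind of cluster-composition churn the abstract describes.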


Subjects
Randomized Controlled Trials as Topic/standards; Respiratory Distress Syndrome/therapy; Algorithms; Bayes Theorem; Cluster Analysis; Humans; Supervised Machine Learning; Treatment Outcome; Unsupervised Machine Learning
12.
Crit Care Explor; 3(8): e0515, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34476402

ABSTRACT

OBJECTIVES: Critically ill patients with coronavirus disease 2019 have variable mortality. Risk scores could improve care and be used for prognostic enrichment in trials. We aimed to compare machine learning algorithms and develop a simple tool for predicting 28-day mortality in ICU patients with coronavirus disease 2019. DESIGN: This was an observational study of adult patients with coronavirus disease 2019. The primary outcome was 28-day inhospital mortality. Machine learning models and a simple tool were derived using variables from the first 48 hours of ICU admission and validated externally in independent sites and temporally with more recent admissions. Models were compared with a modified Sequential Organ Failure Assessment score, National Early Warning Score, and CURB-65 using the area under the receiver operating characteristic curve and calibration. SETTING: Sixty-eight U.S. ICUs. PATIENTS: Adults with coronavirus disease 2019 admitted to 68 ICUs in the United States between March 4, 2020, and June 29, 2020. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The study included 5,075 patients, 1,846 (36.4%) of whom died by day 28. eXtreme Gradient Boosting had the highest area under the receiver operating characteristic curve in external validation (0.81) and was well calibrated, while k-nearest neighbors was the lowest-performing machine learning algorithm (area under the receiver operating characteristic curve, 0.69). Findings were similar with temporal validation. The simple tool, which was created using the most important features from the eXtreme Gradient Boosting model, had a significantly higher area under the receiver operating characteristic curve in external validation (0.78) than the Sequential Organ Failure Assessment score (0.69), National Early Warning Score (0.60), and CURB-65 (0.65; p < 0.05 for all comparisons).
Age, number of ICU beds, creatinine, lactate, arterial pH, and Pao2/Fio2 ratio were the most important predictors in the eXtreme Gradient Boosting model. CONCLUSIONS: eXtreme Gradient Boosting had the highest discrimination overall, and our simple tool had higher discrimination than a modified Sequential Organ Failure Assessment score, National Early Warning Score, and CURB-65 on external validation. These models could be used to improve triage decisions and clinical trial enrichment.

13.
Cell Metab; 33(5): 905-922.e6, 2021 May 4.
Article in English | MEDLINE | ID: mdl-33887198

ABSTRACT

Low-protein diets promote metabolic health in rodents and humans, and the benefits of low-protein diets are recapitulated by specifically reducing dietary levels of the three branched-chain amino acids (BCAAs), leucine, isoleucine, and valine. Here, we demonstrate that each BCAA has distinct metabolic effects. A low isoleucine diet reprograms liver and adipose metabolism, increasing hepatic insulin sensitivity and ketogenesis and increasing energy expenditure, activating the FGF21-UCP1 axis. Reducing valine induces similar but more modest metabolic effects, whereas these effects are absent with low leucine. Reducing isoleucine or valine rapidly restores metabolic health to diet-induced obese mice. Finally, we demonstrate that variation in dietary isoleucine levels helps explain body mass index differences in humans. Our results reveal isoleucine as a key regulator of metabolic health and the adverse metabolic response to dietary BCAAs and suggest reducing dietary isoleucine as a new approach to treating and preventing obesity and diabetes.


Subjects
Amino Acids, Branched-Chain/metabolism; Diet; Isoleucine/metabolism; Valine/metabolism; Adipose Tissue, White/metabolism; Animals; Body Mass Index; Diet/veterinary; Energy Metabolism; Fibroblast Growth Factors/deficiency; Fibroblast Growth Factors/genetics; Fibroblast Growth Factors/metabolism; Humans; Liver/metabolism; Male; Mechanistic Target of Rapamycin Complex 1/metabolism; Mice; Mice, Inbred C57BL; Mice, Knockout; Obesity/metabolism; Obesity/pathology; Protein Serine-Threonine Kinases/metabolism; Uncoupling Protein 1/genetics; Uncoupling Protein 1/metabolism
14.
Article in English | MEDLINE | ID: mdl-32708240

ABSTRACT

Interdisciplinary approaches are needed to measure the additive or multiplicative impacts of chemical and non-chemical stressors on child development outcomes. The lack of interdisciplinary approaches to environmental health and child development has led to a gap in the development of effective intervention strategies. It is hypothesized that a broader systems approach can support more effective interventions over time. To achieve these goals, detailed study protocols are needed. Researchers in child development typically focus on psychosocial stressors; less attention is paid to chemical and non-chemical stressors and how the interaction of these stressors may impact child development. This feasibility study aims to bridge the gap between child development and environmental epidemiology research by trialing novel methods of gathering ultrafine particle data with a wearable air sensor while simultaneously collecting language and noise data with the Language Environment Analysis (LENA) system. Additionally, psychosocial data (e.g., parenting quality, caregiver depression, and household chaos) were gathered from parent reports. Child participants (age 3-4 years) completed cognitive tasks to assess self-regulation and receptive language skills, and provided a biospecimen analyzed for inflammatory biomarkers. Data collection was completed at two time points, roughly corresponding to fall and spring. Twenty-six participants were recruited for baseline data, and 11 participants completed a follow-up session. Preliminary results indicate that it is feasible to gather personal particulate matter (PM2.5), language, and noise data, cognitive assessments, and biospecimens from our sample of 3-4-year-old children. While there are obstacles to overcome when working with this age group, future studies can benefit from adapting lessons learned regarding recruitment strategies, study design, and protocol implementation.


Subjects
Air Pollutants; Environmental Exposure/analysis; Environmental Monitoring/instrumentation; Wearable Electronic Devices; Air Pollutants/analysis; Child, Preschool; Environmental Monitoring/methods; Feasibility Studies; Female; Humans; Language; Male; Noise, Transportation; Particulate Matter/analysis; Traffic-Related Pollution/analysis
15.
Prev Med Rep; 15: 100908, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31297308

ABSTRACT

INTRODUCTION: Despite the well-established benefits of physical activity (PA), a large portion of U.S. adults are not meeting recommended health-based guidelines. Although PA occurs in several domains, population-based studies tend to focus on leisure-time PA, with few studies examining occupational activity (OA) level as a separate determinant of overall PA. METHODS: Data were obtained from the 2014-2016 Survey of the Health of Wisconsin (SHOW). Currently employed SHOW participants (n = 822) were categorized by OA level. Bivariate analyses and multinomial logistic regression analyses were used to identify predictors and to test associations between OA and odds of meeting total PA guidelines using both self-reported and accelerometer-based data. RESULTS: Individuals with high OA level jobs tended to be male (p < 0.01), current smokers (p < 0.01), and to have low education (p < 0.01). When measured by self-report, a greater proportion of individuals in high OA jobs (89%) met the physical activity guidelines compared to those in medium (78%) and low (76%) OA jobs (p = 0.01). Further, adjusted odds of doing some PA vs meeting PA guidelines were higher for low OA vs. high OA level (OR = 2.40, 95% CI 1.46-3.94, p < 0.01). This trend was not observed when PA was measured via accelerometer (OR = 1.00, 95% CI 0.62-1.60, p = 0.99). CONCLUSIONS: Correlations between low, intermediate, and high OA and levels of overall PA varied by measurement type. Further research is needed to improve PA measurements within subdomains such as OA and to examine the tradeoffs between OA and leisure-time PA and relationships with health.
