Results 1-20 of 250
1.
Nature ; 621(7979): 558-567, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37704720

ABSTRACT

Sustainable Development Goal 2.2-to end malnutrition by 2030-includes the elimination of child wasting, defined as a weight-for-length z-score that is more than two standard deviations below the median of the World Health Organization standards for child growth1. Prevailing methods to measure wasting rely on cross-sectional surveys that cannot measure onset, recovery and persistence-key features that inform preventive interventions and estimates of disease burden. Here we analyse 21 longitudinal cohorts and show that wasting is a highly dynamic process of onset and recovery, with incidence peaking between birth and 3 months. Many more children experience an episode of wasting at some point during their first 24 months than prevalent cases at a single point in time suggest. For example, at the age of 24 months, 5.6% of children were wasted, but by the same age (24 months), 29.2% of children had experienced at least one wasting episode and 10.0% had experienced two or more episodes. Children who were wasted before the age of 6 months had a faster recovery and shorter episodes than did children who were wasted at older ages; however, early wasting increased the risk of later growth faltering, including concurrent wasting and stunting (low length-for-age z-score), and thus increased the risk of mortality. In diverse populations with high seasonal rainfall, the population average weight-for-length z-score varied substantially (more than 0.5 z in some cohorts), with the lowest mean z-scores occurring during the rainiest months; this indicates that seasonally targeted interventions could be considered. Our results show the importance of establishing interventions to prevent wasting from birth to the age of 6 months, probably through improved maternal nutrition, to complement current programmes that focus on children aged 6-59 months.
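The contrast the abstract draws between point prevalence and cumulative episode incidence can be sketched numerically. A minimal, purely illustrative Python example (all trajectories are invented, not cohort data; only the cutoff follows the WHO definition of wasting as a weight-for-length z-score below -2):

```python
# Illustrative sketch: point prevalence of wasting at a single visit
# understates how many children ever experienced an episode.
# All trajectories below are made up for illustration.

WASTING_CUTOFF = -2.0  # WLZ more than 2 SD below the WHO median

def is_wasted(wlz):
    return wlz < WASTING_CUTOFF

def count_episodes(trajectory):
    """Count distinct wasting episodes (onset = crossing below the cutoff)."""
    episodes, in_episode = 0, False
    for wlz in trajectory:
        if is_wasted(wlz) and not in_episode:
            episodes += 1
            in_episode = True
        elif not is_wasted(wlz):
            in_episode = False
    return episodes

# Hypothetical WLZ trajectories over repeated visits for five children.
cohort = [
    [-1.0, -2.3, -1.5, -2.1, -1.2],  # two episodes, recovered by last visit
    [-0.5, -0.8, -1.0, -1.1, -0.9],  # never wasted
    [-2.5, -1.8, -1.6, -1.4, -1.3],  # one early episode, recovered
    [-1.2, -1.5, -1.9, -2.2, -2.4],  # wasted at the final visit
    [-0.9, -1.1, -0.8, -1.0, -1.2],  # never wasted
]

prevalence_last_visit = sum(is_wasted(c[-1]) for c in cohort) / len(cohort)
ever_wasted = sum(count_episodes(c) >= 1 for c in cohort) / len(cohort)

print(prevalence_last_visit)  # 0.2 -> only 1 of 5 children wasted "now"
print(ever_wasted)            # 0.6 -> but 3 of 5 had at least one episode
```

The toy numbers mirror the paper's qualitative finding: cross-sectional prevalence (one child in five) misses most children who experienced an episode at some point.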


Subjects
Cachexia; Developing Countries; Growth Disorders; Malnutrition; Child, Preschool; Humans; Infant; Infant, Newborn; Cachexia/epidemiology; Cachexia/mortality; Cachexia/prevention & control; Cross-Sectional Studies; Growth Disorders/epidemiology; Growth Disorders/mortality; Growth Disorders/prevention & control; Incidence; Longitudinal Studies; Malnutrition/epidemiology; Malnutrition/mortality; Malnutrition/prevention & control; Rain; Seasons
2.
Nature ; 621(7979): 550-557, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37704719

ABSTRACT

Globally, 149 million children under 5 years of age are estimated to be stunted (length more than 2 standard deviations below international growth standards)1,2. Stunting, a form of linear growth faltering, increases the risk of illness, impaired cognitive development and mortality. Global stunting estimates rely on cross-sectional surveys, which cannot provide direct information about the timing of onset or persistence of growth faltering-a key consideration for defining critical windows to deliver preventive interventions. Here we completed a pooled analysis of longitudinal studies in low- and middle-income countries (n = 32 cohorts, 52,640 children, ages 0-24 months), allowing us to identify the typical age of onset of linear growth faltering and to investigate recurrent faltering in early life. The highest incidence of stunting onset occurred from birth to the age of 3 months, with substantially higher stunting at birth in South Asia. From 0 to 15 months, stunting reversal was rare; children who reversed their stunting status frequently relapsed, and relapse rates were substantially higher among children born stunted. Early onset and low reversal rates suggest that improving children's linear growth will require life course interventions for women of childbearing age and a greater emphasis on interventions for children under 6 months of age.


Subjects
Developing Countries; Growth Disorders; Adult; Child, Preschool; Female; Humans; Infant; Infant, Newborn; Asia, Southern/epidemiology; Cognition; Cross-Sectional Studies; Developing Countries/statistics & numerical data; Developmental Disabilities/epidemiology; Developmental Disabilities/mortality; Developmental Disabilities/prevention & control; Growth Disorders/epidemiology; Growth Disorders/mortality; Growth Disorders/prevention & control; Longitudinal Studies; Mothers
3.
Nature ; 621(7979): 568-576, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37704722

ABSTRACT

Growth faltering in children (low length for age or low weight for length) during the first 1,000 days of life (from conception to 2 years of age) influences short-term and long-term health and survival1,2. Interventions such as nutritional supplementation during pregnancy and the postnatal period could help prevent growth faltering, but programmatic action has been insufficient to eliminate the high burden of stunting and wasting in low- and middle-income countries. Identification of age windows and population subgroups on which to focus will benefit future preventive efforts. Here we use a population intervention effects analysis of 33 longitudinal cohorts (83,671 children, 662,763 measurements) and 30 separate exposures to show that improving maternal anthropometry and child condition at birth accounted for population increases in length-for-age z-scores of up to 0.40 and weight-for-length z-scores of up to 0.15 by 24 months of age. Boys had consistently higher risk of all forms of growth faltering than girls. Early postnatal growth faltering predisposed children to subsequent and persistent growth faltering. Children with multiple growth deficits exhibited higher mortality rates from birth to 2 years of age than children without growth deficits (hazard ratios 1.9 to 8.7). The importance of prenatal causes and severe consequences for children who experienced early growth faltering support a focus on pre-conception and pregnancy as a key opportunity for new preventive interventions.


Subjects
Cachexia; Developing Countries; Growth Disorders; Child, Preschool; Female; Humans; Infant; Infant, Newborn; Male; Pregnancy; Cachexia/economics; Cachexia/epidemiology; Cachexia/etiology; Cachexia/prevention & control; Cohort Studies; Developing Countries/economics; Developing Countries/statistics & numerical data; Dietary Supplements; Growth Disorders/epidemiology; Growth Disorders/prevention & control; Longitudinal Studies; Mothers; Sex Factors; Malnutrition/economics; Malnutrition/epidemiology; Malnutrition/etiology; Malnutrition/prevention & control; Anthropometry
4.
Am J Epidemiol ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38517025

ABSTRACT

Lasso regression is widely used for large-scale propensity score (PS) estimation in healthcare database studies. In these settings, previous work has shown that undersmoothing (overfitting) Lasso PS models can improve confounding control, but it can also cause problems of non-overlap in covariate distributions. It remains unclear how to select the degree of undersmoothing when fitting large-scale Lasso PS models to improve confounding control while avoiding issues that can result from reduced covariate overlap. Here, we used simulations to evaluate the performance of using collaborative-controlled targeted learning to data-adaptively select the degree of undersmoothing when fitting large-scale PS models within both singly and doubly robust frameworks to reduce bias in causal estimators. Simulations showed that collaborative learning can data-adaptively select the degree of undersmoothing to reduce bias in estimated treatment effects. Results further showed that when fitting undersmoothed Lasso PS models, the use of cross-fitting was important for avoiding non-overlap in covariate distributions and reducing bias in causal estimates.
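The cross-fitting step highlighted in the conclusion can be sketched as follows. This is an illustrative skeleton only: the stratum-mean "model" below is a stand-in for the paper's undersmoothed Lasso propensity score estimator, and all data are simulated.

```python
# Cross-fitted propensity scores: each unit is scored by a model fit on
# the *other* folds only, so no observation influences its own score.
# The stratum-mean "fit" is a placeholder for a real PS model.

import random

def cross_fit_ps(covariates, treatment, k=5, seed=0):
    n = len(treatment)
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    ps = [0.0] * n
    for fold in folds:
        fold_set = set(fold)
        train = [i for i in idx if i not in fold_set]
        # "Fit" on the training folds: P(A=1 | W=w) as the mean of
        # treatment within each covariate level (stand-in model).
        strata = {}
        for i in train:
            strata.setdefault(covariates[i], []).append(treatment[i])
        overall = sum(treatment[i] for i in train) / len(train)
        model = {w: sum(a) / len(a) for w, a in strata.items()}
        # Predict only on the held-out fold.
        for i in fold:
            ps[i] = model.get(covariates[i], overall)
    return ps

# Simulated data: a binary covariate drives treatment assignment.
random.seed(1)
W = [random.choice([0, 1]) for _ in range(200)]
A = [1 if random.random() < 0.3 + 0.4 * w else 0 for w in W]
ps = cross_fit_ps(W, A)
assert len(ps) == 200 and all(0.0 <= p <= 1.0 for p in ps)
```

In practice each fold's model would be an undersmoothed Lasso (or any flexible learner); the fold bookkeeping, which is the point of the sketch, stays the same.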

5.
Biostatistics ; 24(3): 686-707, 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-35102366

ABSTRACT

Causal mediation analysis has historically been limited in two important ways: (i) a focus has traditionally been placed on binary exposures and static interventions and (ii) direct and indirect effect decompositions have been pursued that are only identifiable in the absence of intermediate confounders affected by exposure. We present a theoretical study of an (in)direct effect decomposition of the population intervention effect, defined by stochastic interventions jointly applied to the exposure and mediators. In contrast to existing proposals, our causal effects can be evaluated regardless of whether an exposure is categorical or continuous and remain well-defined even in the presence of intermediate confounders affected by exposure. Our (in)direct effects are identifiable without a restrictive assumption on cross-world counterfactual independencies, allowing for substantive conclusions drawn from them to be validated in randomized controlled trials. Beyond the novel effects introduced, we provide a careful study of nonparametric efficiency theory relevant for the construction of flexible, multiply robust estimators of our (in)direct effects, while avoiding undue restrictions induced by assuming parametric models of nuisance parameter functionals. To complement our nonparametric estimation strategy, we introduce inferential techniques for constructing confidence intervals and hypothesis tests, and discuss open-source software, the medshift R package, implementing the proposed methodology. Application of our (in)direct effects and their nonparametric estimators is illustrated using data from a comparative effectiveness trial examining the direct and indirect effects of pharmacological therapeutics on relapse to opioid use disorder.


Subjects
Mediation Analysis; Models, Statistical; Humans; Models, Theoretical; Causality
6.
Biostatistics ; 24(4): 1085-1105, 2023 10 18.
Article in English | MEDLINE | ID: mdl-35861622

ABSTRACT

An endeavor central to precision medicine is predictive biomarker discovery: predictive biomarkers define patient subpopulations that stand to benefit most, or least, from a given treatment. The identification of these biomarkers is often the byproduct of the related but fundamentally different task of treatment rule estimation. Using treatment rule estimation methods to identify predictive biomarkers in clinical trials where the number of covariates exceeds the number of participants often results in high false discovery rates. The higher than expected number of false positives translates to wasted resources when conducting follow-up experiments for drug target identification and diagnostic assay development. Patient outcomes are in turn negatively affected. We propose a variable importance parameter for directly assessing the importance of potentially predictive biomarkers and develop a flexible nonparametric inference procedure for this estimand. We prove that our estimator is doubly robust and asymptotically linear under loose conditions on the data-generating process, permitting valid inference about the importance metric. The statistical guarantees of the method are verified in a thorough simulation study representative of randomized controlled trials with moderate and high-dimensional covariate vectors. Our procedure is then used to discover predictive biomarkers from among the tumor gene expression data of metastatic renal cell carcinoma patients enrolled in recently completed clinical trials. We find that our approach more readily discerns predictive from nonpredictive biomarkers than procedures whose primary purpose is treatment rule estimation. An open-source software implementation of the methodology, the uniCATE R package, is briefly introduced.


Subjects
Biomedical Research; Carcinoma, Renal Cell; Kidney Neoplasms; Humans; Carcinoma, Renal Cell/diagnosis; Carcinoma, Renal Cell/genetics; Kidney Neoplasms/diagnosis; Kidney Neoplasms/genetics; Biomarkers; Computer Simulation
7.
Biostatistics ; 24(2): 502-517, 2023 04 14.
Article in English | MEDLINE | ID: mdl-34939083

ABSTRACT

Cluster randomized trials (CRTs) randomly assign an intervention to groups of individuals (e.g., clinics or communities) and measure outcomes on individuals in those groups. While offering many advantages, this experimental design introduces challenges that are only partially addressed by existing analytic approaches. First, outcomes are often missing for some individuals within clusters. Failing to appropriately adjust for differential outcome measurement can result in biased estimates and inference. Second, CRTs often randomize limited numbers of clusters, resulting in chance imbalances on baseline outcome predictors between arms. Failing to adaptively adjust for these imbalances and other predictive covariates can result in efficiency losses. To address these methodological gaps, we propose and evaluate a novel two-stage targeted minimum loss-based estimator to adjust for baseline covariates in a manner that optimizes precision, after controlling for baseline and postbaseline causes of missing outcomes. Finite sample simulations illustrate that our approach can nearly eliminate bias due to differential outcome measurement, while existing CRT estimators yield misleading results and inferences. Application to real data from the SEARCH community randomized trial demonstrates the gains in efficiency afforded through adaptive adjustment for baseline covariates, after controlling for missingness on individual-level outcomes.


Subjects
Outcome Assessment, Health Care; Research Design; Humans; Randomized Controlled Trials as Topic; Probability; Bias; Cluster Analysis; Computer Simulation
8.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38281772

ABSTRACT

Strategic test allocation is important for control of both emerging and existing pandemics (eg, COVID-19, HIV). It supports effective epidemic control by (1) reducing transmission via identifying cases and (2) tracking outbreak dynamics to inform targeted interventions. However, infectious disease surveillance presents unique statistical challenges. For instance, the true outcome of interest (positive infection status) is often a latent variable. In addition, presence of both network and temporal dependence reduces data to a single observation. In this work, we study an adaptive sequential design, which allows for unspecified dependence among individuals and across time. Our causal parameter is the mean latent outcome we would have obtained, if, starting at time t given the observed past, we had carried out a stochastic intervention that maximizes the outcome under a resource constraint. The key strength of the method is that we do not have to model network and time dependence: a short-term performance Online Super Learner is used to select among dependence models and randomization schemes. The proposed strategy learns the optimal choice of testing over time while adapting to the current state of the outbreak and learning across samples, through time, or both. We demonstrate the superior performance of the proposed strategy in an agent-based simulation modeling a residential university environment during the COVID-19 pandemic.


Subjects
COVID-19; Communicable Diseases; Humans; Pandemics/prevention & control; COVID-19/epidemiology; Computer Simulation; Disease Outbreaks
11.
Biometrics ; 79(2): 1029-1041, 2023 06.
Article in English | MEDLINE | ID: mdl-35839293

ABSTRACT

Inverse-probability-weighted estimators are the oldest and potentially most commonly used class of procedures for the estimation of causal effects. By adjusting for selection biases via a weighting mechanism, these procedures estimate an effect of interest by constructing a pseudopopulation in which selection biases are eliminated. Despite their ease of use, these estimators require the correct specification of a model for the weighting mechanism, are known to be inefficient, and suffer from the curse of dimensionality. We propose a class of nonparametric inverse-probability-weighted estimators in which the weighting mechanism is estimated via undersmoothing of the highly adaptive lasso, a nonparametric regression function proven to converge at nearly n^(-1/3)-rate to the true weighting mechanism. We demonstrate that our estimators are asymptotically linear with variance converging to the nonparametric efficiency bound. Unlike doubly robust estimators, our procedures require neither derivation of the efficient influence function nor specification of the conditional outcome model. Our theoretical developments have broad implications for the construction of efficient inverse-probability-weighted estimators in large statistical models and a variety of problem settings. We assess the practical performance of our estimators in simulation studies and demonstrate use of our proposed methodology with data from a large-scale epidemiologic study.
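The pseudopopulation idea behind inverse-probability weighting can be illustrated with a toy simulation. This sketch uses the true, known propensity score; the paper's actual contribution (estimating the weights with an undersmoothed highly adaptive lasso) is not reproduced here, and all data are simulated.

```python
# Hajek-style (self-normalized) IPW estimate of the average treatment
# effect, with the true propensity score known by construction.

import random

random.seed(0)
n = 20_000
rows = []
for _ in range(n):
    w = random.random()                  # confounder
    ps = 0.2 + 0.6 * w                   # true P(A=1 | W=w)
    a = 1 if random.random() < ps else 0
    y = 2.0 * a + 3.0 * w + random.gauss(0, 1)  # true ATE = 2
    rows.append((w, a, y, ps))

# Weighted means within each arm: weighting by 1/ps (treated) and
# 1/(1-ps) (untreated) rebalances the pseudopopulation.
num1 = sum(y * a / ps for w, a, y, ps in rows)
den1 = sum(a / ps for w, a, y, ps in rows)
num0 = sum(y * (1 - a) / (1 - ps) for w, a, y, ps in rows)
den0 = sum((1 - a) / (1 - ps) for w, a, y, ps in rows)
ate_hat = num1 / den1 - num0 / den0

print(round(ate_hat, 2))  # close to the true ATE of 2.0
```

Because treatment probability rises with the confounder W, the naive difference in arm means would be biased upward; the weights remove that imbalance.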


Subjects
Models, Statistical; Probability; Computer Simulation; Selection Bias; Causality
12.
Biometrics ; 79(4): 3038-3049, 2023 12.
Article in English | MEDLINE | ID: mdl-36988158

ABSTRACT

This work considers targeted maximum likelihood estimation (TMLE) of treatment effects on absolute risk and survival probabilities in classical time-to-event settings characterized by right-censoring and competing risks. TMLE is a general methodology combining flexible ensemble learning and semiparametric efficiency theory in a two-step procedure for substitution estimation of causal parameters. We specialize and extend the continuous-time TMLE methods for competing risks settings, proposing a targeting algorithm that iteratively updates cause-specific hazards to solve the efficient influence curve equation for the target parameter. As part of the work, we further detail and implement the recently proposed highly adaptive lasso estimator for continuous-time conditional hazards with L1-penalized Poisson regression. The resulting estimation procedure benefits from relying solely on very mild nonparametric restrictions on the statistical model, thus providing a novel tool for machine-learning-based semiparametric causal inference for continuous-time time-to-event data. We apply the methods to a publicly available dataset on follicular cell lymphoma where subjects are followed over time until disease relapse or death without relapse. The data display important time-varying effects that can be captured by the highly adaptive lasso. In our simulations that are designed to imitate the data, we compare our methods to a similar approach based on random survival forests and to the discrete-time TMLE.


Subjects
Algorithms; Models, Statistical; Humans; Likelihood Functions; Machine Learning; Recurrence
13.
Biometrics ; 79(3): 1934-1946, 2023 09.
Article in English | MEDLINE | ID: mdl-36416173

ABSTRACT

In biomedical science, analyzing treatment effect heterogeneity plays an essential role in assisting personalized medicine. The main goals of analyzing treatment effect heterogeneity include estimating treatment effects in clinically relevant subgroups and predicting whether a patient subpopulation might benefit from a particular treatment. Conventional approaches often evaluate the subgroup treatment effects via parametric modeling and can thus be susceptible to model mis-specifications. In this paper, we take a model-free semiparametric perspective and aim to efficiently evaluate the heterogeneous treatment effects of multiple subgroups simultaneously under the one-step targeted maximum-likelihood estimation (TMLE) framework. When the number of subgroups is large, we further expand this path of research by looking at a variation of the one-step TMLE that is robust to the presence of small estimated propensity scores in finite samples. From our simulations, our method demonstrates substantial finite sample improvements compared to conventional methods. In a case study, our method unveils the potential treatment effect heterogeneity of rs12916-T allele (a proxy for statin usage) in decreasing Alzheimer's disease risk.


Subjects
Machine Learning; Precision Medicine; Humans; Likelihood Functions; Computer Simulation; Propensity Score
14.
Stat Med ; 42(7): 1013-1044, 2023 03 30.
Article in English | MEDLINE | ID: mdl-36897184

ABSTRACT

In this work we introduce the personalized online super learner (POSL), an online personalizable ensemble machine learning algorithm for streaming data. POSL optimizes predictions with respect to baseline covariates, so personalization can vary from completely individualized, that is, optimization with respect to subject ID, to many individuals, that is, optimization with respect to common baseline covariates. As an online algorithm, POSL learns in real time. As a super learner, POSL is grounded in statistical optimality theory and can leverage a diversity of candidate algorithms, including online algorithms with different training and update times, fixed/offline algorithms that are not updated during POSL's fitting procedure, pooled algorithms that learn from many individuals' time series, and individualized algorithms that learn from within a single time series. POSL's ensembling of the candidates can depend on the amount of data collected, the stationarity of the time series, and the mutual characteristics of a group of time series. Depending on the underlying data-generating process and the information available in the data, POSL is able to adapt to learning across samples, through time, or both. For a range of simulations that reflect realistic forecasting scenarios and in a medical application, we examine the performance of POSL relative to other current ensembling and online learning methods. We show that POSL is able to provide reliable predictions for both short and long time series, and it is able to adjust to changing data-generating environments. We further cultivate POSL's practicality by extending it to settings where time series dynamically enter and exit.


Subjects
Algorithms; Machine Learning; Humans
15.
Stat Med ; 42(19): 3443-3466, 2023 08 30.
Article in English | MEDLINE | ID: mdl-37308115

ABSTRACT

Across research disciplines, cluster randomized trials (CRTs) are commonly implemented to evaluate interventions delivered to groups of participants, such as communities and clinics. Despite advances in the design and analysis of CRTs, several challenges remain. First, there are many possible ways to specify the causal effect of interest (eg, at the individual-level or at the cluster-level). Second, the theoretical and practical performance of common methods for CRT analysis remain poorly understood. Here, we present a general framework to formally define an array of causal effects in terms of summary measures of counterfactual outcomes. Next, we provide a comprehensive overview of CRT estimators, including the t-test, generalized estimating equations (GEE), augmented-GEE, and targeted maximum likelihood estimation (TMLE). Using finite sample simulations, we illustrate the practical performance of these estimators for different causal effects and when, as commonly occurs, there are limited numbers of clusters of different sizes. Finally, our application to data from the Preterm Birth Initiative (PTBi) study demonstrates the real-world impact of varying cluster sizes and targeting effects at the cluster-level or at the individual-level. Specifically, the relative effect of the PTBi intervention was 0.81 at the cluster-level, corresponding to a 19% reduction in outcome incidence, and was 0.66 at the individual-level, corresponding to a 34% reduction in outcome risk. Given its flexibility to estimate a variety of user-specified effects and ability to adaptively adjust for covariates for precision gains while maintaining Type-I error control, we conclude TMLE is a promising tool for CRT analysis.
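The gap between the cluster-level (0.81) and individual-level (0.66) effects reported above arises because the two estimands weight clusters differently. A toy example with invented numbers (not PTBi data) makes the mechanism concrete:

```python
# Why cluster-level and individual-level effects differ when cluster
# sizes vary: the cluster-level effect averages per-cluster risk ratios
# equally, while the individual-level effect pools all participants,
# so large clusters dominate. All counts below are made up.

clusters = [
    # (cases_control, n_control, cases_treated, n_treated)
    (10, 100, 8, 100),       # small cluster pair: risk ratio 0.8
    (10, 100, 8, 100),       # small cluster pair: risk ratio 0.8
    (300, 1000, 150, 1000),  # large cluster pair: risk ratio 0.5
]

# Cluster-level effect: unweighted mean of per-cluster risk ratios.
rr_cluster = sum((c[2] / c[3]) / (c[0] / c[1]) for c in clusters) / len(clusters)

# Individual-level effect: pool all participants, then take the ratio.
risk_treated = sum(c[2] for c in clusters) / sum(c[3] for c in clusters)
risk_control = sum(c[0] for c in clusters) / sum(c[1] for c in clusters)
rr_individual = risk_treated / risk_control

print(round(rr_cluster, 2))     # 0.7
print(round(rr_individual, 2))  # 0.52
```

Neither answer is wrong; they are different causal questions, which is why the abstract stresses specifying the estimand before choosing an estimator.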


Subjects
Premature Birth; Infant, Newborn; Female; Humans; Computer Simulation; Randomized Controlled Trials as Topic; Sample Size; Causality; Cluster Analysis
16.
BMC Med Res Methodol ; 23(1): 178, 2023 08 02.
Article in English | MEDLINE | ID: mdl-37533017

ABSTRACT

BACKGROUND: The Targeted Learning (TL) roadmap provides a systematic guide for generating and evaluating real-world evidence (RWE). From a regulatory perspective, RWE arises from diverse sources such as randomized controlled trials that make use of real-world data, observational studies, and other study designs. This paper illustrates a principled approach to assessing the validity and interpretability of RWE. METHODS: We applied the roadmap to a published observational study of the dose-response association between ritodrine hydrochloride and pulmonary edema among women pregnant with twins in Japan. The goal was to identify barriers to causal effect estimation beyond the unmeasured confounding reported by the study's authors, and to explore potential options for overcoming those barriers and producing more robust results. RESULTS: Following the roadmap raised issues that led us to formulate alternative causal questions that produced more reliable, interpretable RWE. The process revealed a lack of information in the available data to identify a causal dose-response curve. However, under explicit assumptions the effect of treatment with any amount of ritodrine versus none, albeit a less ambitious parameter, can be estimated from data. CONCLUSIONS: Before RWE can be used in support of clinical and regulatory decision-making, its quality and reliability must be systematically evaluated. The TL roadmap prescribes how to carry out a thorough, transparent, and realistic assessment of RWE. We recommend this approach be a routine part of any decision-making process.


Subjects
Research Design; Female; Humans; Reproducibility of Results; Japan; Randomized Controlled Trials as Topic
17.
Proc Natl Acad Sci U S A ; 117(9): 4571-4577, 2020 03 03.
Article in English | MEDLINE | ID: mdl-32071251

ABSTRACT

Machine learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust afforded by given models. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of humans and machines. Here, we present expert-augmented machine learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We used a large dataset of intensive-care patient data to derive 126 decision rules that predict hospital mortality. Using an online platform, we asked 15 clinicians to assess the relative risk of the subpopulation defined by each rule compared to the total sample. We compared the clinician-assessed risk to the empirical risk and found that, while clinicians agreed with the data in most cases, there were notable exceptions where they overestimated or underestimated the true risk. Studying the rules with greatest disagreement, we identified problems with the training data, including one miscoded variable and one hidden confounder. Filtering the rules based on the extent of disagreement between clinician-assessed risk and empirical risk, we improved performance on out-of-sample data and were able to train with less data. EAML provides a platform for automated creation of problem-specific priors, which help build robust and dependable machine-learning models in critical applications.


Subjects
Expert Systems; Machine Learning/standards; Medical Informatics/methods; Data Management/methods; Database Management Systems; Medical Informatics/standards
18.
Am J Epidemiol ; 191(9): 1640-1651, 2022 08 22.
Article in English | MEDLINE | ID: mdl-35512316

ABSTRACT

Inverse probability weighting (IPW) and targeted maximum likelihood estimation (TMLE) are methodologies that can adjust for confounding and selection bias and are often used for causal inference. Both estimators rely on the positivity assumption that within strata of confounders there is a positive probability of receiving treatment at all levels under consideration. Practical applications of IPW require finite inverse probability (IP) weights. TMLE requires that propensity scores (PS) be bounded away from 0 and 1. Although truncation can improve variance and finite sample bias, this artificial distortion of the IP weights and PS distribution introduces asymptotic bias. As sample size grows, truncation-induced bias eventually swamps variance, rendering nominal confidence interval coverage and hypothesis tests invalid. We present a simple truncation strategy based on the sample size, n, that sets the upper bound on IP weights at √n ln(n)/5. For TMLE, the lower bound on the PS should be set to the reciprocal, 5/(√n ln(n)). Our strategy was designed to optimize the mean squared error of the parameter estimate. It naturally extends to data structures with missing outcomes. Simulation studies and a data analysis demonstrate our strategy's ability to minimize both bias and mean squared error in comparison with other common strategies, including the popular but flawed quantile-based heuristic.
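The truncation rule described above is purely arithmetic and easy to sketch. A minimal illustration (the function names are ours, not from the paper's software):

```python
# Sample-size-based truncation: cap IP weights at sqrt(n) * ln(n) / 5,
# and bound the propensity score below at the reciprocal of that cap.

import math

def weight_cap(n):
    """Upper bound on IP weights for a sample of size n."""
    return math.sqrt(n) * math.log(n) / 5

def truncate_weights(weights):
    cap = weight_cap(len(weights))
    return [min(w, cap) for w in weights]

def bound_ps(ps_list):
    # PS lower bound taken as the reciprocal of the weight cap.
    lower = 1.0 / weight_cap(len(ps_list))
    return [max(p, lower) for p in ps_list]

print(round(weight_cap(10_000), 1))  # 184.2 for n = 10,000
```

Note the cap grows with n, so the distortion it introduces shrinks as the sample grows, which is how the strategy keeps truncation bias from swamping variance.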


Subjects
Propensity Score; Bias; Causality; Computer Simulation; Humans; Likelihood Functions
19.
N Engl J Med ; 381(3): 219-229, 2019 07 18.
Article in English | MEDLINE | ID: mdl-31314966

ABSTRACT

BACKGROUND: Universal antiretroviral therapy (ART) with annual population testing and a multidisease, patient-centered strategy could reduce new human immunodeficiency virus (HIV) infections and improve community health. METHODS: We randomly assigned 32 rural communities in Uganda and Kenya to baseline HIV and multidisease testing and national guideline-restricted ART (control group) or to baseline testing plus annual testing, eligibility for universal ART, and patient-centered care (intervention group). The primary end point was the cumulative incidence of HIV infection at 3 years. Secondary end points included viral suppression, death, tuberculosis, hypertension control, and the change in the annual incidence of HIV infection (which was evaluated in the intervention group only). RESULTS: A total of 150,395 persons were included in the analyses. Population-level viral suppression among 15,399 HIV-infected persons was 42% at baseline and was higher in the intervention group than in the control group at 3 years (79% vs. 68%; relative prevalence, 1.15; 95% confidence interval [CI], 1.11 to 1.20). The annual incidence of HIV infection in the intervention group decreased by 32% over 3 years (from 0.43 to 0.31 cases per 100 person-years; relative rate, 0.68; 95% CI, 0.56 to 0.84). However, the 3-year cumulative incidence (704 incident HIV infections) did not differ significantly between the intervention group and the control group (0.77% and 0.81%, respectively; relative risk, 0.95; 95% CI, 0.77 to 1.17). Among HIV-infected persons, the risk of death by year 3 was 3% in the intervention group and 4% in the control group (0.99 vs. 1.29 deaths per 100 person-years; relative risk, 0.77; 95% CI, 0.64 to 0.93). The risk of HIV-associated tuberculosis or death by year 3 among HIV-infected persons was 4% in the intervention group and 5% in the control group (1.19 vs. 1.50 events per 100 person-years; relative risk, 0.79; 95% CI, 0.67 to 0.94). At 3 years, 47% of adults with hypertension in the intervention group and 37% in the control group had hypertension control (relative prevalence, 1.26; 95% CI, 1.15 to 1.39). CONCLUSIONS: Universal HIV treatment did not result in a significantly lower incidence of HIV infection than standard care, probably owing to the availability of comprehensive baseline HIV testing and the rapid expansion of ART eligibility in the control group. (Funded by the National Institutes of Health and others; SEARCH ClinicalTrials.gov number, NCT01864603.).


Subjects
Anti-Retroviral Agents/therapeutic use; Community Health Services; HIV Infections/drug therapy; Mass Drug Administration; Mass Screening; AIDS-Related Opportunistic Infections/diagnosis; AIDS-Related Opportunistic Infections/epidemiology; Adolescent; Adult; Female; HIV Infections/diagnosis; HIV Infections/epidemiology; HIV Infections/mortality; Humans; Incidence; Kenya/epidemiology; Male; Middle Aged; Patient-Centered Care; Prevalence; Socioeconomic Factors; Tuberculosis/diagnosis; Tuberculosis/epidemiology; Uganda/epidemiology; Viral Load; Young Adult
20.
Stat Med ; 41(12): 2132-2165, 2022 05 30.
Article in English | MEDLINE | ID: mdl-35172378

ABSTRACT

Several recently developed methods have the potential to harness machine learning in the pursuit of target quantities inspired by causal inference, including inverse weighting, doubly robust estimating equations and substitution estimators like targeted maximum likelihood estimation. There are even more recent augmentations of these procedures that can increase robustness, by adding a layer of cross-validation (cross-validated targeted maximum likelihood estimation and double machine learning, as applied to substitution and estimating equation approaches, respectively). While these methods have been evaluated individually on simulated and experimental data sets, a comprehensive analysis of their performance across simulations based on real data has yet to be conducted. In this work, we benchmark multiple widely used methods for estimation of the average treatment effect using data from ten different nutrition intervention studies. A nonparametric regression method, the undersmoothed highly adaptive lasso, is used to generate the simulated distribution, which preserves important features from the observed data and reproduces a set of true target parameters. For each simulated dataset, we apply the methods above to estimate the average treatment effects as well as their standard errors and resulting confidence intervals. Based on the analytic results, a general recommendation is put forth for use of the cross-validated variants of both substitution and estimating equation estimators. We conclude that the additional layer of cross-validation helps in avoiding unintentional over-fitting of nuisance parameter functionals and leads to more robust inferences.


Subjects
Machine Learning; Research Design; Causality; Computer Simulation; Humans; Likelihood Functions; Models, Statistical; Regression Analysis