2.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38994639

ABSTRACT

What is the best way to split one stratum into two to maximally reduce the within-stratum imbalance in many covariates? We formulate this as an integer program and approximate the solution by randomized rounding of a linear program. A linear program may assign a fraction of a person to each refined stratum. Randomized rounding views fractional people as probabilities, assigning intact people to strata using biased coins. Randomized rounding is a well-studied theoretical technique for approximating the optimal solution of certain computationally intractable integer programs. When the number of people in a stratum is large relative to the number of covariates, we prove the following new results: (i) randomized rounding to split a stratum does very little randomizing, so it closely resembles the linear programming relaxation without splitting intact people; (ii) the linear relaxation and the randomly rounded solution place lower and upper bounds on the unattainable integer programming solution; and because of (i), these bounds are often close, thereby ratifying the usable randomly rounded solution. We illustrate with an observational study that balanced many covariates by forming matched pairs, selecting 2016 patients from 5735 using a propensity score. Instead, we form 5 propensity score strata and refine them into 10 strata, obtaining excellent covariate balance while retaining all patients. The R package optrefine, available on CRAN, implements the method. Supplementary materials are available online.
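As a toy illustration of the randomized-rounding step, here is a minimal Python sketch (not the authors' optrefine implementation); the fractional assignments stand in for a linear-programming solution, and biased coins convert them into a split of intact people:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LP-relaxation output: each person's fractional membership
# in refined stratum A (the remainder 1 - p goes to stratum B).
p = rng.uniform(0.2, 0.8, size=200)

# Randomized rounding: treat each fraction as the probability of a
# biased coin, assigning each intact person to stratum A or B.
in_A = rng.random(p.shape) < p

# A covariate whose within-stratum imbalance the LP was balancing.
x = rng.normal(size=p.shape)

# The fractional (LP) stratum-A covariate total and the randomly rounded
# one should be close when the stratum is large relative to the covariates.
print("LP fractional total:", np.sum(p * x))
print("rounded total:      ", np.sum(x[in_A]))
```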


Subjects
Propensity Score; Humans; Models, Statistical; Biometry/methods; Computer Simulation
3.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38682464

ABSTRACT

Current Poisson factor models often assume that the factors are unknown, which overlooks the explanatory potential of certain observable covariates. This study focuses on high-dimensional settings, where the number of count response variables and/or covariates can diverge as the sample size increases. A covariate-augmented overdispersed Poisson factor model is proposed to jointly perform high-dimensional Poisson factor analysis and estimate a large coefficient matrix for overdispersed count data. A set of identifiability conditions is provided to theoretically guarantee computational identifiability. We incorporate the interdependence of both response variables and covariates by imposing a low-rank constraint on the large coefficient matrix. To address the computational challenges posed by nonlinearity, two high-dimensional latent matrices, and the low-rank constraint, we propose a novel variational estimation scheme that combines Laplace and Taylor approximations. We also develop a criterion based on a singular value ratio to determine the number of factors and the rank of the coefficient matrix. Comprehensive simulation studies demonstrate that the proposed method outperforms state-of-the-art methods in estimation accuracy and computational efficiency. The practical merit of our method is demonstrated by an application to a CITE-seq dataset. A flexible implementation of the method is available in the R package COAP.
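The singular-value-ratio idea for choosing the number of factors can be sketched generically; the simulated low-rank-plus-noise matrix and the plain argmax rule below are our assumptions, not the paper's exact criterion:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, true_rank = 500, 50, 3

# Simulated low-rank signal plus noise.
M = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, p))
Y = M + 0.5 * rng.normal(size=(n, p))

s = np.linalg.svd(Y, compute_uv=False)
# Ratio of consecutive singular values; a large drop marks the rank.
ratios = s[:-1] / s[1:]
k_max = 10  # search only among the leading singular values
est_rank = int(np.argmax(ratios[:k_max])) + 1
print("estimated rank:", est_rank)  # expect 3
```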


Subjects
Computer Simulation; Models, Statistical; Poisson Distribution; Humans; Sample Size; Biometry/methods; Factor Analysis, Statistical
4.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39036984

ABSTRACT

It has recently become common for applied work to combine standard survival analysis methods, such as the multivariable Cox model and propensity score weighting, with the intention of forming a doubly robust estimator of an exposure-effect hazard ratio: one that is unbiased in large samples when either the Cox model or the propensity score model is correctly specified. In general, this combination does not produce a doubly robust estimator, even after regression standardization, when there is truly a causal effect. We demonstrate this lack of double robustness via simulation for the semiparametric Cox model, the Weibull proportional hazards model, and a simple proportional hazards flexible parametric model, with the latter two models fit via maximum likelihood. We provide a novel proof that the combination of propensity score weighting and a proportional hazards survival model, fit via either full or partial likelihood, is consistent under the null of no causal effect of the exposure on the outcome under particular censoring mechanisms, if either the propensity score or the outcome model is correctly specified and contains all confounders. Given our results suggesting that double robustness holds only under the null, we outline 2 simple alternative estimators that are doubly robust for the survival difference at a given time point (in the above sense), provided the censoring mechanism can be correctly modeled, and one doubly robust method of estimation for the full survival curve. We provide R code for estimation and inference with these estimators in the supporting information.
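The paper's doubly robust estimators are given in its supporting information; as a sketch of one familiar ingredient, the following snippet computes an inverse-probability-weighted Kaplan-Meier survival difference at a time point. The simulation and variable names are hypothetical, and this is not the authors' estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_km(time, event, w, t0):
    """Weighted product-limit estimate of S(t0)."""
    S = 1.0
    for t in np.unique(time[(event == 1) & (time <= t0)]):
        d = w[(time == t) & (event == 1)].sum()   # weighted events at t
        n = w[time >= t].sum()                    # weighted number at risk
        S *= 1.0 - d / n
    return S

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=(n, 2))                       # confounders
ps_true = 1 / (1 + np.exp(-x[:, 0]))              # true propensity
a = (rng.random(n) < ps_true).astype(int)         # exposure
t_event = rng.exponential(1 / np.exp(0.5 * x[:, 0] - 0.7 * a))
c = rng.exponential(2.0, n)                       # independent censoring
time, event = np.minimum(t_event, c), (t_event <= c).astype(int)

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
w = a / ps + (1 - a) / (1 - ps)                   # IPT weights

t0 = 1.0
diff = (weighted_km(time[a == 1], event[a == 1], w[a == 1], t0)
        - weighted_km(time[a == 0], event[a == 0], w[a == 0], t0))
print(f"IPW survival difference at t={t0}: {diff:.3f}")
```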


Subjects
Computer Simulation; Propensity Score; Proportional Hazards Models; Humans; Survival Analysis; Likelihood Functions; Biometry/methods
5.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39101548

ABSTRACT

We consider the setting where (1) an internal study builds a linear regression model for prediction based on individual-level data, (2) some external studies have fitted similar linear regression models that use only subsets of the covariates and provide coefficient estimates for the reduced models without individual-level data, and (3) there is heterogeneity across these study populations. The goal is to integrate the external model summary information into fitting the internal model to improve prediction accuracy. We adapt the James-Stein shrinkage method to propose estimators that are no worse, and often better, in prediction mean squared error after information integration, regardless of the degree of study population heterogeneity. We conduct comprehensive simulation studies to investigate the numerical performance of the proposed estimators. We also apply the method to enhance a prediction model for patella bone lead level in terms of blood lead level and other covariates by integrating summary information from the published literature.
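The James-Stein idea of shrinking an internal estimate toward an externally informed target can be sketched in simplified form. The positive-part rule below assumes a full external coefficient vector and roughly independent components, which is cruder than the paper's treatment of reduced external models:

```python
import numpy as np

def js_toward_target(beta_hat, var_diag, target):
    """Positive-part James-Stein: shrink an internal estimate toward an
    external target; a toy version assuming independent components."""
    p = beta_hat.size
    d = beta_hat - target
    # Shrinkage factor, clipped at 0 (positive-part rule).
    shrink = max(0.0, 1.0 - (p - 2) * var_diag.mean() / np.sum(d ** 2))
    return target + shrink * d

rng = np.random.default_rng(3)
p = 8
beta_true = rng.normal(size=p)
beta_int = beta_true + 0.5 * rng.normal(size=p)   # noisy internal fit
beta_ext = beta_true + 0.1 * rng.normal(size=p)   # external summary info

beta_js = js_toward_target(beta_int, np.full(p, 0.25), beta_ext)
print("MSE internal:", np.mean((beta_int - beta_true) ** 2))
print("MSE shrunk:  ", np.mean((beta_js - beta_true) ** 2))
```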


Subjects
Computer Simulation; Humans; Linear Models; Biometry/methods; Lead/blood; Patella; Models, Statistical; Data Interpretation, Statistical
6.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38819313

ABSTRACT

Ruberu et al. (2023) introduce an elegant approach that fits a complicated meta-analysis problem with diverse reporting modalities into the framework of hierarchical Bayesian inference. We discuss issues related to some of the parametric model assumptions involved.


Subjects
Bayes Theorem; Meta-Analysis as Topic; Neoplasms; Penetrance; Humans; Models, Statistical; Biometry/methods
7.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38949889

ABSTRACT

The response envelope model proposed by Cook et al. (2010) is an efficient method for estimating the regression coefficient in the context of the multivariate linear regression model. It improves estimation efficiency by identifying the material and immaterial parts of the responses and removing the immaterial variation. The response envelope model has been investigated only for continuous response variables. In this paper, we propose the multivariate probit model with latent envelope, in short the probit envelope model, as a response envelope model for multivariate binary response variables. The probit envelope model accounts for relations between the Gaussian latent variables of the multivariate probit model by using the idea of the response envelope model. We address the identifiability of the probit envelope model by employing the concept of essential identifiability and suggest a Bayesian method for parameter estimation. We illustrate the probit envelope model via simulation studies and real-data analysis. The simulation studies show that the probit envelope model has the potential to gain estimation efficiency relative to the multivariate probit model. The real-data analysis shows that the probit envelope model is useful for multi-label classification.


Subjects
Bayes Theorem; Computer Simulation; Models, Statistical; Multivariate Analysis; Humans; Linear Models; Biometry/methods; Normal Distribution
8.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38771658

ABSTRACT

Limitations of the traditional Cox hazard ratio for summarizing the magnitude of a treatment effect on time-to-event outcomes have been widely discussed, and alternative measures that do not share these limitations are gaining attention. One recently proposed alternative, in a simple 2-sample comparison setting, uses the average hazard with survival weight (AH), which can be interpreted as the general censoring-free person-time incidence rate on a given time window. In this paper, we propose a new regression analysis approach for the AH with a truncation time τ. We investigate 3 versions of AH regression analysis, assuming (1) independent censoring, (2) group-specific censoring, and (3) covariate-dependent censoring. The proposed AH regression methods are closely related to robust Poisson regression. While the new approach requires a truncation time τ to be specified explicitly, it can be more robust than Poisson regression in the presence of censoring. With the AH regression approach, one can summarize the between-group treatment difference in both absolute and relative terms, adjusting for covariates associated with the outcome. This property increases the likelihood that the magnitude of the treatment effect is correctly interpreted. The AH regression approach can be a useful alternative to the traditional Cox hazard ratio approach for estimating and reporting the magnitude of the treatment effect on time-to-event outcomes.
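The connection to robust Poisson regression can be made concrete in the censoring-free case, where the AH on [0, τ] is simply events divided by person-time. The following sketch (our simulation, not the paper's estimator for censored data) recovers the AH ratio as the exponentiated coefficient of a Poisson regression with a log person-time offset and sandwich standard errors:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
arm = rng.integers(0, 2, n)                        # 0 = control, 1 = treated
t = rng.exponential(np.where(arm == 1, 1.4, 1.0))  # uncensored event times

tau = 2.0                                          # truncation time
events = (t <= tau).astype(float)                  # event observed by tau
persontime = np.minimum(t, tau)                    # person-time on [0, tau]

# Censoring-free AH on [0, tau]: events per unit person-time, by arm.
for g in (0, 1):
    m = arm == g
    print(f"arm {g}: AH = {events[m].sum() / persontime[m].sum():.3f}")

# Poisson regression with a log person-time offset and robust (sandwich)
# standard errors; exp(coef on arm) estimates the AH ratio.
X = sm.add_constant(arm.astype(float))
fit = sm.GLM(events, X, family=sm.families.Poisson(),
             offset=np.log(persontime)).fit(cov_type="HC0")
print("log AH ratio:", fit.params[1], "robust SE:", fit.bse[1])
```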


Subjects
Proportional Hazards Models; Humans; Regression Analysis; Survival Analysis; Computer Simulation; Poisson Distribution; Biometry/methods; Models, Statistical
9.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39073773

ABSTRACT

The scope of this paper is a multivariate setting involving categorical variables. Following an external manipulation of one variable, the goal is to evaluate the causal effect on an outcome of interest. A typical scenario involves a system of variables representing lifestyle, physical and mental features, symptoms, and risk factors, with the outcome being the presence or absence of a disease. These variables are interconnected in complex ways, allowing the effect of an intervention to propagate through multiple paths. A distinctive feature of our approach is the estimation of causal effects while accounting for uncertainty in both the dependence structure, which we represent through a directed acyclic graph (DAG), and the DAG-model parameters. Specifically, we propose a Markov chain Monte Carlo algorithm that targets the joint posterior over DAGs and parameters, based on an efficient reversible-jump proposal scheme. We validate our method through extensive simulation studies and demonstrate that it outperforms current state-of-the-art procedures in terms of estimation accuracy. Finally, we apply our methodology to analyze a dataset on depression and anxiety in undergraduate students.


Subjects
Algorithms; Causality; Computer Simulation; Depression; Markov Chains; Models, Statistical; Monte Carlo Method; Humans; Anxiety; Biometry/methods
10.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38742906

ABSTRACT

Semicompeting risks refer to the phenomenon that the terminal event (such as death) can censor the nonterminal event (such as disease progression) but not vice versa. The treatment effect on the terminal event can be delivered either directly following the treatment or indirectly through the nonterminal event. We consider 2 strategies to decompose the total effect into a direct effect and an indirect effect under the framework of mediation analysis in completely randomized experiments, by adjusting the prevalence and hazard of nonterminal events, respectively. The 2 strategies require slightly different assumptions on cross-world quantities to achieve identifiability. We establish asymptotic properties for the estimated counterfactual cumulative incidences and decomposed treatment effects. We illustrate the subtle difference between these 2 decompositions through simulation studies and 2 real-data applications in the Supplementary Materials.


Subjects
Computer Simulation; Humans; Models, Statistical; Risk; Randomized Controlled Trials as Topic/statistics & numerical data; Mediation Analysis; Treatment Outcome; Biometry/methods
11.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38742907

ABSTRACT

We propose a new non-parametric conditional independence test for a scalar response and a functional covariate over a continuum of quantile levels. We build a Cramér-von Mises-type test statistic based on an empirical process indexed by random projections of the functional covariate, effectively avoiding the "curse of dimensionality" under the projected hypothesis, which is almost surely equivalent to the null hypothesis. The asymptotic null distribution of the proposed test statistic is obtained under mild assumptions. The asymptotic global and local power properties of our test statistic are then investigated. We specifically demonstrate that the statistic can detect a broad class of local alternatives converging to the null at the parametric rate. Additionally, we recommend a simple multiplier bootstrap approach for estimating the critical values. The finite-sample performance of our statistic is examined through several Monte Carlo simulation experiments. Finally, an analysis of an EEG data set is used to show the utility and versatility of our proposed test statistic.
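A schematic version of a projection-based Cramér-von Mises statistic with a multiplier bootstrap is sketched below. It ignores the effect of estimating quantiles and projections, so it illustrates the mechanics rather than the paper's exact test:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
beta = rng.normal(size=3)
beta /= np.linalg.norm(beta)        # random projection direction
X = rng.normal(size=(n, 3))
Y = rng.normal(size=n)              # independent of X: the null holds
Z = X @ beta                        # projected covariate

taus = np.linspace(0.1, 0.9, 9)     # continuum of quantile levels (grid)
u_grid = np.quantile(Z, np.linspace(0.05, 0.95, 19))

# e[i, k] = 1{Y_i <= q_tau_k} - tau_k, the quantile-level residuals.
e = (Y[:, None] <= np.quantile(Y, taus)[None, :]) - taus[None, :]
# Centered indicator process of the projected covariate.
ind = Z[:, None] <= u_grid[None, :]
g = ind - ind.mean(0)

def cvm(weights):
    # Cramer-von Mises-type functional of the (weighted) joint process.
    R = (weights[:, None, None] * e[:, :, None] * g[:, None, :]).sum(0) / np.sqrt(n)
    return (R ** 2).mean()

T_obs = cvm(np.ones(n))
# Multiplier bootstrap: perturb each summand with i.i.d. N(0,1) weights.
T_boot = np.array([cvm(rng.normal(size=n)) for _ in range(500)])
print("p-value:", (T_boot >= T_obs).mean())
```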


Subjects
Computer Simulation; Models, Statistical; Monte Carlo Method; Humans; Electroencephalography/statistics & numerical data; Data Interpretation, Statistical; Biometry/methods; Statistics, Nonparametric
12.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38768225

ABSTRACT

Conventional supervised learning usually operates under the premise that data are collected from the same underlying population. However, challenges may arise when integrating new data from different populations, resulting in a phenomenon known as dataset shift. This paper focuses on prior probability shift, where the distribution of the outcome varies across datasets but the conditional distribution of features given the outcome remains the same. To tackle the challenges posed by such shift, we propose an estimation algorithm that can efficiently combine information from multiple sources. Unlike existing methods that are restricted to discrete outcomes, the proposed approach accommodates both discrete and continuous outcomes. It also handles high-dimensional covariate vectors through variable selection using an adaptive least absolute shrinkage and selection operator penalty, producing efficient estimates that possess the oracle property. Moreover, a novel semiparametric likelihood ratio test is proposed to check the validity of prior probability shift assumptions by embedding the null conditional density function into Neyman's smooth alternatives (Neyman, 1937) and testing study-specific parameters. We demonstrate the effectiveness of our proposed method through extensive simulations and a real data example. The proposed methods are a useful addition to the repertoire of tools for dealing with dataset shift.
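The adaptive lasso step can be illustrated generically via the usual rescaling trick (a plain lasso on reweighted columns); this sketch shows only the variable-selection device, not the paper's multi-source estimation algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(6)
n, p = 300, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                       # sparse truth
y = X @ beta + rng.normal(size=n)

# Step 1: an initial consistent estimate supplies the adaptive weights.
beta_init = LinearRegression().fit(X, y).coef_
w = 1.0 / (np.abs(beta_init) + 1e-8)              # adaptive penalty weights

# Step 2: penalizing w_j * |beta_j| is a plain lasso on columns X_j / w_j.
fit = Lasso(alpha=0.05).fit(X / w, y)
beta_alasso = fit.coef_ / w
print("selected:", np.flatnonzero(np.abs(beta_alasso) > 1e-10))
```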


Subjects
Algorithms; Computer Simulation; Models, Statistical; Probability; Humans; Likelihood Functions; Biometry/methods; Data Interpretation, Statistical; Supervised Machine Learning
13.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39136277

ABSTRACT

Time-to-event data are often recorded on a discrete scale with multiple, competing risks as potential causes for the event. In this context, applying continuous survival analysis methods with a single risk leads to biased estimation. We therefore propose the multivariate Bernoulli detector for competing risks with discrete times, built on a multivariate change point model for the cause-specific baseline hazards. Through the prior on the number of change points and their locations, we impose dependence between change points across risks while allowing their number to be learned from the data. Conditionally on these change points, a multivariate Bernoulli prior is used to infer which risks are involved. The focus of posterior inference is on cause-specific hazard rates and the dependence across risks; such dependence is often present due to subject-specific changes over time that affect all risks. Full posterior inference is performed through a tailored local-global Markov chain Monte Carlo (MCMC) algorithm, which exploits a data augmentation trick and MCMC updates from nonconjugate Bayesian nonparametric methods. We illustrate our model in simulations and on ICU data, comparing its performance with existing approaches.


Subjects
Algorithms; Bayes Theorem; Computer Simulation; Markov Chains; Monte Carlo Method; Humans; Survival Analysis; Models, Statistical; Multivariate Analysis; Biometry/methods
14.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38919141

ABSTRACT

Observational studies are frequently used to estimate the effect of an exposure or treatment on an outcome. To obtain an unbiased estimate of the treatment effect, it is crucial to measure the exposure accurately. A common type of exposure misclassification is recall bias, which occurs in retrospective cohort studies when study subjects may inaccurately recall their past exposure. Particularly challenging is differential recall bias in the context of self-reported binary exposures, where the bias may be directional rather than random and its extent varies according to the outcomes experienced. This paper makes several contributions: (1) it establishes bounds for the average treatment effect even when a validation study is not available; (2) it proposes multiple estimation methods across various strategies predicated on different assumptions; and (3) it suggests a sensitivity analysis technique to assess the robustness of the causal conclusion, incorporating insights from prior research. The effectiveness of these methods is demonstrated through simulation studies that explore various model misspecification scenarios. These approaches are then applied to investigate the effect of childhood physical abuse on mental health in adulthood.


Subjects
Bias; Mental Recall; Observational Studies as Topic; Humans; Observational Studies as Topic/statistics & numerical data; Computer Simulation; Treatment Outcome; Child; Models, Statistical; Adult; Biometry/methods
15.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38801258

ABSTRACT

In comparative studies, covariate balance and sequential allocation schemes have attracted growing academic interest. Although many theoretically justified adaptive randomization methods achieve covariate balance, they often allocate patients in pairs or groups. To better meet practical requirements in which, for economic or ethical reasons, clinicians cannot wait for other participants before assigning the current patient, we propose a method that randomizes patients individually and sequentially. The proposed method conceptually separates the covariate imbalance, measured by a newly proposed modified Mahalanobis distance, from the marginal imbalance, that is, the sample size difference between the 2 groups, and minimizes them with an explicit priority order. Compared with existing sequential randomization methods, the proposed method achieves the best possible covariate balance while maintaining the marginal balance directly, offering more control over the randomization process. We demonstrate the superior performance of the proposed method through a wide range of simulation studies and real data analysis, and establish theoretical guarantees in terms of both convergence of the imbalance measure and the subsequent treatment effect estimation.
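A minimal sketch of individual, sequential allocation with an explicit priority order appears below. It uses a generic Mahalanobis-type imbalance and a biased coin; the paper's modified Mahalanobis distance and exact priority rules differ in detail:

```python
import numpy as np

rng = np.random.default_rng(7)

def imbalance(sum_diff, cov_inv):
    """Mahalanobis-type imbalance between the arms' covariate sums."""
    return float(sum_diff @ cov_inv @ sum_diff)

n, d, q = 200, 3, 0.85            # q: biased-coin probability
X = rng.normal(size=(n, d))       # covariates, observed on arrival
cov_inv = np.linalg.inv(np.cov(X.T))  # fixed here; sequential in practice

arm = np.empty(n, dtype=int)
diff = np.zeros(d)                # running sum(X | arm 1) - sum(X | arm 0)
n1 = 0
for i in range(n):
    # Priority 1: marginal balance - force the lagging arm if |n1-n0| >= 2.
    if abs(2 * n1 - i) >= 2:
        p1 = 0.0 if 2 * n1 - i > 0 else 1.0
    else:
        # Priority 2: covariate balance via the imbalance measure.
        imb1 = imbalance(diff + X[i], cov_inv)   # if assigned to arm 1
        imb0 = imbalance(diff - X[i], cov_inv)   # if assigned to arm 0
        p1 = q if imb1 < imb0 else (0.5 if imb1 == imb0 else 1 - q)
    arm[i] = int(rng.random() < p1)
    diff += X[i] if arm[i] == 1 else -X[i]
    n1 += arm[i]

print("arm sizes:", n1, n - n1)
print("final covariate imbalance:", imbalance(diff, cov_inv))
```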


Subjects
Computer Simulation; Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Biometry/methods; Models, Statistical; Data Interpretation, Statistical; Random Allocation; Sample Size; Algorithms
16.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38708763

ABSTRACT

Time-series data collected from a network of random variables are useful for identifying temporal pathways among the network nodes. Observed measurements may contain multiple sources of signal and noise, including Gaussian signals of interest and non-Gaussian noise such as artifacts, structured noise, and other unobserved factors (eg, genetic risk factors, disease susceptibility). Existing methods, including vector autoregression (VAR) and dynamic causal modeling, do not account for unobserved non-Gaussian components, nor can they effectively distinguish contemporaneous relationships from temporal ones. In this work, we propose a novel method to identify latent temporal pathways using time-series biomarker data collected from multiple subjects. The model adjusts for the non-Gaussian components and separates the temporal network from the contemporaneous network. Specifically, an independent component analysis (ICA) is used to extract the unobserved non-Gaussian components, and residuals are used to estimate the contemporaneous and temporal networks among the node variables based on the method of moments. The algorithm is fast and scales easily. We derive the identifiability and asymptotic properties of the temporal and contemporaneous networks. We demonstrate the superior performance of our method through extensive simulations and an application to a study of attention-deficit/hyperactivity disorder (ADHD), where we analyze temporal relationships between brain regional biomarkers. We find that temporal network edges connect different brain regions, whereas most contemporaneous network edges are bilateral, linking the same region across hemispheres, and belong to a subset of the functional connectivity network.
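The ICA-then-network pipeline can be sketched with off-the-shelf tools: extract non-Gaussian components, regress them out, then fit a VAR(1) by least squares on the residuals. This schematic uses OLS rather than the paper's method-of-moments estimator and omits the contemporaneous network:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)
T, d = 1000, 4

# Simulate a VAR(1) signal contaminated by a non-Gaussian (Laplace) source.
A = 0.4 * np.eye(d) + 0.1 * rng.normal(size=(d, d))   # temporal network
Y = np.zeros((T, d))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + rng.normal(size=d)
artifact = rng.laplace(size=(T, 1)) @ rng.normal(size=(1, d))
Z = Y + artifact                                      # observed series

# Step 1: extract non-Gaussian components with ICA and regress them out.
ica = FastICA(n_components=d, random_state=0)
S = ica.fit_transform(Z)                              # estimated sources
kurt = ((S - S.mean(0)) ** 4).mean(0) / S.var(0) ** 2 - 3
ng = np.argsort(-np.abs(kurt))[:1]                    # most non-Gaussian
resid = Z - S[:, ng] @ np.linalg.lstsq(S[:, ng], Z, rcond=None)[0]

# Step 2: estimate the temporal network by OLS on the residual series.
A_hat = np.linalg.lstsq(resid[:-1], resid[1:], rcond=None)[0].T
print(np.round(A_hat, 2))
```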


Subjects
Algorithms; Biomarkers; Computer Simulation; Models, Statistical; Humans; Biomarkers/analysis; Normal Distribution; Attention Deficit Disorder with Hyperactivity; Time Factors; Biometry/methods
17.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38647000

ABSTRACT

Fish growth models are crucial for fisheries stock assessments and are commonly estimated using fish length-at-age data. These data are widely collected using length-stratified age sampling (LSAS), a cost-effective two-phase response-selective sampling method, and may contain age measurement errors (MEs). We propose a methodology that accounts for both LSAS and age MEs to accurately estimate fish growth. The proposed methods use the empirical proportion likelihood methodology for LSAS and the structural errors-in-variables methodology for age MEs. We provide a measure of uncertainty for parameter estimates and standardized residuals for model validation. To model the age distribution, we employ a continuation-ratio logit model that is consistent with the random nature of the true age distribution. We also apply a discretization approach for the age and length distributions, which significantly improves computational efficiency and is consistent with the discrete age and length data typically encountered in practice. Our simulation study shows that neglecting age MEs can lead to significant bias in growth estimation, even when the MEs are small but non-negligible. The new approach, by contrast, performs well regardless of the magnitude of age MEs and accurately estimates the standard errors of parameter estimators. Real data analysis demonstrates the effectiveness of the proposed model validation device. Computer code implementing the methodology is provided.
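For orientation, here is a naive von Bertalanffy growth-curve fit that ignores both length-stratified sampling and age measurement error, the two features whose neglect the paper shows to bias growth estimation (simulated data; the parameter values are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(age, L_inf, k, t0):
    """Classic growth curve: expected length at age."""
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

rng = np.random.default_rng(9)
true = (60.0, 0.3, -0.5)                 # L_inf (cm), k (1/yr), t0 (yr)
age = rng.uniform(1, 15, 300)
length = von_bertalanffy(age, *true) + rng.normal(0, 2.0, age.size)

params, cov = curve_fit(von_bertalanffy, age, length, p0=(50, 0.2, 0.0))
print("L_inf, k, t0 =", np.round(params, 2))
print("SEs =", np.round(np.sqrt(np.diag(cov)), 3))
```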


Subjects
Computer Simulation; Fishes; Animals; Fishes/growth & development; Models, Statistical; Fisheries/statistics & numerical data; Biometry/methods; Likelihood Functions; Bias
18.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837900

ABSTRACT

Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small, randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. This test inversion procedure is computationally expensive, limiting the use of randomization-based inference in applied work. A recent paper by Zhu and Liu derived a closed-form expression for the randomization-based confidence interval using the difference-in-means statistic. We extend the result of Zhu and Liu to obtain a closed-form expression for the randomization-based covariate-adjusted confidence interval, and we give practitioners a sufficient condition, checkable from observed data, that guarantees these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as the Fisher-exact P-value itself, thus removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
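The Fisher randomization test itself is simple to sketch; the Monte Carlo version below approximates the Fisher-exact P-value with the difference-in-means statistic. The paper's contribution, the closed-form covariate-adjusted confidence interval, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 40
z = np.repeat([0, 1], n // 2)               # randomized assignment
y = 0.8 * z + rng.standard_exponential(n)   # skewed (non-normal) outcome

t_obs = y[z == 1].mean() - y[z == 0].mean()

# Fisher randomization test of the sharp null: re-randomize assignments
# and recompute the difference-in-means under each re-randomization.
reps = 10000
t_null = np.empty(reps)
for b in range(reps):
    zb = rng.permutation(z)
    t_null[b] = y[zb == 1].mean() - y[zb == 0].mean()

p_val = (np.abs(t_null) >= abs(t_obs)).mean()
print(f"difference-in-means = {t_obs:.3f}, randomization p = {p_val:.4f}")
```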


Subjects
Computer Simulation; Confidence Intervals; Humans; Biometry/methods; Models, Statistical; Data Interpretation, Statistical; Random Allocation; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods
19.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38819309

ABSTRACT

Doubly adaptive biased coin design (DBCD), a response-adaptive randomization scheme, aims to skew subject assignment probabilities based on accrued responses for ethical considerations. Recent years have seen substantial advances in understanding DBCD's theoretical properties, assuming correct model specification for the responses. However, concerns have been raised about the impact of model misspecification on its design and analysis. In this paper, we assess the robustness to both design model misspecification and analysis model misspecification under DBCD. On one hand, we confirm that the consistency and asymptotic normality of the allocation proportions can be preserved, even when the responses follow a distribution other than the one imposed by the design model during the implementation of DBCD. On the other hand, we extensively investigate three commonly used linear regression models for estimating and inferring the treatment effect, namely difference-in-means, analysis of covariance (ANCOVA) I, and ANCOVA II. By allowing these regression models to be arbitrarily misspecified, thereby not reflecting the true data generating process, we derive the consistency and asymptotic normality of the treatment effect estimators evaluated from the three models. The asymptotic properties show that the ANCOVA II model, which takes covariate-by-treatment interaction terms into account, yields the most efficient estimator. These results can provide theoretical support for using DBCD in scenarios involving model misspecification, thereby promoting the widespread application of this randomization procedure.
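A schematic DBCD simulation is sketched below, using the Hu-Zhang allocation function and an RSIHR-type target for binary responses; the burn-in length and the shrinkage in the success-rate estimates are our choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(11)

def alloc_prob(x, rho, gamma=2.0):
    """Hu-Zhang allocation function: probability of assigning arm 0 when
    the current arm-0 proportion is x and the target proportion is rho."""
    a = rho * (rho / x) ** gamma
    b = (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return a / (a + b)

p_true = (0.7, 0.5)                   # true success rates, arms 0 and 1
n, burn_in = 500, 20
arm = np.array([0, 1] * (burn_in // 2))          # balanced burn-in
y = (rng.random(burn_in) < np.array(p_true)[arm]).astype(int)

for i in range(burn_in, n):
    # Estimated success rates, lightly shrunk to avoid 0/1 boundaries.
    p_hat = [(y[arm == k].sum() + 0.5) / ((arm == k).sum() + 1) for k in (0, 1)]
    # Target allocation to arm 0: RSIHR rule sqrt(p0)/(sqrt(p0)+sqrt(p1)).
    rho = np.sqrt(p_hat[0]) / (np.sqrt(p_hat[0]) + np.sqrt(p_hat[1]))
    x = (arm == 0).mean()
    a_new = 0 if rng.random() < alloc_prob(x, rho) else 1
    arm = np.append(arm, a_new)
    y = np.append(y, int(rng.random() < p_true[a_new]))

print("final arm-0 proportion:", (arm == 0).mean())
```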


Subjects
Models, Statistical; Random Allocation; Humans; Computer Simulation; Randomized Controlled Trials as Topic/statistics & numerical data; Linear Models; Biometry/methods; Data Interpretation, Statistical; Bias; Analysis of Variance; Research Design
20.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38884127

ABSTRACT

The marginal structural quantile model (MSQM) provides a unique lens for understanding the causal effect of a time-varying treatment on the full distribution of potential outcomes. Under the semiparametric framework, we derive the efficient influence function for the MSQM, from which a new doubly robust estimator is proposed for point estimation and inference. We show that the doubly robust estimator is consistent if either the model for treatment assignment or the model for the potential outcome distributions is correctly specified, and is semiparametric efficient if both are correct. To implement the doubly robust MSQM estimator, we propose solving a smoothed estimating equation, which facilitates efficient computation of the point and variance estimates. In addition, we develop a confounding function approach to investigate the sensitivity of several MSQM estimators when the sequential ignorability assumption is violated. Extensive simulations are conducted to examine the finite-sample performance of the proposed methods. We apply the proposed methods to Yale New Haven Health System electronic health record data to study the effect of antihypertensive medications in patients with severe hypertension, and assess the robustness of the findings to unmeasured baseline and time-varying confounding.
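The smoothing device behind the estimating equation can be illustrated on a single weighted quantile: replace the indicator 1{y ≤ q} with a normal CDF so the equation becomes differentiable and easy to solve. This shows only the smoothing trick, not the doubly robust MSQM estimator; the weights and bandwidth below are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(12)
y = rng.gamma(2.0, 1.5, size=500)       # outcomes
w = rng.uniform(0.5, 2.0, size=500)     # stand-in inverse-probability weights
tau, h = 0.5, 0.1 * y.std()             # quantile level, bandwidth

def smoothed_ee(q):
    # Smoothed estimating equation: the indicator 1{y <= q} is replaced
    # by the normal CDF Phi((q - y) / h), a differentiable surrogate.
    return np.average(norm.cdf((q - y) / h), weights=w) - tau

# The smoothed equation is monotone in q, so a bracketing solver works.
q_hat = brentq(smoothed_ee, y.min() - 1, y.max() + 1)
print("smoothed weighted quantile:", round(q_hat, 3))
print("estimating equation at solution:", round(smoothed_ee(q_hat), 6))
```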


Subjects
Computer Simulation; Hypertension; Models, Statistical; Humans; Hypertension/drug therapy; Antihypertensive Agents/therapeutic use; Electronic Health Records/statistics & numerical data; Biometry/methods