Results 1 - 20 of 446
1.
Biostatistics ; 25(2): 449-467, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-36610077

ABSTRACT

An important task in survival analysis is choosing a structure for the relationship between covariates of interest and the time-to-event outcome. For example, the accelerated failure time (AFT) model structures each covariate effect as a constant multiplicative shift in the outcome distribution across all survival quantiles. Though parsimonious, this structure cannot detect or capture effects that differ across quantiles of the distribution, a limitation that is analogous to only permitting proportional hazards in the Cox model. To address this, we propose a general framework for quantile-varying multiplicative effects under the AFT model. Specifically, we embed flexible regression structures within the AFT model and derive a novel formula for interpretable effects on the quantile scale. A regression standardization scheme based on the g-formula is proposed to enable the estimation of both covariate-conditional and marginal effects for an exposure of interest. We implement a user-friendly Bayesian approach for the estimation and quantification of uncertainty while accounting for left truncation and complex censoring. We emphasize the intuitive interpretation of this model through numerical and graphical tools and illustrate its performance through simulation and application to a study of Alzheimer's disease and dementia.
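The rigidity the abstract describes can be checked numerically. The sketch below uses a hypothetical Weibull AFT model with made-up values of β and σ (not the paper's model): the ratio of quantiles at x = 1 versus x = 0 equals exp(β) at every quantile, which is exactly the constant multiplicative shift the proposed framework relaxes.

```python
import math

# Weibull AFT: log T = beta * x + sigma * eps. The coefficient and scale
# below are illustrative assumptions, not values from the paper.
beta, sigma = 0.5, 0.8

def aft_quantile(p, x):
    """p-th quantile of T given covariate x under a Weibull AFT model."""
    # Baseline Weibull quantile on the log scale: sigma * log(-log(1 - p))
    return math.exp(beta * x + sigma * math.log(-math.log(1.0 - p)))

# The AFT structure forces the SAME multiplicative shift exp(beta)
# at every quantile of the outcome distribution:
ratios = [aft_quantile(p, x=1) / aft_quantile(p, x=0) for p in (0.1, 0.5, 0.9)]
```

The baseline term cancels in each ratio, so all entries of `ratios` equal exp(0.5); a quantile-varying model would let these ratios differ across p.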


Subject(s)
Statistical Models, Humans, Bayes Theorem, Proportional Hazards Models, Computer Simulation, Survival Analysis
2.
BMC Bioinformatics ; 25(1): 51, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38297208

ABSTRACT

BACKGROUND: Strongly multicollinear covariates, such as those typically represented in metabolomics applications, represent a challenge for multivariate regression analysis. These challenges are commonly circumvented by reducing the number of covariates to a subset of linearly independent variables, but this strategy may lead to loss of resolution and thus produce models with poorer interpretative potential. The aim of this work was to implement and illustrate a method, multivariate pattern analysis (MVPA), which can handle multivariate covariates without compromising resolution or model quality. RESULTS: MVPA has been implemented in an open-source R package of the same name, mvpa. To facilitate the usage and interpretation of complex association patterns, mvpa has also been integrated into an R shiny app, mvpaShiny, which can be accessed at www.mvpashiny.org. MVPA utilizes a general projection algorithm that embraces a diversity of possible models. The method handles multicollinear and even linearly dependent covariates. MVPA separates the variance in the data into orthogonal parts within the frame of a single joint model: one part describing the relations between covariates, outcome, and explanatory variables and another part describing the "net" predictive association pattern between outcome and explanatory variables. These patterns are visualized and interpreted in variance plots and in plots for pattern analysis and ranking according to variable importance. Adjustment for a linearly dependent covariate is performed in three steps. First, partial least squares regression with repeated Monte Carlo resampling is used to determine the number of predictive PLS components for a model relating the covariate to the outcome. Second, postprocessing of this PLS model by target projection provides a single component expressing the predictive association pattern between the outcome and the covariate. Third, the outcome and the explanatory variables are adjusted for the covariate by using the target score in the projection algorithm to obtain "net" data. We illustrate the main features of MVPA by investigating the partial mediation of a linearly dependent metabolomics descriptor on the association pattern between a measure of insulin resistance and lifestyle-related factors. CONCLUSIONS: Our method and implementation in R extend the range of possible analyses and visualizations that can be performed for complex multivariate data structures. The R packages are available on github.com/liningtonlab/mvpa and github.com/liningtonlab/mvpaShiny.


Subject(s)
Algorithms, Software, Multivariate Analysis, Least Squares Analysis, Monte Carlo Method
3.
New Phytol ; 2024 Aug 25.
Article in English | MEDLINE | ID: mdl-39183371

ABSTRACT

Phenotypic plasticity describes a genotype's ability to produce different phenotypes in response to different environments. Breeding crops that exhibit appropriate levels of plasticity for future climates will be crucial to meeting global demand, but knowledge of the critical environmental factors is limited to a handful of well-studied major crops. Using 727 maize (Zea mays L.) hybrids phenotyped for grain yield in 45 environments, we investigated the ability of a genetic algorithm and two other methods to identify environmental determinants of grain yield from a large set of candidate environmental variables constructed using minimal assumptions. The genetic algorithm identified pre- and postanthesis maximum temperature, mid-season solar radiation, and whole season net evapotranspiration as the four most important variables from a candidate set of 9150. Importantly, these four variables are supported by previous literature. After calculating reaction norms for each environmental variable, candidate genes were identified and gene annotations investigated to demonstrate how this method can generate insights into phenotypic plasticity. The genetic algorithm successfully identified known environmental determinants of hybrid maize grain yield. This demonstrates that the methodology could be applied to other less well-studied phenotypes and crops to improve understanding of phenotypic plasticity and facilitate breeding crops for future climates.
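A genetic algorithm for variable selection of the kind the abstract describes can be sketched in a few lines. The toy below (synthetic data, made-up fitness function, hypothetical parameter values — none of it from the paper) evolves binary masks over candidate variables, rewarding masks that recover the true drivers while penalizing model size.

```python
import random
random.seed(1)

# Toy stand-in for the paper's setup: find which candidate environmental
# variables actually drive a response. All values are illustrative.
n_vars, n_true = 20, 4
true_vars = set(random.sample(range(n_vars), n_true))

def fitness(mask):
    # Reward selecting true drivers; penalize size (parsimony pressure).
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in true_vars)
    return hits - 0.1 * sum(mask)

def ga(pop_size=30, gens=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_vars)   # one-point crossover
            child = a[:cut] + b[cut:]
            # independent bit-flip mutation
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
selected = {i for i, bit in enumerate(best) if bit}
```

In the paper the fitness is predictive accuracy of a yield model over 9150 candidate variables; here a transparent surrogate keeps the mechanics visible.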

4.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38497824

ABSTRACT

The semiparametric Cox proportional hazards model, together with the partial likelihood principle, has been widely used to study the effects of potentially time-dependent covariates on a possibly censored event time. We propose a computationally efficient method for fitting the Cox model to big data involving millions of study subjects. Specifically, we perform maximum partial likelihood estimation on a small subset of the whole data and improve the initial estimator by incorporating the remaining data through one-step estimation with estimated efficient score functions. We show that the final estimator has the same asymptotic distribution as the conventional maximum partial likelihood estimator using the whole dataset but requires only a small fraction of computation time. We demonstrate the usefulness of the proposed method through extensive simulation studies and an application to the UK Biobank data.
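The subset-then-one-step idea can be illustrated on a far simpler model than the semiparametric Cox model treated in the paper. The sketch below uses a fully parametric exponential hazard on synthetic, uncensored data (all values assumed): estimate the rate on a 1% subset, then take a single Newton step using the score and Fisher information evaluated on the full sample.

```python
import random
random.seed(0)

# One-step estimation sketch on an exponential hazard model (not the
# Cox model of the paper): subset MLE, then one full-data Newton step.
true_rate = 2.0
times = [random.expovariate(true_rate) for _ in range(100_000)]

subset = times[:1_000]
rate0 = len(subset) / sum(subset)     # initial MLE from the small subset

n, total = len(times), sum(times)
score = n / rate0 - total             # full-data score U(rate0)
info = n / rate0 ** 2                 # full-data Fisher information I(rate0)
rate1 = rate0 + score / info          # one-step update

full_mle = n / total                  # benchmark: MLE on the whole dataset
```

The update costs one pass over the full data, yet `rate1` lands within O(error²) of the full-data MLE — the same first-order equivalence the paper establishes for its efficient-score one-step Cox estimator.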


Subject(s)
Big Data, UK Biobank, Humans, Proportional Hazards Models, Probability, Computer Simulation
5.
Stat Med ; 43(7): 1315-1328, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38270062

ABSTRACT

Joint models for longitudinal and time-to-event data are often employed to calculate dynamic individualized predictions used in numerous applications of precision medicine. Two components of joint models that influence the accuracy of these predictions are the shape of the longitudinal trajectories and the functional form linking the longitudinal outcome history to the hazard of the event. Finding a single well-specified model that produces accurate predictions for all subjects and follow-up times can be challenging, especially when considering multiple longitudinal outcomes. In this work, we use the concept of super learning and avoid selecting a single model. In particular, we specify a weighted combination of the dynamic predictions calculated from a library of joint models with different specifications. The weights are selected to optimize a predictive accuracy metric using V-fold cross-validation. We use as predictive accuracy measures the expected quadratic prediction error and the expected predictive cross-entropy. In a simulation study, we found that the super learning approach produces results very similar to the Oracle model, which was the model with the best performance in the test datasets. All proposed methodology is implemented in the freely available R package JMbayes2.
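The weight-selection step at the core of super learning can be shown in miniature. The sketch below is a stand-in with synthetic data and a two-model "library" (the real method refits each joint model inside V-fold cross-validation and uses survival-specific loss functions, none of which is attempted here): pick the convex-combination weight that minimizes held-out squared prediction error.

```python
import random
random.seed(3)

# Two fixed "library" predictions of a continuous target: model A is
# unbiased but noisy, model B is biased but precise. Illustrative only.
truth = [random.random() for _ in range(500)]
pred_a = [t + random.gauss(0, 0.30) for t in truth]
pred_b = [t + 0.20 + random.gauss(0, 0.10) for t in truth]

holdout = range(250, 500)   # weights are judged on held-out points only

def holdout_error(w):
    """Mean squared error of the w-weighted blend on the held-out half."""
    errs = [(w * pred_a[i] + (1 - w) * pred_b[i] - truth[i]) ** 2
            for i in holdout]
    return sum(errs) / len(errs)

# Grid search over the simplex {(w, 1 - w) : 0 <= w <= 1}.
best_w = min((k / 100 for k in range(101)), key=holdout_error)
```

Because the grid contains w = 0 and w = 1, the blended predictor can never do worse on the selection criterion than either library member alone — the defining property that makes the weighted ensemble competitive with the oracle model.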


Subject(s)
Precision Medicine, Humans, Computer Simulation, Precision Medicine/methods
6.
Stat Med ; 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39080838

ABSTRACT

Marginal structural models have been increasingly used by analysts in recent years to account for confounding bias in studies with time-varying treatments. The parameters of these models are often estimated using inverse probability of treatment weighting. To ensure that the estimated weights adequately control confounding, it is possible to check for residual imbalance between treatment groups in the weighted data. Several balance metrics have been developed and compared in the cross-sectional case but have not yet been evaluated and compared in longitudinal studies with time-varying treatment. We first extended the definition of several balance metrics to the case of a time-varying treatment, with or without censoring. We then compared the performance of these balance metrics in a simulation study by assessing the strength of the association between their estimated level of imbalance and bias. We found that the Mahalanobis balance performed best. Finally, the method was illustrated by estimating the cumulative effect of statin exposure over one year on the risk of cardiovascular disease or death in people aged 65 and over in population-wide administrative data. This illustration confirms the feasibility of employing our proposed metrics in large databases with multiple time points.
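At a single time point, a Mahalanobis balance diagnostic of the kind the abstract compares reduces to a Mahalanobis distance between weighted covariate means. The sketch below uses synthetic data and unit weights as a placeholder (the paper's longitudinal extension pools such quantities over time points, which is not attempted here); in practice `w` would hold the estimated inverse-probability-of-treatment weights.

```python
import numpy as np
rng = np.random.default_rng(0)

# Synthetic cross-sectional data: treatment assignment is confounded by
# the first covariate. Everything here is illustrative.
n = 500
X = rng.normal(size=(n, 3))
treated = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))
w = np.ones(n)   # placeholder; use estimated IPT weights in practice

def mahalanobis_balance(X, z, w):
    """Mahalanobis distance between weighted covariate means of two groups."""
    m1 = np.average(X[z], axis=0, weights=w[z])
    m0 = np.average(X[~z], axis=0, weights=w[~z])
    diff = m1 - m0
    S = np.cov(X, rowvar=False)          # pooled covariance of the covariates
    return float(np.sqrt(diff @ np.linalg.solve(S, diff)))

imbalance = mahalanobis_balance(X, treated, w)
```

Unlike per-covariate standardized differences, this single scalar accounts for covariate correlation, which is one reason it can track bias more closely.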

7.
Br J Clin Pharmacol ; 90(3): 849-862, 2024 03.
Article in English | MEDLINE | ID: mdl-37984417

ABSTRACT

AIMS: This study was conducted to develop a population pharmacokinetic (PK) model of methotrexate in Korean patients with haematologic malignancy, identify factors affecting methotrexate PK, and propose an optimal dosage regimen for the Korean population. METHODS: Data were retrospectively collected from 188 patients with acute leukaemia or non-Hodgkin's lymphoma who were admitted to Severance Hospital between November 2005 and January 2016. Using demographic factors and laboratory results as potential covariates for PK parameters, model development was performed using NONMEM, and optimal dosing regimens were developed using the final PK model. RESULTS: A two-compartment model incorporating body weight via allometry best described the data, yielding typical parameter values of 25.09 L for central volume of distribution (V1), 17.65 L for peripheral volume of distribution (V2), 12.89 L/h for clearance (CL) and 0.655 L/h for inter-compartmental clearance in a 50 kg patient. Covariate analyses showed that, at a weight of 50 kg, CL decreased by 0.11 L/h for each 1-year increase in age above 14 years and decreased 0.8-fold when the serum creatinine level doubled, indicating the importance of age-specific dose individualization in methotrexate treatment. The volume of distribution at steady state derived from the PK parameters (= V1 + V2) was 0.85 L/kg, similar to values reported in Western and Chinese populations. Optimal doses simulated from the final model successfully produced PK measures close to the chosen target. CONCLUSIONS: The population PK model and optimal dosage regimens developed in this study can serve as a basis for precision dosing in Korean patients with haematologic malignancy.
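The reported covariate effects on clearance can be turned into a small calculator. The typical value (12.89 L/h at 50 kg), the age slope (0.11 L/h per year above 14), and the 0.8-fold change per creatinine doubling come from the abstract; the allometric exponent of 0.75, the reference creatinine of 0.8, and the way the age term combines with the weight scaling are assumptions made here purely for illustration.

```python
import math

# Values from the abstract; structure and remaining constants are assumed.
CL_TYP = 12.89            # L/h, 50 kg reference patient
AGE_SLOPE = 0.11          # L/h decrease per year of age above 14
SCR_EXP = math.log2(0.8)  # "0.8-fold when creatinine doubles" => power -0.322

def clearance(weight_kg, age_yr, scr, scr_ref=0.8):
    cl = CL_TYP * (weight_kg / 50.0) ** 0.75      # assumed allometric scaling
    cl -= AGE_SLOPE * max(0.0, age_yr - 14.0)     # age effect above 14 years
    cl *= (scr / scr_ref) ** SCR_EXP              # renal-function effect
    return cl

cl_ref = clearance(50, 14, 0.8)        # reference patient -> typical CL
cl_double_scr = clearance(50, 14, 1.6) # same patient, creatinine doubled
```

The reference patient recovers the typical value exactly, and doubling creatinine multiplies CL by 0.8, matching the two covariate relations quoted in the abstract.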


Subject(s)
Hematologic Neoplasms, Methotrexate, Humans, Adolescent, Methotrexate/therapeutic use, Methotrexate/pharmacokinetics, Retrospective Studies, Hematologic Neoplasms/drug therapy, Republic of Korea, Biological Models
8.
BMC Med Res Methodol ; 24(1): 22, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38273261

ABSTRACT

When multiple influential covariates need to be balanced during a clinical trial, stratified blocked randomization and covariate-adaptive randomization procedures are frequently used to prevent bias and enhance the validity of data analysis results. The latter approach is increasingly used in practice for studies with multiple covariates and limited sample sizes. Among these approaches, the covariate-adaptive procedures proposed by Pocock and Simon are straightforward to utilize in practice. We aim to investigate the optimal design parameters for the patient treatment assignment probability of their three methods. In addition, we seek to answer how randomization performance changes when additional covariates are added to an existing randomization procedure. We conducted extensive simulation studies to address these practically important questions.
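A minimal Pocock–Simon minimization scheme, for two arms and two binary covariates, can be sketched as follows. The biased-coin probability `p_best = 0.8` is an illustrative value — choosing this design parameter well is exactly what the paper's simulations investigate.

```python
import random
random.seed(7)

arms = (0, 1)
counts = [[[0, 0], [0, 0]] for _ in arms]   # counts[arm][covariate][level]

def imbalance_if(arm, profile):
    """Summed marginal imbalance if `arm` received a patient with `profile`."""
    total = 0
    for c, level in enumerate(profile):
        hypo = [counts[a][c][level] + (a == arm) for a in arms]
        total += abs(hypo[0] - hypo[1])
    return total

def assign(profile, p_best=0.8):
    scores = [imbalance_if(a, profile) for a in arms]
    if scores[0] == scores[1]:
        arm = random.choice(arms)           # tie: pure randomization
    else:
        best = 0 if scores[0] < scores[1] else 1
        arm = best if random.random() < p_best else 1 - best
    for c, level in enumerate(profile):
        counts[arm][c][level] += 1
    return arm

for _ in range(200):
    assign((random.randint(0, 1), random.randint(0, 1)))

sizes = [sum(counts[a][0]) for a in arms]   # arm sizes via covariate-0 margins
```

Setting `p_best = 1` gives deterministic minimization (maximally balanced but predictable); values below 1 trade a little balance for allocation unpredictability.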


Subject(s)
Research Design, Humans, Computer Simulation, Probability, Random Allocation, Sample Size, Clinical Trials as Topic
9.
BMC Med Res Methodol ; 24(1): 101, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38689224

ABSTRACT

BACKGROUND: Vaccine efficacy (VE) assessed in a randomized controlled clinical trial can be affected by demographic, clinical, and other subject-specific characteristics evaluated as baseline covariates. Understanding the effect of covariates on efficacy is key to decisions by vaccine developers and public health authorities. METHODS: This work evaluates the impact of including correlate of protection (CoP) data in logistic regression on its performance in identifying statistically and clinically significant covariates in settings typical for a vaccine phase 3 trial. The proposed approach uses CoP data and covariate data as predictors of clinical outcome (diseased versus non-diseased) and is compared to logistic regression (without CoP data) to relate vaccination status and covariate data to clinical outcome. RESULTS: Clinical trial simulations, in which the true relationship between CoP data and clinical outcome probability is a sigmoid function, show that use of CoP data increases the positive predictive value for detection of a covariate effect. If the true relationship is characterized by a decreasing convex function, use of CoP data does not substantially change positive or negative predictive value. In either scenario, vaccine efficacy is estimated more precisely (i.e., confidence intervals are narrower) in covariate-defined subgroups if CoP data are used, implying that using CoP data increases the ability to determine clinical significance of baseline covariate effects on efficacy. CONCLUSIONS: This study proposes and evaluates a novel approach for assessing baseline demographic covariates potentially affecting VE. Results show that the proposed approach can sensitively and specifically identify potentially important covariates and provides a method for evaluating their likely clinical significance in terms of predicted impact on vaccine efficacy. 
It shows further that inclusion of CoP data can enable more precise VE estimation, thus enhancing study power and/or efficiency and providing even better information to support health policy and development decisions.


Subject(s)
Vaccine Efficacy, Humans, Logistic Models, Vaccine Efficacy/statistics & numerical data, Randomized Controlled Trials as Topic/statistics & numerical data, Randomized Controlled Trials as Topic/methods, Vaccination/statistics & numerical data, Vaccination/methods, Vaccines/therapeutic use, Demography/statistics & numerical data, Computer Simulation, Clinical Trials, Phase III as Topic/statistics & numerical data, Clinical Trials, Phase III as Topic/methods
10.
Clin Trials ; 21(4): 399-411, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38825841

ABSTRACT

There has been growing interest in covariate adjustment in the analysis of randomized controlled trials in recent years. For instance, the US Food and Drug Administration recently issued guidance that emphasizes the importance of distinguishing between conditional and marginal treatment effects. Although these effects may sometimes coincide in the context of linear models, this is not typically the case in other settings, and this distinction is often overlooked in clinical trial practice. Considering these developments, this article provides a review of when and how to use covariate adjustment to enhance precision in randomized controlled trials. We describe the differences between conditional and marginal estimands and stress the necessity of aligning statistical analysis methods with the chosen estimand. In addition, we highlight the potential misalignment of commonly used methods in estimating marginal treatment effects. We advocate for the use of the standardization approach, as it can improve efficiency by leveraging the information contained in baseline covariates while remaining robust to model misspecification. Finally, we present practical considerations that have arisen in our respective consultations to further clarify the advantages and limitations of covariate adjustment.
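The standardization (g-computation) approach advocated here can be sketched end to end on a simulated two-arm trial with a binary outcome. All data and coefficient values below are synthetic: fit an outcome model that includes the baseline covariate, predict each patient's risk under both treatment assignments, and average the difference to obtain a marginal risk difference.

```python
import numpy as np
rng = np.random.default_rng(42)

# Simulated randomized trial: one baseline covariate, binary outcome.
n = 2000
x = rng.normal(size=n)                  # baseline covariate
a = rng.integers(0, 2, size=n)          # randomized treatment assignment
logit = -0.5 + 1.0 * a + 0.8 * x
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), a, x]) # intercept, treatment, covariate

def fit_logistic(X, y, steps=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

beta = fit_logistic(X, y)

# Standardization: set everyone to a=1, then a=0, and average the risks.
X1, X0 = X.copy(), X.copy()
X1[:, 1], X0[:, 1] = 1.0, 0.0
risk1 = np.mean(1 / (1 + np.exp(-X1 @ beta)))
risk0 = np.mean(1 / (1 + np.exp(-X0 @ beta)))
marginal_rd = risk1 - risk0
```

Averaging predictions over the full covariate distribution is what makes `marginal_rd` a marginal estimand; the treatment coefficient `beta[1]` by itself is a conditional (covariate-specific) log odds ratio, and the two do not coincide in logistic models.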


Asunto(s)
Ensayos Clínicos Controlados Aleatorios como Asunto , Ensayos Clínicos Controlados Aleatorios como Asunto/métodos , Humanos , Interpretación Estadística de Datos , Modelos Estadísticos , Proyectos de Investigación , Estados Unidos , Modelos Lineales
11.
J Biopharm Stat ; : 1-18, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39216007

ABSTRACT

We study optimal designs for clinical trials when the value of the response and its variance depend on treatment, and covariates are included in the response model. Such designs generalize Neyman allocation, commonly used in personalized medicine when external factors may have differing effects on the response depending on subgroups of patients. We develop theoretical results for D-, A-, E-, and DA-optimal designs and construct semidefinite programming (SDP) formulations that support their numerical computation. D-, A-, and E-optimal designs are appropriate for efficient estimation of distinct properties of the parameters of the response models. Our formulation allows finding optimal allocation schemes for a general number of treatments and covariates. Finally, we study frequentist sequential clinical trial allocation in contexts where response parameters and their respective variances remain unknown. We illustrate, with a simulated example and with a redesigned clinical trial on the treatment of neurodegenerative disease, that both theoretical and SDP results, derived under the assumption of known variances, converge asymptotically to allocations obtained through the sequential scheme. Procedures to use static and sequential allocation are proposed.
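The classic two-arm Neyman allocation that these designs generalize fits in a few lines: allocating patients proportionally to the per-arm standard deviations minimizes the variance of the estimated difference in means. The standard deviations and total sample size below are illustrative values, not from the paper.

```python
# Two-arm Neyman allocation sketch (illustrative sigmas and total n).
s1, s2, N = 1.0, 3.0, 200

n1 = N * s1 / (s1 + s2)    # Neyman allocation, continuous relaxation
n2 = N - n1

def var_diff(n1, n2):
    """Variance of the difference in arm means for given arm sizes."""
    return s1 ** 2 / n1 + s2 ** 2 / n2

neyman_var = var_diff(n1, n2)
equal_var = var_diff(N / 2, N / 2)   # benchmark: balanced 1:1 allocation
```

Here the noisier arm receives three quarters of the patients, and the resulting variance (0.08) beats equal allocation (0.10). The paper's D-, A-, E-, and DA-optimal designs extend this idea to many arms and covariate-dependent responses, where no such closed form exists and SDP is used instead.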

12.
J Biopharm Stat ; : 1-20, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38590156

ABSTRACT

When evaluating real-world treatment effects, analyses based on randomized clinical trials (RCTs) often introduce generalizability bias due to differences in risk factors between the trial participants and the real-world patient population. This lack of generalizability in RCT-only analyses can be addressed by leveraging observational studies with large sample sizes that are representative of the real-world population. A set of novel statistical methods, termed "genRCT", for improving the generalizability of trial findings has been developed using calibration weighting, which enforces covariate balance between the RCT and the observational study. This paper reviews statistical methods for generalizing RCT findings by harnessing information from large observational studies that represent real-world patients. Specifically, we discuss the choices of data sources and variables needed to meet key theoretical assumptions and principles. We introduce and compare estimation methods for continuous, binary, and survival endpoints. We showcase the use of the R package genRCT through a case study that estimates the average treatment effect of adjuvant chemotherapy for stage IB non-small cell lung cancer patients represented by a large cancer registry.

13.
Multivariate Behav Res ; 59(3): 502-522, 2024.
Article in English | MEDLINE | ID: mdl-38348679

ABSTRACT

In psychology and education, tests (e.g., reading tests) and self-reports (e.g., clinical questionnaires) generate counts, but corresponding Item Response Theory (IRT) methods are underdeveloped compared to binary data. Recent advances include the Two-Parameter Conway-Maxwell-Poisson model (2PCMPM), generalizing Rasch's Poisson Counts Model, with item-specific difficulty, discrimination, and dispersion parameters. Explaining differences in model parameters informs item construction and selection but has received little attention. We introduce two 2PCMPM-based explanatory count IRT models: The Distributional Regression Test Model for item covariates, and the Count Latent Regression Model for (categorical) person covariates. Estimation methods are provided and satisfactory statistical properties are observed in simulations. Two examples illustrate how the models help understand tests and underlying constructs.


Subject(s)
Statistical Models, Humans, Regression Analysis, Reproducibility of Results, Computer Simulation/statistics & numerical data, Poisson Distribution, Psychometrics/methods, Statistical Data Interpretation
14.
Pharm Stat ; 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39238047

ABSTRACT

A common feature of cohort studies is a baseline measurement of the continuous follow-up or outcome variable. Common examples include baseline measurements of physiological characteristics such as blood pressure or heart rate in studies where the outcome is a post-baseline measurement of the same variable. Methods incorporating the propensity score are increasingly being used to estimate treatment effects in observational studies. We examined six methods for incorporating the baseline value of the follow-up variable when using propensity score matching or weighting. These methods differed according to whether the baseline value of the follow-up variable was included in or excluded from the propensity score model, whether subsequent regression adjustment was conducted in the matched or weighted sample to adjust for the baseline value of the follow-up variable, and whether the analysis estimated the effect of treatment on the follow-up variable or on the change from baseline. We used Monte Carlo simulations with 750 scenarios. While no analytic method had uniformly superior performance, we provide the following recommendations: first, when using weighting and the ATE is the target estimand, use an augmented inverse probability weighted estimator, or include the baseline value of the follow-up variable in the propensity score model and subsequently adjust for it in a regression model. Second, when the ATT is the target estimand, regardless of whether weighting or matching is used, analyze change from baseline using a propensity score that excludes the baseline value of the follow-up variable.

15.
Biom J ; 66(6): e202400008, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39049627

ABSTRACT

Finlay-Wilkinson regression is a popular method for modeling genotype-environment interaction in plant breeding and crop variety testing. When environment is a random factor, this model may be cast as a factor-analytic variance-covariance structure, implying a regression on random latent environmental variables. This paper reviews such models with a focus on their use in the analysis of multi-environment trials for the purpose of making predictions in a target population of environments. We investigate the implication of random versus fixed effects assumptions, starting from basic analysis-of-variance models, then moving on to factor-analytic models and considering the transition to models involving observable environmental covariates, which promise to provide more accurate and targeted predictions than models with latent environmental variables.
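The basic Finlay-Wilkinson step can be sketched directly: regress each genotype's yields on the environment means, and read each slope as that genotype's sensitivity to environment quality. The data below are synthetic, with made-up sensitivities, purely to show the mechanics.

```python
import random
random.seed(5)

# Synthetic multi-environment trial: 4 genotypes x 8 environments.
genotypes, envs = 4, 8
env_effect = [random.gauss(0, 2) for _ in range(envs)]
slope_true = [0.6, 0.9, 1.1, 1.4]            # assumed sensitivities
yield_ = [[10 + slope_true[g] * env_effect[e] + random.gauss(0, 0.2)
           for e in range(envs)] for g in range(genotypes)]

# Environment index: mean yield of all genotypes in each environment.
env_mean = [sum(yield_[g][e] for g in range(genotypes)) / genotypes
            for e in range(envs)]
grand = sum(env_mean) / envs

def fw_slope(g):
    """OLS slope of genotype g's yields on the environment means."""
    g_mean = sum(yield_[g]) / envs
    num = sum((env_mean[e] - grand) * (yield_[g][e] - g_mean)
              for e in range(envs))
    den = sum((env_mean[e] - grand) ** 2 for e in range(envs))
    return num / den

slopes = [fw_slope(g) for g in range(genotypes)]
```

A slope above 1 marks a genotype that exaggerates environmental differences, below 1 one that buffers them; by construction the slopes average to exactly 1. Treating environment as random, as the paper discusses, turns this regression on an observed index into a factor-analytic model with latent environmental variables.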


Subject(s)
Biometry, Biometry/methods, Environment, Statistical Models, Analysis of Variance, Plant Breeding/methods, Gene-Environment Interaction
16.
Behav Res Methods ; 56(4): 3873-3890, 2024 04.
Article in English | MEDLINE | ID: mdl-38580862

ABSTRACT

In behavioral research, it is very common to manage multiple datasets containing information about the same set of individuals, such that one dataset attempts to explain the others. To address this need, this paper proposes the Tucker3-PCovR model. This model is a particular case of the PCovR family that focuses on the analysis of a three-way data array and a two-way data matrix, where the latter plays the explanatory role. The Tucker3-PCovR model reduces the predictors to a few components and predicts the criterion using these components while, at the same time, fitting the three-way data with the Tucker3 model. Both the reduction of the predictors and the prediction of the criterion are done simultaneously. An alternating least squares algorithm is proposed to estimate the Tucker3-PCovR model. A biplot representation is presented to facilitate the interpretation of the results. Some applications are made to empirical datasets from the field of psychology.


Subject(s)
Algorithms, Statistical Models, Humans, Regression Analysis, Statistical Data Interpretation, Behavioral Research/methods, Least Squares Analysis
17.
Biometrics ; 79(4): 2869-2880, 2023 12.
Article in English | MEDLINE | ID: mdl-37700503

ABSTRACT

Covariate-adaptive randomization methods are widely used in clinical trials to balance baseline covariates. Recent studies have shown the validity of using regression-based estimators for treatment effects without imposing functional form requirements on the true data generation model. These studies have had limitations in certain scenarios; for example, in the case of multiple treatment groups, these studies did not consider additional covariates or assumed that the allocation ratios were the same across strata. To address these limitations, we develop a stratum-common estimator and a stratum-specific estimator under multiple treatments. We derive the asymptotic behaviors of these estimators and propose consistent nonparametric estimators for asymptotic variances. To determine their efficiency, we compare the estimators with the stratified difference-in-means estimator as the benchmark. We find that the stratum-specific estimator guarantees efficiency gains, regardless of whether the allocation ratios across strata are the same or different. Our conclusions were also validated by simulation studies and a real clinical trial example.
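The benchmark estimator against which the paper measures efficiency, the stratified difference-in-means, is simple to state: a weighted average of within-stratum treated-minus-control means, weighted by stratum size. The sketch below evaluates it on synthetic data with a known effect (all values illustrative).

```python
import random
random.seed(11)

# Synthetic stratified trial: three strata, 1:1 allocation within each,
# a constant treatment effect of 1.5 on top of stratum-specific baselines.
data = []   # (stratum, treated, outcome)
effect = 1.5
for s, base in enumerate([0.0, 2.0, 4.0]):
    for i in range(100):
        z = i % 2
        y = base + effect * z + random.gauss(0, 1)
        data.append((s, z, y))

def stratified_dim(data):
    """Stratum-size-weighted average of within-stratum mean differences."""
    strata = sorted({s for s, _, _ in data})
    n = len(data)
    est = 0.0
    for s in strata:
        y1 = [y for st, z, y in data if st == s and z == 1]
        y0 = [y for st, z, y in data if st == s and z == 0]
        n_s = len(y1) + len(y0)
        est += (n_s / n) * (sum(y1) / len(y1) - sum(y0) / len(y0))
    return est

tau_hat = stratified_dim(data)
```

The paper's stratum-common and stratum-specific regression estimators add covariate adjustment on top of this structure; the stratum-specific version is the one shown to guarantee efficiency gains over the estimator above.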


Subject(s)
Random Allocation, Computer Simulation
18.
Biometrics ; 79(4): 3111-3125, 2023 12.
Article in English | MEDLINE | ID: mdl-37403227

ABSTRACT

We propose a broad class of so-called Cox-Aalen transformation models that incorporate both multiplicative and additive covariate effects on the baseline hazard function within a transformation. The proposed models provide a highly flexible and versatile class of semiparametric models that include the transformation models and the Cox-Aalen model as special cases. Specifically, it extends the transformation models by allowing potentially time-dependent covariates to work additively on the baseline hazard and extends the Cox-Aalen model through a predetermined transformation function. We propose an estimating equation approach and devise an expectation-solving (ES) algorithm that involves fast and robust calculations. The resulting estimator is shown to be consistent and asymptotically normal via modern empirical process techniques. The ES algorithm yields a computationally simple method for estimating the variance of both parametric and nonparametric estimators. Finally, we demonstrate the performance of our procedures through extensive simulation studies and applications in two randomized, placebo-controlled human immunodeficiency virus (HIV) prevention efficacy trials. The data example shows the utility of the proposed Cox-Aalen transformation models in enhancing statistical power for discovering covariate effects.


Subject(s)
Algorithms, Research Design, Humans, Proportional Hazards Models, Computer Simulation, Statistical Models
19.
Biometrics ; 79(2): 1145-1158, 2023 06.
Article in English | MEDLINE | ID: mdl-35146750

ABSTRACT

An estimated quadratic inference function method is proposed for correlated failure time data with auxiliary covariates. The proposed method makes efficient use of the auxiliary information for the incomplete exposure covariates and preserves the property of the quadratic inference function method that requires the covariates to be completely observed. It can improve the estimation efficiency and easily deal with the situation when the cluster size is large. The proposed estimator which minimizes the estimated quadratic inference function is shown to be consistent and asymptotically normal. A chi-squared test based on the estimated quadratic inference function is proposed to test hypotheses about the regression parameters. The small-sample performance of the proposed method is investigated through extensive simulation studies. The proposed method is then applied to analyze the Study of Left Ventricular Dysfunction (SOLVD) data as an illustration.


Asunto(s)
Interpretación Estadística de Datos , Simulación por Computador
20.
Biometrics ; 79(4): 3690-3700, 2023 12.
Article in English | MEDLINE | ID: mdl-37337620

ABSTRACT

In clinical follow-up studies with a time-to-event end point, the difference in the restricted mean survival time (RMST) is a suitable substitute for the hazard ratio (HR). However, the RMST only measures the survival of patients over a period of time from the baseline and cannot reflect changes in life expectancy over time. Based on the RMST, we study the conditional restricted mean survival time (cRMST) by estimating life expectancy in the future according to the time that patients have survived, reflecting the dynamic survival status of patients during follow-up. In this paper, we introduce the estimation method of cRMST based on pseudo-observations, the statistical inference concerning the difference between two cRMSTs (cRMSTd), and the establishment of the robust dynamic prediction model using the landmark method. Simulation studies are conducted to evaluate the statistical properties of these methods. The results indicate that the estimation of the cRMST is accurate, and the dynamic RMST model has high accuracy in coefficient estimation and good predictive performance. In addition, an example of patients with chronic kidney disease who received renal transplantations is employed to illustrate that the dynamic RMST model can predict patients' expected survival times from any prediction time, considering the time-dependent covariates and time-varying effects of covariates.
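The quantities themselves are easy to exhibit in the censoring-free case, where the paper's pseudo-observation machinery reduces to simple truncated means: the RMST up to τ is the mean of min(T, τ), and the cRMST is the expected additional survival (capped at τ) among patients who have already survived past s. The data below are synthetic draws, not from the study.

```python
import random
random.seed(2)

# Censoring-free sketch of RMST and conditional RMST (cRMST).
# Exponential survival times with rate 0.5 (mean survival 2), illustrative.
times = [random.expovariate(0.5) for _ in range(50_000)]

def rmst(times, tau):
    """Restricted mean survival time up to horizon tau."""
    return sum(min(t, tau) for t in times) / len(times)

def crmst(times, s, tau):
    """Expected additional survival (capped at tau) given survival past s."""
    survivors = [t for t in times if t > s]
    return sum(min(t, tau) - s for t in survivors) / len(survivors)

r = rmst(times, tau=3.0)
cr = crmst(times, s=1.0, tau=4.0)
```

Both calls use a 3-unit horizon, so for these memoryless exponential draws `r` and `cr` agree up to sampling noise; for real patient data the two generally differ, and tracking how cRMST evolves with s is exactly the dynamic survival information the paper's landmark model exploits. With censoring, the simple means above must be replaced by the pseudo-observation estimator the paper develops.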


Subject(s)
Kidney Transplantation, Humans, Survival Rate, Proportional Hazards Models, Follow-Up Studies, Computer Simulation, Survival Analysis