Results 1 - 20 of 18,087

2.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38682464

ABSTRACT

Current Poisson factor models often assume that the factors are unknown, which overlooks the explanatory potential of certain observable covariates. This study focuses on high-dimensional settings, where the number of count response variables and/or covariates can diverge as the sample size increases. A covariate-augmented overdispersed Poisson factor model is proposed to jointly perform a high-dimensional Poisson factor analysis and estimate a large coefficient matrix for overdispersed count data. A set of identifiability conditions is provided to theoretically guarantee computational identifiability. We incorporate the interdependence of both response variables and covariates by imposing a low-rank constraint on the large coefficient matrix. To address the computational challenges posed by nonlinearity, two high-dimensional latent matrices, and the low-rank constraint, we propose a novel variational estimation scheme that combines Laplace and Taylor approximations. We also develop a criterion based on a singular value ratio to determine the number of factors and the rank of the coefficient matrix. Comprehensive simulation studies demonstrate that the proposed method outperforms state-of-the-art methods in estimation accuracy and computational efficiency. The practical merit of our method is demonstrated by an application to the CITE-seq dataset. A flexible implementation of our proposed method is available in the R package COAP.
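A minimal sketch of the singular-value-ratio idea, assuming the common "largest ratio of consecutive singular values" form of such criteria; the paper's exact criterion and its COAP implementation may differ:

```python
import numpy as np

def svr_select(M, kmax=10, eps=1e-10):
    """Pick a rank by maximizing the ratio of consecutive singular values:
    the selected rank is the k (1 <= k <= kmax) at which s_k / s_{k+1} peaks."""
    s = np.linalg.svd(M, compute_uv=False)[:kmax + 1]
    ratios = s[:-1] / np.maximum(s[1:], eps)  # guard against tiny singular values
    return int(np.argmax(ratios) + 1)         # +1 because ranks are 1-indexed

# Toy check: a rank-3 signal plus noise should yield an estimate near 3.
rng = np.random.default_rng(0)
M = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50)) + 0.1 * rng.normal(size=(200, 50))
print(svr_select(M))  # typically 3
```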


Subject(s)
Computer Simulation; Models, Statistical; Poisson Distribution; Humans; Sample Size; Biometry/methods; Factor Analysis, Statistical
3.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38819313

ABSTRACT

Ruberu et al. (2023) introduce an elegant approach for fitting a complicated meta-analysis problem with diverse reporting modalities into the framework of hierarchical Bayesian inference. We discuss issues related to some of the parametric model assumptions involved.


Subject(s)
Bayes Theorem; Meta-Analysis as Topic; Neoplasms; Penetrance; Humans; Models, Statistical; Biometry/methods
4.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38771658

ABSTRACT

Limitations of using the traditional Cox hazard ratio to summarize the magnitude of the treatment effect on time-to-event outcomes have been widely discussed, and alternative measures that do not have such limitations are gaining attention. One recently proposed alternative, in a simple 2-sample comparison setting, uses the average hazard with survival weight (AH), which can be interpreted as the general censoring-free person-time incidence rate over a given time window. In this paper, we propose a new regression analysis approach for the AH with a truncation time τ. We investigate 3 versions of AH regression analysis, assuming (1) independent censoring, (2) group-specific censoring, and (3) covariate-dependent censoring. The proposed AH regression methods are closely related to robust Poisson regression. Although the new approach requires a truncation time τ to be specified explicitly, it can be more robust than Poisson regression in the presence of censoring. With the AH regression approach, one can summarize the between-group treatment difference in both absolute and relative terms, adjusting for covariates associated with the outcome. This property increases the likelihood that the magnitude of the treatment effect is correctly interpreted. The AH regression approach can be a useful alternative to the traditional Cox hazard ratio approach for estimating and reporting the magnitude of the treatment effect on time-to-event outcomes.
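A sketch of the 2-sample building block, assuming the common definition AH(τ) = (1 - S(τ)) / ∫₀^τ S(t) dt with S the Kaplan-Meier estimate; the regression machinery proposed in the paper is not reproduced here:

```python
import numpy as np

def kaplan_meier(time, event):
    """Return event times and the Kaplan-Meier survival estimate just after each."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    uniq = np.unique(time[event == 1])
    at_risk = np.array([(time >= t).sum() for t in uniq])
    deaths = np.array([((time == t) & (event == 1)).sum() for t in uniq])
    return uniq, np.cumprod(1.0 - deaths / at_risk)

def average_hazard(time, event, tau):
    """AH(tau) = (1 - S(tau)) / integral_0^tau S(t) dt, with S the KM estimate."""
    t, s = kaplan_meier(time, event)
    grid = np.concatenate(([0.0], t[t <= tau], [tau]))
    s_left = np.concatenate(([1.0], s[t <= tau]))  # S is constant between event times
    rmst = np.sum(np.diff(grid) * s_left)          # restricted mean survival time
    return (1.0 - s_left[-1]) / rmst

# Sanity check: for an exponential with rate 0.5, AH(tau) equals 0.5 at any tau.
rng = np.random.default_rng(1)
t_true = rng.exponential(2.0, 300)
c = rng.exponential(3.0, 300)
time, event = np.minimum(t_true, c), (t_true <= c).astype(int)
print(average_hazard(time, event, tau=2.0))  # close to 0.5
```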


Subject(s)
Proportional Hazards Models; Humans; Regression Analysis; Survival Analysis; Computer Simulation; Poisson Distribution; Biometry/methods; Models, Statistical
5.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38949889

ABSTRACT

The response envelope model proposed by Cook et al. (2010) is an efficient method for estimating the regression coefficient in the context of the multivariate linear regression model. It improves estimation efficiency by identifying the material and immaterial parts of the responses and removing the immaterial variation. The response envelope model has been investigated only for continuous response variables. In this paper, we propose the multivariate probit model with latent envelope, in short the probit envelope model, as a response envelope model for multivariate binary response variables. The probit envelope model accounts for relations among the Gaussian latent variables of the multivariate probit model by using the idea of the response envelope model. We address the identifiability of the probit envelope model by employing the concept of essential identifiability and suggest a Bayesian method for parameter estimation. We illustrate the probit envelope model via simulation studies and a real-data analysis. The simulation studies show that the probit envelope model has the potential to gain estimation efficiency over the multivariate probit model. The real-data analysis shows that the probit envelope model is useful for multi-label classification.


Subject(s)
Bayes Theorem; Computer Simulation; Models, Statistical; Multivariate Analysis; Humans; Linear Models; Biometry/methods; Normal Distribution
6.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38768225

ABSTRACT

Conventional supervised learning usually operates under the premise that data are collected from the same underlying population. However, challenges may arise when integrating new data from different populations, resulting in a phenomenon known as dataset shift. This paper focuses on prior probability shift, where the distribution of the outcome varies across datasets but the conditional distribution of the features given the outcome remains the same. To tackle the challenges posed by such a shift, we propose an estimation algorithm that can efficiently combine information from multiple sources. Unlike existing methods that are restricted to discrete outcomes, the proposed approach accommodates both discrete and continuous outcomes. It also handles high-dimensional covariate vectors through variable selection using an adaptive least absolute shrinkage and selection operator penalty, producing efficient estimates that possess the oracle property. Moreover, a novel semiparametric likelihood ratio test is proposed to check the validity of the prior probability shift assumption by embedding the null conditional density function into Neyman's smooth alternatives (Neyman, 1937) and testing study-specific parameters. We demonstrate the effectiveness of our proposed method through extensive simulations and a real data example. The proposed methods serve as a useful addition to the repertoire of tools for dealing with dataset shift.
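For the special case of a discrete outcome, prior probability shift admits a simple closed-form correction via Bayes' rule. The sketch below illustrates that mechanism; it is a textbook label-shift adjustment, not the paper's semiparametric estimator, which also covers continuous outcomes:

```python
import numpy as np

def adjust_posteriors(post_old, prior_old, prior_new):
    """Recalibrate p(y|x) when only the outcome prior changes.

    Under prior probability shift, p(x|y) is shared across datasets, so
    p_new(y|x) is proportional to p_old(y|x) * p_new(y) / p_old(y).
    """
    w = np.asarray(prior_new) / np.asarray(prior_old)
    post = post_old * w                          # reweight each class column
    return post / post.sum(axis=1, keepdims=True)

# Example: a classifier trained where classes are balanced (50/50),
# deployed where class 1 has prevalence 0.9.
post_old = np.array([[0.7, 0.3], [0.4, 0.6]])
print(adjust_posteriors(post_old, prior_old=[0.5, 0.5], prior_new=[0.1, 0.9]))
```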


Subject(s)
Algorithms; Computer Simulation; Models, Statistical; Probability; Humans; Likelihood Functions; Biometry/methods; Data Interpretation, Statistical; Supervised Machine Learning
7.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38742906

ABSTRACT

Semicompeting risks refer to the phenomenon that the terminal event (such as death) can censor the nonterminal event (such as disease progression) but not vice versa. The treatment effect on the terminal event can be delivered either directly following the treatment or indirectly through the nonterminal event. We consider 2 strategies to decompose the total effect into a direct effect and an indirect effect under the framework of mediation analysis in completely randomized experiments, by adjusting the prevalence and hazard of nonterminal events, respectively. They require slightly different assumptions on cross-world quantities to achieve identifiability. We establish asymptotic properties for the estimated counterfactual cumulative incidences and decomposed treatment effects. We illustrate the subtle difference between these 2 decompositions through simulation studies and 2 real-data applications presented in the Supplementary Materials.


Subject(s)
Computer Simulation; Humans; Models, Statistical; Risk; Randomized Controlled Trials as Topic/statistics & numerical data; Mediation Analysis; Treatment Outcome; Biometry/methods
8.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38742907

ABSTRACT

We propose a new nonparametric conditional independence test for a scalar response and a functional covariate over a continuum of quantile levels. We build a Cramér-von Mises-type test statistic based on an empirical process indexed by random projections of the functional covariate, effectively avoiding the "curse of dimensionality" under the projected hypothesis, which is almost surely equivalent to the null hypothesis. The asymptotic null distribution of the proposed test statistic is obtained under mild assumptions. The asymptotic global and local power properties of our test statistic are then investigated. In particular, we show that the statistic can detect a broad class of local alternatives converging to the null at the parametric rate. Additionally, we recommend a simple multiplier bootstrap approach for estimating the critical values. The finite-sample performance of our statistic is examined through several Monte Carlo simulation experiments. Finally, an analysis of an EEG dataset is used to show the utility and versatility of our proposed test statistic.


Subject(s)
Computer Simulation; Models, Statistical; Monte Carlo Method; Humans; Electroencephalography/statistics & numerical data; Data Interpretation, Statistical; Biometry/methods; Statistics, Nonparametric
9.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38919141

ABSTRACT

Observational studies are frequently used to estimate the effect of an exposure or treatment on an outcome. To obtain an unbiased estimate of the treatment effect, it is crucial to measure the exposure accurately. A common type of exposure misclassification is recall bias, which occurs in retrospective cohort studies when study subjects inaccurately recall their past exposure. Particularly challenging is differential recall bias in the context of self-reported binary exposures, where the bias may be directional rather than random and its extent may vary according to the outcomes experienced. This paper makes several contributions: (1) it establishes bounds for the average treatment effect even when a validation study is not available; (2) it proposes multiple estimation methods based on various strategies, each predicated on different assumptions; and (3) it suggests a sensitivity analysis technique to assess the robustness of the causal conclusion, incorporating insights from prior research. The effectiveness of these methods is demonstrated through simulation studies that explore various model misspecification scenarios. The approaches are then applied to investigate the effect of childhood physical abuse on mental health in adulthood.


Subject(s)
Bias; Mental Recall; Observational Studies as Topic/statistics & numerical data; Humans; Computer Simulation; Treatment Outcome; Child; Models, Statistical; Adult; Biometry/methods
10.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38884127

ABSTRACT

The marginal structural quantile model (MSQM) provides a unique lens to understand the causal effect of a time-varying treatment on the full distribution of potential outcomes. Under the semiparametric framework, we derive the efficient influence function for the MSQM, from which a new doubly robust estimator is proposed for point estimation and inference. We show that the doubly robust estimator is consistent if either the model for treatment assignment or the model for the potential outcome distributions is correctly specified, and is semiparametric efficient if both models are correct. To implement the doubly robust MSQM estimator, we propose to solve a smoothed estimating equation to facilitate efficient computation of the point and variance estimates. In addition, we develop a confounding function approach to investigate the sensitivity of several MSQM estimators when the sequential ignorability assumption is violated. Extensive simulations are conducted to examine the finite-sample performance of the proposed methods. We apply the proposed methods to Yale New Haven Health System electronic health record data to study the effect of antihypertensive medications on patients with severe hypertension, and assess the robustness of the findings to unmeasured baseline and time-varying confounding.
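The smoothing device is the easiest piece to isolate. A minimal sketch, assuming a normal-CDF kernel and an unweighted single-sample estimating equation; the paper's estimator additionally carries the treatment and outcome models and their weights:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def smoothed_quantile(y, p, h=None):
    """Solve the kernel-smoothed quantile estimating equation
    (1/n) * sum Phi((q - y_i)/h) - p = 0 for q.

    Replacing the indicator 1{y <= q} with a smooth normal CDF makes the
    estimating function differentiable, easing root-finding and variance
    estimation."""
    y = np.asarray(y, float)
    if h is None:
        h = 1.06 * y.std() * len(y) ** (-0.2)  # rule-of-thumb bandwidth
    f = lambda q: norm.cdf((q - y) / h).mean() - p
    return brentq(f, y.min() - 10 * h, y.max() + 10 * h)

rng = np.random.default_rng(2)
y = rng.normal(size=2000)
print(smoothed_quantile(y, 0.75), np.quantile(y, 0.75))  # both near 0.674
```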


Subject(s)
Computer Simulation; Hypertension/drug therapy; Models, Statistical; Humans; Antihypertensive Agents/therapeutic use; Electronic Health Records/statistics & numerical data; Biometry/methods
11.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38819309

ABSTRACT

Doubly adaptive biased coin design (DBCD), a response-adaptive randomization scheme, aims to skew subject assignment probabilities based on accrued responses for ethical considerations. Recent years have seen substantial advances in understanding DBCD's theoretical properties, assuming correct model specification for the responses. However, concerns have been raised about the impact of model misspecification on its design and analysis. In this paper, we assess the robustness to both design model misspecification and analysis model misspecification under DBCD. On one hand, we confirm that the consistency and asymptotic normality of the allocation proportions can be preserved, even when the responses follow a distribution other than the one imposed by the design model during the implementation of DBCD. On the other hand, we extensively investigate three commonly used linear regression models for estimating and inferring the treatment effect, namely difference-in-means, analysis of covariance (ANCOVA) I, and ANCOVA II. By allowing these regression models to be arbitrarily misspecified, thereby not reflecting the true data generating process, we derive the consistency and asymptotic normality of the treatment effect estimators evaluated from the three models. The asymptotic properties show that the ANCOVA II model, which takes covariate-by-treatment interaction terms into account, yields the most efficient estimator. These results can provide theoretical support for using DBCD in scenarios involving model misspecification, thereby promoting the widespread application of this randomization procedure.
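For intuition, a sketch of one widely used DBCD allocation function (the Hu-Zhang family), with a Neyman-type target estimated from accrued responses; the burn-in size, gamma, and response distributions are illustrative assumptions:

```python
import numpy as np

def hu_zhang_prob(x, rho, gamma=2.0):
    """Hu-Zhang DBCD allocation function: probability of assigning the next
    subject to treatment 1, given the current proportion x on treatment 1
    and the estimated target allocation rho; gamma tunes the skewing force."""
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

# Neyman target rho = s1 / (s1 + s2), re-estimated as responses accrue
# (s1, s2 are the sample SDs in each arm; arm 1 is more variable here).
rng = np.random.default_rng(3)
y1 = list(rng.normal(0, 2.0, 10))   # burn-in of 10 subjects per arm
y2 = list(rng.normal(0, 1.0, 10))
for _ in range(200):
    rho = np.std(y1) / (np.std(y1) + np.std(y2))
    p1 = hu_zhang_prob(len(y1) / (len(y1) + len(y2)), rho)
    if rng.uniform() < p1:
        y1.append(rng.normal(0, 2.0))
    else:
        y2.append(rng.normal(0, 1.0))
print(len(y1) / (len(y1) + len(y2)))  # drifts toward the target, about 2/3
```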


Subject(s)
Models, Statistical; Random Allocation; Humans; Computer Simulation; Randomized Controlled Trials as Topic/statistics & numerical data; Linear Models; Biometry/methods; Data Interpretation, Statistical; Bias; Analysis of Variance; Research Design
12.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38647000

ABSTRACT

Fish growth models are crucial for fisheries stock assessments and are commonly estimated using fish length-at-age data. These data are widely collected using length-stratified age sampling (LSAS), a cost-effective two-phase response-selective sampling method. The data may also contain age measurement errors (MEs). We propose a methodology that accounts for both LSAS and age MEs to accurately estimate fish growth. The proposed methods use the empirical proportion likelihood methodology for LSAS and the structural errors-in-variables methodology for age MEs. We provide a measure of uncertainty for the parameter estimates and standardized residuals for model validation. To model the age distribution, we employ a continuation ratio-logit model that is consistent with the random nature of the true age distribution. We also apply a discretization approach for the age and length distributions, which significantly improves computational efficiency and is consistent with the discrete age and length data typically encountered in practice. Our simulation study shows that neglecting age MEs can lead to significant bias in growth estimation, even when the age MEs are small but non-negligible. The new approach, however, performs well regardless of the magnitude of the age MEs and accurately estimates the standard errors of the parameter estimators. A real data analysis demonstrates the effectiveness of the proposed model validation device. Computer code implementing the methodology is provided.
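For orientation, the sketch below fits the von Bertalanffy curve, the workhorse growth model in fisheries, by naive nonlinear least squares on simulated data. The growth-model choice is our assumption (the abstract does not name one), and the naive fit deliberately ignores LSAS and age MEs; it is the baseline whose bias the proposed methodology is designed to remove:

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(age, L_inf, k, t0):
    """Von Bertalanffy growth curve: expected length at a given age."""
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

# Simulated length-at-age data with the true parameters (80, 0.25, -0.5).
rng = np.random.default_rng(4)
age = rng.uniform(1, 15, 400)
length = von_bertalanffy(age, 80.0, 0.25, -0.5) + rng.normal(0, 3.0, 400)
params, _ = curve_fit(von_bertalanffy, age, length, p0=[70.0, 0.2, 0.0])
print(dict(zip(["L_inf", "k", "t0"], params.round(2))))
```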


Subject(s)
Computer Simulation; Fishes/growth & development; Animals; Models, Statistical; Fisheries/statistics & numerical data; Biometry/methods; Likelihood Functions; Bias
13.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38708763

ABSTRACT

Time-series data collected from a network of random variables are useful for identifying temporal pathways among the network nodes. Observed measurements may contain multiple sources of signals and noises, including Gaussian signals of interest and non-Gaussian noises such as artifacts, structured noise, and other unobserved factors (eg, genetic risk factors, disease susceptibility). Existing methods, including vector autoregression (VAR) and dynamic causal modeling, do not account for unobserved non-Gaussian components. Furthermore, existing methods cannot effectively distinguish contemporaneous relationships from temporal relations. In this work, we propose a novel method to identify latent temporal pathways using time-series biomarker data collected from multiple subjects. The model adjusts for the non-Gaussian components and separates the temporal network from the contemporaneous network. Specifically, an independent component analysis (ICA) is used to extract the unobserved non-Gaussian components, and the residuals are used to estimate the contemporaneous and temporal networks among the node variables based on the method of moments. The algorithm is fast and can easily scale up. We derive the identifiability and the asymptotic properties of the temporal and contemporaneous networks. We demonstrate the superior performance of our method through extensive simulations and an application to a study of attention-deficit/hyperactivity disorder (ADHD), where we analyze the temporal relationships between brain regional biomarkers. We find that temporal network edges span different brain regions, whereas most contemporaneous network edges are bilateral between the same regions and belong to a subset of the functional connectivity network.
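A loose analogue of the pipeline, assuming FastICA for the non-Gaussian extraction, excess kurtosis as the non-Gaussianity flag, and ordinary least squares (rather than the paper's method-of-moments estimator) for the VAR(1) temporal network:

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import kurtosis

rng = np.random.default_rng(5)
T, p = 1000, 5

# Simulate a Gaussian VAR(1) signal contaminated by one heavy-tailed component.
A = 0.3 * np.eye(p) + 0.1 * np.diag(np.ones(p - 1), k=1)  # temporal network
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.normal(0, 1, p)
X += np.outer(rng.standard_t(df=2, size=T), rng.normal(0, 1, p))  # artifact

# Step 1: estimate independent components and zero out the most non-Gaussian one.
ica = FastICA(n_components=p, random_state=0)
S = ica.fit_transform(X)
S[:, np.argmax(np.abs(kurtosis(S, axis=0)))] = 0.0  # excess kurtosis flags it
X_clean = ica.inverse_transform(S)

# Step 2: temporal network via least-squares VAR(1) on the cleaned series.
Y, Z = X_clean[1:], X_clean[:-1]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T

# Step 3: contemporaneous network from the correlation of the VAR residuals.
contemp = np.corrcoef((Y - Z @ A_hat.T).T)
print(np.round(A_hat, 2))
```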


Subject(s)
Algorithms; Biomarkers/analysis; Computer Simulation; Models, Statistical; Humans; Normal Distribution; Attention Deficit Disorder with Hyperactivity; Time Factors; Biometry/methods
14.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38801258

ABSTRACT

In comparative studies, covariate balance and sequential allocation schemes have attracted growing academic interest. Although many theoretically justified adaptive randomization methods achieve covariate balance, they often allocate patients in pairs or groups. To better meet practical requirements in which clinicians cannot wait for other participants before assigning the current patient, for economic or ethical reasons, we propose a method that randomizes patients individually and sequentially. The proposed method conceptually separates the covariate imbalance, measured by the newly proposed modified Mahalanobis distance, from the marginal imbalance, that is, the sample size difference between the 2 groups, and it minimizes them with an explicit priority order. Compared with existing sequential randomization methods, the proposed method achieves the best possible covariate balance while maintaining the marginal balance directly, offering more control over the randomization process. We demonstrate the superior performance of the proposed method through a wide range of simulation studies and a real data analysis, and we also establish theoretical guarantees for the proposed method in terms of both the convergence of the imbalance measure and the subsequent treatment effect estimation.
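A generic sketch of the individual, sequential balancing idea, assuming a plain Mahalanobis distance between arm-wise covariate means and a fixed biased-coin probability q; the paper's modified Mahalanobis distance and its explicit covariate/marginal priority ordering are not reproduced here:

```python
import numpy as np

def imbalance(xA, xB, cov_inv):
    """Mahalanobis distance between the two arms' covariate means."""
    d = np.mean(xA, axis=0) - np.mean(xB, axis=0)
    return float(d @ cov_inv @ d)

def assign(x_new, xA, xB, cov_inv, rng, q=0.8):
    """Biased coin: favor (with probability q) the arm whose choice
    yields the smaller covariate imbalance for the new patient."""
    dA = imbalance(np.vstack([xA, x_new]), xB, cov_inv)
    dB = imbalance(xA, np.vstack([xB, x_new]), cov_inv)
    pA = 0.5 if dA == dB else (q if dA < dB else 1 - q)
    return "A" if rng.uniform() < pA else "B"

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 3))
cov_inv = np.linalg.inv(np.cov(X.T))
xA, xB = X[0:1].copy(), X[1:2].copy()   # seed one patient per arm
for x in X[2:]:
    if assign(x, xA, xB, cov_inv, rng) == "A":
        xA = np.vstack([xA, x])
    else:
        xB = np.vstack([xB, x])
print(len(xA), len(xB), imbalance(xA, xB, cov_inv))
```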


Subject(s)
Computer Simulation; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods; Humans; Biometry/methods; Models, Statistical; Data Interpretation, Statistical; Random Allocation; Sample Size; Algorithms
15.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38819307

ABSTRACT

To infer the treatment effect for a single treated unit using panel data, synthetic control (SC) methods construct a linear combination of control units' outcomes that mimics the treated unit's pre-treatment outcome trajectory. This linear combination is subsequently used to impute the counterfactual outcomes the treated unit would have had without treatment in the post-treatment period, and thereby to estimate the treatment effect. Existing SC methods rely on correctly modeling certain aspects of the counterfactual outcome-generating mechanism and may require near-perfect matching of the pre-treatment trajectory. Inspired by proximal causal inference, we obtain two novel nonparametric identifying formulas for the average treatment effect for the treated unit: one is based on weighting, and the other combines models for the counterfactual outcome and the weighting function. We introduce the concept of covariate shift to SCs to obtain these identification results conditional on the treatment assignment. We also develop two treatment effect estimators based on these two formulas and the generalized method of moments. One new estimator is doubly robust: it is consistent and asymptotically normal if at least one of the outcome and weighting models is correctly specified. We demonstrate the performance of the methods via simulations and apply them to evaluate the effectiveness of a pneumococcal conjugate vaccine on the risk of all-cause pneumonia in Brazil.
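For context, a sketch of the classic synthetic-control weight construction (nonnegative weights summing to 1, fit to the pre-treatment period); the proximal and doubly robust estimators proposed in the paper build on, but differ from, this baseline:

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(y_treated_pre, Y_controls_pre):
    """Classic synthetic-control weights: nonnegative, summing to 1,
    minimizing pre-treatment squared error."""
    J = Y_controls_pre.shape[1]
    obj = lambda w: np.sum((y_treated_pre - Y_controls_pre @ w) ** 2)
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(obj, np.full(J, 1.0 / J), bounds=[(0, 1)] * J,
                   constraints=cons, method="SLSQP")
    return res.x

# Toy example: the treated unit is truly a 0.5/0.3/0.2 mix of 3 controls.
rng = np.random.default_rng(7)
T0, J = 30, 10
Y0 = rng.normal(size=(T0, J)).cumsum(axis=0)   # control pre-period outcomes
w_true = np.array([0.5, 0.3, 0.2] + [0.0] * (J - 3))
y1 = Y0 @ w_true + rng.normal(0, 0.1, T0)      # treated unit's pre-period
print(np.round(sc_weights(y1, Y0), 2))
```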


Subject(s)
Computer Simulation; Models, Statistical; Pneumococcal Vaccines/therapeutic use; Pneumococcal Vaccines/administration & dosage; Humans; Treatment Outcome; Biometry/methods; Data Interpretation, Statistical
16.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837900

ABSTRACT

Randomization-based inference using the Fisher randomization test allows for the computation of Fisher-exact P-values, making it an attractive option for the analysis of small, randomized experiments with non-normal outcomes. Two common test statistics used to perform Fisher randomization tests are the difference-in-means between the treatment and control groups and the covariate-adjusted version of the difference-in-means using analysis of covariance. Modern computing allows for fast computation of the Fisher-exact P-value, but confidence intervals have typically been obtained by inverting the Fisher randomization test over a range of possible effect sizes. The test inversion procedure is computationally expensive, limiting the use of randomization-based inference in applied work. A recent paper by Zhu and Liu developed a closed-form expression for the randomization-based confidence interval using the difference-in-means statistic. We develop an important extension of Zhu and Liu to obtain a closed-form expression for the randomization-based covariate-adjusted confidence interval, and we give practitioners a sufficient condition, checkable from the observed data, that guarantees that these confidence intervals have correct coverage. Simulations show that our procedure generates randomization-based covariate-adjusted confidence intervals that are robust to non-normality and that can be calculated in nearly the same time as the Fisher-exact P-value itself, thus removing the computational barrier to performing randomization-based inference when adjusting for covariates. We also demonstrate our method on a re-analysis of phase I clinical trial data.
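The Fisher-exact P-value itself is straightforward to compute by Monte Carlo; the sketch below does so for the unadjusted difference-in-means, approximating complete randomization by permutation. It is the interval construction, which the paper replaces with a closed form, that becomes expensive when done by inverting this test over a grid of effect sizes:

```python
import numpy as np

def frt_pvalue(y, z, n_perm=10000, seed=8):
    """Fisher randomization test of the sharp null of no effect, using the
    difference-in-means statistic and re-randomized assignments."""
    rng = np.random.default_rng(seed)
    obs = y[z == 1].mean() - y[z == 0].mean()
    count = 0
    for _ in range(n_perm):
        zp = rng.permutation(z)               # re-randomize under the sharp null
        stat = y[zp == 1].mean() - y[zp == 0].mean()
        count += abs(stat) >= abs(obs)
    return (count + 1) / (n_perm + 1)         # add-one keeps the P-value valid

rng = np.random.default_rng(9)
z = rng.permutation([1] * 15 + [0] * 15)      # small two-arm experiment
y = rng.exponential(1.0, 30) + 0.8 * z        # skewed outcomes, true effect 0.8
print(frt_pvalue(y, z))
```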


Subject(s)
Computer Simulation; Confidence Intervals; Humans; Biometry/methods; Models, Statistical; Data Interpretation, Statistical; Random Allocation; Randomized Controlled Trials as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/methods
17.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38861372

ABSTRACT

In many randomized placebo-controlled trials with a biomarker-defined subgroup, it is believed that this subgroup has the same or a higher treatment effect compared with its complement. These subgroups are often referred to as the biomarker-positive and biomarker-negative subgroups. Most biomarker-stratified pivotal trials aim to demonstrate a significant treatment effect either in the biomarker-positive subgroup or in the overall population. A major shortcoming of this approach is that the treatment can be declared effective in the overall population even though it has no effect in the biomarker-negative subgroup. We use the isotonic assumption about the treatment effects in the two subgroups to construct an efficient test for a treatment effect in both the biomarker-positive and biomarker-negative subgroups. A substantial reduction in the required sample size compared with existing methods makes evaluating the treatment effect in both subgroups feasible in pivotal trials, especially when the prevalence of the biomarker-positive subgroup is less than 0.5.


Subject(s)
Biomarkers/analysis; Biomarkers/blood; Randomized Controlled Trials as Topic/statistics & numerical data; Humans; Sample Size; Treatment Outcome; Biometry/methods; Computer Simulation; Models, Statistical
18.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38804219

ABSTRACT

Sequential multiple assignment randomized trials (SMARTs) are the gold standard for estimating optimal dynamic treatment regimes (DTRs), but they are costly and require a large sample size. We introduce the multi-stage augmented Q-learning estimator (MAQE) to improve the efficiency of estimating optimal DTRs by augmenting SMART data with observational data. Our motivating example comes from the Back Pain Consortium, where one of the overarching aims is to learn how to tailor treatments for chronic low back pain to individual patient phenotypes, knowledge that is currently lacking clinically. The Consortium-wide collaborative SMART and the observational studies within the Consortium collect data on the same participant phenotypes, treatments, and outcomes at multiple time points, which can easily be integrated. Previously published single-stage augmentation methods for integrating trial and observational study (OS) data were adapted to estimate optimal DTRs from SMARTs using Q-learning. Simulation studies show that the MAQE, which integrates phenotype, treatment, and outcome information from multiple studies over multiple time points, estimates the optimal DTR more accurately and has a higher average value than a comparable Q-learning estimator without augmentation. We demonstrate that this improvement is robust to a wide range of trial and OS sample sizes, the addition of noise variables, and effect sizes.
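As context, a minimal two-stage Q-learning pass with linear Q-functions on simulated SMART-like data, without the observational-data augmentation that defines the MAQE; the data-generating model and all names are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 5000
x1 = rng.normal(size=n)                 # stage-1 covariate
a1 = rng.integers(0, 2, n)              # randomized stage-1 treatment
x2 = 0.5 * x1 + rng.normal(0, 1, n)     # stage-2 covariate
a2 = rng.integers(0, 2, n)              # randomized stage-2 treatment
y = x1 + x2 + a1 * (0.5 - x1) + a2 * (0.8 - x2) + rng.normal(0, 1, n)

# Backward induction, step 1: fit the stage-2 Q-function.
X2 = np.column_stack([x1, a1, x2, a2, x2 * a2])
q2 = LinearRegression().fit(X2, y)

def stage2_value(x1v, a1v, x2v):
    """Pseudo-outcome: predicted y under the better of the two stage-2 actions."""
    preds = [q2.predict(np.column_stack([x1v, a1v, x2v,
                                         np.full_like(x2v, a), x2v * a]))
             for a in (0, 1)]
    return np.maximum(*preds)

# Step 2: fit the stage-1 Q-function to the stage-2 pseudo-outcome.
X1 = np.column_stack([x1, a1, x1 * a1])
q1 = LinearRegression().fit(X1, stage2_value(x1, a1, x2))
print(dict(zip(["x1", "a1", "x1*a1"], q1.coef_.round(2))))
# An a1 coefficient near 0.5 and interaction near -1 recover the stage-1 rule
# "treat when 0.5 - x1 > 0"; the stage-2 rule is "treat when 0.8 - x2 > 0".
```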


Subject(s)
Computer Simulation; Low Back Pain/therapy; Observational Studies as Topic/statistics & numerical data; Randomized Controlled Trials as Topic/statistics & numerical data; Humans; Sample Size; Treatment Outcome; Models, Statistical; Biometry/methods
19.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38708764

ABSTRACT

When studying treatment effects on time-to-event outcomes, it is common that some individuals never experience the failure event, which suggests that they have been cured. However, the cure status may not be observed due to censoring, which makes it challenging to define treatment effects. Current methods mainly focus on estimating model parameters in various cure models, ultimately leading to a lack of causal interpretation. To address this issue, we propose 2 causal estimands, the timewise risk difference and the mean survival time difference in the always-uncured, based on principal stratification, as a complement to the treatment effect on cure rates. These estimands allow us to study the treatment effects on failure times in the always-uncured subpopulation. We show that, using a substitutional variable for the potential cure status, these 2 estimands are identifiable under an ignorable treatment assignment mechanism. We also provide estimation methods using mixture cure models. We applied our approach to an observational study that compared the leukemia-free survival rates of different transplantation types for curing acute lymphoblastic leukemia. Our proposed approach yielded insightful results that can be used to inform future treatment decisions.
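A bare-bones mixture cure likelihood, assuming a logistic (intercept-only) uncured fraction and an exponential latency distribution fitted by direct maximum likelihood; the paper's estimands and estimation procedure are richer than this sketch:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def negloglik(theta, time, event):
    """Mixture cure model with exponential latency: a fraction pi = expit(b)
    is uncured with hazard lam = exp(g); the rest are cured (never fail)."""
    b, g = theta
    pi, lam = expit(b), np.exp(g)
    log_f = np.log(pi) + np.log(lam) - lam * time      # uncured, event observed
    log_s = np.log(1 - pi + pi * np.exp(-lam * time))  # cured or not-yet-failed
    return -np.sum(event * log_f + (1 - event) * log_s)

# Simulated data with true uncured fraction 0.6 and latency rate 0.8.
rng = np.random.default_rng(11)
n, pi_true, lam_true = 2000, 0.6, 0.8
cured = rng.uniform(size=n) > pi_true
t_latent = np.where(cured, np.inf, rng.exponential(1 / lam_true, n))
c = rng.exponential(5.0, n)                            # censoring times
time, event = np.minimum(t_latent, c), (t_latent <= c).astype(int)
res = minimize(negloglik, x0=[0.0, 0.0], args=(time, event), method="Nelder-Mead")
print("pi_hat =", round(expit(res.x[0]), 3), "lam_hat =", round(np.exp(res.x[1]), 3))
```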


Subject(s)
Models, Statistical; Precursor Cell Lymphoblastic Leukemia-Lymphoma/mortality; Precursor Cell Lymphoblastic Leukemia-Lymphoma/therapy; Precursor Cell Lymphoblastic Leukemia-Lymphoma/drug therapy; Humans; Causality; Biometry/methods; Treatment Outcome; Computer Simulation; Disease-Free Survival; Survival Analysis
20.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38837902

ABSTRACT

In mobile health, tailoring interventions for real-time delivery is of paramount importance. Micro-randomized trials have emerged as the "gold-standard" methodology for developing such interventions. Analyzing data from these trials provides insights into the efficacy of interventions and their potential moderation by specific covariates. The "causal excursion effect," a novel class of causal estimand, addresses these inquiries. Yet existing research mainly focuses on continuous or binary data, leaving count data largely unexplored. The current work is motivated by the Drink Less micro-randomized trial from the UK, which focuses on a zero-inflated proximal outcome, namely the number of screen views in the hour following each intervention decision point. Specifically, we revisit the concept of the causal excursion effect for zero-inflated count outcomes and introduce novel estimation approaches that incorporate nonparametric techniques. Bidirectional asymptotics are established for the proposed estimators. Simulation studies are conducted to evaluate the performance of the proposed methods. As an illustration, we also apply these methods to the Drink Less trial data.
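To make the outcome structure concrete, a sketch fitting a plain zero-inflated Poisson regression to simulated micro-randomized-style data with statsmodels; this illustrates the zero-inflated count outcome only and is not the paper's causal excursion estimator:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(12)
n = 5000
a = rng.integers(0, 2, n)              # randomized intervention indicator
x = rng.normal(size=n)                 # a context/moderator variable

# Zero-inflated outcome: structural zeros plus a Poisson count that the
# intervention shifts multiplicatively (true count-part effect of a is 0.3).
p_zero = 0.4
mu = np.exp(0.5 + 0.3 * a + 0.2 * x)
y = np.where(rng.uniform(size=n) < p_zero, 0, rng.poisson(mu))

X = sm.add_constant(np.column_stack([a, x]))
fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False)
print(fit.params.round(2))             # count-part coefficient on a near 0.3
```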


Subject(s)
Computer Simulation; Telemedicine/statistics & numerical data; Humans; Statistics, Nonparametric; Causality; Randomized Controlled Trials as Topic; Models, Statistical; Biometry/methods; Data Interpretation, Statistical