Results 1-20 of 81
1.
Am J Epidemiol; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38717330

ABSTRACT

Quantitative bias analysis (QBA) permits assessment of the expected impact of various imperfections of the available data on the results and conclusions of a particular real-world study. This article extends QBA methodology to multivariable time-to-event analyses with right-censored endpoints, possibly including time-varying exposures or covariates. The proposed approach employs data-driven simulations, which preserve important features of the data at hand while offering flexibility in controlling the parameters and assumptions that may affect the results. First, the steps required to perform data-driven simulations are described, and then two examples of real-world time-to-event analyses illustrate their implementation and the insights they may offer. The first example focuses on the omission of an important time-invariant predictor of the outcome in a prognostic study of cancer mortality, and permits separating the expected impact of confounding bias from non-collapsibility. The second example assesses how imprecise timing of an interval-censored event - ascertained only at sparse times of clinic visits - affects its estimated association with a time-varying drug exposure. The simulation results also provide a basis for comparing the performance of two alternative strategies for imputing the unknown event times in this setting. The R scripts that permit the reproduction of our examples are provided.
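As one way to picture the data-driven simulation step described above (a minimal sketch, not the paper's own R scripts, which are referenced in the abstract): fit a parametric survival model to the data at hand, then repeatedly simulate new right-censored outcomes from the fitted model so that simulated datasets preserve the observed covariate structure. The data frame `d`, its columns `time`, `status`, and `x`, and the uniform censoring process are all assumptions made for illustration.

```r
## Minimal data-driven simulation for a time-to-event analysis.
## Assumes a data frame `d` with columns time, status (1 = event), and x.
library(survival)

fit <- survreg(Surv(time, status) ~ x, data = d, dist = "weibull")

simulate_one <- function() {
  lp    <- predict(fit, type = "lp")   # linear predictor (log-time scale)
  shape <- 1 / fit$scale               # survreg scale -> Weibull shape
  t_new <- rweibull(nrow(d), shape = shape, scale = exp(lp))
  c_new <- runif(nrow(d), 0, quantile(t_new, 0.9))  # assumed censoring
  data.frame(x      = d$x,
             time   = pmin(t_new, c_new),
             status = as.numeric(t_new <= c_new))
}

## Re-analyse many simulated datasets, e.g. with a deliberately
## misspecified model, to quantify the impact of a known imperfection.
est <- replicate(500,
  coef(coxph(Surv(time, status) ~ x, data = simulate_one()))["x"])
summary(est)
```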

2.
Pharm Stat; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38631678

ABSTRACT

Accurate frequentist performance of a method is desirable in confirmatory clinical trials, but it is not sufficient on its own to justify the use of a missing data method. Reference-based conditional mean imputation, with variance estimation justified solely by its frequentist performance, has the surprising and undesirable property that the estimated variance becomes smaller as the number of missing observations grows; this is because, under jump-to-reference, it effectively forces the true treatment effect to be exactly zero for patients with missing data.

3.
Stat Med; 43(11): 2083-2095, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38487976

ABSTRACT

To obtain valid inference following stratified randomisation, treatment effects should be estimated with adjustment for stratification variables. Stratification sometimes requires categorisation of a continuous prognostic variable (eg, age), which raises the question: should adjustment be based on randomisation categories or underlying continuous values? In practice, adjustment for randomisation categories is more common. We reviewed trials published in general medical journals and found none of the 32 trials that stratified randomisation based on a continuous variable adjusted for continuous values in the primary analysis. Using data simulation, this article evaluates the performance of different adjustment strategies for continuous and binary outcomes where the covariate-outcome relationship (via the link function) was either linear or non-linear. Given the utility of covariate adjustment for addressing missing data, we also considered settings with complete or missing outcome data. Analysis methods included linear or logistic regression with no adjustment for the stratification variable, adjustment for randomisation categories, or adjustment for continuous values assuming a linear covariate-outcome relationship or allowing for non-linearity using fractional polynomials or restricted cubic splines. Unadjusted analysis performed poorly throughout. Adjustment approaches that misspecified the underlying covariate-outcome relationship were less powerful and, alarmingly, biased in settings where the stratification variable predicted missing outcome data. Adjustment for randomisation categories tends to involve the highest degree of misspecification, and so should be avoided in practice. To guard against misspecification, we recommend use of flexible approaches such as fractional polynomials and restricted cubic splines when adjusting for continuous stratification variables in randomised trials.


Subject(s)
Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/statistics & numerical data; Computer Simulation; Linear Models; Data Interpretation, Statistical; Logistic Models; Random Allocation
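The adjustment strategies compared in entry 3 are straightforward to code; below is a minimal sketch under assumed names (`trial` with outcome `y`, arm `treat`, and stratification variable `age`; the cut points and spline degrees of freedom are arbitrary), using natural cubic splines from the base `splines` package in place of restricted cubic splines.

```r
## Adjusting for a continuous stratification variable (e.g. age).
## Assumes a data frame `trial` with columns y, treat (0/1), and age.
library(splines)

# Adjustment for the randomisation categories (common, but most
# prone to misspecification, per the abstract)
fit_cat <- lm(y ~ treat + cut(age, c(-Inf, 50, 65, Inf)), data = trial)

# Adjustment for the continuous value, assuming linearity
fit_lin <- lm(y ~ treat + age, data = trial)

# Flexible adjustment with a natural cubic spline (standing in for
# the recommended restricted cubic splines)
fit_ns <- lm(y ~ treat + ns(age, df = 3), data = trial)

summary(fit_ns)$coefficients["treat", ]  # adjusted treatment effect
```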
5.
BMJ; 384: q173, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38262675
7.
Biom J; 66(1): e2200222, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36737675

ABSTRACT

Although new biostatistical methods are published at a very high rate, many of these developments are not trustworthy enough to be adopted by the scientific community. We propose a framework for thinking about how a piece of methodological work contributes to the evidence base for a method. Similar to the well-known phases of clinical research in drug development, we propose four phases of methodological research. These cover (I) proposing a new methodological idea while providing, for example, logical reasoning or proofs; (II) providing empirical evidence, first in a narrow target setting; then (III) in an extended range of settings and for various outcomes, accompanied by appropriate application examples; and (IV) investigations that establish a method as sufficiently well understood to know when it is preferred over others and when it is not, that is, its pitfalls. We suggest basic definitions of the four phases to provoke thought and discussion rather than to devise an unambiguous classification of studies into phases. Too many methodological developments finish before phases III and IV; we give two examples, with references. Our concept shifts the emphasis toward studies in phases III and IV, that is, carefully planned method comparison studies and studies that explore the empirical properties of existing methods in a wider range of problems.


Subject(s)
Biostatistics; Research Design
8.
Biom J; 66(1): e2300085, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37823668

ABSTRACT

For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but only rarely appropriate to use.


Subject(s)
Research; Data Interpretation, Statistical; Computer Simulation
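To make the contrast in entry 8 concrete, here is a hedged base-R sketch of the two simulation designs: regenerating complete data in every repetition versus fixing one complete dataset and re-simulating only the missingness indicators. The sample size, models, and MAR mechanism are illustrative assumptions, not the authors' code.

```r
## Two ways of generating partially observed data in a simulation study.
set.seed(1)
n <- 200

make_complete <- function(n) {
  x <- rnorm(n)
  y <- 1 + 0.5 * x + rnorm(n)
  data.frame(x, y)
}

impose_mar <- function(d) {
  p_miss <- plogis(-1 + d$x)          # MAR: missingness in y depends on x
  d$y[runif(nrow(d)) < p_miss] <- NA
  d
}

## Design A (usual): new complete data AND new missingness each repetition
rep_a <- function() impose_mar(make_complete(n))

## Design B (superficially attractive, rarely appropriate per the
## abstract): one fixed complete dataset, with only the missingness
## indicators re-simulated
fixed <- make_complete(n)
rep_b <- function() impose_mar(fixed)
```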
9.
Int J Epidemiol; 53(1), 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37833853

ABSTRACT

Simulation studies are powerful tools in epidemiology and biostatistics, but they can be hard to conduct successfully. Sometimes unexpected results are obtained. We offer advice on how to check a simulation study when this occurs, and how to design and conduct the study to give results that are easier to check. Simulation studies should be designed to include some settings in which answers are already known. They should be coded in stages, with data-generating mechanisms checked before simulated data are analysed. Results should be explored carefully; scatterplots of standard error estimates against point estimates are surprisingly powerful tools. Failed estimation and outlying estimates should be identified and dealt with by changing the data-generating mechanisms or by coding realistic hybrid analysis procedures. Finally, we give a series of ideas that have been useful to us in the past for checking unexpected results. Following our advice may help to prevent errors and to improve the quality of published simulation studies.


Subject(s)
Biostatistics; Humans; Monte Carlo Method; Computer Simulation
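The scatterplot diagnostic recommended in entry 9 takes only a few lines; a minimal sketch assuming a data frame `res` of per-repetition results with columns `est` and `se` (the names and outlier thresholds are ours):

```r
## Standard error estimates against point estimates, one point per
## simulation repetition. Clusters, gaps, or extreme points often
## reveal failed or degenerate estimation.
plot(res$est, res$se,
     xlab = "Point estimate", ylab = "Estimated standard error")
abline(v = mean(res$est), lty = 2)    # rough centre, for orientation

## Flag candidate outliers for inspection (thresholds are arbitrary)
suspect <- abs(scale(res$est)) > 4 | abs(scale(res$se)) > 4
which(suspect)
```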
10.
PLoS One; 18(12): e0292257, 2023.
Article in English | MEDLINE | ID: mdl-38096223

ABSTRACT

BACKGROUND: Patient and public involvement (PPI) in trials aims to enhance research by improving its relevance and transparency. Planning for statistical analysis begins at the design stage of a trial within the protocol and is refined and detailed in a Statistical Analysis Plan (SAP). While PPI is common in design and protocol development, it is less common within SAPs. This study aimed to reach consensus on the most important and relevant statistical analysis items within an SAP to involve patients and the public. METHODS: We developed a UK-based, two-round Delphi survey through an iterative consultation with public partners, statisticians, and trialists. The consultation process started with 55 items from international guidance for statistical analysis plans. We aimed to recruit at least 20 participants per key stakeholder group for inclusion in the final analysis of the Delphi survey. Participants were asked to vote on each item using a Likert scale from 1 to 9, where a rating of 1 to 3 was labelled as having 'limited importance'; 4 to 6 as 'important but not critical' and 7 to 9 as 'critical' to involve patients and the public. Results from the second round determined consensus on critical items for PPI. RESULTS: The consultation exercise led to the inclusion of 15 statistical items in the Delphi survey. We recruited 179 participants, of whom 72% (129: 36 statisticians, 29 patients or public partners, 25 clinical researchers or methodologists, 27 trial managers, and 12 PPI coordinators) completed both rounds. Participants were on average 48 years old, 60% were female, 84% were White, 64% were based in England and 84% had at least five years' experience in trials. Four items reached consensus regarding critical importance for patient and public involvement: presentation of results to trial participants; summary and presentation of harms; interpretation and presentation of findings in an academic setting; factors impacting how well a treatment works. No consensus was reached for the remaining 11 items. In general, the results were consistent across stakeholder groups. DISCUSSION: We identified four critical items to involve patients and the public in statistical analysis plans. The remaining 11 items did not reach consensus and need to be considered on a case-by-case basis, with most responders considering patient and public involvement important (but not critical). Our research provides a platform to enable focused future efforts to improve patient and public involvement in trials and enhance the relevance of statistical analyses to patients and the public.


Subject(s)
Patient Participation; Research Design; Humans; Female; Middle Aged; Male; Delphi Technique; Consensus; Patients
11.
Biom J; 65(8): e2300069, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37775940

ABSTRACT

The marginality principle guides analysts to avoid omitting lower-order terms from models in which higher-order terms are included as covariates. Lower-order terms are viewed as "marginal" to higher-order terms. We consider how this principle applies to three cases: regression models that may include the ratio of two measured variables; polynomial transformations of a measured variable; and factorial arrangements of defined interventions. For each case, we show that which terms or transformations are considered to be lower-order, and therefore marginal, depends on the scale of measurement, which is frequently arbitrary. Understanding the implications of this point leads to an intuitive understanding of the curse of dimensionality. We conclude that the marginality principle may be useful to analysts in some specific cases but caution against invoking it as a context-free recipe.


Subject(s)
Algorithms; Regression Analysis
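A one-line worked example of the scale-dependence point in entry 11 (our illustration, not taken from the paper): whether a ratio of two measured variables counts as a higher-order term depends on the scale, because a log transform turns the ratio into a difference of main effects.

```latex
% On the raw scale, x_1/x_2 behaves like a higher-order term in x_1
% and 1/x_2; on the log scale it is a difference of main effects, so
% marginality judgements change with an (often arbitrary) rescaling.
\[
  \log\!\left(\frac{x_1}{x_2}\right) \;=\; \log x_1 - \log x_2 .
\]
```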
12.
Stat Med; 42(27): 4917-4930, 2023 Nov 30.
Article in English | MEDLINE | ID: mdl-37767752

ABSTRACT

In network meta-analysis, studies evaluating multiple treatment comparisons are modeled simultaneously, and estimation is informed by a combination of direct and indirect evidence. Network meta-analysis relies on an assumption of consistency, meaning that direct and indirect evidence should agree for each treatment comparison. Here we propose new local and global tests for inconsistency and demonstrate their application to three example networks. Because inconsistency is a property of a loop of treatments in the network meta-analysis, we locate the local test in a loop. We define a model with one inconsistency parameter that can be interpreted as loop inconsistency. The model builds on the existing ideas of node-splitting and side-splitting in network meta-analysis. To provide a global test for inconsistency, we extend the model across multiple independent loops with one degree of freedom per loop. We develop a new algorithm for identifying independent loops within a network meta-analysis. Our proposed models handle treatments symmetrically, locate inconsistency in loops rather than in nodes or treatment comparisons, and are invariant to choice of reference treatment, making the results less dependent on model parameterization. For testing global inconsistency in network meta-analysis, our global model uses fewer degrees of freedom than the existing design-by-treatment interaction approach and has the potential to increase power. To illustrate our methods, we fit the models to three network meta-analyses varying in size and complexity. Local and global tests for inconsistency are performed and we demonstrate that the global model is invariant to choice of independent loops.


Subject(s)
Algorithms; Research Design; Humans; Network Meta-Analysis
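For orientation on entry 12 (a standard textbook quantity, not the paper's new model): in a triangle loop of treatments A, B, and C, loop inconsistency is the disagreement between the direct estimate of one comparison and the indirect estimate formed from the other two.

```latex
% Loop inconsistency in an ABC triangle: the direct B-vs-C effect
% compared with the indirect route via A; consistency means
% \omega_{ABC} = 0.
\[
  \omega_{ABC} \;=\; d^{\mathrm{dir}}_{BC}
  - \left( d^{\mathrm{dir}}_{AC} - d^{\mathrm{dir}}_{AB} \right).
\]
```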
13.
Res Synth Methods; 14(6): 903-910, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37606180

ABSTRACT

Individual participant data meta-analysis (IPDMA) projects obtain, check, harmonise and synthesise raw data from multiple studies. When undertaking the meta-analysis, researchers must decide between a two-stage or a one-stage approach. In a two-stage approach, the IPD are first analysed separately within each study to obtain aggregate data (e.g., treatment effect estimates and standard errors); then, in the second stage, these aggregate data are combined in a standard meta-analysis model (e.g., common-effect or random-effects). In a one-stage approach, the IPD from all studies are analysed in a single step using an appropriate model that accounts for clustering of participants within studies and, potentially, between-study heterogeneity (e.g., a general or generalised linear mixed model). The best approach to take is debated in the literature, and so here we provide clearer guidance for a broad audience. Both approaches are important tools for IPDMA researchers, and neither is a panacea. If most studies in the IPDMA are small (few participants or events), a one-stage approach is recommended because it uses a more exact likelihood. However, in other situations, researchers can choose either approach, carefully following best practice. Some previous claims recommending always using a one-stage approach are misleading, and the two-stage approach will often suffice for most researchers. When differences do arise between the two approaches, they are often caused by researchers using different modelling assumptions or estimation methods, rather than by the use of one or two stages per se.


Subject(s)
Research; Humans; Linear Models; Cluster Analysis
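A hedged base-R sketch of the two-stage approach described in entry 13: stage one estimates the treatment effect within each study; stage two pools the aggregate data by inverse-variance weighting (a common-effect model, for brevity; random-effects pooling would add a heterogeneity variance). The list `studies` and its column names are assumptions.

```r
## Two-stage IPD meta-analysis (common-effect, for brevity).
## Assumes `studies` is a list of data frames, each with columns y, treat.

# Stage 1: estimate the treatment effect separately within each study
stage1 <- t(sapply(studies, function(d) {
  fit <- lm(y ~ treat, data = d)
  c(est = unname(coef(fit)["treat"]),
    se  = sqrt(vcov(fit)["treat", "treat"]))
}))

# Stage 2: inverse-variance weighted pooling of the aggregate data
w         <- 1 / stage1[, "se"]^2
pooled    <- sum(w * stage1[, "est"]) / sum(w)
pooled_se <- sqrt(1 / sum(w))
c(pooled = pooled,
  lower  = pooled - 1.96 * pooled_se,
  upper  = pooled + 1.96 * pooled_se)
```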
14.
Clin Trials; 20(6): 594-602, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37337728

ABSTRACT

BACKGROUND: The population-level summary measure is a key component of the estimand for clinical trials with time-to-event outcomes. This is particularly the case for non-inferiority trials, because different summary measures imply different null hypotheses. Most trials are designed using the hazard ratio as summary measure, but recent studies suggested that the difference in restricted mean survival time might be more powerful, at least in certain situations. In a recent letter, we conjectured that differences between summary measures can be explained using the concept of the non-inferiority frontier and that for a fair simulation comparison of summary measures, the same analysis methods, making the same assumptions, should be used to estimate different summary measures. The aim of this article is to make such a comparison between three commonly used summary measures: hazard ratio, difference in restricted mean survival time and difference in survival at a fixed time point. In addition, we aim to investigate the impact of using an analysis method that assumes proportional hazards on the operating characteristics of a trial designed with any of the three summary measures. METHODS: We conduct a simulation study in the proportional hazards setting. We estimate difference in restricted mean survival time and difference in survival non-parametrically, without assuming proportional hazards. We also estimate all three measures parametrically, using flexible survival regression, under the proportional hazards assumption. RESULTS: Comparing the hazard ratio assuming proportional hazards with the other summary measures not assuming proportional hazards, relative performance varies substantially depending on the specific scenario. Fixing the summary measure, assuming proportional hazards always leads to substantial power gains compared to using non-parametric methods. Fixing the modelling approach to flexible parametric regression assuming proportional hazards, difference in restricted mean survival time is most often the most powerful summary measure among those considered. CONCLUSION: When the hazards are likely to be approximately proportional, reflecting this in the analysis can lead to large gains in power for difference in restricted mean survival time and difference in survival. The choice of summary measure for a non-inferiority trial with time-to-event outcomes should be made on clinical grounds; when any of the three summary measures discussed here is equally justifiable, difference in restricted mean survival time is most often associated with the most powerful test, on the condition that it is estimated under proportional hazards.


Subject(s)
Research Design; Humans; Computer Simulation; Proportional Hazards Models; Sample Size; Survival Analysis; Time Factors
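For reference alongside entry 14, the restricted mean survival time is a standard quantity (the definition is not specific to this paper): the area under the survival curve up to a pre-specified horizon t*, with the between-arm difference used as the population-level summary measure.

```latex
% Restricted mean survival time up to horizon t^*, and the
% between-arm difference used as a summary measure:
\[
  \mathrm{RMST}(t^{*}) = \int_{0}^{t^{*}} S(t)\,dt,
  \qquad
  \Delta = \mathrm{RMST}_{1}(t^{*}) - \mathrm{RMST}_{0}(t^{*}).
\]
```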
15.
Stat Med; 42(19): 3529-3546, 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37365776

ABSTRACT

Many trials use stratified randomisation, where participants are randomised within strata defined by one or more baseline covariates. While it is important to adjust for stratification variables in the analysis, the appropriate method of adjustment is unclear when stratification variables are affected by misclassification and hence some participants are randomised in the incorrect stratum. We conducted a simulation study to compare methods of adjusting for stratification variables affected by misclassification in the analysis of continuous outcomes when all or only some stratification errors are discovered, and when the treatment effect or treatment-by-covariate interaction effect is of interest. The data were analysed using linear regression with no adjustment, adjustment for the strata used to perform the randomisation (randomisation strata), adjustment for the strata if all errors are corrected (true strata), and adjustment for the strata after some errors are discovered and corrected (updated strata). The unadjusted model performed poorly in all settings. Adjusting for the true strata was optimal, while the relative performance of adjusting for the randomisation strata or the updated strata varied depending on the setting. As the true strata are unlikely to be known with certainty in practice, we recommend using the updated strata for adjustment and performing subgroup analyses, provided the discovery of errors is unlikely to depend on treatment group, as expected in blinded trials. Greater transparency is needed in the reporting of stratification errors and how they were addressed in the analysis.


Subject(s)
Research Design; Humans; Linear Models; Computer Simulation; Random Allocation
16.
Stata J; 23(1): 3-23, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37155554

ABSTRACT

We describe a new command, artcat, that calculates sample size or power for a randomized controlled trial or similar experiment with an ordered categorical outcome, where analysis is by the proportional-odds model. artcat implements the method of Whitehead (1993, Statistics in Medicine 12: 2257-2271). We also propose and implement a new method that 1) allows the user to specify a treatment effect that does not obey the proportional-odds assumption, 2) offers greater accuracy for large treatment effects, and 3) allows for noninferiority trials. We illustrate the command and explore the value of an ordered categorical outcome over a binary outcome in various settings. We show by simulation that the methods perform well and that the new method is more accurate than Whitehead's method.
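For context on entry 16, Whitehead's (1993) sample-size result for an ordered categorical outcome analysed by a proportional-odds model is commonly quoted in the following form for 1:1 allocation (quoted from memory, so treat it as a hedged reconstruction rather than the exact expression artcat implements): with log odds ratio θ and mean category proportions p̄_k averaged over the two arms,

```latex
% Whitehead (1993): total sample size for K ordered categories,
% 1:1 allocation, log odds ratio \theta under proportional odds.
\[
  N = \frac{6 \left( z_{1-\alpha/2} + z_{1-\beta} \right)^{2}}
           {\theta^{2} \left( 1 - \sum_{k=1}^{K} \bar{p}_{k}^{\,3} \right)} .
\]
```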

17.
Int J Epidemiol; 52(1): 44-57, 2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36474414

ABSTRACT

BACKGROUND: Non-random selection of analytic subsamples could introduce selection bias in observational studies. We explored the potential presence and impact of selection in studies of SARS-CoV-2 infection and COVID-19 prognosis. METHODS: We tested the association of a broad range of characteristics with selection into COVID-19 analytic subsamples in the Avon Longitudinal Study of Parents and Children (ALSPAC) and UK Biobank (UKB). We then conducted empirical analyses and simulations to explore the potential presence, direction and magnitude of bias due to this selection (relative to our defined UK-based adult target populations) when estimating the association of body mass index (BMI) with SARS-CoV-2 infection and death-with-COVID-19. RESULTS: In both cohorts, a broad range of characteristics was related to selection, sometimes in opposite directions (e.g. more-educated people were more likely to have data on SARS-CoV-2 infection in ALSPAC, but less likely in UKB). Higher BMI was associated with higher odds of SARS-CoV-2 infection and death-with-COVID-19. We found non-negligible bias in many simulated scenarios. CONCLUSIONS: Analyses using COVID-19 self-reported or national registry data may be biased due to selection. The magnitude and direction of this bias depend on the outcome definition, the true effect of the risk factor and the assumed selection mechanism; these are likely to differ between studies with different target populations. Bias due to sample selection is a key concern in COVID-19 research based on national registry data, especially as countries end free mass testing. The framework we have used can be applied by other researchers assessing the extent to which their results may be biased for their research question of interest.


Subject(s)
COVID-19; Adult; Child; Humans; Bias; COVID-19/epidemiology; Longitudinal Studies; SARS-CoV-2; Selection Bias; Observational Studies as Topic
18.
BMJ; 378: e070351, 2022 Sep 28.
Article in English | MEDLINE | ID: mdl-36170988

ABSTRACT

OBJECTIVE: To quantify the effects of a series of text messages (safetxt) delivered in the community on incidence of chlamydia and gonorrhoea reinfection at one year in people aged 16-24 years. DESIGN: Parallel group randomised controlled trial. SETTING: 92 sexual health clinics in the United Kingdom. PARTICIPANTS: People aged 16-24 years with a diagnosis of, or treatment for, chlamydia, gonorrhoea, or non-specific urethritis in the past two weeks who owned a mobile phone. INTERVENTIONS: 3123 participants assigned to the safetxt intervention received a series of text messages to improve sex behaviours: four texts daily for days 1-3, one or two daily for days 4-28, two or three weekly for month 2, and 2-5 monthly for months 3-12. 3125 control participants received a monthly text message for one year asking for any change to postal or email address. It was hypothesised that safetxt would reduce the risk of chlamydia and gonorrhoea reinfection at one year by improving three key safer sex behaviours: partner notification at one month, condom use, and sexually transmitted infection testing before unprotected sex with a new partner. Care providers and outcome assessors were blind to allocation. MAIN OUTCOME MEASURES: The primary outcome was the cumulative incidence of chlamydia or gonorrhoea reinfection at one year, assessed by nucleic acid amplification tests. Safety outcomes were self-reported road traffic incidents and partner violence. All analyses were by intention to treat. RESULTS: 6248 of 20 476 people assessed for eligibility between 1 April 2016 and 23 November 2018 were randomised. Primary outcome data were available for 4675/6248 (74.8%). At one year, the cumulative incidence of chlamydia or gonorrhoea reinfection was 22.2% (693/3123) in the safetxt arm versus 20.3% (633/3125) in the control arm (odds ratio 1.13, 95% confidence interval 0.98 to 1.31). The number needed to harm was 64 (95% confidence interval number needed to benefit 334 to ∞ to number needed to harm 24). The risk of road traffic incidents and partner violence was similar between the groups. CONCLUSIONS: The safetxt intervention did not reduce chlamydia and gonorrhoea reinfections at one year in people aged 16-24 years. More reinfections occurred in the safetxt group. The results highlight the need for rigorous evaluation of health communication interventions. TRIAL REGISTRATION: ISRCTN registry ISRCTN64390461.


Subject(s)
Gonorrhea; Sexually Transmitted Diseases; Text Messaging; Gonorrhea/epidemiology; Gonorrhea/prevention & control; Humans; Reinfection; Sexual Behavior; Sexually Transmitted Diseases/epidemiology; Sexually Transmitted Diseases/prevention & control
19.
Stat Med; 41(22): 4299-4310, 2022 Sep 30.
Article in English | MEDLINE | ID: mdl-35751568

ABSTRACT

Factorial trials offer an efficient method to evaluate multiple interventions in a single trial; however, the use of additional treatments can obscure research objectives, leading to inappropriate analytical methods and interpretation of results. We define a set of estimands for factorial trials, and describe a framework for applying these estimands, with the aim of clarifying trial objectives and ensuring appropriate primary and sensitivity analyses are chosen. This framework is intended for use in factorial trials where the intent is to conduct "two-trials-in-one" (ie, to separately evaluate the effects of treatments A and B), and comprises four steps: (i) specifying how additional treatment(s) (eg, treatment B) will be handled in the estimand, and how intercurrent events affecting the additional treatment(s) will be handled; (ii) designating the appropriate factorial estimator as the primary analysis strategy; (iii) evaluating the interaction to assess the plausibility of the assumptions underpinning the factorial estimator; and (iv) performing a sensitivity analysis using an appropriate multiarm estimator to evaluate to what extent departures from the underlying assumption of no interaction may affect results. We show that adjustment for other factors is necessary for noncollapsible effect measures (such as the odds ratio), and through a trial re-analysis we find that failure to consider the estimand could lead to inappropriate interpretation of results. We conclude that careful use of the estimands framework clarifies research objectives and reduces the risk of misinterpretation of trial results, and should become a standard part of both the protocol and reporting of factorial trials.


Subject(s)
Models, Statistical; Research Design; Data Interpretation, Statistical; Humans; Odds Ratio
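As an illustration of the noncollapsibility point in entry 19 (our sketch with simulated data, not the paper's re-analysis): with an odds ratio, the estimate for treatment A differs depending on whether the model adjusts for factor B, even with perfect 1:1 randomisation and no interaction on the logit scale.

```r
## Noncollapsibility of the odds ratio in a 2x2 factorial trial:
## the marginal and B-adjusted estimates of A's effect differ even
## with no A-by-B interaction. Simulated data; illustrative only.
set.seed(42)
n <- 1e5
a <- rbinom(n, 1, 0.5)                 # factor A, randomised 1:1
b <- rbinom(n, 1, 0.5)                 # factor B, randomised 1:1
y <- rbinom(n, 1, plogis(-1 + a + b))  # no interaction on logit scale

exp(coef(glm(y ~ a,     family = binomial))["a"])  # marginal OR (smaller)
exp(coef(glm(y ~ a + b, family = binomial))["a"])  # B-adjusted OR (~ e^1)
```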