Results 1 - 20 of 81
1.
Am J Epidemiol ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38717330

ABSTRACT

Quantitative bias analysis (QBA) permits assessment of the expected impact of various imperfections of the available data on the results and conclusions of a particular real-world study. This article extends QBA methodology to multivariable time-to-event analyses with right-censored endpoints, possibly including time-varying exposures or covariates. The proposed approach employs data-driven simulations, which preserve important features of the data at hand while offering flexibility in controlling the parameters and assumptions that may affect the results. First, the steps required to perform data-driven simulations are described, and then two examples of real-world time-to-event analyses illustrate their implementation and the insights they may offer. The first example focuses on the omission of an important time-invariant predictor of the outcome in a prognostic study of cancer mortality, and permits separating the expected impact of confounding bias from non-collapsibility. The second example assesses how imprecise timing of an interval-censored event - ascertained only at sparse times of clinic visits - affects its estimated association with a time-varying drug exposure. The simulation results also provide a basis for comparing the performance of two alternative strategies for imputing the unknown event times in this setting. The R scripts that permit the reproduction of our examples are provided.
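
A minimal, self-contained sketch of the data-driven simulation idea above (not the authors' R scripts): survival times are generated from an assumed conditional hazard model with a binary exposure and a binary prognostic covariate, and the exposure hazard ratio is estimated with and without the covariate. Running the comparison with the covariate independent of exposure isolates non-collapsibility; making them correlated adds confounding. The lifelines Cox fitter and all parameter values are illustrative assumptions.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # any Cox fitter would do; assumed available

rng = np.random.default_rng(1)
n = 20_000

def simulate(p_z_given_x):
    """Exponential survival times from a conditional proportional hazards model."""
    x = rng.binomial(1, 0.5, n)                       # exposure
    z = rng.binomial(1, p_z_given_x[x])               # prognostic covariate
    rate = 0.1 * np.exp(np.log(2.0) * x + np.log(3.0) * z)
    t = rng.exponential(1 / rate)
    c = rng.exponential(1 / 0.05, n)                  # independent censoring
    return pd.DataFrame({"time": np.minimum(t, c),
                         "event": (t <= c).astype(int), "x": x, "z": z})

for label, p in [("non-collapsibility only", np.array([0.5, 0.5])),
                 ("plus confounding", np.array([0.3, 0.7]))]:
    df = simulate(p)
    adjusted = CoxPHFitter().fit(df, "time", "event")
    omitted = CoxPHFitter().fit(df.drop(columns="z"), "time", "event")
    print(f"{label}: adjusted HR for x = {np.exp(adjusted.params_['x']):.2f}, "
          f"z-omitted HR = {np.exp(omitted.params_['x']):.2f}")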

2.
Stat Med ; 43(11): 2083-2095, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38487976

ABSTRACT

To obtain valid inference following stratified randomisation, treatment effects should be estimated with adjustment for stratification variables. Stratification sometimes requires categorisation of a continuous prognostic variable (eg, age), which raises the question: should adjustment be based on randomisation categories or underlying continuous values? In practice, adjustment for randomisation categories is more common. We reviewed trials published in general medical journals and found none of the 32 trials that stratified randomisation based on a continuous variable adjusted for continuous values in the primary analysis. Using data simulation, this article evaluates the performance of different adjustment strategies for continuous and binary outcomes where the covariate-outcome relationship (via the link function) was either linear or non-linear. Given the utility of covariate adjustment for addressing missing data, we also considered settings with complete or missing outcome data. Analysis methods included linear or logistic regression with no adjustment for the stratification variable, adjustment for randomisation categories, or adjustment for continuous values assuming a linear covariate-outcome relationship or allowing for non-linearity using fractional polynomials or restricted cubic splines. Unadjusted analysis performed poorly throughout. Adjustment approaches that misspecified the underlying covariate-outcome relationship were less powerful and, alarmingly, biased in settings where the stratification variable predicted missing outcome data. Adjustment for randomisation categories tends to involve the highest degree of misspecification, and so should be avoided in practice. To guard against misspecification, we recommend use of flexible approaches such as fractional polynomials and restricted cubic splines when adjusting for continuous stratification variables in randomised trials.


Subjects
Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , Computer Simulation , Linear Models , Data Interpretation, Statistical , Logistic Models , Random Allocation
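
As a concrete illustration of the adjustment strategies compared above, the sketch below fits three analysis models to simulated trial data with a binary outcome and a continuous prognostic variable (age) that is categorised for stratified randomisation: no adjustment, adjustment for the randomisation categories, and flexible adjustment for the continuous values via a natural cubic spline (patsy's cr() through the statsmodels formula interface). The data-generating model and all settings are assumptions for illustration, not the paper's simulation design.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2_000
age = rng.uniform(35, 85, n)
treat = rng.binomial(1, 0.5, n)                       # 1:1 randomisation
logit = -4 + 0.002 * (age - 35) ** 2 + 0.5 * treat    # non-linear age effect
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"y": y, "treat": treat, "age": age,
                   "age_cat": pd.cut(age, [34, 55, 70, 85], labels=False)})

models = {
    "unadjusted":         "y ~ treat",
    "randomisation cats": "y ~ treat + C(age_cat)",
    "spline (cr, df=4)":  "y ~ treat + cr(age, df=4)",   # natural cubic spline
}
for name, formula in models.items():
    fit = smf.logit(formula, data=df).fit(disp=0)
    print(f"{name:20s} treat log-OR = {fit.params['treat']:.3f} (SE {fit.bse['treat']:.3f})")
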
3.
Pharm Stat ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38631678

ABSTRACT

Accurate frequentist performance of a method is desirable in confirmatory clinical trials, but is not sufficient on its own to justify the use of a missing data method. Reference-based conditional mean imputation, with variance estimation justified solely by its frequentist performance, has the surprising and undesirable property that the estimated variance becomes smaller the greater the number of missing observations; this is because, under jump-to-reference, the imputation effectively forces the true treatment effect to be exactly zero for patients with missing data.
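
For intuition, a minimal sketch (an assumed single-follow-up setting, not the paper's analysis) of jump-to-reference conditional mean imputation: missing outcomes are replaced by the conditional mean predicted from a regression fitted in the reference (control) arm, so imputed intervention-arm patients contribute no treatment effect, which is the sense in which the approach forces the treatment effect to zero for patients with missing data.

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 500
baseline = rng.normal(0, 1, n)
treat = rng.binomial(1, 0.5, n)
outcome = 0.5 * baseline + 0.4 * treat + rng.normal(0, 1, n)
missing = rng.random(n) < 0.3                          # 30% missing, MCAR for simplicity
df = pd.DataFrame({"baseline": baseline, "treat": treat,
                   "outcome": np.where(missing, np.nan, outcome)})

# Imputation model fitted in the reference (control) arm only
ref = df[(df.treat == 0) & df.outcome.notna()]
beta = np.polyfit(ref.baseline, ref.outcome, deg=1)    # simple linear regression

# Jump to reference: every missing outcome gets the control-arm conditional mean
df["outcome_j2r"] = np.where(missing, np.polyval(beta, df.baseline), df.outcome)
effect = df.outcome_j2r[df.treat == 1].mean() - df.outcome_j2r[df.treat == 0].mean()
print(f"J2R conditional-mean-imputation treatment effect: {effect:.3f}")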

4.
Biom J ; 66(1): e2200222, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36737675

ABSTRACT

Although new biostatistical methods are published at a very high rate, many of these developments are not trustworthy enough to be adopted by the scientific community. We propose a framework to think about how a piece of methodological work contributes to the evidence base for a method. Similar to the well-known phases of clinical research in drug development, we propose to define four phases of methodological research. These four phases cover (I) proposing a new methodological idea while providing, for example, logical reasoning or proofs, (II) providing empirical evidence, first in a narrow target setting, then (III) in an extended range of settings and for various outcomes, accompanied by appropriate application examples, and (IV) investigations that establish a method as sufficiently well-understood to know when it is preferred over others and when it is not; that is, its pitfalls. We suggest basic definitions of the four phases to provoke thought and discussion rather than devising an unambiguous classification of studies into phases. Too many methodological developments stop short of phases III and IV, but we give two examples, with references. Our concept rebalances the emphasis toward studies in phases III and IV, that is, carefully planned method comparison studies and studies that explore the empirical properties of existing methods in a wider range of problems.


Subjects
Biostatistics , Research Design
5.
Biom J ; 66(1): e2300085, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37823668

ABSTRACT

For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but only rarely appropriate to use.


Subjects
Research , Data Interpretation, Statistical , Computer Simulation
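
A minimal sketch of the two simulation designs contrasted above (all settings are assumptions for illustration): design A fixes one complete dataset and redraws only the missingness indicators in each repetition, while design B also redraws the complete data. The Monte Carlo variability of a simple complete-case estimator differs between the two designs.

import numpy as np

rng = np.random.default_rng(4)
n, n_sim, p_miss = 200, 1_000, 0.3

def complete_case_mean(y, miss):
    return y[~miss].mean()

# Design A: fix the complete data, resample only the missingness indicators
y_fixed = rng.normal(0, 1, n)
est_a = [complete_case_mean(y_fixed, rng.random(n) < p_miss) for _ in range(n_sim)]

# Design B: redraw complete data and missingness in every repetition
est_b = [complete_case_mean(rng.normal(0, 1, n), rng.random(n) < p_miss)
         for _ in range(n_sim)]

print("empirical SE, fixed complete data  :", round(np.std(est_a, ddof=1), 4))
print("empirical SE, redrawn complete data:", round(np.std(est_b, ddof=1), 4))
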
6.
PLoS Comput Biol ; 18(5): e1008800, 2022 05.
Article in English | MEDLINE | ID: mdl-35604952

ABSTRACT

The fraction of cases reported, known as 'reporting', is a key performance indicator in an outbreak response, and an essential factor to consider when modelling epidemics and assessing their impact on populations. Unfortunately, its estimation is inherently difficult, as it relates to the part of an epidemic which is, by definition, not observed. We introduce a simple statistical method for estimating reporting, initially developed for the response to Ebola in Eastern Democratic Republic of the Congo (DRC), 2018-2020. This approach uses transmission chain data, typically gathered through case investigation and contact tracing, taking the proportion of investigated cases with a known, reported infector as a proxy for reporting. Using simulated epidemics, we study how this method performs for different outbreak sizes and reporting levels. Results suggest that our method has low bias, reasonable precision, and despite sub-optimal coverage, usually provides estimates within close range (5-10%) of the true value. Being fast and simple, this method could be useful for estimating reporting in real time in settings where person-to-person transmission is the main driver of the epidemic, and where case investigation is routinely performed as part of surveillance and contact tracing activities.


Subjects
Epidemics , Hemorrhagic Fever, Ebola , Contact Tracing , Democratic Republic of the Congo/epidemiology , Disease Outbreaks , Hemorrhagic Fever, Ebola/epidemiology , Humans
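
The core calculation described above, using the proportion of investigated cases with a known, reported infector as a proxy for reporting, reduces to a binomial proportion. The sketch below uses made-up numbers (not DRC data) and a normal-approximation confidence interval.

import numpy as np

# Hypothetical transmission-chain data from case investigations
n_investigated = 180               # investigated cases
n_known_reported_infector = 141    # cases whose infector is known and was reported

p_hat = n_known_reported_infector / n_investigated
se = np.sqrt(p_hat * (1 - p_hat) / n_investigated)
print(f"estimated reporting: {p_hat:.2f} "
      f"(95% CI {p_hat - 1.96 * se:.2f} to {p_hat + 1.96 * se:.2f})")
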
7.
Stat Med ; 42(19): 3529-3546, 2023 08 30.
Article in English | MEDLINE | ID: mdl-37365776

ABSTRACT

Many trials use stratified randomisation, where participants are randomised within strata defined by one or more baseline covariates. While it is important to adjust for stratification variables in the analysis, the appropriate method of adjustment is unclear when stratification variables are affected by misclassification and hence some participants are randomised in the incorrect stratum. We conducted a simulation study to compare methods of adjusting for stratification variables affected by misclassification in the analysis of continuous outcomes when all or only some stratification errors are discovered, and when the treatment effect or treatment-by-covariate interaction effect is of interest. The data were analysed using linear regression with no adjustment, adjustment for the strata used to perform the randomisation (randomisation strata), adjustment for the strata if all errors are corrected (true strata), and adjustment for the strata after some errors are discovered and corrected (updated strata). The unadjusted model performed poorly in all settings. Adjusting for the true strata was optimal, while the relative performance of adjusting for the randomisation strata or the updated strata varied depending on the setting. As the true strata are unlikely to be known with certainty in practice, we recommend using the updated strata for adjustment and performing subgroup analyses, provided the discovery of errors is unlikely to depend on treatment group, as expected in blinded trials. Greater transparency is needed in the reporting of stratification errors and how they were addressed in the analysis.


Subjects
Research Design , Humans , Linear Models , Computer Simulation , Random Allocation
8.
Stat Med ; 42(27): 4917-4930, 2023 11 30.
Article in English | MEDLINE | ID: mdl-37767752

ABSTRACT

In network meta-analysis, studies evaluating multiple treatment comparisons are modeled simultaneously, and estimation is informed by a combination of direct and indirect evidence. Network meta-analysis relies on an assumption of consistency, meaning that direct and indirect evidence should agree for each treatment comparison. Here we propose new local and global tests for inconsistency and demonstrate their application to three example networks. Because inconsistency is a property of a loop of treatments in the network meta-analysis, we locate the local test in a loop. We define a model with one inconsistency parameter that can be interpreted as loop inconsistency. The model builds on the existing ideas of node-splitting and side-splitting in network meta-analysis. To provide a global test for inconsistency, we extend the model across multiple independent loops with one degree of freedom per loop. We develop a new algorithm for identifying independent loops within a network meta-analysis. Our proposed models handle treatments symmetrically, locate inconsistency in loops rather than in nodes or treatment comparisons, and are invariant to choice of reference treatment, making the results less dependent on model parameterization. For testing global inconsistency in network meta-analysis, our global model uses fewer degrees of freedom than the existing design-by-treatment interaction approach and has the potential to increase power. To illustrate our methods, we fit the models to three network meta-analyses varying in size and complexity. Local and global tests for inconsistency are performed and we demonstrate that the global model is invariant to choice of independent loops.


Subjects
Algorithms , Research Design , Humans , Network Meta-Analysis
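
For a single loop of three treatments, the idea of locating inconsistency in a loop can be seen with the classic Bucher-style check sketched below: the direct A-versus-C estimate is compared with the indirect estimate formed from A-versus-B and B-versus-C. This is a simpler relative of the authors' node/side-splitting-based model, shown only for intuition; all estimates and variances are made up.

import numpy as np
from scipy import stats

# Hypothetical direct evidence on the log odds ratio scale: (estimate, variance)
d_ab, v_ab = -0.30, 0.02    # A vs B
d_bc, v_bc = -0.20, 0.03    # B vs C
d_ac, v_ac = -0.65, 0.04    # A vs C (direct)

indirect_ac = d_ab + d_bc                   # indirect A vs C through B
inconsistency = d_ac - indirect_ac          # loop inconsistency factor
z = inconsistency / np.sqrt(v_ac + v_ab + v_bc)
print(f"loop inconsistency = {inconsistency:.2f}, z = {z:.2f}, "
      f"p = {2 * stats.norm.sf(abs(z)):.3f}")
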
9.
Clin Trials ; 20(6): 594-602, 2023 12.
Article in English | MEDLINE | ID: mdl-37337728

ABSTRACT

BACKGROUND: The population-level summary measure is a key component of the estimand for clinical trials with time-to-event outcomes. This is particularly the case for non-inferiority trials, because different summary measures imply different null hypotheses. Most trials are designed using the hazard ratio as summary measure, but recent studies suggested that the difference in restricted mean survival time might be more powerful, at least in certain situations. In a recent letter, we conjectured that differences between summary measures can be explained using the concept of the non-inferiority frontier and that for a fair simulation comparison of summary measures, the same analysis methods, making the same assumptions, should be used to estimate different summary measures. The aim of this article is to make such a comparison between three commonly used summary measures: hazard ratio, difference in restricted mean survival time and difference in survival at a fixed time point. In addition, we aim to investigate the impact of using an analysis method that assumes proportional hazards on the operating characteristics of a trial designed with any of the three summary measures. METHODS: We conduct a simulation study in the proportional hazards setting. We estimate difference in restricted mean survival time and difference in survival non-parametrically, without assuming proportional hazards. We also estimate all three measures parametrically, using flexible survival regression, under the proportional hazards assumption. RESULTS: Comparing the hazard ratio assuming proportional hazards with the other summary measures not assuming proportional hazards, relative performance varies substantially depending on the specific scenario. Fixing the summary measure, assuming proportional hazards always leads to substantial power gains compared to using non-parametric methods. Fixing the modelling approach to flexible parametric regression assuming proportional hazards, difference in restricted mean survival time is most often the most powerful summary measure among those considered. CONCLUSION: When the hazards are likely to be approximately proportional, reflecting this in the analysis can lead to large gains in power for difference in restricted mean survival time and difference in survival. The choice of summary measure for a non-inferiority trial with time-to-event outcomes should be made on clinical grounds; when any of the three summary measures discussed here is equally justifiable, difference in restricted mean survival time is most often associated with the most powerful test, on the condition that it is estimated under proportional hazards.


Subjects
Research Design , Humans , Computer Simulation , Proportional Hazards Models , Sample Size , Survival Analysis , Time Factors
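
Of the three summary measures compared above, the difference in restricted mean survival time may be the least familiar; the sketch below estimates it non-parametrically as the area under each arm's Kaplan-Meier curve up to a truncation time t*, using only numpy. The simulated data and t* = 2 are illustrative assumptions.

import numpy as np

def km_rmst(time, event, tau):
    """Restricted mean survival time: area under the Kaplan-Meier curve on [0, tau]."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = np.arange(len(time), 0, -1)
    surv = np.cumprod(1.0 - event / at_risk)       # S(t) just after each observed time
    grid = np.concatenate(([0.0], time, [tau]))    # interval endpoints
    s_left = np.concatenate(([1.0], surv))         # S(t) on [grid[i], grid[i + 1])
    widths = np.minimum(grid[1:], tau) - np.minimum(grid[:-1], tau)
    return float(np.sum(s_left * np.clip(widths, 0.0, None)))

rng = np.random.default_rng(5)
n = 1_000
arm = rng.binomial(1, 0.5, n)
t_event = rng.exponential(1 / (0.5 * np.exp(-0.4 * arm)))   # proportional hazards
t_cens = rng.exponential(2.0, n)
obs, ev = np.minimum(t_event, t_cens), (t_event <= t_cens).astype(float)

tau = 2.0
diff = km_rmst(obs[arm == 1], ev[arm == 1], tau) - km_rmst(obs[arm == 0], ev[arm == 0], tau)
print(f"difference in RMST up to t* = {tau}: {diff:.3f} (treated minus control)")
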
10.
Stata J ; 23(1): 3-23, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37155554

ABSTRACT

We describe a new command, artcat, that calculates sample size or power for a randomized controlled trial or similar experiment with an ordered categorical outcome, where analysis is by the proportional-odds model. artcat implements the method of Whitehead (1993, Statistics in Medicine 12: 2257-2271). We also propose and implement a new method that 1) allows the user to specify a treatment effect that does not obey the proportional-odds assumption, 2) offers greater accuracy for large treatment effects, and 3) allows for noninferiority trials. We illustrate the command and explore the value of an ordered categorical outcome over a binary outcome in various settings. We show by simulation that the methods perform well and that the new method is more accurate than Whitehead's method.
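
A commonly quoted form of Whitehead's (1993) formula, which artcat implements, gives the total sample size for a 1:1 trial analysed with a proportional-odds model as N = 6(z_{1-α/2} + z_{1-β})² / [θ²(1 − Σ p̄k³)], where θ is the anticipated log odds ratio and p̄k is the average of the control and treatment proportions in outcome category k. The Python sketch below (not the Stata command itself, and with made-up control proportions) evaluates it directly.

import numpy as np
from scipy import stats

def whitehead_total_n(control_probs, odds_ratio, alpha=0.05, power=0.9):
    """Total sample size (1:1 allocation) for an ordinal outcome under proportional odds."""
    control_probs = np.asarray(control_probs, dtype=float)
    cum_c = np.cumsum(control_probs)[:-1]                          # cumulative probs, control
    cum_t = odds_ratio * cum_c / (1 + (odds_ratio - 1) * cum_c)    # proportional odds
    treat_probs = np.diff(np.concatenate(([0.0], cum_t, [1.0])))
    p_bar = (control_probs + treat_probs) / 2                      # mean category probabilities
    z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    theta = np.log(odds_ratio)
    return 6 * (z_a + z_b) ** 2 / (theta ** 2 * (1 - np.sum(p_bar ** 3)))

# Illustrative control-arm distribution over four ordered categories, target OR = 1.8
print("total N ~", int(np.ceil(whitehead_total_n([0.2, 0.3, 0.3, 0.2], odds_ratio=1.8))))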

11.
Biom J ; 65(8): e2300069, 2023 12.
Article in English | MEDLINE | ID: mdl-37775940

ABSTRACT

The marginality principle guides analysts to avoid omitting lower-order terms from models in which higher-order terms are included as covariates. Lower-order terms are viewed as "marginal" to higher-order terms. We consider how this principle applies to three cases: regression models that may include the ratio of two measured variables; polynomial transformations of a measured variable; and factorial arrangements of defined interventions. For each case, we show that which terms or transformations are considered to be lower-order, and therefore marginal, depends on the scale of measurement, which is frequently arbitrary. Understanding the implications of this point leads to an intuitive understanding of the curse of dimensionality. We conclude that the marginality principle may be useful to analysts in some specific cases but caution against invoking it as a context-free recipe.


Subjects
Algorithms , Regression Analysis
12.
Stat Med ; 41(22): 4299-4310, 2022 09 30.
Article in English | MEDLINE | ID: mdl-35751568

ABSTRACT

Factorial trials offer an efficient method to evaluate multiple interventions in a single trial; however, the use of additional treatments can obscure research objectives, leading to inappropriate analytical methods and interpretation of results. We define a set of estimands for factorial trials, and describe a framework for applying these estimands, with the aim of clarifying trial objectives and ensuring appropriate primary and sensitivity analyses are chosen. This framework is intended for use in factorial trials where the intent is to conduct "two-trials-in-one" (ie, to separately evaluate the effects of treatments A and B), and comprises four steps: (i) specifying how additional treatment(s) (eg, treatment B) will be handled in the estimand, and how intercurrent events affecting the additional treatment(s) will be handled; (ii) designating the appropriate factorial estimator as the primary analysis strategy; (iii) evaluating the interaction to assess the plausibility of the assumptions underpinning the factorial estimator; and (iv) performing a sensitivity analysis using an appropriate multiarm estimator to evaluate to what extent departures from the underlying assumption of no interaction may affect results. We show that adjustment for other factors is necessary for noncollapsible effect measures (such as the odds ratio), and through a trial re-analysis we find that failure to consider the estimand could lead to inappropriate interpretation of results. We conclude that careful use of the estimands framework clarifies research objectives and reduces the risk of misinterpretation of trial results, and should become a standard part of both the protocol and reporting of factorial trials.


Subjects
Models, Statistical , Research Design , Data Interpretation, Statistical , Humans , Odds Ratio
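
As a sketch of the analysis strategy described above, the code below estimates the effect of treatment A on a binary outcome in a simulated 2x2 factorial trial using the factorial estimator with adjustment for factor B (recommended for the non-collapsible odds ratio), checks the A-by-B interaction, and runs a multiarm-style sensitivity analysis comparing the A-only arm with the control arm. The data, effect sizes and absence of interaction are assumptions for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 4_000
a = rng.binomial(1, 0.5, n)
b = rng.binomial(1, 0.5, n)                 # 2x2 factorial randomisation
logit = -0.5 + 0.6 * a + 0.8 * b            # no A-by-B interaction by assumption
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"y": y, "a": a, "b": b})

factorial = smf.logit("y ~ a + b", data=df).fit(disp=0)        # factorial estimator
interaction = smf.logit("y ~ a * b", data=df).fit(disp=0)      # step (iii): interaction check
multiarm = smf.logit("y ~ a", data=df[df.b == 0]).fit(disp=0)  # step (iv): A-only vs control

print("factorial log-OR for A :", round(factorial.params["a"], 3))
print("A-by-B interaction p   :", round(interaction.pvalues["a:b"], 3))
print("multiarm log-OR for A  :", round(multiarm.params["a"], 3))
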
13.
Stat Med ; 40(29): 6634-6650, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34590333

ABSTRACT

Composite endpoints are commonly used to define primary outcomes in randomized controlled trials. A participant may be classified as meeting the endpoint if they experience an event in one or several components (eg, a favorable outcome based on a composite of being alive and attaining negative culture results in trials assessing tuberculosis treatments). Partially observed components that are not missing simultaneously complicate the analysis of the composite endpoint. An intuitive strategy frequently used in practice for handling missing values in the components is to derive the values of the composite endpoint from observed components when possible, and exclude from analysis participants whose composite endpoint cannot be derived. Alternatively, complete record analysis (CRA) (excluding participants with any missing components) or multiple imputation (MI) can be used. We compare a set of methods for analyzing a composite endpoint with partially observed components mathematically and by simulation, and apply these methods in a reanalysis of a published trial (TOPPS). We show that the derived composite endpoint can be missing not at random even when the components are missing completely at random. Consequently, the treatment effect estimated from the derived endpoint is biased while CRA results without the derived endpoint are valid. Missing at random mechanisms require MI of the components. We conclude that, although superficially attractive, deriving the composite endpoint from observed components should generally be avoided. Despite the potential risk of imputation model mis-specification, MI of missing components is the preferred approach in this study setting.


Subjects
Data Interpretation, Statistical , Computer Simulation , Humans , Randomized Controlled Trials as Topic
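
The 'derive where possible' strategy discussed above can be written down in a few lines; the sketch below uses a hypothetical composite of death or positive culture, derivable from partial data whenever an observed component is already a failure or both components are observed, and left missing otherwise. This only illustrates the derivation logic that the paper argues should generally be avoided; the component names and values are assumptions.

import numpy as np
import pandas as pd

# Hypothetical components: 1 = event (unfavourable), 0 = no event, NaN = missing
df = pd.DataFrame({
    "death":            [0, 1,      0, np.nan, 0,      np.nan],
    "positive_culture": [0, np.nan, 1, 1,      np.nan, np.nan],
})

def derive_composite(row):
    """Composite failure = death OR positive culture, derived where determinable."""
    if (row == 1).any():        # any observed failure settles the composite
        return 1.0
    if row.notna().all():       # both components observed and neither is a failure
        return 0.0
    return np.nan               # otherwise the composite cannot be derived

df["composite_derived"] = df.apply(derive_composite, axis=1)
print(df)
# Complete-record analysis would instead drop every row with any missing component.
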
14.
BMC Med ; 18(1): 286, 2020 09 09.
Article in English | MEDLINE | ID: mdl-32900372

ABSTRACT

When designing a clinical trial, explicitly defining the treatment estimands of interest (that which is to be estimated) can help to clarify trial objectives and ensure the questions being addressed by the trial are clinically meaningful. There are several challenges when defining estimands. Here, we discuss a number of these in the context of trials of treatments for patients hospitalised with COVID-19 and make suggestions for how estimands should be defined for key outcomes. We suggest that treatment effects should usually be measured as differences in proportions (or risk or odds ratios) for outcomes such as death and requirement for ventilation, and differences in means for outcomes such as the number of days ventilated. We further recommend that truncation due to death should be handled differently depending on whether a patient- or resource-focused perspective is taken; for the former, a composite approach should be used, while for the latter, a while-alive approach is preferred. Finally, we suggest that discontinuation of randomised treatment should be handled from a treatment policy perspective, where non-adherence is ignored in the analysis (i.e. intention to treat).


Subjects
Betacoronavirus , Coronavirus Infections/therapy , Pneumonia, Viral/therapy , COVID-19 , Clinical Trials as Topic , Coronavirus Infections/drug therapy , Hospitalization , Humans , Odds Ratio , Pandemics , Research Design , SARS-CoV-2 , COVID-19 Drug Treatment
15.
Stat Med ; 39(21): 2815-2842, 2020 09 20.
Article in English | MEDLINE | ID: mdl-32419182

ABSTRACT

Missing data due to loss to follow-up or intercurrent events are unintended, but unfortunately inevitable in clinical trials. Since the true values of missing data are never known, it is necessary to assess the impact of untestable and unavoidable assumptions about any unobserved data in sensitivity analysis. This tutorial provides an overview of controlled multiple imputation (MI) techniques and a practical guide to their use for sensitivity analysis of trials with missing continuous outcome data. These include δ- and reference-based MI procedures. In δ-based imputation, an offset term, δ, is typically added to the expected value of the missing data to assess the impact of unobserved participants having a worse or better response than those observed. Reference-based imputation draws imputed values with some reference to observed data in other groups of the trial, typically in other treatment arms. We illustrate the accessibility of these methods using data from a pediatric eczema trial and a chronic headache trial and provide Stata code to facilitate adoption. We discuss issues surrounding the choice of δ in δ-based sensitivity analysis. We also review the debate on variance estimation within reference-based analysis and justify the use of Rubin's variance estimator in this setting since, as we elaborate further in the tutorial, it provides information-anchored inference.


Subjects
Data Interpretation, Statistical , Child , Humans
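
To show the mechanics of δ-based sensitivity analysis described above, here is a compact sketch in Python rather than the tutorial's Stata code, with an assumed single continuous outcome and one baseline covariate: outcomes are multiply imputed under MAR from a bootstrapped linear regression, a fixed offset δ is added to imputed values in the intervention arm only (one common choice), and results are pooled with Rubin's rules across a grid of δ values.

import numpy as np

rng = np.random.default_rng(7)
n, n_imp = 400, 50
baseline = rng.normal(0, 1, n)
treat = rng.binomial(1, 0.5, n)
y = 1.0 + 0.5 * baseline + 0.6 * treat + rng.normal(0, 1, n)
miss = rng.random(n) < 0.25                        # 25% of outcomes missing
y_obs = np.where(miss, np.nan, y)
X = np.column_stack([np.ones(n), baseline, treat])

def diff_in_means(yv):
    d = yv[treat == 1].mean() - yv[treat == 0].mean()
    v = (yv[treat == 1].var(ddof=1) / (treat == 1).sum()
         + yv[treat == 0].var(ddof=1) / (treat == 0).sum())
    return d, v

for delta in [0.0, -0.25, -0.5, -1.0]:             # offset for imputed intervention-arm values
    ests, wvars = [], []
    for _ in range(n_imp):
        idx = rng.integers(0, (~miss).sum(), (~miss).sum())   # bootstrap the observed cases
        Xb, yb = X[~miss][idx], y_obs[~miss][idx]
        beta, ssr, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
        sigma = np.sqrt(ssr[0] / (len(yb) - X.shape[1]))
        y_imp = y_obs.copy()
        y_imp[miss] = X[miss] @ beta + rng.normal(0, sigma, miss.sum()) + delta * treat[miss]
        d, v = diff_in_means(y_imp)
        ests.append(d)
        wvars.append(v)
    qbar, ubar, b = np.mean(ests), np.mean(wvars), np.var(ests, ddof=1)
    total_var = ubar + (1 + 1 / n_imp) * b          # Rubin's rules
    print(f"delta = {delta:5.2f}: effect = {qbar:.3f} (SE {np.sqrt(total_var):.3f})")
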
16.
Stat Med ; 39(19): 2536-2555, 2020 08 30.
Article in English | MEDLINE | ID: mdl-32394498

ABSTRACT

A one-stage individual participant data (IPD) meta-analysis synthesizes IPD from multiple studies using a general or generalized linear mixed model. This produces summary results (eg, about treatment effect) in a single step, whilst accounting for clustering of participants within studies (via a stratified study intercept, or random study intercepts) and between-study heterogeneity (via random treatment effects). We use simulation to evaluate the performance of restricted maximum likelihood (REML) and maximum likelihood (ML) estimation of one-stage IPD meta-analysis models for synthesizing randomized trials with continuous or binary outcomes. Three key findings are identified. First, for ML or REML estimation of stratified intercept or random intercepts models, a t-distribution based approach generally improves coverage of confidence intervals for the summary treatment effect, compared with a z-based approach. Second, when using ML estimation of a one-stage model with a stratified intercept, the treatment variable should be coded using "study-specific centering" (ie, 1/0 minus the study-specific proportion of participants in the treatment group), as this reduces the bias in the between-study variance estimate (compared with 1/0 and other coding options). Third, REML estimation reduces downward bias in between-study variance estimates compared with ML estimation, and does not depend on the treatment variable coding; for binary outcomes, this requires REML estimation of the pseudo-likelihood, although this may not be stable in some situations (eg, when data are sparse). Two applied examples are used to illustrate the findings.


Subjects
Models, Statistical , Bias , Cluster Analysis , Computer Simulation , Humans , Linear Models
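
The 'study-specific centering' recommendation in the second finding above is straightforward to apply: within each trial, recode the 1/0 treatment indicator as 1/0 minus that trial's proportion of treated participants before fitting the one-stage model. The sketch below shows the recoding and an illustrative one-stage linear mixed model with stratified study intercepts and a random treatment effect; the data and model details are assumptions, not the paper's analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
frames = []
for study in range(6):                                   # six trials of varying size
    n = rng.integers(80, 200)
    treat = rng.binomial(1, 0.5, n)
    effect = 0.5 + rng.normal(0, 0.15)                   # between-study heterogeneity
    y = rng.normal(0.2 * study + effect * treat, 1.0)    # continuous outcome
    frames.append(pd.DataFrame({"study": study, "treat": treat, "y": y}))
ipd = pd.concat(frames, ignore_index=True)

# Study-specific centering: 1/0 minus the study-specific proportion treated
ipd["treat_cent"] = ipd["treat"] - ipd.groupby("study")["treat"].transform("mean")

# One-stage model: stratified (fixed) study intercepts, random treatment effect by study
fit = smf.mixedlm("y ~ C(study) + treat_cent", data=ipd,
                  groups=ipd["study"], re_formula="0 + treat_cent").fit(reml=True)
print("summary treatment effect:", round(fit.fe_params["treat_cent"], 3))
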
17.
BMC Med Res Methodol ; 20(1): 134, 2020 05 29.
Article in English | MEDLINE | ID: mdl-32471366

ABSTRACT

BACKGROUND: Missing data in covariates can result in biased estimates and loss of power to detect associations. Missing data can also lead to other challenges in time-to-event analyses, including the handling of time-varying effects of covariates, selection of covariates and their flexible modelling. This review aims to describe how researchers approach time-to-event analyses with missing data. METHODS: Medline and Embase were searched for observational time-to-event studies in oncology published from January 2012 to January 2018. The review focused on proportional hazards models or extended Cox models. We investigated the extent and reporting of missing data and how these were addressed in the analysis. Covariate modelling and selection, and assessment of the proportional hazards assumption were also investigated, alongside the treatment of missing data in these procedures. RESULTS: 148 studies were included. The mean proportion of individuals with missingness in any covariate was 32%. 53% of studies used complete-case analysis, and 22% used multiple imputation. In total, 14% of studies stated an assumption concerning missing data and only 34% stated missingness as a limitation. The proportional hazards assumption was checked in 28% of studies, of which 17% did not state the assessment method. 58% of 144 multivariable models stated their covariate selection procedure, with use of a pre-selected set of covariates being the most popular, followed by stepwise methods and univariable analyses. Of 69 studies that included continuous covariates, 81% did not assess the appropriateness of the functional form. CONCLUSION: While guidelines for handling missing data in epidemiological studies are in place, this review indicates that few report implementing recommendations in practice. Although missing data are present in many studies, we found that few state clearly how missing data were handled or what assumptions were made. Easy-to-implement but potentially biased approaches such as complete-case analysis are most commonly used despite relying on strong assumptions, even where more appropriate methods should be employed. Authors should be encouraged to follow existing guidelines to address missing data, and increased levels of expectation from journals and editors could be used to improve practice.


Subjects
Medical Oncology , Research , Data Interpretation, Statistical , Humans , Proportional Hazards Models
18.
BMC Med Res Methodol ; 20(1): 208, 2020 08 12.
Article in English | MEDLINE | ID: mdl-32787782

ABSTRACT

BACKGROUND: The coronavirus pandemic (Covid-19) presents a variety of challenges for ongoing clinical trials, including an inevitably higher rate of missing outcome data, with new and non-standard reasons for missingness. International drug trial guidelines recommend trialists review plans for handling missing data in the conduct and statistical analysis, but clear recommendations are lacking. METHODS: We present a four-step strategy for handling missing outcome data in the analysis of randomised trials that are ongoing during a pandemic. We consider handling missing data arising due to (i) participant infection, (ii) treatment disruptions and (iii) loss to follow-up. We consider both settings where treatment effects for a 'pandemic-free world' and 'world including a pandemic' are of interest. RESULTS: In any trial, investigators should: (1) Clarify the treatment estimand of interest with respect to the occurrence of the pandemic; (2) Establish what data are missing for the chosen estimand; (3) Perform primary analysis under the most plausible missing data assumptions, followed by (4) Sensitivity analysis under alternative plausible assumptions. To obtain an estimate of the treatment effect in a 'pandemic-free world', participant data that are clinically affected by the pandemic (directly due to infection or indirectly via treatment disruptions) are not relevant and can be set to missing. For primary analysis, a missing-at-random assumption that conditions on all observed data that are expected to be associated with both the outcome and missingness may be most plausible. For the treatment effect in the 'world including a pandemic', all participant data are relevant and should be included in the analysis. For primary analysis, a missing-at-random assumption - potentially incorporating a pandemic time-period indicator and participant infection status - or a missing-not-at-random assumption with a poorer response may be most relevant, depending on the setting. In all scenarios, sensitivity analysis under credible missing-not-at-random assumptions should be used to evaluate the robustness of results. We highlight controlled multiple imputation as an accessible tool for conducting sensitivity analyses. CONCLUSIONS: Missing data problems will be exacerbated for trials active during the Covid-19 pandemic. This four-step strategy will facilitate clear thinking about the appropriate analysis for relevant questions of interest.


Subjects
Outcome Assessment, Health Care/statistics & numerical data , Practice Guidelines as Topic , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Betacoronavirus/physiology , COVID-19 , Comorbidity , Coronavirus Infections/epidemiology , Coronavirus Infections/therapy , Coronavirus Infections/virology , Humans , Outcome Assessment, Health Care/methods , Pandemics , Pneumonia, Viral/epidemiology , Pneumonia, Viral/therapy , Pneumonia, Viral/virology , Randomized Controlled Trials as Topic/methods , Reproducibility of Results , SARS-CoV-2
19.
Eur J Epidemiol ; 35(7): 619-630, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32445007

ABSTRACT

In this paper we study approaches for dealing with treatment when developing a clinical prediction model. Analogous to the estimand framework recently proposed by the European Medicines Agency for clinical trials, we propose a 'predictimand' framework of different questions that may be of interest when predicting risk in relation to treatment started after baseline. We provide a formal definition of the estimands matching these questions, give examples of settings in which each is useful and discuss appropriate estimators including their assumptions. We illustrate the impact of the predictimand choice in a dataset of patients with end-stage kidney disease. We argue that clearly defining the estimand is equally important in prediction research as in causal inference.


Subjects
Clinical Decision Rules , Clinical Trials as Topic/methods , Research Design , Clinical Trials as Topic/standards , Data Interpretation, Statistical , Humans , Models, Statistical
20.
Stat Med ; 38(11): 2074-2102, 2019 05 20.
Article in English | MEDLINE | ID: mdl-30652356

ABSTRACT

Simulation studies are computer experiments that involve creating data by pseudo-random sampling. A key strength of simulation studies is the ability to understand the behavior of statistical methods because some "truth" (usually some parameter/s of interest) is known from the process of generating the data. This allows us to consider properties of methods, such as bias. While widely used, simulation studies are often poorly designed, analyzed, and reported. This tutorial outlines the rationale for using simulation studies and offers guidance for design, execution, analysis, reporting, and presentation. In particular, this tutorial provides a structured approach for planning and reporting simulation studies, which involves defining aims, data-generating mechanisms, estimands, methods, and performance measures ("ADEMP"); coherent terminology for simulation studies; guidance on coding simulation studies; a critical discussion of key performance measures and their estimation; guidance on structuring tabular and graphical presentation of results; and new graphical presentations. With a view to describing recent practice, we review 100 articles taken from Volume 34 of Statistics in Medicine, which included at least one simulation study and identify areas for improvement.


Subjects
Computer Simulation , Models, Statistical , Bias , Biostatistics/methods , Guidelines as Topic , Monte Carlo Method , Research Design
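
The ADEMP structure summarised above maps directly onto code. The toy skeleton below labels the aim, data-generating mechanism, estimand, methods and performance measures for a deliberately simple example (sample mean versus 10% trimmed mean for skewed data) and reports bias together with its Monte Carlo standard error; everything about the example itself is an assumption chosen for brevity.

import numpy as np
from scipy import stats

# A: aim         - compare estimators of a population mean under skewed data
# D: data-gen    - lognormal(0, 1) samples of size 50
# E: estimand    - the true mean, exp(0.5)
# M: methods     - sample mean vs 10% trimmed mean
# P: performance - bias and empirical SE, each with a Monte Carlo SE

rng = np.random.default_rng(9)
n_sim, n, true_mean = 5_000, 50, np.exp(0.5)

est = {"mean": np.empty(n_sim), "trimmed": np.empty(n_sim)}
for i in range(n_sim):
    y = rng.lognormal(0.0, 1.0, n)
    est["mean"][i] = y.mean()
    est["trimmed"][i] = stats.trim_mean(y, 0.10)

for name, values in est.items():
    bias = values.mean() - true_mean
    emp_se = values.std(ddof=1)
    bias_mcse = emp_se / np.sqrt(n_sim)            # Monte Carlo SE of the bias estimate
    print(f"{name:8s} bias = {bias:+.4f} (MCSE {bias_mcse:.4f}), empirical SE = {emp_se:.4f}")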