Results 1 - 20 of 53
1.
Age Ageing; 53(5). 2024 May 1.
Article in English | MEDLINE | ID: mdl-38796315

ABSTRACT

INTRODUCTION: Community-based services to sustain independence for older people have varying configurations. A typology of these interventions would improve service provision and research by providing conceptual clarity and enabling the identification of effective configurations. We aimed to produce such a typology. METHOD: We developed our typology by qualitatively synthesising community-based complex interventions to sustain independence in older people, evaluated in randomised controlled trials (RCTs), in four stages: (i) systematically identifying relevant RCTs; (ii) extracting descriptions of interventions (including control) using the Template for Intervention Description and Replication; (iii) generating categories of key intervention features and (iv) grouping the interventions based on these categories. PROSPERO registration: CRD42019162195. RESULTS: Our search identified 129 RCTs involving 266 intervention arms. The Community-based complex Interventions to sustain Independence in Older People (CII-OP) typology comprises 14 action components and 5 tailoring components. Action components include procedures for treating patients or otherwise intended to directly improve their outcomes; regular examples include formal homecare; physical exercise; health education; activities of daily living training; providing aids and adaptations and nutritional support. Tailoring components involve a process that may result in care planning, with multiple action components being planned, recommended or prescribed. Multifactorial action from care planning was the most common tailoring component. It involves individualised, multidomain assessment and management, as in comprehensive geriatric assessment. Sixty-three different intervention types (combinations) were identified. CONCLUSIONS: Our typology provides an empirical basis for service planning and evidence synthesis. We recommend better reporting about organisational aspects of interventions and usual care.


Subjects
Activities of Daily Living, Community Health Services, Independent Living, Randomized Controlled Trials as Topic, Humans, Aged, Community Health Services/organization & administration, Health Services for the Aged/organization & administration, Aged 80 and over, Functional Status, Male, Female, Aging, Age Factors, Home Care Services/organization & administration
2.
Stat Med; 42(27): 5007-5024, 2023 Nov 30.
Article in English | MEDLINE | ID: mdl-37705296

ABSTRACT

We have previously proposed temporal recalibration to account for trends in survival over time to improve the calibration of predictions from prognostic models for new patients. This involves first estimating the predictor effects using data from all individuals (full dataset) and then re-estimating the baseline using a subset of the most recent data whilst constraining the predictor effects to remain the same. In this article, we demonstrate how temporal recalibration can be applied in competing risk settings by recalibrating each cause-specific (or subdistribution) hazard model separately. We illustrate this using an example of colon cancer survival with data from the Surveillance Epidemiology and End Results (SEER) program. Data from patients diagnosed in 1995-2004 were used to fit two models for deaths due to colon cancer and other causes respectively. We discuss considerations that need to be made in order to apply temporal recalibration such as the choice of data used in the recalibration step. We also demonstrate how to assess the calibration of these models in new data for patients diagnosed subsequently in 2005. Comparison was made to a standard analysis (when improvements over time are not taken into account) and a period analysis which is similar to temporal recalibration but differs in the data used to estimate the predictor effects. The 10-year calibration plots demonstrated that using the standard approach over-estimated the risk of death due to colon cancer and the total risk of death and that calibration was improved using temporal recalibration or period analysis.
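
As a schematic (my notation, not the authors'), the recalibration described above can be written for each cause-specific hazard k as

    \lambda_k(t \mid \mathbf{x}) = \lambda_{0k}^{\text{recal}}(t)\,\exp\!\big(\mathbf{x}^{\top}\hat{\boldsymbol{\beta}}_k\big),

where \hat{\boldsymbol{\beta}}_k is the predictor-effect vector estimated from the full dataset (patients diagnosed in 1995-2004 in the example) and \lambda_{0k}^{\text{recal}}(t) is the baseline hazard re-estimated from the most recent subset of the data with \hat{\boldsymbol{\beta}}_k held fixed (for example, entered as an offset). The subdistribution-hazard version is analogous.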


Subjects
Colonic Neoplasms, Humans, Calibration, Prognosis, Proportional Hazards Models, Colonic Neoplasms/diagnosis
3.
Stat Med; 41(24): 4822-4837, 2022 Oct 30.
Article in English | MEDLINE | ID: mdl-35932153

ABSTRACT

Before embarking on an individual participant data meta-analysis (IPDMA) project, researchers and funders need assurance it is worth their time and cost. This should include consideration of how many studies are promising their IPD and, given the characteristics of these studies, the power of an IPDMA including them. Here, we show how to estimate the power of a planned IPDMA of randomized trials to examine treatment-covariate interactions at the participant level (ie, treatment effect modifiers). We focus on a binary outcome with binary or continuous covariates, and propose a three-step approach, which assumes the true interaction size is common to all trials. In step one, the user must specify a minimally important interaction size and, for each trial separately (eg, as obtained from trial publications), the following aggregate data: the number of participants and events in control and treatment groups, the mean and SD for each continuous covariate, and the proportion of participants in each category for each binary covariate. This allows the variance of the interaction estimate to be calculated for each trial, using an analytic solution for Fisher's information matrix from a logistic regression model. Step 2 calculates the variance of the summary interaction estimate from the planned IPDMA (equal to the inverse of the sum of the inverse trial variances from step 1), and step 3 calculates the corresponding power based on a two-sided Wald test. Stata and R code are provided, and two examples given for illustration. Extension to allow for between-study heterogeneity is also considered.
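
A minimal Python sketch of steps 2 and 3. The per-trial variances from step 1 are taken as given here (in the paper they come from the analytic Fisher information of a logistic model); the function name and example numbers are hypothetical.

    import numpy as np
    from scipy.stats import norm

    def ipdma_interaction_power(trial_variances, interaction_size, alpha=0.05):
        # Step 2: variance of the summary (common-effect) interaction estimate is the
        # inverse of the sum of the inverse per-trial variances.
        v = np.asarray(trial_variances, dtype=float)
        summary_var = 1.0 / np.sum(1.0 / v)
        se = np.sqrt(summary_var)
        # Step 3: power of a two-sided Wald test for the minimally important interaction.
        z_crit = norm.ppf(1 - alpha / 2)
        z = interaction_size / se
        power = norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)
        return summary_var, power

    # Hypothetical example: five trials and a minimally important log-odds-ratio interaction of 0.3.
    print(ipdma_interaction_power([0.08, 0.12, 0.05, 0.20, 0.10], 0.3))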


Subjects
Data Analysis, Statistical Models, Humans, Randomized Controlled Trials as Topic
4.
Stat Med; 41(7): 1280-1295, 2022 Mar 30.
Article in English | MEDLINE | ID: mdl-34915593

ABSTRACT

Previous articles in Statistics in Medicine describe how to calculate the sample size required for external validation of prediction models with continuous and binary outcomes. The minimum sample size criteria aim to ensure precise estimation of key measures of a model's predictive performance, including measures of calibration, discrimination, and net benefit. Here, we extend the sample size guidance to prediction models with a time-to-event (survival) outcome, to cover external validation in datasets containing censoring. A simulation-based framework is proposed, which calculates the sample size required to target a particular confidence interval width for the calibration slope measuring the agreement between predicted risks (from the model) and observed risks (derived using pseudo-observations to account for censoring) on the log cumulative hazard scale. Precise estimation of calibration curves, discrimination, and net-benefit can also be checked in this framework. The process requires assumptions about the validation population in terms of the (i) distribution of the model's linear predictor and (ii) event and censoring distributions. Existing information can inform this; in particular, the linear predictor distribution can be approximated using the C-index or Royston's D statistic from the model development article, together with the overall event risk. We demonstrate how the approach can be used to calculate the sample size required to validate a prediction model for recurrent venous thromboembolism. Ideally the sample size should ensure precise calibration across the entire range of predicted risks, but must at least ensure adequate precision in regions important for clinical decision-making. Stata and R code are provided.
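
A rough Python sketch of the simulation idea. It draws the linear predictor from a normal distribution, simulates exponential event and censoring times, and summarises calibration-slope precision by refitting a Cox model with the linear predictor as the only covariate. The published framework instead works with pseudo-observations on the log cumulative hazard scale, so the distributions, parameter values and the slope proxy here are all simplifying assumptions.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2024)

    def median_ci_width_for_slope(n, lp_mean=-2.0, lp_sd=1.0, base_rate=0.05,
                                  cens_rate=0.03, n_sim=200):
        # Median 95% CI width for the calibration slope at a candidate sample size n.
        widths = []
        for _ in range(n_sim):
            lp = rng.normal(lp_mean, lp_sd, n)                       # model's linear predictor
            t_event = rng.exponential(1.0 / (base_rate * np.exp(lp)))
            t_cens = rng.exponential(1.0 / cens_rate, n)
            data = pd.DataFrame({
                "lp": lp,
                "time": np.minimum(t_event, t_cens),
                "event": (t_event <= t_cens).astype(int),
            })
            cph = CoxPHFitter().fit(data, duration_col="time", event_col="event")
            widths.append(2 * 1.96 * cph.standard_errors_["lp"])    # slope = coefficient of lp
        return float(np.median(widths))

    # Increase n until the width meets the target, e.g. print(median_ci_width_for_slope(1000)).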


Subjects
Statistical Models, Calibration, Computer Simulation, Humans, Prognosis, Sample Size
5.
Stat Med; 40(19): 4230-4251, 2021 Aug 30.
Article in English | MEDLINE | ID: mdl-34031906

ABSTRACT

In prediction model research, external validation is needed to examine an existing model's performance using data independent to that for model development. Current external validation studies often suffer from small sample sizes and consequently imprecise predictive performance estimates. To address this, we propose how to determine the minimum sample size needed for a new external validation study of a prediction model for a binary outcome. Our calculations aim to precisely estimate calibration (Observed/Expected and calibration slope), discrimination (C-statistic), and clinical utility (net benefit). For each measure, we propose closed-form and iterative solutions for calculating the minimum sample size required. These require specifying: (i) target SEs (confidence interval widths) for each estimate of interest, (ii) the anticipated outcome event proportion in the validation population, (iii) the prediction model's anticipated (mis)calibration and variance of linear predictor values in the validation population, and (iv) potential risk thresholds for clinical decision-making. The calculations can also be used to inform whether the sample size of an existing (already collected) dataset is adequate for external validation. We illustrate our proposal for external validation of a prediction model for mechanical heart valve failure with an expected outcome event proportion of 0.018. Calculations suggest at least 9835 participants (177 events) are required to precisely estimate the calibration and discrimination measures, with this number driven by the calibration slope criterion, which we anticipate will often be the case. Also, 6443 participants (116 events) are required to precisely estimate net benefit at a risk threshold of 8%. Software code is provided.
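
As one illustration, a sketch of the closed-form criterion for the O/E ratio, under the assumption (as I read the approach) that var(ln(O/E)) is approximately (1 - phi)/(n * phi), with phi the anticipated outcome event proportion. The calibration slope, C-statistic and net-benefit criteria have their own formulae, and the largest resulting n is the one to adopt.

    import math

    def n_for_oe_precision(phi, ci_width_ln_oe):
        # Sample size so that ln(O/E) is estimated with the target 95% CI width.
        target_se = ci_width_ln_oe / (2 * 1.96)
        n = (1 - phi) / (phi * target_se ** 2)
        return math.ceil(n), math.ceil(n * phi)   # participants, expected events

    # Hypothetical target: CI width of 1.0 for ln(O/E) with an event proportion of 0.018.
    print(n_for_oe_precision(0.018, 1.0))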


Subjects
Statistical Models, Theoretical Models, Calibration, Humans, Prognosis, Sample Size
6.
Stat Med; 40(1): 133-146, 2021 Jan 15.
Article in English | MEDLINE | ID: mdl-33150684

ABSTRACT

Clinical prediction models provide individualized outcome predictions to inform patient counseling and clinical decision making. External validation is the process of examining a prediction model's performance in data independent to that used for model development. Current external validation studies often suffer from small sample sizes, and subsequently imprecise estimates of a model's predictive performance. To address this, we propose how to determine the minimum sample size needed for external validation of a clinical prediction model with a continuous outcome. Four criteria are proposed, that target precise estimates of (i) R2 (the proportion of variance explained), (ii) calibration-in-the-large (agreement between predicted and observed outcome values on average), (iii) calibration slope (agreement between predicted and observed values across the range of predicted values), and (iv) the variance of observed outcome values. Closed-form sample size solutions are derived for each criterion, which require the user to specify anticipated values of the model's performance (in particular R2 ) and the outcome variance in the external validation dataset. A sensible starting point is to base values on those for the model development study, as obtained from the publication or study authors. The largest sample size required to meet all four criteria is the recommended minimum sample size needed in the external validation dataset. The calculations can also be applied to estimate expected precision when an existing dataset with a fixed sample size is available, to help gauge if it is adequate. We illustrate the proposed methods on a case-study predicting fat-free mass in children.
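
A hedged sketch of the kind of calculation involved, using the standard large-sample approximation var(R2_hat) ~= 4*R2*(1 - R2)^2/n as a stand-in for criterion (i); this is not necessarily the paper's exact closed form. The anticipated R2 would be taken from the development study, and the other three criteria must also be checked, with the largest n adopted.

    import math

    def n_for_r2_precision(anticipated_r2, ci_width):
        # n such that the validation R-squared is estimated with the chosen 95% CI width,
        # assuming var(R2_hat) ~= 4 * R2 * (1 - R2)^2 / n (large-sample approximation).
        target_se = ci_width / (2 * 1.96)
        return math.ceil(4 * anticipated_r2 * (1 - anticipated_r2) ** 2 / target_se ** 2)

    # Hypothetical example: anticipated R2 of 0.5 estimated to within +/- 0.1 (CI width 0.2).
    print(n_for_r2_precision(0.5, 0.2))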


Subjects
Statistical Models, Calibration, Child, Humans, Prognosis, Sample Size
7.
Stat Med; 40(13): 3066-3084, 2021 Jun 15.
Article in English | MEDLINE | ID: mdl-33768582

ABSTRACT

Individual participant data (IPD) from multiple sources allows external validation of a prognostic model across multiple populations. Often this reveals poor calibration, potentially causing poor predictive performance in some populations. However, rather than discarding the model outright, it may be possible to modify the model to improve performance using recalibration techniques. We use IPD meta-analysis to identify the simplest method to achieve good model performance. We examine four options for recalibrating an existing time-to-event model across multiple populations: (i) shifting the baseline hazard by a constant, (ii) re-estimating the shape of the baseline hazard, (iii) adjusting the prognostic index as a whole, and (iv) adjusting individual predictor effects. For each strategy, IPD meta-analysis examines (heterogeneity in) model performance across populations. Additionally, the probability of achieving good performance in a new population can be calculated allowing ranking of recalibration methods. In an applied example, IPD meta-analysis reveals that the existing model had poor calibration in some populations, and large heterogeneity across populations. However, re-estimation of the intercept substantially improved the expected calibration in new populations, and reduced between-population heterogeneity. Comparing recalibration strategies showed that re-estimating both the magnitude and shape of the baseline hazard gave the highest predicted probability of good performance in a new population. In conclusion, IPD meta-analysis allows a prognostic model to be externally validated in multiple settings, and enables recalibration strategies to be compared and ranked to decide on the least aggressive recalibration strategy to achieve acceptable external model performance without discarding existing model information.
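
Writing LP = x'beta-hat for the existing model's prognostic index, the four recalibration options can be sketched schematically (my notation, not the authors') as

    \text{(i) shift baseline by a constant:}\quad \lambda(t \mid \mathbf{x}) = e^{\hat\alpha}\,\hat\lambda_0(t)\,e^{\mathrm{LP}}
    \text{(ii) re-estimate baseline shape:}\quad \lambda(t \mid \mathbf{x}) = \lambda_0^{\text{new}}(t)\,e^{\mathrm{LP}}
    \text{(iii) recalibrate the prognostic index:}\quad \lambda(t \mid \mathbf{x}) = \lambda_0^{\text{new}}(t)\,e^{\hat\gamma\,\mathrm{LP}}
    \text{(iv) re-estimate predictor effects:}\quad \lambda(t \mid \mathbf{x}) = \lambda_0^{\text{new}}(t)\,e^{\mathbf{x}^{\top}\boldsymbol{\beta}^{\text{new}}},

with only a proportional constant re-estimated in (i), the baseline hazard in (ii), a single slope on the prognostic index in (iii), and all predictor effects in (iv).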


Subjects
Data Analysis, Research Design, Calibration, Humans, Meta-Analysis as Topic, Probability, Prognosis
8.
Int J Clin Pract; 75(10): e14345, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33973320

ABSTRACT

AIM: To identify existing comorbidity measures and summarise their association with acute coronary syndrome (ACS) outcomes. METHODS: We searched published studies from MEDLINE (OVIDSP) and EMBASE from inception to March 2021, studies of the pre-specified conference proceedings from Web of Science since May 2017, and studies included in any relevant systematic reviews. Studies that reported no comorbidity measures, no association of comorbid burden with ACS outcomes, or only used a comorbidity measure as a confounder without further information were excluded. After independent screening by three reviewers, data extraction and risk of bias assessment of each included study was undertaken. Results were narratively synthesised. RESULTS: Of 4166 potentially eligible studies identified, 12 (combined n = 6 885 982 participants) were included. Most studies had a high risk of bias at quality assessment. Six different types of comorbidity measures were identified, with the Charlson comorbidity index (CCI) the most widely used measure among studies. Overall, the greater the comorbid burden or the higher the comorbidity scores recorded, the greater was the association with the risk of mortality. CONCLUSION: The review summarised different comorbidity measures and reported that higher comorbidity scores were associated with worse ACS outcomes. The CCI is the most widely used measure of comorbid burden and shows additive value to the clinical risk scores in current use.


Subjects
Acute Coronary Syndrome, Acute Coronary Syndrome/epidemiology, Comorbidity, Humans, Prognosis, Risk Factors
9.
Stat Med; 39(19): 2536-2555, 2020 Aug 30.
Article in English | MEDLINE | ID: mdl-32394498

ABSTRACT

A one-stage individual participant data (IPD) meta-analysis synthesizes IPD from multiple studies using a general or generalized linear mixed model. This produces summary results (eg, about treatment effect) in a single step, whilst accounting for clustering of participants within studies (via a stratified study intercept, or random study intercepts) and between-study heterogeneity (via random treatment effects). We use simulation to evaluate the performance of restricted maximum likelihood (REML) and maximum likelihood (ML) estimation of one-stage IPD meta-analysis models for synthesizing randomized trials with continuous or binary outcomes. Three key findings are identified. First, for ML or REML estimation of stratified intercept or random intercepts models, a t-distribution based approach generally improves coverage of confidence intervals for the summary treatment effect, compared with a z-based approach. Second, when using ML estimation of a one-stage model with a stratified intercept, the treatment variable should be coded using "study-specific centering" (ie, 1/0 minus the study-specific proportion of participants in the treatment group), as this reduces the bias in the between-study variance estimate (compared with 1/0 and other coding options). Third, REML estimation reduces downward bias in between-study variance estimates compared with ML estimation, and does not depend on the treatment variable coding; for binary outcomes, this requires REML estimation of the pseudo-likelihood, although this may not be stable in some situations (eg, when data are sparse). Two applied examples are used to illustrate the findings.
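
A small pandas sketch of the "study-specific centering" coding recommended above (the column names and values are illustrative only).

    import pandas as pd

    ipd = pd.DataFrame({                      # one row per randomised participant
        "study": [1, 1, 1, 2, 2, 2, 2],
        "treat": [1, 0, 1, 1, 1, 0, 0],       # 1 = treatment, 0 = control
        "y":     [3.1, 2.4, 2.9, 4.0, 3.8, 3.5, 3.2],
    })

    # 1/0 minus the study-specific proportion of participants in the treatment group.
    ipd["treat_centered"] = ipd["treat"] - ipd.groupby("study")["treat"].transform("mean")

    # treat_centered then replaces treat in the one-stage stratified-intercept model when
    # ML estimation is used, which reduces bias in the between-study variance estimate.
    print(ipd)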


Subjects
Statistical Models, Bias, Cluster Analysis, Computer Simulation, Humans, Linear Models
10.
Stat Med; 39(15): 2115-2137, 2020 Jul 10.
Article in English | MEDLINE | ID: mdl-32350891

ABSTRACT

Precision medicine research often searches for treatment-covariate interactions, which occur when a treatment effect (eg, measured as a mean difference, odds ratio, hazard ratio) changes across values of a participant-level covariate (eg, age, gender, biomarker). Single trials do not usually have sufficient power to detect genuine treatment-covariate interactions, which motivates the sharing of individual participant data (IPD) from multiple trials for meta-analysis. Here, we provide statistical recommendations for conducting and planning an IPD meta-analysis of randomized trials to examine treatment-covariate interactions. For conduct, two-stage and one-stage statistical models are described, and we recommend: (i) interactions should be estimated directly, and not by calculating differences in meta-analysis results for subgroups; (ii) interaction estimates should be based solely on within-study information; (iii) continuous covariates and outcomes should be analyzed on their continuous scale; (iv) nonlinear relationships should be examined for continuous covariates, using a multivariate meta-analysis of the trend (eg, using restricted cubic spline functions); and (v) translation of interactions into clinical practice is nontrivial, requiring individualized treatment effect prediction. For planning, we describe first why the decision to initiate an IPD meta-analysis project should not be based on between-study heterogeneity in the overall treatment effect; and second, how to calculate the power of a potential IPD meta-analysis project in advance of IPD collection, conditional on characteristics (eg, number of participants, standard deviation of covariates) of the trials (potentially) promising their IPD. Real IPD meta-analysis projects are used for illustration throughout.


Subjects
Data Analysis, Statistical Models, Humans, Meta-Analysis as Topic, Proportional Hazards Models
11.
Br J Sports Med; 54(1): 13-22, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31186258

ABSTRACT

BACKGROUND: Despite absence of evidence of a clinical benefit of arthroscopic partial meniscectomy (APM), many surgeons claim that subgroups of patients benefit from APM. OBJECTIVE: We developed a prognostic model predicting change in patient-reported outcome 1 year following arthroscopic meniscal surgery to identify such subgroups. METHODS: We included 641 patients (age 48.7 years (SD 13), 56% men) undergoing arthroscopic meniscal surgery from the Knee Arthroscopy Cohort Southern Denmark. 18 preoperative factors identified from literature and/or orthopaedic surgeons (patient demographics, medical history, symptom onset and duration, knee-related symptoms, etc) were combined in a multivariable linear regression model. The outcome was change in Knee injury and Osteoarthritis Outcome Score (KOOS4) (average score of 4 of 5 KOOS subscales excluding the activities of daily living subscale) from presurgery to 52 weeks after surgery. A positive KOOS4 change score constitutes improvement. Prognostic performance was assessed using R2 statistics and calibration plots and was internally validated by adjusting for optimism using 1000 bootstrap samples. RESULTS: Patients improved on average 18.6 (SD 19.7, range -38.0 to 87.8) in KOOS4. The strongest prognostic factors for improvement were (1) no previous meniscal surgery on index knee and (2) more severe preoperative knee-related symptoms. The model's overall predictive performance was low (apparent R2=0.162, optimism adjusted R2=0.080) and it showed poor calibration (calibration-in-the-large=0.205, calibration slope=0.772). CONCLUSION: Despite combining a large number of preoperative factors presumed clinically relevant, change in patient-reported outcome 1 year following meniscal surgery was not predictable. This essentially quashes the existence of 'subgroups' with certain characteristics having a particularly favourable outcome after meniscal surgery. TRIAL REGISTRATION NUMBER: NCT01871272.
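
For readers unfamiliar with the internal validation step mentioned above, a generic sketch of bootstrap optimism adjustment for R2 (the standard Harrell-style procedure, not the authors' code; the model and data inputs here are placeholders).

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    def optimism_adjusted_r2(X, y, n_boot=1000, seed=1):
        # X: (n, p) array of preoperative factors; y: continuous outcome (e.g. change in KOOS4).
        rng = np.random.default_rng(seed)
        apparent = r2_score(y, LinearRegression().fit(X, y).predict(X))
        optimism = []
        n = len(y)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                    # bootstrap resample
            m = LinearRegression().fit(X[idx], y[idx])
            boot_apparent = r2_score(y[idx], m.predict(X[idx]))
            test_original = r2_score(y, m.predict(X))      # bootstrap model tested on original data
            optimism.append(boot_apparent - test_original)
        return apparent, apparent - float(np.mean(optimism))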


Subjects
Meniscectomy, Patient Reported Outcome Measures, Tibial Meniscus Injuries/surgery, Adolescent, Adult, Aged, Denmark, Female, Follow-Up Studies, Humans, Logistic Models, Male, Meniscectomy/adverse effects, Middle Aged, Postoperative Complications, Prospective Studies, Young Adult
12.
Stat Med; 38(7): 1276-1296, 2019 Mar 30.
Article in English | MEDLINE | ID: mdl-30357870

ABSTRACT

When designing a study to develop a new prediction model with binary or time-to-event outcomes, researchers should ensure their sample size is adequate in terms of the number of participants (n) and outcome events (E) relative to the number of predictor parameters (p) considered for inclusion. We propose that the minimum values of n and E (and subsequently the minimum number of events per predictor parameter, EPP) should be calculated to meet the following three criteria: (i) small optimism in predictor effect estimates as defined by a global shrinkage factor of ≥0.9, (ii) small absolute difference of ≤ 0.05 in the model's apparent and adjusted Nagelkerke's R2 , and (iii) precise estimation of the overall risk in the population. Criteria (i) and (ii) aim to reduce overfitting conditional on a chosen p, and require prespecification of the model's anticipated Cox-Snell R2 , which we show can be obtained from previous studies. The values of n and E that meet all three criteria provides the minimum sample size required for model development. Upon application of our approach, a new diagnostic model for Chagas disease requires an EPP of at least 4.8 and a new prognostic model for recurrent venous thromboembolism requires an EPP of at least 23. This reinforces why rules of thumb (eg, 10 EPP) should be avoided. Researchers might additionally ensure the sample size gives precise estimates of key predictor effects; this is especially important when key categorical predictors have few events in some categories, as this may substantially increase the numbers required.
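
A sketch of criterion (i) as I understand the published closed form (treat the formula as an assumption and check it against the paper or the pmsampsize software before use); p is the number of candidate predictor parameters and R2_CS the anticipated Cox-Snell R2.

    import math

    def n_for_shrinkage(p, r2_cs, shrinkage=0.9):
        # Minimum n so that the expected global shrinkage factor is at least `shrinkage`.
        return math.ceil(p / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage)))

    # Hypothetical example: 20 candidate parameters, anticipated Cox-Snell R2 of 0.2,
    # and an assumed outcome proportion of 0.3 to translate n into events and EPP.
    n, phi = n_for_shrinkage(20, 0.2), 0.3
    print(n, n * phi, n * phi / 20)   # participants, events, events per predictor parameter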


Subjects
Multivariate Analysis, Regression Analysis, Sample Size, Computer Simulation, Humans, Time
13.
Stat Med; 38(7): 1262-1275, 2019 Mar 30.
Article in English | MEDLINE | ID: mdl-30347470

ABSTRACT

In the medical literature, hundreds of prediction models are being developed to predict health outcomes in individuals. For continuous outcomes, typically a linear regression model is developed to predict an individual's outcome value conditional on values of multiple predictors (covariates). To improve model development and reduce the potential for overfitting, a suitable sample size is required in terms of the number of subjects (n) relative to the number of predictor parameters (p) for potential inclusion. We propose that the minimum value of n should meet the following four key criteria: (i) small optimism in predictor effect estimates as defined by a global shrinkage factor of ≥0.9; (ii) small absolute difference of ≤ 0.05 in the apparent and adjusted R2 ; (iii) precise estimation (a margin of error ≤ 10% of the true value) of the model's residual standard deviation; and similarly, (iv) precise estimation of the mean predicted outcome value (model intercept). The criteria require prespecification of the user's chosen p and the model's anticipated R2 as informed by previous studies. The value of n that meets all four criteria provides the minimum sample size required for model development. In an applied example, a new model to predict lung function in African-American women using 25 predictor parameters requires at least 918 subjects to meet all criteria, corresponding to at least 36.7 subjects per predictor parameter. Even larger sample sizes may be needed to additionally ensure precise estimates of key predictor effects, especially when important categorical predictors have low prevalence in certain categories.
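
As an illustration of criterion (ii): from the standard adjusted-R2 formula, the apparent-minus-adjusted gap equals p(1 - R2_adj)/(n - 1), so requiring a gap of at most delta gives n >= 1 + p(1 - R2_adj)/delta. A minimal sketch (the other criteria must also be checked and the largest n taken).

    import math

    def n_for_small_r2_gap(p, anticipated_adj_r2, delta=0.05):
        # Smallest n keeping the apparent-minus-adjusted R-squared difference <= delta.
        return math.ceil(1 + p * (1 - anticipated_adj_r2) / delta)

    # Hypothetical example: 25 predictor parameters, anticipated adjusted R2 of 0.2.
    print(n_for_small_r2_gap(25, 0.2))   # 401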


Subjects
Multivariate Analysis, Sample Size, Black or African American, Computer Simulation, Female, Humans, Respiratory Function Tests
14.
Stat Med; 37(29): 4404-4420, 2018 Dec 20.
Article in English | MEDLINE | ID: mdl-30101507

ABSTRACT

One-stage individual participant data meta-analysis models should account for within-trial clustering, but it is currently debated how to do this. For continuous outcomes modeled using a linear regression framework, two competing approaches are a stratified intercept or a random intercept. The stratified approach involves estimating a separate intercept term for each trial, whereas the random intercept approach assumes that trial intercepts are drawn from a normal distribution. Here, through an extensive simulation study for continuous outcomes, we evaluate the impact of using the stratified and random intercept approaches on statistical properties of the summary treatment effect estimate. Further aims are to compare (i) competing estimation options for the one-stage models, including maximum likelihood and restricted maximum likelihood, and (ii) competing options for deriving confidence intervals (CI) for the summary treatment effect, including the standard normal-based 95% CI, and more conservative approaches of Kenward-Roger and Satterthwaite, which inflate CIs to account for uncertainty in variance estimates. The findings reveal that, for an individual participant data meta-analysis of randomized trials with a 1:1 treatment:control allocation ratio and heterogeneity in the treatment effect, (i) bias and coverage of the summary treatment effect estimate are very similar when using stratified or random intercept models with restricted maximum likelihood, and thus either approach could be taken in practice, (ii) CIs are generally best derived using either a Kenward-Roger or Satterthwaite correction, although occasionally overly conservative, and (iii) if maximum likelihood is required, a random intercept performs better than a stratified intercept model. An illustrative example is provided.
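
A rough statsmodels sketch of the two competing one-stage specifications for a continuous outcome (the simulated data and column names are illustrative; note that statsmodels returns normal-based confidence intervals, so the Kenward-Roger and Satterthwaite corrections discussed above would need other software).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    trial = np.repeat(np.arange(6), 100)
    treat = rng.integers(0, 2, trial.size)
    y = 1.0 + 0.5 * treat + np.repeat(rng.normal(0, 0.3, 6), 100) + rng.normal(0, 1, trial.size)
    ipd = pd.DataFrame({"trial": trial, "treat": treat, "y": y})

    # (a) Stratified intercept: one fixed intercept per trial, plus a random treatment effect.
    stratified = smf.mixedlm("y ~ 0 + C(trial) + treat", data=ipd,
                             groups="trial", re_formula="0 + treat").fit(reml=True)

    # (b) Random intercepts: trial intercepts drawn from a normal distribution,
    #     again with a random treatment effect for between-trial heterogeneity.
    random_int = smf.mixedlm("y ~ treat", data=ipd,
                             groups="trial", re_formula="~treat").fit(reml=True)

    print(stratified.summary())
    print(random_int.summary())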


Subjects
Meta-Analysis as Topic, Statistical Models, Confidence Intervals, Statistical Data Interpretation, Humans, Likelihood Functions, Linear Models, Normal Distribution, Treatment Outcome
15.
BMC Med Res Methodol; 18(1): 41, 2018 May 18.
Article in English | MEDLINE | ID: mdl-29776399

ABSTRACT

BACKGROUND: Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. METHODS: The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. RESULTS: In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has < 60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. CONCLUSIONS: Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
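
A condensed Python sketch of the four steps for the continuous-outcome, treatment-BMI interaction setting described above. All numerical values, column names and the fixed-effect pooling are illustrative assumptions; in practice the per-trial sizes and parameter values would come from the publications of the trials promising their IPD, and many thousands of repetitions would be run.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    def two_stage_power(n_trials=14, n_per_trial=85, interaction=-0.1,
                        sd_resid=5.0, n_sim=500):
        hits = 0
        for _ in range(n_sim):
            est, var = [], []
            for _ in range(n_trials):                              # step (iii): simulate each trial
                bmic = rng.normal(0, 4, n_per_trial)               # BMI centred at its mean
                treat = rng.integers(0, 2, n_per_trial)
                y = (10 - 1.0 * treat + 0.2 * bmic
                     + interaction * treat * bmic
                     + rng.normal(0, sd_resid, n_per_trial))
                fit = smf.ols("y ~ treat * bmic",
                              data=pd.DataFrame({"y": y, "treat": treat, "bmic": bmic})).fit()
                est.append(fit.params["treat:bmic"])               # first stage: per-trial interaction
                var.append(fit.bse["treat:bmic"] ** 2)
            w = 1.0 / np.asarray(var)                              # second stage: inverse-variance pooling
            pooled = np.sum(w * np.asarray(est)) / np.sum(w)
            se = np.sqrt(1.0 / np.sum(w))
            hits += abs(pooled / se) > 1.96                        # two-sided test at the 5% level
        return hits / n_sim                                        # step (iv): estimated power

    print(two_stage_power())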


Subjects
Computer Simulation, Gestational Weight Gain/physiology, Overweight/prevention & control, Pregnancy Complications/prevention & control, Algorithms, Body Mass Index, Female, Humans, Statistical Models, Overweight/physiopathology, Pregnancy, Pregnancy Complications/physiopathology, Randomized Controlled Trials as Topic
16.
Stat Med; 36(5): 855-875, 2017 Feb 28.
Article in English | MEDLINE | ID: mdl-27747915

ABSTRACT

Meta-analysis using individual participant data (IPD) obtains and synthesises the raw, participant-level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta-analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual-level interactions, such as treatment-effect modifiers. There are two statistical approaches for conducting an IPD meta-analysis: one-stage and two-stage. The one-stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two-stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta-analysis model. There have been numerous comparisons of the one-stage and two-stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one-stage and two-stage IPD meta-analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one-stage or two-stage itself. We illustrate the concepts with recently published IPD meta-analyses, summarise key statistical software and provide recommendations for future IPD meta-analyses. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.


Subjects
Meta-Analysis as Topic, Statistics as Topic/methods, Cluster Analysis, Humans, Likelihood Functions, Statistical Models, Treatment Outcome
17.
Stat Med; 36(5): 772-789, 2017 Feb 28.
Article in English | MEDLINE | ID: mdl-27910122

ABSTRACT

Stratified medicine utilizes individual-level covariates that are associated with a differential treatment effect, also known as treatment-covariate interactions. When multiple trials are available, meta-analysis is used to help detect true treatment-covariate interactions by combining their data. Meta-regression of trial-level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta-analyses are preferable to examine interactions utilizing individual-level information. However, one-stage IPD models are often wrongly specified, such that interactions are based on amalgamating within- and across-trial information. We compare, through simulations and an applied example, fixed-effect and random-effects models for a one-stage IPD meta-analysis of time-to-event data where the goal is to estimate a treatment-covariate interaction. We show that it is crucial to centre patient-level covariates by their mean value in each trial, in order to separate out within-trial and across-trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta-analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is -0.011 (95% CI: -0.019 to -0.003; p = 0.004), and thus highly significant, when amalgamating within-trial and across-trial information. However, when separating within-trial from across-trial information, the interaction is -0.007 (95% CI: -0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta-analysts should only use within-trial information to examine individual predictors of treatment effect and that one-stage IPD models should separate within-trial from across-trial information to avoid ecological bias. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
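
A small pandas illustration of the recommended centring, using age as the effect modifier as in the epilepsy example (the column names and values are made up).

    import pandas as pd

    ipd = pd.DataFrame({                      # one row per participant
        "trial": [1, 1, 1, 2, 2, 2],
        "treat": [1, 0, 1, 0, 1, 0],
        "age":   [34, 52, 47, 61, 58, 40],
    })

    # Centre the covariate by its mean value in each trial ...
    ipd["age_mean"] = ipd.groupby("trial")["age"].transform("mean")
    ipd["age_within"] = ipd["age"] - ipd["age_mean"]

    # ... so that, in the one-stage model, treat:age_within carries only within-trial
    # information for the interaction, while treat:age_mean can be included as a separate
    # term to absorb across-trial information and avoid ecological bias.
    print(ipd)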


Subjects
Bias, Meta-Analysis as Topic, Statistical Models, Age Factors, Anticonvulsants/therapeutic use, Epidemiologic Confounding Factors, Epilepsy/drug therapy, Female, Humans, Male, Randomized Controlled Trials as Topic/methods, Randomized Controlled Trials as Topic/statistics & numerical data, Sex Factors, Treatment Outcome
18.
J Clin Epidemiol; 165: 111206, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37925059

ABSTRACT

OBJECTIVES: Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how risk of bias of included studies or datasets in IPD meta-analyses (IPDMAs) is assessed. We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies and provide recommendations for improvement. STUDY DESIGN AND SETTING: We searched PubMed (January 2018-May 2020) to identify IPDMAs of test accuracy and prediction models, then elicited whether each IPDMA assessed risk of bias of included studies and, if so, how assessments were reported and subsequently incorporated into the IPDMAs. RESULTS: Forty-nine IPDMAs were included. Nineteen of 27 (70%) test accuracy IPDMAs assessed risk of bias, compared to 5 of 22 (23%) prediction model IPDMAs. Seventeen of 19 (89%) test accuracy IPDMAs used Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2), but no tool was used consistently among prediction model IPDMAs. Of IPDMAs assessing risk of bias, 7 (37%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided details on the information sources (e.g., the original manuscript, IPD, primary investigators) used to inform judgments, and 4 (21%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided information on whether assessments were done before or after obtaining the IPD of the included studies or datasets. Of all included IPDMAs, only seven test accuracy IPDMAs (26%) and one prediction model IPDMA (5%) incorporated risk of bias assessments into their meta-analyses. For future IPDMA projects, we provide guidance on how to adapt tools such as the Prediction model Risk Of Bias ASsessment Tool (for prediction models) and QUADAS-2 (for test accuracy) to assess risk of bias of included primary studies and their IPD. CONCLUSION: Risk of bias assessments and their reporting need to be improved in IPDMAs of test accuracy and, especially, prediction model studies. Using recommended tools, both before and after IPD are obtained, will address this.


Subjects
Data Accuracy, Statistical Models, Humans, Prognosis, Bias
19.
BMJ; 384: e077764, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514079

ABSTRACT

OBJECTIVE: To synthesise evidence of the effectiveness of community based complex interventions, grouped according to their intervention components, to sustain independence for older people. DESIGN: Systematic review and network meta-analysis. DATA SOURCES: Medline, Embase, CINAHL, PsycINFO, CENTRAL, clinicaltrials.gov, and International Clinical Trials Registry Platform from inception to 9 August 2021 and reference lists of included studies. ELIGIBILITY CRITERIA: Randomised controlled trials or cluster randomised controlled trials with ≥24 weeks' follow-up studying community based complex interventions for sustaining independence in older people (mean age ≥65 years) living at home, with usual care, placebo, or another complex intervention as comparators. MAIN OUTCOMES: Living at home, activities of daily living (personal/instrumental), care home placement, and service/economic outcomes at 12 months. DATA SYNTHESIS: Interventions were grouped according to a specifically developed typology. Random effects network meta-analysis estimated comparative effects; Cochrane's revised tool (RoB 2) structured risk of bias assessment. Grading of recommendations assessment, development and evaluation (GRADE) network meta-analysis structured certainty assessment. RESULTS: The review included 129 studies (74 946 participants). Nineteen intervention components, including "multifactorial action from individualised care planning" (a process of multidomain assessment and management leading to tailored actions), were identified in 63 combinations. For living at home, compared with no intervention/placebo, evidence favoured multifactorial action from individualised care planning including medication review and regular follow-ups (routine review) (odds ratio 1.22, 95% confidence interval 0.93 to 1.59; moderate certainty); multifactorial action from individualised care planning including medication review without regular follow-ups (2.55, 0.61 to 10.60; low certainty); combined cognitive training, medication review, nutritional support, and exercise (1.93, 0.79 to 4.77; low certainty); and combined activities of daily living training, nutritional support, and exercise (1.79, 0.67 to 4.76; low certainty). Risk screening or the addition of education and self-management strategies to multifactorial action from individualised care planning and routine review with medication review may reduce odds of living at home. For instrumental activities of daily living, evidence favoured multifactorial action from individualised care planning and routine review with medication review (standardised mean difference 0.11, 95% confidence interval 0.00 to 0.21; moderate certainty). Two interventions may reduce instrumental activities of daily living: combined activities of daily living training, aids, and exercise; and combined activities of daily living training, aids, education, exercise, and multifactorial action from individualised care planning and routine review with medication review and self-management strategies. For personal activities of daily living, evidence favoured combined exercise, multifactorial action from individualised care planning, and routine review with medication review and self-management strategies (0.16, -0.51 to 0.82; low certainty). For homecare recipients, evidence favoured addition of multifactorial action from individualised care planning and routine review with medication review (0.60, 0.32 to 0.88; low certainty). 
High risk of bias and imprecise estimates meant that most evidence was low or very low certainty. Few studies contributed to each comparison, impeding evaluation of inconsistency and frailty. CONCLUSIONS: The intervention most likely to sustain independence is individualised care planning including medicines optimisation and regular follow-up reviews resulting in multifactorial action. Homecare recipients may particularly benefit from this intervention. Unexpectedly, some combinations may reduce independence. Further research is needed to investigate which combinations of interventions work best for different participants and contexts. REGISTRATION: PROSPERO CRD42019162195.


Subjects
Activities of Daily Living, Humans, Aged, Network Meta-Analysis
20.
Res Synth Methods; 14(6): 903-910, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37606180

ABSTRACT

Individual participant data meta-analysis (IPDMA) projects obtain, check, harmonise and synthesise raw data from multiple studies. When undertaking the meta-analysis, researchers must decide between a two-stage and a one-stage approach. In a two-stage approach, the IPD are first analysed separately within each study to obtain aggregate data (e.g., treatment effect estimates and standard errors); then, in the second stage, these aggregate data are combined in a standard meta-analysis model (e.g., common-effect or random-effects). In a one-stage approach, the IPD from all studies are analysed in a single step using an appropriate model that accounts for clustering of participants within studies and, potentially, between-study heterogeneity (e.g., a general or generalised linear mixed model). The best approach to take is debated in the literature, and so here we provide clearer guidance for a broad audience. Both approaches are important tools for IPDMA researchers and neither is a panacea. If most studies in the IPDMA are small (few participants or events), a one-stage approach is recommended due to using a more exact likelihood. However, in other situations, researchers can choose either approach, carefully following best practice. Some previous claims recommending to always use a one-stage approach are misleading, and the two-stage approach will often suffice for most researchers. When differences do arise between the two approaches, this is often caused by researchers using different modelling assumptions or estimation methods, rather than using one or two stages per se.


Subjects
Research, Humans, Linear Models, Cluster Analysis