Results 1 - 20 of 59
1.
Stat Med ; 42(18): 3114-3127, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37190904

ABSTRACT

Cox regression, a semi-parametric method of survival analysis, is extremely popular in biomedical applications. The proportional hazards assumption is a key requirement in the Cox model. To accommodate non-proportional hazards, we propose to parameterize the shape parameter of the baseline hazard function using an additional, separate Cox-regression term that depends on the vector of covariates. This parametrization retains the general form of the hazard function across strata and is similar to the one in Devarajan and Ebrahimi (Comput Stat Data Anal. 2011;55:667-676) in the case of the Weibull distribution, but differs for other hazard functions. We call this model the double-Cox model. We formally introduce the double-Cox model with shared frailty and investigate, by simulation, the estimation bias and the coverage of the proposed point and interval estimation methods for the Gompertz and the Weibull baseline hazards. For real-life applications with low frailty variance and a large number of clusters, the marginal likelihood estimation is almost unbiased, and the profile-likelihood-based confidence intervals provide good coverage for all model parameters. We also compare the results from the over-parametrized double-Cox model to those from the standard Cox model with frailty in the case of scale-only proportional hazards. The model is illustrated on an example of survival after a diagnosis of type 2 diabetes mellitus. The R programs for fitting the double-Cox model are available on GitHub.
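As a rough illustration of the idea (not the authors' R implementation, which is on GitHub), the sketch below assumes one plausible form of the double-Cox Weibull hazard: the usual proportional-hazards term multiplies the baseline hazard, while a second Cox-type term multiplies the shape parameter. All names and values are illustrative.

```python
import numpy as np

def double_cox_weibull_hazard(t, x, a0, b0, beta_scale, beta_shape):
    """Hazard at time t for covariate vector x under a double-Cox Weibull
    model (assumed form): shape a0*exp(x'beta_shape), with the classical
    Cox term exp(x'beta_scale) multiplying the Weibull baseline hazard."""
    shape = a0 * np.exp(x @ beta_shape)                  # covariate-dependent shape
    baseline = (shape / b0) * (t / b0) ** (shape - 1.0)  # Weibull baseline hazard
    return baseline * np.exp(x @ beta_scale)             # proportional-hazards part

# With beta_shape = 0 this reduces to an ordinary Weibull Cox PH model.
x = np.array([1.0, 0.0])
print(double_cox_weibull_hazard(2.0, x, a0=1.5, b0=3.0,
                                beta_scale=np.array([0.4, -0.2]),
                                beta_shape=np.array([0.0, 0.0])))
```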


Subjects
Diabetes Mellitus, Type 2; Frailty; Humans; Proportional Hazards Models; Likelihood Functions; Survival Analysis
2.
BMC Med Res Methodol ; 23(1): 146, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37344771

ABSTRACT

BACKGROUND: Cochran's Q statistic is routinely used for testing heterogeneity in meta-analysis. Its expected value (under an incorrect null distribution) is part of several popular estimators of the between-study variance, τ². Those applications generally do not account for use of the studies' estimated variances in the inverse-variance weights that define Q (more explicitly, Q_IV). Importantly, those weights make approximating the distribution of Q_IV rather complicated. METHODS: As an alternative, we are investigating a Q statistic, Q_F, whose constant weights use only the studies' arm-level sample sizes. For log-odds-ratio (LOR), log-relative-risk (LRR), and risk difference (RD) as the measures of effect, we study, by simulation, approximations to the distributions of Q_IV and Q_F, as the basis for tests of heterogeneity. RESULTS: The results show that: for LOR and LRR, a two-moment gamma approximation to the distribution of Q_F works well for small sample sizes, and an approximation based on an algorithm of Farebrother is recommended for larger sample sizes. For RD, the Farebrother approximation works very well, even for small sample sizes. For Q_IV, the standard chi-square approximation provides levels that are much too low for LOR and LRR and too high for RD. The Kulinskaya et al. (Res Synth Methods 2:254-70, 2011) approximation for RD and the Kulinskaya and Dollinger (BMC Med Res Methodol 15:49, 2015) approximation for LOR work well for Q_IV but have some convergence issues for very small sample sizes combined with small probabilities. CONCLUSIONS: The performance of the standard chi-square approximation for Q_IV is inadequate for all three binary effect measures. Instead, we recommend a test of heterogeneity based on Q_F and provide practical guidelines for choosing an appropriate test at the .05 level for all three effect measures.
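To make the distinction concrete, here is a minimal sketch (with made-up data) of a Q statistic with constant weights; the effective sample size n_T*n_C/(n_T+n_C) is one natural choice built only from arm-level sizes, though not necessarily the paper's exact weights. Its null distribution would then be approximated by the two-moment gamma or Farebrother-type methods discussed above, not by a chi-square.

```python
import numpy as np

# Made-up per-study log-odds-ratios and arm-level sample sizes.
lor = np.array([0.35, 0.10, 0.62, -0.05])
n_t = np.array([40, 120, 55, 200])   # treatment-arm sizes
n_c = np.array([42, 118, 60, 195])   # control-arm sizes

# Constant weights: depend only on sample sizes, not on estimated variances.
w = n_t * n_c / (n_t + n_c)

theta_hat = np.sum(w * lor) / np.sum(w)   # weighted mean effect
q_f = np.sum(w * (lor - theta_hat) ** 2)  # Q_F with constant weights
print(q_f)
```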


Subjects
Algorithms; Humans; Computer Simulation; Probability; Odds Ratio; Sample Size
3.
BMC Biol ; 19(1): 33, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33596922

ABSTRACT

BACKGROUND: Meta-analysis is often used to make generalisations across all available evidence at the global scale. But how can these global generalisations be used for evidence-based decision making at the local scale, if the global evidence is not perceived to be relevant to local decisions? We show how an interactive method of meta-analysis, dynamic meta-analysis, can be used to assess the local relevance of global evidence. RESULTS: We developed Metadataset (www.metadataset.com) as a proof-of-concept for dynamic meta-analysis. Using Metadataset, we show how evidence can be filtered and weighted, and how results can be recalculated, using dynamic methods of subgroup analysis, meta-regression, and recalibration. With an example from agroecology, we show how dynamic meta-analysis could lead to different conclusions for different subsets of the global evidence. Dynamic meta-analysis could also lead to a rebalancing of power and responsibility in evidence synthesis, since evidence users would be able to make decisions that are typically made by systematic reviewers: decisions about which studies to include (e.g., critical appraisal) and how to handle missing or poorly reported data (e.g., sensitivity analysis). CONCLUSIONS: In this study, we show how dynamic meta-analysis can meet an important challenge in evidence-based decision making: using global evidence for local decisions. We suggest that dynamic meta-analysis can be used for subject-wide evidence synthesis in several scientific disciplines, including agroecology and conservation biology. Future studies should develop standardised classification systems for the metadata that are used to filter and weight the evidence. Future studies should also develop standardised software packages, so that researchers can efficiently publish dynamic versions of their meta-analyses and keep them up to date as living systematic reviews. Metadataset is an open-source proof-of-concept for this type of software. Future studies should improve the user experience, scale the software architecture, agree on standards for data and metadata storage and processing, and develop protocols for responsible evidence use.
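The core recalculation behind dynamic meta-analysis is simple to sketch. The toy example below (hypothetical data; Metadataset itself is a web application, and its weighting options are richer) filters studies on a metadata field and recomputes a pooled estimate for the chosen subset.

```python
import numpy as np

# Hypothetical study-level effects, variances, and a metadata field.
effects   = np.array([0.8, 0.2, 0.5, -0.1, 0.4])
variances = np.array([0.04, 0.09, 0.02, 0.08, 0.05])
country   = np.array(["UK", "UK", "BR", "UK", "BR"])

def pooled(eff, var):
    """Inverse-variance pooled estimate, recomputed after each filter."""
    w = 1.0 / var
    return np.sum(w * eff) / np.sum(w)

print("global estimate:", pooled(effects, variances))
mask = country == "UK"   # a user-chosen 'local relevance' filter
print("UK-only estimate:", pooled(effects[mask], variances[mask]))
```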


Subjects
Decision Making; Meta-Analysis as Topic; Research Design; Software; Humans
4.
J Stroke Cerebrovasc Dis ; 31(9): 106663, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35907306

ABSTRACT

OBJECTIVE: Transient ischaemic attacks (TIAs) serve as warning signs for future stroke, and the impact of a TIA on long-term survival is uncertain. We assessed the long-term hazards of all-cause mortality following a first TIA. DESIGN: Retrospective matched cohort study. METHODS: Cohort study using electronic primary health care records from The Health Improvement Network (THIN) database in the United Kingdom. Cases born in or before 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017 were matched to three controls on age, sex and general practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying double-Cox Weibull survival model with a random frailty effect of general practice, adjusting for socio-demographic factors, medical therapies and comorbidities. RESULTS: 20,633 cases and 58,634 controls were included. During the study period, 24,176 participants died: 7,745 (37.5%) cases and 16,431 (28.0%) controls. Cases aged 39-60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared with their matched controls (HR = 3.04 (2.91-3.18)). The HRs for cases aged 61-70, 71-76 and 77+ years were 1.98 (1.55-2.30), 1.79 (1.20-2.07) and 1.52 (1.15-1.97) compared with their same-aged matched controls. Among cases aged 39-60 at TIA onset, aspirin prescription was associated with reduced HRs of 0.93 (0.84-1.01), 0.90 (0.82-0.98) and 0.88 (0.80-0.96) at 5, 10 and 15 years, respectively, compared with same-aged cases not prescribed any antiplatelet. Statistically significant reductions in hazard ratios were observed with aspirin at 10 and 15 years in all age groups. Hazard ratio point estimates for other antiplatelets (dipyridamole or clopidogrel) and for dual antiplatelet therapy were very similar to those for aspirin at 5, 10 and 15 years, but with wider confidence intervals that included 1. There was no survival benefit associated with antiplatelet prescription in controls. CONCLUSIONS: The overall risk of death was considerably elevated in all age groups after a first-ever TIA. Aspirin prescription was associated with a reduced risk. These findings support the use of aspirin in secondary prevention for people with a TIA. The results do not support the use of antiplatelet medication in people without TIA.


Subjects
Ischemic Attack, Transient; Stroke; Aspirin/therapeutic use; Cohort Studies; Humans; Ischemic Attack, Transient/complications; Ischemic Attack, Transient/diagnosis; Ischemic Attack, Transient/therapy; Platelet Aggregation Inhibitors/therapeutic use; Retrospective Studies; Stroke/drug therapy; Stroke/therapy
5.
Stat Med ; 39(2): 171-191, 2020 01 30.
Article in English | MEDLINE | ID: mdl-31709582

ABSTRACT

Methods for random-effects meta-analysis require an estimate of the between-study variance, τ². The performance of estimators of τ² (measured by bias and coverage) affects their usefulness in assessing heterogeneity of study-level effects and also the performance of related estimators of the overall effect. However, as we show, the performance of the methods varies widely among effect measures. For the effect measures mean difference (MD) and standardized mean difference (SMD), we use improved effect-measure-specific approximations to the expected value of Q to introduce two new methods of point estimation of τ² for MD (Welch-type (WT) and corrected DerSimonian-Laird) and one WT interval method, together with one new point estimator and one new interval estimator for τ² for SMD. Extensive simulations compare our methods with four point estimators of τ² (the popular methods of DerSimonian-Laird, restricted maximum likelihood, and Mandel and Paule, and the less-familiar method of Jackson) and four interval estimators for τ² (profile likelihood, Q-profile, Biggerstaff and Jackson, and Jackson). We also study related point and interval estimators of the overall effect, including an estimator whose weights use only study-level sample sizes. We provide measure-specific recommendations from our comprehensive simulation study and discuss an example.
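For reference, one of the four familiar point estimators compared here, DerSimonian-Laird, fits in a few lines; the sketch below uses made-up mean differences and is not the paper's corrected variant.

```python
import numpy as np

def tau2_dersimonian_laird(y, v):
    """Classical DerSimonian-Laird moment estimator of tau^2, truncated at 0."""
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

y = np.array([0.30, 0.10, 0.55, 0.20])  # illustrative mean differences
v = np.array([0.02, 0.03, 0.05, 0.01])  # within-study variances
print(tau2_dersimonian_laird(y, v))
```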


Subjects
Likelihood Functions; Meta-Analysis as Topic; Computer Simulation; Humans
6.
BMC Med Res Methodol ; 20(1): 263, 2020 10 22.
Article in English | MEDLINE | ID: mdl-33092521

ABSTRACT

BACKGROUND: For outcomes that studies report as means in the treatment and control groups, some medical applications and nearly half of meta-analyses in ecology express the effect as the ratio of means (RoM), also called the response ratio (RR), analyzed in the logarithmic scale as the log-response-ratio, LRR. METHODS: In random-effects meta-analysis of LRR, with normal and lognormal data, we studied the performance (measured by bias and coverage) of estimators of the between-study variance, τ², in assessing heterogeneity of study-level effects, and also the performance of related estimators of the overall effect in the log scale, λ. We obtained additional empirical evidence from two examples. RESULTS: The results of our extensive simulations showed several challenges in using LRR as an effect measure. Point estimators of τ² had considerable bias or were unreliable, and interval estimators of τ² seldom had the intended 95% coverage for small to moderate-sized samples (n < 40). Results for estimating λ differed between lognormal and normal data. CONCLUSIONS: For lognormal data, we can recommend only SSW, a weighted average in which a study's weight is proportional to its effective sample size (when n ≥ 40), and its companion interval (when n ≥ 10). Normal data posed greater challenges. When the means were far enough from 0 (more than one standard deviation; 4 in our simulations), SSW was practically unbiased, and its companion interval was the only option.
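The SSW point estimator recommended above is easy to sketch. Below, a study's weight is taken to be its effective sample size n_T*n_C/(n_T+n_C) (an assumption consistent with the abstract's description); the data are made up, and SSW's companion interval is not reproduced here.

```python
import numpy as np

# Made-up group means and sample sizes for three studies.
mean_t = np.array([5.2, 4.8, 6.1]); n_t = np.array([25, 60, 14])
mean_c = np.array([4.0, 4.1, 4.4]); n_c = np.array([25, 55, 16])

lrr = np.log(mean_t / mean_c)      # log-response-ratio per study
n_eff = n_t * n_c / (n_t + n_c)    # effective sample size

# SSW: weights proportional to effective sample size, not inverse variance.
print(np.sum(n_eff * lrr) / np.sum(n_eff))
```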


Subjects
Sample Size; Humans
7.
BMC Med Res Methodol ; 19(1): 217, 2019 11 27.
Article in English | MEDLINE | ID: mdl-31775636

ABSTRACT

BACKGROUND: Continuous monitoring of surgical outcomes after joint replacement is needed to detect which brands' components have a higher-than-expected failure rate and are therefore no longer recommended for use in surgical practice. We developed a monitoring method based on the cumulative sum (CUSUM) chart specifically for this application. METHODS: Our method uses a competing-risks model with Weibull and Gompertz hazard functions, adjusted for observed covariates, to approximate the baseline time-to-revision and time-to-death distributions, respectively. Correlated shared frailty terms for the competing risks, corresponding to the operating unit, are also included in the model. A bootstrap-based boundary adjustment is then required for risk-adjusted CUSUM charts to guarantee a given false-alarm probability. We propose a method to evaluate the CUSUM scores and the adjusted boundary for a survival model with shared frailty terms. We also introduce a unit performance quality score based on the posterior frailty distribution. The method is illustrated using 2003-2012 hip replacement data from the UK National Joint Registry (NJR). RESULTS: We found that the best model included the shared frailty for revision but not for death, meaning that the competing risks of revision and death are independent in the NJR data. Our method was superior to the standard NJR methodology. For one of the two monitored components, it produced alarms four years before the increased failure rate came to the attention of the UK regulatory authorities. The hazard ratios of revision across the units varied from 0.38 to 2.28. CONCLUSIONS: The earlier detection of the failure signal by our method, in comparison with the standard method used by the NJR, may be explained by proper risk adjustment and the ability to accommodate time-dependent hazards. Continuous monitoring of hip replacement outcomes should include risk adjustment at both the individual and unit level.
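The CUSUM recursion at the heart of such monitoring is short; the sketch below uses made-up per-patient scores and a hand-picked boundary, whereas the paper derives the scores from the frailty survival model and calibrates the boundary by bootstrap.

```python
import numpy as np

def cusum_path(scores, boundary):
    """Upper CUSUM: accumulate scores, reset at zero, signal at the boundary."""
    s, path = 0.0, []
    for w in scores:
        s = max(0.0, s + w)
        path.append(s)
        if s >= boundary:   # alarm: evidence of a raised failure rate
            break
    return path

rng = np.random.default_rng(1)
scores = rng.normal(loc=0.05, scale=0.5, size=200)  # stand-in per-case scores
path = cusum_path(scores, boundary=4.0)
print(len(path), path[-1])
```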


Subjects
Arthroplasty, Replacement, Hip/mortality; Frailty/mortality; Risk Adjustment; Risk Assessment; Aged; Female; Humans; Male; Middle Aged; Probability; Registries; Reoperation; Survival Analysis; Treatment Outcome; United Kingdom
8.
BMC Med Res Methodol ; 18(1): 70, 2018 07 04.
Article in English | MEDLINE | ID: mdl-29973146

ABSTRACT

BACKGROUND: Systematic reviews and meta-analyses of binary outcomes are widespread in all areas of application. The odds ratio, in particular, is by far the most popular effect measure. However, the standard meta-analysis of odds ratios using a random-effects model has a number of potential problems. An attractive alternative approach for the meta-analysis of binary outcomes uses a class of generalized linear mixed models (GLMMs). GLMMs are believed to overcome the problems of the standard random-effects model because they use a correct binomial-normal likelihood. However, this belief is based on theoretical considerations, and no sufficiently extensive simulations have assessed the performance of GLMMs in meta-analysis. This gap may be due to the computational complexity of these models and the considerable time they require. METHODS: The present study is the first to provide extensive simulations on the performance of four GLMM methods (models with fixed and with random study effects, and two conditional methods) for meta-analysis of odds ratios, in comparison with the standard random-effects model. RESULTS: In our simulations, the hypergeometric-normal model provided less biased estimation of the heterogeneity variance than standard random-effects meta-analysis using restricted maximum likelihood (REML) estimation when the data were sparse, but the REML method performed similarly for point estimation of the odds ratio and better for interval estimation. CONCLUSIONS: It is difficult to recommend the use of GLMMs in the practice of meta-analysis. The problem of finding uniformly good methods of meta-analysis for binary outcomes remains open.
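The binomial-normal model underlying these GLMMs is easy to simulate, and simulations of this kind drive the comparisons above. The sketch below generates one meta-analysis data set under that model (all parameter values are illustrative); fitting the four GLMM variants is the computationally expensive part and is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_binomial_normal(k, n, p_c, theta, tau2):
    """One simulated meta-analysis: true study log-odds-ratios are
    N(theta, tau2); event counts are exactly binomial in both arms."""
    lor_i = rng.normal(theta, np.sqrt(tau2), size=k)
    logit_c = np.log(p_c / (1 - p_c))
    p_t = 1 / (1 + np.exp(-(logit_c + lor_i)))
    x_c = rng.binomial(n, p_c, size=k)   # control-arm events
    x_t = rng.binomial(n, p_t)           # treatment-arm events
    return x_t, x_c

print(simulate_binomial_normal(k=5, n=30, p_c=0.1, theta=0.5, tau2=0.1))
```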


Subjects
Algorithms; Likelihood Functions; Linear Models; Models, Statistical; Binomial Distribution; Computer Simulation; Humans; Odds Ratio
9.
Stat Med ; 36(11): 1715-1734, 2017 05 20.
Article in English | MEDLINE | ID: mdl-28124446

ABSTRACT

In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random-effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model arises naturally when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of the ICC, in terms of bias and coverage, is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study the performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of the ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n ≥ 100. The Mantel-Haenszel-based estimator of the OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs ≠ 1, but this bias is not very large in practical settings. The developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
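For orientation, the classical Mantel-Haenszel pooled odds ratio (the starting point that the paper extends to the beta-binomial model) is shown below on made-up 2x2 tables; the beta-binomial extension itself is more involved and is not reproduced.

```python
import numpy as np

def mantel_haenszel_or(a, b, c, d):
    """Mantel-Haenszel pooled OR from per-study 2x2 tables:
    a,b = treatment events/non-events; c,d = control events/non-events."""
    n = a + b + c + d
    return np.sum(a * d / n) / np.sum(b * c / n)

a = np.array([12, 5, 30]); b = np.array([88, 45, 170])
c = np.array([8, 3, 21]);  d = np.array([92, 47, 179])
print(mantel_haenszel_or(a, b, c, d))
```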


Subjects
Meta-Analysis as Topic; Models, Statistical; Odds Ratio; Bias; Binomial Distribution; Data Interpretation, Statistical; Humans; Probability
10.
Biom J ; 58(4): 896-914, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27192062

ABSTRACT

We study bias arising as a result of nonlinear transformations of random variables in random- or mixed-effects models and its effect on inference in group-level studies and in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from the standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and when combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in the intracluster correlation coefficient ρ for small values of ρ. The biases do not depend on the sample sizes or on the number of studies K in a meta-analysis, and they result in abysmal coverage of the combined effect for large K. We also propose a bias correction for the arcsine transformation. Our simulations demonstrate that this bias correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence.
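The transformation bias is easy to reproduce by simulation. The sketch below (illustrative parameter values) draws overdispersed binomial data via a beta-binomial with intracluster correlation ρ, using the standard identity α + β = (1 - ρ)/ρ, and shows that the mean of the arcsine-transformed estimate differs from the transformed true probability.

```python
import numpy as np

rng = np.random.default_rng(3)
p, rho, n, m = 0.2, 0.1, 50, 200_000   # illustrative values

s = (1 - rho) / rho                    # alpha + beta of the beta mixing law
p_i = rng.beta(p * s, (1 - p) * s, size=m)
x = rng.binomial(n, p_i)               # beta-binomial (overdispersed) counts

bias = np.mean(np.arcsin(np.sqrt(x / n))) - np.arcsin(np.sqrt(p))
print(bias)   # clearly nonzero; shrinks roughly linearly as rho -> 0
```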


Subjects
Meta-Analysis as Topic; Models, Statistical; Probability; Binomial Distribution; Computer Simulation; Longitudinal Studies; Prevalence
11.
BMC Med Res Methodol ; 15: 49, 2015 Jun 10.
Article in English | MEDLINE | ID: mdl-26054650

ABSTRACT

BACKGROUND: A frequently used statistic for testing homogeneity in a meta-analysis of K independent studies is Cochran's Q. For a standard test of homogeneity, the Q statistic is referred to a chi-square distribution with K-1 degrees of freedom. When the effects of the studies are logarithms of odds ratios, the chi-square distribution is much too conservative for moderate-sized studies, although it may be asymptotically correct as the individual studies become large. METHODS: Using a mixture of theoretical results and simulations, we provide formulas to estimate the shape and scale parameters of a gamma distribution fitted to the distribution of Q. RESULTS: Simulation studies show that the gamma distribution is a good approximation to the distribution of Q. CONCLUSIONS: Use of the gamma distribution instead of the chi-square distribution for Q should eliminate inaccurate inferences in assessing homogeneity in a meta-analysis. (A computer program for implementing this test is provided.) This hypothesis test is competitive with the Breslow-Day test both in accuracy of level and in power.
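Once the gamma shape and scale are in hand, the test itself is one line. The sketch below assumes the two moments of Q's null distribution are already available (placeholder values here; the paper's formulas would supply them) and matches a gamma distribution to them.

```python
from scipy import stats

def gamma_approx_pvalue(q, mean_q, var_q):
    """P-value for Q from a two-moment gamma fit:
    shape*scale = mean_q, shape*scale^2 = var_q."""
    shape = mean_q ** 2 / var_q
    scale = var_q / mean_q
    return stats.gamma.sf(q, a=shape, scale=scale)

# Placeholder moments for a 10-study meta-analysis of log-odds-ratios.
print(gamma_approx_pvalue(q=18.2, mean_q=9.4, var_q=22.0))
```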


Subjects
Algorithms; Biometry/methods; Odds Ratio; Software; Chi-Square Distribution; Computer Simulation; Humans; Meta-Analysis as Topic; Reproducibility of Results
12.
Res Synth Methods ; 15(3): 398-412, 2024 May.
Article in English | MEDLINE | ID: mdl-38111354

ABSTRACT

Outcomes of meta-analyses are increasingly used to inform evidence-based decision making in various research fields. However, a number of recent studies have reported rapid temporal changes in the magnitude and significance of reported effects, which could quickly make policy-relevant recommendations from meta-analyses go out of date. We assessed the extent and patterns of temporal trends in the magnitude and statistical significance of cumulative effects in meta-analyses in applied ecology and conservation published between 2004 and 2018. Of the 121 meta-analyses analysed, 93% showed a temporal trend in cumulative effect magnitude or significance, with 27% of the datasets exhibiting temporal trends in both. The most common pattern was an early-study effect, in which at least one of the effect-size estimates from the first 5 years differed in magnitude by more than 50% from the subsequent estimate. The observed temporal trends persisted in the majority of datasets once moderators were accounted for. Only 5 datasets showed significant changes in sample size over time, which could potentially explain the observed temporal change in the cumulative effects. Year of publication of the meta-analysis had no significant effect on the presence of temporal trends in cumulative effects. Our results show that temporal changes in magnitude and statistical significance in applied ecology are widespread and represent a serious potential threat to the use of meta-analyses for decision-making in conservation and environmental management. We recommend the use of cumulative meta-analyses and call for more studies exploring the causes of the temporal effects.
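The basic diagnostic behind these findings, a cumulative meta-analysis traced over publication years, can be sketched as below (made-up data; a fixed-effect update is used purely for brevity).

```python
import numpy as np

# Studies sorted by publication year (illustrative values).
years = np.array([2004, 2005, 2005, 2008, 2011, 2015])
eff   = np.array([1.10, 0.90, 0.60, 0.40, 0.35, 0.30])
var   = np.array([0.20, 0.15, 0.10, 0.08, 0.05, 0.04])

# Running pooled estimate after each study; plotting it against year
# reveals early-study effects and other temporal drift.
w = 1.0 / var
cum = np.cumsum(w * eff) / np.cumsum(w)
for y, e in zip(years, cum):
    print(y, round(e, 3))
```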


Subjects
Conservation of Natural Resources; Ecology; Meta-Analysis as Topic; Humans; Decision Making; Sample Size; Time Factors; Research Design
13.
Brain ; 135(Pt 8): 2478-91, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22761293

ABSTRACT

Hemispatial neglect following right-hemisphere stroke is a common and disabling disorder, for which there is currently no effective pharmacological treatment. Dopamine agonists have been shown to play a role in selective attention and working memory, two core cognitive components of neglect. Here, we investigated whether the dopamine agonist rotigotine would have a beneficial effect on hemispatial neglect in stroke patients. A double-blind, randomized, placebo-controlled ABA design was used, in which each patient was assessed for 20 testing sessions, in three phases: pretreatment (Phase A1), on transdermal rotigotine for 7-11 days (Phase B) and post-treatment (Phase A2), with the exact duration of each phase randomized within limits. Outcome measures included performance on cancellation (visual search), line bisection, visual working memory, selective attention and sustained attention tasks, as well as measures of motor control. Sixteen right-hemisphere stroke patients were recruited, all of whom completed the trial. Performance on the Mesulam shape cancellation task improved significantly while on rotigotine, with the number of targets found on the left side increasing by 12.8% (P = 0.012) on treatment and spatial bias reducing by 8.1% (P = 0.016). This improvement in visual search was associated with an enhancement in selective attention but not on our measures of working memory or sustained attention. The positive effect of rotigotine on visual search was not associated with the degree of preservation of prefrontal cortex and occurred even in patients with significant prefrontal involvement. Rotigotine was not associated with any significant improvement in motor performance. This proof-of-concept study suggests a beneficial role of dopaminergic modulation on visual search and selective attention in patients with hemispatial neglect following stroke.


Subjects
Dopamine Agonists/therapeutic use; Perceptual Disorders/drug therapy; Perceptual Disorders/etiology; Stroke/complications; Stroke/drug therapy; Tetrahydronaphthalenes/therapeutic use; Thiophenes/therapeutic use; Adult; Aged; Aged, 80 and over; Dopamine Agonists/pharmacology; Double-Blind Method; Female; Follow-Up Studies; Humans; Male; Middle Aged; Psychomotor Performance/drug effects; Psychomotor Performance/physiology; Tetrahydronaphthalenes/pharmacology; Thiophenes/pharmacology; Treatment Outcome; Young Adult
15.
Res Synth Methods ; 14(5): 671-688, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37381621

ABSTRACT

For estimation of the heterogeneity variance τ² in meta-analysis of the log-odds-ratio, we derive new mean- and median-unbiased point estimators and new interval estimators based on a generalized Q statistic, Q_F, in which the weights depend on only the studies' effective sample sizes. We compare them with familiar estimators based on the inverse-variance-weights version of Q, Q_IV. In an extensive simulation, we studied the bias (including median bias) of the point estimators and the coverage (including left and right coverage error) of the confidence intervals. Most estimators add 0.5 to each cell of the 2 × 2 table when one cell contains a zero count; we include a version that always adds 0.5. The results show that: two of the new point estimators and two of the familiar point estimators are almost unbiased when the total sample size n ≥ 250 and the probability in the control arm (p_iC) is 0.1, and when n ≥ 100 and p_iC is 0.2 or 0.5; for 0.1 ≤ τ² ≤ 1, all estimators have negative bias for small to medium sample sizes, but for larger sample sizes some of the new median-unbiased estimators are almost median-unbiased; choices of interval estimators depend on values of the parameters, but one of the new estimators is reasonable when p_iC = 0.1 and another when p_iC = 0.2 or 0.5; and lack of balance between left and right coverage errors for small n and/or p_iC implies that the available approximations for the distributions of Q_IV and Q_F are accurate only for larger sample sizes.
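The continuity correction discussed above is simple to state in code; the sketch below (made-up counts) adds 0.5 to every cell either only when a zero occurs, as is usual, or always, as in the variant the paper also examines.

```python
import numpy as np

def log_or_with_correction(a, b, c, d, always=False):
    """Log-odds-ratio and its variance from a 2x2 table, adding 0.5 to
    every cell when any cell is zero (or always, if requested)."""
    cells = np.array([a, b, c, d], dtype=float)
    if always or np.any(cells == 0):
        cells += 0.5
    a, b, c, d = cells
    return np.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

print(log_or_with_correction(0, 25, 4, 21))            # zero cell: corrected
print(log_or_with_correction(3, 22, 4, 21, always=True))
```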


Subjects
Odds Ratio; Probability; Computer Simulation; Sample Size; Bias
16.
Thorax ; 67(4): 328-33, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22169361

ABSTRACT

BACKGROUND: Elevated plasma levels of coagulation factor VIII are a strong risk factor for pulmonary emboli and deep venous thromboses. OBJECTIVES: To identify reversible biomarkers associated with high factor VIII and assess potential significance in a specific at-risk population. PATIENTS/METHODS: 609 patients with hereditary haemorrhagic telangiectasia were recruited prospectively in two separate series at a single centre. Associations between log-transformed factor VIII measured 6 months from any known thrombosis/illness, and patient-specific variables including markers of inflammation and iron deficiency, were assessed in stepwise multiple regression analyses. Age-specific incidence rates of radiologically proven pulmonary emboli/deep venous thromboses were calculated, and logistic regression analyses performed. RESULTS: In each series, there was an inverse association between factor VIII and serum iron that persisted after adjustment for age, inflammation and/or von Willebrand factor. Iron response elements within untranslated regions of factor VIII transcripts provide potential mechanisms for the association. Low serum iron levels were also associated with venous thromboemboli (VTE): the age-adjusted OR of 0.91 (95% CI 0.86 to 0.97) per 1 µmol/litre increase in serum iron implied a 2.5-fold increase in VTE risk for a serum iron of 6 µmol/litre compared with the mid-normal range (17 µmol/litre). The association appeared to depend on factor VIII, as once adjusted for factor VIII, the association between VTE and iron was no longer evident. CONCLUSIONS: In this population, low serum iron levels attributed to inadequate replacement of haemorrhagic iron losses are associated with elevated plasma levels of coagulation factor VIII and venous thromboembolic risk. Potential implications for other clinical populations are discussed.


Subjects
Factor VIII/analysis; Iron/blood; Pulmonary Embolism/blood; Telangiectasia, Hereditary Hemorrhagic/blood; Venous Thrombosis/blood; Biomarkers/blood; Cohort Studies; Diagnostic Imaging; Female; Humans; Male; Plasma/chemistry; Prospective Studies; Pulmonary Embolism/diagnosis; Regression Analysis; Risk Factors; Serum/chemistry; Venous Thrombosis/diagnosis; von Willebrand Factor/analysis
17.
Res Synth Methods ; 13(1): 48-67, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34427058

ABSTRACT

To present time-varying evidence, cumulative meta-analysis (CMA) updates the results of previous meta-analyses to incorporate new study results. We investigate the properties of CMA, suggest possible improvements, and provide the first in-depth simulation study of the use of CMA and CUSUM methods for detection of temporal trends in random-effects meta-analysis. We use the standardized mean difference (SMD) as the effect measure of interest. For CMA, we compare the standard inverse-variance-weighted estimation of the overall effect, using REML-based estimation of the between-study variance τ², with sample-size-weighted estimation of the effect accompanied by the Kulinskaya-Dollinger-Bjørkestøl (KDB; Biometrics. 2011;67:203-212) estimation of τ². For all methods, we consider Type 1 error under no shift and power under a shift in the mean in the random-effects model. To ameliorate the lack of power in CMA, we introduce two-stage CMA, in which τ² is estimated at Stage 1 (from the first 5-10 studies), and further CMA monitors a target value of the effect, keeping the τ² value fixed. We recommend this two-stage CMA combined with cumulative testing for a positive shift in τ². In practice, use of CMA requires at least 15-20 studies.
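A bare-bones version of the two-stage idea is sketched below with made-up data: τ² is fixed from the first few studies, and the running effect estimate is then monitored. DerSimonian-Laird is used at Stage 1 purely for brevity; the paper pairs the sample-size-weighted estimate with the KDB estimator instead.

```python
import numpy as np

def two_stage_cma(y, v, n_pilot=5):
    """Stage 1: estimate tau^2 (DL, for brevity) from the first n_pilot
    studies. Stage 2: keep tau^2 fixed and update the pooled effect."""
    w0 = 1.0 / v[:n_pilot]
    ybar0 = np.sum(w0 * y[:n_pilot]) / np.sum(w0)
    q = np.sum(w0 * (y[:n_pilot] - ybar0) ** 2)
    c = np.sum(w0) - np.sum(w0 ** 2) / np.sum(w0)
    tau2 = max(0.0, (q - (n_pilot - 1)) / c)

    w = 1.0 / (v + tau2)
    return tau2, np.cumsum(w * y) / np.cumsum(w)

y = np.array([0.5, 0.3, 0.6, 0.4, 0.5, 0.9, 1.0, 1.1])  # shift after study 5
v = np.full(8, 0.04)
print(two_stage_cma(y, v))
```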


Subjects
Sample Size; Computer Simulation
18.
Br J Math Stat Psychol ; 75(3): 444-465, 2022 11.
Article in English | MEDLINE | ID: mdl-35094381

ABSTRACT

Cochran's Q statistic is routinely used for testing heterogeneity in meta-analysis. Its expected value is also used in several popular estimators of the between-study variance, τ². Those applications generally have not considered the implications of its use of estimated variances in the inverse-variance weights. Importantly, those weights make approximating the distribution of Q (more explicitly, Q_IV) rather complicated. As an alternative, we investigate a new Q statistic, Q_F, whose constant weights use only the studies' effective sample sizes. For the standardized mean difference as the measure of effect, we study, by simulation, approximations to the distributions of Q_IV and Q_F, as the basis for tests of heterogeneity and for new point and interval estimators of τ². These include new DerSimonian-Kacker-type moment estimators based on the first moment of Q_F, and novel median-unbiased estimators. The results show that: an approximation based on an algorithm of Farebrother follows both the null and the alternative distributions of Q_F reasonably well, whereas the usual chi-squared approximation for the null distribution of Q_IV and the Biggerstaff-Jackson approximation to its alternative distribution are poor; in estimating τ², our moment estimator based on Q_F is almost unbiased, the Mandel-Paule estimator has some negative bias in some situations, and the DerSimonian-Laird and restricted maximum likelihood estimators have considerable negative bias; and all 95% interval estimators have coverage that is too high when τ² = 0, but otherwise the Q-profile interval performs very well.
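A generic moment estimator of this DerSimonian-Kacker type is sketched below for arbitrary fixed weights: it solves E[Q_F] = A + B·τ², which holds exactly under independence, with effective sample sizes standing in as the constant weights. The paper's estimator for SMD uses a more refined expression for the first moment; the data here are made up.

```python
import numpy as np

def tau2_moment_constant_weights(y, v, w):
    """Moment estimator of tau^2 from Q_F with fixed weights w:
    E[Q_F] = A + B*tau^2, with A and B as computed below."""
    W = np.sum(w)
    ybar = np.sum(w * y) / W
    q_f = np.sum(w * (y - ybar) ** 2)
    a = np.sum(w * v) - np.sum(w ** 2 * v) / W
    b = W - np.sum(w ** 2) / W
    return max(0.0, (q_f - a) / b)

y = np.array([0.4, 0.1, 0.7, 0.3])          # illustrative SMDs
v = np.array([0.05, 0.04, 0.08, 0.03])      # within-study variances
n_eff = np.array([38.0, 46.0, 25.0, 60.0])  # constant weights
print(tau2_moment_constant_weights(y, v, n_eff))
```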


Subjects
Algorithms; Models, Statistical; Computer Simulation
19.
Biometrics ; 67(1): 203-12, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20528863

ABSTRACT

Meta-analysis seeks to combine the results of several experiments in order to improve the accuracy of decisions. It is common to use a test for homogeneity to determine if the results of the several experiments are sufficiently similar to warrant their combination into an overall result. Cochran's Q statistic is frequently used for this homogeneity test. It is often assumed that Q follows a chi-square distribution under the null hypothesis of homogeneity, but it has long been known that this asymptotic distribution for Q is not accurate for moderate sample sizes. Here, we present an expansion for the mean of Q under the null hypothesis that is valid when the effect and the weight for each study depend on a single parameter, but for which neither normality nor independence of the effect and weight estimators is needed. This expansion represents an order O(1/n) correction to the usual chi-square moment in the one-parameter case. We apply the result to the homogeneity test for meta-analyses in which the effects are measured by the standardized mean difference (Cohen's d-statistic). In this situation, we recommend approximating the null distribution of Q by a chi-square distribution with fractional degrees of freedom that are estimated from the data using our expansion for the mean of Q. The resulting homogeneity test is substantially more accurate than the currently used test. We provide a program available at the Paper Information link at the Biometrics website http://www.biometrics.tibs.org for making the necessary calculations.
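Since a chi-square distribution's mean equals its degrees of freedom, the recommended test amounts to referring Q to a chi-square with (possibly fractional) df set to the corrected null mean of Q. The sketch below takes that corrected mean as given (a placeholder value; the paper's expansion would supply it from the data).

```python
from scipy import stats

def q_test_fractional_df(q, mean_q):
    """P-value for Q from a chi-square whose fractional df equal the
    corrected null mean of Q."""
    return stats.chi2.sf(q, df=mean_q)

# K = 10 studies: naive df would be 9; suppose the corrected mean is 8.6.
print(q_test_fractional_df(q=16.9, mean_q=8.6))
print(stats.chi2.sf(16.9, df=9))   # naive test, for comparison
```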


Subjects
Algorithms; Biometry/methods; Data Interpretation, Statistical; Meta-Analysis as Topic; Models, Statistical; Computer Simulation; Epidemiologic Methods