Results 1 - 20 of 29
1.
Psychosom Med; 2024 May 13.
Article in English | MEDLINE | ID: mdl-38787545

ABSTRACT

OBJECTIVE: Acute exercise elicits various biobehavioral and psychological responses, but results are mixed with regard to the magnitude of exercise-induced affective reactions. This meta-analysis examines the magnitude of general mood state, anxiety, and depressive symptom responses to acute exercise while exploring exercise protocol characteristics and background health behaviors that may play a role in the affective response. METHODS: A total of 2,770 articles were identified from a MEDLINE/PubMed search and an additional 133 articles from reviews of reference sections. Studies had to have measured general mood before the acute exercise bout and within 30 minutes after exercise completion. Effect sizes were estimated using Hedges' g, with larger values indicating improvement in the outcome measure. RESULTS: A total of 103 studies were included, presenting data from 4,671 participants. General mood state improved from pre-exercise to post-exercise (g = 0.336, 95% CI = 0.234, 0.439). Anxiety (g = 0.497, 95% CI = 0.263, 0.730) and depressive symptoms (g = 0.407, 95% CI = 0.249, 0.564) also improved with exercise. There was substantial and statistically significant heterogeneity in each of these meta-analyses. This heterogeneity was not explained by differences in participants' health status. Meta-regression analyses with potential moderators (intensity of exercise, mode of exercise, usual physical activity level, or weight status of participants) also did not reduce the heterogeneity. CONCLUSION: This meta-analysis shows significantly improved general mood, decreased anxiety, and lower depressive symptoms in response to an acute bout of exercise. There was substantial heterogeneity in the magnitude of the effect sizes, indicating that additional research is needed to identify determinants of a positive affective response to acute exercise.
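
For readers who want to reproduce this kind of pre-post effect size, the sketch below computes Hedges' g for single-group pre-post data and pools the studies with a random-effects model. It is a minimal illustration, not the authors' analysis code; it assumes the standard metafor R package, raw-score standardization (measure "SMCR"), and an imputed pre-post correlation of 0.5 where none is reported. All data values are invented.

# Minimal sketch: Hedges' g for pre-post (single-group) designs, pooled
# with a random-effects model. Assumes metafor; ri = 0.5 is an imputed
# pre-post correlation, not a value taken from the meta-analysis above.
library(metafor)

dat <- data.frame(
  m_pre  = c(3.1, 2.8, 3.4),   # hypothetical pre-exercise mood means
  m_post = c(3.6, 3.1, 3.9),   # hypothetical post-exercise mood means
  sd_pre = c(0.9, 1.1, 1.0),
  n      = c(40, 55, 32)
)

# measure = "SMCR": standardized mean change with raw-score
# standardization, (m1i - m2i)/sd1i; passing m1i = m_post makes
# improvement positive. escalc() applies the Hedges small-sample
# correction internally.
es <- escalc(measure = "SMCR", m1i = m_post, m2i = m_pre,
             sd1i = sd_pre, ni = n, ri = rep(0.5, nrow(dat)), data = dat)

res <- rma(yi, vi, data = es)   # random-effects pooling
summary(res)                    # pooled g with 95% CI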

2.
Psychol Bull; 150(3): 253-283, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38330345

ABSTRACT

Theories have proposed diverse reasons for why individual differences such as personality traits lead to social status attainment in face-to-face groups. We integrated these different theoretical standpoints into a model with four paths from individual differences to status: a dominance, a competence, a virtue, and a micropolitics path. To investigate these paths, we meta-analyzed over 100 years of research on bivariate associations of personality traits, cognitive abilities, and physical size with the attainment of status-related outcomes in face-to-face groups (1,064 effects from 276 samples including 56,153 participants). The status-related outcome variables were admiring respect, social influence, popularity (i.e., being liked by others), leadership emergence, and a mixture of outcome variables. The meta-analytic correlations we found were largely in line with the micropolitics path, tentatively in line with the competence and virtue paths, and only partly in line with the dominance path. These findings suggest that status attainment depends not only on the competence and virtue of an individual but also on how individuals can enhance their apparent competence or virtue by behaving assertively, by being extraverted, or through self-monitoring. We also investigated how the relations between individual differences and status-related outcomes were moderated by kind of status-related outcome, nature of the group task, culture (collectivism/individualism), and length of acquaintance. The moderation analysis yielded mixed and inconclusive results. The review ends with directions for research, such as the need to separately assess and study the different status-related outcomes. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Personality, Social Status, Humans, Intelligence, Leadership, Personality Disorders
3.
Behav Res Methods; 56(3): 1994-2012, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37540470

ABSTRACT

Outcome reporting bias (ORB) refers to the biasing effect caused by researchers selectively reporting outcomes within a study based on their statistical significance. ORB inflates effect size estimates in meta-analysis if, for instance, only the outcome with the largest effect size is reported. We propose a new method (CORB) to correct for ORB that includes an estimate of the variability of the outcomes' effect size as a moderator in a meta-regression model. An estimate of the variability of the outcomes' effect size can be computed by assuming a correlation among the outcomes. Results of a Monte Carlo simulation study showed that the effect size in meta-analyses may be severely overestimated without correcting for ORB. Estimates of CORB are close to the true effect size when the overestimation caused by ORB is largest. Applying the method to a meta-analysis on the effect of playing violent video games on aggression showed that the effect size estimate decreased when correcting for ORB. We recommend routinely applying methods to correct for ORB in any meta-analysis. We provide annotated R code and functions to help researchers apply the CORB method.


Subject(s)
Bias, Humans, Computer Simulation
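
The core move of this entry, regressing effect sizes on an estimate of the variability of each study's outcomes, can be sketched as below. This is my reading of the abstract rather than the authors' published CORB code; the moderator construction (variability derived from the sampling variance under an assumed correlation rho among a study's outcomes) and all data values are illustrative assumptions.

# Hedged sketch of the CORB idea: meta-regress effect sizes on an
# estimate of the variability of each study's outcomes. NOT the authors'
# implementation; rho and the moderator construction are assumptions.
library(metafor)

yi  <- c(0.41, 0.35, 0.12, 0.50, 0.08)   # hypothetical effect sizes
vi  <- c(0.02, 0.03, 0.01, 0.05, 0.02)   # their sampling variances
rho <- 0.5                               # assumed correlation among outcomes

# Variability of a study's outcomes around its true effect, derived from
# the sampling variance under the assumed correlation:
mod <- sqrt(vi * (1 - rho))

res <- rma(yi, vi, mods = ~ mod)  # meta-regression with the moderator
summary(res)  # the intercept extrapolates to zero outcome variability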
4.
Stat Med; 43(4): 756-773, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38110725

ABSTRACT

A wide variety of methods are available to estimate the between-study variance under the univariate random-effects model for meta-analysis. Some, but not all, of these estimators have been extended so that they can be used in the multivariate setting. We begin by extending the univariate generalised method of moments, which immediately provides a wider class of multivariate methods than was previously available. However, our main proposal is to use this new type of estimator to derive multivariate multistep estimators of the between-study covariance matrix. We then use the connection between the univariate multistep and Paule-Mandel estimators to motivate taking the limit as the number of steps tends to infinity. We illustrate our methodology using two contrasting examples and investigate its properties in a simulation study. We conclude that the proposed methodology is a fully viable alternative to existing estimation methods, is well suited to sensitivity analyses that explore the use of alternative estimators, and should be used instead of the existing DerSimonian and Laird-type moments-based estimator in application areas where data are expected to be heterogeneous. However, multistep estimators do not seem to outperform the existing estimators when the data are more homogeneous. Advantages of the new multivariate multistep estimator include its semi-parametric nature and that it is computationally feasible in high dimensions. Our proposed estimation methods are also applicable for multivariate random-effects meta-regression, where study-level covariates are included in the model.


Subject(s)
Computer Simulation, Meta-Analysis as Topic, Models, Theoretical
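
The univariate generalised method of moments estimator that this paper takes as its starting point fits in a few lines of base R. The code below is the standard moment equation for a weighted Q-statistic with fixed, user-chosen weights; the paper's multivariate extension is not reproduced here, and the example data are invented.

# Univariate method-of-moments estimate of the between-study variance
# tau^2 for fixed weights a (a = 1/v recovers DerSimonian-Laird).
# Base R sketch of the standard moment equation only.
tau2_mm <- function(y, v, a) {
  mu <- sum(a * y) / sum(a)                   # weighted average effect
  Q  <- sum(a * (y - mu)^2)                   # weighted Q-statistic
  c1 <- sum(a * v) - sum(a^2 * v) / sum(a)    # E[Q] component from v
  c2 <- sum(a) - sum(a^2) / sum(a)            # E[Q] component from tau^2
  max(0, (Q - c1) / c2)                       # truncate at zero
}

y <- c(0.2, 0.5, 0.1, 0.7)     # hypothetical effects
v <- c(0.04, 0.09, 0.05, 0.12) # their sampling variances
tau2_mm(y, v, a = 1 / v)       # DerSimonian-Laird as a special case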
5.
R Soc Open Sci; 10(8): 202326, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37593717

ABSTRACT

The COVID-19 outbreak has led to an exponential increase of publications and preprints about the virus, its causes, consequences, and possible cures. COVID-19 research has been conducted under high time pressure and has been subject to financial and societal interests. Doing research under such pressure may influence the scrutiny with which researchers perform and write up their studies. Either researchers become more diligent because of the high-stakes nature of the research, or the time pressure leads to cutting corners and lower-quality output. In this study, we conducted a natural experiment to compare the prevalence of incorrectly reported statistics in a stratified random sample of COVID-19 preprints and a matched sample of non-COVID-19 preprints. Our results show that the overall prevalence of incorrectly reported statistics is 9-10%, but frequentist as well as Bayesian hypothesis tests show no difference in the number of statistical inconsistencies between COVID-19 and non-COVID-19 preprints. In conclusion, the literature suggests that COVID-19 research may on average have more methodological problems than non-COVID-19 research, but our results show no difference in statistical reporting quality.
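
The kind of consistency check this study applies at scale can be illustrated in a few lines of base R: recompute the p value from a reported test statistic and flag a mismatch (dedicated tools such as the statcheck R package automate the extraction step). The reported values below are invented.

# Recompute a p value from a reported t statistic and flag inconsistency.
# Toy version of the checks described above; values are invented.
check_t <- function(t, df, p_reported, tol = 0.01) {
  p_recomputed <- 2 * pt(-abs(t), df)   # two-sided p from t(df)
  c(recomputed = p_recomputed,
    inconsistent = abs(p_recomputed - p_reported) > tol)
}

check_t(t = 2.10, df = 28, p_reported = 0.04)   # consistent
check_t(t = 2.10, df = 28, p_reported = 0.20)   # flagged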

6.
Res Synth Methods; 14(5): 768-773, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37421188

ABSTRACT

The partial correlation coefficient (PCC) is used to quantify the linear relationship between two variables while taking into account (controlling for) other variables. Researchers frequently synthesize PCCs in a meta-analysis, but two of the assumptions of the common equal-effect and random-effects meta-analysis models are by definition violated. First, the sampling variance of the PCC cannot be assumed to be known, because the sampling variance is a function of the PCC itself. Second, the sampling distribution of each primary study's PCC is not normal, since PCCs are bounded between -1 and 1. I advocate applying Fisher's z transformation to PCCs, analogous to its application to Pearson correlation coefficients, because the Fisher's z transformed PCC is independent of the sampling variance and its sampling distribution more closely follows a normal distribution. Reproducing a simulation study by Stanley and Doucouliagos and adding meta-analyses based on Fisher's z transformed PCCs shows that the meta-analysis based on Fisher's z transformed PCCs had lower bias and root mean square error than the meta-analysis of untransformed PCCs. Hence, meta-analyzing Fisher's z transformed PCCs is a viable alternative to meta-analyzing PCCs, and I recommend accompanying any meta-analysis based on PCCs with one using Fisher's z transformed PCCs to assess the robustness of the results.


Subject(s)
Computer Simulation, Meta-Analysis as Topic
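
The proposed transformation is straightforward to apply. The sketch below converts PCCs to Fisher's z, meta-analyzes on the z scale, and back-transforms the pooled estimate. It assumes the usual variance 1/(n - q - 3) for a partial correlation controlling for q variables (the analogue of 1/(n - 3) for Pearson correlations) and uses the metafor package for pooling; data are invented.

# Fisher's z transformation for partial correlation coefficients (PCCs).
# Assumes var(z) = 1/(n - q - 3) with q control variables. Invented data.
library(metafor)

pcc <- c(0.30, 0.18, 0.42)   # hypothetical partial correlations
n   <- c(80, 150, 60)        # sample sizes
q   <- c(2, 3, 2)            # number of control variables

zi <- atanh(pcc)             # Fisher's z transform
vi <- 1 / (n - q - 3)        # assumed sampling variance on the z scale

res <- rma(zi, vi)                     # random-effects model on z scale
tanh(c(res$b, res$ci.lb, res$ci.ub))   # back-transformed estimate and CI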
7.
Psychol Methods; 2023 May 11.
Article in English | MEDLINE | ID: mdl-37166859

ABSTRACT

Researcher degrees of freedom refer to arbitrary decisions in the execution and reporting of hypothesis-testing research that allow for many possible outcomes from a single study. Selective reporting of results (p-hacking) from this "multiverse" of outcomes can inflate effect size estimates and false positive rates. We studied the effects of researcher degrees of freedom and selective reporting using empirical data from extensive multistudy projects in psychology (Registered Replication Reports) featuring 211 samples and 14 dependent variables. We used a counterfactual design to examine what biases could have emerged if the studies (and ensuing meta-analyses) had not been preregistered and could have been subjected to selective reporting based on the significance of the outcomes in the primary studies. Our results show the substantial variability in effect sizes that researcher degrees of freedom can create in relatively standard psychological studies, and how selective reporting of outcomes can alter conclusions and introduce bias in meta-analysis. Although the multiverses of the 294 included studies typically contained thousands of outcomes, significant effect sizes in the hypothesized direction emerged in only about 30% of studies. We also observed that the effect of a particular researcher degree of freedom was inconsistent across replication studies using the same protocol, meaning multiverse analyses often fail to replicate across samples. We recommend that hypothesis-testing researchers preregister their preferred analysis and openly report multiverse analyses. We propose a descriptive index (underlying multiverse variability) that quantifies the robustness of results across alternative ways to analyze the data. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
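
A multiverse in the abstract's sense can be generated mechanically: enumerate the defensible analytic choices, run the analysis under every combination, and inspect the spread of outcomes. The base R toy below varies two researcher degrees of freedom (an outlier cutoff and covariate inclusion) on simulated data; the choices and the data are invented for illustration and are not the authors' pipeline.

# Toy multiverse: run the same test under every combination of two
# arbitrary analytic decisions and collect the resulting p values.
set.seed(1)
d <- data.frame(y = rnorm(100), x = rnorm(100), cov = rnorm(100))

grid <- expand.grid(cutoff  = c(2, 2.5, 3),      # outlier exclusion in SDs
                    use_cov = c(TRUE, FALSE))    # include covariate or not

grid$p <- mapply(function(cutoff, use_cov) {
  keep <- abs(as.numeric(scale(d$y))) < cutoff   # apply outlier rule
  f <- if (use_cov) y ~ x + cov else y ~ x       # apply covariate rule
  summary(lm(f, data = d[keep, ]))$coefficients["x", "Pr(>|t|)"]
}, grid$cutoff, grid$use_cov)

grid   # the multiverse of p values for the effect of x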

8.
Res Synth Methods; 14(3): 520-525, 2023 May.
Article in English | MEDLINE | ID: mdl-36872642

ABSTRACT

The partial correlation coefficient quantifies the relationship between two variables while taking into account the effect of one or multiple control variables. Researchers often want to synthesize partial correlation coefficients in a meta-analysis, since these can be readily computed from the reported results of a linear regression analysis. The default inverse-variance weights in standard meta-analysis models require researchers to compute not only the partial correlation coefficient of each study but also its corresponding sampling variance. The existing literature is diffuse on how to estimate this sampling variance, because two estimators exist, both of which are widely used. We critically reflect on both estimators, study their statistical properties, and provide recommendations for applied researchers. We also compute the sampling variances of studies using both estimators in a meta-analysis on the partial correlation between self-confidence and sports performance.


Subject(s)
Regression Analysis, Linear Models
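
When the PCC is computed from a reported regression t statistic, the two sampling-variance estimators this entry refers to can each be written in one line. The formulas below, r = t/sqrt(t^2 + df) with variances (1 - r^2)/df and (1 - r^2)^2/df, are the two I understand to be widely used in this literature; the labels "estimator 1/2" and the example values are my assumptions, not the paper's notation.

# The PCC and its two commonly used sampling-variance estimators,
# computed from a regression t statistic and residual degrees of
# freedom. Shows both; takes no stance on which the paper recommends.
pcc_from_t <- function(t, df) {
  r  <- t / sqrt(t^2 + df)        # partial correlation from t statistic
  v1 <- (1 - r^2) / df            # estimator 1
  v2 <- (1 - r^2)^2 / df          # estimator 2
  c(pcc = r, var1 = v1, var2 = v2)
}

pcc_from_t(t = 2.4, df = 96)      # invented example values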
9.
Neuro Oncol; 25(8): 1395-1414, 2023 Aug 3.
Article in English | MEDLINE | ID: mdl-36809489

ABSTRACT

BACKGROUND: Cognitive functioning is increasingly assessed as a secondary outcome in neuro-oncological trials. However, which cognitive domains or tests to assess remains debatable. In this meta-analysis, we aimed to elucidate the longer-term test-specific cognitive outcomes in adult glioma patients. METHODS: A systematic search yielded 7098 articles for screening. To investigate cognitive changes in glioma patients and differences between patients and controls at 1-year follow-up, random-effects meta-analyses were conducted per cognitive test, separately for studies with a longitudinal and a cross-sectional design. A meta-regression analysis with a moderator for interval testing (additional cognitive testing between baseline and 1-year posttreatment) was performed to investigate the impact of practice effects in longitudinal designs. RESULTS: Eighty-three studies were reviewed, of which 37 were analyzed in the meta-analysis, involving 4078 patients. In longitudinal designs, semantic fluency was the most sensitive test for detecting cognitive decline over time. Cognitive performance on the mini-mental state exam (MMSE), digit span forward, and phonemic and semantic fluency declined over time in patients who had no interval testing. In cross-sectional studies, patients performed worse than controls on the MMSE, digit span backward, semantic fluency, Stroop speed interference task, trail-making test B, and finger tapping. CONCLUSIONS: Cognitive performance of glioma patients 1 year after treatment is significantly lower than the norm, with specific tests potentially being more sensitive. Cognitive decline over time occurs as well, but can easily be overlooked in longitudinal designs due to practice effects (as a result of interval testing). It is warranted to sufficiently correct for practice effects in future longitudinal trials.


Subject(s)
Cognition Disorders, Glioma, Humans, Adult, Cognition Disorders/diagnosis, Cross-Sectional Studies, Cognition, Neuropsychological Tests, Glioma/complications, Glioma/therapy, Combined Modality Therapy
10.
Psychol Methods; 28(2): 438-451, 2023 Apr.
Article in English | MEDLINE | ID: mdl-34928679

ABSTRACT

Robust scientific knowledge is contingent upon replication of original findings. However, replicating researchers are constrained by resources, and will almost always have to choose one replication effort to focus on from a set of potential candidates. To select a candidate efficiently in these cases, we need methods for deciding which out of all candidates considered would be the most useful to replicate, given some overall goal researchers wish to achieve. In this article we assume that the overall goal researchers wish to achieve is to maximize the utility gained by conducting the replication study. We then propose a general rule for study selection in replication research based on the replication value of the set of claims considered for replication. The replication value of a claim is defined as the maximum expected utility we could gain by conducting a replication of the claim, and is a function of (a) the value of being certain about the claim, and (b) uncertainty about the claim based on current evidence. We formalize this definition in terms of a causal decision model, utilizing concepts from decision theory and causal graph modeling. We discuss the validity of using replication value as a measure of expected utility gain, and we suggest approaches for deriving quantitative estimates of replication value. Our goal in this article is not to define concrete guidelines for study selection, but to provide the necessary theoretical foundations on which such concrete guidelines could be built. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Knowledge, Models, Theoretical, Humans, Uncertainty
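
As a purely illustrative toy, one could quantify the two ingredients in the definition above, the value of being certain about a claim and the current uncertainty about it, with observable proxies and combine them. The proxies below (citation rate for value, confidence-interval width for uncertainty) and their product are my assumptions; the article deliberately stops short of a concrete formula.

# Toy replication-value score: (proxy for value of certainty) times
# (proxy for current uncertainty). Both proxies are assumptions made
# for illustration only.
claims <- data.frame(
  claim      = c("A", "B", "C"),
  cites_year = c(120, 15, 60),      # proxy for value of being certain
  ci_width   = c(0.40, 0.90, 0.10)  # proxy for current uncertainty
)
claims$rv <- claims$cites_year * claims$ci_width
claims[order(-claims$rv), ]         # candidate ranking for replication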
11.
Clin Psychol Eur; 5(3): e9997, 2023 Sep.
Article in English | MEDLINE | ID: mdl-38356898

ABSTRACT

Background: It is a precondition for evidence-based practice that research is replicable in a wide variety of clinical settings. Current standards for identifying evidence-based psychological interventions and making recommendations for clinical practice in clinical guidelines include criteria that are relevant for replicability, but a better understanding, as well as refined definitions, of replicability is needed to enable empirical research on this topic. Recent advances on this issue have been made in the wider field of psychology and in other disciplines, which offers the opportunity to define, and potentially increase, replicability in research on psychological interventions as well. Method: This article proposes a research strategy for assessing, understanding, and improving replicability in research on psychological interventions. Results/Conclusion: First, we establish a replication taxonomy ranging from direct to conceptual replication adapted to the field of research on clinical interventions, propose study characteristics that increase the trustworthiness of results, and define statistical criteria for successful replication with respect to the quantitative outcomes of the original and replication studies. Second, we propose how to establish such standards in future research, that is, how to design future replication studies of psychological interventions and how to apply these criteria when investigating which factors cause the (non-)replicability of findings in the current literature.
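
One widely used statistical criterion for replication success, checking whether the replication estimate falls inside a prediction interval built around the original estimate, can be sketched in base R as below. This is a generic criterion from the replication literature, offered as an example of the kind of definition this article calls for, not the authors' proposal; all numbers are invented.

# Generic replication-success check: does the replication effect fall
# inside a 95% prediction interval around the original estimate?
replication_pi <- function(est_orig, se_orig, est_rep, se_rep) {
  half <- qnorm(0.975) * sqrt(se_orig^2 + se_rep^2)  # PI half-width
  lo <- est_orig - half
  hi <- est_orig + half
  c(pi_lower = lo, pi_upper = hi,
    success = est_rep >= lo & est_rep <= hi)
}

replication_pi(est_orig = 0.45, se_orig = 0.15,
               est_rep = 0.10, se_rep = 0.08)   # invented numbers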

12.
Psychon Bull Rev; 29(1): 55-69, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34159526

ABSTRACT

Meta-analysis methods are used to synthesize results of multiple studies on the same topic. The most frequently used statistical model in meta-analysis is the random-effects model, containing parameters for the overall effect, the between-study variance in the primary studies' true effect sizes, and random effects for the study-specific effects. We propose Bayesian hypothesis testing and estimation methods using the marginalized random-effects meta-analysis (MAREMA) model, where the study-specific true effects are regarded as nuisance parameters which are integrated out of the model. We propose using a flat prior distribution on the overall effect size in case of estimation and a proper unit information prior for the overall effect size in case of hypothesis testing. For the between-study variance (which can attain negative values under the MAREMA model), a proper uniform prior is placed on the proportion of total variance that can be attributed to between-study variability. Bayes factors are used for hypothesis testing, allowing tests of point and one-sided hypotheses. The proposed methodology has several attractive properties. First, the proposed MAREMA model encompasses models with zero, negative, and positive between-study variance, which enables testing a zero between-study variance as it is not a boundary problem. Second, the methodology is suitable for default Bayesian meta-analyses, as it requires no prior information about the unknown parameters. Third, the proposed Bayes factors can be used even in the extreme case when only two studies are available, because Bayes factors are not based on large-sample theory. We illustrate the developed methods by applying them to two meta-analyses and introduce easy-to-use software in the R package BFpack to compute the proposed Bayes factors.


Subject(s)
Models, Statistical, Research Design, Bayes Theorem, Humans
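
The marginal model at the heart of MAREMA is easy to state: after integrating out the study-specific effects, each observed effect is normal with mean mu and variance v_i + tau^2, with tau^2 allowed to go negative as long as every marginal variance stays positive. The base R sketch below maximizes that marginal likelihood; the Bayes factor machinery lives in the authors' BFpack implementation and is not reproduced here. Data are invented.

# Marginalized random-effects model: y_i ~ N(mu, v_i + tau2), with tau2
# allowed to be negative as long as every v_i + tau2 stays positive.
# Base R maximum-likelihood fit of the marginal model only.
y <- c(0.12, 0.25, 0.31, 0.05)   # hypothetical effects
v <- c(0.05, 0.04, 0.06, 0.05)   # sampling variances

negll <- function(par) {
  mu <- par[1]; tau2 <- par[2]
  if (min(v) + tau2 <= 0) return(Inf)             # keep variances positive
  -sum(dnorm(y, mu, sqrt(v + tau2), log = TRUE))  # marginal log-likelihood
}

fit <- optim(c(0, 0.01), negll)   # ML estimates of (mu, tau2)
fit$par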
13.
Res Synth Methods; 12(4): 429-447, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33939307

ABSTRACT

The pooled estimate of the average effect is of primary interest when fitting the random-effects model for meta-analysis. However, estimates of study-specific effects, for example those displayed on forest plots, are also often of interest. In this tutorial, we present the case, with the accompanying statistical theory, for estimating the study-specific true effects using so-called 'empirical Bayes estimates' or 'best linear unbiased predictions' under the random-effects model. These estimates can be accompanied by prediction intervals that indicate a plausible range of study-specific true effects. We coalesce and elucidate the available literature and illustrate the methodology using two published meta-analyses as examples. We also perform a simulation study that reveals that the coverage probability of study-specific prediction intervals is substantially too low if the between-study variance is small but not negligible. Researchers need to be aware of this defect when interpreting prediction intervals. We also show how empirical Bayes estimates, accompanied by study-specific prediction intervals, can embellish forest plots. We hope that this tutorial will serve to provide a clear theoretical underpinning for this methodology and encourage its widespread adoption.


Subject(s)
Models, Statistical, Bayes Theorem, Computer Simulation, Probability
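
In practice these empirical Bayes (BLUP) estimates are available directly from the metafor package: blup() returns, for each study, the shrunken estimate of its true effect together with interval bounds. A minimal example on invented data, assuming metafor (not the tutorial's own code):

# Empirical Bayes / BLUP estimates of study-specific true effects under
# a fitted random-effects model, via metafor's blup(). Invented data.
library(metafor)

yi <- c(0.10, 0.45, 0.30, 0.60, 0.20)
vi <- c(0.03, 0.05, 0.02, 0.08, 0.04)

res <- rma(yi, vi)   # random-effects model (REML by default)
blup(res)            # shrunken study-specific estimates with bounds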
14.
Behav Brain Sci; 44: e19, 2021 Feb 18.
Article in English | MEDLINE | ID: mdl-33599601

ABSTRACT

Lee and Schwarz interpret meta-analytic research and replication studies as providing evidence for the robustness of cleansing effects. We argue that the currently available evidence is unconvincing because (a) publication bias and the opportunistic use of researcher degrees of freedom appear to have inflated meta-analytic effect size estimates, and (b) preregistered replications failed to find any evidence of cleansing effects.

15.
Soc Sci Med; 255: 112814, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32388075

ABSTRACT

BACKGROUND: Military personnel are exposed to severe stressors across different stages of their career that may have a negative impact on mental health and functioning. It is often suggested that psychological resilience plays an important role in the maintenance and/or enhancement of their mental health and functioning under these circumstances. METHOD: A systematic literature search was conducted using PsycINFO, MEDLINE, PsycARTICLES, Psychology and Behavioral Sciences Collection, Web of Science, and PubMed up to August 2019, retrieving 3,698 reports. Schmidt and Hunter meta-analytical techniques were used to assess the predictive value of psychological resilience for ten different military-relevant mental health and functioning outcomes. Multivariate meta-analysis assessed the origin of heterogeneity among bivariate effect sizes. RESULTS: The effect sizes of 40 eligible peer-reviewed papers covering 40 unique samples were included in the meta-analysis. Seventy-eight percent of these studies were published after 2010, and they were predominantly conducted in Western countries. Bivariate effect sizes were low to medium (absolute values: 0.08 to 0.36) and multivariate effect sizes, adjusting for sets of covariates that varied across studies, were low to trivial (absolute values: 0.02 to 0.08). Moderator analyses using multivariate meta-analysis on 60 bivariate effect sizes revealed no significant effect of type of psychological resilience scale, time lag, or career stage. CONCLUSIONS: The current review found no indications that different conceptualizations of psychological resilience, across a variety of research designs, are strongly predictive of mental health and functioning among military personnel. Future directions (moderator/mediator models, stressor type specifications, and directionality) for prospective studies are discussed. Our results question the usefulness of interventions to enhance the resilience of soldiers in order to improve their mental health and functioning.


Subject(s)
Military Personnel, Resilience, Psychological, Humans, Mental Health, Prospective Studies
16.
Res Synth Methods; 10(4): 515-527, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31111673

ABSTRACT

The Hartung-Knapp method for random-effects meta-analysis, which was also independently proposed by Sidik and Jonkman, is increasingly advocated for general use. This method has previously been justified by taking all estimated variances as known and using a different pivotal quantity to the more conventional one when making inferences about the average effect. We provide a new conceptual framework for, and justification of, the Hartung-Knapp method. Specifically, we show that inferences from fitted random-effects models, using both the conventional and the Hartung-Knapp method, are equivalent to those from closely related intercept-only weighted least squares regression models. This observation provides a new link between Hartung and Knapp's methodology for meta-analysis and standard linear models, where it can be seen that the Hartung-Knapp method can be justified by a linear model that makes a slightly weaker assumption than taking all variances as known. This provides intuition for why the Hartung-Knapp method has been found to perform better than the conventional one in simulation studies. Furthermore, our new findings give more credence to ad hoc adjustments of confidence intervals from the Hartung-Knapp method that ensure these are at least as wide as more conventional confidence intervals. The conceptual basis for the Hartung-Knapp method that we present here should replace the established one because it more clearly illustrates the potential benefit of using it.


Subject(s)
Least-Squares Analysis, Meta-Analysis as Topic, Algorithms, Computer Simulation, Data Interpretation, Statistical, Linear Models, Research Design, Sample Size
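
The equivalence described in this entry can be verified numerically: the Hartung-Knapp standard error from a random-effects fit matches the intercept standard error of an intercept-only weighted least squares regression with weights 1/(v_i + tau2_hat). A check on invented data, assuming the metafor package for the Hartung-Knapp fit:

# Numerical check of the Hartung-Knapp / weighted-least-squares link:
# the HK standard error equals that of an intercept-only WLS fit with
# weights 1/(v + tau2_hat). Invented data.
library(metafor)

yi <- c(0.2, 0.5, 0.1, 0.7, 0.3)
vi <- c(0.04, 0.09, 0.05, 0.12, 0.06)

res <- rma(yi, vi, test = "knha")                # Hartung-Knapp inference
wls <- lm(yi ~ 1, weights = 1 / (vi + res$tau2)) # matching WLS model

res$se                                           # HK standard error
summary(wls)$coefficients[1, "Std. Error"]       # identical WLS value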
17.
PLoS One; 14(4): e0215052, 2019.
Article in English | MEDLINE | ID: mdl-30978228

ABSTRACT

Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias exists, how strongly it affects different scientific literatures is currently less well known. We examined evidence of publication bias in a large-scale data set of primary studies that were included in 83 meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (CDSR; representing meta-analyses from medicine). Publication bias was assessed on all homogeneous subsets (3.8% of all subsets of meta-analyses published in Psychological Bulletin) of primary studies included in meta-analyses, because publication bias methods do not have good statistical properties if the true effect size is heterogeneous. Publication bias tests did not reveal evidence for bias in the homogeneous subsets. Overestimation was minimal but statistically significant, providing evidence of publication bias that appeared to be similar in both fields. However, a Monte Carlo simulation study revealed that the creation of homogeneous subsets resulted in challenging conditions for publication bias methods, since the number of effect sizes in a subset was rather small (the median number of effect sizes was 6). Our findings are consistent with publication bias ranging from no bias at all to, in the most extreme case, only 5% of statistically nonsignificant effect sizes being published. These and other findings, in combination with the small percentages of statistically significant primary effect sizes (28.9% and 18.9% for subsets published in Psychological Bulletin and the CDSR, respectively), led to the conclusion that evidence for publication bias in the studied homogeneous subsets is weak, but suggestive of mild publication bias in both psychology and medicine.


Subject(s)
Data Interpretation, Statistical, Medicine, Psychology, Publication Bias, Data Management, Databases, Factual, Humans, Monte Carlo Method, Quality Control, Selection Bias
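
For readers who want to run the same kind of checks on their own data, the metafor package ships standard publication bias tests: regtest() for an Egger-type regression test and ranktest() for the Begg and Mazumdar rank correlation test. A minimal illustration on invented data; the study's actual large-scale pipeline is more involved.

# Standard publication bias tests on a fitted random-effects model, as
# implemented in metafor. Invented data; a minimal illustration only.
library(metafor)

yi <- c(0.8, 0.6, 0.4, 0.5, 0.2, 0.7)
vi <- c(0.20, 0.12, 0.05, 0.08, 0.02, 0.15)

res <- rma(yi, vi)
regtest(res)    # Egger-type regression test for funnel plot asymmetry
ranktest(res)   # rank correlation test (Begg & Mazumdar)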
18.
Psychol Methods; 24(1): 116-134, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30489099

ABSTRACT

One of the main goals of meta-analysis is to test for and estimate the heterogeneity of effect sizes. We examined the effect of publication bias on the Q test and assessments of heterogeneity as a function of true heterogeneity, publication bias, true effect size, number of studies, and variation of sample sizes. The present study has two main contributions and is relevant to all researchers conducting meta-analyses. First, we show when and how publication bias affects the assessment of heterogeneity. The expected values of the heterogeneity measures H² and I² were analytically derived, and the power and Type I error rate of the Q test were examined in a Monte Carlo simulation study. Our results show that the effect of publication bias on the Q test and the assessment of heterogeneity is large, complex, and nonlinear. Publication bias can both dramatically decrease and dramatically increase heterogeneity in true effect size, particularly if the number of studies is large and the population effect size is small. We therefore conclude that the Q test of homogeneity and the heterogeneity measures H² and I² are generally not valid when publication bias is present. Our second contribution is that we introduce a web application, Q-sense, which can be used to determine the impact of publication bias on the assessment of heterogeneity within a given meta-analysis and to assess the robustness of the meta-analytic estimate to publication bias. Furthermore, we apply Q-sense to two published meta-analyses, showing how publication bias can result in invalid estimates of effect size and heterogeneity. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Data Interpretation, Statistical, Meta-Analysis as Topic, Publication Bias, Humans, Normal Distribution
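
The quantities under study here are simple to compute from effect sizes and sampling variances: the Q statistic and its p value, and the derived H² and I² measures. Base R, invented data:

# Cochran's Q test of homogeneity and the heterogeneity measures H^2
# and I^2, computed directly from effects and sampling variances.
y <- c(0.3, 0.1, 0.6, 0.2, 0.4)
v <- c(0.05, 0.04, 0.09, 0.03, 0.06)

w  <- 1 / v
mu <- sum(w * y) / sum(w)          # fixed-effect weighted mean
Q  <- sum(w * (y - mu)^2)          # Q statistic
k  <- length(y)

p  <- pchisq(Q, df = k - 1, lower.tail = FALSE)  # Q test p value
H2 <- Q / (k - 1)                                # H^2
I2 <- max(0, (Q - (k - 1)) / Q)                  # I^2, truncated at 0

c(Q = Q, p = p, H2 = H2, I2 = I2)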
19.
Res Synth Methods; 10(2): 225-239, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30589219

ABSTRACT

The effect sizes of studies included in a meta-analysis often do not share a common true effect size, due to differences in, for instance, the design of the studies. Estimates of this so-called between-study variance are usually imprecise. Hence, reporting a confidence interval together with a point estimate of the amount of between-study variance facilitates interpretation of the meta-analytic results. Two methods recommended for creating such a confidence interval are the Q-profile and generalized Q-statistic methods, both of which make use of the Q-statistic. These methods are exact if the assumptions underlying the random-effects model hold, but these assumptions are usually violated in practice, so that the resulting confidence intervals are approximate rather than exact. By means of two Monte Carlo simulation studies with the odds ratio as effect size measure, we illustrate that the coverage probabilities of both methods can be substantially below the nominal coverage rate in situations that are representative of meta-analyses in practice. We also show that these too-low coverage probabilities are caused by violations of the assumptions of the random-effects model (i.e., normal sampling distributions of the effect size measure and known sampling variances) and are especially prevalent if the sample sizes in the primary studies are small.


Subject(s)
Confidence Intervals, Meta-Analysis as Topic, Models, Statistical, Statistics as Topic, Computer Simulation, Monte Carlo Method, Normal Distribution, Odds Ratio, Probability, Research Design, Sample Size
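
The Q-profile method itself is short enough to implement directly: profile the generalized Q statistic over candidate tau² values and invert the chi-square bounds. A base R sketch on invented data; in practice, confint() on a fitted metafor rma object returns the same kind of interval.

# Q-profile confidence interval for the between-study variance tau^2:
# find the tau^2 values at which the generalized Q statistic hits the
# chi-square quantiles. Invented data.
y <- c(0.3, 0.1, 0.6, 0.2, 0.4)
v <- c(0.05, 0.04, 0.09, 0.03, 0.06)
k <- length(y)

Qgen <- function(tau2) {             # generalized Q at a given tau^2
  w  <- 1 / (v + tau2)
  mu <- sum(w * y) / sum(w)
  sum(w * (y - mu)^2)
}

# 95% CI: Qgen() is decreasing in tau2, so invert the chi-square bounds;
# the lower bound is truncated at zero when Qgen(0) is already too small.
lo <- if (Qgen(0) < qchisq(0.975, k - 1)) 0 else
  uniroot(function(t) Qgen(t) - qchisq(0.975, k - 1), c(0, 10))$root
hi <- uniroot(function(t) Qgen(t) - qchisq(0.025, k - 1), c(0, 100))$root
c(lower = lo, upper = hi)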
20.
Stat Med; 37(17): 2616-2629, 2018 Jul 30.
Article in English | MEDLINE | ID: mdl-29700839

ABSTRACT

A wide variety of estimators of the between-study variance are available in random-effects meta-analysis. Many, but not all, of these estimators are based on the method of moments. The DerSimonian-Laird estimator is widely used in applications, but the Paule-Mandel estimator is an alternative that is now recommended. Recently, DerSimonian and Kacker have developed two-step moment-based estimators of the between-study variance. We extend these two-step estimators so that multiple (more than two) steps are used. We establish the surprising result that the multistep estimator tends towards the Paule-Mandel estimator as the number of steps becomes large. Hence, the iterative scheme underlying our new multistep estimator provides a hitherto unknown relationship between two-step estimators and the Paule-Mandel estimator. Our analysis suggests that two-step estimators are not necessarily distinct estimators in their own right; instead, they are quantities that are closely related to the usual iterative scheme that is used to calculate the Paule-Mandel estimate. The relationship that we establish between the multistep and Paule-Mandel estimators is another justification for the use of the latter. Two-step and multistep estimators are perhaps best conceptualized as approximate Paule-Mandel estimators.


Subject(s)
Meta-Analysis as Topic, Models, Statistical, Analysis of Variance, Computer Simulation, Humans
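
The iterative scheme described in this entry fits in a dozen lines of base R: start from a moment estimate, recompute the weights, and re-apply the moment equation. Iterating to convergence lands on the Paule-Mandel fixed point, which can be checked against metafor's method = "PM". A sketch on invented data:

# Multistep moment estimator of tau^2: repeatedly re-apply the
# method-of-moments equation with weights a = 1/(v + tau2) from the
# previous step. Its limit is the Paule-Mandel estimate.
y <- c(0.2, 0.5, 0.1, 0.7, 0.3)
v <- c(0.04, 0.09, 0.05, 0.12, 0.06)

step <- function(tau2) {                 # one moment-estimator step
  a  <- 1 / (v + tau2)
  mu <- sum(a * y) / sum(a)
  Q  <- sum(a * (y - mu)^2)
  c1 <- sum(a * v) - sum(a^2 * v) / sum(a)
  c2 <- sum(a) - sum(a^2) / sum(a)
  max(0, (Q - c1) / c2)
}

tau2 <- 0                           # first step from 0 gives DerSimonian-Laird
for (i in 1:50) tau2 <- step(tau2)  # many steps approximate Paule-Mandel
tau2
# compare: metafor::rma(y, v, method = "PM")$tau2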