Results 1 - 12 of 12
1.
Res Synth Methods; 12(3): 264-290, 2021 May.
Article in English | MEDLINE | ID: mdl-33543583

ABSTRACT

Tolerance intervals provide a bracket intended to contain a percentage (e.g., 80%) of a population distribution given sample estimates of the mean and variance. In random-effects meta-analysis, tolerance intervals should contain researcher-specified proportions of underlying population effect sizes. Using Monte Carlo simulation, we investigated coverage for five relevant tolerance-interval estimators: the Schmidt-Hunter credibility interval, a prediction interval, two content tolerance intervals adapted to meta-analysis, and a bootstrap tolerance interval. None of the intervals contained the desired percentage of population effects at the nominal rate in all conditions. However, the prediction interval worked well unless the number of primary studies was small (<30), and one of the content tolerance intervals approached nominal levels even with small numbers (<20) of primary studies. The bootstrap tolerance interval achieved near-nominal coverage given sufficient numbers of primary studies (30+) and large enough sample sizes (N ≅ 70) in the included primary studies, although it slightly exceeded nominal coverage with large numbers of large-sample primary studies. We then applied the intervals to real data from a set of previously published analyses and offered suggestions for practice. Tolerance intervals incorporate estimation error into the construction of proper brackets for fractions of population true effects. In many contexts, such intervals approach the desired nominal levels of coverage.
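For readers who want to experiment with the prediction interval evaluated above, here is a minimal sketch, not taken from the article, of the standard Higgins-Thompson-Spiegelhalter prediction interval with DerSimonian-Laird estimation of the between-study variance; function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def prediction_interval(effects, variances, coverage=0.80):
    """Interval intended to contain `coverage` of the underlying true effects."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    k = len(y)

    # Fixed-effect weights and Cochran's Q
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)

    # DerSimonian-Laird estimate of the between-study variance tau^2
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects pooled mean and its variance
    w_re = 1.0 / (v + tau2)
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    var_mu = 1.0 / np.sum(w_re)

    # Width reflects both tau^2 and estimation error in the pooled mean,
    # with a t critical value on k - 2 degrees of freedom
    t_crit = stats.t.ppf(0.5 + coverage / 2.0, df=k - 2)
    half = t_crit * np.sqrt(tau2 + var_mu)
    return mu_re - half, mu_re + half

# Illustrative data: an 80% bracket for the population true effects
rng = np.random.default_rng(0)
y = rng.normal(0.4, 0.2, size=30)   # observed study effects
v = np.full(30, 0.02)               # within-study sampling variances
print(prediction_interval(y, v, coverage=0.80))
```

Note that the k - 2 degrees of freedom mean the interval needs at least three primary studies, consistent with the abstract's caution about small numbers of studies.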


Subjects
Meta-Analysis as Topic; Monte Carlo Method; Computer Simulation; Confidence Intervals
3.
Res Synth Methods; 8(1): 5-18, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28058794

ABSTRACT

When we speak about heterogeneity in a meta-analysis, our intent is usually to understand the substantive implications of the heterogeneity. If an intervention yields a mean effect size of 50 points, we want to know if the effect size in different populations varies from 40 to 60, or from 10 to 90, because this speaks to the potential utility of the intervention. While there is a common belief that the I² statistic provides this information, it actually does not. In this example, if we are told that I² is 50%, we have no way of knowing if the effects range from 40 to 60, or from 10 to 90, or across some other range. Rather, if we want to communicate the predicted range of effects, then we should simply report this range. This gives readers the information they think is being captured by I² and does so in a way that is concise and unambiguous. Copyright © 2017 John Wiley & Sons, Ltd.
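The paper's point can be made concrete with a small simulation. Below is an illustrative sketch, not from the paper, in which the spread of true effects is held fixed while only the within-study sampling error changes: I² moves substantially even though the substantive range of effects stays the same, because I² is a proportion of observed variance, not an absolute range.

```python
import numpy as np

def heterogeneity(effects, variances):
    """Return I^2 (a percentage) and tau^2 (an absolute between-study variance)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu) ** 2)
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)
    i2 = 0.0 if Q == 0 else max(0.0, (Q - df) / Q) * 100.0
    return i2, tau2

# Hold the spread of true effects fixed (SD = 10, roughly the 40-to-60 scenario)
# and vary only the within-study sampling error: I^2 changes dramatically even
# though the range of true effects is identical in both runs.
rng = np.random.default_rng(1)
true_effects = rng.normal(50, 10, size=40)
for se in (5.0, 20.0):
    y = true_effects + rng.normal(0, se, size=40)
    i2, tau2 = heterogeneity(y, np.full(40, se ** 2))
    print(f"within-study SE = {se:>4}: I^2 = {i2:5.1f}%, tau-hat = {np.sqrt(tau2):.1f}")
```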


Subjects
Meta-Analysis as Topic; Research Design; Algorithms; Attention Deficit Disorder with Hyperactivity/drug therapy; Attitude; Cognition/drug effects; Effect Modifier, Epidemiologic; Erectile Dysfunction/drug therapy; Female; Humans; Male; Methylphenidate/therapeutic use; Models, Statistical; Mothers; Prevalence; Reproducibility of Results; Statistics as Topic; Stress Disorders, Post-Traumatic/therapy; Treatment Outcome
5.
Cochrane Database Syst Rev; 4: MR000038, 2016 Apr 04.
Article in English | MEDLINE | ID: mdl-27040721

ABSTRACT

BACKGROUND: Improper practices and unprofessional conduct in clinical research have been shown to waste a significant portion of healthcare funds and harm public health.

OBJECTIVES: Our objective was to evaluate the effectiveness of educational or policy interventions in research integrity or responsible conduct of research on the behaviour and attitudes of researchers in health and other research areas.

SEARCH METHODS: We searched the CENTRAL, MEDLINE, LILACS and CINAHL health research bibliographic databases, as well as the Academic Search Complete, AGRICOLA, GeoRef, PsycINFO, ERIC, SCOPUS and Web of Science databases. We performed the last search on 15 April 2015, limited to articles published between 1990 and 2014, inclusive. We also searched conference proceedings and abstracts from research integrity conferences and specialized websites, and handsearched 14 journals that regularly publish research integrity research.

SELECTION CRITERIA: We included studies that measured the effects of one or more interventions, i.e. any direct or indirect procedure that may have an impact on research integrity and responsible conduct of research in its broadest sense, where participants were any stakeholders in research and publication processes, from students to policy makers. We included randomized and non-randomized controlled trials, such as controlled before-and-after studies, with comparisons of outcomes in the intervention versus non-intervention group or before versus after the intervention. Studies without a control group were not included.

DATA COLLECTION AND ANALYSIS: We used the standard methodological procedures expected by Cochrane. To assess the risk of bias in non-randomized studies, we used a modified Cochrane tool comprising four of the six original domains (blinding, incomplete outcome data, selective outcome reporting, other sources of bias) and two additional domains (comparability of groups and confounding factors). We categorized our primary outcome into the following levels: 1) organizational change attributable to the intervention, 2) behavioural change, 3) acquisition of knowledge/skills and 4) modification of attitudes/perceptions. The secondary outcome was participants' reaction to the intervention.

MAIN RESULTS: Thirty-one studies involving 9571 participants, described in 33 articles, met the inclusion criteria. All were published in English. Fifteen studies were randomized controlled trials, nine were controlled before-and-after studies, four were non-equivalent controlled studies with a historical control, one was a non-equivalent controlled study with a post-test only, and two were non-equivalent controlled studies with pre- and post-test findings for the intervention group and a post-test for the control group. Twenty-one studies assessed the effects of interventions related to plagiarism and 10 studies assessed interventions in research integrity/ethics. Participants included undergraduates, postgraduates and academics from a range of research disciplines and countries, and the studies assessed different types of outcomes. We judged most of the included randomized controlled trials to have a high risk of bias in at least one of the assessed domains, and in the case of non-randomized trials there were no attempts to alleviate the potential biases inherent in the non-randomized designs. We identified a range of interventions aimed at reducing research misconduct. Most interventions involved some kind of training, but methods and content varied greatly and included face-to-face and online lectures, interactive online modules, discussion groups, homework and practical exercises. Most studies did not use standardized or validated outcome measures, and it was impossible to synthesize findings from studies with such diverse interventions, outcomes and participants. Overall, there is very low quality evidence that various methods of training in research integrity had some effects on participants' attitudes to ethical issues but minimal (or short-lived) effects on their knowledge. Training about plagiarism and paraphrasing had varying effects on participants' attitudes towards plagiarism and their confidence in avoiding it, but training that included practical exercises appeared to be more effective. Training on plagiarism had inconsistent effects on participants' knowledge about and ability to recognize plagiarism. Active training, particularly if it involved practical exercises or use of text-matching software, generally decreased the occurrence of plagiarism, although results were not consistent. The design of a journal's author contribution form affected the truthfulness of information supplied about individuals' contributions and the proportion of listed contributors who met authorship criteria. We identified no studies testing interventions for outcomes at the organizational level. The numbers of events and the magnitude of intervention effects were generally small, so the evidence is likely to be imprecise. No adverse effects were reported.

AUTHORS' CONCLUSIONS: The evidence base relating to interventions to improve research integrity is incomplete; the studies that have been done are heterogeneous and inappropriate for meta-analysis, and their applicability to other settings and populations is uncertain. Many studies had a high risk of bias because of the choice of study design, and interventions were often inadequately reported. Even when randomized designs were used, findings were difficult to generalize. Due to the very low quality of the evidence, the effects of training in responsible conduct of research on reducing research misconduct are uncertain. Low quality evidence indicates that training about plagiarism, especially if it involves practical exercises and use of text-matching software, may reduce the occurrence of plagiarism.


Subjects
Biomedical Research/ethics; Plagiarism; Research Personnel/ethics; Scientific Misconduct/ethics; Attitude; Controlled Before-After Studies/ethics; Controlled Before-After Studies/standards; Controlled Clinical Trials as Topic/ethics; Controlled Clinical Trials as Topic/standards; Humans; Publishing/ethics; Publishing/standards; Randomized Controlled Trials as Topic/ethics; Randomized Controlled Trials as Topic/standards; Research Personnel/standards
6.
Perspect Psychol Sci; 10(5): 677-679, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26386006

ABSTRACT

Although Ferguson's (2015, this issue) meta-analysis addresses an important topic, we have serious concerns about how it was conducted. Because there was only one coder, we have no confidence in the reliability or validity of the coded variables; two independent raters should have coded the studies. Ferguson synthesized partial correlations as if they were zero-order correlations, which can increase or decrease (sometimes substantially) the variance of the partial correlation. Moreover, he partialled different numbers of variables from different effects, partialled different variables from different studies, and did not report what was partialled from each study. Ferguson used an idiosyncratic "tandem procedure" for detecting publication bias. He also "corrected" his results for publication bias, even though there is no such thing as a "correction" for publication bias. Thus, we believe that Ferguson's meta-analysis is fatally flawed and should not have been accepted for publication in Perspectives on Psychological Science (or any other journal).
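As a rough illustration of the variance issue raised above (not taken from the commentary): a common large-sample approximation for the variance of a zero-order correlation is (1 - r²)² / (n - 1), while a partial correlation with p variables partialled out has approximately (1 - r²)² / (n - 1 - p). Treating a partial correlation as zero-order therefore misstates its variance, and by differing amounts when p varies across studies. The sketch below uses purely illustrative values.

```python
def corr_variance(r, n, n_partialled=0):
    """Approximate large-sample variance of a (partial) correlation.

    n_partialled is the number of covariates partialled out; 0 gives the
    usual zero-order formula.
    """
    return (1 - r ** 2) ** 2 / (n - 1 - n_partialled)

r, n = 0.20, 60
print(corr_variance(r, n))                   # treated as zero-order
print(corr_variance(r, n, n_partialled=5))   # as a partial correlation
```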


Subjects
Publication Bias; Reproducibility of Results; Anger; Emotions; Humans; Publishing
8.
Psychol Methods; 17(1): 129-136, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22369520

ABSTRACT

It is well documented that studies reporting statistically significant results are more likely to be published than studies reporting nonsignificant results, a phenomenon called publication bias. Publication bias in meta-analytic reviews should be identified and reduced when possible. Ferguson and Brannick (2012) argued that the inclusion of unpublished articles is ineffective, and possibly counterproductive, as a means of reducing publication bias in meta-analyses. We show how idiosyncratic choices on the part of Ferguson and Brannick led to an erroneous conclusion. We demonstrate that their key finding, that publication bias was more likely when unpublished studies were included, may be an artifact of the way they assessed publication bias. We also point out how the lack of transparency about key choices, and the absence of information about critical features of Ferguson and Brannick's sample and procedures, may have impaired readers' ability to assess the validity of their claims. Furthermore, we demonstrate that many of the claims they made are without empirical support, even though these claims could have been tested empirically, and that they may be misleading. With their claim that addressing publication bias introduces subjectivity and bias into meta-analysis, they ignored a large body of evidence showing that including unpublished studies that meet the inclusion criteria of a meta-analysis decreases (rather than increases) publication bias. Rather than exclude unpublished studies, we recommend that meta-analysts code study characteristics related to methodological quality (e.g., experimental vs. nonexperimental design) and test whether these factors influence the meta-analytic results.
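The recommendation in the final sentence can be illustrated with a simple subgroup (moderator) analysis. The following is a minimal fixed-effect sketch, not from the article; the function names, the example data, and the choice of a Q-between test are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def pooled(y, v):
    """Fixed-effect (inverse-variance) pooled mean and its variance."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)

def publication_status_test(y, v, published):
    """Q-between test: does publication status moderate the pooled effect?"""
    y = np.asarray(y, dtype=float)
    v = np.asarray(v, dtype=float)
    published = np.asarray(published, dtype=bool)
    mu_all, _ = pooled(y, v)
    q_between = 0.0
    for mask in (published, ~published):
        mu_g, var_g = pooled(y[mask], v[mask])
        q_between += (mu_g - mu_all) ** 2 / var_g
    return q_between, stats.chi2.sf(q_between, df=1)  # two subgroups -> 1 df

# Illustrative data: four published and two unpublished studies
y = [0.42, 0.35, 0.50, 0.28, 0.12, 0.18]          # effect sizes
v = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02]          # sampling variances
published = [True, True, True, True, False, False]
print(publication_status_test(y, v, published))
```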


Subjects
Meta-Analysis as Topic; Psychology; Publication Bias/statistics & numerical data; Quality Control; Humans
9.
Psychol Bull; 136(2): 151-173, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20192553

ABSTRACT

Meta-analytic procedures were used to test the effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, empathy/desensitization, and prosocial behavior. Unique features of this meta-analytic review include (a) more restrictive methodological quality inclusion criteria than in past meta-analyses; (b) cross-cultural comparisons; (c) longitudinal studies for all outcomes except physiological arousal; (d) conservative statistical controls; (e) multiple moderator analyses; and (f) sensitivity analyses. Social-cognitive models and cultural differences between Japan and Western countries were used to generate theory-based predictions. Meta-analyses yielded significant effects for all 6 outcome variables. The pattern of results for different outcomes and research designs (experimental, cross-sectional, longitudinal) fit theoretical predictions well. The evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior. Moderator analyses revealed significant research design effects, weak evidence of cultural differences in susceptibility and type of measurement effects, and no evidence of sex differences in susceptibility. Results of various sensitivity analyses revealed these effects to be robust, with little evidence of selection (publication) bias.


Subjects
Aggression/psychology; Empathy; Social Behavior; Video Games/psychology; Violence/psychology; Adolescent; Adolescent Behavior/psychology; Adult; Affect; Child; Child Behavior/psychology; Cognition; Cross-Cultural Comparison; Female; Humans; Japan; Longitudinal Studies; Male; United States; Young Adult
10.
Res Synth Methods; 1(2): 97-111, 2010 Apr.
Article in English | MEDLINE | ID: mdl-26061376

ABSTRACT

There are two popular statistical models for meta-analysis, the fixed-effect model and the random-effects model. The fact that these two models employ similar sets of formulas to compute statistics, and sometimes yield similar estimates for the various parameters, may lead people to believe that the models are interchangeable. In fact, though, the models represent fundamentally different assumptions about the data. The selection of the appropriate model is important to ensure that the various statistics are estimated correctly. Additionally, and more fundamentally, the model serves to place the analysis in context. It provides a framework for the goals of the analysis as well as for the interpretation of the statistics. In this paper we explain the key assumptions of each model, and then outline the differences between the models. We conclude with a discussion of factors to consider when choosing between the two models. Copyright © 2010 John Wiley & Sons, Ltd.
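As a concrete companion to this discussion, here is a minimal sketch, not from the paper, of the standard inverse-variance formulas for both models: the fixed-effect model assumes one common true effect, while the random-effects model (here with a DerSimonian-Laird estimate of τ²) assumes a distribution of true effects and so adds between-study variance to every study's weight. Data and names are illustrative.

```python
import numpy as np

def fixed_effect(y, v):
    """Fixed-effect pooled estimate and standard error: one common true effect."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def random_effects(y, v):
    """Random-effects pooled estimate and SE: true effects vary across studies."""
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)   # DerSimonian-Laird tau^2
    w_re = 1.0 / (v + tau2)                   # extra between-study variance
    return np.sum(w_re * y) / np.sum(w_re), np.sqrt(1.0 / np.sum(w_re))

y = np.array([0.30, 0.10, 0.60, 0.40, 0.20])       # illustrative study effects
v = np.array([0.010, 0.020, 0.015, 0.025, 0.010])  # their sampling variances
print("fixed-effect:  ", fixed_effect(y, v))
print("random-effects:", random_effects(y, v))
```

Note how the similarity of the formulas, which differ only in whether τ² enters the weights, explains why the two models can be mistaken for interchangeable even though their assumptions differ fundamentally.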

12.
J Occup Health Psychol; 13(1): 69-93, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18211170

ABSTRACT

A meta-analysis was conducted to determine the effectiveness of stress management interventions in occupational settings. Thirty-six experimental studies were included, representing 55 interventions and a total sample of 2,847 participants. Of the participants, 59% were female; mean age was 35.4 years; and the average intervention length was 7.4 weeks. The overall weighted effect size (Cohen's d) for all studies was 0.526 (95% confidence interval = 0.364, 0.687), a significant medium-to-large effect. Interventions were coded as cognitive-behavioral, relaxation, organizational, multimodal, or alternative. Analyses based on these subgroups suggested that intervention type played a moderating role. Cognitive-behavioral programs consistently produced larger effects than other types of interventions, but the effect was reduced when additional treatment components were added. Within the sample of studies, relaxation interventions were the most frequently used, and organizational interventions remained scarce. Effects were based mainly on psychological outcome variables, as opposed to physiological or organizational measures. Examination of additional moderators, such as treatment length, outcome variable, and occupation, did not reveal significant variation in effect size by intervention type.
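For readers unfamiliar with how such an overall estimate is formed, here is a minimal sketch, not using the study's data, of an inverse-variance weighted mean effect size and its 95% confidence interval; all values are illustrative.

```python
import numpy as np

d = np.array([0.40, 0.65, 0.30, 0.80])    # per-intervention Cohen's d values
v = np.array([0.02, 0.03, 0.015, 0.04])   # their sampling variances

w = 1.0 / v                                # inverse-variance weights
d_bar = np.sum(w * d) / np.sum(w)          # weighted mean effect size
se = np.sqrt(1.0 / np.sum(w))              # standard error of the mean
lo, hi = d_bar - 1.96 * se, d_bar + 1.96 * se
print(f"d = {d_bar:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```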


Subjects
Occupational Health; Program Evaluation; Stress, Psychological/prevention & control; Adult; Female; Humans; Internationality; Male; United States