ABSTRACT
Creativity is widely considered a skill essential to succeeding in the modern world. Numerous creativity training programs have been developed, and several meta-analyses have attempted to summarize the effectiveness of these programs and identify the features influencing their impact. Unfortunately, previous meta-analyses share a number of limitations, most notably overlooking the potentially strong impact of publication bias and the influence of study quality on effect sizes. We undertook a meta-analysis of 169 creativity training studies across 5 decades (844 effect sizes, the largest meta-analysis of creativity training to date), including a substantial number of unpublished studies (48 studies; 262 effect sizes). We employed a range of statistical methods to detect and adjust for publication bias and evaluated the robustness of the evidence in the field. In line with previous meta-analyses, we found a moderate training effect (0.53 SDs; unadjusted for publication bias). Critically, we observed converging evidence consistent with strong publication bias. All adjustment methods considerably lowered our original estimate (adjusted estimates ranged from 0.29 to 0.32 SDs). This severe bias casts doubt on the representativeness of the published literature in the field and on the conclusions of previous meta-analyses. Our analysis also revealed a high prevalence of methodological shortcomings in creativity training studies (likely to have inflated our average effect), and few signs of methodological improvement over time, a situation that limits the usefulness of this body of work. We conclude by presenting implications and recommendations for researchers and practitioners, and we propose an agenda for future research. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
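The abstract above describes adjusting a meta-analytic effect size for small-study (publication) bias. One widely used family of adjustments regresses effect sizes on their standard errors and takes the intercept as the bias-adjusted estimate (a PET-style precision-effect regression). The sketch below is illustrative only: the function name and the example data are invented for demonstration and are not from the study; the paper's actual adjustment methods are not specified here.

```python
import math

def pet_adjusted_effect(effects, ses):
    """PET-style adjustment: inverse-variance-weighted regression of
    effect sizes on their standard errors. The intercept is the
    predicted effect at SE = 0 (an infinitely precise study), which
    serves as a small-study-bias-adjusted estimate."""
    # Inverse-variance weights
    w = [1.0 / se ** 2 for se in ses]
    sw = sum(w)
    # Weighted means of predictor (SE) and outcome (effect size)
    mx = sum(wi * se for wi, se in zip(w, ses)) / sw
    my = sum(wi * d for wi, d in zip(w, effects)) / sw
    # Weighted least-squares slope and intercept
    sxx = sum(wi * (se - mx) ** 2 for wi, se in zip(w, ses))
    sxy = sum(wi * (se - mx) * (d - my) for wi, se, d in zip(w, ses, effects))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical data: small studies (large SE) report larger effects,
# the funnel-plot asymmetry pattern consistent with publication bias.
effects = [0.9, 0.7, 0.5, 0.35, 0.3]
ses = [0.5, 0.4, 0.25, 0.15, 0.1]
adjusted, slope = pet_adjusted_effect(effects, ses)
print(f"adjusted estimate: {adjusted:.2f}, SE slope: {slope:.2f}")
```

With data like these, the positive slope indicates small-study asymmetry, and the intercept falls below the naive weighted mean, mirroring the kind of downward revision (0.53 to roughly 0.3 SDs) reported in the abstract.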
Subjects
Creativity, Humans, Publication Bias

ABSTRACT
There is a norm in psychology to use causally ambiguous statistical language, rather than straightforward causal language, when describing methods and results of nonexperimental studies. However, causally ambiguous language may inhibit a critical examination of the study's causal assumptions and lead to a greater acceptance of policy recommendations that rely on causal interpretations of nonexperimental findings. In a preregistered experiment, 142 psychology faculty, postdocs, and doctoral students (54% female), ages 22-67 (M = 33.20, SD = 8.96), rated the design and analysis of hypothetical studies described in causally ambiguous statistical language as being of higher quality (by .34-.80 SD), and as similarly or more supportive (by .16-.27 SD) of policy recommendations, than studies described in straightforward causal language. Thus, using statistical rather than causal language to describe nonexperimental findings did not decrease, and may have increased, perceived support for implicitly causal conclusions.
Subjects
Faculty, Language, Humans, Female, Male, Causality, Health Personnel

ABSTRACT
Abstract reasoning is critical for science and mathematics, but is very difficult. Three studies examined the hypothesis that the generation of alternatives required for conditional reasoning with false premises facilitates abstract reasoning. Study 1 (n = 372) found that reasoning with false premises improved abstract reasoning in 12- to 15-year-olds. Study 2 (n = 366) found a positive effect of simply generating alternatives, but only in 19-year-olds. Study 3 (n = 92) found that 9- to 11-year-olds were able to respond logically with false premises, whereas no such ability was observed in 6- to 7-year-olds. Reasoning with false premises was also found to improve reasoning with semiabstract premises in the older children. These results support the idea that alternatives generation with false premises facilitates abstract reasoning.