ABSTRACT
In this study we aimed to investigate gender differences in fear generalization tendencies in humans and, inspired by recent findings in animal research, to examine whether any such differences could stem from differences in memory precision. Forty men and forty women underwent a differential fear conditioning procedure using geometric shapes as cues. Subsequently, generalized fear responses were assessed across a spectrum of perceptually similar shapes. Throughout generalization testing, perceptual memory accuracy was repeatedly probed using a stimulus recreation task. Using statistical and computational modeling, we found strong evidence for the absence of gender differences in fear learning and generalization behavior. The evidence for gender differences in related processes such as perception and memory was inconclusive. Although some of our findings hinted at the possibility that women may be more sensitive to physical differences between stimuli and may remember stimuli more accurately than men, those observations were not consistently replicated across experimental conditions and analytical approaches. Our results contribute to the emerging literature on gender differences in perceptual fear generalization in humans and underscore the need for further systematic research to explore the interplay between gender and mechanisms associated with fear generalization across different experimental contexts.
ABSTRACT
We examined continuous affect drawings as an innovative measure of affective experiences over time. Intensive longitudinal data often rely on discrete assessments, leaving "blind spots" between measurements. With continuous affect drawings, participants visually depict their affect fluctuations between assessments. In an experience sampling study, participants (N = 115) rated their momentary positive and negative affect 6 times daily. From the second daily rating on, they additionally drew their positive and negative affect changes and reported affective events between assessments. They also received one measurement burst between assessments daily. A strength of the approach is the substantial informational gain (7% on average) over linearly interpolated points between assessments. The additional information was subsequently categorized into positive and negative affect peaks and valleys, each occurring once a day per person on average. The probability of detecting peaks and valleys increased with reported events. The drawings correlated positively with momentary affect scores from the burst. Yet the drawings predicted the bursts less well, suggesting that the momentary ratings may yield different information than the drawings. Although the timing of retrospective drawings is less precise than individual momentary assessments, this method provides a comprehensive understanding of affective experiences between assessments, offering a unique perspective on affect dynamics.
ABSTRACT
Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g., effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.
ABSTRACT
Exposure therapy is an evidence-based treatment option for anxiety-related disorders. Many patients also take medication that could, in principle, affect exposure therapy efficacy. Clinical and laboratory evidence indeed suggests that benzodiazepines may have detrimental effects. Large clinical trials with propranolol, a common beta-blocker, are currently lacking, but several preclinical studies do indicate impaired establishment of safety memories. Here, we investigated the effects of propranolol given prior to extinction training in 9 rat studies (N = 215) and one human study (N = 72). A Bayesian meta-analysis of our rat studies provided strong evidence against propranolol-induced extinction memory impairment during a drug-free test, and the human study found no significant difference with placebo. Two of the rat studies actually suggested a small beneficial effect of propranolol. Lastly, two rat studies with a benzodiazepine (midazolam) group provided some evidence for a harmful effect on extinction memory, i.e., impaired extinction retention. In conclusion, our midazolam findings are in line with prior literature (i.e., an extinction retention impairment), but this is not the case for the 10 studies with propranolol. Our data thus support caution regarding the use of benzodiazepines during exposure therapy, but argue against a harmful effect of propranolol on extinction learning.
Subjects
Adrenergic beta-Antagonists , Extinction, Psychological , Fear , Memory , Midazolam , Propranolol , Propranolol/pharmacology , Propranolol/administration & dosage , Animals , Fear/drug effects , Extinction, Psychological/drug effects , Rats , Humans , Adrenergic beta-Antagonists/pharmacology , Adrenergic beta-Antagonists/administration & dosage , Male , Memory/drug effects , Midazolam/pharmacology , Midazolam/administration & dosage , Midazolam/adverse effects , Adult , Bayes Theorem , Female , Conditioning, Classical/drug effects , Young Adult
ABSTRACT
How feelings change over time is a central topic in emotion research. To study these affective fluctuations, researchers often ask participants to repeatedly indicate how they feel on a self-report rating scale. Despite widespread recognition that this kind of data is subject to measurement error, the extent of this error remains an open question. Complementing many daily-life studies, this study aimed to investigate this question in an experimental setting. In such a setting, multiple trials follow each other at a fast pace, forcing experimenters to use a limited number of questions to measure affect during each trial. A total of 1398 participants completed a probabilistic reward task in which they were unknowingly presented with the same string of outcomes multiple times throughout the study. This allowed us to assess the test-retest consistency of their affective responses to the rating scales under investigation. We then compared these consistencies across different types of rating scales to determine whether a given type of scale led to greater measurement consistency. Overall, we found moderate to good consistency of the affective measurements. Surprisingly, however, we found no differences in consistency across rating scales, which suggests that the specific rating scale that is used does not influence the measurement consistency.
ABSTRACT
Sharing research data allows the scientific community to verify and build upon published work. However, data sharing is not yet common practice. The reasons for not sharing data are myriad: some are practical, others are more fear-related. One particular fear is that a reanalysis may expose errors. For this explanation, it would be interesting to know whether authors who do not share data genuinely made more errors than authors who do share data. Wicherts, Bakker, and Molenaar (2011) examined errors that can be discovered based on the published manuscript only, because it is impossible to reanalyze unavailable data. They found a higher prevalence of such errors in papers for which the data were not shared. However, Nuijten et al. (2017) did not find support for this finding in three large studies. To shed more light on this relation, we conducted a replication of the study by Wicherts et al. (2011). Our study consisted of two parts. In the first part, we reproduced the analyses from Wicherts et al. (2011) to verify the results, and we carried out several alternative analytical approaches to evaluate the robustness of the results against other analytical decisions. In the second part, we used a unique and larger data set on data sharing upon request for reanalysis, originating from Vanpaemel et al. (2015), to replicate the findings of Wicherts et al. (2011). We applied statcheck to detect consistency errors in all included papers and manually corrected false positives. Finally, we again assessed the robustness of the replication results against other analytical decisions. Taken together, we found no robust empirical evidence for the claim that not sharing research data for reanalysis is associated with consistency errors.
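The consistency check described above recomputes a p value from the reported test statistic and degrees of freedom and compares it with the reported p. A minimal, stdlib-only sketch of that idea for a two-tailed t test (the numerical integration, the tolerance, and the example values are our own illustrative choices, not statcheck's actual implementation):

```python
import math

def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_sf(t, df, upper=100.0, n=20000):
    """Upper-tail probability P(T > t), via Simpson integration of the density."""
    h = (upper - t) / n
    s = t_pdf(t, df) + t_pdf(upper, df)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * t_pdf(t + i * h, df)
    return s * h / 3

def consistent(t, df, p_reported, tol=0.005):
    """Flag whether a reported two-tailed p roughly matches the recomputed one."""
    p = 2 * t_sf(abs(t), df)
    return abs(p - p_reported) < tol

# t(28) = 2.20 corresponds to a two-tailed p of about .037
print(consistent(2.20, 28, 0.036))  # consistent
print(consistent(2.20, 28, 0.070))  # inconsistent
```

The real statcheck also parses APA-formatted results from text and handles one-tailed tests and rounding conventions; this sketch covers only the recompute-and-compare core.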
Subjects
Information Dissemination , Psychology , Research Design
ABSTRACT
The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The article is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
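For the point-null application mentioned above, one transparent route to a Bayes factor is the Savage-Dickey density ratio: BF01 equals the posterior density at the null value divided by the prior density at that value. A stdlib-only sketch for a binomial rate with a uniform Beta(1, 1) prior (the data values are invented for illustration):

```python
import math

def beta_logpdf(x, a, b):
    """Log density of a Beta(a, b) distribution at x."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def savage_dickey_bf01(k, n, a=1.0, b=1.0, theta0=0.5):
    """BF01 for H0: theta = theta0 via the Savage-Dickey density ratio.

    With a conjugate Beta(a, b) prior, the posterior after k successes
    in n trials is Beta(a + k, b + n - k).
    """
    log_post = beta_logpdf(theta0, a + k, b + n - k)
    log_prior = beta_logpdf(theta0, a, b)
    return math.exp(log_post - log_prior)

# 50 successes in 100 trials: the data are exactly what H0 predicts,
# so the Bayes factor favours the point null over the uniform alternative.
bf01 = savage_dickey_bf01(50, 100)
print(round(bf01, 2))  # roughly 8, i.e. evidence for the null
```

The identity holds whenever the alternative's prior is continuous at theta0; dedicated software (e.g., the packages surveyed in the review) handles the more general cases.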
Subjects
Bayes Theorem , Behavioral Research , Psychology , Humans , Behavioral Research/methods , Psychology/methods , Software , Research Design
ABSTRACT
Human generalization research aims to understand the processes underlying the transfer of prior experiences to new contexts. Generalization research predominantly relies on descriptive statistics, assumes a single generalization mechanism, interprets generalization from mono-source data, and disregards individual differences. Unfortunately, such an approach fails to disentangle various mechanisms underlying generalization behaviour and can readily result in biased conclusions regarding generalization tendencies. Therefore, we combined a computational model with multi-source data to mechanistically investigate human generalization behaviour. By simultaneously modelling learning, perceptual and generalization data at the individual level, we revealed meaningful variations in how different mechanisms contribute to generalization behaviour. The current research suggests the need for revising the theoretical and analytic foundations in the field to shift the attention away from forecasting group-level generalization behaviour and toward understanding how such phenomena emerge at the individual level. This raises the question for future research of whether a mechanism-specific differential diagnosis may be beneficial for generalization-related psychiatric disorders.
ABSTRACT
The way in which emotional experiences change over time can be studied through the use of computational models. An important question with regard to such models is which characteristics of the data a model should account for in order to adequately describe these data. Recently, attention has been drawn to the potential importance of nonlinearity as a characteristic of affect dynamics. However, this conclusion was reached through the use of experience sampling data in which no information was available about the context in which affect was measured, and affective stimuli may induce some or all of the observed nonlinearity. This raises the question of whether computational models of affect dynamics should account for nonlinearity, or whether they just need to account for the affective stimuli a person encounters. To investigate this question, we used a probabilistic reward task in which participants either won or lost money on each trial. A number of plausible ways in which the experimental stimuli played a role were considered and applied to the nonlinear Affective Ising Model (AIM) and the linear Bounded Ornstein-Uhlenbeck (BOU) model. In order to reach a conclusion, the relative and absolute performance of these models were assessed. Results suggest that some of the observed nonlinearity could indeed be attributed to the experimental stimuli. However, not all nonlinearity was accounted for by these stimuli, suggesting that nonlinearity may be an inherent feature of affect dynamics. As such, nonlinearity should ideally be accounted for in computational models of affect dynamics. Supplementary Information: The online version contains supplementary material available at 10.1007/s42761-022-00118-5.
ABSTRACT
The Action-sentence Compatibility Effect (ACE) is a well-known demonstration of the role of motor activity in the comprehension of language. Participants are asked to make sensibility judgments on sentences by producing movements toward the body or away from the body. The ACE is the finding that movements are faster when the direction of the movement (e.g., toward) matches the direction of the action in the to-be-judged sentence (e.g., Art gave you the pen describes action toward you). We report on a pre-registered, multi-lab replication of one version of the ACE. The results show that none of the 18 labs involved in the study observed a reliable ACE, and that the meta-analytic estimate of the size of the ACE was essentially zero.
Subjects
Comprehension , Language , Humans , Movement , Reaction Time
ABSTRACT
Preregistration is a method to increase research transparency by documenting research decisions on a public, third-party repository prior to any influence by data. It is becoming increasingly popular in all subfields of psychology and beyond. Adherence to the preregistration plan may not always be feasible and is not even necessarily desirable, but without disclosure of deviations, readers who do not carefully consult the preregistration plan might get the incorrect impression that the study was conducted and reported exactly as planned. In this paper, we investigated adherence and disclosure of deviations for all articles published with the Preregistered badge in Psychological Science between February 2015 and November 2017 and shared our findings with the corresponding authors for feedback. Two out of 27 preregistered studies contained no deviations from the preregistration plan. In one study, all deviations were disclosed. Nine studies disclosed none of the deviations. We mainly observed (un)disclosed deviations from the plan regarding the reported sample size, exclusion criteria, and statistical analyses. This closer look at the first generation of preregistrations reveals possible hurdles for reporting preregistered studies and provides input for future reporting guidelines. We discuss the results and possible explanations, and provide recommendations for preregistered research.
ABSTRACT
Similarity-based categorization, an important cognitive skill, can be performed by abstracting a category's central tendency, the so-called prototype, or by memorizing individual exemplars of the category. The flexible selection of an appropriate strategy is crucial for effective cognitive functioning. The detail-focused cognitive style of individuals with autism spectrum disorder (ASD) has been hypothesized to specifically impair prototype-based categorization while leaving exemplar-based categorization unimpaired. We first give an overview of approaches to investigating prototype-based abstraction in the prototype-distortion task, with an emphasis on model-based approaches suitable for discerning the two strategies at the individual level. The second part summarizes the literature on prototype-based categorization in ASD using that task. Despite considerable inconsistencies, most studies appear to confirm that autistic individuals have more difficulty performing prototype-distortion tasks than non-autistic individuals. We highlight how inconsistencies in the literature can be resolved by taking differences in task designs into account. The current review illustrates the need for sensitive computational approaches, suitable for detecting hidden individual differences and potential compensatory strategies.
Subjects
Autism Spectrum Disorder , Autistic Disorder , Cognition , Concept Formation , Humans , Individuality
ABSTRACT
Subjective well-being changes over time. While the causes of these changes have been investigated extensively, few attempts have been made to capture these changes through computational modelling. One notable exception is the study by Rutledge et al. [Rutledge, R. B., Skandali, N., Dayan, P., & Dolan, R. J. (2014). A computational and neural model of momentary subjective well-being. Proceedings of the National Academy of Sciences, 111(33), 12252-12257. https://doi.org/10.1073/pnas.1407535111], in which a model that captures momentary changes in subjective well-being was proposed. The model incorporates how an individual processes rewards and punishments in a decision context. Using this model, the authors were able to successfully explain fluctuations in subjective well-being observed in a gambling paradigm. Although Rutledge et al. reported an in-paper replication, a successful independent replication would further increase the credibility of their results. In this paper, we report a preregistered close replication of the behavioural experiment and analyses by Rutledge et al. The results of Rutledge et al. were mostly confirmed, providing further evidence for the role of rewards and punishments in subjective well-being fluctuations. Additionally, the association between personality traits and the way people process rewards and punishments was examined. No evidence for such associations was found, leaving this an open question for future research.
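For reference, the momentary well-being model of Rutledge et al. (2014) has, as we understand it, the following form (notation may differ slightly from the original):

```latex
\text{Happiness}(t) = w_0
  + w_1 \sum_{j=1}^{t} \gamma^{\,t-j}\,\mathrm{CR}_j
  + w_2 \sum_{j=1}^{t} \gamma^{\,t-j}\,\mathrm{EV}_j
  + w_3 \sum_{j=1}^{t} \gamma^{\,t-j}\,\mathrm{RPE}_j
```

Here CR_j denotes certain rewards (chosen sure amounts), EV_j the expected value of chosen gambles, RPE_j the reward prediction errors on gamble trials, and 0 ≤ γ ≤ 1 is a forgetting factor that weights recent trials more heavily than earlier ones.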
Subjects
Reward , Humans , United States
ABSTRACT
We investigated the reproducibility of the major statistical conclusions drawn in 46 articles published in 2012 in three APA journals. After having identified 232 key statistical claims, we tried to reproduce, for each claim, the test statistic, its degrees of freedom, and the corresponding p value, starting from the raw data that were provided by the authors and closely following the Method section in the article. Out of the 232 claims, we were able to successfully reproduce 163 (70%), 18 of which only by deviating from the article's analytical description. Thirteen (7%) of the 185 claims deemed significant by the authors are no longer so. The reproduction successes were often the result of cumbersome and time-consuming trial-and-error work, suggesting that APA style reporting in conjunction with raw data makes numerical verification at least hard, if not impossible. This article discusses the types of mistakes we could identify and the tediousness of our reproduction efforts in the light of a newly developed taxonomy for reproducibility. We then link our findings with other findings of empirical research on this topic, give practical recommendations on how to achieve reproducibility, and discuss the challenges of large-scale reproducibility checks as well as promising ideas that could considerably increase the reproducibility of psychological research. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
Subjects
Research Design , Humans , Reproducibility of Results
ABSTRACT
In their seminal article, Roberts and Pashler (2000) highlighted that providing a good fit to empirical data does not necessarily provide strong support for a theory. For a good fit to be persuasive and for a theory to be strongly supported, the theory should have survived a strong test, in the sense that it is plausible that the theory might have failed the test. The most common way to accommodate the problem of the limited value of a good fit alone is to report not only a measure of goodness-of-fit, but also a measure of complexity. A recent example of this line of reasoning is provided by Veksler, Myers, and Gluck (2015). In this article, I argue that whereas considering complexity provides useful information when testing theories, using complexity to gauge the severity of a test, or, equivalently, the persuasiveness of a good fit, is misguided. The reason is that complexity only provides information about the possibility of a bad fit, which does not guarantee a strong test. A condition for a test to be strong and a good fit to be persuasive is the demonstration of the plausibility of a bad fit. I provide a worked example of a more complete answer to assessing whether a good fit is persuasive. Providing a strong theory test requires the use of what can be called a data prior, which quantifies, before taking the empirical data into account, which outcomes are plausible. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Subjects
Models, Psychological , Models, Statistical , Psychological Theory , Research Design , Humans , Research Design/standards
ABSTRACT
Which is more enjoyable: trying to think enjoyable thoughts or doing everyday solitary activities? Wilson et al. (2014) found that American participants much preferred solitary everyday activities, such as reading or watching TV, to thinking for pleasure. To see whether this preference generalized outside of the United States, we replicated the study with 2,557 participants from 12 sites in 11 countries. The results were consistent in every country: Participants randomly assigned to do something reported significantly greater enjoyment than did participants randomly assigned to think for pleasure. Although we found systematic differences by country in how much participants enjoyed thinking for pleasure, we used a series of nested structural equation models to show that these differences were fully accounted for by country-level variation in 5 individual differences, 4 of which were positively correlated with thinking for pleasure (need for cognition, openness to experience, meditation experience, and initial positive affect) and 1 of which was negatively correlated (reported phone usage). (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Subjects
Cognition , Cross-Cultural Comparison , Pleasure , Emotions , Humans , Meditation
ABSTRACT
We present a fully preregistered, high-powered conceptual replication of Experiment 1 by Smith, Tracy, and Murray (1993). They observed a cognitive deficit in people with elevated depressive symptoms in a task requiring flexible analytic processing and deliberate hypothesis testing, but no deficit in a task assumed to require more automatic, holistic processing. Specifically, they found that individuals with depressive symptoms showed impaired performance on a criterial-attribute classification task, requiring flexible analysis of the attributes and deliberate hypothesis testing, but not on a family-resemblance classification task, assumed to rely on holistic processing. While deficits in tasks requiring flexible hypothesis testing are commonly observed in people diagnosed with a major depressive disorder, these deficits are much less commonly observed in people with merely elevated depressive symptoms, and therefore Smith et al.'s (1993) finding deserves further scrutiny. We observed no deficit in performance on the criterial-attribute task in people with above average depressive symptoms. Rather, we found a similar difference in performance on the criterial-attribute versus family-resemblance task between people with high and low depressive symptoms. The absence of a deficit in people with elevated depressive symptoms is consistent with previous findings focusing on different tasks.
ABSTRACT
We present a case study of hierarchical Bayesian explanatory cognitive psychometrics, examining information processing characteristics of individuals with high-functioning autism spectrum disorder (HFASD). On the basis of previously published data, we compare the classification behavior of a group of children with HFASD with that of typically developing (TD) controls using a computational model of categorization. The parameters in the model reflect characteristics of information processing that are theoretically related to HFASD. Because we expect individual differences in the model's parameters, as well as differences between HFASD and TD children, we use a hierarchical explanatory approach. A first analysis suggests that children with HFASD are less sensitive to the prototype. A second analysis, involving a mixture component, reveals that the computational model is not appropriate for a subgroup of participants, which implies parameter estimates are not informative for these children. Focusing only on the children for whom the prototype model is appropriate, no clear difference in sensitivity between HFASD and TD children is inferred.
Subjects
Autism Spectrum Disorder/psychology , Bayes Theorem , Cognition , Psychometrics , Case-Control Studies , Child , Female , Humans , Male , Self Concept
ABSTRACT
The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.
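To make the idea concrete for the memory retention example: with an exponential retention curve and an informative prior on the decay rate, the posterior can be approximated on a grid. A minimal, stdlib-only sketch (the retention model, the Gamma-shaped prior, and the data are illustrative assumptions, not the models discussed above):

```python
import math

def log_binom_lik(alpha, lags, successes, n_trials):
    """Log-likelihood of recall counts under theta(t) = exp(-alpha * t)."""
    ll = 0.0
    for t, k in zip(lags, successes):
        theta = math.exp(-alpha * t)
        ll += k * math.log(theta) + (n_trials - k) * math.log(1 - theta)
    return ll

def grid_posterior(lags, successes, n_trials, shape=2.0, rate=4.0, m=300):
    """Grid approximation of the posterior over the decay rate alpha,
    using an informative Gamma(shape, rate) prior (mean = shape / rate)."""
    grid = [(i + 1) * 3.0 / m for i in range(m)]
    log_prior = [(shape - 1) * math.log(a) - rate * a for a in grid]
    log_post = [lp + log_binom_lik(a, lags, successes, n_trials)
                for a, lp in zip(grid, log_prior)]
    mx = max(log_post)                      # subtract the max for stability
    w = [math.exp(v - mx) for v in log_post]
    z = sum(w)
    return grid, [v / z for v in w]

# Invented recall data: 20 trials at each lag, recall decaying with the lag.
grid, post = grid_posterior([1, 2, 4, 8], [15, 10, 6, 2], 20)
post_mean = sum(a * p for a, p in zip(grid, post))
print(round(post_mean, 2))  # posterior mean decay rate, close to 0.3
```

Swapping the Gamma prior for a flat one on the same grid shows directly how much the informative prior shifts and sharpens the posterior, which is the kind of sensitivity check the discussion above recommends.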