Results 1 - 20 of 49
1.
R Soc Open Sci ; 11(7): 240125, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39050728

ABSTRACT

Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g. effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.

2.
Psychol Bull ; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38934916

ABSTRACT

Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete-pooling, no-pooling, or partial-pooling) and the statistical framework (frequentist or Bayesian). These decisions span a multiverse of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the magnitude of divergence between estimation methods for the parameters of nine popular MPT models in psychology (e.g., process-dissociation, source monitoring). We further examined moderators as potential sources of divergence. We found that the absolute divergence between estimation methods was small on average (<.04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by a few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
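A minimal sketch of the kind of model at issue: a complete-pooling maximum-likelihood fit of a restricted two-high-threshold (2HT) MPT model to aggregated recognition counts. The counts and starting values are invented for illustration and are not data from the meta-analysis.

```python
# Illustrative complete-pooling ML fit of a restricted 2HT MPT model:
# p(hit) = d + (1 - d) * g,  p(false alarm) = (1 - d) * g.
# Counts are hypothetical, not from the paper.
import numpy as np
from scipy.optimize import minimize

old = np.array([780, 220])   # hits, misses
new = np.array([150, 850])   # false alarms, correct rejections

def nll(params):
    d, g = params
    p_hit = d + (1 - d) * g
    p_fa = (1 - d) * g
    eps = 1e-10
    return -(old[0] * np.log(p_hit + eps) + old[1] * np.log(1 - p_hit + eps)
             + new[0] * np.log(p_fa + eps) + new[1] * np.log(1 - p_fa + eps))

res = minimize(nll, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print(dict(zip(["d", "g"], np.round(res.x, 3))))   # analytic check: d = .63, g = .41
```

No-pooling would repeat this fit per participant, and partial-pooling would place the per-participant parameters under a group-level distribution; the abstract's multiverse crosses those choices with frequentist versus Bayesian estimation.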

3.
Cogn Psychol ; 149: 101628, 2024 03.
Article in English | MEDLINE | ID: mdl-38199181

ABSTRACT

Response inhibition is a key attribute of human executive control. Standard stop-signal tasks require countermanding a single response; the speed at which that response can be inhibited indexes the efficacy of the inhibitory control networks. However, more complex stopping tasks, where one or more components of a multi-component action are cancelled (i.e., response-selective stopping), cannot be explained by the independent-race model appropriate for the simple task (Logan and Cowan, 1984). Healthy human participants (n = 28; 10 male; 19-40 years) completed a response-selective stopping task where a 'go' stimulus required simultaneous (bimanual) button presses in response to left- and right-pointing green arrows. On a subset of trials (30%) one, or both, arrows turned red (constituting the stop signal), requiring that only the button-press(es) associated with red arrows be cancelled. Electromyographic recordings from both index fingers (first dorsal interosseous) permitted the assessment of both voluntary motor responses that resulted in overt button presses, and activity that was cancelled prior to an overt response (i.e., partial, or covert, responses). We propose a simultaneously inhibit and start (SIS) model that extends the independent race model and provides a highly accurate account of response-selective stopping data. Together with fine-grained EMG analysis, our model-based analysis offers converging evidence that the selective-stop signal simultaneously triggers a process that stops the bimanual response and triggers a new unimanual response corresponding to the green arrow. Our results require a reconceptualisation of response-selective stopping and offer a tractable framework for assessing such tasks in healthy and patient populations.

Significance Statement: Response inhibition is a key attribute of human executive control, frequently investigated using the stop-signal task. After initiating a motor response to a go signal, a stop signal occasionally appears at a delay, requiring cancellation of the response. This has been conceptualised as a 'race' between the go and stop processes, with the successful (or failed) cancellation determined by which process wins the race. Here we provide a novel computational model for a complex variation of the stop-signal task, where only one component of a multicomponent action needs to be cancelled. We provide compelling muscle activation data that support our model, providing a robust and plausible framework for studying these complex inhibition tasks in both healthy and pathological cohorts.
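To make the SIS idea concrete, the following sketch simulates selective-stop trials under the assumption that the stop signal both races to cancel the bimanual response and launches a fresh unimanual response. All finishing-time distributions and parameter values are invented for illustration.

```python
# Monte Carlo sketch of the simultaneously-inhibit-and-start (SIS) account of
# response-selective stopping. Distributions and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, ssd = 100_000, 0.20                     # trials; stop-signal delay (s)
go_bi = rng.normal(0.45, 0.08, n)          # bimanual go finishing times
stop = ssd + rng.normal(0.22, 0.04, n)     # stop-process finishing times
go_uni = ssd + rng.normal(0.30, 0.06, n)   # fresh unimanual response, launched at the stop signal

stopped = stop < go_bi                     # stop wins: bimanual response cancelled
print(f"P(successful selective stop) = {stopped.mean():.2f}")
print(f"continuing-hand RT on successful stops: {go_uni[stopped].mean()*1000:.0f} ms")
print(f"mean RT on go trials:                   {go_bi.mean()*1000:.0f} ms")
```

Because the unimanual response is re-launched at stop-signal onset, the continuing hand responds later than on go trials, reproducing the stopping-delay (interference) effect without any extra mechanism.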


Subject(s)
Executive Function , Psychomotor Performance , Humans , Male , Reaction Time/physiology , Psychomotor Performance/physiology , Executive Function/physiology , Inhibition, Psychological
4.
Behav Res Methods ; 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38200240

ABSTRACT

Dynamic cognitive psychometrics measures mental capacities based on the way behavior unfolds over time. It does so using models of psychological processes whose validity is grounded in research from experimental psychology and the neurosciences. However, these models can sometimes have undesirable measurement properties. We propose a "hybrid" modeling approach that achieves good measurement by blending process-based and descriptive components. We demonstrate the utility of this approach in the stop-signal paradigm, in which participants make a series of speeded choices, but occasionally are required to withhold their response when a "stop signal" occurs. The stop-signal paradigm is widely used to measure response inhibition based on a modeling framework that assumes a race between processes triggered by the choice and the stop stimuli. However, the key index of inhibition, the latency of the stop process (i.e., stop-signal reaction time), is not directly observable, and is poorly estimated when the choice and the stop runners are both modeled by psychologically realistic evidence-accumulation processes. We show that using a descriptive account of the stop process, while retaining a realistic account of the choice process, simultaneously enables good measurement of both stop-signal reaction time and the psychological factors that determine choice behavior. We show that this approach, when combined with hierarchical Bayesian estimation, is effective even in a complex choice task, while requiring participants to perform only a relatively modest number of test trials.
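A rough sketch of the hybrid logic: the go runner is a simple single-boundary evidence accumulator (process-based), while the stop runner's finishing time is described directly by an ex-Gaussian distribution (descriptive). All parameter values are illustrative, not estimates from the paper.

```python
# Hybrid race sketch: process-based go runner, descriptive ex-Gaussian stop runner.
# Parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
b, v, sv, t0 = 1.0, 3.0, 0.8, 0.15                 # go: threshold, mean rate, rate SD, non-decision time
rates = np.maximum(rng.normal(v, sv, n), 1e-3)     # truncate tiny/negative rates for simplicity
go_ft = t0 + b / rates                             # go finishing times
ssrt = rng.normal(0.20, 0.03, n) + rng.exponential(0.05, n)  # ex-Gaussian stop latency (mu, sigma, tau)

for ssd in (0.10, 0.20, 0.30):
    p_respond = np.mean(go_ft < ssd + ssrt)
    print(f"SSD = {ssd:.2f} s -> P(respond | stop signal) = {p_respond:.2f}")
```

The inhibition function rises with stop-signal delay, as in the standard paradigm; the gain of the hybrid approach is that the ex-Gaussian stop parameters remain well estimated while the go parameters keep their process interpretation.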

5.
Sci Rep ; 13(1): 19564, 2023 11 10.
Article in English | MEDLINE | ID: mdl-37949974

ABSTRACT

The ability to stop simple ongoing actions has been extensively studied using the stop signal task, but less is known about inhibition in more complex scenarios. Here we used a task requiring bimanual responses to go stimuli, but selective inhibition of only one of those responses following a stop signal. We assessed how proactive cues affect the nature of both the responding and stopping processes, and the well-documented stopping delay (interference effect) in the continuing action following successful stopping. In this task, estimates of the speed of inhibition based on a simple-stopping model are inappropriate, and have produced inconsistent findings about the effects of proactive control on motor inhibition. We instead used a multi-modal approach, based on improved methods of detecting and interpreting partial electromyographical responses and the recently proposed SIS (simultaneously inhibit and start) model of selective stopping behaviour. Our results provide clear and converging evidence that proactive cues reduce the stopping delay effect by slowing bimanual responses and speeding unimanual responses, with a negligible effect on the speed of the stopping process.
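One way such partial EMG responses are commonly detected is by thresholding a smoothed, rectified signal against baseline variability. The sketch below follows that generic recipe with invented thresholds, window lengths, and simulated data; it is not the authors' pipeline.

```python
# Generic sketch of partial-burst detection in EMG: rectify, smooth, and flag
# runs exceeding a baseline-derived threshold. All settings are invented.
import numpy as np

def detect_bursts(emg, fs, k=3.0, min_dur=0.010):
    win = max(int(fs * 0.005), 1)                       # 5 ms moving average
    env = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    base = env[: int(fs * 0.1)]                         # assume first 100 ms is quiet
    above = np.r_[False, env > base.mean() + k * base.std(), False]
    edges = np.flatnonzero(np.diff(above.astype(int)))
    bursts = edges.reshape(-1, 2) / fs                  # (onset, offset) in seconds
    return bursts[(bursts[:, 1] - bursts[:, 0]) >= min_dur]

rng = np.random.default_rng(0)
fs = 2000
emg = rng.normal(0, 1, fs)                              # 1 s of baseline noise
t = np.arange(600, 680)
emg[t] += 8 * np.sin(2 * np.pi * 80 * t / fs)           # brief 40 ms partial burst
print(detect_bursts(emg, fs))                           # ~[[0.30, 0.34]]
```

A burst detected this way that ends before any overt button press is the kind of partial (covert) response the analysis exploits.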


Subject(s)
Cues , Inhibition, Psychological , Reaction Time/physiology , Electromyography , Choice Behavior , Psychomotor Performance/physiology
6.
Sci Rep ; 13(1): 11565, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37463991

ABSTRACT

Stopping an already initiated action is crucial for human everyday behavior and empirical evidence points toward the prefrontal cortex playing a key role in response inhibition. Two regions that have been consistently implicated in response inhibition are the right inferior frontal gyrus (IFG) and the more superior region of the dorsolateral prefrontal cortex (DLPFC). The present study investigated the effect of offline 1 Hz transcranial magnetic stimulation (TMS) over the right IFG and DLPFC on performance in a gamified stop-signal task (the stop-signal game; SSG). We hypothesized that perturbing each area would decrease performance in the SSG, albeit with a quantitative difference in the performance decrease after stimulation. After offline TMS, functional short-term reorganization is possible, and the domain-general area (i.e., the right DLPFC) might be able to compensate for the perturbation of the domain-specific area (i.e., the right IFG). Results showed that 1 Hz offline TMS over the right DLPFC and the right IFG at 110% intensity of the resting motor threshold had no effect on performance in the SSG. In fact, evidence in favor of the null hypothesis was found. One intriguing interpretation of this result is that within-network compensation was triggered, canceling out the potential TMS effects as has been suggested in recent theorizing on TMS effects, although the presented results do not unambiguously identify such compensatory mechanisms. Future studies may result in further support for this hypothesis, which is especially important when studying reactive response inhibition in complex environments.
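Evidence in favor of the null is typically quantified with a Bayes factor. The sketch below computes a default (JZS-style) one-sample Bayes factor by numerical integration; the t value and sample size are invented, and this is not the analysis code used in the study.

```python
# Default (JZS-style) Bayes factor for a one-sample/paired t-test, computed by
# integrating the non-central t likelihood over a Cauchy prior on effect size.
# Inputs are invented for illustration.
import numpy as np
from scipy import stats, integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    nu = n - 1
    def integrand(delta):
        return stats.nct.pdf(t, nu, delta * np.sqrt(n)) * stats.cauchy.pdf(delta, 0, r)
    m1, _ = integrate.quad(integrand, -np.inf, np.inf)   # marginal likelihood under H1
    m0 = stats.t.pdf(t, nu)                              # likelihood under H0 (delta = 0)
    return m1 / m0

bf10 = jzs_bf10(t=0.4, n=30)
print(f"BF10 = {bf10:.3f}, BF01 = {1 / bf10:.2f}")       # BF01 > 1 favours the null
```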


Subject(s)
Prefrontal Cortex , Transcranial Magnetic Stimulation , Humans , Transcranial Magnetic Stimulation/methods , Prefrontal Cortex/physiology , Dorsolateral Prefrontal Cortex , Rest
7.
Psychol Methods ; 2023 May 11.
Article in English | MEDLINE | ID: mdl-37166854

ABSTRACT

Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the nonlinearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work, we show how these challenges can be addressed through a combination of Bayesian hierarchical modeling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
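The model-averaging step can be illustrated in a few lines: posterior model probabilities are formed from (log) marginal likelihoods and prior model probabilities, and parameter estimates are averaged under those weights. All numbers below are invented.

```python
# Minimal sketch of Bayesian model averaging over three candidate models.
# Marginal likelihoods and per-model estimates are invented.
import numpy as np

log_ml = np.array([-1023.4, -1025.1, -1021.8])   # log marginal likelihoods
prior = np.full(3, 1 / 3)                        # equal prior model probabilities

w = np.exp(log_ml - log_ml.max()) * prior        # subtract max for numerical stability
post_prob = w / w.sum()                          # posterior model probabilities

theta_hat = np.array([0.42, 0.55, 0.47])         # per-model posterior means of a shared parameter
print("P(M | data):", post_prob.round(3))
print("model-averaged estimate:", float(post_prob @ theta_hat))
```

Averaging in this way propagates model uncertainty into the parameter estimate instead of conditioning on a single, possibly wrong, model.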

8.
Nat Commun ; 14(1): 2234, 2023 04 19.
Article in English | MEDLINE | ID: mdl-37076456

ABSTRACT

Standard, well-established cognitive tasks that produce reliable effects in group comparisons also lead to unreliable measurement when assessing individual differences. This reliability paradox has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. We aim to address this paradox by implementing carefully calibrated versions of the standard tests with an additional manipulation to encourage processing of conflicting information, as well as combinations of standard tasks. Over five experiments, we show that a Flanker task and a combined Simon and Stroop task with the additional manipulation produced reliable estimates of individual differences in under 100 trials per task, which improves on the reliability seen in benchmark Flanker, Simon, and Stroop data. We make these tasks freely available and discuss both theoretical and applied implications regarding how the cognitive testing of individual differences is carried out.
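The reliability figures such studies report are often permutation-based split-half correlations with a Spearman-Brown correction. A sketch on simulated trial-level conflict effects (all parameters invented):

```python
# Permutation-based split-half reliability with Spearman-Brown correction.
# Simulated per-trial conflict effects; all parameters are invented.
import numpy as np

rng = np.random.default_rng(3)
n_sub, n_trial = 60, 100
true_effect = rng.normal(30, 15, n_sub)                               # per-person effect (ms)
trials = true_effect[:, None] + rng.normal(0, 120, (n_sub, n_trial))  # noisy trial-level effects

r_sb = []
for _ in range(200):
    idx = rng.permutation(n_trial)
    a = trials[:, idx[: n_trial // 2]].mean(1)
    b = trials[:, idx[n_trial // 2:]].mean(1)
    r = np.corrcoef(a, b)[0, 1]
    r_sb.append(2 * r / (1 + r))                                      # Spearman-Brown step-up
print(f"split-half reliability (corrected): {np.mean(r_sb):.2f}")
```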


Subject(s)
Attention , Calibration , Reproducibility of Results , Neuropsychological Tests , Stroop Test , Reaction Time
9.
Dev Cogn Neurosci ; 59: 101191, 2023 02.
Article in English | MEDLINE | ID: mdl-36603413

ABSTRACT

The Adolescent Brain Cognitive Development (ABCD) Study is a longitudinal neuroimaging study of unprecedented scale that is in the process of following over 11,000 youth from middle childhood through age 20. However, a design feature of the study's stop-signal task violates "context independence", an assumption critical to current non-parametric methods for estimating stop-signal reaction time (SSRT), a key measure of inhibitory ability in the study. This has led some experts to call for the task to be changed and for previously collected data to be used with caution. We present a cognitive process modeling framework, the RDEX-ABCD model, that provides a parsimonious explanation for the impact of this design feature on "go" stimulus processing and successfully accounts for key behavioral trends in the ABCD data. Simulation studies using this model suggest that failing to account for the context independence violations in the ABCD design can lead to erroneous inferences in several realistic scenarios. However, we demonstrate that RDEX-ABCD effectively addresses these violations and can be used to accurately measure SSRT along with an array of additional mechanistic parameters of interest (e.g., attention to the stop signal, cognitive efficiency), advancing investigators' ability to draw valid and nuanced inferences from ABCD data.

AVAILABILITY OF DATA AND MATERIALS: Data from the ABCD Study are available through the NIH Data Archive (NDA): nda.nih.gov/abcd. Code for all analyses featured in this study is openly available on the Open Science Framework (OSF): osf.io/2h8a7/.
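For context, the non-parametric estimate at stake is typically the "integration" method, which presumes context independence between go and stop processing; a sketch with invented numbers:

```python
# Standard non-parametric "integration" estimate of SSRT. It assumes context
# independence -- the assumption the ABCD task design violates. Data invented.
import numpy as np

def ssrt_integration(go_rt, p_respond, mean_ssd):
    """p-th quantile of the go-RT distribution minus mean SSD
    (go-omission handling omitted for brevity)."""
    go_rt = np.sort(go_rt)
    nth = go_rt[int(np.ceil(p_respond * len(go_rt))) - 1]
    return nth - mean_ssd

rng = np.random.default_rng(5)
go_rt = rng.normal(0.50, 0.10, 500)        # illustrative go RTs (s)
print(f"SSRT = {ssrt_integration(go_rt, p_respond=0.45, mean_ssd=0.25) * 1000:.0f} ms")
```

When context independence fails, the go-RT distribution on stop trials is no longer the one sampled here, and this estimate is biased; RDEX-ABCD instead models the design feature explicitly.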


Subject(s)
Executive Function , Inhibition, Psychological , Child , Adolescent , Humans , Young Adult , Adult , Reaction Time , Neuroimaging , Cognition
10.
Elife ; 11, 2022 12 30.
Article in English | MEDLINE | ID: mdl-36583378

ABSTRACT

Inhibitory control is one of the most important control functions in the human brain. Much of our understanding of its neural basis comes from seminal work showing that lesions to the right inferior frontal gyrus (rIFG) increase stop-signal reaction time (SSRT), a latent variable that expresses the speed of inhibitory control. However, recent work has identified substantial limitations of the SSRT method. Notably, SSRT is confounded by trigger failures: stop-signal trials in which inhibitory control was never initiated. Such trials inflate SSRT, but are typically indicative of attentional, rather than inhibitory deficits. Here, we used hierarchical Bayesian modeling to identify stop-signal trigger failures in human rIFG lesion patients, non-rIFG lesion patients, and healthy comparisons. Furthermore, we measured scalp-EEG to detect β-bursts, a neurophysiological index of inhibitory control. rIFG lesion patients showed a more than fivefold increase in trigger failure trials and did not exhibit the typical increase of stop-related frontal β-bursts. However, on trials in which such β-bursts did occur, rIFG patients showed the typical subsequent upregulation of β over sensorimotor areas, indicating that their ability to implement inhibitory control, once triggered, remains intact. These findings suggest that the role of rIFG in inhibitory control has to be fundamentally reinterpreted.
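The confound can be illustrated with a toy mixture: if the stop process fails to launch with probability p_tf, stop trials with responses become more frequent than the race alone predicts, which conventional SSRT methods misread as slow stopping. All values below are invented.

```python
# Toy trigger-failure mixture on top of the independent race model.
# Parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(11)
n, ssd, p_tf = 200_000, 0.20, 0.3
go = rng.normal(0.50, 0.10, n)             # go finishing times (s)
stop = ssd + rng.normal(0.22, 0.04, n)     # stop finishing times when triggered
triggered = rng.random(n) > p_tf           # stop process launches on 70% of stop trials

respond = np.where(triggered, go < stop, True)   # untriggered trials always respond
print(f"P(respond | stop signal) = {respond.mean():.2f} "
      f"(race alone would give {np.mean(go < stop):.2f})")
```

A hierarchical Bayesian mixture model estimates p_tf jointly with the race parameters, preventing the inflated response rate from being attributed to a slow stop process.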


Subject(s)
Frontal Lobe , Sensorimotor Cortex , Humans , Frontal Lobe/physiology , Bayes Theorem , Magnetic Resonance Imaging , Reaction Time/physiology , Prefrontal Cortex
11.
Behav Res Methods ; 54(3): 1530-1540, 2022 06.
Article in English | MEDLINE | ID: mdl-34751923

ABSTRACT

The stop-signal paradigm has become ubiquitous in investigations of inhibitory control. Tasks inspired by the paradigm, referred to as stop-signal tasks, require participants to make responses on go trials and to inhibit those responses when presented with a stop-signal on stop trials. Currently, the most popular version of the stop-signal task is the 'choice-reaction' variant, where participants make choice responses, but must inhibit those responses when presented with a stop-signal. An alternative to the choice-reaction variant of the stop-signal task is the 'anticipated response inhibition' task. In anticipated response inhibition tasks, participants are required to make a planned response that coincides with a predictably timed event (such as lifting a finger from a computer key to stop a filling bar at a predefined target). Anticipated response inhibition tasks have some advantages over the more traditional choice-reaction stop-signal tasks and are becoming increasingly popular. However, currently, there are no openly available versions of the anticipated response inhibition task, limiting potential uptake. Here, we present an open-source, free, and ready-to-use version of the anticipated response inhibition task, which we refer to as OSARI, the Open-Source Anticipated Response Inhibition task.
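Tasks of this family typically adapt the stop-signal time with a simple 1-up/1-down staircase so that stopping succeeds on roughly half of stop trials. A sketch of that logic (step size and limits are arbitrary choices, not necessarily OSARI's defaults):

```python
# 1-up/1-down staircase for the stop-signal time: step later (harder) after a
# successful stop, earlier (easier) after a failed one. Settings are invented.
def update_ssd(ssd, stopped, step=0.05, lo=0.05, hi=0.75):
    ssd = ssd + step if stopped else ssd - step
    return min(max(ssd, lo), hi)

ssd = 0.30
for stopped in [True, True, False, True, False, False]:
    ssd = update_ssd(ssd, stopped)
    print(f"next stop-signal time: {ssd:.2f} s")
```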


Subject(s)
Inhibition, Psychological , Psychomotor Performance , Humans , Psychomotor Performance/physiology , Reaction Time/physiology
12.
Mem Cognit ; 50(5): 962-978, 2022 07.
Article in English | MEDLINE | ID: mdl-34950999

ABSTRACT

The effects of distraction on responses manifest in three ways: prolonged reaction times, increased error rates, and increased response omission rates. However, the latter effect is often ignored or assumed to be due to a separate cognitive process. We investigated omissions occurring in two paradigms that manipulated distraction. One required simple stimulus detection of younger participants; the second required choice responses and was completed by both younger and older participants. We fit data from these paradigms with a model that identifies three causes of omissions. Two are related to the process of accumulating the evidence on which a response is based: intrinsic omissions (due to between-trial variation in accumulation rates making it impossible to ever reach the evidence threshold) and design omissions (due to response windows that cause slow responses not to be recorded). A third, contaminant omissions, allows for a cause unrelated to the response process. In both data sets systematic differences in omission rates across conditions were accounted for by task-related omissions. Intrinsic omissions played a lesser role than design omissions, even though the presence of design omissions was not evident in descriptive analyses of the data. The model provided an accurate account of all aspects of the detection data and the choice-response data, but slightly underestimated overall omissions in the choice paradigm, particularly in older participants, suggesting that further investigation of contaminant omission effects is needed.
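The two accumulation-related causes can be made concrete with a one-accumulator sketch: rates that are never positive produce intrinsic omissions, while slow finishing times that overrun the response window produce design omissions. Parameter values are invented.

```python
# Sketch of intrinsic vs design omissions in a single linear accumulator with
# trial-to-trial rate variability. Parameters are invented for illustration.
import numpy as np
from scipy import stats

v, s, b, t0, deadline = 2.0, 1.0, 1.0, 0.2, 1.5   # mean rate, rate SD, threshold, t0, window (s)
p_intrinsic = stats.norm.cdf(0, loc=v, scale=s)   # P(rate <= 0): threshold never reached

rng = np.random.default_rng(2)
rates = rng.normal(v, s, 500_000)
finish = np.where(rates > 0, t0 + b / np.maximum(rates, 1e-12), np.inf)
p_design = np.mean((finish > deadline) & np.isfinite(finish))  # finishes, but too late

print(f"intrinsic omissions: {p_intrinsic:.3f}, design omissions: {p_design:.3f}")
```

Note that design omissions arise here even though every recorded response falls inside the window, matching the abstract's point that they need not be evident in descriptive analyses.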


Subject(s)
Cognition , Aged , Cognition/physiology , Humans , Reaction Time
13.
Elife ; 10, 2021 11 09.
Article in English | MEDLINE | ID: mdl-34751133

ABSTRACT

Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.


Subject(s)
Consensus , Data Analysis , Datasets as Topic , Research
15.
Learn Behav ; 49(3): 265-275, 2021 09.
Article in English | MEDLINE | ID: mdl-34378175

ABSTRACT

Roberts (2020, Learning & Behavior, 48[2], 191-192) discussed research claiming honeybees can do arithmetic. Some readers of this research might regard such claims as unlikely. The present authors used this example as a basis for a debate on the criterion that ought to be used for publication of results or conclusions that could be viewed as unlikely by a significant number of readers, editors, or reviewers.


Subject(s)
Learning , Animals , Bees
16.
Cogn Res Princ Implic ; 6(1): 30, 2021 04 09.
Article in English | MEDLINE | ID: mdl-33835271

ABSTRACT

Human operators often experience large fluctuations in cognitive workload over timescales of seconds that can lead to sub-optimal performance, ranging from overload to neglect. Adaptive automation could potentially address this issue, but to do so it needs to be aware of real-time changes in operators' spare cognitive capacity, so it can provide help in times of peak demand and take advantage of troughs to elicit operator engagement. However, it is unclear whether rapid changes in task demands are reflected in similarly rapid fluctuations in spare capacity, and if so what aspects of responses to those demands are predictive of the current level of spare capacity. We used the ISO standard detection response task (DRT) to measure cognitive workload approximately every 4 s in a demanding task requiring monitoring and refueling of a fleet of simulated unmanned aerial vehicles (UAVs). We showed that the DRT provided a valid measure that can detect differences in workload due to changes in the number of UAVs. We used cross-validation to assess whether measures related to task performance immediately preceding the DRT could predict detection performance as a proxy for cognitive workload. Although the simple occurrence of task events had weak predictive ability, composite measures that tapped operators' situational awareness with respect to fuel levels were much more effective. We conclude that cognitive workload does vary rapidly as a function of recent task events, and that real-time predictive models of operators' cognitive workload provide a potential avenue for automation to adapt without an ongoing need for intrusive workload measurements.
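The prediction step can be sketched as cross-validated classification of DRT outcomes from recent task features. The feature names and simulated data below are hypothetical stand-ins, not the study's composite measures.

```python
# Cross-validated prediction of DRT misses (a workload proxy) from recent task
# features. Features and the data-generating model are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n = 2000
X = np.column_stack([
    rng.poisson(2, n),       # hypothetical: UAV events in the preceding 4 s
    rng.uniform(0, 1, n),    # hypothetical: composite fuel-urgency index
])
logit = -0.5 + 0.3 * X[:, 0] + 2.0 * X[:, 1]       # simulated ground truth
y = rng.random(n) < 1 / (1 + np.exp(-logit))       # DRT miss (True) vs hit (False)

auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f}")
```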


Subject(s)
Task Performance and Analysis , Workload , Advance Directives , Automation , Awareness , Humans
17.
Psychon Bull Rev ; 28(3): 813-826, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33037582

ABSTRACT

Despite the increasing popularity of Bayesian inference in empirical research, few practical guidelines provide detailed recommendations for how to apply Bayesian procedures and interpret the results. Here we offer specific guidelines for four different stages of Bayesian statistical reasoning in a research setting: planning the analysis, executing the analysis, interpreting the results, and reporting the results. The guidelines for each stage are illustrated with a running example. Although the guidelines are geared towards analyses performed with the open-source statistical software JASP, most guidelines extend to Bayesian inference in general.


Subject(s)
Data Interpretation, Statistical , Guidelines as Topic , Models, Statistical , Research Design , Bayes Theorem , Humans
18.
Front Psychol ; 11: 608287, 2020.
Article in English | MEDLINE | ID: mdl-33584443

ABSTRACT

Parametric cognitive models are increasingly popular tools for analyzing data obtained from psychological experiments. One of the main goals of such models is to formalize psychological theories using parameters that represent distinct psychological processes. We argue that systematic quantitative reviews of parameter estimates can make an important contribution to robust and cumulative cognitive modeling. Parameter reviews can benefit model development and model assessment by providing valuable information about the expected parameter space, and can facilitate the more efficient design of experiments. Importantly, parameter reviews provide crucial, if not indispensable, information for the specification of informative prior distributions in Bayesian cognitive modeling. From the Bayesian perspective, prior distributions are an integral part of a model, reflecting cumulative theoretical knowledge about plausible values of the model's parameters (Lee, 2018). In this paper we illustrate how systematic parameter reviews can be implemented to generate informed prior distributions for the Diffusion Decision Model (DDM; Ratcliff and McKoon, 2008), the most widely used model of speeded decision making. We surveyed the published literature on empirical applications of the DDM, extracted the reported parameter estimates, and synthesized this information in the form of prior distributions. Our parameter review establishes a comprehensive reference resource for plausible DDM parameter values in various experimental paradigms that can guide future applications of the model. Based on the challenges we faced during the parameter review, we formulate a set of general and DDM-specific suggestions aiming to increase reproducibility and the information gained from the review process.
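The synthesis step can be sketched as fitting a distribution to reported estimates and using it as an informed prior. The drift-rate values below are invented placeholders, not the review's data.

```python
# Turning a parameter review into an informed prior: fit a distribution to
# reported DDM drift-rate estimates. Values below are invented placeholders.
import numpy as np
from scipy import stats

reported_drift = np.array([1.2, 0.8, 2.1, 1.5, 1.9, 1.1, 2.4, 1.6, 1.3, 1.8])
mu, sigma = stats.norm.fit(reported_drift)
print(f"informed prior for drift rate: Normal({mu:.2f}, {sigma:.2f})")

# e.g., evaluated as a zero-truncated prior density over plausible drift values
grid = np.linspace(0, 4, 5)
print(stats.truncnorm.pdf(grid, -mu / sigma, np.inf, loc=mu, scale=sigma).round(3))
```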

19.
Behav Res Methods ; 52(2): 918-937, 2020 04.
Article in English | MEDLINE | ID: mdl-31755028

ABSTRACT

Over the last decade, the Bayesian estimation of evidence-accumulation models has gained popularity, largely due to the advantages afforded by the Bayesian hierarchical framework. Despite recent advances in the Bayesian estimation of evidence-accumulation models, model comparison continues to rely on suboptimal procedures, such as posterior parameter inference and model selection criteria known to favor overly complex models. In this paper, we advocate model comparison for evidence-accumulation models based on the Bayes factor obtained via Warp-III bridge sampling. We demonstrate, using the linear ballistic accumulator (LBA), that Warp-III sampling provides a powerful and flexible approach that can be applied to both nested and non-nested model comparisons, even in complex and high-dimensional hierarchical instantiations of the LBA. We provide an easy-to-use software implementation of the Warp-III sampler and outline a series of recommendations aimed at facilitating the use of Warp-III sampling in practical applications.
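The core of bridge sampling (without the Warp-III warping step) can be shown on a toy conjugate-normal example where the marginal likelihood, the quantity Bayes factors are built from, is known exactly. This illustrates the general Meng and Wong (1996) estimator, not the paper's software.

```python
# Toy bridge-sampling estimate of a marginal likelihood, checked against the
# analytic answer for a conjugate normal model. Data and settings are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.normal(0.5, 1.0, 10)              # data: y_i ~ N(theta, 1), prior theta ~ N(0, 1)
n = len(y)

def log_p_unnorm(theta):                  # log(likelihood * prior), the unnormalized posterior
    theta = np.atleast_1d(theta)[:, None]
    return stats.norm.logpdf(y, theta, 1).sum(1) + stats.norm.logpdf(theta[:, 0], 0, 1)

post = stats.norm(n * y.mean() / (n + 1), np.sqrt(1 / (n + 1)))   # exact posterior
exact = stats.multivariate_normal.logpdf(y, np.zeros(n), np.eye(n) + np.ones((n, n)))

N = 5000
theta_post = post.rvs(N, random_state=rng)
g = stats.norm(theta_post.mean(), theta_post.std())               # proposal fitted to posterior draws
theta_prop = g.rvs(N, random_state=rng)

l1 = np.exp(log_p_unnorm(theta_post) - g.logpdf(theta_post))
l2 = np.exp(log_p_unnorm(theta_prop) - g.logpdf(theta_prop))

ml = l2.mean()                            # importance-sampling start value
for _ in range(20):                       # Meng & Wong (1996) fixed-point iteration
    ml = np.mean(l2 / (l2 + ml)) / np.mean(1 / (l1 + ml))
print(f"bridge estimate: {np.log(ml):.4f}, exact: {exact:.4f}")
```

Warp-III extends this scheme by matching the first three moments of the posterior to the proposal before bridging, which stabilizes the estimator for skewed, high-dimensional hierarchical posteriors such as those of the LBA.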


Subject(s)
Software , Bayes Theorem , Markov Chains , Monte Carlo Method