1.
Behav Res Methods ; 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38200240

ABSTRACT

Dynamic cognitive psychometrics measures mental capacities based on the way behavior unfolds over time. It does so using models of psychological processes whose validity is grounded in research from experimental psychology and the neurosciences. However, these models can sometimes have undesirable measurement properties. We propose a "hybrid" modeling approach that achieves good measurement by blending process-based and descriptive components. We demonstrate the utility of this approach in the stop-signal paradigm, in which participants make a series of speeded choices, but occasionally are required to withhold their response when a "stop signal" occurs. The stop-signal paradigm is widely used to measure response inhibition based on a modeling framework that assumes a race between processes triggered by the choice and the stop stimuli. However, the key index of inhibition, the latency of the stop process (i.e., stop-signal reaction time), is not directly observable, and is poorly estimated when the choice and the stop runners are both modeled by psychologically realistic evidence-accumulation processes. We show that using a descriptive account of the stop process, while retaining a realistic account of the choice process, simultaneously enables good measurement of both stop-signal reaction time and the psychological factors that determine choice behavior. We show that this approach, when combined with hierarchical Bayesian estimation, is effective even in a complex choice task that requires participants to perform only a relatively modest number of test trials.
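The race framework described above is not spelled out in code in the abstract, but its core logic can be sketched. The following is a minimal simulation of the independent-race account, under simplifying assumptions the paper does not make (normally distributed go RTs, a constant stop-signal reaction time); the parameter values are illustrative, not the authors' estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stop_trials(n, ssd, go_mu=500.0, go_sd=100.0, ssrt=250.0):
    """Simulate stop trials under the independent-race model.

    On each stop trial a 'go' runner (response time in ms, here drawn
    from a normal distribution for simplicity) races a 'stop' runner
    that starts at the stop-signal delay (ssd) and finishes ssrt ms
    later.  A response is emitted only when the go runner wins.
    Returns the proportion of failed inhibitions.
    """
    go_rt = rng.normal(go_mu, go_sd, size=n)
    responded = go_rt < ssd + ssrt
    return responded.mean()

# Longer delays give the stop process less head start,
# so inhibition fails more often.
print(simulate_stop_trials(100_000, ssd=100))
print(simulate_stop_trials(100_000, ssd=300))
```

This toy version also illustrates the measurement problem the paper addresses: SSRT enters only through the unobservable finishing time `ssd + ssrt`, so it must be inferred from inhibition rates and RT distributions rather than read off the data.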

3.
R Soc Open Sci ; 10(2): 191375, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36756055

ABSTRACT

The low reproducibility rate in the social sciences has made researchers hesitant to accept published findings at face value. Despite the advent of initiatives to increase transparency in research reporting, the field still lacks tools for verifying the credibility of research reports. In the present paper, we describe methodologies that let researchers craft highly credible research and allow their peers to verify this credibility. We demonstrate the application of these methods in a multi-laboratory replication of Bem's Experiment 1 (Bem 2011 J. Pers. Soc. Psychol. 100, 407-425. (doi:10.1037/a0021524)) on extrasensory perception (ESP), which was co-designed by a consensus panel including both proponents and opponents of Bem's original hypothesis. In the study we applied direct data deposition in combination with born-open data and real-time research reports to extend transparency to protocol delivery and data collection. We also used piloting, checklists, laboratory logs and video-documented trial sessions to ascertain as-intended protocol delivery, and external research auditors to monitor research integrity. We found 49.89% successful guesses, while Bem reported a 53.07% success rate, with the chance level being 50%. Thus, Bem's findings were not replicated in our study. In the paper, we discuss the implementation, feasibility and perceived usefulness of the credibility-enhancing methodologies used throughout the project.
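The comparison of a 49.89% guess rate against the 50% chance level can be sketched with a standard normal-approximation binomial test. This is not the analysis reported in the paper, and the trial count below is an illustrative round number chosen to reproduce the 49.89% figure, not the study's actual sample size.

```python
from math import erf, sqrt

def two_sided_binomial_z(successes, n, p0=0.5):
    """Normal-approximation z-test of an observed success rate against
    a chance rate p0.  Returns (z, two-sided p-value).  Adequate when
    n*p0 and n*(1-p0) are both large."""
    p_hat = successes / n
    se = sqrt(p0 * (1 - p0) / n)          # standard error under H0
    z = (p_hat - p0) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts matching the reported 49.89% rate.
z, p_value = two_sided_binomial_z(4989, 10_000)
print(f"z = {z:.2f}, p = {p_value:.3f}")  # far from significant
```

At this (assumed) sample size the observed rate sits well within sampling noise around chance, which is the shape of the non-replication result the abstract reports.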
