Results 1 - 3 of 3
1.
Hum Factors ; : 187208231181496, 2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37287261

ABSTRACT

OBJECTIVE: To investigate how the visual complexity of head-up displays (HUDs) influences the allocation of drivers' attention across two separate visual domains (the near and far domains). BACKGROUND: The types and amount of information displayed on automobile HUDs have increased. Given limited human attentional capacity, increased visual complexity in the near domain may interfere with effective processing of information in the far domain. METHOD: Near-domain and far-domain vision were tested separately using a dual-task paradigm. In a simulated road environment, 62 participants controlled the speed of the vehicle (SMT; near domain) while manually responding to probes (PDT; far domain). Five HUD complexity levels, including a HUD-absent condition, were presented block-wise. RESULTS: Near-domain performance was not modulated by HUD complexity. However, far-domain detection accuracy declined as HUD complexity increased, with larger accuracy differences between central and peripheral probes. CONCLUSION: Increased HUD visual complexity biases the deployment of driver attention toward the central visual field. The formulation of HUD designs must therefore be preceded by an in-depth investigation of the dynamics of human cognition. APPLICATION: To ensure driving safety, HUDs should be designed with minimal visual complexity, incorporating only information essential to driving and omitting driving-irrelevant or additional visual details.

2.
J Exp Psychol Learn Mem Cogn ; 49(2): 181-197, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36265043

ABSTRACT

Previous studies on value-driven attentional capture (VDAC) have demonstrated that the uncertainty of reward value modulates attentional allocation via associative learning. However, it is unclear whether such attentional exploration is driven by the amount of potential reward information available for refining value prediction or by the absolute size of the reward prediction error. The present study investigated the effects of reward information (information entropy) and prediction error (variance) on attentional bias while controlling for the strength of the reward association. Participants searched for either a red or a green target circle and responded to the orientation of a line within the target. Each target color was associated with a reward contingency at a different level of uncertainty. In Experiment 1, one color was paired with a single reward value (zero entropy and variance) and the other with multiple reward values (high entropy and variance). In Experiment 2, one color had a high-entropy, low-variance reward contingency and the other the inverse. Attentional interference from high-entropy distractors was consistently greater than from low- or zero-entropy distractors. In Experiment 3, when distractors had identical levels of variance, information entropy still modulated the attentional bias toward distractors. Lastly, Experiment 4 revealed that distractors associated with contrasting levels of variance, with information entropy held identical, failed to modulate VDAC. These results indicate that value-based attention is primarily allocated to cues that provide maximal information about reward outcomes and that information entropy is a key predictor mediating attentional exploration and associative learning. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
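The entropy/variance dissociation the abstract describes can be sketched numerically. The reward values and probabilities below are hypothetical illustrations of the design, not the study's actual contingencies:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a discrete reward distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def variance(values, probs):
    """Variance of reward values under the given probabilities."""
    mean = sum(v * p for v, p in zip(values, probs))
    return sum(p * (v - mean) ** 2 for v, p in zip(values, probs))

# Hypothetical contingencies (reward points, probabilities):
single = ([50], [1.0])                                 # zero entropy, zero variance
high_ent_low_var = ([40, 45, 50, 55, 60], [0.2] * 5)   # many similar outcomes
low_ent_high_var = ([10, 90], [0.5, 0.5])              # few, widely spread outcomes

print(entropy(single[1]), variance(*single))        # 0.0 and 0.0
print(round(entropy(high_ent_low_var[1]), 3))       # log2(5) ≈ 2.322 bits
print(variance(*low_ent_high_var))                  # 1600.0
```

The five-outcome contingency carries more information (higher entropy) despite its outcomes being tightly clustered (low variance), while the two-outcome contingency shows the reverse, which is exactly the dissociation Experiments 3 and 4 exploit.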


Subjects
Attention , Cues (Psychology) , Humans , Classical Conditioning , Uncertainty , Reward , Reaction Time
3.
Psychon Bull Rev ; 30(5): 1689-1706, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37145388

ABSTRACT

Value-driven attentional capture (VDAC) refers to the phenomenon by which stimulus features associated with greater reward value attract more attention than those associated with smaller reward value. To date, the majority of VDAC research has shown that the relationship between reward history and attentional allocation follows associative learning rules. Accordingly, a mathematical implementation of associative learning models and multiple comparisons among them can elucidate the underlying process and properties of VDAC. In this study, we implemented the Rescorla-Wagner, Mackintosh (Mac), Schmajuk-Pearce-Hall (SPH), and Esber-Haselgrove (EH) models to determine whether different models predict different outcomes when critical parameters in VDAC were adjusted. Simulation results were compared with experimental data from a series of VDAC studies by fitting two key model parameters, associative strength (V) and associability (α), using the Bayesian information criterion as a loss function. The results showed that SPH-V and EH-α outperformed the other implementations on phenomena related to VDAC, such as expected value, training session, switching (or inertia), and uncertainty. Although the models' V was sufficient to simulate VDAC when expected value was the main experimental manipulation, their α could predict additional aspects of VDAC, including uncertainty and resistance to extinction. In summary, associative learning models concur with the crucial aspects of behavioral data from VDAC experiments and elucidate the underlying dynamics, including novel predictions that remain to be verified.
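As a rough sketch of the modeling approach described above, a minimal Rescorla-Wagner delta-rule update and a BIC loss might look like the following. The learning-rate values and trial sequence are illustrative assumptions, not the parameters fitted in the study:

```python
import math

def rescorla_wagner(rewards, alpha=0.3, beta=1.0, v0=0.0):
    """Rescorla-Wagner update: on each trial, associative strength V moves
    toward the obtained reward by a fixed fraction (alpha * beta) of the
    prediction error (reward - V)."""
    v = v0
    history = []
    for r in rewards:
        v += alpha * beta * (r - v)  # delta-rule update on prediction error
        history.append(v)
    return history

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion, used as the model-selection loss."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

# Ten trials of a constant reward of 1.0: V converges toward 1.0
vs = rescorla_wagner([1.0] * 10)
print(round(vs[-1], 3))  # 0.972, i.e. 1 - 0.7**10
```

The Mac, SPH, and EH models extend this scheme by additionally updating the associability α from trial to trial, which is what lets them capture uncertainty-driven effects that a fixed-α Rescorla-Wagner model cannot.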


Subjects
Classical Conditioning , Reward , Humans , Bayes Theorem , Computer Simulation , Uncertainty , Association Learning