Results 1 - 8 of 8
1.
Behav Res Ther; 134: 103706, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32920165

ABSTRACT

Social anxiety (SA) is thought to be maintained in part by avoidance of social threat, which exacerbates fear of negative evaluation. Yet, relatively little research has been conducted to evaluate the connection between social anxiety and attentional processes in realistic contexts. The current pilot study examined patterns of attention (eye movements) in a commonly feared social context - public speaking. Participants (N = 84) with a range of social anxiety symptoms gave an impromptu five-minute speech in an immersive 360°-video environment, while wearing a virtual reality headset equipped with eye-tracking hardware. We found evidence for the expected interaction between fear of public speaking and social threat (uninterested vs. interested audience members). Consistent with prediction, participants with greater fear of public speaking looked fewer times at uninterested members of the audience (high social threat) compared to interested members of the audience (low social threat), b = 0.418, p = 0.046, 95% CI [0.008, 0.829]. Analyses of attentional indices over the course of the speech revealed that the interaction between fear of public speaking and gaze on audience members was only significant in the first three minutes. Our results provide support for theoretical models implicating avoidance of social threat as a maintaining factor in social anxiety. Future research is needed to test whether guided attentional training targeting in vivo attentional avoidance may improve clinical outcomes for those presenting with social anxiety.
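A brief illustration of the kind of analysis described above, using simulated data: a linear mixed model of per-audience-member gaze counts with a fear-of-public-speaking × audience-interest interaction and a random intercept per participant. The variable names, simulated effect sizes, and the specific model form are assumptions for illustration, not the study's actual pipeline.

```python
# Simulated data and a linear mixed model with a fear x audience-interest
# interaction; variable names, effect sizes, and model form are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(84):
    fear = rng.normal()                         # standardized fear-of-speaking score
    for member in range(8):
        interested = member % 2                 # 1 = interested audience member
        gaze = 5 + 0.4 * fear * interested - 0.2 * fear + rng.normal()
        rows.append(dict(subject=subj, fear=fear,
                         interested=interested, gaze_count=gaze))
data = pd.DataFrame(rows)

# Random intercept per participant; the fear:interested coefficient is the
# analogue of the interaction the abstract reports (b, p, 95% CI).
fit = smf.mixedlm("gaze_count ~ fear * interested", data, groups="subject").fit()
print(fit.summary())
```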

2.
PLoS Comput Biol; 14(10): e1006518, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30359364

ABSTRACT

Although a standard reinforcement learning model can capture many aspects of reward-seeking behaviors, it may not be practical for modeling natural human behaviors because of the richness of dynamic environments and limitations in cognitive resources. We propose a modular reinforcement learning model that addresses these factors. Based on this model, a modular inverse reinforcement learning algorithm is developed to estimate both the rewards and discount factors from human behavioral data, which allows predictions of human navigation behaviors in virtual reality with high accuracy across different subjects and tasks. Complex human navigation trajectories in novel environments can be reproduced by an artificial agent that is based on the modular model. This model provides a strategy for estimating the subjective value of actions and how they influence sensory-motor decisions in natural behavior.
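A minimal sketch of the modular idea described in this abstract, assuming a tabular setting: each module keeps its own Q-values, reward weight, and discount factor, and the agent acts on the sum of the modules' values. Class and parameter names are illustrative; this is not the authors' model or their inverse-RL estimation procedure.

```python
# A sketch only: tabular Q-learning modules with their own reward weights and
# discount factors; the real model and its inverse-RL fitting are more involved.
import numpy as np

class Module:
    def __init__(self, n_states, n_actions, reward_weight, discount, lr=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.w = reward_weight    # subjective value of this module's goal (assumed)
        self.gamma = discount     # module-specific discount factor
        self.lr = lr

    def update(self, s, a, r, s_next):
        # Standard Q-learning update on this module's own reward signal r.
        target = self.w * r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])

def act(modules, s):
    # The agent acts on the sum of the modules' action values for state s.
    return int(np.argmax(sum(m.q[s] for m in modules)))

# Hypothetical usage: two modules (e.g., "reach target" and "avoid obstacle").
modules = [Module(25, 4, reward_weight=1.0, discount=0.9),
           Module(25, 4, reward_weight=0.5, discount=0.7)]
print(act(modules, s=0))
```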


Subjects
Decision Making/physiology; Psychomotor Performance/physiology; Reinforcement, Psychology; Algorithms; Computational Biology; Humans; Models, Biological; Reward
3.
Sci Rep; 8(1): 4324, 2018 Mar 12.
Article in English | MEDLINE | ID: mdl-29531297

ABSTRACT

Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.


Subjects
Memory, Episodic; Space Perception; Virtual Reality; Visual Perception; Attention; Head Movements; Humans; Learning
4.
J Vis; 17(1): 28, 2017 Jan 1.
Article in English | MEDLINE | ID: mdl-28114501

ABSTRACT

While it is universally acknowledged that both bottom-up and top-down factors contribute to allocation of gaze, we currently have limited understanding of how top-down factors determine gaze choices in the context of ongoing natural behavior. One purely top-down model by Sprague, Ballard, and Robinson (2007) suggests that natural behaviors can be understood in terms of simple component behaviors, or modules, that are executed according to their reward value, with gaze targets chosen in order to reduce uncertainty about the particular world state needed to execute those behaviors. We explore the plausibility of the central claims of this approach in the context of a task where subjects walk through a virtual environment performing interceptions, avoidance, and path following. Many aspects of both walking direction choices and gaze allocation are consistent with this approach. Subjects use gaze to reduce uncertainty for task-relevant information that is used to inform action choices. Notably, the addition of motion to peripheral objects did not affect fixations when the objects were irrelevant to the task, suggesting that stimulus saliency was not a major factor in gaze allocation. The modular approach of independent component behaviors is consistent with the main aspects of performance, but there were a number of deviations suggesting that modules interact. Thus, the model forms a useful, but incomplete, starting point for understanding top-down factors in active behavior.
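A toy sketch of the gaze-scheduling principle attributed to Sprague, Ballard, and Robinson: each behavioral module's state estimate grows more uncertain while unattended, and gaze goes to the module whose uncertainty is expected to cost the most reward. The module names, noise model, and reward weights are assumptions, and the published model tracks state with proper probabilistic estimators rather than this scalar stand-in.

```python
# Toy stand-in for the uncertainty/reward gaze scheduler: uncertainty about each
# module's task state grows while it is unattended, fixation resets it, and gaze
# goes to the module whose uncertainty is expected to cost the most reward.
class GazeModule:
    def __init__(self, name, reward_weight, noise=0.05):
        self.name = name
        self.sigma = 0.0          # uncertainty about this module's world state
        self.w = reward_weight    # reward value of the module's task (assumed)
        self.noise = noise

    def expected_loss(self):
        # Expected reward lost if the module keeps acting on a stale estimate.
        return self.w * self.sigma

    def step(self, fixated):
        self.sigma = 0.0 if fixated else self.sigma + self.noise

def allocate_gaze(modules):
    return max(modules, key=lambda m: m.expected_loss())

modules = [GazeModule("avoid_obstacles", 2.0),
           GazeModule("intercept_targets", 1.0),
           GazeModule("follow_path", 0.5)]
for t in range(10):
    target = allocate_gaze(modules)
    for m in modules:
        m.step(fixated=(m is target))
    print(t, target.name)
```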


Subjects
Fixation, Ocular/physiology; Reward; Uncertainty; Visual Perception/physiology; Walking; Eye Movements/physiology; Female; Humans; Male; Models, Theoretical; Psychomotor Performance; Young Adult
5.
J Vis; 16(8): 9, 2016 Jun 1.
Article in English | MEDLINE | ID: mdl-27299769

ABSTRACT

The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.


Subjects
Attention/physiology; Environment; Eye Movements/physiology; Memory/physiology; Visual Perception/physiology; Adult; Female; Humans; Learning; Male
6.
J Vis; 8(7): 32.1-20, 2008 Dec 16.
Article in English | MEDLINE | ID: mdl-19146264

ABSTRACT

We propose a definition of saliency by considering what the visual system is trying to optimize when directing attention. The resulting model is a Bayesian framework from which bottom-up saliency emerges naturally as the self-information of visual features, and overall saliency (incorporating top-down information with bottom-up saliency) emerges as the pointwise mutual information between the features and the target when searching for a target. An implementation of our framework demonstrates that our model's bottom-up saliency maps perform as well as or better than existing algorithms in predicting people's fixations in free viewing. Unlike existing saliency measures, which depend on the statistics of the particular image being viewed, our measure of saliency is derived from natural image statistics, obtained in advance from a collection of natural images. For this reason, we call our model SUN (Saliency Using Natural statistics). A measure of saliency based on natural image statistics, rather than based on a single test image, provides a straightforward explanation for many search asymmetries observed in humans; the statistics of a single test image lead to predictions that are not consistent with these asymmetries. In our model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters.
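A minimal sketch of the bottom-up part of SUN as described above: saliency is the self-information −log p(F), with p(F) estimated in advance from natural-image feature statistics rather than from the test image. Here a one-dimensional histogram over toy feature values stands in for the filter-response statistics used in the paper; the top-down (pointwise mutual information) term is not shown.

```python
# Sketch of the bottom-up term only: saliency = -log p(F), with p(F) estimated
# beforehand from "natural" feature samples (a toy 1-D histogram here, standing
# in for the filter-response statistics the paper learns from natural images).
import numpy as np

def fit_feature_density(natural_features, bins=32):
    # Histogram estimate of p(F) over a training set of natural-image features.
    hist, edges = np.histogram(natural_features, bins=bins, density=True)
    return hist, edges

def self_information(features, hist, edges, eps=1e-12):
    # -log p(F): features that are rare under natural statistics are more salient.
    idx = np.clip(np.digitize(features, edges) - 1, 0, len(hist) - 1)
    return -np.log(hist[idx] + eps)

# Hypothetical usage: fit on natural samples, then score a test image's features.
natural = np.random.default_rng(0).normal(size=100_000)
hist, edges = fit_feature_density(natural)
print(self_information(np.array([0.0, 2.5, -4.0]), hist, edges))
```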


Subjects
Attention/physiology; Bayes Theorem; Computer Simulation; Eye Movements/physiology; Visual Perception/physiology; Humans
7.
Brain Res; 1202: 14-24, 2008 Apr 2.
Article in English | MEDLINE | ID: mdl-17959155

ABSTRACT

What is the role of the Fusiform Face Area (FFA)? Is it specific to face processing, or is it a visual expertise area? The expertise hypothesis is appealing due to a number of studies showing that the FFA is activated by pictures of objects within the subject's domain of expertise (e.g., cars for car experts, birds for birders, etc.), and that activation of the FFA increases as new expertise is acquired in the lab. However, it is incumbent upon the proponents of the expertise hypothesis to explain how it is that an area that is initially specialized for faces becomes recruited for new classes of stimuli. We dub this the "visual expertise mystery." One suggested answer to this mystery is that the FFA is used simply because it is a fine discrimination area, but this account has historically lacked a mechanism describing exactly how the FFA would be recruited for novel domains of expertise. In this study, we show that a neurocomputational model trained to perform subordinate-level discrimination within a visually homogeneous class develops transformations that magnify differences between similar objects, in marked contrast to networks trained to simply categorize the objects. This magnification generalizes to novel classes, leading to faster learning of new discriminations. We suggest this is why the FFA is recruited for new expertise. The model predicts that individual FFA neurons will have highly variable responses to stimuli within expertise domains.
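A toy sketch of the comparison the abstract describes, far simpler than the paper's neurocomputational model: train one small network to discriminate individual items within a visually homogeneous class ("expertise") and another to assign the same items to coarse categories, then compare how spread out similar items become in each network's hidden layer. The data, network size, and distance measure are all assumptions.

```python
# Toy comparison (not the paper's model): fine discrimination vs. coarse
# categorization of the same visually similar items, then measure how spread
# out the items are in each trained network's first hidden layer.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(10, 20))                      # 10 similar "objects"
X = np.vstack([p + 0.1 * rng.normal(size=(50, 20)) for p in prototypes])
fine_y = np.repeat(np.arange(10), 50)                       # individual identities
coarse_y = fine_y // 5                                      # two coarse categories

def hidden_spread(y):
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, y)
    # First hidden layer activations: relu(X @ W + b) with the fitted weights.
    h = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])
    return np.linalg.norm(h[:, None] - h[None, :], axis=-1).mean()

print("subordinate-level training:", hidden_spread(fine_y))
print("coarse category training:  ", hidden_spread(coarse_y))
```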


Subjects
Computer Simulation; Face; Neural Networks, Computer; Pattern Recognition, Visual/physiology; Temporal Lobe/physiology; Visual Cortex/physiology; Discrimination Learning/physiology; Humans; Learning/physiology; Photic Stimulation/methods; Social Behavior; Teaching/methods; Temporal Lobe/anatomy & histology; Visual Cortex/anatomy & histology; Visual Pathways/anatomy & histology; Visual Pathways/physiology
8.
Neural Netw; 20(3): 424-32, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17556116

ABSTRACT

Echo State Networks (ESNs) have been shown to be effective for a number of tasks, including motor control, dynamic time series prediction, and memorizing musical sequences. However, their performance on natural language tasks has been largely unexplored until now. Simple Recurrent Networks (SRNs) have a long history in language modeling and show a striking similarity in architecture to ESNs. A comparison of SRNs and ESNs on a natural language task is therefore a natural choice for experimentation. Elman applied SRNs to a standard task in statistical NLP: predicting the next word in a corpus, given the previous words. Using a simple context-free grammar and an SRN with backpropagation through time (BPTT), Elman showed that the network was able to learn internal representations that were sensitive to linguistic processes useful for the prediction task. Here, using ESNs, we show that training such internal representations is unnecessary to achieve levels of performance comparable to SRNs. We also compare the processing capabilities of ESNs to bigrams and trigrams. Due to some unexpected regularities of Elman's grammar, these statistical techniques are capable of maintaining dependencies over greater distances than might be initially expected. However, we show that the memory of ESNs in this word-prediction task, although noisy, extends significantly beyond that of bigrams and trigrams, enabling ESNs to make good predictions of verb agreement at distances over which these methods operate at chance. Overall, our results indicate a surprising ability of ESNs to learn a grammar, suggesting that they form useful internal representations without learning them.
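A minimal echo state network sketch for next-word prediction in the spirit of the setup described above (not the paper's configuration or Elman's grammar): the recurrent reservoir is random and fixed, and only a linear readout is trained, here by ridge regression on one-hot next-word targets. The vocabulary and corpus below are toy placeholders.

```python
# Minimal ESN sketch: fixed random reservoir, only the linear readout is trained
# (ridge regression) to predict the next word. Vocabulary and corpus are toys.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["boy", "girl", "dog", "chases", "sees", "who", "."]
V, N = len(vocab), 100                                   # vocab size, reservoir size

W_in = rng.uniform(-0.1, 0.1, size=(N, V))
W = rng.normal(size=(N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()            # spectral radius below 1

def run_reservoir(word_ids):
    x, states = np.zeros(N), []
    for w in word_ids:
        x = np.tanh(W_in @ np.eye(V)[w] + W @ x)         # untrained dynamics
        states.append(x)
    return np.array(states)

corpus = rng.integers(0, V, size=500)                    # stand-in for the corpus
S = run_reservoir(corpus[:-1])                           # reservoir states
T = np.eye(V)[corpus[1:]]                                # one-hot next words
W_out = np.linalg.solve(S.T @ S + 1e-2 * np.eye(N), S.T @ T).T

scores = W_out @ S[-1]                                   # next-word scores at the end
print(vocab[int(np.argmax(scores))])
```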


Subjects
Language; Learning/physiology; Models, Neurological; Nerve Net/physiology; Neural Networks, Computer; Artificial Intelligence; Humans