Results 1 - 7 of 7
1.
Proc Natl Acad Sci U S A ; 121(17): e2403858121, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38635638

ABSTRACT

Functional neuroimaging studies indicate that the human brain can represent concepts and their relational structure in memory using coding schemes typical of spatial navigation. However, whether we can read out the internal representational geometries of conceptual spaces solely from human behavior remains unclear. Here, we report that the relational structure between concepts in memory might be reflected in spontaneous eye movements during verbal fluency tasks: When we asked participants to randomly generate numbers, their eye movements correlated with distances along the left-to-right one-dimensional geometry of the number space (mental number line), while they scaled with distance along the ring-like two-dimensional geometry of the color space (color wheel) when they randomly generated color names. Moreover, when participants randomly produced animal names, eye movements correlated with low-dimensional similarity in word frequencies. These results suggest that the representational geometries used to internally organize conceptual spaces might be read out from gaze behavior.
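The core analysis idea, relating distances between successively generated concepts to the accompanying gaze displacements, can be sketched with entirely hypothetical data. The variable names, the left-to-right mapping, and the noise model below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 randomly generated numbers (1-9) and a toy horizontal
# gaze position that drifts left-to-right with number magnitude, plus noise.
numbers = rng.integers(1, 10, size=50)
gaze_x = 0.5 * numbers + rng.normal(0.0, 1.0, size=50)

# Distance between successive items along the mental number line,
# and the corresponding horizontal gaze displacement.
concept_dist = np.abs(np.diff(numbers))
gaze_dist = np.abs(np.diff(gaze_x))

# A positive correlation indicates that gaze reflects the conceptual geometry.
r = np.corrcoef(concept_dist, gaze_dist)[0, 1]
```

For a two-dimensional space such as the color wheel, `concept_dist` would instead be an angular (ring) distance between successive color names.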


Subjects
Eye Movements; Spatial Navigation; Humans; Brain; Movement; Functional Neuroimaging
2.
Nat Commun ; 14(1): 8132, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38065931

ABSTRACT

The human hippocampal-entorhinal system is known to represent both spatial locations and abstract concepts in memory in the form of allocentric cognitive maps. Using fMRI, we show that the human parietal cortex evokes complementary egocentric representations in conceptual spaces during goal-directed mental search, akin to those observable during physical navigation to determine where a goal is located relative to oneself (e.g., to our left or to our right). Concurrently, the strength of the grid-like signal, a neural signature of allocentric cognitive maps in entorhinal, prefrontal, and parietal cortices, is modulated as a function of goal proximity in conceptual space. These brain mechanisms might support flexible and parallel readout of where target conceptual information is stored in memory, capitalizing on complementary reference frames.
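The "grid-like signal" mentioned in the abstract is conventionally modeled as a six-fold (hexadirectional) sinusoidal modulation of the BOLD signal by movement direction. A minimal sketch of that standard parametric regressor (the function name and the degree-based interface are assumptions for illustration):

```python
import numpy as np

def grid_regressor(theta_deg, grid_phase_deg=0.0):
    """Six-fold (hexadirectional) modulation of trajectory direction theta:
    the standard parametric regressor used to detect grid-like fMRI signals.
    Peaks every 60 degrees, at directions aligned with the grid axes."""
    return np.cos(np.deg2rad(6.0 * (theta_deg - grid_phase_deg)))
```

Directions aligned with the estimated grid orientation (0°, 60°, 120°, ...) yield +1, while misaligned directions (30°, 90°, ...) yield -1; the abstract's finding would correspond to the amplitude of this modulation varying with goal proximity.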


Subjects
Brain; Hippocampus; Humans; Parietal Lobe/diagnostic imaging; Brain Mapping; Head; Space Perception
3.
Atten Percept Psychophys ; 83(7): 2865-2878, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34341941

ABSTRACT

Past research on the advantages of multisensory input for remembering spatial information has mainly focused on memory for objects or surrounding environments. Less is known about the role of cue combination in memory for own body location in space. In a previous study, we investigated participants' accuracy in reproducing a rotation angle in a self-rotation task. Here, we focus on the memory aspect of the task. Participants had to rotate themselves back to a specified starting position in three different sensory conditions: a blind condition, a condition with disrupted proprioception, and a condition where both vision and proprioception were reliably available. To investigate the difference between encoding and storage phases of remembering proprioceptive information, rotation amplitude and recall delay were manipulated. The task was completed in a real testing room and in immersive virtual reality (IVR) simulations of the same environment. We found that proprioceptive accuracy is lower when vision is not available and that performance is generally less accurate in IVR. In reality conditions, the degree of rotation affected accuracy only in the blind condition, whereas in IVR, it caused more errors in both the blind condition and to a lesser degree when proprioception was disrupted. These results indicate an improvement in encoding own body location when vision and proprioception are optimally integrated. No reliable effect of delay was found.
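Scoring accuracy in a self-rotation reproduction task requires comparing angles with wrap-around (a response of 350° to a 10° target is 20° off, not 340°). A minimal sketch of that computation, with names chosen for illustration rather than taken from the paper:

```python
def angular_error(response_deg, target_deg):
    """Smallest signed difference between two angles, in degrees,
    in the range [-180, 180). Negative means undershoot counterclockwise."""
    return (response_deg - target_deg + 180.0) % 360.0 - 180.0
```

Accuracy per condition can then be summarized as the mean absolute value of this error across trials.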


Subjects
Virtual Reality; Humans; Motion; Proprioception; Vision, Ocular
4.
Front Psychol ; 12: 708229, 2021.
Article in English | MEDLINE | ID: mdl-34322072

ABSTRACT

Atypical sensorimotor developmental trajectories greatly contribute to the profound heterogeneity that characterizes Autism Spectrum Disorders (ASD). Individuals with ASD manifest deviations in sensorimotor processing with early markers in the use of sensory information coming from both the external world and the body, as well as motor difficulties. The cascading effect of these impairments on the later development of higher-order abilities (e.g., executive functions and social communication) underlines the need for interventions that focus on the remediation of sensorimotor integration skills. One of the promising technologies for such stimulation is Immersive Virtual Reality (IVR). In particular, head-mounted displays (HMDs) have unique features that fully immerse the user in virtual realities which disintegrate and otherwise manipulate multimodal information. The contribution of each individual sensory input and of multisensory integration to perception and motion can be evaluated and addressed according to a user's clinical needs. HMDs can therefore be used to create virtual environments aimed at improving people's sensorimotor functioning, with strong potential for individualization for users. Here we provide a narrative review of the sensorimotor atypicalities evidenced by children and adults with ASD, alongside some specific relevant features of IVR technology. We discuss how individuals with ASD may interact differently with IVR versus real environments on the basis of their specific atypical sensorimotor profiles and describe the unique potential of HMD-delivered immersive virtual environments to this end.

5.
Psychol Res ; 85(7): 2667-2681, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33146781

ABSTRACT

Can cognitive load enhance concentration on task-relevant information and help filter out distractors? Most of the prior research in the area of selective attention has focused on visual attention or cross-modal distraction and has yielded controversial results. Here, we studied whether working memory load can facilitate selective attention when both target and distractor stimuli are auditory. We used a letter n-back task with four levels of working memory load and two levels of distraction: congruent and incongruent distractors. This combination of updating and inhibition tasks allowed us to manipulate working memory load within the selective attention task. Participants sat in front of three loudspeakers and were asked to attend to the letter presented from the central loudspeaker while ignoring that presented from the flanking ones (spoken by a different person), which could be the same letter as the central one (congruent) or a different (incongruent) letter. Their task was to respond whether or not the central letter matched the letter presented n (0, 1, 2, or 3) trials back. Distraction was measured in terms of the difference in reaction time and accuracy on trials with incongruent versus congruent flankers. We found reduced interference from incongruent flankers in 2- and 3-back conditions compared to 0- and 1-back conditions, whereby higher working memory load almost negated the effect of incongruent flankers. These results suggest that high load on verbal working memory can facilitate inhibition of distractors in the auditory domain rather than make it more difficult as sometimes claimed.
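The distraction measure described above, the reaction-time difference between incongruent and congruent flanker trials, reduces to a simple computation. This is a sketch with an illustrative function name, not the authors' analysis code:

```python
import numpy as np

def interference_cost(rt_incongruent, rt_congruent):
    """Distraction cost in ms: mean RT on incongruent-flanker trials minus
    mean RT on congruent-flanker trials. A smaller cost at higher n-back
    loads would indicate better filtering of auditory distractors."""
    return float(np.mean(rt_incongruent) - np.mean(rt_congruent))
```

Computing this cost separately for each load level (0-, 1-, 2-, 3-back) gives the load-by-congruence pattern the abstract reports.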


Subjects
Cognition; Memory, Short-Term; Humans; Reaction Time
6.
Brain Sci ; 10(5)2020 Apr 29.
Article in English | MEDLINE | ID: mdl-32365509

ABSTRACT

When learning and interacting with the world, people with Autism Spectrum Disorders (ASD) show compromised use of vision and enhanced reliance on body-based information. As this atypical profile is associated with motor and social difficulties, interventions could aim to reduce the potentially isolating reliance on the body and foster the use of visual information. To this end, head-mounted displays (HMDs) have unique features that enable the design of Immersive Virtual Realities (IVR) for manipulating and training sensorimotor processing. The present study assesses feasibility and offers some early insights from a new paradigm for exploring how children and adults with ASD interact with Reality and IVR when vision and proprioception are manipulated. Seven participants (five adults, two children) performed a self-turn task in two environments (Reality and IVR) for each of three sensory conditions (Only Proprioception, Only Vision, Vision + Proprioception) in a purpose-designed testing room and an HMD-simulated environment. The pilot indicates good feasibility of the paradigm. Preliminary data visualisation suggests the importance of considering inter-individual variability. The participants in this study who performed worse with Only Vision and better with Only Proprioception seemed to benefit from the use of IVR. Those who performed better with Only Vision and worse with Only Proprioception seemed to benefit from Reality. Therefore, we invite researchers and clinicians to consider that IVR may facilitate or impair individuals depending on their profiles.

7.
PLoS One ; 15(1): e0222253, 2020.
Article in English | MEDLINE | ID: mdl-31999710

ABSTRACT

Proprioceptive development relies on a variety of sensory inputs, among which vision is hugely dominant. Focusing on the developmental trajectory underpinning the integration of vision and proprioception, the present research explores how this integration is involved in interactions with Immersive Virtual Reality (IVR) by examining how proprioceptive accuracy is affected by Age, Perception, and Environment. Individuals from 4 to 43 years old completed a self-turning task which asked them to manually return to a previous location with different sensory modalities available in both IVR and reality. Results were interpreted from an exploratory perspective using Bayesian model comparison analysis, which allows the phenomena to be described using probabilistic statements rather than simplified reject/not-reject decisions. The most plausible model showed that 4-8-year-old children can generally be expected to make more proprioceptive errors than older children and adults. Across age groups, proprioceptive accuracy is higher when vision is available, and is disrupted in the visual environment provided by the IVR headset. We can conclude that proprioceptive accuracy mostly develops during the first eight years of life and that it relies largely on vision. Moreover, our findings indicate that this proprioceptive accuracy can be disrupted by the use of an IVR headset.
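The abstract does not specify which Bayesian model comparison method was used; one common approximation converts each candidate model's BIC into an approximate posterior probability. The sketch below illustrates that generic technique only, under that assumption:

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion; lower values favor the model."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

def bic_weights(bics):
    """Turn a set of BIC values into approximate posterior model
    probabilities (normalized relative evidence)."""
    b = np.asarray(bics, dtype=float)
    rel = np.exp(-0.5 * (b - b.min()))
    return rel / rel.sum()
```

Such weights support probabilistic statements about competing models (e.g., an Age + Perception + Environment model versus simpler ones) rather than a single reject/not-reject decision.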


Subjects
Proprioception/physiology; Psychomotor Performance/physiology; Virtual Reality; Vision, Ocular/physiology; Adolescent; Adult; Bayes Theorem; Child; Child, Preschool; Female; Humans; Male; Visual Perception; Young Adult