Results 1 - 12 of 12
1.
J Neurophysiol ; 128(6): 1409-1420, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36321734

ABSTRACT

We previously proposed a Bayesian model of multisensory integration in spatial orientation (Clemens IAH, de Vrijer M, Selen LPJ, van Gisbergen JAM, Medendorp WP. J Neurosci 31: 5365-5377, 2011). Using a Gaussian prior, centered on an upright head orientation, this model could explain various perceptual observations in roll-tilted participants, such as the subjective visual vertical and the subjective body tilt (Clemens IAH, de Vrijer M, Selen LPJ, van Gisbergen JAM, Medendorp WP. J Neurosci 31: 5365-5377, 2011), the rod-and-frame effect (Alberts BBGT, de Brouwer AJ, Selen LPJ, Medendorp WP. eNeuro 3: ENEURO.0093-16.2016, 2016), as well as their clinical (Alberts BBGT, Selen LPJ, Verhagen WIM, Medendorp WP. Physiol Rep 3: e12385, 2015) and age-related deficits (Alberts BBGT, Selen LPJ, Medendorp WP. J Neurophysiol 121: 1279-1288, 2019). Because it is generally assumed that the prior reflects an accumulated history of previous head orientations, and recent work on natural head motion suggests non-Gaussian statistics, we examined how the model would perform with a non-Gaussian prior. In the present study, we first generalized the previous observations experimentally by showing that the natural statistics of head orientation are also characterized by long tails, best quantified as a t-location-scale distribution. Next, we compared the performance of the Bayesian model and various model variants using such a t-distributed prior to the original model with the Gaussian prior, on their accounts of previously published data from the subjective visual vertical and subjective body tilt tasks. All of these variants performed substantially worse than the original model, suggesting a special value of the Gaussian prior.
We provide computational and neurophysiological reasons for the implementation of such a prior, in terms of its associated precision-accuracy trade-off in vertical perception across the tilt range. NEW & NOTEWORTHY It has been argued that the brain uses Bayesian computations to process multiple sensory cues in vertical perception, including a prior centered on upright head orientation which is usually taken to be Gaussian. Here, we show that non-Gaussian prior distributions, although more akin to the statistics of head orientation during natural activities, provide a much worse explanation of such perceptual observations than a Gaussian prior.
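As a rough illustration of why the prior's shape matters, the sketch below computes a grid-based MAP estimate of head tilt, combining a Gaussian vestibular likelihood with either a Gaussian prior or a heavy-tailed Student-t prior centered on upright. All parameter values (prior width, sensor noise, degrees of freedom) are ours for illustration, not the fitted values from the study.

```python
import numpy as np

def map_tilt_estimate(sensed_tilt_deg, sensor_sd, log_prior):
    """Grid-based MAP estimate of head tilt: Gaussian likelihood centered
    on the sensed tilt, combined with a prior centered on upright (0 deg)."""
    grid = np.linspace(-90.0, 90.0, 3601)  # 0.05 deg resolution
    log_post = -(grid - sensed_tilt_deg) ** 2 / (2.0 * sensor_sd ** 2) + log_prior(grid)
    return grid[np.argmax(log_post)]

prior_sd = 15.0  # illustrative prior width (deg)
log_gauss = lambda x: -x ** 2 / (2.0 * prior_sd ** 2)          # Gaussian prior
log_t = lambda x: -2.0 * np.log1p((x / prior_sd) ** 2 / 3.0)   # t prior, df = 3

est_gauss = map_tilt_estimate(60.0, sensor_sd=10.0, log_prior=log_gauss)
est_t = map_tilt_estimate(60.0, sensor_sd=10.0, log_prior=log_t)
# At large tilts the Gaussian prior pulls the estimate strongly toward
# upright, while the t prior's heavy tails exert a much weaker pull.
```

This makes the trade-off the abstract alludes to concrete: the Gaussian prior's pull grows with tilt (accuracy cost, precision gain), whereas a heavy-tailed prior largely releases large tilts from that pull.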


Subjects
Spatial Orientation, Space Perception, Humans, Bayes Theorem, Space Perception/physiology, Cues (Psychology), Head, Visual Perception/physiology
2.
J Neurophysiol ; 122(2): 788-796, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31268803

ABSTRACT

The brain is thought to use rotation cues from both the vestibular and optokinetic system to disambiguate the gravito-inertial force, as measured by the otoliths, into components of linear acceleration and gravity direction relative to the head. Hence, when the head is stationary and upright, an erroneous percept of tilt arises during optokinetic roll stimulation (OKS) or when an artificial canal-like signal is delivered by means of galvanic vestibular stimulation (GVS). It is still unknown how this percept is affected by the combined presence of both cues or how it develops over time. Here, we measured the time course of the subjective visual vertical (SVV), as a proxy of perceived head tilt, in human participants (n = 16) exposed to constant-current GVS (1 and 2 mA, cathodal and anodal) and constant-velocity OKS (30°/s clockwise and counterclockwise) or their combination. In each trial, participants continuously adjusted the orientation of a visual line, which drifted randomly, to Earth vertical. We found that both GVS and OKS evoke an exponential time course of the SVV. These time courses have different amplitudes and different time constants, 4 and 7 s respectively, and combine linearly when the two stimulations are presented together. We discuss these results in the framework of observer theory and Bayesian state estimation. NEW & NOTEWORTHY While it is known that both roll optokinetic stimuli and galvanic vestibular stimulation affect the percept of vertical, how their effects combine and develop over time is still unclear. Here we show that the two effects combine linearly but are characterized by different time constants, which we discuss from a probabilistic perspective.
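The reported dynamics can be sketched as two first-order exponentials that sum linearly. The 4 s (GVS) and 7 s (OKS) time constants come from the abstract; the steady-state amplitudes below are placeholders, not the measured values.

```python
import numpy as np

def svv_shift(t, amplitude, tau):
    """First-order exponential approach to a steady-state SVV shift (deg)."""
    return amplitude * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 30.0, 301)              # seconds
gvs = svv_shift(t, amplitude=5.0, tau=4.0)   # galvanic vestibular stimulation
oks = svv_shift(t, amplitude=8.0, tau=7.0)   # optokinetic roll stimulation
combined = gvs + oks                         # linear combination reported above
```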


Subjects
Optic Flow/physiology, Proprioception/physiology, Space Perception/physiology, Vestibule, Labyrinth/physiology, Adult, Bayes Theorem, Electric Stimulation, Female, Gravity Sensing/physiology, Humans, Male, Mastoid Process, Middle Aged, Photic Stimulation, Time Factors, Young Adult
3.
J Neurophysiol ; 122(2): 480-489, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31166820

ABSTRACT

While it has been well established that optostatic and optokinetic cues contribute to the perception of vertical, it is unclear how the brain processes their combined presence with the nonvisual vestibular cues. Using a psychometric approach, we examined the percept of vertical in human participants (n = 17) with their body and head upright, presented with a visual frame tilted at one of eight orientations (between ±45°, steps of 11.25°) or no frame, surrounded by an optokinetic roll-stimulus (velocity = ±30°/s or stationary). Both cues demonstrate relatively independent biases on vertical perception, with a sinusoidal modulation by frame orientation of ~4° and a general shift of ~1-2° in the rotation direction of the optic flow. Variability was unaffected by frame orientation but was higher with optokinetic rotation than without it. An optimal-observer model in which vestibular, optostatic, and optokinetic cues provide independent sources to vertical perception was unable to explain these data. In contrast, a model in which the optokinetic signal biases the internal representation of gravity, which is then optimally integrated with the optostatic cue, provided a good account, at the individual participant level. We conclude that optostatic and optokinetic cues interact differently with vestibular cues in the neural computations for vertical perception. NEW & NOTEWORTHY Static and dynamic visual cues are known to bias the percept of vertical, but how they interact with vestibular cues remains to be established. Guided by an optimal-observer model, the present results suggest that optokinetic information is combined with vestibular information into a single, vestibular-optokinetic estimate, which is integrated with an optostatically derived estimate of vertical.


Subjects
Motion Perception/physiology, Optic Flow/physiology, Pattern Recognition, Visual/physiology, Proprioception/physiology, Space Perception/physiology, Vestibule, Labyrinth/physiology, Adult, Cues (Psychology), Female, Humans, Male, Photic Stimulation, Young Adult
4.
Multisens Res ; 32(3): 165-178, 2019 01 01.
Article in English | MEDLINE | ID: mdl-31059483

ABSTRACT

When walking or driving, it is of the utmost importance to continuously track the spatial relationship between objects in the environment and the moving body in order to prevent collisions. Although this process of spatial updating occurs naturally, it involves the processing of a myriad of noisy and ambiguous sensory signals. Here, using a psychometric approach, we investigated the integration of visual optic flow and vestibular cues in spatially updating a remembered target position during a linear displacement of the body. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They had to remember the position of a target, briefly presented before a sideward translation of the body involving supra-threshold vestibular cues and whole-field optic flow that provided slightly discrepant motion information. After the motion, using a forced-choice response, participants indicated whether the location of a brief visual probe was left or right of the remembered target position. Our results show that in a spatial updating task involving passive linear self-motion humans integrate optic flow and vestibular self-displacement information according to a weighted-averaging process with, on average across subjects, about four times as much weight assigned to the visual as to the vestibular contribution (i.e., 79% visual weight). We discuss our findings with respect to previous literature on the effect of optic flow on spatial updating performance.
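The weighted-averaging process described above can be sketched in a few lines. The 0.79 visual weight is the across-subject average reported in the abstract; the example displacement values are invented for illustration.

```python
def fused_displacement(visual_est, vestibular_est, w_visual=0.79):
    """Weighted-average fusion of visual (optic flow) and vestibular
    self-displacement estimates; 0.79 is the across-subject average
    visual weight reported in the abstract."""
    return w_visual * visual_est + (1.0 - w_visual) * vestibular_est

# With discrepant cues (e.g. optic flow signalling 10 cm of self-motion,
# the vestibular system 6 cm), the fused estimate lies close to the
# visual signal:
fused = fused_displacement(10.0, 6.0)
```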


Subjects
Motion Perception/physiology, Optic Flow/physiology, Spatial Orientation/physiology, Space Perception/physiology, Adolescent, Adult, Cues (Psychology), Female, Humans, Male, Motion (Physics), Virtual Reality, Visual Perception/physiology, Young Adult
5.
J Neurophysiol ; 121(1): 269-284, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30461369

ABSTRACT

The brain uses self-motion information to internally update egocentric representations of locations of remembered world-fixed visual objects. If a discrepancy is observed between this internal update and reafferent visual feedback, this could be either due to an inaccurate update or because the object has moved during the motion. To optimally infer the object's location it is therefore critical for the brain to estimate the probabilities of these two causal structures and accordingly integrate and/or segregate the internal and sensory estimates. To test this hypothesis, we designed a spatial updating task involving passive whole body translation. Participants, seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, the reafferent visual feedback was provided by flashing a second target around the estimated "updated" target location, and participants had to report the initial target location. We found that the participants' responses were systematically biased toward the position of the second target for relatively small but not for large differences between the "updated" and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target location and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the visual feedback come from a common cause and uses this probability to weigh the two sources of information in mediating spatial constancy across whole body motion. NEW & NOTEWORTHY When we move, egocentric representations of object locations require internal updating to keep them in register with their true world-fixed locations. How does this mechanism interact with reafferent visual input, given that objects typically do not disappear from view?
Here we show that the brain implicitly represents the probability that both types of information derive from the same object and uses this probability to weigh their contribution for achieving spatial constancy across whole body motion.
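A minimal sketch of such a causal inference model is shown below. The noise parameters, the prior probability of a common cause, and the flat likelihood used for the separate-cause structure are all illustrative assumptions, not the fitted values from the study.

```python
import numpy as np

def causal_inference_estimate(x_updated, x_feedback, sigma_u=3.0, sigma_f=1.0,
                              p_common=0.5, segregate_range=60.0):
    """Model-averaged location estimate under Bayesian causal inference.
    C = 1 (common cause): fuse the internally updated location and the
    visual feedback by precision weighting.
    C = 2 (separate causes): keep the internally updated location."""
    # Likelihood of the observed conflict under each causal structure
    var_sum = sigma_u ** 2 + sigma_f ** 2
    like_c1 = (np.exp(-(x_updated - x_feedback) ** 2 / (2.0 * var_sum))
               / np.sqrt(2.0 * np.pi * var_sum))
    like_c2 = 1.0 / segregate_range  # flat over a plausible range (assumed)
    post_c1 = (p_common * like_c1) / (p_common * like_c1
                                      + (1.0 - p_common) * like_c2)
    # Precision-weighted fusion for the common-cause structure
    w_f = (1.0 / sigma_f ** 2) / (1.0 / sigma_f ** 2 + 1.0 / sigma_u ** 2)
    fused = w_f * x_feedback + (1.0 - w_f) * x_updated
    # Model averaging over the two causal structures
    return post_c1 * fused + (1.0 - post_c1) * x_updated

small_conflict = causal_inference_estimate(0.0, 2.0)    # pulled toward feedback
large_conflict = causal_inference_estimate(0.0, 30.0)   # stays near the update
```

This reproduces the qualitative pattern in the data: small conflicts are (partially) integrated, large conflicts are effectively segregated.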


Subjects
Models, Biological, Motion Perception, Space Perception, Adult, Bayes Theorem, Brain/physiology, Computer Simulation, Feedback, Sensory/physiology, Female, Humans, Male, Motion (Physics), Motion Perception/physiology, Psychophysics, Space Perception/physiology
6.
Front Neurol ; 9: 377, 2018.
Article in English | MEDLINE | ID: mdl-29910766

ABSTRACT

Perception of spatial orientation is thought to rely on the brain's integration of visual, vestibular, proprioceptive, and somatosensory signals, as well as internal beliefs. When one of these signals breaks down, such as the vestibular signal in bilateral vestibulopathy, patients start compensating by relying more on the remaining cues. How these signals are reweighted in this integration process is difficult to establish, since they cannot be measured in isolation during natural tasks, are inherently noisy, and can be ambiguous or in conflict. Here, we review our recent work, combining experimental psychophysics with a reverse engineering approach, based on Bayesian inference principles, to quantify sensory noise levels and optimal (re)weighting at the individual subject level, in both patients with bilateral vestibular deficits and healthy controls. We show that these patients reweight the remaining sensory information, relying more on visual and other nonvestibular information than healthy controls in the perception of spatial orientation. This quantification approach could improve diagnostics and prognostics of multisensory integration deficits in vestibular patients, and contribute to an evaluation of rehabilitation therapies directed toward specific training programs.

7.
PLoS Comput Biol ; 12(3): e1004766, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26967730

ABSTRACT

Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we tested how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to the saccade direction during a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability.


Subjects
Fixation, Ocular/physiology, Memory/physiology, Models, Neurological, Motion Perception/physiology, Pattern Recognition, Visual/physiology, Saccades/physiology, Computer Simulation, Humans, Models, Statistical, Spatio-Temporal Analysis
8.
PLoS One ; 10(12): e0145015, 2015.
Article in English | MEDLINE | ID: mdl-26658990

ABSTRACT

When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
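The statistically optimal integration scheme the abstract refers to is the standard maximum-likelihood rule: cue weights proportional to reliability (inverse variance), with the fused estimate more precise than either cue alone. The sigma values below are illustrative, not the measured thresholds.

```python
def optimal_integration(sigma_visual, sigma_vestibular):
    """Maximum-likelihood cue integration: returns the visual weight and
    the standard deviation of the fused displacement estimate."""
    r_vis = 1.0 / sigma_visual ** 2       # reliability = inverse variance
    r_ves = 1.0 / sigma_vestibular ** 2
    w_vis = r_vis / (r_vis + r_ves)
    sigma_fused = (1.0 / (r_vis + r_ves)) ** 0.5
    return w_vis, sigma_fused

# Lowering visual coherence (larger visual sigma) shifts weight to the
# vestibular cue, mirroring the trial-to-trial reweighting in the study:
w_hi, s_hi = optimal_integration(1.0, 2.0)   # reliable optic flow
w_lo, s_lo = optimal_integration(4.0, 2.0)   # degraded optic flow
```

Note that with reliable optic flow the fused standard deviation falls below that of the better single cue, the signature of optimal integration.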


Subjects
Auditory Perception/physiology, Visual Perception/physiology, Adult, Cues (Psychology), Female, Humans, Male, Photic Stimulation, Psychomotor Performance, Young Adult
9.
J Vis ; 12(12)2012 Nov 14.
Article in English | MEDLINE | ID: mdl-23151410

ABSTRACT

In order to maintain visual stability during self-motion, the brain needs to update any egocentric spatial representations of the environment. Here, we use a novel psychophysical approach to investigate how and to what extent the brain integrates visual, extraocular, and vestibular signals pertaining to this spatial update. Participants were oscillated sideways at a frequency of 0.63 Hz while keeping gaze fixed on a stationary light. When the motion direction changed, a reference target was shown either in front of or behind the fixation point. At the next reversal, half a cycle later, we tested updating of this reference location by asking participants to judge whether a briefly flashed probe was shown to the left or right of the memorized target. We show that updating is not only biased, but that the direction and magnitude of this bias depend on both gaze and object location, implying that a gaze-centered reference frame is involved. Using geometric modeling, we further show that the gaze-dependent errors can be caused by an underestimation of translation amplitude, by a bias of visually perceived objects towards the fovea (i.e., a foveal bias), or by a combination of both.
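The geometric account of the translation-underestimation error can be sketched with small-angle parallax. This is our own illustrative geometry, with invented distances and gain, not the paper's fitted model.

```python
def parallax_shift(translation, target_depth, fixation_depth):
    """Small-angle approximation (radians) of how far a world-fixed target
    at `target_depth` shifts relative to the fixation point during a
    sideways head `translation` (all distances in meters). Targets nearer
    than fixation shift opposite to targets farther than fixation."""
    return translation * (1.0 / target_depth - 1.0 / fixation_depth)

# If the internal model underestimates translation (gain < 1), the
# uncompensated parallax predicts updating errors of opposite sign for
# targets in front of vs. behind fixation:
gain = 0.8  # illustrative underestimation of translation amplitude
near_error = (1.0 - gain) * parallax_shift(0.10, 0.40, 0.50)  # in front
far_error = (1.0 - gain) * parallax_shift(0.10, 0.60, 0.50)   # behind
```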


Subjects
Eye Movements/physiology, Fovea Centralis/physiology, Models, Neurological, Motion Perception/physiology, Orientation/physiology, Vestibule, Labyrinth/physiology, Adult, Female, Fixation, Ocular/physiology, Humans, Male, Movement/physiology, Photic Stimulation/methods, Psychomotor Performance/physiology, Psychophysics, Vision Disparity/physiology, Young Adult
10.
Mem Cognit ; 35(6): 1307-22, 2007 Sep.
Article in English | MEDLINE | ID: mdl-18035629

ABSTRACT

We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.


Subjects
Cognition, Models, Psychological, Semantics, Vocabulary, Humans, Language, Reading, Time Factors
11.
Neural Netw ; 19(3): 311-22, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16618535

ABSTRACT

Many of our daily activities are supported by behavioural goals that guide the selection of actions, which allow us to reach these goals effectively. Goals are considered to be important for action observation since they allow the observer to copy the goal of the action without the need to use the exact same means. The importance of being able to use different action means becomes evident when the observer and observed actor have different bodies (robots and humans) or bodily measurements (parents and children), or when the environments of actor and observer differ substantially (when an obstacle is present or absent in either environment). A selective focus on the action goals instead of the action means furthermore circumvents the need to consider the vantage point of the actor, which is consistent with recent findings that people prefer to represent the actions of others from their own individual perspective. In this paper, we use a computational approach to investigate how knowledge about action goals and means is used in action observation. We hypothesise that in action observation human agents are primarily interested in identifying the goals of the observed actor's behaviour. Behavioural cues (e.g. the way an object is grasped) may help to disambiguate the goal of the actor (e.g. whether a cup is grasped for drinking or handing it over). Recent advances in cognitive neuroscience are cited in support of the model's architecture.


Subjects
Computer Simulation, Goals, Models, Psychological, Observation, Psychomotor Performance/physiology, Attention, Habituation, Psychophysiologic, Humans, Imitative Behavior/physiology
12.
J Exp Psychol Learn Mem Cogn ; 31(2): 374-7, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15755254

ABSTRACT

T. Trabasso and J. Bartolone (2003) used a computational model of narrative text comprehension to account for empirical findings. The authors show that the same predictions are obtained without running the model. This is caused by the model's computational setup, which leaves most of the model's input unchanged.


Subjects
Cognition, Models, Psychological, Narration, Humans, Psychological Theory