Results 1 - 5 of 5
1.
J Vis; 19(12): 21, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31647515

ABSTRACT

Depth perception requires the use of an internal model of the eye-head geometry to infer distance from binocular retinal images and extraretinal 3D eye-head information, particularly ocular vergence. Similarly, for motion in depth perception, gaze angle is required to correctly interpret the spatial direction of motion from retinal images; however, it is unknown whether the brain can make adequate use of extraretinal version and vergence information to correctly transform binocular retinal motion into 3D spatial coordinates. Here we tested this hypothesis by asking participants to reconstruct the spatial trajectory of an isolated disparity stimulus moving in depth either peri-foveally or peripherally while participants' gaze was oriented at different vergence and version angles. We found large systematic errors in the perceived motion trajectory that reflected an intermediate reference frame between a purely retinal interpretation of binocular retinal motion (not accounting for veridical vergence and version) and the spatially correct motion. We quantify these errors with a 3D reference frame model accounting for target, eye, and head position upon motion percept encoding. This model could capture the behavior well, revealing that participants tended to underestimate their version by up to 17%, overestimate their vergence by up to 22%, and underestimate the overall change in retinal disparity by up to 64%, and that the use of extraretinal information depended on retinal eccentricity. Since such large perceptual errors are not observed in everyday viewing, we suggest that both monocular retinal cues and binocular extraretinal signals are required for accurate real-world motion in depth perception.


Subjects
Depth Perception , Eye Movements , Motion Perception , Retina/physiology , Vision Disparity , Cues (Psychology) , Equipment Design , Female , Fovea Centralis/physiology , Humans , Imaging, Three-Dimensional , Male , Reproducibility of Results , Vision, Binocular , Young Adult
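The gain idea in this abstract can be given a rough numerical illustration: extraretinal version and vergence and the retinal disparity change are each scaled by a gain before a small-angle retinal-to-spatial mapping. Everything below (function names, the interocular distance, the simplified geometry) is an assumption for illustration, not the authors' actual model:

```python
import numpy as np

IOD = 0.065  # interocular distance in metres (assumed value)

def vergence_to_distance(vergence):
    """Small-angle binocular geometry: vergence angle ~= IOD / distance."""
    return IOD / vergence

def perceived_motion(delta_vergence, version, vergence,
                     g_version=0.83, g_vergence=1.22, g_disparity=0.36):
    """Map a change in target vergence (disparity change) plus extraretinal
    version/vergence into a head-centred displacement, after scaling each
    signal by a gain. Default gains mirror the reported biases (version
    -17%, vergence +22%, disparity change -64%); gains of 1.0 give the
    veridical transformation."""
    d_start = vergence_to_distance(g_vergence * vergence)
    d_end = vergence_to_distance(g_vergence * vergence
                                 + g_disparity * delta_vergence)
    depth_change = d_end - d_start
    azimuth = g_version * version  # (mis)perceived gaze direction
    # displacement along the perceived gaze line: (lateral, depth)
    return np.array([depth_change * np.sin(azimuth),
                     depth_change * np.cos(azimuth)])
```

With the biased gains the recovered motion-in-depth is systematically compressed relative to the veridical (all-gains-one) transformation, qualitatively reproducing the pattern of errors the abstract reports.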
2.
J Neurophysiol; 113(5): 1377-99, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-25475344

ABSTRACT

Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.


Subjects
Models, Neurological , Psychomotor Performance , Pursuit, Smooth , Visual Perception , Humans , Retina/physiology
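The gain-modulation architecture this abstract describes can be sketched structurally. The network below is untrained, with assumed input dimensions and random weights, so it shows only the multiplicative combination of retinal and extraretinal signals, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are illustrative assumptions: 2D retinal motion plus 3D eye
# and head orientation and velocity signals (12 extraretinal inputs).
N_RET, N_EXTRA, N_HID, N_OUT = 2, 12, 40, 3

W_ret = rng.normal(scale=0.5, size=(N_HID, N_RET))
W_extra = rng.normal(scale=0.5, size=(N_HID, N_EXTRA))
W_out = rng.normal(scale=0.5, size=(N_OUT, N_HID))

def pursuit_command(retinal_motion, extraretinal):
    """One feedforward pass: each hidden unit's retinal drive is scaled
    by an eye/head-dependent gain before the linear readout, i.e. the
    gain-modulation mechanism named in the abstract."""
    drive = W_ret @ retinal_motion
    gain = 1.0 + np.tanh(W_extra @ extraretinal)  # eye/head-dependent gain
    hidden = np.tanh(gain * drive)
    return W_out @ hidden  # 3D pursuit velocity command
```

The same retinal input yields different 3D commands under different eye/head signals, which is the property the trained model exploits to produce spatially correct pursuit.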
3.
J Neurophysiol; 110(8): 1945-57, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23926035

ABSTRACT

Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.


Subjects
Psychomotor Performance , Pursuit, Smooth , Adult , Arm/physiology , Brain/physiology , Humans , Models, Neurological
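The retinal-versus-spatial comparison in this abstract can be sketched as a simple compensation index placing the initial arm direction between the two predictions. The names and the linear angular interpolation are illustrative assumptions, not the paper's exact analysis:

```python
import numpy as np

def direction(v):
    """Direction (radians) of a 2D velocity vector."""
    return np.arctan2(v[1], v[0])

def compensation_index(arm_dir, target_vel_spatial, eye_vel):
    """Place the measured initial arm direction between the retinal and
    spatial predictions: 0 = purely retinal planning (eye velocity
    ignored), 1 = spatially correct planning (eye velocity fully
    accounted for, as in the ~75-102% range reported above)."""
    retinal_dir = direction(target_vel_spatial - eye_vel)  # motion on the retina
    spatial_dir = direction(target_vel_spatial)            # motion in space
    return (arm_dir - retinal_dir) / (spatial_dir - retinal_dir)
```

For example, with the target moving rightward in space while the eye pursues upward, an arm movement launched exactly along the spatial target direction scores 1.0.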
4.
J Vis; 12(5): 6, 2012 May 25.
Article in English | MEDLINE | ID: mdl-22637707

ABSTRACT

Humans often perform visually guided arm movements in a dynamic environment. To accurately plan visually guided manual tracking movements, the brain should ideally transform the retinal velocity input into a spatially appropriate motor plan, taking the three-dimensional (3D) eye-head-shoulder geometry into account. Indeed, retinal and spatial target velocity vectors generally do not align because of different eye-head postures. Alternatively, the planning could be crude (based only on retinal information) and the movement corrected online using visual feedback. This study aims to investigate how accurate the motor plan generated by the central nervous system is. We computed predictions about the movement plan if the eye and head position are taken into account (spatial hypothesis) or not (retinal hypothesis). For the motor plan to be accurate, the brain should compensate for the head roll and resulting ocular counterroll as well as the misalignment between retinal and spatial coordinates when the eyes lie in oblique gaze positions. Predictions were tested on human subjects who manually tracked moving targets in darkness; the predictions were compared with the initial arm movement direction, which reflects the motor plan. Subjects tracked the target spatially accurately, although imperfectly. Therefore, the brain takes the 3D eye-head-shoulder geometry into account for the planning of visually guided manual tracking.


Subjects
Eye Movements/physiology , Fixation, Ocular/physiology , Head Movements/physiology , Motion Perception/physiology , Psychomotor Performance/physiology , Visual Cortex/physiology , Adult , Humans , Reproducibility of Results , Young Adult
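For the head-roll condition, the spatial hypothesis amounts to rotating the retinal velocity by the net eye-in-space torsion, i.e. head roll minus the partially compensating ocular counterroll. The sketch below assumes a hypothetical 2D frontal-plane setup and a ~10% counterroll gain, both illustrative values rather than the paper's parameters:

```python
import numpy as np

def spatial_plan(retinal_vel, head_roll, counterroll_gain=0.1):
    """Spatial-hypothesis sketch: undo the retinal-image rotation caused
    by head roll, reduced by the ocular counterroll that partially
    compensates it. The retinal hypothesis would instead use
    retinal_vel unrotated."""
    torsion = head_roll * (1.0 - counterroll_gain)  # net retinal rotation
    c, s = np.cos(torsion), np.sin(torsion)
    rotation = np.array([[c, -s],
                         [s, c]])
    return rotation @ retinal_vel
```

With zero head roll the two hypotheses coincide; the larger the roll, the more the predicted arm directions diverge, which is what makes the condition diagnostic.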
5.
Article in English | MEDLINE | ID: mdl-23443667

ABSTRACT

In behavioral neuroscience, many experiments are developed in one or two spatial dimensions, but when scientists tackle problems in three dimensions (3D), they often face new challenges: results obtained in lower dimensions are not always extendable to 3D. In the motor planning of eye, gaze, or arm movements, and in sensorimotor transformation problems, the 3D kinematics of external objects (stimuli) or internal ones (body parts) must often be considered: how can the 3D position and orientation of these objects be described and linked together? We describe how dual quaternions provide a convenient way to describe 3D kinematics for position alone (point transformation) or for combined position and orientation (through line transformation), easily modeling rotations, translations, screw motions, or combinations of these. We also derive expressions for the velocities of points and lines as well as for the transformation velocities. We then apply these tools to a motor planning task for manual tracking and to the modeling of the forward and inverse kinematics of a seven-degree-of-freedom, three-link arm to demonstrate the usefulness of dual quaternions for building models in these kinds of applications.
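The point-transformation use of dual quaternions described above can be sketched as follows, using the standard (w, x, y, z) quaternion convention; function names are illustrative, not from the paper:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    """Quaternion conjugate."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dq_from_axis_angle_translation(axis, angle, t):
    """Unit dual quaternion (real, dual) for a rotation about `axis` by
    `angle` followed by translation `t`; dual part = 0.5 * t_quat * real."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    real = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    dual = 0.5 * qmul(np.concatenate(([0.0], t)), real)
    return real, dual

def dq_transform_point(dq, p):
    """Apply the rigid transform to point p: p' = real * p * real~ + t,
    where t is recovered from the dual part as t = 2 * dual * real~."""
    real, dual = dq
    t = 2.0 * qmul(dual, qconj(real))[1:]
    p_quat = np.concatenate(([0.0], p))
    return qmul(qmul(real, p_quat), qconj(real))[1:] + t
```

Screw motions and compositions follow from multiplying dual quaternions componentwise over the dual unit, which is what makes them convenient for chaining eye-head-shoulder transformations.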
