Results 1 - 6 of 6
1.
J Exp Psychol Hum Percept Perform ; 47(5): 635-647, 2021 May.
Article in English | MEDLINE | ID: mdl-33705199

ABSTRACT

When vision is removed, limb position has been shown to drift progressively during repetitive arm movements. The posterior parietal cortex (PPC) is known to be involved in the processing of multisensory information, the formation of an internal estimate of hand position, and online motor control. Here, we compared hand position drift between healthy controls and 2 patients with PPC damage to gain insight into the mechanisms underlying movement drift and to investigate the possible role of the PPC in this process. To do so, we asked participants to perform back-and-forth movements between 2 targets, in the dark and under different gaze fixation conditions. Each individual participant consistently drifted to the same end position for a given hand and gaze condition. We found that the final drift distance was related to small systematic errors made on the very first trial in the dark, with an approximately 3.5-fold increase in magnitude. Furthermore, PPC damage resulted in greater movement drift in patients when the unseen hand was in the contralesional oculocentric space and also when the target was located in the lower visual field. We conclude that the PPC is involved in the proprioceptive representation of hand position in oculocentric coordinates used for reach planning and motor control. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Hand , Psychomotor Performance , Ataxia , Humans , Movement , Parietal Lobe
2.
PLoS One ; 16(3): e0247254, 2021.
Article in English | MEDLINE | ID: mdl-33724991

ABSTRACT

Having optimal quality of vision as well as adequate cognitive capacities is known to be essential for driving safety. However, the interaction between visual and cognitive mechanisms while driving remains unclear. We hypothesized that, in a context of high cognitive load, reduced visual acuity would have a negative impact on driving behavior, even when the acuity corresponds to the legal threshold for obtaining a driving license in Canada, and that the impact on driving performance would grow as the degradation of visual acuity increased. To investigate this relationship, we examined driving behavior in a driving simulator under optimal and reduced vision conditions through two scenarios involving different levels of cognitive demand: (1) a simple rural driving scenario with some pre-programmed events, and (2) a highway driving scenario accompanied by a concurrent task involving the use of a navigation device. Two visual quality degradation groups (lower/higher) were compared on their driving behavior. The results support the hypothesis: a dual-task effect was indeed observed, provoking less stable driving behavior, and, after statistically controlling for the impact of cognitive load, an effect of visual degradation emerged in this dual-task context. These results support the idea that visual quality degradation impacts driving behavior when combined with a high mental workload driving environment, while this impact is absent under low cognitive load driving conditions.


Subject(s)
Automobile Driving/psychology , Distracted Driving/psychology , Reaction Time/physiology , Adult , Attention/physiology , Canada , Cognition/physiology , Computer Simulation , Female , Humans , Male , Psychomotor Performance , Vision, Ocular/physiology , Visual Acuity/physiology , Visual Perception/physiology
3.
PLoS One ; 15(12): e0240201, 2020.
Article in English | MEDLINE | ID: mdl-33382720

ABSTRACT

Driving is an everyday task involving a complex interaction between visual and cognitive processes. As such, an increase in the cognitive and/or visual demands can lead to a mental overload, which can be detrimental to driving safety. Accumulating evidence suggests that eye and head movements are relevant indicators of visuo-cognitive demands and attention allocation. This study aims to investigate the effects of visual degradation on eye-head coordination as well as visual scanning behavior during a highly demanding task in a driving simulator. A total of 21 emmetropic participants (21 to 34 years old) performed dual-task driving in which they were asked to maintain a constant speed on a highway while completing a visual search and detection task on a navigation device. Participants did the experiment with optimal vision and with contact lenses that introduced a visual perturbation (myopic defocus). The results indicate modifications of eye-head coordination and of the dynamics of visual scanning in response to the induced visual perturbation. More specifically, the head was more involved in horizontal gaze shifts when the visual needs were not met. Furthermore, the evaluation of visual scanning dynamics, based on time-based entropy, which measures the complexity and randomness of scanpaths, revealed that eye and gaze movements became less explorative and more stereotyped when vision was not optimal. These results provide evidence for a reorganization of both eye and head movements in response to increasing visuo-cognitive demands during a driving task. Altogether, these findings suggest that eye and head movements can provide relevant information about the visuo-cognitive demands associated with complex tasks. Ultimately, eye-head coordination and visual scanning dynamics may be good candidates for estimating drivers' workload and better characterizing risky driving behavior.
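The entropy-based notion of scanpath randomness described above can be sketched as Shannon entropy over first-order transitions between areas of interest (AOIs). This is a simplified illustration, not the study's actual time-based measure, and the AOI names are invented:

```python
from collections import Counter
from math import log2

def transition_entropy(aois):
    """Shannon entropy (bits) of first-order gaze transitions
    between areas of interest (AOIs) in a fixation sequence."""
    pairs = list(zip(aois, aois[1:]))          # consecutive AOI transitions
    counts = Counter(pairs)
    total = len(pairs)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A stereotyped scanpath (repetitive back-and-forth) yields low entropy;
# an explorative one (varied transitions) yields higher entropy.
stereotyped = ["road", "nav", "road", "nav", "road", "nav", "road"]
explorative = ["road", "mirror", "nav", "road", "speedo", "mirror", "nav"]
print(transition_entropy(stereotyped))  # lower
print(transition_entropy(explorative))  # higher
```

Under this measure, the finding that scanning became "more stereotyped" under myopic defocus corresponds to a drop in transition entropy.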


Subject(s)
Attention/physiology , Automobile Driving/psychology , Eye Movements/physiology , Head Movements/physiology , Psychomotor Performance , Adult , Cognition/physiology , Female , Humans , Male , Risk-Taking , Simulation Training , Vision, Ocular/physiology
4.
J Vis ; 18(11): 2, 2018 10 01.
Article in English | MEDLINE | ID: mdl-30326049

ABSTRACT

The premotor theory of attention and the visual attention model make different predictions about the temporal and spatial allocation of presaccadic attentional facilitation. The current experiment investigated the spatial and temporal dynamics of presaccadic attentional facilitation during pro- and antisaccade planning; we investigated whether attention shifts only to the saccade goal location, to the target location, or elsewhere, and when. Participants performed a dual-task paradigm with blocks of either anti- or prosaccades, and also discriminated symbols appearing at different locations before saccade onset (a measure of attentional allocation). In prosaccade blocks, discrimination on correct prosaccades was best at the target location, while on errors, discrimination was best at the location opposite the target. This pattern was reversed in antisaccade blocks, although discrimination remained high opposite the target location. In addition, we took advantage of a large range of saccadic landing positions and showed that performance across both types of saccades was best at the actual saccade goal location (where the eye will actually land) rather than at the instructed position. Finally, temporal analyses showed that discrimination remained highest at the saccade goal location from long before saccade onset until just before it, increasing slightly for antisaccades closer to saccade onset. These findings are in line with the premises of the premotor theory of attention, showing that attentional allocation is primarily linked, both temporally and spatially, to the saccade goal location.


Asunto(s)
Atención , Movimientos Sacádicos/fisiología , Adulto , Femenino , Humanos , Masculino , Estimulación Luminosa , Tiempo de Reacción , Adulto Joven
5.
PLoS One ; 13(7): e0199627, 2018.
Article in English | MEDLINE | ID: mdl-29979697

ABSTRACT

When pointing to parts of our own body (e.g., the opposite index finger), the position of the target is derived from proprioceptive signals. Consistent with the principles of multisensory integration, it has been found that participants better matched the position of their index finger when they also had visual cues about its location. Unlike vision, touch may not provide additional information about finger position in space, since fingertip tactile information theoretically remains the same irrespective of the postural configuration of the upper limb. However, since tactile and proprioceptive information are ultimately coded within the same population of posterior parietal neurons, within high-level spatial representations, we nevertheless hypothesized that additional tactile information could benefit the processing of proprioceptive signals. To investigate the influence of tactile information on proprioceptive localization, we asked 19 participants to reach with the right hand towards the opposite unseen index finger (the proprioceptive target). Vibrotactile stimuli were applied to the target index finger prior to movement execution. We found that participants made smaller errors and more consistent reaches following tactile stimulation. These results demonstrate that transient touch provided at the proprioceptive target improves subsequent reaching precision and accuracy. Such improvement was not observed when tactile stimulation was delivered to a distinct body part (the shoulder). This suggests a specific spatial integration of touch and proprioception at the level of high-level cortical body representations, resulting in touch improving position sense.


Subject(s)
Proprioception , Touch , Vibration , Adolescent , Adult , Analysis of Variance , Female , Fingers , Humans , Male , Young Adult
6.
J Neurophysiol ; 119(5): 1981-1992, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29465322

ABSTRACT

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all available information, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability, requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights rather than effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for the left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis were true. We found that the proprioceptive weights for the left and right hands were highly consistent regardless of differences in sensory variability between the two hands, as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with a large interindividual range but independent of each hand's specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
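The Bayes-optimal integration the abstract refers to amounts to inverse-variance weighting of the two cues. A minimal sketch, with illustrative values not taken from the study:

```python
def fuse(mu_v, var_v, mu_p, var_p):
    """Minimum-variance (Bayes-optimal) fusion of a visual estimate
    (mu_v, var_v) and a proprioceptive estimate (mu_p, var_p)
    of hand position along one axis."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)   # visual weight
    mu = w_v * mu_v + (1 - w_v) * mu_p            # fused estimate
    var = 1 / (1 / var_v + 1 / var_p)             # fused variance (reduced)
    return mu, var, w_v

# Vision four times more reliable (smaller variance) -> higher visual weight.
mu, var, w_v = fuse(mu_v=0.0, var_v=1.0, mu_p=2.0, var_p=4.0)
print(w_v)  # 0.8
print(mu)   # 0.4
```

Under the reliability-based hypothesis, `var_p` (and hence `w_v`) would differ between dominant and nondominant hands; under the learned modality-specific hypothesis that the study supports, the weight is fixed per modality regardless of each hand's measured variability.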


Subject(s)
Hand/physiology , Motor Activity/physiology , Proprioception/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Middle Aged , Young Adult