Results 1 - 20 of 87
1.
Proc Natl Acad Sci U S A ; 119(20): e2118445119, 2022 05 17.
Article in English | MEDLINE | ID: mdl-35533281

ABSTRACT

The ability to sample sensory information with our hands is crucial for smooth and efficient interactions with the world. Despite this important role of touch, tactile sensations on a moving hand are perceived as weaker than when they are presented on the same, stationary hand. This phenomenon of tactile suppression has been explained by predictive mechanisms, such as internal forward models, that estimate future sensory states of the body on the basis of the motor command and suppress the associated predicted sensory feedback. The origins of tactile suppression have sparked much debate, with contemporary accounts claiming that suppression is independent of sensorimotor predictions and is instead due to an unspecific mechanism. Here, we target this debate and provide evidence for specific tactile suppression due to precise sensorimotor predictions. Participants stroked with their finger over textured objects that caused predictable vibrotactile feedback signals on that finger. Shortly before touching the texture, we probed tactile suppression by applying external vibrotactile probes on the moving finger that either matched or mismatched the frequency generated by the stroking movement along the texture. We found stronger suppression of the probes that matched the predicted sensory feedback. These results show that tactile suppression is specifically tuned to the predicted sensory states of a movement.


Subjects
Movement; Touch Perception; Feedback, Sensory; Hand; Humans; Touch
2.
Psychol Sci ; 35(2): 191-201, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38252798

ABSTRACT

To estimate object properties such as mass or friction, our brain relies on visual information to efficiently compute approximations. The role of sensorimotor feedback, however, is not well understood. Here we tested healthy adults (N = 79) on an inclined-plane problem (how much a plane can be tilted before an object starts to slide) and contrasted an interaction group with observation groups who accessed the involved forces by watching the objects being manipulated. We created objects of different masses and levels of friction and asked participants to estimate the critical tilt angle after pushing an object, lifting it, or both. Estimates correlated with applied forces and were biased toward object mass, with higher estimates for heavier objects. Our findings highlight that inferences about physical object properties are tightly linked to the human sensorimotor system and that humans integrate sensorimotor information even at the risk of nonveridical perceptual estimates.


Subjects
Weight Perception; Adult; Humans; Friction; Brain; Psychomotor Performance; Hand Strength
3.
J Vis ; 24(7): 10, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38995109

ABSTRACT

A current focus in sensorimotor research is the study of human perception and action in increasingly naturalistic tasks and visual environments. This is further enabled by the recent commercial success of virtual reality (VR) technology, which allows for highly realistic but well-controlled three-dimensional (3D) scenes. VR enables a multitude of different ways to interact with virtual objects, but only rarely are such interaction techniques evaluated and compared before being selected for a sensorimotor experiment. Here, we compare different response techniques for a memory-guided action task, in which participants indicated the position of a previously seen 3D object in a VR scene: pointing, using a virtual laser pointer of short or unlimited length, and placing, either the target object itself or a generic reference cube. Response techniques differed in availability of 3D object cues and requirement to physically move to the remembered object position by walking. Object placement was the most accurate but slowest due to repeated repositioning. When placing objects, participants tended to match the original object's orientation. In contrast, the laser pointer was fastest but least accurate, with the short pointer showing a good speed-accuracy compromise. Our findings can help researchers in selecting appropriate methods when studying naturalistic visuomotor behavior in virtual environments.


Subjects
Virtual Reality; Humans; Male; Female; Adult; Young Adult; Psychomotor Performance/physiology; Cues; Photic Stimulation/methods
4.
J Neurophysiol ; 130(5): 1142-1149, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37791381

ABSTRACT

Allocentric and egocentric reference frames are used to code the spatial position of action targets relative to objects in the environment, i.e., landmarks (allocentric), or relative to the observer (egocentric). Previous research investigated reference frames in isolation, for example, by shifting landmarks relative to the target and asking participants to reach to the remembered target location. Systematic reaching errors were found in the direction of the landmark shift and used as a proxy for allocentric spatial coding. Here, we examined the interaction of allocentric and egocentric reference frames by shifting the landmarks as well as the observer. We asked participants to encode a three-dimensional configuration of balls and to reproduce this configuration from memory after a short delay followed by a landmark or an observer shift. We also manipulated the number of landmarks to test its effect on the use of allocentric and egocentric reference frames. We found that participants were less accurate when reproducing the configuration of balls after an observer shift, which was reflected in larger configurational errors. In addition, an increase in the number of landmarks led to a stronger reliance on allocentric cues and a weaker contribution of egocentric cues. In sum, our results highlight the important role of egocentric cues for allocentric spatial coding in the context of memory-guided actions.

NEW & NOTEWORTHY Objects in our environment are coded relative to each other (allocentrically) and are thought to serve as independent and reliable cues (landmarks) in the context of unreliable egocentric signals. Contrary to this assumption, we demonstrate that egocentric cues alter allocentric spatial memory, which could reflect recently discovered interactions between allocentric and egocentric neural processing pathways. Furthermore, additional landmarks lead to a higher contribution of allocentric and a lower contribution of egocentric cues.


Subjects
Cues; Spatial Memory; Humans; Psychomotor Performance; Space Perception; Mental Recall
5.
J Neurophysiol ; 130(1): 104-116, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37283453

ABSTRACT

Pupillary responses have been reliably identified for cognitive and motor tasks, but less is known about their relation to mentally simulated movements (known as motor imagery). Previous work found pupil dilations during the execution of simple finger movements, where peak pupillary dilation scaled with the complexity of the finger movement and the force required. Recently, pupillary dilations were reported during imagery of grasping and piano playing. Here, we examined whether pupillary responses are sensitive to the dynamics of the underlying motor task for both executed and imagined reach movements. Participants reached or imagined reaching to one of three targets placed at different distances from a start position. Both executed and imagined movement times scaled with target distance, and they were highly correlated, confirming previous work and suggesting that participants did imagine the respective movement. Increased pupillary dilation was evident during motor execution compared with rest, with stronger dilations for larger movements. Pupil dilations also occurred during motor imagery; however, they were generally weaker than those during motor execution and were not influenced by imagined movement distance. Instead, dilations during motor imagery resembled pupil responses obtained during a nonmotor imagery task (imagining a previously viewed painting). Our results demonstrate that pupillary responses can reliably capture the dynamics of an executed goal-directed reaching movement, but suggest that pupillary responses during imagined reaching movements reflect general cognitive processes rather than motor-specific components related to the simulated dynamics of the sensorimotor system.

NEW & NOTEWORTHY Pupil size is influenced by the performance of cognitive and motor tasks. Here, we demonstrate that pupil size increases not only during execution but also during mental simulation of goal-directed reaching movements. However, pupil dilations scale with the movement amplitude of executed but not of imagined movements, whereas they are similar during motor imagery and a nonmotor imagery task.


Subjects
Imagination; Pupil; Humans; Pupil/physiology; Imagination/physiology; Movement/physiology; Time; Upper Extremity; Psychomotor Performance/physiology
6.
J Vis ; 23(5): 3, 2023 05 02.
Article in English | MEDLINE | ID: mdl-37140913

ABSTRACT

Humans can judge the quality of their perceptual decisions-an ability known as perceptual confidence. Previous work suggested that confidence can be evaluated on an abstract scale that can be sensory modality-independent or even domain-general. However, evidence is still scarce on whether confidence judgments can be directly made across visual and tactile decisions. Here, we investigated in a sample of 56 adults whether visual and tactile confidence share a common scale by measuring visual contrast and vibrotactile discrimination thresholds in a confidence-forced choice paradigm. Confidence judgments were made about the correctness of the perceptual decision between two trials involving either the same or different modalities. To estimate confidence efficiency, we compared discrimination thresholds obtained from all trials to those from trials judged to be relatively more confident. We found evidence for metaperception because higher confidence was associated with better perceptual performance in both modalities. Importantly, participants were able to judge their confidence across modalities without any costs in metaperceptual sensitivity and only minor changes in response times compared to unimodal confidence judgments. In addition, we were able to predict cross-modal confidence well from unimodal judgments. In conclusion, our findings show that perceptual confidence is computed on an abstract scale and that it can assess the quality of our decisions across sensory modalities.


Subjects
Metacognition; Adult; Humans; Metacognition/physiology; Visual Perception; Reaction Time; Judgment; Touch
7.
Behav Brain Sci ; 46: e405, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054279

ABSTRACT

Bowers et al. focus their criticisms on research that compares behavioral and brain data from the ventral stream with a class of deep neural networks for object recognition. While they are right to identify issues with current benchmarking research programs, they overlook a much more fundamental limitation of this literature: Disregarding the importance of action and interaction for perception.


Subjects
Pattern Recognition, Visual; Visual Perception; Humans; Brain; Brain Mapping
8.
Behav Res Methods ; 55(2): 570-582, 2023 02.
Article in English | MEDLINE | ID: mdl-35322350

ABSTRACT

Virtual reality (VR) is a powerful tool for researchers due to its potential to study dynamic human behavior in highly naturalistic environments while retaining full control over the presented stimuli. Due to advancements in consumer hardware, VR devices are now very affordable and have started to include technologies such as eye tracking, further extending potential research applications. Rendering engines such as Unity, Unreal, or Vizard enable researchers to easily create complex VR environments. However, implementing the experimental design can still pose a challenge, and these packages do not provide out-of-the-box support for trial-based behavioral experiments. Here, we present a Python toolbox designed to facilitate common tasks when developing experiments using the Vizard VR platform. It includes functionality for creating, randomizing, and presenting trial-based experimental designs and for saving results to standardized file formats. Moreover, the toolbox greatly simplifies continuous recording of eye and body movements using any hardware supported in Vizard. We further implement and describe a simple goal-directed reaching task in VR and show sample data recorded from five volunteers. The toolbox, example code, and data are all available on GitHub under an open-source license. We hope that our toolbox can simplify VR experiment development, reduce code duplication, and aid reproducibility and open-science efforts.
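To make the trial-based workflow described above concrete, here is a minimal, self-contained Python sketch of creating, randomizing, and saving a trial-based design. The function names and the CSV layout are illustrative assumptions, not the toolbox's actual API:

```python
import csv
import random

def build_trials(conditions, repetitions, seed=None):
    """Create a randomized trial list: each condition dict is repeated
    `repetitions` times, then the full list is shuffled."""
    rng = random.Random(seed)
    trials = [dict(c) for c in conditions for _ in range(repetitions)]
    rng.shuffle(trials)
    for i, t in enumerate(trials):
        t["trial_num"] = i + 1  # running trial index after shuffling
    return trials

def save_results(trials, filename):
    """Write one row per trial to a standardized CSV file."""
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(trials[0].keys()))
        writer.writeheader()
        writer.writerows(trials)

# Hypothetical 2 x 2 design (target distance x cue availability)
conditions = [{"distance": d, "cue": c}
              for d in ("near", "far") for c in ("visual", "none")]
trials = build_trials(conditions, repetitions=5, seed=1)
save_results(trials, "example_results.csv")
print(len(trials))  # 20 trials, shuffled
```

A real experiment loop would iterate over `trials`, present each condition, and append response fields to each trial dict before saving.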


Subjects
User-Computer Interface; Virtual Reality; Humans; Reproducibility of Results; Software
9.
Neuroimage ; 236: 118000, 2021 08 01.
Article in English | MEDLINE | ID: mdl-33864902

ABSTRACT

Somatosensory signals on a moving limb are typically suppressed. This results mainly from a predictive mechanism that generates an efference copy and attenuates the predicted sensory consequences of that movement. Sensory feedback is, however, important for movement control. Behavioral studies show that the strength of suppression on a moving limb increases during somatosensory reaching, when reach-relevant somatosensory signals from the target limb can additionally be used to plan and guide the movement, leading to increased reliability of sensorimotor predictions. It is still unknown how this suppression is neurally implemented. In this fMRI study, participants reached to a somatosensory (static finger) or an external target (touch screen) without vision. To probe suppression, participants detected brief vibrotactile stimuli on their moving finger shortly before reach onset. As expected, sensitivity to probes was reduced during reaching compared to baseline (resting), and this suppression was stronger during somatosensory than external reaching. BOLD activation associated with suppression was also modulated by the reach target: relative to baseline, processing of probes during somatosensory reaching led to distinct BOLD deactivations in somatosensory regions (postcentral gyrus; supramarginal gyrus, SMG), whereas probes during external reaching led to deactivations in the cerebellum. In line with the behavioral results, we also found additional deactivations during somatosensory relative to external reaching in the supplementary motor area, a region linked with sensorimotor prediction. Somatosensory reaching was also linked with increased functional connectivity between the left SMG and the right parietal operculum, along with the right anterior insula. We show that somatosensory processing on a moving limb is reduced when additional reach-relevant feedback signals from the target limb contribute to the movement, by down-regulating activation in regions associated with predictive and feedback processing.


Subjects
Cerebellum/physiology; Cerebral Cortex/physiology; Fingers/physiology; Motor Activity/physiology; Nerve Net/physiology; Touch Perception/physiology; Adult; Brain Mapping; Cerebellum/diagnostic imaging; Cerebral Cortex/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging; Male; Nerve Net/diagnostic imaging; Somatosensory Cortex/diagnostic imaging; Somatosensory Cortex/physiology; Young Adult
10.
Perception ; 50(10): 904-907, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34617834

ABSTRACT

Everyday movements are guided by objects' positions relative to other items in the scene (allocentric information) as well as by objects' positions relative to oneself (egocentric information). Allocentric information can guide movements to the remembered positions of hidden objects, but is it also used when the object remains visible? To stimulate the use of allocentric information, we had the position of the participant's finger control the velocity of a cursor that they used to intercept moving targets, so there was no one-to-one mapping between egocentric positions of the hand and cursor. We evaluated whether participants relied on allocentric information by shifting all task-relevant items simultaneously, leaving their allocentric relationships unchanged. If participants relied on allocentric information, they should not respond to this perturbation. However, they did: they responded in accordance with their responses to each item shifting independently, supporting the idea that fast guidance of ongoing movements primarily relies on egocentric information.


Subjects
Movement; Space Perception; Hand; Humans; Mental Recall
11.
J Neurophysiol ; 124(4): 1092-1102, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32845193

ABSTRACT

For any type of goal-directed hand and eye movement, it is important to determine the position of the target. Though many of these movements are directed toward visual targets, humans also perform movements to targets defined by somatosensory information only, such as proprioceptive (sensory signals about static limb position), kinesthetic (sensory signals about limb movement), and tactile signals (sensory signals about touch on the skin). In this study we investigated how each of these types of somatosensory information influences goal-directed hand and eye movements. Furthermore, we examined whether somatosensory target information has a differential influence on isolated and combined eye-hand movements. Participants performed right-hand reaching, eye, or coordinated eye-hand movements to their left index or middle fingers in the absence of any visual information. We varied somatosensory target information by allowing proprioceptive, proprioceptive-kinesthetic, proprioceptive-tactile, or proprioceptive-kinesthetic-tactile information. Reach endpoint precision was poorest when the target was defined by proprioceptive information only but improved when two different types of input were available. In addition, reach endpoints in conditions with kinesthetic target information were systematically shifted in the direction of movement, while static somatosensory information decayed over time and led to systematic undershoots of the reach target location. In contrast to the effect on reaches, somatosensory information did not influence gaze endpoint accuracy or precision. When performing coordinated eye-hand movements, reach accuracy and gaze endpoint precision improved, suggesting a bidirectional use of efferent information. We conclude that somatosensory target information influences endpoint control differently for goal-directed hand and eye movements to unseen targets.

NEW & NOTEWORTHY A systematic investigation of the contributions of different somatosensory modalities (proprioception, kinesthesia, touch) to goal-directed movements has been missing. Here we demonstrate that while eye movements are not affected by different types of somatosensory information, reach precision improves when two different types of information are available. Moreover, reach accuracy and gaze precision to unseen somatosensory targets improve when performing coordinated eye-hand movements, suggesting bidirectional contributions of efferent information in reach and eye movement control.


Subjects
Hand/physiology; Proprioception; Saccades; Touch Perception; Adult; Female; Goals; Humans; Kinesthesis; Male; Motor Skills; Psychomotor Performance
12.
J Neurophysiol ; 123(5): 1920-1932, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32267186

ABSTRACT

When reaching to a visual target, humans need to transform the spatial target representation into the coordinate system of their moving arm. It has been shown that increased uncertainty in such coordinate transformations, for instance, when the head is rolled toward one shoulder, leads to higher movement variability and influences movement decisions. However, it is unknown whether the brain incorporates such added variability in planning and executing movements. We designed an obstacle avoidance task in which participants had to reach, with or without visual feedback of the hand, to a visual target while avoiding collisions with an obstacle. We varied coordinate transformation uncertainty by varying head roll (straight, 30° clockwise, and 30° counterclockwise). In agreement with previous studies, we observed that reaching variability increased when the head was tilted. Head roll did not influence the number of collisions during reaching compared with the head-straight condition, but it did systematically change the obstacle avoidance behavior: participants changed the preferred direction of passing the obstacle and increased their safety margins, as indicated by stronger movement curvature. These results suggest that the brain takes the added movement variability during head roll into account and compensates for it by adjusting the reaching trajectories.

NEW & NOTEWORTHY We show that changing body geometry, such as rolling the head, results in compensatory reaching behaviors around obstacles. Specifically, we observed that head roll changed the preferred movement direction and increased trajectory curvature. As has been shown before, head roll increases movement variability due to stochastic coordinate transformations. These results thus provide evidence that the brain considers the added movement variability caused by coordinate transformations when producing accurate reach movements.
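The idea that a stochastic coordinate transformation adds movement variability can be illustrated with a small simulation: a target is rotated into body coordinates using a noisy estimate of head roll, and the spread of the transformed positions grows with the tilt. This is a deliberately simplified sketch, not the study's model; the 2D rotation, the noise level, and the target position are all assumptions made for illustration:

```python
import math
import random

def transform_target(target_xy, head_roll_deg, roll_noise_sd, rng):
    """Rotate a target into body coordinates using a noisy estimate of
    head roll; noise in the estimated angle perturbs the result."""
    noisy_roll = math.radians(head_roll_deg + rng.gauss(0.0, roll_noise_sd))
    x, y = target_xy
    return (x * math.cos(noisy_roll) - y * math.sin(noisy_roll),
            x * math.sin(noisy_roll) + y * math.cos(noisy_roll))

def endpoint_spread(head_roll_deg, roll_noise_sd=2.0, n=2000, seed=0):
    """Standard deviation of the transformed x-positions across trials,
    a proxy for reach endpoint variability."""
    rng = random.Random(seed)
    xs = [transform_target((10.0, 0.0), head_roll_deg, roll_noise_sd, rng)[0]
          for _ in range(n)]
    mean = sum(xs) / n
    return (sum((x - mean) ** 2 for x in xs) / n) ** 0.5

# Spread grows with head roll: the same angular noise displaces the
# transformed target more when the rotation is larger.
print(endpoint_spread(0) < endpoint_spread(30))  # True
```

In this toy geometry the sensitivity to angular noise scales roughly with sin(head roll), so a tilted head yields a noisier transformation even though the noise source is unchanged.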


Subjects
Feedback, Sensory/physiology; Head Movements/physiology; Motor Activity/physiology; Psychomotor Performance/physiology; Space Perception/physiology; Uncertainty; Adult; Female; Humans; Male; Young Adult
13.
Exp Brain Res ; 238(9): 1813-1826, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32500297

ABSTRACT

In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations. It is well established that we use allocentric information in real-time and memory-guided movements. Most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale using dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual ("view where the ball landed while standing still"-Experiment 1) or involved an action ("intercept the ball with the foot just before it lands"-Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball's landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferably encoded the ball relative to the thrower when they had to intercept the ball, suggesting that the use of allocentric information is determined by the encoding task by enhancing task-relevant allocentric information. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements, but generalize to whole-body movements in large-scale dynamic scenes.


Subjects
Space Perception; Virtual Reality; Humans; Memory; Mental Recall; Movement
14.
J Vis ; 20(4): 1, 2020 04 09.
Article in English | MEDLINE | ID: mdl-32271893

ABSTRACT

An essential difference between pictorial space, displayed as paintings, photographs, or computer screens, and the visual space experienced in the real world is that the observer has a defined location, and thus valid information about the distance and direction of objects, in the latter but not in the former. Egocentric information should therefore be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of studies have relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor that is on the table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, even after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.


Subjects
Memory, Short-Term/physiology; Pattern Recognition, Visual/physiology; Space Perception/physiology; Adult; Female; Humans; Male; Optical Illusions; Orientation; Young Adult
15.
J Vis ; 20(2): 8, 2020 02 10.
Article in English | MEDLINE | ID: mdl-32097487

ABSTRACT

The wide diversity of articles in this issue reveals an explosion of evidence for the mechanisms of prediction in the visual system. When thought of as visual priors, predictive mechanisms can be seen as tightly interwoven with incoming sensory data. Prediction is thus a fundamental and essential aspect not only of visual perception but of the actions that are guided by perception.


Subjects
Saccades/physiology; Somatosensory Cortex/physiology; Visual Perception/physiology; Forecasting; Humans
16.
Hum Brain Mapp ; 40(18): 5172-5184, 2019 12 15.
Article in English | MEDLINE | ID: mdl-31430005

ABSTRACT

Exploring an object's shape by touch also renders information about its surface roughness. It has been suggested that shape and roughness are processed distinctly in the brain, a result based on comparing brain activation when exploring objects that differed in one of these features. To investigate the neural mechanisms of top-down control on haptic perception of shape and roughness, we presented the same multidimensional objects but varied the relevance of each feature. Specifically, participants explored two objects that varied in shape (oblongness of cuboids) and surface roughness. They had to compare either the shape or the roughness in an alternative forced-choice task. Moreover, we examined whether the activation strength of the identified brain regions as measured by functional magnetic resonance imaging (fMRI) can predict behavioral performance in the haptic discrimination task. We observed a widespread network of activation for shape and roughness perception comprising the bilateral precentral and postcentral gyri, cerebellum, and insula. Task-relevance of the object's shape increased activation in the right supramarginal gyrus (SMG/BA 40) and the right precentral gyrus (PreCG/BA 44), suggesting that activation in these areas does not merely reflect stimulus-driven processes, such as exploring shape, but also entails top-down controlled processes driven by task-relevance. Moreover, the strength of the SMG/PreCG activation predicted individual performance in the shape but not in the roughness discrimination task. No activation was found for the reversed contrast (roughness > shape). We conclude that macrogeometric properties, such as shape, can be modulated by top-down mechanisms, whereas roughness, a microgeometric feature, seems to be processed automatically.


Subjects
Brain Mapping/methods; Brain/diagnostic imaging; Brain/physiology; Exploratory Behavior/physiology; Magnetic Resonance Imaging/methods; Touch Perception/physiology; Adult; Discrimination Learning/physiology; Female; Humans; Male; Random Allocation; Young Adult
17.
J Vis ; 19(5): 4, 2019 05 01.
Article in English | MEDLINE | ID: mdl-31058990

ABSTRACT

Somatosensory perception is hampered on the moving limb during a goal-directed movement. This somatosensory suppression is mostly attributed to a forward model that predicts future states of the system based on the established motor command. Here, we examined whether and how this suppression is modulated by the predictability of object features important for controlling a grasping movement. Participants reached to grasp an object between thumb and index finger and then lifted it as straight as possible. Objects with symmetric or asymmetric mass distributions were presented either in a blocked or random manner, so that the object's mass distribution could be anticipated or not. At the moment of object contact, a brief vibrotactile stimulus of varying intensities was presented on the dorsal part of the moving index finger. Participants had to report whether they detected the stimulus. When the object's mass distribution was predictable, contact points with the object were modulated to the object's centre of mass. This modulation contributed to a minimized resultant object roll during lifting. When the object's mass distribution was unpredictable, participants chose a default grasping configuration, resulting in greater object roll for asymmetric mass distributions. Somatosensory perception was hampered when grasping both types of objects compared to baseline (no-movement). Importantly, somatosensory suppression was stronger when participants could predict the object's mass distribution. We suggest that the strength of somatosensory suppression depends on the predictability of movement-relevant object features.


Subjects
Anticipation, Psychological/physiology; Motion Perception/physiology; Movement/physiology; Psychomotor Performance/physiology; Adult; Female; Fingers/physiology; Hand Strength/physiology; Humans; Male; Middle Aged; Somatosensory Cortex/physiology; Young Adult
18.
J Vis ; 19(9): 10, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31434106

ABSTRACT

Prediction allows humans and other animals to prepare for future interactions with their environment. This is important in our dynamically changing world that requires fast and accurate reactions to external events. Knowing when and where an event is likely to occur allows us to plan eye, hand, and body movements that are suitable for the circumstances. Predicting the sensory consequences of such movements helps to differentiate between self-produced and externally generated movements. In this review, we provide a selective overview of experimental studies on predictive mechanisms in human vision for action. We present classic paradigms and novel approaches investigating mechanisms that underlie the prediction of events guiding eye and hand movements.


Subjects
Movement/physiology; Psychomotor Performance/physiology; Vision, Ocular/physiology; Feedback, Sensory/physiology; Goals; Humans
19.
J Vis ; 19(9): 9, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31426084

ABSTRACT

Tactile suppression refers to the phenomenon that tactile signals are attenuated during movement planning and execution when presented on a moving limb compared to rest. It is usually explained in the context of the forward model of movement control that predicts the sensory consequences of an action. Recent research suggests that aging increases reliance on sensorimotor predictions resulting in stronger somatosensory suppression. However, the mechanisms contributing to this age effect remain to be clarified. We measured age-related differences in tactile suppression during reaching and investigated the modulation by cognitive processes. A total of 23 younger (18-27 years) and 26 older (59-78 years) adults participated in our study. We found robust suppression of tactile signals when executing reaching movements. Age group differences corroborated stronger suppression in old age. Cognitive task demands during reaching, although overall boosting suppression effects, did not modulate the age effect. Across age groups, stronger suppression was associated with lower individual executive capacities. There was no evidence that baseline sensitivity had a prominent impact on the magnitude of suppression. We conclude that aging alters the weighting of sensory signals and sensorimotor predictions during movement control. Our findings suggest that individual differences in tactile suppression are critically driven by executive functions.


Subjects
Aging/physiology; Movement/physiology; Touch Perception/physiology; Adolescent; Adult; Aged; Arm; Executive Function/physiology; Female; Humans; Male; Middle Aged; Young Adult
20.
J Neurophysiol ; 119(1): 118-123, 2018 01 01.
Article in English | MEDLINE | ID: mdl-29021392

ABSTRACT

Simultaneous eye and hand movements are highly coordinated and tightly coupled. This raises the question whether the selection of eye and hand targets relies on a shared attentional mechanism or on separate attentional systems. Previous studies have revealed conflicting results, reporting evidence for both shared and separate systems. Movement properties such as movement curvature can provide novel insights into this question, as they provide a sensitive measure for attentional allocation during target selection. In the current study, participants performed simultaneous eye and hand movements to the same or different visual target locations. We show that both saccade and reaching movements curve away from the other effector's target location when they are simultaneously performed to spatially distinct locations. We argue that a shared attentional mechanism is involved in selecting eye and hand targets, which may be found on the level of effector-independent priority maps.

NEW & NOTEWORTHY Movement properties such as movement curvature have been widely neglected as sources of information in investigating whether the attentional systems underlying target selection for eye and hand movements are separate or shared. We show that movement curvature is influenced by the other effector's target location in simultaneous eye and hand movements to spatially distinct locations. Our results provide evidence for shared attentional systems involved in the selection of saccade and reach targets.


Subjects
Hand/physiology; Movement; Saccades; Adult; Attention; Female; Humans; Male; Visual Perception