Results 1 - 20 of 26
1.
Neuropsychologia ; 193: 108746, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38081353

ABSTRACT

A stable representation of object size, in spite of continuous variations in retinal input due to changes in viewing distance, is critical for perceiving and acting in a real 3D world. In fact, our perceptual and visuo-motor systems exhibit size and grip constancies in order to compensate for the natural shrinkage of the retinal image with increased distance. The neural basis of this size-distance scaling remains largely unknown, although multiple lines of evidence suggest that size-constancy operations might take place remarkably early, already at the level of the primary visual cortex. In this study, we examined for the first time the temporal dynamics of size constancy during perception and action by using a combined measurement of event-related potentials (ERPs) and kinematics. Participants were asked to maintain their gaze steadily on a fixation point and perform either a manual estimation or a grasping task towards disks of different sizes placed at different distances. Importantly, the physical size of the target was scaled with distance to yield a constant retinal angle. Meanwhile, we recorded EEG data from 64 scalp electrodes and hand movements with a motion capture system. We focused on the first positive-going visual evoked component peaking at approximately 90 ms after stimulus onset. We found earlier latencies and greater amplitudes in response to bigger than smaller disks of matched retinal size, regardless of the task. In line with the ERP results, manual estimates and peak grip apertures were larger for the bigger targets. We also found task-related differences at later stages of processing from a cluster of central electrodes, whereby the mean amplitude of the P2 component was greater for manual estimation than grasping. Taken together, these findings provide novel evidence that size constancy for real objects at real distances occurs at the earliest cortical stages and that early visual processing does not change as a function of task demands.
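As a rough illustration of the two dependent measures described above, the sketch below (Python, fabricated data) extracts the peak latency and amplitude of an early positive ERP component within an a-priori time window, and the peak grip aperture from thumb-index marker distance. The window limits, sampling rates, and array shapes are assumptions for demonstration, not the authors' parameters:

```python
import numpy as np

# Synthetic stand-ins for the recorded data (assumed shapes/rates, not the authors' values):
# erp: trial-averaged voltage at one occipital electrode, sampled at 1000 Hz, 0 = stimulus onset
# thumb_xyz, index_xyz: marker positions (n_samples x 3) from a motion-capture system
fs_eeg = 1000.0
t = np.arange(-0.1, 0.4, 1.0 / fs_eeg)               # -100 to 400 ms around stimulus onset
erp = np.exp(-((t - 0.09) ** 2) / (2 * 0.015 ** 2))   # fake P1-like deflection peaking near 90 ms

# Peak amplitude/latency of the first positive component in an a-priori window (e.g. 70-110 ms)
win = (t >= 0.07) & (t <= 0.11)
peak_idx = np.argmax(erp[win])
p1_amplitude = erp[win][peak_idx]
p1_latency_ms = t[win][peak_idx] * 1000.0

# Peak grip aperture: maximum 3D thumb-index distance over the reach (fabricated trajectories)
rng = np.random.default_rng(0)
thumb_xyz = rng.normal(size=(500, 3))
index_xyz = thumb_xyz + np.array([0.06, 0.0, 0.0]) + 0.01 * rng.normal(size=(500, 3))
aperture = np.linalg.norm(index_xyz - thumb_xyz, axis=1)
peak_grip_aperture = aperture.max()

print(f"P1 latency: {p1_latency_ms:.0f} ms, amplitude: {p1_amplitude:.2f}")
print(f"Peak grip aperture: {peak_grip_aperture * 100:.1f} cm")
```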


Subject(s)
Distance Perception , Visual Perception , Humans , Distance Perception/physiology , Biomechanical Phenomena , Movement , Electroencephalography , Size Perception/physiology
2.
Neuropsychologia ; 194: 108773, 2024 02 15.
Article in English | MEDLINE | ID: mdl-38142960

ABSTRACT

Sensorimotor integration involves feedforward and reentrant processing of sensory input. Grasp-related motor activity precedes and is thought to influence visual object processing. Yet, while the importance of reentrant feedback is well established in perception, the top-down modulations for action and the neural circuits involved in this process have received less attention. Do action-specific intentions influence the processing of visual information in the human cortex? Using a cue-separation fMRI paradigm, we found that action-specific instruction processing (manual alignment vs. grasp) became apparent only after the visual presentation of oriented stimuli, and occurred as early as in the primary visual cortex and extended to the dorsal visual stream, motor and premotor areas. Further, dorsal stream area aIPS, known to be involved in object manipulation, and the primary visual cortex showed task-related functional connectivity with frontal, parietal and temporal areas, consistent with the idea that reentrant feedback from dorsal and ventral visual stream areas modifies visual inputs to prepare for action. Importantly, both the task-dependent modulations and connections were linked specifically to the object presentation phase of the task, suggesting a role in processing the action goal. Our results show that intended manual actions have an early, pervasive, and differential influence on the cortical processing of vision.
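A minimal sketch of the kind of task-related functional connectivity comparison mentioned above, assuming fabricated ROI time series and a simple Fisher-z correlation contrast rather than the authors' actual analysis:

```python
import numpy as np

# Illustration only (assumed data): seed_ts is a time series from a seed ROI (e.g. aIPS),
# target_ts from a candidate area, each restricted to the object-presentation volumes of
# the two tasks (align vs. grasp). Coupling is compared between tasks.
rng = np.random.default_rng(1)
n_vols = 120
seed_align, seed_grasp = rng.normal(size=n_vols), rng.normal(size=n_vols)
target_align = 0.6 * seed_align + 0.8 * rng.normal(size=n_vols)   # stronger coupling in one task
target_grasp = 0.1 * seed_grasp + 1.0 * rng.normal(size=n_vols)

def fisher_z(r):
    """Fisher r-to-z transform so correlations can be compared/averaged across runs."""
    return np.arctanh(r)

z_align = fisher_z(np.corrcoef(seed_align, target_align)[0, 1])
z_grasp = fisher_z(np.corrcoef(seed_grasp, target_grasp)[0, 1])
print(f"Seed-target coupling (z): align={z_align:.2f}, grasp={z_grasp:.2f}, diff={z_align - z_grasp:.2f}")
```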


Subject(s)
Magnetic Resonance Imaging , Visual Perception , Humans , Magnetic Resonance Imaging/methods , Brain Mapping
3.
Curr Res Neurobiol ; 4: 100070, 2023.
Article in English | MEDLINE | ID: mdl-36632448

ABSTRACT

The functional specialization of the ventral stream in Perception and the dorsal stream in Action is the cornerstone of the leading model proposed by Goodale and Milner in 1992. This model is based on neuropsychological evidence and has been a matter of debate for almost three decades, during which the dual-visual stream hypothesis has received much attention, including support and criticism. The advent of functional magnetic resonance imaging (fMRI) has made it possible to investigate the brain areas involved in Perception and Action and has provided useful data on the functional specialization of the two streams. Research on this topic has been quite prolific, yet no meta-analysis so far has explored the spatial convergence in the involvement of the two streams in Action. The present meta-analysis (N = 53 fMRI and PET studies) was designed to reveal the specific neural activations associated with Action (i.e., grasping and reaching movements), and the extent to which visual information affects the involvement of the two streams during motor control. Our results provide a comprehensive view of the consistent and spatially convergent neural correlates of Action based on neuroimaging studies conducted over the past two decades. In particular, occipital-temporal areas showed higher activation likelihood in the Vision compared to the No vision condition, but no difference between reach and grasp actions. Frontal-parietal areas were consistently involved in both reach and grasp actions regardless of visual availability. We discuss our results in light of the well-established dual-visual stream model and frame these findings in the context of recent discoveries obtained with advanced fMRI methods, such as multivoxel pattern analysis.
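A highly simplified sketch of the logic behind coordinate-based meta-analysis of activation peaks (fabricated coordinates and kernel width; not the ALE implementation used in the study):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Each study contributes reported activation peaks, which are smoothed into a "modeled
# activation" map; maps are then combined across studies to locate spatially convergent
# activation. Grid size, kernel, and peaks below are placeholders.
grid = (40, 48, 40)                      # coarse brain-sized voxel grid (assumed, ~4 mm voxels)
rng = np.random.default_rng(2)

def modeled_activation(peaks, sigma_vox=2.5):
    """Place delta peaks on the grid and smooth them with a Gaussian kernel."""
    vol = np.zeros(grid)
    for x, y, z in peaks:
        vol[x, y, z] = 1.0
    vol = gaussian_filter(vol, sigma=sigma_vox)
    return vol / vol.max()               # scale to a per-study activation probability

# Fake peak coordinates for a handful of studies (illustration only)
studies = [rng.integers(5, 35, size=(4, 3)) for _ in range(10)]
ma_maps = np.stack([modeled_activation(p) for p in studies])

# Union across studies: probability that at least one study activates each voxel
ale = 1.0 - np.prod(1.0 - ma_maps, axis=0)
peak = np.unravel_index(np.argmax(ale), ale.shape)
print("Voxel of maximal convergence:", peak, "value:", round(float(ale[peak]), 3))
```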

4.
Eur J Neurosci ; 56(6): 4803-4818, 2022 09.
Article in English | MEDLINE | ID: mdl-35841138

ABSTRACT

The visual cortex has been extensively studied to investigate its role in object recognition but to a lesser degree to determine how action planning influences the representation of objects' features. We used functional MRI and pattern classification methods to determine whether, during action planning, object features (orientation and location) could be decoded in an action-dependent way. Sixteen human participants used their right dominant hand to perform movements (Align or Open reach) towards one of two real, oriented 3D objects that were simultaneously presented and placed on either side of a fixation cross. While both movements required aiming towards the target location, Align but not Open reach movements required participants to precisely adjust hand orientation. Therefore, we hypothesized that if the representation of object features is modulated by the upcoming action, pre-movement activity patterns would allow more accurate dissociation between object features in the Align than in the Open reach task. We found such dissociation in the anterior and posterior parietal cortex, as well as in the dorsal premotor cortex, suggesting that visuomotor processing is modulated by the upcoming task. The early visual cortex showed significant decoding accuracy for the dissociation between object features in the Align but not the Open reach task. However, there was no significant difference between the decoding accuracy in the two tasks. These results demonstrate that movement-specific preparatory signals modulate object representation in the frontal and parietal cortex, and to a lesser extent in the early visual cortex, likely through feedback functional connections.
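A minimal sketch of cross-validated pattern classification of an object feature from pre-movement voxel patterns, using fabricated data and scikit-learn; the ROI, labels, and effect sizes are placeholders, not the study's pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

# X_task: trial-wise voxel patterns from one ROI during the planning phase of one task,
# y: object-feature labels (e.g. two orientations), decoded separately per task.
rng = np.random.default_rng(3)
n_trials, n_voxels = 40, 200
y = np.repeat([0, 1], n_trials // 2)

def decode(X, y, n_splits=5):
    """Cross-validated linear SVM decoding accuracy."""
    clf = SVC(kernel="linear", C=1.0)
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv).mean()

X_align = rng.normal(size=(n_trials, n_voxels)) + 0.3 * y[:, None]   # weak feature signal
X_open = rng.normal(size=(n_trials, n_voxels))                        # no feature signal

print(f"Align task decoding: {decode(X_align, y):.2f}")
print(f"Open-reach task decoding: {decode(X_open, y):.2f} (chance = 0.50)")
```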


Subject(s)
Brain Mapping , Visual Cortex , Brain Mapping/methods , Humans , Magnetic Resonance Imaging/methods , Occipital Lobe , Parietal Lobe , Psychomotor Performance
5.
Sci Rep ; 10(1): 15774, 2020 09 25.
Article in English | MEDLINE | ID: mdl-32978418

ABSTRACT

Haptic exploration produces mental object representations that can be memorized for subsequent object-directed behaviour. Storage of haptically acquired object images (HOIs) engages, besides canonical somatosensory areas, the early visual cortex (EVC). Clear evidence for a causal contribution of EVC to HOI representation is still lacking. The use of visual information by the grasping system necessarily undergoes a frame-of-reference shift that integrates eye position. We hypothesized that if the motor system uses HOIs stored in a retinotopic code in the visual cortex, then their use is likely to depend, at least in part, on eye position. We measured the kinematics of four fingers of the right hand in 15 healthy participants while they grasped unseen objects, previously explored haptically, placed behind an opaque panel. The participants never saw the objects and operated exclusively on the basis of haptic information. The position of the object was fixed, in front of the participant, but gaze varied from trial to trial between three possible positions: towards the unseen object or away from it, on either side. Results showed that the kinematics of the middle and little fingers during reaching for the unseen object changed significantly according to gaze position. In a control experiment, we showed that intransitive hand movements were not modulated by gaze direction. Manipulating eye position thus produces small but significant configuration errors (behavioural errors due to shifts in frame of reference), possibly related to an eye-centered frame of reference, despite the absence of visual information, indicating that the haptic and the visual/oculomotor systems share resources during delayed haptic grasping.
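To illustrate the kind of kinematic comparison described above, the sketch below contrasts a per-participant kinematic summary (e.g. a finger's peak height) across two gaze conditions with a paired test; the variable names, units, and values are fabricated stand-ins, not the study's analysis:

```python
import numpy as np
from scipy import stats

# Fabricated participant-level summaries (e.g. middle-finger peak height, in mm) in two of
# the three gaze conditions, for 15 participants.
rng = np.random.default_rng(8)
n_participants = 15
gaze_at_object = 62.0 + 3.0 * rng.normal(size=n_participants)
gaze_away = 65.0 + 3.0 * rng.normal(size=n_participants)   # small gaze-dependent shift

t_val, p_val = stats.ttest_rel(gaze_at_object, gaze_away)
print(f"Gaze effect on kinematics: t({n_participants - 1}) = {t_val:.2f}, p = {p_val:.3f}")
```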


Subject(s)
Eye Movements , Hand Strength/physiology , Touch Perception , Adult , Female , Humans , Male , Young Adult
6.
J Neurosci ; 40(23): 4525-4535, 2020 06 03.
Article in English | MEDLINE | ID: mdl-32354854

ABSTRACT

Coordinated reach-to-grasp movements are often accompanied by rapid eye movements (saccades) that displace the desired object image relative to the retina. Parietal cortex compensates for this by updating reach goals relative to current gaze direction, but its role in the integration of oculomotor and visual orientation signals for updating grasp plans is unknown. Based on a recent perceptual experiment, we hypothesized that inferior parietal cortex (specifically supramarginal gyrus [SMG]) integrates saccade and visual signals to update grasp plans in additional intraparietal/superior parietal regions. To test this hypothesis in humans (7 females, 6 males), we used a functional magnetic resonance imaging paradigm, where saccades sometimes interrupted grasp preparation toward a briefly presented object that later reappeared (with the same/different orientation) just before movement. Right SMG and several parietal grasp regions, namely, left anterior intraparietal sulcus and bilateral superior parietal lobule, met our criteria for transsaccadic orientation integration: they showed task-dependent saccade modulations and, during grasp execution, they were specifically sensitive to changes in object orientation that followed saccades. Finally, SMG showed enhanced functional connectivity with both prefrontal saccade regions (consistent with oculomotor input) and anterior intraparietal sulcus/superior parietal lobule (consistent with sensorimotor output). These results support the general role of parietal cortex in the integration of visuospatial perturbations, and provide specific cortical modules for the integration of oculomotor and visual signals for grasp updating.

SIGNIFICANCE STATEMENT: How does the brain simultaneously compensate for both external and internally driven changes in visual input? For example, how do we grasp an unstable object while eye movements are simultaneously changing its retinal location? Here, we used fMRI to identify a group of inferior parietal (supramarginal gyrus) and superior parietal (intraparietal and superior parietal) regions that show saccade-specific modulations during unexpected changes in object/grasp orientation, and functional connectivity with frontal cortex saccade centers. This provides a network, complementary to the reach goal updater, that integrates visuospatial updating into grasp plans, and may help to explain some of the more complex symptoms associated with parietal damage, such as constructional ataxia.


Subject(s)
Hand Strength/physiology , Orientation, Spatial/physiology , Parietal Lobe/diagnostic imaging , Parietal Lobe/physiology , Psychomotor Performance/physiology , Saccades/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Photic Stimulation/methods , Young Adult
7.
Neuroimage ; 218: 116981, 2020 09.
Article in English | MEDLINE | ID: mdl-32454207

ABSTRACT

Recent evidence points to a role of the primary visual cortex that goes beyond visual processing into high-level cognitive and motor-related functions, including action planning, even in the absence of feedforward visual information. It has been proposed that, at the neural level, motor imagery is a simulation based on motor representations, and neuroimaging studies have shown overlapping and shared activity patterns for motor imagery and action execution in frontal and parietal cortices. Yet, the role of the early visual cortex in motor imagery remains unclear. Here we used multivoxel pattern analyses on functional magnetic resonance imaging (fMRI) data to examine whether the content of motor imagery and action intention can be reliably decoded from the activity patterns in the retinotopic location of the target object in the early visual cortex. Further, we investigated whether the discrimination between specific actions generalizes across imagined and intended movements. Eighteen right-handed human participants (11 females) imagined or performed delayed hand actions towards a centrally located object composed of a small shape attached to a large shape. Actions consisted of grasping the large or small shape, and reaching to the center of the object. We found that despite comparable fMRI signal amplitude for different planned and imagined movements, activity patterns in the early visual cortex, as well as dorsal premotor and anterior intraparietal cortex, accurately represented action plans and action imagery. However, movement content generalized across actively planned and covertly imagined actions in parietal but not in early visual or premotor cortex, suggesting a generalized motor representation only in regions that are highly specialized for object-directed grasping actions and movement goals. In sum, action planning and imagery have overlapping but non-identical neural mechanisms in the cortical action network.
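A minimal sketch of cross-condition generalization (train a classifier on action labels from planned trials, test it on imagined trials), with fabricated patterns and labels standing in for the ROI data:

```python
import numpy as np
from sklearn.svm import SVC

# Fabricated trial-wise voxel patterns sharing an action-specific component across conditions.
rng = np.random.default_rng(4)
n_trials, n_voxels = 60, 150
y = np.tile([0, 1, 2], n_trials // 3)                 # grasp-large, grasp-small, reach

signal = rng.normal(size=(3, n_voxels))               # shared action-specific pattern
X_plan = signal[y] * 0.5 + rng.normal(size=(n_trials, n_voxels))
X_imagine = signal[y] * 0.5 + rng.normal(size=(n_trials, n_voxels))

clf = SVC(kernel="linear").fit(X_plan, y)             # train on planning trials
cross_acc = (clf.predict(X_imagine) == y).mean()      # test on imagery trials
print(f"Train on planning, test on imagery: {cross_acc:.2f} (chance = {1/3:.2f})")
```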


Subject(s)
Imagination/physiology , Psychomotor Performance/physiology , Visual Cortex/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
8.
Brain Struct Funct ; 224(9): 3291-3308, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31673774

ABSTRACT

Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.


Subject(s)
Anticipation, Psychological/physiology , Intention , Memory/physiology , Psychomotor Performance/physiology , Visual Cortex/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Motor Activity , Visual Pathways/physiology , Visual Perception/physiology , Young Adult
9.
Neuropsychologia ; 128: 150-165, 2019 05.
Article in English | MEDLINE | ID: mdl-29753019

ABSTRACT

Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions of occipitotemporal cortex that include most of early visual cortex, and she shows complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveal that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or by subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon.


Subject(s)
Blindness, Cortical/diagnostic imaging , Blindness, Cortical/psychology , Motion Perception , Visual Cortex/pathology , Brain Mapping , Cerebral Infarction/pathology , Cerebral Infarction/psychology , Contrast Sensitivity , Discrimination, Psychological , Female , Humans , Magnetic Resonance Imaging , Middle Aged , Neuroimaging , Psychophysics , Visual Perception
10.
Eur J Neurosci ; 47(8): 901-917, 2018 04.
Article in English | MEDLINE | ID: mdl-29512943

ABSTRACT

Targets for goal-directed action can be encoded in allocentric coordinates (relative to another visual landmark), but it is not known how these are converted into egocentric commands for action. Here, we investigated this using a slow event-related fMRI paradigm, based on our previous behavioural finding that the allocentric-to-egocentric (Allo-Ego) conversion for reach is performed at the first possible opportunity. Participants were asked to remember (and eventually reach towards) the location of a briefly presented target relative to another visual landmark. After a first memory delay, participants were forewarned by a verbal instruction if the landmark would reappear at the same location (potentially allowing them to plan a reach following the auditory cue before the second delay), or at a different location where they had to wait for the final landmark to be presented before response, and then reach towards the remembered target location. As predicted, participants showed landmark-centred directional selectivity in occipital-temporal cortex during the first memory delay, and only developed egocentric directional selectivity in occipital-parietal cortex during the second delay for the 'Same cue' task, and during response for the 'Different cue' task. We then compared cortical activation between these two tasks at the times when the Allo-Ego conversion occurred, and found common activation in right precuneus, right presupplementary motor area and bilateral dorsal premotor cortex. These results confirm that the brain converts allocentric codes to egocentric plans at the first possible opportunity, and identify the four most likely candidate sites specific to the Allo-Ego transformation for reaches.


Subject(s)
Cerebral Cortex/physiology , Mental Recall/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Adult , Auditory Perception/physiology , Female , Functional Neuroimaging , Humans , Magnetic Resonance Imaging , Male , Visual Perception/physiology , Young Adult
11.
Cortex ; 98: 128-148, 2018 01.
Article in English | MEDLINE | ID: mdl-28668221

ABSTRACT

Although the neural underpinnings of visually guided grasping and reaching have been well delineated within lateral and medial fronto-parietal networks (respectively), the contributions of subcomponents of visuomotor actions have not been explored in detail. Using careful subtraction logic, here we investigated which aspects of grasping, reaching, and pointing movements drive activation across key areas within visuomotor networks implicated in hand actions. For grasping tasks, we find activation differences based on the precision required (fine > coarse grip: anterior intraparietal sulcus, aIPS), the requirement to lift the object (grip + lift > grip: aIPS; dorsal premotor cortex, PMd; and supplementary motor area, SMA), and the number of digits employed (3-/5- vs. 2-digit grasps: ventral premotor cortex, PMv; motor cortex, M1, and somatosensory cortex, S1). For reaching/pointing tasks, we find activation differences based on whether the task required arm transport ((reach-to-point with index finger and reach-to-touch with knuckles) vs. point-without-reach; anterior superior parietal lobule, aSPL) and whether it required pointing to the object centre ((point-without-reach and reach-to-point) vs. reach-to-touch: anterior superior parieto-occipital cortex, aSPOC). For point-without-reach, in which the index finger is oriented towards the object centre but from a distance (point-without-reach > (reach-to-point and reach-to-touch)), we find activation differences that may be related to the communicative nature of the task (temporo-parietal junction, TPJ) and the need to precisely locate the target (lateral occipito-temporal cortex, LOTC). The present findings elucidate the different subcomponents of hand actions and the roles of specific brain regions in their computation.


Subject(s)
Brain/diagnostic imaging , Hand Strength/physiology , Psychomotor Performance/physiology , Adult , Brain/physiology , Brain Mapping , Female , Functional Neuroimaging , Humans , Magnetic Resonance Imaging , Male , Movement/physiology , Young Adult
12.
Cortex ; 98: 84-101, 2018 01.
Article in English | MEDLINE | ID: mdl-28532578

ABSTRACT

An influential model of vision suggests the presence of two visual streams within the brain: a dorsal occipito-parietal stream which mediates action and a ventral occipito-temporal stream which mediates perception. One of the cornerstones of this model is DF, a patient with visual form agnosia following bilateral ventral stream lesions. Despite her inability to identify and distinguish visual stimuli, DF can still use visual information to control her hand actions towards these stimuli. These observations have been widely interpreted as demonstrating a double dissociation from optic ataxia, a condition observed after bilateral dorsal stream damage in which patients are unable to act towards objects that they can recognize. In Experiment 1, we investigated how patient DF performed on the classical diagnostic task for optic ataxia, reaching in central and peripheral vision. We replicated recent findings that DF is remarkably inaccurate when reaching to peripheral targets, but not when reaching in free vision. In addition, we present new evidence that her peripheral reaching errors follow the optic ataxia pattern, increasing with target eccentricity and being biased towards fixation. In Experiments 2 and 3, we examined for the first time DF's on-line control of reaching using a double-step paradigm in fixation-controlled and free-vision versions of the task. DF was impaired when performing fast on-line corrections in all conditions tested, similar to optic ataxia patients. Our findings question the long-standing assumption that DF's dorsal visual stream is functionally intact and that her on-line visuomotor control is spared. In contrast, in addition to visual form agnosia, DF also has visuomotor symptoms of optic ataxia which are most likely explained by bilateral damage to the superior parietal-occipital cortex (SPOC). We thus conclude that patient DF can no longer be considered as an appropriate single-case model for testing the neural basis of perception and action dissociations.


Subject(s)
Agnosia/physiopathology , Ataxia/physiopathology , Psychomotor Performance/physiology , Visual Perception/physiology , Female , Humans , Middle Aged , Reaction Time/physiology
13.
J Neurosci ; 37(48): 11572-11591, 2017 11 29.
Article in English | MEDLINE | ID: mdl-29066555

ABSTRACT

The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay.

SIGNIFICANCE STATEMENT: Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it.
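A bare-bones sketch of a psychophysiological interaction (PPI) regressor like the analysis referenced above; it omits the HRF deconvolution that standard PPI pipelines perform, and all data below are fabricated:

```python
import numpy as np

# Simplified PPI: interaction of a contrast-coded task regressor with a seed ROI time series.
rng = np.random.default_rng(5)
n_vols = 200
seed_ts = rng.normal(size=n_vols)                       # physiological term: seed (e.g. OP) time series
task = np.zeros(n_vols)
task[20:60] = 1.0                                       # haptic-exploration blocks
task[100:140] = -1.0                                    # visual-exploration blocks (contrast-coded)

ppi = (seed_ts - seed_ts.mean()) * task                 # psychophysiological interaction term

# Regress a target voxel's time series on [task, seed, PPI]; the PPI beta indexes how much
# seed-target coupling differs between haptic and visual exploration.
target_ts = 0.4 * ppi + 0.5 * seed_ts + rng.normal(size=n_vols)
X = np.column_stack([np.ones(n_vols), task, seed_ts, ppi])
betas, *_ = np.linalg.lstsq(X, target_ts, rcond=None)
print(f"PPI (task-dependent coupling) beta: {betas[3]:.2f}")
```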


Subject(s)
Contrast Sensitivity/physiology , Darkness , Fovea Centralis/physiology , Pattern Recognition, Visual/physiology , Touch Perception/physiology , Visual Cortex/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Touch/physiology , Young Adult
14.
Cereb Cortex ; 27(11): 5242-5260, 2017 11 01.
Article in English | MEDLINE | ID: mdl-27744289

ABSTRACT

The cortical mechanisms for reach have been studied extensively, but directionally selective mechanisms for visuospatial target memory, movement planning, and movement execution have not been clearly differentiated in the human. We used an event-related fMRI design with a visuospatial memory delay, followed by a pro-/anti-reach instruction, a planning delay, and finally a "go" instruction for movement. This sequence yielded temporally separable preparatory responses that expanded from modest parieto-frontal activation for visual target memory to broad occipital-parietal-frontal activation during planning and execution. Using the pro/anti instruction to differentiate visual and motor directional selectivity during planning, we found that one occipital area showed contralateral "visual" selectivity, whereas a broad constellation of left hemisphere occipital, parietal, and frontal areas showed contralateral "movement" selectivity. Temporal analysis of these areas through the entire memory-planning sequence revealed early visual selectivity in most areas, followed by movement selectivity in most areas, with all areas showing a stereotypical visuo-movement transition. Cross-correlation of these spatial parameters through time revealed separate spatiotemporally correlated modules for visual input, motor output, and visuo-movement transformations that spanned occipital, parietal, and frontal cortex. These results demonstrate a highly distributed occipital-parietal-frontal reach network involved in the transformation of retrospective sensory information into prospective movement plans.
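As a toy illustration of cross-correlating selectivity time courses, the sketch below estimates the lag between a "visual" and a "movement" directional-selectivity profile; the TR, time courses, and lag convention are assumptions for demonstration only:

```python
import numpy as np

# Fabricated selectivity time courses: "visual" selectivity (contralateral target) peaks early,
# "movement" selectivity (contralateral reach) peaks later in the memory-planning sequence.
rng = np.random.default_rng(6)
t = np.arange(0, 30, 2.0)                                   # volumes at an assumed TR of 2 s
visual_sel = np.exp(-((t - 8) ** 2) / 20) + 0.05 * rng.normal(size=t.size)
movement_sel = np.exp(-((t - 16) ** 2) / 20) + 0.05 * rng.normal(size=t.size)

# Normalized full cross-correlation; the lag of the peak indicates which profile leads.
a = (visual_sel - visual_sel.mean()) / visual_sel.std()
b = (movement_sel - movement_sel.mean()) / movement_sel.std()
xcorr = np.correlate(a, b, mode="full") / t.size
lags = np.arange(-t.size + 1, t.size) * 2.0                  # lag in seconds

best_lag = lags[np.argmax(xcorr)]
print(f"Movement selectivity lags visual selectivity by ~{-best_lag:.0f} s")
```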


Subject(s)
Frontal Lobe/physiology , Hand/physiology , Motor Activity/physiology , Movement/physiology , Occipital Lobe/physiology , Parietal Lobe/physiology , Adult , Brain Mapping , Female , Frontal Lobe/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Occipital Lobe/diagnostic imaging , Parietal Lobe/diagnostic imaging , Time Factors , Young Adult
16.
eNeuro ; 2(3), 2015.
Article in English | MEDLINE | ID: mdl-26464989

ABSTRACT

Reaching to a location in space is supported by a cortical network that operates in a variety of reference frames. Computational models and recent fMRI evidence suggest that this diversity originates from neuronal populations dynamically shifting between reference frames as a function of task demands and sensory modality. In this human fMRI study, we extend this framework to nonmanipulative grasping movements, an action that depends on multiple properties of a target, not only its spatial location. By presenting targets visually or somaesthetically, and by manipulating gaze direction, we investigate how information about a target is encoded in gaze- and body-centered reference frames in dorsomedial and dorsolateral grasping-related circuits. Data were analyzed using a novel multivariate approach that combines classification and cross-classification measures to explicitly aggregate evidence in favor of and against the presence of gaze- and body-centered reference frames. We used this approach to determine whether reference frames are differentially recruited depending on the availability of sensory information, and where in the cortical networks there is common coding across modalities. Only in the left anterior intraparietal sulcus (aIPS) was coding of the grasping target modality dependent: predominantly gaze-centered for visual targets and body-centered for somaesthetic targets. Left superior parieto-occipital cortex consistently coded targets for grasping in a gaze-centered reference frame. Left anterior precuneus and premotor areas operated in a modality-independent, body-centered frame. These findings reveal how dorsolateral grasping area aIPS could play a role in the transition between modality-independent gaze-centered spatial maps and body-centered motor areas.

17.
Eur J Neurosci ; 41(4): 454-65, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25400211

ABSTRACT

The visuo-motor channel hypothesis (Jeannerod, 1981) postulates that grasping movements consist of a grip and a transport component differing in their reliance on intrinsic vs. extrinsic object properties (e.g. size vs. location, respectively). While recent neuroimaging studies have revealed separate brain areas implicated in grip and transport components within the parietal lobe, less is known about the neural processing of extrinsic and intrinsic properties of objects for grasping actions. We used functional magnetic resonance imaging adaptation to examine the cortical areas involved in processing object size, object location or both. Participants grasped (using the dominant right hand) or passively viewed sequential pairs of objects that could differ in size, location or both. We hypothesized that if intrinsic and extrinsic object properties are processed separately, as suggested by the visuo-motor channel hypothesis, we would observe adaptation to object size in areas that code the grip and adaptation to location in areas that code the transport component. On the other hand, if intrinsic and extrinsic object properties are not processed separately, brain areas involved in grasping may show adaptation to both object size and location. We found adaptation to object size for grasping movements in the left anterior intraparietal sulcus (aIPS), in agreement with the idea that object size is processed separately from location. In addition, the left superior parietal occipital sulcus (SPOC), primary somatosensory and motor area (S1/M1), precuneus, dorsal premotor cortex (PMd), and supplementary motor area (SMA) showed non-additive adaptation to both object size and location. We propose different roles for the aIPS as compared with the SPOC, S1/M1, precuneus, PMd and SMA. In particular, while the aIPS codes intrinsic object properties, which are relevant for hand preshaping and force scaling, area SPOC, S1/M1, precuneus, PMd and SMA code intrinsic as well as extrinsic object properties, both of which are relevant for digit positioning during grasping.
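A minimal sketch of an ROI-level fMRI-adaptation (repetition-suppression) test on fabricated subject-wise beta estimates; the effect size, subject count, and ROI are placeholders, not the study's results:

```python
import numpy as np
from scipy import stats

# Adaptation shows up as a lower response when a property (e.g. object size) repeats across
# the pair of stimuli than when it is novel. Betas below are fabricated.
rng = np.random.default_rng(7)
n_subjects = 16
novel_size = 1.0 + 0.2 * rng.normal(size=n_subjects)       # beta when size changes
repeated_size = 0.8 + 0.2 * rng.normal(size=n_subjects)    # suppressed beta when size repeats

t_val, p_val = stats.ttest_rel(novel_size, repeated_size)
suppression = (novel_size - repeated_size).mean()
print(f"Size adaptation: mean suppression = {suppression:.2f}, "
      f"t({n_subjects - 1}) = {t_val:.2f}, p = {p_val:.3f}")
```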


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Hand/physiology , Movement , Psychomotor Performance , Visual Perception , Adaptation, Physiological , Adult , Female , Hand/innervation , Hand Strength , Humans , Magnetic Resonance Imaging , Male
18.
J Neurosci ; 34(37): 12515-26, 2014 Sep 10.
Article in English | MEDLINE | ID: mdl-25209289

ABSTRACT

The location of a remembered reach target can be encoded in egocentric and/or allocentric reference frames. Cortical mechanisms for egocentric reach are relatively well described, but the corresponding allocentric representations are essentially unknown. Here, we used an event-related fMRI design to distinguish human brain areas involved in these two types of representation. Our paradigm consisted of three tasks with identical stimulus display but different instructions: egocentric reach (remember absolute target location), allocentric reach (remember target location relative to a visual landmark), and a nonspatial control, color report (report color of target). During the delay phase (when only target location was specified), the egocentric and allocentric tasks elicited widely overlapping regions of cortical activity (relative to the control), but with higher activation in parietofrontal cortex for egocentric task and higher activation in early visual cortex for allocentric tasks. In addition, egocentric directional selectivity (target relative to gaze) was observed in the superior occipital gyrus and the inferior occipital gyrus, whereas allocentric directional selectivity (target relative to a visual landmark) was observed in the inferior temporal gyrus and inferior occipital gyrus. During the response phase (after movement direction had been specified either by reappearance of the visual landmark or a pro-/anti-reach instruction), the parietofrontal network resumed egocentric directional selectivity, showing higher activation for contralateral than ipsilateral reaches. These results show that allocentric and egocentric reach mechanisms use partially overlapping but different cortical substrates and that directional specification is different for target memory versus reach response.


Subject(s)
Cerebral Cortex/physiology , Movement/physiology , Orientation/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Spatial Memory/physiology , Spatial Navigation/physiology , Adult , Humans , Male , Nerve Net/physiology , Young Adult
19.
Cereb Cortex ; 24(6): 1540-54, 2014 Jun.
Article in English | MEDLINE | ID: mdl-23362111

ABSTRACT

Grasping behaviors require the selection of grasp-relevant object dimensions, independent of overall object size. Previous neuroimaging studies found that the intraparietal cortex processes object size, but it is unknown whether the graspable dimension (i.e., grasp axis between selected points on the object) or the overall size of objects triggers activation in that region. We used functional magnetic resonance imaging adaptation to investigate human brain areas involved in processing the grasp-relevant dimension of real 3-dimensional objects in grasping and viewing tasks. Trials consisted of 2 sequential stimuli in which the object's grasp-relevant dimension, its global size, or both were novel or repeated. We found that calcarine and extrastriate visual areas adapted to object size regardless of the grasp-relevant dimension during viewing tasks. In contrast, the superior parietal occipital cortex (SPOC) and lateral occipital complex of the left hemisphere adapted to the grasp-relevant dimension regardless of object size and task. Finally, the dorsal premotor cortex adapted to the grasp-relevant dimension in grasping, but not in viewing, tasks, suggesting that motor processing was complete at this stage. Taken together, our results provide a complete cortical circuit for progressive transformation of general object properties into grasp-related responses.


Subject(s)
Cerebral Cortex/physiology , Form Perception/physiology , Hand/physiology , Psychomotor Performance/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neural Pathways/physiology , Physical Stimulation , Psychophysics , Signal Processing, Computer-Assisted , Visual Perception/physiology , Young Adult
20.
PLoS One ; 8(9): e73629, 2013.
Article in English | MEDLINE | ID: mdl-24040007

ABSTRACT

Behavioral and neuropsychological research suggests that delayed actions rely on different neural substrates than immediate actions; however, the specific brain areas implicated in the two types of actions remain unknown. We used functional magnetic resonance imaging (fMRI) to measure human brain activation during delayed grasping and reaching. Specifically, we examined activation during visual stimulation and action execution separated by an 18-s delay interval in which subjects had to remember an intended action toward the remembered object. The long delay interval enabled us to unambiguously distinguish visual, memory-related, and action responses. Most strikingly, we observed reactivation of the lateral occipital complex (LOC), a ventral-stream area implicated in visual object recognition, and early visual cortex (EVC) at the time of action. Importantly, this reactivation was observed even though participants remained in complete darkness with no visual stimulation at the time of the action. Moreover, within EVC, higher activation was observed for grasping than reaching during both vision and action execution. Areas in the dorsal visual stream were activated during action execution as expected and, for some, also during vision. Several areas, including the anterior intraparietal sulcus (aIPS), dorsal premotor cortex (PMd), primary motor cortex (M1) and the supplementary motor area (SMA), showed sustained activation during the delay phase. We propose that during delayed actions, dorsal-stream areas plan and maintain coarse action goals; however, at the time of execution, motor programming requires re-recruitment of detailed visual information about the object through reactivation of (1) ventral-stream areas involved in object perception and (2) early visual areas that contain richly detailed visual representations, particularly for grasping.


Subject(s)
Hand Strength/physiology , Magnetic Resonance Imaging/methods , Psychomotor Performance/physiology , Reaction Time/physiology , Visual Perception/physiology , Brain/physiology , Brain Mapping , Humans , Photic Stimulation/methods , Visual Cortex/physiology