Results 1 - 20 of 52
1.
Hum Brain Mapp ; 45(3): e26605, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38379447

ABSTRACT

The lateral occipitotemporal cortex (LOTC) has been shown to capture the representational structure of a smaller range of actions. In the current study, we carried out an fMRI experiment in which we presented human participants with images depicting 100 different actions and used representational similarity analysis (RSA) to determine which brain regions capture the semantic action space established using judgments of action similarity. Moreover, to determine the contribution of a wide range of action-related features to the neural representation of the semantic action space we constructed an action feature model on the basis of ratings of 44 different features. We found that the semantic action space model and the action feature model are best captured by overlapping activation patterns in bilateral LOTC and ventral occipitotemporal cortex (VOTC). An RSA on eight dimensions resulting from principal component analysis carried out on the action feature model revealed partly overlapping representations within bilateral LOTC, VOTC, and the parietal lobe. Our results suggest spatially overlapping representations of the semantic action space of a wide range of actions and the corresponding action-related features. Together, our results add to our understanding of the kind of representations along the LOTC that support action understanding.
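
The core RSA step described here can be illustrated with a minimal Python sketch: a model representational dissimilarity matrix (RDM) built from feature ratings is rank-correlated with a neural RDM built from ROI activation patterns. The data, array sizes, and variable names below are toy placeholders, not the authors' materials.

# Minimal RSA sketch (toy data; not the authors' pipeline).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_actions, n_features, n_voxels = 100, 44, 200

feature_ratings = rng.random((n_actions, n_features))         # behavioural feature ratings
neural_patterns = rng.standard_normal((n_actions, n_voxels))  # ROI activation patterns

# Representational dissimilarity matrices, kept as condensed upper triangles.
model_rdm = pdist(feature_ratings, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# RSA: rank-correlate the model RDM with the neural RDM.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural similarity: rho = {rho:.3f}, p = {p:.3f}")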


Subjects
Occipital Lobe, Temporal Lobe, Humans, Occipital Lobe/physiology, Temporal Lobe/physiology, Visual Pattern Recognition/physiology, Brain Mapping/methods, Photic Stimulation/methods, Magnetic Resonance Imaging
2.
J Neurosci ; 43(48): 8219-8230, 2023 11 29.
Article in English | MEDLINE | ID: mdl-37798129

ABSTRACT

Actions can be planned and recognized at different hierarchical levels, ranging from very specific (e.g., to swim backstroke) to very broad (e.g., locomotion). Understanding the corresponding neural representation is an important prerequisite to reveal how our brain flexibly assigns meaning to the world around us. To address this question, we conducted an event-related fMRI study in male and female human participants in which we examined distinct representations of observed actions at the subordinate, basic and superordinate level. Using multiple regression representational similarity analysis (RSA) in predefined regions of interest, we found that the three different taxonomic levels were best captured by patterns of activation in bilateral lateral occipitotemporal cortex (LOTC), showing the highest similarity with the basic level model. A whole-brain multiple regression RSA revealed that information unique to the basic level was captured by patterns of activation in dorsal and ventral portions of the LOTC and in parietal regions. By contrast, the unique information for the subordinate level was limited to bilateral occipitotemporal cortex, while no single cluster was obtained that captured unique information for the superordinate level. The behaviorally established action space was best captured by patterns of activation in the LOTC and superior parietal cortex, and the corresponding neural patterns of activation showed the highest similarity with patterns of activation corresponding to the basic level model. Together, our results suggest that occipitotemporal cortex shows a preference for the basic level model, with flexible access across the subordinate and the basic level.

SIGNIFICANCE STATEMENT: The human brain captures information at varying levels of abstraction. It is debated which brain regions host representations across different hierarchical levels, with some studies emphasizing parietal and premotor regions, while other studies highlight the role of the lateral occipitotemporal cortex (LOTC). To shed light on this debate, here we examined the representation of observed actions at the three taxonomic levels suggested by Rosch et al. (1976). Our results highlight the role of the LOTC, which hosts a shared representation across the subordinate and the basic level, with the highest similarity with the basic level model. These results shed new light on the hierarchical organization of observed actions and provide insights into the neural basis underlying the basic level advantage.
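
A minimal sketch of the multiple regression RSA used here, assuming toy RDMs in place of the subordinate-, basic-, and superordinate-level models: the vectorized neural RDM is regressed onto several model RDMs at once, so each beta reflects the unique contribution of one model.

# Multiple regression RSA sketch (toy data; the model RDMs are placeholders).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import zscore

rng = np.random.default_rng(1)
n_conditions, n_voxels = 28, 150

patterns = rng.standard_normal((n_conditions, n_voxels))      # ROI activation patterns
neural_rdm = zscore(pdist(patterns, metric="correlation"))

# Three hypothetical model RDMs (e.g., subordinate, basic, superordinate level).
model_rdms = [zscore(pdist(rng.random((n_conditions, 5)))) for _ in range(3)]

X = np.column_stack([np.ones_like(neural_rdm)] + model_rdms)  # intercept + model RDMs
betas, *_ = np.linalg.lstsq(X, neural_rdm, rcond=None)
print("regression weights (one per model RDM):", betas[1:])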


Subjects
Occipital Lobe, Temporal Lobe, Humans, Male, Female, Occipital Lobe/physiology, Temporal Lobe/physiology, Brain Mapping, Cerebral Cortex/physiology, Parietal Lobe, Magnetic Resonance Imaging, Visual Pattern Recognition/physiology
3.
Behav Res Methods ; 55(4): 1890-1906, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35788973

ABSTRACT

In daily life, we frequently encounter actions performed by other people. Here we aimed to examine the key categories and features underlying the organization of a wide range of actions in three behavioral experiments (N = 378 participants). In Experiment 1, we used a multi-arrangement task of 100 different actions. Inverse multidimensional scaling and hierarchical clustering revealed 11 action categories, including Locomotion, Communication, and Aggressive actions. In Experiment 2, we used a feature-listing paradigm to obtain a wide range of action features that were subsequently reduced to 59 key features and used in a rating study (Experiment 3). A direct comparison of the feature ratings obtained in Experiment 3 between actions belonging to the categories identified in Experiment 1 revealed a number of features that appear to be critical for the distinction between these categories, e.g., the features Harm and Noise for the category Aggressive actions, and the features Targeting a person and Contact with others for the category Interaction. Finally, we found that a part of the category-based organization is explained by a combination of weighted features, whereas a significant proportion of variability remained unexplained, suggesting that there are additional sources of information that contribute to the categorization of observed actions. The characterization of action categories and their associated features serves as an important extension of previous studies examining the cognitive structure of actions. Moreover, our results may serve as the basis for future behavioral, neuroimaging and computational modeling studies.
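
The clustering step described here (deriving action categories from behavioural dissimilarities) can be sketched with standard SciPy tools; the sketch covers only the hierarchical clustering, and the dissimilarity matrix below is random, merely standing in for the averaged multi-arrangement data.

# Hierarchical clustering sketch on a behavioural dissimilarity matrix (toy data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
n_actions = 100

# Symmetric pairwise dissimilarities, e.g., averaged across participants.
d = rng.random((n_actions, n_actions))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

# Agglomerative clustering on the condensed distance vector.
Z = linkage(squareform(dissim, checks=False), method="average")
labels = fcluster(Z, t=11, criterion="maxclust")  # cut the tree into 11 clusters
print("cluster sizes:", np.bincount(labels)[1:])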


Assuntos
Comportamento , Cognição , Humanos
4.
Psychol Res ; 86(6): 1871-1891, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34907466

ABSTRACT

Objects can be categorized at different levels of abstraction, ranging from the superordinate (e.g., fruit) and the basic (e.g., apple) to the subordinate level (e.g., golden delicious). The basic level is assumed to play a key role in categorization, e.g., in terms of the number of features used to describe these categories and the speed of processing. To which degree do these principles also apply to the categorization of observed actions? To address this question, we first selected a range of actions at the superordinate (e.g., locomotion), basic (e.g., to swim) and subordinate level (e.g., to swim breaststroke), using verbal material (Experiments 1-3). Experiments 4-6 aimed to determine the characteristics of these actions across the three taxonomic levels. Using a feature listing paradigm (Experiment 4), we determined the number of features that were provided by at least six out of twenty participants (common features), separately for the three different levels. In addition, we examined the number of shared (i.e., provided for more than one category) and distinct (i.e., provided for one category only) features. Participants produced the highest number of common features for actions at the basic level. Actions at the subordinate level shared more features with other actions at the same level than those at the superordinate level. Actions at the superordinate and basic level were described with more distinct features compared to those provided at the subordinate level. Using an auditory priming paradigm (Experiment 5), we observed that participants responded faster to action images preceded by a matching auditory cue corresponding to the basic and subordinate level, but not for superordinate level cues, suggesting that the basic level is the most abstract level at which verbal cues facilitate the processing of an upcoming action. Using a category verification task (Experiment 6), we found that participants were faster and more accurate at verifying action categories (depicted as images) at the basic and subordinate level in comparison to the superordinate level. Together, in line with the object categorization literature, our results suggest that information about action categories is maximized at the basic level.
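
The "common feature" criterion used in Experiment 4 (a feature counts as common if it is listed by at least six out of twenty participants) reduces to a simple tally; the listings below are invented examples, not the study's data.

# Sketch of the common-feature criterion (feature listed by >= 6 of 20 participants).
from collections import Counter

# One set of listed features per participant; only three invented examples shown here.
listings = [
    {"uses hands", "goal-directed", "requires water"},
    {"uses hands", "goal-directed"},
    {"uses hands", "requires water"},
    # ... in the full data set there would be 20 such sets
]

threshold = 6
counts = Counter(feature for participant in listings for feature in participant)
common_features = sorted(f for f, n in counts.items() if n >= threshold)
print(common_features)  # empty for this toy example; populated once all 20 listings are included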


Subjects
Concept Formation, Visual Pattern Recognition, Humans, Reaction Time
5.
PLoS One ; 16(9): e0256912, 2021.
Article in English | MEDLINE | ID: mdl-34469494

ABSTRACT

Social interaction requires fast and efficient processing of another person's intentions. In face-to-face interactions, aversive or appetitive actions typically co-occur with emotional expressions, allowing an observer to anticipate action intentions. In the present study, we investigated the influence of facial emotions on the processing of action intentions. Thirty-two participants were presented with video clips showing virtual agents displaying a facial emotion (angry vs. happy) while performing an action (punch vs. fist-bump) directed towards the observer. During each trial, video clips stopped at varying durations of the unfolding action, and participants had to recognize the presented action. Naturally, participants' recognition accuracy improved with increasing duration of the unfolding actions. Interestingly, while facial emotions did not influence accuracy, there was a significant influence on participants' action judgements. Participants were more likely to judge a presented action as a punch when agents showed an angry compared to a happy facial emotion. This effect was more pronounced in short video clips, showing only the beginning of an unfolding action, than in long video clips, showing near-complete actions. These results suggest that facial emotions influence anticipatory processing of action intentions allowing for fast and adaptive responses in social interactions.


Subjects
Anger, Facial Expression, Facial Recognition, Prejudice/psychology, Social Interaction, Adult, Female, Happiness, Healthy Volunteers, Humans, Male, Photic Stimulation/methods, Young Adult
6.
Neuroimage ; 241: 118428, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34311066

ABSTRACT

Visual imagery relies on a widespread network of brain regions, partly engaged during the perception of external stimuli. Beyond the recruitment of category-selective areas (FFA, PPA), perception of familiar faces and places has been reported to engage brain areas associated with semantic information, comprising the precuneus, temporo-parietal junction (TPJ), medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). Here we used multivariate pattern analyses (MVPA) to examine to which degree areas of the visual imagery network, category-selective areas, and semantic areas contain information regarding the category and familiarity of imagined stimuli. Participants were instructed via auditory cues to imagine personally familiar and unfamiliar stimuli (i.e. faces and places). Using region-of-interest (ROI)-based MVPA, we were able to distinguish between imagined faces and places within nodes of the visual imagery network (V1, SPL, aIPS), within category-selective inferotemporal regions (FFA, PPA) and across all brain regions of the extended semantic network (i.e. precuneus, mPFC, IFG and TPJ). Moreover, we were able to decode familiarity of imagined stimuli in the SPL and aIPS, and in some regions of the extended semantic network (in particular, right precuneus, right TPJ), but not in V1. Our results suggest that posterior visual areas, including V1, host categorical representations about imagined stimuli, and that stimulus familiarity might be an additional aspect that is shared between perception and visual imagery.
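
The ROI-based MVPA reported here boils down to cross-validated classification of activation patterns. A minimal scikit-learn sketch with random "beta patterns" and a leave-one-run-out scheme (all sizes and labels below are assumptions) looks like this:

# ROI-based MVPA sketch: linear SVM with leave-one-run-out cross-validation (toy data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_runs, n_trials_per_run, n_voxels = 8, 16, 300

X = rng.standard_normal((n_runs * n_trials_per_run, n_voxels))  # ROI patterns, one row per trial
y = np.tile([0, 1], n_runs * n_trials_per_run // 2)             # imagined face vs. place
runs = np.repeat(np.arange(n_runs), n_trials_per_run)           # run label for each trial

scores = cross_val_score(SVC(kernel="linear"), X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")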


Subjects
Brain/physiology, Imagination/physiology, Nerve Net/physiology, Visual Pattern Recognition/physiology, Recognition (Psychology)/physiology, Acoustic Stimulation/methods, Adult, Brain/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging/methods, Male, Nerve Net/diagnostic imaging, Photic Stimulation/methods, Random Allocation, Visual Perception/physiology, Young Adult
7.
Cortex ; 139: 152-165, 2021 06.
Article in English | MEDLINE | ID: mdl-33873036

ABSTRACT

When we see a manipulable object (henceforth tool) or a hand performing a grasping movement, our brain is automatically tuned to how that tool can be grasped (i.e., its affordance) or what kind of grasp that hand is performing (e.g., a power or precision grasp). However, it remains unclear where visual information related to tools or hands is transformed into abstract grasp representations. We therefore investigated where different levels of abstractness in grasp information are processed: grasp information that is invariant to the kind of stimulus that elicits it (tool-hand invariance); and grasp information that is hand-specific but viewpoint-invariant (viewpoint invariance). We focused on brain areas activated when viewing both tools and hands, i.e., the posterior parietal cortices (PPC), ventral premotor cortices (PMv), and lateral occipitotemporal cortex/posterior middle temporal cortex (LOTC/pMTG). To test for invariant grasp representations, we presented participants with tool images and grasp videos (from first or third person perspective; 1pp or 3pp) inside an MRI scanner, and cross-decoded power versus precision grasps across (i) grasp perspectives (viewpoint invariance), (ii) tool images and grasp 1pp videos (tool-hand 1pp invariance), and (iii) tool images and grasp 3pp videos (tool-hand 3pp invariance). Tool-hand 1pp, but not tool-hand 3pp, invariant grasp information was found in left PPC, whereas viewpoint-invariant information was found bilaterally in PPC, left PMv, and left LOTC/pMTG. These findings suggest different levels of abstractness: visual information is transformed into stimulus-invariant grasp representations/tool affordances in left PPC, and into viewpoint-invariant but hand-specific grasp representations in the hand network.
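
Cross-decoding, as used in this study, means training a classifier in one condition and testing it in another; the sketch below trains on patterns labelled as tool-image trials and tests on patterns labelled as grasp-video trials. The data are simulated and the effect size is arbitrary.

# Cross-decoding sketch: train on one stimulus type, test on another (simulated data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_per_class, n_voxels = 40, 250

def simulate_patterns(shift):
    """Two classes (power vs. precision grasp) separated by a small mean shift."""
    power = rng.standard_normal((n_per_class, n_voxels)) + shift
    precision = rng.standard_normal((n_per_class, n_voxels)) - shift
    X = np.vstack([power, precision])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X_train, y_train = simulate_patterns(shift=0.1)  # e.g., tool-image trials
X_test, y_test = simulate_patterns(shift=0.1)    # e.g., grasp-video trials

clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"cross-decoding accuracy: {clf.score(X_test, y_test):.2f}")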


Subjects
Brain Mapping, Hand, Hand Strength, Humans, Magnetic Resonance Imaging, Parietal Lobe, Psychomotor Performance
8.
Neuropsychologia ; 149: 107673, 2020 12.
Article in English | MEDLINE | ID: mdl-33186572

ABSTRACT

The general aim of this study was to assess the effect produced by visuo-spatial attention on both behavioural performance and brain activation in hemianopic patients following visual stimulus presentation to the blind hemifield. To do that, we tested five hemianopic patients and six age-matched healthy controls in an MRI scanner during the execution of a Posner-like paradigm using a predictive central cue. Participants were instructed to covertly orient attention toward the blind or sighted hemifield in different blocks while discriminating the orientation of a visual grating. In patients, we found significantly faster reaction times (RT) in valid and neutral than in invalid trials not only in the sighted but also in the blind hemifield, despite the impairment of consciousness and performance at chance. As to the fMRI signal, in valid trials we observed the activation of ipsilesional visual areas (mainly the lingual gyrus, area 19) during the orientation of attention toward the blind hemifield. Importantly, this activation was similar in patients and controls. In order to assess the related functional network, we performed a psychophysiological interaction (PPI) analysis that revealed an increased functional connectivity (FC) in patients with respect to controls between the ipsilesional lingual gyrus and ipsilateral fronto-parietal as well as contralesional parietal regions. Moreover, the shift of attention from the blind to the sighted hemifield revealed stronger FC between the contralesional visual areas V3/V4 and ipsilateral parietal regions in patients than controls. These results indicate a higher cognitive effort in patients when paying attention to the blind hemifield or when shifting attention from the blind to the sighted hemifield, possibly as an attempt to compensate for the visual loss. Taken together, these results show that hemianopic patients can covertly orient attention toward the blind hemifield with a top-down mechanism by activating a functional network mainly including fronto-parietal regions belonging to the dorsal attentional network.
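
The PPI analysis mentioned here tests whether the coupling between a seed region and a target region changes with the task. A simplified sketch (omitting the HRF deconvolution used in practice, with simulated time courses) builds the interaction regressor and fits an ordinary GLM:

# Simplified PPI sketch: interaction of a seed time course with a task regressor (toy data).
import numpy as np

rng = np.random.default_rng(5)
n_scans = 200

task = np.zeros(n_scans)
task[50:100] = 1.0
task[150:200] = 1.0                                        # crude block regressor (attend-blind blocks)
seed = rng.standard_normal(n_scans)                        # seed ROI time course (e.g., lingual gyrus)
target = 0.5 * seed * task + rng.standard_normal(n_scans)  # simulated target ROI signal

ppi = seed * (task - task.mean())                          # psychophysiological interaction term
X = np.column_stack([np.ones(n_scans), task, seed, ppi])   # intercept + main effects + PPI
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta (condition-dependent coupling): {betas[3]:.2f}")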


Subjects
Blindness, Hemianopsia, Blindness/diagnostic imaging, Functional Laterality, Hemianopsia/diagnostic imaging, Humans, Magnetic Resonance Imaging, Orientation, Parietal Lobe, Photic Stimulation, Reaction Time, Visual Perception
9.
Cortex ; 131: 87-102, 2020 10.
Article in English | MEDLINE | ID: mdl-32818916

ABSTRACT

Word retrieval deficits are a common problem in patients with stroke-induced brain damage. While complete recovery of language in chronic aphasia is rare, patients' naming ability can be significantly improved by speech therapy. A growing number of neuroimaging studies have tried to pinpoint the neural changes associated with successful outcome of naming treatment. However, the mechanisms supporting naming practice in the healthy brain have received little attention. Yet, understanding these mechanisms is crucial for teasing them apart from functional reorganization following brain damage. To address this issue, we trained a group of healthy monolingual Italian speakers on naming pictured objects and actions for ten consecutive days and scanned them before and after training. Although activity during object versus action naming dissociated in several regions (lateral occipitotemporal, parietal and left inferior frontal cortices), training effects for the two word classes were similar and included activation decreases in classical language regions of the left hemisphere (posterior inferior frontal gyrus, anterior insula), potentially due to decreased lexical selection demands. Additionally, MVPA revealed training-related activation changes in the left parietal and temporal cortices associated with the retrieval of knowledge from episodic memory (precuneus, angular gyrus) and facilitated access to phonological word forms (posterior superior temporal sulcus).


Subjects
Aphasia, Stroke, Aphasia/diagnostic imaging, Brain/diagnostic imaging, Brain Mapping, Humans, Language, Magnetic Resonance Imaging
10.
Cortex ; 127: 371-387, 2020 06.
Article in English | MEDLINE | ID: mdl-32289581

ABSTRACT

In the absence of input from the external world, humans are still able to generate vivid mental images. This cognitive process, known as visual mental imagery, involves a network of prefrontal, parietal, inferotemporal, and occipital regions. Using multivariate pattern analysis (MVPA), previous studies were able to distinguish between the different orientations of imagined gratings, but not between more complex imagined stimuli, such as common objects, in early visual cortex (V1). Here we asked whether letters, simple shapes, and objects can be decoded in early visual areas during visual mental imagery. In a delayed spatial judgment task, we asked participants to observe or imagine stimuli. To examine whether it is possible to discriminate between neural patterns during perception and visual mental imagery, we performed ROI-based and whole-brain searchlight-based MVPA. We were able to decode imagined stimuli in early visual (V1, V2), parietal (SPL, IPL, aIPS), inferotemporal (LOC) and prefrontal (PMd) areas. In a subset of these areas (i.e., V1, V2, LOC, SPL, IPL and aIPS), we also obtained significant cross-decoding across visual imagery and perception. Moreover, we observed a linear relationship between behavioral accuracy and the amplitude of the BOLD signal in parietal and inferotemporal cortices, but not in early visual cortex, in line with the view that these areas contribute to the ability to perform visual imagery. Together, our results suggest that in the absence of bottom-up visual inputs, patterns of functional activation in early visual cortex allow distinguishing between different imagined stimulus exemplars, most likely mediated by signals from parietal and inferotemporal areas.


Subjects
Imagination, Magnetic Resonance Imaging, Brain Mapping, Cerebral Cortex, Humans, Occipital Lobe/diagnostic imaging, Visual Perception
11.
Neuropsychologia ; 141: 107430, 2020 04.
Article in English | MEDLINE | ID: mdl-32173624

ABSTRACT

Unilateral damage to post-chiasmatic visual pathways or cortical areas results in the loss of vision in the contralateral hemifield, known as hemianopia. Some patients, however, may retain the ability to perform above-chance unconscious detection or discrimination of visual stimuli presented to the blind hemifield, known as "blindsight". An important finding in blindsight research is that it can often be elicited by moving stimuli. Therefore, in the present study, we wanted to test whether moving stimuli might yield blindsight phenomena in patients with cortical lesions resulting in hemianopia, in a discrimination task where stimulus movement is orthogonal to the feature of interest. This could represent an important strategy for rehabilitation, because it might improve the ability to discriminate stimulus features that are distinct from, but related to, movement (e.g., line orientation). We tested eight hemianopic patients and eight age-matched healthy controls in an orientation discrimination task with moving or static visual stimuli. During performance of the task we carried out fMRI scanning and tractography. Behaviourally, we did not find a reliable main effect of motion on orientation discrimination; however, an important result was that, in different patients, blindsight could occur only with moving stimuli, only with stationary stimuli, or with both. As to brain imaging results, following presentation of moving stimuli to the blind hemifield, a widespread fronto-parietal bilateral network was recruited, including areas of the dorsal stream and in particular bilateral motion area hMT+, whose activation positively correlated with behavioural performance. This bilateral network was not activated in controls, suggesting that it represents a compensatory functional change following brain damage. Moreover, there was a higher activation of ipsilesional area hMT+ in patients who performed above chance in the moving condition. By contrast, in patients who performed above chance in the static condition, we found a higher activation of contralesional area V1 and extrastriate visual areas. Finally, we found a linear relationship between structural integrity of the ipsilesional pathway connecting the lateral geniculate nucleus (LGN) with motion area hMT+ and both behavioural performance and ipsilesional hMT+ activation. These results support the role of the LGN in modulating performance as well as BOLD amplitude in the absence of visual awareness in ipsilesional area hMT+ during an orientation discrimination task with moving stimuli.


Subjects
Hemianopsia, Visual Cortex, Humans, Photic Stimulation, Visual Pathways/diagnostic imaging, Visual Perception
12.
Cereb Cortex ; 30(5): 2924-2938, 2020 05 14.
Article in English | MEDLINE | ID: mdl-31942941

ABSTRACT

Humans are able to interact with objects with extreme flexibility. To achieve this ability, the brain must not only control specific muscular patterns, but also represent the abstract goal of an action, irrespective of its implementation. It is debated, however, how abstract action goals are implemented in the brain. To address this question, we used multivariate pattern analysis of functional magnetic resonance imaging data. Human participants performed grasping actions (precision grip, whole hand grip) with two different wrist orientations (canonical, rotated), using either the left or right hand. This design permitted us to investigate a hierarchical organization consisting of three levels of abstraction: 1) "concrete action" encoding; 2) "effector-dependent goal" encoding (invariant to wrist orientation); and 3) "effector-independent goal" encoding (invariant to effector and wrist orientation). We found that motor cortices hosted joint encoding of concrete actions and of effector-dependent goals, while the parietal lobe housed a convergence of all three representations, comprising action goals within and across effectors. The left lateral occipito-temporal cortex showed effector-independent goal encoding, but no convergence across the three levels of representation. Our results support a hierarchical organization of action encoding, shedding light on the neural substrates supporting the extraordinary flexibility of human hand behavior.


Subjects
Brain Mapping/methods, Hand Strength/physiology, Motor Cortex/diagnostic imaging, Motor Cortex/physiology, Psychomotor Performance/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Photic Stimulation/methods
13.
Elife ; 8, 2019 12 05.
Article in English | MEDLINE | ID: mdl-31804177

ABSTRACT

Categorizing and understanding other people's actions is a key human capability. Whereas there exists a growing literature regarding the organization of objects, the representational space underlying the organization of observed actions remains largely unexplored. Here we examined the organizing principles of a large set of actions and the corresponding neural representations. Using multiple regression representational similarity analysis of fMRI data, in which we accounted for variability due to major action components (body parts, scenes, movements, objects, sociality, transitivity) and three control models (distance between observer and actor, number of people, HMAX-C1), we found that the semantic dissimilarity structure was best captured by patterns of activation in the lateral occipitotemporal cortex (LOTC). Together, our results demonstrate that the organization of observed actions in the LOTC resembles the organizing principles used by participants to classify actions behaviorally, in line with the view that this region is crucial for accessing the meaning of actions.


Subjects
Cerebral Cortex/physiology, Human Activities, Psychomotor Performance/physiology, Adult, Brain Mapping, Cerebral Cortex/diagnostic imaging, Female, Human Body, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Multivariate Analysis, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology, Visual Pattern Recognition/physiology, Photic Stimulation, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology
14.
Front Neurosci ; 13: 646, 2019.
Article in English | MEDLINE | ID: mdl-31354404

ABSTRACT

Viewing a real scene or a stereoscopic image (e.g., 3D movies) with both eyes yields a vivid subjective impression of object solidity, tangibility, immersive negative space and sense of realness; something that is not experienced when viewing single pictures of 3D scenes normally with both eyes. This phenomenology, sometimes referred to as stereopsis, is conventionally ascribed to the derivation of depth from the differences in the two eyes' images (binocular disparity). Here we report on a pilot study designed to explore whether dissociable neural activity associated with the phenomenology of realness can be localized in the cortex. In order to dissociate subjective impression from disparity processing, we capitalized on the finding that the impression of realness associated with stereoscopic viewing can also be generated when viewing a single picture of a 3D scene with one eye through an aperture. Under a blocked fMRI design, subjects viewed intact and scrambled images of natural 3D objects and scenes under three viewing conditions: (1) single pictures viewed normally with both eyes (binocular); (2) single pictures viewed with one eye through an aperture (monocular-aperture); and (3) stereoscopic anaglyph images of the same scenes viewed with both eyes (binocular stereopsis). Fixed-effects GLM contrasts aimed at isolating the phenomenology of stereopsis demonstrated a selective recruitment of similar posterior parietal regions for both monocular and binocular stereopsis conditions. Our findings provide preliminary evidence that the cortical processing underlying the subjective impression of realness may be dissociable and distinct from the derivation of depth from disparity.

15.
Neuroimage ; 200: 332-343, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31247298

ABSTRACT

Visual imagery has been suggested to recruit occipital cortex via feedback projections from fronto-parietal regions, raising the possibility that these feedback projections might be exploited to boost recruitment of occipital cortex by means of real-time neurofeedback. To test this prediction, we instructed a group of healthy participants to perform peripheral visual imagery while they received real-time auditory feedback based on the BOLD signal from either early visual cortex or the medial superior parietal lobe. We examined the amplitude and temporal aspects of the BOLD response in the two regions. Moreover, we compared the impact of self-rated mental focus and vividness of visual imagery on the BOLD responses in these two areas. We found that both early visual cortex and the medial superior parietal cortex are susceptible to auditory neurofeedback within a single feedback session per region. However, the signal in parietal cortex was sustained for a longer time compared to the signal in occipital cortex. Moreover, the BOLD signal in the medial superior parietal lobe was more affected by focus and vividness of the visual imagery than early visual cortex. Our results thus demonstrate that (a) participants can learn to self-regulate the BOLD signal in early visual and parietal cortex within a single session, (b) different nodes in the visual imagery network respond differently to neurofeedback, and (c) responses in parietal, but not in occipital cortex are susceptible to self-rated vividness of mental imagery. Together, these results suggest that medial superior parietal cortex might be a suitable candidate to provide real-time feedback to patients suffering from visual field defects.
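
The auditory feedback loop described here essentially maps a region's ongoing BOLD signal onto a sound parameter. The toy functions below (names and the pitch mapping are assumptions, and real-time pipelines additionally require online preprocessing) illustrate the idea.

# Conceptual neurofeedback sketch: map ROI percent signal change to a feedback tone (toy values).
import numpy as np

def percent_signal_change(current, baseline):
    """Percent signal change of the ROI mean relative to a baseline mean."""
    return 100.0 * (current - baseline) / baseline

def feedback_pitch(psc, lo=220.0, hi=880.0, max_psc=2.0):
    """Map percent signal change (clipped to [0, max_psc]) onto a tone frequency in Hz."""
    scaled = np.clip(psc, 0.0, max_psc) / max_psc
    return lo + scaled * (hi - lo)

baseline_mean = 1000.0
for roi_mean in (1002.0, 1008.0, 1015.0, 1025.0):    # simulated incoming volumes
    psc = percent_signal_change(roi_mean, baseline_mean)
    print(f"PSC = {psc:.2f}%  ->  feedback tone = {feedback_pitch(psc):.0f} Hz")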


Subjects
Functional Neuroimaging/methods, Imagination/physiology, Nerve Net/physiology, Neurofeedback/physiology, Occipital Lobe/physiology, Parietal Lobe/physiology, Visual Pattern Recognition/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
16.
J Neurosci ; 39(30): 5966-5974, 2019 07 24.
Article in English | MEDLINE | ID: mdl-31126999

ABSTRACT

The middle temporal gyrus (MTG) has been shown to be recruited not only during the processing of words, but also during the observation of actions. Here we investigated how information related to words and gestures is organized along the MTG. To this aim, we measured the BOLD response in the MTG to video clips of gestures and spoken words in 17 healthy human adults (male and female). Gestures consisted of videos of an actress performing object-use pantomimes (iconic representations of object-directed actions; e.g., playing guitar), emblems (conventional gestures, e.g., thumb up), and meaningless gestures. Word stimuli (verbs, nouns) consisted of video clips of the same actress pronouncing words. We found a stronger response to meaningful compared with meaningless gestures along the whole left and large portions of the right MTG. Importantly, we observed a gradient, with posterior regions responding more strongly to gestures (pantomimes and emblems) than words and anterior regions showing a stronger response to words than gestures. In an intermediate region in the left hemisphere, the response was significantly higher to words and emblems (i.e., items with a greater arbitrariness of the sign-to-meaning mapping) than to pantomimes. These results show that the large-scale organization of information in the MTG is driven by the input modality and may also reflect the arbitrariness of the relationship between sign and meaning.

SIGNIFICANCE STATEMENT: Here we investigated the organizing principle of information in the middle temporal gyrus, taking into consideration the input modality and the arbitrariness of the relationship between a sign and its meaning. We compared the middle temporal gyrus response during the processing of pantomimes, emblems, and spoken words. We found that posterior regions responded more strongly to pantomimes and emblems than to words, whereas anterior regions responded more strongly to words than to pantomimes and emblems. In an intermediate region, only in the left hemisphere, words and emblems evoked a stronger response than pantomimes. Our results identify two organizing principles of neural representation: the modality of communication (gestural or verbal) and the (arbitrariness of the) relationship between sign and meaning.


Subjects
Gestures, Language, Speech/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Male, Photic Stimulation/methods, Random Allocation, Young Adult
17.
Neuroimage ; 191: 234-242, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30769145

ABSTRACT

A network of frontal and parietal regions is known to be recruited during the planning and execution of arm and eye movements. While movements of the two effectors are typically coupled with each other, it remains unresolved how information is shared between them. Here we aimed to identify regions containing neuronal populations that show directional tuning for both arm and eye movements. In two separate fMRI experiments, the same participants were scanned while performing a center-out arm or eye movement task. Using a whole-brain searchlight-based representational similarity analysis (RSA), we found that a bilateral region in the posterior superior parietal lobule represents both arm and eye movement direction, thus extending previous findings in monkeys.


Subjects
Movement/physiology, Parietal Lobe/physiology, Psychomotor Performance/physiology, Adult, Arm/physiology, Eye Movements/physiology, Female, Humans, Male, Middle Aged, Young Adult
18.
Behav Res Methods ; 51(6): 2817-2826, 2019 12.
Article in English | MEDLINE | ID: mdl-30542913

ABSTRACT

Recent years have witnessed a growing interest in behavioral and neuroimaging studies on the processing of symbolic communicative gestures, such as pantomimes and emblems, but well-controlled stimuli have been scarce. This study describes a dataset of more than 200 video clips of an actress performing pantomimes (gestures that mimic object-directed/object-use actions; e.g., playing guitar), emblems (conventional gestures; e.g., thumbs up), and meaningless gestures. Gestures were divided into four lists. For each of these four lists, 50 Italian and 50 American raters judged the meaningfulness of the gestures and provided names and descriptions for them. The results of these rating and norming measures are reported separately for the Italian and American raters, offering the first normed set of meaningful and meaningless gestures for experimental studies. The stimuli are available for download via the Figshare database.


Subjects
Comprehension, Emblems and Insignia, Gestures, Female, Humans
19.
Cortex ; 103: 266-276, 2018 06.
Article in English | MEDLINE | ID: mdl-29673783

ABSTRACT

When we observe other people's actions, a number of parietal and precentral regions known to be involved in the planning and execution of actions are recruited, as seen, for example, in power decreases in alpha and beta frequencies that are indicative of increased activation. It has been argued that this recruitment reflects the process of simulating the observed action, thereby providing access to the meaning of the action. Alternatively, it has been suggested that rather than providing access to the meaning of an action, parietal and precentral regions might be recruited as a consequence of action understanding. A way to distinguish between these alternatives is to examine where in the brain and at which time point it is possible to discriminate between different types of actions (e.g., pointing or grasping) irrespective of the way these are performed. To this aim, we presented participants with videos of simple hand actions performed with the left or right hand towards a target on the left or the right side while recording magnetoencephalography (MEG) data. In each trial, participants were presented with two consecutive videos (S1, S2) depicting either the same (repeat trials) or different (non-repeat trials) actions. We predicted that areas that are sensitive to the type of action should show stronger adaptation (i.e., a smaller decrease in alpha and beta power) in repeat in comparison to non-repeat trials. Indeed, we observed smaller alpha and beta power decreases during the presentation of S2 when the action was repeated compared to when two different actions were presented, indicating adaptation of neuronal populations that are selective for the type of action. Sources were obtained exclusively in posterior occipitotemporal regions, supporting the notion that an early differentiation of actions occurs outside the motor system.
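
The adaptation contrast described here compares alpha/beta-band power during S2 between repeat and non-repeat trials. A minimal sketch with simulated single-channel data (real analyses work on source-level MEG with proper baselining) could look as follows.

# Adaptation sketch: 8-30 Hz power during S2, repeat vs. non-repeat trials (simulated data).
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(6)
fs, n_trials, n_samples = 250, 60, 500               # 2 s of data per trial at 250 Hz

def band_power(trials, fmin=8.0, fmax=30.0):
    freqs, psd = welch(trials, fs=fs, nperseg=250, axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[:, band].mean(axis=1)

# Repeat trials are simulated with more residual alpha/beta power (i.e., less suppression).
repeat = 1.2 * rng.standard_normal((n_trials, n_samples))
non_repeat = rng.standard_normal((n_trials, n_samples))

print("mean 8-30 Hz power, repeat trials:    ", band_power(repeat).mean())
print("mean 8-30 Hz power, non-repeat trials:", band_power(non_repeat).mean())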


Subjects
Functional Laterality/physiology, Magnetoencephalography, Nerve Net/physiology, Occipital Lobe/physiology, Psychomotor Performance/physiology, Temporal Lobe/physiology, Adult, Brain Mapping, Female, Hand Strength, Humans, Male, Young Adult
20.
Cortex ; 99: 330-345, 2018 02.
Article in English | MEDLINE | ID: mdl-29334647

ABSTRACT

Different contexts require us either to react immediately, or to delay (or suppress) a planned movement. Previous studies that aimed at decoding movement plans typically dissociated movement preparation and execution by means of delayed-movement paradigms. Here we asked whether these results can be generalized to the planning and execution of immediate movements. To directly compare delayed, non-delayed, and suppressed reaching and grasping movements, we used a slow event-related functional magnetic resonance imaging (fMRI) design. To examine how neural representations evolved throughout movement planning, execution, and suppression, we performed time-resolved multivariate pattern analysis (MVPA). During the planning phase, we were able to decode upcoming reaching and grasping movements in contralateral parietal and premotor areas. During the execution phase, we were able to decode movements in a widespread bilateral network of motor, premotor, and somatosensory areas. Moreover, we obtained significant decoding across delayed and non-delayed movement plans in contralateral primary motor cortex. Our results demonstrate the feasibility of time-resolved MVPA and provide new insights into the dynamics of the prehension network, suggesting early neural representations of movement plans in the primary motor cortex that are shared between delayed and non-delayed contexts.
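
Time-resolved MVPA, as used in this study, repeats the classification at every time point of the trial; the sketch below uses random data shaped trials x time x voxels and reports one cross-validated accuracy per time point (chance = 0.5).

# Time-resolved MVPA sketch: decode the movement type separately at each time point (toy data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_timepoints, n_voxels = 80, 12, 200

X = rng.standard_normal((n_trials, n_timepoints, n_voxels))  # trial x time x voxel
y = np.repeat([0, 1], n_trials // 2)                         # e.g., reach vs. grasp

accuracy_over_time = [
    cross_val_score(SVC(kernel="linear"), X[:, t, :], y, cv=5).mean()
    for t in range(n_timepoints)
]
print(np.round(accuracy_over_time, 2))  # one decoding accuracy per time point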


Subjects
Hand Strength, Motor Cortex/physiology, Movement, Somatosensory Cortex/physiology, Adolescent, Adult, Female, Functional Neuroimaging, Humans, Magnetic Resonance Imaging, Male, Motor Cortex/diagnostic imaging, Multivariate Analysis, Somatosensory Cortex/diagnostic imaging, Time Factors, Young Adult