Results 1 - 10 of 10
1.
Cortex; 126: 253-264, 2020 May.
Article in English | MEDLINE | ID: mdl-32092494

ABSTRACT

Unequivocally demonstrating the presence of multisensory signals at the earliest stages of cortical processing remains challenging in humans. In our study, we relied on the unique spatio-temporal resolution provided by intracranial stereotactic electroencephalographic (SEEG) recordings in patients with drug-resistant epilepsy to characterize the signal extracted from early visual (calcarine and pericalcarine) and auditory (Heschl's gyrus and planum temporale) regions during a simple audio-visual oddball task. We provide evidence that both cross-modal responses (visual responses in auditory cortex or the reverse) and multisensory processing (alteration of the unimodal responses during bimodal stimulation) can be observed in intracranial event-related potentials (iERPs) and in power modulations of oscillatory activity at different temporal scales within the first 150 msec after stimulus onset. The temporal profiles of the iERPs are compatible with the hypothesis that multisensory integration (MSI) occurs by means of direct pathways linking early visual and auditory regions. Our data indicate, moreover, that MSI mainly relies on modulations of the low-frequency bands (foremost the theta band in the auditory cortex and the alpha band in the visual cortex), suggesting the involvement of feedback pathways between the two sensory regions. Remarkably, we also observed high-gamma power modulations by sounds in the early visual cortex, thus suggesting the presence of neuronal populations involved in auditory processing in the calcarine and pericalcarine region in humans.
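The kind of band-limited power modulation this abstract describes can be sketched with a filter-Hilbert approach. The sketch below uses a synthetic trace with an assumed sampling rate and an artificial post-stimulus theta burst; it is an illustration of the general technique, not the authors' pipeline.

```python
# Illustrative filter-Hilbert band-power estimate around stimulus onset.
# All data and parameters here are synthetic assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 512                        # assumed sampling rate (Hz)
t = np.arange(-0.5, 0.5, 1/fs)  # one epoch: 500 ms pre/post stimulus
rng = np.random.default_rng(0)

# synthetic SEEG-like trace: noise plus a 6 Hz theta burst after onset
sig = 0.5 * rng.standard_normal(t.size)
sig[t >= 0] += np.sin(2 * np.pi * 6 * t[t >= 0])

def band_power(x, lo, hi, fs):
    """Mean power in [lo, hi] Hz via band-pass filter + Hilbert envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, x))
    return np.mean(np.abs(analytic) ** 2)

pre = band_power(sig[t < 0], 4, 8, fs)
post = band_power(sig[t >= 0], 4, 8, fs)
print(post > pre)   # the post-stimulus theta burst raises theta-band power
```

In practice, such estimates would be averaged over trials and baseline-corrected before comparing unimodal against bimodal conditions.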


Subjects
Auditory Cortex , Acoustic Stimulation , Auditory Perception , Brain Mapping , Electroencephalography , Humans , Photic Stimulation , Visual Perception
2.
Cereb Cortex; 29(9): 3590-3605, 2019 Aug 14.
Article in English | MEDLINE | ID: mdl-30272134

ABSTRACT

The brain has separate specialized computational units to process faces and voices located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study supports a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.


Subjects
Brain/physiology , Emotions/physiology , Facial Recognition/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neural Pathways/physiology , Photic Stimulation , Young Adult
3.
Sci Rep; 8(1): 16968, 2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30446699

ABSTRACT

The discovery of intrinsically photosensitive retinal ganglion cells (ipRGCs) marked a major shift in our understanding of how light information is processed by the mammalian brain. These ipRGCs influence multiple functions not directly related to image formation such as circadian resetting and entrainment, pupil constriction, enhancement of alertness, as well as the modulation of cognition. More recently, it was demonstrated that ipRGCs may also contribute to basic visual functions. The impact of ipRGCs on visual function, independently of image-forming photoreceptors, remains difficult to isolate, however, particularly in humans. We previously showed that exposure to intense monochromatic blue light (465 nm) induced non-conscious light perception in a forced choice task in three rare totally visually blind individuals without detectable rod and cone function, but who retained non-image-forming responses to light, very likely via ipRGCs. The neural foundation of such light perception in the absence of conscious vision is unknown, however. In this study, we characterized the brain activity of these three participants using electroencephalography (EEG), and demonstrate that unconsciously perceived light triggers an early and reliable transient desynchronization (i.e., decreased power) of the alpha EEG rhythm (8-14 Hz) over the occipital cortex. These results provide compelling insight into how ipRGCs may contribute to transient changes in ongoing brain activity. They suggest that occipital alpha rhythm synchrony, which is typically linked to the visual system, is modulated by ipRGC photoreception; a process that may contribute to the non-conscious light perception in those blind individuals.
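Alpha desynchronization of the kind reported here is conventionally quantified as a percent power change relative to baseline. A minimal sketch, assuming a Welch-based power estimate and entirely synthetic data (the authors' exact pipeline is not specified in the abstract):

```python
# Illustrative event-related desynchronization (ERD) measure: percent change
# of alpha (8-14 Hz) power relative to baseline. Synthetic signals only.
import numpy as np
from scipy.signal import welch

fs = 256                 # assumed sampling rate (Hz)
n = 2 * fs               # 2-second analysis windows
rng = np.random.default_rng(1)
tt = np.arange(n) / fs

def alpha_power(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    band = (f >= 8) & (f <= 14)
    return pxx[band].mean()

baseline = np.sin(2*np.pi*10*tt) + 0.3*rng.standard_normal(n)        # strong alpha
post_light = 0.3*np.sin(2*np.pi*10*tt) + 0.3*rng.standard_normal(n)  # suppressed alpha

erd = 100 * (alpha_power(post_light, fs) - alpha_power(baseline, fs)) \
          / alpha_power(baseline, fs)
print(erd < 0)   # a negative change indicates alpha desynchronization
```

By this convention, desynchronization appears as a negative percent change, which matches the "decreased power" wording in the abstract.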


Subjects
Light , Occipital Lobe/radiation effects , Photoreceptor Cells/radiation effects , Retinal Ganglion Cells/radiation effects , Visually Impaired Persons , Aged , Brain Mapping , Circadian Rhythm/physiology , Electroencephalography , Female , Humans , Male , Middle Aged , Occipital Lobe/physiology , Photic Stimulation/methods , Photoreceptor Cells/physiology , Retinal Ganglion Cells/physiology
4.
Elife; 7, 2018 Jan 17.
Article in English | MEDLINE | ID: mdl-29338838

ABSTRACT

The occipital cortex of early blind individuals (EB) activates during speech processing, challenging the notion of a hard-wired neurobiology of language. But, at what stage of speech processing do occipital regions participate in EB? Here we demonstrate that parieto-occipital regions in EB enhance their synchronization to acoustic fluctuations in human speech in the theta-range (corresponding to syllabic rate), irrespective of speech intelligibility. Crucially, enhanced synchronization to the intelligibility of speech was selectively observed in primary visual cortex in EB, suggesting that this region is at the interface between speech perception and comprehension. Moreover, EB showed overall enhanced functional connectivity between temporal and occipital cortices that are sensitive to speech intelligibility and altered directionality when compared to the sighted group. These findings suggest that the occipital cortex of the blind adopts an architecture that allows the tracking of speech material, and therefore does not fully abstract from the reorganized sensory inputs it receives.
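"Synchronization to acoustic fluctuations" in the theta range is often operationalized as spectral coherence between the speech amplitude envelope and the neural signal. The following sketch assumes a coherence-based metric (the paper's actual measure may differ) and uses a synthetic 4 Hz "syllabic" envelope driving a noisy surrogate channel.

```python
# Illustrative speech-brain tracking measure: coherence between a synthetic
# syllable-rate envelope and a surrogate EEG channel. Assumed parameters.
import numpy as np
from scipy.signal import coherence

fs = 128                          # assumed sampling rate (Hz)
t = np.arange(0, 60, 1/fs)        # 60 s of signal
rng = np.random.default_rng(2)

envelope = 1 + np.sin(2*np.pi*4*t)                 # 4 Hz "syllabic" fluctuation
eeg = 0.5*envelope + rng.standard_normal(t.size)   # channel partly follows it

f, coh = coherence(envelope, eeg, fs=fs, nperseg=4*fs)
theta = (f >= 3) & (f <= 7)
other = (f >= 20) & (f <= 40)
print(coh[theta].max() > coh[other].mean())  # tracking peaks in the theta range
```

Comparing theta coherence against a control band (or surrogate data) is the usual way to show that the tracking is frequency-specific rather than broadband.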


Subjects
Blindness , Cortical Synchronization , Neurons/physiology , Occipital Lobe/physiology , Speech Perception , Adult , Aged , Female , Humans , Male , Middle Aged , Young Adult
5.
Proc Natl Acad Sci U S A; 114(31): E6437-E6446, 2017 Aug 1.
Article in English | MEDLINE | ID: mdl-28652333

ABSTRACT

Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.


Subjects
Auditory Cortex/physiology , Deafness/physiopathology , Facial Recognition/physiology , Neuronal Plasticity/physiology , Visual Pathways/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neuroimaging/methods , Photic Stimulation , Sensory Deprivation/physiology , Visual Perception/physiology
6.
Soc Cogn Affect Neurosci; 11(9): 1402-10, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27131039

ABSTRACT

Human communication relies on the ability to process linguistic structure and to map words and utterances onto our environment. Furthermore, as what we communicate is often not directly encoded in our language (e.g. in the case of irony, jokes or indirect requests), we need to extract additional cues to infer the beliefs and desires of our conversational partners. Although the functional interplay between language and the ability to mentalise has been discussed in theoretical accounts in the past, the neurobiological underpinnings of these dynamics are currently not well understood. Here, we address this issue using functional imaging (fMRI). Participants listened to question-reply dialogues. In these dialogues, a reply is interpreted as a direct reply, an indirect reply or a request for action, depending on the question. We show that inferring meaning from indirect replies engages parts of the mentalising network (mPFC) while requests for action also activate the cortical motor system (IPL). Subsequent connectivity analysis using Dynamic Causal Modelling (DCM) revealed that this pattern of activation is best explained by an increase in effective connectivity from the mentalising network (mPFC) to the action system (IPL). These results are an important step towards a more integrative understanding of the neurobiological basis of indirect speech processing.


Subjects
Neurons/physiology , Theory of Mind/physiology , Adolescent , Adult , Brain Mapping , Comprehension , Female , Humans , Interpersonal Relations , Magnetic Resonance Imaging , Motor Cortex/physiology , Neural Pathways/physiology , Prefrontal Cortex/physiology , Semantics , Speech Perception/physiology , Young Adult
7.
J Neurosci; 34(43): 14318-23, 2014 Oct 22.
Article in English | MEDLINE | ID: mdl-25339744

ABSTRACT

Research from the previous decade suggests that word meaning is partially stored in distributed modality-specific cortical networks. However, little is known about the mechanisms by which semantic content from multiple modalities is integrated into a coherent multisensory representation. Therefore we aimed to characterize differences between integration of lexical-semantic information from a single modality compared with two sensory modalities. We used magnetoencephalography in humans to investigate changes in oscillatory neuronal activity while participants verified two features for a given target word (e.g., "bus"). Feature pairs consisted of either two features from the same modality (visual: "red," "big") or different modalities (auditory and visual: "red," "loud"). The results suggest that integrating modality-specific features of the target word is associated with enhanced high-frequency power (80-120 Hz), while integrating features from different modalities is associated with a sustained increase in low-frequency power (2-8 Hz). Source reconstruction revealed a peak in the anterior temporal lobe for low-frequency and high-frequency effects. These results suggest that integrating lexical-semantic knowledge at different cortical scales is reflected in frequency-specific oscillatory neuronal activity in unisensory and multisensory association networks.


Subjects
Auditory Cortex/physiology , Biological Clocks/physiology , Nerve Net/physiology , Semantics , Visual Cortex/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Cerebral Cortex/physiology , Female , Humans , Magnetoencephalography/methods , Male , Photic Stimulation/methods , Young Adult
8.
PLoS One; 9(7): e101042, 2014.
Article in English | MEDLINE | ID: mdl-25007074

ABSTRACT

In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to look at the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity around the theta range (4-6 Hz) in left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks linking distributed modality-specific networks and multimodal semantic hubs such as left ATL.
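One common way to quantify "long-range network interactions in the theta range" is the phase-locking value (PLV) between channel pairs. The sketch below simulates two channels sharing a theta rhythm and one unrelated channel; it assumes a PLV metric, which may not be the exact connectivity measure used in the study.

```python
# Illustrative phase-locking value (PLV) between simulated channels.
# PLV near 1 = consistent phase relation; near 0 = no phase coupling.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                           # assumed sampling rate (Hz)
t = np.arange(0, 10, 1/fs)
rng = np.random.default_rng(4)

theta_drive = np.sin(2*np.pi*5*t)                       # shared 5 Hz rhythm
atl = theta_drive + 0.5*rng.standard_normal(t.size)     # "left ATL" channel
remote = theta_drive + 0.5*rng.standard_normal(t.size)  # distant cortical site
unrelated = rng.standard_normal(t.size)                 # no shared rhythm

def plv(x, y, lo=4, hi=6, fs=fs):
    """PLV in [lo, hi] Hz from the band-limited analytic phase."""
    b, a = butter(4, [lo/(fs/2), hi/(fs/2)], btype="band")
    px = np.angle(hilbert(filtfilt(b, a, x)))
    py = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j*(px - py))))

coupled = plv(atl, remote)
baseline = plv(atl, unrelated)
print(coupled > baseline)   # shared theta rhythm yields higher phase locking
```

Because narrowband filtering inflates chance-level PLV, real analyses compare the observed value against surrogate or baseline distributions, as the contrast above gestures at.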


Subjects
Comprehension , Semantics , Temporal Lobe/physiology , Adolescent , Brain Mapping , Electroencephalography , Female , Humans , Male , Reading , Theta Rhythm , Young Adult
9.
J Cogn Neurosci; 26(8): 1644-53, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24456389

ABSTRACT

Language content and action/perception have been shown to activate common brain areas in previous neuroimaging studies. However, it is unclear whether overlapping cortical activation reflects a common neural source or adjacent, but distinct, sources. We address this issue by using multivoxel pattern analysis on fMRI data. Specifically, participants were instructed to engage in five tasks: (1) execute hand actions (AE), (2) observe hand actions (AO), (3) observe nonbiological motion (MO), (4) read action verbs, and (5) read nonaction verbs. A classifier was trained to distinguish between data collected from neural motor areas during (1) AE versus MO and (2) AO versus MO. These two algorithms were then used to test for a distinction between data collected during the reading of action versus nonaction verbs. The results show that the algorithm trained to distinguish between AE and MO distinguishes between word categories using signal recorded from the left parietal cortex and pre-SMA, but not from ventrolateral premotor cortex. In contrast, the algorithm trained to distinguish between AO and MO discriminates between word categories using the activity pattern in the left premotor and left parietal cortex. This shows that the sensitivity of premotor areas to language content is more similar to the process of observing others acting than to acting oneself. Furthermore, those parts of the brain that show comparable neural pattern for action execution and action word comprehension are high-level integrative motor areas rather than low-level motor areas.


Subjects
Brain Mapping/methods , Language , Motor Activity/physiology , Motor Cortex/physiology , Parietal Lobe/physiology , Visual Perception/physiology , Adult , Algorithms , Comprehension/physiology , Female , Hand/physiology , Humans , Magnetic Resonance Imaging , Male , Motion , Semantics , Young Adult
10.
J Cogn Neurosci; 24(11): 2237-47, 2012 Nov.
Article in English | MEDLINE | ID: mdl-22849399

ABSTRACT

Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word "grasp" elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59-70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416-423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical-semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance "It is hot here!" in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement.
The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed.


Subjects
Acoustic Stimulation/methods , Motor Cortex/physiology , Nerve Net/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Theory of Mind/physiology , Adolescent , Female , Humans , Magnetic Resonance Imaging/methods , Young Adult