Results 1 - 20 of 127
1.
Proc Natl Acad Sci U S A ; 121(5): e2308859121, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38271338

ABSTRACT

Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and the corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.


Subjects
Music, Humans, Music/psychology, Sensation, Cross-Cultural Comparison, Acoustics, Emotions, Auditory Perception
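
The map-comparison logic described in this abstract can be illustrated with a short, self-contained sketch. The data, group sizes, and clustering choices below are toy assumptions, not the study's actual pipeline; it only shows how group-average bodily sensation maps from two cultures might be correlated and clustered.

```python
# Hypothetical sketch: comparing group-average bodily sensation maps (BSMs)
# across two cultures. Assumes responses are already rasterized into
# fixed-length pixel vectors per participant and per musical stimulus.
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_west, n_east, n_songs, n_pixels = 50, 50, 8, 2000   # toy sizes, not the study's
bsm_west = rng.random((n_west, n_songs, n_pixels))     # placeholder colouring data
bsm_east = rng.random((n_east, n_songs, n_pixels))

# Group-average map per song and per culture
avg_west = bsm_west.mean(axis=0)        # (n_songs, n_pixels)
avg_east = bsm_east.mean(axis=0)

# Cross-cultural consistency: rank correlation of the average maps, song by song
consistency = [spearmanr(avg_west[s], avg_east[s])[0] for s in range(n_songs)]
print("song-wise West-East map correlations:", np.round(consistency, 2))

# Cluster songs by map similarity within each culture and compare the groupings
def cluster_songs(avg_maps, k=3):
    z = linkage(avg_maps, method="ward")
    return fcluster(z, t=k, criterion="maxclust")

print("West clusters:", cluster_songs(avg_west))
print("East clusters:", cluster_songs(avg_east))
```
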
2.
Neuroimage ; 297: 120712, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38945181

ABSTRACT

Relationships between humans are essential for how we see the world. Using fMRI, we explored the neural basis of homophily, a sociological concept describing the tendency to bond with similar others. Our comparison of brain activity between sisters, friends and acquaintances while they watched a movie indicates that sisters' brain activity is more similar than that of friends, and friends' activity is more similar than that of acquaintances. The increased similarity in brain activity, measured as inter-subject correlation (ISC), was found both in higher-order brain areas, including the default-mode network (DMN), and in sensory areas. The increased ISC could not be explained by genetic relatedness between sisters, nor by similarities in eye movements, emotional experiences, or physiological activity. Our findings shed light on the neural basis of homophily by revealing that similarity of brain activity in the DMN and sensory areas increases with the closeness of the relationship between people.
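
To make the pairwise ISC measure mentioned above concrete, here is a minimal sketch with toy data. The function name, array shapes and pair lists are illustrative assumptions rather than the study's actual preprocessing or statistics.

```python
# Hypothetical sketch of pairwise inter-subject correlation (ISC):
# correlate each voxel's time series between the two members of a pair,
# then compare the mean ISC across pair types (e.g. sisters vs. friends).
import numpy as np

def pairwise_isc(ts_a, ts_b):
    """Voxelwise Pearson correlation between two subjects.
    ts_a, ts_b: arrays of shape (n_timepoints, n_voxels)."""
    a = (ts_a - ts_a.mean(0)) / ts_a.std(0)
    b = (ts_b - ts_b.mean(0)) / ts_b.std(0)
    return (a * b).mean(0)          # (n_voxels,) correlation values

rng = np.random.default_rng(1)
n_t, n_vox = 200, 2000                               # toy dimensions
sisters = [(rng.standard_normal((n_t, n_vox)),) * 2 for _ in range(3)]   # identical data -> ISC ~ 1
friends = [(rng.standard_normal((n_t, n_vox)),
            rng.standard_normal((n_t, n_vox))) for _ in range(3)]        # independent data -> ISC ~ 0

for label, pairs in [("sisters", sisters), ("friends", friends)]:
    mean_isc = np.mean([pairwise_isc(a, b).mean() for a, b in pairs])
    print(f"{label}: mean ISC = {mean_isc:.3f}")
```
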

3.
Neuroimage ; 273: 120082, 2023 06.
Article in English | MEDLINE | ID: mdl-37030414

ABSTRACT

Laughter and crying are universal signals of prosociality and distress, respectively. Here we investigated the functional brain basis of perceiving laughter and crying using a naturalistic functional magnetic resonance imaging (fMRI) approach. We measured haemodynamic brain activity evoked by laughter and crying in three experiments with 100 subjects each. The subjects i) viewed a 20-minute medley of short video clips, ii) watched 30 min of a full-length feature film, and iii) listened to 13.5 min of a radio play, all of which contained bursts of laughter and crying. The intensity of laughing and crying in the videos and the radio play was annotated by independent observers, and the resulting time series were used to predict hemodynamic activity during laughter and crying episodes. Multivariate pattern analysis (MVPA) was used to test for regional selectivity in laughter- and crying-evoked activations. Laughter induced widespread activity in ventral visual cortex and in superior and middle temporal and motor cortices. Crying activated the thalamus, cingulate cortex along the anterior-posterior axis, insula, and orbitofrontal cortex. Both laughter and crying could be decoded accurately (66-77%, depending on the experiment) from the BOLD signal, and the voxels contributing most significantly to classification were in superior temporal cortex. These results suggest that perceiving laughter and crying engages distinct neural networks whose activity suppresses each other to manage appropriate behavioral responses to others' bonding and distress signals.


Subjects
Crying, Laughter, Humans, Crying/physiology, Brain/physiology, Brain Mapping, Gyrus Cinguli/physiology
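
The step of turning annotated laughter/crying intensity into predictors of hemodynamic activity can be sketched as follows. The double-gamma response function, data sizes, and least-squares fit below are generic stand-ins under stated assumptions, not the paper's exact GLM implementation.

```python
# Hypothetical sketch: convolve observer-annotated laughter/crying intensity
# with a canonical-style double-gamma HRF, then fit a simple voxelwise GLM.
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr, duration=32.0):
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)          # positive response peak
    under = gamma.pdf(t, 16) / 6.0  # undershoot
    hrf = peak - under
    return hrf / hrf.max()

tr = 2.0
rng = np.random.default_rng(2)
n_scans, n_vox = 600, 4000                       # toy sizes
laughter = rng.random(n_scans)                   # stand-in for annotated intensity
crying = rng.random(n_scans)

hrf = double_gamma_hrf(tr)
X = np.column_stack([
    np.convolve(laughter, hrf)[:n_scans],
    np.convolve(crying, hrf)[:n_scans],
    np.ones(n_scans),                            # intercept
])

Y = rng.standard_normal((n_scans, n_vox))        # placeholder BOLD data
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)    # rows: laughter, crying, intercept maps
print("beta map shape:", betas.shape)
```
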
4.
Proc Natl Acad Sci U S A ; 117(26): 15242-15252, 2020 06 30.
Article in English | MEDLINE | ID: mdl-32541016

ABSTRACT

Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory-motor task producing sound sequences via hand presses on a newly designed device ("monkey piano"). Catch trials were interspersed to ascertain the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences that were learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a "command apparatus" similar to the control of the hand, which was crucial for the evolution of tool use.


Subjects
Auditory Perception/physiology, Learning, Macaca mulatta/physiology, Motor Cortex/physiology, Sound, Animals, Brain Mapping, Evoked Potentials, Auditory, Female, Magnetic Resonance Imaging, Male
5.
Neuroimage ; 247: 118800, 2022 02 15.
Article in English | MEDLINE | ID: mdl-34896586

ABSTRACT

Neurophysiological and psychological models posit that emotions depend on connections across wide-spread corticolimbic circuits. While previous studies using pattern recognition on neuroimaging data have shown differences between various discrete emotions in brain activity patterns, less is known about the differences in functional connectivity. Thus, we employed multivariate pattern analysis on functional magnetic resonance imaging data (i) to develop a pipeline for applying pattern recognition in functional connectivity data, and (ii) to test whether connectivity patterns differ across emotion categories. Six emotions (anger, fear, disgust, happiness, sadness, and surprise) and a neutral state were induced in 16 participants using one-minute-long emotional narratives with natural prosody while brain activity was measured with functional magnetic resonance imaging (fMRI). We computed emotion-wise connectivity matrices both for whole-brain connections and for 10 previously defined functionally connected brain subnetworks and trained an across-participant classifier to categorize the emotional states based on whole-brain data and for each subnetwork separately. The whole-brain classifier performed above chance level with all emotions except sadness, suggesting that different emotions are characterized by differences in large-scale connectivity patterns. When focusing on the connectivity within the 10 subnetworks, classification was successful within the default mode system and for all emotions. We thus show preliminary evidence for consistently different sustained functional connectivity patterns for instances of emotion categories particularly within the default mode system.


Subjects
Connectome/methods, Emotions/physiology, Magnetic Resonance Imaging/methods, Pattern Recognition, Automated/methods, Adult, Female, Healthy Volunteers, Humans, Photic Stimulation
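
The core of the pipeline described above (emotion-wise connectivity matrices fed to an across-participant classifier) can be sketched in a few lines. The classifier, region count, and cross-validation layout are illustrative assumptions with toy data, not the authors' implementation.

```python
# Hypothetical sketch: classify emotion categories from functional-connectivity
# patterns with a leave-one-subject-out scheme.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_subj, n_emotions, n_rois, n_t = 16, 7, 30, 60   # toy sizes

X, y, groups = [], [], []
for s in range(n_subj):
    for e in range(n_emotions):
        ts = rng.standard_normal((n_t, n_rois))           # placeholder ROI time series
        fc = np.corrcoef(ts, rowvar=False)                # (n_rois, n_rois) connectivity
        iu = np.triu_indices(n_rois, k=1)
        X.append(fc[iu])                                  # vectorized upper triangle
        y.append(e)
        groups.append(s)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Across-participant classification: train on 15 subjects, test on the left-out one
scores = cross_val_score(LinearSVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean leave-one-subject-out accuracy: {scores.mean():.2f} (chance ~ {1/n_emotions:.2f})")
```
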
6.
Neuroimage ; 224: 117445, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33059053

ABSTRACT

Using movies and narratives as naturalistic stimuli in human neuroimaging studies has yielded significant advances in the understanding of cognitive and emotional functions. The relevant literature was reviewed, with emphasis on how the use of naturalistic stimuli has helped advance scientific understanding of human memory, attention, language, emotions, and social cognition in ways that would have been difficult otherwise. These advances include the discovery of a cortical hierarchy of temporal receptive windows, which supports the processing of dynamic information that accumulates over several time scales, such as immediate reactions versus slowly emerging patterns in social interactions. Naturalistic stimuli have also helped elucidate how the hippocampus supports segmentation and memorization of events in day-to-day life and have afforded insights into the attentional brain mechanisms underlying our ability to adopt specific perspectives during natural viewing. Further, neuroimaging studies with naturalistic stimuli have revealed the role of the default-mode network in narrative processing and in social cognition. Finally, by robustly eliciting genuine emotions, these stimuli have helped elucidate the brain basis of both basic and social emotions, which appear to manifest as highly overlapping yet distinguishable patterns of brain activity.


Subjects
Attention, Brain/diagnostic imaging, Emotions, Language, Memory, Motion Pictures, Narration, Social Cognition, Brain/physiology, Brain Mapping, Electroencephalography, Functional Neuroimaging, Humans, Magnetic Resonance Imaging, Neural Pathways
7.
Cereb Cortex ; 29(9): 4006-4016, 2019 08 14.
Article in English | MEDLINE | ID: mdl-30475982

ABSTRACT

Emotions can be characterized by dimensions of arousal and valence (pleasantness). While the functional brain bases of emotional arousal and valence have been actively investigated, the neuromolecular underpinnings remain poorly understood. We tested whether the opioid and dopamine systems involved in reward and motivational processes would be associated with emotional arousal and valence. We used in vivo positron emission tomography to quantify µ-opioid receptor and type 2 dopamine receptor (MOR and D2R, respectively) availability in brains of 35 healthy adult females. During subsequent functional magnetic resonance imaging carried out to monitor hemodynamic activity, the subjects viewed movie scenes of varying emotional content. Arousal and valence were associated with hemodynamic activity in brain regions involved in emotional processing, including amygdala, thalamus, and superior temporal sulcus. Cerebral MOR availability correlated negatively with the hemodynamic responses to arousing scenes in amygdala, hippocampus, thalamus, and hypothalamus, whereas no positive correlations were observed in any brain region. D2R availability-here reliably quantified only in striatum-was not associated with either arousal or valence. These results suggest that emotional arousal is regulated by the MOR system, and that cerebral MOR availability influences brain activity elicited by arousing stimuli.


Subjects
Arousal, Brain/physiology, Emotions/physiology, Receptors, Dopamine D2/metabolism, Receptors, Opioid, mu/metabolism, Adult, Brain/metabolism, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Middle Aged, Positron-Emission Tomography, Young Adult
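
The final step described in this abstract, relating PET-derived receptor availability to fMRI responses across subjects, reduces to a between-subject correlation per region. The region names and values below are placeholders under stated assumptions, not the study's data or preprocessing.

```python
# Hypothetical sketch: relate regional mu-opioid receptor availability (e.g. a PET
# binding-potential estimate) to BOLD responses evoked by arousing scenes, across subjects.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
regions = ["amygdala", "hippocampus", "thalamus", "hypothalamus"]
n_subj = 35

mor_bp = {r: rng.normal(2.0, 0.3, n_subj) for r in regions}        # PET availability per subject
bold_arousal = {r: rng.normal(0.5, 0.2, n_subj) for r in regions}  # GLM beta for arousal per subject

for r in regions:
    r_val, p_val = pearsonr(mor_bp[r], bold_arousal[r])
    print(f"{r:12s}  r = {r_val:+.2f}  p = {p_val:.3f}")
```
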
8.
Hum Brain Mapp ; 40(16): 4777-4788, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31400052

ABSTRACT

Individuals often align their emotional states during conversation. Here, we reveal how such emotional alignment is reflected in synchronization of brain activity across speakers and listeners. Two "speaker" subjects told emotional and neutral autobiographical stories while their hemodynamic brain activity was measured with functional magnetic resonance imaging (fMRI). The stories were recorded and played back to 16 "listener" subjects during fMRI. After scanning, both speakers and listeners rated the moment-to-moment valence and arousal of the stories. Time-varying similarity of the blood-oxygenation-level-dependent (BOLD) time series was quantified by intersubject phase synchronization (ISPS) between speaker-listener pairs. Telling and listening to the stories elicited similar emotions across speaker-listener pairs. Arousal was associated with increased speaker-listener neural synchronization in brain regions supporting attentional, auditory, somatosensory, and motor processing. Valence was associated with increased speaker-listener neural synchronization in brain regions involved in emotional processing, including amygdala, hippocampus, and temporal pole. Speaker-listener synchronization of subjective feelings of arousal was associated with increased neural synchronization in somatosensory and subcortical brain regions; synchronization of valence was associated with neural synchronization in parietal cortices and midline structures. We propose that emotion-dependent speaker-listener neural synchronization is associated with emotional contagion, thereby implying that listeners reproduce some aspects of the speaker's emotional state at the neural level.


Subjects
Brain/diagnostic imaging, Brain/physiology, Emotions/physiology, Adult, Arousal, Attention/physiology, Auditory Perception/physiology, Brain Mapping, Cerebrovascular Circulation/physiology, Female, Humans, Magnetic Resonance Imaging, Movement/physiology, Parietal Lobe/diagnostic imaging, Parietal Lobe/physiology, Sensation/physiology, Somatosensory Cortex/diagnostic imaging, Somatosensory Cortex/physiology, Speech, Young Adult
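
A minimal sketch of the intersubject phase synchronization (ISPS) idea used above: band-limit the signals, extract instantaneous phase with the Hilbert transform, and quantify moment-to-moment phase agreement for a speaker-listener pair. The filter band, sampling rate, and pairwise synchronization formula are generic assumptions, not the paper's exact parameters.

```python
# Hypothetical sketch of speaker-listener inter-subject phase synchronization (ISPS).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def isps_pair(sig_a, sig_b, fs=0.5, band=(0.04, 0.07)):
    """Time-resolved phase synchronization of two 1-D signals (toy example)."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    pa = np.angle(hilbert(filtfilt(b, a, sig_a)))
    pb = np.angle(hilbert(filtfilt(b, a, sig_b)))
    # 1 = perfectly in phase at that timepoint, 0 = opposite phase
    return np.abs((np.exp(1j * pa) + np.exp(1j * pb)) / 2)

rng = np.random.default_rng(5)
n_t = 300                                  # fMRI volumes; TR = 2 s -> fs = 0.5 Hz
speaker = rng.standard_normal(n_t)
listener = 0.6 * speaker + 0.4 * rng.standard_normal(n_t)   # partially shared signal

sync = isps_pair(speaker, listener)
print(f"mean pairwise ISPS: {sync.mean():.2f}")
```
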
9.
J Neurosci ; 37(25): 6125-6131, 2017 06 21.
Article in English | MEDLINE | ID: mdl-28536272

ABSTRACT

The size of human social networks significantly exceeds the network that can be maintained by social grooming or touching in other primates. It has been proposed that endogenous opioid release after social laughter would provide a neurochemical pathway supporting long-term relationships in humans (Dunbar, 2012), yet this hypothesis currently lacks direct neurophysiological support. We used PET and the µ-opioid-receptor (MOR)-specific ligand [11C]carfentanil to quantify laughter-induced endogenous opioid release in 12 healthy males. Before the social laughter scan, the subjects watched laughter-inducing comedy clips with their close friends for 30 min. Before the baseline scan, subjects spent 30 min alone in the testing room. Social laughter increased pleasurable sensations and triggered endogenous opioid release in the thalamus, caudate nucleus, and anterior insula. In addition, baseline MOR availability in the cingulate and orbitofrontal cortices was associated with the rate of social laughter. In a behavioral control experiment, pain threshold, a proxy of endogenous opioidergic activation, was elevated significantly more in both male and female volunteers after watching laughter-inducing comedy versus non-laughter-inducing drama in groups. Modulation of opioidergic activity by social laughter may be an important neurochemical pathway that supports the formation, reinforcement, and maintenance of human social bonds.

SIGNIFICANCE STATEMENT: Social contacts are vital to humans. The size of human social networks significantly exceeds the network that can be maintained by social grooming in other primates. Here, we used PET to show that endogenous opioid release after social laughter may provide a neurochemical mechanism supporting long-term relationships in humans. Participants were scanned twice: after a 30 min social laughter session and after spending 30 min alone in the testing room (baseline). Endogenous opioid release was stronger after laughter versus the baseline scan. Opioid receptor density in the frontal cortex predicted social laughter rates. Modulation of opioidergic activity by social laughter may be an important neurochemical mechanism reinforcing and maintaining social bonds between humans.


Subjects
Brain Chemistry/physiology, Endorphins/metabolism, Laughter/physiology, Social Environment, Adult, Brain Mapping, Female, Humans, Male, Object Attachment, Pleasure, Positron-Emission Tomography, Receptors, Opioid, mu/drug effects, Receptors, Opioid, mu/metabolism, Young Adult
10.
Neuroimage ; 181: 44-54, 2018 11 01.
Article in English | MEDLINE | ID: mdl-29964190

ABSTRACT

Recent advances in machine learning allow faster training, improved performance and increased interpretability of classification techniques. Consequently, their application in neuroscience is rapidly increasing. While classification approaches have proved useful in functional magnetic resonance imaging (fMRI) studies, there are concerns regarding extraction, reproducibility and visualization of brain regions that contribute most significantly to the classification. We addressed these issues using an fMRI classification scheme based on neural networks and compared a set of methods for extraction of category-related voxel importances in three simulated and two empirical datasets. The simulation data revealed that the proposed scheme successfully detects spatially distributed and overlapping activation patterns upon successful classification. Application of the proposed classification scheme to two previously published empirical fMRI datasets revealed robust importance maps that extensively overlap with univariate maps but also provide complementary information. Our results demonstrate increased statistical power of importance maps compared to univariate approaches for both detection of overlapping patterns and patterns with weak univariate information.


Subjects
Brain Mapping/methods, Brain/physiology, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Pattern Recognition, Automated/methods, Adult, Brain/diagnostic imaging, Brain Mapping/standards, Classification, Computer Simulation, Emotions/physiology, Female, Humans, Image Processing, Computer-Assisted/standards, Magnetic Resonance Imaging/standards, Male, Pattern Recognition, Automated/standards, Pattern Recognition, Visual/physiology, Social Perception, Young Adult
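
The general workflow of a neural-network classifier plus a voxel-importance read-out can be illustrated with a toy example. The permutation-importance read-out below is a generic stand-in, not the importance-extraction methods compared in the paper, and all data and sizes are placeholders.

```python
# Hypothetical sketch: a neural-network "fMRI" classifier with a generic
# voxel-importance read-out (permutation importance) on simulated data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, n_vox = 400, 500                       # toy sizes
X = rng.standard_normal((n_samples, n_vox))
y = rng.integers(0, 2, n_samples)
X[y == 1, :20] += 0.8                             # make the first 20 "voxels" informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")

imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:10]
print("most important voxels:", top)              # should recover mostly indices < 20
```
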
11.
Neuroimage ; 167: 309-315, 2018 02 15.
Article in English | MEDLINE | ID: mdl-29175201

ABSTRACT

Recent functional studies suggest that noise sensitivity, a trait describing attitudes towards noise and predicting noise annoyance, is associated with altered processing in the central auditory system. In the present work, we examined whether noise sensitivity is related to the structural anatomy of auditory and limbic brain areas. Anatomical MR brain images of 80 subjects were parcellated with FreeSurfer to measure the grey matter volume, cortical thickness, cortical area and folding index of anatomical structures in the temporal lobe and insular cortex. The grey matter volumes of the amygdala and hippocampus were measured as well. According to our findings, noise sensitivity is associated with grey matter volume in the selected structures. Among these, we propose and discuss particular areas, previously linked to auditory perceptual, emotional and interoceptive processing, in which larger grey matter volume appears to be related to higher noise sensitivity.


Subjects
Amygdala/anatomy & histology, Auditory Perception/physiology, Cerebral Cortex/anatomy & histology, Gray Matter/anatomy & histology, Hippocampus/anatomy & histology, Noise, Personality/physiology, Adult, Amygdala/diagnostic imaging, Auditory Cortex/anatomy & histology, Auditory Cortex/diagnostic imaging, Cerebral Cortex/diagnostic imaging, Female, Gray Matter/diagnostic imaging, Hippocampus/diagnostic imaging, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Noise/adverse effects, Young Adult
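
A brain-structure-to-trait analysis of this kind boils down to correlating regional morphometry with a questionnaire score across subjects, usually after adjusting for head size. The column names, volumes, and the simple residualization step below are hypothetical illustrations, not the study's statistical model.

```python
# Hypothetical sketch: relate a noise-sensitivity score to regional grey-matter
# volumes (e.g. FreeSurfer outputs), residualized against intracranial volume.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 80
df = pd.DataFrame({
    "noise_sensitivity": rng.normal(0, 1, n),
    "eTIV": rng.normal(1500, 120, n),                 # estimated intracranial volume (cm^3)
    "amygdala_vol": rng.normal(1.6, 0.2, n),          # placeholder regional volumes (cm^3)
    "hippocampus_vol": rng.normal(4.0, 0.4, n),
    "insula_vol": rng.normal(7.0, 0.6, n),
})

def adjust_for_tiv(vol, tiv):
    """Residualize a regional volume against eTIV with simple linear regression."""
    slope, intercept = np.polyfit(tiv, vol, 1)
    return vol - (slope * tiv + intercept)

for region in ["amygdala_vol", "hippocampus_vol", "insula_vol"]:
    adj = adjust_for_tiv(df[region].to_numpy(), df["eTIV"].to_numpy())
    r, p = pearsonr(df["noise_sensitivity"], adj)
    print(f"{region:16s} r = {r:+.2f}, p = {p:.3f}")
```
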
12.
Cereb Cortex ; 27(8): 4257-4266, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28541428

ABSTRACT

Neuroimaging studies have shown that seeing others in pain activates brain regions that are involved in first-hand pain, suggesting that shared neuromolecular pathways support processing of first-hand and vicarious pain. We tested whether the dopamine and opioid neurotransmitter systems involved in nociceptive processing also contribute to vicarious pain experience. We used in vivo positron emission tomography to quantify type 2 dopamine and µ-opioid receptor (D2R and MOR, respectively) availabilities in brains of 35 subjects. During functional magnetic resonance imaging, the subjects watched short movie clips depicting persons in painful and painless situations. Painful scenes activated pain-responsive brain regions including anterior insulae, thalamus and secondary somatosensory cortices, as well as posterior superior temporal sulci. MOR availability correlated negatively with the haemodynamic responses during painful scenes in anterior and posterior insulae, thalamus, secondary and primary somatosensory cortices, primary motor cortex, and superior temporal sulci. MOR availability correlated positively with orbitofrontal haemodynamic responses during painful scenes. D2R availability was not correlated with the haemodynamic responses in any brain region. These results suggest that the opioid system contributes to neural processing of vicarious pain, and that interindividual differences in opioidergic system could explain why some individuals react more strongly than others to seeing pain.


Subjects
Brain/metabolism, Empathy/physiology, Pain Perception/physiology, Receptors, Dopamine D2/metabolism, Receptors, Opioid, mu/metabolism, Visual Perception/physiology, Adult, Brain/diagnostic imaging, Brain/drug effects, Brain Mapping, Cerebrovascular Circulation/drug effects, Cerebrovascular Circulation/physiology, Empathy/drug effects, Female, Fentanyl/analogs & derivatives, Humans, Magnetic Resonance Imaging, Middle Aged, Multimodal Imaging, Oxygen/blood, Pain Perception/drug effects, Positron-Emission Tomography, Raclopride, Radiopharmaceuticals, Receptors, Opioid, mu/antagonists & inhibitors, Social Perception, Visual Perception/drug effects, Young Adult
13.
Neuroimage ; 157: 108-117, 2017 08 15.
Article in English | MEDLINE | ID: mdl-27932074

ABSTRACT

During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape the neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during the processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was assessed with a leave-one-participant-out scheme: each participant in turn was held out for testing while the model was trained on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyri (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in the speech and music stimuli. A concurrent visual stimulus modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. These anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping/methods, Music, Visual Perception/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Speech Perception/physiology, Young Adult
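
The leave-one-participant-out decoding scheme described above can be sketched with a simple stand-in classifier. An L1-penalized logistic regression substitutes here for the paper's sparsity-promoting Bayesian classifier, and all data shapes and labels are toy placeholders.

```python
# Hypothetical sketch of leave-one-participant-out decoding of stimulus category.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(8)
n_subj, n_trials, n_vox = 16, 24, 800
X = rng.standard_normal((n_subj * n_trials, n_vox))
y = np.tile(np.repeat([0, 1, 2], n_trials // 3), n_subj)   # piano / speech / singing
groups = np.repeat(np.arange(n_subj), n_trials)

# L1 penalty as a stand-in for a sparsity-promoting prior
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"accuracy, train on 15 / test on the held-out participant: {scores.mean():.2f}")
```
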
14.
Hum Brain Mapp ; 38(7): 3360-3376, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28379608

ABSTRACT

To understand temporally extended events, the human brain needs to accumulate information continuously across time. Interruptions that require switching of attention to other event sequences disrupt this process. To reveal the neural mechanisms supporting integration of event information, we measured brain activity with functional magnetic resonance imaging (fMRI) in 18 participants while they viewed 6.5-minute excerpts from three movies (i) consecutively and (ii) as interleaved segments of approximately 50 s in duration. We measured inter-subject reliability of brain activity by calculating inter-subject correlations (ISC) of the fMRI signals and analyzed activation time courses with a general linear model (GLM). Interleaving decreased the ISC in the posterior temporal lobes, medial prefrontal cortex, superior precuneus, medial occipital cortex, and cerebellum. In the GLM analyses, the posterior temporal lobes were activated more consistently by instances of speech when the movies were viewed consecutively than as interleaved segments. By contrast, low-level auditory and visual stimulus features and editing boundaries caused similar activity patterns in both conditions. In the medial occipital cortex, decreases in ISC were seen in short bursts throughout the movie clips. By contrast, the other areas showed longer-lasting differences in ISC during isolated scenes depicting socially relevant and suspenseful content, such as deception or interpersonal conflict. The areas in the posterior temporal lobes also showed sustained activity during continuous actions and were deactivated when actions ended at scene boundaries. Our results suggest that the posterior temporal and dorsomedial prefrontal cortices, as well as the cerebellum and dorsal precuneus, support the integration of events into coherent event sequences.
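
The condition comparison at the heart of this design, ISC under consecutive versus interleaved viewing, can be illustrated with a minimal sketch for a single region. The toy data and the amount of shared signal per condition are assumptions; a real analysis would run this voxelwise with permutation statistics.

```python
# Hypothetical sketch: compare mean pairwise inter-subject correlation (ISC)
# between a consecutive and an interleaved viewing condition for one region.
import numpy as np
from itertools import combinations

def mean_pairwise_isc(ts):
    """ts: (n_subjects, n_timepoints) -> mean Pearson r over all subject pairs."""
    r = np.corrcoef(ts)
    pairs = list(combinations(range(ts.shape[0]), 2))
    return np.mean([r[i, j] for i, j in pairs])

rng = np.random.default_rng(9)
n_subj, n_t = 18, 200
shared = rng.standard_normal(n_t)
consecutive = 0.7 * shared + 0.7 * rng.standard_normal((n_subj, n_t))  # stronger shared signal
interleaved = 0.4 * shared + 0.9 * rng.standard_normal((n_subj, n_t))  # weaker shared signal

print(f"ISC consecutive: {mean_pairwise_isc(consecutive):.2f}")
print(f"ISC interleaved: {mean_pairwise_isc(interleaved):.2f}")
```
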

15.
Cereb Cortex ; 26(6): 2563-2573, 2016 06.
Article in English | MEDLINE | ID: mdl-25924952

ABSTRACT

Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.


Subjects
Brain Mapping/methods, Brain/physiology, Emotions/physiology, Magnetic Resonance Imaging/methods, Pattern Recognition, Automated/methods, Adult, Female, Humans, Imagination/physiology, Male, Motion Perception/physiology, Multivariate Analysis, Neuropsychological Tests, Photic Stimulation, Young Adult
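
The cross-induction generalization test described above (train on one induction method, test on the other) has a simple structure that the following sketch illustrates with toy data; the classifier, pattern sizes, and simulated "emotion prototypes" are assumptions, not the study's pipeline.

```python
# Hypothetical sketch of cross-induction generalization: train an emotion
# classifier on movie-induced patterns and test it on imagery-induced patterns.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(10)
n_trials, n_vox, n_emotions = 120, 600, 6
y_movie = rng.integers(0, n_emotions, n_trials)
y_imagery = rng.integers(0, n_emotions, n_trials)

# Shared emotion-specific signal plus noise in both induction conditions
prototypes = rng.standard_normal((n_emotions, n_vox))
X_movie = prototypes[y_movie] + rng.standard_normal((n_trials, n_vox))
X_imagery = prototypes[y_imagery] + rng.standard_normal((n_trials, n_vox))

clf = LinearSVC().fit(X_movie, y_movie)               # train on one induction method...
acc = clf.score(X_imagery, y_imagery)                 # ...test on the other
print(f"movie -> imagery generalization accuracy: {acc:.2f} (chance ~ {1/n_emotions:.2f})")
```
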
16.
Neuroimage ; 124(Pt A): 224-231, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26364862

ABSTRACT

Low-level (timbral) and high-level (tonal and rhythmical) musical features have been shown, in functional magnetic resonance imaging (fMRI) studies of continuous music listening, to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to assess the replicability of those previous findings. Participants' fMRI responses during continuous listening to a tango Nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. Replicability between the previous and the present study was assessed with two approaches: (a) correlating the respective activation maps, and (b) computing the overlap of active voxels between the datasets at variable levels of ranked significance. Activity elicited by timbral features was more replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features than for high-level ones. The processing of high-level features is probably more sensitive to the state and traits of the listeners, as well as to their musical background.


Subjects
Auditory Perception/physiology, Brain/physiology, Music, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Reproducibility of Results, Young Adult
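
The two replicability checks named in this abstract, map correlation and ranked-voxel overlap, are easy to make concrete. The maps and thresholds below are toy placeholders under stated assumptions, not the study's data.

```python
# Hypothetical sketch of the two replicability checks:
# (a) correlating two activation maps, and
# (b) overlap of the top-ranked voxels at increasingly strict thresholds.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(11)
n_vox = 20000
map_a = rng.standard_normal(n_vox)
map_b = 0.6 * map_a + 0.8 * rng.standard_normal(n_vox)   # partially replicating map

r, _ = pearsonr(map_a, map_b)
print(f"(a) map correlation: r = {r:.2f}")

for top_fraction in (0.10, 0.05, 0.01):
    k = int(n_vox * top_fraction)
    top_a = set(np.argsort(map_a)[-k:])
    top_b = set(np.argsort(map_b)[-k:])
    overlap = len(top_a & top_b) / k
    print(f"(b) overlap of top {top_fraction:.0%} voxels: {overlap:.2f}")
```
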
17.
Neuroimage ; 124(Pt A): 858-868, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26419388

ABSTRACT

Spatial and non-spatial information about sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated into perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black cat and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform and middle temporal, or MT, areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical "what" and "where" pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Animals, Cats, Cortical Synchronization, Electroencephalography, Evoked Potentials, Auditory/physiology, Female, Humans, Magnetoencephalography, Male, Middle Aged, Photic Stimulation, Visual Cortex/physiology, Vocalization, Animal, Young Adult
18.
Neuroimage ; 129: 214-223, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26774614

ABSTRACT

Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.


Subjects
Prefrontal Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetoencephalography, Male, Signal Processing, Computer-Assisted, Young Adult
19.
Neuroimage ; 138: 242-247, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27238727

ABSTRACT

In non-human primates, opioid-receptor blockade increases social grooming, and the endogenous opioid system has therefore been hypothesized to support maintenance of long-term relationships in humans as well. Here we tested whether social touch modulates opioidergic activation in humans using in vivo positron emission tomography (PET). Eighteen male participants underwent two PET scans with [11C]carfentanil, a ligand specific to µ-opioid receptors (MOR). During the social touch scan, the participants lay in the scanner while their partners caressed their bodies in a non-sexual fashion. In the baseline scan, participants lay alone in the scanner. Social touch triggered pleasurable sensations and increased MOR availability in the thalamus, striatum, and frontal, cingulate, and insular cortices. Modulation of activity of the opioid system by social touching might provide a neurochemical mechanism reinforcing social bonds between humans.


Subjects
Brain/metabolism, Object Attachment, Pleasure/physiology, Receptors, Opioid, mu/metabolism, Social Behavior, Touch Perception/physiology, Touch/physiology, Adult, Female, Humans, Male, Molecular Imaging/methods, Positron-Emission Tomography/methods
20.
Eur J Neurosci ; 44(9): 2673-2684, 2016 11.
Article in English | MEDLINE | ID: mdl-27602806

ABSTRACT

Networks have become a standard tool for analyzing functional magnetic resonance imaging (fMRI) data. In this approach, brain areas and their functional connections are mapped to the nodes and links of a network. Even though this mapping reduces the complexity of the underlying data, it remains challenging to understand the structure of the resulting networks due to the large number of nodes and links. One solution is to partition networks into modules and then investigate the modules' composition and relationship with brain functioning. While this approach works well for single networks, understanding differences between two networks by comparing their partitions is difficult and alternative approaches are thus necessary. To this end, we present a coarse-graining framework that uses a single set of data-driven modules as a frame of reference, enabling one to zoom out from the node- and link-level details. As a result, differences in the module-level connectivity can be understood in a transparent, statistically verifiable manner. We demonstrate the feasibility of the method by applying it to networks constructed from fMRI data recorded from 13 healthy subjects during rest and movie viewing. While independently partitioning the rest and movie networks is shown to yield little insight, the coarse-graining framework enables one to pinpoint differences in the module-level structure, such as the increased number of intra-module links within the visual cortex during movie viewing. In addition to quantifying differences due to external stimuli, the approach could also be applied in clinical settings, such as comparing patients with healthy controls.


Subjects
Connectome, Visual Cortex/physiology, Humans, Magnetic Resonance Imaging, Models, Neurological, Visual Perception
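
The coarse-graining idea described above, using one reference partition of nodes into modules and then comparing module-level connectivity between conditions, can be sketched as follows. The networks, the partition, and the "visual module" boost are toy placeholders, not the paper's data or statistical verification procedure.

```python
# Hypothetical sketch of module-level coarse-graining of functional brain networks:
# aggregate node-level link weights into a module-by-module matrix per condition,
# then inspect condition differences (e.g. rest vs. movie viewing).
import numpy as np

rng = np.random.default_rng(12)
n_nodes, n_modules = 90, 6
partition = rng.integers(0, n_modules, n_nodes)          # reference module of each node

def module_connectivity(adj, partition, n_modules):
    """Average link weight between (and within) modules."""
    M = np.zeros((n_modules, n_modules))
    counts = np.zeros((n_modules, n_modules))
    for i in range(len(adj)):
        for j in range(len(adj)):
            if i != j:
                M[partition[i], partition[j]] += adj[i, j]
                counts[partition[i], partition[j]] += 1
    return M / np.maximum(counts, 1)

def toy_network(extra_visual=0.0):
    adj = np.abs(rng.normal(0.2, 0.05, (n_nodes, n_nodes)))
    adj = (adj + adj.T) / 2
    visual = partition == 0                               # pretend module 0 is "visual"
    adj[np.ix_(visual, visual)] += extra_visual           # boost intra-module links
    return adj

rest = module_connectivity(toy_network(0.0), partition, n_modules)
movie = module_connectivity(toy_network(0.3), partition, n_modules)
diff = movie - rest
print("largest module-level increase (movie - rest):",
      np.unravel_index(diff.argmax(), diff.shape))
```
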