Results 1 - 18 of 18
1.
Neuroimage ; 216: 116191, 2020 08 01.
Article in English | MEDLINE | ID: mdl-31525500

ABSTRACT

Keeping time is fundamental for our everyday existence. Various isochronous activities, such as locomotion, require us to use internal timekeeping. This phenomenon also comes into play in other human pursuits such as dance and music. When listening to music, we spontaneously perceive and predict its beat. The process of beat perception comprises both beat inference and beat maintenance, their relative importance depending on the salience of the beat in the music. To study functional connectivity associated with these processes in a naturalistic situation, we used functional magnetic resonance imaging to measure brain responses of participants while they listened to a piece of music containing strong contrasts in beat salience. Subsequently, we utilized dynamic graph analysis and psychophysiological interactions (PPI) analysis in connection with computational modelling of beat salience to investigate how functional connectivity manifests these processes. As the main effect, correlation analyses between the obtained dynamic graph measures and the beat salience measure revealed increased centrality in auditory-motor cortices, cerebellum, and extrastriate visual areas during low beat salience, whereas regions of the default mode and central executive networks displayed high centrality during high beat salience. PPI analyses revealed partial dissociation of functional networks belonging to this pathway, indicating complementary neural mechanisms crucial in beat inference and maintenance, processes pivotal for extracting and predicting temporal regularities in our environment.
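The dynamic graph analysis described in this abstract rests on a simple idea: build a connectivity graph for each sliding time window and track each region's centrality over time. A minimal sketch of that idea follows (toy data, illustrative window length and threshold; not the authors' pipeline):

```python
import numpy as np

def sliding_window_centrality(ts, win=30, thresh=0.5):
    """Degree centrality of each region over sliding-window correlation graphs.

    ts: (time, regions) array of regional fMRI time series.
    Returns an array of shape (windows, regions).
    """
    n_t, n_r = ts.shape
    cent = []
    for start in range(n_t - win + 1):
        c = np.corrcoef(ts[start:start + win].T)   # regions x regions
        adj = (np.abs(c) > thresh).astype(int)     # binarize into a graph
        np.fill_diagonal(adj, 0)                   # drop self-connections
        cent.append(adj.sum(axis=1) / (n_r - 1))   # normalized degree
    return np.array(cent)

# Toy example: two correlated regions plus one independent region
rng = np.random.default_rng(0)
shared = rng.normal(size=200)
ts = np.column_stack([shared + 0.1 * rng.normal(size=200),
                      shared + 0.1 * rng.normal(size=200),
                      rng.normal(size=200)])
cent = sliding_window_centrality(ts)
print(cent.shape)  # (171, 3)
```

In the toy data, the two regions sharing a signal stay central across windows while the independent region does not, mirroring how centrality time courses can then be correlated against a stimulus measure such as beat salience.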


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Cerebellum/physiology; Connectome/psychology; Motor Cortex/physiology; Music/psychology; Acoustic Stimulation/methods; Adult; Auditory Cortex/diagnostic imaging; Cerebellum/diagnostic imaging; Connectome/methods; Female; Humans; Magnetic Resonance Imaging/methods; Male; Motor Cortex/diagnostic imaging; Periodicity; Young Adult
2.
Hum Brain Mapp ; 38(6): 2955-2970, 2017 06.
Article in English | MEDLINE | ID: mdl-28349620

ABSTRACT

Musical expertise is visible both in the morphology and functionality of the brain. Recent research indicates that functional integration between multi-sensory, somato-motor, default-mode (DMN), and salience (SN) networks of the brain differentiates musicians from non-musicians during resting state. Here, we aimed at determining whether brain networks differentially exchange information in musicians as opposed to non-musicians during naturalistic music listening. Whole-brain graph-theory analyses were performed on participants' fMRI responses. Group-level differences revealed that musicians' primary hubs comprised cerebral and cerebellar sensorimotor regions whereas non-musicians' dominant hubs encompassed DMN-related regions. Community structure analyses of the key hubs revealed greater integration of motor and somatosensory homunculi representing the upper limbs and torso in musicians. Furthermore, musicians who started training at an earlier age exhibited greater centrality in the auditory cortex, and areas related to top-down processes, attention, emotion, somatosensory processing, and non-verbal processing of speech. We here reveal how brain networks organize themselves in a naturalistic music listening situation wherein musicians automatically engage neural networks that are action-based while non-musicians use those that are perception-based to process an incoming auditory stream. Hum Brain Mapp 38:2955-2970, 2017. © 2017 Wiley Periodicals, Inc.


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Brain Mapping; Music; Neural Pathways/physiology; Acoustic Stimulation; Adult; Auditory Cortex/diagnostic imaging; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Models, Neurological; Neural Pathways/diagnostic imaging; Oxygen/blood; Young Adult
3.
Neuroimage ; 124(Pt A): 224-231, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26364862

ABSTRACT

Low-level (timbral) and high-level (tonal and rhythmical) musical features during continuous listening to music, studied by functional magnetic resonance imaging (fMRI), have been shown to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to study the replicability of previous findings. Participants' fMRI responses during continuous listening to a tango nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. Replicability between the previous results and the present study was assessed by two approaches: (a) correlating the respective activation maps, and (b) computing the overlap of active voxels between datasets at variable levels of ranked significance. Activity elicited by timbral features was better replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features as compared to higher-level features. The processing of such high-level features is probably more sensitive to the state and traits of the listeners, as well as their background in music.
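The central operation in this abstract, correlating a computationally extracted feature time course voxelwise against the fMRI signal, can be sketched as follows (toy data and illustrative names, not the study's code):

```python
import numpy as np

def voxelwise_correlation(bold, feature):
    """Pearson correlation between one feature time course and every voxel.

    bold: (time, voxels); feature: (time,). Returns (voxels,) of r values.
    """
    b = bold - bold.mean(axis=0)          # center each voxel's time series
    f = feature - feature.mean()          # center the feature time course
    num = b.T @ f
    den = np.sqrt((b ** 2).sum(axis=0) * (f ** 2).sum())
    return num / den

rng = np.random.default_rng(1)
feature = rng.normal(size=100)
bold = rng.normal(size=(100, 5))
bold[:, 0] += 2 * feature                 # voxel 0 tracks the feature
r = voxelwise_correlation(bold, feature)
```

The resulting r map is the per-dataset activation map; the replicability checks described above amount to correlating two such maps and counting overlapping suprathreshold voxels.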


Subject(s)
Auditory Perception/physiology; Brain/physiology; Music; Acoustic Stimulation; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Reproducibility of Results; Young Adult
4.
Neuroimage ; 88: 170-80, 2014 03.
Article in English | MEDLINE | ID: mdl-24269803

ABSTRACT

We investigated neural correlates of musical feature processing with a decoding approach. To this end, we used a method that combines computational extraction of musical features with regularized multiple regression (LASSO). Optimal model parameters were determined by maximizing the decoding accuracy using a leave-one-out cross-validation scheme. The method was applied to functional magnetic resonance imaging (fMRI) data that were collected using a naturalistic paradigm, in which participants' brain responses were recorded while they were continuously listening to pieces of real music. The dependent variables comprised musical feature time series that were computationally extracted from the stimulus. We expected timbral features to obtain a higher prediction accuracy than rhythmic and tonal ones. Moreover, we expected the areas significantly contributing to the decoding models to be consistent with areas of significant activation observed in previous research using a naturalistic paradigm with fMRI. Of the six musical features considered, five could be significantly predicted for the majority of participants. The areas significantly contributing to the optimal decoding models agreed to a great extent with results obtained in previous studies. In particular, areas in the superior temporal gyrus, Heschl's gyrus, Rolandic operculum, and cerebellum contributed to the decoding of timbral features. For the decoding of the rhythmic feature, we found the bilateral superior temporal gyrus, right Heschl's gyrus, and hippocampus to contribute most. The tonal feature, however, could not be significantly predicted, suggesting a higher inter-participant variability in its neural processing. A subsequent classification experiment revealed that segments of the stimulus could be classified from the fMRI data with significant accuracy. 
The present findings provide compelling evidence for the involvement of the auditory cortex, the cerebellum and the hippocampus in the processing of musical features during continuous listening to music.
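The decoding scheme described above, regularized regression with leave-one-out cross-validation, can be sketched as follows. This is a minimal illustrative implementation (a small coordinate-descent LASSO on toy data), not the authors' code, and the regularization strength is an arbitrary choice:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal LASSO via coordinate descent.

    Solves min_w 0.5/n * ||y - Xw||^2 + lam * ||w||_1.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]        # partial residual for j
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z  # soft threshold
    return w

def loo_predictions(X, y, lam):
    """Leave-one-out cross-validated predictions of a feature from brain data."""
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i             # hold out sample i
        w = lasso_cd(X[mask], y[mask], lam)
        preds[i] = X[i] @ w
    return preds

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))                     # stand-in for fMRI predictors
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=60)  # stand-in feature
preds = loo_predictions(X, y, lam=0.05)
acc = np.corrcoef(preds, y)[0, 1]  # decoding accuracy as prediction-target correlation
```

The held-out correlation `acc` plays the role of the decoding accuracy maximized when tuning model parameters.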


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Brain Mapping/methods; Cerebellum/physiology; Hippocampus/physiology; Music; Signal Processing, Computer-Assisted; Adult; Auditory Cortex/diagnostic imaging; Cerebellum/diagnostic imaging; Female; Hippocampus/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Young Adult
5.
Neuroimage ; 83: 627-36, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23810975

ABSTRACT

We aimed at predicting the temporal evolution of brain activity in naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional magnetic resonance imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations and were then tested in a cross-validation setting in order to evaluate the robustness of the resulting models across stimuli. To further assess the generalizability of the models, we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several brain areas belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure that processed complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supportive evidence for hemispheric specialization for categorical sounds with realistic stimuli.
We herewith introduce a powerful means to predict brain responses to music, speech, or soundscapes across a large variety of contexts.


Subject(s)
Auditory Perception/physiology; Brain Mapping; Brain/physiology; Functional Laterality/physiology; Music; Adult; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Principal Component Analysis; Young Adult
6.
Ann N Y Acad Sci ; 1530(1): 18-22, 2023 12.
Article in English | MEDLINE | ID: mdl-37847675

ABSTRACT

Music listening is a dynamic process that entails complex interactions between sensory, cognitive, and emotional processes. The naturalistic paradigm provides a means to investigate these processes in an ecologically valid manner by allowing experimental settings that mimic real-life musical experiences. In this paper, we highlight the importance of the naturalistic paradigm in studying dynamic music processing and discuss how it allows for investigating both the segregation and integration of brain processes using model-based and model-free methods. We further suggest that studying individual difference-modulated music processing in this paradigm can provide insights into the mechanisms of brain plasticity, which can have implications for the development of interventions and therapies in a personalized way. Finally, despite the challenges that the naturalistic paradigm poses, we end with a discussion on future prospects of music and neuroscience research, especially with the continued development and refinement of naturalistic paradigms and the adoption of open science practices.


Subject(s)
Brain Mapping; Music; Humans; Brain Mapping/methods; Auditory Perception; Magnetic Resonance Imaging; Brain
7.
PLoS One ; 18(7): e0287975, 2023.
Article in English | MEDLINE | ID: mdl-37471415

ABSTRACT

Individuals choose varying music listening strategies to fulfill particular mood-regulation goals. However, ineffective musical choices, and a lack of awareness of their effects, can be detrimental to well-being and may lead to adverse outcomes like anxiety or depression. In our study, we use the social media platform Reddit to perform a large-scale analysis to unearth the music-mediated mood-regulation goals that individuals pursue in the context of depression. A mixed-methods approach involving natural language processing techniques followed by qualitative analysis was applied to all music-related posts to identify the various music-listening strategies and group them into healthy and unhealthy associations. Analysis of the music content (acoustic features and lyrical themes) accompanying healthy and unhealthy associations showed significant differences. Individuals resorting to unhealthy strategies gravitate towards low-valence tracks. Moreover, lyrical themes associated with unhealthy strategies incorporated tracks with low optimism, high blame, and high self-reference. Our findings demonstrate that being mindful of the objectives of music use and its subsequent effects, and aligning the two with well-being outcomes, is imperative for a comprehensive understanding of the effectiveness of music.


Subject(s)
Music Therapy; Music; Humans; Depression/therapy; Anxiety; Affect; Music Therapy/methods
8.
Neuroimage ; 59(4): 3677-89, 2012 Feb 15.
Article in English | MEDLINE | ID: mdl-22116038

ABSTRACT

We investigated the neural underpinnings of timbral, tonal, and rhythmic features of a naturalistic musical stimulus. Participants were scanned with functional Magnetic Resonance Imaging (fMRI) while listening to a stimulus with a rich musical structure, a modern tango. We correlated temporal evolutions of timbral, tonal, and rhythmic features of the stimulus, extracted using acoustic feature extraction procedures, with the fMRI time series. Results corroborate those obtained with controlled stimuli in previous studies and highlight additional areas recruited during musical feature processing. While timbral feature processing was associated with activations in cognitive areas of the cerebellum, and sensory and default mode network cerebrocortical areas, musical pulse and tonality processing recruited cortical and subcortical cognitive, motor and emotion-related circuits. In sum, by combining neuroimaging, acoustic feature extraction and behavioral methods, we revealed the large-scale cognitive, motor and limbic brain circuitry dedicated to acoustic feature processing during listening to a naturalistic stimulus. In addition to these novel findings, our study has practical relevance as it provides a powerful means to localize neural processing of individual acoustical features, be it those of music, speech, or soundscapes, in ecological settings.


Subject(s)
Auditory Perception/physiology; Brain/physiology; Music; Nerve Net/physiology; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
9.
Sci Rep ; 12(1): 2672, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35177683

ABSTRACT

Movement is a universal response to music, with dance often taking place in social settings. Although previous work has suggested that socially relevant information, such as personality and gender, is encoded in dance movement, the generalizability of previous work is limited. The current study aims to decode dancers' gender, personality traits, and music preferences from music-induced movements. We propose a method that predicts such individual differences from free dance movements, and demonstrate its robustness by using two datasets collected with different musical stimuli. In addition, we introduce a novel measure to explore the relative importance of different joints in predicting individual differences. Results demonstrated near-perfect classification of gender and notably high prediction of personality and music preferences. Furthermore, the learned models generalized across datasets, highlighting the importance of certain joints in intrinsic movement patterns specific to individual differences. Results further support theories of embodied music cognition and the role of bodily movement in musical experiences by demonstrating the influence of gender, personality, and music preferences on embodied responses to heard music.

10.
PLoS One ; 17(1): e0261151, 2022.
Article in English | MEDLINE | ID: mdl-35020739

ABSTRACT

The experience often described as feeling moved, understood chiefly as a social-relational emotion with social bonding functions, has gained significant research interest in recent years. Although listening to music often evokes what people describe as feeling moved, very little is known about the appraisals or musical features contributing to the experience. In the present study, we investigated experiences of feeling moved in response to music using a continuous rating paradigm. A total of 415 US participants completed an online experiment where they listened to seven moving musical excerpts and rated their experience while listening. Each excerpt was randomly coupled with one of seven rating scales (perceived sadness, perceived joy, feeling moved or touched, sense of connection, perceived beauty, warmth [in the chest], or chills) for each participant. The results revealed that musically evoked experiences of feeling moved are associated with a similar pattern of appraisals, physiological sensations, and trait correlations as feeling moved by videos depicting social scenarios (found in previous studies). Feeling moved or touched by both sadly and joyfully moving music was associated with experiencing a sense of connection and perceiving joy in the music, while perceived sadness was associated with feeling moved or touched only in the case of sadly moving music. Acoustic features related to arousal contributed to feeling moved only in the case of joyfully moving music. Finally, trait empathic concern was positively associated with feeling moved or touched by music. These findings support the role of social cognitive and empathic processes in music listening, and highlight the social-relational aspects of feeling moved or touched by music.


Subject(s)
Emotions/physiology; Music; Adult; Aged; Arousal; Chills; Empathy; Female; Humans; Male; Middle Aged; Sadness; Young Adult
11.
Article in English | MEDLINE | ID: mdl-32119881

ABSTRACT

The psychoactive effects of cannabis, one of the most commonly used recreational drugs, have been documented extensively. Despite multiple studies, only a few longitudinal studies have investigated the effect of long-term cannabis use on various subcortical structures. This study looks deeper into the effects of long-term cannabis use on different hippocampal subfields. Participants were split into two groups, cannabis users and healthy controls. All test subjects completed the Cannabis Use Disorder Identification Test (CUDIT) and underwent T1 structural MRI scans twice, at baseline and at a follow-up 3 years later. Subfield volumes were measured using the software package FreeSurfer with the LongitudinalHippocampalSubfields (v6.0) module. Lifetime usage in grams was calculated for each participant up to baseline and up to follow-up, independently, using linear interpolation. Cannabis use (lifetime consumption score) correlated with increased volumes in certain subfields at baseline: CA3 and CA4 in the right hemisphere, and the presubiculum in both the left and right hemispheres. Additional tests, including Student's t-test and multivariate analysis of covariance, were performed, as were tests of the effects of varying consumption. Persistent cannabis use, however, did not result in atrophy of the subfields over time. Rather, in certain subfields the healthy control group showed lower growth rates than the cannabis users.
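The linear-interpolation step for estimating cumulative consumption at a given time point can be sketched as follows (hypothetical function and variable names; the study's actual bookkeeping is not described in this abstract):

```python
def lifetime_usage(report_times, cumulative_grams, t):
    """Linearly interpolate cumulative cannabis use (grams) at time t.

    report_times: sorted assessment times (e.g. years); cumulative_grams:
    cumulative amounts reported at those times. Values outside the reported
    range are clamped to the nearest report.
    """
    if t <= report_times[0]:
        return cumulative_grams[0]
    if t >= report_times[-1]:
        return cumulative_grams[-1]
    # Walk consecutive report pairs and interpolate within the matching segment
    for (t0, g0), (t1, g1) in zip(
            zip(report_times, cumulative_grams),
            zip(report_times[1:], cumulative_grams[1:])):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return g0 + frac * (g1 - g0)

# Estimate at year 1.5, between reports at years 1 (100 g) and 2 (300 g)
print(lifetime_usage([0, 1, 2, 3], [0, 100, 300, 450], 1.5))  # 200.0
```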


Subject(s)
Hippocampus/diagnostic imaging; Magnetic Resonance Imaging/trends; Marijuana Use/trends; Adolescent; Adult; Female; Hippocampus/drug effects; Humans; Longitudinal Studies; Male; Marijuana Use/adverse effects; Young Adult
12.
J Neurosci Methods ; 303: 1-6, 2018 06 01.
Article in English | MEDLINE | ID: mdl-29596859

ABSTRACT

BACKGROUND: There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex, continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of acoustic descriptors computationally extracted from the stimulus audio. NEW METHOD: fMRI data from a naturalistic music listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined. RESULTS: The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas. COMPARISON WITH EXISTING METHOD: Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA for generating stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from the previous study. CONCLUSIONS: Exploiting nonlinear relationships among acoustic descriptors can lead to novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing.
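The KPCA step, extracting nonlinear components from acoustic descriptors, can be illustrated with a generic RBF-kernel implementation on toy data (the kernel choice and bandwidth here are assumptions, not the published pipeline's settings):

```python
import numpy as np

def rbf_kpca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel.

    X: (samples, descriptors). Returns (samples, n_components) projections.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    K = np.exp(-gamma * sq)                              # RBF Gram matrix
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one           # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]        # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.maximum(vals, 0))           # projected coordinates

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 100)
# Two concentric rings: structure invisible to linear PCA, exposed by KPCA
X = np.vstack([np.c_[np.cos(theta[:50]), np.sin(theta[:50])],
               3 * np.c_[np.cos(theta[50:]), np.sin(theta[50:])]])
Z = rbf_kpca(X, n_components=2, gamma=1.0)
```

The concentric-ring toy data makes the point of the abstract concrete: relationships among descriptors that no linear combination captures can surface as kernel principal components.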


Subject(s)
Auditory Perception/physiology; Brain Mapping/methods; Brain/physiology; Cognitive Neuroscience/methods; Music; Adult; Brain/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging; Male; Principal Component Analysis; Young Adult
13.
Sci Rep ; 8(1): 2266, 2018 02 02.
Article in English | MEDLINE | ID: mdl-29396524

ABSTRACT

Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
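The Shannon entropy invoked here as a measure of a stimulus's information content can be illustrated with a simple histogram-based estimator (the estimator and bin count are assumptions; the abstract does not specify how entropy was computed):

```python
import numpy as np

def shannon_entropy(values, bins=16):
    """Shannon entropy (bits) of a feature's value distribution: a simple
    proxy for the information content of a musical excerpt."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
flat = rng.uniform(size=10_000)           # varied feature values -> high entropy
peaked = np.zeros(10_000)                 # constant feature -> zero entropy
print(shannon_entropy(flat))              # close to 4 bits (uniform over 16 bins)
print(shannon_entropy(peaked))            # 0 bits (everything in one bin)
```

Under this reading, music whose feature values spread over many states carries more bits per excerpt, which is the property the abstract links to higher decoding accuracy.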


Subject(s)
Acoustic Stimulation; Auditory Cortex/physiology; Magnetic Resonance Imaging; Music; Adult; Female; Healthy Volunteers; Humans; Male; Models, Neurological; Spatio-Temporal Analysis; Young Adult
15.
Front Hum Neurosci ; 9: 676, 2015.
Article in English | MEDLINE | ID: mdl-26778996

ABSTRACT

Emotion-related areas of the brain, such as the medial frontal cortices, amygdala, and striatum, are activated during listening to sad or happy music as well as during listening to pleasurable music. Indeed, in music, as in other arts, sad and happy emotions might co-exist and be distinct from emotions of pleasure or enjoyment. Here we aimed at discerning the neural correlates of sadness or happiness in music as opposed to those related to musical enjoyment. We further investigated whether musical expertise modulates neural activity during affective listening to music. To these aims, 13 musicians and 16 non-musicians brought to the lab their most liked and disliked musical pieces with happy and sad connotations. Based on a listening test, we selected the most representative 18-s excerpts of the emotions of interest for each individual participant. Functional magnetic resonance imaging (fMRI) recordings were obtained while subjects listened to and rated the excerpts. The cortico-thalamo-striatal reward circuit and motor areas were more active during liked than disliked music, whereas only the auditory cortex and the right amygdala were more active for disliked over liked music. These results discern the brain structures responsible for the perception of sad and happy emotions in music from those related to musical enjoyment. We also obtained novel evidence for functional differences in the limbic system associated with musical expertise, by showing enhanced liking-related activity in fronto-insular and cingulate areas in musicians.

16.
Cortex ; 57: 254-69, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24949579

ABSTRACT

We aimed at determining the functional neuroanatomy of working memory (WM) recognition of musical motifs that occurs while listening to music, adopting a non-standard procedure. Western tonal music provides naturally occurring repetition and variation of motifs. These serve as WM triggers, thus allowing us to study the phenomenon of motif tracking within real music. Using a modern tango as the stimulus, a behavioural test helped identify the stimulus motifs and build a time-course regressor of WM neural responses. This regressor was then correlated with the participants' (musicians') functional magnetic resonance imaging (fMRI) signal obtained during a continuous listening condition. In order to fine-tune the identification of WM processes in the brain, the variance accounted for by sensory processing of a set of the stimulus' acoustic features was pruned from participants' neurovascular responses to music. Motivic repetitions activated prefrontal and motor cortical areas, basal ganglia, medial temporal lobe (MTL) structures, and cerebellum. The findings suggest that WM processing of motifs while listening to music emerges from the integration of neural activity distributed over cognitive, motor and limbic subsystems. The recruitment of the hippocampus stands as a novel finding in auditory WM. Effective connectivity and agglomerative hierarchical clustering analyses indicate that hippocampal connectivity is modulated by motif repetitions, showing strong connections with WM-relevant areas (dorsolateral prefrontal cortex, dlPFC; supplementary motor area, SMA; and cerebellum), which supports the role of the hippocampus in the encoding of musical motifs in WM and may evidence long-term memory (LTM) formation, enabled by the use of a realistic listening condition.


Subject(s)
Auditory Perception/physiology; Brain/physiology; Cognition/physiology; Memory, Long-Term/physiology; Memory, Short-Term/physiology; Music; Acoustic Stimulation/methods; Adolescent; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging/methods; Male; Middle Aged; Recognition, Psychology/physiology; Young Adult
17.
J Neurosci Methods ; 223: 74-84, 2014 Feb 15.
Article in English | MEDLINE | ID: mdl-24333752

ABSTRACT

BACKGROUND: Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block, and event-related designs, owing to its outstanding advantages. For fMRI data acquired during free-listening experiences, only a few exploratory studies have applied ICA. NEW METHOD: For processing the fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was used to further pre-process the fMRI data to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. RESULTS: The spatial maps extracted by the new ICA approach and common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and were found to be associated with the musical features. COMPARISON WITH EXISTING METHOD(S): Compared with the conventional ICA approach, more participants were found to share the common spatial maps extracted by the new approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for individual ICA. CONCLUSIONS: Pre-processing fMRI data with a reasonable band-pass digital filter can greatly benefit the subsequent model order selection and ICA for fMRI data from naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps.
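The FFT-based band-pass pre-processing step can be sketched as an ideal filter that zeroes spectral coefficients outside the passband (the cutoffs, sampling rate, and test frequencies below are illustrative, not the study's settings):

```python
import numpy as np

def fft_bandpass(signal, low, high, fs):
    """Ideal FFT band-pass: zero rFFT coefficients outside [low, high] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec = np.fft.rfft(signal)
    spec[(freqs < low) | (freqs > high)] = 0   # drop out-of-band frequencies
    return np.fft.irfft(spec, n=len(signal))

# Toy fMRI-like series: TR = 2 s (fs = 0.5 Hz), 256 volumes over 512 s
fs, n = 0.5, 256
t = np.arange(n) / fs
slow = np.sin(2 * np.pi * (26 / 512) * t)    # in-band component (~0.051 Hz)
fast = np.sin(2 * np.pi * (102 / 512) * t)   # out-of-band component (~0.199 Hz)
filtered = fft_bandpass(slow + fast, low=0.01, high=0.1, fs=fs)
```

After filtering, only the slow in-band component survives, which is the kind of noise removal the pipeline applies before model order selection and ICA.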


Subject(s)
Auditory Perception/physiology; Brain Mapping; Brain/blood supply; Brain/physiology; Magnetic Resonance Imaging; Music; Principal Component Analysis; Acoustic Stimulation; Adult; Female; Humans; Image Processing, Computer-Assisted; Male; Oxygen; Young Adult
18.
Front Psychol ; 2: 308, 2011.
Article in English | MEDLINE | ID: mdl-22144968

ABSTRACT

Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants' self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects' selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca's area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
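The spectral centroid mentioned above has a standard definition: the magnitude-weighted mean frequency of the spectrum, which correlates with perceived brightness. A minimal sketch of that definition follows (generic formula on synthetic tones; the authors' exact extraction settings are not given in this abstract):

```python
import numpy as np

def spectral_centroid(frame, fs):
    """Spectral centroid (Hz): magnitude-weighted mean frequency of a frame."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return float((freqs * mag).sum() / mag.sum())

fs = 8192
t = np.arange(2048) / fs
dark = np.sin(2 * np.pi * 200 * t)        # low tone -> low centroid
bright = np.sin(2 * np.pi * 2000 * t)     # high tone -> high centroid
c_dark = spectral_centroid(dark, fs)      # ~200 Hz
c_bright = spectral_centroid(bright, fs)  # ~2000 Hz
```

A brighter mix (more high-frequency energy) pushes the centroid upward, which is why the feature can separate music with lyrics from music without, as reported above.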
