Results 1 - 20 of 46
1.
Neuroimage; 216: 116191, 2020 Aug 01.
Article in English | MEDLINE | ID: mdl-31525500

ABSTRACT

Keeping time is fundamental to our everyday existence. Various isochronous activities, such as locomotion, require us to use internal timekeeping. This phenomenon also comes into play in other human pursuits such as dance and music. When listening to music, we spontaneously perceive and predict its beat. The process of beat perception comprises both beat inference and beat maintenance, whose relative importance depends on the salience of the beat in the music. To study the functional connectivity associated with these processes in a naturalistic situation, we used functional magnetic resonance imaging to measure the brain responses of participants while they listened to a piece of music containing strong contrasts in beat salience. We then applied dynamic graph analysis and psychophysiological interaction (PPI) analysis, in combination with computational modelling of beat salience, to investigate how functional connectivity reflects these processes. As the main effect, correlation analyses between the dynamic graph measures and the beat-salience measure revealed increased centrality in auditory-motor cortices, the cerebellum, and extrastriate visual areas during low beat salience, whereas regions of the default mode and central executive networks displayed high centrality during high beat salience. PPI analyses revealed a partial dissociation among the functional networks involved, indicating complementary neural mechanisms crucial for beat inference and maintenance, processes pivotal for extracting and predicting temporal regularities in our environment.
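As an illustration of the dynamic-graph step described above, the sketch below builds sliding-window correlation networks over regional signals, tracks eigenvector centrality per window, and correlates each region's centrality time course with a beat-salience regressor. All data, window sizes, and thresholds are synthetic stand-ins, not the authors' pipeline.

```python
# Sketch: correlate time-resolved network centrality with a beat-salience
# regressor. Synthetic data stand in for regional fMRI signals and the
# computational beat-salience measure (both hypothetical here).
import numpy as np
import networkx as nx
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_regions, n_vols, win = 30, 300, 20
bold = rng.standard_normal((n_vols, n_regions))      # time x regions (synthetic)
salience = rng.standard_normal(n_vols)               # beat-salience series (synthetic)

centrality = []                                      # one centrality vector per window
for t in range(n_vols - win):
    corr = np.corrcoef(bold[t:t + win].T)            # window-wise correlation matrix
    np.fill_diagonal(corr, 0.0)
    graph = nx.from_numpy_array(np.abs(corr))        # weighted graph from |r|
    cent = nx.eigenvector_centrality_numpy(graph, weight="weight")
    centrality.append([cent[i] for i in range(n_regions)])
centrality = np.array(centrality)                    # (windows, regions)

# Window-average the salience regressor to match the graph time base,
# then correlate it with each region's centrality time course.
sal_win = np.array([salience[t:t + win].mean() for t in range(n_vols - win)])
for region in range(n_regions):
    r, p = pearsonr(centrality[:, region], sal_win)
    if p < 0.05:
        print(f"region {region}: r = {r:.2f}, p = {p:.3f}")
```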


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Cerebellum/physiology, Connectome/psychology, Motor Cortex/physiology, Music/psychology, Acoustic Stimulation/methods, Adult, Auditory Cortex/diagnostic imaging, Cerebellum/diagnostic imaging, Connectome/methods, Female, Humans, Magnetic Resonance Imaging/methods, Male, Motor Cortex/diagnostic imaging, Periodicity, Young Adult
2.
Brain Topogr; 33(3): 289-302, 2020 May.
Article in English | MEDLINE | ID: mdl-32124110

ABSTRACT

Exploring brain activity at the level of functional networks during naturalistic stimulation, especially with music and video, remains an attractive challenge because of the low signal-to-noise ratio of the collected brain data. Although most efforts to explore the listening brain have relied on functional magnetic resonance imaging (fMRI) or sensor-level electro- or magnetoencephalography (EEG/MEG), little is known about how neural rhythms are involved in brain network activity under naturalistic stimulation. This study examined cortical oscillations by analyzing ongoing EEG together with musical features during free music listening. We used a data-driven method that combined music information retrieval with spatial Fourier independent component analysis (spatial Fourier-ICA) to probe the interplay between the spatial profiles and the spectral patterns of the brain networks emerging from music listening. Correlation analysis was performed between the time courses of brain networks extracted from the EEG data and musical feature time series extracted from the music stimuli to derive the musical-feature-related oscillatory patterns in the listening brain. We found that the brain networks underlying musical feature processing were frequency dependent. Musical feature time series, especially fluctuation centroid and key, were associated with increased beta activation in the bilateral superior temporal gyrus. Increased alpha oscillation in the bilateral occipital cortex emerged during music listening, consistent with the hypothesis of alpha-mediated functional suppression of task-irrelevant regions. We also observed increased delta-beta oscillatory activity in the prefrontal cortex associated with musical feature processing. Beyond these findings, the proposed method seems valuable for characterizing the large-scale frequency-dependent brain activity engaged in musical feature processing.
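The sketch below loosely approximates the spirit of this pipeline under strong simplifying assumptions: short-time Fourier magnitudes of multichannel EEG are unmixed with FastICA, and the component time courses are correlated with a musical feature series. The real spatial Fourier-ICA method involves additional steps (e.g., complex-valued ICA on source-projected data) not shown here; all data are synthetic.

```python
# Sketch, loosely in the spirit of spatial Fourier-ICA: STFT magnitudes of
# multichannel EEG are unmixed with FastICA, and component time courses are
# correlated with a musical-feature time series. All inputs are synthetic
# stand-ins; the published method is more involved.
import numpy as np
from scipy.signal import stft
from scipy.stats import pearsonr
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
fs, n_chan, n_sec = 250, 32, 120
eeg = rng.standard_normal((n_chan, fs * n_sec))    # channels x samples (synthetic)

# STFT per channel; stack magnitudes into a (windows, channel*frequency) matrix.
_, t, Z = stft(eeg, fs=fs, nperseg=fs)             # Z: (chan, freq, windows)
X = np.abs(Z).transpose(2, 0, 1).reshape(len(t), -1)

ica = FastICA(n_components=10, random_state=0)
scores = ica.fit_transform(X)                      # component time courses

# Musical feature series (e.g., fluctuation centroid), resampled to the
# STFT time base; synthetic stand-in here.
feature = rng.standard_normal(len(t))
for k in range(scores.shape[1]):
    r, p = pearsonr(scores[:, k], feature)
    if p < 0.05:
        print(f"component {k}: r = {r:.2f}")
```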


Subjects
Auditory Perception, Brain Mapping, Music, Brain/diagnostic imaging, Electroencephalography, Humans
3.
Neuroimage; 167: 309-315, 2018 Feb 15.
Article in English | MEDLINE | ID: mdl-29175201

ABSTRACT

Recent functional studies suggest that noise sensitivity, a trait describing attitudes towards noise and predicting noise annoyance, is associated with altered processing in the central auditory system. In the present work, we examined whether noise sensitivity is related to the structural anatomy of auditory and limbic brain areas. Anatomical MR brain images of 80 subjects were parcellated with FreeSurfer to measure the grey matter volume, cortical thickness, cortical area, and folding index of anatomical structures in the temporal lobe and insular cortex. The grey matter volumes of the amygdala and the hippocampus were measured as well. According to our findings, noise sensitivity is associated with grey matter volume in the selected structures. Among these, we propose and discuss particular areas, previously linked to auditory perceptual, emotional, and interoceptive processing, in which larger grey matter volume appears to be related to higher noise sensitivity.


Subjects
Amygdala/anatomy & histology, Auditory Perception/physiology, Cerebral Cortex/anatomy & histology, Gray Matter/anatomy & histology, Hippocampus/anatomy & histology, Noise, Personality/physiology, Adult, Amygdala/diagnostic imaging, Auditory Cortex/anatomy & histology, Auditory Cortex/diagnostic imaging, Cerebral Cortex/diagnostic imaging, Female, Gray Matter/diagnostic imaging, Hippocampus/diagnostic imaging, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Noise/adverse effects, Young Adult
4.
Eur J Neurosci; 47(5): 433-445, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29359365

ABSTRACT

Watching the performing arts engages a wide and complex network of brain processes, which can be shaped by professional expertise. Compared to laypeople, dancers show enhanced brain processes when observing short dance movements and when listening to music. But how do cortical processes differ between musicians and dancers watching an audio-visual dance performance? In our study, we presented participants with long excerpts from a contemporary dance choreography of Carmen. During multimodal movement of a dancer, theta phase synchrony over the fronto-central electrodes was stronger in dancers than in musicians and laypeople. In addition, alpha synchrony decreased in all groups during large rapid movement compared to nearly motionless parts of the choreography. Our results suggest enhanced cortical communication in dancers when watching dance and, further, that this enhancement is related more to multimodal, cognitive, and emotional processes than to simple observation of dance movement.
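One common way to quantify the kind of inter-site phase synchrony reported above is the phase-locking value (PLV); the authors' exact estimator is not specified here, so the sketch below uses the PLV on synthetic EEG as an illustrative assumption.

```python
# Sketch: theta-band phase synchrony between two channels via the
# phase-locking value (PLV). EEG signals are synthetic stand-ins.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(12)
fs, n_sec = 250, 60
n = fs * n_sec
# Two fronto-central channels sharing a weak common theta (4-8 Hz) source.
theta = np.sin(2 * np.pi * 6.0 * np.arange(n) / fs)
ch1 = theta + rng.standard_normal(n)
ch2 = theta + rng.standard_normal(n)

b, a = butter(4, [4, 8], btype="bandpass", fs=fs)      # theta band-pass
phase1 = np.angle(hilbert(filtfilt(b, a, ch1)))
phase2 = np.angle(hilbert(filtfilt(b, a, ch2)))

plv = np.abs(np.mean(np.exp(1j * (phase1 - phase2))))  # 0 = none, 1 = perfect
print(f"theta PLV = {plv:.2f}")
```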


Subjects
Auditory Perception/physiology, Brain/physiology, Dancing, Emotions/physiology, Movement/physiology, Adult, Female, Humans, Male, Music
5.
Psychol Res; 82(6): 1195-1211, 2018 Nov.
Article in English | MEDLINE | ID: mdl-28712036

ABSTRACT

Previous studies have found relationships between music-induced movement and musical characteristics at more general levels, such as tempo or pulse clarity. This study focused on the ability to synchronize with music of finely varying tempi and varying degrees of low-frequency spectral change (flux). Excerpts from six classic Motown/R&B songs at three different tempi (105, 115, and 130 BPM) were used as stimuli. Each excerpt was additionally time-stretched by 5% relative to its original tempo, yielding a total of 12 stimuli, which were presented to 30 participants. Participants were asked to move along with the stimuli while being recorded with an optical motion capture system. Synchronization was analysed relative to the beat and bar levels of the music for four body parts. Results suggest that participants synchronized different body parts to specific metrical levels; in particular, vertical movements of the hip and feet were synchronized to the beat level when the music contained large amounts of low-frequency spectral flux and had a slower tempo, whereas synchronization of the head and hands was more tightly coupled to the weak-flux stimuli at the bar level. Synchronization was generally tighter for the slower versions of the same stimuli, while bar-level synchronization showed an inverted U-shaped effect as tempo increased. These results indicate complex relationships between musical characteristics, in particular metrical and temporal structure, and our ability to synchronize and entrain to musical stimuli.
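A sketch of one way to quantify beat-level synchronization of a single body part, assuming beat times and a vertical position trace as inputs (both synthetic here): each movement peak is assigned a phase within its beat cycle, and the circular mean resultant vector length measures synchronization strength. This is an illustrative measure, not the authors' analysis.

```python
# Sketch: beat-level synchronization strength via circular statistics.
# Movement peaks get a phase within the beat cycle; the mean resultant
# vector length R (0 = no locking, 1 = perfect locking) summarizes
# synchronization. Inputs are synthetic stand-ins for mocap and beat data.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
fs, bpm, n_sec = 120, 115, 60                      # mocap rate, tempo, duration
beat_period = 60.0 / bpm                           # beats assumed at 0, T, 2T, ...

t = np.arange(0, n_sec, 1 / fs)
# Synthetic vertical hip trajectory: beat-locked oscillation plus noise.
hip_z = np.sin(2 * np.pi * t / beat_period) + 0.3 * rng.standard_normal(t.size)

peaks, _ = find_peaks(hip_z, distance=int(0.5 * beat_period * fs))
peak_times = t[peaks]

# Phase of each peak within its beat cycle, in radians.
phases = 2 * np.pi * ((peak_times % beat_period) / beat_period)
R = np.abs(np.mean(np.exp(1j * phases)))           # mean resultant vector length
print(f"beat-level synchronization R = {R:.2f}")
```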


Subjects
Auditory Perception/physiology, Motor Activity/physiology, Music, Time Perception/physiology, Adult, Female, Humans, Male, Young Adult
6.
Hum Brain Mapp; 38(6): 2955-2970, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28349620

ABSTRACT

Musical expertise is visible in both the morphology and the functionality of the brain. Recent research indicates that functional integration between multi-sensory, somato-motor, default-mode (DMN), and salience (SN) networks of the brain differentiates musicians from non-musicians during rest. Here, we aimed to determine whether brain networks exchange information differently in musicians as opposed to non-musicians during naturalistic music listening. Whole-brain graph-theory analyses were performed on participants' fMRI responses. Group-level differences revealed that musicians' primary hubs comprised cerebral and cerebellar sensorimotor regions, whereas non-musicians' dominant hubs encompassed DMN-related regions. Community-structure analyses of the key hubs revealed greater integration of the motor and somatosensory homunculi representing the upper limbs and torso in musicians. Furthermore, musicians who had started training at an earlier age exhibited greater centrality in the auditory cortex and in areas related to top-down processes, attention, emotion, somatosensory processing, and non-verbal processing of speech. We thus reveal how brain networks organize themselves in a naturalistic music-listening situation, wherein musicians automatically engage action-based neural networks whereas non-musicians use perception-based ones to process the incoming auditory stream.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Music, Neural Pathways/physiology, Acoustic Stimulation, Adult, Auditory Cortex/diagnostic imaging, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Models, Neurological, Neural Pathways/diagnostic imaging, Oxygen/blood, Young Adult
7.
Neuroimage; 124(Pt A): 224-231, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26364862

ABSTRACT

Low-level (timbral) and high-level (tonal and rhythmical) musical features during continuous listening to music, studied with functional magnetic resonance imaging (fMRI), have been shown to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to study the replicability of those previous findings. Participants' fMRI responses during continuous listening to a tango nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. Replicability between the previous results and the present study was assessed in two ways: (a) by correlating the respective activation maps, and (b) by computing the overlap of active voxels between datasets at variable levels of ranked significance. Activity elicited by timbral features was more replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features compared to high-level ones. The processing of such high-level features is probably more sensitive to the states and traits of the listeners, as well as to their musical background.
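The two replicability measures are simple to state concretely. The sketch below applies them to synthetic statistic maps: (a) voxelwise correlation of the two activation maps, and (b) overlap of suprathreshold voxels at variable levels of ranked significance, here summarized with a Dice coefficient over the top-k voxels (the Dice summary is an assumption; the abstract does not name the overlap statistic).

```python
# Sketch of the two replicability measures on synthetic statistic maps:
# (a) map-wise correlation, (b) top-k voxel overlap at several thresholds.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_vox = 50_000
map_a = rng.standard_normal(n_vox)                 # study 1 statistic map (synthetic)
map_b = 0.5 * map_a + rng.standard_normal(n_vox)   # partially replicating map

r, _ = pearsonr(map_a, map_b)
print(f"(a) map correlation: r = {r:.2f}")

for frac in (0.01, 0.05, 0.10):                    # top 1%, 5%, 10% of voxels
    k = int(frac * n_vox)
    top_a = set(np.argsort(map_a)[-k:])
    top_b = set(np.argsort(map_b)[-k:])
    dice = 2 * len(top_a & top_b) / (len(top_a) + len(top_b))
    print(f"(b) top {frac:.0%} overlap: Dice = {dice:.2f}")
```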


Subjects
Auditory Perception/physiology, Brain/physiology, Music, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Reproducibility of Results, Young Adult
8.
Neuroimage; 88: 170-80, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24269803

ABSTRACT

We investigated neural correlates of musical feature processing with a decoding approach. To this end, we used a method that combines computational extraction of musical features with regularized multiple regression (LASSO). Optimal model parameters were determined by maximizing the decoding accuracy using a leave-one-out cross-validation scheme. The method was applied to functional magnetic resonance imaging (fMRI) data that were collected using a naturalistic paradigm, in which participants' brain responses were recorded while they were continuously listening to pieces of real music. The dependent variables comprised musical feature time series that were computationally extracted from the stimulus. We expected timbral features to obtain a higher prediction accuracy than rhythmic and tonal ones. Moreover, we expected the areas significantly contributing to the decoding models to be consistent with areas of significant activation observed in previous research using a naturalistic paradigm with fMRI. Of the six musical features considered, five could be significantly predicted for the majority of participants. The areas significantly contributing to the optimal decoding models agreed to a great extent with results obtained in previous studies. In particular, areas in the superior temporal gyrus, Heschl's gyrus, Rolandic operculum, and cerebellum contributed to the decoding of timbral features. For the decoding of the rhythmic feature, we found the bilateral superior temporal gyrus, right Heschl's gyrus, and hippocampus to contribute most. The tonal feature, however, could not be significantly predicted, suggesting a higher inter-participant variability in its neural processing. A subsequent classification experiment revealed that segments of the stimulus could be classified from the fMRI data with significant accuracy. The present findings provide compelling evidence for the involvement of the auditory cortex, the cerebellum and the hippocampus in the processing of musical features during continuous listening to music.
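A minimal sketch of the decoding scheme, with synthetic data standing in for the fMRI and feature time series: LASSO regression predicts a musical-feature series from voxel time courses, with the penalty weight tuned by cross-validation. Segment-wise folds here stand in for the authors' leave-one-out scheme.

```python
# Sketch: LASSO decoding of a musical feature series from fMRI voxel time
# courses, with cross-validated penalty selection. Data are synthetic.
import numpy as np
from sklearn.linear_model import LassoCV
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_vols, n_vox = 400, 2_000
X = rng.standard_normal((n_vols, n_vox))           # fMRI: time x voxels (synthetic)
w = np.zeros(n_vox)
w[:50] = rng.standard_normal(50)                   # 50 truly informative voxels
y = X @ w + rng.standard_normal(n_vols)            # e.g., a timbral feature series

# Hold out the final quarter of the run; tune alpha by CV on the rest.
split = int(0.75 * n_vols)
model = LassoCV(cv=5, n_alphas=30, max_iter=5_000).fit(X[:split], y[:split])

r, _ = pearsonr(model.predict(X[split:]), y[split:])
print(f"decoding accuracy (held-out r) = {r:.2f}, "
      f"{np.count_nonzero(model.coef_)} voxels selected")
```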


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping/methods, Cerebellum/physiology, Hippocampus/physiology, Music, Signal Processing, Computer-Assisted, Adult, Auditory Cortex/diagnostic imaging, Cerebellum/diagnostic imaging, Female, Hippocampus/diagnostic imaging, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Young Adult
9.
Psychol Music; 52(3): 305-321, 2024 May.
Article in English | MEDLINE | ID: mdl-38708378

ABSTRACT

Music that evokes strong emotional responses is often experienced as autobiographically salient. Through emotional experience, the musical features of songs could also contribute to their subjective autobiographical saliency. Songs that were popular during adolescence or young adulthood (ages 10-30) are more likely to evoke stronger memories, a phenomenon known as the reminiscence bump. In the present study, we sought to determine how song-specific age, emotional responsiveness to music, musical features, and subjective memory functioning contribute to the subjective autobiographical saliency of music in older adults. In a music listening study, 112 participants rated excerpts of popular songs from the 1950s to the 1980s for autobiographical saliency. Additionally, they filled out questionnaires about emotional responsiveness to music and subjective memory functioning. The song excerpts' musical features were extracted computationally using MIRtoolbox. Results showed that autobiographical saliency was best predicted by song-specific age, emotional responsiveness to music, and musical features. Newer songs that were more similar in rhythm to older songs were also rated higher in autobiographical saliency. Overall, this study contributes to autobiographical memory research by uncovering a set of factors affecting the subjective autobiographical saliency of music.

10.
Neuroimage; 83: 627-36, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23810975

ABSTRACT

We aimed to predict the temporal evolution of brain activity in naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned with functional magnetic resonance imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations and were then tested in a cross-validation setting to evaluate their robustness across stimuli. To further assess the generalizability of the models, we extended the cross-validation procedure with another dataset comprising continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several brain areas belonging to auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, regions relevant for self-referential appraisal and aesthetic judgements, were successfully predicted. Cross-validation across musical stimuli and participant pools identified a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure processing complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices and in parietal, somatosensory, and left-hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supporting evidence for hemispheric specialization for categorical sounds with realistic stimuli. We thereby introduce a powerful means of predicting brain responses to music, speech, or soundscapes across a large variety of contexts.
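A minimal sketch of this encoding logic on assumed synthetic data: a ridge regression maps musical features to voxel activity, is trained on one "medley" and tested on another, and cross-stimulus robustness is read off the voxelwise prediction correlations. The feature count, model class, and thresholds are illustrative assumptions, not the authors' exact choices.

```python
# Sketch: encoding model trained on one stimulus, tested on another.
# Synthetic stand-ins for the feature time series and fMRI responses.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n_feat, n_vox = 6, 3_000
B = rng.standard_normal((n_feat, n_vox))               # shared feature-to-voxel weights

def simulate(n_vols):
    F = rng.standard_normal((n_vols, n_feat))          # musical feature series
    Y = F @ B + 2.0 * rng.standard_normal((n_vols, n_vox))
    return F, Y

F_train, Y_train = simulate(350)                       # medley 1
F_test, Y_test = simulate(300)                         # medley 2

model = Ridge(alpha=10.0).fit(F_train, Y_train)
Y_hat = model.predict(F_test)

# Voxelwise prediction accuracy on the held-out stimulus.
num = ((Y_hat - Y_hat.mean(0)) * (Y_test - Y_test.mean(0))).sum(0)
den = np.sqrt(((Y_hat - Y_hat.mean(0)) ** 2).sum(0) *
              ((Y_test - Y_test.mean(0)) ** 2).sum(0))
voxel_r = num / den
print(f"{(voxel_r > 0.1).sum()} of {n_vox} voxels predicted with r > 0.1")
```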


Subjects
Auditory Perception/physiology, Brain Mapping, Brain/physiology, Functional Laterality/physiology, Music, Adult, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Principal Component Analysis, Young Adult
11.
Cogn Neuropsychol; 30(5): 311-31, 2013.
Article in English | MEDLINE | ID: mdl-24344816

ABSTRACT

Pitch deafness, the most commonly known form of congenital amusia, refers to a severe deficit in musical pitch processing (i.e., melody discrimination and recognition) that can leave time processing, including rhythm, metre, and "feeling the beat", preserved. In Experiment 1, we show that when musical excerpts are presented in nonpitched drum timbres, rather than pitched piano tones, amusics show normal metre recognition. Experiment 2 reveals that body movement influences amusics' interpretation of the beat of an ambiguous drum rhythm. Experiment 3 and a subsequent exploratory study show an ability to synchronize movement to the beat of popular dance music, with potential for improvement given a modest amount of practice. Together, the present results are consistent with the idea that rhythm and beat processing are spared in pitch deafness: being pitch-deaf does not mean being beat-deaf. In the context of drum music especially, amusics can be musical.


Subjects
Acoustic Stimulation, Auditory Perception, Auditory Perceptual Disorders, Music, Acoustic Stimulation/methods, Adult, Case-Control Studies, Discrimination, Psychological, Emotions, Female, Humans, Persons With Hearing Impairments/psychology, Pitch Perception
12.
Ann N Y Acad Sci; 1530(1): 18-22, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37847675

ABSTRACT

Music listening is a dynamic process that entails complex interactions between sensory, cognitive, and emotional processes. The naturalistic paradigm provides a means to investigate these processes in an ecologically valid manner, allowing experimental settings that mimic real-life musical experiences. In this paper, we highlight the importance of the naturalistic paradigm in studying dynamic music processing and discuss how it allows for investigating both the segregation and integration of brain processes using model-based and model-free methods. We further suggest that studying music processing as modulated by individual differences in this paradigm can provide insights into the mechanisms of brain plasticity, with implications for developing interventions and therapies in a personalized way. Finally, despite the challenges that the naturalistic paradigm poses, we close with a discussion of the future prospects of music and neuroscience research, especially with the continued development and refinement of naturalistic paradigms and the adoption of open science practices.


Subjects
Brain Mapping, Music, Humans, Brain Mapping/methods, Auditory Perception, Magnetic Resonance Imaging, Brain
13.
Cogn Sci; 47(4): e13281, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37096347

ABSTRACT

Body movement is a primary nonverbal communication channel in humans. Coordinated social behaviors, such as dancing together, encourage multifarious rhythmic and interpersonally coupled movements from which observers can extract socially and contextually relevant information. Investigating the relations between visual social perception and kinematic motor coupling is important for social cognition. The perceived coupling of dyads spontaneously dancing to pop music has been shown to be driven largely by the degree of frontal orientation between dancers. The perceptual salience of other aspects, including postural congruence, movement frequencies, time-delayed relations, and horizontal mirroring, remains uncertain, however. In a motion capture study, 90 participant dyads moved freely to 16 musical excerpts from eight musical genres while their movements were recorded with optical motion capture. A total of 128 recordings, from the 8 dyads maximally facing each other, were selected to generate silent 8-s animations. Three kinematic features describing simultaneous and sequential full-body coupling were extracted from the dyads. In an online experiment, the animations were presented to 432 observers, who were asked to rate the perceived similarity and interaction between dancers. We found dyadic kinematic coupling estimates to be higher than surrogate estimates, providing evidence for a social dimension of entrainment in dance. Further, we observed links between perceived similarity and the coupling of both slower simultaneous horizontal gestures and posture bounding volumes. Perceived interaction, on the other hand, was related more to the coupling of faster simultaneous gestures and to sequential coupling. Dyads who were perceived as more coupled also tended to mirror their partner's movements.
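A sketch of the surrogate logic, with synthetic signals standing in for the extracted kinematic features: coupling within genuine dyads is compared against coupling of mismatched (re-paired) dancers, which destroys the interpersonal component while preserving each signal's own statistics. The coupling estimator below (windowed correlation) is an illustrative assumption.

```python
# Sketch: dyadic kinematic coupling vs. surrogate (re-paired) estimates.
# All signals are synthetic stand-ins for the mocap features in the study.
import numpy as np

rng = np.random.default_rng(6)
n_dyads, n_frames = 8, 960                         # e.g., 8-s clips at 120 fps

def coupling(a, b, win=120):
    """Mean absolute windowed correlation between two 1-D movement signals."""
    rs = [np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
          for i in range(0, len(a) - win, win)]
    return np.mean(np.abs(rs))

# Genuine dyads share a common driving signal (the music); partners add noise.
drive = rng.standard_normal((n_dyads, n_frames))
dancer1 = drive + 0.8 * rng.standard_normal((n_dyads, n_frames))
dancer2 = drive + 0.8 * rng.standard_normal((n_dyads, n_frames))

genuine = [coupling(dancer1[d], dancer2[d]) for d in range(n_dyads)]
surrogate = [coupling(dancer1[d], dancer2[(d + 1) % n_dyads])   # mismatched pairs
             for d in range(n_dyads)]
print(f"genuine coupling   : {np.mean(genuine):.2f}")
print(f"surrogate coupling : {np.mean(surrogate):.2f}")
```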


Subjects
Gestures, Music, Humans, Imitative Behavior, Movement, Posture, Visual Perception
14.
Neuroimage; 59(4): 3677-89, 2012 Feb 15.
Article in English | MEDLINE | ID: mdl-22116038

ABSTRACT

We investigated the neural underpinnings of timbral, tonal, and rhythmic features of a naturalistic musical stimulus. Participants were scanned with functional Magnetic Resonance Imaging (fMRI) while listening to a stimulus with a rich musical structure, a modern tango. We correlated the temporal evolution of the stimulus's timbral, tonal, and rhythmic features, extracted with acoustic feature extraction procedures, with the fMRI time series. The results corroborate those obtained with controlled stimuli in previous studies and highlight additional areas recruited during musical feature processing. While timbral feature processing was associated with activations in cognitive areas of the cerebellum and in sensory and default mode network cerebrocortical areas, musical pulse and tonality processing recruited cortical and subcortical cognitive, motor, and emotion-related circuits. In sum, by combining neuroimaging, acoustic feature extraction, and behavioral methods, we revealed the large-scale cognitive, motor, and limbic brain circuitry dedicated to acoustic feature processing during listening to a naturalistic stimulus. Beyond these novel findings, our study has practical relevance in providing a powerful means to localize the neural processing of individual acoustic features, be they those of music, speech, or soundscapes, in ecological settings.


Subjects
Auditory Perception/physiology, Brain/physiology, Music, Nerve Net/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
15.
Hum Mov Sci; 81: 102894, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34798445

ABSTRACT

Humans are able to synchronize with musical events whilst coordinating their movements with others. Interpersonal entrainment phenomena, such as dance, involve multiple body parts and movement directions. Along with being multidimensional, dance movement interaction is plurifrequential, since it can occur at different frequencies simultaneously. Moreover, it is prone to nonstationarity, due, for instance, to displacements around the dance floor. Various methodological approaches have been adopted for the study of human entrainment, but only spectrogram-based techniques allow for an integral analysis thereof. This article proposes an alternative approach based upon the cross-wavelet transform, a state-of-the-art technique for nonstationary and plurifrequential analysis of univariate interaction. The presented approach generalizes the cross-wavelet transform to multidimensional signals, making it possible to identify, for different frequencies of movement, estimates of interaction and leader-follower dynamics across body parts and movement directions. Further, the generalized cross-wavelet transform can be used to quantify the frequency-wise contribution of individual body parts and movement directions to overall movement synchrony. Since both in-phase and anti-phase relationships are dominant modes of coordination, the proposed implementation ignores whether movements are identical or opposite in phase. The article provides a thorough mathematical description of the method and includes proofs of its invariance under translation, rotation, and reflection. Finally, its properties and performance are illustrated via four examples using simulated data and behavioral data collected through a mirror-game task and a free dance movement task.
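The sketch below illustrates the core idea rather than the authors' full method: a complex Morlet cross-wavelet spectrum is computed per movement dimension, and the magnitudes are summed across dimensions, so in-phase and anti-phase coupling contribute alike. The wavelet parameters and normalization are illustrative assumptions, and the signals are synthetic stand-ins for mocap data.

```python
# Sketch: multidimensional cross-wavelet coupling via a hand-rolled
# complex Morlet CWT. Dimension 0 of the two "dancers" is anti-phase
# coupled at 2 Hz; the magnitude sum still detects it.
import numpy as np

def morlet_cwt(x, freqs, fs, w0=6.0):
    """Complex Morlet CWT of a 1-D signal at the given frequencies (Hz)."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                   # Gaussian width (s) for frequency f
        tk = np.arange(-4 * s, 4 * s, 1 / fs)
        wav = np.exp(2j * np.pi * f * tk) * np.exp(-tk ** 2 / (2 * s ** 2)) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(wav[::-1]), mode="same")  # correlation
    return out

fs, n = 100, 2000
freqs = np.linspace(0.5, 5.0, 20)                  # movement frequencies (Hz)
rng = np.random.default_rng(7)
t = np.arange(n) / fs

a = np.stack([np.sin(2 * np.pi * 2.0 * t), rng.standard_normal(n), rng.standard_normal(n)])
b = np.stack([-np.sin(2 * np.pi * 2.0 * t), rng.standard_normal(n), rng.standard_normal(n)])
a += 0.2 * rng.standard_normal(a.shape)
b += 0.2 * rng.standard_normal(b.shape)

# Sum of per-dimension cross-spectrum magnitudes: discards the
# in-phase/anti-phase distinction, as described in the abstract.
xwt = sum(np.abs(morlet_cwt(a[d], freqs, fs) * np.conj(morlet_cwt(b[d], freqs, fs)))
          for d in range(a.shape[0]))
print("peak coupling at", freqs[xwt.mean(axis=1).argmax()], "Hz")  # ~2 Hz expected
```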


Subjects
Movement, Wavelet Analysis, Humans
16.
Sci Rep; 12(1): 2672, 2022 Feb 17.
Article in English | MEDLINE | ID: mdl-35177683

ABSTRACT

Movement is a universal response to music, and dance often takes place in social settings. Although previous work has suggested that socially relevant information, such as personality and gender, is encoded in dance movement, the generalizability of that work has been limited. The current study aims to decode dancers' gender, personality traits, and music preference from music-induced movements. We propose a method that predicts such individual differences from free dance movements, and we demonstrate its robustness by using two datasets collected with different musical stimuli. In addition, we introduce a novel measure of the relative importance of different joints in predicting individual differences. Results demonstrated near-perfect classification of gender and notably high prediction of personality and music preferences. Furthermore, the learned models generalized across datasets, highlighting the importance of certain joints in intrinsic movement patterns specific to individual differences. The results further support theories of embodied music cognition and the role of bodily movement in musical experiences by demonstrating the influence of gender, personality, and music preferences on embodied responses to heard music.
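The abstract does not spell out the joint-importance measure, so the sketch below substitutes a standard permutation-importance scheme as one plausible instance: a classifier is trained on per-joint movement features, and each joint's importance is the accuracy drop when its feature columns are shuffled. All data and the feature layout are synthetic assumptions.

```python
# Sketch: per-joint permutation importance for decoding an individual
# difference (here, a binary label standing in for gender). Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n_dancers, n_joints, feat_per_joint = 200, 20, 4
X = rng.standard_normal((n_dancers, n_joints * feat_per_joint))
y = (X[:, :4].sum(axis=1) > 0).astype(int)          # label driven by joint 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
base = clf.score(X_te, y_te)

for j in range(3):                                  # report the first few joints
    cols = slice(j * feat_per_joint, (j + 1) * feat_per_joint)
    X_perm = X_te.copy()
    X_perm[:, cols] = rng.permutation(X_perm[:, cols])   # shuffle rows of joint j
    print(f"joint {j}: importance = {base - clf.score(X_perm, y_te):.3f}")
```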

17.
PLoS One; 17(9): e0275228, 2022.
Article in English | MEDLINE | ID: mdl-36174020

ABSTRACT

Previous literature has shown that music preferences (and thus preferred musical features) differ depending on the listening context and the reasons for listening (RL). Yet, to our knowledge, no research has investigated how the features of music that people dance or move to relate to particular RL. Consequently, in two online surveys, participants (N = 173) were asked to name songs they move to ("dance music"). Additionally, participants (N = 105) from Survey 1 provided RL for their selected songs. To investigate relationships between the two, we first extracted audio features from the dance music using the Spotify API and compared those features with a baseline dataset considered representative of music in general. Analyses revealed that, compared to the baseline, the dance-music dataset had significantly higher levels of energy, danceability, valence, and loudness, and lower speechiness, instrumentalness, and acousticness. Second, to identify potential subgroups of dance music, a cluster analysis was performed on its Spotify audio features. The results suggested five subgroups of dance music with varying combinations of audio features: "fast-lyrical", "sad-instrumental", "soft-acoustic", "sad-energy", and "happy-energy". Third, a factor analysis revealed three main RL categories: "achieving self-awareness", "regulation of arousal and mood", and "expression of social relatedness". Finally, we identified variations in people's RL ratings for each subgroup of dance music, suggesting that certain characteristics of dance music are more suitable for listeners' particular RL, which in turn shape their music preferences. Importantly, the highest-rated RL items for dance music belonged to the "regulation of mood and arousal" category, which might be interpreted as the main function of dance music. We hope that future research will elaborate on the connections between the musical qualities of dance music and particular music-listening functions.
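A minimal sketch of the clustering step, assuming the named Spotify audio features have already been retrieved into an array (no API calls shown; the feature values are synthetic stand-ins, and the five-cluster choice mirrors the abstract).

```python
# Sketch: k-means clustering of per-song audio features into five subgroups.
# Feature values are synthetic stand-ins for retrieved Spotify features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

features = ["energy", "danceability", "valence", "loudness",
            "speechiness", "instrumentalness", "acousticness"]
rng = np.random.default_rng(9)
X = rng.random((173, len(features)))               # one row per named song (synthetic)

km = KMeans(n_clusters=5, n_init=10, random_state=0)
km.fit(StandardScaler().fit_transform(X))
for c in range(5):
    profile = X[km.labels_ == c].mean(axis=0)      # raw-feature cluster profile
    top = [features[i] for i in np.argsort(profile)[-2:][::-1]]
    print(f"cluster {c} ({np.sum(km.labels_ == c)} songs): high {', '.join(top)}")
```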


Subjects
Communications Media, Music, Acoustics, Auditory Perception, Auscultation, Humans
18.
Front Psychol; 12: 647756, 2021.
Article in English | MEDLINE | ID: mdl-34017286

ABSTRACT

Although music is known to be a part of everyday life and a resource for mood and emotion management, everyday life changed significantly for many during the global coronavirus pandemic, making the role of music in everyday life less certain. An online survey, in which participants answered Likert-scale questions and provided free-text responses, was used to explore how participants were engaging with music during the first wave of the pandemic, whether and how they were using music for mood regulation, and how their engagement with music related to their experiences of worry and anxiety resulting from the pandemic. Results indicated that, while many participants felt their use of music had changed since the beginning of the pandemic, the amount of their music listening either was unaffected by the pandemic or increased. This was especially true of listening to self-selected music and watching live-streamed concerts. Analysis revealed correlations between participants' use of music for mood regulation, their musical engagement, and their levels of anxiety and worry. A small number of participants described negative emotional responses to music; the majority of these also reported severe levels of anxiety.

19.
PLoS One; 16(5): e0251692, 2021.
Article in English | MEDLINE | ID: mdl-33989366

ABSTRACT

BACKGROUND AND OBJECTIVES: Music has a unique capacity to evoke both strong emotions and vivid autobiographical memories. Previous music information retrieval (MIR) studies have shown that the emotional experience of music is influenced by a combination of musical features, including tonal, rhythmic, and loudness features. Here, our aim was to explore the relationship between music-evoked emotions and music-evoked memories, and how musical features (derived with MIR) can predict both. METHODS: Healthy older adults (N = 113, age ≥ 60 years) participated in a listening task in which they rated a total of 140 song excerpts, comprising folk songs and popular songs from the 1950s to the 1980s, on five domains measuring the emotional (valence, arousal, emotional intensity) and memory (familiarity, autobiographical salience) experience of the songs. A set of 24 musical features was extracted from the songs using computational MIR methods. Principal component analysis was applied to reduce multicollinearity, resulting in six core musical components, which were then used to predict the behavioural ratings in multiple regression analyses. RESULTS: All correlations between behavioural ratings were positive and ranged from moderate to very high (r = 0.46-0.92). Emotional intensity showed the highest correlation with both autobiographical salience and familiarity. In the MIR data, three musical components measuring the salience of the musical pulse (Pulse strength), the relative strength of high harmonics (Brightness), and fluctuation in the 200-800 Hz frequency range (Low-mid) predicted both music-evoked emotions and memories. Emotional intensity (and, to a lesser extent, valence) mediated the predictive effect of the musical components on music-evoked memories. CONCLUSIONS: The results suggest that music-evoked emotions are strongly related to music-evoked memories in healthy older adults and that both music-evoked emotions and memories are predicted by the same core musical features.
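A minimal sketch of the stated analysis chain on synthetic stand-ins: standardize the 24 MIR features, reduce them to six principal components, and regress a behavioural rating on the components.

```python
# Sketch: PCA over MIR features followed by multiple regression predicting
# a behavioural rating. Data are synthetic stand-ins for the 140 excerpts.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
n_songs, n_feat, n_comp = 140, 24, 6
X = rng.standard_normal((n_songs, n_feat))          # MIR features per song excerpt
ratings = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n_songs)  # e.g., arousal

components = PCA(n_components=n_comp).fit_transform(StandardScaler().fit_transform(X))
model = LinearRegression().fit(components, ratings)
print(f"R^2 = {model.score(components, ratings):.2f}")
print("component weights:", np.round(model.coef_, 2))
```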


Subjects
Acoustic Stimulation, Emotions/physiology, Memory, Episodic, Mental Recall/physiology, Music, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged
20.
Int J Neural Syst; 31(3): 2150001, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33353528

ABSTRACT

To examine the electrophysiological underpinnings of the functional networks involved in music listening, approaches based on spatial independent component analysis (ICA) have recently been applied to ongoing electroencephalography (EEG) and magnetoencephalography (MEG). However, those studies focused on healthy subjects and did not examine group-level comparisons during music listening. Here, we combined group-level spatial Fourier-ICA with acoustic feature extraction to enable group comparisons of frequency-specific brain networks involved in musical feature processing. The method was applied to healthy subjects and to subjects with major depressive disorder (MDD). Music-induced oscillatory brain patterns were determined by permutation correlation analysis between the individual time courses of the Fourier-ICA components and the musical features. We found that (1) three components, comprising a beta sensorimotor network, a beta auditory network, and an alpha medial visual network, were involved in music processing in most healthy subjects; and (2) one alpha lateral component, located in the left angular gyrus, was engaged in music perception in most individuals with MDD. The proposed method allowed statistical group comparison, and we found that (1) the alpha lateral component was activated more strongly in healthy subjects than in individuals with MDD, and (2) the derived frequency-dependent networks of musical feature processing appeared to be altered in MDD participants compared to healthy subjects. The proposed pipeline appears valuable for studying disrupted brain oscillations in psychiatric disorders during naturalistic paradigms.
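A minimal sketch of a permutation correlation test of the kind described, on synthetic series: the observed component-feature correlation is compared against a null distribution built from circularly shifted copies of the feature, one common way to preserve autocorrelation (the shift-based null is an assumption; the abstract does not specify the permutation scheme).

```python
# Sketch: permutation test for the correlation between a component time
# course and a musical feature, using circular shifts as the null model.
import numpy as np

rng = np.random.default_rng(11)
n = 500
component = rng.standard_normal(n)                 # Fourier-ICA component time course
feature = 0.3 * component + rng.standard_normal(n) # musical feature series (synthetic)

obs = np.corrcoef(component, feature)[0, 1]
null = np.array([np.corrcoef(component, np.roll(feature, rng.integers(1, n)))[0, 1]
                 for _ in range(2_000)])
p = (np.sum(np.abs(null) >= np.abs(obs)) + 1) / (null.size + 1)
print(f"r = {obs:.2f}, permutation p = {p:.4f}")
```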


Subjects
Depressive Disorder, Major, Music, Auditory Perception, Brain, Brain Mapping, Depression, Electroencephalography, Humans