Results 1 - 3 of 3

1.
Psychol Music; 52(3): 305-321, 2024 May.
Article in English | MEDLINE | ID: mdl-38708378

ABSTRACT

Music that evokes strong emotional responses is often experienced as autobiographically salient. Through emotional experience, the musical features of songs could also contribute to their subjective autobiographical saliency. Songs that were popular during adolescence or young adulthood (ages 10-30) are more likely to evoke stronger memories, a phenomenon known as the reminiscence bump. In the present study, we sought to determine how song-specific age, emotional responsiveness to music, musical features, and subjective memory functioning contribute to the subjective autobiographical saliency of music in older adults. In a music listening study, 112 participants rated excerpts of popular songs from the 1950s to the 1980s for autobiographical saliency. Additionally, they filled out questionnaires about emotional responsiveness to music and subjective memory functioning. The song excerpts' musical features were extracted computationally using MIRtoolbox. Results showed that autobiographical saliency was best predicted by song-specific age, emotional responsiveness to music, and musical features. Newer songs that were more similar in rhythm to older songs were also rated higher in autobiographical saliency. Overall, this study contributes to autobiographical memory research by uncovering a set of factors affecting the subjective autobiographical saliency of music.
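
The abstract's feature-extraction step was done with MIRtoolbox, which runs in MATLAB. As a minimal, hypothetical sketch of the same idea in Python, using librosa as a stand-in and illustrative feature names that are not the study's actual variables:

```python
# Hypothetical sketch: per-excerpt musical feature extraction with librosa
# (the study used MATLAB's MIRtoolbox; feature names and files here are illustrative).
import numpy as np
import librosa

def extract_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=22050, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # rhythm: estimated tempo
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # timbre: brightness proxy
    rms = librosa.feature.rms(y=y)                            # dynamics: loudness proxy
    return {
        "tempo_bpm": float(tempo),
        "brightness_mean": float(np.mean(centroid)),
        "loudness_mean": float(np.mean(rms)),
    }

# Example: build a small feature table for a set of song excerpts (paths are hypothetical)
features = [extract_features(p) for p in ["excerpt_01.wav", "excerpt_02.wav"]]
```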

2.
PLoS One; 16(5): e0251692, 2021.
Article in English | MEDLINE | ID: mdl-33989366

ABSTRACT

BACKGROUND AND OBJECTIVES: Music has a unique capacity to evoke both strong emotions and vivid autobiographical memories. Previous music information retrieval (MIR) studies have shown that the emotional experience of music is influenced by a combination of musical features, including tonal, rhythmic, and loudness features. Here, our aim was to explore the relationship between music-evoked emotions and music-evoked memories, and how musical features (derived with MIR) can predict them both.

METHODS: Healthy older adults (N = 113, age ≥ 60 years) participated in a listening task in which they rated a total of 140 song excerpts, comprising folk songs and popular songs from the 1950s to the 1980s, on five domains measuring the emotional (valence, arousal, emotional intensity) and memory (familiarity, autobiographical salience) experience of the songs. A set of 24 musical features was extracted from the songs using computational MIR methods. Principal component analyses were applied to reduce multicollinearity, resulting in six core musical components, which were then used to predict the behavioural ratings in multiple regression analyses.

RESULTS: All correlations between behavioural ratings were positive and ranged from moderate to very high (r = 0.46-0.92). Emotional intensity showed the highest correlation with both autobiographical salience and familiarity. In the MIR data, three musical components measuring salience of the musical pulse (Pulse strength), relative strength of high harmonics (Brightness), and fluctuation in the frequencies between 200 and 800 Hz (Low-mid) predicted both music-evoked emotions and memories. Emotional intensity (and, to a lesser extent, valence) mediated the predictive effect of the musical components on music-evoked memories.

CONCLUSIONS: The results suggest that music-evoked emotions are strongly related to music-evoked memories in healthy older adults and that both music-evoked emotions and memories are predicted by the same core musical features.
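
The statistical pipeline described above (PCA over 24 MIR features, then multiple regression on the resulting components) can be sketched roughly as follows; the data, dimensions, and variable names are placeholders standing in for the study's dataset, not the authors' code:

```python
# Minimal sketch, assuming X holds 24 MIR features per excerpt and y holds one mean
# behavioural rating per excerpt (e.g. emotional intensity). Data here are random placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 24))   # placeholder for the 24 extracted musical features
y = rng.normal(size=140)         # placeholder for mean ratings on one behavioural domain

X_std = StandardScaler().fit_transform(X)       # standardise features before PCA
pca = PCA(n_components=6).fit(X_std)            # reduce multicollinearity to six components
components = pca.transform(X_std)

model = LinearRegression().fit(components, y)   # multiple regression on the components
print("R^2:", model.score(components, y))
print("explained variance ratios:", pca.explained_variance_ratio_)
```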


Subjects
Acoustic Stimulation; Emotions/physiology; Memory, Episodic; Mental Recall/physiology; Music; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged
3.
Sci Rep; 8(1): 708, 2018 Jan 15.
Article in English | MEDLINE | ID: mdl-29335643

ABSTRACT

Pattern recognition on neural activations from naturalistic music listening has been successful at predicting listeners' neural responses from musical features, and vice versa. Inter-subject differences in decoding accuracy have arisen partly from musical training, which has widely recognized structural and functional effects on the brain. We propose and evaluate a decoding approach aimed at predicting the musicianship class of an individual listener from the dynamic neural processing of musical features. Whole-brain functional magnetic resonance imaging (fMRI) data were acquired from musicians and nonmusicians while they listened to three musical pieces from different genres. Six musical features, representing low-level (timbre) and high-level (rhythm and tonality) aspects of music perception, were computed from the acoustic signals, and classification into musicians and nonmusicians was performed on the musical feature and parcellated fMRI time series. Cross-validated classification accuracy reached 77% with nine regions, comprising frontal and temporal cortical regions, the caudate nucleus, and the cingulate gyrus. The processing of high-level musical features in the right superior temporal gyrus was most influenced by listeners' musical training. The study demonstrates the feasibility of decoding musicianship from how individual brains listen to music, attaining accuracy comparable to current results from automated clinical diagnosis of neurological and psychological disorders.
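
As a rough illustration of the cross-validated classification step described above, here is a hedged scikit-learn sketch; the per-subject feature vectors, group sizes, and classifier choice are assumptions for demonstration and do not reproduce the authors' pipeline:

```python
# Minimal sketch, assuming a per-subject feature vector has already been derived from the
# coupling between musical features and parcellated fMRI time series. Data are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 9))      # e.g. one summary value per selected brain region (hypothetical)
y = np.repeat([0, 1], 18)         # 0 = nonmusician, 1 = musician (hypothetical group split)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)   # cross-validated classification accuracy
print("mean accuracy:", scores.mean())
```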


Subjects
Auditory Perception; Brain/physiology; Music/psychology; Acoustic Stimulation; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult