Results 1 - 2 of 2
1.
Neuroimage; 81: 49-60, 2013 Nov 01.
Article in English | MEDLINE | ID: mdl-23684870

ABSTRACT

This study investigates neural correlates of music-evoked fear and joy with fMRI. Studies on the neural correlates of music-evoked fear are scant, and there are only a few studies on the neural correlates of joy in general. Eighteen individuals listened to excerpts of fear-evoking, joy-evoking, and neutral music and rated their own emotional state in terms of valence, arousal, fear, and joy. Results show that BOLD signal intensity increased during joy and decreased during fear (compared to the neutral condition) in the bilateral auditory cortex (AC) and bilateral superficial amygdala (SF). In the right primary somatosensory cortex (area 3b), BOLD signals increased during exposure to fear-evoking music. While emotion-specific activity in AC increased with the duration of each trial, SF responded phasically at the beginning of the stimulus, after which its activity declined. Psychophysiological interaction (PPI) analysis revealed extensive emotion-specific functional connectivity of AC with the insula, cingulate cortex, and visual and parietal attentional structures. These findings show that the auditory cortex functions as a central hub of an affective-attentional network that is more extensive than previously believed. PPI analyses also showed functional connectivity of SF with AC during the joy condition, taken to reflect that SF is sensitive to social signals with positive valence. During fear-evoking music, SF showed functional connectivity with the visual cortex and area 7 of the superior parietal lobule, taken to reflect increased visual alertness and an involuntary shift of attention during the perception of auditory signals of danger.
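The PPI analysis referred to above models context-dependent connectivity by adding to the GLM the product of a seed region's activity and the psychological condition. The following is a minimal Python sketch under stated assumptions, not the authors' pipeline: the seed time series is taken as already deconvolved, the task is coded as a ±1 block contrast, and a generic double-gamma HRF stands in for whatever response model the authors actually used; none of the names below come from the paper.

```python
import numpy as np
from scipy.stats import gamma

def ppi_regressor(seed_neural, task, tr):
    """Sketch of a psychophysiological-interaction (PPI) regressor.

    seed_neural : neural-level time series of the seed region (e.g. AC),
                  assumed already deconvolved from the HRF.
    task        : psychological variable per scan, e.g. +1 during joy
                  blocks, -1 during fear blocks, 0 otherwise.
    tr          : repetition time in seconds.
    """
    t = np.arange(0.0, 32.0, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # generic double-gamma
    hrf /= hrf.sum()
    interaction = seed_neural * task                # element-wise product
    return np.convolve(interaction, hrf)[: len(seed_neural)]
```

Entered into a GLM alongside the seed and task regressors themselves, voxels where this interaction term explains additional variance are those whose coupling with the seed differs between the joy and fear conditions.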


Subjects
Amygdala/physiology, Auditory Cortex/physiology, Brain Mapping, Fear/physiology, Happiness, Music/psychology, Acoustic Stimulation, Adult, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Young Adult
2.
PLoS One; 7(3): e33993, 2012.
Article in English | MEDLINE | ID: mdl-22479497

ABSTRACT

Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. To characterize such sensations quantitatively, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties; it is therefore common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with an exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that the speech and music databases have specific, distinctive code-words, while in the case of environmental sounds such database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation of our data, suggesting the existence of a common, simple generative mechanism for all considered sound sources.
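Two of the quantitative claims above (the near-unit Zipf exponent and the Yule-Simon approximation) can be illustrated with a short simulation. The sketch below uses the plain Simon model, without the memory kernel the paper adds, and a simple log-log least-squares slope as the exponent estimate; both simplifications are assumptions of this illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def yule_simon(n_tokens, alpha=0.1):
    """Plain Yule-Simon stream: with probability alpha mint a new
    code-word, otherwise copy a uniformly chosen earlier token
    (preferential attachment)."""
    stream = [0]
    next_id = 1
    for _ in range(n_tokens - 1):
        if rng.random() < alpha:
            stream.append(next_id)
            next_id += 1
        else:
            stream.append(stream[rng.integers(len(stream))])
    return np.asarray(stream)

def zipf_exponent(tokens):
    """Least-squares slope of the rank-frequency curve in log-log space."""
    counts = np.sort(np.bincount(tokens))[::-1]
    counts = counts[counts > 0].astype(float)
    ranks = np.arange(1, len(counts) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
    return -slope

print(zipf_exponent(yule_simon(200_000)))  # roughly 1 - alpha, i.e. near one
```

For small innovation rates the Simon model's rank-frequency exponent is about 1 - alpha, which is why an exponent close to one falls naturally out of this family of generative processes.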


Subjects
Models, Statistical, Music, Pitch Discrimination, Sound, Speech Acoustics, Algorithms, Sound Spectrography