Results 1 - 20 of 78
1.
Proc Natl Acad Sci U S A ; 121(10): e2316306121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38408255

ABSTRACT

Music is powerful in conveying emotions and triggering affective brain mechanisms. Affective brain responses in previous studies were, however, rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, by contrast, can be dynamic and adaptive and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a setup for studying emotional responses to live music in a closed-loop neurofeedback design. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to the musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time to provide the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared with recorded music. These findings included a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong, positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time, dynamic entrainment processes.
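As a rough illustration of the closed-loop logic described above (not the authors' pipeline: the function names, the percent-signal-change feedback measure, and the smoothing window are assumptions), one volume-by-volume update of the neurofeedback signal might look like this:

```python
import numpy as np

def amygdala_feedback(volume, roi_mask, baseline_mean, history, window=3):
    """Reduce one fMRI volume to a smoothed feedback value for the amygdala ROI
    (illustrative sketch only; the study's actual real-time pipeline may differ)."""
    roi_signal = volume[roi_mask].mean()                         # mean BOLD signal in the ROI
    psc = 100.0 * (roi_signal - baseline_mean) / baseline_mean   # percent signal change vs. baseline
    history.append(psc)
    return float(np.mean(history[-window:]))                     # moving average to damp acquisition noise

# Hypothetical real-time loop: one feedback value per acquired volume is
# streamed to the musicians' display (acquisition and transport layers omitted).
# for volume in acquire_volumes():
#     send_to_performer_display(amygdala_feedback(volume, roi_mask, baseline_mean, history))
```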


Subject(s)
Music, Music/psychology, Brain/physiology, Emotions/physiology, Amygdala/physiology, Affect, Magnetic Resonance Imaging, Auditory Perception/physiology
2.
PLoS Biol ; 19(4): e3000751, 2021 04.
Article in English | MEDLINE | ID: mdl-33848299

ABSTRACT

Across many species, scream calls signal the affective significance of events to other agents. Scream calls have often been thought to be of a generic alarming and fearful nature, signaling potential threats, with instantaneous, involuntary, and accurate recognition by perceivers. However, scream calls are more diverse in their affective signaling than fearful alarms to threats, and the broader sociobiological relevance of the various scream types is thus unclear. Here we used 4 different psychoacoustic, perceptual decision-making, and neuroimaging experiments in humans to demonstrate the existence of at least 6 psychoacoustically distinctive types of scream calls of both alarming and non-alarming nature, rather than only screams caused by fear or aggression. Second, based on perceptual and processing sensitivity measures for decision-making during scream recognition, we found that alarm screams (with some exceptions) were overall discriminated the worst, were responded to the slowest, and were associated with lower perceptual sensitivity for their recognition compared with non-alarm screams. Third, the neural processing of alarm compared with non-alarm screams during an implicit processing task elicited only minimal neural signal and connectivity in perceivers, contrary to the frequent assumption of a threat-processing bias in the primate neural system. These findings show that scream calls are more diverse in their signaling and communicative nature in humans than previously assumed. In contrast to the commonly observed threat-processing bias in perceptual discrimination and neural processing, non-alarm screams, and positive screams in particular, appear to be processed more efficiently in speeded discriminations and in the implicit neural processing of various scream types in humans.


Subject(s)
Auditory Perception/physiology, Discrimination, Psychological/physiology, Fear/psychology, Voice Recognition/physiology, Adult, Auditory Pathways/diagnostic imaging, Auditory Pathways/physiology, Brain/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Pattern Recognition, Physiological/physiology, Recognition, Psychology/physiology, Sex Characteristics, Young Adult
3.
Cereb Cortex ; 33(4): 1170-1185, 2023 02 07.
Article in English | MEDLINE | ID: mdl-35348635

ABSTRACT

Voice signaling is integral to human communication, and a cortical voice area has been thought to support the discrimination of voices from other auditory objects. This large cortical voice area in the auditory cortex (AC) was suggested to process voices selectively, but its functional differentiation has remained elusive. We used neuroimaging while humans processed voices, nonvoice sounds, and artificial sounds that mimicked certain voice sound features. First and surprisingly, specific auditory cortical voice processing beyond basic acoustic sound analyses is supported by only a very small portion of the originally described voice area, in higher-order AC located centrally in superior Te3. Second, besides this core voice-processing area, large parts of the remaining voice area in low- and higher-order AC process voices only accessorily and might primarily pick up nonspecific psychoacoustic differences between voices and nonvoices. Third, a specific subfield of low-order AC seems to specifically decode acoustic sound features that are relevant, but not exclusive, to voice detection. Taken together, the previously defined voice area might have been overestimated, since cortical support for human voice processing seems rather restricted. Cortical voice processing also seems to be functionally more diverse and embedded in broader functional principles of the human auditory system.


Subject(s)
Auditory Cortex, Voice, Humans, Acoustic Stimulation/methods, Auditory Perception, Sound, Magnetic Resonance Imaging/methods
4.
J Acoust Soc Am ; 153(1): 384, 2023 01.
Article in English | MEDLINE | ID: mdl-36732275

ABSTRACT

Fear is a frequently studied emotion category in music and emotion research. However, research in music theory suggests that music can convey finer-grained subtypes of fear, such as terror and anxiety. Previous research on musically expressed emotions has neglected to investigate subtypes of fearful emotions. This study seeks to fill this gap in the literature. To that end, 99 participants rated the emotional impression of short excerpts of horror film music predicted to convey terror and anxiety, respectively. Then, the excerpts that most effectively conveyed these target emotions were analyzed descriptively and acoustically to demonstrate the sonic differences between musically conveyed terror and anxiety. The results support the hypothesis that music conveys terror and anxiety with markedly different musical structures and acoustic features. Terrifying music has a brighter, rougher, harsher timbre, is musically denser, and may be faster and louder than anxious music. Anxious music has a greater degree of loudness variability. Both types of fearful music tend towards minor modalities and are rhythmically unpredictable. These findings further support the application of emotional granularity in music and emotion research.
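The timbral and dynamic contrasts reported here can be approximated with standard audio descriptors; the following sketch (my illustration, not the study's analysis code; the feature choices and librosa-based workflow are assumptions) computes a brightness proxy and loudness variability for an excerpt:

```python
import numpy as np
import librosa

def describe_excerpt(path):
    """Crude descriptors of the kind contrasted in the study: brightness,
    mean loudness, and loudness variability (illustrative only)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # spectral centroid as a brightness proxy (Hz)
    rms = librosa.feature.rms(y=y)[0]                            # frame-wise RMS as a loudness proxy
    return {
        "brightness_hz": float(np.mean(centroid)),
        "loudness_rms": float(np.mean(rms)),
        "loudness_variability": float(np.std(rms)),
    }

# Under the reported pattern, terror excerpts should score higher on brightness
# and mean loudness, while anxiety excerpts should show larger loudness variability.
```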


Subject(s)
Fear, Music, Humans, Fear/psychology, Emotions, Music/psychology, Acoustics, Surveys and Questionnaires
5.
Behav Res Methods ; 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37794208

ABSTRACT

All animals have to respond to immediate threats in order to survive. In non-human animals, a diversity of sophisticated behaviours has been observed, but research in humans is hampered by ethical considerations. Here, we present a novel immersive VR toolkit for the Unity engine that allows researchers to assess threat-related behaviour in single, semi-interactive, and semi-realistic threat encounters. The toolkit contains a suite of fully modelled naturalistic environments, interactive objects, animated threats, and scripted systems. The researcher arranges these into a series of independent "episodes" in immersive VR to create an experimental manipulation. Several purpose-built tools aid the design of these episodes, including a system for pre-sequencing the movement plans of animal threats. Episodes can be built with the assets included in the toolkit, but can also easily be extended with custom scripts, threats, and environments if required. During the experiments, the software stores behavioural, movement, and eye-tracking data. With this software, we aim to facilitate the use of immersive VR in human threat avoidance research and thus to close a gap in the understanding of human behaviour under threat.

6.
Ear Hear ; 43(4): 1178-1188, 2022.
Article in English | MEDLINE | ID: mdl-34999594

ABSTRACT

OBJECTIVES: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. DESIGN: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. RESULTS: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings. CONCLUSIONS: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
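The logic of parameter-specific morphing can be sketched abstractly (a conceptual illustration, not the actual voice-morphing software used in such studies; the parameter names and 0-1 scale are assumptions): only the parameter under study carries the emotion information, while the others are pinned at a noninformative intermediate level.

```python
# Conceptual sketch of parameter-specific morph settings between a fearful (0.0)
# and an angry (1.0) anchor voice; not the actual morphing pipeline.
PARAMS = ("f0", "timbre", "time")

def morph_weights(morph_level, varied="f0"):
    """Return per-parameter morph levels: only the 'varied' parameter (or all of
    them for 'full') follows morph_level; the rest stay at the 0.5 midpoint."""
    return {p: (morph_level if (p == varied or varied == "full") else 0.5)
            for p in PARAMS}

print(morph_weights(0.8, varied="timbre"))   # {'f0': 0.5, 'timbre': 0.8, 'time': 0.5}
```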


Subject(s)
Cochlear Implantation, Cochlear Implants, Music, Speech Perception, Acoustic Stimulation, Auditory Perception, Emotions, Humans, Quality of Life
7.
Neuroimage ; 228: 117710, 2021 03.
Article in English | MEDLINE | ID: mdl-33385557

ABSTRACT

Understanding others' speech while simultaneously producing one's own speech utterances implies neural competition and requires specific mechanisms for its neural resolution, given that previous studies proposed opposing signal dynamics for the two processes in the auditory cortex (AC). Here we used neuroimaging in humans to investigate this neural competition through lateralized stimulation with other-speech samples and ipsilateral or contralateral lateralized feedback of actively produced self-speech utterances in the form of various speech vowels. In experiment 1, we show, first, that classifying others' speech during active self-speech led to activity in the planum temporale (PTe) when both self- and other-speech samples were presented together to only the left or right ear. The contralateral PTe also seemed to respond indifferently to single self- and other-speech samples. Second, specific activity in the left anterior superior temporal cortex (STC) was found during dichotic stimulation (i.e., self- and other-speech presented to separate ears). Unlike in previous studies, this left anterior STC activity supported self-speech rather than other-speech processing. Furthermore, the right mid and anterior STC were more involved in other-speech processing. These results point to specific mechanisms for self- and other-speech processing in the left and right STC beyond more general speech processing in the PTe. Third, other-speech recognition in the context of listening to recorded self-speech in experiment 2 led to largely symmetric activity in the STC and additionally in inferior frontal subregions. The latter were previously reported to be generally relevant for other-speech perception and classification, but we found frontal activity only when other-speech classification was challenged by recorded, not by active, self-speech samples. Altogether, unlike the established brain networks for non-competitive other-speech perception, active self-speech during other-speech perception seemingly leads to a neural reordering, functional reassignment, and unusual lateralization of AC and frontal brain activations.


Subject(s)
Attention/physiology, Brain/physiology, Speech Perception/physiology, Speech/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Neuroimaging/methods
8.
Hum Brain Mapp ; 42(5): 1503-1517, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33615612

ABSTRACT

Voice signals are relevant for auditory communication and are suggested to be processed in dedicated auditory cortex (AC) regions. While recent reports highlighted an additional role of the inferior frontal cortex (IFC), a detailed description of the integrated functioning of the AC-IFC network and its task relevance for voice processing is missing. Using neuroimaging, we tested sound categorization in which human participants focused either on the higher-order vocal-sound dimension (voice task) or on the feature-based intensity dimension (loudness task) while listening to the same sound material. We found differential involvement of the AC and IFC depending on the task performed and on whether the voice dimension was task relevant or not. First, when comparing neural vocal-sound processing in our task-based design with previously reported passive-listening designs, we observed highly similar cortical activations in the AC and IFC. Second, during task-based vocal-sound processing we observed voice-sensitive responses in the AC and IFC, whereas intensity processing was restricted to distinct AC regions. Third, the IFC flexibly adapted to the vocal sounds' task relevance, being active only when the voice dimension was task relevant. Fourth and finally, connectivity modeling revealed that vocal signals, independent of their task relevance, provided significant input to the bilateral AC. However, only when attention was on the voice dimension did we find significant modulations of auditory-frontal connections. Our findings suggest that an integrated auditory-frontal network is essential for behaviorally relevant vocal-sound processing. The IFC seems to be an important hub of the extended voice network when representing higher-order vocal objects and guiding goal-directed behavior.


Subject(s)
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Connectome, Nerve Net/physiology, Prefrontal Cortex/physiology, Adult, Auditory Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Nerve Net/diagnostic imaging, Prefrontal Cortex/diagnostic imaging, Social Perception, Speech Perception/physiology, Young Adult
9.
Behav Brain Sci ; 44: e118, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588032

ABSTRACT

The credible signaling theory underexplains the evolutionary added value of less-credible affective musical signals compared to vocal signals. The theory might be extended to account for the motivation for, and consequences of, culturally decontextualizing a biologically contextualized signal. Musical signals are twofold, communicating "emotional fiction" alongside biological meaning, and could have filled an adaptive need for affect induction during storytelling.


Subject(s)
Music, Biological Evolution, Communication, Emotions, Humans
10.
Neuroimage ; 207: 116401, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31783116

ABSTRACT

Previous work pointed to the neural and functional significance of infraslow neural oscillations below 1 Hz, which can be detected and precisely located with fast functional magnetic resonance imaging (fMRI). While previous work demonstrated this significance for brain dynamics during very low-level sensory stimulation, here we provide the first evidence for the detectability and functional significance of infraslow oscillatory blood oxygenation level-dependent (BOLD) responses to auditory stimulation by the sociobiologically relevant and more complex category of voices. Previous work also pointed to a specific area of the mammalian auditory cortex (AC) that is sensitive to vocal signals as quantified by activation levels. Here we show, using fast fMRI, that the human voice-sensitive AC prioritizes vocal signals not only in terms of activity level but also in terms of specific infraslow BOLD oscillations. We found unique sustained and transient oscillatory BOLD patterns in the AC for vocal signals. For transient oscillatory patterns, vocal signals showed faster peak oscillatory responses across all AC regions. Furthermore, we identified an exclusive sustained oscillatory component for vocal signals in the primary AC. Fast fMRI thus demonstrates the significance and richness of infraslow BOLD oscillations for neurocognitive mechanisms in social cognition, as demonstrated here for the sociobiological relevance of voice processing.
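To make the infraslow band concrete: with a fast-fMRI repetition time well below 1 s, oscillations up to 1 Hz are resolvable, and their power can be estimated from an ROI time series roughly as follows (a minimal sketch with assumed TR and band limits, not the study's analysis):

```python
import numpy as np
from scipy.signal import welch

def infraslow_power(bold_ts, tr=0.5, f_lo=0.01, f_hi=1.0):
    """Integrated spectral power of a BOLD time series in the infraslow band
    (<1 Hz); the TR and band edges are illustrative assumptions."""
    fs = 1.0 / tr                                           # sampling rate in Hz
    f, pxx = welch(bold_ts, fs=fs, nperseg=min(256, len(bold_ts)))
    band = (f >= f_lo) & (f <= f_hi)
    return float(np.trapz(pxx[band], f[band]))              # band-limited power

# Contrasting this quantity between vocal and nonvocal conditions in
# voice-sensitive auditory cortex mirrors the kind of comparison described above.
```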


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Brain/physiology, Voice/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping/methods, Female, Humans, Magnetic Resonance Imaging/methods, Male, Young Adult
11.
Hum Brain Mapp ; 41(6): 1532-1556, 2020 04 15.
Article in English | MEDLINE | ID: mdl-31868310

ABSTRACT

Humans make various kinds of decisions about which emotions they perceive from others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on the explicit evaluation of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underlie task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of, and decisions on, other individuals' emotions. HIGHLIGHTS: Emotion classification involves heterogeneous perception and decision-making tasks; decision-making processes on emotions are rarely covered by existing emotion theories; we propose an evidence-based neurocognitive model of decision-making on emotions; bilateral brain processes support nonverbal decisions, while left-hemisphere processes support verbal decisions; the left amygdala is involved in any kind of decision on emotions.


Subject(s)
Cognition, Decision Making/physiology, Emotions/physiology, Models, Psychological, Perception/physiology, Brain Mapping, Humans, Neuroimaging
12.
J Acoust Soc Am ; 147(6): EL540, 2020 06.
Article in English | MEDLINE | ID: mdl-32611175

ABSTRACT

One way music is thought to convey emotion is by mimicking acoustic features of affective human vocalizations [Juslin and Laukka (2003). Psychol. Bull. 129(5), 770-814]. Regarding fear, it has been informally noted that music for scary scenes in films frequently exhibits a "scream-like" character. Here, this proposition is formally tested. This paper reports acoustic analyses for four categories of audio stimuli: screams, non-screaming vocalizations, scream-like music, and non-scream-like music. Valence and arousal ratings were also collected. Results support the hypothesis that a key feature of human screams (roughness) is imitated by scream-like music and could potentially signal danger through both music and the voice.
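Roughness in screams is commonly operationalized as amplitude-modulation energy in roughly the 30-150 Hz range; a minimal sketch of such a measure (my illustration under that common definition, not necessarily the paper's exact analysis) is:

```python
import numpy as np
from scipy.signal import hilbert, welch

def roughness_index(y, sr, band=(30.0, 150.0), mod_max=500.0):
    """Share of amplitude-modulation energy falling in the 'roughness' band,
    computed from the Hilbert envelope (illustrative operationalization)."""
    envelope = np.abs(hilbert(y))                              # amplitude envelope of the waveform
    f, pxx = welch(envelope, fs=sr, nperseg=min(8192, len(envelope)))
    in_band = (f >= band[0]) & (f <= band[1])
    total = (f > 0) & (f <= mod_max)                           # restrict to plausible modulation rates
    return float(np.trapz(pxx[in_band], f[in_band]) / np.trapz(pxx[total], f[total]))

# Under the abstract's hypothesis, screams and scream-like film music should
# yield higher values than neutral vocalizations and non-scream-like music.
```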


Subject(s)
Music, Voice, Acoustics, Animals, Arousal, Cattle, Emotions, Humans, Male
13.
Psychol Res ; 83(8): 1640-1655, 2019 Nov.
Article in English | MEDLINE | ID: mdl-29675706

ABSTRACT

Different parts of our brain code the perceptual features and actions related to an object, causing a binding problem: how does the brain discriminate the information of a particular event from the features of other events? Hommel (1998) suggested the event file concept: an episodic memory trace binding perceptual and motor information pertaining to an object. By adapting Hommel's paradigm to emotional faces in a previous study (Coll & Grandjean, 2016), we demonstrated that emotion can take part in an event file with motor responses. We postulated that such binding also occurs with emotional prosody, given the comparable importance of automatic reactions to such events. However, contrary to static emotional expressions, prosody unfolds over time, and these temporal dynamics may influence the integration of such stimuli. To investigate this, we conducted three studies with task-relevant and task-irrelevant emotional prosodies. Our results showed that emotion could interact with motor responses when it was task relevant. When it was task irrelevant, this integration was also observed, but only when participants were led to focus on the details of the voices, that is, in a loudness task. No such binding was observed when participants performed a location task, in which emotion could be ignored. These results indicate that emotional binding is not restricted to visual information but is a general phenomenon allowing organisms to integrate emotion and action in an efficient and adaptive way. We discuss the influence of temporal dynamics on emotion-action binding and the implications of Hommel's paradigm.


Subject(s)
Anger/physiology, Discrimination, Psychological/physiology, Happiness, Adult, Emotions/physiology, Facial Expression, Female, Humans, Male, Voice, Young Adult
14.
Proc Natl Acad Sci U S A ; 112(5): 1583-8, 2015 Feb 03.
Article in English | MEDLINE | ID: mdl-25605886

ABSTRACT

We tested whether human amygdala lesions impair vocal processing in intact cortical networks. In two functional MRI experiments, patients with unilateral amygdala resection either listened to voices and nonvocal sounds or heard binaural vocalizations with attention directed toward or away from emotional information on one side. In experiment 1, all patients showed reduced activation to voices in the ipsilesional auditory cortex. In experiment 2, emotional voices evoked increased activity in both the auditory cortex and the intact amygdala for right-damaged patients, whereas no such effects were found for left-damaged amygdala patients. Furthermore, the left inferior frontal cortex was functionally connected with the intact amygdala in right-damaged patients, but only with homologous right frontal areas and not with the amygdala in left-damaged patients. Thus, unilateral amygdala damage leads to globally reduced ipsilesional cortical voice processing, but only left amygdala lesions are sufficient to suppress the enhanced auditory cortical processing of vocal emotions.


Subject(s)
Amygdala/physiopathology, Auditory Cortex/physiopathology, Emotions, Voice, Female, Humans, Magnetic Resonance Imaging, Male
15.
Neuroimage ; 142: 602-612, 2016 Nov 15.
Article in English | MEDLINE | ID: mdl-27530550

ABSTRACT

Whispering is a unique expression mode that is specific to auditory communication. Individuals switch their vocalization mode to whispering especially when affected by inner emotions in certain social contexts, such as intimate relationships or intimidating social interactions. Although this context-dependent whispering is adaptive, whispered voices are acoustically far less rich than phonated voices and thus impose higher demands on listeners' hearing and neural auditory decoding when recognizing their socio-affective value. The neural dynamics underlying this recognition, especially from whispered voices, are largely unknown. Here we show that whispered voices in humans are considerably impoverished, as quantified by an entropy measure of spectral acoustic information, and that this missing information requires large-scale neural compensation in terms of auditory and cognitive processing. Notably, recognizing socio-affective information from voices was slightly more difficult for whispered voices, probably because of the missing tonal information. While phonated voices elicited extended activity in auditory regions for decoding the relevant tonal and temporal information and the valence of voices, whispered voices elicited activity in a complex auditory-frontal brain network. Our data suggest that a large-scale multidirectional brain network compensates for the impoverished sound quality of socially meaningful environmental signals to support their accurate recognition and valence attribution.
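The "entropy measure of spectral acoustic information" can be illustrated with a simple Shannon entropy over the normalized power spectrum (a generic sketch; the paper's exact entropy computation may differ):

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(y, sr):
    """Shannon entropy (in bits) of the normalized power spectrum; lower values
    indicate spectrally impoverished signals such as whispered voices
    (illustrative measure, not necessarily the study's exact one)."""
    f, pxx = welch(y, fs=sr, nperseg=min(2048, len(y)))
    p = pxx / pxx.sum()                      # normalize spectrum to a probability distribution
    p = p[p > 0]                             # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())
```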


Subject(s)
Auditory Perception/physiology, Brain Mapping/methods, Emotions/physiology, Frontal Lobe/physiology, Temporal Lobe/physiology, Verbal Behavior/physiology, Adolescent, Adult, Female, Frontal Lobe/diagnostic imaging, Humans, Magnetic Resonance Imaging, Male, Temporal Lobe/diagnostic imaging, Young Adult
16.
Cereb Cortex ; 25(9): 2752-62, 2015 Sep.
Article in English | MEDLINE | ID: mdl-24735671

ABSTRACT

Although the neural basis for the perception of vocal emotions has been described extensively, the neural basis for the expression of vocal emotions is almost unknown. Here, we asked participants both to repeat and to express high-arousing angry vocalizations on command (i.e., evoked expressions). First, repeated expressions elicited activity in the left middle superior temporal gyrus (STG), pointing to a short auditory memory trace for the repetition of vocal expressions. Evoked expressions activated the left hippocampus, suggesting the retrieval of long-term stored scripts. Second, angry compared with neutral expressions elicited activity in the inferior frontal cortex (IFC) and the dorsal basal ganglia (BG), specifically during evoked expressions. Angry expressions also activated the amygdala and anterior cingulate cortex (ACC), and the latter correlated with pupil size as an indicator of bodily arousal during emotional output behavior. Though uncorrelated, both ACC activity and pupil diameter were also increased during repetition trials, indicating increased control demands during the more constrained production mode of precisely repeating prosodic intonations. Finally, different acoustic measures of angry expressions were associated with activity in the left STG, bilateral inferior frontal gyrus, and dorsal BG.


Subject(s)
Brain Mapping, Brain/physiology, Emotions/physiology, Expressed Emotion/physiology, Neural Pathways/physiology, Speech/physiology, Adult, Analysis of Variance, Brain/blood supply, Female, Humans, Magnetic Resonance Imaging, Male, Neural Pathways/blood supply, Pupil/physiology, Statistics as Topic, Time Factors, Young Adult
17.
Neuroimage ; 109: 27-34, 2015 Apr 01.
Article in English | MEDLINE | ID: mdl-25583613

ABSTRACT

Dorsal and ventral pathways for syntacto-semantic speech processing in the left hemisphere are represented in the dual-stream model of auditory processing. Here we report new findings for right dorsal and ventral temporo-frontal pathways during the processing of affectively intonated speech (i.e., affective prosody) in humans, together with several left-hemispheric structural connections that partly resemble those for syntacto-semantic speech processing. We investigated white-matter fiber connectivity between regions responding to affective prosody in several subregions of the bilateral superior temporal cortex (secondary and higher-level auditory cortex) and of the inferior frontal cortex (anterior and posterior inferior frontal gyrus). Fiber connectivity was investigated using probabilistic diffusion-tensor-based tractography. The results underscore several previously underestimated auditory pathway connections for the processing of affective prosody, such as a right ventral auditory pathway. They also suggest the existence of dual-stream processing in the right hemisphere, and a general predominance of the dorsal pathways in both hemispheres underlying the neural processing of affective prosody in an extended temporo-frontal network.


Subject(s)
Affect/physiology, Brain/physiology, Speech Perception/physiology, White Matter/physiology, Adult, Brain Mapping, Diffusion Magnetic Resonance Imaging, Diffusion Tensor Imaging, Female, Frontal Lobe/physiology, Functional Laterality, Humans, Male, Probability, Temporal Lobe/physiology, Young Adult
18.
Neuroimage ; 103: 55-64, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25224999

ABSTRACT

Rhythmic entrainment is an important component of emotion induction by music, but the brain circuits recruited during spontaneous entrainment of attention by music, and the influence of the subjective emotional feelings evoked by music, still remain largely unresolved. In this study we used fMRI to test whether the metric structure of music entrains brain activity and how music pleasantness influences such entrainment. Participants listened to piano music while performing a speeded visuomotor detection task in which targets appeared time-locked to either strong or weak beats. Each musical piece was presented in both a consonant/pleasant and a dissonant/unpleasant version. Consonant music facilitated target detection, and targets presented synchronously with strong beats were detected faster. fMRI showed increased activation of the bilateral caudate nucleus when responding on strong beats, whereas consonance enhanced activity in attentional networks. Meter and consonance selectively interacted in the caudate nucleus, with greater meter effects during dissonant than consonant music. These results reveal that the basal ganglia, involved both in emotion and rhythm processing, critically contribute to the rhythmic entrainment of subcortical brain circuits by music.
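The meter manipulation can be illustrated with a small helper that draws target onsets locked to strong (downbeat) or weak beats of a regular beat grid (a hypothetical sketch of the design, not the study's stimulus code; the tempo and meter values are assumptions):

```python
import numpy as np

def target_onsets(n_targets, beat_period=0.5, meter=4, lock_to="strong", seed=None):
    """Draw visual-target onset times (in seconds) time-locked to strong or
    weak beats of a regular meter (illustrative of the design only)."""
    rng = np.random.default_rng(seed)
    beats = np.arange(n_targets * meter) * beat_period   # regular beat grid
    strong = beats[::meter]                               # downbeats
    weak = np.setdiff1d(beats, strong)                    # remaining (weak) beats
    pool = strong if lock_to == "strong" else weak
    return np.sort(rng.choice(pool, size=n_targets, replace=False))

print(target_onsets(4, lock_to="strong", seed=0))   # e.g. [0. 2. 4. 6.]
```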


Subject(s)
Brain/physiology, Music/psychology, Periodicity, Pleasure, Adult, Auditory Perception/physiology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Young Adult
19.
Behav Brain Sci ; 37(6): 554-5; discussion 577-604, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25514944

ABSTRACT

Neuroimaging studies have verified the important integrative role of the basal ganglia during affective vocalizations. They, however, also point to additional regions supporting vocal monitoring, auditory-motor feedback processing, and online adjustments of vocal motor responses. For the case of affective vocalizations, we suggest partly extending the model to fully consider the link between primate-general and human-specific neural components.


Subject(s)
Animal Communication, Biological Evolution, Communication, Primates/physiology, Speech/physiology, Animals, Humans
20.
Emot Rev ; 16(3): 180-194, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39101012

ABSTRACT

The question of why music evolved has been contemplated and debated for centuries across multiple disciplines. While many theories have been posited, they still do not fully answer the question of why humans began making music. Adding to the effort to solve this mystery, we propose the socio-affective fiction (SAF) hypothesis. Humans have a unique biological need to strengthen their emotion regulation. Simulated emotional situations, like dreams, can help address that need. Immersion is key for such simulations to successfully exercise people's emotions. We therefore propose that music evolved as a signal for SAF to increase the immersive potential of storytelling and thereby better exercise people's emotions. In this review, we outline the SAF hypothesis and present cross-disciplinary evidence.
