Results 1 - 20 of 17,859

1.
Nat Rev Neurosci ; 23(5): 287-305, 2022 05.
Article in English | MEDLINE | ID: mdl-35352057

ABSTRACT

Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature on music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity, as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
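
The "prediction" at the heart of the predictive coding of music model can be illustrated, in a deliberately simplified form, by the surprisal of each note under a toy statistical model of the preceding context; the sketch below is that stand-in only, not the predictive coding of music model itself.

```python
import numpy as np
from collections import Counter, defaultdict

def note_surprisal(melody, context_len=1):
    """Per-note surprisal (-log2 probability) under a toy Markov model.

    Notes that the model finds improbable given their context carry high
    surprisal; this is the simplest statistical stand-in for the
    'prediction' idea, not the predictive coding of music model.
    """
    counts = defaultdict(Counter)
    for i in range(context_len, len(melody)):
        context = tuple(melody[i - context_len:i])
        counts[context][melody[i]] += 1

    surprisals = []
    for i in range(context_len, len(melody)):
        context = tuple(melody[i - context_len:i])
        p = counts[context][melody[i]] / sum(counts[context].values())
        surprisals.append(-np.log2(p))
    return surprisals

# The unexpected final note receives the highest surprisal.
print(note_surprisal(["C", "D", "E", "C", "D", "E", "C", "D", "F#"]))
```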


Subjects
Music, Auditory Perception, Brain, Emotions, Humans, Learning, Music/psychology
2.
PLoS Biol ; 22(9): e3002810, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39236087

ABSTRACT

The relationship between musical training and intellect is controversial. A new hypothesis may help resolve the debate by proposing an explanation for how training in rhythmic skills can improve cognitive abilities in some individuals, but not others.


Subjects
Cognition, Music, Cognition/physiology, Humans, Learning/physiology
3.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article in English | MEDLINE | ID: mdl-38805517

ABSTRACT

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music from speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, then judgments of artificially noise-synthesized, ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech and those with a lower peak AM frequency as music. Interestingly, this principle was used consistently by all listeners for speech judgments, but only by musically sophisticated listeners for music judgments. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgments, regardless of musical sophistication. The data suggest that the auditory system can rely on a low-level acoustic property as basic as AM to distinguish music from speech, a simple principle that invites both neurophysiological and evolutionary experiments and speculation.
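
A minimal sketch of the amplitude-modulation principle described above, assuming a mono waveform `x` sampled at rate `fs`; the 4 Hz decision threshold is an illustrative value, not one estimated in the study.

```python
import numpy as np
from scipy.signal import hilbert, welch

def peak_am_frequency(x, fs, max_mod_hz=32.0):
    """Estimate the dominant amplitude-modulation (AM) rate of a waveform."""
    envelope = np.abs(hilbert(x))              # slow amplitude fluctuations
    envelope -= envelope.mean()                # remove DC before the spectrum
    nperseg = min(len(envelope), 8 * int(fs))  # long windows resolve low AM rates
    freqs, power = welch(envelope, fs=fs, nperseg=nperseg)
    band = freqs <= max_mod_hz                 # modulation rates of interest
    return freqs[band][np.argmax(power[band])]

def label_by_am_rate(x, fs, threshold_hz=4.0):
    """Toy decision rule: higher peak AM rates lean 'speech', lower lean 'music'.

    The threshold is purely illustrative; the study inferred listeners'
    decision boundaries empirically rather than from a fixed cutoff.
    """
    return "speech-like" if peak_am_frequency(x, fs) > threshold_hz else "music-like"
```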


Subjects
Acoustic Stimulation, Auditory Perception, Music, Speech Perception, Humans, Male, Female, Adult, Auditory Perception/physiology, Acoustic Stimulation/methods, Speech Perception/physiology, Young Adult, Speech/physiology, Adolescent
4.
PLoS Biol ; 22(8): e3002732, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39133721

ABSTRACT

Music can evoke pleasurable and rewarding experiences. Past studies that examined task-related brain activity revealed individual differences in musical reward sensitivity traits and linked them to interactions between the auditory and reward systems. However, state-dependent fluctuations in spontaneous neural activity in relation to music-driven rewarding experiences have not been studied. Here, we used functional MRI to examine whether the coupling of auditory-reward networks during a silent period immediately before music listening can predict the degree of musical rewarding experience of human participants (N = 49). Using machine learning models, we showed that the functional connectivity between auditory and reward networks, but not other networks, could robustly predict subjective, physiological, and neurobiological aspects of the strong musical reward of chills. Specifically, the right auditory cortex-striatum/orbitofrontal connections predicted the reported duration of chills and the activation levels of the nucleus accumbens and insula, whereas the auditory-amygdala connection was associated with psychophysiological arousal. Furthermore, the predictive model derived from the first sample of individuals generalized to an independent dataset using different music samples. The generalization was successful only for state-like, pre-listening functional connectivity, not for stable, intrinsic functional connectivity. The current study reveals the critical role of sensory-reward connectivity in the pre-task brain state in modulating the subsequent rewarding experience.
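
The connectivity-to-behavior modeling described above can be sketched under simplifying assumptions: pre-listening ROI time series are assumed to be already extracted, and a ridge regressor with 5-fold cross-validation stands in for the authors' machine learning pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

def connectivity_features(ts):
    """Vectorize the upper triangle of an ROI-by-ROI correlation matrix.

    ts: array (n_timepoints, n_rois) of pre-listening (silent-period)
    BOLD time series for one subject.
    """
    corr = np.corrcoef(ts.T)                  # (n_rois, n_rois)
    iu = np.triu_indices_from(corr, k=1)      # unique ROI pairs only
    return corr[iu]

def predict_chills(roi_ts, chills_duration):
    """Cross-validated prediction of chills duration from pre-task connectivity.

    roi_ts: list of (n_timepoints, n_rois) arrays, one per subject;
    chills_duration: reported chills duration per subject. Returns the
    correlation between observed and cross-validated predicted values.
    """
    X = np.vstack([connectivity_features(ts) for ts in roi_ts])
    y = np.asarray(chills_duration, dtype=float)
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    y_hat = cross_val_predict(Ridge(alpha=1.0), X, y, cv=cv)
    return float(np.corrcoef(y, y_hat)[0, 1])
```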


Subjects
Auditory Perception, Magnetic Resonance Imaging, Music, Pleasure, Reward, Humans, Music/psychology, Male, Female, Pleasure/physiology, Adult, Auditory Perception/physiology, Young Adult, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging, Brain Mapping/methods, Brain/physiology, Brain/diagnostic imaging, Acoustic Stimulation, Nerve Net/physiology, Nerve Net/diagnostic imaging, Machine Learning
5.
Proc Natl Acad Sci U S A ; 121(6): e2306549121, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38300861

ABSTRACT

Understanding and predicting the emergence and evolution of cultural tastes manifested in consumption patterns is of central interest to social scientists, analysts of culture, and purveyors of content. Prior research suggests that taste preferences relate to personality traits, values, shifts in mood, and immigration destination. Understanding everyday patterns of listening and the function music plays in life has remained elusive, however, despite speculation that musical nostalgia may compensate for local disruption. Using more than one hundred million streams of four million songs by tens of thousands of international listeners from a global music service, we show that breaches in personal routine are systematically associated with personal musical exploration. As people visited new cities and countries, their preferences diversified, converging toward their travel destinations. As people experienced the very different disruptions associated with COVID-19 lockdowns, their preferences diversified further. Personal explorations did not tend to veer toward the global listening average, but away from it, toward distinctive regional musical content. Exposure to novel music explored during periods of routine disruption showed a persistent influence on listeners' future consumption patterns. Across all of these settings, musical preference reflected rather than compensated for life's surprises, leaving a lasting legacy on tastes. We explore the relationship between these findings and global patterns of behavior and cultural consumption.
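
One way to quantify the "diversification" of listening described above is the Shannon entropy of a listener's play-count distribution; this is an illustrative stand-in, not necessarily the measure used in the paper.

```python
import numpy as np
from collections import Counter

def listening_diversity(artist_plays):
    """Shannon entropy (bits) of a listener's play-count distribution.

    artist_plays: iterable of artist identifiers, one per stream. Higher
    entropy means listening is spread more evenly across artists.
    """
    counts = np.array(list(Counter(artist_plays).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Plays concentrated on one artist vs. spread evenly across four artists.
print(listening_diversity(["a"] * 9 + ["b"]))          # ~0.47 bits
print(listening_diversity(["a", "b", "c", "d"] * 5))   # 2.0 bits
```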


Subjects
Music, Humans, Affect, Forecasting
6.
Proc Natl Acad Sci U S A ; 121(5): e2308859121, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38271338

ABSTRACT

Emotions, bodily sensations, and movement are integral parts of musical experiences. Yet, it remains unknown (i) whether emotional connotations and structural features of music elicit discrete bodily sensations and (ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1,035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.


Subjects
Music, Humans, Music/psychology, Sensation, Cross-Cultural Comparison, Acoustics, Emotions, Auditory Perception
7.
Proc Natl Acad Sci U S A ; 121(10): e2316306121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38408255

ABSTRACT

Music is powerful in conveying emotions and triggering affective brain mechanisms. However, affective brain responses in previous studies have been rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, by contrast, can be dynamic and adaptive and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a closed-loop neurofeedback setup for studying emotional responses to live music. This setup linked live performances by musicians to neural processing in listeners, with the listeners' amygdala activity displayed to the musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time to provide the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared with recorded music. These findings included a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time, dynamic entrainment processes.
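
A schematic of the feedback signal in a closed loop of this kind: percent signal change of one region of interest, updated volume by volume. The actual system (online image reconstruction, motion correction, calibration of the display shown to the musicians) is far more involved and is not reproduced here.

```python
import numpy as np

class RoiNeurofeedback:
    """Toy real-time feedback: percent signal change (PSC) in one ROI."""

    def __init__(self, roi_mask, n_baseline=10):
        self.roi_mask = roi_mask          # boolean mask over voxels
        self.n_baseline = n_baseline      # volumes used to fix the baseline
        self.baseline_values = []
        self.baseline = None

    def update(self, volume):
        """Feed one new fMRI volume; return the feedback value (PSC in %)."""
        roi_mean = float(volume[self.roi_mask].mean())
        if self.baseline is None:
            self.baseline_values.append(roi_mean)
            if len(self.baseline_values) == self.n_baseline:
                self.baseline = float(np.mean(self.baseline_values))
            return 0.0                    # no feedback while collecting baseline
        return 100.0 * (roi_mean - self.baseline) / self.baseline
```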


Subjects
Music, Music/psychology, Brain/physiology, Emotions/physiology, Amygdala/physiology, Affect, Magnetic Resonance Imaging, Auditory Perception/physiology
8.
Proc Natl Acad Sci U S A ; 121(36): e2319459121, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39186645

ABSTRACT

The perception of musical phrase boundaries is a critical aspect of human musical experience: it allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using general linear modeling, independent component analysis, and Granger causality, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease within the fronto-temporal-parietal network during and immediately following boundaries. Notably, responses were modulated by musicianship. These findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.
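
One ingredient named above, the general linear model, amounts to regressing each voxel's time series on boundary-event regressors convolved with a haemodynamic response function. The sketch below uses common default parameters (double-gamma HRF, stick functions at boundary times), not the paper's exact settings.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Double-gamma haemodynamic response function sampled every TR seconds."""
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # peak minus undershoot
    return hrf / hrf.max()

def boundary_regressor(boundary_times, n_scans, tr):
    """Stick function at phrase-boundary times, convolved with the HRF."""
    stick = np.zeros(n_scans)
    idx = np.round(np.asarray(boundary_times) / tr).astype(int)
    stick[idx[idx < n_scans]] = 1.0
    return np.convolve(stick, canonical_hrf(tr))[:n_scans]

def fit_glm(bold, regressors):
    """Ordinary least squares: one beta per regressor, plus an intercept."""
    X = np.column_stack([np.ones(len(bold))] + list(regressors))
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas
```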


Subjects
Auditory Perception, Brain, Magnetic Resonance Imaging, Music, Humans, Music/psychology, Auditory Perception/physiology, Male, Adult, Female, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods, Young Adult, Acoustic Stimulation
9.
Proc Natl Acad Sci U S A ; 121(30): e2320378121, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39008675

ABSTRACT

The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework for assessing how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, demonstrating the profound emotional impact of musical dissonance. Crucially, at the neuroscientific level and despite music being the sole manipulation, dissonance evoked responses in the primary visual cortex (V1). Functional/effective connectivity analysis showed a stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated the modulation of early visual processing via top-down feedback inputs from the AVS to V1. These V1 signal changes indicate the influence of high-level contextual representations associated with tonal dissonance on early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the significance of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, to elucidate the brain's sound-to-meaning interface and its distributed crossmodal effects on early visual encoding during naturalistic film viewing.


Subjects
Auditory Perception, Emotions, Magnetic Resonance Imaging, Music, Visual Perception, Humans, Music/psychology, Female, Male, Adult, Visual Perception/physiology, Auditory Perception/physiology, Emotions/physiology, Young Adult, Brain Mapping, Acoustic Stimulation, Visual Cortex/physiology, Visual Cortex/diagnostic imaging, Primary Visual Cortex/physiology, Photic Stimulation/methods
10.
Annu Rev Neurosci ; 41: 527-552, 2018 07 08.
Article in English | MEDLINE | ID: mdl-29986161

ABSTRACT

How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit to understand related neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early exploration of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Brain Mapping, Animals, Hearing, Humans, Music, Speech
11.
PLoS Biol ; 21(8): e3002176, 2023 08.
Article in English | MEDLINE | ID: mdl-37582062

ABSTRACT

Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), identified a new STG subregion tuned to musical rhythm, and defined an anterior-posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling to short datasets acquired in single patients, paving the way for adding musical elements to brain-computer interface (BCI) applications.
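
In its simplest linear form, the stimulus-reconstruction approach referenced above is a regularized regression from time-lagged neural features to the auditory spectrogram. The sketch below assumes preprocessed high-gamma power and a spectrogram already resampled to a common rate; it is a generic decoder, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged_features(neural, max_lag):
    """Stack time-lagged copies of the neural data (n_times, n_channels)."""
    lags = [np.roll(neural, lag, axis=0) for lag in range(max_lag + 1)]
    X = np.concatenate(lags, axis=1)
    X[:max_lag] = 0.0               # discard the wrap-around rows from np.roll
    return X

def reconstruct_spectrogram(neural, spectrogram, max_lag=20, alpha=100.0):
    """Fit one ridge decoder per spectrogram frequency bin.

    neural: (n_times, n_channels) high-gamma power; spectrogram:
    (n_times, n_freqs) auditory spectrogram of the song. For brevity the
    model is fit and evaluated on the same data; a real analysis would
    cross-validate across song segments.
    """
    X = lagged_features(neural, max_lag)
    model = Ridge(alpha=alpha).fit(X, spectrogram)   # multi-output ridge
    return model.predict(X)
```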


Subjects
Auditory Cortex, Music, Humans, Auditory Cortex/physiology, Brain Mapping, Auditory Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation
12.
Proc Natl Acad Sci U S A ; 120(37): e2218593120, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37676911

ABSTRACT

Despite the variability of music across cultures, some types of human songs share acoustic characteristics. For example, dance songs tend to be loud and rhythmic, and lullabies tend to be quiet and melodious. Human perceptual sensitivity to the behavioral contexts of songs, based on these musical features, suggests that basic properties of music are mutually intelligible, independent of linguistic or cultural content. Whether these effects reflect universal interpretations of vocal music, however, is unclear because prior studies focus almost exclusively on English-speaking participants, a group that is not representative of humans. Here, we report shared intuitions concerning the behavioral contexts of unfamiliar songs produced in unfamiliar languages, in participants living in Internet-connected industrialized societies (n = 5,516 native speakers of 28 languages) or smaller-scale societies with limited access to global media (n = 116 native speakers of three non-English languages). Participants listened to songs randomly selected from a representative sample of human vocal music, originally used in four behavioral contexts, and rated the degree to which they believed the song was used for each context. Listeners in both industrialized and smaller-scale societies inferred the contexts of dance songs, lullabies, and healing songs, but not love songs. Within and across cohorts, inferences were mutually consistent. Further, increased linguistic or geographical proximity between listeners and singers only minimally increased the accuracy of the inferences. These results demonstrate that the behavioral contexts of three common forms of music are mutually intelligible cross-culturally and imply that musical diversity, shaped by cultural evolution, is nonetheless grounded in some universal perceptual phenomena.


Subjects
Cultural Evolution, Music, Humans, Language, Linguistics, Acoustics
13.
Proc Natl Acad Sci U S A ; 120(5): e2216146120, 2023 01 31.
Article in English | MEDLINE | ID: mdl-36693091

ABSTRACT

Some people, entirely untrained in music, can listen to a song and replicate it on a piano with unnerving accuracy. What enables some to "hear" music so much better than others? Long-standing research confirms that part of the answer is undoubtedly neurological and can be improved with training. However, are there structural, physical, or engineering attributes of the human hearing apparatus (i.e., the hair cells of the inner ear) that render one human innately superior to another in the propensity to perceive music? In this work, we investigate a physics-based model of the electromechanics of the hair cells in the inner ear to understand why a person might be physiologically better poised to distinguish musical sounds. A key feature of the model is that we avoid a "black-box" systems-type approach. All parameters are well-defined physical quantities, including membrane thickness, bending modulus, electromechanical properties, and geometrical features, among others. Using the two-tone interference problem as a proxy for musical perception, our model allows us to establish a basis for exploring the effect of external factors such as medicine or environment. As an example of the insights we obtain, we conclude that a reduction in the bending modulus of the cell membranes (which may, for instance, be caused by the use of a certain class of analgesic drugs) or an increase in the flexoelectricity of the hair cell membrane can interfere with the perception of two-tone excitation.
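
The two-tone interference problem used above as a proxy for musical perception rests on a textbook identity: two nearby pure tones sum to a carrier at their mean frequency whose amplitude beats at their difference frequency.

```latex
\cos(2\pi f_1 t) + \cos(2\pi f_2 t)
  = 2\cos\!\left(2\pi\,\tfrac{f_1 - f_2}{2}\,t\right)
     \cos\!\left(2\pi\,\tfrac{f_1 + f_2}{2}\,t\right)
```

For example, tones at f1 = 440 Hz and f2 = 444 Hz are heard as a 442 Hz tone beating at |f1 - f2| = 4 Hz; this is the kind of interference with which the hair-cell model is probed.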


Subjects
Music, Speech Perception, Humans, Auditory Perception, Hearing, Physics, Speech Perception/physiology, Pitch Perception/physiology
14.
J Neurosci ; 44(30)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-38926087

ABSTRACT

Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
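
A rough sketch of how a slow (~0.1 Hz) modulation of EEG band power and its instantaneous phase, the kind of quantities behind the phrase-tracking and phase-precession measures described above, might be extracted. The band edges and filter orders are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def slow_power_phase(eeg, fs, carrier_band=(1.0, 8.0), mod_band=(0.05, 0.2)):
    """Instantaneous phase of slow (~0.1 Hz) fluctuations in EEG band power.

    eeg: one channel's time series sampled at fs Hz. In practice the power
    envelope would be downsampled before the very-low-frequency filtering;
    that step is omitted here for brevity.
    """
    # 1) Band-limit the EEG and take its power envelope.
    b, a = butter(4, carrier_band, btype="bandpass", fs=fs)
    power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2
    # 2) Isolate the slow modulation of that power around ~0.1 Hz.
    b2, a2 = butter(2, mod_band, btype="bandpass", fs=fs)
    slow = filtfilt(b2, a2, power - power.mean())
    # 3) Phase of the slow modulation, e.g. for phase-precession analyses.
    return np.angle(hilbert(slow))
```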


Subjects
Auditory Perception, Electroencephalography, Music, Humans, Female, Male, Electroencephalography/methods, Adult, Auditory Perception/physiology, Young Adult, Acoustic Stimulation/methods, Brain/physiology
15.
J Neurosci ; 44(37)2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39160067

ABSTRACT

During infancy and adolescence, language develops from predominantly interhemispheric control, through the corpus callosum (CC), to predominantly intrahemispheric control, mainly subserved by the left arcuate fasciculus (AF). Using multimodal neuroimaging, we demonstrate that human left-handers (both male and female) with an atypical language lateralization show a rightward participation of language areas, from the auditory cortex to the inferior frontal cortex, when contrasting speech with tone perception, as well as enhanced interhemispheric anatomical and functional connectivity. Crucially, musicianship determines two different structural pathways to this outcome. Nonmusicians present a relation between atypical lateralization and intrahemispheric underdevelopment across the anterior AF, hinting at a dysregulation of the ontogenetic shift from an interhemispheric to an intrahemispheric brain. Musicians reveal an alternative pathway related to interhemispheric overdevelopment across the posterior CC and the auditory cortex. We discuss the heterogeneity in reaching atypical language lateralization and the relevance of early musical training in altering the normal development of language cognitive functions.
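
Language lateralization of the kind discussed here is commonly summarized with a laterality index over homologous left and right regions of interest; the sketch below shows that standard formula with invented example values.

```python
def laterality_index(left, right):
    """Standard laterality index: +1 fully left-lateralized, -1 fully right.

    Inputs are summary activation measures (e.g., suprathreshold voxel
    counts or mean contrast values) for homologous left/right language ROIs.
    """
    total = left + right
    return (left - right) / total if total else 0.0

# A listener with a much stronger right-hemisphere speech-vs-tone response.
print(laterality_index(left=12.0, right=48.0))   # -0.6 (rightward, atypical)
```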


Subjects
Functional Laterality, Music, Humans, Male, Female, Music/psychology, Adult, Functional Laterality/physiology, Young Adult, Language, Neural Pathways/physiology, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging, Corpus Callosum/physiology, Corpus Callosum/diagnostic imaging, Magnetic Resonance Imaging, Adolescent, Brain Mapping
16.
J Neurosci ; 44(15)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38423761

ABSTRACT

Music is a universal human attribute. The study of amusia, a neurologic music-processing deficit, has increasingly refined our view of the neural organization of the musical brain. However, lesions causing amusia occur in multiple brain locations and often also cause aphasia, leaving the distinct neural networks for amusia unclear. Here, we utilized lesion network mapping to identify these networks. A systematic literature search was carried out to identify all published case reports of lesion-induced amusia. The reproducibility and specificity of the identified amusia network were then tested in an independent prospective cohort of 97 stroke patients (46 female and 51 male) with repeated structural brain imaging, specifically assessed for both music perception and language abilities. Lesion locations in the case reports were heterogeneous but connected to common brain regions, including bilateral temporoparietal and insular cortices, precentral gyrus, and cingulum. In the prospective cohort, lesions causing amusia mapped to a common brain network, centering on the right superior temporal cortex and clearly distinct from the network causally associated with aphasia. Lesion-induced longitudinal structural effects in the amusia circuit were confirmed as reductions of both gray and white matter volume, which correlated with the severity of amusia. We demonstrate that despite the heterogeneity of lesion locations disrupting music processing, there is a common brain network that is distinct from the language network. These results provide evidence for a distinct neural substrate of music processing, differentiating music-related functions from language and providing a testable target for noninvasive brain stimulation to treat amusia.
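
Lesion network mapping, as used in this study, seeds a normative functional connectome with each lesion location and then looks for regions common across the resulting maps. The sketch below makes simplifying assumptions (voxel data already aligned to a common space, a single arbitrary correlation threshold) and is a schematic of the logic, not the study's pipeline.

```python
import numpy as np

def lesion_network_map(lesion_mask, connectome_ts, r_threshold=0.5):
    """Voxels functionally connected to one lesion location.

    lesion_mask: boolean array (n_voxels,) marking lesioned voxels.
    connectome_ts: array (n_timepoints, n_voxels) of normative resting-state
    time series from healthy controls. Returns a boolean connectivity map.
    """
    seed = connectome_ts[:, lesion_mask].mean(axis=1)     # lesion-seed signal
    seed = (seed - seed.mean()) / seed.std()
    z = (connectome_ts - connectome_ts.mean(0)) / connectome_ts.std(0)
    r = z.T @ seed / len(seed)                            # voxelwise Pearson r
    return r > r_threshold

def network_overlap(maps):
    """Fraction of lesions whose networks include each voxel."""
    return np.mean(np.stack(maps), axis=0)
```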


Subjects
Nerve Net, Humans, Female, Male, Middle Aged, Aged, Nerve Net/diagnostic imaging, Nerve Net/pathology, Nerve Net/physiopathology, Music, Auditory Perceptual Disorders/etiology, Auditory Perceptual Disorders/physiopathology, Brain/pathology, Brain/diagnostic imaging, Magnetic Resonance Imaging, Brain Mapping, Adult, Prospective Studies, Stroke/complications, Stroke/pathology, Stroke/diagnostic imaging