Results 1 - 20 of 6,455
1.
Proc Natl Acad Sci U S A ; 121(10): e2316306121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38408255

ABSTRACT

Music is powerful in conveying emotions and triggering affective brain mechanisms. However, affective brain responses in previous studies were rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, in contrast, can be dynamic and adaptive and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a setup for studying emotional responses to live music in a closed-loop neurofeedback design. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time for the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared with recorded music. This included a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also engaged a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time and dynamic entrainment processes.


Subject(s)
Music , Music/psychology , Brain/physiology , Emotions/physiology , Amygdala/physiology , Affect , Magnetic Resonance Imaging , Auditory Perception/physiology
2.
Proc Natl Acad Sci U S A ; 121(5): e2308859121, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38271338

ABSTRACT

Emotions, bodily sensations, and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1,035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.


Subject(s)
Music , Humans , Music/psychology , Sensation , Cross-Cultural Comparison , Acoustics , Emotions , Auditory Perception
3.
Proc Natl Acad Sci U S A ; 121(30): e2320378121, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39008675

ABSTRACT

The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework to assess how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, demonstrating the profound emotional impact of musical dissonance. Crucially, at the neuroscientific level and despite music being the sole manipulation, dissonance evoked responses in the primary visual cortex (V1). Functional/effective connectivity analysis showed a stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated the modulation of early visual processing via top-down feedback inputs from the AVS to V1. These V1 signal changes indicate the influence of high-level contextual representations associated with tonal dissonance on early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the significance of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, to elucidate the brain's sound-to-meaning interface and its distributed crossmodal effects on early visual encoding during naturalistic film viewing.


Subject(s)
Auditory Perception , Emotions , Magnetic Resonance Imaging , Music , Visual Perception , Humans , Music/psychology , Female , Male , Adult , Visual Perception/physiology , Auditory Perception/physiology , Emotions/physiology , Young Adult , Brain Mapping , Acoustic Stimulation , Visual Cortex/physiology , Visual Cortex/diagnostic imaging , Primary Visual Cortex/physiology , Photic Stimulation/methods
4.
Annu Rev Neurosci ; 41: 527-552, 2018 07 08.
Article in English | MEDLINE | ID: mdl-29986161

ABSTRACT

How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit of understanding the underlying neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early explorations of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Brain Mapping , Animals , Hearing , Humans , Music , Speech
5.
Proc Natl Acad Sci U S A ; 120(5): e2216146120, 2023 01 31.
Article in English | MEDLINE | ID: mdl-36693091

ABSTRACT

Some people, entirely untrained in music, can listen to a song and replicate it on a piano with unnerving accuracy. What enables some to "hear" music so much better than others? Long-standing research confirms that part of the answer is undoubtedly neurological and can be improved with training. However, are there structural, physical, or engineering attributes of the human hearing apparatus (i.e., the hair cells of the inner ear) that render one human innately superior to another in terms of the propensity to listen to music? In this work, we investigate a physics-based model of the electromechanics of the hair cells in the inner ear to understand why a person might be physiologically better poised to distinguish musical sounds. A key feature of the model is that we avoid a "black-box" systems-type approach. All parameters are well-defined physical quantities, including membrane thickness, bending modulus, electromechanical properties, and geometrical features, among others. Using the two-tone interference problem as a proxy for musical perception, our model allows us to establish a basis for exploring the effect of external factors such as medicine or environment. As an example of the insights we obtain, we conclude that a reduction in the bending modulus of the cell membranes (which, for instance, may be caused by the use of a certain class of analgesic drugs) or an increase in the flexoelectricity of the hair cell membrane can interfere with the perception of two-tone excitation.
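The two-tone interference problem can be illustrated numerically: superposing two pure tones of nearby frequency produces an envelope that beats at the difference frequency, which is the percept the hair-cell model must resolve. A minimal sketch, not the paper's electromechanical model; the function names and the 50 Hz search band are illustrative assumptions:

```python
import numpy as np

def two_tone(f1, f2, duration=1.0, rate=8000):
    """Superpose two pure tones; their envelope beats at |f1 - f2| Hz."""
    t = np.arange(0, duration, 1.0 / rate)
    return t, np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

def beat_frequency(signal, rate):
    """Estimate the beat rate from the spectrum of the rectified signal."""
    envelope = np.abs(signal)  # crude envelope via rectification
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), 1.0 / rate)
    low = (freqs > 0) & (freqs < 50)  # beating lives well below the carriers
    return freqs[low][np.argmax(spectrum[low])]
```

For 440 Hz and 444 Hz tones this recovers a beat near 4 Hz, the interference percept a listener would report.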


Subject(s)
Music , Speech Perception , Humans , Auditory Perception , Hearing , Physics , Speech Perception/physiology , Pitch Perception/physiology
6.
Proc Natl Acad Sci U S A ; 120(37): e2218593120, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37676911

ABSTRACT

Despite the variability of music across cultures, some types of human songs share acoustic characteristics. For example, dance songs tend to be loud and rhythmic, and lullabies tend to be quiet and melodious. Human perceptual sensitivity to the behavioral contexts of songs, based on these musical features, suggests that basic properties of music are mutually intelligible, independent of linguistic or cultural content. Whether these effects reflect universal interpretations of vocal music, however, is unclear because prior studies focus almost exclusively on English-speaking participants, a group that is not representative of humans. Here, we report shared intuitions concerning the behavioral contexts of unfamiliar songs produced in unfamiliar languages, in participants living in Internet-connected industrialized societies (n = 5,516 native speakers of 28 languages) or smaller-scale societies with limited access to global media (n = 116 native speakers of three non-English languages). Participants listened to songs randomly selected from a representative sample of human vocal music, originally used in four behavioral contexts, and rated the degree to which they believed the song was used for each context. Listeners in both industrialized and smaller-scale societies inferred the contexts of dance songs, lullabies, and healing songs, but not love songs. Within and across cohorts, inferences were mutually consistent. Further, increased linguistic or geographical proximity between listeners and singers only minimally increased the accuracy of the inferences. These results demonstrate that the behavioral contexts of three common forms of music are mutually intelligible cross-culturally and imply that musical diversity, shaped by cultural evolution, is nonetheless grounded in some universal perceptual phenomena.


Subject(s)
Cultural Evolution , Music , Humans , Language , Linguistics , Acoustics
7.
J Neurosci ; 44(15)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38423761

ABSTRACT

Music is a universal human attribute. The study of amusia, a neurologic music processing deficit, has increasingly elaborated our view of the neural organization of the musical brain. However, lesions causing amusia occur in multiple brain locations and often also cause aphasia, leaving the distinct neural networks for amusia unclear. Here, we utilized lesion network mapping to identify these networks. A systematic literature search was carried out to identify all published case reports of lesion-induced amusia. The reproducibility and specificity of the identified amusia network were then tested in an independent prospective cohort of 97 stroke patients (46 female and 51 male) with repeated structural brain imaging, specifically assessed for both music perception and language abilities. Lesion locations in the case reports were heterogeneous but connected to common brain regions, including bilateral temporoparietal and insular cortices, precentral gyrus, and cingulum. In the prospective cohort, lesions causing amusia mapped to a common brain network, centering on the right superior temporal cortex and clearly distinct from the network causally associated with aphasia. Lesion-induced longitudinal structural effects in the amusia circuit were confirmed as reductions of both gray and white matter volume, which correlated with the severity of amusia. We demonstrate that despite the heterogeneity of lesion locations disrupting music processing, there is a common brain network that is distinct from the language network. These results provide evidence for a distinct neural substrate of music processing, differentiating music-related functions from language and providing a testable target for noninvasive brain stimulation to treat amusia.

8.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38489785

ABSTRACT

Dance and music are well known to improve sensorimotor skills and cognitive functions. To reveal the underlying mechanism, previous studies focused on the plastic structural and functional effects of dance and music training on the brain. However, the differential effects of the two training types on the brain's structure-function relationship remain unclear. Thus, proficient dancers, musicians, and controls were recruited in this study. A graph signal processing framework was employed to quantify the region-level and network-level relationship between brain function and structure. The results showed increased coupling strength of the right ventromedial putamen in both the dance and music groups. Distinctly, enhanced coupling strength of the ventral attention network, increased coupling strength of the right inferior frontal gyrus opercular area, and increased functional connectivity of the coupled functional signal between the right and left middle frontal gyrus were found only in the dance group. In addition, the dance group showed enhanced coupled functional connectivity between the left inferior parietal lobule caudal area and the left superior parietal lobule intraparietal area compared with the music group. These results may illustrate the discrepant effects of dance and music training on the structure-function relationship of subcortical and cortical attention networks, with dance training appearing to have the greater impact on these networks.


Subject(s)
Music , Brain/diagnostic imaging , Brain Mapping , Parietal Lobe , Frontal Lobe , Magnetic Resonance Imaging/methods
9.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38679480

ABSTRACT

Existing neuroimaging studies on neural correlates of musical familiarity often employ a familiar vs. unfamiliar contrast analysis. This singular analytical approach reveals associations between explicit musical memory and musical familiarity. However, is the neural activity associated with musical familiarity solely related to explicit musical memory, or could it also be related to implicit musical memory? To address this, we presented 130 song excerpts of varying familiarity to 21 participants. While acquiring their brain activity using functional magnetic resonance imaging (fMRI), we asked the participants to rate the familiarity of each song on a five-point scale. To comprehensively analyze the neural correlates of musical familiarity, we examined it from four perspectives: the intensity of local neural activity, patterns of local neural activity, global neural activity patterns, and functional connectivity. The results from these four approaches were consistent and revealed that musical familiarity is related to the activity of both explicit and implicit musical memory networks. Our findings suggest that: (1) musical familiarity is also associated with implicit musical memory, and (2) there is a cooperative and competitive interaction between the two types of musical memory in the perception of music.


Subject(s)
Brain Mapping , Brain , Magnetic Resonance Imaging , Music , Recognition, Psychology , Humans , Music/psychology , Recognition, Psychology/physiology , Male , Female , Young Adult , Adult , Brain/physiology , Brain/diagnostic imaging , Brain Mapping/methods , Auditory Perception/physiology , Acoustic Stimulation/methods
10.
Annu Rev Psychol ; 75: 87-128, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-37738514

ABSTRACT

Music training is generally assumed to improve perceptual and cognitive abilities. Although correlational data highlight positive associations, experimental results are inconclusive, raising questions about causality. Does music training have far-transfer effects, or do preexisting factors determine who takes music lessons? All behavior reflects genetic and environmental influences, but differences in emphasis (nature versus nurture) have been a source of tension throughout the history of psychology. After reviewing the recent literature, we conclude that the evidence that music training causes nonmusical benefits is weak or nonexistent, and that researchers routinely overemphasize contributions from experience while neglecting those from nature. The literature is also largely exploratory rather than theory driven. It fails to explain mechanistically how music-training effects could occur and ignores evidence that far transfer is rare. Instead of focusing on elusive perceptual or cognitive benefits, we argue that it is more fruitful to examine the social-emotional effects of engaging with music, particularly in groups, and that music-based interventions may be effective mainly for clinical or atypical populations.


Subject(s)
Music , Humans , Cognition , Emotions
11.
Proc Natl Acad Sci U S A ; 119(4)2022 01 25.
Article in English | MEDLINE | ID: mdl-35064081

ABSTRACT

The scientific literature sometimes considers music an abstract stimulus, devoid of explicit meaning, and at other times considers it a universal language. Here, individuals in three geographically distinct locations spanning two cultures performed a highly unconstrained task: they provided free-response descriptions of stories they imagined while listening to instrumental music. Tools from natural language processing revealed that listeners provide highly similar stories to the same musical excerpts when they share an underlying culture, but when they do not, the generated stories show limited overlap. These results paint a more complex picture of music's power: music can generate remarkably similar stories in listeners' minds, but the degree to which these imagined narratives are shared depends on the degree to which culture is shared across listeners. Thus, music is neither an abstract stimulus nor a universal language but has semantic affordances shaped by culture, requiring more sustained attention from psychology.
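The cross-cultural comparison of free-response stories rests on quantifying how similar two texts are. The abstract does not specify the NLP pipeline, so as a stand-in, a bag-of-words cosine similarity sketches the core idea (real analyses would use embeddings or richer features; the function name is illustrative):

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words count vectors of two stories."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)  # shared-word overlap
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Stories imagined by listeners who share a culture would score higher on such a measure than stories from listeners who do not.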


Subject(s)
Auditory Perception , Culture , Imagination , Music , Narration , Humans , Semantics
12.
J Neurosci ; 43(15): 2794-2802, 2023 04 12.
Article in English | MEDLINE | ID: mdl-36914264

ABSTRACT

The ability to extract rhythmic structure is important for the development of language, music, and social communication. Although previous studies show that infants' brains entrain to the periodicities of auditory rhythms and even to different metrical interpretations (e.g., groups of two vs. three beats) of ambiguous rhythms, whether the premature brain tracks beat and meter frequencies has not been explored previously. We used high-resolution electroencephalography while premature infants (n = 19, 5 male; mean age, 32 ± 2.59 weeks gestational age) heard two auditory rhythms in their incubators. We observed selective enhancement of the neural response at both beat- and meter-related frequencies. Further, neural oscillations at the beat and duple (groups of two) meter were phase aligned with the envelope of the auditory rhythmic stimuli. Comparing the relative power at beat and meter frequencies across stimuli and frequency revealed evidence for selective enhancement of duple meter. This suggests that even at this early stage of development, neural mechanisms for processing auditory rhythms beyond simple sensory coding are present. Our results add to the few previous neuroimaging studies demonstrating discriminative auditory abilities of premature neural networks. Specifically, they demonstrate the early capacities of immature neural circuits and networks to code both simple beat and beat grouping (i.e., hierarchical meter) regularities of auditory sequences. Considering the importance of rhythm processing for acquiring language and music, our findings indicate that even before birth, the premature brain is already learning this important aspect of the auditory world in a sophisticated and abstract way.

SIGNIFICANCE STATEMENT

Processing auditory rhythm is of great neurodevelopmental importance. In an electroencephalography experiment in premature newborns, we found converging evidence that when presented with auditory rhythms, the premature brain encodes multiple periodicities corresponding to beat and beat grouping (meter) frequencies, and even selectively enhances the neural response to meter compared with beat, as in human adults. We also found that the phase of low-frequency neural oscillations aligns to the envelope of the auditory rhythms and that this phenomenon becomes less precise at lower frequencies. These findings demonstrate the initial capacities of the developing brain to code auditory rhythm and the importance of special care to the auditory environment of this vulnerable population during a highly dynamic period of neural development.
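Beat- and meter-frequency tracking of this kind is typically assessed by frequency tagging: reading the amplitude spectrum of the EEG at the stimulus-related frequencies and comparing it against neighboring control frequencies. A minimal sketch under the assumption of a single, long, evenly sampled signal (function name and the example frequencies are illustrative, not the study's values):

```python
import numpy as np

def frequency_tag_amplitudes(signal, rate, target_freqs):
    """Amplitude spectrum sampled at frequencies of interest."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)
    # pick the FFT bin closest to each target frequency
    return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in target_freqs}
```

Selective enhancement would show up as larger amplitude at beat (e.g., 2.4 Hz) and meter (e.g., 1.2 Hz) frequencies than at a control frequency unrelated to the rhythm.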


Subject(s)
Auditory Perception , Music , Infant, Newborn , Adult , Humans , Male , Infant , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain/physiology , Electroencephalography/methods , Hearing , Periodicity
13.
Neuroimage ; 291: 120582, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38521212

ABSTRACT

In the field of learning theory and practice, the superior efficacy of multisensory learning over uni-sensory learning is well accepted. However, the underlying neural mechanisms at the macro-level of the human brain remain largely unexplored. This study addresses this gap by providing novel empirical evidence and a theoretical framework for understanding the superiority of multisensory learning. Through a cognitive, behavioral, and electroencephalographic assessment of carefully controlled uni-sensory and multisensory training interventions, our study uncovers a fundamental distinction in their neuroplastic patterns. A multilayered network analysis of pre- and post-training EEG data allowed us to model connectivity within and across different frequency bands at the cortical level. Pre-training EEG analysis unveils a complex network of distributed sources communicating through cross-frequency coupling, while comparison of pre- and post-training EEG data demonstrates significant differences in the reorganizational patterns of uni-sensory and multisensory learning. Uni-sensory training primarily modifies cross-frequency coupling between lower and higher frequencies, whereas multisensory training induces changes within the beta band in a more focused network, implying the development of a unified representation of audiovisual stimuli. In combination with behavioral and cognitive findings, this suggests that multisensory learning benefits from an automatic top-down transfer of training, while uni-sensory training relies mainly on limited bottom-up generalization. Our findings offer a compelling theoretical framework for understanding the advantage of multisensory learning.
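Cross-frequency coupling of the kind analyzed here is often quantified with a mean-vector-length modulation index: bandpass for the low-frequency phase and the high-frequency amplitude, then measure how strongly amplitude is locked to phase. A sketch using SciPy; the band edges and filter order are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_amplitude_coupling(signal, rate, phase_band, amp_band):
    """Mean-vector-length estimate of phase-amplitude coupling."""
    def bandpass(x, band):
        b, a = butter(4, [band[0] / (rate / 2), band[1] / (rate / 2)],
                      btype="band")
        return filtfilt(b, a, x)  # zero-phase filtering
    phase = np.angle(hilbert(bandpass(signal, phase_band)))
    amplitude = np.abs(hilbert(bandpass(signal, amp_band)))
    # amplitude-weighted phase vectors; a long mean vector = strong coupling
    return np.abs(np.mean(amplitude * np.exp(1j * phase)))
```

A signal whose gamma-range amplitude waxes and wanes with theta phase yields a clearly larger index than one whose fast activity is constant.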


Subject(s)
Brain , Learning , Humans , Neuronal Plasticity , Auditory Perception , Visual Perception
14.
Rep Prog Phys ; 87(8)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-38996413

ABSTRACT

Quantum computing technology is developing at a fast pace, and its impact on the music industry is inevitable. This paper maps the emerging field of quantum computer music, which investigates and develops applications and methods for processing music using quantum computing technology. The paper begins by contextualising the field. It then discusses significant examples of the various approaches developed to date to leverage quantum computing to learn, process, and generate music. The methods discussed range from rendering music using data from physical quantum mechanical systems and quantum mechanical simulations to computational quantum algorithms for generating music, including quantum AI. The ambition to develop techniques for encoding audio quantum-mechanically, for building sound synthesisers and audio signal processing systems, is also discussed.
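As a toy illustration of the "render music from quantum measurement statistics" idea: measuring an equal superposition repeatedly yields a random bit stream whose outcomes can be mapped to notes. This is a classical simulation of the measurement statistics, not real quantum hardware, and the two-note mapping is a made-up example:

```python
import random

NOTE_FOR_OUTCOME = ["C4", "G4"]  # hypothetical mapping for outcomes |0> and |1>

def sample_superposition_melody(n_shots, seed=None):
    """Classically simulate repeated measurement of (|0> + |1>)/sqrt(2):
    each shot yields 0 or 1 with probability 1/2, mapped to a note."""
    rng = random.Random(seed)
    return [NOTE_FOR_OUTCOME[rng.random() < 0.5] for _ in range(n_shots)]
```

Over many shots the melody approaches an even mix of the two notes, mirroring the Born-rule probabilities of the superposition.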

15.
Eur J Neurosci ; 59(12): 3162-3183, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38626924

ABSTRACT

Musical engagement can be conceptualized through various activities, modes of listening and listener states. Recent research has reported that a state of focused engagement can be indexed by the inter-subject correlation (ISC) of audience responses to a shared naturalistic stimulus. While statistically significant ISC has been reported during music listening, we lack insight into the temporal dynamics of engagement over the course of musical works-such as those composed in the Western classical style-which involve the formulation of expectations that are realized or derailed at subsequent points of arrival. Here, we use the ISC of electroencephalographic (EEG) and continuous behavioural (CB) responses to investigate the time-varying dynamics of engagement with functional tonal music. From a sample of adult musicians who listened to a complete cello concerto movement, we found that ISC varied throughout the excerpt for both measures. In particular, significant EEG ISC was observed during periods of musical tension that built to climactic highpoints, while significant CB ISC corresponded more to declarative entrances and points of arrival. Moreover, we found that a control stimulus retaining envelope characteristics of the intact music, but little other temporal structure, also elicited significantly correlated EEG and CB responses, though to lesser extents than the original version. In sum, these findings shed light on the temporal dynamics of engagement during music listening and clarify specific aspects of musical engagement that may be indexed by each measure.
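ISC itself is straightforward to compute: correlate each subject's response time course with every other subject's and average over pairs. A minimal sketch for one time window, assuming a subjects-by-samples array (the windowed, permutation-tested versions the study would actually use are omitted):

```python
import numpy as np

def inter_subject_correlation(responses):
    """Mean pairwise Pearson correlation across subjects.

    responses: array of shape (n_subjects, n_samples).
    """
    z = responses - responses.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)           # z-score each subject
    n, n_samples = responses.shape
    corr = z @ z.T / n_samples                  # full correlation matrix
    # average the off-diagonal entries (all subject pairs)
    return (corr.sum() - np.trace(corr)) / (n * (n - 1))
```

A shared stimulus-driven signal pushes ISC up; uncorrelated noise leaves it near zero, which is why ISC can index collective engagement.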


Subject(s)
Auditory Perception , Electroencephalography , Music , Humans , Electroencephalography/methods , Male , Female , Adult , Auditory Perception/physiology , Young Adult , Acoustic Stimulation/methods , Brain/physiology
16.
Eur J Neurosci ; 59(1): 101-118, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37724707

ABSTRACT

The pleasurable urge to move to music (PLUMM) activates motor and reward areas of the brain and is thought to be driven by predictive processes. Dopamine in motor and limbic networks is implicated in beat-based timing and music-induced pleasure, suggesting a central role of basal ganglia (BG) dopaminergic systems in PLUMM. This study tested this hypothesis by comparing PLUMM in participants with Parkinson's disease (PD), age-matched controls, and young controls. Participants listened to musical sequences with varying rhythmic and harmonic complexity (low, medium and high), and rated their experienced pleasure and urge to move to the rhythm. In line with previous results, healthy younger participants showed an inverted U-shaped relationship between rhythmic complexity and ratings, with preference for medium complexity rhythms, while age-matched controls showed a similar, but weaker, inverted U-shaped response. Conversely, PD showed a significantly flattened response for both the urge to move and pleasure. Crucially, this flattened response could not be attributed to differences in rhythm discrimination and did not reflect an overall decrease in ratings. For harmonic complexity, PD showed a negative linear pattern for both the urge to move and pleasure while healthy age-matched controls showed the same pattern for pleasure and an inverted U for the urge to move. This contrasts with the pattern observed in young healthy controls in previous studies, suggesting that both healthy aging and PD also influence affective responses to harmonic complexity. Together, these results support the role of dopamine within cortico-striatal circuits in the predictive processes that form the link between the perceptual processing of rhythmic patterns and the affective and motor responses to rhythmic music.


Subject(s)
Music , Parkinson Disease , Humans , Parkinson Disease/psychology , Music/psychology , Dopamine , Auditory Perception/physiology , Brain
17.
Eur J Neurosci ; 59(8): 2059-2074, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38303522

ABSTRACT

Linear models are becoming increasingly popular for investigating brain activity in response to continuous and naturalistic stimuli. In the context of auditory perception, these predictive models can be 'encoding', when stimulus features are used to predict brain activity, or 'decoding', when neural features are used to reconstruct the audio stimuli. These linear models are a central component of some brain-computer interfaces that can be integrated into hearing assistive devices (e.g., hearing aids). Such advanced neurotechnologies have been widely investigated when listening to speech stimuli but rarely when listening to music. Recent attempts at neural tracking of music show that the reconstruction performances are reduced compared with speech decoding. The present study investigates the performance of stimulus reconstruction and electroencephalogram prediction (decoding and encoding models) based on the cortical entrainment to temporal variations of the audio stimuli for both music and speech listening. Three hypotheses that may explain differences between speech and music stimulus reconstruction were tested to assess the importance of speech-specific acoustic and linguistic factors. While the results obtained with encoding models suggest different underlying cortical processing between speech and music listening, no differences were found in terms of reconstruction of the stimuli or the cortical data. The results suggest that envelope-based linear modelling can be used to study both speech and music listening, despite the differences in the underlying cortical mechanisms.
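A backward (decoding) model of this kind can be sketched as ridge regression from time-lagged EEG channels onto the stimulus envelope. This is a simplified stand-in for the study's pipeline; the lag count, regularization strength, and function names are illustrative assumptions:

```python
import numpy as np

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of each channel: (samples, channels * lags)."""
    n_samples, n_channels = eeg.shape
    cols = [np.roll(eeg[:, c], lag)
            for lag in range(n_lags) for c in range(n_channels)]
    X = np.column_stack(cols)
    X[:n_lags] = 0.0  # discard samples that wrapped around the edge
    return X

def ridge_decoder(X, y, alpha=1.0):
    """Closed-form ridge regression: weights mapping lagged EEG to envelope."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)
```

Reconstruction quality is then typically summarized as the Pearson correlation between the decoded envelope `X @ w` and the true envelope, ideally on held-out data.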


Subject(s)
Music , Speech Perception , Auditory Perception/physiology , Speech , Speech Perception/physiology , Electroencephalography , Acoustic Stimulation
18.
Am Nat ; 204(2): 181-190, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39008842

ABSTRACT

Where dramatic sexual displays are involved in attracting a mate, individuals can enhance their performances by manipulating their physical environment. Typically, individuals alter their environment either in preparation for a performance, by creating a "stage", or during the display itself, by using discrete objects as "props". We examined an unusual case of performative manipulation of an entire stage by male Albert's lyrebirds (Menura alberti) during their complex song and dance displays. We found that males from throughout the species' range shake the entangled forest vegetation of their display platforms, creating a highly conspicuous and stereotypical movement external to their bodies. This "stage shaking" is performed in two different rhythms, with the second rhythm an isochronous beat that matches the beat of the coinciding vocalizations. Our results provide evidence that stage shaking is an integral, and thus likely functional, component of male Albert's lyrebird sexual displays, highlighting an intriguing but poorly understood facet of complex communication.


Subject(s)
Vocalization, Animal , Male , Animals , Sexual Behavior, Animal , Environment , Passeriformes/physiology , Animal Communication
19.
Hum Brain Mapp ; 45(10): e26724, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39001584

ABSTRACT

Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been at the core of an extensive corpus of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, and not to the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as the comparison stimulus, reducing the dissimilarities between the two stimuli as much as possible and separating language perception from vocal musical perception. Forty-five newborns were scanned (10 full-term infants and 35 preterm infants at term-equivalent age; mean gestational age at test = 40.17 weeks, SD = 0.44) using functional magnetic resonance imaging while listening to five melodies played by a musical instrument (flute) or sung by a female voice. To examine dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during the fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activated with the sensorimotor and salience networks, while during the instrumental condition, it co-activated with the visual cortex and the superior frontal cortex. These results show that the vocal stimulus elicits sensorimotor aspects of auditory perception and is processed as the more salient stimulus, while the instrumental condition engages higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus. Finally, this study adds knowledge on the dynamic brain connectivity underlying newborns' capability for early and specialized auditory processing, highlighting the relevance of dynamic approaches to studying brain function in newborn populations.


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Music , Humans , Female , Male , Auditory Perception/physiology , Infant, Newborn , Singing/physiology , Infant, Premature/physiology , Brain Mapping , Acoustic Stimulation , Brain/physiology , Brain/diagnostic imaging , Voice/physiology
20.
Hum Brain Mapp ; 45(7): e26705, 2024 May.
Article in English | MEDLINE | ID: mdl-38716698

ABSTRACT

The global ageing of populations calls for effective, ecologically valid methods to support brain health across adult life. Previous evidence suggests that music can promote white matter (WM) microstructure and grey matter (GM) volume while supporting auditory and cognitive functioning and emotional well-being, as well as counteracting age-related cognitive decline. Adding a social component to music training, choir singing is a popular leisure activity among older adults, but a systematic account of its potential to support healthy brain structure, especially with regard to ageing, is currently missing. The present study used quantitative anisotropy (QA)-based diffusion MRI connectometry and voxel-based morphometry to explore the relationship between lifetime choir singing experience and brain structure at the whole-brain level. Cross-sectional multiple regression analyses were carried out in a large, balanced sample (N = 95; age range 21-88) of healthy adults with varying levels of choir singing experience, both across the whole age range and within subgroups defined by age (young, middle-aged, and older adults). Independent of age, choir singing experience was associated with extensive increases in WM QA in commissural, association, and projection tracts across the brain. Corroborating previous work, these tracts overlapped with language and limbic networks. Enhanced corpus callosum microstructure was associated with choir singing experience across all subgroups. In addition, choir singing experience was selectively associated with enhanced QA in the fornix in older participants. No associations between GM volume and choir singing were found. The present study offers the first systematic account of the relationship between amateur-level choir singing and brain structure. While no evidence for counteracting GM atrophy was found, the present evidence of enhanced structural connectivity coheres well with age-typical structural changes.
Corroborating previous behavioural studies, the present results suggest that regular choir singing holds great promise for supporting brain health across the adult life span.


Subject(s)
Singing , White Matter , Humans , Adult , Male , Middle Aged , Aged , Female , Young Adult , Singing/physiology , Aged, 80 and over , White Matter/diagnostic imaging , White Matter/physiology , White Matter/anatomy & histology , Aging/physiology , Cross-Sectional Studies , Brain/diagnostic imaging , Brain/physiology , Brain/anatomy & histology , Gray Matter/diagnostic imaging , Gray Matter/anatomy & histology , Gray Matter/physiology , Diffusion Magnetic Resonance Imaging , Diffusion Tensor Imaging