Results 1 - 20 of 17,719
1.
Nat Rev Neurosci ; 23(5): 287-305, 2022 05.
Article in English | MEDLINE | ID: mdl-35352057

ABSTRACT

Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
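The predictive-coding framing above can be illustrated with a toy calculation. The sketch below (an invented melody and a simple bigram model, not the predictive coding of music model itself) scores each note transition by its surprisal; unexpected continuations yield the large prediction errors that, on this account, drive music processing.

```python
# Toy illustration (not the authors' model): quantify how "surprising" each note in a
# melody is under a simple first-order Markov model, in the spirit of prediction-based
# accounts of music perception. The melody and the bigram statistics are made up.
from collections import defaultdict
import math

corpus = ["C", "D", "E", "C", "C", "D", "E", "C", "E", "F", "G", "E", "F", "G"]

# Estimate bigram transition counts from the toy corpus.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def transition_prob(prev, nxt, alpha=1.0):
    """Add-alpha smoothed P(next | prev) under the toy bigram model."""
    vocab = sorted(set(corpus))
    total = sum(counts[prev].values()) + alpha * len(vocab)
    return (counts[prev][nxt] + alpha) / total

melody = ["C", "D", "E", "C", "G"]          # the final G is unexpected given the corpus
for prev, nxt in zip(melody, melody[1:]):
    surprisal = -math.log2(transition_prob(prev, nxt))
    print(f"{prev} -> {nxt}: surprisal = {surprisal:.2f} bits")
```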


Subject(s)
Music, Auditory Perception, Brain, Emotions, Humans, Learning, Music/psychology
2.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article in English | MEDLINE | ID: mdl-38805517

ABSTRACT

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle was used consistently by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgments, regardless of musical sophistication. The data suggest that the auditory system can rely on an acoustic property as low-level as AM to distinguish music from speech, a simple principle that invites both neurophysiological and evolutionary experiments and speculation.
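As a rough illustration of the noise-probing idea, the sketch below synthesizes amplitude-modulated noise with a chosen peak AM rate and adjustable regularity. All parameter values (AM rates, duration, sampling rate, and the jitter-based notion of regularity) are assumptions for illustration, not the stimuli used in the study.

```python
# Minimal sketch (assumed parameters, not the authors' exact stimuli): synthesize a
# noise carrier whose amplitude envelope is modulated at a chosen peak AM rate.
# In the framing above, lower AM rates are more music-like and higher rates more speech-like.
import numpy as np

def am_noise(peak_am_hz, duration_s=2.0, fs=16000, regularity=1.0, seed=0):
    """White-noise carrier with a (more or less regular) AM envelope.

    regularity=1.0 gives a strictly periodic envelope; lower values jitter the
    modulator's instantaneous frequency (a crude stand-in for AM regularity).
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    jitter = (1.0 - regularity) * rng.standard_normal(t.size).cumsum() / fs
    phase = 2 * np.pi * (peak_am_hz * t + jitter)
    envelope = 0.5 * (1.0 + np.sin(phase))          # between 0 and 1
    signal = carrier * envelope
    return signal / np.max(np.abs(signal))

speech_like = am_noise(peak_am_hz=4.5)   # faster AM, tends to be judged as speech
music_like = am_noise(peak_am_hz=1.5)    # slower AM, tends to be judged as music
```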


Subject(s)
Acoustic Stimulation, Auditory Perception, Music, Speech Perception, Humans, Male, Female, Adult, Auditory Perception/physiology, Acoustic Stimulation/methods, Speech Perception/physiology, Young Adult, Speech/physiology, Adolescent
3.
PLoS Biol ; 22(8): e3002732, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39133721

ABSTRACT

Music can evoke pleasurable and rewarding experiences. Past studies that examined task-related brain activity revealed individual differences in musical reward sensitivity traits and linked them to interactions between the auditory and reward systems. However, state-dependent fluctuations in spontaneous neural activity in relation to music-driven rewarding experiences have not been studied. Here, we used functional MRI to examine whether the coupling of auditory-reward networks during a silent period immediately before music listening can predict the degree of musical rewarding experience of human participants (N = 49). We used machine learning models and showed that the functional connectivity between auditory and reward networks, but not others, could robustly predict subjective, physiological, and neurobiological aspects of the strong musical reward of chills. Specifically, the right auditory cortex-striatum/orbitofrontal connections predicted the reported duration of chills and the activation level of the nucleus accumbens and insula, whereas the auditory-amygdala connection was associated with psychophysiological arousal. Furthermore, the predictive model derived from the first sample of individuals generalized to an independent dataset using different music samples. The generalization was successful only for state-like, pre-listening functional connectivity but not for stable, intrinsic functional connectivity. The current study reveals the critical role of sensory-reward connectivity in the pre-task brain state in modulating the subsequent rewarding experience.
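The prediction logic described above can be sketched as follows: correlate pre-listening time series between auditory and reward regions of interest to obtain connectivity features, then fit a cross-validated linear model to a reward measure. The data, ROI counts, and model choice below are placeholders, not the study's pipeline.

```python
# Hedged sketch of the analysis logic (simulated data, not the study's pipeline):
# 1) compute pre-listening functional connectivity between auditory and reward ROIs,
# 2) use those connectivity values as features to predict a reward measure
#    (e.g., reported chills duration) with a cross-validated linear model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 49, 200
aud_rois, rew_rois = 4, 3                        # hypothetical ROI counts

X = np.zeros((n_subjects, aud_rois * rew_rois))  # auditory-reward FC features
for s in range(n_subjects):
    aud = rng.standard_normal((n_timepoints, aud_rois))
    rew = rng.standard_normal((n_timepoints, rew_rois))
    fc = np.corrcoef(aud.T, rew.T)[:aud_rois, aud_rois:]  # cross-block correlations
    X[s] = fc.ravel()

# Simulated target: chills duration linearly related to the FC features plus noise.
chills_duration = X @ rng.standard_normal(X.shape[1]) + rng.standard_normal(n_subjects)

scores = cross_val_score(Ridge(alpha=1.0), X, chills_duration, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```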


Subject(s)
Auditory Perception, Magnetic Resonance Imaging, Music, Pleasure, Reward, Humans, Music/psychology, Male, Female, Pleasure/physiology, Adult, Auditory Perception/physiology, Young Adult, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging, Brain Mapping/methods, Brain/physiology, Brain/diagnostic imaging, Acoustic Stimulation, Nerve Net/physiology, Nerve Net/diagnostic imaging, Machine Learning
4.
Proc Natl Acad Sci U S A ; 121(6): e2306549121, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38300861

ABSTRACT

Understanding and predicting the emergence and evolution of cultural tastes manifested in consumption patterns is of central interest to social scientists, analysts of culture, and purveyors of content. Prior research suggests that taste preferences relate to personality traits, values, shifts in mood, and immigration destination. Understanding everyday patterns of listening and the function music plays in life has remained elusive, however, despite speculation that musical nostalgia may compensate for local disruption. Using more than one hundred million streams of four million songs by tens of thousands of international listeners from a global music service, we show that breaches in personal routine are systematically associated with personal musical exploration. As people visited new cities and countries, their preferences diversified, converging toward their travel destinations. As people experienced the very different disruptions associated with COVID-19 lockdowns, their preferences diversified further. Personal explorations did not tend to veer toward the global listening average, but away from it, toward distinctive regional musical content. Exposure to novel music explored during periods of routine disruption showed a persistent influence on listeners' future consumption patterns. Across all of these settings, musical preference reflected rather than compensated for life's surprises, leaving a lasting legacy on tastes. We explore the relationship between these findings and global patterns of behavior and cultural consumption.


Subject(s)
Music, Humans, Affect, Forecasting
5.
Proc Natl Acad Sci U S A ; 121(5): e2308859121, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38271338

ABSTRACT

Emotions, bodily sensations, and movement are integral parts of musical experiences. Yet, it remains unknown (i) whether emotional connotations and structural features of music elicit discrete bodily sensations and (ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1,035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.
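A minimal sketch of the cross-cultural clustering step might look like the following: cluster songs by their group-average bodily sensation maps in each cohort and compare the partitions. The body-region count, cluster number, and data are invented for illustration and do not reproduce the study's analysis.

```python
# Illustrative sketch only (random data, invented region count): cluster songs by their
# group-average bodily sensation maps within each culture and compare the resulting
# partitions, loosely mirroring the cross-cultural consistency analysis described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(1)
n_songs, n_body_regions = 20, 50                     # hypothetical numbers

bsm_west = rng.random((n_songs, n_body_regions))     # mean map per song, Western cohort
bsm_east = bsm_west + 0.1 * rng.standard_normal(bsm_west.shape)  # similar maps, East Asian cohort

labels_west = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(bsm_west)
labels_east = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(bsm_east)

print("cluster agreement across cohorts (ARI):", adjusted_rand_score(labels_west, labels_east))
```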


Subject(s)
Music, Humans, Music/psychology, Sensation, Cross-Cultural Comparison, Acoustics, Emotions, Auditory Perception
6.
Proc Natl Acad Sci U S A ; 121(10): e2316306121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38408255

ABSTRACT

Music is powerful in conveying emotions and triggering affective brain mechanisms. However, affective brain responses in previous studies have been rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, by contrast, can be dynamic and adaptive, and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a setup for studying emotional responses to live music in a closed-loop neurofeedback paradigm. This setup linked live performances by musicians to neural processing in listeners, with the listeners' amygdala activity displayed to the musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time to provide the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared to recorded music. This included a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time, dynamic entrainment processes.
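The closed-loop idea can be sketched as a simple acquisition loop: each incoming volume is reduced to an amygdala ROI value, converted to a feedback signal, and handed to whatever displays it to the performers. Everything below (mask, scaling, baseline window, simulated volumes) is a placeholder, not the authors' implementation.

```python
# Bare-bones sketch of a closed-loop neurofeedback cycle (simulated data; the ROI
# definition and scaling are assumptions): each incoming fMRI volume is reduced to an
# amygdala ROI mean, converted to a percent-signal-change value, and sent to the display.
import numpy as np

rng = np.random.default_rng(0)
roi_mask = rng.random((64, 64, 30)) > 0.999           # stand-in amygdala mask
baseline = []

def feedback_value(volume, baseline_window=10):
    """Percent signal change of the ROI mean relative to a running baseline."""
    roi_mean = volume[roi_mask].mean()
    baseline.append(roi_mean)
    ref = np.mean(baseline[-baseline_window:])
    return 100.0 * (roi_mean - ref) / ref

def send_to_display(value):
    print(f"feedback to performers: {value:+.2f} % signal change")

for tr in range(5):                                   # one iteration per acquired volume
    volume = 1000 + rng.standard_normal((64, 64, 30)) # simulated EPI volume
    send_to_display(feedback_value(volume))
```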


Subject(s)
Music, Music/psychology, Brain/physiology, Emotions/physiology, Amygdala/physiology, Affect, Magnetic Resonance Imaging, Auditory Perception/physiology
7.
Proc Natl Acad Sci U S A ; 121(36): e2319459121, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39186645

ABSTRACT

The perception of musical phrase boundaries is a critical aspect of human musical experience: it allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using general linear modeling, independent component analysis, and Granger causality, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease within the fronto-temporal-parietal network during and immediately following the boundaries. Notably, responses were modulated by musicianship. The findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.
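One ingredient of the analysis named above, Granger causality between two regional time courses, can be sketched on simulated data as follows; the region labels and lag settings are illustrative assumptions, not the study's specification.

```python
# Hedged sketch of Granger causality between two region time courses (simulated data;
# ROI names and lag choice are assumptions, not the study's settings).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 300
auditory = rng.standard_normal(n)
frontal = np.zeros(n)
for t in range(2, n):   # frontal signal partly driven by the lagged auditory signal
    frontal[t] = 0.5 * frontal[t - 1] + 0.4 * auditory[t - 2] + rng.standard_normal()

# Test whether the auditory series (2nd column) Granger-causes the frontal series (1st column).
data = np.column_stack([frontal, auditory])
results = grangercausalitytests(data, maxlag=3)
p_value = results[2][0]["ssr_ftest"][1]     # p-value of the F-test at lag 2
print("p(auditory -> frontal, lag 2):", p_value)
```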


Subject(s)
Auditory Perception, Brain, Magnetic Resonance Imaging, Music, Humans, Music/psychology, Auditory Perception/physiology, Male, Adult, Female, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods, Young Adult, Acoustic Stimulation
8.
Proc Natl Acad Sci U S A ; 121(30): e2320378121, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39008675

ABSTRACT

The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework to assess how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, demonstrating the profound emotional impact of musical dissonance. Crucially, at the neuroscientific level, and despite music being the sole manipulation, dissonance evoked responses in the primary visual cortex (V1). Functional/effective connectivity analysis showed stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated the modulation of early visual processing via top-down feedback from the AVS to V1. These V1 signal changes indicate the influence of high-level contextual representations associated with tonal dissonance on early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the significance of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, to elucidate the brain's sound-to-meaning interface and its distributed crossmodal effects on early visual encoding during naturalistic film viewing.
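A generic way to test condition-dependent coupling of the kind described above is a psychophysiological interaction (PPI) regression; the sketch below uses simulated time series and stands in for, rather than reproduces, the study's functional/effective connectivity analysis.

```python
# Illustrative PPI-style regression on simulated data: does coupling between an auditory
# ventral stream (AVS) seed and V1 increase when the music is dissonant? This is a
# generic stand-in, not the study's actual connectivity model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
dissonance = (np.arange(n) // 50) % 2            # boxcar: alternating consonant/dissonant blocks
avs_seed = rng.standard_normal(n)
# Simulate V1: coupled to the seed more strongly during dissonant blocks.
v1 = 0.2 * avs_seed + 0.5 * dissonance * avs_seed + rng.standard_normal(n)

# Design: constant, seed time course, condition, and their interaction (the PPI term).
X = sm.add_constant(np.column_stack([avs_seed, dissonance, dissonance * avs_seed]))
fit = sm.OLS(v1, X).fit()
print("PPI (interaction) beta:", fit.params[3], "p =", fit.pvalues[3])
```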


Subject(s)
Auditory Perception, Emotions, Magnetic Resonance Imaging, Music, Visual Perception, Humans, Music/psychology, Female, Male, Adult, Visual Perception/physiology, Auditory Perception/physiology, Emotions/physiology, Young Adult, Brain Mapping, Acoustic Stimulation, Visual Cortex/physiology, Visual Cortex/diagnostic imaging, Primary Visual Cortex/physiology, Photic Stimulation/methods
9.
Annu Rev Neurosci ; 41: 527-552, 2018 07 08.
Article in English | MEDLINE | ID: mdl-29986161

ABSTRACT

How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit to understand related neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early exploration of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.


Subject(s)
Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Brain Mapping, Animals, Hearing, Humans, Music, Speech
10.
PLoS Biol ; 21(8): e3002176, 2023 08.
Article in English | MEDLINE | ID: mdl-37582062

ABSTRACT

Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior-posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling on short datasets acquired in single patients, paving the way for adding musical elements to brain-computer interface (BCI) applications.
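The stimulus-reconstruction logic can be sketched as a ridge-regularized linear decoder that predicts the song's acoustic envelope from time-lagged neural features; the simulated data, lag count, and regularization below are assumptions, not the study's model.

```python
# Minimal stimulus-reconstruction sketch (simulated data, assumed lags/regularization):
# predict the song's envelope at each time point from time-lagged iEEG features with a
# ridge-regularized linear decoder, in the spirit of the approach described above.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_times, n_electrodes, n_lags = 3000, 32, 10

neural = rng.standard_normal((n_times, n_electrodes))
true_weights = rng.standard_normal((n_electrodes, n_lags))
envelope = np.zeros(n_times)
for lag in range(n_lags):                          # envelope depends on lagged neural activity
    envelope[lag:] += np.roll(neural, lag, axis=0)[lag:] @ true_weights[:, lag] / n_lags
envelope += 0.5 * rng.standard_normal(n_times)

# Lagged design matrix: each row stacks the previous n_lags samples of every electrode.
X = np.column_stack([np.roll(neural, lag, axis=0) for lag in range(n_lags)])[n_lags:]
y = envelope[n_lags:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)
decoder = Ridge(alpha=10.0).fit(X_tr, y_tr)
print("held-out reconstruction correlation:", np.corrcoef(decoder.predict(X_te), y_te)[0, 1])
```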


Subject(s)
Auditory Cortex, Music, Humans, Auditory Cortex/physiology, Brain Mapping, Auditory Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation
11.
Proc Natl Acad Sci U S A ; 120(37): e2218593120, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37676911

ABSTRACT

Despite the variability of music across cultures, some types of human songs share acoustic characteristics. For example, dance songs tend to be loud and rhythmic, and lullabies tend to be quiet and melodious. Human perceptual sensitivity to the behavioral contexts of songs, based on these musical features, suggests that basic properties of music are mutually intelligible, independent of linguistic or cultural content. Whether these effects reflect universal interpretations of vocal music, however, is unclear because prior studies focus almost exclusively on English-speaking participants, a group that is not representative of humans. Here, we report shared intuitions concerning the behavioral contexts of unfamiliar songs produced in unfamiliar languages, in participants living in Internet-connected industrialized societies (n = 5,516 native speakers of 28 languages) or smaller-scale societies with limited access to global media (n = 116 native speakers of three non-English languages). Participants listened to songs randomly selected from a representative sample of human vocal music, originally used in four behavioral contexts, and rated the degree to which they believed the song was used for each context. Listeners in both industrialized and smaller-scale societies inferred the contexts of dance songs, lullabies, and healing songs, but not love songs. Within and across cohorts, inferences were mutually consistent. Further, increased linguistic or geographical proximity between listeners and singers only minimally increased the accuracy of the inferences. These results demonstrate that the behavioral contexts of three common forms of music are mutually intelligible cross-culturally and imply that musical diversity, shaped by cultural evolution, is nonetheless grounded in some universal perceptual phenomena.


Subject(s)
Cultural Evolution, Music, Humans, Language, Linguistics, Acoustics
12.
Proc Natl Acad Sci U S A ; 120(5): e2216146120, 2023 01 31.
Article in English | MEDLINE | ID: mdl-36693091

ABSTRACT

Some people, entirely untrained in music, can listen to a song and replicate it on a piano with unnerving accuracy. What enables some to "hear" music so much better than others? Long-standing research confirms that part of the answer is undoubtedly neurological and can be improved with training. However, are there structural, physical, or engineering attributes of the human hearing apparatus (i.e., the hair cells of the inner ear) that render one human innately superior to another in terms of the propensity to listen to music? In this work, we investigate a physics-based model of the electromechanics of the hair cells in the inner ear to understand why a person might be physiologically better poised to distinguish musical sounds. A key feature of the model is that we avoid a "black-box" systems-type approach. All parameters are well-defined physical quantities, including membrane thickness, bending modulus, electromechanical properties, and geometrical features, among others. Using the two-tone interference problem as a proxy for musical perception, our model allows us to establish the basis for exploring the effect of external factors such as medication or environment. As an example of the insights we obtain, we conclude that a reduction in the bending modulus of the cell membranes (which may, for instance, be caused by the use of a certain class of analgesic drugs) or an increase in the flexoelectricity of the hair-cell membrane can interfere with the perception of two-tone excitation.
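The two-tone interference problem can be illustrated with a toy calculation: passing a two-tone stimulus through a compressive nonlinearity creates combination components such as 2*f1 - f2. The cubic nonlinearity below is a generic stand-in, not the paper's electromechanical hair-cell model.

```python
# Toy illustration (generic nonlinearity, not the paper's model): a two-tone stimulus
# passed through a compressive nonlinearity produces combination components such as
# 2*f1 - f2, the kind of interaction at stake in two-tone interference.
import numpy as np

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
f1, f2 = 1000.0, 1200.0
stimulus = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

response = stimulus - 0.1 * stimulus**3           # crude cubic nonlinearity

spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
cubic_distortion = 2 * f1 - f2                    # expected component at 800 Hz
idx = np.argmin(np.abs(freqs - cubic_distortion))
print(f"energy at {cubic_distortion:.0f} Hz (2*f1 - f2): {spectrum[idx]:.1f}")
```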


Subject(s)
Music, Speech Perception, Humans, Auditory Perception, Hearing, Physics, Speech Perception/physiology, Pitch Perception/physiology
13.
J Neurosci ; 44(30)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-38926087

ABSTRACT

Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
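The slow power-modulation signature can be sketched as follows: band-pass the EEG, take the Hilbert power envelope, and inspect its spectrum near the phrase rate. The simulated data, frequency band, and phrase rate below are illustrative assumptions, not the study's parameters.

```python
# Hedged sketch (simulated EEG; filter band and phrase rate are illustrative): extract a
# band-limited power envelope, then look for slow (~0.1 Hz) modulation of that power at
# the musical phrase rate, analogous to the phrase-tracking signature described above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0
t = np.arange(0, 120, 1 / fs)                    # 2 minutes of simulated data
phrase_rate = 0.125                              # one phrase every 8 s (assumed)

# Simulate EEG whose 8-12 Hz power waxes and wanes at the phrase rate.
rng = np.random.default_rng(0)
slow_mod = 1.0 + 0.5 * np.sin(2 * np.pi * phrase_rate * t)
eeg = np.sin(2 * np.pi * 10.0 * t) * slow_mod + 0.5 * rng.standard_normal(t.size)

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
power_env = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2    # band-limited power envelope

spectrum = np.abs(np.fft.rfft(power_env - power_env.mean()))
freqs = np.fft.rfftfreq(power_env.size, 1 / fs)
print("peak of the power modulation spectrum at", freqs[np.argmax(spectrum)], "Hz")
```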


Subject(s)
Auditory Perception, Electroencephalography, Music, Humans, Female, Male, Electroencephalography/methods, Adult, Auditory Perception/physiology, Young Adult, Acoustic Stimulation/methods, Brain/physiology
14.
J Neurosci ; 44(15)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38423761

ABSTRACT

Music is a universal human attribute. The study of amusia, a neurologic music processing deficit, has increasingly elaborated our view on the neural organization of the musical brain. However, lesions causing amusia occur in multiple brain locations and often also cause aphasia, leaving the distinct neural networks for amusia unclear. Here, we utilized lesion network mapping to identify these networks. A systematic literature search was carried out to identify all published case reports of lesion-induced amusia. The reproducibility and specificity of the identified amusia network were then tested in an independent prospective cohort of 97 stroke patients (46 female and 51 male) with repeated structural brain imaging, specifically assessed for both music perception and language abilities. Lesion locations in the case reports were heterogeneous but connected to common brain regions, including bilateral temporoparietal and insular cortices, precentral gyrus, and cingulum. In the prospective cohort, lesions causing amusia mapped to a common brain network, centering on the right superior temporal cortex and clearly distinct from the network causally associated with aphasia. Lesion-induced longitudinal structural effects in the amusia circuit were confirmed as reduction of both gray and white matter volume, which correlated with the severity of amusia. We demonstrate that despite the heterogeneity of lesion locations disrupting music processing, there is a common brain network that is distinct from the language network. These results provide evidence for the distinct neural substrate of music processing, differentiating music-related functions from language, providing a testable target for noninvasive brain stimulation to treat amusia.
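The core overlap logic of lesion network mapping can be sketched on toy data: derive a connectivity map for each lesion (in practice by seeding the lesion in a normative connectome), threshold it, and count how many cases implicate each voxel. Thresholds and data below are arbitrary placeholders, not the study's parameters.

```python
# Schematic of the lesion-network-mapping logic on toy data (random "connectivity maps";
# thresholds are assumptions): overlap the thresholded maps across cases to find regions
# functionally connected to most amusia-causing lesions.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_voxels = 20, 10000

overlap = np.zeros(n_voxels)
for case in range(n_cases):
    # In practice this map comes from seeding the lesion in a normative functional connectome.
    connectivity_map = rng.standard_normal(n_voxels)
    overlap += (np.abs(connectivity_map) > 2.0).astype(float)   # arbitrary |t| > 2 threshold

network_mask = overlap >= 0.9 * n_cases       # voxels connected to >= 90% of lesions
print("voxels in the common network:", int(network_mask.sum()))
```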


Subject(s)
Nerve Net, Humans, Female, Male, Middle Aged, Aged, Nerve Net/diagnostic imaging, Nerve Net/pathology, Nerve Net/physiopathology, Music, Auditory Perceptual Disorders/etiology, Auditory Perceptual Disorders/physiopathology, Brain/pathology, Brain/diagnostic imaging, Magnetic Resonance Imaging, Brain Mapping, Adult, Prospective Studies, Stroke/complications, Stroke/pathology, Stroke/diagnostic imaging
16.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38679480

ABSTRACT

Existing neuroimaging studies on the neural correlates of musical familiarity often employ a familiar vs. unfamiliar contrast analysis. This single analytical approach reveals associations between explicit musical memory and musical familiarity. However, is the neural activity associated with musical familiarity solely related to explicit musical memory, or could it also be related to implicit musical memory? To address this, we presented 130 song excerpts of varying familiarity to 21 participants. While acquiring their brain activity using functional magnetic resonance imaging (fMRI), we asked the participants to rate the familiarity of each song on a five-point scale. To comprehensively analyze the neural correlates of musical familiarity, we examined them from four perspectives: the intensity of local neural activity, patterns of local neural activity, global neural activity patterns, and functional connectivity. The results from these four approaches were consistent and revealed that musical familiarity is related to the activity of both explicit and implicit musical memory networks. Our findings suggest that (1) musical familiarity is also associated with implicit musical memory, and (2) there is a cooperative and competitive interaction between the two types of musical memory in the perception of music.


Subject(s)
Brain Mapping, Brain, Magnetic Resonance Imaging, Music, Recognition (Psychology), Humans, Music/psychology, Recognition (Psychology)/physiology, Male, Female, Young Adult, Adult, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods, Auditory Perception/physiology, Acoustic Stimulation/methods
17.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38489785

ABSTRACT

Dance and music are well known to improve sensorimotor skills and cognitive functions. To reveal the underlying mechanisms, previous studies have focused on the plastic structural and functional effects of dance and music training on the brain. However, how the two forms of training differ in their effects on the brain's structure-function relationship remains unclear. Thus, proficient dancers, musicians, and controls were recruited in this study. A graph signal processing framework was employed to quantify the region-level and network-level relationship between brain function and structure. The results showed increased coupling strength of the right ventromedial putamen in both the dance and music groups. Distinctly, enhanced coupling strength of the ventral attention network, increased coupling strength of the right inferior frontal gyrus opercular area, and increased functional connectivity of the coupled functional signal between the right and left middle frontal gyrus were found only in the dance group. In addition, the dance group showed enhanced coupling-based functional connectivity between the left inferior parietal lobule caudal area and the left superior parietal lobule intraparietal area compared with the music group. These results suggest that dance and music training have distinct effects on the structure-function relationship of subcortical and cortical attention networks, and that dance training appears to have the greater impact on these networks.
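One simplified way to express region-level structure-function coupling in a graph signal processing framework is sketched below: decompose the structural connectome's Laplacian into graph Fourier modes, low-pass filter the functional signals on the graph, and correlate each region's filtered and original time courses. This is a generic stand-in for, not a reproduction of, the measure used in the study; the data and mode cutoff are invented.

```python
# Simplified graph-signal-processing sketch (random data; not the paper's exact measure):
# regions whose functional time courses are well captured by the low-frequency (smooth)
# structural graph modes receive a high structure-function coupling value.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints, n_low_modes = 60, 300, 15

# Symmetric structural connectome and its normalized graph Laplacian.
W = np.abs(rng.standard_normal((n_regions, n_regions)))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
d = W.sum(axis=1)
L = np.eye(n_regions) - np.diag(1 / np.sqrt(d)) @ W @ np.diag(1 / np.sqrt(d))
eigvals, U = np.linalg.eigh(L)                 # columns of U = graph Fourier modes

func = rng.standard_normal((n_regions, n_timepoints))   # regional fMRI time courses

coeffs = U.T @ func                                      # graph Fourier transform
low_pass = U[:, :n_low_modes] @ coeffs[:n_low_modes]     # keep only the smoothest modes

coupling = np.array([np.corrcoef(func[r], low_pass[r])[0, 1] for r in range(n_regions)])
print("example region-level coupling values:", np.round(coupling[:5], 2))
```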


Subject(s)
Music, Brain/diagnostic imaging, Brain Mapping, Parietal Lobe, Frontal Lobe, Magnetic Resonance Imaging/methods
18.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39110413

ABSTRACT

Music is a non-verbal human language, built on logical, hierarchical structures, that offers excellent opportunities to explore how the brain processes complex spatiotemporal auditory sequences. Using the high temporal resolution of magnetoencephalography, we investigated the unfolding brain dynamics of 70 participants during the recognition of previously memorized musical sequences compared to novel sequences matched in terms of entropy and information content. Measures of both whole-brain activity and functional connectivity revealed a widespread brain network underlying the recognition of the memorized auditory sequences, which comprised primary auditory cortex, superior temporal gyrus, insula, frontal operculum, cingulate gyrus, orbitofrontal cortex, basal ganglia, thalamus, and hippocampus. Furthermore, while the auditory cortex responded mainly to the first tones of the sequences, the activity of higher-order brain areas such as the cingulate gyrus, frontal operculum, hippocampus, and orbitofrontal cortex largely increased over time during the recognition of the memorized versus novel musical sequences. In conclusion, using a wide range of analytical techniques spanning from decoding to functional connectivity and building on previous works, our study provided new insights into the spatiotemporal whole-brain mechanisms for conscious recognition of auditory sequences.


Subject(s)
Auditory Perception, Brain, Magnetoencephalography, Music, Humans, Male, Female, Adult, Magnetoencephalography/methods, Auditory Perception/physiology, Young Adult, Brain/physiology, Recognition (Psychology)/physiology, Brain Mapping/methods, Nerve Net/physiology, Nerve Net/diagnostic imaging, Acoustic Stimulation/methods
19.
Annu Rev Psychol ; 75: 87-128, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-37738514

ABSTRACT

Music training is generally assumed to improve perceptual and cognitive abilities. Although correlational data highlight positive associations, experimental results are inconclusive, raising questions about causality. Does music training have far-transfer effects, or do preexisting factors determine who takes music lessons? All behavior reflects genetic and environmental influences, but differences in emphasis (nature versus nurture) have been a source of tension throughout the history of psychology. After reviewing the recent literature, we conclude that the evidence that music training causes nonmusical benefits is weak or nonexistent, and that researchers routinely overemphasize contributions from experience while neglecting those from nature. The literature is also largely exploratory rather than theory driven. It fails to explain mechanistically how music-training effects could occur and ignores evidence that far transfer is rare. Instead of focusing on elusive perceptual or cognitive benefits, we argue that it is more fruitful to examine the social-emotional effects of engaging with music, particularly in groups, and that music-based interventions may be effective mainly for clinical or atypical populations.


Subject(s)
Music, Humans, Cognition, Emotions
20.
Bioessays ; 45(4): e2200229, 2023 04.
Article in English | MEDLINE | ID: mdl-36811379

ABSTRACT

Error-free genome duplication and accurate cell division are critical for cell survival. In all three domains of life, bacteria, archaea, and eukaryotes, initiator proteins bind replication origins in an ATP-dependent manner, play critical roles in replisome assembly, and coordinate cell-cycle regulation. We discuss how the eukaryotic initiator, Origin recognition complex (ORC), coordinates different events during the cell cycle. We propose that ORC is the maestro driving the orchestra to coordinately perform the musical pieces of replication, chromatin organization, and repair.


Subject(s)
DNA Replication, Music, Chromatin, Cell Cycle/physiology, Chromosomes, Origin Recognition Complex/genetics, Origin Recognition Complex/metabolism, Replication Origin