Results 1 - 20 of 65
1.
Proc Natl Acad Sci U S A; 118(29), 2021 Jul 20.
Article in English | MEDLINE | ID: mdl-34272278

ABSTRACT

Rhythm perception is fundamental to speech and music. Humans readily recognize a rhythmic pattern, such as that of a familiar song, independently of the tempo at which it occurs. This shows that our perception of auditory rhythms is flexible, relying on global relational patterns more than on the absolute durations of specific time intervals. Given that auditory rhythm perception in humans engages a complex auditory-motor cortical network even in the absence of movement and that the evolution of vocal learning is accompanied by strengthening of forebrain auditory-motor pathways, we hypothesize that vocal learning species share our perceptual facility for relational rhythm processing. We test this by asking whether the best-studied animal model for vocal learning, the zebra finch, can recognize a fundamental rhythmic pattern, equal timing between event onsets (isochrony), based on temporal relations between intervals rather than on absolute durations. Prior work suggests that vocal nonlearners (pigeons and rats) are quite limited in this regard and are biased to attend to absolute durations when listening to rhythmic sequences. In contrast, using naturalistic sounds at multiple stimulus rates, we show that male zebra finches robustly recognize isochrony independent of absolute time intervals, even at rates distant from those used in training. Our findings highlight the importance of comparative studies of rhythmic processing and suggest that vocal learning species are promising animal models for key aspects of human rhythm perception. Such models are needed to understand the neural mechanisms behind the positive effect of rhythm on certain speech and movement disorders.
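
As a rough illustration of the isochrony concept tested here, the sketch below (Python, with arbitrary toy rates; not the study's stimulus-generation code) shows why an isochronous sequence carries the same relational pattern at any tempo even though its absolute intervals change.

```python
# Illustrative sketch (not the authors' stimulus code): isochronous vs. jittered
# onset sequences, and why the isochronous pattern is rate-invariant in relative terms.
import numpy as np

rng = np.random.default_rng(0)

def isochronous_onsets(ioi_s, n_events):
    """Onset times with equal inter-onset intervals (isochrony)."""
    return np.arange(n_events) * ioi_s

def jittered_onsets(ioi_s, n_events, jitter=0.3):
    """Arrhythmic control: same mean tempo, but intervals vary randomly."""
    iois = ioi_s * (1 + rng.uniform(-jitter, jitter, n_events - 1))
    return np.concatenate([[0.0], np.cumsum(iois)])

for ioi in (0.12, 0.18, 0.27):                      # three stimulus rates (seconds)
    intervals = np.diff(isochronous_onsets(ioi, 8))
    # Absolute intervals differ across rates, but each interval divided by the
    # mean interval is always 1.0 -- the relational cue the birds must rely on.
    print(f"IOI {ioi:.2f} s -> relative pattern {np.round(intervals / intervals.mean(), 2)}")

print("jittered control intervals:", np.round(np.diff(jittered_onsets(0.18, 8)), 3))
```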


Subjects
Auditory Perception, Finches/physiology, Animals, Auditory Cortex/physiology, Female, Learning, Male, Physiological Pattern Recognition, Sound, Voice
2.
Cereb Cortex; 31(8): 3622-3640, 2021 Jul 05.
Article in English | MEDLINE | ID: mdl-33749742

ABSTRACT

Humans can mentally represent auditory information without an external stimulus, but the specificity of these internal representations remains unclear. Here, we asked how similar the temporally unfolding neural representations of imagined music are compared to those during the original perceived experience. We also tested whether rhythmic motion can influence the neural representation of music during imagery as during perception. Participants first memorized six 1-min-long instrumental musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) same but while tapping to the beat; and 3) passive listening. During imagery, inter-subject correlation analysis showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns were reinstated in more extensive temporal-lobe regions bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor to sensory influences in auditory processing.
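
The melody-specific reinstatement analysis described above belongs to the inter-subject correlation family. The following toy sketch (assumed data shapes and simulated signals, not the published pipeline) illustrates a leave-one-out inter-subject correlation on regional time courses.

```python
# Rough sketch of a leave-one-out inter-subject correlation (ISC); details of the
# study's fMRI preprocessing and melody-matching analysis are not reproduced here.
import numpy as np

def leave_one_out_isc(timecourses):
    """timecourses: array (n_subjects, n_timepoints) for one region and one melody.
    Returns each subject's correlation with the mean of the remaining subjects."""
    n = timecourses.shape[0]
    isc = []
    for s in range(n):
        others = np.delete(timecourses, s, axis=0).mean(axis=0)
        isc.append(np.corrcoef(timecourses[s], others)[0, 1])
    return np.array(isc)

# Toy data: 10 subjects, 60 timepoints sharing a common melody-driven signal.
rng = np.random.default_rng(1)
shared = np.sin(np.linspace(0, 8 * np.pi, 60))
data = shared + 0.8 * rng.standard_normal((10, 60))
print(leave_one_out_isc(data).round(2))
```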


Subjects
Brain Mapping, Imagination/physiology, Music/psychology, Acoustic Stimulation, Adolescent, Adult, Auditory Cortex/physiology, Auditory Perception/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Movement/physiology, Pitch Discrimination, Pitch Perception, Sensation/physiology, Young Adult
3.
J Neurolinguistics; 62, 2022 May.
Article in English | MEDLINE | ID: mdl-35002061

ABSTRACT

Language and music rely on complex sequences organized according to syntactic principles that are implicitly understood by enculturated listeners. Across both domains, syntactic processing involves predicting and integrating incoming elements into higher-order structures. According to the Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003), musical and linguistic syntactic processing rely on shared resources for integrating incoming elements (e.g., chords, words) into unfolding sequences. One prediction of the SSIRH is that people with agrammatic aphasia (whose deficits are due to syntactic integration problems) should present with deficits in processing musical syntax. We report the first neural study to test this prediction: event-related potentials (ERPs) were measured in response to musical and linguistic syntactic violations in a group of people with agrammatic aphasia (n=7) compared to a group of healthy controls (n=14) using an acceptability judgement task. The groups were matched with respect to age, education, and extent of musical training. Violations were based on morpho-syntactic relations in sentences and harmonic relations in chord sequences. Both groups presented with a significant P600 response to syntactic violations across both domains. The aphasic participants presented with a reduced-amplitude posterior P600 compared to the healthy adults in response to linguistic, but not musical, violations. Participants with aphasia did, however, present with larger frontal positivities in response to violations in both domains. Intriguingly, extent of musical training was associated with larger posterior P600 responses to syntactic violations of language and music in both groups. Overall, these findings are not consistent with the predictions of the SSIRH, and instead suggest that linguistic, but not musical, syntactic processing may be selectively impaired in stroke-induced agrammatic aphasia. However, the findings also suggest a relationship between musical training and linguistic syntactic processing, which may have clinical implications for people with aphasia and motivates further research on the relationship between the two domains.
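
For readers unfamiliar with how a P600 effect is typically quantified, the sketch below illustrates the common approach of averaging amplitude in a late positive time window for violation versus control trials. The window, channel selection, and data here are generic assumptions, not this study's parameters.

```python
# Hedged illustration of a P600-style mean-amplitude measure on simulated trials.
import numpy as np

def mean_amplitude(epochs, times, tmin=0.6, tmax=0.9):
    """epochs: (n_trials, n_times) in microvolts; times in seconds.
    Returns the grand mean amplitude in the [tmin, tmax] window."""
    mask = (times >= tmin) & (times <= tmax)
    return epochs[:, mask].mean()

times = np.linspace(-0.2, 1.0, 601)
rng = np.random.default_rng(2)
control = rng.normal(0.0, 2.0, (40, times.size))
violation = rng.normal(0.0, 2.0, (40, times.size))
violation[:, (times >= 0.6) & (times <= 0.9)] += 3.0   # simulated late positivity

p600_effect = mean_amplitude(violation, times) - mean_amplitude(control, times)
print(f"P600 effect (violation - control): {p600_effect:.2f} µV")
```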

4.
Behav Brain Sci; 44: e85, 2021 Sep 30.
Article in English | MEDLINE | ID: mdl-34588015

ABSTRACT

Collective, synchronous music-making is far from ubiquitous across traditional, small-scale societies. We describe societies that lack collective music and offer hypotheses to help explain this cultural variation. Without identifying the factors that explain variation in collective music-making across these societies, theories of music evolution based on social bonding (Savage et al.) or coalition signaling (Mehr et al.) remain incomplete.


Subjects
Music, Cross-Cultural Comparison, Humans
5.
Neuroimage; 213: 116693, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32135262

ABSTRACT

Time is a critical component of episodic memory. Yet it is currently unclear how different types of temporal signals are represented in the brain and how these temporal signals support episodic memory. The current study investigated whether temporal cues provided by low-frequency environmental rhythms influence memory formation. Specifically, we tested the hypothesis that neural tracking of low-frequency rhythm serves as a mechanism of selective attention that dynamically biases the encoding of visual information at specific moments in time. Participants incidentally encoded a series of visual objects while passively listening to background, instrumental music with a steady beat. Objects either appeared in-synchrony or out-of-synchrony with the background beat. Participants were then given a surprise subsequent memory test (in silence). Results revealed significant neural tracking of the musical beat at encoding, evident in increased electrophysiological power and inter-trial phase coherence at the perceived beat frequency (1.25 Hz). Importantly, enhanced neural tracking of the background rhythm at encoding was associated with superior subsequent memory for in-synchrony compared to out-of-synchrony objects at test. Together, these results provide novel evidence that the brain spontaneously tracks low-frequency musical rhythm during naturalistic listening situations, and that the strength of this neural tracking is associated with the effects of rhythm on higher-order cognitive processes such as episodic memory.
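
The neural-tracking measure described here, inter-trial phase coherence at the beat frequency, has a simple closed form: the magnitude of the mean unit phase vector across trials. The sketch below computes it on simulated trials; it is a minimal illustration, not the study's EEG pipeline.

```python
# Minimal sketch of inter-trial phase coherence (ITPC) at a target frequency.
import numpy as np

def itpc_at_freq(trials, srate, freq):
    """trials: (n_trials, n_samples). Returns ITPC at `freq` using FFT phases."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / srate)
    bin_idx = np.argmin(np.abs(freqs - freq))
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))     # 0 = random phase, 1 = perfect locking

# Toy trials with a weak 1.25 Hz component phase-locked across trials plus noise.
rng = np.random.default_rng(3)
srate, dur = 250, 8.0
t = np.arange(0, dur, 1.0 / srate)
trials = 0.5 * np.sin(2 * np.pi * 1.25 * t) + rng.standard_normal((30, t.size))
print(f"ITPC at 1.25 Hz: {itpc_at_freq(trials, srate, 1.25):.2f}")
```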


Subjects
Auditory Perception/physiology, Brain/physiology, Episodic Memory, Periodicity, Acoustic Stimulation/methods, Adolescent, Adult, Attention/physiology, Cues (Psychology), Female, Humans, Male, Music, Young Adult
6.
Proc Natl Acad Sci U S A; 113(6): 1666-71, 2016 Feb 09.
Article in English | MEDLINE | ID: mdl-26811447

ABSTRACT

Humans easily recognize "transposed" musical melodies shifted up or down in log frequency. Surprisingly, songbirds seem to lack this capacity, although they can learn to recognize human melodies and use complex acoustic sequences for communication. Decades of research have led to the widespread belief that songbirds, unlike humans, are strongly biased to use absolute pitch (AP) in melody recognition. This work relies almost exclusively on acoustically simple stimuli that may belie sensitivities to more complex spectral features. Here, we investigate melody recognition in a species of songbird, the European Starling (Sturnus vulgaris), using tone sequences that vary in both pitch and timbre. We find that small manipulations altering either pitch or timbre independently can drive melody recognition to chance, suggesting that both percepts are poor descriptors of the perceptual cues used by birds for this task. Instead we show that melody recognition can generalize even in the absence of pitch, as long as the spectral shapes of the constituent tones are preserved. These results challenge conventional views regarding the use of pitch cues in nonhuman auditory sequence recognition.
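
The "transposition" manipulation in the opening sentence has a simple acoustic definition, illustrated below with toy tone frequencies (an assumed example, not the study's stimuli): a constant shift in log frequency multiplies every tone by the same ratio, preserving pitch intervals while changing absolute pitch.

```python
# Sketch of log-frequency transposition: relative intervals are preserved,
# absolute frequencies are not.
import numpy as np

melody_hz = np.array([440.0, 494.0, 523.0, 587.0])      # arbitrary example tones

def transpose(freqs_hz, semitones):
    """Shift up/down by a fixed number of semitones (equal steps in log frequency)."""
    return freqs_hz * 2.0 ** (semitones / 12.0)

shifted = transpose(melody_hz, -5)
# The interval pattern (log-frequency differences, in semitones) is identical --
# the relative-pitch cue humans use and starlings apparently do not rely on.
print(np.round(np.diff(np.log2(melody_hz)) * 12, 2))
print(np.round(np.diff(np.log2(shifted)) * 12, 2))
```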


Subjects
Physiological Pattern Recognition/physiology, Pitch Perception/physiology, Sound Spectrography, Sound, Starlings/physiology, Acoustic Stimulation, Animals, Animal Behavior, Noise
7.
Dev Med Child Neurol; 60(3): 256-266, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29363098

ABSTRACT

AIM: The neonatal intensive care unit (NICU) provides life-saving medical care for an increasing number of newborn infants each year. NICU care, while life-saving, does have attendant consequences, which can include repeated activation of the stress response and reduced maternal interaction, with possible negative long-term impacts on brain development. Here we present a neuroscientific framework for considering the impact of music on neurodevelopment in the NICU of infants born preterm and evaluate current literature on the use of music with this population to determine what is most reliably known of the physiological effects of music interventions. METHOD: Using online academic databases, we collected relevant, experimental studies aimed at determining effects of music listening in infants in the NICU. These articles were evaluated for methodological rigor, ranking the 10 most experimentally stringent as a representative sample. RESULTS: The selected literature seems to indicate that effects are present on the cardio-pulmonary system and behavior of neonates, although the relative effect size remains unclear. INTERPRETATION: These findings indicate a need for more standardized longitudinal studies aimed at determining whether NICU music exposure has beneficial effects not only on the cardio-pulmonary system, but also on the hypothalamic-pituitary-adrenal axis, brain structures, and the cognitive and behavioral status of these children. WHAT THIS PAPER ADDS: Provides a neuroscience framework for considering how music might attenuate stress in neonatal intensive care unit (NICU) infants. Considers how repeated stress may cause negative neurodevelopmental impacts in infants born preterm. Posits that epigenetics can serve as a mechanistic pathway for music moderating the stress response.


Subjects
Neonatal Intensive Care Units, Neurodevelopmental Disorders, Psychological Stress, Bibliographic Databases/statistics & numerical data, Humans, Newborn Infant, Premature Infant, Music Therapy, Neurodevelopmental Disorders/etiology, Neurodevelopmental Disorders/psychology, Neurodevelopmental Disorders/therapy, Online Systems/statistics & numerical data, Psychological Stress/etiology, Psychological Stress/psychology, Psychological Stress/therapy
8.
J Exp Child Psychol; 167: 354-368, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29227852

ABSTRACT

A growing number of studies report links between nonlinguistic rhythmic abilities and certain linguistic abilities, particularly phonological skills. The current study investigated the relationship between nonlinguistic rhythmic processing, phonological abilities, and early literacy abilities in kindergarteners. A distinctive aspect of the current work was the exploration of whether processing of different types of rhythmic patterns is differentially related to kindergarteners' phonological and reading-related abilities. Specifically, we examined the processing of metrical versus nonmetrical rhythmic patterns, that is, patterns capable of being subdivided into equal temporal intervals or not (Povel & Essens, 1985). This is an important comparison because most music involves metrical sequences, in which rhythm often has an underlying temporal grid of isochronous units. In contrast, nonmetrical sequences are arguably more typical to speech rhythm, which is temporally structured but does not involve an underlying grid of equal temporal units. A rhythm discrimination app with metrical and nonmetrical patterns was administered to 74 kindergarteners in conjunction with cognitive and preliteracy measures. Findings support a relationship among rhythm perception, phonological awareness, and letter-sound knowledge (an essential precursor of reading). A mediation analysis revealed that the association between rhythm perception and letter-sound knowledge is mediated through phonological awareness. Furthermore, metrical perception accounted for unique variance in letter-sound knowledge above all other language and cognitive measures. These results point to a unique role for temporal regularity processing in the association between musical rhythm and literacy in young children.
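
The metrical/nonmetrical distinction can be made concrete as a check of whether onsets fall on a grid of equal time units, as in the simplified sketch below (toy patterns and tolerance chosen purely for illustration, in the spirit of Povel & Essens, 1985; these are not the study's stimuli).

```python
# Toy illustration of "metrical" (fits an isochronous grid) vs. "nonmetrical".
import numpy as np

def fits_isochronous_grid(onsets_ms, unit_ms, tol_ms=1.0):
    """True if every onset lies (within tolerance) on a multiple of `unit_ms`."""
    onsets = np.asarray(onsets_ms, dtype=float)
    deviation_ms = np.abs(onsets / unit_ms - np.round(onsets / unit_ms)) * unit_ms
    return bool(np.all(deviation_ms <= tol_ms))

metrical    = [0, 200, 400, 800, 1000, 1200, 1600]   # all multiples of a 200 ms unit
nonmetrical = [0, 230, 410, 790, 1040, 1230, 1600]   # no common small unit

print(fits_isochronous_grid(metrical, 200))      # True
print(fits_isochronous_grid(nonmetrical, 200))   # False
```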


Subjects
Language, Literacy, Music, Reading, Aptitude, Child, Preschool Child, Female, Humans, Male, Speech
9.
PLoS Biol; 12(3): e1001821, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24667562

ABSTRACT

In The Descent of Man, Darwin speculated that our capacity for musical rhythm reflects basic aspects of brain function broadly shared among animals. Although this remains an appealing idea, it is being challenged by modern cross-species research. This research hints that our capacity to synchronize to a beat, i.e., to move in time with a perceived pulse in a manner that is predictive and flexible across a broad range of tempi, may be shared by only a few other species. Is this really the case? If so, it would have important implications for our understanding of the evolution of human musicality.


Subjects
Biological Evolution, Music/psychology, Acoustic Stimulation, Animals, Humans, Species Specificity
10.
Proc Natl Acad Sci U S A; 108(37): 15510-5, 2011 Sep 13.
Article in English | MEDLINE | ID: mdl-21876156

ABSTRACT

Human song exhibits great structural diversity, yet certain aspects of melodic shape (how pitch is patterned over time) are widespread. These include a predominance of arch-shaped and descending melodic contours in musical phrases, a tendency for phrase-final notes to be relatively long, and a bias toward small pitch movements between adjacent notes in a melody [Huron D (2006) Sweet Anticipation: Music and the Psychology of Expectation (MIT Press, Cambridge, MA)]. What is the origin of these features? We hypothesize that they stem from motor constraints on song production (i.e., the energetic efficiency of their underlying motor actions) rather than being innately specified. One prediction of this hypothesis is that any animals subject to similar motor constraints on song will exhibit similar melodic shapes, no matter how distantly related those animals are to humans. Conversely, animals who do not share similar motor constraints on song will not exhibit convergent melodic shapes. Birds provide an ideal case for testing these predictions, because their peripheral mechanisms of song production have both notable similarities and differences from human vocal mechanisms [Riede T, Goller F (2010) Brain Lang 115:69-80]. We use these similarities and differences to make specific predictions about shared and distinct features of human and avian song structure and find that these predictions are confirmed by empirical analysis of diverse human and avian song samples.
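
The melodic-shape features at issue (contour type, phrase-final lengthening, interval size) can be summarized with a few simple descriptors. The sketch below computes them for a toy phrase; it is only a schematic stand-in for the paper's corpus analysis, and all values are invented.

```python
# Sketch of simple melodic-shape descriptors for one toy phrase.
import numpy as np

pitches_semitones = np.array([60, 64, 67, 65, 62, 60], dtype=float)  # toy phrase
durations_s = np.array([0.25, 0.25, 0.25, 0.25, 0.25, 0.60])

intervals = np.diff(pitches_semitones)
mean_abs_interval = np.abs(intervals).mean()                    # bias toward small movements?
final_lengthening = durations_s[-1] / durations_s[:-1].mean()   # > 1 => longer final note
peak_position = np.argmax(pitches_semitones) / (len(pitches_semitones) - 1)
contour = "arch-like" if 0 < peak_position < 1 else "ascending/descending"

print(f"mean |interval| = {mean_abs_interval:.1f} semitones")
print(f"final-note lengthening ratio = {final_lengthening:.1f}")
print(f"contour: {contour} (peak at {peak_position:.0%} of phrase)")
```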


Subjects
Motor Activity/physiology, Music, Sparrows/physiology, Animal Vocalization/physiology, Animals, Humans, Respiration, Sound Spectrography, Vibration
11.
Cognition; 243: 105672, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38086279

ABSTRACT

Pleasure in music has been linked to predictive coding of melodic and rhythmic patterns, subserved by connectivity between regions in the brain's auditory and reward networks. Specific musical anhedonics derive little pleasure from music and have altered auditory-reward connectivity, but no difficulties with music perception abilities and no generalized physical anhedonia. Recent research suggests that specific musical anhedonics experience pleasure in nonmusical sounds, suggesting that the implicated brain pathways may be specific to music reward. However, this work used sounds with clear real-world sources (e.g., babies laughing, crowds cheering), so positive hedonic responses could be based on the referents of these sounds rather than the sounds themselves. We presented specific musical anhedonics and matched controls with isolated short pleasing and displeasing synthesized sounds of varying timbres with no clear real-world referents. While the two groups found displeasing sounds equally displeasing, the musical anhedonics gave substantially lower pleasure ratings to the pleasing sounds, indicating that their sonic anhedonia is not limited to musical rhythms and melodies. Furthermore, across a large sample of participants, mean pleasure ratings for pleasing synthesized sounds predicted significant and similar variance in six dimensions of musical reward considered to be relatively independent, suggesting that pleasure in sonic timbres plays a role in eliciting reward-related responses to music. We replicate the earlier findings of preserved pleasure ratings for semantically referential sounds in musical anhedonics and find that pleasure ratings of semantic referents, when presented without sounds, correlated with ratings for the sounds themselves. This association was stronger in musical anhedonics than in controls, suggesting the use of semantic knowledge as a compensatory mechanism for affective sound processing. Our results indicate that specific musical anhedonia is not entirely specific to melodic and rhythmic processing, and suggest that timbre merits further research as a source of pleasure in music.


Subjects
Music, Humans, Music/psychology, Anhedonia/physiology, Pleasure/physiology, Brain/physiology, Reward, Auditory Perception/physiology
12.
Cognition; 246: 105757, 2024 May.
Article in English | MEDLINE | ID: mdl-38442588

ABSTRACT

One of the most important auditory categorization tasks a listener faces is determining a sound's domain, a process which is a prerequisite for successful within-domain categorization tasks such as recognizing different speech sounds or musical tones. Speech and song are universal in human cultures: how do listeners categorize a sequence of words as belonging to one or the other of these domains? There is growing interest in the acoustic cues that distinguish speech and song, but it remains unclear whether there are cross-cultural differences in the evidence upon which listeners rely when making this fundamental perceptual categorization. Here we use the speech-to-song illusion, in which some spoken phrases perceptually transform into song when repeated, to investigate cues to this domain-level categorization in native speakers of tone languages (Mandarin and Cantonese speakers residing in the United Kingdom and China) and in native speakers of a non-tone language (English). We find that native tone-language and non-tone-language listeners largely agree on which spoken phrases sound like song after repetition, and we also find that the strength of this transformation is not significantly different across language backgrounds or countries of residence. Furthermore, we find a striking similarity in the cues upon which listeners rely when perceiving word sequences as singing versus speech, including small pitch intervals, flat within-syllable pitch contours, and steady beats. These findings support the view that there are certain widespread cross-cultural similarities in the mechanisms by which listeners judge if a word sequence is spoken or sung.


Subjects
Speech Perception, Speech, Humans, Cues (Psychology), Language, Phonetics, Pitch Perception
13.
Autism Res; 17(6): 1230-1257, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38651566

ABSTRACT

Atypical predictive processing has been associated with autism across multiple domains, based mainly on artificial antecedents and consequents. As structured sequences where expectations derive from implicit learning of combinatorial principles, language and music provide naturalistic stimuli for investigating predictive processing. In this study, we matched melodic and sentence stimuli in cloze probabilities and examined musical and linguistic prediction in Mandarin- (Experiment 1) and English-speaking (Experiment 2) autistic and non-autistic individuals using both production and perception tasks. In the production tasks, participants listened to unfinished melodies/sentences and then produced the final notes/words to complete these items. In the perception tasks, participants provided expectedness ratings of the completed melodies/sentences based on the most frequent notes/words in the norms. While Experiment 1 showed intact musical prediction but atypical linguistic prediction in autism in the Mandarin sample, in which musical training experience and receptive vocabulary skills were imbalanced between groups, the group difference disappeared in a more closely matched sample of English speakers in Experiment 2. These findings suggest the importance of taking an individual differences approach when investigating predictive processing in music and language in autism, as the difficulty in prediction in autism may not be due to generalized problems with prediction in any type of complex sequence processing.
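
Cloze probability, the quantity on which the melodic and sentence stimuli were matched, is conventionally the proportion of a norming sample producing a given completion. The sketch below shows one common way to compute it; the responses are toy data, not the study's norms.

```python
# Hedged sketch of cloze-probability computation from norming responses.
from collections import Counter

def cloze_probabilities(completions):
    """completions: list of responses from a norming sample for one fragment.
    Returns each response's proportion, highest first."""
    counts = Counter(completions)
    total = sum(counts.values())
    return {resp: n / total for resp, n in counts.most_common()}

norming_responses = ["bed", "bed", "bed", "sleep", "bed", "rest", "bed", "bed"]
probs = cloze_probabilities(norming_responses)
print(probs)                       # {'bed': 0.75, 'sleep': 0.125, 'rest': 0.125}
# Melodic and sentence items can then be matched on the cloze probability of the
# most expected completion (here, 0.75).
```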


Subjects
Autistic Disorder, Language, Music, Humans, Male, Female, Adult, Young Adult, Auditory Perception/physiology, Adolescent, Speech Perception/physiology
14.
Anim Behav; 203: 193-206, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37842009

ABSTRACT

Humans readily recognize familiar rhythmic patterns, such as isochrony (equal timing between events) across a wide range of rates. This reflects a facility with perceiving the relative timing of events, not just absolute interval durations. Several lines of evidence suggest this ability is supported by precise temporal predictions arising from forebrain auditory-motor interactions. We have shown previously that male zebra finches, Taeniopygia guttata, which possess specialized auditory-motor networks and communicate with rhythmically patterned sequences, share our ability to flexibly recognize isochrony across rates. To test the hypothesis that flexible rhythm pattern perception is linked to vocal learning, we ask whether female zebra finches, which do not learn to sing, can also recognize global temporal patterns. We find that females can flexibly recognize isochrony across a wide range of rates but perform slightly worse than males on average. These findings are consistent with recent work showing that while females have reduced forebrain song regions, the overall network connectivity of vocal premotor regions is similar to males and may support predictions of upcoming events. Comparative studies of male and female songbirds thus offer an opportunity to study how individual differences in auditory-motor connectivity influence perception of relative timing, a hallmark of human music perception.

15.
Brain Cogn; 79(3): 209-15, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22546729

ABSTRACT

This study examined whether "melodic contour deafness" (insensitivity to the direction of pitch movement) in congenital amusia is associated with specific types of pitch patterns (discrete versus gliding pitches) or stimulus types (speech syllables versus complex tones). Thresholds for identification of pitch direction were obtained using discrete or gliding pitches in the syllable /ma/ or its complex tone analog, from nineteen amusics and nineteen controls, all healthy university students with Mandarin Chinese as their native language. Amusics, unlike controls, had more difficulty recognizing pitch direction in discrete than in gliding pitches, for both speech and non-speech stimuli. Also, amusic thresholds were not significantly affected by stimulus types (speech versus non-speech), whereas controls showed lower thresholds for tones than for speech. These findings help explain why amusics have greater difficulty with discrete musical pitch perception than with speech perception, in which continuously changing pitch movements are prevalent.


Subjects
Auditory Perceptual Disorders/physiopathology, Language, Pitch Perception, Speech Perception, Acoustic Stimulation, China, Female, Humans, Male, Music, Recognition (Psychology)/physiology, Speech/physiology, Young Adult
16.
Atten Percept Psychophys; 84(8): 2702-2714, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36261763

ABSTRACT

Beat perception can serve as a window into internal time-keeping mechanisms, auditory-motor interactions, and aspects of cognition. One aspect of beat perception is the covert continuation of an internal pulse. Of the several popular tests of beat perception, none provide a satisfying test of this faculty of covert continuation. The current study proposes a new beat-perception test focused on covert pulse continuation: The Beat-Drop Alignment Test (BDAT). In this test, participants must identify the beat in musical excerpts and then judge whether a single probe falls on or off the beat. The probe occurs during a short break in the rhythmic components of the music when no rhythmic events are present, forcing participants to judge beat alignment relative to an internal pulse maintained in the absence of local acoustic timing cues. Here, we present two large (N > 100) tests of the BDAT. In the first, we explore the effect of test item parameters (e.g., probe displacement) on performance. In the second, we correlate scores on an adaptive version of the BDAT with scores on the computerized adaptive Beat Alignment Test (CA-BAT) and with indices of musical experience. Musical experience indices outperform CA-BAT score as a predictor of BDAT score, suggesting that the BDAT measures a distinct aspect of beat perception that is more experience-dependent and may draw on cognitive resources such as working memory and musical imagery differently than the BAT. The BDAT may prove useful in future behavioral and neural research on beat perception, and all stimuli and code are freely available for download.
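
The core BDAT judgement, whether a probe falls on or off an internally maintained beat, can be stated as a small timing computation. The sketch below is a simplified illustration with assumed tolerance and displacement values; it is not the test's actual scoring code.

```python
# Simplified sketch of an on-beat/off-beat judgement given a beat period and phase.
import numpy as np

def probe_is_on_beat(probe_t, beat_period, first_beat_t=0.0, tol_frac=0.1):
    """True if the probe is within tol_frac of a beat period from the nearest beat."""
    phase = (probe_t - first_beat_t) / beat_period
    distance = np.abs(phase - np.round(phase))        # in fractions of a beat
    return bool(distance <= tol_frac)

beat_period = 0.5                                      # 120 BPM
print(probe_is_on_beat(4.0, beat_period))              # on the beat -> True
print(probe_is_on_beat(4.12, beat_period))             # displaced by ~24% of a beat -> False
```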


Subjects
Cues (Psychology), Music, Humans, Acoustic Stimulation/methods, Short-Term Memory, Perception, Auditory Perception
17.
Brain; 133(Pt 6): 1682-93, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20418275

ABSTRACT

This study investigated whether congenital amusia, a neuro-developmental disorder of musical perception, also has implications for speech intonation processing. In total, 16 British amusics and 16 matched controls completed five intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on discrimination, identification and imitation of statements and questions that were characterized primarily by pitch direction differences in the final word. This intonation-processing deficit in amusia was largely associated with a psychophysical pitch direction discrimination deficit. These findings suggest that amusia impacts upon one's language abilities in subtle ways, and support previous evidence that pitch processing in language and music involves shared mechanisms.


Subjects
Auditory Perceptual Disorders, Discrimination (Psychology), Imitative Behavior, Music, Pitch Perception, Speech Perception, Acoustic Stimulation, Auditory Threshold, England, Female, Humans, Linguistics, Male, Middle Aged, Psycholinguistics, Speech, Task Performance and Analysis
18.
Philos Trans R Soc Lond B Biol Sci; 376(1835): 20200326, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34420384

ABSTRACT

The human capacity to synchronize movements to an auditory beat is central to musical behaviour and to debates over the evolution of human musicality. Have humans evolved any neural specializations for music processing, or does music rely entirely on brain circuits that evolved for other reasons? The vocal learning and rhythmic synchronization hypothesis proposes that our ability to move in time with an auditory beat in a precise, predictive and tempo-flexible manner originated in the neural circuitry for complex vocal learning. In the 15 years since the hypothesis was proposed, a variety of studies have supported it. However, one study has provided a significant challenge to the hypothesis. Furthermore, it is increasingly clear that vocal learning is not a binary trait animals have or lack, but varies more continuously across species. In the light of these developments and of recent progress in the neurobiology of beat processing and of vocal learning, the current paper revises the vocal learning hypothesis. It argues that an advanced form of vocal learning acts as a preadaptation for sporadic beat perception and synchronization (BPS), providing intrinsic rewards for predicting the temporal structure of complex acoustic sequences. It further proposes that in humans, mechanisms of gene-culture coevolution transformed this preadaptation into a genuine neural adaptation for sustained BPS. The larger significance of this proposal is that it outlines a hypothesis of cognitive gene-culture coevolution which makes testable predictions for neuroscience, cross-species studies and genetics. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.


Subjects
Biological Adaptation, Auditory Perception, Cultural Evolution, Music, Sound, Voice, Humans, Learning
19.
Trends Cogn Sci; 25(2): 137-150, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33353800

ABSTRACT

Beat perception offers cognitive scientists an exciting opportunity to explore how cognition and action are intertwined in the brain even in the absence of movement. Many believe the motor system predicts the timing of beats, yet current models of beat perception do not specify how this is neurally implemented. Drawing on recent insights into the neurocomputational properties of the motor system, we propose that beat anticipation relies on action-like processes consisting of precisely patterned neural time-keeping activity in the supplementary motor area (SMA), orchestrated and sequenced by activity in the dorsal striatum. In addition to synthesizing recent advances in cognitive science and motor neuroscience, our framework provides testable predictions to guide future work.


Subjects
Motor Cortex, Music, Auditory Perception, Humans, Movement, Neurophysiology
20.
J Exp Psychol Hum Percept Perform; 47(12): 1681-1697, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34881953

ABSTRACT

In the speech-to-song illusion, certain spoken phrases are perceived as sung after repetition. One possible explanation for this increase in musicality is that, as phrases are repeated, lexical activation dies off, enabling listeners to focus on the melodic and rhythmic characteristics of stimuli and assess them for the presence of musical structure. Here we tested the idea that perception of the illusion requires implicit assessment of melodic and rhythmic structure by presenting individuals with phrases that tend to be perceived as song when repeated, as well as phrases that tend to continue to be perceived as speech when repeated, measuring the strength of the illusion as the rating difference between these two stimulus categories after repetition. Illusion strength varied widely and stably between listeners, with large individual differences and high split-half reliability, suggesting that not all listeners are equally able to detect musical structure in speech. Although variability in illusion strength was unrelated to degree of musical training, participants who perceived the illusion more strongly were proficient in several musical skills, including beat perception, tonality perception, and selective attention to pitch. These findings support models of the speech-to-song illusion in which experience of the illusion is based on detection of musical characteristics latent in spoken phrases.
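
Split-half reliability of a per-listener illusion-strength score is typically estimated by correlating scores from random halves of the items and applying the Spearman-Brown correction, as in the hedged sketch below (simulated data only, not the study's measurements).

```python
# Sketch of split-half reliability for a per-listener illusion-strength score.
import numpy as np

def split_half_reliability(item_scores, rng):
    """item_scores: (n_listeners, n_items) of per-item illusion-strength scores."""
    n_items = item_scores.shape[1]
    order = rng.permutation(n_items)
    half_a = item_scores[:, order[: n_items // 2]].mean(axis=1)
    half_b = item_scores[:, order[n_items // 2 :]].mean(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)                       # Spearman-Brown correction

rng = np.random.default_rng(4)
true_strength = rng.normal(1.0, 0.8, size=(120, 1))            # stable listener trait
scores = true_strength + rng.normal(0.0, 0.7, size=(120, 24))  # noisy per-item scores
print(f"split-half reliability ≈ {split_half_reliability(scores, rng):.2f}")
```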


Subjects
Illusions, Music, Speech Perception, Aptitude, Humans, Individuality, Pitch Perception, Reproducibility of Results, Speech