Results 1 - 20 of 66
1.
J Cogn Neurosci ; : 1-17, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38991125

ABSTRACT

Accumulating evidence suggests that rhythmic temporal cues in the environment influence the encoding of information into long-term memory. Here, we test the hypothesis that these mnemonic effects of rhythm reflect the coupling of high-frequency (gamma) oscillations to entrained lower-frequency oscillations synchronized to the beat of the rhythm. In Study 1, we first test this hypothesis in the context of global effects of rhythm on memory, when memory is superior for visual stimuli presented in rhythmic compared with arrhythmic patterns at encoding [Jones, A., & Ward, E. V. Rhythmic temporal structure at encoding enhances recognition memory, Journal of Cognitive Neuroscience, 31, 1549-1562, 2019]. We found that rhythmic presentation of visual stimuli during encoding was associated with greater phase-amplitude coupling (PAC) between entrained low-frequency (delta) oscillations and higher-frequency (gamma) oscillations. In Study 2, we next investigated cross-frequency PAC in the context of local effects of rhythm on memory encoding, when memory is superior for visual stimuli presented in-synchrony compared with out-of-synchrony with a background auditory beat (Hickey et al., 2020). We found that the mnemonic effect of rhythm in this context was again associated with increased cross-frequency PAC between entrained low-frequency (delta) oscillations and higher-frequency (gamma) oscillations. Furthermore, the magnitude of gamma power modulations positively scaled with the subsequent memory benefit for in- versus out-of-synchrony stimuli. Together, these results suggest that the influence of rhythm on memory encoding may reflect the temporal coordination of higher-frequency gamma activity by entrained low-frequency oscillations.
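The delta-gamma coupling analysis at the heart of both studies can be sketched with the mean-vector-length phase-amplitude coupling index. This is a generic illustration under assumed parameters (band edges, filter order, sampling rate, synthetic signals), not the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase band-pass via second-order sections (numerically stable)
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(x, fs, phase_band=(1, 4), amp_band=(30, 80)):
    # Mean-vector-length PAC: weight each delta-phase unit vector by the
    # gamma amplitude at that sample; a nonzero mean vector means gamma
    # power is concentrated at particular delta phases
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic check: gamma bursts locked to delta peaks yield a higher index
fs = 500
t = np.arange(0, 20, 1 / fs)
delta = np.sin(2 * np.pi * 2 * t)
coupled = delta + (1 + delta) * 0.3 * np.sin(2 * np.pi * 40 * t)
uncoupled = delta + 0.3 * np.sin(2 * np.pi * 40 * t)
assert modulation_index(coupled, fs) > modulation_index(uncoupled, fs)
```

In practice such an index is computed per trial or condition and compared against surrogate data (e.g., phase-shuffled signals) to assess significance.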

2.
Autism Res ; 17(6): 1230-1257, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38651566

ABSTRACT

Atypical predictive processing has been associated with autism across multiple domains, based mainly on artificial antecedents and consequents. As structured sequences where expectations derive from implicit learning of combinatorial principles, language and music provide naturalistic stimuli for investigating predictive processing. In this study, we matched melodic and sentence stimuli in cloze probabilities and examined musical and linguistic prediction in Mandarin- (Experiment 1) and English-speaking (Experiment 2) autistic and non-autistic individuals using both production and perception tasks. In the production tasks, participants listened to unfinished melodies/sentences and then produced the final notes/words to complete these items. In the perception tasks, participants provided expectedness ratings of the completed melodies/sentences based on the most frequent notes/words in the norms. While Experiment 1 showed intact musical prediction but atypical linguistic prediction in autism in the Mandarin sample, in which the groups were imbalanced in musical training experience and receptive vocabulary skills, the group difference disappeared in a more closely matched sample of English speakers in Experiment 2. These findings suggest the importance of taking an individual differences approach when investigating predictive processing in music and language in autism, as difficulties with prediction in autism may not reflect a generalized problem with any type of complex sequence processing.


Subjects
Autistic Disorder, Language, Music, Humans, Male, Female, Adult, Young Adult, Auditory Perception/physiology, Adolescent, Speech Perception/physiology
3.
Cognition ; 246: 105757, 2024 05.
Article in English | MEDLINE | ID: mdl-38442588

ABSTRACT

One of the most important auditory categorization tasks a listener faces is determining a sound's domain, a process which is a prerequisite for successful within-domain categorization tasks such as recognizing different speech sounds or musical tones. Speech and song are universal in human cultures: how do listeners categorize a sequence of words as belonging to one or the other of these domains? There is growing interest in the acoustic cues that distinguish speech and song, but it remains unclear whether there are cross-cultural differences in the evidence upon which listeners rely when making this fundamental perceptual categorization. Here we use the speech-to-song illusion, in which some spoken phrases perceptually transform into song when repeated, to investigate cues to this domain-level categorization in native speakers of tone languages (Mandarin and Cantonese speakers residing in the United Kingdom and China) and in native speakers of a non-tone language (English). We find that native tone-language and non-tone-language listeners largely agree on which spoken phrases sound like song after repetition, and we also find that the strength of this transformation is not significantly different across language backgrounds or countries of residence. Furthermore, we find a striking similarity in the cues upon which listeners rely when perceiving word sequences as singing versus speech, including small pitch intervals, flat within-syllable pitch contours, and steady beats. These findings support the view that there are certain widespread cross-cultural similarities in the mechanisms by which listeners judge if a word sequence is spoken or sung.


Subjects
Speech Perception, Speech, Humans, Cues, Language, Phonetics, Pitch Perception
4.
Cognition ; 243: 105672, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38086279

ABSTRACT

Pleasure in music has been linked to predictive coding of melodic and rhythmic patterns, subserved by connectivity between regions in the brain's auditory and reward networks. Specific musical anhedonics derive little pleasure from music and have altered auditory-reward connectivity, but no difficulties with music perception abilities and no generalized physical anhedonia. Recent research indicates that specific musical anhedonics experience pleasure in nonmusical sounds, suggesting that the implicated brain pathways may be specific to music reward. However, this work used sounds with clear real-world sources (e.g., babies laughing, crowds cheering), so positive hedonic responses could be based on the referents of these sounds rather than the sounds themselves. We presented specific musical anhedonics and matched controls with isolated short pleasing and displeasing synthesized sounds of varying timbres with no clear real-world referents. While the two groups found displeasing sounds equally displeasing, the musical anhedonics gave substantially lower pleasure ratings to the pleasing sounds, indicating that their sonic anhedonia is not limited to musical rhythms and melodies. Furthermore, across a large sample of participants, mean pleasure ratings for pleasing synthesized sounds predicted significant and similar variance in six dimensions of musical reward considered to be relatively independent, suggesting that pleasure in sonic timbres plays a role in eliciting reward-related responses to music. We replicate the earlier findings of preserved pleasure ratings for semantically referential sounds in musical anhedonics and find that pleasure ratings of semantic referents, when presented without sounds, correlated with ratings for the sounds themselves. This association was stronger in musical anhedonics than in controls, suggesting the use of semantic knowledge as a compensatory mechanism for affective sound processing. Our results indicate that specific musical anhedonia is not entirely specific to melodic and rhythmic processing, and suggest that timbre merits further research as a source of pleasure in music.


Subjects
Music, Humans, Music/psychology, Anhedonia/physiology, Pleasure/physiology, Brain/physiology, Reward, Auditory Perception/physiology
5.
Anim Behav ; 203: 193-206, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37842009

ABSTRACT

Humans readily recognize familiar rhythmic patterns, such as isochrony (equal timing between events) across a wide range of rates. This reflects a facility with perceiving the relative timing of events, not just absolute interval durations. Several lines of evidence suggest this ability is supported by precise temporal predictions arising from forebrain auditory-motor interactions. We have shown previously that male zebra finches, Taeniopygia guttata, which possess specialized auditory-motor networks and communicate with rhythmically patterned sequences, share our ability to flexibly recognize isochrony across rates. To test the hypothesis that flexible rhythm pattern perception is linked to vocal learning, we ask whether female zebra finches, which do not learn to sing, can also recognize global temporal patterns. We find that females can flexibly recognize isochrony across a wide range of rates but perform slightly worse than males on average. These findings are consistent with recent work showing that while females have reduced forebrain song regions, the overall network connectivity of vocal premotor regions is similar to males and may support predictions of upcoming events. Comparative studies of male and female songbirds thus offer an opportunity to study how individual differences in auditory-motor connectivity influence perception of relative timing, a hallmark of human music perception.

6.
Atten Percept Psychophys ; 84(8): 2702-2714, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36261763

ABSTRACT

Beat perception can serve as a window into internal time-keeping mechanisms, auditory-motor interactions, and aspects of cognition. One aspect of beat perception is the covert continuation of an internal pulse. Of the several popular tests of beat perception, none provide a satisfying test of this faculty of covert continuation. The current study proposes a new beat-perception test focused on covert pulse continuation: The Beat-Drop Alignment Test (BDAT). In this test, participants must identify the beat in musical excerpts and then judge whether a single probe falls on or off the beat. The probe occurs during a short break in the rhythmic components of the music when no rhythmic events are present, forcing participants to judge beat alignment relative to an internal pulse maintained in the absence of local acoustic timing cues. Here, we present two large (N > 100) tests of the BDAT. In the first, we explore the effect of test item parameters (e.g., probe displacement) on performance. In the second, we correlate scores on an adaptive version of the BDAT with scores on the computerized adaptive Beat Alignment Test (CA-BAT) and with indices of musical experience. Musical experience indices outperform CA-BAT score as a predictor of BDAT score, suggesting that the BDAT measures a distinct aspect of beat perception that is more experience-dependent and may draw on cognitive resources such as working memory and musical imagery differently than the BAT. The BDAT may prove useful in future behavioral and neural research on beat perception, and all stimuli and code are freely available for download.


Assuntos
Sinais (Psicologia) , Música , Humanos , Estimulação Acústica/métodos , Memória de Curto Prazo , Percepção , Percepção Auditiva
7.
J Neurolinguistics ; 62: 2022 May.
Article in English | MEDLINE | ID: mdl-35002061

ABSTRACT

Language and music rely on complex sequences organized according to syntactic principles that are implicitly understood by enculturated listeners. Across both domains, syntactic processing involves predicting and integrating incoming elements into higher-order structures. According to the Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003), musical and linguistic syntactic processing rely on shared resources for integrating incoming elements (e.g., chords, words) into unfolding sequences. One prediction of the SSIRH is that people with agrammatic aphasia (whose deficits are due to syntactic integration problems) should present with deficits in processing musical syntax. We report the first neural study to test this prediction: event-related potentials (ERPs) were measured in response to musical and linguistic syntactic violations in a group of people with agrammatic aphasia (n=7) compared to a group of healthy controls (n=14) using an acceptability judgement task. The groups were matched with respect to age, education, and extent of musical training. Violations were based on morpho-syntactic relations in sentences and harmonic relations in chord sequences. Both groups presented with a significant P600 response to syntactic violations across both domains. The aphasic participants presented with a reduced-amplitude posterior P600 compared to the healthy adults in response to linguistic, but not musical, violations. Participants with aphasia did however present with larger frontal positivities in response to violations in both domains. Intriguingly, extent of musical training was associated with larger posterior P600 responses to syntactic violations of language and music in both groups. Overall, these findings are not consistent with the predictions of the SSIRH, and instead suggest that linguistic, but not musical, syntactic processing may be selectively impaired in stroke-induced agrammatic aphasia. 
However, the findings also suggest a relationship between musical training and linguistic syntactic processing, which may have clinical implications for people with aphasia, and motivates more research on the relationship between these two domains.

8.
J Exp Psychol Hum Percept Perform ; 47(12): 1681-1697, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34881953

ABSTRACT

In the speech-to-song illusion, certain spoken phrases are perceived as sung after repetition. One possible explanation for this increase in musicality is that, as phrases are repeated, lexical activation dies off, enabling listeners to focus on the melodic and rhythmic characteristics of stimuli and assess them for the presence of musical structure. Here we tested the idea that perception of the illusion requires implicit assessment of melodic and rhythmic structure by presenting individuals with phrases that tend to be perceived as song when repeated, as well as phrases that tend to continue to be perceived as speech when repeated, measuring the strength of the illusion as the rating difference between these two stimulus categories after repetition. Illusion strength varied widely and stably between listeners, with large individual differences and high split-half reliability, suggesting that not all listeners are equally able to detect musical structure in speech. Although variability in illusion strength was unrelated to degree of musical training, participants who perceived the illusion more strongly were proficient in several musical skills, including beat perception, tonality perception, and selective attention to pitch. These findings support models of the speech-to-song illusion in which experience of the illusion is based on detection of musical characteristics latent in spoken phrases.
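The split-half reliability reported here is a standard computation that can be sketched as follows. The random split, the Spearman-Brown step-up correction, and the simulated data are conventional illustrative choices, not the study's actual code:

```python
import numpy as np

def split_half_reliability(ratings, seed=0):
    # ratings: (n_participants, n_trials) per-trial scores; correlate
    # participants' means over two random halves of the trials, then
    # apply the Spearman-Brown correction for halved test length
    rng = np.random.default_rng(seed)
    idx = rng.permutation(ratings.shape[1])
    half_a = ratings[:, idx[: idx.size // 2]].mean(axis=1)
    half_b = ratings[:, idx[idx.size // 2:]].mean(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown step-up

# Simulated listeners with stable individual differences show high
# reliability; pure trial-to-trial noise does not
rng = np.random.default_rng(1)
stable = rng.normal(0, 2, (50, 1)) + rng.normal(0, 1, (50, 40))
noise = rng.normal(0, 1, (50, 40))
assert split_half_reliability(stable) > split_half_reliability(noise)
```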


Subjects
Illusions, Music, Speech Perception, Aptitude, Humans, Individuality, Pitch Perception, Reproducibility of Results, Speech
9.
Behav Brain Sci ; 44: e85, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588015

ABSTRACT

Collective, synchronous music-making is far from ubiquitous across traditional, small-scale societies. We describe societies that lack collective music and offer hypotheses to help explain this cultural variation. Without identifying the factors that explain variation in collective music-making across these societies, theories of music evolution based on social bonding (Savage et al.) or coalition signaling (Mehr et al.) remain incomplete.


Subjects
Music, Cross-Cultural Comparison, Humans
10.
Philos Trans R Soc Lond B Biol Sci ; 376(1835): 20200326, 2021 10 11.
Article in English | MEDLINE | ID: mdl-34420384

ABSTRACT

The human capacity to synchronize movements to an auditory beat is central to musical behaviour and to debates over the evolution of human musicality. Have humans evolved any neural specializations for music processing, or does music rely entirely on brain circuits that evolved for other reasons? The vocal learning and rhythmic synchronization hypothesis proposes that our ability to move in time with an auditory beat in a precise, predictive and tempo-flexible manner originated in the neural circuitry for complex vocal learning. In the 15 years since the hypothesis was proposed, a variety of studies have supported it. However, one study has provided a significant challenge to the hypothesis. Furthermore, it is increasingly clear that vocal learning is not a binary trait animals have or lack, but varies more continuously across species. In the light of these developments and of recent progress in the neurobiology of beat processing and of vocal learning, the current paper revises the vocal learning hypothesis. It argues that an advanced form of vocal learning acts as a preadaptation for sporadic beat perception and synchronization (BPS), providing intrinsic rewards for predicting the temporal structure of complex acoustic sequences. It further proposes that in humans, mechanisms of gene-culture coevolution transformed this preadaptation into a genuine neural adaptation for sustained BPS. The larger significance of this proposal is that it outlines a hypothesis of cognitive gene-culture coevolution which makes testable predictions for neuroscience, cross-species studies and genetics. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.


Subjects
Adaptation, Biological, Auditory Perception, Cultural Evolution, Music, Sound, Voice, Humans, Learning
11.
Proc Natl Acad Sci U S A ; 118(29)2021 07 20.
Article in English | MEDLINE | ID: mdl-34272278

ABSTRACT

Rhythm perception is fundamental to speech and music. Humans readily recognize a rhythmic pattern, such as that of a familiar song, independently of the tempo at which it occurs. This shows that our perception of auditory rhythms is flexible, relying on global relational patterns more than on the absolute durations of specific time intervals. Given that auditory rhythm perception in humans engages a complex auditory-motor cortical network even in the absence of movement and that the evolution of vocal learning is accompanied by strengthening of forebrain auditory-motor pathways, we hypothesize that vocal learning species share our perceptual facility for relational rhythm processing. We test this by asking whether the best-studied animal model for vocal learning, the zebra finch, can recognize a fundamental rhythmic pattern-equal timing between event onsets (isochrony)-based on temporal relations between intervals rather than on absolute durations. Prior work suggests that vocal nonlearners (pigeons and rats) are quite limited in this regard and are biased to attend to absolute durations when listening to rhythmic sequences. In contrast, using naturalistic sounds at multiple stimulus rates, we show that male zebra finches robustly recognize isochrony independent of absolute time intervals, even at rates distant from those used in training. Our findings highlight the importance of comparative studies of rhythmic processing and suggest that vocal learning species are promising animal models for key aspects of human rhythm perception. Such models are needed to understand the neural mechanisms behind the positive effect of rhythm on certain speech and movement disorders.
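The distinction between relative timing (the pattern) and absolute durations (the rate) can be made concrete with a toy measure: the coefficient of variation of inter-onset intervals, which is identical for an isochronous sequence at any tempo. This is only an illustration of the concept, not the stimulus-design procedure used in the study:

```python
import numpy as np

def ioi_cv(onsets):
    # Coefficient of variation of inter-onset intervals:
    # 0 for a perfectly isochronous sequence, regardless of tempo,
    # so it captures relative timing rather than absolute durations
    ioi = np.diff(np.asarray(onsets, dtype=float))
    return ioi.std() / ioi.mean()

fast = np.arange(0, 2, 0.125)   # isochronous, 8 events/s
slow = np.arange(0, 8, 0.5)     # same pattern, 2 events/s
rng = np.random.default_rng(0)
jittered = np.cumsum(0.5 + rng.uniform(-0.2, 0.2, size=16))  # arrhythmic

assert ioi_cv(fast) < 1e-6 and ioi_cv(slow) < 1e-6  # rate-invariant
assert ioi_cv(jittered) > 0.05
```

A rate-biased listener attending only to absolute durations would treat `fast` and `slow` as unrelated stimuli, whereas a pattern-based listener treats them as the same rhythm.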


Subjects
Auditory Perception, Finches/physiology, Animals, Auditory Cortex/physiology, Female, Learning, Male, Pattern Recognition, Physiological, Sound, Voice
12.
Cereb Cortex ; 31(8): 3622-3640, 2021 07 05.
Article in English | MEDLINE | ID: mdl-33749742

ABSTRACT

Humans can mentally represent auditory information without an external stimulus, but the specificity of these internal representations remains unclear. Here, we asked how similar the temporally unfolding neural representations of imagined music are compared to those during the original perceived experience. We also tested whether rhythmic motion can influence the neural representation of music during imagery as during perception. Participants first memorized six 1-min-long instrumental musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) same but while tapping to the beat; and 3) passive listening. During imagery, inter-subject correlation analysis showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns were reinstated in more extensive temporal-lobe regions bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor to sensory influences in auditory processing.


Subjects
Brain Mapping, Imagination/physiology, Music/psychology, Acoustic Stimulation, Adolescent, Adult, Auditory Cortex/physiology, Auditory Perception/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Movement/physiology, Pitch Discrimination, Pitch Perception, Sensation/physiology, Young Adult
13.
Trends Cogn Sci ; 25(2): 137-150, 2021 02.
Article in English | MEDLINE | ID: mdl-33353800

ABSTRACT

Beat perception offers cognitive scientists an exciting opportunity to explore how cognition and action are intertwined in the brain even in the absence of movement. Many believe the motor system predicts the timing of beats, yet current models of beat perception do not specify how this is neurally implemented. Drawing on recent insights into the neurocomputational properties of the motor system, we propose that beat anticipation relies on action-like processes consisting of precisely patterned neural time-keeping activity in the supplementary motor area (SMA), orchestrated and sequenced by activity in the dorsal striatum. In addition to synthesizing recent advances in cognitive science and motor neuroscience, our framework provides testable predictions to guide future work.


Subjects
Motor Cortex, Music, Auditory Perception, Humans, Movement, Neurophysiology
14.
PLoS One ; 15(11): e0234668, 2020.
Article in English | MEDLINE | ID: mdl-33206657

ABSTRACT

Accumulating evidence suggests that rhythmic temporal structures in the environment influence memory formation. For example, stimuli that appear in synchrony with the beat of background environmental rhythms are better remembered than stimuli that appear out-of-synchrony with the beat. This rhythmic modulation of memory has been linked to entrained neural oscillations, which are proposed to act as a mechanism of selective attention that prioritizes processing of events that coincide with the beat. However, it is currently unclear whether rhythm influences memory formation by influencing early (sensory) or late (post-perceptual) processing of stimuli. The current study used stimulus-locked event-related potentials (ERPs) to investigate the locus of stimulus processing at which rhythmic temporal cues operate in the service of memory formation. Participants viewed a series of visual objects that either appeared in-synchrony or out-of-synchrony with the beat of background music and made a semantic classification (living/non-living) for each object. Participants' memory for the objects was then tested (in silence). The timing of stimulus presentation during encoding (in-synchrony or out-of-synchrony with the background beat) influenced later ERPs associated with post-perceptual selection and orienting attention in time rather than earlier ERPs associated with sensory processing. The magnitude of post-perceptual ERPs also differed according to whether or not participants demonstrated a mnemonic benefit for in-synchrony compared to out-of-synchrony stimuli, and was related to the magnitude of the rhythmic modulation of memory performance across participants. These results support two prominent theories in the field, the Dynamic Attending Theory and the Oscillation Selection Hypothesis, which propose that neural responses to rhythm act as a core mechanism of selective attention that optimizes processing at specific moments in time.
Furthermore, they reveal that in addition to acting as a mechanism of early attentional selection, rhythm influences later, post-perceptual cognitive processes as events are transformed into memory.


Subjects
Alpha Rhythm, Auditory Perception/physiology, Beta Rhythm, Evoked Potentials, Memory/physiology, Time Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Attention, Female, Humans, Male, Young Adult
15.
Neuroimage ; 213: 116693, 2020 06.
Article in English | MEDLINE | ID: mdl-32135262

ABSTRACT

Time is a critical component of episodic memory. Yet it is currently unclear how different types of temporal signals are represented in the brain and how these temporal signals support episodic memory. The current study investigated whether temporal cues provided by low-frequency environmental rhythms influence memory formation. Specifically, we tested the hypothesis that neural tracking of low-frequency rhythm serves as a mechanism of selective attention that dynamically biases the encoding of visual information at specific moments in time. Participants incidentally encoded a series of visual objects while passively listening to background, instrumental music with a steady beat. Objects either appeared in-synchrony or out-of-synchrony with the background beat. Participants were then given a surprise subsequent memory test (in silence). Results revealed significant neural tracking of the musical beat at encoding, evident in increased electrophysiological power and inter-trial phase coherence at the perceived beat frequency (1.25 Hz). Importantly, enhanced neural tracking of the background rhythm at encoding was associated with superior subsequent memory for in-synchrony compared to out-of-synchrony objects at test. Together, these results provide novel evidence that the brain spontaneously tracks low-frequency musical rhythm during naturalistic listening situations, and that the strength of this neural tracking is associated with the effects of rhythm on higher-order cognitive processes such as episodic memory.
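The neural-tracking measure described above, inter-trial phase coherence at the beat frequency, can be sketched in a few lines. The single-frequency Fourier-coefficient shortcut and all parameters below are illustrative assumptions rather than the study's analysis code:

```python
import numpy as np

def itpc_at_freq(trials, fs, freq):
    # Inter-trial phase coherence at one frequency: the length of the
    # mean unit phase vector across trials (0 = phase is random from
    # trial to trial, 1 = perfect phase locking)
    t = np.arange(trials.shape[1]) / fs
    coeff = trials @ np.exp(-2j * np.pi * freq * t)  # per-trial Fourier coefficient
    return np.abs(np.mean(np.exp(1j * np.angle(coeff))))

# Synthetic check at a 1.25 Hz "beat": phase-locked vs. random-phase trials
fs, n_trials = 250, 60
t = np.arange(int(fs * 8)) / fs  # 8 s = integer number of 1.25 Hz cycles
rng = np.random.default_rng(0)
locked = np.stack([np.sin(2 * np.pi * 1.25 * t + 0.3) + rng.normal(0, 1, t.size)
                   for _ in range(n_trials)])
random_phase = np.stack([np.sin(2 * np.pi * 1.25 * t + rng.uniform(0, 2 * np.pi))
                         + rng.normal(0, 1, t.size) for _ in range(n_trials)])
assert itpc_at_freq(locked, fs, 1.25) > itpc_at_freq(random_phase, fs, 1.25)
```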


Subjects
Auditory Perception/physiology, Brain/physiology, Memory, Episodic, Periodicity, Acoustic Stimulation/methods, Adolescent, Adult, Attention/physiology, Cues, Female, Humans, Male, Music, Young Adult
16.
Otol Neurotol ; 41(4): e422-e431, 2020 04.
Article in English | MEDLINE | ID: mdl-32176126

ABSTRACT

OBJECTIVE: Cochlear implant (CI) users struggle with tasks of pitch-based prosody perception. Pitch pattern recognition is vital for both music comprehension and understanding the prosody of speech, which signals emotion and intent. Research in normal-hearing individuals shows that auditory-motor training, in which participants produce the auditory pattern they are learning, is more effective than passive auditory training. We investigated whether auditory-motor training of CI users improves complex sound perception, such as vocal emotion recognition and pitch pattern recognition, compared with purely auditory training. STUDY DESIGN: Prospective cohort study. SETTING: Tertiary academic center. PATIENTS: Fifteen postlingually deafened adults with CIs. INTERVENTION(S): Participants were divided into 3 one-month training groups: auditory-motor (intervention), auditory-only (active control), and no training (control). Auditory-motor training was conducted with the "Contours" software program and auditory-only training was completed with the "AngelSound" software program. MAIN OUTCOME MEASURE: Pre- and posttest examinations included tests of speech perception (consonant-nucleus-consonant, hearing-in-noise test sentence recognition), speech prosody perception, pitch discrimination, and melodic contour identification. RESULTS: Participants in the auditory-motor training group performed better than those in the auditory-only and no-training groups on the melodic contour identification task (p < 0.05). No significant training effect was noted on tasks of speech perception, speech prosody perception, or pitch discrimination. CONCLUSIONS: These data suggest that short-term auditory-motor music training of CI users impacts pitch pattern recognition. This study offers approaches for enriching the world of complex sound in the CI user.


Subjects
Cochlear Implantation, Cochlear Implants, Music, Speech Perception, Adult, Auditory Perception, Humans, Pitch Perception, Prospective Studies
17.
Acta Psychol (Amst) ; 200: 102923, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31759191

ABSTRACT

Selective attention plays a key role in determining what aspects of our environment are encoded into long-term memory. Auditory rhythms with a regular beat provide temporal expectations that entrain attention and facilitate perception of visual stimuli aligned with the beat. The current study investigated whether entrainment to background auditory rhythms also facilitates higher-level cognitive functions such as episodic memory. In a series of experiments, we manipulated temporal attention through the use of rhythmic, instrumental music. In Experiments 1A and 1B, we found that background musical rhythm influenced the encoding of visual targets into memory, evident in enhanced subsequent memory for targets that appeared in-synchrony compared to out-of-synchrony with the background beat. Response times at encoding did not differ for in-synchrony compared to out-of-synchrony stimuli, suggesting that the rhythmic modulation of memory does not simply reflect rhythmic effects on perception and action. Experiment 2 investigated whether rhythmic effects on response times emerge when task procedures more closely match prior studies that have demonstrated significant auditory entrainment effects. Responses were faster for in-synchrony compared to out-of-synchrony stimuli when participants performed a more perceptually-oriented task that did not contain intervening recognition memory tests, suggesting that rhythmic effects on perception and action depend on the nature of the task demands. Together, these results support the hypothesis that rhythmic temporal regularities provided by background music can entrain attention and influence the encoding of visual stimuli into memory.


Subjects
Acoustic Stimulation/methods, Auditory Perception/physiology, Memory/physiology, Music/psychology, Reaction Time/physiology, Visual Perception/physiology, Adult, Attention/physiology, Female, Forecasting, Humans, Male, Photic Stimulation/methods, Young Adult
18.
Curr Biol ; 29(13): R621-R622, 2019 07 08.
Article in English | MEDLINE | ID: mdl-31287976

ABSTRACT

Spontaneous movement to music occurs in every human culture and is a foundation of dance [1]. This response to music is absent in most species (including monkeys), yet it occurs in parrots, perhaps because they (like humans, and unlike monkeys) are vocal learners whose brains contain strong auditory-motor connections, conferring sophisticated audiomotor processing abilities [2,3]. Previous research has shown that parrots can bob their heads or lift their feet in synchrony with a musical beat [2,3], but humans move to music using a wide variety of movements and body parts. Is this also true of parrots? If so, it would constrain theories of how movement to music is controlled by parrot brains. Specifically, as head bobbing is part of parrot courtship displays [4] and foot lifting is part of locomotion, these may be innate movements controlled by central pattern generators which become entrained by auditory rhythms, without the involvement of complex motor planning. This would be unlike humans, where movement to music engages cortical networks including frontal and parietal areas [5]. Rich diversity in parrot movement to music would suggest a strong contribution of forebrain regions to this behavior, perhaps including motor learning regions abutting the complex vocal-learning 'shell' regions that are unique to parrots among vocal learning birds [6]. Here we report that a sulphur-crested cockatoo (Cacatua galerita eleonora) responds to music with remarkably diverse spontaneous movements employing a variety of body parts, and suggest why parrots share this response with humans.


Subjects
Auditory Perception , Cockatoos , Movement , Music/psychology , Animals , Dance
19.
Cognition ; 189: 23-34, 2019 08.
Article in English | MEDLINE | ID: mdl-30913527

ABSTRACT

Expectation, or prediction, has become a major theme in cognitive science. Music offers a powerful system for studying how expectations are formed and deployed in the processing of richly structured sequences that unfold rapidly in time. We ask to what extent expectations about an upcoming note in a melody are driven by two distinct factors: Gestalt-like principles grounded in the auditory system (e.g., a preference for subsequent notes to move in small intervals), and statistical learning of melodic structure. We use multinomial regression modeling to evaluate the predictions of computationally implemented models of melodic expectation against behavioral data from a musical cloze task, in which participants hear a novel melodic opening and are asked to sing the note they expect to come next. We demonstrate that both Gestalt-like principles and statistical learning contribute to listeners' online expectations. In conjunction with results in the domain of language, our results point to a larger-than-previously-assumed role for statistical learning in predictive processing across cognitive domains, even in cases that seem potentially governed by a smaller set of theoretically motivated rules. However, we also find that both of the models tested here leave much variance in the human data unexplained, pointing to a need for models of melodic expectation that incorporate underlying hierarchical and/or harmonic structure. We propose that our combined behavioral (melodic cloze) and modeling (multinomial regression) approach provides a powerful method for further testing and development of models of melodic expectation.


Subjects
Anticipation, Psychological/physiology , Auditory Perception/physiology , Gestalt Theory , Models, Theoretical , Probability Learning , Adolescent , Adult , Female , Humans , Male , Music , Young Adult
20.
Front Psychol ; 9: 2172, 2018.
Article in English | MEDLINE | ID: mdl-30459693

ABSTRACT

Synchronized movements with external periodic rhythms, such as dancing to a beat, are commonly observed in daily life. Although it has been well established that some vocal learning species (including parrots and humans) spontaneously develop this ability, it has only recently been shown that monkeys are also capable of predictive and tempo-flexible synchronization to periodic stimuli. In our previous study, monkeys were trained to make predictive saccades for alternately presented visual stimuli at fixed stimulus onset asynchronies (SOAs) to obtain a liquid reward. The monkeys generalized predictive synchronization to novel SOAs in the middle of the trained range, suggesting a capacity for tempo-flexible synchronization. However, it is possible that when encountering a novel tempo, the monkeys might sample learned saccade sequences from those for the short and long SOAs so that the mean saccade interval matched the untrained SOA. To eliminate this possibility, in the current study we tested monkeys on novel SOAs outside the trained range. Animals were trained to generate synchronized eye movements for 600- and 900-ms SOAs for a few weeks, and then were tested on longer SOAs. The accuracy and precision of predictive saccades for one untrained SOA (1200 ms) were comparable to those for the trained conditions. On the other hand, the variance of predictive saccade latency and the proportion of reactive saccades increased significantly in the longer SOA conditions (1800 and 2400 ms), indicating that temporal prediction of periodic stimuli was difficult in this range, similar to previous results on synchronized tapping in humans. Our results suggest that monkeys might share similar synchronization mechanisms with humans, which can be subject to physiological examination in future studies.
