ABSTRACT
Detecting and learning structure in sounds is fundamental to human auditory perception. Evidence for auditory perceptual learning comes from previous studies where listeners were better at detecting repetitions of a short noise snippet embedded in longer, ongoing noise when the same snippet recurred across trials compared with when the snippet was novel in each trial. However, previous work has mainly used (a) temporally regular presentations of the repeating noise snippet and (b) highly predictable intertrial onset timings for the snippet sequences. As a result, it is unclear how these temporal features affect perceptual learning. In five online experiments, participants judged whether a repeating noise snippet was present, unaware that the snippet could be unique to that trial or used in multiple trials. In two experiments, temporal regularity was manipulated by jittering the timing of noise-snippet repetitions within a trial. In two subsequent experiments, temporal onset certainty was manipulated by varying the onset time of the entire snippet sequence across trials. We found that both temporal jittering and onset uncertainty reduced auditory perceptual learning. In addition, we observed that these reductions in perceptual learning were ameliorated when the same snippet occurred in both temporally manipulated and unmanipulated trials. Our study demonstrates the importance of temporal regularity and onset certainty for auditory perceptual learning. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Subjects
Auditory Perception, Learning, Acoustic Stimulation, Humans

ABSTRACT
Musical minimalism utilizes the temporal manipulation of restricted collections of rhythmic, melodic, and/or harmonic materials. One example, Steve Reich's Piano Phase, offers listeners readily audible formal structure with unpredictable events at the local level. For example, pattern recurrences may generate strong expectations which are violated by small temporal and pitch deviations. A hyper-detailed listening strategy prompted by these minute deviations stands in contrast to the type of listening engagement typically cultivated around functional tonal Western music. Recent research has suggested that the inter-subject correlation (ISC) of electroencephalographic (EEG) responses to natural audio-visual stimuli objectively indexes a state of "engagement," demonstrating the potential of this approach for analyzing music listening. But can ISCs capture engagement with minimalist music, which features less obvious expectation formation and has historically received a wide range of reactions? To approach this question, we collected EEG and continuous behavioral (CB) data while 30 adults listened to an excerpt from Steve Reich's Piano Phase, as well as three controlled manipulations and a popular-music remix of the work. Our analyses reveal that EEG and CB ISC are highest for the remix stimulus and lowest for our most repetitive manipulation, no statistical differences in overall EEG ISC between our most musically meaningful manipulations and Reich's original piece, and evidence that compositional features drove engagement in time-resolved ISC analyses. We also found that aesthetic evaluations corresponded well with overall EEG ISC. Finally, we highlight co-occurrences between stimulus events and time-resolved EEG and CB ISC. We offer the CB paradigm as a useful analysis measure and note the value of minimalist compositions as a limit case for the neuroscientific study of music listening.
Overall, our participants' neural, continuous behavioral, and question responses showed strong similarities that may help refine our understanding of the type of engagement indexed by ISC for musical stimuli.
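The ISC measure referenced above can be illustrated with a minimal sketch. Note that published EEG ISC analyses typically use correlated components analysis; as a simplifying assumption, the sketch below computes ISC as the mean pairwise Pearson correlation between subjects' response time series, a formulation commonly applied to continuous behavioral ratings. The function name and toy data are hypothetical.

```python
import numpy as np
from itertools import combinations

def pairwise_isc(data):
    """Mean pairwise Pearson correlation across subjects.

    data: array of shape (n_subjects, n_samples), one response time
    series per subject (e.g., an EEG component time course or a
    continuous behavioral rating).
    """
    pairs = combinations(range(data.shape[0]), 2)
    r = [np.corrcoef(data[i], data[j])[0, 1] for i, j in pairs]
    return float(np.mean(r))

# Toy check: a signal shared across subjects plus independent noise
# yields a clearly positive ISC; pure noise would hover near zero.
rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)
subjects = shared + 0.5 * rng.standard_normal((30, 1000))
print(pairwise_isc(subjects))
```

Time-resolved ISC, as in the study above, would apply the same computation within short sliding windows rather than over the whole recording.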
ABSTRACT
Sound predictability resulting from repetitive patterns can be implicitly learned and often neither requires nor captures our conscious attention. Recently, predictive coding theory has been used as a framework to explain how predictable or expected stimuli evoke and gradually attenuate obligatory neural responses over time compared to those elicited by unpredictable events. However, these results were obtained using the repetition of simple auditory objects such as pairs of tones or phonemes. Here we examined whether the same principle would hold for more abstract temporal structures of sounds. If this is the case, we hypothesized that a regular repetition schedule of a set of musical patterns would reduce neural processing over the course of listening compared to stimuli with an irregular repetition schedule (and the same set of musical patterns). Electroencephalography (EEG) was recorded while participants passively listened to 6-8 min stimulus sequences in which five different four-tone patterns with temporally regular or irregular repetition were presented successively in a randomized order. N1 amplitudes in response to the first tone of each musical pattern were significantly less negative at the end of the regular sequence compared to the beginning, while such reduction was absent in the irregular sequence. These results extend previous findings by showing that N1 reflects automatic learning of the predictable higher-order structure of sound sequences, while continuous engagement of preattentive auditory processing is necessary for the unpredictable structure.
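The N1 attenuation effect described above rests on a standard ERP quantification: epochs time-locked to pattern onset are averaged, and the mean voltage in a latency window is extracted. The following is a minimal numpy sketch of that step; the 80-120 ms window, array shapes, and function name are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def mean_erp_amplitude(epochs, times, window=(0.08, 0.12)):
    """Average single-trial epochs into an ERP and return the mean
    voltage inside a latency window.

    epochs: (n_trials, n_samples) baseline-corrected single-channel EEG,
            time-locked to the first tone of each pattern.
    times:  (n_samples,) time in seconds relative to tone onset.
    window: latency range in seconds; 80-120 ms is a conventional N1
            window (an illustrative choice, not the study's parameters).
    """
    erp = epochs.mean(axis=0)                            # trial-average waveform
    mask = (times >= window[0]) & (times <= window[1])   # samples in window
    return float(erp[mask].mean())                       # mean amplitude
```

Comparing this amplitude for early- versus late-sequence trials would then index attenuation: a less negative late-sequence N1 is consistent with learned predictability.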
Subjects
Anticipation, Psychological/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Music, Adult, Electroencephalography, Female, Humans, Male, Young Adult

ABSTRACT
Recent work in interpersonal coordination has revealed that neural oscillations, occurring spontaneously in the human brain, are modulated during the sensory, motor, and cognitive processes involved in interpersonal interactions. In particular, alpha-band (8-12 Hz) activity, linked to attention in general, is related to coordination dynamics and empathy traits. Researchers have also identified an association between each individual's attentiveness to their co-actor and the relative similarity in the co-actors' roles, influencing their behavioral synchronization patterns. We employed music ensemble performance to evaluate patterns of behavioral and neural activity when roles between co-performers are systematically varied with complete counterbalancing. Specifically, we designed a piano duet task with three types of co-actor dissimilarity, or asymmetry: (1) musical role (starting vs. joining), (2) musical task similarity (similar vs. dissimilar melodic parts), and (3) performer animacy (human-to-human vs. human-to-non-adaptive computer). We examined how the experience of these asymmetries in four initial musical phrases, played in alternation by the co-performers, influenced the pianists' performance of a subsequent unison phrase. Electroencephalography was recorded simultaneously from both performers while playing keyboards. We evaluated note-onset timing and alpha modulation around the unison phrase. We also investigated whether each individual's self-reported empathy was related to behavioral and neural activity. Our findings revealed closer behavioral synchronization when pianists played with a human vs. computer partner, likely because the computer was non-adaptive. When performers played with a human partner, or a joining performer played with a computer partner, having a similar vs. dissimilar musical part did not have a significant effect on their alpha modulation immediately prior to unison.
However, when starting performers played with a computer partner with a dissimilar vs. similar part, there was significantly greater alpha synchronization. In other words, starting players attended less to the computer partner playing a similar accompaniment, operating in a solo-like mode. Moreover, this alpha difference based on melodic similarity was related to a difference in note-onset adaptivity, which was in turn correlated with performer trait empathy. Collectively, our results extend previous findings by showing that musical ensemble performance gives rise to a socialized context whose lasting effects encompass attentiveness, perceptual-motor coordination, and empathy.
ABSTRACT
During joint action tasks, expectations for the outcomes of one's own and the other's actions are collectively monitored. Recent evidence suggests that trait empathy levels may also influence performance-monitoring processes. The present study investigated how outcome expectation and empathy interact during a turn-taking piano duet task, using simultaneous electroencephalography (EEG) recording. During the performances, one note in each player's part was altered in pitch to elicit the feedback-related negativity (FRN) and subsequent P3 complex. Pianists memorized and performed pieces containing either a similar or dissimilar sequence as their partner. For additional blocks, pianists also played both sequence types with an audio-only computer partner. The FRN and P3a were larger in response to self than to other, while the P3b occurred only in response to self, suggesting greater online monitoring of self- compared with other-produced actions during turn-taking joint action. The P3a was larger when pianists played a similar sequence as their partner. Finally, as trait empathy increased, the FRN in response to self decreased; this association was absent for the FRN in response to other. This may reflect a strategy by which highly empathetic musicians suppress an exclusive focus on self-monitoring during joint performance.