Results 1 - 20 of 56
1.
iScience ; 27(6): 109964, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38832017

ABSTRACT

Music and social interactions represent two of the most important sources of pleasure in our lives, both engaging the mesolimbic dopaminergic system. However, there is limited understanding regarding whether and how sharing a musical activity in a social context influences and modifies individuals' rewarding experiences. Here, we aimed at (1) modulating the pleasure derived from music under different social scenarios and (2) further investigating its impact on reward-related prosocial behavior and memory. Across three online experiments, we simulated a socially shared music listening and found that participants' music reward was significantly modulated by the social context, with higher reported pleasure for greater levels of social sharing. Furthermore, the increased pleasure reported by the participants positively influenced prosocial behavior and memory outcomes, highlighting the facilitating role of socially boosted reward. These findings provide evidence about the rewarding nature of socially driven music experiences, with important potential implications in educational and clinical settings.

2.
Sci Rep ; 14(1): 13112, 2024 06 07.
Article in English | MEDLINE | ID: mdl-38849348

ABSTRACT

Music provides a reward that can enhance learning and motivation in humans. While music is often combined with exercise to improve performance and upregulate mood, the relationship between music-induced reward and motor output is poorly understood. Here, we study music reward and motor output at the same time by capitalizing on music playing. Specifically, we investigate the effects of music improvisation and live accompaniment on motor, autonomic, and affective responses. Thirty adults performed a drumming task while (i) improvising or maintaining the beat and (ii) with live or recorded accompaniment. Motor response was characterized by acceleration of hand movements (accelerometry), wrist flexor and extensor muscle activation (electromyography), and the drum strike count (i.e., the number of drum strikes played). Autonomic arousal was measured by tonic response of electrodermal activity (EDA) and heart rate (HR). Affective responses were measured by a 12-item Likert scale. The combination of improvisation and live accompaniment, as compared to all other conditions, significantly increased acceleration of hand movements and muscle activation, as well as participant reports of reward during music playing. Improvisation, regardless of type of accompaniment, increased the drum strike count and autonomic arousal (including tonic EDA responses and several measures of HR), as well as participant reports of challenge. Importantly, increased motor response was associated with increased reward ratings during music improvisation, but not while participants were maintaining the beat. The increased motor responses achieved with improvisation and live accompaniment have important implications for enhancing dose of movement during exercise and physical rehabilitation.


Subject(s)
Electromyography , Music , Reward , Humans , Music/psychology , Male , Female , Adult , Young Adult , Heart Rate/physiology , Movement/physiology , Hand/physiology , Psychomotor Performance/physiology , Motivation/physiology
3.
PLoS Biol ; 22(5): e3002622, 2024 May.
Article in English | MEDLINE | ID: mdl-38814982

ABSTRACT

Combinatoric linguistic operations underpin human language processes, but how meaning is composed and refined in the mind of the reader is not well understood. We address this puzzle by exploiting the ubiquitous function of negation. We track the online effects of negation ("not") and intensifiers ("really") on the representation of scalar adjectives (e.g., "good") in parametrically designed behavioral and neurophysiological (MEG) experiments. The behavioral data show that participants first interpret negated adjectives as affirmative and later modify their interpretation towards, but never exactly as, the opposite meaning. Decoding analyses of neural activity further reveal significant above chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., "not bad" represented as "good"); furthermore, decoding accuracy for negated adjectives is found to be significantly lower than that for affirmative adjectives. Overall, these results suggest that negation mitigates rather than inverts the neural representations of adjectives. This putative suppression mechanism of negation is supported by increased synchronization of beta-band neural activity in sensorimotor areas. The analysis of negation provides a steppingstone to understand how the human brain represents changes of meaning over time.
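The time-resolved decoding analyses described above are, at their core, classifiers tested on held-out trials: above-chance accuracy indicates that two conditions evoke distinguishable neural patterns, and lower accuracy for one condition suggests a weaker (here, mitigated rather than inverted) representation. A minimal sketch of this logic, using a nearest-centroid classifier on simulated data rather than MEG recordings; the condition labels, dimensions, and effect sizes are invented for illustration only:

```python
import random

random.seed(0)

def make_trials(mean, n=50, dim=10, noise=1.0):
    # Simulated sensor patterns: each trial is a noisy vector around `mean`.
    return [[random.gauss(mean, noise) for _ in range(dim)] for _ in range(n)]

def nearest_centroid_accuracy(train, test):
    """Classify each test trial by its closest class centroid (Euclidean)."""
    centroids = {}
    for label, trials in train.items():
        dim = len(trials[0])
        centroids[label] = [sum(t[i] for t in trials) / len(trials)
                            for i in range(dim)]
    correct = total = 0
    for label, trials in test.items():
        for t in trials:
            dists = {c: sum((t[i] - mu[i]) ** 2 for i in range(len(mu)))
                     for c, mu in centroids.items()}
            correct += min(dists, key=dists.get) == label
            total += 1
    return correct / total

# Two conditions, with the "negated" pattern shifted toward the
# "affirmative" one (a weaker, but not inverted, representation).
train = {"affirmative": make_trials(1.0), "negated": make_trials(0.4)}
test = {"affirmative": make_trials(1.0), "negated": make_trials(0.4)}
acc = nearest_centroid_accuracy(train, test)
```

Because the simulated patterns differ between conditions, accuracy lands well above the 50% chance level, which is the signature the decoding analyses test for.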


Subject(s)
Language , Humans , Female , Male , Adult , Young Adult , Brain/physiology , Magnetoencephalography/methods , Semantics , Linguistics/methods
4.
Dev Sci ; : e13513, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38685611

ABSTRACT

Relatively little work has focused on why we are motivated to learn words. In adults, recent experiments have shown that intrinsic reward signals accompany successful word learning from context. In addition, the experience of reward facilitated long-term memory for words. In adolescence, developmental changes are seen in reward and motivation systems as well as in reading and language systems. Here, in the face of this developmental change, we ask whether adolescents experience reward from word learning, and how the reward and memory benefits seen in adults are modulated by age. We used a naturalistic reading paradigm, which involved extracting novel word meanings from sentence context without the need for explicit feedback. By exploring ratings of enjoyment during the learning phase, as well as recognition memory for words a day later, we assessed whether adolescents show the same reward and learning patterns as adults. We tested 345 children between the ages of 10 and 18 (N > 84 in each 2-year age band) using this paradigm. We found evidence for our first prediction: children aged 10-18 report greater enjoyment for successful word learning. However, we did not find evidence for age-related change in this developmental period, or for memory benefits. This work gives us greater insight into the process of language acquisition and sets the stage for further investigations of intrinsic reward in typical and atypical development. RESEARCH HIGHLIGHTS: We constantly learn words from context, even in the absence of explicit rewards or feedback. In adults, intrinsic reward experienced during word learning is linked to a dopaminergic circuit in the brain, which also fuels enhancements in memory for words. We find that adolescents also report enhanced reward or enjoyment when they successfully learn words from sentence context. The relationship between reward and learning is maintained between the ages of 10 and 18. Unlike in adults, we did not observe ensuing memory benefits.

5.
Ann N Y Acad Sci ; 1535(1): 121-136, 2024 May.
Article in English | MEDLINE | ID: mdl-38566486

ABSTRACT

While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.
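The abstract does not specify its computational surprisal measure, but a common proxy is the negative log-probability of each event under a sequence model trained on a reference corpus: predictable music yields low mean surprisal, varied music high. A minimal sketch under that assumption, using an add-alpha-smoothed bigram model over toy chord symbols (the corpus and progressions are invented for illustration):

```python
import math
from collections import Counter

def bigram_surprisal(corpus, sequence, alpha=1.0):
    """Mean surprisal (-log2 P) of each transition in `sequence`,
    under an add-alpha-smoothed bigram model trained on `corpus`."""
    vocab = set(corpus) | set(sequence)
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus[:-1])
    scores = []
    for prev, cur in zip(sequence, sequence[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * len(vocab))
        scores.append(-math.log2(p))
    return sum(scores) / len(scores)

# Toy chord corpus: a highly repetitive progression, standing in for the
# predictable material typical of elevator music.
corpus = ["C", "G", "Am", "F"] * 50
predictable = ["C", "G", "Am", "F", "C", "G", "Am", "F"]
surprising = ["C", "F", "G", "Am", "F", "C", "Am", "G"]

low = bigram_surprisal(corpus, predictable)   # about 0.08 bits per transition
high = bigram_surprisal(corpus, surprising)   # about 3.3 bits per transition
```

The gap between the two scores is the model-based sense in which predictable material is "less surprising" than varied material.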


Subject(s)
Auditory Perception , Emotions , Music , Reward , Music/psychology , Humans , Male , Female , Emotions/physiology , Adult , Auditory Perception/physiology , Pleasure/physiology , Young Adult , Acoustic Stimulation/methods
6.
Cognition ; 245: 105737, 2024 04.
Article in English | MEDLINE | ID: mdl-38342068

ABSTRACT

Phonological statistical learning - our ability to extract meaningful regularities from spoken language - is considered critical in the early stages of language acquisition, in particular for helping to identify discrete words in continuous speech. Most phonological statistical learning studies use an experimental task introduced by Saffran et al. (1996), in which the syllables forming the words to be learned are presented continuously and isochronously. This raises the question of the extent to which this purportedly powerful learning mechanism is robust to the kinds of rhythmic variability that characterize natural speech. Here, we tested participants with arrhythmic, semi-rhythmic, and isochronous speech during learning. In addition, we investigated how input rhythmicity interacts with two other factors previously shown to modulate learning: prior knowledge (syllable order plausibility with respect to participants' first language) and learners' speech auditory-motor synchronization ability. We show that words are extracted by all learners even when the speech input is completely arrhythmic. Interestingly, high auditory-motor synchronization ability increases statistical learning when the speech input is temporally more predictable, but only when prior knowledge can also be used. This suggests an additional mechanism for learning based on predictions not only about when but also about what upcoming speech will be.
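In Saffran-style tasks, the regularity learners exploit is usually formalized as the transitional probability between adjacent syllables: high within a word, low across word boundaries. A minimal sketch of that computation on a toy familiarization stream (the syllables and "words" here are invented for illustration, not the actual stimuli):

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """P(next syllable | current syllable), estimated from a syllable stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy stream: three trisyllabic "words" concatenated in random order with
# no pauses, mimicking the continuous familiarization stream.
random.seed(1)
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("bi", "da", "ku")]
stream = []
for _ in range(200):
    stream.extend(random.choice(words))

tp = transitional_probabilities(stream)
# Within-word transitions are deterministic (TP = 1.0), while transitions
# across word boundaries hover around 1/3: this contrast is the cue
# learners are thought to exploit when segmenting words.
```

Note that this cue survives arbitrary timing between syllables, which is consistent with the finding that words are extracted even from a completely arrhythmic stream.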


Subject(s)
Individuality , Speech Perception , Humans , Learning , Linguistics , Language Development , Speech
7.
Sci Rep ; 14(1): 3262, 2024 02 08.
Article in English | MEDLINE | ID: mdl-38332159

ABSTRACT

The McGurk effect refers to an audiovisual speech illusion in which discrepant auditory and visual syllables produce a fused percept combining the visual and auditory components. However, little is known about how individual differences contribute to the McGurk effect. Here, we examined whether music training experience, which involves audiovisual integration, can modulate the McGurk effect. Seventy-three participants completed the Goldsmiths Musical Sophistication Index (Gold-MSI) questionnaire to evaluate their music expertise on a continuous scale. The Gold-MSI considers participants' daily-life exposure to music learning experiences (formal and informal), instead of merely classifying people into different groups according to how many years they have been trained in music. Participants were instructed to report, via a 3-alternative forced-choice task, "what a person said": /Ba/, /Ga/, or /Da/. The experiment consisted of 96 audiovisual congruent trials and 96 audiovisual incongruent (McGurk) trials. We observed no significant correlations between susceptibility to the McGurk effect and the different subscales of the Gold-MSI (active engagement, perceptual abilities, music training, singing abilities, emotion) or the general musical sophistication composite score. Together, these findings suggest that music training experience does not modulate audiovisual integration in speech as reflected by the McGurk effect.


Subject(s)
Music , Speech Perception , Humans , Visual Perception , Speech , Gold , Auditory Perception , Acoustic Stimulation
8.
J Vis ; 23(13): 6, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37971770

ABSTRACT

What role do the emotions of subject and object play in judging the beauty of images and music? Eighty-one participants rated perceived beauty, liking, perceived happiness, and perceived sadness of 24 songs, 12 art images, and 12 nature photographs. Stimulus presentation was brief (2 seconds) or prolonged (20 seconds). The stimuli were presented in two blocks, and participants took the Positive and Negative Affect Schedule (PANAS) mood questionnaire before and after each block. They viewed a mood induction video between blocks either to increase their happiness or sadness or to maintain their mood. Using linear mixed-effects models, we found that perceived object happiness predicts an increase in image and song beauty regardless of duration. The effect of perceived object sadness on beauty, however, is stronger for songs than images and stronger for prolonged than brief durations. Subject emotion affects brief song beauty minimally and prolonged song beauty substantially. Whereas past studies of beauty and emotion emphasized sad music, here we analyze both happiness and sadness, both subject and object emotion, and both images and music. We conclude that the interactions between emotion and beauty are different for images and music and are strongly moderated by duration.


Subject(s)
Music , Humans , Music/psychology , Emotions , Happiness , Linear Models , Time Factors
9.
NPJ Sci Learn ; 8(1): 2, 2023 Jan 06.
Article in English | MEDLINE | ID: mdl-36609382

ABSTRACT

Incentives can decrease performance by undermining intrinsic motivation. How such an interplay of external reinforcers and internal self-regulation influences memory processes, however, is less well known. Here, we investigated how they interact to influence memory performance while participants learned the meaning of new words from context. Specifically, participants inferred congruent meanings of new words from semantic context (congruent trials) or detected a lack of congruence (incongruent trials), while receiving external feedback in the first or second half of trials only. Removing feedback during learning of congruent word meanings lowered subsequent recognition rates a day later, whereas recognition remained high in the group that received feedback only in the second half. In contrast, feedback did not substantially alter recognition rates for learning that new words had no congruent meanings. Our findings suggest that external reinforcers can selectively impair memories if internal self-regulated processes are not already established, but whether they do so depends on what is being learned (specific word meanings vs. unspecific incongruence). This highlights the relevance of self-regulated learning in education to support stable memory formation.

10.
Ann N Y Acad Sci ; 1519(1): 186-198, 2023 01.
Article in English | MEDLINE | ID: mdl-36401802

ABSTRACT

The COVID-19 pandemic has deeply affected the mental health of millions of people. We assessed which of many leisure activities correlated with positive mental health outcomes, with particular attention to music, which has been reported to be important for coping with the psychological burden of the pandemic. Questionnaire data from about 1000 individuals primarily from Italy, Spain, and the United States during May-June 2020 show that people picked music activities (listening to, playing, singing, etc.) most often as the leisure experiences that helped them the most to cope with psychological distress related to the pandemic. During the pandemic, hours of engagement in music and food-related activities were associated with lower depressive symptoms. The negative correlation between music and depression was mediated by individual differences in sensitivity to reward, whereas the correlation between food-related activities and improved mental health outcomes was explained by differences in emotion suppression strategies. Our results, while correlational, suggest that engaging in music activities could be related to improved well-being, with the underlying mechanism being related to reward, consistent with neuroscience findings. Our data have practical significance in pointing to effective strategies to cope with mental health issues beyond those related to the COVID-19 pandemic.


Subject(s)
COVID-19 , Music , Humans , Music/psychology , Depression/epidemiology , Pandemics , COVID-19/epidemiology , Reward
11.
PLoS Biol ; 20(7): e3001712, 2022 07.
Article in English | MEDLINE | ID: mdl-35793349

ABSTRACT

People of all ages display the ability to detect and learn from patterns in seemingly random stimuli. Referred to as statistical learning (SL), this process is particularly critical when learning a spoken language, helping in the identification of discrete words within a spoken phrase. Here, by considering individual differences in speech auditory-motor synchronization, we demonstrate that recruitment of a specific neural network supports behavioral differences in SL from speech. While independent component analysis (ICA) of fMRI data revealed that a network of auditory and superior pre/motor regions is universally activated in the process of learning, a frontoparietal network is additionally and selectively engaged by only some individuals (high auditory-motor synchronizers). Importantly, activation of this frontoparietal network is related to a boost in learning performance, and interference with this network via articulatory suppression (AS; i.e., producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights on SL from speech and reconciles previous contrasting findings. These findings also highlight a more general need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.


Subject(s)
Speech Perception , Speech , Brain Mapping , Humans , Magnetic Resonance Imaging , Speech/physiology , Speech Perception/physiology
12.
Front Integr Neurosci ; 16: 869571, 2022.
Article in English | MEDLINE | ID: mdl-35600224

ABSTRACT

Stuttering is a neurodevelopmental speech disorder associated with motor timing that differs from that of non-stutterers. While neurodevelopmental disorders impacted by timing are associated with compromised auditory-motor integration and interoception, the interplay between those abilities and stuttering remains unexplored. Here, we studied the relationships between speech auditory-motor synchronization (a proxy for auditory-motor integration), interoceptive awareness, and self-reported stuttering severity using remotely delivered assessments. Results indicate that, in general, stutterers and non-stutterers exhibit similar auditory-motor integration and interoceptive abilities. However, while speech auditory-motor synchrony (i.e., integration) and interoceptive awareness were not related, speech synchrony was inversely related to the speaker's perception of stuttering severity as perceived by others, and interoceptive awareness was inversely related to self-reported stuttering impact. These findings support claims that stuttering is a heterogeneous, multi-faceted disorder, such that uncorrelated auditory-motor integration and interoception measurements predicted different aspects of stuttering, suggesting two unrelated sources of timing differences associated with the disorder.

13.
J Assoc Res Otolaryngol ; 23(2): 151-166, 2022 04.
Article in English | MEDLINE | ID: mdl-35235100

ABSTRACT

Distinguishing between regular and irregular heartbeats, conversing with speakers of different accents, and tuning a guitar all rely on some form of auditory learning. What drives these experience-dependent changes? A growing body of evidence suggests an important role for non-sensory influences, including reward, task engagement, and social or linguistic context. This review is a collection of contributions that highlight how these non-sensory factors shape auditory plasticity and learning at the molecular, physiological, and behavioral level. We begin by presenting evidence that reward signals from the dopaminergic midbrain act on cortico-subcortical networks to shape sound-evoked responses of auditory cortical neurons, facilitate auditory category learning, and modulate the long-term storage of new words and their meanings. We then discuss the role of task engagement in auditory perceptual learning and suggest that plasticity in top-down cortical networks mediates learning-related improvements in auditory cortical and perceptual sensitivity. Finally, we present data that illustrates how social experience impacts sound-evoked activity in the auditory midbrain and forebrain and how the linguistic environment rapidly shapes speech perception. These findings, which are derived from both human and animal models, suggest that non-sensory influences are important regulators of auditory learning and plasticity and are often implemented by shared neural substrates. Application of these principles could improve clinical training strategies and inform the development of treatments that enhance auditory learning in individuals with communication disorders.


Subject(s)
Auditory Cortex , Neuronal Plasticity , Animals , Auditory Cortex/physiology , Auditory Perception/physiology , Neuronal Plasticity/physiology
14.
STAR Protoc ; 3(2): 101248, 2022 06 17.
Article in English | MEDLINE | ID: mdl-35310080

ABSTRACT

The ability to synchronize a motor action to a rhythmic auditory stimulus is often considered an innate human skill. However, some individuals lack the ability to synchronize speech to a perceived syllabic rate. Here, we describe a simple and fast protocol to classify a single native English speaker as being or not being a speech synchronizer. This protocol consists of four parts: the pretest instructions and volume adjustment, the training procedure, the execution of the main task, and data analysis. For complete details on the use and execution of this protocol, please refer to Assaneo et al. (2019a).


Subject(s)
Acoustic Stimulation , Speech , Humans
15.
Eur J Neurol ; 29(3): 873-882, 2022 03.
Article in English | MEDLINE | ID: mdl-34661326

ABSTRACT

BACKGROUND AND PURPOSE: This study was undertaken to determine and compare lesion patterns and structural dysconnectivity underlying poststroke aprosodia and amusia, using a data-driven multimodal neuroimaging approach. METHODS: Thirty-nine patients with right or left hemisphere stroke were enrolled in a cohort study and tested for linguistic and affective prosody perception and musical pitch and rhythm perception at the subacute and 3-month poststroke stages. Participants listened to words spoken with different prosodic stress that changed their meaning, and to words spoken with six different emotions, and chose which meaning or emotion was expressed. In the music tasks, participants judged pairs of short melodies as the same or different in terms of pitch or rhythm. Structural magnetic resonance imaging data were acquired at both stages, and machine learning-based lesion-symptom mapping and deterministic tractography were used to identify lesion patterns and damaged white matter pathways giving rise to aprosodia and amusia. RESULTS: Aprosodia and amusia were strongly correlated behaviorally and associated with similar lesion patterns in right frontoinsular and striatal areas. In multiple regression models, reduced fractional anisotropy and lower tract volume of the right inferior fronto-occipital fasciculus were the strongest predictors of both disorders over time. CONCLUSIONS: These results highlight a common origin of aprosodia and amusia, both arising from damage and disconnection of the right ventral auditory stream, which integrates rhythmic-melodic acoustic information in prosody and music. Comorbidity of these disabilities may worsen the prognosis and affect rehabilitation success.


Subject(s)
Auditory Perceptual Disorders , Music , Auditory Perceptual Disorders/etiology , Cohort Studies , Humans , Magnetic Resonance Imaging , Speech Disorders
16.
PLoS Biol ; 19(9): e3001119, 2021 09.
Article in English | MEDLINE | ID: mdl-34491980

ABSTRACT

Statistical learning (SL) is the ability to extract regularities from the environment. In the domain of language, this ability is fundamental to the learning of words and structural rules. In the absence of reliable online measures, statistical word and rule learning have been primarily investigated using offline (post-familiarization) tests, which give limited insight into the dynamics of SL and its neural basis. Here, we capitalize on a novel task that tracks the online SL of simple syntactic structures, combined with computational modeling, to show that online SL responds to reinforcement learning principles rooted in striatal function. Specifically, we demonstrate, in 2 different cohorts, that a temporal difference model, which relies on prediction errors, accounts for participants' online learning behavior. We then show that the trial-by-trial development of predictions through learning strongly correlates with activity in both ventral and dorsal striatum. Our results thus provide a detailed mechanistic account of language-related SL and an explanation for the oft-cited implication of the striatum in SL tasks. This work, therefore, bridges the long-standing gap between language learning and reinforcement learning phenomena.
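The temporal difference model invoked here is a standard reinforcement-learning account: each observation generates a prediction error (the gap between expected and obtained outcome), and value estimates are nudged by a fraction of that error. A minimal TD(0) sketch, with states, rewards, and parameter values invented purely for illustration, showing how prediction errors propagate value back to predictive elements:

```python
def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V(state) toward reward + gamma * V(next_state),
    by a fraction alpha of the prediction error."""
    prediction_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * prediction_error
    return prediction_error

# Toy two-element "phrase": element B reliably yields reward; TD learning
# propagates value back to the predictive element A.
values = {"A": 0.0, "B": 0.0, "end": 0.0}
for _ in range(200):
    td_update(values, "A", "B", reward=0.0)
    td_update(values, "B", "end", reward=1.0)
# values["B"] converges to 1.0 and values["A"] to gamma * 1.0 = 0.9,
# even though A itself is never directly rewarded.
```

It is this trial-by-trial prediction-error signal, fitted to behavior, that the authors correlate with striatal activity.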


Subject(s)
Corpus Striatum/physiology , Language Development , Probability Learning , Reinforcement, Psychology , Corpus Striatum/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Pattern Recognition, Physiological , Young Adult
17.
Ann N Y Acad Sci ; 1502(1): 85-98, 2021 10.
Article in English | MEDLINE | ID: mdl-34247392

ABSTRACT

Music listening provides one of the most significant abstract rewards for humans because hearing music activates the dopaminergic mesolimbic system. Given the strong link between reward, dopamine, and memory, we aimed here to investigate the hypothesis that dopamine-dependent musical reward can drive memory improvements. Twenty-nine healthy participants of both sexes provided reward ratings of unfamiliar musical excerpts that had to be remembered following a consolidation period under three separate conditions: after the ingestion of a dopaminergic antagonist, a dopaminergic precursor, or a placebo. Linear mixed modeling of the intervention data showed that the effect of reward on memory (i.e., the greater the reward experienced while listening to the musical excerpts, the better the memory recollection performance) was modulated by both dopaminergic signaling and individual differences in reward processing. Greater pleasure was consistently associated with better memory outcomes in participants with high sensitivity to musical reward, but this effect was lost when dopaminergic signaling was disrupted in participants with average or low musical hedonia. Our work highlights the flexibility of the human dopaminergic system, which can enhance memory formation not only through explicit and/or primary reinforcers but also via abstract and aesthetic rewards such as music.


Subject(s)
Dopamine/metabolism , Memory Consolidation , Mental Recall , Music , Reward , Adult , Analysis of Variance , Auditory Perception , Brain/physiology , Humans , Pleasure , Young Adult
18.
Front Psychol ; 12: 673772, 2021.
Article in English | MEDLINE | ID: mdl-34262511

ABSTRACT

The COVID-19 pandemic and the measures taken to mitigate its impact (e.g., confinement orders) have affected people's lives in profound ways that would have been unimaginable only months before the pandemic began. Media reports from the height of the pandemic's initial international surge frequently highlighted that many people were engaging in music-related activities (from singing and dancing to playing music from balconies and attending virtual concerts) to help them cope with the strain of the pandemic. Our first goal in this study was to investigate changes in music-related habits due to the pandemic. We also investigated whether engagement in distinct music-related activities (singing, listening, dancing, etc.) was associated with individual differences in musical reward, music perception, musical training, or emotional regulation strategies. To do so, we collected detailed (~1 h-long) surveys during the initial peak of shelter-in-place order implementation (May-June 2020) from over a thousand individuals across different countries in which the pandemic was especially devastating at that time: the USA, Spain, and Italy. Our findings indicate that, on average, people spent more time in music-related activities while under confinement than they had before the pandemic. Notably, this change in behavior was dependent on individual differences in music reward sensitivity and in emotional regulation strategies. Finally, the type of musical activity with which individuals engaged was further associated with the degree to which they used music as a way to regulate stress, to address the lack of social interaction (especially among individuals more concerned about the risk of contracting the virus), or to cheer themselves up (especially among those more worried about the pandemic's consequences). Identifying which music-related activities were particularly sought out by the population as a means of coping with such heightened uncertainty and stress, and understanding the individual differences that underlie these propensities, is crucial to implementing personalized music-based interventions that aim to reduce stress, anxiety, and depression symptoms.

19.
eNeuro ; 2021 Jun 17.
Article in English | MEDLINE | ID: mdl-34140351

ABSTRACT

Listening to vocal music has recently been shown to improve language recovery in stroke survivors. The neuroplasticity mechanisms supporting this effect are, however, still unknown. Using data from a three-arm single-blind randomized controlled trial including acute stroke patients (N=38) and a 3-month follow-up, we set out to compare the neuroplasticity effects of daily listening to self-selected vocal music, instrumental music, and audiobooks on both brain activity and structural connectivity of the language network. Using deterministic tractography, we show that the 3-month intervention induced an enhancement of the microstructural properties of the left frontal aslant tract (FAT) for the vocal music group as compared to the audiobook group. Importantly, this increase in the strength of the structural connectivity of the left FAT correlated with improved language skills. Analyses of stimulus-specific activation changes showed that the vocal music group exhibited increased activations in the frontal termination points of the left FAT during vocal music listening as compared to the audiobook group from the acute to the 3-month post-stroke stage. The increased activity correlated with the structural neuroplasticity changes in the left FAT. These results suggest that the beneficial effects of vocal music listening on post-stroke language recovery are underpinned by structural neuroplasticity changes within the language network and extend our understanding of music-based interventions in stroke rehabilitation. SIGNIFICANCE STATEMENT: Post-stroke language deficits have a devastating effect on patients and their families. Current treatments yield highly variable outcomes, and the evidence for their long-term effects is limited. Patients often receive insufficient treatments that are predominantly given outside the optimal time window for brain plasticity. Post-stroke vocal music listening improves language outcomes, an effect underpinned by neuroplasticity changes within the language network. Vocal music listening provides a complementary rehabilitation strategy that could be safely implemented in the early stages of stroke rehabilitation and seems to specifically target language symptoms and the recovering language network.

20.
PLoS Biol ; 18(11): e3000895, 2020 11.
Article in English | MEDLINE | ID: mdl-33137084

ABSTRACT

A crucial aspect when learning a language is discovering the rules that govern how words are combined in order to convey meanings. Because rules are characterized by sequential co-occurrences between elements (e.g., "These cupcakes are unbelievable"), tracking the statistical relationships between these elements is fundamental. However, purely bottom-up statistical learning alone cannot fully account for the ability to create abstract rule representations that can be generalized, a paramount requirement of linguistic rules. Here, we provide evidence that, after the statistical relations between words have been extracted, the engagement of goal-directed attention is key to enabling rule generalization. Incidental learning performance during a rule-learning task on an artificial language revealed a progressive shift from statistical learning to goal-directed attention. In addition, and consistent with the recruitment of attention, functional MRI (fMRI) analyses of late learning stages showed left parietal activity within a broad bilateral dorsal frontoparietal network. Critically, repetitive transcranial magnetic stimulation (rTMS) over participants' peak of activation within the left parietal cortex impaired their ability to generalize learned rules to a structurally analogous new language. Neither the absence of stimulation nor rTMS over a nonrelevant brain region had the same interfering effect on generalization. Performance on an additional attentional task showed that rTMS over the parietal site hindered participants' ability to integrate "what" (stimulus identity) and "when" (stimulus timing) information about an expected target. The present findings suggest that learning rules from speech is a two-stage process: following statistical learning, goal-directed attention, involving left parietal regions, integrates "what" and "when" stimulus information to facilitate rapid rule generalization.


Subject(s)
Attention/physiology , Learning/physiology , Parietal Lobe/physiology , Adult , Brain/physiology , Brain Mapping/methods , Cognition/physiology , Female , Frontal Lobe/physiology , Functional Laterality/physiology , Humans , Language , Linguistics/methods , Magnetic Resonance Imaging/methods , Male , Photic Stimulation/methods , Reaction Time/physiology , Transcranial Magnetic Stimulation/methods , Young Adult