Results 1 - 20 of 53
1.
Dev Sci ; : e13513, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38685611

ABSTRACT

Relatively little work has focused on why we are motivated to learn words. In adults, recent experiments have shown that intrinsic reward signals accompany successful word learning from context, and that the experience of reward facilitates long-term memory for words. In adolescence, developmental changes are seen in reward and motivation systems as well as in reading and language systems. Here, in the face of this developmental change, we ask whether adolescents experience reward from word learning, and how the reward and memory benefits seen in adults are modulated by age. We used a naturalistic reading paradigm, which involved extracting novel word meanings from sentence context without the need for explicit feedback. By examining ratings of enjoyment during the learning phase, as well as recognition memory for words a day later, we assessed whether adolescents show the same reward and learning patterns as adults. We tested 345 children between the ages of 10 and 18 (N > 84 in each 2-year age band) using this paradigm. We found evidence for our first prediction: children aged 10-18 report greater enjoyment for successfully learned words. However, we did not find evidence for age-related change in this developmental period, or for memory benefits. This work gives us greater insight into the process of language acquisition and sets the stage for further investigations of intrinsic reward in typical and atypical development.

RESEARCH HIGHLIGHTS: We constantly learn words from context, even in the absence of explicit rewards or feedback. In adults, intrinsic reward experienced during word learning is linked to a dopaminergic circuit in the brain, which also fuels enhancements in memory for words. We find that adolescents also report enhanced reward, or enjoyment, when they successfully learn words from sentence context. This relationship between reward and learning is maintained between the ages of 10 and 18. Unlike in adults, we did not observe ensuing memory benefits.

2.
Ann N Y Acad Sci ; 1535(1): 121-136, 2024 May.
Article in English | MEDLINE | ID: mdl-38566486

ABSTRACT

While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.
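Computational surprisal of the kind measured here is conventionally defined as the negative log-probability of an event under a predictive model. As an illustrative, stdlib-only sketch, assuming a toy bigram note model and made-up melodies rather than the models and stimuli actually used in the study:

```python
import math
from collections import Counter

def note_surprisal(sequence):
    """Surprisal (in bits) of each transition under a bigram model
    estimated from the sequence itself, with add-one smoothing.
    A toy stand-in for the richer melodic-expectation models used
    in music cognition research."""
    alphabet = set(sequence)
    bigrams = Counter(zip(sequence, sequence[1:]))
    context = Counter(sequence[:-1])
    surprisals = []
    for prev, cur in zip(sequence, sequence[1:]):
        p = (bigrams[(prev, cur)] + 1) / (context[prev] + len(alphabet))
        surprisals.append(-math.log2(p))
    return surprisals

def mean(xs):
    return sum(xs) / len(xs)

# A melody that keeps repeating the same transition is more
# predictable (lower mean surprisal) than one that never repeats one.
repetitive = list("CDCDCDCDCD")
varied = list("CDEFGABCDE")
```

Under this toy model the repetitive melody yields a lower mean surprisal than the varied one, mirroring the paper's point that more predictable music is less surprising.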


Subject(s)
Auditory Perception , Emotions , Music , Reward , Music/psychology , Humans , Male , Female , Emotions/physiology , Adult , Auditory Perception/physiology , Pleasure/physiology , Young Adult , Acoustic Stimulation/methods
3.
Cognition ; 245: 105737, 2024 04.
Article in English | MEDLINE | ID: mdl-38342068

ABSTRACT

Phonological statistical learning, our ability to extract meaningful regularities from spoken language, is considered critical in the early stages of language acquisition, in particular for helping to identify discrete words in continuous speech. Most phonological statistical learning studies use an experimental task introduced by Saffran et al. (1996), in which the syllables forming the words to be learned are presented continuously and isochronously. This raises the question of the extent to which this purportedly powerful learning mechanism is robust to the kinds of rhythmic variability that characterize natural speech. Here, we tested participants with arrhythmic, semi-rhythmic, and isochronous speech during learning. In addition, we investigated how input rhythmicity interacts with two other factors previously shown to modulate learning: prior knowledge (syllable order plausibility with respect to participants' first language) and learners' speech auditory-motor synchronization ability. We show that words are extracted by all learners even when the speech input is completely arrhythmic. Interestingly, high auditory-motor synchronization ability increases statistical learning when the speech input is temporally more predictable, but only when prior knowledge can also be used. This suggests an additional mechanism for learning based on predictions not only about when but also about what upcoming speech will be.


Subject(s)
Individuality , Speech Perception , Humans , Learning , Linguistics , Language Development , Speech
4.
Sci Rep ; 14(1): 3262, 2024 02 08.
Article in English | MEDLINE | ID: mdl-38332159

ABSTRACT

The McGurk effect is an audiovisual speech illusion in which discrepant auditory and visual syllables produce a percept that fuses the visual and auditory components. However, little is known about how individual differences contribute to the McGurk effect. Here, we examined whether music training experience, which involves audiovisual integration, can modulate the McGurk effect. Seventy-three participants completed the Goldsmiths Musical Sophistication Index (Gold-MSI) questionnaire to evaluate their music expertise on a continuous scale. The Gold-MSI considers participants' daily-life exposure to music learning experiences (formal and informal), rather than merely classifying people into groups according to how many years they have been trained in music. Participants were instructed to report, via a 3-alternative forced-choice task, "what a person said": /Ba/, /Ga/, or /Da/. The experiment consisted of 96 audiovisual congruent trials and 96 audiovisual incongruent (McGurk) trials. We observed no significant correlations between susceptibility to the McGurk effect and the different subscales of the Gold-MSI (active engagement, perceptual abilities, music training, singing abilities, emotion) or the general musical sophistication composite score. Together, these findings suggest that music training experience does not modulate audiovisual integration in speech as reflected by the McGurk effect.


Subject(s)
Music , Speech Perception , Humans , Visual Perception , Speech , Gold , Auditory Perception , Acoustic Stimulation
5.
J Vis ; 23(13): 6, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37971770

ABSTRACT

What role do the emotions of subject and object play in judging the beauty of images and music? Eighty-one participants rated perceived beauty, liking, perceived happiness, and perceived sadness of 24 songs, 12 art images, and 12 nature photographs. Stimulus presentation was brief (2 seconds) or prolonged (20 seconds). The stimuli were presented in two blocks, and participants took the Positive and Negative Affect Schedule (PANAS) mood questionnaire before and after each block. They viewed a mood induction video between blocks either to increase their happiness or sadness or to maintain their mood. Using linear mixed-effects models, we found that perceived object happiness predicts an increase in image and song beauty regardless of duration. The effect of perceived object sadness on beauty, however, is stronger for songs than images and stronger for prolonged than brief durations. Subject emotion affects brief song beauty minimally and prolonged song beauty substantially. Whereas past studies of beauty and emotion emphasized sad music, here we analyze both happiness and sadness, both subject and object emotion, and both images and music. We conclude that the interactions between emotion and beauty are different for images and music and are strongly moderated by duration.
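Linear mixed-effects models of the kind used here combine a fixed effect (e.g., perceived object happiness predicting beauty) with per-participant random effects that absorb repeated-measures structure. As a stdlib-only sketch of just the fixed-effect slope, using simulated ratings rather than the study's data (a full mixed model would add random intercepts per participant, e.g., via statsmodels' MixedLM):

```python
def least_squares_fit(xs, ys):
    """Closed-form simple linear regression (fixed effect only).
    A full mixed-effects model would additionally estimate
    per-participant random intercepts and slopes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Simulated (hypothetical) 1-7 ratings: beauty ratings rising with
# the perceived happiness of the stimulus.
happiness = [1, 2, 3, 4, 5, 6, 7]
beauty = [2.1, 2.9, 3.2, 4.4, 4.8, 5.9, 6.5]
slope, intercept = least_squares_fit(happiness, beauty)
```

A positive slope corresponds to the reported finding that perceived object happiness predicts higher beauty ratings; the mixed-model machinery additionally guards against pseudo-replication across each participant's repeated ratings.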


Subject(s)
Music , Humans , Music/psychology , Emotions , Happiness , Linear Models , Time Factors
6.
NPJ Sci Learn ; 8(1): 2, 2023 Jan 06.
Article in English | MEDLINE | ID: mdl-36609382

ABSTRACT

Incentives can decrease performance by undermining intrinsic motivation. How such an interplay of external reinforcers and internal self-regulation influences memory processes, however, is less well known. Here, we investigated their interaction on memory performance while participants learned the meanings of new words from context. Specifically, participants inferred congruent meanings of new words from semantic context (congruent trials) or detected a lack of congruence (incongruent trials), while receiving external feedback in either the first or the second half of trials only. Removing feedback during learning of congruent word meanings lowered subsequent recognition rates a day later, whereas recognition remained high in the group that received feedback only in the second half. In contrast, feedback did not substantially alter recognition rates for learning that new words had no congruent meanings. Our findings suggest that external reinforcers can selectively impair memories if internal self-regulated processes are not already established, but whether they do so depends on what is being learned (specific word meanings vs. unspecific incongruence). This highlights the relevance of self-regulated learning in education to support stable memory formation.

7.
Ann N Y Acad Sci ; 1519(1): 186-198, 2023 01.
Article in English | MEDLINE | ID: mdl-36401802

ABSTRACT

The COVID-19 pandemic has deeply affected the mental health of millions of people. We assessed which of many leisure activities correlated with positive mental health outcomes, with particular attention to music, which has been reported to be important for coping with the psychological burden of the pandemic. Questionnaire data from about 1,000 individuals, primarily from Italy, Spain, and the United States, collected during May-June 2020 show that people picked music activities (listening, playing, singing, etc.) most often as the leisure experiences that helped them the most to cope with psychological distress related to the pandemic. During the pandemic, hours of engagement in music and food-related activities were associated with lower depressive symptoms. The negative correlation between music and depression was mediated by individual differences in sensitivity to reward, whereas the correlation between food-related activities and improved mental health outcomes was explained by differences in emotion suppression strategies. Our results, while correlational, suggest that engaging in music activities could be related to improved well-being, with the underlying mechanism being related to reward, consistent with neuroscience findings. Our data have practical significance in pointing to effective strategies for coping with mental health issues beyond those related to the COVID-19 pandemic.


Subject(s)
COVID-19 , Music , Humans , Music/psychology , Depression/epidemiology , Pandemics , COVID-19/epidemiology , Reward
8.
PLoS Biol ; 20(7): e3001712, 2022 07.
Article in English | MEDLINE | ID: mdl-35793349

ABSTRACT

People of all ages display the ability to detect and learn from patterns in seemingly random stimuli. Referred to as statistical learning (SL), this process is particularly critical when learning a spoken language, helping in the identification of discrete words within a spoken phrase. Here, by considering individual differences in speech auditory-motor synchronization, we demonstrate that recruitment of a specific neural network supports behavioral differences in SL from speech. While independent component analysis (ICA) of fMRI data revealed that a network of auditory and superior pre/motor regions is universally activated in the process of learning, a frontoparietal network is additionally and selectively engaged by only some individuals (high auditory-motor synchronizers). Importantly, activation of this frontoparietal network is related to a boost in learning performance, and interference with this network via articulatory suppression (AS; i.e., producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights into SL from speech and reconciles previous contrasting findings. These findings also highlight a more general need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.


Subject(s)
Speech Perception , Speech , Brain Mapping , Humans , Magnetic Resonance Imaging , Speech/physiology , Speech Perception/physiology
9.
Front Integr Neurosci ; 16: 869571, 2022.
Article in English | MEDLINE | ID: mdl-35600224

ABSTRACT

Stuttering is a neurodevelopmental speech disorder associated with motor timing that differs from that of non-stutterers. While neurodevelopmental disorders impacted by timing are associated with compromised auditory-motor integration and interoception, the interplay between those abilities and stuttering remains unexplored. Here, we studied the relationships between speech auditory-motor synchronization (a proxy for auditory-motor integration), interoceptive awareness, and self-reported stuttering severity using remotely delivered assessments. Results indicate that, in general, stutterers and non-stutterers exhibit similar auditory-motor integration and interoceptive abilities. However, while speech auditory-motor synchrony (i.e., integration) and interoceptive awareness were not related, speech synchrony was inversely related to the speaker's perception of stuttering severity as perceived by others, and interoceptive awareness was inversely related to self-reported stuttering impact. These findings support claims that stuttering is a heterogeneous, multifaceted disorder, in that uncorrelated auditory-motor integration and interoception measurements predicted different aspects of stuttering, suggesting two unrelated sources of timing differences associated with the disorder.

10.
STAR Protoc ; 3(2): 101248, 2022 06 17.
Article in English | MEDLINE | ID: mdl-35310080

ABSTRACT

The ability to synchronize a motor action to a rhythmic auditory stimulus is often considered an innate human skill. However, some individuals lack the ability to synchronize speech to a perceived syllabic rate. Here, we describe a simple and fast protocol to classify a single native English speaker as being or not being a speech synchronizer. This protocol consists of four parts: the pretest instructions and volume adjustment, the training procedure, the execution of the main task, and data analysis. For complete details on the use and execution of this protocol, please refer to Assaneo et al. (2019a).


Subject(s)
Acoustic Stimulation , Speech , Humans
11.
J Assoc Res Otolaryngol ; 23(2): 151-166, 2022 04.
Article in English | MEDLINE | ID: mdl-35235100

ABSTRACT

Distinguishing between regular and irregular heartbeats, conversing with speakers of different accents, and tuning a guitar all rely on some form of auditory learning. What drives these experience-dependent changes? A growing body of evidence suggests an important role for non-sensory influences, including reward, task engagement, and social or linguistic context. This review is a collection of contributions that highlight how these non-sensory factors shape auditory plasticity and learning at the molecular, physiological, and behavioral levels. We begin by presenting evidence that reward signals from the dopaminergic midbrain act on cortico-subcortical networks to shape sound-evoked responses of auditory cortical neurons, facilitate auditory category learning, and modulate the long-term storage of new words and their meanings. We then discuss the role of task engagement in auditory perceptual learning and suggest that plasticity in top-down cortical networks mediates learning-related improvements in auditory cortical and perceptual sensitivity. Finally, we present data illustrating how social experience impacts sound-evoked activity in the auditory midbrain and forebrain and how the linguistic environment rapidly shapes speech perception. These findings, which are derived from both human and animal models, suggest that non-sensory influences are important regulators of auditory learning and plasticity and are often implemented by shared neural substrates. Application of these principles could improve clinical training strategies and inform the development of treatments that enhance auditory learning in individuals with communication disorders.


Subject(s)
Auditory Cortex , Neuronal Plasticity , Animals , Auditory Cortex/physiology , Auditory Perception/physiology , Neuronal Plasticity/physiology
12.
Eur J Neurol ; 29(3): 873-882, 2022 03.
Article in English | MEDLINE | ID: mdl-34661326

ABSTRACT

BACKGROUND AND PURPOSE: This study was undertaken to determine and compare lesion patterns and structural dysconnectivity underlying poststroke aprosodia and amusia, using a data-driven multimodal neuroimaging approach. METHODS: Thirty-nine patients with right or left hemisphere stroke were enrolled in a cohort study and tested for linguistic and affective prosody perception and musical pitch and rhythm perception at the subacute and 3-month poststroke stages. Participants listened to words spoken with different prosodic stress that changed their meaning, and to words spoken with six different emotions, and chose which meaning or emotion was expressed. In the music tasks, participants judged pairs of short melodies as the same or different in terms of pitch or rhythm. Structural magnetic resonance imaging data were acquired at both stages, and machine learning-based lesion-symptom mapping and deterministic tractography were used to identify lesion patterns and damaged white matter pathways giving rise to aprosodia and amusia. RESULTS: Aprosodia and amusia were strongly correlated behaviorally and associated with similar lesion patterns in right frontoinsular and striatal areas. In multiple regression models, reduced fractional anisotropy and lower tract volume of the right inferior fronto-occipital fasciculus were the strongest predictors of both disorders over time. CONCLUSIONS: These results highlight a common origin of aprosodia and amusia, both arising from damage to and disconnection of the right ventral auditory stream, which integrates rhythmic-melodic acoustic information in prosody and music. Comorbidity of these disabilities may worsen the prognosis and affect rehabilitation success.


Subject(s)
Auditory Perceptual Disorders , Music , Auditory Perceptual Disorders/etiology , Cohort Studies , Humans , Magnetic Resonance Imaging , Speech Disorders
13.
PLoS Biol ; 19(9): e3001119, 2021 09.
Article in English | MEDLINE | ID: mdl-34491980

ABSTRACT

Statistical learning (SL) is the ability to extract regularities from the environment. In the domain of language, this ability is fundamental to the learning of words and structural rules. In the absence of reliable online measures, statistical word and rule learning have been investigated primarily using offline (post-familiarization) tests, which give limited insight into the dynamics of SL and its neural basis. Here, we capitalize on a novel task that tracks the online SL of simple syntactic structures, combined with computational modeling, to show that online SL responds to reinforcement learning principles rooted in striatal function. Specifically, we demonstrate, in two different cohorts, that a temporal difference model, which relies on prediction errors, accounts for participants' online learning behavior. We then show that the trial-by-trial development of predictions through learning strongly correlates with activity in both the ventral and dorsal striatum. Our results thus provide a detailed mechanistic account of language-related SL and an explanation for the oft-cited implication of the striatum in SL tasks. This work therefore bridges the long-standing gap between language learning and reinforcement learning phenomena.
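A temporal difference model of the kind referenced here updates a value estimate on each trial by a fraction of the prediction error. A minimal delta-rule sketch, where the learning rate and trial sequence are illustrative rather than the paper's fitted parameters:

```python
def td_update(value, outcome, alpha=0.1):
    """One delta-rule update: move the estimate toward the outcome
    by a fraction (alpha, the learning rate) of the prediction error."""
    prediction_error = outcome - value
    return value + alpha * prediction_error, prediction_error

def learn(outcomes, alpha=0.1):
    """Run updates over a trial sequence, tracking prediction errors."""
    value, errors = 0.0, []
    for outcome in outcomes:
        value, pe = td_update(value, outcome, alpha)
        errors.append(pe)
    return value, errors

# With a consistent structure (outcome = 1 on every trial), prediction
# errors shrink as the value estimate converges toward 1.
final_value, errors = learn([1.0] * 50)
```

In the paper's setting, it is trial-by-trial prediction errors of this general form whose development was found to correlate with striatal activity.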


Subject(s)
Corpus Striatum/physiology , Language Development , Probability Learning , Reinforcement, Psychology , Corpus Striatum/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Pattern Recognition, Physiological , Young Adult
14.
Front Psychol ; 12: 673772, 2021.
Article in English | MEDLINE | ID: mdl-34262511

ABSTRACT

The COVID-19 pandemic and the measures taken to mitigate its impact (e.g., confinement orders) have affected people's lives in profound ways that would have been unimaginable only months before the pandemic began. Media reports from the height of the pandemic's initial international surge frequently highlighted that many people were engaging in music-related activities (from singing and dancing to playing music from balconies and attending virtual concerts) to help them cope with the strain of the pandemic. Our first goal in this study was to investigate changes in music-related habits due to the pandemic. We also investigated whether engagement in distinct music-related activities (singing, listening, dancing, etc.) was associated with individual differences in musical reward, music perception, musical training, or emotional regulation strategies. To do so, we collected detailed (~1 h-long) surveys during the initial peak of shelter-in-place order implementation (May-June 2020) from over a thousand individuals across countries in which the pandemic was especially devastating at that time: the USA, Spain, and Italy. Our findings indicate that, on average, people spent more time in music-related activities while under confinement than they had before the pandemic. Notably, this change in behavior depended on individual differences in music reward sensitivity and in emotional regulation strategies. Finally, the type of musical activity with which individuals engaged was further associated with the degree to which they used music to regulate stress, to address the lack of social interaction (especially among individuals more concerned about the risk of contracting the virus), or to cheer themselves up (especially among those more worried about the pandemic's consequences). Identifying which music-related activities were particularly sought out by the population as a means of coping with such heightened uncertainty and stress, and understanding the individual differences underlying these propensities, are crucial to implementing personalized music-based interventions that aim to reduce stress, anxiety, and depression symptoms.

15.
Ann N Y Acad Sci ; 1502(1): 85-98, 2021 10.
Article in English | MEDLINE | ID: mdl-34247392

ABSTRACT

Music listening provides one of the most significant abstract rewards for humans because hearing music activates the dopaminergic mesolimbic system. Given the strong link between reward, dopamine, and memory, we aimed here to investigate the hypothesis that dopamine-dependent musical reward can drive memory improvements. Twenty-nine healthy participants of both sexes provided reward ratings of unfamiliar musical excerpts that had to be remembered following a consolidation period under three separate conditions: after the ingestion of a dopaminergic antagonist, a dopaminergic precursor, or a placebo. Linear mixed modeling of the intervention data showed that the effect of reward on memory (i.e., the greater the reward experienced while listening to the musical excerpts, the better the memory recollection performance) was modulated by both dopaminergic signaling and individual differences in reward processing. Greater pleasure was consistently associated with better memory outcomes in participants with high sensitivity to musical reward, but this effect was lost when dopaminergic signaling was disrupted in participants with average or low musical hedonia. Our work highlights the flexibility of the human dopaminergic system, which can enhance memory formation not only through explicit and/or primary reinforcers but also via abstract and aesthetic rewards such as music.


Subject(s)
Dopamine/metabolism , Memory Consolidation , Mental Recall , Music , Reward , Adult , Analysis of Variance , Auditory Perception , Brain/physiology , Humans , Pleasure , Young Adult
16.
eNeuro ; 2021 Jun 17.
Article in English | MEDLINE | ID: mdl-34140351

ABSTRACT

Listening to vocal music has recently been shown to improve language recovery in stroke survivors. The neuroplasticity mechanisms supporting this effect are, however, still unknown. Using data from a three-arm single-blind randomized controlled trial including acute stroke patients (N=38) and a 3-month follow-up, we set out to compare the neuroplasticity effects of daily listening to self-selected vocal music, instrumental music, and audiobooks on both brain activity and structural connectivity of the language network. Using deterministic tractography, we show that the 3-month intervention enhanced the microstructural properties of the left frontal aslant tract (FAT) in the vocal music group compared with the audiobook group. Importantly, this increase in the strength of the structural connectivity of the left FAT correlated with improved language skills. Analyses of stimulus-specific activation changes showed that, from the acute to the 3-month post-stroke stage, the vocal music group exhibited increased activation in the frontal termination points of the left FAT during vocal music listening compared with the audiobook group. The increased activity correlated with the structural neuroplasticity changes in the left FAT. These results suggest that the beneficial effects of vocal music listening on post-stroke language recovery are underpinned by structural neuroplasticity changes within the language network and extend our understanding of music-based interventions in stroke rehabilitation.

Significance statement: Post-stroke language deficits have a devastating effect on patients and their families. Current treatments yield highly variable outcomes, and the evidence for their long-term effects is limited. Patients often receive insufficient treatments that are predominantly given outside the optimal time window for brain plasticity. Post-stroke vocal music listening improves language outcomes, an effect underpinned by neuroplasticity changes within the language network. Vocal music listening provides a complementary rehabilitation strategy that could be safely implemented in the early stages of stroke rehabilitation and seems to specifically target language symptoms and the recovering language network.

17.
PLoS Biol ; 18(11): e3000895, 2020 11.
Article in English | MEDLINE | ID: mdl-33137084

ABSTRACT

A crucial aspect when learning a language is discovering the rules that govern how words are combined in order to convey meanings. Because rules are characterized by sequential co-occurrences between elements (e.g., "These cupcakes are unbelievable"), tracking the statistical relationships between these elements is fundamental. However, purely bottom-up statistical learning alone cannot fully account for the ability to create abstract rule representations that can be generalized, a paramount requirement of linguistic rules. Here, we provide evidence that, after the statistical relations between words have been extracted, the engagement of goal-directed attention is key to enabling rule generalization. Incidental learning performance during a rule-learning task on an artificial language revealed a progressive shift from statistical learning to goal-directed attention. In addition, and consistent with the recruitment of attention, functional MRI (fMRI) analyses of late learning stages showed left parietal activity within a broad bilateral dorsal frontoparietal network. Critically, repetitive transcranial magnetic stimulation (rTMS) over participants' peak of activation within the left parietal cortex impaired their ability to generalize learned rules to a structurally analogous new language. Neither no stimulation nor rTMS over a nonrelevant brain region had the same interfering effect on generalization. Performance on an additional attentional task showed that rTMS over the parietal site hindered participants' ability to integrate "what" (stimulus identity) and "when" (stimulus timing) information about an expected target. The present findings suggest that learning rules from speech is a two-stage process: following statistical learning, goal-directed attention, involving left parietal regions, integrates "what" and "when" stimulus information to facilitate rapid rule generalization.


Subject(s)
Attention/physiology , Learning/physiology , Parietal Lobe/physiology , Adult , Brain/physiology , Brain Mapping/methods , Cognition/physiology , Female , Frontal Lobe/physiology , Functional Laterality/physiology , Humans , Language , Linguistics/methods , Magnetic Resonance Imaging/methods , Male , Photic Stimulation/methods , Reaction Time/physiology , Transcranial Magnetic Stimulation/methods , Young Adult
18.
Ann Clin Transl Neurol ; 7(11): 2272-2287, 2020 11.
Article in English | MEDLINE | ID: mdl-33022148

ABSTRACT

OBJECTIVE: Previous studies suggest that daily music listening can aid stroke recovery, but little is known about the stimulus-dependent and neural mechanisms driving this effect. Building on neuroimaging evidence that vocal music engages extensive and bilateral networks in the brain, we sought to determine if it would be more effective for enhancing cognitive and language recovery and neuroplasticity than instrumental music or speech after stroke. METHODS: Using data pooled from two single-blind randomized controlled trials in stroke patients (N = 83), we compared the effects of daily listening to self-selected vocal music, instrumental music, and audiobooks during the first 3 poststroke months. Outcome measures comprised neuropsychological tests of verbal memory (primary outcome), language, and attention and a mood questionnaire performed at acute, 3-month, and 6-month stages and structural and functional MRI at acute and 6-month stages. RESULTS: Listening to vocal music enhanced verbal memory recovery more than instrumental music or audiobooks and language recovery more than audiobooks, especially in aphasic patients. Voxel-based morphometry and resting-state and task-based fMRI results showed that vocal music listening selectively increased gray matter volume in left temporal areas and functional connectivity in the default mode network. INTERPRETATION: Vocal music listening is an effective and easily applicable tool to support cognitive recovery after stroke as well as to enhance early language recovery in aphasia. The rehabilitative effects of vocal music are driven by both structural and functional plasticity changes in temporoparietal networks crucial for emotional processing, language, and memory.


Subject(s)
Cerebral Cortex/physiology , Cerebral Cortex/physiopathology , Cognitive Dysfunction/rehabilitation , Connectome , Default Mode Network/physiopathology , Music Therapy , Music , Outcome Assessment, Health Care , Singing , Stroke Rehabilitation , Stroke/therapy , Aged , Cerebral Cortex/diagnostic imaging , Cognitive Dysfunction/etiology , Default Mode Network/diagnostic imaging , Female , Humans , Language , Magnetic Resonance Imaging , Male , Middle Aged , Neuropsychological Tests , Stroke/complications , Temporal Lobe/diagnostic imaging , Temporal Lobe/pathology , Temporal Lobe/physiopathology , Verbal Learning/physiology
19.
Brain Imaging Behav ; 14(4): 1074-1088, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31102166

ABSTRACT

The human hippocampus is believed to be a crucial node in the neural network supporting autobiographical memory retrieval. Structural mesial temporal damage associated with temporal lobe epilepsy (TLE) provides an opportunity to systematically investigate and better understand the local and distal functional consequences of mesial temporal damage in the engagement of the autobiographical memory network. We examined 19 TLE patients (49.21 ± 11.55 years; 12 females) with unilateral mesial TLE (MTLE; 12 with anterior temporal lobe resection: 6 right MTLE, 6 left MTLE) or bilateral mesial TLE (7 BMTLE) and 18 matched healthy subjects. We used functional MRI (fMRI) with an adapted autobiographical memory paradigm and a specific neuropsychological test (the Autobiographical Memory Interview, AMI). While engaged in the fMRI autobiographical memory paradigm, all groups activated a large fronto-temporo-parietal network. However, while this network was left lateralized for healthy participants and right MTLE patients, left MTLE and BMTLE patients also showed strong activation in right temporal and frontal regions. Moreover, BMTLE and left MTLE patients also showed significant mild deficits in episodic autobiographical memory performance measured with the AMI test. The right temporal and extra-temporal fMRI activation, along with the impairment in autobiographical memory retrieval found in left MTLE and BMTLE patients, suggests that alternate brain areas, beyond the hippocampus, may also support this process, possibly due to neuroplastic effects.


Subject(s)
Epilepsy, Temporal Lobe , Memory, Episodic , Epilepsy, Temporal Lobe/diagnostic imaging , Epilepsy, Temporal Lobe/surgery , Female , Hippocampus/diagnostic imaging , Hippocampus/pathology , Humans , Magnetic Resonance Imaging , Neuropsychological Tests , Sclerosis/diagnostic imaging , Temporal Lobe
20.
Brain Lang ; 199: 104699, 2019 12.
Article in English | MEDLINE | ID: mdl-31569040

ABSTRACT

Listening to white noise may facilitate cognitive performance, including new word learning, for some individuals. This study investigated whether auditory white noise facilitates the learning of novel written words from context in healthy young adults. Sixty-nine participants were required to determine the meaning of novel words placed within sentence contexts during a silent reading task. Learning was performed either with or without white noise, and recognition of novel word meanings was tested immediately after learning and after a short delay. Immediate recognition accuracy for learned novel word meanings was higher in the noise group than in the no-noise group; however, this effect was no longer evident at the delayed recognition test. These findings suggest that white noise has the capacity to facilitate meaning acquisition from context; however, further research is needed to clarify its capacity to improve longer-term retention of meaning.


Subject(s)
Noise , Reading , Verbal Learning/physiology , Acoustic Stimulation , Auditory Perception , Female , Humans , Male , Recognition, Psychology , Young Adult