ABSTRACT
Perceptual learning is the ability to enhance perception through practice. The hallmark of perceptual learning is its specificity for the trained location and stimulus features, such as orientation. For example, training in discriminating a grating's orientation improves performance at the trained location but not at untrained locations. Perceptual learning has mostly been studied using stimuli presented briefly while observers maintained gaze at one location. However, in everyday life, stimuli are actively explored through eye movements, which results in successive projections of the same stimulus at different retinal locations. Here, we studied perceptual learning of orientation discrimination across saccades. Observers were trained to saccade to a peripheral grating and to discriminate an orientation change that occurred during the saccade. The results showed that training led to transsaccadic perceptual learning (TPL) and performance improvements that did not generalize to an untrained orientation. Remarkably, however, for the trained orientation, we found a complete transfer of TPL to the untrained location in the opposite hemifield, suggesting high flexibility of reference frame encoding in TPL. Three control experiments in which participants were trained without saccades did not show such transfer, confirming that the location transfer was contingent upon eye movements. Moreover, performance at the trained location, but not at the untrained location, was also improved in an untrained fixation task. Our results suggest that TPL has both a location-specific component that occurs before the eye movement and a saccade-related component that involves location generalization.
Subject(s)
Saccades , Visual Perception , Humans , Learning , Eye Movements , Retina , Discrimination Learning , Photic Stimulation
ABSTRACT
The developed human brain shows remarkable plasticity following perceptual learning, resulting in improved visual sensitivity. However, such improvements commonly require extensive stimulus exposure. Here we show that efficiently enhancing visual perception with minimal stimulus exposure recruits distinct neural mechanisms relative to standard repetition-based learning. Participants (n = 20, 12 women, 8 men) encoded a visual discrimination task, followed by brief memory reactivations of only five trials each performed on separate days, and demonstrated improvements comparable to those of standard repetition-based learning (n = 20, 12 women, 8 men). Reactivation-induced learning engaged increased bilateral intraparietal sulcus (IPS) activity relative to repetition-based learning. Complementary evidence for differential learning processes was further provided by temporal-parietal resting functional connectivity changes, which correlated with behavioral improvements. The results suggest that efficiently enhancing visual perception with minimal stimulus exposure recruits distinct neural processes, engaging higher-order control and attentional resources while leading to similar perceptual gains. These unique brain mechanisms underlying improved perceptual learning efficiency may have important implications for daily life and for clinical conditions requiring relearning following brain damage.
Subject(s)
Neuronal Plasticity , Visual Perception , Humans , Female , Male , Neuronal Plasticity/physiology , Visual Perception/physiology , Adult , Young Adult , Magnetic Resonance Imaging , Photic Stimulation/methods , Learning/physiology , Brain Mapping , Parietal Lobe/physiology
ABSTRACT
It has remained unclear whether individuals with psychiatric disorders involving altered visual processing employ similar neuronal mechanisms during perceptual learning of a visual task. We investigated this question by training patients with body dysmorphic disorder, a psychiatric disorder characterized by distressing or impairing preoccupation with nonexistent or slight defects in one's physical appearance, and healthy controls on a visual detection task for human faces with low spatial frequency components. Brain activation during task performance was measured with functional magnetic resonance imaging before the beginning and after the end of behavioral training. Both groups of participants improved performance on the trained task to a similar extent. However, neuronal changes in the fusiform face area were substantially different between groups such that activation for low spatial frequency faces in the right fusiform face area increased after training in body dysmorphic disorder patients but decreased in controls. Moreover, functional connectivity between left and right fusiform face area decreased after training in patients but increased in controls. Our results indicate that neuronal mechanisms involved in perceptual learning of a face detection task differ fundamentally between body dysmorphic disorder patients and controls. Such different neuronal mechanisms in body dysmorphic disorder patients might reflect the brain's adaptations to altered functions imposed by the psychiatric disorder.
Subject(s)
Body Dysmorphic Disorders , Learning , Magnetic Resonance Imaging , Humans , Body Dysmorphic Disorders/physiopathology , Body Dysmorphic Disorders/psychology , Body Dysmorphic Disorders/diagnostic imaging , Female , Adult , Young Adult , Male , Learning/physiology , Brain/physiopathology , Brain/diagnostic imaging , Brain Mapping , Photic Stimulation/methods
ABSTRACT
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~45-min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians made faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.
Subject(s)
Auditory Cortex , Speech Perception , Humans , Speech , Speech Perception/physiology , Auditory Cortex/physiology , Learning , Electroencephalography , Neuronal Plasticity/physiology , Acoustic Stimulation
ABSTRACT
Neurons in visual cortical areas V1 (primary visual cortex) and V4 are adaptive processors whose responses are shaped by the perceptual task. This is reflected in their ability to segment the visual scene into task-relevant and task-irrelevant stimulus components and in changes of their tuning to task-relevant stimulus properties according to the current top-down instruction. The two areas represented different information: while V1 represented detailed stimulus characteristics, V4 filtered the input from V1 to carry the binary information required for the two-alternative judgement task. Neurons in V1 were activated at locations where the behaviorally relevant stimulus was placed well outside the grating-mapped receptive field. By systematically following the development of the task-dependent signals over the course of perceptual learning, we found that neuronal selectivity for task-relevant information was initially seen in V4 and, over a period of weeks, subsequently in V1. Once the learned information was represented in V1, on any given trial, task-relevant information appeared first in V1 responses and, after a 12-ms delay, in V4. We propose that the shifting representation of learned information constitutes a mechanism for systems consolidation of memory.
Subject(s)
Visual Cortex , Learning/physiology , Neurons/physiology , Photic Stimulation , Visual Cortex/physiology , Visual Perception/physiology
ABSTRACT
Visual perceptual learning (VPL) refers to a long-term improvement of visual task performance through training or experience, reflecting brain plasticity even in adults. In human subjects, VPL has mostly been studied using functional magnetic resonance imaging (fMRI). However, due to the low temporal resolution of fMRI, how VPL affects the time course of visual information processing is largely unknown. To address this issue, we trained human subjects to perform a visual motion direction discrimination task. Their behavioral performance and magnetoencephalography (MEG) signals in response to the motion stimuli were measured before, immediately after, and two weeks after training. Training induced a long-lasting behavioral improvement for the trained direction. Based on the MEG signals from occipital sensors, we found that, for the trained motion direction, VPL increased the motion direction decoding accuracy, reduced the motion direction decoding latency, enhanced the direction-selective channel response, and narrowed the tuning profile. Following MEG source reconstruction, we showed that VPL enhanced the cortical response in early visual cortex (EVC) and strengthened the feedforward connection from EVC to V3A. These VPL-induced neural changes co-occurred 160-230 ms after stimulus onset. Complementary to previous fMRI findings on VPL, this study provides a comprehensive description of the neural mechanisms of visual motion perceptual learning from a temporal perspective and reveals how VPL shapes the time course of visual motion processing in the adult human brain.
ABSTRACT
When entering a coordinated flight turn without visual references, the perception of roll-angular displacement is determined by vestibular cues and/or, probably, by assessment of the G load (G magnitude) and its translation into the corresponding bank angle. Herein, we examined whether repeated exposures to hypergravity (G training) in a centrifuge would improve not only the ability to accurately assess the G load, but also the capacity to detect or estimate the corresponding roll inclination of the centrifuge gondola. To this end, in 9 men without piloting experience, the subjective estimation of G load and roll tilt were assessed, in complete darkness, during 5-min coordinated turns in the centrifuge, performed at 1.1G (25° roll-tilt angle) and 2.0G (60° roll-tilt angle). These trials were conducted before and after 5 weeks of G training [3×40-min sessions per week; protocol: 20×1-min at G levels close to the individual relaxed G-level tolerance (range: ~2.6G (~67°) to 3.6G (74°)), separated by 1-min intervals at idle speed (1.4G)], while continual feedback to the subjects was limited to the G load. As expected, G training improved subjects' capacity to assess G load, especially at 2.0 G (P=0.006). The perception of roll tilt, however, was consistently underestimated (by ~70-80%) and was not enhanced by G training (P≥0.51). The present findings demonstrate that prolonged repeated G-induced roll tilts in a centrifuge gondola, with external feedback restricted to graviception, enhance the capacity to perceive G load but fail to advance the ability to detect or consciously estimate the magnitude of roll-angular displacement during a coordinated turn.
ABSTRACT
Without visual references, nonpilots exposed to coordinated flight turns underestimate the bank angle because of discordant information about the roll-angular displacement from the otoliths, which consistently signal vertical position, versus the semicircular canals, which enable detection of the displacement. Pilots may also use their ability to perceive the G load and their knowledge of the relation between load and angle to assess the bank angle. Our aim was to investigate whether the perception of bank angle can be improved by spatial orientation training in a centrifuge. Sixteen pilots/pilot students assessed their roll tilt, in complete darkness, during both real coordinated flight turns and gondola centrifugation, at roll tilts of 30° and 60°. The experiments were repeated after a 3-wk period, during which eight of the subjects performed nine training sessions in the centrifuge, comprising feedback on roll angle vs. G load and on indicating requested angles. Before training, the subjects perceived, in the aircraft and centrifuge respectively, 37 (17)° and 38 (14)° during 60° turns and 19 (12)° and 20 (10)° during 30° turns. Training improved the perception of angle during the 60° [to 60 (7)°, 55 (10)°; P ≤ 0.04] but not the 30° [21 (10)°, 15 (9)°; P ≥ 0.30] turns; the improvement disappeared within 2 yr after training. Angle assessments did not change in the untrained group. The results suggest that it is possible to train, in a centrifuge, a pilot's ability to perceive large but not discrete-to-moderate roll-angular displacements. The transient training effect is attributable to an improved capacity to perceive and translate G load into roll angle and/or to increased reliance on semicircular canal signals.
NEW & NOTEWORTHY Spatial disorientation is a major problem in aviation.
When performing coordinated flight turns without external visual cues (e.g., flying in clouds or darkness), the pilot underestimates the aircraft bank angle because the vestibular system provides unreliable information about roll tilt. The present study demonstrates that it is possible to train, in a long-arm centrifuge, a pilot's ability to perceive large but not discrete-to-moderate roll-angular displacements.
Subject(s)
Centrifugation , Orientation, Spatial , Pilots , Humans , Orientation, Spatial/physiology , Male , Adult , Military Personnel , Young Adult , Space Perception/physiology , Female
ABSTRACT
In contrast to perceptual tasks, which enable concurrent processing of many stimuli, working memory (WM) tasks have a very small capacity, limiting cognitive skills. Training on WM tasks often yields substantial improvement, suggesting that training might increase the general WM capacity. To understand the underlying processes, we trained a test group with a newly designed tone-manipulation WM task and a control group with a challenging perceptual task of pitch-pattern discrimination. Functional magnetic resonance imaging (fMRI) scans confirmed that, before training, manipulation was associated with a dorsal fronto-parietal WM network, while pitch comparison was associated with activation of ventral auditory regions. Training induced improvement in each group, which was limited to the trained task. Analyzing the behavior of the group trained with tone manipulation revealed that participants learned to replace active manipulation with a perceptual verification of the position of a single salient tone in the sequence presented as a tentative reply. Posttraining fMRI scans revealed modifications in ventral activation in both groups. Successful WM-trained participants learned to utilize auditory regions for the trained task. These observations suggest that the large task-specific enhancement of WM capacity stems from a task-specific switch to perceptual routines, implemented in perceptual regions.
Subject(s)
Learning , Memory, Short-Term , Humans , Learning/physiology , Memory, Short-Term/physiology , Magnetic Resonance Imaging/methods
ABSTRACT
PURPOSE: To assess the possible benefits of perceptual learning and dichoptic therapy combined with patching, over patching alone, in children with amblyopia. METHODS: Quasi-experimental multicentric study including 52 amblyopic children. Patients who improved their visual acuity (VA) by combining spectacles and patching were included in the patching group (PG: 20 subjects), whereas those who did not improve with patching performed visual training (perceptual learning + dichoptic therapy) combined with patching and were assigned to the visual treatment group (VT: 32 subjects). Changes in VA, contrast sensitivity (CS), and stereopsis were monitored during a 6-month follow-up in each group. RESULTS: Significant improvements in VA were found in both groups at 1 month (p < 0.01). The total improvement of VA was 0.18 ± 0.16 and 0.31 ± 0.35 logMAR in the PG and VT groups, respectively (p = 0.317). The Wilcoxon effect size at 6 months was slightly higher in VT (0.48 vs. 0.54). An enhancement in CS was observed in the amblyopic eye of the VT group for all spatial frequencies at 1 month (p < 0.001). Likewise, the binocular function score also increased significantly in the VT group (p = 0.002). A prediction equation of VA improvement at 1 month in the VT group was obtained by multiple linear regression analysis (p < 0.001, R2 = 0.747). CONCLUSIONS: A combined treatment of visual training and patching is effective for obtaining a predictable improvement of VA, CS, and binocularity in patching-resistant amblyopic children.
Subject(s)
Amblyopia , Sensory Deprivation , Vision, Binocular , Visual Acuity , Humans , Amblyopia/physiopathology , Amblyopia/therapy , Visual Acuity/physiology , Prospective Studies , Male , Female , Child , Vision, Binocular/physiology , Follow-Up Studies , Child, Preschool , Treatment Outcome , Eyeglasses , Contrast Sensitivity/physiology , Depth Perception/physiology
ABSTRACT
We report a large study (n = 72) using combined transcranial direct current stimulation-electroencephalography (tDCS-EEG) to investigate the modulation of perceptual learning indexed by the face inversion effect. Participants were engaged in an old/new recognition task involving intermixed upright and inverted, normal and Thatcherized faces. The accuracy results showed that anodal tDCS delivered at the Fp3 scalp area (cathode/reference electrode placed at Fp2) increased the behavioural inversion effect for normal faces versus sham/control, and that this covaried with a modulation of the N170 event-related potential component. A reduced inversion effect for normal faces was found on the N170 latency and amplitude versus sham/control, extending recent work that combined tDCS and EEG in circumstances where the behavioural face inversion effect was reduced. Our results advance understanding of the neural mechanisms responsible for perceptual learning by revealing a dissociation between the N170 amplitude and latency in response to the tDCS-induced modulation of the face inversion effect. The behavioural modulation of the inversion effect tracks the modulation of the N170 amplitudes, albeit negatively correlated (i.e., a reduced behavioural inversion effect accompanies a larger N170 amplitude inversion effect, and an increased behavioural inversion effect a reduced N170 amplitude inversion effect). For the N170 latencies, the inversion effect is reduced by the tDCS protocol we used irrespective of any modulation of the behavioural inversion effect.
Subject(s)
Transcranial Direct Current Stimulation , Humans , Evoked Potentials/physiology , Electroencephalography , Recognition, Psychology/physiology , Photic Stimulation/methods
ABSTRACT
Subsecond temporal processing is crucial for activities requiring precise timing. Here, we investigated perceptual learning of crossmodal (auditory-visual or visual-auditory) temporal interval discrimination (TID) and its impact on unimodal (visual or auditory) TID performance. The research purpose was to test whether learning is based on a more abstract and conceptual representation of subsecond time, which would predict crossmodal-to-unimodal learning transfer. The experiments revealed that learning to discriminate a 200-ms crossmodal temporal interval, defined by a pair of visual and auditory stimuli, significantly reduced crossmodal TID thresholds. Moreover, the crossmodal TID training also minimized unimodal TID thresholds with a pair of visual or auditory stimuli at the same interval, even though crossmodal TID thresholds are multiple times higher than unimodal TID thresholds. Subsequent training on unimodal TID failed to reduce unimodal TID thresholds further. These results indicate that learning of high-threshold crossmodal TID tasks can benefit low-threshold unimodal temporal processing, which may be achieved through training-induced improvement of a conceptual representation of subsecond time in the brain.
ABSTRACT
BACKGROUND: Active vision therapy for amblyopia shows good results, but there is no standard vision therapy protocol. This study compared the results of three treatments, two combining patching with active therapy and one with patching alone, in a sample of children with amblyopia. METHODS: Two protocols were developed: (a) perceptual learning with a computer game designed to favour the medium-to-high spatial frequency-tuned achromatic mechanisms of parvocellular origin and (b) vision therapy with a specific protocol and 2-h patching. The third treatment group used patching only. Fifty-two amblyopic children (aged 4-12 years) were randomly assigned to three monocular treatment groups: 2-h patching (n = 18), monocular perceptual learning (n = 17) and 2-h patching plus vision therapy (n = 17). Visual outcomes were analysed after 3 months and compared with a control group (n = 36) of subjects with normal vision. RESULTS: Visual acuity (VA) and stereoacuity (STA) improved significantly after treatment for all three groups, with the best results for patching plus vision therapy, followed by monocular perceptual learning; patching alone was least effective. Change in the interocular difference in VA was significant for monocular perceptual learning, followed by patching. Differences in STA between groups were not significant. For VA and interocular differences, the final outcomes were influenced by the baseline VA and interocular difference, respectively, with greater improvements in subjects with poorer initial values. CONCLUSIONS: Visual acuity and STA improved with the two most active treatments, that is, vision therapy followed by perceptual learning. Patching alone showed the worst outcome. These results suggest that vision therapy should include monocular accommodative exercises, ocular motility and central fixation exercises where the fovea is more active.
ABSTRACT
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' responses to paired multisensory (e.g., audiovisual) cues are significantly more robust than the corresponding best unisensory response in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of outside events or objects. Despite its significance, multisensory integration is not a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not emerge rapidly but develops gradually during early postnatal life (in cats, 4-12 weeks are required). Multisensory experience is critical for this developmental process. If animals are prevented from sensing normal visual scenes or sounds (deprived of the relevant multisensory experience), the development of the corresponding integrative ability can be blocked until the appropriate multisensory experience is obtained. This section summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic change and modification of neural circuits in cortical and subcortical areas.
Subject(s)
Brain , Learning , Animals , Neurons , Sensation , Sound
ABSTRACT
A degraded, black-and-white image of an object, which appears meaningless on first presentation, is easily identified after a single exposure to the original, intact image. This striking example of perceptual learning reflects a rapid (one-trial) change in performance, but the kind of learning that is involved is not known. We asked whether this learning depends on conscious (hippocampus-dependent) memory for the images that have been presented or on an unconscious (hippocampus-independent) change in the perception of images, independently of the ability to remember them. We tested five memory-impaired patients with hippocampal lesions or larger medial temporal lobe (MTL) lesions. In comparison to volunteers, the patients were fully intact at perceptual learning, and their improvement persisted without decrement from 1 d to more than 5 mo. Yet, the patients were impaired at remembering the test format and, even after 1 d, were impaired at remembering the images themselves. To compare perceptual learning and remembering directly, at 7 d after seeing degraded images and their solutions, patients and volunteers took either a naming test or a recognition memory test with these images. The patients improved as much as the volunteers at identifying the degraded images but were severely impaired at remembering them. Notably, the patient with the most severe memory impairment and the largest MTL lesions performed worse than the other patients on the memory tests but was the best at perceptual learning. The findings show that one-trial, long-lasting perceptual learning relies on hippocampus-independent (nondeclarative) memory, independent of any requirement to consciously remember.
Subject(s)
Consciousness/physiology , Hippocampus/physiology , Learning/physiology , Memory/physiology , Mental Recall/physiology , Temporal Lobe/physiology , Adult , Aged , Aged, 80 and over , Amnesia/physiopathology , Female , Hippocampus/physiopathology , Humans , Male , Memory Disorders/physiopathology , Middle Aged , Photic Stimulation , Psychomotor Performance/physiology , Temporal Lobe/physiopathology
ABSTRACT
Perceptual stability is facilitated by a decrease in visual sensitivity during rapid eye movements, called saccadic suppression. While a large body of evidence demonstrates that saccadic programming is plastic, little is known about whether the perceptual consequences of saccades can be modified. Here, we demonstrate that saccadic suppression is attenuated during learning on a standard visual detection-in-noise task, to the point that it is effectively silenced. Across a period of 7 days, 44 participants were trained to detect brief, low-contrast stimuli embedded within dynamic noise, while eye position was tracked. Although instructed to fixate, participants regularly made small fixational saccades. Data were accumulated over a large number of trials, allowing us to assess changes in performance as a function of the temporal proximity of stimuli and saccades. This analysis revealed that improvements in sensitivity over the training period were accompanied by a systematic change in the impact of saccades on performance: robust saccadic suppression on day 1 declined gradually over subsequent days until its magnitude became indistinguishable from zero. This silencing of suppression was not explained by learning-related changes in saccade characteristics and generalized to an untrained retinal location and stimulus orientation. Suppression was restored when learned stimulus timing was perturbed, consistent with the operation of a mechanism that temporarily reduces or eliminates saccadic suppression, but only when it is behaviorally advantageous to do so. Our results indicate that learning can circumvent saccadic suppression to improve performance, without compromising its functional benefits in other viewing contexts.
Subject(s)
Learning/physiology , Saccades/physiology , Adolescent , Adult , Female , Humans , Male , Middle Aged , Photic Stimulation , Time Factors , Visual Perception/physiology , Young Adult
ABSTRACT
BACKGROUND: Perceptual learning refers to an augmentation of an organism's ability to respond to external stimuli, and has been described in most sensory modalities. Visual perceptual learning (VPL) is a manifestation of plasticity in visual information processing that occurs in the adult brain, and can be used to improve the abilities of patients with visual defects, mainly through enhanced detection or discrimination of features in visual tasks. While some brain regions, such as the primary visual cortex, have been shown to participate in VPL, how more general high-level cognitive brain areas are involved in this process remains unclear. Here, we show that the medial prefrontal cortex (mPFC) is essential for both the training and maintenance processes of VPL in mouse models. RESULTS: We built a new VPL model in a custom-designed training chamber to enable the use of miniScopes while mice freely executed the VPL task. We found that pyramidal neurons in the mPFC participate in both the training process and the maintenance of VPL. By recording the calcium activity of mPFC pyramidal neurons while mice freely executed the task, we identified distinct ON and OFF neural ensembles tuned to different behaviors, which might encode different cognitive information. Decoding analysis showed that mouse behaviors could be well predicted from the activity of each ON ensemble. Furthermore, VPL recruited more reward-related components in the mPFC. CONCLUSION: We revealed the neural mechanism underlying vision improvement following VPL and identified distinct ON and OFF neural ensembles in the mPFC tuned to different information during visual perceptual training. These results uncover an important role of the mPFC in VPL, with reward-related components also involved, and pave the way for future clarification of the reward signal coding rules in VPL.
Subject(s)
Learning , Visual Perception , Animals , Mice , Visual Perception/physiology , Learning/physiology , Brain/physiology , Prefrontal Cortex/physiology
ABSTRACT
How sleep leads to offline performance gains in learning remains controversial. A use-dependent model assumes that the sleep processing leading to performance gains occurs based on general cortical usage during wakefulness, whereas a learning-dependent model assumes that this processing is specific to learning. Here, we found evidence that supports a learning-dependent model in visual perceptual learning (VPL) in humans (both sexes). First, we measured the strength of spontaneous oscillations during sleep after two training conditions that required the same amount of training or visual cortical usage; one generated VPL (learning condition), while the other did not (interference condition). During a post-training nap, slow-wave activity (SWA) and sigma activity during non-rapid eye movement (NREM) sleep and theta activity during REM sleep were source-localized to the early visual areas using retinotopic mapping. Inconsistent with a use-dependent model, only in the learning condition did sigma and theta activity, but not SWA, increase in a trained-region-specific manner and correlate with performance gains. Second, we investigated the roles of occipital sigma and theta activity during sleep. Occipital sigma activity during NREM sleep was significantly correlated with performance gains in presleep learning; however, occipital theta activity during REM sleep was correlated with presleep learning stabilization, shown as resilience to interference from postsleep learning in a trained-region-specific manner. Occipital SWA was not associated with offline performance gains or stabilization. These results demonstrate that the sleep processing leading to performance gains is learning dependent in VPL and involves occipital sigma and theta activity during sleep.
SIGNIFICANCE STATEMENT The present study shows strong evidence that could help resolve the long-standing controversy surrounding the sleep processing that strengthens learning (performance gains). There are two conflicting models.
A use-dependent model assumes that sleep processing leading to performance gains occurs because of general cortical usage during wakefulness, whereas a learning-dependent model assumes that this processing occurs specifically for learning. Using visual perceptual learning and interference paradigms, we found that this processing did not take place after general cortical usage alone. Moreover, sigma activity during non-rapid eye movement (NREM) sleep and theta activity during REM sleep in occipital areas were found to be involved in this processing, which is consistent with the learning-dependent model and not the use-dependent model. These results support the learning-dependent model.
Subject(s)
Sleep , Visual Cortex , Electroencephalography , Female , Humans , Male , Sleep, REM , Spatial Learning , WakefulnessABSTRACT
A main characteristic of dyslexia is poor use of sound categories. Here, we studied within-session learning of new sound categories in dyslexia, behaviorally and neurally, using fMRI. Human participants (males and females) with and without dyslexia were asked to discriminate which of two serially presented tones had the higher pitch. The task was administered in two protocols, with and without a repeated reference frequency. The reference condition introduces regularity and enhances frequency sensitivity in typically developing (TD) individuals. This enhanced sensitivity facilitates the formation of "high" and "low" pitch categories above and below the reference, respectively. We found that in TD individuals, learning was paralleled by a gradual decrease in activation of the primary auditory cortex (PAC) and reduced activation of the superior temporal gyrus (STG) and left posterior parietal cortex (PPC), which are important for using sensory history. No such sensitivity was found among individuals with dyslexia (IDDs). Rather, IDDs showed reduced behavioral learning of stimulus regularities and no regularity-associated adaptation in the auditory cortex or in higher-level regions. We propose that IDDs' reduced cortical adaptation, associated with reduced behavioral learning of sound regularities, underlies their impoverished use of stimulus history and consequently impedes their formation of rich sound categories.SIGNIFICANCE STATEMENT Reading difficulties in dyslexia are often attributed to poor use of phonological categories. To test whether poor category use could result from poor learning of new sound categories in general, we administered an auditory discrimination task that examined the learning of new pitch categories above and below a repeated reference sound. Individuals with dyslexia (IDDs) learned the categories more slowly than typically developing (TD) individuals. 
TD individuals showed adaptation to the repeated sounds that paralleled the category learning in their primary auditory cortex (PAC) and other higher-level regions. In dyslexia, no brain region showed such adaptation. We suggest that poor learning of sound statistics in sensory regions may underlie the poor representations of both speech and nonspeech categories in dyslexia.
Subject(s)
Adaptation, Physiological/physiology , Auditory Cortex/physiopathology , Dyslexia/physiopathology , Learning/physiology , Pitch Perception/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Sound , Speech Perception/physiologyABSTRACT
A pioneering study by Volkmann (1858) revealed that training on a tactile discrimination task improved task performance, indicative of tactile learning, and that such tactile learning transferred from trained to untrained body parts. However, the neural mechanisms underlying tactile learning and its transfer have remained unclear. We trained groups of human subjects (female and male) in daily sessions on a tactile discrimination task, stimulating either the palm of the right hand or the sole of the right foot. Task performance before training was similar between the palm and sole. Posttraining transfer of tactile learning was greater from the trained right sole to the untrained right palm than from the trained right palm to the untrained right sole. Functional magnetic resonance imaging (fMRI) and multivariate pattern classification analysis revealed that the somatotopic representation of the right palm in contralateral primary somatosensory cortex (SI) was coactivated during tactile stimulation of the right sole. More pronounced coactivation in the cortical representation of the right palm was associated with lower tactile performance for tactile stimulation of the right sole and with more pronounced subsequent transfer of tactile learning from the trained right sole to the untrained right palm. In contrast, coactivation of the cortical sole representation during tactile stimulation of the palm was less pronounced, and no association with tactile performance or subsequent transfer of tactile learning was found. These results indicate that tactile learning may transfer to untrained body parts that are coactivated to support tactile learning with the trained body part.SIGNIFICANCE STATEMENT Perceptual skills such as the discrimination of tactile cues can improve by means of training, indicative of perceptual learning and sensory plasticity. 
However, it has remained unclear whether, and if so how, such perceptual learning can occur when the training task is very difficult. Here, we show for tactile perceptual learning that the representation of the palm of the hand in primary somatosensory cortex (SI) is coactivated to support learning of a difficult tactile discrimination task involving tactile stimulation of the sole of the foot. Such cortical coactivation of an untrained body part to support tactile learning with a trained body part might be critically involved in the subsequent transfer of tactile learning between the trained and untrained body parts.