Results 1 - 20 of 73
1.
Front Neurosci ; 17: 1228506, 2023.
Article in English | MEDLINE | ID: mdl-37942141

ABSTRACT

Introduction: Processing the wealth of sensory information from the surrounding environment is a vital human function with the potential to support learning, advance social interactions, and promote safety and well-being. Methods: To elucidate the underlying processes governing these activities, we measured neurophysiological responses to patterned stimulus sequences during a sound categorization task, evaluating attention effects on implicit learning, sound categorization, and speech perception. Using a unique experimental design, we uncoupled conceptual categorical effects from stimulus-specific effects by presenting categorical stimulus tokens that did not physically repeat. Results: We found effects of implicit learning, categorical habituation, and a speech perception bias when the sounds were attended and the listeners performed a categorization task (task-relevant). In contrast, there was no evidence of a speech perception bias, implicit learning of the structured sound sequence, or repetition suppression to repeated within-category sounds (no categorical habituation) when participants passively listened to the sounds while watching a silent closed-captioned video (task-irrelevant); under these no-task conditions, the scalp-recorded brain components likewise showed no indication of category perception. Discussion: These results demonstrate that attention is required to maintain category identification and the expectations induced by a structured sequence when conceptual information must be extracted from stimuli that are acoustically distinct. Taken together, these striking attention effects support the theoretical view that top-down control is required to initiate expectations for higher-level cognitive processing.

2.
Otol Neurotol ; 44(10): 1100-1105, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37758317

ABSTRACT

OBJECTIVE: To evaluate long-term effects of COVID-19 on auditory and vestibular symptoms in a diverse cohort impacted by the initial 2020 COVID-19 infection in the pandemic's epicenter, before vaccine availability. STUDY DESIGN: Cohort study of individuals with confirmed COVID-19 infection, diagnosed in the March-May 2020 infection wave. A randomized, retrospective chart review of 1,352 individuals was performed to identify those with documented new or worsening auditory (aural fullness, tinnitus, hyperacusis, hearing loss) or vestibular (dizziness, vertigo) symptoms. Those with documented symptoms (613 of the 1,352 initial cohort) were contacted for a follow-up telephone survey in 2021-2022 to obtain self-report of the aforementioned symptoms. SETTING: Academic tertiary hospital system in Bronx, NY. PATIENTS: Adults 18 to 99 years old with confirmed COVID-19 infection, alive at time of review. One hundred forty-eight charts were excluded for restricted access, incomplete data, no COVID-19 swab, or death by the time of review. INTERVENTION: Confirmed COVID-19 infection, March to May 2020. MAIN OUTCOME MEASURES: Auditory and vestibular symptoms documented in 2020 medical records and by self-report on the 2021-2022 survey. RESULTS: Among the 74 individuals with documented symptoms during the first 2020 COVID-19 wave who participated in the 2021-2022 follow-up survey, 58% had documented vestibular symptoms initially in 2020, whereas 43% reported vestibular symptoms on the 2021-2022 survey (p = 0.10). In contrast, 9% had documented auditory symptoms initially in 2020, and 55% reported auditory symptoms on the 2021-2022 survey (p < 0.01). CONCLUSIONS: COVID-19 may impact vestibular symptoms early and persistently, whereas auditory effects may have a more pronounced long-term impact, suggesting the importance of continually assessing COVID-19 patients.


Subject(s)
COVID-19 , Tinnitus , Adult , Humans , Adolescent , Young Adult , Middle Aged , Aged , Aged, 80 and over , Retrospective Studies , Cohort Studies , Vertigo/diagnosis , Tinnitus/epidemiology , Tinnitus/etiology , Tinnitus/diagnosis
3.
Acad Pediatr ; 22(4): 518-525, 2022.
Article in English | MEDLINE | ID: mdl-34896271

ABSTRACT

BACKGROUND: Developmental language disorder (DLD) often remains undetected until children shift from 'learning to read' to 'reading to learn,' around 9 years of age. Mono- and bilingual children with DLD frequently have co-occurring reading, attention, and related difficulties compared to children with typical language development (TLD). Data for mono- and bilingual children with DLD and TLD would aid differentiation of language differences versus disorders in bilingual children. OBJECTIVE: We conducted a scoping review of descriptive research on mono- and bilingual children younger and older than 9 years (<9 and ≥9 years old) with DLD versus TLD, and related skills (auditory processing, attention, cognition, executive function, and reading). DATA SOURCES: We searched PubMed for the terms "bilingual" and "language disorders" or "impairment" and "child[ren]" from August 1, 1979 through October 1, 2018. CHARTING METHODS: Two abstracters charted all search results. Main exclusions were secondary data/reviews, special populations, intervention studies, and case studies/series. Abstracted data included age, related-skills measures, and the four language groups of participants: monolingual DLD, monolingual TLD, bilingual DLD, and bilingual TLD. RESULTS: Of 366 articles, 159 (43%) met inclusion criteria. Relatively few included all 4 language groups (14%, n = 22), co-occurring difficulties other than nonverbal intelligence (31%, n = 49) or reading (32%, n = 51), or any 9-18 year-olds (31%, n = 48). Just 5 (3%) included only 9-18 year-olds. Among studies with any 9-18 year-olds, just 4 (8%, 4/48) included all 4 language groups. CONCLUSIONS: Future research should include mono- and bilingual children with both DLD and TLD, beyond 8 years of age, along with data about their related skills.


Subject(s)
Language Development Disorders , Multilingualism , Child , Executive Function , Humans , Language , Language Development
4.
Front Hum Neurosci ; 15: 747769, 2021.
Article in English | MEDLINE | ID: mdl-34803633

ABSTRACT

Predictable rhythmic structure is important to most ecologically relevant sounds for humans, such as the rhythms found in speech and music. This study addressed the question of how rhythmic predictions are maintained in the auditory system when multiple perceptual interpretations occur simultaneously and emanate from the same sound source. We recorded the electroencephalogram (EEG) while presenting participants with a tone sequence that had two different tone feature patterns, one based on sequential rhythmic variation in tone duration and the other on sequential rhythmic variation in tone intensity. Participants were presented with the same sound sequences and were instructed either to listen for the intensity pattern (ignoring fluctuations in duration) and press a response key upon detecting pattern deviants (attend intensity pattern task); to listen for the duration pattern (ignoring fluctuations in intensity) and press a button upon detecting duration pattern deviants (attend duration pattern task); or to watch a movie and ignore the sounds presented to their ears (attend visual task). Both intensity and duration patterns occurred predictably 85% of the time; thus, the key question was how the brain treated the irrelevant feature patterns (standards and deviants) while an auditory or visual task was being performed. We expected that attended standards and deviants would evoke a more robust brain response than the unattended feature patterns. Instead, we found that neural entrainment to the rhythm of the attended standard patterns had similar power to entrainment to the unattended standard patterns. In addition, the infrequent pattern deviants elicited the event-related brain potential called the mismatch negativity (MMN) component. The MMN elicited by task-relevant feature pattern deviants had an amplitude similar to MMNs elicited by pattern deviants that went unattended either because they were not the target pattern or because the participant ignored the sounds and watched a movie. Thus, these results demonstrate that the brain tracks multiple predictions about the complexities in sound streams and can automatically track and detect deviations with respect to these predictions. This capability would be useful for switching attention rapidly among multiple objects in a busy auditory scene.
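To make the dual-pattern design concrete, here is a minimal sketch of how such a stimulus sequence could be generated. The specific duration and intensity values, the pattern lengths, and applying the 15% deviant rate per pattern cycle are illustrative assumptions, not the study's published parameters.

```python
import random

DURATIONS = [50, 50, 100]    # ms; hypothetical duration pattern
INTENSITIES = [60, 70, 60]   # dB; hypothetical intensity pattern
P_DEVIANT = 0.15             # each pattern repeats predictably 85% of the time

def make_sequence(n_cycles, seed=0):
    """Generate (duration, intensity) tone pairs carrying two
    independent rhythmic feature patterns, each with rare deviants."""
    rng = random.Random(seed)
    tones = []
    for _ in range(n_cycles):
        durations = list(DURATIONS)
        intensities = list(INTENSITIES)
        if rng.random() < P_DEVIANT:          # duration-pattern deviant
            durations[-1] = DURATIONS[0]      # break the expected long tone
        if rng.random() < P_DEVIANT:          # intensity-pattern deviant
            intensities[1] = INTENSITIES[0]   # break the expected loud tone
        tones.extend(zip(durations, intensities))
    return tones

print(make_sequence(4))
```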

5.
Psychophysiology ; 58(9): e13875, 2021 09.
Article in English | MEDLINE | ID: mdl-34110020

ABSTRACT

The auditory system frequently encounters ambiguous sound input that can be perceived in multiple ways. The current study investigated the role of explicit knowledge in modulating how sounds are represented in auditory memory for a bistable sound sequence that could be perceived equally well as integrated or segregated. We hypothesized that the dominant percept of the bistable sequence would suppress representation of the alternative perceptual organization as a function of how much top-down knowledge the listener had about the structure of the sequence. Performance measures and event-related brain potentials were compared between the first half of the experiment, when participants had explicit knowledge about one perceptual organization, and the second half, when they had explicit knowledge of both. We expected that this knowledge would modify the brain response to the alternative percept of the bistable sequence. However, that did not occur. When participants were performing one task, with no explicit knowledge of the bistable structure of the sequence, both integrated and segregated percepts were represented in auditory working memory. This demonstrates that explicit knowledge about the sounds is not a necessary factor for deriving and maintaining representations of multiple sound organizations within a complex sound environment. Passive attention operates in parallel with active or selective attention to maintain consistent representations of the environment, representations that may or may not be useful for task performance. This suggests a highly adaptive system useful in everyday listening situations where the listener has no prior knowledge about how the sound environment is structured.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Evoked Potentials/physiology , Memory, Short-Term/physiology , Psychomotor Performance/physiology , Adult , Electroencephalography , Female , Humans , Male , Young Adult
6.
Cereb Cortex ; 31(6): 3136-3152, 2021 05 10.
Article in English | MEDLINE | ID: mdl-33683317

ABSTRACT

A recent formulation of predictive coding theory proposes that a subset of neurons in each cortical area encodes sensory prediction errors, the difference between predictions relayed from higher cortex and the sensory input. Here, we test for evidence of prediction error responses in spiking activity and local field potentials (LFP) recorded in primary visual cortex and area V4 of macaque monkeys, and in complementary electroencephalographic (EEG) scalp recordings in human participants. We presented a fixed sequence of visual stimuli on most trials and violated the expected ordering on a small subset of trials. Under predictive coding theory, pattern-violating stimuli should trigger robust prediction errors, but we found that spiking, LFP, and EEG responses to expected and pattern-violating stimuli were nearly identical. Our results challenge the assertion that a fundamental computational motif in sensory cortex is to signal prediction errors, at least those based on predictions derived from temporal patterns of visual stimulation.


Subject(s)
Electroencephalography/methods , Photic Stimulation/methods , Primary Visual Cortex/physiology , Visual Cortex/physiology , Adult , Animals , Electrodes, Implanted , Evoked Potentials, Visual/physiology , Female , Forecasting , Humans , Macaca , Male , Young Adult
7.
Neuroimage ; 225: 117472, 2021 01 15.
Article in English | MEDLINE | ID: mdl-33099012

ABSTRACT

Learning to anticipate future states of the world based on statistical regularities in the environment is a key component of perception and is vital for the survival of many organisms. Such statistical learning and prediction are crucial for acquiring language and music appreciation. Importantly, learned expectations can be implicitly derived from exposure to sensory input, without requiring explicit information regarding contingencies in the environment. Whereas many previous studies of statistical learning have demonstrated larger neuronal responses to unexpected versus expected stimuli, the neuronal bases of the expectations themselves remain poorly understood. Here we examined behavioral and neuronal signatures of learned expectancy via human scalp-recorded event-related brain potentials (ERPs). Participants were instructed to listen to a series of sounds and press a response button as quickly as possible upon hearing a target noise burst, which was either reliably or unreliably preceded by one of three pure tones in low-, mid-, and high-frequency ranges. Participants were not informed about the statistical contingencies between the preceding tone 'cues' and the target. Over the course of a stimulus block, participants responded more rapidly to reliably cued targets. This behavioral index of learned expectancy was paralleled by a negative ERP deflection, designated as a neuronal contingency response (CR), which occurred immediately prior to the onset of the target. The amplitude and latency of the CR were systematically modulated by the strength of the predictive relationship between the cue and the target. Re-averaging ERPs with respect to the latency of behavioral responses revealed no consistent relationship between the CR and the motor response, suggesting that the CR represents a neuronal signature of learned expectancy or anticipatory attention. Our results demonstrate that statistical regularities in an auditory input stream can be implicitly learned and exploited to influence behavior. Furthermore, we uncover a potential 'prediction signal' that reflects this fundamental learning process.
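The re-averaging analysis mentioned above, aligning epochs to behavioral responses rather than to stimulus onsets, can be illustrated with a minimal sketch. The sampling rate, epoch window, and all signal/event arrays below are hypothetical placeholders, not the study's data or pipeline.

```python
import numpy as np

fs = 500                                       # Hz; assumed sampling rate
rng = np.random.default_rng(0)
eeg = rng.standard_normal(60 * fs)             # one channel, 60 s of fake EEG
stim_times = np.arange(1.0, 55.0, 2.0)         # stimulus onsets, in seconds
rts = 0.3 + 0.1 * rng.random(len(stim_times))  # simulated reaction times, s

def epoch_average(signal, event_times, pre=0.2, post=0.6):
    """Average fixed-length segments time-locked to the given events."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [signal[int(t * fs) - n_pre: int(t * fs) + n_post]
              for t in event_times]
    return np.mean(epochs, axis=0)

erp_stimulus_locked = epoch_average(eeg, stim_times)        # standard ERP
erp_response_locked = epoch_average(eeg, stim_times + rts)  # re-averaged ERP
```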


Subject(s)
Auditory Perception/physiology , Evoked Potentials/physiology , Learning/physiology , Acoustic Stimulation , Adult , Attention , Brain/physiology , Cues , Electroencephalography , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Music
8.
Elife ; 9, 2020 10 12.
Article in English | MEDLINE | ID: mdl-33043884

ABSTRACT

A neural code adapted to the statistical structure of sensory cues may optimize perception. We investigated whether interaural time difference (ITD) statistics inherent in natural acoustic scenes are parameters determining spatial discriminability. The natural ITD rate of change across azimuth (ITDrc) and ITD variability over time (ITDv) were combined in a Fisher information statistic to assess the amount of azimuthal information conveyed by this sensory cue. We hypothesized that natural ITD statistics underlie the neural code for ITD and thus influence spatial perception. To test this hypothesis, sounds with invariant statistics were presented to measure human spatial discriminability and spatial novelty detection. Human auditory spatial perception showed correlation with natural ITD statistics, supporting our hypothesis. Further analysis showed that these results are consistent with classic models of ITD coding and can explain the ITD tuning distribution observed in the mammalian brainstem.
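As a sketch of how a cue's rate of change across azimuth and its variability over time can be combined into a Fisher-information measure of spatial information, the notation below is ours rather than the paper's exact formulation:

```latex
% Sketch of a Fisher-information statistic for ITD (notation ours):
% information about azimuth \theta grows with how fast ITD changes
% across azimuth and shrinks with the cue's temporal variability.
\[
  \mathrm{FI}(\theta) \;\propto\;
  \frac{\bigl(\mathrm{ITDrc}(\theta)\bigr)^{2}}{\mathrm{ITDv}(\theta)},
  \qquad
  \mathrm{ITDrc}(\theta) = \frac{\partial\,\mathrm{ITD}}{\partial\theta},
\]
% so predicted discriminability scales as \sqrt{\mathrm{FI}(\theta)}.
```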


When a person hears a sound, how do they work out where it is coming from? A sound coming from your right will reach your right ear a few fractions of a millisecond earlier than your left. The brain uses this difference, known as the interaural time difference or ITD, to locate the sound. But humans are also much better at localizing sounds that come from sources in front of them than from sources by their sides. This may be due in part to differences in the number of neurons available to detect sounds from these different locations. It may also reflect differences in the rates at which those neurons fire in response to sounds. But these factors alone cannot explain why humans are so much better at localizing sounds in front of them. Pavão et al. showed that the brain has evolved the ability to detect natural patterns that exist in sounds as a result of their location, and to use those patterns to optimize the spatial perception of sounds. They found that the way in which the head and inner ear filter incoming sounds has two consequences for how we perceive them. First, the change in ITD for sounds coming from different sources in front of a person is greater than for sounds coming from their sides. Second, the ITD for sounds that originate in front of a person varies more over time than the ITD for sounds coming from the periphery. By playing sounds to healthy volunteers while removing these differences, Pavão et al. found that natural ITD statistics were correlated with a person's ability to tell where a sound was coming from. By revealing the features the brain uses to determine the location of sounds, this work could ultimately lead to the development of more effective hearing aids. The results also provide clues to how other senses, including vision, may have evolved to respond optimally to the environment.


Subject(s)
Auditory Perception/physiology , Models, Neurological , Models, Statistical , Sound Localization , Adult , Auditory Threshold , Biological Evolution , Cochlea/physiology , Cues , Female , Humans , Male , Time
9.
Front Psychol ; 11: 1155, 2020.
Article in English | MEDLINE | ID: mdl-32655436

ABSTRACT

The ability to distinguish among different types of sounds in the environment and to identify sound sources is a fundamental skill of the auditory system. This study tested responses to sounds by stimulus category (speech, music, and environmental) in adults with normal hearing to determine under what task conditions there was a processing advantage for speech. We hypothesized that speech sounds would be processed faster and more accurately than non-speech sounds under specific listening conditions and different behavioral goals. Thus, we used three different task conditions allowing us to compare detection and identification of sound categories in an auditory oddball paradigm and in a repetition-switch category paradigm. We found that response time and accuracy were modulated by the specific task demands. The sound category itself had no effect on sound detection outcomes but had a pronounced effect on sound identification. Faster and more accurate responses to speech were found only when identifying sounds. We demonstrate a speech processing "advantage" when identifying the sound category among non-categorical sounds and when detecting and identifying among categorical sounds. Thus, overall, our results are consistent with a theory of speech processing that relies on specialized systems distinct from music and other environmental sounds.

10.
Psychophysiology ; 57(2): e13487, 2020 02.
Article in English | MEDLINE | ID: mdl-31578762

ABSTRACT

Although attention has been shown to enhance neural representations of selected inputs, the fate of unselected background sounds is still debated. The goal of the current study was to understand how processing resources are distributed among attended and unattended sounds during auditory scene analysis. We used a three-stream paradigm with four acoustic features uniquely defining each sound stream (frequency, envelope shape, spatial location, tone quality). We manipulated task load by having participants perform a difficult auditory task and an easy movie-viewing task with the same set of sounds in separate conditions. The mismatch negativity (MMN) component of event-related brain potentials (ERPs) was measured to evaluate sound processing in both conditions. We found no effect of task demands on unattended sound processing: MMNs were elicited by unattended deviants during both low- and high-load task conditions. A key factor in this result was the use of unique tone feature combinations to distinguish each of the three sound streams, strengthening the segregation of streams. In the auditory task, the P3b component demonstrated a two-stage process of target evaluation. Thus, these results, in conjunction with results of previous studies, suggest that stimulus-driven factors that strengthen stream segregation can free up processing capacity for higher-level analyses. The results illustrate the interactive nature of top-down and stimulus-driven processes in stream formation, supporting a distributive theory of attention that balances the strength of the bottom-up input with perceptual goals in analyzing the auditory scene.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Electroencephalography , Evoked Potentials/physiology , Executive Function/physiology , Visual Perception/physiology , Adult , Electrooculography , Event-Related Potentials, P300/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Male
11.
J Clin Exp Neuropsychol ; 41(8): 814-831, 2019 10.
Article in English | MEDLINE | ID: mdl-31156064

ABSTRACT

Objective: The purpose of this study was to characterize post-chemotherapy sensory, memory, and attention abilities in childhood survivors of acute lymphoblastic leukemia (ALL) to better understand how treatment affects cognitive functioning. Methods: Eight ALL survivors and eight age-matched, healthy children between 5 and 11 years of age participated in the study. Among the ALL survivors, a median of 63 days (range 22-267 days) elapsed between completion of chemotherapy and this assessment. Sounds were presented in an oddball paradigm while recording the electroencephalogram in separate conditions of passive listening and active task performance. To assess different domains of cognition, we measured event-related brain potentials (ERPs) reflecting sensory processing (P1 component), working memory (mismatch negativity [MMN] component), attentional orienting (P3a), and target detection (P3b component) in response to the sounds. We also measured sound discrimination and response speed performance. Results: Relative to control subjects, ALL survivors had poorer performance on auditory tasks, as well as decreased amplitude of the P1, MMN, P3a, and P3b components. ALL survivors also did not exhibit the amplitude gain typically observed in the sensory P1 component when attending to the sound input compared to passive listening. Conclusions: Atypical responses were observed in brain processes associated with sensory discrimination, auditory working memory, and attentional control in pediatric ALL survivors, indicating deficiencies in all cognitive domains compared to age-matched controls. Significance: ERPs differentiated aspects of cognitive functioning, which may provide a useful tool for assessing recovery and the risk of post-chemotherapy cognitive deficiencies in young children. The decreased MMN amplitude in ALL survivors may indicate N-methyl-D-aspartate (NMDA) receptor dysfunction induced by methotrexate, and thus provides a potential therapeutic target for chemotherapy-associated cognitive impairments.


Subject(s)
Antineoplastic Agents/adverse effects , Brain/drug effects , Cancer Survivors/psychology , Cognition Disorders/chemically induced , Evoked Potentials/drug effects , Precursor Cell Lymphoblastic Leukemia-Lymphoma/drug therapy , Sensation Disorders/chemically induced , Adolescent , Adult , Antineoplastic Agents/therapeutic use , Attention/drug effects , Attention/physiology , Brain/physiopathology , Child , Child, Preschool , Cognition Disorders/diagnosis , Cognition Disorders/physiopathology , Cognition Disorders/psychology , Electroencephalography/drug effects , Female , Follow-Up Studies , Humans , Male , Memory, Short-Term/drug effects , Memory, Short-Term/physiology , Precursor Cell Lymphoblastic Leukemia-Lymphoma/complications , Precursor Cell Lymphoblastic Leukemia-Lymphoma/physiopathology , Precursor Cell Lymphoblastic Leukemia-Lymphoma/psychology , Reaction Time/drug effects , Reaction Time/physiology , Sensation Disorders/diagnosis , Sensation Disorders/physiopathology , Sensation Disorders/psychology
12.
Front Psychol ; 9: 335, 2018.
Article in English | MEDLINE | ID: mdl-29623054

ABSTRACT

Speech perception behavioral research suggests that rates of sensory memory decay depend on stimulus properties at more than one level (e.g., acoustic level, phonemic level). The neurophysiology of sensory memory decay rate has rarely been examined in the context of speech processing. In a lexical tone study, we showed that long-term memory representation of lexical tone slows the decay rate of sensory memory for these tones. Here, we tested the hypothesis that long-term memory representation of vowels slows the rate of auditory sensory memory decay in a similar way. Event-related potential (ERP) responses were recorded to Mandarin non-words contrasting the vowels /i/ vs. /u/ and /y/ vs. /u/ from first-language (L1) Mandarin and L1 American English participants under short and long interstimulus interval (ISI) conditions (short ISI: an average of 575 ms; long ISI: an average of 2675 ms). Results revealed poorer discrimination of the vowel contrasts for English listeners than for Mandarin listeners, but with different patterns for behavioral perception and neural discrimination. As predicted, English listeners showed the poorest discrimination and identification for the vowel contrast /y/ vs. /u/, and poorer performance in the long ISI condition. In contrast to Yu et al. (2017), however, we found no effect of ISI reflected in the neural responses, specifically the mismatch negativity (MMN), P3a, and late negativity (LN) ERP amplitudes. We did see a language group effect, with Mandarin listeners generally showing larger MMN and English listeners showing larger P3a. The behavioral results revealed that native language experience plays a role in echoic sensory memory trace maintenance, but the failure to find an effect of ISI on the ERP results suggests that vowel and lexical tone memory traces decay at different rates. Highlights: We examined the interaction between auditory sensory memory decay and language experience. We compared MMN, P3a, LN, and behavioral responses under short vs. long interstimulus intervals. We found that, unlike for lexical tone contrasts, MMN, P3a, and LN responses to vowel contrasts are not influenced by lengthening the ISI to 2.6 s. We also found that the English listeners discriminated the non-native vowel contrast with lower accuracy under the long ISI condition.

13.
J Speech Lang Hear Res ; 60(10): 2989-3000, 2017 10 17.
Article in English | MEDLINE | ID: mdl-29049599

ABSTRACT

Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception, from passive processes that organize unattended input to attention effects that act at different levels of the system. The data show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions: A model of attention is provided that illustrates how the auditory system performs multilevel analyses involving interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video: http://cred.pubs.asha.org/article.aspx?articleid=2601618.


Subject(s)
Attention , Auditory Perception , Attention/physiology , Auditory Perception/physiology , Brain/physiology , Humans , Models, Neurological
14.
Neuroimage ; 159: 195-206, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28757195

ABSTRACT

Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech, and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving a univocal auditory pattern that can be listened to, despite hearing all sounds in a scene, are poorly understood. We therefore investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor, and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept.


Subject(s)
Auditory Perception/physiology , Brain/physiology , Acoustic Stimulation , Adult , Attention/physiology , Auditory Pathways/physiology , Electroencephalography , Female , Humans , Male , Young Adult
15.
J Cogn Neurosci ; 29(12): 2114-2122, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28850296

ABSTRACT

The theory of statistical learning has been influential in providing a framework for how humans learn to segment patterns of regularities from continuous sensory inputs, such as speech and music. This form of learning is based on statistical cues, such as the transition probabilities between successive sounds. However, the connection between statistical learning and brain measurements is not well understood. Here we focus on ERPs in the context of tone sequences that contain statistically cohesive melodic patterns. We hypothesized that implicit learning of statistical regularities would influence what was held in auditory working memory. We predicted that a wrong note occurring within a cohesive pattern (within-pattern deviant) would lead to a significantly larger brain signal than a wrong note occurring between cohesive patterns (between-pattern deviant), even though both deviant types were equally likely to occur with respect to the global tone sequence. We discuss this prediction within a simple Markov model framework that learns the transition probability regularities within the tone sequence (see the sketch below). Results show that the signal was stronger when cohesive patterns were violated, demonstrating that the transitional probabilities of the sequence influence the memory basis for melodic patterns. Our results thus characterize how informational units are stored in the auditory memory trace for deviance detection and provide new evidence about how the brain organizes sequential sound input in ways useful for perception.
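A minimal sketch of such a first-order Markov framework, assuming hypothetical tone labels and two cohesive triplet patterns: estimate transition probabilities from an exposure sequence, then score deviants by their surprisal.

```python
import math
import random
from collections import Counter

# Hypothetical cohesive triplet patterns; the study's actual stimuli differ.
patterns = [['A', 'B', 'C'], ['D', 'E', 'F']]
rng = random.Random(0)

# Exposure sequence: triplets concatenated in random order, so transitions
# within a triplet are certain while transitions between triplets are not.
sequence = [tone for _ in range(200) for tone in rng.choice(patterns)]

pair_counts = Counter(zip(sequence, sequence[1:]))
tone_counts = Counter(sequence[:-1])

def transition_p(prev, nxt):
    return pair_counts[(prev, nxt)] / tone_counts[prev]

def surprisal(prev, nxt):
    # Bits of surprise for hearing `nxt` after `prev`; floor avoids log(0).
    return -math.log2(max(transition_p(prev, nxt), 1e-12))

print(surprisal('A', 'B'))  # expected within-pattern transition: ~0 bits
print(surprisal('A', 'E'))  # within-pattern violation: very surprising
print(surprisal('C', 'D'))  # between-pattern transition: ~1 bit
```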


Subject(s)
Auditory Perception/physiology , Brain/physiology , Learning/physiology , Memory, Short-Term/physiology , Music , Pattern Recognition, Physiological/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials , Female , Humans , Male , Markov Chains , Models, Neurological , Neuropsychological Tests , Young Adult
16.
Front Neurosci ; 11: 95, 2017.
Article in English | MEDLINE | ID: mdl-28321179

ABSTRACT

Language experience enhances discrimination of speech contrasts at a behavioral-perceptual level, as well as at a pre-attentive level, as indexed by event-related potential (ERP) mismatch negativity (MMN) responses. The enhanced sensitivity could be the result of changes in acoustic resolution and/or long-term memory representations of the relevant information in the auditory cortex. To examine these possibilities, we used a short (ca. 600 ms) vs. long (ca. 2,600 ms) interstimulus interval (ISI) in a passive, oddball discrimination task while obtaining ERPs. These ISI differences were used to test whether cross-linguistic differences in processing Mandarin lexical tone are a function of differences in acoustic resolution and/or differences in long-term memory representations. Bisyllabic nonword tokens that differed in lexical tone categories were presented using a passive listening multiple oddball paradigm. Behavioral discrimination and identification data were also collected. The ERP results revealed robust MMNs to both easy and difficult lexical tone differences for both groups at short ISIs. At long ISIs, there was either no change or an enhanced MMN amplitude for the Mandarin group, but reduced MMN amplitude for the English group. In addition, the Mandarin listeners showed a larger late negativity (LN) discriminative response than the English listeners for lexical tone contrasts in the long ISI condition. Mandarin speakers outperformed English speakers in the behavioral tasks, especially under the long ISI conditions with the more similar lexical tone pair. These results suggest that the acoustic correlates of lexical tone are fairly robust and easily discriminated at short ISIs, when the auditory sensory memory trace is strong. At longer ISIs (beyond 2.5 s), language-specific experience is necessary for robust discrimination.

17.
Dev Sci ; 20(3), 2017 05.
Article in English | MEDLINE | ID: mdl-26841104

ABSTRACT

When a sound occurs at a predictable time, it is processed more efficiently. Predictability of the temporal structure of incoming sound has been found to influence the P3b of event-related potentials in young adults, such that highly predictable compared to less predictable input leads to earlier P3b peak latencies. In this study, we investigated the influence of predictability on target processing indexed by the P3b in children (10-12 years old) and young adults, using an oddball paradigm with two conditions of predictability (high and low). In the high-predictability condition, a high-pitched target tone occurred most of the time in the fifth position of a five-tone pattern (after four low-pitched non-target sounds), whereas in the low-predictability condition no such rule was implemented: the target tone occurred randomly following 2, 3, 4, 5, or 6 non-target tones. In both age groups, reaction time to predictable targets was faster than to non-predictable targets. Remarkably, this effect was largest in children. Consistent with the behavioral responses, the onset latency of the P3b response elicited by targets in both groups was earlier in the predictable than in the unpredictable condition. However, only the children had significantly earlier peak latency responses for predictable targets. Our results demonstrate that target stimulus predictability increases processing speed in children and adults even when predictability is only implicitly derived from the stimulus statistics. Children showed larger effects of predictability, seeming to benefit more from it for target detection.


Subject(s)
Anticipation, Psychological/physiology , Event-Related Potentials, P300/physiology , Age Factors , Child , Humans , Reaction Time/physiology , Young Adult
18.
Front Aging Neurosci ; 9: 414, 2017.
Article in English | MEDLINE | ID: mdl-29311902

ABSTRACT

The ability to select sound streams from background noise becomes challenging with age, even with normal peripheral auditory functioning. Reduced stream segregation ability has been reported in older compared to younger adults, but the reason for this difference is still unknown. The current study investigated the hypothesis that automatic sound processing is impaired with aging, which then contributes to the difficulty of actively selecting subsets of sounds in noisy environments. We presented a simple intensity oddball sequence in various conditions with irrelevant background sounds while recording EEG. The ability to detect the oddball tones depended on the ability to automatically or actively segregate the sounds into frequency streams. Listeners were able to actively segregate the sounds to perform the loudness detection task, but there was no indication of automatic segregation of background sounds while watching a movie. Thus, our results indicate that automatic processes are impaired in aging, which may explain more effortful listening and the greater demand on attentional systems when selecting sound streams in noisy environments.

19.
Brain Topogr ; 30(1): 136-148, 2017 01.
Article in English | MEDLINE | ID: mdl-27752799

ABSTRACT

The auditory mismatch negativity (MMN) component of event-related potentials (ERPs) has served as a neural index of auditory change detection. MMN is elicited by presentation of infrequent (deviant) sounds randomly interspersed among frequent (standard) sounds. Deviants elicit a larger negative deflection in the ERP waveform compared to the standard. There is considerable debate as to whether the neural mechanism of this change detection response is due to release from neural adaptation (neural adaptation hypothesis) or from a prediction error signal (predictive coding hypothesis). Previous studies have not been able to distinguish between these explanations because paradigms typically confound the two. The current study disambiguated effects of stimulus-specific adaptation from expectation violation using a unique stimulus design that compared expectation violation responses that did and did not involve stimulus change. The expectation violation response without the stimulus change differed in timing, scalp distribution, and attentional modulation from the more typical MMN response. There is insufficient evidence from the current study to suggest that the negative deflection elicited by the expectation violation alone includes the MMN. Thus, we offer a novel hypothesis that the expectation violation response reflects a fundamentally different neural substrate than that attributed to the canonical MMN.
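For readers unfamiliar with how the MMN discussed here is typically quantified, below is a minimal sketch of the canonical deviant-minus-standard difference wave. The sampling rate, trial counts, analysis window, and random arrays are hypothetical placeholders, not this study's data or pipeline.

```python
import numpy as np

fs = 500                                    # Hz; assumed sampling rate
rng = np.random.default_rng(0)
std_epochs = rng.standard_normal((300, 400))  # 300 standard trials x 800 ms
dev_epochs = rng.standard_normal((50, 400))   # 50 deviant trials x 800 ms

# MMN: subtract the average standard ERP from the average deviant ERP,
# then take the most negative point in a typical 100-250 ms window.
difference_wave = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)

t0, t1 = int(0.10 * fs), int(0.25 * fs)
mmn_amplitude = difference_wave[t0:t1].min()
mmn_latency_ms = (t0 + difference_wave[t0:t1].argmin()) / fs * 1000
```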


Subject(s)
Adaptation, Physiological/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Adult , Attention/physiology , Electroencephalography , Female , Humans , Male
20.
J Am Acad Audiol ; 27(6): 489-497, 2016 06.
Article in English | MEDLINE | ID: mdl-27310407

ABSTRACT

BACKGROUND: Frequency discrimination is often impaired in children developing language atypically. However, findings on the detection of small frequency changes in these children are conflicting. Previous studies of children's auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. PURPOSE: This study examined the perception of small frequency differences (∆ƒ) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. RESEARCH DESIGN: An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of ∆ƒ from the 1000-Hz base frequency. STUDY SAMPLE: Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. DATA COLLECTION AND ANALYSIS: Behavioral data collected using headphone delivery were analyzed using the sensitivity index d', calculated for three ∆ƒ magnitudes: 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d' and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed. RESULTS: TD children and children with APD and/or SLI differed in the detection of small tonal ∆ƒ. In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing or language test scores and the sensitivity index d' showed different strengths of correlation depending on the magnitude of ∆ƒ: auditory processing scores correlated more strongly with d' for the small ∆ƒ, while language scores correlated more strongly with d' for the large ∆ƒ. CONCLUSION: Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI. For the children with SLI, frequency discrimination performance also seemed to be affected by the labeling demands of the same-versus-different discrimination task.
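As a minimal illustration of the sensitivity index d' used in the analysis above: for a discrimination task scored by hits and false alarms, d' is the difference of the z-transformed rates. The clamping constant and example rates are illustrative assumptions, not values from the study.

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, eps=1e-3):
    """Sensitivity index: z(hit rate) - z(false-alarm rate).
    Rates are clamped away from 0 and 1 to keep the z-transform finite."""
    h = min(max(hit_rate, eps), 1 - eps)
    f = min(max(fa_rate, eps), 1 - eps)
    return norm.ppf(h) - norm.ppf(f)

# Illustrative only: a listener detecting a 5% frequency change.
print(d_prime(0.85, 0.20))   # ~1.88
```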


Subject(s)
Auditory Perception , Auditory Perceptual Disorders/diagnosis , Language Development Disorders/diagnosis , Child , Female , Hearing Tests , Humans , Language , Language Tests , Male