Results 1 - 20 of 34
1.
J Cogn Neurosci; 29(12): 2114-2122, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28850296

ABSTRACT

The theory of statistical learning has been influential in providing a framework for how humans learn to segment patterns of regularities from continuous sensory inputs, such as speech and music. This form of learning is based on statistical cues, such as the transition probabilities between sounds. However, the connection between statistical learning and brain measurements is not well understood. Here we focus on ERPs in the context of tone sequences that contain statistically cohesive melodic patterns. We hypothesized that implicit learning of statistical regularities would influence what was held in auditory working memory. We predicted that a wrong note occurring within a cohesive pattern (within-pattern deviant) would lead to a significantly larger brain signal than a wrong note occurring between cohesive patterns (between-pattern deviant), even though both deviant types were equally likely to occur with respect to the global tone sequence. We discuss this prediction within a simple Markov model framework that learns the transition probability regularities within the tone sequence. Results show that signal strength was stronger when cohesive patterns were violated and demonstrate that the transitional probability of the sequence influences the memory basis for melodic patterns. Our results thus characterize how informational units are stored in auditory memory trace for deviance detection and provide new evidence about how the brain organizes sequential sound input that is useful for perception.
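The simple Markov framework mentioned in the abstract can be sketched in a few lines (illustrative only: the tone labels and sequence below are invented, not the study's stimuli). Estimating first-order transition probabilities from a tone stream makes explicit why a deviant inside a cohesive pattern violates a stronger prediction than one at a pattern boundary.

```python
from collections import defaultdict

def transition_probabilities(sequence):
    """Estimate first-order Markov transition probabilities
    from a sequence of discrete tokens (e.g., tone labels)."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

# A stream built from two three-tone "words", ABC and DEF.
# Within-word transitions (A->B, B->C) are certain (p = 1.0);
# transitions at word boundaries are uncertain, so a wrong note
# inside a word violates a stronger prediction than one between words.
stream = list("ABCDEFABCABCDEF" * 20)
p = transition_probabilities(stream)
```

In this toy stream, `p['A']['B']` comes out at 1.0 while the successors of the word-final tone `C` split their probability mass, mirroring the within- versus between-pattern contrast.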


Subjects
Auditory Perception/physiology; Brain/physiology; Learning/physiology; Memory, Short-Term/physiology; Music; Pattern Recognition, Physiological/physiology; Acoustic Stimulation; Adult; Electroencephalography; Evoked Potentials; Female; Humans; Male; Markov Chains; Models, Neurological; Neuropsychological Tests; Young Adult
2.
Neuroimage; 159: 195-206, 2017 Oct 01.
Article in English | MEDLINE | ID: mdl-28757195

ABSTRACT

Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving a single coherent auditory pattern, despite hearing all sounds in a scene, are poorly understood. Here we investigated how competing sound organizations are simultaneously represented by specific brain activity patterns and how attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept.


Subjects
Auditory Perception/physiology; Brain/physiology; Acoustic Stimulation; Adult; Attention/physiology; Auditory Pathways/physiology; Electroencephalography; Female; Humans; Male; Young Adult
3.
Dev Sci; 20(3), 2017 May.
Article in English | MEDLINE | ID: mdl-26841104

ABSTRACT

When a sound occurs at a predictable time, it gets processed more efficiently. Predictability of the temporal structure of acoustic input has been found to influence the P3b of event-related potentials in young adults, such that highly predictable compared to less predictable input leads to earlier P3b peak latencies. Here we investigated the influence of predictability on target processing indexed by the P3b in children (10-12 years old) and young adults. We used an oddball paradigm with two conditions of predictability (high and low). In the high-predictability condition, a high-pitched target tone occurred most of the time in the fifth position of a five-tone pattern (after four low-pitched non-target sounds), whereas in the low-predictability condition, no such rule was implemented: the target tone occurred randomly following 2, 3, 4, 5, or 6 non-target tones. In both age groups, reaction time to predictable targets was faster than to non-predictable targets. Remarkably, this effect was largest in children. Consistent with the behavioral responses, the onset latency of the P3b response elicited by targets in both groups was earlier in the predictable than the unpredictable condition. However, only the children had significantly earlier peak latency responses for predictable targets. Our results demonstrate that target stimulus predictability increases processing speed in children and adults even when predictability is derived only implicitly from the stimulus statistics. Children showed larger predictability effects, suggesting that they benefit more from temporal predictability for target detection.
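The two predictability conditions can be sketched as sequence generators (a simplification: the labels are invented, and the high-predictability rule is made fully deterministic here, whereas the study placed the target in the fifth position only most of the time).

```python
import random

def high_predictability_block(n_targets, run_len=4):
    """Target 'T' always follows four non-targets 'n' in this sketch;
    in the actual paradigm the fifth-position rule held only most of
    the time."""
    return (['n'] * run_len + ['T']) * n_targets

def low_predictability_block(n_targets, seed=0):
    """Target 'T' follows a random run of 2-6 non-targets, so its
    position cannot be anticipated from the sequence structure."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_targets):
        seq.extend(['n'] * rng.choice([2, 3, 4, 5, 6]) + ['T'])
    return seq

high = high_predictability_block(40)
low = low_predictability_block(40)
```

Both blocks deliver the same number of targets; only the conditional structure preceding each target differs, which is what lets the paradigm isolate the effect of temporal predictability.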


Subjects
Anticipation, Psychological/physiology; Event-Related Potentials, P300/physiology; Age Factors; Child; Humans; Reaction Time/physiology; Young Adult
4.
Brain Topogr; 30(1): 136-148, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27752799

ABSTRACT

The auditory mismatch negativity (MMN) component of event-related potentials (ERPs) has served as a neural index of auditory change detection. MMN is elicited by presentation of infrequent (deviant) sounds randomly interspersed among frequent (standard) sounds. Deviants elicit a larger negative deflection in the ERP waveform compared to the standard. There is considerable debate as to whether the neural mechanism of this change detection response is due to release from neural adaptation (neural adaptation hypothesis) or from a prediction error signal (predictive coding hypothesis). Previous studies have not been able to distinguish between these explanations because paradigms typically confound the two. The current study disambiguated effects of stimulus-specific adaptation from expectation violation using a unique stimulus design that compared expectation violation responses that did and did not involve stimulus change. The expectation violation response without the stimulus change differed in timing, scalp distribution, and attentional modulation from the more typical MMN response. There is insufficient evidence from the current study to suggest that the negative deflection elicited by the expectation violation alone includes the MMN. Thus, we offer a novel hypothesis that the expectation violation response reflects a fundamentally different neural substrate than that attributed to the canonical MMN.


Subjects
Adaptation, Physiological/physiology; Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Acoustic Stimulation; Adult; Attention/physiology; Electroencephalography; Female; Humans; Male
5.
Brain Topogr; 27(4): 451-66, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24838819

ABSTRACT

Cognition is often affected in a variety of neuropsychiatric, neurological, and neurodevelopmental disorders. The neural discriminative response, reflected in mismatch negativity (MMN) and its magnetoencephalographic equivalent (MMNm), has been used as a tool to study a variety of disorders involving auditory cognition. MMN/MMNm is an involuntary brain response to auditory change or, more generally, to pattern regularity violation. For a number of disorders, MMN/MMNm amplitude to sound deviance has been shown to be attenuated or the peak-latency of the component prolonged compared to controls. This general finding suggests that while not serving as a specific marker to any particular disorder, MMN may be useful for understanding factors of cognition in various disorders, and has potential to serve as an indicator of risk. This review presents a brief history of the MMN, followed by a description of how MMN has been used to index auditory processing capability in a range of neuropsychiatric, neurological, and neurodevelopmental disorders. Finally, we suggest future directions for research to further enhance our understanding of the neural substrate of deviance detection that could lead to improvements in the use of MMN as a clinical tool.


Subjects
Brain/physiopathology; Cognition Disorders/physiopathology; Evoked Potentials, Auditory; Humans; Mental Disorders/physiopathology
6.
Front Neurosci; 17: 1228506, 2023.
Article in English | MEDLINE | ID: mdl-37942141

ABSTRACT

Introduction: Processing the wealth of sensory information from the surrounding environment is a vital human function with the potential to support learning, advance social interactions, and promote safety and well-being. Methods: To elucidate underlying processes governing these activities we measured neurophysiological responses to patterned stimulus sequences during a sound categorization task to evaluate attention effects on implicit learning, sound categorization, and speech perception. Using a unique experimental design, we uncoupled conceptual categorical effects from stimulus-specific effects by presenting categorical stimulus tokens that did not physically repeat. Results: We found effects of implicit learning, categorical habituation, and a speech perception bias when the sounds were attended and the listeners performed a categorization task (task-relevant). In contrast, there was no evidence of a speech perception bias, implicit learning of the structured sound sequence, or repetition suppression to repeated within-category sounds (no categorical habituation) when participants passively listened to the sounds and watched a silent closed-captioned video (task-irrelevant). Likewise, no indication of category perception was demonstrated in the scalp-recorded brain components when participants were watching a movie and had no task with the sounds. Discussion: These results demonstrate that attention is required to maintain category identification and expectations induced by a structured sequence when the conceptual information must be extracted from stimuli that are acoustically distinct. Taken together, these striking attention effects support the theoretical view that top-down control is required to initiate expectations for higher level cognitive processing.

7.
Otol Neurotol; 44(10): 1100-1105, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37758317

ABSTRACT

OBJECTIVE: To evaluate long-term effects of COVID-19 on auditory and vestibular symptoms in a diverse cohort impacted by the initial 2020 COVID-19 infection in the pandemic's epicenter, before vaccine availability. STUDY DESIGN: Cohort study of individuals with confirmed COVID-19 infection, diagnosed in the March-May 2020 infection wave. A randomized, retrospective chart review of 1,352 individuals was performed to identify those with documented new or worsening auditory (aural fullness, tinnitus, hyperacusis, hearing loss) or vestibular (dizziness, vertigo) symptoms. Those with documented symptoms (613 of the 1,352 initial cohort) were contacted for a follow-up telephone survey in 2021-2022 to obtain self-report of the aforementioned symptoms. SETTING: Academic tertiary hospital system in Bronx, NY. PATIENTS: Adults 18 to 99 years old with confirmed COVID-19 infection, alive at time of review. One hundred forty-eight charts were excluded for restricted access, incomplete data, no COVID-19 swab, or deceased at time of review. INTERVENTION: Confirmed COVID-19 infection, March to May 2020. MAIN OUTCOME MEASURES: Auditory and vestibular symptoms documented in 2020 medical records and by self-report on the 2021-2022 survey. RESULTS: Among the 74 individuals with documented symptoms during the first 2020 COVID-19 wave who participated in the 2021-2022 follow-up survey, 58% had documented vestibular symptoms initially in 2020, whereas 43% reported vestibular symptoms on the 2021-2022 survey (p = 0.10). In contrast, 9% had documented auditory symptoms initially in 2020 and 55% reported auditory symptoms on the 2021-2022 survey (p < 0.01). CONCLUSIONS: COVID-19 may impact vestibular symptoms early and persistently, whereas auditory effects may have more pronounced long-term impact, suggesting the importance of continually assessing COVID-19 patients.


Subjects
COVID-19; Tinnitus; Adult; Humans; Adolescent; Young Adult; Middle Aged; Aged; Aged, 80 and over; Retrospective Studies; Cohort Studies; Vertigo/diagnosis; Tinnitus/epidemiology; Tinnitus/etiology; Tinnitus/diagnosis
8.
Acad Pediatr; 22(4): 518-525, 2022.
Article in English | MEDLINE | ID: mdl-34896271

ABSTRACT

BACKGROUND: Developmental language disorder (DLD) often remains undetected until children shift from 'learning to read' to 'reading to learn,' around 9 years of age. Mono- and bilingual children with DLD frequently have co-occurring reading, attention, and related difficulties, compared to children with typical language development (TLD). Data for mono- and bilingual children with DLD and TLD would aid differentiation of language differences versus disorders in bilingual children. OBJECTIVE: We conducted a scoping review of descriptive research on mono- and bilingual children younger than and at least 9 years old with DLD versus TLD, and related skills (auditory processing, attention, cognition, executive function, and reading). DATA SOURCES: We searched PubMed for the terms "bilingual" and "language disorders" or "impairment" and "child[ren]" from August 1, 1979 through October 1, 2018. CHARTING METHODS: Two abstracters charted all search results. Main exclusions were: secondary data/reviews, special populations, intervention studies, and case studies/series. Abstracted data included age, related-skills measures, and four language groups of participants: monolingual DLD, monolingual TLD, bilingual DLD, and bilingual TLD. RESULTS: Of 366 articles, 159 (43%) met inclusion criteria. Relatively few (14%, n = 22) included all four language groups, co-occurring difficulties other than nonverbal intelligence (n = 49, 31%) or reading (n = 51, 32%), or any 9- to 18-year-olds (31%, n = 48). Just 5 (3%) included only 9- to 18-year-olds. Among studies with any 9- to 18-year-olds, just 4 (8%, 4/48) included all four language groups. CONCLUSIONS: Future research should include mono- and bilingual children with both DLD and TLD, beyond 8 years of age, along with data about their related skills.


Subjects
Language Development Disorders; Multilingualism; Child; Executive Function; Humans; Language; Language Development
9.
Psychophysiology; 58(9): e13875, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34110020

ABSTRACT

The auditory system frequently encounters ambiguous sound input that can be perceived in multiple ways. The current study investigated the role of explicit knowledge in modulating how sounds are represented in auditory memory for a bistable sound sequence that could be perceived equally as integrated or segregated. We hypothesized that the dominant percept of the bistable sequence would suppress representation of the alternative perceptual organization as a function of how much top-down knowledge the listener had about the structure of the sequence. Performance measures and event-related brain potentials were compared when participants had explicit knowledge about one perceptual organization in the first half of the experiment to when they had explicit knowledge of both in the second half. We hypothesized that knowledge would modify the brain response to the alternative percept of the bistable sequence. However, that did not occur. When participants were performing one task, with no explicit knowledge of the bistable structure of the sequence, both integrated and segregated percepts were represented in auditory working memory. This demonstrates that explicit knowledge about the sounds is not a necessary factor for deriving and maintaining representations of multiple sound organizations within a complex sound environment. Passive attention operates in parallel with active or selective attention to maintain consistent representations of the environment, representations that may or may not be useful for task performance. It suggests a highly adaptive system useful in everyday listening situations where the listener has no prior knowledge about how the sound environment is structured.


Subjects
Attention/physiology; Auditory Perception/physiology; Evoked Potentials/physiology; Memory, Short-Term/physiology; Psychomotor Performance/physiology; Adult; Electroencephalography; Female; Humans; Male; Young Adult
10.
Front Hum Neurosci; 15: 747769, 2021.
Article in English | MEDLINE | ID: mdl-34803633

ABSTRACT

Predictable rhythmic structure is important to most ecologically relevant sounds for humans, such as is found in the rhythm of speech or music. This study addressed the question of how rhythmic predictions are maintained in the auditory system when there are multiple perceptual interpretations occurring simultaneously and emanating from the same sound source. We recorded the electroencephalogram (EEG) while presenting participants with a tone sequence that had two different tone feature patterns, one based on sequential rhythmic variation in tone duration and the other on sequential rhythmic variation in tone intensity. Participants were presented with the same sound sequences and were instructed either to listen for the intensity pattern (ignoring fluctuations in duration) and press a response key upon detecting intensity-pattern deviants (attend intensity pattern task); to listen for the duration pattern (ignoring fluctuations in intensity) and press a response key upon detecting duration-pattern deviants (attend duration pattern task); or to watch a movie and ignore the sounds presented to their ears (attend visual task). Both intensity and duration patterns occurred predictably 85% of the time, thus the key question involved evaluating how the brain treated the irrelevant feature patterns (standards and deviants) while performing an auditory or visual task. We expected that task-based feature patterns would have a more robust brain response to attended standards and deviants than the unattended feature patterns. Instead, we found that the neural entrainment to the rhythm of the attended standard patterns had similar power to that of the unattended feature patterns. In addition, the infrequent pattern deviants elicited the event-related brain potential called the mismatch negativity component (MMN). The MMN elicited by task-based feature pattern deviants had an amplitude similar to MMNs elicited by unattended pattern deviants, whether the pattern was unattended because it was not the target pattern or because the participant ignored the sounds and watched a movie. Thus, these results demonstrate that the brain tracks multiple predictions about the complexities in sound streams and can automatically track and detect deviations with respect to these predictions. This capability would be useful for switching attention rapidly among multiple objects in a busy auditory scene.

11.
Front Psychol; 11: 1155, 2020.
Article in English | MEDLINE | ID: mdl-32655436

ABSTRACT

The ability to distinguish among different types of sounds in the environment and to identify sound sources is a fundamental skill of the auditory system. This study tested responses to sounds by stimulus category (speech, music, and environmental) in adults with normal hearing to determine under what task conditions there was a processing advantage for speech. We hypothesized that speech sounds would be processed faster and more accurately than non-speech sounds under specific listening conditions and different behavioral goals. Thus, we used three different task conditions allowing us to compare detection and identification of sound categories in an auditory oddball paradigm and in a repetition-switch category paradigm. We found that response time and accuracy were modulated by the specific task demands. The sound category itself had no effect on sound detection outcomes but had a pronounced effect on sound identification. Faster and more accurate responses to speech were found only when identifying sounds. We demonstrate a speech processing "advantage" when identifying the sound category among non-categorical sounds and when detecting and identifying among categorical sounds. Thus, overall, our results are consistent with a theory of speech processing that relies on specialized systems distinct from music and other environmental sounds.

12.
Elife; 9, 2020 Oct 12.
Article in English | MEDLINE | ID: mdl-33043884

ABSTRACT

A neural code adapted to the statistical structure of sensory cues may optimize perception. We investigated whether interaural time difference (ITD) statistics inherent in natural acoustic scenes are parameters determining spatial discriminability. The natural ITD rate of change across azimuth (ITDrc) and ITD variability over time (ITDv) were combined in a Fisher information statistic to assess the amount of azimuthal information conveyed by this sensory cue. We hypothesized that natural ITD statistics underlie the neural code for ITD and thus influence spatial perception. To test this hypothesis, sounds with invariant statistics were presented to measure human spatial discriminability and spatial novelty detection. Human auditory spatial perception showed correlation with natural ITD statistics, supporting our hypothesis. Further analysis showed that these results are consistent with classic models of ITD coding and can explain the ITD tuning distribution observed in the mammalian brainstem.
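One standard way to combine a cue's sensitivity and its variability into a discriminability measure is the Fisher information of a Gaussian observation: the squared rate of change of the cue with azimuth, divided by the cue's variance. The sketch below illustrates that general idea with made-up numbers; the paper's exact statistic and the values of ITDrc and ITDv may be defined differently.

```python
def itd_fisher_information(itd_rate_of_change, itd_std):
    """Fisher information about azimuth carried by the ITD cue,
    for a Gaussian observation model: FI = (dITD/dazimuth)^2 / var(ITD).
    The discrimination threshold scales as 1 / sqrt(FI), so larger FI
    means finer spatial discriminability."""
    return (itd_rate_of_change ** 2) / (itd_std ** 2)

# Illustrative (invented) numbers: ITD changes faster with azimuth
# near the front but is also more variable over time there; Fisher
# information weighs these two factors against each other.
fi_front = itd_fisher_information(itd_rate_of_change=10.0, itd_std=2.0)
fi_side = itd_fisher_information(itd_rate_of_change=2.0, itd_std=1.0)
```

With these numbers the frontal cue carries more azimuthal information despite its higher variability, because sensitivity enters the statistic squared.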


When a person hears a sound, how do they work out where it is coming from? A sound coming from your right will reach your right ear a few fractions of a millisecond earlier than your left. The brain uses this difference, known as the interaural time difference or ITD, to locate the sound. But humans are also much better at localizing sounds that come from sources in front of them than from sources by their sides. This may be due in part to differences in the number of neurons available to detect sounds from these different locations. It may also reflect differences in the rates at which those neurons fire in response to sounds. But these factors alone cannot explain why humans are so much better at localizing sounds in front of them. Pavão et al. showed that the brain has evolved the ability to detect natural patterns that exist in sounds as a result of their location, and to use those patterns to optimize the spatial perception of sounds. Pavão et al. showed that the way in which the head and inner ear filter incoming sounds has two consequences for how we perceive them. Firstly, the change in ITD for sounds coming from different sources in front of a person is greater than for sounds coming from their sides. And secondly, the ITD for sounds that originate in front of a person varies more over time than the ITD for sounds coming from the periphery. By playing sounds to healthy volunteers while removing these differences, Pavão et al. found that natural ITD statistics were correlated with a person's ability to tell where a sound was coming from. By revealing the features the brain uses to determine the location of sounds, the work of Pavão et al. could ultimately lead to the development of more effective hearing aids. The results also provide clues to how other senses, including vision, may have evolved to respond optimally to the environment.


Subjects
Auditory Perception/physiology; Models, Neurological; Models, Statistical; Sound Localization; Adult; Auditory Threshold; Biological Evolution; Cochlea/physiology; Cues; Female; Humans; Male; Time
13.
Psychophysiology; 57(2): e13487, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31578762

ABSTRACT

Although attention has been shown to enhance neural representations of selected inputs, the fate of unselected background sounds is still debated. The goal of the current study was to understand how processing resources are distributed among attended and unattended sounds during auditory scene analysis. We used a three-stream paradigm with four acoustic features uniquely defining each sound stream (frequency, envelope shape, spatial location, tone quality). We manipulated task load by having participants perform a difficult auditory task and an easy movie-viewing task with the same set of sounds in separate conditions. The mismatch negativity (MMN) component of event-related brain potentials (ERPs) was measured to evaluate sound processing in both conditions. We found no effect of task demands on unattended sound processing: MMNs were elicited by unattended deviants during both low- and high-load task conditions. A key factor of this result was the use of unique tone feature combinations to distinguish each of the three sound streams, strengthening the segregation of streams. In the auditory task, the P3b component demonstrates a two-stage process of target evaluation. Thus, these results, in conjunction with results of previous studies, suggest that stimulus-driven factors that strengthen stream segregation can free up processing capacity for higher-level analyses. The results illustrate the interactive nature of top-down and stimulus-driven processes in stream formation, supporting a distributive theory of attention that balances the strength of the bottom-up input with perceptual goals in analyzing the auditory scene.


Subjects
Attention/physiology; Auditory Perception/physiology; Electroencephalography; Evoked Potentials/physiology; Executive Function/physiology; Visual Perception/physiology; Adult; Electrooculography; Event-Related Potentials, P300/physiology; Evoked Potentials, Auditory/physiology; Female; Humans; Male
15.
J Clin Exp Neuropsychol; 41(8): 814-831, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31156064

ABSTRACT

Objective: The purpose of this study was to characterize post-chemotherapy sensory, memory, and attention abilities in childhood survivors of acute lymphoblastic leukemia (ALL) to better understand how treatment affects cognitive functioning. Methods: Eight ALL survivors and eight age-matched, healthy children between the ages of 5-11 years participated in the study. Among the ALL survivors, a median of 63 days (range 22-267 days) elapsed between completion of chemotherapy and this assessment. Sounds were presented in an oddball paradigm while recording the electroencephalogram in separate conditions of passive listening and active task performance. To assess different domains of cognition, we measured event-related brain potentials (ERPs) reflecting sensory processing (P1 component), working memory (mismatch negativity [MMN] component), attentional orienting (P3a), and target detection (P3b component) in response to the sounds. We also measured sound discrimination and response speed performance. Results: Relative to control subjects, ALL survivors had poorer performance on auditory tasks, as well as decreased amplitude of the P1, MMN, P3a, and P3b components. ALL survivors also did not exhibit the amplitude gain typically observed in the sensory P1 component when attending to the sound input compared to when passively listening. Conclusions: Atypical responses were observed in brain processes associated with sensory discrimination, auditory working memory, and attentional control in pediatric ALL survivors indicating deficiencies in all cognitive domains compared to age-matched controls. Significance: ERPs differentiated aspects of cognitive functioning, which may provide a useful tool for assessing recovery and risk of post-chemotherapy cognitive deficiencies in young children. 
The decreased MMN amplitude in ALL survivors may indicate N-methyl-D-aspartate (NMDA) receptor dysfunction induced by methotrexate, and thus provides a potential therapeutic target for chemotherapy-associated cognitive impairments.


Subjects
Antineoplastic Agents/adverse effects; Brain/drug effects; Cancer Survivors/psychology; Cognition Disorders/chemically induced; Evoked Potentials/drug effects; Precursor Cell Lymphoblastic Leukemia-Lymphoma/drug therapy; Sensation Disorders/chemically induced; Adolescent; Adult; Antineoplastic Agents/therapeutic use; Attention/drug effects; Attention/physiology; Brain/physiopathology; Child; Child, Preschool; Cognition Disorders/diagnosis; Cognition Disorders/physiopathology; Cognition Disorders/psychology; Electroencephalography/drug effects; Female; Follow-Up Studies; Humans; Male; Memory, Short-Term/drug effects; Memory, Short-Term/physiology; Precursor Cell Lymphoblastic Leukemia-Lymphoma/complications; Precursor Cell Lymphoblastic Leukemia-Lymphoma/physiopathology; Precursor Cell Lymphoblastic Leukemia-Lymphoma/psychology; Reaction Time/drug effects; Reaction Time/physiology; Sensation Disorders/diagnosis; Sensation Disorders/physiopathology; Sensation Disorders/psychology
16.
Front Psychol; 9: 335, 2018.
Article in English | MEDLINE | ID: mdl-29623054

ABSTRACT

Speech perception behavioral research suggests that rates of sensory memory decay are dependent on stimulus properties at more than one level (e.g., acoustic level, phonemic level). The neurophysiology of sensory memory decay rate has rarely been examined in the context of speech processing. In a lexical tone study, we showed that long-term memory representation of lexical tone slows the decay rate of sensory memory for these tones. Here, we tested the hypothesis that long-term memory representation of vowels slows the rate of auditory sensory memory decay in a similar way to that of lexical tone. Event-related potential (ERP) responses were recorded to Mandarin non-words contrasting the vowels /i/ vs. /u/ and /y/ vs. /u/ from first-language (L1) Mandarin and L1 American English participants under short and long interstimulus interval (ISI) conditions (short ISI: an average of 575 ms, long ISI: an average of 2675 ms). Results revealed poorer discrimination of the vowel contrasts for English listeners than Mandarin listeners, but with different patterns for behavioral perception and neural discrimination. As predicted, English listeners showed the poorest discrimination and identification for the vowel contrast /y/ vs. /u/, and poorer performance in the long ISI condition. In contrast to Yu et al. (2017), however, we found no effect of ISI reflected in the neural responses, specifically the mismatch negativity (MMN), P3a and late negativity (LN) ERP amplitudes. We did see a language group effect, with Mandarin listeners generally showing larger MMN and English listeners showing larger P3a. The behavioral results revealed that native language experience plays a role in echoic sensory memory trace maintenance, but the failure to find an effect of ISI on the ERP results suggests that vowel and lexical tone memory traces decay at different rates. Highlights: We examined the interaction between auditory sensory memory decay and language experience. We compared MMN, P3a, LN, and behavioral responses in short vs. long interstimulus intervals. We found that, unlike for lexical tone contrasts, MMN, P3a, and LN responses to vowel contrasts are not influenced by lengthening the ISI to 2.6 s. We also found that English listeners discriminated the non-native vowel contrast with lower accuracy under the long ISI condition.

17.
Brain Res ; 1144: 127-35, 2007 May 04.
Article in English | MEDLINE | ID: mdl-17306232

ABSTRACT

The task of assigning concurrent sounds to different auditory objects is known to depend on temporal and spectral cues. When tones of high and low frequencies are presented in alternation, they can be perceived as a single, integrated melody, or as two parallel, segregated melodic lines, according to the presentation rate and frequency distance between the sounds. At an intermediate distance, in the 'ambiguous' range, both percepts are possible. We conducted an electrophysiological experiment to determine whether an ambiguous sound organization could be modulated toward an integrated or segregated percept by the synchronous presentation of visual cues. Two sets of sounds (one high frequency and one low frequency) were interleaved. To promote integration or segregation, visual stimuli were synchronized to either the within-set frequency pattern or to the across-set intensity pattern. Elicitation of the mismatch negativity (MMN) component of event-related brain potentials was used to index the segregated organization, when no task was performed with the sounds. MMN was elicited only when the visual pattern promoted the segregation of the sounds. The results demonstrate cross-modal effects on auditory object perception in that sound ambiguity was resolved by synchronous presentation of visual stimuli, which promoted either an integrated or segregated perception of the sounds.


Subjects
Auditory Perception/physiology, Brain Mapping, Contingent Negative Variation/physiology, Cues (Psychology), Evoked Potentials/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Adult, Analysis of Variance, Dose-Response Relationship, Radiation, Electroencephalography/methods, Female, Functional Laterality/physiology, Humans, Male, Photic Stimulation/methods, Reaction Time/physiology
18.
J Speech Lang Hear Res ; 60(10): 2989-3000, 2017 10 17.
Article in English | MEDLINE | ID: mdl-29049599

ABSTRACT

Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported behavioral and electrophysiological data from adults with normal hearing are summarized to demonstrate attention effects on auditory perception, from passive processes that organize unattended input to attention effects that act at different levels of the system. The data show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions: A model of attention is provided that illustrates how the auditory system performs multilevel analyses involving interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video: http://cred.pubs.asha.org/article.aspx?articleid=2601618.


Subjects
Attention, Auditory Perception, Attention/physiology, Auditory Perception/physiology, Brain/physiology, Humans, Models, Neurological
19.
Front Aging Neurosci ; 9: 414, 2017.
Article in English | MEDLINE | ID: mdl-29311902

ABSTRACT

The ability to select sound streams from background noise becomes challenging with age, even with normal peripheral auditory functioning. Reduced stream segregation ability has been reported in older compared to younger adults, but the reason for this difference is still unknown. The current study investigated the hypothesis that automatic sound processing is impaired with aging, which then contributes to difficulty in actively selecting subsets of sounds in noisy environments. We presented a simple intensity oddball sequence in various conditions with irrelevant background sounds while recording EEG. The ability to detect the oddball tones depended on the ability to automatically or actively segregate the sounds into frequency streams. Listeners were able to actively segregate the sounds to perform the loudness detection task, but there was no indication of automatic segregation of background sounds while watching a movie. Thus, our results indicate impaired automatic processing in aging, which may explain more effortful listening and may tax attentional systems when selecting sound streams in noisy environments.

20.
Front Neurosci ; 11: 95, 2017.
Article in English | MEDLINE | ID: mdl-28321179

ABSTRACT

Language experience enhances discrimination of speech contrasts at a behavioral-perceptual level, as well as at a pre-attentive level, as indexed by event-related potential (ERP) mismatch negativity (MMN) responses. The enhanced sensitivity could be the result of changes in acoustic resolution and/or long-term memory representations of the relevant information in the auditory cortex. To examine these possibilities, we used a short (ca. 600 ms) vs. long (ca. 2,600 ms) interstimulus interval (ISI) in a passive, oddball discrimination task while obtaining ERPs. These ISI differences were used to test whether cross-linguistic differences in processing Mandarin lexical tone are a function of differences in acoustic resolution and/or differences in long-term memory representations. Bisyllabic nonword tokens that differed in lexical tone categories were presented using a passive listening multiple oddball paradigm. Behavioral discrimination and identification data were also collected. The ERP results revealed robust MMNs to both easy and difficult lexical tone differences for both groups at short ISIs. At long ISIs, there was either no change or an enhanced MMN amplitude for the Mandarin group, but a reduced MMN amplitude for the English group. In addition, the Mandarin listeners showed a larger late negativity (LN) discriminative response than the English listeners for lexical tone contrasts in the long ISI condition. Mandarin speakers outperformed English speakers in the behavioral tasks, especially under the long ISI conditions with the more similar lexical tone pair. These results suggest that the acoustic correlates of lexical tone are fairly robust and easily discriminated at short ISIs, when the auditory sensory memory trace is strong. At longer ISIs, beyond 2.5 s, language-specific experience is necessary for robust discrimination.
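The MMN reported throughout these studies is a difference wave: the averaged ERP to deviants minus the averaged ERP to standards, quantified over an analysis window. The sketch below illustrates that computation on simulated data; the component latencies, widths, and amplitudes are invented for illustration and are not the reported values.

```python
import numpy as np

fs = 500                              # sampling rate, Hz
t = np.arange(-0.1, 0.5, 1.0 / fs)   # epoch from -100 ms to 500 ms

def erp_peak(t, latency_s, width_s, amp_uv):
    """A Gaussian bump standing in for an ERP component."""
    return amp_uv * np.exp(-((t - latency_s) ** 2) / (2 * width_s ** 2))

# Averaged standard ERP: obligatory auditory response only.
standard = erp_peak(t, 0.10, 0.02, 2.0)
# Averaged deviant ERP: same response plus an added negativity near 180 ms.
deviant = standard + erp_peak(t, 0.18, 0.03, -1.5)

# MMN = deviant minus standard; amplitude taken as the mean voltage
# in a 150-250 ms analysis window.
mmn = deviant - standard
window = (t >= 0.15) & (t <= 0.25)
mmn_amplitude = mmn[window].mean()
print(f"MMN mean amplitude (150-250 ms): {mmn_amplitude:.2f} uV")
```

In practice the standard and deviant waveforms come from averaging many EEG epochs per stimulus type, and the analysis window is chosen around the observed MMN peak; the subtraction and windowed mean shown here are the core of the measure.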
