Results 1 - 20 of 105
1.
J Neurosci ; 42(2): 240-254, 2022 01 12.
Article in English | MEDLINE | ID: mdl-34764159

ABSTRACT

Temporal coherence of sound fluctuations across spectral channels is thought to aid auditory grouping and scene segregation. Although prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (i.e., the first auditory region supporting cross-channel processing over a wide frequency range). Accordingly, we hypothesized that aspects of temporal-coherence processing that could be realized in early auditory areas may shape speech understanding in noise. We then explored whether physiologically plausible computational models could account for results from a behavioral experiment that measured consonant categorization in different masking conditions. We tested whether within-channel masking of target-speech modulations predicted consonant confusions across the different conditions and whether predictions were improved by adding across-channel temporal-coherence processing mirroring the computations known to exist in the cochlear nucleus. Consonant confusions provide a rich characterization of error patterns in speech categorization, and are thus crucial for rigorously testing models of speech perception; however, to the best of our knowledge, they have not been used in prior studies of scene analysis. We find that within-channel modulation masking can reasonably account for category confusions, but that it fails when temporal fine structure cues are unavailable. However, the addition of across-channel temporal-coherence processing significantly improves confusion predictions across all tested conditions. Our results suggest that temporal-coherence processing strongly shapes speech understanding in noise and that physiological computations that exist early along the auditory pathway may contribute to this process. SIGNIFICANCE STATEMENT: Temporal coherence of sound fluctuations across distinct frequency channels is thought to be important for auditory scene analysis. Prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, and it was unknown whether speech understanding in noise may be shaped by across-channel processing that exists in earlier auditory areas. Using physiologically plausible computational modeling to predict consonant confusions across different listening conditions, we find that across-channel temporal coherence contributes significantly to scene analysis and speech perception and that such processing may arise in the auditory pathway as early as the brainstem. By virtue of providing a richer characterization of error patterns not obtainable with just intelligibility scores, consonant confusions yield unique insight into scene analysis mechanisms.
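To make the modeling idea concrete, here is a minimal Python sketch (an illustration, not the authors' model) of across-channel temporal coherence: envelopes are extracted from a crude band-pass filter bank standing in for cochlear channels, and channels carrying a shared modulation show high pairwise envelope correlation. The filter parameters and test signal are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def channel_envelopes(x, fs, center_freqs, bw_octaves=0.5):
    """Envelope of x in each band; a crude stand-in for cochlear filtering."""
    envs = []
    for cf in center_freqs:
        lo, hi = cf * 2 ** (-bw_octaves / 2), cf * 2 ** (bw_octaves / 2)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfiltfilt(sos, x))))
    return np.array(envs)                       # (n_channels, n_samples)

def temporal_coherence(envs):
    """Pairwise envelope correlation: high for comodulated channels."""
    z = (envs - envs.mean(1, keepdims=True)) / envs.std(1, keepdims=True)
    return z @ z.T / envs.shape[1]

fs = 16000
t = np.arange(int(0.5 * fs)) / fs
am = 1 + 0.8 * np.sin(2 * np.pi * 8 * t)        # shared 8 Hz modulation
x = am * (np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 2000 * t))
C = temporal_coherence(channel_envelopes(x, fs, [500, 1000, 2000]))
print(np.round(C, 2))                           # comodulated channels cohere
```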


Subjects
Auditory Pathways/physiology, Auditory Perception/physiology, Cochlea/physiology, Speech/physiology, Acoustic Stimulation, Auditory Threshold/physiology, Humans, Models, Neurological, Perceptual Masking
2.
Cereb Cortex ; 32(4): 855-869, 2022 02 08.
Article in English | MEDLINE | ID: mdl-34467399

ABSTRACT

Working memory (WM) supports the persistent representation of transient sensory information. Visual and auditory stimuli place different demands on WM and recruit different brain networks. Separate auditory- and visual-biased WM networks extend into the frontal lobes, but several challenges confront attempts to parcellate human frontal cortex, including fine-grained organization and between-subject variability. Here, we use differential intrinsic functional connectivity from 2 visual-biased and 2 auditory-biased frontal structures to identify additional candidate sensory-biased regions in frontal cortex. We then examine direct contrasts of task functional magnetic resonance imaging during visual versus auditory 2-back WM to validate those candidate regions. Three visual-biased and 5 auditory-biased regions are robustly activated bilaterally in the frontal lobes of individual subjects (N = 14, 7 women). These regions exhibit a sensory preference during passive exposure to task stimuli, and that preference is stronger during WM. Hierarchical clustering analysis of intrinsic connectivity among novel and previously identified bilateral sensory-biased regions confirms that they functionally segregate into visual and auditory networks, even though the networks are anatomically interdigitated. We also observe that the frontotemporal auditory WM network is highly selective and exhibits strong functional connectivity to structures serving non-WM functions, while the frontoparietal visual WM network hierarchically merges into the multiple-demand cognitive system.
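The clustering step described above can be illustrated with a small sketch. The connectivity matrix below is a toy example, not the study's data; it only shows how hierarchical clustering of region-by-region intrinsic connectivity would separate visual- and auditory-biased regions into two networks.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

regions = ["sPCS", "iPCS", "tgPCS", "cIFS"]      # candidate frontal regions
conn = np.array([[1.0, 0.7, 0.2, 0.1],           # toy connectivity: visual pair
                 [0.7, 1.0, 0.1, 0.2],           # correlates strongly,
                 [0.2, 0.1, 1.0, 0.6],           # auditory pair likewise
                 [0.1, 0.2, 0.6, 1.0]])
dist = 1.0 - conn                                 # correlation -> distance
condensed = dist[np.triu_indices_from(dist, k=1)] # form scipy's linkage expects
labels = fcluster(linkage(condensed, method="average"),
                  t=2, criterion="maxclust")
print(dict(zip(regions, labels)))                 # two networks recovered
```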


Subjects
Auditory Perception, Memory, Short-Term, Brain Mapping/methods, Female, Frontal Lobe/diagnostic imaging, Humans, Magnetic Resonance Imaging
3.
Ear Hear ; 43(1): 9-22, 2022.
Article in English | MEDLINE | ID: mdl-34751676

ABSTRACT

Following a conversation in a crowded restaurant or at a lively party poses immense perceptual challenges for some individuals with normal hearing thresholds. A number of studies have investigated whether noise-induced cochlear synaptopathy (CS; damage to the synapses between cochlear hair cells and the auditory nerve following noise exposure that does not permanently elevate hearing thresholds) contributes to this difficulty. A few studies have observed correlations between proxies of noise-induced CS and speech perception in difficult listening conditions, but many have found no evidence of a relationship. To understand these mixed results, we reviewed previous studies that have examined noise-induced CS and performance on speech perception tasks in adverse listening conditions in adults with normal or near-normal hearing thresholds. Our review suggests that superficially similar speech perception paradigms used in previous investigations actually placed very different demands on sensory, perceptual, and cognitive processing. Speech perception tests that use low signal-to-noise ratios and maximize the importance of fine sensory details (specifically, by using test stimuli for which lexical, syntactic, and semantic cues do not contribute to performance) are more likely to show a relationship to estimated CS levels. Thus, the current controversy as to whether or not noise-induced CS contributes to individual differences in speech perception under challenging listening conditions may be due in part to the fact that many of the speech perception tasks used in past studies are relatively insensitive to CS-induced deficits.


Subjects
Speech Perception, Speech, Acoustic Stimulation, Adult, Auditory Threshold/physiology, Humans, Individuality, Perceptual Masking, Speech Perception/physiology
4.
J Acoust Soc Am ; 151(5): 3219, 2022 05.
Article in English | MEDLINE | ID: mdl-35649920

ABSTRACT

Salient interruptions draw attention involuntarily. Here, we explored whether this effect depends on the spatial and temporal relationships between a target stream and interrupter. In a series of online experiments, listeners focused spatial attention on a target stream of spoken syllables in the presence of an otherwise identical distractor stream from the opposite hemifield. On some random trials, an interrupter (a cat "MEOW") occurred. Experiment 1 established that the interrupter, which occurred randomly in 25% of the trials in the hemifield opposite the target, degraded target recall. Moreover, a majority of participants exhibited this degradation for the first target syllable, which finished before the interrupter began. Experiment 2 showed that the effect of an interrupter was similar whether it occurred in the opposite or the same hemifield as the target. Experiment 3 found that the interrupter degraded performance slightly if it occurred before the target stream began but had no effect if it began after the target stream ended. Experiment 4 showed decreased interruption effects when the interruption frequency increased (50% of the trials). These results demonstrate that a salient interrupter disrupts recall of a target stream, regardless of its direction, especially if it occurs during a target stream.


Subjects
Mental Recall, Humans
5.
Proc Natl Acad Sci U S A ; 115(14): E3286-E3295, 2018 04 03.
Article in English | MEDLINE | ID: mdl-29555752

ABSTRACT

Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing.


Subjects
Attention/physiology, Auditory Perception/physiology, Auditory Threshold/physiology, Discrimination, Psychological, Hearing Loss, Sensorineural/physiopathology, Hearing Loss, Sensorineural/psychology, Space Perception/physiology, Adult, Case-Control Studies, Female, Humans, Male, Middle Aged, Models, Theoretical, Young Adult
6.
J Acoust Soc Am ; 150(3): 2230, 2021 09.
Article in English | MEDLINE | ID: mdl-34598642

ABSTRACT

A fundamental question in the neuroscience of everyday communication is how scene acoustics shape the neural processing of attended speech sounds and in turn impact speech intelligibility. While it is well known that the temporal envelopes in target speech are important for intelligibility, how the neural encoding of target-speech envelopes is influenced by background sounds or other acoustic features of the scene is unknown. Here, we combine human electroencephalography with simultaneous intelligibility measurements to address this key gap. We find that the neural envelope-domain signal-to-noise ratio in target-speech encoding, which is shaped by masker modulations, predicts intelligibility over a range of strategically chosen realistic listening conditions unseen by the predictive model. This provides neurophysiological evidence for modulation masking. Moreover, using high-resolution vocoding to carefully control peripheral envelopes, we show that target-envelope coding fidelity in the brain depends not only on envelopes conveyed by the cochlea, but also on the temporal fine structure (TFS), which supports scene segregation. Our results are consistent with the notion that temporal coherence of sound elements across envelopes and/or TFS influences scene analysis and attentive selection of a target sound. Our findings also inform speech-intelligibility models and technologies attempting to improve real-world speech communication.
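One plausible reading of an envelope-domain signal-to-noise ratio is sketched below under stated assumptions: a lagged ridge regression predicts EEG from the target envelope, and the SNR is the predictable EEG variance relative to the unpredictable residual. Function names, parameters, and the simulated data are illustrative, not the paper's pipeline.

```python
import numpy as np

def lagged(env, n_lags):
    """Design matrix of the envelope at lags 0..n_lags-1 samples."""
    X = np.zeros((len(env), n_lags))
    for k in range(n_lags):
        X[k:, k] = env[:len(env) - k]
    return X

def envelope_domain_snr(eeg, env, n_lags=32, lam=1e-2):
    """Ridge-predict EEG from the envelope; SNR = predicted/residual power."""
    X = lagged(env, n_lags)
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
    pred = X @ w
    return 10 * np.log10(np.var(pred) / np.var(eeg - pred))

rng = np.random.default_rng(0)
env = rng.random(2000)                           # simulated target envelope
eeg = np.convolve(env, [0.5, 0.3, 0.2], mode="same") + rng.normal(0, 1, 2000)
print(f"{envelope_domain_snr(eeg, env):.1f} dB")
```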


Subjects
Speech Intelligibility, Speech Perception, Acoustic Stimulation, Acoustics, Auditory Perception, Humans, Perceptual Masking, Signal-to-Noise Ratio
7.
J Acoust Soc Am ; 150(4): 3085, 2021 10.
Article in English | MEDLINE | ID: mdl-34717460

ABSTRACT

The ability to see a talker's face improves speech intelligibility in noise, provided that the auditory and visual speech signals are approximately aligned in time. However, the importance of spatial alignment between corresponding faces and voices remains unresolved, particularly in multi-talker environments. In a series of online experiments, we investigated this using a task that required participants to selectively attend a target talker in noise while ignoring a distractor talker. In experiment 1, we found improved task performance when the talkers' faces were visible, but only when corresponding faces and voices were presented in the same hemifield (spatially aligned). In experiment 2, we tested for possible influences of eye position on this result. In auditory-only conditions, directing gaze toward the distractor voice reduced performance, but this effect could not fully explain the cost of audio-visual (AV) spatial misalignment. Lowering the signal-to-noise ratio (SNR) of the speech from +4 to -4 dB increased the magnitude of the AV spatial alignment effect (experiment 3), but accurate closed-set lipreading caused a floor effect that influenced results at lower SNRs (experiment 4). Taken together, these results demonstrate that spatial alignment between faces and voices contributes to the ability to selectively attend AV speech.


Subjects
Speech Perception, Voice, Humans, Lipreading, Noise/adverse effects, Speech Intelligibility
8.
J Acoust Soc Am ; 150(4): 2664, 2021 10.
Article in English | MEDLINE | ID: mdl-34717498

ABSTRACT

To understand the mechanisms of speech perception in everyday listening environments, it is important to elucidate the relative contributions of different acoustic cues in transmitting phonetic content. Previous studies suggest that the envelope of speech in different frequency bands conveys most speech content, while the temporal fine structure (TFS) can aid in segregating target speech from background noise. However, the role of TFS in conveying phonetic content beyond what envelopes convey for intact speech in complex acoustic scenes is poorly understood. The present study addressed this question using online psychophysical experiments to measure the identification of consonants in multi-talker babble for intelligibility-matched intact and 64-channel envelope-vocoded stimuli. Consonant confusion patterns revealed that listeners had a greater tendency in the vocoded (versus intact) condition to be biased toward reporting that they heard an unvoiced consonant, despite envelope and place cues being largely preserved. This result was replicated when babble instances were varied across independent experiments, suggesting that TFS conveys voicing information beyond what is conveyed by envelopes for intact speech in babble. Given that multi-talker babble is a masker that is ubiquitous in everyday environments, this finding has implications for the design of assistive listening devices such as cochlear implants.
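The voicing-bias analysis can be illustrated on a toy confusion matrix (invented counts, not the study's data): row-normalize the counts and compare the rate at which voiced targets are reported as unvoiced against the reverse.

```python
import numpy as np

consonants = ["b", "d", "p", "t"]               # first two voiced
voiced = np.array([True, True, False, False])
# rows = presented consonant, columns = reported consonant (invented counts)
conf = np.array([[50, 10, 25, 15],
                 [ 8, 55, 12, 25],
                 [ 5,  3, 70, 22],
                 [ 2,  6, 18, 74]])
p = conf / conf.sum(axis=1, keepdims=True)      # proportions per presented
voiced_as_unvoiced = p[np.ix_(voiced, ~voiced)].sum(axis=1).mean()
unvoiced_as_voiced = p[np.ix_(~voiced, voiced)].sum(axis=1).mean()
print(f"voiced -> unvoiced errors: {voiced_as_unvoiced:.2f}")
print(f"unvoiced -> voiced errors: {unvoiced_as_voiced:.2f}")  # asymmetry = bias
```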


Subjects
Cochlear Implants, Speech Perception, Acoustic Stimulation, Noise/adverse effects, Perceptual Masking, Phonetics, Speech, Speech Intelligibility
9.
J Acoust Soc Am ; 149(1): 259, 2021 01.
Article in English | MEDLINE | ID: mdl-33514136

ABSTRACT

The ability to discriminate frequency differences between pure tones declines as the duration of the interstimulus interval (ISI) increases. The conventional explanation for this finding is that pitch representations gradually decay from auditory short-term memory. Gradual decay means that internal noise increases with increasing ISI duration. Another possibility is that pitch representations experience "sudden death," disappearing without a trace from memory. Sudden death means that listeners guess (respond at random) more often when the ISIs are longer. Since internal noise and guessing probabilities influence the shape of psychometric functions in different ways, they can be estimated simultaneously. Eleven amateur musicians performed a two-interval, two-alternative forced-choice frequency-discrimination task. The frequencies of the first tones were roved, and frequency differences and ISI durations were manipulated across trials. Data were analyzed using Bayesian models that simultaneously estimated internal noise and guessing probabilities. On average across listeners, internal noise increased monotonically as a function of increasing ISI duration, suggesting that gradual decay occurred. The guessing rate decreased with an increasing ISI duration between 0.5 and 2 s but then increased with further increases in ISI duration, suggesting that sudden death occurred but perhaps only at longer ISIs. Results are problematic for decay-only models of discrimination and contrast with those from a study on visual short-term memory, which found that over similar durations, visual representations experienced little gradual decay yet substantial sudden death.
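Why can internal noise and guessing be estimated simultaneously? Noise flattens the slope of the psychometric function, while guessing caps its upper asymptote. The sketch below makes that identifiability concrete with a maximum-likelihood fit, a simpler stand-in for the Bayesian models used in the study; the generative parameters are invented.

```python
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

def p_correct(df, sigma, g):
    """2I-2AFC: guess with probability g, else decide from a noisy difference."""
    return g * 0.5 + (1 - g) * norm.cdf(df / (np.sqrt(2) * sigma))

def neg_log_lik(params, df, k, n):
    sigma, g = params
    if sigma <= 0 or not 0 <= g < 1:
        return np.inf
    return -binom.logpmf(k, n, p_correct(df, sigma, g)).sum()

rng = np.random.default_rng(1)
df = np.array([1.0, 2.0, 4.0, 8.0, 16.0])       # frequency differences (Hz)
n = 200                                          # trials per difference
k = rng.binomial(n, p_correct(df, sigma=4.0, g=0.1))
fit = minimize(neg_log_lik, x0=[2.0, 0.05], args=(df, k, n),
               method="Nelder-Mead")
print(np.round(fit.x, 2))                        # recovers roughly [4.0, 0.1]
```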


Subjects
Memory, Short-Term, Music, Pitch Discrimination, Bayes Theorem, Humans, Noise
10.
Ear Hear ; 41(6): 1635-1647, 2020.
Article in English | MEDLINE | ID: mdl-33136638

ABSTRACT

OBJECTIVE: Top-down spatial attention is effective at selecting a target sound from a mixture. However, nonspatial features often distinguish sources in addition to location. This study explores whether redundant nonspatial features are used to maintain selective auditory attention for a spatially defined target. DESIGN: We recorded electroencephalography while subjects focused attention on one of three simultaneous melodies. In one experiment, subjects (n = 17) were given an auditory cue indicating both the location and pitch of the target melody. In a second experiment (n = 17 subjects), the cue only indicated target location, and we compared two conditions: one in which the pitch separation of competing melodies was large, and one in which this separation was small. RESULTS: In both experiments, responses evoked by onsets of events in sound streams were modulated by attention, and we found no significant difference in this modulation between small and large pitch separation conditions. Therefore, the evoked response reflected that target stimuli were the focus of attention, and distractors were suppressed successfully for all experimental conditions. In all cases, parietal alpha was lateralized following the cue, but before melody onset, indicating that subjects initially focused attention in space. During the stimulus presentation, this lateralization disappeared when pitch cues were strong but remained significant when pitch cues were weak, suggesting that strong pitch cues reduced reliance on sustained spatial attention. CONCLUSIONS: These results demonstrate that once a well-defined target stream at a known location is selected, top-down spatial attention plays a weak role in filtering out a segregated competing stream.


Subjects
Attention, Sound Localization, Acoustic Stimulation, Auditory Perception, Cues, Electroencephalography, Humans
11.
J Acoust Soc Am ; 147(1): 371, 2020 01.
Article in English | MEDLINE | ID: mdl-32006971

ABSTRACT

Perceptual anchors are representations of stimulus features stored in long-term memory rather than short-term memory. The present study investigated whether listeners use perceptual anchors to improve pure-tone frequency discrimination. Ten amateur musicians performed a two-interval, two-alternative forced-choice frequency-discrimination experiment. In one half of the experiment, the frequency of the first tone was fixed across trials, and in the other half, the frequency of the first tone was roved widely across trials. The durations of the interstimulus intervals (ISIs) and the frequency differences between the tones on each trial were also manipulated. The data were analyzed with a Bayesian model that assumed that performance was limited by sensory noise (related to the initial encoding of the stimuli), memory noise (which increased proportionally to the ISI), fluctuations in attention, and response bias. It was hypothesized that memory-noise variance increased more rapidly during roved-frequency discrimination than fixed-frequency discrimination because listeners used perceptual anchors in the latter condition. The results supported this hypothesis. The results also suggested that listeners experienced more lapses in attention during roved-frequency discrimination.
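The hypothesized variance structure can be written as total variance = sensory variance + memory noise that grows with the ISI, with slower growth when a perceptual anchor is available. A small sketch with assumed (not fitted) parameter values:

```python
import numpy as np
from scipy.stats import norm

def p_correct(df, isi, sigma_sensory=2.0, k_memory=1.0):
    total_var = sigma_sensory**2 + k_memory * isi   # memory noise grows with ISI
    return norm.cdf(df / np.sqrt(2 * total_var))

for isi in (0.5, 2.0, 8.0):
    fixed = p_correct(8.0, isi, k_memory=0.5)   # anchor: slower noise growth
    roved = p_correct(8.0, isi, k_memory=2.0)   # roved: faster noise growth
    print(f"ISI {isi:>4} s  fixed {fixed:.2f}  roved {roved:.2f}")
```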


Subjects
Auditory Perception, Memory, Long-Term, Pitch Discrimination, Acoustic Stimulation, Adult, Bayes Theorem, Female, Humans, Male, Psychophysics, Young Adult
12.
J Acoust Soc Am ; 146(4): 2577, 2019 10.
Article in English | MEDLINE | ID: mdl-31671991

ABSTRACT

Spatial attention may be used to select target speech in one location while suppressing irrelevant speech in another. However, if perceptual resolution of spatial cues is weak, spatially focused attention may work poorly, leading to difficulty communicating in noisy settings. In electroencephalography (EEG), the distribution of alpha (8-14 Hz) power over parietal sensors reflects the spatial focus of attention [Banerjee, Snyder, Molholm, and Foxe (2011), J. Neurosci. 31, 9923-9932; Foxe and Snyder (2011), Front. Psychol. 2, 154]. If spatial attention is degraded, however, alpha may not be modulated across parietal sensors. A previously published behavioral and EEG study found that, compared to normal-hearing (NH) listeners, hearing-impaired (HI) listeners often had higher interaural time difference thresholds, worse performance when asked to report the content of an acoustic stream from a particular location, and weaker attentional modulation of neural responses evoked by sounds in a mixture [Dai, Best, and Shinn-Cunningham (2018), Proc. Natl. Acad. Sci. U. S. A. 115, E3286]. This study explored whether these same HI listeners also showed weaker alpha lateralization during the previously reported task. In NH listeners, hemispheric parietal alpha power was greater when the ipsilateral location was attended; this lateralization was stronger when competing melodies were separated by a larger spatial difference. In HI listeners, however, alpha was not lateralized across parietal sensors, consistent with a degraded ability to use spatial features to selectively attend.
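A hedged sketch of one common way to quantify alpha lateralization (the exact index and sensor picks here are assumptions, not the paper's analysis): compute 8-14 Hz power over left and right parietal sensors and form a normalized left/right difference.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs):
    """Mean 8-14 Hz power per sensor; eeg is (n_sensors, n_samples)."""
    f, pxx = welch(eeg, fs=fs, nperseg=fs)      # ~1 Hz resolution
    band = (f >= 8) & (f <= 14)
    return pxx[:, band].mean(axis=1)

def lateralization_index(left_sensors, right_sensors, fs):
    left = alpha_power(left_sensors, fs).mean()
    right = alpha_power(right_sensors, fs).mean()
    return (right - left) / (right + left)      # < 0: left-hemisphere alpha

rng = np.random.default_rng(2)
fs, n = 256, 256 * 10
alpha = np.sin(2 * np.pi * 10 * np.arange(n) / fs)
left = rng.normal(0, 1, (4, n)) + 0.3 * alpha   # attend-left: ipsilateral alpha
right = rng.normal(0, 1, (4, n))
print(f"LI = {lateralization_index(left, right, fs):+.2f}")   # negative here
```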


Subjects
Alpha Rhythm, Attention/physiology, Auditory Perception/physiology, Brain/physiopathology, Hearing Loss, Sensorineural/physiopathology, Spatial Processing/physiology, Acoustic Stimulation, Adult, Cues, Female, Functional Laterality, Humans, Male, Middle Aged, Parietal Lobe/physiopathology, Persons With Hearing Impairments, Young Adult
13.
J Neurosci ; 37(36): 8755-8766, 2017 09 06.
Article in English | MEDLINE | ID: mdl-28821668

ABSTRACT

The functionality of much of human lateral frontal cortex (LFC) has been characterized as "multiple demand" (MD) as these regions appear to support a broad range of cognitive tasks. In contrast to this domain-general account, recent evidence indicates that portions of LFC are consistently selective for sensory modality. Michalka et al. (2015) reported two bilateral regions that are biased for visual attention, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), interleaved with two bilateral regions that are biased for auditory attention, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). In the present study, we use fMRI to examine both the multiple-demand and sensory-bias hypotheses within caudal portions of human LFC (both men and women participated). Using visual and auditory 2-back tasks, we replicate the finding of two bilateral visual-biased and two bilateral auditory-biased LFC regions, corresponding to sPCS and iPCS and to tgPCS and cIFS, and demonstrate high within-subject reliability of these regions over time and across tasks. In addition, we assess MD responsiveness using BOLD signal recruitment and multi-task activation indices. In both, we find that the two visual-biased regions, sPCS and iPCS, exhibit stronger MD responsiveness than do the auditory-biased LFC regions, tgPCS and cIFS; however, neither reaches the degree of MD responsiveness exhibited by dorsal anterior cingulate/presupplemental motor area or by anterior insula. These results reconcile two competing views of LFC by demonstrating the coexistence of sensory specialization and MD functionality, especially in visual-biased LFC structures. SIGNIFICANCE STATEMENT: Lateral frontal cortex (LFC) is known to play a number of critical roles in supporting human cognition; however, the functional organization of LFC remains controversial. The "multiple demand" (MD) hypothesis suggests that LFC regions provide domain-general support for cognition. Recent evidence challenges the MD view by demonstrating that a preference for sensory modality, vision or audition, defines four discrete LFC regions. Here, the sensory-biased LFC results are reproduced using a new task, and MD responsiveness of these regions is tested. The two visual-biased regions exhibit MD behavior, whereas the auditory-biased regions have no more than weak MD responses. These findings help to reconcile two competing views of LFC functional organization.


Subjects
Attention/radiation effects, Auditory Perception/physiology, Cognition/physiology, Frontal Lobe/physiology, Nerve Net/physiology, Visual Perception/physiology, Adult, Cues, Female, Humans, Male, Perceptual Masking/physiology
14.
J Neurosci ; 36(13): 3755-64, 2016 Mar 30.
Article in English | MEDLINE | ID: mdl-27030760

ABSTRACT

Evidence from animal and human studies suggests that moderate acoustic exposure, causing only transient threshold elevation, can nonetheless cause "hidden hearing loss" that interferes with coding of suprathreshold sound. Such noise exposure destroys synaptic connections between cochlear hair cells and auditory nerve fibers; however, there is no clinical test of this synaptopathy in humans. In animals, synaptopathy reduces the amplitude of auditory brainstem response (ABR) wave-I. Unfortunately, ABR wave-I is difficult to measure in humans, limiting its clinical use. Here, using analogous measurements in humans and mice, we show that the effect of masking noise on the latency of the more robust ABR wave-V mirrors changes in ABR wave-I amplitude. Furthermore, in our human cohort, the effect of noise on wave-V latency predicts perceptual temporal sensitivity. Our results suggest that measures of the effects of noise on ABR wave-V latency can be used to diagnose cochlear synaptopathy in humans. SIGNIFICANCE STATEMENT: Although there are suspicions that cochlear synaptopathy affects humans with normal hearing thresholds, no one has yet reported a clinical measure that is a reliable marker of such loss. By combining human and animal data, we demonstrate that the latency of auditory brainstem response wave-V in noise reflects auditory nerve loss. This is the first study of human listeners with normal hearing thresholds that links individual differences observed in behavior and auditory brainstem response timing to cochlear synaptopathy. These results can guide development of a clinical test to reveal this previously unknown form of noise-induced hearing loss in humans.


Subjects
Ear, Inner/pathology, Evoked Potentials, Auditory, Brain Stem/physiology, Hearing Loss, Noise-Induced/pathology, Noise, Reaction Time/physiology, Synapses/pathology, Acoustic Stimulation, Adult, Animals, Auditory Perception/physiology, Auditory Threshold/physiology, Disease Models, Animal, Electroencephalography, Female, Hearing Loss, Noise-Induced/physiopathology, Humans, Male, Mice, Otoacoustic Emissions, Spontaneous/physiology, Young Adult
15.
Cereb Cortex ; 26(3): 1302-1308, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26656996

ABSTRACT

Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands.


Subjects
Auditory Perception/physiology, Memory, Short-Term/physiology, Parietal Lobe/physiology, Space Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Attention/physiology, Brain Mapping, Eye Movement Measurements, Eye Movements/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Photic Stimulation, Young Adult
16.
J Acoust Soc Am ; 141(4): 2474, 2017 04.
Article in English | MEDLINE | ID: mdl-28464677

ABSTRACT

Cross-modal interactions of auditory and visual temporal modulation were examined in a game-like experimental framework. Participants observed an audiovisual stimulus (an animated, sound-emitting fish) whose sound intensity and/or visual size oscillated sinusoidally at either 6 or 7 Hz. Participants made speeded judgments about the modulation rate in either the auditory or visual modality while doing their best to ignore information from the other modality. Modulation rate in the task-irrelevant modality matched the modulation rate in the task-relevant modality (congruent conditions), was at the other rate (incongruent conditions), or had no modulation (unmodulated conditions). Both performance accuracy and parameter estimates from drift-diffusion decision modeling indicated that (1) the presence of temporal modulation in both modalities, regardless of whether modulations were matched or mismatched in rate, resulted in audiovisual interactions; (2) congruence in audiovisual temporal modulation resulted in more reliable information processing; and (3) the effects of congruence appeared to be stronger when judging visual modulation rates (i.e., audition influencing vision), than when judging auditory modulation rates (i.e., vision influencing audition). The results demonstrate that audiovisual interactions from temporal modulations are bi-directional in nature, but with potential asymmetries in the size of the effect in each direction.
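For readers unfamiliar with drift-diffusion decision modeling, the sketch below simulates the basic mechanism: evidence accumulates noisily toward a decision bound, and a higher drift rate yields faster, more accurate responses. Mapping audiovisual congruence onto a higher drift rate is an illustrative assumption, not the paper's fitted result.

```python
import numpy as np

def simulate_ddm(v, a=1.0, z=0.5, sigma=1.0, dt=1e-3, n_trials=2000, seed=0):
    """Accumulate noisy evidence from z*a until it hits a (correct) or 0."""
    rng = np.random.default_rng(seed)
    rts, hits = [], []
    for _ in range(n_trials):
        x, t = z * a, 0.0
        while 0.0 < x < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        hits.append(x >= a)
    return np.mean(hits), np.mean(rts)

for label, v in [("congruent", 1.5), ("incongruent", 0.8)]:
    acc, rt = simulate_ddm(v)
    print(f"{label:>11}: accuracy {acc:.2f}, mean RT {rt:.2f} s")
```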


Subjects
Judgment, Pitch Discrimination, Speech Perception, Visual Perception, Acoustic Stimulation, Adult, Female, Humans, Male, Photic Stimulation, Reaction Time, Time Factors, Video Games, Young Adult
17.
J Neurosci ; 35(5): 2161-72, 2015 Feb 04.
Article in English | MEDLINE | ID: mdl-25653371

ABSTRACT

Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing."


Subjects
Auditory Cortex/physiology, Auditory Perception, Auditory Threshold, Cochlea/physiology, Hearing Loss/physiopathology, Hearing, Adult, Auditory Cortex/physiopathology, Cochlea/physiopathology, Female, Humans, Male, Speech Perception
18.
Neuroimage ; 141: 108-119, 2016 Nov 01.
Article in English | MEDLINE | ID: mdl-27421185

ABSTRACT

In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object.


Subjects
Auditory Cortex/physiology, Brain Waves/physiology, Electroencephalography/methods, Pattern Recognition, Physiological/physiology, Perceptual Masking/physiology, Pitch Perception/physiology, Recognition, Psychology/physiology, Adult, Brain Mapping/methods, Evoked Potentials, Auditory/physiology, Female, Humans, Male, Nerve Net/physiology
19.
Cereb Cortex ; 25(7): 1697-706, 2015 Jul.
Article in English | MEDLINE | ID: mdl-24429136

ABSTRACT

How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus, it would be extremely useful for research in many populations if stimulus-reconstruction were effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces.
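The stimulus-reconstruction approach itself is easy to sketch: train a ridge-regression decoder that maps time-lagged, multichannel EEG back to the attended speech envelope, then decode attention on held-out data by asking which talker's envelope the reconstruction correlates with best. Everything below (dimensions, lags, regularization, simulated data) is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

def lagged_eeg(eeg, n_lags):
    """Stack EEG channels at lags 0..n_lags-1 into one design matrix."""
    n_ch, n_t = eeg.shape
    X = np.zeros((n_t, n_ch * n_lags))
    for k in range(n_lags):
        X[k:, k * n_ch:(k + 1) * n_ch] = eeg[:, :n_t - k].T
    return X

def train_decoder(eeg, envelope, n_lags=16, lam=1e2):
    X = lagged_eeg(eeg, n_lags)                 # ridge-regression weights
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, n_lags=16):
    recon = lagged_eeg(eeg, n_lags) @ w         # reconstructed envelope
    r = lambda u, v: np.corrcoef(u, v)[0, 1]
    return "A" if r(recon, env_a) > r(recon, env_b) else "B"

rng = np.random.default_rng(3)
env_a, env_b = rng.random(4000), rng.random(4000)       # two talkers
eeg = np.vstack([np.roll(env_a, k) for k in range(8)])  # EEG "tracks" talker A
eeg = eeg + rng.normal(0, 1.0, eeg.shape)
w = train_decoder(eeg[:, :2000], env_a[:2000])
print(decode_attention(eeg[:, 2000:], env_a[2000:], env_b[2000:], w))  # -> "A"
```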


Subjects
Attention/physiology, Brain/physiology, Electroencephalography/methods, Signal Processing, Computer-Assisted, Speech Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Neuropsychological Tests, Time Factors
20.
PLoS Biol ; 10(5): e1001319, 2012.
Article in English | MEDLINE | ID: mdl-22563301

ABSTRACT

Why is spatial tuning in auditory cortex weak, even though location is important to object recognition in natural settings? This question continues to vex neuroscientists focused on linking physiological results to auditory perception. Here we show that the spatial locations of simultaneous, competing sound sources dramatically influence how well neural spike trains recorded from the zebra finch field L (an analog of mammalian primary auditory cortex) encode source identity. We find that the location of a birdsong played in quiet has little effect on the fidelity of the neural encoding of the song. However, when the song is presented along with a masker, spatial effects are pronounced. For each spatial configuration, a subset of neurons encodes song identity more robustly than others. As a result, competing sources from different locations dominate responses of different neural subpopulations, helping to separate neural responses into independent representations. These results help elucidate how cortical processing exploits spatial information to provide a substrate for selective spatial auditory attention.


Subjects
Auditory Cortex/physiology, Finches/physiology, Sound Localization/physiology, Acoustic Stimulation, Action Potentials, Animals, Ear/physiology, Head/physiology, Male, Neurons/physiology, Reproducibility of Results, Sound, Vocalization, Animal