Results 1 - 20 of 143
1.
J Neurosci ; 42(2): 240-254, 2022 01 12.
Article in English | MEDLINE | ID: mdl-34764159

ABSTRACT

Temporal coherence of sound fluctuations across spectral channels is thought to aid auditory grouping and scene segregation. Although prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (i.e., the first auditory region supporting cross-channel processing over a wide frequency range). Accordingly, we hypothesized that aspects of temporal-coherence processing that could be realized in early auditory areas may shape speech understanding in noise. We then explored whether physiologically plausible computational models could account for results from a behavioral experiment that measured consonant categorization in different masking conditions. We tested whether within-channel masking of target-speech modulations predicted consonant confusions across the different conditions and whether predictions were improved by adding across-channel temporal-coherence processing mirroring the computations known to exist in the cochlear nucleus. Consonant confusions provide a rich characterization of error patterns in speech categorization, and are thus crucial for rigorously testing models of speech perception; however, to the best of our knowledge, they have not been used in prior studies of scene analysis. We find that within-channel modulation masking can reasonably account for category confusions, but that it fails when temporal fine structure cues are unavailable. However, the addition of across-channel temporal-coherence processing significantly improves confusion predictions across all tested conditions. 
Our results suggest that temporal-coherence processing strongly shapes speech understanding in noise and that physiological computations that exist early along the auditory pathway may contribute to this process.

SIGNIFICANCE STATEMENT Temporal coherence of sound fluctuations across distinct frequency channels is thought to be important for auditory scene analysis. Prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, and it was unknown whether speech understanding in noise may be shaped by across-channel processing that exists in earlier auditory areas. Using physiologically plausible computational modeling to predict consonant confusions across different listening conditions, we find that across-channel temporal coherence contributes significantly to scene analysis and speech perception and that such processing may arise in the auditory pathway as early as the brainstem. By virtue of providing a richer characterization of error patterns not obtainable with just intelligibility scores, consonant confusions yield unique insight into scene analysis mechanisms.
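The across-channel idea in this abstract can be pictured with a toy statistic: if envelopes in different frequency channels fluctuate coherently, a correlation-based measure flags them as belonging to one source. This is only a sketch in the spirit of the abstract, not the study's cochlear-nucleus model; the function names and synthetic envelopes are illustrative assumptions.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def cross_channel_coherence(envelopes):
    """Mean pairwise correlation of channel envelopes; values near 1
    indicate temporally coherent fluctuations that favor grouping."""
    n = len(envelopes)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(pearson(envelopes[i], envelopes[j]) for i, j in pairs) / len(pairs)

# Three channels sharing one 4-Hz modulation pattern (a coherent source)
t = [k / 100 for k in range(200)]
coherent = [[1 + math.sin(2 * math.pi * 4 * s) for s in t] for _ in range(3)]
print(round(cross_channel_coherence(coherent), 3))  # → 1.0
```

Channels driven by unrelated modulation rates instead yield a coherence near zero, which is the kind of cue an across-channel stage could exploit for grouping.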


Subjects
Auditory Pathways/physiology, Auditory Perception/physiology, Cochlea/physiology, Speech/physiology, Acoustic Stimulation, Auditory Threshold/physiology, Humans, Neurological Models, Perceptual Masking
2.
Neuroimage ; 277: 120210, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37311535

ABSTRACT

Electroencephalography (EEG) and diffuse optical tomography (DOT) are imaging methods which are widely used for neuroimaging. While the temporal resolution of EEG is high, the spatial resolution is typically limited. DOT, on the other hand, has high spatial resolution, but the temporal resolution is inherently limited by the slow hemodynamics it measures. In our previous work, we showed using computer simulations that when using the results of DOT reconstruction as the spatial prior for EEG source reconstruction, high spatio-temporal resolution could be achieved. In this work, we experimentally validate the algorithm by alternatingly flashing two visual stimuli at a speed that is faster than the temporal resolution of DOT. We show that the joint reconstruction using both EEG and DOT clearly resolves the two stimuli temporally, and the spatial confinement is drastically improved in comparison to reconstruction using EEG alone.
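The spatial-prior idea can be pictured with a one-sensor, two-source caricature: an underdetermined EEG measurement is regularized with per-source weights imagined as coming from a DOT image, as in a weighted minimum-norm estimate. This is an assumption-laden sketch, not the study's joint-reconstruction algorithm; `weighted_min_norm` and all numbers are illustrative.

```python
def weighted_min_norm(L, W, y, lam):
    """Weighted minimum-norm estimate for one sensor and several candidate
    sources: x = W L^T (L W L^T + lam)^(-1) y, where W holds per-source
    prior weights (here imagined as coming from a DOT reconstruction)."""
    gram = sum(Li * Wi * Li for Li, Wi in zip(L, W))  # scalar L W L^T
    gain = y / (gram + lam)
    return [Wi * Li * gain for Li, Wi in zip(L, W)]

L = [1.0, 1.0]  # both candidate sources project equally onto the sensor
y = 1.0         # measured EEG amplitude (arbitrary units)
flat = weighted_min_norm(L, [1.0, 1.0], y, 0.001)   # no prior: ambiguous
dot = weighted_min_norm(L, [1.0, 0.01], y, 0.001)   # DOT prior favors source 0
print([round(v, 2) for v in flat])  # → [0.5, 0.5]
print([round(v, 2) for v in dot])   # → [0.99, 0.01]
```

With a flat prior the two sources are indistinguishable and activity is split evenly; the DOT weight resolves the ambiguity in favor of the hemodynamically active source, which is the intuition behind the improved spatial confinement reported here.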


Subjects
Optical Tomography, Visual Cortex, Humans, Electroencephalography/methods, Computer Simulation, Neuroimaging, Algorithms, Optical Tomography/methods, Visual Cortex/diagnostic imaging, Brain Mapping/methods
3.
Cereb Cortex ; 32(4): 855-869, 2022 02 08.
Article in English | MEDLINE | ID: mdl-34467399

ABSTRACT

Working memory (WM) supports the persistent representation of transient sensory information. Visual and auditory stimuli place different demands on WM and recruit different brain networks. Separate auditory- and visual-biased WM networks extend into the frontal lobes, but several challenges confront attempts to parcellate human frontal cortex, including fine-grained organization and between-subject variability. Here, we use differential intrinsic functional connectivity from 2 visual-biased and 2 auditory-biased frontal structures to identify additional candidate sensory-biased regions in frontal cortex. We then examine direct contrasts of task functional magnetic resonance imaging during visual versus auditory 2-back WM to validate those candidate regions. Three visual-biased and 5 auditory-biased regions are robustly activated bilaterally in the frontal lobes of individual subjects (N = 14, 7 women). These regions exhibit a sensory preference during passive exposure to task stimuli, and that preference is stronger during WM. Hierarchical clustering analysis of intrinsic connectivity among novel and previously identified bilateral sensory-biased regions confirms that they functionally segregate into visual and auditory networks, even though the networks are anatomically interdigitated. We also observe that the frontotemporal auditory WM network is highly selective and exhibits strong functional connectivity to structures serving non-WM functions, while the frontoparietal visual WM network hierarchically merges into the multiple-demand cognitive system.


Subjects
Auditory Perception, Short-Term Memory, Brain Mapping/methods, Female, Frontal Lobe/diagnostic imaging, Humans, Magnetic Resonance Imaging
4.
Ear Hear ; 43(1): 9-22, 2022.
Article in English | MEDLINE | ID: mdl-34751676

ABSTRACT

Following a conversation in a crowded restaurant or at a lively party poses immense perceptual challenges for some individuals with normal hearing thresholds. A number of studies have investigated whether noise-induced cochlear synaptopathy (CS; damage to the synapses between cochlear hair cells and the auditory nerve following noise exposure that does not permanently elevate hearing thresholds) contributes to this difficulty. A few studies have observed correlations between proxies of noise-induced CS and speech perception in difficult listening conditions, but many have found no evidence of a relationship. To understand these mixed results, we reviewed previous studies that have examined noise-induced CS and performance on speech perception tasks in adverse listening conditions in adults with normal or near-normal hearing thresholds. Our review suggests that superficially similar speech perception paradigms used in previous investigations actually placed very different demands on sensory, perceptual, and cognitive processing. Speech perception tests that use low signal-to-noise ratios and maximize the importance of fine sensory details (specifically, by using test stimuli for which lexical, syntactic, and semantic cues do not contribute to performance) are more likely to show a relationship to estimated CS levels. Thus, the current controversy as to whether or not noise-induced CS contributes to individual differences in speech perception under challenging listening conditions may be due in part to the fact that many of the speech perception tasks used in past studies are relatively insensitive to CS-induced deficits.


Subjects
Speech Perception, Speech, Acoustic Stimulation, Adult, Auditory Threshold/physiology, Humans, Individuality, Perceptual Masking, Speech Perception/physiology
5.
J Acoust Soc Am ; 151(5): 3219, 2022 05.
Article in English | MEDLINE | ID: mdl-35649920

ABSTRACT

Salient interruptions draw attention involuntarily. Here, we explored whether this effect depends on the spatial and temporal relationships between a target stream and interrupter. In a series of online experiments, listeners focused spatial attention on a target stream of spoken syllables in the presence of an otherwise identical distractor stream from the opposite hemifield. On some random trials, an interrupter (a cat "MEOW") occurred. Experiment 1 established that the interrupter, which occurred randomly in 25% of the trials in the hemifield opposite the target, degraded target recall. Moreover, a majority of participants exhibited this degradation for the first target syllable, which finished before the interrupter began. Experiment 2 showed that the effect of an interrupter was similar whether it occurred in the opposite or the same hemifield as the target. Experiment 3 found that the interrupter degraded performance slightly if it occurred before the target stream began but had no effect if it began after the target stream ended. Experiment 4 showed decreased interruption effects when the interruption frequency increased (50% of the trials). These results demonstrate that a salient interrupter disrupts recall of a target stream, regardless of its direction, especially if it occurs during a target stream.


Subjects
Mental Recall, Humans
6.
Article in English | MEDLINE | ID: mdl-34327551

ABSTRACT

Auditory neuroscience in dolphins has largely focused on auditory brainstem responses; however, such measures reveal little about the cognitive processes dolphins employ during echolocation and acoustic communication. The few previous studies of mid- and long-latency auditory-evoked potentials (AEPs) in dolphins report different latencies, polarities, and magnitudes. These inconsistencies may be due to any number of differences in methodology, but these studies do not make it clear which methodological differences may account for the disparities. The present study evaluates how electrode placement and pre-processing methods affect mid- and long-latency AEPs in bottlenose dolphins (Tursiops truncatus). AEPs were measured with reference electrodes placed on the skin surface over the forehead, at the external auditory meatus, or on the dorsal surface anterior to the dorsal fin. Data were pre-processed with or without a digital 50-Hz low-pass filter and with or without independent component analysis to isolate signal components related to neural processes from other signals. Results suggest that a meatus reference electrode provides the highest-quality AEP signals for analyses in sensor space, whereas a dorsal reference yielded nominal improvements in component space. These results provide guidance for measuring cortical AEPs in dolphins, supporting future studies of their cognitive auditory processing.
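As a rough picture of what a 50-Hz low-pass step does, here is a crude moving-average FIR whose window spans one cutoff period. The abstract does not state the study's actual filter design, so this is only an illustrative stand-in, not the authors' pipeline; real pipelines use properly designed filters.

```python
import math

def moving_average_lowpass(signal, fs, cutoff_hz):
    """Crude causal FIR low-pass: average over a window spanning one
    period of the cutoff. Components well above the cutoff average
    toward zero; slower components pass largely unchanged."""
    win = max(1, int(round(fs / cutoff_hz)))
    out = []
    for i in range(len(signal)):
        lo = max(0, i - win + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

fs = 1000.0  # sampling rate in Hz
slow = [math.sin(2 * math.pi * 2 * k / fs) for k in range(1000)]          # 2-Hz component
fast = [0.5 * math.sin(2 * math.pi * 200 * k / fs) for k in range(1000)]  # 200-Hz contamination
raw = [a + b for a, b in zip(slow, fast)]
smoothed = moving_average_lowpass(raw, fs, 50.0)
```

After filtering, the 200-Hz contamination is almost entirely removed while the 2-Hz component survives; separating neural from non-neural components, as the study does with independent component analysis, is a complementary step.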


Assuntos
Golfinhos/fisiologia , Potenciais Evocados Auditivos/fisiologia , Estimulação Acústica , Nadadeiras de Animais , Animais , Vias Auditivas , Percepção Auditiva , Eletrocardiografia , Eletrodos Implantados , Eletroencefalografia , Testa , Masculino , Análise de Componente Principal , Razão Sinal-Ruído , Pele , Som
7.
Proc Natl Acad Sci U S A ; 115(14): E3286-E3295, 2018 04 03.
Article in English | MEDLINE | ID: mdl-29555752

ABSTRACT

Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing.


Subjects
Attention/physiology, Auditory Perception/physiology, Auditory Threshold/physiology, Psychological Discrimination, Sensorineural Hearing Loss/physiopathology, Sensorineural Hearing Loss/psychology, Spatial Perception/physiology, Adult, Case-Control Studies, Female, Humans, Male, Middle Aged, Theoretical Models, Young Adult
8.
J Acoust Soc Am ; 150(4): 3085, 2021 10.
Article in English | MEDLINE | ID: mdl-34717460

ABSTRACT

The ability to see a talker's face improves speech intelligibility in noise, provided that the auditory and visual speech signals are approximately aligned in time. However, the importance of spatial alignment between corresponding faces and voices remains unresolved, particularly in multi-talker environments. In a series of online experiments, we investigated this using a task that required participants to selectively attend a target talker in noise while ignoring a distractor talker. In experiment 1, we found improved task performance when the talkers' faces were visible, but only when corresponding faces and voices were presented in the same hemifield (spatially aligned). In experiment 2, we tested for possible influences of eye position on this result. In auditory-only conditions, directing gaze toward the distractor voice reduced performance, but this effect could not fully explain the cost of audio-visual (AV) spatial misalignment. Lowering the signal-to-noise ratio (SNR) of the speech from +4 to -4 dB increased the magnitude of the AV spatial alignment effect (experiment 3), but accurate closed-set lipreading caused a floor effect that influenced results at lower SNRs (experiment 4). Taken together, these results demonstrate that spatial alignment between faces and voices contributes to the ability to selectively attend AV speech.


Subjects
Speech Perception, Voice, Humans, Lipreading, Noise/adverse effects, Speech Intelligibility
9.
J Acoust Soc Am ; 150(4): 2664, 2021 10.
Article in English | MEDLINE | ID: mdl-34717498

ABSTRACT

To understand the mechanisms of speech perception in everyday listening environments, it is important to elucidate the relative contributions of different acoustic cues in transmitting phonetic content. Previous studies suggest that the envelope of speech in different frequency bands conveys most speech content, while the temporal fine structure (TFS) can aid in segregating target speech from background noise. However, the role of TFS in conveying phonetic content beyond what envelopes convey for intact speech in complex acoustic scenes is poorly understood. The present study addressed this question using online psychophysical experiments to measure the identification of consonants in multi-talker babble for intelligibility-matched intact and 64-channel envelope-vocoded stimuli. Consonant confusion patterns revealed that listeners had a greater tendency in the vocoded (versus intact) condition to be biased toward reporting that they heard an unvoiced consonant, despite envelope and place cues being largely preserved. This result was replicated when babble instances were varied across independent experiments, suggesting that TFS conveys voicing information beyond what is conveyed by envelopes for intact speech in babble. Given that multi-talker babble is a masker that is ubiquitous in everyday environments, this finding has implications for the design of assistive listening devices such as cochlear implants.


Subjects
Cochlear Implants, Speech Perception, Acoustic Stimulation, Noise/adverse effects, Perceptual Masking, Phonetics, Speech, Speech Intelligibility
10.
J Acoust Soc Am ; 149(1): 259, 2021 01.
Article in English | MEDLINE | ID: mdl-33514136

ABSTRACT

The ability to discriminate frequency differences between pure tones declines as the duration of the interstimulus interval (ISI) increases. The conventional explanation for this finding is that pitch representations gradually decay from auditory short-term memory. Gradual decay means that internal noise increases with increasing ISI duration. Another possibility is that pitch representations experience "sudden death," disappearing without a trace from memory. Sudden death means that listeners guess (respond at random) more often when the ISIs are longer. Since internal noise and guessing probabilities influence the shape of psychometric functions in different ways, they can be estimated simultaneously. Eleven amateur musicians performed a two-interval, two-alternative forced-choice frequency-discrimination task. The frequencies of the first tones were roved, and frequency differences and ISI durations were manipulated across trials. Data were analyzed using Bayesian models that simultaneously estimated internal noise and guessing probabilities. On average across listeners, internal noise increased monotonically as a function of increasing ISI duration, suggesting that gradual decay occurred. The guessing rate decreased with an increasing ISI duration between 0.5 and 2 s but then increased with further increases in ISI duration, suggesting that sudden death occurred but perhaps only at longer ISIs. Results are problematic for decay-only models of discrimination and contrast with those from a study on visual short-term memory, which found that over similar durations, visual representations experienced little gradual decay yet substantial sudden death.
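The logic of separating the two failure modes can be sketched with a standard two-alternative psychometric function in which internal noise sets the slope and guessing caps the asymptote. The parameter values below are made up for illustration; they are not the paper's fitted estimates.

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_correct(delta_f, noise_sd, guess_rate):
    """2AFC frequency discrimination: with probability guess_rate the
    listener responds at random (sudden death of the memory trace);
    otherwise accuracy is limited by internal noise (gradual decay)."""
    p_discrim = norm_cdf(delta_f / (noise_sd * math.sqrt(2)))
    return guess_rate * 0.5 + (1 - guess_rate) * p_discrim

# More internal noise flattens the slope, but large differences stay easy:
print(round(p_correct(10.0, 2.0, 0.0), 3))   # → 1.0
print(round(p_correct(10.0, 8.0, 0.0), 3))   # above chance, below ceiling
# Guessing caps the asymptote even for a very large difference:
print(round(p_correct(100.0, 2.0, 0.4), 3))  # → 0.8
```

Because the two parameters distort the psychometric function in different ways, a Bayesian fit can estimate internal noise and guessing simultaneously, which is how the study distinguishes gradual decay from sudden death.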


Subjects
Short-Term Memory, Music, Pitch Discrimination, Bayes Theorem, Humans, Noise
11.
J Acoust Soc Am ; 150(3): 2230, 2021 09.
Article in English | MEDLINE | ID: mdl-34598642

ABSTRACT

A fundamental question in the neuroscience of everyday communication is how scene acoustics shape the neural processing of attended speech sounds and in turn impact speech intelligibility. While it is well known that the temporal envelopes in target speech are important for intelligibility, how the neural encoding of target-speech envelopes is influenced by background sounds or other acoustic features of the scene is unknown. Here, we combine human electroencephalography with simultaneous intelligibility measurements to address this key gap. We find that the neural envelope-domain signal-to-noise ratio in target-speech encoding, which is shaped by masker modulations, predicts intelligibility over a range of strategically chosen realistic listening conditions unseen by the predictive model. This provides neurophysiological evidence for modulation masking. Moreover, using high-resolution vocoding to carefully control peripheral envelopes, we show that target-envelope coding fidelity in the brain depends not only on envelopes conveyed by the cochlea, but also on the temporal fine structure (TFS), which supports scene segregation. Our results are consistent with the notion that temporal coherence of sound elements across envelopes and/or TFS influences scene analysis and attentive selection of a target sound. Our findings also inform speech-intelligibility models and technologies attempting to improve real-world speech communication.
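The paper's envelope-domain SNR is measured from neural (EEG) responses, but the underlying quantity can be fixed in mind with an acoustic analogue: the fluctuation power of the target envelope relative to the masker envelope. Everything below (the function name, the 4-Hz modulators) is an illustrative assumption, not the study's neural metric.

```python
import math

def envelope_snr_db(target_env, masker_env):
    """Envelope-domain SNR: fluctuation power of the target envelope
    relative to the masker envelope (the DC mean is removed from each)."""
    def fluct_rms(env):
        m = sum(env) / len(env)
        return math.sqrt(sum((v - m) ** 2 for v in env) / len(env))
    return 20 * math.log10(fluct_rms(target_env) / fluct_rms(masker_env))

# One second of 4-Hz modulation, sampled at 100 Hz; the target's
# modulation depth is twice the masker's.
t = [k / 100 for k in range(100)]
target = [1 + 1.0 * math.sin(2 * math.pi * 4 * s) for s in t]
masker = [1 + 0.5 * math.sin(2 * math.pi * 4 * s) for s in t]
print(round(envelope_snr_db(target, masker), 1))  # → 6.0
```

Doubling the target's modulation depth relative to the masker's yields the expected 6-dB envelope-domain advantage; in the study, the analogous quantity computed from target-speech encoding in EEG is what predicts intelligibility.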


Subjects
Speech Intelligibility, Speech Perception, Acoustic Stimulation, Acoustics, Auditory Perception, Humans, Perceptual Masking, Signal-to-Noise Ratio
12.
Neuroimage ; 207: 116360, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31760150

ABSTRACT

Visual and somatosensory spatial attention both induce parietal alpha (8-14 Hz) oscillations whose topographical distribution depends on the direction of spatial attentional focus. In the auditory domain, contrasts of parietal alpha power for leftward and rightward attention reveal qualitatively similar lateralization; however, it is not clear whether alpha lateralization changes monotonically with the direction of auditory attention as it does for visual spatial attention. In addition, most previous studies of alpha oscillations did not consider individual differences in alpha frequency, but simply analyzed power in a fixed spectral band. Here, we recorded electroencephalography in human subjects while they directed attention to one of five azimuthal locations. After a cue indicating the direction of an upcoming target sequence of spoken syllables (yet before the target began), alpha power changed in a task-specific manner. Individual peak alpha frequencies differed consistently between central electrodes and parieto-occipital electrodes, suggesting multiple neural generators of task-related alpha. Parieto-occipital alpha increased over the hemisphere ipsilateral to the attentional focus relative to the contralateral hemisphere, and changed systematically as the direction of attention shifted from far left to far right. These results, showing that parietal alpha lateralization changes smoothly with the direction of auditory attention as it does for visual spatial attention, lend further support to the growing evidence that the frontoparietal attention network is supramodal.


Subjects
Alpha Rhythm/physiology, Attention/physiology, Functional Laterality/physiology, Spatial Perception/physiology, Adolescent, Adult, Brain Mapping/methods, Electroencephalography/methods, Female, Humans, Male, Young Adult
13.
Ear Hear ; 41(6): 1635-1647, 2020.
Article in English | MEDLINE | ID: mdl-33136638

ABSTRACT

OBJECTIVE: Top-down spatial attention is effective at selecting a target sound from a mixture. However, nonspatial features often distinguish sources in addition to location. This study explores whether redundant nonspatial features are used to maintain selective auditory attention for a spatially defined target. DESIGN: We recorded electroencephalography while subjects focused attention on one of three simultaneous melodies. In one experiment, subjects (n = 17) were given an auditory cue indicating both the location and pitch of the target melody. In a second experiment (n = 17 subjects), the cue only indicated target location, and we compared two conditions: one in which the pitch separation of competing melodies was large, and one in which this separation was small. RESULTS: In both experiments, responses evoked by onsets of events in sound streams were modulated by attention, and we found no significant difference in this modulation between small and large pitch separation conditions. Therefore, the evoked response reflected that target stimuli were the focus of attention, and distractors were suppressed successfully for all experimental conditions. In all cases, parietal alpha was lateralized following the cue, but before melody onset, indicating that subjects initially focused attention in space. During the stimulus presentation, this lateralization disappeared when pitch cues were strong but remained significant when pitch cues were weak, suggesting that strong pitch cues reduced reliance on sustained spatial attention. CONCLUSIONS: These results demonstrate that once a well-defined target stream at a known location is selected, top-down spatial attention plays a weak role in filtering out a segregated competing stream.


Subjects
Attention, Sound Localization, Acoustic Stimulation, Auditory Perception, Cues (Psychology), Electroencephalography, Humans
14.
Ear Hear ; 41(1): 208-216, 2020.
Article in English | MEDLINE | ID: mdl-31107365

ABSTRACT

OBJECTIVES: This study aimed to evaluate the informational component of speech-on-speech masking. Speech perception in the presence of a competing talker involves not only informational masking (IM) but also a number of masking processes involving interaction of masker and target energy in the auditory periphery. Such peripherally generated masking can be eliminated by presenting the target and masker in opposite ears (dichotically). However, this also reduces IM by providing listeners with lateralization cues that support spatial release from masking (SRM). In tonal sequences, IM can be isolated by rapidly switching the lateralization of dichotic target and masker streams across the ears, presumably producing ambiguous spatial percepts that interfere with SRM. However, it is not clear whether this technique works with speech materials. DESIGN: Speech reception thresholds (SRTs) were measured in 17 young normal-hearing adults for sentences produced by a female talker in the presence of a competing male talker under three different conditions: diotic (target and masker in both ears), dichotic, and dichotic but switching the target and masker streams across the ears. Because switching rate and signal coherence were expected to influence the amount of IM observed, these two factors varied across conditions. When switches occurred, they were either at word boundaries or periodically (every 116 msec) and either with or without a brief gap (84 msec) at every switch point. In addition, SRTs were measured in a quiet condition to rule out audibility as a limiting factor. RESULTS: SRTs were poorer for the four switching dichotic conditions than for the nonswitching dichotic condition, but better than for the diotic condition. Periodic switches without gaps resulted in the worst SRTs compared to the other switch conditions, thus maximizing IM. 
CONCLUSIONS: These findings suggest that periodically switching the target and masker streams across the ears (without gaps) was the most efficient in disrupting SRM. Thus, this approach can be used in experiments that seek a relatively pure measure of IM, and could be readily extended to translational research.


Subjects
Speech Perception, Speech, Adult, Auditory Threshold, Female, Hearing, Humans, Male, Perceptual Masking, Rivers
15.
Proc Natl Acad Sci U S A ; 114(36): 9743-9748, 2017 09 05.
Article in English | MEDLINE | ID: mdl-28827336

ABSTRACT

Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; therefore, it is unclear whether looming bias is truly a perceptual bias for changes in source distance, or only in sound intensity. Here we demonstrate both behavioral and neural correlates of looming bias without manipulating overall sound intensity. In natural environments, the pinnae induce spectral cues that give rise to a sense of externalization; when spectral cues are unnatural, sounds are perceived as closer to the listener. We manipulated the contrast of individually tailored spectral cues to create sounds of similar intensity but different naturalness. We confirmed that sounds were perceived as approaching when spectral contrast decreased, and perceived as receding when spectral contrast increased. We measured behavior and electroencephalography while listeners judged motion direction. Behavioral responses showed a looming bias in that responses were more consistent for sounds perceived as approaching than for sounds perceived as receding. In a control experiment, looming bias disappeared when spectral contrast changes were discontinuous, suggesting that perceived motion in distance and not distance itself was driving the bias. Neurally, looming bias was reflected in an asymmetry of late event-related potentials associated with motion evaluation. Hence, both our behavioral and neural findings support a generalization of the auditory looming bias, representing a perceptual preference for approaching auditory objects.


Subjects
Auditory Perception/physiology, Acoustic Stimulation, Adult, Attentional Bias/physiology, Auditory Cortex/physiology, Cues (Psychology), Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Neurological Models, Sound Localization/physiology, Young Adult
16.
J Acoust Soc Am ; 147(6): 3814, 2020 06.
Article in English | MEDLINE | ID: mdl-32611180

ABSTRACT

A study by Tóth, Kocsis, Háden, Szerafin, Shinn-Cunningham, and Winkler [Neuroimage 141, 108-119 (2016)] reported that spatial cues (such as interaural differences or ITDs) that differentiate the perceived sound source directions of a target tone sequence (figure) from simultaneous distracting tones (background) did not improve the ability of participants to detect the target sequence. The present study aims to investigate more systematically whether spatially separating a complex auditory "figure" from the background auditory stream may enhance the detection of a target in a cluttered auditory scene. Results of the presented experiment suggest that the previous negative results arose because of the specific experimental conditions tested. Here the authors find that ITDs provide a clear benefit for detecting a target tone sequence amid a mixture of other simultaneous tone bursts.


Subjects
Cues (Psychology), Sound Localization, Acoustic Stimulation, Auditory Perception, Humans, Sound
17.
J Acoust Soc Am ; 147(1): 371, 2020 01.
Article in English | MEDLINE | ID: mdl-32006971

ABSTRACT

Perceptual anchors are representations of stimulus features stored in long-term memory rather than short-term memory. The present study investigated whether listeners use perceptual anchors to improve pure-tone frequency discrimination. Ten amateur musicians performed a two-interval, two-alternative forced-choice frequency-discrimination experiment. In one half of the experiment, the frequency of the first tone was fixed across trials, and in the other half, the frequency of the first tone was roved widely across trials. The durations of the interstimulus intervals (ISIs) and the frequency differences between the tones on each trial were also manipulated. The data were analyzed with a Bayesian model that assumed that performance was limited by sensory noise (related to the initial encoding of the stimuli), memory noise (which increased proportionally to the ISI), fluctuations in attention, and response bias. It was hypothesized that memory-noise variance increased more rapidly during roved-frequency discrimination than fixed-frequency discrimination because listeners used perceptual anchors in the latter condition. The results supported this hypothesis. The results also suggested that listeners experienced more lapses in attention during roved-frequency discrimination.
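The modeling assumption that memory noise grows in proportion to the ISI, and that a perceptual anchor slows that growth, can be sketched as independent Gaussian noise sources adding in quadrature. The slope values below are hypothetical choices for illustration, not the paper's fitted parameters.

```python
import math

def total_sd(sensory_sd, memory_slope, isi):
    """Overall noise in the spirit of the fitted model: fixed sensory
    noise plus memory noise that grows in proportion to the
    interstimulus interval, combined in quadrature."""
    return math.sqrt(sensory_sd ** 2 + (memory_slope * isi) ** 2)

# Hypothetical slopes: with a perceptual anchor (fixed-frequency blocks)
# memory noise accumulates slowly; without one (roved blocks) it
# accumulates quickly.
for isi in (0.5, 2.0, 8.0):
    anchored = total_sd(1.0, 0.2, isi)
    roved = total_sd(1.0, 0.8, isi)
    print(isi, round(anchored, 2), round(roved, 2))
```

At short ISIs the two conditions are nearly identical because sensory noise dominates; at long ISIs the roved condition is far noisier, mirroring the hypothesized advantage of anchoring in long-term memory.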


Subjects
Auditory Perception, Long-Term Memory, Pitch Discrimination, Acoustic Stimulation, Adult, Bayes Theorem, Female, Humans, Male, Psychophysics, Young Adult
18.
Neuroimage ; 202: 116151, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31493531

ABSTRACT

Spatial selective attention enables listeners to process a signal of interest in natural settings. However, most past studies on auditory spatial attention used impoverished spatial cues: presenting competing sounds to different ears, using only interaural differences in time (ITDs) and/or intensity (IIDs), or using non-individualized head-related transfer functions (HRTFs). Here we tested the hypothesis that impoverished spatial cues impair spatial auditory attention by only weakly engaging relevant cortical networks. Eighteen normal-hearing listeners reported the content of one of two competing syllable streams simulated at roughly +30° and -30° azimuth. The competing streams consisted of syllables from two different-sex talkers. Spatialization was based on natural spatial cues (individualized HRTFs), individualized IIDs, or generic ITDs. We measured behavioral performance as well as electroencephalographic markers of selective attention. Behaviorally, subjects recalled target streams most accurately with natural cues. Neurally, spatial attention significantly modulated early evoked sensory response magnitudes only for natural cues, not in conditions using only ITDs or IIDs. Consistent with this, parietal oscillatory power in the alpha band (8-14 Hz; associated with filtering out distracting events from unattended directions) showed significantly less attentional modulation with isolated spatial cues than with natural cues. Our findings support the hypothesis that spatial selective attention networks are only partially engaged by impoverished spatial auditory cues. These results not only suggest that studies using unnatural spatial cues underestimate the neural effects of spatial auditory attention, they also illustrate the importance of preserving natural spatial cues in assistive listening devices to support robust attentional control.


Subjects
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Cues (Psychology), Spatial Processing/physiology, Acoustic Stimulation, Adolescent, Adult, Electroencephalography, Female, Humans, Male, Neural Pathways/physiology, Speech Perception/physiology, Young Adult
19.
J Acoust Soc Am ; 146(4): 2577, 2019 10.
Article in English | MEDLINE | ID: mdl-31671991

ABSTRACT

Spatial attention may be used to select target speech in one location while suppressing irrelevant speech in another. However, if perceptual resolution of spatial cues is weak, spatially focused attention may work poorly, leading to difficulty communicating in noisy settings. In electroencephalography (EEG), the distribution of alpha (8-14 Hz) power over parietal sensors reflects the spatial focus of attention [Banerjee, Snyder, Molholm, and Foxe (2011). J. Neurosci. 31, 9923-9932; Foxe and Snyder (2011). Front. Psychol. 2, 154]. If spatial attention is degraded, however, alpha may not be modulated across parietal sensors. A previously published behavioral and EEG study found that, compared to normal-hearing (NH) listeners, hearing-impaired (HI) listeners often had higher interaural time difference thresholds, worse performance when asked to report the content of an acoustic stream from a particular location, and weaker attentional modulation of neural responses evoked by sounds in a mixture [Dai, Best, and Shinn-Cunningham (2018). Proc. Natl. Acad. Sci. U. S. A. 115, E3286]. This study explored whether these same HI listeners also showed weaker alpha lateralization during the previously reported task. In NH listeners, hemispheric parietal alpha power was greater when the ipsilateral location was attended; this lateralization was stronger when competing melodies were separated by a larger spatial difference. In HI listeners, however, alpha was not lateralized across parietal sensors, consistent with a degraded ability to use spatial features to selectively attend.
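The alpha-lateralization measure used in studies like this one can be illustrated with a small sketch: estimate power in the 8-14 Hz band over left- and right-hemisphere parietal channels and form a normalized left/right contrast. This is a generic illustration, not the published analysis pipeline; the function names and the synthetic signals are assumptions for demonstration.

```python
import numpy as np

def alpha_power(x, fs, band=(8.0, 14.0)):
    """Mean FFT power of signal x within the alpha band (8-14 Hz)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def lateralization_index(left_parietal, right_parietal, fs):
    """(right - left) / (right + left) alpha power; positive values
    indicate more alpha over the right-hemisphere channel."""
    pl = alpha_power(left_parietal, fs)
    pr = alpha_power(right_parietal, fs)
    return (pr - pl) / (pr + pl)

# Synthetic check: a strong 10 Hz oscillation on the right channel only
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
left = 0.1 * rng.standard_normal(fs)
right = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)
ai = lateralization_index(left, right, fs)  # strongly positive
```

In a real attention experiment the index would be computed per trial and compared between attend-left and attend-right conditions; the abstract's finding is that this contrast was present in NH listeners but absent in HI listeners.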


Subjects
Alpha Rhythm, Attention/physiology, Auditory Perception/physiology, Brain/physiopathology, Hearing Loss, Sensorineural/physiopathology, Spatial Processing/physiology, Acoustic Stimulation, Adult, Cues (Psychology), Female, Functional Laterality, Humans, Male, Middle Aged, Parietal Lobe/physiopathology, Persons With Hearing Impairments, Young Adult
20.
J Acoust Soc Am ; 146(2): EL177, 2019 08.
Article in English | MEDLINE | ID: mdl-31472570

ABSTRACT

Visual calibration of auditory space requires re-alignment of representations differing in (1) format (auditory hemispheric channels vs visual maps) and (2) reference frames (head-centered vs eye-centered). Here, a ventriloquism paradigm from Kopco, Lin, Shinn-Cunningham, and Groh [J. Neurosci. 29, 13809-13814 (2009)] was used to examine these processes in humans for ventriloquism induced within one spatial hemifield. Results show that (1) the auditory representation can be adapted even by aligned audio-visual stimuli, and (2) the spatial reference frame is primarily head-centered, with a weak eye-centered modulation. These results support the view that the ventriloquism aftereffect is driven by multiple spatially non-uniform, hemisphere-specific processes.


Subjects
Figural Aftereffect, Functional Laterality, Sound Localization, Brain/physiology, Cues (Psychology), Eye Movements, Humans, Illusions/physiology, Speech Perception, Young Adult