Results 1 - 20 of 28
1.
Brain Res; 1798: 148144, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36328068

ABSTRACT

Human cognitive abilities naturally vary along a spectrum, even among those we call "neurotypical". Individuals differ in their ability to selectively attend to goal-relevant auditory stimuli. We sought to characterize this variability in a cohort of people with diverse attentional functioning. We recruited both neurotypical (N = 20) and ADHD (N = 25) young adults, all with normal hearing. Participants listened to one of three concurrent, spatially separated speech streams and reported the order of the syllables in that stream while we recorded electroencephalography (EEG). We tested both the ability to sustain attentional focus on a single "Target" stream and the ability to monitor the Target but flexibly either ignore or switch attention to an unpredictable "Interrupter" stream from another direction that sometimes appeared. Although differences in both stimulus structure and task demands affected behavioral performance, ADHD status did not. In both groups, the Interrupter evoked larger neural responses when it was to be attended compared to when it was irrelevant, including for the P3a "reorienting" response previously described as involuntary. This attentional modulation was weaker in ADHD listeners, even though their behavioral performance was the same. Across the entire cohort, individual performance correlated with the degree of top-down modulation of neural responses. These results demonstrate that listeners differ in their ability to modulate neural representations of sound based on task goals, while suggesting that adults with ADHD may have weaker volitional control of attentional processes than their neurotypical counterparts.


Subjects
Attention Deficit Disorder with Hyperactivity, Humans, Young Adult, Auditory Perception/physiology, Electroencephalography, Speech, Hearing Tests, Acoustic Stimulation
2.
Ear Hear; 41(6): 1635-1647, 2020.
Article in English | MEDLINE | ID: mdl-33136638

ABSTRACT

OBJECTIVE: Top-down spatial attention is effective at selecting a target sound from a mixture. However, nonspatial features often distinguish sources in addition to location. This study explores whether redundant nonspatial features are used to maintain selective auditory attention for a spatially defined target. DESIGN: We recorded electroencephalography while subjects focused attention on one of three simultaneous melodies. In one experiment, subjects (n = 17) were given an auditory cue indicating both the location and pitch of the target melody. In a second experiment (n = 17 subjects), the cue only indicated target location, and we compared two conditions: one in which the pitch separation of competing melodies was large, and one in which this separation was small. RESULTS: In both experiments, responses evoked by onsets of events in sound streams were modulated by attention, and we found no significant difference in this modulation between small and large pitch separation conditions. Therefore, the evoked response reflected that target stimuli were the focus of attention, and distractors were suppressed successfully for all experimental conditions. In all cases, parietal alpha was lateralized following the cue, but before melody onset, indicating that subjects initially focused attention in space. During the stimulus presentation, this lateralization disappeared when pitch cues were strong but remained significant when pitch cues were weak, suggesting that strong pitch cues reduced reliance on sustained spatial attention. CONCLUSIONS: These results demonstrate that once a well-defined target stream at a known location is selected, top-down spatial attention plays a weak role in filtering out a segregated competing stream.


Subjects
Attention, Sound Localization, Acoustic Stimulation, Auditory Perception, Cues, Electroencephalography, Humans
3.
Neuropsychologia; 146: 107530, 2020 09.
Article in English | MEDLINE | ID: mdl-32574616

ABSTRACT

In order to parse the world around us, we must constantly determine which sensory inputs arise from the same physical source and should therefore be perceptually integrated. Temporal coherence between auditory and visual stimuli drives audio-visual (AV) integration, but the role played by AV spatial alignment is less well understood. Here, we manipulated AV spatial alignment and collected electroencephalography (EEG) data while human subjects performed a free-field variant of the "pip and pop" AV search task. In this paradigm, visual search is aided by a spatially uninformative auditory tone, the onsets of which are synchronized to changes in the visual target. In Experiment 1, tones were either spatially aligned or spatially misaligned with the visual display. Regardless of AV spatial alignment, we replicated the key pip and pop result of improved AV search times. Mirroring the behavioral results, we found an enhancement of early event-related potentials (ERPs), particularly the auditory N1 component, in both AV conditions. We demonstrate that both top-down and bottom-up attention contribute to these N1 enhancements. In Experiment 2, we tested whether spatial alignment influences AV integration in a more challenging context with competing multisensory stimuli. An AV foil was added that visually resembled the target and was synchronized to its own stream of synchronous tones. The visual components of the AV target and AV foil occurred in opposite hemifields; the two auditory components were also in opposite hemifields and were either spatially aligned or spatially misaligned with the visual components to which they were synchronized. Search was fastest when the auditory and visual components of the AV target (and the foil) were spatially aligned. Attention modulated ERPs in both spatial conditions, but importantly, the scalp topography of early evoked responses shifted only when stimulus components were spatially aligned, signaling the recruitment of different neural generators likely related to multisensory integration. These results suggest that AV integration depends on AV spatial alignment when stimuli in both modalities compete for selective integration, a common scenario in real-world perception.


Subjects
Auditory Perception, Visual Perception, Acoustic Stimulation, Electroencephalography, Evoked Potentials, Humans, Photic Stimulation
4.
J Acoust Soc Am; 147(1): 371, 2020 01.
Article in English | MEDLINE | ID: mdl-32006971

ABSTRACT

Perceptual anchors are representations of stimulus features stored in long-term memory rather than short-term memory. The present study investigated whether listeners use perceptual anchors to improve pure-tone frequency discrimination. Ten amateur musicians performed a two-interval, two-alternative forced-choice frequency-discrimination experiment. In one half of the experiment, the frequency of the first tone was fixed across trials, and in the other half, the frequency of the first tone was roved widely across trials. The durations of the interstimulus intervals (ISIs) and the frequency differences between the tones on each trial were also manipulated. The data were analyzed with a Bayesian model that assumed that performance was limited by sensory noise (related to the initial encoding of the stimuli), memory noise (which increased proportionally to the ISI), fluctuations in attention, and response bias. It was hypothesized that memory-noise variance increased more rapidly during roved-frequency discrimination than fixed-frequency discrimination because listeners used perceptual anchors in the latter condition. The results supported this hypothesis. The results also suggested that listeners experienced more lapses in attention during roved-frequency discrimination.
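The observer model described in this abstract can be sketched as a small Monte Carlo simulation. Everything below (function name, parameter values) is an illustrative assumption for exposition, not the authors' actual fitted Bayesian model; it only captures the key ingredient that memory noise on the first tone's trace grows with the interstimulus interval.

```python
import numpy as np

def simulate_2afc(n_trials=20000, delta_f=1.0, sensory_sd=0.5,
                  memory_sd_per_s=0.3, isi=2.0, lapse_rate=0.05, rng=None):
    """Monte Carlo sketch of a 2I-2AFC frequency-discrimination observer.

    The internal trace of the first tone accrues memory noise in
    proportion to the interstimulus interval (ISI); on a 'lapse' trial
    the observer guesses at random. Returns proportion correct.
    """
    rng = np.random.default_rng(rng)
    # Encoded first tone: sensory noise plus ISI-scaled memory noise.
    trace_sd = np.sqrt(sensory_sd**2 + (memory_sd_per_s * isi)**2)
    first = rng.normal(0.0, trace_sd, n_trials)
    # Second tone is higher by delta_f; only sensory noise applies to it.
    second = rng.normal(delta_f, sensory_sd, n_trials)
    correct = second > first
    # Attentional lapses: random guessing on a fraction of trials.
    lapses = rng.random(n_trials) < lapse_rate
    correct[lapses] = rng.random(lapses.sum()) < 0.5
    return correct.mean()
```

Under this sketch, a perceptual anchor would correspond to a smaller effective `memory_sd_per_s` in the fixed-frequency condition, and longer ISIs degrade performance more when the anchor is unavailable.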


Subjects
Auditory Perception, Long-Term Memory, Pitch Discrimination, Acoustic Stimulation, Adult, Bayes Theorem, Female, Humans, Male, Psychophysics, Young Adult
5.
J Acoust Soc Am; 146(4): 2577, 2019 10.
Article in English | MEDLINE | ID: mdl-31671991

ABSTRACT

Spatial attention may be used to select target speech in one location while suppressing irrelevant speech in another. However, if perceptual resolution of spatial cues is weak, spatially focused attention may work poorly, leading to difficulty communicating in noisy settings. In electroencephalography (EEG), the distribution of alpha (8-14 Hz) power over parietal sensors reflects the spatial focus of attention [Banerjee, Snyder, Molholm, and Foxe (2011). J. Neurosci. 31, 9923-9932; Foxe and Snyder (2011). Front. Psychol. 2, 154.] If spatial attention is degraded, however, alpha may not be modulated across parietal sensors. A previously published behavioral and EEG study found that, compared to normal-hearing (NH) listeners, hearing-impaired (HI) listeners often had higher interaural time difference thresholds, worse performance when asked to report the content of an acoustic stream from a particular location, and weaker attentional modulation of neural responses evoked by sounds in a mixture [Dai, Best, and Shinn-Cunningham (2018). Proc. Natl. Acad. Sci. U. S. A. 115, E3286]. This study explored whether these same HI listeners also showed weaker alpha lateralization during the previously reported task. In NH listeners, hemispheric parietal alpha power was greater when the ipsilateral location was attended; this lateralization was stronger when competing melodies were separated by a larger spatial difference. In HI listeners, however, alpha was not lateralized across parietal sensors, consistent with a degraded ability to use spatial features to selectively attend.


Subjects
Alpha Rhythm, Attention/physiology, Auditory Perception/physiology, Brain/physiopathology, Sensorineural Hearing Loss/physiopathology, Spatial Processing/physiology, Acoustic Stimulation, Adult, Cues, Female, Functional Laterality, Humans, Male, Middle Aged, Parietal Lobe/physiopathology, Persons with Hearing Impairments, Young Adult
6.
J Neurosci; 36(13): 3755-64, 2016 Mar 30.
Article in English | MEDLINE | ID: mdl-27030760

ABSTRACT

Evidence from animal and human studies suggests that moderate acoustic exposure, causing only transient threshold elevation, can nonetheless cause "hidden hearing loss" that interferes with coding of suprathreshold sound. Such noise exposure destroys synaptic connections between cochlear hair cells and auditory nerve fibers; however, there is no clinical test of this synaptopathy in humans. In animals, synaptopathy reduces the amplitude of auditory brainstem response (ABR) wave-I. Unfortunately, ABR wave-I is difficult to measure in humans, limiting its clinical use. Here, using analogous measurements in humans and mice, we show that the effect of masking noise on the latency of the more robust ABR wave-V mirrors changes in ABR wave-I amplitude. Furthermore, in our human cohort, the effect of noise on wave-V latency predicts perceptual temporal sensitivity. Our results suggest that measures of the effects of noise on ABR wave-V latency can be used to diagnose cochlear synaptopathy in humans. SIGNIFICANCE STATEMENT: Although there are suspicions that cochlear synaptopathy affects humans with normal hearing thresholds, no one has yet reported a clinical measure that is a reliable marker of such loss. By combining human and animal data, we demonstrate that the latency of auditory brainstem response wave-V in noise reflects auditory nerve loss. This is the first study of human listeners with normal hearing thresholds that links individual differences observed in behavior and auditory brainstem response timing to cochlear synaptopathy. These results can guide development of a clinical test to reveal this previously unknown form of noise-induced hearing loss in humans.


Subjects
Inner Ear/pathology, Auditory Brainstem Evoked Potentials/physiology, Noise-Induced Hearing Loss/pathology, Noise, Reaction Time/physiology, Synapses/pathology, Acoustic Stimulation, Adult, Animals, Auditory Perception/physiology, Auditory Threshold/physiology, Animal Disease Models, Electroencephalography, Female, Noise-Induced Hearing Loss/physiopathology, Humans, Male, Mice, Spontaneous Otoacoustic Emissions/physiology, Young Adult
7.
Cereb Cortex; 26(3): 1302-1308, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26656996

ABSTRACT

Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands.


Subjects
Auditory Perception/physiology, Short-Term Memory/physiology, Parietal Lobe/physiology, Space Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Attention/physiology, Brain Mapping, Eye Movement Measurements, Eye Movements/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Photic Stimulation, Young Adult
8.
J Acoust Soc Am; 138(3): 1637-59, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428802

ABSTRACT

Population responses such as the auditory brainstem response (ABR) are commonly used for hearing screening, but the relationship between single-unit physiology and scalp-recorded population responses is not well understood. Computational models that integrate physiologically realistic models of single-unit auditory-nerve (AN), cochlear nucleus (CN), and inferior colliculus (IC) cells with models of broadband peripheral excitation can be used to simulate ABRs and thereby link detailed knowledge of animal physiology to human applications. Existing functional ABR models fail to capture the empirically observed 1.2-2 ms ABR wave-V latency-vs-intensity decrease that is thought to arise from level-dependent changes in cochlear excitation and firing synchrony across different tonotopic sections. This paper proposes an approach where level-dependent cochlear excitation patterns, which reflect human cochlear filter tuning parameters, drive AN fibers to yield realistic level-dependent properties of the ABR wave-V. The number of free model parameters is minimal, producing a model in which various sources of hearing impairment can easily be simulated on an individualized and frequency-dependent basis. The model fits latency-vs-intensity functions observed in human ABRs and otoacoustic emissions while maintaining rate-level and threshold characteristics of single-unit AN fibers. The simulations help to reveal which tonotopic regions dominate ABR waveform peaks at different stimulus intensities.


Subjects
Brain Stem/physiology, Cochlear Nerve/physiology, Acoustic Stimulation, Basilar Membrane/physiology, Behavioral Sciences, Cochlea/physiology, Auditory Brainstem Evoked Potentials/physiology, Hearing/physiology, Humans, Spontaneous Otoacoustic Emissions/physiology, Reaction Time/physiology, Vibration
9.
Neuron; 87(4): 882-92, 2015 Aug 19.
Article in English | MEDLINE | ID: mdl-26291168

ABSTRACT

The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex.


Subjects
Attention/physiology, Frontal Lobe/physiology, Short-Term Memory/physiology, Nerve Net/physiology, Spatial Behavior/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping/methods, Female, Humans, Male, Photic Stimulation/methods, Time Factors, Young Adult
10.
Brain Res; 1626: 146-64, 2015 Nov 11.
Article in English | MEDLINE | ID: mdl-26187756

ABSTRACT

Auditory brainstem responses (ABRs) and their steady-state counterpart (subcortical steady-state responses, SSSRs) are generally thought to be insensitive to cognitive demands. However, a handful of studies report that SSSRs are modulated depending on the subject's focus of attention, either towards or away from an auditory stimulus. Here, we explored whether attentional focus affects the envelope-following response (EFR), which is a particular kind of SSSR, and if so, whether the effects are specific to which sound elements in a sound mixture a subject is attending (selective auditory attentional modulation), specific to attended sensory input (inter-modal attentional modulation), or insensitive to attentional focus. We compared the strength of EFR-stimulus phase locking in human listeners under various tasks: listening to a monaural stimulus, selectively attending to a particular ear during dichotic stimulus presentation, and attending to visual stimuli while ignoring dichotic auditory inputs. We observed no systematic changes in the EFR across experimental manipulations, even though cortical EEG revealed attention-related modulations of alpha activity during the task. We conclude that attentional effects, if any, on human subcortical representation of sounds cannot be observed robustly using EFRs. This article is part of a Special Issue entitled SI: Prediction and Attention.


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Auditory Brainstem Evoked Potentials, Acoustic Stimulation, Adolescent, Adult, Alpha Rhythm, Electroencephalography, Female, Humans, Male, Sound Spectrography, Young Adult
11.
Curr Biol; 25(14): 1885-91, 2015 Jul 20.
Article in English | MEDLINE | ID: mdl-26119749

ABSTRACT

Active search is a ubiquitous goal-driven behavior wherein organisms purposefully investigate the sensory environment to locate a target object. During active search, brain circuits analyze a stream of sensory information from the external environment, adjusting for internal signals related to self-generated movement or "top-down" weighting of anticipated target and distractor properties. Sensory responses in the cortex can be modulated by internal state, though the extent and form of modulation arising in the cortex de novo versus an inheritance from subcortical stations is not clear. We addressed this question by simultaneously recording from auditory and visual regions of the thalamus (MG and LG, respectively) while mice used dynamic auditory or visual feedback to search for a hidden target within an annular track. Locomotion was associated with strongly suppressed responses and reduced decoding accuracy in MG but a subtle increase in LG spiking. Because stimuli in one modality provided critical information about target location while the other served as a distractor, we could also estimate the importance of task relevance in both thalamic subdivisions. In contrast to the effects of locomotion, we found that LG responses were reduced overall yet decoded stimuli more accurately when vision was behaviorally relevant, whereas task relevance had little effect on MG responses. This double dissociation between the influences of task relevance and movement in MG and LG highlights a role for extrasensory modulation in the thalamus but also suggests key differences in the organization of modulatory circuitry between the auditory and visual pathways.


Subjects
Auditory Pathways/physiology, Auditory Perception, Thalamus/physiology, Visual Pathways/physiology, Visual Perception, Animals, Locomotion, Male, Mice, Inbred C57BL Mice
12.
Hear Res; 323: 81-90, 2015 May.
Article in English | MEDLINE | ID: mdl-25732724

ABSTRACT

Recent studies have shown that prior knowledge about where, when, and who is going to talk improves speech intelligibility. How related attentional processes affect cognitive processing load has not been investigated yet. In the current study, three experiments investigated how the pupil dilation response is affected by prior knowledge of target speech location, target speech onset, and who is going to talk. A total of 56 young adults with normal hearing participated. They had to reproduce a target sentence presented to one ear while ignoring a distracting sentence simultaneously presented to the other ear. The two sentences were independently masked by fluctuating noise. Target location (left or right ear), speech onset, and talker variability were manipulated in separate experiments by keeping these features either fixed during an entire block or randomized over trials. Pupil responses were recorded during listening and performance was scored after recall. The results showed an improvement in performance when the location of the target speech was fixed instead of randomized. Additionally, location uncertainty increased the pupil dilation response, which suggests that prior knowledge of location reduces cognitive load. Interestingly, the observed pupil responses for each condition were consistent with subjective reports of listening effort. We conclude that communicating in a dynamic environment like a cocktail party (where participants in competing conversations move unpredictably) requires substantial listening effort because of the demands placed on attentional processes.


Subjects
Attention, Noise/adverse effects, Perceptual Masking, Pupil/physiology, Sound Localization, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Speech Audiometry, Blinking, Cognition, Cues, Eye Movements, Female, Humans, Male, Mental Recall, Miosis, Mydriasis, Pupillary Reflex, Time Factors, Uncertainty, Young Adult
13.
Cereb Cortex; 25(7): 1697-706, 2015 Jul.
Article in English | MEDLINE | ID: mdl-24429136

ABSTRACT

How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces.
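The stimulus-reconstruction idea in this abstract can be illustrated with a minimal linear-decoder sketch. This is an assumption-laden toy version, not the authors' pipeline: a ridge regression maps time-lagged EEG channels back onto a speech envelope, and attention is inferred from which talker's envelope the reconstruction correlates with best. All function names and parameter values are illustrative.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (time, channels*lags)."""
    cols = [np.roll(eeg, lag, axis=0) for lag in range(n_lags)]
    X = np.concatenate(cols, axis=1)
    X[:n_lags] = 0.0  # discard wrapped-around samples at the start
    return X

def train_reconstructor(eeg, envelope, n_lags=16, ridge=1e2):
    """Ridge regression from lagged EEG to the attended speech envelope."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                           X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, n_lags=16):
    """Return 'A' if the reconstruction tracks talker A's envelope better."""
    rec = lagged(eeg, n_lags) @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"
```

In practice a decoder like this would be trained on held-out attended trials and applied to single unaveraged trials, which is the capability the abstract reports for EEG.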


Subjects
Attention/physiology, Brain/physiology, Electroencephalography/methods, Computer-Assisted Signal Processing, Speech Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Neuropsychological Tests, Time Factors
14.
Hear Res; 312: 114-20, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24709275

ABSTRACT

Dividing attention over two streams of speech strongly decreases performance compared to focusing on only one. How divided attention affects cognitive processing load as indexed with pupillometry during speech recognition has so far not been investigated. In 12 young adults the pupil response was recorded while they focused on either one or both of two sentences that were presented dichotically and masked by fluctuating noise across a range of signal-to-noise ratios. In line with previous studies, the performance decreases when processing two target sentences instead of one. Additionally, dividing attention to process two sentences caused larger pupil dilation and later peak pupil latency than processing only one. This suggests an effect of attention on cognitive processing load (pupil dilation) during speech processing in noise.


Subjects
Attention/physiology, Pupil/physiology, Pupillary Reflex/physiology, Sound Localization/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Female, Functional Laterality/physiology, Humans, Male, Noise, Signal-to-Noise Ratio, Young Adult
15.
Clin Neurophysiol; 125(9): 1878-88, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24525091

ABSTRACT

OBJECTIVE: Auditory subcortical steady state responses (SSSRs), also known as frequency following responses (FFRs), provide a non-invasive measure of phase-locked neural responses to acoustic and cochlear-induced periodicities. SSSRs have been used both clinically and in basic neurophysiological investigation of auditory function. SSSR data acquisition typically involves thousands of presentations of each stimulus type, sometimes in two polarities, with acquisition times often exceeding an hour per subject. Here, we present a novel approach to reduce the data acquisition times significantly. METHODS: Because the sources of the SSSR are deep compared to the primary noise sources, namely background spontaneous cortical activity, the SSSR varies more smoothly over the scalp than the noise. We exploit this property and extract SSSRs efficiently, using multichannel recordings and an eigendecomposition of the complex cross-channel spectral density matrix. RESULTS: Our proposed method yields SNR improvement exceeding a factor of 3 compared to traditional single-channel methods. CONCLUSIONS: It is possible to reduce data acquisition times for SSSRs significantly with our approach. SIGNIFICANCE: The proposed method allows SSSRs to be recorded for several stimulus conditions within a single session and also makes it possible to acquire both SSSRs and cortical EEG responses without increasing the session length.
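The multichannel extraction step might look like the following sketch (an illustrative assumption, not the published algorithm): estimate the complex cross-spectral density matrix at the response frequency across epochs, then weight channels by its principal eigenvector so that phase-locked activity across the scalp adds coherently while spatially rougher cortical noise is attenuated.

```python
import numpy as np

def csd_weights(epochs, freq, fs):
    """Principal-eigenvector channel weights from the complex
    cross-spectral density (CSD) matrix at one frequency.

    epochs: array (n_epochs, n_channels, n_samples)
    Returns complex channel weights (n_channels,) and the combined
    per-epoch spectral estimate (n_epochs,).
    """
    n_ep, n_ch, n_s = epochs.shape
    bin_idx = int(round(freq * n_s / fs))
    # Complex Fourier coefficient of each epoch/channel at the target bin.
    coeffs = np.fft.rfft(epochs, axis=-1)[:, :, bin_idx]  # (n_ep, n_ch)
    # CSD matrix: outer product of coefficients, averaged over epochs.
    csd = (coeffs[:, :, None] * coeffs[:, None, :].conj()).mean(axis=0)
    # Principal eigenvector = channel combination maximizing coherent power.
    vals, vecs = np.linalg.eigh(csd)
    w = vecs[:, -1]  # eigenvector of the largest eigenvalue
    return w, coeffs @ w.conj()
```

Because the weighted sum is computed per epoch, phase locking of the combined response can then be assessed with the usual single-channel statistics, but at a higher SNR.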


Subjects
Auditory Cortex/physiology, Electroencephalography/methods, Acoustic Stimulation, Adult, Algorithms, Cochlea/physiology, Auditory Brainstem Evoked Potentials/physiology, Female, Humans, Male, Principal Component Analysis, Reproducibility of Results, Signal-to-Noise Ratio, Young Adult
16.
Cereb Cortex; 24(3): 773-84, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23180753

ABSTRACT

Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes.


Subjects
Attention/physiology, Auditory Perception/physiology, Brain Mapping, Cerebral Cortex/physiology, Space Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Cerebral Cortex/blood supply, Female, Functional Laterality, Humans, Computer-Assisted Image Processing, Judgment, Linear Models, Magnetic Resonance Imaging, Male, Neural Pathways/blood supply, Photic Stimulation, Support Vector Machine, Young Adult
17.
Hear Res; 307: 111-20, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23850664

ABSTRACT

Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the "cocktail party" problem. This article is part of a Special Issue entitled Human Auditory Neuroimaging.


Subjects
Attention , Auditory Cortex/physiology , Auditory Perception , Brain Mapping , Acoustic Stimulation , Auditory Cortex/anatomy & histology , Auditory Pathways/physiology , Brain Mapping/methods , Electroencephalography , Humans , Magnetic Resonance Imaging , Magnetoencephalography , Noise/adverse effects , Perceptual Masking
18.
J Acoust Soc Am ; 133(4): 2329-39, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23556599

ABSTRACT

Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) reversed much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
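The contrast between independent and linked compression can be illustrated with a minimal static compressor. This is a sketch under stated assumptions: the threshold, compression ratio, and ear levels below are hypothetical values chosen only to show how linking the two ears preserves the ILD, not parameters from the study or from any real hearing aid.

```python
import numpy as np

def compressor_gain_db(level_db, threshold_db=50.0, ratio=3.0):
    """Static compression: above threshold, output grows 1/ratio dB per input dB."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

# A source off to the left: 65 dB at the left ear, 55 dB at the right (10 dB ILD).
left_db, right_db = 65.0, 55.0

# Independent compression: each ear computes gain from its own level,
# so the louder ear is turned down more and the ILD shrinks.
ild_indep = (left_db + compressor_gain_db(left_db)) - \
            (right_db + compressor_gain_db(right_db))

# Linked compression: both ears share one gain (driven by the louder ear),
# so the interaural level difference is left intact.
shared_gain = compressor_gain_db(max(left_db, right_db))
ild_linked = (left_db + shared_gain) - (right_db + shared_gain)

print(f"ILD independent: {ild_indep:.1f} dB, linked: {ild_linked:.1f} dB")
```

With these illustrative settings the 10 dB ILD collapses to about 3.3 dB under independent compression but survives unchanged when the compressors are linked, which is the mechanism behind the behavioral rescue reported above.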


Subjects
Attention , Hearing Aids , Noise/adverse effects , Perceptual Masking , Sound Localization , Speech Perception , Acoustic Stimulation , Adolescent , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Cues , Humans , Psychoacoustics , Signal Processing, Computer-Assisted , Time Factors , Vibration , Young Adult
19.
J Neurosci ; 32(39): 13402-10, 2012 Sep 26.
Article in English | MEDLINE | ID: mdl-23015431

ABSTRACT

In the "flash-beep illusion," a single light flash is perceived as multiple flashes when presented in close temporal proximity to multiple auditory beeps. Accounts of this illusion argue that temporal auditory information interferes with visual information because temporal acuity is better in audition than vision. However, it may also be that whenever there are multiple sensory inputs, the interference caused by a to-be-ignored stimulus on an attended stimulus depends on the likelihood that the stimuli are perceived as coming from a single distal source. Here we explore, in human observers, perceptual interactions between competing auditory and visual inputs while varying spatial proximity, which affects object formation. When two spatially separated streams are presented in the same (visual or auditory) modality, temporal judgments about a target stream from one direction are biased by the content of the competing distractor stream. Cross-modally, auditory streams from both target and distractor directions bias the perceived number of events in a target visual stream; however, importantly, the auditory stream from the target direction influences visual judgments more than does the auditory stream from the opposite hemifield. As in the original flash-beep illusion, visual streams weakly influence auditory judgments, regardless of spatial proximity. We also find that perceptual interference in the flash-beep illusion is similar to within-modality interference from a competing same-modality stream. Results reveal imperfect and obligatory within- and across-modality integration of information, and hint that the strength of these interactions depends on object binding.


Subjects
Auditory Perception/physiology , Illusions/physiology , Noise , Visual Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Cues , Female , Functional Laterality , Humans , Male , Photic Stimulation , Psychophysics , Reaction Time/physiology , Time Factors , Young Adult
20.
J Assoc Res Otolaryngol ; 13(1): 119-29, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22124889

ABSTRACT

Past studies have explored the relative strengths of auditory features in a selective attention task by pitting features against one another and asking listeners to report the words perceived in a given sentence. While these studies show that the continuity of competing features affects streaming, they did not address whether the influence of specific features is modulated by volitionally directed attention. Here, we explored whether the continuity of a task-irrelevant feature affects the ability to selectively report one of two competing speech streams when attention is specifically directed to a different feature. Sequences of simultaneous pairs of spoken digits were presented in which exactly one digit of each pair matched a primer phrase in pitch and exactly one digit of each pair matched the primer location. Within a trial, location and pitch were randomly paired; they either were consistent with each other from digit to digit or were switched (e.g., the sequence from the primer's location changed pitch across digits). In otherwise identical blocks, listeners were instructed to report digits matching the primer either in location or in pitch. Listeners were told to ignore the irrelevant feature, if possible, in order to perform well. Listener responses depended on task instructions, proving that top-down attention alters how a subject performs the task. Performance improved when the separation of the target and masker in the task-relevant feature increased. Importantly, the values of the task-irrelevant feature also influenced performance in some cases. Specifically, when instructed to attend location, listeners performed worse as the separation between target and masker pitch increased, especially when the spatial separation between digits was small. These results indicate that task-relevant and task-irrelevant features are perceptually bound together: continuity of task-irrelevant features influences selective attention in an automatic, obligatory manner, consistent with the idea that auditory attention operates on objects.


Subjects
Attention/physiology , Pitch Perception/physiology , Psychoacoustics , Sound Localization/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Discrimination, Psychological/physiology , Humans , Perceptual Masking/physiology , Young Adult