Results 1 - 20 of 69
1.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38267259

ABSTRACT

Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grain spectral and temporal statistical features.
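The single-trial population classification described above can be sketched with a toy leave-one-out nearest-centroid decoder; all data, dimensions, and noise levels below are hypothetical stand-ins, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_textures, n_trials, n_units = 5, 20, 30

# Hypothetical population responses: each texture type evokes a distinct
# mean firing pattern across units, plus trial-to-trial noise.
centroids = rng.normal(0.0, 1.0, (n_textures, n_units))
X = centroids[:, None, :] + rng.normal(0.0, 0.8, (n_textures, n_trials, n_units))

# Leave-one-out nearest-centroid classification of texture type from
# single-trial population activity.
correct = 0
for t in range(n_textures):
    for i in range(n_trials):
        trial = X[t, i]
        # Recompute class means without the held-out trial.
        means = np.array([
            X[k][np.arange(n_trials) != i].mean(axis=0) if k == t
            else X[k].mean(axis=0)
            for k in range(n_textures)
        ])
        pred = int(np.argmin(np.linalg.norm(means - trial, axis=1)))
        correct += pred == t
accuracy = correct / (n_textures * n_trials)
print(f"single-trial texture classification accuracy: {accuracy:.2f}")
```

Raising the noise or reducing the number of informative units drives the accuracy toward the chance level of 1/n_textures, mirroring the consistently low classification performance reported for AC.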


Subjects
Auditory Cortex, Inferior Colliculi, Female, Rats, Animals, Auditory Pathways/physiology, Inferior Colliculi/physiology, Mesencephalon/physiology, Sound, Auditory Cortex/physiology, Acoustic Stimulation/methods, Auditory Perception/physiology
2.
Front Psychol ; 14: 1106562, 2023.
Article in English | MEDLINE | ID: mdl-37705948

ABSTRACT

The unity assumption hypothesis contends that higher-level factors, such as a perceiver's belief and prior experience, modulate multisensory integration. The McGurk illusion exemplifies such integration. When a visual velar consonant /ga/ is dubbed with an auditory bilabial /ba/, listeners unify the discrepant signals with the knowledge that open lips cannot produce /ba/, and a fusion percept /da/ is perceived. Previous research claimed to have falsified the unity assumption hypothesis by demonstrating that the McGurk effect occurs even when a face is dubbed with a voice of the opposite sex, and thus violates expectations from prior experience. However, stronger counter-evidence than a mere apparent incongruence between unfamiliar faces and voices may be needed to prevent perceptual unity. Here we investigated whether the McGurk illusion with male/female incongruent stimuli can be disrupted by familiarization and priming with an appropriate pairing of face and voice. In an online experiment, the susceptibility of participants to the McGurk illusion was tested with stimuli containing either a male or female face with a voice of incongruent gender. The number of times participants experienced a McGurk illusion was measured before and after a familiarization block, which familiarized them with the true pairings of face and voice. After familiarization and priming, susceptibility to the McGurk effect decreased significantly on average. The findings support the notion that unity assumptions modulate intersensory bias, and confirm and extend previous studies using male/female incongruent McGurk stimuli.

3.
Hear Res ; 438: 108857, 2023 10.
Article in English | MEDLINE | ID: mdl-37639922

ABSTRACT

Perception is sensitive to statistical regularities in the environment, including temporal characteristics of sensory inputs. Interestingly, implicit learning of temporal patterns in one modality can also improve their processing in another modality. However, it is unclear how cross-modal learning transfer affects neural responses to sensory stimuli. Here, we recorded neural activity of human volunteers using electroencephalography (EEG), while participants were exposed to brief sequences of randomly timed auditory or visual pulses. Some trials consisted of a repetition of the temporal pattern within the sequence, and subjects were tasked with detecting these trials. Unknown to the participants, some trials reappeared throughout the experiment across both modalities (Transfer) or only within a modality (Control), enabling implicit learning in one modality and its transfer. Using a novel method of analysis of single-trial EEG responses, we showed that learning temporal structures within and across modalities is reflected in neural learning curves. These putative neural correlates of learning transfer were similar both when temporal information learned in audition was transferred to visual stimuli and vice versa. The modality-specific mechanisms for learning of temporal information and general mechanisms which mediate learning transfer across modalities had distinct physiological signatures: temporal learning within modalities relied on modality-specific brain regions while learning transfer affected beta-band activity in frontal regions.


Assuntos
Percepção Auditiva , Aprendizagem , Humanos , Eletroencefalografia , Lobo Frontal , Voluntários Saudáveis
4.
Front Psychol ; 13: 1026116, 2022.
Article in English | MEDLINE | ID: mdl-36324794

ABSTRACT

Despite pitch being considered the primary cue for discriminating lexical tones, there are secondary cues such as loudness contour and duration, which may allow some cochlear implant (CI) tone discrimination even with severely degraded pitch cues. To isolate pitch cues from other cues, we developed a new disyllabic word stimulus set (Di) whose primary (pitch) and secondary (loudness) cues varied independently. The Di set consists of 270 disyllabic words, each having a distinct meaning depending on the perceived tone. Thus, listeners who hear the primary pitch cue clearly may hear a different meaning from listeners who struggle with the pitch cue and must rely on the secondary loudness contour. A lexical tone recognition experiment was conducted, which compared Di with a monosyllabic set of natural recordings. Seventeen CI users and eight normal-hearing (NH) listeners took part in the experiment. Results showed that CI users had poorer pitch cue encoding, and with the Di corpus their tone recognition performance was significantly influenced by the "missing" or "confusing" secondary cues. Pitch-contour-based tone recognition remains far from satisfactory for CI users compared to NH listeners, even if some appear to integrate multiple cues to achieve high scores. This disyllabic corpus could be used to examine the pitch recognition performance of CI users and the effectiveness of Mandarin tone enhancement strategies based on pitch cue enhancement. The Di corpus is freely available online: https://github.com/BetterCI/DiTone.

5.
BMC Biol ; 20(1): 48, 2022 02 16.
Article in English | MEDLINE | ID: mdl-35172815

ABSTRACT

BACKGROUND: To localize sound sources accurately in a reverberant environment, human binaural hearing strongly favors analyzing the initial wave front of sounds. Behavioral studies of this "precedence effect" have so far largely been confined to human subjects, limiting the scope of complementary physiological approaches. Similarly, physiological studies have mostly looked at neural responses in the inferior colliculus, the main relay point between the inner ear and the auditory cortex, or used modeling of cochlear auditory transduction in an attempt to identify likely underlying mechanisms. Studies capable of providing a direct comparison of neural coding and behavioral measures of sound localization under the precedence effect are lacking. RESULTS: We adapted a "temporal weighting function" paradigm, previously developed to quantify the precedence effect in humans, for use in laboratory rats. The animals learned to lateralize click trains in which each click in the train had a different interaural time difference. Computing the "perceptual weight" of each click in the train revealed a strong onset bias, very similar to that reported for humans. Follow-on electrocorticographic recording experiments revealed that onset weighting of interaural time differences is a robust feature of the cortical population response, but interestingly, it often fails to manifest at individual cortical recording sites. CONCLUSION: While previous studies suggested that the precedence effect may be caused by early processing mechanisms in the cochlea or by inhibitory circuitry in the brainstem and midbrain, our results indicate that the precedence effect is not fully developed at the level of individual recording sites in the auditory cortex; robust and consistent precedence effects are observable only at the level of cortical population responses. This indicates that the precedence effect emerges at later cortical processing stages and is a significantly "higher order" feature than has hitherto been assumed.
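The "perceptual weight" computation can be illustrated with a minimal simulation: a hypothetical onset-weighted observer lateralizes click trains, and per-click weights are recovered by regressing the binary responses on the per-click ITDs (the numbers and the least-squares recovery are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_clicks = 5000, 4

# Hypothetical onset-weighted observer: the first click's ITD dominates
# the left/right lateralization decision (precedence effect).
true_w = np.array([1.0, 0.3, 0.2, 0.1])
itds = rng.uniform(-175, 175, (n_trials, n_clicks))        # microseconds
decision = np.where(itds @ true_w + rng.normal(0, 60, n_trials) > 0, 1.0, -1.0)

# Estimate perceptual weights by regressing the binary response on the
# per-click ITDs (least squares; weights recovered up to a scale factor).
w_hat, *_ = np.linalg.lstsq(itds, decision, rcond=None)
w_hat /= w_hat.sum()
print("normalized perceptual weights:", np.round(w_hat, 2))
```

The dominant first weight in the recovered profile is the simulated analogue of the onset bias described in the abstract.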


Assuntos
Córtex Auditivo , Colículos Inferiores , Localização de Som , Estimulação Acústica/métodos , Animais , Córtex Auditivo/fisiologia , Audição , Humanos , Colículos Inferiores/fisiologia , Localização de Som/fisiologia
6.
Hear Res ; 412: 108357, 2021 12.
Article in English | MEDLINE | ID: mdl-34739889

ABSTRACT

Previous psychophysical studies have identified a hierarchy of time-averaged statistics which determine the identity of natural sound textures. However, it is unclear whether neurons in the inferior colliculus (IC) are sensitive to each of these statistical features of natural sound textures. We used 13 representative sound textures spanning the space of three statistics extracted from over 200 natural textures. The synthetic textures were generated by incorporating the statistical features in a step-by-step manner, in which a particular statistical feature was changed while the other statistical features remained unchanged. Extracellular activity in response to the synthetic texture stimuli was recorded in the IC of anesthetized rats. Analysis of the transient and sustained multiunit activity after each transition of statistical feature showed that the IC units were sensitive to changes in all types of statistics, although to a varying extent. For example, more neurons were sensitive to changes in variance than to changes in the modulation correlations. Our results suggest that sensitivity to these statistical features at subcortical levels contributes to the identification and discrimination of natural sound textures.


Assuntos
Colículos Inferiores , Estimulação Acústica , Animais , Colículos Inferiores/fisiologia , Neurônios/fisiologia , Ratos , Som
7.
Hear Res ; 409: 108331, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34416492

ABSTRACT

While a large body of literature has examined the encoding of binaural spatial cues in the auditory midbrain, studies that ask how quantitative measures of spatial tuning in midbrain neurons compare with an animal's psychoacoustic performance remain rare. Researchers have tried to explain deficits in spatial hearing in certain patient groups, such as binaural cochlear implant users, in terms of apparent reductions in the spatial tuning of midbrain neurons in animal models. However, the quality of spatial tuning can be quantified in many different ways, and in the absence of evidence that a given neural tuning measure correlates with psychoacoustic performance, the interpretation of such findings remains very tentative. Here, we characterize ITD tuning in the rat inferior colliculus (IC) to acoustic pulse train stimuli with varying envelopes and at varying rates, and explore whether the quality of tuning correlates with behavioral performance. We quantified both mutual information (MI) and neural d' as measures of ITD sensitivity. Neural d' values paralleled behavioral ones, declining with increasing click rates or when envelopes changed from rectangular to Hanning windows, and they correlated much better with behavioral performance than MI did. Meanwhile, MI values were larger in an older, more experienced cohort of animals than in naive animals, but neural d' did not differ between cohorts. However, the results obtained with neural d' and MI were highly correlated when ITD values were coded simply as left or right ear leading, rather than as specific ITD values. Thus, neural measures of lateralization ability (e.g. d' or left/right MI) appear to be highly predictive of psychoacoustic performance in a two-alternative forced choice task.
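As a sketch of the two sensitivity measures compared above, the following computes a neural d' and a left/right mutual information from simulated spike counts (the Poisson rates are hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
# Hypothetical spike counts of one IC unit for left- vs right-leading ITDs.
left = rng.poisson(8.0, n)
right = rng.poisson(12.0, n)

# Neural d': separation of the two response distributions.
d_prime = (right.mean() - left.mean()) / np.sqrt(0.5 * (right.var() + left.var()))

# Mutual information (bits) between side (left/right) and binned spike count.
counts = np.concatenate([left, right])
bins = np.arange(counts.max() + 2)
p_r = np.histogram(counts, bins=bins)[0] / (2 * n)   # marginal P(response)
mi = 0.0
for group in (left, right):
    p_r_s = np.histogram(group, bins=bins)[0] / n    # P(response | side)
    mask = p_r_s > 0
    mi += 0.5 * np.sum(p_r_s[mask] * np.log2(p_r_s[mask] / p_r[mask]))
print(f"neural d' = {d_prime:.2f}, left/right MI = {mi:.2f} bits")
```

This left/right form of MI corresponds to the lateralization coding described in the abstract; MI computed over many specific ITD values would be a different, generally larger quantity.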


Assuntos
Implante Coclear , Implantes Cocleares , Colículos Inferiores , Estimulação Acústica , Animais , Audição , Ratos , Localização de Som
8.
iScience ; 24(6): 102527, 2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34142039

ABSTRACT

An interdisciplinary approach to sensory information combination shows a correspondence between perceptual and neural measures of nonlinear multisensory integration. In psychophysics, sensory information combinations are often characterized by the Minkowski formula, but the neural substrates of many psychophysical multisensory interactions are unknown. We show that audiovisual interactions - for both psychophysical detection threshold data and cortical bimodal neurons - obey similar vector-like Minkowski models, suggesting that cortical bimodal neurons could underlie multisensory perceptual sensitivity. An alternative Bayesian model is not a good predictor of cortical bimodal responses. In contrast to cortex, audiovisual data from the superior colliculus resemble the 'City-Block' combination rule used in perceptual similarity metrics. Previous work found that a simple power-law amplification rule is followed by perceptual appearance measures and by cortical subthreshold multisensory neurons. The two most studied neural cell classes in cortical multisensory interactions may thus provide neural substrates for two important perceptual modes: appearance-based and performance-based perception.
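The Minkowski combination rule referred to above has the form S_AV = (S_A^m + S_V^m)^(1/m); m = 2 gives the vector-like rule and m = 1 the 'City-Block' rule. A minimal sketch (the unimodal sensitivities are hypothetical placeholders):

```python
def minkowski_combination(s_a: float, s_v: float, m: float) -> float:
    """Combined audiovisual sensitivity under a Minkowski rule of order m."""
    return (s_a**m + s_v**m) ** (1.0 / m)

# Hypothetical unimodal detection sensitivities (e.g. 1/threshold units).
s_a, s_v = 1.0, 1.0
for m, label in [(1, "City-Block rule (superior colliculus-like)"),
                 (2, "vector-sum rule (cortical bimodal-like)")]:
    print(f"m={m} {label}: combined sensitivity = "
          f"{minkowski_combination(s_a, s_v, m):.3f}")
```

Note that the City-Block rule (m = 1) predicts a larger combined gain from two equal inputs than the vector-sum rule (m = 2), which is what makes the two combination modes distinguishable in data.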

9.
PLoS One ; 16(6): e0238960, 2021.
Article in English | MEDLINE | ID: mdl-34161323

ABSTRACT

Sounds like "running water" and "buzzing bees" are classes of sounds which are a collective result of many similar acoustic events and are known as "sound textures". A recent psychoacoustic study using sound textures has reported that natural sounding textures can be synthesized from white noise by imposing statistical features such as marginals and correlations computed from the outputs of cochlear models responding to the textures. These outputs are the envelopes of bandpass filter responses, the 'cochlear envelope'. This suggests that the perceptual qualities of many natural sounds derive directly from such statistical features, and raises the question of how these statistical features are distributed in the acoustic environment. To address this question, we collected a corpus of 200 sound textures from public online sources and analyzed the distributions of the textures' marginal statistics (mean, variance, skew, and kurtosis), cross-frequency correlations and modulation power statistics. A principal component analysis of these parameters revealed a great deal of redundancy in the texture parameters. For example, just two marginal principal components, which can be thought of as measuring the sparseness or burstiness of a texture, capture as much as 64% of the variance of the 128 dimensional marginal parameter space, while the first two principal components of cochlear correlations capture as much as 88% of the variance in the 496 correlation parameters. Knowledge of the statistical distributions documented here may help guide the choice of acoustic stimuli with high ecological validity in future research.
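The variance-explained analysis can be sketched with PCA via SVD on synthetic data; the low-rank structure below is a hypothetical stand-in for the reported redundancy in the texture parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n_textures, n_params = 200, 128

# Hypothetical marginal-statistics matrix (textures x parameters) with
# strong two-dimensional latent structure, mimicking the reported redundancy.
latent = rng.normal(0.0, 1.0, (n_textures, 2))
loadings = rng.normal(0.0, 1.0, (2, n_params))
X = latent @ loadings + rng.normal(0.0, 0.3, (n_textures, n_params))

# PCA via SVD of the centered data; fraction of variance per component.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
print(f"first two PCs explain {100 * var_explained[:2].sum():.1f}% of variance")
```

With genuinely low-rank data the first few singular values dominate; the 64% and 88% figures in the abstract are this same quantity computed on the real marginal and correlation parameters.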


Assuntos
Percepção Auditiva/fisiologia , Som , Estimulação Acústica/métodos , Acústica , Cóclea/fisiologia , Bases de Dados Factuais , Humanos , Modelos Estatísticos , Ruído , Análise de Componente Principal/métodos , Psicoacústica
10.
Front Neurosci ; 15: 610978, 2021.
Article in English | MEDLINE | ID: mdl-33790730

ABSTRACT

Learning of new auditory stimuli often requires repetitive exposure to the stimulus. Fast and implicit learning of sounds presented at random times enables efficient auditory perception. However, it is unclear how such sensory encoding is processed at the neural level. We investigated neural responses that develop from passive, repetitive exposure to a specific sound in the auditory cortex of anesthetized rats, using electrocorticography. We presented a series of random sequences that were generated afresh each time, except for a specific reference sequence that remained constant and re-appeared at random times across trials. We compared induced activity amplitudes between reference and fresh sequences. Neural responses from both primary and non-primary auditory cortical regions showed significantly decreased induced activity amplitudes for reference sequences compared to fresh sequences, especially in the beta band. This is the first study showing that neural correlates of auditory pattern learning can be evoked even in anesthetized, passively listening animal models.

11.
Front Hum Neurosci ; 15: 613903, 2021.
Article in English | MEDLINE | ID: mdl-33597853

ABSTRACT

Mismatch negativity (MMN) is the electroencephalographic (EEG) waveform obtained by subtracting event-related potential (ERP) responses evoked by unexpected deviant stimuli from responses evoked by expected standard stimuli. While the MMN is thought to reflect an unexpected change in an ongoing, predictable stimulus, it is unknown whether MMN responses evoked by changes in different stimulus features have different magnitudes, latencies, and topographies. The present study aimed to investigate whether MMN responses differ depending on whether the sudden stimulus change occurs in pitch, duration, location, or vowel identity. To calculate ERPs to standard and deviant stimuli, EEG signals were recorded in normal-hearing participants (N = 20; 13 males, 7 females) who listened to roving oddball sequences of artificial syllables. In the roving paradigm, any given stimulus is repeated several times to form a standard, and then suddenly replaced with a deviant stimulus which differs from the standard. Here, deviants differed from preceding standards along one of four features (pitch, duration, vowel or interaural level difference). The feature levels were individually chosen to match behavioral discrimination performance. We identified neural activity evoked by unexpected violations along all four acoustic dimensions. Evoked responses to deviant stimuli increased in amplitude relative to the responses to standard stimuli. A univariate (channel-by-channel) analysis yielded no significant differences between MMN responses following violations of different features. However, in a multivariate analysis (pooling information from multiple EEG channels), acoustic features could be decoded from the topography of mismatch responses, although at later latencies than those typical for MMN. These results support the notion that deviant feature detection may be subserved by a different process than general mismatch detection.

12.
Hear Res ; 399: 107894, 2021 01.
Article in English | MEDLINE | ID: mdl-31987647

ABSTRACT

Predictive coding is an influential theory of neural processing underlying perceptual inference. However, it is unknown to what extent prediction violations of different sensory features are mediated in different regions in auditory cortex, with different dynamics, and by different mechanisms. This study investigates the neural responses to synthesized acoustic syllables, which could be expected or unexpected, along several features. By using electrocorticography (ECoG) in rat auditory cortex (subjects: adult female Wistar rats with normal hearing), we aimed at mapping regional differences in mismatch responses to different stimulus features. Continuous streams of morphed syllables formed roving oddball sequences in which each stimulus was repeated several times (thereby forming a standard) and subsequently replaced with a deviant stimulus which differed from the standard along one of several acoustic features: duration, pitch, interaural level differences (ILD), or consonant identity. Each of these features could assume one of several different levels, and the resulting change from standard to deviant could be larger or smaller. The deviant stimuli were then repeated to form new standards. We analyzed responses to the first repetition of a new stimulus (deviant) and its last repetition in a stimulus train (standard). For the ECoG recording, we implanted urethane-anaesthetized rats with 8 × 8 surface electrode arrays covering a 3 × 3 mm cortical patch encompassing primary and higher-order auditory cortex. We identified the response topographies and latencies of population activity evoked by acoustic stimuli in the rat auditory regions, and mapped their sensitivity to expectation violations along different acoustic features. For all features, the responses to deviant stimuli increased in amplitude relative to responses to standard stimuli. Deviance magnitude did not further modulate these mismatch responses. 
Mismatch responses to different feature violations showed a heterogeneous distribution across cortical areas, with no evidence for systematic topographic gradients for any of the tested features. However, within rats, the spatial distribution of mismatch responses varied more between features than the spatial distribution of tone-evoked responses. This result supports the notion that prediction error signaling along different stimulus features is subserved by different cortical populations, albeit with substantial heterogeneity across individuals.


Assuntos
Acústica , Potenciais Evocados Auditivos , Estimulação Acústica , Animais , Córtex Auditivo , Eletroencefalografia , Feminino , Ratos , Ratos Wistar
13.
Front Neurosci ; 14: 709, 2020.
Article in English | MEDLINE | ID: mdl-32765212

ABSTRACT

Neural implants that deliver multi-site electrical stimulation to the nervous system are no longer the last resort but routine treatment options for various neurological disorders. Multi-site electrical stimulation is also widely used to study nervous system function and neural circuit transformations. These technologies increasingly demand dynamic electrical stimulation and closed-loop feedback control for real-time assessment of neural function, which is technically challenging since stimulus-evoked artifacts overwhelm the small neural signals of interest. We report a novel and versatile artifact removal method that can be applied in a variety of settings, from single- to multi-site stimulation and recording and for current waveforms of arbitrary shape and size. The method capitalizes on linear electrical coupling between stimulating currents and recording artifacts, which allows us to estimate a multi-channel linear Wiener filter to predict and subsequently remove artifacts via subtraction. We confirm and verify the linearity assumption and demonstrate feasibility in a variety of recording modalities, including in vitro sciatic nerve stimulation, bilateral cochlear implant stimulation, and multi-channel stimulation and recording between the auditory midbrain and cortex. We demonstrate a vast enhancement in the recording quality with a typical artifact reduction of 25-40 dB. The method is efficient and can be scaled to an arbitrary number of stimulus and recording sites, making it ideal for applications in large-scale arrays, closed-loop implants, and high-resolution multi-channel brain-machine interfaces.
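The artifact-removal idea, estimating a linear coupling filter from the stimulating current to the recording and subtracting its prediction, can be sketched for a single channel as follows (the signal shapes and coupling kernel are hypothetical; the paper's method generalizes this to multi-channel Wiener filters):

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, filt_len = 20000, 32

# Hypothetical recording: a small neural signal plus a stimulation
# artifact that is a linear (convolutional) function of the current.
current = (rng.random(n_samples) < 0.01).astype(float)     # pulse train
kernel = 50.0 * np.exp(-np.arange(filt_len) / 4.0)         # coupling filter
artifact = np.convolve(current, kernel)[:n_samples]
neural = rng.normal(0.0, 1.0, n_samples)
recording = neural + artifact

# Estimate the coupling filter by least squares on lagged copies of the
# stimulating current (a single-channel Wiener filter), then subtract
# the predicted artifact from the recording.
lags = np.zeros((n_samples, filt_len))
for k in range(filt_len):
    lags[k:, k] = current[:n_samples - k]
w_hat, *_ = np.linalg.lstsq(lags, recording, rcond=None)
residual = artifact - lags @ w_hat
reduction_db = 10 * np.log10(np.mean(artifact**2) / np.mean(residual**2))
print(f"artifact reduction: {reduction_db:.1f} dB")
```

Because the coupling is assumed linear, the least-squares fit recovers the kernel almost exactly here; real recordings add nonlinearity and noise, which is where the 25-40 dB figure reported above comes from.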

14.
Neuropsychologia ; 144: 107498, 2020 07.
Article in English | MEDLINE | ID: mdl-32442445

ABSTRACT

Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.


Assuntos
Atenção/fisiologia , Percepção Auditiva/fisiologia , Potenciais Evocados Auditivos/fisiologia , Som , Córtex Visual/fisiologia , Estimulação Acústica , Acústica , Adulto , Viés de Atenção , Eletroencefalografia , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Adulto Jovem
15.
R Soc Open Sci ; 7(3): 191194, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32269783

ABSTRACT

Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, the question of how low-level auditory processing must necessarily shape these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This 'neural emphasis' distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the 'bottom-up' processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.
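The on-beat versus off-beat firing-rate comparison can be illustrated with simulated spike times (the tempo, rates, and jitter below are hypothetical, not the study's recordings):

```python
import numpy as np

rng = np.random.default_rng(7)
tempo_s, duration_s, bin_s = 0.5, 30.0, 0.05   # 120 BPM excerpt, 50 ms bins

# Hypothetical spike times: baseline Poisson-like firing plus extra
# spikes clustered near each beat ('neural emphasis').
beats = np.arange(0.0, duration_s, tempo_s)
baseline = rng.uniform(0.0, duration_s, 600)
on_beat_extra = rng.choice(beats, 300) + rng.normal(0.0, 0.02, 300)
spikes = np.concatenate([baseline, on_beat_extra])

# Mean firing rate inside on-beat bins vs everywhere else.
phase = spikes % tempo_s
on = np.sum((phase < bin_s / 2) | (phase > tempo_s - bin_s / 2))
off = len(spikes) - on
on_rate = on / (len(beats) * bin_s)
off_rate = off / (duration_s - len(beats) * bin_s)
print(f"on-beat rate: {on_rate:.1f} Hz, off-beat rate: {off_rate:.1f} Hz")
```

A consistently higher on-beat rate is the simulated analogue of the 'neural emphasis' that, per the abstract, predicted which beat interpretation human listeners tapped.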

16.
J Neurophysiol ; 123(4): 1536-1551, 2020 04 01.
Article in English | MEDLINE | ID: mdl-32186432

ABSTRACT

Contrast gain control is the systematic adjustment of neuronal gain in response to the contrast of sensory input. It is widely observed in sensory cortical areas and has been proposed to be a canonical neuronal computation. Here, we investigated whether shunting inhibition from parvalbumin-positive interneurons-a mechanism involved in gain control in visual cortex-also underlies contrast gain control in auditory cortex. First, we performed extracellular recordings in the auditory cortex of anesthetized male mice and optogenetically manipulated the activity of parvalbumin-positive interneurons while varying the contrast of the sensory input. We found that both activation and suppression of parvalbumin interneuron activity altered the overall gain of cortical neurons. However, despite these changes in overall gain, we found that manipulating parvalbumin interneuron activity did not alter the strength of contrast gain control in auditory cortex. Furthermore, parvalbumin-positive interneurons did not show increases in activity in response to high-contrast stimulation, which would be expected if they drive contrast gain control. Finally, we performed in vivo whole-cell recordings in auditory cortical neurons during high- and low-contrast stimulation and found that no increase in membrane conductance was observed during high-contrast stimulation. Taken together, these findings indicate that while parvalbumin-positive interneuron activity modulates the overall gain of auditory cortical responses, other mechanisms are primarily responsible for contrast gain control in this cortical area. NEW & NOTEWORTHY: We investigated whether contrast gain control is mediated by shunting inhibition from parvalbumin-positive interneurons in auditory cortex. We performed extracellular and intracellular recordings in mouse auditory cortex while presenting sensory stimuli with varying contrasts and manipulated parvalbumin-positive interneuron activity using optogenetics. We show that while parvalbumin-positive interneuron activity modulates the gain of cortical responses, this activity is not the primary mechanism for contrast gain control in auditory cortex.


Assuntos
Córtex Auditivo/fisiologia , Interneurônios/fisiologia , Inibição Neural/fisiologia , Parvalbuminas , Animais , Masculino , Camundongos , Optogenética , Parvalbuminas/metabolismo , Técnicas de Patch-Clamp
17.
Front Neurosci ; 13: 1164, 2019.
Article in English | MEDLINE | ID: mdl-31802997

ABSTRACT

Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using virtual acoustic space stimuli, we measured changes in subjects' sound localization biases and binaural localization cue weights after ∼50 min of training on audiovisual tasks in which visual stimuli were either informative or not about the location of broadband sounds. Four different spatial configurations were used in which we varied the relative reliability of the binaural cues: interaural time differences (ITDs) and frequency-dependent interaural level differences (ILDs). In most subjects and experiments, ILDs were weighted more highly than ITDs before training. When visual cues were spatially uninformative, some subjects showed a reduction in auditory localization bias and the relative weighting of ILDs increased after training with congruent binaural cues. ILDs were also upweighted if they were paired with spatially-congruent visual cues, and the largest group-level improvements in sound localization accuracy occurred when both binaural cues were matched to visual stimuli. These data suggest that binaural cue reweighting reflects baseline differences in the relative weights of ILDs and ITDs, but is also shaped by the availability of congruent visual stimuli. 
Training subjects with consistently misaligned binaural and visual cues produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the inter-subject variability in sound localization judgments or their binaural cue weights. Our results show that the relative weighting of different auditory localization cues can be changed by training in ways that depend on their reliability as well as the availability of visual spatial information, with the largest improvements in sound localization likely to result from training with fully congruent audiovisual information.

18.
J Acoust Soc Am ; 145(5): EL341, 2019 05.
Article in English | MEDLINE | ID: mdl-31153346

ABSTRACT

Currently, there is controversy around whether rats can use interaural time differences (ITDs) to localize sound. Here, naturalistic pulse train stimuli were used to evaluate the rat's sensitivity to onset and ongoing ITDs using a two-alternative forced choice sound lateralization task. Pulse rates between 50 Hz and 4.8 kHz with rectangular or Hanning windows were delivered with ITDs between ±175 µs over a near-field acoustic setup. Similar to other mammals, rats performed with 75% accuracy at ∼50 µs ITD, demonstrating that rats are highly sensitive to envelope ITDs.
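The 75%-correct threshold estimate can be sketched by simulating a lateralization psychometric function and interpolating the 75% point (the observer model and ITD levels are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(5)
itd_levels = np.array([10.0, 25.0, 50.0, 100.0, 175.0])  # |ITD|, microseconds
n_trials = 2000

# Hypothetical observer: probability of a correct left/right judgment
# grows logistically with |ITD|; sigma places 75% correct at 50 us.
sigma = 50.0 / np.log(3.0)
p_correct = 1.0 / (1.0 + np.exp(-itd_levels / sigma))
prop = rng.binomial(n_trials, p_correct) / n_trials

# Threshold = ITD at 75% correct, by linear interpolation of the
# measured psychometric function.
threshold = np.interp(0.75, prop, itd_levels)
print(f"estimated 75%-correct ITD threshold: {threshold:.0f} microseconds")
```

With enough trials per level the interpolated threshold lands close to the generating 50 us value, which is the kind of estimate the ~50 us figure in the abstract summarizes.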


Assuntos
Vias Auditivas/fisiologia , Tempo de Reação , Localização de Som/fisiologia , Som , Estimulação Acústica , Animais , Comportamento Animal/fisiologia , Feminino , Ratos Wistar
19.
Hear Res ; 374: 58-68, 2019 03 15.
Article in English | MEDLINE | ID: mdl-30732921

ABSTRACT

Faster speech may facilitate more efficient communication, but if speech is too fast it becomes unintelligible. The maximum speeds at which Mandarin words were intelligible in a sentence context were quantified for normal hearing (NH) and cochlear implant (CI) listeners by measuring time-compression thresholds (TCTs) in an adaptive staircase procedure. In Experiment 1, both original and CI-vocoded time-compressed speech from the MSP (Mandarin speech perception) and MHINT (Mandarin hearing in noise test) corpora was presented to 10 NH subjects over headphones. In Experiment 2, original time-compressed speech was presented to 10 CI subjects and another 10 NH subjects through a loudspeaker in a soundproof room. Sentences were time-compressed without changing their spectral profile, and were presented up to three times within a single trial. At the end of each trial, the number of correctly identified words in the sentence was scored. A 50%-word recognition threshold was tracked in the psychophysical procedure. The observed median TCTs were very similar for MSP and MHINT speech. For NH listeners, median TCTs were around 16.7 syllables/s for normal speech, and 11.8 and 8.6 syllables/s respectively for 8 and 4 channel tone-carrier vocoded speech. For CI listeners, TCTs were only around 6.8 syllables/s. The interquartile range of the TCTs within each cohort was smaller than 3.0 syllables/s. Speech reception thresholds in noise were also measured in Experiment 2, and were found to be strongly correlated with TCTs for CI listeners. In conclusion, Mandarin sentence TCTs were around 16.7 syllables/s for most NH subjects, but rarely faster than 10.0 syllables/s for CI listeners, quantitatively illustrating the upper limits of fast-speech information processing with CIs.
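The adaptive staircase idea can be sketched with a simulated listener: a 1-up/1-down rule on the compression rate that speeds up after trials with at least half the words correct and slows down otherwise, converging near the 50% point (the listener model and parameters are hypothetical, not the study's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
true_tct = 16.7        # syllables/s at which 50% of words are recognized
slope = 0.4            # steepness of the hypothetical psychometric function

def p_word_correct(rate: float) -> float:
    """Hypothetical probability of recognizing a word at a given rate."""
    return 1.0 / (1.0 + np.exp(slope * (rate - true_tct)))

# 1-up/1-down staircase: speed up after a trial with >= 50% of words
# correct, slow down otherwise; shrink the step at each reversal.
rate, step, n_words = 10.0, 1.0, 10
reversals, last_dir = [], 0
for _ in range(200):
    correct = rng.binomial(n_words, p_word_correct(rate))
    direction = 1 if correct >= n_words / 2 else -1
    if last_dir and direction != last_dir:
        reversals.append(rate)
        step = max(step * 0.8, 0.2)
    last_dir = direction
    rate = max(rate + direction * step, 1.0)

tct_estimate = np.mean(reversals[-8:])
print(f"estimated time-compression threshold: {tct_estimate:.1f} syllables/s")
```

Averaging the last few reversal points gives a stable estimate near the generating threshold; real procedures typically also fix the number of reversals and discard the first few.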


Assuntos
Limiar Auditivo/fisiologia , Implantes Cocleares , Idioma , Inteligibilidade da Fala/fisiologia , Estimulação Acústica , Adulto , Algoritmos , Criança , Implantes Cocleares/estatística & dados numéricos , Feminino , Voluntários Saudáveis , Humanos , Masculino , Psicoacústica , Processamento de Sinais Assistido por Computador , Acústica da Fala , Percepção da Fala/fisiologia , Fatores de Tempo , Adulto Jovem