Results 1 - 20 of 74
1.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38267259

ABSTRACT

Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grain spectral and temporal statistical features.
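To make the variance-partitioning analysis above concrete, here is a minimal sketch that splits a unit's response variance into a between-texture-type component and a between-exemplar (within-type) component; the response array, its dimensions, and the random data are hypothetical placeholders for recorded firing rates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: mean firing rate of one unit for
# 5 texture types x 4 exemplars x 10 trials.
rates = rng.normal(loc=10.0, scale=2.0, size=(5, 4, 10))

grand_mean = rates.mean()
ss_total = ((rates - grand_mean) ** 2).sum()

# Between-texture-type sum of squares ("texture type tuning" if this dominates).
type_means = rates.mean(axis=(1, 2))
ss_type = rates.shape[1] * rates.shape[2] * ((type_means - grand_mean) ** 2).sum()

# Between-exemplar sum of squares, nested within texture type.
exemplar_means = rates.mean(axis=2)
ss_exemplar = rates.shape[2] * ((exemplar_means - type_means[:, None]) ** 2).sum()

print(f"variance fraction, texture type: {ss_type / ss_total:.2f}")
print(f"variance fraction, exemplar    : {ss_exemplar / ss_total:.2f}")
```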


Subject(s)
Auditory Cortex , Inferior Colliculi , Female , Rats , Animals , Auditory Pathways/physiology , Inferior Colliculi/physiology , Mesencephalon/physiology , Sound , Auditory Cortex/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology
2.
J Neurosci ; 43(25): 4697-4708, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37221094

ABSTRACT

Previous work has demonstrated that performance in an auditory selective attention task can be enhanced or impaired, depending on whether a task-irrelevant visual stimulus is temporally coherent with a target auditory stream or with a competing distractor. However, it remains unclear how audiovisual (AV) temporal coherence and auditory selective attention interact at the neurophysiological level. Here, we measured neural activity using EEG while human participants (men and women) performed an auditory selective attention task, detecting deviants in a target audio stream. The amplitude envelope of the two competing auditory streams changed independently, while the radius of a visual disk was manipulated to control the AV coherence. Analysis of the neural responses to the sound envelope demonstrated that auditory responses were enhanced largely independently of the attentional condition: both target and masker stream responses were enhanced when temporally coherent with the visual stimulus. In contrast, attention enhanced the event-related response evoked by the transient deviants, largely independently of AV coherence. These results provide evidence for dissociable neural signatures of bottom-up (coherence) and top-down (attention) effects in AV object formation. SIGNIFICANCE STATEMENT: Temporal coherence between auditory stimuli and task-irrelevant visual stimuli can enhance behavioral performance in auditory selective attention tasks. However, how audiovisual temporal coherence and attention interact at the neural level has not been established. Here, we measured EEG during a behavioral task designed to independently manipulate audiovisual coherence and auditory selective attention. While some auditory features (sound envelope) could be coherent with visual stimuli, other features (timbre) were independent of visual stimuli. We find that audiovisual integration can be observed independently of attention for sound envelopes temporally coherent with visual stimuli, while the neural responses to unexpected timbre changes are most strongly modulated by attention. Our results provide evidence for dissociable neural mechanisms of bottom-up (coherence) and top-down (attention) effects on audiovisual object formation.


Subject(s)
Auditory Perception , Evoked Potentials , Male , Humans , Female , Evoked Potentials/physiology , Auditory Perception/physiology , Attention/physiology , Sound , Acoustic Stimulation , Visual Perception/physiology , Photic Stimulation
3.
BMC Biol ; 20(1): 48, 2022 02 16.
Article in English | MEDLINE | ID: mdl-35172815

ABSTRACT

BACKGROUND: To localize sound sources accurately in a reverberant environment, human binaural hearing strongly favors analyzing the initial wave front of sounds. Behavioral studies of this "precedence effect" have so far largely been confined to human subjects, limiting the scope of complementary physiological approaches. Similarly, physiological studies have mostly looked at neural responses in the inferior colliculus, the main relay point between the inner ear and the auditory cortex, or used modeling of cochlear auditory transduction in an attempt to identify likely underlying mechanisms. Studies capable of providing a direct comparison of neural coding and behavioral measures of sound localization under the precedence effect are lacking. RESULTS: We adapted a "temporal weighting function" paradigm previously developed to quantify the precedence effect in humans for use in laboratory rats. The animals learned to lateralize click trains in which each click in the train had a different interaural time difference. Computing the "perceptual weight" of each click in the train revealed a strong onset bias, very similar to that reported for humans. Follow-on electrocorticographic recording experiments revealed that onset weighting of interaural time differences is a robust feature of the cortical population response, but interestingly, it often fails to manifest at individual cortical recording sites. CONCLUSION: While previous studies suggested that the precedence effect may be caused by early processing mechanisms in the cochlea or inhibitory circuitry in the brainstem and midbrain, our results indicate that the precedence effect is not fully developed at the level of individual recording sites in the auditory cortex; robust and consistent precedence effects are observable only at the level of cortical population responses. This indicates that the precedence effect emerges at later cortical processing stages and is a significantly "higher order" feature than has hitherto been assumed.


Subject(s)
Auditory Cortex , Inferior Colliculi , Sound Localization , Acoustic Stimulation/methods , Animals , Auditory Cortex/physiology , Hearing , Humans , Inferior Colliculi/physiology , Sound Localization/physiology
4.
J Neurosci ; 39(49): 9806-9817, 2019 12 04.
Article in English | MEDLINE | ID: mdl-31662425

ABSTRACT

Temporal orienting improves sensory processing, akin to other top-down biases. However, it is unknown whether these improvements reflect increased neural gain to any stimuli presented at expected time points, or specific tuning to task-relevant stimulus aspects. Furthermore, while other top-down biases are selective, the extent of trade-offs across time is less well characterized. Here, we tested whether gain and/or tuning of auditory frequency processing in humans is modulated by rhythmic temporal expectations, and whether these modulations are specific to time points relevant for task performance. Healthy participants (N = 23) of either sex performed an auditory discrimination task while their brain activity was measured using magnetoencephalography/electroencephalography (M/EEG). Acoustic stimulation consisted of sequences of brief distractors interspersed with targets, presented in a rhythmic or jittered way. Target rhythmicity not only improved behavioral discrimination accuracy and M/EEG-based decoding of targets, but also of irrelevant distractors preceding these targets. To explain this finding in terms of increased sensitivity and/or sharpened tuning to auditory frequency, we estimated tuning curves based on M/EEG decoding results, with separate parameters describing gain and sharpness. The effect of rhythmic expectation on distractor decoding was linked to gain increase only, suggesting increased neural sensitivity to any stimuli presented at relevant time points. SIGNIFICANCE STATEMENT: Being able to predict when an event may happen can improve perception and action related to this event, likely due to the alignment of neural activity to the temporal structure of stimulus streams. However, it is unclear whether rhythmic increases in neural sensitivity are specific to task-relevant targets, and whether they competitively impair stimulus processing at unexpected time points. By combining magnetoencephalographic and electroencephalographic recordings, neural decoding of auditory stimulus features, and modeling, we found that rhythmic expectation improved neural decoding of both relevant targets and irrelevant distractors presented at expected time points, but did not competitively impair stimulus processing at unexpected time points. A quantitative model linked these results to nonspecific neural gain increases due to rhythmic expectation.
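A minimal sketch of how a tuning curve with separate gain and sharpness parameters could be fitted to decoding accuracy as a function of frequency distance; the Gaussian parameterisation, the `tuning_curve` helper, and the example accuracies are assumptions for illustration, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def tuning_curve(x, gain, width, baseline):
    """Gaussian tuning profile: 'gain' scales height, 'width' sets sharpness."""
    return baseline + gain * np.exp(-0.5 * (x / width) ** 2)

# Hypothetical decoding accuracy vs. frequency distance (octaves) from the
# presented stimulus, e.g. averaged over decoding folds.
distance = np.linspace(-2.0, 2.0, 9)
accuracy = np.array([0.26, 0.28, 0.35, 0.55, 0.72, 0.54, 0.36, 0.29, 0.25])

(gain, width, baseline), _ = curve_fit(tuning_curve, distance, accuracy,
                                       p0=[0.4, 0.5, 0.25])
print(f"gain={gain:.2f}, width={width:.2f} (smaller = sharper), baseline={baseline:.2f}")
```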


Subject(s)
Anticipation, Psychological/physiology , Pitch Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Cortex/physiology , Auditory Perception/physiology , Discrimination, Psychological/physiology , Electroencephalography , Female , Healthy Volunteers , Humans , Magnetoencephalography , Male , Psychomotor Performance/physiology , Young Adult
5.
J Neurophysiol ; 123(4): 1536-1551, 2020 04 01.
Article in English | MEDLINE | ID: mdl-32186432

ABSTRACT

Contrast gain control is the systematic adjustment of neuronal gain in response to the contrast of sensory input. It is widely observed in sensory cortical areas and has been proposed to be a canonical neuronal computation. Here, we investigated whether shunting inhibition from parvalbumin-positive interneurons, a mechanism involved in gain control in visual cortex, also underlies contrast gain control in auditory cortex. First, we performed extracellular recordings in the auditory cortex of anesthetized male mice and optogenetically manipulated the activity of parvalbumin-positive interneurons while varying the contrast of the sensory input. We found that both activation and suppression of parvalbumin interneuron activity altered the overall gain of cortical neurons. However, despite these changes in overall gain, we found that manipulating parvalbumin interneuron activity did not alter the strength of contrast gain control in auditory cortex. Furthermore, parvalbumin-positive interneurons did not show increases in activity in response to high-contrast stimulation, which would be expected if they drive contrast gain control. Finally, we performed in vivo whole-cell recordings in auditory cortical neurons during high- and low-contrast stimulation and found no increase in membrane conductance during high-contrast stimulation. Taken together, these findings indicate that while parvalbumin-positive interneuron activity modulates the overall gain of auditory cortical responses, other mechanisms are primarily responsible for contrast gain control in this cortical area. NEW & NOTEWORTHY We investigated whether contrast gain control is mediated by shunting inhibition from parvalbumin-positive interneurons in auditory cortex. We performed extracellular and intracellular recordings in mouse auditory cortex while presenting sensory stimuli with varying contrasts and manipulated parvalbumin-positive interneuron activity using optogenetics. We show that while parvalbumin-positive interneuron activity modulates the gain of cortical responses, this activity is not the primary mechanism for contrast gain control in auditory cortex.
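To illustrate how contrast gain control itself is typically quantified, here is a minimal sketch that fits a sigmoid rate-level function at low and high stimulus contrast and compares the fitted gains; the logistic form, the `rate_level` helper, and the synthetic rates are assumptions, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def rate_level(x, r_max, x_half, spread, r_min):
    """Sigmoid firing rate vs. sound level; maximum slope (gain) = r_max / (4 * spread)."""
    return r_min + r_max / (1.0 + np.exp(-(x - x_half) / spread))

levels = np.linspace(-10, 10, 11)            # dB re. mean level (hypothetical)
rng = np.random.default_rng(1)

# Synthetic rates: steeper level-response growth at low contrast,
# shallower growth (lower gain) at high contrast.
responses = {
    "low contrast": rate_level(levels, 40, 0, 2.0, 5) + rng.normal(0, 1, levels.size),
    "high contrast": rate_level(levels, 40, 0, 5.0, 5) + rng.normal(0, 1, levels.size),
}

gains = {}
for condition, rates in responses.items():
    (r_max, _, spread, _), _ = curve_fit(rate_level, levels, rates,
                                         p0=[40, 0, 3.0, 5], maxfev=10000)
    gains[condition] = r_max / (4 * spread)  # maximum slope of the fitted sigmoid

print(f"gain ratio low/high contrast: {gains['low contrast'] / gains['high contrast']:.2f}")
```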


Subject(s)
Auditory Cortex/physiology , Interneurons/physiology , Neural Inhibition/physiology , Parvalbumins , Animals , Male , Mice , Optogenetics , Parvalbumins/metabolism , Patch-Clamp Techniques
6.
J Acoust Soc Am ; 145(5): EL341, 2019 05.
Article in English | MEDLINE | ID: mdl-31153346

ABSTRACT

Currently, there is controversy around whether rats can use interaural time differences (ITDs) to localize sound. Here, naturalistic pulse train stimuli were used to evaluate the rat's sensitivity to onset and ongoing ITDs using a two-alternative forced choice sound lateralization task. Pulse rates between 50 Hz and 4.8 kHz with rectangular or Hanning windows were delivered with ITDs between ±175 µs over a near-field acoustic setup. Similar to other mammals, rats performed with 75% accuracy at ∼50 µs ITD, demonstrating that rats are highly sensitive to envelope ITDs.
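A minimal sketch of estimating the ~75%-correct ITD threshold from two-alternative forced-choice lateralization data by fitting a one-parameter psychometric function; the `pc` function shape and the trial proportions are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def pc(itd_us, threshold_us):
    """2AFC percent correct vs. |ITD|: 0.5 at zero ITD, 0.75 at `threshold_us`."""
    return 0.5 + 0.5 * (1.0 - np.exp(-np.log(2.0) * itd_us / threshold_us))

# Hypothetical proportion-correct data at a few |ITD| values (microseconds).
itds = np.array([10.0, 25.0, 50.0, 100.0, 175.0])
prop_correct = np.array([0.55, 0.66, 0.76, 0.90, 0.97])

(threshold_us,), _ = curve_fit(pc, itds, prop_correct, p0=[50.0])
print(f"estimated 75%-correct ITD threshold: {threshold_us:.0f} microseconds")
```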


Subject(s)
Auditory Pathways/physiology , Reaction Time , Sound Localization/physiology , Sound , Acoustic Stimulation , Animals , Behavior, Animal/physiology , Female , Rats, Wistar
7.
Neuroimage ; 176: 29-40, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29678759

ABSTRACT

Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (e.g., who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to their pitch, speaker identity, uttered syllable ('what' dimensions) or their location ('where'). Sound acoustics were held constant; the only manipulation was the task-relevant dimension that participants had to attend to, which varied across blocks. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography; the latter necessarily follow from changes in the configuration of underlying sources. There were no behavioural differences in discrimination of sounds across the 4 feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials, Auditory , Female , Humans , Male , Middle Aged , Sound Localization/physiology , Sound Spectrography , Young Adult
8.
J Neurophysiol ; 120(4): 1872-1884, 2018 10 01.
Article in English | MEDLINE | ID: mdl-30044164

ABSTRACT

The neocortex is thought to employ a number of canonical computations, but little is known about whether these computations rely on shared mechanisms across different neural populations. In recent years, the mouse has emerged as a powerful model organism for the dissection of the circuits and mechanisms underlying various aspects of neural processing and therefore provides an important avenue for research into putative canonical computations. One such computation is contrast gain control, the systematic adjustment of neural gain in accordance with the contrast of sensory input, which helps to construct neural representations that are robust to the presence of background stimuli. Here, we characterized contrast gain control in the mouse auditory cortex. We performed laminar extracellular recordings in the auditory cortex of the anesthetized mouse while varying the contrast of the sensory input. We observed that an increase in stimulus contrast resulted in a compensatory reduction in the gain of neural responses, leading to representations in the mouse auditory cortex that are largely contrast invariant. Contrast gain control was present in all cortical layers but was found to be strongest in deep layers, indicating that intracortical mechanisms may contribute to these gain changes. These results lay a foundation for investigations into the mechanisms underlying contrast adaptation in the mouse auditory cortex. NEW & NOTEWORTHY We investigated whether contrast gain control, the systematic reduction in neural gain in response to an increase in sensory contrast, exists in the mouse auditory cortex. We performed extracellular recordings in the mouse auditory cortex while presenting sensory stimuli with varying contrasts and found this form of processing was widespread. This finding provides evidence that contrast gain control may represent a canonical cortical computation and lays a foundation for investigations into the underlying mechanisms.


Subject(s)
Auditory Cortex/physiology , Auditory Perception , Animals , Auditory Cortex/cytology , Evoked Potentials, Auditory , Extracellular Space/physiology , Mice , Mice, Inbred C57BL , Neurons/physiology
9.
J Neurosci ; 36(2): 280-9, 2016 Jan 13.
Article in English | MEDLINE | ID: mdl-26758822

ABSTRACT

Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear-nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT: An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too.
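A minimal sketch of the preprocessing stage described above: each frequency channel of the spectrogram is high-pass filtered with its own time constant (implemented here as subtraction of an exponential running mean) and then half-wave rectified, and the result would feed a standard LN model. The channel count, time constants, and step stimulus are placeholders, not the fitted values from the paper.

```python
import numpy as np

def ic_adaptation_stage(spectrogram, taus_s, dt=0.005):
    """High-pass each frequency channel of a (n_freq, n_time) spectrogram with its
    own time constant (subtract an exponential running mean), then half-wave rectify."""
    n_freq, n_time = spectrogram.shape
    alpha = dt / np.asarray(taus_s, dtype=float)       # per-channel update rate
    running_mean = spectrogram[:, 0].astype(float).copy()
    out = np.zeros((n_freq, n_time))
    for t in range(n_time):
        out[:, t] = spectrogram[:, t] - running_mean   # high-pass component
        running_mean += alpha * (spectrogram[:, t] - running_mean)
    return np.maximum(out, 0.0)                        # half-wave rectification

# Hypothetical 32-channel spectrogram, 2 s at 5 ms resolution, with a step
# increase in mean level halfway through; the output adapts to the new mean.
rng = np.random.default_rng(0)
spec = rng.normal(0.0, 1.0, size=(32, 400))
spec[:, 200:] += 10.0
taus = np.linspace(0.1, 0.4, 32)   # assumed frequency-dependent time constants (s)
adapted = ic_adaptation_stage(spec, taus)
print(f"just after step: {adapted[:, 205].mean():.2f}, long after step: {adapted[:, 395].mean():.2f}")
```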


Subject(s)
Adaptation, Physiological/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Mesencephalon/physiology , Sound , Acoustic Stimulation , Animals , Auditory Pathways/physiology , Female , Ferrets , Linear Models , Male , Models, Neurological , Sound Spectrography
10.
Proc Biol Sci ; 284(1866)2017 Nov 15.
Article in English | MEDLINE | ID: mdl-29118141

ABSTRACT

The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the set of beat interpretations most commonly perceived by human listeners over others. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat.
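A minimal sketch of the kind of basic firing-rate adaptation invoked above: each sound event is divisively suppressed by an adaptation variable that builds up with responses and decays during gaps, so events after longer gaps respond more strongly. The time constant, adaptation strength, and rhythm are illustrative assumptions.

```python
import numpy as np

def adapted_responses(event_times_s, tau=0.3, strength=0.8):
    """Response amplitude per sound event under simple divisive firing-rate adaptation."""
    responses, adaptation, last_t = [], 0.0, None
    for t in event_times_s:
        if last_t is not None:
            adaptation *= np.exp(-(t - last_t) / tau)  # adaptation decays between events
        r = 1.0 / (1.0 + adaptation)                   # divisive suppression of the response
        responses.append(r)
        adaptation += strength * r                     # each response adds to adaptation
        last_t = t
    return np.array(responses)

# A toy rhythm (onset times in seconds): events following longer gaps have
# recovered more from adaptation and therefore respond more strongly.
rhythm = [0.0, 0.2, 0.4, 1.0, 1.2, 1.4, 2.0]
print(np.round(adapted_responses(rhythm), 2))
```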


Subject(s)
Auditory Perception/physiology , Gerbillinae/physiology , Inferior Colliculi/physiology , Music , Psychomotor Performance , Acoustic Stimulation , Adult , Animals , Female , Humans , Male , Middle Aged , Young Adult
11.
PLoS Comput Biol ; 12(11): e1005113, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27835647

ABSTRACT

Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
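A minimal sketch of the forward pass of a network receptive field of the kind described: a handful of spectrotemporal sub-fields whose filtered outputs pass through unit nonlinearities and are combined by an output nonlinearity. The shapes, the sigmoid choice, and the random weights are assumptions for illustration, not fitted model parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nrf_rate(stim_patch, sub_fields, w_hidden, b_hidden, w_out, b_out):
    """stim_patch: (n_freq, n_hist) spectrogram window ending at the current time bin.
    sub_fields: (n_units, n_freq, n_hist) spectrotemporal sub-receptive fields."""
    drive = np.tensordot(sub_fields, stim_patch, axes=([1, 2], [0, 1]))  # one value per sub-field
    hidden = sigmoid(w_hidden * drive + b_hidden)                        # unit nonlinearities
    return sigmoid(w_out @ hidden + b_out)                               # output nonlinearity -> rate

rng = np.random.default_rng(0)
n_units, n_freq, n_hist = 4, 32, 20        # e.g. a few sub-fields, as in the fitted models
rate = nrf_rate(stim_patch=rng.normal(0, 1, (n_freq, n_hist)),
                sub_fields=rng.normal(0, 0.05, (n_units, n_freq, n_hist)),
                w_hidden=np.ones(n_units), b_hidden=np.zeros(n_units),
                w_out=rng.normal(0, 1, n_units), b_out=0.0)
print(f"predicted normalized firing rate: {rate:.3f}")
```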


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Models, Neurological , Nerve Net/physiology , Neural Inhibition/physiology , Sensory Receptor Cells/physiology , Acoustic Stimulation/methods , Animals , Computer Simulation , Humans , Nonlinear Dynamics , Systems Integration
12.
13.
PLoS Biol ; 11(11): e1001710, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24265596

ABSTRACT

Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain.
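A minimal sketch of a decoding approach to noise tolerance: train a classifier on population responses to clean sounds and compare its accuracy on responses to the same sounds in background noise. The population size, response model, and use of scikit-learn's LinearDiscriminantAnalysis are assumptions, not the decoder used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_sounds, n_trials, n_neurons = 8, 20, 50

# Hypothetical population responses: a sound-specific pattern plus trial-to-trial
# variability; the "in noise" responses are the same patterns, corrupted further.
patterns = rng.normal(0, 1, (n_sounds, n_neurons))
clean = patterns[:, None, :] + rng.normal(0, 0.8, (n_sounds, n_trials, n_neurons))
in_noise = patterns[:, None, :] + rng.normal(0, 2.0, (n_sounds, n_trials, n_neurons))

labels = np.repeat(np.arange(n_sounds), n_trials)
clf = LinearDiscriminantAnalysis().fit(clean.reshape(-1, n_neurons), labels)

# Accuracy on the clean set is optimistic here (same trials used for training);
# a real analysis would cross-validate across trials.
print(f"decoding accuracy, clean sounds   : {clf.score(clean.reshape(-1, n_neurons), labels):.2f}")
print(f"decoding accuracy, sounds in noise: {clf.score(in_noise.reshape(-1, n_neurons), labels):.2f}")
```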


Subject(s)
Cochlear Nerve/physiology , Acoustic Stimulation , Adaptation, Physiological , Animals , Auditory Cortex/physiology , Auditory Perception , Computer Simulation , Female , Ferrets , Hearing/physiology , Humans , Male , Models, Neurological , Neural Conduction , Noise , Signal-To-Noise Ratio
14.
J Acoust Soc Am ; 135(6): EL357-63, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24907846

ABSTRACT

Periodic stimuli are common in natural environments and are ecologically relevant, for example, footsteps and vocalizations. This study reports a detectability enhancement for temporally cued, periodic sequences. Target noise bursts (embedded in background noise) arriving at the time points which followed on from an introductory, periodic "cue" sequence were more easily detected (by ∼1.5 dB SNR) than identical noise bursts which randomly deviated from the cued temporal pattern. Temporal predictability and corresponding neuronal "entrainment" have been widely theorized to underlie important processes in auditory scene analysis and to confer perceptual advantage. This is the first study in the auditory domain to clearly demonstrate a perceptual enhancement of temporally predictable, near-threshold stimuli.


Subject(s)
Auditory Perception , Cues , Signal Detection, Psychological , Time Perception , Acoustic Stimulation , Adult , Audiometry , Auditory Threshold , Female , Humans , Male , Motion , Psychoacoustics , Sound , Time Factors , Young Adult
15.
J Neurosci ; 32(33): 11271-84, 2012 Aug 15.
Article in English | MEDLINE | ID: mdl-22895711

ABSTRACT

Auditory neurons are often described in terms of their spectrotemporal receptive fields (STRFs). These map the relationship between features of the sound spectrogram and firing rates of neurons. Recently, we showed that neurons in the primary fields of the ferret auditory cortex are also subject to gain control: when sounds undergo smaller fluctuations in their level over time, the neurons become more sensitive to small-level changes (Rabinowitz et al., 2011). Just as STRFs measure the spectrotemporal features of a sound that lead to changes in the firing rates of neurons, in this study, we sought to estimate the spectrotemporal regions in which sound statistics lead to changes in the gain of neurons. We designed a set of stimuli with complex contrast profiles to characterize these regions. This allowed us to estimate the STRFs of cortical neurons alongside a set of spectrotemporal contrast kernels. We find that these two sets of integration windows match up: the extent to which a stimulus feature causes the firing rate of a neuron to change is strongly correlated with the extent to which the contrast of that feature modulates the gain of the neuron. Adding contrast kernels to STRF models also yields considerable improvements in the ability to capture and predict how auditory cortical neurons respond to statistically complex sounds.
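A minimal sketch of combining an STRF with a spectrotemporal contrast kernel: the STRF sets the drive at each time point and the contrast-kernel output sets a multiplicative gain before a rectifying output nonlinearity. The multiplicative coupling, array shapes, and random kernels are assumptions for illustration and not necessarily the parameterisation used in the paper.

```python
import numpy as np

def strf_contrast_model(spec, contrast, strf, contrast_kernel, bias=0.0):
    """spec, contrast: (n_freq, n_time) spectrogram and local-contrast profiles.
    strf, contrast_kernel: (n_freq, n_hist) integration windows."""
    n_freq, n_hist = strf.shape
    rates = np.zeros(spec.shape[1])
    for t in range(n_hist, spec.shape[1]):
        drive = np.sum(strf * spec[:, t - n_hist:t])              # STRF output
        # Gain between 0 and 1, reduced when the contrast kernel's output is high.
        gain = 1.0 / (1.0 + np.exp(np.sum(contrast_kernel * contrast[:, t - n_hist:t])))
        rates[t] = max(gain * drive + bias, 0.0)                  # rectifying output nonlinearity
    return rates

rng = np.random.default_rng(0)
n_freq, n_hist, n_time = 16, 10, 200
spec = rng.normal(0, 1, (n_freq, n_time))
contrast = np.abs(rng.normal(0, 1, (n_freq, n_time)))             # stand-in contrast profile
rates = strf_contrast_model(spec, contrast,
                            strf=rng.normal(0, 0.1, (n_freq, n_hist)),
                            contrast_kernel=rng.normal(0, 0.1, (n_freq, n_hist)))
print(f"mean predicted rate: {rates.mean():.3f}")
```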


Subject(s)
Action Potentials/physiology , Auditory Cortex/cytology , Auditory Perception/physiology , Models, Neurological , Neurons/physiology , Acoustic Stimulation/methods , Animals , Computer Simulation , Female , Ferrets , Male , Nonlinear Dynamics , Sound
16.
J Acoust Soc Am ; 133(1): 365-76, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23297909

ABSTRACT

Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.


Subject(s)
Behavior, Animal , Discrimination, Psychological , Ferrets/psychology , Pitch Discrimination , Speech Acoustics , Voice Quality , Acoustic Stimulation , Animals , Choice Behavior , Cues , Female , Humans , Noise/adverse effects , Perceptual Masking , Psychoacoustics , Sound Spectrography
17.
J Acoust Soc Am ; 134(1): EL98-104, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23862914

ABSTRACT

This study reports a role of temporal regularity on the perception of auditory streams. Listeners were presented with two-tone sequences in an A-B-A-B rhythm that was either regular or had a controlled amount of temporal jitter added independently to each of the B tones. Subjects were asked to report whether they perceived one or two streams. The percentage of trials in which two streams were reported substantially and significantly increased with increasing amounts of temporal jitter. This suggests that temporal predictability may serve as a binding cue during auditory scene analysis.


Subject(s)
Attention , Cues , Illusions , Pitch Discrimination , Sound Spectrography , Time Perception , Humans , Psychoacoustics
18.
Front Psychol ; 14: 1106562, 2023.
Article in English | MEDLINE | ID: mdl-37705948

ABSTRACT

The unity assumption hypothesis contends that higher-level factors, such as a perceiver's belief and prior experience, modulate multisensory integration. The McGurk illusion exemplifies such integration. When a visual velar consonant /ga/ is dubbed with an auditory bilabial /ba/, listeners unify the discrepant signals with knowledge that open lips cannot produce /ba/ and a fusion percept /da/ is perceived. Previous research claimed to have falsified the unity assumption hypothesis by demonstrating the McGurk effect occurs even when a face is dubbed with a voice of the opposite sex, and thus violates expectations from prior experience. But perhaps stronger counter-evidence is needed to prevent perceptual unity than just an apparent incongruence between unfamiliar faces and voices. Here we investigated whether the McGurk illusion with male/female incongruent stimuli can be disrupted by familiarization and priming with an appropriate pairing of face and voice. In an online experiment, the susceptibility of participants to the McGurk illusion was tested with stimuli containing either a male or female face with a voice of incongruent gender. The number of times participants experienced a McGurk illusion was measured before and after a familiarization block, which familiarized them with the true pairings of face and voice. After familiarization and priming, the susceptibility to the McGurk effects decreased significantly on average. The findings support the notion that unity assumptions modulate intersensory bias, and confirm and extend previous studies using male/female incongruent McGurk stimuli.

19.
Sci Rep ; 13(1): 3785, 2023 03 07.
Article in English | MEDLINE | ID: mdl-36882473

ABSTRACT

Spatial hearing remains one of the major challenges for bilateral cochlear implant (biCI) users, and early deaf patients in particular are often completely insensitive to interaural time differences (ITDs) delivered through biCIs. One popular hypothesis is that this may be due to a lack of early binaural experience. However, we have recently shown that neonatally deafened rats fitted with biCIs in adulthood quickly learn to discriminate ITDs as well as their normal-hearing littermates, and perform an order of magnitude better than human biCI users. Our unique behaving biCI rat model allows us to investigate other possible limiting factors of prosthetic binaural hearing, such as the effect of stimulus pulse rate and envelope shape. Previous work has indicated that ITD sensitivity may decline substantially at the high pulse rates often used in clinical practice. We therefore measured behavioral ITD thresholds in neonatally deafened, adult-implanted biCI rats to pulse trains of 50, 300, 900 and 1800 pulses per second (pps), with either rectangular or Hanning window envelopes. Our rats exhibited very high sensitivity to ITDs at pulse rates up to 900 pps for both envelope shapes, similar to those in common clinical use. However, ITD sensitivity declined to near zero at 1800 pps, for both Hanning and rectangular windowed pulse trains. Current clinical cochlear implant (CI) processors are often set to pulse rates ≥ 900 pps, but ITD sensitivity in human CI listeners has been reported to decline sharply above ~ 300 pps. Our results suggest that the relatively poor ITD sensitivity seen at > 300 pps in human CI users may not reflect the hard upper limit of biCI ITD performance in the mammalian auditory pathway. Perhaps with training or better CI strategies, good binaural hearing may be achievable at pulse rates high enough to allow good sampling of speech envelopes while delivering usable ITDs.


Subject(s)
Cochlear Implantation , Cochlear Implants , Adult , Humans , Animals , Rats , Heart Rate , Tachycardia , Auditory Pathways , Mammals
20.
bioRxiv ; 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36711896

ABSTRACT

Detecting patterns, and noticing unexpected pattern changes, in the environment is a vital aspect of sensory processing. Adaptation and prediction error responses are two components of neural processing related to these tasks, and previous studies in the auditory system in rodents show that these two components are partially dissociable in terms of the topography and latency of neural responses to sensory deviants. However, many previous studies have focused on repetitions of single stimuli, such as pure tones, which have limited ecological validity. In this study, we tested whether auditory cortical activity shows adaptation to repetition of more complex sound patterns (bisyllabic pairs). Specifically, we compared neural responses to violations of sequences based on single stimulus probability only, against responses to more complex violations based on stimulus order. We employed an auditory oddball paradigm and monitored auditory cortex (ACtx) activity of awake mice (N=8) using wide-field calcium imaging. We found that cortical responses were sensitive both to single stimulus probabilities and to more global stimulus patterns, as mismatch signals were elicited following both substitution deviants and transposition deviants. Notably, these deviants elicited larger mismatch signals in area A2 than in primary ACtx (A1), which suggests a hierarchical gradient of prediction error signaling in the auditory cortex. Such a hierarchical gradient was observed for late but not early peaks of calcium transients to deviants, suggesting that the late part of the deviant response may reflect prediction error signaling in response to more complex sensory pattern violations.
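A minimal sketch of how a mismatch signal of the kind analysed above can be computed from trial-averaged responses: subtract the response to a stimulus presented as a standard from the response to the same stimulus presented as a deviant, separately per cortical area. The ROI traces, trial counts, and the larger A2 effect built into the toy data are illustrative assumptions.

```python
import numpy as np

def mismatch_signal(deviant_trials, standard_trials):
    """Trial-averaged deviant response minus trial-averaged standard response."""
    return deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)

rng = np.random.default_rng(0)
n_trials, n_time = 40, 100                                # hypothetical imaging time base

for area, deviant_boost in [("A1", 0.3), ("A2", 0.8)]:    # larger boost in A2 is illustrative only
    standard = 1.0 + rng.normal(0, 0.2, (n_trials, n_time))
    deviant = 1.0 + deviant_boost + rng.normal(0, 0.2, (n_trials, n_time))
    mm = mismatch_signal(deviant, standard)
    # Compare early vs. late portions of the mismatch time course.
    print(f"{area}: early mismatch={mm[:30].mean():.2f}, late mismatch={mm[30:].mean():.2f}")
```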
