Results 1 - 20 of 34
1.
Front Neurosci ; 18: 1452450, 2024.
Article in English | MEDLINE | ID: mdl-39170684

ABSTRACT

Rodent models of tinnitus are commonly used to study its mechanisms and potential treatments. Tinnitus can be identified by changes in the gap-induced prepulse inhibition of the acoustic startle (GPIAS), most commonly by using pressure detectors to measure the whole-body startle (WBS). Unfortunately, the WBS habituates quickly, the measuring system can introduce mechanical oscillations, and the response shows considerable variability. We have instead used a motion tracking system to measure the localized motion of small reflective markers during the acoustic startle reflex in guinea pigs and mice. For guinea pigs, the pinna had the largest responses, both in terms of displacement between pairs of markers and in terms of the speed of the reflex movement. Smaller, but still reliable, responses were observed with markers on the thorax, abdomen and back. The peak speed of the pinna reflex was the most sensitive measure for calculating GPIAS in the guinea pig. Recording the pinna reflex in mice proved impractical because the markers were removed during grooming. However, recordings from their back and tail allowed us to measure the peak speed and the twitch amplitude (area under the curve) of reflex responses, and both analysis methods showed robust GPIAS. When mice were administered high doses of sodium salicylate, which induces tinnitus in humans, there was a significant reduction in GPIAS, consistent with the presence of tinnitus. Thus, measurement of the peak speed or twitch amplitude of pinna, back and tail markers provides a reliable assessment of tinnitus in rodents.
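
As a concrete illustration of the metric described above, here is a minimal Python sketch of how a GPIAS ratio might be computed from motion-tracked marker traces. The use of peak speed as the startle measure follows the abstract, but the function names and units are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def peak_speed(positions, fs):
    """Peak speed of one marker trace.

    positions: (n_samples, 2) array of x/y marker coordinates (mm),
    fs: sampling rate of the motion tracking system (Hz).
    """
    velocity = np.diff(positions, axis=0) * fs     # mm per second, per sample
    speed = np.linalg.norm(velocity, axis=1)       # scalar speed at each sample
    return speed.max()

def gpias(gap_trials, no_gap_trials, fs):
    """GPIAS ratio: 1 = complete gap-induced suppression, 0 = none."""
    with_gap = np.mean([peak_speed(t, fs) for t in gap_trials])
    without_gap = np.mean([peak_speed(t, fs) for t in no_gap_trials])
    return 1.0 - with_gap / without_gap
```

A ratio near 1 indicates strong gap-induced suppression of the startle; animals whose tinnitus "fills in" the silent gap should show ratios closer to 0.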

2.
Article in English | MEDLINE | ID: mdl-37465203

ABSTRACT

Humans and many other animals can hear a wide range of sounds. We can hear low and high notes and both quiet and loud sounds. We are also very good at telling the difference between sounds that are similar, like the speech sounds "argh" and "ah," and picking apart sounds that are mixed together, like when an orchestra is playing. But how do human hearing abilities compare to those of other animals? In this article, we discover how the inner ear determines hearing abilities. Many other mammals can hear very high notes that we cannot, and some can hear quiet sounds that we cannot. However, humans may be better than any other species at distinguishing similar sounds. We know this because, milliseconds after the sounds around us go into our ears, other sounds come out: sounds that are actually produced by those same ears!

3.
Neurosci Lett ; 747: 135705, 2021 03 16.
Article in English | MEDLINE | ID: mdl-33548408

ABSTRACT

Tinnitus has similarities to chronic neuropathic pain, in which there are changes in the firing rates of different types of afferent neurons. We postulated that one possible cause of tinnitus is a change in the distribution of spontaneous firing rates in at least one type of afferent auditory nerve fibre, and tested this in anaesthetised guinea pigs. In control animals there was a bimodal distribution of spontaneous rates, but the position of the second mode differed depending upon whether the fibres responded best to high (>4 kHz) or low (≤4 kHz) frequency tonal stimulation. The simplest and most reliable way of inducing tinnitus in experimental animals is to administer a high dose of sodium salicylate. The distribution of spontaneous firing rates was different when salicylate (350 mg/kg) was administered, even when the sample was matched for the distribution of characteristic frequencies in the control population. The proportion of medium spontaneous rate fibres (MSR, 1 ≤ spikes/s ≤ 20) increased, while the proportion of the highest of the high spontaneous rate fibres (HSR, >80 spikes/s) decreased following salicylate. The median rate fell from 64.7 spikes/s (control) to 35.4 spikes/s (salicylate), a highly significant change (Kruskal-Wallis test, p < 0.001). When the changes were compared with various models of statistical probability, the most accurate model was one in which most HSR fibres decreased their firing rate by 32 spikes/s. Thus, we have shown a reduction in the firing rate of HSR fibres that may be related to tinnitus.
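
The rate shift and statistical test named above lend themselves to a short simulation. The sketch below uses placeholder lognormal data (an assumption; the real control distribution is bimodal) together with the abstract's best-fitting model and the Kruskal-Wallis test.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Placeholder spontaneous rates in spikes/s (the real control distribution
# is bimodal; a lognormal stands in here purely for illustration).
control = rng.lognormal(mean=3.3, sigma=1.0, size=200)

# The best-fitting model in the abstract: most fibres above 80 spikes/s
# drop their firing rate by 32 spikes/s after salicylate.
salicylate = np.where(control > 80, control - 32, control)

stat, p = kruskal(control, salicylate)
print(f"median {np.median(control):.1f} -> {np.median(salicylate):.1f} spikes/s, p = {p:.3g}")
```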


Subject(s)
Auditory Cortex/drug effects; Auditory Threshold/drug effects; Cochlear Nerve/drug effects; Evoked Potentials, Auditory/drug effects; Salicylates/pharmacology; Action Potentials/physiology; Animals; Guinea Pigs
4.
Philos Trans R Soc Lond B Biol Sci ; 375(1802): 20190480, 2020 07 06.
Article in English | MEDLINE | ID: mdl-32420861

ABSTRACT

Conspecific acceptance thresholds (CATs; Reeve 1989 Am. Nat. 133, 407-435), which have been widely applied to explain ecological behaviour in animals, propose how sensory information, prior information and the costs of decisions determine actions. Signal detection theory (SDT; Green & Swets 1966 Signal Detection Theory and Psychophysics), which forms the basis of CAT models, has been widely used in psychological studies to partition the ability to discriminate sensory information from the action made as a result of it. In this article, we review the application of SDT in interpreting the behaviour of laboratory animals trained in operant conditioning tasks and then consider its potential in ecological studies of animal behaviour in natural environments. Focusing on the nest-mate recognition systems exhibited by social insects, we show how the quantitative application of SDT has the potential to transform acceptance rate data into independent indices of cue sensitivity and decision criterion (also known as the acceptance threshold). However, further tests of the assumptions underlying SDT analysis are required. Overall, we argue that SDT, as conventionally applied in psychological studies, may provide clearer insights into the mechanistic basis of decision making and information processing in behavioural ecology. This article is part of the theme issue 'Signal detection theory in recognition systems: from evolving models to experimental tests'.
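
The transformation of acceptance rates into independent indices of sensitivity and criterion is the standard equal-variance Gaussian SDT calculation, sketched below. Mapping "hit" onto accepting a nest-mate and "false alarm" onto accepting an intruder is an interpretive assumption for illustration.

```python
from scipy.stats import norm

def sdt_indices(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian SDT: returns (d', criterion c).

    Interpreting acceptance data: a 'hit' is accepting a nest-mate,
    a 'false alarm' is accepting a non-nest-mate intruder.
    """
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa              # cue sensitivity, independent of bias
    criterion = -0.5 * (z_hit + z_fa)   # the acceptance threshold
    return d_prime, criterion

print(sdt_indices(0.95, 0.20))          # -> approximately (2.49, -0.40)
```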


Subject(s)
Behavior, Animal; Ethology/methods; Hymenoptera/physiology; Signal Detection, Psychological; Animals; Psychology/methods
5.
Trends Hear ; 23: 2331216519837866, 2019.
Article in English | MEDLINE | ID: mdl-30909814

ABSTRACT

Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker. This is of particular value to hearing-impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility. How these factors impact on the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or the intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure with a cochlear implant. Experiment 1 showed that, for clear speech, the visual speech benefit was unaffected by the number of background talkers. For vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that the visual speech benefit depended upon the audio intelligibility of the talker and increased as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single "independent noise" signal detection theory model predicted the overall visual speech benefit in some conditions but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, similar to audio-only speech intelligibility, the integration of audio-visual speech cues may be functionally dependent on factors other than audibility and task difficulty, and that clinicians and researchers should carefully consider the characteristics of their stimuli when assessing audio-visual integration.


Subject(s)
Cochlear Implants; Persons With Hearing Impairments/psychology; Speech Intelligibility; Speech Perception; Visual Perception; Adolescent; Adult; Cochlear Implantation; Cognition; Cues; Female; Humans; Male; Noise; Young Adult
6.
Proc Natl Acad Sci U S A ; 115(44): 11322-11326, 2018 10 30.
Article in English | MEDLINE | ID: mdl-30322908

ABSTRACT

Frequency analysis of sound by the cochlea is the most fundamental property of the auditory system. Despite its importance, the resolution of this frequency analysis in humans remains controversial. The controversy persists because the methods used to estimate tuning in humans are indirect and have not all been independently validated in other species. Some data suggest that human cochlear tuning is considerably sharper than that of laboratory animals, while others suggest little or no difference between species. We show here in a single species (ferret) that behavioral estimates of tuning bandwidths obtained using perceptual masking methods, and objective estimates obtained using otoacoustic emissions, both also employed in humans, agree closely with direct physiological measurements from single auditory-nerve fibers. Combined with human behavioral data, this outcome indicates that the frequency analysis performed by the human cochlea is of significantly higher resolution than found in common laboratory animals. This finding raises important questions about the evolutionary origins of human cochlear tuning, its role in the emergence of speech communication, and the mechanisms underlying our ability to separate and process natural sounds in complex acoustic environments.


Subject(s)
Cochlea/physiology; Mammals/physiology; Acoustic Stimulation/methods; Acoustics; Animals; Auditory Threshold/physiology; Hearing/physiology; Humans; Otoacoustic Emissions, Spontaneous/physiology; Perceptual Masking/physiology; Sound
7.
Front Neurosci ; 12: 671, 2018.
Article in English | MEDLINE | ID: mdl-30369863

ABSTRACT

A fundamental task of the ascending auditory system is to produce representations that facilitate the recognition of complex sounds. This is particularly challenging in the context of acoustic variability, such as that between different talkers producing the same phoneme. These representations are transformed as information is propagated throughout the ascending auditory system from the inner ear to the auditory cortex. Investigating these transformations and their role in speech recognition is key to understanding hearing impairment and the development of future clinical interventions. Here, we obtained neural responses to an extensive set of natural vowel-consonant-vowel phoneme sequences, each produced by multiple talkers, in three stages of the auditory processing pathway. Auditory nerve (AN) representations were simulated using a model of the peripheral auditory system, and extracellular neuronal activity was recorded in the inferior colliculus (IC) and primary auditory cortex (AI) of anaesthetized guinea pigs. A classifier was developed to examine the efficacy of these representations for recognizing the speech sounds. Individual neurons convey progressively less information from AN to AI. Nonetheless, at the population level, representations are sufficiently rich to facilitate recognition of consonants with a high degree of accuracy at all stages, indicating a progression from a dense, redundant representation to a sparse, distributed one. We examined the timescale of the neural code for consonant recognition and found that optimal timescales increase throughout the ascending auditory system, from a few milliseconds in the periphery to several tens of milliseconds in the cortex. Despite these longer timescales, we found little evidence to suggest that representations up to the level of AI become increasingly invariant to across-talker differences. Instead, our results support the idea that the role of the subcortical auditory system is one of dimensionality expansion, which could provide a basis for flexible classification of arbitrary speech sounds.
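
The abstract does not specify the classifier, so the sketch below shows one plausible reading: a nearest-template decoder applied to spike counts binned at different timescales, with `bin_width` swept from a few milliseconds (periphery-like) to tens of milliseconds (cortex-like). All names and the decoder choice are illustrative assumptions.

```python
import numpy as np

def bin_spikes(spike_times, duration, bin_width):
    """Represent a spike train as counts at a chosen analysis timescale (s)."""
    edges = np.arange(0.0, duration + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def decode(test_response, templates):
    """Nearest-template decoder over a dict {phoneme: mean response vector}."""
    labels = list(templates)
    dists = [np.linalg.norm(test_response - templates[k]) for k in labels]
    return labels[int(np.argmin(dists))]
```

Sweeping `bin_width` and re-scoring decoding accuracy is one way to identify the "optimal timescale" at each processing stage.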

8.
Acta Acust United Acust ; 104(5): 922-925, 2018.
Article in English | MEDLINE | ID: mdl-30369861

ABSTRACT

When presented with two vowels simultaneously, humans are often able to identify the constituent vowels. Computational models exist that simulate this ability; however, they predict listener confusions poorly, particularly in the case where the two vowels have the same fundamental frequency. Presented here is a model that is uniquely able to predict the combined representation of concurrent vowels. The model predicts listeners' systematic perceptual decisions to a high degree of accuracy.

9.
J Neurosci ; 37(27): 6588-6599, 2017 07 05.
Article in English | MEDLINE | ID: mdl-28559383

ABSTRACT

The cochlea behaves like a bank of band-pass filters, segregating information into different frequency channels. Some aspects of perception reflect processing within individual channels, but others involve the integration of information across them. One instance of this is sound localization, which improves with increasing bandwidth. The processing of binaural cues for sound location has been studied extensively. However, although the advantage conferred by bandwidth is clear, we currently know little about how this additional information is combined to form our percept of space. We investigated the ability of cells in the auditory system of guinea pigs to compare interaural level differences (ILDs), a key localization cue, between tones of disparate frequencies in each ear. Cells in auditory cortex believed to be integral to ILD processing (excitatory from one ear, inhibitory from the other: EI cells) compare ILDs separately over restricted frequency ranges which are not consistent with their monaural tuning. In contrast, cells that are excitatory from both ears (EE cells) show no evidence of frequency-specific processing. Both cell types are explained by a model in which ILDs are computed within separate frequency channels and subsequently combined in a single cortical cell. Interestingly, ILD processing in all inferior colliculus cell types (EE and EI) is largely consistent with processing within single, matched-frequency channels from each ear. Our data suggest a clear constraint on the way that localization cues are integrated: cortical ILD tuning to broadband sounds is a composite of separate, frequency-specific, binaurally sensitive channels. This frequency-specific processing appears after the level of the midbrain.

SIGNIFICANCE STATEMENT: For some sensory modalities (e.g., somatosensation, vision), the spatial arrangement of the outside world is inherited by the brain from the periphery. The auditory periphery is arranged spatially by frequency, not spatial location. Therefore, our auditory perception of location must be synthesized from physical cues in separate frequency channels. There are multiple cues (e.g., timing, level, spectral cues), but even single cues (e.g., level differences) are frequency dependent. The synthesis of location must account for this frequency dependence, but it is not known how this might occur. Here, we investigated how interaural-level differences are combined across frequency along the ascending auditory system. We found that the integration in auditory cortex preserves the independence of the different-level cues in different frequency regions.
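
The model described above (ILDs computed within separate frequency channels and only then combined in a single cortical cell) can be sketched compactly. The sigmoid nonlinearity and all parameter values are assumptions for illustration, not fitted values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ei_channel_response(level_contra, level_ipsi, slope=0.5, midpoint=0.0):
    """One frequency channel: excitation from the contralateral ear,
    inhibition from the ipsilateral ear, i.e. a sigmoid of the ILD."""
    ild = level_contra - level_ipsi
    return sigmoid(slope * (ild - midpoint))

def cortical_ei_response(levels_contra, levels_ipsi, weights):
    """Composite cortical cell: ILDs are computed within separate
    frequency channels and subsequently summed."""
    per_channel = [ei_channel_response(c, i)
                   for c, i in zip(levels_contra, levels_ipsi)]
    return float(np.dot(weights, per_channel))
```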


Subject(s)
Auditory Cortex/physiology; Mesencephalon/physiology; Nerve Net/physiology; Pitch Discrimination/physiology; Sound Localization/physiology; Space Perception/physiology; Animals; Female; Guinea Pigs; Male
10.
Behav Neurosci ; 130(4): 393-405, 2016 08.
Article in English | MEDLINE | ID: mdl-27196623

ABSTRACT

Psychophysical experiments seek to measure the limits of perception. While straightforward in humans, in animals they are time consuming, and choosing an appropriate task and interpreting the measurements can be challenging. We investigated the localization of high-frequency auditory signals in noise using an "approach-to-target" task in ferrets, how task performance should be interpreted in terms of perception, and how the measurements relate to other types of tasks. To establish their general ability to localize, animals were first trained to discriminate broadband noise from 12 locations. Subsequently, we tested their ability to discriminate between band-limited targets at 2 or 3 more widely spaced locations in a continuous background noise. The ability to discriminate between 3 possible locations (-90°, 0°, 90°) of a 10-kHz pure tone decreased gradually over a wide range (>30 dB) of signal-to-noise ratios (SNRs). Location discrimination was better for wideband noise targets (0.5 and 2 octave). These results were consistent with localization ability limiting performance for pure tones. Discrimination of pure tones at 2 locations (-90°/left, 90°/right) was robust at positive SNRs, yielding psychometric functions which fell steeply at negative SNRs. Thresholds for discrimination were similar to previous tone-in-noise thresholds measured in ferrets using a yes/no task. Thus, using an approach-to-target task, sound "localization" in noise can reflect detectability or the ability to localize, depending on the stimulus configuration. Signal-detection-theory-based models were able to account for the results when discriminating between pure tones from 2- and 3-source locations.
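
Psychometric functions that "fell steeply at negative SNRs" are conventionally summarised by fitting a sigmoid; the sketch below fits a logistic function with a 0.5 guess rate appropriate to the 2-location task. The data points and parameter values are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, threshold, slope, guess=0.5, lapse=0.02):
    """Proportion correct vs SNR for a 2-location task (chance = 0.5)."""
    p = 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))
    return guess + (1.0 - guess - lapse) * p

snrs = np.array([-20.0, -15.0, -10.0, -5.0, 0.0, 5.0])         # dB, illustrative
prop_correct = np.array([0.52, 0.58, 0.71, 0.88, 0.95, 0.97])  # illustrative
(threshold, slope), _ = curve_fit(psychometric, snrs, prop_correct, p0=[-10.0, 0.5])
print(f"threshold = {threshold:.1f} dB SNR, slope = {slope:.2f}")
```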


Subject(s)
Auditory Perception/physiology; Psychoacoustics; Sound Localization/physiology; Animals; Ferrets; Humans; Noise; Perceptual Masking/physiology; Pitch Discrimination/physiology
11.
Hear Res ; 336: 17-28, 2016 06.
Article in English | MEDLINE | ID: mdl-27085797

ABSTRACT

Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues.
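
The final sentence above alludes to the optimal combination of auditory and visual cues; under the standard Gaussian assumption this is inverse-variance weighting, sketched below. This is the textbook rule, not necessarily the exact model fitted in the study.

```python
def optimal_combination(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood fusion of two Gaussian cue estimates.

    Each cue is weighted by its reliability (inverse variance); the fused
    estimate is at least as reliable as the better single cue.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# Degrading the audio (vocoding, noise) raises var_a, shifting weight to vision.
print(optimal_combination(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0))  # (0.8, 0.8)
```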


Subject(s)
Cues; Speech Perception; Visual Perception; Acoustic Stimulation; Adolescent; Adult; Calibration; Cochlear Implants; Female; Hearing Tests; Humans; Male; Models, Theoretical; Noise; Perceptual Masking; Young Adult
12.
J Acoust Soc Am ; 139(2): EL19-24, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26936579

ABSTRACT

Frequency selectivity is a fundamental property of hearing which affects almost all aspects of auditory processing. Here, auditory filter widths at 1, 3, 7, and 10 kHz were estimated from behavioural thresholds in ferrets using the notched-noise method [Patterson, Nimmo-Smith, Weber, and Milroy, J. Acoust. Soc. Am. 72, 1788-1803 (1982)]. The mean bandwidth was 21% of the signal frequency, excluding the wider bandwidths at 1 kHz (65%). These estimates were comparable to, although on average broader than, equivalent measurements in other mammals (∼11%-20%), and wider than bandwidths measured from the auditory nerve in ferrets (∼18%). In non-human mammals there is considerable variation between individuals and species, and in the correspondence with auditory nerve tuning.
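
Notched-noise thresholds are conventionally summarised with a rounded-exponential (roex) filter, whose equivalent rectangular bandwidth is ERB = 4·f0/p. The sketch below shows how a bandwidth of 21% of the signal frequency maps onto the roex slope parameter; the symmetric roex(p) choice is an assumption consistent with the cited Patterson et al. method, not a detail given in the abstract.

```python
import numpy as np

def roex_weight(f, f0, p):
    """Rounded-exponential filter shape, W(g) = (1 + p*g) * exp(-p*g)."""
    g = np.abs(f - f0) / f0
    return (1.0 + p * g) * np.exp(-p * g)

def erb_from_p(f0, p):
    """Equivalent rectangular bandwidth of a symmetric roex(p) filter."""
    return 4.0 * f0 / p

# A bandwidth of 21% of the signal frequency implies p = 4 / 0.21, about 19.
f0 = 7000.0
p = 4.0 / 0.21
print(erb_from_p(f0, p) / f0)   # 0.21
```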


Subject(s)
Behavior, Animal; Ferrets/psychology; Noise/adverse effects; Perceptual Masking; Pitch Perception; Acoustic Stimulation; Acoustics; Animals; Auditory Pathways/physiology; Auditory Threshold; Female; Ferrets/physiology; Male; Psychoacoustics; Sound Spectrography
13.
Article in English | MEDLINE | ID: mdl-26903850

ABSTRACT

We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli presented in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model's parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and also the dependence on the interphase gap (IPG) of the stimulus pulse, an effect that is quantitatively reproduced. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate-and-fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron, so as to produce a realistic latency distribution by delaying the moment of spiking. During this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF response to a large variety of pulse shapes is reproduced correctly by this model.
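
A compact sketch of the key mechanism described above follows: a stochastic leaky integrate-and-fire neuron in which a threshold crossing schedules a spike after a minimum wait, during which anodic current can abolish it. All parameter values and the simple Euler integration are placeholders, not the fitted model.

```python
import numpy as np

def slif_spike_latency(pulse, dt, threshold=1.0, noise_sd=0.06,
                       tau=1e-4, min_wait=3e-4, rng=None):
    """Stochastic leaky integrate-and-fire with delayed spike emission.

    pulse: current samples (cathodic positive); dt: sample period (s).
    A noisy threshold crossing schedules a spike that is only emitted after
    `min_wait`; anodic (negative) current during the wait abolishes it.
    Returns the spike latency in seconds, or None if no spike is emitted.
    """
    if rng is None:
        rng = np.random.default_rng()
    v, pending = 0.0, None
    for i, current in enumerate(pulse):
        v += dt * (-v / tau + current)                 # leaky integration
        if pending is None:
            if v + rng.normal(0.0, noise_sd) > threshold:
                pending = min_wait                     # spike scheduled
        else:
            if current < 0:                            # trailing anodic phase
                pending = None                         # pending spike abolished
            else:
                pending -= dt
                if pending <= 0:
                    return i * dt                      # spike emitted
    return None
```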

14.
Hear Res ; 328: 48-58, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26163899

ABSTRACT

Auditory stream segregation describes the way that sounds are perceptually segregated into groups or streams on the basis of perceptual attributes such as pitch or spectral content. For sequences of pure tones, segregation depends on the tones' proximity in frequency and time. In the auditory cortex (and elsewhere) responses to sequences of tones are dependent on stimulus conditions in a similar way to the perception of these stimuli. However, although highly dependent on stimulus conditions, perception is also clearly influenced by factors unrelated to the stimulus, such as attention. Exactly how 'bottom-up' sensory processes and non-sensory 'top-down' influences interact is still not clear. Here, we recorded responses to alternating tones (ABAB …) of varying frequency difference (FD) and rate of presentation (PR) in the auditory cortex of anesthetized guinea-pigs. These data complement previous studies, in that top-down processing resulting from conscious perception should be absent or at least considerably attenuated. Under anesthesia, the responses of cortical neurons to the tone sequences adapted rapidly, in a manner sensitive to both the FD and PR of the sequences. While the responses to tones at frequencies more distant from neuron best frequencies (BFs) decreased as the FD increased, the responses to tones near to BF increased, consistent with a release from adaptation, or forward suppression. Increases in PR resulted in reductions in responses to all tones, but the reduction was greater for tones further from BF. Although asymptotically adapted responses to tones showed behavior that was qualitatively consistent with perceptual stream segregation, responses reached asymptote within 2 s, and responses to all tones were very weak at high PRs (>12 tones per second). A signal-detection model, driven by the cortical population response, made decisions that were dependent on both FD and PR in ways consistent with perceptual stream segregation. This included showing a range of conditions over which decisions could be made either in favor of perceptual integration or segregation, depending on the model 'decision criterion'. However, the rate of 'build-up' was more rapid than seen perceptually, and at high PR responses to tones were sometimes so weak as to be undetectable by the model. Under anesthesia, adaptation occurs rapidly, and at high PRs tones are generally poorly represented, which compromises the interpretation of the experiment. However, within these limitations, these results complement experiments in awake animals and humans. They generally support the hypothesis that 'bottom-up' sensory processing plays a major role in perceptual organization, and that processes underlying stream segregation are active in the absence of attention.


Subject(s)
Anesthesia; Auditory Cortex/physiology; Hearing/drug effects; Acoustic Stimulation/methods; Animals; Attention; Auditory Cortex/drug effects; Auditory Perception/drug effects; Auditory Perception/physiology; Butyrophenones/pharmacology; Drug Combinations; Female; Fentanyl/pharmacology; Guinea Pigs; Injections, Intramuscular; Male; Neurons/drug effects; Neurons/physiology; Reproducibility of Results; Sound
15.
PLoS One ; 9(12): e114076, 2014.
Article in English | MEDLINE | ID: mdl-25485733

ABSTRACT

Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding "yes" during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model, in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans) and in other psychological domains (e.g., vision, memory).
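
The criterion update policy described above (fixed outcome-dependent shifts plus strong decay towards a resting value) can be written in a few lines. The shift magnitudes and signs below are assumptions chosen only to illustrate the direction of the block-level bias, not the fitted parameters.

```python
def update_criterion(c, outcome, c_rest=0.0, decay=0.8,
                     shifts={"hit": -0.05, "miss": -0.30,
                             "false_alarm": 0.30, "correct_reject": 0.05}):
    """One trial of the criterion update policy: a fixed shift for each
    outcome, then strong decay back towards the resting value c_rest.

    With these (illustrative) signs, misses push the criterion down, so
    blocks of hard-to-detect signals drift towards a "yes" bias, while the
    decay keeps the block-level criterion distribution stable.
    """
    c = c + shifts[outcome]
    return c_rest + decay * (c - c_rest)
```

Iterating this update over simulated trial outcomes yields the equilibrium criterion distribution that the abstract's Markov model characterises analytically.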


Subject(s)
Behavior, Animal; Discrimination Learning; Signal Detection, Psychological; Animals; Female; Ferrets; Humans; Male; Models, Theoretical
16.
J Exp Psychol Hum Percept Perform ; 40(6): 2106-11, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25328997

ABSTRACT

Previous research (e.g., McGurk & MacDonald, 1976) suggests that faces and voices are bound automatically, but recent evidence suggests that attention is involved in the task of searching for a talking face (Alsius & Soto-Faraco, 2011). We hypothesized that the processing demands of the stimuli may affect the amount of attentional resources required, and investigated what effect degrading the auditory stimulus had on the time taken to locate a talking face. Twenty participants were presented with between 2 and 4 faces articulating different sentences, and had to decide which of these faces matched the sentence that they heard. The results showed that in the least demanding auditory condition (clear speech in quiet), search times did not significantly increase when the number of faces increased. However, when speech was presented in background noise or was processed to simulate the information provided by a cochlear implant, search times increased as the number of faces increased. Thus, it seems that the amount of attentional resources required varies according to the processing demands of the auditory stimuli; when processing load is increased, faces need to be attended to individually in order to complete the task. Based on these results, we would expect cochlear-implant users to find the task of locating a talking face more attentionally demanding than normal-hearing listeners do.


Subject(s)
Attention; Face; Pattern Recognition, Visual; Perceptual Masking; Speech Perception; Verbal Behavior; Female; Humans; Male; Reaction Time; Speech Acoustics; Young Adult
17.
J Physiol ; 591(16): 4003-25, 2013 Aug 15.
Article in English | MEDLINE | ID: mdl-23753527

ABSTRACT

A differential response to sound frequency is a fundamental property of auditory neurons. Frequency analysis in the cochlea gives rise to V-shaped tuning functions in auditory nerve fibres, but by the level of the inferior colliculus (IC), the midbrain nucleus of the auditory pathway, neuronal receptive fields display diverse shapes that reflect the interplay of excitation and inhibition. The origin and nature of these frequency receptive field types is still open to question. One proposed hypothesis is that the frequency response class of any given neuron in the IC is predominantly inherited from one of three major afferent pathways projecting to the IC, giving rise to three distinct receptive field classes. Here, we applied subjective classification, principal component analysis, cluster analysis, and other objective statistical measures to a large population (2826) of frequency response areas from single neurons recorded in the IC of the anaesthetised guinea pig. Subjectively, we recognised seven frequency response classes (V-shaped, non-monotonic Vs, narrow, closed, tilt down, tilt up and double-peaked), which were represented at all frequencies. We could identify similar classes using our objective classification tools. Importantly, however, many neurons exhibited properties intermediate between these classes, and none of the objective methods used here showed evidence of discrete response classes. Thus receptive field shapes in the IC form continua rather than discrete classes, a finding consistent with the integration of afferent inputs in the generation of frequency response areas. The frequency disposition of inhibition in the response areas of some neurons suggests that across-frequency inputs originating at or below the level of the IC are involved in their generation.
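
The objective part of the analysis (principal component analysis followed by clustering) can be sketched with standard tools. The sketch below uses scikit-learn with placeholder data and an assumed flattening of each frequency response area into a vector; it illustrates the approach, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Placeholder data: one row per neuron, each a flattened frequency response
# area (firing rate over a frequency x level grid), normalised per neuron.
response_areas = rng.random((2826, 40 * 10))

scores = PCA(n_components=10).fit_transform(response_areas)
labels = KMeans(n_clusters=7, n_init=10).fit_predict(scores)

# Discrete classes would appear as well-separated clusters in PCA space;
# a continuum of shapes appears as overlapping, graded clusters instead.
```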


Subject(s)
Auditory Pathways/physiology; Inferior Colliculi/physiology; Neurons/physiology; Acoustic Stimulation; Animals; Guinea Pigs; Neurons/classification
18.
J Neurophysiol ; 110(4): 973-83, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23719212

ABSTRACT

This study investigates the temporal properties of adaptation in the late auditory-evoked potentials in humans. The results are used to make inferences about the mechanisms of adaptation in human auditory cortex. The first experiment measured adaptation by single adapters as a combined function of the adapter duration and the stimulus onset asynchrony (SOA) and interstimulus interval (ISI) between the adapter and the adapted sound ("probe"). The results showed recovery from adaptation with increasing ISI, as would be expected, but buildup of adaptation with increasing adapter duration and thus SOA. This suggests that adaptation in auditory cortex is caused by the ongoing, rather than the onset, response to the adapter. Quantitative modeling indicated that the rate of buildup of adaptation is almost an order of magnitude faster than the recovery rate of adaptation. The recovery rate suggests that cortical adaptation is caused by synaptic depression and slow afterhyperpolarization. The P2 was more strongly affected by adaptation than the N1, suggesting that the two deflections originate from different cortical generators. In the second experiment, the single adapters were replaced by trains of two or four identical adapters. The results indicated that adaptation decays faster after repeated presentation of the adapter. This increase in the recovery rate of adaptation might contribute to the elicitation of the auditory mismatch negativity response. It may be caused by top-down feedback or by local processes such as the buildup of residual Ca²⁺ within presynaptic neurons.
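
The quantitative modelling described above amounts to exponential buildup and recovery with very different time constants; a minimal sketch follows, with illustrative (not fitted) values.

```python
import numpy as np

def probe_amplitude(adapter_duration, isi, tau_build=0.6, tau_rec=5.0):
    """Relative N1/P2 probe amplitude after a single adapter.

    Adaptation builds up exponentially with adapter duration (driven by the
    ongoing response) and recovers exponentially over the ISI. tau_build is
    much shorter than tau_rec, matching the order-of-magnitude asymmetry in
    the abstract; both time constants (seconds) are illustrative, not fitted.
    """
    adaptation = 1.0 - np.exp(-adapter_duration / tau_build)  # buildup
    adaptation *= np.exp(-isi / tau_rec)                      # recovery
    return 1.0 - adaptation
```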


Subject(s)
Adaptation, Physiological; Auditory Cortex/physiology; Evoked Potentials, Auditory; Acoustic Stimulation; Adult; Electroencephalography; Female; Humans; Male; Young Adult
20.
Eur J Neurosci ; 36(4): 2428-39, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22694786

ABSTRACT

The ferret (Mustela putorius) is a medium-sized, carnivorous mammal with good low-frequency hearing; it is relatively easy to train, and there is therefore a good body of behavioural data detailing its detection thresholds and localization abilities. However, despite extensive studies of the physiology of the central nervous system of the ferret, even extending to the prefrontal cortex, little is known of the functioning of the auditory periphery. Here, we provide an insight into this peripheral function by detailing the responses of single auditory nerve fibres. Our expectation was that ferret auditory nerve responsiveness would be similar to that of its near relative, the cat. However, by comparing a range of variables (the frequency tuning, the variation of rate-level functions with spontaneous rate, and the high-frequency cut-off of phase locking) across several species, we show that the auditory nerve (and hence cochlea) of the ferret is more similar to that of the guinea-pig and chinchilla than to that of the cat. Animal models of hearing are often chosen on the basis of the similarity of their audiogram to that of the human, particularly in the low-frequency region. We show here that, whereas the ferret hears well at low frequencies, this is likely to occur via fibres with higher characteristic frequencies. These qualitative differences in the response characteristics of auditory nerve fibres are important in interpreting data across all of auditory science, as it has been argued recently that tuning in animals is broader than in humans.


Subject(s)
Cochlear Nerve/physiology; Nerve Fibers/physiology; Acoustic Stimulation; Animals; Evoked Potentials, Auditory; Ferrets; Species Specificity