Results 1 - 6 of 6
1.
J Neurosci; 34(5): 1963-9, 2014 Jan 29.
Article in English | MEDLINE | ID: mdl-24478375

ABSTRACT

Adaptation to both common and rare sounds has been independently reported in neurophysiological studies using probabilistic stimulus paradigms in small mammals. However, the apparent sensitivity of the mammalian auditory system to the statistics of incoming sound has not yet been generalized to task-related human auditory perception. Here, we show that human listeners selectively adapt to novel sounds within scenes unfolding over minutes. Listeners' performance in an auditory discrimination task remains steady for the most common elements within the scene but, after the first minute, performance improves for distinct and rare (oddball) sound elements, at the expense of rare sounds that are relatively less distinct. Our data provide the first evidence of enhanced coding of oddball sounds in a human auditory discrimination task and suggest the existence of an adaptive mechanism that tracks the long-term statistics of sounds and deploys coding resources accordingly.
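The probabilistic stimulus paradigm described above can be sketched as a random draw between common ("standard") and rare ("oddball") sound elements. The frequencies, trial count, and oddball probability below are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def oddball_sequence(n_trials=600, p_oddball=0.1,
                     common_freqs=(1000.0,), oddball_freqs=(2500.0,),
                     seed=0):
    """Draw a tone-frequency sequence in which 'common' frequencies
    dominate and 'oddball' frequencies occur rarely.
    All values are placeholders, not the study's stimuli."""
    rng = np.random.default_rng(seed)
    is_odd = rng.random(n_trials) < p_oddball       # rare-trial mask
    common = rng.choice(common_freqs, size=n_trials)
    rare = rng.choice(oddball_freqs, size=n_trials)
    return np.where(is_odd, rare, common)

seq = oddball_sequence()
```

Over a scene of several hundred such trials, roughly one element in ten is an oddball, which is the kind of long-term statistic the proposed adaptive mechanism would track.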


Subject(s)
Adaptation, Physiological/physiology; Auditory Pathways/physiology; Auditory Perception/physiology; Discrimination, Psychological/physiology; Sound; Acoustic Stimulation; Humans; Probability; Psychoacoustics; Statistics as Topic; Time Factors
2.
Front Psychol; 6: 1522, 2015.
Article in English | MEDLINE | ID: mdl-26528202

ABSTRACT

A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, it is not yet known what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated by the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that signal complexity and additional sensory information have limited effect on this.
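One simple way to operationalize the harmonic-consonance distinction at issue above is to test whether a two-tone frequency ratio approximates a small-integer ratio. The tolerance and denominator limit below are illustrative assumptions, not the study's criterion:

```python
from fractions import Fraction

def is_consonant(f1, f2, tol=0.01, max_den=6):
    """Treat a tone pair as harmonically consonant when its frequency
    ratio lies within a relative tolerance of a simple integer ratio
    (denominator <= max_den). Thresholds are illustrative, not the
    study's definition of consonance."""
    ratio = max(f1, f2) / min(f1, f2)
    # Closest simple fraction to the measured ratio
    approx = Fraction(ratio).limit_denominator(max_den)
    return abs(ratio - float(approx)) / ratio < tol
```

On this rough criterion, a 3:2 pair (perfect fifth) counts as consonant while a near-tritone pair does not.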

3.
Multisens Res; 27(5-6): 337-57, 2014.
Article in English | MEDLINE | ID: mdl-25693300

ABSTRACT

Sensory substitution devices such as The vOICe convert visual imagery into auditory soundscapes and can provide a basic 'visual' percept to those with visual impairment. However, it is not known whether technical or perceptual limits dominate the practical efficacy of such systems. By manipulating the resolution of sonified images and asking naïve sighted participants to identify visual objects through a six-alternative forced-choice (6AFC) procedure, we demonstrate a 'ceiling effect' at 8 × 8 pixels, in both visual and tactile conditions, that is well below the theoretical limits of the technology. We discuss our results in the context of auditory neural limits on the representation of 'auditory' objects in a cortical hierarchy and how perceptual training may be used to circumvent these limitations.
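A minimal sketch of vOICe-style image sonification at the 8 × 8 resolution mentioned above, assuming a left-to-right column sweep with pixel rows mapped to sine-tone frequencies. The mapping, frequency range, and durations here are illustrative assumptions, not The vOICe's actual parameters:

```python
import numpy as np

def sonify_image(image, duration=1.0, sr=8000, f_lo=500.0, f_hi=5000.0):
    """Render a 2-D grayscale image (rows x cols, values 0-1) as a
    left-to-right audio sweep: each column becomes a time slice, each
    row a sine partial whose pitch rises with vertical position.
    Illustrative sketch only, not the actual vOICe algorithm."""
    rows, cols = image.shape
    # Log-spaced frequencies; row 0 (top of image) gets the highest pitch
    freqs = np.geomspace(f_hi, f_lo, rows)
    n_col = int(duration * sr / cols)          # samples per column slice
    t = np.arange(n_col) / sr
    out = []
    for c in range(cols):
        col = image[:, c]
        # Sum sinusoids weighted by pixel brightness in this column
        slice_ = sum(a * np.sin(2 * np.pi * f * t)
                     for a, f in zip(col, freqs))
        out.append(slice_)
    signal = np.concatenate(out)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# Coarse 8 x 8 'image': a diagonal line, as in the low-resolution condition
img = np.eye(8)
audio = sonify_image(img)
```

At this resolution each of the 64 pixels contributes at most one brief tone, which makes concrete how little information survives the 8 × 8 ceiling reported above.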


Subject(s)
Auditory Perception/physiology; Blindness/rehabilitation; Discrimination, Psychological; Form Perception/physiology; Touch Perception/physiology; Adolescent; Adult; Blindness/physiopathology; Female; Humans; Male; Self-Help Devices; Sensory Deprivation; Young Adult
4.
PLoS One; 8(2): e57497, 2013.
Article in English | MEDLINE | ID: mdl-23536749

ABSTRACT

In this paper we use empirical loudness modeling to explore a perceptual sub-category of the dynamic range problem of auditory neuroscience. Humans are able to reliably report perceived intensity (loudness), and to discriminate fine intensity differences, over a very large dynamic range. It is usually assumed that loudness and intensity change detection operate upon the same neural signal, and that intensity change detection may be predicted from loudness data and vice versa. However, while loudness grows as intensity is increased, improvement in intensity discrimination performance does not follow the same trend, and so dynamic range estimations of the underlying neural signal from loudness data contradict estimations based on intensity just-noticeable difference (JND) data. In order to account for this apparent paradox we draw on recent advances in auditory neuroscience. We test the hypothesis that a central model, featuring central adaptation to the mean loudness level and operating on the detection of the maximum central-loudness rate of change, can account for the paradoxical data. We use numerical optimization to find adaptation parameters that fit data for continuous-pedestal intensity change detection over a wide dynamic range. The optimized model is tested on a selection of equivalent pseudo-continuous intensity change detection data. We also report a supplementary experiment which confirms the modeling assumption that the detection process may be modeled as rate-of-change. Data are obtained from a listening test (N = 10) using linearly ramped increment-decrement envelopes applied to pseudo-continuous noise with an overall level of 33 dB SPL. Increments with half-ramp durations between 5 and 50,000 ms are used. The intensity JND is shown to increase towards long-duration ramps (p < 10⁻⁶). From the modeling, the following central adaptation parameters are derived: a central dynamic range of 0.215 sones, 95% central normalization, and a central loudness JND constant of 5.5 × 10⁻⁵ sones per ms. From these findings, we argue that loudness reflects peripheral neural coding, while the intensity JND reflects central neural coding.
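The central-adaptation idea above can be sketched as a running-mean adaptation stage followed by a rate-of-change detector. The time constant, normalization factor, and loudness traces below are illustrative placeholders, not the fitted parameters reported in the abstract:

```python
import numpy as np

def max_central_rate(loudness, dt_ms=1.0, tau_ms=500.0, norm=0.95):
    """Subtract a running-mean adaptation term (scaled by a normalization
    factor) from a loudness trace, then return the maximum absolute rate
    of change of the adapted (central) trace, in sones per ms.
    Parameters are illustrative placeholders, not the fitted values."""
    x = np.asarray(loudness, dtype=float)
    alpha = dt_ms / (tau_ms + dt_ms)        # one-pole running-mean coefficient
    state, adapted = x[0], np.empty_like(x)
    for i, v in enumerate(x):
        state += alpha * (v - state)         # adaptation tracks mean loudness
        adapted[i] = v - norm * state        # central normalization
    return np.max(np.abs(np.diff(adapted))) / dt_ms

# A fast ramp yields a larger maximum central rate than a slow ramp of
# equal size, consistent with poorer detection (larger JND) for
# long-duration ramps.
fast = np.concatenate([np.zeros(200), np.linspace(0, 1, 20), np.ones(200)])
slow = np.concatenate([np.zeros(200), np.linspace(0, 1, 2000), np.ones(200)])
```

A detector operating on the maximum rate of change of this adapted signal naturally penalizes slow ramps, which is the qualitative pattern the listening test confirms.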


Subject(s)
Adaptation, Physiological/physiology; Hearing/physiology; Loudness Perception/physiology; Models, Biological; Sound; Computer Simulation; Humans
5.
PLoS One; 8(8): e73590, 2013.
Article in English | MEDLINE | ID: mdl-24009759

ABSTRACT

Recent studies employing speech stimuli to investigate 'cocktail-party' listening have focused on entrainment of cortical activity to modulations at syllabic (5 Hz) and phonemic (20 Hz) rates. The data suggest that cortical modulation filters (CMFs) are dependent on the sound-frequency channel in which modulations are conveyed, potentially underpinning a strategy for separating speech from background noise. Here, we characterize modulation filters in human listeners using a novel behavioral method. Within an 'inverted' adaptive forced-choice increment detection task, listening level was varied whilst contrast was held constant for ramped increments with effective modulation rates between 0.5 and 33 Hz. Our data suggest that modulation filters are tonotopically organized (i.e., vary along the primary, frequency-organized, dimension). This suggests that the human auditory system is optimized to track rapid (phonemic) modulations at high sound-frequencies and slow (prosodic/syllabic) modulations at low frequencies.
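The ramped increment stimuli used in the task above can be sketched as a linear dB-level ramp applied to a noise carrier. The increment size, padding, and sample rate here are illustrative assumptions; only the 33 dB SPL overall level and the ramped-envelope shape come from the abstract:

```python
import numpy as np

def ramped_increment(level_db=33.0, inc_db=3.0, half_ramp_ms=50.0,
                     sr=16000, pad_ms=200.0, seed=0):
    """Build a pseudo-continuous noise carrier whose level rises linearly
    to level_db + inc_db over half_ramp_ms and falls back symmetrically.
    Increment size and padding are illustrative, not the study's values."""
    rng = np.random.default_rng(seed)
    n_ramp = int(sr * half_ramp_ms / 1000)
    n_pad = int(sr * pad_ms / 1000)
    up = np.linspace(0.0, inc_db, n_ramp)
    # Steady pedestal, linear up-ramp, mirrored down-ramp, steady pedestal
    env_db = np.concatenate([np.zeros(n_pad), up, up[::-1], np.zeros(n_pad)])
    env_db += level_db
    carrier = rng.standard_normal(env_db.size)
    # Convert the dB envelope to a linear amplitude scale (re: 1.0 at 0 dB)
    return carrier * 10.0 ** (env_db / 20.0)

sig = ramped_increment()
```

Varying `half_ramp_ms` over several orders of magnitude, as the study did from 5 to 50,000 ms, changes the effective modulation rate of the increment while holding its size constant.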


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Male; Perceptual Masking; Young Adult
6.
PLoS One; 8(9): e74692, 2013.
Article in English | MEDLINE | ID: mdl-24040323

ABSTRACT

The score is a symbolic encoding that describes a piece of music, written according to the conventions of music theory, which must be rendered as sound (e.g., by a performer) before it may be perceived as music by the listener. In this paper we provide a step towards unifying music theory with music perception in terms of the relationship between notated rhythm (i.e., the score) and perceived syncopation. In our experiments we evaluated this relationship by manipulating the score, rendering it as sound and eliciting subjective judgments of syncopation. We used a metronome to provide explicit cues to the prevailing rhythmic structure (as defined in the time signature). Three-bar scores with time signatures of 4/4 and 6/8 were constructed using repeated one-bar rhythm-patterns, with each pattern built from basic half-bar rhythm-components. Our manipulations gave rise to various rhythmic structures, including polyrhythms and rhythms with missing strong- and/or down-beats. Listeners (N = 10) were asked to rate the degree of syncopation they perceived in response to a rendering of each score. We observed higher degrees of syncopation in time signatures of 6/8, for polyrhythms, and for rhythms featuring a missing down-beat. We also found that the location of a rhythm-component within the bar has a significant effect on perceived syncopation. Our findings provide new insight into models of syncopation and point the way towards areas in which the models may be improved.
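A common formal model of the notated-rhythm-to-syncopation mapping studied above is the Longuet-Higgins and Lee metrical weighting scheme. The simplified scoring rule and 4/4 sixteenth-note grid below are an illustrative sketch, not one of the models evaluated in the paper:

```python
# Metrical weights for one 4/4 bar at sixteenth-note resolution
# (Longuet-Higgins & Lee-style: stronger positions get higher weights).
WEIGHTS_44 = [0, -4, -3, -4, -2, -4, -3, -4, -1, -4, -3, -4, -2, -4, -3, -4]

def lhl_syncopation(onsets, weights=WEIGHTS_44):
    """Simplified Longuet-Higgins & Lee-style syncopation score: each note
    onset followed (cyclically) by a metrically stronger silent position
    adds the weight difference. A plain on-beat pattern scores zero."""
    n = len(onsets)
    idx = [i for i in range(n) if onsets[i]]
    score = 0
    for k, i in enumerate(idx):
        j = idx[(k + 1) % len(idx)]                   # next onset, wrapping
        span = range(i + 1, j) if j > i else [*range(i + 1, n), *range(j)]
        rest_w = [weights[p] for p in span]           # silent positions between
        if rest_w and max(rest_w) > weights[i]:
            score += max(rest_w) - weights[i]
    return score

straight = [1 if i % 4 == 0 else 0 for i in range(16)]    # four on the floor
backbeat = [1 if i in (4, 12) else 0 for i in range(16)]  # missing down-beat
```

Under this rule the backbeat pattern, which omits the down-beat, scores higher than the on-beat pattern, matching the direction of the judgments reported above for rhythms with missing down-beats.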


Subject(s)
Auditory Perception/physiology; Music; Acoustic Stimulation; Adult; Female; Humans; Male; Periodicity; Sound; Time Perception