Results 1 - 6 of 6
1.
PLoS Comput Biol ; 9(3): e1002925, 2013.
Article in English | MEDLINE | ID: mdl-23516340

ABSTRACT

Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives, a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success and their conflicts with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming.
The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming.
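The two-stage process described in this abstract (discovery of predictable fragments, then competition weighted by predictive success) can be sketched in a few lines. This is only an illustrative toy, not the published model; the function names, the fragment length limit, and the decay/boost update rule are all assumptions of this sketch.

```python
from collections import defaultdict

def discover_patterns(events, max_len=4):
    """Stage 1 (sketch): collect every contiguous fragment of up to
    max_len events; fragments that repeat are candidate predictable
    patterns, since a repeat is weak evidence of predictive success."""
    counts = defaultdict(int)
    for n in range(1, max_len + 1):
        for i in range(len(events) - n + 1):
            counts[tuple(events[i:i + n])] += 1
    # Keep only fragments seen more than once.
    return {p: c for p, c in counts.items() if c > 1}

def competition_step(strengths, winner, boost=0.2, decay=0.9):
    """Stage 2 (sketch): the currently dominant pattern is strengthened
    while all competing representations decay, so dominance can switch
    over time as in perceptual multistability."""
    return {p: s * decay + (boost if p == winner else 0.0)
            for p, s in strengths.items()}
```

Feeding a repeating "ABA-" sequence into `discover_patterns` yields the whole-cycle fragment among the repeated ones, and iterating `competition_step` lets relative strengths drift between rival organisations.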


Subject(s)
Auditory Perception/physiology , Models, Neurological , Acoustic Stimulation , Adolescent , Adult , Algorithms , Computer Simulation , Female , Humans , Male , Models, Statistical , Sound
2.
Brain Topogr ; 27(4): 565-77, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24271978

ABSTRACT

Predictive accounts of perception have received increasing attention in the past 20 years. Detecting violations of auditory regularities, as reflected by the Mismatch Negativity (MMN) auditory event-related potential, is amongst the phenomena seamlessly fitting this approach. Largely based on the MMN literature, we propose a psychological conceptual framework called the Auditory Event Representation System (AERS), which is based on the assumption that auditory regularity violation detection and the formation of auditory perceptual objects rely on the same predictive regularity representations. Based on this notion, a computational model of auditory stream segregation, called CHAINS, has been developed. In CHAINS, the auditory sensory event representation of each incoming sound is considered as a possible continuation of likely combinations of the preceding sounds in the sequence, thus providing alternative interpretations of the auditory input. Detecting repeating patterns allows predicting upcoming sound events, thus providing a test and potential support for the corresponding interpretation. Alternative interpretations continuously compete for perceptual dominance. In this paper, we briefly describe AERS and deduce some general constraints from this conceptual model. We then go on to illustrate how these constraints are computationally specified in CHAINS.
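The core chaining idea in this abstract (each incoming sound either continues an existing interpretation or starts a new one, so alternative interpretations multiply) can be sketched as follows. This is a toy reconstruction, not the published CHAINS model; the event representation, the `max_gap` continuation criterion, and all names are assumptions of this sketch.

```python
def extend_chains(chains, event, max_gap=2):
    """For each existing chain (one candidate interpretation of the
    sequence so far), the new event can (a) continue that chain, or
    (b) be attributed to a different source, leaving the chain as-is.
    It can also (c) begin a brand-new chain. All hypotheses are kept,
    so alternative interpretations accumulate and can later compete."""
    new_chains = []
    for chain in chains:
        gap = event["t"] - chain[-1]["t"]
        if gap <= max_gap:
            # Continuation hypothesis: the event belongs to this chain.
            new_chains.append(chain + [event])
        # Other-source hypothesis: the chain survives unchanged.
        new_chains.append(chain)
    # New-source hypothesis: the event starts its own chain.
    new_chains.append([event])
    return new_chains
```

In a fuller model, each chain would carry a score based on how well it predicts upcoming events, and the competition between chains would determine the currently perceived organisation.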


Subject(s)
Auditory Perception/physiology , Brain/physiology , Models, Neurological , Computer Simulation , Evoked Potentials, Auditory , Humans
3.
J Acoust Soc Am ; 135(3): 1392-405, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606277

ABSTRACT

While many studies have assessed the efficacy of similarity-based cues for auditory stream segregation, much less is known about whether and how the larger-scale structure of sound sequences supports stream formation and the choice of sound organization. Two experiments investigated the effects of musical melody and rhythm on the segregation of two interleaved tone sequences. The two sets of tones fully overlapped in pitch range but differed from each other in interaural time and intensity. Unbeknownst to the listener, each of the interleaved sequences was created from the notes of a different song. In different experimental conditions, the notes and/or their timing could either follow those of the songs or be scrambled or, in the case of timing, set to be isochronous. Listeners were asked to continuously report whether they heard a single coherent sequence (integrated) or two concurrent streams (segregated). Although temporal overlap between tones from the two streams proved to be the strongest cue for stream segregation, significant effects of tonality and familiarity with the songs were also observed. These results suggest that regular temporal patterns are utilized as cues in auditory stream segregation and that long-term memory is involved in this process.


Subject(s)
Cues , Music , Periodicity , Pitch Perception , Time Perception , Acoustic Stimulation , Adolescent , Adult , Analysis of Variance , Audiometry , Female , Humans , Male , Pitch Discrimination , Psychoacoustics , Time Factors , Young Adult
4.
Front Neurosci ; 8: 25, 2014.
Article in English | MEDLINE | ID: mdl-24616656

ABSTRACT

The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the "ABA-" auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception.
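The "embedded patterns of length four" in this abstract can be made concrete with a small enumeration. A minimal sketch, assuming each pattern keeps some non-empty subset of the tone positions in one "ABA-" cycle and silences the rest; the study's actual training set may have been defined differently (e.g. including phase-shifted variants), so treat this only as an illustration of why there are more alternatives than the traditional Integrated vs. Segregated pair.

```python
from itertools import combinations

CYCLE = ["A", "B", "A", "-"]  # one cycle of the ABA- streaming stimulus

def embedded_patterns(cycle=CYCLE):
    """Enumerate length-four patterns embeddable in one cycle: every
    non-empty subset of the tone positions is kept, and the remaining
    positions are replaced by silence ('-')."""
    tone_positions = [i for i, e in enumerate(cycle) if e != "-"]
    patterns = set()
    for r in range(1, len(tone_positions) + 1):
        for keep in combinations(tone_positions, r):
            patterns.add("".join(cycle[i] if i in keep else "-"
                                 for i in range(len(cycle))))
    return sorted(patterns)
```

Under these assumptions the enumeration yields seven distinct patterns, including "ABA-" (integration) and "A-A-" / "-B--" (the two segregated streams), plus intermediate groupings.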

5.
Front Neurosci ; 8: 64, 2014.
Article in English | MEDLINE | ID: mdl-24778604

ABSTRACT

An audio-visual experiment using moving sound sources was designed to investigate whether the analysis of auditory scenes is modulated by synchronous presentation of visual information. Listeners were presented with an alternating sequence of two pure tones delivered by two separate sound sources. In different conditions, the two sound sources were either stationary or moving on random trajectories around the listener. Both the sounds and the movement trajectories were derived from recordings in which two humans were moving with loudspeakers attached to their heads. Visualized movement trajectories modeled by a computer animation were presented together with the sounds. In the main experiment, behavioral reports on sound organization were collected from young healthy volunteers. The proportion and stability of the different sound organizations were compared between the conditions in which the visualized trajectories matched the movement of the sound sources and when the two were independent of each other. The results corroborate earlier findings that separation of sound sources in space promotes segregation. However, no additional effect of auditory movement per se on the perceptual organization of sounds was obtained. Surprisingly, the presentation of movement-congruent visual cues did not strengthen the effects of spatial separation on segregating auditory streams. Our findings are consistent with the view that bistability in the auditory modality can occur independently from other modalities.

6.
Philos Trans R Soc Lond B Biol Sci ; 367(1591): 1001-12, 2012 Apr 05.
Article in English | MEDLINE | ID: mdl-22371621

ABSTRACT

Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm.
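The abstract's notion of a predictive-fit measure that arbitrates between competing representations can be sketched crudely: cycle each candidate pattern against the observed sequence and score the fraction of events it predicts correctly. This toy scoring rule and both function names are assumptions of the sketch, not the paper's formulation.

```python
def prediction_score(pattern, events):
    """Fraction of observed events correctly predicted when the
    candidate pattern is cycled against the sequence: a crude measure
    of how well a representation describes the acoustic scene."""
    hits = sum(1 for i, e in enumerate(events)
               if pattern[i % len(pattern)] == e)
    return hits / len(events)

def dominant(representations, events):
    """The representation with the highest predictive fit wins the
    competition for conscious perception (ties broken arbitrarily)."""
    return max(representations, key=lambda p: prediction_score(p, events))
```

For a clean "ABA-ABA-" sequence the integrated pattern "ABA-" scores 1.0 while a one-stream pattern like "A-A-" scores lower, so integration dominates; noisier input or attentional biases would be needed to make dominance switch over time.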


Subject(s)
Auditory Perception/physiology , Acoustic Stimulation , Auditory Pathways/physiology , Brain/physiology , Humans , Models, Neurological , Models, Psychological , Time Factors