Results 1 - 20 of 26
1.
Proc Biol Sci; 290(2004): 20230201, 2023 Aug 9.
Article in English | MEDLINE | ID: mdl-37554035

ABSTRACT

It is generally argued that distress vocalizations, a common modality for alerting conspecifics across a wide range of terrestrial vertebrates, share acoustic features that allow heterospecific communication. Yet studies suggest that the acoustic traits used to decode distress may vary between species, leading to decoding errors. Here we found through playback experiments that Nile crocodiles are attracted to infant hominid cries (bonobo, chimpanzee and human), and that the intensity of crocodile response depends critically on a set of specific acoustic features (mainly deterministic chaos, harmonicity and spectral prominences). Our results suggest that crocodiles are sensitive to the degree of distress encoded in the vocalizations of phylogenetically very distant vertebrates. A comparison of these results with those obtained with human subjects confronted with the same stimuli further indicates that crocodiles and humans use different acoustic criteria to assess the distress encoded in infant cries. Interestingly, the acoustic features driving crocodile reaction are likely to be more reliable markers of distress than those used by humans. These results highlight that the acoustic features encoding information in vertebrate sound signals are not necessarily identical across species.


Subjects
Auditory Perception, Humans, Animals, Infant, Crying, Acoustics, Alligators and Crocodiles/physiology, Hominidae, Animal Vocalization, Sound
2.
J Acoust Soc Am; 143(1): 575, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29390738

ABSTRACT

Two experiments were conducted to investigate how the perceptual organization of a multi-tone mixture interacts with global and partial loudness judgments. Grouping (single-object) and segregating (two-object) conditions were created using frequency modulation by applying the same or different modulation frequencies to the odd- and even-rank harmonics. While in Experiment 1 (Exp. 1) the two objects had the same loudness, in Experiment 2 (Exp. 2), loudness level differences (LLD) were introduced (LLD = 6, 12, 18, or 24 phons). In the two-object condition, the loudness of each object was not affected by the mixture when LLD = 0 (Exp. 1), otherwise (Exp. 2), the loudness of the softest object was modulated by LLD, and the loudness of the loudest object was the same regardless of whether it was presented in or out of the mixture. In the single- and the two-object conditions, the global loudness of the mixture was close to the loudness of the loudest object. Taken together, these results suggest that while partial loudness judgments are dependent on the perceptual organization of the scene, global loudness is not. Yet, both partial and global loudness computations are governed by relative "saliences" between different auditory objects (in the segregating condition) or within a single object (in the grouping condition).

3.
J Acoust Soc Am; 142(3): 1674, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28964066

ABSTRACT

Differences in spatial cues, including interaural time differences (ITDs), interaural level differences (ILDs) and spectral cues, can lead to stream segregation of alternating noise bursts. It is unknown how effective such cues are for streaming sounds with realistic spectro-temporal variations. In particular, it is not known whether the high-frequency spectral cues associated with elevation remain sufficiently robust under such conditions. To answer these questions, sequences of consonant-vowel tokens were generated and filtered by non-individualized head-related transfer functions to simulate the cues associated with different positions in the horizontal and median planes. A discrimination task showed that listeners could discriminate changes in interaural cues both when the stimulus remained constant and when it varied between presentations. However, discrimination of changes in spectral cues was much poorer in the presence of stimulus variability. A streaming task, based on the detection of repeated syllables in the presence of interfering syllables, revealed that listeners can use both interaural and spectral cues to segregate alternating syllable sequences, despite the large spectro-temporal differences between stimuli. However, only the full complement of spatial cues (ILDs, ITDs, and spectral cues) resulted in obligatory streaming in a task that encouraged listeners to integrate the tokens into a single stream.


Subjects
Acoustic Stimulation/methods, Cues, Perceptual Masking, Speech Perception, Adult, Analysis of Variance, Female, Hearing/physiology, Humans, Male, Middle Aged, Phonetics, Sound Localization, Time Factors
4.
Adv Exp Med Biol; 894: 355-362, 2016.
Article in English | MEDLINE | ID: mdl-27080676

ABSTRACT

In psychoacoustics, work on pitch perception attempts to distinguish between envelope and fine-structure cues, which are generally viewed as independent and are separated using a Hilbert transform. To distinguish these cues empirically in pitch perception experiments, a dedicated signal has been proposed: an unresolved harmonic complex tone with all harmonics shifted by the same amount in Hz. As the frequency spacing between adjacent components remains regular and identical to that of the original harmonic complex tone, such a signal has the same envelope but a different fine structure, so any perceptual difference between the two signals is interpreted as a fine-structure-based percept. Here, as illustrated by very basic simulations, I suggest that this generally accepted orthogonal view may rest on a conceptual error. In fact, neither the fine structure nor the envelope needs to be fully encoded to explain pitch perception. Sufficient information is conveyed by the peaks of the fine structure located near a maximum of the envelope. Envelope and fine structure could then be in perpetual interaction, and pitch would be conveyed by "the fine structure under the envelope". Moreover, as the temporal delay between the peaks of interest is considerably longer than the delay between two adjacent peaks of the fine structure, such a mechanism would be much less constrained by the phase-locking limitation of the auditory system. Several data sets from the literature are discussed from this new conceptual point of view.
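The shifted-complex construction described above can be sketched numerically. The following Python/NumPy fragment uses illustrative values for the fundamental, the shift, and the harmonic ranks (none taken from the paper); it builds the two signals and extracts their Hilbert envelopes, confirming that a common shift of all components leaves the envelope untouched while changing the fine structure:

```python
import numpy as np

fs = 16000                            # sampling rate (Hz)
t = np.arange(int(fs * 0.2)) / fs     # 200-ms window
f0 = 100.0                            # component spacing (Hz)
shift = 30.0                          # common shift applied to every component (Hz)
ranks = range(10, 16)                 # high, unresolved harmonic ranks

# Harmonic complex: components at n*f0
original = sum(np.cos(2 * np.pi * n * f0 * t) for n in ranks)
# Shifted complex: components at n*f0 + shift; spacing (hence envelope) unchanged
shifted = sum(np.cos(2 * np.pi * (n * f0 + shift) * t) for n in ranks)

def envelope(x):
    """Magnitude of the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))
```

Because every component is displaced by the same amount, the analytic-signal magnitude of the shifted complex equals that of the original, which is exactly the property the dedicated stimulus exploits.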


Subjects
Pitch Perception/physiology, Acoustic Stimulation, Cues, Humans
5.
J Acoust Soc Am; 138(6): 3500-12, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26723307

ABSTRACT

Interaural time differences (ITDs) and interaural level differences (ILDs) associated with monaural spectral differences (coloration) enable the localization of sound sources. The influence of these spatial cues as well as their relative importance on obligatory stream segregation were assessed in experiment 1. A temporal discrimination task favored by integration was used to measure obligatory stream segregation for sequences of speech-shaped noises. Binaural and monaural differences associated with different spatial positions increased discrimination thresholds, indicating that spatial cues can induce stream segregation. The results also demonstrated that ITDs and coloration were relatively more important cues compared to ILDs. Experiment 2 questioned whether sound segregation takes place at the level of acoustic cue extraction (ITD per se) or at the level of object formation (perceived azimuth). A difference in ITDs between stimuli was introduced either consistently or inconsistently across frequencies, leading to clearly lateralized sounds or blurred lateralization, respectively. Conditions with ITDs and clearly perceived azimuths induced significantly more segregation than the condition with ITDs but reduced lateralization. The results suggested that segregation was mainly based on a difference in lateralization, although the extraction of ITDs might have also helped segregation up to a ceiling magnitude.

6.
J Acoust Soc Am; 136(1): 5-8, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24993189

ABSTRACT

Multiple sound reflections from room materials and a listener's head induce slight spectral modifications of sounds. This coloration depends on the listener and source positions, and on the room itself. This study investigated whether coloration could help segregate competing sources. Obligatory streaming was evaluated for diotic speech-shaped noises using a rhythmic discrimination task. Thresholds for detecting anisochrony were always significantly higher when stimuli differed in spectrum. The tested differences corresponded to three spatial configurations involving different levels of head and room coloration. These results suggest that, despite the generally deleterious effects of reverberation on speech intelligibility, coloration could favor source segregation.


Subjects
Health Facility Architecture, Noise/adverse effects, Perceptual Masking, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Speech Audiometry, Auditory Threshold, Cues, Discrimination (Psychology), Humans, Psychoacoustics, Sound Spectrography, Speech Acoustics, Time Factors, Vibration, Voice Quality
7.
Int J Occup Saf Ergon; 30(1): 264-271, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38124394

ABSTRACT

A model was developed to assess how elevated absolute thresholds and enlarged auditory filters can impede the ability to detect alarms in a noisy background, such alarms being of paramount importance to ensure the safety of workers. Based on previously measured masked thresholds of 80 listeners in five groups (normal hearing to strongly impaired), the model was derived from signal detection theory (SDT) applied to Glasberg and Moore's excitation pattern model. The model can describe the influence of absolute thresholds and enlarged auditory filters together or separately on the detection ability for normal hearing and hearing-impaired listeners with various hearing profiles. Furthermore, it suggests that enlarged auditory filters alone can explain all of the impairment in this specific alarm detection task. Finally, the possibility of further development of the model into an alarm detection model is discussed.
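As a rough illustration of how enlarged auditory filters alone can elevate masked thresholds, the sketch below applies a simple power-spectrum model with the Glasberg and Moore ERB formula. It is not the model developed in the paper (which uses full excitation patterns and signal detection theory); the broadening factor and detection criterion are assumptions:

```python
import numpy as np

def erb_width(fc_hz, broadening=1.0):
    """Glasberg & Moore ERB (Hz) of the auditory filter centred at fc_hz.
    broadening > 1 mimics the enlarged filters of impaired listeners."""
    return broadening * 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def masked_threshold_db(fc_hz, noise_spectrum_level_db, k_db=0.0, broadening=1.0):
    """Power-spectrum model: an alarm tone at fc_hz is just detectable when its
    level exceeds the noise power admitted by the auditory filter by k_db."""
    return (noise_spectrum_level_db
            + 10.0 * np.log10(erb_width(fc_hz, broadening))
            + k_db)
```

In this simplified view, doubling the filter bandwidth (`broadening=2`) raises the masked threshold by about 3 dB regardless of any change in absolute threshold, consistent with filter broadening degrading alarm detection on its own.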


Subjects
Hearing, Noise, Humans, Auditory Threshold
8.
iScience; 26(4): 106441, 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37035010

ABSTRACT

Rapidly sorting the information contained in a stream of stimuli is a major challenge for animals. One cognitive mechanism for achieving this goal is categorization, where the receiving individual considers a continuous variation of a stimulus as belonging to discrete categories. Using playback experiments in a naturalistic setup, here we show that crocodiles confronted with an acoustic continuum ranging from a frog call to a crocodile call classify each acoustic variant into one of these two categories, establishing a meaningful boundary where no acoustic boundary exists. With GO/NO-GO experiments, we then observe that this boundary is defined along the continuum following learning. We further demonstrate that crocodilians rely on the spectral envelope of sounds to categorize stimuli. This study suggests that sound categorization in crocodilians is a pre-wired faculty allowing rapid decision-making and highlights the learning-dependent plasticity involved in defining the boundary between sound categories.

9.
R Soc Open Sci; 9(8): 210342, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36061524

ABSTRACT

Reverberation can have a strong detrimental effect on speech intelligibility in noise. Two main monaural effects were studied here: the temporal smearing of the target speech, which makes the speech less understandable, and the temporal smearing of the noise, which reduces the opportunity for listening in the masker dips. These phenomena have been shown to affect normal-hearing (NH) listeners. The aim of this study was to determine whether hearing-impaired (HI) listeners are more affected by reverberation and, if so, which of these two effects is responsible. The effects were investigated separately and in combination by applying reverberation to the target speech, to the noise masker, or to both sources. Binaural effects were not investigated here. Intelligibility scores in the presence of stationary and modulated noise were systematically compared for NH and HI listeners in these conditions. At the optimal signal-to-noise ratios (SNRs), that is, the SNRs least affected by floor and ceiling effects, the temporal smearing of both the speech and the noise had a similar effect for the HI and NH listeners, so that reverberation was not more detrimental for the HI listeners. There was only a very limited dip-listening benefit at this SNR for either group. Some differences across groups appeared at the SNR maximizing dip listening, but they could not be directly related to an effect of reverberation and were instead due to floor effects or to the reduced ability of the HI listeners to benefit from dip listening, even in the absence of reverberation.

10.
Int J Occup Saf Ergon; 28(4): 2385-2395, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34633273

ABSTRACT

The influence of wearing hearing protectors on the detection of seven railway warning signals in noise was evaluated by comparing masked thresholds measured with and without hearing protectors in a total of 80 listeners. The results show that wearing hearing protection devices (HPDs) improves audibility for normal-hearing listeners, whereas it tends to impair audibility for hearing-impaired listeners. Moreover, the impairment greatly depends on the acoustical characteristics of the warning signal. Statistical analyses were performed in order to propose a criterion for hearing-impaired listeners that guarantees their safety when wearing hearing protectors. Setting aside one high-pitched signal that is not suitable as a warning signal, the conclusion is that safety is ensured when the average absolute hearing threshold (averaged at 500, 1000 and 2000 Hz for the best ear) remains below a hearing level of 30 dB.
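The proposed criterion (mean absolute threshold at 500, 1000 and 2000 Hz for the best ear below 30 dB HL) can be expressed directly. The helper below is a hypothetical illustration, not code from the study:

```python
def meets_hpd_safety_criterion(left_db_hl, right_db_hl, limit_db_hl=30.0):
    """Check the proposed HPD safety criterion.

    Each ear argument is a (t500, t1000, t2000) tuple of absolute hearing
    thresholds in dB HL; the criterion uses the best (lowest-average) ear.
    """
    best_ear_avg = min(sum(left_db_hl) / 3.0, sum(right_db_hl) / 3.0)
    return best_ear_avg < limit_db_hl
```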


Subjects
Hearing, Noise, Humans
11.
Curr Biol; 32(2): R70-R71, 2022 Jan 24.
Article in English | MEDLINE | ID: mdl-35077689

ABSTRACT

Planet Earth is becoming increasingly difficult for large animal species to inhabit. Yet, these species are of major importance for the functioning of the biosphere and their progressive disappearance is accompanied by profound negative alterations of ecosystems (see Supplemental information). To implement effective conservation measures, it is essential to have a detailed knowledge of the biology of these species. Here, we show that the hippopotamus Hippopotamus amphibius, an iconic African megaherbivore for which little is known about social communication, uses vocal recognition to manage relationships between territorial groups. We conducted playback experiments on groups of hippos and observed their response to vocalizations from an individual of the same group (familiar), a group from the same lake (neighbor) and a distant group (stranger). We found that stranger vocalizations induced a stronger behavioral reaction than the other two stimuli. In addition to showing that hippos are able to identify categories of conspecifics based on vocal signatures, our study demonstrates that hippo groups are territorial entities that behave less aggressively toward their neighbors than toward strangers. These new behavioral data suggest that habituation playbacks prior to conservation translocation operations may help reduce the risk of conflict between individuals that have never seen each other.


Subjects
Artiodactyla, Ecosystem, Animals, Artiodactyla/physiology, Recognition (Psychology), Territoriality, Animal Vocalization/physiology
12.
Commun Biol; 5(1): 869, 2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36008592

ABSTRACT

Ambient noise is a major constraint on acoustic communication in both animals and humans. One mechanism to overcome this problem is Spatial Release from Masking (SRM), the ability to distinguish a target sound signal from masking noise when both sources are spatially separated. SRM is well described in humans but has been poorly explored in animals. Although laboratory tests with trained individuals have suggested that SRM may be a widespread ability in vertebrates, it may play a limited role in natural environments. Here we combine field experiments with investigations in captivity to test whether crocodilians experience SRM. We show that two species of crocodilians are able to use SRM in their natural habitat and that it quickly becomes effective for small angles between the target signal source and the noise source, becoming maximal when the angle exceeds 15°. Crocodiles can therefore take advantage of SRM to improve sound scene analysis and the detection of biologically relevant signals.


Subjects
Alligators and Crocodiles, Perceptual Masking, Acoustic Stimulation, Acoustics, Animals, Humans, Noise
13.
J Acoust Soc Am; 130(1): 283-91, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21786898

ABSTRACT

Lip-reading has been shown to improve the intelligibility of speech in multitalker situations, where auditory stream segregation naturally takes place. This study investigated whether the benefit of lip-reading is a result of a primary audiovisual interaction that enhances the obligatory streaming mechanism. Two behavioral experiments were conducted involving sequences of French vowels that alternated in fundamental frequency. In Experiment 1, subjects attempted to identify the order of items in a sequence. In Experiment 2, subjects attempted to detect a disruption to temporal isochrony across alternate items. Both tasks are disrupted by streaming, thus providing a measure of primary or obligatory streaming. Visual lip gestures articulating alternate vowels were synchronized with the auditory sequence. Overall, the results were consistent with the hypothesis that visual lip gestures enhance segregation by affecting primary auditory streaming. Moreover, increases in the naturalness of visual lip gestures and auditory vowels, and corresponding increases in audiovisual congruence may potentially lead to increases in the effect of visual lip gestures on streaming.


Subjects
Cues, Lipreading, Noise/adverse effects, Perceptual Masking, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adolescent, Analysis of Variance, Pure-Tone Audiometry, Auditory Threshold, Humans, Photic Stimulation, Speech Acoustics, Time Factors, Young Adult
14.
J Clin Med; 10(10), 2021 May 13.
Article in English | MEDLINE | ID: mdl-34068067

ABSTRACT

In the case of hearing loss, cochlear implants (CI) allow for the restoration of hearing. Despite the advantages of CIs for speech perception, CI users still complain about their poor perception of their auditory environment. Aiming to assess non-verbal auditory perception in CI users, we developed five listening tests. These tests measure pitch change detection, pitch direction identification, pitch short-term memory, auditory stream segregation, and emotional prosody recognition, along with perceived intensity ratings. In order to test the potential benefit of visual cues for pitch processing, the three pitch tests included half of the trials with visual indications to perform the task. We tested 10 normal-hearing (NH) participants with material being presented as original and vocoded sounds, and 10 post-lingually deaf CI users. With the vocoded sounds, the NH participants had reduced scores for the detection of small pitch differences, and reduced emotion recognition and streaming abilities compared to the original sounds. Similarly, the CI users had deficits for small differences in the pitch change detection task and emotion recognition, as well as a decreased streaming capacity. Overall, this assessment allows for the rapid detection of specific patterns of non-verbal auditory perception deficits. The current findings also open new perspectives about how to enhance pitch perception capacities using visual cues.

15.
J Acoust Soc Am; 128(1): EL1-7, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20649182

ABSTRACT

As previously suggested, attention may increase segregation via enhancement and suppression sensory mechanisms. To test this hypothesis, we proposed an interleaved melody paradigm with two rhythm conditions applied to familiar target melodies and unfamiliar distractor melodies sharing pitch and timbre properties. When rhythms of both target and distractor were irregular, target melodies were identified above chance level. A sensory enhancement mechanism guided by listeners' knowledge may have helped to extract targets from the interleaved sequence. When the distractor was rhythmically regular, performance was increased, suggesting that the distractor may have been suppressed by a sensory suppression mechanism.


Subjects
Attention, Auditory Perception, Music, Perceptual Masking, Periodicity, Recognition (Psychology), Acoustic Stimulation, Adolescent, Adult, Cues, Humans, Signal Detection (Psychology), Time Perception, Young Adult
16.
J Acoust Soc Am; 124(5): 3076-87, 2008 Nov.
Article in English | MEDLINE | ID: mdl-19045793

ABSTRACT

Cochlear-implant (CI) users often have difficulties perceiving speech in noisy environments. Although this problem likely involves auditory scene analysis, few studies have examined sequential segregation in CI listening situations. The present study aims to assess the possible role of fundamental frequency (F0) cues for the segregation of vowel sequences, using a noise-excited envelope vocoder that simulates certain aspects of CI stimulation. Obligatory streaming was evaluated using an order-naming task in two experiments involving normal-hearing subjects. In the first experiment, it was found that streaming did not occur based on F0 cues when natural-duration vowels were processed to reduce spectral cues using the vocoder. In the second experiment, shorter duration vowels were used to enhance streaming. Under these conditions, F0-related streaming appeared even when vowels were processed to reduce spectral cues. However, the observed segregation could not be convincingly attributed to temporal periodicity cues. A subsequent analysis of the stimuli revealed that an F0-related spectral cue could have elicited the observed segregation. Thus, streaming under conditions of severely reduced spectral cues, such as those associated with CIs, may potentially occur as a result of this particular cue.
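A noise-excited envelope vocoder of the kind used to simulate CI stimulation can be sketched as follows. This minimal version uses FFT brick-wall filters, and the band count, band edges and envelope cutoff are illustrative choices, not the exact processing of the study:

```python
import numpy as np

def vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=4000.0, seed=0):
    """Minimal noise-excited envelope vocoder (FFT brick-wall filters)."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        # Band-limit the input, then extract its envelope (rectify + low-pass)
        band = np.fft.irfft(np.fft.rfft(signal) * band_mask, n)
        env = np.fft.irfft(np.fft.rfft(np.abs(band)) * (freqs < 50.0), n)
        env = np.maximum(env, 0.0)
        # Excite the envelope with noise filtered into the same band
        noise_band = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * band_mask, n)
        out += env * noise_band
    return out
```

The output preserves the per-band temporal envelopes of the input while replacing its fine structure with noise, which is why such processing removes most spectral and temporal-fine-structure cues to F0.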


Subjects
Hearing/physiology, Phonetics, Speech Intelligibility, Speech Perception/physiology, Speech, Auditory Threshold, Cochlear Implants, Cues, Humans, Periodicity, Pitch Perception, Reference Values, Sound Localization/physiology
17.
Hear Res; 231(1-2): 32-41, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17597319

ABSTRACT

Although segregation of both simultaneous and sequential speech items may be involved in the reception of speech in noisy environments, research on the latter is relatively sparse. Further, previous studies examining the ability of hearing-impaired listeners to form distinct auditory streams have produced mixed results. Finally, there is little work investigating streaming in cochlear implant recipients, who also have poor frequency resolution. The present study focused on the mechanisms involved in the segregation of vowel sequences and potential limitations to segregation associated with poor frequency resolution. An objective temporal-order paradigm was employed in which listeners reported the order of constituent vowels within a sequence. In Experiment 1, it was found that fundamental frequency based mechanisms contribute to segregation. In Experiment 2, reduced frequency tuning often associated with hearing impairment was simulated in normal-hearing listeners. In that experiment, it was found that spectral smearing of the vowels increased accurate identification of their order, presumably by reducing the tendency to form separate auditory streams. These experiments suggest that a reduction in spectral resolution may result in a reduced ability to form separate auditory streams, which may contribute to the difficulties of hearing-impaired listeners, and probably cochlear implant recipients as well, in multi-talker cocktail-party situations.


Subjects
Cochlear Implants, Language, Speech Discrimination Tests, Acoustic Stimulation, Adult, Auditory Threshold, Cochlear Implantation, Hearing, Hearing Loss, Humans, Perception, Phonetics, Speech, Speech Perception, Time Factors
18.
Hear Res; 344: 235-243, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27923739

ABSTRACT

Differences in fundamental frequency (F0) between voiced sounds are known to be a strong cue for stream segregation. However, speech consists of both voiced and unvoiced sounds, and less is known about whether and how the unvoiced portions are segregated. This study measured listeners' ability to integrate or segregate sequences of consonant-vowel tokens, comprising a voiceless fricative and a vowel, as a function of the F0 difference between interleaved sequences of tokens. A performance-based measure was used, in which listeners detected the presence of a repeated token either within one sequence or between the two sequences (measures of voluntary and obligatory streaming, respectively). The results showed a systematic increase of voluntary stream segregation as the F0 difference between the two interleaved sequences increased from 0 to 13 semitones, suggesting that F0 differences allowed listeners to segregate speech sounds, including the unvoiced portions. In contrast to the consistent effects of voluntary streaming, the trend towards obligatory stream segregation at large F0 differences failed to reach significance. Listeners were no longer able to perform the voluntary-streaming task reliably when the unvoiced portions were removed from the stimuli, suggesting that the unvoiced portions were used and correctly segregated in the original task. The results demonstrate that streaming based on F0 differences occurs for natural speech sounds, and that the unvoiced portions are correctly assigned to the corresponding voiced portions.
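The F0 separations in such streaming experiments are specified in semitones; the conversion to and from Hz is a one-line relationship, sketched here for reference:

```python
import math

def semitones_between(f_a, f_b):
    """Interval in semitones between two fundamental frequencies (Hz)."""
    return 12.0 * math.log2(f_b / f_a)

def shift_f0(f0_hz, semitones):
    """F0 (Hz) shifted upward by the given number of semitones."""
    return f0_hz * 2.0 ** (semitones / 12.0)
```

For example, the largest separation tested here, 13 semitones, corresponds to a frequency ratio slightly above an octave (2.12:1).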


Subjects
Cues, Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Adolescent, Adult, Speech Audiometry, Auditory Threshold, Female, Humans, Male, Middle Aged, Young Adult
19.
Hear Res; 184(1-2): 41-50, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14553902

ABSTRACT

Fifteen initially inexperienced subjects were trained for 4 weeks (12 2-h sessions) in frequency discrimination with pure tones around 88, 250, or 1605 Hz, or amplitude modulation rate discrimination of noise bands, using modulation rates around 88 or 250 Hz. Before, in the middle of, and after this training period, pure-tone frequency discrimination thresholds (DLFs), harmonic complex tone fundamental frequency discrimination thresholds (DLF0s), and amplitude modulation rate discrimination thresholds (DLFMs) were measured in several conditions including the trained one. Training in pure-tone frequency discrimination resulted in significantly larger improvements in DLF0s when the test complexes contained resolved harmonics than when they were composed of unresolved harmonics. This result supports the hypothesis that the discrimination of the F0 of resolved harmonics shares common underlying mechanisms with the frequency discrimination of pure tones. Training in rate discrimination did not result in larger DLF0 improvements for unresolved than for resolved harmonics.


Subjects
Generalization (Psychology), Learning/physiology, Pitch Discrimination/physiology, Acoustic Stimulation/methods, Adult, Auditory Threshold, Humans, Noise, Teaching
20.
Philos Trans R Soc Lond B Biol Sci; 367(1591): 896-905, 2012 Apr 5.
Article in English | MEDLINE | ID: mdl-22371612

ABSTRACT

This special issue presents research concerning multistable perception in different sensory modalities. Multistability occurs when a single physical stimulus produces alternations between different subjective percepts. Multistability was first described for vision, where it occurs, for example, when different stimuli are presented to the two eyes or for certain ambiguous figures. It has since been described for other sensory modalities, including audition, touch and olfaction. The key features of multistability are: (i) stimuli have more than one plausible perceptual organization; (ii) these organizations are not compatible with each other. We argue here that most if not all cases of multistability are based on competition in selecting and binding stimulus information. Binding refers to the process whereby the different attributes of objects in the environment, as represented in the sensory array, are bound together within our perceptual systems, to provide a coherent interpretation of the world around us. We argue that multistability can be used as a method for studying binding processes within and across sensory modalities. We emphasize this theme while presenting an outline of the papers in this issue. We end with some thoughts about open directions and avenues for further research.


Subjects
Perception/physiology, Sensation/physiology, Auditory Perception/physiology, Humans, Neurological Models, Psychological Models, Smell/physiology, Visual Perception/physiology