1.
Brain Sci; 14(6), 2024 May 24.
Article in English | MEDLINE | ID: mdl-38928534

ABSTRACT

Auditory spatial cues contribute to two distinct functions: one leads to explicit localization of sound sources, while the other provides a location-linked representation of sound objects. Behavioral and imaging studies have demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented a dissociation between explicit sound localization, which was heavily impaired, and the fully preserved use of spatial cues for sound object segregation; the latter involves location-linked encoding of sound objects. We review here the evidence pertaining to brain regions involved in the location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects that changed their locations or remained stationary. A systematic search identified 1 AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects strongly involves the left hemisphere and, to a lesser degree, the right hemisphere. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, emotional valence benefits from location-linked encoding as well.

2.
Acoust Sci Technol; 41(1): 113-120, 2020 Jan.
Article in English | MEDLINE | ID: mdl-34305431

ABSTRACT

A review is provided of data published or presented by the authors from two populations of subjects (normal-hearing listeners and patients fit with cochlear implants, CIs) involving research on sound source localization when listeners move. The overall theme of the review is that sound source localization requires an integration of auditory-spatial and head-position cues and is therefore a multisystem process. Research with normal-hearing listeners includes work on the Wallach Azimuth Illusion and on further aspects of sound source localization perception when listeners and sound sources rotate. Research with CI patients covers sound source localization performance by patients fit with a single CI, bilateral CIs, a CI and a hearing aid (bimodal patients), and single-sided-deaf patients with one normally functioning ear and the other ear fit with a CI. Past research involving stationary CI patients and more recent data on CI patients' use of head rotation to localize sound sources are summarized.

3.
J Assoc Res Otolaryngol; 17(3): 195-207, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26993807

ABSTRACT

Listeners can perceive interleaved sequences of sounds from two or more sources as segregated streams. In humans, physical separation of sound sources is a major factor enabling such stream segregation. Here, we examine spatial stream segregation with a psychophysical measure in domestic cats. Cats depressed a pedal to initiate a target sequence of brief sound bursts in a particular rhythm and released the pedal when the rhythm changed. The target bursts were interleaved with a competing sequence of bursts that could differ in source location but otherwise were identical to the target bursts. The task was possible only when the sources were heard as segregated streams. When the sound bursts had broad spectra, cats could detect the rhythm change when target and competing sources were separated by as little as 9.4°. Essentially equal levels of performance were observed when frequencies were restricted to a high band (4 to 25 kHz), in which the principal spatial cues presumably were related to sound levels. When the stimulus band was restricted to 0.4 to 1.6 kHz, leaving interaural time differences as the principal spatial cue, performance was severely degraded. The frequency sensitivity of cats in this task contrasts with that of humans, who show better spatial stream segregation with low- than with high-frequency sounds. Possible explanations for the species difference include the smaller interaural delays available to cats due to the smaller size of their heads and the potentially greater sound-level cues available due to the cat's frontally directed pinnae and higher audible frequency range.
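The point about smaller heads yielding smaller interaural delays can be illustrated with the classic Woodworth spherical-head approximation, ITD ≈ (a/c)(θ + sin θ), where a is the head radius, c the speed of sound, and θ the source azimuth. This is a generic textbook model, not a formula from the study above, and the head radii used here are rough illustrative values, not measurements reported by the authors:

```python
import math

C = 343.0  # speed of sound in air, m/s

def max_itd_us(head_radius_m: float) -> float:
    """Maximum interaural time difference (source at 90° azimuth),
    from the Woodworth spherical-head model, in microseconds."""
    theta = math.pi / 2  # azimuth in radians; 90° gives the largest ITD
    return (head_radius_m / C) * (theta + math.sin(theta)) * 1e6

# Illustrative head radii (assumed values): human ~8.75 cm, cat ~3 cm.
print(f"human max ITD: {max_itd_us(0.0875):.0f} us")  # roughly 650-660 us
print(f"cat   max ITD: {max_itd_us(0.030):.0f} us")   # roughly 220-230 us
```

Under these assumptions the cat's maximum ITD is about a third of the human's, which is consistent with the authors' suggestion that low-frequency timing cues are less useful to cats than to humans.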


Subjects
Auditory Perception/physiology, Cats/physiology, Animals, Auditory Threshold, Cues, Perceptual Masking, Species Specificity, Task Performance and Analysis