Results 1 - 9 of 9
1.
J Neurosci ; 38(21): 4977-4984, 2018 05 23.
Article in English | MEDLINE | ID: mdl-29712782

ABSTRACT

The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene.

SIGNIFICANCE STATEMENT: Often, when we think of auditory spatial information, we think of where sounds are coming from, that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a sound wave into objects. Essentially, when sounds are farther apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
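The decoding analysis lends itself to a short illustration. Below is a minimal sketch of MVPA-style decoding with a linear support vector machine in scikit-learn, assuming voxel patterns have already been extracted from the regions of interest; the data shapes, labels, and cross-validation scheme are placeholders, not the paper's actual pipeline.

```python
# Illustrative sketch only: shapes, labels, and CV scheme are assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500                 # hypothetical: trials x ROI voxels
X = rng.normal(size=(n_trials, n_voxels))    # stand-in for activity patterns from Heschl's gyrus / planum temporale
y = rng.integers(0, 2, size=n_trials)        # hypothetical labels: small vs. large spatial separation

clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
print(f"mean decoding accuracy: {scores.mean():.2f}")  # compare against the 0.5 chance level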


Subjects
Auditory Cortex/physiology, Sound Localization/physiology, Space Perception/physiology, Acoustic Stimulation, Adult, Auditory Cortex/diagnostic imaging, Auditory Perception/physiology, Brain Mapping, Computer Simulation, Female, Humans, Magnetic Resonance Imaging, Male, Support Vector Machine
2.
Neural Plast ; 2016: 7217630, 2016.
Article in English | MEDLINE | ID: mdl-26885405

ABSTRACT

After sensory loss, the deprived cortex can reorganize to process information from the remaining modalities, a phenomenon known as cross-modal reorganization. In blind people, this cross-modal processing supports compensatory behavioural enhancements in the nondeprived modalities. Deaf people also show some compensatory visual enhancements, but a direct relationship between these abilities and cross-modally reorganized auditory cortex has only been established in an animal model, the congenitally deaf cat, and not in humans. Using T1-weighted magnetic resonance imaging, we measured cortical thickness in the planum temporale, Heschl's gyrus and sulcus, the middle temporal area MT+, and the calcarine sulcus in early-deaf persons. We tested for a correlation between this measure and visual motion detection thresholds, a visual function in which deaf people show enhancements compared to hearing people. We found that the cortical thickness of a region in the right hemisphere planum temporale, typically an auditory region, was greater in deaf individuals with better visual motion detection thresholds. This same region has previously been implicated in functional imaging studies as important for functional reorganization. The structure-behaviour correlation observed here demonstrates this area's involvement in compensatory vision and indicates an anatomical correlate, increased cortical thickness, of cross-modal plasticity.
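The core analysis here is a structure-behaviour correlation, which can be sketched in a few lines, assuming per-participant cortical thickness and motion detection thresholds have already been extracted; the numbers below are simulated placeholders.

```python
# Minimal sketch, assuming per-participant values are already extracted;
# the thickness and threshold values below are fabricated placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
motion_threshold = rng.uniform(0.5, 2.0, size=20)                      # deg/s; lower = better detection
thickness_mm = 2.5 - 0.3 * motion_threshold + rng.normal(0, 0.1, 20)   # right-PT cortical thickness

r, p = pearsonr(thickness_mm, motion_threshold)
print(f"r = {r:.2f}, p = {p:.3f}")   # negative r: thicker PT goes with lower (better) thresholds
```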


Subjects
Auditory Cortex/physiopathology, Deafness/physiopathology, Functional Laterality/physiology, Motion Perception/physiology, Visual Perception/physiology, Adult, Auditory Cortex/pathology, Brain Mapping, Deafness/pathology, Female, Humans, Magnetic Resonance Imaging, Male, Organ Size/physiology, Photic Stimulation, Young Adult
3.
J Cogn Neurosci ; 27(1): 150-63, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25000527

ABSTRACT

Cross-modal reorganization after sensory deprivation is a model for understanding brain plasticity. Although it is a well-documented phenomenon, we still know little about the mechanisms underlying it or the factors that constrain and promote it. Using fMRI, we identified visual motion-related activity in 17 early-deaf and 17 hearing adults. We found that, in the deaf, the posterior superior temporal gyrus (STG) was responsive to visual motion. We compared functional connectivity of this reorganized cortex between groups to identify differences in the functional networks associated with reorganization. The STG displayed greater functional connectivity with a region in the calcarine fissure in the deaf group than in the hearing group. We also explored the role of hearing aid use, a factor that may contribute to variability in cross-modal reorganization. We found that both the cross-modal activity in the STG and the functional connectivity between the STG and calcarine cortex correlated with the duration of hearing aid use, supporting the hypothesis that residual hearing affects cross-modal reorganization. We conclude that early auditory deprivation alters not only the organization of auditory regions but also the interactions between auditory and primary visual cortex, and that auditory input, as indexed by hearing aid use, may inhibit cross-modal reorganization in early-deaf people.
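Seed-based functional connectivity of the kind described here reduces to correlating a seed time series with a target time series. A minimal sketch follows, with invented BOLD signals standing in for the preprocessed STG and calcarine data.

```python
# Sketch of a seed-based functional-connectivity measure under assumed inputs:
# two preprocessed BOLD time series (seed = STG, target = calcarine), both invented here.
import numpy as np

rng = np.random.default_rng(2)
n_volumes = 240
stg = rng.normal(size=n_volumes)                     # mean time series of the STG seed
calcarine = 0.4 * stg + rng.normal(size=n_volumes)   # target region time series

r = np.corrcoef(stg, calcarine)[0, 1]
z = np.arctanh(r)        # Fisher z-transform, conventional before group statistics
print(f"r = {r:.2f}, z = {z:.2f}")
```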


Subjects
Auditory Cortex/physiopathology, Deafness/physiopathology, Deafness/therapy, Hearing Aids, Motion Perception/physiology, Neuronal Plasticity/physiology, Adult, Brain Mapping, Eye Movement Measurements, Female, Hearing Tests, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Neural Pathways/physiopathology, Photic Stimulation, Young Adult
4.
J Speech Lang Hear Res ; 66(11): 4575-4589, 2023 11 09.
Article in English | MEDLINE | ID: mdl-37850878

ABSTRACT

PURPOSE: There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential method that analyzes gaze, and we use it to answer the question of when, and how much, listeners with hearing loss look toward a new talker in a conversation. METHOD: Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition and tested whether these events predicted the listener's gaze. Second, we mapped the odds that a listener gazed toward a new talker over time during a conversation transition. RESULTS: We found no evidence that our conversation events predicted changes in the listener's gaze, but the listener's gaze toward the new talker during a silence transition was predicted by time: the odds of looking at the new talker increased in an s-shaped curve from at least 0.4 s before to 1 s after the onset of the new talker's speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners. CONCLUSIONS: MLR modeling of eye gaze during talker transitions is a promising approach for studying a listener's perception of realistic conversation. Our experience provides insight to guide future research with this method.
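A multilevel logistic regression with random effects for conversation events and listeners can be sketched with statsmodels and simulated data; the column names, predictor, and random-effects structure below are assumptions, not the study's exact model.

```python
# Hedged sketch of a multilevel logistic model of gaze-to-new-talker over time.
# The data, variable names, and random-effects structure are all invented here.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "time": rng.uniform(-1.0, 2.0, n),               # seconds re: new talker's speech onset
    "event": rng.integers(0, 40, n).astype(str),     # conversation-event identifier
    "listener": rng.integers(0, 22, n).astype(str),  # participant identifier
})
logit = -0.5 + 1.2 * df["time"]                      # simulated s-shaped rise over time
df["gaze_new"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Random intercepts for events and listeners; the study found events explained more variance.
model = BinomialBayesMixedGLM.from_formula(
    "gaze_new ~ time",
    {"event": "0 + C(event)", "listener": "0 + C(listener)"},
    df,
)
result = model.fit_vb()   # variational Bayes fit
print(result.summary())
```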


Subjects
Deafness, Hearing Loss, Speech Perception, Humans, Aged, Acoustic Stimulation/methods, Speech
5.
Front Neurosci ; 16: 873201, 2022.
Article in English | MEDLINE | ID: mdl-35844213

ABSTRACT

This presentation details and evaluates a method for estimating the attended speaker during a two-person conversation by means of in-ear electro-oculography (EOG). Twenty-five hearing-impaired participants were fitted with earmolds equipped with EOG electrodes (in-ear EOG) and wore eye-tracking glasses while watching a video of two life-size people in a dialogue solving a Diapix task. The dialogue was presented directionally, together with background noise, in the frontal hemisphere at 60 dB SPL. During three steering conditions (none, in-ear EOG, conventional eye-tracking), participants' comprehension was periodically measured using multiple-choice questions. Based on eye movement detection by in-ear EOG or conventional eye-tracking, the estimated attended speaker was amplified by 6 dB. In the in-ear EOG condition, the estimate was based on one selected channel (a pair of electrodes) out of 36 possible pairs. A novel calibration procedure introducing three different metrics was used to select the measurement channel. The in-ear EOG attended-speaker estimates were compared to those of the eye tracker. Across participants, the mean accuracy of in-ear EOG estimation of the attended speaker was 68%, ranging from 50% to 89%. Based on offline simulation, it was established that higher scoring metrics obtained for a channel with the calibration procedure were significantly associated with better data quality. Results showed a statistically significant improvement in comprehension of about 10% in both steering conditions relative to the no-steering condition. Comprehension in the two steering conditions was not significantly different. Further, better comprehension in the in-ear EOG condition was significantly correlated with more accurate estimation of the attended speaker. In conclusion, this study shows promising results for the use of in-ear EOG for visual attention estimation, with potential applicability in hearing assistive devices.
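The calibration idea (score each candidate EOG channel, then keep the best one for attended-speaker estimation) can be illustrated roughly. The scoring metric and detection rule below are invented stand-ins, not the study's three calibration metrics or its detection algorithm.

```python
# Illustrative sketch only: the channel-selection metric and the left/right
# decision rule are invented; real in-ear EOG processing is more involved.
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_samples = 36, 5000
eog = rng.normal(size=(n_channels, n_samples))                    # hypothetical in-ear EOG channels
eog[7] += 2.0 * np.sign(np.sin(np.linspace(0, 20, n_samples)))    # one channel carries a gaze signal

# Calibration: score each channel, here by variance relative to its median absolute deviation.
mad = np.median(np.abs(eog - np.median(eog, axis=1, keepdims=True)), axis=1)
scores = eog.var(axis=1) / mad
best = int(np.argmax(scores))

# Estimation: sign of the selected channel as a crude left/right attended-speaker proxy.
attended_left = eog[best] < 0
print(f"selected channel: {best}, fraction attending left: {attended_left.mean():.2f}")
```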

7.
Hear Res ; 343: 64-71, 2017 01.
Article in English | MEDLINE | ID: mdl-27321204

ABSTRACT

The right planum temporale region is typically involved in higher-order auditory processing. After deafness, this area reorganizes to become sensitive to visual motion. This plasticity is thought to support compensatory enhancements to visual ability. In earlier work, we showed that enhanced visual motion detection abilities in early-deaf people correlate with cortical thickness in a subregion of the right planum temporale. In the current study, we build on this result by examining the relationship between enhanced visual motion detection ability and white matter structure in this area in the same sample. We used diffusion-weighted magnetic resonance imaging and extracted measures of white matter structure from a region of interest just below the grey matter surface where cortical thickness correlates with visual motion detection ability. We also tested control regions of interest in the auditory and visual cortices where we did not expect to find a relationship between visual motion detection ability and white matter. We found that in the right planum temporale subregion, and in no other tested region, fractional anisotropy, radial diffusivity, and mean diffusivity correlated with visual motion detection thresholds. We interpret this as further evidence of a structural correlate of cross-modal reorganization after deafness.
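Extracting diffusion metrics from a region of interest and correlating them with behaviour can be sketched as follows; the FA volumes, mask, and thresholds are synthetic arrays standing in for real images, which would typically be loaded with a library such as nibabel.

```python
# Sketch of ROI extraction from diffusion metric maps; everything here is simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_subjects, shape = 20, (32, 32, 32)
fa_maps = rng.uniform(0.1, 0.8, size=(n_subjects, *shape))   # per-subject FA volumes
roi = np.zeros(shape, dtype=bool)
roi[14:18, 14:18, 14:18] = True    # hypothetical right-PT subregion just below the grey matter

fa_mean = fa_maps[:, roi].mean(axis=1)                # mean FA within the ROI, per subject
thresholds = rng.uniform(0.5, 2.0, size=n_subjects)   # visual motion detection thresholds
r, p = pearsonr(fa_mean, thresholds)
print(f"FA vs. threshold: r = {r:.2f}, p = {p:.3f}")  # repeat for RD and MD maps analogously
```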


Subjects
Deafness/diagnostic imaging, Diffusion Magnetic Resonance Imaging, Motion Perception, Sensory Thresholds, Temporal Lobe/diagnostic imaging, Visual Perception, White Matter/diagnostic imaging, Physiological Adaptation, Psychological Adaptation, Adult, Anisotropy, Deafness/physiopathology, Deafness/psychology, Female, Humans, Male, Neuronal Plasticity, Predictive Value of Tests, Temporal Lobe/physiopathology, White Matter/physiopathology, Young Adult
8.
Front Neurosci ; 11: 507, 2017.
Article in English | MEDLINE | ID: mdl-28955193

ABSTRACT

The ability to dance relies on the ability to synchronize movements to a perceived musical beat. Typically, beat synchronization is studied with auditory stimuli. However, in many typical social dancing situations, music can also be perceived as vibrations when objects that generate sounds also generate vibrations. This vibrotactile musical perception is of particular relevance for deaf people, who rely on non-auditory sensory information for dancing. In the present study, we investigated beat synchronization to vibrotactile electronic dance music in hearing and deaf people. We tested seven deaf and 14 hearing individuals on their ability to bounce in time with the tempo of vibrotactile stimuli (no sound) delivered through a vibrating platform. The corresponding auditory stimuli (no vibrations) were used in an additional condition in the hearing group. We collected movement data using a camera-based motion capture system and subjected these data to a phase-locking analysis to assess synchronization quality. The vast majority of participants were able to precisely time their bounces to the vibrations, with no difference in performance between the two groups. In addition, we found higher performance for the auditory condition compared to the vibrotactile condition in the hearing group. Our results thus show that accurate tactile-motor synchronization in a dance-like context occurs regardless of auditory experience, though auditory-motor synchronization is of superior quality.
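Phase-locking analyses of this kind typically assign each movement a phase within the beat cycle and summarize synchrony with the resultant vector length from circular statistics. A minimal sketch with a simulated tempo and jittered bounce times follows; the numbers are not from the study.

```python
# Minimal phase-locking sketch: each bounce gets a phase within the beat cycle,
# and the resultant vector length summarizes synchrony. All values simulated.
import numpy as np

rng = np.random.default_rng(6)
beat_period = 60.0 / 130.0                               # seconds per beat at a hypothetical 130 BPM
beats = np.arange(0, 60, beat_period)                    # one minute of beats
bounces = beats + rng.normal(0, 0.03, size=beats.size)   # bounce onsets jittered around the beats

phases = 2 * np.pi * ((bounces % beat_period) / beat_period)
R = np.abs(np.mean(np.exp(1j * phases)))                 # 1 = perfect locking, 0 = none
print(f"resultant vector length: {R:.2f}")
```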

9.
PLoS One ; 9(2): e90498, 2014.
Article in English | MEDLINE | ID: mdl-24587381

ABSTRACT

In deaf people, the auditory cortex can reorganize to support visual motion processing. Although this cross-modal reorganization has long been thought to subserve enhanced visual abilities, previous research has been unsuccessful at identifying behavioural enhancements specific to motion processing. Recently, research with congenitally deaf cats has uncovered an enhancement in visual motion detection. Our goal was to test for a similar difference between deaf and hearing people. We tested 16 early and profoundly deaf participants and 20 hearing controls. Participants completed a visual motion detection task, in which they were asked to determine which of two sinusoidal gratings was moving. The speed of the moving grating varied according to an adaptive staircase procedure, allowing us to determine the lowest speed necessary for participants to detect motion. Consistent with previous research in deaf cats, the deaf group had lower motion detection thresholds than the hearing group. This finding supports the proposal that cross-modal reorganization after sensory deprivation occurs for supramodal sensory features and preserves the output functions of the reorganized cortex.
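Adaptive staircases converge on a threshold by making the task harder after correct responses and easier after errors. Below is a sketch of one common variant, a 2-down/1-up rule with a simulated observer; the exact rule and step sizes used in the study are not specified here.

```python
# Hedged sketch of a 2-down/1-up adaptive staircase with a simulated observer;
# all parameter values are illustrative, not the study's settings.
import numpy as np

rng = np.random.default_rng(7)
true_threshold, speed, step = 0.8, 4.0, 0.5   # deg/s; hypothetical values
reversals, correct_streak, last_dir = [], 0, 0

while len(reversals) < 8:
    # Simulated 2AFC observer: performance rises from chance (0.5) above threshold.
    p_correct = 0.5 + 0.5 / (1 + np.exp(-(speed - true_threshold) / 0.2))
    if rng.random() < p_correct:
        correct_streak += 1
        if correct_streak == 2:                  # two correct in a row -> harder (slower)
            correct_streak, direction = 0, -1
            if last_dir == +1:
                reversals.append(speed)
            speed, last_dir = max(speed - step, 0.05), direction
    else:                                        # one error -> easier (faster)
        correct_streak, direction = 0, +1
        if last_dir == -1:
            reversals.append(speed)
        speed, last_dir = speed + step, direction

print(f"threshold estimate: {np.mean(reversals[-6:]):.2f} deg/s")  # mean of last reversals
```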


Subjects
Auditory Threshold/physiology, Hearing/physiology, Motion Perception/physiology, Persons With Hearing Impairments, Visual Perception/physiology, Adult, Animals, Female, Humans, Male, Photic Stimulation/methods, Psychomotor Performance/physiology, Sensory Deprivation/physiology