Results 1 - 11 of 11
1.
PeerJ ; 5: e3143, 2017.
Article in English | MEDLINE | ID: mdl-28462016

ABSTRACT

Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. While this binding/integration tendency is thus specific to each individual, it is not clear how plastic it is in adulthood and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation that provides evidence that (1) the brain's tendency to bind in spatial perception is plastic, (2) it can change following brief exposure to simple audiovisual stimuli, and (3) exposure to temporally synchronous, spatially discrepant stimuli is the most effective way to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.

2.
Neurosci Lett ; 614: 24-8, 2016 Feb 12.
Article in English | MEDLINE | ID: mdl-26742638

ABSTRACT

In our daily lives, our capacity to selectively attend to stimuli within or across sensory modalities enables enhanced perception of the surrounding world. While previous research has studied selective attention extensively, two important questions remain unanswered: (1) how selective attention to a single modality impacts sensory integration processes, and (2) by what mechanism selective attention improves perception. We explored how selective attention impacts performance in both a spatial task and a temporal numerosity judgment task, and employed a Bayesian Causal Inference model to investigate the computational mechanism(s) affected by selective attention. We report three findings: (1) in the spatial domain, selective attention improves the precision of the visual sensory representations (which were relatively precise), but not of the auditory sensory representations (which were fairly noisy); (2) in the temporal domain, selective attention improves the sensory precision in both modalities (both of which were fairly reliable to begin with); (3) in both tasks, selective attention did not exert a significant influence on the tendency to integrate sensory stimuli. Therefore, it may be postulated that a sensory modality must possess a certain inherent degree of encoding precision in order to benefit from selective attention. It also appears that in certain basic perceptual tasks, the tendency to integrate crossmodal signals does not depend significantly on selective attention. We conclude with a discussion of how these results relate to recent theoretical considerations of selective attention.


Subjects
Attention; Auditory Perception; Visual Perception; Acoustic Stimulation; Humans; Photic Stimulation
3.
PLoS Comput Biol ; 11(12): e1004649, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26646312

ABSTRACT

Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the precision of perceptual estimates, but also the accuracy.
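The Bayesian Causal Inference framework invoked in this abstract can be made concrete with a short sketch. This is a generic illustration of the standard model, not the fitted model from the study; all numerical values (sensory noise widths, prior mean and width, prior probability of a common cause) are made-up placeholders.

```python
import numpy as np

# Illustrative parameters in degrees of visual angle; NOT fitted to the study's data.
SIG_V, SIG_A = 2.0, 8.0   # sensory noise: vision precise, audition noisy
SIG_P, MU_P = 15.0, 0.0   # spatial prior centered straight ahead
P_COMMON = 0.5            # prior probability that both cues share one cause

def bci_estimate(xv, xa):
    """Return (posterior of common cause, visual estimate, auditory estimate)
    for one trial's noisy measurements xv (visual) and xa (auditory)."""
    # Likelihood of the pair under a single common source (source integrated out)
    var1 = SIG_V**2 * SIG_A**2 + SIG_V**2 * SIG_P**2 + SIG_A**2 * SIG_P**2
    l1 = np.exp(-0.5 * ((xv - xa) ** 2 * SIG_P**2
                        + (xv - MU_P) ** 2 * SIG_A**2
                        + (xa - MU_P) ** 2 * SIG_V**2) / var1) / (2 * np.pi * np.sqrt(var1))
    # Likelihood under two independent sources
    var_v, var_a = SIG_V**2 + SIG_P**2, SIG_A**2 + SIG_P**2
    l2 = np.exp(-0.5 * ((xv - MU_P) ** 2 / var_v + (xa - MU_P) ** 2 / var_a)) \
        / (2 * np.pi * np.sqrt(var_v * var_a))
    # Posterior probability that the two cues have a common cause
    pc = P_COMMON * l1 / (P_COMMON * l1 + (1 - P_COMMON) * l2)
    # Precision-weighted optimal estimates under each causal structure
    s_c1 = ((xv / SIG_V**2 + xa / SIG_A**2 + MU_P / SIG_P**2)
            / (1 / SIG_V**2 + 1 / SIG_A**2 + 1 / SIG_P**2))
    s_v = (xv / SIG_V**2 + MU_P / SIG_P**2) / (1 / SIG_V**2 + 1 / SIG_P**2)
    s_a = (xa / SIG_A**2 + MU_P / SIG_P**2) / (1 / SIG_A**2 + 1 / SIG_P**2)
    # Model averaging: blend the two structures' estimates by their posterior
    return pc, pc * s_c1 + (1 - pc) * s_v, pc * s_c1 + (1 - pc) * s_a
```

With these placeholder parameters, a small audio-visual discrepancy yields a high common-cause posterior and an auditory estimate pulled toward the visual one, mirroring the visual dominance described in the abstract.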


Subjects
Auditory Perception/physiology; Models, Neurological; Models, Statistical; Sound Localization/physiology; Space Perception/physiology; Task Performance and Analysis; Acoustic Stimulation/methods; Adolescent; Adult; Computer Simulation; Female; Humans; Male; Middle Aged; Physical Stimulation/methods; Reproducibility of Results; Sensitivity and Specificity; Young Adult
4.
J Cogn Neurosci ; 27(6): 1194-206, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25514656

ABSTRACT

Examining the function of individual human hippocampal subfields remains challenging because of their small sizes and convoluted structures. Previous human fMRI studies at 3 T have successfully detected differences in activation between hippocampal cornu ammonis (CA) field CA1, combined CA2, CA3, and dentate gyrus (DG) region (CA23DG), and the subiculum during associative memory tasks. In this study, we investigated hippocampal subfield activity in healthy participants using an associative memory paradigm during high-resolution fMRI scanning at 7 T. We were able to localize fMRI activity to anterior CA2 and CA3 during learning and to the posterior CA2 field, the CA1, and the posterior subiculum during retrieval of novel associations. These results provide insight into more specific human hippocampal subfield functions underlying learning and memory and a unique opportunity for future investigations of hippocampal subfield function in healthy individuals as well as those suffering from neurodegenerative diseases.


Subjects
Association Learning/physiology; Hippocampus/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Memory/physiology; Neuropsychological Tests; Young Adult
5.
J Phon ; 45: 91-105, 2014 Jul 01.
Article in English | MEDLINE | ID: mdl-24932053

ABSTRACT

Listeners use lexical or visual context information to recalibrate auditory speech perception. After hearing an ambiguous auditory stimulus between /aba/ and /ada/ coupled with a clear visual stimulus (e.g., lip closure in /aba/), an ambiguous auditory-only stimulus is perceived in line with the previously seen visual stimulus. What remains unclear, however, is what exactly listeners are recalibrating: phonemes, phone sequences, or acoustic cues. To address this question we tested generalization of visually-guided auditory recalibration to (1) the same phoneme contrast cued differently (i.e., /aba/-/ada/ vs. /ibi/-/idi/, where the main cues are formant transitions in the vowels vs. burst and frication of the obstruent), (2) a different phoneme contrast cued identically (/aba/-/ada/ vs. /ama/-/ana/, both cued by formant transitions in the vowels), and (3) the same phoneme contrast with the same cues in a different acoustic context (/aba/-/ada/ vs. /ubu/-/udu/). Whereas recalibration was robust in all recalibration control trials, no generalization was found in any of the experiments. This suggests that perceptual recalibration may be more specific than previously thought: it appears to be restricted to the phoneme category experienced during exposure as well as to the specific manipulated acoustic cues. We suggest that recalibration affects context-dependent sub-lexical units.

6.
J Assoc Res Otolaryngol ; 15(2): 235-48, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24464088

ABSTRACT

Bimodal stimulation, or stimulation of a cochlear implant (CI) together with a contralateral hearing aid (HA), can improve speech perception in noise. However, this benefit is variable, and some individuals even experience interference with bimodal stimulation. One contributing factor to this variability may be differences in binaural spectral integration (BSI) due to abnormal auditory experience. CI programming introduces interaural pitch mismatches, in which the frequencies allocated to the electrodes (and contralateral HA) differ from the electrically stimulated cochlear frequencies. Previous studies have shown that some, but not all, CI users adapt pitch perception to reduce this mismatch. The purpose of this study was to determine whether broadened BSI may also reduce the perception of mismatch. Interaural pitch mismatches and dichotic pitch fusion ranges were measured in 21 bimodal CI users. Seventeen subjects with wide fusion ranges also performed a task to pitch match various fused electrode-tone pairs. All subjects showed abnormally wide dichotic fusion frequency ranges of 1-4 octaves. The fusion range size was weakly correlated with the interaural pitch mismatch, suggesting a link between broad binaural pitch fusion and large interaural pitch mismatch. Dichotic pitch averaging was also observed, in which a new binaural pitch resulted from the fusion of the original monaural pitches, even when the pitches differed by as much as 3-4 octaves. These findings suggest that abnormal BSI, indicated by broadened fusion ranges and spectral averaging between ears, may account for speech perception interference and nonoptimal integration observed with bimodal compared with monaural hearing device use.


Subjects
Cochlear Implants; Pitch Perception/physiology; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Speech Perception
7.
Article in English | MEDLINE | ID: mdl-22069383

ABSTRACT

Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble those of a Bayesian causal inference observer, in which the underlying causal structure of the stimuli is inferred from both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterizing perceptual inference within a static environment, and therefore little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory-visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., a shift in the auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., an increase in the spread of the likelihood distribution), a shift in the auditory bias (i.e., a shift in the prior distribution), an increase or decrease in the strength of the auditory bias (i.e., the spread of the prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory-visual discrepancy is attributed by the nervous system to sensory representation error; as a result, the sensory map of space is recalibrated to correct the error.
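The "shift in the auditory likelihood" account lends itself to a toy incremental sketch: each conflict trial nudges the auditory spatial map toward the visually indicated location. The linear update rule and the learning rate are assumptions for illustration only, not the model fitted in the study.

```python
LEARNING_RATE = 0.1  # hypothetical fraction of the error corrected per exposure trial

def recalibrate(map_shift, visual_offset, n_trials):
    """Accumulate a shift in the auditory spatial map over repeated conflict trials."""
    for _ in range(n_trials):
        error = visual_offset - map_shift   # residual audio-visual discrepancy
        map_shift += LEARNING_RATE * error  # correct a fixed fraction of it
    return map_shift

def perceived_auditory_location(true_location, map_shift):
    """After exposure, unisensory auditory percepts are displaced by the shift."""
    return true_location + map_shift
```

For example, twenty trials of a +10 degree visual offset leave the map shifted by roughly 8.8 degrees under these made-up numbers, i.e., most of the discrepancy has been attributed to sensory error and corrected.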

8.
Front Psychol ; 2: 264, 2011.
Article in English | MEDLINE | ID: mdl-22028697

ABSTRACT

Multisensory perception has been the focus of intense investigation in recent years. It is now well-established that crossmodal interactions are ubiquitous in perceptual processing and endow the system with improved precision, accuracy, processing speed, etc. While these findings have shed much light on principles and mechanisms of perception, ultimately it is not very surprising that multiple sources of information provide benefits in performance compared to a single source of information. Here, we argue that the more surprising recent findings are those showing that multisensory experience also influences the subsequent unisensory processing. For example, exposure to auditory-visual stimuli can change the way that auditory or visual stimuli are processed subsequently even in isolation. We review three sets of findings that represent three different types of learning ranging from perceptual learning, to sensory recalibration, to associative learning. In all these cases exposure to multisensory stimuli profoundly influences the subsequent unisensory processing. This diversity of phenomena may suggest that continuous modification of unisensory representations by multisensory relationships may be a general learning strategy employed by the brain.

9.
J Neurosci ; 31(12): 4607-12, 2011 Mar 23.
Article in English | MEDLINE | ID: mdl-21430160

ABSTRACT

Basic features of objects and events in the environment such as timing and spatial location are encoded by multiple sensory modalities. This redundancy in sensory coding allows recalibration of one sense by other senses if there is a conflict between the sensory maps (Radeau and Bertelson, 1974; Zwiers et al., 2003; Navarra et al., 2009). In contrast to motor or sensorimotor adaptation, which can be relatively rapid, cross-sensory recalibration (the change in an isolated sensory representation after exposure to conflicting cross-modal information) has been reported only as a result of an extensive amount of exposure to sensory discrepancy (e.g., hundreds or thousands of trials, or prolonged durations). Therefore, sensory recalibration has traditionally been associated with compensation for permanent changes that would occur during development or after traumatic injuries or stroke. Nonetheless, the dynamics of sensory recalibration are unknown, and it is unclear whether prolonged inconsistency is required to trigger recalibration or whether such mechanisms are continuously engaged in self-maintenance. We show that in humans recalibration of perceived auditory space by vision can occur after a single exposure to discrepant auditory-visual stimuli lasting only a few milliseconds. These findings suggest an impressive degree of plasticity in a basic perceptual map induced by a cross-modal error signal. Therefore, it appears that modification of sensory maps does not necessarily require accumulation of a substantial amount of evidence of error to be triggered, and is continuously operational. This scheme of sensory recalibration has many advantages. It only requires a small working memory capacity, and allows rapid adaptation to transient changes in the environment as well as in the body.


Subjects
Auditory Perception/physiology; Sound Localization/physiology; Space Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Environment; Female; Functional Laterality/physiology; Humans; Male; Neuronal Plasticity/physiology; Photic Stimulation; Psychomotor Performance/physiology; Visual Perception/physiology; Young Adult
10.
PLoS Comput Biol ; 6(8)2010 Aug 05.
Article in English | MEDLINE | ID: mdl-20700493

ABSTRACT

The question of which strategy is employed in human decision making has been studied extensively in the context of cognitive tasks; however, it has not been investigated systematically in the context of perceptual tasks. The goal of this study was to gain insight into the decision-making strategy used by human observers in a low-level perceptual task. Data from more than 100 individuals who participated in an auditory-visual spatial localization task were evaluated to examine which of three plausible strategies best accounted for each observer's behavior. This task is well suited to exploring this question because it involves an implicit inference about whether the auditory and visual stimuli were caused by the same object or by independent objects, and different ways of using the inferred causes lead to distinctly different spatial estimates and response patterns. For example, employing the commonly used cost function of minimizing the mean squared error of spatial estimates results in a weighted averaging of the estimates corresponding to the different causal structures. A strategy that minimizes the error in the inferred causal structure selects the most likely causal structure and commits to it in the subsequent inference of location ("model selection"). A third strategy selects a causal structure in proportion to its probability, thus attempting to match the probability of the inferred causal structure. This type of probability matching strategy has been reported to be used by participants predominantly in cognitive tasks. Comparing these three strategies, the behavior of the vast majority of observers in this perceptual task was most consistent with probability matching. While this appears to be a suboptimal strategy, and hence a surprising choice for the perceptual system to adopt, we discuss potential advantages of such a strategy for perception.
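The three candidate strategies can be sketched as follows. Here `p_c1` stands for the posterior probability of a common cause, and `s_common` / `s_separate` for the optimal location estimates under the two causal structures; the function and variable names, and the 0.5 criterion, are illustrative, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the sketch is reproducible

def report(p_c1, s_common, s_separate, strategy):
    """Map the causal posterior and the two conditional estimates to one response."""
    if strategy == "averaging":
        # Minimize expected squared error: posterior-weighted mixture of estimates
        return p_c1 * s_common + (1 - p_c1) * s_separate
    if strategy == "selection":
        # Minimize error about the causal structure: commit to the more probable one
        return s_common if p_c1 >= 0.5 else s_separate
    if strategy == "matching":
        # Sample a causal structure in proportion to its posterior probability
        return s_common if rng.random() < p_c1 else s_separate
    raise ValueError(f"unknown strategy: {strategy}")
```

Across many trials with the same posterior, probability matching produces a bimodal distribution of responses whose mixture weights track `p_c1`, which is the behavioral signature distinguishing it from the other two strategies.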


Subjects
Probability Learning; Visual Perception/physiology; Decision Making; Female; Humans; Male; Sound Localization/physiology; Young Adult
11.
J Vis ; 8(3): 24.1-11, 2008 Mar 27.
Article in English | MEDLINE | ID: mdl-18484830

ABSTRACT

Our nervous system typically processes signals from multiple sensory modalities at any given moment and is therefore faced with two important problems: which of the signals are caused by a common event, and how those signals should be combined. We investigated human perception in the presence of auditory, visual, and tactile stimulation in a numerosity judgment task. Observers were presented with stimuli in one, two, or three modalities simultaneously and were asked to report their percepts in each modality. The degree of congruency between the modalities varied across trials. For example, a single flash was paired in some trials with two beeps and two taps. Cross-modal illusions were observed in most conditions in which there was incongruence among the two or three stimuli, revealing robust interactions among the three modalities in all directions. The observers' bimodal and trimodal percepts were remarkably consistent with a Bayes-optimal strategy of combining the evidence in each modality with the prior probability of the events. These findings provide evidence that the combination of sensory information among three modalities follows optimal statistical inference across the entire spectrum of conditions.
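Assuming conditionally independent sensory noise, the Bayes-optimal combination described above amounts to multiplying each modality's likelihood over the number of events with the prior and normalizing. The sketch and its numbers are a generic illustration, not the paper's fitted model.

```python
import numpy as np

def combine(prior, likelihoods):
    """Posterior over event counts: prior times each modality's likelihood,
    normalized (noise assumed conditionally independent across modalities)."""
    post = np.array(prior, dtype=float)
    for lik in likelihoods:
        post = post * np.array(lik, dtype=float)
    return post / post.sum()

# Made-up example over 1, 2, or 3 events: vision weakly favors one flash,
# while audition and touch both favor two events.
prior = [1 / 3, 1 / 3, 1 / 3]
visual = [0.6, 0.3, 0.1]
auditory = [0.2, 0.7, 0.1]
tactile = [0.2, 0.7, 0.1]
posterior = combine(prior, [visual, auditory, tactile])
```

With these illustrative numbers the combined posterior peaks at two events, so the single flash is perceived as two flashes: a cross-modal numerosity illusion of the kind reported in the abstract.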


Subjects
Auditory Perception/physiology; Models, Statistical; Space Perception/physiology; Touch/physiology; Visual Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Bayes Theorem; Female; Humans; Male; Photic Stimulation; Physical Stimulation