ABSTRACT
The present study explored whether object (or event) files can be formed that integrate color imagery and perceptual location features. To assess this issue, a cue-target procedure was used whereby color imagery was cued to be generated at a particular location in space, followed by a perceptual color discrimination task. Partial repetition costs (PRCs) were then measured by varying the overlap of the color and location features of the cue and target, to evaluate whether an object/event file had been formed. Robust PRCs were observed when imagery was generated at a location, supporting the idea that imagery and perception can be incorporated into a common event file. It was also revealed that the PRC effects for perceptual color cues were tenuous: they did not reach significance in the present study. Overall, the present study indicates that imagery can produce stronger binding effects than perception, offering important insights into the role that active engagement plays in the formation of object/event files.
Subjects
Attention, Cues, Humans, Perception, Visual Perception
ABSTRACT
It has been demonstrated that color imagery can have a profound impact when generated prior to search, while perceptual cues have a somewhat limited influence. Given this discrepancy, the present study evaluated the processes affected by imagery and perception using a singleton search task in which participants had to find an oddball-colored target among homogeneously colored distractors. Prior to each trial, a perceptual color was displayed, or imagery was generated, that could match the target, the distractors, or neither item in the search array. Color imagery led to both a larger benefit when it matched the target and a larger cost when it matched the distractors, relative to perceptual cues. By parsing response times into pre-search, search, and response phases based on eye movements, it was revealed that, while imagery and perceptual cues both influenced the search phase, imagery had a significantly greater influence than perceptual cues; imagery also influenced the pre-search and response phases. Overall, the present findings reveal that the influence of imagery is profound, as it affects multiple processes in the visual-perception pipeline, whereas perception appeared to affect only search.
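The phase-parsing step described above can be sketched as follows. This is a minimal illustration, not the study's analysis code; the function name, timestamp conventions, and all numeric values are hypothetical, and it assumes the three phases were delimited by the first saccade and the first fixation on the target.

```python
# Hypothetical sketch of splitting a trial's response time into
# pre-search, search, and response phases using eye-movement timestamps.
# All timestamps are in ms from search-array onset; values are illustrative.

def parse_rt_phases(first_saccade_ms, target_fixation_ms, response_ms):
    """Split total RT into pre-search, search, and response phases."""
    pre_search = first_saccade_ms                    # array onset -> first saccade
    search = target_fixation_ms - first_saccade_ms   # first saccade -> target fixated
    response = response_ms - target_fixation_ms      # target fixated -> key press
    return pre_search, search, response

# Example trial: first saccade at 180 ms, target fixated at 420 ms,
# key press at 650 ms after array onset.
phases = parse_rt_phases(180, 420, 650)
```

The three phase durations sum to the total response time, so any cue effect on overall RT can be attributed to one or more specific phases.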
Subjects
Attention, Eye Movements, Color Perception, Cues, Humans, Reaction Time, Visual Perception
ABSTRACT
Congenital amusia, commonly known as tone deafness, is a lifelong impairment of music perception and production. It remains a matter of debate whether the impairments observed in the musical domain in congenital amusia are paralleled in other, non-musical perceptual abilities. Using behavioral measures in two experiments, the current study explored face perception and memory in individuals with congenital amusia. Both the congenital amusia group and matched controls performed a face perception task (Experiment 1) and an old/novel object memory task (for both faces and houses; Experiment 2). The results showed that the congenital amusia group had significantly slower reaction times than the matched control group when judging whether two faces presented together were the same or different. For different face-pairs, the deficit was greater for upright faces than for inverted faces. In the object memory task, the congenital amusia group also showed worse memory performance than the control group. The results of the present study suggest that the impairment in congenital amusia is not limited to music but extends to the domains of visual perception and visual memory.
Subjects
Auditory Perception Disorders/physiopathology, Facial Recognition/physiology, Memory/physiology, Reaction Time/physiology, Acoustic Stimulation, Auditory Perception Disorders/diagnosis, Behavior Observation Techniques, Case-Control Studies, Female, Humans, Male, Pitch Perception/physiology, Young Adult
ABSTRACT
Converging evidence indicates that prior knowledge plays an important role in multisensory integration. However, the neural mechanisms underlying the processes by which prior knowledge is integrated with current sensory information remain unknown. In this study, we measured event-related potentials (ERPs) while manipulating prior knowledge using a novel visual letter recognition task in which auditory information was always presented simultaneously. The color of the letters was assigned a particular probability of being associated with audiovisual congruency (e.g., green = high probability (HP) and blue = low probability (LP)). The results demonstrate that this prior began to affect reaction times to the congruent audiovisual stimuli at about the 900th trial. Consequently, the ERP data were analyzed in two phases: the "early phase" (
Subjects
Auditory Perception/physiology, Knowledge, Visual Perception/physiology, Acoustic Stimulation, Data Interpretation, Statistical, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Photic Stimulation, Probability, Psychomotor Performance/physiology, Reaction Time/physiology, Reading, Sensation, Young Adult
ABSTRACT
To investigate the neural mechanisms of auditory-visual integration, we recorded event-related potentials during a word-identification task in which the stimulus was presented in the auditory (A), visual (V), and auditory-visual (AV) modalities. The reliability of the visual information was varied between high-reliability (VH) and low-reliability (VL) levels in both the V and AV presentations. The modulation of sensory integration by cue reliability was revealed in the double-difference waveform generated by subtracting the difference waveform AVL-(A+VL) from the difference waveform AVH-(A+VH). The results demonstrated (i) early modulation of activity in the auditory and visual cortices; (ii) a subsequent spatiotemporal sequence of activity occurring mostly in multisensory areas; and (iii) the timing of the final output of AV integration at around 370-410 ms post-stimulus.
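The double-difference subtraction described above can be sketched numerically. This is a minimal illustration of the arithmetic only, not the study's analysis pipeline; the array names and amplitude values are made up, standing in for mean ERP amplitudes per time sample.

```python
import numpy as np

def double_difference(av_h, a, v_h, av_l, v_l):
    """[AVH - (A + VH)] - [AVL - (A + VL)], computed per time sample."""
    diff_h = av_h - (a + v_h)   # AV integration effect, high-reliability visual cue
    diff_l = av_l - (a + v_l)   # AV integration effect, low-reliability visual cue
    return diff_h - diff_l      # modulation of integration by cue reliability

# Illustrative two-sample "waveforms" (microvolts); values are arbitrary.
dd = double_difference(
    av_h=np.array([2.0, 3.0]),
    a=np.array([1.0, 1.0]),
    v_h=np.array([0.5, 0.5]),
    av_l=np.array([1.5, 2.0]),
    v_l=np.array([0.2, 0.3]),
)
```

Each single difference, AV-(A+V), tests for superadditive integration against the sum of the unimodal responses; differencing the two cancels components common to both reliability levels, isolating reliability-dependent integration.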