Results 1 - 20 of 28
1.
Wiley Interdiscip Rev Cogn Sci ; 15(3): e1675, 2024.
Article in English | MEDLINE | ID: mdl-38243393

ABSTRACT

Real-world environments are multisensory, meaningful, and highly complex. To parse these environments in a highly efficient manner, a subset of this information must be selected both within and across modalities. However, the bulk of attention research has been conducted within sensory modalities, with a particular focus on vision. Visual attention research has made great strides, with over a century of research methodically identifying the underlying mechanisms that allow us to select critical visual information. Spatial attention, attention to features, and object-based attention have all been studied extensively. More recently, research has established semantics (meaning) as a key component in allocating attention in real-world scenes, with the meaning of an item or environment affecting visual attentional selection. However, a full understanding of how semantic information modulates real-world attention requires studying more than vision in isolation. The world provides semantic information across all senses, but with this extra information comes greater complexity. Here, we summarize research on visual attention (including semantic-based visual attention) and crossmodal attention, and argue for the importance of studying crossmodal semantic guidance of attention. This article is categorized under: Psychology > Attention; Psychology > Perception and Psychophysics.


Subjects
Attention, Semantics, Visual Perception, Attention/physiology, Humans, Visual Perception/physiology
2.
Q J Exp Psychol (Hove) ; : 17470218241230812, 2024 Feb 18.
Article in English | MEDLINE | ID: mdl-38279528

ABSTRACT

It has been repeatedly shown that pictures of graspable objects can facilitate visual processing, even in the absence of reach-to-grasp actions, an effect often attributed to the concept of affordances. A classic demonstration of this is the handle compatibility effect, characterised by faster reaction times when the orientation of a graspable object's handle is compatible with the hand used to respond, even when the handle orientation is task-irrelevant. Nevertheless, it is debated whether the speeded reaction times are a result of affordances or spatial compatibility. First, we investigated whether we could replicate the handle compatibility effect while controlling for spatial compatibility. Participants (N = 68) responded with left- or right-handed keypresses to whether the object was upright or inverted and, in separate blocks, whether the object was red or green. We failed to replicate the handle compatibility effect, with no significant difference between compatible and incompatible conditions in either task. Second, we investigated whether there is a lower visual field (VF) advantage for the handle compatibility effect in line with what has been found for hand actions. A further 68 participants responded to object orientation presented either in the upper or lower VF. A significant handle compatibility effect was observed in the lower VF, but not the upper VF. This suggests that there is a lower VF advantage for affordances, possibly as the lower VF is where our actions most frequently occur. However, future studies should explore the impact of eye movements on the handle compatibility effect and tool affordances.

3.
J Exp Psychol Gen ; 152(7): 1907-1936, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37126050

ABSTRACT

Scene meaning is processed rapidly, with "gist" extracted even when presentation duration spans a few dozen milliseconds. This has led some to suggest a primacy of bottom-up information. However, gist research has typically relied on showing successions of unrelated scene images, contrary to our everyday experience in which the world unfolds around us in a predictable manner. Thus, we investigated whether top-down information-in the form of observers' predictions of an upcoming scene-facilitates gist processing. Within each trial, participants (N = 370) experienced a series of images, organized to represent an approach to a destination (e.g., walking down a sidewalk), followed by a target scene either congruous or incongruous with the expected destination (e.g., a store interior or a bedroom). A series of behavioral experiments revealed that appropriate expectations facilitated gist processing; inappropriate expectations interfered with gist processing; sequentially-arranged scene images benefitted gist processing when semantically related to the target scene; expectation-based facilitation was most apparent when presentation duration was most curtailed; and findings were not simply the result of response bias. We then investigated the neural correlates of predictability on scene processing using event-related potentials (ERPs) (N = 24). Congruency-related differences were found in a putative scene-selective ERP component, related to integrating visual properties (P2), and in later components related to contextual integration including semantic and syntactic coherence (N400 and P600, respectively). Together, results suggest that in real-world situations, top-down predictions of an upcoming scene influence even the earliest stages of its processing, affecting both the integration of visual properties and meaning. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Evoked Potentials, Motivation, Humans, Male, Female, Evoked Potentials/physiology, Electroencephalography, Reaction Time/physiology, Photic Stimulation
4.
Cognition ; 235: 105398, 2023 06.
Article in English | MEDLINE | ID: mdl-36791506

ABSTRACT

Face pareidolia is the experience of seeing illusory faces in inanimate objects. While children experience face pareidolia, it is unknown whether they perceive gender in illusory faces, as their face evaluation system is still developing in the first decade of life. In a sample of 412 children and adults from 4 to 80 years of age we found that like adults, children perceived many illusory faces in objects to have a gender and had a strong bias to see them as male rather than female, regardless of their own gender identification. These results provide evidence that the male bias for face pareidolia emerges early in life, even before the ability to discriminate gender from facial cues alone is fully developed. Further, the existence of a male bias in children suggests that any social context that elicits the cognitive bias to see faces as male has remained relatively consistent across generations.


Subjects
Face, Illusions, Adult, Humans, Male, Child, Female, Illusions/psychology
5.
Atten Percept Psychophys ; 85(1): 113-119, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36451076

ABSTRACT

Visual short-term memory (VSTM) is an essential store that creates continuous representations from disjointed visual input. However, severe capacity limits exist, reflecting constraints in supporting brain networks. VSTM performance shows spatial biases predicted by asymmetries in the brain based upon the location of the remembered object. Visual representations are retinotopic, or relative to the location of the representation on the retina. It therefore stands to reason that memory performance may also show retinotopic biases. Here, eye position was manipulated to tease apart retinotopic coordinates from spatiotopic coordinates, or location relative to the external world. Memory performance was measured while participants performed a color change-detection task for items presented across the visual field, fixating either a central or a peripheral position. VSTM biases reflected the location of the stimulus on the retina, regardless of where the stimulus appeared on the screen. Therefore, spatial biases occur in retinotopic coordinates in VSTM and suggest a fundamental link between behavioral VSTM measures and visual representations.


Subjects
Short-Term Memory, Visual Fields, Humans, Brain, Mental Recall, Cognition, Visual Perception
6.
Atten Percept Psychophys ; 84(4): 1317-1327, 2022 May.
Article in English | MEDLINE | ID: mdl-35449432

ABSTRACT

Semantic information about objects, events, and scenes influences how humans perceive, interact with, and navigate the world. The semantic information about any object or event can be highly complex and frequently draws on multiple sensory modalities, which makes it difficult to quantify. Past studies have primarily relied on either a simplified binary classification of semantic relatedness based on category or on algorithmic values based on text corpora rather than human perceptual experience and judgment. To further accelerate research into multisensory semantics, we created a constrained audiovisual stimulus set and derived similarity ratings between items within three categories (animals, instruments, household items). A set of 140 participants provided similarity judgments between sounds and images. Participants either heard a sound (e.g., a meow) and judged which of two pictures of objects (e.g., a picture of a dog and a duck) it was more similar to, or saw a picture (e.g., a picture of a duck) and selected which of two sounds it was more similar to (e.g., a bark or a meow). Judgments were then used to calculate similarity values of any given cross-modal pair. An additional 140 participants provided word judgments used to calculate the similarity of word-word pairs. The derived and reported similarity judgments reflect a range of semantic similarities across three categories and items, and highlight similarities and differences among similarity judgments between modalities. We make the derived similarity values available in a database format to the research community to be used as a measure of semantic relatedness in cognitive psychology experiments, enabling more robust studies of semantics in audiovisual environments.


Subjects
Judgment, Semantics, Female, Humans
7.
Cereb Cortex Commun ; 2(3): tgab049, 2021.
Article in English | MEDLINE | ID: mdl-34447936

ABSTRACT

Objects can be described in terms of low-level (e.g., boundaries) and high-level properties (e.g., object semantics). While recent behavioral findings suggest that the influence of semantic relatedness between objects on attentional allocation can be independent of task-relevance, the underlying neural substrate of semantic influences on attention remains ill-defined. Here, we employ behavioral and functional magnetic resonance imaging measures to uncover the mechanism by which semantic information increases visual processing efficiency. We demonstrate that the strength of the semantic relatedness signal decoded from the left inferior frontal gyrus: 1) influences attention, producing behavioral semantic benefits; 2) biases spatial attention maps in the intraparietal sulcus, subsequently modulating early visual cortex activity; and 3) directly predicts the magnitude of behavioral semantic benefit. Altogether, these results identify a specific mechanism driving task-independent semantic influences on attention.

8.
Curr Opin Psychol ; 29: 153-159, 2019 10.
Article in English | MEDLINE | ID: mdl-30925285

ABSTRACT

Attentional selection is a mechanism by which incoming sensory information is prioritized for further, more detailed, and more effective processing. Given that attended information is privileged by the sensory system, understanding and predicting what information is granted prioritization becomes an important endeavor. It has been argued that salient events as well as information that is related to the current goal of the organism (i.e., task-relevant) receive such a priority. Here, we propose that attentional prioritization is not limited to task-relevance, and discuss evidence showing that task-irrelevant, non-salient, high-level properties of unattended objects, namely object meaning and size, influence attentional allocation. Such an intrusion of non-salient, task-irrelevant, high-level information points to the need to re-conceptualize and formally modify current models of attentional guidance.


Subjects
Attention, Visual Pattern Recognition, Semantics, Humans, Task Performance and Analysis
9.
Front Hum Neurosci ; 12: 189, 2018.
Article in English | MEDLINE | ID: mdl-29867413

ABSTRACT

We can understand viewed scenes and extract task-relevant information within a few hundred milliseconds. This process is generally supported by three cortical regions that show selectivity for scene images: parahippocampal place area (PPA), medial place area (MPA) and occipital place area (OPA). Prior studies have focused on the visual information each region is responsive to, usually within the context of recognition or navigation. Here, we move beyond these tasks to investigate gaze allocation during scene viewing. Eye movements rely on a scene's visual representation to direct saccades, and thus foveal vision. In particular, we focus on the contribution of OPA, which is: (i) located in occipito-parietal cortex, likely feeding information into parts of the dorsal pathway critical for eye movements; and (ii) contains strong retinotopic representations of the contralateral visual field. Participants viewed scene images for 1034 ms while their eye movements were recorded. On half of the trials, a 500 ms train of five transcranial magnetic stimulation (TMS) pulses was applied to the participant's cortex, starting at scene onset. TMS was applied to the right hemisphere over either OPA or the occipital face area (OFA), which also exhibits a contralateral visual field bias but shows selectivity for face stimuli. Participants generally made an overall left-to-right, top-to-bottom pattern of eye movements across all conditions. When TMS was applied to OPA, there was an increased saccade latency for eye movements toward the contralateral relative to the ipsilateral visual field after the final TMS pulse (400 ms). Additionally, TMS to the OPA biased fixation positions away from the contralateral side of the scene compared to the control condition, while the OFA group showed no such effect. There was no effect on horizontal saccade amplitudes. These combined results suggest that OPA might serve to represent local scene information that can then be utilized by visuomotor control networks to guide gaze allocation in natural scenes.

10.
Trends Cogn Sci ; 20(11): 843-856, 2016 11.
Article in English | MEDLINE | ID: mdl-27769727

ABSTRACT

To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.


Subjects
Brain/physiology, Space Perception/physiology, Visual Perception/physiology, Environment, Humans, Sensation
11.
Atten Percept Psychophys ; 78(7): 2066-78, 2016 10.
Article in English | MEDLINE | ID: mdl-27381630

ABSTRACT

Every object is represented by semantic information in addition to its low-level properties. It is well documented that such information biases attention when it is necessary for an ongoing task. However, whether semantic relationships influence attentional selection when they are irrelevant to the ongoing task remains an open question. The ubiquitous nature of semantic information suggests that it could bias attention even when these properties are irrelevant. In the present study, three objects appeared on screen, two of which were semantically related. After a varying time interval, a target or distractor appeared on top of each object. The objects' semantic relationships never predicted the target location. Despite this, a semantic bias on attentional allocation was observed, with an initial, transient bias to semantically related objects. Further experiments demonstrated that this effect was contingent on the objects being attended: if an object never contained the target, it no longer exerted a semantic influence. In a final set of experiments, we demonstrated that the semantic bias is robust and appears even in the presence of more predictive cues (spatial probability). These results suggest that as long as an object is attended, its semantic properties bias attention, even if it is irrelevant to an ongoing task and if more predictive factors are available.


Subjects
Attention/physiology, Visual Pattern Recognition/physiology, Semantics, Adult, Female, Humans, Male, Young Adult
12.
J Vis ; 16(2): 3, 2016.
Article in English | MEDLINE | ID: mdl-26824640

ABSTRACT

The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location on the visual field. The current study investigated how features across the visual field, particularly color, facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color; with color in the periphery and gray in central vision; with gray in the periphery and color in central vision; or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.


Subjects
Color Vision/physiology, Eye Movements/physiology, Visual Pattern Recognition/physiology, Ocular Vision/physiology, Adolescent, Adult, Attention, Cues, Female, Ocular Fixation/physiology, Humans, Male, Visual Fields/physiology, Young Adult
13.
Cogn Sci ; 40(8): 1995-2024, 2016 11.
Article in English | MEDLINE | ID: mdl-26519097

ABSTRACT

The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices.


Subjects
Attention/physiology, Memory/physiology, Visual Perception/physiology, Adult, Eye Movements/physiology, Female, Humans, Male, Photic Stimulation, Young Adult
14.
J Exp Psychol Gen ; 144(2): 257-63, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25844622

ABSTRACT

We are continually confronted with more visual information than we can process in a given moment. In order to interact effectively with our environment, attentional mechanisms are used to select subsets of environmental properties for enhanced processing. Previous research demonstrated that spatial regions can be selected based on either their low-level feature or high-level semantic properties. However, the efficiency with which we interact with the world suggests that there must be an additional, midlevel, factor constraining effective attentional space. The present study investigates whether object-based attentional selection is one such midlevel factor that constrains visual attention in complex, real-world scenes. Participants viewed scene images while their eye movements were recorded. During viewing, a cue appeared on an object which participants were instructed to fixate. A target then appeared either on the same object as the cue, on a different object, or floating. Participants initiated saccades faster and had shorter response times to targets presented on the same object as the fixated cue. The results strongly suggest that when attending to a location on an object, the entire object benefits perceptually. This object-based effect on the distribution of spatial attention forms a critical link between low- and high-level factors that direct attention efficiently in complex real-world scenes.


Subjects
Attention/physiology, Visual Pattern Recognition/physiology, Saccades/physiology, Space Perception/physiology, Adult, Eye Movement Measurements, Humans, Young Adult
15.
J Vis ; 15(2)2015 Feb 10.
Article in English | MEDLINE | ID: mdl-25761330

ABSTRACT

Previous research has suggested that correctly placed objects facilitate eye guidance, but also that objects violating spatial associations within scenes may be prioritized for selection and subsequent inspection. We analyzed the respective eye guidance of spatial expectations and target template (precise picture or verbal label) in visual search, while taking into account any impact of object spatial inconsistency on extrafoveal or foveal processing. Moreover, we isolated search disruption due to misleading spatial expectations about the target from the influence of spatial inconsistency within the scene upon search behavior. Reliable spatial expectations and precise target template improved oculomotor efficiency across all search phases. Spatial inconsistency resulted in preferential saccadic selection when guidance by template was insufficient to ensure effective search from the outset and the misplaced object was bigger than the objects consistently placed in the same scene region. This prioritization emerged principally during early inspection of the region, but the inconsistent object also tended to be preferentially fixated overall across region viewing. These results suggest that objects are first selected covertly on the basis of their relative size and that subsequent overt selection is made considering object-context associations processed in extrafoveal vision. Once the object was fixated, inconsistency resulted in longer first fixation duration and longer total dwell time. As a whole, our findings indicate that observed impairment of oculomotor behavior when searching for an implausibly placed target is the combined product of disruption due to unreliable spatial expectations and prioritization of inconsistent objects before and during object fixation.


Subjects
Cues, Eye Movements/physiology, Visual Pattern Recognition/physiology, Space Perception/physiology, Visual Pathways/physiology, Adolescent, Adult, Female, Humans, Male, Photic Stimulation, Young Adult
16.
Psychol Sci ; 25(5): 1087-97, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24604146

ABSTRACT

Research on scene categorization generally concentrates on gist processing, particularly the speed and minimal features with which the "story" of a scene can be extracted. However, this focus has led to a paucity of research into how scenes are categorized at specific hierarchical levels (e.g., a scene could be a road or more specifically a highway); consequently, research has disregarded a potential diagnostically driven feedback process. We presented participants with scenes that were low-pass filtered so only their gist was revealed, while a gaze-contingent window provided the fovea with full-resolution details. By recording where in a scene participants fixated prior to making a basic- or subordinate-level judgment, we identified the scene information accrued when participants made either categorization. We observed a feedback process, dependent on categorization level, that systematically accrues sufficient and detailed diagnostic information from the same scene. Our results demonstrate that during scene processing, a diagnostically driven bidirectional interplay between top-down and bottom-up information facilitates relevant category processing.


Subjects
Form Perception/physiology, Visual Pattern Recognition/physiology, Eye Movements/physiology, Ocular Fixation/physiology, Humans, Judgment/physiology, Reaction Time/physiology
17.
J Vis ; 14(2)2014 Feb 11.
Article in English | MEDLINE | ID: mdl-24520149

ABSTRACT

This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (the picture or the name of the target) and the plausibility of target position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and guidance of spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of target template, and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and the quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene.


Subjects
Attention/physiology, Cues, Eye Movements/physiology, Visual Pattern Recognition/physiology, Visual Pathways/physiology, Adolescent, Female, Humans, Male, Photic Stimulation/methods, Young Adult
18.
Q J Exp Psychol (Hove) ; 67(6): 1096-120, 2014.
Article in English | MEDLINE | ID: mdl-24224949

ABSTRACT

An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.


Subjects
Attention/physiology, Ocular Fixation, Names, Visual Pattern Recognition/physiology, Acoustic Stimulation, Female, Humans, Linear Models, Linguistics, Male, Photic Stimulation, Reaction Time/physiology
19.
Neuropsychologia ; 50(7): 1271-85, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22391475

ABSTRACT

Object processing is affected by the gist of the scene within which it is embedded. Previous ERP research has suggested that manipulating the semantic congruency between an object and the surrounding scene affects the high-level (semantic) representation of that object emerging after the presentation of the scene (Ganis & Kutas, 2003). In two ERP experiments, we investigated whether there would be a similar electrophysiological response when the spatial congruency of an object in a scene was manipulated while the semantic congruency remained the same. Apart from the location of the object, all other object features were congruent with the scene (e.g., in a bedroom scene, either a painting or a cat appeared on the wall). In the first experiment, participants were shown a location cue and then a scene image for 300 ms, after which an object image appeared on the cued location for 300 ms. Spatially incongruent objects elicited a stronger centro-frontal N300-N400 effect in the 275-500 ms window relative to the spatially congruent objects. We also found early ERP effects, dominant on the left hemisphere electrodes. Strikingly, LORETA analysis revealed that these activations were mainly located in the superior and middle temporal gyrus of the right hemisphere. In the second experiment, we used a paradigm similar to Mudrik, Lamy, and Deouell (2010). The scene and the object were presented together for 300 ms after the location cue. This time, we observed neither the early effect nor the pronounced N300-N400 effect. In contrast to Experiment 1, LORETA analysis on the N400 time-window revealed that the generators of these weak ERP effects were mainly located in the temporal lobe of the left hemisphere. Our results suggest that, when the scene is presented before the object, top-down spatial encoding processes are initiated and the right superior temporal gyrus is activated, as previously suggested (Ellison, Schindler, Pattison, & Milner, 2004). Mismatch between the actual object features and the spatially driven top-down structural and functional features could lead to the early effect, and then to the N300-N400 effect. In contrast, when the scene is not presented before the object, spatial encoding cannot happen early and strongly enough to initiate spatial object-integration effects. Our results indicate that spatial information is an early and essential part of scene-object integration, and that it primes structural as well as semantic features of an object.


Subjects
Brain Mapping, Brain/physiology, Visual Evoked Potentials/physiology, Visual Pattern Recognition/physiology, Adolescent, Adult, Analysis of Variance, Electroencephalography, Female, Functional Laterality, Humans, Male, Photic Stimulation, Reaction Time/physiology, Time Factors, Young Adult
20.
J Vis ; 10(2): 4.1-11, 2010 Feb 10.
Article in English | MEDLINE | ID: mdl-20462305

ABSTRACT

Eye movements can be guided by various types of information in real-world scenes. Here we investigated how the visual system combines multiple types of top-down information to facilitate search. We manipulated independently the specificity of the search target template and the usefulness of contextual constraint in an object search task. An eye tracker was used to segment search time into three behaviorally defined epochs so that influences on specific search processes could be identified. The results support previous studies indicating that the availability of either a specific target template or scene context facilitates search. The results also show that target template and contextual constraints combine additively in facilitating search. The results extend recent eye guidance models by suggesting the manner in which our visual system utilizes multiple types of top-down information.


Subjects
Attention/physiology, Visual Pattern Recognition/physiology, Saccades/physiology, Visual Pathways/physiology, Adult, Female, Field Dependence-Independence, Humans, Male, Photic Stimulation/methods, Visual Fields/physiology, Young Adult