Results 1 - 20 of 36
1.
Sci Rep ; 14(1): 13175, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38849398

ABSTRACT

Recent behavioral evidence suggests that the semantic relationships between isolated objects can influence attentional allocation, with highly semantically related objects showing an increase in processing efficiency. This semantic influence is present even when it is task-irrelevant (i.e., when semantic information is not central to the task). However, given that objects exist within larger contexts, i.e., scenes, it is critical to understand whether the semantic relationship between a scene and its objects continuously influences attention. Here, we investigated the influence of task-irrelevant scene semantic properties on attentional allocation and the degree to which semantic relationships between scenes and objects interact. Results suggest that task-irrelevant associations between scenes and objects continuously influence attention and that this influence is directly predicted by the perceived strength of semantic associations.

2.
Wiley Interdiscip Rev Cogn Sci ; 15(3): e1675, 2024.
Article in English | MEDLINE | ID: mdl-38243393

ABSTRACT

Real-world environments are multisensory, meaningful, and highly complex. To parse these environments in a highly efficient manner, a subset of this information must be selected both within and across modalities. However, the bulk of attention research has been conducted within single sensory modalities, with a particular focus on vision. Visual attention research has made great strides, with over a century of research methodically identifying the underlying mechanisms that allow us to select critical visual information. Spatial attention, attention to features, and object-based attention have all been studied extensively. More recently, research has established semantics (meaning) as a key component of attentional allocation in real-world scenes, with the meaning of an item or environment affecting visual attentional selection. However, a full understanding of how semantic information modulates real-world attention requires studying more than vision in isolation. The world provides semantic information across all senses, but with this extra information comes greater complexity. Here, we summarize visual attention (including semantic-based visual attention) and crossmodal attention, and argue for the importance of studying crossmodal semantic guidance of attention. This article is categorized under: Psychology > Attention; Psychology > Perception and Psychophysics.


Subject(s)
Attention; Semantics; Visual Perception; Attention/physiology; Humans; Visual Perception/physiology
3.
Q J Exp Psychol (Hove) ; : 17470218241230812, 2024 Feb 18.
Article in English | MEDLINE | ID: mdl-38279528

ABSTRACT

It has been repeatedly shown that pictures of graspable objects can facilitate visual processing, even in the absence of reach-to-grasp actions, an effect often attributed to the concept of affordances. A classic demonstration of this is the handle compatibility effect, characterised by faster reaction times when the orientation of a graspable object's handle is compatible with the hand used to respond, even when the handle orientation is task-irrelevant. Nevertheless, it is debated whether the speeded reaction times are a result of affordances or spatial compatibility. First, we investigated whether we could replicate the handle compatibility effect while controlling for spatial compatibility. Participants (N = 68) responded with left- or right-handed keypresses to whether the object was upright or inverted and, in separate blocks, whether the object was red or green. We failed to replicate the handle compatibility effect, with no significant difference between compatible and incompatible conditions in either task. Second, we investigated whether there is a lower visual field (VF) advantage for the handle compatibility effect, in line with what has been found for hand actions. A further 68 participants responded to the orientation of objects presented either in the upper or lower VF. A significant handle compatibility effect was observed in the lower VF, but not the upper VF. This suggests that there is a lower VF advantage for affordances, possibly because the lower VF is where our actions most frequently occur. However, future studies should explore the impact of eye movements on the handle compatibility effect and tool affordances.

4.
Neurosurg Focus ; 55(6): E2, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38039525

ABSTRACT

OBJECTIVE: There is growing evidence for the use of enhanced recovery protocols (ERPs) in cranial surgery. As they become widespread, successful implementation of these complex interventions will become a challenge for neurosurgical teams owing to the need for multidisciplinary engagement. Here, the authors describe the novel use of an implementation framework (normalization process theory [NPT]) to promote the incorporation of a cranial surgery ERP into routine neuro-oncology practice. METHODS: A baseline audit was conducted to determine the degree of implementation of the ERP into practice. The Normalization MeAsure Development (NoMAD) questionnaire was circulated among 6 groups of stakeholders (neurosurgeons, anesthetists, intensivists, recovery nurses, preoperative assessment nurses, and neurosurgery ward staff) to examine barriers to implementation. Based on these findings, a theory-guided implementation intervention was delivered. A repeat audit and NoMAD questionnaire were conducted to assess the impact of the intervention on the uptake of the ERP. RESULTS: The baseline audit (n = 24) demonstrated limited delivery of the ERP elements. The NoMAD questionnaire (n = 32) identified 4 subconstructs of the NPT as barriers to implementation: communal specification, contextual integration, skill set workability, and relational integration. These guided an implementation intervention that included the following: 1) teamwork-focused training; 2) ERP promotion; and 3) procedure simplification. The reaudit (n = 21) demonstrated significant increases in the delivery of 5 protocol elements: scalp block (12.5% of patients before intervention vs 76.2% of patients after intervention, p < 0.00001), recommended analgesia (25.0% vs 100.0%, p < 0.00001) and antiemetics (12.5% vs 100.0%, p < 0.00001), trial without catheter (13.6% vs 88.9%, p < 0.00001), and mobilization on the 1st postoperative day (45.5% vs 94.4%, p < 0.00001). 
There was a significant reduction in the mean hospital length of stay from 6.3 ± 3.4 to 4.2 ± 1.7 days (p = 0.022). Two months after implementation, a repeat NoMAD survey demonstrated significant improvement in communal specification. CONCLUSIONS: Here, the authors have demonstrated the successful implementation of a cranial surgery ERP by using a systematic theory-based approach.
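The before/after proportions reported above can be checked with a standard two-proportion test. The abstract does not state which test the authors used, so the following Python sketch is illustrative only: a one-sided Fisher's exact test applied to the scalp-block figures (3 of 24 patients before the intervention vs. 16 of 21 after).

```python
from math import comb

def fisher_one_sided(a, n1, b, n2):
    """One-sided Fisher's exact test: probability of observing a or
    fewer successes in group 1 if both groups share the same underlying
    rate (hypergeometric tail of the 2x2 table)."""
    successes = a + b
    total = n1 + n2
    tail = sum(
        comb(successes, k) * comb(total - successes, n1 - k)
        for k in range(a + 1)
    )
    return tail / comb(total, n1)

# Scalp block delivered: 3 of 24 patients before vs. 16 of 21 after.
p = fisher_one_sided(3, 24, 16, 21)
print(f"p = {p:.1e}")  # well below the conventional 0.05 threshold
```

The quoted p < 0.00001 values come from the authors' own analysis; this sketch only illustrates how such a comparison of proportions can be tested.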


Subject(s)
Neurosurgical Procedures; Humans; Surveys and Questionnaires; Length of Stay
5.
J Exp Psychol Gen ; 152(7): 1907-1936, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37126050

ABSTRACT

Scene meaning is processed rapidly, with "gist" extracted even when presentation duration spans a few dozen milliseconds. This has led some to suggest a primacy of bottom-up information. However, gist research has typically relied on showing successions of unrelated scene images, contrary to our everyday experience in which the world unfolds around us in a predictable manner. Thus, we investigated whether top-down information, in the form of observers' predictions of an upcoming scene, facilitates gist processing. Within each trial, participants (N = 370) experienced a series of images, organized to represent an approach to a destination (e.g., walking down a sidewalk), followed by a target scene either congruous or incongruous with the expected destination (e.g., a store interior or a bedroom). A series of behavioral experiments revealed that appropriate expectations facilitated gist processing; inappropriate expectations interfered with gist processing; sequentially arranged scene images benefitted gist processing when semantically related to the target scene; expectation-based facilitation was most apparent when presentation duration was most curtailed; and findings were not simply the result of response bias. We then investigated the neural correlates of predictability on scene processing using event-related potentials (ERPs) (N = 24). Congruency-related differences were found in a putative scene-selective ERP component related to integrating visual properties (P2), and in later components related to contextual integration, including semantic and syntactic coherence (N400 and P600, respectively). Together, results suggest that in real-world situations, top-down predictions of an upcoming scene influence even the earliest stages of its processing, affecting both the integration of visual properties and meaning. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Evoked Potentials; Motivation; Humans; Male; Female; Evoked Potentials/physiology; Electroencephalography; Reaction Time/physiology; Photic Stimulation
6.
Cognition ; 235: 105398, 2023 06.
Article in English | MEDLINE | ID: mdl-36791506

ABSTRACT

Face pareidolia is the experience of seeing illusory faces in inanimate objects. While children experience face pareidolia, it is unknown whether they perceive gender in illusory faces, as their face evaluation system is still developing in the first decade of life. In a sample of 412 children and adults from 4 to 80 years of age, we found that, like adults, children perceived many illusory faces in objects to have a gender and had a strong bias to see them as male rather than female, regardless of their own gender identification. These results provide evidence that the male bias for face pareidolia emerges early in life, even before the ability to discriminate gender from facial cues alone is fully developed. Further, the existence of a male bias in children suggests that any social context that elicits the cognitive bias to see faces as male has remained relatively consistent across generations.


Subject(s)
Face; Illusions; Adult; Humans; Male; Child; Female; Illusions/psychology
7.
Atten Percept Psychophys ; 85(1): 113-119, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36451076

ABSTRACT

Visual short-term memory (VSTM) is an essential store that creates continuous representations from disjointed visual input. However, severe capacity limits exist, reflecting constraints in supporting brain networks. VSTM performance shows spatial biases predicted by asymmetries in the brain, based upon the location of the remembered object. Visual representations are retinotopic, i.e., relative to the location of the representation on the retina. It therefore stands to reason that memory performance may also show retinotopic biases. Here, eye position was manipulated to tease apart retinotopic coordinates from spatiotopic coordinates (location relative to the external world). Memory performance was measured while participants performed a color change-detection task for items presented across the visual field while they fixated either a central or a peripheral position. VSTM biases reflected the location of the stimulus on the retina, regardless of where the stimulus appeared on the screen. Therefore, spatial biases in VSTM occur in retinotopic coordinates, suggesting a fundamental link between behavioral VSTM measures and visual representations.
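The coordinate logic behind this manipulation is simple: a stimulus's retinotopic location is its screen (spatiotopic) location expressed relative to the current fixation point. A minimal sketch, with hypothetical coordinates not taken from the study:

```python
def retinotopic_position(stimulus_xy, fixation_xy):
    """Retinal location of a stimulus: its screen position relative
    to where the eyes are currently pointed."""
    return (stimulus_xy[0] - fixation_xy[0],
            stimulus_xy[1] - fixation_xy[1])

# The same screen location lands on different parts of the retina
# under central vs. peripheral fixation:
print(retinotopic_position((10, 0), (0, 0)))   # (10, 0)
print(retinotopic_position((10, 0), (20, 0)))  # (-10, 0)
```

Holding screen position constant while moving fixation, as the study did, is what dissociates the two coordinate frames.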


Subject(s)
Memory, Short-Term; Visual Fields; Humans; Brain; Mental Recall; Cognition; Visual Perception
8.
Br J Neurosurg ; : 1-6, 2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36541810

ABSTRACT

OBJECTIVE: To present a case series and literature review of patients suffering from symptomatic tension subdural extra-arachnoid hygroma following decompressive surgery for degenerative lumbar stenosis or disc disease. The purpose was to better understand this rare post-operative complication, which has a pathognomonic radiological sign, and to recommend optimal strategies for clinical management. METHODS: A retrospective case series comprising seven cases from one tertiary neurosurgical centre, spanning a 10-year period from 2011 to 2021. Patients included were those known to have undergone a spinal procedure and subsequently to have developed a symptomatic spinal subdural extra-arachnoid hygroma (SSEH). A literature review was conducted using PubMed, MEDLINE and EMBASE (keywords 'subdural hygroma', 'lumbar CSF hygroma', 'extra arachnoid hygroma', 'extra-arachnoid CSF collection', 'CSF tension hygroma', 'lumbar extra arachnoid hygroma', 'lumbar spinal hygroma', 'post-operating spinal hygroma', 'post-operative spinal CSF collection') and through reading references cited in relevant articles. Articles involving post-operative SSEH following lumbar spinal surgery were included. RESULTS: This is a rare complication, with only five other cases in the literature. A dural breach was described intra-operatively in only 5 of the 12 total cases from our series and the literature. Five patients in our series were managed surgically and two conservatively. All patients in our series improved symptomatically and radiologically following surgical or conservative management. CONCLUSIONS: This is a rare post-lumbar-surgery complication that can cause rapidly deteriorating lower limb and sphincteric function. Surgical management with wide durotomy and arachnoid marsupialisation can lead to reversal of neurological deterioration and excellent clinical results. A delayed presentation with pseudomeningocele formation may be managed conservatively if neurology is stable or improving.
This condition is important for the clinician to recognise in order to instigate appropriate management in a time-dependent fashion.

9.
Atten Percept Psychophys ; 84(4): 1317-1327, 2022 May.
Article in English | MEDLINE | ID: mdl-35449432

ABSTRACT

Semantic information about objects, events, and scenes influences how humans perceive, interact with, and navigate the world. The semantic information about any object or event can be highly complex and frequently draws on multiple sensory modalities, which makes it difficult to quantify. Past studies have primarily relied either on a simplified binary classification of semantic relatedness based on category or on algorithmic values based on text corpora rather than human perceptual experience and judgment. With the aim of further accelerating research into multisensory semantics, we created a constrained audiovisual stimulus set and derived similarity ratings between items within three categories (animals, instruments, household items). A set of 140 participants provided similarity judgments between sounds and images. Participants either heard a sound (e.g., a meow) and judged which of two pictures of objects (e.g., a picture of a dog and a duck) it was more similar to, or saw a picture (e.g., a picture of a duck) and selected which of two sounds it was more similar to (e.g., a bark or a meow). Judgments were then used to calculate similarity values for any given cross-modal pair. An additional 140 participants provided word judgments used to calculate the similarity of word-word pairs. The derived and reported similarity judgments reflect a range of semantic similarities across the three categories and items, and highlight similarities and differences among similarity judgments between modalities. We make the derived similarity values available in a database format to the research community to be used as a measure of semantic relatedness in cognitive psychology experiments, enabling more robust studies of semantics in audiovisual environments.
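One simple way such two-alternative judgments could be aggregated into pairwise similarity scores is as choice proportions: how often an item was picked when it was on offer next to a given probe. The toy data and the aggregation rule below are hypothetical; the paper's actual derivation may differ.

```python
from collections import defaultdict

# Each trial: (probe, option_a, option_b, chosen_option).
trials = [
    ("meow", "dog", "duck", "dog"),
    ("meow", "dog", "duck", "dog"),
    ("meow", "dog", "duck", "duck"),
]

chosen = defaultdict(int)   # times an item was picked for a probe
offered = defaultdict(int)  # times an item was on offer for a probe
for probe, a, b, choice in trials:
    for option in (a, b):
        offered[(probe, option)] += 1
    chosen[(probe, choice)] += 1

# Similarity of a cross-modal pair = proportion of trials on which
# the item was chosen when offered alongside the probe.
similarity = {pair: round(chosen[pair] / n, 3) for pair, n in offered.items()}
print(similarity)  # {('meow', 'dog'): 0.667, ('meow', 'duck'): 0.333}
```

With real data, every item pairing would appear with many probes and counterbalanced alternatives, giving a full similarity matrix rather than this two-entry example.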


Subject(s)
Judgment; Semantics; Female; Humans
10.
Cereb Cortex Commun ; 2(3): tgab049, 2021.
Article in English | MEDLINE | ID: mdl-34447936

ABSTRACT

Objects can be described in terms of low-level (e.g., boundaries) and high-level properties (e.g., object semantics). While recent behavioral findings suggest that the influence of semantic relatedness between objects on attentional allocation can be independent of task-relevance, the underlying neural substrate of semantic influences on attention remains ill-defined. Here, we employ behavioral and functional magnetic resonance imaging measures to uncover the mechanism by which semantic information increases visual processing efficiency. We demonstrate that the strength of the semantic relatedness signal decoded from the left inferior frontal gyrus: 1) influences attention, producing behavioral semantic benefits; 2) biases spatial attention maps in the intraparietal sulcus, subsequently modulating early visual cortex activity; and 3) directly predicts the magnitude of behavioral semantic benefit. Altogether, these results identify a specific mechanism driving task-independent semantic influences on attention.

11.
Curr Opin Psychol ; 29: 153-159, 2019 10.
Article in English | MEDLINE | ID: mdl-30925285

ABSTRACT

Attentional selection is a mechanism by which incoming sensory information is prioritized for further, more detailed, and more effective processing. Given that attended information is privileged by the sensory system, understanding and predicting what information is granted prioritization becomes an important endeavor. It has been argued that salient events, as well as information that is related to the current goal of the organism (i.e., task-relevant information), receive such priority. Here, we propose that attentional prioritization is not limited to task-relevance, and discuss evidence showing that task-irrelevant, non-salient, high-level properties of unattended objects, namely object meaning and size, influence attentional allocation. Such an intrusion of non-salient, task-irrelevant, high-level information points to the need to re-conceptualize and formally modify current models of attentional guidance.


Subject(s)
Attention; Pattern Recognition, Visual; Semantics; Humans; Task Performance and Analysis
12.
Front Hum Neurosci ; 12: 189, 2018.
Article in English | MEDLINE | ID: mdl-29867413

ABSTRACT

We can understand viewed scenes and extract task-relevant information within a few hundred milliseconds. This process is generally supported by three cortical regions that show selectivity for scene images: parahippocampal place area (PPA), medial place area (MPA) and occipital place area (OPA). Prior studies have focused on the visual information each region is responsive to, usually within the context of recognition or navigation. Here, we move beyond these tasks to investigate gaze allocation during scene viewing. Eye movements rely on a scene's visual representation to direct saccades, and thus foveal vision. In particular, we focus on the contribution of OPA, which is: (i) located in occipito-parietal cortex, likely feeding information into parts of the dorsal pathway critical for eye movements; and (ii) contains strong retinotopic representations of the contralateral visual field. Participants viewed scene images for 1034 ms while their eye movements were recorded. On half of the trials, a 500 ms train of five transcranial magnetic stimulation (TMS) pulses was applied to the participant's cortex, starting at scene onset. TMS was applied to the right hemisphere over either OPA or the occipital face area (OFA), which also exhibits a contralateral visual field bias but shows selectivity for face stimuli. Participants generally made an overall left-to-right, top-to-bottom pattern of eye movements across all conditions. When TMS was applied to OPA, there was an increased saccade latency for eye movements toward the contralateral relative to the ipsilateral visual field after the final TMS pulse (400 ms). Additionally, TMS to the OPA biased fixation positions away from the contralateral side of the scene compared to the control condition, while the OFA group showed no such effect. There was no effect on horizontal saccade amplitudes. 
These combined results suggest that OPA might serve to represent local scene information that can then be utilized by visuomotor control networks to guide gaze allocation in natural scenes.

13.
Trends Cogn Sci ; 20(11): 843-856, 2016 11.
Article in English | MEDLINE | ID: mdl-27769727

ABSTRACT

To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.


Subject(s)
Brain/physiology; Space Perception/physiology; Visual Perception/physiology; Environment; Humans; Sensation
14.
Atten Percept Psychophys ; 78(7): 2066-78, 2016 10.
Article in English | MEDLINE | ID: mdl-27381630

ABSTRACT

Every object is represented by semantic information in addition to its low-level properties. It is well documented that such information biases attention when it is necessary for an ongoing task. However, whether semantic relationships influence attentional selection when they are irrelevant to the ongoing task remains an open question. The ubiquitous nature of semantic information suggests that it could bias attention even when these properties are irrelevant. In the present study, three objects appeared on screen, two of which were semantically related. After a varying time interval, a target or distractor appeared on top of each object. The objects' semantic relationships never predicted the target location. Despite this, a semantic bias on attentional allocation was observed, with an initial, transient bias toward semantically related objects. Further experiments demonstrated that this effect was contingent on the objects being attended: if an object never contained the target, it no longer exerted a semantic influence. In a final set of experiments, we demonstrated that the semantic bias is robust and appears even in the presence of more predictive cues (spatial probability). These results suggest that as long as an object is attended, its semantic properties bias attention, even when they are irrelevant to the ongoing task and more predictive factors are available.


Subject(s)
Attention/physiology; Pattern Recognition, Visual/physiology; Semantics; Adult; Female; Humans; Male; Young Adult
15.
J Vis ; 16(2): 3, 2016.
Article in English | MEDLINE | ID: mdl-26824640

ABSTRACT

The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location in the visual field. The current study investigated how features across the visual field, particularly color, facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color; with color in the periphery and gray in central vision; with gray in the periphery and color in central vision; or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data-based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors, depending on the location within the visual field.


Subject(s)
Color Vision/physiology; Eye Movements/physiology; Pattern Recognition, Visual/physiology; Vision, Ocular/physiology; Adolescent; Adult; Attention; Cues; Female; Fixation, Ocular/physiology; Humans; Male; Visual Fields/physiology; Young Adult
16.
Cogn Sci ; 40(8): 1995-2024, 2016 11.
Article in English | MEDLINE | ID: mdl-26519097

ABSTRACT

The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: when the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but pose a challenge for theories assuming object-based visual indices.


Subject(s)
Attention/physiology; Memory/physiology; Visual Perception/physiology; Adult; Eye Movements/physiology; Female; Humans; Male; Photic Stimulation; Young Adult
17.
J Exp Psychol Gen ; 144(2): 257-63, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25844622

ABSTRACT

We are continually confronted with more visual information than we can process in a given moment. In order to interact effectively with our environment, attentional mechanisms are used to select subsets of environmental properties for enhanced processing. Previous research demonstrated that spatial regions can be selected based on either their low-level feature or high-level semantic properties. However, the efficiency with which we interact with the world suggests that there must be an additional, midlevel factor constraining effective attentional space. The present study investigates whether object-based attentional selection is one such midlevel factor that constrains visual attention in complex, real-world scenes. Participants viewed scene images while their eye movements were recorded. During viewing, a cue appeared on an object which participants were instructed to fixate. A target then appeared either on the same object as the cue, on a different object, or floating. Participants initiated saccades faster and had shorter response times to targets presented on the same object as the fixated cue. The results strongly suggest that when attending to a location on an object, the entire object benefits perceptually. This object-based effect on the distribution of spatial attention forms a critical link between low- and high-level factors that direct attention efficiently in complex real-world scenes.


Subject(s)
Attention/physiology; Pattern Recognition, Visual/physiology; Saccades/physiology; Space Perception/physiology; Adult; Eye Movement Measurements; Humans; Young Adult
18.
J Vis ; 15(2)2015 Feb 10.
Article in English | MEDLINE | ID: mdl-25761330

ABSTRACT

Previous research has suggested that correctly placed objects facilitate eye guidance, but also that objects violating spatial associations within scenes may be prioritized for selection and subsequent inspection. We analyzed the respective contributions of spatial expectations and of the target template (a precise picture or a verbal label) to eye guidance in visual search, while taking into account any impact of object spatial inconsistency on extrafoveal or foveal processing. Moreover, we isolated search disruption due to misleading spatial expectations about the target from the influence of spatial inconsistency within the scene upon search behavior. Reliable spatial expectations and a precise target template improved oculomotor efficiency across all search phases. Spatial inconsistency resulted in preferential saccadic selection when guidance by template was insufficient to ensure effective search from the outset and the misplaced object was bigger than the objects consistently placed in the same scene region. This prioritization emerged principally during early inspection of the region, but the inconsistent object also tended to be preferentially fixated overall across region viewing. These results suggest that objects are first selected covertly on the basis of their relative size and that subsequent overt selection is made considering object-context associations processed in extrafoveal vision. Once the object was fixated, inconsistency resulted in longer first fixation durations and longer total dwell times. As a whole, our findings indicate that the observed impairment of oculomotor behavior when searching for an implausibly placed target is the combined product of disruption due to unreliable spatial expectations and prioritization of inconsistent objects before and during object fixation.


Subject(s)
Cues; Eye Movements/physiology; Pattern Recognition, Visual/physiology; Space Perception/physiology; Visual Pathways/physiology; Adolescent; Adult; Female; Humans; Male; Photic Stimulation; Young Adult
20.
Psychol Sci ; 25(5): 1087-97, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24604146

ABSTRACT

Research on scene categorization generally concentrates on gist processing, particularly the speed and minimal features with which the "story" of a scene can be extracted. However, this focus has led to a paucity of research into how scenes are categorized at specific hierarchical levels (e.g., a scene could be a road or, more specifically, a highway); consequently, research has disregarded a potential diagnostically driven feedback process. We presented participants with scenes that were low-pass filtered so only their gist was revealed, while a gaze-contingent window provided the fovea with full-resolution details. By recording where in a scene participants fixated prior to making a basic- or subordinate-level judgment, we identified the scene information accrued when participants made either categorization. We observed a feedback process, dependent on categorization level, that systematically accrues sufficient and detailed diagnostic information from the same scene. Our results demonstrate that during scene processing, a diagnostically driven bidirectional interplay between top-down and bottom-up information facilitates relevant category processing.


Subject(s)
Form Perception/physiology; Pattern Recognition, Visual/physiology; Eye Movements/physiology; Fixation, Ocular/physiology; Humans; Judgment/physiology; Reaction Time/physiology