Results 1 - 16 of 16
1.
Psychon Bull Rev ; 30(5): 1874-1886, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37095319

ABSTRACT

While object meaning has been demonstrated to guide attention during active scene viewing and object salience guides attention during passive viewing, it is unknown whether object meaning predicts attention in passive viewing tasks and whether attention during passive viewing is more strongly related to meaning or salience. To answer this question, we used a mixed modeling approach in which we computed the average meaning and physical salience of objects in scenes while statistically controlling for the roles of object size and eccentricity. Using eye-movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than low-meaning objects while controlling for object salience, size, and eccentricity. The results demonstrated that fixations are more likely to be directed to high-meaning objects than low-meaning objects regardless of these other factors. Further analyses revealed that fixation durations were positively associated with object meaning irrespective of the other object properties. Overall, these findings provide the first evidence that objects are selected for attention during passive scene viewing based, in part, on their meaning.
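As a rough illustration of the kind of mixed-model analysis the abstract describes, the sketch below fits fixation duration as a function of object meaning while controlling for salience, size, and eccentricity, with scene as a random grouping factor. The data, column names, and model structure are assumptions for demonstration, not the authors' materials.

```python
# Minimal sketch (not the authors' code): a linear mixed model relating
# fixation duration to object meaning while controlling for salience,
# size, and eccentricity, with scene as a random grouping factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "scene": rng.integers(0, 20, n),           # grouping factor
    "meaning": rng.uniform(1, 7, n),           # rated object meaning
    "salience": rng.uniform(0, 1, n),          # physical salience
    "size": rng.uniform(0.01, 0.2, n),         # proportion of image area
    "eccentricity": rng.uniform(0, 10, n),     # degrees from screen center
})
# Simulate durations with a positive meaning effect, for illustration only.
df["duration"] = 200 + 15 * df["meaning"] + rng.normal(0, 30, n)

model = smf.mixedlm("duration ~ meaning + salience + size + eccentricity",
                    data=df, groups=df["scene"])
print(model.fit().summary())
```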


Subject(s)
Fixation, Ocular; Visual Perception; Humans; Photic Stimulation/methods; Eye Movements; Judgment
2.
Q J Exp Psychol (Hove) ; 76(3): 632-648, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35510885

ABSTRACT

Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informativeness in scenes and salience maps that represented the spatial distribution of conspicuous image features, and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention, across both studies. Meaning explained 58% and 63% of the theoretical ceiling of variance in attention in the two studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely than slower initial saccades to be directed to higher-salience regions, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrated that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search.
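A hedged sketch of one way a "percentage of the theoretical ceiling" figure can be computed: correlate each predictor map with the observed fixation density map and express the squared correlation relative to a ceiling estimated from split-half observer agreement. The arrays and the ceiling estimate below are illustrative assumptions, not the study's data.

```python
# Sketch: variance explained by a predictor map, as a fraction of the
# ceiling given by split-half agreement between observer groups.
import numpy as np

def r_squared(a, b):
    """Squared Pearson correlation between two flattened maps."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1] ** 2

rng = np.random.default_rng(1)
meaning_map   = rng.random((48, 64))
salience_map  = rng.random((48, 64))
density_half1 = rng.random((48, 64))   # fixation density, observer half 1
density_half2 = rng.random((48, 64))   # fixation density, observer half 2

ceiling = r_squared(density_half1, density_half2)   # observer agreement
full_density = (density_half1 + density_half2) / 2

for name, pred in [("meaning", meaning_map), ("salience", salience_map)]:
    share = r_squared(pred, full_density) / ceiling
    print(f"{name}: {share:.0%} of ceiling")
```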


Subject(s)
Semantics; Visual Perception; Humans; Saccades; Fixation, Ocular
3.
Atten Percept Psychophys ; 84(5): 1583-1610, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35484443

ABSTRACT

As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention when static scenes depicted reachable spaces, and attention was otherwise better explained by general informativeness. Because grasping is but one of many object interactions, previous work may have downplayed the influence of object affordances on attention. The current study investigated the relationship between overt visual attention and object affordances versus broadly construed semantic information in scenes as speakers describe or memorize scenes. In addition to meaning and grasp maps (which capture informativeness and grasping object affordances in scenes, respectively), we introduce interact maps, which capture affordances more broadly. In a mixed-effects analysis of 5 eyetracking experiments, we found that meaning predicted fixated locations in a general description task and during scene memorization. Grasp maps marginally predicted fixated locations during action description for scenes that depicted reachable spaces only. Interact maps predicted fixated regions in description experiments alone. Our findings suggest observers allocate attention to scene regions that could be readily interacted with when talking about the scene, while general informativeness preferentially guides attention when the task does not encourage careful consideration of objects in the scene. The current study suggests that the influence of object affordances on visual attention in scenes is mediated by task demands.


Subject(s)
Eye Movements; Visual Perception; Hand Strength; Humans; Pattern Recognition, Visual; Semantics
4.
J Vis ; 22(1): 2, 2022 01 04.
Article in English | MEDLINE | ID: mdl-34982104

ABSTRACT

Numerous studies have demonstrated that visuospatial attention is a requirement for successful working memory encoding. It is unknown, however, whether this established relationship manifests in consistent gaze dynamics as people orient their visuospatial attention toward an encoding target when searching for information in naturalistic environments. To test this, participants' eye movements were recorded while they searched for and encoded objects in a virtual apartment (Experiment 1). We decomposed gaze into 61 features that capture gaze dynamics and trained a sliding-window logistic regression model, which has potential for use in real-time systems, to predict when participants found target objects for working memory encoding. A model trained on group data successfully predicted when people oriented to a target for encoding, both for the trained task (Experiment 1) and for a novel task (Experiment 2) in which a new set of participants found objects and encoded an associated nonword in a cluttered virtual kitchen. Six of these features were predictive of target orienting for encoding, even during the novel task, including decreased distances between subsequent fixation/saccade events, increased fixation probabilities, and slower saccade decelerations before encoding. This suggests that as people orient toward a target to encode new information at the end of search, they decrease task-irrelevant, exploratory sampling behaviors. This behavior was common across the two studies. Together, this research demonstrates how gaze dynamics can be used to capture target orienting for working memory encoding and has implications for real-world use in technology and special populations.
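The sketch below shows the general shape of a sliding-window classifier over gaze features, in the spirit of the model described above; the feature count, labels, and evaluation metric are illustrative assumptions rather than the study's pipeline.

```python
# Sketch only: logistic regression over windowed gaze features to predict
# whether a window ends in orienting to an encoding target.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_windows, n_features = 600, 61           # e.g., 61 gaze-dynamics features
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 2, n_windows)         # 1 = window ends at target orienting

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", scores.mean())
```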


Subject(s)
Memory, Short-Term; Virtual Reality; Attention; Eye Movements; Fixation, Ocular; Humans; Saccades
5.
J Vis ; 21(11): 1, 2021 10 05.
Article in English | MEDLINE | ID: mdl-34609475

ABSTRACT

How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target surfaces as continuous probabilities. Meaning was represented by meaning maps highlighting the distribution of semantic content in local scene regions. Attention was indexed by eye movements during the search for target objects that varied in the likelihood they would appear on specific surfaces. The interaction between surface maps and meaning maps was analyzed to test whether fixations were directed to meaningful scene regions on target-related surfaces. Overall, meaningful scene regions were more likely to be fixated if they appeared on target-related surfaces than if they appeared on target-unrelated surfaces. These findings suggest that the visual system prioritizes meaningful scene regions on target-related surfaces during visual search in scenes.
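As a loose illustration of how surface maps and meaning maps might be combined, the sketch below compares meaning values at fixated locations falling on target-related versus target-unrelated surfaces; the maps, threshold, and fixation coordinates are invented for demonstration.

```python
# Illustrative sketch (assumed data structures): meaning at fixated
# locations, split by whether the location lies on a target-related surface.
import numpy as np

rng = np.random.default_rng(3)
meaning_map = rng.random((48, 64))            # semantic informativeness
surface_map = rng.random((48, 64))            # P(target appears on this surface)
fix_rows = rng.integers(0, 48, 200)           # fixation coordinates
fix_cols = rng.integers(0, 64, 200)

on_target_surface = surface_map[fix_rows, fix_cols] > 0.5
meaning_at_fix = meaning_map[fix_rows, fix_cols]

print("Mean meaning, target-related surfaces:  ",
      meaning_at_fix[on_target_surface].mean())
print("Mean meaning, target-unrelated surfaces:",
      meaning_at_fix[~on_target_surface].mean())
```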


Subject(s)
Attention; Visual Perception; Eye Movements; Humans; Pattern Recognition, Visual; Probability; Semantics
6.
Cognition ; 214: 104742, 2021 09.
Article in English | MEDLINE | ID: mdl-33892912

ABSTRACT

Pedziwiatr, Kümmerer, Wallis, Bethge, & Teufel (2021) contend that Meaning Maps do not represent the spatial distribution of semantic features in scenes. We argue that Pedziwiatr et al. provide neither logical nor empirical support for that claim, and we conclude that Meaning Maps do what they were designed to do: represent the spatial distribution of meaning in scenes.


Subject(s)
Fixation, Ocular; Semantics; Attention; Eye Movements; Humans; Visual Perception
7.
Front Psychol ; 11: 1877, 2020.
Article in English | MEDLINE | ID: mdl-32849101

ABSTRACT

Studies assessing the influence of high-level meaning and low-level image salience on attention in real-world scenes have shown that meaning better predicts eye movements than image salience. However, it is not yet clear whether the advantage of meaning over salience is a general phenomenon or whether it is related to center bias: the tendency for viewers to fixate scene centers. Previous meaning mapping studies have shown that meaning predicts eye movements beyond center bias whereas salience does not. However, these past findings were correlational or post hoc in nature. Therefore, to causally test whether meaning predicts eye movements beyond center bias, we used an established paradigm to reduce center bias in free viewing: moving the initial fixation position away from the center and delaying the first saccade. We compared the ability of meaning maps and image salience maps to account for the spatial distribution of fixations with reduced center bias. We found that meaning continued to explain both overall and early attention significantly better than image salience even when center bias was reduced by this manipulation. In addition, although both meaning and image salience capture scene-specific information, image salience reflects significantly more scene-independent center bias in viewing than meaning does. In total, the present findings indicate that the strong association of attention with meaning is not due to center bias.

8.
Atten Percept Psychophys ; 82(6): 2814-2820, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32557006

ABSTRACT

Working memory is thought to be divided into distinct visual and verbal subsystems. Studies of visual working memory frequently use verbal working memory tasks as control conditions and/or use articulatory suppression to ensure that visual load is not transferred to verbal working memory. Using these verbal tasks relies on the assumption that the verbal working memory load will not interfere with the same processes as visual working memory. In the present study, participants maintained a visual or verbal working memory load as they simultaneously viewed scenes while their eye movements were recorded. Because eye movements and visual working memory are closely linked, we anticipated the visual load would interfere with scene-viewing (and vice versa), while the verbal load would not. Surprisingly, both visual and verbal memory loads interfered with scene-viewing behavior, while eye movements during scene-viewing did not significantly interfere with performance on either memory task. These results suggest that a verbal working memory load can interfere with eye movements in a visual task.


Subject(s)
Eye Movements; Memory, Short-Term; Humans
9.
J Exp Psychol Learn Mem Cogn ; 46(9): 1659-1681, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32271065

ABSTRACT

The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows scene meaning predicts attention better than image salience. Here we investigated the relevance of one aspect of meaning, graspability (the grasping interactions objects in the scene afford), given that affordances have been implicated in both visual and linguistic processing. We quantified image salience, meaning, and graspability for real-world scenes. In 3 eyetracking experiments, native English speakers described possible actions that could be carried out in a scene. We hypothesized that graspability would preferentially guide attention due to its task-relevance. In 2 experiments using stimuli from a previous study, meaning explained visual attention better than graspability or salience did, and graspability explained attention better than salience. In a third experiment we quantified image salience, meaning, graspability, and reach-weighted graspability for scenes that depicted reachable spaces containing graspable objects. Graspability and meaning explained attention equally well in the third experiment, and both explained attention better than salience. We conclude that speakers use object graspability to allocate attention to plan descriptions when scenes depict graspable objects within reach, and otherwise rely more on general meaning. The results shed light on what aspects of meaning guide attention during scene viewing in language production tasks. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Attention/physiology; Motor Activity/physiology; Pattern Recognition, Visual/physiology; Space Perception/physiology; Verbal Behavior/physiology; Adult; Eye-Tracking Technology; Humans; Young Adult
10.
Vision (Basel) ; 3(2)2019 May 10.
Article in English | MEDLINE | ID: mdl-31735820

ABSTRACT

Perception of a complex visual scene requires that important regions be prioritized and attentionally selected for processing. What is the basis for this selection? Although much research has focused on image salience as an important factor guiding attention, relatively little work has focused on semantic salience. To address this imbalance, we have recently developed a new method for measuring, representing, and evaluating the role of meaning in scenes. In this method, the spatial distribution of semantic features in a scene is represented as a meaning map. Meaning maps are generated from crowd-sourced responses given by naïve subjects who rate the meaningfulness of a large number of scene patches drawn from each scene. Meaning maps are coded in the same format as traditional image saliency maps, and therefore both types of maps can be directly evaluated against each other and against maps of the spatial distribution of attention derived from viewers' eye fixations. In this review we describe our work focusing on comparing the influences of meaning and image salience on attentional guidance in real-world scenes across a variety of viewing tasks that we have investigated, including memorization, aesthetic judgment, scene description, and saliency search and judgment. Overall, we have found that both meaning and salience predict the spatial distribution of attention in a scene, but that when the correlation between meaning and salience is statistically controlled, only meaning uniquely accounts for variance in attention.
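The following sketch illustrates the meaning-map construction described above in schematic form: crowd-sourced ratings of overlapping scene patches are averaged back into pixel space and smoothed. Patch size, grid spacing, rating scale, and smoothing are assumptions, not the published parameters.

```python
# Minimal sketch of the meaning-map idea: accumulate patch ratings per
# pixel, average, then smooth to obtain a continuous map.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
height, width, patch = 480, 640, 80
rating_sum = np.zeros((height, width))
rating_count = np.zeros((height, width))

# Ratings (1-6) for patches tiled on a half-patch-step grid.
for r in range(0, height - patch + 1, patch // 2):
    for c in range(0, width - patch + 1, patch // 2):
        rating = rng.uniform(1, 6)            # stand-in for a rater's judgment
        rating_sum[r:r + patch, c:c + patch] += rating
        rating_count[r:r + patch, c:c + patch] += 1

meaning_map = gaussian_filter(rating_sum / np.maximum(rating_count, 1), sigma=20)
print(meaning_map.shape, meaning_map.min(), meaning_map.max())
```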

11.
Acta Psychol (Amst) ; 198: 102889, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31302302

ABSTRACT

In real-world vision, humans prioritize the most relevant visual information at the expense of other information via attentional selection. The current study sought to understand the role of semantic features and image features on attentional selection during free viewing of real-world scenes. We compared the ability of meaning maps generated from ratings of isolated, context-free image patches and saliency maps generated from the Graph-Based Visual Saliency model to predict the spatial distribution of attention in scenes as measured by eye movements. Additionally, we introduce new contextualized meaning maps in which scene patches were rated based upon how informative or recognizable they were in the context of the scene from which they derived. We found that both context-free and contextualized meaning explained significantly more of the overall variance in the spatial distribution of attention than image salience. Furthermore, meaning explained early attention to a significantly greater extent than image salience, contrary to predictions of the 'saliency first' hypothesis. Finally, both context-free and contextualized meaning predicted attention equivalently. These results support theories in which meaning plays a dominant role in attentional guidance during free viewing of real-world scenes.


Subject(s)
Attention/physiology; Fixation, Ocular/physiology; Photic Stimulation/methods; Visual Perception/physiology; Eye Movements/physiology; Female; Humans; Male; Semantics; Young Adult
12.
Atten Percept Psychophys ; 81(5): 1740, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30887383

ABSTRACT

A minor coding error slightly affected a few originally reported values.

13.
Memory ; 27(4): 465-479, 2019 04.
Article in English | MEDLINE | ID: mdl-30207206

ABSTRACT

Humans possess a unique ability to communicate spatially-relevant information, yet the intersection between language and navigation remains largely unexplored. One possibility is that verbal cues accentuate heuristics useful for coding spatial layouts, yet this idea remains largely untested. We test the idea that verbal cues flexibly accentuate the coding of heuristics to remember spatial layouts via spatial boundaries or landmarks. The alternative hypothesis instead conceives of encoding during navigation as a step-wise process involving binding lower-level features, and thus subsequently formed spatial representations should not be modified by verbal cues. Across three experiments, we found that verbal cues significantly affected pointing error patterns at axes that were aligned with the verbally cued heuristic, suggesting that verbal cues influenced the heuristics employed to remember object positions. Further analyses suggested evidence for a hybrid model, in which boundaries were encoded more obligatorily than landmarks, but both were accessed flexibly with verbal instruction. These findings could not be accounted for by a tendency to spend more time facing the instructed component during navigation, ruling out an attentional-encoding mechanism. Our findings argue that verbal cues influence the heuristics employed to code environments, suggesting a mechanism for how humans use language to communicate navigationally-relevant information.


Subject(s)
Cues; Language; Memory/physiology; Spatial Memory/physiology; Attention; Female; Humans; Male
14.
Atten Percept Psychophys ; 81(1): 20-34, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30353498

ABSTRACT

During real-world scene viewing, humans must prioritize scene regions for attention. What are the roles of low-level image salience and high-level semantic meaning in attentional prioritization? A previous study suggested that when salience and meaning are directly contrasted in scene memorization and preference tasks, attentional priority is assigned by meaning (Henderson & Hayes in Nature Human Behaviour, 1, 743-747, 2017). Here we examined the role of meaning in attentional guidance using two tasks in which meaning was irrelevant and salience was relevant: a brightness rating task and a brightness search task. Meaning was represented by meaning maps that captured the spatial distribution of semantic features. Meaning was contrasted with image salience, represented by saliency maps. Critically, both maps were represented similarly, allowing us to directly compare how meaning and salience influenced the spatial distribution of attention, as measured by fixation density maps. Our findings suggest that even in tasks for which meaning is irrelevant and salience is relevant, meaningful scene regions are prioritized for attention over salient scene regions. These results support theories in which scene semantics play a dominant role in attentional guidance in scenes.


Subject(s)
Attention/physiology; Photic Stimulation/methods; Reaction Time/physiology; Semantics; Visual Perception/physiology; Eye Movements/physiology; Female; Fixation, Ocular/physiology; Humans; Male; Random Allocation; Young Adult
15.
Cogn Affect Behav Neurosci ; 18(2): 353-365, 2018 04.
Article in English | MEDLINE | ID: mdl-29446044

ABSTRACT

Why are some visual stimuli remembered, whereas others are forgotten? A limitation of recognition paradigms is that they measure aggregate behavioral performance and/or neural responses to all stimuli presented in a visual working memory (VWM) array. To address this limitation, we paired an electroencephalography (EEG) frequency-tagging technique with two full-report VWM paradigms. This permitted the tracking of individual stimuli as well as the aggregate response. We recorded high-density EEG (256 channel) while participants viewed four shape stimuli, each flickering at a different frequency. At retrieval, participants either recalled the location of all stimuli in any order (simultaneous full report) or were cued to report the item in a particular location over multiple screen displays (sequential full report). The individual frequency tag amplitudes evoked for correctly recalled items were significantly larger than the amplitudes of subsequently forgotten stimuli, regardless of retrieval task. An induced-power analysis examined the aggregate neural correlates of VWM encoding as a function of items correctly recalled. We found increased induced power across a large number of electrodes in the theta, alpha, and beta frequency bands when more items were successfully recalled. This effect was more robust for sequential full report, suggesting that retrieval demands can influence encoding processes. These data are consistent with a model in which encoding-related resources are directed to a subset of items, rather than a model in which resources are allocated evenly across the array. These data extend previous work using recognition paradigms and stress the importance of encoding in determining later VWM retrieval success.
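A minimal sketch of the frequency-tagging logic described above: each stimulus flickers at its own rate, so the amplitude of the EEG spectrum at that rate indexes the response to that stimulus. The sampling rate, epoch length, and tag frequencies below are assumptions for illustration, not the recording parameters of the study.

```python
# Sketch: estimate amplitude at each stimulus's flicker frequency from a
# synthetic single-channel EEG epoch via the FFT.
import numpy as np

fs, duration = 250, 4.0                       # Hz, seconds (assumed)
t = np.arange(0, duration, 1 / fs)
tag_freqs = [6.0, 7.5, 10.0, 12.0]            # one flicker rate per shape

rng = np.random.default_rng(5)
# Four tagged responses of different strengths plus noise.
eeg = sum(np.sin(2 * np.pi * f * t) * a
          for f, a in zip(tag_freqs, [1.0, 0.8, 1.2, 0.6]))
eeg = eeg + rng.normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in tag_freqs:
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:.1f} Hz amplitude: {spectrum[idx]:.3f}")
```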


Subject(s)
Brain/physiology; Memory, Short-Term/physiology; Pattern Recognition, Visual/physiology; Adult; Brain Waves; Electroencephalography; Female; Humans; Male; Mental Recall/physiology; Photic Stimulation; Young Adult
16.
Perception ; 47(2): 216-224, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29126374

ABSTRACT

Given that adaptation changes perceptual experience, it likely also shapes long-term memory (LTM). Across four experiments, participants were adapted to strongly gendered faces (male, female: Experiments 1 and 2) or aged faces (old, young: Experiments 3 and 4) before LTM encoding and later completed an LTM test in which the encoded faces were morphed toward the opposite end of the relevant continuum. At retrieval, participants judged whether probe faces were more or less male/female, or more or less young/old, than when presented during encoding. For male, female, and young faces, encoding-stage adaptation significantly shifted the point of subjective equality in the unadapted direction. Additionally, encoding-stage adaptation significantly enhanced recognition of faces during LTM retrieval. We conclude that encoding-related adaptation is reflected in LTM.
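To make the point-of-subjective-equality measure concrete, the sketch below fits a logistic psychometric function to hypothetical "more male" judgments across morph levels and reads off the 50% point; all values are invented for illustration.

```python
# Sketch: estimate a point of subjective equality (PSE) by fitting a
# logistic psychometric function to judgment proportions per morph level.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

morph_levels = np.linspace(0, 100, 9)              # % male in the probe morph
p_more_male = np.array([0.02, 0.05, 0.12, 0.30,    # hypothetical proportions
                        0.55, 0.78, 0.90, 0.96, 0.99])

params, _ = curve_fit(logistic, morph_levels, p_more_male, p0=[50, 0.1])
print(f"Point of subjective equality: {params[0]:.1f}% male")
```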


Subject(s)
Adaptation, Psychological/physiology; Facial Recognition/physiology; Memory, Long-Term/physiology; Mental Recall/physiology; Adult; Female; Humans; Male; Young Adult