Results 1 - 13 of 13
1.
Q J Exp Psychol (Hove) ; 76(3): 632-648, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35510885

ABSTRACT

Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps, which represented the spatial distribution of semantic informativeness in scenes, and salience maps, which represented the spatial distribution of conspicuous image features, and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention, in both studies: meaning explained 58% and 63% of the theoretical ceiling of variance in attention in the two studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely to be directed to higher salience regions than slower initial saccades, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrated that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search.
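The variance-partitioning logic above (does meaning explain fixation density beyond what it shares with salience?) can be illustrated with squared semipartial correlations. This is a minimal sketch on synthetic data; the variable names, effect sizes, and the simple residualization step are invented for illustration and are not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real maps: meaning and salience are
# correlated with each other, and fixation density mostly tracks meaning.
n = 1000                                              # scene grid cells
meaning = rng.normal(size=n)
salience = 0.6 * meaning + 0.8 * rng.normal(size=n)   # correlated predictor
fixation = 0.9 * meaning + 0.4 * rng.normal(size=n)   # attention map

def r2(x, y):
    """Squared Pearson correlation: variance in y explained by x alone."""
    return np.corrcoef(x, y)[0, 1] ** 2

def unique_r2(x, control, y):
    """Squared semipartial correlation: variance in y explained by x
    after the contribution shared with `control` is regressed out of x."""
    slope = np.dot(control, x) / np.dot(control, control)
    x_resid = x - slope * control
    return np.corrcoef(x_resid, y)[0, 1] ** 2

print(r2(meaning, fixation), r2(salience, fixation))
print(unique_r2(meaning, salience, fixation),
      unique_r2(salience, meaning, fixation))
```

On data generated this way, both predictors correlate with fixation density on their own, but once the shared variance is controlled, only meaning retains substantial unique variance, mirroring the pattern reported in the abstract.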


Subject(s)
Semantics , Visual Perception , Humans , Saccades , Fixation, Ocular
2.
Psychol Aging ; 38(1): 49-66, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36395016

ABSTRACT

As we age, we accumulate a wealth of information about the surrounding world. Evidence from visual search suggests that older adults retain intact knowledge for where objects tend to occur in everyday environments (semantic information) that allows them to successfully locate objects in scenes, but may overrely on semantic guidance. We investigated age differences in the allocation of attention to semantically informative and visually salient information in a task in which the eye movements of younger (N = 30, aged 18-24) and older (N = 30, aged 66-82) adults were tracked as they described real-world scenes. We measured the semantic information in scenes based on "meaning map" ratings from a norming sample of young and older adults, and image salience as graph-based visual saliency. Logistic mixed-effects modeling was used to determine whether, controlling for center bias, fixated scene locations differed in semantic informativeness and visual salience from locations that were not fixated, and whether these effects differed for young and older adults. Semantic informativeness predicted fixated locations well overall, as did image salience, although unique variance in the model was better explained by semantic informativeness than by image salience. Older adults were less likely to fixate informative locations in scenes than young adults were, though the locations older adults fixated were independently predicted well by informativeness. These results suggest young and older adults both use semantic information to guide attention in scenes and that older adults do not overrely on semantic information across the board. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
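The modeling step can be illustrated with a plain logistic regression on synthetic data. Note the paper fits a logistic *mixed-effects* model; random effects for subjects and scenes are omitted in this sketch, and all variable names and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data standing in for the real design: for each scene location we have
# its semantic informativeness, image salience, a center-bias covariate
# (distance from scene center), and whether it was fixated.
n = 2000
meaning = rng.normal(size=n)
salience = 0.5 * meaning + rng.normal(size=n)   # correlated with meaning
center = rng.normal(size=n)
logit = 1.2 * meaning + 0.2 * salience - 0.8 * center
fixated = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression fit by Newton's method (iteratively reweighted
# least squares): predict fixation from meaning, salience, center bias.
X = np.column_stack([np.ones(n), meaning, salience, center])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))               # predicted probabilities
    grad = X.T @ (fixated - p)                    # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])     # observed information
    beta += np.linalg.solve(hess, grad)

print(dict(zip(["intercept", "meaning", "salience", "center"], beta)))
```

With data generated this way, the recovered meaning coefficient dominates the salience coefficient, the qualitative pattern the abstract describes after controlling for center bias.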


Subject(s)
Healthy Aging , Visual Perception , Humans , Aged , Aging , Eye Movements , Semantics , Fixation, Ocular
3.
Atten Percept Psychophys ; 84(5): 1583-1610, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35484443

ABSTRACT

As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention when static scenes depicted reachable spaces, and attention was otherwise better explained by general informativeness. Because grasping is but one of many object interactions, previous work may have downplayed the influence of object affordances on attention. The current study investigated the relationship between overt visual attention and object affordances versus broadly construed semantic information in scenes as speakers describe or memorize scenes. In addition to meaning and grasp maps, which capture informativeness and grasping object affordances in scenes, respectively, we introduce interact maps, which capture affordances more broadly. In a mixed-effects analysis of 5 eyetracking experiments, we found that meaning predicted fixated locations in a general description task and during scene memorization. Grasp maps marginally predicted fixated locations during action description for scenes that depicted reachable spaces only. Interact maps predicted fixated regions in description experiments alone. Our findings suggest observers allocate attention to scene regions that could be readily interacted with when talking about the scene, while general informativeness preferentially guides attention when the task does not encourage careful consideration of objects in the scene. The current study suggests that the influence of object affordances on visual attention in scenes is mediated by task demands.


Subject(s)
Eye Movements , Visual Perception , Hand Strength , Humans , Pattern Recognition, Visual , Semantics
4.
Cognition ; 214: 104742, 2021 09.
Article in English | MEDLINE | ID: mdl-33892912

ABSTRACT

Pedziwiatr, Kümmerer, Wallis, Bethge, & Teufel (2021) contend that Meaning Maps do not represent the spatial distribution of semantic features in scenes. We argue that Pedziwiatr et al. provide neither logical nor empirical support for that claim, and we conclude that Meaning Maps do what they were designed to do: represent the spatial distribution of meaning in scenes.


Subject(s)
Fixation, Ocular , Semantics , Attention , Eye Movements , Humans , Visual Perception
5.
Cogn Res Princ Implic ; 6(1): 32, 2021 04 14.
Article in English | MEDLINE | ID: mdl-33855644

ABSTRACT

A major problem in human cognition is to understand how newly acquired information and long-standing beliefs about the environment combine to make decisions and plan behaviors. Over-dependence on long-standing beliefs may be a significant source of suboptimal decision-making in unusual circumstances. While the contribution of long-standing beliefs about the environment to search in real-world scenes is well-studied, less is known about how new evidence informs search decisions, and it is unclear whether the two sources of information are used together optimally to guide search. The present study expanded on the literature on semantic guidance in visual search by modeling a Bayesian ideal observer's use of long-standing semantic beliefs and recent experience in an active search task. The ability to adjust expectations to the task environment was simulated using the Bayesian ideal observer, and subjects' performance was compared to ideal observers that depended on prior knowledge and recent experience to varying degrees. Target locations were either congruent with scene semantics, incongruent with what would be expected from scene semantics, or random. Half of the subjects were able to learn to search for the target in incongruent locations over repeated experimental sessions when it was optimal to do so. These results suggest that searchers can learn to prioritize recent experience over knowledge of scenes in a near-optimal fashion when it is beneficial to do so, as long as the evidence from recent experience is learnable.
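The ideal-observer idea above can be sketched as Bayesian updating of a categorical belief over candidate search regions, with long-standing semantic beliefs encoded as Dirichlet pseudo-counts and recent experience as observed target counts. Everything here (the number of regions, the pseudo-count values, the trial counts) is an invented toy example, not the paper's actual model.

```python
import numpy as np

# Toy ideal observer choosing where to search first among four scene
# regions. The semantic prior says the target "should" be in region 0,
# but in this incongruent environment it keeps appearing in region 2.

alpha = np.array([7.0, 1.0, 1.0, 1.0])   # Dirichlet pseudo-counts: semantic prior
counts = np.array([1, 0, 18, 1])         # target sightings over the last 20 trials

# Posterior predictive probability that the next target is in each region:
# conjugate update of a categorical distribution under a Dirichlet prior.
posterior = (alpha + counts) / (alpha + counts).sum()

print(posterior)                # recent evidence overrides the semantic prior
print(int(posterior.argmax()))  # prints 2: the optimal region to search first
```

This mirrors the finding that near-optimal searchers shift from prior scene knowledge to recent experience when the evidence is reliable: with enough incongruent observations, the posterior concentrates on the observed region despite the strong semantic prior.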


Subject(s)
Microwaves , Semantics , Attention , Bayes Theorem , Humans , Uncertainty
6.
Cogn Res Princ Implic ; 6(1): 10, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33595751

ABSTRACT

According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591-621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration, or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant, but non-contrastive modifiers were included in the search instruction. Participants (N = 48 in each experiment) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension.
SIGNIFICANCE STATEMENT: This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, what color the object was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.


Subject(s)
Communication , Language , Humans
7.
J Cogn Neurosci ; 33(4): 574-593, 2021 04.
Article in English | MEDLINE | ID: mdl-33475452

ABSTRACT

In recent years, a growing number of studies have used cortical tracking methods to investigate auditory language processing. Although most studies that employ cortical tracking stem from the field of auditory signal processing, this approach should also be of interest to psycholinguistics-particularly the subfield of sentence processing-given its potential to provide insight into dynamic language comprehension processes. However, there has been limited collaboration between these fields, which we suggest is partly because of differences in theoretical background and methodological constraints, some mutually exclusive. In this paper, we first review the theories and methodological constraints that have historically been prioritized in each field and provide concrete examples of how some of these constraints may be reconciled. We then elaborate on how further collaboration between the two fields could be mutually beneficial. Specifically, we argue that the use of cortical tracking methods may help resolve long-standing debates in the field of sentence processing that commonly used behavioral and neural measures (e.g., ERPs) have failed to adjudicate. Similarly, signal processing researchers who use cortical tracking may be able to reduce noise in the neural data and broaden the impact of their results by controlling for linguistic features of their stimuli and by using simple comprehension tasks. Overall, we argue that a balance between the methodological constraints of the two fields will lead to an overall improved understanding of language processing as well as greater clarity on what mechanisms cortical tracking of speech reflects. Increased collaboration will help resolve debates in both fields and will lead to new and exciting avenues for research.


Subject(s)
Speech Perception , Speech , Comprehension , Humans , Language , Psycholinguistics
8.
Mem Cognit ; 48(7): 1181-1195, 2020 10.
Article in English | MEDLINE | ID: mdl-32430889

ABSTRACT

The complexity of the visual world requires that we constrain visual attention and prioritize some regions of the scene for attention over others. The current study investigated whether verbal encoding processes influence how attention is allocated in scenes. Specifically, we asked whether the advantage of scene meaning over image salience in attentional guidance is modulated by verbal encoding, given that we often use language to process information. In two experiments, 60 subjects studied scenes (N1 = 30 and N2 = 60) for 12 s each in preparation for a scene-recognition task. Half of the time, subjects engaged in a secondary articulatory suppression task concurrent with scene viewing. Meaning and saliency maps were quantified for each of the experimental scenes. In both experiments, we found that meaning explained more of the variance in visual attention than image salience did, particularly when we controlled for the overlap between meaning and salience, with and without the suppression task. Based on these results, verbal encoding processes do not appear to modulate the relationship between scene meaning and visual attention. Our findings suggest that semantic information in the scene steers the attentional ship, consistent with cognitive guidance theory.


Subject(s)
Semantics , Speech , Fixation, Ocular , Humans , Pattern Recognition, Visual , Recognition, Psychology , Visual Perception
9.
J Exp Psychol Learn Mem Cogn ; 46(9): 1659-1681, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32271065

ABSTRACT

The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows scene meaning predicts attention better than image salience. Here we investigated the relevance of one aspect of meaning, graspability (the grasping interactions objects in the scene afford), given that affordances have been implicated in both visual and linguistic processing. We quantified image salience, meaning, and graspability for real-world scenes. In 3 eyetracking experiments, native English speakers described possible actions that could be carried out in a scene. We hypothesized that graspability would preferentially guide attention due to its task-relevance. In 2 experiments using stimuli from a previous study, meaning explained visual attention better than graspability or salience did, and graspability explained attention better than salience. In a third experiment we quantified image salience, meaning, graspability, and reach-weighted graspability for scenes that depicted reachable spaces containing graspable objects. Graspability and meaning explained attention equally well in the third experiment, and both explained attention better than salience. We conclude that speakers use object graspability to allocate attention to plan descriptions when scenes depict graspable objects within reach, and otherwise rely more on general meaning. The results shed light on what aspects of meaning guide attention during scene viewing in language production tasks. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Attention/physiology , Motor Activity/physiology , Pattern Recognition, Visual/physiology , Space Perception/physiology , Verbal Behavior/physiology , Adult , Eye-Tracking Technology , Humans , Young Adult
10.
Vision (Basel) ; 3(2)2019 May 10.
Article in English | MEDLINE | ID: mdl-31735820

ABSTRACT

Perception of a complex visual scene requires that important regions be prioritized and attentionally selected for processing. What is the basis for this selection? Although much research has focused on image salience as an important factor guiding attention, relatively little work has focused on semantic salience. To address this imbalance, we have recently developed a new method for measuring, representing, and evaluating the role of meaning in scenes. In this method, the spatial distribution of semantic features in a scene is represented as a meaning map. Meaning maps are generated from crowd-sourced responses given by naïve subjects who rate the meaningfulness of a large number of scene patches drawn from each scene. Meaning maps are coded in the same format as traditional image saliency maps, and therefore both types of maps can be directly evaluated against each other and against maps of the spatial distribution of attention derived from viewers' eye fixations. In this review we describe our work focusing on comparing the influences of meaning and image salience on attentional guidance in real-world scenes across a variety of viewing tasks that we have investigated, including memorization, aesthetic judgment, scene description, and saliency search and judgment. Overall, we have found that both meaning and salience predict the spatial distribution of attention in a scene, but that when the correlation between meaning and salience is statistically controlled, only meaning uniquely accounts for variance in attention.
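The meaning-map construction described above (crowd-sourced patch ratings aggregated into a saliency-map-like format) can be sketched as follows. All names, grid sizes, the smoothing parameter, and the rating scale below are illustrative assumptions, not the published procedure's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Build an illustrative meaning map: accumulate crowd-sourced
# meaningfulness ratings of scene patches onto a scene-sized grid,
# then smooth and normalize so the result is directly comparable
# to a traditional saliency map.

H, W = 60, 80                      # scene grid (downsampled for the sketch)
rng = np.random.default_rng(1)

# Each simulated patch: (row, col) center and a mean rating on a 1-6 scale.
patches = [(rng.integers(H), rng.integers(W), rng.uniform(1, 6))
           for _ in range(300)]

ratings = np.zeros((H, W))
weights = np.zeros((H, W))
for r, c, score in patches:
    ratings[r, c] += score
    weights[r, c] += 1.0

# Average overlapping ratings; avoid dividing by zero at unrated cells.
meaning_map = np.divide(ratings, weights, out=np.zeros_like(ratings),
                        where=weights > 0)

# Smooth to interpolate between patch centers (patches overlap in the
# real procedure; smoothing stands in for that overlap here).
meaning_map = gaussian_filter(meaning_map, sigma=3)

# Normalize to [0, 1], the same format as an image saliency map, so the
# two map types can be evaluated against the same fixation data.
meaning_map = (meaning_map - meaning_map.min()) / np.ptp(meaning_map)
```

Because the output lives on the same grid and scale as a saliency map, both can be correlated with a fixation-density map, which is what enables the head-to-head comparisons summarized in this review.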

11.
Sci Rep ; 8(1): 13504, 2018 09 10.
Article in English | MEDLINE | ID: mdl-30202075

ABSTRACT

Intelligent analysis of a visual scene requires that important regions be prioritized and attentionally selected for preferential processing. What is the basis for this selection? Here we compared the influence of meaning and image salience on attentional guidance in real-world scenes during two free-viewing scene description tasks. Meaning was represented by meaning maps capturing the spatial distribution of semantic features. Image salience was represented by saliency maps capturing the spatial distribution of image features. Both types of maps were coded in a format that could be directly compared to maps of the spatial distribution of attention derived from viewers' eye fixations in the scene description tasks. The results showed that both meaning and salience predicted the spatial distribution of attention in these tasks, but that when the correlation between meaning and salience was statistically controlled, only meaning accounted for unique variance in attention. The results support theories in which cognitive relevance plays the dominant functional role in controlling human attentional guidance in scenes. The results also have practical implications for current artificial intelligence approaches to labeling real-world images.


Subject(s)
Attention/physiology , Cognition/physiology , Semantics , Visual Perception/physiology , Eye Movement Measurements/instrumentation , Eye Movements/physiology , Humans
12.
J Genet Psychol ; 179(1): 9-18, 2018.
Article in English | MEDLINE | ID: mdl-29192871

ABSTRACT

Human figure drawing tasks such as the Draw-a-Person test have long been used to assess intelligence (F. Goodenough, 1926). The authors investigate the skills tapped by drawing and the risk factors associated with poor drawing. Self-portraits of 345 preschool children were scored by raters trained in using the Draw-a-Person Intellectual Ability test (DAP:IQ) rubric (C. R. Reynolds & J. A. Hickman, 2004). Analyses of children's fine motor, gross motor, social, cognitive, and language skills revealed that only fine motor skill was an independent predictor of DAP:IQ scores. Being a boy and having a low birth weight were associated with lower DAP:IQ scores. These findings suggest that although the DAP:IQ may not be a valid measure of cognitive ability, it may be a useful screening tool for fine motor disturbances in at-risk children, such as boys who were born at low birth weights. Furthermore, researchers who use human figure drawing tasks to measure intelligence should measure fine motor skill in addition to intelligence.


Subject(s)
Child Development/physiology , Intelligence Tests , Intelligence/physiology , Motor Skills/physiology , Birth Weight/physiology , Child, Preschool , Female , Humans , Male , Sex Factors
13.
J Neurophysiol ; 110(7): 1646-62, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23864377

ABSTRACT

Current observational inventories used to diagnose autism spectrum disorders (ASD) apply similar criteria to females and males alike, despite developmental differences between the sexes. Recent work investigating the chronology of diagnosis in ASD has raised the concern that females run the risk of receiving a delayed diagnosis, potentially missing a window of opportunity for early intervention. Here, we revisit this issue in the context of objective measurements of natural behaviors that involve decision-making processes. Within this context, we quantified movement variability in typically developing (TD) individuals and those diagnosed with ASD across different ages. We extracted the latencies of the decision movements and velocity-dependent parameters as the hand movements unfolded for two movement segments within the reach: movements intended toward the target and withdrawing movements that occurred spontaneously and incidentally, without instruction. The stochastic signatures of the movement decision latencies and the percent of time to maximum speed differed between males and females with ASD. This feature was also observed in the empirically estimated probability distributions of the maximum speed values, independent of limb size. Females with ASD showed different dispersion than males with ASD. The distinctions found for females with ASD were better appreciated when compared with those of TD females. In light of these results, behavioral assessment of autistic traits in females should be performed relative to TD females to increase the chance of detection.


Subject(s)
Child Development Disorders, Pervasive/diagnosis , Phenotype , Psychomotor Performance , Adolescent , Adult , Case-Control Studies , Child , Child Development Disorders, Pervasive/physiopathology , Child, Preschool , Decision Making , Female , Humans , Male , Middle Aged , Movement , Neuropsychological Tests , Sex Factors