Results 1 - 6 of 6
1.
J Vis; 19(1): 14, 2019 Jan 2.
Article in English | MEDLINE | ID: mdl-30677124

ABSTRACT

The ability to perceive and remember the spatial layout of a scene is critical to understanding the visual world, both for navigation and for other complex tasks that depend upon the structure of the current environment. However, surprisingly little work has investigated how and when scene layout information is maintained in memory. One prominent line of work investigating this issue uses a scene-priming paradigm (e.g., Sanocki & Epstein, 1997), in which different types of previews are presented to participants shortly before they judge which of two regions of a scene is closer in depth to the viewer. Experiments using this paradigm have been widely cited as evidence that scene layout information is stored across brief delays and have been used to investigate the structure of the representations underlying memory for scene layout. In the present experiments, we better characterize these scene-priming effects. We find that a large amount of visual detail rather than the presence of depth information is necessary for the priming effect; that participants show a preview benefit for a judgment completely unrelated to the scene itself; and that preview benefits are susceptible to masking and quickly decay. Together, these results suggest that "scene priming" effects do not isolate scene layout information in memory, and that they may arise from low-level visual information held in sensory memory. This broadens the range of interpretations of scene priming effects and suggests that other paradigms may need to be developed to selectively investigate how we represent scene layout information in memory.


Subjects
Memory, Short-Term/physiology, Mental Recall/physiology, Space Perception/physiology, Visual Perception/physiology, Adult, Humans, Judgment, Photic Stimulation/methods, Young Adult
2.
Article in English | MEDLINE | ID: mdl-33090835

ABSTRACT

The "spatial congruency bias" is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge 2 faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

3.
Psychon Bull Rev; 25(4): 1388-1398, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29159799

ABSTRACT

To interact successfully with objects, we must maintain stable representations of their locations in the world. However, their images on the retina may be displaced several times per second by large, rapid eye movements. A number of studies have demonstrated that visual processing is heavily influenced by gaze-centered (retinotopic) information, including a recent finding that memory for an object's location is more accurate and precise in gaze-centered (retinotopic) than world-centered (spatiotopic) coordinates (Golomb & Kanwisher, 2012b). This effect is somewhat surprising, given our intuition that behavior is successfully guided by spatiotopic representations. In the present experiment, we asked whether the visual system may rely on a more spatiotopic memory store depending on the mode of responding. Specifically, we tested whether reaching toward and tapping directly on an object's location could improve memory for its spatiotopic location. Participants performed a spatial working memory task under four conditions: retinotopic vs. spatiotopic task, and computer mouse click vs. touchscreen reaching response. When participants responded by clicking with a mouse on the screen, we replicated Golomb & Kanwisher's original results, finding that memory was more accurate in retinotopic than spatiotopic coordinates and that the accuracy of spatiotopic memory deteriorated substantially more than retinotopic memory with additional eye movements during the memory delay. Critically, we found the same pattern of results when participants responded by using their finger to reach and tap the remembered location on the monitor. These results further support the hypothesis that spatial memory is natively retinotopic; we found no evidence that engaging the motor system improves spatiotopic memory across saccades.


Subjects
Memory, Short-Term/physiology, Mental Recall/physiology, Psychomotor Performance/physiology, Retina/physiology, Space Perception/physiology, Spatial Memory/physiology, Visual Perception/physiology, Adult, Humans, Young Adult
4.
Atten Percept Psychophys; 79(3): 765-781, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28070793

ABSTRACT

Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating, a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the "spatial congruency bias," to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the spatial congruency bias was present for both Gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (Gabors, shapes, and faces all showed a retinotopic congruency bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates.
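For readers unfamiliar with the two reference frames, the sketch below (not from the article; the function names and the gaze/target values are illustrative assumptions) shows the basic relationship the abstract describes: a world-centered (spatiotopic) position equals the eye-centered (retinotopic) position plus the current gaze position, so a memory stored purely in retinotopic coordinates points to a different world location after each saccade unless it is remapped.

    # Minimal sketch, assuming 2-D positions in degrees of visual angle.
    def to_spatiotopic(retinotopic_xy, gaze_xy):
        """World-centered position = eye-centered position + current gaze position."""
        return (retinotopic_xy[0] + gaze_xy[0], retinotopic_xy[1] + gaze_xy[1])

    def to_retinotopic(spatiotopic_xy, gaze_xy):
        """Eye-centered position = world-centered position - current gaze position."""
        return (spatiotopic_xy[0] - gaze_xy[0], spatiotopic_xy[1] - gaze_xy[1])

    # A target encoded 5 deg right of fixation while gaze is at the screen center.
    target_spatiotopic = to_spatiotopic((5.0, 0.0), (0.0, 0.0))   # (5.0, 0.0)

    # After a 10-deg rightward saccade, the same world location now falls 5 deg
    # to the left of fixation; an un-remapped retinotopic memory would be wrong.
    print(to_retinotopic(target_spatiotopic, (10.0, 0.0)))        # (-5.0, 0.0)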


Subjects
Pattern Recognition, Visual/physiology, Retina/physiology, Saccades/physiology, Space Perception/physiology, Adult, Female, Humans, Male, Young Adult
5.
J Exp Psychol Hum Percept Perform; 43(6): 1160-1176, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28263635

ABSTRACT

Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties.
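To make the notion of a "global ensemble texture" concrete, here is a minimal sketch (an assumed illustration, not the authors' analysis code) that summarizes an image as the mean Gabor orientation energy within each cell of a coarse spatial grid, with no object recognition involved; it relies on numpy and scikit-image, and the grid size, orientations, and filter frequency are arbitrary choices.

    # Minimal sketch: a coarse spatial grid of orientation-energy statistics.
    import numpy as np
    from skimage import data, transform
    from skimage.filters import gabor

    def global_ensemble_texture(image, grid=(4, 4), orientations=(0, 45, 90, 135), frequency=0.2):
        """Return a (grid_h, grid_w, n_orientations) array of mean Gabor energy per region."""
        h, w = image.shape
        gh, gw = grid
        features = np.zeros((gh, gw, len(orientations)))
        for k, theta_deg in enumerate(orientations):
            real, imag = gabor(image, frequency=frequency, theta=np.deg2rad(theta_deg))
            energy = np.sqrt(real ** 2 + imag ** 2)
            for i in range(gh):
                for j in range(gw):
                    cell = energy[i * h // gh:(i + 1) * h // gh,
                                  j * w // gw:(j + 1) * w // gw]
                    features[i, j, k] = cell.mean()
        return features

    # Example: a built-in grayscale test image stands in for a scene photograph.
    scene = transform.resize(data.camera(), (128, 128))
    print(global_ensemble_texture(scene).shape)  # (4, 4, 4)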


Subjects
Pattern Recognition, Visual/physiology, Space Perception/physiology, Adolescent, Adult, Humans, Young Adult
6.
Atten Percept Psychophys; 79(6): 1682-1694, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28584957

ABSTRACT

One of the fundamental challenges of visual cognition is how our visual systems combine information about an object's features with its spatial location. A recent phenomenon related to object-location binding, the "spatial congruency bias," revealed that two objects are more likely to be perceived as having the same identity or features if they appear in the same spatial location, versus if the second object appears in a different location. The spatial congruency bias suggests that irrelevant location information is automatically encoded with and bound to other object properties, biasing perceptual judgments. Here we further explored this new phenomenon and its role in object-location binding by asking what happens when an object moves to a new location: Is the spatial congruency bias sensitive to spatiotemporal contiguity cues, or does it remain linked to the original object location? Across four experiments, we found that the spatial congruency bias remained strongly linked to the original object location. However, under certain circumstances, for instance when the first object paused and remained visible for a brief time after the movement, the congruency bias was found at both the original location and the updated location. These data suggest that the spatial congruency bias is based more on low-level visual information than on spatiotemporal contiguity cues, and reflects a type of object-location binding that is primarily tied to the original object location and that may only update to the object's new location if there is time for the features to be re-encoded and rebound following the movement.


Subjects
Attentional Bias/physiology, Motion, Spatial Processing/physiology, Visual Perception/physiology, Adolescent, Adult, Cognition, Cues, Humans, Judgment, Young Adult