Results 1-20 of 22
1.
J Vis ; 23(14): 3, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38064227

ABSTRACT

Material depictions in artwork are useful tools for revealing image features that support material categorization. For example, artistic recipes for drawing specific materials make explicit the critical information leading to recognizable material properties (Di Cicco, Wijntjes, & Pont, 2020), and investigating the recognizability of material renderings as a function of their visual features supports conclusions about the vocabulary of material perception. Here, we examined how the recognition of materials from photographs and drawings was affected by the application of the Portilla-Simoncelli texture synthesis model. This manipulation allowed us to examine how categorization may be affected differently across materials and image formats when only summary statistic information about appearance was retained. Further, we compared human performance to the categorization accuracy obtained from a pretrained deep convolutional neural network to determine if observers' performance was reflected in the network. Although we found some similarities between human and network performance for photographic images, the results obtained from drawings differed substantially. Our results demonstrate that texture statistics play a variable role in material categorization across rendering formats and material categories and that the human perception of material drawings is not effectively captured by deep convolutional neural networks trained for object recognition.
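
As a rough illustration of the network comparison described in this abstract, the sketch below scores material images with a pretrained object-recognition CNN so predictions can be compared against human labels; the choice of torchvision's ResNet-50, the preprocessing, and the file handling are illustrative assumptions, not the study's pipeline.

# Hypothetical sketch: obtain predictions from a pretrained object-recognition CNN
# (model choice and file layout are assumptions, not the study's actual method).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top1_class(path):
    """Return the index of the most probable ImageNet class for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return logits.argmax(dim=1).item()

# Accuracy would then be the proportion of images whose predicted class maps onto
# the intended material category; that mapping is left to the analyst.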


Subject(s)
Pattern Recognition, Visual; Visual Perception; Humans; Neural Networks, Computer; Recognition, Psychology
2.
Open Mind (Camb) ; 7: 445-459, 2023.
Article in English | MEDLINE | ID: mdl-37637297

ABSTRACT

Scene memory has known spatial biases. Boundary extension is a well-known bias whereby observers remember visual information beyond an image's boundaries. While recent studies demonstrate that boundary contraction also reliably occurs based on intrinsic image properties, the specific properties that drive the effect are unknown. This study assesses the extent to which scene memory might have a fixed capacity for information. We assessed both visual and semantic information in a scene database using techniques from image processing and natural language processing, respectively. We then assessed how both types of information predicted memory errors for scene boundaries using a standard rapid serial visual presentation (RSVP) forced error paradigm. A linear regression model indicated that memories for scene boundaries were significantly predicted by semantic, but not visual, information and that this effect persisted when scene depth was considered. Boundary extension was observed for images with low semantic information, and contraction was observed for images with high semantic information. This suggests a cognitive process that normalizes the amount of semantic information held in memory.
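
A minimal sketch of the kind of regression analysis described above, predicting boundary-memory error from semantic and visual information with scene depth as a covariate; the variable names and placeholder data are hypothetical, not the study's measures.

# Hypothetical sketch: does semantic information predict boundary-memory error
# once visual information and scene depth are controlled for?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_images = 100                                  # placeholder data for illustration
semantic_info = rng.normal(size=n_images)       # e.g., derived from scene descriptions
visual_info = rng.normal(size=n_images)         # e.g., compressed file size or edge density
scene_depth = rng.normal(size=n_images)         # mean depth estimate per image
boundary_error = 0.5 * semantic_info + rng.normal(scale=0.5, size=n_images)

X = sm.add_constant(np.column_stack([semantic_info, visual_info, scene_depth]))
fit = sm.OLS(boundary_error, X).fit()
print(fit.summary())   # inspect which predictors carry unique variance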

3.
PLoS Comput Biol ; 17(9): e1009456, 2021 09.
Article in English | MEDLINE | ID: mdl-34570753

ABSTRACT

A number of neuroimaging techniques have been employed to understand how visual information is transformed along the visual pathway. Although each technique has spatial and temporal limitations, they can each provide important insights into the visual code. While the BOLD signal of fMRI can be quite informative, the visual code is not static and this can be obscured by fMRI's poor temporal resolution. In this study, we leveraged the high temporal resolution of EEG to develop an encoding technique based on the distribution of responses generated by a population of real-world scenes. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. Our analyses of the mapping results revealed that scenes undergo a series of nonuniform transformations that prioritize different spatial frequencies at different regions of scenes over time. This mapping technique offers a potential avenue for future studies to explore how dynamic feedforward and recurrent processes inform and refine high-level representations of our visual world.
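
One plausible way to implement the pixel-wise encoding idea sketched in this abstract is a regularized regression from image pixels to the evoked response at each time point; the array shapes, single-electrode framing, and ridge penalty below are illustrative assumptions.

# Hypothetical sketch: for one electrode, estimate how each pixel's luminance
# relates to the evoked response at every time point (ridge-regularized encoding).
import numpy as np
from sklearn.linear_model import Ridge

n_scenes, n_pix, n_times = 200, 32 * 32, 100      # placeholder sizes
rng = np.random.default_rng(1)
pixels = rng.random((n_scenes, n_pix))             # downsampled scene luminance
eeg = rng.standard_normal((n_scenes, n_times))     # single-electrode ERPs per scene

pixel_maps = np.zeros((n_times, n_pix))
for t in range(n_times):
    model = Ridge(alpha=10.0).fit(pixels, eeg[:, t])
    pixel_maps[t] = model.coef_                    # spatial weight map at time t

# pixel_maps[t].reshape(32, 32) visualizes which image regions drive the
# electrode's response at latency t.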


Subject(s)
Brain Mapping/methods; Electroencephalography/statistics & numerical data; Visual Pathways/physiology; Adolescent; Brain Mapping/instrumentation; Brain Mapping/statistics & numerical data; Computational Biology; Electrodes; Electroencephalography/instrumentation; Female; Functional Neuroimaging/statistics & numerical data; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/statistics & numerical data; Male; Photic Stimulation; Spatio-Temporal Analysis; Visual Cortex/physiology; Young Adult
4.
J Neurosci ; 40(27): 5283-5299, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32467356

ABSTRACT

Human scene categorization is characterized by its remarkable speed. While many visual and conceptual features have been linked to this ability, significant correlations exist between feature spaces, impeding our ability to determine their relative contributions to scene categorization. Here, we used a whitening transformation to decorrelate a variety of visual and conceptual features and assess the time course of their unique contributions to scene categorization. Participants (both sexes) viewed 2250 full-color scene images drawn from 30 different scene categories while having their brain activity measured through 256-channel EEG. We examined the variance explained at each electrode and time point of visual event-related potential (vERP) data from nine different whitened encoding models. These ranged from low-level features obtained from filter outputs to high-level conceptual features requiring human annotation. The amount of category information in the vERPs was assessed through multivariate decoding methods. Behavioral similarity measures were obtained in separate crowdsourced experiments. We found that all nine models together contributed 78% of the variance of human scene similarity assessments and were within the noise ceiling of the vERP data. Low-level models explained earlier vERP variability (88 ms after image onset), whereas high-level models explained later variance (169 ms). Critically, only high-level models shared vERP variability with behavior. Together, these results suggest that scene categorization is primarily a high-level process, but reliant on previously extracted low-level features.

SIGNIFICANCE STATEMENT: In a single fixation, we glean enough information to describe a general scene category. Many types of features are associated with scene categories, ranging from low-level properties, such as colors and contours, to high-level properties, such as objects and attributes. Because these properties are correlated, it is difficult to understand each property's unique contributions to scene categorization. This work uses a whitening transformation to remove the correlations between features and examines the extent to which each feature contributes to visual event-related potentials over time. We found that low-level visual features contributed first but were not correlated with categorization behavior. High-level features followed 80 ms later, providing key insights into how the brain makes sense of a complex visual world.
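
The whitening step described above can be illustrated with a small ZCA-whitening sketch; the feature matrix, its size, and the epsilon regularizer are placeholders, not the study's actual encoding models.

# Hypothetical sketch: ZCA-whiten a stimulus-by-feature matrix so that the
# resulting columns are decorrelated before fitting encoding models.
import numpy as np

def zca_whiten(X, eps=1e-8):
    """Return a whitened copy of X (rows = stimuli, columns = features)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W

rng = np.random.default_rng(2)
features = rng.normal(size=(2250, 20))        # placeholder: 2250 scenes x 20 features
whitened = zca_whiten(features)
# Off-diagonal correlations of the whitened features are now approximately zero.
print(np.round(np.corrcoef(whitened, rowvar=False), 2))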


Subject(s)
Form Perception/physiology; Visual Perception/physiology; Adolescent; Brain/physiology; Color; Electroencephalography; Evoked Potentials, Visual/physiology; Female; Humans; Male; Mental Processes/physiology; Noise; Photic Stimulation; Wavelet Analysis; Young Adult
5.
Neuroimage ; 201: 116027, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31325643

ABSTRACT

Our understanding of information processing by the mammalian visual system has come through a variety of techniques ranging from psychophysics and fMRI to single unit recording and EEG. Each technique provides unique insights into the processing framework of the early visual system. Here, we focus on the nature of the information that is carried by steady state visual evoked potentials (SSVEPs). To study the information provided by SSVEPs, we presented human participants with a population of natural scenes and measured the relative SSVEP response. Rather than focus on particular features of this signal, we focused on the full state-space of possible responses and investigated how the evoked responses are mapped onto this space. Our results show that it is possible to map the relatively high-dimensional signal carried by SSVEPs onto a 2-dimensional space with little loss. We also show that a simple biologically plausible model can account for a high proportion of the explainable variance (~73%) in that space. Finally, we describe a technique for measuring the mutual information that is available about images from SSVEPs. The techniques introduced here represent a new approach to understanding the nature of the information carried by SSVEPs. Crucially, this approach is general and can provide a means of comparing results across different neural recording methods. Altogether, our study sheds light on the encoding principles of early vision and provides a much needed reference point for understanding subsequent transformations of the early visual response space to deeper knowledge structures that link different visual environments.
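
A hedged sketch of the low-dimensional embedding idea: project per-image SSVEP response vectors into two dimensions and check how much variance survives the compression. PCA is used here only as a stand-in for whatever mapping the authors employed, and the data are simulated.

# Hypothetical sketch: embed per-image SSVEP response vectors into two dimensions
# and ask how much of the response structure survives.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
ssvep = rng.standard_normal((300, 64))     # placeholder: 300 scenes x 64-d response

pca = PCA(n_components=2).fit(ssvep)
embedding = pca.transform(ssvep)           # 2-D coordinates per image
print("variance retained:", pca.explained_variance_ratio_.sum())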


Subject(s)
Brain Mapping/methods; Evoked Potentials, Visual/physiology; Spatial Analysis; Adolescent; Adult; Female; Humans; Male; Models, Theoretical; Young Adult
6.
PLoS Comput Biol ; 14(7): e1006327, 2018 07.
Article in English | MEDLINE | ID: mdl-30040821

ABSTRACT

Visual scene category representations emerge very rapidly, yet the computational transformations that enable such invariant categorizations remain elusive. Deep convolutional neural networks (CNNs) perform visual categorization at near human-level accuracy using a feedforward architecture, providing neuroscientists with the opportunity to assess one successful series of representational transformations that enable categorization in silico. The goal of the current study is to assess the extent to which sequential scene category representations built by a CNN map onto those built in the human brain as assessed by high-density, time-resolved event-related potentials (ERPs). We found correspondence both over time and across the scalp: earlier (0-200 ms) ERP activity was best explained by early CNN layers at all electrodes. Although later activity at most electrode sites corresponded to earlier CNN layers, activity in right occipito-temporal electrodes was best explained by the later, fully-connected layers of the CNN around 225 ms post-stimulus, along with similar patterns in frontal electrodes. Taken together, these results suggest that scene category representations emerge through a dynamic interplay between early activity over occipital electrodes and later activity over temporal and frontal electrodes.
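
The layer-by-time comparison described above resembles a representational similarity analysis; the sketch below correlates each CNN layer's representational dissimilarity matrix (RDM) with the ERP dissimilarity structure at every time point. All array sizes, the five-layer stand-in, and the use of Spearman correlation are assumptions for illustration.

# Hypothetical sketch: which CNN layer best matches the ERP pattern at each time point?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_images, n_times = 150, 100
layer_feats = {f"layer{i}": rng.standard_normal((n_images, 256)) for i in range(1, 6)}
erp = rng.standard_normal((n_images, 64, n_times))      # images x electrodes x time

layer_rdms = {name: pdist(f, metric="correlation") for name, f in layer_feats.items()}
best_layer = []
for t in range(n_times):
    erp_rdm = pdist(erp[:, :, t], metric="correlation")
    rhos = {}
    for name, rdm in layer_rdms.items():
        rho, _ = spearmanr(rdm, erp_rdm)
        rhos[name] = rho
    best_layer.append(max(rhos, key=rhos.get))           # winning layer at time t
print(best_layer[:10])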


Subject(s)
Brain Mapping/methods; Evoked Potentials, Somatosensory; Evoked Potentials; Neural Networks, Computer; Somatosensory Cortex/physiology; Vision, Ocular; Adolescent; Electrodes; Electroencephalography/instrumentation; Electroencephalography/methods; Female; Humans; Male; Photic Stimulation; Young Adult
7.
Elife ; 7, 2018 03 07.
Article in English | MEDLINE | ID: mdl-29513219

ABSTRACT

Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
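
Variance partitioning of the kind described above can be sketched by comparing the fit of full and reduced regressions; the three single-column model predictors and the simulated behavioral vector below are placeholders, not the study's data.

# Hypothetical sketch: partition the variance in a behavioral similarity vector
# among three feature models by comparing R^2 of full and reduced regressions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_pairs = 500
models = {"functions": rng.normal(size=(n_pairs, 1)),
          "dnn": rng.normal(size=(n_pairs, 1)),
          "objects": rng.normal(size=(n_pairs, 1))}
behavior = 0.6 * models["functions"][:, 0] + 0.3 * models["dnn"][:, 0] \
           + rng.normal(scale=0.5, size=n_pairs)

def r2(predictor_names):
    """R^2 of a regression using the named model predictors."""
    X = np.column_stack([models[m] for m in predictor_names])
    return LinearRegression().fit(X, behavior).score(X, behavior)

full = r2(models.keys())
for name in models:
    others = [m for m in models if m != name]
    print(f"unique variance for {name}: {full - r2(others):.3f}")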


Subject(s)
Brain/physiology; Nerve Net/physiology; Pattern Recognition, Visual/physiology; Visual Perception/physiology; Adult; Brain/diagnostic imaging; Brain Mapping/methods; Female; Humans; Magnetic Resonance Imaging; Male; Nerve Net/diagnostic imaging; Photic Stimulation; Semantics
8.
J Vis ; 16(9): 15, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27472502

ABSTRACT

An L-vertex, the point at which two contours coterminate, provides highly reliable evidence that a surface terminates at that vertex, thus providing the strongest constraint on the extraction of shape from images (Guzman, 1968). Such vertices are pervasive in our visual world but the importance of a statistical regularity about them has been underappreciated: The contours defining the vertex are (almost) always of the same direction of contrast with respect to the background (i.e., both darker or both lighter). Here we show that when the two contours are of different directions of contrast, the capacity of the L-vertex to signal the termination of a surface, as reflected in object recognition, is markedly reduced. Although image statistics have been implicated in determining the connectivity in the earliest cortical visual stage (V1) and in grouping during visual search, this finding provides evidence that such statistics are involved in later stages where object representations are derived from two-dimensional images.


Subject(s)
Contrast Sensitivity/physiology; Form Perception/physiology; Visual Cortex/physiology; Adolescent; Adult; Biometry; Female; Humans; Male; Young Adult
9.
Neuroimage ; 134: 170-179, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27079531

ABSTRACT

The purpose of categorization is to identify generalizable classes of objects whose members can be treated equivalently. Within a category, however, some exemplars are more representative of that concept than others. Despite long-standing behavioral effects, little is known about how typicality influences the neural representation of real-world objects from the same category. Using fMRI, we showed participants 64 subordinate object categories (exemplars) grouped into 8 basic categories. Typicality for each exemplar was assessed behaviorally and we used several multi-voxel pattern analyses to characterize how typicality affects the pattern of responses elicited in early visual and object-selective areas: V1, V2, V3v, hV4, LOC. We found that in LOC, but not in early areas, typical exemplars elicited activity more similar to the central category tendency and created sharper category boundaries than less typical exemplars, suggesting that typicality enhances within-category similarity and between-category dissimilarity. Additionally, we uncovered a brain region (cIPL) where category boundaries favor less typical categories. Our results suggest that typicality may constitute a previously unexplored principle of organization for intra-category neural structure and, furthermore, that this representation is not directly reflected in image features describing natural input, but rather built by the visual system at an intermediate processing stage.
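
A minimal sketch of one way to quantify the typicality effect described above: correlate each exemplar's voxel pattern with the leave-one-out mean pattern of its category, then relate that centrality to behavioral typicality. The array sizes and simulated data are hypothetical, not the study's measurements.

# Hypothetical sketch: does an exemplar's similarity to its category's mean voxel
# pattern (leaving that exemplar out) track its behavioral typicality rating?
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n_categories, n_exemplars, n_voxels = 8, 8, 500
patterns = rng.standard_normal((n_categories, n_exemplars, n_voxels))
typicality = rng.random((n_categories, n_exemplars))      # placeholder ratings

similarity = np.zeros((n_categories, n_exemplars))
for c in range(n_categories):
    for e in range(n_exemplars):
        others = np.delete(patterns[c], e, axis=0).mean(axis=0)
        r_val, _ = pearsonr(patterns[c, e], others)
        similarity[c, e] = r_val

r, p = pearsonr(similarity.ravel(), typicality.ravel())
print(f"typicality vs. pattern centrality: r = {r:.2f}, p = {p:.3f}")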


Subject(s)
Concept Formation/physiology; Form Perception/physiology; Nerve Net/physiology; Recognition, Psychology/physiology; Visual Cortex/physiology; Adult; Brain Mapping/methods; Female; Humans; Male; Pattern Recognition, Visual/physiology
10.
Cognition ; 149: 6-10, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26774103

ABSTRACT

Real-world scenes are complex but lawful: blenders are more likely to be found in kitchens than beaches, and elephants are not generally found inside homes. Research over the past 40 years has demonstrated that contextual associations influence object recognition, change eye movement distributions, and modulate brain activity. However, the majority of these studies chose object-scene pairs based on experimenters' intuitions because the statistical relationships between objects and scenes had yet to be systematically quantified. How do intuitive estimations compare to actual object frequencies? Across six experiments, observers estimated the frequency with which an object is found in a particular environment, such as the frequency of "mug" in an office. Estimated frequencies were compared to observed frequencies in two fully labeled scene databases (Greene, 2013). Although inter-observer similarity was high, observers systematically overestimated object frequency by an average of 32% across experiments. Altogether, these results speak to the richness of scene schemata and to the necessity of measuring object frequencies.


Subject(s)
Pattern Recognition, Visual; Recognition, Psychology; Adult; Female; Humans; Male; Probability; Young Adult
11.
J Exp Psychol Gen ; 145(1): 82-94, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26709590

ABSTRACT

How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. Therefore, we test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether 2 images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r = .50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r = .33), visual features from a convolutional neural network (r = .39), lexical distance (r = .27), and models of visual features. Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was because of their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene's category may be determined by the scene's function.
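
The function-model comparison described above can be illustrated by deriving pairwise category distances from a category-by-action matrix and rank-correlating them with behavioral distances; the matrices below are simulated placeholders, not the American Time Use Survey data or the 5-million-trial distance matrix.

# Hypothetical sketch: correlate a behavioral scene-category distance matrix with
# distances derived from a category-by-action (function) matrix.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_categories, n_actions = 30, 200
action_profiles = rng.random((n_categories, n_actions))              # P(action | category)
behavioral_dist = rng.random(n_categories * (n_categories - 1) // 2) # from same/different judgments

function_dist = pdist(action_profiles, metric="cosine")              # pairwise functional distance
rho, p = spearmanr(function_dist, behavioral_dist)
print(f"rank correlation between functional and behavioral distance: {rho:.2f}")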


Subject(s)
Association Learning; Concept Formation; Pattern Recognition, Visual; Adult; Comprehension; Decision Making; Discrimination Learning; Distance Perception; Female; Humans; Male; Models, Psychological; Semantics; Social Environment; Statistics as Topic
12.
J Cogn Neurosci ; 27(7): 1427-46, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25811711

ABSTRACT

Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.


Subject(s)
Cerebral Cortex/physiology; Models, Neurological; Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Adolescent; Adult; Brain Mapping; Female; Humans; Judgment/physiology; Magnetic Resonance Imaging; Male; Neuropsychological Tests; Photic Stimulation; Reaction Time; Young Adult
13.
Atten Percept Psychophys ; 77(4): 1239-51, 2015 May.
Article in English | MEDLINE | ID: mdl-25776799

ABSTRACT

Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.


Subject(s)
Memory; Visual Perception; Adult; Female; Humans; Male; Probability; Semantics; Young Adult
14.
J Vis ; 14(1), 2014 Jan 16.
Article in English | MEDLINE | ID: mdl-24434626

ABSTRACT

Human observers categorize visual stimuli with remarkable efficiency, a result that has led to the suggestion that object and scene categorization may be automatic processes. We tested this hypothesis by presenting observers with a modified Stroop paradigm in which object or scene words were presented over images of objects or scenes. Terms were either congruent or incongruent with the images. Observers classified the words as being object or scene terms while ignoring images. Classifying a word on an incongruent image came at a cost for both objects and scenes. Furthermore, automatic processing was observed for entry-level scene categories, but not superordinate-level categories, suggesting that not all rapid categorizations are automatic. Taken together, we have demonstrated that entry-level visual categorization is an automatic and obligatory process.


Subject(s)
Form Perception/physiology; Pattern Recognition, Visual/physiology; Stroop Test; Adolescent; Adult; Humans; Young Adult
15.
Front Psychol ; 4: 777, 2013.
Article in English | MEDLINE | ID: mdl-24194723

ABSTRACT

Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.
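
The bag-of-words classification level described above can be sketched with a linear classifier over per-scene object counts; the counts, labels, and classifier choice below are illustrative assumptions, not the paper's exact analysis.

# Hypothetical sketch: classify scene categories from a bag-of-objects representation
# (counts of labeled objects per scene) with a linear classifier.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_scenes, n_object_types, n_categories = 600, 120, 16
object_counts = rng.poisson(0.3, size=(n_scenes, n_object_types))  # placeholder counts
labels = rng.integers(0, n_categories, size=n_scenes)

clf = LinearSVC(max_iter=5000)
scores = cross_val_score(clf, object_counts, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")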

16.
Vision Res ; 62: 1-8, 2012 Jun 01.
Article in English | MEDLINE | ID: mdl-22487718

ABSTRACT

In 1967, Yarbus presented qualitative data from one observer showing that the patterns of eye movements were dramatically affected by an observer's task, suggesting that complex mental states could be inferred from scan paths. The strong claim of this very influential finding has never been rigorously tested. Our observers viewed photographs for 10 s each. They performed one of four image-based tasks while eye movements were recorded. A pattern classifier, given features from the static scan paths, could identify the image and the observer at above-chance levels. However, it could not predict a viewer's task. Shorter and longer (60 s) viewing epochs produced similar results. Critically, human judges also failed to identify the tasks performed by the observers based on the static scan paths. The Yarbus finding is evocative, and while it is possible an observer's mental state might be decoded from some aspect of eye movements, static scan paths alone do not appear to be adequate to infer complex mental states of an observer.
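
A sketch of the task-decoding test described above: cross-validated classification of the viewing task from static scan-path summary features, compared against chance. The feature set, classifier, and simulated data are assumptions, not the study's implementation.

# Hypothetical sketch: can a classifier predict the viewing task from static
# scan-path summary features (fixation count, mean duration, dispersion, ...)?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_trials, n_features, n_tasks = 800, 6, 4
scanpath_features = rng.normal(size=(n_trials, n_features))
task_labels = rng.integers(0, n_tasks, size=n_trials)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
accuracy = cross_val_score(clf, scanpath_features, task_labels, cv=10).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_tasks:.2f})")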


Subject(s)
Eye Movements/physiology; Task Performance and Analysis; Visual Perception/physiology; Adolescent; Adult; Female; Humans; Male; Middle Aged; Photic Stimulation/methods; Photography; Young Adult
17.
J Vis ; 11(6), 2011 May 24.
Article in English | MEDLINE | ID: mdl-21610085

ABSTRACT

While basic visual features such as color, motion, and orientation can guide attention, it is likely that additional features guide search for objects in real-world scenes. Recent work has shown that human observers efficiently extract global scene properties such as mean depth or navigability from a brief glance at a single scene (M. R. Greene & A. Oliva, 2009a, 2009b). Can human observers also efficiently search for an image possessing a particular global scene property among other images lacking that property? Observers searched for scene image targets defined by global properties of naturalness, transience, navigability, and mean depth. All produced inefficient search. Search efficiency for a property was not correlated with its classification threshold time from M. R. Greene and A. Oliva (2009b). Differences in search efficiency between properties can be partially explained by low-level visual features that are correlated with the global property. Overall, while global scene properties can be rapidly classified from a single image, it does not appear to be possible to use those properties to guide attention to one of several images.


Subject(s)
Attention/physiology; Visual Perception/physiology; Adolescent; Adult; Humans; Middle Aged; Pattern Recognition, Visual/physiology; Reaction Time; Young Adult
18.
Trends Cogn Sci ; 15(2): 77-84, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21227734

ABSTRACT

How does one find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This article argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes might be best explained by a dual-path model: a 'selective' path in which candidate objects must be individually selected for recognition and a 'nonselective' path in which information can be extracted from global and/or statistical information.


Subject(s)
Discrimination, Psychological/physiology; Pattern Recognition, Visual/physiology; Problem Solving; Visual Pathways/physiology; Visual Perception/physiology; Exploratory Behavior/physiology; Eye Movements/physiology; Humans
19.
J Neurosci ; 31(4): 1333-40, 2011 Jan 26.
Article in English | MEDLINE | ID: mdl-21273418

ABSTRACT

Behavioral and computational studies suggest that visual scene analysis rapidly produces a rich description of both the objects and the spatial layout of surfaces in a scene. However, there is still a large gap in our understanding of how the human brain accomplishes these diverse functions of scene understanding. Here we probe the nature of real-world scene representations using multivoxel functional magnetic resonance imaging pattern analysis. We show that natural scenes are analyzed in a distributed and complementary manner by the parahippocampal place area (PPA) and the lateral occipital complex (LOC) in particular, as well as other regions in the ventral stream. Specifically, we study the classification performance of different scene-selective regions using images that vary in spatial boundary and naturalness content. We discover that, whereas both the PPA and LOC can accurately classify scenes, they make different errors: the PPA more often confuses scenes that have the same spatial boundaries, whereas the LOC more often confuses scenes that have the same content. By demonstrating that visual scene analysis recruits distinct and complementary high-level representations, our results testify to distinct neural pathways for representing the spatial boundaries and content of a visual scene.
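
The error-pattern comparison described above can be illustrated by training one classifier per region of interest and inspecting its confusion matrix; the simulated voxel patterns, class structure, and the linear SVM below are assumptions, not the study's pipeline.

# Hypothetical sketch: train one classifier per region of interest and compare
# their confusion matrices to see whether the regions make different errors.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(10)
n_trials, n_classes = 400, 4          # e.g., open/closed x natural/urban scene types
labels = rng.integers(0, n_classes, size=n_trials)
roi_patterns = {"PPA": rng.normal(size=(n_trials, 300)),
                "LOC": rng.normal(size=(n_trials, 300))}

for roi, X in roi_patterns.items():
    preds = cross_val_predict(LinearSVC(max_iter=5000), X, labels, cv=5)
    print(roi)
    print(confusion_matrix(labels, preds))   # compare off-diagonal error structure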


Subject(s)
Occipital Lobe/physiology; Parahippocampal Gyrus/physiology; Visual Perception; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Photic Stimulation; Young Adult
20.
J Exp Psychol Hum Percept Perform ; 36(6): 1430-42, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20731502

ABSTRACT

Adaptation is ubiquitous in the human visual system, allowing recalibration to the statistical regularities of its input. Previous work has shown that global scene properties such as openness and mean depth are informative dimensions of natural scene variation useful for human and machine scene categorization (Greene & Oliva, 2009b; Oliva & Torralba, 2001). A visual system that rapidly categorizes scenes using such statistical regularities should be continuously updated, and therefore is prone to adaptation along these dimensions. Using a rapid serial visual presentation paradigm, we show aftereffects to several global scene properties (magnitude 8-21%). In addition, aftereffects were preserved when the test image was presented 10 degrees away from the adapted location, suggesting that the origin of these aftereffects is not solely due to low-level adaptation. We show systematic modulation of observers' basic-level scene categorization performances after adapting to a global property, suggesting a strong representational role of global properties in rapid scene categorization.


Subject(s)
Attention; Color Perception; Depth Perception; Discrimination, Psychological; Field Dependence-Independence; Figural Aftereffect; Orientation; Pattern Recognition, Visual; Space Perception; Association Learning; Awareness; Concept Formation; Discrimination Learning; Humans; Reaction Time; Transfer (Psychology)