Results 1 - 9 of 9
1.
J Cogn Neurosci; 35(5): 816-840, 2023 May 01.
Article in English | MEDLINE | ID: mdl-36877074

ABSTRACT

Color and form information can be decoded in every region of the human ventral visual hierarchy, and at every layer of many convolutional neural networks (CNNs) trained to recognize objects, but how does the coding strength of these features vary over processing? Here, we characterize both the absolute coding strength of these features (how strongly each feature is represented independently of the other) and their relative coding strength (how strongly each feature is encoded relative to the other), which could constrain how well a feature can be read out by downstream regions across variation in the other feature. To quantify relative coding strength, we define a measure called the form dominance index, which compares the relative influence of color and form on the representational geometry at each processing stage. We analyze brain and CNN responses to stimuli varying in color and in either a simple form feature (orientation) or a more complex form feature (curvature). We find that while the brain and CNNs differ markedly in how the absolute coding strength of color and form varies over processing, comparing their relative emphasis on these features reveals a striking similarity: for both the brain and CNNs trained for object recognition (but not untrained CNNs), orientation information is increasingly de-emphasized, and curvature information increasingly emphasized, relative to color information over the course of processing, with corresponding processing stages showing largely similar values of the form dominance index.
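The abstract does not spell out the formula for the form dominance index, so the sketch below is only one plausible reconstruction on synthetic data: it contrasts how well binary form-based and color-based model RDMs correlate with a stage's representational dissimilarity matrix. The normalization and all variable names are assumptions for illustration, not the authors' definition.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy stimulus set: 4 colors x 4 orientations, with 50-unit response patterns
# standing in for fMRI voxels or CNN-layer activations at one stage.
colors = np.repeat(np.arange(4), 4).astype(float)
forms = np.tile(np.arange(4), 4).astype(float)
patterns = rng.normal(size=(16, 50))

def binary_rdm(labels):
    """Model RDM: 1 where two stimuli differ on the feature, else 0."""
    return pdist(labels[:, None], lambda a, b: float(a[0] != b[0]))

neural_rdm = pdist(patterns, metric="correlation")
r_form, _ = spearmanr(neural_rdm, binary_rdm(forms))
r_color, _ = spearmanr(neural_rdm, binary_rdm(colors))

# Assumed signed index: positive = geometry dominated by form,
# negative = dominated by color (not necessarily the paper's formula).
form_dominance = (r_form - r_color) / (abs(r_form) + abs(r_color))
print(f"form dominance index: {form_dominance:+.3f}")
```

Tracking such an index layer by layer (or region by region) would reproduce the kind of trajectory the abstract describes.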


Subjects
Visual Cortex; Visual Pathways; Humans; Visual Pathways/physiology; Visual Cortex/physiology; Visual Perception; Neural Networks, Computer; Brain/physiology
2.
Behav Brain Sci; 46: e392, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054329

ABSTRACT

An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of evidence. Failures of particular models drive progress in a vibrant ANN research program on human vision.


Assuntos
Idioma , Redes Neurais de Computação , Humanos
3.
Neuroimage; 251: 118941, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35122966

ABSTRACT

Despite decades of research, our understanding of the relationship between color and form processing in the primate ventral visual pathway remains incomplete. Using fMRI multivoxel pattern analysis, we examined the coding of color and form, using a simple form feature (orientation) and a mid-level form feature (curvature), in human ventral visual processing regions. We found that both color and form could be decoded from activity in early visual areas V1 to V4, as well as in the posterior color-selective region and the shape-selective regions in ventral and lateral occipitotemporal cortex, defined based on their univariate selectivity to color or shape, respectively (the central color region showed only color, but not form, decoding). Meanwhile, decoding biases towards one feature or the other existed in the color- and shape-selective regions, consistent with the univariate feature selectivity reported in past studies. Extensive additional analyses showed that while all these regions contain independent (linearly additive) coding for both features, several early visual regions also encode the conjunction of color and the simple, but not the complex, form feature in a nonlinear, interactive manner. Taken together, the results show that color and form are encoded in a biased, distributed, and largely independent manner across ventral visual regions in the human brain.
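One common way to probe nonlinear conjunction coding (not necessarily the exact analysis used in this study) is to ask whether a linear classifier can read out an XOR-like color-form conjunction label: a purely additive code cannot support this, while an interactive code can. The sketch below illustrates the logic on simulated data; all quantities are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 60

color = rng.integers(0, 2, n_trials)   # two colors
form = rng.integers(0, 2, n_trials)    # two orientations
conjunction = color ^ form             # XOR-style conjunction label

# Simulated voxel patterns: additive color and form signals plus a weaker
# interaction component that carries the conjunction.
X = (np.outer(color, rng.normal(size=n_voxels))
     + np.outer(form, rng.normal(size=n_voxels))
     + 0.5 * np.outer(conjunction, rng.normal(size=n_voxels))
     + rng.normal(scale=2.0, size=(n_trials, n_voxels)))

# Above-chance conjunction decoding with a LINEAR classifier implies the
# underlying code is not purely linearly additive in the two features.
for name, y in [("color", color), ("form", form), ("conjunction", conjunction)]:
    acc = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5).mean()
    print(f"{name:12s} decoding accuracy: {acc:.2f} (chance = 0.50)")
```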


Subjects
Visual Cortex; Visual Pathways; Animals; Brain Mapping; Humans; Magnetic Resonance Imaging; Pattern Recognition, Visual; Photic Stimulation/methods; Visual Cortex/diagnostic imaging; Visual Pathways/diagnostic imaging
4.
J Cogn Neurosci; 31(1): 49-63, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30188780

ABSTRACT

Primate ventral and dorsal visual pathways both contain visual object representations. Dorsal regions receive more input from the magnocellular system, while ventral regions receive input from both the magnocellular and parvocellular systems. Because of potential differences in the spatial sensitivities of the magnocellular and parvocellular systems, object representations in ventral and dorsal regions may differ in how they represent visual input at different spatial scales. To test this prediction, we asked observers to view blocks of images from six object categories, shown in full spectrum, high spatial frequency (SF), or low SF. We found robust object category decoding in all SF conditions, as well as SF decoding, in nearly all the early visual, ventral, and dorsal regions examined. Cross-SF decoding further revealed that object category representations in all regions exhibited substantial tolerance across the SF components. No difference between ventral and dorsal regions was found in their preference for the different SF components. Further comparisons revealed that, whereas differences in the SF component separated object category representations in early visual areas, this separation was much smaller in downstream ventral and dorsal regions, where variation among the object categories played a more significant role in shaping the representational structures. Our findings show that ventral and dorsal regions are similar in how they represent visual input at different spatial scales, and argue against a dissociation between these regions based on differential SF sensitivity.
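A minimal sketch of the cross-SF decoding logic on simulated data (the data-generating assumptions are invented for illustration, not the authors' pipeline): a category classifier trained on high-SF patterns is tested on low-SF patterns, so above-chance transfer indicates SF-tolerant category representations.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_per_cat, n_cats, n_voxels = 20, 6, 80
labels = np.repeat(np.arange(n_cats), n_per_cat)

# SF-invariant category templates plus an SF-specific additive offset.
templates = rng.normal(size=(n_cats, n_voxels))
sf_offset = {"high": rng.normal(size=n_voxels), "low": rng.normal(size=n_voxels)}

def simulate(sf):
    noise = rng.normal(scale=1.5, size=(labels.size, n_voxels))
    return templates[labels] + sf_offset[sf] + noise

X_high, X_low = simulate("high"), simulate("low")

# Train on high-SF patterns, test on low-SF patterns; in practice this
# would be repeated in the reverse direction and averaged.
clf = LinearSVC(max_iter=10000).fit(X_high, labels)
print(f"cross-SF accuracy: {clf.score(X_low, labels):.2f} "
      f"(chance = {1 / n_cats:.2f})")
```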


Assuntos
Reconhecimento Visual de Modelos/fisiologia , Córtex Visual/fisiologia , Adolescente , Adulto , Mapeamento Encefálico , Feminino , Humanos , Imageamento por Ressonância Magnética , Masculino , Estimulação Luminosa , Vias Visuais/fisiologia , Adulto Jovem
6.
Sci Rep; 13(1): 14375, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37658079

ABSTRACT

Deep neural network models (DNNs) are essential to modern AI and provide powerful models of information processing in biological neural networks. Researchers in both neuroscience and engineering are pursuing a better understanding of the internal representations and operations that undergird the successes and failures of DNNs. Neuroscientists additionally evaluate DNNs as models of brain computation by comparing their internal representations to those found in brains. It is therefore essential to have a method to easily and exhaustively extract and characterize the results of the internal operations of any DNN. Many models are implemented in PyTorch, the leading framework for building DNN models. Here we introduce TorchLens, a new open-source Python package for extracting and characterizing hidden-layer activations in PyTorch models. Uniquely among existing approaches to this problem, TorchLens has the following features: (1) it exhaustively extracts the results of all intermediate operations, not just those associated with PyTorch module objects, yielding a full record of every step in the model's computational graph, (2) it provides an intuitive visualization of the model's complete computational graph along with metadata about each computational step in a model's forward pass for further analysis, (3) it contains a built-in validation procedure to algorithmically verify the accuracy of all saved hidden-layer activations, and (4) the approach it uses can be automatically applied to any PyTorch model with no modifications, including models with conditional (if-then) logic in their forward pass, recurrent models, branching models where layer outputs are fed into multiple subsequent layers in parallel, and models with internally generated tensors (e.g., injections of noise). Furthermore, using TorchLens requires minimal additional code, making it easy to incorporate into existing pipelines for model development and analysis, and useful as a pedagogical aid when teaching deep learning concepts. We hope this contribution will help researchers in AI and neuroscience understand the internal representations of DNNs.
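A minimal usage sketch, based on the log_forward_pass entry point the package describes; consult the TorchLens documentation for the current API, as names and signatures may have changed since this abstract was written.

```python
import torch
import torchvision
import torchlens as tl

# Any PyTorch model works; AlexNet is used here purely as an example.
model = torchvision.models.alexnet(weights=None)
x = torch.randn(1, 3, 224, 224)

# Run one forward pass while recording every intermediate operation,
# not just outputs of PyTorch module objects.
model_history = tl.log_forward_pass(model, x, layers_to_save="all",
                                    vis_opt="none")

# Each computational step gets a label, and the saved activation for any
# step can be retrieved by that label.
for label in model_history.layer_labels[:5]:
    print(label, model_history[label].tensor_contents.shape)
```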


Subjects
Brain; Cognition; Engineering; Metadata; Neural Networks, Computer
7.
bioRxiv; 2023 Mar 18.
Article in English | MEDLINE | ID: mdl-36993311

ABSTRACT

Preprint of the TorchLens article in record 6; the abstract is identical.

8.
PLoS One; 16(6): e0253442, 2021.
Article in English | MEDLINE | ID: mdl-34191815

ABSTRACT

To interact with real-world objects, any effective visual system must jointly code the unique features defining each object. Despite decades of neuroscience research, we still lack a firm grasp on how the primate brain binds visual features. Here we apply a novel network-based stimulus-rich representational similarity approach to study color and form binding in five convolutional neural networks (CNNs) with varying architecture, depth, and presence/absence of recurrent processing. All CNNs showed near-orthogonal color and form processing in early layers, but increasingly interactive feature coding in higher layers, with this effect being much stronger for networks trained for object classification than untrained networks. These results characterize for the first time how multiple basic visual features are coded together in CNNs. The approach developed here can be easily implemented to characterize whether a similar coding scheme may serve as a viable solution to the binding problem in the primate brain.
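In the spirit of the representational similarity approach described above (the authors' exact formulation may differ), one way to quantify interactive versus near-orthogonal coding in a layer is to regress the layer's RDM on additive color and form model RDMs and ask how much an interaction regressor adds. The sketch below uses synthetic data; a negligible gain suggests additive coding, a larger gain suggests interactive feature coding.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

colors = np.repeat(np.arange(4), 4).astype(float)
forms = np.tile(np.arange(4), 4).astype(float)
acts = rng.normal(size=(16, 256))   # stand-in for one CNN layer's activations

def binary_rdm(labels):
    """Model RDM: 1 where two stimuli differ on the feature, else 0."""
    return pdist(labels[:, None], lambda a, b: float(a[0] != b[0]))

layer_rdm = pdist(acts, metric="correlation")
color_rdm, form_rdm = binary_rdm(colors), binary_rdm(forms)
interaction_rdm = color_rdm * form_rdm   # 1 only when BOTH features differ

X_add = np.column_stack([color_rdm, form_rdm])
X_full = np.column_stack([color_rdm, form_rdm, interaction_rdm])
r2_add = LinearRegression().fit(X_add, layer_rdm).score(X_add, layer_rdm)
r2_full = LinearRegression().fit(X_full, layer_rdm).score(X_full, layer_rdm)

print(f"R^2 additive: {r2_add:.3f} | R^2 with interaction: {r2_full:.3f}")
```

Repeating this comparison layer by layer, for trained and untrained networks, would expose the shift from near-orthogonal to interactive coding the abstract reports.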


Subjects
Brain/physiology; Color Perception/physiology; Form Perception/physiology; Neural Networks, Computer; Pattern Recognition, Visual/physiology; Animals; Primates
9.
Biol Psychol; 118: 136-146, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27241517

ABSTRACT

Face recognition includes identifying a face as perceptually familiar and recollecting biographical information, or person-knowledge, associated with the face. The majority of studies examining the neural basis of face recognition have confounded these stages by comparing brain responses evoked by novel and perceptually familiar famous faces. Here, we recorded EEG in two tasks in which subjects viewed two sets of faces that were equally perceptually familiar but had differing levels of associated person-knowledge. Our results dissociated the effects of person-knowledge from perceptual familiarity. Faces with associated biographical information elicited a larger ~600 ms centroparietal positivity, both in a passive viewing task in which subjects viewed faces without explicitly responding and in an active question-answering task in which subjects indicated whether or not they knew particular facts about the faces. In the question task only, person-knowledge was associated with a negative ERP difference over the right posterior scalp in the 170-450 ms interval, which appeared again at long latency (>900 ms).
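The reported effects boil down to mean-amplitude differences between conditions in a time window over a subset of electrodes. A toy computation of that quantity is sketched below; the sampling rate, channel indices, and window bounds are assumptions for illustration, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
sfreq = 500                                  # Hz (assumed sampling rate)
times = np.arange(-0.2, 1.2, 1 / sfreq)      # epoch: -200 ms to 1200 ms

# Toy condition-average ERPs, shape (channels, time); the first three
# channels stand in for centroparietal sites.
erp_known = rng.normal(size=(32, times.size))
erp_unknown = rng.normal(size=(32, times.size))

window = (times >= 0.5) & (times <= 0.7)     # window around the ~600 ms effect
centroparietal = [0, 1, 2]                   # hypothetical channel indices

diff = (erp_known[centroparietal][:, window].mean()
        - erp_unknown[centroparietal][:, window].mean())
print(f"person-knowledge effect (mean amplitude difference): {diff:+.3f} µV")
```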


Subjects
Evoked Potentials, Visual; Facial Recognition/physiology; Mental Recall/physiology; Recognition, Psychology/physiology; Electroencephalography; Female; Humans; Male; Young Adult