Results 1 - 4 of 4
1.
Proc Natl Acad Sci U S A ; 121(25): e2312293121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38857385

ABSTRACT

The perception of sensory attributes is often quantified through measurements of sensitivity (the ability to detect small stimulus changes), as well as through direct judgments of appearance or intensity. Despite their ubiquity, the relationship between these two measurements remains controversial and unresolved. Here, we propose a framework in which they arise from different aspects of a common representation. Specifically, we assume that judgments of stimulus intensity (e.g., as measured through rating scales) reflect the mean value of an internal representation, and sensitivity reflects a combination of mean value and noise properties, as quantified by the statistical measure of Fisher information. Unique identification of these internal representation properties can be achieved by combining measurements of sensitivity and judgments of intensity. As a central example, we show that Weber's law of perceptual sensitivity can coexist with Stevens' power-law scaling of intensity ratings (for all exponents), when the noise amplitude increases in proportion to the representational mean. We then extend this result beyond the Weber's law range by incorporating a more general and physiology-inspired form of noise and show that the combination of noise properties and sensitivity measurements accurately predicts intensity ratings across a variety of sensory modalities and attributes. Our framework unifies two primary perceptual measurements (thresholds for sensitivity and rating scales for intensity) and provides a neural interpretation for the underlying representation.
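The central example in this abstract can be checked numerically: with a power-law mean response f(s) = s^α (Stevens scaling) and noise amplitude proportional to that mean, the discrimination threshold derived from Fisher information grows in proportion to s, which is Weber's law, for any exponent α. The sketch below illustrates that calculation; the function name, the exponent α = 0.5, and the noise constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

def weber_fraction(s, alpha=0.5, c=0.1):
    """Discrimination threshold relative to stimulus intensity, assuming a
    power-law mean response f(s) = s**alpha and noise standard deviation
    sigma(s) = c * f(s), i.e. proportional to the representational mean.

    Fisher information is J(s) = (f'(s) / sigma(s))**2, so the threshold
    scales as 1 / sqrt(J(s)) = sigma(s) / f'(s)."""
    f = s ** alpha                       # mean internal response
    f_prime = alpha * s ** (alpha - 1)   # derivative of the mean
    sigma = c * f                        # noise proportional to the mean
    threshold = sigma / f_prime          # ~ 1 / sqrt(Fisher information)
    return threshold / s                 # Weber fraction: threshold / intensity

s = np.array([1.0, 10.0, 100.0])
print(weber_fraction(s))  # constant across s -> Weber's law holds
```

Algebraically, threshold(s) = c·s^α / (α·s^(α-1)) = (c/α)·s, so the Weber fraction is the constant c/α regardless of the exponent, matching the paper's claim that the two laws can coexist.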


Subject(s)
Perception, Humans, Perception/physiology, Sensory Thresholds/physiology, Sensation/physiology, Judgment/physiology
2.
ArXiv ; 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39070038

ABSTRACT

Human ability to recognize complex visual patterns arises through transformations performed by successive areas in the ventral visual cortex. Deep neural networks trained end-to-end for object recognition approach human capabilities, and offer the best descriptions to date of neural responses in the late stages of the hierarchy. But these networks provide a poor account of the early stages, compared to traditional hand-engineered models, or models optimized for coding efficiency or prediction. Moreover, the gradient backpropagation used in end-to-end learning is generally considered to be biologically implausible. Here, we overcome both of these limitations by developing a bottom-up self-supervised training methodology that operates independently on successive layers. Specifically, we maximize feature similarity between pairs of locally-deformed natural image patches, while decorrelating features across patches sampled from other images. Crucially, the deformation amplitudes are adjusted proportionally to receptive field sizes in each layer, thus matching the task complexity to the capacity at each stage of processing. In comparison with architecture-matched versions of previous models, we demonstrate that our layerwise complexity-matched learning (LCL) formulation produces a two-stage model (LCL-V2) that is better aligned with selectivity properties and neural activity in primate area V2. We demonstrate that the complexity-matched learning paradigm is responsible for much of the emergence of the improved biological alignment. Finally, when the two-stage model is used as a fixed front-end for a deep network trained to perform object recognition, the resultant model (LCL-V2Net) is significantly better than standard end-to-end self-supervised, supervised, and adversarially-trained models in terms of generalization to out-of-distribution tasks and alignment with human behavior. 
Our code and pre-trained checkpoints are available at https://github.com/nikparth/LCL-V2.git.
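The layerwise objective described in this abstract, pulling together features of two deformed views of the same patch while decorrelating features across patches from other images, can be sketched as a loss function. This is an illustrative sketch in the spirit of the description, not the authors' exact formulation (see their repository for that); the function name and the weighting constant are assumptions.

```python
import numpy as np

def layerwise_loss(z1, z2, lam=0.01):
    """Sketch of a layerwise self-supervised objective: an invariance term
    that pulls together the features of two deformed views of the same
    patch, plus a redundancy term that penalizes off-diagonal feature
    covariance across a batch of patches drawn from different images.

    z1, z2: (batch, dim) feature arrays from the layer being trained."""
    # invariance: two views of the same patch should map to similar features
    invariance = np.mean(np.sum((z1 - z2) ** 2, axis=1))
    # decorrelation: feature dimensions should be uncorrelated across patches
    zc = z1 - z1.mean(axis=0)
    cov = zc.T @ zc / len(z1)
    off_diag = cov - np.diag(np.diag(cov))
    redundancy = np.sum(off_diag ** 2)
    return invariance + lam * redundancy
```

Because the objective uses only the layer's own outputs, it can be applied to each stage independently, which is what allows training to proceed bottom-up without backpropagating gradients through the whole network.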

3.
bioRxiv ; 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38496618

ABSTRACT

We have measured the visually evoked activity of single neurons recorded in areas V1 and V2 of awake, fixating macaque monkeys, and captured their responses with a common computational model. We used a stimulus set composed of "droplets" of localized contrast, band-limited in orientation and spatial frequency; each brief stimulus contained a random superposition of droplets presented in and near the mapped receptive field. We accounted for neuronal responses with a 2-layer linear-nonlinear model, representing each receptive field by a combination of orientation- and scale-selective filters. We fit the data by jointly optimizing the model parameters to enforce sparsity and to prevent overfitting. We visualized and interpreted the fits in terms of an "afferent field" of nonlinearly combined inputs, dispersed in the 4 dimensions of space and spatial frequency. The resulting fits generally give a good account of the responses of neurons in both V1 and V2, capturing an average of 40% of the explainable variance in neuronal firing. Moreover, the resulting models predict neuronal responses to image families outside the test set, such as gratings of different orientations and spatial frequencies. Our results offer a common framework for understanding processing in the early visual cortex, and also demonstrate the ways in which the distributions of neuronal responses in V1 and V2 are similar but not identical.
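The 2-layer linear-nonlinear cascade described in this abstract has a simple functional form: a bank of orientation- and scale-selective linear filters, each rectified, whose outputs are linearly combined (the "afferent field") and passed through an output nonlinearity. The sketch below shows that structure; the choice of half-wave rectification at both stages and all names are illustrative assumptions, not the fitted model.

```python
import numpy as np

def ln_ln_response(stimulus, filters, weights, gain=1.0):
    """Sketch of a 2-layer linear-nonlinear (LN-LN) model response.

    stimulus: (npix,) stimulus vector (e.g., a droplet image, flattened)
    filters:  (nfilt, npix) bank of oriented, scale-selective linear filters
    weights:  (nfilt,) combination weights over rectified filter outputs"""
    linear = filters @ stimulus            # layer-1 linear filtering
    rectified = np.maximum(linear, 0.0)    # layer-1 nonlinearity (half-wave)
    drive = weights @ rectified            # layer-2 linear combination
    rate = gain * np.maximum(drive, 0.0)   # output nonlinearity: rate >= 0
    return rate
```

In a fit to data, the filters and weights would be jointly optimized against recorded spike counts with a sparsity penalty, as the abstract describes, so that the recovered afferent field stays interpretable and does not overfit.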

4.
bioRxiv ; 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38464304

ABSTRACT

The visual world is richly adorned with texture, which can serve to delineate important elements of natural scenes. In anesthetized macaque monkeys, selectivity for the statistical features of natural texture is weak in V1, but substantial in V2, suggesting that neuronal activity in V2 might directly support texture perception. To test this, we investigated the relation between single cell activity in macaque V1 and V2 and simultaneously measured behavioral judgments of texture. We generated stimuli along a continuum between naturalistic texture and phase-randomized noise and trained two macaque monkeys to judge whether a sample texture more closely resembled one or the other extreme. Analysis of responses revealed that individual V1 and V2 neurons carried much less information about texture naturalness than behavioral reports. However, the sensitivity of V2 neurons, especially those preferring naturalistic textures, was significantly closer to that of behavior compared with V1. The firing of both V1 and V2 neurons predicted perceptual choices in response to repeated presentations of the same ambiguous stimulus in one monkey, despite low individual neural sensitivity. However, neither population predicted choice in the second monkey. We conclude that neural responses supporting texture perception likely continue to develop downstream of V2. Further, combined with neural data recorded while the same two monkeys performed an orientation discrimination task, our results demonstrate that choice-correlated neural activity in early sensory cortex is unstable across observers and tasks, untethered from neuronal sensitivity, and thus unlikely to reflect a critical aspect of the formation of perceptual decisions.

Significance statement: As visual signals propagate along the cortical hierarchy, they encode increasingly complex aspects of the sensory environment and likely have a more direct relationship with perceptual experience. We replicate and extend previous results from anesthetized monkeys differentiating the selectivity of neurons along the first step in cortical vision from area V1 to V2. However, our results further complicate efforts to establish neural signatures that reveal the relationship between perception and the neuronal activity of sensory populations. We find that choice-correlated activity in V1 and V2 is unstable across different observers and tasks, and also untethered from neuronal sensitivity and other features of nonsensory response modulation.
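The choice-correlated activity discussed in this abstract is conventionally quantified as a choice probability: the area under the ROC curve separating a neuron's spike-count distributions on trials grouped by the animal's choice. The sketch below shows that standard analysis (equivalent to a normalized Mann-Whitney U statistic); it is a generic illustration, not the authors' exact pipeline.

```python
import numpy as np

def choice_probability(counts_choice_a, counts_choice_b):
    """Area under the ROC curve separating spike counts on trials where
    the animal chose option A versus option B, for the same ambiguous
    stimulus. A value of 0.5 means firing carries no information about
    the upcoming choice; values above 0.5 mean higher firing predicts A."""
    a = np.asarray(counts_choice_a, dtype=float)
    b = np.asarray(counts_choice_b, dtype=float)
    # Normalized Mann-Whitney U: P(a > b) + 0.5 * P(a == b) over all pairs
    greater = (a[:, None] > b[None, :]).mean()
    ties = (a[:, None] == b[None, :]).mean()
    return greater + 0.5 * ties
```

In this framing, the paper's finding is that such choice probabilities can depart from 0.5 in one observer or task but not another, without any corresponding change in neuronal sensitivity.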
