Results 1 - 15 of 15
1.
Nat Neurosci ; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38898182

ABSTRACT

We use efficient coding principles borrowed from sensory neuroscience to derive the optimal neural population to encode a reward distribution. We show that the responses of dopaminergic reward prediction error neurons in mouse and macaque are similar to those of the efficient code in the following ways: the neurons have a broad distribution of midpoints covering the reward distribution; neurons with higher thresholds have higher gains, more convex tuning functions and lower slopes; and their slope is higher when the reward distribution is narrower. Furthermore, we derive learning rules that converge to the efficient code. The learning rule for the position of the neuron on the reward axis closely resembles distributional reinforcement learning. Thus, reward prediction error neuron responses may be optimized to broadcast an efficient reward signal, forming a connection between efficient coding and reinforcement learning, two of the most successful theories in computational neuroscience.
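The learning rule for the neurons' positions resembles distributional reinforcement learning; a minimal sketch of that family of updates, assuming a hypothetical lognormal reward distribution and a simple expectile-style asymmetric learning rule rather than the paper's exact derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = rng.lognormal(mean=0.0, sigma=0.5, size=20_000)  # assumed reward distribution

n_neurons = 10
taus = np.linspace(0.05, 0.95, n_neurons)   # per-neuron asymmetry between + and - errors
thresholds = np.zeros(n_neurons)            # each neuron's position on the reward axis
lr = 0.01

for r in rewards:
    delta = r - thresholds                  # reward prediction errors
    # asymmetric update: positive errors weighted by tau, negative errors by (1 - tau)
    thresholds += lr * np.where(delta > 0, taus * delta, (1 - taus) * delta)

# neurons with larger tau converge to higher reward thresholds, so the
# population of midpoints tiles the reward distribution
print(np.round(thresholds, 2))
```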

2.
bioRxiv ; 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38712051

ABSTRACT

Measurements of neural responses to identically repeated experimental events often exhibit large amounts of variability. This noise is distinct from signal, operationally defined as the average expected response across repeated trials for each given event. Accurately distinguishing signal from noise is important, as each is worthy of study in its own right (many believe noise reflects important aspects of brain function), and it is important not to confuse one with the other. Here, we introduce a principled modeling approach in which response measurements are explicitly modeled as the sum of samples from multivariate signal and noise distributions. In our proposed method, termed Generative Modeling of Signal and Noise (GSN), the signal distribution is estimated by subtracting the estimated noise distribution from the estimated data distribution. We validate GSN using ground-truth simulations and demonstrate the application of GSN to empirical fMRI data. In doing so, we illustrate a simple consequence of GSN: by disentangling signal and noise components in neural responses, GSN denoises principal components analysis and improves estimates of dimensionality. We end by discussing other situations that may benefit from GSN's characterization of signal and noise, such as estimation of noise ceilings for computational models of neural activity. A code toolbox for GSN is provided with both MATLAB and Python implementations.
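The core step, estimating the noise distribution from trial-to-trial variability and subtracting it from the distribution of trial-averaged responses, can be illustrated with a moment-based toy sketch; the array shapes, noise level, and the crude covariance subtraction below are assumptions for illustration, not the actual GSN fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cond, n_trials, n_units = 200, 8, 50      # assumed data dimensions
signal = rng.normal(size=(n_cond, n_units)) @ rng.normal(size=(n_units, n_units)) * 0.1
data = signal[:, None, :] + rng.normal(scale=0.5, size=(n_cond, n_trials, n_units))

# noise covariance: pooled covariance of residuals around each condition mean
resid = data - data.mean(axis=1, keepdims=True)
noise_cov = np.einsum('cti,ctj->ij', resid, resid) / (n_cond * (n_trials - 1))

# trial-averaged responses have covariance signal_cov + noise_cov / n_trials
mean_cov = np.cov(data.mean(axis=1), rowvar=False)
signal_cov = mean_cov - noise_cov / n_trials

# eigenspectrum of the denoised signal covariance supports dimensionality estimates
evals = np.linalg.eigvalsh(signal_cov)[::-1]
print(f"{(evals > 0).sum()} positive signal eigenvalues in this toy example")
```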

3.
Behav Brain Sci ; 46: e392, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054329

ABSTRACT

An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.


Subjects
Language; Neural Networks, Computer; Humans
4.
Sci Rep ; 13(1): 20269, 2023 11 20.
Article in English | MEDLINE | ID: mdl-37985896

ABSTRACT

When developing models in cognitive science, researchers typically start with their own intuitions about human behavior in a given task and then build in mechanisms that explain additional aspects of the data. This refinement step is often hindered by how difficult it is to distinguish the unpredictable randomness of people's decisions from meaningful deviations between those decisions and the model. One solution for this problem is to compare the model against deep neural networks trained on behavioral data, which can detect almost any pattern given sufficient data. Here, we apply this method to the domain of planning with a heuristic search model for human play in 4-in-a-row, a combinatorial game where participants think multiple steps into the future. Using a data set consisting of 10,874,547 games, we train deep neural networks to predict human moves and find that they accurately do so while capturing meaningful patterns in the data. Thus, deviations between the model and the best network allow us to identify opportunities for model improvement despite starting with a model that has undergone substantial testing in previous work. Based on this analysis, we add three extensions to the model that range from a simple opening bias to specific adjustments regarding endgame planning. Overall, our work demonstrates the advantages of model comparison with a high-performance deep neural network as well as the feasibility of scaling cognitive models to massive data sets for systematically investigating the processes underlying human sequential decision-making.
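The comparison logic, scoring how well each model predicts held-out human moves, amounts to a log-likelihood comparison; the function below is a generic sketch, and the model interfaces and data format are hypothetical placeholders rather than the paper's pipeline.

```python
import numpy as np

def mean_log_likelihood(predict_proba, positions, human_moves):
    """Average log-probability a model assigns to the moves people actually made.
    `predict_proba(position)` is assumed to return a probability per legal move."""
    probs = [predict_proba(pos)[move] for pos, move in zip(positions, human_moves)]
    return float(np.log(np.clip(probs, 1e-12, None)).mean())

# hypothetical usage on held-out games:
# ll_model   = mean_log_likelihood(heuristic_search_model, test_positions, test_moves)
# ll_network = mean_log_likelihood(deep_policy_network, test_positions, test_moves)
# Positions where the network beats the cognitive model flag candidate extensions,
# e.g., an opening bias or endgame-specific planning adjustments.
```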


Subjects
Neural Networks, Computer; Thinking; Humans
5.
Elife ; 12, 2023 08 23.
Article in English | MEDLINE | ID: mdl-37610302

ABSTRACT

Neuroscience has recently made much progress, expanding the complexity of both neural activity measurements and brain-computational models. However, we lack robust methods for connecting theory and experiment by evaluating our new big models with our new big data. Here, we introduce new inference methods enabling researchers to evaluate and compare models based on the accuracy of their predictions of representational geometries: A good model should accurately predict the distances among the neural population representations (e.g. of a set of stimuli). Our inference methods combine novel 2-factor extensions of crossvalidation (to prevent overfitting to either subjects or conditions from inflating our estimates of model accuracy) and bootstrapping (to enable inferential model comparison with simultaneous generalization to both new subjects and new conditions). We validate the inference methods on data where the ground-truth model is known, by simulating data with deep neural networks and by resampling of calcium-imaging and functional MRI data. Results demonstrate that the methods are valid and conclusions generalize correctly. These data analysis methods are available in an open-source Python toolbox (rsatoolbox.readthedocs.io).
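As an illustration of the quantity being evaluated, the sketch below computes a representational dissimilarity matrix (RDM) per simulated subject and correlates it with a model RDM; the full inference procedure in rsatoolbox adds the 2-factor crossvalidation and bootstrap steps described above, so this is only a sketch of the comparison target under assumed data shapes.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix: pairwise correlation distances
    between condition patterns (conditions x channels), in condensed form."""
    return pdist(patterns, metric='correlation')

rng = np.random.default_rng(0)
n_cond, n_chan = 20, 100
model_patterns = rng.normal(size=(n_cond, n_chan))            # hypothetical model features
subjects = [model_patterns + rng.normal(scale=2.0, size=(n_cond, n_chan))
            for _ in range(8)]                                 # noisy simulated subjects

model_rdm = rdm(model_patterns)
scores = [spearmanr(model_rdm, rdm(s))[0] for s in subjects]   # rank correlation per subject
print(f"mean model-data RDM correlation: {np.mean(scores):.2f}")
```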


Subjects
Big Data; Neurosciences; Humans; Calcium, Dietary; Generalization, Psychological; Neural Networks, Computer
6.
Psychol Rev ; 130(2): 334-367, 2023 03.
Article in English | MEDLINE | ID: mdl-36809000

ABSTRACT

Bayesian optimal inference is often heralded as a principled, general framework for human perception. However, optimal inference requires integration over all possible world states, which quickly becomes intractable in complex real-world settings. Additionally, deviations from optimal inference have been observed in human decisions. A number of approximation methods have previously been suggested, such as sampling methods. In this study, we additionally propose point estimate observers, which evaluate only a single best estimate of the world state per response category. We compare the predicted behavior of these model observers to human decisions in five perceptual categorization tasks. Compared to the Bayesian observer, the point estimate observer loses decisively in one task, ties in two and wins in two tasks. Two sampling observers also improve upon the Bayesian observer, but in a different set of tasks. Thus, none of the existing general observer models appears to fit human perceptual decisions in all situations, but the point estimate observer is competitive with other observer models and may provide another stepping stone for future model development. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
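The contrast between integrating over all world states and evaluating a single best estimate per category can be made concrete in a toy 1-D categorization task; the Gaussian categories, the noise level, and the choice of the category mode as the point estimate are assumptions, not the observers fitted in the paper.

```python
import numpy as np
from scipy.stats import norm

cat_means, cat_sds = np.array([-1.0, 1.0]), np.array([0.5, 2.0])  # assumed categories over s
sigma = 1.0                                                       # sensory noise on m

def bayes_choice(m):
    # full Bayesian observer: integrate p(m | s) over each category's distribution of s
    # (closed form for Gaussian categories and Gaussian noise)
    evidence = norm.pdf(m, loc=cat_means, scale=np.sqrt(cat_sds**2 + sigma**2))
    return int(np.argmax(evidence))

def point_estimate_choice(m):
    # point-estimate observer: evaluate the evidence only at each category's single
    # most probable world state (here, the category mode) instead of integrating
    evidence = norm.pdf(m, loc=cat_means, scale=sigma) * norm.pdf(cat_means, cat_means, cat_sds)
    return int(np.argmax(evidence))

# the two observers disagree for some measurements when category widths differ
print([(round(m, 1), bayes_choice(m), point_estimate_choice(m)) for m in np.linspace(-4, 4, 9)])
```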


Subjects
Decision Making; Humans; Bayes Theorem; Decision Making/physiology
7.
J Vis ; 22(4): 17, 2022 03 02.
Article in English | MEDLINE | ID: mdl-35353153

ABSTRACT

Color constancy is our ability to perceive constant colors across varying illuminations. Here, we trained deep neural networks to be color constant and evaluated their performance with varying cues. Inputs to the networks consisted of two-dimensional images of simulated cone excitations derived from three-dimensional (3D) rendered scenes of 2,115 different 3D shapes, with spectral reflectances of 1,600 different Munsell chips, illuminated under 278 different natural illuminations. The models were trained to classify the reflectance of the objects. Testing was done with four new illuminations with equally spaced CIEL*a*b* chromaticities, two along the daylight locus and two orthogonal to it. High levels of color constancy were achieved with different deep neural networks, and constancy was higher along the daylight locus. When gradually removing cues from the scene, constancy decreased. Both ResNets and classical ConvNets of varying degrees of complexity performed well. However, DeepCC, our simplest sequential convolutional network, represented colors along the three color dimensions of human color vision, while ResNets showed a more complex representation.


Subjects
Color Perception; Color Vision; Humans; Lighting; Photic Stimulation; Retinal Cone Photoreceptor Cells
8.
J Vis ; 19(3): 1, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30821809

ABSTRACT

Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.


Subjects
Eye Movements/physiology; Fixation, Ocular/physiology; Eye Movement Measurements; Female; Humans; Male; Memory/physiology; Neural Networks, Computer; Photic Stimulation; Saccades/physiology; Vision, Ocular/physiology; Young Adult
9.
Sci Rep ; 9(1): 1635, 2019 02 07.
Article in English | MEDLINE | ID: mdl-30733470

ABSTRACT

When searching a target in a natural scene, it has been shown that both the target's visual properties and similarity to the background influence whether and how fast humans are able to find it. So far, it was unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades which maintain direction were comparatively low, indicating that these saccades were less selective. Our results suggest that searchers adjust their eye movement dynamics to the search target efficiently, since previous research has shown that low-spatial frequencies are visible farther into the periphery than high-spatial frequencies. We interpret the saccade direction specificity of our effects as an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism.


Subjects
Eye Movements/physiology; Adolescent; Adult; Female; Fixation, Ocular; Humans; Male; Nontherapeutic Human Experimentation; Photic Stimulation; Saccades; Spatial Processing; Time Factors; Young Adult
10.
J Vis ; 17(13): 3, 2017 11 01.
Article in English | MEDLINE | ID: mdl-29094148

ABSTRACT

When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image-a reliable experimental finding termed central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.
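The model extension described, a central activation map on the initial fixation whose influence declines with first-saccade latency, can be sketched as a latency-weighted blend of a central Gaussian with an image-based priority map; the decay constant, Gaussian width, and map sizes are assumptions.

```python
import numpy as np

def first_saccade_target_map(priority_map, latency_ms, tau_ms=300.0, center_sigma=0.15):
    """Blend an image-based priority map with a central Gaussian whose weight
    decays exponentially with the latency of the first saccade."""
    h, w = priority_map.shape
    y, x = np.mgrid[0:h, 0:w]
    center = np.exp(-((x / w - 0.5) ** 2 + (y / h - 0.5) ** 2) / (2 * center_sigma ** 2))
    weight = np.exp(-latency_ms / tau_ms)            # central bias fades with latency
    blended = weight * center / center.sum() + (1 - weight) * priority_map / priority_map.sum()
    return blended / blended.sum()

# the same priority map yields a more centrally concentrated first-saccade target
# distribution at 100 ms latency than at 600 ms latency
prio = np.random.default_rng(0).random((60, 80))
early, late = first_saccade_target_map(prio, 100), first_saccade_target_map(prio, 600)
```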


Subjects
Attention/physiology; Fixation, Ocular/physiology; Saccades/physiology; Adolescent; Adult; Eye Movements; Female; Humans; Male; Photic Stimulation/methods; Young Adult
11.
J Vis ; 17(12): 12, 2017 10 01.
Article in English | MEDLINE | ID: mdl-29053781

ABSTRACT

A large part of classical visual psychophysics was concerned with the fundamental question of how pattern information is initially encoded in the human visual system. From these studies a relatively standard model of early spatial vision emerged, based on spatial frequency and orientation-specific channels followed by an accelerating nonlinearity and divisive normalization: contrast gain-control. Here we implement such a model in an image-computable way, allowing it to take arbitrary luminance images as input. Testing our implementation on classical psychophysical data, we find that it explains contrast detection data including the ModelFest data, contrast discrimination data, and oblique masking data, using a single set of parameters. Leveraging the advantage of an image-computable model, we test our model against a recent dataset using natural images as masks. We find that the model explains these data reasonably well, too. To explain data obtained at different presentation durations, our model requires different parameters to achieve an acceptable fit. In addition, we show that contrast gain-control with the fitted parameters results in a very sparse encoding of luminance information, in line with notions from efficient coding. Translating the standard early spatial vision model to be image-computable resulted in two further insights: First, the nonlinear processing requires a denser sampling of spatial frequency and orientation than optimal coding suggests. Second, the normalization needs to be fairly local in space to fit the data obtained with natural image masks. Finally, our image-computable model can serve as tool in future quantitative analyses: It allows optimized stimuli to be used to test the model and variants of it, with potential applications as an image-quality metric. In addition, it may serve as a building block for models of higher level processing.
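The model's core computation, an oriented spatial-frequency filter bank followed by an accelerating nonlinearity and local divisive normalization (contrast gain control), can be sketched as follows; the filter parameters, exponents, and normalization pool are illustrative assumptions rather than the paper's fitted values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def gabor(size=31, sf=0.15, theta=0.0, sigma=4.0):
    """Odd-symmetric Gabor filter with spatial frequency sf (cycles/pixel) and orientation theta."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.sin(2 * np.pi * sf * xr) * np.exp(-(x**2 + y**2) / (2 * sigma**2))

def early_vision_responses(image, p=2.4, q=2.0, c50=0.05, pool_sigma=3.0):
    """Channel outputs with an accelerating nonlinearity and local divisive normalization."""
    channels = np.stack([np.abs(fftconvolve(image, gabor(sf=sf, theta=th), mode='same'))
                         for sf in (0.05, 0.1, 0.2)
                         for th in np.linspace(0, np.pi, 4, endpoint=False)])
    # normalization pool: spatially blurred sum of channel energies across the bank
    pool = gaussian_filter(np.sum(channels ** q, axis=0), sigma=pool_sigma)
    return channels ** p / (c50 ** q + pool)

luminance = np.random.default_rng(0).random((128, 128)) - 0.5   # stand-in input image
responses = early_vision_responses(luminance)
```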


Subjects
Computer Simulation; Contrast Sensitivity/physiology; Orientation/physiology; Pattern Recognition, Visual/physiology; Psychophysics/methods; Space Perception/physiology; Spatial Navigation/physiology; Humans
12.
Psychol Rev ; 124(4): 505-524, 2017 07.
Article in English | MEDLINE | ID: mdl-28447811

ABSTRACT

Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis, and comparison of dynamical models are of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference, to obtain reliable estimates and corresponding credible intervals. Using hierarchical models, inference is even possible for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates model variants that produced indistinguishable predictions on previously used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
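A generic version of computing a likelihood for time-ordered discrete events by numerical simulation is sketched below; the `simulate` interface is a hypothetical placeholder, and the paper's dynamical saccade-generation model computes its likelihood more directly than this density-estimation shortcut.

```python
import numpy as np
from scipy.stats import gaussian_kde

def simulated_log_likelihood(simulate, params, observed, n_sim=5000):
    """Approximate log-likelihood of a time-ordered sequence of scalar observations.
    `simulate(params, history, n_sim)` is assumed to return n_sim model predictions
    for the next observation given the observed history."""
    log_l = 0.0
    for t, obs in enumerate(observed):
        sims = simulate(params, observed[:t], n_sim)        # model predictions given history
        log_l += np.log(max(float(gaussian_kde(sims)(obs)[0]), 1e-12))
    return log_l

# the resulting log-likelihood can be passed to an optimizer for maximum-likelihood
# fitting or to an MCMC sampler for (hierarchical) Bayesian parameter estimation
```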


Subjects
Bayes Theorem; Cognition; Likelihood Functions; Models, Statistical; Computer Simulation; Humans
13.
Vision Res ; 129: 33-49, 2016 12.
Article in English | MEDLINE | ID: mdl-27771330

ABSTRACT

During scene perception our eyes generate complex sequences of fixations. Predictors of fixation locations are bottom-up factors such as luminance contrast, top-down factors like viewing instruction, and systematic biases, e.g., the tendency to place fixations near the center of an image. However, comparatively little is known about the dynamics of scanpaths after experimental manipulation of specific fixation locations. Here we investigate the influence of initial fixation position on subsequent eye-movement behavior on an image. We presented 64 colored photographs to participants who started their scanpaths from one of two experimentally controlled positions in the right or left part of an image. Additionally, we used computational models to predict the images' fixation locations and classified them as balanced images or images with high conspicuity on either the left or right side of a picture. The manipulation of the starting position influenced viewing behavior for several seconds and produced a tendency to overshoot to the image side opposite to the starting position. Possible mechanisms for the generation of this overshoot were investigated using numerical simulations of statistical and dynamical models. Our model comparisons show that inhibitory tagging is a viable mechanism for dynamical planning of scanpaths.


Subjects
Attention/physiology; Fixation, Ocular/physiology; Adult; Analysis of Variance; Female; Humans; Inhibition, Psychological; Male; Photic Stimulation/methods; Saccades/physiology; Young Adult
14.
Vision Res ; 122: 105-123, 2016 05.
Article in English | MEDLINE | ID: mdl-27013261

ABSTRACT

The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even for data that exhibit substantial overdispersion. This goes beyond classical measures of overdispersion, such as goodness-of-fit, which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach, and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a Python implementation providing the basic capabilities is also available.
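The central modeling step is replacing the binomial likelihood of k correct responses out of n trials with a beta-binomial likelihood whose extra dispersion parameter absorbs non-stationarity. A minimal sketch of the two likelihoods at a single stimulus level follows; the psychometric-function form and the overdispersion parameterization are illustrative and not necessarily psignifit 4's exact parameterization.

```python
import numpy as np
from scipy.special import erf
from scipy.stats import binom, betabinom

def psychometric(x, m=0.0, w=1.0, gamma=0.5, lam=0.02):
    """Cumulative-Gaussian psychometric function with guess rate gamma and lapse rate lam."""
    p = 0.5 * (1 + erf((x - m) / (w * np.sqrt(2))))
    return gamma + (1 - gamma - lam) * p

x, n, k = 0.3, 40, 31                       # stimulus level, trials, correct responses
p = psychometric(x)

L_binomial = binom.pmf(k, n, p)             # standard binomial likelihood

# beta-binomial with the same mean p; eta -> 0 recovers the binomial limit
eta = 0.1
a, b = p * (1 / eta - 1), (1 - p) * (1 / eta - 1)
L_betabinomial = betabinom.pmf(k, n, a, b)
print(L_binomial, L_betabinomial)
```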


Subjects
Bayes Theorem; Data Interpretation, Statistical; Psychometrics/methods; Psychophysics/methods; Humans; Models, Statistical; Sensory Thresholds
15.
J Vis ; 16(3): 9, 2016.
Article in English | MEDLINE | ID: mdl-26868887

ABSTRACT

Varying the distance of a light source from an object alters both the intensity and the spatial distribution of surface shading patterns. We tested whether observers can use such cues to infer light source distance. Participants viewed stereoscopic renderings of rough objects with diffuse and glossy surfaces, which were illuminated by a point source at a range of distances. In one task, they adjusted the position of a small probe dot in three dimensions to report the apparent location of the light in the scene. In a second task, they adjusted the shading on one object (by moving an invisible light source) until it appeared to be illuminated from the same distance as another object. Participants' responses increased linearly with the true light source distance, suggesting that they have clear intuitions about how light source distance affects shading patterns for a variety of different surfaces. However, there were also systematic errors: Subjects overestimated light source distance in the probe adjustment task, and in both experiments, roughness and glossiness affected responses. We find that the pattern of results is predicted surprisingly well by a simplistic model based only on the area of the image that exceeds a certain intensity threshold. Thus, although subjects can report light source distance, they may rely on simple, sometimes erroneous, heuristics to do so.
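The "simplistic model" referred to reduces each rendered image to the area exceeding an intensity threshold; a one-function sketch is below, where the threshold value and the mapping from this cue to a reported distance are free parameters that would be fitted to the data.

```python
import numpy as np

def bright_area_cue(image, threshold=0.8):
    """Fraction of pixels whose intensity exceeds a fixed threshold.
    The heuristic predicts distance reports from this single number; the threshold
    and the cue-to-distance mapping are assumed free parameters fitted to behavior."""
    return float(np.mean(np.asarray(image, dtype=float) > threshold))
```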


Subjects
Distance Perception/physiology; Form Perception/physiology; Light; Visual Perception/physiology; Adult; Cues; Female; Humans; Male; Young Adult