1.
J Comput Neurosci; 47(1): 43-60, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31286380

ABSTRACT

A neuron's firing correlates are defined as the features of the external world to which its activity is correlated. In many parts of the brain, neurons have quite simple firing correlates. A striking example is the grid cells of the rodent medial entorhinal cortex: their activity correlates with the animal's position in space, defining 'grid fields' arranged with remarkable periodicity. Here, we show that the organization and evolution of grid fields relate very simply to physical space. To do so, we use an effective model that treats grid fields as point objects (particles) moving around in space under the influence of forces. We reproduce several observations on the geometry of grid patterns. This particle-like behavior is particularly salient in a recent experiment in which two separate grid patterns merge. We discuss pattern formation in light of known results from the physics of two-dimensional colloidal systems. Notably, we study the limitations of the widely used 'gridness score' and show how the physics of 2D systems could be a source of inspiration, both for data analysis and for computational modeling. Finally, we draw out the relationship between our 'macroscopic' model of grid fields and existing 'microscopic' models of grid cell activity, and discuss how a description at the level of grid fields allows one to place constraints on the underlying grid cell network.
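The abstract's central idea, grid fields as interacting particles relaxing under pairwise forces, can be sketched in a few lines. This is a minimal illustration, not the paper's model: the truncated 1/r³ repulsion, the cutoff at three lattice spacings, and the plain Euler integration are all our own illustrative choices.

```python
import numpy as np

def relax_grid_fields(positions, n_steps=200, dt=0.01, r0=1.0):
    """Relax point-like 'grid fields' under pairwise repulsive forces.

    positions : (N, 2) array of field centers; r0 sets a preferred spacing.
    Hypothetical force law: repulsion decaying as 1/r^3, cut off at 3*r0.
    """
    pos = positions.astype(float).copy()
    for _ in range(n_steps):
        diff = pos[:, None, :] - pos[None, :, :]      # (N, N, 2) displacements
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                # suppress self-force
        mag = np.where(dist < 3 * r0, r0**2 / dist**3, 0.0)
        force = (mag[..., None] * diff / dist[..., None]).sum(axis=1)
        pos += dt * force                             # simple Euler step
    return pos
```

With enough particles in a bounded box, this kind of mutual repulsion is what drives the roughly hexagonal packings that 2D colloidal physics describes; here two nearby "fields" simply push apart.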


Subjects
Computer Simulation , Entorhinal Cortex/cytology , Models, Neurological , Neural Networks, Computer , Action Potentials/physiology , Animals , Spatial Orientation , Sensory Receptor Cells/physiology
2.
Front Comput Neurosci; 10: 13, 2016.
Article in English | MEDLINE | ID: mdl-26924979

ABSTRACT

After the discovery of grid cells, an essential component for understanding how the mammalian brain encodes spatial information, three main classes of computational models were proposed to explain their working principles. Among them, the one based on continuous attractor networks (CAN) is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration over long periods of time. In fact, in the absence of an appropriate resetting mechanism, the accumulation of errors over time, due to noise intrinsic to velocity estimation and neural computation, prevents CAN models from reproducing stable spatial grid patterns. In this paper, we propose an extension of the CAN model that uses Hebbian plasticity to anchor grid cell activity to environmental landmarks. To validate our approach, we fed the neural simulations both artificial data and real data recorded from a robotic setup. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results might be instrumental for next-generation bio-inspired robotic navigation algorithms that exploit neural computation to cope with complex and dynamic environments.
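The anchoring-and-recall mechanism can be illustrated with the simplest possible Hebbian association: repeated co-activation of a landmark cue and a grid pattern builds landmark-to-grid weights, so the cue alone later reinstates the pattern. All names, sizes, and the learning rate here are our own toy assumptions, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_land = 50, 10

# Hypothetical stored grid-cell activity pattern and a one-hot landmark cue.
grid_pattern = rng.random(n_grid)
landmark = np.zeros(n_land)
landmark[3] = 1.0

# Hebbian association: each co-activation adds lr * (post outer pre).
w = np.zeros((n_grid, n_land))
for _ in range(20):
    w += 0.1 * np.outer(grid_pattern, landmark)

# Recall: the landmark cue alone drives a scaled copy of the stored pattern,
# which a CAN could use to reset accumulated path-integration drift.
recalled = w @ landmark
```

In the full model this recalled pattern would be injected into the attractor network, pulling the activity bump back to the landmark-consistent state instead of letting velocity noise accumulate.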

3.
Article in English | MEDLINE | ID: mdl-26737015

ABSTRACT

We propose a prototype of a wearable mobility device that aims to assist the blind with navigation and obstacle avoidance via auditory vision substitution. The described system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed using three different strategies and converted to a 3D output sound using an individualized head-related transfer function. The performance of the device under the different processing strategies is evaluated in initial tests with ten subjects. The outcome of these tests demonstrates promising performance of the system after very short training times of only a few minutes, owing to the minimal encoding of the vision sensors' outputs, which are translated into simple sound patterns that are easy for the user to interpret. The envisioned system will allow for efficient real-time algorithms on a hands-free, lightweight device with exceptional battery lifetime.
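The "minimal encoding" idea, mapping an obstacle's direction and depth to a simple, quickly learnable sound, can be sketched as follows. This is an illustrative scheme of our own, not the paper's: nearer objects get louder clicks at a higher repetition rate, and azimuth is rendered as a crude interaural level difference rather than the individualized HRTF the system actually uses.

```python
import math

def depth_to_sound(azimuth_deg, depth_m, max_depth=5.0):
    """Map obstacle direction and depth to toy sound parameters.

    Returns click repetition rate and per-ear loudness in [0, 1].
    """
    depth = min(max(depth_m, 0.1), max_depth)     # clamp to a sensible range
    loudness = 1.0 - depth / max_depth            # 0 (far) .. ~1 (near)
    rate_hz = 2.0 + 18.0 * loudness               # nearer -> faster clicks
    pan = math.sin(math.radians(azimuth_deg))     # -1 (left) .. +1 (right)
    left = loudness * (1.0 - pan) / 2.0
    right = loudness * (1.0 + pan) / 2.0
    return {"rate_hz": rate_hz, "left": left, "right": right}
```

Because each obstacle is reduced to three scalars, a listener can plausibly learn the mapping in minutes, which is the property the short training times in the study reflect.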


Subjects
Blindness/therapy , Retina/physiology , Vision, Ocular/physiology , Visually Impaired Persons , Adult , Algorithms , Equipment Design , Humans , Image Processing, Computer-Assisted , Male , Sound , Young Adult