Results 1 - 8 of 8
1.
Biomimetics (Basel) ; 9(4)2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38667249

ABSTRACT

To improve the rapidity of path planning for drones in unknown environments, a new bio-inspired path planning method using E-DQN (event-based deep Q-network), which introduces an event stream into a reinforcement learning network, is proposed. First, event data are collected through the AirSim simulator for environmental perception, and an auto-encoder is presented to extract data features and generate event weights. Then, the event weights are input into a DQN (deep Q-network) to choose the action of the next step. Finally, simulation and verification experiments are conducted in a virtual obstacle environment built with Unreal Engine and AirSim. The experimental results show that the proposed algorithm enables drones to find the goal in unknown environments and improves the rapidity of path planning compared with commonly used methods.
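The encoder-to-DQN pipeline described above can be sketched as follows. This is a minimal illustrative stand-in, not the authors' E-DQN: the feature and action dimensions, the random untrained weights, and the epsilon-greedy policy are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 binned event inputs, 16 features, 6 drone actions.
N_EVENTS, N_FEATURES, N_ACTIONS = 64, 16, 6

# Toy "auto-encoder": a random linear encoder standing in for the trained one.
W_enc = rng.normal(scale=0.1, size=(N_FEATURES, N_EVENTS))

# Toy Q-network: one linear layer mapping event weights to action values.
W_q = rng.normal(scale=0.1, size=(N_ACTIONS, N_FEATURES))

def choose_action(event_frame, epsilon=0.1):
    """Encode an event frame into event weights, then pick an epsilon-greedy action."""
    features = np.tanh(W_enc @ event_frame)  # event weights from the encoder
    q_values = W_q @ features                # Q(s, a) for each discrete action
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))  # explore
    return int(np.argmax(q_values))          # exploit

event_frame = rng.random(N_EVENTS)  # stand-in for a binned DVS event stream
action = choose_action(event_frame, epsilon=0.0)
```

In a trained system, `W_enc` would come from the auto-encoder and `W_q` from Q-learning updates; here both are random so only the data flow is shown.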

2.
Front Neurorobot ; 17: 1111861, 2023.
Article in English | MEDLINE | ID: mdl-36937552

ABSTRACT

Introduction: With the development of artificial intelligence and brain science, brain-inspired navigation and path planning have attracted widespread attention. Methods: In this paper, we present a place-cell-based path planning algorithm that uses a spiking neural network (SNN) to create efficient routes for drones. First, place cells are characterized by the leaky integrate-and-fire (LIF) neuron model. Then, the connection weights between neurons are trained by spike-timing-dependent plasticity (STDP) learning rules. Afterwards, a synaptic vector field is created to avoid obstacles and to find the shortest path. Results: Finally, simulation experiments in both a Python simulation environment and an Unreal Engine environment are conducted to evaluate the algorithms. Discussion: The experimental results demonstrate the validity, robustness, and computational speed of the proposed model.

3.
Front Neurosci ; 15: 667011, 2021.
Article in English | MEDLINE | ID: mdl-34267622

ABSTRACT

Animal brains still outperform even the most capable machines despite operating at significantly lower speeds. Nonetheless, impressive progress has been made in robotics in the areas of vision, motion planning, and path planning in recent decades. Brain-inspired spiking neural networks (SNNs) and the parallel hardware necessary to exploit their full potential have promising features for robotic applications. Besides the most obvious platform for deploying SNNs, brain-inspired neuromorphic hardware, graphics processing units (GPUs) are also well suited to parallel computing. Libraries for generating CUDA-optimized code, such as GeNN, and affordable embedded systems make GPUs an attractive alternative due to their low price and availability. While a few performance tests exist, there has been a lack of benchmarks targeting robotic applications. We compare the performance of a neural wavefront algorithm, as a representative of use cases in robotics, on different hardware suitable for running SNN simulations. The SNN used for this benchmark is modeled in the simulator-independent declarative language PyNN, which allows the same model to be used with different simulator backends. Our emphasis is the comparison between NEST, running on a serial CPU; SpiNNaker, as a representative of neuromorphic hardware; and an implementation in GeNN. Beyond that, we also investigate the differences of GeNN deployed to different hardware. The simulators and hardware are compared with regard to total simulation time, average energy consumption per run, and the length of the resulting path. We hope that the insights gained about the performance details of parallel hardware solutions contribute to developing more efficient SNN implementations for robotics.
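The classical (non-neural) wavefront algorithm that the benchmarked SNN implements can be stated compactly: propagate a cost wave outward from the goal by breadth-first search, then descend the cost gradient from the start. The sketch below is the plain grid version, not the PyNN model from the paper.

```python
from collections import deque

def wavefront(grid, goal):
    """Propagate a wave of increasing cost from the goal over free cells.

    grid: list of rows, 0 = free cell, 1 = obstacle. Returns a cost map
    (None for obstacles and unreachable cells).
    """
    rows, cols = len(grid), len(grid[0])
    cost = [[None] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and cost[nr][nc] is None:
                cost[nr][nc] = cost[r][c] + 1
                q.append((nr, nc))
    return cost

def descend(cost, start):
    """Greedily follow decreasing cost values from start down to the goal."""
    rows, cols = len(cost), len(cost[0])
    path, (r, c) = [start], start
    while cost[r][c] != 0:
        candidates = [
            (r + dr, c + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < rows and 0 <= c + dc < cols
            and cost[r + dr][c + dc] is not None
        ]
        r, c = min(candidates, key=lambda p: cost[p[0]][p[1]])
        path.append((r, c))
    return path

# A 3x3 map with a wall across the middle row forces a detour.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
cost = wavefront(grid, goal=(2, 0))
path = descend(cost, start=(0, 0))
```

In the neural version, each grid cell becomes a neuron and the wave is carried by spike propagation rather than a queue, which is what makes the algorithm a natural fit for parallel SNN hardware.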

4.
Front Neurorobot ; 13: 77, 2019.
Article in English | MEDLINE | ID: mdl-31619981

ABSTRACT

The human motor system is robust, adaptive, and very flexible. The underlying principles of human motion provide inspiration for robotics. Pointing at different targets is a common robotics task to which insights about human motion can be applied. Traditionally in robotics, when a motion is generated it has to be validated to ensure that the robot configurations involved are appropriate. The human brain, in contrast, uses the motor cortex to generate new motions by reusing and combining existing knowledge before executing the motion. We propose a method to generate and control pointing motions for a robot using a biologically inspired architecture implemented with spiking neural networks. We outline a simplified model of the human motor cortex that generates motions using motor primitives. The network learns a base motor primitive for pointing at a target in the center, and four correction primitives to point at targets up, down, left, and right of the base primitive, respectively. The primitives are combined to reach different targets. We evaluate the performance of the network with a humanoid robot pointing at different targets marked on a plane. The network was able to combine one, two, or three motor primitives at the same time to control the robot in real time to reach a specific target. We are working on extending this approach from pointing at a given target to performing grasping or tool-manipulation tasks, which has many applications in engineering and industry involving real robots.
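The base-plus-corrections scheme amounts to blending joint trajectories. The sketch below illustrates that idea with made-up trajectories and dimensions (50 timesteps, 4 joints, two of the four correction primitives); the paper's primitives are learned by a spiking network, not written down like this.

```python
import numpy as np

# Hypothetical primitives: joint-angle trajectories of shape (T timesteps, J joints).
T, J = 50, 4
t = np.linspace(0.0, 1.0, T)[:, None]

base = np.sin(np.pi * t) * np.ones((1, J))                 # point at the centre target
corr_up = 0.2 * t * np.array([[1.0, 0.0, 0.0, 0.0]])       # shifts the motion upward
corr_right = 0.2 * t * np.array([[0.0, 1.0, 0.0, 0.0]])    # shifts it to the right

def combine(weights):
    """Blend the base primitive with weighted correction primitives."""
    w_up, w_right = weights
    return base + w_up * corr_up + w_right * corr_right

# Point at a target halfway between the centre and the upper-right target.
traj = combine((0.5, 0.5))
```

Zero weights recover the base motion exactly, which mirrors the paper's observation that one, two, or three primitives can be active at once depending on the target.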

5.
Front Neurorobot ; 13: 81, 2019.
Article in English | MEDLINE | ID: mdl-31632262

ABSTRACT

The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain, whose purpose is to control a body in closed loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from both. The resulting framework makes it possible to evaluate the validity of biologically plausible plasticity models in closed-loop robotics environments. We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning policies within the course of simulated hours for both tasks. Preliminary parameter explorations indicate that the learning rate and the temperature driving the stochastic processes that govern synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing recent deep reinforcement learning techniques that could increase the functionality of SPORE on visuomotor tasks.
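The core intuition behind reward-driven synaptic sampling — stochastic weight exploration whose reward-correlated component is retained — can be illustrated with a single-weight toy. This is NOT the SPORE rule itself; it is a generic weight-perturbation scheme with an invented reward function, shown only to convey why a learning rate and exploration noise (the "temperature") both need tuning.

```python
import numpy as np

rng = np.random.default_rng(1)

# One weight searches for the reward optimum at w = 1 by following
# reward-correlated noise; baseline subtraction keeps the updates unbiased.
w, baseline = 0.0, 0.0
lr, noise = 2.0, 0.1          # learning rate and exploration "temperature"

for step in range(2000):
    xi = noise * rng.normal()              # stochastic perturbation of the weight
    reward = 1.0 - abs(1.0 - (w + xi))     # toy reward, peaked at w = 1
    baseline = 0.9 * baseline + 0.1 * reward
    w += lr * (reward - baseline) * xi     # keep the reward-correlated noise
```

If `noise` is too large the weight never settles, and if `lr` is too large the updates overshoot — a one-dimensional analogue of the regulation issue the abstract reports.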

6.
Front Neurorobot ; 13: 28, 2019.
Article in English | MEDLINE | ID: mdl-31191287

ABSTRACT

Any visual sensor, whether artificial or biological, maps the 3D world onto a 2D representation. The missing dimension is depth, and most species use stereo vision to recover it. Stereo vision implies multiple perspectives and matching; hence it obtains depth from a pair of images. Algorithms for stereo vision are also used successfully in robotics. Although biological systems seem to compute disparities effortlessly, artificial methods suffer from high energy demands and latency. The crucial part is the correspondence problem: finding the matching points of two images. The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint: time. Due to their asynchronous mode of operation, which considers the precise timing of spikes, spiking neural networks can take advantage of this constraint. In this work, we investigate sensors and algorithms for event-based stereo vision, leading to more biologically plausible robots. We focus mainly on binocular stereo vision.
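The temporal constraint described above can be made concrete: for rectified event cameras, candidate matches must lie on the same image row (the epipolar constraint) and, additionally, occur nearly simultaneously. The sketch below is a deliberately naive greedy matcher over hand-made events, not one of the surveyed algorithms.

```python
# Each event is a tuple (x, y, t, polarity). For rectified cameras the
# epipolar constraint reduces matching to the same image row; event cameras
# add a temporal constraint: corresponding events occur almost simultaneously.

def match_events(left, right, dt_max=1.0):
    """Greedily match two event streams; returns (x, y, disparity) triples."""
    matches, used = [], set()
    for xl, yl, tl, pl in left:
        best, best_dt = None, dt_max
        for i, (xr, yr, tr, pr) in enumerate(right):
            if i in used or yr != yl or pr != pl:
                continue  # row (epipolar) and polarity must agree
            if abs(tl - tr) < best_dt:
                best, best_dt = i, abs(tl - tr)
        if best is not None:
            used.add(best)
            matches.append((xl, yl, xl - right[best][0]))  # disparity = x_l - x_r
    return matches

# Two objects seen by both cameras ~50 ms apart; nearer objects have
# larger disparity.
left = [(10, 5, 0.00, 1), (20, 6, 0.50, 1)]
right = [(7, 5, 0.05, 1), (16, 6, 0.55, 1)]
disparities = match_events(left, right)
```

Real event streams carry noise and many simultaneous events per row, which is why the surveyed work uses cooperative spiking networks rather than greedy nearest-in-time matching.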

7.
Bioinspir Biomim ; 12(5): 055001, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28569669

ABSTRACT

Short-term visual prediction is important in both biology and robotics. It allows us to anticipate upcoming states of the environment and therefore to plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole [Formula: see text] event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth, continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282-93 and Burgsteiner H et al 2007 Appl. Intell. 26 99-109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.
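The reservoir-plus-readout structure of a liquid state machine can be sketched with a rate-based echo-state stand-in: a fixed random recurrent network is driven by the input, and only a linear readout is trained. Everything here (reservoir size, spectral radius, the sine input) is illustrative; the paper's liquid is spiking and DVS-driven.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny echo-state-style reservoir as a rate-based stand-in for a spiking liquid.
N_IN, N_RES = 1, 100
W_in = rng.normal(scale=0.5, size=(N_RES, N_IN))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1: fading memory

def run_liquid(inputs):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Train only a linear readout to predict the input one step ahead.
u = np.sin(np.linspace(0.0, 8.0 * np.pi, 400))
S = run_liquid(u)
X, y = S[:-1], u[1:]
w_out = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ w_out
```

The fixed, untrained recurrent part is what makes the approach cheap: only the least-squares readout is fitted, regardless of input dimensionality.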


Subjects
Algorithms; Biomimetic Materials; Biomimetics/instrumentation; Motion Perception; Neural Networks, Computer; Vision, Ocular; Equipment Design; Humans; Retina; Robotics; Simulation Training
8.
Front Neurorobot ; 11: 2, 2017.
Article in English | MEDLINE | ID: mdl-28179882

ABSTRACT

Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to meet real-time constraints, they cannot be embedded into a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there has so far been no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments, and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of programming skill required, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
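The Braitenberg example mentioned above is the simplest possible closed brain-body loop, and its logic fits in a few lines. The sketch below uses no Neurorobotics Platform APIs — it only illustrates the crossed sensor-motor wiring of a light-seeking Braitenberg vehicle.

```python
def braitenberg_step(left_light, right_light, gain=1.0):
    """One control step of a crossed-excitatory Braitenberg vehicle.

    Each light sensor drives the opposite motor, so the wheel away from
    the light spins faster and the vehicle turns toward the source.
    """
    left_motor = gain * right_light
    right_motor = gain * left_light
    return left_motor, right_motor

# Light is stronger on the right: the left motor spins faster,
# turning the vehicle rightward, toward the source.
lm, rm = braitenberg_step(left_light=0.2, right_light=0.8)
```

On the platform, the same mapping would be expressed as a transfer function between a (possibly spiking) brain model and the simulated robot's wheel velocities; here it is collapsed to a direct function call to show the closed-loop idea.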
