ABSTRACT
Insect-inspired navigation strategies have the potential to unlock robotic navigation in power-constrained scenarios, as they can function effectively with limited computational resources. One such strategy, familiarity-based navigation, has successfully guided a robot along routes of up to 60 m using a single-layer neural network trained with an Infomax learning rule. Given the small size of the network that effectively encodes the route, here we investigate the limits of this method, challenging it to navigate longer routes and examining the relationship between performance, view acquisition rate, view dimension, network size, and robustness to noise. Our goal is to determine the parameters at which this method operates effectively and to explore the profile with which it fails, both to inform theories of insect navigation and to improve robotic deployments. We show that effective memorization of familiar views is possible for longer routes than previously attempted, but that this length decreases as the input view dimension is reduced. We also show that the ideal view acquisition rate must increase with route length for consistent performance. We further demonstrate that computational and memory savings can be made, with equivalent performance, by reducing the network size (an important consideration for applicability to small, low-power robots), and we investigate the profile of memory failure, demonstrating increased confusion across the route as it extends in length. In this extension to previous work, we also investigate the form taken by the network weights as training extends and the areas of the image on which visual familiarity-based navigation most relies. Additionally, we investigate the robustness of familiarity-based navigation to view variation caused by noise.
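
As a concrete illustration of the kind of network this abstract refers to, the sketch below shows a minimal single-layer familiarity network trained with a standard Infomax rule, with familiarity read out as the summed absolute activation of the novelty units. It is a sketch under assumptions: the layer sizes, learning rate, and exact update form follow the generic Infomax formulation, not the authors' implementation.

import numpy as np

class InfomaxFamiliarityNet:
    """Minimal single-layer familiarity network trained with an Infomax rule.

    Sketch only: sizes, learning rate, and update form are assumptions based on
    the generic Infomax formulation, not the authors' code.
    """

    def __init__(self, n_inputs, n_novelty_units, learning_rate=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Small random initial weights, one row per novelty unit.
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_inputs),
                            size=(n_novelty_units, n_inputs))
        self.eta = learning_rate

    def train_view(self, view):
        # Present one flattened, normalised training view and update the weights.
        x = view.ravel()
        h = self.W @ x            # net input to each novelty unit
        y = np.tanh(h)            # unit activation
        n = self.W.shape[0]
        # Infomax (anti-Hebbian) update: trained inputs come to evoke low activation.
        self.W += (self.eta / n) * (self.W - np.outer(y + h, h) @ self.W)

    def familiarity_score(self, view):
        # Summed absolute activation: lower values indicate a more familiar view.
        return np.abs(self.W @ view.ravel()).sum()

During route recapitulation, a robot using such a network would sample views at several candidate headings, compute this score for each, and steer towards the heading with the lowest (most familiar) score.
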
ABSTRACT
Ants are capable of learning long visually guided foraging routes with limited neural resources. The visual scene memory needed for this behaviour is mediated by the mushroom bodies, an insect brain region important for learning and memory. In a visual navigation context, the mushroom bodies are theorised to act as familiarity detectors, guiding ants towards views that are similar to those learned when first travelling along a foraging route. Evidence from behavioural experiments, computational studies, and brain lesion studies supports this idea. Here we further investigate the role of the mushroom bodies in visual navigation with a spiking neural network model that learns complex natural scenes. By implementing these networks in GeNN, a library for building GPU-accelerated spiking neural networks, we were able to test the models offline on an image database representing navigation through a complex outdoor natural environment, and also online, embodied on a robot. The mushroom body model successfully learnt a large series of visual scenes (400 scenes corresponding to a 27 m route) and used these memories to choose accurate heading directions during route recapitulation in both complex environments. By analysing the model's Kenyon cell (KC) activity, we demonstrate that KC activity is directly related to the novelty of the input images. A parameter search revealed a non-linear dependence between the optimal KC to visual projection neuron (VPN) connection sparsity and the length of time for which the model is presented with an image stimulus. The parameter search also showed that training the model on lower proportions of a route generally produced better accuracy when testing on the entire route. We embodied the mushroom body model and comparator visual navigation algorithms on a Quanser Q-car robot, with all processing running on an Nvidia Jetson TX2. On a 6.5 m route, the mushroom body model had a mean distance to the training route (error) of 0.144 ± 0.088 m over 5 trials, a performance comparable to that of standard visual-only navigation algorithms. Thus, we have demonstrated that a biologically plausible model of the ant mushroom body can navigate complex environments both in simulation and in the real world. Understanding the neural basis of this behaviour will provide insight into how neural circuits are tuned to rapidly learn behaviourally relevant information from complex environments, and will provide inspiration for creating bio-mimetic computer and robotic systems that can learn rapidly with low energy requirements.
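
To make the architecture described above more concrete, the following is a simplified, rate-based sketch of a mushroom-body-style familiarity circuit: sparse random VPN-to-KC connectivity, a winner-take-most sparse KC code, and KC-to-extrinsic-neuron synapses that are depressed for active KCs during training so that previously seen views evoke a low novelty output. The published model is a spiking network built in GeNN; all sizes, the sparsity level, and the function names here are illustrative assumptions, not the authors' implementation.

import numpy as np

def build_mushroom_body(n_vpn, n_kc, fan_in, seed=0):
    # Sparse random VPN->KC connectivity: each KC samples `fan_in` distinct VPNs.
    # (Rate-based sketch; the published model is a spiking network built in GeNN.)
    rng = np.random.default_rng(seed)
    vpn_to_kc = np.zeros((n_kc, n_vpn))
    for kc in range(n_kc):
        vpn_to_kc[kc, rng.choice(n_vpn, size=fan_in, replace=False)] = 1.0
    kc_to_en = np.ones(n_kc)  # KC->extrinsic neuron weights, depressed during training
    return vpn_to_kc, kc_to_en

def kc_response(vpn_to_kc, view, sparsity=0.05):
    # Winner-take-most sparse code: only the most strongly driven KCs are active.
    drive = vpn_to_kc @ view.ravel()
    k = max(1, int(sparsity * drive.size))
    threshold = np.partition(drive, -k)[-k]
    return (drive >= threshold).astype(float)

def train_view(vpn_to_kc, kc_to_en, view, depression=1.0):
    # Depress the KC->EN synapses of KCs active for this view, so that
    # re-presenting a trained view produces a low extrinsic neuron output.
    kc_to_en *= 1.0 - depression * kc_response(vpn_to_kc, view)

def novelty(vpn_to_kc, kc_to_en, view):
    # Extrinsic neuron output: high for unfamiliar views, low for trained ones.
    return float(kc_response(vpn_to_kc, view) @ kc_to_en)

In this sketch, route recapitulation would proceed by comparing the novelty of views sampled at several candidate headings and steering towards the least novel one, mirroring the familiarity-detector role attributed to the mushroom bodies above.
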