ABSTRACT
In this paper we compare Bayesian Optimization, Differential Evolution, and an Evolution Strategy, employed as gait-learning algorithms in modular robots. The motivational scenario is the joint evolution of morphologies and controllers, where 'newborn' robots also undergo a learning process to optimize their inherited controllers (without changing their bodies). This context raises the question: how do gait-learning algorithms compare when applied to various morphologies that are not known in advance (and thus need to be treated without priors)? To answer this question, we use a test suite of twenty different robot morphologies to evaluate our gait learners and compare their efficiency, efficacy, and sensitivity to morphological differences. The results indicate that Bayesian Optimization and Differential Evolution deliver the same solution quality (walking speed for the robot) with fewer evaluations than the Evolution Strategy. Furthermore, the Evolution Strategy is more sensitive to morphological differences (its efficacy varies more between different morphologies) and is more subject to luck (repeated runs on the same morphology show greater variance in the outcomes).
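The abstract does not give implementation details; as a rough illustration of one of the black-box gait learners being compared, below is a minimal (1+λ) Evolution Strategy sketch that tunes a fixed-length controller parameter vector against a walking-speed fitness. The function names, population size, and step-size schedule are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def learn_gait(initial_params, evaluate_fn, rng, lam=8, sigma=0.1, budget=300):
    """Minimal (1+lambda) Evolution Strategy for black-box gait learning.

    initial_params -- inherited controller parameter vector
    evaluate_fn    -- maps a parameter vector to walking speed (one robot trial)
    """
    best = np.asarray(initial_params, dtype=float)
    best_fit = evaluate_fn(best)
    evals = 1
    while evals < budget:
        # Sample lambda mutated offspring around the current best controller.
        offspring = best + sigma * rng.standard_normal((lam, best.size))
        fits = np.array([evaluate_fn(o) for o in offspring])
        evals += lam
        if fits.max() > best_fit:                      # elitist replacement
            best, best_fit = offspring[fits.argmax()], fits.max()
        else:
            sigma *= 0.9                               # shrink step size on failure
    return best, best_fit

# Stand-in fitness for demonstration; a real trial would run the robot.
rng = np.random.default_rng(0)
dummy_speed = lambda p: -float(np.sum((np.asarray(p) - 0.5) ** 2))
best_params, speed = learn_gait(np.zeros(10), dummy_speed, rng)
```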
ABSTRACT
Division of labor is ubiquitous in biological systems, as evidenced by various forms of complex task specialization observed in both animal societies and multicellular organisms. Although clearly adaptive, the way in which division of labor first evolved remains enigmatic, as it requires the simultaneous co-occurrence of several complex traits to achieve the required degree of coordination. Recently, evolutionary swarm robotics has emerged as an excellent test bed to study the evolution of coordinated group-level behavior. Here we use this framework for the first time to study the evolutionary origin of behavioral task specialization among groups of identical robots. The scenario we study involves an advanced form of division of labor, common in insect societies and known as "task partitioning", whereby two sets of tasks have to be carried out in sequence by different individuals. Our results show that task partitioning is favored whenever the environment has features that, when exploited, reduce switching costs and increase the net efficiency of the group, and that an optimal mix of task specialists is achieved most readily when the behavioral repertoires aimed at carrying out the different subtasks are available as pre-adapted building blocks. Nevertheless, we also show for the first time that self-organized task specialization could be evolved entirely from scratch, starting only from basic, low-level behavioral primitives, using a nature-inspired evolutionary method known as Grammatical Evolution. Remarkably, division of labor was achieved merely by selecting on overall group performance, and without providing any prior information on how the global object retrieval task was best divided into smaller subtasks. We discuss the potential of our method for engineering adaptively behaving robot swarms and interpret our results in relation to the likely path that nature took to evolve complex sociality and task specialization.
Subjects
Artificial Intelligence, Biological Evolution, Biological Models, Robotics, Social Behavior, Animals, Ants/physiology, Computational Biology, Robotics/instrumentation, Robotics/methods, Task Performance and Analysis, Work
ABSTRACT
Legged robots are well-suited for deployment in unstructured environments but require a unique control scheme specific to their design. As controllers optimised in simulation do not transfer well to the real world (the infamous sim-to-real gap), methods enabling quick learning in the real world, without any assumptions about the specific robot model and its dynamics, are necessary. In this paper, we present a generic method based on Central Pattern Generators that enables the acquisition of basic locomotion skills in parallel, through very few trials. The novelty of our approach, underpinned by a mathematical analysis of the controller model, is to search for good initial states instead of optimising connection weights. Empirical validation on six different robot morphologies demonstrates that our method enables robots to learn primary locomotion skills in less than 15 minutes in the real world. Finally, we showcase the learned skills in a targeted locomotion experiment.
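The controller model itself is not detailed in the abstract; the sketch below illustrates one minimal reading of searching over oscillator initial states: with an uncoupled two-state oscillator per joint, the initial state alone fixes each joint's amplitude and phase, so no connection weights need to be optimised. The oscillator form, frequency, and joint count here are assumptions.

```python
import numpy as np

def cpg_targets(initial_states, omega=2.0 * np.pi, dt=0.01, steps=500):
    """Integrate one uncoupled two-state oscillator per joint.

    initial_states -- (n_joints, 2) array; each row (x0, y0) alone fixes that
                      joint's amplitude sqrt(x0**2 + y0**2) and phase, which is
                      exactly what a search over initial states can tune.
    Returns joint-angle targets of shape (steps, n_joints).
    """
    state = np.array(initial_states, dtype=float)
    targets = np.empty((steps, state.shape[0]))
    for t in range(steps):
        state[:, 0] += dt * omega * state[:, 1]    # dx/dt =  omega * y
        state[:, 1] -= dt * omega * state[:, 0]    # dy/dt = -omega * x (semi-implicit)
        targets[t] = state[:, 0]                   # x-component drives the joint
    return targets

# A learner would score the gait produced by each candidate (e.g. distance
# walked in a short real-world trial) and keep the best initial states.
candidate = np.random.uniform(-1.0, 1.0, size=(4, 2))   # 4 hypothetical joints
joint_targets = cpg_targets(candidate)
```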
ABSTRACT
We introduce an elasticity-based mechanism that drives active particles to self-organize by cascading self-propulsion energy towards lower-energy modes. We illustrate it on a simple model of self-propelled agents linked by linear springs that reach a collectively rotating or translating state without requiring aligning interactions. We develop an active elastic sheet theory, complementary to the prevailing active fluid theories, and find analytical stability conditions for the ordered state. Given its ubiquity, this mechanism could play a relevant role in various natural and artificial swarms.
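As a minimal sketch of an elasticity-based mechanism in the spirit of this abstract, the following simulates self-propelled agents connected by linear springs whose headings reorient only through the projected spring forces, with no explicit alignment interaction; the chain topology and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal active-elastic sketch: a chain of self-propelled agents coupled by
# linear springs; headings change only via the projected spring forces.
N, k, rest, v0, alpha, beta, dt = 10, 5.0, 1.0, 0.05, 1.0, 2.0, 0.05
rng = np.random.default_rng(1)
pos = np.column_stack([np.arange(N, dtype=float), np.zeros(N)])   # straight chain
theta = rng.uniform(-np.pi, np.pi, N)                             # random headings
links = [(i, i + 1) for i in range(N - 1)]                        # spring topology

for _ in range(2000):
    force = np.zeros((N, 2))
    for i, j in links:
        d = pos[i] - pos[j]
        dist = np.linalg.norm(d)
        f = -k * (dist - rest) * d / dist          # linear spring force on agent i
        force[i] += f
        force[j] -= f
    heading = np.column_stack([np.cos(theta), np.sin(theta)])
    normal = np.column_stack([-np.sin(theta), np.cos(theta)])
    speed = v0 + alpha * np.sum(force * heading, axis=1)   # force projected on heading
    turn = beta * np.sum(force * normal, axis=1)           # ... and on its normal
    pos += dt * speed[:, None] * heading
    theta += dt * turn

# After relaxation the chain translates or rotates coherently, although no agent
# ever observes its neighbours' orientations.
```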
Subjects
Theoretical Models, Animals, Animal Behavior, Crystallization, Elasticity, Biological Models, Chemical Models
ABSTRACT
This paper is concerned with learning transferable contact models for aerial manipulation tasks. We investigate a contact-based approach for enabling unmanned aerial vehicles with cable-suspended passive grippers to compute the attach points on novel payloads for aerial transportation. This is the first time that the problem of autonomously generating contact points for such tasks has been investigated. Our approach builds on the underpinning idea that we can learn a probability density of contacts over objects' surfaces from a single demonstration. We enhance this formulation for encoding aerial transportation tasks while maintaining the one-shot learning paradigm, without handcrafting task-dependent features or employing ad-hoc heuristics; the only prior is extrapolated directly from a single demonstration. Our models rely only on the geometrical properties of the payloads computed from a point cloud, and they are robust to partial views. The effectiveness of our approach is evaluated in simulation, in which one or three quadcopters are requested to transport previously unseen payloads along a desired trajectory. The contact points and the quadcopters' configurations are computed on the fly for each test by our approach and compared with a baseline method, a modified grasp-learning algorithm from the literature. Empirical experiments show that the contacts generated by our approach yield better controllability of the payload for a transportation task. We conclude this paper with a discussion of the strengths and limitations of the presented idea and our suggested future research directions.
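The paper's actual features and learning procedure are not given in the abstract; the snippet below is only a generic kernel-density sketch of the underlying idea, namely scoring candidate attach points on a novel payload's point cloud against contacts recorded in a single demonstration. The data, bandwidth, and use of raw positions (rather than local geometric features) are assumptions for illustration.

```python
import numpy as np

def contact_log_density(demo_contacts, candidates, bandwidth=0.05):
    """Unnormalised Gaussian kernel density over demonstrated contact points.

    demo_contacts -- (m, 3) contact positions recorded in a single demonstration
    candidates    -- (n, 3) surface points sampled from a novel payload's cloud
    """
    diff = candidates[:, None, :] - demo_contacts[None, :, :]
    sq = np.sum(diff ** 2, axis=-1) / (2.0 * bandwidth ** 2)
    return np.log(np.exp(-sq).sum(axis=1) + 1e-12)

# Toy data: two demonstrated contacts and a random payload point cloud; the
# highest-scoring candidates would be proposed as attach points.
demo = np.array([[0.0, 0.1, 0.2], [0.0, -0.1, 0.2]])
cloud = np.random.uniform(-0.3, 0.3, size=(200, 3))
attach_points = cloud[np.argsort(contact_log_density(demo, cloud))[-3:]]
```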
ABSTRACT
Swarm behaviors offer scalability and robustness to failure through a decentralized and distributed design. When designing coherent group motion, as in swarm flocking, virtual potential functions are a widely used mechanism to ensure the aforementioned properties. However, arbitrating among different virtual potential sources in real time has proven to be difficult. Such arbitration is often affected by the fine-tuning of the control parameters used to select among the different sources and by manually set cut-offs used to achieve a balance between stability and velocity. This reliance on parameter tuning makes these methods ill-suited for field operations of aerial drones, which are characterized by fast non-linear dynamics that hinder the stability of potential functions designed for slower dynamics. The situation is further exacerbated because parameters fine-tuned in the lab are often not appropriate for achieving satisfactory performance in the field. In this work, we investigate the problem of dynamically tuning local interactions in a swarm of aerial vehicles with the objective of tackling the stability-velocity trade-off. We let the focal agent autonomously and adaptively decide which source of local information to prioritize and to which degree (for example, which neighbor interaction or goal direction). The main novelty of the proposed method lies in a Gaussian kernel used to regulate the importance of each element in the swarm scheme. Each agent in the swarm relies on this mechanism at every algorithmic iteration and uses it to tune the final output velocities. We show that the presented approach can achieve cohesive flocking while at the same time navigating through a set of way-points at speed. In addition, the proposed method makes it possible to achieve other desired field properties, such as automatic group splitting and joining over long distances. These properties have been empirically demonstrated in an extensive set of simulated and field experiments, in communication-full and communication-less scenarios. Moreover, the presented approach has proven to be robust to failures, intermittent communication, and noisy perceptions.
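The exact kernel formulation is not given in the abstract; the following sketch shows one illustrative way a Gaussian kernel could weight competing velocity contributions (neighbour interactions versus a goal direction) before they are blended into a commanded velocity. All names, reference distances, and limits are assumptions.

```python
import numpy as np

def gaussian_weight(value, preferred, sigma):
    """Importance weight, peaked when `value` matches `preferred`."""
    return np.exp(-((value - preferred) ** 2) / (2.0 * sigma ** 2))

def commanded_velocity(self_pos, neighbour_pos, goal_dir,
                       d_ref=1.5, sigma_d=0.5, v_max=2.0):
    """Blend neighbour interactions and goal seeking with kernel-based weights."""
    v = np.asarray(goal_dir, dtype=float).copy()       # goal term (unit weight here)
    for p in neighbour_pos:
        offset = np.asarray(p, dtype=float) - np.asarray(self_pos, dtype=float)
        dist = np.linalg.norm(offset)
        if dist < 1e-9:
            continue
        w = gaussian_weight(dist, d_ref, sigma_d)       # de-emphasise out-of-range neighbours
        v += w * (offset / dist) * (dist - d_ref)       # attract beyond d_ref, repel inside
    speed = np.linalg.norm(v)
    return v if speed < 1e-9 else v / speed * min(speed, v_max)

# Example call with two neighbours and a unit goal direction.
v_cmd = commanded_velocity([0.0, 0.0], [[1.0, 0.2], [0.3, -1.8]], [1.0, 0.0])
```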
ABSTRACT
Evolutionary robot systems are usually affected by the properties of the environment only indirectly, through selection. In this paper, we present and investigate a system where the environment also has a direct effect, through regulation. We propose a novel robot encoding method where a genotype encodes multiple possible phenotypes, and the incarnation of a robot depends on the environmental conditions experienced at a given moment of its life. This means that the morphology, controller, and behavior of a robot can change according to the environment. Importantly, this process of development can happen at any moment of a robot's lifetime, according to its experienced environmental stimuli. We provide an empirical proof of concept, and the analysis of the experimental results shows that environmental regulation improves adaptation (task performance) while leading to different evolved morphologies, controllers, and behavior.
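As an illustration of the encoding idea only (not the paper's actual representation), the sketch below stores several candidate phenotypes in one genotype and expresses the one matching the environmental stimulus measured at a given moment; the stimulus, variant labels, and threshold are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Phenotype:
    morphology: str
    controller_params: list

@dataclass
class RegulatedGenotype:
    variants: dict = field(default_factory=dict)   # condition label -> Phenotype

    def express(self, light_level: float) -> Phenotype:
        """Development step: pick the phenotype matching the current stimulus."""
        key = "bright" if light_level > 0.5 else "dark"   # hypothetical threshold
        return self.variants[key]

genotype = RegulatedGenotype({
    "bright": Phenotype("long-limbed crawler", [0.2, 0.8, 0.5]),
    "dark":   Phenotype("compact roller",      [0.9, 0.1, 0.4]),
})
body_now = genotype.express(light_level=0.7)   # re-evaluated whenever stimuli change
```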
ABSTRACT
The field of Evolutionary Robotics addresses the challenge of automatically designing robotic systems. Furthermore, the field can also support biological investigations related to evolution. In this paper, we evolve (simulated) modular robots under diverse environmental conditions and analyze the influences that these conditions have on the evolved morphologies, controllers, and behavior. To this end, we introduce a set of morphological, controller, and behavioral descriptors that together span a multi-dimensional trait space. Using these descriptors, we demonstrate how changes in environmental conditions induce different levels of differentiation in this trait space. Our main goal is to gain deeper insights into the effect of the environment on a robotic evolutionary process.
Subjects
Environment, Robotics, Algorithms, Phenotype, Seasons
ABSTRACT
While direct local communication is very important for the organization of robot swarms, so far it has mostly been used for relatively simple tasks such as signaling robots' preferences or states. Inspired by the emergence of meaning found in natural languages, more complex communication skills could allow robot swarms to tackle novel situations in ways that may not be a priori obvious to the experimenter. This would pave the way for the design of robot swarms with higher autonomy and adaptivity. The state of the art regarding the emergence of communication in robot swarms has mostly focused on offline evolutionary approaches, which showed that signaling and communication can emerge spontaneously even when not explicitly promoted. However, these approaches do not lead to complex, language-like communication skills, and signals are tightly linked to environmental and/or sensory-motor states that are specific to the task for which communication was evolved. To move beyond current practice, we advocate an approach to emergent communication in robot swarms based on language games. Thanks to language games, previous studies showed that cultural self-organization, rather than biological evolution, can be responsible for the complexity and expressive power of language. We suggest that swarm robotics can be an ideal test-bed to advance research on the emergence of language-like communication. The latter can be key to providing robot swarms with additional skills to support self-organization and adaptivity, enabling the design of more complex collective behaviors.
ABSTRACT
We study how the structure of the interaction network affects self-organized collective motion in two minimal models of self-propelled agents: the Vicsek model and the Active-Elastic (AE) model. We perform simulations with topologies that interpolate between a nearest-neighbour network and random networks with different degree distributions to analyse the relationship between the interaction topology and the resilience to noise of the ordered state. For the Vicsek case, we find that a higher fraction of random connections with homogeneous or power-law degree distribution increases the critical noise, and thus the resilience to noise, as expected due to small-world effects. Surprisingly, for the AE model, a higher fraction of random links with power-law degree distribution can decrease this resilience, despite most links being long-range. We explain this effect through a simple mechanical analogy, arguing that the larger presence of agents with few connections contributes localized low-energy modes that are easily excited by noise, thus hindering the collective dynamics. These results demonstrate the strong effects of the interaction topology on self-organization. Our work suggests potential roles of the interaction network structure in biological collective behaviour and could also help improve decentralized swarm robotics control and other distributed consensus systems.
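For readers unfamiliar with the baseline model, here is a minimal heading-only Vicsek update on an arbitrary fixed interaction network, together with the polarisation order parameter used to gauge resilience to noise; the example network, noise form, and parameter values are illustrative, not those of the study.

```python
import numpy as np

def vicsek_polarisation(adjacency, steps=500, eta=0.3, seed=0):
    """Heading-only Vicsek update on a fixed interaction network.

    adjacency -- (N, N) matrix of who interacts with whom (self included)
    eta       -- angular noise amplitude; the polarisation that survives a
                 given eta reflects the topology's resilience to noise.
    """
    rng = np.random.default_rng(seed)
    adj = np.asarray(adjacency, dtype=float)
    n = adj.shape[0]
    theta = rng.uniform(-np.pi, np.pi, n)
    for _ in range(steps):
        # Average neighbours' heading vectors, then perturb with angular noise.
        mean_x = adj @ np.cos(theta)
        mean_y = adj @ np.sin(theta)
        theta = np.arctan2(mean_y, mean_x) + eta * rng.uniform(-np.pi, np.pi, n)
    return np.abs(np.mean(np.exp(1j * theta)))   # 1 = fully ordered, ~0 = disordered

# Example: a ring where each agent listens to itself and its two lattice neighbours.
N = 100
A = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
A[0, N - 1] = A[N - 1, 0] = 1.0
print(vicsek_polarisation(A))
```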
Subjects
Interpersonal Relations, Motion (Physics)
ABSTRACT
Self-organized collective coordinated behaviour is an impressive phenomenon, observed in a variety of natural and artificial systems, in which coherent global structures or dynamics emerge from local interactions between individual parts. If the degree of collective integration of a system does not depend on its size, its robustness and adaptivity are typically increased, and we refer to it as scale-invariant. In this review, we first identify three main types of self-organized scale-invariant systems: scale-invariant spatial structures, scale-invariant topologies, and scale-invariant dynamics. We then provide examples of scale invariance from different domains in science, describe their origins and main features, and discuss potential challenges and approaches for designing and engineering artificial systems with scale-invariant properties.
Subjects
Theoretical Models
ABSTRACT
In this paper, we propose a collective decision-making method for swarms of robots. The method enables a robot swarm to select, from a set of possible actions, the one that has the fastest mean execution time. By means of positive feedback, the method achieves consensus on the fastest action. The novelty of our method is that it allows robots to collectively find consensus on the fastest action without explicitly measuring the execution times of all available actions. We study two analytical models of the decision-making method in order to understand the dynamics of the consensus-formation process. Moreover, we verify the applicability of the method in a real swarm robotics scenario. To this end, we conduct three sets of experiments that show that a robotic swarm can collectively select the shorter of two paths. Finally, we use a Monte Carlo simulation model to study and predict the influence of different parameters on the method.
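As a toy illustration of the positive-feedback principle (not the paper's exact algorithm or its analytical models), the following Monte Carlo sketch lets each robot re-advertise its opinion every time it completes an action and then copy a recently heard opinion; faster actions are advertised more often, so consensus typically drifts toward the fastest action without any robot explicitly measuring execution times. All parameters are assumptions.

```python
import random

def simulate(n_robots=20, durations=(5.0, 8.0), steps=4000, dt=0.1, seed=0):
    """Monte Carlo sketch of opinion-based selection of the fastest action.

    Each robot repeats its currently preferred action; on completion it
    advertises its opinion and then copies a recently advertised opinion.
    """
    rng = random.Random(seed)
    opinion = [rng.randrange(len(durations)) for _ in range(n_robots)]
    remaining = [durations[o] for o in opinion]
    recent = list(opinion)                       # pool of recently heard opinions
    for _ in range(steps):
        for i in range(n_robots):
            remaining[i] -= dt
            if remaining[i] <= 0:                # action finished
                recent.append(opinion[i])        # advertise own opinion
                recent = recent[-n_robots:]      # keep a bounded, recent pool
                opinion[i] = rng.choice(recent)  # adopt a heard opinion (positive feedback)
                remaining[i] = durations[opinion[i]]
    return opinion

print(simulate())   # typically converges toward action 0, the 5.0 s action
```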