Results 1 - 3 of 3
1.
Nature ; 602(7896): 223-228, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35140384

ABSTRACT

Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans. Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical manoeuvres to pass or block opponents while operating their vehicles at their traction limits [1]. Racing simulations, such as the PlayStation game Gran Turismo, faithfully reproduce the non-linear control challenges of real race cars while also encapsulating the complex multi-agent interactions. Here we describe how we trained agents for Gran Turismo that can compete with the world's best e-sports drivers. We combine state-of-the-art, model-free, deep reinforcement learning algorithms with mixed-scenario training to learn an integrated control policy that combines exceptional speed with impressive tactics. In addition, we construct a reward function that enables the agent to be competitive while adhering to racing's important, but under-specified, sportsmanship rules. We demonstrate the capabilities of our agent, Gran Turismo Sophy, by winning a head-to-head competition against four of the world's best Gran Turismo drivers. By describing how we trained championship-level racers, we demonstrate the possibilities and challenges of using these techniques to control complex dynamical systems in domains where agents must respect imprecisely defined human norms.
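The abstract describes a reward function that balances competitiveness against sportsmanship rules. A minimal sketch of what such a composite reward could look like is below; the signal names (progress_delta, caused_contact, etc.) and weights are illustrative assumptions, not the actual Gran Turismo Sophy implementation.

```python
# Hypothetical composite racing reward: reward course progress (speed)
# while penalizing unsportsmanlike events. All names and weights are
# illustrative assumptions, not the paper's actual reward.

def racing_reward(progress_delta, hit_wall, caused_contact, off_track_time,
                  w_progress=1.0, w_wall=0.5, w_contact=2.0, w_off=0.3):
    """Combine speed (course progress per step) with sportsmanship penalties."""
    reward = w_progress * progress_delta      # reward forward progress
    if hit_wall:
        reward -= w_wall                      # discourage wall contact
    if caused_contact:
        reward -= w_contact                   # penalize at-fault car contact
    reward -= w_off * off_track_time          # penalize time spent off track
    return reward
```

The tension the abstract points to is visible in the weights: a purely speed-driven agent would set the penalty weights to zero, so tuning them against the under-specified sportsmanship rules is where the difficulty lies.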


Subjects
Automobile Driving, Deep Learning, Reinforcement (Psychology), Sports, Video Games, Automobile Driving/standards, Competitive Behavior, Humans, Reward, Sports/standards
2.
Dev Sci ; 27(2): e13449, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37750490

ABSTRACT

What is the optimal penalty for errors in infant skill learning? Behavioral analyses indicate that errors are frequent but trivial as infants acquire foundational skills. In learning to walk, for example, falling is commonplace but appears to incur only a negligible penalty. Behavioral data, however, cannot reveal whether a low penalty for falling is beneficial for learning to walk. Here, we used a simulated bipedal robot as an embodied model to test the optimal penalty for errors in learning to walk. We trained the robot to walk using 12,500 independent simulations on walking paths produced by infants during free play and systematically varied the penalty for falling, a level of precision, control, and magnitude impossible with real infants. When trained with lower penalties for falling, the robot learned to walk farther and better on familiar, trained paths and better generalized its learning to novel, untrained paths. Indeed, zero penalty for errors led to the best performance for both learning and generalization. Moreover, the beneficial effects of a low penalty were stronger for generalization than for learning. Robot simulations corroborate prior behavioral data and suggest that a low penalty for errors helps infants learn foundational skills (e.g., walking, talking, and social interactions) that require immense flexibility, creativity, and adaptability.

RESEARCH HIGHLIGHTS

During infant skill acquisition, errors are commonplace but appear to incur a low penalty; when learning to walk, for example, falls are frequent but trivial. To test the optimal penalty for errors, we trained a simulated robot to walk using real infant paths and systematically manipulated the penalty for falling. Lower penalties in training led to better performance on familiar, trained paths and on novel, untrained paths, and zero penalty was most beneficial. Benefits of a low penalty were stronger for untrained than for trained paths, suggesting that discounting errors facilitates acquiring skills that require immense flexibility and generalization.
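The core manipulation described above, sweeping the penalty for falling while holding the task constant, can be sketched as follows. The reward form and the specific penalty values are assumptions for illustration, not the study's actual training code.

```python
# Illustrative sketch of the fall-penalty manipulation: a walking reward
# that pays for forward progress and subtracts a configurable cost on a
# fall. The functional form and penalty values are assumptions, not the
# paper's implementation.

def walking_reward(distance_gain, fell, fall_penalty):
    """Reward forward progress; subtract a configurable penalty if the robot fell."""
    return distance_gain - (fall_penalty if fell else 0.0)

# Sweep the penalty from harsh to zero, as in the simulations described.
# With a zero penalty, a fall costs nothing, so exploratory (error-prone)
# walking is never discouraged.
rewards_after_fall = [walking_reward(distance_gain=0.5, fell=True,
                                     fall_penalty=p)
                      for p in (10.0, 1.0, 0.0)]
```

Under this sketch, the study's finding is that agents trained at the zero end of the sweep walked farther and generalized better than those trained at the harsh end.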


Subjects
Robotics, Infant, Humans, Accidental Falls, Walking, Learning, Generalization (Psychology)
3.
Front Neurorobot ; 12: 19, 2018.
Article in English | MEDLINE | ID: mdl-29867427

ABSTRACT

Although both infancy and artificial intelligence (AI) researchers are interested in developing systems that produce adaptive, functional behavior, the two disciplines rarely capitalize on their complementary expertise. Here, we used soccer-playing robots to test a central question about the development of infant walking. During natural activity, infants' locomotor paths are immensely varied. They walk along curved, multi-directional paths with frequent starts and stops. Is the variability observed in spontaneous infant walking a "feature" or a "bug"? In other words, is variability beneficial for functional walking performance? To address this question, we trained soccer-playing robots on walking paths generated by infants during free play and tested them in simulated games of "RoboCup." In Tournament 1, we compared the functional performance of a simulated robot soccer team trained on infants' natural paths with teams trained on less varied, geometric paths: straight lines, circles, and squares. Across 1,000 head-to-head simulated soccer matches, the infant-trained team consistently beat all teams trained with less varied walking paths. In Tournament 2, we compared teams trained on different clusters of infant walking paths. The team trained with the most varied combination of path shape, step direction, number of steps, and number of starts and stops outperformed teams trained with less varied paths. This evidence indicates that variety is a crucial feature supporting functional walking performance. More generally, we propose that robotics provides a fruitful avenue for testing hypotheses about infant development; reciprocally, observations of infant behavior may inform research on artificial intelligence.
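The contrast above between varied infant paths and geometric paths hinges on quantifying path variability. One simple proxy, the spread of step-to-step heading changes, can be sketched as below; the metric and the toy paths are illustrative assumptions, not the study's actual training pipeline.

```python
import math

# Hypothetical sketch: measure path variability as the standard deviation
# of step-to-step turn angles. A straight geometric path scores zero; a
# multi-directional, infant-like path scores higher. The paths and metric
# are illustrative, not the study's actual data or code.

def variability(path):
    """Std. dev. of absolute turn angles (radians) along a 2-D path."""
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(path, path[1:])]
    turns = [abs(b - a) for a, b in zip(headings, headings[1:])]
    mean = sum(turns) / len(turns)
    return (sum((t - mean) ** 2 for t in turns) / len(turns)) ** 0.5

straight = [(float(i), 0.0) for i in range(10)]           # geometric path
zigzag = [(float(i), float(i % 3) - 1.0) for i in range(10)]  # varied path
```

By this measure the straight line has zero variability while the zigzag does not, mirroring the geometric-versus-infant contrast that drove the tournament comparisons.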
