Real-world humanoid locomotion with reinforcement learning.
Radosavovic, Ilija; Xiao, Tete; Zhang, Bike; Darrell, Trevor; Malik, Jitendra; Sreenath, Koushil.
Affiliations
  • Radosavovic I; University of California, Berkeley CA, USA.
  • Xiao T; University of California, Berkeley CA, USA.
  • Zhang B; University of California, Berkeley CA, USA.
  • Darrell T; University of California, Berkeley CA, USA.
  • Malik J; University of California, Berkeley CA, USA.
  • Sreenath K; University of California, Berkeley CA, USA.
Sci Robot; 9(89): eadi9579, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38630806
ABSTRACT
Humanoid robots that can autonomously operate in diverse environments have the potential to help address labor shortages in factories, assist the elderly at home, and colonize new planets. Although classical controllers for humanoid robots have shown impressive results in a number of settings, they are challenging to generalize and adapt to new environments. Here, we present a fully learning-based approach for real-world humanoid locomotion. Our controller is a causal transformer that takes the history of proprioceptive observations and actions as input and predicts the next action. We hypothesized that the observation-action history contains useful information about the world that a powerful transformer model can use to adapt its behavior in context, without updating its weights. We trained our model with large-scale model-free reinforcement learning on an ensemble of randomized environments in simulation and deployed it in the real world zero-shot. Our controller could walk over various outdoor terrains, was robust to external disturbances, and could adapt in context.
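The policy described above maps a history of proprioceptive observations and actions to the next action via causal self-attention. The following is a minimal illustrative sketch of that interface, not the authors' trained model: all dimensions are arbitrary placeholders, weights are random rather than learned, and a single attention head stands in for the full transformer.

```python
import numpy as np

# Hypothetical dimensions, chosen for illustration only.
OBS_DIM, ACT_DIM, D_MODEL, HISTORY = 10, 4, 32, 16

rng = np.random.default_rng(0)

# Placeholder "learned" projections (random here; trained with RL in the paper).
W_obs = rng.normal(0, 0.1, (OBS_DIM, D_MODEL))
W_act = rng.normal(0, 0.1, (ACT_DIM, D_MODEL))
W_q = rng.normal(0, 0.1, (D_MODEL, D_MODEL))
W_k = rng.normal(0, 0.1, (D_MODEL, D_MODEL))
W_v = rng.normal(0, 0.1, (D_MODEL, D_MODEL))
W_out = rng.normal(0, 0.1, (D_MODEL, ACT_DIM))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def policy(obs_history, act_history):
    """Predict the next action from an interleaved (obs, act) token history."""
    # Embed observations and actions into a shared token space, then
    # interleave them as o_1, a_1, o_2, a_2, ..., o_T.
    obs_tok = obs_history @ W_obs            # (T, D_MODEL)
    act_tok = act_history @ W_act            # (T-1, D_MODEL)
    tokens = np.empty((len(obs_tok) + len(act_tok), D_MODEL))
    tokens[0::2] = obs_tok
    tokens[1::2] = act_tok

    # Single-head causal self-attention: each token attends only to itself
    # and earlier tokens, keeping the policy autoregressive.
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    scores = q @ k.T / np.sqrt(D_MODEL)
    scores[np.triu(np.ones_like(scores, dtype=bool), k=1)] = -np.inf
    attended = softmax(scores) @ v

    # The representation of the latest observation token predicts a_T.
    return attended[-1] @ W_out

obs_hist = rng.normal(size=(HISTORY, OBS_DIM))
act_hist = rng.normal(size=(HISTORY - 1, ACT_DIM))
next_action = policy(obs_hist, act_hist)
```

Because the history enters the policy as context, a trained model of this shape can condition its output on recent dynamics (e.g. terrain-induced disturbances) without any weight updates, which is the in-context adaptation the abstract refers to.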
Subjects

Full text: 1 | Database: MEDLINE | Main subject: Robotics | Language: English | Publication year: 2024 | Document type: Article