Abstract
Efficient algorithms for 3D character control in continuous control settings remain an open problem despite the remarkable recent advances in the field. We present a sampling-based model-predictive controller that takes the form of a Monte Carlo tree search (MCTS). The tree search utilizes information from multiple sources, including two machine learning models. This allows rapid development of complex skills such as 3D humanoid locomotion with fewer than a million simulation steps, in less than a minute of computing on a modest personal computer. We demonstrate locomotion of 3D characters with varying topologies under disturbances such as heavy projectile hits and abruptly changing target directions. In this paper we also present a new way to combine information from the various sources so that a minimal amount of information is lost. We furthermore extend the neural network involved in the algorithm to represent stochastic policies. Our approach yields a robust control algorithm that is easy to use. While learning, the algorithm runs in near real time, and after learning the sampling budget can be reduced for real-time operation.
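To make the abstract's high-level description more concrete, the following is a minimal, hypothetical sketch of a sampling-based model-predictive controller organized as a Monte Carlo tree search over continuous actions, replanned at every control step. It is an illustration of the general idea only, not the paper's implementation: the dynamics (`simulate`), reward, action proposal (`propose_action`), and all constants are placeholder assumptions, whereas the paper draws on a physics simulator and learned models (including a neural network policy) to guide the sampling.

```python
# Hypothetical sketch of receding-horizon MPC via Monte Carlo tree search.
# All functions and constants below are illustrative placeholders.
import math
import random

ACTION_DIM = 3      # size of the control vector (assumed for illustration)
HORIZON = 15        # planning horizon in simulation steps
ITERATIONS = 60     # tree-search iterations per control step (sampling budget)
MAX_CHILDREN = 5    # sampled child actions per node (simple progressive widening)

def simulate(state, action):
    """Placeholder one-step dynamics and reward; a physics engine goes here."""
    nxt = [s + 0.05 * a for s, a in zip(state, action)]
    return nxt, -sum(x * x for x in nxt)        # toy reward: stay near the origin

def propose_action(state):
    """Placeholder proposal; a learned policy could bias this distribution."""
    return [random.gauss(0.0, 0.5) for _ in range(ACTION_DIM)]

class Node:
    def __init__(self, state, reward=0.0):
        self.state, self.reward = state, reward  # reward of the incoming edge
        self.children = []                       # list of (action, child)
        self.visits, self.value = 0, 0.0         # visit count, mean return

def ucb(parent, c=1.0):
    """Pick the child maximizing mean return plus an exploration bonus."""
    return max(parent.children,
               key=lambda ac: float("inf") if ac[1].visits == 0 else
               ac[1].value + c * math.sqrt(math.log(parent.visits) / ac[1].visits))

def plan_step(root_state):
    """One planning step: grow a search tree, return the best first action."""
    root = Node(root_state)
    for _ in range(ITERATIONS):
        node, path, ret, depth = root, [root], 0.0, 0
        # Selection: descend through fully widened nodes via UCB.
        while depth < HORIZON and len(node.children) >= MAX_CHILDREN:
            _, node = ucb(node)
            path.append(node)
            ret += node.reward
            depth += 1
        # Expansion: add one newly sampled action, then roll out to the horizon.
        if depth < HORIZON:
            action = propose_action(node.state)
            nxt, r = simulate(node.state, action)
            node.children.append((action, Node(nxt, r)))
            node = node.children[-1][1]
            path.append(node)
            ret, depth, state = ret + r, depth + 1, node.state
            while depth < HORIZON:
                state, r = simulate(state, propose_action(state))
                ret += r
                depth += 1
        # Backpropagation: update the running mean return along the visited path.
        for n in path:
            n.visits += 1
            n.value += (ret - n.value) / n.visits
    # Execute the first action of the most-visited root branch, then replan.
    return max(root.children, key=lambda ac: ac[1].visits)[0]

if __name__ == "__main__":
    state = [1.0, -0.5, 0.3]
    for _ in range(10):                      # receding-horizon control loop
        state, _ = simulate(state, plan_step(state))
    print("final state:", state)
```

The sketch also reflects the abstract's last point: the per-step sampling budget (`ITERATIONS`) can be lowered once learned models make the proposals more informative, trading planning effort for real-time operation.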
| Original language | English |
|---|---|
| Pages (from-to) | 2540-2553 |
| Journal | IEEE Transactions on Visualization and Computer Graphics |
| Volume | 25 |
| Issue number | 8 |
| Early online date | 29 Jun 2018 |
| DOIs | |
| Publication status | Published - 2 Jul 2018 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- Continuous Control
- Learning (artificial intelligence)
- Monte Carlo methods
- Monte Carlo Tree Search
- Neural networks
- Planning
- Predictive models
- Real-time systems
- Reinforcement Learning
- Three-dimensional displays