TY - JOUR
T1 - Adapting behavior via intrinsic reward
T2 - A survey and empirical study
AU - Linke, Cam
AU - Ady, Nadia M.
AU - White, Martha
AU - Degris, Thomas
AU - White, Adam
N1 - Funding Information:
We would like to thank our generous funders for supporting this work, specifically the NSERC Discovery grant program and CIFAR for funding the Alberta Machine Intelligence Institute and the CIFAR Canada AI Chairs program. We would like to thank Rich Sutton for his ideas and insights that shaped this project early on. We would also like to thank Andrew Jacobsen, Andrew Patterson and Tor Lattimore for helpful comments on the text. Finally, we would like to thank our colleagues at the Reinforcement Learning and Artificial Intelligence Lab at the University of Alberta and DeepMind Alberta for providing an exciting and stimulating environment for research.
Publisher Copyright:
© 2020 AI Access Foundation. All rights reserved.
PY - 2020/12/14
Y1 - 2020/12/14
N2 - Learning about many things can provide numerous benefits to a reinforcement learning system. For example, learning many auxiliary value functions, in addition to optimizing the environmental reward, appears to improve both exploration and representation learning. The question we tackle in this paper is how to sculpt the stream of experience—how to adapt the learning system’s behavior—to optimize the learning of a collection of value functions. A simple answer is to compute an intrinsic reward based on the statistics of each auxiliary learner, and use reinforcement learning to maximize that intrinsic reward. Unfortunately, implementing this simple idea has proven difficult, and thus has been the focus of decades of study. It remains unclear which of the many possible measures of learning would work well in a parallel learning setting where environmental reward is extremely sparse or absent. In this paper, we investigate and compare different intrinsic reward mechanisms in a new bandit-like parallel-learning testbed. We discuss the interaction between reward and prediction learners and highlight the importance of introspective prediction learners: those that increase their rate of learning when progress is possible, and decrease when it is not. We provide a comprehensive empirical comparison of 14 different rewards, including well-known ideas from reinforcement learning and active learning. Our results highlight a simple but seemingly powerful principle: intrinsic rewards based on the amount of learning can generate useful behavior, if each individual learner is introspective.
AB - Learning about many things can provide numerous benefits to a reinforcement learning system. For example, learning many auxiliary value functions, in addition to optimizing the environmental reward, appears to improve both exploration and representation learning. The question we tackle in this paper is how to sculpt the stream of experience—how to adapt the learning system’s behavior—to optimize the learning of a collection of value functions. A simple answer is to compute an intrinsic reward based on the statistics of each auxiliary learner, and use reinforcement learning to maximize that intrinsic reward. Unfortunately, implementing this simple idea has proven difficult, and thus has been the focus of decades of study. It remains unclear which of the many possible measures of learning would work well in a parallel learning setting where environmental reward is extremely sparse or absent. In this paper, we investigate and compare different intrinsic reward mechanisms in a new bandit-like parallel-learning testbed. We discuss the interaction between reward and prediction learners and highlight the importance of introspective prediction learners: those that increase their rate of learning when progress is possible, and decrease when it is not. We provide a comprehensive empirical comparison of 14 different rewards, including well-known ideas from reinforcement learning and active learning. Our results highlight a simple but seemingly powerful principle: intrinsic rewards based on the amount of learning can generate useful behavior, if each individual learner is introspective.
U2 - 10.1613/JAIR.1.12087
DO - 10.1613/JAIR.1.12087
M3 - Review Article
AN - SCOPUS:85099406083
SN - 1076-9757
VL - 69
SP - 1287
EP - 1332
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
ER -