The Actor-Dueling-Critic Method for Reinforcement Learning

Research output: Contribution to journal › Article › Scientific › peer-review

Research units

  • Harbin Engineering University

Abstract

Model-free reinforcement learning is a powerful and efficient machine-learning paradigm that is widely used in the robotic control domain. In the reinforcement learning setting, value-function methods learn policies by maximizing the state-action value (Q-value), but they suffer from inaccurate Q estimation, which leads to poor performance in stochastic environments. To mitigate this issue, we present an approach based on the actor-critic framework: in the critic branch, we modify the way the Q-value is estimated by introducing an advantage function, as in the dueling network, which estimates the action-advantage value. Because the action-advantage value is independent of state and environment noise, we use it as a fine-tuning factor for the estimated Q-value. We refer to this approach as the actor-dueling-critic (ADC) network, since the framework is inspired by the dueling network. Furthermore, we redesign the dueling part of the critic branch so that it adapts to continuous action spaces. The method was tested on Gym classic control environments and an obstacle-avoidance environment, and we designed a noisy environment to test training stability. The results indicate that the ADC approach is more stable and converges faster than the DDPG method in noisy environments.
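To illustrate the critic structure described in the abstract, below is a minimal sketch of a dueling-style critic for continuous actions. It assumes the critic splits into a state-value stream V(s) and an action-advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a); the paper's actual adaptation of the dueling network to continuous action spaces may differ, and all names and layer sizes here are illustrative rather than taken from the article.

```python
# Sketch only: a dueling-style critic for continuous actions (PyTorch).
# Assumed form: Q(s, a) = V(s) + A(s, a), with A(s, a) conditioned on the
# actor's action. This is not the article's exact architecture.
import torch
import torch.nn as nn


class DuelingCritic(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        # Shared state encoder
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # State-value stream: V(s)
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        # Action-advantage stream: A(s, a)
        self.advantage = nn.Sequential(nn.Linear(hidden + action_dim, hidden),
                                       nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        h = self.encoder(state)
        v = self.value(h)                                   # V(s)
        a = self.advantage(torch.cat([h, action], dim=-1))  # A(s, a)
        return v + a                                        # Q(s, a) estimate


# Usage example with hypothetical dimensions (e.g. a 1-D control task):
# critic = DuelingCritic(state_dim=3, action_dim=1)
# q = critic(torch.randn(32, 3), torch.randn(32, 1))
```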

Details

Original language: English
Article number: 1547
Pages (from-to): 1-20
Number of pages: 20
Journal: Sensors (Basel, Switzerland)
Volume: 19
Issue number: 7
Publication status: Published - 1 Apr 2019
MoE publication type: A1 Journal article-refereed

Research areas

  • advantage, continuous control, DDPG, dueling network, reinforcement learning


ID: 33412549