A Scale-Independent Multi-Objective Reinforcement Learning with Convergence Analysis

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


Many sequential decision-making problems require optimizing several objectives that may conflict with one another. The conventional way to handle such a multitask problem is to form a scalar objective function as a linear combination of the individual objectives. However, when the conflicting objectives have different scales, this method requires a trial-and-error search to find proper weights for the combination, and in most cases it cannot guarantee a Pareto-optimal solution. In this paper, we develop a single-agent, scale-independent multi-objective reinforcement learning algorithm based on the Advantage Actor-Critic (A2C) algorithm. We then carry out a convergence analysis of the devised multi-objective algorithm, providing a convergence-in-mean guarantee. Finally, we evaluate the proposed algorithm experimentally on a multitask problem. Simulation results show the superiority of the developed multi-objective A2C approach over the single-objective algorithm.
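The scale problem the abstract describes can be illustrated with a minimal sketch (this is an illustrative example, not the paper's algorithm: the reward values, weights, and mean-based normalization below are all assumptions). With a fixed linear scalarization, the larger-scale objective dominates the combined reward unless the weights are hand-tuned; normalizing each objective by its own scale is one simple way to make the combination scale-independent.

```python
import numpy as np

# Hypothetical rewards for two conflicting objectives on very different
# scales (values chosen for illustration only).
r1 = np.array([0.8, 0.6, 0.9])          # e.g. task reward, scale ~1
r2 = np.array([-120.0, -80.0, -150.0])  # e.g. resource cost, scale ~100

# Naive linear scalarization with equal weights: r2 dominates entirely,
# so the agent effectively optimizes only the cost objective.
scalar_naive = 0.5 * r1 + 0.5 * r2

# Dividing each objective by an estimate of its magnitude (here, the
# mean absolute value) before combining restores balance between them.
scale1, scale2 = np.abs(r1).mean(), np.abs(r2).mean()
scalar_scaled = 0.5 * (r1 / scale1) + 0.5 * (r2 / scale2)

print(scalar_naive)   # dominated by r2
print(scalar_scaled)  # both objectives contribute comparably
```

In practice the weights in the naive version would have to be re-tuned whenever the reward scales change, which is exactly the trial-and-error burden a scale-independent formulation avoids.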
Original language: English
Title of host publication: 2023 62nd IEEE Conference on Decision and Control (CDC)
Number of pages: 8
ISBN (Print): 979-8-3503-0125-0
Publication status: Published - 15 Dec 2023
MoE publication type: A4 Conference publication
Event: IEEE Conference on Decision and Control - Marina Bay Sands, Singapore, Singapore
Duration: 13 Dec 2023 - 15 Dec 2023
Conference number: 62

Publication series

Name: Proceedings of the IEEE Conference on Decision & Control
ISSN (Electronic): 2576-2370


Conference: IEEE Conference on Decision and Control
Abbreviated title: CDC


Keywords

  • Measurement
  • Simulation
  • Decision making
  • Reinforcement learning
  • Quality of service
  • Linear programming
  • Optimization


