Quantum bandit with amplitude amplification exploration in an adversarial environment

Byungjin Cho, Yu Xiao, Pan Hui, Daoyi Dong

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

The rapid proliferation of learning systems in arbitrarily changing environments demands careful management of the tension between exploration and exploitation. This work proposes a quantum-inspired bandit learning approach for the learning-and-adapting-based offloading problem, in which a client observes and learns the cost of each task offloaded to candidate resource providers, e.g., fog nodes. The approach adopts a new action update strategy and a novel probabilistic action selection rule, inspired by amplitude amplification and the collapse postulate in quantum computation theory. We devise a locally linear mapping between a quantum-mechanical phase in the quantum domain, e.g., a Grover-type search algorithm, and a distilled probability magnitude in the value-based decision-making domain, e.g., an adversarial multi-armed bandit algorithm. Via this mapping, the proposed algorithm generalizes the adjustment of learning weights on favorable and unfavorable actions, and its effectiveness is verified via simulation.
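
The abstract pairs an adversarial multi-armed bandit learner with an amplitude-amplification-inspired action selection rule. The Python sketch below only illustrates that pairing and is not the paper's algorithm: the class name AmplitudeAmplifiedBandit, the use of an EXP3-style base learner, the choice to "mark" the highest-weight arm in the Grover-style step, the gamma value, and the random cost model are all assumptions made for this example, and the paper's locally linear phase-to-probability mapping is not reproduced here.

    # Illustrative sketch only (assumptions noted above), not the authors' method.
    import math
    import random

    class AmplitudeAmplifiedBandit:
        """EXP3-style adversarial bandit with a Grover-inspired selection step."""

        def __init__(self, n_arms, gamma=0.1):
            self.n_arms = n_arms
            self.gamma = gamma                 # exploration rate (assumed value)
            self.weights = [1.0] * n_arms      # one learning weight per fog node

        def _probabilities(self):
            # Standard EXP3 mixture of weight-proportional and uniform exploration.
            total = sum(self.weights)
            base = [(1.0 - self.gamma) * w / total + self.gamma / self.n_arms
                    for w in self.weights]
            # One Grover-style iteration on the "amplitudes" sqrt(p): flip the sign
            # of the currently favoured arm, then invert every amplitude about the
            # mean. This tends to amplify that arm while its probability is still
            # small and to damp it once it dominates.
            amps = [math.sqrt(p) for p in base]
            best = max(range(self.n_arms), key=lambda i: self.weights[i])
            amps[best] = -amps[best]
            mean = sum(amps) / self.n_arms
            amps = [2.0 * mean - a for a in amps]
            norm = sum(a * a for a in amps)    # stays ~1; renormalise for safety
            post = [a * a / norm for a in amps]
            # Keep a uniform floor so no arm's selection probability collapses to zero.
            return [(1.0 - self.gamma) * p + self.gamma / self.n_arms for p in post]

        def select(self):
            # Sample one arm, mimicking measurement collapse onto a single action.
            probs = self._probabilities()
            r, acc = random.random(), 0.0
            for arm, p in enumerate(probs):
                acc += p
                if r <= acc:
                    return arm, probs
            return self.n_arms - 1, probs

        def update(self, arm, cost, probs):
            # Importance-weighted EXP3 update on a cost in [0, 1]: low-cost
            # (favourable) actions gain weight relative to high-cost ones.
            reward = 1.0 - cost
            estimate = reward / probs[arm]
            self.weights[arm] *= math.exp(self.gamma * estimate / self.n_arms)

    # Toy usage: a client repeatedly offloads a task to one of three fog nodes;
    # the observed costs here are random stand-ins, not a real workload model.
    if __name__ == "__main__":
        bandit = AmplitudeAmplifiedBandit(n_arms=3)
        for _ in range(1000):
            arm, probs = bandit.select()
            cost = random.random()
            bandit.update(arm, cost, probs)
        print(bandit.weights)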
Original language: English
Article number: 10136755
Pages (from-to): 311-317
Number of pages: 7
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 36
Issue number: 1
Early online date: 26 May 2023
DOIs
Publication status: Published - 1 Jan 2024
MoE publication type: A1 Journal article-refereed

Keywords

  • Costs
  • Task analysis
  • Decision making
  • Uncertainty
  • Quantum algorithm
  • Qubit
  • Quantum state

