Online Markov decoding: Lower bounds and near-optimal approximation algorithms

Vikas K. Garg, Tamar Pichkhadze

Research output: Contribution to journal › Conference article › Scientific › peer-review

Abstract

We resolve the fundamental problem of online decoding with general nth-order ergodic Markov chain models. Specifically, we provide deterministic and randomized algorithms whose performance is close to that of the optimal offline algorithm even when latency is small. Our algorithms admit efficient implementation via dynamic programs, and readily extend to (adversarial) non-stationary or time-varying settings. We also establish lower bounds for online methods under latency constraints in both deterministic and randomized settings, and show that no online algorithm can perform significantly better than our algorithms. To our knowledge, our work is the first to analyze general Markov chain decoding under hard constraints on latency. We provide strong empirical evidence to illustrate the potential impact of our work in applications such as gene sequencing.
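The abstract does not spell out the algorithms themselves, but the offline-optimal baseline it compares against is, for first-order chains, the classical Viterbi dynamic program. As background, a minimal sketch might look like the following; all names and the toy model below are illustrative, not taken from the paper:

```python
import math

def viterbi(obs, states, log_init, log_trans, log_emit):
    """Offline Viterbi MAP decoding for a first-order hidden Markov model.

    Sees the whole observation sequence before committing to any state,
    i.e. the unbounded-latency setting the online algorithms are measured
    against. Returns the most likely state sequence.
    """
    # dp[s] = best log-probability of any state path ending in state s
    dp = {s: log_init[s] + log_emit[s][obs[0]] for s in states}
    back = []  # back[t][s] = best predecessor of s at step t+1
    for o in obs[1:]:
        ptr, nxt = {}, {}
        for s in states:
            prev, score = max(
                ((p, dp[p] + log_trans[p][s]) for p in states),
                key=lambda x: x[1],
            )
            nxt[s] = score + log_emit[s][o]
            ptr[s] = prev
        dp, back = nxt, back + [ptr]
    # Backtrack from the best final state.
    last = max(dp, key=dp.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

An online decoder with latency L, by contrast, must output the state at position t after seeing only observations up to t + L; the paper's contribution is bounding how much this restriction can cost relative to the full-sequence optimum above.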

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 32
Publication status: Published - 2019
MoE publication type: A4 Article in a conference publication
Event: Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 8 Dec 2019 - 14 Dec 2019
Conference number: 33
https://neurips.cc

