Approximation of Markov Processes by Lower Dimensional Processes

Ioannis Tzortzis*, Charalambos D. Charalambous, Themistoklis Charalambous, Christoforos N. Hadjicostis, Mikael Johansson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

Abstract

In this paper, we investigate the problem of aggregating a given finite-state Markov process by another process with fewer states. The aggregation utilizes the total variation distance as a measure of discrimination between the Markov process and the aggregate process, and aims to maximize the entropy of the invariant distribution of the aggregate process, subject to a fidelity constraint described by a total variation distance ball. An iterative algorithm is presented to compute the invariant distribution of the aggregate process as a function of the invariant distribution of the Markov process. It turns out that this approximation method via aggregation leads to an optimal aggregate process which is a hidden Markov process, and that the optimal solution exhibits a water-filling behavior. Finally, the algorithm is applied to specific examples to illustrate the methodology and the properties of the approximations.
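To give a feel for the water-filling behavior mentioned in the abstract, the following is a minimal, illustrative sketch of the underlying extremum problem: maximizing the entropy of a distribution over a total variation ball centered at a given distribution. This is not the paper's iterative aggregation algorithm; the function name, the bisection approach, and the clipping-based solution form are assumptions for illustration. Entropy is maximized by moving mass toward the uniform distribution, which raises small probabilities to a lower water level and lowers large probabilities to an upper water level.

```python
def max_entropy_in_tv_ball(mu, radius, iters=100):
    """Max-entropy distribution nu with TV(nu, mu) <= radius (sketch).

    Water-filling: probabilities below a lower level `a` are raised to
    `a`, probabilities above an upper level `b` are lowered to `b`. The
    levels are found by bisection so that exactly `radius` of mass is
    added below and removed above, giving total variation = radius.
    """
    n = len(mu)
    uniform = 1.0 / n
    # If the ball already contains the uniform distribution, it is optimal.
    if radius >= 0.5 * sum(abs(p - uniform) for p in mu):
        return [uniform] * n
    # Lower water level a: sum of max(a - mu_i, 0) == radius.
    lo, hi = 0.0, uniform
    for _ in range(iters):
        a = 0.5 * (lo + hi)
        if sum(max(a - p, 0.0) for p in mu) < radius:
            lo = a
        else:
            hi = a
    # Upper water level b: sum of max(mu_i - b, 0) == radius.
    lo, hi = uniform, 1.0
    for _ in range(iters):
        b = 0.5 * (lo + hi)
        if sum(max(p - b, 0.0) for p in mu) > radius:
            lo = b
        else:
            hi = b
    # Clip each probability to the band [a, b]; mass is conserved
    # because the amount added below equals the amount removed above.
    return [min(max(p, a), b) for p in mu]
```

For example, `max_entropy_in_tv_ball([0.7, 0.2, 0.1], 0.1)` returns approximately `[0.6, 0.2, 0.2]`: the largest probability is lowered by the full budget and the smallest is raised by the same amount.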

Original language: English
Title of host publication: 2014 IEEE 53rd Annual Conference on Decision and Control (CDC)
Publisher: IEEE
Pages: 4441-4446
Number of pages: 6
ISBN (Electronic): 978-1-4673-6090-6
DOIs
Publication status: Published - 2014
MoE publication type: A4 Article in a conference publication
Event: IEEE Conference on Decision and Control - Los Angeles, United States
Duration: 15 Dec 2014 - 17 Dec 2014
Conference number: 53

Conference

Conference: IEEE Conference on Decision and Control
Abbreviated title: CDC
Country: United States
City: Los Angeles
Period: 15/12/2014 - 17/12/2014

Keywords

  • AGGREGATION
