Approximation of Markov Processes by Lower Dimensional Processes

Ioannis Tzortzis*, Charalambos D. Charalambous, Themistoklis Charalambous, Christoforos N. Hadjicostis, Mikael Johansson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


In this paper, we investigate the problem of aggregating a given finite-state Markov process by another process with fewer states. The aggregation utilizes the total variation distance as a measure of discrepancy between the Markov process and the aggregate process, and aims to maximize the entropy of the invariant probability of the aggregate process, subject to a fidelity constraint described by a total variation distance ball. An iterative algorithm is presented to compute the invariant distribution of the aggregate process as a function of the invariant distribution of the Markov process. It turns out that this aggregation-based approximation yields an optimal aggregate process that is a hidden Markov process, and the optimal solution exhibits a water-filling behavior. Finally, the algorithm is applied to specific examples to illustrate the methodology and the properties of the approximations.
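The core optimization step described in the abstract — maximizing entropy subject to a total variation fidelity constraint — can be illustrated with a minimal sketch. The helper below is a hypothetical, simplified stand-in for the paper's iterative water-filling algorithm: it merely moves a given invariant distribution toward the uniform distribution (the unconstrained entropy maximizer) until it reaches the boundary of the total variation ball. Since entropy is concave and maximized at the uniform distribution, entropy is nondecreasing along this path, so the result is a feasible entropy-improving approximation, not the paper's exact optimizer.

```python
import math

def entropy(p):
    """Shannon entropy in nats; zero-probability terms contribute nothing."""
    return -sum(x * math.log(x) for x in p if x > 0)

def tv(p, q):
    """Total variation distance taken as the l1 distance sum |p_i - q_i|."""
    return sum(abs(a - b) for a, b in zip(p, q))

def max_entropy_in_tv_ball(mu, radius):
    """Feasible-direction sketch: move mu toward the uniform distribution
    while staying inside the ball {nu : sum |nu_i - mu_i| <= radius}."""
    n = len(mu)
    uniform = [1.0 / n] * n
    d = tv(uniform, mu)
    if d <= radius:
        return uniform            # the uniform (max-entropy) point is feasible
    t = radius / d                # convex-combination step that hits the boundary
    return [(1 - t) * m + t * u for m, u in zip(mu, uniform)]
```

For example, with `mu = [0.7, 0.2, 0.1]` and a small radius, the returned distribution flattens the largest mass and raises the smallest ones, which is the qualitative water-filling behavior the abstract refers to.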

Original language: English
Title of host publication: 2014 IEEE 53rd Annual Conference on Decision and Control (CDC)
Number of pages: 6
ISBN (Electronic): 978-1-4673-6090-6
Publication status: Published - 2014
MoE publication type: A4 Article in a conference publication
Event: IEEE Conference on Decision and Control - Los Angeles, United States
Duration: 15 Dec 2014 - 17 Dec 2014
Conference number: 53


Conference: IEEE Conference on Decision and Control
Abbreviated title: CDC
City: Los Angeles


