
A distributed dynamical scheme for fastest mixing Markov chains

American Control Conference, 2009

Zavlanos, M.M., Koditschek, D.E., Pappas, G.J.
University of Pennsylvania

Full PDF | Penn Scholarly Commons | IEEE Xplore

Abstract
This paper introduces the problem of determining through distributed consensus the fastest mixing Markov chain with a desired sparsity pattern. In contrast to the centralized optimization-based problem formulation, we develop a novel distributed relaxation by constructing a dynamical system over the cross product of an appropriately patterned set of stochastic matrices. In particular, we define a probability distribution over the set of such patterned stochastic matrices and associate an agent with a random matrix drawn from this distribution. Under the assumption that the network of agents is connected, we employ consensus to achieve agreement of all agents regardless of their initial states. For sufficiently many agents, the law of large numbers implies that the asymptotic consensus limit converges to the mean stochastic matrix, which for the distribution under consideration, corresponds to the chain with the fastest mixing rate, relative to a standard bound on the exact rate. Our approach relies on results that express general element-wise nonnegative stochastic matrices as convex combinations of 0–1 stochastic matrices. Its performance, as a function of the weights in these convex combinations and the number of agents, is illustrated in computer simulations. Because of its differential and distributed nature, this approach can handle large problems and seems likely to be well suited for applications in distributed control and robotics.
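The idea can be illustrated with a minimal simulation sketch. This is not the paper's exact construction (which draws 0–1 stochastic matrices from a tailored distribution and relies on the law of large numbers); here each agent simply holds a random row-stochastic matrix respecting a shared sparsity pattern, and average consensus over a ring-shaped agent network drives every agent to the mean matrix, whose mixing rate is then gauged by its second-largest eigenvalue modulus (SLEM). The pattern, network topology, and step size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Common sparsity pattern of the chain (1 = transition allowed); symmetric
# with self-loops so every row can be normalized. Illustrative choice.
pattern = np.array([[1, 1, 0, 1],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [1, 0, 1, 1]], dtype=float)

def random_patterned_stochastic(pattern, rng):
    """Draw a random row-stochastic matrix supported on the given pattern."""
    M = rng.random(pattern.shape) * pattern
    return M / M.sum(axis=1, keepdims=True)

n_agents = 20
init = [random_patterned_stochastic(pattern, rng) for _ in range(n_agents)]
states = [M.copy() for M in init]

# Average consensus over a ring of agents: each agent nudges its matrix
# toward those of its two neighbors. The average of all states is invariant
# under this update, so (the ring being connected) every agent converges
# to the mean of the initial matrices.
eps = 0.4
for _ in range(500):
    states = [states[i] + eps * ((states[(i - 1) % n_agents] - states[i])
                                 + (states[(i + 1) % n_agents] - states[i]))
              for i in range(n_agents)]

consensus = states[0]          # all agents now (approximately) agree
mean_init = sum(init) / n_agents

# Standard proxy for the mixing rate: the second-largest eigenvalue
# modulus (SLEM) of the agreed-upon chain; smaller means faster mixing.
eigvals = np.sort(np.abs(np.linalg.eigvals(consensus)))[::-1]
slem = eigvals[1]
```

Because the consensus update preserves the average and each row sum stays equal to one, the limit is itself a stochastic matrix with the desired pattern; with positive self-loops and a connected pattern its SLEM is strictly below one.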
BibTeX entry
@INPROCEEDINGS{5160197,
  author={Zavlanos, M.M. and Koditschek, D.E. and Pappas, G.J.},
  booktitle={American Control Conference, 2009. ACC '09.},
  title={A distributed dynamical scheme for fastest mixing Markov chains},
  year={2009},
  month={June},
  pages={1436-1441},
  doi={10.1109/ACC.2009.5160197},
  ISSN={0743-1619}
}

Copyright Kodlab, 2017