Surprise probabilities in Markov chains

Combinatorics, Probability & Computing (2017)

Abstract
In a Markov chain started at a state x, the hitting time τ(y) is the first time that the chain reaches another state y. We study the probability P_x(τ(y) = t) that the first visit to y occurs precisely at a given time t. Informally speaking, the event that a new state is visited at a large time t may be considered a "surprise". We prove the following three bounds:

• In any Markov chain with n states, P_x(τ(y) = t) ≤ n/t.
• In a reversible chain with n states, P_x(τ(y) = t) ≤ √(2n)/t for t ≥ 4n + 4.
• For random walk on a simple graph with n ≥ 2 vertices, P_x(τ(y) = t) ≤ 4e log n/t.

We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): for random walk on an n-vertex graph, for every initial vertex x, [EQUATION].
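A minimal sketch, not part of the paper, of how the three bounds can be checked numerically: it Monte Carlo estimates P_x(τ(y) = t) for simple random walk on an n-cycle and compares the estimate with n/t, √(2n)/t, and 4e log n/t. The choice of graph, the values n = 20, x = 0, y = 10, t = 100, and the number of trials are illustrative assumptions, not quantities taken from the paper.

import math
import random

def estimate_surprise(n, x, y, t, trials=100_000):
    # Estimate P_x(tau(y) = t) for simple random walk on the n-cycle:
    # count trajectories whose FIRST visit to y happens exactly at step t.
    hits = 0
    for _ in range(trials):
        state = x
        for step in range(1, t + 1):
            state = (state + random.choice((-1, 1))) % n  # move to a uniformly chosen neighbour
            if state == y:
                if step == t:
                    hits += 1
                break  # first visit happened; stop this trajectory
    return hits / trials

# Illustrative parameters (assumed): 20-cycle, antipodal target, t >= 4n + 4 = 84,
# so all three bounds from the abstract apply to this reversible walk on a simple graph.
n, x, y, t = 20, 0, 10, 100
p_hat = estimate_surprise(n, x, y, t)
print(f"estimated P_x(tau(y) = {t}) ~ {p_hat:.4f}")
print(f"general bound    n/t         = {n / t:.4f}")
print(f"reversible bound sqrt(2n)/t  = {math.sqrt(2 * n) / t:.4f}")
print(f"graph bound      4e*log(n)/t = {4 * math.e * math.log(n) / t:.4f}")

The estimate should fall below all three printed bounds; the point of the bounds is that they hold with no further knowledge of the chain's structure beyond the number of states (and, for the last two, reversibility or the graph setting).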