Shrewd Selection Speeds Surfing: Use Smart EXP3!

2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), 2018

Abstract
In this paper, we explore the use of multi-armed bandit online learning techniques to solve distributed resource selection problems. As an example, we focus on the problem of network selection. Mobile devices often have several wireless networks at their disposal. While choosing the right network is vital for good performance, a decentralized solution remains a challenge. The impressive theoretical properties of multi-armed bandit algorithms, like EXP3, suggest that they should work well for this type of problem. Yet their real-world performance lags far behind. The main reasons are the hidden cost of switching networks and the slow rate of convergence. We propose Smart EXP3, a novel bandit-style algorithm that (a) retains the good theoretical properties of EXP3, (b) bounds the number of switches, and (c) yields significantly better performance in practice. We evaluate Smart EXP3 using simulations, controlled experiments, and in-the-wild experiments. Results show that it stabilizes at the optimal state, achieves fairness among devices, and gracefully handles transient behaviors. In real-world experiments, it can achieve 18% faster downloads than alternative strategies. We conclude that multi-armed bandit algorithms can play an important role in distributed resource selection problems once practical concerns, such as switching costs and convergence time, are addressed.
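The abstract positions Smart EXP3 against the standard EXP3 algorithm. As a point of reference, the sketch below shows plain EXP3 applied to a toy network-selection task; it is not the paper's Smart EXP3 (which additionally bounds the number of switches). The `pull` callback, the `gamma` exploration rate, and the throughput normalization are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def exp3(num_arms, gamma, pull, rounds):
    """Vanilla EXP3 for adversarial bandits (Auer et al., 2002).

    pull(arm) must return a reward normalized to [0, 1], e.g. observed
    throughput divided by the best achievable throughput (assumption).
    """
    weights = [1.0] * num_arms
    for _ in range(rounds):
        total = sum(weights)
        # Mix the exponential-weights distribution with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        reward = pull(arm)
        # Importance-weighted reward estimate keeps the update unbiased.
        estimate = reward / probs[arm]
        weights[arm] *= math.exp(gamma * estimate / num_arms)
        # Rescale to avoid numerical overflow; probabilities are unaffected.
        norm = max(weights)
        weights = [w / norm for w in weights]
    return weights

# Toy usage: three networks with different mean normalized throughputs.
means = [0.3, 0.7, 0.5]
final_weights = exp3(
    num_arms=3,
    gamma=0.1,
    pull=lambda a: min(1.0, max(0.0, random.gauss(means[a], 0.1))),
    rounds=2000,
)
print(final_weights)  # the weight of network 1 should dominate
```

Per the abstract, Smart EXP3 would add logic on top of this loop to limit how often the selected network actually changes, addressing the switching cost and slow convergence that hold back plain EXP3 in practice.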
Keywords
bandit algorithm,congestion game,wireless network selection