UCB EXPLORATION VIA Q-ENSEMBLES

arXiv: Learning, 2018.


Abstract:

We show how an ensemble of $Q^*$-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well established algorithms from the bandit setting, and adapt them to the $Q$-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant ...
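
The abstract's UCB exploration strategy combines an ensemble of $Q$-estimates into an optimistic action choice. Below is a minimal sketch (not the authors' released code) of what such a rule can look like: each ensemble member supplies a $Q(s,a)$ estimate, and the agent acts greedily with respect to the ensemble mean plus a multiple of the ensemble's standard deviation. The names `ucb_action`, `ensemble_q_values`, and the coefficient `lam` are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of UCB-style action selection over a Q-ensemble (assumed names).
import numpy as np


def ucb_action(ensemble_q_values: np.ndarray, lam: float = 0.1) -> int:
    """Choose the action maximizing mean + lam * std over the ensemble.

    ensemble_q_values: array of shape (K, num_actions), one row per
        ensemble member's Q(s, a) estimates for the current state s.
    lam: exploration coefficient scaling the ensemble's disagreement.
    """
    mean_q = ensemble_q_values.mean(axis=0)      # average estimate across the K heads
    std_q = ensemble_q_values.std(axis=0)        # spread across heads as an uncertainty proxy
    return int(np.argmax(mean_q + lam * std_q))  # optimistic (UCB-style) choice


# Example usage: 5 ensemble members, 4 actions for the current state.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q_heads = rng.normal(size=(5, 4))
    print("chosen action:", ucb_action(q_heads, lam=0.1))
```

With `lam = 0` this reduces to acting greedily on the ensemble mean; larger values favor actions on which the ensemble members disagree, which is the exploration bonus the UCB strategy relies on.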
