Proceedings of the Fifteenth ACM Conference on Economics and Computation, pp. 5-22, 2014.
Keywords: crowdsourcing, economics, exploration, incentives, multi-armed bandit problems
We study a Bayesian multi-armed bandit (MAB) setting in which a principal seeks to maximize the sum of expected time-discounted rewards obtained by pulling arms, when the arms are actually pulled by selfish and myopic individuals. Since such individuals pull the arm with the highest expected posterior reward (i.e., they always exploit and never explore), […]
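The abstract describes agents who always pull the arm with the highest posterior expected reward. A minimal sketch of that myopic behavior, assuming a Beta(1,1)-Bernoulli bandit (the prior and reward model here are illustrative assumptions, not from the paper):

```python
import random

def myopic_pulls(true_means, rounds, seed=0):
    """Simulate myopic agents in a Beta(1,1)-Bernoulli bandit.

    Each agent pulls the arm with the highest posterior mean
    (successes + 1) / (pulls + 2), i.e. pure exploitation with
    no exploration. Returns per-arm pull counts.
    """
    rng = random.Random(seed)
    k = len(true_means)
    succ = [0] * k   # observed successes per arm
    pulls = [0] * k  # times each arm was pulled
    for _ in range(rounds):
        # posterior mean of each arm under a uniform Beta(1,1) prior
        means = [(succ[i] + 1) / (pulls[i] + 2) for i in range(k)]
        arm = max(range(k), key=lambda i: means[i])
        pulls[arm] += 1
        succ[arm] += rng.random() < true_means[arm]
    return pulls
```

Because no agent ever explores, a few early failures on the truly better arm can lock the population onto an inferior arm indefinitely; this is the incentive problem the principal must correct.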