Motivating Time-Inconsistent Agents: A Computational Approach

WINE (2018)

Abstract
We study the complexity of motivating time-inconsistent agents to complete long-term projects in the graph-based planning model proposed by Kleinberg and Oren (2014). Given a task graph G with n nodes, our objective is to guide an agent towards a target node t under certain budget constraints. The crux is that the agent may change its strategy over time due to its present bias. We consider two strategies to guide the agent. In the first, a single reward is placed at t and arbitrary edges may be removed from G. In the second, rewards may be placed at arbitrary nodes of G but no edges may be deleted. In both cases we show that it is NP-complete to decide whether a given budget is sufficient to keep the agent motivated. For the first setting, we give complementary upper and lower bounds on the approximability of the minimum required budget. In particular, we devise a (1+√n)-approximation algorithm and prove NP-hardness for approximation ratios greater than √n/3. We also argue that the second setting does not permit any efficient approximation unless P = NP.
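To make the underlying model concrete, the following is a minimal sketch of the Kleinberg–Oren planning dynamics the abstract refers to: an agent with present-bias parameter β walks a task graph toward t, at each node weighing the next edge at full cost while discounting all later costs and the reward by β, and abandoning when no plan looks worthwhile. The graph encoding, function names, and parameters here are illustrative assumptions, not the paper's notation.

```python
import heapq

def cheapest_path_cost(graph, src, t):
    """Dijkstra: minimum total edge cost from src to target t.
    graph maps a node to a list of (neighbor, edge_cost) pairs."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == t:
            return d
        if d > dist.get(v, float("inf")):
            continue
        for w, c in graph.get(v, []):
            nd = d + c
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return float("inf")

def present_biased_walk(graph, s, t, beta, reward):
    """Simulate a present-biased agent in the style of Kleinberg and
    Oren (2014): at each node the agent pays the next edge at full
    cost but discounts all remaining costs and the final reward by
    beta. It abandons (returns None) if every plan's perceived cost
    exceeds the perceived reward beta * reward."""
    v = s
    path = [s]
    while v != t:
        best = None
        for w, c in graph.get(v, []):
            # Perceived cost: next edge undiscounted, rest discounted.
            perceived = c + beta * cheapest_path_cost(graph, w, t)
            if best is None or perceived < best[0]:
                best = (perceived, w)
        if best is None or best[0] > beta * reward:
            return None  # agent gives up on the project
        v = best[1]
        path.append(v)
    return path

# Tiny illustrative instance (hypothetical costs, not from the paper):
graph = {"s": [("a", 1), ("t", 6)], "a": [("t", 4)]}
print(present_biased_walk(graph, "s", "t", beta=0.5, reward=10))
print(present_biased_walk(graph, "s", "t", beta=0.5, reward=5))
```

With reward 10 the agent completes the path via a; with reward 5 its perceived reward (β·r = 2.5) falls below the cheapest perceived cost at s, so it abandons. The paper's two intervention strategies correspond to pruning edges of `graph` or distributing the reward across intermediate nodes so that such abandonment never occurs.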
Keywords
Approximation algorithms, Behavioral economics, Commitment devices, Computational complexity, Time-inconsistent preferences