Trustworthy Agents under a Veil of Misinformation: Mechanism Design under Adversarial Upstream Conditions

ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V (2023)

Abstract
In many situations, agents are on the same team as the central decision-maker: they express their preferences truthfully to that decision-maker, who then aggregates those expressed preferences to decide on an outcome. This setting strays from, and is easier to analyze than, the traditional mechanism design setting from microeconomics, where agents are not assumed to be truthful. But what if those trusted agents' true preferences are manipulated by upstream actors, whether intentionally or not? How should a decision-maker act when strategic or natural uncertainty distorts trusted agents' beliefs? In this brief position paper, we propose a new area of focus for mechanism designers that captures a variety of real-world settings where agents are "on the same team" but can be manipulated by an exogenous actor.
Keywords
Mechanism design, manipulation, adversarial robustness, information uncertainty