Understanding Model Selection For Learning In Strategic Environments
CoRR(2024)
Abstract
The deployment of ever-larger machine learning models reflects a growing
consensus that the more expressive the model class one optimizes over, and
the more data one has access to, the more one can improve performance. As
models get deployed in a variety of real-world scenarios, they inevitably
face strategic environments. In this work, we consider the natural question
of how the interplay of models and strategic interactions affects the
relationship between performance at equilibrium and the expressivity of
model classes. We find that strategic interactions can break the
conventional view: performance does not necessarily monotonically improve
as model classes get larger or more expressive (even with infinite data).
We show the implications of this result in several contexts including
strategic regression, strategic classification, and multi-agent
reinforcement learning. In particular, we show that each of these settings
admits a Braess' paradox-like phenomenon in which optimizing over less
expressive model classes allows one to achieve strictly better equilibrium
outcomes. Motivated by these examples, we then propose a new paradigm for
model selection in games wherein an agent seeks to choose amongst different
model classes to use as their action set in a game.
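The Braess'-paradox-like effect described above can be illustrated with a toy game that is not taken from the paper: a Prisoner's-Dilemma-style bimatrix game in which shrinking each player's action set (a stand-in for choosing a less expressive model class) produces a strictly better equilibrium outcome. The payoff matrices and the brute-force equilibrium search below are hypothetical illustrations, not the paper's construction.

```python
# Toy illustration (not from the paper): restricting players' action sets
# can yield a strictly better equilibrium, echoing the abstract's claim.

# Payoff matrices for a Prisoner's-Dilemma-style game.
# Row player gets A[i][j], column player gets B[i][j];
# action 0 = Cooperate, action 1 = Defect.
A = [[3, 0],
     [5, 1]]
B = [[3, 5],
     [0, 1]]

def pure_nash(A, B, rows, cols):
    """Brute-force the pure Nash equilibria when players are
    restricted to the action subsets `rows` and `cols`."""
    eqs = []
    for i in rows:
        for j in cols:
            row_best = all(A[i][j] >= A[k][j] for k in rows)
            col_best = all(B[i][j] >= B[i][l] for l in cols)
            if row_best and col_best:
                eqs.append((i, j))
    return eqs

# Full action sets: the unique equilibrium is mutual defection,
# with payoffs (1, 1).
full = pure_nash(A, B, [0, 1], [0, 1])        # [(1, 1)]

# Restricted action sets (only "Cooperate" available): the equilibrium
# payoffs improve strictly to (3, 3).
restricted = pure_nash(A, B, [0], [0])        # [(0, 0)]

print(full, restricted)
```

In this sketch the "model class" is simply the set of actions a player may commit to; the paper's settings (strategic regression, strategic classification, multi-agent RL) instantiate the same idea with function classes as action sets.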