Evolution of cooperation on reinforcement learning-driven adaptive networks

Chaos (2024)

Abstract
Complex networks are widespread in real-world environments across diverse domains, and they tend to form spontaneously through interactions between individual agents. Inspired by this, we design an evolutionary game model in which agents play a prisoner's dilemma game (PDG) with their neighboring agents and can autonomously modify those connections using reinforcement learning to escape unfavorable environments. Our findings reveal several remarkable results. Reinforcement learning-based adaptive networks improve cooperation compared with existing PDGs played on homogeneous networks. At the same time, the network's topology evolves from a homogeneous to a heterogeneous state. This change occurs because players gain experience from past games and become more astute in deciding whether to keep playing PDGs with their current neighbors or to disconnect from the least profitable ones, instead seeking out more favorable environments by connecting to second-order neighbors with higher rewards. By calculating the degree distribution and modularity of the adaptive network in the steady state, we confirm that it follows a power law and has a clear community structure, indicating that the adaptive network resembles networks found in the real world. Our study reports a new phenomenon in evolutionary game theory on networks and proposes a new way to generate scale-free networks: through the evolution of homogeneous networks rather than the typical mechanisms of network growth and preferential attachment. Our results offer new insights into network structure, the emergence of cooperation, and the behavior of agents in nature and society.
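The abstract does not give implementation details, so the following is a minimal sketch of the mechanism it describes, not the authors' actual model. It assumes a weak prisoner's dilemma payoff matrix (R=1, T=b, S=P=0), an ε-greedy Q-learning rule over a two-action {stay, rewire} choice, and rewiring that cuts the least profitable link and attaches to the highest-payoff second-order neighbor. All parameter values (N, K, b, the learning rate, discount, and exploration rate) are illustrative assumptions, and strategy evolution (e.g., imitation updates) is omitted to keep the sketch focused on the link-adaptation mechanism.

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Illustrative parameters -- assumptions, not taken from the paper.
N, K = 100, 4                        # agents; initial degree of the regular network
B = 1.5                              # temptation payoff b of the weak PDG
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.05   # Q-learning rate, discount, exploration

G = nx.random_regular_graph(K, N)                   # homogeneous starting topology
strategy = {i: random.choice([0, 1]) for i in G}    # 1 = cooperate, 0 = defect
Q = {i: [0.0, 0.0] for i in G}                      # per-agent Q-values: [stay, rewire]

def payoff(s_i, s_j):
    """Weak prisoner's dilemma: R=1, T=b, S=P=0."""
    if s_i == 1:
        return 1.0 if s_j == 1 else 0.0
    return B if s_j == 1 else 0.0

def round_payoffs():
    """Each agent accumulates payoffs from PDGs with all current neighbors."""
    return {i: sum(payoff(strategy[i], strategy[j]) for j in G[i]) for i in G}

def rewire(i, payoffs):
    """Cut the least profitable link; attach to the best second-order neighbor."""
    if G.degree(i) == 0:
        return
    worst = min(G[i], key=lambda j: payoff(strategy[i], strategy[j]))
    second = {k for j in G[i] for k in G[j]} - set(G[i]) - {i}
    if not second:
        return
    best = max(second, key=lambda k: payoffs[k])
    G.remove_edge(i, worst)
    G.add_edge(i, best)

for step in range(10_000):
    payoffs = round_payoffs()
    i = random.choice(list(G))
    # epsilon-greedy choice between keeping the neighborhood and rewiring it
    a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda x: Q[i][x])
    if a == 1:
        rewire(i, payoffs)
    reward = sum(payoff(strategy[i], strategy[j]) for j in G[i])
    Q[i][a] += ALPHA * (reward + GAMMA * max(Q[i]) - Q[i][a])

# Steady-state diagnostics analogous to those reported in the abstract:
# degree heterogeneity and community structure of the adapted network.
degrees = sorted((d for _, d in G.degree()), reverse=True)
communities = greedy_modularity_communities(G)
print("max/min degree:", degrees[0], degrees[-1])
print("modularity:", modularity(G, communities))
```

Under the abstract's claims, a run of this kind of dynamics should broaden the degree distribution away from the initial regular network and produce a positive modularity score; how closely the sketch reproduces the paper's power-law and community-structure results depends on the details (state definition, reward shaping, strategy updates) that the abstract leaves unspecified.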