Topological limits to the parallel processing capability of network architectures
Nature Physics (2021)
Abstract
The ability to learn new tasks and generalize to others is a remarkable characteristic of both human brains and recent artificial intelligence systems. The ability to perform multiple tasks simultaneously is also a key characteristic of parallel architectures, as is evident in the human brain and exploited in traditional parallel architectures. Here we show that these two characteristics reflect a fundamental tradeoff between interactive parallelism, which supports learning and generalization, and independent parallelism, which supports processing efficiency through concurrent multitasking. Using methods from statistical physics, we provide analytical results that quantify the limits on processing capacity for different types of tasks in neural networks. Although the maximum number of possible parallel tasks grows linearly with network size, under realistic scenarios their expected number grows sublinearly. Hence, even modest reliance on shared representations, which support learning and generalization, constrains the number of parallel tasks. This has profound consequences for understanding the human brain's mix of sequential and parallel capabilities, as well as for the development of artificial intelligence systems that can optimally manage the tradeoff between learning and processing efficiency.
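The tradeoff described above can be illustrated with a toy model (a sketch under our own simplifying assumptions, not the paper's exact formalism): represent each task as an edge from an input node to an output node in a bipartite graph. Tasks that merely share a node interfere directly, and tasks connected by a cross-talk edge interfere through shared representations; only a set of mutually non-interfering tasks can run in parallel.

```python
import random

def sample_tasks(n, d):
    """Random bipartite task graph: n input nodes, n output nodes,
    each input connected to d distinct outputs (d tasks per input)."""
    return [(i, o) for i in range(n)
            for o in random.sample(range(n), d)]

def parallel_set_size(tasks):
    """Greedy count of tasks that share no input or output node
    (direct interference only)."""
    used_in, used_out = set(), set()
    count = 0
    for i, o in tasks:
        if i not in used_in and o not in used_out:
            used_in.add(i)
            used_out.add(o)
            count += 1
    return count

def independent_parallel_size(tasks):
    """Greedy count of tasks that also avoid indirect interference:
    no edge may connect the endpoints of two concurrently chosen
    tasks (i.e. no cross-talk through shared representations)."""
    edges = set(tasks)
    chosen = []
    for i, o in tasks:
        compatible = all(
            i != i2 and o != o2
            and (i, o2) not in edges and (i2, o) not in edges
            for i2, o2 in chosen
        )
        if compatible:
            chosen.append((i, o))
    return len(chosen)

if __name__ == "__main__":
    random.seed(0)
    for n in (16, 64, 256):
        tasks = sample_tasks(n, 4)
        print(n, parallel_set_size(tasks), independent_parallel_size(tasks))
```

Comparing the two counts as `n` grows shows how requiring full independence (no cross-talk edges between concurrent tasks) shrinks the feasible parallel set relative to the node-disjoint bound, which is the qualitative effect the abstract describes.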
Keywords
Complex networks, Computational science, Information theory and computation, Statistical physics, Physics (general), Theoretical, Mathematical and Computational Physics, Classical and Continuum Physics, Atomic, Molecular, Optical and Plasma Physics, Condensed Matter Physics, Complex Systems