Adaptive Programmable Networks for In Materia Neuromorphic Computing

arXiv (2023)

Abstract
Modern AI and machine learning provide striking performance, but this comes at rapidly spiralling energy costs arising from growing network sizes and the inefficiencies of the von Neumann architecture. 'Reservoir computing' offers an energy-efficient alternative to large networks, fixing randomised weights so that only a lightweight readout requires energetically cheap training. The massively parallel processing underpinning machine learning is poorly catered for by CMOS, making in materia neuromorphic computing an attractive solution. Nanomagnetic artificial spin systems are ideal candidates for neuromorphic hardware: their passive memory, state-dependent dynamics and nonlinear GHz spin-wave response provide powerful computation. However, any single physical reservoir must trade off between performance metrics, including nonlinearity and memory capacity, with the compromise typically hard-coded during nanofabrication. Here, we present three artificial spin systems: square artificial spin ice, square artificial spin-vortex ice and a disordered pinwheel artificial spin-vortex ice. We show how tuning system geometry and dynamics defines computing performance. We engineer networks in which each node is a high-dimensional physical reservoir, implementing parallel, deep and multilayer physical neural network architectures. This resolves the performance compromise of single physical reservoirs, allowing a small suite of synergistic physical systems to address diverse tasks and provide a broad range of reprogrammable, computationally distinct configurations. These networks outperform any single reservoir across a broad task set. Crucially, we move beyond reservoir computing by presenting a method for reconfigurably programming inter-layer network connections, enabling on-demand, task-optimised performance.
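To make the reservoir-computing paradigm in the abstract concrete, the following is a minimal software sketch of an echo state network: the recurrent "reservoir" weights are fixed and random (in the paper, the reservoir is a physical spin system rather than a matrix), and only a linear readout is trained. All dimensions, scalings and the ridge parameter are illustrative assumptions, not values from the paper.

```python
# Minimal echo state network sketch of reservoir computing (illustrative only;
# the paper's reservoirs are physical nanomagnetic spin systems, not matrices).
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 1, 100                       # assumed input/reservoir sizes
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(0.0, 1.0, (N_RES, N_RES))   # fixed random internal weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed random reservoir and collect its nonlinear,
    state-dependent responses (the 'computation' happens here)."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: the only trained, energetically cheap step."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(N_RES), S.T @ targets)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t).reshape(-1, 1)
X = run_reservoir(u[:-1])
W_out = train_readout(X, u[1:])
print("train MSE:", np.mean((X @ W_out - u[1:]) ** 2))
```

In this picture, the parallel, deep and multilayer architectures the abstract describes correspond to composing several such reservoirs and choosing which layer's outputs drive which layer's inputs; the paper's contribution is making those inter-layer connections reprogrammable in physical hardware rather than fixed at fabrication.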