
Exploiting transient dynamics of a time-multiplexed reservoir to boost the system performance

2021 International Joint Conference on Neural Networks (IJCNN)

Cited by 4
Abstract
Delay-based reservoir computing is an unconventional information-processing method that allows recurrent neural networks to be implemented on different kinds of hardware substrates. It enables machine learning based on the transient dynamics of a single nonlinear node through time-multiplexing. Here, we explore the interplay between the driving strength of the nonlinear node and the modulation rate of the time-multiplexing. We find two contrasting combinations of input gain and node separation, each yielding the best performance in a different prediction task. A weak input gain combined with a large node separation is superior for near-future prediction of a chaotic Mackey-Glass system, while a high input gain with a short node separation performs best for far-future prediction. Furthermore, for increasing input gains, we find that the node separation yielding the best performance decreases significantly below the characteristic time scale of the underlying delay system. This allows the realization of large networks with up to one thousand nodes even at high processing rates. We investigate the relation between these parameters further by analyzing the average state entropy and computing the information processing capacity of a time-multiplexed reservoir for one input mask. This supports an in-depth understanding of the interplay between node separation and input gain.
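The scheme described in the abstract — a single nonlinear node driven through a random input mask, whose transient responses act as virtual network nodes read out linearly — can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the simplified virtual-node update (tanh node with per-cycle feedback), the Mackey-Glass parameters, the input gain of 0.5, the feedback strength of 0.8, the reservoir size of 100 virtual nodes, and the ridge regularization are all illustrative assumptions.

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, seed=0):
    """Chaotic Mackey-Glass series via Euler integration (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = list(1.2 + 0.1 * rng.standard_normal(tau + 1))  # random initial history
    for _ in range(n_steps):
        x_tau = x[-tau - 1]
        x.append(x[-1] + dt * (beta * x_tau / (1 + x_tau ** n) - gamma * x[-1]))
    return np.array(x[tau + 1:])

def run_reservoir(u, n_nodes=100, input_gain=0.5, feedback=0.8, seed=1):
    """Time-multiplexed reservoir: one tanh node, random +/-1 input mask,
    each virtual node fed back from its own state one delay cycle earlier."""
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_nodes)   # fixed random input mask
    states = np.zeros((len(u), n_nodes))
    prev = np.zeros(n_nodes)                       # states one delay cycle ago
    for t, ut in enumerate(u):
        prev = np.tanh(input_gain * mask * ut + feedback * prev)
        states[t] = prev
    return states

def ridge_fit(X, y, reg=1e-6):
    """Linear readout trained by ridge regression."""
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

series = mackey_glass(2000)
u, target = series[:-1], series[1:]                # one-step-ahead prediction
S = run_reservoir(u)
S_b = np.hstack([S, np.ones((len(S), 1))])         # bias column for the readout
n_train = 1500
w = ridge_fit(S_b[:n_train], target[:n_train])
pred = S_b[n_train:] @ w
nmse = np.mean((pred - target[n_train:]) ** 2) / np.var(target[n_train:])
print(f"test NMSE: {nmse:.4f}")
```

Sweeping `input_gain` (the node's driving strength) against the node separation — here implicit in the simplified one-update-per-node model — is the experiment the paper performs; this sketch only fixes one such combination.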
Keywords
reservoir computing, time multiplexing, information processing capacity