Monotonic learning with hypothesis evolution

Information Sciences (2023)

Abstract
A machine learning algorithm is monotonic if it returns a model with better performance when trained on a larger data set. Monotonicity is essential when a learning algorithm works with continually collected data, since non-monotonicity can cause unstable performance and a substantial waste of resources during the learning process. However, existing algorithms for settings such as online learning, domain-incremental learning and reinforcement learning rarely address the monotonicity issue. In this paper, we propose an evolutionary framework that enforces monotonicity for a learning algorithm over streaming data feeds. In each iteration, training is triggered by a new collection of incoming data, which produces a new generation of hypotheses, and only the best-performing portion of that generation is retained for the next round, based on a novel statistical hypothesis test. We carry out experiments on DNN models with continual data feeds constructed from MNIST, CIFAR-10, SST-2 and Tiny ImageNet. The results show that our approach significantly increases the probability of locally monotonic updates on the learning curves of the trained models and outperforms the state-of-the-art methods for this purpose.
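The abstract describes an iterative loop in which each new data chunk spawns a generation of candidate hypotheses and only the best-performing portion survives to the next round. The following is a minimal, self-contained sketch of such a loop, not the paper's method: the names Hypothesis, train, evaluate and evolve are hypothetical placeholders, and the simple "keep only candidates that match or beat the best validation score so far" rule stands in for the paper's statistical hypothesis test.

```python
import copy
import random
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class Hypothesis:
    params: float  # stand-in for real model parameters

def train(h: Hypothesis, data_chunk: Sequence[float]) -> Hypothesis:
    """Return a new candidate hypothesis updated on the incoming data chunk."""
    child = copy.deepcopy(h)
    child.params += random.gauss(0.0, 0.1) + 0.01 * len(data_chunk)
    return child

def evaluate(h: Hypothesis, val_data: Sequence[float]) -> float:
    """Validation score (higher is better); a toy proxy for accuracy."""
    return -abs(h.params - sum(val_data) / max(len(val_data), 1))

def evolve(population: List[Hypothesis],
           data_chunk: Sequence[float],
           val_data: Sequence[float],
           keep: int = 3) -> List[Hypothesis]:
    """One generation: train each hypothesis on the new chunk, then retain
    only candidates that do not degrade the best score seen so far."""
    best_so_far = max(evaluate(h, val_data) for h in population)
    children = [train(h, data_chunk) for h in population]
    # Parents remain eligible, so an update can never lower the best score.
    candidates = population + children
    candidates.sort(key=lambda h: evaluate(h, val_data), reverse=True)
    survivors = [h for h in candidates if evaluate(h, val_data) >= best_so_far]
    return (survivors or candidates)[:keep]

if __name__ == "__main__":
    random.seed(0)
    val = [1.0] * 20
    pop = [Hypothesis(params=random.random()) for _ in range(3)]
    for step in range(5):
        chunk = [1.0] * 10  # a new collection of incoming data
        pop = evolve(pop, chunk, val)
        print(step, round(max(evaluate(h, val) for h in pop), 4))
```

Because the parents stay in the candidate pool, the best validation score in this toy loop is non-decreasing across generations, which is the monotonicity property the retention rule is meant to illustrate.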
Keywords
Monotonic learning, Learning curve, Hypothesis evolution, Risk monotonicity, Learning theory