Ada-QPacknet - Multi-Task Forget-Free Continual Learning with Quantization Driven Adaptive Pruning

ECAI 2023 (2023)

Abstract
Continual learning (CL) is a challenging machine learning setting that is attracting the interest of an increasing number of researchers. Among recent CL works, architectural strategies appear particularly promising due to their potential to expand and adapt the model architecture as new tasks are presented. However, existing solutions do not efficiently exploit model sparsity due to the adoption of constant pruning ratios. Moreover, current approaches exhibit a tendency to quickly saturate model capacity since the number of weights is limited and each weight is restricted to a single value. In this paper, we propose Ada-QPacknet, a novel architectural CL method that resorts to adaptive pruning and quantization. These two features allow our model to overcome the two crucial issues of effective exploitation of model sparsity and efficient use of model capacity. Specifically, adaptive pruning restores model capacity by reducing the number of weights assigned to each task to a smaller subset of weights that preserves the performance of the full set, allowing other weights to be used for future tasks. Adaptive quantization separates each weight into multiple components with adaptively reduced bit-width, allowing a single weight to solve more than one task without significant performance drops, leading to improved exploitation of model capacity. Experimental results on benchmark CL scenarios show that our proposed method achieves better results in terms of accuracy than existing rehearsal, regularization, and architectural CL strategies. Moreover, our method significantly outperforms forget-free competitors in terms of efficient exploitation of model capacity.
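The adaptive pruning step sketched in the abstract can be read as a search for the smallest per-task subset of still-free weights that preserves the accuracy of the full set, freeing the remaining weights for future tasks. Below is a minimal PyTorch-style sketch under that reading; the function name adaptive_task_mask, the candidate keep_ratios, the evaluate callback, and the tolerance are illustrative assumptions, not the paper's exact procedure, and the quantization-driven weight splitting is not covered here.

```python
# Illustrative sketch only: magnitude-based selection with a per-task
# free-weight mask is an assumption, not the paper's exact Ada-QPacknet method.
import torch

def adaptive_task_mask(weight: torch.Tensor,
                       used_mask: torch.Tensor,
                       keep_ratios=(0.1, 0.3, 0.5),
                       evaluate=lambda mask: 1.0,
                       tolerance: float = 0.01) -> torch.Tensor:
    """Pick a small subset of still-free weights for the current task.

    `used_mask` marks weights already frozen for earlier tasks.
    `evaluate(mask)` is a placeholder callback returning task accuracy with
    only the masked weights active; here it defaults to a stub.
    """
    free = ~used_mask                    # weights not yet claimed by earlier tasks
    magnitudes = weight.abs() * free     # consider only free weights
    baseline = evaluate(free)            # accuracy with all free weights active

    for ratio in sorted(keep_ratios):    # try the sparsest option first
        k = max(1, int(ratio * free.sum().item()))
        threshold = torch.topk(magnitudes.flatten(), k).values.min()
        candidate = (magnitudes >= threshold) & free
        if baseline - evaluate(candidate) <= tolerance:
            return candidate             # smallest subset that preserves accuracy
    return free                          # fall back to all free weights

# Example usage (shapes and values are arbitrary):
w = torch.randn(256, 128)
used = torch.zeros_like(w, dtype=torch.bool)  # nothing claimed yet (task 1)
mask_t1 = adaptive_task_mask(w, used)
used |= mask_t1                               # freeze these weights for task 1
```

In a full continual-learning loop, the weights selected by each task's mask would be frozen, and the updated used mask would constrain training on the next task.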
Keywords
quantization driven adaptive pruning, learning, ada-qpacknet, multi-task, forget-free