Deep Augmentation: Self-Supervised Learning with Transformations in Activation Space
CoRR (2023)
Abstract
We introduce Deep Augmentation, an approach to implicit data augmentation
using dropout or PCA to transform a targeted layer within a neural network to
improve performance and generalization. We demonstrate Deep Augmentation
through extensive experiments on contrastive learning tasks in NLP, computer
vision, and graph learning. We observe substantial performance gains with
Transformers, ResNets, and Graph Neural Networks as the underlying models in
contrastive learning, but find inverse effects on the corresponding
supervised problems. Our analysis suggests that Deep Augmentation alleviates
co-adaptation between layers, a form of "collapse." We use this observation to
formulate a method for selecting which layer to target; in particular, our
experimentation reveals that targeting deeper layers with Deep Augmentation
outperforms augmenting the input data. The simple network- and
modality-agnostic nature of this approach enables its integration into various
machine learning pipelines.
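To make the idea concrete, the following is a minimal sketch, assuming a PyTorch-style stack of encoder blocks; the class name DeepAugmentedEncoder, the target_layer index, and the surrounding training setup are hypothetical illustrations, not the authors' implementation. It shows the dropout variant of the technique: instead of augmenting the input, the activations of one targeted intermediate layer are perturbed, and two stochastic forward passes produce two views for a contrastive loss.

import torch
import torch.nn as nn

class DeepAugmentedEncoder(nn.Module):
    """Hypothetical sketch: apply dropout to the activations of a
    chosen intermediate block instead of augmenting the raw input."""

    def __init__(self, blocks, target_layer, p=0.5):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)   # sequential encoder blocks
        self.target_layer = target_layer      # index of the layer to transform
        self.dropout = nn.Dropout(p)          # activation-space transformation

    def forward(self, x, augment=True):
        for i, block in enumerate(self.blocks):
            x = block(x)
            # Deep Augmentation: perturb the targeted layer's activations
            if augment and i == self.target_layer:
                x = self.dropout(x)
        return x

# Two stochastic forward passes yield two "views" of the same input,
# which can then be fed to a contrastive objective such as InfoNCE.
encoder = DeepAugmentedEncoder(
    blocks=[nn.Sequential(nn.Linear(128, 128), nn.ReLU()) for _ in range(4)],
    target_layer=2,
)
x = torch.randn(32, 128)
z1, z2 = encoder(x), encoder(x)

A PCA-based transformation of the targeted activations could be substituted for the dropout call above; the abstract mentions both variants, and the choice of which layer to target is what the paper's layer-selection analysis addresses.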
Keywords
deep augmentation, higher activation space, learning, self-supervised