SynerMix: Synergistic Mixup Solution for Enhanced Intra-Class Cohesion and Inter-Class Separability in Image Classification
CoRR (2024)
Abstract
To address two shortcomings of MixUp and its variants (e.g., Manifold MixUp) in
image classification tasks, namely their neglect of mixing within the same
class (intra-class mixup) and their inability to enhance intra-class
cohesion through their mixing operations, we propose a novel mixup method named
SynerMix-Intra and, building upon it, introduce a synergistic mixup solution
named SynerMix. SynerMix-Intra specifically targets intra-class mixup to
bolster intra-class cohesion, a property not addressed by current mixup methods.
For each mini-batch, it leverages feature representations of unaugmented
original images from each class to generate a synthesized feature
representation through random linear interpolation. All synthesized
representations are then fed into the classification and loss layers to
calculate an average classification loss that significantly enhances
intra-class cohesion. Furthermore, SynerMix combines SynerMix-Intra with an
existing mixup approach (e.g., MixUp, Manifold MixUp), which primarily focuses
on inter-class mixup and has the benefit of enhancing inter-class separability.
In doing so, it integrates both inter- and intra-class mixup in a balanced way
while concurrently improving intra-class cohesion and inter-class separability.
Experimental results on six datasets show that SynerMix achieves a 0.1% to
3.43% higher accuracy than the best of either MixUp or SynerMix-Intra alone,
averaging a 1.16% gain, and surpasses either MixUp or SynerMix-Intra by at
least 0.12%. Given
that SynerMix is model-agnostic, it holds significant potential for application
in other domains where mixup methods have shown promise, such as speech and
text classification. Our code is publicly available at:
https://github.com/wxitxy/synermix.git.
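The SynerMix-Intra mechanism described above (per-class random linear interpolation of unaugmented feature representations, followed by an average classification loss) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the use of a normalized random convex combination over all of a class's features, and the plain softmax cross-entropy classifier head are assumptions made for clarity.

```python
import numpy as np

def synthesize_intra_class(features, labels, rng):
    """For each class present in the mini-batch, mix the feature
    representations of its (unaugmented) images via a random convex
    combination -- one synthesized feature per class.
    Hypothetical helper; the paper's exact interpolation may differ."""
    syn_feats, syn_labels = [], []
    for c in np.unique(labels):
        class_feats = features[labels == c]      # (n_c, d) features of class c
        w = rng.random(len(class_feats))         # random mixing weights
        w = w / w.sum()                          # normalize: convex combination
        syn_feats.append(w @ class_feats)        # weighted average in feature space
        syn_labels.append(c)
    return np.stack(syn_feats), np.array(syn_labels)

def intra_class_loss(syn_feats, syn_labels, W, b):
    """Average softmax cross-entropy of the synthesized representations,
    i.e. the SynerMix-Intra loss term (assumed linear classifier head)."""
    logits = syn_feats @ W + b
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(syn_labels)), syn_labels].mean()

# Toy usage: a mini-batch of 8 features (dim 4) from 4 classes.
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 4))
labels = np.array([0, 0, 1, 1, 2, 2, 2, 3])
sf, sl = synthesize_intra_class(features, labels, rng)
W, b = rng.normal(size=(4, 4)), np.zeros(4)
loss = intra_class_loss(sf, sl, W, b)
```

In the full SynerMix solution this loss would be combined, in a balanced way, with the loss of an existing inter-class mixup method such as MixUp or Manifold MixUp.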