Boosting Standard Classification Architectures Through a Ranking Regularizer

2020 IEEE Winter Conference on Applications of Computer Vision (WACV)

Abstract
We employ triplet loss as a feature-embedding regularizer to boost classification performance. Standard architectures, like ResNet and Inception, are extended to support both losses with minimal hyper-parameter tuning. This promotes generality while fine-tuning pretrained networks. Triplet loss is a powerful surrogate for recently proposed embedding regularizers, yet it is often avoided because of its large batch-size requirement and high computational cost. Through our experiments, we re-assess these assumptions. During inference, our network supports both classification and embedding tasks without any computational overhead. Quantitative evaluation highlights a steady improvement on five fine-grained recognition datasets, and further evaluation on an imbalanced video dataset yields a significant improvement. Triplet loss brings feature-embedding capabilities, such as nearest-neighbor retrieval, to classification models. Code is available at http://bit.ly/2LNYEqL.
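The abstract describes joint training of a standard backbone with a classification loss plus a triplet-loss regularizer applied to the same penultimate embedding, so inference adds no overhead. Below is a minimal PyTorch sketch of that setup, assuming a ResNet-50 backbone and PyTorch's built-in TripletMarginLoss; the class name DualHeadResNet, the explicit anchor/positive/negative sampling, and the weight lambda_trip are illustrative assumptions, not the authors' exact configuration (see the linked code for that).

```python
import torch
import torch.nn as nn
import torchvision.models as models

class DualHeadResNet(nn.Module):
    """ResNet whose penultimate features serve both a softmax classifier
    and a triplet-loss embedding; at inference the same forward pass
    yields logits for classification and an embedding for retrieval."""

    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep everything up to (and including) global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.classifier = nn.Linear(backbone.fc.in_features, num_classes)

    def forward(self, x):
        emb = self.features(x).flatten(1)  # shared feature embedding
        logits = self.classifier(emb)      # classification head
        return logits, emb

ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=0.2)  # margin is an illustrative choice
lambda_trip = 1.0  # assumed weight on the ranking regularizer

def joint_loss(model, anchor, positive, negative, labels):
    """Cross-entropy on the anchor batch plus triplet loss on the embeddings."""
    logits, emb_a = model(anchor)
    _, emb_p = model(positive)
    _, emb_n = model(negative)
    return ce_loss(logits, labels) + lambda_trip * triplet_loss(emb_a, emb_p, emb_n)
```

In this sketch, dropping the triplet term recovers plain fine-tuning, which matches the paper's framing of triplet loss as a regularizer added on top of a standard classification pipeline rather than a replacement for it.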
Keywords
embedding tasks, fine-grained recognition datasets, triplet loss, classification models, boosting standard classification architectures, ranking regularizer, classification performance, standard architectures, hyper-parameter tuning, batch-size requirement, embedding regularizers