Adapting Semantic Segmentation Of Urban Scenes Via Mask-Aware Gated Discriminator

2019 IEEE International Conference on Multimedia and Expo (ICME)

Cited 10 | Viewed 15
Abstract
Training a deep neural network for semantic segmentation relies on pixel-level ground-truth labels for supervision. However, collecting large datasets with pixel-level annotations is very expensive and time-consuming. One workaround is to use synthetic data, from which potentially unlimited training images can be generated together with their corresponding ground-truth labels. Unfortunately, networks trained on synthetic data perform poorly on real images due to the domain-shift problem. Domain adaptation techniques have shown potential for transferring knowledge learned from synthetic data to real-world data. Prior works have mostly leveraged adversarial training to perform a global alignment of features. However, we observe that background objects exhibit less variation across domains than foreground objects. Using this insight, we propose a domain adaptation method that models and adapts foreground and background objects separately. Our approach starts with a fast style transfer to match the appearance of the inputs. This is followed by a foreground adaptation module that learns a foreground mask, which is used by our gated discriminator to adapt the foreground and background objects separately. Our experiments show that our model outperforms several state-of-the-art baselines in terms of mean intersection over union (mIoU).
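The abstract does not provide implementation details, but the core idea of gating a discriminator with a foreground mask can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the class name GatedDiscriminator, the layer sizes, the 19-class input (as in Cityscapes-style label spaces), and the use of two separate heads for foreground and background scores are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedDiscriminator(nn.Module):
    """Illustrative sketch of a mask-gated patch discriminator.

    Intermediate features are weighted by a soft foreground mask so that
    foreground and background regions receive separate domain scores,
    allowing them to be adversarially aligned independently.
    """

    def __init__(self, in_channels: int = 19, base_channels: int = 64):
        super().__init__()
        # Shared convolutional trunk over the segmentation network's softmax output.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Separate heads produce per-patch real/fake (source/target) logits
        # for foreground-gated and background-gated features.
        self.fg_head = nn.Conv2d(base_channels * 2, 1, 4, stride=2, padding=1)
        self.bg_head = nn.Conv2d(base_channels * 2, 1, 4, stride=2, padding=1)

    def forward(self, seg_probs: torch.Tensor, fg_mask: torch.Tensor):
        # seg_probs: (N, C, H, W) softmax output of the segmentation network.
        # fg_mask:   (N, 1, H, W) soft foreground mask in [0, 1].
        feats = self.trunk(seg_probs)
        # Resize the mask to the feature resolution, then gate the features:
        # the foreground head sees foreground-weighted features, the
        # background head sees the complement.
        mask = F.interpolate(fg_mask, size=feats.shape[2:],
                             mode="bilinear", align_corners=False)
        fg_score = self.fg_head(feats * mask)
        bg_score = self.bg_head(feats * (1.0 - mask))
        return fg_score, bg_score


if __name__ == "__main__":
    disc = GatedDiscriminator(in_channels=19)
    probs = torch.softmax(torch.randn(2, 19, 128, 256), dim=1)
    mask = torch.rand(2, 1, 128, 256)
    fg, bg = disc(probs, mask)
    print(fg.shape, bg.shape)  # per-patch domain logits for each region
```

In a training loop, the two heads would each receive a standard adversarial loss (e.g., binary cross-entropy on source vs. target predictions), so that foreground and background alignment can be weighted differently; the specific loss weighting used in the paper is not described in the abstract.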
Keywords
Semantic segmentation, Domain adaptation, Gated convolution