Simple but effective techniques to reduce biases

arXiv (2019)

Citations: 18 | Views: 65
Abstract
There have been several studies recently showing that strong natural language inference (NLI) models are prone to relying on unwanted dataset biases, resulting in models which fail to capture the underlying generalization and are likely to perform poorly in real-world scenarios. Biases are identified as statistical cues or superficial heuristics that are correlated with certain labels and effective for the majority of examples, but that fail on more challenging, hard examples. In this work, we propose several learning strategies to train neural models which are more robust to such biases and transfer better to out-of-domain datasets. We first introduce an additive lightweight model which learns dataset biases. We then use its predictions to adjust the loss of the base model to reduce the biases. In other words, our methods down-weight the importance of the biased examples and focus training on hard examples which require grounded reasoning to deduce the label. Our approaches are model agnostic and simple to implement. We experiment on large-scale natural language inference and fact-verification datasets and show that our debiased models obtain significant gains over the baselines on several challenging out-of-domain datasets.
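The loss adjustment described in the abstract can be realized in several ways; one plausible instance is example reweighting, where each example's cross-entropy loss for the base model is scaled by how poorly the bias-only model explains it. The following is a minimal PyTorch sketch of that idea under stated assumptions: the function name debiased_loss, the weighting scheme (1 minus the bias model's probability on the gold label), and the toy inputs are illustrative, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def debiased_loss(main_logits, bias_logits, labels):
    """Down-weight examples that a bias-only model already classifies
    confidently, so the main model focuses on harder examples.

    main_logits: [batch, n_classes] logits from the base NLI model
    bias_logits: [batch, n_classes] logits from the lightweight bias-only model
    labels:      [batch] gold label indices
    """
    # Probability the bias-only model assigns to the gold label.
    bias_probs = F.softmax(bias_logits, dim=-1)
    p_bias_gold = bias_probs.gather(1, labels.unsqueeze(1)).squeeze(1)

    # Weight each example by how much it is NOT explained by the bias model:
    # confidently biased examples get small weights, hard examples large ones.
    weights = (1.0 - p_bias_gold).detach()

    per_example_ce = F.cross_entropy(main_logits, labels, reduction="none")
    return (weights * per_example_ce).mean()


if __name__ == "__main__":
    # Toy usage with random tensors standing in for model outputs.
    torch.manual_seed(0)
    main_logits = torch.randn(4, 3)   # e.g. entailment / neutral / contradiction
    bias_logits = torch.randn(4, 3)   # e.g. from a hypothesis-only bias model
    labels = torch.tensor([0, 2, 1, 0])
    print(debiased_loss(main_logits, bias_logits, labels))
```

In practice the bias-only model would be a weak classifier trained on the biased features alone (for NLI, typically the hypothesis only), and its weights are detached so that only the base model is shaped by the reweighted loss.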