LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model

SemEval@COLING(2020)

Abstract
This paper describes our submission to subtasks A and B of SemEval-2020 Task 4. For subtask A, we use an ALBERT-based model with an improved input form to pick out the statement that conforms to common sense from two candidate statements. For subtask B, we use a multiple-choice model enhanced by a hint sentence mechanism to select, from the given options, the reason why a statement is against common sense. In addition, we propose a novel transfer learning strategy between the subtasks, which helps improve performance. The accuracy scores of our system are 95.6 / 94.9 on the official test set, ranking 7$^{th}$ / 2$^{nd}$ on the Post-Evaluation leaderboard.
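As a rough illustration of the subtask B setup described above (a pretrained model choosing one reason from several options), the following sketch uses Hugging Face Transformers' ALBERT multiple-choice head. It is not the authors' code: the model checkpoint, the example statement, the hint sentence, and the way the hint is concatenated to the input are all assumptions.

```python
# Minimal sketch of a multiple-choice reason selector with ALBERT.
# Checkpoint, example texts, and the hint concatenation are assumptions,
# not the authors' actual input form.
import torch
from transformers import AlbertTokenizer, AlbertForMultipleChoice

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMultipleChoice.from_pretrained("albert-base-v2")

statement = "He put an elephant into the fridge."      # hypothetical nonsensical statement
hint = "Think about the relative sizes of the objects."  # hypothetical hint sentence
options = [
    "An elephant is much bigger than a fridge.",
    "Elephants are usually gray.",
    "A fridge is used to keep food fresh.",
]

# Pair the (statement + hint) context with each candidate reason.
contexts = [f"{statement} {hint}"] * len(options)
enc = tokenizer(contexts, options, return_tensors="pt", padding=True)

# The multiple-choice head expects (batch, num_choices, seq_len) tensors.
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print("Predicted option index:", int(logits.argmax(dim=-1)))
```

In practice the model would first be fine-tuned on the subtask B training data (cross-entropy over the option logits) before its predictions are meaningful; the pretrained checkpoint alone is only a starting point.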
Keywords
commonsense validation, language