Dice Loss for Data-imbalanced NLP Tasks
ACL, pp. 465-476, 2020.
We propose a dice-based loss to narrow the gap between the training objective and the evaluation metrics.
Many NLP tasks such as tagging and machine reading comprehension are faced with a severe data imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms the training. The most commonly used cross entropy (CE) criterion is actually an accuracy-oriented objective.
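As background for the abstract above, here is a minimal sketch of a soft Dice loss for binary classification. The function name, the additive smoothing constant, and the NumPy formulation are illustrative assumptions, not the paper's exact self-adjusting variant:

```python
import numpy as np

def soft_dice_loss(probs, targets, smooth=1.0):
    """Soft Dice loss for binary classification (illustrative sketch).

    probs:   predicted probabilities for the positive class, shape (N,)
    targets: binary ground-truth labels, shape (N,)
    smooth:  additive smoothing constant to avoid division by zero
    """
    probs = np.asarray(probs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    # Soft intersection and the sum of both "masses"
    intersection = np.sum(probs * targets)
    denom = np.sum(probs) + np.sum(targets)
    # Dice coefficient in [0, 1]; loss is its complement
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    return 1.0 - dice

# Perfect predictions give a loss of 0; the loss grows as predictions
# and labels diverge, and it is driven by positive-class overlap rather
# than per-example accuracy, which is why it suits imbalanced data.
print(soft_dice_loss([1.0, 0.0, 1.0], [1, 0, 1]))  # → 0.0
```

Unlike cross entropy, every easy negative (prob ≈ 0, target = 0) contributes nothing to either the intersection or the denominator, so background examples cannot dominate the objective.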