Dice Loss for Data-imbalanced NLP Tasks

ACL, pp. 465-476, 2020.

Summary: We propose a dice-based loss to narrow the gap between the training objective and the evaluation metric.

Abstract:

Many NLP tasks such as tagging and machine reading comprehension face a severe data-imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background (easy-negative) examples overwhelms training. The most commonly used cross-entropy (CE) criterion is in fact an accuracy-oriented objective, which creates a discrepancy between training and test: at training time every instance contributes equally to the objective, whereas at test time the F1 score weights positive examples more heavily. We propose to replace CE with a dice loss, based on the Sørensen-Dice coefficient, which attaches similar importance to false positives and false negatives and is therefore more immune to data imbalance. To further reduce the dominating influence of easy-negative examples, we attach dynamically adjusted weights to training examples that deemphasize easy ones, narrowing the gap between the F1 evaluation metric and the training objective.
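
To make the idea concrete, below is a minimal sketch of a self-adjusting dice loss in PyTorch. This is not the authors' released implementation: the function name, the binary sigmoid setup, and the alpha/gamma hyperparameters are illustrative assumptions based on the paper's description (a smoothed Sørensen-Dice coefficient with a (1 - p) factor that down-weights easy examples).

import torch

def self_adjusting_dice_loss(logits, targets, alpha=1.0, gamma=1.0):
    # Sketch (hypothetical helper) of a self-adjusting dice loss for
    # binary classification.
    # logits:  (N,) raw scores for the positive class
    # targets: (N,) gold labels in {0., 1.}
    # alpha:   exponent controlling how strongly easy examples are down-weighted
    # gamma:   smoothing constant keeping the ratio well-defined
    probs = torch.sigmoid(logits)
    # (1 - p)^alpha * p drives the weight of confidently classified
    # (easy) examples toward zero, so they no longer dominate training.
    weighted = ((1.0 - probs) ** alpha) * probs
    # Smoothed per-example Sørensen-Dice coefficient; the loss is its complement.
    dice = (2.0 * weighted * targets + gamma) / (weighted + targets + gamma)
    return (1.0 - dice).mean()

# Usage: the loss is high when positive examples are poorly classified,
# but largely insensitive to the sheer number of easy negatives.
logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()
print(self_adjusting_dice_loss(logits, targets))

The per-example formulation (rather than a single corpus-level ratio) mirrors the paper's per-instance dice loss; in practice the average is taken over tokens or spans.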
