Revisiting Skip-Gram Negative Sampling Model with Regularization.

arXiv: Computation and Language (2018)

Abstract
We revisit skip-gram negative sampling (SGNS), a popular neural-network-based approach to learning distributed word representations. We first point out an ambiguity issue undermining the SGNS model, in the sense that the word vectors can be entirely distorted without changing the objective value. To resolve this issue, we rectify the SGNS model with quadratic regularization. A theoretical justification, which provides a novel insight into quadratic regularization, is presented. Preliminary experiments are also conducted on Google's analogical reasoning task to support the modified SGNS model.
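To make the regularization idea concrete, the following is a minimal sketch (not the paper's exact formulation) of the per-pair SGNS loss with an added quadratic (L2) penalty on the vectors; the function names and the `lam` hyperparameter are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(w, c, negatives, lam=0.0):
    """SGNS loss for one (word, context) pair with k negative samples.

    With lam > 0, a quadratic regularization term penalizes the vector
    norms; without it, vectors can be rescaled (e.g., w scaled up and c
    scaled down) in ways that distort the embedding while leaving parts
    of the objective unchanged -- the ambiguity the paper addresses.
    """
    # Standard SGNS terms: attract the true pair, repel negatives.
    loss = -np.log(sigmoid(w @ c))
    for n in negatives:
        loss -= np.log(sigmoid(-(w @ n)))
    # Quadratic (L2) regularization over all vectors involved.
    reg = lam * (w @ w + c @ c + sum(n @ n for n in negatives))
    return loss + reg

# Illustrative usage with random vectors.
rng = np.random.default_rng(0)
w, c = rng.normal(size=5), rng.normal(size=5)
negs = [rng.normal(size=5) for _ in range(3)]
base = sgns_loss(w, c, negs, lam=0.0)
regd = sgns_loss(w, c, negs, lam=0.1)
```

With `lam=0.0` the function reduces to the ordinary SGNS objective, so the regularized loss is always at least as large for nonzero vectors.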