Learning low-rank tensors for transitive verbs

Advances in Distributional Semantics Workshop (2015)

Abstract
4 Results

Table 1 displays correlations between the systems' scores and human SVO similarity judgements on the two tasks. Low-rank tensor performance varies greatly across values of R, but is maximized at either R = 10 or R = 20 for both the joint and alternating optimization methods. In six out of eight combinations of dataset and maximal rank, the alternating optimization training method achieves higher results than the joint optimization method. For the combination R = 20 and alternating optimization, which seems to provide the most stable performance, the low-rank tensor achieves results comparable to the unconstrained-rank tensor on both datasets: 0.03 higher on GS2011, and 0.03 lower on KS2013, despite using only 4,800 parameters per verb compared to the 400,000 parameters per verb of the unconstrained tensor model. This is ongoing work, and we plan to continue to compare the low-rank and unconstrained-rank tensor models on other evaluation tasks.
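The parameter savings quoted above follow directly from a rank-R CP-style factorization of the third-order verb tensor: a full tensor of shape d1 x d2 x d3 costs d1*d2*d3 parameters, while a rank-R factorization costs only R*(d1 + d2 + d3). A minimal sketch, assuming dimensions 100 x 100 x 40 (an assumption chosen only because it reproduces the 4,800 and 400,000 counts in the abstract; the actual dimensions are not stated in this excerpt):

```python
import numpy as np

# Assumed mode dimensions (not given in the excerpt; chosen to match the
# abstract's parameter counts of 4,800 vs. 400,000 per verb).
d_subj, d_obj, d_out = 100, 100, 40
R = 20  # maximal rank reported as most stable in the abstract

rng = np.random.default_rng(0)
# One factor matrix per mode; each column is one rank-1 component.
A = rng.standard_normal((d_subj, R))
B = rng.standard_normal((d_obj, R))
C = rng.standard_normal((d_out, R))

# Reconstruct the full verb tensor from its low-rank factors:
# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

low_rank_params = R * (d_subj + d_obj + d_out)  # parameters stored per verb
full_params = d_subj * d_obj * d_out            # unconstrained tensor size
print(T.shape, low_rank_params, full_params)
```

Under these assumed dimensions the low-rank model stores 20 * 240 = 4,800 numbers per verb, against 100 * 100 * 40 = 400,000 for the unconstrained tensor, an ~83x reduction.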