Efficient Elastic Net Regularization for Sparse Linear Models.

arXiv: Learning (2015)

Cited 23 | Viewed 37
Abstract
We extend previous work on efficiently training linear models by applying stochastic updates to non-zero features only, lazily bringing weights current as needed. To date, only the closed form updates for the ℓ1, ℓ∞, and the rarely used ℓ2 norm have been described. We extend this work by showing the proper closed form updates for the popular squared norm ℓ2² and elastic net regularized models. We show a dynamic programming algorithm to calculate the proper elastic net update with only one constant-time subproblem computation per update. Our algorithm handles both fixed and decreasing learning rates and we derive the result for both stochastic gradient descent (SGD) and forward backward splitting (FoBoS). We empirically validate the algorithm, showing that on a bag-of-words dataset with 260,941 features and 88 nonzero features on average per example, our method trains a logistic regression classifier with elastic net regularization 612 times faster than an otherwise identical implementation with dense updates.
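
The lazy-update idea the abstract describes can be sketched concretely. Below is a minimal sketch, assuming fixed-learning-rate SGD on logistic loss with the elastic net penalty λ1|w| + (λ2/2)w²; the class name LazyElasticNetSGD and all parameter names (eta, lam1, lam2) are ours, not the paper's, and the sketch covers only the fixed-rate SGD case, not the decreasing-rate schedules or FoBoS variants that the paper's dynamic program handles. Each regularization-only step maps w ↦ (1 − ηλ2)w − ηλ1·sign(w), truncated at zero, so k skipped steps compose in closed form and a weight can be brought current in O(1) when its feature next appears.

```python
import numpy as np

class LazyElasticNetSGD:
    """Sparse SGD with lazily applied elastic net regularization (sketch)."""

    def __init__(self, n_features, eta=0.1, lam1=1e-4, lam2=1e-4):
        assert eta * lam2 < 1.0, "closed form assumes each reg step is a contraction"
        self.w = np.zeros(n_features)
        self.last = np.zeros(n_features, dtype=np.int64)  # step through which w[j] is current
        self.t = 0
        self.eta, self.lam1, self.lam2 = eta, lam1, lam2

    def _bring_current(self, j):
        """Apply the k = t - last[j] skipped regularization-only steps in O(1).

        With a = 1 - eta*lam2 and b = eta*lam1, k truncated steps give
        |w_k| = max(a**k * |w_0| - b * (1 - a**k) / (1 - a), 0);
        once a weight is truncated to zero it stays there.
        """
        k = self.t - self.last[j]
        if k > 0 and self.w[j] != 0.0:
            a = 1.0 - self.eta * self.lam2
            b = self.eta * self.lam1
            if self.lam2 > 0.0:
                mag = a**k * abs(self.w[j]) - b * (1.0 - a**k) / (1.0 - a)
            else:  # pure l1: the geometric series degenerates to k equal steps
                mag = abs(self.w[j]) - k * b
            self.w[j] = np.sign(self.w[j]) * max(mag, 0.0)
        self.last[j] = self.t

    def step(self, idx, vals, y):
        """One SGD step on a sparse example; y in {-1, +1}, features (idx, vals)."""
        for j in idx:
            self._bring_current(j)  # touched weights are now current
        margin = y * sum(self.w[j] * v for j, v in zip(idx, vals))
        g = -y / (1.0 + np.exp(margin))  # d/d(margin) of log(1 + exp(-y*margin))
        for j, v in zip(idx, vals):
            # loss gradient plus this step's regularization, truncated at zero
            # (a FoBoS-style composition; plain subgradient SGD differs only
            # in how the l1 term is handled at zero)
            u = (1.0 - self.eta * self.lam2) * self.w[j] - self.eta * g * v
            self.w[j] = np.sign(u) * max(abs(u) - self.eta * self.lam1, 0.0)
            self.last[j] = self.t + 1  # current through step t inclusive
        self.t += 1
```

At prediction time every weight would first be brought current the same way. The point of the scheme is that per-update cost is proportional to the number of nonzero features in the example rather than to the full dimensionality, which is where the reported 612× speedup on the sparse bag-of-words dataset comes from.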