# Projection-free Online Learning over Strongly Convex Sets

We introduce a strongly convex variant of online Frank-Wolfe (OFW), and prove that it achieves a regret bound of O(T^{2/3}) over general convex sets and a better regret bound of O(√T) over strongly convex sets

Abstract:

To efficiently solve online problems with complicated constraints, projection-free algorithms including online Frank-Wolfe (OFW) and its variants have received significant interest recently. However, in the general case, existing projection-free algorithms only achieved the regret bound of $O(T^{3/4})$, which is worse than the regret of...

Introduction
Highlights
• Online convex optimization (OCO) is a powerful framework that has been used to model and solve problems from diverse domains such as online routing (Awerbuch and Kleinberg 2004, 2008), online portfolio selection (Blum and Kalai 1999; Agarwal et al. 2006) and prediction from expert advice (Cesa-Bianchi et al. 1997; Freund et al. 1997)
• We study OCO with strongly convex losses, and propose a strongly convex variant of online Frank-Wolfe (OFW), named SC-OFW
• We present an improved regret bound for OFW over strongly convex sets
• Since this paper considers OCO over strongly convex sets, our SC-OFW adopts the linear optimization step utilized in the original OFW, and simplifies F_t(x) in (4) to the cumulative loss Σ_{τ=1}^{t} f_τ(x)
• For strongly convex losses, we introduce a strongly convex variant of OFW, and prove that it achieves a regret bound of O(T^{2/3}) over general convex sets and a better regret bound of O(√T) over strongly convex sets
• An open question is whether the regret of OFW and its strongly convex variant over strongly convex sets can be further improved if the losses are smooth
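The linear optimization step highlighted above is what makes OFW projection-free: each round costs one call to a linear minimization oracle instead of a projection. A minimal sketch over an ℓ2 ball (a canonical strongly convex set), where the oracle has a closed form; the surrogate follows Hazan and Kale (2012), but the ball radius, η, and the step-size schedule σ_t are illustrative assumptions, not the tuned values from the paper:

```python
import numpy as np

def linear_opt_ball(g, r=1.0):
    """Linear optimization oracle over the l2 ball of radius r:
    returns argmin_{||v|| <= r} <g, v>."""
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return np.zeros_like(g)
    return -r * g / norm

def ofw(grad_fns, dim, r=1.0, eta=0.1):
    """Sketch of online Frank-Wolfe: one linear optimization per
    round, no projection.  grad_fns[t](x) returns the gradient of
    the round-t loss f_t at x."""
    x1 = np.zeros(dim)          # initial decision, assumed feasible
    x = x1.copy()
    grad_sum = np.zeros(dim)    # running sum of observed gradients
    decisions = []
    for t, grad in enumerate(grad_fns, start=1):
        decisions.append(x.copy())
        grad_sum += grad(x)
        # Gradient of the surrogate F_t(x) = eta*<grad_sum, x> + ||x - x1||^2.
        nabla_F = eta * grad_sum + 2.0 * (x - x1)
        v = linear_opt_ball(nabla_F, r)     # linear step instead of projection
        sigma = min(1.0, 2.0 / (t + 2))     # classical Frank-Wolfe schedule
        x = x + sigma * (v - x)             # convex combination stays in the ball
    return decisions
```

Because each iterate is a convex combination of feasible points, no explicit projection is ever needed to maintain feasibility.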
Results
• The authors first introduce necessary preliminaries including common notations, definitions and assumptions.
• The authors present an improved regret bound for OFW over strongly convex sets.
• The authors introduce the SC-OFW algorithm for strongly convex OCO as well as its theoretical guarantees.
• The convex set K belongs to a finite-dimensional vector space E, and the authors denote the l2 norm of any vector x ∈ K by ‖x‖.
• The authors recall two standard definitions for smooth and strongly convex functions (Boyd and Vandenberghe 2004), respectively
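For completeness, the two standard definitions referenced above (Boyd and Vandenberghe 2004) can be stated as follows, together with the definition of a strongly convex set (Garber and Hazan 2015) that is central to this paper; the constants β and α here are generic parameters, not the paper's specific values:

```latex
% f is beta-smooth over K if, for all x, y in K,
f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + \frac{\beta}{2}\,\|y - x\|^2 .
% f is alpha-strongly convex over K if, for all x, y in K,
f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \frac{\alpha}{2}\,\|y - x\|^2 .
% K is an alpha-strongly convex set if, for all x, y in K,
% all gamma in [0, 1], and any unit vector z,
\gamma x + (1 - \gamma) y + \gamma (1 - \gamma)\,\frac{\alpha}{2}\,\|x - y\|^2\, z \in K .
```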
Conclusion
• The authors first prove that the classical OFW algorithm attains an O(T^{2/3}) regret bound for OCO over strongly convex sets, which is better than the O(T^{3/4}) regret bound for the general OCO.
• An open question is whether the regret of OFW and its strongly convex variant over strongly convex sets can be further improved if the losses are smooth.
• The authors note that Hazan and Minasyan (2020) have proposed a projection-free algorithm for OCO over general convex sets, and established an improved regret bound of O(T^{2/3}) by taking advantage of the smoothness
Summary
• ## Introduction:

• The player chooses a decision xt from a convex set K.
• The goal of the player is to choose decisions so that the regret, defined as R(T) = Σ_{t=1}^{T} f_t(x_t) − min_{x∈K} Σ_{t=1}^{T} f_t(x), is minimized.
• Various algorithms such as online gradient descent (OGD) (Zinkevich 2003), online Newton step (Hazan, Agarwal, and Kale 2007) and follow-the-regularized-leader (Shalev-Shwartz 2007; Shalev-Shwartz and Singer 2007) have been proposed to yield optimal regret bounds under different scenarios
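For contrast with the projection-free methods this paper studies, the online gradient descent algorithm mentioned in the last bullet can be sketched as follows; the ℓ2-ball decision set, the 1/√t step size, and the helper names are illustrative assumptions:

```python
import numpy as np

def project_ball(x, r=1.0):
    """Euclidean projection onto the l2 ball of radius r.  For more
    complicated decision sets this projection is the expensive step
    that projection-free algorithms avoid."""
    norm = np.linalg.norm(x)
    return x if norm <= r else r * x / norm

def ogd(grad_fns, dim, r=1.0):
    """Projected online gradient descent (Zinkevich 2003) with a
    simple 1/sqrt(t) step size.  grad_fns[t](x) returns the gradient
    of the round-t loss f_t at x."""
    x = np.zeros(dim)
    decisions = []
    for t, grad in enumerate(grad_fns, start=1):
        decisions.append(x.copy())
        x = project_ball(x - grad(x) / np.sqrt(t), r)  # step, then project
    return decisions
```

On a fixed quadratic loss this iteration homes in on the minimizer quickly; the point of the sketch is that every round pays for a projection, whereas OFW pays only for a linear optimization.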
Related work
• In this section, we briefly review the related work on projection-free algorithms for OCO.

OFW (Hazan and Kale 2012; Hazan 2016) is the first projection-free algorithm for OCO, and attains a regret bound of O(T^{3/4}). Recently, some studies have proposed projection-free online algorithms that attain better regret bounds for special cases of OCO. Specifically, if the decision set is a polytope, Garber and Hazan (2016) proposed variants of OFW that enjoy an O(√T) regret bound for convex losses and an O(log T) regret bound for strongly convex losses. For OCO over smooth sets, Levy and Krause (2019) proposed a projection-free variant of OGD by devising a fast approximate projection for such sets, and established O(√T) and O(log T) regret bounds for convex and strongly convex losses, respectively. Besides these improvements for OCO over special decision sets, Hazan and Minasyan (2020) proposed a randomized projection-free algorithm for OCO with smooth losses, and achieved an expected regret bound of O(T^{2/3}).
Reference
• Agarwal, A.; Hazan, E.; Kale, S.; and Schapire, R. E. 2006. Algorithms for portfolio management based on the Newton method. In Proceedings of the 23rd International Conference on Machine Learning, 9–16.
• Awerbuch, B.; and Kleinberg, R. 2008. Online linear optimization and adaptive routing. Journal of Computer and System Sciences 74(1): 97–114.
• Awerbuch, B.; and Kleinberg, R. D. 2004. Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing, 45–53.
• Blum, A.; and Kalai, A. 1999. Universal portfolios with and without transaction costs. Machine Learning 35(3): 193–205.
• Boyd, S.; and Vandenberghe, L. 2004. Convex Optimization. Cambridge University Press.
• Bubeck, S.; Dekel, O.; Koren, T.; and Peres, Y. 2015. Bandit convex optimization: √T regret in one dimension. In Proceedings of the 28th Conference on Learning Theory, 266–278.
• Cesa-Bianchi, N.; Freund, Y.; Haussler, D.; Helmbold, D. P.; Schapire, R. E.; and Warmuth, M. K. 1997. How to use expert advice. Journal of the ACM 44(3): 427–485.
• Chen, L.; Zhang, M.; and Karbasi, A. 2019. Projection-free bandit convex optimization. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2047–2056.
• Duchi, J. C.; Agarwal, A.; and Wainwright, M. J. 2011. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Transactions on Automatic Control 57(3): 592–606.
• Flaxman, A. D.; Kalai, A. T.; and McMahan, H. B. 2005. Online convex optimization in the bandit setting: Gradient descent without a gradient. In Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms, 385–394.
• Frank, M.; and Wolfe, P. 1956. An algorithm for quadratic programming. Naval Research Logistics Quarterly 3(1–2): 95–110.
• Freund, Y.; Schapire, R. E.; Singer, Y.; and Warmuth, M. K. 1997. Using and combining predictors that specialize. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, 334–343.
• Garber, D.; and Hazan, E. 2015. Faster rates for the Frank-Wolfe method over strongly-convex sets. In Proceedings of the 32nd International Conference on Machine Learning, 541–549.
• Garber, D.; and Hazan, E. 2016. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. SIAM Journal on Optimization 26(3): 1493–1528.
• Garber, D.; and Kretzu, B. 2020a. Improved regret bounds for projection-free bandit convex optimization. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2196–2206.
• Garber, D.; and Kretzu, B. 2020b. Revisiting projection-free online learning: the strongly convex case. ArXiv e-prints arXiv: 2010.07572.
• Hazan, E. 2008. Sparse approximate solutions to semidefinite programs. In Latin American Symposium on Theoretical Informatics, 306–316.
• Hazan, E. 2016. Introduction to online convex optimization. Foundations and Trends in Optimization 2(3–4): 157–325.
• Hazan, E.; Agarwal, A.; and Kale, S. 2007. Logarithmic regret algorithms for online convex optimization. Machine Learning 69(2): 169–192.
• Hazan, E.; and Kale, S. 2012. Projection-free online learning. In Proceedings of the 29th International Conference on Machine Learning, 1843–1850.
• Hazan, E.; and Luo, H. 2016. Variance-reduced and projection-free stochastic optimization. In Proceedings of the 33rd International Conference on Machine Learning, 1263–1271.
• Hazan, E.; and Minasyan, E. 2020. Faster projection-free online learning. In Proceedings of the 33rd Annual Conference on Learning Theory, 1877–1893.
• Hosseini, S.; Chapman, A.; and Mesbahi, M. 2013. Online distributed optimization via dual averaging. In 52nd IEEE Conference on Decision and Control, 1484–1489.
• Jaggi, M. 2013. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning, 427–435.
• Levy, K. Y.; and Krause, A. 2019. Projection free online learning over smooth sets. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 1458–1466.
• Shalev-Shwartz, S. 2007. Online Learning: Theory, Algorithms, and Applications. Ph.D. thesis, The Hebrew University of Jerusalem.
• Shalev-Shwartz, S. 2011. Online learning and online convex optimization. Foundations and Trends in Machine Learning 4(2): 107–194.
• Shalev-Shwartz, S.; and Singer, Y. 2007. A primal-dual perspective of online learning algorithm. Machine Learning 69(2–3): 115–142.
• Wan, Y.; Tu, W.-W.; and Zhang, L. 2020. Projection-free distributed online convex optimization with O(√T) communication complexity. In Proceedings of the 37th International Conference on Machine Learning.
• Zhang, W.; Zhao, P.; Zhu, W.; Hoi, S. C. H.; and Zhang, T. 2017. Projection-free distributed online learning in networks. In Proceedings of the 34th International Conference on Machine Learning, 4054–4062.
• Zinkevich, M. 2003. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, 928–936.