Gradient Descent Learns Linear Dynamical Systems

The Journal of Machine Learning Research (2019)

Abstract
We prove that stochastic gradient descent efficiently converges to the global optimizer of the maximum likelihood objective of an unknown linear time-invariant dynamical system from a sequence of noisy observations generated by the system. Even though the objective function is non-convex, we provide polynomial running time and sample complexity bounds under strong but natural assumptions. Linear systems identification has been studied for many decades, yet, to the best of our knowledge, these are the first polynomial guarantees for the problem we consider.
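To make the setting concrete, here is a minimal sketch, not the authors' algorithm: it fits an unknown linear time-invariant (LTI) system by plain SGD (via PyTorch autograd) on the squared prediction error of noisy output sequences. The dimensions, step size, noise level, and stable initialization below are illustrative assumptions; the paper's guarantees rely on further conditions (such as keeping the iterates within a suitable set of stable systems) that this sketch omits.

```python
# Minimal sketch (not the paper's exact algorithm): fit an unknown LTI system
# by SGD on the squared prediction error of its outputs. All dimensions,
# the step size, and the noise level below are illustrative assumptions.
import torch

torch.manual_seed(0)
n, T = 3, 50  # hidden-state dimension and rollout length (assumed)

def stable(scale=0.9):
    """Random n x n matrix rescaled to spectral radius < 1 (a stable system)."""
    M = torch.randn(n, n)
    return M * (scale / torch.linalg.eigvals(M).abs().max())

# Ground-truth SISO system: h_{t+1} = A h_t + B x_t,  y_t = C h_t + D x_t + noise.
A_true, B_true = stable(), torch.randn(n, 1)
C_true, D_true = torch.randn(1, n), torch.randn(1, 1)

def simulate(A, B, C, D, x, noise_std=0.0):
    """Roll out the system on input sequence x and return the output sequence."""
    h = torch.zeros(n, 1)
    ys = []
    for t in range(x.shape[0]):
        u = x[t].reshape(1, 1)
        ys.append(C @ h + D @ u + noise_std * torch.randn(1, 1))
        h = A @ h + B @ u
    return torch.cat(ys)

# Learnable candidate system, initialized stable; the paper additionally keeps
# the iterates inside a set of stable systems, which this sketch omits.
params = [stable(0.5).requires_grad_(True),
          torch.randn(n, 1, requires_grad=True),
          torch.randn(1, n, requires_grad=True),
          torch.randn(1, 1, requires_grad=True)]
opt = torch.optim.SGD(params, lr=1e-2)

for step in range(2000):
    x = torch.randn(T)                               # fresh Gaussian inputs
    with torch.no_grad():                            # noisy observations of the truth
        y = simulate(A_true, B_true, C_true, D_true, x, noise_std=0.1)
    loss = ((simulate(*params, x) - y) ** 2).mean()  # squared prediction error
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(params, 1.0)      # guard against blow-ups
    opt.step()
    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss.item():.4f}")
```

The non-convexity mentioned in the abstract is visible here: the prediction at time t involves powers of the state-transition matrix, so the objective is a high-degree polynomial in the system parameters rather than a convex function.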
Keywords
non-convex optimization, linear dynamical system, stochastic gradient descent, generalization bounds, time series, over-parameterization