Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints

NeurIPS 2020


Abstract

Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics. Recent works improve generalization for predicting trajectories by learning the Hamiltonian or Lagrangian of a system rather than the differential equations directly. While these methods encode the constraints of the systems in intrinsic coordinates, we show that embedding the system into Cartesian coordinates and enforcing the constraints explicitly with Lagrange multipliers dramatically simplifies the learning problem.
Introduction
  • Although the behavior of physical systems can be complex, it can be derived from more abstract functions that succinctly summarize the underlying physics.
  • Recent work has shown that physical systems can be modeled by learning their Hamiltonians and Lagrangians from data [9, 14, 20].
  • The authors' approach simplifies the functional form of the Hamiltonian and the Lagrangian, allowing them to learn complicated behavior more accurately, as shown in Figure 1.
Highlights
  • Although the behavior of physical systems can be complex, it can be derived from more abstract functions that succinctly summarize the underlying physics
  • (2) We show how to learn Hamiltonians and Lagrangians in Cartesian coordinates via explicit constraints, using networks that we term Constrained Hamiltonian Neural Networks (CHNNs) and Constrained Lagrangian Neural Networks (CLNNs). (3) We show how to apply our method to arbitrary rigid extended-body systems by showing how such systems can be embedded in Cartesian coordinates (the constrained dynamics are sketched after this list)
  • All models perform progressively worse as N increases, but CHNN and CLNN consistently outperform the competing methods, with an increasing gap in the relative error as N increases and the dynamics become increasingly complex. We present Figure 5 in linear scale in Appendix B, emphasizing that CHNN and CLNN outperform the baselines by a wide margin
  • We have demonstrated that Cartesian coordinates combined with explicit constraints make the Hamiltonians and Lagrangians of physical systems easier to learn, improving the data-efficiency and trajectory prediction accuracy by two orders of magnitude
  • We have shown how to embed arbitrary extended body systems into purely Cartesian coordinates
  • Our approach is applicable to rigid body systems where the state is fully observed in a 3D space, such as in robotics
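For reference, the following is a hedged sketch of how explicit constraints enter the dynamics in the constrained Hamiltonian and Lagrangian frameworks that the paper builds on (Dirac [5]; physics engines [6, 17]); the paper's exact parametrization may differ in detail. With Cartesian state z = (x, p), holonomic constraints \Phi(x) = 0 are extended to phase space as \Psi(z) = (\Phi(x),\, D\Phi(x)\, M^{-1} p), and the unconstrained flow is projected onto the constraint surface:

    \dot{z} = P(z)\, J \nabla H(z), \qquad J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}, \qquad P = I - J\, D\Psi^{\top} \big(D\Psi\, J\, D\Psi^{\top}\big)^{-1} D\Psi.

On the Lagrangian side, with constant mass matrix M and force f = -\nabla V(x), constraint forces enter through Lagrange multipliers \lambda:

    M\ddot{x} = f + D\Phi^{\top}\lambda, \qquad \lambda = -\big(D\Phi\, M^{-1} D\Phi^{\top}\big)^{-1}\Big(D\Phi\, M^{-1} f + \big(\tfrac{d}{dt} D\Phi\big)\dot{x}\Big).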
Results
  • (1) The authors demonstrate analytically that embedding problems in Cartesian coordinates simplifies the Hamiltonian and the Lagrangian that must be learned, resulting in systems that can be accurately modelled by neural networks with 100 times less data.
  • The differential equations can be derived from one of two scalar functions, a Hamiltonian H or a Lagrangian L, depending on the formalism.
  • The authors' method relies on explicit constraints to learn Hamiltonians and Lagrangians in Cartesian coordinates.
  • The authors use a simple example to demonstrate how Cartesian coordinates can vastly simplify the functions that the models must learn.
  • The constant mass matrix M is a general property of using Cartesian coordinates for these systems as shown in Section 6.
  • While Cartesian coordinates reduce the functional complexity of the Hamiltonian and the Lagrangian, they do not by themselves encode the constraints of the system.
  • Since mechanical systems in Cartesian coordinates have separable Hamiltonians and Lagrangians with a constant M, the method can parametrize M^{-1} with a learned positive semi-definite matrix rather than with a neural network, as is usually done [2, 14] (see the sketch after this list).
  • In Hamiltonian and Lagrangian mechanics, the authors may freely use any set of coordinates that describe the system, as long as the constraints are either implicitly or explicitly enforced.
  • The authors use a more specialized parametrization of M that is block diagonal but still fully general even when the ground-truth Hamiltonian or Lagrangian is unknown, as shown in Equation 9.
  • All models perform progressively worse as N increases, but CHNN and CLNN consistently outperform the competing methods, with an increasing gap in the relative error as N increases and the dynamics become increasingly complex. The authors present Figure 5 in linear scale in Appendix B, emphasizing that CHNN and CLNN outperform the baselines by a wide margin.
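To make the separable-Hamiltonian point above concrete, here is a minimal PyTorch-style sketch of H(x, p) = 0.5 p^T M^{-1} p + V(x), with M^{-1} parametrized as a learned positive semi-definite matrix through a lower-triangular factor and V as a small network. The class name and layer sizes are illustrative assumptions, not the authors' implementation (which uses the block-diagonal form of Equation 9).

    import torch
    import torch.nn as nn

    class SeparableHamiltonian(nn.Module):
        # H(x, p) = 0.5 * p^T M^{-1} p + V(x), with a constant learned
        # PSD inverse mass matrix M^{-1} = L L^T and a network potential V.
        # Illustrative sketch, not the authors' code.
        def __init__(self, dim, hidden=128):
            super().__init__()
            self.L = nn.Parameter(torch.eye(dim))  # factor of M^{-1}
            self.V = nn.Sequential(
                nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
            )

        def M_inv(self):
            L = torch.tril(self.L)  # keep the factor lower triangular
            return L @ L.T          # positive semi-definite by construction

        def forward(self, x, p):
            kinetic = 0.5 * (p @ self.M_inv() * p).sum(-1)
            return kinetic + self.V(x).squeeze(-1)

Because M is constant in Cartesian coordinates, the kinetic term requires no network evaluation, which is part of why this formulation is easier to learn.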
Conclusion
  • The authors have demonstrated that Cartesian coordinates combined with explicit constraints make the Hamiltonians and Lagrangians of physical systems easier to learn, improving the data-efficiency and trajectory prediction accuracy by two orders of magnitude.
  • Cartesian coordinates are only possible for systems in physical space, which precludes the method from simplifying learning in some Hamiltonian systems like the Lotka-Volterra equations.
  • The authors hope that this approach can inspire the handling of other kinds of constraints, such as gauge constraints in modeling electromagnetism. Although the method requires the constraints to be known, it may be possible to model the constraints with neural networks and propagate gradients through the Jacobian matrices to learn the constraints directly from data
Related work
  • In addition to the work on learning physical systems above, Chen et al. [2] showed how symplectic integration and recurrent networks stabilize Hamiltonian learning, including on stiff dynamics. Finzi et al. [7] showed how learned dynamics can be made to conserve invariants such as linear and angular momentum by imposing symmetries on the learned Hamiltonian. Zhong et al. [21] showed how to extend HNNs to dissipative systems, and Cranmer et al. [3] showed with LNNs how DeLaN-style models could be generalized beyond mechanical systems, for example to special relativity.

    Our method relies on explicit constraints to learn Hamiltonians and Lagrangians in Cartesian coordinates. Constrained Hamiltonian mechanics was developed by Dirac [5] for canonical quantization — see Date [4] for an introduction. The framework for constrained Lagrangians is often used in physics engines and robotics [6, 17] — see LaValle [10] for an introduction. However, our paper is the first to propose learning Hamiltonians and Lagrangians with explicit constraints. Our approach leads to two orders of magnitude improvement in accuracy and sample efficiency over the state-of-the-art alternatives, especially on chaotic systems and 3D extended-body systems.
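As a worked instance of this framework (a sketch under stated assumptions, not the authors' implementation), consider a planar pendulum in Cartesian coordinates x in R^2 with the holonomic constraint phi(x) = 0.5 (|x|^2 - ell^2) = 0. Differentiating the constraint twice along the trajectory and solving for the Lagrange multiplier gives the constrained acceleration directly:

    import numpy as np

    def pendulum_accel(x, v, m=1.0, g=9.81):
        # Constraint phi(x) = 0.5 * (|x|^2 - ell^2) = 0 gives Dphi = x^T;
        # differentiating twice along the trajectory: x.a + |v|^2 = 0.
        # Substituting m*a = f + lam*x and solving for the multiplier lam.
        f = np.array([0.0, -m * g])   # external force (gravity)
        Dphi = x                      # constraint Jacobian, as a vector
        lam = -(Dphi @ f / m + v @ v) / (Dphi @ Dphi / m)
        return (f + lam * Dphi) / m   # constrained acceleration

Integrating this acceleration with any ODE solver keeps the mass on the circle of radius ell up to integrator drift; in the learned setting, f would come from the gradient of a network potential rather than a fixed gravity term.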
Funding
  • This research is supported by an Amazon Research Award, Facebook Research, an Amazon Machine Learning Research Award, NSF I-DISRE 193471, NIH R01 DA04876401A1, NSF IIS-1910266, and NSF 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science.
References
  • [1] Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pages 6571–6583, 2018.
  • [2] Zhengdao Chen, Jianyu Zhang, Martin Arjovsky, and Léon Bottou. Symplectic recurrent neural networks. arXiv preprint arXiv:1909.13334, 2019.
  • [3] Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. Lagrangian neural networks. arXiv preprint arXiv:2003.04630, 2020.
  • [4] Ghanashyam Date. Lectures on constrained systems. arXiv preprint arXiv:1010.2062, 2010.
  • [5] Paul Adrien Maurice Dirac. Generalized Hamiltonian dynamics. Canadian Journal of Mathematics, 2:129–148, 1950.
  • [6] Roy Featherstone. Rigid Body Dynamics Algorithms. Springer, 2014.
  • [7] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to Lie groups on arbitrary continuous data. arXiv preprint arXiv:2002.12880, 2020.
  • [8] Ayush Garg and Sammed Shantinath Kagi. NeurIPS 2019 reproducibility challenge: Hamiltonian neural networks. NeurIPS 2019 Reproducibility Challenge, 2019.
  • [9] Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. In Advances in Neural Information Processing Systems, pages 15353–15363, 2019.
  • [10] Steven M. LaValle. Planning Algorithms. Cambridge University Press, 2006.
  • [11] Benedict J. Leimkuhler and Robert D. Skeel. Symplectic numerical integrators in constrained Hamiltonian systems. Journal of Computational Physics, 112(1):117–125, 1994.
  • [12] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
  • [13] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
  • [14] Michael Lutter, Christian Ritter, and Jan Peters. Deep Lagrangian networks: Using physics as model prior for deep learning. arXiv preprint arXiv:1907.04490, 2019.
  • [15] David J. C. MacKay. Bayesian model comparison and backprop nets. In Advances in Neural Information Processing Systems, pages 839–846, 1992.
  • [16] Wesley J. Maddox, Gregory Benton, and Andrew Gordon Wilson. Rethinking parameter counting in deep models: Effective dimensionality revisited. arXiv preprint arXiv:2003.02139, 2020.
  • [17] Richard M. Murray, Zexiang Li, and S. Shankar Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.
  • [18] Alvaro Sanchez-Gonzalez, Victor Bapst, Kyle Cranmer, and Peter Battaglia. Hamiltonian graph networks with ODE integrators. arXiv preprint arXiv:1909.12790, 2019.
  • [19] David E. Stewart. Rigid-body dynamics with friction and impact. SIAM Review, 42(1):3–39, 2000.
  • [20] Yaofeng Desmond Zhong, Biswadip Dey, and Amit Chakraborty. Symplectic ODE-Net: Learning Hamiltonian dynamics with control. arXiv preprint arXiv:1909.12077, 2019.
  • [21] Yaofeng Desmond Zhong, Biswadip Dey, and Amit Chakraborty. Dissipative SymODEN: Encoding Hamiltonian dynamics with dissipation and control into deep learning. arXiv preprint arXiv:2002.08860, 2020.
  • [22] Aiqing Zhu, Pengzhan Jin, and Yifa Tang. Deep Hamiltonian networks based on symplectic integrators. arXiv preprint arXiv:2004.13830, 2020.
Authors
Marc Finzi
Ke Alexander Wang