Learning to Simulate Complex Physics with Graph Networks

ICML 2020.

Cited by 10 | Views 136
Keywords
Physical simulation · traditional simulator · complex dynamics · smoothed particle hydrodynamics · position-based dynamics · (18+ more)
TL;DR
Our experimental results show that our single "Graph Network-based Simulators" (GNS) architecture can learn to simulate the dynamics of fluids, rigid solids, and deformable materials, interacting with one another, using tens of thousands of particles over thousands of time steps.

Abstract

Here we present a general framework for learning simulation, and provide a single model implementation that yields state-of-the-art performance across a variety of challenging physical domains, involving fluids, rigid solids, and deformable materials interacting with one another. Our framework---which we term "Graph Network-based Simulators" (GNS)---…

Introduction
  • Realistic simulators of complex physics are invaluable to many scientific and engineering disciplines, but traditional simulators can be very expensive to create and use.
  • An attractive alternative to traditional simulators is to use machine learning to train simulators directly from observed data, yet the large state spaces and complex dynamics have been difficult for standard end-to-end learning approaches to overcome.
  • The authors' framework imposes strong inductive biases: rich physical states are represented by graphs of interacting particles, and complex dynamics are approximated by learned message-passing among nodes.
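The inductive bias described above (particles as graph nodes, dynamics as learned message-passing) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the radius-based connectivity, the feature sizes, and the single-layer weight matrices standing in for trained MLPs are all illustrative assumptions.

```python
import numpy as np

def build_graph(positions, radius):
    """Connect every pair of particles closer than `radius` (no self-edges)."""
    n = len(positions)
    senders, receivers = [], []
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(positions[i] - positions[j]) < radius:
                senders.append(i)
                receivers.append(j)
    return np.array(senders), np.array(receivers)

def message_passing_step(node_feats, senders, receivers, w_edge, w_node):
    """One round of message passing: edge messages -> sum at receivers -> node update."""
    # Compute a message for each edge from the concatenated sender/receiver features.
    edge_in = np.concatenate([node_feats[senders], node_feats[receivers]], axis=1)
    messages = np.tanh(edge_in @ w_edge)
    # Permutation-invariant aggregation: sum incoming messages per receiving node.
    agg = np.zeros((node_feats.shape[0], messages.shape[1]))
    np.add.at(agg, receivers, messages)
    # Update each node from its own features plus the aggregated messages.
    node_in = np.concatenate([node_feats, agg], axis=1)
    return np.tanh(node_in @ w_node)
```

In the paper's GNS model, the edge and node functions are trained MLPs, several such rounds are stacked in the processor, and a decoder reads out per-particle accelerations that are integrated to update positions.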
Highlights
  • Realistic simulators of complex physics are invaluable to many scientific and engineering disciplines, but traditional simulators can be very expensive to create and use.
  • Our framework imposes strong inductive biases: rich physical states are represented by graphs of interacting particles, and complex dynamics are approximated by learned message-passing among nodes.
  • We explored how our Graph Network-based Simulators (GNS) model learns to simulate on datasets containing three diverse, complex physical materials: water as a barely damped fluid, chaotic in nature; sand as a granular material with complex frictional behavior; and “goop” as a viscous, plastically deformable material.
  • Our main findings are that the GNS model can learn accurate, high-resolution, long-term simulations of different fluids, deformables, and rigid solids, and that it can generalize well beyond training to much longer, larger, and more challenging settings.
  • We present a general-purpose machine learning framework for learning to simulate complex systems, based on particle-based representations of physics and learned message-passing on graphs.
  • Our experimental results show that our single GNS architecture can learn to simulate the dynamics of fluids, rigid solids, and deformable materials, interacting with one another, using tens of thousands of particles over thousands of time steps.
Methods
  • The authors explored how the GNS model learns to simulate on datasets containing three diverse, complex physical materials: water as a barely damped fluid, chaotic in nature; sand as a granular material with complex frictional behavior; and “goop” as a viscous, plastically deformable material.
  • These materials have very different behavior and, in most simulators, require implementing separate material models or even entirely different simulation algorithms.
  • The authors used SPlisHSPlasH (Bender & Koschier, 2015), an SPH-based fluid simulator with strict volume preservation, to generate this dataset.
Results
  • The authors' main findings are that the GNS model can learn accurate, high-resolution, long-term simulations of different fluids, deformables, and rigid solids, and that it can generalize well beyond training to much longer, larger, and more challenging settings.
  • In Section 5.5 below, the authors compare the GNS model to two recent, related approaches, and find that their approach is simpler, more generally applicable, and more accurate.
  • To challenge the robustness of the architecture, the authors used a single set of model hyperparameters for training across all of the experiments.
Conclusion
  • The paper presents a general-purpose machine learning framework for learning to simulate complex systems, based on particle-based representations of physics and learned message-passing on graphs.
  • The authors' experimental results show that the single GNS architecture can learn to simulate the dynamics of fluids, rigid solids, and deformable materials, interacting with one another, using tens of thousands of particles over thousands of time steps.
  • There are natural ways to incorporate stronger, generic physical knowledge into the framework, such as Hamiltonian mechanics (Sanchez-Gonzalez et al., 2019) and rich, architecturally imposed symmetries.
  • Differentiable simulators will be valuable for solving inverse problems, by optimizing not only for forward prediction but for inverse objectives as well.
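The inverse-problem idea in the last bullet can be made concrete with a toy example: when the forward simulator is differentiable, an inverse objective (here, hitting a target final state) can be optimized by gradient descent on the initial conditions. The 1-D gravity dynamics, step counts, and closed-form gradient below are illustrative assumptions, not anything from the paper.

```python
def simulate(v0, steps=100, dt=0.01, g=-9.8):
    """Toy differentiable forward simulation: 1-D particle under gravity."""
    x, v = 0.0, v0
    for _ in range(steps):
        v = v + g * dt
        x = x + v * dt
    return x

def solve_inverse(target, v0=0.0, lr=0.5, iters=50):
    """Gradient descent on the initial velocity so the final position hits `target`."""
    for _ in range(iters):
        # For these linear dynamics, d(final_x)/d(v0) = steps * dt, so the
        # gradient of the squared error is available in closed form.
        grad = 2.0 * (simulate(v0) - target) * (100 * 0.01)
        v0 -= lr * grad
    return v0
```

With a learned GNS-style simulator, the same loop would backpropagate through the rollout instead of using a hand-derived gradient.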
Tables
  • Table 1: List of the maximum number of particles N, sequence length K, and quantitative model accuracy (MSE) on the held-out test set. All domain names are also hyperlinks to the video website. Note that, since K varies across datasets, the errors are not directly comparable to one another.
Related Work
  • Our approach focuses on particle-based simulation, which is used widely across science and engineering, e.g., computational fluid dynamics and computer graphics. States are represented as a set of particles, which encode mass, material, movement, etc. within local regions of space. Dynamics are computed on the basis of particles’ interactions within their local neighborhoods. One popular particle-based method for simulating fluids is “smoothed particle hydrodynamics” (SPH) (Monaghan, 1992), which evaluates pressure and viscosity forces around each particle, and updates particles’ velocities and positions accordingly. Other techniques, such as “position-based dynamics” (PBD) (Müller et al., 2007) and the “material point method” (MPM) (Sulsky et al., 1995), are more suitable for interacting, deformable materials. In PBD, incompressibility and collision dynamics involve resolving pairwise distance constraints between particles, and directly predicting their position changes. Several differentiable particle-based simulators have recently appeared, e.g., DiffTaichi (Hu et al., 2019), PhiFlow (Holl et al., 2020), and Jax-MD (Schoenholz & Cubuk, 2019), which can backpropagate gradients through the architecture.
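As a rough illustration of the SPH pattern described above (evaluate kernel-weighted densities and pressure forces around each particle, then update velocities and positions), here is a schematic NumPy step. The unnormalized poly6-style kernel, the stiffness constant, and the simplified pressure-gradient term are illustrative assumptions; production SPH solvers such as SPlisHSPlasH are far more careful about kernels, normalization, and incompressibility.

```python
import numpy as np

def sph_step(pos, vel, dt=0.001, h=0.1, rest_density=1000.0, k=1.0, mass=1.0):
    """One schematic SPH update: density -> pressure -> pairwise forces -> integrate."""
    n = len(pos)
    gravity = np.array([0.0, -9.8])
    # Kernel-weighted density at each particle from neighbors within support h.
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    w = np.where(dist < h, (h**2 - dist**2) ** 3, 0.0)  # poly6-style weight (unnormalized)
    density = mass * w.sum(axis=1)
    # Equation-of-state pressure relative to the rest density.
    pressure = k * (density - rest_density)
    # Symmetrized pairwise pressure force along each particle-pair direction (simplified).
    forces = np.tile(gravity * mass, (n, 1)).astype(float)
    for i in range(n):
        for j in range(n):
            if i != j and 0 < dist[i, j] < h:
                direction = (pos[i] - pos[j]) / dist[i, j]
                mag = mass * (pressure[i] + pressure[j]) / (2 * density[j]) * (h - dist[i, j]) ** 2
                forces[i] += mag * direction
    # Symplectic Euler integration: update velocity, then position.
    vel = vel + dt * forces / mass
    return pos + dt * vel, vel
```

The GNS model replaces the hand-designed kernel and force terms in a step like this with learned message-passing over the same kind of local particle neighborhoods.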
References
  • Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
  • Battaglia, P., Pascanu, R., Lai, M., Rezende, D. J., et al. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pp. 4502–4510, 2016.
  • Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., Tacchetti, A., Raposo, D., Santoro, A., Faulkner, R., et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
  • Bender, J. and Koschier, D. Divergence-free smoothed particle hydrodynamics. In Proceedings of the 2015 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. ACM, 2015. doi: 10.1145/2786784.2786796.
  • Chang, M. B., Ullman, T., Torralba, A., and Tenenbaum, J. B. A compositional object-based approach to learning physical dynamics. arXiv preprint arXiv:1612.00341, 2016.
  • Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transportation distances, 2013.
  • DeepMind. Graph Nets Library, 2018. URL https://github.com/deepmind/graph_nets.
  • Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1263–1272. JMLR.org, 2017.
  • Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., and Smola, A. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
  • Grzeszczuk, R., Terzopoulos, D., and Hinton, G. NeuroAnimator: Fast neural network emulation and control of physics-based models. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pp. 9–20, 1998.
  • He, S., Li, Y., Feng, Y., Ho, S., Ravanbakhsh, S., Chen, W., and Poczos, B. Learning to predict the cosmological structure formation. Proceedings of the National Academy of Sciences, 116(28):13825–13832, 2019.
  • Holl, P., Koltun, V., and Thuerey, N. Learning to control PDEs with differentiable physics. arXiv preprint arXiv:2001.07457, 2020.
  • Hu, Y., Fang, Y., Ge, Z., Qu, Z., Zhu, Y., Pradhana, A., and Jiang, C. A moving least squares material point method with displacement discontinuity and two-way rigid body coupling. ACM Trans. Graph., 37(4), July 2018.
  • Hu, Y., Anderson, L., Li, T.-M., Sun, Q., Carr, N., Ragan-Kelley, J., and Durand, F. DiffTaichi: Differentiable programming for physical simulation. arXiv preprint arXiv:1910.00935, 2019.
  • Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
  • Kumar, S., Bitorff, V., Chen, D., Chou, C., Hechtman, B., Lee, H., Kumar, N., Mattson, P., Wang, S., Wang, T., et al. Scale MLPerf-0.6 models on Google TPU-v3 Pods. arXiv preprint arXiv:1909.09756, 2019.
  • Ladicky, L., Jeong, S., Solenthaler, B., Pollefeys, M., and Gross, M. Data-driven fluid simulations using regression forests. ACM Transactions on Graphics (TOG), 34(6):1–9, 2015.
  • Li, Y., Wu, J., Tedrake, R., Tenenbaum, J. B., and Torralba, A. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. arXiv preprint arXiv:1810.01566, 2018.
  • Li, Y., Wu, J., Zhu, J.-Y., Tenenbaum, J. B., Torralba, A., and Tedrake, R. Propagation networks for model-based control under partial observation. In 2019 International Conference on Robotics and Automation (ICRA), pp. 1205–1211. IEEE, 2019.
  • Macklin, M., Müller, M., Chentanez, N., and Kim, T.-Y. Unified particle physics for real-time applications. ACM Transactions on Graphics (TOG), 33(4):1–12, 2014.
  • Manessi, F., Rozza, A., and Manzo, M. Dynamic graph convolutional networks. Pattern Recognition, 97:107000, 2020.
  • Monaghan, J. J. Smoothed particle hydrodynamics. Annual Review of Astronomy and Astrophysics, 30(1):543–574, 1992.
  • Mrowca, D., Zhuang, C., Wang, E., Haber, N., Fei-Fei, L. F., Tenenbaum, J., and Yamins, D. L. Flexible neural representation for physics prediction. In Advances in Neural Information Processing Systems, pp. 8799–8810, 2018.
  • Müller, M., Heidelberger, B., Hennix, M., and Ratcliff, J. Position based dynamics. Journal of Visual Communication and Image Representation, 18(2):109–118, 2007.
  • Sanchez-Gonzalez, A., Heess, N., Springenberg, J. T., Merel, J., Riedmiller, M., Hadsell, R., and Battaglia, P. Graph networks as learnable physics engines for inference and control. arXiv preprint arXiv:1806.01242, 2018.
  • Sanchez-Gonzalez, A., Bapst, V., Cranmer, K., and Battaglia, P. Hamiltonian graph networks with ODE integrators. arXiv preprint arXiv:1909.12790, 2019.
  • Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., and Monfardini, G. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
  • Schoenholz, S. S. and Cubuk, E. D. JAX MD: End-to-end differentiable, hardware accelerated, molecular dynamics in pure Python. arXiv preprint arXiv:1912.04232, 2019.
  • Sulsky, D., Zhou, S.-J., and Schreyer, H. L. Application of a particle-in-cell method to solid mechanics. Computer Physics Communications, 87(1-2):236–252, 1995.
  • Sun, C., Karlsson, P., Wu, J., Tenenbaum, J. B., and Murphy, K. Stochastic prediction of multi-agent interactions from partial observations. arXiv preprint arXiv:1902.09641, 2019.
  • Tacchetti, A., Song, H. F., Mediano, P. A., Zambaldi, V., Rabinowitz, N. C., Graepel, T., Botvinick, M., and Battaglia, P. W. Relational forward models for multi-agent learning. arXiv preprint arXiv:1809.11044, 2018.
  • Trivedi, R., Dai, H., Wang, Y., and Song, L. Know-Evolve: Deep temporal reasoning for dynamic knowledge graphs. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 3462–3471. JMLR.org, 2017.
  • Trivedi, R., Farajtabar, M., Biswal, P., and Zha, H. DyRep: Learning representations over dynamic graphs. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyePrhR5KX.
  • Ummenhofer, B., Prantl, L., Thuerey, N., and Koltun, V. Lagrangian fluid simulation with continuous convolutions. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=B1lDoJSYDH.
  • Veličković, P., Ying, R., Padovano, M., Hadsell, R., and Blundell, C. Neural execution of graph algorithms. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SkgKO0EtvS.
  • Villani, C. Topics in Optimal Transportation. American Mathematical Society, 2003.
  • Wiewel, S., Becher, M., and Thuerey, N. Latent space physics: Towards learning the temporal evolution of fluid flow. In Computer Graphics Forum, pp. 71–82. Wiley Online Library, 2019.
  • Yan, S., Xiong, Y., and Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.