Automatic Differentiation for Second Renormalization of Tensor Networks

Chen Bin-Bin
Gao Yuan
Guo Yi-Bin
Liu Yuzhi
Liao Hai-Jun

Abstract:

Tensor renormalization group (TRG) constitutes an important methodology for accurate simulations of strongly correlated lattice models. Facilitated by the automatic differentiation technique widely used in deep learning, we propose a uniform framework of differentiable TRG ($\partial$TRG) that can be applied to improve various TRG methods in an automatic fashion. $\partial$TRG systematically extends the concept of second renormalization [PRL 103, 160601 (2009)], where the tensor environment is computed recursively in the backward iteration: given the forward TRG process, $\partial$TRG automatically finds the gradient through backpropagation, with which one can deeply “train” the tensor networks. We benchmark $\partial$TRG in solving the square-lattice Ising model, and demonstrate its power by simulating one- and two-dimensional quantum systems at finite temperature. The deep optimization as well as GPU acceleration renders $\partial$TRG many-body simulations highly efficient and accurate.

Introduction
  • In the course of the TRG process, the environment of local tensors must be taken into account to perform a precise truncation through isometric renormalization transformations of the tensor bases.
  • Tensor renormalization group (TRG) constitutes an important methodology for accurate simulations of strongly correlated lattice models.
  • ∂TRG systematically extends the concept of second renormalization [PRL 103, 160601 (2009)], where the tensor environment is computed recursively in the backward iteration: given the forward TRG process, ∂TRG automatically finds the gradient through backpropagation, with which one can deeply “train” the tensor networks (a minimal sketch of this environment-as-gradient correspondence follows this list).
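
A minimal sketch of the environment-as-gradient correspondence, assuming PyTorch; the toy network (a periodic chain of one matrix), its size, and all names are illustrative assumptions, not the authors' implementation:

    # The gradient of a tensor-network contraction with respect to a local
    # tensor is exactly that tensor's environment -- the quantity that second
    # renormalization assembles recursively and backpropagation delivers
    # automatically.
    import torch

    D = 4                                    # toy bond dimension
    A = torch.rand(D, D, dtype=torch.float64, requires_grad=True)

    # Forward pass: contract N copies of A on a periodic chain into a scalar Z.
    N = 8
    M = A
    for _ in range(N - 1):
        M = M @ A
    Z = torch.einsum('ii->', M)              # trace closes the periodic boundary

    # Backward pass: autodiff accumulates the environment E = dZ/dA, i.e. the
    # contraction of the entire network with one copy of A removed.
    Z.backward()
    E = A.grad

    # Sanity check against the analytic result d Tr(A^N)/dA = N (A^{N-1})^T.
    E_exact = N * torch.linalg.matrix_power(A.detach(), N - 1).T
    assert torch.allclose(E, E_exact)

In SRG this environment is built by hand between RG scales; in ∂TRG the same quantity falls out of a single backward() call through whatever forward TRG scheme was coded.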
Highlights
  • Facilitated by the automatic differentiation technique widely used in deep learning, we propose a uniform framework of differentiable tensor renormalization group (∂TRG) that can be applied to improve various TRG methods in an automatic fashion.
  • ∂TRG systematically extends the concept of second renormalization [PRL 103, 160601 (2009)], where the tensor environment is computed recursively in the backward iteration: given the forward TRG process, ∂TRG automatically finds the gradient through backpropagation, with which one can deeply “train” the tensor networks.
  • In the course of the TRG process, the environment of local tensors must be taken into account to perform a precise truncation through isometric renormalization transformations of the tensor bases.
  • Differentiable tensor renormalization group.— Aware of the intimate relation between backpropagation and the second renormalization group (SRG), the authors extend the latter into a more flexible framework, ∂TRG, with the help of well-developed automatic differentiation packages [39], e.g., autograd [51] and PyTorch [35, 52]; a toy training loop in this spirit follows this list.
  • Conclusion and outlook.— Inspired by the essential correspondence between the backpropagation algorithm and the second renormalization group of tensor networks, the authors propose the framework of ∂TRG.
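
A minimal sketch of such gradient training, assuming PyTorch; the transfer-matrix toy model, the QR parametrization of the isometry, and the squared free-energy loss are illustrative assumptions in the spirit of ∂TRG, not the authors' implementation:

    # Train an isometric truncation by gradient descent through the
    # differentiable coarse-grained contraction, instead of fixing it once
    # from a local SVD as in plain TRG/HOTRG.
    import torch

    torch.manual_seed(0)
    n, k, N = 8, 3, 16                       # full dim, kept dim, contraction steps
    B = torch.rand(n, n, dtype=torch.float64)
    A = B + B.T + n * torch.eye(n, dtype=torch.float64)   # symmetric, positive "transfer matrix"
    lnZ_exact = torch.log(torch.trace(torch.linalg.matrix_power(A, N)))

    X = torch.randn(n, k, dtype=torch.float64, requires_grad=True)
    opt = torch.optim.Adam([X], lr=0.05)
    for step in range(500):
        w, _ = torch.linalg.qr(X)            # keep w isometric: w^T w = identity
        A_t = w.T @ A @ w                    # truncated transfer matrix
        M = A_t
        for _ in range(N - 1):
            M = M @ A_t                      # forward contraction, fully differentiable
        loss = (torch.log(torch.trace(M)) - lnZ_exact) ** 2   # free-energy error
        opt.zero_grad()
        loss.backward()                      # backprop = recursive environment
        opt.step()

    # Compare the achieved error with the optimal truncation (top-k eigenspace).
    lam = torch.linalg.eigvalsh(A)           # eigenvalues in ascending order
    print(float(loss.sqrt()), float(lnZ_exact - torch.log((lam[-k:] ** N).sum())))

Because the eigenvalues of w^T A w interlace those of A, the truncated lnZ is bounded above by the exact one here, so minimizing the squared error drives w toward the dominant subspace; in ∂TRG the same gradient machinery trains the isometries of a full HOTRG/XTRG forward pass.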
Results
  • The authors benchmark ∂TRG in solving the square-lattice Ising model, and demonstrate its power by simulating one- and two-dimensional quantum systems at finite temperature (a toy construction of the Ising local tensor follows this list).
  • The deep optimization as well as GPU acceleration renders ∂TRG many-body simulations highly efficient and accurate.
  • In SRG, the environment of local tensors is computed recursively between different scales of a hierarchical network, with which a global optimization is feasible.
  • In ∂TRG, the forward TRG process is made fully differentiable, and the renormalization transformations are optimized globally and automatically through the backpropagation.
  • The authors apply ∂TRG to simulate thermal equilibrium states at finite temperature, and achieve significantly improved accuracy over previous methods [13, 19].
  • The efficiency is demonstrated by implementing ∂TRG in PyTorch [35, 52], which facilitates GPU computing and achieves roughly a 40-fold speedup over a single CPU core.
  • The authors consider two different ∂TRG schemes, following HOTRG [7] and the exponential TRG (XTRG) [19], respectively, as shown in the paper's figures.
  • In Fig. 2, the authors show the accuracies of the ∂TRG implementations, together with the HOTRG and HOSRG data for comparison.
  • The results are shown in Fig. 3(a), where the relative error |δf/f| rises from very small values at high temperature and increases monotonically as T decreases.
  • On the other hand, the enhancement of accuracy is marginal due to the limited expressibility of the tensor network with a given bond dimension D = 32.
  • These benchmark results, together with Ref. [42], suggest that GPU acceleration constitutes a very promising technique to be fully explored in quantum many-body computations, in particular tensor network simulations.
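
As a concrete anchor for the Ising benchmark, here is a toy construction of the standard square-lattice Ising local tensor, verified by brute force on a 2 x 2 torus; the lattice size and all names are illustrative assumptions, not the setup actually benchmarked in the paper:

    # Local tensor T whose square-lattice network contracts to the Ising
    # partition function: split each bond weight e^{beta s s'} as
    # sum_k M[s,k] M[s',k], then fuse four bond halves on every site.
    import math
    import torch

    beta = 0.4                               # inverse temperature, coupling J = 1
    c, s = math.cosh(beta), math.sinh(beta)
    M = torch.tensor([[math.sqrt(c),  math.sqrt(s)],
                      [math.sqrt(c), -math.sqrt(s)]], dtype=torch.float64)
    T = torch.einsum('si,sj,sk,sl->ijkl', M, M, M, M)   # fully symmetric site tensor

    # Contract four copies of T on a 2 x 2 torus (on this smallest torus each
    # neighboring pair of sites is connected by two bonds).
    Z_tn = torch.einsum('abcd,abef,ghcd,ghef->', T, T, T, T)

    # Brute-force reference: sum the Boltzmann weights of all 2^4 spin states.
    Z_ref = 0.0
    for conf in range(16):
        sp = [1 - 2 * ((conf >> i) & 1) for i in range(4)]   # sA, sB, sC, sD
        E = 2 * (sp[0]*sp[1] + sp[2]*sp[3] + sp[0]*sp[2] + sp[1]*sp[3])
        Z_ref += math.exp(beta * E)
    assert math.isclose(float(Z_tn), Z_ref, rel_tol=1e-12)

The same tensors moved to a GPU (e.g., T = T.cuda()) contract with identical code, which is the kind of drop-in acceleration behind the reported ~40-fold speedup.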
Conclusion
  • One can observe a high accuracy with an optimization depth n_d = 3, which continuously improves upon increasing the bond dimension D.
  • Large-scale simulations and finite-temperature phase transition.— The authors conduct ∂TRG calculations of the quantum Ising model on wide cylinders with various widths W and lengths L.
  • Conclusion and outlook.— Inspired by the essential correspondence between the backpropagation algorithm and SRG of tensor networks, the authors propose the framework of ∂TRG.
Funding
  • This work was supported by the National Natural Science Foundation of China (Grant Nos. 11774420, 11834014, 11974036, and 11774398), the National R&D Program of China (Grant Nos. 2016YFA0300503 and 2017YFA0302900), and the German Research Foundation (DFG WE4819/3-1) under Germany's Excellence Strategy EXC-2111 (Project No. 390814868).
References
  • T. Liu, W. Li, A. Weichselbaum, J. von Delft, and G. Su, “Simplex valence-bond crystal in the spin-1 kagome Heisenberg antiferromagnet,” Phys. Rev. B 91, 060403 (2015).
  • H. J. Liao, Z. Y. Xie, J. Chen, Z. Y. Liu, H. D. Xie, R. Z. Huang, B. Normand, and T. Xiang, “Gapless spin-liquid ground state in the s = 1/2 kagome antiferromagnet,” Phys. Rev. Lett. 118, 137202 (2017).
  • L. Chen, D.-W. Qu, H. Li, B.-B. Chen, S.-S. Gong, J. von Delft, A. Weichselbaum, and W. Li, “Two-temperature scales in the triangular lattice Heisenberg antiferromagnet,” Phys. Rev. B 99, 140404(R) (2019).
  • [5] M. Levin and C. P. Nave, “Tensor renormalization group approach to two-dimensional classical lattice models,” Phys. Rev. Lett. 99, 120601 (2007).
  • [6] Z.-C. Gu and X.-G. Wen, “Tensor-entanglement-filtering renormalization approach and symmetry-protected topological order,” Phys. Rev. B 80, 155131 (2009).
  • [7] Z. Y. Xie, J. Chen, M. P. Qin, J. W. Zhu, L. P. Yang, and T. Xiang, “Coarse-graining renormalization by higher-order singular value decomposition,” Phys. Rev. B 86, 045139 (2012).
  • [8] G. Evenbly and G. Vidal, “Tensor network renormalization,” Phys. Rev. Lett. 115, 180405 (2015).
  • [9] G. Evenbly and G. Vidal, “Tensor network renormalization yields the multiscale entanglement renormalization ansatz,” Phys. Rev. Lett. 115, 200401 (2015).
  • [10] S. Yang, Z.-C. Gu, and X.-G. Wen, “Loop optimization for tensor network renormalization,” Phys. Rev. Lett. 118, 110504 (2017).
  • [11] M. Bal, M. Mariën, J. Haegeman, and F. Verstraete, “Renormalization group flows of Hamiltonians using tensor networks,” Phys. Rev. Lett. 118, 250602 (2017).
  • [12] H. C. Jiang, Z. Y. Weng, and T. Xiang, “Accurate determination of tensor network state of quantum lattice models in two dimensions,” Phys. Rev. Lett. 101, 090603 (2008).
  • [13] W. Li, S.-J. Ran, S.-S. Gong, Y. Zhao, B. Xi, F. Ye, and G. Su, “Linearized tensor renormalization group algorithm for the calculation of thermodynamic properties of quantum lattice models,” Phys. Rev. Lett. 106, 127202 (2011).
  • [14] P. Czarnik, L. Cincio, and J. Dziarmaga, “Projected entangled pair states at finite temperature: Imaginary time evolution with ancillas,” Phys. Rev. B 86, 245101 (2012).
  • [15] Y.-L. Dong, L. Chen, Y.-J. Liu, and W. Li, “Bilayer linearized tensor renormalization group approach for thermal tensor networks,” Phys. Rev. B 95, 144428 (2017).
  • [16] A. Kshetrimayum, M. Rizzi, J. Eisert, and R. Orús, “Tensor network annealing algorithm for two-dimensional thermal states,” Phys. Rev. Lett. 122, 070502 (2019).
  • [17] P. Czarnik and J. Dziarmaga, “Variational approach to projected entangled pair states at finite temperature,” Phys. Rev. B 92, 035152 (2015).
  • [18] P. Czarnik and P. Corboz, “Finite correlation length scaling with infinite projected entangled pair states at finite temperature,” arXiv:1904.02476 (2019).
  • [19] B.-B. Chen, L. Chen, Z. Chen, W. Li, and A. Weichselbaum, “Exponential thermal tensor network approach for quantum lattice models,” Phys. Rev. X 8, 031082 (2018).
  • [20] H. Li, B.-B. Chen, Z. Chen, J. von Delft, A. Weichselbaum, and W. Li, “Thermal tensor renormalization group simulations of square-lattice quantum spin models,” Phys. Rev. B 100, 045110 (2019).
  • [21] S. R. White, “Density matrix formulation for quantum renormalization groups,” Phys. Rev. Lett. 69, 2863–2866 (1992).
  • [22] Z. Y. Xie, H. C. Jiang, Q. N. Chen, Z. Y. Weng, and T. Xiang, “Second renormalization of tensor-network states,” Phys. Rev. Lett. 103, 160601 (2009).
  • [23] H. H. Zhao, Z. Y. Xie, Q. N. Chen, Z. C. Wei, J. W. Cai, and T. Xiang, “Renormalization of tensor-network states,” Phys. Rev. B 81, 174411 (2010).
  • [24] H.-H. Zhao, Z.-Y. Xie, T. Xiang, and M. Imada, “Tensor network algorithm by coarse-graining tensor renormalization on finite periodic lattices,” Phys. Rev. B 93, 125115 (2016).
  • [25] G. Carleo and M. Troyer, “Solving the quantum many-body problem with artificial neural networks,” Science 355, 602–606 (2017).
  • [26] J. Carrasquilla and R. G. Melko, “Machine learning phases of matter,” Nature Physics 13, 431–434 (2017).
  • [28] E. Stoudenmire and D. J. Schwab, “Supervised learning with tensor networks,” in Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Curran Associates, Inc., 2016) pp. 4799–4807.
  • [29] S. Foreman, J. Giedt, Y. Meurice, and J. Unmuth-Yockey, “Examples of renormalization group transformations for image sets,” Phys. Rev. E 98, 052129 (2018).
  • [30] Z.-Y. Han, J. Wang, H. Fan, L. Wang, and P. Zhang, “Unsupervised generative modeling using matrix product states,” Phys. Rev. X 8, 031012 (2018).
  • [31] C. Guo, Z. M. Jie, W. Lu, and D. Poletti, “Matrix product operators for sequence-to-sequence learning,” Phys. Rev. E 98, 042114 (2018).
  • [32] S.-H. Li and L. Wang, “Neural network renormalization group,” Phys. Rev. Lett. 121, 260601 (2018).
  • [33] M. Koch-Janusz and Z. Ringel, “Mutual information, neural networks and the renormalization group,” Nature Physics 14, 578–582 (2018).
  • [34] H.-J. Liao, J.-G. Liu, L. Wang, and T. Xiang, “Differentiable programming tensor networks,” Phys. Rev. X 9, 031041 (2019).
  • [35] A. Paszke, G. Chanan, Z. Lin, S. Gross, E. Yang, L. Antiga, and Z. DeVito, “Automatic differentiation in PyTorch,” in Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
  • [36] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature (London) 323, 533–536 (1986).
  • [37] D. B. Parker, “Learning logic,” Technical Report TR-47, MIT (1985).
  • [38] Y. LeCun, L. D. Jackel, B. Boser, J. S. Denker, H. P. Graf, I. Guyon, D. Henderson, R. E. Howard, and W. Hubbard, “Handwritten digit recognition: applications of neural network chips and automatic learning,” IEEE Communications Magazine 27, 41–46 (1989).
  • [40] G. Vidal, “Entanglement renormalization,” Phys. Rev. Lett. 99, 220405 (2007).
  • [41] G. Evenbly and G. Vidal, “Algorithms for entanglement renormalization,” Phys. Rev. B 79, 144108 (2009).
  • [42] A. Milsted, M. Ganahl, S. Leichenauer, J. Hidary, and G. Vidal, “TensorNetwork on TensorFlow: A spin chain application using tree tensor networks,” arXiv:1905.01331 (2019).
  • [43] B. Bauer et al., “The ALPS project release 2.0: open source software for strongly correlated systems,” Journal of Statistical Mechanics: Theory and Experiment 2011, P05001 (2011).
  • [46] E. Efrati, Z. Wang, A. Kolan, and L. P. Kadanoff, “Real-space renormalization in statistical mechanics,” Rev. Mod. Phys. 86, 647–667 (2014).
  • [47] K. G. Wilson, “The renormalization group: Critical phenomena and the Kondo problem,” Rev. Mod. Phys. 47, 773–840 (1975).
  • [48] L. P. Kadanoff, “Variational principles and approximate renormalization group calculations,” Phys. Rev. Lett. 34, 1005–1008 (1975).
  • [49] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature (London) 521, 436–444 (2015).