Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks

NeurIPS 2020


Abstract

Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks including code completion, bug finding, and program repair. They benefit from leveraging program structure like control flow graphs, but they are not well-suited to tasks like program execution that require far more sequential reasoning ...
Introduction
  • Static analysis methods underpin thousands of programming tools from compilers and debuggers to IDE extensions, offering productivity boosts to software engineers at every stage of development.
  • Graph neural networks in particular have emerged as a powerful tool for these tasks due to their suitability for learning from program structures such as parse trees, control flow graphs, and data flow graphs.
  • These successes motivate further study of neural network models for static analysis tasks.
  • An example program and its control flow graph are shown in Figure 1; a toy control flow graph in adjacency-list form is sketched below.
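
To make the control flow graph representation concrete, here is a toy example of our own (not the paper's Figure 1): a small program in the paper's Python subset, with its control flow graph written as an adjacency list mapping each statement index to its possible successors.

    # Toy program and its control flow graph; this example is illustrative only.
    program = [
        "v0 = 4",           # 0
        "v1 = 0",           # 1
        "while v0 > 0:",    # 2: branches to the loop body (3) or the exit (5)
        "  v1 = v1 + v0",   # 3
        "  v0 = v0 - 1",    # 4: end of the body, returns to the loop test (2)
        "v1 = v1 * 2",      # 5: exit statement
    ]

    cfg = {
        0: [1],
        1: [2],
        2: [3, 5],   # the only statement with two successors
        3: [4],
        4: [2],
        5: [5],      # model program exit as a self-loop
    }

Every statement has at most two control flow successors here, which is the property the soft-interpreter sketch after the Highlights list assumes.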
Highlights
  • For tasks requiring reasoning about program execution, we expect the best models will come from a study of both recurrent neural network (RNN) and graph neural network (GNN) architectures
  • We consider models that share a causal structure with a classical interpreter. This leads us to the design of the Instruction Pointer Attention Graph Neural Network (IPA-GNN), which takes the form of a message passing graph neural network; a minimal sketch of one such soft-interpreter step follows this list
  • By closely following the causal structure of an interpreter, the IPA-GNN exhibits stronger systematic generalization than baseline models on tasks requiring reasoning about program execution behavior
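
To illustrate the interpreter-like causal structure, here is a minimal sketch of one soft-interpreter step of an IPA-GNN-style model. It assumes a soft instruction pointer over statements, a hidden state per statement, and at most two control flow successors per statement (as in the toy cfg above); the function names, the simplified tanh cell standing in for an RNN cell, and the weight shapes are our illustrative choices, not the authors' implementation.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def ipa_gnn_step(p, h, stmt_emb, succ, W_rnn, W_branch):
        """One soft-interpreter step over a program with N statements.

        p:        (N,)    soft instruction pointer, a distribution over statements
        h:        (N, D)  hidden state carried per statement
        stmt_emb: (N, D)  learned embedding of each statement
        succ:     (N, 2)  indices of each statement's control flow successors
                          (statements with one successor repeat it in both slots)
        W_rnn:    (2D, D) toy stand-in for the RNN cell parameters
        W_branch: (D, 2)  branch-decision (attention) parameters
        """
        N, D = h.shape
        # "Execute" every statement in parallel: propose an updated state per node.
        a = np.tanh(np.concatenate([h, stmt_emb], axis=1) @ W_rnn)   # (N, D)
        # Soft branch decision: attention over each statement's two successors.
        b = softmax(a @ W_branch, axis=-1)                           # (N, 2)
        # Propagate pointer mass and proposed states along control flow edges.
        p_new = np.zeros(N)
        h_acc = np.zeros((N, D))
        for n in range(N):
            for k in (0, 1):
                m = succ[n, k]
                p_new[m] += p[n] * b[n, k]
                h_acc[m] += p[n] * b[n, k] * a[n]
        # Normalize so each statement's new state is a pointer-weighted average.
        h_new = h_acc / np.maximum(p_new, 1e-9)[:, None]
        return p_new, h_new

Per the paper's Table 1 (described below), selectively swapping such components for their GGNN counterparts yields the NoControl and NoExecute baselines.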
Methods
  • Through a series of experiments on generated programs, the authors evaluate the IPA-GNN and baseline models for systematic generalization on program execution framed as a static analysis task.
  • Dataset: The authors draw the dataset from a probabilistic grammar over programs using a subset of the Python programming language; a toy sketch of such a sampler follows this list.
  • The generated programs exhibit variable assignments, multi-digit arithmetic, while loops, and if-else statements.
  • The variables used (up to v9), the size of constants, and the scope of statements and conditions used in a program are limited.
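
To make the generation process concrete, the following toy sketch samples program text from a small probabilistic grammar over this Python subset. The production probabilities, depth limit, variable pool, and constant range are placeholder choices, not the paper's actual generation parameters, and termination or well-formedness constraints on the generated programs are not handled here.

    import random

    VARS = [f"v{i}" for i in range(10)]          # variables limited to v0..v9

    def sample_stmt(depth=0):
        r = random.random()
        if depth >= 2 or r < 0.6:                # assignment with arithmetic
            lhs = random.choice(VARS)
            a, b = random.choice(VARS), random.randint(0, 99)
            op = random.choice(["+", "-", "*"])
            return [f"{lhs} = {a} {op} {b}"]
        cond = f"{random.choice(VARS)} > {random.randint(0, 99)}"
        body = ["  " + s for s in sample_stmt(depth + 1)]
        if r < 0.8:                              # while loop
            return [f"while {cond}:"] + body
        else_body = ["  " + s for s in sample_stmt(depth + 1)]
        return [f"if {cond}:"] + body + ["else:"] + else_body

    def sample_program(n_stmts=5):
        lines = [f"{v} = {random.randint(0, 99)}" for v in VARS]  # initialize all
        for _ in range(n_stmts):
            lines += sample_stmt()
        return "\n".join(lines)

    print(sample_program())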
Results
  • Table 2 shows the results of each model on the full and partial execution tasks.
  • On both tasks, the IPA-GNN outperforms all baselines.
  • Figure 4 breaks the results out by complexity.
  • At the complexity values used during training, the Line-by-Line RNN model (32.0% full / 11.5% partial accuracy) performs almost as well as the IPA-GNN.
  • The performance of all baseline models, including NoExecute (50.7% / 20.7%) and the GGNN (16.0% / 5.7%), drops off faster than that of the IPA-GNN.
Conclusion
  • Following a principled approach, the authors designed the Instruction Pointer Attention Graph Neural Network architecture based on a classical interpreter.
  • The programs in the experiments were limited in the number of variables considered, in the magnitude of the values used, and in the scope of statements permitted.
  • Even at this modest level of difficulty, though, existing models struggled with the tasks, and there remains work to be done to solve harder versions of these tasks and to scale these results to real world problems.
  • The domain naturally admits scaling of difficulty and so provides a good playground for studying systematic generalization
Tables
  • Table1: The IPA-GNN model is a message passing GNN. Selectively replacing its components with those of the GGNN yields two baseline models, NoControl and NoExecute. Blue expressions originate with the IPA-GNN, and orange expressions with the GGNN
  • Table2: Accuracies on Dtest (%) for each model (rows) on the full and partial execution tasks (columns)
Study subjects and analysis
For this filtering, we use program length as our complexity measure c, with complexity threshold C = 10. We then sample 4.5k additional samples with c(x) > C to comprise Dtest, filtering to achieve 500 samples each at complexities {20, 30, ..., 100}. An example program is shown in Figure 1.
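
A minimal sketch of this stratified filtering, assuming a sample_program generator like the toy one above; the function names and bucketing strategy are our illustrative reading of the described procedure.

    def complexity(program: str) -> int:
        """c(x): program length, measured here as the number of lines."""
        return len(program.splitlines())

    def build_dtest(sample_program, targets=range(20, 101, 10), per_bucket=500):
        """Collect 500 samples at each complexity in {20, 30, ..., 100}, 4.5k total.

        Assumes the generator can hit every target length; a real pipeline
        would bound the number of sampling attempts.
        """
        buckets = {c: [] for c in targets}
        while any(len(progs) < per_bucket for progs in buckets.values()):
            program = sample_program()
            c = complexity(program)
            if c in buckets and len(buckets[c]) < per_bucket:
                buckets[c].append(program)
        return [p for progs in buckets.values() for p in progs]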
