Online Back-Parsing for AMR-to-Text Generation

EMNLP 2020, pp. 1206–1219


Abstract

AMR-to-text generation aims to recover a text containing the same meaning as an input AMR graph. Current research develops increasingly powerful graph encoders to better represent AMR graphs, with decoders based on standard language modeling being used to generate outputs. We propose a decoder that back predicts projected AMR graphs on the target sentence during text generation, so that the outputs better preserve the input meaning. Experiments on two AMR benchmarks show a significant improvement over a state-of-the-art graph Transformer baseline.

Introduction
  • Abstract meaning representation (AMR) (Banarescu et al, 2013) is a semantic graph representation that abstracts meaning away from a sentence.
  • Figure 1 shows an AMR graph, where the nodes, such as “possible-01” and “police”, represent concepts, and the edges, such as “ARG0” and “ARG1”, indicate relations between the concepts they connect.
  • The task of AMR-to-text generation (Konstas et al, 2017) aims to produce fluent sentences that convey consistent meaning with input AMR graphs.
  • Taking the AMR in Figure 1 as input, a model can produce the sentence “The police could help the victim” (an illustrative PENMAN-notation sketch of such a graph is given after this list).
  • AMR-to-text generation can be a good test bed for general graph-to-sequence problems (Belz et al, 2011; Gardent et al, 2017)
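As a concrete illustration of the representation described above, the Figure 1 example (“The police could help the victim”) can be written in PENMAN notation and inspected programmatically. The graph below and the use of the third-party penman package are illustrative assumptions; the paper's exact figure and code are not reproduced on this page.

    # A minimal sketch, assuming the third-party `penman` package (pip install penman).
    # The AMR is a plausible reconstruction of the Figure 1 example, not the paper's exact graph.
    import penman

    amr_string = """
    (p / possible-01
       :ARG1 (h / help-01
                :ARG0 (p2 / police)
                :ARG1 (v / victim)))
    """

    graph = penman.decode(amr_string)

    # Concept nodes such as "possible-01" and "police"
    for var, _, concept in graph.instances():
        print(f"node {var}: {concept}")

    # Labeled relations such as :ARG0 and :ARG1
    for src, role, tgt in graph.edges():
        print(f"edge {src} {role} {tgt}")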
Highlights
  • Abstract meaning representation (AMR) (Banarescu et al, 2013) is a semantic graph representation that abstracts meaning away from a sentence
  • AMR-to-text generation has been shown useful for many applications such as machine translation (Song et al, 2019) and summarization (Liu et al, 2015; Yasunaga et al, 2017; Liao et al, 2018; Hardy and Vlachos, 2018)
  • The AMR-to-text generation task takes an AMR graph as input, which can be denoted as a directed acyclic graph G = (V, E), where V denotes the set of nodes and E refers to the set of labeled edges
  • In order to deal with words without alignments, we introduce a NULL node v∅ into the input AMR graph and align such words to it (see the data-structure sketch after this list)
  • Our systems give significantly better results than the previous systems using different encoders, including LSTM (Konstas et al, 2017), graph gated neural network (GGNN; Beck et al, 2018), graph recurrent network (GRN; Song et al, 2018), densely connected graph convolutional network (DCGCN; Guo et al, 2019) and various graph transformers (G-Trans-F, G-Trans-SA, G-Trans-C, G-Trans-W)
  • We investigated back-parsing for AMR-to-text generation by integrating the prediction of projected AMRs into sentence decoding
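The input representation described in the items above, a directed labeled graph G = (V, E) plus a NULL node for unaligned target words, can be sketched with a simple data structure. The class and field names below are illustrative assumptions, not the authors' code.

    # A minimal sketch of the input: a directed graph G = (V, E) with labeled
    # edges, plus a special NULL node to which unaligned target words attach.
    # Names (AMRGraph, NULL_NODE, align) are illustrative assumptions.
    from dataclasses import dataclass, field

    NULL_NODE = "<NULL>"

    @dataclass
    class AMRGraph:
        nodes: set = field(default_factory=lambda: {NULL_NODE})   # V: concept nodes
        edges: set = field(default_factory=set)                    # E: (src, label, tgt) triples
        alignment: dict = field(default_factory=dict)              # target word index -> node

        def add_edge(self, src, label, tgt):
            self.nodes.update((src, tgt))
            self.edges.add((src, label, tgt))

        def align(self, word_idx, node=None):
            # Words without an alignment are attached to the NULL node.
            self.alignment[word_idx] = node if node is not None else NULL_NODE

    g = AMRGraph()
    g.add_edge("possible-01", "ARG1", "help-01")
    g.add_edge("help-01", "ARG0", "police")
    g.add_edge("help-01", "ARG1", "victim")

    # Alignments for "The police could help the victim ."
    g.align(0)                    # "The"   -> NULL node
    g.align(1, "police")
    g.align(2, "possible-01")     # "could"
    g.align(3, "help-01")
    g.align(4)                    # "the"   -> NULL node
    g.align(5, "victim")
    g.align(6)                    # "."     -> NULL node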
Methods
  • Experiments on two AMR benchmark datasets (LDC2015E86 and LDC2017T10) show that the model significantly outperforms a state-of-the-art graph Transformer baseline by 1.8 and 2.5 BLEU points, respectively, demonstrating the advantage of structure-integrated decoding for AMR-to-text generation.
  • The authors conduct experiments on two benchmark AMR-to-text generation datasets, including LDC2015E86 and LDC2017T10.
  • These two datasets contain 16,833 and 36,521 training examples, respectively, and share a common set of 1,368 development and 1,371 test instances.
  • The authors' models are trained for 500K steps on a single 2080Ti GPU
  • The authors tune these hyperparameters on the LDC2015E86 development set and use the selected values for testing
Results
  • Automatic evaluation (Section 4.3.1): Table 2 shows the automatic evaluation results, where “G-Trans-F-Ours” and “Ours Back-Parsing” represent the baseline and the full model, respectively (a hedged metric-computation sketch is given after this list).
  • The authors' systems give significantly better results than the previous systems using different encoders, including LSTM (Konstas et al, 2017), graph gated neural network (GGNN; Beck et al, 2018), graph recurrent network (GRN; Song et al, 2018), densely connected graph convolutional network (DCGCN; Guo et al, 2019) and various graph transformers (G-Trans-F, G-Trans-SA, G-Trans-C, G-Trans-W).
  • Note that the authors do not compare the model with methods that use external data
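The scores above come from standard corpus-level metrics (BLEU; the paper also reports Meteor and chrF++). As a hedged sketch of how such a score can be computed, the snippet below uses the third-party sacrebleu package on the two case-study outputs quoted later in this page; the paper's own evaluation scripts are not reproduced here.

    # A minimal sketch of corpus-level BLEU, assuming the third-party
    # `sacrebleu` package (pip install sacrebleu); not the paper's scripts.
    import sacrebleu

    hypotheses = [
        "Obviously there is a local problem as this cookie dough is a lumpy .",
        "Don't worry that , as a doctor saw much worse cases .",
    ]
    references = [
        "Obviously there are local problems because this cookie dough is lumpy .",
        "Doctors have seen much worse cases so don't worry about that !",
    ]

    # sacrebleu expects one list of hypotheses and a list of reference streams.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    print(f"BLEU: {bleu.score:.2f}")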
Conclusion
  • The authors investigated back-parsing for AMR-to-text generation by integrating the prediction of projected AMRs into sentence decoding.
  • The resulting model benefits from both a richer loss and more structural features during decoding (a schematic sketch of such a combined loss follows this list).
  • Experiments on two benchmarks show the advantage of the model over a state-of-the-art baseline
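A schematic view of the decoding described above: at each step the decoder predicts the next target word and, as an auxiliary back-parsing task, the projected AMR concept aligned to that word, with both losses summed. Everything in this sketch (module names, shapes, the exact auxiliary objective) is an illustrative assumption rather than the authors' implementation; the paper additionally predicts relations between generated concepts, which is omitted here.

    # A schematic PyTorch sketch of structure-integrated decoding with an
    # auxiliary back-parsing loss; names and shapes are illustrative only.
    import torch
    import torch.nn as nn

    class BackParsingDecoderStep(nn.Module):
        def __init__(self, hidden_size, word_vocab, concept_vocab):
            super().__init__()
            self.word_head = nn.Linear(hidden_size, word_vocab)        # next target word
            self.concept_head = nn.Linear(hidden_size, concept_vocab)  # projected AMR concept
            self.xent = nn.CrossEntropyLoss()

        def forward(self, state, gold_word, gold_concept):
            # state: [batch, hidden]; gold_word / gold_concept: [batch]
            gen_loss = self.xent(self.word_head(state), gold_word)
            back_loss = self.xent(self.concept_head(state), gold_concept)  # NULL concept for unaligned words
            return gen_loss + back_loss   # richer, structure-aware training signal

    # Toy usage with random tensors.
    step = BackParsingDecoderStep(hidden_size=512, word_vocab=10000, concept_vocab=3000)
    state = torch.randn(4, 512)
    loss = step(state, torch.randint(0, 10000, (4,)), torch.randint(0, 3000, (4,)))
    loss.backward()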
Tables
  • Table1: BLEU and Meteor scores on the LDC2015E86 devset under different model settings
  • Table2: Test-set BLEU scores on LDC2015E86 (LDC15) and LDC2017T10 (LDC17)
  • Table3: Human evaluation of the sentences generated by different systems on concept preservation rate (CPR), relation preservation rate (RPR) and fluency
  • Table4: Human study for discourse preservation accuracy on LDC2015E86
  • Table5: Ablation study on LDC2015E86 test set
  • Table6: The Pearson correlation coefficient ρ between the prediction accuracy and BLEU (a short computational reference follows this list)
  • Table7: Examples for case study
  • Table8: Full list of model parameters on the LDC2015E86
  • Table9: Main test results on LDC2015E86 and LDC2017T10
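The ρ reported in Table 6 is the standard Pearson correlation; as a quick reference, it can be computed as below, with hypothetical values rather than the paper's data.

    # Pearson correlation between per-example prediction accuracy and BLEU,
    # computed with scipy on hypothetical values (not the paper's data).
    from scipy.stats import pearsonr

    prediction_accuracy = [0.62, 0.71, 0.55, 0.80, 0.68]   # hypothetical
    bleu_scores = [24.1, 27.3, 21.8, 30.5, 26.0]           # hypothetical
    rho, p_value = pearsonr(prediction_accuracy, bleu_scores)
    print(f"Pearson rho = {rho:.3f} (p = {p_value:.3f})")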
Related work
  • Early studies on AMR-to-text generation rely on statistical methods. Flanigan et al (2016) convert input AMR graphs to trees by splitting re-entrances, before translating these trees into target sentences with a tree-to-string transducer; Pourdamghani et al (2016) apply a phrase-based MT system on linearized AMRs; Song et al (2017) design a synchronous node replacement grammar to parse input AMRs while generating target sentences. These approaches show comparable or better results than early neural models.

    Examples from the case study (Table 7):

    (1) (o / obvious-01 :ARG1 (p / problem :ARG1-of (l / local-02)) :ARG1-of (c / cause-01 :ARG0 (l2 / lumpy :domain (d / dough :mod (c2 / cookie) :mod (t / this)))))
    REF: Obviously there are local problems because this cookie dough is lumpy .
    Baseline: It is obvious that these cookie dough were a lumpy .
    Ours: Obviously there is a local problem as this cookie dough is a lumpy .

    (2) (c / cause-01 :ARG0 (s / see-01 :ARG0 (d / doctor) :ARG1 (c2 / case :ARG1-of (b / bad-05 :degree (m / more :quant (m2 / much))))) :ARG1 (w / worry :polarity - :mode imperative :ARG0 (y / you) :ARG1 (t / that)))
    REF: Doctors have seen much worse cases so don't worry about that !
    Baseline: Don't worry about that see much worse cases by doctors .
    Ours: Don't worry that , as a doctor saw much worse cases .
  • Related work on NMT studies back-translation loss (Sennrich et al, 2016; Tu et al, 2017) by translating the target reference back into the source text (reconstruction), which can help retain more comprehensive input information. This is similar to our goal. Wiseman et al (2017) extended the reconstruction loss of Tu et al (2017) for table-to-text generation. We study a more challenging topic on how to retain the meaning of a complex graph structure rather than a sentence or a table. In addition, rather than reconstructing the input after the output is produced, we predict the input while the output is constructed, thereby allowing stronger information sharing.

    Our work is also remotely related to previous work on string-to-tree neural machine translation (NMT) (Aharoni and Goldberg, 2017; Wu et al, 2017; Wang et al, 2018), which aims at generating target sentences together with their syntactic trees.
Funding
  • This work has been supported by National Natural Science Foundation of China under grant No. 61976180 and a Xiniuniao grant of Tencent
Study subjects and analysis
Human annotators: 3
We also employ human evaluation to assess the semantic faithfulness and generation fluency of the compared methods, randomly selecting 50 AMR graphs for comparison. Three people familiar with AMR are asked to score the generation quality with regard to three aspects: concept preservation rate, relation preservation rate and fluency (each on a scale of [0, 5]). The detailed criteria are given in the paper.
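This protocol amounts to averaging per-rater scores per system and aspect; a minimal aggregation sketch with made-up scores (not the paper's data) is below.

    # Aggregating human evaluation scores: 3 raters, three aspects in [0, 5].
    # All numbers are made up for illustration.
    from statistics import mean

    # ratings[system][aspect] = per-item scores already averaged over the 3 raters
    ratings = {
        "baseline": {"CPR": [4.0, 3.5, 4.5], "RPR": [3.5, 3.0, 4.0], "fluency": [4.0, 4.5, 4.0]},
        "back-parsing": {"CPR": [4.5, 4.0, 5.0], "RPR": [4.0, 4.0, 4.5], "fluency": [4.5, 4.5, 4.0]},
    }

    for system, per_aspect in ratings.items():
        summary = ", ".join(f"{aspect}: {mean(vals):.2f}" for aspect, vals in per_aspect.items())
        print(f"{system}: {summary}")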

Reference
  • Roee Aharoni and Yoav Goldberg. 2017. Towards string-to-tree neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
  • Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse.
  • Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.
  • Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
  • Anja Belz, Michael White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The first surface realisation shared task: Overview and evaluation results. In Proceedings of the 13th European Workshop on Natural Language Generation, pages 217–226. Association for Computational Linguistics.
  • Deng Cai and Wai Lam. 2020. Graph transformer for graph-to-sequence learning. In Proceedings of Thirty-Fourth AAAI Conference on Artificial Intelligence.
  • Kris Cao and Stephen Clark. 2019. Factorising AMR generation through syntax. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2157–2163, Minneapolis, Minnesota. Association for Computational Linguistics.
  • Marco Damonte and Shay B Cohen. 2019. Structural neural encoders for amr-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
  • Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 376–380, Baltimore, Maryland, USA. Association for Computational Linguistics.
  • Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. In International Conference on Learning Representations.
  • Jeffrey Flanigan, Chris Dyer, Noah A Smith, and Jaime G Carbonell. 2016. Generation from abstract meaning representation using tree transducers. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
  • Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124–133.
  • Zhijiang Guo, Yan Zhang, Zhiyang Teng, and Wei Lu. 2019. Densely connected graph convolutional networks for graph-to-sequence learning. Transactions of the Association for Computational Linguistics, 7:297–312.
  • Valerie Hajdik, Jan Buys, Michael Wayne Goodman, and Emily M Bender. 2019. Neural text generation from rich semantic representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
  • Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using abstract meaning representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
  • Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference for Learning Representations.
  • Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, and Hannaneh Hajishirzi. 2019. Text generation from knowledge graphs with graph transformers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
  • Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
  • Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract meaning representation for multi-document summarization. In Proceedings of the 27th International Conference on Computational Linguistics.
  • Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A Smith. 2015. Toward abstractive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
  • Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers.
  • Manuel Mager, Ramon Fernandez Astudillo, Tahira Naseem, Md Arafat Sultan, Young-Suk Lee, Radu Florian, and Salim Roukos. 2020. GPT-too: A language-model-first approach for AMR-to-text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1846–1852, Online. Association for Computational Linguistics.
  • Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.
  • Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.
  • Maja Popovic. 2017. chrF++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation.
  • Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning English strings with abstract meaning representation graphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • Nima Pourdamghani, Kevin Knight, and Ulf Hermjakob. 2016. Generating english from abstract meaning representations. In Proceedings of the 9th international natural language generation conference.
  • Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco. European Language Resources Association (ELRA).
  • Leonardo FR Ribeiro, Claire Gardent, and Iryna Gurevych. 2019. Enhancing amr-to-text generation with dual graph representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
  • Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
  • Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using amr. Transactions of the Association for Computational Linguistics, 7:19–31.
  • Linfeng Song, Xiaochang Peng, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2017. Amr-to-text generation with synchronous node replacement grammar. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
  • Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amr-to-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
  • Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In Thirty-First AAAI Conference on Artificial Intelligence.
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems.
  • Tianming Wang, Xiaojun Wan, and Hanqi Jin. 2020. Amr-to-text generation with graph transformer. Transactions of the Association for Computational Linguistics, 8:19–33.
  • Xinyi Wang, Hieu Pham, Pengcheng Yin, and Graham Neubig. 2018. A tree-based decoder for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
  • Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
  • Shuangzhi Wu, Dongdong Zhang, Nan Yang, Mu Li, and Ming Zhou. 2017. Sequence-to-dependency neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
  • Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017).
  • Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
  • Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, and Guodong Zhou. 2019. Modeling graph structure in transformer for better amr-to-text generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
  • A. Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. ArXiv, abs/2006.14799.