AMR Parsing with Latent Structural Information

ACL, pp. 4306-4319, 2020.


Abstract:

Abstract Meaning Representations (AMRs) capture sentence-level semantics as structural representations of broad-coverage natural sentences. We investigate parsing AMR with explicit dependency structures and interpretable latent structures. We generate the latent soft structure without additional annotations, and fuse both dependency and latent struct…
Introduction
Highlights
  • Abstract Meaning Representations (AMRs) (Banarescu et al., 2013) model sentence-level semantics as rooted, directed, acyclic graphs
  • To benefit from both explicit and latent structural information in AMR parsing, we extend the Syntactic-GCN (Marcheggiani and Titov, 2017; Zhang et al., 2018b) with a graph fusion layer and omit labels in the graph (a minimal sketch follows this list)
  • Ablation study: we investigate the impact of different kinds of structural information in our model on AMR 2.0 across the main sub-tasks
  • Experiment results show that both the explicit structure and the latent structure can improve AMR parsing performance, and that latent structural information reduces errors in sub-tasks such as concepts and SRL
  • We investigate latent structure for AMR parsing, and we show that the inferred latent graph can be interpreted as connection probabilities between input words
  • Experiment results show that the latent structural information improves over the best reported parsing performance on both AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12)
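The highlight above describes fusing the explicit dependency graph with a latent soft graph before a label-free GCN step. Below is a minimal sketch of that idea, assuming row-stochastic soft adjacency matrices; the class name `GraphFusionGCN` and the scalar gate are illustrative choices, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphFusionGCN(nn.Module):
    """Hypothetical fusion of an explicit dependency graph with a latent
    soft graph, followed by one unlabeled GCN step (Syntactic-GCN style)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, hidden_dim)
        # Learned scalar gate interpolating between the two graphs.
        self.gate = nn.Parameter(torch.tensor(0.0))

    def forward(self, h, dep_adj, latent_adj):
        # h:          (batch, n, hidden_dim) token states
        # dep_adj:    (batch, n, n) 0/1 dependency adjacency
        # latent_adj: (batch, n, n) soft adjacency (rows sum to 1)
        g = torch.sigmoid(self.gate)
        fused = g * dep_adj + (1.0 - g) * latent_adj        # graph fusion
        # Row-normalize so each token aggregates a convex combination.
        fused = fused / fused.sum(-1, keepdim=True).clamp(min=1e-8)
        # Unlabeled GCN step: aggregate neighbors, transform, activate.
        return F.relu(self.linear(torch.bmm(fused, h)))

# Toy usage: 2 sentences of 5 tokens with 16-dim states.
layer = GraphFusionGCN(hidden_dim=16)
h = torch.randn(2, 5, 16)
dep = torch.eye(5).expand(2, 5, 5).clone()         # stand-in dependency adjacency
lat = torch.softmax(torch.randn(2, 5, 5), dim=-1)  # stand-in latent graph
print(layer(h, dep, lat).shape)                    # torch.Size([2, 5, 16])
```

In the paper the latent graph is induced without extra annotations; the scalar gate here is just one plausible way to realize a fusion layer.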
Methods
  • 4.1 Setup

    The authors use two standard AMR corpora: AMR 1.0 (LDC2014T12) and AMR 2.0 (LDC2017T10).
  • AMR 1.0 contains 13051 sentences in total.
  • AMR 2.0 is larger; it is split into 36,521, 1,368, and 1,371 sentences for the training, development, and testing sets, respectively.
  • The authors treat AMR 2.0 as the main dataset in the experiments since it is larger
Results
  • Main results: the authors compare SMATCH F1 scores (Cai and Knight, 2013) against the previous best reported models and other recent AMR parsers (a toy SMATCH computation follows this list).
  • For AMR 2.0, with the benefit of the fused structural information, the authors improve on the baseline (Zhang et al., 2019a) by 1.2% F1 with the full model, and by 0.9% F1 in the variant without BERT.
  • Ablation study: the authors investigate the impact of different kinds of structural information in the model on AMR 2.0 across the main sub-tasks.
  • Table 3 shows that the fused structure performs better on most sub-tasks than the explicit and latent structures.
  • The latent structure performs better on the concepts sub-task, and the fused structure brings more information to the negation sub-task, where it obtains improvements of 0.5% and 1.0% over the explicit and latent structures, respectively
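SMATCH scores two AMR graphs by the overlap of their triples. As a back-of-the-envelope illustration, the sketch below computes F1 for a fixed variable mapping; real SMATCH (Cai and Knight, 2013) additionally hill-climbs over mappings between the two graphs' variables, which this toy version omits.

```python
def smatch_f1(pred_triples: set, gold_triples: set) -> float:
    """F1 over AMR triples, assuming variable names are already aligned."""
    matched = len(pred_triples & gold_triples)
    if not pred_triples or not gold_triples or matched == 0:
        return 0.0
    precision = matched / len(pred_triples)
    recall = matched / len(gold_triples)
    return 2 * precision * recall / (precision + recall)

# Triples are (source, relation, target), e.g. for "(w / want-01 :ARG0 (b / boy))":
gold = {("w", "instance", "want-01"), ("b", "instance", "boy"), ("w", "ARG0", "b")}
pred = {("w", "instance", "want-01"), ("b", "instance", "boy")}
print(round(smatch_f1(pred, gold), 3))  # 0.8: precision 1.0, recall 2/3
```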
Conclusion
  • Experiment results show that both the explicit structure and the latent structure can improve AMR parsing performance, and latent structural information reduces errors in sub-tasks such as concepts and SRL.
  • It can be seen that the latent matrix (Figure 3a) tries to retain information from most word pairs, and the AMR root “and” holds high connection probabilities to each word in the sentence.
  • The fused matrix (Figure 3b) holds similar connection probabilities for predicates and arguments in the sentence as well, and it reduces the connection degrees to the determiner “The”, which does not appear in the corresponding AMR graph.
  • The authors propose to incorporate the latent graph into other multi-task learning problems (Chen et al., 2019; Kurita and Søgaard, 2019)
Summary
  • Introduction:

    Abstract Meaning Representations (AMRs) (Banarescu et al., 2013) model sentence-level semantics as rooted, directed, acyclic graphs.
  • AMR introduces re-entrancy relations to depict node reuse in the graphs.
  • It has been adopted in downstream NLP tasks, including text summarization (Liu et al., 2015; Dohare and Karnick, 2017), question answering (Mitra and Baral, 2016) and machine translation (Jones et al., 2012; Song et al., 2019).
  • Reinforcement learning (Naseem et al., 2019) and sequence-to-sequence models (Konstas et al., 2017) have been exploited in AMR parsing as well
Tables
  • Table 1: Main results of SMATCH F1 on the AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12) test sets. Results are evaluated over 3 runs
  • Table 2: Fine-grained F1 scores on the AMR 2.0 (LDC2017T10) test set. N’18 is Naseem et al. (2019); Z’19a is Zhang et al. (2019a); Z’19b is Zhang et al. (2019b)
  • Table 3: Ablation studies of the results for AMR 2.0 (LDC2017T10) on different kinds of structural information in our model
  • Table 4: UAS of the fused and latent graphs, computed against the corresponding explicit dependencies on the test set (we obtain the UAS by predicting the maximum-probability heads in the latent graph; see the sketch after this list)
  • Table 5: Hyper-parameter settings
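Table 4's UAS is computed by reading each token's most probable head off the latent (or fused) adjacency matrix and scoring it against the explicit dependency heads. A minimal NumPy sketch, with illustrative names:

```python
import numpy as np

def latent_uas(latent_adj: np.ndarray, gold_heads: np.ndarray) -> float:
    """latent_adj[i, j]: probability that token j heads token i (rows sum to 1).
    gold_heads[i]: index of token i's head in the explicit dependency tree."""
    pred_heads = latent_adj.argmax(axis=-1)  # maximum-probability head per token
    return float((pred_heads == gold_heads).mean())

# Toy example with 3 tokens and gold heads [1, 1, 0].
adj = np.array([[0.1, 0.8, 0.1],
                [0.2, 0.1, 0.7],   # argmax picks head 2, gold says 1 -> one error
                [0.9, 0.05, 0.05]])
print(latent_uas(adj, np.array([1, 1, 0])))  # 0.666...
```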
Related work
  • Transition-based AMR parsers (Wang et al., 2016; Damonte et al., 2017; Wang and Xue, 2017; Liu et al., 2018; Guo and Lu, 2018; Naseem et al., 2019) suffer from the lack of annotated alignments between words and concept nodes, which are crucial in these models. Lyu and Titov (2018) treat the alignments as a latent variable in their probabilistic model, which jointly obtains the concept, relation and alignment variables. Sequence-to-sequence AMR parsers transform AMR graphs into serialized sequences by external traversal rules, and then restore the generated AMR sequences to graphs, thereby avoiding the alignment issue (Konstas et al., 2017; van Noord and Bos, 2017). Moreover, Zhang et al. (2019a) extend a pointer-generator (See et al., 2017), which can generate a node multiple times without alignments through the copy mechanism.

    With regard to latent structure, Naradowsky et al. (2012) couple syntactically-oriented NLP tasks with combinatorially constrained hidden syntactic representations. Bowman et al. (2016), Yogatama et al. (2017) and Choi et al. (2018) generate unsupervised constituent trees for text classification. The latent constituent trees are shallower than human-annotated ones, yet they can boost the performance of downstream NLP tasks (e.g., text classification). Guo et al. (2019) and Ji et al. (2019) employ self-attention and bi-affine attention mechanisms, respectively, to generate soft connected graphs, and then adopt GNNs to encode the soft structure and exploit the structural information in their work (sketched below).
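As a concrete illustration of the soft-graph construction just mentioned, the sketch below scores every head-dependent pair with a bilinear form and row-softmaxes the scores into connection probabilities that a GNN could consume. It follows the spirit of the bi-affine approach (Ji et al., 2019; Dozat and Manning, 2017) but omits the affine bias terms of a full bi-affine scorer; all names are illustrative.

```python
import torch
import torch.nn as nn

class SoftGraphScorer(nn.Module):
    """Hypothetical bilinear scorer turning encoder states into a soft graph."""

    def __init__(self, hidden_dim: int, arc_dim: int = 64):
        super().__init__()
        self.head_mlp = nn.Linear(hidden_dim, arc_dim)  # token as candidate head
        self.dep_mlp = nn.Linear(hidden_dim, arc_dim)   # token as dependent
        self.U = nn.Parameter(torch.randn(arc_dim, arc_dim) * 0.01)

    def forward(self, h):
        # h: (batch, n, hidden_dim) encoder states
        heads = self.head_mlp(h)
        deps = self.dep_mlp(h)
        # score[b, i, j] = dep_i^T U head_j for every token pair (i, j).
        scores = torch.einsum("bid,de,bje->bij", deps, self.U, heads)
        # Each row becomes a distribution over candidate heads: a soft graph.
        return torch.softmax(scores, dim=-1)

soft_graph = SoftGraphScorer(hidden_dim=16)(torch.randn(2, 5, 16))
print(soft_graph.shape)    # torch.Size([2, 5, 5])
print(soft_graph.sum(-1))  # each row sums to 1
```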
Funding
  • This work is supported by the National Natural Science Foundation of China (NSFC-61772378), the National Key Research and Development Program of China (No. 2017YFC1200500) and the Major Projects of the National Social Science Foundation of China (No. 11&ZD189)
  • We would also like to acknowledge funding support from Westlake University and the Bright Dream Joint Institute for Intelligent Robotics
References
  • Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710, Lisbon, Portugal. Association for Computational Linguistics.
  • Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450.
  • Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
  • Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics.
  • Joost Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963–2977, Florence, Italy. Association for Computational Linguistics.
  • Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1957–1967.
  • Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.
  • Deng Cai and Wai Lam. 2019. Core semantic first: A top-down approach for AMR parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3797–3807, Hong Kong, China. Association for Computational Linguistics.
  • Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics.
  • Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019. A multi-task approach for disentangling syntax and semantics in sentence representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2453–2464, Minneapolis, Minnesota. Association for Computational Linguistics.
  • Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5094–5101.
  • Yoeng-Jin Chu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica, 14:1396–1400.
  • Marco Damonte and Shay B. Cohen. 2019. Structural neural encoders for AMR-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3649–3658, Minneapolis, Minnesota. Association for Computational Linguistics.
  • Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 536–546, Valencia, Spain. Association for Computational Linguistics.
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186.
  • Shibhansh Dohare and Harish Karnick. 2017. Text summarization using abstract meaning representation. CoRR, abs/1706.01678.
  • Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
  • Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards B, 71(4):233–240.
  • Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202–1206, San Diego, California. Association for Computational Linguistics.
  • Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Baltimore, Maryland. Association for Computational Linguistics.
  • Zhijiang Guo and Wei Lu. 2018. Better transition-based AMR parsing with a refined search space. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1712–1722, Brussels, Belgium. Association for Computational Linguistics.
  • Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 241–251, Florence, Italy. Association for Computational Linguistics.
  • Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. CoRR, abs/1606.08415.
  • Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
  • Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2475–2485, Florence, Italy. Association for Computational Linguistics.
  • Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In Proceedings of COLING 2012, pages 1359–1376, Mumbai, India. The COLING 2012 Organizing Committee.
  • Yoon Kim, Yacine Jernite, David A. Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2741–2749.
  • Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
  • Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
  • Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
  • Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146–157, Vancouver, Canada. Association for Computational Linguistics.
  • Ponnambalam Kumaraswamy. 1980. A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(1-2):79–88.
  • Shuhei Kurita and Anders Søgaard. 2019. Multi-task semantic dependency parsing with policy gradient for learning easy-first strategies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2420–2430, Florence, Italy. Association for Computational Linguistics.
  • Matthias Lindemann, Jonas Groschwitz, and Alexander Koller. 2019. Compositional semantic parsing across graphbanks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4576–4585, Florence, Italy. Association for Computational Linguistics.
  • Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. 2015. Toward abstractive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1077–1086, Denver, Colorado. Association for Computational Linguistics.
  • Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, and Ting Liu. 2018. An AMR aligner tuned by transition-based parser. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 – November 4, 2018, pages 2422–2430.
  • Christos Louizos, Max Welling, and Diederik P. Kingma. 2017. Learning sparse neural networks through l0 regularization. CoRR, abs/1712.01312.
  • Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 397–407, Melbourne, Australia. Association for Computational Linguistics.
  • Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60.
  • Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515, Copenhagen, Denmark. Association for Computational Linguistics.
  • Arindam Mitra and Chitta Baral. 2016. Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2779–2785.
  • Eric T. Nalisnick and Padhraic Smyth. 2017. Stick-breaking variational autoencoders. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
  • Jason Naradowsky, Sebastian Riedel, and David Smith. 2012. Improving NLP through marginalization of hidden syntactic structure. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 810–820, Jeju Island, Korea. Association for Computational Linguistics.
  • Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553.
  • Rik van Noord and Johan Bos. 2017. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. CoRR, abs/1705.09980.
  • Hao Peng, Sam Thomson, and Noah A. Smith. 2017. Deep multitask learning for semantic dependency parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2037–2048, Vancouver, Canada. Association for Computational Linguistics.
  • Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.
  • Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing English into abstract meaning representation using syntaxbased machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1143–1154, Lisbon, Portugal. Association for Computational Linguistics.
  • Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Trans. Signal Processing, 45(11):2673–2681.
  • Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083, Vancouver, Canada. Association for Computational Linguistics.
  • Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. Transactions of the Association for Computational Linguistics, 7:19–31.
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008.
  • Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, and Miguel Ballesteros. 2019. Rewarding Smatch: Transition-based AMR parsing with reinforcement learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4586–4592, Florence, Italy. Association for Computational Linguistics.
  • Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. CAMR at SemEval-2016 task 8: An extended transition-based AMR parser. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, pages 1173–1178.
  • Chuan Wang and Nianwen Xue. 2017. Getting the most out of AMR parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1257–1268, Copenhagen, Denmark. Association for Computational Linguistics.
  • Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
  • Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR parsing as sequence-to-graph transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 80–94, Florence, Italy. Association for Computational Linguistics.
  • Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3784–3796, Hong Kong, China. Association for Computational Linguistics.
  • Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2018a. Cross-lingual decompositional semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1664–1675, Brussels, Belgium. Association for Computational Linguistics.
  • Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018b. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215, Brussels, Belgium. Association for Computational Linguistics.