AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data

EMNLP 2020

Abstract

We propose AutoQA, a methodology and toolkit to generate semantic parsers that answer questions on databases, with no manual effort. Given a database schema and its data, AutoQA automatically generates a large set of high-quality questions for training that covers different database operations. It uses automatic…

Introduction
  • Semantic parsing is the task of mapping natural language sentences to executable logical forms (an illustrative question/logical-form pair is sketched after this list).
  • The Schema2QA toolkit (Xu et al, 2020) demonstrated that it is possible to achieve high accuracy on realistic user inputs using this methodology with a comprehensive set of generic, domain-independent question templates.
  • This approach requires significant manual effort for each domain: developers must specify how each attribute can be referred to using different parts of speech, and crowdworkers are needed to paraphrase the queries.
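For illustration, below is a hedged sketch of the kind of question/logical-form training pairs such a parser is trained on; the ThingTalk-like notation is indicative only and does not reproduce the exact syntax used by Schema2QA or AutoQA.

```python
# Illustrative (hypothetical) question/logical-form pairs for a restaurant-domain
# semantic parser. The ThingTalk-like logical forms are indicative placeholders,
# not the exact syntax used by Schema2QA/AutoQA.
training_pairs = [
    ("show me italian restaurants rated at least 4 stars",
     "@Restaurant() filter servesCuisine =~ 'italian' && aggregateRating.ratingValue >= 4"),
    ("which restaurant has the most reviews",
     "sort(aggregateRating.reviewCount desc of @Restaurant())[1]"),
]
```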
Highlights
  • We apply AutoQA to the Schema2QA dataset and obtain an average logical form accuracy of 62.9% when tested on natural questions, which is only 6.4% lower than a model trained with expert natural language annotations and paraphrase data collected from crowdworkers
  • Semantic parsing is the task of mapping natural language sentences to executable logical forms
  • Our evaluation metric is logical form accuracy: the logical form produced by our parser must exactly match the one in the test set
  • Compared with the baseline models trained with data generated by Schema2QA but without manual annotation and human paraphrases, AutoQA improves the accuracy by 25.3%. This result is obtained on naturally sourced test data, as opposed to paraphrases. This shows that AutoQA is effective for bootstrapping question answering systems for new domains, without any manual effort in creating or collecting training data.
  • We propose AutoQA, a methodology and a toolkit to automatically create a semantic parser given a database.
Methods
  • The authors evaluate the effectiveness of the methodology: can a semantic parser created with AutoQA approach the performance of human-written annotations and paraphrases? The authors evaluate on two different benchmark datasets: the Schema2QA dataset (Xu et al, 2020) and the Overnight dataset (Wang et al, 2015).

    6.1 AutoQA Implementation

    Paraphrasing Model.
  • The authors formulate paraphrasing as a sequence-to-sequence problem and use the pre-trained BART large model (Lewis et al, 2019).
  • The authors fine-tune it for 4 epochs on sentence pairs from PARABANK 2 (Hu et al, 2019a), which is a paraphrase dataset constructed by back-translating the Czech portion of an English-Czech parallel corpus.
  • To ensure the output of the model is grammatical, during training, the authors use the back-translated Czech sentence as the input and the human-written English phrase as the output.
  • Training is done with mini-batches of 1280 examples, where each mini-batch consists of sentences with similar lengths (a hedged fine-tuning sketch follows after this list).
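Below is a minimal sketch of such a paraphraser setup, assuming the HuggingFace transformers implementation of BART-large. Apart from the 4 training epochs, the input/output direction, and the multi-temperature sampling noted under Table 3, the placeholder data, batch size, learning rate, and generation settings are illustrative assumptions.

```python
# Hedged sketch: fine-tune BART-large as a sequence-to-sequence paraphraser on
# (back-translated input, human-written output) pairs, then sample paraphrases
# at several temperatures. Data and hyperparameters below are placeholders.
import torch
from torch.utils.data import DataLoader
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Placeholder (input, target) pairs standing in for the PARABANK 2 subset.
pairs = [
    ("what restaurants serve italian food", "which restaurants offer italian cuisine"),
    ("show hotels with a pool", "list hotels that have a swimming pool"),
]

def collate(batch):
    sources, targets = zip(*batch)
    enc = tokenizer(list(sources), padding=True, truncation=True, return_tensors="pt")
    labels = tokenizer(list(targets), padding=True, truncation=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for epoch in range(4):  # 4 epochs, as described above
    for batch in loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Sample paraphrases at multiple temperatures (cf. the note accompanying Table 3).
model.eval()
inputs = tokenizer(["find me a cheap italian restaurant"], return_tensors="pt")
for temperature in (0.3, 0.5, 0.7, 1.0):
    outputs = model.generate(**inputs, do_sample=True, temperature=temperature,
                             max_length=32, num_return_sequences=2)
    print(temperature, tokenizer.batch_decode(outputs, skip_special_tokens=True))
```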
Results
  • The authors' evaluation metric is logical form accuracy: the logical form produced by the parser must exactly match the one in the test set (a minimal sketch of this metric follows after this list).
  • Compared with the baseline models trained with data generated by Schema2QA but without manual annotation and human paraphrases, AutoQA improves the accuracy by 25.3%.
  • This result is obtained on naturally sourced test data, as opposed to paraphrases.
  • This shows that AutoQA is effective for bootstrapping question answering systems for new domains, without any manual effort in creating or collecting training data.
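The following is a minimal sketch of this exact-match metric, assuming logical forms are compared as whitespace-normalized strings; the example forms are placeholders.

```python
# Minimal sketch of logical form (exact-match) accuracy, assuming predictions and
# gold logical forms are compared as whitespace-normalized strings.
def logical_form_accuracy(predictions, references):
    normalize = lambda lf: " ".join(lf.split())
    matches = sum(normalize(p) == normalize(g) for p, g in zip(predictions, references))
    return matches / len(references) if references else 0.0

# Example usage with placeholder logical forms:
preds = ["@Restaurant() filter servesCuisine =~ 'italian'"]
golds = ["@Restaurant() filter servesCuisine =~ 'italian'"]
print(logical_form_accuracy(preds, golds))  # 1.0
```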
Conclusion
  • The authors propose AutoQA, a methodology and a toolkit to automatically create a semantic parser given a database.
  • The authors test AutoQA on two different datasets with different target logical forms and data synthesis templates.
  • On both datasets, AutoQA achieves comparable accuracy to state-of-the-art QA systems trained with manual attribute annotation and human paraphrases.
  • AutoQA relies on a neural paraphraser trained with an out-of-domain dataset to generate training data.
  • Future work is needed to handle attributes containing long free-form text, as AutoQA currently only supports database operations without reading comprehension.
Objectives
  • The authors' objective is to eliminate the need for manual effort in building semantic parsers, while achieving comparable accuracy.
  • The authors' goal is to automatically derive all the other POS annotations given a canonical annotation (a hedged illustration follows below).
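To make this concrete, the following is a hedged illustration of attribute annotations in several POS categories, in the spirit of Table 2's "alumniOf" example; the category names, phrasings, and the "#value" placeholder syntax are illustrative assumptions rather than AutoQA's exact format.

```python
# Hedged illustration of attribute annotations in several POS categories, in the
# spirit of Table 2's "alumniOf" example. Category names, phrasings, and the
# "#value" placeholder are illustrative assumptions, not AutoQA's exact format.
alumni_of_annotations = {
    "canonical":        ["alumni of #value"],           # the given canonical annotation
    "verb_phrase":      ["graduated from #value", "studied at #value"],
    "passive_verb":     ["educated at #value"],
    "adjective_phrase": ["#value-educated"],
    "prepositional":    ["from #value"],
    "noun_phrase":      ["#value alumni", "graduates of #value"],
}
# Auto-annotation aims to derive the non-canonical phrasings above automatically,
# starting from only the canonical annotation.
```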
Tables
  • Table1: Example questions in the restaurant domain with their ThingTalk representations
  • Table2: Annotations for “alumniOf” attribute with example templates and utterances in six POS categories, where table and value denote the placeholders for table canonical annotations and values, respectively
  • Table3: Size of Schema2QA and AutoQA datasets. Four decoding temperatures (Ficler and Goldberg, 2017) of 0.3, 0.5, 0.7 and 1.0 are used to generate these paraphrases. Note that the input dataset to each paraphrasing round is the output of the previous round, and there is one round for Schema2QA and three rounds for the Overnight experiments.
  • Table4: Test accuracy of AutoQA on the Schema2QA dataset. For the hotel domain, Xu et al (2020) only report transfer learning accuracy, so we rerun the training with manual annotations and human paraphrases to obtain the accuracy for hotel questions
  • Table5: Ablation study on Schema2QA development sets. Each “–” line removes only that feature from AutoQA
  • Table6: Logical form accuracy (left) and answer accuracy (right) percentage on the Overnight test set. Numbers are copied from the cited papers. We report the numbers for the BL-Att model of Damonte et al (2019), Att+Dual+LF of Cao et al (2019), the ZEROSHOT model of Herzig and Berant (2018b), and the Projection model of Marzoev et al (2020). Herzig and Berant (2018b) do not evaluate on the Basketball domain.
Related Work
Funding
  • This work is supported in part by the National Science Foundation under Grant No. 1900638 and the Alfred P. Sloan Foundation under Grant No. G2020-13938.
Study Subjects and Analysis
epochs on sentence pairs: 4
BART is a Transformer (Vaswani et al, 2017) neural network trained on a large unlabeled corpus with a sentence reconstruction loss. We fine-tune it for 4 epochs on sentence pairs from PARABANK 2 (Hu et al, 2019a), which is a paraphrase dataset constructed by back-translating the Czech portion of an English-Czech parallel corpus. We use a subset of 5 million sentence pairs with the highest dual conditional cross-entropy score (Junczys-Dowmunt, 2018), and use only one of the five paraphrases provided for each sentence
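A hedged sketch of this selection step is shown below; the record layout and the helper name select_training_pairs are assumptions made for illustration.

```python
# Hedged sketch of the data-selection step: keep the 5 million PARABANK 2 pairs
# with the highest dual conditional cross-entropy filtering score, using only one
# back-translated paraphrase per sentence. The record layout is an assumption.
import heapq

def select_training_pairs(records, k=5_000_000):
    """records: iterable of (score, human_written_english, [back_translations, ...])."""
    top = heapq.nlargest(k, records, key=lambda r: r[0])
    # Back-translated sentence as model input, human-written English as target,
    # matching the training direction described above.
    return [(back_translations[0], english) for _, english, back_translations in top]
```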

cases: 3
For auto-annotation to work, the table and attribute names must be meaningful and unambiguous, as discussed in Section 4. We found it necessary to override the original names in only three cases. In the restaurants domain, "starRating" is renamed to "michelinStar" to avoid ambiguity with "aggregateRating".
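A hedged sketch of how such name overrides could be represented is shown below; only the "starRating" to "michelinStar" renaming comes from the text, and the mapping structure and helper name are illustrative assumptions.

```python
# Hedged sketch: override ambiguous schema names before auto-annotation. Only the
# "starRating" -> "michelinStar" case comes from the text; the mapping structure
# and helper name are illustrative assumptions.
NAME_OVERRIDES = {
    ("Restaurant", "starRating"): "michelinStar",  # avoids ambiguity with "aggregateRating"
}

def annotation_name(table, attribute):
    """Return the attribute name that auto-annotation should use."""
    return NAME_OVERRIDES.get((table, attribute), attribute)
```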

References
  • Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2020. Do not have enough data? deep learning to the rescue! In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI, pages 7383–7390. AAAI Press.
  • Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
  • Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
  • Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, and Monica S Lam. 2020. Zero-shot transfer learning with synthesized data for multidomain dialogue state tracking. arXiv preprint arXiv:2005.00891.
  • Giovanni Campagna, Silei Xu, Mehrad Moradshahi, Richard Socher, and Monica S. Lam. 2019. Genie: A generator of natural language semantic parsers for virtual assistant commands. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2019, pages 394–410, New York, NY, USA. ACM.
  • Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, and Kai Yu. 2019. Semantic parsing with dual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 51–64.
  • Shuaichen Chang, Pengfei Liu, Yun Tang, Jing Huang, Xiaodong He, and Bowen Zhou. 2019. Zero-shot text-to-SQL learning with auxiliary task. arXiv preprint arXiv:1908.11052.
  • Bo Chen, Le Sun, and Xianpei Han. 2018. Sequence-to-action: End-to-end semantic graph generation for semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 766–777, Melbourne, Australia. Association for Computational Linguistics.
  • Marco Damonte, Rahul Goel, and Tagyoung Chung. 2019. Practical semantic parsing for spoken language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics.
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
  • Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.
  • Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. Proceedings of the Workshop on Stylistic Variation.
  • Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of NAACL-HLT, pages 758–764, Atlanta, Georgia. Association for Computational Linguistics.
  • Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2017. A deep generative framework for paraphrase generation. arXiv preprint arXiv:1709.05074.
  • Junxian He, Jiatao Gu, Jiajun Shen, and Marc’Aurelio Ranzato. 2019. Revisiting self-training for neural sequence generation. arXiv preprint arXiv:1909.13788.
  • Jonathan Herzig and Jonathan Berant. 2018a. Decoupling structure and lexicon for zero-shot semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1619–1629.
  • Jonathan Herzig and Jonathan Berant. 2018b. Decoupling structure and lexicon for zero-shot semantic parsing. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
  • Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
  • Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.
  • J. Edward Hu, Abhinav Singh, Nils Holzenberger, Matt Post, and Benjamin Van Durme. 2019a. Large-scale, diverse, paraphrastic bitexts via sampling and clustering. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 44–54, Hong Kong, China. Association for Computational Linguistics.
  • Zhiting Hu, Bowen Tan, Russ R Salakhutdinov, Tom M Mitchell, and Eric P Xing. 2019b. Learning data manipulation for augmentation and weighting. In Advances in Neural Information Processing Systems, pages 15738–15749.
  • Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).
  • Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.
  • Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 888–895, Belgium, Brussels. Association for Computational Linguistics.
  • Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452–457, New Orleans, Louisiana. Association for Computational Linguistics.
  • Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526.
  • Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained transformer models. arXiv preprint arXiv:2003.02245.
  • Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
  • Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119.
  • Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881–893, Valencia, Spain. Association for Computational Linguistics.
  • Alana Marzoev, Samuel Madden, M. Frans Kaashoek, Michael Cafarella, and Jacob Andreas. 2020. Unnatural language processing: Bridging the gap between synthetic and natural language data. arXiv preprint arXiv:2004.13645.
  • David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152– 159, New York City, USA. Association for Computational Linguistics.
  • Mehrad Moradshahi, Giovanni Campagna, Sina J. Semnani, Silei Xu, and Monica S. Lam. 2020. Localizing open-ontology QA semantic parsers in a day using machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.
  • Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics.
  • Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2923–2934, Osaka, Japan. The COLING 2016 Organizing Committee.
  • Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.
  • Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855.
  • Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6033–6039, Florence, Italy. Association for Computational Linguistics.
  • Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083.
  • Sina J. Semnani, Madhulima Pandey, and Manish Pandey. 2019. Domain-specific question answering at scale for conversational systems. 3rd NeurIPS Conversational AI Workshop.
  • Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155.
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
  • Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332–1342. Association for Computational Linguistics.
  • Nathaniel Weir, Prasetya Utama, Alex Galakatos, Andrew Crotty, Amir Ilkhechi, Shekar Ramaswamy, Rohin Bhushan, Nadja Geisler, Benjamin Hattasch, Steffen Eger, Ugur Cetintemel, and Carsten Binnig. 2020. DBPal: A fully pluggable nl2sql training pipeline. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, SIGMOD ’20, page 2347–2361, New York, NY, USA. Association for Computing Machinery.
  • Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
  • Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018b. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921.
  • Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.
  • Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional BERT contextual augmentation. Computational Science – ICCS 2019, page 84–95.
  • Silei Xu, Giovanni Campagna, Jian Li, and Monica S Lam. 2020. Schema2QA: High-quality and low-cost Q&A agents for the structured web. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management.
  • Qian Yang, Zhouyuan Huo, Dinghan Shen, Yong Cheng, Wenlin Wang, Guoyin Wang, and Lawrence Carin. 2019. An end-to-end generative architecture for paraphrase generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3132–3142, Hong Kong, China. Association for Computational Linguistics.
  • David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 189–196, Cambridge, Massachusetts, USA. Association for Computational Linguistics.
  • Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018a. QANet: Combining local convolution with global self-attention for reading comprehension. ArXiv, abs/1804.09541.
Authors
Silei Xu
Sina Semnani