Machine Reading Comprehension

Machine Reading Comprehension (MRC) is a technology that uses algorithms to enable computers to understand the semantics of a passage and answer questions about it. Since both the passage and the questions are expressed in human language, MRC falls within the scope of natural language processing (NLP), and it is one of the field's newest and most active research topics. In recent years, with advances in machine learning, and in deep learning in particular, MRC research has made substantial progress and has begun to prove itself in practical applications.
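Many of the models listed below are extractive: the model assigns each passage token a start score and an end score, and the answer is the highest-scoring valid span. A minimal sketch of that decoding step (the scores here are illustrative stand-ins for a trained model's outputs):

```python
def best_span(start_scores, end_scores, max_len=15):
    """Return (start, end) maximizing start_scores[s] + end_scores[e]
    over valid spans with s <= e < s + max_len."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

tokens = ["The", "capital", "of", "France", "is", "Paris", "."]
start = [0.1, 0.0, 0.0, 0.2, 0.0, 3.0, 0.0]
end   = [0.0, 0.1, 0.0, 0.3, 0.0, 2.5, 0.1]
s, e = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # Paris
```

Real systems compute the scores with a neural encoder and often restrict the search with vectorized top-k operations, but the span-selection logic is the same.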
ICLR, (2020)
This paper introduces a new graph-based recurrent retrieval approach that retrieves reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions.
Cited by 91
Transactions of the Association for Computational Linguistics, (2019)
Text passages are drawn from seven diverse domains.
Cited by 394
National Conference on Artificial Intelligence, (2019)
We first introduce two auxiliary losses to help the reader concentrate on answer extraction and no-answer detection respectively, and utilize an answer verifier to validate the legitimacy of the predicted answer; three different architectures are investigated.
Cited by 94
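The read-then-verify idea above can be reduced to a small decision rule: extract the best span, then accept it only when the reader's span-vs-no-answer margin, combined with the verifier's judgment, is high enough. A toy sketch of that rule (the equal weighting and threshold are illustrative assumptions, not the paper's actual architecture):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def answer_or_abstain(span_score, no_answer_score, verifier_prob, threshold=0.5):
    """Accept the extracted span only when the reader's margin over the
    no-answer option, mixed with an answer verifier's probability that the
    span is supported by the passage, clears a threshold."""
    margin = span_score - no_answer_score              # reader's confidence margin
    confidence = 0.5 * sigmoid(margin) + 0.5 * verifier_prob
    return confidence >= threshold

# A confident span backed by the verifier is accepted...
print(answer_or_abstain(4.0, 1.0, 0.9))   # True
# ...while a weak span with a sceptical verifier is rejected.
print(answer_or_abstain(0.5, 2.0, 0.2))   # False
```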
Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, Christopher D. Manning
EMNLP/IJCNLP (1), pp.2590-2602, (2019)
We present GOLDEN Retriever, which iterates between reading context and retrieving more supporting documents to answer open-domain multi-hop questions.
Cited by 46
EMNLP/IJCNLP (1), pp.5924-5931, (2019)
We present QUOREF, a focused reading comprehension benchmark that evaluates the ability of models to resolve coreference.
Cited by 41
North American Chapter of the Association for Computational Linguistics, (2018)
Recent empirical improvements due to transfer learning with language models have demonstrated that rich, unsupervised pre-training is an integral part of many language understanding systems.
Cited by 19480
North American Chapter of the Association for Computational Linguistics, (2018)
We have introduced a general approach for learning high-quality deep context-dependent representations from a bidirectional language model, and shown large improvements when applying ELMo to a broad range of NLP tasks.
Cited by 6721
pp.12 (2018)
We introduced a framework for achieving strong natural language understanding with a single task-agnostic model through generative pre-training and discriminative fine-tuning.
Cited by 2183
ACL, (2018): 784-789
Machine reading comprehension has become a central task in natural language understanding, fueled by the creation of many large-scale datasets.
Cited by 799
ICLR, (2018)
We propose a fast and accurate end-to-end model, QANet, for machine reading comprehension.
Cited by 548
Christopher R. Clark, Matt Gardner
Meeting of the Association for Computational Linguistics, (2018)
We have shown that, when using a paragraph-level QA model across multiple paragraphs, our training method of sampling non-answer-containing paragraphs while using a shared-norm objective function can be very beneficial.
Cited by 270
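The shared-norm objective mentioned above normalizes answer scores jointly across all sampled paragraphs, rather than within each paragraph separately, so a paragraph that contains no answer can assign low probability to all of its spans. A minimal sketch of that shared softmax (the score values are illustrative):

```python
import math

def shared_norm_probs(paragraph_scores):
    """Softmax over the concatenation of all paragraphs' token scores,
    instead of normalizing each paragraph independently."""
    flat = [s for para in paragraph_scores for s in para]
    m = max(flat)                             # subtract max for numerical stability
    exps = [math.exp(s - m) for s in flat]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Re-split the single distribution into per-paragraph lists.
    out, i = [], 0
    for para in paragraph_scores:
        out.append(probs[i:i + len(para)])
        i += len(para)
    return out

# Two paragraphs; only the first contains the answer (high score at index 1).
probs = shared_norm_probs([[0.0, 5.0, 0.0], [0.0, 0.0]])
print(round(sum(probs[0]) + sum(probs[1]), 6))  # 1.0 — one distribution over both
```

Because the normalization is shared, the model's confidence in a span is directly comparable across paragraphs, which is what makes multi-paragraph selection well calibrated.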
EMNLP, (2018)
Of these, following the transition matrix gives the best performance, reinforcing the observation that the dialog context plays a significant role in the task.
Cited by 253
IJCAI, pp.4099-4106, (2018)
A reattention mechanism is introduced to alleviate the problems of attention redundancy and deficiency in multi-round alignment architectures.
Cited by 242
Transactions of the Association for Computational Linguistics, (2018)
We have introduced a new dataset and a set of tasks for training and evaluating reading comprehension systems, borne from an analysis of the limitations of existing datasets and tasks.
Cited by 237
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord
arXiv: Artificial Intelligence, (2018)
To help the field move towards more difficult tasks, we have presented the AI2 Reasoning Challenge, consisting of a new question set, text corpus, and baselines, whose Challenge partition is hard for retrieval and co-occurrence methods.
Cited by 176
Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, Weizhu Chen
International Conference on Learning Representations, (2018)
We applied FusionNet to the machine reading comprehension task; experimental results show that FusionNet outperforms existing machine reading models on both the Stanford Question Answering Dataset (SQuAD) and the adversarial SQuAD dataset.
Cited by 114
International Conference on Learning Representations, (2018)
The results showed that R3 achieved F1 56.0 / EM 50.9 on the Wiki domain and F1 68.5 / EM 63.0 on the Web domain, which is competitive with the state of the art.
Cited by 110
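The F1 and Exact Match (EM) figures quoted in these entries are the standard SQuAD-style metrics: EM checks whether the predicted string equals a gold answer exactly, while F1 measures token overlap between prediction and gold. A simplified sketch (the official evaluation script additionally lowercases and strips articles and punctuation, which is omitted here):

```python
from collections import Counter

def exact_match(prediction, gold):
    """1/0 string match after trimming whitespace."""
    return prediction.strip() == gold.strip()

def f1_score(prediction, gold):
    """Token-level F1: harmonic mean of precision and recall on the
    multiset of overlapping tokens."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "Paris"))            # True
print(round(f1_score("in Paris", "Paris"), 2))  # 0.67
```

Corpus-level scores are simply these per-question values averaged over the dataset, taking the maximum over the available gold answers for each question.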
Meeting of the Association for Computational Linguistics, (2018)
The model achieves results competitive with the state of the art on the Stanford Question Answering Dataset leaderboard, as well as on the Adversarial SQuAD and MS MARCO datasets.
Cited by 100
Wei Wang, Ming Yan, Chen Wu
Meeting of the Association for Computational Linguistics, (2018): 1705-1714
We introduce a novel hierarchical attention network, a state-of-the-art reading comprehension model that conducts attention and fusion horizontally and vertically across layers, at different levels of granularity, between question and paragraph.
Cited by 81
Meeting of the Association for Computational Linguistics, (2018)
We proposed an efficient and robust question answering system that is scalable to large documents and robust to adversarial inputs.
Cited by 80