FOLIO: Natural Language Reasoning with First-Order Logic

Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Wenfei Zhou, James Coady, David Peng, Yujie Qiao, Luke Benson, Lucy Sun, Alex Wardle-Solano, Hannah Szabo, Ekaterina Zubova, Matthew Burtell, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Alexander R. Fabbri, Wojciech Kryscinski, Semih Yavuz, Ye Liu, Xi Victoria Lin, Shafiq Joty, Yingbo Zhou, Caiming Xiong, Rex Ying, Arman Cohan, Dragomir Radev

arXiv (2022)

Abstract
Large language models (LLMs) have achieved remarkable performance on a variety of natural language understanding tasks. However, existing benchmarks are inadequate for measuring the complex logical reasoning capabilities of a model. We present FOLIO, a human-annotated, logically complex and diverse dataset for reasoning in natural language (NL), equipped with first-order logic (FOL) annotations. FOLIO consists of 1,430 examples (unique conclusions), each paired with one of 487 sets of premises used to deductively reason about the validity of each conclusion. The logical correctness of the premises and conclusions is ensured by their FOL annotations, which are automatically verified by an FOL inference engine. In addition to the main NL reasoning task, the NL-FOL pairs in FOLIO constitute a new NL-FOL translation dataset. Our experiments on FOLIO systematically evaluate the FOL reasoning ability of supervised fine-tuning on medium-sized language models. For both NL reasoning and NL-FOL translation, we benchmark multiple state-of-the-art language models. Our results show that a subset of FOLIO presents a challenge for one of the most capable publicly available LLMs, GPT-4.
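
The abstract states that each example's FOL annotations are automatically verified by an FOL inference engine. The sketch below illustrates that kind of entailment check using an off-the-shelf solver (Z3 via the z3-solver Python package); the predicates, constants, and example sentences are illustrative placeholders, not taken from FOLIO, and the authors' actual verification pipeline may differ.

# Minimal sketch (not the authors' pipeline): check whether a conclusion
# follows from FOL premises by asking a solver whether premises AND NOT(conclusion)
# is unsatisfiable. Predicate and constant names here are hypothetical.
from z3 import (DeclareSort, Const, Function, BoolSort,
                ForAll, Implies, Not, Solver, unsat)

Object = DeclareSort("Object")                    # domain of discourse
Cat = Function("Cat", Object, BoolSort())         # Cat(x): x is a cat
Mammal = Function("Mammal", Object, BoolSort())   # Mammal(x): x is a mammal
x = Const("x", Object)
tom = Const("tom", Object)

premises = [
    ForAll([x], Implies(Cat(x), Mammal(x))),      # "All cats are mammals."
    Cat(tom),                                     # "Tom is a cat."
]
conclusion = Mammal(tom)                          # "Tom is a mammal."

s = Solver()
s.add(*premises)
s.add(Not(conclusion))                            # premises ∧ ¬conclusion
# unsat -> the conclusion is entailed by the premises;
# sat   -> a countermodel exists, so the conclusion is not entailed.
print("entailed" if s.check() == unsat else "not entailed")

In the same spirit, a conclusion whose negation is also consistent with the premises would be neither provable nor refutable, which is how an "unknown" verdict could be distinguished from a definite true/false one.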