Are Large Language Models Table-based Fact-Checkers?
2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD), 2024
Abstract
Table-based Fact Verification (TFV) aims to extract the entailment relation between statements and structured tables. Existing TFV methods based on small-scale models suffer from insufficient labeled data and weak zero-shot ability. Recently, Large Language Models (LLMs) have attracted considerable attention across research fields. They have shown powerful zero-shot and in-context learning abilities on several NLP tasks, but their potential on TFV is still unknown. In this work, we conduct a preliminary study of whether LLMs are table-based fact-checkers. Specifically, we design diverse prompts to explore how in-context learning can help LLMs in TFV, i.e., their zero-shot and few-shot TFV capability. In addition, we carefully design and construct TFV instructions to study the performance gain brought by instruction tuning of LLMs. Experimental results demonstrate that LLMs can achieve acceptable results on zero-shot and few-shot TFV with prompt engineering, while instruction tuning can stimulate TFV capability significantly. We also make some valuable findings about the format of zero-shot prompts and the number of in-context examples. Finally, we analyze possible directions to improve the accuracy of TFV via LLMs, which is beneficial to further research on table reasoning.
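To make the task setup concrete, the following is a minimal sketch of how a zero-shot TFV prompt might be constructed from a table and a statement. The Markdown-style table serialization and the exact prompt wording are assumptions for illustration; the paper's actual prompt templates are not reproduced here.

```python
def build_zero_shot_prompt(header, rows, statement):
    """Serialize a table and a statement into a zero-shot TFV prompt.

    The serialization format (pipe-separated rows) and instruction
    wording are illustrative assumptions, not the paper's templates.
    """
    table_lines = [" | ".join(header)]
    table_lines += [" | ".join(str(cell) for cell in row) for row in rows]
    table_text = "\n".join(table_lines)
    return (
        "Based on the table below, decide whether the statement is "
        "entailed or refuted. Answer with 'entailed' or 'refuted'.\n\n"
        f"Table:\n{table_text}\n\n"
        f"Statement: {statement}\nAnswer:"
    )

# Hypothetical example table and statement for demonstration.
prompt = build_zero_shot_prompt(
    ["Player", "Goals"],
    [["Messi", 30], ["Ronaldo", 28]],
    "Messi scored more goals than Ronaldo.",
)
print(prompt)
```

A few-shot variant would prepend several (table, statement, label) demonstrations in the same format before the query; the number of such in-context examples is one of the factors the paper studies.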
Key words
Table-based Fact Verification, Large Language Models, In-context Learning, Instruction Tuning