Automatic Analysis of Substantiation in Scientific Peer Reviews.
CoRR (2023)
Abstract
With the increasing number of problematic peer reviews in top AI conferences,
the community urgently needs automatic quality control measures. In this
paper, we restrict our attention to substantiation -- one popular quality
aspect indicating whether the claims in a review are sufficiently supported by
evidence -- and provide a solution automating this evaluation process. To
achieve this goal, we first formulate the problem as claim-evidence pair
extraction in scientific peer reviews, and collect SubstanReview, the first
annotated dataset for this task. SubstanReview consists of 550 reviews from NLP
conferences annotated by domain experts. On the basis of this dataset, we train
an argument mining system to automatically analyze the level of substantiation
in peer reviews. We also perform data analysis on the SubstanReview dataset to
obtain meaningful insights on peer reviewing quality in NLP conferences over
recent years.