Dimension-Prompts Boost Commonsense Consolidation

Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023)

Abstract
Neural knowledge models have emerged and advanced commonsense-centric knowledge grounding. They parameterize a small, curated seed commonsense knowledge graph (CS-KG) into a language model so that it generalizes beyond the seed. A current trend is to scale the seed up by directly mixing multiple CS-KG sources (e.g., ATOMIC, ConceptNet) into one model. However, such brute-force mixing inevitably hinders effective knowledge consolidation due to i) ambiguous, polysemous, and/or inconsistent relations across sources and ii) knowledge of distinct types (e.g., causal, temporal) being learned in an entangled manner. To mitigate this, we adopt the concept of commonsense knowledge dimensions and propose a brand-new dimension-disentangled knowledge model (D²KM) learning paradigm over multiple sources. That is, a generative language model with dimension-specific soft prompts is trained to disentangle knowledge acquisition along different dimensions and to facilitate intra-dimension consolidation across CS-KG sources. Experiments show our knowledge model outperforms its baselines in both standard and zero-shot scenarios.
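To make the mechanism concrete, below is a minimal sketch of how dimension-specific soft prompts can be prepended to a generative language model, as the abstract describes. This is not the paper's released implementation: the GPT-2 backbone, the dimension names, the prompt length, and the ATOMIC-style training example are all illustrative assumptions.

```python
# Minimal sketch: one learnable soft-prompt matrix per commonsense dimension,
# prepended to the token embeddings of a shared generative LM.
# Assumptions (not from the paper): GPT-2 backbone, 3 dimensions, 10 prompt vectors.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

DIMENSIONS = ["causal", "temporal", "physical"]  # hypothetical dimension set
PROMPT_LEN = 10                                  # soft-prompt vectors per dimension (assumed)

class DimensionPromptLM(nn.Module):
    def __init__(self, model_name="gpt2"):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained(model_name)
        hidden = self.lm.config.n_embd
        # One learnable prompt matrix per dimension, trained jointly with the LM.
        self.prompts = nn.ParameterDict({
            dim: nn.Parameter(torch.randn(PROMPT_LEN, hidden) * 0.02)
            for dim in DIMENSIONS
        })

    def forward(self, input_ids, dimension, labels=None):
        # Embed the discrete tokens, then prepend the chosen dimension's soft prompt.
        tok_emb = self.lm.transformer.wte(input_ids)             # (B, T, H)
        batch = input_ids.size(0)
        prompt = self.prompts[dimension].unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)      # (B, P+T, H)
        if labels is not None:
            # Mask the prompt positions out of the LM loss (-100 is ignored).
            pad = torch.full((batch, PROMPT_LEN), -100,
                             dtype=labels.dtype, device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds, labels=labels)

# Illustrative usage on an ATOMIC-style triple routed to the "causal" dimension.
tok = GPT2Tokenizer.from_pretrained("gpt2")
model = DimensionPromptLM()
ids = tok("PersonX eats food. xEffect: PersonX feels full.",
          return_tensors="pt").input_ids
out = model(ids, dimension="causal", labels=ids)
out.loss.backward()  # gradients flow into the shared LM and the causal prompt only
```

In this sketch, each training triple updates only the prompt of its own dimension while the backbone is shared across all sources, which is one plausible way the dimension-wise disentanglement and intra-dimension consolidation described above could be realized.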
Keywords
neural knowledge models, commonsense knowledge construction