Sampling-Based SAT/ASP Multi-model Optimization as a Framework for Probabilistic Inference

INDUCTIVE LOGIC PROGRAMMING (ILP 2018)

Abstract
This paper proposes multi-model optimization through sampling of SAT witnesses or answer sets, with common probabilistic reasoning tasks as primary use cases (including deduction-style probabilistic inference and hypothesis weight learning). Our approach enhances a state-of-the-art SAT/ASP solving algorithm with Gradient Descent as the branching-literal decision heuristic and, optionally, a cost backtracking mechanism. Sampling models with these methods minimizes a task-specific, user-provided multi-model cost function while adhering to given logical background knowledge (either a Boolean formula in CNF or a normal logic program under stable model semantics). Features of the framework include its relative simplicity and high degree of expressiveness, since arbitrary differentiable cost functions and background knowledge can be provided.
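As an informal illustration of the multi-model cost minimization described in the abstract, the following Python sketch mimics the deduction-style probabilistic inference use case on a toy CNF: models are sampled so that the running frequency of a query atom is driven toward a target probability by the gradient of a user-provided cost function. This is a minimal stand-in, not the paper's solver integration; the toy formula, the target probability of 0.4, and the greedy choice among pre-enumerated models in place of gradient-guided branching-literal decisions inside a solver are all assumptions made here for illustration.

```python
from itertools import product

# Toy background knowledge in CNF over atoms 1..3: (a1 OR a2) AND (NOT a1 OR a3).
CNF = [[1, 2], [-1, 3]]
ATOMS = [1, 2, 3]

def satisfies(assignment, cnf):
    """assignment maps atom -> bool; True iff every clause has a satisfied literal."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

# Enumerate all models of the toy CNF up front (a real solver samples them lazily).
MODELS = []
for bits in product([False, True], repeat=len(ATOMS)):
    assignment = dict(zip(ATOMS, bits))
    if satisfies(assignment, CNF):
        MODELS.append(assignment)

# Deduction-style use case (values are illustrative): atom a1 should hold with
# probability 0.4 over the sampled multiset of models.
TARGET = {1: 0.4}

def cost(freq):
    """Differentiable multi-model cost: squared error between target and observed frequencies."""
    return sum((TARGET[a] - freq[a]) ** 2 for a in TARGET)

def gradient(freq):
    """Partial derivatives of the cost w.r.t. each measured atom's frequency."""
    return {a: -2.0 * (TARGET[a] - freq[a]) for a in TARGET}

def next_model(counts, n_sampled):
    """Greedy stand-in for gradient-guided branching: a negative partial derivative
    means the atom's frequency should rise, so prefer a model that makes it true."""
    freq = {a: (counts[a] / n_sampled if n_sampled else 0.5) for a in TARGET}
    grad = gradient(freq)
    return max(MODELS, key=lambda m: sum(-grad[a] * m[a] for a in TARGET))

def sample(n_samples=2000):
    counts = {a: 0 for a in TARGET}
    for i in range(n_samples):
        model = next_model(counts, i)
        for a in TARGET:
            counts[a] += int(model[a])
    freq = {a: counts[a] / n_samples for a in TARGET}
    return freq, cost(freq)

if __name__ == "__main__":
    freq, final_cost = sample()
    print(f"frequency of a1 over sampled models: {freq[1]:.3f}, cost: {final_cost:.6f}")
```

In the framework described by the abstract, the cost gradient instead influences branching-literal decisions inside the enhanced SAT/ASP solving algorithm itself (with optional cost backtracking), and any differentiable multi-model cost function over the given background knowledge can be supplied.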
Keywords
Probabilistic logic programming, SAT, Answer set programming, Projective gradient descent, Numerical optimization, Relational AI