Trust Modelling and Verification Using Event-B.
CoRR (2023)
Abstract
Trust is a crucial component in collaborative multiagent systems (MAS)
involving humans and autonomous AI agents. Rather than assuming trust based on
past system behaviours, it is important to formally verify trust by modelling
the current state and capabilities of agents. We argue for verifying actual
trust relations based on agents' abilities to deliver intended outcomes in
specific contexts. To enable reasoning about different notions of trust, we
propose using the refinement-based formal method Event-B. Refinement allows
progressively introducing new aspects of trust from abstract to concrete models
incorporating knowledge and runtime states. We demonstrate modelling three
trust concepts and verifying associated trust properties in MAS. The formal,
correctness-by-construction approach allows us to deduce guarantees about
trustworthy autonomy in human-AI partnerships. Overall, our contribution
facilitates rigorous verification of trust in multiagent systems.
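To give a flavour of the approach, a trust relation of the kind described above can be expressed in Event-B as an invariant tying trust to agent capability. The following is a minimal illustrative sketch, not taken from the paper: the carrier sets `AGENTS` and `TASKS`, the constant `capable`, and the variable `trusts` are hypothetical names chosen for this example.

```
CONTEXT TrustCtx
SETS
    AGENTS TASKS
CONSTANTS
    capable
AXIOMS
    axm1: capable ∈ AGENTS ↔ TASKS   // which agent can perform which task
END

MACHINE TrustAbstract
SEES TrustCtx
VARIABLES
    trusts   // a trusts b for task t
INVARIANTS
    inv1: trusts ∈ AGENTS ↔ (AGENTS × TASKS)
    // trust is only well-founded if the trustee is capable of the task
    inv2: ∀ a, b, t · a ↦ (b ↦ t) ∈ trusts ⇒ b ↦ t ∈ capable
EVENTS
    ...
END
```

In a refinement, such an abstract machine would be extended with concrete knowledge and runtime state, with the Event-B proof obligations guaranteeing that `inv2`-style trust properties are preserved.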