Combining Direct Trust and Indirect Trust in Multi-Agent Systems

IJCAI 2020, pp. 311–317.

DOI: https://doi.org/10.24963/ijcai.2020/44
Abstract

To assess the trustworthiness of an agent in a multi-agent system, one often combines two types of trust information: direct trust information derived from one's own interactions with that agent, and indirect trust information based on advice from other agents. This paper provides the first systematic study on when it is beneficial to com…

Introduction
  • Trust and reputation systems constitute an active branch of research in multi-agent systems.
  • Agents interact with one another in order to collect information, goods, or services that help with completing a set task.
  • For such interactions to be largely successful, agents try to estimate how trustworthy other individual agents are.
  • Agent A’s indirect trust in B is based on recommendations about B that one or more third-party advisors have provided to A
Highlights
  • Trust and reputation systems constitute an active branch of research in multi-agent systems
  • While the approaches to modeling trust cover a wide variety of techniques, the literature usually distinguishes between methods for computing direct trust and those for computing indirect trust [Jøsang et al., 2007]
  • An asterisk additionally indicates a worse relative frequency of unsuccessful interactions (RFU) compared to using indirect trust alone
  • We provided the first systematic study on when and how to combine direct with indirect trust in decision-making
  • The results of our broad empirical analysis show that the best methods for computing indirect trust benefit from incorporating direct trust only in certain categories of settings, especially when advisors change their behavior dynamically
  • Combining this method with the indirect trust system ITEA yields a system that is very robust across a wide variety of scenarios and in most cases outperforms or is on par with all other tested systems
Methods
  • A straightforward approach to combining direct and indirect trust would be to assign each a fixed weight and calculate the weighted average.
  • The combination methods the authors tested include the following: Only Indirect.
  • This method uses indirect trust only; direct trust is ignored completely.
  • It serves as a baseline to test whether combining direct and indirect trust leads to any improvement
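The fixed-weight combination and the Only Indirect baseline described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function names and the example weight are assumptions:

```python
def combine_fixed_weight(direct: float, indirect: float, w: float) -> float:
    """Weighted average of direct and indirect trust, with a fixed weight w in [0, 1].

    w = 1 relies on one's own interaction history alone; w = 0 relies on advisors alone.
    """
    return w * direct + (1 - w) * indirect


def only_indirect(direct: float, indirect: float) -> float:
    """Baseline: ignore direct trust completely and use indirect trust as-is."""
    return indirect


# Equal weighting of a direct estimate 0.9 and an indirect estimate 0.5:
print(combine_fixed_weight(0.9, 0.5, 0.5))  # about 0.7
print(only_indirect(0.9, 0.5))              # 0.5
```

The weakness of the fixed-weight approach, which motivates the paper's comparison of combination methods, is that no single weight fits all settings, e.g. when advisors change their behavior dynamically.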
Results
  • For each setting, the authors measured RFU after 500 positive interactions, for four different percentages (20%, 40%, 60%, 80%) of dishonest advisors chosen at random from the set of all advisors.
  • Each table has a section for each system tested (ITEA, TRAVOS, ACT, MET), containing a row for each combination method from Section 3.
  • Rows marked ACT/AC refer to the full ACT system.
  • TRAVOS+ rows refer to the full TRAVOS system.
  • A bold entry indicates a statistically significant difference compared to that system using indirect trust alone.
  • An asterisk additionally indicates a worse RFU compared to using indirect trust alone
Conclusion
  • The authors provided the first systematic study on when and how to combine direct with indirect trust in decision-making.
  • One of the methods for combining direct and indirect trust dominates all other tested methods, regardless of the indirect trust method used in conjunction.
  • Combining this method with the indirect trust system ITEA yields a system that is very robust across a wide variety of scenarios and in most cases outperforms or is on par with all other tested systems
Tables
  • Table1: Settings 1–4. OI = Only Indirect. EF = ExpFun. AC = ACT-RL. OL = Online Learning. Avg = Average. DA = Direct as Advisor
  • Table2: Settings 7–10. OI = Only Indirect. EF = ExpFun. AC = ACT-RL. OL = Online Learning. Avg = Average. DA = Direct as Advisor
  • Table3: Settings 11–14. OI = Only Indirect. EF = ExpFun. AC = ACT-RL. OL = Online Learning. Avg = Average. DA = Direct as Advisor
  • Table4: Dominance analysis
Related Work
  • As in many studies in the literature, we assume that interactions with a trustee (the agent with which to interact) have binary outcomes, i.e., they can be either positive or negative. The relative frequency of positive interactions with a trustee can then be seen as that trustee’s trustworthiness.

    The vast majority of trust systems compute direct trust information using the Beta Reputation System (BRS) [Jøsang and Ismail, 2002]. Following the notation in [Parhizkar et al., 2019], trustees are denoted sj, indexed by j. The direct trust an agent has in trustee sj is then given by

    brs(pj, nj) = (pj + 1) / (pj + nj + 2),    (1)

    where pj and nj refer to the number of positive, resp. negative, interactions that the agent has had with sj. A larger value of this measure suggests higher trustworthiness of sj. Many systems nevertheless store the numbers pj and nj separately, since two pairs (p, n) and (p′, n′) may yield the same value of brs even though p + n is much larger than p′ + n′; in that case the trustworthiness value derived from (p, n) warrants greater confidence than that derived from (p′, n′), since it stems from a larger number of interactions.
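The BRS point estimate, i.e. the expected value (p + 1)/(p + n + 2) of a Beta(p + 1, n + 1) distribution, and the reason for storing p and n separately can be illustrated with a short sketch. This is plain illustrative Python, not code from the paper:

```python
def brs(p: int, n: int) -> float:
    """Beta Reputation System point estimate: the mean of Beta(p + 1, n + 1)."""
    return (p + 1) / (p + n + 2)

# With no interactions at all, the estimate defaults to maximal uncertainty:
assert brs(0, 0) == 0.5

# Two (p, n) pairs can yield exactly the same trust value...
assert brs(1, 1) == brs(49, 49) == 0.5
# ...but the second pair stems from 98 interactions rather than 2, so the
# derived value is more reliable; keeping p and n separate preserves that
# confidence information, which a single brs value would discard.
```

The +1/+2 terms come from the uniform Beta(1, 1) prior, which is why an agent with no history starts at 0.5 rather than at an undefined 0/0.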
References
  • [Cohen et al., 2018] Robin Cohen, Peng F. Wang, and Zehong Hu. Revisiting public reputation calculation in a personalized trust model. In Proceedings of the 20th International Trust Workshop, pages 13–24, 2018.
  • [Huynh et al., 2006] Trung Dong Huynh, Nicholas R. Jennings, and Nigel R. Shadbolt. An integrated trust and reputation model for open multi-agent systems. Autonomous Agents and Multi-Agent Systems, 13(2):119–154, 2006.
  • [Irissappane and Zhang, 2017] Athirai A. Irissappane and Jie Zhang. Filtering unfair ratings from dishonest advisors in multi-criteria e-markets: a biclustering-based approach. Autonomous Agents and Multi-Agent Systems, 31:36–65, 2017.
  • [Jiang et al., 2013] Siwei Jiang, Jie Zhang, and Yew-Soon Ong. An evolutionary model for constructing robust trust networks. In Proceedings of the 12th International Conference on Autonomous Agents and Multi-Agent Systems, pages 813–820, 2013.
  • [Jøsang and Ismail, 2002] Audun Jøsang and Roslan Ismail. The Beta Reputation System. In Proceedings of the 15th Bled Electronic Commerce Conference, pages 2502–2511, 2002.
  • [Jøsang et al., 2007] Audun Jøsang, Roslan Ismail, and Colin Boyd. A survey of trust and reputation systems for online service provision. Decision Support Systems, 43:618–644, 2007.
  • [Liu et al., 2011] Siyuan Liu, Jie Zhang, Chunyan Miao, Yin-Leng Theng, and Alex C. Kot. iCLUB: an integrated clustering-based approach to improve the robustness of reputation systems. In Proceedings of the 10th International Conference on Autonomous Agents and Multi-Agent Systems, pages 1151–1152, 2011.
  • [Liu et al., 2017] Yuan Liu, Jie Zhang, Quanyan Zhu, and Xingwei Wang. CONGRESS: A hybrid reputation system for coping with rating subjectivity. IEEE Transactions on Computational Social Systems, pages 163–178, 2017.
  • [Parhizkar et al., 2019] Elham Parhizkar, Mohammad Hossein Nikravan, and Sandra Zilles. Indirect trust is simple to establish. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 3216–3222, 2019.
  • [Regan et al., 2006] Kevin Regan, Pascal Poupart, and Robin Cohen. Bayesian reputation modeling in e-marketplaces sensitive to subjectivity, deception and change. In Proceedings of the AAAI National Conference on Artificial Intelligence, pages 1206–1212, 2006.
  • [Teacy et al., 2006] W. T. Luke Teacy, Jigar Patel, Nicholas R. Jennings, and Michael Luck. TRAVOS: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems, 12:183–198, 2006.
  • [Teacy et al., 2012] W. T. Luke Teacy, Michael Luck, Alex Rogers, and Nicholas R. Jennings. An efficient and versatile approach to trust and reputation using hierarchical Bayesian modelling. Artificial Intelligence, 193:149–185, 2012.
  • [Weng et al., 2010] Jianshu Weng, Zhiqi Shen, Chunyan Miao, Angela Goh, and Cyril Leung. Credibility: How agents can handle unfair third-party testimonies in computational trust models. IEEE Transactions on Knowledge and Data Engineering, 22:1286–1298, 2010.
  • [Yu and Singh, 2003] Bin Yu and Munindar P. Singh. Detecting deception in reputation management. In Proceedings of the 2nd International Conference on Autonomous Agents and Multi-Agent Systems, pages 73–80, 2003.
  • [Yu et al., 2014] Han Yu, Zhiqi Shen, Chunyan Miao, Bo An, and Cyril Leung. Filtering trust opinions through reinforcement learning. Decision Support Systems, 66:102–113, 2014.