Collective Responsibility in Multiagent Settings


Scientific and technological advancements in autonomous agents and multiagent systems offer a promising prospect for more reliable and effective supply chain, transportation, and healthcare systems [Iqbal et al. 2016; Gerding et al. 2011; Chaib-Draa and Müller 2006]. At the current stage, however, deployed autonomous systems mainly aim for technical functionality and efficiency; they are incapable of reasoning about how, and to what extent, each agent is responsible for, and should later account for, undesirable situations such as a collision among autonomous vehicles. This is mainly because they do not take into account the fact that autonomous systems and artificial intelligence technologies are embedded in a social context with other actors and stakeholders. To foster such an embedding, we follow [Jennings and Mamdani 1992] and argue that the meta-level notion of collective responsibility is applicable for coordinating such systems and ensuring their socio-technically desirable behaviour. In multiagent settings, an open problem is to determine and distinguish who is responsible, blameworthy, accountable, or sanctionable in a human-agent collective [Jennings et al. 2014; Abeywickrama et al. 2019]. (In the next section, we clarify how these forms of responsibility relate.) A second challenge is how collective-level responsibility can be justifiably ascribed to individuals in a collective. In the literature on responsibility reasoning, this is known as the problem of many hands [Frankfurt 1969], where we face responsibility voids [Braham and van Hees 2011]: a class of situations in which a collective is responsible for an outcome, but there exists no principled method to link the outcome to the individuals in the collective, so ascribing individual responsibilities is problematic.
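The structure of a responsibility void can be made concrete with a toy example. Below is a minimal sketch (our own illustration, not a formalism from the source): three agents vote, harm occurs under a majority of "yes" votes, and individual responsibility is tested by simple pivotality (could this agent alone have changed the outcome?). All function names and the voting scenario are illustrative assumptions.

```python
from itertools import product

def harm(votes):
    """Harm occurs when a majority (>= 2 of 3) votes 'yes' (1)."""
    return sum(votes) >= 2

def pivotal(i, votes):
    """Individual test: could agent i alone have prevented the harm
    by acting differently, holding the others fixed?"""
    flipped = list(votes)
    flipped[i] = 1 - flipped[i]
    return harm(votes) and not harm(tuple(flipped))

def collective_can_avoid(votes):
    """Collective test: is there any joint action profile under which
    the harm would not have occurred?"""
    return harm(votes) and any(not harm(p)
                               for p in product((0, 1), repeat=len(votes)))

actual = (1, 1, 1)  # all three agents voted 'yes'; harm occurs

print(collective_can_avoid(actual))            # True: the group could have avoided harm
print([pivotal(i, actual) for i in range(3)])  # [False, False, False]: no agent is pivotal
```

Here the collective clearly could have avoided the harm, yet no single agent's deviation changes the outcome, so a naive counterfactual test ascribes responsibility to no one: a responsibility void in miniature.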
This lack of collective responsibility reasoning methods, able to capture new forms of agency and autonomy, calls for developing techniques that capture the strategic, temporal, epistemic, and normative aspects of collective responsibility in multiagent settings. Such tools will provide a basis for ensuring the responsible and trustworthy behaviour of autonomous systems and, in turn, for preserving social values key to the successful deployment of human-centred artificial intelligence in society. Against this background, this work highlights key aspects of various notions of collective responsibility and suggests an approach that employs multiagent temporal logics to formally represent and reason about different forms of responsibility in multiagent settings.
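To indicate how a multiagent temporal logic might express such notions, consider the following sketch in the style of alternating-time temporal logic (ATL). This is one common reading from the strategic-ability literature, offered as an illustrative assumption rather than as the definitions developed later in this work:

```latex
% Coalition C has the strategic power to preclude the undesirable outcome \varphi:
\langle\langle C \rangle\rangle \, \Box \neg\varphi

% A responsibility void for \varphi among agents N = \{1,\dots,n\}: the grand
% coalition could have precluded \varphi, yet no individual agent could have:
\langle\langle N \rangle\rangle \, \Box \neg\varphi
\;\wedge\;
\bigwedge_{i \in N} \neg \langle\langle \{i\} \rangle\rangle \, \Box \neg\varphi
```

On this reading, the strategic modality $\langle\langle C \rangle\rangle$ captures the "could have prevented" ingredient of responsibility, and the gap between collective and individual ability is exactly where the problem of many hands arises.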