Toward a General Logicist Methodology for Engineering Ethically Correct Robots
IEEE Intelligent Systems, no. 4 (2006): 38-44
Abstract
It's hard to deny that robots will become increasingly capable and that humans will increasingly exploit these capabilities by deploying them in ethically sensitive environments, such as hospitals, where ethically incorrect robot behavior could have dire consequences for humans. How can we ensure that such robots will always behave in an eth…
Introduction
- Given the inevitability of this future, how can the authors ensure that the robots in question always behave in an ethically correct manner? How can the authors know ahead of time, via rationales expressed in clear English, that they will so behave? How can the authors know in advance that their behavior will be constrained by the ethical codes affirmed by human overseers? The authors refer to these queries as the driving questions.
- In this paper the authors provide an answer, in general terms, to these questions.
- The authors strive to give this answer in a manner that makes it understandable to a broad readership, rather than merely to researchers in their own technical paradigm.
- The authors' coverage of computational logic is intended to be self-contained
Highlights
- How will we ensure that the robots in question always behave in an ethically correct manner? How can we know ahead of time, via rationales expressed in clear English, that they will so behave? How can we know in advance that their behavior will be constrained by the ethical codes selected by human overseers? In general, it seems clear that one reply worth considering, put in encapsulated form, is this one: “By insisting that our robots only perform actions that can be proved ethically permissible in a human-selected deontic logic.” (A deontic logic is a logic that formalizes an ethical code.) This approach ought to be explored for a number of reasons
- To illustrate the feasibility of our methodology, we describe it in general terms free of any commitment to particular systems, and show it solving a challenge regarding robot behavior in an intensive care unit
- S2, if followed, precludes a situation caused in part by unethical robot behavior, and by definition regulates robots that find themselves in such pristine environments
- Even if robots never ethically fail, humans will, and robots must be engineered to deal with such situations. That such situations are very challenging, logically speaking, was demonstrated long ago by Roderick Chisholm (1963), who put the challenge in the form of a paradox that continues to fascinate to this day. Consider the following entirely possible situation: (1) It ought to be that Jones does perform lifesaving surgery …
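The encapsulated reply above — let a robot perform only actions that can be proved ethically permissible in a human-selected deontic logic — can be sketched as a small "permissibility gate." Everything below is an illustrative stand-in, not the authors' implementation: the ethical code, the `prove` routine, and the action names are hypothetical, and a real system would dispatch proof obligations to a mechanized deontic logic (e.g. via a prover such as Athena) rather than a lookup table.

```python
# Minimal sketch, under stated assumptions: a robot executes an action only
# when a prover certifies Permissible(action) from a human-selected code.
from typing import Callable, FrozenSet

Formula = str  # placeholder for a real deontic-logic formula type

def make_gate(ethical_code: FrozenSet[Formula],
              prove: Callable[[FrozenSet[Formula], Formula], bool]):
    """Return a filter admitting an action only when `prove` certifies
    that Permissible(action) follows from the ethical code."""
    def permitted(action: str) -> bool:
        return prove(ethical_code, f"Permissible({action})")
    return permitted

# Toy "prover": treats the code as a table of permissibility facts.
# A real system would run a sound deontic-logic proof search instead.
def toy_prove(code: FrozenSet[Formula], goal: Formula) -> bool:
    return goal in code

code = frozenset({"Permissible(administer_medication)"})
gate = make_gate(code, toy_prove)

print(gate("administer_medication"))  # True: provably permissible
print(gate("withhold_treatment"))     # False: no proof, so the robot refrains
```

The key design point, faithful to the paper's proposal, is the default: absence of a proof means the action is withheld, so the burden falls on establishing permissibility rather than detecting impermissibility.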
Conclusion
- Some readers may well wonder whether this optimism is so extreme as to become Pollyannaish.
- Since humans will be collaborating with robots, the approach must deal with the fact that some humans will fail to meet their obligations in the collaboration — and so robots must be engineered so as to deal smoothly with situations in which obligations have been violated
- This is a very challenging class of situations, because in the approach, at least so far, robots are engineered in accordance with the S2 pair introduced at the start of the paper, and in this pair, no provision is made for what to do when the situation in question is fundamentally immoral.
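Chisholm's (1963) paradox, cited above, shows why such contrary-to-duty situations are logically challenging. The following is a standard textbook rendering in standard deontic logic (SDL), not the paper's own notation, with $s$ for "Jones performs the lifesaving surgery" and $t$ for "Jones tells the patient he will":

```latex
% Four intuitively consistent English premises, naively formalized in SDL:
\begin{align}
  &\mathbf{O}\,s
    && \text{(1) Jones ought to perform the surgery}\\
  &\mathbf{O}(s \rightarrow t)
    && \text{(2) it ought to be that if he does, he tells the patient}\\
  &\neg s \rightarrow \mathbf{O}\neg t
    && \text{(3) if he does not, he ought not tell the patient}\\
  &\neg s
    && \text{(4) in fact, he does not}
\end{align}
% From (1), (2) and the K axiom
%   O(s -> t) -> (O s -> O t)
% we derive O t; from (3) and (4) by modus ponens we derive O(not t).
% Together these violate the D axiom O(phi) -> not O(not phi):
% the formalization is inconsistent even though the English premises
% are jointly satisfiable. This is exactly the kind of situation a
% deployed ethical robot must handle gracefully.
```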
Funding
- This work was supported in part by a grant from Air Force Research Labs–Rome; we are most grateful for this support.
References
- Åqvist, L. (1984), Deontic logic, in D. Gabbay & F. Guenthner, eds, ‘Handbook of Philosophical Logic, Volume II: Extensions of Classical Logic’, D. Reidel, Dordrecht, The Netherlands, pp. 605–714.
- Arkoudas, K. (n.d.), Athena. http://www.cag.csail.mit.edu/~kostas/dpls/athena.
- Arkoudas, K. & Bringsjord, S. (2005a), Metareasoning for multi-agent epistemic logics, in ‘Fifth International Conference on Computational Logic In Multi-Agent Systems (CLIMA 2004)’, Vol. 3487 of Lecture Notes in Artificial Intelligence (LNAI), Springer-Verlag, New York, pp. 111–125.
- Arkoudas, K. & Bringsjord, S. (2005b), Toward ethical robots via mechanized deontic logic, Technical Report Machine Ethics: Papers from the AAAI Fall Symposium; FS–05–06, American Association of Artificial Intelligence, Menlo Park, CA.
- Asimov, I. (2004), I, Robot, Spectra, New York, NY.
- Barwise, J. & Etchemendy, J. (1999), Language, Proof, and Logic, Seven Bridges, New York, NY.
- Belnap, N., Perloff, M. & Xu, M. (2001), Facing the Future, Oxford University Press.
- Bringsjord, S. & Ferrucci, D. (1998a), ‘Logic and artificial intelligence: Divorced, still married, separated...?’, Minds and Machines 8, 273–308.
- Bringsjord, S. & Ferrucci, D. (1998b), ‘Reply to Thayse and Glymour on logic and artificial intelligence’, Minds and Machines 8, 313–315.
- Chellas, B. (1969), The Logical Form of Imperatives, PhD dissertation, Stanford Philosophy Department.
- Chellas, B. F. (1980), Modal Logic: An Introduction, Cambridge University Press, Cambridge, UK.
- Chisholm, R. (1963), ‘Contrary-to-duty imperatives and deontic logic’, Analysis 24, 33–36.
- Claessen, K. & Sorensson, N. (2003), New techniques that improve Mace-style model finding, in ‘Model Computation: Principles, Algorithms, Applications (Cade-19 Workshop)’, Miami, Florida.
- Davis, M. (2000), Engines of Logic: Mathematicians and the Origin of the Computer, Norton, New York, NY.
- Feldman, F. (1986), Doing the Best We Can: An Essay in Informal Deontic Logic, D. Reidel, Dordrecht, Holland.
- Feldman, F. (1998), Introduction to Ethics, McGraw Hill, New York, NY.
- Friedland, N., Allen, P., Matthews, G., Witbrock, M., Baxter, D., Curtis, J., Shepard, B., Miraglia, P., Angele, J., Staab, S., Moench, E., Oppermann, H., Wenke, D., Israel, D., Chaudhri, V., Porter, B., Barker, K., Fan, J., Chaw, S. Y., Yeh, P., Tecuci, D. & Clark, P. (2004), ‘Project Halo: Towards a digital Aristotle’, AI Magazine pp. 29–47.
- Genesereth, M. & Nilsson, N. (1987), Logical Foundations of Artificial Intelligence, Morgan Kaufmann, Los Altos, CA.
- Halpern, J., Harper, R., Immerman, N., Kolaitis, P., Vardi, M. & Vianu, V. (2001), ‘On the unusual effectiveness of logic in computer science’, The Bulletin of Symbolic Logic 7(2), 213–236.
- Hilpinen, R. (2001), Deontic Logic, in L. Goble, ed., ‘Philosophical Logic’, Blackwell, Oxford, UK, pp. 159–182.
- Horty, J. (2001), Agency and Deontic Logic, Oxford University Press, New York, NY.
- Joy, W. (2000), ‘Why the Future Doesn’t Need Us’, Wired 8(4).
- Kuhse, H. & Singer, P., eds (2001), Bioethics: An Anthology, Blackwell, Oxford, UK.
- Leibniz (1984), Notes on Analysis, Past Masters: Leibniz, Oxford University Press, Oxford, UK. Translated by George MacDonald Ross.
- Levesque, H. & Brachman, R. (1985), A fundamental tradeoff in knowledge representation and reasoning (revised version), in ‘Readings in Knowledge Representation’, Morgan Kaufmann, Los Altos, CA, pp. 41–70.
- Murakami, Y. (2004), Utilitarian Deontic Logic, in ‘Proceedings of the Fifth International Conference on Advances in Modal Logic (AiML 2004)’, Manchester, UK, pp. 288–302.
- Nilsson, N. (1991), ‘Logic and Artificial Intelligence’, Artificial Intelligence 47, 31–56.
- Reiter, R. (2001), Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems, MIT Press, Cambridge, MA.
- Russell, S. & Norvig, P. (2002), Artificial Intelligence: A Modern Approach, Prentice Hall, Upper Saddle River, NJ.
- Skyrms, B. (1999), Choice and Chance: An Introduction to Inductive Logic, Wadsworth.
- von Wright, G. (1951), ‘Deontic logic’, Mind 60, 1–15.
- Voronkov, A. (1995), ‘The anatomy of vampire: Implementing bottom-up procedures with code trees’, Journal of Automated Reasoning 15(2).
- Wos, L., Overbeek, R., Lusk, E. & Boyle, J. (1992), Automated Reasoning: Introduction and Applications, McGraw Hill, New York, NY.