What Would It Mean for a Machine to Have a Self?
Social Science Research Network (2022)
Abstract
What would it mean for autonomous AI agents to have a ‘self’? One proposal for a minimal notion of self is a representation of one’s body spatio-temporally located in the world, with a tag of that representation as the agent taking actions in the world. This turns self-representation into a constructive inference process of self-orienting, and raises a challenging computational problem that any agent must solve continually. Here we construct a series of novel ‘self-finding’ tasks modeled on simple video games—in which players must identify themselves when there are multiple self-candidates—and show through quantitative behavioral testing that humans are near optimal at self-orienting. In contrast, well-known Deep Reinforcement Learning algorithms, which excel at learning much more complex video games, are far from optimal. We suggest that self-orienting allows humans to navigate new settings, and that this is a crucial target for engineers wishing to develop flexible agents.
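To make the task paradigm concrete, here is a minimal sketch of a 'self-finding' problem, assuming a one-dimensional grid with several candidate sprites of which only one (the self) moves according to the chosen action while the others move randomly; the grid size, dynamics, and function names are illustrative assumptions, not the paper's actual task design. The agent identifies itself by ruling out candidates whose motion contradicts its actions.

```python
import random

GRID_SIZE = 10      # assumed grid width, not from the paper
N_CANDIDATES = 4    # assumed number of self-candidates

def step(positions, self_idx, action):
    """Move the self-controlled sprite by `action`; distractors move randomly."""
    new_positions = []
    for i, pos in enumerate(positions):
        move = action if i == self_idx else random.choice([-1, 1])
        new_positions.append(max(0, min(GRID_SIZE - 1, pos + move)))
    return new_positions

def infer_self(n_steps=20):
    """Infer which sprite is the self from action-observation contingency."""
    positions = random.sample(range(GRID_SIZE), N_CANDIDATES)
    self_idx = random.randrange(N_CANDIDATES)  # hidden ground truth
    candidates = set(range(N_CANDIDATES))
    for _ in range(n_steps):
        action = random.choice([-1, 1])
        new_positions = step(positions, self_idx, action)
        # Eliminate any candidate whose observed move contradicts the action.
        for i in list(candidates):
            expected = max(0, min(GRID_SIZE - 1, positions[i] + action))
            if new_positions[i] != expected:
                candidates.discard(i)
        positions = new_positions
        if len(candidates) == 1:
            break
    return candidates, self_idx

if __name__ == "__main__":
    guesses, truth = infer_self()
    print(f"remaining candidates: {sorted(guesses)}, true self: {truth}")
```

In this toy version the self is recoverable in a handful of steps because distractors only match the chosen action by chance; the paper's point is that humans solve such contingency inference near-optimally in novel settings, whereas standard Deep RL agents do not.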