Embodied Agents
By ai-depot | June 30, 2002
The essay describes a common technique used for creating realistic animats: embodiment. The advantages of modeling a synthetic creature’s body are discussed, along with specific issues that this paradigm entails for computer game AI and other applications.
Written by Alex J. Champandard.
Overview & Motivation
Modern research in artificial intelligence strives as much to create realistic behaviour as it does raw intelligence. This is an effort to reproduce human-like intelligence, rather than just generic machine intelligence. One field that takes the former goal to heart is animat research, which focuses on creating virtual animals, or even human-like synthetic creatures.
This essay describes a major trend in this field: embodied agents. They are also known as embedded agents, but the potential confusion with portable electronics is a reason to shy away from that terminology. That said, throughout this essay the terms embeddedness and embodiment will be used interchangeably.
First, we’ll start by defining what’s so special about these kinds of agent. Then we’ll look at the motivation behind the creation of such agents and discuss the challenges that they entail. Finally, we’ll wrap up by looking at the applications of such technology, and how everything relates to game AI projects.
Overview
Some of you may be familiar with the word “agent” in the context of computer science. It usually applies to a smart piece of software that can perform tasks in a somewhat intelligent way. This includes web spiders, virtual assistants like that annoying Word paperclip, or IRC bots. What do these entities have in common? They are purely virtual; they do not have a body of any kind — or if they do, it has no use. In that sense, they have more freedom, since they do not obey the fundamental laws of physics (only the rules of electronics).
In virtual worlds, there are artificial animals, synthetic creatures, animats. Like their biological counterparts, they have a body to deal with. Admittedly, in real life those bodies are subject to genuine physical constraints, whereas in a simulation the constraints are just programmed rules. Fundamentally, though, it’s the same thing.
An embodied agent is an autonomous living creature, subject to the constraints of its environment.
In effect, this is just a consequence of giving the agent a body to control! The agent is the piece of software, the brain if you will. The body is the interface between the brain and the world; it provides sensations, and can execute actions. In many respects, it can be considered a limitation of the agent’s capabilities, but I prefer to see it as the definition of the agent’s purpose.
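The brain/body split described above can be sketched in code. The following is a minimal illustration, not taken from the essay: the class names (`Body`, `Brain`) and the specific limits (turning rate, speed) are hypothetical choices, but they show the key idea that the body, not the brain, enforces the world's constraints on every action.

```python
import math

class Body:
    """The interface between brain and world: it provides sensations
    and executes actions, enforcing physical constraints on the way."""
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading

    def sense(self):
        # The brain only ever sees what the body chooses to report.
        return {"x": self.x, "y": self.y, "heading": self.heading}

    def act(self, turn, speed):
        # The body clamps the brain's requests to plausible limits
        # (values here are arbitrary, for illustration only).
        turn = max(-0.1, min(0.1, turn))    # turning-rate limit (rad/step)
        speed = max(0.0, min(1.0, speed))   # speed limit (units/step)
        self.heading += turn
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)

class Brain:
    """Pure decision-making: consumes sensations, emits action requests."""
    def decide(self, sensation):
        return 0.05, 1.0  # e.g. drift left at full speed

body, brain = Body(), Brain()
for _ in range(10):
    body.act(*brain.decide(body.sense()))
```

Because the brain never touches the world directly, swapping in a different decision-making module leaves the physical behaviour guarantees intact.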
So, in essence, embedding is about actively enforcing these constraints. There are more or less pro-active ways of doing this, depending on how much realism is involved. As you may expect, it is quite lax in computer games, whereas academic research tries to take a more authentic approach.
Motivation
Realism
The major reason for choosing embodied agents is realism. You’re actually using a biologically inspired, physically accurate simulation of the animat’s body. As a consequence, many of the behaviours observed will appear authentic. This is an intrinsic consequence of the embeddedness, and has many practical examples. Here are two quick ones to whet your appetite:
- Vision
When you model the ocular organs explicitly, numerous little details of the visual sense can be simulated very easily. This includes visual delay and errors in perception (position, entity type). You can also actively enforce the field-of-view and line-of-sight paradigms.
- Movement
When an animat moves, its body is physically simulated, so momentum and turning rate are automatically applied to any motion command the agent issues. This surprised me in my current research: it makes the resulting paths extremely smooth and human-like.
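Two of the vision effects mentioned here are easy to demonstrate. The sketch below is illustrative rather than definitive: the function and class names are invented for this example, and the delay of a few frames is an arbitrary assumption. It shows a view-cone (field-of-view) test and a small buffer that makes the brain perceive sightings a few frames late.

```python
import math
from collections import deque

def in_field_of_view(obs_x, obs_y, heading, fov, tx, ty):
    """True if target (tx, ty) lies within the observer's view cone."""
    angle_to_target = math.atan2(ty - obs_y, tx - obs_x)
    # Wrap the angular difference into [-pi, pi] before comparing.
    diff = (angle_to_target - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2

class DelayedVision:
    """Buffers sightings so perception lags reality by a few frames."""
    def __init__(self, delay_frames=3):
        self.buffer = deque()
        self.delay = delay_frames

    def observe(self, sighting):
        self.buffer.append(sighting)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()  # sighting from `delay` frames ago
        return None                       # nothing perceived yet
```

Enforcing vision through the body like this means the brain simply cannot cheat: entities outside the cone, or seen too recently, are never handed to the decision-making code.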
These properties are what some game AI developers strive for, without necessarily realising the grand scheme of things. Picking out and simulating only a small subset of embeddedness is probably slightly more efficient, but the nasty idiosyncrasies and artefacts that result often make them pay the price.
Specification
When you actively define what an animat’s body is capable of, you’re essentially defining its behaviour. There will still be room for individual variation, just as humans exhibit unique characteristics despite very similar physiology. For embodied agents, such a specification is a first step towards standardisation. Not only would this make the task of researchers and developers much simpler, it would also allow underlying AI modules to be tested on equal terms.
If you lurk in gamers’ (or even developers’) forums after a new game with good AI comes out, you’ll often hear requests for AI hardware. While I’m not sure time will change my mind on this topic, I believe the debate itself is about three years premature. Why? There is no robust specification for an underlying AI; many techniques have proven themselves valuable, yet none has a clear advantage over the others. On the other hand, the interface with the environment will change very little over the next few years. Eventually, when it does require extending, defining a backwards-compatible specification should be no problem.
Modularity
When developing complex programs, software models that promote abstraction, black boxes and object-oriented design tend to shine through. AI is no different. The AI in current game simulations remains trivial enough that this is not yet an issue; however, as games scale up in complexity and behavioural realism, the underlying code can be expected to grow as well.
The embodied animat concept is ideally equipped to expose such modularity: there’s the body, and the brain. The task of designing and implementing these modules is split between the AI and engine programmers:
- Body
The interface is designed by the AI programmer (in consultation with the team), but it should be implemented by the engine programmer(s) responsible for the physics, game logic, or graphics. These people are best placed to enforce the restrictions of the game world on the AI agents, just as they do on the human players. Contrary to what some recent AI books and game-developer sites claim, gameplay logic, animation and physics do not fall under the category of AI (whether they are handled by the same person for budget or simplicity reasons is another matter). The body itself can also be split in a modular fashion, into components such as vision, hearing, movement, weapon handling…
- Brain
This part is entirely up to the AI programmer. The only restrictions imposed on him are the speed constraints of the engine specification, and potentially the behaviours required by the designers. That aside, he should be given a free hand!
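The modular decomposition of the body itself can be sketched as follows. This is a hypothetical design, not one prescribed by the essay: the component names (`VisionSense`, `HearingSense`) and the dictionary-based world are invented for illustration. The point is that the body composes independent sensory components, and the brain only ever sees their merged output.

```python
class VisionSense:
    """One pluggable body component: reports visible entities."""
    def sense(self, world, owner):
        return {"visible": [e for e in world["entities"] if e != owner]}

class HearingSense:
    """Another component: reports audible events."""
    def sense(self, world, owner):
        return {"sounds": world.get("sounds", [])}

class Body:
    """Composes independent sensory components; the brain only
    receives the merged sensation dictionary."""
    def __init__(self, components):
        self.components = components

    def sense(self, world, owner):
        sensation = {}
        for component in self.components:
            sensation.update(component.sense(world, owner))
        return sensation

world = {"entities": ["player", "bot"], "sounds": ["footstep"]}
body = Body([VisionSense(), HearingSense()])
print(body.sense(world, "bot"))  # {'visible': ['player'], 'sounds': ['footstep']}
```

Because each component hides its implementation behind a common `sense` method, the engine team can swap in a more accurate vision model without the AI programmer changing a line of brain code, which is exactly the portability argument made below.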
This paradigm, like other modular programming approaches, is ideal for portability. Bots can be exported from current projects and live beyond a single game engine; the major part of the code can thereby be reused.