AI in Games: A Personal View

We shall start by looking at the creatures in Black & White, and then go on to speculate about what sorts of interesting agents we can expect in computer games in the next few years.

The creatures in Black & White had to fulfil two very different requirements:

  1. We wanted the user to feel he was dealing with a person. The creatures had to be plausible, malleable, and loveable.
  2. They had to be useful to the player in his many quests and goals. The creatures in Black & White aren’t just toys you experiment with, they can be trained to be invaluable helpers in the campaign.

To my knowledge, this combination of features has not been attempted before. There is some software (Creatures, The Sims) in which you feel you are dealing with passably plausible agents, but these packages, excellent as they are, are more like sand-boxes than games: they are pure goal-less simulations, in which the entertainment is to be gained from experimentation, not from progressing through a series of quests. There are some games (Daikatana) in which the player’s character is given helpers to aid him on his quest, but in these games the helpers are just state machines, hard-coded for the particular task at hand.

 

                           Daikatana    Creatures    Black & White
Person-like agents?        No           Yes          Yes
Useful, helpful agents?    Yes          No           Yes

At first glance, there seems to be some sort of conflict between these requirements: the person-like requirement implies the creatures are autonomous, whereas the usefulness requirement seems to preclude too much autonomy. Later on we shall see how this conflict was "resolved". But first we shall look at the first requirement: making persons out of creatures.

1. Making a Person: the Architecture of an Agent

In order for the player to see his creature as a person, the creatures had to be psychologically plausible, malleable, and loveable. We consider each of these in turn.

1.1 Psychologically Plausible Agents

To make agents who were psychologically plausible, we took the Belief-Desire-Intention architecture of an agent, fast becoming orthodoxy in the agent-programming community, and developed it in a variety of ways. The underlying methodology was to avoid imposing a uniform structure on the representations used in the architecture, and instead to use a variety of different types of representation, so that we could pick the most suitable representation for each of the very different tasks (see Marvin Minsky's paper on Causal Diversity). So beliefs about individual objects were represented symbolically, as lists of attribute-value pairs; beliefs about types of objects were represented as decision-trees; and desires were represented as perceptrons. There is something attractive about this division of representations: beliefs are symbolic structures, whereas desires are more fuzzy.
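
To make that mixture concrete, here is a minimal Python sketch (names and numbers are invented for illustration; this is not the game's actual code) of a symbolic belief about an individual object sitting next to a decision-tree opinion about a type of object:

    # Hypothetical sketch of two of the representation types described above.
    # All names and values are invented for illustration.

    # A belief about an individual object: a symbolic list of attribute-value pairs.
    belief_about_cow_17 = {
        "type": "cow",
        "size": "small",
        "distance": "near",
    }

    # An opinion about a *type* of object: a small decision tree giving expected
    # feedback. A leaf is a number; an internal node tests one attribute.
    opinion_about_eating = {
        "test": "type",
        "branches": {
            "rock": -0.6,    # eating rocks has gone badly
            "tree": -0.2,
            "cow":  +0.6,    # cows taste good
        },
    }

    def expected_feedback(opinion, belief):
        """Walk the opinion tree using the attribute values stored in a belief."""
        branch = opinion["branches"][belief[opinion["test"]]]
        if isinstance(branch, dict):                      # another test node
            return expected_feedback(branch, belief)
        return branch                                     # a leaf value

    print(expected_feedback(opinion_about_eating, belief_about_cow_17))   # 0.6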

To make a plausible agent, there must be an explanation of why he is in that particular mental state. In particular, if an agent has a belief about an object, that belief must be grounded in his perception of that object: creatures in Black & White do not cheat about their beliefs – their beliefs are gathered from their perceptions, and there is no way a creature can have free access to information he has not gathered from his senses. I call this requirement Epistemic Verisimilitude.
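
One way to enforce this grounding in code (a sketch only, with hypothetical names) is to make the belief store writable solely through a perception event, so nothing a creature has not sensed can ever become a belief:

    # Hypothetical sketch: beliefs can only be created or updated via perceive(),
    # so a creature never "knows" anything it has not actually sensed.

    class BeliefStore:
        def __init__(self):
            self._beliefs = {}                # object id -> attribute-value pairs

        def perceive(self, object_id, sensed_attributes):
            # The ONLY write path: record what was sensed about this object.
            self._beliefs.setdefault(object_id, {}).update(sensed_attributes)

        def recall(self, object_id):
            # Reading returns only what has previously been perceived.
            return self._beliefs.get(object_id, {})

    store = BeliefStore()
    store.perceive("villager_3", {"type": "villager", "health": "injured"})
    print(store.recall("villager_3"))   # {'type': 'villager', 'health': 'injured'}
    print(store.recall("villager_9"))   # {} - never perceived, so no belief about it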

Further, if a creature wants something, there must be an explanation of why he wants it. (For example: if the creature is angry, it might be because he has been watching you being destructive, and has decided to copy you; or, the creature might grow angry after getting hurt). Each desire has a number of different desire-sources; these jointly contribute to the current intensity of the desire.
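
A minimal sketch of this (using the two anger sources just mentioned; the weights and the simple weighted-sum rule are assumptions made for illustration):

    # Hypothetical sketch: a desire's intensity is a combination of its desire-sources.
    # The source names follow the example above; the weights are invented.

    ANGER_SOURCES = {
        "witnessed_player_destruction": 0.6,   # copying the player's behaviour
        "recently_hurt":                0.5,
    }

    def desire_intensity(weights, current_sources):
        """Weighted sum of the current source values (each in [0, 1]), clamped to [0, 1]."""
        raw = sum(w * current_sources.get(name, 0.0) for name, w in weights.items())
        return max(0.0, min(1.0, raw))

    # The creature has been watching the player throw villagers around, and was
    # also hurt a little in the process:
    print(desire_intensity(ANGER_SOURCES,
                           {"witnessed_player_destruction": 0.8, "recently_hurt": 0.2}))
    # 0.58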

 

1.2 Malleable Agents

We wanted the creatures to be malleable in many different ways: we wanted them to learn many different types of thing, and we wanted there to be many different types of situation which would prompt learning.

"Learning" covers a variety of very different skills, and the architecture was designed to allow all these different types of learning.

Learning can also be initiated in a number of very different ways, and the architecture was designed to support all of them.

All these different types of learning, and different types of occasions which prompt learning, coexist in one happy bundle. I will only go into detail about one of these types of learning: learning which types of object are most suitable for various different desires.

 

1.2.1 Learning Opinions: Dynamically Building Decision-Trees

How does a creature learn what sorts of objects are good to eat? He looks back at his experience of eating different types of things and the feedback he received in each case (how nice they tasted), and tries to "make sense" of all that data by building a decision tree. Suppose the creature has had the following experiences:

What he ate       Feedback – "how nice it tasted"
A big rock        -1.0
A small rock      -0.5
A small rock      -0.4
A tree            -0.2
A cow             +0.6

He may build a simple tree to explain this data, dividing the episodes by what was eaten: rocks taste bad (big ones especially), trees taste slightly bad, and cows taste good.

 

A decision tree is built by looking at the attributes which best divide the learning episodes into groups with similar feedback values. The best decision tree is the one which minimises entropy, a measure of how disordered the feedbacks are.
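
Since the feedback values here are continuous, the sketch below uses their variance as a simple stand-in for that disorder measure: zero when a group of feedbacks all agree, larger the more they disagree. Splitting the eating episodes above by what was eaten clearly reduces it:

    # Hypothetical sketch: measure how "disordered" a group of feedback values is.
    # Variance is used as a simple stand-in for the entropy measure named above:
    # it is zero when every feedback in the group agrees.

    def disorder(feedbacks):
        mean = sum(feedbacks) / len(feedbacks)
        return sum((f - mean) ** 2 for f in feedbacks) / len(feedbacks)

    # The eating episodes from the table (big and small rocks lumped together as "rock").
    episodes = [("rock", -1.0), ("rock", -0.5), ("rock", -0.4),
                ("tree", -0.2), ("cow", +0.6)]

    print(round(disorder([f for _, f in episodes]), 3))   # disorder before splitting: 0.272

    # Splitting by what was eaten leaves each group far more uniform:
    for kind in ("rock", "tree", "cow"):
        group = [f for what, f in episodes if what == kind]
        print(kind, round(disorder(group), 3))            # rock 0.069, tree 0.0, cow 0.0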

To take a simplified example, if a creature was given the following feedback after attacking various towns:

What he attacked                              Feedback from player
Friendly town, weak defence, tribe Celtic     -1.0
Enemy town, weak defence, tribe Celtic        +0.4
Friendly town, strong defence, tribe Norse    -1.0
Enemy town, strong defence, tribe Norse       -0.2
Friendly town, medium defence, tribe Greek    -1.0
Enemy town, medium defence, tribe Greek       +0.2
Enemy town, strong defence, tribe Greek       -0.4
Enemy town, medium defence, tribe Aztec        0.0
Friendly town, weak defence, tribe Aztec      -1.0

Then the creature would build a decision tree for Anger along these lines: attacking friendly towns always earns strongly negative feedback, while the feedback for attacking enemy towns depends on their defence – weak and medium defences earn neutral-to-positive feedback, strong defences earn negative feedback.

The algorithm used to dynamically construct decision-trees to minimise entropy is based on Quinlan’s ID3 system.
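
Here is a sketch of that split-selection step applied to the town-attack data above (variance again stands in for the disorder measure, since the exact formulation is not given here). It picks allegiance as the first attribute to split on: friendly towns always earn -1.0, while the enemy branch still needs a further split, which the same procedure then makes on defence:

    # Hypothetical ID3-style split selection on the town-attack feedback above.
    # Variance stands in for the disorder ("entropy") measure being minimised.

    EPISODES = [
        ({"allegiance": "friendly", "defence": "weak",   "tribe": "Celtic"}, -1.0),
        ({"allegiance": "enemy",    "defence": "weak",   "tribe": "Celtic"}, +0.4),
        ({"allegiance": "friendly", "defence": "strong", "tribe": "Norse"},  -1.0),
        ({"allegiance": "enemy",    "defence": "strong", "tribe": "Norse"},  -0.2),
        ({"allegiance": "friendly", "defence": "medium", "tribe": "Greek"},  -1.0),
        ({"allegiance": "enemy",    "defence": "medium", "tribe": "Greek"},  +0.2),
        ({"allegiance": "enemy",    "defence": "strong", "tribe": "Greek"},  -0.4),
        ({"allegiance": "enemy",    "defence": "medium", "tribe": "Aztec"},   0.0),
        ({"allegiance": "friendly", "defence": "weak",   "tribe": "Aztec"},  -1.0),
    ]

    def disorder(feedbacks):
        mean = sum(feedbacks) / len(feedbacks)
        return sum((f - mean) ** 2 for f in feedbacks) / len(feedbacks)

    def split_disorder(episodes, attribute):
        """Disorder remaining after splitting on one attribute (lower is better),
        weighted by the size of each resulting group."""
        values = {attrs[attribute] for attrs, _ in episodes}
        total = 0.0
        for value in values:
            group = [f for attrs, f in episodes if attrs[attribute] == value]
            total += len(group) * disorder(group)
        return total / len(episodes)

    for attribute in ("allegiance", "defence", "tribe"):
        print(attribute, round(split_disorder(EPISODES, attribute), 3))
    # allegiance 0.044, defence 0.276, tribe 0.28 - so the tree splits on allegiance
    # first; running the same test inside the enemy branch then picks defence.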

1.3 Loveable Agents

We wanted the player to feel some sort of emotional attachment to his creature. We soon realised that empathetic attachment is intrinsically reciprocal: the reason why it is inappropriate to feel emotionally attached to your tv remote is that your tv remote is not going to reciprocate. Conclusion: if you want the player to get attached to his creature, you must first ensure the creature is empathetically attached to the player!

Agents in computer games are at best like severely autistic people: capable of perceiving and predicting the behaviour of objects in the world, but incapable of seeing other people as people - incapable of building a model of another agent’s mind which could be used, to great effect, to predict his actions.

In Black & White, the creature's mind includes a simplified model of the player's mind. He watches the actions the player performs, and tries to make sense of them by ascribing goals to the player which would explain those actions. He stores a simple personality model of the player, which he uses in decision-making. As well as a model of what he thinks the player is like, he has goals which relate directly to his master: the desire to help his master, the desire to play with his master, and the desire for attention.
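
A toy sketch of such a player model (every name and number here is invented for illustration): observed actions are tagged with ascribed goals, and a few running personality scores summarise what the creature thinks its master is like:

    # Hypothetical sketch of a simplified player model: each observed action is tagged
    # with an ascribed goal, and running personality scores summarise the player.
    # All names and numbers are invented for illustration.

    ASCRIBED_GOAL = {
        "heal_villager":  ("be_kind",        +0.2),
        "feed_villager":  ("be_kind",        +0.1),
        "throw_villager": ("be_destructive", +0.3),
        "cast_fireball":  ("be_destructive", +0.4),
    }

    class PlayerModel:
        def __init__(self):
            self.personality = {"be_kind": 0.0, "be_destructive": 0.0}

        def observe(self, action):
            goal, strength = ASCRIBED_GOAL.get(action, (None, 0.0))
            if goal is not None:
                self.personality[goal] += strength

        def thinks_master_is_destructive(self):
            return self.personality["be_destructive"] > self.personality["be_kind"]

    model = PlayerModel()
    for action in ("cast_fireball", "throw_villager", "heal_villager"):
        model.observe(action)
    print(model.personality)                      # {'be_kind': 0.2, 'be_destructive': 0.7}
    print(model.thinks_master_is_destructive())   # True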

If we want to enable agents to build mental models of others, and we should want to do so very much, the first thing we must do is ensure our architecture is sufficiently clean that a useful (and short) description of the current mental state can be read off from the actual mental state of the creature. It is easy to read off a clear description of a part of a mental state if the architecture is organised around symbolic data-structures; but if the architecture is a net with merely numerical connections, it is holistic and opaque, making it doubly difficult to extract a short description which is useful for making predictions.

So, if we want the creatures to be capable of modelling other creatures' minds, we must design the architecture around symbolic data-structures. But this does not mean we should ignore all that non-symbolic learning has to offer: the creature architecture uses threshold functions to adjust desire tolerance, and entropy functions to estimate the amount of noise. But these soft fuzzy functions are housed within the hard framework of a symbolic architecture.
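
As a small illustration of why this matters (hypothetical names again), a symbolic mental state lets a short, readable description be assembled directly from its slots, which is exactly the kind of summary another creature would need in order to model this mind, while the fuzzy numbers sit inside those slots rather than being smeared across a network:

    # Hypothetical sketch: a symbolic mental state from which a short description
    # can be read straight off, with fuzzy numbers held inside the symbolic slots.

    mental_state = {
        "strongest_desire": ("hunger", 0.8),                  # fuzzy intensity in a symbolic slot
        "current_intention": ("eat", "cow_17"),
        "notable_beliefs": {"cow_17": {"type": "cow", "distance": "near"}},
    }

    def describe(state):
        desire, intensity = state["strongest_desire"]
        action, target = state["current_intention"]
        return f"wants to satisfy {desire} (intensity {intensity}), so intends to {action} {target}"

    print(describe(mental_state))
    # wants to satisfy hunger (intensity 0.8), so intends to eat cow_17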

 

2. Making a Person Useful: Autonomy Can Go Too Far!

The creatures in Black & White had to be person-like, but they also had to be useful. The person-like requirement implies the creatures are autonomous, whereas the usefulness requirement seems to preclude too much autonomy. How can we resolve these conflicting requirements? The solution we arrived at was that creatures start off completely autonomous, but over time, through training, you can mould them so that they only do what you want them to do. This gives the player the enormous satisfaction of having trained his creature to actually be useful in the game! The down-side is that your creature loses something of his charm the more you train him: he becomes more focussed on a few goals, in a few situations, on a few types of object; as he becomes more useful, he becomes more "robotic".
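
One way to picture the moulding process (purely illustrative; the update rule and numbers are assumptions, not the game's actual code): feedback from the player after each action nudges the weight the creature attaches to doing that action to that type of object, so punished combinations fade away and rewarded ones come to dominate:

    # Hypothetical sketch of training narrowing behaviour: feedback after an action
    # adjusts the weight the creature gives to doing that action to that type of
    # object. Punished combinations fade out; rewarded ones come to dominate.

    weights = {("eat", "villager"): 0.5, ("eat", "grain"): 0.5}
    LEARNING_RATE = 0.3

    def train(action, object_type, feedback):
        """feedback in [-1, +1]: punishment is negative, reward is positive."""
        key = (action, object_type)
        weights[key] = max(0.0, min(1.0, weights[key] + LEARNING_RATE * feedback))

    # The player punishes the creature whenever it eats a villager,
    # and rewards it whenever it eats grain:
    for _ in range(3):
        train("eat", "villager", -1.0)
        train("eat", "grain", +1.0)

    print(weights)   # {('eat', 'villager'): 0.0, ('eat', 'grain'): 1.0}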

 

3. Future Directions: Extrapolating from Black & White

3.1 Person-Like Agents

What could we do to make more realistic agents?

3.2 Empathetic Agents

If we want to make more plausible agents, we enrich the mental model of the agent. If we want to make more empathetic agents, we enrich the mental model which the agent uses to model other agents. (These two are quite distinct: the latter is invariably going to be simpler than the former, for space-efficiency reasons – the agent is going to have models of lots of different agents, so these models should be small.) The creatures in Black & White have simple models of other agents' minds: they just model the desire part of the architecture. Wouldn't it be nice to add more?

The trouble is that the more we enrich the agent’s model of other agents, the harder it is for the agent to figure out what the other agent is thinking. For instance, suppose our agent’s model of another agent includes data about the other agent’s beliefs as well as his desires. Then we have made the task of understanding the other agent considerably harder, because there will be more models which fit the data, and it will be harder to figure out which is best. Suppose, for instance, that an agent fails to eat the apple. This might be because he hasn’t seen the apple (and consequently has no belief about it), or because he doesn’t like apples, or because he just isn’t hungry. Which of these is the right explanation? We can’t tell until we have seen a lot of examples. (This problem just doesn’t arise if you keep an excessively simple model of other agents: if you just model them as a bunch of desires, then the only possible explanation is that he isn’t hungry). (There are proposed solutions to this in the philosophical literature: the Principle of Charity solves the apple problem by assuming the agent’s beliefs are correct, but if we are going to assume this across the board, then there is no point in modelling beliefs at all).
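
The point about extra models fitting the data can be made concrete with a small counting sketch (hypothetical, just to enumerate possibilities): if the model of the other agent tracks whether he has seen the apple, whether he likes apples, and whether he is hungry, then a single observation of him not eating rules out almost nothing:

    # Hypothetical sketch: counting the mental states consistent with one observation.
    # A candidate state is (has seen the apple, likes apples, is hungry); the agent
    # eats only if all three hold.

    from itertools import product

    def would_eat(has_seen, likes_apples, hungry):
        return has_seen and likes_apples and hungry

    consistent = [state for state in product([True, False], repeat=3)
                  if not would_eat(*state)]          # consistent with "did not eat"
    print(len(consistent), "of 8 possible mental states fit the observation")
    # 7 of 8 - the richer model leaves the explanation wide open, whereas a
    # desires-only model leaves exactly one explanation: he was not hungry.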

 

4. Summary

There are three features of Black & White which will become increasingly commonplace:

  1. The creature’s mind includes both symbolic and connectionist representations, happily coexisting in one unified architecture.
  2. Creatures are both person-like and useful in the game.
  3. Creatures are empathetic (this is clearly an aspect of being person-like, but is sufficiently important to be stated separately).