Assaulting F.E.A.R.’s AI: 29 Tricks to Arm Your Game

While the majority of first-person shooters are mere clones, every once in a while one of them advances the state of the art. F.E.A.R. is one of those games; not only is it highly entertaining, but it has been praised for its innovations in artificial intelligence. Those innovations earned it a very high ranking in the Top 10 Most Influential AI Games.

This technical review from AiGameDev.com looks into the technology behind the game: things to equip your game with and what to stay away from. See Jeff Orkin’s page, or the references at the bottom of the article for more details.

Overall Approach

At the core, F.E.A.R.’s AI engine is quite different from the traditional hierarchical finite state machines.

Screenshot 1: Confronting a variety of enemy behaviors.

1) AI Controls Only Movement and Animation

The design of the AI in F.E.A.R. focuses entirely on the character’s skeleton in two ways:

  • Animation — Play a full body motion clip that may move the character as a side effect.

  • Movement — Select and control an animation to head in a specific direction.

These two tasks are all that’s required to implement behaviors involving the characters, which simplifies things greatly.
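As a rough illustration of how narrow this interface can be, here is a minimal C++ sketch; the class and method names are hypothetical, not taken from F.E.A.R.'s code:

    // Sketch of an AI-to-character interface limited to the two tasks above.
    // All names here are illustrative assumptions.
    #include <string>

    struct Vector3 { float x, y, z; };

    class CharacterInterface {
    public:
        // Animation: play a full-body clip; any root motion is a side effect.
        virtual void PlayAnimation(const std::string& clipName) = 0;

        // Movement: pick and steer a locomotion animation towards a target.
        virtual void MoveTowards(const Vector3& destination) = 0;

        virtual ~CharacterInterface() = default;
    };

    // Every higher-level behavior (attack, take cover, patrol) ultimately
    // bottoms out in calls to these two methods.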

2) Annotate Animation for Other Behavior Functionality

Despite the AI in F.E.A.R. only driving movement and animation, it can express behaviors in different ways too (e.g. audio, weapon control). This is done by annotating animation frames with messages to activate other game systems. As Jeff Orkin explains in [1]:

“We assume the animation system has key frames which may have embedded messages that tell the audio system to play a footstep sound, or the weapon system to start and stop firing.”

Using this approach, the AI is greatly simplified, but of course it relies on the animations being annotated correctly. Like any data-driven solution, changes to the animations or annotations must be done in a mindful way.
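One plausible way to wire this up is sketched below; the message names and the dispatch function are illustrative assumptions, not Monolith's actual implementation:

    // Minimal sketch of dispatching messages embedded in animation key frames.
    #include <iostream>
    #include <vector>

    enum class AnimMessage { Footstep, StartFire, StopFire };

    struct KeyFrameEvent {
        int frame;          // frame index within the clip
        AnimMessage msg;    // message authored by the animator
    };

    struct AnimationClip {
        std::vector<KeyFrameEvent> events;  // authored alongside the clip
    };

    // Called once per tick as the clip advances from lastFrame to currentFrame.
    void DispatchAnimEvents(const AnimationClip& clip, int lastFrame, int currentFrame) {
        for (const KeyFrameEvent& e : clip.events) {
            if (e.frame > lastFrame && e.frame <= currentFrame) {
                switch (e.msg) {
                    case AnimMessage::Footstep:  std::cout << "audio: footstep\n";      break;
                    case AnimMessage::StartFire: std::cout << "weapon: start firing\n"; break;
                    case AnimMessage::StopFire:  std::cout << "weapon: stop firing\n";  break;
                }
            }
        }
    }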

3) Use Smart Objects

Monolith (the studio behind F.E.A.R.) internally uses smart objects to implement context-sensitive animations. This works as follows:

  1. Edit and store the animations along with the object itself.

  2. Don’t reference specific animations for objects within the AI.

  3. Implement a function to lookup the animation automatically.

As discussed in more detail during the technical review of The Sims, this helps reduce the complexity of the development process.
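A minimal sketch of such a lookup might look like this; the SmartObject class and action names are hypothetical:

    // The AI asks the object which clip to play rather than hard-coding
    // animation names.
    #include <map>
    #include <string>

    enum class UseAction { Sit, Open, Flip };

    class SmartObject {
    public:
        void RegisterAnimation(UseAction action, const std::string& clip) {
            m_animations[action] = clip;
        }
        // Returns the clip stored with the object, or an empty string if none.
        std::string LookupAnimation(UseAction action) const {
            auto it = m_animations.find(action);
            return it != m_animations.end() ? it->second : std::string();
        }
    private:
        std::map<UseAction, std::string> m_animations;  // authored per object
    };

    // Usage: the AI never names "DeskChair_Sit" itself; the desk does.
    //   SmartObject desk;
    //   desk.RegisterAnimation(UseAction::Sit, "DeskChair_Sit");
    //   std::string clip = desk.LookupAnimation(UseAction::Sit);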

4) Rely on Few Low-Level States

F.E.A.R. is different from most games based on finite state machines (FSM) in that it uses only two low-level states. This is possible only because of the simplification of animation and movement discussed above. As a result:

  • The two states have many parameters that cover all options.

  • The states can be implemented generically very early in development.

  • Little or no hacks are required late in the project to implement special cases.

Even if you have more behaviors than just animation and movement, keeping these generic reduces the workload on the programmer.
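As a rough sketch, assuming the two states boil down to something like a parameterized "goto" and "animate" (the parameter names below are illustrative, not F.E.A.R.'s actual ones):

    #include <string>

    struct Vector3 { float x, y, z; };

    struct GotoParams {
        Vector3     destination;
        float       speed      = 1.0f;   // walk/run multiplier
        bool        faceTarget = false;  // strafe while facing a threat?
        std::string locomotionSet;       // e.g. "rifle", "pistol", "unarmed"
    };

    struct AnimateParams {
        std::string clipName;
        bool        loop        = false;
        float       blendInTime = 0.2f;
    };

    class StateGoto    { public: void Enter(const GotoParams&)    { /* drive movement */ } };
    class StateAnimate { public: void Enter(const AnimateParams&) { /* drive animation */ } };

    // Special cases (fleeing, advancing under fire, playing a death clip) become
    // different parameter values, not new hand-written states.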

5) Understand Sources of Complexity in the AI

Generally speaking, managing next-gen behaviors is becoming increasingly complex. However, it’s important to understand the specifics of why your game is complex. In particular for F.E.A.R.:

“In F.E.A.R., A.I. use cover more tactically, coordinating with squad members to lay suppression fire while others advance. A.I. only leave cover when threatened, and blind fire if they have no better position.”

As Jeff mentions, no specific behavior is particularly hard to implement with existing techniques, but the combination of the behaviors quickly becomes unmanageable.

6) Reduce Complexity with a Planner

Since the states in F.E.A.R. have so many parameters, and the editing process becomes increasingly difficult, a planner is used to automate the solution. A planner works by analyzing the dependencies of each action, and figuring out how to realize them:

“We can satisfy the goal of eliminating a threat by firing a gun at the threat, but the gun needs to be loaded, or we can use a melee attack, but we have to move close enough.”

By spending more time to develop a planner, less work is required to implement specific behaviors (compared to a FSM for example).
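The following sketch shows the kind of precondition/effect bookkeeping a planner chains over, using the example from the quote; the symbols and actions are illustrative, not F.E.A.R.'s actual data:

    #include <map>
    #include <string>
    #include <vector>

    using WorldState = std::map<std::string, bool>;  // symbol -> value

    struct PlannerAction {
        std::string name;
        WorldState  preconditions;  // must hold before the action runs
        WorldState  effects;        // hold after the action completes
    };

    std::vector<PlannerAction> BuildExampleActions() {
        return {
            { "ReloadWeapon", { {"WeaponLoaded", false} }, { {"WeaponLoaded", true} } },
            { "FireWeapon",   { {"WeaponLoaded", true}  }, { {"TargetDead",   true} } },
            { "MoveToMelee",  { },                         { {"InMeleeRange", true} } },
            { "MeleeAttack",  { {"InMeleeRange", true}  }, { {"TargetDead",   true} } },
        };
    }
    // A planner searches from the goal {TargetDead = true}: FireWeapon satisfies
    // it, but its WeaponLoaded precondition may first require ReloadWeapon;
    // alternatively MoveToMelee + MeleeAttack reach the same goal.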

Team & Workflow

Monolith, the developers of F.E.A.R., used an atypical approach to developing this game and its technology.

Screenshot 2: A soldier vaulting over a barrier.

7) Have Only One AI Programmer!

One thing that distinguishes F.E.A.R. from other AAA titles is that there was only one AI programmer. This was accepted, and used as the motivation to develop better technology.

“The thought was that if we can delegate some of the workload to these A.I. guys, we’d be in good shape. If the A.I. are really so smart, and they can figure out some things on their own, then we’ll be all set!”

Thanks to that focus and serious prototyping, the development team managed to build technology that reduced the effort required to build the many different AI characters and squad behaviors.

8) Don’t Let Level Designers Script the Gameplay

The various studios have different perspectives on the roles of level designers. At Monolith, they maintain the following philosophy:

“The designer’s job is to create interesting spaces for combat, packed with opportunities for the A.I. to exploit. Designers are not responsible for scripting the behavior of individuals, other than for story elements.”

This attitude puts more emphasis on the autonomous behaviors of the AI, and at the same time, emphasizes the planning technology used in F.E.A.R.

9) Understand Academic Algorithms

F.E.A.R. managed to distinguish itself by using an algorithm popular in academia, known as STRIPS. Two things made this possible:

  1. Knowledge of the theory behind the algorithm, and

  2. Time taken to experiment with it in practice in prototypes.

Hiring a mix of developers who have a background in both academia and industry is a good way to get ahead in this department.

Implementation

As with most algorithms that push the limits of what’s possible, you have to take special care to get it running efficiently.

Screenshot 3: Ducking to hide from explosions.

10) Express World State as a Vector

Traditionally, STRIPS planners maintain a list of facts about the world. Instead, F.E.A.R. expresses this as a fixed vector of variables which are known at compile time. This increases efficiency at the cost of flexibility. As Jeff Orkin mentions:

“The fixed sized array does limit us though. While an A.I. may have multiple weapons, and multiple targets, he can only reason about one of each during planning, because the world state array has only one slot for each.”

To get around this limitation, “attention-selection” mechanisms are used to focus the planner on specific weapons and targets, so the scope of the problem stays manageable.
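A minimal sketch of such a fixed-size world state, with made-up slot names, might look like this:

    // Every variable has a compile-time slot, so comparing and copying states
    // is cheap.
    #include <array>

    enum WorldStateSlot {
        kTargetDead,
        kWeaponLoaded,
        kAtCover,
        kTargetInMeleeRange,
        kNumWorldStateSlots
    };

    struct WorldStateValue {
        bool known = false;  // is this slot meaningful for the current plan?
        bool value = false;
    };

    using WorldState = std::array<WorldStateValue, kNumWorldStateSlots>;

    // Note the limitation from the quote: there is exactly one kWeaponLoaded and
    // one kTargetDead slot, so only one weapon and one target can be reasoned
    // about per plan; an attention-selection step decides which ones they refer to.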

11) Think Wisely about Variables and Actions

Regardless of whether you use a fixed vector or a dynamic list of facts to represent the current state, you should choose the variables very carefully. The same applies to choosing actions, as both affect performance significantly.

The best solution is to start with the minimal possible description of a state, and only make additions carefully while monitoring performance. For F.E.A.R., the limits and necessary representations were determined during prototyping in pre-production.

12) Use ActionSets as Configuration Tables

F.E.A.R. uses the concept of ActionSets. Essentially, they’re a group of actions selected by the designers to be used by the planner to solve problems. By changing these configuration tables, the designers can tweak the behaviors easily.

“The soldier’s Action Set includes actions for firing weapons, while the assassin’s Action Set has lunges and melee attacks.”

You can implement a lookup table and get the same benefits without using a planner. All that’s required are fallbacks for behaviors that are not found in the lookup table, and a check that the AI still works if certain behaviors are not present.
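A rough sketch of an ActionSet as a simple data table follows; the character types and action names are examples, not the shipped data:

    #include <map>
    #include <set>
    #include <string>

    using ActionSet = std::set<std::string>;

    // Each character type lists which planner actions it may use.
    std::map<std::string, ActionSet> BuildActionSets() {
        return {
            { "Soldier",  { "AttackFromCover", "AttackRanged", "Reload", "Dodge" } },
            { "Assassin", { "AttackLunge", "AttackMelee", "ClimbWall", "Dodge" } },
        };
    }

    bool CanUseAction(const std::map<std::string, ActionSet>& sets,
                      const std::string& characterType, const std::string& action) {
        auto it = sets.find(characterType);
        return it != sets.end() && it->second.count(action) > 0;
    }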

13) Employ Procedural Logic

STRIPS traditionally is based on declarative information: facts are stored about the current state in a list, and each applicable action is specified as pre-conditions and side-effects operating on this list. This, however, isn’t the easiest way to work with all forms of knowledge about the world.

“It’s not practical to represent everything we need to know about the entire game world in our fixed-sized array of variables, so we added the ability to run additional procedural precondition checks.”

This gives the planner a way to perform checks on demand during planning when necessary, by calling a C++ function that’s part of the Action class.
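A sketch of how such a hook might be exposed on the action class is shown below; the class and method names are assumptions in the spirit of the quote, not F.E.A.R.'s actual interface:

    struct PlanContext { /* handles to the actor, working memory, nav data, ... */ };

    class Action {
    public:
        virtual ~Action() = default;

        // Cheap symbolic preconditions live in the world-state vector; anything
        // that will not fit there is checked procedurally, on demand, right here.
        virtual bool CheckProceduralPreconditions(const PlanContext& /*ctx*/) const {
            return true;  // default: no extra requirements
        }
    };

    class ActionAttackFromCover : public Action {
    public:
        bool CheckProceduralPreconditions(const PlanContext& ctx) const override {
            // e.g. query the cover system: is there a valid, unoccupied cover
            // node with line of sight to the current target?
            return HasUsableCoverNode(ctx);
        }
    private:
        bool HasUsableCoverNode(const PlanContext&) const { return true; /* stub */ }
    };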

14) Design the System for Planning in One Frame

Because it’s critical for the AI to make decisions in a short and predictable amount of time, the planner in F.E.A.R. is designed to complete by the time the next frame is starting. To do this, a few tricks are employed:

  • Separate sensors gather computationally expensive information on a separate schedule (see the sketch after this list).

  • Knowledge is cached locally so the planner always has necessary facts available.

  • Procedural checks allow the planner to check for information lazily only when necessary.

  • An event-driven system prevents redundant calculations as much as possible.
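The first of these, amortized sensing, might look something like the rough sketch below; the scheduler and its round-robin policy are assumptions, not F.E.A.R.'s actual code:

    // Only the cached results of sensing are read during the per-frame
    // decision step; the expensive work is spread over many frames.
    #include <cstddef>
    #include <vector>

    class Sensor {
    public:
        virtual ~Sensor() = default;
        virtual void Update() = 0;   // expensive work: raycasts, node scoring, ...
    };

    class SensorScheduler {
    public:
        void AddSensor(Sensor* s) { m_sensors.push_back(s); }

        // Called once per frame: only one expensive sensor runs per tick, so the
        // cost stays bounded while the cached facts remain reasonably fresh.
        void Tick() {
            if (m_sensors.empty()) return;
            m_sensors[m_next]->Update();
            m_next = (m_next + 1) % m_sensors.size();
        }
    private:
        std::vector<Sensor*> m_sensors;
        std::size_t m_next = 0;
    };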

Memory and Knowledge

The decision making in F.E.A.R. shines thanks to the underlying architecture that structures the way knowledge is stored.

Screenshot 4: Taking over a maintenance room.

15) Design the Architecture around a Blackboard

The overall architecture is inspired by MIT’s C4 architecture, which emphasizes the use of a blackboard to store facts about the world. Separate sensing operations then populate it with information, which the decision-making process consumes. This provides a form of decoupling that increases the modularity of the implementation.

F.E.A.R. in particular uses a dynamic blackboard where multiple facts can be added dynamically, based on events that happen in the world around each AI actor. (This provides more flexibility but less efficiency than a static blackboard.)

16) Assign a Confidence to All Facts

Facts in the blackboard are made up of many attributes, including a position and direction, the type of fact, the weight of the desire, and so on. Each of these attributes is given a confidence value normalized to fall in the range [0,1].

These confidence values have different meanings depending on the type of fact, but each related action is designed to understand what this value means. For example, in the case of cover, the confidence value of the position indicates how close the cover is to the actor.
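A minimal sketch of a fact with confidence-tagged attributes, under the assumption that each attribute simply pairs a value with a normalized confidence (the field names are illustrative):

    struct Vector3 { float x, y, z; };

    template <typename T>
    struct Attribute {
        T     value{};
        float confidence = 0.0f;  // normalized to [0,1]; meaning depends on fact type
    };

    enum class FactType { Enemy, Cover, Disturbance, Desire };

    struct WorkingMemoryFact {
        FactType           type = FactType::Cover;
        Attribute<Vector3> position;   // for a cover fact: confidence ~ proximity
        Attribute<Vector3> direction;
        Attribute<float>   desire;     // how much the actor "wants" to act on it
    };

    // e.g. when choosing cover, the selector can simply prefer the fact whose
    // position attribute has the highest confidence.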

17) Share Information Between Behaviors

Since F.E.A.R. centralizes information using a working memory, it can share data between multiple different behaviors and goals. Your AI is no longer restricted to strict FSM hierarchies that cause many problems in games:

“For example someone could sit down at a desk and do some work. The problem was that only the Work goal knew that the A.I. was in a sitting posture, interacting with the desk. When we shot the A.I., we wanted him to slump naturally over the desk. Instead, he would finish his work, stand up, push in his chair, and then fall to the floor. This was because there was no information sharing between goals, so each goal had to exit cleanly, and get the A.I. back into some default state where he could cleanly enter the next goal.”

To handle this with a planner, the current animation state of the actor can be used as input to the planner so it can figure out the best way to portray “death” based on the situation.

18) Centralize Knowledge for Efficiency and Convenience

F.E.A.R.’s architecture is designed around a central working memory. This has multiple advantages:

  • It provides a place to insert optimizations, for example, by providing fast lookup of facts using hash tables and a cache.

  • It allows introducing policies for garbage collection as necessary, although by default owners of the information are responsible for deleting it.

19) Maintain World Knowledge from Previous Actions

Since each AI actor has a blackboard, it can be exploited even further by adding information dynamically according to previous plan executions. This is not only much more plausible, but it also highlights to the player how the AI replans.

This is how Jeff Orkin describes it:

“This dynamic behavior arises out of re-planning while taking into account knowledge gained through previous failures. In our previous discussion of decoupling goals and actions, we saw how knowledge can be centralized in shared working memory. As the A.I. discovers obstacles that invalidate his plan, such as the blocked door, he can record this knowledge in working memory, and take it into consideration when re-planning to find alternative solutions to the KillEnemy goal.”
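A tiny sketch of the blocked-door example, assuming failures are simply recorded as facts that later act as preconditions (the fact layout here is an assumption):

    #include <set>
    #include <string>

    struct WorkingMemory {
        std::set<std::string> blockedPaths;  // knowledge gained from failed plans
    };

    // Called when executing a Goto step fails, e.g. the door would not open.
    void OnPathBlocked(WorkingMemory& wm, const std::string& doorId) {
        wm.blockedPaths.insert(doorId);
        // ...then request a re-plan for the still-active KillEnemy goal.
    }

    // Procedural precondition for any action that routes through a door:
    bool IsDoorUsable(const WorkingMemory& wm, const std::string& doorId) {
        return wm.blockedPaths.count(doorId) == 0;
    }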

AI Design

Screenshot 5: Over the top action by design!

20) Use Voice Communications to Create Illusions

The AI uses voice communication to signal its intended behavior. This in itself creates an illusion of intelligence, even when the logic to implement that behavior does not exist, and it is good enough to fool the players.

“For example, when an A.I. realizes that he is the last surviving member of a squad, he says some variation of ‘I need reinforcements.’ We did not really implement any mechanism for the A.I. to bring in reinforcements, but as the player progresses through the level, he is sure to see more enemy A.I. soon enough.”

This trick relies on the players’ assumptions to enhance their experience. It’s a human tendency to extrapolate and find patterns even if they are not explicitly designed that way.

21) Keep Inter-Actor Communications Realistic

Thief in many ways pioneered the use of voice to help explain the AI behaviors. But F.E.A.R. takes this further by designing it in a much more “post-modern” fashion! Specifically:

“Wherever possible, we try to make the vocalizations a dialogue between two or more characters, rather than an announcement by one character.”

This kind of AI is much less frustrating to play against.

22) Set-up Fallbacks for Most Behaviors

A planner is particularly useful for generating context-sensitive behaviors. You can make the most out of this by providing many different kinds of actions to be used as fallback behaviors.

One example Jeff uses from F.E.A.R. is reacting to grenade explosions. Only AI actors near the explosion can ReactToDanger, because the distance precondition is satisfied; actors that are further away instead fall back to LookAtDisturbance, which has no distance preconditions.
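A rough sketch of how a distance precondition naturally yields this fallback; the radius value and function names are assumptions:

    struct Vector3 { float x, y, z; };

    inline float DistanceSquared(const Vector3& a, const Vector3& b) {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // Precondition of the ReactToDanger-style action: only valid near the blast.
    bool CanReactToDanger(const Vector3& actorPos, const Vector3& explosionPos) {
        const float kDangerRadius = 8.0f;  // metres, assumed value
        return DistanceSquared(actorPos, explosionPos) <= kDangerRadius * kDangerRadius;
    }

    // Precondition of the LookAtDisturbance-style fallback: always applicable.
    bool CanLookAtDisturbance(const Vector3&, const Vector3&) { return true; }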

23) Use Costs Per Action to Help Designers Fine-Tune Behaviors

A benefit of using the A* (pronounced “A star”) algorithm for planning is that the cost of each action can be set to a floating point value, and the search process will find a solution that’s the most appropriate based on the total cost of the plan. This gives designers ways to adjust the behaviors by tweaking weights.
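As a rough sketch of the mechanism, assuming costs are just designer-edited numbers that the search accumulates as its g-cost (the names and values are examples):

    #include <map>
    #include <string>
    #include <vector>

    // Designer-tweakable costs: raising a cost makes the planner prefer other
    // ways of satisfying the same goal.
    std::map<std::string, float> g_actionCosts = {
        { "AttackFromCover", 1.0f },
        { "AttackRanged",    2.0f },  // pricier, so cover is preferred when valid
        { "AttackMelee",     5.0f },
    };

    float PlanCost(const std::vector<std::string>& plan) {
        float total = 0.0f;
        for (const std::string& a : plan) total += g_actionCosts.at(a);
        return total;  // A* expands toward the cheapest complete plan
    }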

That said, it’s hard to control the outcome of the planner explicitly. Tweaking weights is a fine art, and it’s hard to guarantee context-specific behaviors or even ordering of actions.

24) Drive Different Behaviors with the Same Technology

What we are seeing is that these characters have the same goals, but different Action Sets used to satisfy those goals:

“If we place an assassin in the exact same level, with the same GDC06 Goal Set, we get markedly different behavior. The assassin satisfies the Patrol and KillEnemy goals in a very different manner from the soldier. The assassin runs cloaked through the warehouse, jumps up and sticks to the wall, and only comes down when he spots the player. He then jumps down from the wall, and lunges at player, swinging his fists.”

25) Employ Multiple Collaborating Goals

In F.E.A.R., the behaviors of the actors arise from having multiple goals competing for activation. Specifically, these goals are KillEnemy, Dodge, Cover, and Ambush. Each of these has built-in logic for determining its priority based on changes in the world.

“When the sensors detect significant changes in the state of the world, the agent re-evaluates the relevance of his goals. Only one goal may be active at a time. When the most relevant goal changes, the agent uses the planner to search for the sequence of actions that will satisfy the goal.”

This dynamic interplay of goals with dynamic priorities results in interesting emergent behaviors that add depth to the squad behaviors too (discussed below).
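A minimal sketch of this goal arbitration, with made-up relevance formulas (the goal names follow the text above; everything else is an illustrative assumption):

    #include <string>
    #include <vector>

    struct Situation { bool enemyVisible = false; bool underFire = false; bool atCover = false; };

    struct Goal {
        std::string name;
        float (*Relevance)(const Situation&);  // recomputed on significant sensor events
    };

    std::vector<Goal> BuildGoals() {
        return {
            { "KillEnemy", [](const Situation& s) { return s.enemyVisible ? 0.7f : 0.0f; } },
            { "Dodge",     [](const Situation& s) { return s.underFire    ? 0.9f : 0.0f; } },
            { "Cover",     [](const Situation& s) { return s.underFire && !s.atCover ? 1.0f : 0.0f; } },
            { "Ambush",    [](const Situation& s) { return !s.enemyVisible ? 0.3f : 0.0f; } },
        };
    }

    // Pick the most relevant goal; if it changed, ask the planner for a new
    // action sequence that satisfies it.
    const Goal* SelectGoal(const std::vector<Goal>& goals, const Situation& s) {
        const Goal* best = nullptr;
        float bestScore = 0.0f;
        for (const Goal& g : goals) {
            const float score = g.Relevance(s);
            if (score > bestScore) { bestScore = score; best = &g; }
        }
        return best;  // nullptr means no goal is currently relevant
    }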

26) Add Actions and Goals Modularly as Necessary

Effectively, F.E.A.R.’s AI is a very modular system. The benefit here is that you can easily add new actions and goals into the mix, and see what comes out.

“The primary point we are trying to get across here is that with a planning system, we can just toss in goals and actions. We never have to manually specify the transitions between these behaviors. The A.I. figure out the dependencies themselves at run-time based on the goal state and the preconditions and effects of actions.”

Group Behaviors

Monolith focused on establishing robust individual behaviors before turning to squad-based AI.

Screenshot 6: Slo-mo extends actor behaviors for a few seconds more!

27) Use Squad Coordinators

F.E.A.R. uses high-level AI logic to manage squads, in particular to:

  1. Regroup soldiers into squads dynamically based on their position in space.

  2. Assign behaviors to individuals such as laying suppression fire, moving into position, or following orders.

This approach with a central coordinator is easier to implement and debug than just having the soldiers try to collaborate with each other locally.
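A rough sketch of such a coordinator, assuming a simple distance-based grouping and alternating roles (both are illustrative assumptions, not F.E.A.R.'s actual squad logic):

    #include <cmath>
    #include <vector>

    struct Vector3 { float x, y, z; };

    struct SoldierInfo {
        int     id;
        Vector3 position;
    };

    enum class SquadRole { SuppressionFire, Advance };

    struct Order { int soldierId; SquadRole role; };

    float Distance(const Vector3& a, const Vector3& b) {
        return std::sqrt((a.x - b.x) * (a.x - b.x) +
                         (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    }

    // Greedy clustering: soldiers within kSquadRadius of the first member join
    // the squad; roles alternate so some suppress while others advance.
    std::vector<Order> FormSquadOrders(const std::vector<SoldierInfo>& soldiers) {
        const float kSquadRadius = 15.0f;  // metres, assumed
        std::vector<Order> orders;
        if (soldiers.empty()) return orders;
        const Vector3 anchor = soldiers.front().position;
        bool suppressNext = true;
        for (const SoldierInfo& s : soldiers) {
            if (Distance(s.position, anchor) > kSquadRadius) continue;  // not in this squad
            orders.push_back({ s.id, suppressNext ? SquadRole::SuppressionFire
                                                  : SquadRole::Advance });
            suppressNext = !suppressNext;
        }
        return orders;
    }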

28) Keep Squad Behaviors Simple

The bulk of the squad AI is implemented as four rather simple behaviors, as described by Jeff Orkin in [1]:

  • Get-to-Cover — Orders all soldiers into valid cover while laying suppression fire.

  • Advance-Cover — Similarly, using suppression fire, this moves soldiers into cover closer to the player.

  • Orderly-Advance — Move to a position as an orderly file, with each soldier covering a different side.

  • Search — Separate the soldiers into groups of two, and do a systematic sweep of an area.

29) Use Emergent Squad Behaviors

The more complex squad behaviors in F.E.A.R. are entirely due to the separation between the individuals’ planning and the squad logic. Again, in Jeff’s words:

“The truth is, we actually did not have any complex squad behaviors at all in F.E.A.R. Dynamic situations emerge out of the interplay between the squad level decision making, and the individual A.I.’s decision making, and often create the illusion of more complex squad behavior than what actually exists!”

References

[1] 3 States & a Plan: The AI of F.E.A.R.
Orkin, J.
Game Developers Conference Proceedings, 2006
(Download DOC, 704Kb)

[2] Agent Architecture Considerations for Real-Time Planning in Games
Orkin, J.
AIIDE Proceedings, 2005
(Download PDF, 151Kb)

7 Comments

#1 Ian Morrison on 10.22.07 at 5:43 pm

I remember this presentation… I recall spending the vast majority of my time thinking “oh, hey, THAT’S neat” and variations thereof.

The biggest problem I had with it was with #10 on your list. Storing it all in an array is obviously the ideal way to make it as efficient as possible, but I’ve got to wonder if there could be ways to extend it to managing more complex environments, more than one weapon, etc. Perhaps the array could be extended to keep certain types of objects (like potential targets) in a kind of list?

I think a consequence of this is that FEAR rarely had two sides with multiple AIs each fighting each other (it was always you versus a squad). Even when they did, the fights were notably straightforward, like the area where a small group of Armacham soldiers are firing at a group of Replicas down a long, narrow hallway. You never, for example, have a teammate fight beside you.

#23 is also a really interesting subject for me. Are there any resources on simplifying this process, or at least standardizing it somewhat? One idea I’ve been toying with in my own game is to have the weighting correspond to easily estimated values with predetermined units of measure (for example, measuring the value of shooting a certain target in terms of damage you could expect to inflict at the range, weapon spread, and rate of fire you have chosen).

Thinking up ways to measure individual types of results (i.e., inflicting damage or gaining advantageous positions) is not too difficult. What is difficult is deciding how these values will compete with each other. Some scales would result in any value from 0 to infinity, while others might only go from 0 to 100. Similarly, some might be more valuable the closer they get to 0, while others might even have negative values! I’ve been puzzling over how to make sure that individual goals can be compared with each other fairly.

#2 Dave Mark on 10.23.07 at 8:27 pm

Once again, the F.E.A.R. AI has exposed an intriguing premise - that of breaking down the decision process into the most granular component parts rather than only to the level of the decision itself. In the latter, you would get as far as “should I run, hide, fire, etc.”. In the former, you are going all the way down to the building blocks that make up that decision and giving the AI agent a way of constructing with those blocks in order to arrive at his own decision.

Really, it’s not much different from parenting a child. You could admonish a child to “not run into the street” (the quotes here are very necessary). At that point, (assuming 100% obedience… a bit of a stretch) the child will “not run into the street”. However, the next time you hop out of the car at the grocery store, it will not occur to him to “not run into the parking lot.” After all, that was not explicitly contained in the instruction to “not run into the street.” As well, it may not register to said youngster to exhibit caution when crossing a driveway while walking on the sidewalk. Dad never said anything about this… he said to “not run into the street.”

If you use a different approach, however… you can allow the child to apply this sage advice to any such situation. The trick is, you tell the child not WHAT to do, but explain WHY. Rather than the explicit “do not run into the street,” you can say “be careful of any place where there may be cars because you may get plowed over.” (That last bit is somewhat over the top.) Assuming the child understands the concept that you explained, they can now apply it to any situation where they may reasonably expect cars… the parking lot, the driveway, or standing in the middle of the Daytona Motor Speedway. You didn’t have to specifically list each of those places (what if he visits Talladega?), only the parameter of cars. It is now up to the child, armed with knowledge of the danger of cars, to decide where to apply that tidbit.

Sure, this is a simplistic example, but it illustrates the point. By focusing on defining the “why” of the situation and enabling our AI agents to process their environment along with those “whys”, we are preparing them for any potential situation where that “why” may occur. It beats the hell outta adding to a near infinite list of “whats” that still may never satisfy all the potential game-states that the agent may encounter.

F.E.A.R. isn’t the only game that has taken a similar approach. And STRIPS isn’t the only way to do it. At that point, however, we are only talking tools (i.e. “what”). The most important thing that we as designers need to remember is “why” we are doing it.

#3 Sergio on 10.24.07 at 3:58 am

Planning is an exciting technology, and it’s true it opens a world of possibilities.

However, (there’s always a however) don’t assume the road is bump free. In my experience, the biggest obstacle for a planning system to succeed is that it lives or dies by the abstraction capabilities of the AI designer.

Most designers can describe what they want using specific, concrete examples: “I want that monster to run towards the player, use cover once or twice, and attack from there using this gun.” Even if the designer is just detailing an example, and he really wants emergent behaviour (an exceptional case by itself), it’s still quite a challenge to *abstract* the basic building pieces and rules that will combine to create those behaviours.

As Dave has explained, you need to understand the motivations a character has for doing things. And “because the designer said so” doesn’t really count. Unless you have a purely functional design, in which mechanics exist to give computer opponents tools that they can use effectively by planning, decomposing the behavioural range of characters into small actions with pre- and post-conditions, and finding good heuristics to glue it all together is not a simple task.

#4 Diego on 10.25.07 at 5:15 pm

I have also been thinking about the planning system for a while, since it is a very emergent way of building an AI system. I think that programmers like this system a lot because it’s very modular. You build a bunch of behavior blocks, you set some preconditions, and let the planner organize them according to a set of rules. It looks clean and organized.

The problem with all these emergent systems is that it’s very difficult to hack specific behaviors; you have to play around with the rules and the preconditions to obtain what the designer wants. The decision tree described by Damian Isla for Halo included specific mechanisms that they had to add to support that, and it’s even more difficult with a planner.

For designers to be able to use this kind of system, they have to be aware of what the planner is going to do, anticipating its decisions so that they can build the level or prepare situations with that in mind. I think designers are more used to the old systems and will have difficulty understanding how a planner works well enough to anticipate what is going to happen.

#5 Dave Mark on 10.25.07 at 5:52 pm

While it is true that the behavior of an agent is more obscured from casual examination by the designer and programmer, I don’t necessarily feel that it locks out specifying certain behaviors. In the most simplistic form, a point in the game can have a flag that triggers a branch away from the rest of the planning module and into a sub-section that utilizes other, more designer-controlled algorithms (e.g. rule-based or scripted).

For example, say the planning algorithm is churning merrily away in a combat sequence. When the player reaches a certain spot (say the communications control panel on the desk), the AI for a specific bot may interrupt its regularly scheduled programming to perform a specific action. This is entirely reactive to if and when the player meets that precondition (the comm. panel). At this point, by using a message from the comm panel to the nearby agents, and allowing that message to interrupt the current plan action(s), you are giving the designer the ability to specify a triggered reaction to a situation. It has little to do with what was happening before and may or may not have to do with anything afterwards. The agent may very well be allowed to go back to combat or running or any other plan-based action. But what it does allow is for the designer to not care about anything else in that room BUT the specific contingency.

I think therein lies much of the power… script the specific behaviors for specific issues - but then be able to say “for the rest of the time, do as you see fit.”

#6 Diego on 10.25.07 at 7:35 pm

You can certainly do that, but it’s not that simple. Conditions can change while the character is doing that scripted action, and in that case you want the planner to reevaluate the situation and decide to perform a new set of actions. If you don’t reactivate the planner until the scripted action is finished, you have to script all possible situations to end the scripted action and go back to the planner.

If you can do that, it’s cool, but all the possible combinations that the planner tries to avoid start to cause pain as the specific situations and the possible world events increase in number.

#7 alexjc on 10.28.07 at 8:27 am

Ian,

Regarding #10; I believe Jeff mentioned once that the planning was way too slow without it, so it would have been a deal breaker for using the technology. So don’t see it as a premature optimization, but an obligatory proof of adequate performance :-) It’s not only more efficient to process the vector, but it reduces the search space quite dramatically…

The extensions you mention would definitely work, it’s just a question of how fast.

Regarding #23; I don’t like weights or absolute priorities. They’re a bit clumsy, although they get the job done. I prefer relative ordering based on importance (easier to edit).

Having a hierarchy is a good way to tune these behaviors as you can insert special cases in there manually (unlike with a flat planner).

Dave,

Good points. I see programming moving towards this kind of approach: teaching a system about specific things rather than having to explain everything step by step all the time.

Sergio,

I haven’t seen designers have too much trouble thinking in “abstract terms” like you describe. It’s very much like teaching a child. True, in the case of STRIPS, you don’t get the chance to override the behavior with specific “scripted sequences.”

Diego,

I think hierarchical planners can fix that by allowing designers to override any plan with specifically chosen behaviors… So that seems like the next logical step to me.

Anyway, some great comments here. We should make it a topic for a Tuesday discussion!

