Return to Castle Wolfenstein: AI Analysis
By ai-depot | June 30, 2002
This in-depth review analyses the AI of the highly acclaimed game from a developer's point of view; working techniques and pitfalls are pointed out, along with potential improvements for the future.
Written by Alex J. Champandard.
I managed to make some time at the start of this weekend to do some meticulous "experimentation" with Return to Castle Wolfenstein's AI. I started on the second hardest skill, since I value my time more than I like to brag about my shooting skills! The characters' behaviours are usually much less realistic in hard mode anyway, so it made sense scientifically ;) Obviously things got out of hand, and I ended up completing the game late on Sunday night. Admittedly I didn't waste time looking for all the secret areas, so that doesn't say much about the length of the game; rather, it reveals how addictive the game is!
All in all, I found the game environment very captivating, though some parts were much better than others: the stealth scenes against other humans were far stronger than those with monsters or robots. The storyline is a bit convoluted, in the way you'd expect from an FPS. The skill levels are reasonably judged, ramping up steadily towards the end of the game. That said, I was quite annoyed that some zombies spontaneously became very resistant just because I was in the final episode! The last boss was very disappointing, since all it really took was wide circle-strafing with a sniper rifle, plus the occasional super chain-gun rush. I had more trouble beating the final X-creature in the Norwegian episode; at least it held up fairly well against any brute-force technique…
Graphically, the indoor levels were gorgeous, despite me playing on a machine barely above the required specification. I’m still not convinced about the Q3A engine’s handling of outdoor scenes, as everything looks so coarse. That said, I’d love to see it on a much higher performance machine… maybe a good reason to keep the game.
Anyway, enough with "standard" reviewing, let's get into some technical Artificial Intelligence details… I'm sure that's what you're here for! Throughout the review, I mention parts of the code that perform specific functions. You can find the package containing all the Wolf C code over on Blues News.
Spatial Awareness
Wolfenstein is based on the Quake 3 Arena engine (Q3A), which comes with an already comprehensive navigation system. This is primarily based on AAS, the area awareness system designed by J.P. van Waveren for his Gladiator Quake 2 bot. This was extended within Q3 itself, but I believe the underlying concepts have remained the same. You can find his master’s thesis right here. Essentially, a pre-processing phase extracts adjacency information from Id’s proprietary BSP structure. This builds a static representation of the level, which is tweaked for select dynamic conditions such as doors. Given any point in space, the system can find which Area it is in, and determine a path to any other point by using “hierarchical routing”, as the author calls it. If I’m not mistaken in my quick glance at Wolf’s source, this provides the base for all the navigation in the game.
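To make the idea concrete, here is a minimal sketch of what such a pre-processed area representation might look like. The structures and function names below are invented simplifications for illustration, not the actual AAS data types or botlib interface, and the flat breadth-first route query stands in for the cached hierarchical routing described in the thesis.

/* Hypothetical, heavily simplified stand-in for an AAS-style area graph. */
#include <string.h>

#define MAX_AREAS 1024
#define MAX_LINKS 8

typedef float vec3_t[3];

typedef struct {
    vec3_t mins, maxs;        /* bounding box of the convex area */
    int    links[MAX_LINKS];  /* indices of reachable neighbouring areas */
    int    numLinks;
} nav_area_t;

static nav_area_t areas[MAX_AREAS];
static int        numAreas;

/* Find which area contains a point (linear scan; the real system is far smarter). */
int Nav_PointAreaNum(const vec3_t p) {
    for (int i = 0; i < numAreas; i++) {
        if (p[0] >= areas[i].mins[0] && p[0] <= areas[i].maxs[0] &&
            p[1] >= areas[i].mins[1] && p[1] <= areas[i].maxs[1] &&
            p[2] >= areas[i].mins[2] && p[2] <= areas[i].maxs[2])
            return i;
    }
    return -1;
}

/* Breadth-first search over the adjacency links; fills 'next' with the
   first area to move through on the way from 'start' to 'goal'. */
int Nav_RouteToArea(int start, int goal, int *next) {
    int queue[MAX_AREAS], parent[MAX_AREAS];
    int head = 0, tail = 0;
    memset(parent, -1, sizeof(parent));
    queue[tail++] = start;
    parent[start] = start;
    while (head < tail) {
        int a = queue[head++];
        if (a == goal) {
            while (parent[a] != start)
                a = parent[a];        /* walk back towards the start */
            *next = a;
            return 1;
        }
        for (int i = 0; i < areas[a].numLinks; i++) {
            int n = areas[a].links[i];
            if (parent[n] == -1) { parent[n] = a; queue[tail++] = n; }
        }
    }
    return 0;  /* unreachable */
}

Roughly speaking, the real system also annotates the links with travel types (walk, jump, ladder and so on) and caches travel times, which is what makes the hierarchical routing fast.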
The Good
The German soldiers' sense of cover is very good indeed. They have no trouble finding hiding spots, and seem to make use of them very realistically. They tend to be very cautious with their lives, and take cover at well-chosen moments. This is all done inside this procedure:
int AICast_WantsToTakeCover(cast_state_t *cs, qboolean attacking);
This is a somewhat simple piece of code, but provides great results. It combines aggressiveness, danger, and the current state to determine if it’s a good plan to take cover. On top of this, the ability to take cover also implies that a sense of visibility is used to determine where to move to, and indeed this is done with another method:
bool AICast_GetTakeCoverPos(cast_state_t *cs, int enemyNum, vec3_t enemyPos);
This procedure first does a simple local check, to see if the character can get out of trouble by ducking. If not, the AAS system is called to find a more complicated hiding spot.
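As a rough sketch of that two-step idea (not the actual RTCW code), the logic might look like this: first check whether crouching on the spot breaks the enemy's line of sight, and only fall back to a wider navigation-based search if it doesn't. The trace and navigation calls are hypothetical placeholders for the corresponding engine services.

#include <stdbool.h>

typedef float vec3_t[3];

/* Hypothetical engine hooks: a visibility trace and a navigation query. */
bool Trace_LineIsClear(const vec3_t from, const vec3_t to);
bool Nav_FindHiddenSpot(const vec3_t origin, const vec3_t threatEye, vec3_t coverPos);

#define CROUCH_VIEWHEIGHT 18.0f   /* arbitrary crouched eye height */

/* Returns true and fills coverPos if a usable cover position was found. */
bool AI_GetTakeCoverPos(const vec3_t origin, const vec3_t threatEye, vec3_t coverPos)
{
    /* Step 1: cheap local check - would simply ducking here hide us? */
    vec3_t crouchedEye = { origin[0], origin[1], origin[2] + CROUCH_VIEWHEIGHT };
    if (!Trace_LineIsClear(threatEye, crouchedEye)) {
        coverPos[0] = origin[0];
        coverPos[1] = origin[1];
        coverPos[2] = origin[2];
        return true;               /* ducking on the spot is enough */
    }

    /* Step 2: the more expensive search through the navigation data. */
    return Nav_FindHiddenSpot(origin, threatEye, coverPos);
}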
Similarly, under the threat of a grenade, monsters have little trouble avoiding getting hit. In fact, this forces you to improve your grenade throwing, as you quickly start planning to trap them in and timing the launch so it explodes at the right moment. Though I have to admit, the grenades were much too weak for my liking!
One particularly impressive moment was inside the X-lab, throwing a grenade around a door to hit a legless "electric" monster. As soon as he noticed the grenade, he simply jumped over the barrier down to the lower level, and hid further down the corridor. Now, I'm not sure if the jump down was actually intended, but it looked cool nonetheless!
The Not So Good
Multiple times over the course of the game, I’ve witnessed a couple soldiers getting stuck together in a small doorway. They’d just run forward in slow motion, and not get very far at all. This reveals two things:
- The inability of AAS to deal with crowded dynamic environments containing entities that are not enemies. (Entities that are dangerous can simply be destroyed, so they don't really pose a problem!)
- A complete lack of any form of squad tactics. Even trivial request-based communication would allow this problem to be solved; a rough sketch of the idea follows this list.
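Purely as an illustration (none of this is in the RTCW code), the request-based idea could be as small as a token per choke point: a character asks before committing to the doorway, and anyone refused simply waits or re-paths.

/* Hypothetical request-based arbitration: at most one owner per choke point. */
#define MAX_CHOKEPOINTS 64

static int chokeOwner[MAX_CHOKEPOINTS];   /* 0 = free, otherwise entityNum + 1 */

/* An entity asks to pass through choke point 'cp'; returns 1 if granted. */
int Choke_Request(int cp, int entityNum) {
    if (chokeOwner[cp] == 0 || chokeOwner[cp] == entityNum + 1) {
        chokeOwner[cp] = entityNum + 1;   /* grant (or re-grant) the token */
        return 1;
    }
    return 0;                             /* occupied: wait or pick another route */
}

/* Called once the entity is clear of the doorway. */
void Choke_Release(int cp, int entityNum) {
    if (chokeOwner[cp] == entityNum + 1)
        chokeOwner[cp] = 0;
}

A soldier that is refused the token just holds position for a moment, or asks the navigation system for an alternative route, which is enough to stop two of them grinding against each other in slow motion.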
A second detail that does jar involves the elite witches. They have one behaviour that causes them to roll on the floor to take cover, or to position themselves for a shot. On more than one occasion, I've noticed them roll straight into a wall. I think this is just negligence, as it would be fairly simple to check that there is enough room to move, though it would imply a closer integration of the animation and the underlying AI script. I think this is the culprit:
char *AIFunc_BattleDiveStart( cast_state_t *cs, vec3_t vec );
Finally, onto the little details. I've seen a couple of soldiers fall down ladders, or change their minds halfway down, though the second can be forgiven. Doors also open both ways, which is quite convenient for the AI, but it admittedly improves the human's playing experience too.
Analysis
I’m going to go out on a ledge (virtually :), and claim that AAS was not the ideal type of navigation system required for RTCW. Admittedly it did make sense to use existing Id technology that was already integrated into the engine, but if you’re in the same situation about to implement a similar single player game with AI characters, stop and think.
All the behaviours you can observe in the game could be obtained using a purely reactive strategy. This means you don't need any form of cognitive representation of the level like AAS, which implies much quicker development times and easier debugging. Though a deliberative planner may conceptually provide the most human-like reasoning, a reactive approach will generally not be far behind; in the great majority of cases it will return cover spots that are just as good. When cover cannot be found, the AI character can simply come out all guns blazing. All of this simplifies the development process considerably, so it's not to be neglected if you don't already have the power of AAS behind you.
Not only is a deliberative approach unnecessary, I'll also argue it's no more efficient than a simple reactive behaviour. Finding cover can be done by tracing a few boxes around the current position, and finding which end position is not visible to the threat. If the end points are all visible, then the task can be repeated the next frame with different parameters, or the furthest option can be chosen. All these traces are very predictable, can be batched, and would work wonders with a local cache of your 3D space partition.
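As a concrete illustration of that reactive idea, here is a minimal sketch: sample a ring of candidate positions around the character, trace from the threat's eyes to each, and take the first one the threat cannot see. The trace function is a hypothetical engine hook and the parameters are arbitrary.

#include <math.h>
#include <stdbool.h>

typedef float vec3_t[3];

/* Hypothetical engine hook: true if nothing solid blocks the segment. */
bool Trace_LineIsClear(const vec3_t from, const vec3_t to);

#define NUM_CANDIDATES 8
#define COVER_RADIUS   128.0f            /* arbitrary search radius in game units */
#define TWO_PI         6.2831853f

/* Reactive cover search: returns true and fills coverPos if any sampled
   point around 'origin' is hidden from 'threatEye'. */
bool AI_FindReactiveCover(const vec3_t origin, const vec3_t threatEye, vec3_t coverPos)
{
    for (int i = 0; i < NUM_CANDIDATES; i++) {
        float angle = (TWO_PI * i) / NUM_CANDIDATES;
        vec3_t candidate = {
            origin[0] + COVER_RADIUS * cosf(angle),
            origin[1] + COVER_RADIUS * sinf(angle),
            origin[2]
        };
        /* If the threat cannot see this point, it is a usable cover spot. */
        if (!Trace_LineIsClear(threatEye, candidate)) {
            coverPos[0] = candidate[0];
            coverPos[1] = candidate[1];
            coverPos[2] = candidate[2];
            return true;
        }
    }
    return false;   /* nothing hidden this frame; retry later with new parameters */
}

If every candidate is visible, the search can simply be retried next frame with a different radius or angle offset, or the character can come out all guns blazing, as argued above.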
Individual Behaviours
I found each of the character behaviours well crafted, with the exception of the end boss (it's difficult to express how I feel about him without swearing, so I won't ;) These enhanced the gameplay quite a lot, while still remaining realistic. The armed undead's behaviour was particularly interesting: when they crouch down, your bullets simply bounce off; if you wait until they rush forward, you can hit them without any problems. This was interesting for me, as it took a couple of attempts to develop the right strategy (one that would not get me hurt), and it made the final tactic quite novel: wait until the last moment and fire bursts as they rush forward. It goes to show how important the AI behaviours are in shaping the game-play.
Technology
Return to Castle Wolfenstein uses the same AI concept as some of the earliest games: finite state machines. The original Doom springs to mind, though Quake 2's non-player character AI appeared the most similar in my opinion. The FSM technique is nothing amazing, yet it is well polished for computer games. It has many advantages:
- Easy integration with the animations. It wouldn't even surprise me if the whole concept of finite state machines in games was purely for simplifying the handling of animation transitions (rather than AI :) Admittedly, there will soon be a need for two FSMs; one will monitor the animation's state, and the other will handle the AI.
- Simple conceptually, easy to develop and debug.
- Fully deterministic, your testers will love you… **sigh**
- Efficient! You can hardcode pretty much everything if you want, and function pointers in C have extremely little overhead.
The key to developing convincing behaviours with state machines lies in the state transitions. In fact, most of your time spent within a state will be determining which state to switch to next! Actually performing the actions associated with the states remains quite trivial (due to the simplicity of the actions: fire, move). As the transitions become more human-like, the emerging behaviour becomes more realistic. The states themselves grow in number alongside the intricacy of the AI; for example, when sounds, communication and emotions come into play, the complexity of the FSM explodes.
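To illustrate, here is a minimal sketch of the classic function-pointer FSM pattern in C. The states and transition conditions are invented for the example; they are not taken from the RTCW source.

#include <stdbool.h>

/* Minimal blackboard the states read from and write to. */
typedef struct ai_state_s {
    bool  enemyVisible;
    bool  underFire;
    float health;
    void (*think)(struct ai_state_s *self);   /* the current "think" function IS the state */
} ai_state_t;

static void State_Idle(ai_state_t *self);
static void State_Attack(ai_state_t *self);
static void State_TakeCover(ai_state_t *self);

/* Each state does a tiny amount of work, then spends most of its effort
   deciding which state to switch to next. */
static void State_Idle(ai_state_t *self) {
    if (self->enemyVisible) self->think = State_Attack;
}

static void State_Attack(ai_state_t *self) {
    /* ... fire weapon here ... */
    if (self->underFire && self->health < 30.0f) self->think = State_TakeCover;
    else if (!self->enemyVisible)                self->think = State_Idle;
}

static void State_TakeCover(ai_state_t *self) {
    /* ... move towards cover here ... */
    if (!self->underFire) self->think = State_Idle;
}

int main(void) {
    ai_state_t soldier = { .enemyVisible = true, .underFire = true,
                           .health = 20.0f, .think = State_Idle };
    for (int frame = 0; frame < 3; frame++)
        soldier.think(&soldier);              /* one state update per game frame */
    return 0;
}

Notice that almost all the interesting logic sits in the transition tests; the actions themselves stay trivial, exactly as described above.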
Improvements
The major complaint I have with Wolf's handling of the FSM is the ugliness and inflexibility of the C code. Admittedly, it serves its purpose: it's efficient and it works. However, the code is a spaghetti mess, passing function pointers all over the place; there is no simple way of tweaking the behaviours without knowing the code intimately. A data-driven, script-based approach would allow a much more elegant solution (a rough sketch follows the list below). A language suited to state management and event handling would also come in very handy. There would be some speed drawbacks, but the advantages would prevail:
- Scalability — When more behaviours are required, a hard-coded solution would generate a huge resident DLL, which should be avoided for a few reasons.
- Simplicity — The behaviours for each entity would be stored in their own script file, making them easily accessible and modifiable.
- Flexibility — A simple modification of a script would allow the behaviours to be tweaked easily, possibly even online! An intuitive user interface could be provided to assist development… in the way OpenAI’s FSM is going.
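Here is a minimal sketch of what the data-driven alternative might look like: the transitions live in a table that could just as easily be loaded from a per-entity script file at run time. The state and event names are made up for the illustration.

/* Data-driven FSM sketch: behaviour is described by data, not by code. */
typedef enum { ST_IDLE, ST_ATTACK, ST_COVER, NUM_STATES } state_id_t;
typedef enum { EV_SEE_ENEMY, EV_LOST_ENEMY, EV_LOW_HEALTH, EV_SAFE, NUM_EVENTS } event_id_t;

typedef struct {
    state_id_t from;
    event_id_t event;
    state_id_t to;
} transition_t;

/* This table is effectively the "script": tweaking behaviour means editing data. */
static const transition_t transitions[] = {
    { ST_IDLE,   EV_SEE_ENEMY,  ST_ATTACK },
    { ST_ATTACK, EV_LOST_ENEMY, ST_IDLE   },
    { ST_ATTACK, EV_LOW_HEALTH, ST_COVER  },
    { ST_COVER,  EV_SAFE,       ST_IDLE   },
};

/* Apply an event to the current state; unknown pairs leave the state alone. */
state_id_t FSM_Dispatch(state_id_t current, event_id_t event) {
    for (unsigned i = 0; i < sizeof(transitions) / sizeof(transitions[0]); i++) {
        if (transitions[i].from == current && transitions[i].event == event)
            return transitions[i].to;
    }
    return current;
}

Because the table is plain data, each entity's behaviour could live in its own file, be reloaded while the game is running, or be edited through a GUI, which is precisely the flexibility argued for above.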
Hopefully, developers will at least realise the benefits of such an approach, and switch over within the next generation of games (if not taking things even further by adding some learning capabilities).
Perception
Much of the quality of the characters in Wolfenstein lies in their model of auditory perception. They have a relatively good concept of sound, such that your choice of weapon affects their state. If you use a noisy machine gun, you will get noticed; the knife, on the other hand, will allow you to go completely undetected. That said, there were quite a few cases when my assaults should have triggered some sort of reaction from neighbouring units, but didn't. I presume this was due to (what I'd consider) loud weapons being handled as stealthy ones. Another factor may be the neglect of units outside the current sector; typically, games do not simulate entities that have little impact on the player. The assumption made here seems to be that distance is not really the deciding factor, but rather visibility/close proximity in the BSP.
Screenshot 2: Patrolling soldier trying to spot us, who will soon fall victim to the almighty sniper rifle… stealth tactics are one of the game's great attractions.
Additionally, the noise of an enemy's loud machine gun tended to have a much different effect on other soldiers than my own weapon did. This is most likely because only the player's actions trigger NPC behaviour, whereas a more realistic approach would model the effect of every sound, not only the human player's.
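A sketch of that more general approach: every shot, whether from the player or an NPC, posts a sound event with a loudness radius, and every character within that radius is alerted. All names and numbers here are invented for the illustration.

#include <stdbool.h>

typedef float vec3_t[3];

#define MAX_CHARACTERS 64

typedef struct {
    vec3_t origin;
    bool   alerted;
} character_t;

static character_t characters[MAX_CHARACTERS];
static int         numCharacters;

static float DistanceSquared(const vec3_t a, const vec3_t b) {
    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return dx * dx + dy * dy + dz * dz;
}

/* Post a sound event from ANY source - player or NPC - and alert everyone
   within its loudness radius (a knife gets a tiny radius, a machine gun a
   large one). */
void Sound_Emit(const vec3_t origin, float loudnessRadius, int sourceIndex) {
    float radiusSq = loudnessRadius * loudnessRadius;
    for (int i = 0; i < numCharacters; i++) {
        if (i == sourceIndex) continue;                       /* don't alert yourself */
        if (DistanceSquared(origin, characters[i].origin) <= radiusSq)
            characters[i].alerted = true;                     /* hand over to the FSM */
    }
}

A raw distance check is crude, of course; attenuating through doors or checking proximity through the level structure, roughly what the BSP-proximity assumption above amounts to, would be more convincing.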
On the other hand, I have a few minor complaints with the visual perception. In some cases it is too good, in others too poor; this is a common failing in too many modern games. Firstly, even if you're barely moving at a fair distance, entities have no problem spotting you and starting to shoot before you have time to say "Return to Castle Wolfenstein." Reaction times are modelled by the game, which helps a bit, though I would claim they are a little too quick to be realistic. Secondly, the entities don't seem to be able to see off-centre body parts (such as legs). In some cases I should clearly have been shot, despite not being fully in view (e.g. I was standing round a corner, able to see a soldier's legs; he was facing me and could have seen mine, but did not budge. The fact he had seen me previously did not even affect his behaviour, but that's another problem…)
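A common fix for the "invisible legs" problem is to trace to several points on the target's body instead of a single origin. Here is a hedged sketch of that idea; the offsets and the trace hook are hypothetical.

#include <stdbool.h>

typedef float vec3_t[3];

/* Hypothetical engine hook: true if the segment between the points is unobstructed. */
bool Trace_LineIsClear(const vec3_t from, const vec3_t to);

/* Vertical offsets roughly corresponding to head, chest and feet. */
static const float bodyOffsets[] = { 60.0f, 40.0f, 4.0f };

/* A target counts as visible if ANY sampled body point can be seen, so a
   pair of legs poking round a corner is enough to be spotted. */
bool AI_CanSeeTarget(const vec3_t eye, const vec3_t targetOrigin) {
    for (unsigned i = 0; i < sizeof(bodyOffsets) / sizeof(bodyOffsets[0]); i++) {
        vec3_t point = { targetOrigin[0], targetOrigin[1],
                         targetOrigin[2] + bodyOffsets[i] };
        if (Trace_LineIsClear(eye, point))
            return true;
    }
    return false;
}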
Event Handling
For me, this was by far the worst part of the AI. It doesn't come across too often, but if you're looking for it, you can find it. The first problem occurred more than once. Assume you are not within an entity's field of view, but can still see it (and the distance is quite large). Shooting it repeatedly will not even make it move! The problem is that "pain" does not always cause a change of state; that is, if the entity is being simulated at all.
The second problem only occurred once, and similar situations were handled realistically at other times. It involved a soldier dying and his friend nearby not even twitching; typically, you'd expect him to at least react to that! I guess this is due to the limited field of view and the non-handling of NPC sounds, since in other instances I've seen soldiers come running when a companion has been killed.
Collective Behaviours
This aspect really surprised and impressed me. The way collective behaviours are handled is really obvious in the game, and very simple, yet it still gives incredibly good results. This is a very good example of emergent complexity, whereby unforeseen patterns arise from a set of simple rules. The game offers two good examples of this.
The first is the concept of awareness, which is a direct consequence of the basic modelling of sound. Soldiers notice you when you go around shooting at things; as more and more of them become aware of you, the problems escalate and you end up with everyone on your back. This scenario is generally amplified by the alarm. This is truly great: playing the same level twice with different tactics, you can either catch everyone napping individually, or find them all bunched together waiting for you. It just goes to show how very primitive communication between entities can truly enhance the gaming experience.
Screenshot 3: Collective behaviours among multiple soldiers really immerse the player in the world. Simple communication is the key!
Another great example of collective behaviour arises from cowardice. When a soldier is close to death, he will retreat and hide further along your path. This works out very nicely, as there are generally other units waiting there too! I don't believe the presence of others is taken into account, since the code basically just finds a cover spot, but it still works great in practice.
AI Related Extras
Quick mention of some other great features of the game that one could class as AI related.
Context Sensitive Music
All in all, this worked quite nicely. When the alarm goes off, the "panic" music starts. This immerses the player even more in the game, though I did find myself trying to be quiet as I preferred the "stealth" music! It also brings to mind a sketch by the British comedian Eddie Izzard (one of the funniest men alive ;) He mockingly says of horror films: "Why doesn't the cast listen to the music! When you get the scary violins and the choir boys singing, you know you're in for trouble…" The same applies to RTCW; the music tends to give away what's going to happen.
Selective Unit Placement
Unless you're an FPS god, you'll most likely have to play a level more than once. In my experience, unit placement tends to change from attempt to attempt. I'm assuming some probabilistic knowledge-based placement is used, or the level designer may have defined multiple positions manually. Either way, this can be a great feature for replayability, as you don't get too bored of replaying the same bit over and over. On the other hand, since the units aren't placed the same way, you're still just as likely to die (you can't learn to cope with a specific unit layout).
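If the variation really is probabilistic, it might be as simple as this sketch: each encounter has several designer-placed spawn spots with weights, and one is drawn at level load. Purely illustrative, and not based on the actual game code.

#include <stdlib.h>

typedef float vec3_t[3];

typedef struct {
    vec3_t origin;
    float  weight;    /* designer-assigned likelihood of being used */
} spawn_spot_t;

/* Pick one spot, with probability proportional to its weight. */
int Spawn_PickSpot(const spawn_spot_t *spots, int numSpots) {
    float total = 0.0f;
    for (int i = 0; i < numSpots; i++)
        total += spots[i].weight;

    float r = ((float)rand() / (float)RAND_MAX) * total;
    for (int i = 0; i < numSpots; i++) {
        if (r < spots[i].weight)
            return i;
        r -= spots[i].weight;
    }
    return numSpots - 1;   /* floating-point fallback */
}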
Conclusion
The AI in Return to Castle Wolfenstein is very good, though there’s obviously room for improvement.
First and foremost, a better handling of the characters' perception is in order. A rigorous specification for handling this would come in very handy, as it will often be reused. Coincidentally, this is the topic of my current side project (a link subtly dropped for those who have bothered reading until the end)… more about that very soon!
Secondly, I believe a more elegant handling of finite state machines will prove handy in the very near future. As the AI in games becomes more complex, the hacked-together C approach will not scale in terms of development and debugging time.
Finally, I'd love to see the units learn to use their behaviours, rather than have programmers determine the succession of actions. This would allow near-optimal behaviour and reduce the amount of coding, while still letting the designers select the fundamental behaviours to be used. This IS possible, though quite frankly I'm having a hard time convincing game developers of that!
Written by Alex J. Champandard.