Bot Navigation: Neural Networks for Obstacle Avoidance
By ai-depot | June 30, 2002
Results
Demo
If you have Quake 2 installed on your computer, you’ll be able to view the recorded demo. It’s very nice, and I spent a while editing it! That was to impress my potential supervisor…
You can download the demo here. You’ll also need the Gothic Resurrection level pack. It’s fairly big, but no bigger than an AVI animation of the same footage would have been. You’ll also be able to reuse the levels in the coming tutorials.
To view the demo, extract the archive into the Quake 2 directory, preserving the directory structure. Put the level PAK into the baseq2 directory. Then go to the console and type:
map oa.dm2
That will launch the demo. You’ll see bots at various stages of the evolution. In this case, the bots had 7 sensors and two hidden layers of 24 nodes each. That’s quite a lot, and I reduced those numbers in the later stages of experimentation. See below for more information.
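For reference, here is a minimal sketch of what the forward pass of such a network might look like. The function names and weight layout are hypothetical, the tanh activation is an assumption, and the extra input carrying the previous output (described under Tested Solutions below) is included:

#include <math.h>

#define NUM_SENSORS  7                    /* range sensors around the bot     */
#define NUM_INPUTS   (NUM_SENSORS + 1)    /* +1 for the previous output       */
#define HIDDEN_NODES 24                   /* per hidden layer, as in the demo */

/* one fully-connected layer: out[i] = tanh( bias + weighted sum of inputs ) */
static void layer(const float *in, int n_in, float *out, int n_out,
                  const float *w /* n_out rows of (n_in weights + 1 bias) */)
{
    for (int i = 0; i < n_out; i++) {
        float sum = w[i * (n_in + 1) + n_in];          /* bias term        */
        for (int j = 0; j < n_in; j++)
            sum += w[i * (n_in + 1) + j] * in[j];
        out[i] = tanhf(sum);                           /* squash to [-1,1] */
    }
}

/* inputs (sensors + last output) -> 24 hidden -> 24 hidden -> yaw change */
float network_run(const float inputs[NUM_INPUTS],
                  const float *w1, const float *w2, const float *w3)
{
    float h1[HIDDEN_NODES], h2[HIDDEN_NODES], out;
    layer(inputs, NUM_INPUTS, h1, HIDDEN_NODES, w1);
    layer(h1, HIDDEN_NODES, h2, HIDDEN_NODES, w2);
    layer(h2, HIDDEN_NODES, &out, 1, w3);
    return out;   /* negative turns one way, positive the other */
}

The weight arrays w1, w2 and w3 are what the genetic algorithm evolves.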
Fitness Function
There’s not all that much skill involved in tweaking the fitness function. It’s mostly a matter of letting the optimisation run to completion and watching for behaviour that isn’t appropriate. The weight of the corresponding fitness component then needs to be adjusted up or down. Sometimes that isn’t sufficient and an extra component needs to be added.
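In code, that tuning loop amounts to keeping one weight per penalty component and nudging the weights between runs. This is only an illustrative sketch, with hypothetical names and structure, not the exact code used:

/* hypothetical per-frame penalty measurements for one bot */
typedef struct {
    float oscillation;   /* sign flips in the yaw output */
    float laziness;      /* amount of turning            */
    float collision;     /* wall hits                    */
} penalties_t;

/* weights tweaked by hand between evolution runs, whenever the
   evolved behaviour looks wrong in one of these respects */
static const float w_oscillation = 1.0f;
static const float w_laziness    = 1.0f;
static const float w_collision   = 10.0f;

static float fitness_delta(penalties_t p)
{
    return -(w_oscillation * p.oscillation
           + w_laziness    * p.laziness
           + w_collision   * p.collision);
}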
In the case of pure obstacle avoidance, I ended up with a purely punishment-based fitness function. I usually don’t like using punishment components only, as it makes me seem like a nasty person! But it seems the most productive way of doing things, so I guess that can’t really be helped. Besides, my bots don’t have feelings yet! The final fitness pseudo-code looks like this:
// punish oscillatory movement (sign flip between consecutive yaw changes)
if (bot->desired_yaw_change * bot->last_yaw_change < 0)
    bot->fitness -= 1.0f;

// compute laziness: any turning at all costs fitness
float tmp = fabsf( bot->desired_yaw_change ) + 1;
// use square of change to penalise big changes over small increments
bot->fitness -= tmp * tmp - 1;

// and decrease fitness for hitting a wall
if (wall_close( bot->origin ))
    bot->fitness -= 10.0f;
In theory, the perfect solution will have a fitness of 0: a bot capable of avoiding all collisions without ever turning. In practice, negative values are to be expected, since some amount of turning is necessary; how much depends on the quality of the solution. For example, a single desired yaw change of 3 in one frame costs (3 + 1)² − 1 = 15 fitness points, so large, abrupt turns are punished far more heavily than small corrections. The closer the fitness is to 0, the lazier and more efficient the solution. Conversely, large negative values indicate a solution that is inefficient at avoiding collisions, and potentially too energetic for our liking.
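To make the bookkeeping explicit, here is a sketch of how this update might sit in the per-frame loop. The struct and stub functions are hypothetical; the point is that last_yaw_change has to be saved each frame so the oscillation test can compare consecutive outputs:

#include <math.h>
#include <stdbool.h>

/* hypothetical bot state, mirroring the fields used above */
typedef struct {
    float desired_yaw_change;
    float last_yaw_change;
    float fitness;
} bot_t;

/* stubs standing in for the evolved network and the engine's wall test */
static float network_output(const bot_t *bot) { (void)bot; return 0.0f; }
static bool  wall_close_to(const bot_t *bot)  { (void)bot; return false; }

static void frame_update(bot_t *bot)
{
    bot->desired_yaw_change = network_output(bot);

    if (bot->desired_yaw_change * bot->last_yaw_change < 0)
        bot->fitness -= 1.0f;                         /* oscillation */
    float tmp = fabsf(bot->desired_yaw_change) + 1;
    bot->fitness -= tmp * tmp - 1;                    /* laziness    */
    if (wall_close_to(bot))
        bot->fitness -= 10.0f;                        /* collision   */

    bot->last_yaw_change = bot->desired_yaw_change;   /* for next frame */
}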
Tested Solutions
I was dreading writing this section, as it involved doing lots of testing. Skipping it didn’t seem a very reasonable thing to do, so I wrote a small script to do the testing for me and left the computer running all night. The results were not only interesting but also very insightful.
Two network parameters were tested: the number of sensors and the number of neurons in the hidden layers. There was always one extra input to the network, corresponding to the previous output. This gave the network a sense of state, so that human-like behaviour could be rewarded. The fitness of each solution was taken as the average over 3 evolutions of 100 generations each, which discards most of the noise due to the randomness of the genetic algorithm.
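The overnight script could have looked something like the following sketch. The evolve() hook, the swept ranges and the stub body are all hypothetical; only the 3-run averages over 100 generations come from the description above:

#include <stdio.h>

/* hypothetical hook into the evolution code: evolve a population with
   the given topology for `generations` generations and return the best
   fitness found (stubbed here so the sketch compiles stand-alone) */
static float evolve(int num_sensors, int hidden_nodes, int generations)
{
    (void)num_sensors; (void)hidden_nodes; (void)generations;
    return 0.0f;   /* placeholder */
}

int main(void)
{
    const int sensor_counts[] = { 3, 5, 7 };
    const int runs = 3, generations = 100;

    for (int s = 0; s < 3; s++) {
        for (int hidden = 6; hidden <= 24; hidden += 2) {
            float total = 0.0f;
            /* average over 3 evolutions to smooth out the GA's randomness */
            for (int run = 0; run < runs; run++)
                total += evolve(sensor_counts[s], hidden, generations);
            printf("%d sensors, %d hidden neurons: average fitness %.2f\n",
                   sensor_counts[s], hidden, total / runs);
        }
    }
    return 0;
}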
After many years as a computer scientist, I have to admit these results are very nice. Not only did I not have to tweak the values to prove a point ;), but I was also able to extract a couple of interesting ideas.
- With 3 sensors, the model is efficient overall, but low neuron counts in the hidden layer don’t allow enough knowledge to be internalised, and the obstacle avoidance is not ideal. With 18 hidden neurons, the results are optimal: the right balance between a network that is too general and one that is too specific. Above 18 neurons, performance tails off.
- With 5 sensors, fewer neurons in the hidden layer are required to extract meaningful patterns. 10 to 12 neurons provide slightly better obstacle avoidance than the 3-sensor setup. As before, performance tails off slowly with fewer or extra neurons.
- With 7 sensors, the system seems to struggle to find the best behaviour, most likely due to the increase in the size of the search space. However, once a solution is converged upon, its quality is slightly superior to those with fewer sensors. Very similar neuron counts provide the best behaviour, namely between 10 and 12 in the hidden layer.