Genetic Algorithm Class Design

By ai-depot | June 30, 2002

Candidates & Phenotypes

Candidates

Whether used by the template or the generic GA, candidates need to implement some basic features:

  • Conversion to and from genotypes (or chromosomes)
  • Ability to randomise

That said, after a lot of experimenting, I've found the ability to do phenotype-specific breeding very useful. Essentially, instead of dealing with an array of genes, we keep the data-structure that's used to store the phenotype and breed that directly. There are many reasons for doing this, which we'll discuss in the next section.

First, let's take a quick overview of the code:

class Candidate
{
public:

/**
* Set the candidate to a random solution, allowing good
* initial coverage of the search space.
*/
void Randomise();

/**
* Create this phenotype given that of the mother and the
* father.
*
* @param mother The first phenotype to use in breeding.
* @param father The second phenotype to use in breeding.
*/
void CreateFromParents( const Candidate& mother, const Candidate& father );

}; // class Candidate
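
As a concrete illustration, here is a minimal sketch of a candidate that stores a fixed-length vector of floats. The class name, the uniform crossover and the 1% mutation rate are my own assumptions, chosen only to show how the interface might be filled in:

#include <vector>
#include <cstdlib>

class VectorCandidate
{
public:

    explicit VectorCandidate( size_t length = 10 )
    : genes( length )
    {
    }

    /**
     * Fill the solution with uniform random values in [0,1].
     */
    void Randomise()
    {
        for (size_t i = 0; i < genes.size(); ++i)
            genes[i] = (float)rand() / RAND_MAX;
    }

    /**
     * Uniform crossover with a small chance of mutation.
     */
    void CreateFromParents( const VectorCandidate& mother,
                            const VectorCandidate& father )
    {
        genes.resize( mother.genes.size() );
        for (size_t i = 0; i < genes.size(); ++i)
        {
            // pick each gene from either parent at random
            genes[i] = (rand() % 2) ? mother.genes[i] : father.genes[i];

            // hypothetical 1% mutation: small random perturbation
            if (rand() % 100 == 0)
                genes[i] += 0.1f * ((float)rand() / RAND_MAX - 0.5f);
        }
    }

private:

    std::vector<float> genes;
};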

Phenotype-Specific Genetics

Before I start the argument in favour of the phenotype approach, I'll mention the advantages of the alternatives. The concept of a genotype or chromosome is great for genericness (did you know that's in the dictionary? :). You can convert any solution to a chromosome, optimise it with a GA, and convert it back. This simplifies the process a lot, even though you have to encode and decode your solutions each time.
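
For example, even a trivial phenotype like a 2D point can be flattened into a chromosome and rebuilt after optimisation. This round-trip is only a sketch of the idea; the Point struct and the Encode/Decode names are hypothetical:

#include <vector>

struct Point { float x, y; };

// Flatten the phenotype into a chromosome (array of genes).
std::vector<float> Encode( const Point& p )
{
    std::vector<float> genes;
    genes.push_back( p.x );
    genes.push_back( p.y );
    return genes;
}

// Rebuild the phenotype from the optimised chromosome.
Point Decode( const std::vector<float>& genes )
{
    Point p = { genes[0], genes[1] };
    return p;
}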

However, this genericness is not always an advantage, and I've come to realise this with more experience. The conversion of complex data-structures to an array of genes is often a very tedious process. MIT's GALib allows some other basic data-types, which does simplify things. But taking things a step further offers an extra advantage.

When you perform the genetic operations on the phenotype itself, you can include domain-specific knowledge in the mutation and crossover operators. That probably means very little in theory, so let's look at an example: variable-size feed-forward neural networks.

You can easily encode such a network as an array of floats, using a layer-size gene and guard genes (to prevent misinterpreting a layer when its size has been mutated). However, when combining networks with different structures, the weights of the layers will overlap during the crossover. This is not necessarily a problem, rather a sub-optimal way of doing things: it causes a bigger jump in the search space. On the other hand, by using the NN data-structure to perform the crossover, you can keep the weights of the same layer together and set missing ones to 0.0f, which minimises the change in behaviour. This improves the performance of the GA a fair bit.
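
To make the idea concrete, here is a sketch of such a layer-aware crossover. The representation (a network as a list of layers, each layer a flat array of weights) and the per-weight uniform selection are my own assumptions; the only point taken from the argument above is that weights of the same layer stay together and missing weights default to 0.0f:

#include <vector>
#include <cstdlib>
#include <algorithm>

typedef std::vector<float> Layer;    // flat array of weights
typedef std::vector<Layer> Network;  // one entry per layer

Network Crossover( const Network& mother, const Network& father )
{
    static const Layer empty;
    Network child( std::max( mother.size(), father.size() ) );

    for (size_t l = 0; l < child.size(); ++l)
    {
        // a layer missing from the smaller parent counts as empty
        const Layer& m = (l < mother.size()) ? mother[l] : empty;
        const Layer& f = (l < father.size()) ? father[l] : empty;

        child[l].resize( std::max( m.size(), f.size() ) );
        for (size_t w = 0; w < child[l].size(); ++w)
        {
            // weights the chosen parent lacks become 0.0f, so the
            // child's behaviour stays close to its parents'
            float mw = (w < m.size()) ? m[w] : 0.0f;
            float fw = (w < f.size()) ? f[w] : 0.0f;
            child[l][w] = (rand() % 2) ? mw : fw;
        }
    }

    return child;
}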
