Genetic Algorithms

1. Intro

Genetic Algorithms (GA) have been around for a while but have not seen widespread adoption in data mining circles as a primary modeling technique.  They are widely used in academic research to simulate evolutionary environments and human behavior.  In economics, many papers have shown that GA are effective in simulating how learning affects the economy.  In these papers, GA are used to model how people learn and adapt to changes in an economic system.

In data mining, GA are mainly used in combination with other techniques, such as artificial neural networks (ANN), or as a voting mechanism.  Here they can be quite powerful in choosing the best parameters or model.  As a primary modeling technique, such as predicting income, they have been less successful.  If you remember genetics from your biology class, GA will seem familiar to you.  Genetic algorithms were developed following the Inference Theory of Learning (ITL).

Key features of a GA

1. Heterogeneous beliefs across agents.

2. Information requirement for agents is minimal.

3. Natural model for experimentation by agents with alternative rules.

4. Heterogeneity allows parallel processing of information.

2. Overview

The basic principles of heredity are at the core of GA. In a GA, genes represent knowledge or a belief, and the evolutionary process picks the best belief(s). The fitness of a belief is determined by its forecast error, and a select percentage of beliefs is eliminated based on fitness. Key to GA is the fact that knowledge, good or bad, is quickly shared throughout the knowledge space. An agent's beliefs are typically represented by bit strings. Below is an example of a simple belief bit string, X = (X1, X2, … , Xn).

Position 4 indicates 1000

Position 3 indicates 100

Position 2 indicates 10

Position 1 indicates 1

So,  the bit string  0010  indicates a belief of 10.
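The place-value scheme above can be sketched in a few lines of Python. This is a minimal illustration; `decode` is a hypothetical helper name, and the rule (position i, counted from the right, contributes 10 to the power i-1) is taken directly from the table above.

```python
def decode(bits):
    """Decode a belief bit string with the place-value scheme above:
    position 1 (rightmost) contributes 1, position 2 contributes 10,
    position 3 contributes 100, and so on."""
    return sum(10 ** i for i, bit in enumerate(reversed(bits)) if bit == "1")

print(decode("0010"))  # 10, the belief from the example above
```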

3. Learning Methodology

a) Cross Over

Crossover is when two agents share beliefs (genes); it can be viewed as simulated mating.  First, two newborn agents are randomly chosen to mate.  With some probability, the agents perform crossover of their beliefs.  Crossover is achieved by dividing each agent's bit string at the same random point and then swapping the pieces.  This creates two new bit strings that represent new agents with new beliefs.

X = (X1, X2, … , Xn)  ->  X' = (X1, … , Xi, Yi+1, … , Yn)

Y = (Y1, Y2, … , Yn)  ->  Y' = (Y1, … , Yi, Xi+1, … , Xn)

Example

Before Mating:

Agent 1’s  belief: 0010  = 10

Agent 2’s belief: 0001  = 1

Mating:

Exchange segments 2 and 3.

After Mating:

Agent 3’s belief: 0000  = 0

Agent 4’s belief: 0011  = 11
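The crossover step can be sketched as follows, assuming beliefs are stored as Python strings of "0"/"1" characters; `crossover` is a hypothetical helper name. Cutting the two example strings after the third character reproduces the offspring above.

```python
import random

def crossover(x, y, point=None):
    """Single-point crossover: cut both parents' bit strings at the same
    (random) point and swap the tails, yielding two new beliefs."""
    if point is None:
        point = random.randint(1, len(x) - 1)  # random crossover point i
    return x[:point] + y[point:], y[:point] + x[point:]

# Cutting "0010" and "0001" after the third bit gives the offspring above.
print(crossover("0010", "0001", point=3))  # ('0011', '0000')
```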

b)  Mutations

To produce an even more random new belief, the crossed-over belief can then be mutated.  Mutation involves randomly changing one bit of the string to its opposite value, in this case from zero to one or one to zero.

e.g.  X’ = (X1, … , Xi-1, Z, Xi+1, … , Xn), where Z is the mutated bit.

Agent 3’s belief: 0000  = 0 -> 1000  = 1000
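A minimal sketch of mutation, again assuming string-encoded beliefs; `flip` and `mutate`, and the mutation rate, are illustrative choices rather than anything specified above.

```python
import random

def flip(bits, i):
    """Flip the single bit at index i from zero to one or one to zero."""
    return bits[:i] + ("0" if bits[i] == "1" else "1") + bits[i + 1:]

def mutate(bits, rate=0.05):
    """With probability `rate`, flip one randomly chosen bit."""
    if random.random() < rate:
        return flip(bits, random.randrange(len(bits)))
    return bits

print(flip("0000", 0))  # 1000, as in the example above
```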

c) Inversion

Another approach to creating new information is to invert an existing string. Inversion takes a randomly chosen segment of an agent's bit string and reverses it.  This method, like mutation, does not require the newborns to interact with one another while still allowing new information to be generated from a newborn's belief.

e.g.  X’ = (X1, … , Xi-1, Xj, Xj-1, … , Xi, Xj+1, … , Xn), where the segment from Xi to Xj has been reversed.
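Inversion of a segment can be sketched with string slicing; `invert` is a hypothetical helper name, and the segment endpoints would normally be chosen at random.

```python
def invert(bits, i, j):
    """Reverse the segment of the bit string from index i to j inclusive."""
    return bits[:i] + bits[i:j + 1][::-1] + bits[j + 1:]

print(invert("0110", 1, 3))  # 0011 -- the last three bits are reversed
```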

4. Selection

After the crossover, mutation, and inversion processes are complete, each newborn has a new belief. Once this new information has entered the system, each newborn holds two beliefs: one received from the smarter parent and one it created itself.  The newborn then applies its learning goal: the belief that would have yielded the highest utility in the previous period becomes the newborn's forecast rule. One cycle of the learning process is now complete.  The cycle is repeated until a stopping criterion (such as a minimum forecast error) is achieved or the maximum number of cycles is reached.
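The full cycle described above can be sketched end to end. This is a minimal illustration, not a definitive implementation: the population size, the elimination of the least-fit half, the one-bit mutation of every child, and the toy fitness function are all assumptions made here for the example, while fitness is interpreted as forecast error (lower is fitter) as in the Overview.

```python
import random

def run_ga(fitness, n_agents=20, n_bits=4, max_cycles=100, target_error=0):
    """One learning cycle per iteration: selection by forecast error
    (lower is fitter), then crossover and mutation to refill the
    population.  Stops when the best belief's error reaches
    `target_error` or `max_cycles` is exhausted."""
    pop = ["".join(random.choice("01") for _ in range(n_bits))
           for _ in range(n_agents)]
    for _ in range(max_cycles):
        pop.sort(key=fitness)                      # fittest (lowest error) first
        if fitness(pop[0]) <= target_error:
            break                                  # stop criterion met
        survivors = pop[:n_agents // 2]            # eliminate the least fit half
        children = []
        while len(survivors) + len(children) < n_agents:
            x, y = random.sample(survivors, 2)     # pick two parents
            point = random.randint(1, n_bits - 1)  # single-point crossover
            child = x[:point] + y[point:]
            i = random.randrange(n_bits)           # mutate one random bit
            child = child[:i] + ("0" if child[i] == "1" else "1") + child[i + 1:]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Toy forecast error: number of bits differing from a target belief "1010".
best = run_ga(lambda b: sum(a != t for a, t in zip(b, "1010")))
```

With the toy fitness above, the population quickly converges on the target belief, mirroring the cycle of crossover, mutation, and selection described in the text.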