Frenetic Array

The intersection of logic and imagination.

Evolutionary Computing

Recombination Operators: Permutation, Integer, and Real-Valued Crossover

We have been introduced to recombination operators before; however, that was merely an introduction. There are dozens of different Evolutionary Algorithm recombination operators for any established genotype; some are simple, some are complicated.

For a genotype representation that is a permutation (such as a vector[1], bit-string, or hash-map[2]), we have seen a possible recombination operator. Our 3-SAT solver uses a very popular recombination technique: uniform crossover.

Furthermore, we know a permutation is not the only valid genotype for an individual: other possibilities include an integer or a real-valued number.

Note, for simplicity, we will discuss recombination as forming one offspring. The exact same process can be applied to form a second child (generally with the parents' roles reversed). Recombination can also be applied to more than two parents (depending on the operator); again, for simplicity, we choose to omit it[3].

First, let us start with permutations.

Permutation Crossover

In regard to permutation crossover, there are three common operators:

  1. Uniform Crossover
  2. $N$-Point Crossover
  3. Davis Crossover

Uniform crossover we have seen before. We consider each element in the permutation individually, and choose the gene from either parent with equal probability. For large enough genotypes, the offspring genotype should consist of roughly 50% of the genotype from Parent 1 and 50% from Parent 2.

Uniform Crossover

$N$-point crossover considers segments of a genotype, as opposed to individual elements. This operator splits the genotypes of Parent 1 and Parent 2 $N$ times (hence the name $N$-point), and creates a genotype by alternating segments from the two parents. For every $N$, there will be $N + 1$ segments. For 1-point crossover, the genotype is split into two segments, and the offspring genotype is composed of one segment from Parent 1 and one segment from Parent 2. For 2-point crossover, there will be three segments, and the offspring genotype will have two parts from Parent 1 and one part from Parent 2 (or one part from Parent 1 and two parts from Parent 2).

1-Point Crossover

Davis Crossover tries to preserve the ordering of the genotype in the offspring (as opposed to the previous methods, where ordering was not considered). The premise is a bit complicated, but bear with me. Pick two random indices ($k_1$ and $k_2$), and copy the genetic material of Parent 1 from $k_1$ to $k_2$ into the offspring at those same positions. Put Parent 1 to the side; its role is finished. Then start copying the genotype of Parent 2 from $k_1$ to $k_2$ into the beginning of the offspring. When $k_2$ is reached in the parent, continue copying from the beginning of Parent 2, and when $k_1$ is reached in the parent, skip ahead to $k_2$. When $k_1$ is reached in the offspring, skip ahead to $k_2$ and continue copying until the end. If this seems complicated (it very much is), reference the accompanying figure.

Davis Crossover

Those are considered the three most popular choices for permutations. Now, let us look at integer crossover.

Integer Crossover

Integer crossover is actually quite an interesting case; integers can be recombined as permutations or as real-valued numbers.

An integer is already a permutation, just not at first glance: look at its binary representation. The individual bits are analogous to elements in a vector, and the whole bit-string is the vector; now it is a valid permutation. We can apply uniform crossover, $N$-point crossover, or Davis Crossover, just as we have seen.
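To make that concrete, here is a minimal sketch of uniform crossover over the bits of two integers (the function name and the fixed eight-bit width are my own choices):

from random import choice

def uniform_bit_crossover(x, y, bits=8):
    """Uniform crossover over the binary representations of two integers."""
    child = 0
    for i in range(bits):
        parent = choice([x, y])          # Pick a parent for bit i with equal probability.
        child |= (parent >> i & 1) << i  # Copy that parent's bit i into the child.
    return child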

An integer is also already a real-valued number, so we can treat it as such. Let's take a look at how to recombine it.

Real-Valued Crossover

Real-valued crossover is different from the methods we have seen before. We could turn the number into binary, but that would be a nightmare to deal with. Instead, we can exploit the arithmetic properties of real-valued numbers with a weighted arithmetic mean. A real-valued child $z$ can be generated from Parent 1 ($x$) and Parent 2 ($y$), with weight $0 \leq \alpha \leq 1$, as such:

$$z = \alpha \cdot x + (1 - \alpha) \cdot y$$

Now, if we want to cross over real-valued vectors from Parent 1 and Parent 2, we can apply the same rule to every element.

$$z_i = \alpha \cdot x_i + (1 - \alpha) \cdot y_i$$

This can perform better than the crossover methods discussed above, but that entirely depends on the use case.
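Since only the permutation operators are implemented below, here is a minimal, free-standing sketch of whole arithmetic recombination on two real-valued vectors (the function name is my own):

def arithmetic_crossover(parent_one, parent_two, α=0.5):
    """Whole arithmetic recombination: a weighted mean of every element.

    parent_one and parent_two are equal-length sequences of floats;
    α is the weight given to parent_one (0 ≤ α ≤ 1).
    """
    return [α * x + (1 - α) * y for x, y in zip(parent_one, parent_two)]

With $\alpha = 0.5$, the child is simply the elementwise average; for example, recombining [1.0, 4.0] with [3.0, 8.0] yields [2.0, 6.0].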

Implementing Permutation Recombination

As always, we will now tackle implementing the permutation crossovers discussed above. None of them are incredibly complicated, except possibly $N$-point crossover.

from random import choice, sample

class Individual:
    ...

    @staticmethod
    def __uniform_crossover(parent_one, parent_two):
        new_genotype = SAT(Individual.cnf_filename)

        for variable in parent_one.genotype.variables:
            gene = choice([parent_one.genotype[variable], parent_two.genotype[variable]])
            new_genotype[variable] = gene

        individual = Individual()
        individual.genotype = new_genotype
        return individual

    @staticmethod
    def __n_point_crossover(parent_one, parent_two, n):
        new_genotype = SAT(Individual.cnf_filename)
        variables = sorted(parent_one.genotype.variables)
        splits = [(i * len(variables) // (n + 1)) for i in range(1, n + 2)]

        start = 0
        for index, split in enumerate(splits):
            # Alternate which parent contributes each segment.
            parent = parent_one if index % 2 == 0 else parent_two

            for variable in variables[start:split]:
                new_genotype[variable] = parent.genotype[variable]

            start = split

        individual = Individual()
        individual.genotype = new_genotype

        return individual

    @staticmethod
    def __davis_crossover(parent_one, parent_two):
        new_genotype = SAT(Individual.cnf_filename)
        variables = sorted(parent_one.genotype.variables)
        split_one, split_two = sorted(sample(range(len(variables)), 2))

        for variable in variables[:split_one]:
            new_genotype[variable] = parent_two.genotype[variable]

        for variable in variables[split_one:split_two]:
            new_genotype[variable] = parent_one.genotype[variable]

        for variable in variables[split_two:]:
            new_genotype[variable] = parent_two.genotype[variable]

        individual = Individual()
        individual.genotype = new_genotype

        return individual

Recombination In General

By no means is recombination easy. It took evolution hundreds of thousands of years to formulate ours. The particular permutation operator to use is entirely dependent on the context of the problem; and most of the time, it is not obvious by any stretch. Sometimes, there might not even be an established crossover operator for a particular genotype.

Sometimes, you might have to get a little creative.


  1. List or array in programming terms. ↩︎

  2. Dictionary or map in programming terms. ↩︎

  3. View it as "an exercise left for the reader". ↩︎

Endgame Dynamics: Adaptive Restarts and Termination Conditions

Update: Previously, a system was introduced for detecting if an Evolutionary Algorithm was stuck at a local optimum. After extensive testing, this system was shown to be fragile. This post has been updated to showcase a more robust system.

Previously, our Evolutionary Algorithms had it pretty easy: there would be either one local optimum (like our Secret Message problem instance) or multiple valid local optima (like the 3-SAT problem instance). In the real world, we might not be so lucky.

Often, an Evolutionary Algorithm might encounter a local optimum within the search space that is not so easy to escape: the offspring generated will be in close proximity to the optimum, and mutation will not be enough to start exploring other parts of the search space.

To add to the frustration, there might not be enough time or patience to wait for the Evolutionary Algorithm to finish. We might have different criteria we are looking for, outside of just a fitness target.

We are going to tackle both of these issues.

Applying Termination Conditions

First, we will examine what criteria we want met before our Evolutionary Algorithm terminates. In general, there are six that are universal:

  1. Date and Time. After a specified date and time, terminate.
  2. Fitness Target. This is what we had before; terminate when any individual attains a certain fitness.
  3. Number of Fitness Evaluations. Every generation, every individual's fitness is evaluated (in our case, every generation $\mu + \lambda$ fitnesses are evaluated). Terminate after a specified number of fitness evaluations.
  4. Number of Generations. Just like the number of fitness evaluations, terminate after a specified generations.
  5. No Change In Average Fitness. This is a bit tricky. After specifying $N$, we look back $N$ generations to determine if the average fitness of the population has improved. We have to be careful in our programming; by preserving diversity, we almost always lose some average fitness.
  6. No Change In Best Fitness. Just like No Change In Average Fitness, but instead of taking the average fitness, we take the best.

Later, we will see how Conditions 5 & 6 come in handy for determining if we are stuck in a local optimum.

To make sure we are always given valid termination conditions, we will have a superclass that all termination conditions inherit from. From there, we will have a separate class for each of the conditions listed above.

class _TerminationCondition:
    """Base class for all termination conditions."""

    def __init__(self, value=None):
        # The value's meaning depends on the condition: a target fitness, a
        # date, a window of generations, etc. (e.g., NoChangeInBestFitness(250)).
        self.value = value

class FitnessTarget(_TerminationCondition):
    """Terminate after an individual reaches a particular fitness."""

class DateTarget(_TerminationCondition):
    """Terminate after a particular date and time."""

class NoChangeInAverageFitness(_TerminationCondition):
    """Terminate after a there has been no change in the average fitness for a period of time."""

class NoChangeInBestFitness(_TerminationCondition):
    """Terminate after a there has been no change in the best fitness for a period of time."""

class NumberOfFitnessEvaluations(_TerminationCondition):
    """Terminate after a particular number of fitness evaluations."""

class NumberOfGenerations(_TerminationCondition):
    """Terminate after a particular number of generations."""

Now, we need something that will keep track of all these conditions and tell us when we should terminate. And here's where we need to be careful.

First, we need to know when to terminate. We want to mix and match different conditions, depending on the use case. This begs the question:

Should the Evolutionary Algorithm terminate when one condition has been met, or all of them?

Generally, it makes more sense to terminate when any of the conditions have been met, as opposed to all of them. Suppose the two termination conditions are a date and a target fitness. It does not make sense to keep going after the target fitness is reached, and (if in a time crunch) it does not make sense to keep going after the specified date.

Second, how should we define no change in average/best fitness? These values can oscillate quite a bit, so we want to be conservative in our definition. One plausible solution is to take the average of the first quartile (the first 25% of values to enter the queue), and see if there is a single better fitness anywhere in the second, third, or fourth quartile (the last 75% to enter the queue). This way, even if there were very dominant individuals in the beginning, a single more dominant individual later will keep the Evolutionary Algorithm going.
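Here is a minimal sketch of that quartile rule, assuming the manager records one fitness value (best or average) per generation; the real implementation lives in the private methods below:

from statistics import mean

def has_stagnated(fitnesses, window):
    """Return True when nothing in the last 75% of the window beats the
    average of the first 25% (the quartile rule described above).
    """
    if len(fitnesses) < window:
        return False  # Not enough history to judge yet.

    recent = fitnesses[-window:]
    quartile = max(1, window // 4)
    baseline = mean(recent[:quartile])
    return not any(fitness > baseline for fitness in recent[quartile:])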

From this, we have everything we might need to keep track of our terminating conditions.

class TerminationManager:
    def __init__(self, termination_conditions, fitness_selector):
        assert isinstance(termination_conditions, list)
        assert all(issubclass(type(condition), _TerminationCondition) for condition in termination_conditions), "Termination condition is not valid"

        self.termination_conditions = termination_conditions
        self.__fitness_selector = fitness_selector

        self.__best_fitnesses = []
        self.__average_fitnesses = []

        self.__number_of_fitness_evaluations = 0
        self.__number_of_generations = 0

    def should_terminate(self):
        for condition in self.termination_conditions:
            if isinstance(condition, FitnessTarget) and self.__fitness_should_terminate():
                return True
            elif isinstance(condition, DateTarget) and self.__date_should_terminate():
                return True
            elif isinstance(condition, NoChangeInAverageFitness) and self.__average_fitness_should_terminate():
                return True
            elif isinstance(condition, NoChangeInBestFitness) and self.__best_fitness_should_terminate():
                return True
            elif isinstance(condition, NumberOfFitnessEvaluations) and self.__fitness_evaluations_should_terminate():
                return True
            elif isinstance(condition, NumberOfGenerations) and self.__generations_should_terminate():
                return True

        return False

    def reset(self):
        """Reset the best fitnesses, average fitnesses, number of generations, and number of fitness evaluations."""
        self.__best_fitnesses = []
        self.__average_fitnesses = []

        self.__number_of_fitness_evaluations = 0
        self.__number_of_generations = 0

    def __fitness_should_terminate(self):
        """Determine if should terminate based on the max fitness."""

    def __date_should_terminate(self):
        """Determine if should terminate based on the date."""

    def __average_fitness_should_terminate(self):
        """Determine if should terminate based on the average fitness for the last N generations."""

    def __best_fitness_should_terminate(self):
        """Determine if should terminate based on the average fitness for the last N generations."""

    def __fitness_evaluations_should_terminate(self):
        """Determine if should terminate based on the number of fitness evaluations."""

    def __generations_should_terminate(self):
        """Determine if should terminate based on the number of generations."""

And the changes to our Evolutionary Algorithm are minimal, too.

class EA:
    ...
    
    def search(self, termination_conditions):
        generation = 1
        self.population = Population(self.μ, self.λ)
        
        fitness_getter = lambda: [individual.fitness for individual in self.population.individuals]  # noqa
        termination_manager = TerminationManager(termination_conditions, fitness_getter)

        while not termination_manager.should_terminate():
            offspring = Population.generate_offspring(self.population)
            self.population.individuals += offspring.individuals
            self.population = Population.survival_selection(self.population)

            print("Generation #{}: {}".format(generation, self.population.fittest.fitness))
            generation += 1

        print("Result: {}".format(self.population.fittest.genotype))
        return self.population.fittest

However, we can still do better.

Generations Into Epochs

Before, the Evolutionary Algorithm framework we put in place was strictly a generational model. One generation led to the next, and there were no discontinuities. Now, let's turn our generational model into an epochal one.

We define the end of an epoch as any time our Evolutionary Algorithm encounters a local optimum. Once the end of an epoch is reached, the EA is reset, and the population of the previous epoch is saved. Upon approaching the end of the next epoch, the saved population is reintroduced into the current one; this way, more of the search space is covered.

How can we determine if we are at a local optimum?

We can't.

That does not mean we cannot have a heuristic for it. When there is little to no change in average/best fitness for a prolonged period of time, that typically means a local optimum has been reached. How long is a prolonged period of time? That's undetermined; it is another parameter we have to account for.

Note, if the Evolutionary Algorithm keeps producing fitter individuals but the average fitness remains the same, checking the average fitness alone would cut the run short. Likewise, if the best fitness remains the same but the average fitness closely approaches the best, checking the best fitness alone would do the same. Therefore, we should determine whether both the best fitness and the average fitness have stopped changing; only then should we start a new epoch.

Luckily, we already have something that will manage the average/best fitness for us.

class EA:
    ...

    def search(self, termination_conditions):
        epochs, generation, total_generations = 1, 1, 1
        self.population = Population(self.μ, self.λ)

        previous_epoch = []
        fitness_getter = lambda: [individual.fitness for individual in self.population.individuals]  # noqa

        termination_manager = TerminationManager(termination_conditions, fitness_getter)
        epoch_manager_best_fitness = TerminationManager([NoChangeInBestFitness(250)], fitness_getter)
        epoch_manager_average_fitness = TerminationManager([NoChangeInAverageFitness(250)], fitness_getter)

        while not termination_manager.should_terminate():
            if epoch_manager_best_fitness.should_terminate() and epoch_manager_average_fitness.should_terminate():
                if len(previous_epoch) > 0:
                    epoch_manager_best_fitness.reset()
                    epoch_manager_average_fitness.reset()

                    self.population.individuals += previous_epoch
                    previous_epoch = []
                else:
                    epoch_manager_best_fitness.reset()
                    epoch_manager_average_fitness.reset()

                    previous_epoch = self.population.individuals
                    self.population = Population(self.μ, self.λ)

                    generation = 0
                    epochs += 1

            self.population = Population.survival_selection(self.population)

            offspring = Population.generate_offspring(self.population)
            self.population.individuals += offspring.individuals

            self.__log(total_generations, epochs, generation)

            total_generations += 1
            generation += 1

        print("Result: {}".format(self.population.fittest.genotype))
        return self.population.fittest


    def __log(self, total_generations, epochs, generation):
        """Log the process of the Evolutionary Algorithm."""
        ...

Although considerably more complicated, this new Evolutionary Algorithm framework allows us to explore much more of a search space (without getting stuck).

Let's put it to the test.

A New 3-SAT Problem

We're going to take on a substantially harder 3-SAT instance: 1,000 clauses, 250 variables. To make it worse, the number of valid solutions is also lower. We will also include the following termination conditions (wired up in the sketch after the list):

  • A time limit of eight hours.
  • A fitness target of all clauses satisfied (a fitness of 100).
  • A limit of one million generations.
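As a sketch, wiring these conditions into the search might look like the following (assuming, as with NoChangeInBestFitness(250) above, that each condition's constructor takes a single target value):

from datetime import datetime, timedelta

conditions = [
    DateTarget(datetime.now() + timedelta(hours=8)),  # A time limit of eight hours.
    FitnessTarget(100),                               # All clauses satisfied.
    NumberOfGenerations(1_000_000),                   # One million generations.
]

ea = EA(μ=100, λ=15)
fittest = ea.search(conditions)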

So, how does our Evolutionary Algorithm fare?

Not well. After twenty epochs and thousands of generations, we do not find a solution. Fear not; in subsequent posts, we will work on optimizing our Genetic Algorithm to handle much larger cases more effectively.

The Reusability Of Evolutionary Algorithms: 3-SAT Solving With EAs

Let's propose an Evolutionary Algorithm experiment: say we already have a framework in place (like the Secret Message framework we previously implemented). How difficult would it be to completely switch problem instances?

First, we need another problem instance. Our previous problem instance was pretty straightforward: it had one local optimum. Let's take on a problem with many local optima, such as the 3-SAT problem.

The premise of 3-SAT is simple. From a global pool of variables ($x_1$, $x_2$, $\ldots$, $x_n$), we have a basic clause of three variables or-ed together (signified by $\vee$):

$$x_p \vee x_q \vee x_r$$

Then, we and (signified by $\wedge$) several clauses together:

$$\left(x_p \vee x_q \vee x_r\right) \wedge \left(x_s \vee x_t \vee x_u\right) \wedge \ldots \wedge \left(x_v \vee x_w \vee x_y\right)$$

The only stipulation is that any variable can be negated (signified by a $\neg$). So, supposing we want to negate $x_p$; $x_s$ and $x_u$; and $x_v$, $x_w$, and $x_y$; we can do the following:

$$\left(\neg x_p \vee x_q \vee x_r\right) \wedge \left(\neg x_s \vee x_t \vee \neg x_u\right) \wedge \ldots \wedge \left(\neg x_v \vee \neg x_w \vee \neg x_y\right)$$

Now, we simply have to assign all the variables such that all the clauses evaluate to true. It may sound simple, but it belongs to one of the hardest classes of problems in computer science; no known algorithm is guaranteed to produce the right answer efficiently.
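Before moving on, here is a minimal sketch of how an assignment can be checked against a formula; the clause encoding (pairs of variable and negation flag) is purely illustrative:

def satisfied(clauses, assignment):
    """Check that every clause contains at least one true literal."""
    return all(any(assignment[variable] != negated for variable, negated in clause)
               for clause in clauses)

# (¬p ∨ q ∨ r) ∧ (¬s ∨ t ∨ ¬u), with each literal written as (variable, negated).
clauses = [[("p", True), ("q", False), ("r", False)],
           [("s", True), ("t", False), ("u", True)]]

satisfied(clauses, {"p": False, "q": False, "r": False,
                    "s": True, "t": True, "u": False})  # True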

For a more visual approach, please reference the figure below. The goal is to make every inner node green by having at least one connected outer node be green. Note the green nodes have to account for negation as well.

sat-easy-solution

This sounds like a good problem candidate for an Evolutionary Algorithm[1].

The SAT Problem

We can skip over the problem-specific parts to focus more on the Evolutionary Algorithm aspect. Suppose we already have a well-defined SAT class that takes care of SAT-specific properties and methods, like so:

class SAT:
    def __init__(self, filename):
        """Create a SAT object that is read in from a CNF file."""
        ...

    @property
    def variables(self):
        """Get *all* the variables."""
        ...

    @property
    def total_clauses(self):
        """Set the total number of clauses."""
        ...

    @property
    def clauses_satisfied(self):
        """Get the number of satisfied clauses."""
        ...

    def __getitem__(self, key):
        """Get a particular variable (key)"""
        ...

    def __setitem__(self, key, value):
        """Set a variable (key) to value (True/False)"""
        ...

From this, we can create a new genotype for our Individual.

The New Genotype

The genotype structure is very similar to what we had before:

  • The genotype is the SAT problem we defined above.
  • Fitness is defined as the percentage of the total clauses satisfied.
  • Mutation is uniform: choose a percentage $p$ of alleles and flip their values.
  • Recombination is uniform: randomly assemble values from both parents.

Looking at the refactoring, not much has changed.

sat-secret-message-diff
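For concreteness, here is a sketch of what the refactored Individual might look like, assuming the SAT class above and the cnf_filename class attribute that the recombination post's crossover code also uses (uniform recombination is implemented there):

from random import choice

class Individual:
    cnf_filename = ""  # Set to the problem's CNF file before creating individuals.

    def __init__(self):
        """Initialize an individual with a random assignment of every variable."""
        self.genotype = SAT(Individual.cnf_filename)
        for variable in self.genotype.variables:
            self.genotype[variable] = choice([True, False])

    @property
    def fitness(self):
        """The percentage of the total clauses satisfied."""
        return 100 * self.genotype.clauses_satisfied / self.genotype.total_clauses

    @staticmethod
    def mutate(individual, rate):
        """Uniform mutation: flip the values of a percentage of the variables."""
        variables = list(individual.genotype.variables)
        for _ in range(int(rate * len(variables))):
            variable = choice(variables)
            individual.genotype[variable] = not individual.genotype[variable]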

The New EA Framework

Now that we have updated our Individual, the next thing to update would be the Evolutionary Algorithm framework, including:

  • The Population
  • The EA Itself

Except, we don't have to.

That's the beauty of Evolutionary Algorithms: they are incredibly adaptable. By swapping out the Individual, the rest of the evolutionary algorithm still works.

For our SAT problem, we updated some parameters to make the algorithm more efficient:

  • The mutation rate has been reduced to 5%.
  • The tournament size has been reduced to 15 individuals (out of $\mu = 100$).

The Result

So, let's try our Evolutionary Algorithm. Taking a SAT instance with 75 variables and 150 clauses, the search space is

$$2^{75} \approx 3.77 \times 10^{22}$$

Great, so roughly a thousand times the number of grains of sand on Earth. Easy. So, can our EA do it?

After roughly 100 iterations, yes. See the visualization below.

sat-result

Marvelous, our EA managed to find a solution after only 100 iterations in a giant search space. And all we had to do was swap out one class.

The Source Code

All source code can be found here.


  1. In reality, it's not a great candidate for an evolutionary algorithm. The gradient is sometimes murky, because flipping one variable's value can drastically decrease or increase the fitness. Also, there are several great heuristics for solving the SAT problem. ↩︎

Parameter Tuning: Pitfalls

One of the big takeaways from my introduction to Evolutionary Algorithms was the sheer number of numerical parameters:

  • $\mu$ and $\lambda$
  • Mutation Rate
  • $k$ in k-Tournament Selection

Not only this, but also the sheer number of structural choices:

  • The genotype
  • The mutator operator
  • The survivor selection algorithm

And one might be wondering, what is the best operator for $x$ or $y$? Let’s look at an example.

Recall the problem from the previous discussion:

We are going to consider a sample problem: a deciphering program. The premise of the problem is as follows.

  • There is a string of characters (without spaces) hidden away that, after set, is inaccessible.
  • There are two ways to retrieve data about the hidden message:
    1. Get the length of the string.
    2. Given a string, the problem will output how many characters match within the two strings.

Disregarding the other technical details, let us focus on the survivor selection. We used $k$-tournament selection (with $k = 50$). But, let’s run a little experiment:

Run the Evolutionary Algorithm with $k$ ranging from 5 (basically the bare minimum) to 100 ($\mu$, the population size), and see how fast the algorithm terminates. Do this 1,000 times for each $k$ to get accurate results.
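As a sketch, the experiment harness might look like this; it assumes search is modified to return the number of generations it ran and to accept the tournament size $k$ (neither is true of the code as previously written):

from statistics import mean

average_generations = {}
for k in range(5, 101, 5):
    # 1,000 independent runs per tournament size, averaging the generation counts.
    runs = [EA(μ=100, λ=15).search(tournament_size=k) for _ in range(1000)]
    average_generations[k] = mean(runs)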

The result?

Average Generations vs. k-Tournament

This makes sense. Our problem has one local optimum: the actual solution. So we do not need a lot of genetic diversity; we need aggressive selective pressure[1] to reach the top quickly.

As $k$ gets closer to $\mu$, the average termination time decreases. What does this tell us? We picked the wrong survivor selection algorithm.

With $k = \mu$, we no longer have $k$-tournament selection; we have truncation selection (where only the most fit individuals survive). And that's the interesting part about Evolutionary Algorithms: there are no objective, best parameters.

How do we alleviate this? Trial and error. There is no telling when one parameter is going to perform better than another.

After a couple of trial runs, with objectives in mind (average terminating fitness, best terminating fitness, time to termination), the answer might surprise (and delight) you.


  1. How elitist the survivor selection algorithm is, picking the strongest individuals more often. ↩︎

Evolutionary Algorithms: An Evolutionary Approach To Problem Solving

Arguably the first (and most successful) problem solver we know of is Evolution. Humans (along with all other species) share a common problem: becoming the best at surviving in our environment.

Just as Darwin's finches evolved their beaks to survive in different parts of the Galápagos Islands, we, too, evolved to survive in different parts of the world. And we can program a computer to do the same.

Evolution inspired a whole generation of problem solving, commonly known as Evolutionary Algorithms (EAs). EAs are known for finding (or approximating) solutions to borderline unsolvable problems. And just as the mechanics of evolution are not that difficult, neither are the mechanics of EAs.

Today, we will build an Evolutionary Algorithm from the ground up.

An Introduction

Before we proceed with implementation or an in-depth discussion, we first wish to tackle three questions: what is an Evolutionary Algorithm, what does an Evolutionary Algorithm look like, and what problems can Evolutionary Algorithms solve?

What Is An Evolutionary Algorithm?

An Evolutionary Algorithm is a generic, population-based optimization algorithm that generates solutions via biological operators. That is quite a mouthful, so let’s break it up.

Population-based. All Evolutionary Algorithms start by creating a population of random individuals. These individuals are just like individuals in nature: there is a genotype (the genes that make up an individual) and a phenotype (the result of the genotype interacting with the environment). In EAs, they would be defined as follows:

  • Genotype The representation of the solution.
  • Phenotype The solution itself.

Because it’s a little confusing to think of it this way, it’s often better to think about it in terms of a genotype space and a phenotype space.

  • Genotype Space The space of all possible combinations of genes.
  • Phenotype Space The space of all possible solutions.

Don’t worry if this doesn’t make sense, we’ll touch on it later.

Optimization Algorithm. Evolution is an optimization algorithm. Given an environment, it will try to optimize an individual for that environment with some fitness metric. Evolutionary Algorithms operate the same way.

Given an individual, an EA will try to optimize it. We do not use a literal environment, but we still use a fitness metric. The fitness metric is simply a function that takes in the genotype of the individual and outputs a value proportional to how good a solution it is.

Because fitness metrics are proportional to how good a solution is, this implies a very important condition for our phenotype space: it’s a gradient.

Biological operators. Evolutionary Algorithms are inspired by biology and evolution. Just as biology has operators to generate new individuals, so do Evolutionary Algorithms. More on that later.

Generic. Evolutionary Algorithms are generic. Once a framework has been introduced, it can be reused on any individual (provided the individual has the appropriate crossover and mutator operators).

What Does An Evolutionary Algorithm Look Like?

The pseudocode for an Evolutionary Algorithm is what we might expect evolution itself to follow: generate a random population, generate offspring, and let survival of the fittest do its job. And so it does:

BEGIN
    INITIALISE population with random solutions

    WHILE ( TERMINATION CONDITION is not satisfied ) DO
        SELECT parents
        RECOMBINE pairs of parents
        MUTATE the resulting offspring
        EVALUATE new candidates
        SELECT individuals for the next generation
    OD 
END

What Problems Can Evolutionary Algorithms Solve?

Evolutionary Algorithms can solve any problem that has a genotype that can fit within our framework:

  • The genotype can have a crossover operator.
  • The genotype can have a mutator operator.
  • The genotype can map to a definite fitness function.

Again, the fitness should be proportional to how good a solution is. If the fitness function $f(x)$ is bounded by $0 \leq f(x) \leq 100$, 0 should be the worst solution or no solution, and 100 should be the best solution (or vice versa, for inverted fitnesses).

Implementing An Evolutionary Algorithm

We will be implementing a special class of Evolutionary Algorithm, referred to as a (μ + λ)-Evolution Strategy. The name is not important, but μ and λ will be; we will come back to them shortly.

For the purposes of our discussion, we are going to consider a sample problem: a deciphering program. The premise of the problem is as follows.

  • There is a string of characters (without spaces) hidden away that, after set, is inaccessible.
  • There are two ways to retrieve data about the hidden message:
    1. Get the length of the string.
    2. Given a string, the problem will output how many characters match within the two strings.

The secret message would look as follows:

class SecretMessage:
    def __init__(self, message):
        """Initialize a Secret Message object.

        :message (str): The secret message to hide.
        """
        self.__message = message

    def letters_match(self, message):
        """Determine how many characters match the secret message.

        Note:
            The message length and the secret message length must be the same length (accessed via the length property).


        :message (str): The message to compare the secret message to.
        :returns (int): The number of characters matched.
        """
        return sum(self.__message[char] == message[char] for char in range(len(message)))

    @property
    def length(self):
        """Get the length of the secret message.

        :returns (int): The length of the secret message.
        """
        return len(self.__message)
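For instance:

secret = SecretMessage("FreneticArray")

secret.length                          # 13
secret.letters_match("FreneticArrXX")  # 11 --- all but the last two characters match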

Not too complicated.

An Individual

In Evolutionary Algorithms, an individual is simply a candidate solution. An individual has a genotype (the representation) and operators (Crossover, Mutation, and Fitness) that act on the genotype. We will discuss them more extensively below.

The Genotype

As aforementioned, a genotype is the representation of an individual. Just as DNA does for humans, knowing the genotype gives you all the information one might need to determine the characteristics of an individual.

Because a genotype must be acted upon by crossover and mutation operators, there are a few common choices for genotypes:

  • Vectors[1]. A vector is common because crossover is trivial: take elements from the two genotypes to create a new individual. Mutation is also trivial: pick random elements in the vector, and mutate them. Often, for a complex enough individual, a vector of bits is used[2].
  • Matrices. Same as a vector, but with multiple dimensions.
  • Floating-Point or Real Numbers. This one is tricky, but commonly used. There are a plethora of ways to recombine two numbers: the average of the two numbers, bit manipulation, or crossover on a binary encoding. The same can be said for mutation: adding a random value to the number, bit manipulation with a random value, or bit flipping in a binary encoding. It should be noted that some of these introduce biases, and one should account for them.
  • Trees. Some problems can be easily broken down into trees (like an entire programming language can be broken down into a parse tree). Crossover is trivial: swap a random subtree with another. Mutation, however, is often not used; this is because the crossover itself acts as a mutation operator.

Next, our genotype must be initialized to some random values. Our initial population is seeded with said randomly-generated individuals, and with a good distribution, they will cover a large portion of the genotype space.

Keeping all this in mind, let us think about the representation of our problem. A string is nothing more than a vector of characters, so using the first bullet point, we are given our operators pretty easily.

Here’s what our genotype would look like:

from random import choice
from string import ascii_letters

class Individual:
    message = SecretMessage("")

    def __init__(self):
        """Initialize an Individual object.

        Note:
            Individual.message should be initialized first.
        """
        length = Individual.message.length
        characters = [choice(ascii_letters) for _ in range(length)]
        self.genotype = "".join(characters)

Crossover, Mutation, and Fitness

As aforementioned, to fit within an Evolutionary Algorithm framework, a genotype must be created with crossover, mutator, and fitness operators. Although we have covered said operators, we will formalize them here.

  • Crossover. A crossover operator simply takes in two genotypes and produces a genotype that is a mixture of the two. The crossover can be uniform (random elements from both genotypes are taken), 1-point (pick a pivot position; the left half comes from one genotype, and the right half from the other), or $N$-point (the same as 1-point, but with multiple pivot positions).
  • Mutator. A mutator operator takes random values within the genotype and changes them to new random values. There is a mutation rate associated with all genotypes; we call it $p$. $p$ is bounded such that $0 \leq p \leq 1.0$, where $p$ is the percentage of the genotype that gets mutated. Be careful to limit this value, however; too high a $p$ can reduce the search to a purely random one.
  • Fitness. The fitness operator simply takes in a genotype and outputs a numerical value proportional to how good a candidate solution is. Fitness has no limits, and can be inverted (i.e., a smaller fitness is better).

For the purposes of our program, we are going to have the following operators: crossover will be uniform, mutation will be a fixed number of mutating characters, and fitness will be a percentage of the characters matched.

class Individual:
    ...
    @property
    def fitness(self):
        """Get the fitness of an individual. This is done via a percentage of how many characters
        in the genotype match the actual message.

        :return (float): The fitness of an individual.
        """
        return 100 * Individual.message.letters_match(self.genotype) / Individual.message.length

    @staticmethod
    def mutate(individual, rate):
        """Mutation operator --- mutate an individual with a specified rate.
        This is done via a uniform random mutation, by selecting random genes and replacing them with random characters.

        Note:
            rate should be a floating point number (0.0 < rate < 1.0).

        :individual (Individual): The individual to mutate.
        :rate (float): The rate at which to mutate the individual's genotype.
        """
        number_of_characters_to_mutate = int(rate * individual.message.length)
        genotype_list = list(individual.genotype)  # Strings are immutable, we have to use a list

        for _ in range(number_of_characters_to_mutate):
            character_to_mutate = choice(range(individual.message.length))
            genotype_list[character_to_mutate] = choice(ascii_letters)

        individual.genotype = "".join(genotype_list)

    @staticmethod
    def recombine(parent_one, parent_two):
        """Recombination operator --- combine two individuals to generate an offspring.

        :parent_one (Individual): The first parent.
        :parent_two (Individual): The second parent.
        :returns (Individual): The combination of the two parents (the offspring).
        """
        new_genotype = ""

        for gene_one, gene_two in zip(parent_one.genotype, parent_two.genotype):
            gene = choice([gene_one, gene_two])
            new_genotype += gene

        individual = Individual()
        individual.genotype = new_genotype
        return individual

A Population

Now that we have an Individual, we must create a Population. The Population holds the candidate solutions, creates new offspring, and determines which individuals propagate into further generations.

μ And λ

Remember when we mentioned that μ and λ would be important in our Evolutionary Algorithm? Well, here is where they come into play. μ and λ are defined as follows:

  • μ: The population size.
  • λ: The number of offspring to create.

Although these are simple constants, they can have a drastic impact on an Evolutionary Algorithm. For example, a Population size of 1,000 might find a solution in far fewer generations than a Population of 100, but each generation will take longer to process. It has been experimentally shown that a good proportion between the two is:

$$
μ / λ \approx 6
$$

However, this is tested for a large class of problems, and a particular Evolutionary Algorithm could benefit from having different proportions.

For our purposes, we will pick $μ = 100$ and $λ = 15$, a proportion just a little over 6.

class Population:
    def __init__(self, μ, λ):
        """Initialize a population of individuals.

        :μ (int): The population size.
        :λ (int): The offspring size.
        """
        self.μ, self.λ = μ, λ

        self.individuals = [Individual() for _ in range(self.μ)]

    @property
    def fittest(self):
        """Get the fittest individual in the population (used by the EA's search loop)."""
        return max(self.individuals, key=lambda individual: individual.fitness)

Generating Offspring

Generating offspring is trivial with the framework we imposed on an Individual: pick two random parents, perform a crossover between the two to create a child, mutate said child, and introduce the child back into the population pool.

In code, it would look as follows:

class Population:
    ...

    @staticmethod
    def random_parents(population):
        """Get two random parents from a population.

        :return (Individual, Individual): Two random parents.

        """
        split = choice(range(1, len(population.individuals)))
        return choice(population.individuals[:split]), choice(population.individuals[split:])

    @staticmethod
    def generate_offspring(population):
        """Generate offspring from a Population by picking two random parents, recombining them,
        mutating the child, and adding it to the offspring. The number of offspring is determined by
        λ.

        :population (Population): The population to generate the offspring from.
        :returns (Population): The offspring (of size λ).
        """
        offspring = Population(population.μ, population.λ)
        offspring.individuals = []  # Discard the μ random individuals the constructor created.

        for _ in range(population.λ):
            parent_one, parent_two = Population.random_parents(population)

            child = Individual.recombine(parent_one, parent_two)
            child.mutate(child, 0.15)

            offspring.individuals += [child]

        return offspring

Survivor Selection

The last core part of an Evolutionary Algorithm is survivor selection. This puts selective pressure on our candidate solutions, and it is what ultimately leads to fitter solutions.

Survivor selection picks μ Individuals that would be the best to propagate into the next generation; however, it’s not as easy as picking the fittest μ Individuals. Always picking the μ best Individuals leads to premature convergence, a way of saying we “got a good solution, but not the best solution”. The Evolutionary Algorithm simply did not explore the search space enough to find other, fitter solutions.

There are a number of ways to run a survival selection, one of the most popular being $k$-tournament selection. $k$-tournament selection picks $k$ random Individuals from the pool, and selects the fittest Individual from the tournament. It does this μ times, to get the full, new Population. The higher $k$, the higher the selective pressure; however, also the higher chance of premature convergence. The lower $k$, the less of a chance of premature convergence, but also the more the Evolutionary Algorithm starts just randomly searching.

At the bounds, $k = 1$ will always be just a random search, and $k = μ$ will always be choosing the best μ individuals.

We choose $k = 25$, giving less fit solutions a chance to win, but still focusing on the more fit solutions.

from copy import deepcopy
from random import sample

class Population:
    ...

    @staticmethod
    def survival_selection(population):
        """Determine from the population what individuals should not be killed. This is done via
        k-tournament selection: generate a tournament of k random individuals, pick the fittest
        individual, add it to the survivors, and remove it from the original population.

        Note:
            The population should be of size μ + λ. The resultant population will be of size μ.

        :population (Population): The population to run survival selection on. Must be of size μ + λ.
        :returns (Population): The resultant population after killing off unfit individuals. Will be
        of size μ.
        """
        new_population = Population(population.μ, population.λ)
        new_population.individuals = []

        individuals = deepcopy(population.individuals)

        for _ in range(population.μ):
            tournament = sample(individuals, min(25, len(individuals)))  # Cap the tournament once fewer than 25 remain.
            victor = max(tournament, key=lambda individual: individual.fitness)

            new_population.individuals += [victor]
            individuals.remove(victor)

        return new_population

The Evolutionary Algorithm

Our Evolutionary Algorithm will exactly resemble the pseudocode from the introduction. Because the Individual and Population frameworks are established, it is almost a copy-paste.

class EA:
    def __init__(self, μ, λ):
        """Initialize an EA.

        :μ (int): The population size.
        :λ (int): The offspring size.
        """
        self.μ, self.λ = μ, λ
        
    def search(self):
        """Run the genetic algorithm until the fittness reaches 100%.

        :returns: The fittest individual.
        """
        generation = 1
        self.population = Population(self.μ, self.λ)

        while self.population.fittest.fitness < 100.0:
            offspring = Population.generate_offspring(self.population)
            self.population.individuals += offspring.individuals
            self.population = Population.survival_selection(self.population)

            generation += 1

        return self.population.fittest  # The docstring promises the fittest individual.

Running An Evolutionary Algorithm

Below, we have an instance of the evolutionary algorithm searching for a solution:

Now, looking at the string “FreneticArray”: it has 13 characters, and seeing as there are 26 letters in the alphabet (doubled for lowercase and uppercase letters), our search space was:

$$
\left(2 \cdot 26\right)^{13} \approx 2.0 \times 10^{22}
$$

Huge.

On average, our EA took 29 generations to finish[3]. As each generation had at most 115 individuals, we can conclude that on average we had to generate:

$$
29 \cdot 115 = 3335 \ \text{solutions}
$$

Much smaller than $2.0 \times 10^{22}$. That is what Evolutionary Algorithms are good for: turning a large search space into a much smaller one.

Although there are much more advanced topics in Evolutionary Algorithms, this is enough to start implementing your own. With just the simple operators listed above, a genotype, and a search space that has a gradient, many problems can be solved with an Evolutionary Algorithm.

The Source Code

All source code can be found here.


  1. A mathematical vector, common to linear algebra. Just a collection of related items, often referred to as an array in computer science. ↩︎

  2. Hey, if it’s powerful enough to run modern computers, surely it can be adequate enough for a genotype representation. ↩︎

  3. Per 1,000 runs. ↩︎