Roulette Wheel Selection in Python 🐍

Roulette wheel selection (also known as fitness proportionate selection) is an operator used by genetic algorithms to select potentially useful solutions for recombination.
Each individual's selection probability is its fitness divided by the sum of the fitnesses of the whole population. The following is the formula for it:
p_i = f_i / Σ_j f_j

where p_i is the selection probability of chromosome i, equal to its fitness f_i divided by the sum of all fitnesses. Think of roulette wheel selection as a pie chart: each individual has a fitness value, and the sum of all fitnesses is the whole circle. So the probability of being selected as a potential mate depends on an individual's fitness relative to the rest of the population. The following illustration shows that the probability of selecting each individual depends on how much space it takes up in the pie.
[Figure: roulette wheel selection pie chart example]
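As a small worked example (with made-up fitness values), here is how the selection probabilities come out for a population of four individuals:

```python
# Made-up fitnesses for a population of four individuals.
fitnesses = [3, 2, 4, 1]
total = sum(fitnesses)                  # 10
probs = [f / total for f in fitnesses]  # [0.3, 0.2, 0.4, 0.1]
# The fittest individual (fitness 4) occupies 40% of the wheel,
# and the probabilities sum to 1.
```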

Roulette wheel selection in genetic algorithms, in Python

An example of genetic algorithm roulette wheel selection in Python: a simple, runnable implementation rather than pseudocode.
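A minimal sketch of the operator (function and variable names are my own, not from the original post): spin the wheel by drawing a random point on [0, total fitness) and walking the cumulative fitness until we pass it.

```python
import random

def roulette_wheel_selection(population, fitnesses):
    """Select one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    # Spin the wheel: pick a point on [0, total) and accumulate
    # fitness until the running sum passes it.
    pick = random.uniform(0, total)
    current = 0.0
    for individual, fitness in zip(population, fitnesses):
        current += fitness
        if current >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

# Made-up population: "c" holds 40% of the total fitness, so over
# many spins it should be selected most often and "d" least often.
population = ["a", "b", "c", "d"]
fitnesses = [3, 2, 4, 1]
counts = {ind: 0 for ind in population}
for _ in range(10_000):
    counts[roulette_wheel_selection(population, fitnesses)] += 1
```

The linear walk is O(N) per selection; for large populations you could precompute the cumulative sums once per generation and binary-search them instead.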

If you want to apply this genetic algorithm operator to a minimization problem, all you have to do is invert the probabilities of the function: roulette wheel selection for minimization updates each probability to 1 − prob and renormalizes, so that the inverted values again sum to 1. Below is the Python example:
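A sketch of the minimization variant under that assumption (names are mine): invert each probability to 1 − p, then divide by the sum of the inverted values (which is N − 1) so the distribution is valid again, addressing the sum-to-one objection raised in the comments.

```python
import random

def roulette_wheel_selection_min(population, fitnesses):
    """Roulette selection for minimization: invert each probability
    to 1 - p, then renormalize (the inverted values sum to N - 1)."""
    total = sum(fitnesses)
    probs = [f / total for f in fitnesses]
    inverted = [1 - p for p in probs]
    norm = sum(inverted)  # equals len(population) - 1
    min_probs = [q / norm for q in inverted]
    # Spin the wheel on the inverted, renormalized distribution.
    pick = random.random()
    current = 0.0
    for individual, p in zip(population, min_probs):
        current += p
        if current >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

# The lowest-fitness individual "d" now has the largest slice:
# inverted probs [0.7, 0.8, 0.6, 0.9] / 3 = [0.233..., 0.266..., 0.2, 0.3].
population = ["a", "b", "c", "d"]
fitnesses = [3, 2, 4, 1]
counts = {ind: 0 for ind in population}
for _ in range(10_000):
    counts[roulette_wheel_selection_min(population, fitnesses)] += 1
```

Note that for large populations each 1 − p is close to 1, so this inversion flattens toward a uniform distribution; rank-based selection or weighting by 1/f are common alternatives, but the 1 − p transform is the one described in this post.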

I hope that was clear; if not, leave a comment and I'll do my best to clarify.


5 Replies to “Roulette Wheel Selection in Python 🐍”

  1. Bayesian says:

it works perfectly when the fitness function is a max. But what if the fitness function is seeking a min? Using the sum in the denominator will produce the opposite. (Possible solution: make the probability a function of the distance between an individual and a theoretical (min) optimum, divided by the sum of those distances over every individual.)

    • Roc Reguant says:

The probability function is based on probabilities that add up to 1 (100%); therefore, by doing one minus the probability you can select for the opposite (the minimum). I hope that makes sense 🙂

      • Kyle says:

It doesn’t work though. The probabilities don’t add up to 1. If the distribution for maximization is [0.3, 0.2, 0.4, 0.1], then your minimization distribution will be [0.7, 0.8, 0.6, 0.9], which doesn’t add up to 1.

  2. Mufassir says:

Due to the computer’s finite floating-point precision, the sum of the probabilities does not come out to exactly 1.

    • Roc Reguant says:

True, but that’s a physical limitation, i.e. hardware, that all software has and handles in different ways. I’m not sure you’ll be able to notice this imprecision in practice. Could you show us the error in a reproducible manner, or would you like to show us an alternative solution?
