
Genetic Algorithms

Presented by
Md. Hafiz Ahamed
Lecturer
Dept. of Mechatronics Engineering
Rajshahi University of Engineering & Technology
Fundamentals of Genetic Algorithms
 Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics.
 GAs are a part of Evolutionary Computing, a rapidly growing area of artificial intelligence. GAs are inspired by Darwin's theory of evolution - "survival of the fittest".
 GAs represent an intelligent exploitation of a random search used to solve optimization problems.
 GAs, although randomized, exploit historical information to direct the search into the region of better performance within the search space.
 In nature, competition among individuals for scanty resources results in the fittest individuals dominating over the weaker ones.
Why Genetic Algorithms ?
 GAs are more robust than conventional AI search techniques.
 Unlike older AI systems, GAs do not break easily even if the inputs change slightly, or in the presence of reasonable noise.
 While searching a large state-space, a multi-modal state-space, or an n-dimensional surface, genetic algorithms offer significant benefits over many typical search optimization techniques such as linear programming, heuristic search, depth-first, and breadth-first search.
Optimization
Optimization is a process that finds a best, or optimal, solution for a problem. Optimization problems are centered around three factors :
 An objective function which is to be minimized or maximized; Examples :
‡ In manufacturing, we want to maximize the profit or minimize the cost.
‡ In designing an automobile panel, we want to maximize the strength.
 A set of unknowns or variables that affect the objective function; Examples :
‡ In manufacturing, the variables are the amount of resources used or the time spent.
‡ In the panel design problem, the variables are the shape and dimensions of the panel.
Optimization
 A set of constraints that allow the unknowns to take on certain values but exclude others; Examples :
‡ In manufacturing, one constraint is that all "time" variables must be non-negative.
‡ In the panel design, we want to limit the weight and put constraints on its shape.
 An optimization problem is defined as : finding values of the variables that minimize or maximize the objective function while satisfying the constraints.
Search Optimization Algorithms
Fig. Taxonomy of search optimization techniques :
 Calculus-Based Techniques
- Indirect method
- Direct method : Fibonacci, Newton
 Guided Random Search Techniques
- Tabu Search
- Hill Climbing
- Simulated Annealing
- Evolutionary Algorithms : Genetic Programming, Genetic Algorithms
 Enumerative Techniques
- Uninformed Search
- Informed Search
Evolutionary Algorithms (EAs)
Evolutionary Algorithm (EA) is a subset of Evolutionary Computation (EC), which is a subfield of Artificial Intelligence (AI).
 Evolutionary Computation (EC) is a general term for several computational techniques. Evolutionary Computation represents a powerful search and optimization paradigm influenced by the biological mechanisms of evolution : natural selection and genetics.
 Evolutionary Algorithms (EAs) refers to evolutionary computational models using randomness and genetics-inspired operations. EAs involve selection, recombination, random variation and competition of the individuals in a population of adequately represented potential solutions. The candidate solutions are referred to as chromosomes or individuals.
Genetic Algorithms (GAs) - Basic Concepts
Genetic algorithms (GAs) are the main paradigm of evolutionary computing. GAs are inspired by Darwin's theory of evolution – the "survival of the fittest". In nature, competition among individuals for scanty resources results in the fittest individuals dominating over the weaker ones.
 GAs are ways of solving problems by mimicking processes nature uses, i.e., Selection, Crossover, Mutation and Accepting, to evolve a solution to a problem.
 GAs are adaptive heuristic search based on the evolutionary ideas of natural selection and genetics.
 GAs are intelligent exploitation of random search used in optimization problems.
 GAs, although randomized, exploit historical information to direct the search into the region of better performance within the search space.
Pseudo-Code of the Evolutionary Process
Begin
  INITIALIZE population with random candidate solutions;
  EVALUATE each candidate;
  REPEAT UNTIL (termination condition) is satisfied DO
    SELECT parents;
    RECOMBINE pairs of parents;
    MUTATE the resulting offspring;
    SELECT individuals for the next generation;
  OD
End
Fig. General scheme of the evolutionary process : Initialization → Population → Parents (selection) → Recombination → Mutation → Offspring → Survivor selection → Termination.
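The pseudo-code above can be sketched in Python. This is an illustrative skeleton, not from the slides: the toy fitness (OneMax, counting 1-bits), binary tournament parent selection, and all parameter values are assumptions.

```python
import random

def evolve(fitness, length=10, pop_size=20, generations=50,
           crossover_rate=0.7, mutation_rate=0.01):
    """Minimal evolutionary loop following the slide's pseudo-code."""
    # INITIALIZE population with random candidate solutions (bit lists).
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):                       # termination condition
        # SELECT parents : binary tournament on fitness.
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = pick(), pick()
            # RECOMBINE pairs of parents (one-point crossover).
            if random.random() < crossover_rate:
                cut = random.randint(1, length - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            # MUTATE the resulting offspring (bit flip).
            for child in (c1, c2):
                for i in range(length):
                    if random.random() < mutation_rate:
                        child[i] = 1 - child[i]
                offspring.append(child)
        # SELECT individuals for the next generation (wholesale replacement).
        pop = offspring[:pop_size]
    return max(pop, key=fitness)

best = evolve(sum)  # maximize the number of 1-bits (OneMax)
print(best, sum(best))
```

Generational replacement is used here for brevity; real EAs often keep the best individual across generations (elitism).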
The Search Space
In solving problems, some solution will be the best among others. The space of all feasible solutions (among which the desired solution resides) is called the search space (also called state space).
 Each point in the search space represents one possible solution.
 Each possible solution can be "marked" by its value (or fitness) for the problem.
 The GA looks for the best solution among a number of possible solutions, each represented by one point in the search space.
 Looking for a solution is then equivalent to looking for some extreme value (minimum or maximum) in the search space.
 At times the search space may be well defined, but usually only a few points in the search space are known.
Basic Terminology
 Chromosome : a set of genes; a chromosome contains the solution in the form of genes.
 Gene : a part of a chromosome; a gene contains a part of the solution. e.g. 16743 is a chromosome and 1, 6, 7, 4 and 3 are its genes.
 Individual : same as chromosome.
 Population : the number of individuals present, each with the same length of chromosome.
 Fitness : the value assigned to an individual based on how far or close the individual is from the solution; the greater the fitness value, the better the solution it contains.
 Fitness function : a function that assigns a fitness value to an individual. It is problem specific.
 Breeding : taking two fit individuals and intermingling their chromosomes to create two new individuals.
 Mutation : changing a random gene in an individual.
 Selection : selecting individuals for creating the next generation.
Outline of the Basic Genetic Algorithm
1. [Start] Generate a random population of n chromosomes (i.e. suitable solutions for the problem).
2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.
3. [New population] Create a new population by repeating the following steps until the new population is complete.
(a) [Selection] Select two parent chromosomes from the population according to their fitness (the better the fitness, the bigger the chance to be selected).
(b) [Crossover] With a crossover probability, cross over the parents to form new offspring (children). If no crossover is performed, the offspring are exact copies of the parents.
(c) [Mutation] With a mutation probability, mutate the new offspring at each locus (position in the chromosome).
(d) [Accepting] Place the new offspring in the new population.
4. [Replace] Use the newly generated population for a further run of the algorithm.
5. [Test] If the end condition is satisfied, stop, and return the best solution in the current population.
6. [Loop] Go to step 2.
Flow Chart for Genetic Programming
Fig. Flow of a genetic search (in outline) :
1. Start.
2. Genesis : seed the population by generating N individuals.
3. Scoring : assign a fitness to each individual.
4. Natural selection : select two individuals (Parent 1, Parent 2).
5. Reproduction / Recombination : use the crossover operator to produce offspring.
6. Scoring : assign a fitness to each offspring; repeat until crossover is finished.
7. Mutation : select an offspring and, with some probability, apply the mutation operator to produce a mutated offspring; score the mutated offspring; repeat until mutation is finished.
8. Survival of the fittest : apply the replacement operator to incorporate the new individuals into the population.
9. If the termination condition is not met, go back to natural selection (step 4); otherwise finish.
Operators of Genetic Algorithm
Genetic operators used in genetic algorithms maintain genetic
diversity. Genetic diversity or variation is a necessity for the process of
evolution.
Genetic operators are analogous to those which occur in the natural world:
 Reproduction (or Selection) ;
 Crossover (or Recombination); and
 Mutation.
In addition to these operators, there are some parameters of GA. One
important parameter is Population size.
 Population size says how many chromosomes are in the population (in one generation).
 If there are only a few chromosomes, then the GA has only a few possibilities to perform crossover, and only a small part of the search space is explored.
 If there are too many chromosomes, the GA slows down.
 Research shows that after some limit it is not useful to increase population size, because it does not help in solving the problem faster. The appropriate population size depends on the type of encoding and the problem.
Reproduction, or Selection
Reproduction is usually the first operator applied to a population. From the population, chromosomes are selected to be parents for crossover, to produce offspring.
The problem is how to select these chromosomes.
According to Darwin's evolution theory of "survival of the fittest", the best ones should survive and create new offspring.
 The Reproduction operators are also called Selection operators.
 Selection means extracting a subset of genes from an existing population, according to some definition of quality. Every gene has a meaning, so one can derive from the gene a kind of quality measurement called the fitness function. Following this quality (fitness value), selection can be performed.
 The fitness function quantifies the optimality of a solution (chromosome) so that a particular solution may be ranked against all the other solutions. The function depicts the closeness of a given 'solution' to the desired result.
Reproduction, or Selection
Many reproduction operators exist and they all essentially do the same thing : they pick from the current population the strings of above-average fitness and insert multiple copies of them in the mating pool in a probabilistic manner.
The most commonly used methods of selecting chromosomes as parents for crossover are :
 Roulette wheel selection,
 Boltzmann selection,
 Rank selection,
 Steady state selection,
 Tournament selection.
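Tournament selection, the last method in the list, is the simplest to sketch in code. The helper below is an illustrative implementation (the function name and parameters are assumptions, not from the slides): draw k individuals at random and keep the fittest.

```python
import random

def tournament_select(population, fitness, k=2):
    """Pick k individuals at random and return the fittest of them."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

# Usage : select a parent from a small integer population with f(x) = x * x.
pop = [13, 24, 8, 19]
parent = tournament_select(pop, lambda x: x * x, k=3)
print(parent)  # one of the fitter members of the sampled trio
```

Raising k increases selection pressure: with k equal to the population size, the best individual always wins.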
Example of Selection
Suppose the task is to maximize the function f(x) = x² with x in the integer interval [0, 31], i.e., x = 0, 1, . . . , 30, 31.
1. The first step is encoding of chromosomes; use binary representation for integers; 5 bits are used to represent integers up to 31.
2. Assume that the population size is 4.
3. Generate the initial population at random. They are chromosomes or genotypes; e.g., 01101, 11000, 01000, 10011.
4. Calculate the fitness value for each individual.
(a) Decode the individual into an integer (called a phenotype) :
01101 → 13; 11000 → 24; 01000 → 8; 10011 → 19.
(b) Evaluate the fitness according to f(x) = x² :
13 → 169; 24 → 576; 8 → 64; 19 → 361.
Example of Selection
5. Select parents (two individuals) for crossover based on their fitness. Out of many methods for selecting the best chromosomes, if roulette-wheel selection is used, then the probability of the ith string in the population being selected is
pi = Fi / ( Σ j=1..n Fj )
where
Fi is the fitness of string i in the population, expressed as f(x),
pi is the probability of string i being selected,
n is the number of individuals in the population, i.e., the population size,
n × pi is the expected count.

String No | Initial Population | X value | Fitness f(x) = x² | Prob pi | Expected count n × pi
1 | 01101 | 13 | 169 | 0.14 | 0.58
2 | 11000 | 24 | 576 | 0.49 | 1.97
3 | 01000 | 8 | 64 | 0.06 | 0.22
4 | 10011 | 19 | 361 | 0.31 | 1.23
Sum | | | 1170 | 1.00 | 4.00
Average | | | 293 | 0.25 | 1.00
Max | | | 576 | 0.49 | 1.97
Roulette Wheel Selection
Roulette-wheel selection, also known as Fitness Proportionate Selection, is a genetic operator used for selecting potentially useful solutions for recombination.
In fitness-proportionate selection :
 The chance of an individual being selected is proportional to its fitness, greater or less than its competitors' fitness.
 Conceptually, this can be thought of as a game of Roulette.
Fig. Roulette wheel showing 8 individuals; each slice is proportional to the individual's fitness (slices of 5%, 9%, 13%, 20%, 17%, 8%, 8%, 20%).
Roulette Wheel Selection
The Roulette wheel simulates 8 individuals with fitness values Fi marked at its circumference; e.g.,
 The 5th individual has a higher fitness than the others, so the wheel would choose the 5th individual more often than the other individuals.
 The wheel is spun n = 8 times, each time selecting an instance of the string chosen by the wheel pointer.
The probability of the ith string being selected is
pi = Fi / ( Σ j=1..n Fj )
where
n = number of individuals, called the population size;
pi = probability of the ith string being selected;
Fi = fitness of the ith string in the population.
Average fitness = Σ Fj / n ; Expected count = (n = 8) × pi ; Cumulative probability of string 5 = Σ i=1..5 pi.
Because the circumference of the wheel is marked according to a string's fitness, the Roulette-wheel mechanism is expected to make Fi / F̄ copies of the ith string, where F̄ is the average fitness.
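Roulette-wheel selection can be sketched in Python with a cumulative-probability spin. The function name is illustrative; the sample population and fitnesses are taken from the worked example above.

```python
import random

def roulette_select(population, fitnesses):
    """Spin the wheel once : pick individual i with probability Fi / sum(Fj)."""
    total = sum(fitnesses)
    spin = random.uniform(0, total)
    cumulative = 0.0
    for individual, f in zip(population, fitnesses):
        cumulative += f
        if spin <= cumulative:
            return individual
    return population[-1]  # guard against floating-point rounding

# Usage with the example's strings and fitness f(x) = x².
pop = ["01101", "11000", "01000", "10011"]
fits = [169, 576, 64, 361]
picked = roulette_select(pop, fits)
print(picked)  # "11000" is expected roughly 49% of the time
```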
Boltzmann Selection
Simulated annealing is a method used to minimize or maximize a function.
 This method simulates the process of slow cooling of molten metal to achieve the minimum function value in a minimization problem.
 The cooling phenomenon is simulated by controlling a temperature-like parameter introduced with the concept of the Boltzmann probability distribution.
 A system in thermal equilibrium at a temperature T has its energy distribution based on the probability defined by P(E) = exp ( - E / kT ), where k is the Boltzmann constant.
 This expression suggests that a system at a higher temperature has almost uniform probability of being at any energy state, but at a lower temperature it has a small probability of being at a higher energy state.
 Thus, by controlling the temperature T and assuming that the search process follows the Boltzmann probability distribution, the convergence of the algorithm is controlled.
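One common way to realize Boltzmann selection (an illustrative sketch, not spelled out in the slides) is to weight each individual by exp(fi / T) and spin a wheel over those weights: at high T selection is nearly uniform, and as T is lowered the fittest individuals dominate.

```python
import math
import random

def boltzmann_select(population, fitnesses, T):
    """Pick an individual with probability proportional to exp(f / T)."""
    weights = [math.exp(f / T) for f in fitnesses]
    total = sum(weights)
    spin = random.uniform(0, total)
    cumulative = 0.0
    for individual, w in zip(population, weights):
        cumulative += w
        if spin <= cumulative:
            return individual
    return population[-1]  # guard against floating-point rounding

# High T : selection is almost uniform. Low T : the fittest dominates.
pop, fits = ["a", "b", "c"], [1.0, 2.0, 3.0]
print(boltzmann_select(pop, fits, T=100.0))  # any of a, b, c
print(boltzmann_select(pop, fits, T=0.1))    # almost always "c"
```

In practice T is lowered according to a cooling schedule as the generations proceed, mirroring the annealing analogy above.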
One-Point Crossover
The One-Point crossover operator randomly selects one crossover point, copies everything before this point from the first parent, and copies everything after the crossover point from the second parent.
Consider the two parents selected for crossover :
Parent 1 1 1 0 1 1 | 0 0 1 0 0 1 1 0 1 1 0
Parent 2 1 1 0 1 1 | 1 1 0 0 0 0 1 1 1 1 0
Interchanging the parents' chromosomes after the crossover point, the offspring produced are :
Offspring 1 1 1 0 1 1 | 1 1 0 0 0 0 1 1 1 1 0
Offspring 2 1 1 0 1 1 | 0 0 1 0 0 1 1 0 1 1 0
Note : The vertical line | marks the chosen crossover point.
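One-point crossover is a few lines of string slicing in Python; the function name and `point` parameter are illustrative. The usage line reproduces the slide's example with the crossover point after the 5th gene.

```python
import random

def one_point_crossover(p1, p2, point=None):
    """Swap the tails of two equal-length parent strings at one point."""
    assert len(p1) == len(p2)
    if point is None:
        point = random.randint(1, len(p1) - 1)  # never cut at the very ends
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

# The slide's example, cut after the 5th gene :
o1, o2 = one_point_crossover("1101100100110110", "1101111000011110", point=5)
print(o1)  # 1101111000011110
print(o2)  # 1101100100110110
```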
Two-Point Crossover
The Two-Point crossover operator randomly selects two crossover points within a chromosome, then interchanges the two parent chromosomes between these points to produce two new offspring.
Consider the two parents selected for crossover :
Parent 1 1 1 0 1 1 | 0 0 1 0 0 1 1 | 0 1 1 0
Parent 2 1 1 0 1 1 | 1 1 0 0 0 0 1 | 1 1 1 0
Interchanging the parents' chromosomes between the crossover points, the offspring produced are :
Offspring 1 1 1 0 1 1 | 1 1 0 0 0 0 1 | 0 1 1 0
Offspring 2 1 1 0 1 1 | 0 0 1 0 0 1 1 | 1 1 1 0
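Two-point crossover swaps only the middle segment. The sketch below is illustrative (function name and `points` parameter are assumptions); the usage line uses the two-point example's parents with cut points after genes 5 and 12.

```python
import random

def two_point_crossover(p1, p2, points=None):
    """Swap the middle segment of two equal-length parents between two points."""
    assert len(p1) == len(p2)
    if points is None:
        a, b = sorted(random.sample(range(1, len(p1)), 2))
    else:
        a, b = points
    return p1[:a] + p2[a:b] + p1[b:], p2[:a] + p1[a:b] + p2[b:]

# Cut points after genes 5 and 12 :
o1, o2 = two_point_crossover("1101100100110110", "1101111000011110", points=(5, 12))
print(o1)  # 1101111000010110
print(o2)  # 1101100100111110
```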
Uniform Crossover
The Uniform crossover operator decides (with some probability, known as the mixing ratio) which parent will contribute each of the gene values in the offspring chromosomes. The crossover operator allows the parent chromosomes to be mixed at the gene level rather than the segment level (as with one- and two-point crossover).
Consider the two parents selected for crossover :
Parent 1 1 1 0 1 1 0 0 1 0 0 1 1 0 1 1 0
Parent 2 1 1 0 1 1 1 1 0 0 0 0 1 1 1 1 0
If the mixing ratio is approximately 0.5, then about half of the genes in the offspring will come from parent 1 and the other half from parent 2. One possible pair of offspring after uniform crossover is :
Offspring 1 1₁ 1₂ 0₂ 1₁ 1₁ 1₂ 1₂ 0₂ 0₁ 0₁ 0₂ 1₁ 1₂ 1₁ 1₁ 0₂
Offspring 2 1₂ 1₁ 0₁ 1₂ 1₂ 0₁ 0₁ 1₁ 0₂ 0₂ 1₁ 1₂ 0₁ 1₂ 1₂ 0₁
Note : The subscripts indicate which parent the gene came from.
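Uniform crossover can be sketched as a per-gene coin flip; the function name and `mixing_ratio` parameter name are illustrative. At each position, one offspring gets parent 1's gene and the sibling gets parent 2's, or vice versa.

```python
import random

def uniform_crossover(p1, p2, mixing_ratio=0.5):
    """For each gene, take it from parent 1 with probability mixing_ratio."""
    assert len(p1) == len(p2)
    o1, o2 = [], []
    for g1, g2 in zip(p1, p2):
        if random.random() < mixing_ratio:
            o1.append(g1)  # offspring 1 inherits from parent 1
            o2.append(g2)
        else:
            o1.append(g2)  # offspring 1 inherits from parent 2
            o2.append(g1)
    return "".join(o1), "".join(o2)

o1, o2 = uniform_crossover("1101100100110110", "1101111000011110")
print(o1, o2)  # each position holds one parent's bit; the sibling holds the other's
```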
Arithmetic Crossover
The Arithmetic crossover operator linearly combines two parent chromosome vectors to produce two new offspring according to the equations :
Offspring1 = a * Parent1 + (1 - a) * Parent2
Offspring2 = (1 - a) * Parent1 + a * Parent2
where a is a random weighting factor chosen before each crossover operation.
Consider two parents (each of 4 float genes) selected for crossover :
Parent 1 (0.3) (1.4) (0.2) (7.4)
Parent 2 (0.5) (4.5) (0.1) (5.6)
Applying the above two equations with the weighting factor a = 0.7, we get the two resulting offspring :
Offspring 1 (0.36) (2.33) (0.17) (6.86)
Offspring 2 (0.44) (3.57) (0.13) (6.14)
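The two equations translate directly into a gene-wise loop; the function name and rounding to two decimals are illustrative choices. Running it on the example's parents with a = 0.7 reproduces the offspring above.

```python
def arithmetic_crossover(p1, p2, a):
    """Offspring1 = a*p1 + (1-a)*p2 ; Offspring2 = (1-a)*p1 + a*p2, gene-wise."""
    o1 = [round(a * x + (1 - a) * y, 2) for x, y in zip(p1, p2)]
    o2 = [round((1 - a) * x + a * y, 2) for x, y in zip(p1, p2)]
    return o1, o2

o1, o2 = arithmetic_crossover([0.3, 1.4, 0.2, 7.4], [0.5, 4.5, 0.1, 5.6], a=0.7)
print(o1)  # [0.36, 2.33, 0.17, 6.86]
print(o2)  # [0.44, 3.57, 0.13, 6.14]
```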
Heuristic Crossover
The Heuristic crossover operator uses the fitness values of the two parent chromosomes to determine the direction of the search. The offspring are created according to the equations :
Offspring1 = BestParent + r * (BestParent − WorstParent)
Offspring2 = BestParent
where r is a random number between 0 and 1.
It is possible that Offspring1 will not be feasible. This can happen if r is chosen such that one or more of its genes fall outside the allowable upper or lower bounds. For this reason, heuristic crossover has a user-defined parameter n for the number of times to try to find an r that results in a feasible chromosome. If a feasible chromosome is not produced after n tries, the worst parent is returned as Offspring1.
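The retry logic can be sketched as follows. This is an illustrative implementation: the caller is assumed to have already ranked the parents by fitness, and the function name, bound parameters, and `n_tries` default are assumptions.

```python
import random

def heuristic_crossover(best, worst, lower, upper, n_tries=10):
    """Offspring1 = best + r*(best - worst), retried until feasible."""
    for _ in range(n_tries):
        r = random.random()
        child = [b + r * (b - w) for b, w in zip(best, worst)]
        if all(lower <= g <= upper for g in child):
            return child, list(best)   # (offspring1, offspring2 = best parent)
    return list(worst), list(best)     # fallback : worst parent as offspring1

o1, o2 = heuristic_crossover([0.5, 4.5], [0.3, 1.4], lower=0.0, upper=10.0)
print(o1, o2)
```

Note how the operator pushes offspring1 past the best parent, away from the worst one, which is what makes it direction-aware.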
Mutation
After a crossover is performed, mutation takes place.
Mutation is a genetic operator used to maintain genetic diversity from one generation of a population of chromosomes to the next.
Mutation occurs during evolution according to a user-definable mutation probability, usually set to a fairly low value; 0.01 is a good first choice.
Mutation alters one or more gene values in a chromosome from its initial state. This can result in entirely new gene values being added to the gene pool. With the new gene values, the genetic algorithm may be able to arrive at a better solution than was previously possible.
Mutation is an important part of the genetic search; it helps to prevent the population from stagnating at any local optimum. Mutation is intended to prevent the search from falling into a local optimum of the state space.
Mutation operators are of many types.
 One simple type is Flip Bit.
 Others are Boundary, Non-Uniform, Uniform, and Gaussian.
The operator is selected based on the way the chromosomes are encoded.
Flip Bit
The Flip Bit mutation operator simply inverts the value of the chosen gene, i.e., 0 goes to 1 and 1 goes to 0. This mutation operator can only be used for binary genes.
Consider the two original offspring selected for mutation :
Original offspring 1 1 1 0 1 1 1 1 0 0 0 0 1 1 1 1 0
Original offspring 2 1 1 0 1 1 0 0 1 0 0 1 1 0 1 1 0
Inverting the value of each chosen gene (0 to 1 and 1 to 0), the mutated offspring produced are :
Mutated offspring 1 1 1 0 0 1 1 1 0 0 0 0 1 1 1 1 0
Mutated offspring 2 1 1 0 1 1 0 1 1 0 0 1 1 0 1 0 0
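Flip-bit mutation over a whole chromosome is a one-liner per gene; the function name and per-gene mutation rate are illustrative (the slide's example instead flips specific chosen genes).

```python
import random

def flip_bit_mutation(chromosome, mutation_rate=0.01):
    """Flip each binary gene independently with probability mutation_rate."""
    return "".join(
        ("1" if g == "0" else "0") if random.random() < mutation_rate else g
        for g in chromosome
    )

mutated = flip_bit_mutation("1101111000011110", mutation_rate=0.1)
print(mutated)  # usually differs from the original in only a position or two
```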
Mutation
 Boundary
The Boundary mutation operator replaces the value of the chosen gene with either the upper or lower bound for that gene (chosen randomly).
This mutation operator can only be used for integer and float genes.
 Non-Uniform
The Non-Uniform mutation operator increases the probability that the amount of the mutation will be close to 0 as the generation number increases. This mutation operator keeps the population from stagnating in the early stages of the evolution, then allows the genetic algorithm to fine-tune the solution in the later stages of evolution.
This mutation operator can only be used for integer and float genes.
 Uniform
The Uniform mutation operator replaces the value of the chosen gene with a uniform random value selected between the user-specified upper and lower bounds for that gene.
This mutation operator can only be used for integer and float genes.
 Gaussian
The Gaussian mutation operator adds a unit Gaussian distributed random value to the chosen gene. The new gene value is clipped if it falls outside the user-specified lower or upper bounds for that gene.
This mutation operator can only be used for integer and float genes.
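The Gaussian variant, the last one listed, can be sketched as follows (an illustrative implementation; the function name, per-gene mutation rate, and bound parameters are assumptions). Note the clipping step described above.

```python
import random

def gaussian_mutation(genes, lower, upper, mutation_rate=0.1):
    """Add unit-Gaussian noise to some genes, clipping to [lower, upper]."""
    mutated = []
    for g in genes:
        if random.random() < mutation_rate:
            g = min(upper, max(lower, g + random.gauss(0.0, 1.0)))
        mutated.append(g)
    return mutated

print(gaussian_mutation([0.3, 1.4, 0.2, 7.4], lower=0.0, upper=10.0))
```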
Example 1 :
Q. Maximize the function f(x) = x² over the range of integers from 0 . . . 31.
Note : This function could be solved by a variety of traditional methods, such as a hill-climbing algorithm which uses the derivative. One way is to :
 Start from any integer x in the domain of f
 Evaluate the derivative f' at this point x
 Observing that the derivative is positive, pick a new x which is a small distance in the positive direction from the current x
 Repeat until x = 31
Now see how a genetic algorithm would approach this problem.
Example 1 :
1. Devise a means to represent a solution to the problem :
Assume we represent x with five-digit unsigned binary integers.
2. Devise a heuristic for evaluating the fitness of any particular solution :
The function f(x) is simple, so it is easy to use the f(x) value itself to rate the fitness of a solution; otherwise we might have considered a simpler heuristic that would serve much the same purpose.
3. Coding - binary representation and string length :
GAs often process binary representations of solutions. This works well, because crossover and mutation can be clearly defined for binary solutions. A binary string of length 5 can represent 32 numbers (0 to 31).
4. Randomly generate a set of solutions :
Here we consider a population of four solutions. However, larger populations are used in real applications to explore a larger part of the search space. Assume four randomly generated solutions : 01101, 11000, 01000, 10011. These are chromosomes or genotypes.
Example 1 :
5. Evaluate the fitness of each member of the population :
(a) Decode the individual into an integer (called a phenotype) :
01101 → 13; 11000 → 24; 01000 → 8; 10011 → 19.
(b) Evaluate the fitness according to f(x) = x² :
13 → 169; 24 → 576; 8 → 64; 19 → 361.
(c) Expected count = N * Prob i, where N is the number of individuals in the population, called the population size; here N = 4.
The evaluation of the initial population is summarized in the table below.

String No i | Initial Population (chromosome) | X value (Phenotype) | Fitness f(x) = x² | Prob i (fraction of total) | Expected count N * Prob i
1 | 01101 | 13 | 169 | 0.14 | 0.58
2 | 11000 | 24 | 576 | 0.49 | 1.97
3 | 01000 | 8 | 64 | 0.06 | 0.22
4 | 10011 | 19 | 361 | 0.31 | 1.23
Total (sum) | | | 1170 | 1.00 | 4.00
Average | | | 293 | 0.25 | 1.00
Max | | | 576 | 0.49 | 1.97

Thus, string no 2 has the maximum chance of selection.
Example 1 :
6. Produce a new generation of solutions by picking from the existing pool of solutions with a preference for solutions which are better suited than others :
We divide the range [0, 1] into four bins, sized according to the relative fitness of the solutions which they represent.

Strings | Prob i | Associated Bin
01101 | 0.14 | 0.0 . . . 0.14
11000 | 0.49 | 0.14 . . . 0.63
01000 | 0.06 | 0.63 . . . 0.69
10011 | 0.31 | 0.69 . . . 1.00

By generating 4 uniform (0, 1) random values and seeing which bin they fall into, we pick the four strings that will form the basis for the next generation.

Random No | Falls into bin | Chosen string
0.08 | 0.0 . . . 0.14 | 01101
0.24 | 0.14 . . . 0.63 | 11000
0.52 | 0.14 . . . 0.63 | 11000
0.87 | 0.69 . . . 1.00 | 10011
Example 1 :
7. Pair the chosen strings at random : here (01101, 11000) and (11000, 10011).
8. Within each pair, swap parts of the members' solutions to create offspring which are a mixture of the parents :
For the first pair of strings : 01101, 11000
 We randomly select the crossover point to be after the fourth digit. Crossing these two strings at that point yields :
01101 → 0 1 1 0 | 1 → 01100
11000 → 1 1 0 0 | 0 → 11001
For the second pair of strings : 11000, 10011
 We randomly select the crossover point to be after the second digit. Crossing these two strings at that point yields :
11000 → 1 1 | 0 0 0 → 11011
10011 → 1 0 | 0 1 1 → 10000
9. Randomly mutate a very small fraction of genes in the population : with a typically small mutation probability per bit, it happens that none of the bits in our population are mutated.
Example 1 :
10. Go back and re-evaluate the fitness of the population (new generation) : This would be the first step in generating a new generation of solutions. However, it is also useful in showing the way that a single iteration of the genetic algorithm has improved this sample.

String No | Population (chromosome) | X value (Phenotype) | Fitness f(x) = x² | Prob i (fraction of total) | Expected count
1 | 01100 | 12 | 144 | 0.082 | 0.328
2 | 11001 | 25 | 625 | 0.356 | 1.425
3 | 11011 | 27 | 729 | 0.416 | 1.662
4 | 10000 | 16 | 256 | 0.146 | 0.584
Total (sum) | | | 1754 | 1.000 | 4.000
Average | | | 439 | 0.250 | 1.000
Max | | | 729 | 0.416 | 1.662

Observe that :
1. The initial population at step 5 was 01101, 11000, 01000, 10011. After one cycle, the new population at step 10, which acts as the next initial population, is 01100, 11001, 11011, 10000.
2. The total fitness has gone from 1170 to 1754 in a single generation.
3. The algorithm has already come up with the string 11011 (i.e., x = 27) as a possible solution.
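The whole of Example 1 (encode as 5-bit strings, evaluate f(x) = x², roulette-wheel selection, one-point crossover, flip-bit mutation) can be combined into a short program. This is a minimal sketch with illustrative parameter values, not the exact run traced in the slides.

```python
import random

def run_ga(pop_size=4, bits=5, generations=20, mutation_rate=0.001):
    """GA maximizing f(x) = x^2 over the integers 0..31, as in Example 1."""
    f = lambda chrom: int(chrom, 2) ** 2             # fitness via the phenotype
    pop = ["".join(random.choice("01") for _ in range(bits))
           for _ in range(pop_size)]
    for _ in range(generations):
        total = sum(f(c) for c in pop)
        # Roulette-wheel selection : pick c with probability f(c) / total.
        def spin():
            r, acc = random.uniform(0, total), 0.0
            for c in pop:
                acc += f(c)
                if r <= acc:
                    return c
            return pop[-1]
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = spin(), spin()
            cut = random.randint(1, bits - 1)        # one-point crossover
            for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
                child = "".join(b if random.random() >= mutation_rate
                                else ("1" if b == "0" else "0")  # flip bit
                                for b in child)
                nxt.append(child)
        pop = nxt[:pop_size]
    best = max(pop, key=f)
    return best, int(best, 2)

print(run_ga())  # (best chromosome, its integer value), typically near ('11111', 31)
```

With a population of only 4, runs can stall, just as the slides' hand-traced run reaches x = 27 rather than 31; larger populations and more generations make convergence to 31 much more likely.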
Thank You