 Clever Algorithms: Nature-Inspired Programming Recipes By Jason Brownlee PhD. First Edition, Lulu Enterprises, January 2011. ISBN: 978-1-4467-8506-5.

Compact Genetic Algorithm

Compact Genetic Algorithm, CGA, cGA.

Taxonomy

The Compact Genetic Algorithm is an Estimation of Distribution Algorithm (EDA), also referred to as a Probabilistic Model-Building Genetic Algorithm (PMBGA), an extension to the field of Evolutionary Computation. The Compact Genetic Algorithm is the basis for extensions such as the Extended Compact Genetic Algorithm (ECGA). It is related to other EDAs such as the Univariate Marginal Distribution Algorithm, the Population-Based Incremental Learning algorithm, and the Bayesian Optimization Algorithm.

Inspiration

The Compact Genetic Algorithm is a probabilistic technique without an inspiration. It is related to the Genetic Algorithm and other Evolutionary Algorithms that are inspired by the biological theory of evolution by means of natural selection.

Strategy

The information processing objective of the algorithm is to simulate the behavior of a Genetic Algorithm with a much smaller memory footprint (without requiring a population to be maintained). This is achieved by maintaining a vector that specifies the probability of each component appearing in new candidate solutions. Candidate solutions are probabilistically generated from the vector, and the components of the better of two sampled solutions are used to make small adjustments to the probabilities in the vector.
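As an illustration of this strategy, the following Ruby sketch performs a single sample-and-update step on a small probability vector. The helper names (sample, onemax) and the step size 1/n are illustrative choices made here to match the description above; they are not part of the formal specification.

```ruby
# Illustrative single step of the cGA strategy (not the full algorithm).
def sample(vector)
  vector.map {|p| rand() < p ? 1 : 0} # draw one bit per probability
end

def onemax(bits)
  bits.inject(0) {|sum, bit| sum + bit} # count of 1 bits
end

vector = [0.5, 0.5, 0.5, 0.5] # uninformed starting probabilities
n = 10                        # probabilities are adjusted by 1/n
s1, s2 = sample(vector), sample(vector)
winner, loser = (onemax(s1) >= onemax(s2)) ? [s1, s2] : [s2, s1]
vector.each_index do |i|
  next if winner[i] == loser[i]                  # agreement: no update
  vector[i] += (winner[i] == 1) ? 1.0/n : -1.0/n # shift toward the winner
end
puts vector.inspect # each entry is now 0.4, 0.5, or 0.6
```

Repeating this step drives each probability toward 0 or 1, which is exactly the loop the full procedure below formalizes.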

Procedure

The Compact Genetic Algorithm maintains a real-valued prototype vector that represents the probability of each component being expressed in a candidate solution. Algorithm (below) provides a pseudocode listing of the Compact Genetic Algorithm for maximizing a cost function. The parameter $n$ controls the size of the adjustment ($\frac{1}{n}$) applied to the probability of each bit on which the two sampled solutions conflict in a given iteration.

Input: $Bits_{num}$, $n$
Output: $S_{best}$
$V$ $\leftarrow$ InitializeVector($Bits_{num}$, $0.5$)
$S_{best}$ $\leftarrow$ $\emptyset$
While ($\neg$StopCondition())
    $S_{1}$ $\leftarrow$ GenerateSamples($V$)
    $S_{2}$ $\leftarrow$ GenerateSamples($V$)
    $S_{winner}$, $S_{loser}$ $\leftarrow$ SelectWinnerAndLoser($S_{1}$, $S_{2}$)
    If (Cost($S_{winner}$) $\geq$ Cost($S_{best}$))
        $S_{best}$ $\leftarrow$ $S_{winner}$
    End
    For ($i$ To $Bits_{num}$)
        If ($S_{winner}^i$ $\neq$ $S_{loser}^i$)
            If ($S_{winner}^i$ $\equiv$ 1)
                $V_{i}$ $\leftarrow$ $V_{i}$ + $\frac{1}{n}$
            Else
                $V_{i}$ $\leftarrow$ $V_{i}$ $-$ $\frac{1}{n}$
            End
        End
    End
End
Return ($S_{best}$)
Pseudocode for the cGA.

Heuristics

• The vector update parameter ($n$) influences the amount that the probabilities are updated each algorithm iteration.
• The vector update parameter ($n$) may be considered to be comparable to the population size parameter in the Genetic Algorithm.
• Early results demonstrate that the cGA may be comparable to a standard Genetic Algorithm on classical binary string optimization problems (such as OneMax).
• The algorithm may be considered to have converged if the vector probabilities are all either $0$ or $1$.
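The last heuristic suggests a simple termination test. The following sketch is illustrative (the name converged? and the tolerance are assumptions made here); a small epsilon is used because repeated $\frac{1}{n}$ updates in floating point may not land exactly on $0$ or $1$.

```ruby
# Illustrative convergence test for a cGA probability vector: converged
# when every probability has saturated at (approximately) 0 or 1.
def converged?(vector, epsilon=1e-9)
  vector.all? {|p| p <= epsilon || p >= 1.0 - epsilon}
end

puts converged?([0.0, 1.0, 1.0]) # => true
puts converged?([0.5, 1.0, 0.0]) # => false
```

Such a test can serve as the StopCondition in the pseudocode above, typically combined with a cap on the number of iterations.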

Code Listing

Listing (below) provides an example of the Compact Genetic Algorithm implemented in the Ruby Programming Language. The demonstration problem is a maximizing binary optimization problem called OneMax that seeks a binary string of unity (all '1' bits). The objective function only provides an indication of the number of correct bits in a candidate string, not the positions of the correct bits. The algorithm is an implementation of the Compact Genetic Algorithm that uses integer values to represent 1 and 0 bits in a binary string representation.

# objective function: the number of 1 bits in the candidate bitstring
def onemax(vector)
  return vector.inject(0){|sum, value| sum + value}
end

# sample a candidate solution from the probability vector
def generate_candidate(vector)
  candidate = {}
  candidate[:bitstring] = Array.new(vector.size)
  vector.each_with_index do |p, i|
    candidate[:bitstring][i] = (rand()<p) ? 1 : 0
  end
  candidate[:cost] = onemax(candidate[:bitstring])
  return candidate
end

# shift probabilities toward the winner on conflicting bits only
def update_vector(vector, winner, loser, pop_size)
  vector.size.times do |i|
    if winner[:bitstring][i] != loser[:bitstring][i]
      if winner[:bitstring][i] == 1
        vector[i] += 1.0/pop_size.to_f
      else
        vector[i] -= 1.0/pop_size.to_f
      end
    end
  end
end

def search(num_bits, max_iterations, pop_size)
  # start with maximum uncertainty: each bit has probability 0.5
  vector = Array.new(num_bits){0.5}
  best = nil
  max_iterations.times do |iter|
    c1 = generate_candidate(vector)
    c2 = generate_candidate(vector)
    winner, loser = (c1[:cost] > c2[:cost] ? [c1,c2] : [c2,c1])
    best = winner if best.nil? or winner[:cost]>best[:cost]
    update_vector(vector, winner, loser, pop_size)
    puts " >iteration=#{iter}, f=#{best[:cost]}, s=#{best[:bitstring]}"
    break if best[:cost] == num_bits
  end
  return best
end

if __FILE__ == $0
  # problem configuration
  num_bits = 32
  # algorithm configuration
  max_iterations = 200
  pop_size = 20
  # execute the algorithm
  best = search(num_bits, max_iterations, pop_size)
  puts "done! Solution: f=#{best[:cost]}, s=#{best[:bitstring]}"
end
