Clever Algorithms: Nature-Inspired Programming Recipes

By Jason Brownlee PhD


Univariate Marginal Distribution Algorithm

Univariate Marginal Distribution Algorithm, UMDA, Univariate Marginal Distribution, UMD.

Taxonomy

The Univariate Marginal Distribution Algorithm belongs to the field of Estimation of Distribution Algorithms (EDA), also referred to as Population Model-Building Genetic Algorithms (PMBGA), an extension to the field of Evolutionary Computation. UMDA is closely related to the Factorized Distribution Algorithm (FDA) and an extension called the Bivariate Marginal Distribution Algorithm (BMDA). UMDA is related to other EDAs such as the Compact Genetic Algorithm, the Population-Based Incremental Learning algorithm, and the Bayesian Optimization Algorithm.

Inspiration

The Univariate Marginal Distribution Algorithm is a population-based technique without an explicit biological inspiration. It is related to the Genetic Algorithm and other Evolutionary Algorithms that are inspired by the biological theory of evolution by means of natural selection.

Strategy

The information processing strategy of the algorithm is to use the frequency of the components in a population of candidate solutions in the construction of new candidate solutions. This is achieved by first measuring the frequency of each component in the population (the univariate marginal probability) and using the probabilities to influence the probabilistic selection of components in the component-wise construction of new candidate solutions.
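This strategy can be illustrated with a small sketch (illustrative only, with made-up data; this is not part of the book's listing):

```ruby
# Illustrative sketch: compute the univariate marginal probabilities
# (per-position frequency of 1s) from a small, made-up set of selected
# bitstrings, then construct one new candidate from those probabilities.
selected = [[1, 0, 1], [1, 1, 0], [1, 0, 0]]

# frequency of a 1 in each column (bit position) of the selected set
probs = selected.transpose.map { |col| col.sum.to_f / selected.size }

# build a new candidate component-wise: bit i is 1 with probability probs[i]
candidate = probs.map { |p| rand < p ? 1 : 0 }
```

Here the first bit is 1 in all three selected solutions, so its marginal probability is 1.0 and every constructed candidate will carry a 1 in that position.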

Procedure

Algorithm (below) provides a pseudocode listing of the Univariate Marginal Distribution Algorithm for minimizing a cost function.

Input: $Bits_{num}$, $Population_{size}$, $Selection_{size}$
Output: $S_{best}$
Population $\leftarrow$ InitializePopulation($Bits_{num}$, $Population_{size}$)
EvaluatePopulation(Population)
$S_{best}$ $\leftarrow$ GetBestSolution(Population)
While ($\neg$StopCondition())
    Selected $\leftarrow$ SelectFitSolutions(Population, $Selection_{size}$)
    $V$ $\leftarrow$ CalculateFrequencyOfComponents(Selected)
    Offspring $\leftarrow$ $\emptyset$
    For ($i$ To $Population_{size}$)
        Offspring $\leftarrow$ Offspring $\cup$ ProbabilisticallyConstructSolution($V$)
    End
    EvaluatePopulation(Offspring)
    $S_{best}$ $\leftarrow$ GetBestSolution(Offspring $\cup$ $S_{best}$)
    Population $\leftarrow$ Offspring
End
Return ($S_{best}$)
Pseudocode for the UMDA.

Heuristics

  • UMDA was designed for problems where the components of a solution are independent (linearly separable).
  • A selection method is needed to identify the subset of good solutions from which to calculate the univariate marginal probabilities. Many selection methods from the field of Evolutionary Computation may be used.
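For example, truncation selection, commonly used in EDAs, simply keeps the fittest subset of the population. A minimal sketch, assuming candidates are hashes with a :fitness key as in the listing below:

```ruby
# Sketch of truncation selection: keep the select_size fittest solutions.
# Assumes each candidate is a hash with a :fitness key.
def truncation_selection(pop, select_size)
  pop.sort_by { |c| -c[:fitness] }.first(select_size)
end
```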

Code Listing

Listing (below) provides an example of the Univariate Marginal Distribution Algorithm implemented in the Ruby Programming Language. The demonstration problem is a maximizing binary optimization problem called OneMax that seeks a binary string of unity (all '1' bits). The objective function provides only an indication of the number of correct bits in a candidate string, not the positions of the correct bits.

The algorithm is an implementation of UMDA that uses the integers 1 and 0 to represent bits in a binary string representation. A binary tournament selection strategy is used, and the whole population is replaced each iteration. Mechanisms from Evolutionary Computation, such as elitism and more elaborate selection methods, may be implemented as extensions.

# objective function: count the number of 1 bits in the candidate
def onemax(vector)
  return vector.inject(0){|sum, value| sum + value}
end

def random_bitstring(size)
  return Array.new(size){ ((rand()<0.5) ? 1 : 0) }
end

# binary tournament: return the fitter of two distinct random solutions
def binary_tournament(pop)
  i, j = rand(pop.size), rand(pop.size)
  j = rand(pop.size) while j==i
  return (pop[i][:fitness] > pop[j][:fitness]) ? pop[i] : pop[j]
end

# estimate the univariate marginal probability of a 1 at each bit position
def calculate_bit_probabilities(pop)
  vector = Array.new(pop.first[:bitstring].length, 0.0)
  pop.each do |member|
    member[:bitstring].each_with_index {|v, i| vector[i] += v}
  end
  vector.each_with_index {|f, i| vector[i] = (f.to_f/pop.size.to_f)}
  return vector
end

# construct a new candidate bit-by-bit from the probability vector
def generate_candidate(vector)
  candidate = {}
  candidate[:bitstring] = Array.new(vector.size)
  vector.each_with_index do |p, i|
    candidate[:bitstring][i] = (rand()<p) ? 1 : 0
  end
  return candidate
end

def search(num_bits, max_iter, pop_size, select_size)
  pop = Array.new(pop_size) do
    {:bitstring=>random_bitstring(num_bits)}
  end
  pop.each{|c| c[:fitness] = onemax(c[:bitstring])}
  best = pop.sort{|x,y| y[:fitness] <=> x[:fitness]}.first
  max_iter.times do |iter|
    selected = Array.new(select_size) { binary_tournament(pop) }
    vector = calculate_bit_probabilities(selected)
    samples = Array.new(pop_size) { generate_candidate(vector) }
    samples.each{|c| c[:fitness] = onemax(c[:bitstring])}
    samples.sort!{|x,y| y[:fitness] <=> x[:fitness]}
    best = samples.first if samples.first[:fitness] > best[:fitness]
    pop = samples
    puts " >iteration=#{iter}, f=#{best[:fitness]}, s=#{best[:bitstring].join}"
  end
  return best
end

if __FILE__ == $0
  # problem configuration
  num_bits = 64
  # algorithm configuration
  max_iter = 100
  pop_size = 50
  select_size = 30
  # execute the algorithm
  best = search(num_bits, max_iter, pop_size, select_size)
  puts "done! Solution: f=#{best[:fitness]}, s=#{best[:bitstring].join}"
end
Univariate Marginal Distribution Algorithm in Ruby
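As noted above, elitism can be added as an extension. A minimal sketch of one way to do so (a hypothetical helper, not part of the original listing), which re-inserts the best-so-far solution into the newly sampled population:

```ruby
# Hypothetical elitist replacement: if the best-so-far solution beats
# every new sample, overwrite the weakest sample with it so the best
# solution is never lost when the population is replaced wholesale.
def elitist_replacement(samples, best)
  samples = samples.sort_by { |c| -c[:fitness] }
  samples[-1] = best if best[:fitness] > samples.first[:fitness]
  samples
end
```

In the search function, `pop = samples` would become `pop = elitist_replacement(samples, best)`.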

References

Primary Sources

The Univariate Marginal Distribution Algorithm was described by Mühlenbein in 1997, in a work that provides a theoretical foundation for the field of investigation in general and for the algorithm specifically [Muhlenbein1997]. Mühlenbein also describes an incremental version of UMDA (IUMDA), which he notes is equivalent to Baluja's Population-Based Incremental Learning (PBIL) algorithm [Baluja1994].

Learn More

Pelikan and Mühlenbein extended the approach to cover problems that have dependencies between the components (specifically pair-dependencies), referring to the technique as the Bivariate Marginal Distribution Algorithm (BMDA) [Pelikan1998] [Pelikan1999].

Bibliography

[Baluja1994] S. Baluja, "Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning", Technical Report CMU-CS-94-163, School of Computer Science, Carnegie Mellon University, 1994.
[Muhlenbein1997] H. Mühlenbein, "The equation for response to selection and its use for prediction", Evolutionary Computation, 1997.
[Pelikan1998] M. Pelikan and H. Mühlenbein, "Marginal distributions in evolutionary algorithms", in Proceedings of the International Conference on Genetic Algorithms Mendel, 1998.
[Pelikan1999] M. Pelikan and H. Mühlenbein, "The Bivariate Marginal Distribution Algorithm", in Advances in Soft Computing: Engineering Design and Manufacturing, pages 521–535, Springer, 1999.