Univariate Marginal Distribution Algorithm

Univariate Marginal Distribution Algorithm, UMDA, Univariate Marginal Distribution, UMD.

Taxonomy
The Univariate Marginal Distribution Algorithm belongs to the field of Estimation of Distribution Algorithms (EDA), also referred to as Population Model-Building Genetic Algorithms (PMBGA), an extension to the field of Evolutionary Computation. UMDA is closely related to the Factorized Distribution Algorithm (FDA) and an extension called the Bivariate Marginal Distribution Algorithm (BMDA). UMDA is related to other EDAs such as the Compact Genetic Algorithm, the Population-Based Incremental Learning algorithm, and the Bayesian Optimization Algorithm.

Inspiration
The Univariate Marginal Distribution Algorithm is a population-based technique without an inspiration. It is related to the Genetic Algorithm and other Evolutionary Algorithms that are inspired by the biological theory of evolution by means of natural selection.

Strategy
The information processing strategy of the algorithm is to use the frequency of the components in a population of candidate solutions in the construction of new candidate solutions. This is achieved by first measuring the frequency of each component in the population (the univariate marginal probability) and using the probabilities to influence the probabilistic selection of components in the component-wise construction of new candidate solutions.

Procedure
Algorithm (below) provides a pseudocode listing of the Univariate Marginal Distribution Algorithm for minimizing a cost function.

Input: $Bits_{num}$, $Population_{size}$, $Selection_{size}$
Output: $S_{best}$

Population $\leftarrow$ InitializePopulation($Bits_{num}$, $Population_{size}$)
EvaluatePopulation(Population)
$S_{best}$ $\leftarrow$ GetBestSolution(Population)
While ($\neg$ StopCondition())
    Selected $\leftarrow$ SelectFitSolutions(Population, $Selection_{size}$)
    $V$ $\leftarrow$ CalculateFrequencyOfComponents(Selected)
    Offspring $\leftarrow$ $\emptyset$
    For ($i$ To $Population_{size}$)
        Offspring $\leftarrow$ ProbabilisticallyConstructSolution($V$)
    End
    EvaluatePopulation(Offspring)
    $S_{best}$ $\leftarrow$ GetBestSolution(Offspring)
    Population $\leftarrow$ Offspring
End
Return ($S_{best}$)
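The two core steps of the procedure, CalculateFrequencyOfComponents and ProbabilisticallyConstructSolution, can be sketched in a few lines of Ruby. This is a minimal illustration with made-up data, not part of the book's listing:

```ruby
# Three selected bitstrings (illustrative data).
selected = [[1, 1, 0], [1, 0, 0], [1, 1, 1]]

# Univariate marginal probability: the frequency of '1' at each
# position across the selected solutions.
vector = selected.transpose.map { |column| column.sum.to_f / selected.size }
# => [1.0, 0.666..., 0.333...]

# Component-wise probabilistic construction of a new candidate:
# each bit is set to 1 with its marginal probability.
candidate = vector.map { |p| rand < p ? 1 : 0 }
```

Each component of the new candidate is drawn independently, which is exactly the univariate (no dependency) assumption that distinguishes UMDA from BMDA and FDA.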
Code Listing
Listing (below) provides an example of the Univariate Marginal Distribution Algorithm implemented in the Ruby Programming Language. The demonstration problem is a maximizing binary optimization problem called OneMax that seeks a binary string of unity (all '1' bits). The objective function provides only an indication of the number of correct bits in a candidate string, not the positions of the correct bits. The algorithm is an implementation of UMDA that uses the integers 1 and 0 to represent bits in a binary string representation. A binary tournament selection strategy is used and the whole population is replaced each iteration. Mechanisms from Evolutionary Computation, such as elitism and more elaborate selection methods, may be implemented as extensions.

def onemax(vector)
  return vector.inject(0){|sum, value| sum + value}
end

def random_bitstring(size)
  return Array.new(size){ ((rand()<0.5) ? 1 : 0) }
end

def binary_tournament(pop)
  i, j = rand(pop.size), rand(pop.size)
  j = rand(pop.size) while j==i
  return (pop[i][:fitness] > pop[j][:fitness]) ? pop[i] : pop[j]
end

def calculate_bit_probabilities(pop)
  vector = Array.new(pop.first[:bitstring].length, 0.0)
  pop.each do |member|
    member[:bitstring].each_with_index {|v, i| vector[i] += v}
  end
  vector.each_with_index {|f, i| vector[i] = (f.to_f/pop.size.to_f)}
  return vector
end

def generate_candidate(vector)
  candidate = {}
  candidate[:bitstring] = Array.new(vector.size)
  vector.each_with_index do |p, i|
    candidate[:bitstring][i] = (rand()<p) ? 1 : 0
  end
  return candidate
end

def search(num_bits, max_iter, pop_size, select_size)
  pop = Array.new(pop_size) do
    {:bitstring=>random_bitstring(num_bits)}
  end
  pop.each{|c| c[:fitness] = onemax(c[:bitstring])}
  best = pop.sort{|x,y| y[:fitness] <=> x[:fitness]}.first
  max_iter.times do |iter|
    selected = Array.new(select_size) { binary_tournament(pop) }
    vector = calculate_bit_probabilities(selected)
    samples = Array.new(pop_size) { generate_candidate(vector) }
    samples.each{|c| c[:fitness] = onemax(c[:bitstring])}
    samples.sort!{|x,y| y[:fitness] <=> x[:fitness]}
    best = samples.first if samples.first[:fitness] > best[:fitness]
    pop = samples
    puts " >iteration=#{iter}, f=#{best[:fitness]}, s=#{best[:bitstring]}"
  end
  return best
end

if __FILE__ == $0
  # problem configuration
  num_bits = 64
  # algorithm configuration
  max_iter = 100
  pop_size = 50
  select_size = 30
  # execute the algorithm
  best = search(num_bits, max_iter, pop_size, select_size)
  puts "done! Solution: f=#{best[:fitness]}, s=#{best[:bitstring]}"
end

Download: umda.rb.
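As one sketch of the elitism extension mentioned above, the best solution found so far can be reinserted into the sampled population whenever sampling would otherwise lose it. The helper name and its placement are assumptions for illustration, not part of the book's listing:

```ruby
# Hypothetical elitism helper for the UMDA listing above. Assumes candidates
# are hashes with :bitstring and :fitness keys, and that samples are sorted
# best-first, as in the search function.
def elitist_replace(samples, best)
  # If no sample matches or beats the best-so-far, overwrite the worst
  # sample (the last one, given best-first sorting) with a copy of the best.
  unless samples.any? { |c| c[:fitness] >= best[:fitness] }
    samples[-1] = { :bitstring => best[:bitstring].dup,
                    :fitness => best[:fitness] }
  end
  return samples
end
```

In the main loop this would replace `pop = samples` with `pop = elitist_replace(samples, best)` after the sort, guaranteeing the best solution survives into the next generation.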
References

Primary Sources
The Univariate Marginal Distribution Algorithm was described by Mühlenbein in 1997, in which a theoretical foundation is provided (for the field of investigation in general and the algorithm specifically) [Muhlenbein1997]. Mühlenbein also describes an incremental version of UMDA (IUMDA) that is described as being equivalent to Baluja's Population-Based Incremental Learning (PBIL) algorithm [Baluja1994].

Learn More
Pelikan and Mühlenbein extended the approach to cover problems that have dependencies between the components (specifically pair-dependencies), referring to the technique as the Bivariate Marginal Distribution Algorithm (BMDA) [Pelikan1998] [Pelikan1999].
