Clever Algorithms: Nature-Inspired Programming Recipes

By Jason Brownlee PhD


Stochastic Hill Climbing

Also known as: Stochastic Hill Climbing (SHC), Random Hill Climbing (RHC), Random Mutation Hill Climbing (RMHC).


Taxonomy

The Stochastic Hill Climbing algorithm is a Stochastic Optimization algorithm and a Local Optimization algorithm (contrasted with Global Optimization). It is a direct search technique, as it does not require derivatives of the search space. Stochastic Hill Climbing is an extension of deterministic hill climbing algorithms such as Simple Hill Climbing (first-best neighbor) and Steepest-Ascent Hill Climbing (best neighbor), and a parent of approaches such as Parallel Hill Climbing and Random-Restart Hill Climbing.


Strategy

The strategy of the Stochastic Hill Climbing algorithm is to iterate the process of randomly selecting a neighbor of a candidate solution and accepting it only if it results in an improvement. The strategy was proposed to address the limitations of deterministic hill climbing techniques, which are likely to get stuck in local optima due to their greedy acceptance of neighboring moves.


Procedure

Algorithm (below) provides a pseudocode listing of the Stochastic Hill Climbing algorithm for maximizing a cost function, specifically the Random Mutation Hill Climbing algorithm described by Forrest and Mitchell [Forrest1993].

Input: $Iter_{max}$, ProblemSize
Output: Current
Current $\leftarrow$ RandomSolution(ProblemSize)
For ($iter_i \in$ $Iter_{max}$)
    Candidate $\leftarrow$ RandomNeighbor(Current)
    If (Cost(Candidate) $\geq$ Cost(Current))
        Current $\leftarrow$ Candidate
    End
End
Return (Current)
Pseudocode for Stochastic Hill Climbing.


Heuristics

  • Stochastic Hill Climbing was designed to be used in discrete domains with explicit neighbors, such as combinatorial optimization (compared to continuous function optimization).
  • The algorithm's strategy may be applied to continuous domains by making use of a step-size to define candidate-solution neighbors (such as Localized Random Search and Fixed Step-Size Random Search); a rough sketch of this adaptation follows the list below.
  • Stochastic Hill Climbing is a local search technique (compared to global search) and may be used to refine a result after the execution of a global search algorithm.
  • Even though the technique uses a stochastic process, it can still get stuck in local optima.
  • Neighbors with better or equal cost should be accepted, allowing the technique to navigate across plateaus in the response surface.
  • The algorithm can be restarted and repeated a number of times after it converges to provide an improved result (called Multiple Restart Hill Climbing).
  • The procedure can be applied to multiple candidate solutions concurrently, allowing multiple algorithm runs to be performed at the same time (called Parallel Hill Climbing).
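As a rough illustration of the step-size idea noted in the list above, the following sketch perturbs each dimension of a real-valued candidate by a uniform random amount within a step size and clips the result to the search-space bounds. The function name random_neighbor_continuous, the step_size parameter, and the [min, max] bounds format are illustrative assumptions and are not part of the book's code.

# A minimal sketch (assumed names and bounds format) of a step-size
# neighborhood for a continuous domain.
def random_neighbor_continuous(vector, minmax, step_size)
  vector.each_with_index.map do |v, i|
    # move by a uniform random amount within +/- step_size in this dimension
    value = v + (2.0 * rand - 1.0) * step_size
    # clip to the [min, max] bounds of this dimension
    [[value, minmax[i][0]].max, minmax[i][1]].min
  end
end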

Code Listing

Listing (below) provides an example of the Stochastic Hill Climbing algorithm implemented in the Ruby Programming Language, specifically the Random Mutation Hill Climbing algorithm described by Forrest and Mitchell [Forrest1993]. The algorithm is executed for a fixed number of iterations and is applied to a binary string optimization problem called 'One Max'. The objective of this maximization problem is to prepare a string of all '1' bits, where the cost function reports the number of '1' bits in a given string.

def onemax(vector)
  # count the number of '1' bits in the candidate bitstring
  return vector.inject(0.0){|sum, v| sum + ((v=="1") ? 1 : 0)}
end

def random_bitstring(num_bits)
  # create a random bitstring of the required length
  return Array.new(num_bits){|i| (rand<0.5) ? "1" : "0"}
end

def random_neighbor(bitstring)
  # copy the bitstring and flip one randomly selected bit
  mutant = Array.new(bitstring)
  pos = rand(bitstring.size)
  mutant[pos] = (mutant[pos]=='1') ? '0' : '1'
  return mutant
end

def search(max_iterations, num_bits)
  candidate = {}
  candidate[:vector] = random_bitstring(num_bits)
  candidate[:cost] = onemax(candidate[:vector])
  max_iterations.times do |iter|
    neighbor = {}
    neighbor[:vector] = random_neighbor(candidate[:vector])
    neighbor[:cost] = onemax(neighbor[:vector])
    # accept the neighbor if it is no worse than the current candidate
    candidate = neighbor if neighbor[:cost] >= candidate[:cost]
    puts " > iteration #{(iter+1)}, best=#{candidate[:cost]}"
    break if candidate[:cost] == num_bits
  end
  return candidate
end

if __FILE__ == $0
  # problem configuration
  num_bits = 64
  # algorithm configuration
  max_iterations = 1000
  # execute the algorithm
  best = search(max_iterations, num_bits)
  puts "Done. Best Solution: c=#{best[:cost]}, v=#{best[:vector].join}"
end
Stochastic Hill Climbing in Ruby
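The restart heuristic mentioned above can be layered on top of the listing with a small wrapper. The following sketch reuses the search function from the listing; the wrapper name restart_search and the max_restarts parameter are illustrative assumptions rather than part of the original code.

# A minimal sketch of Multiple Restart Hill Climbing built on the search
# function above; 'restart_search' and 'max_restarts' are assumed names.
def restart_search(max_restarts, max_iterations, num_bits)
  best = nil
  max_restarts.times do
    candidate = search(max_iterations, num_bits)  # independent random start
    best = candidate if best.nil? or candidate[:cost] > best[:cost]
    break if best[:cost] == num_bits              # stop early at the optimum
  end
  return best
end

For example, best = restart_search(10, 1000, 64) would perform up to ten independent climbs of 1,000 iterations each and return the best candidate found across all restarts.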


Primary Sources

Perhaps the most popular implementation of the Stochastic Hill Climbing algorithm is by Forrest and Mitchell, who proposed the Random Mutation Hill Climbing (RMHC) algorithm (with communication from Richard Palmer) in a study that investigated the behavior of the genetic algorithm on a deceptive class of (discrete) bit-string optimization problems called 'royal road' functions [Forrest1993]. The RMHC was compared to two other hill climbing algorithms in addition to the genetic algorithm, specifically: the Steepest-Ascent Hill Climber, and the Next-Ascent Hill Climber. This study was then followed up by Mitchell and Holland [Mitchell1993].

Juels and Wattenberg were also early to consider stochastic hill climbing as a baseline against which to compare the genetic algorithm [Juels1994]. Skalak applied the RMHC algorithm to a single long bit-string that represented a number of prototype vectors for use in classification [Skalak1994].

Learn More

The Stochastic Hill Climbing algorithm is related to the genetic algorithm without crossover. Simplified versions of the approach have been investigated for bit-string based optimization problems with the population size of the genetic algorithm reduced to one. The general technique has been investigated under the names Iterated Hillclimbing [Muhlenbein1991], ES(1+1,m,hc) [Muhlenbein1992], Random Bit Climber [Davis1991], and (1+1)-Genetic Algorithm [Back1993]. The main difference between RMHC and ES(1+1) is that the latter uses a fixed mutation probability for each discrete element of a solution (meaning the neighborhood size is probabilistic), whereas RMHC stochastically modifies only one element per move.
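To make the distinction concrete, the following sketch shows an ES(1+1)-style neighborhood in the same style as the listing above: every bit is flipped independently with a fixed probability, so the number of changed bits varies from move to move, whereas the random_neighbor function in the listing always flips exactly one bit. The function name per_bit_neighbor and its prob parameter are illustrative assumptions and are not taken from the cited papers.

# A minimal sketch (assumed names) of a per-bit mutation neighborhood,
# in contrast to the single-bit flip used by random_neighbor above.
def per_bit_neighbor(bitstring, prob)
  bitstring.map do |bit|
    # flip each bit independently with probability 'prob'
    (rand < prob) ? ((bit=='1') ? '0' : '1') : bit
  end
end

A common choice would be prob = 1.0/bitstring.size, which flips one bit on average but allows zero or several flips on any given move.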


Bibliography

[Back1993] T. Bäck, "Optimal Mutation Rates in Genetic Search", in Proceedings of the Fifth International Conference on Genetic Algorithms, 1993.
[Davis1991] L. Davis, "Bit-climbing, representational bias, and test suite design", in Proceedings of the Fourth International Conference on Genetic Algorithms, 1991.
[Forrest1993] S. Forrest and M. Mitchell, "Relative building-block fitness and the building-block hypothesis", in Foundations of Genetic Algorithms 2, 1993.
[Juels1994] A. Juels and M. Wattenberg, "Stochastic hill climbing as a baseline method for evaluating genetic algorithms", University of California, Berkeley, 1994.
[Mitchell1993] M. Mitchell and J. H. Holland, "When Will a Genetic Algorithm Outperform Hill Climbing?", in Proceedings of the 5th International Conference on Genetic Algorithms, 1993.
[Muhlenbein1991] H. Mühlenbein, "Evolution in time and space - the parallel genetic algorithm", in Foundations of Genetic Algorithms, 1991.
[Muhlenbein1992] H. Mühlenbein, "How Genetic Algorithms Really Work: I. Mutation and Hillclimbing", in Parallel Problem Solving from Nature 2, 1992.
[Skalak1994] D. B. Skalak, "Prototype and Feature Selection by Sampling and Random Mutation Hill Climbing Algorithms", in Proceedings of the Eleventh International Conference on Machine Learning, 1994.