Clever Algorithms: Nature-Inspired Programming Recipes

By Jason Brownlee PhD


Variable Neighborhood Search

Variable Neighborhood Search, VNS.


Variable Neighborhood Search is a Metaheuristic and a Global Optimization technique that manages a Local Search technique. It is related to the Iterated Local Search algorithm.


The strategy for Variable Neighborhood Search involves the iterative exploration of larger and larger neighborhoods around a given local optimum until an improvement is located, after which the search across expanding neighborhoods is repeated. The strategy is motivated by three principles: 1) a local minimum for one neighborhood structure may not be a local minimum for a different neighborhood structure, 2) a global minimum is a local minimum for all possible neighborhood structures, and 3) local minima are relatively close to global minima for many problem classes.


The algorithm (below) provides a pseudocode listing of the Variable Neighborhood Search algorithm for minimizing a cost function. The pseudocode shows that the systematic search of expanding neighborhoods around a local optimum is abandoned when a global improvement is achieved (shown with the Break jump).

Input: Neighborhoods
Output: $S_{best}$
$S_{best}$ $\leftarrow$ RandomSolution()
While ($\neg$ StopCondition())
    For ($Neighborhood_{i}$ $\in$ Neighborhoods)
        $Neighborhood_{curr}$ $\leftarrow$ CalculateNeighborhood($S_{best}$, $Neighborhood_{i}$)
        $S_{candidate}$ $\leftarrow$ RandomSolutionInNeighborhood($Neighborhood_{curr}$)
        $S_{candidate}$ $\leftarrow$ LocalSearch($S_{candidate}$)
        If (Cost($S_{candidate}$) < Cost($S_{best}$))
            $S_{best}$ $\leftarrow$ $S_{candidate}$
            Break
        End
    End
End
Return ($S_{best}$)
Pseudocode for VNS.


Heuristics

  • Approximation methods (such as stochastic hill climbing) are suggested for use as the Local Search procedure for large problem instances in order to reduce the running time.
  • Variable Neighborhood Search has been applied to a very wide array of combinatorial optimization problems as well as clustering and continuous function optimization problems.
  • The embedded Local Search technique should be specialized to the problem type and instance to which the technique is being applied.
  • Variable Neighborhood Descent (VND) can be embedded in the Variable Neighborhood Search as the Local Search procedure and has been shown to be most effective (a minimal sketch follows this list).
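The Variable Neighborhood Descent mentioned in the last point can be sketched as follows, assuming a set of hypothetical move operators (for example, best-improvement 2-opt and 3-opt procedures) supplied as callable objects. This is an illustrative sketch rather than the procedure used in the code listing below.

# Minimal Variable Neighborhood Descent (VND) sketch. Each entry in 'operators'
# is a procedure that takes a solution hash and the city list and returns an
# improved solution hash, or nil if no improving move was found.
def variable_neighborhood_descent(solution, cities, operators)
  k = 0
  while k < operators.size
    improved = operators[k].call(solution, cities)  # search the k-th neighborhood
    if !improved.nil? && improved[:cost] < solution[:cost]
      solution, k = improved, 0                     # improvement: restart at the first neighborhood
    else
      k += 1                                        # no improvement: try the next, larger neighborhood
    end
  end
  return solution
end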

Code Listing

Listing (below) provides an example of the Variable Neighborhood Search algorithm implemented in the Ruby Programming Language. The algorithm is applied to the Berlin52 instance of the Traveling Salesman Problem (TSP), taken from the TSPLIB. The problem seeks a permutation of the order in which to visit cities (called a tour) that minimizes the total distance traveled. The optimal tour distance for the Berlin52 instance is 7542 units.

The Variable Neighborhood Search uses a stochastic 2-opt procedure as the embedded local search. The procedure deletes two edges and reverses the sequence in-between the deleted edges, potentially removing 'twists' in the tour. The neighborhood structure used in the search is the number of times the 2-opt procedure is performed on a permutation, between 1 and 20 times. The stopping condition for the local search procedure is a maximum number of iterations without improvement. The same stop condition is employed by the higher-order Variable Neighborhood Search procedure, although with a smaller limit on the number of non-improving iterations.

def euc_2d(c1, c2)
  # rounded Euclidean distance between two city coordinates
  Math.sqrt((c1[0] - c2[0])**2.0 + (c1[1] - c2[1])**2.0).round
end

def cost(perm, cities)
  # total length of the closed tour defined by the permutation
  distance = 0
  perm.each_with_index do |c1, i|
    c2 = (i==perm.size-1) ? perm[0] : perm[i+1]
    distance += euc_2d(cities[c1], cities[c2])
  end
  return distance
end

def random_permutation(cities)
  # Fisher-Yates shuffle of the city indices
  perm = Array.new(cities.size){|i| i}
  perm.each_index do |i|
    r = rand(perm.size-i) + i
    perm[r], perm[i] = perm[i], perm[r]
  end
  return perm
end

def stochastic_two_opt!(perm)
  c1, c2 = rand(perm.size), rand(perm.size)
  # exclude c1 and its adjacent positions as choices for c2
  exclude = [c1]
  exclude << ((c1==0) ? perm.size-1 : c1-1)
  exclude << ((c1==perm.size-1) ? 0 : c1+1)
  c2 = rand(perm.size) while exclude.include?(c2)
  c1, c2 = c2, c1 if c2 < c1
  # reverse the segment between the two cut points in place
  perm[c1...c2] = perm[c1...c2].reverse
  return perm
end

def local_search(best, cities, max_no_improv, neighborhood)
  count = 0
  begin
    candidate = {}
    candidate[:vector] = Array.new(best[:vector])
    # sample the current neighborhood by applying 2-opt 'neighborhood' times
    neighborhood.times{stochastic_two_opt!(candidate[:vector])}
    candidate[:cost] = cost(candidate[:vector], cities)
    if candidate[:cost] < best[:cost]
      count, best = 0, candidate
    else
      count += 1
    end
  end until count >= max_no_improv
  return best
end

def search(cities, neighborhoods, max_no_improv, max_no_improv_ls)
  best = {}
  best[:vector] = random_permutation(cities)
  best[:cost] = cost(best[:vector], cities)
  iter, count = 0, 0
  begin
    neighborhoods.each do |neigh|
      candidate = {}
      candidate[:vector] = Array.new(best[:vector])
      # shake: apply the 2-opt procedure 'neigh' times to escape the local optimum
      neigh.times{stochastic_two_opt!(candidate[:vector])}
      candidate[:cost] = cost(candidate[:vector], cities)
      # refine the shaken solution with the embedded local search
      candidate = local_search(candidate, cities, max_no_improv_ls, neigh)
      puts " > iteration #{(iter+1)}, neigh=#{neigh}, best=#{best[:cost]}"
      iter += 1
      if(candidate[:cost] < best[:cost])
        best, count = candidate, 0
        puts "New best, restarting neighborhood search."
        break
      else
        count += 1
      end
    end
  end until count >= max_no_improv
  return best
end

if __FILE__ == $0
  # problem configuration
  berlin52 = [[565,575],[25,185],[345,750],[945,685],[845,655],
    # ... (remaining coordinates of the 52-city Berlin52 TSPLIB instance omitted)
  # algorithm configuration
  max_no_improv = 50
  max_no_improv_ls = 70
  neighborhoods = 1...20
  # execute the algorithm
  best = search(berlin52, neighborhoods, max_no_improv, max_no_improv_ls)
  puts "Done. Best Solution: c=#{best[:cost]}, v=#{best[:vector].inspect}"
end
Variable Neighborhood Search in Ruby


Primary Sources

The seminal paper for describing Variable Neighborhood Search was by Mladenovic and Hansen in 1997 [Mladenovic1997], although an early abstract by Mladenovic is sometimes cited [Mladenovic1995]. The approach is explained in terms of three different variations on the general theme. Variable Neighborhood Descent (VND) refers to the use of a Local Search procedure and the deterministic (as opposed to stochastic or probabilistic) change of neighborhood size. Reduced Variable Neighborhood Search (RVNS) involves performing a stochastic random search within a neighborhood and no refinement via a local search technique. Basic Variable Neighborhood Search is the canonical approach described by Mladenovic and Hansen in the seminal paper.
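To make the contrast concrete, the following is a minimal sketch of Reduced Variable Neighborhood Search under the assumption that, as in the listing above, a neighborhood is defined by the number of stochastic 2-opt moves applied: points are sampled from successively larger neighborhoods and accepted directly, with no local search refinement. It reuses the random_permutation, stochastic_two_opt!, and cost helpers from the listing above and is illustrative only, not code from the primary sources.

# Minimal Reduced VNS (RVNS) sketch: shake within each neighborhood and accept
# improvements directly, without an embedded local search.
def reduced_vns(cities, neighborhoods, max_iter)
  best = {:vector => random_permutation(cities)}
  best[:cost] = cost(best[:vector], cities)
  max_iter.times do
    neighborhoods.each do |neigh|
      candidate = {:vector => Array.new(best[:vector])}
      neigh.times{stochastic_two_opt!(candidate[:vector])}  # shake only
      candidate[:cost] = cost(candidate[:vector], cities)
      if candidate[:cost] < best[:cost]
        best = candidate
        break  # improvement found: return to the smallest neighborhood
      end
    end
  end
  return best
end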

Learn More

There are a large number of papers published on Variable Neighborhood Search, its applications and variations. Hansen and Mladenovic provide an overview of the approach that includes its recent history, extensions and a detailed review of the numerous areas of application [Hansen2003]. For some additional useful overviews of the technique, its principles, and applications, see [Hansen1998] [Hansen2001a] [Hansen2002].

There are many extensions to Variable Neighborhood Search. Some popular examples include: Variable Neighborhood Decomposition Search (VNDS) that involves embedding a second heuristic or metaheuristic approach in VNS to replace the Local Search procedure [Hansen2001], Skewed Variable Neighborhood Search (SVNS) that encourages exploration of neighborhoods far away from discovered local optima, and Parallel Variable Neighborhood Search (PVNS) that either parallelizes the local search of a neighborhood or parallelizes the searching of the neighborhoods themselves.
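As an illustration of the skewed acceptance idea in SVNS, the sketch below shows the commonly stated rule: a candidate may replace the incumbent even when it is slightly worse, provided its cost, discounted by a distance term, still beats the incumbent's cost. The distance measure used here (a count of differing positions between permutations) and the alpha parameter are illustrative assumptions rather than choices made in the cited papers.

# Sketch of a Skewed VNS acceptance test. The Hamming-style distance over
# permutation positions and the default alpha weight are illustrative choices.
def permutation_distance(v1, v2)
  v1.zip(v2).count{|a, b| a != b}
end

def skewed_accept?(current, candidate, alpha=0.05)
  dist = permutation_distance(current[:vector], candidate[:vector])
  candidate[:cost] - alpha * dist < current[:cost]
end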


References

[Hansen1998] P. Hansen and N. Mladenović, "An introduction to Variable neighborhood search", in Meta-heuristics, Advances and trends in local search paradigms for optimization, pages 433–458, Kluwer Academic Publishers, 1998.
[Hansen2001] P. Hansen and N. Mladenović and D. Perez–Britos, "Variable Neighborhood Decomposition Search", Journal of Heuristics, 2001.
[Hansen2001a] P. Hansen and N. Mladenović, "Variable neighborhood search: Principles and applications", European Journal of Operational Research, 2001.
[Hansen2002] P. Hansen and N. Mladenović, "Variable neighbourhood search", in Handbook of Applied Optimization, pages 221–234, Oxford University Press, 2002.
[Hansen2003] P. Hansen and N. Mladenović, "6: Variable Neighborhood Search", in Handbook of Metaheuristics, pages 145–184, Springer, 2003.
[Mladenovic1995] N. Mladenović, "A variable neighborhood algorithm - A new metaheuristic for combinatorial optimization", in Abstracts of papers presented at Optimization Days, 1995.
[Mladenovic1997] N. Mladenović and P. Hansen, "Variable neighborhood search", Computers & Operations Research, 1997.