Greedy Randomized Adaptive Search

Greedy Randomized Adaptive Search Procedure, GRASP.

Taxonomy

The Greedy Randomized Adaptive Search Procedure is a Metaheuristic and Global Optimization algorithm, originally proposed for Operations Research practitioners. The iterative application of an embedded Local Search technique relates the approach to Iterated Local Search and Multi-Start techniques.

Strategy

The objective of the Greedy Randomized Adaptive Search Procedure is to repeatedly sample stochastically greedy solutions, and then use a local search procedure to refine each to a local optimum. The strategy of the procedure is centered on the stochastic and greedy stepwise construction mechanism that constrains the selection and order of inclusion of the components of a solution based on the value they are expected to provide.

Procedure

Algorithm (below) provides a pseudocode listing of the Greedy Randomized Adaptive Search Procedure for minimizing a cost function.

Input: $\alpha$
Output: $S_{best}$
$S_{best}$ $\leftarrow$ ConstructRandomSolution()
While ($\neg$ StopCondition())
    $S_{candidate}$ $\leftarrow$ GreedyRandomizedConstruction($\alpha$)
    $S_{candidate}$ $\leftarrow$ LocalSearch($S_{candidate}$)
    If (Cost($S_{candidate}$) < Cost($S_{best}$))
        $S_{best}$ $\leftarrow$ $S_{candidate}$
    End
End
Return ($S_{best}$)

Algorithm (below) provides a pseudocode listing of the Greedy Randomized Construction function. The function involves the stepwise construction of a candidate solution using a stochastically greedy construction process. It works by building a Restricted Candidate List (RCL) that constrains the components of a solution (features) that may be selected in each cycle. The RCL may be constrained by an explicit size, or by using a threshold ($\alpha \in [0,1]$) on the cost of adding each feature to the current candidate solution.

Input: $\alpha$
Output: $S_{candidate}$
$S_{candidate}$ $\leftarrow \emptyset$
While (Size($S_{candidate}$) $\neq$ ProblemSize)
    $Feature_{costs}$ $\leftarrow \emptyset$
    For ($Feature_{i}$ $\notin$ $S_{candidate}$)
        $Feature_{costs}$ $\leftarrow$ CostOfAddingFeatureToSolution($S_{candidate}$, $Feature_{i}$)
    End
    RCL $\leftarrow \emptyset$
    $Fcost_{min}$ $\leftarrow$ MinCost($Feature_{costs}$)
    $Fcost_{max}$ $\leftarrow$ MaxCost($Feature_{costs}$)
    For ($Fcost_{i}$ $\in$ $Feature_{costs}$)
        If ($Fcost_{i}$ $\leq$ $Fcost_{min} + \alpha \cdot (Fcost_{max} - Fcost_{min})$)
            RCL $\leftarrow$ $Feature_{i}$
        End
    End
    $S_{candidate}$ $\leftarrow$ SelectRandomFeature(RCL)
End
Return ($S_{candidate}$)
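As a minimal illustration of the RCL threshold rule in the construction function above, the following Ruby sketch selects the indices of all features whose incremental cost falls within the $\alpha$-band of the cheapest (the helper name `build_rcl` is illustrative, not from the book's listing):

```ruby
# Minimal sketch of the RCL threshold rule (minimization).
# Given the incremental costs of the remaining features, keep the
# index of every feature whose cost is within alpha of the best.
def build_rcl(feature_costs, alpha)
  min, max = feature_costs.min, feature_costs.max
  threshold = min + alpha * (max - min)
  # indices of features cheap enough to enter the RCL
  feature_costs.each_index.select { |i| feature_costs[i] <= threshold }
end

build_rcl([5, 1, 9, 3], 0.5) # => [0, 1, 3]
```

With $\alpha = 0$ the construction is purely greedy (only the cheapest features qualify); with $\alpha = 1$ every feature qualifies and the construction is purely random.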
Code Listing

Listing (below) provides an example of the Greedy Randomized Adaptive Search Procedure implemented in the Ruby Programming Language. The algorithm is applied to the Berlin52 instance of the Traveling Salesman Problem (TSP), taken from the TSPLIB. The problem seeks a permutation of the order to visit cities (called a tour) that minimizes the total distance traveled. The optimal tour distance for the Berlin52 instance is 7542 units. The stochastic and greedy stepwise construction of a tour involves evaluating candidate cities by the cost they contribute as the next city in the tour. The algorithm uses a stochastic 2-opt procedure for the Local Search with a fixed number of non-improving iterations as the stopping condition.

```ruby
def euc_2d(c1, c2)
  # rounded Euclidean distance, per the TSPLIB convention
  Math.sqrt((c1[0] - c2[0])**2.0 + (c1[1] - c2[1])**2.0).round
end

def cost(perm, cities)
  # total length of the closed tour described by the permutation
  distance = 0
  perm.each_with_index do |c1, i|
    c2 = (i==perm.size-1) ? perm[0] : perm[i+1]
    distance += euc_2d(cities[c1], cities[c2])
  end
  return distance
end

def stochastic_two_opt(permutation)
  # reverse a randomly chosen segment of the tour
  perm = Array.new(permutation)
  c1, c2 = rand(perm.size), rand(perm.size)
  exclude = [c1]
  exclude << ((c1==0) ? perm.size-1 : c1-1)
  exclude << ((c1==perm.size-1) ? 0 : c1+1)
  c2 = rand(perm.size) while exclude.include?(c2)
  c1, c2 = c2, c1 if c2 < c1
  perm[c1...c2] = perm[c1...c2].reverse
  return perm
end

def local_search(best, cities, max_no_improv)
  count = 0
  begin
    candidate = {:vector=>stochastic_two_opt(best[:vector])}
    candidate[:cost] = cost(candidate[:vector], cities)
    count = (candidate[:cost] < best[:cost]) ? 0 : count+1
    best = candidate if candidate[:cost] < best[:cost]
  end until count >= max_no_improv
  return best
end

def construct_randomized_greedy_solution(cities, alpha)
  candidate = {}
  candidate[:vector] = [rand(cities.size)]
  all_cities = Array.new(cities.size) {|i| i}
  while candidate[:vector].size < cities.size
    candidates = all_cities - candidate[:vector]
    # cost of each remaining city as the next step from the current city
    costs = Array.new(candidates.size) do |i|
      euc_2d(cities[candidate[:vector].last], cities[candidates[i]])
    end
    rcl, max, min = [], costs.max, costs.min
    costs.each_with_index do |c, i|
      rcl << candidates[i] if c <= (min + alpha*(max-min))
    end
    candidate[:vector] << rcl[rand(rcl.size)]
  end
  candidate[:cost] = cost(candidate[:vector], cities)
  return candidate
end

def search(cities, max_iter, max_no_improv, alpha)
  best = nil
  max_iter.times do |iter|
    candidate = construct_randomized_greedy_solution(cities, alpha)
    candidate = local_search(candidate, cities, max_no_improv)
    best = candidate if best.nil? or candidate[:cost] < best[:cost]
    puts " > iteration #{(iter+1)}, best=#{best[:cost]}"
  end
  return best
end

if __FILE__ == $0
  # problem configuration
  berlin52 = [[565,575],[25,185],[345,750],[945,685],[845,655],
   [880,660],[25,230],[525,1000],[580,1175],[650,1130],[1605,620],
   [1220,580],[1465,200],[1530,5],[845,680],[725,370],[145,665],
   [415,635],[510,875],[560,365],[300,465],[520,585],[480,415],
   [835,625],[975,580],[1215,245],[1320,315],[1250,400],[660,180],
   [410,250],[420,555],[575,665],[1150,1160],[700,580],[685,595],
   [685,610],[770,610],[795,645],[720,635],[760,650],[475,960],
   [95,260],[875,920],[700,500],[555,815],[830,485],[1170,65],
   [830,610],[605,625],[595,360],[1340,725],[1740,245]]
  # algorithm configuration
  max_iter = 50
  max_no_improv = 50
  greediness_factor = 0.3
  # execute the algorithm
  best = search(berlin52, max_iter, max_no_improv, greediness_factor)
  puts "Done. Best Solution: c=#{best[:cost]}, v=#{best[:vector].inspect}"
end
```
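The distance and tour-cost helpers can be sanity-checked on a tiny hand-computed instance. This sketch repeats `euc_2d` and `cost` from the listing and evaluates a unit-square tour; the four-city coordinates are illustrative, not part of the Berlin52 data:

```ruby
# euc_2d and cost as defined in the listing above
def euc_2d(c1, c2)
  Math.sqrt((c1[0] - c2[0])**2.0 + (c1[1] - c2[1])**2.0).round
end

def cost(perm, cities)
  distance = 0
  perm.each_with_index do |c1, i|
    c2 = (i == perm.size - 1) ? perm[0] : perm[i + 1]
    distance += euc_2d(cities[c1], cities[c2])
  end
  distance
end

# four cities on the corners of a unit square; the closed tour
# 0 -> 1 -> 2 -> 3 -> 0 traverses all four unit-length sides
cities = [[0, 0], [0, 1], [1, 1], [1, 0]]
puts cost([0, 1, 2, 3], cities) # => 4
```

Note that `cost` includes the edge returning from the last city back to the first, so the value is the length of the closed tour.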
References

Primary Sources

The seminal paper that introduces the general approach of stochastic and greedy stepwise construction of candidate solutions is by Feo and Resende [Feo1989]. The general approach was inspired by greedy heuristics by Hart and Shogan [Hart1987]. The seminal review paper that is cited with the preliminary paper is by Feo and Resende [Feo1995], and provides a coherent description of the GRASP technique, an example, and a review of early applications. An early application was by Feo, Venkatraman and Bard for a machine scheduling problem [Feo1991]. Other early applications to scheduling problems include technical reports [Feo1993] (later published as [Bard1996]) and [Feo1994] (also later published as [Feo1996]).

Learn More

There are a vast number of review, application, and extension papers for GRASP. Pitsoulis and Resende provide an extensive contemporary overview of the field as a review chapter [Pitsoulis2002], as do Resende and Ribeiro, who include a clear presentation of the use of the $\alpha$ threshold parameter instead of a fixed size for the RCL [Resende2003]. Festa and Resende provide an annotated bibliography as a review chapter that gives some needed insight into the large amount of study that has gone into the approach [Festa2002]. There are numerous extensions to GRASP, including the popular Reactive GRASP for adapting $\alpha$ [Prais2000], the use of long-term memory to allow the technique to learn from candidate solutions discovered in previous iterations, and parallel implementations of the procedure such as 'Parallel GRASP' [Pardalos1995].
