# Clever Algorithms: Nature-Inspired Programming Recipes

By Jason Brownlee PhD


# Artificial Immune Recognition System

Artificial Immune Recognition System, AIRS.

## Taxonomy

The Artificial Immune Recognition System belongs to the field of Artificial Immune Systems, and more broadly to the field of Computational Intelligence. It was extended early in its development to the now-canonical version, the Artificial Immune Recognition System 2 (AIRS2), and provides the basis for extensions such as the Parallel Artificial Immune Recognition System [Watkins2004]. It is related to other Artificial Immune System algorithms such as the Dendritic Cell Algorithm, the Clonal Selection Algorithm, and the Negative Selection Algorithm.

## Inspiration

The Artificial Immune Recognition System is inspired by the Clonal Selection theory of acquired immunity. The clonal selection theory, credited to Burnet, was proposed to account for the behavior and capabilities of antibodies in the acquired immune system [Burnet1957] [Burnet1959]. Inspired itself by the principles of the Darwinian natural selection theory of evolution, the theory proposes that antigens select for lymphocytes (both B and T-cells). When a lymphocyte is selected and binds to an antigenic determinant, the cell proliferates, making many thousands more copies of itself, and differentiates into different cell types (plasma and memory cells). Plasma cells have a short lifespan and produce vast quantities of antibody molecules, whereas memory cells live for an extended period in the host, anticipating future recognition of the same determinant. The important feature of the theory is that when a cell is selected and proliferates, it is subjected to small copying errors (changes to the genome called somatic hypermutation) that change the shape of the expressed receptors. These changes affect the subsequent determinant recognition capabilities of both the antibodies bound to the lymphocyte's cell surface and the antibodies that plasma cells produce.

## Metaphor

The theory suggests that, starting with an initial repertoire of general immune cells, the system is able to change itself (the compositions and densities of cells and their receptors) in response to experience with the environment. Through a blind process of selection and accumulated variation on the large scale of many billions of cells, the acquired immune system is capable of acquiring the necessary information to protect the host organism from the specific pathogenic dangers of the environment. It also suggests that the system must anticipate (guess at) the pathogens to which it will be exposed, and requires exposure to pathogens that may harm the host before it can acquire the necessary information to provide a defense.

## Strategy

The information processing objective of the technique is to prepare a set of real-valued vectors to classify patterns. The Artificial Immune Recognition System maintains a pool of memory cells that are prepared by exposing the system to a single iteration of the training data. Candidate memory cells are prepared when the memory cells are insufficiently stimulated for a given input pattern. A process of cloning and mutation of cells occurs for the most stimulated memory cell. The clones compete with each other for entry into the memory pool based on stimulation and on the amount of resources each cell is using. This concept of resources comes from prior work on Artificial Immune Networks, where a single cell (an Artificial Recognition Ball or ARB) represents a set of similar cells. Here, a cell's resources are a function of its stimulation to a given input pattern and the number of clones it may create.
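
The resource-limited competition described above can be sketched in Ruby. This is an illustrative fragment rather than part of the chapter's listing: the field names (`:stimulation`, `:resources`) mirror the later code, while the parameter values in the usage line are assumptions for demonstration.

```ruby
# Sketch of resource-limited competition among ARBs: each cell claims
# resources in proportion to its stimulation, and the least-stimulated
# cells are discarded until the total fits within the budget.
def compete_for_resources(pool, clone_rate, max_resources)
  pool.each { |cell| cell[:resources] = cell[:stimulation] * clone_rate }
  pool.sort_by! { |cell| -cell[:resources] } # strongest first
  total = pool.inject(0.0) { |sum, cell| sum + cell[:resources] }
  while total > max_resources
    weakest = pool.pop # remove the least-stimulated cell
    total -= weakest[:resources]
  end
  pool
end

# Three cells claiming 9, 5 and 2 resources against a budget of 15:
# the weakest cell is discarded, leaving two survivors.
pool = [{:stimulation=>0.9}, {:stimulation=>0.5}, {:stimulation=>0.2}]
puts compete_for_resources(pool, 10.0, 15.0).size # => 2
```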

## Procedure

Algorithm (below) provides high-level pseudocode for preparing memory cell vectors using the Artificial Immune Recognition System, specifically the canonical AIRS2. An affinity (distance) measure between input patterns must be defined. For real-valued vectors, this is commonly the Euclidean distance:

$dist(x,c) = \sqrt{\sum_{i=1}^{n} (x_i - c_i)^2}$

where $n$ is the number of attributes, $x$ is the input vector and $c$ is a given cell vector. The variation of cells during cloning (somatic hypermutation) is inversely proportional to the stimulation of a given cell to an input pattern.
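
As a concrete sketch of these two quantities, the fragment below computes affinity as Euclidean distance and stimulation as its complement after normalizing by the diagonal of an assumed $[0,1]^n$ domain. The helper names are illustrative, not taken from the listing below.

```ruby
# Affinity between an input pattern x and a cell c, taken here as
# Euclidean distance; stimulation is its complement after normalizing
# by the largest possible distance in the unit hypercube [0,1]^n
# (an assumption about the domain).
def affinity(x, c)
  Math.sqrt(x.each_index.inject(0.0) { |sum, i| sum + (x[i] - c[i])**2 })
end

def stimulation(x, c)
  max_dist = Math.sqrt(x.size) # diagonal of the unit hypercube
  1.0 - (affinity(x, c) / max_dist)
end

# An identical cell is maximally stimulating; opposite corners minimally so.
puts stimulation([0.2, 0.3], [0.2, 0.3]) # => 1.0
puts stimulation([0.0, 0.0], [1.0, 1.0]) # => 0.0
```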

Input: InputPatterns, $clone_{rate}$, $mutate_{rate}$, $stim_{thresh}$, $resources_{max}$, $affinity_{thresh}$
Output: $Cells_{memory}$
$Cells_{memory}$ $\leftarrow$ InitializeMemoryPool(InputPatterns)
For ($InputPattern_i$ $\in$ InputPatterns)
    Stimulate($Cells_{memory}$, InputPatterns)
    $Cell_{best}$ $\leftarrow$ GetMostStimulated($InputPattern_i$, $Cells_{memory}$)
    If ($Cell_{best}^{class}$ $\neq$ $InputPattern_{i}^{class}$)
        $Cells_{memory}$ $\leftarrow$ CreateNewMemoryCell($InputPattern_i$)
    Else
        $Clones_{num}$ $\leftarrow$ $Cell_{best}^{stim}$ $\times$ $clone_{rate}$ $\times$ $mutate_{rate}$
        $Cells_{clones}$ $\leftarrow$ $Cell_{best}$
        For ($i$ To $Clones_{num}$)
            $Cells_{clones}$ $\leftarrow$ CloneAndMutate($Cell_{best}$)
        End
        While (AverageStimulation($Cells_{clones}$) $\leq$ $stim_{thresh}$)
            For ($Cell_{i}$ $\in$ $Cells_{clones}$)
                $Cells_{clones}$ $\leftarrow$ CloneAndMutate($Cell_{i}$)
            End
            Stimulate($Cells_{clones}$, InputPatterns)
            ReducePoolToMaximumResources($Cells_{clones}$, $resources_{max}$)
        End
        $Cell_{c}$ $\leftarrow$ GetMostStimulated($InputPattern_i$, $Cells_{clones}$)
        If ($Cell_{c}^{stim}$ > $Cell_{best}^{stim}$)
            $Cells_{memory}$ $\leftarrow$ $Cell_{c}$
            If (Affinity($Cell_{c}$, $Cell_{best}$) $\leq$ $affinity_{thresh}$)
                DeleteCell($Cell_{best}$, $Cells_{memory}$)
            End
        End
    End
End
Return ($Cells_{memory}$)
Pseudocode for AIRS2.
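
A small worked example of the clone-count line in the pseudocode, using illustrative parameter values: the number of clones grows with the best cell's stimulation, while the mutation range used by CloneAndMutate shrinks with it.

```ruby
# Clones_num = stim * clone_rate * mutate_rate, per the pseudocode;
# the mutation range is the complement of stimulation, so strongly
# stimulated cells are perturbed less (somatic hypermutation is
# inversely proportional to stimulation).
clone_rate, mutate_rate = 10, 2.0
stim = 0.8
num_clones = (stim * clone_rate * mutate_rate).round
mutation_range = 1.0 - stim
puts num_clones     # => 16
puts mutation_range # roughly 0.2
```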

## Heuristics

• The AIRS was designed as a supervised algorithm for classification problem domains.
• The AIRS is non-parametric, meaning that it does not rely on assumptions about the structure of the function that it is approximating.
• Real-values in input vectors should be normalized such that $x \in [0,1)$.
• Euclidean distance is commonly used to measure the distance between real-valued vectors (affinity calculation), although other distance measures may be used (such as dot product), and data specific distance measures may be required for non-scalar attributes.
• Cells may be initialized with small random values or more commonly with values from instances in the training set.
• A cell's affinity is typically minimizing, whereas a cell's stimulation is maximizing and typically $\in [0,1]$.
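
The normalization heuristic can be implemented with a simple min-max rescaling; a sketch, assuming the per-attribute bounds are known or estimated from the training data (the helper name is illustrative):

```ruby
# Min-max rescaling of raw attribute values into the unit interval,
# as the normalization heuristic suggests. The bounds arrays are
# assumed to hold the per-attribute minimum and maximum.
def normalize_vector(vector, mins, maxs)
  vector.each_index.map do |i|
    (vector[i] - mins[i]) / (maxs[i] - mins[i]).to_f
  end
end

puts normalize_vector([5.0, 50.0], [0.0, 0.0], [10.0, 100.0]).inspect # => [0.5, 0.5]
```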

## Code Listing

Listing (below) provides an example of the Artificial Immune Recognition System implemented in the Ruby Programming Language. The problem is a contrived classification problem in a 2-dimensional domain $x\in[0,1], y\in[0,1]$ with two classes: 'A' ($x\in[0,0.4999999], y\in[0,0.4999999]$) and 'B' ($x\in[0.5,1], y\in[0.5,1]$).

The algorithm is an implementation of the AIRS2 algorithm [Watkins2002b]. An initial pool of memory cells is created, one cell for each class. Euclidean distance divided by the maximum possible distance in the domain is taken as the affinity and stimulation is taken as $1.0-affinity$. The meta-dynamics for memory cells (competition for input patterns) is not performed and may be added into the implementation as an extension.

def random_vector(minmax)
  return Array.new(minmax.size) do |i|
    minmax[i][0] + ((minmax[i][1] - minmax[i][0]) * rand())
  end
end

def generate_random_pattern(domain)
  class_label = domain.keys[rand(domain.keys.size)]
  pattern = {:label=>class_label}
  pattern[:vector] = random_vector(domain[class_label])
  return pattern
end

def create_cell(vector, class_label)
  return {:label=>class_label, :vector=>vector}
end

def initialize_cells(domain)
  mem_cells = []
  domain.keys.each do |key|
    mem_cells << create_cell(random_vector([[0,1],[0,1]]), key)
  end
  return mem_cells
end

def distance(c1, c2)
  sum = 0.0
  c1.each_index {|i| sum += (c1[i]-c2[i])**2.0}
  return Math.sqrt(sum)
end

def stimulate(cells, pattern)
  max_dist = distance([0.0,0.0], [1.0,1.0])
  cells.each do |cell|
    cell[:affinity] = distance(cell[:vector], pattern[:vector]) / max_dist
    cell[:stimulation] = 1.0 - cell[:affinity]
  end
end

def get_most_stimulated_cell(mem_cells, pattern)
  stimulate(mem_cells, pattern)
  return mem_cells.sort{|x,y| y[:stimulation] <=> x[:stimulation]}.first
end

def mutate_cell(cell, best_match)
  range = 1.0 - best_match[:stimulation]
  cell[:vector].each_with_index do |v,i|
    min = [(v-(range/2.0)), 0.0].max
    max = [(v+(range/2.0)), 1.0].min
    cell[:vector][i] = min + (rand() * (max-min))
  end
  return cell
end

def create_arb_pool(pattern, best_match, clone_rate, mutate_rate)
  pool = []
  pool << create_cell(Array.new(best_match[:vector]), best_match[:label])
  num_clones = (best_match[:stimulation] * clone_rate * mutate_rate).round
  num_clones.times do
    # copy the vector so mutation does not modify the best matching cell
    cell = create_cell(Array.new(best_match[:vector]), best_match[:label])
    pool << mutate_cell(cell, best_match)
  end
  return pool
end

def competition_for_resources(pool, clone_rate, max_res)
  pool.each {|cell| cell[:resources] = cell[:stimulation] * clone_rate}
  # sort descending by resources so the least-stimulated cells are removed first
  pool.sort!{|x,y| y[:resources] <=> x[:resources]}
  total_resources = pool.inject(0.0){|sum,cell| sum + cell[:resources]}
  while total_resources > max_res
    cell = pool.delete_at(pool.size-1)
    total_resources -= cell[:resources]
  end
end

def refine_arb_pool(pool, pattern, stim_thresh, clone_rate, max_res)
  mean_stim, candidate = 0.0, nil
  begin
    stimulate(pool, pattern)
    candidate = pool.sort{|x,y| y[:stimulation] <=> x[:stimulation]}.first
    mean_stim = pool.inject(0.0){|s,c| s + c[:stimulation]} / pool.size
    if mean_stim < stim_thresh
      competition_for_resources(pool, clone_rate, max_res)
      pool.size.times do |i|
        # copy the vector so mutation does not modify the parent cell
        cell = create_cell(Array.new(pool[i][:vector]), pool[i][:label])
        mutate_cell(cell, pool[i])
        pool << cell
      end
    end
  end until mean_stim >= stim_thresh
  return candidate
end

def add_candidate_to_memory_pool(candidate, best_match, mem_cells)
  if candidate[:stimulation] > best_match[:stimulation]
    mem_cells << candidate
  end
end

def classify_pattern(mem_cells, pattern)
  stimulate(mem_cells, pattern)
  return mem_cells.sort{|x,y| y[:stimulation] <=> x[:stimulation]}.first
end

def train_system(mem_cells, domain, num_patterns, clone_rate, mutate_rate, stim_thresh, max_res)
  num_patterns.times do |i|
    pattern = generate_random_pattern(domain)
    best_match = get_most_stimulated_cell(mem_cells, pattern)
    if best_match[:label] != pattern[:label]
      mem_cells << create_cell(Array.new(pattern[:vector]), pattern[:label])
    elsif best_match[:stimulation] < 1.0
      pool = create_arb_pool(pattern, best_match, clone_rate, mutate_rate)
      cand = refine_arb_pool(pool, pattern, stim_thresh, clone_rate, max_res)
      add_candidate_to_memory_pool(cand, best_match, mem_cells)
    end
    puts " > iter=#{i+1}, mem_cells=#{mem_cells.size}"
  end
end

def test_system(mem_cells, domain, num_trials=50)
  correct = 0
  num_trials.times do
    pattern = generate_random_pattern(domain)
    best = classify_pattern(mem_cells, pattern)
    correct += 1 if best[:label] == pattern[:label]
  end
  puts "Finished test with a score of #{correct}/#{num_trials}"
  return correct
end

def execute(domain, num_patterns, clone_rate, mutate_rate, stim_thresh, max_res)
  mem_cells = initialize_cells(domain)
  train_system(mem_cells, domain, num_patterns, clone_rate, mutate_rate, stim_thresh, max_res)
  test_system(mem_cells, domain)
  return mem_cells
end

if __FILE__ == $0
  # problem configuration
  domain = {"A"=>[[0,0.4999999],[0,0.4999999]],"B"=>[[0.5,1],[0.5,1]]}
  num_patterns = 50
  # algorithm configuration
  clone_rate = 10
  mutate_rate = 2.0
  stim_thresh = 0.9
  max_res = 150
  # execute the algorithm
  execute(domain, num_patterns, clone_rate, mutate_rate, stim_thresh, max_res)
end

AIRS in Ruby

## References

### Primary Sources

The Artificial Immune Recognition System was proposed in the Masters work by Watkins [Watkins2001], and later published [Watkins2002a]. Early works included the application of the AIRS by Watkins and Boggess to a suite of benchmark classification problems [Watkins2002], and a similar study by Goodman and Boggess comparing to a conceptually similar approach called Learning Vector Quantization [Goodman2002].

Marwah and Boggess investigated the algorithm, seeking issues that affect its performance [Marwah2002]. They compared variations of the algorithm with modified resource allocation schemes, tie-handling within the ARB pool, and ARB pool organization. Watkins and Timmis proposed a new version of the algorithm called AIRS2, which became the replacement for AIRS1 [Watkins2002b]. The updates reduced the complexity of the approach while maintaining the accuracy of the results. An investigation by Goodman et al. into the so-called 'source of power' in AIRS indicated that the memory cell maintenance procedures may play an important role [Goodman2003]. Watkins et al. provide a detailed review of the technique and its application [Watkins2004a].

## Bibliography

[Burnet1957] F. M. Burnet, "A modification of Jerne's theory of antibody production using the concept of clonal selection", Australian Journal of Science, 1957.

[Burnet1959] F. M. Burnet, "The clonal selection theory of acquired immunity", Vanderbilt University Press, 1959.

[Goodman2002] D. E. Goodman Jr., L. Boggess, and A. Watkins, "Artificial Immune System Classification of Multiple-Class Problems", in Fuzzy Logic, Evolutionary Programming, Complex Systems and Artificial Life, 2002.

[Goodman2003] D. E. Goodman Jr., L. Boggess, and A. Watkins, "An Investigation into the Source of Power for AIRS, an Artificial Immune Classification System", in Proceedings of the International Joint Conference on Neural Networks (IJCNN'03), 2003.

[Marwah2002] G. Marwah and L. Boggess, "Artificial Immune Systems for classification: Some issues", in First International Conference on Artificial Immune Systems, 2002.

[Watkins2001] A. B. Watkins, "AIRS: A Resource Limited Artificial Immune Classifier", [Masters Thesis] Mississippi State University, 2001.

[Watkins2002] A. Watkins and L. Boggess, "A New Classifier Based on Resource Limited Artificial Immune Systems", in Part of the 2002 IEEE World Congress on Computational Intelligence, 2002.

[Watkins2002a] A. B. Watkins and L. C. Boggess, "A Resource Limited Artificial Immune Classifier", in Part of the 2002 IEEE World Congress on Computational Intelligence held in Honolulu, 2002.

[Watkins2002b] A. Watkins and J. Timmis, "Artificial Immune Recognition System (AIRS): Revisions and Refinements", in 1st International Conference on Artificial Immune Systems (ICARIS2002), 2002.

[Watkins2004] A. Watkins and J. Timmis, "Exploiting Parallelism Inherent in AIRS, an Artificial Immune Classifier", in Lecture Notes in Computer Science (LNCS), 2004.

[Watkins2004a] A. Watkins, J. Timmis, and L. Boggess, "Artificial Immune Recognition System (AIRS): An Immune-Inspired Supervised Learning Algorithm", Genetic Programming and Evolvable Machines, 2004.
