# Genetic Algorithm Essentials

### Description

Summary of the Computational Intelligence I module at the University of Oldenburg. The book Genetic Algorithm Essentials by Prof. Dr. Oliver Kramer is the basis of this mind map, so reading it helps to understand the mind map in detail.
Mind Map by Mark Otten, updated more than 1 year ago
 Created by Mark Otten over 6 years ago

## Resource summary

Genetic Algorithm Essentials
1. Related heuristics
1. swarm algorithms
1. fireworks algorithms
1. firefly algorithms
1. simulated annealing
2. Optimization
1. problems
1. Traveling Salesman problem
1. OneMax
2. Optima
1. difficulty
1. noise
1. too many local optima
2. found
1. local
1. 4 - Multimodality
1. avoid getting stuck
1. Restarts
1. different starting conditions
1. high chance of reaching new local optima
2. Global mutation rates
1. Fitness sharing

Annotations:

• share fitness with neighboring solutions or those that belong to one cluster
• clustering techniques like k-means or DBSCAN can define the shared regions
• evolution within a shared region is a random walk
1. hilly fitness landscape
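The fitness-sharing idea above can be sketched in a few lines of numpy. This is a minimal illustration, not the book's implementation; the function names and the triangular sharing kernel with radius `sigma_share` are my own assumptions. Each individual's fitness is divided by a niche count, so solutions on crowded peaks are penalized relative to isolated ones.

```python
import numpy as np

def sharing(d, sigma_share=1.0):
    # triangular sharing function: weight 1 at d = 0, linearly down to 0 at sigma_share
    return np.where(d < sigma_share, 1.0 - d / sigma_share, 0.0)

def shared_fitness(population, fitness, sigma_share=1.0):
    # pairwise Euclidean distances between all individuals
    diff = population[:, None, :] - population[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))
    # niche count: how crowded the region around each individual is (self counts as 1)
    niche_count = sharing(d, sigma_share).sum(axis=1)
    # individuals on crowded peaks get their fitness divided by a larger niche count
    return fitness / niche_count

pop = np.array([[0.0], [0.1], [5.0]])  # two crowded individuals, one isolated
fit = np.array([1.0, 1.0, 1.0])
print(shared_fitness(pop, fit))
```

With equal raw fitness, the isolated individual keeps its full value while the two crowded ones are discounted, which spreads the search across several local optima.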
2. Novelty search
1. avoid known solutions
1. bias search towards unknown solution areas
1. use outlier detection methods
2. Niching
1. clustering techniques
1. k-means
1. DBSCAN

Annotations:

• define core points via the density of solutions
• if more than minPts solutions lie within the epsilon radius, a point is a core point
• core points that have core points in their neighborhood belong to one cluster
• points within the radius of a core point that are not core points themselves are border points
• the rest is noise
1. hierarchical clustering
2. run separate GA
1. Tree Niching
1. Follow the worse solution for g generations
1. After g generations follow better branch
3. global
2. find min or max
3. 2 - Algorithm
1. crossover
1. 1-Point crossover
1. Arithmetic crossover
1. Dominant crossover
2. mutation
1. properties
1. Reachability

Annotations:

•  Each point in the solution space must be reachable from an arbitrary point in the solution space.
1. Unbiasedness

Annotations:

•  The mutation operator should not induce a drift of the search to a particular direction.
1. Scalability

Annotations:

•  The mutation rate should be adjustable.
2. examples
1. Bit flip
1. Random resetting
1. Gaussian mutation

Annotations:

•  x_ = x + sigma * np.random.standard_normal(10)
1. x' = x + σ · N (0, 1)
2. mutation rate sigma
1. σ' = σ · exp(τ · N(0, 1))
1. each chromosome gets an own σ
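The two update equations above (log-normal mutation of σ, then Gaussian mutation of x with the new σ) can be combined into one small sketch; the function name, τ default, and seeding are my assumptions for illustration.

```python
import numpy as np

def gaussian_mutation(x, sigma, tau=0.1, rng=None):
    # self-adaptation: first mutate the step size log-normally, sigma' = sigma * exp(tau * N(0, 1)),
    # then mutate the chromosome with the new step size: x' = x + sigma' * N(0, 1)
    rng = rng or np.random.default_rng()
    sigma_new = sigma * np.exp(tau * rng.standard_normal())
    x_new = x + sigma_new * rng.standard_normal(x.shape)
    return x_new, sigma_new

x = np.zeros(10)
x_new, sigma_new = gaussian_mutation(x, sigma=1.0, rng=np.random.default_rng(0))
print(x_new.shape, sigma_new)
```

Because σ is mutated multiplicatively, it can never become negative, and each chromosome can carry its own σ that evolves along with it.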
2. Mutation Rate Controls
1. Dynamic Control
1. time
1. generation counter
2. changes parameter during run
3. Rechenberg
1. success rate > 1/5 → σ' = σ · τ
1. success rate < 1/5 → σ' = σ / τ
1. success rate = 1/5 → keep σ constant
2. Constant mutation rate
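Rechenberg's 1/5th success rule above reduces to a three-way branch; this sketch assumes a factor τ > 1 and a measured success rate over some window of mutations (both parameters are illustrative).

```python
def rechenberg_rule(sigma, success_rate, tau=1.2):
    # 1/5th success rule: if more than one fifth of mutations succeed,
    # the step size is too small, so enlarge it; if fewer succeed, shrink it
    if success_rate > 0.2:
        return sigma * tau
    if success_rate < 0.2:
        return sigma / tau
    return sigma  # a success rate of exactly 1/5 keeps sigma constant

print(rechenberg_rule(1.0, 0.3))
print(rechenberg_rule(1.0, 0.1))
```

This is a dynamic control scheme: unlike a constant mutation rate, σ is adjusted during the run based on observed success.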
2. Genotype-Phenotype Mapping

Annotations:

• map the chromosome to a solution in order to evaluate fitness
• example: map a bit string to a pipeline of machine learning commands in sklearn
1. fitness
1. efficiency
1. find optimum
1. approximation in fitness evaluations
2. 6 - Multiple Objectives
1. two or more objectives at a time
1. Pareto
1. the optimal solution in single-objective optimization corresponds to the Pareto set in multi-objective optimization
1. non-dominated solutions are Pareto-optimal
1. Pareto-set
1. formal
1. f = w · f₁ + (1 − w) · f₂
1. w ∈ (0, 1)
2. Non-dominated sorting
1. non-dominated = rank 1
1. if remove rank i, remaining non-dominated solutions get rank i+1
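The ranking rule above (non-dominated solutions get rank 1; remove them and the newly non-dominated ones get rank 2, and so on) can be written directly. This is a naive illustrative sketch assuming minimization of all objectives, not the fast sorting procedure used inside NSGA-II.

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    # rank 1 = non-dominated; remove rank i, the remaining non-dominated get rank i + 1
    ranks = {}
    remaining = set(range(len(points)))
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

pts = [(1, 5), (2, 2), (5, 1), (4, 4)]
print(non_dominated_sort(pts))
```

Here (4, 4) is dominated by (2, 2) and ends up in the second front, while the other three are mutually non-dominated.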
2. NSGA-II
1. selects solutions with maximal crowding distance
1. seeks broad coverage of the Pareto front
1. crowding distance = sum of fitness differences between the left and right neighbors
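The crowding distance described above can be sketched as follows; the normalization by the objective's range and the infinite distance for boundary solutions follow the usual NSGA-II convention, which I am assuming here.

```python
def crowding_distance(front):
    # for each objective, sort the front and add the normalized gap between each
    # solution's left and right neighbors; boundary solutions get infinite distance
    # so the extremes of the front are always kept
    n = len(front)
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][k] - front[order[pos - 1]][k]) / (hi - lo)
    return dist

print(crowding_distance([(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]))
```

Selecting for maximal crowding distance thus favors solutions in sparsely populated regions of the front, which yields the broad coverage NSGA-II seeks.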
2. Rake
1. affords evolution of corner points
1. rake selection selects the ones that minimize the distance to closest rake
1. rake base spans between the leftmost and rightmost solutions
2. Hypervolume
1. greedy heuristic: (μ + 1)-GA that discards solution with least hypervolume contribution
1. reference point = badly dominated solution
1. exhaustive for n > 3
1. each solution dominates area (n = 2) or volume (n > 2)
1. S-metric selection
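For two objectives, the dominated hypervolume is just a sum of disjoint rectangles between the front and the reference point. The sketch below assumes minimization and a reference point dominated by every solution, as stated above; the function name is mine.

```python
def hypervolume_2d(front, ref):
    # sort by the first objective; on a Pareto front the second objective then
    # decreases, so each point adds one rectangle up to its predecessor's f2
    pts = sorted(front)
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))
```

S-metric selection in a (μ + 1)-GA would evaluate this quantity with each solution removed in turn and discard the one whose removal reduces it the least.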
2. conflicting: price and performance
1. multiple fitness functions
1. no single optimal solution
2. selection

Annotations:

• To allow convergence towards optimal solutions, the best offspring solutions have to be selected as parents of the new parental population.
• A surplus of offspring is generated; the best ones are selected to allow approximation of the optimum.
1. Elitist selection
1. plus selection
1. comma selection
2. Forgetting
1. Roulette wheel selection
1. Tournament
1. to overcome
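The two randomized selection schemes named above can be sketched in a few lines; both assume fitness maximization, and the non-negative-fitness requirement for roulette wheel selection is a standard assumption, not stated in the source.

```python
import random

def roulette_wheel(population, fitness, rng=random):
    # selection probability proportional to fitness (fitness must be non-negative)
    return rng.choices(population, weights=fitness, k=1)[0]

def tournament(population, fitness, size=2, rng=random):
    # draw `size` random contestants and return the fittest of them
    contestants = rng.sample(range(len(population)), size)
    return population[max(contestants, key=lambda i: fitness[i])]

pop = ["a", "b", "c"]
fit = [1.0, 1.0, 10.0]
print(roulette_wheel(pop, fit), tournament(pop, fit))
```

Tournament selection only compares fitness values, so it is insensitive to their scale; roulette wheel selection, in contrast, degenerates towards uniform selection when all fitness values are similar.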
2. Termination
1. after a fixed number of fitness function evaluations
1. after a fixed number of generations
1. after stagnation
2. Experiments
1. measure means
1. standard deviation
1. Best and worst fitness
2. 3 - Parameter
1. Meta-GA
1. inefficient, but effective
1. GA tunes GA
1. the meta-GA runs an inner GA to optimize its parameters, e.g. τ
2. tuning
3. 5 - Constraints
1. Penalty functions

Annotations:

• penalty functions deteriorate fitness for infeasible solutions
1. walk into infeasible
1. deteriorate fitness

Annotations:

• deteriorate = to make worse (German: verschlechtern)
1. f'(x) = f(x) + α · G(x)
1. fitness function evaluations are not always possible
1. at the end
1. depending on
1. generation number
1. number of infeasible solutions
1. increase/decrease at border of feasibility
2. Solution space
1. formal
1. g(x) < 0 is constraint function
1. G(x) = max (0, g(x))
1. f(x) is fitness function
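Combining the formulas above, a penalty function can be sketched directly: with constraint g(x) ≤ 0 and G(x) = max(0, g(x)), infeasible solutions have their fitness deteriorated by α · G(x). The toy problem and the constant α below are illustrative assumptions.

```python
def penalized_fitness(x, f, g, alpha=10.0):
    # f'(x) = f(x) + alpha * G(x) with G(x) = max(0, g(x)) (minimization):
    # feasible solutions (g(x) <= 0) are untouched, infeasible ones are penalized
    return f(x) + alpha * max(0.0, g(x))

# toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = lambda x: x * x
g = lambda x: 1.0 - x
print(penalized_fitness(2.0, f, g))  # feasible, penalty 0
print(penalized_fitness(0.0, f, g))  # infeasible, penalized
```

Unlike the death penalty, this lets the GA walk through the infeasible region while still steering it back towards feasibility.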
2. Death penalty
1. discarding and regenerating infeasible solutions may not terminate in such cases
1. may suffer from premature stagnation
2. Repair function
1. depending on representation and problem structure
1. examples
1. linear projection in continuous solution spaces
1. TSP problem: repair tour
2. for consistence of GA search
3. Decoders
1. map constrained solution space to
1. an unconstrained one
1. solution space with less difficult conditions
2. mapping must capture characteristics of solution space
1. neighboring solutions in original space must be neighboring in decoder space
2. Premature Stagnation
1. How to prevent?
1. adaptation of mutation ellipsoid to increase success probabilities
1. minimum mutation rate, but this also prevents convergence
2. at boundary of infeasible solution space success probabilities are often low
1. good solutions can be further away from unconstrained optimum
2. GA variants
1. genetic algorithms, 1960s (John Holland)
1. evolution strategies (Ingo Rechenberg)
1. evolutionary programming
1. genetic programming
2. 7 - Theory
1. No Free Lunch
1. there is no overall superior optimization algorithm that is able to solve all kinds of optimization problems
1. an algorithm that performs well on one problem will probably fail on many other problems
1. Good News
1. it is possible to design algorithms working well on broad problem class
2. Runtime Analysis
1. runtime analysis for the (1+1)-GA on OneMax
1. O(N log N)
1. k/N · (1 − 1/N)^(N−1)
1. flip one of the k zero bits
1. do not flip the other N − 1 bits
2. proof
3. fitness-based partitions
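The algorithm that the O(N log N) bound refers to can be sketched concretely; this is a minimal (1+1)-GA on OneMax with the standard 1/N bit-flip rate, and the seed and evaluation budget are my own choices for reproducibility.

```python
import random

def one_plus_one_ga(n=30, seed=0, max_evals=100000):
    # (1+1)-GA on OneMax: flip each bit independently with probability 1/n,
    # keep the offspring if it is at least as good (elitist plus selection)
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while sum(x) < n and evals < max_evals:
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        if sum(y) >= sum(x):  # plus selection: never forget the best solution
            x = y
        evals += 1
    return evals

print(one_plus_one_ga())
```

With k zero bits left, the probability of an improving step is at least k/N · (1 − 1/N)^(N−1), and summing the expected waiting times over k = N, ..., 1 yields the O(N log N) bound.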
4. Markov Chains
1. treat the population in each generation as a state
1. modelled with a matrix of transition probabilities
1. the next state depends only on the current state, not on earlier predecessors
1. convergence to global optimum with elitism
1. a GA optimizing a function over an arbitrary finite space converges to optimum with probability one
2. Progress Rates
1. local convergence measures evaluating amelioration power of GAs
1. population-based GA with plus selection always performs better than with comma selection
2. Schema Theorem
1. proportion of individuals representing schema at subsequent time-steps is product of probabilities of being selected and not being disrupted
2. Building Block Hypothesis
1. while schema theorem only considers disruptive effects, BBH considers constructive effects
1. O(N² log N) with crossover
1. O(N^k) without crossover
2. 8 - Machine Learning
1. Covariance Matrix
1. lets the mutation operator adapt to solution space characteristics
1. covariance matrix estimated during an optimization run of a GA on Sphere function
2. Fitness meta-modeling
1. use a meta-model f̂ to predict the fitness of x
1. treat solution x and fitness f(x) as a pattern-label pair
1. exploration strategy: try new solutions for optimization or for improvement of meta-model
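A fitness meta-model in the sense above can be sketched with any regressor over the (x, f(x)) pairs; here I assume a simple k-nearest-neighbor average over numpy arrays as the model (the function name, k, and the Sphere test function are illustrative choices).

```python
import numpy as np

def knn_fitness_model(X, y, x_query, k=3):
    # predict the fitness of an unseen solution as the mean fitness of its
    # k nearest already-evaluated solutions (pattern-label pairs (x_i, f(x_i)))
    d = np.linalg.norm(X - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    return y[nearest].mean()

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 2))  # solutions evaluated so far
y = (X ** 2).sum(axis=1)                   # their true fitness (Sphere function)
pred = knn_fitness_model(X, y, np.array([0.0, 0.0]))
print(pred)
```

Such cheap predictions let the GA pre-screen candidates and spend true fitness evaluations only on the most promising ones, or on solutions that improve the meta-model itself.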
2. data space is solution space
1. Supervised learning
1. learning with labels
1. training set T = {(x₁, y₁), (x₂, y₂), ..., (x_N, y_N)}
1. classification
1. y = 0 or 1, feasible or infeasible
2. regression
1. y is numerical (e.g. 0.634, 12.39, fitness values)
3. Constraint meta-modeling
1. same works for constraints
1. treat constraint satisfaction prediction as a classification problem
2. Dimensionality reduction
1. Dimensionality reduction for visualization
1. mapping of high-dimensional population to low dimensional space
2. map high-dimensional patterns to abstract low-dimensional spaces
1. principal component analysis
1. PCA detects axes in the data that exhibit the highest variances
1. projections of patterns onto these axes
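The PCA steps above (find the highest-variance axes, then project the patterns onto them) can be sketched via an SVD of the centered population matrix; the function name and the random 10-dimensional population are my own illustrative choices.

```python
import numpy as np

def pca_project(population, n_components=2):
    # center the data, find the axes of highest variance via SVD,
    # and project the patterns onto the leading axes
    centered = population - population.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T

rng = np.random.default_rng(1)
pop = rng.normal(size=(50, 10))  # population in a 10-dimensional solution space
low = pca_project(pop)
print(low.shape)
```

The 2-D projection can then be scatter-plotted per generation to visualize how the population contracts towards an optimum during the run.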
