
Recently, the genetic algorithm (GA), proposed by J. H. Holland in the 1970s, has become one of the most popular optimization methods [36]. GA offers robust solution quality, and although it needs no additional domain knowledge to search the solution space, applying appropriate prior knowledge leads to better performance. The main differences between GA and traditional numerical methods are: 1) GA adopts a coding strategy that transforms each candidate solution into an "individual chromosome" consisting of a group of parameters; 2) with a population of chromosomes and specific operators that exchange information between chromosomes during the search, GA can efficiently explore the search space with a high probability of finding the global optimum. Figure 2.3 illustrates the search behaviors of GA and traditional numerical methods. The quality of the solutions found by traditional numerical methods depends heavily on the given initial value, so they easily fall into local optima.

Figure 2.3. Illustrations of the search behaviors of the genetic algorithm and a traditional numerical method; (a) traditional numerical method; (b) genetic algorithm. (The figure marks the search space, a local optimum, and the global optimum.)

GA consists of three basic operators: 1) Selection: applies pressure upon the population in a manner similar to natural selection in biological systems; 2) Crossover: allows solutions to exchange information in a way similar to that used by natural organisms undergoing sexual reproduction; 3) Mutation: randomly changes (flips) the value of a single parameter (bit) within an individual chromosome. Figure 2.4 shows the flowchart of GA. In this flowchart, the lighter the color of a gene, the better the value it contains. The following subsections briefly introduce the major issues in GA: encoding scheme and fitness function, population initialization, selection, crossover, mutation, and termination condition.

2.3.1 Encoding Scheme and Fitness Function

The first stage of building a genetic algorithm is to decide on a genetic representation of a candidate solution to the original problem. This involves defining and arranging each parameter within the individual chromosome, and defining the mapping from individual chromosomes to the corresponding candidate solutions of the problem being solved.

After deciding on the representation of chromosomes, the next step is to design an appropriate fitness function. Fitness functions (or objective functions) quantify each candidate solution mapped from a chromosome, and they may be either maximized or minimized. Because the selection operator relies heavily on the fitness values, the performance of a genetic algorithm usually depends strongly on the suitability of the adopted fitness function.

Figure 2.4. The flowchart of the genetic algorithm (fitness function design and encoding scheme, population initialization, then selection, crossover with probability Pc, and mutation with probability Pm, repeated over multiple generations to yield the evolutionary result). The lighter the color of a gene, the better the value it contains.

The following is a simple example of encoding scheme and fitness function design. Suppose we want to maximize the function f(x):

f(x) = x^2, for integer x and 0 ≤ x ≤ 4095. (2.3)

We can use f(x) directly as the fitness function to be maximized and adopt a binary representation to encode the value of x, so that "110101100100" represents x = 3428 while "010100001100" represents x = 1292.
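The encoding and fitness evaluation above can be sketched in a few lines (a hypothetical Python illustration; the function names are our own):

```python
# Binary encoding for maximizing f(x) = x^2 over integers 0 <= x <= 4095.
# A 12-bit string covers exactly this range, since 2^12 - 1 = 4095.

def decode(chromosome: str) -> int:
    """Map a 12-bit binary string back to the integer x it encodes."""
    return int(chromosome, 2)

def fitness(chromosome: str) -> int:
    """Fitness function f(x) = x^2 evaluated on the decoded parameter."""
    x = decode(chromosome)
    return x * x

print(decode("110101100100"))  # 3428, as in the text
print(decode("010100001100"))  # 1292
```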

2.3.2 Population Initialization

One of the characteristics of genetic algorithms is that they search the solution space in parallel with a set of candidate solutions. This set of candidate solutions is called a "population". To search the solution space globally, the chromosomes of the population are usually initialized at random, so that they are scattered uniformly over the solution space. However, if the problem being solved imposes constraints on solutions, how to guarantee that all initial chromosomes are feasible is an important issue to consider.

2.3.3 Selection

Selection applies pressure upon the population in a manner similar to natural selection in biological systems. Poorer-performing individuals are weeded out, while better (fitter) individuals have a greater chance of propagating the information they contain to the next generation. Typical selection operators can be classified into two categories: parent selection and survivor selection. Both distinguish among individuals based on their quality; however, parent selection allows the fitter individuals to become parents of the next generation, while survivor selection is invoked after the offspring of the selected parents have been created and decides which individuals will exist in the next generation. Thanks to the selection operator, GA can guarantee that, over iterated generations, the average quality of the entire population improves with high probability. Below we introduce the most commonly used methods for parent selection, roulette wheel selection and binary tournament selection, and for survivor selection, ranking selection.

Roulette Wheel Selection

With this approach, the probability of selecting an individual is the proportion of its fitness to the sum of the fitness values of the entire population. Given the fitness value f_i of the i-th individual and a population of size Npop, the probability of the i-th individual being selected is:

p_i = f_i / Σ_{j=1}^{Npop} f_j. (2.4)

Suppose there are four individuals in the population; their fitness values and the corresponding selection probabilities are shown in Figure 2.5. During selection, first randomly generate a real number in [0, 1]. If the number falls in [0, 0.1], then child 1, C1, is selected; if it falls in (0.1, 0.3], then child 2, C2, is selected. Repeat these steps until the number of individuals in the mating pool equals the size of the population in the previous generation.

Individuals             C1    C2    C3    C4
Fitness                 10    20    30    40
Selection probability   0.1   0.2   0.3   0.4

Figure 2.5. An example of roulette wheel selection; (a) fitness values and selection probabilities; (b) the corresponding wheel.
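Equation (2.4) can be implemented as the following sketch (hypothetical Python; the injectable `rng` argument is our own addition to make the sampling step explicit):

```python
import random

def roulette_wheel_select(fitnesses, rng=random.random):
    """Return the index of one individual, chosen with probability
    p_i = f_i / sum_j f_j as in Eq. (2.4)."""
    total = sum(fitnesses)
    r = rng() * total          # a uniform point on the wheel
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f
        if r <= cumulative:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off

# Figure 2.5 example: fitnesses 10, 20, 30, 40 give probabilities 0.1..0.4.
fitnesses = [10, 20, 30, 40]
mating_pool = [roulette_wheel_select(fitnesses) for _ in range(len(fitnesses))]
print(mating_pool)  # four indices drawn in proportion to fitness
```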

Binary Tournament Selection

The main idea of binary tournament selection is that, during parent selection, we repeatedly pick two individuals at random and place the fitter one into the mating pool, until the number of individuals in the mating pool equals the size of the population in the previous generation. Compared with roulette wheel selection, binary tournament selection is better at distinguishing the fitter of two individuals in the later stages of evolutionary computing. This is because in the later stages the fitness values of all individuals in the population have converged; with such close fitness values, roulette wheel selection assigns nearly identical selection probabilities and thus has difficulty distinguishing the better individual. Binary tournament selection, by directly judging which of the two has the better fitness, can still select the better one successfully even after the population's fitness values have converged.
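The procedure can be sketched as follows (a hypothetical Python illustration; the function name is our own):

```python
import random

def binary_tournament(population, fitnesses):
    """Fill a mating pool by repeatedly drawing two random individuals and
    keeping the fitter one; works even after fitness values converge."""
    n = len(population)
    pool = []
    for _ in range(n):  # stop when the pool matches the population size
        i, j = random.randrange(n), random.randrange(n)
        winner = i if fitnesses[i] >= fitnesses[j] else j
        pool.append(population[winner])
    return pool
```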

Ranking Selection

Ranking selection is the simplest approach to survivor selection. It replaces the worst Ps × Npop individuals with the best Ps × Npop individuals to form a new population, where Ps is a selection probability and Npop is the population size.

Although simple, ranking selection has the advantage of efficiently speeding up the convergence of the population and substantially improving its average quality. [38] adopted ranking selection in the selection operator of their GA.
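The replacement rule can be sketched as follows (a hypothetical Python illustration, assuming fitness is maximized):

```python
def ranking_selection(population, fitnesses, ps):
    """Survivor selection: replace the worst ps*Npop individuals with
    copies of the best ps*Npop individuals (maximization assumed)."""
    n = len(population)
    k = int(ps * n)                                   # how many to replace
    order = sorted(range(n), key=lambda i: fitnesses[i], reverse=True)
    best, worst = order[:k], order[-k:]
    survivors = list(population)
    for w, b in zip(worst, best):
        survivors[w] = population[b]                  # overwrite a loser
    return survivors

# With ps = 0.25 and Npop = 4, the single worst individual is replaced:
print(ranking_selection(["a", "b", "c", "d"], [10, 20, 30, 40], 0.25))
# ['d', 'b', 'c', 'd']
```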

2.3.4 Crossover

A major advantage of the genetic algorithm is that, with a population of chromosomes (candidate solutions) and the crossover operator, the individuals in the population can efficiently search the solution space concurrently. As the name indicates, crossover (or recombination) allows two parent individuals to exchange their parameters, or information, in a way similar to that used by natural organisms undergoing sexual reproduction. With a probabilistic parameter Pc controlling whether a selected pair of individuals undergoes crossover, we can mate two individuals with different but desirable features to produce offspring that combine both features. In cooperation with the selection operator, once better (fitter) offspring are generated, they have a higher probability of surviving selection, so the average fitness of the population improves. The most commonly used variations of the crossover operator are one-point crossover, multi-point crossover, and uniform crossover.

One-point Crossover

To perform one-point crossover, randomly generate a cut point, then exchange all parameters of the two parents behind the position of the cut point. Figure 2.6(a) shows the behavior of one-point crossover.

Multi-point Crossover

First, randomly generate multiple cut points. After the positions of the cut points are determined, randomly decide, for each pair of successive cut points, whether the parents' parameters between them are exchanged. Figure 2.6(b) shows the behavior of multi-point crossover.

Uniform Crossover

Before performing uniform crossover, randomly generate a binary bit string whose length equals the number of parameters in the individual chromosome. This bit string is used as a mask: if a bit is one, the corresponding parameter is exchanged, while a zero bit means the corresponding parameter is not exchanged. Figure 2.6(c) shows the behavior of uniform crossover.

Figure 2.6. Illustrations of (a) one-point crossover, (b) multi-point crossover, and (c) uniform crossover.
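One-point and uniform crossover, as described above, can be sketched like this (hypothetical Python operating on lists of parameters):

```python
import random

def one_point_crossover(p1, p2):
    """Swap everything behind a single random cut point (Fig. 2.6(a))."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def uniform_crossover(p1, p2):
    """A random binary mask decides, gene by gene, whether to swap
    (Fig. 2.6(c)); a mask bit of 1 means "exchange this parameter"."""
    mask = [random.randint(0, 1) for _ in p1]
    c1 = [b if m else a for a, b, m in zip(p1, p2, mask)]
    c2 = [a if m else b for a, b, m in zip(p1, p2, mask)]
    return c1, c2
```

In both cases each gene position of the two children still holds exactly the two parental values, just possibly swapped, so no information is created or lost by crossover itself.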

2.3.5 Mutation

Mutation operators randomly change (flip) the value of a single parameter (bit) within an individual chromosome. During mutation, each parameter or bit in an individual chromosome is changed or left unchanged according to a probabilistic parameter Pm. Because, as in biology, mutation usually brings harmful effects to individuals, Pm is often set to a small value. Nevertheless, the mutation operator remains significantly important during evolutionary computing. Through the selection and crossover operators, the average quality of the population improves over iterated generations. However, in the last period of evolutionary computing, the fitness values across the population converge and the information contained in the individuals becomes almost identical. Without the production of some new information or parameter values, the candidate solutions of the population will be trapped in local optima. In this situation, the mutation operator can introduce new information to the population, allowing it to escape local optima and find the global ones.

The most commonly used mutation methods are bit-flip mutation for binary bit strings and random perturbation for real-valued parameters.

In bit-flip mutation for binary bit strings, each bit in the individual has probability Pm of flipping its value, i.e., changing 1 to 0 or 0 to 1. Figure 2.7 shows the behavior of bit-flip mutation. The other commonly used mutation, for real-valued parameters, is described below. With probability Pm, a real-valued parameter x is mutated: a perturbation x′ of x is generated from the Cauchy-Lorentz probability distribution [59], and the mutated value of x is x + x′ or x − x′, determined randomly.

1 0 0 1 0 1 1 0 1
          ↑ mutation point
1 0 0 1 0 0 1 0 1

Figure 2.7. An example of bit-flip mutation.
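Both mutation variants can be sketched as follows (hypothetical Python; the `scale` parameter of the Cauchy step is our own assumption, not a value from the text):

```python
import math
import random

def bit_flip_mutation(bits, pm):
    """Each bit flips independently with probability pm (Fig. 2.7)."""
    return [1 - b if random.random() < pm else b for b in bits]

def cauchy_mutation(x, pm, scale=1.0):
    """With probability pm, perturb real-valued x by a step drawn from the
    Cauchy-Lorentz distribution [59], added or subtracted at random."""
    if random.random() >= pm:
        return x
    # Inverse-CDF sampling of a standard Cauchy variate, then rescale.
    step = scale * abs(math.tan(math.pi * (random.random() - 0.5)))
    return x + step if random.random() < 0.5 else x - step
```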

2.3.6 Termination Condition

Termination conditions are the criteria by which we stop the evolutionary search of the genetic algorithm. Commonly used termination conditions include: 1) the average or best fitness value reaches a preset value; 2) the number of generations or fitness evaluations reaches an upper bound set in advance; 3) the best fitness has not improved for a given number of generations; 4) other criteria designed by the user.
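The criteria above can be combined into a single stopping test (a hypothetical Python sketch; all thresholds are illustrative defaults, not values from the text):

```python
def should_terminate(generation, best_history,
                     target=None, max_generations=200, stall_limit=30):
    """best_history holds the best fitness of each generation so far."""
    # 1) best fitness reached a preset target value
    if target is not None and best_history and best_history[-1] >= target:
        return True
    # 2) generation budget set in advance is exhausted
    if generation >= max_generations:
        return True
    # 3) best fitness has not improved for `stall_limit` generations
    if (len(best_history) > stall_limit
            and best_history[-1] <= best_history[-stall_limit - 1]):
        return True
    return False
```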

Chapter 3

Intelligent Genetic Algorithm

The intelligent genetic algorithm (IGA) used here is a specific variant of the intelligent evolutionary algorithm [38] for solving large-scale parameter optimization problems (LPOPs).

The main difference between IGA and the traditional GA [36] is an efficient intelligent crossover operation. The intelligent crossover is based on orthogonal experimental design and is suited to intractable optimization problems comprising many design parameters. The following sections describe orthogonal experimental design, factor analysis, intelligent crossover, and the simple intelligent genetic algorithm. For the merits of orthogonal experimental design and the superiority of intelligent crossover, refer to [37] and [38].

3.1 Concept of Orthogonal Experimental Design (OED)

An efficient way to study the effect of several factors simultaneously is to use OED with both an orthogonal array (OA) and factor analysis [60, 61, 62]. The factors are the variables (parameters) that affect the response variables, and a setting (or discriminative value) of a factor is regarded as a level of that factor. OED utilizes the properties of fractional factorial experiments to efficiently determine the best combination of factor levels to use in design problems.

An OA is a fractional factorial array that assures a balanced comparison of the levels of any factor. It is an array of numbers arranged in rows and columns, where each row represents the levels of the factors in one combination and each column represents a specific factor that can be varied across combinations. The term "main effect" designates the effect on the response variables that one can trace to a design parameter [62]. The array is called orthogonal because all columns can be evaluated independently of one another, and the main effect of one factor does not interfere with the estimation of the main effect of another factor. Factor analysis using the orthogonal array's tabulation of experimental results can evaluate the effects of individual factors on the evaluation function, rank the most effective factors, and determine the best level for each factor so that the evaluation function is optimized.

OED can provide near-optimal quality characteristics for a specific objective, with a large saving in experimental effort. OED specifies a procedure for drawing a representative sample of experiments with the intention of reaching a sound decision [62]. Therefore, OED using an OA and factor analysis is regarded as a systematic reasoning method.
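As a concrete illustration, the smallest two-level orthogonal array, L4(2^3), and a main-effect computation can be sketched as follows (hypothetical Python; the response values are made up for the example):

```python
# L4(2^3): four experiments, three two-level factors; in every pair of
# columns each level combination (0,0), (0,1), (1,0), (1,1) occurs once.
L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

def main_effects(oa, responses):
    """For each factor, average the responses observed at each level.
    The level with the larger mean is the better setting (maximization)."""
    effects = []
    for f in range(len(oa[0])):
        means = []
        for level in (0, 1):
            vals = [y for row, y in zip(oa, responses) if row[f] == level]
            means.append(sum(vals) / len(vals))
        effects.append(means)
    return effects

print(main_effects(L4, [10, 20, 30, 40]))
# [[15.0, 35.0], [20.0, 30.0], [25.0, 25.0]] -> the first factor is the
# most effective (its better level is 1); the third has no main effect.
```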
