
Master's Thesis, Chung Hua University

A Theoretical Analysis of Multi-objective Linkage Learning Genetic Algorithms

A Comparative Study of Multi-objective Linkage Learning Genetic Algorithms

Department: Master Program, Department of Computer Science and Information Engineering. Student: M09502041, 魏良哲. Advisor: Dr. 陳建宏

August 2010


Abstract (in Chinese)

For genetic algorithms, deceptive problems are considered highly difficult, because during recombination the genetic linkage within a chromosome has a high probability of being disrupted. Not only are traditional genetic algorithms easily misled by deceptive problems; for multi-objective evolutionary algorithms, the genetic linkage may also differ across objectives. Resolving the genetic linkage problem has therefore become one of the keys to effectively improving the performance of genetic algorithms.

This thesis presents a further, more complete comparative analysis of the building blocks identification method proposed by Ponsawat et al. for guiding recombination. Various multi-objective evolutionary algorithms are integrated with this method and implemented as different multi-objective linkage learning genetic algorithms, which are then applied to multi-objective deceptive problems. Four performance metrics, namely completeness, robustness, and the earliest and latest times at which solutions are obtained, are proposed to analyze and evaluate the performance of the multi-objective linkage learning genetic algorithms. The experimental results show that, among the well-known multi-objective algorithms in the literature, only two achieve 100% completeness and robustness when combined with building-blocks-identification-guided recombination and can find all Pareto-optimal solutions within a limited time.

Keywords: deceptive problems, genetic algorithms, genetic linkage, building blocks, building blocks identification.


Abstract

Deceptive problems are considered hard for researchers in the genetic algorithms (GAs) field, because the genetic linkage within chromosomes has a higher probability of being disrupted during the recombination phase than in other problems. This poses a difficulty not only for traditional genetic algorithms but also for multi-objective genetic algorithms (MOGAs), because the genetic linkage may differ across objective functions. Therefore, how to tackle genetic linkage in GAs has become a key problem for improving GA performance.

In this thesis, a building blocks identification method for guiding crossover, proposed by Ponsawat et al., is selected for a comparative study. Several well-known MOGAs are selected and integrated with this method to form multi-objective linkage learning GAs, which are then used to solve multi-objective deceptive functions. Four performance metrics, namely completeness of Pareto-optimal solutions, robustness of solution quality, and the first and last hitting times of Pareto-optimal solutions, are proposed to evaluate the performance of the multi-objective linkage learning GAs. The experimental results indicate that only two multi-objective linkage learning GAs achieve 100% completeness and robustness and obtain all the Pareto-optimal solutions within a small number of generations.

Keywords: deceptive problems, genetic algorithms, genetic linkage, building blocks, building blocks identification


Acknowledgements

I first thank my advisor, Prof. 陳建宏, for guiding me step by step through this thesis over my four years of graduate study; I learned a great deal, both about research and about how to approach work, and grew in many respects. I thank every senior, classmate, and junior in our laboratory for their timely help and discussions whenever I ran into a bottleneck. I also thank the two oral examination committee members, Prof. 于天立 and Prof. 陳潁平, for their suggestions and corrections, which made this thesis more complete.

Finally, I thank my family for their support, care, and constant encouragement to work hard on my research, which allowed me to complete my master's studies. I dedicate this thesis to you. Thank you.

魏良哲, Department of Computer Science and Information Engineering, Chung Hua University, August 2010


Table of Contents

Abstract (in Chinese)
Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
  1.1 Motivation
  1.2 Objective
  1.3 Thesis Organization
Chapter 2 Related Work
  2.1 Genetic Algorithms
  2.2 Multi-objective Evolutionary Optimization
    2.2.1 FFGA
    2.2.2 NSGA
    2.2.3 NSGA-II
    2.2.4 SPEA2
  2.3 Genetic Linkage
Chapter 3 Definition of Deceptive Trap Functions
  3.1 Deceptive Trap Functions
  3.2 Multi-Objective Deceptive Trap Functions
Chapter 4 Multi-objective Linkage Learning Genetic Algorithms
  4.1 Chromosome Representation
  4.2 Fitness Assignment
  4.3 Genetic Operators
  4.4 Building Blocks Identification
    4.4.1 Chi-square Matrix (CSM)
    4.4.2 Partitioning (PAR) Algorithm
  4.5 Procedure of multi-objective linkage learning GAs
Chapter 5 Experimental Results
  5.1 Performance Metrics
  5.2 Normal 10×5-trap function
  5.3 Shuffle 10×5-trap function
Chapter 6 Conclusion
References


List of Tables

Table 5.1 Genetic algorithm parameter settings
Table 5.2 Robustness for normal 10×5-trap function
Table 5.3 Number of successful runs (normal 10×5-trap function)
Table 5.4 Hitting time in generations (normal 10×5-trap function)
Table 5.5 Robustness for shuffle 10×5-trap function
Table 5.6 Number of successful runs (shuffle 10×5-trap function)
Table 5.7 Hitting time in generations (shuffle 10×5-trap function)


List of Figures

Figure 2.1 One point crossover operator
Figure 2.2 Mutation operator
Figure 2.3 Flow chart of genetic algorithm
Figure 2.4 NSGA algorithm flowchart
Figure 2.5 Crowding distance calculation
Figure 3.1 A 5-bit deceptive function of unitation
Figure 3.2 5-bit inverse trap function
Figure 5.1 Number of Pareto-optimal solutions in MOGAs for normal 10×5-trap function
Figure 5.2 Number of Pareto-optimal solutions in MOLLGAs for normal 10×5-trap function
Figure 5.3 Box plots of Completeness in MOGAs for normal 10×5-trap function
Figure 5.4 Box plots of Completeness in MOLLGAs for normal 10×5-trap function
Figure 5.5 First hitting time in MOGAs for normal 10×5-trap function
Figure 5.6 First hitting time in MOLLGAs for 2-objective 10×5-trap function
Figure 5.7 Last hitting time in MOLLGAs for 2-objective 10×5-trap function
Figure 5.8 Number of Pareto-optimal solutions in MOGAs for shuffle 10×5-trap function
Figure 5.9 Number of Pareto-optimal solutions in MOLLGAs for shuffle 10×5-trap function
Figure 5.10 Box plots of Completeness in MOGAs for shuffle 10×5-trap function
Figure 5.11 Box plots of Completeness in MOLLGAs for shuffle 10×5-trap function
Figure 5.12 First hitting time in MOGAs for shuffle 10×5-trap function
Figure 5.13 First hitting time in MOLLGAs for shuffle 10×5-trap function
Figure 5.14 Last hitting time in MOLLGAs for shuffle 10×5-trap function


Chapter 1 Introduction

Genetic algorithms (GAs) were developed by Prof. John Holland and his students at the University of Michigan during the 1960s and 1970s. Genetic algorithms guide the search through the solution space by using natural selection and genetic operators such as crossover and mutation [3, 7]. Researchers explain the GA optimization mechanism in terms of building-block processing: creating, identifying, and exchanging building blocks [3, 7]. Solutions are improved based on the assumption that good solutions share common substructures and that, if these substructures are combined correctly, the new solutions will be better [10]. Multi-objective evolutionary algorithms (MOEAs) [26] are very popular approaches for solving multi-objective optimization problems (MOOPs). MOEAs use the Pareto dominance relationship, which allows a widespread set of non-dominated solutions to be found in a single run. Genetic algorithms are used to illustrate the basic framework of MOEAs, since they are stochastic, population-based searches that can find multiple Pareto-optimal solutions. However, there still exist problems that are hard for traditional genetic algorithms [19].

Successful optimization algorithms must identify and exploit such linkages. Some algorithms have linkage assumptions built into them. For example, the canonical genetic algorithm with one-point crossover presumes strong linkage between adjacent components of the solution representation, whereas uniform crossover presumes that all problem components are linked equally strongly to each other. As a result, applying the classical crossover operators breaks the linkage between some genes, ignoring any problem-specific dependence between them [20, 21]. To prevent crossover operators from disrupting genetic linkage, identifying genetic linkage and mixing the linked genes accordingly is crucial in GAs, as in the messy Genetic Algorithm (mGA) [7], the Linkage Learning GA (LLGA) [9], the Multi-Objective fast messy Genetic Algorithm (MOMGA-II) [22], and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) with building blocks identification [11].

In this thesis, we perform building blocks identification [11] to find candidate building blocks to be mixed into an optimal solution. If the problem inside a linkage group is difficult enough, as in a deceptive problem, we still need O(2^k) strings, where k is the length of the linkage; but if the problem is easy for a GA, such a large population size is unnecessary, because it is easy for the intra-GA to find an optimal solution inside the linkage group that is to become a building block [24].

1.1 Motivation

Deceptive problems are often challenging to optimize and involve some degree of deception, resulting in conflicting objectives [22]. On deceptive problems, GAs typically converge to suboptimal fitness. Ponsawat, Punyapom, and Chaiyaratana proposed the building blocks identification model and successfully solved this problem; the results in [11] show that building blocks identification is an effective solver. In this thesis, several MOGAs are selected and integrated with this method as multi-objective linkage learning GAs.

1.2 Objective

The building blocks identification method was proposed in [11, 12, 13] and successfully solves deceptive problems. In this thesis, we use the multi-objective deceptive function as the test problem, solve it with multi-objective linkage learning GAs, and propose four performance metrics, namely completeness of Pareto-optimal solutions, robustness of solution quality, and the first and last hitting times of Pareto-optimal solutions, to evaluate the performance of the multi-objective linkage learning GAs.

1.3 Thesis Organization

The remaining parts of this thesis are organized as follows. In Chapter 2, we introduce the definitions of genetic algorithms, multi-objective evolutionary optimization, the four genetic algorithms we chose, and genetic linkage. In Chapter 3, we describe the definitions of trap functions, the multi-objective trap function, and its mathematical formulation. In Chapter 4, we describe building blocks identification. In Chapter 5, we present the experimental results. Chapter 6 concludes the thesis.


Chapter 2

Related Work

2.1 Genetic Algorithms

Genetic algorithms were first proposed by John Holland in 1975 [5]. Genetic algorithms imitate biological evolution processes by applying the principle of survival of the fittest. Genetic algorithms are stochastic, population-based search and optimization algorithms loosely modeled after the paradigms of evolution. Therefore, genetic algorithms provide efficient, effective techniques for optimization and machine learning applications.

To solve a problem, GAs encode the decision variables of the problem into solution strings and evaluate the fitness function of solutions. The solution strings are either fixed- or variable-size strings, with binary or integer values at each position. Each solution string is referred to as an individual or, alternatively, a chromosome. The first step in the implementation of a GA is to generate an initial population of chromosomes. The initial population of chromosomes can be generated at random or with some problem-specific knowledge. After creating an initial population, each chromosome is evaluated and assigned a fitness value. The fitness value provides a measure of performance with respect to a particular set of parameters and is useful for distinguishing between the good solutions and the bad solutions of the problem. In order to evolve the population, genetic operators such as selection, crossover, and mutation are repeatedly applied to generate offspring that form a new generation.


The selection operator implements the principle of survival of the fittest and selects individuals from the current population for inclusion in the next population. Individual solutions are selected according to the assigned fitness, where fitter solutions are typically more likely to be selected. The most popular selection operators include roulette wheel selection [6] and tournament selection [7].

After selection has been carried out and the construction of the intermediate population is complete, crossover can occur. The crossover operator is the process of creating one or more new chromosomes by combining genetic material randomly selected from two or more parents. Crossover occurs during evolution according to a user-definable crossover probability p_c. The new chromosome may be better than both of the parents if it takes the best characteristics from each of them. For example, consider the following two parents, selected for crossover with a randomly chosen crossover point of 3. The crossover operation can be illustrated as:

Figure 2.1 One point crossover operator

The common crossover operators include one-point crossover, two-point crossover, and uniform crossover. With one-point crossover, a single crossover point is chosen at random. With two-point crossover, everything between the two points is swapped between the parent chromosomes. With uniform crossover, the bits are swapped with a fixed probability of 0.5, which achieves the maximum allele-wise mixing rate.
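As a minimal illustration of these three operators on bit-string parents, the following Python sketch uses hypothetical helper names (not code from the thesis):

import random

def one_point_crossover(p1, p2):
    # choose a single cut point, e.g. 3, and swap the tails
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def two_point_crossover(p1, p2):
    # swap everything between two random cut points
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:a] + p2[a:b] + p1[b:], p2[:a] + p1[a:b] + p2[b:]

def uniform_crossover(p1, p2, swap_prob=0.5):
    # swap each bit independently with probability 0.5
    c1, c2 = list(p1), list(p2)
    for i in range(len(c1)):
        if random.random() < swap_prob:
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2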

The mutation operator is the process of randomly changing some pieces of individuals to form perturbed solutions. Mutation occurs during evolution according to a user-definable mutation probability p_m. Mutation should be applied with care so as not to destroy the good genetic material in highly fit individuals; for this reason, the mutation probability should be assigned a low value. Consider the case where the GA decides to mutate bit position 4. The mutation operator can be illustrated as:

Figure 2.2 Mutation operator
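A matching sketch of bit-flip mutation with a low probability p_m (again a hypothetical helper, not the thesis's code):

import random

def mutate(chromosome, pm=0.01):
    # flip each bit independently with a low mutation probability pm
    return [1 - bit if random.random() < pm else bit for bit in chromosome]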

The selection pressure may cause premature convergence due to reduced diversity in the new populations. The main objective of mutation is to introduce new genetic material into the population, which helps maintain the genetic diversity of the current population. Figure 2.3 shows the flow chart of a genetic algorithm.

Figure 2.3 Flow chart of genetic algorithm

[Flowchart: Start → initial population → evaluation → selection → crossover → mutation → terminate? If false, return to evaluation; if true, end.]
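The flow chart corresponds to a generational loop such as the following sketch, which reuses the one_point_crossover and mutate helpers above; fitness_fn is a placeholder for the problem's fitness function:

import random

def run_ga(fitness_fn, length=50, pop_size=100, pc=0.9, pm=0.01, max_gen=1000):
    # initial population of random bit strings
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(max_gen):
        new_pop = []
        while len(new_pop) < pop_size:
            # binary tournament selection of two parents
            p1 = max(random.sample(pop, 2), key=fitness_fn)
            p2 = max(random.sample(pop, 2), key=fitness_fn)
            # crossover with probability pc, then mutation of both children
            if random.random() < pc:
                c1, c2 = one_point_crossover(p1, p2)
            else:
                c1, c2 = p1[:], p2[:]
            new_pop += [mutate(c1, pm), mutate(c2, pm)]
        pop = new_pop[:pop_size]
    return max(pop, key=fitness_fn)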


2.2 Multi-objective Evolutionary Optimization

Most problems in nature have several objectives to be satisfied. These objectives often conflict and cannot all be optimized simultaneously. For example, the optimal design of a racing car may seek minimum cost and maximum performance: high-performance car architectures substantially increase cost, while cheap architectures usually provide low performance. Cost and performance are thus generally competing.

Assume the multi-objective functions are to be minimized. Mathematically, MOOPs can be represented as the following vector mathematical programming problem:

Minimize F(Y) = {F_1(Y), F_2(Y), …, F_i(Y)},   (1)

where Y denotes a solution and F_i(Y) is generally a nonlinear objective function. The Pareto dominance relationship and some related terminology are introduced below. When the following inequalities hold between two solutions Y_1 and Y_2, Y_2 is said to dominate Y_1 (written Y_2 ≻ Y_1):

∀i: F_i(Y_1) ≥ F_i(Y_2)  ∧  ∃j: F_j(Y_1) > F_j(Y_2).   (2)

When the following inequality holds between two solutions Y_1 and Y_2, Y_2 is said to weakly dominate Y_1 (Y_2 ⪰ Y_1):

∀i: F_i(Y_1) ≥ F_i(Y_2).   (3)

A feasible solution Y* is said to be a Pareto-optimal solution if and only if there does not exist a feasible solution Y that dominates Y*; the set of objective vectors corresponding to the Pareto-optimal solutions is called the Pareto-optimal front.
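For the minimization setting of Eq. (1), the dominance tests of Eqs. (2) and (3) can be sketched in Python as follows (a minimal illustration, not code from the thesis):

def dominates(f2, f1):
    # Eq. (2): f2 is no worse in every objective and strictly better in at least one
    return (all(a <= b for a, b in zip(f2, f1))
            and any(a < b for a, b in zip(f2, f1)))

def weakly_dominates(f2, f1):
    # Eq. (3): f2 is no worse than f1 in every objective
    return all(a <= b for a, b in zip(f2, f1))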

By making use of the Pareto dominance relationship, multi-objective evolutionary algorithms (MOEAs) are capable of performing the fitness assignment of multiple objectives without using relative preferences among them. Thus, all the objective functions can be optimized simultaneously. As a result, MOEAs appear to be an alternative approach to solving production planning and inspection planning problems under the assumption that no prior domain knowledge is available [4].

Multi-objective evolutionary algorithms are popular approaches for solving problems with several objectives. For the purpose of this research, four algorithms have been selected for performance comparison: the multi-objective GA proposed by Fonseca and Fleming (FFGA), the Non-dominated Sorting Genetic Algorithm (NSGA), the Non-dominated Sorting Genetic Algorithm II (NSGA-II), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), all applied to deceptive problems in this thesis. A brief overview of these algorithms is given in this section. The initialization, representation, and evolutionary operators of these algorithms are standard: uniform random initialization, binary representation, binary tournament selection, single-point crossover, and uniform mutation.

2.2.1 FFGA

Fonseca and Fleming [14] proposed an improved genetic algorithm for multi-objective optimization in 1993. This genetic algorithm is called FFGA in this thesis.

FFGA uses a Pareto-based ranking procedure, where the rank of an individual equals one plus the number of individuals in the population whose decision vectors dominate it [26]. All individuals start with rank 1; for each individual Y, the number of individuals that dominate it is accumulated, and the resulting ranks are assigned to individuals as fitness.

An individual's rank is given by

F(Y) = 1 + q,   (4)

where q is the number of individuals that dominate the individual Y in the current population. Rank 1 represents the Pareto-optimal individuals of the current generation.
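Using the dominates helper from the previous section, the FFGA ranking of Eq. (4) can be sketched as:

def ffga_rank(objectives):
    # rank(Y) = 1 + q, where q counts the individuals dominating Y
    ranks = []
    for i, f in enumerate(objectives):
        q = sum(1 for j, g in enumerate(objectives) if j != i and dominates(g, f))
        ranks.append(1 + q)
    return ranks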


The flow of the FFGA is as follows.

Step 1: Generate the initial population, and set g ← 1. The variable g represents the generation.

Step 2: Determine the archive solution set X from the initial population.

Step 3: As the parents, pick up a solution x1 randomly from the population and a solution x2 randomly from X. Remove x1 from the population.

Step 4: Generate two solutions x3 and x4 by means of the crossover operation. The crossover operation is applied at every generation.

Step 5: Generate a solution x5 from x1 by means of the mutation operation. Similarly, generate a solution x6 from x2 by means of the mutation operation. The mutation operation is also applied at every generation.

Step 6: Remove a solution selected randomly from the population.

Step 7: Select two solutions from the solutions {x1, x2, …, x6} by means of SAR, and add them to the population.

Step 8: Update X from X ∪ {x3, x4, x5, x6}.

Step 9: If g = G, terminate this algorithm and output X as the answer. If not, set g ← g + 1 and return to Step 3. The archive solution set X has no size limitation, because the size remains relatively small for this problem.

2.2.2 NSGA

NSGA differs from the simple genetic algorithm only in the way the selection operator works; the crossover and mutation operators remain as usual. Before selection is performed, the population is ranked on the basis of each individual's non-domination. The non-dominated individuals present in the current population are identified first. All of these individuals are assumed to constitute the first non-dominated front in the population and are assigned a large dummy fitness value. In order to maintain diversity in the population, these classified individuals are then shared with their dummy fitness values [15].

Figure 2.4 NSGA algorithm flowchart

[Flowchart: Start → initial population, front = 1 → identify non-dominated individuals → assign dummy fitness → sharing in current front → population fully classified? If not, front = front + 1 and repeat; if so, reproduction according to dummy fitness → crossover → mutation → terminate? If not, gen = gen + 1 and return to the classification loop; otherwise end.]


2.2.3 NSGA-II

NSGA-II was developed by Deb, Agrawal, Pratap, and Meyarivan. It is modified from NSGA and improves its performance, using a fast non-dominated sorting approach, a simple density estimation, and an elitism procedure [16].

To estimate the density around a point, NSGA-II takes the average distance of the two points on either side of it along each of the objectives. This quantity i_distance, called the crowding distance, serves as an estimate of the size of the largest cuboid enclosing point i without including any other point in the population. In Figure 2.5, the crowding distance of the i-th solution in its front is the average side length of the cuboid [16].

Figure 2.5 Crowding distance calculation.
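A sketch of the crowding distance computation for a single front of objective vectors (a minimal illustration with hypothetical names):

def crowding_distance(front):
    # front: list of objective vectors; returns one distance per vector
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')  # boundary solutions
        if fmax == fmin:
            continue
        for r in range(1, n - 1):
            i = order[r]
            dist[i] += (front[order[r + 1]][k] - front[order[r - 1]][k]) / (fmax - fmin)
    return dist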

The flow of the NSGA-II is as follows.

Step 1: Generate a random initial population P0. Set t = 0.

Step 2: Sort population Pt based on non-domination.

Step 3: Assign fitness and create an offspring population Q_t using binary tournament selection, recombination, and mutation.

Step 4: Combine the parent population P_t and the offspring population Q_t to form the combined population R_t = P_t ∪ Q_t of size 2N (except in the first generation).

Step 5: Sort R_t based on non-domination: R_t = {F_1, F_2, …}, where F_1 is the best non-dominated front.

Step 6: Form the new parent population according to non-domination rank and crowding distance.

Step 7: If the maximum number of generations is reached, stop; otherwise, go to Step 2.

2.2.4 SPEA2

SPEA2 was developed by Zitzler, Laumanns, and Thiele [17] as an improved version of the SPEA algorithm. Besides the population, SPEA maintains an external set of individuals (the archive), which contains the non-dominated solutions among all solutions considered so far. In each generation the external set is updated and, if necessary, pruned by means of a clustering procedure. Afterwards, individuals in the population and the external set are evaluated such that external set members have better fitness values than population members. Finally, selection is performed on the union of the population and the external set, and recombination and mutation operators are applied as usual. SPEA2, which incorporates a fine-grained fitness assignment strategy and an adjustable elitism scheme, is described in [17].

The flow of SPEA2 algorithm is as follows:

Input: N (population size), N̄ (archive size), T (maximum number of generations)

Output: A (non-dominated set)

Step 1: Initialization: Generate an initial population P_0 and create the empty archive (external set) P̄_0 = ∅. Set t = 0.

Step 2: Fitness assignment: Calculate the fitness values of the individuals in P_t and P̄_t.

Step 3: Environmental selection: Copy all non-dominated individuals in P_t and P̄_t to P̄_{t+1}. If the size of P̄_{t+1} exceeds N̄, reduce P̄_{t+1} by means of the truncation operator; otherwise, if the size of P̄_{t+1} is less than N̄, fill P̄_{t+1} with dominated individuals from P_t and P̄_t.

Step 4: Termination: If t ≥ T or another stopping criterion is satisfied, set A to the set of decision vectors represented by the non-dominated individuals in P̄_{t+1} and stop.

Step 5: Mating selection: Perform binary tournament selection with replacement on P̄_{t+1} in order to fill the mating pool.

Step 6: Variation: Apply recombination and mutation operators to the mating pool and set P_{t+1} to the resulting population. Increment the generation counter (t = t + 1) and go to Step 2.

2.3 Genetic Linkage

Linkage in the context of GAs represents the ability of building blocks to bind tightly together and thus travel as one under the action of the crossover operator [9]. The building block hypothesis states that the final solutions to a given optimization problem can be evolved through a continuous process of creating, identifying, and recombining high-quality building blocks. In genetic algorithms, a chromosome is represented as a string of characters, and GAs search the solution space using natural selection and genetic operators such as crossover and mutation [3, 7]. For multi-objective optimization, linkage disequilibrium is the phenomenon that the linkage of genes in a chromosome differs between objective functions. The linkage of a problem indicates how the interacting genes are positioned in the chromosome. When the genes contributing to a BB are physically close to one another, the linkage of these genes is said to be tight. If the genetic linkage between these genes is tight, crossover operators such as one-point and two-point crossover disrupt them with low probability and transfer them all together to the child individual with high probability. Conversely, if the genetic linkage between these genes is loose, crossover operators disrupt them with high probability and transfer them all together with low probability. If the chromosome representation provides tight linkage, a simple GA can solve difficult problems; otherwise, it can easily fail. Therefore, for simple genetic algorithms, tight genetic linkage, or a good coding scheme, is far more important than is usually acknowledged. Genetic linkage can be used to describe and measure the relation of genes, i.e., how close the genes belonging to a building block are on a chromosome [3]. If a solution A of a problem is an instance of the similarity subset of a schema H, the schema H is called a building block (BB). For example, 1******** is a building block of the solution 111111111, but not of 000000000. A schema is a locally superior BB of a solution if it is a BB of that solution and is superior to the competitors in its similarity partition. For example, given a population (a set of solutions), suppose three BBs: BB1 = 111***, BB2 = 1*****, and BB3 = 0*****. Assuming the average objective function values of the three BBs in the population satisfy f(BB1) > f(BB2) > f(BB3), BB1 is said to be a locally superior BB.

In many problems, because of interactions between parameters, optimizing each dimension of candidate solutions separately cannot lead to a global optimum. The linkage, i.e., the interrelationships existing between genes, needs to be considered when genetic algorithms are used. Moreover, according to Goldberg's design decomposition theory, building block identification or genetic linkage learning is critical to the success of genetic algorithms [3]. If the genes of a building block are spread all over the chromosome, building blocks are very hard to create and easy to destroy with the recombination operator, and genetic algorithms cannot perform well under such circumstances. In practice, without prior knowledge of the problem and its linkage information, it is difficult to guarantee that the coding scheme defined by the user always provides tight building blocks, although this is a key to the success of genetic algorithms. The genetic linkage of building blocks dominates all kinds of building-block processing, including creation, identification, separation, preservation, and mixing. Handling genetic linkage is therefore extremely important for genetic algorithms to succeed [2].


Chapter 3

Definition of Deceptive Trap Functions

The notion of deception in GAs was first introduced by Goldberg. Deceptive problems are those in which low-order building blocks are misleading. In other words, a deceptive function is one in which low-order schemata direct the search towards a particular local optimum instead of towards the global optimum; in a fully deceptive function, the one is the complement of the other. Deceptive problems become even more difficult for traditional GAs when deception occurs across some random combination of bits in the encoding. This implies that the linkage among bits of the same functionality is poor, so genetic algorithms cannot adequately sample schemata with long defining lengths because of the high rate of disruption during recombination. The combination of deception and bad linkage thus makes a problem GA-hard in the traditional sense. One of the major failings of simple GAs on deceptive problems is premature convergence, in which the search gets stuck at a suboptimal point [18].

Trap functions are hard for traditional optimizers because they tend to climb to the deceptive peak, but GAs with properly sized populations can solve them satisfactorily. We expect to use larger population sizes than before to solve these functions for two reasons: (1) the BBs are much scarcer in the initial population because they are longer; and (2) the signal-to-noise ratio is lower, making the decision between the best and the second-best BBs more difficult. Figure 3.1 shows a 5-bit deceptive function of unitation.


Figure 3.1 A 5-bit deceptive function of unitation.

3.1 Deceptive Trap Functions

The trap function is an adversarial function for studying BBs and linkage problems in GAs; it is a fundamental unit for designing test functions that resist hill-climbing algorithms [13]. Consider the deceptive order-3 function [1]. Suppose that individual 0s are better than individual 1s, that is, f(0**) > f(1**), f(*0*) > f(*1*), and f(**0) > f(**1). Suppose further that all two-bit combinations lead towards 0s, that is, f(00*) > f(11*), f(00*) > f(10*), and f(00*) > f(01*), with similar relations holding over the partitions f*f and *ff; however, f(111) > f(x) for all x ≠ 111. In this case, the global optimum is 111, not 000. Such functions are called deceptive because they deceive solvers by leading the search away from the right answer.

3.2 Multi-Objective Deceptive Trap Functions

The multi-objective deceptive problem has two objective functions: (1) the m × k deceptive trap and (2) the m × k deceptive inverse trap. The k-bit trap and inverse trap functions are defined as follows [11].






trap_k(u) = k, if u = k; (k − d)(1 − u/(k − 1)), otherwise.   (5)

invtrap_k(u) = k, if u = 0; (k − d)(u − 1)/(k − 1), otherwise.   (6)

where u is the number of 1s in the binary string of k bits and d is the signal difference. In these experiments, k = 5 and d = 1. There are 2^m solutions in the set of non-dominated solutions, each of which sets the bits of every partition either to all 0s or to all 1s. Figures 3.1 and 3.2 show examples of the 5-bit trap and inverse trap functions. The m × k-trap function is defined as follows:

F_{m×k}(B_0 B_1 … B_{m−1}) = Σ_{i=0}^{m−1} F_k(B_i),   (7)

where F_k is the trap_k or invtrap_k function. The parameters m and k can be varied to produce many test functions. These functions are often referred to as additively decomposable functions (ADFs). Decomposability of a problem into subproblems is essential to effective search with optimization algorithms based on a divide-and-conquer strategy. For genetic optimization, problem decomposition based on BBs and their mixing by recombination operators is essential to effective search. Classical GAs do not consider problem decomposition explicitly; they process BBs indirectly with general or problem-specific crossover operators [25].
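To make Eqs. (5) to (7) concrete, here is a small Python sketch of the trap, inverse trap, and additively decomposed m × k evaluation (illustrative helper names, not code from the thesis):

def trap(u, k=5, d=1):
    # Eq. (5): maximum at u == k, deceptive slope towards u == 0
    return k if u == k else (k - d) * (1 - u / (k - 1))

def invtrap(u, k=5, d=1):
    # Eq. (6): maximum at u == 0, deceptive slope towards u == k
    return k if u == 0 else (k - d) * (u - 1) / (k - 1)

def m_by_k(bits, f, m=10, k=5):
    # Eq. (7): sum f over m consecutive k-bit partitions of the chromosome
    return sum(f(sum(bits[i * k:(i + 1) * k])) for i in range(m))

# Example: the all-ones 50-bit string scores 50 on the trap objective
# and 40 on the inverse trap objective (with k = 5, d = 1).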

A deceptive trap function called the shuffle trap function was created in [11]. The shuffle trap function creates non-compact building blocks, which render a simple crossover operator ineffective [11]. For example, the linkage of the normal 4 × 5-trap function is tight, with a building-block schema of the form:

11111 ***** ***** *****

In the shuffle 4 × 5-trap function, the bits of each building block recur every m positions:

1*** 1*** 1*** 1*** 1***

where * is a don't-care bit (either 0 or 1). For the shuffle m × k-trap function, single-point crossover easily breaks the genetic linkage.
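Consistent with the schemata above, one plausible realization (an assumption for illustration; the exact shuffle used in [11] may differ) places bit j of block i at position j·m + i, so block i occupies positions i, i + m, i + 2m, and so on. The sketch below evaluates the shuffle m × k-trap by undoing that interleaving:

def shuffle_m_by_k(bits, f, m=4, k=5):
    # block i owns positions i, i+m, i+2m, ...; collect them and apply the k-bit trap f
    return sum(f(sum(bits[i::m])) for i in range(m))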

Figure 3.2 5-bit inverse trap function


Chapter 4

Multi-objective Linkage Learning Genetic Algorithms

Genetic algorithms, a kind of directed search algorithm based on the mechanics of biological evolution, are widely used today in many fields to solve optimization problems. In this chapter, we describe the multi-objective genetic algorithms with building blocks identification that are used to solve the deceptive problems. Sections 4.1 and 4.2 cover the two main problem-dependent components of genetic algorithms: the problem encoding and the fitness function. Section 4.3 describes the genetic operators, including selection, crossover, and mutation, with which genetic algorithms evolve the population of solutions. Section 4.4 describes the procedure of the genetic algorithm with building blocks identification. The experiments in this thesis use the 2-objective 10×5-trap function as the test function.

4.1 Chromosome Representation

Generally speaking, a chromosome carries the gene information for solving the deceptive problems in MOGAs. Each chromosome is a binary string of fixed length, composed of 10 copies of a fully deceptive order-5 trap function or inverse trap function. Each trap or inverse trap function is uniformly and maximally distributed across the chromosome length.


4.2 Fitness Assignment

We use a generalized Pareto-based scale-independent fitness function (GPSIFF) [27], which considers quantitative fitness values in Pareto space for both dominated and non-dominated individuals. GPSIFF makes the best use of the Pareto dominance relationship to evaluate individuals using a single measure of performance. GPSIFF is briefly described below. Let the fitness value of an individual Y be a tournament-like score obtained from all participant individuals by the following function:

F(Y) = p − q + c,   (8)

where p is the number of individuals dominated by the individual Y and q is the number of individuals that dominate Y in the objective space. A constant c can optionally be added to the fitness function to make the fitness values positive; c is usually set to the number of all participant individuals.
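A minimal sketch of GPSIFF scoring over a population's objective vectors, reusing the dominates helper from Section 2.2 (hypothetical names; note that dominates was written for minimization, so its comparisons would flip for maximization objectives):

def gpsiff(objectives):
    # score each individual as p - q + c (Eq. 8), with c = population size
    c = len(objectives)
    scores = []
    for i, f in enumerate(objectives):
        p = sum(1 for j, g in enumerate(objectives) if j != i and dominates(f, g))
        q = sum(1 for j, g in enumerate(objectives) if j != i and dominates(g, f))
        scores.append(p - q + c)
    return scores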

4.3 Genetic Operators

The genetic operators used in this approach are widely used in the literature. The selection operator is binary tournament selection without replacement, which works as follows: choose two individuals randomly from the population and copy the better individual into the intermediate population.

Crossover is a recombination process in which genes from two selected parents are recombined to generate offspring chromosomes. One-point crossover, as in the GA literature, is used in our approach.

A simple mutation operator is used to alter genes. For each gene, a real value is randomly generated from the range [0, 1]; if the value is smaller than the mutation probability p_m, the gene is replaced with a random 0 or 1.


4.4 Building Blocks Identification

Building blocks identification is performed before the crossover operator; we use it to identify genetic linkage. Building blocks identification has two parts: simultaneity matrix construction (SMC) and the partitioning (PAR) algorithm [11, 12, 13].

4.4.1 Chi-square Matrix (CSM)

The CSM input is a population, i.e., a set of n binary strings of l bits each, denoted by

S = {s_0, …, s_{n−1}},   (9)

where s_i is the ith string, 0 ≤ i ≤ n − 1. The matrix is an l × l symmetric matrix, denoted by M = (m_ij), 0 ≤ i, j ≤ l − 1 [11, 12, 13]. The matrix is called the chi-square matrix (CSM) and is defined as follows:



 ≠

= otherwise

j i if j

i ChiSquare mij

; 0

; ) ,

( (10)

The ChiSquare(i,j) is defined as follows.

ChiSquare(i, j) = Σ_{xy ∈ {00, 01, 10, 11}} (Count_S^{xy}(i, j) − n/4)² / (n/4),   (11)

where the observed frequency Count_S^{xy}(i, j) is the number of solutions in which the ith bit equals x and the jth bit equals y. The expected frequency of observing each of "00", "01", "10", and "11" is n/4, where n is the number of solutions. If bit i and bit j are in the same BB, the chi-square value of the bit pair is high [11].
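A direct Python sketch of Eqs. (10) and (11) over a population of bit lists (an illustration, not the authors' implementation):

def chi_square_matrix(population):
    # build the l x l chi-square matrix of Eqs. (10) and (11)
    n, l = len(population), len(population[0])
    expected = n / 4.0
    M = [[0.0] * l for _ in range(l)]
    for i in range(l):
        for j in range(i + 1, l):
            counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
            for s in population:
                counts[(s[i], s[j])] += 1
            chi = sum((c - expected) ** 2 / expected for c in counts.values())
            M[i][j] = M[j][i] = chi
    return M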


4.4.2 Partitioning (PAR) Algorithm

The partitioning (PAR) algorithm [11, 12, 13] partitions the bit indices into suitable blocks according to the simultaneity matrix and outputs the partition:

P = {B_0, …, B_{|P|−1}},  B_i ∩ B_j = ∅ for all i ≠ j,  ⋃_i B_i = {0, …, l − 1}.   (12)

Each B_i is a subset of the partition. PAR imposes five preconditions:

1. P is a partition: the members of P are disjoint sets, and the union of all members of P is {0, …, l − 1}.

2. P ≠ {{0, …, l − 1}}, that is, the trivial single-block partition is excluded.

3. For every B ∈ P with |B| > 1 and every i ∈ B, the largest |B| − 1 matrix elements in row i are found in the columns of B \ {i}.

4. For every B ∈ P with |B| > 1, Hmax − Hmin < α(Hmax − Lmin), where 0 ≤ α ≤ 1, Hmax = max{m_ij | (i, j) ∈ B × B, i ≠ j}, Hmin = min{m_ij | (i, j) ∈ B × B, i ≠ j}, and Lmin = min{m_ij | i ∈ B, j ∈ {0, …, l − 1} \ B}.

5. There is no partition P_x such that, for some B ∈ P and some B_x ∈ P_x, P and P_x satisfy the first four conditions and B ⊂ B_x.
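The published PAR algorithm searches for a partition satisfying all five preconditions. As a rough, simplified stand-in only (a greedy merge loosely inspired by precondition 3, not the actual PAR algorithm), one could group each bit with the strongest entries in its row of the chi-square matrix:

def greedy_blocks(M, block_size=5):
    # greedy stand-in for PAR: grow each unassigned bit's block from the
    # largest entries in its row of the chi-square matrix M
    l = len(M)
    unassigned, blocks = set(range(l)), []
    while unassigned:
        i = min(unassigned)
        # remaining columns of row i, sorted by descending chi-square value
        mates = sorted((j for j in unassigned if j != i),
                       key=lambda j: M[i][j], reverse=True)
        block = {i} | set(mates[:block_size - 1])
        blocks.append(sorted(block))
        unassigned -= block
    return blocks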


4.5 Procedure of multi-objective linkage learning GAs

The procedure of multi-objective linkage learning GAs is written as follows:

Input: population size N_pop, recombination probability p_c, mutation probability p_m, and the maximum number of generations G_max.

Output: The optimum solutions ever found in P.

Step 1: Initialization Randomly generate an initial population P of Npop individuals, and create an empty set E.

Step 2: Evaluation For each individual in the population, compute both objective function values F_1 and F_2.

Step 3: Fitness assignment Assign each individual a fitness value by using GPSIFF.

Step 4: Selection Select Npop individuals from the population to a new population using the binary tournament selection.

Step 5: Building blocks identification Obtain the linkage of each gene using the Chi-square Matrix and perform Partitioning (PAR) Algorithm to identify blocks.

Step 6: Recombination Perform the uniform crossover operation for each block with a recombination probability pc.

Step 7: Mutation Apply the mutation operator to each block in the individuals with a mutation probability pm.

Step 8: Termination test If a stopping condition is satisfied, stop the algorithm. Otherwise, go to Step 2.
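Putting the pieces together, a heavily simplified sketch of Steps 1 to 8, reusing the gpsiff, chi_square_matrix, and greedy_blocks sketches above (all hypothetical stand-ins, with block-wise uniform crossover and block-wise mutation as in Steps 6 and 7):

import random

def mollga(evaluate, length=50, npop=2000, pc=0.9, pm=0.1, gmax=1000):
    # Step 1: random initial population
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(npop)]
    for _ in range(gmax):
        objs = [evaluate(ind) for ind in pop]            # Step 2: (F1, F2) per individual
        fit = gpsiff(objs)                               # Step 3: GPSIFF fitness
        # Step 4: binary tournament selection
        pop = [pop[max(random.sample(range(npop), 2), key=lambda i: fit[i])][:]
               for _ in range(npop)]
        blocks = greedy_blocks(chi_square_matrix(pop))   # Step 5: identify linkage blocks
        for a in range(0, npop - 1, 2):                  # Steps 6 and 7: block-wise variation
            for b in blocks:
                if random.random() < pc:                 # exchange a whole block
                    for i in b:
                        pop[a][i], pop[a + 1][i] = pop[a + 1][i], pop[a][i]
                if random.random() < pm:                 # re-randomize a whole block
                    for i in b:
                        pop[a][i] = random.randint(0, 1)
    return pop                                           # Step 8 handled by the gmax loop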


Chapter 5

Experimental Results

In this section, we show the experimental results and analyze the effectiveness of the GAs with building blocks identification in solving the multi-objective trap problem. For a GA with building blocks identification, we add the prefix "BB" to the name of the GA, as in BBNSGA-II, and we also modify NSGA to run without fitness sharing, denoted NSGA(nonSharing). The first experiment uses the 2-objective 10×5-trap function as the test function. The parameters of the MOGAs and MOLLGAs are shown in Table 5.1. The PAR threshold is set to α = 0.95, an appropriate value for generating high-quality building blocks [11]. The results of the experiment are averaged over 30 independent runs. In the experiment, we compare robustness, completeness, and the first and last hitting times.

5.1 Performance Metrics

In order to evaluate the performance of MOGAs in solving the test functions, four performance metrics are proposed in this thesis: completeness of Pareto-optimal solutions, robustness of solution quality, the first hitting time of Pareto-optimal solutions, and the last hitting time of Pareto-optimal solutions.

Completeness of Pareto-optimal solutions is the number of Pareto-optimal solutions obtained by an MOGA divided by the total number of Pareto-optimal solutions, in a single run. A ratio of 1024/1024 means the algorithm obtained all the Pareto-optimal solutions in a single run.

Robustness of Pareto-optimal solutions is the ratio of successful runs of an MOGA, i.e., runs in which all Pareto-optimal solutions were obtained. For example, a value of 16/30 means that only 16 runs of the comparing MOGA successfully obtained all the Pareto-optimal solutions, while the other 14 runs failed to do so.

The first and last hitting times are the times (in generations) at which an MOGA obtains its first and last Pareto-optimal solutions. If a comparing MOGA obtains the first and last Pareto-optimal solutions in fewer generations than the other MOGAs, it converges faster than the other algorithms.
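These metrics can be computed from a per-generation record of the distinct Pareto-optimal solutions found so far. A sketch, assuming found[g] is the cumulative set discovered by generation g and total is the size of the true Pareto-optimal set (1024 here):

def run_metrics(found, total):
    # completeness of the final generation, success flag for robustness,
    # and the first/last hitting generations (None if never reached)
    completeness = len(found[-1]) / total
    success = len(found[-1]) == total
    first = next((g for g, s in enumerate(found) if s), None)
    last = next((g for g, s in enumerate(found) if len(s) == total), None)
    return completeness, success, first, last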

5.2 Normal 10×5-trap function

First, Figure 5.1 shows the number of Pareto-optimal solutions obtained by the MOGAs. For the 2-objective 10×5-trap function, there are 1024 Pareto-optimal solutions among all 2^50 possible solutions. In Figure 5.1, NSGA-II obtains 550 Pareto-optimal solutions, for a completeness of 53% (550/1024). SPEA2 obtains few Pareto-optimal solutions, for a completeness of 0.39% (4/1024). The other MOGAs cannot obtain any Pareto-optimal solutions; their completeness is 0%. Figure 5.2 shows how many Pareto-optimal solutions the MOLLGAs can obtain in a single run. BBMOGA(GPSIFF), BBNSGA(nonSharing), and BBSPEA2 obtain all Pareto-optimal solutions at the 42nd, 36th, and 54th generations, respectively; their completeness is 100% (1024/1024), meaning all Pareto-optimal solutions were obtained in every experimental run. BBNSGA-II obtains 1006 Pareto-optimal solutions by the 800th generation, for a completeness of 98% (1006/1024). In the ability and speed of finding all Pareto-optimal solutions, BBNSGA(nonSharing) is as good as BBMOGA(GPSIFF), and BBSPEA2 is slower than BBMOGA(GPSIFF). Figures 5.3 and 5.4 depict box plots of the completeness of Pareto-optimal solutions.

Table 5.2 shows the robustness of the MOGAs and MOLLGAs for the normal 10×5-trap function. All MOGAs are 0/30, meaning all 30 runs failed to obtain the complete set of Pareto-optimal solutions. BBNSGA(nonSharing), BBMOGA(GPSIFF), and BBSPEA2 are 30/30, meaning all 30 of their runs successfully obtained all the Pareto-optimal solutions. BBNSGA-II is 15/30. From these results, we can be sure that BBNSGA(nonSharing), BBMOGA(GPSIFF), and BBSPEA2 obtain all the Pareto-optimal solutions in every run.

Figure 5.5 shows the first hitting time. Only NSGA-II and SPEA2 obtain a first Pareto-optimal solution, with NSGA-II earlier than SPEA2; the other MOGAs cannot find any Pareto-optimal solution. All MOGAs failed to reach the last hitting time in this experiment, so the corresponding figure is omitted. Figures 5.6 and 5.7 show the first and last hitting times of the MOLLGAs. BBMOGA(GPSIFF), BBNSGA(nonSharing), and BBSPEA2 reach the last hitting time, i.e., find all Pareto-optimal solutions, in a single run. In Figure 5.6, BBNSGA-II reaches the first hitting time before the 10th generation, faster than BBNSGA, BBMOGA(GPSIFF), and BBSPEA2. However, although the first hitting time of BBNSGA-II is the fastest among the MOLLGAs, its last hitting time is the slowest, or the run fails altogether. Table 5.3 shows how many of the 30 independent runs of each MOGA and MOLLGA successfully reached the first and last hitting times for the normal 10×5-trap function. Table 5.4 reports the first and last hitting times over the 30 independent runs; N/A means the algorithm could not reach the first or last hitting time.


Table 5.1 Genetic algorithm parameter settings

Parameter            Multi-objective GAs    MOGAs with Building Blocks Identification
Experimental runs    30                     30
Max. generations     1000                   1000
Population size      2000                   2000
Chromosome length    50                     50
Crossover rate       0.9                    0.9
Mutation rate        0.1                    0.1
Selection operator   Binary tournament      Binary tournament
Crossover operator   Single-point           Partition crossover
Mutation operator    Uniform (per gene)     Uniform (per partition)

[Plot: number of Pareto-optimal solutions versus generations; curves for SPEA2 and NSGA-II.]

Figure 5.1 Number of Pareto-optimal solutions in MOGAs for normal 10×5-trap function


[Plot: number of Pareto-optimal solutions versus generations; curves for BBSPEA2, BBMOGA(GPSIFF), BBNSGA(nonSharing), and BBNSGA-II.]

Figure 5.2 Number of Pareto-optimal solutions in MOLLGAs for normal 10×5-trap function

[Box plots of completeness (0 to 1) for SPEA2, FFGA, MOGA(GPSIFF), NSGA, NSGA(nonSharing), and NSGA-II.]

Figure 5.3 Box plots of Completeness in MOGAs for normal 10×5-trap function


[Box plots of completeness (0 to 1) for BBSPEA2, BBFFGA, BBMOGA(GPSIFF), BBNSGA, BBNSGA(nonSharing), and BBNSGA-II.]

Figure 5.4 Box plots of Completeness in MOLLGAs for normal 10×5-trap function

Table 5.2 Robustness for normal 10×5-trap function

MOGAs
FFGA                 0/30
NSGA                 0/30
NSGA(nonSharing)     0/30
NSGA-II              0/30
MOGA(GPSIFF)         0/30
SPEA2                0/30

MOLLGAs
BBFFGA               0/30
BBNSGA               0/30
BBNSGA(nonSharing)   30/30
BBNSGA-II            15/30
BBMOGA(GPSIFF)       30/30
BBSPEA2              30/30


[Plot: first hitting time in generations across 30 runs; curves for SPEA2 and NSGA-II.]

Figure 5.5 First hitting time in MOGAs for normal 10×5-trap function

[Plot: first hitting time in generations across 30 runs; curves for BBSPEA2, BBMOGA(GPSIFF), BBNSGA(nonSharing), and BBNSGA-II.]

Figure 5.6 First hitting time in MOLLGAs for 2-objective 10×5-trap function


[Plot: last hitting time in generations across 30 runs; curves for BBSPEA2, BBMOGA(GPSIFF), BBNSGA(nonSharing), and BBNSGA-II.]

Figure 5.7 Last hitting time in MOLLGAs for 2-objective 10×5-trap function

Table 5.3 Number of successful runs (normal 10×5-trap function)

Algorithm            First hitting time   Last hitting time
FFGA                 0/30                 0/30
NSGA                 0/30                 0/30
NSGA(nonSharing)     0/30                 0/30
NSGA-II              30/30                0/30
MOGA(GPSIFF)         0/30                 0/30
SPEA2                8/30                 0/30
BBFFGA               0/30                 0/30
BBNSGA               0/30                 0/30
BBNSGA(nonSharing)   30/30                30/30
BBNSGA-II            30/30                15/30
BBMOGA(GPSIFF)       30/30                30/30
BBSPEA2              30/30                30/30


Table 5.4 Hitting time in generations (normal 10×5-trap function)

                     First hitting time            Last hitting time
Algorithm            Avg.     Max.    Min.         Avg.     Max.    Min.
FFGA                 N/A      N/A     N/A          N/A      N/A     N/A
NSGA                 N/A      N/A     N/A          N/A      N/A     N/A
NSGA(nonSharing)     N/A      N/A     N/A          N/A      N/A     N/A
NSGA-II              43.37    72      23           N/A      N/A     N/A
MOGA(GPSIFF)         N/A      N/A     N/A          N/A      N/A     N/A
SPEA2                526.12*  780*    242*         N/A      N/A     N/A
BBFFGA               N/A      N/A     N/A          N/A      N/A     N/A
BBNSGA               N/A      N/A     N/A          N/A      N/A     N/A
BBNSGA(nonSharing)   17.33    24      12           27.86    35      22
BBNSGA-II            8.26     10      7            541.6*   723*    366*
BBMOGA(GPSIFF)       17.36    31      10           28.56    41      21
BBSPEA2              25.57    41      17           42.73    54      29

*: values computed over only the successful runs of SPEA2 and BBNSGA-II.

5.3 Shuffle 10×5-trap function

The second experiment uses the 2-objective shuffle 10×5-trap function as the test function. There are again 1024 Pareto-optimal solutions, and the parameter settings are the same as in the first experiment. For this test function, the crossover operator always disrupts the genetic linkage.

First, Figure 5.8 shows the number of Pareto-optimal solutions obtained by the MOGAs. NSGA-II obtains 10 Pareto-optimal solutions, for a completeness of 1% (10/1024). The other MOGAs cannot obtain any Pareto-optimal solutions; their completeness is 0%. Figure 5.9 shows the number of Pareto-optimal solutions obtained by the MOLLGAs. BBMOGA(GPSIFF) and BBNSGA(nonSharing) obtain all 1024 Pareto-optimal solutions at the 50th and 38th generations, respectively, for a completeness of 100% (1024/1024). BBNSGA-II finds 1017 Pareto-optimal solutions by the 800th generation, for a completeness of 99% (1017/1024). BBSPEA2 finds only 4 Pareto-optimal solutions, for a completeness of 0.39% (4/1024). In the ability and speed of finding all Pareto-optimal solutions, BBNSGA(nonSharing) is as good as BBMOGA(GPSIFF); we note explicitly that BBNSGA(nonSharing) and BBMOGA(GPSIFF) obtain all Pareto-optimal solutions in every run without missing any. Figures 5.10 and 5.11 depict box plots of the completeness of Pareto-optimal solutions of the MOGAs and MOLLGAs over the 30 runs.

Table 5.5 shows the robustness of the MOGAs and MOLLGAs for the shuffle 10×5-trap function. All MOGAs are 0/30, meaning all 30 runs failed to obtain the complete set of Pareto-optimal solutions. BBNSGA(nonSharing) and BBMOGA(GPSIFF) are 30/30, meaning all 30 of their runs successfully obtained all the Pareto-optimal solutions. BBNSGA-II is 21/30. From these results, we can be sure that BBNSGA(nonSharing) and BBMOGA(GPSIFF) obtain all Pareto-optimal solutions in every experimental run.

Figure 5.12 shows the first hitting time of the MOGAs; only NSGA-II appears in this figure. The maximum first hitting time of NSGA-II is about the 350th generation, i.e., only NSGA-II obtains a first Pareto-optimal solution (always before the 350th generation), while the others never find any Pareto-optimal solution. All MOGAs failed to reach the last hitting time in this experiment, so the corresponding figure is omitted. Figures 5.13 and 5.14 show the first and last hitting times of the MOLLGAs. The first hitting time of BBNSGA-II is before the 10th generation, the fastest at obtaining a first Pareto-optimal solution, but its last hitting time is after the 500th generation, or the run never reaches it. The last hitting times of BBMOGA(GPSIFF) and BBNSGA(nonSharing) are before the 25th generation, meaning they find all Pareto-optimal solutions before the 25th generation in a single run. Table 5.6 shows how many of the 30 independent runs of each MOGA and MOLLGA successfully reached the first and last hitting times for the shuffle 10×5-trap function. Table 5.7 reports the first and last hitting times over the 30 independent runs; N/A means the algorithm could not reach the first or last hitting time.

[Plot: number of Pareto-optimal solutions versus generations; curve for NSGA-II.]

Figure 5.8 Number of Pareto-optimal solutions in MOGAs for shuffle 10×5-trap function


[Plot: number of Pareto-optimal solutions versus generations; curves for BBSPEA2, BBMOGA(GPSIFF), BBNSGA(nonSharing), and BBNSGA-II.]

Figure 5.9 Number of Pareto-optimal solutions in MOLLGAs for shuffle 10×5-trap function

[Box plots of completeness (0 to 1) for SPEA2, FFGA, MOGA(GPSIFF), NSGA, NSGA(nonSharing), and NSGA-II.]

Figure 5.10 Box plots of Completeness in MOGAs for shuffle 10×5-trap function


[Box plots of completeness (0 to 1) for BBSPEA2, BBFFGA, BBMOGA(GPSIFF), BBNSGA, BBNSGA(nonSharing), and BBNSGA-II.]

Figure 5.11 Box plots of Completeness in MOLLGAs for shuffle 10×5-trap function

Table 5.5 Robustness for shuffle 10×5-trap function

MOGAs
FFGA                 0/30
NSGA                 0/30
NSGA(nonSharing)     0/30
NSGA-II              0/30
MOGA(GPSIFF)         0/30
SPEA2                0/30

MOLLGAs
BBFFGA               0/30
BBNSGA               0/30
BBNSGA(nonSharing)   30/30
BBNSGA-II            21/30
BBMOGA(GPSIFF)       30/30
BBSPEA2              0/30


[Plot: first hitting time in generations across 30 runs; curve for NSGA-II.]

Figure 5.12 First hitting time in MOGAs for shuffle 10×5-trap function

[Plot: first hitting time in generations across 30 runs; curves for BBSPEA2, BBMOGA(GPSIFF), BBNSGA(nonSharing), and BBNSGA-II.]

Figure 5.13 First hitting time in MOLLGAs for shuffle 10×5-trap function


[Plot: last hitting time in generations across 30 runs; curves for BBMOGA(GPSIFF), BBNSGA(nonSharing), and BBNSGA-II.]

Figure 5.14 Last hitting time in MOLLGAs for shuffle 10×5-trap function

Table 5.6 Number of successful runs (shuffle 10×5-trap function)

Algorithm            First hitting time   Last hitting time
FFGA                 0/30                 0/30
NSGA                 0/30                 0/30
NSGA(nonSharing)     0/30                 0/30
NSGA-II              30/30                0/30
MOGA(GPSIFF)         0/30                 0/30
SPEA2                0/30                 0/30
BBFFGA               0/30                 0/30
BBNSGA               0/30                 0/30
BBNSGA(nonSharing)   30/30                30/30
BBNSGA-II            30/30                21/30
BBMOGA(GPSIFF)       30/30                30/30
BBSPEA2              30/30                0/30


Table 5.7 Hitting time in generations (shuffle 10×5-trap function)

                     First hitting time            Last hitting time
Algorithm            Avg.     Max.    Min.         Avg.     Max.    Min.
FFGA                 N/A      N/A     N/A          N/A      N/A     N/A
NSGA                 N/A      N/A     N/A          N/A      N/A     N/A
NSGA(nonSharing)     N/A      N/A     N/A          N/A      N/A     N/A
NSGA-II              244.7    348     125          N/A      N/A     N/A
MOGA(GPSIFF)         N/A      N/A     N/A          N/A      N/A     N/A
SPEA2                N/A      N/A     N/A          N/A      N/A     N/A
BBFFGA               N/A      N/A     N/A          N/A      N/A     N/A
BBNSGA               N/A      N/A     N/A          N/A      N/A     N/A
BBNSGA(nonSharing)   17.66    27      12           28.7     37      22
BBNSGA-II            8.13     10      7            585.6*   893*    349*
BBMOGA(GPSIFF)       21.6     37      11           32.86    49      23
BBSPEA2              32.73    45      25           N/A      N/A     N/A

*: values computed over only the 21 successful runs of BBNSGA-II.
