
Chapter 3 Immune Algorithm Embedded with Particle Swarm Optimizer

3.5 Concluding Remarks

In this chapter, an efficient immune-based particle swarm optimization (IPSO) is proposed to improve the searching ability and the convergence speed. We applied the IPSO to a neuro-fuzzy classifier to solve the skin color detection problem. The advantages of the proposed IPSO method are summarized as follows: 1) we employ the advantages of PSO to improve the mutation mechanism; 2) the experimental results show that our method is more efficient than IA and PSO in terms of accuracy rate and convergence speed.

Figure 3.9: Original color images from CIT facial database.

Figure 3.10: Results of skin color detection with three-dimensional input (Y, Cb and Cr).

Chapter 4

An Evolutionary Neural Fuzzy Classifier Using Bacterial Foraging Oriented by Particle Swarm Optimization Strategy

Classification is one of the most important tasks in many different applications, such as text categorization, tone recognition, image classification, micro-array gene expression, protein structure prediction, and data classification. There are many methods to construct classifiers, such as statistical models [100], neural networks [37][39][101], and fuzzy systems [6][16][17][102]. Most existing supervised classification methods are based on traditional statistics, which can provide ideal results when the sample size tends to infinity; however, only finite samples can be acquired in practice.

In this chapter, an evolutionary neural fuzzy classifier, using a bacterial foraging strategy oriented by particle swarm optimization (BFPSO), is applied to data sets with two or more classes. The proposed BFPSO is a hybrid method that combines bacterial foraging optimization (BFO) and particle swarm optimization (PSO). The algorithm performs local search through the chemotactic movement operation of BFO, whereas the global search over the entire search space is accomplished by a PSO operator. In this way it balances exploration and exploitation, enjoying the best of both worlds.

4.1 Basic Concepts of Bacterial Foraging Optimization

Passino [103] proposed the BFO in 2002. The idea of BFO is based on the fact that natural selection tends to eliminate animals with poor "foraging strategies" and favor the propagation of the genes of animals that have successful foraging strategies. After many generations, poor foraging strategies are either eliminated or shaped into good ones. Such evolutionary principles have led scientists in the field of "foraging theory" to hypothesize that it is appropriate to model the activity of foraging as an optimization process. Take the foraging strategy of E. coli bacteria (the ones living in our intestines) for instance: it is governed by four processes, namely chemotaxis, swarming, reproduction, and elimination-and-dispersal.

4.1.1 Chemotaxis

Chemotaxis is achieved through swimming and tumbling. Depending on the rotation of its flagella, each bacterium decides whether to move in a predefined direction (swimming) or in an altogether different direction (tumbling) throughout its lifetime.

Let $S$ denote the bacterial population size and $N_c$ the length of the lifetime of the bacteria, measured by the number of chemotactic steps they take during their life. Let $C(i) > 0$, $i = 1, 2, \ldots, S$, denote a basic chemotactic step size that we will use to define the lengths of steps during runs. To represent a tumble, a unit-length random direction, say $\varphi(j)$, is generated; this will be used to define the direction of movement after a tumble. In particular, we let

$$\theta^i(j+1, k, l) = \theta^i(j, k, l) + C(i)\,\varphi(j) \tag{4.1}$$

where $\theta^i(j, k, l)$ represents the location of the $i$th bacterium at the $j$th chemotactic step, $k$th reproduction step, and $l$th elimination-dispersal event, and $C(i)$ is the size of the step taken in the random direction specified by the tumble.

Then, the movement of the $i$th bacterium at the $j$th chemotactic step is given by (4.1). With the activity of run or tumble taken at each step of the chemotaxis process, a step fitness, denoted $J(i, j, k, l)$, will be evaluated. If the cost $J(i, j+1, k, l)$ at $\theta^i(j+1, k, l)$ is better (lower) than the cost at $\theta^i(j, k, l)$, then another step of size $C(i)$ in the same direction will be taken; and again, if that step results in a position with a better cost value than the previous one, another step is taken. This swim continues as long as it reduces the cost, but only up to a maximum number of steps, $N_s$. This means that the cell will tend to keep moving if it is headed in the direction of an increasingly favorable environment.

4.1.2 Swarming

A bacterium that has found a good path to food should attract other bacteria, so that they reach the desired place more rapidly. Swarming makes the bacteria congregate into groups and hence move as concentric patterns of groups with high bacterial density. Mathematically, swarming can be represented as

$$J_{cc}\big(\theta, P(j,k,l)\big) = \sum_{i=1}^{S}\left[-d_{attract}\exp\left(-w_{attract}\sum_{m=1}^{p}\big(\theta_m-\theta_m^i\big)^2\right)\right] + \sum_{i=1}^{S}\left[h_{repellant}\exp\left(-w_{repellant}\sum_{m=1}^{p}\big(\theta_m-\theta_m^i\big)^2\right)\right] \tag{4.2}$$

where $J_{cc}(\theta, P(j,k,l))$ is the cell-to-cell signaling value that is added to the actual objective function to produce a time-varying cost function; $S$ is the total number of bacteria; $p$ is the number of parameters to be optimized that are present in each bacterium; and $d_{attract}$, $w_{attract}$, $h_{repellant}$ and $w_{repellant}$ are different coefficients that are to be judiciously chosen.

4.1.3 Reproduction

After $N_c$ chemotactic steps, a reproduction step is taken. Let $N_{re}$ be the number of reproduction steps to be taken. The health of each bacterium is calculated as the sum of the step fitness during its life, that is,

$$J^i_{health} = \sum_{j=1}^{N_c+1} J(i, j, k, l) \tag{4.3}$$

where $N_c$ is the maximum number of steps in a chemotaxis process. For convenience, we assume that $S$ is a positive even integer and let $S_r = S/2$. The population is sorted in order of ascending accumulated cost (a higher accumulated cost means that a bacterium did not get as many nutrients during its lifetime of foraging, and hence is not as "healthy" and is unlikely to reproduce); then the $S_r$ least healthy bacteria die and the other $S_r$ healthiest bacteria each split into two bacteria, which are placed at the same location. Thus, the population size remains constant, which is very convenient when coding the algorithm.
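The sort, kill, and split procedure can be sketched as follows; the toy population and health values are illustrative assumptions.

```python
import numpy as np

def reproduce(population, health):
    """Sort bacteria by accumulated cost (ascending = healthiest first),
    kill the worst half, and let each survivor split into two copies placed
    at the same location, so the population size stays constant."""
    S = len(population)
    Sr = S // 2                    # assumes S is even, as in the text
    order = np.argsort(health)     # lowest accumulated cost first
    survivors = population[order[:Sr]]
    return np.concatenate([survivors, survivors.copy()], axis=0)

pop = np.array([[0.0], [1.0], [2.0], [3.0]])
health = np.array([3.0, 0.5, 2.0, 1.0])   # lower accumulated cost = healthier
new_pop = reproduce(pop, health)
```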

4.1.4 Elimination-and-Dispersal

Let $N_{ed}$ be the number of elimination-dispersal events. Chemotaxis provides a basis for local search, and the reproduction process speeds up convergence. However, chemotaxis and reproduction alone are not enough for global optimum searching: bacteria may get stuck around their initial positions or at local optima. Changing the diversity of the population, either gradually or suddenly, makes it possible for BFO to escape such traps. In BFO, the dispersal event happens after a certain number of reproduction processes: some bacteria are chosen, according to a preset probability $p_{ed}$, to be killed and moved to another position within the environment.
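The dispersal event can be sketched as follows, assuming a simple box-bounded search space (the bounds are an assumption for illustration).

```python
import numpy as np

def eliminate_disperse(population, p_ed, bounds, rng):
    """With probability p_ed, replace each bacterium by a fresh random
    position inside the search bounds; otherwise leave it in place."""
    low, high = bounds
    pop = population.copy()
    for i in range(len(pop)):
        if rng.random() < p_ed:
            pop[i] = rng.uniform(low, high, size=pop.shape[1])
    return pop

rng = np.random.default_rng(1)
pop = np.zeros((6, 2))
new_pop = eliminate_disperse(pop, p_ed=0.25, bounds=(-1.0, 1.0), rng=rng)
```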

4.2 Learning Algorithms for the NFS Model

BFO is based on the foraging behavior of Escherichia coli (E. coli) bacteria present in the human intestine and has already been applied to many engineering problems, such as optimal control [104][105] and machine learning [106]. However, bacterial foraging strategies with a fixed step size suffer from two main problems. If the step size is very large, the precision becomes low, although the bacterium quickly reaches the vicinity of the optimum point; it then moves around the optimum for the remaining chemotactic steps. If the step size is very small, it takes many chemotactic steps to reach the optimum point, and the rate of convergence decreases [107].

In PSO, a particle represents a potential solution which is a point in the search space. Each particle has a fitness value and a velocity to adjust its flying direction according to the best experiences of the swarm to search for the global optimum in the solution space. In Eq. (1.3), the inertia weight is used to balance the global and local

search abilities. A large inertia weight is more appropriate for global search, and a small inertia weight facilitates local search.
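The inertia-weighted PSO update referred to here (Eq. (1.3)) can be sketched as follows; the values of `w`, `c1`, and `c2` are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity-and-position update for a single particle.
    A large inertia weight w favors global search; a small w, local search."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

rng = np.random.default_rng(2)
x = np.array([0.0, 0.0])
v = np.array([0.1, -0.1])
x_new, v_new = pso_update(x, v, pbest=np.array([1.0, 1.0]),
                          gbest=np.array([2.0, 2.0]), rng=rng)
```

Since both attractors lie in the positive quadrant and the inertia term on the first coordinate is positive, the particle is pulled toward them.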

The proposed BFPSO algorithm, which combines BFO with the PSO algorithm, is endowed with high convergence speed and commendable accuracy. Stated otherwise, PSO performs a global search and provides a near-optimal solution very quickly, which is then followed by a local search by BFO that fine-tunes the solution and yields an optimum of high accuracy. PSO tends to become trapped in local optima despite its high convergence speed, whereas BFO has a very slow convergence speed but is able to avoid becoming trapped in local optima. Figure 4.1 shows the flowchart of the proposed BFPSO algorithm.

The brief pseudo code of the proposed BFPSO method has been provided below:

Step 1: Initialization

$p$: dimension of the search space.

$S$: the number of bacteria in the population.

$N_c$: the number of chemotactic steps.

$N_s$: the number of swimming steps.

$N_{re}$: the number of reproduction steps.

$N_{ed}$: the number of elimination-dispersal events.

$p_{ed}$: the probability that each bacterium will be eliminated-dispersed.

$C(i)$: the size of the step taken in the random direction specified by the tumble.

$c_1$: the cognitive learning rate.

$c_2$: the social learning rate.

$w$: the coefficient of the inertia term that controls the exploratory properties.

Step 2: Elimination-dispersal loop: $l = l + 1$.

Step 3: Reproduction loop: $k = k + 1$.

Step 4: Chemotaxis loop: $j = j + 1$.

[Step 4.1] For $i = 1, 2, \ldots, S$, take a chemotactic step for bacterium $i$ as follows.

[Step 4.2] Evaluate the cost function $J(i, j, k, l)$, then let $J_{last} = J(i, j, k, l)$.

[Step 4.3] Tumble: generate a unit-length random direction $\varphi(j)$ and move bacterium $i$ by $C(i)\,\varphi(j)$, as in (4.1).

[Step 4.6] Go to the next bacterium.

Step 5: If $j < N_c$, go to Step 4, since the life of the bacteria is not over.

Step 6: Reproduction: compute the health of bacterium $i$:

$$J^i_{health} = \sum_{j=1}^{N_c+1} J(i, j, k, l)$$

Sort the bacteria and the chemotactic parameters $C(i)$ in order of ascending cost $J_{health}$ (higher cost means lower health). The $S_r$ bacteria with the highest $J_{health}$ values die, and the other $S_r$ bacteria with the best values split (the copies that are made are placed at the same location as their parent).

Step 7: If $k < N_{re}$, go to Step 3.

Step 8: Elimination-dispersal: eliminate and disperse bacteria with probability $p_{ed}$.

Step 9: If $l < N_{ed}$, go to Step 2; otherwise end and output the results.
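Putting the steps above together, a compact end-to-end sketch on a toy cost function might look as follows. This is only an interpretation of the pseudocode, not the thesis code: the cost function and parameter values are assumptions, and using the PSO velocity to orient the tumble direction is one plausible reading of the hybridization.

```python
import numpy as np

def bfpso(cost, dim=2, S=8, Nc=20, Ns=4, Nre=3, Ned=2, p_ed=0.25,
          C=0.1, w=0.7, c1=1.2, c2=1.2, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, size=(S, dim))
    vel = np.zeros((S, dim))
    pbest = pop.copy()
    pbest_J = np.array([cost(b) for b in pop])
    g = int(np.argmin(pbest_J))
    gbest, gbest_J = pbest[g].copy(), pbest_J[g]

    for l in range(Ned):                       # elimination-dispersal loop
        for k in range(Nre):                   # reproduction loop
            health = np.zeros(S)
            for j in range(Nc):                # chemotaxis loop
                for i in range(S):
                    J_last = cost(pop[i])
                    # PSO-oriented tumble: the velocity replaces the purely
                    # random direction (an assumption about the hybridization)
                    r1, r2 = rng.random(dim), rng.random(dim)
                    vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pop[i])
                              + c2 * r2 * (gbest - pop[i]))
                    phi = vel[i] / (np.linalg.norm(vel[i]) + 1e-12)
                    pop[i] = pop[i] + C * phi
                    J = cost(pop[i])
                    for _ in range(Ns):        # swim while improving
                        if J >= J_last:
                            break
                        J_last = J
                        pop[i] = pop[i] + C * phi
                        J = cost(pop[i])
                    health[i] += J
                    if J < pbest_J[i]:
                        pbest[i], pbest_J[i] = pop[i].copy(), J
                        if J < gbest_J:
                            gbest, gbest_J = pop[i].copy(), J
            # reproduction: healthiest half splits, worst half dies
            keep = np.argsort(health)[:S // 2]
            pop = np.concatenate([pop[keep]] * 2, axis=0)
            vel = np.concatenate([vel[keep]] * 2, axis=0)
            pbest = np.concatenate([pbest[keep]] * 2, axis=0)
            pbest_J = np.concatenate([pbest_J[keep]] * 2, axis=0)
        for i in range(S):                     # elimination-dispersal
            if rng.random() < p_ed:
                pop[i] = rng.uniform(-1, 1, size=dim)
    return gbest, gbest_J

best, best_J = bfpso(lambda th: float(np.sum(th ** 2)))
```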

Figure 4.1: Flowchart of proposed BFPSO method.

4.3 Illustrative Examples

In this section, we evaluate the classification performance of the proposed NFS-BFPSO method using two well-known benchmark data sets and one skin color detection problem. The first example uses the iris data, and the second uses the Wisconsin breast cancer data. The two benchmark data sets are available from the University of California, Irvine, via anonymous ftp at ftp://ftp.ics.uci.edu/pub/machine-learning-databases. In the following simulations, the parameters and the number of training epochs were chosen based on the desired accuracy; training of the NFS with BFPSO was stopped once that accuracy was reached.

Example 1: Iris Data Classification

The Fisher-Anderson iris data consist of four input measurements, sepal length (sl), sepal width (sw), petal length (pl), and petal width (pw), on 150 specimens of the iris plant. Three species of iris were involved, Iris setosa, Iris versicolor, and Iris virginica, and each species contains 50 instances. The measurements are shown in Figure 4.2.

In the iris data experiments, 25 instances with four features from each species were randomly selected as the training set (i.e., a total of 75 training patterns were used as the training data set) and the remaining instances were used as the testing set.
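The 25-instances-per-species training split can be sketched as follows; the all-zero feature matrix is a stand-in for the real iris measurements, used only to illustrate the indexing.

```python
import numpy as np

def per_class_split(X, y, n_train=25, seed=0):
    """Randomly pick n_train instances of each class for training;
    the remaining instances form the test set."""
    rng = np.random.default_rng(seed)
    train_idx = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        train_idx.extend(rng.choice(idx, size=n_train, replace=False))
    train_idx = np.array(sorted(train_idx))
    test_idx = np.setdiff1d(np.arange(len(y)), train_idx)
    return train_idx, test_idx

# stand-in for the 150-pattern, 3-class iris data
y = np.repeat([0, 1, 2], 50)
X = np.zeros((150, 4))
tr, te = per_class_split(X, y)
```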

Once the NFS was trained, all 150 test patterns of the iris data were presented to the trained NFS, and the re-substitution error was computed. In this example, three fuzzy rules are adopted. After 4000 generations, the final fitness value was 0.9278.

Figure 4.3 (a)-(f) shows the distribution of the training patterns and the final assignment of the fuzzy rules (i.e., the distribution of the input membership functions). Since the region covered by a Gaussian membership function is unbounded, in Figure 4.3 (a)-(f) the boundary of each ellipse represents a rule with a firing strength of 0.5. We compared the testing accuracy of our proposed method with that of other methods: the neural fuzzy system with bacterial foraging optimization (NFS-BFO) and the neural fuzzy system with particle swarm optimization (NFS-PSO). The experiments calculated the classification accuracy and the average values obtained on the testing set using the NFS-BFO method, the NFS-PSO method, and the proposed NFS-BFPSO method.

Figure 4.2: Iris data: Iris setosa, Iris versicolor, and Iris virginica.

During the learning phase, the learning curves from the proposed NFS-BFPSO method, the NFS-BFO method, and the NFS-PSO method are shown in Figure 4.4.

Table 4.1 shows that the experiments with the NFS-BFPSO method result in high accuracy, ranging from 96% to 98.67%. The mean re-substitution accuracy was 97.6%, and the average classification accuracy of the NFS-BFPSO method was better than that of the other methods. Table 4.2 compares the classification results of the NFS-BFPSO method with other methods [28][102][108-110] on the iris data. The results show that the proposed NFS-BFPSO method achieves a comparable or better average re-substitution accuracy.

(a) For the Sepal Length and Sepal Width dimensions.

(b) For the Petal Length and Petal Width dimensions.

(c) For the Sepal Length and Petal Length dimensions.

(d) For the Sepal Width and Petal Width dimensions.

(e) For the Sepal Width and Petal Length dimensions.

(f) For the Sepal Length and Petal Width dimensions.

Figure 4.3: The distribution of input training patterns and final assignment of three rules.

Figure 4.4: Learning curves of the NFS-BFPSO method, the NFS-BFO method, and the NFS-PSO method.

Table 4.1: Classification accuracy using various methods for the iris data.

Experiment #    NFS-BFO    NFS-PSO    NFS-BFPSO
1               96         98.67      98.67
2               92         93.33      96
3               97.33      94.67      98.67
4               97.33      98.67      97.33
5               94.67      94.67      97.33
Average (%)     95.47      96         97.6

Table 4.2: Average re-substitution accuracy comparison of various models for the iris data classification problem.

Models Average re-substitution accuracy (%)

FEBFC [102] 96.91

SANFIS [28] 97.33

FMMC [108] 97.3

FUNLVQ+GFENCE [109] 96.3

Wu-and-Chen’s [110] 96.21

NFS-BFPSO 97.6

Example 2: Wisconsin Breast Cancer Diagnostic Data Classification

The Wisconsin breast cancer diagnostic data set contains 699 patterns distributed into two output classes, "benign" and "malignant." Each pattern consists of nine input features: clump thickness, uniformity of cell size, uniformity of cell shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli, and mitoses. 458 patterns belong to the benign class and the remaining 241 patterns to the malignant class. Since 16 patterns contained missing values, we used 683 patterns to evaluate the performance of the proposed NFS-BFPSO method. To compare the performance with other models, we used half of the 683 patterns as the training set and the remaining patterns as the testing set.
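Discarding the patterns with missing values can be sketched as follows; the toy arrays are stand-ins for the real UCI data, used only to illustrate the filtering.

```python
import numpy as np

def drop_incomplete(X, y):
    """Remove patterns containing missing (NaN) feature values, as done
    with the 16 incomplete Wisconsin breast cancer patterns."""
    complete = ~np.isnan(X).any(axis=1)
    return X[complete], y[complete]

# toy stand-in: 5 patterns, one with a missing feature value
X = np.array([[1., 2.], [3., np.nan], [5., 6.], [7., 8.], [9., 0.]])
y = np.array([0, 1, 0, 1, 0])
Xc, yc = drop_incomplete(X, y)
```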

Experimental conditions were the same as in the previous experiment. The training patterns were randomly chosen, and the remaining patterns were used for testing. The experiments calculated the classification accuracy and the average values obtained on the testing set by the NFS-BFO method, the NFS-PSO method, and the proposed NFS-BFPSO method.

During the supervised learning phase, 4000 epochs of training were performed.

Figure 4.5 shows the membership functions for each input feature. The learning curves from the proposed NFS-BFPSO method, the NFS-BFO method, and the

NFS-PSO method are shown in Figure 4.6. The performance of the NFS-BFPSO method is better than the performance of all other models.

Table 4.3 shows that the experiments with the NFS-BFPSO method result in high accuracy, ranging from 97.66% to 98.54%. The mean re-substitution accuracy was 97.95%. We compared the testing accuracy of our model with that of other methods [26][28][101][102][111]. Table 4.4 shows the comparison between the learned NFS-BFPSO method and other fuzzy, neural network, and neural fuzzy systems. The average classification accuracy of the NFS-BFPSO method is better than that of the other methods.

Figure 4.5: Input membership functions for breast cancer classification.

Figure 4.6: Learning curves from the NFS-BFPSO method, the NFS-BFO method and the NFS-PSO method.

Table 4.3: Classification accuracy for the Wisconsin breast cancer diagnostic data.

Experiment #    NFS-BFO    NFS-PSO    NFS-BFPSO
1               95.32      96.49      97.66
2               95.61      97.08      98.54
3               93.86      94.44      97.66
4               94.74      97.37      97.95
5               94.74      96.49      97.95
Average (%)     94.85      96.37      97.95

Table 4.4: Average accuracy comparison of various models for Wisconsin breast cancer diagnostic data.

Models Average re-substitution accuracy (%)

NNFS [101] 94.15

FEBFC [102] 95.14

SANFIS [28] 96.3

NEFCLASS [26] 92.7

MSC [111] 94.9

NFS-BFPSO 97.95

Example 3: Skin Color Detection

The description of the system is the same as in Section 3.4. Unlike the previous chapter, which used four rules to constitute the neuro-fuzzy classifier, we use three fuzzy rules in this example. In addition, the parameter learning method is changed to the BFPSO method.

In this example, the performance of the NFS-BFPSO method is compared with the NFS-BFO method and the NFS-PSO method. The learning curves are shown in Figure 4.7, where the performance of the proposed NFS-BFPSO method is superior to that of the other methods. In addition, the training and testing accuracy rates of the various models are tabulated in Table 4.5.

The CIT facial database consists of complex backgrounds and diverse lighting.

Hence, from the comparison data listed in Table 4.5, the average test accuracy rate is 82.39% for the NFS-BFO method, 83.64% for the NFS-PSO method, and 85.82% for the proposed NFS-BFPSO method. This demonstrates that even on the more complex CIT database, the proposed NFS-BFPSO method maintains a superior accuracy rate. The color images from the CIT database are shown in Figure 4.8. A well-trained network can generate binary outputs (1/0 for skin/non-skin) to detect a facial region. Figure 4.9 shows that our model accurately determines a facial region.

Figure 4.7: The learning curves of the three methods using the CIT database.

Table 4.5: Performance comparison with various existing models from the CIT database.

Method                           NFS-BFPSO    NFS-PSO    NFS-BFO
Average training accuracy rate   97.63%       96.77%     96.5%
Average testing accuracy rate    85.82%       83.64%     82.39%

Figure 4.8: Original face images from CIT database.

Figure 4.9: Results of skin color detection with three-dimensional input (Y, Cb, Cr).

4.4 Concluding Remarks

This chapter proposes an efficient evolutionary learning method, using a bacterial foraging strategy oriented by particle swarm optimization (BFPSO), for the neural fuzzy system (NFS) in classification applications. The proposed BFPSO method makes judicious use of the exploration and exploitation abilities of the search space and is therefore likely to avoid false and premature convergence in many cases.

The advantages of the proposed BFPSO method are summarized as follows: 1) BFPSO involves an elite-selection mechanism that gives near-optimal solutions a chance to reproduce. 2) BFPSO records the best previous solution and the global best solution to guide the evolution. 3) BFPSO balances the exploration and exploitation abilities of the search. Three examples showed that the proposed NFS-BFPSO method improves system performance in terms of fast learning convergence and a high correct classification rate.

Chapter 5

Nonlinear System Control Using

Functional-Link-Based Neuro-Fuzzy Network Model Embedded with

Modified Particle Swarm Optimizer

Nonlinear system control is an important tool that is adopted to improve control performance and achieve robust fault-tolerant behavior. Among nonlinear control techniques, those based on artificial neural networks and fuzzy systems have become popular research topics in recent years [112-114], because classical control theory usually requires a mathematical model to design the controller, and inaccuracy in the mathematical modeling of plants usually degrades the performance of the controller, especially for nonlinear and complex control problems [115]. In contrast, both fuzzy system controllers and artificial neural network controllers provide key advantages over traditional adaptive control systems. Although traditional neural networks can learn from data and feedback, the meaning associated with each neuron and each weight in the network is not easily interpreted. Fuzzy logic models, on the other hand, are easily interpreted, because they use linguistic terms and the structure of IF-THEN rules. However, fuzzy systems lack an effective learning algorithm for refining the membership functions to minimize output errors.

According to the literature review mentioned before, it can be said that, in contrast to

