
CHAPTER 3 B-spline Fuzzy Neural Network (B-spline FNN)

3.2 B-spline Membership Function (BMF)

As a result, the B-spline membership function (BMF) $\mu_A(x_q)$ introduced in [7,16] is expressed as a B-spline curve over its control points,

$\mu_A(x_q) = \sum_{j=0}^{r} c_j N_{j,\alpha}(x_q),$

where $N_{j,\alpha}(\cdot)$ is the jth B-spline basis function of order $\alpha$ and the $c_j$ are control points. We take BMFs as the membership functions and use the SGA (to be introduced in Chapter 4) to obtain a set of optimal control points of the BMFs. To avoid an increased number of control points, we use BMFs with a fixed number of control points as in [15], which is shown in Fig. 3-2.
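To make the construction concrete, the sketch below evaluates a BMF of order 2 as such a B-spline curve; the uniform knot placement, the interval [0, 1], and the example control-point values are illustrative assumptions rather than the exact construction of [7,16].

import numpy as np

def bspline_basis(j, order, knots, x):
    # Cox-de Boor recursion for the j-th B-spline basis function of the given order.
    if order == 1:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left_den = knots[j + order - 1] - knots[j]
    right_den = knots[j + order] - knots[j + 1]
    left = 0.0 if left_den == 0 else (x - knots[j]) / left_den * bspline_basis(j, order - 1, knots, x)
    right = 0.0 if right_den == 0 else (knots[j + order] - x) / right_den * bspline_basis(j + 1, order - 1, knots, x)
    return left + right

def bmf(x, control_points, order=2, x_min=0.0, x_max=1.0):
    # mu_A(x) = sum_j c_j * N_{j,order}(x); a uniform knot vector is assumed here.
    r = len(control_points) - 1                      # r + 1 control points
    knots = np.linspace(x_min, x_max, r + 1 + order)
    return sum(c * bspline_basis(j, order, knots, x)
               for j, c in enumerate(control_points))

# Example: an order-2 BMF with five (illustrative) control points on [0, 1].
print(bmf(0.37, control_points=[0.0, 0.4, 1.0, 0.4, 0.0]))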

Fig. 3-2. Illustration of a fixed number of control points of BMFs of order 2 over the fuzzy variable $x_0$.

Fig. 3-1. All knot spans are shown in the left (first) column and all degree-one basis functions in the second column.



3.4 The Configuration of a B-spline FNN

Fig. 3-3 shows the configuration of a typical fuzzy-neural network. The system has a total of four layers. Nodes at layer I are input nodes (linguistic nodes) that represent input linguistic variables. Nodes at layer II are term nodes which act as BMFs to represent the terms of the respective linguistic variables. Each node at layer III is a fuzzy rule. Layer IV is the output layer.
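A minimal sketch of this four-layer structure is given below. It assumes product inference at the rule layer and a weighted sum at the output layer, which is one common realization and not necessarily the exact inference used here; the BMF evaluator sketched in Section 3.2 is taken as given.

import numpy as np

def fnn_forward(x, bmfs, rules, weights):
    # Layer I:   x is the list of n input linguistic variables.
    # Layer II:  bmfs[q][i](x[q]) is the membership degree of the i-th term of input q.
    mu = [[term(x[q]) for term in bmfs[q]] for q in range(len(x))]
    # Layer III: each rule node computes a firing strength (product inference assumed).
    strengths = [np.prod([mu[q][rule[q]] for q in range(len(x))]) for rule in rules]
    # Layer IV:  each output node combines the firing strengths with its weightings (assumed weighted sum).
    return [float(np.dot(w_p, strengths)) for w_p in weights]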

3.5 A B-spline FNN Inference Method

Given the training input data $x_q$, $q = 1, 2, \ldots, n$, and the output data $y_p$, $p = 1, 2, \ldots, m$, the ith fuzzy rule has the following form:

Rule $i$: IF $x_1$ is $A_1^i$ and $x_2$ is $A_2^i$ and $\ldots$ and $x_n$ is $A_n^i$, THEN $y_1$ is $B_1^i$, $\ldots$, $y_m$ is $B_m^i$,

where the $A_q^i$ are fuzzy sets represented by BMFs and the $B_p^i$ are the consequent fuzzy sets. The pth output $y_p$ of the fuzzy inference can be derived from the rule firing strengths together with the

weighting vector. We assume that each input has the same number of BMFs and that the ith BMF of the qth input has $r+1$ control points, denoted as

$c_q^i = \{\, c_{qj}^i \mid c_{qj}^i = p_j,\ j = 0, 1, \ldots, r \,\} = [\, c_{q0}^i \;\; c_{q1}^i \;\; \cdots \;\; c_{qr}^i \,]^T .$

Fig. 3-3. The configuration of a fuzzy neural network.

Each input has $z$ fuzzy sets (BMFs). If there are $h$ rules in the fuzzy rule base, then the adjustable set of all the control points is defined as

$c = [\, c_1^{1T} \; c_1^{2T} \; \cdots \; c_1^{zT} \; c_2^{1T} \; c_2^{2T} \; \cdots \; c_2^{zT} \; \cdots \; c_n^{zT} \,]^T = \{\, c_{qj}^i \mid i = 1, 2, \ldots, z,\ q = 1, 2, \ldots, n,\ j = 0, 1, \ldots, r \,\}. \quad (3-7)$

Hence, the objective of the learning algorithm is to minimize the error function:

$e_p(w, c) = (y_p - y_p^*)^2 \quad (3-8)$

and

$E(w, c) = \| Y - Y^* \|^2, \quad (3-9)$

where $w = [\, w_1^T \; w_2^T \; \cdots \; w_m^T \,]^T$ is the weighting vector of the fuzzy neural network, $c = [\, c_1^{1T} \; c_1^{2T} \; \cdots \; c_1^{zT} \; c_2^{1T} \; c_2^{2T} \; \cdots \; c_2^{zT} \; \cdots \; c_n^{zT} \,]^T$ is the control point vector of the BMFs, $Y = [\, y_1 \; y_2 \; \cdots \; y_m \,]^T$ is an m-dimensional vector of the current outputs, and $Y^* = [\, y_1^* \; y_2^* \; \cdots \; y_m^* \,]^T$ is an m-dimensional vector of the desired outputs acquired from specialists.
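Expressed in code, the learning objective (3-8)-(3-9) amounts to the following sketch, where the current outputs Y would come from the network inference and the desired outputs Y* from the specialists' data:

import numpy as np

def output_errors(Y, Y_star):
    # e_p(w, c) = (y_p - y_p*)^2 for each output p, as in (3-8).
    return [(y - y_s) ** 2 for y, y_s in zip(Y, Y_star)]

def total_error(Y, Y_star):
    # E(w, c) = ||Y - Y*||^2, as in (3-9).
    return float(np.sum(np.square(np.asarray(Y) - np.asarray(Y_star))))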

CHAPTER 4 Design of the FNN Identifiers by the Simplified Genetic Algorithms

In this chapter, a novel approach to adjusting both the control points of B-spline membership functions (BMFs) and the weightings of fuzzy-neural networks using a simplified genetic algorithm (SGA) is proposed. Fuzzy-neural networks are traditionally trained by gradient-based methods and may fall into a local minimum during the learning process.

Genetic algorithms have drawn significant attention in various fields due to their capability of directed random search for global optimization. This motivates the use of genetic algorithms to overcome the problems encountered by conventional learning methods. However, it is well known that the search speed of conventional genetic algorithms is unsatisfactory. Thus far, such conventional genetic algorithms have been inherently disadvantaged in dealing with a vast number (over 100) of adjustable parameters in fuzzy-neural networks. In this chapter, the SGA is proposed by using a sequential-search-based crossover point (SSCP) method, in which a better crossover point is determined and only the gene at the specified crossover point is crossed, as a single point crossover operation. Chromosomes consisting of both the control points of the BMFs and the weightings of the fuzzy-neural network are coded as an adjustable vector with real-number components and searched by the SGA. Because of the use of the SGA, faster convergence of the evolution process in searching for an optimal fuzzy-neural network can be achieved. Nonlinear functions approximated by fuzzy-neural networks via the SGA are demonstrated in this chapter to illustrate the effectiveness and applicability of the proposed method.

4.1 The Simplified Genetic Algorithm

To overcome the problems encountered by conventional genetic algorithms, we propose a simplified genetic algorithm (SGA) with a novel structure different from the conventional GAs to deal with a complicated situation where a vast number (over 100) of adjustable parameters are searched in the fuzzy-neural network.

4.2 Basic Concept of GAs

GAs are powerful search optimization algorithms based on the mechanics of natural selection and natural genetics. GAs can be characterized by the following features [24]:

• A scheme for encoding solutions to the problem, referred to as chromosomes;

• An evaluation function (referred to as a fitness function) that rates each chromosome relative to the others in the current set of chromosomes (referred to as a population);

• An initialization procedure for a population of chromosomes;

• A set of operators which are used to manipulate the genetic composition of the population (such as recombination, mutation, crossover, etc.).

Basically, GAs are probabilistic algorithms which maintain a population of individuals (chromosomes, vectors) at each iteration. Each chromosome represents a potential solution to the problem at hand and is evaluated to give some measure of its fitness. A new population is then formed by selecting the fitter individuals. Some members of the new population undergo transformations by means of genetic operators to form new solutions. After some number of generations, it is hoped that the system converges to a near-optimal solution.

There are two primary groups of genetic operators, crossover and mutation, used by most researchers. Crossover combines the features of two parent chromosomes to form two similar offspring by swapping corresponding segments of the parents. The intuition behind the applicability of the crossover operator is information exchange between potential solutions. Mutation, on the other hand, arbitrarily alters one or more genes of a selected chromosome, by a random change with a probability equal to the mutation rate.

The intuition behind the mutation operator is the introduction of some extra variability into the population.

The GA described above, however, is a conventional one. In this chapter, we propose a simplified genetic algorithm (SGA), which is characterized by three simplified processes. Firstly, the population size is fixed and can be reduced to a minimum size of 4. Secondly, the crossover operator is simplified to a single point crossover. Thirdly, only one chromosome in a population is selected for mutation.

Details will be discussed in the following section.

4.3 Evolutionary Processes of the Simplified Genetic Algorithm (SGA)

The adjustable parameters of the FNN, namely the control points and the weightings (or either set alone), can be chosen as the search variables. In this section, we define the evolutionary processes of the SGA using both parameter sets $w$ and $c$. For learning the adjustable parameters of the fuzzy-neural network shown in Chapter 3.4, we define the chromosome as

$\phi = [\, w^T \; c^T \; \phi_{\beta+1} \,] = [\, \phi_1 \; \phi_2 \; \cdots \; \phi_\beta \; \phi_{\beta+1} \,] \in R^{\beta+1}, \quad (4-1)$

with the length

$\beta = m \times h + n \times (z \times (r+1)), \quad (4-2)$

where $w$ is the set of weighting factors of the fuzzy-neural network defined in Chapter 3.5, ranging within the interval $D_1 = [w_{\min}, w_{\max}] \subseteq R$, and $c$ is the set of control points defined in (3-7), ranging within the interval $D_2 = [c_{\min}, c_{\max}] \subseteq R$. Each input has $z$ fuzzy sets (BMFs). The component $\phi_{\beta+1}$ is defined as a virtual gene: if the crossover point is chosen as $j = \beta$, the single point crossover is performed on $\phi_{\beta+1}$ and does not affect the fitness values of the population. The single point crossover will be introduced later.

Because a real-valued space is dealt with, where each chromosome is coded as an adjustable vector with floating-point components, the crossover and mutation are real-number genetic operators.
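As an illustration of the encoding (4-1)-(4-2), the chromosome can be assembled as follows; the flattening order of w and c and the initial value of the virtual gene are assumptions.

import numpy as np

def chromosome_length(m, h, n, z, r):
    # beta = m*h + n*z*(r + 1), as in (4-2).
    return m * h + n * z * (r + 1)

def make_chromosome(w, c, virtual_gene=0.0):
    # phi = [w^T  c^T  phi_{beta+1}] in R^{beta+1}, as in (4-1); the flattening order is assumed.
    return np.concatenate([np.ravel(w), np.ravel(c), [virtual_gene]])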

4.3.1 Population Initialization

A genetic algorithm requires a population of potential solutions to be initialized and then maintained during the process. In the proposed approach, a fixed population size k is used to prevent the unlimited growth of the population. Real-number representation of potential solutions is also adopted to simplify the genetic operator definitions and obtain a better performance of the genetic algorithm itself. The initial chromosomes are randomly generated within the feasible ranges, D1 and D2. The initial population with k chromosomes defined in (4-1) is randomly generated as follows:

$\Psi = [\, \phi^1 \; \phi^2 \; \cdots \; \phi^k \,]^T,$

from which a set of near-optimal parameters for the fuzzy-neural network can be evolutionarily obtained. Note that the number of chromosomes, $k$, needs to be an even number (as will be explained in Chapter 4.3.3).

After initialization, two genetic operations: crossover and mutation are performed during procreation.
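A minimal sketch of this initialization step is given below, assuming uniform sampling of the weighting genes from D1 and of the control-point genes from D2; the default intervals and the example sizes are illustrative only.

import numpy as np

def init_population(k, m, h, n, z, r, D1=(-2.0, 2.0), D2=(0.0, 1.0), seed=0):
    # Randomly generate k chromosomes phi^1, ..., phi^k within the feasible ranges D1 and D2.
    assert k % 2 == 0, "the number of chromosomes k must be even"
    rng = np.random.default_rng(seed)
    n_w = m * h                    # weighting genes drawn from D1
    n_c = n * z * (r + 1)          # control-point genes drawn from D2
    w = rng.uniform(D1[0], D1[1], size=(k, n_w))
    c = rng.uniform(D2[0], D2[1], size=(k, n_c))
    virtual = np.zeros((k, 1))     # the virtual gene phi_{beta+1}
    return np.hstack([w, c, virtual])

# Example with illustrative sizes: k = 4, one output, 7 rules, 2 inputs, 7 BMFs, 15 control points each.
population = init_population(4, m=1, h=7, n=2, z=7, r=14)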

4.3.2 Fitness function

The performance of each chromosome is evaluated according to its fitness. After generations of evolution, it is expected that the genetic algorithm converges and the best chromosome, with the largest fitness (or smallest error), representing the optimal solution to the problem, is obtained. The fitness of a chromosome is obtained by a mapping from the error function $E(w, c)$ of (3-9), so that a smaller error corresponds to a larger fitness.
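As one concrete realization of this mapping, the sketch below assumes the common reciprocal form fitness = 1/(1 + E(w, c)), with E taken from (3-9); the helper names are placeholders.

def fitness(chromosome, decode, evaluate_error):
    # decode:         splits the chromosome back into (w, c)           (assumed helper)
    # evaluate_error: computes E(w, c) of (3-9) over the training data (assumed helper)
    w, c = decode(chromosome)
    return 1.0 / (1.0 + evaluate_error(w, c))   # assumed reciprocal-of-error mapping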

4.3.3 Single Point Crossover

In order to deal with a vast number of adjustable parameters, the single point crossover is introduced in this section. Fig. 4-1 shows the difference between the traditional crossover methods and the proposed single point crossover method. Figs. 4-1 (a) and (b) show the traditional methods, which adopt one crossover point and two crossover points, respectively. Although the proposed single point crossover shown in Fig. 4-1 (c) also has two crossover points, the distance between them is only one gene (parameter), which avoids improper crossover. For each generation, the crossover operator acts on parents to produce offspring. The single point crossover operator is defined as:



where $j$ is the crossover point determined by the sequential-search-based crossover point method or the randomly selected crossover point method introduced later, $\Delta$ denotes the elements of the offspring that remain the same as those of their parents, and the gene at position $j+1$ of each chromosome pair is replaced by a linear combination of $\phi_{j+1}^i$ and $\phi_{j+1}^{i+(k/2)}$; $a$ is a random number between 0 and 1, and $\hat{\Psi}$ is the new population.
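A sketch of Crs(Ψ; j) along these lines is shown below. Only the gene at position j+1 (0-based column j) of each parent pair is replaced by a linear combination of φ_{j+1}^i and φ_{j+1}^{i+(k/2)}; the particular arithmetic blend used here is an assumed form of that combination.

import numpy as np

def single_point_crossover(population, j, rng=None):
    # Crs(Psi; j): cross only the gene at position j+1 of each parent pair (sketch).
    rng = rng or np.random.default_rng()
    offspring = population.copy()              # all other genes remain those of the parents
    k = population.shape[0]
    a = rng.uniform(0.0, 1.0)                  # random number between 0 and 1
    for i in range(k // 2):
        x = population[i, j]                   # gene at position j+1 (0-based column j) of parent i
        y = population[i + k // 2, j]          # same gene of parent i + k/2
        offspring[i, j] = a * x + (1.0 - a) * y
        offspring[i + k // 2, j] = (1.0 - a) * x + a * y
    return offspring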

To determine the crossover point j for the single point crossover, we propose two kinds of search methods. One is a randomly selected crossover point (RSCP) method, which chooses a crossover point at random. The RSCP is a popular technique adopted by many researchers. Pseudo code for the randomly selected crossover point (RSCP) method is shown in Fig. 4-2 (a).

The other one is a sequential-search-based crossover point (SSCP) method, where the crossover point j is determined via a sequential search based on fitness before the crossover operation actually takes place. The search algorithm of SSCP is similar to the local search procedure in [24] and a sequential search of a database. Pseudo code for the sequential-search-based crossover point (SSCP) method is shown in Fig. 4-2 (b). If there is no satisfied crossover point at this generation, then let the crossover point be

$j = \beta$. Then the single point crossover is performed on the virtual gene, $\phi_{\beta+1}$, and the fitness values of the population are not affected.

4.3.4 Sorting Operation

After crossover, the newly generated population is sorted by ranking the fitness of the chromosomes within the population, resulting in $E(\hat{\phi}_1) \le E(\hat{\phi}_2) \le \cdots \le E(\hat{\phi}_k)$. The first chromosome $\hat{\phi}_1$ of the sorted population $\hat{\Psi} = [\, \hat{\phi}_1 \; \hat{\phi}_2 \; \cdots \; \hat{\phi}_k \,]^T$ has the highest fitness value (or smallest error).

4.3.5 Mutation Operation

After sorting, the first chromosome is the best one in the population in terms of fitness. Mutation here means first copying the first (i.e., best) chromosome to the (k/2+1)th chromosome. Then, genes within the (k/2+1)th chromosome are randomly selected for mutation according to the mutation rate $p_m$, as shown in Fig. 4-3. Note that mutation on the selected genes is performed based on a copy of the best-fit chromosome, $\hat{\phi}_{(k/2+1)}$. The genes $\phi_j$ selected for mutation within the (k/2+1)th chromosome $\hat{\phi}_{(k/2+1)}$ are altered by the following mutation operators, described in (4-7) and (4-8).

Fig. 4-1. Traditional crossover methods and the proposed single point crossover method: (a) a single crossover point (a traditional method); (b) two crossover points spanning multiple genes (a traditional method); (c) two crossover points spanning a single gene (the proposed single point crossover).

Because two different intervals, D1 and D2, are defined for weightings and control points, respectively, the mutation operator is divided into two parts.

For the weightings ($\hat{\phi}_j^{(k/2+1)}$, $j = 1, 2, \ldots, m \times h$), the genes are updated by:

Fig. 4-3. The traditional mutation method (a) and the proposed mutation method (b), in which only the (k/2+1)th chromosome is mutated.

(a) the RSCP method

Procedure Randomly Selected Crossover Point (j);
Begin
Obtain j randomly between 0 and β;
End

(b) the SSCP method

Procedure Sequential-Search-Based Crossover Point (j);
Begin
Let j = 0; i = 0;
Repeat
Perform Ψ̂ = Crs(Ψ; i) by (3-5);
Evaluate fitness(φ̂1) and fitness(φ1) by (4-4);
If fitness(φ̂1) > fitness(φ1) Then j = i; Else i = i + 1;
Until fitness(φ̂1) > fitness(φ1) or i = β;
Return j = i;
End

Fig. 4-2. Pseudo code for (a) the randomly selected crossover point (RSCP) method and (b) the sequential-search-based crossover point (SSCP) method.

where $t$ is the current iteration, $\gamma$ is a random number from [0, 1], and $T$ is the maximal generation number.

A separate system parameter determines the degree of dependency on the iteration number.

The function $\Delta(t, y)$ returns a value in the range $[0, y]$ such that the probability of $\Delta(t, y)$ being close to 0 increases as $t$ increases. This property causes the mutation operator to search the space uniformly at the initial stage (when $t$ is small) and very locally at later stages, thus increasing the probability of generating children closer to their successor than a random choice would. The design of the mutation operator is based on two rationales. First, it is desirable to take large leaps in the early phase of the SGA search so that the SGA can explore the parameter space as widely as possible. Second, it is also desirable to take smaller jumps in the later phase of the SGA search so that the SGA can direct its search toward a global minimum more effectively. Both are accomplished through the role that "time" $t$ plays in $\Delta(t, y)$.
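The sketch below implements a non-uniform mutation with exactly this behaviour, assuming the classical form Δ(t, y) = y·(1 − γ^((1 − t/T)^b)), where γ is the random number from [0, 1] and b stands for the system parameter governing the dependency on the iteration number; the exact expressions of (4-7) and (4-8) may differ from this form.

import numpy as np

def delta(t, y, T, b=2.0, rng=None):
    # Delta(t, y) in [0, y]; tends toward 0 as t approaches T (assumed classical form).
    rng = rng or np.random.default_rng()
    gamma = rng.uniform(0.0, 1.0)              # random number from [0, 1]
    return y * (1.0 - gamma ** ((1.0 - t / T) ** b))

def mutate_gene(value, lower, upper, t, T, rng=None):
    # Move a selected gene up or down inside its feasible interval [lower, upper].
    rng = rng or np.random.default_rng()
    if rng.uniform() < 0.5:
        return value + delta(t, upper - value, T, rng=rng)
    return value - delta(t, value - lower, T, rng=rng)

In the SGA, such an update would be applied only to the genes of the (k/2+1)th chromosome selected with probability p_m, with [lower, upper] taken as D1 or D2 depending on whether the gene is a weighting or a control point.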

The SGA offers exciting advantages over the conventional gradient-based methods during the learning process of fuzzy-neural networks. To start with, chromosomes consisting of adjustable parameters of the fuzzy-neural network are coded as a vector with real number components. The fitness values are obtained by a mapping from the error function, defined as the difference between the outputs of the fuzzy-neural network and the desired outputs. Thus, all the best adjustable parameters of the fuzzy-neural network can be obtained by repeating genetic operations, i.e., crossover and mutation, so that an optimal fuzzy neural network satisfying an error bound condition can be evolutionarily obtained. Because of the use of the simplified genetic algorithm, faster convergence of the evolution process to search for an optimal fuzzy neural network can be achieved.

4.4 Pseudo Code for The SGA

The idea of the SGA has been introduced in previous section. Fig. 4-4 shows two kinds of SGA adopted for an off-line learning process. Pseudo code for SGA with the RSCP method and SGA with the SSCP method are shown in Fig. 4-4 (a) and Fig. 4-4 (b), respectively.

Using off-line learning and the SGA with the RSCP method shown in Fig. 4-4 (a), an additional procedure, "If fitness(φ̂1) < fitness(φ1) Then φ̂1 = φ1", is used to maintain the best fitness evolutionarily obtained so far. Keeping the best chromosome in the next generation is a popular technique in conventional genetic algorithms. For the SGA with the SSCP method, however, this additional procedure is no longer required, as shown in Fig. 4-4 (b), since a better crossover point is determined before the crossover operation takes place. As shown in Example 1, the learning effect of the SGA with the SSCP method is superior to that of the SGA with the RSCP method.

Procedure SGA with RSCP
Begin
Initialize Ψ;  % generate an initial population
While (not terminate-condition) do
% Obtain the crossover point for off-line learning
Perform Randomly Selected Crossover Point (j) in Fig. 4-2 (a);
Perform Ψ̂ = Crs(Ψ; j);  % perform the single point crossover
Sort Ψ̂;
% Additional procedure for off-line learning with the RSCP
If fitness(φ̂1) < fitness(φ1) Then φ̂1 = φ1;
Mutate Ψ̂;  % applied only to the (k/2+1)th chromosome
End While
End

(a) SGA with the RSCP

Procedure SGA with SSCP
Begin
Initialize Ψ;  % generate an initial population
While (not terminate-condition) do
% Obtain the crossover point for off-line learning
Perform Sequential-Search-Based Crossover Point (j) in Fig. 4-2 (b);
Perform Ψ̂ = Crs(Ψ; j);  % perform the single point crossover
Sort Ψ̂;
Mutate Ψ̂;  % applied only to the (k/2+1)th chromosome
End While
End

(b) SGA with the SSCP

Fig. 4-4. Pseudo code of the SGA with the RSCP and with the SSCP.

CHAPTER 5 INDIRECT ADAPTIVE FUZZY-NEURAL CONTROLLER

In this chapter, a constructive way to develop on-line indirect adaptive controllers based on the SGA to achieve the control objectives is proposed. In particular, the state feedback control law with the SGA update law can be tuned on-line.

5.1 Control Objectives

Consider the nth-order nonlinear systems of the form

$x^{(n)} = f(x, \dot{x}, \ldots, x^{(n-1)}) + g(x, \dot{x}, \ldots, x^{(n-1)})\, u, \qquad y = x, \quad (5-1)$

or equivalently of the form

$\dot{x}_1 = x_2,\ \dot{x}_2 = x_3,\ \ldots,\ \dot{x}_n = f(x) + g(x)\, u, \qquad y = x_1, \quad (5-2)$

where the state vector $x = (x_1, x_2, \ldots, x_n)^T = (x, \dot{x}, \ldots, x^{(n-1)})^T$ is assumed to be available for measurement. We assume that $f$ and $g$ are unknown functions and that $g$ is, without loss of generality, a strictly positive function. In [55], these systems are in normal form and have relative degree equal to $n$. The control objective is to design an indirect adaptive state feedback fuzzy-neural controller so that the system output $y$ follows a given bounded reference signal $y_m$.

First, let $e = y_m - y$, $e = (e, \dot{e}, \ldots, e^{(n-1)})^T$, and $k = (k_n, \ldots, k_1)^T \in R^n$ be such that all roots of the polynomial $h(s) = s^n + k_1 s^{n-1} + \cdots + k_n$ are in the open left half-plane. If the functions $f$ and $g$ are known, then the optimal control law is

$u^{*} = \frac{1}{g(x)} \left[ -f(x) + y_m^{(n)} + k^T e \right]. \quad (5-3)$

However, since $f$ and $g$ are unknown, the optimal control law (5-3) cannot be obtained.

To solve this problem, we use the fuzzy logic systems as approximators to approximate the unknown functions.

5.2 Certainty Equivalent Controller

We replace $f$ and $g$ in (5-3) by the fuzzy logic systems $\hat{f}(x \mid w_f, c_f)$ and $\hat{g}(x \mid w_g, c_g)$, respectively. The resulting control law

$u_c = \frac{1}{\hat{g}(x \mid w_g, c_g)} \left[ -\hat{f}(x \mid w_f, c_f) + y_m^{(n)} + k^T e \right] \quad (5-5)$

is the so-called certainty equivalent controller [56] in the adaptive control literature.
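In code, the certainty equivalent law (5-5) can be sketched as below, with the fuzzy-neural approximators passed in as callables; the small positive floor on ĝ is an added numerical safeguard, not part of the formulation.

import numpy as np

def certainty_equivalent_control(x, e, k, ym_n, f_hat, g_hat, g_floor=1e-3):
    # u_c = (1 / g_hat(x)) * (-f_hat(x) + ym^(n) + k^T e), cf. (5-5).
    # x: state vector, e: error vector (e, e', ..., e^(n-1)), k: feedback gains (k_n, ..., k_1),
    # ym_n: n-th derivative of the reference signal, f_hat / g_hat: fuzzy-neural approximators.
    g_val = max(g_hat(x), g_floor)             # keep the division well defined (assumed safeguard)
    return (-f_hat(x) + ym_n + float(np.dot(k, e))) / g_val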

Substituting (5-5) into (5-2) and after some manipulations, we obtain the error equation

$\dot{e} = \Lambda_c e + b_c \left[ \hat{f}(x \mid w_f, c_f) - f(x) + \big(\hat{g}(x \mid w_g, c_g) - g(x)\big) u_c \right],$

where $\Lambda_c$ is the companion matrix associated with $h(s)$ and $b_c = (0, \ldots, 0, 1)^T$. Since $h(s)$ has all its roots in the open left half-plane, there exists a unique positive definite symmetric $n \times n$ matrix $P$ which satisfies the Lyapunov equation [57]

$\Lambda_c^T P + P \Lambda_c = -Q,$

where $Q$ is an arbitrary $n \times n$ positive definite matrix.

5.3 Supervisory Control

By incorporating a control term $u_s$ into $u_c$, the control law becomes

$u = u_c + u_s, \quad (5-11)$

where $u_s$ is called a supervisory control. The supervisory control $u_s$ is turned on when the error function $V_e$ is greater than a positive constant $\bar{V}$. If $V_e \le \bar{V}$, then the supervisory control $u_s$ is turned off. That is, if the system tends to be unstable ($V_e > \bar{V}$), then the supervisory control $u_s$ forces $V_e \le \bar{V}$. In this way, $u_s$ acts like a supervisor.
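The switching rule can be sketched as follows; compute_us stands in for the supervisory expression (built from the bounds of Assumption 5.1) and is treated here as an assumed helper.

def total_control(u_c, V_e, V_bar, compute_us):
    # u = u_c + u_s, with u_s active only while the error function V_e exceeds V_bar.
    u_s = compute_us() if V_e > V_bar else 0.0   # supervisory control turned on/off
    return u_c + u_s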

Substituting (5-11) into (5-2), the error equation is modified by the supervisory term accordingly. In order that the right-hand side of (5-13) be nonpositive, we need to know the bounds of $f$ and $g$. We assume the following.

Assumption 5.1. We can determine functions $f^U(x)$, $g^U(x)$, and $g_L(x)$ such that $|f(x)| \le f^U(x)$ and $g_L(x) \le g(x) \le g^U(x)$ for $x \in U_c$, where $f^U(x) < \infty$, $g^U(x) < \infty$, and $g_L(x) > 0$ for $x \in U_c$.

Because of Assumption 5.1, the plant (5-2) can be viewed as "poorly understood," but not "totally unknown." Note that in Assumption 5.1 we are only required to know state-dependent bounds of $f$ and $g$, which is less restrictive than requiring fixed bounds for all $x \in U_c$. Finally, the overall control scheme of the SIAFC is shown in Fig. 5-1.

Fig. 5-1. The overall control scheme of the SIAFC (the SGA-based adaptive law with fitness functions for the approximators $\hat{f}$ and $\hat{g}$).

5.4 Simulation Results

This section presents the simulation results of the proposed on-line SGA-based indirect adaptive fuzzy-neural controller (SIAFC) with the SSCP method for a class of unknown nonlinear dynamical systems, to illustrate the stability of the closed-loop system. The inverted pendulum system used here is shown in Fig. 5-2. Let $x_1 = \theta$ be the angle of the pendulum with respect to the vertical line.

Consider the dynamic equations of the inverted pendulum system as follows:

$\dot{x}_1 = x_2,$

$\dot{x}_2 = \dfrac{g \sin x_1 - \dfrac{m l x_2^2 \cos x_1 \sin x_1}{m_c + m}}{l \left( \dfrac{4}{3} - \dfrac{m \cos^2 x_1}{m_c + m} \right)} + \dfrac{\dfrac{\cos x_1}{m_c + m}}{l \left( \dfrac{4}{3} - \dfrac{m \cos^2 x_1}{m_c + m} \right)}\, u, \quad (5-16)$

where $g = 9.8$ meter/sec$^2$ is the acceleration due to gravity, $m_c$ is the mass of the cart, $l$ is the half-length of the pole, $m$ is the mass of the pole, and $u$ is the control input. In this example, we assume $m_c = 1$ kg, $m = 0.1$ kg, and $l = 0.5$ meter. Clearly, (5-16) is in the form of (5-1); thus, the SIAFC is adopted to control the system. In this example, each input of the BMF fuzzy-neural network has 7 BMFs. All of the BMFs are of order $\alpha = 2$ and each BMF has 15 control points. A population size $k = 4$ is assumed. The adjustable parameters $w_f$ and $c_f$ of $\hat{f}(x_1, x_2)$ lie in the intervals $D_1 = [-2, 2]$ and $D_2 = [0, 1]$, respectively, and the adjustable parameters $w_g$ and $c_g$ of $\hat{g}(x_1, x_2)$ lie in the intervals $D_1 = [-1.5, 1.5]$ and $D_2 = [0, 1]$, respectively. We choose the reference signal $y_m = 0$ (case 1) and $y_m(t) = 0.1 \sin(t)$ (case 2) in the following simulations. The initial state is $x(0) = [\pi/60, \ 0]^T$.
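For reference, the plant (5-16) with the stated parameters can be simulated as sketched below; the Euler integration, the step size, and the zero placeholder controller are illustrative assumptions, not the procedure used in the thesis.

import numpy as np

G, M_C, M, L = 9.8, 1.0, 0.1, 0.5      # gravity, cart mass, pole mass, pole half-length

def pendulum_dynamics(x, u):
    # State derivatives of (5-16) with x = (x1, x2) = (theta, theta_dot).
    x1, x2 = x
    denom = L * (4.0 / 3.0 - M * np.cos(x1) ** 2 / (M_C + M))
    f = (G * np.sin(x1) - M * L * x2 ** 2 * np.cos(x1) * np.sin(x1) / (M_C + M)) / denom
    g = (np.cos(x1) / (M_C + M)) / denom
    return np.array([x2, f + g * u])

def simulate(controller, x0=(np.pi / 60.0, 0.0), dt=0.01, steps=1000):
    # Simple Euler integration of the closed loop (illustrative only).
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for step in range(steps):
        u = controller(x, step * dt)
        x = x + dt * pendulum_dynamics(x, u)
        trajectory.append(x.copy())
    return np.array(trajectory)

# Example: open-loop response from x(0) = (pi/60, 0) with u = 0.
trajectory = simulate(lambda x, t: 0.0)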

To apply the SIAFC to this system, the bounds $f^U$, $g^U$, and $g_L$ should be obtained.

Fig. 5-2. The inverted pendulum system.

From Figs. 5-3, 5-4, 5-8, and 5-9, the SIAFC can control the inverted pendulum to follow the desired trajectory. As shown in Fig. 5-3, the position error reaches a bounded error ($V_e \le \bar{V} = 0.01$); in Fig. 5-8, the position error likewise reaches a bounded error. Therefore, the tracking performance is very good, as shown in Fig. 5-3 (case 1) and Fig. 5-8 (case 2), in which $y_m$ is the reference trajectory and $x_1$ is the system output. The tracking performance of the angular velocity trajectory is also very good, as shown in Figs. 5-4 (case 1) and 5-9 (case 2), in which $\dot{y}_m$ is the reference trajectory and $x_2$ is the system angular velocity. As shown in Figs. 5-5 (case 1) and 5-10 (case 2), the chattering effect of the

