
CHAPTER 2 Fuzzy Control System


The defuzzified output y* is obtained as

$$
y^{*} = \frac{\int_{\mathrm{hgt}(D)} y\,\mu_{D}(y)\,dy}{\int_{\mathrm{hgt}(D)} \mu_{D}(y)\,dy} \tag{2-7}
$$

where ∫hgt(D) denotes an integration for the continuous part of hgt(D) and a summation for the discrete part of hgt(D).
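As a quick illustration, the discrete form of (2-7) reduces to a weighted average of the output values over hgt(D). The following Python sketch (the function name and sample values are illustrative, not from the thesis) shows the computation:

```python
def defuzzify_hgt(ys, mus):
    """Discrete form of (2-7): weighted average of the output values y
    over hgt(D), weighted by their membership grades mu_D(y)."""
    num = sum(y * mu for y, mu in zip(ys, mus))
    den = sum(mus)
    return num / den

# Membership grades of a discrete output fuzzy set D (illustrative values)
ys = [1.0, 2.0, 3.0]
mus = [0.2, 0.6, 0.2]
print(round(defuzzify_hgt(ys, mus), 6))  # 2.0
```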

2.3 Fuzzy Rule Base

The fuzzy rule base consists of fuzzy IF-THEN rules. It is the heart of the fuzzy system in the sense that all other components are used to implement these rules in a reasonable and efficient manner. The fuzzy rule base comprises the following fuzzy IF-THEN rules:

Rule i: IF x_1 is A_1^i and … and x_n is A_n^i THEN y is D^i. (2-8)

The canonical fuzzy IF-THEN rules in the form of (2-8) include the following ones:

(1) Partial rules:

IF x_1 is A_1^i and … and x_m is A_m^i THEN y is D^i. (2-9)

(2) Or rules:

IF x_1 is A_1^i and … and x_m is A_m^i or x_{m+1} is A_{m+1}^i and … and x_n is A_n^i THEN y is D^i. (2-10)

(3) Fuzzy statements:

y is D^i. (2-11)

2.4 Fuzzy Inference

Fuzzy inference is a reasoning method based on fuzzy theory, whereby expert knowledge is represented by linguistic rules of the form "IF premise THEN conclusion", where the premise is a statement in fuzzy logic.

Two fuzzy inference methods are introduced as follows:

Product inference:

$$
\mu_{D'}(y) = \max_{l=1}^{M}\Big[\sup_{x \in U}\Big(\mu_{A'}(x)\prod_{i=1}^{n}\mu_{A_i^l}(x_i)\,\mu_{D^l}(y)\Big)\Big] \tag{2-12}
$$

Minimum inference:

$$
\mu_{D'}(y) = \max_{l=1}^{M}\Big[\sup_{x \in U}\min\big(\mu_{A'}(x),\, \mu_{A_1^l}(x_1), \ldots, \mu_{A_n^l}(x_n),\, \mu_{D^l}(y)\big)\Big] \tag{2-13}
$$

The product inference and the minimum inference are the most commonly used fuzzy inference methods in fuzzy systems and other fuzzy applications.
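Under a singleton fuzzifier (so that µ_A'(x) equals 1 at the measured input and the supremum in (2-12) and (2-13) is attained there), the two inference methods can be sketched in Python as follows; the rule and membership definitions are illustrative assumptions, not the thesis's rule base:

```python
import math

def product_inference(rule_mfs, out_mfs, x, y):
    """Singleton-fuzzifier form of (2-12): mu_D'(y) is the max over
    rules l of prod_i mu_{A_i^l}(x_i) * mu_{D^l}(y)."""
    return max(
        math.prod(mf(xi) for mf, xi in zip(ant, x)) * out(y)
        for ant, out in zip(rule_mfs, out_mfs)
    )

def minimum_inference(rule_mfs, out_mfs, x, y):
    """Singleton-fuzzifier form of (2-13): product replaced by min
    (the singleton grade mu_A'(x) = 1 drops out of the min)."""
    return max(
        min(min(mf(xi) for mf, xi in zip(ant, x)), out(y))
        for ant, out in zip(rule_mfs, out_mfs)
    )

# Two toy rules over one input, with triangular membership functions
tri = lambda c, w: (lambda v: max(0.0, 1.0 - abs(v - c) / w))
rules = [[tri(0.0, 1.0)], [tri(1.0, 1.0)]]   # antecedents
outs = [tri(-1.0, 2.0), tri(1.0, 2.0)]       # consequents
print(product_inference(rules, outs, [0.25], 0.5))  # 0.1875
```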

CHAPTER 3 B-spline Fuzzy-Neural Network (B-spline FNN)

The B-spline membership functions (BMFs) introduced in [7,16] are adopted in this thesis as the fuzzy membership functions. BMFs possess the property of local control and have been successfully applied to fuzzy-neural control. This is mainly due to the local control property of B-spline curves, i.e., a BMF has the elegant property of being locally tuned in a learning process. In this chapter, the properties of B-spline curves are discussed.

3.1 Knot vector and B-spline Curves

A spline is a function, usually constructed from low-order polynomial pieces joined at breakpoints with certain smoothness conditions. The breakpoints are called knots. For order α and r+1 control points, the B-spline basis functions have the knot vector T = {t_i, i = 0, 1, …, r+α} with t_0 ≤ t_1 ≤ t_2 ≤ … ≤ t_{r+α}. The following mixed types of knot vectors are adopted in this thesis.

1. The knot vector is set to be open uniform, defined as



To define B-spline basis functions, we need one more parameter: the degree of these basis functions, α. The i-th B-spline basis function of degree α, written N_{i,α}(t), is defined recursively as follows:

$$
N_{i,1}(t) = \begin{cases} 1, & t_i \le t < t_{i+1} \\ 0, & \text{otherwise,} \end{cases}
\qquad
N_{i,\alpha}(t) = \frac{t - t_i}{t_{i+\alpha-1} - t_i}\, N_{i,\alpha-1}(t) + \frac{t_{i+\alpha} - t}{t_{i+\alpha} - t_{i+1}}\, N_{i+1,\alpha-1}(t).
$$

The above is usually referred to as the B-spline blending function. The definition looks complicated, but it is not difficult to understand. If α = 1, the basis functions are step functions: N_{i,1}(t) is 1 if t lies in the i-th span [t_i, t_{i+1}) and 0 otherwise. To understand the computation of N_{i,α}(t) for α greater than 1, the triangular computation scheme can be used: all knot spans are placed in the left (first) column and all N_{i,1} basis functions in the second, as shown in Fig. 3-1.
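The recursion described above can be sketched in Python using the thesis's order convention (N_{i,1} is the step function on the i-th knot span); the function name and the test knot vector are illustrative:

```python
def bspline_basis(i, alpha, t, knots):
    """N_{i,alpha}(t) with the thesis's convention: alpha is the order,
    and N_{i,1} is the step function on the i-th span [t_i, t_{i+1})."""
    if alpha == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    d1 = knots[i + alpha - 1] - knots[i]
    if d1 > 0:  # skip terms with zero-width support (0/0 convention)
        left = (t - knots[i]) / d1 * bspline_basis(i, alpha - 1, t, knots)
    d2 = knots[i + alpha] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + alpha] - t) / d2 * bspline_basis(i + 1, alpha - 1, t, knots)
    return left + right

# Order-2 (piecewise-linear) basis on a uniform knot vector
knots = [0, 1, 2, 3, 4]
print(bspline_basis(0, 2, 1.0, knots))  # 1.0, the peak of the hat function
```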

For r+1 control points {p_0, p_1, …, p_r}, the i-th B-spline blending function of order α defines the B-spline curve C(t) = Σ_{i=0}^{r} N_{i,α}(t) p_i.

3.2 B-spline Membership Function (BMF)

As a result, the B-spline membership function (BMF) µ_A(x_q) introduced in [7,16] is expressed as a B-spline curve whose control points determine the shape of the membership function. We adopt BMFs as the fuzzy membership functions and use the SGA (to be introduced in Chapter 4) to obtain a set of optimal control points of the BMFs. To avoid an increasing number of control points, we use BMFs with a fixed number of control points as in [15], which is shown in Fig. 3-2.

Fig. 3-2. Illustration of a fixed number of control points of BMFs of order 2 (horizontal axis: fuzzy variable x_0).

Fig. 3-1 All knot spans are on the left (first) column and all degree one basis functions on the second



3.4 The Configuration of A B-spline FNN

Fig. 3-3 shows the configuration of a typical fuzzy-neural network. The system has a total of four layers. Nodes at layer I are input nodes (linguistic nodes) that represent input linguistic variables. Nodes at layer II are term nodes which act as BMFs to represent the terms of the respective linguistic variables. Each node at layer III is a fuzzy rule. Layer IV is the output layer.

3.5 A B-spline FNN Inference Method

Given the training input data x_q, q = 1, 2, …, n, and the output data y_p, p = 1, 2, …, m, the ith fuzzy rule has the following form:

The output y_p of the fuzzy inference can be derived from the following equations:

where w is the weighting vector of the network. We assume that each input has the same number of BMFs and that the ith BMF of the qth input has r+1 control points, denoted as

Fig. 3-3. The configuration of a fuzzy neural network.

$$
c_q^{i} = \{ c_{qj}^{i} \mid c_{qj}^{i} = p_j,\; j = 0, 1, 2, \ldots, r \} = [c_{q0}^{i}\; c_{q1}^{i} \cdots c_{qr}^{i}]^{T}.
$$

Each input has z fuzzy sets (BMFs). If there are h rules in fuzzy rule base, then the adjustable set of all the control points is defined as

$$
c = [c_1^{1T}\, c_2^{1T} \cdots c_n^{1T}\;\; c_1^{2T}\, c_2^{2T} \cdots c_n^{2T} \cdots c_n^{zT}]^{T}
= \{ c_{qj}^{i} \mid i = 1, 2, \ldots, z;\; q = 1, 2, \ldots, n;\; j = 0, 1, \ldots, r \} \tag{3-7}
$$

Hence, the objective of the learning algorithm is to minimize the error function:

$$
e_p(w, c) = (y_p - y_p^{*})^{2} \tag{3-8}
$$

and

$$
E(w, c) = \lVert Y - Y^{*} \rVert^{2}, \tag{3-9}
$$

where w = [w_1^T w_2^T … w_m^T]^T is a weighting vector of the fuzzy neural network, c = [c_1^{1T} c_2^{1T} … c_2^{2T} … c_2^{zT} … c_n^{zT}]^T is a control point vector of the BMFs, Y = [y_1 y_2 … y_m]^T is an m-dimensional vector of the current outputs, and Y* = [y_1* y_2* … y_m*]^T is an m-dimensional vector of the desired outputs acquired from specialists.
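A minimal sketch of the error measure (3-9), assuming plain Python lists for the current and desired output vectors:

```python
def squared_error(Y, Y_star):
    """E(w, c) = ||Y - Y*||^2 of (3-9): the sum over outputs of the
    per-output squared errors e_p of (3-8)."""
    return sum((y - ys) ** 2 for y, ys in zip(Y, Y_star))

print(squared_error([0.5, 1.0], [0.0, 2.0]))  # 1.25
```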

CHAPTER 4 Design of the FNN Identifiers by the Simplified Genetic Algorithms

In this chapter, a novel approach to adjusting both the control points of B-spline membership functions (BMFs) and the weightings of fuzzy-neural networks using a simplified genetic algorithm (SGA) is proposed. Fuzzy-neural networks are traditionally trained by gradient-based methods, which may fall into local minima during the learning process.

Genetic algorithms have drawn significant attention in various fields due to their capability of directed random search for global optimization. This motivates the use of genetic algorithms to overcome the problem encountered by the conventional learning methods. However, it is well known that the searching speed of conventional genetic algorithms is not desirable, and such algorithms are inherently disadvantaged in dealing with a vast number (over 100) of adjustable parameters in fuzzy-neural networks. In this chapter, the SGA is proposed using a sequential-search-based crossover point (SSCP) method, in which a better crossover point is determined and only the gene at the specified crossover point is crossed as a single point crossover operation. Chromosomes consisting of both the control points of BMFs and the weightings of fuzzy-neural networks are coded as an adjustable vector with real-number components and searched by the SGA. Because of the use of the SGA, faster convergence of the evolution process toward an optimal fuzzy-neural network can be achieved. Nonlinear functions approximated by fuzzy-neural networks via the SGA are demonstrated in this chapter to illustrate the effectiveness and applicability of the proposed method.

4.1 The Simplified Genetic Algorithm

To overcome the problems encountered by conventional genetic algorithms, we propose a simplified genetic algorithm (SGA) with a novel structure different from the conventional GAs to deal with a complicated situation where a vast number (over 100) of adjustable parameters are searched in the fuzzy-neural network.

4.2 Basic Concept of GAs

GAs are powerful search optimization algorithms based on the mechanics of natural selection and natural genetics. GAs can be characterized by the following features [24]:

• A scheme for encoding solutions to the problem, referred to as chromosomes;

• An evaluation function (referred to as a fitness function) that rates each chromosome relative to the others in the current set of chromosomes (referred to as a population);

• An initialization procedure for a population of chromosomes;

• A set of operators which are used to manipulate the genetic composition of the population (such as recombination, mutation, crossover, etc);

Basically, GAs are probabilistic algorithms that maintain a population of individuals (chromosomes, vectors) at each iteration. Each chromosome represents a potential solution to the problem at hand and is evaluated to give some measure of its fitness. Then, selecting the more fit individuals forms a new population. Some members of the new population undergo transformations by means of genetic operators to form new solutions. After some number of generations, it is hoped that the system converges to a near-optimal solution.

There are two primary groups of genetic operators, crossover and mutation, used by most researchers. Crossover combines the features of two parent chromosomes to form two similar offspring by swapping corresponding segments of the parents. The intuition behind the applicability of the crossover operator is information exchange between potential solutions. Mutation, on the other hand, arbitrarily alters one or more genes of a selected chromosome, by a random change with a probability equal to the mutation rate.

The intuition behind the mutation operator is the introduction of some extra variability into the population.
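The two operator families can be illustrated with a minimal sketch (the function names and parameter choices are illustrative and are not the SGA operators defined later in this chapter):

```python
import random

def one_point_crossover(p1, p2, point):
    """Conventional crossover: swap the tails of two parent
    chromosomes after the crossover point."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chrom, rate, low, high, rng=random):
    """Conventional mutation: each gene is replaced by a random value
    in [low, high] with probability equal to the mutation rate."""
    return [rng.uniform(low, high) if rng.random() < rate else g
            for g in chrom]

c1, c2 = one_point_crossover([1, 1, 1, 1], [2, 2, 2, 2], 2)
print(c1, c2)  # [1, 1, 2, 2] [2, 2, 1, 1]
```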

The GA described above, however, is a conventional one. In this chapter, we propose a simplified genetic algorithm (SGA), which is characterized by three simplified processes. Firstly, the population size is fixed and can be reduced to a minimum size of 4. Secondly, the crossover operator is simplified to a single point crossover. Thirdly, only one chromosome in a population is selected for mutation.

Details will be discussed in the following section.

4.3 Evolutionary Processes of the Simplified Genetic Algorithm (SGA)

The adjustable parameters of the FNN, namely the control points and the weightings (or either of them alone), must first be defined. In this section, we define the evolutionary processes of the SGA using both parameters w and c. For learning the adjustable parameters of the fuzzy-neural network shown in Chapter 3.4, we define the chromosome as

$$
\phi = [\,w^{T}\;\; c^{T}\;\; \phi_{\beta+1}] = [\phi_1\; \phi_2 \cdots \phi_{\beta}\; \phi_{\beta+1}] \in R^{\beta+1}, \tag{4-1}
$$

with the length

$$
\beta = m \times h + n \times z \times (r+1), \tag{4-2}
$$

where w is a set of weighting factors defined as the parameter w of (3-10), ranging within the interval D_1 = [w_min, w_max] ⊆ R, and c is a set of control points defined in (3-7), ranging within the interval D_2 = [c_min, c_max] ⊆ R. Each input has z fuzzy sets (BMFs). The gene φ_{β+1} is defined as a virtual gene, on which the single point crossover does not affect the fitness values of the population if the crossover point is chosen as j = β. The single point crossover will be introduced later.

Because a real-valued space is dealt with, in which each chromosome is coded as an adjustable vector with floating-point components, the crossover and mutation are real-number genetic operators.

4.3.1 Population Initialization

A genetic algorithm requires a population of potential solutions to be initialized and then maintained during the process. In the proposed approach, a fixed number k of population size is used to prevent the unlimited growth of population. Real number representation for potential solutions is also adopted to simplify genetic operator definitions and obtain a better performance of the genetic algorithm itself. The initial chromosomes are randomly generated within the feasible ranges, D1 and D2. The initial population with k chromosomes defined in (4-1) is randomly generated as follows:

From this initial population, a set of near-optimal parameters for the fuzzy-neural network can be evolutionarily obtained. Note that the number of chromosomes, k, needs to be an even number (to be introduced in Chapter 4.3.3).
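A possible sketch of this initialization step, assuming plain Python lists for chromosomes and illustrative sizes for k, β, and the feasible ranges:

```python
import random

def init_population(k, beta, w_len, w_range, c_range, rng=random):
    """Randomly generate k chromosomes of the form (4-1): the first
    w_len genes (weightings) are drawn from D1, the remaining
    control-point genes from D2, plus one trailing virtual gene."""
    pop = []
    for _ in range(k):
        w = [rng.uniform(*w_range) for _ in range(w_len)]
        c = [rng.uniform(*c_range) for _ in range(beta - w_len)]
        pop.append(w + c + [0.0])  # virtual gene phi_{beta+1} appended last
    return pop

pop = init_population(k=4, beta=10, w_len=4, w_range=(-1, 1), c_range=(0, 1))
print(len(pop), len(pop[0]))  # 4 11
```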

After initialization, two genetic operations: crossover and mutation are performed during procreation.

4.3.2 Fitness function

The performance of each chromosome is evaluated according to its fitness. After generations of evolution, it is expected that the genetic algorithm converges and a best chromosome with largest fitness (or smallest error) representing the optimal solution to the problem is obtained. The fitness function is defined as follows:

$$
\mathrm{fitness}(\phi) = \frac{1}{1 + E(w, c)}, \tag{4-4}
$$

so that a smaller error E of (3-9) yields a larger fitness.

4.3.3 Single Point Crossover

In order to deal with a vast number of adjustable parameters, the single point crossover is introduced in this section. Fig. 4-1 shows the difference between the traditional crossover methods and the proposed single point crossover method. Fig. 4-1 (a) and (b) show the traditional methods, which adopt one crossover point and two crossover points, respectively. Although the proposed single point crossover shown in Fig. 4-1 (c) also has two crossover points, the distance between them is only one gene (parameter); this avoids improper crossover. For each generation, the crossover operator acts on parents to produce offspring. The single point crossover operator is defined as:



where j is the crossover point determined by a sequential-search-based crossover point method or a randomly selected crossover point method introduced later, ∆ denotes the elements of offspring which remain the same as those of their parents, and



the crossed genes at position j+1 for all chromosomes are obtained as a linear combination of φ_{j+1}^i and φ_{j+1}^{i+(k/2)}:

$$
\hat{\phi}_{j+1}^{\,i} = a\,\phi_{j+1}^{\,i} + (1-a)\,\phi_{j+1}^{\,i+(k/2)}, \qquad
\hat{\phi}_{j+1}^{\,i+(k/2)} = (1-a)\,\phi_{j+1}^{\,i} + a\,\phi_{j+1}^{\,i+(k/2)},
$$

where a is a random number between 0 and 1, and Ψ̂ is the new population.
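A sketch of the single point crossover Crs(Ψ; j), assuming the arithmetic (linear-combination) crossing of the single selected gene and the pairing of chromosome i with chromosome i + k/2 described above; names and indexing are illustrative:

```python
import random

def single_point_crossover(pop, j, rng=random):
    """SGA single point crossover Crs(Psi; j): only the gene at index j
    (gene j+1 in the thesis's 1-based notation) is crossed. Chromosome
    i is paired with chromosome i + k/2, and the crossed genes are a
    linear combination of the parents' genes (a random in [0, 1])."""
    k = len(pop)
    new = [chrom[:] for chrom in pop]      # all other genes unchanged
    a = rng.random()
    for i in range(k // 2):
        g1, g2 = pop[i][j], pop[i + k // 2][j]
        new[i][j] = a * g1 + (1 - a) * g2
        new[i + k // 2][j] = (1 - a) * g1 + a * g2
    return new

pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
new = single_point_crossover(pop, 0)
print([ch[1] for ch in new])  # [0.0, 1.0, 2.0, 3.0] -- second gene untouched
```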

To determine the crossover point j for the single point crossover, we propose two kinds of search methods. One is a randomly selected crossover point (RSCP) method, which chooses a crossover point at random. The RSCP is a popular technique adopted by many researchers. Pseudo code for the randomly selected crossover point (RSCP) method is shown in Fig. 4-2 (a).

The other one is a sequential-search-based crossover point (SSCP) method, where the crossover point j is determined via a sequential search based on fitness before the crossover operation actually takes place. The search algorithm of the SSCP is similar to the local search procedure in [24] and to a sequential search of a database. Pseudo code for the sequential-search-based crossover point (SSCP) method is shown in Fig. 4-2 (b). If no satisfactory crossover point is found at this generation, the crossover point is set to j = β; the single point crossover is then performed on the virtual gene φ_{β+1}, and the fitness values of the population are not affected.

4.3.4 Sorting Operation

After crossover, the newly generated population is sorted by ranking the fitness of the chromosomes within the population, resulting in E(φ̂_1) ≤ E(φ̂_2) ≤ … ≤ E(φ̂_k). The first chromosome φ̂_1 of the sorted population Ψ̂ = [φ̂_1 φ̂_2 … φ̂_k]^T has the highest fitness value (or smallest error).

4.3.5 Mutation Operation

After sorting, the first chromosome is the best one in the population in terms of fitness. Mutation here means first copying the first (i.e., best) chromosome to the (k/2+1)th chromosome. Then, genes within the (k/2+1)th chromosome are randomly selected for mutation according to the mutation rate p_m, as shown in Fig. 4-3. Note that mutation on the selected genes is performed on a copy of the best-fit chromosome.

Fig. 4-1. Traditional crossover methods and the proposed single point crossover method: (a) a single crossover point (a traditional method); (b) two crossover points spanning multiple genes (a traditional method); (c) two crossover points spanning a single gene (the proposed single point crossover).

The genes φ_j selected for mutation within the (k/2+1)th chromosome φ̂^{(k/2+1)} are altered by the following mutation operators, which are described in (4-7) and (4-8).

Because two different intervals, D1 and D2, are defined for weightings and control points, respectively, the mutation operator is divided into two parts.

For the weightings (φ̂_j^{(k/2+1)}, j = 1, 2, …, m×h), the genes are updated within D_1 by the mutation operator (4-7); the control points are updated analogously within D_2 by (4-8).

Fig. 4-3. Traditional mutation method (a) and the proposed method (b); in the proposed method, mutation applies only to the (k/2+1)th chromosome.

Procedure Sequential-Search-based Crossover Point (j);
Begin
  Let j = 0; i = 0;
  Repeat
    Perform Ψ̂ = Crs(Ψ; i) by (4-5);
    Evaluate fitness(φ̂_1) and fitness(φ_1) by (4-4);
    If fitness(φ̂_1) > fitness(φ_1) Then j = i Else i = i + 1;
  Until fitness(φ̂_1) > fitness(φ_1) or i = β;
  Return j = i;
End

Procedure Random Selected Crossover Point (j);
Begin
  Obtain j randomly between 0 and β;
End

(a) the RSCP method    (b) the SSCP method

Fig. 4-2. Pseudo code for the randomly selected crossover point (RSCP) method and the sequential-search-based crossover point (SSCP) method.

where t is the current generation number, γ is a random number from [0,1], T is the maximal generation number, and b is a system parameter determining the degree of dependency on the iteration number. The function Δ(t, y) returns a value in the range [0, y] such that the probability of Δ(t, y) being close to 0 increases as t increases. This property causes the mutation operator to search the space uniformly at the initial stage (when t is small) and very locally at later stages, thus increasing the probability of generating children closer to their successors than a random choice would. The design of the mutation operator is based on two rationales. First, it is desirable to take large leaps in the early phase of the SGA search so that the SGA can explore the parameter space as widely as possible. Second, it is desirable to take smaller jumps in the later phase so that the SGA can direct its search toward a global minimum more effectively. Both are accomplished through the role that "time" t plays in Δ(t, y).
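The thesis's exact expression for Δ(t, y) is not reproduced in this excerpt; the sketch below uses Michalewicz's non-uniform form Δ(t, y) = y(1 − r^{(1−t/T)^b}), an assumption that matches the behaviour described above:

```python
import random

def delta(t, y, T, b=2.0, rng=random):
    """Non-uniform step Delta(t, y) in [0, y]: as t approaches T the
    value concentrates near 0, shrinking the mutation step over
    generations (Michalewicz's form, assumed here)."""
    r = rng.random()
    return y * (1.0 - r ** ((1.0 - t / T) ** b))

# Early generations allow large steps, late generations only small ones
random.seed(0)
early = max(delta(1, 1.0, 100) for _ in range(1000))
late = max(delta(99, 1.0, 100) for _ in range(1000))
print(early > late)  # True
```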

The SGA offers exciting advantages over the conventional gradient-based methods during the learning process of fuzzy-neural networks. To start with, chromosomes consisting of adjustable parameters of the fuzzy-neural network are coded as a vector with real number components. The fitness values are obtained by a mapping from the error function, defined as the difference between the outputs of the fuzzy-neural network and the desired outputs. Thus, all the best adjustable parameters of the fuzzy-neural network can be obtained by repeating genetic operations, i.e., crossover and mutation, so that an optimal fuzzy neural network satisfying an error bound condition can be evolutionarily obtained. Because of the use of the simplified genetic algorithm, faster convergence of the evolution process to search for an optimal fuzzy neural network can be achieved.

4.4 Pseudo Code for The SGA

The idea of the SGA has been introduced in previous section. Fig. 4-4 shows two kinds of SGA adopted for an off-line learning process. Pseudo code for SGA with the RSCP method and SGA with the SSCP method are shown in Fig. 4-4 (a) and Fig. 4-4 (b), respectively.

Using off-line learning and the SGA with the RSCP method shown in Fig. 4-4 (a), an additional procedure, "If fitness(φ̂_1) < fitness(φ_1) Then φ̂_1 = φ_1", is used to maintain the best fitness evolutionarily obtained so far. Keeping the best chromosome into the next generation is a popular technique in conventional genetic algorithms. For the SGA with the SSCP method, however, this additional procedure is no longer required, as shown in Fig. 4-4 (b), since a better crossover point is determined before the crossover actually takes place.

As shown in Example 1, the learning effect of the SGA with the SSCP method is superior to that of the SGA with the RSCP method.

Procedure SGA with RSCP;
Begin
  Initialize Ψ;  % generate an initial population
  While (not terminate-condition) do
    % Obtain the crossover point for off-line learning
    Perform Random Selected Crossover Point (j) in Fig. 4-2 (a);
    Perform Ψ̂ = Crs(Ψ; j);  % perform the single point crossover
    Sort Ψ̂;
    % Additional procedure for off-line learning with the RSCP
    If fitness(φ̂_1) < fitness(φ_1) Then φ̂_1 = φ_1;
    Mutate Ψ̂;  % only applied to the (k/2+1)th chromosome
  End While
End

(a) SGA with the RSCP

Procedure SGA with SSCP;
Begin
  Initialize Ψ;  % generate an initial population
  While (not terminate-condition) do
    % Obtain the crossover point for off-line learning
    Perform Sequential-Search-based Crossover Point (j) in Fig. 4-2 (b);
    Perform Ψ̂ = Crs(Ψ; j);  % perform the single point crossover
    Sort Ψ̂;
    Mutate Ψ̂;  % only applied to the (k/2+1)th chromosome
  End While
End

(b) SGA with the SSCP

Fig. 4-4 Pseudo code of SGA with the RSCP and the SSCP.
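Putting the pieces together, the following is a minimal runnable sketch of the SGA with the SSCP method on a toy error function (not a fuzzy-neural network; the sizes, the fitness form 1/(1+E), and the arithmetic single-gene crossing are assumptions carried over from the sketches above):

```python
import random

def sga(error, beta, k=4, D=(-5.0, 5.0), T=200, pm=0.3, seed=1):
    """Minimal SGA with the SSCP method (cf. Fig. 4-4 (b)), shown on a
    toy error function. Chromosomes carry beta real genes plus one
    virtual gene, and k must be even."""
    rng = random.Random(seed)
    fit = lambda ch: 1.0 / (1.0 + error(ch[:beta]))   # fitness from error
    pop = [[rng.uniform(*D) for _ in range(beta)] + [0.0] for _ in range(k)]
    pop.sort(key=fit, reverse=True)
    for _t in range(T):
        # SSCP: sequentially try crossover points until fitness improves;
        # j == beta falls back to the harmless virtual gene.
        for j in range(beta + 1):
            a = rng.random()
            cand = [ch[:] for ch in pop]
            for i in range(k // 2):
                g1, g2 = pop[i][j], pop[i + k // 2][j]
                cand[i][j] = a * g1 + (1 - a) * g2
                cand[i + k // 2][j] = (1 - a) * g1 + a * g2
            cand.sort(key=fit, reverse=True)
            if fit(cand[0]) > fit(pop[0]) or j == beta:
                break
        pop = cand
        # Mutation: copy the best chromosome over the (k/2+1)th slot,
        # then perturb its genes with probability pm.
        pop[k // 2] = pop[0][:]
        for j in range(beta):
            if rng.random() < pm:
                pop[k // 2][j] = rng.uniform(*D)
        pop.sort(key=fit, reverse=True)
    return pop[0][:beta]

best = sga(lambda x: sum(v * v for v in x), beta=3)
print(sum(v * v for v in best))  # far below the random-initialization level
```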

CHAPTER 5 INDIRECT ADAPTIVE FUZZY-NEURAL CONTROLLER

In this chapter, a constructive manner of developing on-line indirect adaptive controllers based on the SGA to achieve the control objectives is proposed. In particular, the state feedback control law with the SGA update law can be tuned on-line.

5.1 Control Objectives

Consider the nth-order nonlinear system of the form

$$
x^{(n)} = f(x, \dot{x}, \ldots, x^{(n-1)}) + g(x, \dot{x}, \ldots, x^{(n-1)})\,u, \qquad y = x, \tag{5-1}
$$

or equivalently of the form

$$
\dot{x}_1 = x_2,\;\; \ldots,\;\; \dot{x}_{n-1} = x_n,\;\; \dot{x}_n = f(\mathbf{x}) + g(\mathbf{x})\,u, \qquad y = x_1, \tag{5-2}
$$

where u ∈ R and y ∈ R are the control input and the output, and we assume the state vector x = (x, ẋ, …, x^(n−1))^T to be available for measurement. We assume that f and g are unknown functions, and that g is, without loss of generality, a strictly positive function. In [55], these systems are in normal form and have relative degree equal to n. The control objective is to design an indirect adaptive state feedback fuzzy-neural controller so that the system output y follows a given bounded reference signal y_m.

First, let e = y_m − y, e = (e, ė, …, e^(n−1))^T, and k = (k_n, …, k_1)^T ∈ R^n be such that all roots of the polynomial h(s) = s^n + k_1 s^(n−1) + … + k_n are in the open left half-plane. If the functions f and g were known, the optimal control law would be

$$
u^{*} = \frac{1}{g(\mathbf{x})}\big[-f(\mathbf{x}) + y_m^{(n)} + \mathbf{k}^{T}\mathbf{e}\big]. \tag{5-3}
$$

However, since f and g are unknown, the optimal control law (5-3) cannot be obtained. To solve this problem, we use fuzzy logic systems as approximators of the unknown functions.
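For reference, (5-3) is straightforward to evaluate when f and g are known; the sketch below uses an illustrative second-order plant (the thesis's point is precisely that f and g are unknown, so this serves only as a baseline):

```python
def ideal_control(f, g, x, e, k_gains, ym_n):
    """Ideal control law (5-3) for known f and g:
    u* = (1/g(x)) * (-f(x) + ym^(n) + k^T e)."""
    kTe = sum(ki * ei for ki, ei in zip(k_gains, e))
    return (-f(x) + ym_n + kTe) / g(x)

# Second-order toy plant: f(x) = -x1, g(x) = 2 (illustrative choices)
f = lambda x: -x[0]
g = lambda x: 2.0
u = ideal_control(f, g, x=[1.0, 0.0], e=[0.5, 0.1], k_gains=[4.0, 2.0], ym_n=0.0)
print(round(u, 6))  # 1.6, i.e. (1.0 + 0.0 + 2.2) / 2
```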

5.2 Certainty Equivalent Controller

We replace f and g in (5-3) by the fuzzy logic systems f̂(x | θ_f) and ĝ(x | θ_g), respectively. The resulting control law

$$
u_c = \frac{1}{\hat{g}(\mathbf{x}\,|\,\theta_g)}\big[-\hat{f}(\mathbf{x}\,|\,\theta_f) + y_m^{(n)} + \mathbf{k}^{T}\mathbf{e}\big] \tag{5-5}
$$

is the so-called certainty equivalent controller [56] in the adaptive control literature.

Substituting (5-5) into (5-2) and after some manipulations, we obtain the error equation

$$
\dot{\mathbf{e}} = \Lambda_c\,\mathbf{e} + \mathbf{b}_c\big[\big(\hat{f}(\mathbf{x}\,|\,\theta_f) - f(\mathbf{x})\big) + \big(\hat{g}(\mathbf{x}\,|\,\theta_g) - g(\mathbf{x})\big)u_c\big],
$$

where Λ_c is the companion matrix of h(s) and b_c = (0, …, 0, 1)^T. For any given positive definite symmetric n×n matrix Q, there exists a unique positive definite symmetric n×n matrix P which satisfies the Lyapunov equation [57]:

$$
\Lambda_c^{T} P + P\,\Lambda_c = -Q.
$$

5.3 Supervisory Control

By incorporating an additional control term u_s into u_c, the control law becomes

$$
u = u_c + u_s, \tag{5-11}
$$

where u_s is called a supervisory control. The supervisory control u_s is turned on when the error function V_e is greater than a positive constant V̄; if V_e ≤ V̄, the supervisory control u_s is turned off. That is, if the system tends to be unstable (V_e > V̄), the supervisory control is activated.

