
Chapter 1 Introduction

1.3 Thesis Overview

This thesis is organized as follows:

Chapter 1: We describe the background, motivation, main contributions, and framework of this thesis.

Chapter 2: We describe the evolutionary learning of fuzzy-neural networks using a reduced simulated annealing optimization algorithm (RSAOA). Offline learning for the fuzzy-neural network is considered using the RSAOA. The simulation results show that it provides a suitable learning method for the fuzzy-neural network.

Chapter 3: A simulated annealing indirect adaptive fuzzy-neural control scheme is proposed for a class of single-input single-output (SISO) nonlinear systems. The control scheme incorporates a reduced simulated annealing algorithm and fuzzy-neural networks into indirect adaptive control design. In addition, in order to guarantee that the system states are confined to the safe region, a safe control term is incorporated into the control scheme.

Chapter 4: A simulated annealing adaptive fuzzy-neural control scheme is proposed for a class of single-input single-output (SISO) nonlinear systems. The control scheme incorporates a reduced simulated annealing algorithm and fuzzy-neural networks into backstepping design. In addition, in order to guarantee that the system states are confined to the safe region, a safe control term is incorporated into the control scheme.

Chapter 5: A simulated annealing adaptive fuzzy-neural control scheme is proposed for a class of multiple-input multiple-output (MIMO) nonlinear systems. The control scheme incorporates a reduced simulated annealing algorithm and fuzzy-neural networks into backstepping design. In addition, in order to guarantee that the system states are confined to the safe region, a safe control term is incorporated into the control scheme.

Chapter 6: Based on the reduced simulated annealing algorithm, a DC servomotor experiment is performed to verify the effectiveness of the proposed method in real-time control.

Chapter 7: Conclusions are given in the final chapter of this thesis.

Chapter 2

Evolutionary Learning of Fuzzy-Neural Networks Using a Reduced Simulated Annealing Optimization

Algorithm

A novel method of adjusting the weights of fuzzy-neural networks using a reduced SA optimization algorithm (RSAOA) is proposed in this chapter. This method can be used to search for the optimal parameters. The RSAOA is applied in function approximation. The simulation results show that the RSAOA is a good learning algorithm for fuzzy neural networks.

2.1 Fuzzy-Neural Networks

A fuzzy-neural network is generally a fuzzy inference system constructed from the structure of a neural network. A learning algorithm is used to adjust the weights of the fuzzy inference system [2], [3]. Fig. 2-1 shows the configuration of a fuzzy-neural network [15]. The system has a total of four layers. Nodes at layer I are input nodes that represent the input linguistic variables. Nodes at layer II represent the values of the membership functions of the linguistic variables.

At layer III, the nodes hold the values of the fuzzy basis vector ξ; each node at layer III corresponds to a fuzzy rule. Layers III and IV are fully connected by the weights θ = w_p = [w_p1 w_p2 ... w_ph]^T, i.e., the adjustable parameters. Layer IV is the output layer, which produces the network output y(x).

Fig. 2-1. Configuration of fuzzy neural network

The fuzzy inference engine uses fuzzy IF-THEN rules to perform a mapping from the training input data x_q, q = 1, 2, ..., n, with x^T = [x_1 x_2 ... x_n] ∈ ℜ^n, to the output. Using product inference, center-average defuzzification, and singleton fuzzification, the output of the fuzzy-neural network can be expressed as:

where B^i denotes the consequent fuzzy set of the i-th rule and ξ_i is the i-th element of the fuzzy basis vector. By adjusting the weights w_p of the fuzzy-neural network, the learning algorithm attempts to minimize the error function

e(w_p) = (y_p − y*_p)^2  (2-4)

for a single-output system, or

E(w) = ||Y − Y*||^2  (2-5)

for a multiple-output system, where Y = [Y_1 Y_2 ... Y_m]^T is an m-dimensional vector of the actual outputs of the fuzzy-neural network, Y* = [Y*_1 Y*_2 ... Y*_m]^T is an m-dimensional vector of the desired outputs, and w is the weighting vector of the fuzzy-neural network for the outputs.
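To make the four-layer structure concrete, the forward pass of the network described above can be sketched as follows. This is a minimal illustration, assuming Gaussian membership functions and product inference; the helper names (gaussian_mf, fbf, fnn_output) and the rule-parameter layout are illustrative, not definitions from this thesis.

```python
import numpy as np

def gaussian_mf(x, centers, sigmas):
    """Layer II: Gaussian membership values of the input (assumed shape)."""
    return np.exp(-((x - centers) / sigmas) ** 2)

def fbf(x, centers, sigmas):
    """Layer III: fuzzy basis vector xi.

    x: (n,) input vector; centers, sigmas: (h, n) rule parameters.
    Each rule's firing strength is the product of its memberships,
    normalized over all h rules (center-average defuzzification).
    """
    strengths = np.prod(gaussian_mf(x, centers, sigmas), axis=1)
    return strengths / np.sum(strengths)

def fnn_output(x, w, centers, sigmas):
    """Layer IV: y(x) = w^T xi, with w the adjustable weighting vector."""
    return np.dot(w, fbf(x, centers, sigmas))

def error(w, x, y_star, centers, sigmas):
    """Squared error (2-4) for a single-output system."""
    return (fnn_output(x, w, centers, sigmas) - y_star) ** 2
```

Because the output is a normalized weighted sum, setting all weights to the same value c yields exactly y(x) = c, which is a quick sanity check on an implementation.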

2.2 Reduced Simulated Annealing Optimization (RSAO) Algorithm for Off-line Learning

The standard SA algorithm [67] first initializes a solution space in which many solution sets exist. Through successive perturbations and selections, the SA algorithm finds the best solution set, which is the optimum solution set of the SA algorithm. For fuzzy-neural networks, however, the SA algorithm requires an off-line learning process.

This makes real-time control applications difficult. Therefore, in this thesis, we propose a reduced simulated annealing (RSA) algorithm to alleviate the computational load. The RSA algorithm is characterized by two features: (1) one perturbation at each temperature, and (2) a compact perturbation mechanism and a special cooling schedule applied to the state according to the cost function.

The cost function is defined as

Φ = −1 / (1 + E)  (2-6)

where E is the estimation error function defined in (2-5). In a greedy search, at each temperature the current solution w_old is perturbed into a new solution w_new. Let ΔΦ = Φ(w_new) − Φ(w_old). If ΔΦ < 0, the new solution w_new is accepted; otherwise, the new solution is abandoned and the current solution w_old is preserved. This method finds only a local optimum rather than the global optimum. The SA algorithm, however, adopts the concept of the Boltzmann probability distribution, so it can escape from local optima and approach the global optimum. The Boltzmann acceptance probability is defined as

p = exp(−ΔΦ / (K_b T)),

where K_b denotes the Boltzmann constant and T is the temperature. If p > r, where r is chosen at random in the interval [0, 1], the new solution w_new is accepted; otherwise, the current solution w_old is preserved. Because of this probabilistic condition, the SA algorithm can escape from local optima and approach the global optimum.
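The greedy rule plus the Boltzmann condition described above can be sketched as a single acceptance test. This is a minimal illustration; in practice the Boltzmann constant K_b is often folded into the temperature, which is an assumption here rather than something the thesis states.

```python
import math
import random

def accept(delta_phi, temperature, kb=1.0, rng=random):
    """SA acceptance rule: greedy on improvements, Boltzmann on uphill moves."""
    if delta_phi < 0:                                  # improvement: always accept
        return True
    p = math.exp(-delta_phi / (kb * temperature))      # Boltzmann probability
    return p > rng.random()                            # accept if p > r, r in [0, 1)
```

At high temperature p stays close to 1, so uphill moves are accepted frequently; as the temperature falls, p collapses toward 0 and the rule becomes effectively greedy.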

The reduced SAO algorithm is discussed as follows:

‹ Initial configuration space: Let w = [w_1 w_2 ... w_m]^T be a solution in the configuration space.

‹ Cost function: The cost function is given in (2-6).

‹ Perturbation mechanism: The goal of perturbation is to produce a new solution from the present solution. In this thesis, we design one perturbation at each temperature, and the perturbation mechanism is given as follows:


‹ Acceptance condition: The acceptance probability function for the new solution is defined as

‹ Cooling schedule: The cooling schedule models a crystallizing process. If the temperature decreases quickly, the crystallization time is short but the crystal easily develops defects; if the temperature decreases slowly, crystallization takes longer but the crystal is more nearly perfect.

Here, the cooling schedule is given as follows:
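The overall off-line loop, combining one perturbation per temperature, the cost (2-6), the Boltzmann acceptance rule, and cooling, can be sketched as follows. This is a sketch under assumptions: the uniform perturbation radius, the clamping to [w_min, w_max], and the geometric cooling factor stand in for the thesis's own compact perturbation mechanism and cooling schedule, which are given by its equations.

```python
import math
import random

def rsa_minimize(E, w0, w_min=-10.0, w_max=10.0, T0=1000.0,
                 cooling=0.95, iters=100, rng=random):
    """Minimize an error function E(w) over a list of weights w."""
    phi = lambda ws: -1.0 / (1.0 + E(ws))          # cost (2-6)
    cur, cur_phi = list(w0), phi(w0)
    best, best_phi = list(cur), cur_phi            # best-ever solution
    T = T0
    for _ in range(iters):
        # one perturbation at this temperature, clamped to the interval
        cand = [min(w_max, max(w_min, wi + rng.uniform(-1.0, 1.0)))
                for wi in cur]
        cand_phi = phi(cand)
        d = cand_phi - cur_phi
        if d < 0 or math.exp(-d / T) > rng.random():   # Boltzmann acceptance
            cur, cur_phi = cand, cand_phi
        if cur_phi < best_phi:
            best, best_phi = list(cur), cur_phi
        T *= cooling                                    # assumed geometric schedule
    return best
```

Tracking the best-ever solution guarantees the returned weights are never worse than the initial guess, even though individual uphill moves are accepted along the way.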

2.3 Simulation Examples

In this section, using the proposed RSAOA, two examples are presented to show the training of the fuzzy-neural network for function approximation. Each input of the fuzzy-neural network has seven membership functions in both examples.

Example 2-1: Here we describe a process of offline learning [32]. Two input variables and one output variable are used to approximate a desired surface

shown in Fig. 2-2. Forty-nine training data pairs are given. The adjustable parameters are in the interval D = [w_min, w_max] = [−10, 10], with α = 0.1, β = 2, σ = 10, and T_0 = 1000. In the fuzzy-neural network, the number of weightings is 49. Fig. 2-3 shows the simulation results of the RSAOA after 100 iterations of learning. The results confirm the effectiveness of the RSAOA: the fuzzy-neural network approximates the desired surface within a few iterations.

The error curve of the RSAOA for 100 iterations of learning is shown in Fig. 2-4. The error with respect to iterations is given in Table 2-1.

Fig. 2-2 Desired approximating surface.

Fig. 2-3 Output of the fuzzy-neural network trained by the proposed RSAOA after 100 iterations.

Fig. 2-4 Error curve of the fuzzy-neural network trained by the RSAOA with respect to iterations.

Table 2-1 Error with respect to iterations.

Example 2-2: We perform another procedure which is offline learning and online test in this example [32]. For a nonlinear system, the unknown nonlinear item is approximated by the fuzzy-neural network via RSAOA. First, some of training data from the unknown function are collected for an offline initial learning process of the fuzzy-neural network. After offline learning, the trained fuzzy-neural network replaces the unknown nonlinear function for online test.

Iterations   Error
1            0.27105
2            0.087261
3            0.045344
4            0.030357
5            0.019737
10           0.0075411
15           0.0032227
20           0.0016179
30           0.00038908
40           0.00027872
50           0.00023119
100          0.00021729

Consider a nonlinear system in [2] as

y(k+1) = 0.3y(k) + 0.6y(k−1) + g[u(k)]  (2-10)

We assume that the unknown nonlinear function has the form:

g(u) = sin(2πu) + 0.6 sin(4πu) + 0.2 sin(6πu)  (2-11)
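The identification setup of this example, the unknown nonlinearity (2-11), the plant recursion (2-10), and the collection of training pairs at regular points of u, can be sketched as follows; the helper names are illustrative.

```python
import math

def g(u):
    """Unknown nonlinear function (2-11)."""
    return (math.sin(2 * math.pi * u)
            + 0.6 * math.sin(4 * math.pi * u)
            + 0.2 * math.sin(6 * math.pi * u))

def plant_step(y_k, y_km1, u_k):
    """One step of the plant (2-10): y(k+1) = 0.3 y(k) + 0.6 y(k-1) + g[u(k)]."""
    return 0.3 * y_k + 0.6 * y_km1 + g(u_k)

def collect_data(n=21):
    """n training pairs (u, g(u)) regularly spaced on [-1, 1]."""
    us = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    return [(u, g(u)) for u in us]
```

For instance, at u = 0.25 the three sine terms evaluate to 1, 0, and −0.2, so g(0.25) = 0.8, a handy spot check for an implementation.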

For offline learning, 21 training data points collected at regular intervals of u from −1 to 1 are provided. The offline learning configuration of the 21 training data points is shown in Fig. 2-5(a). In order to show the approximation quality for the unknown nonlinear function g(u), a series-parallel model, shown in Fig. 2-5(b), is defined as

ŷ(k+1) = 0.3y(k) + 0.6y(k−1) + f̂[u(k)]  (2-12)

where f̂[u(k)] is the approximation of g[u(k)] obtained by the fuzzy-neural network. The adjustable parameters are in the interval D = [w_min, w_max] = [−10, 10], with α = 0.1, β = 2, σ = 10, and T_0 = 1000. In the fuzzy-neural network, the number of weightings is 7. Fig. 2-6 shows the exact curve of g(u) driven by the 21 training data in Table 2-2.

(a) Offline learning by 21 training data


Fig. 2-5 The series-parallel identification model


Fig. 2-6 Training curve of the nonlinear function g(u) using training data from Table 2-2

Table 2-2 Training data and approximated data obtained by the RSAOA for 100 iterations of learning.

Besides, the back-propagation gradient descent method is also implemented for learning the weightings of the fuzzy-neural network [3]. The initial values of the weightings are randomly generated from the interval −10 to 10, the same as for the RSAOA method, and the learning rate of the gradient descent method is 0.005.

After 100 iterations of learning, as demonstrated in Fig. 2-7, the approximated function f̂(u) learned by the RSAOA method is much closer to the exact function g(u) than that learned by the gradient descent method. The error curves with respect to iterations are shown in Fig. 2-8. It is obvious from Fig. 2-8 that the RSAOA method converges faster than the gradient descent method. Note that all of the error curves shown in Fig. 2-8 are calculated by (2-6), and the error with respect to iterations is shown in Table 2-3.
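For reference, the gradient-descent baseline amounts to the following update. Because the network output is linear in the weights, y = w^T ξ, the gradient of the squared error is simply 2(y − y*)ξ; the learning rate 0.005 matches the text, while the function name and the assumption that ξ is precomputed are illustrative.

```python
import numpy as np

def gd_step(w, xi, y_star, lr=0.005):
    """One gradient-descent update on the squared error (w^T xi - y*)**2."""
    y = np.dot(w, xi)
    return w - lr * 2.0 * (y - y_star) * xi
```

Since each step shrinks the output error by a fixed factor 1 − 2·lr·ξ^T ξ, convergence on a single pattern is geometric but slow for a small learning rate, which is consistent with the slower convergence observed for this method in Fig. 2-8.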

Fig. 2-7 The two methods' approximations of the nonlinear function g(u) for 100 iterations of learning

Fig. 2-8 Error curves of the approximated f̂[u(k)] using the two methods for 100 iterations of learning

Table 2-3 Error with respect to iterations.

Fig. 2-7, Fig. 2-8, and Table 2-3 show that the performance of the gradient descent method is poor after 100 iterations. Fig. 2-9 shows the performance of the gradient descent method for 300 iterations, and the corresponding error curve is shown in Fig. 2-10.

Fig. 2-9 The gradient descent identification method for 300 iterations of learning

Fig. 2-10 Error curve of the approximated f̂[u(k)] using the gradient descent method for 300 iterations of learning

For the online test, we assume that the series-parallel model shown in Fig. 2-5(b) is driven by u(k) = sin(2πk/250), with α = 1, β = 1, σ = 10, and T_0 = 1000. The response of the nonlinear system, in which g[u(k)] is approximated by the fuzzy-neural network trained offline for 10 iterations by the RSAOA, is shown in Fig. 2-11. The error curve is shown in Fig. 2-12. As demonstrated in Fig. 2-11 and Fig. 2-12, the fuzzy-neural network using the proposed RSAOA successfully approximates the unknown nonlinear function g[u(k)].

Fig. 2-11. Output of the nonlinear system (dotted line) and the identification model (solid line) using the proposed RSAOA

Fig. 2-12. Identification error of the approximated model of Fig. 2-11

2.4 Conclusions

This chapter proposed an RSA algorithm that can be successfully applied to the fuzzy-neural network to search for the optimal parameters, despite the vast number of adjustable parameters in fuzzy-neural networks. Two examples verify that the RSA has outstanding efficacy in learning and approximation.

Chapter 3

Indirect RSA On-Line Tuning of Fuzzy-Neural Networks for Uncertain Nonlinear Systems

In this chapter, an RSA indirect adaptive fuzzy-neural controller (RIAFC) for uncertain nonlinear systems is proposed using a reduced simulated annealing (RSA) algorithm. The weighting factors of the adaptive fuzzy-neural controller are tuned on-line via the RSA approach. For the purpose of tuning these parameters on-line and evaluating the stability of the closed-loop system, a cost function is included in the RSA approach. In addition, in order to guarantee that the system states are confined to the safe region, a supervisory controller is incorporated into the RIAFC. To illustrate the feasibility and applicability of the proposed method, two examples of nonlinear systems controlled by the RIAFC are demonstrated.

3.1 Problem Formulation

In this section, we describe the control problem for a class of nonlinear systems, and then design the controller.

3.1.1 The Design of Certainty Equivalent Controller

Here we consider an nth-order nonlinear system of the form

x^(n) = f(x) + g(x)u,  y = x_1  (3-1)

where u ∈ ℜ is the system input, y is the system output, and x = [x_1, x_2, ..., x_n]^T is the state vector of the system, which is assumed to be available for measurement. We assume that f and g are uncertain continuous functions and that g is, without loss of generality, a strictly positive function. In [49], these systems are in normal form and have relative degree n. Our objective is to design an indirect adaptive fuzzy-neural controller so that the system output y follows a given bounded reference signal y_m. First, let e = y_m − y, δ = [e, ė, ..., e^(n−1)]^T, and choose k = [k_n, ..., k_1]^T such that all roots of the polynomial h(s) = s^n + k_1 s^(n−1) + ... + k_n are in the open left half-plane. If the functions f and g are known, then the optimal control law is

u* = (1/g(x))[−f(x) + y_m^(n) + k^T δ]  (3-2)

However, since f and g are uncertain, the optimal control law (3-2) cannot be obtained. To solve this problem, we use fuzzy-neural systems as approximators to approximate the uncertain continuous functions f and g.

First, we replace f and g in (3-2) by the fuzzy-neural networks f̂(x|w_f) and ĝ(x|w_g), respectively. Based on the certainty equivalent controller [64], the resulting control law is

u_c = (1/ĝ(x|w_g))[−f̂(x|w_f) + y_m^(n) + k^T δ]  (3-4)

Substituting (3-4) into (3-1) and after some manipulations, we obtain the error equation in terms of δ, where Λ_c is the companion matrix of the error dynamics, with |sI − Λ_c| = s^n + k_1 s^(n−1) + ... + k_n. Since this polynomial is Hurwitz, there exists a unique positive definite symmetric n×n matrix P which satisfies the Lyapunov equation

Λ_c^T P + P Λ_c = −Q  (3-8)

for a given positive definite matrix Q. Let V_δ = (1/2)δ^T P δ. We require that V̇_δ ≤ 0 whenever V_δ is greater than a large constant V̄ > 0. Next, we assume the following.

Assumption 3.1: We can determine functions f^U(x), g^U(x), and g^L(x) such that |f(x)| ≤ f^U(x) and 0 < g^L(x) ≤ g(x) ≤ g^U(x).

Based on Assumption 3.1, equation (3-9) can be modified accordingly. Therefore, according to (3-10), we define a cost function for the RSA as

Φ = −δ^T P b_c f̂(x|w_f) − δ^T P b_c ĝ(x|w_g)u_c  (3-11)

In order to tune the weightings on-line, the solution with the smallest cost function is taken as the optimal solution. In this way, a better "crystal" can be obtained according to (3-11).
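Numerically, the cost (3-11) is a dot product reused for both terms. The sketch below assumes b_c = [0, ..., 0, 1]^T, the input vector consistent with a companion-form Λ_c; that assumption, and the function names, are illustrative rather than thesis definitions.

```python
import numpy as np

def cost(delta, P, f_hat, g_hat, u_c):
    """Phi = -delta^T P b_c * f_hat - delta^T P b_c * g_hat * u_c  (3-11)."""
    n = len(delta)
    b_c = np.zeros(n)
    b_c[-1] = 1.0                         # assumed companion-form input vector
    s = float(delta @ P @ b_c)            # common factor delta^T P b_c
    return -s * f_hat - s * g_hat * u_c
```

With the P matrix from Example 3-2 and δ = [1, 0]^T, the common factor δ^T P b_c is just the (1, 2) entry of P, which makes hand-checking a candidate solution's cost straightforward.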

Substituting the total control law (3-12) into (3-1), the error equation becomes (3-14). Based on Assumption 3.1 and (3-14), we choose the supervisory control u_s as given in (3-15). Equation (3-16) ensures the bounded stability of the RIAFC for the nonlinear system in (3-1).

Remark 3.1: The concept of supervisory control is added to our design mainly because the system states may enter the unsafe region if the RSA operations cannot simultaneously generate the appropriate weightings.

To safely control the uncertain nonlinear systems, the supervisory controller must be turned on when the system states go into the unsafe region.
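The switching behavior described in Remark 3.1 reduces to a simple guard: the supervisory term contributes only when the Lyapunov measure exceeds its safety bound. This is a hypothetical sketch of that logic; the names are illustrative.

```python
def total_control(u_c, u_s, V_delta, V_bar):
    """Apply the supervisory term u_s only when the state leaves the safe region."""
    return u_c + (u_s if V_delta > V_bar else 0.0)
```

Inside the safe region (V_delta ≤ V_bar) the controller output is the certainty-equivalent term alone, which is why the supervisory term does not interfere with tracking in normal operation.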

3.2 Description of the Reduced Simulated Annealing Algorithm for On-line Controllers

For the purpose of speeding up the simulated annealing operation, the reduced simulated annealing algorithm has two simplified parts: (1) one perturbation at each temperature, and (2) a compact perturbation mechanism and a special cooling schedule applied to the solution according to the cost function. The details are discussed in the following.

First, the adjustable parameters of the p-th output of the fuzzy-neural networks are given as follows:

w_p = [w_p1 w_p2 ... w_ph]^T  (3-17)

where w_p denotes a set of weighting factors in the interval D_p = [−d_p, d_p], d_p > 0. Each element represents an adjustable parameter of the fuzzy-neural networks. Note that a solution is accepted according to the cost function; that is, the optimum solution is the one with the best cost-function value.

Next, to instantaneously evaluate the stability of the closed-loop system, define the p-th cost function as

Φ_p = −δ^T P b_c f̂_p(x|w_f) − δ^T P b_c ĝ_p(x|w_g)u_c  (3-18)

where f̂_p(x|w_f) is the estimate of the unknown dynamics f(x) and ĝ_p(x|w_g) is the estimate of the unknown dynamics g(x). The state with the smallest cost function denotes the optimal solution. A detailed explanation of the cost function is given later.

Then, according to the cost function, the perturbation mechanism operators are performed. The operation procedure of the perturbation mechanism is as follows. The acceptance probability function for the new solution vector is defined as the Boltzmann probability of Section 2.2, where r denotes a random number in the interval [0, 1], K_b represents the Boltzmann constant, and T is the temperature.

Then, we design the cooling schedule as follows:

The block diagram of the RIAFC is given in Fig. 3-1.

Fig. 3-1 The block diagram of the RIAFC

The design algorithm of RIAFC is as follows:

Step 1: Construct fuzzy-neural networks for f̂(x|w_f) and ĝ(x|w_g), including fuzzy sets for x(t) and the weighting vectors w_f and w_g.

Step 2: Adjust the weighting vectors by using the RSA approach with the cost function (3-11).

Step 3: Compute f̂(x|w_f) and ĝ(x|w_g). Then, obtain the control law (3-4).
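Steps 1 to 3 above can be sketched as one control cycle. The helper names (rsa_tune, fnn_eval) are placeholders for the RSA tuning and network evaluation described in this chapter, not thesis-defined APIs, and the law computed is the certainty-equivalent control (3-4).

```python
import numpy as np

def riafc_step(x, delta, y_m_n, k, w_f, w_g, rsa_tune, fnn_eval):
    """One RIAFC cycle: tune the weights on-line, then apply (3-4)."""
    w_f, w_g = rsa_tune(w_f, w_g, x, delta)     # Step 2: RSA with cost (3-11)
    f_hat = fnn_eval(x, w_f)                    # Step 3: network estimates
    g_hat = fnn_eval(x, w_g)
    u_c = (-f_hat + y_m_n + k @ delta) / g_hat  # certainty-equivalent law (3-4)
    return u_c, w_f, w_g
```

In a simulation loop this function would be called once per sample, with δ rebuilt from the current tracking error and its derivatives at each step.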

3.3 Simulation Examples of the RIAFC

This section presents the simulation results of the proposed on-line RIAFC for a class of uncertain nonlinear systems, illustrating that the stability of the closed-loop system is guaranteed and that all signals involved are bounded.

Example 3-1: Consider the third-order nonlinear system described as follows.

Thus, the RIAFC is suitable for controlling the system. The adjustable parameters w_f of f̂(x_1, x_2, x_3) are in the interval D_1 = [−2, 2], with α = 0.01, β = 20, and σ = 10. The reference signal is given as y_d(t) = sin(t) in the following simulations. The initial states are set as x(0) = [0.3, 1, 0.5]. The membership functions for x_i, i = 1, 2, are given as follows.

To apply the RIAFC to the system, the bound f^U should be obtained.

The simulation results are shown in Figs. 3-2, 3-3, and 3-4; the RIAFC controls the uncertain nonlinear system to follow the desired trajectory very well. In Fig. 3-3, the tracking error reaches a bounded error (V_δ ≤ V̄ = 1). Therefore, the tracking performance is very good, as shown in Fig. 3-2, in which y_d is the reference trajectory and x_1 is the system output. As shown in Fig. 3-4, the chattering effect of the control input (u_c + u_s) almost disappears after 2 seconds. During those 2 seconds, the RSA searches the neighborhood for the optimal parameters of the RIAFC.

Fig. 3-2. The system output y(t) and the bounded reference y_d(t)

Fig. 3-3. The tracking error e

Fig. 3-4. The control input u(t)

Example 3-2: Consider the dynamic equations of the inverted pendulum system as in [18], where g = 9.8 m/s^2 is the acceleration due to gravity, m_c is the mass of the cart, l is the half-length of the pole, m is the mass of the pole, and u is the control input. In this example, we assume m_c = 1 kg, m = 0.1 kg, and l = 0.5 m.

Thus, the RIAFC is suitable for controlling the system. The adjustable parameters are in the interval D_2 = [1, 2]. The reference signal is given as y_m(t) = 0.1 sin(t) in the following simulations. The initial states are set as x(0) = [π/60, 0]. The membership functions are the same as in Example 3-1.

To apply the RIAFC to the system, the bounds f^U, g^U, and g^L should be obtained. The design parameters are set as k_1 = 1, k_2 = 2, Q = diag(10, 10), and V̄ = 0.01. Then, solving (3-8), we obtain

P = [15 5
      5 5]

As shown in Figs. 3-5, 3-6, and 3-7, the RIAFC controls the inverted pendulum to follow the desired trajectory very well. In Fig. 3-6, the position error reaches a bounded error (V_δ ≤ V̄ = 0.01). Therefore, the tracking performance is very good, as shown in Fig. 3-5, where y_m is the reference trajectory and x_1 is the system output.

Fig. 3-5. The system output y(t) and the bounded reference y_m(t)

Fig. 3-6. The tracking error e

Fig. 3-7. The control input u(t)

3.4 Conclusions

In this chapter, an RSA indirect adaptive fuzzy-neural controller (RIAFC) has been proposed. The free parameters of the adaptive fuzzy-neural controller can be successfully tuned on-line via the RSA approach with a special evaluation mechanism, instead of solving complicated mathematical equations.

The RIAFC with the supervisory controller guarantees the bounded stability of the closed-loop system. The simulation results show that the RSA-based adaptive fuzzy-neural controller performs on-line tracking successfully.

Chapter 4

Backstepping Adaptive Control of Uncertain Nonlinear Systems Using RSA On-Line Tuning of

Fuzzy-Neural Networks

In this chapter, an RSA backstepping adaptive fuzzy-neural controller (RBAFC) for uncertain nonlinear systems is proposed using a reduced simulated annealing (RSA) algorithm. The weighting factors of the adaptive fuzzy-neural controller are tuned on-line via the RSA approach. For the purpose of tuning these parameters on-line and evaluating the stability of the closed-loop system, a cost function is included in the RSA approach. In addition, in order to guarantee that the system states are confined to the safe region, a supervisory controller is incorporated into the RBAFC. To illustrate the feasibility and applicability of the proposed method, two examples of nonlinear systems controlled by the RBAFC are demonstrated.

4.1 Problem Formulation

In this section, we describe the control problem for a class of nonlinear systems, and then design the backstepping controller.

4.1.1 The Design of Backstepping Controller

Here we consider another nth-order nonlinear system of the form

where f and g are unknown smooth continuous functions, u ∈ ℜ is the system input, and x = [x_1, x_2, ..., x_n]^T ∈ ℜ^n is the state vector. The control
