
2. Robust Adaptive Self-structuring Fuzzy Control Design for Nonaffine Nonlinear Systems

2.4 Simulation Results

In this section, the simulations are performed using MATLAB under Windows XP. Four examples are presented. Approximations of unknown nonlinear functions are shown in Examples 2-1 and 2-2 to illustrate the growing and pruning capabilities of the proposed self-structuring algorithm, respectively. Examples 2-3 and 2-4 examine the applicability and effectiveness of the proposed RASFC system for nonaffine nonlinear control problems. Two cases are considered in each of Examples 2-3 and 2-4 for comparison purposes. Case 3a and Case 4a show the effectiveness of the SFS with both rule-growing and rule-pruning capabilities. In Case 3b, an adaptive FS with a fixed number of rules is adopted, and the parameters of the FS are also tuned by the adaptive laws (2-56)-(2-58). In Case 4b, only the growing of fuzzy rules by the SFS is considered. It can be easily shown that the following examples of nonaffine system control satisfy ∂f(x, u)/∂u > 0. It should be emphasized that the development of the RASFC does not require knowledge of the exact system dynamics of the controlled systems.

Example 2-1: Consider the following nonaffine nonlinear system [60]:

ẋ1 = x2
ẋ2 = x1^2 + 0.15u^3 + 0.1(1 + x2^2)u + sin(0.1u)    (2-79)

In tracking control, the SFS is used to approximate an unknown function

Δ(x, u) = x1^2 + 0.15u^3 + 0.1(1 + x2^2)u + sin(0.1u) − u_c. To illustrate the rule-growing capability of the self-structuring algorithm, the approximation is performed under three conditions, as shown in Table 2-1. Figures 2-5(a)-2-5(c) show the approximation results under Conditions 1a, 1b, and 1c, respectively; Fig. 2-5(d) shows the absolute value of the modeling error ũ; and Fig. 2-5(e) shows the number of fuzzy rules. The approximation performances under Conditions 1b and 1c are better than that under Condition 1a after t ≥ 5. In Fig. 2-5(b), the abrupt variations are marked by circles. These abrupt variations are caused by rule generation, so the approximation performance is degraded for a short period. In Fig. 2-5(c), this phenomenon is mitigated by using (2-27), as discussed in Remark 2-1. From Fig. 2-5(d), we can see that the approximation performance under Condition 1c is the best among the three conditions.
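To make the nonaffine condition concrete, the following short sketch (not part of the original MATLAB simulations) numerically checks that the plant (2-79) satisfies ∂f(x, u)/∂u > 0; the grid ranges are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def f(x1, x2, u):
    """Right-hand side of the second state equation in (2-79)."""
    return x1**2 + 0.15 * u**3 + 0.1 * (1.0 + x2**2) * u + np.sin(0.1 * u)

def df_du(x1, x2, u):
    """Analytic partial derivative of f with respect to u."""
    return 0.45 * u**2 + 0.1 * (1.0 + x2**2) + 0.1 * np.cos(0.1 * u)

# Illustrative grid; the thesis only states that the condition must hold.
x2_grid = np.linspace(-5.0, 5.0, 101)
u_grid = np.linspace(-10.0, 10.0, 201)
X2, U = np.meshgrid(x2_grid, u_grid)

eps = 1e-4
fd = (f(0.0, X2, U + eps) - f(0.0, X2, U - eps)) / (2.0 * eps)
print("min analytic df/du     :", df_du(0.0, X2, U).min())  # positive everywhere
print("min finite-diff df/du  :", fd.min())                  # agrees with the analytic value
```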


Fig. 2-5 Approximation results in Example 2-1

Table 2-2 Two conditions in Example 2-2
desired trajectory of tracking control: xc = 1.5 sin(t)
  Condition 2a: fixed rule number (40 rules)
  Condition 2b: rule pruning is operated for t ≥ 0

Example 2-2: A third-order Chua's chaotic circuit is a simple electronic system that consists of one linear resistor (Rc), two capacitors (C1, C2), one inductor (L), and one nonlinear resistor (η). It has been shown to possess very rich nonlinear dynamics such as chaos and bifurcations. The dynamic equations of Chua's circuit are written as [9-10]

v̇C1 = (1/C1)[(1/Rc)(vC2 − vC1) − η(vC1)]
v̇C2 = (1/C2)[(1/Rc)(vC1 − vC2) + iL]
i̇L = (1/L)(−vC2 − R0·iL)    (2-80)

where the voltages vC1, vC2 and the current iL are state variables, R0 is a constant, and η denotes the nonlinear resistor, which is a function of the voltage across the two terminals of C1. Here, η is defined as a cubic function:

η(vC1) = λ1·vC1^3 + λ2·vC1,  (λ1 < 0, λ2 > 0).    (2-81)

The state equations in (2-80) are not in the standard canonical form. Therefore, a linear transformation is needed to transform them into the form of (2-1). Then, the dynamic equations of the transformed Chua's circuit can be rewritten as

ẋ1 = x2
ẋ2 = x3
ẋ3 = F + u
y = x1    (2-82)

where x = [x1 x2 x3]^T is the state vector of the system, which is assumed to be available, and the system dynamic function is

F = (14/1805)x1 − (168/9025)x2 + (1/38)x3 − (2/45)[(28/361)x1 + (7/95)x2 + x3]^3    (2-83)

and u is the control input. The reference signal is yr(t) = 1.5 sin(t). In tracking control, the SFS is used to approximate an unknown function Δ(x, u) = F + u − u_c. To illustrate the rule-pruning capability of the self-structuring algorithm, the approximation is performed under two conditions, as shown in Table 2-2. Figures 2-6(a)-2-6(b) show the approximation results. Figure 2-6(c) shows the approximation error ũ. Figure 2-6(d) shows the number of fuzzy rules. Taking the last pruned rule as an example, we record the contribution and significance index of the rule pruned at t = 2.28 in Fig. 2-6(e). Figures 2-6(a)-2-6(c) show that the approximation performances under Conditions 2a and 2b are both quite good. However, the convergence of ũ under Condition 2b is faster than that under Condition 2a. This shows that training the parameters of a large number of fuzzy rules slows down the convergence of the approximation, and that the rules pruned under Condition 2b are redundant and contribute little to the approximation performance. In Fig. 2-6(e), we show the contribution and significance index of the rule pruned at t = 2.28. When the contribution calculated by (2-23) is smaller than a given constant β = 0.005, the significance index (2-24) decays with decay constant τ = 0.99. Once the significance index falls below the pruning threshold Θp = 0.005 at t = 2.28, this rule is considered insignificant thereafter and is thus pruned to ease the computational load.
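The decay-and-prune bookkeeping described above can be summarized by the following sketch. The exact contribution measure (2-23) and significance index (2-24) are not reproduced in this section, so the contribution of each rule is treated here as an externally supplied quantity; only the thresholds quoted in the text (β = 0.005, τ = 0.99, Θp = 0.005) are used. This is a minimal illustration, not the thesis implementation.

```python
from dataclasses import dataclass
from typing import List

BETA = 0.005      # contribution threshold below which a rule starts to decay
TAU = 0.99        # decay constant of the significance index
THETA_P = 0.005   # pruning threshold on the significance index

@dataclass
class FuzzyRule:
    significance: float = 1.0   # significance index, set when the rule is created
    pruned: bool = False

def update_rule_base(rules: List[FuzzyRule], contributions: List[float]) -> None:
    """One self-structuring step: decay insignificant rules and prune them.

    contributions[i] stands for the value of (2-23) for rule i at the current
    sample; how it is computed is left to the SFS itself.
    """
    for rule, c in zip(rules, contributions):
        if rule.pruned:
            continue
        if c < BETA:                      # rule contributes too little ...
            rule.significance *= TAU      # ... so its significance index decays
        if rule.significance < THETA_P:   # once insignificant, remove the rule
            rule.pruned = True

# A rule whose contribution stays below BETA is pruned after enough decay
# steps (0.99**k < 0.005 for k around 530).
rules = [FuzzyRule()]
for _ in range(600):
    update_rule_base(rules, contributions=[0.001])
print(rules[0].pruned)  # True
```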


Fig. 2-6 Approximation results in Example 2-2

Example 2-3: Consider the following nonaffine nonlinear system [61]

ẋ1 = x2
ẋ2 = 0.2(1 + x1·x2)[2 + sin(x2)](e^u + u − 1) + d    (2-84)

where d is a square wave with amplitude ±3.0 and period 5 seconds. The desired trajectory is xd(t) = sin(0.5t) + cos(t). The initial states are chosen as x(0) = [x1(0) x2(0)]^T = [0 0]^T. The learning rates are selected as ηα = 120 and ηc = ησ = 1. The thresholds for the growing and pruning criteria in Case 3a are selected as Θg = 0.1 and Θp = 0.01, respectively. These parameters are chosen through some trials to achieve favorable transient control performance.

For a choice of Q=2I, K =[2 1]T, and ρ2 =δ , we solve the Riccati-like equation shown in (2-62) and obtain the a positive definite symmetric matrix P:

⎥⎦

⎢ ⎤

=⎡

1.5 0.5

0.5

P 3.5 (2-85)
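As a quick numerical check of (2-85): if the choice ρ^2 = δ makes the quadratic term of the Riccati-like equation (2-62) vanish (an assumption about the form of (2-62), which is not reproduced in this section), the equation reduces to the Lyapunov equation Ac^T P + P Ac = −Q with Ac built from K = [2 1]^T. The sketch below solves that reduced equation and recovers the matrix in (2-85).

```python
import numpy as np
from scipy.linalg import solve_lyapunov  # solves A X + X A^T = Q

# Error dynamics e_ddot + k1*e_dot + k2*e = 0 with K = [k2 k1]^T = [2 1]^T.
k2, k1 = 2.0, 1.0
Ac = np.array([[0.0, 1.0],
               [-k2, -k1]])
Q = 2.0 * np.eye(2)

# Solve Ac^T P + P Ac = -Q (written in the form expected by solve_lyapunov).
P = solve_lyapunov(Ac.T, -Q)
print(P)   # approximately [[3.5, 0.5], [0.5, 1.5]], matching (2-85)
```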

The simulation results for Cases 3a and 3b are shown in Figs. 2-7 and 2-8, respectively. The tracking responses of state x1 are shown in Figs. 2-7(a) and 2-8(a), the tracking responses of state x2 are shown in Figs. 2-7(b) and 2-8(b), the associated control inputs are shown in Figs. 2-7(c) and 2-8(c), and the numbers of fuzzy rules at every iteration are shown in Figs. 2-7(d) and 2-8(d). From Figs. 2-7(a)-2-7(b) and Figs. 2-8(a)-2-8(b), we can see that the tracking performance in Case 3a is better than that in Case 3b under the external disturbance. In Fig. 2-7(d) the maximum number of rules is 7; in Fig. 2-8(d), the number of rules is fixed at 4. Table 2-3 shows the comparison between the two cases, where Na represents the accumulated sum of computed rules and te denotes the total execution time of the simulation. The proposed self-structuring algorithm relieves the heavy computational burden caused by 25,423 redundant rules (42.37% of the Na in Case 3b), and Case 3a executes about 1.4 times faster than Case 3b (12.88 s versus 18.14 s).

Table 2-3 Comparison between two cases in Example 2-3 (1.25 × 10^4 iterations)
                                              Case 3a    Case 3b
maximum number of rules at any time instant   7          4 (fixed)
accumulated sum of rule number, Na            34,577     60,000
total execution time, te (sec)                12.88      18.14


Fig. 2-7 Simulation results of Case 3a in Example 2-3


Fig. 2-8 Simulation results of Case 3b in Example 2-3

Table 2-4 Comparison between two cases in Example 2-4 (1.25 × 10^4 iterations)
                                               Case 4a    Case 4b
maximum number of rules at any time instant    7          28
accumulated sum of computed fuzzy rules, Na    39,973     227,650
total execution time, te (sec)                 12.72      64.89

Example 2-4: The Van der Pol oscillator is a classic model of a self-oscillatory system with a two-dimensional phase space [13-15]. The oscillator and its extensions have been implemented in various types of electrical circuits. The nonaffine second-order Van der Pol oscillator with nonlinear damping is described as [62]

ẋ1 = x2
ẋ2 = (1 − x1^2)x2 − x1 + x1^2·e^(−x2) + (e^u + u − 1)(1 + x2^2) + d    (2-86)

where d is a white noise with power 2 that occurs after t ≥ 15. The desired trajectory is xd(t) = sin(t) + cos(0.5t), and the initial state is x(0) = [x1(0) x2(0)]^T = [0.6 0.5]^T. All other parameter settings are chosen the same as those in Example 2-3. The simulation results for Cases 4a and 4b are shown in Figs. 2-9 and 2-10, respectively. The tracking responses of state x1 are shown in Figs. 2-9(a) and 2-10(a), the tracking responses of state x2 are shown in Figs.

2-9(b) and 2-10(b), the associated control inputs are shown in Figs. 2-9(c) and 2-10(c), and the numbers of fuzzy rules at every iteration are shown in Figs. 2-9(d) and 2-10(d). From the simulation results, we can see that the proposed RASFC scheme in Case 4a achieves the same favorable tracking performance as that in Case 4b even when an external disturbance suddenly occurs. In Fig. 2-9(d), rule growing plays the major role in the SFS within 0 ≤ t < 0.25, and thus the rule number increases from one to produce a suitable control effort to suppress the tracking error. For t > 0.25, as the tracking error is reduced, the pruning of unnecessary rules is activated in the SFS, and the number of rules decreases gradually. After a large external disturbance occurs at t ≥ 15, the rule number increases noticeably to eliminate the effect of the disturbance. When the tracking error is again suppressed to a small level, rule pruning is activated again. In Fig. 2-10(d), the number of rules increases rapidly from the beginning to the end of the control run. Throughout the control process, the maximum number of rules is 7 in Case 4a and 28 in Case 4b. Table 2-4 shows the comparison between the two cases. From Table 2-4, it is clear that the proposed self-structuring algorithm relieves the heavy computational burden caused by 187,677 redundant rules (82.44% of the Na in Case 4b), and Case 4a executes over five times faster than Case 4b. The relief of computational load due to redundant rules becomes increasingly significant as the control period lengthens.


Fig. 2-9 Simulation results of Case 4a in Example 2-4


Fig. 2-10 Simulation results of Case 4b in Example 2-4

It is worth noting that in Examples 2-3 and 2-4, the tracking control is started with only one fuzzy rule, and thereafter a compact rule base is constructed automatically without human knowledge. In addition, the same parameter settings, including the designed constants, learning rates, growing and pruning thresholds, and the positive definite symmetric matrix P, are adopted in these two examples. These parameter settings were chosen for Example 2-3 to achieve favorable transient tracking performance, and they may not be equally suitable for Example 2-4. Nevertheless, satisfactory tracking performance is still achieved in both examples.

Chapter 3

Direct Adaptive Control Design Using Hopfield-Based Dynamic Neural Network for Affine Nonlinear Systems

A dynamic neural network (DNN) is a collection of dynamic neurons that are fully interconnected, with each neuron's output fed back through the network. In contrast, in a static neural network (SNN), the output is calculated directly from the input through feedforward interconnections.

DNNs have been shown to be more suitable for representing dynamic systems. In this chapter, we aim at solving the control problem of SISO affine nonlinear systems. A direct adaptive control scheme using a Hopfield-based DNN is developed to achieve this goal. Meanwhile, the structuring problem of NNs is solved by the proposed parsimonious structure of the Hopfield-based DNN; that is, only a single Hopfield neuron is needed to control any affine nonlinear system.

3.1 Hopfield-Based Dynamic Neural Network

3.1.1 Description of the DNN Model

DNNs are made of recurrent, interconnected dynamic neurons, which distinguishes them from feedforward neural networks, where the output of one neuron is connected only to neurons in the next layer. Consider a DNN described by a nonlinear differential equation of the following form [47]:

χ̇ = Aχ + BWσ(V1χ) + BΨφ(V2χ)γ(u)    (3-1)

where χ = [χ1 χ2 ⋯ χn]^T ∈ R^n is the state vector, u = [u1 u2 ⋯ um]^T ∈ R^m is the input vector, σ: R^r → R^k, A ∈ R^(n×n) is a Hurwitz matrix, B = diag{b1, b2, ⋯, bn} ∈ R^(n×n), W ∈ R^(n×k), V1 ∈ R^(r×n), Ψ ∈ R^(n×l), V2 ∈ R^(s×n), φ: R^s → R^(l×n), and γ: R^m → R^n.

Fig. 3-1 The structure of the dynamic neural network

Here, χ is the state of the DNN, W and Ψ are the weight matrices describing the output-layer connections, V1 and V2 are the weight matrices describing the hidden-layer connections, σ(·) is a sigmoid vector function responsible for the nonlinear state feedback, and γ(·) is a differentiable input function. A DNN in (3-1) satisfying

r = s = n,  V1 = V2 = I(n×n),  φ(·) = I(n×n)    (3-2)

is the simplest DNN, without any hidden layers. It can be expressed as

χ̇ = Aχ + BWσ(χ) + BΨγ(u)    (3-3)

Then, the expression in (3-3) can be modified as

χ̇ = Aχ + BWσ(χ) + BΘu    (3-4)

where the input weight matrix Θ ∈ R^(n×m) absorbs Ψ and the linear input function γ. The output of every neuron in Fig. 3-1 can then be expressed as

χ̇i = −ai·χi + bi[Wi^T·σ(χ) + Θi^T·u]    (3-5)

where Wi^T = [wi1 wi2 ⋯ win] and Θi^T = [θi1 θi2 ⋯ θim] are the ith rows of W and Θ, respectively, and A = diag{−a1, −a2, ⋯, −an}. Solving the differential equation (3-5), we obtain

χi(t) = e^(−ai(t − t0))·χi(t0) + bi·∫[t0, t] e^(−ai(t − τ))·[Wi^T·σ(χ(τ)) + Θi^T·u(τ)] dτ

3.1.2 Hopfield-Based DNN Approximator

A DNN approximator for continuous functions can be defined in the same form as the solution of (3-5), with ideal constant weights Wi* and Θi* whose values are difficult to determine and might not be unique. The modeling error χ̃i is defined as the difference between the function to be approximated and the output of the ith neuron of the approximator.

In this chapter, a Hopfield-based dynamic neural network is adopted as the approximator. It is a special case of the DNN with ai = 1/(Ri·Ci) and bi = 1/Ci, where Ri > 0 and Ci > 0 represent the resistance and capacitance at the ith neuron, respectively [25],[29]. The sigmoid function σ(χ) = [σ(χ1) σ(χ2) ⋯ σ(χn)]^T is defined by the hyperbolic tangent, σ(χi) = tanh(κ·χi), where κ is the slope of tanh(·) at the origin.
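To make the neuron model concrete, the following sketch integrates a single Hopfield neuron of the form (3-5) with a = 1/(RC), b = 1/C, and the tanh activation, using the circuit values quoted later in Example 3-1 (R = 5 Ω, C = 0.005 F). The input signal and the weights are arbitrary illustrative values, not the adaptive quantities of the DACHDNN.

```python
import numpy as np

R, C = 5.0, 0.005          # resistance (ohm) and capacitance (F) of the neuron
a, b = 1.0 / (R * C), 1.0 / C
kappa = 1.0                # slope of tanh(.) at the origin

def neuron_rhs(chi, w, theta, u):
    """chi_dot = -a*chi + b*(w*tanh(kappa*chi) + theta^T u), cf. (3-5)."""
    return -a * chi + b * (w * np.tanh(kappa * chi) + float(theta @ u))

# Euler integration with an illustrative constant input and fixed weights.
dt, T = 1e-4, 0.05
w = 0.2                              # scalar self-feedback weight
theta = np.array([0.5, -0.3])        # input weight vector (illustrative)
u = np.array([1.0, 0.5])             # constant two-dimensional input
chi = 0.0
for _ in range(int(T / dt)):
    chi += dt * neuron_rhs(chi, w, theta, u)
print("neuron state after %.2f s: %.4f" % (T, chi))
```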

3.2 Problem Formulation

Let SRn be an open set, DS ⊂ be and compact set. Consider the nth-order S nonlinear dynamic system of the form

d is the output of the system, and dR is a bounded external disturbance. We consider only the nonlinear systems which can be represented in (3-14). In order for (3-14) to be controllable, it is required that g ≠0. Without losing generality, we assume that 0< g <∞. The control objective is to force the system output y to follow a given bounded reference signal yrCh, hn. The reference signal vector yr and the error vector e are defined as

with e= yrx= yry.

If the functions f(x) and g are known and the system is free of external disturbance, the ideal controller can be designed as

u_id = g^(−1)[−f(x) + yr^(n) + kc^T·e]    (3-16)

where kc = [kn kn−1 ⋯ k1]^T. Applying (3-16) to (3-14), we have the following error dynamics:

e^(n) + k1·e^(n−1) + ⋯ + kn·e = 0.    (3-17)

If the ki, i = 1, 2, …, n, are chosen so that all roots of the polynomial H(s) ≜ s^n + k1·s^(n−1) + ⋯ + kn lie strictly in the open left half of the complex plane, then lim(t→∞) e(t) = 0 for any initial conditions. However, since the system dynamics may be unknown or perturbed, the ideal feedback controller u_id in (3-16) cannot be implemented.
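The choice of kc can be checked directly: the sketch below forms H(s) = s^n + k1·s^(n−1) + ⋯ + kn for the gain vector kc = [2 1]^T used in the later examples and verifies that all roots lie in the open left half-plane. The numerical check is added here only for illustration; the thesis simply states the requirement.

```python
import numpy as np

# k_c = [k_n ... k_1]^T = [2 1]^T  =>  H(s) = s^2 + k1*s + k2 = s^2 + s + 2
k_c = np.array([2.0, 1.0])
k1, k2 = k_c[1], k_c[0]
H = np.array([1.0, k1, k2])          # polynomial coefficients of H(s)

roots = np.roots(H)
print("roots of H(s):", roots)                       # -0.5 +/- 1.3229j
print("Hurwitz (all real parts < 0):", bool(np.all(roots.real < 0)))
```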

3.3 Design of DACHDNN

To solve this problem, a new direct adaptive control scheme using Hopfield neural networks for SISO nonlinear systems is proposed. In the DACHDNN, a Hopfield-based DNN is used to estimate the ideal controller uid . The direct adaptive Hopfield-based DNN controller takes the following form

s HDNN

d u u

u = + (3-18) where uHDNN is the Hopfield-based DNN controller used to approximate the ideal controller u in (3-16); id u is the compensation controller employed to compensate the effects of s external disturbance and the approximation error introduced by the Hopfield-based DNN approximation (described later). The overall DACHDNN is shown in Fig. 3-2, wherein the adaptive laws are described later. Substituting (3-18) into (3-14) and using (3-16) yield

ė = Ac·e + Bc·g(u_id − u_HDNN − u_s) − Bc·d
  = Ac·e + Bc·g(ũ − u_s) − Bc·d    (3-19)

where ũ = u_id − u_HDNN.


Fig. 3-2 The block diagram of the DACHDNN

Note that the ideal controller u_id is a scalar, and thus the Hopfield-based DNN used to approximate u_id contains only a single neuron. The output of such a Hopfield-based DNN is denoted u_HDNN and is given by (3-20). The electric circuit of this single-neuron Hopfield-based DNN is shown in Fig. 3-3.

Fig. 3-3 The electric circuit of the Hopfield-based DNN containing only a single neuron

Substituting (3-20) into (3-19) yields (3-21), where ∆ is the approximation error. In order to derive one of the main theorems in this chapter, the following assumption and lemma are required.

Assumption: The external disturbance d, the gain g, and the ideal weights W and Θ are bounded (3-22). The adaptive laws are designed as (3-23) and (3-24), and the positive definite symmetric matrix P satisfies the Riccati-like equation (3-25). Following the preceding considerations, we have the following theorem.

Theorem 3-1: Suppose Assumption (3-22) holds. Consider the plant (3-14) with the control law (3-18), where the Hopfield-based DNN controller u_HDNN is given by (3-20) with the adaptive laws (3-23) and (3-24), and the compensation controller u_s is given by (3-28) in terms of Bc^T·P·e. Then the proposed DACHDNN scheme guarantees the following properties:

i) the tracking performance criterion in (3-29) is guaranteed;

ii) the tracking error e can be expressed in terms of the lumped uncertainty as in (3-30), where λmin(P) denotes the minimum eigenvalue of P.

Proof:

i) Define the Lyapunov function candidate as in (3-31). Differentiating (3-31) with respect to time and using (3-21) yields (3-32). Substituting (3-28) into (3-32) and applying the adaptive laws (3-23) and (3-24) gives (3-36). By using the Riccati-like equation (3-25), (3-36) can be rewritten as (3-38). Under the stated condition, the second line of (3-40) can be rewritten so that the inequality (3-43) is obtained. Integrating both sides of the inequality (3-43) yields (3-44). Substituting (3-31) into (3-44), we prove (3-29).

ii) From (3-44), and since ∫[0, t] e^T·Q·e dτ ≥ 0, we have

2V(t) ≤ 2V(0) + g^2·ρ^2·µ,   0 ≤ t < ∞.    (3-45)

From (3-31), it is obvious that e^T·P·e ≤ 2V(t). Because P is a positive definite symmetric matrix, we have

λmin(P)·‖e‖^2 = λmin(P)·(e^T·e) ≤ e^T·P·e.    (3-46)

Thus, from (3-45) and (3-46), we obtain

λmin(P)·‖e‖^2 ≤ e^T·P·e ≤ 2V(t) ≤ 2V(0) + g^2·ρ^2·µ.    (3-47)

Therefore, from (3-47), we can easily obtain (3-30), which explicitly describes the bound of the tracking error ‖e‖. If the initial value V(0) = 0, the tracking error ‖e‖ can be made arbitrarily small by choosing an adequate ρ. Equation (3-30) is crucial in showing that the proposed DACHDNN guarantees closed-loop stability rigorously in the Lyapunov sense under Assumption (3-22). Q.E.D.
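For completeness, the explicit bound that (3-30) refers to follows directly from (3-47) by dividing through by λmin(P) and taking the square root; the form below is a reconstruction from (3-47), since (3-30) itself is not reproduced in this excerpt:

```latex
\| \mathbf{e} \| \;\le\; \sqrt{\frac{2V(0) + g^{2}\rho^{2}\mu}{\lambda_{\min}(\mathbf{P})}},
\qquad 0 \le t < \infty .
```

With V(0) = 0 this reduces to ‖e‖ ≤ g·ρ·√(µ/λmin(P)), which is why a smaller ρ yields a smaller residual tracking error.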

Remark: Equation (3-30) shows the relations among ‖e‖, ρ, and λmin(P). For more insight into (3-30), we first choose ρ^2 = δ in (3-25) to simplify the analysis. From (3-25), we can then see that λmin(P) is determined by the choice of λmin(Q), in the sense that a larger λmin(Q) leads to a larger λmin(P), and vice versa. One can now easily observe from (3-30) that the norm of the tracking error can be attenuated to any desired small level by an appropriate choice of ρ and λmin(Q). However, this may lead to a large control signal, which is usually undesirable in practical systems.

3.4 Simulation Results

In this section, two examples are presented to illustrate the effectiveness of the proposed DACHDNN. It should be emphasized that the development of the DACHDNN does not require knowledge of the exact dynamics of the controlled system.

Example 3-1: Chaotic dynamic systems are known for their complex, unpredictable behavior and their extreme sensitivity to initial conditions as well as parameter variations. Consider a second-order chaotic dynamic system, the well-known Duffing's equation, which describes a special nonlinear circuit or a pendulum moving in a viscous medium under control [65]:

ẋ1 = x2
ẋ2 = −p·x2 − p1·x1 − p2·x1^3 + q·cos(w·t) + u
y = x1    (3-48)

Fig. 3-4 The phase plane of the uncontrolled chaotic system

where p, p1, p2, q, and w are real constants. Depending on the choices of these constants, the solutions of system (3-48) may display complex phenomena, including various periodic orbits and chaotic behaviors [66]. Fig. 3-4 shows the complex open-loop system behavior simulated with u = 0, p = 0.4, p1 = −1.1, p2 = 1.0, w = 1.8, q = 1.95, and [x1 x2]^T = [0 0]^T. Assume the system is free of external disturbance in this example. The reference signal is yr(t) = sin(0.5t) + cos(t). Some initial parameter settings of the DACHDNN are chosen as [x10 x20]^T = [0.5 0]^T, u_HDNN,0 = 0, ξ_W0 = 0, ξ_Θ0 = [0 0]^T, Ŵ0 = 0, and Θ̂0 = [1 1]^T. These initial settings are chosen through some trials to achieve favorable transient control performance. The learning rates of the weight adaptation are selected as βW = βΘ = 7.5; the slope of tanh(·) at the origin is selected as κ = 1; gL = 0.1 and δ = 0.5 are used for the compensation controller. The resistance and capacitance are chosen as R = 5 Ω and C = 0.005 F. Solving the Riccati-like equation (3-25) for a choice of Q = 10I and kc = [2 1]^T, we have

P = [ 15  5
       5  5 ]

The simulation results are shown in Fig. 3-5, where the tracking responses of states x1 and x2 are shown in Figs. 3-5(a) and 3-5(b), respectively, the associated control inputs are shown in Fig. 3-5(c), and the trained weightings are shown in Fig. 3-5(d). From the simulation results, we can see that the proposed DACHDNN achieves favorable tracking performance in the absence of external disturbance.
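The uncontrolled behavior in Fig. 3-4 can be reproduced with a few lines of code; the sketch below integrates (3-48) with u = 0 and the constants quoted above. The original figures were generated in MATLAB, so this Python version is only an illustrative equivalent.

```python
import numpy as np
from scipy.integrate import solve_ivp

p, p1, p2, q, w = 0.4, -1.1, 1.0, 1.95, 1.8

def duffing(t, x):
    """Uncontrolled Duffing system (3-48) with u = 0."""
    x1, x2 = x
    return [x2, -p * x2 - p1 * x1 - p2 * x1**3 + q * np.cos(w * t)]

sol = solve_ivp(duffing, (0.0, 100.0), [0.0, 0.0], max_step=0.01, dense_output=True)

# Phase-plane samples (x1 versus x2); plotting them reproduces Fig. 3-4.
x1, x2 = sol.y
print("x1 range: [%.2f, %.2f], x2 range: [%.2f, %.2f]"
      % (x1.min(), x1.max(), x2.min(), x2.max()))
```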


Fig. 3-5 Simulation results of Example 3-1

Example 3-2: Consider the third-order nonlinear dynamic system described in [58, 67]. The simulation results are shown in Fig. 3-6, where the tracking responses of states x1, x2, and x3 are shown in Figs. 3-6(a), 3-6(b), and 3-6(c), respectively, the associated control inputs are shown in Fig. 3-6(d), and the trained weightings are shown in Fig. 3-6(e). From Fig. 3-6(a), we can observe that the output of the system tracks the reference signal well throughout the whole control process, even with the external disturbance that occurs partway through the run (t ≥ 10). This demonstrates the strong disturbance-tolerance ability of the proposed system.


Fig. 3-6 Simulation results of Example 3-2

3.5 Performance Analysis of Hopfield-Based DNNs with and without the Self-Feedback Loop

The performance of Hopfield-based DNNs with and without the self-feedback loop is compared in this section. Hopfield networks are sometimes composed of neurons without self-feedback loops in some applications, such as pattern recognition [68]. This is done to minimize the number of potential stable states and thus increase the recognition rate [68]. However, is it true that a Hopfield-based DNN composed of neurons without self-feedback loops performs better in the control of SISO affine nonlinear systems? We try to answer this question through the following discussion and simulation results.

Because the proposed Hopfield-based DNN contains only a single neuron for SISO affine nonlinear systems, we can simply set W = 0 (and hence W* = Ŵ = W̃ = 0) when a neuron without a self-feedback loop is used. Thus, repeating the discussions of Sections 3.1 and 3.3 with W = 0, we have the following theorem:

Theorem 3-2: Suppose the required assumption holds. Consider the plant (3-14) with the

