
RBF Neural Network Adaptive Backstepping Controllers for MIMO Nonlinear Systems

National Taiwan Normal University
Department of Applied Electronics Technology
Master's Thesis

RBF Neural Network Adaptive Backstepping Controllers for MIMO Nonlinear Systems

Advisor: Dr. Wei-Yen Wang
Student: Ming-Feng Kuo

July 2009


RBF Neural Network Adaptive Backstepping Controllers for MIMO Nonlinear Systems

Student: Ming-Feng Kuo    Advisor: Wei-Yen Wang

Graduate Program, Department of Applied Electronics Technology, National Taiwan Normal University

ABSTRACT (In Chinese)

This thesis proposes a radial basis function (RBF) neural network adaptive controller for multiple-input multiple-output (MIMO) unknown nonlinear systems, in both affine and nonaffine form. In the controller for affine unknown nonlinear systems, an RBF neural network is used to approximate the unknown nonlinear functions, and adaptive laws adjust the network weights. Designing the controller by backstepping causes the basis functions to be differentiated repeatedly, a problem that often makes the resulting controller unusable for high-order systems; a first-order filter is therefore inserted at each step of the design to avoid it. The design for nonaffine unknown nonlinear systems is similar: an RBF neural network approximates the unknown nonlinear functions, adaptive laws adjust the network weights, and a first-order filter avoids the repeated-differentiation problem. Beforehand, however, the mean value theorem is used to separate the controller and the virtual controllers from the nonaffine functions, so that the designed controller can be applied. The Lyapunov theorem is used to prove that the controller stabilizes the system. Finally, simulation examples are provided; the plots show that the system output tracks the reference signal as closely as possible.

Keywords: radial basis function neural network, adaptive control, MIMO systems, backstepping control

RBF Neural Network Adaptive Backstepping Controllers for MIMO Nonlinear Systems

Student: Ming-Feng Kuo    Advisor: Dr. Wei-Yen Wang

Institute of Applied Electronics Technology, National Taiwan Normal University

ABSTRACT

This thesis proposes a radial basis function neural network adaptive backstepping controller (RBFNN_ABC) for multiple-input multiple-output (MIMO) affine and nonaffine nonlinear systems in block-triangular form. The control scheme incorporates the adaptive neural backstepping design technique with a first-order filter at each step of the backstepping design to avoid the higher-order derivative problem generated by the backstepping design. This problem may have an unpredictable and unfavorable influence on control performance because higher-order derivative term errors are introduced into the neural approximation model. Finally, simulation results demonstrate that the output tracking error between the plant output and the desired reference output can be made arbitrarily small.

Keywords: Radial basis function (RBF) neural networks (NNs), adaptive control, backstepping, MIMO nonlinear systems

ACKNOWLEDGEMENT

As my graduate studies draw to a close, I am deeply grateful to Professor Wei-Yen Wang and Professor 呂藝光 for their care and guidance throughout this period. Looking back on these two years: when I first joined the laboratory, everything about graduate school was unfamiliar, and my advisor continually helped me find my footing and always showed concern for my daily life. In guiding my research he gave me great freedom; I often hit bottlenecks and made mistakes, but he never tired of instructing and encouraging me. His modesty and gentle scholarly manner are a model for me to learn from. Under his guidance I not only grew in professional knowledge but also gained a more positive outlook on life. I also sincerely thank Professor 洪欽銘, director of the Institute of Applied Electronics, for his constant concern for my daily life and thesis progress and for his encouragement and support. I will always remember my teachers' kindness.

I sincerely thank my oral examination committee, President 李祖添, Professor 王文俊, Director 洪欽銘, Professor Wei-Yen Wang, and Professor 呂藝光, for taking time from their busy schedules to attend and for providing many valuable opinions, questions, and alternative ways of thinking that made this thesis more complete and correct.

I also thank my senior lab-mates 銘滄, 宜興, 人齊, 承曄, and 宏見 for their help in my research field and with this thesis; 士恆 and teaching assistants 琇文 and 琼姿 for their help with administrative matters; and my classmates 建豪, 建宏, 建佑, 俊堯, 正皓, and 伯楷 and junior lab-mates 皓勇, 嘉良, 皓程, 弨廣, 小建宏, and 暉翔, with whom I learned, grew, and shared good times over these two years.

Finally, I most want to thank my parents, Mr. 郭祥澤 and Ms. 蔡碧治, for raising me and for their support and encouragement over the years, and my girlfriend 芝妤 for her tolerance and companionship during this period. I share the joy of this thesis with my beloved family and with all the friends who care about me.

CONTENTS

ABSTRACT (In Chinese)
ABSTRACT (In English)
ACKNOWLEDGEMENT
CONTENTS
LIST OF FIGURES
Chapter 1 Introduction
Chapter 2 Radial Basis Function Neural Network
  2.1 Introduction of Artificial Neural Networks
  2.2 Learning rules of ANNs
    2.2.1 General weight learning rule
    2.2.2 Perceptron learning rule
    2.2.3 Widrow-Hoff learning rule
    2.2.4 Hebbian learning rule
  2.3 Radial Basis Function Neural Network
  2.4 Simulation Results
Chapter 3 RBFNN Adaptive Backstepping Controllers for MIMO Affine Nonlinear Systems
  3.1 Problem Formulation
  3.2 Design of RBFNN_ABC
  3.3 Simulation Results
Chapter 4 RBFNN Adaptive Backstepping Controllers for MIMO Nonaffine Nonlinear Systems
  4.1 Problem Formulation
  4.2 Design of RBFNN_ABC
  4.3 Simulation Results
Chapter 5 Conclusion
References
Autobiography

LIST OF FIGURES

Fig. 2-1 Diagram of M-P neuron
Fig. 2-2 Two categories of learning
Fig. 2-3 Diagram of general weight learning rule
Fig. 2-4 Schematic diagram of RBFN
Fig. 2-5 Schematic diagram of Gaussian function
Fig. 2-6 Model reference signal y and system output ŷ
Fig. 2-7 Error of system output ŷ and model reference signal y
Fig. 2-8 Model reference signal
Fig. 2-9 System output ŷ
Fig. 3-1 State $x_{1,1}$ and model reference signal $x_{d1}$
Fig. 3-2 State $x_{2,1}$ and model reference signal $x_{d2}$
Fig. 3-3 Control input $u_1$
Fig. 3-4 Control input $u_2$
Fig. 3-5 Error of system output $x_{1,1}$ and model reference signal $x_{d1}$
Fig. 3-6 Error of system output $x_{2,1}$ and model reference signal $x_{d2}$
Fig. 3-7 State $x_{1,1}$ and model reference signal $x_{d1}$
Fig. 3-8 State $x_{2,1}$ and model reference signal $x_{d2}$
Fig. 3-9 Control input $u_1$
Fig. 3-10 Control input $u_2$
Fig. 3-11 Error of system output $x_{1,1}$ and model reference signal $x_{d1}$
Fig. 3-12 Error of system output $x_{2,1}$ and model reference signal $x_{d2}$
Fig. 4-1 Diagram of mean value theorem
Fig. 4-2 State $x_{1,1}$ and model reference signal $x_{d1}$
Fig. 4-3 State $x_{2,1}$ and model reference signal $x_{d2}$
Fig. 4-4 Control input $u_1$
Fig. 4-5 Control input $u_2$
Fig. 4-6 Error of system output $x_{1,1}$ and model reference signal $x_{d1}$
Fig. 4-7 Error of system output $x_{2,1}$ and model reference signal $x_{d2}$
Fig. 4-8 State $x_{1,1}$ and model reference signal $x_{d1}$
Fig. 4-9 State $x_{2,1}$ and model reference signal $x_{d2}$
Fig. 4-10 Control input $u_1$
Fig. 4-11 Control input $u_2$
Fig. 4-12 Error of system output $x_{1,1}$ and model reference signal $x_{d1}$
Fig. 4-13 Error of system output $x_{2,1}$ and model reference signal $x_{d2}$

Chapter 1 Introduction

Adaptive control is a useful method for designing controllers for uncertain dynamic systems. The main idea of adaptive control is to use output feedback to model the unknown system [1]-[2]. Adaptive controllers are classified into two types: direct and indirect. In direct adaptive control, the parameters of the controller are directly adjusted to reduce the norm of the output error between the plant and the reference model. In indirect adaptive control, the parameters of the plant are estimated and the controller is chosen assuming that the estimated parameters represent the true values of the plant parameters [3]. Since the 1990s, backstepping has become one of the most popular design methods for a large class of nonlinear systems [4]-[11]. Compared to feedback linearization methods [12], the backstepping technique [13], [14] has the advantage of avoiding the cancellation of useful nonlinearities in the design process. Thus, in the past decade, the backstepping technique has been widely used for nonlinear control systems. Its main design procedure is as follows: an appropriate state and virtual control are selected for each smaller subsystem, the state equation is rewritten in terms of them, and Lyapunov functions are then chosen for these subsystems so that the true controller, integrating the individual controls of these subsystems, guarantees the stability of the overall system. Recently, owing to the development of intelligent control methods such as fuzzy logic control and neural network control, many intelligent backstepping methods [15]-[19] have been proposed to control nonlinear systems with unknown system dynamics by combining intelligent control methods with an adaptive backstepping design. Neural networks have been widely used to model nonlinearities. A neural network is a universal approximator which, with increased size and complexity, can approximate any nonlinear function with arbitrary precision.
Based on these capabilities, neural networks have also been widely adopted for nonlinear system identification and control [20]-[29]. A backstepping-based neural network controller for unknown nonlinear systems was first presented in [18]. In [8], an adaptive backstepping controller

using a radial basis function neural network (RBFNN) was proposed. Adaptive neural control of uncertain MIMO nonlinear systems was proposed in [30]. In [31], an RBF neural network adaptive backstepping control system (RBFNN_ABC) was proposed to control a nonlinear system. However, the RBF adaptive backstepping control method results in a controller containing higher-order derivative terms as the order $n$ of the system increases. The higher-order derivative terms introduced into the approximation model may have an unpredictable and unfavorable influence on control performance. To solve the problems mentioned above, this thesis proposes a filtered RBFNN adaptive backstepping control scheme. It consists of the backstepping design, which achieves the desired control behavior; the RBF neural network, which is utilized to estimate the unknown system dynamics; the adaptive control scheme, which is utilized to adjust the controller parameters; and a first-order filter at each step of the backstepping design, which is chosen to avoid producing higher-order derivative terms. This thesis is structured as follows. In Chapter 2, we introduce neural networks, the general weight learning rule, the perceptron learning rule, the Widrow-Hoff learning rule, the Hebbian learning rule, and the radial basis function (RBF) neural network. In Chapter 3, a radial basis function neural network adaptive backstepping controller with a first-order filter for multiple-input multiple-output (MIMO) affine nonlinear systems in block-triangular form is proposed. In Chapter 4, a similar controller for nonaffine nonlinear systems is proposed. Finally, we draw some conclusions in Chapter 5.

Chapter 2 Radial Basis Function Neural Network

2.1 Introduction of Artificial Neural Networks

Artificial neural networks (ANNs) are systems deliberately constructed to make use of some of the organizational principles of the human brain, and they represent a promising new generation of information processing systems. ANNs are composed of a number of mutually interconnected processing units called neurons. Each neuron's output is connected, through weights, to other neurons or to itself; both delayed and lag-free connections are allowed. Hence, the structure that organizes these neurons and the connection geometry among them must be specified for an ANN. ANNs perform well in fields such as function approximation, classification, optimization, pattern matching, vector quantization, and data clustering. In 1943, McCulloch and Pitts [32] proposed a simple mathematical model of the biological neuron, usually called the M-P neuron, shown in Figure 2-1. In this model, the $j$th processing element computes a weighted sum of its inputs and outputs $y_j = 1$ or $0$ according to whether this weighted sum is above or below a threshold $\theta_j$:

$$\mathrm{net}_j = \sum_{i=1}^{m} w_{ji} x_i + \theta_j \tag{2-1}$$

$$y_j = \phi(\mathrm{net}_j) \tag{2-2}$$

where the weight $w_{ji}$ is the strength of the synapse connecting source neuron $i$ to destination neuron $j$, and the activation function $\phi$ is the unit step function

$$\phi(v) = \begin{cases} 1 & \text{if } v \ge 0 \\ 0 & \text{if } v < 0 \end{cases} \tag{2-3}$$
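As an illustration, the M-P neuron of (2-1)-(2-3) can be sketched in a few lines of Python; the weights and threshold below are arbitrary example values, not taken from the thesis:

```python
# Minimal M-P neuron: weighted sum of inputs plus threshold term,
# passed through the unit step activation (equations (2-1)-(2-3)).

def mp_neuron(weights, inputs, theta):
    net = sum(w * x for w, x in zip(weights, inputs)) + theta  # (2-1)
    return 1 if net >= 0 else 0                                # (2-2), (2-3)

# Example: a 2-input neuron realizing logical AND with w = [1, 1], theta = -1.5.
print(mp_neuron([1, 1], [1, 1], -1.5))  # 1
print(mp_neuron([1, 1], [1, 0], -1.5))  # 0
```

With a suitable choice of weights and threshold, a single M-P neuron can realize any linearly separable Boolean function, as in the AND example above.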

Fig. 2-1 Diagram of M-P neuron

2.2 Learning rules of ANNs

Broadly speaking, there are two kinds of learning rules in ANNs: supervised learning and unsupervised learning. In supervised learning, a sequence of examples $(x_1, d_1), (x_2, d_2), \ldots, (x_n, d_n)$ of inputs and desired outputs is given. The desired output plays the role of a teacher, pressing the network to modify its weight values so that the output moves closer to the desired output, as shown in Figure 2-2(a). In unsupervised learning, no teaching input is involved; the system is provided with an input $x$ and allowed to self-organize its weights to generate internal prototypes of the input vectors, as shown in Figure 2-2(b).

Fig. 2-2 Two categories of learning: (a) supervised learning, (b) unsupervised learning

2.2.1 General weight learning rule

A general form of the weight learning rule [33] states that the increment of the weight vector $w_j$ is produced by the learning signal $r$ and the input $x$:

$$\Delta w_j(t) = \eta\, r\, x(t) \tag{2-4}$$

where $\eta$ is the learning constant and the learning signal $r$ is a function of $w_j$, $x$, and $d_j$:

$$r = r(w_j, x, d_j) \tag{2-5}$$

The weight vector at learning time step $t+1$ is

$$w_j(t+1) = w_j(t) + \eta\, r\big(w_j(t), x(t), d_j(t)\big)\, x(t) \tag{2-6}$$

The structure of the general weight learning rule is shown in Fig. 2-3.
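A minimal sketch of the general update (2-6), instantiated with the perceptron learning signal $r = d_j - y_j$ introduced in the next subsection; the training data, learning rate, and epoch count below are arbitrary example values:

```python
# General weight learning rule (2-6): w(t+1) = w(t) + eta * r(w, x, d) * x,
# instantiated with the perceptron learning signal r = d - sgn(w^T x).

def sign(v):
    return 1 if v >= 0 else -1

def train_perceptron(samples, eta=0.5, epochs=10):
    w = [0.0, 0.0, 0.0]  # two input weights plus a bias weight
    for _ in range(epochs):
        for x, d in samples:
            x = list(x) + [1.0]                                # append bias input
            y = sign(sum(wi * xi for wi, xi in zip(w, x)))
            r = d - y                                          # learning signal
            w = [wi + eta * r * xi for wi, xi in zip(w, x)]    # update (2-6)
    return w

# Linearly separable data: d = +1 iff x1 + x2 > 1 (logical AND in bipolar form).
data = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1), ((1, 1), 1)]
w = train_perceptron(data)
print([sign(sum(wi * xi for wi, xi in zip(w, list(x) + [1.0]))) for x, _ in data])
# [-1, -1, -1, 1]
```

Because the data are linearly separable, the updates stop changing the weights once every sample is classified correctly, in accordance with (2-8).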

Fig. 2-3 Diagram of general weight learning rule

2.2.2 Perceptron learning rule

The perceptron learning rule processes the training data one at a time and adjusts the weights incrementally. The learning signal is set as

$$r = d_j - y_j \tag{2-7}$$

Since the desired output $d_j$ takes the value $+1$ or $-1$, we have

$$\Delta w_{ji} = \eta\big[d_j - \operatorname{sgn}(W_j^T X)\big]x_i = \begin{cases} 2\eta\, d_j x_i & \text{if } y_j \ne d_j \\ 0 & \text{otherwise} \end{cases} \tag{2-8}$$

for $j = 1, 2, \ldots, m$, which indicates that the weights are adjusted only when the actual output $y_j$ disagrees with $d_j$.

2.2.3 Widrow-Hoff learning rule

The Widrow-Hoff learning rule is very similar to the perceptron learning rule. The major difference is that the perceptron learning rule originated from an empirical Hebbian assumption, while the Widrow-Hoff rule was derived from the gradient-descent

method, which can be easily generalized to more than one layer. For a given set of $p$ training patterns $\{(x^{(1)}, d^{(1)}), (x^{(2)}, d^{(2)}), \ldots, (x^{(p)}, d^{(p)})\}$, the goal is to find a correct set of weights $w_j$ such that

$$\sum_{j=1}^{m} w_j x_j^{(k)} = d^{(k)} \tag{2-9}$$

Then define the cost function $E(w)$ as

$$E(w) = \frac{1}{2}\sum_{k=1}^{p}\big(d^{(k)} - y^{(k)}\big)^2 = \frac{1}{2}\sum_{k=1}^{p}\big(d^{(k)} - w^T x^{(k)}\big)^2 = \frac{1}{2}\sum_{k=1}^{p}\Big(d^{(k)} - \sum_{j=1}^{m} w_j x_j^{(k)}\Big)^2 \tag{2-10}$$

The weight change in response to pattern $x^{(k)}$, obtained by descending the gradient of $E(w)$, is simply

$$\Delta w_j = \eta\,\big(d^{(k)} - w^T x^{(k)}\big)\, x_j^{(k)} \tag{2-11}$$

2.2.4 Hebbian learning rule

The Hebbian hypothesis is that the weights are adjusted according to the pre- and post-synaptic correlations; the learning signal $r$ in the general weight learning rule is set as

$$r = a(w_i^T x) = y_i \tag{2-12}$$

where $a(\cdot)$ is the activation function. In the Hebbian learning rule, the learning signal $r$ is simply set to the current output. The increment of the weight vector becomes

$$\Delta w_i = \eta\, a(w_i^T x)\, x \tag{2-13}$$

That is, the components of the weight vector are updated by an amount

$$\Delta w_{ij} = \eta\, a(w_i^T x)\, x_j = \eta\, y_i x_j \tag{2-14}$$

Thus, the Hebbian learning rule is an unsupervised learning rule for a feedforward network, since it uses only the product of the input and the actual output to modify the weights; no desired output is needed to generate the learning signal.

2.3 Radial Basis Function Neural Network

A radial basis function (RBF) neural network has an input layer, a hidden layer, and an output layer. Figure 2-4 is a schematic diagram of an RBFN. The RBFN is designed

to perform input-output mapping trained by examples $(x_k, d_k)$, $k = 1, 2, \ldots, p$. In the input layer, the input neurons feed the values to each of the neurons in the hidden layer. The neurons in the hidden layer contain basis functions

$$z_j(x) = \Phi\big(\|x - c_j\|\big), \quad j = 1, 2, \ldots, M \tag{2-15}$$

where $\Phi(\cdot)$ is the radial basis function, $c_j$ is the center of the $j$th neuron, and $\|x - c_j\|$ represents the Euclidean distance of the test case from the neuron's center point. The value coming out of a neuron in the hidden layer is multiplied by a weight associated with that neuron and passed to the summation unit, which adds up the weighted values and presents the sum as

$$y = \sum_{j=0}^{M} w_j z_j(x) \tag{2-16}$$

The RBFN is basically trained by a hybrid learning rule: unsupervised learning in the input layer and supervised learning in the output layer. The weights in the output layer can be updated simply by using the delta learning rule.

Fig. 2-4 Schematic diagram of RBFN

Broadly speaking, there are six kinds of radial basis functions:

1. Linear function: $\xi = \|x - c\|$ (2-17)
2. Cubic function: $\xi = \|x - c\|^3$ (2-18)
3. Thin-plate-spline function: $\xi = \|x - c\|^2 \ln\|x - c\|$ (2-19)
4. Gaussian function: $\xi = \exp\big(-\|x - c\|^2 / 2\sigma^2\big)$ (2-20)
5. Multiquadratic function: $\xi = \sqrt{\|x - c\|^2 + \sigma^2}$ (2-21)
6. Inverse multiquadratic function: $\xi = 1\big/\sqrt{\|x - c\|^2 + \sigma^2}$ (2-22)

In this thesis, the Gaussian function is used as the radial basis function of the neural network. The schematic diagram of the Gaussian function is shown in Fig. 2-5: (a) Gaussian function of dimension one, (b) Gaussian function of dimension two.

Fig. 2-5 Schematic diagram of Gaussian function
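A minimal sketch of the RBFN forward pass of (2-15)-(2-16) with the Gaussian basis (2-20); the centers, width, and output weights below are arbitrary example values:

```python
import math

# RBFN forward pass: Gaussian hidden units (2-15), (2-20) and a
# weighted-sum output layer (2-16).

def gaussian_rbf(x, c, sigma):
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c))  # ||x - c||^2
    return math.exp(-dist_sq / (2.0 * sigma ** 2))          # Gaussian basis (2-20)

def rbfn_output(x, centers, sigma, weights):
    z = [gaussian_rbf(x, c, sigma) for c in centers]        # hidden layer (2-15)
    return sum(w * zj for w, zj in zip(weights, z))         # output layer (2-16)

centers = [(0.0,), (1.0,), (2.0,)]
weights = [1.0, -0.5, 2.0]
y = rbfn_output((1.0,), centers, sigma=1.0, weights=weights)
# y = 1*exp(-0.5) - 0.5*1 + 2*exp(-0.5) = 3*exp(-0.5) - 0.5
```

Note that each hidden unit responds most strongly when the input is at its center (where the Gaussian equals 1) and decays with distance, which is what makes the hidden layer a local basis for function approximation.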

2.4 Simulation Results

In this section, we illustrate the radial basis function neural network method described in Section 2.3 by considering the following two examples. The first example has a one-dimensional input vector and the second a two-dimensional input vector; the Gaussian function is used as the radial basis function. The centers are produced at random, and the width is a positive constant.

Example 2-1: The reference model is

$$y = 2\sin(t) + \cos(t) \tag{2-23}$$

The neural network has 21 centers, which range over the interval [0, 20]. The initial state is $\hat{y}(0) = -0.5$. The system output $\hat{y}$ and the model reference signal $y$ are shown in Fig. 2-6, and their error is shown in Fig. 2-7.

Fig. 2-6 Model reference signal y and system output ŷ

Fig. 2-7 Error of system output ŷ and model reference signal y

Example 2-2: The reference model is

$$y = -5\exp\big[-(x_1^2 + x_2^2)/2\big] \tag{2-24}$$

The neural network has 3 centers, which range over the interval [-5, 5]. The model reference signal is shown in Fig. 2-8 and the system output $\hat{y}$ is shown in Fig. 2-9.

Fig. 2-8 Model reference signal

Fig. 2-9 System output ŷ
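A sketch of the kind of experiment in Example 2-1: the output-layer weights of a Gaussian RBFN are trained with the delta rule (2-11) to approximate $y(t) = 2\sin(t) + \cos(t)$ on [0, 20]. The thesis's exact simulation parameters are not reproduced here; the learning rate, width, sample spacing, and (regularly spaced rather than random) centers are illustrative choices only:

```python
import math

# Delta-rule training of the output weights of a Gaussian RBFN so that it
# approximates y(t) = 2 sin(t) + cos(t) over [0, 20], as in Example 2-1.

def phi(t, centers, sigma):
    return [math.exp(-((t - c) ** 2) / (2 * sigma ** 2)) for c in centers]

centers = [float(k) for k in range(21)]   # 21 centers over [0, 20]
sigma, eta = 1.0, 0.05
w = [0.0] * len(centers)

samples = [(0.1 * i, 2 * math.sin(0.1 * i) + math.cos(0.1 * i)) for i in range(201)]
for _ in range(300):                       # repeated delta-rule sweeps (2-11)
    for t, d in samples:
        z = phi(t, centers, sigma)
        y_hat = sum(wj * zj for wj, zj in zip(w, z))
        w = [wj + eta * (d - y_hat) * zj for wj, zj in zip(w, z)]

max_err = max(abs(d - sum(wj * zj for wj, zj in zip(w, phi(t, centers, sigma))))
              for t, d in samples)
```

After training, the residual error over the sampled interval is small, which is the behavior plotted in Figs. 2-6 and 2-7.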

Chapter 3 RBFNN Adaptive Backstepping Controllers for MIMO Affine Nonlinear Systems

This chapter proposes a radial basis function neural network adaptive backstepping controller (RBFNN_ABC) with a first-order filter for multiple-input multiple-output (MIMO) nonlinear systems in block-triangular form. The control scheme incorporates the adaptive neural backstepping design technique with a first-order filter at each step of the backstepping design to avoid the higher-order derivative problem generated by the backstepping design. This problem may create an unpredictable and unfavorable influence on control performance because higher-order derivative term errors are introduced into the neural approximation model.

3.1 Problem Formulation

The model of an uncertain MIMO block-triangular system can be described as

$$\begin{cases} \dot{x}_{1,1} = f_{1,1}(X_{1,1},\ldots,X_{n,1-\lambda_1+\lambda_n}) + g_{1,1}(X_{1,1},\ldots,X_{n,1-\lambda_1+\lambda_n})\, x_{1,2} \\ \dot{x}_{1,i_1} = f_{1,i_1}(X_{1,i_1},\ldots,X_{n,i_1-\lambda_1+\lambda_n}) + g_{1,i_1}(X_{1,i_1},\ldots,X_{n,i_1-\lambda_1+\lambda_n})\, x_{1,i_1+1}, \quad 2 \le i_1 < \lambda_1 \\ \dot{x}_{1,\lambda_1} = f_{1,\lambda_1}(X) + g_{1,\lambda_1}(X_{1,\lambda_1-1},\ldots,X_{n,\lambda_n-1})\, u_1 \\ \quad\vdots \\ \dot{x}_{k,1} = f_{k,1}(X_{1,1-\lambda_k+\lambda_1},\ldots,X_{n,1-\lambda_k+\lambda_n}) + g_{k,1}(X_{1,1-\lambda_k+\lambda_1},\ldots,X_{n,1-\lambda_k+\lambda_n})\, x_{k,2} \\ \dot{x}_{k,i_k} = f_{k,i_k}(X_{1,i_k-\lambda_k+\lambda_1},\ldots,X_{n,i_k-\lambda_k+\lambda_n}) + g_{k,i_k}(X_{1,i_k-\lambda_k+\lambda_1},\ldots,X_{n,i_k-\lambda_k+\lambda_n})\, x_{k,i_k+1}, \quad 2 \le i_k < \lambda_k \\ \dot{x}_{k,\lambda_k} = f_{k,\lambda_k}(X, u_1, \ldots, u_{k-1}) + g_{k,\lambda_k}(X_{1,\lambda_k-1},\ldots,X_{n,\lambda_k-1})\, u_k \\ y_k = x_{k,1}, \quad 1 \le k \le n \end{cases} \tag{3-1}$$

where $x_{k,i_k}$, $i_k = 1, \ldots, \lambda_k$, are the states of the $k$th subsystem, $u_k \in \mathbb{R}$ is the input, $y_k \in \mathbb{R}$ is the output, and $\lambda_k$ is the order of the $k$th subsystem; $X_{k,i_k} = [x_{k,1}, \ldots, x_{k,i_k}]$ and $X = [x_{1,1}, \ldots, x_{n,\lambda_n}]$. The functions $f(\cdot)$ and $g(\cdot)$ are unknown smooth continuous functions. The control objective is to design an adaptive neural backstepping controller for system (3-1) such that all the signals in the closed loop are uniformly

ultimately bounded and the state $x_{k,1}$ tracks a bounded reference signal $x_{k,d}$ as closely as possible.

Remark 3.1: As examples, consider the following two block-triangular MIMO systems:

$$F_1: \begin{cases} \dot{x}_{1,1} = f_{1,1}(X_{1,1}, X_{2,1}) + g_{1,1}(X_{1,1}, X_{2,1})\, x_{1,2} \\ \dot{x}_{1,2} = f_{1,2}(X) + g_{1,2}(X_{1,1}, X_{2,1})\, u_1 \\ \dot{x}_{2,1} = f_{2,1}(X_{1,1}, X_{2,1}) + g_{2,1}(X_{1,1}, X_{2,1})\, x_{2,2} \\ \dot{x}_{2,2} = f_{2,2}(X, u_1) + g_{2,2}(X_{1,1}, X_{2,1})\, u_2 \\ y_i = x_{i,1}, \quad i = 1, 2 \end{cases} \tag{3-2}$$

where $X = [x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}]$, $X_{1,1} = [x_{1,1}]$, $X_{2,1} = [x_{2,1}]$, and

$$F_2: \begin{cases} \dot{x}_{1,1} = f_{1,1}(X_{1,1}, X_{2,2}) + g_{1,1}(X_{1,1}, X_{2,2})\, x_{1,2} \\ \dot{x}_{1,2} = f_{1,2}(X) + g_{1,2}(X_{1,1}, X_{2,2})\, u_1 \\ \dot{x}_{2,1} = f_{2,1}(X_{2,1}) + g_{2,1}(X_{2,1})\, x_{2,2} \\ \dot{x}_{2,2} = f_{2,2}(X_{1,1}, X_{2,2}) + g_{2,2}(X_{1,1}, X_{2,2})\, x_{2,3} \\ \dot{x}_{2,3} = f_{2,3}(X, u_1) + g_{2,3}(X_{1,1}, X_{2,2})\, u_2 \\ y_i = x_{i,1}, \quad i = 1, 2 \end{cases} \tag{3-3}$$

where $X = [x_{1,1}, x_{1,2}, x_{2,1}, x_{2,2}, x_{2,3}]$ and $X_{2,2} = [x_{2,1}, x_{2,2}]$. System $F_1$ has two subsystems of the same order, while system $F_2$ has two subsystems of different orders.

For designing an adaptive neural backstepping controller for system (3-1), we make the following assumptions.

Assumption 3-1: There exist positive constants $g_{k,i_k}^L$ and $g_{k,i_k}^U$ such that $g_{k,i_k}^L \le g_{k,i_k}(\cdot) \le g_{k,i_k}^U$, for $k = 1, 2, \ldots, n$.

Assumption 3-2: There exists a positive constant $\dot{g}_{k,i_k}^U$ such that $|\dot{g}_{k,i_k}(\cdot)| \le \dot{g}_{k,i_k}^U$, for $k = 1, 2, \ldots, n$.

3.2 Design of RBFNN_ABC

For the $k$th subsystem:

Step 1: Define the tracking error $z_{k,1} = x_{k,1} - x_{k,d}$, where $x_{k,d}$ is the reference signal of the $k$th subsystem. The derivative of $z_{k,1}$ is

$$\dot{z}_{k,1} = \dot{x}_{k,1} - \dot{x}_{k,d} = f_{k,1} + g_{k,1} x_{k,2} - \dot{x}_{k,d} \tag{3-4}$$

The ideal virtual controller $\alpha_{k,1}^*$ is defined as

$$\alpha_{k,1}^* = -c_{k,1} z_{k,1} - \frac{1}{g_{k,1}}\big(f_{k,1} - \dot{x}_{k,d}\big) \tag{3-5}$$

where $c_{k,1}$ is a positive constant. By employing an RBF neural network $W_{k,1}^{*T}\phi_{k,1}(b_{k,1})$ to approximate the last term of (3-5), i.e., letting $\frac{1}{g_{k,1}}(f_{k,1} - \dot{x}_{k,d}) = W_{k,1}^{*T}\phi_{k,1}(b_{k,1}) + \delta_{k,1}$, $\alpha_{k,1}^*$ can be expressed as

$$\alpha_{k,1}^* = -c_{k,1} z_{k,1} - W_{k,1}^{*T}\phi_{k,1}(b_{k,1}) - \delta_{k,1} \tag{3-6}$$

where $W_{k,1}^*$ denotes the ideal constant weights, $b_{k,1} = [X_{1,1-\lambda_k+\lambda_1}, \ldots, X_{n,1-\lambda_k+\lambda_n}, \dot{x}_{k,d}]$ denotes the RBF input, and $\delta_{k,1}$ is the approximation error with $|\delta_{k,1}| \le \delta_{k,1}^*$, where $\delta_{k,1}^*$ is a bounded constant. The virtual controller $\alpha_{k,1}$ is defined as

$$\alpha_{k,1} = -c_{k,1} z_{k,1} - \hat{W}_{k,1}^T\phi_{k,1}(b_{k,1}) \tag{3-7}$$

where $\hat{W}_{k,1}$ is the estimate of $W_{k,1}^*$ and $c_{k,1}$ is a positive design constant. The adaptive law for $\hat{W}_{k,1}$ is

$$\dot{\hat{W}}_{k,1} = \beta_{k,1}\big[\phi_{k,1}(b_{k,1})\, z_{k,1} - \sigma_{k,1}\hat{W}_{k,1}\big] \tag{3-8}$$

where $\sigma_{k,1} > 0$ and $\beta_{k,1} > 0$ are design constants. Let $\hat{W}_{k,1}^T\phi_{k,1}(b_{k,1})$ pass through a first-order filter to obtain $\gamma_{k,1}$:

$$\xi_{k,1}\dot{\gamma}_{k,1} + \gamma_{k,1} = \hat{W}_{k,1}^T\phi_{k,1}(b_{k,1}) \tag{3-9}$$

where $\xi_{k,1}$ is the time constant. Then the virtual controller $\alpha_{k,1}$ is redefined as

$$\alpha_{k,1} = -c_{k,1} z_{k,1} - \gamma_{k,1} \tag{3-10}$$

Define the error between $\hat{W}_{k,1}^T\phi_{k,1}(b_{k,1})$ and $\gamma_{k,1}$ as

$$e_{k,1} = \gamma_{k,1} - \hat{W}_{k,1}^T\phi_{k,1}(b_{k,1}) \tag{3-11}$$

The derivative of $e_{k,1}$ is

$$\dot{e}_{k,1} = \dot{\gamma}_{k,1} - \dot{\hat{W}}_{k,1}^T\phi_{k,1}(b_{k,1}) - \hat{W}_{k,1}^T\dot{\phi}_{k,1}(b_{k,1}) = -\frac{e_{k,1}}{\xi_{k,1}} + A_{k,1} \tag{3-12}$$

where

$$A_{k,1} = -\dot{\hat{W}}_{k,1}^T\phi_{k,1}(b_{k,1}) - \hat{W}_{k,1}^T\Big(\frac{\partial\phi_{k,1}}{\partial x_{k,1}}\dot{x}_{k,1} + \cdots + \frac{\partial\phi_{k,1}}{\partial x_{n,1-\lambda_k+\lambda_n}}\dot{x}_{n,1-\lambda_k+\lambda_n} + \frac{\partial\phi_{k,1}}{\partial\dot{x}_{k,d}}\ddot{x}_{k,d}\Big) \tag{3-13}$$

is a continuous function. Define the estimation error as

$$\tilde{W}_{k,1} = \hat{W}_{k,1} - W_{k,1}^* \tag{3-14}$$

Then, since $x_{k,2} = z_{k,2} + \alpha_{k,1}$, we have

$$\dot{z}_{k,1} = f_{k,1} + g_{k,1}(z_{k,2} + \alpha_{k,1}) - \dot{x}_{k,d} = f_{k,1} + g_{k,1}\big[z_{k,2} - c_{k,1}z_{k,1} - \hat{W}_{k,1}^T\phi_{k,1}(b_{k,1}) + \xi_{k,1}\dot{\gamma}_{k,1}\big] - \dot{x}_{k,d} = g_{k,1}\big[z_{k,2} - c_{k,1}z_{k,1} - \tilde{W}_{k,1}^T\phi_{k,1}(b_{k,1}) + \xi_{k,1}\dot{\gamma}_{k,1} + \delta_{k,1}\big] \tag{3-15}$$

Consider the Lyapunov function

$$v_{k,1} = \frac{1}{2g_{k,1}}z_{k,1}^2 + \frac{1}{2\beta_{k,1}}\tilde{W}_{k,1}^T\tilde{W}_{k,1} + \frac{1}{2}e_{k,1}^2 \tag{3-16}$$

The derivative of $v_{k,1}$ can be found as follows:

$$\dot{v}_{k,1} = \frac{1}{g_{k,1}}z_{k,1}\dot{z}_{k,1} - \frac{\dot{g}_{k,1}}{2g_{k,1}^2}z_{k,1}^2 + \frac{1}{\beta_{k,1}}\tilde{W}_{k,1}^T\dot{\hat{W}}_{k,1} + e_{k,1}\dot{e}_{k,1} = z_{k,1}z_{k,2} - c_{k,1}z_{k,1}^2 - z_{k,1}\tilde{W}_{k,1}^T\phi_{k,1}(b_{k,1}) + z_{k,1}(\xi_{k,1}\dot{\gamma}_{k,1} + \delta_{k,1}) + \frac{1}{\beta_{k,1}}\tilde{W}_{k,1}^T\dot{\hat{W}}_{k,1} - \frac{\dot{g}_{k,1}}{2g_{k,1}^2}z_{k,1}^2 - \frac{e_{k,1}^2}{\xi_{k,1}} + e_{k,1}A_{k,1} \tag{3-17}$$

Substituting the adaptive law (3-8) into (3-17), we have

$$\dot{v}_{k,1} = z_{k,1}z_{k,2} - c_{k,1}z_{k,1}^2 + z_{k,1}(\xi_{k,1}\dot{\gamma}_{k,1} + \delta_{k,1}) - \sigma_{k,1}\tilde{W}_{k,1}^T\hat{W}_{k,1} - \frac{\dot{g}_{k,1}}{2g_{k,1}^2}z_{k,1}^2 - \frac{e_{k,1}^2}{\xi_{k,1}} + e_{k,1}A_{k,1} \tag{3-18}$$

Let $-c_{k,1} - \frac{\dot{g}_{k,1}}{2g_{k,1}^2} \le -c_{k,1} + \bar{c}_{k,1}$, where $\bar{c}_{k,1}$ is the upper bound of $\frac{|\dot{g}_{k,1}|}{2g_{k,1}^2}$. By choosing $c_{k,1}$ such that $-c_{k,1} + \bar{c}_{k,1} = -c_{k,11} - c_{k,12} < 0$, where $c_{k,11}$ and $c_{k,12}$ are positive constants, we have

$$\dot{v}_{k,1} \le z_{k,1}z_{k,2} - c_{k,11}z_{k,1}^2 - c_{k,12}z_{k,1}^2 + z_{k,1}(\xi_{k,1}\dot{\gamma}_{k,1} + \delta_{k,1}) - \sigma_{k,1}\tilde{W}_{k,1}^T\hat{W}_{k,1} - \frac{e_{k,1}^2}{\xi_{k,1}} + e_{k,1}A_{k,1} \tag{3-19}$$

Let $\xi_{k,1}\dot{\gamma}_{k,1} + \delta_{k,1} = s_{k,1}$, where $s_{k,1}$ has an upper bound $s_{k,1}^*$. Using the fact that

$$-c_{k,12}z_{k,1}^2 + s_{k,1}z_{k,1} \le \frac{s_{k,1}^2}{4c_{k,12}} \le \frac{s_{k,1}^{*2}}{4c_{k,12}} \tag{3-20}$$

and letting $\frac{1}{\xi_{k,1}} = p_{k,1} + q_{k,1}$, where $p_{k,1}$ and $q_{k,1}$ are positive constants, we have

$$-\frac{e_{k,1}^2}{\xi_{k,1}} + e_{k,1}A_{k,1} = -(p_{k,1} + q_{k,1})e_{k,1}^2 + e_{k,1}A_{k,1} = -p_{k,1}e_{k,1}^2 - q_{k,1}e_{k,1}^2 + e_{k,1}A_{k,1} \tag{3-21}$$

By completion of squares, we have

$$-q_{k,1}e_{k,1}^2 + e_{k,1}A_{k,1} \le -q_{k,1}|e_{k,1}|^2 + |A_{k,1}||e_{k,1}| \le \frac{|A_{k,1}|^2}{4q_{k,1}} \tag{3-22}$$

Then

$$\dot{v}_{k,1} \le z_{k,1}z_{k,2} - c_{k,11}z_{k,1}^2 + \frac{s_{k,1}^{*2}}{4c_{k,12}} - p_{k,1}e_{k,1}^2 + \frac{|A_{k,1}|^2}{4q_{k,1}} - \sigma_{k,1}\tilde{W}_{k,1}^T\hat{W}_{k,1} \tag{3-23}$$

Using the fact that

$$-\sigma_{k,1}\tilde{W}_{k,1}^T\hat{W}_{k,1} \le -\frac{\sigma_{k,1}}{2}\|\tilde{W}_{k,1}\|^2 + \frac{\sigma_{k,1}}{2}\|W_{k,1}^*\|^2 \tag{3-24}$$

we obtain

$$\dot{v}_{k,1} \le z_{k,1}z_{k,2} - c_{k,11}z_{k,1}^2 + \frac{s_{k,1}^{*2}}{4c_{k,12}} - p_{k,1}e_{k,1}^2 + \frac{|A_{k,1}|^2}{4q_{k,1}} - \frac{\sigma_{k,1}}{2}\|\tilde{W}_{k,1}\|^2 + \frac{\sigma_{k,1}}{2}\|W_{k,1}^*\|^2 \tag{3-25}$$

Step $i_k$ ($2 \le i_k \le \lambda_k - 1$): Define the tracking error $z_{k,i_k} = x_{k,i_k} - \alpha_{k,i_k-1}$. The derivative of $z_{k,i_k}$ is

$$\dot{z}_{k,i_k} = \dot{x}_{k,i_k} - \dot{\alpha}_{k,i_k-1} = f_{k,i_k} + g_{k,i_k}x_{k,i_k+1} - \dot{\alpha}_{k,i_k-1} \tag{3-26}$$

The ideal virtual controller $\alpha_{k,i_k}^*$ is defined as

$$\alpha_{k,i_k}^* = -z_{k,i_k-1} - c_{k,i_k}z_{k,i_k} - \frac{1}{g_{k,i_k}}\big(f_{k,i_k} - \dot{\alpha}_{k,i_k-1}\big) \tag{3-27}$$

where $c_{k,i_k}$ is a positive constant. By employing an RBF neural network $W_{k,i_k}^{*T}\phi_{k,i_k}(b_{k,i_k})$ to approximate the last term of (3-27), i.e., letting $\frac{1}{g_{k,i_k}}(f_{k,i_k} - \dot{\alpha}_{k,i_k-1}) = W_{k,i_k}^{*T}\phi_{k,i_k}(b_{k,i_k}) + \delta_{k,i_k}$, $\alpha_{k,i_k}^*$ can be expressed as

$$\alpha_{k,i_k}^* = -z_{k,i_k-1} - c_{k,i_k}z_{k,i_k} - W_{k,i_k}^{*T}\phi_{k,i_k}(b_{k,i_k}) - \delta_{k,i_k} \tag{3-28}$$

where $W_{k,i_k}^*$ denotes the ideal constant weights, $b_{k,i_k} = [X_{1,i_k-\lambda_k+\lambda_1}, \ldots, X_{n,i_k-\lambda_k+\lambda_n}, \dot{\alpha}_{k,i_k-1}]$ denotes the RBF input, and $\delta_{k,i_k}$ is the approximation error with $|\delta_{k,i_k}| \le \delta_{k,i_k}^*$, where $\delta_{k,i_k}^*$ is a bounded constant. The virtual controller $\alpha_{k,i_k}$ is defined as

$$\alpha_{k,i_k} = -z_{k,i_k-1} - c_{k,i_k}z_{k,i_k} - \hat{W}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k}) \tag{3-29}$$

where $\hat{W}_{k,i_k}$ is the estimate of $W_{k,i_k}^*$ and $c_{k,i_k}$ is a positive design constant. The adaptive law for $\hat{W}_{k,i_k}$ is

$$\dot{\hat{W}}_{k,i_k} = \beta_{k,i_k}\big[\phi_{k,i_k}(b_{k,i_k})\, z_{k,i_k} - \sigma_{k,i_k}\hat{W}_{k,i_k}\big] \tag{3-30}$$

where $\sigma_{k,i_k} > 0$ and $\beta_{k,i_k} > 0$ are design constants. Let $\hat{W}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k})$ pass through a first-order filter to obtain $\gamma_{k,i_k}$:

$$\xi_{k,i_k}\dot{\gamma}_{k,i_k} + \gamma_{k,i_k} = \hat{W}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k}) \tag{3-31}$$

where $\xi_{k,i_k}$ is the time constant. Then the virtual controller $\alpha_{k,i_k}$ is redefined as

$$\alpha_{k,i_k} = -z_{k,i_k-1} - c_{k,i_k}z_{k,i_k} - \gamma_{k,i_k} \tag{3-32}$$
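The role of the first-order filter of the form (3-31) can be seen in a small numerical sketch (Euler discretization; the time constant, step size, and input signal below are arbitrary example values, with an ordinary test signal standing in for the neural network term): the filter output $\gamma$ closely tracks its input while smoothing it, so the next design step can differentiate $\gamma$ through the filter equation instead of differentiating the neural network term directly.

```python
import math

# Euler discretization of the first-order filter  xi * gamma_dot + gamma = v,
# where v stands in for the neural-network term W^T phi (here a test sinusoid).

def filter_step(gamma, v, xi, dt):
    gamma_dot = (v - gamma) / xi          # from xi*gamma_dot + gamma = v
    return gamma + dt * gamma_dot, gamma_dot

xi, dt = 0.05, 0.001
gamma, gamma_dot = 0.0, 0.0
for k in range(5000):                     # 5 s of simulated time
    v = math.sin(2.0 * math.pi * 0.5 * (k * dt))   # stand-in for W^T phi
    gamma, gamma_dot = filter_step(gamma, v, xi, dt)

# After the initial transient, gamma follows the slowly varying input v
# with a small lag, and gamma_dot is available without differentiating v.
```

The smaller the time constant $\xi$, the more closely $\gamma$ follows its input, at the cost of a larger $\dot{\gamma}$; this is the trade-off behind the bound on $s_{k,i_k} = \xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k}$ used in the analysis.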

Define the error between $\hat{W}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k})$ and $\gamma_{k,i_k}$ as

$$e_{k,i_k} = \gamma_{k,i_k} - \hat{W}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k}) \tag{3-33}$$

The derivative of $e_{k,i_k}$ is

$$\dot{e}_{k,i_k} = \dot{\gamma}_{k,i_k} - \dot{\hat{W}}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k}) - \hat{W}_{k,i_k}^T\dot{\phi}_{k,i_k}(b_{k,i_k}) = -\frac{e_{k,i_k}}{\xi_{k,i_k}} + A_{k,i_k} \tag{3-34}$$

where

$$A_{k,i_k} = -\dot{\hat{W}}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k}) - \hat{W}_{k,i_k}^T\Big(\frac{\partial\phi_{k,i_k}}{\partial x_{1,i_k-\lambda_k+\lambda_1}}\dot{x}_{1,i_k-\lambda_k+\lambda_1} + \cdots + \frac{\partial\phi_{k,i_k}}{\partial x_{n,i_k-\lambda_k+\lambda_n}}\dot{x}_{n,i_k-\lambda_k+\lambda_n} + \frac{\partial\phi_{k,i_k}}{\partial\dot{\alpha}_{k,i_k-1}}\ddot{\alpha}_{k,i_k-1}\Big) \tag{3-35}$$

is a continuous function. Define the estimation error as

$$\tilde{W}_{k,i_k} = \hat{W}_{k,i_k} - W_{k,i_k}^* \tag{3-36}$$

Then, since $x_{k,i_k+1} = z_{k,i_k+1} + \alpha_{k,i_k}$, we have

$$\dot{z}_{k,i_k} = f_{k,i_k} + g_{k,i_k}(z_{k,i_k+1} + \alpha_{k,i_k}) - \dot{\alpha}_{k,i_k-1} = g_{k,i_k}\big[z_{k,i_k+1} - z_{k,i_k-1} - c_{k,i_k}z_{k,i_k} - \tilde{W}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k}) + \xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k}\big] \tag{3-37}$$

Consider the Lyapunov function

$$v_{k,i_k} = \frac{1}{2g_{k,i_k}}z_{k,i_k}^2 + \frac{1}{2\beta_{k,i_k}}\tilde{W}_{k,i_k}^T\tilde{W}_{k,i_k} + \frac{1}{2}e_{k,i_k}^2 \tag{3-38}$$

The derivative of $v_{k,i_k}$ can be found as follows:

$$\dot{v}_{k,i_k} = \frac{1}{g_{k,i_k}}z_{k,i_k}\dot{z}_{k,i_k} - \frac{\dot{g}_{k,i_k}}{2g_{k,i_k}^2}z_{k,i_k}^2 + \frac{1}{\beta_{k,i_k}}\tilde{W}_{k,i_k}^T\dot{\hat{W}}_{k,i_k} + e_{k,i_k}\dot{e}_{k,i_k} = z_{k,i_k}z_{k,i_k+1} - z_{k,i_k-1}z_{k,i_k} - c_{k,i_k}z_{k,i_k}^2 - z_{k,i_k}\tilde{W}_{k,i_k}^T\phi_{k,i_k}(b_{k,i_k}) + z_{k,i_k}(\xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k}) + \frac{1}{\beta_{k,i_k}}\tilde{W}_{k,i_k}^T\dot{\hat{W}}_{k,i_k} - \frac{\dot{g}_{k,i_k}}{2g_{k,i_k}^2}z_{k,i_k}^2 - \frac{e_{k,i_k}^2}{\xi_{k,i_k}} + e_{k,i_k}A_{k,i_k} \tag{3-39}$$

Substituting the adaptive law (3-30) into (3-39), we have

$$\dot{v}_{k,i_k} = z_{k,i_k}z_{k,i_k+1} - z_{k,i_k-1}z_{k,i_k} - c_{k,i_k}z_{k,i_k}^2 + z_{k,i_k}(\xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k}) - \sigma_{k,i_k}\tilde{W}_{k,i_k}^T\hat{W}_{k,i_k} - \frac{\dot{g}_{k,i_k}}{2g_{k,i_k}^2}z_{k,i_k}^2 - \frac{e_{k,i_k}^2}{\xi_{k,i_k}} + e_{k,i_k}A_{k,i_k} \tag{3-40}$$

Let $-c_{k,i_k} - \frac{\dot{g}_{k,i_k}}{2g_{k,i_k}^2} \le -c_{k,i_k} + \bar{c}_{k,i_k}$, where $\bar{c}_{k,i_k}$ is the upper bound of $\frac{|\dot{g}_{k,i_k}|}{2g_{k,i_k}^2}$. By choosing $c_{k,i_k}$ such that $-c_{k,i_k} + \bar{c}_{k,i_k} = -c_{k,i_k1} - c_{k,i_k2} < 0$, where $c_{k,i_k1}$ and $c_{k,i_k2}$ are positive constants, we have

$$\dot{v}_{k,i_k} \le z_{k,i_k}z_{k,i_k+1} - z_{k,i_k-1}z_{k,i_k} - c_{k,i_k1}z_{k,i_k}^2 - c_{k,i_k2}z_{k,i_k}^2 + z_{k,i_k}(\xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k}) - \sigma_{k,i_k}\tilde{W}_{k,i_k}^T\hat{W}_{k,i_k} - \frac{e_{k,i_k}^2}{\xi_{k,i_k}} + e_{k,i_k}A_{k,i_k} \tag{3-41}$$

Let $\xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k} = s_{k,i_k}$, where $s_{k,i_k}$ has an upper bound $s_{k,i_k}^*$. Using the fact that

$$-c_{k,i_k2}z_{k,i_k}^2 + s_{k,i_k}z_{k,i_k} \le \frac{s_{k,i_k}^2}{4c_{k,i_k2}} \le \frac{s_{k,i_k}^{*2}}{4c_{k,i_k2}} \tag{3-42}$$

and letting $\frac{1}{\xi_{k,i_k}} = p_{k,i_k} + q_{k,i_k}$, where $p_{k,i_k}$ and $q_{k,i_k}$ are positive constants, we have

$$-\frac{e_{k,i_k}^2}{\xi_{k,i_k}} + e_{k,i_k}A_{k,i_k} = -(p_{k,i_k} + q_{k,i_k})e_{k,i_k}^2 + e_{k,i_k}A_{k,i_k} = -p_{k,i_k}e_{k,i_k}^2 - q_{k,i_k}e_{k,i_k}^2 + e_{k,i_k}A_{k,i_k} \tag{3-43}$$

By completion of squares, we have

\[ -q_{k,i_k}e_{k,i_k}^{2} + e_{k,i_k}A_{k,i_k} \le -q_{k,i_k}|e_{k,i_k}|^{2} + |A_{k,i_k}||e_{k,i_k}| \le \frac{A_{k,i_k}^{2}}{4q_{k,i_k}} \tag{3-44} \]

then

\[ \dot{v}_{k,i_k} \le z_{k,i_k}z_{k,i_k+1} - z_{k,i_k-1}z_{k,i_k} - c_{k,i_k1}z_{k,i_k}^{2} + \frac{s_{k,i_k}^{*2}}{4c_{k,i_k2}} - p_{k,i_k}e_{k,i_k}^{2} + \frac{A_{k,i_k}^{2}}{4q_{k,i_k}} - \sigma_{k,i_k}\tilde{W}_{k,i_k}^{T}\hat{W}_{k,i_k} \tag{3-45} \]

Using the fact

\[ -\sigma_{k,i_k}\tilde{W}_{k,i_k}^{T}\hat{W}_{k,i_k} \le -\frac{\sigma_{k,i_k}}{2}\|\tilde{W}_{k,i_k}\|^{2} + \frac{\sigma_{k,i_k}}{2}\|W_{k,i_k}^{*}\|^{2} \tag{3-46} \]

then

\[ \dot{v}_{k,i_k} \le z_{k,i_k}z_{k,i_k+1} - z_{k,i_k-1}z_{k,i_k} - c_{k,i_k1}z_{k,i_k}^{2} + \frac{s_{k,i_k}^{*2}}{4c_{k,i_k2}} - p_{k,i_k}e_{k,i_k}^{2} + \frac{A_{k,i_k}^{2}}{4q_{k,i_k}} - \frac{\sigma_{k,i_k}}{2}\|\tilde{W}_{k,i_k}\|^{2} + \frac{\sigma_{k,i_k}}{2}\|W_{k,i_k}^{*}\|^{2} \tag{3-47} \]

Step \(\lambda_k\): Define the tracking error \(z_{k,\lambda_k} = x_{k,\lambda_k} - \alpha_{k,\lambda_k-1}\). The derivative of \(z_{k,\lambda_k}\) is

\[ \dot{z}_{k,\lambda_k} = \dot{x}_{k,\lambda_k} - \dot{\alpha}_{k,\lambda_k-1} = f_{k,\lambda_k} + g_{k,\lambda_k}u_k - \dot{\alpha}_{k,\lambda_k-1} \tag{3-48} \]

The ideal controller \(u_k^{*}\) is defined as

\[ u_k^{*} = -z_{k,\lambda_k-1} - c_{k,\lambda_k}z_{k,\lambda_k} - \frac{1}{g_{k,\lambda_k}}\big(f_{k,\lambda_k} - \dot{\alpha}_{k,\lambda_k-1}\big) \tag{3-49} \]

where \(c_{k,\lambda_k}\) is a positive constant. By employing an RBF neural network \(W_{k,\lambda_k}^{*T}\phi_{k,\lambda_k}(b_{k,\lambda_k})\) to approximate the last term of (3-49), let \(\frac{1}{g_{k,\lambda_k}}(f_{k,\lambda_k} - \dot{\alpha}_{k,\lambda_k-1}) = W_{k,\lambda_k}^{*T}\phi_{k,\lambda_k}(b_{k,\lambda_k}) + \delta_{k,\lambda_k}\). Then \(u_k^{*}\) can be expressed as

\[ u_k^{*} = -z_{k,\lambda_k-1} - c_{k,\lambda_k}z_{k,\lambda_k} - W_{k,\lambda_k}^{*T}\phi_{k,\lambda_k}(b_{k,\lambda_k}) - \delta_{k,\lambda_k} \tag{3-50} \]

where \(W_{k,\lambda_k}^{*T}\) denotes the ideal constant weights, \(b_{k,\lambda_k} = [X, u_1 \ldots u_{k-1}, \dot{\alpha}_{k,\lambda_k-1}]\) denotes the RBF input, and \(\delta_{k,\lambda_k}\) is the approximation error with \(|\delta_{k,\lambda_k}| \le \delta_{k,\lambda_k}^{*}\), where \(\delta_{k,\lambda_k}^{*}\) is a bounded constant. The controller \(u_k\) is defined as

\[ u_k = -z_{k,\lambda_k-1} - c_{k,\lambda_k}z_{k,\lambda_k} - \hat{W}_{k,\lambda_k}^{T}\phi_{k,\lambda_k}(b_{k,\lambda_k}) \tag{3-51} \]
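As a concrete illustration of the approximator \(\hat{W}^{T}\phi(b)\) used in (3-51) and at every earlier step, the sketch below evaluates a Gaussian RBF network. This is not the thesis's own code; the grid layout, width parameter, and zero initial weights are hypothetical placeholders chosen only to mirror the simulation setup later in this chapter.

```python
import numpy as np

def rbf_basis(b, centers, width=1.0):
    """Gaussian basis functions phi_j(b) = exp(-||b - c_j||^2 / width^2)."""
    b = np.asarray(b, dtype=float)
    diffs = centers - b                      # shape (num_centers, dim)
    sq_dist = np.sum(diffs**2, axis=1)
    return np.exp(-sq_dist / width**2)

def rbf_network(b, weights, centers, width=1.0):
    """Scalar network output W^T phi(b)."""
    return float(weights @ rbf_basis(b, centers, width))

# 27 centers on a 3x3x3 grid over [-1, 1]^3 (hypothetical layout)
grid = np.linspace(-1.0, 1.0, 3)
centers = np.array([[x, y, z] for x in grid for y in grid for z in grid])
weights = np.zeros(len(centers))             # zero initial weights, as in the simulations
print(rbf_network([0.1, -0.2, 0.3], weights, centers))  # 0.0 with zero weights
```

With zero initial weights the network output is zero, so at \(t = 0\) the controller reduces to the pure error-feedback terms; the adaptive law then shapes the weights online.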

where \(\hat{W}_{k,\lambda_k}\) is the estimate of \(W_{k,\lambda_k}^{*}\), and \(c_{k,\lambda_k}\) is a positive design constant. The adaptive law for \(\hat{W}_{k,\lambda_k}\) is

\[ \dot{\hat{W}}_{k,\lambda_k} = \beta_{k,\lambda_k}[\phi_{k,\lambda_k}(b_{k,\lambda_k})z_{k,\lambda_k} - \sigma_{k,\lambda_k}\hat{W}_{k,\lambda_k}] \tag{3-52} \]

where \(\sigma_{k,\lambda_k} > 0\) and \(\beta_{k,\lambda_k} > 0\) are design constants. Define the estimation error as

\[ \tilde{W}_{k,\lambda_k} = \hat{W}_{k,\lambda_k} - W_{k,\lambda_k}^{*} \tag{3-53} \]

Then we have

\[
\begin{aligned}
\dot{z}_{k,\lambda_k} &= f_{k,\lambda_k} + g_{k,\lambda_k}u_k - \dot{\alpha}_{k,\lambda_k-1} = g_{k,\lambda_k}\Big[u_k + \frac{1}{g_{k,\lambda_k}}(f_{k,\lambda_k} - \dot{\alpha}_{k,\lambda_k-1})\Big]\\
&= g_{k,\lambda_k}\big[-z_{k,\lambda_k-1} - c_{k,\lambda_k}z_{k,\lambda_k} - \hat{W}_{k,\lambda_k}^{T}\phi_{k,\lambda_k}(b_{k,\lambda_k}) + W_{k,\lambda_k}^{*T}\phi_{k,\lambda_k}(b_{k,\lambda_k}) + \delta_{k,\lambda_k}\big]\\
&= g_{k,\lambda_k}\big[-z_{k,\lambda_k-1} - c_{k,\lambda_k}z_{k,\lambda_k} - \tilde{W}_{k,\lambda_k}^{T}\phi_{k,\lambda_k}(b_{k,\lambda_k}) + \delta_{k,\lambda_k}\big]
\end{aligned} \tag{3-54}
\]

Consider the Lyapunov function

\[ v_{k,\lambda_k} = \frac{1}{2g_{k,\lambda_k}}z_{k,\lambda_k}^{2} + \frac{1}{2\beta_{k,\lambda_k}}\tilde{W}_{k,\lambda_k}^{T}\tilde{W}_{k,\lambda_k} \tag{3-55} \]

The derivative of \(v_{k,\lambda_k}\) can be found as follows:

\[
\begin{aligned}
\dot{v}_{k,\lambda_k} &= \frac{1}{g_{k,\lambda_k}}\dot{z}_{k,\lambda_k}z_{k,\lambda_k} + \frac{1}{\beta_{k,\lambda_k}}\tilde{W}_{k,\lambda_k}^{T}\dot{\hat{W}}_{k,\lambda_k} - \frac{\dot{g}_{k,\lambda_k}}{2g_{k,\lambda_k}^{2}}z_{k,\lambda_k}^{2}\\
&= -z_{k,\lambda_k-1}z_{k,\lambda_k} - c_{k,\lambda_k}z_{k,\lambda_k}^{2} - z_{k,\lambda_k}\tilde{W}_{k,\lambda_k}^{T}\phi_{k,\lambda_k}(b_{k,\lambda_k}) + z_{k,\lambda_k}\delta_{k,\lambda_k} + \frac{1}{\beta_{k,\lambda_k}}\tilde{W}_{k,\lambda_k}^{T}\dot{\hat{W}}_{k,\lambda_k} - \frac{\dot{g}_{k,\lambda_k}}{2g_{k,\lambda_k}^{2}}z_{k,\lambda_k}^{2}
\end{aligned} \tag{3-56}
\]

Substituting the adaptive law \(\dot{\hat{W}}_{k,\lambda_k} = \beta_{k,\lambda_k}[z_{k,\lambda_k}\phi_{k,\lambda_k} - \sigma_{k,\lambda_k}\hat{W}_{k,\lambda_k}]\) into (3-56), we have

\[ \dot{v}_{k,\lambda_k} = -z_{k,\lambda_k-1}z_{k,\lambda_k} - c_{k,\lambda_k}z_{k,\lambda_k}^{2} + z_{k,\lambda_k}\delta_{k,\lambda_k} - \sigma_{k,\lambda_k}\tilde{W}_{k,\lambda_k}^{T}\hat{W}_{k,\lambda_k} - \frac{\dot{g}_{k,\lambda_k}}{2g_{k,\lambda_k}^{2}}z_{k,\lambda_k}^{2} \tag{3-57} \]
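In a digital implementation, σ-modification adaptive laws of the form (3-52) are integrated numerically. The sketch below uses a forward-Euler step; the gains and the step size are hypothetical values, not parameters from the thesis.

```python
import numpy as np

def adapt_step(W_hat, phi, z, beta=2.0, sigma=0.1, dt=0.001):
    """One forward-Euler step of W_hat_dot = beta * (phi * z - sigma * W_hat)."""
    W_dot = beta * (phi * z - sigma * W_hat)
    return W_hat + dt * W_dot

# With zero tracking error, only the sigma-modification term acts and the
# weights decay: each entry shrinks by factor (1 - beta*sigma*dt) = 0.9998.
W = np.array([1.0, -2.0])
W_next = adapt_step(W, phi=np.zeros(2), z=0.0)
```

The leakage term \(-\sigma\hat{W}\) is what keeps the weight estimates bounded even when the tracking error excitation vanishes, which is exactly the role it plays in the Lyapunov bound (3-46).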

Let \(-c_{k,\lambda_k} - \frac{\dot{g}_{k,\lambda_k}}{2g_{k,\lambda_k}^{2}} \le -c_{k,\lambda_k} + \bar{c}_{k,\lambda_k}\), where \(\bar{c}_{k,\lambda_k}\) is the upper bound of \(\frac{|\dot{g}_{k,\lambda_k}|}{2g_{k,\lambda_k}^{2}}\). By choosing \(c_{k,\lambda_k}\) such that \(-c_{k,\lambda_k} + \bar{c}_{k,\lambda_k} = -c_{k,\lambda_k1} - c_{k,\lambda_k2} < 0\), where \(c_{k,\lambda_k1}\) and \(c_{k,\lambda_k2}\) are positive constants, we have

\[ \dot{v}_{k,\lambda_k} \le -z_{k,\lambda_k-1}z_{k,\lambda_k} - c_{k,\lambda_k1}z_{k,\lambda_k}^{2} - c_{k,\lambda_k2}z_{k,\lambda_k}^{2} + z_{k,\lambda_k}\delta_{k,\lambda_k} - \sigma_{k,\lambda_k}\tilde{W}_{k,\lambda_k}^{T}\hat{W}_{k,\lambda_k} \tag{3-58} \]

Using the fact

\[ -c_{k,\lambda_k2}z_{k,\lambda_k}^{2} + \delta_{k,\lambda_k}z_{k,\lambda_k} \le \frac{\delta_{k,\lambda_k}^{2}}{4c_{k,\lambda_k2}} \tag{3-59} \]

then

\[ \dot{v}_{k,\lambda_k} \le -z_{k,\lambda_k-1}z_{k,\lambda_k} - c_{k,\lambda_k1}z_{k,\lambda_k}^{2} + \frac{\delta_{k,\lambda_k}^{2}}{4c_{k,\lambda_k2}} - \sigma_{k,\lambda_k}\tilde{W}_{k,\lambda_k}^{T}\hat{W}_{k,\lambda_k} \tag{3-60} \]

Using the fact

\[ -\sigma_{k,\lambda_k}\tilde{W}_{k,\lambda_k}^{T}\hat{W}_{k,\lambda_k} \le -\frac{\sigma_{k,\lambda_k}}{2}\|\tilde{W}_{k,\lambda_k}\|^{2} + \frac{\sigma_{k,\lambda_k}}{2}\|W_{k,\lambda_k}^{*}\|^{2} \tag{3-61} \]

then

\[ \dot{v}_{k,\lambda_k} \le -z_{k,\lambda_k-1}z_{k,\lambda_k} - c_{k,\lambda_k1}z_{k,\lambda_k}^{2} + \frac{\delta_{k,\lambda_k}^{2}}{4c_{k,\lambda_k2}} - \frac{\sigma_{k,\lambda_k}}{2}\|\tilde{W}_{k,\lambda_k}\|^{2} + \frac{\sigma_{k,\lambda_k}}{2}\|W_{k,\lambda_k}^{*}\|^{2} \tag{3-62} \]

Theorem 3-1: Consider the system (3-1). If Assumptions 1 and 2 hold, the control input is given by (3-51), the virtual controllers by (3-10) and (3-32), the adaptive laws by (3-8), (3-30), and (3-52), and the Lyapunov function satisfies \(V \le 2\psi\) for any positive constant \(\psi\) [31], then all the signals in the closed-loop system remain bounded.

Proof: Define the Lyapunov function

\[ V_k = \sum_{i=1}^{\lambda_k} v_{k,i} \tag{3-63} \]

The derivative of \(V_k\) is

\[ \dot{V}_k = \sum_{i=1}^{\lambda_k} \dot{v}_{k,i} \tag{3-64} \]

By combining (3-25), (3-47), and (3-62), we have

\[ \dot{V}_k \le -\sum_{i=1}^{\lambda_k} c_{k,i1}z_{k,i}^{2} + \sum_{i=1}^{\lambda_k-1}\frac{s_{k,i}^{*2}}{4c_{k,i2}} + \frac{\delta_{k,\lambda_k}^{2}}{4c_{k,\lambda_k2}} - \sum_{i=1}^{\lambda_k-1} p_{k,i}e_{k,i}^{2} + \sum_{i=1}^{\lambda_k-1}\frac{A_{k,i}^{2}}{4q_{k,i}} - \sum_{i=1}^{\lambda_k}\frac{\sigma_{k,i}}{2}\|\tilde{W}_{k,i}\|^{2} + \sum_{i=1}^{\lambda_k}\frac{\sigma_{k,i}}{2}\|W_{k,i}^{*}\|^{2} \tag{3-65} \]

Let

\[ \mu_k = \sum_{i=1}^{\lambda_k-1}\frac{s_{k,i}^{*2}}{4c_{k,i2}} + \frac{\delta_{k,\lambda_k}^{2}}{4c_{k,\lambda_k2}} + \sum_{i=1}^{\lambda_k-1}\frac{A_{k,i}^{2}}{4q_{k,i}} + \sum_{i=1}^{\lambda_k}\frac{\sigma_{k,i}}{2}\|W_{k,i}^{*}\|^{2} \tag{3-66} \]

Then

\[ \dot{V}_k \le -\sum_{i=1}^{\lambda_k} c_{k,i1}z_{k,i}^{2} - \sum_{i=1}^{\lambda_k-1} p_{k,i}e_{k,i}^{2} - \sum_{i=1}^{\lambda_k}\frac{\sigma_{k,i}}{2}\|\tilde{W}_{k,i}\|^{2} + \mu_k \tag{3-67} \]

Let \(\eta_k\) satisfy \(\frac{\eta_k}{2g_{k,i}} < c_{k,i1}\), \(\eta_k\lambda_{\max}\{\beta_{k,i}^{-1}\} < \sigma_{k,i}\), and \(\frac{\eta_k}{2} < p_{k,i}\). Then

\[ \dot{V}_k \le -\sum_{i=1}^{\lambda_k}\frac{\eta_k}{2g_{k,i}}z_{k,i}^{2} - \sum_{i=1}^{\lambda_k-1}\frac{\eta_k}{2}e_{k,i}^{2} - \sum_{i=1}^{\lambda_k}\frac{\eta_k}{2\beta_{k,i}}\|\tilde{W}_{k,i}\|^{2} + \mu_k \tag{3-68} \]

\[ \le -\eta_k\sum_{i=1}^{\lambda_k}\Big(\frac{1}{2g_{k,i}}z_{k,i}^{2} + \frac{1}{2\beta_{k,i}}\|\tilde{W}_{k,i}\|^{2}\Big) - \sum_{i=1}^{\lambda_k-1}\frac{\eta_k}{2}e_{k,i}^{2} + \mu_k \le -\eta_k V_k + \mu_k \tag{3-69} \]

Let \(V = \sum_{k=1}^{n} V_k\). Its derivative is

\[ \dot{V} = \sum_{k=1}^{n}\dot{V}_k < \sum_{k=1}^{n}(-\eta_k V_k + \mu_k) < -\eta V + \mu \tag{3-70} \]

where \(\eta = \min\{\eta_k\}\), \(k = 1, 2, \ldots, n\), and \(\mu\) is a positive constant. Thus, all signals of the closed-loop system are bounded. This completes the proof.

3.3 Simulation Results

This section presents the simulation results of the proposed controller, showing that the tracking error of the closed-loop system can be made arbitrarily small.

Example 3-1: Consider the MIMO nonlinear system

\[
\begin{cases}
\dot{x}_{1,1} = 0.5(x_{1,1}^{2} + x_{2,1}^{2}) + (1 + 0.1x_{1,1}x_{2,1})x_{1,2}\\
\dot{x}_{1,2} = (x_{1,1}x_{1,2} + x_{2,1}x_{2,2}) + (2 + \cos(x_{1,1}x_{2,1}))u_1\\
\dot{x}_{2,1} = x_{1,1}x_{2,1} + (2 + \sin(x_{1,1}x_{2,1}))x_{2,2}\\
\dot{x}_{2,2} = (x_{1,1}x_{1,2} + x_{2,1}x_{2,2} + u_1) + (e^{x_{1,1}-x_{2,1}} + e^{2})u_2\\
y_i = x_{i,1},\quad i = 1,2
\end{cases} \tag{3-71}
\]

The reference model is a van der Pol oscillator [33].

\[
\begin{cases}
\dot{x}_{d1} = x_{d2}\\
\dot{x}_{d2} = -x_{d1} + \beta(1 - x_{d1}^{2})x_{d2}\\
y_{di} = x_{di},\quad i = 1,2
\end{cases} \tag{3-72}
\]

This example is taken from [30], where \(u_1\) and \(u_2\) are the control inputs. The initial state is \([x_{1,1}(0), x_{1,2}(0), x_{2,1}(0), x_{2,2}(0)]^{T} = [1.5, 2, 0.7, 1]^{T}\), and \([x_{d1}(0), x_{d2}(0)]^{T} = [1.5, 0.8]^{T}\). The initial weights are \(\hat{W}_{1,1}(0) = \hat{W}_{1,2}(0) = \hat{W}_{2,1}(0) = \hat{W}_{2,2}(0) = 0\). The design parameters are selected as \(c_{1,1} = 2.5\), \(c_{1,2} = 15\), \(c_{2,1} = 6\), \(c_{2,2} = 25\), \(\Gamma_{1,1} = \Gamma_{1,2} = \Gamma_{2,1} = \Gamma_{2,2} = \mathrm{diag}\{2\}\), and \(\sigma_{1,1} = \sigma_{1,2} = \sigma_{2,1} = \sigma_{2,2} = 0.1\). The first neural network \(\hat{W}_{1,1}^{T}\phi_{1,1}(b_{1,1})\) has 27 centers, which range over the interval \([-2.5,2.5]\times[-2.5,2.5]\times[-2,2]\). The second neural network \(\hat{W}_{1,2}^{T}\phi_{1,2}(b_{1,2})\) has 729 centers, which range over the interval \([-2.5,2.5]\times[-1.5,1.5]\times[-2.5,2.5]\times[-1.5,1.5]\times[-1.9,1.9]\times[-0.2,0.2]\). The third neural network \(\hat{W}_{2,1}^{T}\phi_{2,1}(b_{2,1})\) has 27 centers, which range over the interval \([-2.5,2.5]\times[-2.5,2.5]\times[-2.5,2.5]\). The fourth neural network \(\hat{W}_{2,2}^{T}\phi_{2,2}(b_{2,2})\) has 2187 centers, which range over the interval \([-2.5,2.5]\times[-1.5,1.5]\times[-2.5,2.5]\times[-1.5,1.5]\times[-2,2]\times[-0.3,0.3]\times[-1.7,1.7]\). The state \(x_{1,1}\) of this system and the model reference signal \(x_{d1}\) are shown in Fig. 3-1. The state \(x_{2,1}\) and the model reference signal \(x_{d2}\) are shown in Fig. 3-2. The response of control input \(u_1\) is shown in Fig. 3-3, and that of control input \(u_2\) in Fig. 3-4. The error between the system output \(x_{1,1}\) and the model reference signal \(x_{d1}\) is shown in Fig. 3-5; the error between \(x_{2,1}\) and \(x_{d2}\) is shown in Fig. 3-6. The simulation results show that the states \(x_{1,1}\) and \(x_{2,1}\) track the model reference signals \(x_{d1}\) and \(x_{d2}\) arbitrarily closely.
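The van der Pol reference trajectory in (3-72) can be generated numerically. The sketch below uses a classical RK4 integrator with an assumed \(\beta = 0.2\) and step size of 1 ms; the thesis does not state its \(\beta\) value or integration method here, so these are placeholders.

```python
import numpy as np

def van_der_pol(state, beta=0.2):
    """Right-hand side of the reference model (3-72)."""
    xd1, xd2 = state
    return np.array([xd2, -xd1 + beta * (1.0 - xd1**2) * xd2])

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

state = np.array([1.5, 0.8])     # [xd1(0), xd2(0)] as in Example 3-1
for _ in range(20000):           # integrate 20 s with dt = 1 ms
    state = rk4_step(van_der_pol, state, 1e-3)
```

Because the van der Pol oscillator settles onto a bounded limit cycle, the reference signals \(x_{d1}\), \(x_{d2}\) stay bounded, which is the standing assumption on the reference trajectory in the stability analysis.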

Fig. 3-1 State \(x_{1,1}\) and model reference signal \(x_{d1}\)

Fig. 3-2 State \(x_{2,1}\) and model reference signal \(x_{d2}\)

Fig. 3-3 Control input \(u_1\)

Fig. 3-4 Control input \(u_2\)

Fig. 3-5 Error between system output \(x_{1,1}\) and model reference signal \(x_{d1}\)

Fig. 3-6 Error between system output \(x_{2,1}\) and model reference signal \(x_{d2}\)

Example 3-2: Consider the MIMO nonlinear system

\[
\begin{cases}
\dot{x}_{1,1} = 0.5(x_{1,1}^{2} + x_{2,1}^{2} + x_{2,2}^{2}) + (3 + 0.01x_{1,1}x_{2,1}x_{2,2})x_{1,2}\\
\dot{x}_{1,2} = (x_{1,1}x_{1,2} + x_{2,1}x_{2,2})x_{2,3} + (2 + \cos(x_{1,1}x_{2,1}x_{2,2}))u_1\\
\dot{x}_{2,1} = x_{2,1} + (2 + \sin(x_{2,1}))x_{2,2}\\
\dot{x}_{2,2} = (x_{1,1}x_{1,2} + x_{2,1}x_{2,2}) + (2 + \cos(x_{1,1}x_{2,1}x_{2,2}))x_{2,3}\\
\dot{x}_{2,3} = (x_{1,1}x_{1,2} + x_{2,1}x_{2,2} + x_{2,3} + u_1) + (5 + e^{x_{1,1}-x_{2,1}} + x_{2,2}e^{2})u_2\\
y_i = x_{i,1},\quad i = 1,2
\end{cases} \tag{3-73}
\]

The reference model is the same van der Pol oscillator as in (3-72). This example is taken from [30], where \(u_1\) and \(u_2\) are the control inputs. The initial state is \([x_{1,1}(0), x_{1,2}(0), x_{2,1}(0), x_{2,2}(0), x_{2,3}(0)]^{T} = [1, 0.2, 0.7, 0.1, 0.2]^{T}\), and \([x_{d1}(0), x_{d2}(0)]^{T} = [1.5, 0.8]^{T}\). The initial weights are \(\hat{W}_{1,1}(0) = \hat{W}_{1,2}(0) = \hat{W}_{2,1}(0) = \hat{W}_{2,2}(0) = \hat{W}_{2,3}(0) = 0\). The design parameters are selected as \(c_{1,1} = 2.5\), \(c_{1,2} = 10\), \(c_{2,1} = 10\), \(c_{2,2} = 23\), \(c_{2,3} = 32\), \(\beta_{1,1} = \beta_{1,2} = \beta_{2,1} = \beta_{2,2} = \beta_{2,3} = \mathrm{diag}\{2\}\), and \(\sigma_{1,1} = \sigma_{1,2} = \sigma_{2,1} = \sigma_{2,2} = \sigma_{2,3} = 0.1\). The first neural network \(\hat{W}_{1,1}^{T}\phi_{1,1}(b_{1,1})\) has 81 centers, which range over the interval \([-2.5,2.5]\times[-2.5,2.5]\times[-2.5,2.5]\times[-2,2]\). The second neural network \(\hat{W}_{1,2}^{T}\phi_{1,2}(b_{1,2})\) has 2187 centers, which range over \([-2.5,2.5]\times[-1.5,1.5]\times[-2.5,2.5]\times[-1.5,1.5]\times[-2.5,2.5]\times[-2,2]\times[-2,2]\). The third neural network \(\hat{W}_{2,1}^{T}\phi_{2,1}(b_{2,1})\) has 9 centers, which range over \([-2.5,2.5]\times[-2.5,2.5]\). The fourth neural network \(\hat{W}_{2,2}^{T}\phi_{2,2}(b_{2,2})\) has 243 centers, which range over \([-2.5,2.5]\times[-2.5,2.5]\times[-2.5,2.5]\times[-2,2]\times[-2,2]\). The fifth neural network \(\hat{W}_{2,3}^{T}\phi_{2,3}(b_{2,3})\) has 19683 centers, which range over \([-2.5,2.5]\times[-1.5,1.5]\times[-2.5,2.5]\times[-1.5,1.5]\times[-2,2]\times[-2.5,2.5]\times[-2,2]\times[-2,2]\times[-2,2]\). The state \(x_{1,1}\) of this system and the model reference signal \(x_{d1}\) are shown in Fig. 3-7. The state \(x_{2,1}\) and the model reference signal \(x_{d2}\) are shown in Fig. 3-8. The response of control input \(u_1\) is shown in Fig. 3-9, and that of control input \(u_2\) in Fig. 3-10. The error between the system output \(x_{1,1}\) and the model reference signal \(x_{d1}\) is shown in Fig. 3-11; the error between \(x_{2,1}\) and \(x_{d2}\) is shown in Fig. 3-12. The simulation results show that the states \(x_{1,1}\) and \(x_{2,1}\) track the model reference signals \(x_{d1}\) and \(x_{d2}\) arbitrarily closely.

Fig. 3-7 State \(x_{1,1}\) and model reference signal \(x_{d1}\)

Fig. 3-8 State \(x_{2,1}\) and model reference signal \(x_{d2}\)

Fig. 3-9 Control input \(u_1\)

Fig. 3-10 Control input \(u_2\)

Fig. 3-11 Error between system output \(x_{1,1}\) and model reference signal \(x_{d1}\)

Fig. 3-12 Error between system output \(x_{2,1}\) and model reference signal \(x_{d2}\)

Chapter 4
RBFNN Adaptive Backstepping Controllers for MIMO Nonaffine Nonlinear Systems

In this chapter, an RBFNN adaptive backstepping scheme with first-order filters is proposed for nonaffine nonlinear systems. The control scheme incorporates the adaptive neural backstepping design technique with a first-order filter at each step of the backstepping design to avoid the higher-order derivative problem generated by the backstepping design. This problem may create an unpredictable and unfavorable influence on control performance because higher-order derivative term errors are introduced into the neural approximation model. Finally, simulation results demonstrate that the output tracking error between the plant output and the desired reference can be made arbitrarily small.

4.1 Problem Formulation

The model of an uncertain MIMO nonaffine block-triangular system can be described as shown in (4-1).

\[
\begin{cases}
\dot{x}_{1,1} = f_{1,1}(X_{1,1},\ldots,X_{n,1-\lambda_1+\lambda_n}, x_{1,2})\\
\dot{x}_{1,i_1} = f_{1,i_1}(X_{1,i_1},\ldots,X_{n,i_1-\lambda_1+\lambda_n}, x_{1,i_1+1}),\quad 2 \le i_1 < \lambda_1\\
\dot{x}_{1,\lambda_1} = f_{1,\lambda_1}(X, u_1)\\
\quad\vdots\\
\dot{x}_{k,1} = f_{k,1}(X_{1,1-\lambda_k+\lambda_1},\ldots,X_{n,1-\lambda_k+\lambda_n}, x_{k,2})\\
\dot{x}_{k,i_k} = f_{k,i_k}(X_{1,i_k-\lambda_k+\lambda_1},\ldots,X_{n,i_k-\lambda_k+\lambda_n}, x_{k,i_k+1}),\quad 2 \le i_k < \lambda_k\\
\dot{x}_{k,\lambda_k} = f_{k,\lambda_k}(X, u_1 \ldots u_k)\\
y_k = x_{k,1},\quad 1 \le k \le n
\end{cases} \tag{4-1}
\]

where \(x_{k,i_k}\), \(i_k = 1, \ldots, \lambda_k\), are the states of the kth subsystem, \(u_k \in \mathbb{R}\) is the input, \(y_k \in \mathbb{R}\) is the output, and \(\lambda_k\) is the order of the kth subsystem. \(X_{k,i_k} = [x_{k,1}, \ldots, x_{k,i_k}]\) and \(X = [x_{1,1}, \ldots, x_{n,\lambda_n}]\). The functions \(f(\cdot)\) are unknown smooth continuous functions, and it is assumed that

\[ 0 < \partial f_{k,i_k}/\partial x_{k,i_k+1} < \infty, \qquad 0 < \partial f_{k,\lambda_k}/\partial u_k < \infty \]

The control objective is to design an adaptive neural backstepping controller for system (4-1) such that all the signals in the closed loop are uniformly ultimately bounded and the state \(x_{k,1}\) tracks a bounded reference signal \(x_{k,d}\) as closely as possible.

Lemma 4.1 (Mean Value Theorem) [34]: Suppose a function \(f(x)\) is continuous at every point of the closed interval \([\underline{x}, \bar{x}]\) and differentiable at every point of its interior \((\underline{x}, \bar{x})\). Then, for some \(x^{*} \in (\underline{x}, \bar{x})\), we have \(f(\bar{x}) = f(\underline{x}) + f'(x^{*})(\bar{x} - \underline{x})\). The diagram of the mean value theorem is shown in Fig. 4-1.

Fig. 4-1 Diagram of the mean value theorem

4.2 Design of RBFNN_ABC

For the kth subsystem:

Step 1: Define the tracking error \(z_{k,1} = x_{k,1} - x_{k,d}\), where \(x_{k,d}\) is the reference signal of the kth subsystem. The derivative of \(z_{k,1}\) is

\[ \dot{z}_{k,1} = \dot{x}_{k,1} - \dot{x}_{k,d} = f_{k,1} - \dot{x}_{k,d} = F_{k,1}(X_{1,1-\lambda_k+\lambda_1},\ldots,X_{n,1-\lambda_k+\lambda_n}, x_{k,2}, \dot{x}_{k,d}) \tag{4-2} \]

Let a new function \(\psi_{k,1}(X_{1,1-\lambda_k+\lambda_1},\ldots,X_{n,1-\lambda_k+\lambda_n}, x_{k,2}, \dot{x}_{k,d}, x_{k,d}) = F_{k,1} + c_{k,1}z_{k,1}\), where \(c_{k,1}\) is a positive design constant; then \(\dot{z}_{k,1} = \psi_{k,1} - c_{k,1}z_{k,1}\). There exists an ideal

virtual controller \(\alpha_{k,1}^{*}(X_{1,1-\lambda_k+\lambda_1},\ldots,X_{n,1-\lambda_k+\lambda_n}, \dot{x}_{k,d}, x_{k,d})\) such that

\[ \psi_{k,1}(X_{1,1-\lambda_k+\lambda_1},\ldots,X_{n,1-\lambda_k+\lambda_n}, \alpha_{k,1}^{*}, \dot{x}_{k,d}, x_{k,d}) = 0 \tag{4-3} \]

By using the mean value theorem, we have

\[
\begin{aligned}
\dot{z}_{k,1} &= \psi_{k,1} - c_{k,1}z_{k,1} = \psi_{k,1}(\alpha_{k,1}^{*}) + \frac{\partial\psi_{k,1}}{\partial x_{k,2}}\Big|_{x_{k,2}=x_{k,2}^{\lambda}}(x_{k,2} - \alpha_{k,1}^{*}) - c_{k,1}z_{k,1}\\
&= 0 + \frac{\partial(F_{k,1} + c_{k,1}z_{k,1})}{\partial x_{k,2}}\Big|_{x_{k,2}=x_{k,2}^{\lambda}}(x_{k,2} - \alpha_{k,1}^{*}) - c_{k,1}z_{k,1}\\
&= \frac{\partial f_{k,1}}{\partial x_{k,2}}\Big|_{x_{k,2}=x_{k,2}^{\lambda}}(x_{k,2} - \alpha_{k,1}^{*}) - c_{k,1}z_{k,1}
= g_{k,1}^{\lambda}(x_{k,2} - \alpha_{k,1}^{*}) - c_{k,1}z_{k,1}
\end{aligned} \tag{4-4}
\]

where \(x_{k,2}^{\lambda}\) is a point between \(\alpha_{k,1}^{*}\) and \(x_{k,2}\) such that \(\big(\psi_{k,1}(x_{k,2}) - \psi_{k,1}(\alpha_{k,1}^{*})\big)/(x_{k,2} - \alpha_{k,1}^{*}) = \partial\psi_{k,1}/\partial x_{k,2}\big|_{x_{k,2}=x_{k,2}^{\lambda}}\), and \(g_{k,1}^{\lambda} = \partial f_{k,1}/\partial x_{k,2}\big|_{x_{k,2}=x_{k,2}^{\lambda}}\).

By employing an RBF neural network \(W_{k,1}^{*T}\phi_{k,1}(b_{k,1})\) to approximate \(\alpha_{k,1}^{*}\), \(\alpha_{k,1}^{*}\) can be expressed as

\[ \alpha_{k,1}^{*} = W_{k,1}^{*T}\phi_{k,1}(b_{k,1}) - \delta_{k,1} = W_{k,1}^{*T}\phi_{k,1}(b_{k,1}) - c_{k,1}z_{k,1} + c_{k,1}z_{k,1} - \delta_{k,1} = W_{k,1}^{*T}\phi_{k,1}(b_{k,1}) - c_{k,1}z_{k,1} + \tau_{k,1} \tag{4-5} \]

where \(\tau_{k,1} = c_{k,1}z_{k,1} - \delta_{k,1}\) is the signal error, \(W_{k,1}^{*T}\) denotes the ideal constant weights, \(\phi_{k,1}\) is the basis function, \(b_{k,1} = [X_{1,1-\lambda_k+\lambda_1},\ldots,X_{n,1-\lambda_k+\lambda_n}, \dot{x}_{k,d}, x_{k,d}]\) denotes the RBF input, and \(\delta_{k,1}\) is the approximation error. The virtual controller \(\alpha_{k,1}\) is defined as

\[ \alpha_{k,1} = \hat{W}_{k,1}^{T}\phi_{k,1}(b_{k,1}) - c_{k,1}z_{k,1} \tag{4-6} \]

where \(\hat{W}_{k,1}^{T}\) is the estimate of \(W_{k,1}^{*T}\).
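The mean value theorem of Lemma 4.1, which justifies the expansion in (4-4), can be checked numerically. As a quick illustration (not part of the thesis): for \(f(x) = x^2\) the mean value point \(x^{*}\) is exactly the midpoint of the interval, since \(b^2 - a^2 = (a+b)(b-a)\) and \(f'(x) = 2x\).

```python
def mvt_point_quadratic(a, b):
    """For f(x) = x**2, f(b) - f(a) = f'(x*) * (b - a) holds with
    x* = (a + b) / 2, because b**2 - a**2 = (a + b) * (b - a)."""
    return (a + b) / 2.0

f = lambda x: x**2
fprime = lambda x: 2.0 * x

a, b = 1.0, 3.0
x_star = mvt_point_quadratic(a, b)
# Verify f(b) = f(a) + f'(x*) * (b - a):
assert abs(f(b) - (f(a) + fprime(x_star) * (b - a))) < 1e-12
```

For a general smooth \(f\) the point \(x^{*}\) is only guaranteed to exist somewhere in the open interval; the design in (4-4) never needs to compute it, only to use the sign and boundedness of the resulting gain \(g_{k,1}^{\lambda}\).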

The adaptive law for \(\hat{W}_{k,1}^{T}\) is

\[ \dot{\hat{W}}_{k,1} = \beta_{k,1}[-\phi_{k,1}(b_{k,1})z_{k,1} - \sigma_{k,1}\hat{W}_{k,1}] \tag{4-7} \]

where \(\sigma_{k,1} > 0\) and \(\beta_{k,1} > 0\) are design constants. Let \(\hat{W}_{k,1}^{T}\phi_{k,1}(b_{k,1})\) pass through a first-order filter to obtain \(\gamma_{k,1}\). Thus, we have

\[ \xi_{k,1}\dot{\gamma}_{k,1} + \gamma_{k,1} = \hat{W}_{k,1}^{T}\phi_{k,1}(b_{k,1}) \tag{4-8} \]

where \(\xi_{k,1}\) is the time constant. Then the virtual controller \(\alpha_{k,1}\) is redefined as

\[ \alpha_{k,1} = -c_{k,1}z_{k,1} + \gamma_{k,1} \tag{4-9} \]

Define the estimation error as

\[ \tilde{W}_{k,1} = \hat{W}_{k,1} - W_{k,1}^{*} \tag{4-10} \]

Then, we have

\[
\begin{aligned}
\alpha_{k,1} - \alpha_{k,1}^{*} &= \hat{W}_{k,1}^{T}\phi_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1} - c_{k,1}z_{k,1} - W_{k,1}^{*T}\phi_{k,1} - \tau_{k,1} + c_{k,1}z_{k,1}\\
&= \tilde{W}_{k,1}^{T}\phi_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1} - \tau_{k,1}
= \tilde{W}_{k,1}^{T}\phi_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1} + \delta_{k,1} - c_{k,1}z_{k,1}
\end{aligned} \tag{4-11}
\]

\[
\begin{aligned}
\dot{z}_{k,1} &= g_{k,1}^{\lambda}(z_{k,2} + \alpha_{k,1} - \alpha_{k,1}^{*}) - c_{k,1}z_{k,1}\\
&= g_{k,1}^{\lambda}\Big(z_{k,2} + \tilde{W}_{k,1}^{T}\phi_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1} + \delta_{k,1} - c_{k,1}z_{k,1} - \frac{1}{g_{k,1}^{\lambda}}c_{k,1}z_{k,1}\Big)\\
&= g_{k,1}^{\lambda}\big(z_{k,2} + \tilde{W}_{k,1}^{T}\phi_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1} + \delta_{k,1} - c_{k,1}^{\lambda}z_{k,1}\big)
\end{aligned} \tag{4-12}
\]

where \(c_{k,1}^{\lambda} = \frac{1}{g_{k,1}^{\lambda}}c_{k,1} + c_{k,1}\).

Define the error between \(\hat{W}_{k,1}^{T}\phi_{k,1}(b_{k,1})\) and \(\gamma_{k,1}\) as

\[ e_{k,1} = \gamma_{k,1} - \hat{W}_{k,1}^{T}\phi_{k,1}(b_{k,1}) \tag{4-13} \]

The derivative of \(e_{k,1}\) is

\[ \dot{e}_{k,1} = \dot{\gamma}_{k,1} - \dot{\hat{W}}_{k,1}^{T}\phi_{k,1}(b_{k,1}) - \hat{W}_{k,1}^{T}\dot{\phi}_{k,1}(b_{k,1}) = -\frac{e_{k,1}}{\xi_{k,1}} + A_{k,1} \tag{4-14} \]

where

\[ A_{k,1} = -\Big(\dot{\hat{W}}_{k,1}^{T}\phi_{k,1}(b_{k,1}) + \hat{W}_{k,1}^{T}\frac{\partial\phi_{k,1}}{\partial x_{1,1-\lambda_k+\lambda_1}}\dot{x}_{1,1-\lambda_k+\lambda_1} + \cdots + \hat{W}_{k,1}^{T}\frac{\partial\phi_{k,1}}{\partial x_{n,1-\lambda_k+\lambda_n}}\dot{x}_{n,1-\lambda_k+\lambda_n} + \hat{W}_{k,1}^{T}\frac{\partial\phi_{k,1}}{\partial\dot{x}_{k,d}}\ddot{x}_{k,d}\Big) \tag{4-15} \]

is a continuous function. Consider the Lyapunov function

\[ v_{k,1} = \frac{1}{2g_{k,1}^{\lambda}}z_{k,1}^{2} + \frac{1}{2\beta_{k,1}}\tilde{W}_{k,1}^{T}\tilde{W}_{k,1} + \frac{1}{2}e_{k,1}^{2} \tag{4-16} \]

The derivative of \(v_{k,1}\) can be found as follows:

\[
\begin{aligned}
\dot{v}_{k,1} &= \frac{1}{g_{k,1}^{\lambda}}\dot{z}_{k,1}z_{k,1} + \frac{1}{\beta_{k,1}}\tilde{W}_{k,1}^{T}\dot{\hat{W}}_{k,1} - \frac{\dot{g}_{k,1}^{\lambda}}{2(g_{k,1}^{\lambda})^{2}}z_{k,1}^{2} + e_{k,1}\dot{e}_{k,1}\\
&= z_{k,1}z_{k,2} - c_{k,1}^{\lambda}z_{k,1}^{2} + z_{k,1}\tilde{W}_{k,1}^{T}\phi_{k,1} + z_{k,1}(\delta_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1}) + \frac{1}{\beta_{k,1}}\tilde{W}_{k,1}^{T}\dot{\hat{W}}_{k,1} - \frac{\dot{g}_{k,1}^{\lambda}}{2(g_{k,1}^{\lambda})^{2}}z_{k,1}^{2} - \frac{e_{k,1}^{2}}{\xi_{k,1}} + e_{k,1}A_{k,1}
\end{aligned} \tag{4-17}
\]

Substituting the adaptive law \(\dot{\hat{W}}_{k,1} = \beta_{k,1}[-z_{k,1}\phi_{k,1} - \sigma_{k,1}\hat{W}_{k,1}]\) into (4-17), we have

\[ \dot{v}_{k,1} = z_{k,1}z_{k,2} - c_{k,1}^{\lambda}z_{k,1}^{2} + z_{k,1}(\delta_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1}) - \sigma_{k,1}\tilde{W}_{k,1}^{T}\hat{W}_{k,1} - \frac{\dot{g}_{k,1}^{\lambda}}{2(g_{k,1}^{\lambda})^{2}}z_{k,1}^{2} - \frac{e_{k,1}^{2}}{\xi_{k,1}} + e_{k,1}A_{k,1} \tag{4-18} \]

Let \(-c_{k,1}^{\lambda} - \frac{\dot{g}_{k,1}^{\lambda}}{2(g_{k,1}^{\lambda})^{2}} \le -c_{k,1}^{\lambda} + \bar{c}_{k,1}\), where \(\bar{c}_{k,1}\) is the upper bound of \(\frac{|\dot{g}_{k,1}^{\lambda}|}{2(g_{k,1}^{\lambda})^{2}}\). By choosing \(c_{k,1}^{\lambda}\) such that \(-c_{k,1}^{\lambda} + \bar{c}_{k,1} = -c_{k,11} - c_{k,12} < 0\), where \(c_{k,11}\) and \(c_{k,12}\) are positive constants, we have

\[ \dot{v}_{k,1} \le z_{k,1}z_{k,2} - c_{k,11}z_{k,1}^{2} - c_{k,12}z_{k,1}^{2} + (\delta_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1})z_{k,1} - \sigma_{k,1}\tilde{W}_{k,1}^{T}\hat{W}_{k,1} - \frac{e_{k,1}^{2}}{\xi_{k,1}} + e_{k,1}A_{k,1} \tag{4-19} \]

Let \(\delta_{k,1} - \xi_{k,1}\dot{\gamma}_{k,1} = s_{k,1}\), where \(s_{k,1}\) has an upper bound \(s_{k,1}^{*}\).
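The first-order filter (4-8), which produces \(\gamma\) from the network output \(\hat{W}^{T}\phi(b)\) without differentiating the virtual control analytically, is easy to discretize. The sketch below is an assumed Euler implementation, not code from the thesis; the time constant and step size are placeholders.

```python
def filter_step(gamma, u_in, xi, dt):
    """One forward-Euler step of xi * gamma_dot + gamma = u_in."""
    gamma_dot = (u_in - gamma) / xi
    return gamma + dt * gamma_dot

# For a constant input, gamma converges to u_in with time constant xi.
gamma, xi, dt = 0.0, 0.05, 0.001
for _ in range(5000):            # simulate 5 s, much longer than xi
    gamma = filter_step(gamma, 1.0, xi, dt)
# gamma is now very close to 1.0
```

Note the design trade-off: a smaller \(\xi\) makes \(\gamma\) follow \(\hat{W}^{T}\phi(b)\) more tightly (small filter error \(e\)), but amplifies \(\dot{\gamma}\), which enters the disturbance-like term \(s = \delta - \xi\dot{\gamma}\) bounded in the analysis above.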

Using the fact

\[ -c_{k,12}z_{k,1}^{2} + s_{k,1}z_{k,1} \le \frac{s_{k,1}^{2}}{4c_{k,12}} \le \frac{s_{k,1}^{*2}}{4c_{k,12}} \tag{4-20} \]

and letting \(\frac{1}{\xi_{k,1}} = p_{k,1} + q_{k,1}\), where \(p_{k,1}\) and \(q_{k,1}\) are positive constants,

\[ -\frac{1}{\xi_{k,1}}e_{k,1}^{2} + e_{k,1}A_{k,1} = -(p_{k,1}+q_{k,1})e_{k,1}^{2} + e_{k,1}A_{k,1} = -p_{k,1}e_{k,1}^{2} - q_{k,1}e_{k,1}^{2} + e_{k,1}A_{k,1} \tag{4-21} \]

By completion of squares, we have

\[ -q_{k,1}e_{k,1}^{2} + e_{k,1}A_{k,1} \le -q_{k,1}|e_{k,1}|^{2} + |A_{k,1}||e_{k,1}| \le \frac{A_{k,1}^{2}}{4q_{k,1}} \tag{4-22} \]

then

\[ \dot{v}_{k,1} \le z_{k,1}z_{k,2} - c_{k,11}z_{k,1}^{2} + \frac{s_{k,1}^{*2}}{4c_{k,12}} - p_{k,1}e_{k,1}^{2} + \frac{A_{k,1}^{2}}{4q_{k,1}} - \sigma_{k,1}\tilde{W}_{k,1}^{T}\hat{W}_{k,1} \tag{4-23} \]

Using the fact

\[ -\sigma_{k,1}\tilde{W}_{k,1}^{T}\hat{W}_{k,1} \le -\frac{\sigma_{k,1}}{2}\|\tilde{W}_{k,1}\|^{2} + \frac{\sigma_{k,1}}{2}\|W_{k,1}^{*}\|^{2} \tag{4-24} \]

then

\[ \dot{v}_{k,1} \le z_{k,1}z_{k,2} - c_{k,11}z_{k,1}^{2} + \frac{s_{k,1}^{*2}}{4c_{k,12}} - p_{k,1}e_{k,1}^{2} + \frac{A_{k,1}^{2}}{4q_{k,1}} - \frac{\sigma_{k,1}}{2}\|\tilde{W}_{k,1}\|^{2} + \frac{\sigma_{k,1}}{2}\|W_{k,1}^{*}\|^{2} \tag{4-25} \]

Step \(i_k\) (\(2 \le i_k \le \lambda_k - 1\)): Define the tracking error \(z_{k,i_k} = x_{k,i_k} - \alpha_{k,i_k-1}\). The derivative of \(z_{k,i_k}\) is

\[ \dot{z}_{k,i_k} = \dot{x}_{k,i_k} - \dot{\alpha}_{k,i_k-1} = f_{k,i_k} - \dot{\alpha}_{k,i_k-1} = F_{k,i_k}(X_{1,i_k-\lambda_k+\lambda_1},\ldots,X_{n,i_k-\lambda_k+\lambda_n}, x_{k,i_k+1}, \dot{\alpha}_{k,i_k-1}) \tag{4-26} \]

Define a function \(\psi_{k,i_k} = F_{k,i_k} + c_{k,i_k}z_{k,i_k} + g_{k,i_k}^{\lambda}z_{k,i_k-1}\), where \(c_{k,i_k}\) is a positive design constant; then \(\dot{z}_{k,i_k} = \psi_{k,i_k} - c_{k,i_k}z_{k,i_k} - g_{k,i_k}^{\lambda}z_{k,i_k-1}\). There exists an ideal virtual controller \(\alpha_{k,i_k}^{*}(X_{1,i_k-\lambda_k+\lambda_1},\ldots,X_{n,i_k-\lambda_k+\lambda_n}, \dot{\alpha}_{k,i_k-1})\) such that

\[ \psi_{k,i_k}(X_{1,i_k-\lambda_k+\lambda_1},\ldots,X_{n,i_k-\lambda_k+\lambda_n}, \alpha_{k,i_k}^{*}, \dot{\alpha}_{k,i_k-1}) = 0 \tag{4-27} \]

By using the mean value theorem, we have

\[
\begin{aligned}
\dot{z}_{k,i_k} &= \psi_{k,i_k} - c_{k,i_k}z_{k,i_k} - g_{k,i_k}^{\lambda}z_{k,i_k-1}\\
&= \psi_{k,i_k}(\alpha_{k,i_k}^{*}) + \frac{\partial\psi_{k,i_k}}{\partial x_{k,i_k+1}}\Big|_{x_{k,i_k+1}=x_{k,i_k+1}^{\lambda}}(x_{k,i_k+1} - \alpha_{k,i_k}^{*}) - c_{k,i_k}z_{k,i_k} - g_{k,i_k}^{\lambda}z_{k,i_k-1}\\
&= \frac{\partial f_{k,i_k}}{\partial x_{k,i_k+1}}\Big|_{x_{k,i_k+1}=x_{k,i_k+1}^{\lambda}}(x_{k,i_k+1} - \alpha_{k,i_k}^{*}) - c_{k,i_k}z_{k,i_k} - g_{k,i_k}^{\lambda}z_{k,i_k-1}\\
&= g_{k,i_k}^{\lambda}(x_{k,i_k+1} - \alpha_{k,i_k}^{*}) - c_{k,i_k}z_{k,i_k} - g_{k,i_k}^{\lambda}z_{k,i_k-1}
\end{aligned} \tag{4-28}
\]

where \(x_{k,i_k+1}^{\lambda}\) is a point between \(\alpha_{k,i_k}^{*}\) and \(x_{k,i_k+1}\) such that \(\big(\psi_{k,i_k}(x_{k,i_k+1}) - \psi_{k,i_k}(\alpha_{k,i_k}^{*})\big)/(x_{k,i_k+1} - \alpha_{k,i_k}^{*}) = \partial\psi_{k,i_k}/\partial x_{k,i_k+1}\big|_{x_{k,i_k+1}=x_{k,i_k+1}^{\lambda}}\), and \(g_{k,i_k}^{\lambda} = \partial f_{k,i_k}/\partial x_{k,i_k+1}\big|_{x_{k,i_k+1}=x_{k,i_k+1}^{\lambda}}\). By employing an RBF neural network \(W_{k,i_k}^{*T}\phi_{k,i_k}(b_{k,i_k})\) to approximate \(\alpha_{k,i_k}^{*}\), \(\alpha_{k,i_k}^{*}\) can be expressed as

\[ \alpha_{k,i_k}^{*} = W_{k,i_k}^{*T}\phi_{k,i_k}(b_{k,i_k}) - \delta_{k,i_k} = W_{k,i_k}^{*T}\phi_{k,i_k}(b_{k,i_k}) - c_{k,i_k}z_{k,i_k} + c_{k,i_k}z_{k,i_k} - \delta_{k,i_k} = W_{k,i_k}^{*T}\phi_{k,i_k}(b_{k,i_k}) - c_{k,i_k}z_{k,i_k} + \tau_{k,i_k} \tag{4-29} \]

where \(\tau_{k,i_k} = c_{k,i_k}z_{k,i_k} - \delta_{k,i_k}\) is the signal error, \(W_{k,i_k}^{*T}\) denotes the ideal constant weights, \(\phi_{k,i_k}\) is the basis function, \(b_{k,i_k} = [X_{1,i_k-\lambda_k+\lambda_1},\ldots,X_{n,i_k-\lambda_k+\lambda_n}, \dot{\alpha}_{k,i_k-1}, \alpha_{k,i_k-1}]\) denotes the RBF input, and \(\delta_{k,i_k}\) is the approximation error. The virtual controller \(\alpha_{k,i_k}\) is defined as

\[ \alpha_{k,i_k} = \hat{W}_{k,i_k}^{T}\phi_{k,i_k}(b_{k,i_k}) - c_{k,i_k}z_{k,i_k} \tag{4-30} \]

where \(\hat{W}_{k,i_k}^{T}\) is the estimate of \(W_{k,i_k}^{*T}\). The adaptive law for \(\hat{W}_{k,i_k}^{T}\) is

\[ \dot{\hat{W}}_{k,i_k} = \beta_{k,i_k}[-\phi_{k,i_k}(b_{k,i_k})z_{k,i_k} - \sigma_{k,i_k}\hat{W}_{k,i_k}] \tag{4-31} \]

where \(\sigma_{k,i_k} > 0\) and \(\beta_{k,i_k} > 0\) are design constants. Let \(\hat{W}_{k,i_k}^{T}\phi_{k,i_k}(b_{k,i_k})\) pass through a first-order filter to obtain \(\gamma_{k,i_k}\). Thus, we have

\[ \xi_{k,i_k}\dot{\gamma}_{k,i_k} + \gamma_{k,i_k} = \hat{W}_{k,i_k}^{T}\phi_{k,i_k}(b_{k,i_k}) \tag{4-32} \]

where \(\xi_{k,i_k}\) is the time constant. Then the virtual controller \(\alpha_{k,i_k}\) is redefined as

\[ \alpha_{k,i_k} = -c_{k,i_k}z_{k,i_k} + \gamma_{k,i_k} \tag{4-33} \]

Define

\[ z_{k,i_k+1} = x_{k,i_k+1} - \alpha_{k,i_k} \tag{4-34} \]

Define the estimation error as

\[ \tilde{W}_{k,i_k} = \hat{W}_{k,i_k} - W_{k,i_k}^{*} \tag{4-35} \]

then

\[
\begin{aligned}
\alpha_{k,i_k} - \alpha_{k,i_k}^{*} &= \hat{W}_{k,i_k}^{T}\phi_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k} - c_{k,i_k}z_{k,i_k} - W_{k,i_k}^{*T}\phi_{k,i_k} - \tau_{k,i_k} + c_{k,i_k}z_{k,i_k}\\
&= \tilde{W}_{k,i_k}^{T}\phi_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k} - \tau_{k,i_k}
= \tilde{W}_{k,i_k}^{T}\phi_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k} - c_{k,i_k}z_{k,i_k}
\end{aligned} \tag{4-36}
\]

We have

\[
\begin{aligned}
\dot{z}_{k,i_k} &= g_{k,i_k}^{\lambda}(z_{k,i_k+1} + \alpha_{k,i_k} - \alpha_{k,i_k}^{*}) - c_{k,i_k}z_{k,i_k} - g_{k,i_k}^{\lambda}z_{k,i_k-1}\\
&= g_{k,i_k}^{\lambda}\big(z_{k,i_k+1} + \tilde{W}_{k,i_k}^{T}\phi_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k} - c_{k,i_k}z_{k,i_k}\big) - c_{k,i_k}z_{k,i_k} - g_{k,i_k}^{\lambda}z_{k,i_k-1}\\
&= g_{k,i_k}^{\lambda}\big(z_{k,i_k+1} + \tilde{W}_{k,i_k}^{T}\phi_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k} + \delta_{k,i_k} - c_{k,i_k}^{\lambda}z_{k,i_k} - z_{k,i_k-1}\big)
\end{aligned} \tag{4-37}
\]

where \(c_{k,i_k}^{\lambda} = \frac{1}{g_{k,i_k}^{\lambda}}c_{k,i_k} + c_{k,i_k}\).

Define the error between \(\hat{W}_{k,i_k}^{T}\phi_{k,i_k}(b_{k,i_k})\) and \(\gamma_{k,i_k}\) as

\[ e_{k,i_k} = \gamma_{k,i_k} - \hat{W}_{k,i_k}^{T}\phi_{k,i_k}(b_{k,i_k}) \tag{4-38} \]

The derivative of \(e_{k,i_k}\) is

\[ \dot{e}_{k,i_k} = \dot{\gamma}_{k,i_k} - \dot{\hat{W}}_{k,i_k}^{T}\phi_{k,i_k}(b_{k,i_k}) - \hat{W}_{k,i_k}^{T}\dot{\phi}_{k,i_k}(b_{k,i_k}) = -\frac{e_{k,i_k}}{\xi_{k,i_k}} + A_{k,i_k} \tag{4-39} \]

where

\[ A_{k,i_k} = -\Big(\dot{\hat{W}}_{k,i_k}^{T}\phi_{k,i_k}(b_{k,i_k}) + \hat{W}_{k,i_k}^{T}\frac{\partial\phi_{k,i_k}}{\partial x_{1,i_k-\lambda_k+\lambda_1}}\dot{x}_{1,i_k-\lambda_k+\lambda_1} + \cdots + \hat{W}_{k,i_k}^{T}\frac{\partial\phi_{k,i_k}}{\partial x_{n,i_k-\lambda_k+\lambda_n}}\dot{x}_{n,i_k-\lambda_k+\lambda_n} + \hat{W}_{k,i_k}^{T}\frac{\partial\phi_{k,i_k}}{\partial\dot{\alpha}_{k,i_k-1}}\ddot{\alpha}_{k,i_k-1}\Big) \tag{4-40} \]

is a continuous function.

Consider the Lyapunov function

\[ v_{k,i_k} = \frac{1}{2g_{k,i_k}^{\lambda}}z_{k,i_k}^{2} + \frac{1}{2\beta_{k,i_k}}\tilde{W}_{k,i_k}^{T}\tilde{W}_{k,i_k} + \frac{1}{2}e_{k,i_k}^{2} \tag{4-41} \]

The derivative of \(v_{k,i_k}\) can be found as follows:

\[
\begin{aligned}
\dot{v}_{k,i_k} &= \frac{1}{g_{k,i_k}^{\lambda}}\dot{z}_{k,i_k}z_{k,i_k} + \frac{1}{\beta_{k,i_k}}\tilde{W}_{k,i_k}^{T}\dot{\hat{W}}_{k,i_k} - \frac{\dot{g}_{k,i_k}^{\lambda}}{2(g_{k,i_k}^{\lambda})^{2}}z_{k,i_k}^{2} + e_{k,i_k}\dot{e}_{k,i_k}\\
&= z_{k,i_k}z_{k,i_k+1} - z_{k,i_k}z_{k,i_k-1} - c_{k,i_k}^{\lambda}z_{k,i_k}^{2} + z_{k,i_k}\tilde{W}_{k,i_k}^{T}\phi_{k,i_k} + z_{k,i_k}(\delta_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k})\\
&\quad + \frac{1}{\beta_{k,i_k}}\tilde{W}_{k,i_k}^{T}\dot{\hat{W}}_{k,i_k} - \frac{\dot{g}_{k,i_k}^{\lambda}}{2(g_{k,i_k}^{\lambda})^{2}}z_{k,i_k}^{2} - \frac{e_{k,i_k}^{2}}{\xi_{k,i_k}} + e_{k,i_k}A_{k,i_k}
\end{aligned} \tag{4-42}
\]

Substituting the adaptive law \(\dot{\hat{W}}_{k,i_k} = \beta_{k,i_k}[-z_{k,i_k}\phi_{k,i_k} - \sigma_{k,i_k}\hat{W}_{k,i_k}]\) into (4-42), we have

\[ \dot{v}_{k,i_k} = z_{k,i_k}z_{k,i_k+1} - z_{k,i_k}z_{k,i_k-1} - c_{k,i_k}^{\lambda}z_{k,i_k}^{2} + z_{k,i_k}(\delta_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k}) - \sigma_{k,i_k}\tilde{W}_{k,i_k}^{T}\hat{W}_{k,i_k} - \frac{\dot{g}_{k,i_k}^{\lambda}}{2(g_{k,i_k}^{\lambda})^{2}}z_{k,i_k}^{2} - \frac{e_{k,i_k}^{2}}{\xi_{k,i_k}} + e_{k,i_k}A_{k,i_k} \tag{4-43} \]

Let \(-c_{k,i_k}^{\lambda} - \frac{\dot{g}_{k,i_k}^{\lambda}}{2(g_{k,i_k}^{\lambda})^{2}} \le -c_{k,i_k}^{\lambda} + \bar{c}_{k,i_k}\), where \(\bar{c}_{k,i_k}\) is the upper bound of \(\frac{|\dot{g}_{k,i_k}^{\lambda}|}{2(g_{k,i_k}^{\lambda})^{2}}\). By choosing \(c_{k,i_k}^{\lambda}\) such that \(-c_{k,i_k}^{\lambda} + \bar{c}_{k,i_k} = -c_{k,i_k1} - c_{k,i_k2} < 0\), where \(c_{k,i_k1}\) and \(c_{k,i_k2}\) are positive constants, we have

\[ \dot{v}_{k,i_k} \le z_{k,i_k}z_{k,i_k+1} - z_{k,i_k-1}z_{k,i_k} - c_{k,i_k1}z_{k,i_k}^{2} - c_{k,i_k2}z_{k,i_k}^{2} + z_{k,i_k}(\delta_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k}) - \sigma_{k,i_k}\tilde{W}_{k,i_k}^{T}\hat{W}_{k,i_k} - \frac{e_{k,i_k}^{2}}{\xi_{k,i_k}} + e_{k,i_k}A_{k,i_k} \tag{4-44} \]

Let \(\delta_{k,i_k} - \xi_{k,i_k}\dot{\gamma}_{k,i_k} = s_{k,i_k}\), where \(s_{k,i_k}\) has an upper bound \(s_{k,i_k}^{*}\). Using the fact

\[ -c_{k,i_k2}z_{k,i_k}^{2} + s_{k,i_k}z_{k,i_k} \le \frac{s_{k,i_k}^{2}}{4c_{k,i_k2}} \le \frac{s_{k,i_k}^{*2}}{4c_{k,i_k2}} \tag{4-45} \]

and letting \(\frac{1}{\xi_{k,i_k}} = p_{k,i_k} + q_{k,i_k}\), where \(p_{k,i_k}\) and \(q_{k,i_k}\) are positive constants,
