National Science Council (Executive Yuan) Research Project Report

Online Genetic-Algorithm-Based Fuzzy Neural Networks and Their Applications to Nonlinear System Identification and Control
Project category: □ Individual project □ Integrated project
Project number: NSC-92-2213-E-030-001
Project period: August 1, 2003 to July 31, 2004
Principal investigator: Wei-Yen Wang (王偉彥)
Co-principal investigators: (none)
Project participants: 李宜勳, 鄭智元, 林炳榮, 陳冠銘, 張貞觀, 郭重佑, 藍弘薰

Report type (as specified in the approved budget list): □ Precis report □ Complete report

This report includes the following required attachments:
□ One report on overseas business travel or study
□ One report on business travel or study in mainland China
□ One report on attendance at an international academic conference, with a copy of the presented paper
□ One foreign research report for an international cooperative research project

Handling: except for industry-university cooperative research projects, projects for upgrading industrial technology and cultivating talent, monitored projects, and the cases listed below, this report may be made publicly available immediately.
□ Involves patents or other intellectual property rights; may be made publicly available after □ one year □ two years

Institution: Department of Electronic Engineering, Fu Jen Catholic University
Date: October 27, 2004
□ Final report □ Midterm progress report
ABSTRACT (In Chinese)
In this report, a simplified genetic algorithm (SGA) is proposed to adjust both the weightings of fuzzy neural networks and the control points of their B-spline membership functions (BMFs). Traditional fuzzy neural networks are trained by gradient descent and may fall into local minima during the learning process. The global search capability of genetic algorithms has drawn wide attention in various fields, and many researchers have used genetic algorithms to overcome the problems of conventional gradient descent. However, traditional genetic algorithms suffer from two major drawbacks in the encoding and decoding of a large number of variables (over 100): first, the process requires a large amount of computation; second, it introduces precision errors. The simplified genetic algorithm proposed in this report guarantees that the fitness of the offspring is better than that of the parents by sequentially searching for the crossover point. Chromosomes are composed of real numbers, including both the weightings and the BMF control points of the fuzzy neural network. It is verified in this report that the SGA converges quickly to an optimal fuzzy neural network. In recent years, on-line real-time control has been an important research direction, but GA-based on-line real-time control has seen little further progress because its heavy computational load causes control delays. Exploiting the fast convergence of the SGA, this report designs on-line indirect and direct adaptive controllers to control a robot manipulator and an inverted pendulum system, and good results are obtained in simulations.

Keywords: B-spline function, fuzzy neural network, simplified genetic algorithm, function approximation, adaptive control.
ABSTRACT (In English)
In this report, a novel approach to adjust both the control points of B-spline membership functions (BMFs) and the weightings of fuzzy-neural networks using a simplified genetic algorithm (SGA) is proposed. Fuzzy-neural networks (FNNs) are traditionally trained by gradient-based methods and may fall into a local minimum during the learning process. Genetic algorithms have drawn significant attention in various fields due to their capability of directed random search for global optimization. This motivates the use of genetic algorithms to overcome the problems encountered by conventional learning methods. However, it is well known that the searching speed of conventional genetic algorithms is not desirable; such algorithms are inherently disadvantaged in dealing with a vast number (over 100) of adjustable parameters in the fuzzy-neural networks. In this report, the SGA is proposed by using a sequential-search-based crossover point (SSCP) method, in which a better crossover point is determined and only the gene at the specified crossover point is crossed, as a single point crossover operation. Chromosomes consisting of both the control points of the BMFs and the weightings of the fuzzy-neural networks are coded as an adjustable vector with real-number components and searched by the SGA. Adaptive control applies system identification techniques to obtain a model of a system from input-output data, and GA on-line training is one of the most significant topics of the subject. However, with a traditional GA no further training of the FNN is possible while the plant is operating, since the training process is too slow for on-line use. Because of the use of the SGA, faster convergence of the evolution process in searching for an optimal fuzzy-neural network can be achieved.
Unknown systems identified by the fuzzy-neural networks via the SGA are applied to adaptive fuzzy-neural control, including indirect and direct adaptive control categories, at the end of this report to illustrate the effectiveness and applicability of the proposed method.
Keywords: B-spline membership function, fuzzy neural network, simplified genetic algorithm, function approximation, adaptive control.
CONTENTS
ABSTRACT (In Chinese)
ABSTRACT (In English)
CONTENTS
LIST OF FIGURES AND TABLES
CHAPTER 1 Introduction
CHAPTER 2 Fuzzy Control System
2.1 Fuzzifier
2.2 Defuzzifier
2.3 Fuzzy Rule Base
2.4 Fuzzy Inference
CHAPTER 3 B-spline Fuzzy Neural Network (B-spline FNN)
3.1 Knot Vector and B-spline Curves
3.2 B-spline Membership Function (BMF)
3.3 The Configuration of a B-spline FNN
3.4 A B-spline FNN Inference Method
CHAPTER 4 Design of the FNN Identifiers by the Simplified Genetic Algorithms
4.1 The Simplified Genetic Algorithm
4.2 Basic Concept of GAs
4.3 Evolutionary Processes of the Simplified Genetic Algorithm
4.3.1 Population Initialization
4.3.2 Fitness Function
4.3.5 Mutation Operation
4.4 Pseudo Code for the SGA
CHAPTER 5 INDIRECT ADAPTIVE FUZZY NEURAL NETWORK
5.1 Control Objectives
5.2 Certainty Equivalent Controller
5.3 Supervisory Control
5.4 Simulation Results
CHAPTER 6 DIRECT ADAPTIVE FUZZY-NEURAL CONTROLLER
6.1 System Formulation
6.2 Design of the GODAF Controller
6.3 Simulation Results
CHAPTER 7 Conclusions
LIST OF FIGURES AND TABLES
Figure 2-1 The fuzzy system architecture
Figure 3-1 All knot spans are on the left (first) column and all degree-one basis functions on the second
Figure 3-2 Illustration of fixed number of control points of BMFs of order 2
Figure 3-3 The configuration of a fuzzy neural network
Figure 4-1 Traditional crossover methods and the proposed single point crossover method
Figure 4-2 Pseudo code for the randomly selected crossover point (RSCP) method and the sequential-search-based crossover point (SSCP) method
Figure 4-3 Traditional mutation method and the proposed method
Figure 4-4 Pseudo code of the SGA with the RSCP and the SSCP
Figure 5-1 The overall scheme of SIAFC control
Figure 5-2 The inverted pendulum system
Figure 5-3 The state $x_1(t)$ (dashed line) and its desired value (solid line) for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 1)
Figure 5-4 The state $x_2(t)$ (dashed line) and its desired value $y_m(t) = 0$ (solid line) for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 1)
Figure 5-5 The control $u(t)$ for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 1)
Figure 5-6 Trajectories of the unknown function $f(x_1, x_2)$ and $\hat{f}(x_1, x_2)$ for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 1)
Figure 5-7 … for the initial condition $x(0) = (-\pi/60,\ 0)^T$ in example 1 (case 1)
Figure 5-8 The state $x_1(t)$ (dashed line) and its desired value $y_m(t) = 0.1\sin(t)$ (solid line) for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 2)
Figure 5-9 The state $x_2(t)$ (dashed line) and its desired value $y_m(t) = 0.1\cos(t)$ (solid line) for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 2)
Figure 5-10 The control $u(t)$ for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 2)
Figure 5-11 Trajectories of the unknown function $f(x_1, x_2)$ (solid line) and $\hat{f}(x_1, x_2)$ (dashed line) for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 2)
Figure 5-12 Trajectories of the unknown function $g(x_1)$ (solid line) and $\hat{g}(x_1)$ (dashed line) for the initial condition $x(0) = (-\pi/60,\ 0)^T$ (case 2)
Fig. 6-1 Overall scheme of the proposed GODAF controller
Fig. 6-2 Trajectories of the state $x_1$ and the estimated state $\hat{x}_1$ of case 1 in example 1
Fig. 6-3 Trajectories of the output $x_1$ and $y_m = 0$ of case 1 in example 1
Fig. 6-4 Control input $u$ of case 1 in example 1
Fig. 6-5 Trajectories of the state $x_1$ and the estimated state $\hat{x}_1$ of case 2 in example 1
Fig. 6-6 Trajectories of the output $x_1$ and $y_m = \sin t$ of case 2 in example 1
Fig. 6-7 Control input $u$ of case 2 in example 1
Fig. 6-8 Trajectories of the state $x_1$ and the estimated state $\hat{x}_1$ of case 1 in example 2
Fig. 6-9 Trajectories of the output $x_1$ and $y_m = 0$ of case 1 in example 2
Fig. 6-10 Control input $u$ of case 1 in example 2
Fig. 6-11 Trajectories of the state $x_1$ and the estimated state $\hat{x}_1$ of case 2 in example 2
Fig. 6-12 Trajectories of the output $x_1$ and $y_m = \sin t$ of case 2 in example 2
Chapter 1 Introduction
Many techniques have been reported [1-3] to solve the problem of unknown function identification, and intelligent techniques have also been applied to achieve this goal [4]. The identification method in [1-3] is presented as an optimization problem whose objective function is the total squared error between the output and the reference signal. Since neural networks and fuzzy logic systems are universal approximators [5,6], nonlinear function approximation by these approximators has been widely developed for many practical applications [7,8]. Moreover, many studies [8,9] combining fuzzy logic with neural networks have been developed to improve the efficiency of function approximation. Hence, fuzzy approximators have been developed to obtain the input-output transfer characteristics of nonlinear functions. Traditionally, fuzzy logic systems and/or neural networks are trained by gradient-based methods, which may fall into a local minimum during the learning process. Such techniques also suffer from many difficulties, such as the choice of the starting guess and convergence. Moreover, since the cost function generally has multiple local minima, attainment of the global optimum by these nonlinear optimization techniques is difficult [10].
In fuzzy set theory, the selection of appropriate membership functions has been an important issue for engineering problems [8]. It is important that the fuzzy membership functions are updated iteratively and automatically, because a change in fuzzy membership functions may alter the performance of the fuzzy logic system significantly. Several researchers have proposed many methods to adjust the parameters of triangular or Gaussian membership function [11-14]. The fuzzy B-spline membership functions (BMFs) constructed in [10] possess the property of local control and have been successfully applied to fuzzy-neural control [15]. This is mainly due to the local control property of B-spline curve, i.e., the BMF has the elegant property of being locally tuned in a learning process. Several learning algorithms have been proposed in [7,15,16] to deal with the tuning of the BMFs.
To search for globally optimal solutions, genetic algorithms [17-33] have drawn significant attention in various fields due to their capability of directed random search. Thanks to a probabilistic search procedure based on the mechanics of natural selection and natural genetics, genetic algorithms are highly effective and robust over a broad spectrum of problems [34-36]. This motivates the use of genetic algorithms [37-42] to overcome the problems encountered by the conventional learning methods for fuzzy-neural networks. In traditional GAs, the natural parameter set of the optimization problem needs to be coded as a finite-length string. The coding operation maps a real number to a fixed-length binary string. However, the coding process loses some information due to truncation. Furthermore, one faces a trade-off between the length of the coding string and the resolution of the parameter value: to increase the resolution, a longer binary string must be chosen, which slows down the convergence rate. It is well known that the searching speed of conventional genetic algorithms is not desirable, and such algorithms are inherently disadvantaged in dealing with a vast number (over 100) of adjustable parameters in the fuzzy-neural networks. Thus, a framework to automatically tune the adjustable parameters of the fuzzy-neural networks to approximate nonlinear functions using a simplified genetic algorithm (SGA) is proposed in this report.
To start with, chromosomes consisting of adjustable parameters of the fuzzy-neural networks are coded as a vector with real number components and searched by the SGA. The fitness value of each chromosome is obtained via a mapping from the error function, which is the difference between the outputs of the fuzzy-neural network and the desired outputs. Thus, an optimal set of adjustable parameters of the fuzzy-neural network can be obtained by repeating genetic operations, i.e., crossover and mutation, so that an optimal fuzzy neural network satisfying an error bound condition can be evolutionarily obtained. Because of the use of the SGA, faster convergence of the evolution process to search for an optimal fuzzy neural network can be achieved.
In recent years, the adaptive control of nonlinear systems has been an exciting research area. Control schemes via feedback linearization for nonlinear systems have been proposed in [43-46]. The fundamental idea of feedback linearization is to transform a nonlinear system dynamics into a linear one, so that linear control or fuzzy control techniques can be used to acquire the desired performance. Recently, adaptive fuzzy neural control systems [47,48] have been proposed to systematically incorporate expert information, with stability guaranteed by the universal approximation theorem [49]. Systems with a high degree of nonlinear uncertainty, such as chemical processes and aircraft, are very difficult to control using conventional control theory, yet human operators can often control them successfully. This rests on the fact that fuzzy logic and/or neural network systems are capable of uniformly approximating nonlinear functions. Adaptive control is a popular technique of system identification and controller design, obtaining a model of a system from input-output data and designing a controller from it. Following conventional adaptive control, adaptive fuzzy neural control has direct and indirect adaptive control categories [48]. Direct adaptive fuzzy neural control has been discussed in [47] and [48], in which the adaptive FNN controller uses a fuzzy logic system as a controller.
In this report, we propose a method for designing both SGA-based indirect and direct adaptive fuzzy-neural controllers for unknown nonlinear dynamical systems, whether or not the system states are measurable. The free parameters of the adaptive fuzzy-neural controller can be tuned on-line via the SGA approach. A supervisory controller is also incorporated into both adaptive control categories. If the closed-loop system controlled by the adaptive controller tends to become unstable, especially in the transient period, the supervisory controller is activated to work with the adaptive controller to stabilize the closed-loop system; on the other hand, if the adaptive controller works well, the supervisory controller is deactivated.
The report is organized as follows. Chapter 2 describes fuzzy theory and the components of a fuzzy control system. Chapter 3 describes B-spline membership functions and the construction of the B-spline fuzzy-neural network. Chapter 4 gives details of the proposed simplified genetic algorithm (SGA). The design of indirect adaptive FNN controllers is presented in Chapter 5, where an example illustrates the effectiveness of the approach. In Chapter 6, a GA-based output-feedback direct adaptive fuzzy-neural controller (GODAF) is proposed for output-feedback systems and its stability is demonstrated. Chapter 7 concludes the report.
CHAPTER 2 FUZZY CONTROL SYSTEM
Fuzzy control was first introduced in the early 1970s [50] in an attempt to design controllers for systems that are structurally difficult to model due to naturally existing nonlinearities and model complexities.
Since Mamdani and his co-workers successfully applied the fuzzy logic controller (FLC) to steam engine control, fuzzy control theory has been widely applied to many fields [51][52][53]. The characteristic of the FLC is that it adopts a linguistic control strategy to control plants without knowing their mathematical models. The linguistic control strategy of the FLC is constructed from operator experience and/or expert knowledge. Therefore, the FLC can control complex and ill-defined industrial processes as well as skilled operators do. Experience shows that the FLC yields results superior to those obtained by traditional control algorithms in complex situations where the system model or parameters are difficult to obtain.
Typically, a fuzzy control system consists of four components: a fuzzification interface, a rule base, an inference engine, and a defuzzification interface, as shown in Fig. 2-1. More detailed descriptions of each component are given below:
2.1 Fuzzifiers
Since in most applications the inputs and outputs of the fuzzy system are real-valued numbers, the fuzzifier is defined as a mapping from a real-valued point $x^* \in U \subset R^n$ to a fuzzy set $A$ in $U$. Three common fuzzifiers are presented in this report:

1. Singleton fuzzifier: the singleton fuzzifier maps a real-valued point $x^* \in U$ into the fuzzy singleton $A$ in $U$, whose membership value is 1 at $x^*$ and 0 at all other points in $U$, i.e.,

$$\mu_A(x) = \begin{cases} 1, & x = x^* \\ 0, & \text{otherwise} \end{cases}$$  (2-1)

2. Triangular fuzzifier: the triangular fuzzifier maps $x^* \in U$ into the fuzzy set $A$ in $U$ with the membership function

$$\mu_A(x) = \begin{cases} \left(1 - \dfrac{|x_1 - x_1^*|}{b_1}\right) \star \cdots \star \left(1 - \dfrac{|x_n - x_n^*|}{b_n}\right), & |x_i - x_i^*| \le b_i,\ i = 1, \dots, n \\ 0, & \text{otherwise} \end{cases}$$  (2-2)
Fig. 2-1: The fuzzy system architecture. (The input X in U passes through the fuzzifier to fuzzy sets in U; the fuzzy inference engine, driven by the fuzzy rule base, maps them to fuzzy sets in V; the defuzzifier produces the output Y in V.)
where the $b_i$ are positive parameters and the operator $\star$ is often chosen as the algebraic product or min.

3. Gaussian fuzzifier: the Gaussian fuzzifier maps $x^* \in U$ into the fuzzy set $A$ in $U$ with the membership function

$$\mu_A(x) = e^{-\left((x_1 - x_1^*)/\delta_1\right)^2} \star \cdots \star e^{-\left((x_n - x_n^*)/\delta_n\right)^2}$$  (2-3)

where the $\delta_i$ are positive parameters and the operator $\star$ is often chosen as the algebraic product or min.
Finally, we summarize the above fuzzifiers. The singleton fuzzifier greatly simplifies the computation in the fuzzy inference engine for any membership functions, while the triangular and Gaussian fuzzifiers simplify it when the rule membership functions are triangular or Gaussian, respectively. The Gaussian and triangular fuzzifiers can suppress noise in the input, but the singleton fuzzifier cannot.
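As an illustration (not part of the original report), the three fuzzifiers can be sketched in Python for a single input dimension; the function names and the use of the algebraic product to combine dimensions are assumptions of this sketch:

```python
import math

def singleton_fuzzifier(x_star):
    """Singleton fuzzifier (2-1): membership 1 exactly at x*, 0 elsewhere."""
    return lambda x: 1.0 if x == x_star else 0.0

def triangular_fuzzifier(x_star, b):
    """Triangular fuzzifier (2-2), one dimension, half-width b > 0."""
    return lambda x: max(0.0, 1.0 - abs(x - x_star) / b)

def gaussian_fuzzifier(x_star, delta):
    """Gaussian fuzzifier (2-3), one dimension, spread delta > 0."""
    return lambda x: math.exp(-((x - x_star) / delta) ** 2)

def joint_membership(memberships):
    """Combine per-dimension memberships with the algebraic product (the * operator)."""
    return math.prod(memberships)
```

For a multi-dimensional input, each dimension is fuzzified separately and the results are combined by `joint_membership`.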
2.2 Defuzzifier
The defuzzifier is defined as a mapping from a fuzzy set $D$ in $V \subset R$ to a crisp point $y^* \in V$. Hence, the task of the defuzzifier is to specify a point in $V$ that best represents the fuzzy set $D$. Three criteria should be considered:

1. Plausibility: the point $y^*$ should represent $D$ from an intuitive point of view.
2. Computational simplicity: this criterion is particularly important for fuzzy control.
3. Continuity: a small change in $D$ should not result in a large change in $y^*$.

Three types of defuzzifiers are considered:

1. Center of gravity defuzzifier: the center of gravity defuzzifier specifies $y^*$ as the center of the area covered by the membership function of $D$:

$$y^* = \frac{\int_V y\,\mu_D(y)\,dy}{\int_V \mu_D(y)\,dy}$$  (2-4)

where $\int_V$ is the conventional integral over the universe of discourse $V$.

2. Center average defuzzifier: let $\bar{y}^l$ be the center of the $l$th fuzzy set and $w_l$ be its height. The center average defuzzifier presents $y^*$ as

$$y^* = \frac{\sum_{l=1}^{M} \bar{y}^l w_l}{\sum_{l=1}^{M} w_l}$$  (2-5)

3. Maximum defuzzifier: the maximum defuzzifier chooses $y^*$ as a point in $V$ at which $\mu_D(y)$ achieves its maximum value. Define

$$hgt(D) = \left\{ y \in V \,\middle|\, \mu_D(y) = \sup_{y' \in V} \mu_D(y') \right\}$$  (2-6)

i.e., $hgt(D)$ is the set of all points in $V$ at which $\mu_D(y)$ achieves its maximum value. The maximum defuzzifier then takes $y^*$ as an arbitrary element of $hgt(D)$, or, as the mean of maxima,

$$y^* = \frac{\int_{hgt(D)} y\,dy}{\int_{hgt(D)} dy}$$  (2-7)

where $\int_{hgt(D)}$ is an integration over the continuous part of $hgt(D)$ and a summation over its discrete part.
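On a sampled universe of discourse, the three defuzzifiers might be sketched as follows (the discrete sampling and the function names are assumptions of this sketch, not the report's notation):

```python
def center_of_gravity(ys, mus):
    """Center-of-gravity defuzzifier (2-4), discretized: membership-weighted mean."""
    return sum(y * m for y, m in zip(ys, mus)) / sum(mus)

def center_average(centers, heights):
    """Center average defuzzifier (2-5): rule centers weighted by rule heights."""
    return sum(c * h for c, h in zip(centers, heights)) / sum(heights)

def mean_of_maxima(ys, mus, tol=1e-12):
    """Mean-of-maxima defuzzifier (2-7): average the points where mu_D peaks."""
    peak = max(mus)
    pts = [y for y, m in zip(ys, mus) if m >= peak - tol]
    return sum(pts) / len(pts)
```

The center average defuzzifier is the cheapest of the three, which is why it dominates in fuzzy control applications.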
2.3 Fuzzy Rule Base
The fuzzy rule base consists of fuzzy IF-THEN rules. It is the heart of the fuzzy system, in the sense that all other components are used to implement these rules in a reasonable and efficient manner. The fuzzy rule base comprises fuzzy IF-THEN rules of the form

Rule $i$: IF $x_1$ is $A_1^i$ and … and $x_n$ is $A_n^i$ THEN $y$ is $D^i$  (2-8)

The canonical fuzzy IF-THEN rules in the form of (2-8) include the following:

(1) Partial rules: IF $x_1$ is $A_1^i$ and … and $x_m$ is $A_m^i$ THEN $y$ is $D^i$  (2-9)
(2) Or rules: IF $x_1$ is $A_1^i$ and … and $x_m$ is $A_m^i$ or $x_{m+1}$ is $A_{m+1}^i$ and … and $x_n$ is $A_n^i$ THEN $y$ is $D^i$  (2-10)
(3) Fuzzy statements: $y$ is $D^i$  (2-11)

2.4 Fuzzy Inference
Fuzzy inference is a reasoning method using fuzzy theory, whereby expert knowledge is represented using linguistic rules of the form "IF premise THEN conclusion", where the premise is a statement in fuzzy logic.
Two fuzzy inference methods are introduced as follows.

Product inference:

$$\mu_D(y) = \max_{1 \le l \le M} \left[ \sup_{x \in U} \left( \mu_{A_1^l}(x_1) \cdot \mu_{A_2^l}(x_2) \cdots \mu_{A_n^l}(x_n) \cdot \mu_{D^l}(y) \right) \right]$$  (2-12)

Minimum inference:

$$\mu_D(y) = \max_{1 \le l \le M} \left[ \sup_{x \in U} \min\left( \mu_{A_1^l}(x_1), \dots, \mu_{A_n^l}(x_n), \mu_{D^l}(y) \right) \right]$$  (2-13)
The product inference and minimum inference are the most commonly used fuzzy inference methods in fuzzy systems and other fuzzy applications.
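Assuming a singleton fuzzifier, the sup over $U$ in (2-12) and (2-13) collapses to evaluating the memberships at the measured input, so both inference formulas reduce to a short computation. A sketch (the data layout is an assumption):

```python
from math import prod

def product_inference(antecedents, consequents, x, y):
    """Product inference (2-12): max over the M rules of the product of all
    antecedent membership values and the consequent membership at y."""
    return max(
        prod(mf(xq) for mf, xq in zip(rule, x)) * out(y)
        for rule, out in zip(antecedents, consequents)
    )

def minimum_inference(antecedents, consequents, x, y):
    """Minimum inference (2-13): max over rules of the min of all memberships."""
    return max(
        min([mf(xq) for mf, xq in zip(rule, x)] + [out(y)])
        for rule, out in zip(antecedents, consequents)
    )
```

Here `antecedents` is a list of per-rule lists of membership functions and `consequents` a list of output membership functions, one per rule.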
CHAPTER 3 B-spline Fuzzy-Neural Network (B-spline FNN)
The B-spline membership functions (BMFs) introduced in [7,16] are adopted in this report as the fuzzy membership functions. The fuzzy BMFs constructed in [10] possess the property of local control and have been successfully applied to fuzzy-neural control. This is mainly due to the local control property of B-spline curves, i.e., a BMF has the elegant property of being locally tuned in a learning process. In this chapter, the properties of B-spline curves are discussed.
3.1 Knot vector and B-spline Curves
A spline is a function, usually constructed from low-order polynomial pieces, joined at breakpoints with certain smoothness conditions. The breakpoints are called knots. For order $\alpha$ and $r+1$ control points, the B-spline basis functions have the knot vector $T = \{t_i,\ i = 0, 1, \dots, r+\alpha\}$ with $t_0 < t_1 < t_2 < \cdots < t_{r+\alpha}$. The following type of knot vector is adopted in this report.

1. The knot vector is set to be open uniform, defined as

$$t_i = \begin{cases} 0, & i < \alpha \\ (i - \alpha + 1)\,d, & \alpha \le i \le r \\ (r - \alpha + 2)\,d, & i > r \end{cases}$$  (3-1)

where $d$ is the uniform knot spacing.

To define the B-spline basis functions, we need a parameter, the degree $\alpha$ of these basis functions. The $i$th B-spline basis function of degree $\alpha$, written as $N_{i,\alpha}(t)$, is defined recursively as follows:

$$N_{i,1}(t) = \begin{cases} 1, & t_i \le t < t_{i+1} \\ 0, & \text{otherwise} \end{cases}$$

$$N_{i,\alpha}(t) = \frac{t - t_i}{t_{i+\alpha-1} - t_i}\,N_{i,\alpha-1}(t) + \frac{t_{i+\alpha} - t}{t_{i+\alpha} - t_{i+1}}\,N_{i+1,\alpha-1}(t)$$  (3-2)

The above is usually referred to as the B-spline blending function. The definition looks complicated, but it is not difficult to understand. If the degree is one (i.e., $\alpha = 1$), these basis functions are step functions: $N_{i,1}(t)$ is 1 if $t$ lies on the $i$th knot span $[t_i, t_{i+1})$. To understand the computation of $N_{i,\alpha}(t)$ for $\alpha$ greater than 1, the triangular computation scheme can be used: all knot spans are in the left (first) column and all degree-one basis functions in the second, as shown in Fig. 3-1.

For $r+1$ control points $\{p_0, p_1, \dots, p_r\}$, the $i$th B-spline blending function of order $\alpha$ is denoted by $N_{i,\alpha}(t)$. Hence, the B-spline curve $B(t)$ is defined as

$$B(t) = \sum_{i=0}^{r} p_i\,N_{i,\alpha}(t)$$  (3-3)

3.2 B-spline Membership Function (BMF)

As a result, the B-spline membership function (BMF) $\mu_A(x_q)$ introduced in [7,16] is expressed as

$$\mu_A(x_q) = \sum_{i=0}^{r} p_i\,N_{i,\alpha}(x_q)$$  (3-4)

where $x_q$ is the input data and $A$ is a fuzzy set. We adopt the BMFs as the fuzzy membership functions and use the SGA (to be introduced in Chapter 4) to obtain a set of optimal control points of the BMFs. To avoid an increasing number of control points, we use BMFs with a fixed number of control points as in [15], as shown in Fig. 3-2.
Fig. 3-1. All knot spans are on the left (first) column and all degree-one basis functions on the second.

Fig. 3-2. Illustration of a fixed number of control points of BMFs of order 2.
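The Cox-de Boor recursion (3-2) and the curve definition (3-3) can be sketched directly in Python, following the report's convention that order 1 gives step functions on the knot spans (the zero-denominator guards are an implementation detail added here):

```python
def bspline_basis(i, order, t, knots):
    """B-spline basis N_{i,order}(t) via the recursion (3-2)."""
    if order == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left_den = knots[i + order - 1] - knots[i]
    right_den = knots[i + order] - knots[i + 1]
    left = 0.0 if left_den == 0 else \
        (t - knots[i]) / left_den * bspline_basis(i, order - 1, t, knots)
    right = 0.0 if right_den == 0 else \
        (knots[i + order] - t) / right_den * bspline_basis(i + 1, order - 1, t, knots)
    return left + right

def bspline_curve(t, control_points, order, knots):
    """B-spline curve B(t) of (3-3): control points weighted by basis functions."""
    return sum(p * bspline_basis(i, order, t, knots)
               for i, p in enumerate(control_points))
```

Inside the valid span of a uniform knot vector, the nonzero order-2 basis functions sum to 1; this partition-of-unity property is what makes the BMF of (3-4) behave like a membership function under local tuning of its control points.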
3.3 The Configuration of a B-spline FNN
Fig. 3-3 shows the configuration of a typical fuzzy-neural network. The system has a total of four layers. Nodes at layer I are input nodes (linguistic nodes) that represent input linguistic variables. Nodes at layer II are term nodes which act as BMFs to represent the terms of the respective linguistic variables. Each node at layer III is a fuzzy rule. Layer IV is the output layer.
3.4 A B-spline FNN Inference Method
Given the training input data $x_q$, $q = 1, 2, \dots, n$, and the output data $y_p$, $p = 1, 2, \dots, m$, the $i$th fuzzy rule has the following form:

$R^i$: IF $x_1$ is $A_1^i$ and … and $x_n$ is $A_n^i$ THEN $y_1$ is $w_1^i$ and … and $y_m$ is $w_m^i$  (3-5)

where $i$ is the rule number, the $A_q^i$ are the fuzzy sets of the antecedent part, and the $w_p^i$ are real numbers of the consequent part. When the inputs $x = [x_1\ x_2\ \cdots\ x_n]$ are given, the output $y_p$ of the fuzzy inference can be derived as

$$y_p(x) = \frac{\sum_{i=1}^{h} w_p^i \prod_{q=1}^{n} \mu_{A_q^i}(x_q)}{\sum_{i=1}^{h} \prod_{q=1}^{n} \mu_{A_q^i}(x_q)}$$  (3-6)

where $\mu_{A_q^i}(x_q)$ is the B-spline membership function of $A_q^i$ and $w_p = [w_p^1\ w_p^2\ \cdots\ w_p^h]^T$ is a weighting vector. We assume that each input has the same number of BMFs and that the $i$th BMF of the $q$th input has $r+1$ control points, denoted as

$$c_q^i = \{ c_{qj}^i \mid c_{qj}^i = p_j,\ j = 0, 1, \dots, r \} = [c_{q0}^i\ c_{q1}^i\ \cdots\ c_{qr}^i]^T.$$

Fig. 3-3. The configuration of a fuzzy neural network.

Each input has $z$ fuzzy sets (BMFs). If there are $h$ rules in the fuzzy rule base, then the adjustable set of all the control points is defined as

$$c = [c_1^{1T}\ c_1^{2T}\ \cdots\ c_1^{zT}\ c_2^{1T}\ c_2^{2T}\ \cdots\ c_2^{zT}\ \cdots\ c_n^{zT}]^T = \{ c_{qj}^i \mid i = 1, \dots, z;\ q = 1, \dots, n;\ j = 0, 1, \dots, r \}$$  (3-7)

Hence, the objective of the learning algorithm is to minimize the error functions

$$e_p(w, c) = (y_p - y_p^*)^2$$  (3-8)

and

$$E(w, c) = \| Y - Y^* \|^2$$  (3-9)

where $w = [w_1^T\ w_2^T\ \cdots\ w_m^T]^T$ is the weighting vector of the fuzzy neural network, $c$ is the control point vector of the BMFs defined in (3-7), $Y = [y_1\ y_2\ \cdots\ y_m]$ is the $m$-dimensional vector of the current outputs, and $Y^* = [y_1^*\ y_2^*\ \cdots\ y_m^*]$ is the $m$-dimensional vector of the desired outputs acquired from the training data.
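The inference formula (3-6) is a normalized weighted average. A minimal sketch (membership functions passed as callables, which in the report would be the BMFs of (3-4)):

```python
from math import prod

def fnn_output(x, rule_memberships, weights):
    """Single output y_p of (3-6): the firing strength of rule i is the product
    of its membership values over all inputs; the output is the average of the
    consequent weights w_p^i, weighted by the normalized firing strengths."""
    strengths = [prod(mf(xq) for mf, xq in zip(rule, x))
                 for rule in rule_memberships]
    total = sum(strengths)
    return sum(w * s for w, s in zip(weights, strengths)) / total
```

With two rules whose memberships overlap, an input halfway between the rule centers yields the average of the two consequent weights, which is the interpolation behavior the error functions (3-8)-(3-9) are minimized against.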
CHAPTER 4 Design of the FNN Identifiers by the Simplified Genetic Algorithms
In this chapter, a novel approach to adjust both the control points of B-spline membership functions (BMFs) and the weightings of fuzzy-neural networks using a simplified genetic algorithm (SGA) is proposed. Fuzzy-neural networks are traditionally trained by gradient-based methods and may fall into a local minimum during the learning process. Genetic algorithms have drawn significant attention in various fields due to their capability of directed random search for global optimization, which motivates their use to overcome the problems encountered by conventional learning methods. However, it is well known that the searching speed of conventional genetic algorithms is not desirable; such algorithms are inherently disadvantaged in dealing with a vast number (over 100) of adjustable parameters in the fuzzy-neural networks. In this chapter, the SGA is proposed by using a sequential-search-based crossover point (SSCP) method, in which a better crossover point is determined and only the gene at the specified crossover point is crossed, as a single point crossover operation. Chromosomes consisting of both the control points of the BMFs and the weightings of the fuzzy-neural networks are coded as an adjustable vector with real-number components and searched by the SGA. Because of the use of the SGA, faster convergence of the evolution process in searching for an optimal fuzzy-neural network can be achieved. Nonlinear functions approximated by the fuzzy-neural networks via the SGA are demonstrated in this chapter to illustrate the effectiveness and applicability of the proposed method.
4.1 The Simplified Genetic Algorithm
To overcome the problems encountered by conventional genetic algorithms, we propose a simplified genetic algorithm (SGA) with a novel structure different from the conventional GAs to deal with a complicated situation where a vast number (over 100) of adjustable parameters are searched in the fuzzy-neural network.
4.2 Basic Concept of GAs
GAs are powerful search optimization algorithms based on the mechanics of natural selection and natural genetics. GAs can be characterized by the following features [24]:
• A scheme for encoding solutions to the problem, referred to as chromosomes;
• An evaluation function (referred to as a fitness function) that rates each chromosome relative to the others in the current set of chromosomes (referred to as a population);
• An initialization procedure for a population of chromosomes;
• A set of operators which are used to manipulate the genetic composition of the
population (such as recombination, mutation, crossover, etc);
Basically, GAs are probabilistic algorithms which maintain a population of individuals (chromosomes, vectors) in each iteration. Each chromosome represents a potential solution to the problem at hand and is evaluated to give some measure of its fitness. Then, selecting the more fit individuals forms a new population. Some members of the new population undergo transformations by means of genetic operators to form new solutions. After some number of generations, it is hoped that the system converges to a near-optimal solution.
There are two primary groups of genetic operators, crossover and mutation, used by most researchers. Crossover combines the features of two parent chromosomes to form two similar offspring by swapping corresponding segments of the parents. The intuition behind the applicability of the crossover operator is information exchange between potential solutions. Mutation, on the other hand, arbitrarily alters one or more genes of a selected chromosome, by a random change with a probability equal to the mutation rate. The intuition behind the mutation operator is the introduction of some extra variability into the population.
The GA described above, however, is a conventional one. In this chapter, we propose a simplified genetic algorithm (SGA), which is characterized by three simplified processes. Firstly, the population size is fixed and can be reduced to a minimum size of 4. Secondly, the crossover operator is simplified to a single point crossover. Thirdly, only one chromosome in a population is selected for mutation. Details are discussed in the following section.
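The SSCP idea as summarized above, where a better crossover point is searched sequentially and only the single gene at that point is crossed, might be sketched as follows. The acceptance rule of returning the first improving swap is an assumption of this sketch; the report defines the exact procedure in Fig. 4-2:

```python
def sscp_crossover(parent_a, parent_b, fitness):
    """Sketch of the sequential-search-based crossover point (SSCP) method:
    scan candidate crossover points in order, swap only the single gene at
    that point, and accept the first swap whose best offspring beats the
    best parent. Returns the parents unchanged if no point improves."""
    best = max(fitness(parent_a), fitness(parent_b))
    for j in range(len(parent_a)):
        child_a, child_b = parent_a[:], parent_b[:]
        child_a[j], child_b[j] = child_b[j], child_a[j]
        if max(fitness(child_a), fitness(child_b)) > best:
            return child_a, child_b  # offspring fitter than both parents
    return parent_a, parent_b  # no improving crossover point found
```

Because only one gene is swapped per accepted crossover, the cost per operation stays low even for chromosomes with hundreds of parameters, which is the regime the SGA targets.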
4.3 Evolutionary Processes of the Simplified Genetic Algorithm (SGA)
The adjustable parameters of the FNN, namely the control points and the weightings (or either set alone), must first be defined. In this section, we define the evolutionary processes of the SGA using both parameter sets $w$ and $c$. For learning the adjustable parameters of the fuzzy-neural network described in Chapter 3.4, we define the chromosome as
1 1 2 1 1] [ ] [ + = + ∈ + = β β β β φ φ φ φ φ φ wTcT m R , (4-1)
with the length of
z r n h m× + × + × = ( ( 1)) β , (4-2)
wherew is a set of weighting factors defined as the parameter w of (3-10), ranging from within the interval D1 =[wmin,wmax]⊆ R, and c is a set of control points defined in (3-8), ranging from within the interval D2 =[cmin,cmax]⊆R. Each input has
z fuzzy sets (BMFs). The φβ+1 is defined as a virtual gene, on which the single point crossover performed does not affect the fitness values of the population, if the crossover point chooses j =β. The single point crossover will be introduced later.
Because a real-valued space is dealt with, where each chromosome is coded as an adjustable vector with floating-point components, the crossover and mutation are real-number genetic operators.
4.3.1 Population Initialization
A genetic algorithm requires a population of potential solutions to be initialized and then maintained during the process. In the proposed approach, a fixed population size k is used to prevent unlimited growth of the population. Real-number representation of potential solutions is also adopted to simplify the genetic operator definitions and to obtain a better performance of the genetic algorithm itself. The initial chromosomes are randomly generated within the feasible ranges D_1 and D_2. The initial population is

Ψ = [φ^1 φ^2 ⋯ φ^k]^T = [ φ_1^1 φ_2^1 ⋯ φ_β^1 φ_{β+1}^1 ; φ_1^2 φ_2^2 ⋯ φ_β^2 φ_{β+1}^2 ; ⋯ ; φ_1^k φ_2^k ⋯ φ_β^k φ_{β+1}^k ],  (4-3)

where φ^ℓ = [φ_1^ℓ φ_2^ℓ ⋯ φ_β^ℓ φ_{β+1}^ℓ] is the ℓth chromosome for ℓ = 1, 2, ⋯, k. Each chromosome has β+1 elements. It is expected that one of the candidate solutions, φ, can be evolutionarily obtained as a set of near-optimal parameters for the fuzzy-neural network. Note that the number of chromosomes, k, needs to be an even number (to be introduced in Chapter 4.3.3).
After initialization, two genetic operations, crossover and mutation, are performed during procreation.
4.3.2 Fitness Function
The performance of each chromosome is evaluated according to its fitness. After generations of evolution, it is expected that the genetic algorithm converges and a best chromosome with largest fitness (or smallest error) representing the optimal solution to the problem is obtained. The fitness function is defined as follows:
fitness = 1/(1 + E(w, c)),  (4-4)

where E(w, c) is the estimation error function defined in (3-9).
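The chromosome layout (4-1) and the fitness map (4-4) can be sketched in a few lines of Python. This is an illustrative sketch rather than the report's implementation: `toy_E` merely stands in for the FNN estimation error E(w, c) of (3-9), and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_population(k, n_w, n_c, w_range=(-2.0, 2.0), c_range=(0.0, 1.0)):
    """Randomly generate k real-coded chromosomes [w | c | virtual gene]."""
    w = rng.uniform(*w_range, size=(k, n_w))
    c = rng.uniform(*c_range, size=(k, n_c))
    virtual = np.zeros((k, 1))          # gene beta+1: crossing it never changes fitness
    return np.hstack([w, c, virtual])   # shape (k, beta+1) with beta = n_w + n_c

def fitness(chrom, error_fn, n_w):
    """fitness = 1/(1 + E(w, c)) as in (4-4)."""
    w, c = chrom[:n_w], chrom[n_w:-1]   # split the genes; drop the virtual gene
    return 1.0 / (1.0 + error_fn(w, c))

# toy stand-in for the FNN estimation error E(w, c) of (3-9)
toy_E = lambda w, c: float(np.sum(w**2) + np.sum((c - 0.5)**2))

pop = init_population(k=4, n_w=3, n_c=2)
print(pop.shape)                        # (4, 6)
print(fitness(pop[0], toy_E, n_w=3))    # a value in (0, 1]
```

Since E(w, c) ≥ 0, the fitness is always in (0, 1], with 1 reached only at zero error.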
4.3.3 Single Point Crossover
In order to deal with a vast number of adjustable parameters, the single point crossover is introduced in this section. Fig. 4-1 shows the difference between the traditional crossover methods and the proposed single point crossover method. Fig. 4-1 (a) and (b) show the traditional methods, which adopt one crossover point and two crossover points, respectively. Although the proposed single point crossover shown in Fig. 4-1 (c) also has two crossover points, the distance between the two crossover points is only one gene (parameter). For each generation, the crossover operator acts on parents to produce offspring; to avoid improper crossover, the distance between the two crossover points is reduced to a single gene. The single point crossover operator is defined as

Ψ̂ = Crs(Ψ; j) = [∆ φ̂_{j+1}^1 ∆ ; ∆ φ̂_{j+1}^2 ∆ ; ⋯ ; ∆ φ̂_{j+1}^k ∆],  (4-5)

where j is the crossover point determined by the sequential-search-based crossover point method or the randomly selected crossover point method introduced later, ∆ denotes the elements of the offspring that remain the same as those of their parents, and

φ̂_{j+1}^i = a·φ_{j+1}^i + (1−a)·φ_{j+1}^{i+(k/2)},  if i = 1, 2, ⋯, k/2,
φ̂_{j+1}^i = a·φ_{j+1}^i + (1−a)·φ_{j+1}^{i−(k/2)},  if i = (k/2)+1, (k/2)+2, ⋯, k.  (4-6)

That is, the crossover alters only the gene at position j+1 of each chromosome, using a linear combination of φ_{j+1}^i and φ_{j+1}^{i±(k/2)}, where a is a random number between 0 and 1 and Ψ̂ is the new population.
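The pairing and blending of (4-5)/(4-6) can be sketched as follows; this is a minimal illustration with hypothetical names, using 0-based gene indexing.

```python
import numpy as np

def single_point_crossover(pop, j, a):
    """Implement (4-5)/(4-6): chromosome i is paired with chromosome i +/- k/2,
    and only gene j (0-based) is replaced by a linear blend; all other genes
    are the Delta entries, copied unchanged from the parents."""
    k = pop.shape[0]                                     # population size, assumed even
    half = k // 2
    mates = np.r_[np.arange(half, k), np.arange(half)]   # i <-> i + k/2 pairing
    offspring = pop.copy()
    offspring[:, j] = a * pop[:, j] + (1.0 - a) * pop[mates, j]
    return offspring

rng = np.random.default_rng(1)
pop = rng.uniform(-1.0, 1.0, size=(4, 6))
child = single_point_crossover(pop, j=2, a=0.3)
changed = np.where((child != pop).any(axis=0))[0]
print(changed)    # [2] -- only the gene at the crossover point is altered
```

Because only one gene moves per generation, the cost of re-evaluating a candidate crossover point stays low even when β is large, which is what makes the sequential search of the next section affordable.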
To determine the crossover point j for the single point crossover, we propose two kinds of search methods. One is a randomly selected crossover point (RSCP) method, which chooses a crossover point at random. The RSCP is a popular technique adopted by many researchers. Pseudo code for the randomly selected crossover point (RSCP) method is shown in Fig. 4-2 (a).
The other is a sequential-search-based crossover point (SSCP) method, where the crossover point j is determined via a sequential search based on fitness before the crossover operation actually takes place. The search algorithm of the SSCP is similar to the local search procedure in [24] and to a sequential search of a database. Pseudo code for the SSCP method is shown in Fig. 4-2 (b). If there is no satisfactory crossover point at the current generation, the crossover point is set to j = β; the single point crossover is then performed on the virtual gene φ_{β+1}, and the fitness values of the population are not affected.
4.3.4 Sorting Operation
After crossover, the newly generated population is sorted by ranking the fitness of the chromosomes within the population, resulting in E(φ̂^1) ≤ E(φ̂^2) ≤ ⋯ ≤ E(φ̂^k). The first chromosome φ̂^1 of the sorted population Ψ̂ = [φ̂^1 φ̂^2 ⋯ φ̂^k]^T has the highest fitness value (or smallest error).
4.3.5 Mutation Operation
After sorting, the first chromosome is the best one in the population in terms of fitness. Mutation here means first copying the first (i.e., best) chromosome to the (k/2+1)th chromosome. Then, genes within the (k/2+1)th chromosome are randomly selected for mutation according to the mutation rate p_m, as shown in Fig. 4-3. Note that mutation on the selected genes is performed on a copy of the best-fit chromosome. The genes φ̂_j^{(k/2+1)} selected for mutation within the (k/2+1)th chromosome φ̂^{(k/2+1)} are altered by the mutation operators described in (4-7) and (4-8).

Fig. 4-1. The single point crossover: (a) a single crossover point (a traditional method); (b) two crossover points spanning multiple genes (a traditional method); (c) two crossover points spanning a single gene (the proposed method).
Because two different intervals, D_1 and D_2, are defined for the weightings and the control points, respectively, the mutation operator is divided into two parts. For the weightings (φ̂_j^{(k/2+1)}, j = 1, 2, ⋯, m×h), the genes are updated by (4-7) below.
Fig. 4-3. Mutation: (a) a traditional method, where mutation may act on any chromosome; (b) the proposed method, where mutation acts only on the (k/2+1)th chromosome.
Procedure Randomly Selected Crossover Point (j);
Begin
  Obtain j randomly between 0 and β;
End

(a) the RSCP method

Procedure Sequential-Search-based Crossover Point (j);
Begin
  Let j = 0; i = 0;
  Repeat
    Perform Ψ̂ = Crs(Ψ; i) by (4-5);
    Evaluate fitness(φ̂^1) and fitness(φ^1) by (4-4);
    If fitness(φ̂^1) > fitness(φ^1) Then j = i Else i = i + 1;
  Until fitness(φ̂^1) > fitness(φ^1) or i = β;
  Return j = i;
End

(b) the SSCP method

Fig. 4-2. Pseudo code for the randomly selected crossover point (RSCP) method and the sequential-search-based crossover point (SSCP) method.
φ̂_j^{(k/2+1)} = φ_j^1 + ∆(t, w_max − φ_j^1)  if δ > 0.5,
φ̂_j^{(k/2+1)} = φ_j^1 − ∆(t, φ_j^1 − w_min)  if δ ≤ 0.5.  (4-7)

For the control points (φ̂_j^{(k/2+1)}, j = m×h+1, ⋯, β), the genes are updated by:

φ̂_j^{(k/2+1)} = φ_j^1 + ∆(t, c_max − φ_j^1)  if δ > 0.5,
φ̂_j^{(k/2+1)} = φ_j^1 − ∆(t, φ_j^1 − c_min)  if δ ≤ 0.5,  (4-8)

∆(t, y) = y·r·(1 − t/T)^γ,  (4-9)

where δ ∈ [0, 1] is a random value, p_m is a given mutation rate, t is the current iteration, r is a random number from [0, 1], T is the maximal generation number, and γ is a system parameter determining the degree of dependency on the iteration number. The function ∆(t, y) returns a value in the range [0, y] such that the probability of ∆(t, y) being close to 0 increases as t increases. This property causes the mutation operator to search the space uniformly at the initial stage (when t is small) and very locally at later stages, thus increasing the probability of generating children closer to their successors than a random choice would. The design of the mutation operator is based on two rationales. First, it is desirable to take large leaps in the early phase of the SGA search so that the SGA can explore the parameter space as widely as possible. Second, it is desirable to take smaller jumps in the later phase so that the SGA can direct its search toward a global minimum more effectively. Both are accomplished through the role that "time" t plays in ∆(t, y).
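The decaying-step behavior of (4-7)-(4-9) can be sketched numerically. Note the exact form of ∆(t, y) used here, y·r·(1 − t/T)^γ, is a reconstruction of (4-9) as described above; the function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def delta(t, y, T, gamma=2.0):
    """(4-9): a value in [0, y] whose expected size shrinks as t approaches T."""
    r = rng.random()                       # random number in [0, 1]
    return y * r * (1.0 - t / T) ** gamma

def mutate_gene(x, lo, hi, t, T):
    """(4-7)/(4-8): move the gene toward hi or lo by a decaying random step."""
    if rng.random() > 0.5:                 # the delta > 0.5 branch
        return x + delta(t, hi - x, T)     # step toward the upper bound
    return x - delta(t, x - lo, T)         # step toward the lower bound

# large leaps early in the search, small refinements late
early = [abs(mutate_gene(0.5, 0.0, 1.0, t=1, T=100) - 0.5) for _ in range(200)]
late = [abs(mutate_gene(0.5, 0.0, 1.0, t=99, T=100) - 0.5) for _ in range(200)]
print(max(early), max(late))    # late-stage steps are orders of magnitude smaller
```

Because the step is a fraction of the distance to the nearest bound, the mutated gene always stays inside [lo, hi], so no clipping back into D_1 or D_2 is needed.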
The SGA offers exciting advantages over the conventional gradient-based methods during the learning process of fuzzy-neural networks. To start with, chromosomes consisting of adjustable parameters of the fuzzy-neural network are coded as a vector with real number components. The fitness values are obtained by a mapping from the error function, defined as the difference between the outputs of the fuzzy-neural network and the desired outputs. Thus, all the best adjustable parameters of the fuzzy-neural network can be obtained by repeating genetic operations, i.e., crossover and mutation, so that an optimal fuzzy neural network satisfying an error bound condition can be evolutionarily obtained. Because of the use of the simplified genetic algorithm, faster convergence of the evolution process to search for an optimal fuzzy neural network can be achieved.
4.4 Pseudo Code for the SGA
The idea of the SGA has been introduced in the previous section. Fig. 4-4 shows the two kinds of SGA adopted for an off-line learning process. Pseudo code for the SGA with the RSCP method and the SGA with the SSCP method is shown in Fig. 4-4 (a) and Fig. 4-4 (b), respectively.
When off-line learning with the RSCP method is used, as shown in Fig. 4-4 (a), an additional procedure, If fitness(φ̂^1) < fitness(φ^1) Then φ̂^1 = φ^1, is used to maintain the best fitness evolutionarily obtained so far. This procedure of carrying the best chromosome into the next generation is a popular technique in conventional genetic algorithms. For the SGA with the SSCP method, however, this additional procedure is no longer required, as shown in Fig. 4-4 (b), since a better crossover point is guaranteed before the crossover takes place. As shown in Example 1, the learning effect of the SGA with the SSCP method is superior to that of the SGA with the RSCP method.
Procedure SGA with RSCP
Begin
  Initialize Ψ; % generate an initial population
  While (not terminate-condition) do
    % Obtain the crossover point for off-line learning
    Perform Randomly Selected Crossover Point (j) in Fig. 4-2 (a);
    Perform Ψ̂ = Crs(Ψ; j); % perform the single point crossover
    Sort Ψ̂;
    % Additional procedure for off-line learning with RSCP
    If fitness(φ̂^1) < fitness(φ^1) Then φ̂^1 = φ^1;
    Mutate Ψ̂; % only applied to the (k/2+1)th chromosome
  End While
End

(a) SGA with the RSCP

Procedure SGA with SSCP
Begin
  Initialize Ψ; % generate an initial population
  While (not terminate-condition) do
    % Obtain the crossover point for off-line learning
    Perform Sequential-Search-based Crossover Point (j) in Fig. 4-2 (b);
    Perform Ψ̂ = Crs(Ψ; j); % perform the single point crossover
    Sort Ψ̂;
    Mutate Ψ̂; % only applied to the (k/2+1)th chromosome
  End While
End

(b) SGA with the SSCP
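The loop of Fig. 4-4 (b) can be sketched end to end. The sketch below evolves a real-coded population of k = 4 against a toy quadratic error that stands in for the FNN estimation error; the mutation step is a simplified stand-in for (4-7)-(4-9), and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def error(chrom):
    """Toy stand-in for E(w, c): distance of the real genes from 0.3."""
    return float(np.sum((chrom[:-1] - 0.3) ** 2))    # last gene is the virtual one

def fitness(chrom):
    return 1.0 / (1.0 + error(chrom))                # (4-4)

def crossover(pop, j, a):
    """Single point crossover (4-5)/(4-6) at gene j (0-based)."""
    k = pop.shape[0]
    half = k // 2
    mates = np.r_[np.arange(half, k), np.arange(half)]
    child = pop.copy()
    child[:, j] = a * pop[:, j] + (1.0 - a) * pop[mates, j]
    return child

def sort_by_fitness(pop):
    return pop[np.argsort([-fitness(c) for c in pop])]

def sga_sscp(beta=10, k=4, T=300, lo=-2.0, hi=2.0):
    pop = sort_by_fitness(rng.uniform(lo, hi, size=(k, beta + 1)))
    for t in range(T):
        # SSCP: scan crossover points until an offspring beats the best parent;
        # j = beta falls back to the virtual gene and leaves fitness unchanged
        best = fitness(pop[0])
        for j in range(beta + 1):
            child = crossover(pop, j, a=rng.random())
            if max(fitness(c) for c in child) > best:
                break
        pop = sort_by_fitness(child)
        # mutation: perturb one gene of a copy of the best chromosome and
        # write it over the (k/2+1)th slot, with a step that decays in t
        mut = pop[0].copy()
        g = rng.integers(beta)
        if rng.random() > 0.5:
            mut[g] += (hi - mut[g]) * rng.random() * (1.0 - t / T) ** 2
        else:
            mut[g] -= (mut[g] - lo) * rng.random() * (1.0 - t / T) ** 2
        pop[k // 2] = mut
        pop = sort_by_fitness(pop)
    return pop[0]

best = sga_sscp()
print(error(best))    # far below the error of a random chromosome
```

Because the population is re-sorted after every crossover and mutation, the best chromosome is never lost, which is the elitism the SSCP variant gets for free.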
CHAPTER 5 INDIRECT ADAPTIVE FUZZY-NEURAL CONTROLLER
In this chapter, a constructive manner of developing on-line indirect adaptive controllers based on the SGA to achieve the control objectives is proposed. In particular, the state feedback control law with the SGA update law can be tuned on-line.
5.1 Control Objectives
Consider the nth-order nonlinear systems of the form

ẋ_1 = x_2
ẋ_2 = x_3
  ⋮
ẋ_n = f(x_1, ⋯, x_n) + g(x_1, ⋯, x_n)u
y = x_1  (5-1)

or, equivalently, of the form

x^(n) = f(x, ẋ, ⋯, x^(n−1)) + g(x, ẋ, ⋯, x^(n−1))u,  y = x,  (5-2)

where u ∈ R and y ∈ R are the input and output of the system, respectively, and x = (x_1, x_2, ⋯, x_n)^T = (x, ẋ, ⋯, x^(n−1))^T ∈ R^n is the state vector of the system, which is assumed to be available for measurement. We assume that f and g are unknown functions and that g is, without loss of generality, a strictly positive function. In [55], these systems are in normal form and have relative degree equal to n. The control objective is to design an indirect adaptive state feedback fuzzy-neural controller so that the system output y follows a given bounded reference signal y_m.
First, let e = y_m − y, e = (e, ė, ⋯, e^(n−1))^T, and k = (k_n, ⋯, k_1)^T ∈ R^n be such that all roots of the polynomial h(s) = s^n + k_1 s^(n−1) + ⋯ + k_n are in the open left half-plane. If the functions f and g are known, then the optimal control law

u* = (1/g(x))[−f(x) + y_m^(n) + k^T e]  (5-3)

applied to (5-2) results in

e^(n) + k_1 e^(n−1) + ⋯ + k_n e = 0.  (5-4)

However, since f and g are unknown, the optimal control law (5-3) cannot be obtained. To solve this problem, we use fuzzy logic systems as approximators of the unknown functions.
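Choosing k in (5-3)/(5-4) amounts to a pole placement for the error dynamics. A brief numerical sketch for n = 2 (the numbers are illustrative only, with both poles placed at s = −2):

```python
import numpy as np

# place both error poles at s = -2: h(s) = (s + 2)^2 = s^2 + 4s + 4
coeffs = np.poly(np.array([-2.0, -2.0]))   # -> [1., 4., 4.], i.e. k1 = 4, k2 = 4
k1, k2 = coeffs[1], coeffs[2]

# companion-form matrix of the ideal error dynamics (5-4) for n = 2
Lam = np.array([[0.0, 1.0],
                [-k2, -k1]])
assert np.all(np.linalg.eigvals(Lam).real < 0)   # all roots in the open left half-plane

# forward-Euler simulation of e_dot = Lam e from e(0) = (1, 0)
e = np.array([1.0, 0.0])
dt = 0.001
for _ in range(int(5.0 / dt)):     # simulate 5 seconds
    e = e + dt * (Lam @ e)
print(e)                           # the tracking error decays toward zero
```

Any left-half-plane pole set works; faster poles shrink the error faster at the cost of a larger control effort through k^T e in (5-3).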
5.2 Certainty Equivalent Controller
We replace f and g in (5-3) by the fuzzy logic systems fˆ(x|wf,cf) and ),
, | (
ˆ x wg cg
g respectively. The resulting control law
] ) , | ( ˆ [ ) , | ( ˆ 1 f x w c y( ) k e c w x g u n T m f f g g c= − + + (5-5)
is the so-called certainty equivalent controller [56] in the adaptive control literature. Substituting (5-5) to (5-2) and after some manipulations, we obtain the error equation
c g g f f T n u x g c w x g x f c w x f e k e =− +[(ˆ( | , )− ( )]+[ˆ( | , )− ( )] (5-6)
c g g f f c ce b f x w c f x g x w c g x u eD=Λ + [(ˆ( | , − ( )]+[ˆ( | , )− ( )] (5-7) where − − − = Λ −1 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 k k kn n c m m m m m m m m m m m m m m , = 1 0 0 m c b (5-8)
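The stability analysis below hinges on the Lyapunov equation (5-9), Λ_c^T P + P Λ_c = −Q. For small n it can be solved directly by Kronecker vectorization; the sketch below uses illustrative numbers (k_1 = k_2 = 4, Q = I), not values from the report.

```python
import numpy as np

def solve_lyapunov(Lam, Q):
    """Solve Lam^T P + P Lam = -Q by vectorization (column stacking)."""
    n = Lam.shape[0]
    I = np.eye(n)
    # vec(Lam^T P) = (I (x) Lam^T) vec(P);  vec(P Lam) = (Lam^T (x) I) vec(P)
    A = np.kron(I, Lam.T) + np.kron(Lam.T, I)
    p = np.linalg.solve(A, -Q.flatten(order="F"))
    return p.reshape((n, n), order="F")

# stable companion matrix for k1 = k2 = 4 (both poles at s = -2)
Lam = np.array([[0.0, 1.0], [-4.0, -4.0]])
Q = np.eye(2)
P = solve_lyapunov(Lam, Q)
print(np.allclose(Lam.T @ P + P @ Lam, -Q))   # True
print(np.all(np.linalg.eigvals(P) > 0))       # True: P is positive definite
```

Because Λ_c is stable, no pair of its eigenvalues sums to zero, so the linear system above is nonsingular and P is the unique symmetric positive definite solution.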
Since Λ_c is a stable matrix (|sI − Λ_c| = s^n + k_1 s^(n−1) + ⋯ + k_n), we know that there exists a unique positive definite symmetric n×n matrix P which satisfies the Lyapunov equation [57]:

Λ_c^T P + P Λ_c = −Q,  (5-9)

where Q is an n×n positive definite matrix. Let V_e = (1/2)e^T P e. Using (5-7) and (5-9), we have

V̇_e = (1/2)ė^T P e + (1/2)e^T P ė
    = −(1/2)e^T Q e + e^T P b_c{[f̂(x|w_f, c_f) − f(x)] + [ĝ(x|w_g, c_g) − g(x)]u_c}.  (5-10)

In order to bound x_i = y_m^(i−1) − e^(i−1), V_e must be bounded, which means we require V̇_e ≤ 0 when V_e is greater than a large constant V̄ > 0. However, from (5-10), it is difficult to design u_c such that V̇_e ≤ 0.

5.3 Supervisory Control
By incorporating an extra control term u_s into u_c, the control law becomes

u = u_c + u_s,  (5-11)

where u_s is called the supervisory control. The supervisory control u_s is turned on when the error function V_e is greater than a positive constant V̄; if V_e ≤ V̄, the supervisory control u_s is turned off. That is, if the system tends to be unstable (V_e > V̄), the supervisory control u_s forces V_e ≤ V̄. In this way, u_s acts like a supervisor.
Substituting (5-11) into (5-2), the error equation becomes

ė = Λ_c e + b_c{[f̂(x|w_f, c_f) − f(x)] + [ĝ(x|w_g, c_g) − g(x)]u_c − g(x)u_s}.  (5-12)

Using (5-12) and (5-9), we have

V̇_e = −(1/2)e^T Q e + e^T P b_c{[f̂(x|w_f, c_f) − f(x)] + [ĝ(x|w_g, c_g) − g(x)]u_c − g(x)u_s}
    ≤ −(1/2)e^T Q e + |e^T P b_c|[(|f̂(x|w_f, c_f)| + |f(x)|) + (|ĝ(x|w_g, c_g)| + |g(x)|)|u_c|] − e^T P b_c g(x)u_s.  (5-13)

In order that the right-hand side of (5-13) be nonpositive, we need to know the bounds of f and g. We assume the following.
Assumption 5.1. We can determine functions f^U(x), g^U(x), and g_L(x) such that |f(x)| ≤ f^U(x) and g_L(x) ≤ g(x) ≤ g^U(x) for x ∈ U_c, where f^U(x) < ∞, g^U(x) < ∞, and g_L(x) > 0 for x ∈ U_c.
In other words, f and g are bounded but not "totally unknown." Note that Assumption 5.1 only requires state-dependent bounds of f and g, which is less restrictive than requiring fixed bounds for all x ∈ U_c.
Based on f^U, g^U, g_L, and (5-13), we choose the supervisory control u_s as

u_s = I_1* sgn(e^T P b_c) (1/g_L(x)) [|f̂(x|w_f, c_f)| + f^U(x) + |ĝ(x|w_g, c_g)u_c| + |g^U(x)u_c|],  (5-14)

where I_1* = 1 if V_e > V̄ (V̄ is a constant specified by the designer), I_1* = 0 if V_e ≤ V̄, and sgn(y) = 1 (−1) if y ≥ 0 (< 0). Substituting (5-14) into (5-13) and considering the case V_e > V̄, we have

V̇_e ≤ −(1/2)e^T Q e + |e^T P b_c|[|f̂| + |f| + |ĝ u_c| + |g u_c| − (g/g_L)(|f̂| + f^U + |ĝ u_c| + |g^U u_c|)]
    ≤ −(1/2)e^T Q e ≤ 0.  (5-15)

Finally, the overall control scheme of the SIAFC is shown in Fig. 5-1.
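The switching logic of (5-14) is easy to misread in prose; a minimal sketch follows, in which every number is hypothetical and merely stands in for the quantities of Assumption 5.1.

```python
import numpy as np

def supervisory_control(e, P, b_c, f_hat, f_U, g_hat, g_U, g_L, u_c, Ve, V_bar):
    """Supervisory term (5-14): active only when V_e exceeds the designer bound."""
    if Ve <= V_bar:
        return 0.0                                   # I1* = 0: supervisor switched off
    s = 1.0 if float(e @ P @ b_c) >= 0 else -1.0     # sgn(e^T P b_c)
    return s * (abs(f_hat) + f_U + abs(g_hat * u_c) + abs(g_U * u_c)) / g_L

# hypothetical numbers just to exercise the switching logic
e = np.array([0.2, -0.1])
P = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([0.0, 1.0])
print(supervisory_control(e, P, b, f_hat=1.2, f_U=2.0, g_hat=0.8, g_U=1.5,
                          g_L=0.5, u_c=0.4, Ve=0.9, V_bar=0.5))   # supervisor on
print(supervisory_control(e, P, b, 1.2, 2.0, 0.8, 1.5, 0.5, 0.4,
                          Ve=0.1, V_bar=0.5))                     # 0.0: supervisor off
```

Dividing by the lower bound g_L rather than the unknown g is what makes the cancellation in (5-15) conservative enough to guarantee V̇_e ≤ 0.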
Fig. 5-1. The overall scheme of the SIAFC: the plant x^(n) = f(x) + g(x)u, y = x; the fuzzy controller u_c = [−f̂(x|w_f, c_f) + y_m^(n) + k^T e]/ĝ(x|w_g, c_g); the supervisory control u_s = sgn(e^T P b_c)[|f̂| + f^U + |ĝ u_c| + |g^U u_c|]/g_L, switched on (I_1* = 1) or off (I_1* = 0); and the SGA-based adaptive law for w_f, c_f, w_g, c_g with fitness function 1/(1 + E_i(φ_1)) for f(x) and g(x).
5.4 Simulation Results
This section presents the simulation results of the proposed on-line SGA-based indirect adaptive fuzzy-neural controller (SIAFC) with the SSCP method for a class of unknown nonlinear dynamical systems, to illustrate the stability of the closed-loop system. The inverted pendulum system used in this project is shown in Fig. 5-2. Let x_1 = θ be the angle of the pendulum with respect to the vertical line and x_2 = θ̇ its angular velocity.
Consider the dynamic equations of the inverted pendulum system:

ẋ_1 = x_2
ẋ_2 = [g sin x_1 − m l x_2^2 cos x_1 sin x_1/(m_c + m)] / [l(4/3 − m cos^2 x_1/(m_c + m))]
      + [cos x_1/(m_c + m)] / [l(4/3 − m cos^2 x_1/(m_c + m))] u,  (5-16)

where g = 9.8 m/s^2 is the acceleration due to gravity, m_c is the mass of the cart, l is the half-length of the pole, m is the mass of the pole, and u is the control input. In this example, we assume m_c = 1 kg, m = 0.1 kg, and l = 0.5 m. Clearly, (5-16) is in the form of (5-1). Thus, the SIAFC is adopted to control the system. In this example, each input of the BMF fuzzy-neural network has 7 BMFs. All of the BMFs are of order α = 2 and each BMF has 15 control points. A population size k = 4 is assumed. The adjustable parameters w_f and c_f of f(x_1, x_2) are in the intervals D_1 = [−2, 2] and D_2 = [0, 1], respectively, and the adjustable parameters w_g and c_g of g(x_1, x_2) are in the intervals D_1 = [−1.5, 1.5] and D_2 = [0, 1], respectively. We choose the reference signal y_m = 0 (case 1) and y_m(t) = 0.1 sin(t) (case 2) in the following simulations. The initial state is x(0) = (−π/60, 0)^T.
To apply the SIAFC to this system, the bounds f^U, g^U, and g_L should be obtained.

Fig. 5-2. The inverted pendulum system (x_1 = θ, x_2 = θ̇).
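The plant (5-16) is simple enough to check directly in code. The sketch below implements f and g of (5-16) with the stated parameters and runs an uncontrolled Euler simulation from x(0) = (−π/60, 0); it is a verification sketch only, not the report's simulation setup.

```python
import numpy as np

G, MC, M, L = 9.8, 1.0, 0.1, 0.5     # gravity, cart mass, pole mass, pole half-length

def f(x1, x2):
    """Drift term of (5-16)."""
    num = G * np.sin(x1) - M * L * x2**2 * np.cos(x1) * np.sin(x1) / (MC + M)
    den = L * (4.0 / 3.0 - M * np.cos(x1)**2 / (MC + M))
    return num / den

def g(x1, x2):
    """Input gain of (5-16); strictly positive near upright, as required in 5.1."""
    den = L * (4.0 / 3.0 - M * np.cos(x1)**2 / (MC + M))
    return (np.cos(x1) / (MC + M)) / den

# Euler simulation from x(0) = (-pi/60, 0) with no control input (u = 0)
x1, x2, dt = -np.pi / 60, 0.0, 0.001
for _ in range(1000):                 # simulate 1 second
    x1, x2 = x1 + dt * x2, x2 + dt * (f(x1, x2) + g(x1, x2) * 0.0)
print(x1, x2)                         # the uncontrolled pendulum falls away from 0
```

The run confirms the open-loop plant is unstable about the upright equilibrium, which is why the supervisory control and the SGA-based adaptive law are needed to keep y tracking y_m.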