
a ∈ A(i). Then we have p(i−1 | i, 1) = 1 for all i ≥ 1, p(1 | 0, 2) = 1, and p(i | 0, 1) = p̃(i) for all i ∈ S; moreover, c(i, 1) = 1 for all i ≥ 1, c(0, 2) = 1, and c(0, 1) = 0. Obviously, this discrete-time MDP model is the same as that in [4, Prop. 3.3]; therefore, (5.16) contradicts [4, Prop. 3.3].

Remark 5.2: This example shows that the conditions guaranteeing the existence of a solution to the optimality inequality do not imply the existence of a solution to the optimality equation.

REFERENCES

[1] W. J. Anderson, Continuous-Time Markov Chains. New York: Springer-Verlag, 1991.
[2] A. Arapostathis, V. Borkar, E. Fernández-Gaucherand, M. Ghosh, and S. Marcus, "Discrete-time controlled Markov processes with average cost criterion: A survey," SIAM J. Control Optim., vol. 31, pp. 282–344, 1993.
[3] J. Bather, "Optimal stationary policies for denumerable Markov chains in continuous time," Adv. Appl. Prob., vol. 8, pp. 144–158, 1976.
[4] R. Cavazos-Cadena, "A counterexample on the optimality equation in Markov decision chains with the average cost criterion," Syst. Control Lett., vol. 16, pp. 387–392, 1991.
[5] K. L. Chung, Markov Chains with Stationary Transition Probabilities. Berlin, Germany: Springer-Verlag, 1960.
[6] E. B. Dynkin and A. A. Yushkevich, Controlled Markov Processes. New York: Springer-Verlag, 1979.
[7] E. A. Feinberg, "Continuous-time discounted jump Markov decision processes: A discrete-event approach," preprint, 1998.
[8] X. P. Guo and O. Hernández-Lerma, "The optimal control of continuous-time Markov chains II: New optimality conditions," Department of Mathematics, CINVESTAV-IPN, México, Rep. 294, 2001.
[9] X. P. Guo and W. P. Zhu, "Optimality conditions for continuous-time Markov decision processes with average cost criterion," in Markov Processes and Controlled Markov Chains, Z. T. Hou, J. A. Filar, and A. Y. Chen, Eds. Dordrecht, The Netherlands: Kluwer, to be published.
[10] O. Hernández-Lerma and J. B. Lasserre, Further Topics on Discrete-Time Markov Control Processes. New York: Springer-Verlag, 1999.
[11] Z. T. Hou and X. P. Guo, Markov Decision Processes (in Chinese). Changsha, China: Science and Technology Press of Hunan, 1998.
[12] Q. Y. Hu, "Discounted and average MDPs with unbounded rewards: New conditions," J. Math. Anal. Appl., vol. 171, pp. 111–124, 1992.
[13] ——, "CTMDP and its relationship with DTMDP," Chinese Sci. Bull., vol. 35, pp. 408–410, 1990.
[14] P. Kakumanu, "Relation between continuous and discrete time Markov decision problems," Naval Res. Logist. Quart., vol. 24, pp. 431–439, 1977.
[15] ——, "Nondiscounted continuous-time Markov decision processes with countable state space," SIAM J. Control, vol. 10, pp. 210–220, 1972.
[16] K. Liu, "Theory and applications of Markov decision processes and their perturbations," Ph.D. dissertation, School of Mathematics, University of South Australia, Sept. 1997.
[17] M. L. Puterman, Markov Decision Processes. New York: Wiley, 1994.
[18] L. I. Sennott, Stochastic Dynamic Programming and the Control of Queueing Systems. New York: Wiley, 1999.
[19] ——, "A new condition for the existence of optimum stationary policies in average cost Markov decision processes," Oper. Res. Lett., vol. 5, pp. 17–23, 1986.
[20] ——, "Average cost semi-Markov decision processes and the control of queueing systems," Prob. Eng. Inform. Sci., vol. 3, pp. 247–272, 1989.
[21] R. F. Serfozo, "An equivalence between continuous and discrete time MDPs," Oper. Res., vol. 27, pp. 616–620, 1979.
[22] J. S. Song, "Continuous-time Markov decision programming with nonuniformly bounded transition rates" (in Chinese), Sci. Sin., vol. 12, pp. 1258–1267, 1987.
[23] D. V. Widder, The Laplace Transform. Princeton, NJ: Princeton Univ. Press, 1946.
[24] A. A. Yushkevich and E. A. Feinberg, "On homogeneous Markov models with continuous time and finite or countable state space," Theory Prob. Appl., vol. 24, pp. 156–161, 1979.
[25] S. H. Zheng, "Continuous-time Markov decision programming with average reward criterion and unbounded reward rate," Acta Math. Appl. Sinica, vol. 7, pp. 6–16, 1991.

A General Invariance Principle for Nonlinear Time-Varying Systems and Its Applications

Ti-Chung Lee, Der-Cherng Liaw, and Bor-Sen Chen

Abstract—A general invariance principle, from the output-to-state point of view, is proposed for the dynamical analysis of nonlinear time-varying systems. This is achieved by the construction of a simple and intuitive criterion using an integral inequality of the output function and modified detectability conditions. The proposed scheme can be viewed as an extension of the integral invariance principle (Byrnes and Martin, 1995) for time-invariant systems to time-varying systems. Such an extension is nontrivial and can be used in various research areas such as adaptive control, tracking control, and the control of driftless systems. An application to global tracking control of four-wheeled mobile robots is given to demonstrate the feasibility and validity of the proposed approach.

Index Terms—Invariance principle, mobile robots, time-varying systems.

I. INTRODUCTION

Since the 1960s, Lyapunov-function-based approaches have been well developed for the analysis of system stability (see [1]–[5], [7], [8], and [11]–[15]). Among these, a very useful criterion, called the "LaSalle invariance principle," was proposed in [7] and has been applied and extended to the study of many diverse areas in the recent literature. For instance, Byrnes and Martin [4] proposed an integral invariance principle to study the stability of nonlinear time-invariant systems. However, neither the LaSalle invariance principle nor the integral invariance principle can be applied to time-varying systems directly. This is due to the fact that the ω-limit set is not an invariant set for general time-varying systems (see, e.g., [5, p. 193]). Since invariance principles have proved to be important and useful in the analysis of system dynamics, the extension of these principles to general time-varying systems has attracted much attention (e.g., [1], [2], [7], [12]). In [12], results for some classes of time-varying systems, such as almost periodic systems and asymptotically autonomous systems, were obtained using the concept of a pseudo-invariant set. However, no simple method was given for the determination of the pseudo-invariant set. Instead of using the concept of invariance principles, two interesting results employing the concept of "limit equations" [2] and the direct Lyapunov approach [1] were obtained for time-varying systems. Although the stability criteria proposed in the previous literature can be used for some time-varying systems, these approaches are, in general, hard to check. The development of simple and easily checked stability criteria remains an important issue.

In this note, a simple stability criterion for time-varying systems is proposed. Instead of using the existence of the ω-limit set, the concept of limit systems is defined for time-varying systems. Two detectability conditions will be given in terms of limit systems. Based on these

Manuscript received September 13, 1999; revised August 24, 2000 and April 25, 2001. Recommended by Associate Editor G. Bastin. This work was supported by the National Science Council, Taiwan, R.O.C., under Contracts NSC-89-2212-E159-002 and NSC 89-2612-E009-003.

T.-C. Lee is with the Department of Electrical Engineering, Ming Hsin Institute of Technology, Hsinchu 304, Taiwan, R.O.C. (e-mail: tc1120@ms19.hinet.net).

D.-C. Liaw is with the Department of Electrical and Control Engineering, National Chiao Tung University, Hsinchu 300, Taiwan, R.O.C. (e-mail: dcliaw@cc.nctu.edu.tw).

B.-S. Chen is with the Department of Electrical Engineering, National Tsing-Hua University, Hsinchu 300, Taiwan, R.O.C. (e-mail: bschen@moti.ee.nthu.edu.tw).

Publisher Item Identifier S 0018-9286(01)11094-9.


conditions and an integral inequality of the output function, bounded solutions of the system dynamics are shown to approach a pre-specified equilibrium set. The relationships between the proposed scheme and the LaSalle invariance principle, as well as the integral invariance principle, are also studied. Finally, we revisit the tracking control problem for a 4-wheeled mobile robot studied in [9]. In that paper, it was shown that the error model of the tracking problem is feedback-equivalent to a passive time-varying system; however, a complete stability analysis was not given. In this study, a novel stability analysis of the 4-wheeled mobile robot system will be presented based on the concept of limit systems. Through this application, it can be seen that, just as the LaSalle invariance principle is applicable to the stability study of time-invariant systems, the approach presented in this note is applicable to the stability analysis of time-varying systems.

II. PRELIMINARIES

In this section, we give an example to illustrate that the LaSalle invariance principle and the integral invariance principle cannot be applied directly to time-varying systems for determining system stability. Then, the definition of limit systems and two modified detectability conditions are presented, which will be used in the next section for the derivation of the main result. In this note, |v| = √(v₁² + v₂² + ··· + v_n²) for all v = (v₁, v₂, ..., v_n) ∈ ℝⁿ; for a subset Ω ⊆ ℝⁿ, the distance function is defined as |v|_Ω = inf{|w − v| : w ∈ Ω}; and a function x(t): [t₀, ∞) → X ⊆ ℝⁿ is said to be bounded if x(t) lies within a compact subset of X.
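As a small numerical aside (illustrative helper code of ours, not from the note), the point-to-set distance |v|_Ω defined above can be approximated by sampling the set Ω on a grid:

```python
# Minimal sketch: the Euclidean norm |v| and the point-to-set distance
# |v|_Omega = inf{|w - v| : w in Omega}, with Omega replaced by a finite
# sample of points (an approximation, adequate for illustration only).
import numpy as np

def dist_to_set(v, omega_samples):
    """Approximate distance from the point v to the sampled set Omega."""
    v = np.asarray(v, dtype=float)
    omega_samples = np.atleast_2d(np.asarray(omega_samples, dtype=float))
    return float(np.min(np.linalg.norm(omega_samples - v, axis=1)))

# Example: Omega = {(x1, 0) : x1 in R}, the set used again in Section III,
# sampled on a grid of x1 values.
omega = np.array([[x1, 0.0] for x1 in np.linspace(-10.0, 10.0, 2001)])
print(dist_to_set([1.0, 0.5], omega))   # ~0.5, the distance to the x1-axis
```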

Example 1: Consider the following system:

ẋ₁ = e^{−2t} x₂
ẋ₂ = −e^{−2t} x₁ − x₂
y = x₂                                                      (1)

where x₁, x₂, y ∈ ℝ. Choose V(x₁, x₂) = (1/2)(x₁² + x₂²) as a Lyapunov function candidate. Taking the time derivative of V along the state trajectory of system (1), we have V̇(x₁, x₂) = −x₂² = −y² ≤ 0. It is clear that ∫₀^∞ |y(t)|² dt = −∫₀^∞ V̇(x₁, x₂) dt < ∞. From (1), the set S = {(x₁, x₂) | V̇(x₁, x₂) = 0} contains only the trivial equilibrium solution. If the LaSalle invariance principle or the integral invariance principle were applied to study the stability of system (1), one would conclude that lim_{t→∞} x₁(t) = 0 and lim_{t→∞} x₂(t) = 0. However, we now check that lim_{t→∞} x₁(t) ≠ 0 for any solution (x₁(t), x₂(t)) starting from x₁(0) ≠ 0 and x₂(0) = 0. Since V̇ ≤ 0 and x₂(0) = 0, we have V(x₁, x₂) = (1/2)[x₁²(t) + x₂²(t)] ≤ V(x₁(0), x₂(0)) = (1/2)x₁²(0). This implies that |x₁(t)| ≤ |x₁(0)| for all t ≥ 0. Moreover, the second differential equation of system (1) gives |x₂(t)| = |e^{−t} ∫₀^t e^{−τ} x₁(τ) dτ| ≤ |x₁(0)| for all t ≥ 0. By solving the first differential equation of system (1), we then have

|x₁(t)| = |x₁(0) + ∫₀^t e^{−2τ} x₂(τ) dτ| ≥ (1/2)|x₁(0)|,  for all t ≥ 0.

This implies that lim_{t→∞} x₁(t) ≠ 0. Thus, both the LaSalle invariance principle and the integral invariance principle need a modification for determining the stability of time-varying systems.

Now, we present the definition of limit systems, which will be applied in Section III to the construction of an invariance principle for time-varying systems. In this note, denote by X an open subset of ℝⁿ. Consider a class of systems given by

ẋ = f(a(t), x)                                              (2)

y = h(b(t), x)                                              (3)

where x ∈ X, y ∈ ℝᵐ, f(a, x) ∈ ℝⁿ, and h(b, x) ∈ ℝᵐ, with a(t) and b(t) being an ℝᵖ-valued function and an ℝ^q-valued function defined on [0, ∞), respectively. Here, both f(a, x) and h(b, x) are assumed to be continuous, with a(t) and b(t) being uniformly continuous and bounded vector functions. Note that many systems take the form of (2)–(3); for instance, linear time-varying systems and the tracking control of autonomous systems can be put into this extended form. Since invariance principles characterize the limit behavior of a bounded solution, it is intuitive to consider the dynamics of the "limit system" of a given system, that is, the behavior of the system as t → ∞. The definition of a limit system will be given below; first, we present the definition of a limit function.
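Before that, a small illustration (our own sketch, not part of the note) of the claim that linear time-varying systems fit the form (2)–(3): take a(t) and b(t) to be the stacked entries of A(t) and C(t).

```python
# Sketch: a linear time-varying system x' = A(t) x, y = C(t) x written in the
# form (2)-(3) with a(t) = vec(A(t)), b(t) = vec(C(t)).  A(t) below is the
# matrix of Example 1, so a(t) is uniformly continuous and bounded.
import numpy as np

n, m = 2, 1

def A(t):
    return np.array([[0.0, np.exp(-2.0 * t)],
                     [-np.exp(-2.0 * t), -1.0]])

def C(t):
    return np.array([[0.0, 1.0]])

a = lambda t: A(t).reshape(-1)                 # a(t) takes values in R^(n*n)
b = lambda t: C(t).reshape(-1)                 # b(t) takes values in R^(m*n)
f = lambda a_val, x: a_val.reshape(n, n) @ x   # f(a, x) = A x
h = lambda b_val, x: b_val.reshape(m, n) @ x   # h(b, x) = C x

x = np.array([1.0, 0.0])
print(f(a(0.0), x), h(b(0.0), x))              # same as A(0) @ x and C(0) @ x
```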

Definition 1: Let c(t): [0, ∞) → ℝᵖ, with p ∈ ℕ, be any continuous function. A sequence σ = {t_n} of real numbers with lim_{n→∞} t_n = ∞ is said to be an admissible sequence associated with c(t) if there exists a continuous function c_σ(t) defined on [0, ∞) such that {c(t + t_n)} converges uniformly to c_σ(t) on every compact subset of [0, ∞). The function c_σ(t) is called a limit function of c(t) and is uniquely defined.

Denote by Λ(c) the set of all admissible sequences associated with c(t). It is not difficult to check that every subsequence of an admissible sequence is also an admissible sequence and that all these subsequences provide the same limit function of c(t). Now, we are ready to give the definition of a limit system.

Definition 2: Let σ be an admissible sequence associated with both a(t) and b(t) [i.e., σ ∈ Λ(a) ∩ Λ(b)]. Then the associated system

ẋ = f(a_σ(t), x)                                            (4)

y = h(b_σ(t), x)                                            (5)

is called a "limit system" of system (2)–(3), where a_σ(t) and b_σ(t) denote the limit functions of a(t) and b(t) determined by the sequence σ, respectively.

As an example, by virtue of lim_{t→∞} e^{−2t} = 0, a limit system of the system in Example 1 can be described by

ẋ₁ = 0
ẋ₂ = −x₂

and

y = x₂.                                                     (6)
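The following numerical sketch (our own code, using scipy; the horizon and step size are arbitrary choices) illustrates Example 1 and the limit system (6): the output y = x₂ tends to zero, while x₁ settles to a nonzero constant and never drops below |x₁(0)|/2, in agreement with the analysis above and with the first equation of (6).

```python
# Simulate the time-varying system (1) of Example 1 from x1(0)=1, x2(0)=0.
import numpy as np
from scipy.integrate import solve_ivp

def system_1(t, x):
    x1, x2 = x
    return [np.exp(-2.0 * t) * x2,            # x1' = e^{-2t} x2
            -np.exp(-2.0 * t) * x1 - x2]      # x2' = -e^{-2t} x1 - x2

sol = solve_ivp(system_1, (0.0, 50.0), [1.0, 0.0], max_step=0.01)
x1, x2 = sol.y
print("x1(T) =", x1[-1])                      # settles near a nonzero constant
print("x2(T) =", x2[-1])                      # decays to 0 (the output y = x2)
print("min |x1(t)| =", np.min(np.abs(x1)))    # stays >= |x1(0)|/2 = 0.5
```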

A condition to guarantee the existence of limit functions is given as follows.

Lemma 1: Let c(t): [0, ∞) → ℝᵖ, with p ∈ ℕ, be a uniformly continuous and bounded function, and let {t_n} be a sequence approaching infinity. Then, there exists a subsequence {t_{n_k}} of {t_n} such that {c(t + t_{n_k})} converges uniformly to a limit function c̄(t) on every compact subset of [0, ∞).

Proof: Denote c_n(t) = c(t + t_n). Then, by the assumption, the sequence {c_n(t)} is totally bounded and equicontinuous. Thus, according to the Arzelà–Ascoli lemma (see [6]), there exists a subsequence {n_k} of {n} such that {c_{n_k}(t)} converges uniformly to a continuous function c̄(t) on every compact subset of [0, ∞). This completes the proof.
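A brief numerical illustration (our own sketch) of Definition 1 and Lemma 1: for the bounded, uniformly continuous signal a(t) = e^{−2t} of Example 1, the shifted functions a(t + t_n) converge uniformly on compact sets to the zero function, which is precisely the limit function underlying the limit system (6).

```python
# Check uniform convergence of c(t + t_n) on a compact set for c(t) = e^{-2t}.
import numpy as np

t = np.linspace(0.0, 10.0, 1001)              # the compact set [0, 10]
c = lambda s: np.exp(-2.0 * s)                # a(t) from Example 1

for t_n in [5.0, 10.0, 20.0, 40.0]:           # a sequence approaching infinity
    sup_err = np.max(np.abs(c(t + t_n) - 0.0))
    print(f"t_n = {t_n:5.1f}:  sup |c(t + t_n) - 0| = {sup_err:.2e}")
# The suprema tend to 0, so the limit function is identically zero and every
# sequence t_n -> infinity is admissible for this signal.
```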

As motivated by Lemma 1, we can show that the set Λ(a) ∩ Λ(b) is nonempty. Let {t_n} be any sequence approaching infinity. Then, from Lemma 1 and the assumptions on system (2)–(3), there exists a subsequence σ = {t_{n_k}} of {t_n} such that σ ∈ Λ(a). Similarly, there exists a subsequence σ′ of σ such that σ′ ∈ Λ(b); since every subsequence of an admissible sequence is also admissible, it is clear that σ′ ∈ Λ(a) ∩ Λ(b). Thus, by Definition 2, Lemma 1 guarantees the existence of limit systems. Throughout this note, for simplicity, any sequence σ ∈ Λ(a) ∩ Λ(b) is said to be an admissible sequence of system (2)–(3).


It is known (see, e.g., [3]) that zero-state detectability can be used in time-invariant systems to determine system stability. In the following, two zero-state detectability conditions for limit systems will be given. In the remainder of this note, denote by Ω a subset of X and by φ(t₀, t, x₀) a bounded solution of (2) starting from φ(t₀, t₀, x₀) = x₀ at t = t₀ and defined for all t ≥ t₀ ≥ 0. We then have the following two detectability conditions with respect to the trajectory φ:

(C1): System (2)–(3) is weakly detectable w.r.t. Ω. That is, there exists an admissible sequence σ of system (2)–(3) such that every solution x(t) of the limit system (4), starting at t = 0, approaches the given set Ω, i.e., lim_{t→∞} |x(t)|_Ω = 0, whenever x(t) lies within the ω-limit set of φ and satisfies h(b_σ(t), x(t)) ≡ 0.

(C2): System (2)–(3) is uniformly detectable w.r.t. Ω. That is, for every positive constant ε, there exists a positive constant T such that, for every admissible sequence σ of system (2)–(3), every solution x(t) of the limit system (4), starting at t = 0, satisfies |x(t)|_Ω < ε for all t ≥ T whenever x(t) lies within the ω-limit set of φ and h(b_σ(t), x(t)) ≡ 0.

Remark 1: For time-invariant systems, the zero-state detectability only concerns the set Ω = {0}. Moreover, every limit system of a time-invariant system is the same as the original system. Thus, it is clear that conditions (C1) and (C2) for such a case are, respectively, implied by the zero-state detectability condition and the zero-state observability condition introduced in [3].

III. MAIN RESULTS

In this section, a general invariance principle will be proposed and used to guarantee the attractivity of an equilibrium set using the modified detectability conditions (C1)–(C2) given in Section II. An application to the tracking control problem for mobile robots is also presented to demonstrate the use of the main results. Details are given as follows.

A. A Modified Invariance Principle

Before deriving the modified invariance principle, for simplicity, we impose the following hypothesis on a bounded solution φ(t₀, t, x₀) of (2).

Hypothesis 1: Suppose φ(t₀, t, x₀) satisfies the inequality

∫_{t₀}^{∞} w(h(b(t), φ(t₀, t, x₀))) dt < ∞                   (7)

for the output map (3), where w is a positive definite continuous function with lim_{|y|→∞} w(y) = ∞.

Since φ̇ = f(a(t), φ(t₀, t, x₀)) is bounded, φ(t₀, t, x₀) is uniformly continuous. Let h̃(t) = h(b(t), φ(t₀, t, x₀)) for all t ≥ t₀. Then, w(h̃(t)) is also uniformly continuous. From Hypothesis 1 and Barbalat's lemma [5], we have lim_{t→∞} w(h̃(t)) = 0. This implies that lim_{t→∞} h̃(t) = 0. We then have the next result.

Theorem 1: Suppose Hypothesis 1 holds. Then the following two results hold for system (2)–(3):

i) The set Ω contains an ω-limit point of φ(t₀, t, x₀) if condition (C1) holds.

ii) Condition (C2) implies that lim_{t→∞} |φ(t₀, t, x₀)|_Ω = 0.

Proof: First, we prove i) by contradiction. Suppose statement i) is false. Then, by the definition of an ω-limit point (see [5]), there exist T > 0 and ε > 0 such that |φ(t₀, t + t₀, x₀)|_Ω ≥ ε for all t ≥ T. Let σ be an admissible sequence of system (2)–(3) such that the conclusion of (C1) holds. Denote by a_σ(t) and b_σ(t) the corresponding limit functions of a(t) and b(t), respectively. Using a proof similar to that of Lemma 1, together with the boundedness and uniform continuity of φ, there exists a subsequence {t_{n_k}} of σ such that {φ(t₀, t + t_{n_k}, x₀)} converges uniformly to a continuous function x(t) on every compact subset of [0, ∞). Note that {a(t + t_{n_k})} and {b(t + t_{n_k})} also converge uniformly to the limit functions a_σ(t) and b_σ(t) on every compact subset of [0, ∞), since every subsequence of an admissible sequence is also an admissible sequence and yields the same limit function. Observe that φ̇(t₀, t + t_{n_k}, x₀) = f(a(t + t_{n_k}), φ(t₀, t + t_{n_k}, x₀)) and that the sequences of functions relating to φ and f appearing in these differential equations are uniformly convergent on every compact subset of [0, ∞). We can then take the limit of the differential equations (see [6]) and hence obtain ẋ(t) = f(a_σ(t), x(t)). Moreover, by the fact that t + t_{n_k} → ∞ and lim_{t→∞} h̃(t) = 0, we have h(b_σ(t), x(t)) = lim_{k→∞} h(b(t + t_{n_k}), φ(t₀, t + t_{n_k}, x₀)) = 0 for each t ≥ 0. Note that x(t) lies within the ω-limit set of φ since x(t) = lim_{k→∞} φ(t₀, t + t_{n_k}, x₀). Thus, x(t) is a solution of the limit system (4)–(5), starting at t = 0, that lies within the ω-limit set of φ with h(b_σ(t), x(t)) ≡ 0. From condition (C1), we have lim_{t→∞} |x(t)|_Ω = 0, which contradicts the presumption that |x(t)|_Ω = lim_{k→∞} |φ(t₀, t + t_{n_k}, x₀)|_Ω ≥ ε, since t + t_{n_k} ≥ T + t₀ for each t and large enough k. The result of i) is hence proved.

Similarly, we next prove ii) by contradiction. Suppose statement ii) is false. Then, there exist an ε > 0 and a sequence {t_n} approaching infinity such that |φ(t₀, t_n, x₀)|_Ω ≥ ε. Let T be the positive constant given in condition (C2), which depends only on ε. Using an argument similar to that in the proof of Lemma 1, it is concluded that there exists a subsequence σ = {t_{n_k} − T} of {t_n − T} such that the three sequences {a(t + t_{n_k} − T)}, {b(t + t_{n_k} − T)}, and {φ(t₀, t + t_{n_k} − T, x₀)} converge uniformly to their limit functions a_σ(t), b_σ(t), and x(t), respectively. We then have ẋ(t) = f(a_σ(t), x(t)) and h(b_σ(t), x(t)) ≡ 0 using the fact that lim_{t→∞} h̃(t) = 0, along with a proof similar to that of i). Thus, x(t) is a solution of the limit system, starting at t = 0, that lies within the ω-limit set of φ with h(b_σ(t), x(t)) ≡ 0. By condition (C2), we have |x(T)|_Ω < ε. This contradicts the fact that |x(T)|_Ω = lim_{k→∞} |φ(t₀, t_{n_k}, x₀)|_Ω ≥ ε. The proof of ii) is then completed.

Remark 2: The function w given in Hypothesis 1 is usually taken as w(y) = |y|^p for 0 < p < ∞ (see [4]). For such a choice, w is positive definite and lim_{|y|→∞} w(y) = ∞.

Now, we re-examine the analysis of the system given in Example 1 to demonstrate a possible application of Theorem 1. For this system, every solution is bounded since the Lyapunov function V is proper and satisfies V̇ ≤ 0. Moreover, Hypothesis 1 holds for any solution by choosing w(y) = |y|². The corresponding limit system is given in (6). If we take Ω = {(x₁, 0) | x₁ ∈ ℝ}, condition (C2) also holds; then, by Theorem 1, x₂(t) → 0. However, if we take Ω = {(0, 0)}, condition (C1) does not hold for any solution starting from the initial conditions x₁(0) ≠ 0 and x₂(0) = 0. The reason is that it was shown in Section II that |x₁(t)| ≥ (1/2)|x₁(0)| for all t ≥ 0. Thus, every solution (x₁(t), x₂(t)) of (6), lying within the ω-limit set of the original solution and satisfying x₂(t) ≡ 0, will have |x₁(t)| ≥ (1/2)|x₁(0)| for all t ≥ 0. It is observed from this example that conditions (C1) and (C2) can be used to predict the dynamical behavior of a time-varying system better than the prediction obtained by treating it as a time-invariant system.

Remark 3: The concept of limit equations, similar to that in (4), was first introduced by Artstein [2]. The goal of [2] is to give a sufficient and necessary condition, in terms of limit equations, to guarantee uniform asymptotic stability of the origin. The result is very interesting; however, checking the stability of the limit equations is as difficult as checking that of the original system for many time-varying systems. In contrast, in a spirit similar to the LaSalle invariance principle, the order of the system constrained on the zero-locus of the limit functions of the output map can be effectively reduced by introducing the concepts of limit systems and limit functions of the output map as presented above. An interesting example for robot systems will be given in the next subsection to illustrate this point of view.

For general applications, we have Ω = {0}, and uniform Lyapunov stability is usually attainable a priori. Under this condition, it is easy to check that the attractivity of the origin is implied by the origin being an ω-limit point. The next corollary follows readily from Theorem 1.

Corollary 1: Let Ω = {0} and suppose Hypothesis 1 holds. Then, φ(t₀, t, x₀) → 0 as t → ∞ if the origin is uniformly Lyapunov stable and condition (C1) holds.

Note that several well-known invariance principles for time-invariant systems can be deduced from Theorem 1. For instance, let w(y) = |y|^p and let Ω be the largest invariant subset of the zero-locus of the output function for a time-invariant system. It is not difficult to check that both Hypothesis 1 and condition (C2) hold. The next corollary follows readily from Theorem 1.

Corollary 2 (Integral Invariance Principle [4]): Consider a time-invariant system in the form of (2)–(3), i.e., a(t) and b(t) are both constant functions. Suppose ∫_{t₀}^{∞} |h(b(t), φ(t₀, t, x₀))|^p dt < ∞ for some 0 < p < ∞. Then φ(t₀, t, x₀) approaches the largest invariant subset of the zero-locus of the output function.

It was shown in [4] that the integral invariance principle reduces to the LaSalle invariance principle by choosing the time derivative of the Lyapunov function as a virtual output. The LaSalle invariance principle is hence also implied by Theorem 1.

Although in the discussions above we have restricted our attention to systems having the form (2)–(3), similar results can be obtained for more general time-varying systems. For instance, consider a system consisting of asymptotically almost periodic or periodic functions; see [12] for the definitions. Limit systems and conditions (C1)–(C2) for these systems can be defined similarly, and Theorem 1 remains true under the new conditions.

B. Application to Global Tracking Control of 4-Wheeled Mobile Robots

In our previous paper [9], the global tracking control problem of 4-wheeled mobile robots was studied by constructing a simple tracking controller; however, a complete stability analysis was not given. In the following, Corollary 1 will be applied to the stability study of the mobile robots. Before proceeding, let

φ(s) = (1 − cos s)/s  and  ψ(s) = (sin s)/s

for s ≠ 0. Also, let φ(0) = 0 and ψ(0) = 1. It is obvious that both φ(s) and ψ(s) are smooth functions. The error model of the tracking system can then be recalled from [9] as

ẋ_e = v_r(t) f(ω_r(t), x_e) + G(ω_r(t), x_e) u_e             (8)

where x_e = (x₁, x₂, x₃, x₄)^T ∈ ℝ⁴, u_e ∈ ℝ², and v_r: [0, ∞) → ℝ and ω_r: [0, ∞) → ℝ are two uniformly continuous and bounded functions. Let θ = x₄ − φ(x₃)x₁ − ψ(x₃)x₂ + ω_r(t). Then the functions f and G in (8) can be described as follows:

f(ω_r, x_e) = ( x₂θ + x₃φ(x₃),  −x₁θ + x₃ψ(x₃),  −x₁φ(x₃) − x₂ψ(x₃) + x₄,  −x₃ )^T

G(ω_r, x_e) = [ 1 + x₂θ   0
               −x₁θ       0
               θ          0
               0          1 ].                               (9)
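The following sketch (our own code, written against the reconstruction of (9) given above; the names phi, psi, theta, f, G mirror that reconstruction and are not taken from [9]) numerically checks the structure used in the next paragraph: x_e^T f vanishes, so (∂V/∂x_e) f ≤ 0 for V = (1/2)|x_e|², and G^T x_e = (x₁ + x₃θ, x₄)^T.

```python
# Error-model vector fields of (9) (as reconstructed above) and a spot check
# of the passivity structure: x_e . f = 0 and G^T x_e = (x1 + x3*theta, x4).
import numpy as np

def phi(s):   # phi(s) = (1 - cos s)/s, extended by phi(0) = 0
    return 0.0 if abs(s) < 1e-9 else (1.0 - np.cos(s)) / s

def psi(s):   # psi(s) = sin(s)/s, extended by psi(0) = 1
    return 1.0 if abs(s) < 1e-9 else np.sin(s) / s

def theta(xe, w_r):
    x1, x2, x3, x4 = xe
    return x4 - phi(x3) * x1 - psi(x3) * x2 + w_r

def f(w_r, xe):
    x1, x2, x3, x4 = xe
    th = theta(xe, w_r)
    return np.array([x2 * th + x3 * phi(x3),
                     -x1 * th + x3 * psi(x3),
                     -x1 * phi(x3) - x2 * psi(x3) + x4,
                     -x3])

def G(w_r, xe):
    x1, x2, x3, x4 = xe
    th = theta(xe, w_r)
    return np.array([[1.0 + x2 * th, 0.0],
                     [-x1 * th,      0.0],
                     [th,            0.0],
                     [0.0,           1.0]])

xe = np.random.default_rng(0).standard_normal(4)
w_r = 0.7
print(xe @ f(w_r, xe))      # ~0: (dV/dx_e) f = x_e . f vanishes identically
print(G(w_r, xe).T @ xe)    # equals (x1 + x3*theta, x4), the virtual output below
```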

Choose V = (1/2)|x_e|² as a Lyapunov function candidate for system (8). It is not difficult to check that

(∂V/∂x_e) f(ω_r, x_e) ≤ 0.

Let

y_e = ((∂V/∂x_e) G)^T

be a virtual output map. Then we have V̇ = y_e^T u_e. This implies that system (8) is passive. A simple (output feedback) controller can be chosen as

u_e = −k y_e                                                 (10)

for any k > 0. We hence have V̇ = −k|y_e|² ≤ 0, which implies that Hypothesis 1 holds by choosing w(y_e) = k|y_e|². It is clear that V is a positive definite and proper function. Thus, solutions of system (8) are globally uniformly bounded and the origin is uniformly Lyapunov stable. Under this Lyapunov stability condition, we need to verify that the origin is a common ω-limit point of every solution in order to establish the attractivity of the origin. Before checking the attractivity of the origin, we impose the following hypothesis.

Hypothesis 2: Suppose v_r(t) in system (8) satisfies the inequality

lim sup_{t→∞} |v_r(t)| > 0.                                  (11)

Note that Hypothesis 2 can be regarded as a "persistent excitation" condition. From Hypothesis 2, there exists a sequence {t_n} with t_n → ∞ such that lim_{n→∞} |v_r(t_n)| ≠ 0. By Lemma 1, there exists a subsequence {t_{n_k}} of {t_n} such that the two sequences {v_r(t + t_{n_k})} and {ω_r(t + t_{n_k})} converge uniformly to limit functions v_{rσ}(t) and ω_{rσ}(t), respectively, on every compact subset of [0, ∞). Then, σ = {t_{n_k}} is an admissible sequence of the closed-loop system (8) with the control

u_e = −k ((∂V/∂x_e) G)^T.

The associated limit system for system (8) can then be obtained as

ẋ_e = v_{rσ}(t) f(ω_{rσ}(t), x_e) − k G(ω_{rσ}(t), x_e) y_e     (12)
y_e = ( x₁ + x₃ θ(x_e, ω_{rσ}(t)),  x₄ )^T.                     (13)

Let x_e(t) = (x₁(t), x₂(t), x₃(t), x₄(t)) be any solution of (12), starting at t = 0, with y_e ≡ 0. Then we have x₄(t) ≡ 0, and system (12) can be rewritten as

ẋ_e(t) = v_{rσ}(t) f(ω_{rσ}(t), x_e(t)).                        (14)

Note that |v_{rσ}(0)| = lim_{k→∞} |v_r(t_{n_k})| ≠ 0. Thus, by the continuity of v_{rσ}(t), there exists a positive constant δ such that v_{rσ}(t) ≠ 0 for all t ∈ [0, δ). From the fourth state equation of (14), we have ẋ₄ = −v_{rσ}(t) x₃(t). Since x₄(t) ≡ 0, this leads to x₃(t) = 0 for all t ∈ [0, δ). It is not difficult to check from (13) that x₁(t) = 0 for all t ∈ [0, δ) when y_e ≡ 0. Similarly, by virtue of ẋ₃ = v_{rσ}(−x₁φ(x₃) − x₂ψ(x₃) + x₄) from the third equation of (14) and ψ(0) = 1, we have x₂(t) = 0 for all t ∈ [0, δ). To conclude the discussion above, we then have x_e(t) = 0 for all t ∈ [0, δ). Note that (∂V/∂x_e) f(ω_{rσ}(t), x_e) ≤ 0. From (14), this implies V̇(x_e(t)) ≤ 0. Thus, V(x_e(t)) = V(x_e(0)) = 0 for all t ≥ 0. By the positive definiteness of V, we have x_e(t) ≡ 0.

Thus, condition (C1) holds. According to Corollary 1, we then have the next theorem.

Theorem 2: Under Hypothesis 2, the origin of system (8) is globally asymptotically stabilizable by the control

u_e = −k ((∂V/∂x_e) G)^T.
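As a closing illustration, a closed-loop simulation sketch (again built on the reconstruction of (8)–(10) above; the reference signals v_r, ω_r, the gain k, and the initial error are illustrative choices of ours, not the note's) of the behavior asserted by Theorem 2:

```python
# Closed-loop error dynamics (8) with the controller u_e = -k*y_e of (10) and a
# persistently exciting reference velocity v_r (Hypothesis 2).  The tracking
# error should decay toward the origin, as Theorem 2 asserts; the rate depends
# on k and on the chosen reference signals.
import numpy as np
from scipy.integrate import solve_ivp

phi = lambda s: 0.0 if abs(s) < 1e-9 else (1.0 - np.cos(s)) / s   # phi(0) = 0
psi = lambda s: 1.0 if abs(s) < 1e-9 else np.sin(s) / s           # psi(0) = 1
v_r = lambda t: 1.0 + 0.5 * np.sin(0.3 * t)   # lim sup |v_r| > 0 (Hypothesis 2)
w_r = lambda t: 0.4 * np.cos(0.2 * t)         # bounded, uniformly continuous

def closed_loop(t, xe, k=2.0):
    x1, x2, x3, x4 = xe
    th = x4 - phi(x3) * x1 - psi(x3) * x2 + w_r(t)
    f = np.array([x2 * th + x3 * phi(x3),
                  -x1 * th + x3 * psi(x3),
                  -x1 * phi(x3) - x2 * psi(x3) + x4,
                  -x3])
    G = np.array([[1.0 + x2 * th, 0.0],
                  [-x1 * th,      0.0],
                  [th,            0.0],
                  [0.0,           1.0]])
    ye = G.T @ xe                 # virtual output y_e = ((dV/dx_e) G)^T
    ue = -k * ye                  # output feedback (10)
    return v_r(t) * f + G @ ue    # error dynamics (8)

sol = solve_ivp(closed_loop, (0.0, 200.0), [2.0, -1.0, 0.5, 1.0], max_step=0.05)
print("|x_e(0)| =", np.linalg.norm(sol.y[:, 0]))
print("|x_e(T)| =", np.linalg.norm(sol.y[:, -1]))  # expected to be much smaller
```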

IV. CONCLUSION

A general invariance principle was proposed in this note for the stability analysis of nonlinear time-varying systems; it cannot be derived from conventional invariance principles. This is achieved by a point-set topology approach rather than a Lyapunov function scheme. Thus, it is possible to extend the results in this note to the study of more general dynamical systems. Existing results such as the LaSalle invariance principle [7] and the integral invariance principle [4] were shown to be deducible from the proposed results. An application to the tracking control of 4-wheeled mobile robots was also given to demonstrate the feasibility of the proposed approach.

ACKNOWLEDGMENT

The authors are very grateful to the anonymous referees for their valuable comments and suggestions.

REFERENCES

[1] D. Aeyels, "Asymptotic stability of nonautonomous systems by Liapunov's direct method," Syst. Control Lett., vol. 25, pp. 273–280, 1995.
[2] Z. Artstein, "Uniform asymptotic stability via the limiting equations," J. Differential Equations, vol. 27, pp. 172–189, 1978.
[3] C. I. Byrnes, A. Isidori, and J. C. Willems, "Passivity, feedback equivalence, and the global stabilization of minimum phase nonlinear systems," IEEE Trans. Automat. Contr., vol. 36, pp. 1228–1240, Nov. 1991.
[4] C. I. Byrnes and C. F. Martin, "An integral-invariance principle for nonlinear systems," IEEE Trans. Automat. Contr., vol. 40, pp. 983–994, June 1995.
[5] H. K. Khalil, Nonlinear Systems. Upper Saddle River, NJ: Prentice-Hall, 1996.
[6] S. Lang, Real Analysis. Reading, MA: Addison-Wesley, 1983.
[7] J. P. LaSalle, "Stability theory for ordinary differential equations," J. Differential Equations, vol. 4, pp. 57–65, 1968.
[8] T. C. Lee, "Detectability, attractivity and invariance principle for nonlinear time-varying systems," in 4th SIAM Conf. Control Applications, Jacksonville, FL, 1998.
[9] T. C. Lee and H. L. Jhi, "A general invariance principle for nonlinear time-varying systems with application to mobile robots," in European Control Conference, ECC'99, Karlsruhe, Germany, Aug. 31–Sept. 3, 1999, paper no. F841.
[10] R. Murray and S. Sastry, "Nonholonomic motion planning: Steering using sinusoids," IEEE Trans. Automat. Contr., vol. 38, pp. 700–716, May 1993.
[11] K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems. Upper Saddle River, NJ: Prentice-Hall, 1989.
[12] N. Rouche, P. Habets, and M. Laloy, Stability Theory by Liapunov's Direct Method. Berlin, Germany: Springer-Verlag, 1977.
[13] C. Samson and K. Ait-Abderrahim, "Feedback control of a nonholonomic wheeled cart in Cartesian space," in Proc. IEEE Int. Conf. Robotics and Automation, Sacramento, CA, 1991, pp. 1136–1141.
[14] M. Vidyasagar, Nonlinear Systems Analysis. Upper Saddle River, NJ: Prentice-Hall, 1993.
[15] J. C. Willems, "Dissipative dynamical systems, Part I: General theory; Part II: Linear systems with quadratic supply rates," Arch. Rational Mech. Anal., vol. 45, pp. 321–393, 1972.

Robust Stabilization of Large Space Structures Via Displacement Feedback

Yasumasa Fujisaki, Masao Ikeda, and Kazuhiro Miki

Abstract—It has been known that static velocity and displacement feedback with collocated sensors and actuators can stabilize large space structures robustly against "any" uncertainty in mass, damping, and stiffness, independently of the number of flexible modes. This note presents dynamic displacement feedback which can achieve such robust stabilization. The proposed control law can be implemented in a decentralized scheme straightforwardly.

Index Terms—Collocated sensors and actuators, displacement feedback, large space structure, robust stabilization.

I. INTRODUCTION

Large space structures with collocated sensors and actuators can be stabilized robustly against any uncertainty in mass, damping, and stiffness, independently of the number of flexible modes, using static feedback of the measured velocity and displacement [1], [2]. Such a robust control law has been obtained by utilizing the fact that the space structures possess certain qualitative properties in their parameters independently of the numerical values, and stability can be ensured by a qualitative condition. This result is very important as low authority control [3], which ensures robust stability of the closed-loop systems, because the identification errors in large space structures might be quite large.

While velocity sensors are commonly used as well as displacement sensors, it is desirable, in view of possible velocity-sensor failures and of reducing the cost of the sensing system, to be able to control the structure without velocity measurements. Even in the case of displacement measurements only, it would be expected that, using a pseudo differentiator with a sufficiently wide band, the static feedback of velocity and displacement could be realized approximately by dynamic feedback of displacement. However, since a wide-band pseudo differentiator is sensitive to noise and its gain is very large at high frequencies, it may cause unacceptable behavior of the structure. Therefore, it is not recommended to use such a wide-band pseudo differentiator to approximate the velocity feedback.

In this note, we present a dynamic displacement feedback control law which stabilizes large space structures under the sensor/actuator collocation. The underlying idea comes from the fact that the unstable modes of structures are the rigid modes only. Then, we can stabilize the whole system by stabilizing the rigid modes using a narrow-band pseudo differentiator around zero frequency without violating stability of the vibration modes.

The proposed control law has the following advantages. It can stabilize structures robustly against any uncertainty in mass, damping, and stiffness, independently of the number of flexible modes, as the static feedback of velocity and displacement does. The control law can be implemented in a decentralized scheme which generates the control inputs from the measured outputs at each collocated pair of the sensors

Manuscript received April 26, 2001. Recommended by Associate Editor Y. Yamamoto.

Y. Fujisaki is with the Department of Computer and Systems Engineering, Kobe University, Nada, Kobe 657-8501, Japan (e-mail: fujisaki@cs.kobe-u.ac.jp).

M. Ikeda and K. Miki are with the Department of Computer-Controlled Mechanical Systems, Osaka University, Suita, Osaka 565-0871, Japan (e-mail: ikeda@mech.eng.osaka-u.ac.jp; kazu@watt.mech.eng.osaka-u.ac.jp).

Publisher Item Identifier S 0018-9286(01)11093-7.
