
Application of the Least Squares Algorithm to the Observer Design for Linear Time-Varying Systems

Min-Shin Chen and Jia-Yush Yen

Abstract—In this paper, it is shown that the least squares algorithm with covariance reset, which was originally developed for the purpose of constant parameter identification, can be effectively applied to the observer design for a general linear time-varying system. The new observer avoids many of the disadvantages of existing time-varying observers, such as a slow convergence rate, a heavy computation load, high amplification of measurement noise, and inapplicability to systems with time-varying observability indexes or discontinuous parameter variations.

Index Terms—Least squares algorithm, linear time-varying system, observer, persistent excitation, uniform observability.


Manuscript received December 2, 1996; revised October 1, 1997. Recommended by Associate Editor F. Jabbari. This work was supported by the National Science Council of the Republic of China under Grant NSC 87-2218-E-002-031.

The authors are with the Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan 10764, Republic of China.

Publisher Item Identifier S 0018-9286(99)06246-7.

I. INTRODUCTION

Since the development of the Kalman–Bucy filter [1], several different approaches to the observer design for linear time-varying systems have been reported in the literature. The first approach utilizes a matrix differential Riccati equation [2], [3] for the observer design. In the time-invariant case, the Riccati equation approach can exercise control over the observer convergence rate through proper selection of the covariance matrices of the state and output noises (see [4, Ch. 6.1] for a detailed discussion). In the time-varying case, however, the link between the covariance matrices in the Riccati equation and the observer convergence rate becomes too complicated to analyze. Hence, it remains an open question how a time-varying Riccati equation should be designed so that a desired convergence rate is achieved. The second approach [5] to the time-varying observer design utilizes a weighted observability grammian. Although the observer convergence rate can be effectively controlled in this design, the amount of computation required is large, since two matrix differential equations must be solved online. The third approach [6], [9] is the dual of pole-placement control [7] for linear time-varying systems. This approach requires high-order time derivatives of the time-varying system parameters, which are difficult to obtain because of the ever-present measurement noise. In addition, it restricts the observability indexes [8] to remain constant.

In this paper, a new approach to the observer design for a linear time-varying system is presented. The new approach is based on the least squares algorithm with covariance reset [10], which was originally developed for the purpose of constant parameter identification. Since the least squares algorithm can produce an arbitrarily fast convergence rate [11], one can take advantage of this fact in the observer design. The resultant new observer has the following advantages: 1) the convergence rate of the observer can be effectively controlled by the design parameters; 2) no time derivatives of the system parameters are required; consequently, the new observer can be applied to systems with discontinuous parameters; 3) the computation of the proposed observer feedback gain requires solving only one matrix differential equation; and 4) the observability indexes of the system are allowed to vary with time.

II. PROBLEM FORMULATION

Consider a multivariable linear time-varying system

$$\dot{x} = A(t)x + B(t)u(t), \quad x(0) = x_0, \qquad y = C(t)x \tag{1}$$

where $x(t) \in R^n$ is the system state vector, $u(t) \in R^m$ the control input, and $y(t) \in R^p$ the system output. The system matrix $A(t) \in R^{n \times n}$, the input matrix $B(t) \in R^{n \times m}$, and the output matrix $C(t) \in R^{p \times n}$ are time-varying matrices whose elements are bounded and (piecewise) continuous functions of time. It is assumed that if the open-loop system (1) is unstable, its state divergence rate is uniformly bounded in the sense that, given any time span $T$, there exists a constant $\beta > 0$ such that the state transition matrix $\Phi(t, \tau) \in R^{n \times n}$ of the open-loop system (1) satisfies, for all integers $k$,

$$\|\Phi(t, kT)\| \le \beta, \qquad \forall t \in [kT, (k+1)T] \tag{2}$$

where $\|\cdot\|$ denotes the maximum singular value of a matrix.

The objective of this paper is to reconstruct the system state x(t) given only the measurement of the system output y(t) under the assumption that the system (1) is uniformly observable in the following sense.

Definition 1 [5]: The pair $(A(t), C(t))$ is uniformly observable if there exist $\alpha_1, \alpha_2 \in R^+$ such that

$$\alpha_1 I \le W_o(k) \le \alpha_2 I, \qquad k = 0, 1, 2, \ldots \tag{3}$$




where $W_o(k) \in R^{n \times n}$ is the observability grammian defined by

$$W_o(k) = \int_{kT-T}^{kT} \Phi^T(\tau, kT)\, C^T(\tau)\, C(\tau)\, \Phi(\tau, kT)\, d\tau \tag{4}$$

in which $\Phi(t, \tau)$ is the open-loop state transition matrix of (1).

In this paper, the observer design will be based on the “least squares algorithm with covariance reset” developed for the parameter identification problem. A brief review of the least squares algorithm is given below. Let $z(t) \in R^n$ represent the parameter error between the true parameter $\theta \in R^n$ and the estimated parameter $\hat{\theta}(t)$. If $\hat{\theta}(t)$ is updated by the least squares algorithm with covariance reset, the governing equation of $z(t)$ is

$$\dot{z}(t) = -\gamma P(t) w(t) w^T(t) z(t), \qquad \forall t > 0 \tag{5}$$

$$\dot{P}(t) = -\gamma P(t) w(t) w^T(t) P(t), \quad P(kT^+) = p_0 I_{n \times n} > 0, \qquad \forall t \in [kT, (k+1)T) \tag{6}$$

where the least squares gain $\gamma$ and the reset initialization value $p_0$ can be any positive constants, and $w(t) \in R^{n \times p}$ is called the “regressor.” A well-known sufficient condition [12] for the exponential stability of (5) is that the regressor $w(t)$ be “persistently exciting” as defined below.

Definition 2 [12]: The regressor $w(t)$ is persistently exciting if there exist a time span $T$ and positive constants $\rho_1$ and $\rho_2$ such that

$$\rho_1 I \le \int_{kT-T}^{kT} w(\tau) w^T(\tau)\, d\tau \le \rho_2 I, \qquad \forall k. \tag{7}$$
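As a numerical illustration of (5)–(7), the sketch below (not from the paper; the regressor $w(t) = [\sin t,\ \cos t]^T$ and all parameter values are assumptions chosen for illustration) integrates the least squares algorithm with covariance reset by forward Euler for $n = 2$ and shows the parameter error shrinking sharply on each reset interval.

```python
import math

def simulate_ls_reset(gamma=20.0, p0=1.0, T=2*math.pi, n_intervals=2, dt=1e-3):
    """Forward-Euler integration of (5)-(6) with a hypothetical persistently
    exciting regressor w(t) = [sin t, cos t]^T (an assumed choice)."""
    z = [5.0, -5.0]                          # parameter error z(0)
    t = 0.0
    for _ in range(n_intervals):
        P = [[p0, 0.0], [0.0, p0]]           # covariance reset: P(kT+) = p0 * I
        for _ in range(int(round(T / dt))):
            w = [math.sin(t), math.cos(t)]
            Pw = [P[0][0]*w[0] + P[0][1]*w[1],
                  P[1][0]*w[0] + P[1][1]*w[1]]
            wz = w[0]*z[0] + w[1]*z[1]
            # z_dot = -gamma P w w^T z            (Eq. (5))
            z = [z[0] - dt*gamma*Pw[0]*wz, z[1] - dt*gamma*Pw[1]*wz]
            # P_dot = -gamma P w w^T P            (Eq. (6)); note P w w^T P = (Pw)(Pw)^T
            P = [[P[i][j] - dt*gamma*Pw[i]*Pw[j] for j in range(2)] for i in range(2)]
            t += dt
    return z

# After two reset intervals the error norm is a small fraction of its initial value.
z_end = simulate_ls_reset()
```

Raising either `gamma` or `p0` tightens the per-interval contraction, consistent with the role these two constants play in the theorem of Section III.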

III. OBSERVER DESIGN

Consider the following observer for system (1):

$$\dot{\hat{x}} = A(t)\hat{x} + B(t)u + L(t)(y - C(t)\hat{x}), \qquad \hat{x}(0) = \hat{x}_0 \tag{8}$$

where $\hat{x}(t) \in R^n$ is an estimate of the system state $x(t)$, and $L(t) \in R^{n \times p}$ is the observer feedback gain to be determined so that $\hat{x}(t)$ approaches $x(t)$ exponentially. Denote the state estimation error by $\tilde{x} = \hat{x} - x$, and subtract (1) from (8) to obtain the state estimation error dynamics:

$$\dot{\tilde{x}} = [A(t) - L(t)C(t)]\tilde{x}. \tag{9}$$

In order to transform the error dynamics (9) into a structure similar to that of the least squares equation (5), pick a time span $T$, and on each time interval $[kT, (k+1)T)$ apply the following coordinate transformation to the error dynamics (9):

$$\tilde{x}(t) = \Phi(t, kT)\tilde{z}_k(t), \qquad t \in [kT, (k+1)T), \quad k = 0, 1, 2, \ldots \tag{10}$$

where the new coordinate $\tilde{z}_k(t)$ is defined only on the time interval $[kT, (k+1)T)$, and $\Phi(t, kT)$ is the state transition matrix of the open-loop system (1) defined by

$$\frac{\partial \Phi(t, kT)}{\partial t} = A(t)\Phi(t, kT), \quad t \in [kT, (k+1)T), \qquad \Phi(kT, kT) = I. \tag{11}$$


From (9)–(11), the governing equation of $\tilde{z}_k(t)$ is

$$\dot{\tilde{z}}_k = -\Phi^{-1}(t, kT)\, L(t)\, w_o^T(t)\, \tilde{z}_k, \qquad w_o(t) = \Phi^T(t, kT) C^T(t), \qquad t \in [kT, (k+1)T). \tag{12}$$

Comparison of (12) with the least squares equation (5) immediately suggests that the observer feedback gain $L(t)$ be chosen as

$$L(t) = \gamma \Phi(t, kT) P_o(t) w_o(t) = \gamma \Phi(t, kT) P_o(t) \Phi^T(t, kT) C^T(t), \qquad t \in [kT, (k+1)T) \tag{13}$$

where

$$\dot{P}_o(t) = -\gamma P_o(t) w_o(t) w_o^T(t) P_o(t), \quad P_o(kT^+) = p_0 I > 0, \qquad t \in [kT, (k+1)T)$$

in which $\gamma$ and $p_0$ can be any positive constants. The transformed state estimation error dynamics (12) then become

$$\dot{\tilde{z}}_k(t) = -\gamma P_o(t) w_o(t) w_o^T(t) \tilde{z}_k(t), \qquad t \in [kT, (k+1)T) \tag{14}$$

$$\dot{P}_o(t) = -\gamma P_o(t) w_o(t) w_o^T(t) P_o(t), \qquad P_o(kT^+) = p_0 I > 0 \tag{15}$$

which have exactly the same structure as the least squares algorithm in (5) and (6).
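As a numerical sanity check on the transformation (10) and the gain choice (13), one can integrate the error dynamics (9) and the transformed dynamics (14)–(15) side by side and confirm that $\tilde{x}(t) = \Phi(t, kT)\tilde{z}_k(t)$ is maintained. The sketch below does this over a single interval for a hypothetical second-order system; the matrices $A(t)$, $C$, and all parameter values are assumptions, not from the paper.

```python
import math

# Hypothetical second-order time-varying system, used only for illustration.
def A(t):
    return [[0.0, 1.0], [-2.0 - math.sin(t), -1.0]]

C = [1.0, 0.0]  # constant output row vector (an assumed choice)

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matvec(X, v):
    return [sum(X[i][k]*v[k] for k in range(2)) for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def check_transformation(gamma=5.0, p0=1.0, T=1.0, dt=1e-5):
    Phi = [[1.0, 0.0], [0.0, 1.0]]        # Phi(kT, kT) = I, Eq. (11)
    P = [[p0, 0.0], [0.0, p0]]            # P_o(kT+) = p0 * I, Eq. (15)
    x_err = [1.0, -1.0]                   # xtilde(kT)
    z = [1.0, -1.0]                       # ztilde_k(kT+) = xtilde(kT) by (10)
    t = 0.0
    for _ in range(int(round(T / dt))):
        At = A(t)
        w = matvec(transpose(Phi), C)     # regressor w_o = Phi^T C^T
        Pw = matvec(P, w)
        # observer gain L = gamma * Phi * P_o * w_o      (Eq. (13))
        L = [gamma*(Phi[i][0]*Pw[0] + Phi[i][1]*Pw[1]) for i in range(2)]
        # xtilde_dot = (A - L C) xtilde                  (Eq. (9))
        Ax = matvec(At, x_err)
        Cx = C[0]*x_err[0] + C[1]*x_err[1]
        x_err = [x_err[i] + dt*(Ax[i] - L[i]*Cx) for i in range(2)]
        # ztilde_dot = -gamma P w w^T z ; P_dot = -gamma P w w^T P   (Eqs. (14)-(15))
        wz = w[0]*z[0] + w[1]*z[1]
        z = [z[i] - dt*gamma*Pw[i]*wz for i in range(2)]
        P = [[P[i][j] - dt*gamma*Pw[i]*Pw[j] for j in range(2)] for i in range(2)]
        # Phi_dot = A(t) Phi                             (Eq. (11))
        dPhi = matmul(At, Phi)
        Phi = [[Phi[i][j] + dt*dPhi[i][j] for j in range(2)] for i in range(2)]
        t += dt
    return x_err, matvec(Phi, z)          # xtilde and its reconstruction via (10)
```

Up to the integration error of the forward-Euler scheme, the two returned vectors coincide, which is exactly the equivalence the gain (13) is built on.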

The following lemma shows that the uniform observability property of (1) guarantees that the regressor $w_o(t)$ in (14) is persistently exciting.

Lemma: If $(A(t), C(t))$ of (1) is uniformly observable as defined in (3), the regressor $w_o(t)$ in (14) is persistently exciting in the sense that

$$\alpha_1 m_1^2 I \le W_{oz}(k) \triangleq \int_{kT-T}^{kT} w_o(\tau) w_o^T(\tau)\, d\tau \le \alpha_2 m_2^2 I, \qquad k = 1, 2, \ldots \tag{16}$$

where $m_1$ and $m_2$ are two positive constants satisfying

$$m_1 \le \sigma_i[\Phi(kT, kT-T)] \le m_2, \qquad \forall k > 0 \tag{17}$$

in which $\sigma_i[\cdot]$ denotes the singular values of a matrix.

Proof: See the Appendix.

In the theorem below, the transformed state equations (14) and (15) will be used to analyze how $\|\tilde{z}_k(t)\|$ varies over the time interval $[kT, (k+1)T)$. Then, from the transformation relationship (10), the variation of $\|\tilde{x}(t)\|$ over the same interval can be estimated for the purpose of stability analysis. One can thus establish the exponential stability of the proposed observer.

Theorem: Consider (8) and (13). If (1) is uniformly observable, the state estimation error $\hat{x}(t) - x(t)$ converges to zero exponentially if the least squares gain $\gamma$ in (14) and the reset initialization value $p_0$ are chosen large enough that

$$\frac{\beta}{1 + \gamma p_0 \alpha_1 m_1^2} < 1$$

where $\beta$ is as in (2) and $\alpha_1 m_1^2$ as in (16).

Proof: According to (15), the inverse of $P_o(t)$ satisfies

$$\frac{d}{dt} P_o^{-1}(t) = \gamma\, w_o(t) w_o^T(t), \qquad \forall t \in [kT, (k+1)T).$$

Integrating the above equation from $t = kT^+$ to $t = (k+1)T^-$ gives

$$P_o^{-1}((k+1)T^-) - P_o^{-1}(kT^+) = \gamma \int_{kT}^{(k+1)T} w_o(\tau) w_o^T(\tau)\, d\tau.$$

It then follows from (16) in the Lemma and $P_o(kT^+) = p_0 I$ that

$$\left( p_0^{-1} + \gamma \alpha_1 m_1^2 \right) I \le P_o^{-1}((k+1)T^-) \le \left( p_0^{-1} + \gamma \alpha_2 m_2^2 \right) I.$$

Equivalently,

$$\frac{1}{p_0^{-1} + \gamma \alpha_2 m_2^2}\, I \le P_o((k+1)T^-) \le \frac{1}{p_0^{-1} + \gamma \alpha_1 m_1^2}\, I. \tag{18}$$

Using (14) and (15), one can verify that

$$\frac{d}{dt}\left[ P_o^{-1}(t)\, \tilde{z}_k(t) \right] = 0, \qquad \forall t \in [kT, (k+1)T).$$



Fig. 1. State estimation error (T = 0.25).

Fig. 2. State estimation error (T = 2).

Hence

$$P_o^{-1}(kT^+)\, \tilde{z}_k(kT^+) = P_o^{-1}((k+1)T^-)\, \tilde{z}_k((k+1)T^-).$$

Recalling that $P_o(kT^+) = p_0 I$, one can derive from the above equation

$$\|\tilde{z}_k((k+1)T^-)\| \le \frac{1}{p_0}\, \|P_o((k+1)T^-)\| \cdot \|\tilde{z}_k(kT^+)\| \le \frac{1}{1 + \gamma p_0 \alpha_1 m_1^2}\, \|\tilde{z}_k(kT^+)\| \tag{19}$$

where the second inequality results from (18). In other words, the norm of $\tilde{z}_k(t)$ decreases by a factor of $1/(1 + \gamma p_0 \alpha_1 m_1^2)$ over the time interval $[kT, kT + T)$.

Now one can relate the variation of $\|\tilde{x}(t)\|$ to that of $\|\tilde{z}_k(t)\|$ through (10). At the beginning and the end of each time interval $[kT, (k+1)T)$, $\tilde{z}_k(t)$ and $\tilde{x}(t)$ are related by, according to (10),

$$\tilde{x}((k+1)T) = \Phi((k+1)T, kT)\, \tilde{z}_k((k+1)T^-) \tag{20}$$

$$\tilde{x}(kT) = \Phi(kT, kT)\, \tilde{z}_k(kT^+) = \tilde{z}_k(kT^+). \tag{21}$$

Taking the norm of $\tilde{x}((k+1)T)$ in (20) and using (2) yields

$$\|\tilde{x}((k+1)T)\| \le \beta\, \|\tilde{z}_k((k+1)T^-)\| \le \frac{\beta}{1 + \gamma p_0 \alpha_1 m_1^2}\, \|\tilde{z}_k(kT^+)\| = \frac{\beta}{1 + \gamma p_0 \alpha_1 m_1^2}\, \|\tilde{x}(kT)\|$$

where (19) and (21) have been used to obtain the above equations. Finally, one has

$$\|\tilde{x}(kT)\| \le \left( \frac{\beta}{1 + \gamma p_0 \alpha_1 m_1^2} \right)^k \|\tilde{x}(0)\|. \tag{22}$$

From the hypothesis of the theorem, $\beta/(1 + \gamma p_0 \alpha_1 m_1^2) < 1$; one concludes from (22) that the state estimation error $\tilde{x}(kT)$ approaches zero exponentially as $k$ approaches infinity.

Remark 1: Notice from (22) that, given any $\beta$, $m_1$, and $\alpha_1$, which characterize open-loop properties of the system (1), there always exist a least squares gain $\gamma$ and a reset initialization value $p_0$ such that the number $\beta/(1 + \gamma p_0 \alpha_1 m_1^2)$ is as small as desired. In other words, one can always pick either a large least squares gain $\gamma$ or a large reset initialization value $p_0$ such that the decay rate of the observer is as fast as desired. Such a property cannot be achieved by a time-varying Kalman filter design, since the relationship between the design parameters (the covariance matrices of the state and output noises) and the closed-loop decay rate is not clear in the time-varying case.

Remark 2: A further design parameter besides $\gamma$ and $p_0$ is the time interval length $T$ in (13). In general, the open-loop properties $\beta$, $m_1$, and $\alpha_1$ in (22) depend on the value of $T$, but their relationships may vary widely for different system matrices. Nevertheless, if $T$ is chosen too small, $\alpha_1$ becomes almost zero [see (3) and (4)]; in other words, there is almost no observability on such a short time interval, and the observer will not be able to function effectively. On the other hand, if $T$ is chosen too large, the observer may lose its output injection toward the end of each time interval, because the output injection gain $L(t)$ may decrease to almost zero due to the decreasing nature of $P_o(t)$ [see (13) and (15)]. However, according to (22), for whatever value of $T$ (which determines the values of $\beta$, $m_1$, and $\alpha_1$), there always exist $\gamma$ and $p_0$ that ensure a desired convergence rate for the observer. Simulation experience (see Figs. 1 and 2) does indicate that the performance of the observer is mainly dictated by the least squares gain $\gamma$ and the reset initialization value $p_0$, and is relatively insensitive to the choice of $T$. For most values of $T$ (except very small or very large values), proper tuning of $\gamma$ and $p_0$ will always result in satisfactory performance of the observer. A simulation example is given below to verify the proposed observer design.

Example: Consider the error dynamics (9), where the system matrices are given by

$$A(t) = \begin{bmatrix} -1 + 1.5\cos^2 t & 1 - 1.5\sin t \cos t \\ -1 - 1.5\sin t \cos t & -1 + 1.5\sin^2 t \end{bmatrix}$$

$$C(t) = \left[ \cos\sqrt{t} + \sin\sqrt{t}, \;\; 3\cos^2\sqrt{t} \right]$$

and the initial condition is $\tilde{x}^T(0) = [5, -5]$. Note that the system parameters vary nonperiodically due to the presence of $\sqrt{t}$. Simulation results indicate that the open-loop system is unstable. For the proposed observer design, the reset time interval $T$ is first chosen to be 0.25 s, the reset initialization value $p_0 = 0.1$ in (15), and the least squares gain $\gamma = 20$ in (14). Fig. 1 shows the time history of the state estimation error, which converges exponentially to zero. An even faster response can be obtained if either a larger least squares gain $\gamma$ or a larger reset initialization value $p_0$ is used. In the second simulation, the reset time interval $T$ is changed to 2 s. It is seen from Fig. 2 that there is no major change in the system performance.
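A minimal reproduction sketch of this simulation (not the authors' code; the forward-Euler scheme, step size, and horizon are assumptions) integrates the error dynamics (9) with the gain (13), the covariance update (15), and the per-interval resets of $\Phi$ and $P_o$.

```python
import math

def A(t):
    s, c = math.sin(t), math.cos(t)
    return [[-1 + 1.5*c*c, 1 - 1.5*s*c],
            [-1 - 1.5*s*c, -1 + 1.5*s*s]]

def C(t):
    r = math.sqrt(t)
    return [math.cos(r) + math.sin(r), 3*math.cos(r)**2]

def simulate_observer(gamma=20.0, p0=0.1, T=0.25, t_end=10.0, dt=1e-3):
    """Forward-Euler simulation of the state estimation error under the
    proposed observer; parameter values follow the example (gamma = 20,
    p0 = 0.1, T = 0.25 s)."""
    x_err = [5.0, -5.0]                       # xtilde(0)
    t = 0.0
    while t < t_end - 1e-9:
        Phi = [[1.0, 0.0], [0.0, 1.0]]        # Phi(kT, kT) = I
        P = [[p0, 0.0], [0.0, p0]]            # covariance reset P_o(kT+) = p0 I
        for _ in range(int(round(T / dt))):
            At, Ct = A(t), C(t)
            # regressor w_o = Phi^T C^T
            w = [Phi[0][0]*Ct[0] + Phi[1][0]*Ct[1],
                 Phi[0][1]*Ct[0] + Phi[1][1]*Ct[1]]
            Pw = [P[0][0]*w[0] + P[0][1]*w[1],
                  P[1][0]*w[0] + P[1][1]*w[1]]
            # observer gain L = gamma * Phi * P_o * w_o    (Eq. (13))
            L = [gamma*(Phi[0][0]*Pw[0] + Phi[0][1]*Pw[1]),
                 gamma*(Phi[1][0]*Pw[0] + Phi[1][1]*Pw[1])]
            # xtilde_dot = (A - L C) xtilde                (Eq. (9))
            Cx = Ct[0]*x_err[0] + Ct[1]*x_err[1]
            Ax = [At[0][0]*x_err[0] + At[0][1]*x_err[1],
                  At[1][0]*x_err[0] + At[1][1]*x_err[1]]
            x_err = [x_err[i] + dt*(Ax[i] - L[i]*Cx) for i in range(2)]
            # P_dot = -gamma P w w^T P ; Phi_dot = A Phi   (Eqs. (15), (11))
            P = [[P[i][j] - dt*gamma*Pw[i]*Pw[j] for j in range(2)] for i in range(2)]
            dPhi = [[At[i][0]*Phi[0][j] + At[i][1]*Phi[1][j] for j in range(2)] for i in range(2)]
            Phi = [[Phi[i][j] + dt*dPhi[i][j] for j in range(2)] for i in range(2)]
            t += dt
    return x_err
```

Despite the open-loop instability, the estimation error decays, in line with the behavior reported in Fig. 1; changing `T` to 2 s changes the result little, in line with Fig. 2.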


IV. CONCLUSION

In this paper, the least squares algorithm with covariance reset is applied to the observer design for a general linear time-varying system. A unique feature of the new design is that the convergence rate of the observer can be effectively controlled by two scalar design parameters: the larger these two design parameters, the faster the convergence rate. Research is presently being conducted on extending the design method in this paper to the dual problem of the observer design, i.e., to the problem of state feedback control design for a general linear time-varying system.


APPENDIX

Notice that $W_{oz}(k)$ in the Lemma is the observability grammian of the pair $(0, w_o^T(t))$, which is related to the system's observability grammian $W_o(k)$ in (4) by

$$W_{oz}(k) = \Phi^T(kT, kT-T)\, W_o(k)\, \Phi(kT, kT-T).$$

Hence, given any constant vector $v$, one has

$$\alpha_1 \|\Phi(kT, kT-T)v\|^2 \le v^T W_{oz}(k)\, v \le \alpha_2 \|\Phi(kT, kT-T)v\|^2$$

due to (3). Further, using the following inequality from (17),

$$m_1 \|v\| \le \|\Phi(kT, kT-T)v\| \le m_2 \|v\|$$

one obtains

$$\alpha_1 m_1^2 \|v\|^2 \le v^T W_{oz}(k)\, v \le \alpha_2 m_2^2 \|v\|^2$$

which leads to the claim of the Lemma.


REFERENCES

[1] R. E. Kalman and R. S. Bucy, “New results in linear filtering and prediction theory,” Trans. ASME, J. Basic Eng., Series D, pp. 95–108, 1961.

[2] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems. New York: Wiley, 1972.

[3] J. O’Reilly, Observers for Linear Systems. New York: Academic, 1983.

[4] B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods. Englewood Cliffs, NJ: Prentice-Hall, 1989.

[5] W. J. Rugh, Linear System Theory. Englewood Cliffs, NJ: Prentice-Hall, 1993.

[6] Y. O. Yüksel and J. J. Bongiorno, “Observers for linear multivariable systems with applications,” IEEE Trans. Automat. Contr., vol. AC-16, pp. 603–613, 1971.

[7] W. A. Wolovich, “On the stabilization of controllable systems,” IEEE Trans. Automat. Contr., vol. AC-13, pp. 569–572, 1968.

[8] C. T. Chen, Linear System Theory and Design. New York: Holt, Rinehart and Winston, 1984.

[9] V. Lovass-Nagy, R. J. Miller, and R. Mukundan, “On the application of matrix generalized inverses to the design of observers for linear time-varying and time-invariant systems,” IEEE Trans. Automat. Contr., vol. AC–25, pp. 1213–1218, 1980.

[10] K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems. Englewood Cliffs, NJ: Prentice-Hall, 1989.

[11] E. W. Bai and S. S. Sastry, “Global stability proofs for continuous-time indirect adaptive control schemes,” IEEE Trans. Automat. Contr., vol. AC-32, pp. 537–543, 1987.

[12] S. Sastry and M. Bodson, Adaptive Control: Stability, Convergence, and Robustness. Englewood Cliffs, NJ: Prentice-Hall, 1989.

