IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 34, NO. 7, JULY 1989

On Minimum-Fuel Control of Affine Nonlinear Systems

JING-SIN LIU, KING YUAN, AND WEI-SONG LIN

Abstract-The minimum-fuel control problem is investigated for a class of multiinput affine nonlinear systems whose associated Lie algebra is nilpotent. Interesting consequences of the maximum principle are deduced for such systems.

I. INTRODUCTION

Optimal control theory [7] provides a systematic design method for modern control systems and thus plays an important role in linear control theory (more specifically, the linear quadratic regulator and linear quadratic Gaussian control theories). Roughly speaking, the success of optimal control theory in the context of linear systems is due to the ease of computation of the optimal control law. On the other hand, until now there has been a lack of systematic and reliable procedures for solving nonlinear optimal control problems. This is unfortunate, but some attempts have been made to resolve these difficulties; in [2] a Lie algebraic approach has been used to derive a set of quasi-linear partial differential equations which the optimal feedback law must satisfy.

Manuscript received March 29, 1988; revised June 13, 1988.
J.-S. Liu and W.-S. Lin are with the Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan, Republic of China.
K. Yuan is with the Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan, Republic of China.
IEEE Log Number 8927764.
0018-9286/89/0700-0767$01.00 © 1989 IEEE


Application of this new computing method to the optimal control of regulation of satellite angular momentum has been reported recently in [8]. The Lie brackets of vector fields have become a main mathematical tool in nonlinear control theory [9] and optimal control theory [10]. In this note we consider the following optimal control problem:

the performance index to be minimized is

$$J(x_0, u) = \frac{1}{2}\int_0^T u^T u\, dt + K(x(T)) \qquad (1)$$

subject to the smooth affine system dynamics

$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)u_i = f(x) + g(x)u, \qquad x(0) = x_0 \qquad (2)$$

where $T > 0$ is the fixed end time, the system's vector fields $f, g_i$ and the terminal cost $K$ are all smooth, $x$ is a real $n$-vector, and $u_i$ is a scalar control, $i = 1, \ldots, m$.

Since in most cases the cost integrand $u^T u$ of (1) can be identified with the energy expended, we call the optimal control problem (1), (2) a minimum-fuel control problem for (2). Associated with the nonlinear system (2) we define the Lie algebra $L$:

$$L := \text{the Lie algebra generated by the system vector fields } \{f, g_1, \ldots, g_m\} \qquad (3)$$

i.e., $L$ is the set consisting of $f, g_1, \ldots, g_m$ and all possible Lie brackets generated by $f, g_1, \ldots, g_m$ and their linear combinations. The following notations are also used in this note:

$$\mathrm{ad}_L^0 L := L, \qquad \mathrm{ad}_L^1 L = [L, L] = \{[X, Y] : X \in L,\ Y \in L\}, \qquad \mathrm{ad}_L^{k+1} L = \mathrm{ad}_L\, \mathrm{ad}_L^k L$$

where $[\cdot, \cdot]$ is the Lie bracket.

The minimum-fuel control problem for a scalar-input bilinear system was studied in [1], assuming that the Lie algebra $L$ is nilpotent. The purpose of this note is to generalize the results of [1] to a more general class of multiinput systems described by (2) with nilpotent Lie algebra $L$. The organization of this note is as follows. In the next section, some preliminary notions and definitions are given, and the motivation for our work on the problem (1), (2) is explained. We then concentrate on the solution of the optimal control problem when (3) is nilpotent. It will be seen that some results of [1] are due not only to the nilpotency of $L$ but also to the fact that the systems considered are single-input, in which case the solution is greatly simplified. In the last section we draw several conclusions.

II. PRELIMINARIES AND MOTIVATION

In this note, we are especially interested in systems (2) with a special structure: the Lie algebra $L$ is nilpotent. Recall [5] that a Lie algebra $L$ is nilpotent if there exists a positive integer $k$ such that

$$\mathrm{ad}_L^k L = 0. \qquad (4)$$

Note that other equivalent definitions are available [5]; the one adopted here is the same as that used in [1]. A system of the form (2) can be too complex to allow the application of existing control theory, e.g., optimal control theory. A possible first step in overcoming the difficulties arising from the system's complex structure is to approximate system (2) locally by a system with simpler structure; the simpler system must be more mathematically tractable, so as to facilitate the subsequent control design problem. It has been shown [3], [4] that, under some nonrestrictive conditions, a system of the affine form (2), with or without the drift term $f(x)$, can be locally approximated by a nilpotent one with the same form of state equations as (2). Therefore, it is useful to study the control problem of an affine smooth nonlinear control system with nilpotent Lie algebra $L$ before thoroughly investigating the general control problem. For brevity, the nonlinear system described by the state equations (2) is called a nilpotent nonlinear control system if its associated Lie algebra $L$ is nilpotent.
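The nilpotency condition (4) can be checked mechanically for a concrete system by iterating Lie brackets. A minimal sketch in Python with sympy, using an illustrative double-integrator pair $f = (x_2, 0)$, $g = (0, 1)$ (our own choice, not an example from the note):

```python
import sympy as sp

# Illustrative system (an assumption, not from the note):
# f = (x2, 0), g = (0, 1) -- the double integrator.
x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def lie_bracket(F, G):
    # [F, G] = (dG/dx) F - (dF/dx) G
    return sp.simplify(G.jacobian(X) * F - F.jacobian(X) * G)

f = sp.Matrix([x2, 0])
g = sp.Matrix([0, 1])

fg  = lie_bracket(f, g)    # [f, g]      -> a constant field (-1, 0)
ffg = lie_bracket(f, fg)   # [f, [f, g]] -> 0
gfg = lie_bracket(g, fg)   # [g, [f, g]] -> 0
print(fg.T, ffg.T, gfg.T)
```

Here every bracket of depth two vanishes, so $\mathrm{ad}_L^2 L = 0$ and this system falls under Case 2 of Section III.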

III. OPTIMAL CONTROL PROBLEM

In this section, we solve the optimal control problem (1), (2) under the assumption that $L$ is nilpotent, i.e., we consider the minimum-fuel control problem for the nilpotent control system (2). For such an optimal control problem we form the associated Hamiltonian

$$H = p^T(f + gu) - \frac{1}{2}u^T u = H(x, p, u) \qquad (5)$$

where $p$ is the $n \times 1$ real costate vector. For the Hamiltonian given by (5) we have the Hamiltonian system

$$\dot{x} = f(x) + g(x)u, \quad x(0) = x_0, \qquad \dot{p} = -\frac{\partial H}{\partial x}(x, p, u) \qquad (6)$$

and an $m \times 1$ output is also adjoined to (6):

$$y = \frac{\partial H}{\partial u}. \qquad (7)$$

Note that (6), (7) form a Hamiltonian system in the canonical coordinates $(x, p)$. The optimal control problem (1), (2) with Hamiltonian (5) is regular (or nondegenerate) since

$$\frac{\partial^2 H}{\partial u^2} = -I_m$$

is nonsingular for any $(x, p, u)$. The optimal control $u^*$ satisfies the necessary condition

$$y = 0, \qquad \text{or equivalently,} \qquad g^T(x)p - u = 0. \qquad (8)$$

From (8) we get explicitly

$$u_i^* = p^T g_i(x), \qquad i = 1, 2, \ldots, m. \qquad (9)$$

Let

$$H^*(x, p) := H(x, p, u^*)$$

denote the optimal Hamiltonian; the Hamiltonian system obtained from (6) by replacing $H(x, p, u)$ with $H^*(x, p)$ will be called the optimal Hamiltonian system.

From (7), the $i$th ($i = 1, 2, \ldots, m$) output is

$$y_i = p^T g_i(x) - u_i;$$

then the first time derivative of $y_i$ is

$$\dot{y}_i = p^T \mathrm{ad}_F\, g_i - \dot{u}_i,$$

or

$$\dot{u}_i = p^T \mathrm{ad}_F\, g_i - \dot{y}_i \qquad (10)$$

where

$$F := f + gu. \qquad (11)$$

The following simple Lemma is of use in the subsequent derivations.

Lemma: Let $Y$ be a vector field and $p$ the optimal costate vector. Then

$$\frac{d}{dt}(p^T Y) = p^T \mathrm{ad}_F\, Y$$

where $F$ is as defined in (11) and the time derivative is calculated along the system's trajectory.

Proof: The time derivative of the function $p^T Y$ along the system's trajectory is

$$\frac{d}{dt}(p^T Y) = \dot{p}^T Y + p^T \dot{Y} = -p^T \frac{\partial F}{\partial x} Y + p^T \frac{\partial Y}{\partial x} F = p^T \mathrm{ad}_F\, Y.$$
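The Lemma is a coordinate identity and can be checked symbolically. A sketch with sympy, for a concrete pair of vector fields $F$, $Y$ of our own choosing (any smooth pair would do):

```python
import sympy as sp

# Along x' = F(x), p' = -(dF/dx)^T p, the Lemma asserts
#   d/dt (p^T Y) = p^T [F, Y].
# F and Y below are arbitrary illustrative choices, not from the note.
x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')
X = sp.Matrix([x1, x2])
P = sp.Matrix([p1, p2])

F = sp.Matrix([x2**2, sp.sin(x1)])
Y = sp.Matrix([x1 * x2, x1])

adFY = Y.jacobian(X) * F - F.jacobian(X) * Y   # [F, Y]

pdot = -F.jacobian(X).T * P                    # costate equation
xdot = F
# d/dt (p^T Y) = pdot^T Y + p^T (dY/dx) xdot
lhs = (pdot.T * Y + P.T * Y.jacobian(X) * xdot)[0]
rhs = (P.T * adFY)[0]
residual = sp.simplify(lhs - rhs)
print(residual)
```

The residual simplifies to zero, mirroring the chain of equalities in the Proof.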

From this Lemma, we easily obtain the second time derivative of $u_i$ by differentiating (10):

$$\ddot{u}_i = p^T \mathrm{ad}_F^2\, g_i - \ddot{y}_i. \qquad (12)$$

In general, we have, for each $i = 1, 2, \ldots, m$,

$$u_i^{(k)} = p^T \mathrm{ad}_F^k\, g_i - y_i^{(k)}, \qquad k = 0, 1, 2, \ldots. \qquad (13)$$

Thus, the necessary condition $y_i^{(k)} = 0$, $k = 0, 1, 2, \ldots$, for optimality of $u_i^*$ is equivalent to

$$u_i^{*(k)} = p^T \mathrm{ad}_F^k\, g_i \big|_{u^*}, \qquad k = 0, 1, 2, \ldots.$$

The above derivations give the following.

Proposition: The necessary conditions for optimality of $u_i^*$ are that, along the flow of the optimal Hamiltonian $H^*$,

$$u_i^{*(k)} = p^T \mathrm{ad}_F^k\, g_i \big|_{u^*}, \qquad k = 0, 1, 2, \ldots, \quad i = 1, 2, \ldots, m. \qquad (14)$$

Remark: The hierarchy of conditions (13) is also given in [2] for a more general class of criteria and systems, written for a slightly different problem (the Mayer problem) in a less explicit form. The conditions

$$u_i^* = p^T g_i(x) \qquad (15)$$

$$\dot{u}_i^* = p^T [f + gu^*, g_i] \qquad (16)$$

were also derived in [6], in which (16) was obtained by direct differentiation (with respect to time) of (15). The derivation of [6, eq. (15)] was from the trivial symmetry (or trivial (energy) conservation law)

$$\{H^*, H^*\} = 0$$

where $\{\cdot, \cdot\}$ is the Poisson bracket of smooth functions, since $H^*$ is a first integral of the optimal Hamiltonian system. In this regard, the conditions given in the Proposition can be viewed as representations of the energy conservation law.

From the Lemma, the following Corollary is clear.

Corollary: If $L$ satisfies the nilpotency condition

$$\mathrm{ad}_L^k L = 0$$

for some positive integer $k$, then for any vector field $X \in \mathrm{ad}_L^{k-1} L$, $p^T X(x)$ is a constant.

There are three special cases of interest to be considered.

Case 1 (Commutative Case, or $k = 1$): In this case, for each $i, j = 1, 2, \ldots, m$,

$$[f, g_i] = 0, \qquad [g_i, g_j] = 0.$$

Since from (14)

$$\dot{u}_i^* = p^T \mathrm{ad}_F\, g_i \big|_{u^*},$$

by the Corollary we have $\dot{u}_i^* = 0$, i.e., the minimum-fuel control for a nilpotent control system with $\mathrm{ad}_L L = 0$ is a constant vector: $u^* = C$.

The computation of this constant $u^*$ can proceed as follows. In this case the minimum cost is

$$J^*(x_0) := J(x_0, u^*) = \frac{1}{2}\int_0^T u^{*T} u^*\, dt + K(x(T)) = \frac{T}{2}\sum_{i=1}^m C_i^2 + K(x(T)) \qquad (17)$$

and the system dynamics become

x = f ( x ) + g ( x ) C , x(O)=xo. (18) From (18) we can (numerically or analytically) solve for x ( t ) and thus

x ( T ) , then (17) is a function of Ci only. For optimality of

C,,

we must require that aK - TC,

+

- (x( T ) ) dJ*

_ -

d c ,

act

=0, i = l , 2,

...,

m.

These constitute a set of m algebraic equations in m unknowns: C,,

. . .

,

C,,,;

C

can then be solved by standard methods.
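The procedure just described — integrate (18) for a trial $C$, evaluate (17), and drive $dJ^*/dC_i$ to zero — can be sketched numerically. The data below ($f = 0$, $g(x) = I$, terminal cost $K(x(T)) = |x(T)|^2$) are illustrative assumptions of ours, chosen so the answer is checkable by hand:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Commutative (Case 1) sketch under illustrative assumptions:
# f = 0, g(x) = I (all brackets vanish), K(x(T)) = |x(T)|^2.
T = 1.0
x0 = np.array([1.0, -1.0])
G = np.eye(2)                                   # g(x) = I, constant

def J(C):
    # integrate xdot = g(x) C as in (18), then evaluate (17)
    sol = solve_ivp(lambda t, x: G @ C, (0.0, T), x0, rtol=1e-9, atol=1e-12)
    xT = sol.y[:, -1]
    return 0.5 * T * float(C @ C) + float(xT @ xT)

res = minimize(J, np.zeros(2))                  # solve dJ*/dC_i = 0 numerically
C_star = res.x
print(C_star)
```

For this data $x(T) = x_0 + TC$, so the stationarity condition gives $C = -(2/3)x_0$ in closed form, which the numerical search reproduces.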

Remark: For the present Case 1, the result is the same as that of [1] for a single-input bilinear system.

Case 2 ($k = 2$, or $\mathrm{ad}_L^2 L = 0$): In particular, $\mathrm{ad}_F^2\, g_i = 0$, $i = 1, 2, \ldots, m$. In view of (14) we have, by the Corollary,

$$\ddot{u}_i^* = p^T \mathrm{ad}_F^2\, g_i = 0.$$

Therefore the open-loop optimal control is

$$u_i^*(t) = C_i + d_i t, \qquad i = 1, 2, \ldots, m$$

for some constants $C_i$ and $d_i$. Note that the result obtained here is also analogous to that of [1] for single-input bilinear systems. Consequently, the two cases just considered are natural extensions of [1] to the more general problem (1), (2).
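Since the open-loop optimum is affine in $t$, the search space collapses to the $2m$ constants $(C_i, d_i)$. A numerical sketch under illustrative assumptions of ours (the double integrator, which satisfies $\mathrm{ad}_L^2 L = 0$, with quadratic terminal cost — not data from the note):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Case 2 sketch: f = (x2, 0), g = (0, 1) (m = 1), K(x(T)) = |x(T)|^2,
# T = 1, x0 = (1, 0) -- all illustrative choices.
T = 1.0
x0 = np.array([1.0, 0.0])

def cost(p):
    C, d = p
    rhs = lambda t, x: np.array([x[1], C + d * t])   # u*(t) = C + d t
    xT = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]
    # 0.5 * int_0^T (C + d t)^2 dt in closed form
    fuel = 0.5 * (C**2 * T + C * d * T**2 + d**2 * T**3 / 3.0)
    return fuel + float(xT @ xT)

res = minimize(cost, np.zeros(2))
print(res.x)
```

For this data a hand calculation of the two stationarity equations gives $C = -1$, $d = 3/2$, which the search reproduces; this agrees with the linear-quadratic theory, where the costate of the double integrator is affine in $t$.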

Case 3 ($k = 3$, or $\mathrm{ad}_L^3 L = 0$): In particular, $\mathrm{ad}_F^3\, g_i = 0$. It can be seen from (14) that

$$u_i^{*(3)} = 0, \qquad i = 1, 2, \ldots, m$$

and, by the Corollary and (12),

$$\ddot{u}_i^* = a_i + \sum_{j=1}^m b_j^i\, u_j^* + \sum_{k=1}^m c_k^i\, u_k^* + \sum_{j=1}^m \sum_{k=1}^m d_{jk}^i\, u_j^* u_k^*, \qquad i = 1, 2, \ldots, m \qquad (19)$$

where the constants are defined by

$$a_i = p^T[f, [f, g_i]], \qquad b_j^i = p^T[f, [g_j, g_i]], \qquad c_k^i = p^T[g_k, [f, g_i]], \qquad d_{jk}^i = p^T[g_j, [g_k, g_i]].$$

This set of $m$ nonlinear coupled second-order differential equations (19) may be solved for $u_i^*$. However, an analytic closed-form solution for $u_i^*(t)$ is usually not possible, and we must accept a numerical solution obtained by numerical integration techniques. Note that the second and final terms on the right-hand side of (19) are due to the multiinput nature of the system, in addition to the nilpotency structure imposed on the system (cf. the first equation in [1, p. 897]).

For single-input systems (i.e., $m = 1$), $b_j^i = d_{jk}^i = 0$, and (19) reduces to a linear constant-coefficient second-order ordinary differential equation

$$\ddot{u}^* = C_1 + C_2 u^* \qquad (20)$$

where

$$C_1 = p^T[f, [f, g]], \qquad C_2 = p^T[g, [f, g]].$$

The general solution of (20) is

$$u^*(t) = -\frac{C_1}{C_2} + C_3 e^{k_2 t} + C_4 e^{-k_2 t}$$

where the $C_i$'s and $k_2$ are constants; $k_2$ satisfies the characteristic equation of (20),

$$k_2^2 = C_2.$$
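The general solution of (20) can be confirmed symbolically. A sketch with sympy, renaming $C_1, C_2$ to $a, b$ to avoid clashing with sympy's own integration-constant names:

```python
import sympy as sp

# Check the general solution of (20): u'' = a + b u, where a and b stand for
# the constants C1 = p^T[f,[f,g]] and C2 = p^T[g,[f,g]] of the note.
t = sp.symbols('t')
a = sp.symbols('a')
b = sp.symbols('b', positive=True)   # b > 0 gives real exponents k2 = sqrt(b)
u = sp.Function('u')

sol = sp.dsolve(sp.Eq(u(t).diff(t, 2), a + b * u(t)), u(t))
# the returned expression must satisfy the ODE identically
residual = sp.simplify(sol.rhs.diff(t, 2) - a - b * sol.rhs)
print(sol.rhs)
```

sympy returns a particular term $-a/b$ plus exponentials $e^{\pm\sqrt{b}\,t}$, matching the stated general solution with $k_2^2 = b$.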

Remark: For the single-input nilpotent control system with $\mathrm{ad}_L^3 L = 0$, the minimum-fuel control is given by (20), which is a generalization of [1]. For the multiinput system the situation is far more complicated, as can be seen from (19); the solution of $u^*(t)$ from (19) is not a simple task, and numerical integration is helpful in the present case. We can therefore safely say that the result of [1] is due not only to the nilpotent structure of the system's Lie algebra $L$ but also to the fact that the systems considered are single-input. Although it is well known [5] that every finite-dimensional nilpotent Lie algebra has a matrix representation, it is not convenient to analyze the problem in the matrix setting. This is different from the bilinear case, in which a matrix representation is provided with the problem.

IV. CONCLUSION

We have considered the optimal control problem (1), (2) when the system (2) under investigation is such that its Lie algebra $L$, defined in (3), is nilpotent, i.e., $\mathrm{ad}_L^k L = 0$ for some positive integer $k$. The key equations for the optimal control $u^*$ are (14), which constitute a hierarchy of necessary conditions for $u^*$. These equations play a crucial role in obtaining the open-loop optimal control $u^*(t)$, at least for the cases $k = 1, 2, 3$ studied in this note. The result of [1] for single-input bilinear systems was thus naturally generalized to system (2), and it was also stressed that their results are due to two properties of the systems considered there: single input and nilpotent $L$. Since the systems (2) with nilpotent $L$ are of special interest, as mentioned in Section II, other control aspects of such systems deserve extensive and intensive research in the future.

ACKNOWLEDGMENT

The authors would like to express their sincere thanks to one of the reviewers for his (or her) constructive comments. The suggestion of the Lemma has greatly improved earlier versions of this note.

REFERENCES

[1] S. P. Banks and M. K. Yew, "On the optimal control of bilinear systems and its relation to Lie algebras," Int. J. Contr., vol. 43, no. 3, pp. 891-900, 1986.
[2] H. Bourdache-Siguerdidjane and M. Fliess, "Optimal feedback control of nonlinear systems," Automatica, vol. 23, no. 3, pp. 365-372, 1987.
[3] A. Bressan, "Local asymptotic approximation of nonlinear control systems," Int. J. Contr., vol. 41, no. 5, pp. 1331-1336, 1985.
[4] H. Hermes, "Nilpotent approximation of control systems and distributions," SIAM J. Contr. Optimiz., vol. 24, no. 4, pp. 731-736, 1986.
[5] A. A. Sagle and R. E. Walde, Introduction to Lie Groups and Lie Algebras. New York: Academic, 1973.
[6] A. J. van der Schaft, "On symmetries in optimal control," in Proc. 25th Conf. Decision Contr., Athens, Greece, Dec. 1986.
[7] A. E. Bryson and Y. C. Ho, Applied Optimal Control. New York: Wiley, 1975.
[8] H. Bourdache-Siguerdidjane, "On application of a new method for computing optimal nonlinear feedback controls," Opt. Contr. Appl. Methods, vol. 8, pp. 397-409, 1987.
[9] A. Isidori, Nonlinear Control Systems: An Introduction (Lect. Notes Contr. Inf. Sci., vol. 72). New York: Springer-Verlag, 1985.
[10] H. J. Sussmann, "Lie brackets, real analyticity and geometric control," in Differential Geometric Control Theory (Progress in Mathematics, vol. 27), R. W. Brockett, R. S. Millman, and H. J. Sussmann, Eds. Boston: Birkhäuser, 1983, pp. 1-116.

Optimal Control Via Fourier Series of Operational Matrix of Integration

Y. ENDOW

Abstract—The state equations of an optimal regulator problem are given in terms of the truncated Fourier series and the associated operational matrix of integration. An effective computational algorithm is developed to calculate the expansion coefficients of the derivatives of the state variables, saving computer storage and time and minimizing the computational error. An illustrative example is also given, and satisfactory computational results are obtained.

I. INTRODUCTION

In recent years orthogonal functions have been used by a number of researchers to solve control problems. The objective is to obtain efficient algorithms, and hence to exploit the computational capacity of computers. The main characteristic of this technique is that it reduces the differential equation involved in the problem to an algebraic equation in terms of the orthogonal functions and the operational matrix of integration associated with these functions. Typical examples of the orthogonal functions are the Walsh [1], block-pulse [2], Laguerre [3], Legendre [4], Chebyshev [5], [6], Fourier [7], [8], and polynomial [9] functions.

In this note the Fourier series operational matrix of integration is used to determine an optimal control for a linear regulator problem. This approach has advantages due mainly to the use of sinusoidal functions, since they are widely used in engineering fields and their properties are well known. In addition, the algorithm is comparatively simple and does not require excessive memory, so it is suitable for microprocessors.

Manuscript received May 5, 1987; revised December 21, 1987 and May 26, 1988.
The author is with the Department of Industrial Engineering, Chuo University, Kasuga, Bunkyo-ku, Tokyo, Japan.
IEEE Log Number 8927782.
