
Literature Review and Objectives

Methods for Optimal Control Problems

Optimal control problems can be solved by a variational method (Pontryagin et al., 1962) or by nonlinear programming approaches (Huang and Tseng, 2003, 2004; Hu et al., 2002; Jaddu and Shimemura, 1999). The variational, or indirect, method is based on the solution of the first-order necessary conditions for optimality obtained from Pontryagin’s maximum principle (Pontryagin et al., 1962). For problems without inequality constraints, the optimality conditions can be formulated as a set of differential-algebraic equations, often in the form of a two-point boundary value problem (TPBVP). The TPBVP can be addressed using many approaches, including single shooting, multiple shooting, invariant embedding, or a discretization method such as collocation on finite elements. If, however, active inequality constraints must be handled, finding the correct switching structure, as well as suitable initial guesses for the state and costate variables, is often very difficult.
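As a concrete illustration of the single-shooting idea, the sketch below solves the TPBVP obtained from Pontryagin’s maximum principle for a hypothetical minimum-energy double-integrator problem; the plant, cost, boundary conditions, and solver settings are illustrative assumptions rather than examples taken from the cited works.

```python
# Single shooting on the PMP two-point boundary value problem for an assumed
# minimum-energy problem:  minimize 0.5*integral(u^2)  subject to
#   x1' = x2,  x2' = u,  x(0) = (0, 0),  x(1) = (1, 0).
# PMP gives u = -lam2 with costate equations lam1' = 0, lam2' = -lam1;
# the unknowns are the initial costates, adjusted until the terminal
# state conditions are met.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def dynamics(t, z):
    x1, x2, lam1, lam2 = z
    u = -lam2                       # stationarity condition dH/du = 0
    return [x2, u, 0.0, -lam1]      # state and costate equations

def shooting_residual(lam0):
    z0 = [0.0, 0.0, lam0[0], lam0[1]]
    sol = solve_ivp(dynamics, (0.0, 1.0), z0, rtol=1e-9, atol=1e-9)
    # mismatch between the integrated and required terminal states
    return [sol.y[0, -1] - 1.0, sol.y[1, -1] - 0.0]

lam_opt = fsolve(shooting_residual, x0=[1.0, 1.0])
print("initial costates found by shooting:", lam_opt)
```

Multiple shooting and collocation follow the same pattern but introduce additional unknowns at intermediate grid points, which typically improves robustness to poor initial guesses for the costates.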

Much attention has been paid in the literature to the development of numerical methods for solving optimal control problems (Hu et al., 2002; Pytlak, 1999; Jaddu and Shimemura, 1999; Teo and Wu, 1984; Polak, 1971); the most popular approach in this field is the reduction of the original problem to an NLP problem. Nevertheless, despite the extensive use of nonlinear programming methods to solve optimal control problems, engineers still spend much effort reformulating nonlinear programming problems for different control problems.

Moreover, implementing the corresponding programs for the resulting nonlinear programming problems is tedious and time-consuming. Therefore, a general OCP solver coupled with a systematic computational procedure for various optimal control problems has become imperative for engineers, particularly for those who are inexperienced in optimal control theory or numerical techniques.
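As an illustration of the NLP reduction mentioned above, the following hedged sketch parameterizes the control as a piecewise-constant function and passes the resulting finite-dimensional problem to a general-purpose optimizer; the first-order plant, quadratic cost, horizon, and bounds are assumptions chosen only for illustration and are not taken from the cited references.

```python
# Control parameterization: the control is held constant on N segments, the
# state is integrated numerically, and the OCP becomes an NLP in the N
# segment values, solved here with a bound-constrained quasi-Newton method.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 2.0                      # number of control segments, horizon
dt = T / N

def cost(u_params):
    x, J = 1.0, 0.0                 # assumed initial state x(0) = 1
    for u in u_params:              # one constant control per segment
        for _ in range(10):         # simple forward-Euler sub-steps
            J += (x**2 + u**2) * (dt / 10)
            x += (-x + u) * (dt / 10)   # assumed plant: x' = -x + u
    return J

res = minimize(cost, x0=np.zeros(N),
               bounds=[(-1.0, 1.0)] * N,   # bounded control
               method="L-BFGS-B")
print("optimal cost:", res.fun)
```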

Additionally, in many practical engineering applications, the control action is restricted to a set of discrete values. These systems can be classified as switched systems consisting of several subsystems and switching laws that orchestrate the active subsystem at each time instant. Optimal control problems for switched systems, which require solution of both the optimal switching sequences and the optimal continuous inputs, have recently drawn the attention of many researchers. The primary difficulty with these switched systems is that the range set of the control is discrete and hence not convex. Moreover, choosing the appropriate elements from the control set in an appropriate order is a nonlinear combinatorial optimization problem. In the context of time-optimal control problems, as pointed out by Lee et al. (1997), serious numerical difficulties may arise in the process of identifying the exact switching points. Therefore, an efficient numerical method is still needed to determine the exact control switching times in many practical engineering problems.

Time-Optimal Control Problems

The time-optimal control problem (TOCP) is one of the most common types of OCP, one in which only time is minimized and the control is bounded. In a TOCP, a TPBVP is usually derived by applying Pontryagin’s maximum principle (PMP). In general, time-optimal control solutions are difficult to obtain (Pinch, 1993) because, unless the system is of low order, time invariant, and linear, there is little hope of solving the TPBVP analytically (Kirk, 1970). Therefore, in recent research, many numerical techniques have been developed and adopted to solve time-optimal control problems.

One of the most common types of control function in time-optimal control problems is the piecewise-constant function, whereby a sequence of constant inputs with suitable switching times is used to control a given system. Additionally, when the control is bounded, a very commonly encountered type of piecewise-constant control is the bang-bang type, which switches between the upper and lower bounds of the control input. When the controls are assumed to be of the bang-bang type, the time-optimal control problem becomes one of determining the switching times, and several methods for doing so have been studied extensively in the literature (see, e.g., Kaya and Noakes, 1996; Bertrand and Epenoy, 2002; Simakov et al., 2002). However, as already mentioned, and in contrast to practical reality, these methods require that the number of switching times be known before their algorithms can be applied. To overcome the numerical difficulties arising during the process of finding the exact switching points, Lee et al. (1997) proposed the control parameterization enhancing transform (CPET), which they later extended to handle optimal discrete-valued control problems (Lee et al., 1999) and applied to the sensor-scheduling problem (Lee et al., 2001).
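The short sketch below illustrates switching-time optimization for a bang-bang control when, as these methods require, the number of switches and the sign sequence are fixed in advance; the double-integrator plant, boundary conditions, and sign sequence are assumptions made purely for illustration and are unrelated to the CPET formulation itself.

```python
# Bang-bang switching-time optimization under an assumed fixed sign sequence:
# drive the double integrator x1' = x2, x2' = u, |u| <= 1 from (1, 0) to the
# origin with u = -1 then u = +1, choosing the two segment durations so that
# the total time is minimized subject to the terminal conditions.
import numpy as np
from scipy.optimize import minimize

def terminal_state(durations, signs=(-1.0, 1.0), x0=(1.0, 0.0)):
    x1, x2 = x0
    for d, u in zip(durations, signs):      # closed-form update per segment
        x1 += x2 * d + 0.5 * u * d**2
        x2 += u * d
    return np.array([x1, x2])               # should be zero at the optimum

res = minimize(lambda d: d.sum(),            # objective: total time
               x0=[0.5, 0.5],
               bounds=[(0.0, None)] * 2,     # nonnegative segment durations
               constraints={"type": "eq", "fun": terminal_state},
               method="SLSQP")
print("switching time:", res.x[0], "final time:", res.x.sum())
```

With this fixed-structure parameterization the switching times appear as ordinary decision variables; the numerical difficulty noted above lies in locating them accurately, which is what the CPET was designed to address.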

In a similar manner, this project focuses on developing a numerical method to solve time-optimal control problems. The method consists of a two-phase scheme: first, the switching times are calculated using existing optimal control methods; second, the resulting information is used to compute the discrete-valued control strategy. The proposed algorithm, which integrates the admissible optimal control problem formulation with an enhanced branch-and-bound method (Tseng et al., 1995), is then implemented and applied to several examples.

Objectives

The major purpose of this project is to develop a computational method for solving time-optimal control problems and finding the corresponding discrete-valued optimal control laws. A second purpose is to implement a general OCP solver that gives engineers a systematic and efficient procedure for solving their own optimal control problems.
