
1. INTRODUCTION

1.2. Outline

The organization of this thesis is as follows. In Chapter 2, we recall some basic tools and theory, including the Fault Identification Filter, the short-time Fourier transform, Variable Structure Control, and adaptive control. In Chapter 3, we first introduce the power system model proposed by Dobson and Chiang [9]; we then apply the FIDF to the detection of voltage collapse in a power system. In Chapter 4, we establish a model of a power system with a tap changer, and a Variable Structure Control scheme is applied to adjust the tap-changer ratio for the purpose of voltage regulation. In Chapter 5, a scheme for the prevention of voltage collapse is proposed, and simulation results demonstrate its effectiveness. Finally, conclusions and suggestions for further research are given in Chapter 6.

CHAPTER 2 Preliminaries

In this chapter we review some basic tools and theory, including the Fault Identification Filter [5,18], the short-time Fourier transform [11], Variable Structure Control [25,29], and adaptive control [25]. These results will be employed in the following chapters to develop the detection of voltage collapse and voltage regulation for the electric power system.

2.1 Fault Identification Filter (FIDF)

The Fault Identification Filter is a tool that provides an efficient approach to detecting the appearance of faults in a control system. In this section we recall the FIDF design results presented in [5].

Consider a linear system given by

ẋ(t) = A x(t) + B u(t) + E f(t) (2.1)

y(t) = C x(t) + D u(t) (2.2)

where x(t) is the state vector, and u(t), f(t), and y(t) are the input vector, the fault vector, and the output vector, respectively. From (2.1) and (2.2), by taking the Laplace transform (with zero initial state), we have

y(s) = Gu(s) u(s) + Gf(s) f(s) (2.3)

where

Gu(s) = C(sI − A)⁻¹B + D (2.4)

Gf(s) = C(sI − A)⁻¹E (2.5)

The FIDF generates a residual vector from the plant input and output:

r(s) = H1(s) u(s) + H2(s) y(s) (2.6)

The design requirement is that the residual depend on the fault vector alone, through a diagonal, proper, and stable transfer matrix T(s):

r(s) = T(s) f(s) (2.7)

Substituting (2.3) into (2.6), and choosing H1(s) and H2(s) such that H1(s) + H2(s)Gu(s) = 0, the residual reduces to

r(s) = H2(s) Gf(s) f(s) (2.8)

The configuration of a FIDF is shown in Figure 2.1.

Figure 2.1: FIDF configuration (the plant input u passes through H1(s), the plant output y through H2(s), and the sum of the two filter outputs forms the residual r(s))

To fulfill the requirement of (2.7), we first assume that Gf(s) as given by (2.5) is invertible. The FIDF design procedure given in [5] can then be summarized as the following algorithm.

Algorithm 1 (FIDF design procedure)

Step 1 : Construct H2(s) so that the transfer matrix H2(s)Gf(s) is diagonal, proper, and stable.

Step 2 : Determine H1(s) such that H1(s) + H2(s)Gu(s) = 0.

Step 3 : Establish and check r(s) according to Eq. (2.6).

Under the procedure of Algorithm 1, it follows from Eq. (2.8) that the residual vector is influenced only by the fault vector. Thus, by properly checking the value of the residual vector, as listed in Step 3 of Algorithm 1 above, one can detect a system fault accurately. In addition to the effect of the fault vector, the system output is also affected by a nonzero initial state. Since the objective is that the residual be affected only by the fault vector, the response to a nonzero initial state should decay to zero. This implies that the matrix A in Eq. (2.1) should also be required to be stable.
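To make Algorithm 1 concrete, the following sketch works through the three steps for a hypothetical scalar plant; the transfer functions Gu, Gf and the filter choices are our own toy illustration, not the design from [5].

```python
import numpy as np

# Hypothetical scalar plant: y(s) = Gu(s)u(s) + Gf(s)f(s).
Gu = lambda s: 1.0 / (s + 2.0)            # input-to-output transfer function
Gf = lambda s: 1.0 / (s + 3.0)            # fault-to-output transfer function

# Step 1: choose H2(s) so that H2(s)Gf(s) = 1/(s + 5) is proper and stable.
H2 = lambda s: (s + 3.0) / (s + 5.0)
# Step 2: determine H1(s) such that H1(s) + H2(s)Gu(s) = 0.
H1 = lambda s: -H2(s) * Gu(s)

# Step 3: the residual r(s) = H1(s)u(s) + H2(s)y(s) should then depend on
# the fault alone, i.e. r(s) = H2(s)Gf(s)f(s), whatever the input u is.
for w in (0.1, 1.0, 10.0):
    s = 1j * w
    u, f = 1.0, 0.5                        # arbitrary test values
    y = Gu(s) * u + Gf(s) * f
    r = H1(s) * u + H2(s) * y
    assert abs(r - H2(s) * Gf(s) * f) < 1e-12
print("residual depends only on the fault")
```

The check confirms that the input contribution cancels exactly, leaving the residual sensitive to the fault alone, as in Eq. (2.8).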

2.2 Short-Time Fourier Transform

The short-time Fourier transform is the most widely used method for studying nonstationary signals. The concept behind it is simple and powerful: break the signal up into small time segments and Fourier-analyze each segment to ascertain the frequencies that existed in it. The totality of such spectra then indicates how the spectrum varies in time.

2.2.1 Window function

If we are interested in a desired portion of a signal around time t, it can be obtained by multiplying the original signal by a window function that emphasizes the signal in the time interval centered at t and suppresses the signal at other times.

Let φ(t) be a real-valued window function. Applying the window function to the original signal, we obtain the information of f(t) near t = b, and express this as

fb(t) = f(t) φ(t − b) (2.9)

where b is a sliding factor: we can slide the window function along the time axis to analyze the local behavior of the function f(t) in different intervals.

Figure 2.2: A rectangular window function centered at 0 with support [−τ, τ]

A window function has two particularly important parameters: its center and its width. It is clear that the center and the standard width of the window function in Figure 2.2 are 0 and 2τ, respectively. For a general window function φ(t), we define its center t* as

t* = (1/‖φ‖²) ∫ t |φ(t)|² dt (2.10)

and the root-mean-square (RMS) radius ∆φ as

∆φ = (1/‖φ‖) [ ∫ (t − t*)² |φ(t)|² dt ]^(1/2) (2.11)

The function φ(t) is called a time window. For the window of Figure 2.2, one can use (2.10) and (2.11) to verify that t* = 0 and ∆φ = τ/√3. Therefore, the RMS width 2∆φ is smaller than the standard width by a factor of 1/√3.
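These two definitions are easy to check numerically. The sketch below evaluates (2.10) and (2.11) for the rectangular window on [−τ, τ] by simple Riemann sums; the discretization and the value of τ are our own choices.

```python
import numpy as np

# Rectangular window of Figure 2.2: phi(t) = 1 on [-tau, tau], 0 elsewhere.
tau = 2.0
t = np.linspace(-tau, tau, 200001)
dt = t[1] - t[0]
phi = np.ones_like(t)

norm2 = np.sum(phi**2) * dt                  # ||phi||^2
t_star = np.sum(t * phi**2) * dt / norm2     # center, Eq. (2.10)
radius = np.sqrt(np.sum((t - t_star)**2 * phi**2) * dt / norm2)  # RMS radius, Eq. (2.11)

print(t_star, radius, tau / np.sqrt(3))      # expect t* = 0 and radius = tau/sqrt(3)
```

The computed radius agrees with τ/√3 to the accuracy of the discretization.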


From the function φ(t) described above, we can similarly define a frequency window φ̂(ω), with center ω* and RMS radius ∆φ̂ defined analogously to (2.10) and (2.11).

Theoretically, a function cannot be limited in both time and frequency simultaneously. For the window of Figure 2.2, one can verify that ω* = 0 and ∆φ̂ = ∞; this window is therefore the best possible time window but the worst frequency window.

2.2.2 Short-time Fourier transform

We want to obtain the properties of a signal f(t) in the neighborhood of some desired location in time t = b by multiplying f(t) by an appropriate window function φ(t) to produce the windowed function fb(t) = f(t)φ(t − b), and then taking the Fourier transform of fb(t). This is the short-time Fourier transform (STFT). Formally, we can define the STFT of a function f(t) with the window function φ(t) discussed in Section 2.2.1 in the time-frequency plane as

Gφf(b, ω) = ∫ f(t) φ(t − b) e^(−jωt) dt (2.14)

Because of the windowing nature of the STFT, this transform is referred to as the windowed Fourier transform.
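As a minimal numerical illustration (the Hann window, signal, and segment sizes are our own choices, not from [11]), the sketch below computes the STFT magnitude of a two-tone signal by sliding a window and applying the FFT to each segment:

```python
import numpy as np

fs = 1000                                      # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
sig = np.where(t < 0.5,
               np.sin(2 * np.pi * 50 * t),     # 50 Hz during the first half
               np.sin(2 * np.pi * 200 * t))    # 200 Hz during the second half

win_len, hop = 128, 64
window = np.hanning(win_len)                   # the window phi(t - b)
frames = []
for start in range(0, len(sig) - win_len + 1, hop):
    seg = sig[start:start + win_len] * window  # f(t) * phi(t - b)
    frames.append(np.abs(np.fft.rfft(seg)))    # |G_phi f(b, omega)|
stft = np.array(frames)                        # rows: time b, columns: frequency

freqs = np.fft.rfftfreq(win_len, 1 / fs)
# Dominant frequency shifts from roughly 50 Hz (first frame) to roughly
# 200 Hz (last frame), revealing how the spectrum varies in time.
print(freqs[np.argmax(stft[0])], freqs[np.argmax(stft[-1])])
```

Each row of `stft` is one spectrum of the totality described above; stacking them over b gives the familiar spectrogram picture.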


2.2.3 Time-Frequency Window

Let us consider the window function φ(t) in (2.15). If t* is the center and ∆φ the radius of the window function, then (2.14) gives the information of the function f(t) in the time window

[t* + b − ∆φ , t* + b + ∆φ] (2.16)

To derive the corresponding window in the frequency domain, apply Parseval's identity to (2.14). We have (using that φ is real and even)

Gφf(b, ω) = (1/2π) ∫ f̂(ξ) φ̂(ξ − ω) e^(jb(ξ−ω)) dξ = e^(−jbω) [f̂(ξ) φ̂(ξ − ω)]∨(b) (2.17)

where the symbol "∨" represents the inverse Fourier transform. Observe that (2.17) has a form similar to (2.14). If ω* is the center and ∆φ̂ is the radius of the window φ̂, then (2.17) gives the information of f̂ in the frequency window

[ω* + ω − ∆φ̂ , ω* + ω + ∆φ̂] (2.18)

Because of the similarity of the representations in (2.14) and (2.17), the STFT gives information about the function f(t) in the time-frequency window

[t* + b − ∆φ , t* + b + ∆φ] × [ω* + ω − ∆φ̂ , ω* + ω + ∆φ̂] (2.19)

Figure 2.3 represents graphically the notion of the time-frequency window given by (2.19). Here we have assumed that t* = ω* = 0.


Figure 2.3: Time-frequency window for the short-time Fourier transform (t* = ω* = 0)

2.3 Variable Structure Control

Variable Structure Control (VSC) has the advantages of fast response and low sensitivity to system uncertainties and disturbances. In this thesis, we adopt VSC schemes to design our controllers; in this section we first review some basic concepts of VSC theory. Consider the single-input dynamic system

x^(n) = f(X) + b(X) u (2.21)

where u is the control input, x is the output of interest, and X = [x, ẋ, …, x^(n−1)]^T is the state vector. The dynamics f(X) and b(X) (in general, nonlinear) are not exactly known, but the extent of the imprecision on f(X) is upper bounded by a known continuous function of X, and the control gain b(X) is of known sign and bounded by known continuous functions of X. For example, the inertia of a mechanical system is only known to a certain accuracy, and friction models only describe part of the actual friction forces.

13

2.3.1 Sliding Surfaces

The control problem is to get the state X to track a specific time-varying state Xd = [xd, ẋd, …, xd^(n−1)]^T in the presence of the model imprecision on f(X) and b(X). In a second-order system, for example, position or velocity cannot "jump", so that any desired trajectory feasible from t = 0 necessarily starts with the same position and velocity as those of the plant:

Xd(0) = X(0) (2.22)

Otherwise, tracking can only be achieved after a transient. Let

X̃ = X − Xd = [x̃, x̃̇, …, x̃^(n−1)]^T

be the tracking error vector. Furthermore, let us define a time-varying surface S(t) in the state space R^n by the scalar equation s(X; t) = 0, where

s(X; t) = (d/dt + λ)^(n−1) x̃ (2.23)

and λ is a strictly positive constant. Given initial condition (2.22), the tracking problem X ≡ Xd is equivalent to that of remaining on the surface S(t) for all t > 0; indeed s ≡ 0 represents a linear differential equation whose unique solution is x̃ ≡ 0, given initial conditions (2.22).

Thus, the problem of tracking the n-dimensional vector Xd can be reduced to that of keeping the scalar quantity s at zero. More precisely, the problem of tracking the n-dimensional vector Xd can in effect be replaced by a 1st-order stabilization problem in s. Indeed, since from (2.23) the expression of s contains x̃^(n−1), we only need to differentiate s once for the input u to appear. Furthermore, bounds on s can be directly translated into bounds on the tracking error vector X̃, and therefore the scalar s represents a true measure of tracking performance. The 1st-order problem of keeping the scalar s at zero can then be achieved by choosing the control law u of the system (2.21) such that, outside of S(t),

(1/2) d(s²)/dt ≤ −η |s| (2.24)

where η is a strictly positive constant. Practically, (2.24) states that the squared "distance" to the surface, as measured by s², decreases along system trajectories. Thus, it constrains trajectories to point towards the surface S(t), as illustrated in Figure 2.4.

In particular, once on the surface, the system trajectories remain on the surface. In other words, satisfying sliding condition (2.24) makes the surface an invariant set. Furthermore, as we shall see, (2.24) also implies that some disturbances or dynamic uncertainties can be tolerated while still keeping the surface an invariant set. Graphically, this corresponds to the fact that in Figure 2.4 the trajectories off the surface can "move" while still pointing towards the surface. A surface S(t) verifying (2.24) is referred to as a sliding surface, and the system's behavior once on the surface is called the sliding mode.

Figure 2.4: The sliding condition


Another interesting aspect of the invariant set S(t) is that once on it, the system trajectories are defined by the equation of the set itself, namely

(d/dt + λ)^(n−1) x̃ = 0

In other words, the geometric interpretation of definition (2.23) allows us, in effect, to replace an nth-order tracking problem by a 1st-order stabilization problem. Furthermore, definition (2.23) implies that once on the surface, the tracking error tends exponentially to zero, with a time constant (n−1)/λ (from the sequence of (n−1) filters of time constants equal to 1/λ).

The typical system behavior implied by satisfying sliding condition (2.24) is illustrated in Figure 2.5 for n = 2. The sliding surface is a line in the phase plane, of slope −λ, containing the (time-varying) point Xd = [xd, ẋd]^T. Starting from any initial condition, the state trajectory reaches the time-varying surface in a finite time smaller than |s(t = 0)|/η, and then slides along the surface towards Xd exponentially, with a time constant equal to 1/λ.
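This reaching-then-sliding behavior can be reproduced in simulation. The toy second-order plant, desired trajectory, and gains below are our own illustrative choices for (2.23) and (2.24), not a system from this thesis:

```python
import numpy as np

lam, k, dt = 2.0, 5.0, 1e-4
f_true = lambda x, v: -0.5 * v + 0.2 * np.sin(x)   # actual (uncertain) dynamics
f_hat = lambda x, v: -0.5 * v                      # model used by the controller

x, v = 1.0, 0.0                                    # initial state
s_hist, e_hist = [], []
for i in range(int(5.0 / dt)):
    t = i * dt
    e, de = x - np.sin(t), v - np.cos(t)           # tracking error x~ and its rate
    s = de + lam * e                               # sliding variable (2.23), n = 2
    # Control: model-based equivalent part plus a switching term -k*sgn(s)
    # large enough to enforce the sliding condition (2.24) despite model error.
    u = -f_hat(x, v) + (-np.sin(t)) - lam * de - k * np.sign(s)
    a = f_true(x, v) + u                           # true acceleration x''
    x, v = x + v * dt, v + a * dt                  # Euler integration
    s_hist.append(s)
    e_hist.append(e)

print(abs(s_hist[-1]), abs(e_hist[-1]))            # both small at t = 5
```

In the run above, s reaches (a small neighborhood of) zero in finite time, after which the tracking error decays exponentially with time constant 1/λ, matching the behavior sketched in Figure 2.5.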


Figure 2.5: Graphical interpretation of equations (2.23) and (2.24) (n=2)

In summary, the idea behind equations (2.23) and (2.24) is to choose a well-behaved function of the tracking error, s, according to (2.23), and then select the feedback control law u in system (2.21) such that s² remains a Lyapunov-like function of the closed-loop system, despite the presence of model uncertainties and disturbances.

The controller design procedure then consists of two steps. First, a feedback control law u is selected so as to verify the sliding condition. However, in order to account for the presence of modeling uncertainties and disturbances, the control law has to be discontinuous across S(t). Since the implementation of the associated control switching is necessarily imperfect (for example, in practice switching is not instantaneous, and the value of s is not known with infinite precision), this leads to chattering, as shown in Figure 2.6.


Figure 2.6: Chattering as a result of imperfect control switching

Chattering is undesirable in practice, since it involves high control activity and may further excite high-frequency dynamics neglected in the course of modeling (such as unmodeled structural modes, neglected time delays, and so on). Thus, in a second step, the discontinuous control law u is suitably smoothed to achieve an optimal trade-off between control bandwidth and tracking precision: while the first step accounts for parametric uncertainty, the second step achieves robustness to high-frequency unmodeled dynamics.
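One standard smoothing (a generic sketch of the boundary-layer idea, not the specific scheme used later in this thesis) replaces the discontinuous sgn(s) by a saturation function sat(s/Φ) that interpolates linearly inside a thin layer of thickness Φ around the surface:

```python
import numpy as np

def sat(s, phi):
    """Saturation: s/phi inside the boundary layer |s| <= phi, +-1 outside."""
    return np.clip(s / phi, -1.0, 1.0)

# Inside the layer the control varies continuously instead of switching,
# which removes chattering at the price of a small tracking tolerance.
phi = 0.05
for s in (-1.0, -0.01, 0.0, 0.02, 1.0):
    print(s, sat(s, phi))
```

Outside the layer the control is unchanged, so the reaching behavior of (2.24) is preserved; inside, s is only guaranteed to stay within the layer rather than at exactly zero.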

2.3.2 Variable Structure Control Design

The implementation of Variable Structure Control (VSC) consists of two main phases. First, we construct the sliding surface so that the system state restricted to the sliding surface produces the desired behavior. Second, we construct switched feedback gains which drive the plant state trajectory to the sliding surface in finite time and keep it there. The method of equivalent control is a means of determining the system motion restricted to the sliding surface.


Suppose that at t0 the state trajectory of the plant intercepts the sliding surface and a sliding mode exists for all t > t0. The existence of a sliding mode implies (1) ṡ = 0 and (2) s = 0 for all t > t0. The system's motion on the sliding surface can be given an interesting geometric interpretation, as an "average" of the system's dynamics on both sides of the surface. The system while in sliding mode satisfies

ṡ = 0 (2.25)

By solving the above equation formally for the control input, we obtain an expression for u called the equivalent control, ueq, which can be interpreted as the continuous control law that would maintain ṡ = 0 if the dynamics were exactly known. For example, for a second-order system

ẍ = f + u (2.26)

with the sliding variable s = x̃̇ + λx̃, the equivalent control, i.e., the continuous control law that would achieve ṡ = 0, is

ueq = −f + ẍd − λx̃̇ (2.29)

and the system dynamics while in sliding mode is

ẍ = f + ueq = ẍd − λx̃̇ (2.30)

Geometrically, the equivalent control can be constructed as

ueq = α u⁺ + (1 − α) u⁻ (2.31)

i.e., as a convex combination of the values of u on both sides of the surface S(t). The value of α can again be obtained formally from (2.25), which corresponds to requiring that the system trajectories be tangent to the surface. This intuitive construction is summarized in Figure 2.7, where f⁺ = [ẋ, f + u⁺]^T, and similarly f⁻ = [ẋ, f + u⁻]^T and feq = [ẋ, f + ueq]^T. Its formal justification was derived in the early 1960s by the Russian mathematician A. F. Filippov.

Figure 2.7: Filippov's construction of the equivalent dynamics in sliding mode

Controller design is the second phase of the VSC design procedure. Here the goal is to determine switched feedback gains which drive the plant state trajectory to the sliding surface and maintain a sliding mode condition; the presumption is that the sliding surface has already been designed. Among several approaches (e.g., the diagonalization method and the hierarchical control method), augmenting the equivalent control is a popular one. This structure of the control of system (2.26) is

u = ueq + ure (2.32)

where ure is the discontinuous, or switched, part of (2.32). Considering the system (2.26), we have ueq = −f + ẍd − λx̃̇. In order to satisfy sliding condition (2.24), the switched part can, for instance, be chosen of the form ure = −k sgn(s), with the gain k positive and large enough to dominate the uncertainty in f.
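As a quick numerical check of (2.29) and (2.31), with made-up values of f, ẍd, λ, and x̃̇ (all hypothetical), the symmetric switching law u = ueq − k sgn(s) yields α = 1/2 in Filippov's construction:

```python
f, xdd_d, lam, de = 0.3, 1.0, 2.0, -0.4     # made-up f, xd'', lambda, x~'
k = 5.0

u_eq = -f + xdd_d - lam * de                # equivalent control, Eq. (2.29)
u_plus, u_minus = u_eq - k, u_eq + k        # u on either side of S(t)

# Solve (2.31), u_eq = alpha*u_plus + (1 - alpha)*u_minus, for alpha:
alpha = (u_eq - u_minus) / (u_plus - u_minus)
print(alpha, alpha * u_plus + (1 - alpha) * u_minus - u_eq)
```

For this symmetric switching law the "average" of the two control values exactly reproduces ueq, which is the geometric content of Figure 2.7.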

2.4 Adaptive Control

Many dynamic systems to be controlled have constant or slowly varying uncertain parameters. For instance, power systems may be subjected to large variations in loading conditions. Adaptive control is an approach to the control of such systems. The basic idea in adaptive control is to estimate the uncertain plant parameters (or, equivalently, the corresponding controller parameters) on-line based on the measured system signals, and to use the estimated parameters in the control input computation. An adaptive control system can thus be regarded as a control system with on-line parameter estimation.

An adaptive controller differs from an ordinary controller in that the controller parameters are variable, and there is a mechanism for adjusting these parameters on-line based on signals in the system. There are two main approaches for constructing adaptive controllers. One is the so-called model-reference adaptive control method, and the other is the so-called self-tuning method.


Model-reference adaptive control (MRAC)

Generally, a model-reference adaptive control system can be schematically represented by Figure 2.8. It is composed of four parts: a plant containing unknown parameters, a reference model for compactly specifying the desired output of the control system, a feedback control law containing adjustable parameters, and an adaptation mechanism for updating the adjustable parameters.

Figure 2.8: A model-reference adaptive control system

The plant is assumed to have a known structure, although its parameters are unknown. For linear plants, this means that the number of poles and the number of zeros are assumed to be known, but that the locations of these poles and zeros are not.

For nonlinear plants, this implies that the structure of the dynamic equations is known, but that some parameters are not.

A reference model is used to specify the ideal response of the adaptive control system to the external command. Intuitively, it provides the ideal plant response which the adaptation mechanism should seek in adjusting the parameters. The choice of the reference model is part of the adaptive control system design. This choice has to satisfy two requirements. On the one hand, it should reflect the performance specification of the control task, such as rise time, settling time, overshoot, or frequency-domain characteristics. On the other hand, this ideal behavior should be achievable by the adaptive control system, i.e., there are some inherent constraints on the structure of the reference model (e.g., its order and relative degree) given the assumed structure of the plant model.

The controller is usually parameterized by a number of adjustable parameters (implying that one may obtain a family of controllers by assigning various values to the adjustable parameters). The controller should have perfect tracking capacity in order to allow the possibility of tracking convergence. That is, when the plant parameters are exactly known, the corresponding controller parameters should make the plant output identical to that of the reference model. When the plant parameters are not known, the adaptation mechanism will adjust the controller parameters so that perfect tracking is asymptotically achieved. If the control law is linear in terms of the adjustable parameters, it is said to be linearly parameterized. Existing adaptive control designs normally require linear parametrization of the controller in order to obtain adaptation mechanisms with guaranteed stability and tracking convergence.

The adaptation mechanism is used to adjust the parameters in the control law. In MRAC systems, the adaptation law searches for parameters such that the response of the plant under adaptive control becomes the same as that of the reference model, i.e., the objective of the adaptation is to make the tracking error converge to zero. Clearly, the main difference from conventional control lies in the existence of this mechanism.

The main issue in adaptation design is to synthesize an adaptation mechanism which will guarantee that the control system remains stable and the tracking error converges to zero as the parameters are varied. Many formalisms in nonlinear control can be used to this end, such as Lyapunov theory, hyperstability theory, and passivity theory.

Although the application of one formalism may be more convenient than that of another, the results are often equivalent.
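The simplest instance of such an adaptation mechanism is the gradient ("MIT rule") update for a static-gain plant; the numbers below are a hypothetical toy, not a design from this thesis:

```python
kp, km = 2.0, 1.0          # unknown plant gain and reference-model gain
gamma, dt = 0.5, 0.01      # adaptation gain and integration step
theta, r = 0.0, 1.0        # adjustable controller parameter and constant command

for _ in range(5000):
    y = kp * theta * r     # plant output under the control u = theta * r
    ym = km * r            # ideal response specified by the reference model
    e = y - ym             # tracking error driving the adaptation
    theta -= gamma * e * ym * dt   # MIT rule: d(theta)/dt = -gamma * e * ym

print(theta)               # converges to km/kp = 0.5, where e = 0
```

Here the four blocks of Figure 2.8 are all visible: the plant (gain kp), the reference model (gain km), the linearly parameterized controller u = θr, and the adaptation law that drives the tracking error to zero.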

Self-tuning controllers (STC)

In non-adaptive control design (e.g., pole placement), one computes the parameters of the controller from those of the plant. If the plant parameters are not known, it is intuitively reasonable to replace them by their estimated values, as provided by a parameter estimator. A controller obtained by coupling a control law with an on-line (recursive) parameter estimator is called a self-tuning controller. Figure 2.9 illustrates the schematic structure of such an adaptive controller. Thus, a self-tuning controller is a controller which performs simultaneous identification and control of the unknown plant.

Figure 2.9: A self-tuning controller

The operation of a self-tuning controller is as follows: at each time instant, the estimator sends to the controller a set of estimated plant parameters, computed from the past plant input u and output y; the controller computes the corresponding controller parameters, and then a control input u based on the controller parameters and the measured signals; this control input u causes a new plant output to be generated, and the whole cycle of parameter and input updates is repeated. Note that the controller parameters are computed from the estimates of the plant parameters as if they were the true plant parameters. This idea is often called the certainty equivalence principle.

Parameter estimation can be understood simply as the process of finding a set of parameters that fits the available input-output data from a plant. This is different from parameter adaptation in MRAC systems, where the parameters are adjusted so that the tracking errors converge to zero. For linear plants, many techniques are available to estimate the unknown parameters; the most popular is the least-squares method and its extensions. There are also many control techniques for linear plants, such as pole placement, PID, LQR (linear quadratic regulator), minimum-variance control, or H∞ designs. By coupling different control and estimation schemes, one can obtain a variety of self-tuning regulators. The self-tuning method can also be applied to some nonlinear systems without any conceptual difference.
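A minimal self-tuning loop built from these two ingredients can be sketched as follows; the scalar plant, the recursive least-squares estimator, and all numerical values are our own toy choices:

```python
import numpy as np

a_true = 2.5                       # unknown plant gain: y = a_true * u (+ noise)
a_hat, P = 1.0, 100.0              # initial estimate and estimator "covariance"
r = 4.0                            # desired (reference) plant output
rng = np.random.default_rng(0)

for _ in range(50):
    u = r / a_hat                                  # certainty-equivalence control
    y = a_true * u + 0.01 * rng.standard_normal()  # plant response with small noise
    # Recursive least-squares update of a_hat from the regression y = a * u:
    K = P * u / (1.0 + u * P * u)
    a_hat += K * (y - a_hat * u)
    P *= (1.0 - K * u)

print(a_hat)   # close to a_true = 2.5
```

The loop mirrors Figure 2.9: the estimator refines a_hat from (u, y) data, and the controller treats a_hat as if it were the true gain, which is exactly the certainty equivalence principle.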

In the basic approach to self-tuning control, one estimates the plant parameters and then computes the controller parameters. Such a scheme is often called indirect adaptive control.
