Distributed Optimal Power Flow With Discrete Control Variables of Large Distributed Power Systems

Ch’i-Hsin Lin and Shin-Yeu Lin

Abstract—In this paper, we propose a distributed algorithm to solve the as-yet-unexplored distributed optimal power flow problem with discrete control variables of large distributed power systems. The proposed algorithm has two distinguishing features: 1) a distributed algorithm for solving the continuous distributed optimal power flow, which serves as a core technique in the framework of the ordinal optimization (OO) strategy, and 2) an implementation of the OO strategy in a distributed power system to select a good enough discrete control variable solution. We have tested the proposed algorithm for several cases on the IEEE 118-bus and Tai Power 244-bus systems using a 4-PC network. The test results demonstrate the validity, robustness, and excellent computational efficiency of the proposed distributed algorithm in obtaining a good enough feasible solution.

Index Terms—Discrete control variables, distributed computation, distributed optimal power flow, nonlinear programming, ordinal optimization.

NOMENCLATURE

Because many notations are involved, this section lists only those that appear frequently throughout the paper; the rest will be explained in context.

Total number of subsystems.

Vector of continuous variables consisting of the real and reactive power generations and bus complex voltages corresponding to subsystem .

Vector of continuous variables of the overall system.

Objective function of subsystem .

Objective function of the overall system.

Index set of subsystems connected to subsystem .

Manuscript received November 26, 2007; revised February 10, 2008. This work was supported in part by the National Science Council, Taiwan, R.O.C., under Grant NSC95-2221-E-009-099-MY2. Paper no. TPWRS-00865-2007.

C.-H. Lin is with the Department of Electronics Engineering, Kao Yuan University, Kaoshiung, Taiwan 821, R.O.C. (e-mail: chsinlin@cc.kyu.edu.tw).

S.-Y. Lin is with the Department of Electrical and Control Engineering, National Chiao Tung University, Hsinchu 300, Taiwan, R.O.C. (e-mail: sylin@cc.nctu.edu.tw).

Digital Object Identifier 10.1109/TPWRS.2008.926695

Vector of complex voltage on the boundary buses of subsystem , which connects with subsystem .

–dimensional vector of discrete control variables, such as switching shunt capacitor banks and transformer taps, corresponding to subsystem .

–dimensional vector of discrete control variables of the overall system.

Solution space of for subsystem .

Solution space of .

Real and reactive power balance equations of subsystem .

Inequality constraints in subsystem , such as security limits on voltage magnitudes, real power line flows, and real and reactive power generation limits.

Vector of continuous variables in the general CDOPF shown in (3) corresponding to subsystem .

Continuous variables on the boundary buses of subsystem , which connects with subsystem .

Equality constraints in the general CDOPF (3).

Represents the equality constraints on the boundary buses of subsystem but involving .

Partition of .

Inequality constraints in the general CDOPF (3).

Continuous version of corresponding to subsystem .

Optimal .

I. INTRODUCTION

ALTHOUGH the optimal power flow (OPF) problem has a long history in power system research [1]–[5], the study of distributed OPF was introduced only recently. Kim and Baldick proposed a coarse-grained distributed OPF algorithm in [6], and they also compared three decomposition coordination methods for implementing distributed OPF algorithms in [7]. Hur et al. evaluated the convergence rate of the auxiliary problem principle for a distributed OPF algorithm in [8]. In a more recent paper [9], Hur et al. considered the security limits for distributed OPF. Furthermore, Nogales et al. proposed a decomposition algorithm for the multiarea OPF problem in [10]. Chang and Lin proposed an MPBSG technique based parallel dual-type method for solving distributed OPF problems in [11], and a similar technique appeared in [12]. These excellent research works have made distributed OPF possible; however, issues of handling discrete control variables, such as switching shunt capacitor banks and transformer taps, in a large distributed power system are not explored in the above-mentioned papers.

Discrete control variables play an important role in centralized OPF and have been studied for years [13]–[16], including in more recent papers that use the ordinal optimization (OO) approach [17], the simulated annealing (SA) method [18], the genetic algorithm (GA) [19], the tabu search (TS) method [20], and evolutionary programming (EP) [21] as solution techniques. Discrete control variables remain just as important in distributed OPF. Thus, distributed optimal power flow with discrete control variables, which is abbreviated as DOPFD in this paper, of large distributed power systems is an important research topic to pursue. The DOPFD is a large-dimension distributed combinatorial optimization problem, which, in general, is computationally intractable. Thus, the purpose of this paper is to propose a computationally efficient distributed algorithm to solve the DOPFD for a good enough solution. The proposed distributed algorithm for the DOPFD possesses two distinguishing features: 1) a distributed algorithm for solving the continuous distributed OPF, which serves as a core technique in the framework of the OO strategy, and 2) an implementation of the OO strategy [22], [23] in a distributed power system to select a good enough discrete control variable solution.

In addition, we will implement the proposed distributed algorithm in a real PC network to demonstrate its validity. Thus, the contribution of this paper is that we not only propose a computationally efficient distributed algorithm to solve the DOPFD for a good enough solution but also implement it in a real computer network.

Fig. 1. Example system formed by three interconnected subsystems.

This paper is organized as follows. In Section II, we state the considered DOPFD mathematically. In Section III, we present the proposed distributed algorithm to solve the DOPFD for a good enough solution. In Section IV, we test the proposed distributed algorithm on the IEEE 118-bus and Tai Power (TP) 244-bus systems, which are arbitrarily partitioned into four subsystems, using a PC network. Finally, we draw a conclusion in Section V.

II. PROBLEM STATEMENT

The considered DOPFD problem can be stated as follows:

subject to

(1)

where denotes the set of tie lines between the pair of subsystems and , denotes the set of subsystem pairs connected by tie lines, denotes the real power flow over a tie line from subsystem to subsystem measured at the end bus in subsystem (which is indicated by the superscript), and and denote the lower and upper security power flow limits of the tie-line flow, respectively. A graphical illustration of these quantities and of the tie-line real power flows for a three-subsystem example is shown in Fig. 1. We can transform the inequality constraints (i.e., the security limits) on the tie-line real power flows into equality constraints and simple inequality constraints as follows:

(2)

where and denote the slack variables corresponding to in subsystem .
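The display equations were lost in extraction; the following LaTeX sketch restates the DOPFD (1) and the slack-variable transformation (2) in generic notation grounded in the Nomenclature. All symbols used here (F_i, x_i, u_i, g_i, h_i, P, s, and the index sets) are our own labels and may differ from the paper's.

```latex
% Hypothetical restatement of the DOPFD (1); symbol names are ours, not the paper's.
\begin{aligned}
\min_{x,\;u_i \in U_i}\quad & \sum_{i=1}^{N} F_i(x_i) \\
\text{s.t.}\quad
  & g_i(x_i, u_i, \{y_{ij}\}_{j \in N(i)}) = 0,   && i = 1,\dots,N
     \quad\text{(power balance of subsystem } i\text{)}\\
  & h_i(x_i, u_i) \le 0,                          && i = 1,\dots,N
     \quad\text{(security and generation limits)}\\
  & \underline{P}^{\,ij}_{lm} \le P^{\,ij}_{lm}(x) \le \overline{P}^{\,ij}_{lm},
                                                  && lm \in T_{ij},\ (i,j)\in\mathcal{P}
     \quad\text{(tie-line flow limits)}
\end{aligned}

% One common slack-variable transformation of the tie-line limits, cf. (2):
P^{\,ij}_{lm}(x) + s^{\,ij}_{lm} = \overline{P}^{\,ij}_{lm},
\qquad 0 \le s^{\,ij}_{lm} \le \overline{P}^{\,ij}_{lm} - \underline{P}^{\,ij}_{lm}.
```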

III. DISTRIBUTED ALGORITHM FOR SOLVING THE DOPFD

The difficulties of the DOPFD are twofold. The first is that, for a given , (1) is a large-dimension continuous distributed optimal power flow problem, which is abbreviated as CDOPF in this paper; thus, to evaluate the performance of a , we need to solve a CDOPF. The second is the enormous size of : for example, if there are, say, 40 control variables in the whole system and each one has, say, four possible discrete values, then there are 4^40 (about 1.2 x 10^24) possible . Therefore, if we employed the exhaustive search method to search for the optimal in , we would need to solve more than 10^24 CDOPFs. This is definitely computationally intractable, not to mention the difficulty of developing a distributed algorithm for solving the CDOPF. Thus, to cope with the difficulty caused by the enormous size of , we will employ the OO strategy to select a good enough , instead of the optimal , in , and simultaneously solve the CDOPF under this . To accomplish this task, we need to 1) propose a distributed algorithm for solving CDOPFs in the framework of the OO strategy and 2) implement the OO strategy in a distributed power system to select a good enough . In the following, we present 1) first.

A. Distributed Algorithm for Solving the CDOPF

Since the CDOPF will appear in the OO strategy more than once, we will use a more general expression to describe the CDOPF.

The considered CDOPF can be stated in the following form:

subject to

(3)

in which we can partition into such that involves only, while involves . Thus, we may use the following to replace the equality constraints in (3):

and (4)

Our approach for solving the CDOPF is a combination of the successive quadratic programming (SQP) method and the dual pseudo quasi-Newton (DPQN) method [24], such that the quadratic programming problem (QPP) induced in the SQP method is solved by the DPQN method. The SQP method uses the following iterations to solve (3):

(5)

where is the iteration index and is a positive step-size. The in (5) is the optimal solution of the following QPP:

subject to

(6)

For the sake of notational simplicity, we drop the arguments in the functions , , , and . As indicated above, the QPP in (6) will be solved by the DPQN method; therefore, the DPQN method is an inner loop of the SQP method. This means that most of the computation of the proposed distributed algorithm for solving the CDOPF lies in the DPQN method, because the SQP method simply updates , by (5) once , is obtained.
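For concreteness, the following Python sketch shows the nested loop structure just described: an outer SQP loop whose search direction comes from an inner DPQN (dual ascent) loop. The function names, arguments, and convergence tests are placeholders of our own and are not the authors' code.

```python
import numpy as np

def dpqn_solve_qpp(lmbda0, dual_grad, approx_block_hessian, beta=0.9,
                   tol=1e-4, max_inner=200):
    """Inner loop: dual pseudo quasi-Newton iterations (10)/(18) for the QPP (6).
    dual_grad(lmbda) and approx_block_hessian(lmbda) are placeholders for the
    quantities built from (12)-(17); they are assumptions, not the paper's code."""
    lmbda = lmbda0.copy()
    for _ in range(max_inner):
        g = dual_grad(lmbda)                    # gradient of the dual function (8)
        B = approx_block_hessian(lmbda)         # block-diagonal approximate Hessian
        d_lmbda = np.linalg.solve(B, g)         # corresponds to (11)/(18)
        if np.linalg.norm(d_lmbda) < tol:
            break
        lmbda = lmbda + beta * d_lmbda          # dual update (10), constant step-size
    return lmbda

def sqp_solve_cdopf(z0, qpp_direction_from_dual, alpha=0.9, tol=1e-4, max_outer=50):
    """Outer loop: SQP update (5); the primal search direction is recovered from
    the optimal dual solution returned by the DPQN inner loop via (19)-(22)."""
    z = z0.copy()
    for _ in range(max_outer):
        dz = qpp_direction_from_dual(z)         # calls the DPQN inner loop internally
        if np.linalg.norm(dz) < tol:
            break
        z = z + alpha * dz                      # constant step-size, e.g. 0.9
    return z
```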

Instead of solving (6) directly, the DPQN method solves the dual problem of (6), as stated in the following. Let denote the Lagrange multiplier vector associated with the equality constraints of the overall system in (6), and let denote the subvector of associated with the equality constraints corresponding to subsystem . We partition into , such that and are associated with the equality constraints and , respectively. Then the dual problem of (6) can be stated as follows [25]:

(7)

where the dual function is defined by (8), in which we have put the inequality constraints in (6) as the domain of , denoted by and defined by

(9)

The DPQN method uses the following iterations to solve the dual problem (7):

(10)

where is the iteration index and is a positive step-size.

The is obtained from solving the following linear equations:

(11)

where denotes the gradient of with respect to ; the block-diagonal matrix is an approximate Hessian of the dual function without considering the constraints , and this is the reason why we name (10)–(11) the dual pseudo quasi-Newton method. The formulae for calculating and are described below. The th diagonal block submatrix of , denoted by , is given by [25]

(12)

where

(13) (14) (15) (16)

and the matrix in (13)–(16) is defined by the following:

(17)

in which the matrix is an identity matrix with dimensions of , and is a small positive real number to make positive definite. Note that in (13)–(16), we do not consider the constraint . Since is block diagonal, we can decompose (11) into the following:

(18)

where . Clearly, is a negative definite matrix for every , and so is . Consequently, in (10) obtained from solving (18) is an ascent direction of at . We can partition into , which can be computed by the following [25]:

(19) (20)

in which , , is the optimal solution of the minimization problem on the RHS of (8). Thus, to compute using (19) and (20), we need to solve the minimization problem on the RHS of (8) first, as stated in the following. The constraint

set in (9) can be expressed as , where and . Though not trivial, the objective function of the minimization problem on the RHS of (8) is separable, as illustrated below. The coupling between subsystems is the last term inside the big bracket in (8). We define as the subvector of associated with the constraint , which is part of that involves . Thus , and the last term inside the big bracket in (8) regarding subsystem can be rewritten as

(21)

The relationship between , , , , and can be illustrated with the aid of Fig. 2.

Fig. 2. Relationship between , , g , g , Δy , and Δy .

In this figure, each boundary bus is associated with an equality constraint, a Lagrange multiplier, and a variable; however, due to space limitations, we mark only the necessary notations. The complicating variables in (21) are , , which do not belong to subsystem . However, taking the summation outside the big bracket in (8) into account and suitably rearranging terms, we can rewrite the last term in (21) in a form in which denotes the Lagrange multiplier vector associated with the equality constraints in subsystem but involving the variables in subsystem (see Fig. 2 for an example). Therefore, after such rearrangement, the minimization problem on the RHS of (8) is separable and can be decomposed into the following independent minimization subproblems: For

(22)

Once the optimal solution of (22), , is obtained for all , we can calculate by (19) and (20). Subsequently, can be solved from (18).
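Because the approximate dual Hessian is block diagonal, solving the full linear system (11) is equivalent to solving one small system (18) per subsystem, which is what makes the inner DPQN step decomposable. The toy Python check below illustrates this equivalence; the matrices are random stand-ins, not power-system data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-subsystem blocks B_i (stand-ins for (12)-(17)), each negative definite,
# plus the corresponding pieces of the dual gradient.
blocks = [-(A @ A.T + np.eye(n)) for n in (3, 4, 2)
          for A in [rng.standard_normal((n, n))]]
grads = [rng.standard_normal(b.shape[0]) for b in blocks]

# Centralized solve of (11) with the full block-diagonal matrix ...
B_full = np.block([[blocks[i] if i == j
                    else np.zeros((blocks[i].shape[0], blocks[j].shape[0]))
                    for j in range(3)] for i in range(3)])
d_full = np.linalg.solve(B_full, np.concatenate(grads))

# ... equals the per-subsystem solves of (18), which each subsystem can do locally.
d_blockwise = np.concatenate([np.linalg.solve(Bi, gi) for Bi, gi in zip(blocks, grads)])
assert np.allclose(d_full, d_blockwise)
```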

Remark 1: As indicated previously, is an ascent direction for the dual function at . Thus, if the step-size is determined by Armijo's rule [24] or is a small enough constant, the DPQN method (10) and (18) will converge. Similarly, is a descent direction of the objective function at , which is true because a typical objective for OPF, such as total generation cost or total system losses, is locally convex; thus, if the step-size is determined by Armijo's rule [24] or is a small enough constant, the SQP method (5) and (6) will converge. Since Armijo's rule is a centralized step-size determination rule, for the sake of implementation in a distributed computer network, we had better employ a constant step-size. Based on our extensive simulation experience, 0.9 is a good choice of a small enough constant for both step-sizes.

1) Complete Decomposition and Parallel Computation (Resolving the Difficulty Caused by Large Dimension): All the computation formulae in the SQP and DPQN methods (5), (10), (13)–(16), (18)–(20), and (22) are decoupled and can be carried out independently and in parallel. This property resolves the difficulty caused by the large dimensionality of the CDOPF.

2) Required Data Communication: As indicated previously, all the computation formulae in the SQP and DPQN methods are decoupled; however, data communications are required in performing (14)–(16), (20), and (22). Specifically, to prepare in (18), we need to perform (13)–(16) in subsystem ; while preparing (14)–(16), we require the data of or , , from subsystems , because involves . A similar situation occurs in computing in (20), in which we need from subsystem the data to prepare and , and the data to perform (20). In addition, to obtain from solving (22), we need the data , and , from subsystems . However, to prepare the just-mentioned data in subsystem , we need the data from subsystem . Fortunately, the data required in subsystem from subsystems (subsystem ) are only those on the boundary buses. Therefore, the burden of data communication is very light. This indicates that the proposed distributed algorithm for solving the CDOPF is very suitable for implementation in a distributed computer network.

3) Convergence Determination and Synchronization of Distributed Computation: Regarding convergence determination, we assign one subsystem, namely the root subsystem, to monitor the convergence of the SQP and DPQN methods in the distributed computer network. The following are the distributed algorithmic steps for solving the CDOPF in each subsystem ; the steps for determining convergence are executed in the root subsystem only, which will be indicated specifically. Regarding the synchronization in each algorithmic step, we employ the concept of asynchronous synchronization, which means the computations of an algorithmic step start only when all the required data from the connecting subsystems have been received.

4) Distributed Algorithm I for Subsystem : Now we are ready to state the distributed algorithmic steps for subsystem to solve the CDOPF (3); a structural sketch in code form follows the step list.

Step 0) Initially guess and ; set , .

Step 1) Calculate the values of , , , ; send to subsystem for every .

Step 2) Once receiving from all subsystems , calculate , , (i.e., , ), and (by (13)–(16)); send to subsystem for every .

Step 3) Once receiving from all subsystems , go to Step 4.

Step 4) Send to subsystem for every .

Note: The reason why we send and to subsystem for every in separate steps is that is constant for the whole DPQN method in iteration of the SQP method, while varies in each iteration of the DPQN method, as can be seen in Step 10.

Step 5) Once receiving all , , obtain from solving the th minimization subproblem in (22).

Step 6) Send to subsystem for every .

Step 7) Once all are received, calculate by (19) and (20).

Step 8) Solve from (18).

Step 9) If , send a signal to the root subsystem to inform it of the convergence of the DPQN method in this subsystem, and go to Step 11 if or to Step 12 if . If , go to Step 10.

Step 10) Update by (10) with an experienced step-size , set , and return to Step 4.

Step 11) (for root subsystem only) Once receiving the signals indicating the convergence of the DPQN method from all subsystems, send a convergence signal of the DPQN method to subsystem for all .

Step 12) Once receiving the convergence signal of the DPQN method from the root subsystem, set . If , send a signal to the root subsystem to inform it of the convergence of the SQP method in this subsystem and wait for a further convergence signal from the root subsystem; otherwise, update by (5) with an experienced step-size , set , and return to Step 1.

Step 13) (for root subsystem only) Once receiving the signals indicating the convergence of the SQP method from all subsystems, send a signal to all subsystems to continue with the algorithmic steps in Distributed Algorithm II, which will be presented later.
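The Python-style skeleton below mirrors the control flow of Distributed Algorithm I for one subsystem, with `comm.send`/`comm.recv` standing in for the boundary-bus data exchange and `is_root` marking the root-only steps. It is a structural sketch under our own naming; all `sub.*` and `comm.*` calls are hypothetical placeholders, not the authors' implementation.

```python
def distributed_algorithm_I(sub, neighbors, comm, is_root,
                            eps=1e-4, alpha=0.9, beta=0.9):
    """Structural sketch of Steps 0-13 for one subsystem `sub` (placeholder API)."""
    sub.initialize_guess()                                  # Step 0: initial guesses
    while True:                                             # SQP outer loop
        bdry = sub.evaluate_functions()                     # Step 1: function values
        comm.send(neighbors, bdry)
        sub.build_blocks(comm.recv(neighbors))              # Step 2: blocks via (13)-(16)
        comm.send(neighbors, sub.block_data())
        comm.recv(neighbors)                                # Step 3
        while True:                                         # DPQN inner loop
            comm.send(neighbors, sub.multipliers())         # Step 4: send multiplier data
            sub.solve_subproblem(comm.recv(neighbors))      # Step 5: local subproblem (22)
            comm.send(neighbors, sub.subproblem_data())     # Step 6
            grad = sub.dual_gradient(comm.recv(neighbors))  # Step 7: (19)-(20)
            d_lmbda = sub.solve_block_system(grad)          # Step 8: local solve of (18)
            if sub.norm(d_lmbda) < eps:                     # Step 9: DPQN converged locally
                comm.notify_root('dpqn_done')
                break
            sub.update_multipliers(beta * d_lmbda)          # Step 10: dual update (10)
        if is_root:
            comm.wait_all('dpqn_done'); comm.broadcast('dpqn_converged')   # Step 11
        comm.wait('dpqn_converged')                         # Step 12
        dz = sub.primal_direction()
        if sub.norm(dz) < eps:
            comm.notify_root('sqp_done')
            break
        sub.update_primal(alpha * dz)                       # SQP update (5), back to Step 1
    if is_root:
        comm.wait_all('sqp_done'); comm.broadcast('start_algorithm_II')    # Step 13
```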

B. OO Theory-Based Distributed Algorithm to Solve DOPFD for a Good Enough Solution

For real-time application purposes, we would rather obtain a good enough solution within a reasonable computation time than obtain the optimal solution using an incredibly long time. The OO theory [22], [23] is a recently developed optimization technique for solving hard optimization problems, such as combinatorial optimization problems, for a good enough solution with high probability using limited computation time. Based on the observation that the performance order of discrete solutions is likely preserved even when evaluated by a surrogate model, the OO theory concludes the following: Suppose we simultaneously evaluate a large set of alternatives very approximately and order them according to the approximate evaluation. Then there is a high probability that we can find the actual good alternatives if we limit ourselves to the top of the observed good choices.
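This OO conclusion can be illustrated with a small numerical experiment: rank many alternatives with a crude, noisy surrogate, keep only the top few observed choices, and check how often a truly good alternative is among them. The snippet below is only a toy illustration of that alignment idea, not part of the paper's method; all sizes and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_alternatives, n_select, n_trials = 1024, 50, 200
hits = 0
for _ in range(n_trials):
    true_cost = rng.standard_normal(n_alternatives)                   # exact performance
    surrogate = true_cost + rng.standard_normal(n_alternatives)       # crude, noisy estimate
    observed_top = np.argsort(surrogate)[:n_select]                   # keep top-s by surrogate
    actual_top = np.argsort(true_cost)[:10]                           # the truly good alternatives
    hits += len(set(observed_top) & set(actual_top)) > 0              # any true good one kept?
print(f"empirical P(selected set contains a truly good alternative) = {hits / n_trials:.2f}")
```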

According to this conclusion, we can quickly evaluate the estimated performances of all discrete solutions in the candidate discrete solution set using a surrogate model and rank them to select a set of top-ranked solutions. If we employ a more refined surrogate model, more actually good discrete solutions will be contained in the selected set of top-ranked solutions, at the cost, however, of more evaluation time for each discrete solution. Therefore, if the size of the primitive discrete solution set is huge, the above evaluation, ranking, and selection process can be repeated for more than one stage, such that 1) the employed surrogate models are refined stage by stage, and 2) the set of top-ranked solutions selected in one stage serves as the candidate solution set of the next stage. In the final stage, the exact model is used to evaluate all discrete solutions of the largely trimmed candidate solution set, and the resulting best discrete solution is the good enough solution that we seek. Thus, compared with the exhaustive search method, in which the exact model is used to evaluate every discrete solution, the proposed OO strategy is a process to select a good enough from the enormous using limited computation time. However, some ranking and selection operations are centralized behaviors. Thus, we need to assign the root subsystem, which is responsible for determining the convergence of the SQP and DPQN methods in Distributed Algorithm I, to carry out this task. Consequently, our idea for carrying out the centralized OO concept in a distributed power system is as follows. Each subsystem will evaluate the estimated performances of and send the evaluation results to the root subsystem. The root subsystem will then rank and select a set of top-ranked based on the gathered estimated performances sent from all subsystems and send the subvector of the selected to subsystem . Based on this idea, the proposed distributed algorithm for solving the DOPFD for a good enough solution consists of three stages, as described below.

1) Stage 1: There are two parts in this stage. The first part is to reduce the size of the primitive candidate solution set, , to , based on the optimal solution of the CDOPF. To achieve this, replacing the discrete in (1) by its continuous version and replacing the inequality constraints on tie-line real power flows by the transformed equality and simple inequality constraints (2), we obtain the following CDOPF:

subject to

(23)

Note that the upper and lower bounds of are represented by the maximum and minimum values of , respectively, and these bound constraints on are included in the inequality constraints in (23). We define the functions and , and denote the vector functions and , . We define and denote the vector of slack variables , . Setting , , , and , we can then use Distributed Algorithm I to solve the CDOPF (23). It is worth noting that the optimal objective value of (1) cannot compete with that of (23), because the optimal for (1) is only a feasible solution of (23). Since most of the objective functions considered in the OPF, such as total generation cost and total system losses, are continuously differentiable and locally convex, the neighboring discrete control solutions of the continuous optimal solution, , of (23) should consist of good enough discrete control solutions of (1). Thus, we have reduced the size of the candidate solution set from to .

However, , for example , is still a very large number. Thus, the second part of this stage is to further reduce the size of the candidate solution set from to based on sensitivity analysis, where , a fraction of , say , is predetermined. To achieve this, we proceed as follows. Since some components of may already be very close to their closest discrete values, or the discrete steps of some discrete control variables are very small, such as the transformer tap ratio, we can fix those components of at their closest discrete values if the corresponding deviations do not affect the optimal objective value of the CDOPF significantly. Thus, in the rest of this stage, we employ the sensitivity theory [25, Ch. 10, Sec. 10.7, p. 312] to find such components. The sensitivity theory states that the sensitivity, or the gradient, of with respect to the value change of the equality constraint function equals the negative Lagrange multiplier, .

We let and denote the th component of and , respectively, and define and , where and denote the closest discrete values on the right-hand side and left-hand side of , respectively. The deviation (or ) will cause a value change in and . Then the deviation of the overall optimal objective value of (23), , caused by the deviation (or ), denoted by (or ), can be calculated based on the above-mentioned sensitivity theory and the chain rule by

(24)

A smaller (or ) implies that (or ) will affect only lightly. We let and rank based on the values of such that the smaller the latter, the higher the rank of the former. Then, for each of the top-ranked , we fix the corresponding discrete control variable at if or at if .

Now, since each of the yet-to-be-fixed discrete control variables in subsystem can take two neighboring discrete values, there are possible in subsystem , and we denote them by , . Combining the subsystems' results in possible . In other words, we have possible from the overall system point of view; thus, we have further reduced the size of the candidate solution set.
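A minimal sketch of the sensitivity screening in part 2 of Stage 1, assuming we are given each discrete control's continuous optimum, its discrete grid, and a per-variable sensitivity coefficient that plays the role of the chain-rule product in (24); that scalar abstraction and all names below are our own, not the paper's.

```python
import numpy as np

def fix_insensitive_controls(u_cont, grids, sens, fix_fraction=0.5):
    """Estimate the objective deviation (24) of rounding each control up or down,
    rank the controls by the smaller of the two deviations, and fix the most
    insensitive `fix_fraction` of them at the cheaper neighboring discrete value."""
    n = len(u_cont)
    dev, choice = np.empty(n), np.empty(n)
    for m in range(n):
        grid = np.asarray(grids[m])
        up = grid[grid >= u_cont[m]].min()        # closest discrete value on the right
        dn = grid[grid <= u_cont[m]].max()        # closest discrete value on the left
        d_up = abs(sens[m] * (up - u_cont[m]))    # estimated |delta F| of rounding up
        d_dn = abs(sens[m] * (dn - u_cont[m]))    # estimated |delta F| of rounding down
        dev[m], choice[m] = min(d_up, d_dn), (up if d_up <= d_dn else dn)
    order = np.argsort(dev)                       # most insensitive controls first
    n_fix = int(fix_fraction * n)
    fixed = [choice[m] if m in set(order[:n_fix]) else None for m in range(n)]
    return fixed, dev

# Toy usage: two tap ratios and two capacitors with made-up sensitivities.
u_star = np.array([0.98, 1.02, 14.3, 41.0])
grids = [np.arange(0.9, 1.11, 0.00625), np.arange(0.9, 1.11, 0.00625),
         np.array([0.0, 14.0, 28.0, 42.0, 56.0]), np.array([0.0, 14.0, 28.0, 42.0, 56.0])]
sens = np.array([0.05, 0.03, 0.8, 0.2])
print(fix_insensitive_controls(u_star, grids, sens))
```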

2) Stage 2: In this stage, we estimate the performance of the obtained in Stage 1 using a sensitivity model and select the top-ranked , say 50, of them.

To do so, we compute the estimated deviation of the optimal objective value due to the deviation for each of the in subsystem by

(25)

The reasoning that supports (25) is exactly the same as that supporting (24), except that is a vector while is a component. Then, subsystem sends the pairs of to the root subsystem.

Now in the root subsystem, we label these possible as , , and , the subvector of corresponding to subsystem , is one of the sent from subsystem . Due to the linearity of the sensitivity theory [25, Ch. 10, Sec. 10.7, p. 312], we have , where , the subvector of corresponding to subsystem , is one of the sent from subsystem . As indicated previously, the optimal objective value of (1) is larger than that of (23), because the optimal for (1) is only a feasible solution of (23). Therefore, a smaller implies a smaller deviation of the optimal objective value of (23) caused by the deviation . Thus, the root subsystem ranks these based on the corresponding values of such that a with a smaller has a higher rank. Subsequently, we can pick the top-ranked and relabel them as , . The root subsystem then sends to subsystem the corresponding subvectors , , for every . Thus, we have further reduced the size of the candidate solution set from to . In the meantime, the root subsystem informs each subsystem to proceed with the next stage.
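The root-side selection in Stage 2 amounts to the following bookkeeping: form every combination of the per-subsystem candidates, sum their estimated objective deviations (which add across subsystems by the linearity of the sensitivity argument), and keep the s combinations with the smallest total. A small Python sketch with made-up candidate lists is shown below; the labels are arbitrary.

```python
from itertools import product

def select_top_candidates(per_subsystem, s=50):
    """per_subsystem: list over subsystems, each a list of (delta_F_estimate, u_sub)
    pairs sent to the root.  Returns the s combined candidates with the smallest
    summed estimated deviation, as (total_deviation, tuple of per-subsystem u_sub)."""
    combos = []
    for combo in product(*per_subsystem):            # all combinations of subvectors
        total = sum(dF for dF, _ in combo)           # deviations add across subsystems
        combos.append((total, tuple(u for _, u in combo)))
    combos.sort(key=lambda t: t[0])                  # smaller estimated deviation = better
    return combos[:s]

# Toy usage: 4 subsystems, two candidate subvectors each (hypothetical labels).
per_subsystem = [[(0.10, "a1"), (0.40, "a2")], [(0.20, "b1"), (0.30, "b2")],
                 [(0.00, "c1"), (0.50, "c2")], [(0.25, "d1"), (0.35, "d2")]]
print(select_top_candidates(per_subsystem, s=5))
```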

3) Stage 3: In this stage, we use the exact model to evaluate the obtained in Stage 2, and the best one is the good enough that we seek.

The exact model for evaluating the discrete control solutions , , obtained in Stage 2 is (1), which, once is replaced by the fixed , becomes a CDOPF. Replacing the inequality constraints on tie-line real power flows by the transformed equality and simple inequality constraints (2), the exact model becomes the same as (23) except that the continuous variables are substituted by the fixed . Using a treatment similar to that for (23), we can set , , and , where , , and have been defined in the paragraph following (23). Thus, we can use Steps 1–13 of Distributed Algorithm I to solve the resulting CDOPF. Note that we do not need Step 0 of Distributed Algorithm I, because the operating point resulting from Stage 2 (that is, the optimal solution of the CDOPF (23), but with replaced by ) serves as the initial operating point of this stage. We can then proceed with picking the best among as follows.

When receiving the corresponding subvectors of the discrete control solutions resulting from Stage 2 from the root subsystem, all subsystems cooperate to solve the CDOPFs. We let denote the optimal solution of the CDOPF for the fixed . Once the CDOPFs are solved, each subsystem sends the pairs of to the root subsystem. The root subsystem calculates the objective value of the overall system for the given by taking the sum . We denote as the corresponding to the smallest among . Then the associated with is the good enough solution that we seek.

Remark 2: Solving CDOPFs seems to be computationally very intensive. In fact, it is not, because each is a neighbor of , and the initial operating point resulting from Stage 2 is already close to the solution. Therefore, for almost all of the CDOPFs, it takes only one iteration of Distributed Algorithm I, excluding Step 0, to obtain the solution.

Remark 3: We say that a is feasible if the CDOPF that results from setting the in (1) to the given has an optimal solution. One of the conventional approaches to centralized OPF with discrete control variables is to use an approximating technique to obtain an approximate discrete control solution and then round off to the closest discrete values. However, arbitrary rounding off may cause an infeasibility problem, as indicated in [16]. Our approach can circumvent such an undesirable situation, because if there is at least one feasible among the resulting from Stage 2, the good enough solution obtained in Stage 3 must be feasible. In other words, we significantly increase the probability of obtaining a good enough feasible solution. For example, suppose half of the resulting from the first part of Stage 1 are feasible. The probability of getting a feasible by arbitrary rounding off is then 0.5. However, the probability that the good enough obtained by our approach is feasible is one minus the probability that none of the selected is feasible.
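The displayed expression was lost in extraction; under the paper's illustration (half of the candidates feasible) and assuming feasibility is independent of the selection, the probability works out as in the following sketch, where s is the number of candidates retained in Stage 2 (s = 50 in the tests).

```latex
% Sketch under the stated assumptions (feasibility probability 0.5 per candidate,
% s candidates retained in Stage 2, independence assumed):
P(\text{good enough } u \text{ is feasible})
  = 1 - P(\text{none of the } s \text{ selected } u \text{ is feasible})
  = 1 - (0.5)^{s} = 1 - (0.5)^{50} \approx 1 - 8.9\times 10^{-16}.
```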

Now we are ready to state the algorithmic steps of the distributed algorithm for each subsystem to solve the DOPFD for a good enough solution; the steps executed only in the root subsystem will be specifically indicated.

C. Distributed Algorithm II for Subsystem

Step 0) (for root subsystem only) Command all subsystems to start.

Step 1) When receiving the command from the root subsystem, perform the first part of Stage 1 using Distributed Algorithm I.

Step 2) When receiving the convergence signal of the CDOPF from the root subsystem, perform the second part of Stage 1, such that will be fixed at one side of the closest discrete value. For the remaining yet-to-be-fixed components of , we have possible , which are relabeled as , . Compute by (25) and send the pairs of , to the root subsystem.

Step 3) (for root subsystem only) When receiving the pairs of , , from subsystem for all , the root subsystem picks the best from the based on the sensitivity model, as stated in Stage 2. Relabel the picked as , , and send , , to subsystem for all .

Step 4) Once receiving the subvectors , , from the root subsystem, start to solve the CDOPFs using Steps 1–13 of Distributed Algorithm I, as stated in Stage 3. Once the CDOPFs are solved, send the pairs of , , to the root subsystem.

Step 5) (for root subsystem only) When receiving the pairs of , , from all subsystems , the root subsystem takes the sum for each and, based on this, picks the best , as stated in Stage 3. Relabel the best as and send to subsystem , for all .

Step 6) Once receiving the good enough subvector from the root subsystem, stop the algorithm and output the solution .

IV. TEST RESULTS

In this section, we will demonstrate 1) the validity of Distributed Algorithm II by implementing it in a real PC network and 2) the computational efficiency and the goodness of the obtained good enough solutions through indirect comparisons with existing centralized global searching techniques. (To the best of our knowledge, no method so far deals with the DOPFD considered in this paper, so indirect comparison is all we can do.)

We have implemented our Distributed Algorithm II in a 4-PC network to solve the DOPFD of the IEEE 118-bus and TP 244-bus systems, both of which are arbitrarily partitioned into four subsystems. Each subsystem is associated with a PC. Some details regarding the number of buses, number of transmission lines, and number of generation buses in each subsystem are shown in Table I. It should be noted that the values of conductance of the transmission lines in the TP 244-bus system are, on average, much higher than those of the IEEE 118-bus system. We consider two types of objective function: the minimum total real power generation cost and the minimum system losses , where denotes the real power generation of generation bus , and are cost coefficients, and denotes the real power loss on transmission line . The sets of subsystem pairs connected by tie lines in the IEEE 118-bus and TP 244-bus systems are denoted by and , respectively.

TABLE I
CONTENTS OF THE FOUR SUBSYSTEMS IN THE IEEE 118-BUS AND TP 244-BUS SYSTEMS

TABLE II
TWO SETS OF DISCRETE CONTROL VARIABLES IN EACH SUBSYSTEM OF THE IEEE 118-BUS AND TP 244-BUS SYSTEMS

TABLE III
FINAL OBJECTIVE VALUE OBTAINED BY AND THE CONSUMED CPU TIME OF DISTRIBUTED ALGORITHM II IMPLEMENTED IN THE 4-PC NETWORK, THE CORRESPONDING CENTRALIZED VERSION, THE CENTRALIZED GA, AND THE CENTRALIZED TS METHOD IMPLEMENTED IN A SINGLE PC FOR THE EIGHT CASES

We assume each switching capacitor is equipped with four capacitor banks, and the capacity of a bank is 14 MVAR. We assume each transformer tap has 32 discrete steps, such that each step is 5/8% of the nominal transformer tap ratio. We consider two sets of discrete control variables, namely and , in each system, and the numbers of switching capacitors and transformers in each subsystem for each set are shown in Table II. The value of for each subsystem can be easily calculated by adding the numbers of switching capacitors and transformers, as shown in the fifth row of Table II, and we set for each subsystem as shown in the last row of Table II.
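From these stated assumptions (four 14-MVAR banks per switching capacitor; 32 tap steps of 5/8% of the nominal ratio), the discrete value set of a single control can be written down directly. The snippet below builds both sets; the nominal tap ratio of 1.0, the symmetric placement of the 32 steps around it, and counting the all-off capacitor setting as a value are our own illustrative assumptions.

```python
import numpy as np

# Switching capacitor: 0 to 4 banks of 14 MVAR each (the all-off setting is assumed).
capacitor_values_mvar = np.arange(5) * 14.0          # [0, 14, 28, 42, 56]

# Transformer tap: 32 discrete steps, each 5/8% = 0.00625 of the nominal ratio.
# Centering the 32 steps on an assumed nominal ratio of 1.0.
nominal = 1.0
tap_values = nominal * (1.0 + 0.00625 * (np.arange(32) - 15.5))

print(capacitor_values_mvar)
print(tap_values.min(), tap_values.max(), len(tap_values))
```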

For each system, each objective function, and each discrete control variable set, we have tested eight cases, which are described by three arguments , such that indicates the test system is the IEEE 118-bus system and 2 the TP 244-bus system, indicates the objective function is the total generation cost and 2 the total system losses, and indicates the first discrete control variable set and 2 the second discrete control variable set for the corresponding system, as shown in Table II. These eight cases are shown in the first column of Table III. We apply Distributed Algorithm II to solve the DOPFD of these eight cases in the 4-PC network. The four PCs are of the same model: Pentium IV, 2.66-GHz processor and 1.25 GB of RAM. We assign subsystems and as the root subsystems of the IEEE 118-bus and TP 244-bus systems, respectively. The program is written in . We employ TCP/IP as the communication protocol in the 4-PC network. We set in Steps 9 and 12 of Distributed Algorithm I and in Step 3 of Distributed Algorithm II. The final objective value of the overall system in each case obtained by Distributed Algorithm II and the corresponding CPU time consumed in the 4-PC network are shown in the second and eighth columns of Table III. The consumed CPU time, including the communication overhead in the 4-PC network, is counted until the root subsystem determines the good enough discrete control solution and sends the corresponding subvector to each subsystem.

To illustrate the reduction of the search space of discrete control variables in our approach, we take case (1,1,1) as an illustrative example. The size of the original search space is , because each transformer tap has 32 discrete steps and each switching capacitor has four capacitor banks. After executing Step 1 of Distributed Algorithm II (i.e., part 1 of Stage 1), the size of the search space is reduced from 2.237 to . At this point, each discrete control variable has two choices of discrete values, or . After executing Step 2 (i.e., part 2 of Stage 1), we found that the optimal objective value of the CDOPF (23) is very insensitive to the transformer tap ratios for two reasons: 1) the values of and are insensitive to the deviation of the transformer tap ratio, and 2) the discrete step of the transformer tap ratio is very small. On the other hand, the optimal objective value is more sensitive to the capacitors due to their larger discrete step, unless the optimal continuous capacitor value is already close to one of the neighboring discrete values. Thus, all the transformer taps and half of the switching capacitors of each subsystem are fixed at the side of the closest discrete value that achieves , as defined in part 2 of Stage 1. At this point, we have further reduced the size of the search space from 1.1 to . After executing Step 3 (i.e., Stage 2), the best 50 out of the 1024 possible resulting from Step 2 are selected based on the sensitivity model. At this point, we have further reduced the size of the search space from 1024 to 50. Applying the exact model to evaluate the 50 possible resulting from Step 3, that is, executing Steps 4 and 5 (i.e., Stage 3), the best is the good enough discrete control variable solution. From this case, we see that if we applied the exhaustive search method to find the optimal for (1), we would need to solve 2.237 CDOPFs. However, we only need to solve 51 CDOPFs to obtain a good enough . Moreover, the 50 CDOPFs solved in Steps 4 and 5 take only one iteration each. This manifests the dramatic computation time reduction of our approach.
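The staged reduction can be reproduced arithmetically once the counts in Table II are known; since those counts were lost in extraction, the snippet below uses hypothetical numbers (20 transformer taps and 20 switching capacitors for the whole system) purely to show how the |U| -> 2^Omega -> 2^(remaining) -> 50 -> 51-CDOPF bookkeeping is computed, not to reproduce the paper's exact figures.

```python
# Hypothetical counts for illustration only (the real ones are in Table II).
n_taps, n_caps = 20, 20
tap_steps, cap_settings = 32, 4

original = tap_steps ** n_taps * cap_settings ** n_caps   # full search space |U|
after_part1 = 2 ** (n_taps + n_caps)                      # two neighboring values per control
after_part2 = 2 ** (n_caps // 2)                          # taps and half the capacitors fixed
after_stage2 = 50                                         # top-ranked candidates kept by OO
cdopfs_solved = 1 + after_stage2                          # one CDOPF in Stage 1 + 50 in Stage 3

print(f"{original:.3e} -> {after_part1:.3e} -> {after_part2} -> {after_stage2}; "
      f"CDOPFs actually solved: {cdopfs_solved}")
```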

To verify our results, we also implemented Distributed Algorithm II on a single PC and applied it to the eight cases. The final objective values shown in the third column of Table III are exactly the same as those obtained in the 4-PC network, and the consumed CPU times are shown in the ninth column. This demonstrates that Distributed Algorithm II is successfully implemented in a computer network, and the consumed CPU time is less than one third but more than one fourth of the CPU time consumed by the centralized version. The reason that Distributed Algorithm II is not four times faster than the centralized version when using the 4-PC network is that the sizes of the four subsystems are different, and there exists some, but very slight, communication overhead. It is worth noting that the good enough solutions we obtained in all eight cases are feasible. This reflects the significantly improved probability of our approach in getting feasible discrete control solutions, as commented in Remark 3.

Although we cannot find any competing methods dealing with the DOPFD considered in this paper so far, we can treat (1) as a centralized OPF with discrete control variables for all eight cases and solve them using the global searching techniques, the GA and TS methods, combined with the SQP and DPQN methods to solve the centralized CDOPF. In the employed GA [26], we use a simple coding scheme of 0 and 1 strings to represent all possible in , and each represents a population member in the GA. We randomly select 20 from as our initial population. The fitness of a population member is set to be the reciprocal of the objective value of (1), in which is set to be the population member , and is solved by the combination of the SQP and DPQN methods on a single PC. The members in the mating pool are selected from the pool of population members using a roulette wheel selection scheme based on the fitness values. We set the probability of selecting members in the mating pool to serve as parents for crossover to be . We use a single-point crossover scheme and assume the mutation probability to be . For each of the eight cases, we stop the GA when it has consumed around 150 times the CPU time consumed by the centralized version of Distributed Algorithm II and record the best-so-far objective value and the consumed CPU time in the fourth and tenth columns of Table III. The iterative mechanism of the employed TS method is as follows. Starting from a randomly selected from , in each iteration of the TS method, we randomly evaluate one third of the neighboring of the current and accept the best one as the new based on the tabu list and a criterion of global aspiration by objective [26]. Noting that evaluating the objective value of (1) for a given in the TS method is the same as in the GA, we apply the TS method to all eight cases. For each of the eight cases, we also stop the method when it has consumed around 150 times the CPU time consumed by the centralized Distributed Algorithm II and record the best-so-far objective value and the consumed CPU times in the sixth and eleventh columns of Table III. From the fourth and sixth columns of Table III, we see that in most of the cases, the GA outperforms the TS method because of its capability of decentralization. From the fifth and seventh columns, we can observe that when the number of discrete control variables increases in the same system for the same objective function, the performance of both the GA and TS methods degrades, because the improvement of the best-so-far objective values becomes more sluggish. Furthermore, from the third, fifth, seventh, ninth, tenth, and eleventh columns, we find that when both the GA and TS methods have consumed around 150 times the CPU time consumed by the centralized version of Distributed Algorithm II, their best-so-far objective values are still 23.25% and 24.63% higher, on average, than the objective values obtained by the centralized version of Distributed Algorithm II, respectively. On the other hand, using less than one third of the CPU time, Distributed Algorithm II implemented in the 4-PC network obtains the same objective value as that obtained by the centralized version. These indirect comparisons demonstrate the computational efficiency of Distributed Algorithm II and the goodness of the obtained good enough solutions. The feasibility and the goodness of the obtained good enough solutions in all eight cases confirm the robustness of our algorithm.

V. CONCLUSION

In this paper, we have proposed a distributed algorithm to deal with the DOPFD of large distributed power systems. We used a 4-PC network to implement the proposed distributed algorithm and applied it to the DOPFD of the IEEE 118-bus and TP 244-bus systems. We have ascertained the robustness of our distributed algorithm in terms of the feasibility and goodness of the obtained solutions; moreover, the computational speed of our distributed algorithm is at least three times faster than the centralized version in all the test cases when using a 4-PC network.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their constructive comments and helpful suggestions to improve the readability of this paper. The authors would also like to thank Mr. C.-Z. Lin for his efforts in setting up the PC network, establishing the communication between PCs, and typing this paper.

REFERENCES

[1] B. Stott, J. L. Marinho, and O. Alsac, “Review of linear programming applied to power system rescheduling,” in Proc. IEEE PICA Conf., 1979, pp. 142–154.

[2] T. C. Giras and S. N. Talukdar, “Quasi-Newton method for optimal power flows,” Int. J. Elect. Power Energy Syst., vol. 3, no. 2, pp. 59–64, Apr. 1981.

[3] R. C. Burchett, H. H. Happ, and D. R. Vierath, “Quadratically convergent optimal power flow,” IEEE Trans. Power App. Syst., vol. PAS-103, no. 10, pp. 2864–2880, Oct. 1984.

[4] D. I. Sun, I. Hu, G. Lin, C. J. Lin, and C. M. Chen, “Experiences with implementing optimal power flow for reactive scheduling in the Taiwan power system,” IEEE Trans. Power Syst., vol. 3, no. 3, pp. 1193–1200, Aug. 1988.

[5] Y. C. Wu, A. S. Debs, and R. E. Marsten, “A direct nonlinear predictor-corrector primal-dual interior point algorithm for optimal power flows,”

IEEE Trans. Power Syst., vol. 9, no. 2, pp. 876–883, May 1994.

[6] B. H. Kim and R. Baldick, “Coarse-grained distributed optimal power flow,” IEEE Trans. Power Syst., vol. 12, no. 2, pp. 932–939, May 1997. [7] B. H. Kim and R. Baldick, “A comparison of distributed optimal power flow algorithms,” IEEE Trans. Power Syst., vol. 15, no. 2, pp. 599–604, May 2000.

[8] D. Hur, J.-K. Park, and B. H. Kim, “Evaluation of convergence rate in the auxiliary problem principle distributed optimal power flow,” Proc. Inst. Elect. Eng., Gen., Transm., Distrib., vol. 149, no. 5, pp. 525–532, Sep. 2002.

[9] D. Hur, J.-K. Park, B. H. Kim, and K.-M. Son, “Security constrained optimal power flow for the evaluation of transmission capability on Korea electric power system,” in Proc. IEEE Power Eng. Soc. Summer Meeting, 2001, vol. 2, pp. 1133–1138.

[10] F. J. Nogales, F. J. Prieto, and A. J. Conejo, “A decomposition methodology applied to the multi-area optimal power flow problem,” Ann. Oper. Res., vol. 120, no. 1-4, pp. 99–116, Apr. 2003.

[11] S.-S. Lin and H. Chang, “An efficient algorithm for solving BCOP and implementation,” IEEE Trans. Power Syst., vol. 22, no. 1, pp. 275–284, Feb. 2007.

[12] H. Chang and S.-S. Lin, “A MPBSG technique based parallel dual-type method for solving distributed optimal power flow problems,” IEICE Trans. Fundam. Electron., Commun., Comp. Sci., pp. 260–269, Jan. 2006.

[13] A. Bakirtzis and A. Meliopoulos, “Incorporation of switching operations in power system corrective control computations,” IEEE Trans. Power Syst., vol. 2, no. 3, pp. 669–676, Aug. 1987.

[14] A. Monticelli and W. Liu, “Adaptive movement penalty method for the Newton optimal power flow,” IEEE Trans. Power Syst., vol. 7, no. 1, pp. 334–340, Feb. 1992.

[15] W. Liu, A. Papalexopoulos, and W. Tinney, “Discrete shunt controls in a Newton optimal power flow,” IEEE Trans. Power Syst., vol. 7, no. 4, pp. 1509–1520, Nov. 1992.

[16] W. Tinney, J. Bright, K. Demaree, and B. Hughes, “Some deficiencies in optimal power flow,” in Proc. IEEE PICA Conf., May 1987, pp. 164–169.

[17] S.-Y. Lin, Y.-C. Ho, and C.-H. Lin, “An ordinal optimization theory based algorithm for solving the optimal power flow problem with discrete control variables,” IEEE Trans. Power Syst., vol. 19, no. 1, pp. 276–286, Feb. 2004.

[18] L. Chen, H. Suzuki, and K. Katou, “Mean field theory for optimal power flow,” IEEE Trans. Power Syst., vol. 12, no. 4, pp. 1481–1486, Nov. 1997.

[19] A. Bakirtzis, P. Biskas, C. Zoumas, and V. Petridis, “Optimal power flow by enhanced genetic algorithm,” IEEE Trans. Power Syst., vol. 17, no. 2, pp. 229–236, May 2002.

[20] T. Kulworawanichpong and S. Sujitjorn, “Optimal power flow using tabu search,” IEEE Power Eng. Rev., vol. 22, no. 6, pp. 37–40, Jun. 2002.

[21] J. T. Ma and L. L. Lai, “Evolutionary programming approach to reactive power planning,” Proc. Inst. Elect. Eng., Gen., Transm., Distrib., vol. 143, no. 4, pp. 365–370, Jul. 1996.

[22] Y. C. Ho, Soft Optimization for Hard Problems. Cambridge, MA: Harvard Univ. Press, 1996, Lecture Notes.

[23] T. W. E. Lau and Y.-C. Ho, “Universal alignment probability, and subset selection for ordinal optimization,” J. Optim. Theory Appl., vol. 93, no. 3, pp. 455–489, Jun. 1997.

[24] C.-H. Lin and S.-Y. Lin, “A new dual-type method used in solving optimal power flow problems,” IEEE Trans. Power Syst., vol. 12, no. 4, pp. 1667–1675, Nov. 1997.

[25] D. Luenberger, Linear and Nonlinear Programming, 2nd ed. Reading, MA: Addison-Wesley, 1984.

[26] S. Sait and H. Youssef, Iterative Computer Algorithms with Applications in Engineering: Solving Combinatorial Optimization Problems. Los Alamitos, CA: IEEE Computer Society, 1999.

Ch’i-Hsin Lin was born in Taiwan, R.O.C. He received the B.S. degree in electrical engineering from Feng Chia University, Taichung, Taiwan, the M.S. degree in electrical engineering from National Tsing Hua University, Hsinchu, Taiwan, and the Ph.D. degree in electrical and control engineering from Chiao Tung University, Hsinchu, in 1989, 1991, and 1996, respectively.

He joined the Department of Electronic Engineering at Kao Yuan University, Kaoshiung, Taiwan, R.O.C., in 1998 and has been an Associate Professor since 2003. His major research interests include large-scale power systems and ordinal optimization theory and applications.

Shin-Yeu Lin was born in Taiwan, R.O.C. He received the B.S. degree in electronics engineering from National Chiao Tung University, Hsinchu, Taiwan, R.O.C., the M.S. degree in electrical engineering from the University of Texas at El Paso, and the D.Sc. degree in systems science and mathematics from Washington University, St. Louis, MO, in 1975, 1979, and 1983, respectively.

From 1984 to 1985, he was with Washington University, working first as a Research Associate and then as a Visiting Assistant Professor. From 1985 to 1986, he was with GTE Laboratory as a Senior MTS. He joined the Department of Electrical and Control Engineering at National Chiao Tung University in 1987 and has been a Professor since 1992. His major research interests include optimal power flow, ordinal optimization theory and applications, and distributed computations.
