Theory and Methodology

A global optimization method for nonconvex separable programming problems

Han-Lin Li a,*, Chian-Son Yu b

a Institute of Information Management, National Chiao Tung University, Hsinchu 30050, Taiwan, ROC
b Department of Information Management, Shih Chien University, Taipei 10020, Taiwan, ROC

Received 7 May 1997; accepted 8 June 1998

Abstract

Conventional methods of solving nonconvex separable programming (NSP) problems by mixed integer programming require adding numerous 0–1 variables. In this work, we present a new method of deriving the global optimum of an NSP program using fewer 0–1 variables. A separable function is initially expressed as a piecewise linear function with a sum of absolute terms. Linearizing these absolute terms allows us to convert an NSP problem into a linear mixed 0–1 program that can be solved to reach a solution extremely close to the global optimum. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Goal programming; Piecewise linear function; Separable programming

1. Introduction

Separable programs are nonlinear programs in which the objective function and constraints can be expressed as a sum of functions, each involving only one variable. The nonconvex separable programming (NSP) problem discussed herein, denoted Problem P1, is expressed as follows:

Problem P1 (NSP problem):

minimize $\sum_{i=1}^{n} f_i(x_i)$
subject to $\sum_{i=1}^{n} h_{ij}(x_i) \ge 0$ for all $j$; $x_i \ge 0$ for $i = 1, 2, \ldots, n$,

*Corresponding author. Tel.: 886-35-728709; fax: 886-35-723792; e-mail: hlli@cc.nctu.edu.tw



where the $f_i(x_i)$ may be nonconvex functions and the $h_{ij}(x_i)$ are linear functions.

If all $f_i(x_i)$ in Problem P1 are convex, Problem P1 can be solved by the simplex method to obtain the global optimum. The conventional means of solving Problem P1 with nonconvex $f_i(x_i)$ [3,5,11] is discussed below.

Assume that $f_i(x_i)$ is to be approximately linearized over the interval $[a, b]$. Define $a_{i,k}$, $k = 1, 2, \ldots, m_i$, as the $k$th break point on the $x_i$-axis such that $a_{i,1} < a_{i,2} < \cdots < a_{i,m_i}$, with $a_{i,1} = a$ and $a_{i,m_i} = b$. Then $f_i(x_i)$ can be approximated as

$f_i(x_i) = \sum_{k=1}^{m_i} f_i(a_{i,k})\, t_{i,k}$,   (1)

where $x_i = \sum_{k=1}^{m_i} a_{i,k} t_{i,k}$, $\sum_{k=1}^{m_i} t_{i,k} = 1$, $t_{i,k} \ge 0$, and only two adjacent $t_{i,k}$, e.g. $(t_{i,k-1}, t_{i,k})$ or $(t_{i,k}, t_{i,k+1})$, are allowed to be nonzero. In reference to Eq. (1), conventional methods [3,5,11] treat the NSP problem as the following program.

Program 1 (Conventional NSP methods [3,5,11]):

minimize $\sum_{i=1}^{n} \sum_{k=1}^{m_i} f_i(a_{i,k})\, t_{i,k}$
subject to $\sum_{i=1}^{n} h_{ij}(x_i) \ge 0$ for all $j$,
$x_i = \sum_{k=1}^{m_i} a_{i,k} t_{i,k}$ for $i = 1, 2, \ldots, n$,
$t_{i,1} \le y_{i,1}$ for $i = 1, 2, \ldots, n$,
$t_{i,k} \le y_{i,k-1} + y_{i,k}$ for $i = 1, 2, \ldots, n$, $k = 2, 3, \ldots, m_i$,
$\sum_{k=1}^{m_i} y_{i,k} = 1$, $\sum_{k=1}^{m_i} t_{i,k} = 1$ for $i = 1, 2, \ldots, n$,

where $y_{i,k} = 0$ or $1$, $t_{i,k} \ge 0$, $k = 1, 2, \ldots, m_i$, and $i = 1, 2, \ldots, n$.

Obviously, there exists a unique $k$ where $y_{i,k} = 1$ and $t_{i,k} + t_{i,k+1} = 1$, in which case Program 1 becomes

minimize $\sum_{i=1}^{n} \big[ f_i(a_{i,k})\, t_{i,k} + f_i(a_{i,k+1})(1 - t_{i,k}) \big]$
subject to $\sum_{i=1}^{n} h_{ij}(x_i) \ge 0$ for all $j$,
$x_i = a_{i,k+1} + (a_{i,k} - a_{i,k+1})\, t_{i,k}$ for $i = 1, 2, \ldots, n$,
$x_i \ge 0$, $t_{i,k} \ge 0$.

Program 1 is a linear mixed integer program that can attain the global optimum. It is seriously limited, however, in that it contains a large number of 0–1 variables, which incur a heavy computational burden. The number of newly added 0–1 variables required to approximately linearize a function $f_i(x_i)$ equals the number of break intervals; Program 1 therefore requires $\sum_{i=1}^{n} (m_i - 1)$ zero–one variables (i.e., $y_{i,1}, y_{i,2}, \ldots, y_{i,m_i-1}$).

An alternative means of solving Problem P1 is the restricted-basis simplex method [3,11]. This method specifies that no more than two positive $t_{i,k}$ can appear in the basis, and that two $t_{i,k}$ can be positive only if they are adjacent. In this case, the additional constraints involving $y_{i,k}$ are disregarded. The restricted-basis method, although computationally efficient for solving Problem P1, can only guarantee a local optimum [3,11].


In light of the above discussion, this work presents a novel means of solving Problem P1. The proposed method is advantageous over conventional NSP methods in that it can find an approximately global optimum of an NSP problem using fewer 0–1 variables. The derived solution can be improved by adequately adding break points within the searching intervals.

2. Preliminaries

Some propositions on how to linearize a nonconvex separable function $f(x)$ are described as follows.

Proposition 1. Let $f(x)$ be the piecewise linear function of $x$ depicted in Fig. 1, where $a_k$, $k = 1, 2, \ldots, m$, are the break points of $f(x)$, and $s_k$, $k = 1, 2, \ldots, m-1$, are the slopes of the line segments between $a_k$ and $a_{k+1}$:

$s_k = [f(a_{k+1}) - f(a_k)] / (a_{k+1} - a_k)$.

Then $f(x)$ can be expressed as

$f(x) = f(a_1) + s_1 (x - a_1) + \sum_{k=2}^{m-1} \frac{s_k - s_{k-1}}{2} (|x - a_k| + x - a_k)$,   (2)

where $|\cdot|$ denotes the absolute value.

This proposition can be examined as follows:

(i) If $x \le a_2$, then
$f(x) = f(a_1) + \frac{f(a_2) - f(a_1)}{a_2 - a_1}(x - a_1) = f(a_1) + s_1 (x - a_1)$.

(ii) If $a_2 \le x \le a_3$, then
$f(x) = f(a_1) + s_1 (a_2 - a_1) + s_2 (x - a_2) = f(a_1) + s_1 (x - a_1) + \frac{s_2 - s_1}{2}(|x - a_2| + x - a_2)$.

(iii) If $x \le a_{k'}$, then $\sum_{k \ge k'}^{m-1} (|x - a_k| + x - a_k) = 0$, and $f(x)$ becomes
$f(x) = f(a_1) + s_1 (x - a_1) + \sum_{k=2}^{k'-1} \frac{s_k - s_{k-1}}{2}(|x - a_k| + x - a_k)$.

Example 1. Consider the separable function $f(x_1) = x_1^3 - 4x_1^2 + 2x_1$ depicted in Fig. 2(a), where $0 \le x_1 \le 5$. Assume that the break points of $f(x_1)$ are $0, 0.5, 1, 1.5, \ldots, 4.5, 5$. In reference to Proposition 1, $f(x_1)$ can be approximately linearized as follows (Fig. 2(b)):

$f(x_1) = x_1^3 - 4x_1^2 + 2x_1 \approx 0.25x_1 - \frac{2.5}{2}(|x_1 - 0.5| + x_1 - 0.5) - \frac{1}{2}(|x_1 - 1| + x_1 - 1) + \frac{0.5}{2}(|x_1 - 1.5| + x_1 - 1.5) + \frac{2}{2}(|x_1 - 2| + x_1 - 2) + \frac{3.5}{2}(|x_1 - 2.5| + x_1 - 2.5) + \frac{5}{2}(|x_1 - 3| + x_1 - 3) + \frac{6.5}{2}(|x_1 - 3.5| + x_1 - 3.5) + \frac{8}{2}(|x_1 - 4| + x_1 - 4) + \frac{9.5}{2}(|x_1 - 4.5| + x_1 - 4.5)$.   (3)
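Since Eq. (2) is exactly the piecewise linear interpolant through the break points, the identity is easy to check numerically. The following sketch (ours, not part of the paper; all names are illustrative) evaluates Eq. (2) for Example 1's break points and compares it against ordinary piecewise linear interpolation:

```python
import numpy as np

def pl_from_abs_terms(x, a, fa):
    """Eq. (2): f(a1) + s1*(x - a1) + sum_k (s_k - s_{k-1})/2 * (|x - a_k| + x - a_k)."""
    s = np.diff(fa) / np.diff(a)               # segment slopes s_1, ..., s_{m-1}
    val = fa[0] + s[0] * (x - a[0])
    for k in range(1, len(s)):                 # interior break points a_2, ..., a_{m-1}
        val += (s[k] - s[k - 1]) / 2 * (np.abs(x - a[k]) + x - a[k])
    return val

f = lambda x: x**3 - 4 * x**2 + 2 * x          # Example 1's separable function
a = np.arange(0.0, 5.5, 0.5)                   # break points 0, 0.5, ..., 5
x = np.linspace(0.0, 5.0, 101)
# Eq. (2) agrees with the ordinary piecewise linear interpolant everywhere
assert np.allclose(pl_from_abs_terms(x, a, f(a)), np.interp(x, a, f(a)))
```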

Expressing a separable function by Eq. (2) is advantageous in that the intervals of convexity and concavity of $f(x)$ are easily identified, as described by the following proposition.

Proposition 2. Consider $f(x)$ in Eq. (2), where $x$ lies within the interval $[a_p, a_q]$, $a_p \le x \le a_q$. If $s_k > s_{k-1}$ then $f(x)$ is convex for $a_{k-1} \le x \le a_{k+1}$, as depicted in Fig. 3(a). If $s_k < s_{k-1}$ then $f(x)$ is concave for $a_{k-1} \le x \le a_{k+1}$, as depicted in Fig. 3(b).

Consider Expression (3) and Fig. 2(b) as an example, in which $f(x)$ is concave for $0 \le x \le 1$ and convex for $1 \le x \le 5$.
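Proposition 2 is a purely local test, so the convex and concave regions can be read off from the slope sequence. A minimal sketch (illustrative names, not from the paper):

```python
import numpy as np

def classify_breaks(a, fa):
    """Label each interior break point by the sign of s_k - s_{k-1} (Proposition 2)."""
    s = np.diff(fa) / np.diff(a)
    return {float(a[k]): ('convex' if s[k] > s[k - 1] else 'concave')
            for k in range(1, len(s))}

f = lambda x: x**3 - 4 * x**2 + 2 * x
a = np.arange(0.0, 5.5, 0.5)
print(classify_breaks(a, f(a)))   # break points 0.5 and 1.0 are concave bends,
                                  # 1.5 and above are convex, matching Fig. 2(b)
```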

Proposition 3. Consider the goal program PP1:

PP1: minimize $w = \sum_{k=2}^{m-1} c_k (|x - a_k| + x - a_k)$
subject to $x \in F$ (a feasible set), $x \ge 0$, $c_k \ge 0$,   (4)

where the $c_k$ are coefficients, $k = 2, 3, \ldots, m-1$, and $0 < a_1 < a_2 < \cdots < a_m$. PP1 can be linearized as PP2 below:

PP2: minimize $w = 2 \sum_{k=2}^{m-1} c_k \big( x - a_k + \sum_{l=1}^{k-1} d_l \big)$
subject to $x + \sum_{l=1}^{m-2} d_l \ge a_{m-1}$,
$0 \le d_l \le a_{l+1} - a_l$ for $l = 1, 2, \ldots, m-2$,
$x \in F$ (a feasible set), $x \ge 0$, $c_k \ge 0$.   (5)

Proof. According to Li [7], PP1 is equivalent to the following program:

PP3: minimize $w = 2 \sum_{k=2}^{m-1} c_k (x - a_k + r_k)$
subject to $x - a_k + r_k \ge 0$, $r_k \ge 0$ for $k = 2, 3, \ldots, m-1$,
$x \in F$ (a feasible set), $x \ge 0$, $c_k \ge 0$,   (6)

where $r_k$ is a deviation variable. PP3 implies that at the optimal solution $r_k = a_k - x$ if $x < a_k$, and $r_k = 0$ if $x \ge a_k$.

Substituting $r_k = \sum_{l=1}^{k-1} d_l$, with $0 \le d_l \le a_{l+1} - a_l$, PP3 becomes

PP4: minimize $w = 2 \sum_{k=2}^{m-1} c_k \big( x - a_k + \sum_{l=1}^{k-1} d_l \big)$
subject to $x + d_1 \ge a_2$, $x + d_1 + d_2 \ge a_3$, $\ldots$, $x + d_1 + d_2 + \cdots + d_{m-2} \ge a_{m-1}$,
$0 \le d_l \le a_{l+1} - a_l$,
$x \in F$ (a feasible set), $x \ge 0$, $c_k \ge 0$.   (7)

Since $a_{l+1} - a_l \ge d_l$, it is obvious that

$x \ge a_{m-1} - \sum_{l=1}^{m-2} d_l \ge a_{m-2} - \sum_{l=1}^{m-3} d_l \ge \cdots \ge a_3 - d_1 - d_2 \ge a_2 - d_1 \ge 0$.

Therefore, the first $m-2$ constraints in PP4 are covered by the first constraint in PP2. By doing so, Proposition 3 is proven. □

Many conventional goal programming methods (such as the Charnes and Cooper method in [3,5]) can be utilized to solve (4). Compared with conventional goal programming methods, linearizing (4) by (5) is more computationally efficient for two reasons:

(i) All constraints in (5) are simple upper- or lower-bound constraints except for the first constraint in (5).
(ii) By utilizing Li's method [7] for linearizing an absolute term with a positive coefficient, (6) contains only $m-2$ deviation variables (i.e., $r_2, r_3, \ldots, r_{m-1}$). In contrast, conventional goal programming techniques [3,5,11] require $2(m-2)$ deviation variables.

Example 2. Consider the following goal program:

minimize $w = 2(x - 1) + 2(|x - 2| + x - 2) + 1(|x - 3| + x - 3)$
subject to $x \ge 1.5$.

This program, as depicted in Fig. 4(a), can be transformed into the following linear program:

minimize $w = 2(x - 1) + 4(x - 2 + d_1) + 2(x - 3 + d_1 + d_2)$
subject to $x + d_1 + d_2 \ge 3$, $x \ge 1.5$,
$0 \le d_1 \le 1$, $0 \le d_2 \le 1$, $x \in F$.

LINDO [9] is used to solve the above program, obtaining $d_1 = 0.5$, $d_2 = 1$, $w = 1$, and $x = 1.5$.
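The same LP is easy to reproduce with any LP solver. A hedged re-solve using scipy (our sketch, not the paper's LINDO model), over the variables $(x, d_1, d_2)$ with the objective expanded to $8x + 6d_1 + 2d_2 - 16$:

```python
from scipy.optimize import linprog

c = [8, 6, 2]                           # objective 8x + 6*d1 + 2*d2 (constant -16 added back below)
A_ub = [[-1, -1, -1]]                   # x + d1 + d2 >= 3  rewritten as  -x - d1 - d2 <= -3
b_ub = [-3]
bounds = [(1.5, None), (0, 1), (0, 1)]  # x >= 1.5, 0 <= d1 <= 1, 0 <= d2 <= 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x, d1, d2 = res.x
print(x, d1, d2, res.fun - 16)          # x = 1.5, d1 = 0.5, d2 = 1.0, w = 1.0
```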

Notably, Proposition 3 can be applied only if all coefficients in (4) are nonnegative. The technique for linearizing an absolute term with a negative coefficient is introduced below.

Proposition 4. Consider the following program:

minimize $w = c\,|x - a|$
subject to $x \in F$ (a feasible set), where $c$ is a negative coefficient (i.e., $c < 0$) and $0 \le x \le \bar{x}$ ($\bar{x}$ is the upper bound of $x$).

This program can be replaced by the following mixed 0–1 program:

minimize $w = c\,(x - 2z + 2au - a)$
subject to $x + \bar{x}(u - 1) \le z$,
$x \in F$ (a feasible set), $x \ge 0$, $z \ge 0$, where $u$ is a 0–1 variable and $c$ is a negative constant.

Proof. Introduce a 0–1 variable $u$, where $u = 0$ if $x \ge a$ and $u = 1$ otherwise. Then $w$ can be rewritten as $c\,|x - a| = c\,(1 - 2u)(x - a) = c\,(x - 2ux + 2au - a)$. Denote the polynomial term $ux$ by $z$; it is straightforward to confirm that $z = x$ if $u = 1$ and $z = 0$ if $u = 0$. By referring to Li and Chang [8], the relationship among $x$, $z$ and $u$ is expressed as $x + \bar{x}(u - 1) \le z$ and $z \ge 0$. By doing so, Proposition 4 is proven. □


Example 3. Consider the following goal program:

minimize $w = 5 - (x - 1) - 1.5(|x - 2| + x - 2)$
subject to $x \le 2.5$.

This program, as depicted in Fig. 4(b), can be transformed into the following mixed 0–1 program:

minimize $w = 5 - (x - 1) - 1.5[(x - 2z + 4u - 2) + (x - 2)] = -4x + 3z - 6u + 12$
subject to $x + 3(u - 1) \le z$, $x \le 2.5$,
$x \ge 0$, $z \ge 0$, $x \in F$, where $u$ is a 0–1 variable.

Solving the above program by LINDO [9] yields $u = 0$, $z = 0$, $w = 2$, and $x = 2.5$.
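Example 3's mixed 0–1 program can likewise be checked with scipy's MILP interface. A sketch under our variable ordering $(x, z, u)$, with the constant 12 added back at the end (illustrative code, not the paper's):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-4.0, 3.0, -6.0])                  # objective -4x + 3z - 6u (+12)
# x + 3(u - 1) <= z  rewritten as  x - z + 3u <= 3
con = LinearConstraint([[1.0, -1.0, 3.0]], ub=[3.0])
bounds = Bounds([0.0, 0.0, 0.0], [2.5, np.inf, 1.0])
res = milp(c, constraints=con, integrality=[0, 0, 1], bounds=bounds)
x, z, u = res.x
print(x, z, u, res.fun + 12)                     # x = 2.5, z = 0, u = 0, w = 2
```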

Fig. 4. (a) A goal programming problem with convex objective function (Example 2). (b) A goal programming problem with concave objective function (Example 3).


Based on Propositions 1–4, Problem P1 can be approximated as the following program.

Program 2 (Proposed NSP method):

minimize $\sum_{i=1}^{n} \Big[ f_i(a_{i,1}) + s_{i,1}(x_i - a_{i,1}) + \sum_{k:\, s_{i,k} > s_{i,k-1}} (s_{i,k} - s_{i,k-1}) \big( x_i - a_{i,k} + \sum_{l=1}^{k-1} d_{i,l} \big) + \sum_{k:\, s_{i,k} < s_{i,k-1}} \frac{s_{i,k} - s_{i,k-1}}{2} \big[ (x_i - 2z_{i,k} + 2a_{i,k}u_{i,k} - a_{i,k}) + (x_i - a_{i,k}) \big] \Big]$

subject to $\sum_{i=1}^{n} h_{ij}(x_i) \ge 0$ for all $j$;
$x_i + \sum_{l=1}^{m_i-2} d_{i,l} \ge a_{i,m_i-1}$ and $0 \le d_{i,l} \le a_{i,l+1} - a_{i,l}$, for $i = 1, \ldots, n$ and $k$ where $s_{i,k} > s_{i,k-1}$;
$x_i + \bar{x}_i(u_{i,k} - 1) \le z_{i,k}$ and $z_{i,k} \ge 0$, for $i = 1, \ldots, n$ and $k$ where $s_{i,k} < s_{i,k-1}$;

where $x_i \ge 0$, $d_{i,l} \ge 0$, $z_{i,k} \ge 0$, the $u_{i,k}$ are 0–1 variables, and $\bar{x}_i$ denotes the upper bound of $x_i$.

Table 1 lists the extra 0–1 and continuous variables used in Programs 1 and 2; it indicates that, in solving an NSP problem, the proposed method (Program 2) uses fewer 0–1 variables than Program 1.

Table 1
Comparison of Programs 1 and 2

Program 1 (conventional NSP method): extra 0–1 variables $y_{i,k}$, one per piecewise segment of every $f_i(x_i)$; extra continuous variables $t_{i,k}$, one per piecewise segment of every $f_i(x_i)$.
Program 2 (proposed NSP method): extra 0–1 variables $u_{i,k}$, one per concave piecewise segment only; extra continuous variables $d_{i,l}$ (one per convex piecewise segment) and $z_{i,k}$ (one per concave piecewise segment).
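The saving is easy to quantify for a concrete function. A small count for Example 1's cubic (our illustrative sketch, not the paper's): Program 1 needs one 0–1 variable per segment, while Program 2 needs one only at each concave bend.

```python
import numpy as np

def extra_binaries(a, fa):
    """Return (Program 1 count, Program 2 count) of extra 0-1 variables."""
    s = np.diff(fa) / np.diff(a)               # segment slopes
    program1 = len(s)                          # one y per segment: m - 1
    program2 = int(np.sum(s[1:] < s[:-1]))     # one u per concave bend only
    return program1, program2

f = lambda x: x**3 - 4 * x**2 + 2 * x
a = np.arange(0.0, 5.5, 0.5)
print(extra_binaries(a, f(a)))                 # (10, 2) for Example 1
```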

3. Selection of break points

The accuracy of the piecewise linear estimate depends heavily on the selection of proper break points. With an increasing number of break points, the number of additional deviation variables for approximating a convex function (or zero–one variables for approximating a concave function) also increases. Consequently, inappropriately selected break points cause a heavy computational burden when piecewise linearizing nonlinear functions.

Bazaraa et al. [3] and Meyer [10] presented a means of selecting adequate break points. Their method initially utilizes a coarse grid and then generates finer break points around the optimal solution computed on the coarse grid. If necessary, break points around the optimal solution computed with the finer grid are generated again until the required precision is reached. Their method, although applicable to linearizing a convex function, is difficult to use for linearizing a nonconvex function.

Therefore, in this work, we present an ecient means of selecting break points. For instance, consider a convex function f(x1) ˆ 5x31 (Fig. 5(a)) where a16 x16 a5. Assume that three break points a2, a3, and a4

within a16 x16 a5 are selected. The error of piecewisely linearizing f(x1) is computed as

Table 1

Comparison of Programs 1 and 2 Extra 0±1

variables Number of extra 0±1variables Extra continuousvariables Number of extra continuousvariables Program 1 (Conventional

NSP Method) yi;k Number of all piecewisesegments for all fi(xi)

ti;k Number of all piecewise

segments for all fi(xi)

Program 2 (Proposed

NSP Method) ui;k Number of concavepiecewise segments only di;` Number of convex piecewisesegments only

zi;k Number of concave piecewise

(10)

Error ˆ f …a1† ‡ s…x1ÿ a1† ÿ f …x1† ˆ 125x1ÿ 5x31:

By taking partial oError/ox1ˆ 0, the maximal error occurs at x1ˆ 2.89 where oError=ox1ˆ

s ÿ …of …x1†=ox1† ˆ 125 ÿ 15x21ˆ 0. By doing so, we obtain the ®rst break point a3ˆ 2.89.

Similarly, finer break points $a_2$ and $a_4$ can be generated at the points of maximal error for $0 \le x_1 \le 2.89$ and $2.89 \le x_1 \le 5$, respectively, as depicted in Fig. 5(b). The second break point is $a_2 = 1.67$ (for $0 \le x_1 \le 2.89$), where $s_a - \partial f(x_1)/\partial x_1 = 41.76 - 15x_1^2 = 0$, and the third break point is $a_4 = 3.99$ (for $2.89 \le x_1 \le 5$), where $s_b - \partial f(x_1)/\partial x_1 = 239 - 15x_1^2 = 0$.

Similarly, consider the concave function $f(x_2) = 5x_2^{0.5} - x_2$ (Fig. 5(c)), where $a_1 \le x_2 \le a_3$ with $a_1 = 0$ and $a_3 = 4$, and assume we want to choose one break point $a_2$ within this interval. The chord over $[0, 4]$ has slope $s = 1.5$, so the error of piecewise linearizing $f(x_2)$ is

Error $= f(x_2) - (f(a_1) + s(x_2 - a_1)) = 5x_2^{0.5} - x_2 - 1.5x_2$.

Setting $\partial \text{Error}/\partial x_2 = 0$, the maximal error occurs at $x_2$ where $\partial f(x_2)/\partial x_2 - s = (2.5x_2^{-0.5} - 1) - 1.5 = 0$. The obtained break point is $a_2 = 1$.

Since treating continuous variables is computationally more efficient than treating zero–one variables, we recommend selecting three break points for linearizing a convex function and one break point for linearizing a concave function at each iteration.
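The break-point rule reduces to one-dimensional root finding: the maximal error on an interval occurs where the tangent slope matches the chord slope. A sketch (our code; names illustrative) reproducing the values of this section:

```python
import numpy as np
from scipy.optimize import brentq

def max_error_point(f, df, lo, hi):
    """Point in (lo, hi) where the chord of f has the same slope as f' (max error)."""
    s = (f(hi) - f(lo)) / (hi - lo)         # chord slope over [lo, hi]
    return brentq(lambda x: df(x) - s, lo + 1e-9, hi - 1e-9)

f = lambda x: 5 * x**3                      # convex example of Fig. 5(a)
df = lambda x: 15 * x**2
a3 = max_error_point(f, df, 0.0, 5.0)       # ~2.89
a2 = max_error_point(f, df, 0.0, a3)        # ~1.67
a4 = max_error_point(f, df, a3, 5.0)        # ~3.99
print(round(a3, 2), round(a2, 2), round(a4, 2))
```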

4. Solution algorithm

The solution algorithm for Problem P1 is described in the following steps.

Step 1. Select initial break points.
(i) For each function $f_i(x_i)$ that is convex on the interval $\underline{x}_i \le x_i \le \bar{x}_i$, three break points within this interval are selected by the method described in Section 3.
(ii) For each function $f_i(x_i)$ that is concave on the interval $\underline{x}_i \le x_i \le \bar{x}_i$, one break point within this interval is selected by the method described in Section 3.

Step 2. Formulate piecewise functions. Proposition 1 is used to approximately linearize each function $f_i(x_i)$ as

$\hat{f}_i(x_i) = f_i(a_{i,1}) + s_{i,1}(x_i - a_{i,1}) + \sum_{k=2}^{m_i-1} \frac{s_{i,k} - s_{i,k-1}}{2} (|x_i - a_{i,k}| + x_i - a_{i,k})$,

where the $a_{i,k}$ are the break points selected in Step 1.

Step 3. Linearize the program. Proposition 3 is used to linearize the absolute terms where $s_{i,k} > s_{i,k-1}$, and Proposition 4 to linearize the absolute terms where $s_{i,k} < s_{i,k-1}$.

Step 4. Solve the program and assess the tolerable error. Solve the linear mixed integer program to obtain the solution $x^D = (x_1^D, x_2^D, \ldots, x_n^D)$. If $|f_i(x_i^D) - \hat{f}_i(x_i^D)| \le \epsilon$ for all $i$, where $\hat{f}_i(x_i)$ is the approximate linear function expressed in Step 2, then terminate the solution process; otherwise go to Step 5.

Step 5. Add finer break points. If $a_k \le x_i^D \le a_{k'}$, then add new break points within this interval and reiterate from Step 2.
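For a single separable term, the whole loop of Steps 1–5 can be sketched compactly. The stand-in below (ours, not the paper's implementation) replaces the mixed 0–1 solve of Steps 3–4 with a dense grid search over the surrogate, and strengthens Step 4's point check into a maximum-error check on the interval containing the incumbent, which avoids a spurious pass when the incumbent lands exactly on a break point:

```python
import numpy as np

def solve_univariate(f, lo, hi, eps=0.01, max_iter=20):
    bp = np.linspace(lo, hi, 5)                        # Step 1: coarse break points
    for _ in range(max_iter):
        surrogate = lambda t: np.interp(t, bp, f(bp))  # Step 2: piecewise linear f-hat
        grid = np.linspace(lo, hi, 2001)               # Steps 3-4: stand-in for the MILP solve
        x_star = grid[np.argmin(surrogate(grid))]
        k = min(max(np.searchsorted(bp, x_star), 1), len(bp) - 1)
        seg = np.linspace(bp[k - 1], bp[k], 51)        # interval containing x_star
        if np.max(np.abs(f(seg) - surrogate(seg))) <= eps:
            return x_star                              # Step 4: tolerance met
        bp = np.unique(np.concatenate([bp, np.linspace(bp[k - 1], bp[k], 5)]))  # Step 5
    return x_star

f = lambda x: x**3 - 4 * x**2 + 2 * x                  # Example 1's nonconvex term
print(solve_univariate(f, 0.0, 5.0))                   # objective within eps of the
                                                       # global minimum near x = 2.39
```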

5. Numerical examples

Example 4. Consider the following separable programming problem with a nonconvex objective function, in which one of the constraints is also nonconvex:

minimize $w = x_1^3 - 4x_1^2 + 2x_1 + x_2^3 - 4x_2^2 + 3x_2$
subject to $3x_1 + 2x_2 \le 11.75$, $2x_1 + 5x_2^{0.5} - x_2 \ge 9$,
$0 \le x_1 \le 5$, $0 \le x_2 \le 4$.

Step 1. Select initial break points. On the basis of Section 3, one break point ($x_2 = 1$) is selected for the function $5x_2^{0.5} - x_2$ within $0 \le x_2 \le 4$, as depicted in Fig. 5(c). For the function $x_2^3 - 4x_2^2 + 3x_2$, one break point ($x_2 = 0.32$) is selected for the concave portion $0 \le x_2 \le 1.5$, and three break points ($x_2 = 2.3$, 2.923 and 3.48) are selected for the convex portion $1.5 \le x_2 \le 4$ (Fig. 6(b)).

Step 2. Formulate the piecewise functions. The original problem is expressed piecewise as

minimize $w = (\text{right-hand side of Expression (3)}) + 1.8224x_2 - \frac{3.27}{2}(|x_2 - 0.32| + x_2 - 0.32) + \frac{0.2376}{2}(|x_2 - 1.5| + x_2 - 1.5) + \frac{3.873}{2}(|x_2 - 2.3| + x_2 - 2.3) + \frac{5.553}{2}(|x_2 - 2.923| + x_2 - 2.923) + \frac{6.894}{2}(|x_2 - 3.48| + x_2 - 3.48)$

subject to $3x_1 + 2x_2 \le 11.75$,
$2x_1 + 4x_2 - \frac{3.3334}{2}(|x_2 - 1| + x_2 - 1) \ge 9$,
$0 \le x_1 \le 5$, $0 \le x_2 \le 4$.

Step 3. Linearize the program. The above problem is converted into the following linear mixed 0–1 program:

minimize $w = 31.75x_1 + 2.5z_{11} + z_{12} - 1.25u_{11} - u_{12} + 35d_{11} + 34.5d_{12} + 32.5d_{13} + 29d_{14} + 24d_{15} + 17.5d_{16} + 9.5d_{17} + 15.11x_2 + 3.27z_{21} - 1.046u_{21} + 16.5576d_{21} + 16.32d_{22} + 12.447d_{23} - 6.894d_{24} - 172.19$

subject to $x_1 + d_{11} + d_{12} + d_{13} + d_{14} + d_{15} + d_{16} + d_{17} \ge 4.5$,
$x_2 + d_{21} + d_{22} + d_{23} + d_{24} \ge 3.84$,
$x_1 + 5(u_{11} - 1) \le z_{11}$, $x_1 + 5(u_{12} - 1) \le z_{12}$, $x_2 + 4(u_{21} - 1) \le z_{21}$,
$3x_1 + 2x_2 \le 11.75$, $2x_1 + 0.666x_2 + 3.334z_{22} - 3.334u_{22} \ge 5.666$, $x_2 + 4(u_{22} - 1) \le z_{22}$,
$0 \le x_1 \le 5$, $0 \le x_2 \le 4$,
$d_{1j} \le 0.5$ for $j = 1, 2, \ldots, 7$; $d_{21} \le 1.18$, $d_{22} \le 0.8$, $d_{23} \le 0.623$, $d_{24} \le 0.557$;
$u_{11}, u_{12}, u_{21}, u_{22}$ are 0–1 variables.

Step 4. Solve the program and assess the tolerable error. Running LINDO [9] gives the optimal solution $x_1 = 2.38333$, $x_2 = 2.3$, $w = -6.380064$, with an approximation error of 0.129. Assume that the pre-specified tolerable error must be less than 0.01; we therefore go to Step 5.

Step 5. Add finer break points. To derive a solution closer to the global optimum and satisfy the pre-specified approximation error of 0.01, three break points (2.285, 2.386, 2.48) can be added for the function $x_1^3 - 4x_1^2 + 2x_1$ within $2.18 \le x_1 \le 2.58$. In addition, three break points (2.2055, 2.3069, 2.4049) can be added for the function $x_2^3 - 4x_2^2 + 3x_2$ within $2.1 \le x_2 \le 2.5$. Similarly, one break point ($x_2 = 2.29564$) is added for the function $5x_2^{0.5} - x_2$ within $2.1 \le x_2 \le 2.5$.

The problem then becomes

minimize $w = -0.90507(x_1 - 2.18) + \frac{0.5873}{2}(|x_1 - 2.285| + x_1 - 2.285) + \frac{0.6145}{2}(|x_1 - 2.386| + x_1 - 2.386) + \frac{0.6685}{2}(|x_1 - 2.48| + x_1 - 2.48) - 0.131626(x_2 - 2.1) + \frac{0.5404}{2}(|x_2 - 2.2055| + x_2 - 2.2055) + \frac{0.5817}{2}(|x_2 - 2.3069| + x_2 - 2.3069) + \frac{0.6203}{2}(|x_2 - 2.4049| + x_2 - 2.4049)$

subject to $3x_1 + 2x_2 \le 11.75$,
$2x_1 + 0.6868(x_2 - 2.1) - \frac{0.0719}{2}(|x_2 - 2.29564| + x_2 - 2.29564) \ge 9$,
$2.18 \le x_1 \le 2.58$, $2.1 \le x_2 \le 2.5$.


The problem is linearized as follows:

minimize $w = 0.965204x_1 + 1.870274d_{11} + 1.282974d_{12} + 0.668524d_{13} + 1.42614x_2 + 1.7424d_{21} + 1.202d_{22} + 0.6203d_{23} - 5.4092$

subject to $x_1 + d_{11} + d_{12} + d_{13} \ge 2.48$, $x_2 + d_{21} + d_{22} + d_{23} \ge 2.4049$,
$3x_1 + 2x_2 \le 11.75$, $2x_1 + 0.6149x_2 + 0.0719z - 0.16506u \ge 10.277223$,
$x_2 + 2.5(u - 1) \le z$, where $u$ is a zero–one variable,
$2.18 \le x_1 \le 2.58$, $2.1 \le x_2 \le 2.5$,
$d_{11} \le 0.105$, $d_{12} \le 0.101$, $d_{13} \le 0.094$,
$d_{21} \le 0.10548$, $d_{22} \le 0.10139$, $d_{23} \le 0.098$.

After running LINDO [9], the refined optimal values are $x_1 = 2.3875$ and $x_2 = 2.2155$; the objective value is $-6.5291$ and the approximation error is $0.00029 < 0.01$. The solution process is terminated since the approximation error is less than the pre-specified tolerable error.

Example 5 (taken from Klein et al. [6]). The amount of electric power that can be produced by a multi-unit hydro-electric generating station depends on the amount of water discharged through each unit. If the discharge is not properly allocated among the generating units, the potential power output may not be fully achieved, and more expensive sources such as nuclear, coal or oil (which are environmentally less attractive) would have to replace any loss. An electric utility should therefore maximize hydro-electric generation, which is the cheapest and cleanest source of energy. In addition, the quantity of electricity generated through each generating unit is a nonconvex function, since the efficiency characteristics may differ among units [6]. An illustrative example with two hydro-electric generating units, depicted in Fig. 7(a) and (b) respectively, follows:

maximize $f_1(x_1) + f_2(x_2)$
subject to $x_1 \le 241$, $x_2 \le 250$,
$x_1 + x_2 = Q$, $x_1, x_2 \ge 0$,

where $Q$ denotes varying values of the total discharge.

On the basis of Proposition 1, $f_1(x_1)$ and $f_2(x_2)$ can be expressed as follows:

$f_1(x_1) = 0.23256(x_1 - 11) + 0.00872(|x_1 - 54| + x_1 - 54) - 0.04924(|x_1 - 142| + x_1 - 142)$,

$f_2(x_2) = 0.22727(x_2 - 11) + 0.040475(|x_2 - 55| + x_2 - 55) - 0.041865(|x_2 - 201| + x_2 - 201)$.

Based on Propositions 3 and 4, the problem can be reformulated as follows:

minimize $-f_1(x_1) - f_2(x_2) = -0.15152x_1 + 0.01744z_1 - 0.94176u_1 + 0.09848d_1 - 0.22449x_2 + 0.08095z_2 - 4.45225u_2 + 0.08373d_2 - 20.36175$

subject to $x_1 + d_1 \ge 142$, $d_1 \le 88$, $x_1 + 241(u_1 - 1) \le z_1$,
$x_2 + d_2 \ge 201$, $d_2 \le 146$, $x_2 + 250(u_2 - 1) \le z_2$,
$x_1 + x_2 = Q$, $x_1, x_2 \ge 0$, and $u_1, u_2$ are 0–1 variables.


Letting $Q = 450$, $400$, $350$, $300$ and $250$, the computed optimal discharge allocations $(x_1, x_2)$ are $(200, 250)$, $(150, 250)$, $(142, 208)$, $(142, 158)$, and $(142, 108)$, respectively. The obtained solutions are the same as those found by Klein et al. [6].

Fig. 7. (a) A hydro-electric generating function $f_1(x_1)$. (b) A hydro-electric generating function $f_2(x_2)$.

Example 6 (modified from Hillier et al. [5]). A farmer raises pigs for market and wishes to determine the quantities of the available feed types that should be administered to each pig to fulfill certain nutritional requirements at minimum cost. Table 2 provides the number of units of each basic nutritional ingredient contained in a kilogram of each feed type, along with the daily nutritional requirements and feed costs.

Table 2
Required nutritional ingredients

Nutritional ingredient   Kilogram of corn   Kilogram of tankage   Kilogram of alfalfa   Minimum daily requirement
Carbohydrates            90                 20                    40                    2000
Protein                  30                 80                    60                    1800
Vitamins                 10                 20                    60                    1500


By considering factors such as holding cost, order cost, and quantity discounts, the cost functions $f(x_1)$, $f(x_2)$, and $f(x_3)$ naturally take a nonconvex shape [1,2,4,12], as depicted in Fig. 8(a)–(c), respectively.

Based on Proposition 1, the cost functions are formulated as follows:

$f(x_1) = 40x_1 + 5(|x_1 - 10| + x_1 - 10) - 5(|x_1 - 12| + x_1 - 12)$,

$f(x_2) = 20x_2 - 5(|x_2 - 10| + x_2 - 10) + 5(|x_2 - 12| + x_2 - 12)$,

$f(x_3) = 30x_3 + 10(|x_3 - 10| + x_3 - 10) - 10(|x_3 - 20| + x_3 - 20)$.


On the basis of Propositions 3 and 4, $f(x_1)$, $f(x_2)$ and $f(x_3)$ can be linearized as follows:

$f(x_1) = 40x_1 + 10z_1 - 120u_1 + 10d_1 + 20$,
where $x_1 + d_1 \ge 10$, $d_1 \le 10$, $x_1 + 17(u_1 - 1) \le z_1$, and $u_1$ is a 0–1 variable;

$f(x_2) = 20x_2 + 10z_2 - 100u_2 + 10d_2 - 20$,
where $x_2 + d_2 \ge 12$, $d_2 \le 2$, $x_2 + 17(u_2 - 1) \le z_2$, and $u_2$ is a 0–1 variable; and

$f(x_3) = 30x_3 + 20z_3 - 400u_3 + 20d_3 + 200$,
where $x_3 + d_3 \ge 10$, $d_3 \le 10$, $x_3 + 25(u_3 - 1) \le z_3$, and $u_3$ is a 0–1 variable.

Therefore, the problem is formulated as follows:

minimize $f(x_1) + f(x_2) + f(x_3)$
subject to $x_1 + d_1 \ge 10$, $d_1 \le 10$, $x_1 + 17(u_1 - 1) \le z_1$,
$x_2 + d_2 \ge 12$, $d_2 \le 2$, $x_2 + 17(u_2 - 1) \le z_2$,
$x_3 + d_3 \ge 10$, $d_3 \le 10$, $x_3 + 25(u_3 - 1) \le z_3$,
$90x_1 + 20x_2 + 40x_3 \ge 2000$,
$30x_1 + 80x_2 + 60x_3 \ge 1800$,
$10x_1 + 20x_2 + 60x_3 \ge 1500$,
$x_1, x_2, x_3 \ge 0$, and $u_1, u_2, u_3$ are 0–1 variables.

After running LINDO [9], the optimal values are $x_1 = 11.04$, $x_2 = 12$, and $x_3 = 19.16$.

6. Concluding remark

This paper treats nonconvex separable programming problems in which the objective function and the constraints may be nonconvex. Compared with conventional NSP methods, the proposed method derives an approximately global optimum of an NSP problem using fewer zero–one variables. The quality of the derived solution can be improved by adequately adding break points within the searching intervals.

References

[1] R.C. Baker, T.L. Urban, A deterministic inventory system with an inventory-level-dependent demand rate, Journal of the Operational Research Society 39 (9) (1988) 823–831.
[2] M.A. Bakir, M.D. Byrne, An application of the multi-stage Monte Carlo optimization algorithm to aggregate production planning, International Journal of Production Economics 35 (1–3) (1994) 207–213.
[3] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, second ed., Wiley, New York, 1993.
[4] F. Chen, Y.S. Zheng, Inventory models with general backorder costs, European Journal of Operational Research 65 (2) (1993) 175–186.
[5] F.S. Hillier, G.J. Lieberman, Introduction to Operations Research, sixth ed., McGraw-Hill, New York, 1995.
[6] E.M. Klein, S.H. Sim, Discharge allocation for hydro-electric generating stations, European Journal of Operational Research 73 (1994) 132–138.
[7] H.L. Li, An efficient method for solving linear goal programming problems, Journal of Optimization Theory and Applications 90 (2) (1996) 465–469.
[8] H.L. Li, C. Chang, An approximate approach of global optimization for polynomial programming problems, European Journal of Operational Research 107 (1998) 625–632.
[9] LINDO Systems Inc., LINDO Release 6.0 – User's Guide, USA, 1997.
[10] R.R. Meyer, Two-segment separable programming, Management Science 25 (4) (1979) 385–395.
[11] H.A. Taha, Operations Research, fifth ed., Macmillan, New York, 1992.
[12] T.L. Urban, Deterministic inventory models incorporating marketing decisions, Computers and Industrial Engineering 22 (1) (1992) 85–93.
