Theory and Methodology
A global optimization method for nonconvex separable
programming problems
Han-Lin Li a,*, Chian-Son Yu b

a Institute of Information Management, National Chiao Tung University, Hsinchu 30050, Taiwan, ROC
b Department of Information Management, Shih Chien University, Taipei 10020, Taiwan, ROC
Received 7 May 1997; accepted 8 June 1998
Abstract
Conventional methods of solving nonconvex separable programming (NSP) problems by mixed integer programming require adding numerous 0-1 variables. In this work, we present a new method of deriving the global optimum of an NSP program using fewer 0-1 variables. A separable function is first expressed as a piecewise linear function with a sum of absolute terms. Linearizing these absolute terms converts an NSP problem into a linear mixed 0-1 program whose solution is extremely close to the global optimum. © 1999 Elsevier Science B.V. All rights reserved.
Keywords: Goal programming; Piecewise linear function; Separable programming
1. Introduction
Separable programs are nonlinear programs in which the objective function and constraints can be expressed as a sum of functions, each involving only one variable. The nonconvex separable programming (NSP) problem discussed herein, denoted as Problem P1, is expressed as follows:
Problem P1 (NSP problem)

minimize $\sum_{i=1}^{n} f_i(x_i)$

subject to $\sum_{i=1}^{n} h_{ij}(x_i) \ge 0$ for all $j$; $x_i \ge 0$ for $i = 1, 2, \ldots, n$;
* Corresponding author. Tel.: +886-35-728709; fax: +886-35-723792; e-mail: hlli@cc.nctu.edu.tw
0377-2217/99/$ - see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0377-2217(98)00243-4
where the $f_i(x_i)$ may be nonconvex functions and the $h_{ij}(x_i)$ are linear functions.

If all $f_i(x_i)$ in Problem P1 are convex, Problem P1 can be solved by the simplex method to obtain the global optimum. The conventional means of solving Problem P1 with nonconvex $f_i(x_i)$ [3,5,11] is discussed below.
Assume that $f_i(x_i)$ is to be approximately linearized over the interval $[a, b]$. Define $a_{i,k}$, $k = 1, 2, \ldots, m_i$, as the $k$th break point on the $x_i$-axis such that $a_{i,1} < a_{i,2} < \cdots < a_{i,m_i}$, with $a_{i,1} = a$ and $a_{i,m_i} = b$. Then $f_i(x_i)$ can be approximated as

$f_i(x_i) \cong \sum_{k=1}^{m_i} f_i(a_{i,k})\, t_{i,k},$   (1)

where $x_i = \sum_{k=1}^{m_i} a_{i,k} t_{i,k}$, $\sum_{k=1}^{m_i} t_{i,k} = 1$, $t_{i,k} \ge 0$, and only two adjacent $t_{i,k}$, e.g. $(t_{i,k-1}, t_{i,k})$ or $(t_{i,k}, t_{i,k+1})$, are allowed to be nonzero. In reference to Eq. (1), conventional methods [3,5,11] treat the NSP problem as the following program.
Program 1 (Conventional NSP methods [3,5,11])

minimize $\sum_{i=1}^{n} \sum_{k=1}^{m_i} f_i(a_{i,k})\, t_{i,k}$

subject to $\sum_{i=1}^{n} h_{ij}(x_i) \ge 0$ for all $j$;
$x_i = \sum_{k=1}^{m_i} a_{i,k} t_{i,k}$ for $i = 1, 2, \ldots, n$;
$t_{i,1} \le y_{i,1}$ for $i = 1, 2, \ldots, n$;
$t_{i,k} \le y_{i,k-1} + y_{i,k}$ for $i = 1, 2, \ldots, n$, $k = 2, 3, \ldots, m_i - 1$;
$t_{i,m_i} \le y_{i,m_i-1}$ for $i = 1, 2, \ldots, n$;
$\sum_{k=1}^{m_i-1} y_{i,k} = 1$ and $\sum_{k=1}^{m_i} t_{i,k} = 1$ for $i = 1, 2, \ldots, n$;

where $y_{i,k} = 0$ or $1$ and $t_{i,k} \ge 0$ for $k = 1, 2, \ldots, m_i$ and $i = 1, 2, \ldots, n$.
Obviously, at the optimum there exists a unique $k$ with $y_{i,k} = 1$ and $t_{i,k} + t_{i,k+1} = 1$, in which case Program 1 becomes

minimize $\sum_{i=1}^{n} \bigl[ f_i(a_{i,k})\, t_{i,k} + f_i(a_{i,k+1})(1 - t_{i,k}) \bigr]$

subject to $\sum_{i=1}^{n} h_{ij}(x_i) \ge 0$ for all $j$;
$x_i = a_{i,k+1} + (a_{i,k} - a_{i,k+1})\, t_{i,k}$ for $i = 1, 2, \ldots, n$;
$x_i \ge 0$, $t_{i,k} \ge 0$.
Program 1 is a linear mixed integer program that can attain the global optimum. It is seriously limited, however, in that it contains a large number of 0-1 variables, which incur a heavy computational burden. The number of newly added 0-1 variables required to approximately linearize a function $f_i(x_i)$ equals the number of breaking intervals. For instance, Program 1 requires $\sum_{i=1}^{n} (m_i - 1)$ zero-one variables (i.e., $y_{i,1}, y_{i,2}, \ldots, y_{i,m_i-1}$).
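As an illustration (not from the paper, which solves its programs with LINDO), Program 1 can be sketched with scipy's MILP solver for a single function, here $f(x) = x^3 - 4x^2 + 2x$ on $[0, 5]$ with break points every 0.5 as in Example 1 below; the adjacency condition on the $t_{i,k}$ is enforced by the 0-1 variables $y_{i,k}$:

```python
# Sketch of Program 1 for one function f(x) = x^3 - 4x^2 + 2x on [0, 5].
# Variables: t_1..t_m (interpolation weights) and y_1..y_{m-1} (0-1 segment
# indicators forcing at most two adjacent t's to be nonzero).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

a = np.arange(0.0, 5.5, 0.5)               # break points a_1..a_11
fa = a**3 - 4*a**2 + 2*a                   # f(a_k)
m = len(a)
n_var = 2*m - 1                            # [t_1..t_m, y_1..y_{m-1}]
c = np.concatenate([fa, np.zeros(m - 1)])  # minimize sum_k f(a_k) t_k

A, lb, ub = [], [], []
row = np.zeros(n_var); row[:m] = 1
A.append(row); lb.append(1); ub.append(1)  # sum_k t_k = 1
row = np.zeros(n_var); row[m:] = 1
A.append(row); lb.append(1); ub.append(1)  # sum_k y_k = 1
for k in range(m):                         # t_k <= y_{k-1} + y_k (ends truncated)
    row = np.zeros(n_var)
    row[k] = 1
    if k > 0:
        row[m + k - 1] = -1
    if k < m - 1:
        row[m + k] = -1
    A.append(row); lb.append(-np.inf); ub.append(0)

res = milp(c=c,
           constraints=LinearConstraint(np.array(A), lb, ub),
           integrality=np.concatenate([np.zeros(m), np.ones(m - 1)]),
           bounds=Bounds(np.zeros(n_var), np.ones(n_var)))
x_opt = float(a @ res.x[:m])               # recover x = sum_k a_k t_k
```

Since the objective is linear in the $t_k$, the interpolant's minimum lands on the break point with the smallest $f(a_k)$, here $x = 2.5$ with approximate objective $-4.375$.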
An alternative means of solving Problem P1 is the restricted-basis simplex method [3,11]. This method specifies that no more than two positive $t_{i,k}$ can appear in the basis. Moreover, two $t_{i,k}$ can be positive only if they are adjacent. In this case, the additional constraints involving $y_{i,k}$ are disregarded. The restricted-basis method, although computationally efficient in terms of solving Problem P1, can only guarantee to attain a local optimum [3,11].
In light of the above discussion, this work presents a novel means of solving Problem P1. The proposed method is advantageous over conventional NSP methods in that it can find the approximately global optimum of an NSP problem using fewer 0-1 variables. The solution derived herein can be improved by adequately adding break points within the searching intervals.
2. Preliminaries
Some propositions on how to linearize a nonconvex separable function f(x) are described as follows.

Proposition 1. Let f(x) be the piecewise linear function of x depicted in Fig. 1, where $a_k$, $k = 1, 2, \ldots, m$, are the break points of f(x), and $s_k$, $k = 1, 2, \ldots, m-1$, are the slopes of the line segments between $a_k$ and $a_{k+1}$,

$s_k = \frac{f(a_{k+1}) - f(a_k)}{a_{k+1} - a_k}.$

Then f(x) can be expressed as

$f(x) = f(a_1) + s_1 (x - a_1) + \sum_{k=2}^{m-1} \frac{s_k - s_{k-1}}{2} \bigl( |x - a_k| + x - a_k \bigr),$   (2)

where $|\cdot|$ denotes the absolute value.
This proposition can be examined as follows:

(i) If $x \le a_2$, then

$f(x) = f(a_1) + \frac{f(a_2) - f(a_1)}{a_2 - a_1}(x - a_1) = f(a_1) + s_1 (x - a_1).$

(ii) If $a_2 \le x \le a_3$, then

$f(x) = f(a_1) + s_1 (a_2 - a_1) + s_2 (x - a_2) = f(a_1) + s_1 (x - a_1) + \frac{s_2 - s_1}{2}\bigl( |x - a_2| + x - a_2 \bigr).$

(iii) If $x \le a_{k'}$, then $\sum_{k = k'}^{m-1} \bigl( |x - a_k| + x - a_k \bigr) = 0$, and f(x) becomes

$f(x) = f(a_1) + s_1 (x - a_1) + \sum_{k=2}^{k'-1} \frac{s_k - s_{k-1}}{2}\bigl( |x - a_k| + x - a_k \bigr).$
Example 1. Consider a separable function $f(x_1) = x_1^3 - 4x_1^2 + 2x_1$ depicted in Fig. 2(a), where $0 \le x_1 \le 5$. Assume that the break points of $f(x_1)$ are 0, 0.5, 1, 1.5, ..., 4.5, 5. In reference to Proposition 1, $f(x_1)$ can be approximately linearized as follows (Fig. 2(b)):

$f(x_1) = x_1^3 - 4x_1^2 + 2x_1 \cong 0.25 x_1 - \frac{2.5}{2}(|x_1 - 0.5| + x_1 - 0.5) - \frac{1}{2}(|x_1 - 1| + x_1 - 1) + \frac{0.5}{2}(|x_1 - 1.5| + x_1 - 1.5) + \frac{2}{2}(|x_1 - 2| + x_1 - 2) + \frac{3.5}{2}(|x_1 - 2.5| + x_1 - 2.5) + \frac{5}{2}(|x_1 - 3| + x_1 - 3) + \frac{6.5}{2}(|x_1 - 3.5| + x_1 - 3.5) + \frac{8}{2}(|x_1 - 4| + x_1 - 4) + \frac{9.5}{2}(|x_1 - 4.5| + x_1 - 4.5).$   (3)
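Proposition 1 can be checked numerically; the sketch below (an illustration, not the authors' code) evaluates the absolute-value form of Eq. (2) and compares it with ordinary piecewise linear interpolation of the function from Example 1:

```python
# Check of Eq. (2): the absolute-value form coincides with piecewise linear
# interpolation of f(x1) = x1^3 - 4x1^2 + 2x1 at break points 0, 0.5, ..., 5.
import numpy as np

def piecewise_abs_form(x, a, fa):
    """f(a_1) + s_1 (x - a_1) + sum_k (s_k - s_{k-1})/2 (|x - a_k| + x - a_k)."""
    s = np.diff(fa) / np.diff(a)              # segment slopes s_1..s_{m-1}
    val = fa[0] + s[0] * (x - a[0])
    for k in range(1, len(s)):                # interior break points a_2..a_{m-1}
        val += (s[k] - s[k-1]) / 2 * (abs(x - a[k]) + x - a[k])
    return val

a = np.arange(0.0, 5.5, 0.5)
fa = a**3 - 4*a**2 + 2*a
for x in np.linspace(0, 5, 101):
    assert abs(piecewise_abs_form(x, a, fa) - np.interp(x, a, fa)) < 1e-9
```

The loop asserts the identity at 101 sample points; the two representations agree everywhere on the interval, as claim (iii) above implies.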
Expressing a separable function by Eq. (2) is advantageous in that the intervals of convexity and concavity of f(x) can be easily identified, as described by the following proposition.
Proposition 2. Consider f(x) in Eq. (2), where x is within the interval $[a_p, a_q]$, i.e. $a_p \le x \le a_q$. If $s_k > s_{k-1}$, then f(x) is convex for $a_{k-1} \le x \le a_{k+1}$, as depicted in Fig. 3(a). If $s_k < s_{k-1}$, then f(x) is concave for $a_{k-1} \le x \le a_{k+1}$, as depicted in Fig. 3(b).

Consider expression (3) and Fig. 2(b) as an example, in which f(x) is concave for $0 \le x \le 1$ and convex for $1 \le x \le 5$.
Proposition 3. The goal program PP1,

PP1: minimize $w = \sum_{k=2}^{m-1} c_k \bigl( |x - a_k| + x - a_k \bigr)$   (4)

subject to $x \in F$ (a feasible set), $x \ge 0$, $c_k \ge 0$,

where the $c_k$, $k = 2, 3, \ldots, m-1$, are coefficients and $0 < a_1 < a_2 < \cdots < a_m$, can be linearized as PP2 below:

PP2: minimize $w = 2 \sum_{k=2}^{m-1} c_k \Bigl( x - a_k + \sum_{l=1}^{k-1} d_l \Bigr)$   (5)

subject to $x + \sum_{l=1}^{m-2} d_l \ge a_{m-1}$; $0 \le d_l \le a_{l+1} - a_l$ for $l = 1, 2, \ldots, m-2$;
$x \in F$ (a feasible set), $x \ge 0$, $c_k \ge 0$.
Proof. According to Li [7], PP1 is equivalent to the following program:

PP3: minimize $w = 2 \sum_{k=2}^{m-1} c_k (x - a_k + r_k)$

subject to $x - a_k + r_k \ge 0$, $r_k \ge 0$ for $k = 2, 3, \ldots, m-1$;
$x \in F$ (a feasible set), $x \ge 0$, $c_k \ge 0$;

where $r_k$ is a deviation variable. PP3 implies that, at an optimal solution, $r_k = a_k - x$ if $x < a_k$, and $r_k = 0$ if $x \ge a_k$.

Substituting $r_k = \sum_{l=1}^{k-1} d_l$ with $0 \le d_l \le a_{l+1} - a_l$, PP3 becomes

PP4: minimize $w = 2 \sum_{k=2}^{m-1} c_k \Bigl( x - a_k + \sum_{l=1}^{k-1} d_l \Bigr)$

subject to $x + d_1 \ge a_2$; $x + d_1 + d_2 \ge a_3$; $\ldots$; $x + d_1 + d_2 + \cdots + d_{m-2} \ge a_{m-1}$; $0 \le d_l \le a_{l+1} - a_l$;
$x \in F$ (a feasible set), $x \ge 0$, $c_k \ge 0$.

Since $a_{l+1} - a_l \ge d_l$, it is obvious that

$x \ge a_{m-1} - \sum_{l=1}^{m-2} d_l \;\Rightarrow\; x \ge a_{m-2} - \sum_{l=1}^{m-3} d_l \;\Rightarrow\; \cdots \;\Rightarrow\; x \ge a_3 - d_1 - d_2 \;\Rightarrow\; x \ge a_2 - d_1.$

Therefore, all the inequality constraints in PP4 are covered by its last one, which is the first constraint in PP2. By doing so, Proposition 3 is proven. □
Many conventional goal programming methods (such as the Charnes and Cooper method in [3,5]) can be utilized to solve (4). Compared with conventional goal programming methods, linearizing (4) by (5) is more computationally efficient for the following two reasons.

(i) All constraints in (5) are simple upper- or lower-bound constraints except for the first constraint in (5).
(ii) By utilizing Li's method [7] for linearizing an absolute term with a positive coefficient, the linearization contains only $m - 2$ deviation variables (i.e., $r_2, r_3, \ldots, r_{m-1}$). In contrast, conventional goal programming techniques [3,5,11] require $2(m - 2)$ deviation variables.

Example 2. Consider the following goal program:
minimize $w = 2(x - 1) + 2(|x - 2| + x - 2) + 1(|x - 3| + x - 3)$
subject to $x \ge 1.5$.

This program, as depicted in Fig. 4(a), can be transformed into the following linear program:

minimize $w = 2(x - 1) + 4(x - 2 + d_1) + 2(x - 3 + d_1 + d_2)$
subject to $x + d_1 + d_2 \ge 3$; $x \ge 1.5$;
$0 \le d_1 \le 1$; $0 \le d_2 \le 1$; $x \in F$.

LINDO [9] is used to solve the above program, thereby obtaining $d_1 = 0.5$, $d_2 = 1$, $w = 1$, and $x = 1.5$.
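The same linear program can be reproduced with an off-the-shelf LP solver (a sketch using scipy rather than LINDO; expanding the objective gives $w = 8x + 6d_1 + 2d_2 - 16$):

```python
# Example 2 as an LP over (x, d1, d2): minimize 8x + 6 d1 + 2 d2, then add the
# constant -16 back to recover w.
from scipy.optimize import linprog

res = linprog(c=[8, 6, 2],
              A_ub=[[-1, -1, -1]],      # x + d1 + d2 >= 3
              b_ub=[-3],
              bounds=[(1.5, None), (0, 1), (0, 1)])
x, d1, d2 = res.x
w = res.fun - 16
```

The solver returns $x = 1.5$, $d_1 = 0.5$, $d_2 = 1$ and $w = 1$, matching the LINDO result above.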
Notably, Proposition 3 can be applied only if all coefficients in (4) are nonnegative. The technique of linearizing an absolute term with a negative coefficient is introduced below.
Proposition 4. Consider the following program:

minimize $w = c\,|x - a|$
subject to $x \in F$ (a feasible set), where $c$ is a negative coefficient (i.e. $c < 0$) and $0 \le x \le \bar{x}$ ($\bar{x}$ is the upper bound of $x$).

This program can be replaced by the mixed 0-1 program:

minimize $w = c\,(x - 2z + 2au - a)$
subject to $x + \bar{x}(u - 1) \le z$;
$x \in F$ (a feasible set), $x \ge 0$, $z \ge 0$, where $u$ is a 0-1 variable, $c$ is a negative constant, and $0 \le x \le \bar{x}$.

Proof. Introduce a 0-1 variable $u$ such that $u = 0$ if $x \ge a$ and $u = 1$ otherwise. It is convenient to confirm that if $u = 1$ then $z = x$, and if $u = 0$ then $z = 0$. Thus, $w$ can be rewritten as $c\,|x - a| = c(1 - 2u)(x - a) = c(x - 2ux + 2au - a)$. Denote the polynomial term $ux$ as $z$. By referring to Li and Chang [8], the relationship among $x$, $z$ and $u$ is expressed as $x + \bar{x}(u - 1) \le z$ and $z \ge 0$. By doing so, Proposition 4 is proven. □
Example 3. Consider the following goal program:

minimize $w = 5 - (x - 1) - 1.5(|x - 2| + x - 2)$
subject to $x \le 2.5$.

This program, as depicted in Fig. 4(b), can be transformed into the following mixed 0-1 program:

minimize $w = 5 - (x - 1) - 1.5(x - 2z + 4u - 2) - 1.5(x - 2) = -4x + 3z - 6u + 12$
subject to $x + 3(u - 1) \le z$; $x \le 2.5$;
$x \ge 0$, $z \ge 0$, $u$ is a 0-1 variable; $x \in F$.
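The mixed 0-1 program above can likewise be checked with scipy's MILP solver (a sketch; the paper uses LINDO):

```python
# Example 3 over variables (x, z, u): minimize -4x + 3z - 6u, with the constant
# +12 restored afterwards. The constraint x + 3(u-1) <= z links z to u*x.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-4.0, 3.0, -6.0])                          # coefficients of (x, z, u)
cons = LinearConstraint(np.array([[1.0, -1.0, 3.0]]),    # x - z + 3u <= 3
                        -np.inf, 3.0)
res = milp(c=c, constraints=cons,
           integrality=np.array([0, 0, 1]),              # u is a 0-1 variable
           bounds=Bounds([0, 0, 0], [2.5, np.inf, 1]))
x, z, u = res.x
w = res.fun + 12
```

The solver returns $u = 0$, $z = 0$, $x = 2.5$ and $w = 2$, matching the LINDO result reported below.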
Solving the above program by LINDO [9] yields $u = 0$, $z = 0$, $w = 2$, and $x = 2.5$.

Based on Propositions 1, 3 and 4, Problem P1 can be approximated as the following program:
Fig. 4. (a) A goal programming problem with convex objective function (Example 2). (b) A goal programming problem with concave objective function (Example 3).
Program 2 (Proposed NSP method)

minimize $\sum_{i=1}^{n} \Bigl[ f_i(a_{i,1}) + s_{i,1}(x_i - a_{i,1}) + 2 \sum_{k:\, s_{i,k} > s_{i,k-1}} \frac{s_{i,k} - s_{i,k-1}}{2} \Bigl( x_i - a_{i,k} + \sum_{l=1}^{k-1} d_{i,l} \Bigr) + \sum_{k:\, s_{i,k} < s_{i,k-1}} \frac{s_{i,k} - s_{i,k-1}}{2} \bigl( (x_i - 2z_{i,k} + 2a_{i,k}u_{i,k} - a_{i,k}) + (x_i - a_{i,k}) \bigr) \Bigr]$

subject to $\sum_{i=1}^{n} h_{ij}(x_i) \ge 0$ for all $j$;

$x_i + \sum_{l=1}^{m_i-2} d_{i,l} \ge a_{i,m_i-1}$ and $0 \le d_{i,l} \le a_{i,l+1} - a_{i,l}$, for $i = 1, \ldots, n$ and $k$ where $s_{i,k} > s_{i,k-1}$;

$x_i + \bar{x}_i (u_{i,k} - 1) \le z_{i,k}$ and $z_{i,k} \ge 0$, for $i = 1, \ldots, n$ and $k$ where $s_{i,k} < s_{i,k-1}$;

where $x_i \ge 0$, $d_{i,l} \ge 0$, $z_{i,k} \ge 0$, and the $u_{i,k}$ are 0-1 variables.
Table 1 lists the extra 0-1 and continuous variables used in Programs 1 and 2. Table 1 indicates that, for solving an NSP problem, the proposed method uses fewer 0-1 variables than Program 1.

3. Selection of break points
The accuracy of the piecewise linear estimate depends heavily on the selection of proper break points. With an increasing number of break points, the number of additional deviation variables for approximating a convex function (or zero-one variables for approximating a concave function) also increases. Consequently, inappropriate selection of break points causes a computational burden when piecewise linearizing nonlinear functions.
Bazaraa et al. [3] and Meyer [10] presented a means of selecting adequate break points. Their method initially utilizes a coarse grid and then generates finer break points around the optimal solution computed on the coarse grid. If necessary, break points around the optimal solution computed with the finer grid are generated again until the desired precision is reached. Their method, although applicable to linearizing a convex function, is difficult to use for linearizing a nonconvex function.
Therefore, in this work, we present an efficient means of selecting break points. For instance, consider a convex function $f(x_1) = 5x_1^3$ (Fig. 5(a)) where $a_1 \le x_1 \le a_5$. Assume that three break points $a_2$, $a_3$, and $a_4$ within $a_1 \le x_1 \le a_5$ are to be selected. The error of piecewise linearizing $f(x_1)$ is computed as
Table 1
Comparison of Programs 1 and 2

Program 1 (conventional NSP method): extra 0-1 variables $y_{i,k}$, one per piecewise segment of every $f_i(x_i)$; extra continuous variables $t_{i,k}$, one per piecewise segment of every $f_i(x_i)$.
Program 2 (proposed NSP method): extra 0-1 variables $u_{i,k}$, one per concave piecewise segment only; extra continuous variables $d_{i,l}$, one per convex piecewise segment only, and $z_{i,k}$, one per concave piecewise segment only.
$\text{Error} = f(a_1) + s(x_1 - a_1) - f(x_1) = 125x_1 - 5x_1^3,$

where $s = 125$ is the slope of the chord over $[0, 5]$. Setting $\partial \text{Error}/\partial x_1 = 0$, i.e. $s - \partial f(x_1)/\partial x_1 = 125 - 15x_1^2 = 0$, the maximal error occurs at $x_1 = 2.89$. By doing so, we obtain the first break point $a_3 = 2.89$.

Similarly, finer break points $a_2$ and $a_4$ can be generated at the points of maximal error for $0 \le x_1 \le 2.89$ and $2.89 \le x_1 \le 5$, respectively, as depicted in Fig. 5(b). Therefore, the second break point is $a_2 = 1.67$ (for $0 \le x_1 \le 2.89$), where $s_a - \partial f(x_1)/\partial x_1 = 41.76 - 15x_1^2 = 0$, and the third break point is $a_4 = 3.99$ (for $2.89 \le x_1 \le 5$), where $s_b - \partial f(x_1)/\partial x_1 = 239 - 15x_1^2 = 0$.
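The rule above can be sketched in a few lines (an illustration, not the authors' code): for a convex $f$ on $[l, u]$ the chord slope is $s = (f(u) - f(l))/(u - l)$, and for $f(x_1) = 5x_1^3$ the maximal-error point solves $15x^2 = s$, i.e. $x = \sqrt{s/15}$:

```python
# Break-point selection for f(x1) = 5 x1^3 on [0, 5] (Section 3's max-error rule).
import math

def chord_slope(f, l, u):
    return (f(u) - f(l)) / (u - l)

f = lambda x: 5 * x**3
a3 = math.sqrt(chord_slope(f, 0, 5) / 15)    # first break point (~2.89)
a2 = math.sqrt(chord_slope(f, 0, a3) / 15)   # refine on [0, a3]   (~1.67)
a4 = math.sqrt(chord_slope(f, a3, 5) / 15)   # refine on [a3, 5]   (~3.99)
```

The computed values reproduce the $a_3 = 2.89$, $a_2 = 1.67$ and $a_4 = 3.99$ reported above.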
Similarly, consider a concave function $f(x_2) = 5x_2^{0.5} - x_2$ (Fig. 5(c)) where $a_1 \le x_2 \le a_3$. Assume we want to choose a break point $a_2$ within $a_1 \le x_2 \le a_3$. The maximal error of piecewise linearizing $f(x_2)$ is computed as

$\text{Error} = f(x_2) - \bigl( f(a_1) + s(x_2 - a_1) \bigr) = 5x_2^{0.5} - x_2 - 1.5x_2.$

Setting $\partial \text{Error}/\partial x_2 = 0$, the maximal error occurs at the $x_2$ where $\partial f(x_2)/\partial x_2 - s = 2.5x_2^{-0.5} - 1 - 1.5 = 0$. After calculating, the obtained break point is $a_2 = 1$.
Since treating continuous variables is computationally more efficient than treating zero-one variables, we recommend selecting three break points for linearizing a convex function and one break point for linearizing a concave function at each iteration.
4. Solution algorithm
The solution algorithm for solving Problem P1 is described in the following steps:

Step 1. Select initial break points.
(i) For each function $f_i(x_i)$ that is convex over the interval $\underline{x}_i \le x_i \le \bar{x}_i$, three break points within this interval are selected by the method described in Section 3.
(ii) For each function $f_i(x_i)$ that is concave over the interval $\underline{x}_i \le x_i \le \bar{x}_i$, one break point within this interval is selected by the method described in Section 3.

Step 2. Formulate piecewise functions. Proposition 1 can be used to approximately linearize each function $f_i(x_i)$, expressed as

$\hat{f}_i(x_i) = f_i(a_{i,1}) + s_{i,1}(x_i - a_{i,1}) + \sum_{k=2}^{m_i-1} \frac{s_{i,k} - s_{i,k-1}}{2} \bigl( |x_i - a_{i,k}| + x_i - a_{i,k} \bigr),$

where the $a_{i,k}$ are the break points selected in Step 1.

Step 3. Linearize the program. Use Proposition 3 to linearize the absolute terms where $s_{i,k} > s_{i,k-1}$, and Proposition 4 to linearize the absolute terms where $s_{i,k} < s_{i,k-1}$.

Step 4. Solve the program and assess the tolerable error. Solve the linear mixed integer program to obtain the solution $x^D = (x_1^D, x_2^D, \ldots, x_n^D)$. If $|f_i(x_i^D) - \hat{f}_i(x_i^D)| \le \epsilon$ for all $i$, where $\hat{f}_i(x_i)$ is the approximate linear function expressed in Step 2, then terminate the solution process; otherwise go to Step 5.

Step 5. Add finer break points. If $a_k \le x_i^D \le a_{k'}$, then add new break points within this interval and reiterate from Step 2.
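The refinement idea in Steps 4 and 5 can be sketched for a single function, with the mixed integer solve omitted (an illustration under that simplification, not the authors' code): measure the worst interpolation error on a fine grid and insert a break point there until the tolerance eps is met.

```python
# Iterative break-point refinement for one function f on [lo, hi]: Step 4's
# error test and Step 5's insertion of a finer break point at the worst error.
import numpy as np

def refine_breakpoints(f, lo, hi, eps, n0=5, max_iter=200):
    a = np.linspace(lo, hi, n0)                     # initial coarse grid
    for _ in range(max_iter):
        grid = np.linspace(lo, hi, 2001)
        err = np.abs(f(grid) - np.interp(grid, a, f(a)))
        if err.max() <= eps:                        # Step 4: tolerance met
            return a
        a = np.sort(np.append(a, grid[np.argmax(err)]))  # Step 5: refine
    return a

a = refine_breakpoints(lambda x: x**3 - 4*x**2 + 2*x, 0.0, 5.0, eps=0.01)
```

Break points accumulate where the curvature of $f$ is largest, mirroring the paper's recommendation to refine only around the current solution.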
5. Numerical examples
Example 4. Consider the following separable programming problem with a nonconvex objective function, in which one of the constraints is also nonconvex:

minimize $w = x_1^3 - 4x_1^2 + 2x_1 + x_2^3 - 4x_2^2 + 3x_2$
subject to $3x_1 + 2x_2 \le 11.75$; $2x_1 + 5x_2^{0.5} - x_2 \ge 9$; $0 \le x_1 \le 5$; $0 \le x_2 \le 4$.
Step 1. Select initial break points. On the basis of Section 3, one break point ($x_2 = 1$) is selected for the function $5x_2^{0.5} - x_2$ within $0 \le x_2 \le 4$, as depicted in Fig. 5(c). For the function $x_2^3 - 4x_2^2 + 3x_2$, one break point ($x_2 = 0.32$) is selected for the concave portion, in which $0 \le x_2 \le 1.5$, and three break points ($x_2 = 2.3$, 2.923 and 3.48) are selected for the convex portion, in which $1.5 \le x_2 \le 4$ (Fig. 6(b)).
Step 2. Formulate the piecewise functions. The original problem is expressed piecewise as

minimize $w = [\text{right-hand side of expression (3)}] + 1.8224x_2 - \frac{3.27}{2}(|x_2 - 0.32| + x_2 - 0.32) + \frac{0.2376}{2}(|x_2 - 1.5| + x_2 - 1.5) + \frac{3.873}{2}(|x_2 - 2.3| + x_2 - 2.3) + \frac{5.553}{2}(|x_2 - 2.923| + x_2 - 2.923) + \frac{6.894}{2}(|x_2 - 3.48| + x_2 - 3.48)$

subject to $3x_1 + 2x_2 \le 11.75$; $2x_1 + 4x_2 - \frac{3.3334}{2}(|x_2 - 1| + x_2 - 1) \ge 9$; $0 \le x_1 \le 5$; $0 \le x_2 \le 4$.
Step 3. Linearize the program. The above problem is converted into the following linear mixed 0-1 program:

minimize $w = 31.75x_1 + 2.5z_{11} + z_{12} - 1.25u_{11} - u_{12} + 35d_{11} + 34.5d_{12} + 32.5d_{13} + 29d_{14} + 24d_{15} + 17.5d_{16} + 9.5d_{17} + 15.11x_2 + 3.27z_{21} - 1.046u_{21} + 16.5576d_{21} + 16.32d_{22} + 12.447d_{23} - 6.894d_{24} - 172.19$

subject to $x_1 + d_{11} + d_{12} + d_{13} + d_{14} + d_{15} + d_{16} + d_{17} \ge 4.5$; $x_2 + d_{21} + d_{22} + d_{23} + d_{24} \ge 3.48$;
$x_1 + 5(u_{11} - 1) \le z_{11}$; $x_1 + 5(u_{12} - 1) \le z_{12}$; $x_2 + 4(u_{21} - 1) \le z_{21}$;
$3x_1 + 2x_2 \le 11.75$; $2x_1 + 0.666x_2 + 3.334z_{22} - 3.334u_{22} \ge 5.666$; $x_2 + 4(u_{22} - 1) \le z_{22}$;
$0 \le x_1 \le 5$; $0 \le x_2 \le 4$; $d_{1j} \le 0.5$ for $j = 1, 2, \ldots, 7$; $d_{21} \le 1.18$; $d_{22} \le 0.8$; $d_{23} \le 0.623$; $d_{24} \le 0.557$; $u_{11}, u_{12}, u_{21}, u_{22}$ are 0-1 variables.
Step 4. Solve the program and assess the tolerable error. Running LINDO [9], the optimal solution is $x_1 = 2.38333$, $x_2 = 2.3$, $w = -6.380064$, and the error of approximation is 0.129. Assume that the pre-specified tolerable error should be less than 0.01. Then go to Step 5.
Step 5. Add finer break points. To derive a solution closer to the global optimum and satisfy the pre-specified approximation error of at most 0.01, three break points (2.285, 2.386, 2.48) can be further added for the function $x_1^3 - 4x_1^2 + 2x_1$ within $2.18 \le x_1 \le 2.58$. In addition, three break points (2.2055, 2.3069, 2.4049) can be added for the function $x_2^3 - 4x_2^2 + 3x_2$ within $2.1 \le x_2 \le 2.5$. Similarly, one break point ($x_2 = 2.29564$) is added for the function $5x_2^{0.5} - x_2$ within $2.1 \le x_2 \le 2.5$.
The problem then becomes
minimize $w = -0.90507(x_1 - 2.18) + \frac{0.5873}{2}(|x_1 - 2.285| + x_1 - 2.285) + \frac{0.6145}{2}(|x_1 - 2.386| + x_1 - 2.386) + \frac{0.6685}{2}(|x_1 - 2.48| + x_1 - 2.48) - 0.131626(x_2 - 2.1) + \frac{0.5404}{2}(|x_2 - 2.2055| + x_2 - 2.2055) + \frac{0.5817}{2}(|x_2 - 2.3069| + x_2 - 2.3069) + \frac{0.6203}{2}(|x_2 - 2.4049| + x_2 - 2.4049)$

subject to $3x_1 + 2x_2 \le 11.75$; $2x_1 + 0.6868(x_2 - 2.1) - \frac{0.0719}{2}(|x_2 - 2.29564| + x_2 - 2.29564) \ge 9$; $2.18 \le x_1 \le 2.58$; $2.1 \le x_2 \le 2.5$.
The problem is linearized as follows:

minimize $w = 0.965204x_1 + 1.870274d_{11} + 1.282974d_{12} + 0.668524d_{13} + 1.42614x_2 + 1.7424d_{21} + 1.202d_{22} + 0.6203d_{23} - 5.4092$

subject to $x_1 + d_{11} + d_{12} + d_{13} \ge 2.48$; $x_2 + d_{21} + d_{22} + d_{23} \ge 2.4049$; $3x_1 + 2x_2 \le 11.75$; $2x_1 + 0.6149x_2 + 0.0719z - 0.16506u \ge 10.277223$;
$x_2 + 2.5(u - 1) \le z$, where $u$ is a zero-one variable;
$2.18 \le x_1 \le 2.58$; $2.1 \le x_2 \le 2.5$;
$d_{11} \le 0.105$; $d_{12} \le 0.101$; $d_{13} \le 0.094$;
$d_{21} \le 0.10548$; $d_{22} \le 0.10139$; $d_{23} \le 0.098$.
After running LINDO [9], the finer optimal values are $x_1 = 2.3875$ and $x_2 = 2.2155$, the objective function value is $-6.5291$, and the approximation error is $0.00029 < 0.01$. The solution process is terminated since the approximation error is less than the pre-specified tolerable error.
Example 5 (Taken from Klein et al. [6]). The amount of electric power that can be produced by a multi-unit hydro-electric generating station depends on the amount of water discharged through each unit. If the discharge is not properly allocated among the generating units, the potential power output may not be fully achieved. More expensive sources such as nuclear, coal or oil (which are environmentally less attractive) would have to replace any loss. Thus, an electric utility should maximize hydro-electric generation, which is the cheapest and cleanest source of energy. In addition, the quantity of electricity generated by each unit is a nonconvex function of discharge, since the efficiency characteristics may differ between units [6]. An illustrative example with two hydro-electric generating units, depicted in Fig. 7(a) and (b), respectively, is provided below:
maximize $f_1(x_1) + f_2(x_2)$
subject to $x_1 \le 241$; $x_2 \le 250$; $x_1 + x_2 = Q$; $x_1, x_2 \ge 0$;

where $Q$ takes varying values of the total discharge.
On the basis of Proposition 1, $f_1(x_1)$ and $f_2(x_2)$ can be expressed as follows:

$f_1(x_1) = 0.23256(x_1 - 11) + 0.00872(|x_1 - 54| + x_1 - 54) - 0.04924(|x_1 - 142| + x_1 - 142),$
$f_2(x_2) = 0.22727(x_2 - 11) + 0.040475(|x_2 - 55| + x_2 - 55) - 0.041865(|x_2 - 201| + x_2 - 201).$

Based on Propositions 3 and 4, the problem can be reformulated as follows:

minimize $-f_1(x_1) - f_2(x_2) = -0.15152x_1 + 0.01744z_1 - 0.94176u_1 + 0.09848d_1 - 0.22449x_2 + 0.08095z_2 - 4.45225u_2 + 0.08373d_2 - 20.36175$

subject to $x_1 + d_1 \ge 142$; $d_1 \le 88$; $x_1 + 241(u_1 - 1) \le z_1$;
$x_2 + d_2 \ge 201$; $d_2 \le 146$; $x_2 + 250(u_2 - 1) \le z_2$;
$x_1 + x_2 = Q$; $u_1, u_2$ are 0-1 variables; $x_1, x_2 \ge 0$.
By letting $Q = 450$, 400, 350, 300 and 250, the computed optimal discharge allocations $(x_1, x_2)$ are (200, 250), (150, 250), (142, 208), (142, 158), and (142, 108), respectively. The obtained solutions are the same as those found in Klein et al. [6].
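For $Q = 450$, the reformulated program can be checked with scipy's MILP solver (a sketch; the variable ordering below is an arbitrary choice of this illustration, and the constant $-20.36175$ is dropped from the objective since it does not affect the argmin):

```python
# Example 5 for Q = 450; variable order (x1, x2, z1, z2, u1, u2, d1, d2).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

Q = 450
c = np.array([-0.15152, -0.22449, 0.01744, 0.08095,
              -0.94176, -4.45225, 0.09848, 0.08373])   # minimize -f1 - f2
A = np.array([
    [1, 1,  0,  0,   0,   0, 0, 0],    # x1 + x2 = Q
    [1, 0,  0,  0,   0,   0, 1, 0],    # x1 + d1 >= 142
    [0, 1,  0,  0,   0,   0, 0, 1],    # x2 + d2 >= 201
    [1, 0, -1,  0, 241,   0, 0, 0],    # x1 + 241(u1 - 1) <= z1
    [0, 1,  0, -1,   0, 250, 0, 0],    # x2 + 250(u2 - 1) <= z2
], dtype=float)
lb = [Q, 142, 201, -np.inf, -np.inf]
ub = [Q, np.inf, np.inf, 241, 250]
res = milp(c=c, constraints=LinearConstraint(A, lb, ub),
           integrality=np.array([0, 0, 0, 0, 1, 1, 0, 0]),
           bounds=Bounds([0]*8, [241, 250, np.inf, np.inf, 1, 1, 88, 146]))
x1, x2 = res.x[:2]
```

The solver returns $(x_1, x_2) = (200, 250)$, matching the first allocation reported above.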
Example 6 (Modified from Hillier et al. [5]). A farmer raises pigs for market and wishes to determine the quantities of the available types of feed that should be given to each pig to fulfill certain nutritional requirements at minimum cost. Table 2 provides the number of units of each type of basic nutritional
Fig. 7. (a) A hydro-electric generating function f1(x1). (b) A hydro-electric generating function f2(x2).
Table 2
Required nutritional ingredient
Nutritional ingredient Kilogram of corn Kilogram of tankage Kilogram of alfalfa Minimum daily requirement
Carbohydrates 90 20 40 2000
Protein 30 80 60 1800
Vitamins 10 20 60 1500
ingredient contained within a kilogram of each feed type, along with the daily nutritional requirements and feed costs.
By considering factors such as holding cost, order cost, and quantity discounts, the cost functions $f(x_1)$, $f(x_2)$, and $f(x_3)$ naturally become nonconvex [1,2,4,12], as depicted in Fig. 8(a)-(c), respectively.
Based on Proposition 1, the cost functions are formulated as follows:

$f(x_1) = 40x_1 + 5(|x_1 - 10| + x_1 - 10) - 5(|x_1 - 12| + x_1 - 12),$
$f(x_2) = 20x_2 - 5(|x_2 - 10| + x_2 - 10) + 5(|x_2 - 12| + x_2 - 12),$
$f(x_3) = 30x_3 + 10(|x_3 - 10| + x_3 - 10) - 10(|x_3 - 20| + x_3 - 20).$
On the basis of Propositions 3 and 4, $f(x_1)$, $f(x_2)$ and $f(x_3)$ can be linearized as follows:

$f(x_1) = 40x_1 + 10z_1 - 120u_1 + 10d_1 + 20,$
where $x_1 + d_1 \ge 10$, $d_1 \le 10$, $x_1 + 17(u_1 - 1) \le z_1$, and $u_1$ is a 0-1 variable;

$f(x_2) = 20x_2 + 10z_2 - 100u_2 + 10d_2 - 20,$
where $x_2 + d_2 \ge 12$, $d_2 \le 2$, $x_2 + 17(u_2 - 1) \le z_2$, and $u_2$ is a 0-1 variable; and

$f(x_3) = 30x_3 + 20z_3 - 400u_3 + 20d_3 + 200,$
where $x_3 + d_3 \ge 10$, $d_3 \le 10$, $x_3 + 25(u_3 - 1) \le z_3$, and $u_3$ is a 0-1 variable.

Therefore, the problem is formulated as follows:

minimize $f(x_1) + f(x_2) + f(x_3)$
subject to $x_1 + d_1 \ge 10$; $d_1 \le 10$; $x_1 + 17(u_1 - 1) \le z_1$;
$x_2 + d_2 \ge 12$; $d_2 \le 2$; $x_2 + 17(u_2 - 1) \le z_2$;
$x_3 + d_3 \ge 10$; $d_3 \le 10$; $x_3 + 25(u_3 - 1) \le z_3$;
$90x_1 + 20x_2 + 40x_3 \ge 2000$; $30x_1 + 80x_2 + 60x_3 \ge 1800$; $10x_1 + 20x_2 + 60x_3 \ge 1500$;
$x_1, x_2, x_3 \ge 0$; $u_1$, $u_2$ and $u_3$ are 0-1 variables.
After running LINDO [9], the optimal values are $x_1 = 11.04$, $x_2 = 12$, and $x_3 = 19.16$.
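The feed-mix program can also be checked with scipy's MILP solver (a sketch; the paper solves it with LINDO, and the objective constants $20 - 20 + 200 = 200$ are restored after solving). The upper bounds $\bar{x} = (17, 17, 25)$ implied by the linearization are imposed as variable bounds:

```python
# Example 6; variable order (x1, x2, x3, z1, z2, z3, u1, u2, u3, d1, d2, d3).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([40, 20, 30, 10, 10, 20, -120, -100, -400, 10, 10, 20], dtype=float)
rows = np.array([
    [ 1,  0,  0,  0,  0,  0,  0,  0,  0, 1, 0, 0],   # x1 + d1 >= 10
    [ 0,  1,  0,  0,  0,  0,  0,  0,  0, 0, 1, 0],   # x2 + d2 >= 12
    [ 0,  0,  1,  0,  0,  0,  0,  0,  0, 0, 0, 1],   # x3 + d3 >= 10
    [ 1,  0,  0, -1,  0,  0, 17,  0,  0, 0, 0, 0],   # x1 + 17(u1 - 1) <= z1
    [ 0,  1,  0,  0, -1,  0,  0, 17,  0, 0, 0, 0],   # x2 + 17(u2 - 1) <= z2
    [ 0,  0,  1,  0,  0, -1,  0,  0, 25, 0, 0, 0],   # x3 + 25(u3 - 1) <= z3
    [90, 20, 40,  0,  0,  0,  0,  0,  0, 0, 0, 0],   # carbohydrates >= 2000
    [30, 80, 60,  0,  0,  0,  0,  0,  0, 0, 0, 0],   # protein       >= 1800
    [10, 20, 60,  0,  0,  0,  0,  0,  0, 0, 0, 0],   # vitamins      >= 1500
], dtype=float)
lb = [10, 12, 10, -np.inf, -np.inf, -np.inf, 2000, 1800, 1500]
ub = [np.inf]*3 + [17, 17, 25] + [np.inf]*3
res = milp(c=c, constraints=LinearConstraint(rows, lb, ub),
           integrality=np.array([0]*6 + [1]*3 + [0]*3),
           bounds=Bounds([0]*12,
                         [17, 17, 25] + [np.inf]*3 + [1]*3 + [10, 2, 10]))
cost = res.fun + 200
```

The paper's solution $(11.04, 12, 19.16)$ is feasible here with total cost 1430, so the solver's optimum should be at most that value.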
6. Concluding remark
This paper treats nonconvex separable programming problems in which both the objective function and the constraints may be nonconvex. Comparing the proposed method with conventional NSP methods reveals that the former can derive the approximately global optimum of an NSP problem using fewer zero-one variables. The quality of the derived solution can be improved by adequately adding break points within the searching intervals.
References
[1] R.C. Baker, T.L. Urban, A deterministic inventory system with an inventory-level-dependent demand rate, Journal of the Operational Research Society 39 (9) (1988) 823-831.
[2] M.A. Bakir, M.D. Byrne, An application of the multi-stage Monte Carlo optimization algorithm to aggregate production planning, International Journal of Production Economics 35 (1-3) (1994) 207-213.
[3] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming Theory and Algorithms, second edition, Wiley, New York, 1993.
[4] F. Chen, Y.S. Zheng, Inventory models with general backorder costs, European Journal of Operational Research 65 (2) (1993) 175-186.
[5] F.S. Hillier, G.J. Lieberman, Introduction to Operations Research, sixth edition, McGraw-Hill, New York, 1995.
[6] E.M. Klein, S.H. Sim, Discharge allocation for hydro-electric generating stations, European Journal of Operational Research 73 (1994) 132-138.
[7] H.L. Li, An efficient method for solving linear goal programming problems, Journal of Optimization Theory and Applications 90 (2) (1996) 465-469.
[8] H.L. Li, C. Chang, An approximate approach of global optimization for polynomial programming problems, European Journal of Operational Research 107 (1998) 625-632.
[9] LINDO System Inc., Lindo Release 6.0 ± User's Guide, USA, 1997.
[10] R.R. Meyer, Two-segment separable programming, Management Science 25 (4) (1979) 385-395.
[11] H.A. Taha, Operations Research, fifth edition, Macmillan, New York, 1992.
[12] T.L. Urban, Deterministic inventory models incorporating marketing decisions, Computers and Industrial Engineering 22 (1) (1992) 85-93.