

Chapter 3 Extension 1: Solving Generalized Geometric Programming Problems

3.1 Introduction to Generalized Geometric Programming

Generalized Geometric Programming (GGP) methods have been applied to solve problems in various fields, such as heat exchanger network design (Duffin and Peterson 1966), capital investment (Hellinckx and Rijckaert 1971), optimal design of cooling towers (Ecker and Wiebking 1978), batch plant modeling (Salomone and Iribarren 1992), competence set expansion (Li 1999), smoothing splines (Cheng et al. 2005), and digital circuit design (Boyd 2005).

These GGP problems often contain continuous and discrete functions, where the discrete variables may represent the sizes of components, thicknesses of steel plates, diameters of pipes, lengths of springs, elements in a competence set, etc. Many local optimization algorithms for solving GGP problems have been developed, including a linearization method (Duffin 1970), separable programming (Kochenberger et al. 1973), and a concave simplex algorithm (Beck and Ecker 1975). Pardalos and Romeijn (2002) provided an impressive overview of these GGP algorithms. Among existing GGP algorithms, the techniques developed by Maranas and Floudas (1997), Floudas et al. (1999), and Floudas (2000) (these three methods are called Floudas's methods in this study) are the most popular approaches for solving GGP problems. Floudas's methods, however, cannot be applied to treat non-positive variables and discrete functions. Recently, Li and Tsai (2005) developed another method that modifies Floudas's methods to treat continuous variables containing zero values; however, Li and Tsai's method is incapable of effectively handling mixed-integer variables. This study proposes a novel method to globally solve a GGP program with mixed-integer free-sign variables. A free-sign variable is one which can be positive, negative, or zero.

The GGP program discussed in this study is expressed below:

GGP

(C8) αp,i, αw,q,i, cp, hw,q, and lw are constants,

(C9) αp,i and αw,q,i are integers if the lower bounds of xi are negative.

Floudas's method can solve a specific GGP problem containing continuous functions with positive variables, as illustrated below:

GGP1 (with continuous functions and positive variables)

Min f0(t)
s.t. fw(t) ≤ lw, w = 1,...,s,

where each function is separated into the difference of two posynomials by grouping together monomials with identical sign. GGP1 is rewritten by Floudas's method as follows:

Min f0+(t) − f0−(t)
s.t. fw+(t) − fw−(t) ≤ lw, w = 1,...,s.

GGP1 is a signomial geometric program containing posynomial functions f0+(t), fw+(t) and signomial functions −f0−(t), −fw−(t), where t = (t1,...,tn) is a positive variable vector. A posynomial term is a monomial with a positive coefficient, while a signomial term is a monomial with a negative coefficient (Bazaraa et al. 1993; Floudas 2000). Suppose all signomial terms in GGP1 are removed; we then have the posynomial geometric program below:

Min f0+(t)
s.t. fw+(t) ≤ lw, w = 1,...,s.
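The split of a signomial into its posynomial parts f+ and f− can be sketched in a few lines; the example signomial below is hypothetical and only illustrates the grouping of monomials by sign:

```python
# Split a signomial f(t) = sum of monomials c * t1^a1 * t2^a2 into its
# posynomial part f+ (positive coefficients) and f- (negative coefficients),
# so that f = f+ - f-.  Dropping the signomial terms leaves the posynomial f+.

def monomial(c, alphas):
    """Return a callable c * prod(t_i^alpha_i) for positive t."""
    def m(t):
        v = c
        for ti, ai in zip(t, alphas):
            v *= ti ** ai
        return v
    return m

# Hypothetical example signomial: f(t) = 2*t1^0.5*t2 - 3*t1*t2^-1 + t2^2
terms = [(2.0, (0.5, 1.0)), (-3.0, (1.0, -1.0)), (1.0, (0.0, 2.0))]

f_plus  = [monomial(c, a) for c, a in terms if c > 0]    # posynomial f+
f_minus = [monomial(-c, a) for c, a in terms if c < 0]   # posynomial f-

def f(t):     return sum(m(t) for m in f_plus) - sum(m(t) for m in f_minus)
def f_pos(t): return sum(m(t) for m in f_plus)

t = (4.0, 2.0)
print(f(t), f_pos(t))  # f = f+ - f-; f_pos is the posynomial relaxation value
```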

Posynomial geometric programs laid the foundation for the theory of generalized geometric programming. Duffin and Peterson (1966) pioneered the initial work on posynomial geometric programs, deriving the dual based on the arithmetic-geometric inequality.

This dual involves the maximization of a separable concave function subject to linear constraints. Unlike posynomial geometric programs, signomial geometric programs remain nonconvex and are much more difficult to solve. Floudas's method applies an exponential variable transformation to the initial signomial geometric program to reduce it to a decomposition program. A convex relaxation is then obtained based on linear lower bounding of the concave parts. Floudas's method can reach ε-convergence to the global minimum by successively refining the convex relaxations of a series of nonlinear convex optimization problems. The ε-convergence to the global minimum (ε-global minimum), as defined in Floudas (2000, page 58), is stated below: suppose that x* is a feasible solution, ε ≥ 0 is a small prescribed tolerance, and f(x) ≥ f(x*) − ε for all feasible x; then x* is an ε-global minimum. However, the usefulness of Floudas's method is limited by the requirement that each xi must be strictly positive. This restriction prohibits many applications where xi can take zero or negative values (such as temperature, growth rate, etc.).

In order to overcome this difficulty of Floudas's methods, Li and Tsai (2005) recently proposed another method for solving GGP1 where xi may be negative or zero. Li and Tsai first transform the product terms ∏_{i=1}^{n} xi^{αp,i} and ∏_{i=1}^{n} xi^{αw,q,i} into convex and concave functions, then approximate the concave functions by piecewise linearization techniques. Li and Tsai's method can also reach finite ε-convergence to the global minimum. Both Floudas's methods and Li and Tsai's method use the exponential-based decomposition technique to decompose the objective function and constraints into convex and concave functions. Decomposition programs have good properties for finding a global optimum (Horst and Tuy, 1996). A convex relaxation of the decomposition can be computed conveniently based on the linear lower bounds of the concave parts of the objective function and constraints.
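The exponential variable transformation underlying these decompositions, ti = e^{zi}, turns each posynomial term c·∏ ti^{αi} (c > 0) into c·exp(∑ αi zi), which is convex in z. A numeric midpoint-convexity spot-check with a hypothetical term:

```python
import math
import random

# Under t_i = exp(z_i), the monomial c * t1^a1 * t2^a2 (c > 0) becomes
# c * exp(a1*z1 + a2*z2), a convex function of z; the concave parts
# -c * exp(...) are what the decomposition then underestimates linearly.

c, a = 2.0, (0.5, 1.0)  # hypothetical monomial 2 * t1^0.5 * t2

def term_in_z(z):
    return c * math.exp(sum(ai * zi for ai, zi in zip(a, z)))

# midpoint-convexity check on randomly sampled pairs of points
random.seed(0)
for _ in range(100):
    z1 = [random.uniform(-2, 2) for _ in range(2)]
    z2 = [random.uniform(-2, 2) for _ in range(2)]
    mid = [(u + v) / 2 for u, v in zip(z1, z2)]
    assert term_in_z(mid) <= 0.5 * (term_in_z(z1) + term_in_z(z2)) + 1e-12
print("midpoint convexity holds on sampled points")
```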

Two difficulties of directly applying Floudas's methods or Li and Tsai's method to globally solve a GGP problem are discussed below:

(i) The major difficulty is that r − 1 binary variables are used in expressing a discrete variable with r values. For instance, to linearize a signomial term G(y) = g1(y1)·g2(y2) = y1^{α1}·y2^{α2}, where yj ∈ {dj,1, dj,2, ..., dj,rj}, by taking the logarithm of G(y) one obtains

ln G(y) = α1 ln y1 + α2 ln y2 = ∑_{j=1}^{2} αj [ln dj,1 + ∑_{k=2}^{rj} uj,k (ln dj,k − ln dj,k−1)],

where ∑_{k=2}^{rj} uj,k ≤ 1, uj,k ∈ {0,1}.

It requires r1 + r2 − 2 binary variables to linearly decompose G(y). If the numbers rj are large then this causes a heavy computational burden.

(ii) Another difficulty is the treatment of taking logarithms of yj. If yj may take on a negative or zero value then we cannot take the logarithm directly.
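The gap between the r − 1 encoding in difficulty (i) and the ⌈log2 r⌉ encoding used later can be quantified directly; a minimal sketch (the choice of two variables with 32 values each is only an illustration):

```python
import math

def binaries_conventional(r_list):
    """r_j - 1 binary variables per discrete variable (difficulty (i))."""
    return sum(r - 1 for r in r_list)

def binaries_logarithmic(r_list):
    """ceil(log2 r_j) binary variables per discrete variable."""
    return sum(math.ceil(math.log2(r)) for r in r_list)

# e.g. two discrete variables with r1 = r2 = 32 values each
print(binaries_conventional([32, 32]))  # 62 binaries
print(binaries_logarithmic([32, 32]))   # 10 binaries
```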

This study proposes a novel method to globally solve GGP programs. The advantages of the proposed method are listed below:

(i) Fewer binary variables and constraints are used in solving a GGP program. Only ∑_{j=1}^{m} ⌈log2 rj⌉ binary variables are required to linearize gp,j(yj) and gw,q,j(yj) in GGP.

(ii) It is capable of treating non-positive variables in the discrete and continuous functions in GGP.

This study is organized as follows. Section 3.2 develops the first approach of treating discrete functions in GGP. Section 3.3 describes another approach of treating discrete functions. Section 3.4 proposes a method for handling continuous functions. Numerical examples are analyzed in Section 3.5.

3.2 The Proposed Linear Approximation Method

Based on THEOREM 2.1, we develop two approaches to express a discrete variable y with r values. The first approach, as described in this section, uses ⌈log2 r⌉ binary variables and 2r extra constraints to express y. The second approach, as described in the next section, uses ⌈log2 r⌉ binary variables but only 3 + 4⌈log2 r⌉ constraints to express y. The first approach is good at treating product terms f(x)·∏_{j=1}^{m} gj(yj), while the second approach is more effective in treating additive terms ∑_j gj(yj).

REMARK 3.1 A discrete free-sign variable y, y ∈ {d1, d2, ..., dr}, can be expressed as:

dk − M·Ak ≤ y ≤ dk + M·Ak, k = 1,...,r, (3.2)

where
(i) k is the same as θ in (2.5) and Ak is the same as Aθ(θ′) in (2.7).
(ii) M is a big enough positive value, M = max{1, d1, ..., dr} − min{0, d1, ..., dr}. (3.3)

PROOF (i) If Ak = 0 then y = dk.

(ii) If Ak ≥ 1 then dk − M·Ak ≤ min{0, d1,...,dr} ≤ y ≤ max{0, d1,...,dr} ≤ dk + M·Ak.

Therefore (3.2) is still correct.
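The bound (3.3) and the switching behavior of (3.2) can be checked numerically. A minimal sketch, assuming (since the Chapter 2 construction of Ak is not reproduced here) only that Ak = 0 for the selected index and Ak ≥ 1 otherwise:

```python
# Sketch of REMARK 3.1: with M from (3.3), the pair of inequalities
# d_k - M*A_k <= y <= d_k + M*A_k pins y to d_k exactly when A_k = 0,
# and becomes redundant whenever A_k >= 1.

d = [-1, 0, 1, 4, 5, 6, 7.5, 8, 9, 10]
M = max([1] + d) - min([0] + d)        # (3.3): M = 10 - (-1) = 11 here

def bounds(k, A_k):
    return d[k] - M * A_k, d[k] + M * A_k

chosen = 3                              # suppose y = d[3] = 4 is selected
y = d[chosen]
for k in range(len(d)):
    A_k = 0 if k == chosen else 1       # assumed pattern of A_k values
    lo, hi = bounds(k, A_k)
    assert lo <= y <= hi                # every inequality pair is satisfied

lo, hi = bounds(chosen, 0)
assert lo == y == hi                    # the active pair forces y = d_k
print("M =", M)
```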

Take Table 2.1 for instance: for y ∈ {−1, 0, 1, 4, 5, 6, 7.5, 8, 9, 10}, y can be expressed by the linear inequalities below:

−1 − M(u1′ + u2′ + u3′ + u4′) ≤ y ≤ −1 + M(u1′ + u2′ + u3′ + u4′),
0 − M(1 + u1′ + u2′ + u3′ − u4′) ≤ y ≤ 0 + M(1 + u1′ + u2′ + u3′ − u4′),
⋮
10 − M(2 − u1′ + u2′ + u3′ − u4′) ≤ y ≤ 10 + M(2 − u1′ + u2′ + u3′ − u4′),

where M = 10 + 1 = 11.

In this case r = 10 and there are 2 × 10 = 20 constraints used to express y. Depending on the final assignments of u1′, u2′, u3′, and u4′, only two of the 20 inequalities become equalities, which indicates the discrete choice made for y, while the remaining 18 inequalities become redundant constraints.

REMARK 3.2 Expression (3.2) uses ⌈log2 r⌉ binary variables and 2r constraints to express a discrete variable with r values.

PROPOSITION 3.1 Given a function g(y) where y ∈ {d1, d2, ..., dr} and the dk are discrete free-sign values, g(y) can be expressed by the following linear inequalities:

g(dk) − M·Ak ≤ g(y) ≤ g(dk) + M·Ak, k = 1,...,r,

where y, Ak, and k are specified in REMARK 3.1, and M = max{1, g(d1), ..., g(dr)} − min{0, g(d1), ..., g(dr)}. (3.4)
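The bound (3.4) is computed from the images g(d1),...,g(dr) rather than from the dk themselves; a minimal sketch with a hypothetical function g(y) = y² − 3y over the same discrete set as above:

```python
# Sketch of PROPOSITION 3.1: M in (3.4) comes from the function values g(d_k).

def g(y):
    return y * y - 3 * y            # hypothetical function of the discrete variable

d = [-1, 0, 1, 4, 5, 6, 7.5, 8, 9, 10]
gd = [g(dk) for dk in d]
M = max([1] + gd) - min([0] + gd)   # (3.4): here M = 70 - (-2) = 72

# Redundancy check: with A_k = 1 each pair g(d_k) - M <= g(y) <= g(d_k) + M
# holds for every attainable value of g(y); with A_k = 0 it forces g(y) = g(d_k).
for gk in gd:
    assert all(gk - M <= gy <= gk + M for gy in gd)
print("M =", M)
```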

PROPOSITION 3.2 A product term z = f(x)·g(y), where f(x) = c·∏_{i=1}^{n} xi^{αi} as specified in GGP, xiL ≤ xi ≤ xiU, c is a free-sign constant, and y ∈ {d1,...,dr}, can be expressed by the following inequalities:

f(x)·g(dk) − M′·Ak ≤ z ≤ f(x)·g(dk) + M′·Ak, k = 1,...,r,

where
(i) y, Ak, and k are specified in PROPOSITION 3.1.
(ii) M′ = f̄·M, where f̄ = max{1, |c|·∏_{i=1}^{n} x̄iU} and x̄iU = max{|xi^{αi}| : xiL ≤ xi ≤ xiU}. (3.5)

PROOF It is clear that M′ ≥ M·|f(x)| for xiL ≤ xi ≤ xiU. Since Ak ≥ 0, −M′·Ak ≤ −M·|f(x)|·Ak and M·|f(x)|·Ak ≤ M′·Ak. Two cases are discussed.

(i) Case 1, for f(x) ≥ 0. From PROPOSITION 3.1 we have

f(x)·g(dk) − M·f(x)·Ak ≤ f(x)·g(y) ≤ f(x)·g(dk) + M·f(x)·Ak.

We then have

f(x)·g(dk) − M′·Ak ≤ f(x)·g(y) ≤ f(x)·g(dk) + M′·Ak.

(ii) Case 2, for f(x) < 0. Multiplying the inequalities in PROPOSITION 3.1 by f(x) reverses their directions:

f(x)·g(dk) − M·f(x)·Ak ≥ f(x)·g(y) ≥ f(x)·g(dk) + M·f(x)·Ak.

Similar to Case 1, it is clear that

f(x)·g(dk) − M′·Ak ≤ f(x)·g(y) ≤ f(x)·g(dk) + M′·Ak.

The proposition is then proven.
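The role of M′ in (3.5) is to dominate M·|f(x)| over the whole variable box, so the inequalities stay valid whichever sign f(x) takes. A numeric spot-check with a hypothetical free-sign monomial (the value M = 5 is an assumed input from PROPOSITION 3.1):

```python
import itertools

# Sketch of (3.5): M' = M * max{1, |c| * prod_i max|x_i^alpha_i|}
# bounds M * |f(x)| over the whole box.

c, alphas = -2.0, (2, 1)                 # hypothetical f(x) = -2 * x1^2 * x2
box = [(-1.5, 2.0), (-3.0, 1.0)]         # x_i^L <= x_i <= x_i^U (free sign)

def f(x):
    v = c
    for xi, ai in zip(x, alphas):
        v *= xi ** ai
    return v

# for integer alpha_i, max |x_i^alpha_i| over an interval is attained at an endpoint
xbar = [max(abs(lo ** a), abs(hi ** a)) for (lo, hi), a in zip(box, alphas)]
M = 5.0                                   # big-M from PROPOSITION 3.1 (assumed given)
M_prime = M * max(1.0, abs(c) * xbar[0] * xbar[1])

# sample the box on a grid and confirm M' >= M * |f(x)| everywhere
grids = [[lo + t * (hi - lo) / 10 for t in range(11)] for lo, hi in box]
for x in itertools.product(*grids):
    assert M_prime >= M * abs(f(x))
print("M' =", M_prime)
```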

REMARK 3.3 For a constraint z = f(x)·g(y) ≤ a, where a is a constant and f(x) and g(y) are the same as in PROPOSITION 3.2, the constraint z ≤ a can be expressed by the following inequalities:

f(x)·g(dk) − M′·Ak ≤ a, k = 1,...,r,

where M′ is the same as in PROPOSITION 3.2. There are r constraints used to describe z ≤ a; only one of the constraints is activated (i.e., the one with Ak = 0), which indicates the discrete choice made by y, while the remaining r − 1 inequalities become redundant constraints.

We then deduce the main result below:

THEOREM 3.1 Denote z1 = f(x)·g1(y1) and zσ = zσ−1·gσ(yσ) = f(x)·∏_{j=1}^{σ} gj(yj) for σ = 2,...,m, where f(x) = c·∏_{i=1}^{n} xi^{αi}, xiL ≤ xi ≤ xiU, c is a free-sign constant, and the yj are discrete free-sign variables, yj ∈ {dj,1, dj,2, ..., dj,rj}. The terms z1, z2, ..., zm can be expressed by the following inequalities:

(C1) f(x)·g1(d1,k) − M1′·A1,k ≤ z1 ≤ f(x)·g1(d1,k) + M1′·A1,k, k = 1,...,r1,
(C2) z1·g2(d2,k) − M2′·A2,k ≤ z2 ≤ z1·g2(d2,k) + M2′·A2,k, k = 1,...,r2,
⋮
(Cm) zm−1·gm(dm,k) − Mm′·Am,k ≤ zm ≤ zm−1·gm(dm,k) + Mm′·Am,k, k = 1,...,rm.
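The cascade in THEOREM 3.1 replaces one m-fold product with m bilinear links z1, z2, ..., zm. A minimal numeric sketch with hypothetical f and gj, confirming that the last link zm reproduces the full product f(x)·g1(y1)···gm(ym):

```python
# Sketch of THEOREM 3.1's cascade: z1 = f(x)*g1(y1),
# z_sigma = z_{sigma-1}*g_sigma(y_sigma) for sigma = 2,...,m.
# All functions below are hypothetical stand-ins.

def f(x):
    return 3.0 * x[0] ** 2                                  # f(x) = 3*x1^2

g = [lambda y: y + 1, lambda y: 2 * y, lambda y: y * y]     # g1, g2, g3

x = (2.0,)
y = (1.0, 4.0, -3.0)      # chosen discrete values d_{j,k}

z = f(x) * g[0](y[0])     # z1, bounded by (C1) when A_{1,k} = 0
for j in range(1, 3):
    z = z * g[j](y[j])    # z_sigma = z_{sigma-1} * g_sigma(y_sigma): (C2)...(Cm)

full = f(x)
for gj, yj in zip(g, y):
    full *= gj(yj)
assert z == full           # the cascade reproduces the m-fold product
print(z)
```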

REMARK 3.4 A constraint containing the sum of product terms ∑∏ gj(yj) can be expressed by the following inequalities: the first set of inequalities, where M1′ = 32 and the A1,k are specified as before; and the second set of inequalities, for q = 1,...,32, where M2′ = 32 × 32 = 1024.

The original constraint then becomes z3 + z4 ≤ a, where there are 10 additional binary variables and two continuous variables. Utilizing THEOREM 3.1 to express this constraint in linear form requires 192 additive constraints (where 64 come from (3.8) and 64 come from (3.9)). Another approach, developed based on THEOREM 3.1, can be used to reduce the number of constraints from 192 to 64 + 2(3 + 4⌈log2 32⌉) = 110.
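The constraint-count arithmetic behind the reduction to 110 is easy to reproduce; a minimal sketch (the counts 64 and r = 32 are taken from the example above):

```python
import math

# The first approach uses 2*r constraints per discrete expression,
# the second only 3 + 4*ceil(log2 r); with r = 32 the reduced total is
# 64 + 2*(3 + 4*5) = 110 constraints.
r = 32
second_approach = 3 + 4 * math.ceil(math.log2(r))   # = 23 per expression
reduced_total = 64 + 2 * second_approach            # = 110
print(second_approach, reduced_total)
```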
