
DOI 10.1007/s10898-009-9414-2

Convex relaxation for solving posynomial programs

Hao-Chun Lu · Han-Lin Li · Chrysanthos E. Gounaris · Christodoulos A. Floudas

Received: 2 February 2008 / Accepted: 3 March 2009 / Published online: 21 March 2009
© Springer Science+Business Media, LLC. 2009

H.-C. Lu (B)
Department of Information Management, College of Management, Fu Jen Catholic University, No. 510, Jhongjheng Rd., Sinjhuang, Taipei 242, Taiwan
e-mail: haoclu@gmail.com

H.-L. Li
Institute of Information Management, National Chiao Tung University, Management Building 2, 1001 Ta-Hsueh Road, Hsinchu 300, Taiwan

C. E. Gounaris · C. A. Floudas
Department of Chemical Engineering, Princeton University, Princeton, NJ, USA

Abstract Convex underestimation techniques for nonlinear functions are an essential part of global optimization. These techniques usually involve the addition of new variables and constraints. In the case of posynomial functions $x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$, logarithmic transformations (Maranas and Floudas, Comput. Chem. Eng. 21:351–370, 1997) are typically used. This study develops an effective method for finding a tight relaxation of a posynomial function by introducing variables $y_j$ and positive parameters $\beta_j$, for all $\alpha_j > 0$, such that $y_j = x_j^{-\beta_j}$. By specifying $\beta_j$ carefully, we can find a tighter underestimation than the current methods.

Keywords Convex underestimation · Posynomial functions

1 Introduction

Convex underestimation techniques are frequently applied in global optimization algorithms. A good convex underestimator should be as tight as possible and should require a minimal number of additional variables and constraints. Floudas and co-workers [8,9] present various convex underestimating techniques. Ryoo and Sahinidis [21] studied the use of arithmetic intervals, recursive arithmetic intervals, logarithmic transformations, and exponential transformations for multilinear functions. Liberti and Pantelides [14] proposed a nonlinear continuous and differentiable convex envelope for monomials of odd degree. Pörn et al. [20] presented different convexification strategies for nonconvex optimization problems. Björk et al. [4] studied convexifications for signomial terms, introduced properties of power convex functions, and studied quasi-convex convexifications.

Tardella [22] studied the existence of polyhedral convex envelopes. Meyer and Floudas [19] described the structure of the polyhedral convex envelopes of edge-concave functions over polyhedral domains using geometric arguments and showed the improvements over the classical αBB convex underestimators for box-constrained optimization problems. Caratzoulas and Floudas [5] developed convex underestimators for trigonometric functions. Akrotirianakis and Floudas [1,2] introduced a new class of convex underestimators for twice continuously differentiable nonlinear programs, studied their theoretical properties, and proved that the resulting convex relaxation is improved compared to the αBB one. More recently, Gounaris and Floudas [10,11] presented a piecewise application of the αBB method that produces tighter convex underestimators than the original variants.

Three popular convex underestimation methods are arithmetic intervals (AI) [12], recursive arithmetic intervals (rAI) [12,15,21], and explicit facets (EF) for convex envelopes of trilinear monomials [17,18]. However, these current methods have difficulty in treating a posynomial function. Since the number of linear constraints of convex envelopes for a multilinear function with $n$ variables grows doubly exponentially in $n$, it is difficult for AI to treat a posynomial function when $n > 3$. Moreover, applying the rAI scheme to underestimate a multilinear function $x_1 x_2 \cdots x_n$ requires exponentially many ($2^{n-1}$) linear inequalities. Therefore, the rAI bounding scheme also has difficulty in treating posynomial functions.

EF [17,18] provide the explicit facets of the convex and concave envelopes of trilinear monomials. The explicit facets of the convex envelope are effective in treating general trilinear monomials, but the derivation of explicit facets for the convex envelope of general multilinear monomials and signomials remains an open problem. Li et al. [13] proposed a new method for the convex relaxation of posynomial functions $x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$ via the reciprocal transformation and linear underestimation of the concave terms.

This study proposes a novel method of convex relaxation for posynomial functions $f(X) = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$. For $\alpha_j > 0$, we introduce a variable $y_j$ and a positive parameter $\beta_j$ such that $y_j = x_j^{-\beta_j}$. We denote by $y_j^U$ a linear function of $x_j$ satisfying $y_j = x_j^{-\beta_j} \le y_j^U$. By specifying $\beta_j$ carefully so as to minimize the difference between $y_j^U$ and $y_j$, we underestimate $f(X)$ tightly. The proposed method can obtain very tight results, and this is demonstrated with a number of examples drawn from the literature.

This paper is organized as follows. Section 2 develops the convex underestimators for posynomial functions. Section 3 specifies the value of $\beta_j$, while the numerical examples are given in Sect. 4.

2 Convex underestimator of a posynomial function

This section presents a new method to develop a convex underestimator for a twice-differentiable posynomial function $f(X) = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$, where $X = (x_1, \ldots, x_n)$, $0 < \underline{x}_i \le x_i \le \bar{x}_i$, and $\alpha_i \in \mathbb{R}$ for $i = 1, 2, \ldots, n$. Let $H(X)$ be the Hessian matrix of $f(X)$ and $H_k(X)$ be the $k$th principal minor of $H(X)$. The determinants of $H(X)$ and $H_k(X)$ can be expressed as

$$\det H(X) = (-1)^n \left( \prod_{i=1}^{n} \alpha_i x_i^{n\alpha_i - 2} \right) \left( 1 - \sum_{i=1}^{n} \alpha_i \right) \quad (1)$$

and

$$\det H_k(X) = (-1)^k \left( \prod_{i=1}^{k} \alpha_i x_i^{k\alpha_i - 2} \right) \left( \prod_{j=k+1}^{n} x_j^{k\alpha_j} \right) \left( 1 - \sum_{i=1}^{k} \alpha_i \right), \quad k = 1, \ldots, n-1. \quad (2)$$

If $x_i > 0$ and $\alpha_i < 0$ for all $i$, then $\det H(X) \ge 0$ and $\det H_k(X) \ge 0$.
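As a sanity check, the closed form (1) can be compared against a finite-difference Hessian. The sketch below (an illustration in Python with NumPy, not part of the original paper; the exponents and sample point are arbitrary) does this for a small instance with all $\alpha_i < 0$:

```python
# Numerical spot-check of Eq. (1) against a finite-difference Hessian.
import numpy as np

alpha = np.array([-0.5, -1.2, -0.3])   # all alpha_i < 0: the convex case
x0 = np.array([1.7, 0.8, 2.4])         # arbitrary point with x_i > 0
n = len(alpha)

def f(x):
    return np.prod(x ** alpha)         # f(X) = x1^a1 * x2^a2 * x3^a3

# Central-difference Hessian
h = 1e-5
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        ei = np.zeros(n); ei[i] = h
        ej = np.zeros(n); ej[j] = h
        H[i, j] = (f(x0 + ei + ej) - f(x0 + ei - ej)
                   - f(x0 - ei + ej) + f(x0 - ei - ej)) / (4 * h * h)

det_fd = np.linalg.det(H)
# Closed form of Eq. (1)
det_cf = (-1) ** n * np.prod(alpha * x0 ** (n * alpha - 2)) * (1 - alpha.sum())
print(det_fd, det_cf)   # should agree to several significant digits, both >= 0
```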

The following proposition holds:

Proposition 1 A twice-differentiable posynomial function

$$f(X) = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}, \quad x_i > 0 \ \forall i, \quad (3)$$

is a convex function when $\alpha_i < 0$ for all $i$.

From Proposition 1, we deduce the following rules for a general posynomial function $f(X)$ as in (3).

Rule 1 If $\alpha_i < 0$ for all $i$, then $f(X)$ is already a convex function by Proposition 1. No convexification is required.

Rule 2 If $\alpha_j > 0$ for some $j$, $j \notin I$, where $I = \{k \mid \alpha_k < 0,\ k = 1, \ldots, n\}$, then we convert $f(X)$ into a new function

$$f(X, Y) = \prod_{i \in I} x_i^{\alpha_i} \prod_{j \notin I} y_j^{-\alpha_j / \beta_j}, \quad (4)$$

where

$$y_j = x_j^{-\beta_j}, \quad \beta_j \text{ constants}, \ 0 < \beta_j \le 1. \quad (5)$$

Since $f(X, Y)$ is a convex function, we only need to relax (5) for all $j \notin I$. Let us now focus on relaxing the equality (5). Since $\underline{x}_j^{\beta_j} \le x_j^{\beta_j} \le \bar{x}_j^{\beta_j}$ and $\frac{1}{\bar{x}_j^{\beta_j}} = \underline{y}_j \le y_j \le \bar{y}_j = \frac{1}{\underline{x}_j^{\beta_j}}$, we have:

$$\left( x_j^{\beta_j} - \underline{x}_j^{\beta_j} \right) \left( y_j - \frac{1}{\bar{x}_j^{\beta_j}} \right) \ge 0. \quad (6)$$

Owing to $x_j^{\beta_j} y_j = 1$ and

$$- x_j^{\beta_j} \le - \left( \underline{x}_j^{\beta_j} + \frac{\bar{x}_j^{\beta_j} - \underline{x}_j^{\beta_j}}{\bar{x}_j - \underline{x}_j} \left( x_j - \underline{x}_j \right) \right), \quad (7)$$

it is obvious that

$$y_j \le \frac{1}{\bar{x}_j^{\beta_j}} + \frac{1}{\underline{x}_j^{\beta_j}} - \frac{x_j^{\beta_j}}{\bar{x}_j^{\beta_j} \underline{x}_j^{\beta_j}} \le \frac{1}{\bar{x}_j^{\beta_j}} + \frac{1}{\underline{x}_j^{\beta_j}} - \frac{1}{\bar{x}_j^{\beta_j} \underline{x}_j^{\beta_j}} \left( \underline{x}_j^{\beta_j} + \frac{\bar{x}_j^{\beta_j} - \underline{x}_j^{\beta_j}}{\bar{x}_j - \underline{x}_j} \left( x_j - \underline{x}_j \right) \right) = \underline{x}_j^{-\beta_j} + \frac{\bar{x}_j^{-\beta_j} - \underline{x}_j^{-\beta_j}}{\bar{x}_j - \underline{x}_j} \left( x_j - \underline{x}_j \right). \quad (8)$$
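The chain of inequalities in (8) is easy to probe numerically. The sketch below (illustrative Python/NumPy; the bounds and $\beta_j$ are sample values) confirms that the linear function on the right of (8) overestimates $y_j = x_j^{-\beta_j}$ over the whole interval:

```python
# Check that the chord in Eq. (8) overestimates y_j = x_j^(-beta_j) on [xl, xu].
import numpy as np

xl, xu, beta = 1.0, 100.0, 0.5         # sample bounds and beta_j
x = np.linspace(xl, xu, 10001)

y = x ** (-beta)                        # the exact curve
yU = xl ** (-beta) + (xu ** (-beta) - xl ** (-beta)) / (xu - xl) * (x - xl)

assert np.all(y <= yU + 1e-12)          # Eq. (8) holds on the grid
print("max gap:", (yU - y).max())       # shrinks as beta_j -> 0 (cf. Remark 1 below)
```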

Fig. 1 The distance $D(\beta_j)$ between the linear overestimator $y_j^U$ and $y_j = x_j^{-\beta_j}$ over $[\underline{x}_j, \bar{x}_j]$ (figure omitted)

We then deduce the following proposition:

Proposition 2 Consider a twice-differentiable non-convex function $f(X) = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$, $0 < \underline{x}_i \le x_i \le \bar{x}_i$ for all $i$, where $\underline{x}_i$ and $\bar{x}_i$ are respectively the lower and upper bounds of $x_i$. Let $I = \{k \mid \alpha_k < 0,\ k = 1, \ldots, n\}$. A lower bound of $f(X)$ can be obtained by solving the following convex problem:

$$\min_{x, y} \ \prod_{i \in I} x_i^{\alpha_i} \prod_{j \notin I} y_j^{-\alpha_j / \beta_j}$$
$$\text{s.t.} \quad y_j \le y_j^U, \quad j \notin I, \quad (9)$$
$$y_j^U = \underline{x}_j^{-\beta_j} + \frac{\bar{x}_j^{-\beta_j} - \underline{x}_j^{-\beta_j}}{\bar{x}_j - \underline{x}_j} \left( x_j - \underline{x}_j \right), \quad (10)$$

where $\bar{x}_j^{-\beta_j} \le y_j \le \underline{x}_j^{-\beta_j}$ and $0 < \beta_j \le 1$.
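For experimentation outside GAMS, the convex program of Proposition 2 can be set up with any NLP solver. The sketch below (an assumption, using SciPy's SLSQP on a hypothetical two-variable posynomial; the paper itself uses GAMS/CONOPT) illustrates the construction:

```python
# Sketch of the relaxation in Proposition 2 for f(X) = x1^1 * x2^(-0.5).
import numpy as np
from scipy.optimize import minimize

alpha = np.array([1.0, -0.5])               # hypothetical exponents
xl = np.array([1.0, 1.0])
xu = np.array([10.0, 10.0])
beta = 0.01                                  # beta_j, to be chosen per Proposition 3
pos = np.where(alpha > 0)[0]                 # indices j not in I
neg = np.where(alpha < 0)[0]                 # indices in I

def obj(z):
    x, y = z[:len(alpha)], z[len(alpha):]
    # objective: prod x_i^alpha_i (i in I) * prod y_j^(-alpha_j/beta_j)
    return np.prod(x[neg] ** alpha[neg]) * np.prod(y ** (-alpha[pos] / beta))

def lin_over(z):
    # Eqs. (9)-(10): yU_j - y_j >= 0, with yU_j the linear overestimator
    x, y = z[:len(alpha)], z[len(alpha):]
    lj, uj = xl[pos], xu[pos]
    yU = lj ** (-beta) + (uj ** (-beta) - lj ** (-beta)) / (uj - lj) * (x[pos] - lj)
    return yU - y

bounds = [(l, u) for l, u in zip(xl, xu)] + \
         [(u ** (-beta), l ** (-beta)) for l, u in zip(xl[pos], xu[pos])]
z0 = np.concatenate([xl, np.ones(pos.size)])
res = minimize(obj, z0, method="SLSQP", bounds=bounds,
               constraints=[{"type": "ineq", "fun": lin_over}])
print(res.fun)   # a valid lower bound on the minimum of x1 * x2^(-0.5) over the box
```

Since the relaxed problem is convex, a local solution returned by the solver is a valid lower bound on the minimum of $f(X)$.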

Remark 1 A smaller $\beta_j$ in Proposition 2 means that $y_j^U$ is tighter to $y_j$, as presented in Fig. 1.

The value of $\beta_j$ affects the tightness of $y_j^U$ to $y_j$. However, given the restriction of computational accuracy, $\beta_j$ cannot be chosen arbitrarily close to zero. The next section computes the lowest possible value of $\beta_j$ such that there exists a computationally distinguishable distance between the two functions, thus preventing round-off errors in the computer's floating-point arithmetic.

3 Selection of $\beta_j$

The selection of $\beta_j$ is the same as solving the following optimization program: $\min \beta_j$, subject to condition (11) derived below.

The maximal value of $\frac{1}{\bar{x}_j^{\beta_j}} + \frac{1}{\underline{x}_j^{\beta_j}} - \frac{1}{\bar{x}_j^{\beta_j} \underline{x}_j^{\beta_j}} y_j^{-1} - y_j$ is $\left( \underline{x}_j^{-0.5\beta_j} - \bar{x}_j^{-0.5\beta_j} \right)^2$, attained at $y_j = \underline{x}_j^{-0.5\beta_j} \bar{x}_j^{-0.5\beta_j}$, where

$$\frac{\partial}{\partial y_j} \left( \frac{1}{\bar{x}_j^{\beta_j}} + \frac{1}{\underline{x}_j^{\beta_j}} - \frac{1}{\bar{x}_j^{\beta_j} \underline{x}_j^{\beta_j}} y_j^{-1} - y_j \right) = 0.$$

Therefore, $\beta_j$ needs to satisfy:

$$\left( \underline{x}_j^{-0.5\beta_j} - \bar{x}_j^{-0.5\beta_j} \right)^2 \ge \varepsilon. \quad (11)$$

Let $g_1(\beta_j) = \underline{x}_j^{-0.5\beta_j}$, a continuous function of $\beta_j$. The Taylor expansion of $g_1(\beta_j)$ is $g_1(\beta_j) = \sum_{n=0}^{\infty} \frac{g_1^{(n)}(\beta_j^0)}{n!} (\beta_j - \beta_j^0)^n$. Choosing $\beta_j^0 = 0$, we have

$$g_1(\beta_j) = g_1(0) + g_1'(0) \beta_j + \frac{g_1''(0)}{2!} \beta_j^2 + \cdots + \frac{g_1^{(n)}(0)}{n!} \beta_j^n + \cdots = 1 + \sum_{n=1}^{\infty} \frac{\beta_j^n}{n!} \left( -0.5 \ln \underline{x}_j \right)^n.$$

Similarly, let $g_2(\beta_j) = \bar{x}_j^{-0.5\beta_j}$, which leads to $g_2(\beta_j) = 1 + \sum_{n=1}^{\infty} \frac{\beta_j^n}{n!} \left( -0.5 \ln \bar{x}_j \right)^n$. We then have

$$\left( \underline{x}_j^{-0.5\beta_j} - \bar{x}_j^{-0.5\beta_j} \right)^2 = \left( g_1(\beta_j) - g_2(\beta_j) \right)^2 = \left( \sum_{n=1}^{\infty} \frac{\beta_j^n}{n!} \left[ \left( -0.5 \ln \underline{x}_j \right)^n - \left( -0.5 \ln \bar{x}_j \right)^n \right] \right)^2 = \left( \sum_{n=1}^{\infty} A_n \right)^2, \quad (12)$$

where we have defined $A_n = \frac{\beta_j^n}{n!} \left[ \left( -0.5 \ln \underline{x}_j \right)^n - \left( -0.5 \ln \bar{x}_j \right)^n \right]$ for all $n = 1, 2, \ldots$

It is clear that $A_1, A_3, A_5, \ldots$ are always positive. If $\ln(\bar{x}_j \underline{x}_j) \le 0$, then $A_2, A_4, A_6, \ldots$ are also non-negative; otherwise $A_2, A_4, A_6, \ldots$ are negative. For instance, consider $A_2$:

$$A_2 = \frac{\beta_j^2}{2} \left[ 0.25 (\ln \underline{x}_j)^2 - 0.25 (\ln \bar{x}_j)^2 \right] = -0.125 \, \beta_j^2 \, \ln(\bar{x}_j \underline{x}_j) \cdot \ln \frac{\bar{x}_j}{\underline{x}_j}. \quad (13)$$

It is obvious that $A_2 \ge 0$ if and only if $\ln(\bar{x}_j \underline{x}_j) \le 0$.
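The identity in (13) can be confirmed in one line of arithmetic; the following check (illustrative Python/NumPy with sample bounds) evaluates both forms of $A_2$:

```python
# Both forms of A_2 in Eq. (13) should coincide to machine precision.
import numpy as np

xl, xu, beta = 0.01, 10.0, 0.3        # sample values; here ln(xu*xl) = ln(0.1) <= 0
lhs = beta ** 2 / 2 * (0.25 * np.log(xl) ** 2 - 0.25 * np.log(xu) ** 2)
rhs = -0.125 * beta ** 2 * np.log(xu * xl) * np.log(xu / xl)
print(lhs, rhs)                        # identical, and >= 0 since ln(xu*xl) <= 0
```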

Proposition 3 The $\beta_j$ parameter in Proposition 2 can be selected as follows.

(i) If $\ln(\bar{x}_j \underline{x}_j) \le 0$, then

$$\beta_j \ge \frac{2 \sqrt{\varepsilon}}{\ln(\bar{x}_j / \underline{x}_j)}, \quad (14)$$

where $\varepsilon$ is the accuracy of the computer.

(ii) If $\ln(\bar{x}_j \underline{x}_j) > 0$, then

$$\beta_j \ge \frac{4 \ln(\bar{x}_j / \underline{x}_j) - \sqrt{G}}{2 \ln(\bar{x}_j / \underline{x}_j) \cdot \ln(\bar{x}_j \underline{x}_j)}, \quad (15)$$

where

$$G = 16 \ln(\bar{x}_j / \underline{x}_j) \left[ \ln(\bar{x}_j / \underline{x}_j) - 2 \sqrt{\varepsilon} \ln(\bar{x}_j \underline{x}_j) \right] \quad (16)$$

and $\varepsilon$ should satisfy

$$\text{"computer accuracy"} \le \varepsilon \le \left( \frac{\ln(\bar{x}_j / \underline{x}_j)}{2 \ln(\bar{x}_j \underline{x}_j)} \right)^2. \quad (17)$$

Proof Case 1: $\ln(\bar{x}_j \underline{x}_j) \le 0$.

In this case, $A_k \ge 0$ for all $k$ and $\left( \sum_{k=1}^{\infty} A_k \right)^2 \ge A_1^2 = \left( 0.5 \beta_j \ln(\bar{x}_j / \underline{x}_j) \right)^2$. For (11) to hold, it suffices to have $\left( 0.5 \beta_j \ln(\bar{x}_j / \underline{x}_j) \right)^2 \ge \varepsilon$, which results in (14).

Case 2: $\ln(\bar{x}_j \underline{x}_j) > 0$.

In this case, $A_2, A_4, A_6, \ldots$ are negative. If we choose $\beta_j$ so as to satisfy

$$A_1 + A_2 \ge \sqrt{\varepsilon}, \quad (18)$$

then it will also hold that $A_{2k-1} + A_{2k} \ge 0$, $k = 2, 3, \ldots$. Therefore, in such a case we would have

$$\left( \underline{x}_j^{-0.5\beta_j} - \bar{x}_j^{-0.5\beta_j} \right)^2 = \left( A_1 + A_2 + \sum_{k=2}^{\infty} \left( A_{2k-1} + A_{2k} \right) \right)^2 \ge \varepsilon. \quad (19)$$

We now focus on examining the required conditions on the $\beta_j$ parameter so that Eq. (18) holds:

$$(18) \Leftrightarrow \beta_j \left( 0.5 \ln \bar{x}_j - 0.5 \ln \underline{x}_j \right) + 0.5 \beta_j^2 \left( 0.25 \ln^2 \underline{x}_j - 0.25 \ln^2 \bar{x}_j \right) \ge \sqrt{\varepsilon}$$
$$\Leftrightarrow -(\ln \bar{x}_j - \ln \underline{x}_j)(\ln \bar{x}_j + \ln \underline{x}_j) \beta_j^2 + 4 \beta_j (\ln \bar{x}_j - \ln \underline{x}_j) - 8 \sqrt{\varepsilon} \ge 0$$
$$\Leftrightarrow \ln(\bar{x}_j / \underline{x}_j) \cdot \ln(\bar{x}_j \underline{x}_j) \, \beta_j^2 - 4 \ln(\bar{x}_j / \underline{x}_j) \, \beta_j + 8 \sqrt{\varepsilon} \le 0.$$

Denoting $G$ as in (16), we get

$$\left( \beta_j - \frac{4 \ln(\bar{x}_j / \underline{x}_j) - \sqrt{G}}{2 \ln(\bar{x}_j / \underline{x}_j) \ln(\bar{x}_j \underline{x}_j)} \right) \left( \beta_j - \frac{4 \ln(\bar{x}_j / \underline{x}_j) + \sqrt{G}}{2 \ln(\bar{x}_j / \underline{x}_j) \ln(\bar{x}_j \underline{x}_j)} \right) \le 0,$$

which results in

$$\frac{4 \ln(\bar{x}_j / \underline{x}_j) - \sqrt{G}}{2 \ln(\bar{x}_j / \underline{x}_j) \ln(\bar{x}_j \underline{x}_j)} \le \beta_j \le \frac{4 \ln(\bar{x}_j / \underline{x}_j) + \sqrt{G}}{2 \ln(\bar{x}_j / \underline{x}_j) \ln(\bar{x}_j \underline{x}_j)}.$$

Since $G$ needs to be non-negative, the computational accuracy $\varepsilon$ we use should satisfy (17). $\square$
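Proposition 3 translates directly into a small selection routine. The sketch below (Python/NumPy; the function name and its default $\varepsilon$ are illustrative, not from the paper) implements Eqs. (14)–(17):

```python
# beta_j selection per Proposition 3.
import numpy as np

def select_beta(xl, xu, eps=1e-6):
    """Lowest admissible beta_j per Eqs. (14)-(17); any larger value is also valid."""
    r = np.log(xu / xl)                 # ln(xu/xl) > 0 since xu > xl
    p = np.log(xu * xl)                 # its sign selects case (i) or (ii)
    if p <= 0:                          # case (i): Eq. (14)
        return 2.0 * np.sqrt(eps) / r
    # case (ii): eps must satisfy Eq. (17) so that G >= 0
    assert eps <= (r / (2.0 * p)) ** 2, "eps violates Eq. (17)"
    G = 16.0 * r * (r - 2.0 * np.sqrt(eps) * p)    # Eq. (16)
    return (4.0 * r - np.sqrt(G)) / (2.0 * r * p)  # Eq. (15), lower root

print(select_beta(0.01, 10.0))   # case (i); ~2.8953e-4 (cf. Example 2 below)
print(select_beta(1.0, 10.0))    # exercises the case (ii) branch
```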

4 Numerical examples

Two examples are presented so as to demonstrate the tightness of the proposed convex underestimation technique and compare it to other methods, such as the root node lower bound of BARON [23], exponential transformations (ET) [16,20], and reciprocal transformations (RT) [13]. The numerical examples were coded in the GAMS v21.7 [3] environment. The ET, RT, and proposed method relaxations were solved using the CONOPT3 solver [7], while the BARON (root node) results were obtained both with BARON version 7.2.5 and through the version maintained on the NEOS Server for Optimization [6].

Example 1 Find the underestimation of the following function:

$$f(X) = x_1 x_2 x_3 x_4 x_5 - x_2^{0.5} x_4^{0.5} - 3 x_1 - x_5, \quad 1 \le x_1, x_2, x_3, x_4, x_5 \le 100.$$

Let $x_i = y_i^{-1/\beta}$ for $i = 1, 2, \ldots, 5$. A convex relaxation is formulated as follows:

$$\min \ f(X, Y) = y_1^{-1/\beta} y_2^{-1/\beta} y_3^{-1/\beta} y_4^{-1/\beta} y_5^{-1/\beta} - x_2^{0.5} x_4^{0.5} - 3 x_1 - x_5$$
$$\text{s.t.} \quad y_i \le 1 + \frac{100^{-\beta} - 1}{100 - 1} \left( x_i - 1 \right), \quad i = 1, 2, \ldots, 5,$$

where $1 \le x_i \le 100$ and $100^{-\beta} \le y_i \le 1$ for $i = 1, 2, \ldots, 5$.


Table 1 Comparisons of lower bound results for the examples

Scheme      BARON (root node)   ET         RT         Proposed method   Global optimum
Example 1   −499                −209.220   −317.076   −209.477          −202
Example 2   −49.4716            −43.9794   −48.8309   −43.9812          −39.7601

We have $\ln(1 \cdot 100) > 0$; therefore, case (ii) of Proposition 3 applies. We select $\varepsilon = 10^{-6} < \left( \frac{\ln(\bar{x}_j / \underline{x}_j)}{2 \ln(\bar{x}_j \underline{x}_j)} \right)^2 = 0.25$.

From Eqs. (15) and (16), we calculate

$$\beta_j \ge \frac{4 \ln(100/1) - \sqrt{G}}{2 \ln(100/1) \cdot \ln(100 \cdot 1)} = \frac{4 \ln 100 - \sqrt{338.6428361}}{2 \ln 100 \cdot \ln 100} \approx 0.00434512.$$

Solving this convex program with GAMS/CONOPT, we obtain the solution $(x_1, x_2, x_4, x_5, y_1, y_2, y_3, y_4, y_5) = (90.483591, 1, 1, 1, 0.98209321, 1, 1, 1, 1)$, which corresponds to a lower bound of $-209.477$. The maximal number of required additional linear constraints is 5. Table 1 lists the results from the BARON root node, ET, RT (corresponding to $\beta = 1$), the proposed method, and the known solution. It shows that the lower bound of the proposed method is comparable to ET and much tighter than the other two approaches.
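As a quick consistency check (plain Python, illustrative only), substituting the reported $x_1$ and $\beta = 0.00434512$ into the linear constraint of the relaxation reproduces the reported $y_1$:

```python
beta, x1 = 0.00434512, 90.483591
y1U = 1 + (100 ** (-beta) - 1) / (100 - 1) * (x1 - 1)   # the relaxation's constraint at x1
print(y1U)   # ~0.9820932, matching the reported y1 = 0.98209321
```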

Example 2 $f(X) = x_1^{-2} x_2^{-1.5} x_3^{1.2} x_4^{3} - 3 x_3^{0.5} + x_2 - 4 x_4$, $0.01 \le x_1, x_2, x_3, x_4 \le 10$.

Let $x_i = y_i^{-1/\beta}$, $i = 3, 4$. We solve the following program:

$$\min \ f(X, Y) = x_1^{-2} x_2^{-1.5} y_3^{-1.2/\beta} y_4^{-3/\beta} - 3 x_3^{0.5} + x_2 - 4 x_4$$
$$\text{s.t.} \quad y_j \le 0.01^{-\beta} + \frac{10^{-\beta} - 0.01^{-\beta}}{10 - 0.01} \left( x_j - 0.01 \right), \quad j = 3, 4,$$

where $10^{-\beta} \le y_j \le 0.01^{-\beta}$, $j = 3, 4$, and $0.01 \le x_i \le 10$, $i = 1, 2, 3, 4$.

The $\beta$ parameter is selected as follows. We have $\ln(0.01 \cdot 10) \le 0$; therefore, case (i) of Proposition 3 applies. We select $\varepsilon = 10^{-6}$ and use Eq. (14) to calculate

$$\beta_j \ge \frac{2 \sqrt{\varepsilon}}{\ln(10 / 0.01)} \approx 0.00028953.$$

Solving the convex relaxation with GAMS/CONOPT, we obtain $(x_1, x_2, x_3, x_4, y_3, y_4) = (10, 1.31724, 4.23897, 10, 1.00049, 0.99934)$, which corresponds to a lower bound of $-43.9812$. The comparison results are shown in Table 1.
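A similar check (plain Python, illustrative only) recovers the reported $y_3$ and $y_4$ from $\beta = 0.00028953$ and the reported $x_3$, $x_4$:

```python
beta = 0.00028953

def yU(x):
    # linear overestimator from the Example 2 relaxation over [0.01, 10]
    return 0.01 ** (-beta) + (10 ** (-beta) - 0.01 ** (-beta)) / (10 - 0.01) * (x - 0.01)

print(yU(4.23897), yU(10.0))   # ~1.00049 and ~0.99933, matching y3 and y4 up to rounding
```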

5 Conclusion

This study integrates convexification techniques and bounding schemes to construct a convex lower bound for a posynomial program. By properly specifying the value of the $\beta_j$ parameters, the convex relaxations produced provide tight bounds on the global optimum of the posynomial program. Compared with other underestimation/relaxation techniques, the proposed method produces underestimators of comparable or much better tightness.


Acknowledgments The authors thank the area editor, the anonymous associate editor, and anonymous referees for providing insightful comments that significantly improved this paper. This research has been supported by Taiwan NSC 97-2218-E-030-005-.

References

1. Akrotirianakis, I.G., Floudas, C.A.: Computational experience with a new class of convex underestimators: box constrained NLP problems. J. Glob. Optim. 29, 249–264 (2004a). doi:10.1023/B:JOGO.0000044768.75992.10
2. Akrotirianakis, I.G., Floudas, C.A.: A new class of improved convex underestimators for twice continuously differentiable constrained NLPs. J. Glob. Optim. 30, 367–390 (2004b). doi:10.1007/s10898-004-6455-4
3. Brooke, A., Kendrick, D., Meeraus, A., Raman, R.: GAMS: A User's Guide. GAMS Development Corporation, Washington, DC (2005)
4. Björk, K.J., Lindberg, P.O., Westerlund, T.: Some convexifications in global optimization of problems containing signomial terms. Comput. Chem. Eng. 27, 669–679 (2003). doi:10.1016/S0098-1354(02)00254-5
5. Caratzoulas, S., Floudas, C.A.: A trigonometric convex underestimator for the base functions in Fourier space. J. Optim. Theory Appl. 124, 339–362 (2005). doi:10.1007/s10957-004-0940-2
6. Czyzyk, J., Mesnier, M., Moré, J.: The NEOS server. IEEE J. Comput. Sci. Eng. 5, 68–75 (1998). doi:10.1109/99.714603

7. Drud, A.S.: CONOPT: a system for large-scale nonlinear optimization. Reference manual for CONOPT subroutine library. ARKI Consulting and Development A/S, Bagsvaerd, Denmark (1996)

8. Floudas, C.A.: Global optimization in design and control of chemical process systems. J. Process Control 10, 125–134 (2000). doi:10.1016/S0959-1524(99)00019-0

9. Floudas, C.A., Akrotirianakis, I.G., Caratzoulas, S., Meyer, C.A., Kallrath, J.: Global optimization in the 21st century: advances and challenges. Comput. Chem. Eng. 29, 1185–1202 (2005). doi:10.1016/j.compchemeng.2005.02.006

10. Gounaris, C.E., Floudas, C.A.: Tight convex underestimators for C2-continuous problems: I. Univariate functions. J. Glob. Optim. 42, 51–67 (2008)

11. Gounaris, C.E., Floudas, C.A.: Tight convex underestimators for C2-continuous problems: II. Multivariate functions. J. Glob. Optim. 42, 69–89 (2008)

12. Hamed, A.S.E.: Calculation of bounds on variables and underestimating convex functions for nonconvex functions. Ph.D. thesis, The George Washington University (1991)
13. Li, H.L., Tsai, J.F., Floudas, C.A.: Convex underestimation for posynomial functions of positive variables. Optim. Lett. 2, 333–340 (2008)
14. Liberti, L., Pantelides, C.C.: Convex envelopes of monomials of odd degree. J. Glob. Optim. 25, 157–168 (2003). doi:10.1023/A:1021924706467

15. Maranas, C.D., Floudas, C.A.: Finding all solutions of nonlinearly constrained systems of equations. J. Glob. Optim. 7, 143–182 (1995). doi:10.1007/BF01097059

16. Maranas, C.D., Floudas, C.A.: Global optimization in generalized geometric programming. Comput. Chem. Eng. 21, 351–370 (1997). doi:10.1016/S0098-1354(96)00282-7

17. Meyer, C.A., Floudas, C.A.: Trilinear monomials with positive or negative domains: facets of the convex and concave envelopes. In: Floudas, C.A., Pardalos, P.M. (eds.) Frontiers in Global Optimization. Kluwer Academic Publishers, Santorini, Greece (2003)
18. Meyer, C.A., Floudas, C.A.: Trilinear monomials with mixed sign domains: facets of the convex and concave envelopes. J. Glob. Optim. 29, 125–155 (2004). doi:10.1023/B:JOGO.0000042112.72379.e6
19. Meyer, C.A., Floudas, C.A.: Convex envelopes for edge concave functions. Math. Program. Ser. B 103, 207–224 (2005). doi:10.1007/s10107-005-0580-9

20. Pörn, R., Harjunkoski, I., Westerlund, T.: Convexification of different classes of non-convex MINLP problems. Comput. Chem. Eng. 23, 439–448 (1999). doi:10.1016/S0098-1354(98)00305-6

21. Ryoo, H.S., Sahinidis, N.V.: Analysis of bounds for multilinear functions. J. Glob. Optim. 19, 403–424 (2001). doi:10.1023/A:1011295715398

22. Tardella, F.: On the existence of polyhedral convex envelopes. In: Floudas, C.A., Pardalos, P.M. (eds.) Frontiers in Global Optimization, pp. 563–573. Kluwer Academic Publishers, Santorini, Greece (2003)
23. Tawarmalani, M., Sahinidis, N.V.: BARON 7.2.5: Global optimization of mixed-integer nonlinear programs, user's manual (2005)
