CHARACTERIZATIONS OF SOLUTION SETS FOR TWO NONSYMMETRIC CONE PROGRAMS

MING-YEN LI, CHING-YU YANG, XIN-HE MIAO, AND JEIN-SHAN CHEN

Abstract. This paper is devoted to the characterizations of solution sets for a general cone-constrained convex programming problem. In particular, when the cone reduces to two specific nonsymmetric cones, namely the power cone and the exponential cone, we demonstrate that the conclusion still holds by exploiting the structures of these two cones.

1. Introduction

In this paper, we consider the following general cone-constrained convex programming problem:
$$(1.1)\qquad \begin{array}{rl} \min & f(x) \\ \text{s.t.} & -g(x)\in K, \\ & x\in C, \end{array}$$
where $C$ is a closed convex set in $\mathbb{R}^n$, $K$ is a closed convex cone in $\mathbb{R}^r$, $f:\mathbb{R}^n\to\mathbb{R}$ is a convex function, and $g:\mathbb{R}^n\to\mathbb{R}^r$ is a continuous $K$-convex mapping, i.e., for every $x,y\in\mathbb{R}^n$ and $t\in[0,1]$, there holds
$$t\,g(x)+(1-t)\,g(y)-g\big(tx+(1-t)y\big)\in K.$$

It is known that constrained optimization problems, including cone-constrained problems, arise in a variety of scientific and engineering applications [11, 12, 18]. For constrained optimization problems, an important issue is the characterization of solution sets. This is because the characterizations and properties of solution sets are fundamental and crucial for understanding the behavior of solution methods for solving optimization problems, see [4, 10, 15, 17, 19, 20, 23]. In 1988, Mangasarian [19] considered characterizations of the solution set of a differentiable convex programming problem. Later, Burke and Ferris [4] extended the results given in [19] to the setting of nondifferentiable convex programming. Moreover, for problem (1.1), when the function $f$ is pseudolinear, $g = 0$, and the set $C = \{x\in\mathbb{R}^n \mid Ax=b\}$, Jeyakumar et al. [17] described the characterization of the solution set of so-called pseudolinear programs. In addition, for cone-constrained convex programming problems, Jeyakumar et al. [15] also provided the characterization of the solution set in terms of subgradients and Lagrange multipliers. Following the topic on

2010 Mathematics Subject Classification. 26A27, 90C33.

Key words and phrases. Power cone, exponential cone, Lagrange multipliers, K-convex mapping.

Xin-He Miao. The author’s work is supported by National Natural Science Foundation of China (No. 11471241).

Jein-Shan Chen. Corresponding author. The author’s work is supported by Ministry of Science and Technology, Taiwan.


the characterization of the solution set in [15], Miao and Chen [20] further considered a type of cone-constrained convex programming problem and simplified the corresponding results in [15]. In particular, when the cone reduces to three specific cones, i.e., the p-order cone [2, 24], the $L_p$ cone [12], and the circular cone [25], the obtained conclusions can be achieved by exploiting the special structures of those three cones.

The main purpose of this paper is to describe the characterization of the solution set of problem (1.1), which is a generalization of the problem in [20]. Moreover, when the cone $K$ reduces to two types of convex cones, i.e., the power cone $K_\alpha^{m,n}$ and the exponential cone $K_e$ (see Section 2 for details), we may obtain characterizations of the solution sets by exploiting the special structures of these two convex cones. Why do we focus on these two cones? There are two main reasons. The first is that these two nonsymmetric cones appear in many practical applications such as location problems and geometric programming [6, 13, 21, 22].

The second reason is indeed more important. More specifically, through appropriate transformations (for example, the α-representation and extended α-representation defined in [6]), plenty of nonsymmetric cones can be generated from the power cone $K_\alpha$ and the exponential cone $K_e$. In other words, these two cones are the cores of many nonsymmetric cones in real-world applications.

Toward the end of this section, we say a few words about the notation used in this paper. Throughout this paper, $\mathbb{R}$ denotes the space of real numbers, $\mathbb{R}_+$ denotes the set of nonnegative reals, and $\mathbb{R}^n$ means the n-dimensional real vector space endowed with the inner product $\langle\cdot,\cdot\rangle$. Moreover, we use $\|x\|$ to denote the Euclidean norm of $x$ induced by the inner product, i.e., $\|x\|=\sqrt{\langle x,x\rangle}$. For any set $\Omega\subseteq\mathbb{R}^n$, $\mathrm{int}\,\Omega$ denotes the interior of $\Omega$ and $\mathrm{bd}\,\Omega$ denotes the boundary of $\Omega$. For any function $f:\mathbb{R}^n\to\mathbb{R}$, we denote by $\partial f(x)$ the subdifferential of $f$ at $x\in\mathbb{R}^n$.

2. Preliminaries

In this section, we briefly recall some background materials and useful results, which will be extensively used in the subsequent analysis. More details can be found in [3, 7, 11, 14].

We start with the definition of the subdifferential of a function $f:\mathbb{R}^n\to\mathbb{R}$. The subdifferential of $f$ at $x$ is defined as
$$\partial f(x) := \left\{\xi\in\mathbb{R}^n \mid f(y)-f(x)\ge\langle\xi, y-x\rangle,\ \forall y\in\mathbb{R}^n\right\}.$$

If $\Omega$ is a convex set in $\mathbb{R}^n$, the normal cone $N(x)$ of the set $\Omega$ at $x\in\Omega$ is defined by
$$N(x) := \left\{\xi\in\mathbb{R}^n \mid \langle\xi, y-x\rangle\le 0,\ \forall y\in\Omega\right\}.$$
When the convex set $\Omega$ corresponds to $\Omega=\{x\in\mathbb{R}^n \mid Ax=b\}$ with $A$ being an $m\times n$ matrix, it is easy to verify that for any $x\in\Omega$, the normal cone $N(x)$ of $\Omega$ at $x$ can be written as
$$N(x) = \{A^T y \mid y\in\mathbb{R}^m\}.$$


For problem (1.1), the function $g:\mathbb{R}^n\to\mathbb{R}^r$ is continuous and $K$-convex, which implies that the set $\{x\in\mathbb{R}^n \mid -g(x)\in K\}$ is convex. Thus, it follows from the convexity of $f$ that problem (1.1) is a convex optimization problem. Let $F$ and $S$ be the feasible region and the solution set of problem (1.1), respectively, that is,
$$F := \{x\in C \mid -g(x)\in K\} \quad\text{and}\quad S := \{x\in F \mid f(x)\le f(y),\ \forall y\in F\}.$$
According to the optimality conditions for convex optimization problems, if problem (1.1) satisfies the Slater condition [16], i.e., there exists $\bar{x}\in C$ with $-g(\bar{x})\in \mathrm{int}\,K$, then $a\in S$ if and only if $a$ satisfies the KKT conditions, i.e., $a\in F$ and there exists a Lagrange multiplier $\lambda_a\in\mathbb{R}^r$ such that
$$(2.1)\qquad 0\in\partial f(a)+\partial(\lambda_a^T g)(a)+N_C(a),\quad \lambda_a\in K^* \quad\text{and}\quad \lambda_a^T g(a)=0,$$
where $K^*$ denotes the dual cone of $K$ given by
$$K^* = \{z\in\mathbb{R}^r \mid \langle z,x\rangle\ge 0,\ \forall x\in K\}.$$

In this paper, we always assume that the solution set $S$ of problem (1.1) is nonempty. From the above analysis, for $a\in S$, there exists a corresponding Lagrange multiplier $\lambda_a$ such that $(a,\lambda_a)$ satisfies the KKT conditions (2.1). For convenience, we employ the Lagrange function $L_a(\cdot,\lambda_a):\mathbb{R}^n\to\mathbb{R}$ associated with $a$ defined by
$$L_a(x,\lambda_a) := f(x)+\lambda_a^T g(x) \quad\text{for all } x\in\mathbb{R}^n.$$
Then, the KKT conditions (2.1) can be reformulated as
$$0\in\partial L_a(a,\lambda_a)+N_C(a),\quad \lambda_a\in K^* \quad\text{and}\quad \lambda_a^T g(a)=0.$$
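To make the KKT conditions (2.1) concrete, the following short numerical sketch (our own illustration, not part of the paper) checks stationarity, dual feasibility, and complementarity for a smooth toy instance with $C=\mathbb{R}^n$, where the conditions reduce to $\nabla f(a)+J_g(a)^T\lambda_a=0$, $\lambda_a\in K^*$, $\lambda_a^T g(a)=0$. All helper names and the toy data are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch, assuming a smooth instance of problem (1.1) with C = R^n, so
# that (2.1) reduces to: grad f(a) + J_g(a)^T lambda = 0, lambda in K*, lambda^T g(a) = 0.

def kkt_residuals(grad_f, jac_g, g, a, lam, in_dual_cone):
    """Return stationarity residual, complementarity value, and dual feasibility flag."""
    stationarity = grad_f(a) + jac_g(a).T @ lam   # grad_x L_a(a, lambda_a) = 0
    complementarity = float(lam @ g(a))           # lambda_a^T g(a) = 0
    dual_feasible = in_dual_cone(lam)             # lambda_a in K*
    return stationarity, complementarity, dual_feasible

# Toy example: f(x) = ||x||^2, g(x) = x, K = K* = nonnegative orthant.
grad_f = lambda x: 2.0 * x
jac_g  = lambda x: np.eye(x.size)
g      = lambda x: x
in_orthant = lambda z: bool(np.all(z >= -1e-12))

a, lam = np.zeros(3), np.zeros(3)                 # candidate solution and multiplier
print(kkt_residuals(grad_f, jac_g, g, a, lam, in_orthant))
```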

To close this section, we review two specific closed convex cones, together with their explicit expressions and dual cones.

(1) Power cone, see [6, 13]. It is a generalization of the second-order cone (SOC) and is defined as below:
$$K_\alpha^{m,n} := \left\{(x,z)\in\mathbb{R}_+^m\times\mathbb{R}^n \ \Big|\ \|z\|\le \prod_{i=1}^m x_i^{\alpha_i}\right\},$$
where $\alpha_i>0$ and $\sum_{i=1}^m \alpha_i = 1$, $x=(x_1,\cdots,x_m)^T\in\mathbb{R}_+^m$, $z=(z_1,\cdots,z_n)^T\in\mathbb{R}^n$. Indeed, $K_\alpha^{m,n}$ is a solid (i.e., $\mathrm{int}\,K_\alpha^{m,n}\neq\emptyset$), closed and convex cone, and its dual cone is given by
$$(K_\alpha^{m,n})^* = \left\{(\lambda,y)\in\mathbb{R}_+^m\times\mathbb{R}^n \ \Big|\ \|y\|\le \prod_{i=1}^m \left(\frac{\lambda_i}{\alpha_i}\right)^{\alpha_i}\right\},$$
where $\lambda=(\lambda_1,\cdots,\lambda_m)^T\in\mathbb{R}_+^m$ and $y=(y_1,\cdots,y_n)^T\in\mathbb{R}^n$. From this expression, we see that the dual cone $(K_\alpha^{m,n})^*$ is also a solid, closed and convex cone. When $m=1$, the power cone is just the second-order cone $K^{n+1}$ [1, 5, 8, 9] defined as follows:
$$K^{n+1} = \left\{(x_1,z)\in\mathbb{R}_+\times\mathbb{R}^n \ \Big|\ \|z\|\le x_1\right\}.$$


Hence, the power cone $K_\alpha^{m,n}$ includes the second-order cone $K^{n+1}$ as a special case with $m=1$. In addition, from the expressions of the power cone $K_\alpha^{m,n}$ and its dual cone $(K_\alpha^{m,n})^*$, it is not hard to verify that their boundaries can be respectively expressed as follows:
$$\mathrm{bd}\,K_\alpha^{m,n} = \left\{(x,z)\in\mathbb{R}_+^m\times\mathbb{R}^n \ \Big|\ \|z\| = \prod_{i=1}^m x_i^{\alpha_i}\right\},$$
$$\mathrm{bd}\,(K_\alpha^{m,n})^* = \left\{(\lambda,y)\in\mathbb{R}_+^m\times\mathbb{R}^n \ \Big|\ \|y\| = \prod_{i=1}^m \left(\frac{\lambda_i}{\alpha_i}\right)^{\alpha_i}\right\}.$$
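As a concrete illustration of the membership and boundary tests above, the following short numerical sketch (our own addition, not part of the paper; the helper names are assumptions) checks whether a pair $(x,z)$ lies in $K_\alpha^{m,n}$, whether $(\lambda,y)$ lies in $(K_\alpha^{m,n})^*$, and whether a given point sits on the boundary.

```python
import numpy as np

# Illustrative helpers (assumed, not from the paper) for the power cone K_alpha^{m,n}.

def in_power_cone(x, z, alpha, tol=1e-10):
    """(x, z) in K_alpha^{m,n}  iff  x >= 0 and ||z|| <= prod_i x_i^{alpha_i}."""
    if np.any(x < -tol):
        return False
    return np.linalg.norm(z) <= np.prod(x ** alpha) + tol

def in_dual_power_cone(lam, y, alpha, tol=1e-10):
    """(lam, y) in (K_alpha^{m,n})*  iff  lam >= 0 and ||y|| <= prod_i (lam_i/alpha_i)^{alpha_i}."""
    if np.any(lam < -tol):
        return False
    return np.linalg.norm(y) <= np.prod((lam / alpha) ** alpha) + tol

def on_power_cone_boundary(x, z, alpha, tol=1e-10):
    """Boundary: x >= 0 and ||z|| = prod_i x_i^{alpha_i}."""
    return in_power_cone(x, z, alpha, tol) and \
        abs(np.linalg.norm(z) - np.prod(x ** alpha)) <= tol

alpha = np.array([0.5, 0.5])                  # m = 2, alpha_1 = alpha_2 = 1/2
x, z = np.array([1.0, 4.0]), np.array([2.0])  # ||z|| = 2 = 1^0.5 * 4^0.5
print(in_power_cone(x, z, alpha), on_power_cone_boundary(x, z, alpha))
```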

In order to have a further understanding of $K_\alpha^{m,n}$, pictures of power cones in $\mathbb{R}_+^m\times\mathbb{R}^n$ and their dual cones are depicted in Figure 1 (the case $m=2$, $n=1$) and Figure 2 (the case $m=1$, $n=2$).

Figure 1. The 3-dimensional power cones and their dual cones with $m=2$, $n=1$ and different $\alpha_1,\alpha_2$

(2) Exponential cone, see [6, 22]. The exponential cone is a cone in the 3-dimensional Euclidean space $\mathbb{R}^3$, defined as below:
$$K_e := \mathrm{cl}\left\{(x_1,x_2,x_3)^T\in\mathbb{R}^3 \ \Big|\ x_2 e^{\frac{x_1}{x_2}}\le x_3,\ x_2>0\right\}.$$
In fact, the exponential cone is also the union of two sets, i.e.,
$$K_e = \left\{(x_1,x_2,x_3)^T\in\mathbb{R}^3 \ \Big|\ x_2 e^{\frac{x_1}{x_2}}\le x_3,\ x_2>0\right\}\cup\left\{(x_1,0,x_3)^T \ \Big|\ x_1\le 0,\ x_3\ge 0\right\}.$$


Figure 2. The 3-dimensional power cone with $m=1$, $n=2$, i.e., the second-order cone

As shown in [6], the exponential cone $K_e$ is a closed convex cone, and its dual cone $K_e^*$ is given by
$$K_e^* = \mathrm{cl}\left\{(y_1,y_2,y_3)^T\in\mathbb{R}^3 \ \Big|\ -y_1 e^{\frac{y_2}{y_1}}\le e\,y_3,\ y_1<0\right\}.$$
In a similar manner, the dual cone can also be expressed as the union of two corresponding sets, i.e.,
$$K_e^* = \left\{(y_1,y_2,y_3)^T\in\mathbb{R}^3 \ \Big|\ -y_1 e^{\frac{y_2}{y_1}}\le e\,y_3,\ y_1<0\right\}\cup\left\{(0,y_2,y_3)^T \ \Big|\ y_2\ge 0,\ y_3\ge 0\right\}.$$
Note that the dual cone $K_e^*$ is also a closed convex cone. The pictures of the exponential cone $K_e$ and its dual cone $K_e^*$ are depicted in Figure 3 and Figure 4, respectively. Moreover, in view of the expressions of the exponential cone $K_e$ and its dual cone $K_e^*$ (or alternatively from Figure 3 and Figure 4), it is easy to verify that the boundaries of the exponential cone and its dual cone can be respectively expressed as follows:
$$\mathrm{bd}\,K_e = \left\{(x_1,x_2,x_3)^T\in\mathbb{R}^3 \ \Big|\ x_2 e^{\frac{x_1}{x_2}} = x_3,\ x_2>0\right\}\cup\left\{(x_1,0,x_3)^T \ \Big|\ x_1\le 0,\ x_3\ge 0\right\},$$
$$\mathrm{bd}\,K_e^* = \left\{(y_1,y_2,y_3)^T\in\mathbb{R}^3 \ \Big|\ -y_1 e^{\frac{y_2}{y_1}} = e\,y_3,\ y_1<0\right\}\cup\left\{(0,y_2,y_3)^T \ \Big|\ y_2\ge 0,\ y_3\ge 0\right\}.$$
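To make these membership conditions concrete, the following short numerical sketch (our own illustration, not part of the paper; the helper names are assumptions) tests points against the union forms of $K_e$ and $K_e^*$ above and verifies the duality inequality $\langle x,y\rangle\ge 0$ for one boundary pair.

```python
import numpy as np

# Illustrative membership tests (assumed helpers) for K_e and K_e^*, using the union form.

def in_exp_cone(x, tol=1e-10):
    x1, x2, x3 = x
    if x2 > tol:
        return x2 * np.exp(x1 / x2) <= x3 + tol
    return abs(x2) <= tol and x1 <= tol and x3 >= -tol

def in_dual_exp_cone(y, tol=1e-10):
    y1, y2, y3 = y
    if y1 < -tol:
        return -y1 * np.exp(y2 / y1) <= np.e * y3 + tol
    return abs(y1) <= tol and y2 >= -tol and y3 >= -tol

# A boundary point of K_e and a dual-feasible point; the duality inequality
# <x, y> >= 0 must hold for any such pair.
x = np.array([1.0, 1.0, np.e])        # x2 * exp(x1/x2) = e = x3
y = np.array([-1.0, 0.0, 1.0])        # -y1 * exp(y2/y1) = 1 <= e * y3
print(in_exp_cone(x), in_dual_exp_cone(y), float(x @ y) >= 0)
```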

3. Characterizations of solution set

In this section, we describe the characterization of the solution set S for the problem (1.1) in terms of Lagrange multipliers and subgradients. Moreover, when


Figure 3. The exponential cone

Figure 4. The dual cone of the exponential cone

the cone $K$ reduces to two specific cones, i.e., the power cone and the exponential cone, we can establish the same conclusions by exploiting the structures of these two specific cones.


Theorem 3.1. For problem (1.1), let $a\in S$. Suppose that the corresponding Lagrange multiplier $\lambda_a\in\mathbb{R}^r$ satisfies the conditions:
$$(3.1)\qquad 0\in\partial L_a(a,\lambda_a)+N_C(a),\quad \lambda_a\in K^* \quad\text{and}\quad \lambda_a^T g(a)=0.$$
Then, the following hold.

(a): If $\lambda_a = 0$, then for every $x\in S$, there exists $\xi\in N_C(a)$ such that $-\xi\in\partial f(x)$.

(b): If $\lambda_a\neq 0$, then for every $x\in S$ with $g(x)\neq 0$, we have
$$-g(x)\in \mathrm{bd}\,K,\quad \lambda_a\in \mathrm{bd}\,K^* \quad\text{and}\quad \lambda_a^T g(x)=0.$$

Proof. (a) For $\lambda_a=0$, from the conditions (3.1), there exists $\xi\in N_C(a)$ such that $-\xi\in\partial L_a(a,\lambda_a)$. By the definitions of the subdifferential and the Lagrange function, it follows that for any $y\in\mathbb{R}^n$,
$$(-\xi)^T(y-a) \le L_a(y,\lambda_a)-L_a(a,\lambda_a) = f(y)+\lambda_a^T g(y)-f(a)-\lambda_a^T g(a) = f(y)-f(a).$$
This means $-\xi\in\partial f(a)$. Moreover, it follows from $\xi\in N_C(a)$ that $(-\xi)^T(x-a)\ge 0$ for every $x\in S$. This together with the properties of convex functions yields
$$f(y)-f(x) = f(y)-f(a) \ge (-\xi)^T(y-a) = (-\xi)^T(y-x)+(-\xi)^T(x-a) \ge (-\xi)^T(y-x)$$
for every $x\in S$ and any $y\in\mathbb{R}^n$, which says that $-\xi\in\partial f(x)$ for every $x\in S$.

(b) For $\lambda_a\neq 0$, from the conditions (3.1), i.e.,
$$0\in\partial L_a(a,\lambda_a)+N_C(a),\quad \lambda_a\in K^* \quad\text{and}\quad \lambda_a^T g(a)=0,$$
there exists $\xi\in N_C(a)$ such that $-\xi\in\partial L_a(a,\lambda_a)$. Then, for every $x\in S$, we have
$$f(x)+\lambda_a^T g(x) = L_a(x,\lambda_a) \ge L_a(a,\lambda_a)+(-\xi)^T(x-a) \ge L_a(a,\lambda_a) = f(a)+\lambda_a^T g(a),$$
where the second inequality holds since $(-\xi)^T(x-a)\ge 0$ for $\xi\in N_C(a)$. Now, using $x,a\in S$ and $\lambda_a^T g(a)=0$, we obtain that $\lambda_a^T g(x)\ge 0$ for every $x\in S$. On the other hand, because $\lambda_a\in K^*$ and $-g(x)\in K$ for every $x\in S$, this gives $\lambda_a^T(-g(x))\ge 0$, which says $\lambda_a^T g(x)\le 0$. Hence, we conclude that $\lambda_a^T g(x)=0$ for every $x\in S$.

Next, we show that $\lambda_a\in \mathrm{bd}\,K^*$ and $-g(x)\in \mathrm{bd}\,K$ for every $x\in S$ with $g(x)\neq 0$. Here, we only prove $-g(x)\in \mathrm{bd}\,K$, because the conclusion $\lambda_a\in \mathrm{bd}\,K^*$ can be drawn by the same arguments. Now, we prove $-g(x)\in \mathrm{bd}\,K$ by contradiction. Suppose that $-g(x)\in \mathrm{int}\,K$. Then, there is an $\epsilon>0$ such that $B(-g(x),\epsilon)\subseteq K$, where $B(-g(x),\epsilon)$ is the open ball centered at $-g(x)$ with radius $\epsilon$. This implies that for any $y\in\mathbb{R}^r$, there exists $\alpha>0$ such that
$$-g(x)+\alpha y\in B(-g(x),\epsilon)\subseteq K.$$
Moreover, since $\lambda_a\in K^*$, we know that
$$\lambda_a^T(-g(x)+\alpha y) = -\lambda_a^T g(x)+\alpha\lambda_a^T y\ge 0.$$
Hence, it follows from $\lambda_a^T g(x)=0$ for every $x\in S$ that $\alpha\lambda_a^T y\ge 0$. By the arbitrariness of $y\in\mathbb{R}^r$, we obtain that $\lambda_a=0$, which contradicts the condition $\lambda_a\neq 0$. Thus, $-g(x)\in \mathrm{bd}\,K$, and the proof is complete. $\Box$

Next, we demonstrate that Theorem 3.1 in the settings of the power cone and the exponential cone can be established as well by using the structures of these cones, respectively. To this end, for problem (1.1),
$$\min\ f(x)\quad \text{s.t.}\ -g(x)\in K,\ x\in C,$$
we consider the cases $K = K_\alpha^{m,r-m}$ and $K = K_e$, respectively. In each case, problem (1.1) becomes a power cone or exponential cone constrained convex programming problem. To proceed, we need the following technical lemmas.

Lemma 3.2 (Weighted AM-GM inequality). For any $n\in\mathbb{N}$, suppose that $\xi_i\ge 0$ and $w_i>0$ for $i=1,\cdots,n$. Let $w=\sum_{j=1}^n w_j$. Then,
$$\left(\prod_{j=1}^n \xi_j^{w_j}\right)^{\frac{1}{w}} \le \frac{1}{w}\sum_{j=1}^n w_j\xi_j,$$
with equality holding if and only if $\xi_1=\xi_2=\cdots=\xi_n$.

Proof. This is a well-known inequality; please refer to [14] for a proof. $\Box$

Lemma 3.3. Suppose that $a_i\ge 0$, $b_i\ge 0$ and $p_i>0$ for $i=1,2,\cdots,n$, where $\sum_{i=1}^n p_i=1$. Then, we have
$$\sum_{i=1}^n a_i b_i \ \ge\ \prod_{i=1}^n \left(\frac{a_i b_i}{p_i}\right)^{p_i}.$$

Proof. For $i=1,2,\cdots,n$, by $a_i\ge 0$, $b_i\ge 0$ and $p_i>0$ with $\sum_{i=1}^n p_i=1$, let $y_i=\frac{a_i b_i}{p_i}$ ($i=1,\cdots,n$). It is clear that $y_i\ge 0$ for each $i$. Then, from Lemma 3.2 (applied with $w_i=p_i$ and $w=1$), we have
$$a_1b_1+a_2b_2+\cdots+a_nb_n = p_1y_1+p_2y_2+\cdots+p_ny_n \ \ge\ y_1^{p_1}\cdots y_n^{p_n} = \left(\frac{a_1b_1}{p_1}\right)^{p_1}\left(\frac{a_2b_2}{p_2}\right)^{p_2}\cdots\left(\frac{a_nb_n}{p_n}\right)^{p_n}.$$
This means $\sum_{i=1}^n a_ib_i\ge\prod_{i=1}^n\left(\frac{a_ib_i}{p_i}\right)^{p_i}$, which is the desired result. $\Box$
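As a quick sanity check of Lemma 3.3 (our own illustration, not part of the paper), the following sketch samples random data satisfying the hypotheses and confirms the inequality numerically.

```python
import numpy as np

# Numerical check of Lemma 3.3: for a_i, b_i >= 0 and p_i > 0 with sum p_i = 1,
#   sum_i a_i * b_i  >=  prod_i (a_i * b_i / p_i)^{p_i}.

rng = np.random.default_rng(0)
for _ in range(1000):
    n = rng.integers(2, 6)
    a = rng.random(n)
    b = rng.random(n)
    p = rng.random(n)
    p /= p.sum()                       # normalize so that sum p_i = 1
    lhs = float(np.sum(a * b))
    rhs = float(np.prod((a * b / p) ** p))
    assert lhs >= rhs - 1e-12, (lhs, rhs)
print("Lemma 3.3 inequality held on all random samples.")
```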

Lemma 3.4. Let $h(t)=e^{t-1}-t$ on $\mathbb{R}$. Then, we have $h(t)\ge 0$ for all $t\in\mathbb{R}$.

Proof. Since $h(t)=e^{t-1}-t$, we have $h'(t)=e^{t-1}-1$. Thus, it follows that
$$h'(t)=e^{t-1}-1>0,\ \forall t>1 \quad\text{and}\quad h'(t)=e^{t-1}-1<0,\ \forall t<1.$$
This indicates that $h$ is strictly increasing on $(1,\infty)$ and strictly decreasing on $(-\infty,1)$. Thus, for any $t\in\mathbb{R}$, we have $h(t)\ge h(1)=0$, which is the desired result. $\Box$

Theorem 3.5. For problem (1.1), let $K=K_\alpha^{m,r-m}$ and $a\in S$. Suppose that the corresponding Lagrange multiplier $\lambda_a$ satisfies the conditions of Theorem 3.1, i.e.,
$$0\in\partial L_a(a,\lambda_a)+N_C(a),\quad \lambda_a\in (K_\alpha^{m,r-m})^* \quad\text{and}\quad \lambda_a^T g(a)=0.$$
If $\lambda_a\neq 0$, then for each $x\in S$ with $g(x)\neq 0$, we have
$$-g(x)\in \mathrm{bd}\,K_\alpha^{m,r-m},\quad \lambda_a\in \mathrm{bd}\,(K_\alpha^{m,r-m})^* \quad\text{and}\quad \lambda_a^T g(x)=0.$$

Proof. From the proof of Theorem 3.1, we know that $\lambda_a^T g(x)=0$ for all $x\in S$. Then, it remains to show that $-g(x)\in \mathrm{bd}\,K_\alpha^{m,r-m}$ and $\lambda_a\in \mathrm{bd}\,(K_\alpha^{m,r-m})^*$. For convenience, we denote $0\neq -g(x) := (x,z)\in K_\alpha^{m,r-m}$ and $0\neq \lambda_a := (\lambda,y)\in (K_\alpha^{m,r-m})^*$ with $m<r$. By the expressions of the power cone $K_\alpha^{m,r-m}$ and its dual cone $(K_\alpha^{m,r-m})^*$, it follows that
$$\|z\|\le \prod_{i=1}^m x_i^{\alpha_i} \quad\text{and}\quad \|y\|\le \prod_{i=1}^m \left(\frac{\lambda_i}{\alpha_i}\right)^{\alpha_i}$$
with $\alpha_i>0$ and $\sum_{i=1}^m \alpha_i=1$. Then, from $\lambda_a^T g(x)=0$, we have
$$0 = \lambda^T(-x)+y^T(-z) \ \le\ -\sum_{i=1}^m \lambda_i x_i + \|y\|\,\|z\| \ \le\ -\sum_{i=1}^m \lambda_i x_i + \left[\prod_{i=1}^m \left(\frac{\lambda_i}{\alpha_i}\right)^{\alpha_i}\right]\left[\prod_{i=1}^m x_i^{\alpha_i}\right] \ \le\ 0,$$
where the first inequality holds due to the Cauchy-Schwarz inequality, the second follows from the above membership bounds on $\|y\|$ and $\|z\|$, and the last inequality holds due to Lemma 3.3 (applied with $a_i=\lambda_i/\alpha_i$, $b_i=x_i$ and $p_i=\alpha_i$). This implies that
$$\|z\| = \prod_{i=1}^m x_i^{\alpha_i} \quad\text{and}\quad \|y\| = \prod_{i=1}^m \left(\frac{\lambda_i}{\alpha_i}\right)^{\alpha_i}.$$
Hence, we conclude that
$$-g(x)\in \mathrm{bd}\,K_\alpha^{m,r-m},\quad \lambda_a\in \mathrm{bd}\,(K_\alpha^{m,r-m})^* \quad\text{and}\quad \lambda_a^T g(x)=0,$$
and the proof is complete. $\Box$
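To illustrate Theorem 3.5 numerically (our own sketch on an assumed instance, not from the paper), one can construct a complementary pair for the power cone: a point on $\mathrm{bd}\,K_\alpha^{2,1}$ and a multiplier on $\mathrm{bd}\,(K_\alpha^{2,1})^*$ whose inner product vanishes, which is exactly the situation the theorem describes.

```python
import numpy as np

# Assumed instance: alpha = (1/2, 1/2), m = 2, r - m = 1.
alpha = np.array([0.5, 0.5])

# -g(x) = (x, z) on the boundary of the power cone: ||z|| = x1^0.5 * x2^0.5.
x, z = np.array([1.0, 4.0]), np.array([2.0])

# lambda_a = (lam, y) on the boundary of the dual cone: ||y|| = prod (lam_i/alpha_i)^alpha_i,
# chosen so that equality holds in Lemma 3.3 (lam_1*x_1/alpha_1 = lam_2*x_2/alpha_2).
lam, y = np.array([4.0, 1.0]), np.array([-4.0])

print(np.isclose(np.linalg.norm(z), np.prod(x ** alpha)))              # boundary of K
print(np.isclose(np.linalg.norm(y), np.prod((lam / alpha) ** alpha)))  # boundary of K*
print(np.isclose(lam @ x + y @ z, 0.0))                                # complementarity
```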


Theorem 3.6. For problem (1.1), let $K=K_e$ and $a\in S$. Suppose that the corresponding Lagrange multiplier $\lambda_a$ satisfies the conditions of Theorem 3.1, i.e.,
$$0\in\partial L_a(a,\lambda_a)+N_C(a),\quad \lambda_a\in K_e^* \quad\text{and}\quad \lambda_a^T g(a)=0.$$
If $\lambda_a\neq 0$, then for each $x\in S$ with $g(x)\neq 0$, we have
$$-g(x)\in \mathrm{bd}\,K_e,\quad \lambda_a\in \mathrm{bd}\,K_e^* \quad\text{and}\quad \lambda_a^T g(x)=0.$$

Proof. Using the same arguments as in the proof of Theorem 3.5 and applying Theorem 3.1, it is clear that $\lambda_a^T g(x)=0$ for all $x\in S$. Then it remains to show that $-g(x)\in\mathrm{bd}\,K_e$ and $\lambda_a\in\mathrm{bd}\,K_e^*$. Suppose that $0\neq -g(x) := (x_1,x_2,x_3)^T\in K_e$ and $0\neq \lambda_a = (y_1,y_2,y_3)^T\in K_e^*$. For convenience, we denote
$$A := \left\{(x_1,x_2,x_3)^T \ \Big|\ x_2 e^{\frac{x_1}{x_2}}\le x_3,\ x_2>0\right\},\qquad B := \left\{(x_1,0,x_3)^T \ \Big|\ x_1\le 0,\ x_3\ge 0\right\},$$
$$M := \left\{(y_1,y_2,y_3)^T \ \Big|\ -y_1 e^{\frac{y_2}{y_1}}\le e\,y_3,\ y_1<0\right\},\qquad N := \left\{(0,y_2,y_3)^T \ \Big|\ y_2\ge 0,\ y_3\ge 0\right\}.$$
Then, using the expressions of the exponential cone $K_e$ and its dual cone $K_e^*$ given in Section 2, we have $K_e = A\cup B$ and $K_e^* = M\cup N$. To proceed with the proof, we discuss four cases.

Case 1. When $-g(x)\in A$ and $\lambda_a\in M$, we have $x_2 e^{\frac{x_1}{x_2}}\le x_3$ with $x_2>0$, and $-y_1 e^{\frac{y_2}{y_1}}\le e\,y_3$ with $y_1<0$. This together with $\lambda_a^T g(x)=0$ for all $x\in S$ yields
$$\begin{aligned}
0 &= x_1y_1+x_2y_2+x_3y_3\\
&= -y_1x_2\left(\frac{x_1y_1}{-y_1x_2}+\frac{x_2y_2}{-y_1x_2}+\frac{x_3y_3}{-y_1x_2}\right)\\
&= -y_1x_2\left(\frac{x_1}{-x_2}+\frac{y_2}{-y_1}+\frac{x_3}{x_2}\cdot\frac{y_3}{-y_1}\right)\\
&\ge -y_1x_2\left(-\Big(\frac{x_1}{x_2}+\frac{y_2}{y_1}\Big)+e^{\frac{x_1}{x_2}}e^{\frac{y_2}{y_1}-1}\right)\\
&= -y_1x_2\left(-\Big(\frac{x_1}{x_2}+\frac{y_2}{y_1}\Big)+e^{\frac{x_1}{x_2}+\frac{y_2}{y_1}-1}\right)\\
&\ge 0,
\end{aligned}$$
where the last inequality is due to Lemma 3.4. It then follows that $\frac{x_3}{x_2}=e^{\frac{x_1}{x_2}}$ and $\frac{y_3}{-y_1}=e^{\frac{y_2}{y_1}-1}$, i.e., $x_2 e^{\frac{x_1}{x_2}}=x_3$ and $-y_1 e^{\frac{y_2}{y_1}}=e\,y_3$, which says $-g(x)\in\mathrm{bd}\,A$ and $\lambda_a\in\mathrm{bd}\,M$. Thus, $-g(x)\in\mathrm{bd}\,K_e$ and $\lambda_a\in\mathrm{bd}\,K_e^*$.

Case 2. When $-g(x)\in A$ and $\lambda_a\in N$, we have $x_2 e^{\frac{x_1}{x_2}}\le x_3$ with $x_2>0$, and $y_1=0$ with $y_2\ge 0$ and $y_3\ge 0$. Hence, it follows from $\lambda_a^T g(x)=0$ for all $x\in S$ that $0=x_2y_2+x_3y_3$. Because $x_2>0$, $y_2\ge 0$, $y_3\ge 0$ and $x_3>0$, we obtain that $y_2=y_3=0$, i.e., $\lambda_a=(y_1,y_2,y_3)^T=(0,0,0)^T$, which contradicts $\lambda_a\neq 0$. This says that this case does not occur.

Case 3. When $-g(x)\in B$ and $\lambda_a\in M$, we have $x_1\le 0$, $x_3\ge 0$, $x_2=0$, and $-y_1 e^{\frac{y_2}{y_1}}\le e\,y_3$ with $y_1<0$. Because $\lambda_a^T g(x)=0$ for all $x\in S$, this implies $0=x_1y_1+x_3y_3$. Then, it follows from $x_1\le 0$, $x_3\ge 0$, $y_1<0$ and $y_3>0$ that $x_1=x_3=0$, i.e., $-g(x)=0$. This contradicts $-g(x)\neq 0$. Hence, this case does not occur either.

Case 4. When $-g(x)\in B$ and $\lambda_a\in N$, in light of the expressions of the exponential cone $K_e$ and its dual cone $K_e^*$, it is clear that $-g(x)\in\mathrm{bd}\,K_e$ and $\lambda_a\in\mathrm{bd}\,K_e^*$.

From the above discussions in all cases, we prove that
$$-g(x)\in\mathrm{bd}\,K_e,\quad \lambda_a\in\mathrm{bd}\,K_e^* \quad\text{and}\quad \lambda_a^T g(x)=0.$$
Thus, the proof is complete. $\Box$
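Analogously, Theorem 3.6 can be checked on a small assumed instance (our own illustration, not from the paper): a point on $\mathrm{bd}\,K_e$ and a multiplier on $\mathrm{bd}\,K_e^*$ with vanishing inner product, matching Case 1 of the proof.

```python
import numpy as np

# Assumed complementary pair for the exponential cone (Case 1 of the proof):
# -g(x) = (x1, x2, x3) with x2 * exp(x1/x2) = x3, and
# lambda_a = (y1, y2, y3) with -y1 * exp(y2/y1) = e * y3.
x1, x2 = 1.0, 1.0
x3 = x2 * np.exp(x1 / x2)                  # = e, so (x1, x2, x3) lies on bd K_e
y1, y2 = -1.0, 0.0
y3 = -y1 * np.exp(y2 / y1) / np.e          # = 1/e, so (y1, y2, y3) lies on bd K_e^*

inner = x1 * y1 + x2 * y2 + x3 * y3
print(np.isclose(inner, 0.0))              # complementarity lambda_a^T g(x) = 0
print(np.isclose(x1 / x2 + y2 / y1, 1.0))  # the equality case t = 1 of Lemma 3.4
```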

Example 3.7. For $x=(x_1,x_2,x_3)^T\in\mathbb{R}^3$, consider the nonlinear convex programming problem:
$$\min\ f(x)=x_1^2+x_2^2+x_3^2 \quad\text{s.t.}\ -g(x)=(-x_1,-x_2,-x_3)^T\in K_e,$$
where $K_e$ is the exponential cone.

Let $F$ and $S$ be the feasible set and the solution set of this problem, respectively. It follows from $-g(x)=(-x_1,-x_2,-x_3)^T\in K_e$ that $-x_2 e^{\frac{x_1}{x_2}}\le -x_3$ with $-x_2>0$, or $-x_1\le 0$, $-x_3\ge 0$ and $x_2=0$, which yields the feasible set
$$F = \left\{(x_1,x_2,x_3)^T\in\mathbb{R}^3 \ \Big|\ x_2 e^{\frac{x_1}{x_2}}\ge x_3,\ x_2<0\right\}\cup\left\{(x_1,0,x_3)^T \ \Big|\ x_1\ge 0,\ x_3\le 0\right\}.$$
Note that for any $x=(x_1,x_2,x_3)^T\in\mathbb{R}^3$, we have
$$f(x)=x_1^2+x_2^2+x_3^2\ge 0.$$
Thus, it is not hard to verify that $\bar{x}=(0,0,0)^T\in\mathbb{R}^3$ is a solution to the considered problem, i.e., $\bar{x}\in S$. Moreover, for any $x=(x_1,x_2,x_3)^T\in\mathbb{R}^3$ with $x\neq\bar{x}$, we have
$$\partial f(x)=\{2(x_1,x_2,x_3)^T\}\neq\{0\}.$$
In light of this, for the solution $\bar{x}\in S$, it is easy to see that the corresponding Lagrange multiplier is $\lambda_{\bar{x}}=(0,0,0)^T\in K_e^*$ and $0\in\partial L_{\bar{x}}(\bar{x},\lambda_{\bar{x}})=\partial f(\bar{x})$. All of the above leads to
$$(0,0,0)^T\in\partial f(x) \iff x_1=0,\ x_2=0,\ x_3=0.$$
Therefore, we conclude that the solution set $S$ can be expressed as
$$S = \{(x_1,x_2,x_3)^T\in\mathbb{R}^3 \mid x_1=0,\ x_2=0,\ x_3=0\}.$$
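A numerical spot-check of Example 3.7 (our own illustration, not part of the paper): sample points from the feasible set $F$ and confirm that none attains a value below $f(0)=0$, which is consistent with $S=\{(0,0,0)^T\}$.

```python
import numpy as np

# Sample feasible points of Example 3.7 (the branch with x2 < 0) and compare with f(0) = 0.
rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))

best = np.inf
for _ in range(10000):
    x1 = rng.uniform(-5.0, 5.0)
    x2 = rng.uniform(-5.0, -1e-3)                        # branch with x2 < 0
    x3 = x2 * np.exp(x1 / x2) - rng.uniform(0.0, 5.0)    # ensures x2*e^{x1/x2} >= x3
    best = min(best, f(np.array([x1, x2, x3])))

print(best, f(np.zeros(3)))   # best sampled value stays >= 0 = f(0)
```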


Example 3.8. For $x=(x_1,x_2)^T\in\mathbb{R}^2$, consider the nonlinear convex programming problem:
$$\min\ f(x)=\sqrt{u^2(x_1)+v^2(x_2)}+v(x_2) \quad\text{s.t.}\ -g(x)=(-v(x_2),\,u(x_1))^T\in K_\alpha^{1,1},$$
where $u:\mathbb{R}\to\mathbb{R}$ and $v:\mathbb{R}\to\mathbb{R}$ are both differentiable and $\alpha=1$.

Let $F$ and $S$ be the feasible set and the solution set of this problem, respectively. Because $-g(x)=(-v(x_2),u(x_1))^T\in K_\alpha^{1,1}$, we have $0\le |u(x_1)|\le -v(x_2)$, which implies that the feasible set is
$$F = \{(x_1,x_2)^T\in\mathbb{R}^2 \mid v(x_2)\le -|u(x_1)|\le 0\}.$$
Note that for any $x=(x_1,x_2)^T\in\mathbb{R}^2$, we have
$$f(x)=\sqrt{u^2(x_1)+v^2(x_2)}+v(x_2)\ge |v(x_2)|+v(x_2)\ge 0.$$
Thus, it is easy to check that any $\bar{x}=(\bar{x}_1,\bar{x}_2)^T\in\mathbb{R}^2$ satisfying $u(\bar{x}_1)=0$ and $v(\bar{x}_2)=0$ is a solution of the considered problem, i.e., $\bar{x}\in S$. For any $x=(x_1,x_2)^T\in\mathbb{R}^2$ with $u(x_1)\neq 0$ or $v(x_2)\neq 0$, it can be computed that
$$\partial f(x) = \left(\frac{u(x_1)}{\sqrt{u^2(x_1)+v^2(x_2)}}u'(x_1),\ \frac{v(x_2)}{\sqrt{u^2(x_1)+v^2(x_2)}}v'(x_2)+v'(x_2)\right)^T
= \begin{pmatrix} u'(x_1) & 0 \\ 0 & v'(x_2)\end{pmatrix}\begin{pmatrix} \dfrac{u(x_1)}{\sqrt{u^2(x_1)+v^2(x_2)}} \\[2mm] \dfrac{v(x_2)}{\sqrt{u^2(x_1)+v^2(x_2)}}+1\end{pmatrix}.$$
Moreover, it can be verified that
$$\partial f(\bar{x}) = \begin{pmatrix} u'(\bar{x}_1) & 0 \\ 0 & v'(\bar{x}_2)\end{pmatrix}\left(\begin{pmatrix} 0 \\ 1\end{pmatrix}+\mathbb{B}\right),$$
where $\mathbb{B}$ denotes the closed unit ball in $\mathbb{R}^2$. Besides, for the solution $\bar{x}\in S$, it is easy to see that if $u'(\bar{x}_1)\neq 0$ and $v'(\bar{x}_2)\neq 0$, the corresponding Lagrange multiplier is $\lambda_{\bar{x}}=(0,0)^T\in (K_\alpha^{1,1})^*$, and $(0,0)^T\in\partial L_{\bar{x}}(\bar{x},\lambda_{\bar{x}})=\partial f(\bar{x})$. With this, it follows that if $u'(x_1)\neq 0$ and $v'(x_2)\neq 0$,
$$(0,0)^T\in\partial f(x) \iff u(x_1)=0,\ v(x_2)\le 0.$$
Therefore, we conclude that the solution set $S$ may be expressed as
$$S = \{(x_1,x_2)^T\in\mathbb{R}^2 \mid u(x_1)=0,\ v(x_2)\le 0\}.$$
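The following sketch (our own illustration, under the assumed choices $u(x_1)=x_1$ and $v(x_2)=x_2$, which are not specified in the example) numerically checks Example 3.8: feasible points never do better than the optimal value $0$, while every point of the predicted solution set $\{x_1=0,\ x_2\le 0\}$ attains it.

```python
import numpy as np

# Assumed instance of Example 3.8: u(x1) = x1, v(x2) = x2, so that
# f(x) = sqrt(x1^2 + x2^2) + x2, F = {x : x2 <= -|x1|}, and the predicted
# solution set is S = {x : x1 = 0, x2 <= 0}.

rng = np.random.default_rng(2)
f = lambda x1, x2: np.hypot(x1, x2) + x2

# Feasible samples never go below the optimal value 0 ...
vals = []
for _ in range(10000):
    x1 = rng.uniform(-3.0, 3.0)
    x2 = -abs(x1) - rng.uniform(0.0, 3.0)      # ensures x2 <= -|x1|
    vals.append(f(x1, x2))
print(min(vals) >= -1e-12)

# ... while every point of the predicted solution set attains f = 0.
for x2 in np.linspace(-3.0, 0.0, 7):
    assert abs(f(0.0, x2)) <= 1e-12
print("all points with x1 = 0, x2 <= 0 attain the optimal value 0")
```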

In fact, when the convex set $C$ reduces to the special convex set $C := \{x\in\mathbb{R}^n \mid Ax=b\}$, where $A$ is an $m\times n$ matrix, we see that Theorem 3.1 reduces to [20, Theorem 3.1]. This says that the problem considered in this paper includes the problem in [20] as a special case, which is presented in the following corollary.


Corollary 3.9 ([20, Theorem 3.1]). For problem (1.1), let $C := \{x\in\mathbb{R}^n \mid Ax=b\}$ and $a\in S$. Suppose that the corresponding Lagrange multiplier $\lambda_a\in\mathbb{R}^r$ satisfies the conditions:
$$0\in\partial L_a(a,\lambda_a)+\{A^Ty \mid y\in\mathbb{R}^m\},\quad \lambda_a\in K^* \quad\text{and}\quad \lambda_a^T g(a)=0.$$
Then, the following hold.

(a): If $\lambda_a=0$, then for each $x\in S$, there exists $y\in\mathbb{R}^m$ such that $-A^Ty\in\partial f(x)$.

(b): If $\lambda_a\neq 0$, then for each $x\in S$ with $g(x)\neq 0$, we have
$$-g(x)\in \mathrm{bd}\,K,\quad \lambda_a\in \mathrm{bd}\,K^* \quad\text{and}\quad \lambda_a^T g(x)=0.$$

References

[1] F. Alizadeh and D. Goldfarb, Second-order cone programming, Mathematical Programming, 95 (2003), 3–52.

[2] E. D. Andersen, C. Roos and T. Terlaky, Notes on duality in second order and p-order cone optimization, Optimization, 51(4) (2002), 627–643.

[3] D.P. Bertsekas, A. Nedić, and A.E. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, (2003).

[4] J.V. Burke and M.C. Ferris, Characterization of solution sets of convex programs, Opera- tions Research Letters, 10, 57–60 (1991).

[5] J.-S. Chen, Conditions for error bounds and bounded level sets of some merit functions for the second-order cone complementarity problem, Journal of Optimization Theory and Applications, 135, 459–473 (2007).

[6] R. Chares, Cones and interior-point algorithms for structured convex optimization involving powers and exponentials, Ph.D. thesis, Université catholique de Louvain, http://hdl.handle.net/2078.1/28538, (2009).

[7] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley-Interscience, New York, NY, 1983.

[8] J.-S. Chen, X. Chen, and P. Tseng, Analysis of nonsmooth vector-valued functions associ- ated with second-order cone, Mathematical Programming, 101, 95–117 (2004).

[9] J.-S. Chen and P. Tseng, An unconstrained smooth minimization reformulation of second- order cone complementarity problem, Mathematical Programming, 104, 293–327 (2005).

[10] S. Deng, Characterizations of the nonemptiness and compactness of solution sets in convex vector optimization, Journal of Optimization Theory and Applications, 96, 123–131 (1998).

[11] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complemen- tarity Problems, Vol. I, New York, Springer, (2003).

[12] F. Glineur and T. Terlaky, Conic formulation for lp-norm optimization, Journal of Opti- mization Theory and Applications, 122(2), 285–307 (2004).

[13] L.T.K. Hien, Differential properties of Euclidean projection onto power cone, Mathematical Methods of Operations Research, 82(3), 265–284 (2015).

[14] H. Hoffmann, Weighted AM-GM Inequality via Elementary Multivariable Calculus, The Col- lege Mathematics Journal, 47(1), 56–58 (2016).

[15] V. Jeyakumar, G.M. Lee, and N. Dinh, Lagrange multiplier conditions characterizing the optimal solution sets of cone-constrained convex programs, Journal of Optimization Theory and Applications, 123(1), 83–103 (2004).

[16] V. Jeyakumar and H. Wolkowicz, Generalizations of Slater’s constraint qualification for infinite convex programs, Mathematical Programming, 57, 85–101 (1992).

[17] V. Jeyakumar, X.-Q. Yang, Characterizing the solution sets of pseudolinear programs, Jour- nal of Optimization Theory and Applications, 87, 747–755 (1995).


[18] M.S. Lobo, L. Vandenberghe, S. Boyd, H. Lebret, Applications of second-order cone programming, Linear Algebra and its Applications, 284, 193–228 (1998).

[19] O.L. Mangasarian, A simple characterization of solution sets of convex programs, Operations Research Letters, 7(1), 21–26 (1988).

[20] X.-H. Miao and J.-S. Chen, Characterization of solution sets of cone-constrained convex programming problems, Optimization Letters, 9(7), 1433–1445 (2015).

[21] Y. Peres, G. Pete, and S. Somersille, Biased tug-of-war, the biased infinity Laplacian, and comparison with exponential cones, Calculus of Variations, 38, 541–564 (2010).

[22] S.A. Serrano, Algorithms for unsymmetric cone optimization and an implementation for problems with the exponential cone, Ph.D. thesis, Stanford University, (2015).

[23] Z.-L. Wu and S.-Y. Wu, Characterizations of the solution sets of convex programs and variational inequality problems, Journal of Optimization Theory and Applications, 130(2), 339–358 (2006).

[24] G. Xue and Y. Ye, An efficient algorithm for minimizing a sum of p-norms, SIAM Journal on Optimization, 10(2), 315–330 (1999).

[25] J.-C. Zhou and J.-S. Chen, Properties of circular cone and spectral factorization associated with circular cone, Journal of Nonlinear and Convex Analysis, 14(4), 807–816 (2013).

(M.-Y. Li) Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

E-mail address: leemy801026@gmail.com

(C.-Y. Yang) Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

E-mail address: yangcy@math.ntnu.edu.tw

(X.-H. Miao) Department of Mathematics, Tianjin University, Tianjin 300072, China

E-mail address: xinhemiao@tju.edu.cn

(J.-S. Chen) Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

E-mail address: jschen@math.ntnu.edu.tw
