
On the existence of saddle points for nonlinear second-order cone programming problems

Jinchuan Zhou · Jein-Shan Chen

Received: 8 October 2013 / Accepted: 27 October 2014 / Published online: 6 November 2014

© Springer Science+Business Media New York 2014

Abstract In this paper, we study the existence of local and global saddle points for nonlinear second-order cone programming problems. The existence of local saddle points is developed by using the second-order sufficient conditions, in which a sigma-term is added to reflect the curvature of the second-order cone. Furthermore, by dealing with the perturbation of the primal problem, we establish the existence of global saddle points, which is applicable to the case of multiple optimal solutions. The close relationship between global saddle points and exact penalty representations is discussed as well.

Keywords Local and global saddle points · Second-order sufficient conditions · Augmented Lagrangian · Exact penalty representations

Mathematics Subject Classification 90C26 · 90C46

1 Introduction

Recall that the second-order cone (SOC), also called the Lorentz cone or ice-cream cone, in IR^{m+1} is defined as

K^{m+1} = { (x_1, x_2) ∈ IR × IR^m | ‖x_2‖ ≤ x_1 },

The first author's work is supported by the National Natural Science Foundation of China (11101248, 11171247, 11271233), the Shandong Province Natural Science Foundation (ZR2010AQ026, ZR2012AM016), and the Young Teacher Support Program of Shandong University of Technology. Jein-Shan Chen's work is supported by the Ministry of Science and Technology, Taiwan.

J. Zhou

Department of Mathematics, School of Science, Shandong University of Technology, Zibo 255049, People's Republic of China

e-mail: jinchuanzhou@163.com

J.-S. Chen (corresponding author)

Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
e-mail: jschen@math.ntnu.edu.tw


where ‖·‖ denotes the Euclidean norm. The order relation induced by this pointed closed convex cone K^{m+1} is given by

x ⪰_{K^{m+1}} 0 ⇐⇒ x ∈ IR^{m+1}, x_1 ≥ ‖x_2‖.
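As a quick computational companion to these definitions (our own illustration, not part of the paper), the sketch below tests membership in K^{m+1} and computes the Euclidean projection onto the cone via the standard spectral formula for the SOC; the helper names in_soc and proj_soc are hypothetical.

```python
import numpy as np

def in_soc(x, tol=1e-12):
    """Membership in the second-order cone K^{m+1}: ||x_2|| <= x_1."""
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x[1:]) <= x[0] + tol

def proj_soc(x):
    """Euclidean projection onto K^{m+1} via the standard spectral formula
    (an auxiliary tool used only for illustration)."""
    x = np.asarray(x, dtype=float)
    x1, x2 = x[0], x[1:]
    n2 = np.linalg.norm(x2)
    if n2 <= x1:            # already in the cone
        return x.copy()
    if n2 <= -x1:           # in the polar cone: projection is the origin
        return np.zeros_like(x)
    a = (x1 + n2) / 2.0     # otherwise project onto the boundary ray
    return np.concatenate(([a], a * x2 / n2))

x = np.array([2.0, 1.0, -1.0])   # ||(1,-1)|| = sqrt(2) <= 2, so x is in K^3
y = np.array([0.5, 1.0,  1.0])   # ||(1,1)||  = sqrt(2) >  0.5, so y is not
print(in_soc(x), in_soc(y))      # True False
print(proj_soc(y))               # lies on the boundary of K^3
```

The same projection routine is reused in the later illustrative sketches, since the augmented Lagrangian below is built from distances to the cone.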

In this paper, we consider the following nonlinear second-order cone programming (NSOCP)

min  f(x)
s.t. g_j(x) ⪰_{K^{m_j+1}} 0, j = 1, 2, . . . , J,
     h(x) = 0,                                   (1)

where f : IR^n → IR, h : IR^n → IR^l, g_j : IR^n → IR^{m_j+1} are twice continuously differentiable functions, and K^{m_j+1} is the second-order cone in IR^{m_j+1} for j = 1, 2, . . . , J.

For a given nonlinear programming problem, we can define another programming problem associated with it by using traditional Lagrangian functions. The original problem is called the primal problem, and the latter one is called the dual problem. Since the weak duality property always holds, our concern is how to obtain the strong duality property (or zero duality gap property). In other words, we want to know when the primal and dual problems have the same optimal values, which provides the theoretical foundation for many primal-dual type methods. However, if we employ the traditional Lagrangian functions, then some convexity is necessary for achieving the strong duality property. To overcome this drawback, we need to resort to augmented Lagrangian functions, whose main advantage is ensuring the strong duality property without requiring convexity. In addition, the zero duality gap property coincides with the existence of global saddle points, provided that the optimal solution sets of the primal and dual problems are nonempty, respectively. Many researchers have studied the properties of augmented Lagrangians and the existence of saddle points. For example, Rockafellar and Wets [13] proposed a class of augmented Lagrangians in which the augmenting function is required to be convex. This was extended by Huang and Yang [6], where the convexity condition is replaced by level-boundedness, and it was further generalized by Zhou and Yang [21], where the level-boundedness condition is replaced by the so-called valley-at-zero property; see also [14] for more details. These important works give a unified framework for the augmented Lagrangian function and its duality theory. Meanwhile, Floudas and Jongen [5] pointed out the crucial role of saddle points for the minimization of smooth functions with a finite number of stationary points. The necessary and/or sufficient conditions to ensure the existence of local and/or global saddle points have been investigated by many researchers. For example, the existence of local and global saddle points of Rockafellar's augmented Lagrangian function was studied in [12]. Local saddle points of the generalized Mangasarian augmented Lagrangian were analyzed in [19]. The existence of local and global saddle points of the pth power nonlinear Lagrangian was discussed in [7,8,18]. For more references, please see [9,10,14,16,17,20,22].

All the results mentioned above focus on either standard nonlinear programming or the generalized minimizing problems [13]. The main purpose of this paper is to establish the existence of local and global saddle points for NSOCP (1) by sufficiently exploiting the special structure of the SOC. As is known from nonlinear programming, the positive definiteness of ∇²_{xx}L over the critical cone is a sufficient condition for the existence of local saddle points. However, this classical result cannot be extended trivially to NSOCP (1), and the analysis is more complicated because IR^n_+ is polyhedral whereas K^{m+1} is non-polyhedral. Hence, we particularly study the sigma-term [4], which to some extent stands for the curvature of the second-order cone. Our result shows that a local saddle point exists provided that the sum of ∇²_{xx}L and H is positive definite, even if ∇²_{xx}L itself is indefinite (see Theorem 2.3). This undoubtedly clarifies the essential role played by the sigma-term. Moreover, by developing the perturbation of the primal problem, we establish the existence of global saddle points without restricting the optimal solution to be unique, as required in [12,16]. Furthermore, we study another important concept, exact penalty representation, and develop new necessary and sufficient conditions for it. The close relationship between global saddle points and exact penalty representations is established as well.

To end this section, we introduce some basic concepts which will be needed for our subsequent analysis. Let IR^n be the n-dimensional real vector space. For x, y ∈ IR^n, the inner product is denoted by x^T y or ⟨x, y⟩. Given a convex subset A ⊆ IR^n and a point x ∈ A, the normal cone of A at x, denoted by N_A(x), is defined as

N_A(x) := { v ∈ IR^n | ⟨v, z − x⟩ ≤ 0, ∀z ∈ A },

and the tangent cone, denoted by T_A(x), is defined as

T_A(x) := N_A(x)^∘,

where N_A(x)^∘ means the polar cone of N_A(x). Given d ∈ T_A(x), the outer second order tangent set is defined as

T_A^2(x, d) = { w ∈ IR^n | ∃ t_n ↓ 0 such that dist( x + t_n d + (1/2) t_n^2 w, A ) = o(t_n^2) }.

The support function of A is

σ(x | A) := sup{ ⟨x, z⟩ | z ∈ A }.

We also write cl(A), int(A), and ∂(A) to stand for the closure, interior, and boundary of A, respectively. For simplicity of notation, let us write K_j to stand for K^{m_j+1} and let K be the Cartesian product of these second-order cones, i.e., K := K_1 × K_2 × · · · × K_J. In addition, we denote g(x) := (g_1(x), g_2(x), . . . , g_J(x)), p := Σ_{j=1}^J (m_j + 1), and S means the solution set of NSOCP (1). According to [13, Exercise 11.57], the augmented Lagrangian function for NSOCP (1) is written as

L_c(x, λ, μ) := f(x) + ⟨μ, h(x)⟩ + (c/2)‖h(x)‖^2 + (c/2) Σ_{j=1}^J [ dist^2( g_j(x) − λ_j/c, K_j ) − ‖λ_j/c‖^2 ].   (2)

Here c ∈ IR_{++} := {ζ ∈ IR | ζ > 0} and (x, λ, μ) ∈ IR^n × IR^p × IR^l with λ = (λ_1, λ_2, . . . , λ_J) ∈ IR^{m_1+1} × IR^{m_2+1} × · · · × IR^{m_J+1}.
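The following sketch (ours, added only for illustration) evaluates formula (2) on a toy instance with n = 2, one SOC constraint in IR^2, and one equality constraint; the data f, g, h and all names are hypothetical choices.

```python
import numpy as np

def proj_soc(x):
    # Euclidean projection onto the second-order cone (standard spectral formula).
    x = np.asarray(x, dtype=float)
    x1, x2 = x[0], x[1:]
    n2 = np.linalg.norm(x2)
    if n2 <= x1:
        return x.copy()
    if n2 <= -x1:
        return np.zeros_like(x)
    a = (x1 + n2) / 2.0
    return np.concatenate(([a], a * x2 / n2))

def dist_soc(x):
    return np.linalg.norm(np.asarray(x, dtype=float) - proj_soc(x))

# Toy data (purely illustrative):
f = lambda x: x[0] ** 2 + x[1] ** 2            # objective
g = lambda x: np.array([x[0] + 1.0, x[1]])     # one SOC constraint g(x) in K^2
h = lambda x: np.array([x[0] + x[1] - 1.0])    # one equality constraint

def aug_lagrangian(x, lam, mu, c):
    """Formula (2): f + <mu,h> + (c/2)||h||^2 + (c/2)[dist^2(g - lam/c, K) - ||lam/c||^2]."""
    hx = h(x)
    val = f(x) + mu @ hx + 0.5 * c * (hx @ hx)
    val += 0.5 * c * (dist_soc(g(x) - lam / c) ** 2 - np.linalg.norm(lam / c) ** 2)
    return val

x = np.array([0.4, 0.6])
lam = np.array([0.3, -0.1])
mu = np.array([0.2])
for c in (1.0, 10.0, 100.0):
    print(c, aug_lagrangian(x, lam, mu, c))
```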

Definition 1.1 Let L_c be given as in (2) and (x*, λ*, μ*) ∈ IR^n × IR^p × IR^l.

(a) The triple (x*, λ*, μ*) is said to be a local saddle point of L_c for some c > 0 if there exists δ > 0 such that

L_c(x*, λ, μ) ≤ L_c(x*, λ*, μ*) ≤ L_c(x, λ*, μ*), ∀x ∈ B(x*, δ), (λ, μ) ∈ IR^p × IR^l,   (3)

where B(x*, δ) denotes the δ-neighborhood of x*, i.e., B(x*, δ) := { x ∈ IR^n | ‖x − x*‖ ≤ δ }.

(b) The triple (x*, λ*, μ*) is said to be a global saddle point of L_c for some c > 0 if

L_c(x*, λ, μ) ≤ L_c(x*, λ*, μ*) ≤ L_c(x, λ*, μ*), ∀x ∈ IR^n, (λ, μ) ∈ IR^p × IR^l.   (4)

2 On local saddle points

In this section, we focus on the necessary and sufficient conditions for the existence of local saddle points. For simplicity, we let Q stand for a second-order cone without emphasizing its dimension, while using the notation Q ⊂ IR^{m+1} to indicate that Q is regarded as a second-order cone in IR^{m+1}. In other words, a result holding for Q is also applicable to K_i for i = 1, . . . , J in the subsequent analysis. According to [13, Example 6.16], we know that for a ∈ Q,

−b ∈ N_Q(a) ⇐⇒ Π_Q(a − b) = a
            ⇐⇒ dist(a − b, Q) = ‖b‖
            ⇐⇒ a ∈ Q, b ∈ Q, a^T b = 0,   (5)

where the last equivalence comes from the fact that Q is a self-dual cone, i.e., Q^∘ = −Q.
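The equivalences in (5) are easy to check numerically. In the sketch below (our own illustration), we pick a ∈ ∂Q and b ∈ Q with a^T b = 0 and confirm that Π_Q(a − b) = a and dist(a − b, Q) = ‖b‖; proj_soc is the standard SOC projection, used only as an auxiliary tool.

```python
import numpy as np

def proj_soc(x):
    # Standard Euclidean projection onto the second-order cone Q.
    x = np.asarray(x, dtype=float)
    x1, x2 = x[0], x[1:]
    n2 = np.linalg.norm(x2)
    if n2 <= x1:
        return x.copy()
    if n2 <= -x1:
        return np.zeros_like(x)
    a = (x1 + n2) / 2.0
    return np.concatenate(([a], a * x2 / n2))

# a in Q, b in Q with a^T b = 0  (the last condition in (5))
a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0, -1.0, 0.0])
assert np.linalg.norm(a[1:]) <= a[0] and np.linalg.norm(b[1:]) <= b[0] and abs(a @ b) < 1e-12

p = proj_soc(a - b)
print(np.allclose(p, a))                                         # Pi_Q(a - b) = a
print(np.isclose(np.linalg.norm(a - b - p), np.linalg.norm(b)))  # dist(a - b, Q) = ||b||
```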

Lemma 2.1 Let L_c be given as in (2). Then, the augmented Lagrangian function L_c(x, λ, μ) is nondecreasing with respect to c > 0.

Proof See [13, Exercise 11.56]. ∎
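A numerical sanity check of Lemma 2.1 (our own illustration): since f(x) + ⟨μ, h(x)⟩ does not depend on c and (c/2)‖h(x)‖^2 is clearly nondecreasing in c, it suffices to examine the cone term of (2) for fixed (g(x), λ) as c grows.

```python
import numpy as np

def proj_soc(x):
    x = np.asarray(x, dtype=float); x1, x2 = x[0], x[1:]
    n2 = np.linalg.norm(x2)
    if n2 <= x1: return x.copy()
    if n2 <= -x1: return np.zeros_like(x)
    a = (x1 + n2) / 2.0
    return np.concatenate(([a], a * x2 / n2))

def soc_term(g, lam, c):
    # (c/2) * [ dist^2(g - lam/c, K) - ||lam/c||^2 ], the c-dependent cone part of (2)
    v = g - lam / c
    return 0.5 * c * (np.linalg.norm(v - proj_soc(v)) ** 2 - np.linalg.norm(lam / c) ** 2)

rng = np.random.default_rng(0)
g, lam = rng.normal(size=3), rng.normal(size=3)
vals = [soc_term(g, lam, c) for c in (0.1, 1.0, 10.0, 100.0)]
print(vals)
print(all(vals[i] <= vals[i + 1] + 1e-10 for i in range(len(vals) - 1)))  # nondecreasing in c
```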

We now discuss the necessary conditions for local saddle points.

Theorem 2.1 Suppose (x*, λ*, μ*) is a local saddle point of L_c. Then,

(a) −λ* ∈ N_K(g(x*));
(b) L_c(x*, λ*, μ*) = f(x*) for all c > 0;
(c) x* is a local optimal solution to NSOCP (1).

Proof We first show that x* is a feasible point of NSOCP (1), for which we need to verify two things: (i) h(x*) = 0, (ii) g_j(x*) ⪰_{K_j} 0 for all j = 1, 2, . . . , J.

(i) Suppose h(x*) ≠ 0. Taking μ = γ h(x*) with γ → ∞ and applying the first inequality in (3) yields L_c(x*, λ*, μ*) = ∞, which is a contradiction. Thus, h(x*) = 0.

(ii) Suppose g_j(x*) ∉ K_j for some j = 1, . . . , J. Then, there exists λ̃_j ∈ K_j such that η := ⟨λ̃_j, g_j(x*)⟩ < 0. Therefore, for β ∈ IR,

dist^2( g_j(x*) − βλ̃_j/c, K_j ) − ‖βλ̃_j/c‖^2
  = ‖ g_j(x*) − βλ̃_j/c − Π_{K_j}( g_j(x*) − βλ̃_j/c ) ‖^2 − ‖βλ̃_j/c‖^2
  = ‖ g_j(x*) − Π_{K_j}( g_j(x*) − βλ̃_j/c ) ‖^2 − 2 ⟨ βλ̃_j/c, g_j(x*) − Π_{K_j}( g_j(x*) − βλ̃_j/c ) ⟩
  ≥ dist^2( g_j(x*), K_j ) − 2β ⟨ λ̃_j/c, g_j(x*) ⟩
  = dist^2( g_j(x*), K_j ) − 2βη/c.   (6)

Here the inequality comes from the facts that

‖ g_j(x*) − Π_{K_j}( g_j(x*) − βλ̃_j/c ) ‖ ≥ ‖ g_j(x*) − Π_{K_j}(g_j(x*)) ‖ = dist( g_j(x*), K_j )

and

⟨ λ̃_j, Π_{K_j}( g_j(x*) − βλ̃_j/c ) ⟩ ≥ 0,

because λ̃_j ∈ K_j and Π_{K_j}( g_j(x*) − βλ̃_j/c ) ∈ K_j. Taking β → ∞, it follows from (3) and (6) that L_c(x*, λ, μ) is unbounded above, which is a contradiction.

Plugging λ = 0 into the first inequality of (3) (i.e., L_c(x*, 0, μ*) ≤ L_c(x*, λ*, μ*)), we obtain

Σ_{j=1}^J [ dist^2( g_j(x*) − λ*_j/c, K_j ) − ‖λ*_j/c‖^2 ] ≥ 0,   (7)

where we have used the feasibility of x* as shown above.

On the other hand, we have

dist( g_j(x*) − λ*_j/c, K_j ) ≤ ‖ g_j(x*) − λ*_j/c − g_j(x*) ‖ = ‖λ*_j/c‖,

where the inequality is due to the fact that g_j(x*) ∈ K_j as shown above. This together with (7) ensures that

dist( g_j(x*) − λ*_j/c, K_j ) = ‖λ*_j/c‖.   (8)

Combining (5) and (8) yields −λ*_j ∈ N_{K_j}(g_j(x*)) for all j = 1, . . . , J, i.e., −λ* ∈ N_K(g(x*)) by [13, Proposition 6.41]. This establishes part (a). Furthermore, it implies

dist( g_j(x*) − λ*_j/c, K_j ) = ‖λ*_j/c‖, ∀c > 0,   (9)

because −λ*_j/c ∈ N_{K_j}(g_j(x*)) for all c > 0 (since N_{K_j}(g_j(x*)) is a cone). Hence L_c(x*, λ*, μ*) = f(x*) for all c > 0. This establishes part (b).

Now, we turn our attention to part (c). Suppose x ∈ B(x*, δ) is any feasible point of NSOCP (1). Then, from (3), we know

f(x) ≥ L_c(x, λ*, μ*) ≥ L_c(x*, λ*, μ*) = f(x*),

where the first inequality comes from the fact that x is feasible. This means x* is a local optimal solution to NSOCP (1). The proof is complete. ∎


For NSOCP (1), we say that Robinson's constraint qualification holds at x* if ∇h_i(x*), i = 1, . . . , l, are linearly independent and there exists d ∈ IR^n such that

∇h(x*)d = 0 and g(x*) + ∇g(x*)d ∈ int(K).

It is known that if x* is a local solution to NSOCP (1) and Robinson's constraint qualification holds at x*, then there exists (λ*, μ*) ∈ IR^p × IR^l such that the following Karush-Kuhn-Tucker (KKT) conditions hold:

∇_x L(x*, λ*, μ*) = 0, h(x*) = 0, −λ* ∈ N_K(g(x*)),   (10)

or equivalently,

∇_x L(x*, λ*, μ*) = 0, h(x*) = 0, λ* ∈ K, g(x*) ∈ K, (λ*)^T g(x*) = 0,

where L(x, λ, μ) is the standard Lagrangian function of NSOCP (1), i.e.,

L(x, λ, μ) := f(x) + ⟨μ, h(x)⟩ − ⟨λ, g(x)⟩.   (11)

For convenience of subsequent analysis, we denote by Λ(x*) the set of all Lagrangian multipliers (λ*, μ*) satisfying (10).
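To make the KKT system (10) concrete, here is a check on a small hypothetical instance (our own data, not from the paper): minimize f(x) = x_1 subject to g(x) = x ∈ K^2 and h(x) = x_2 − 1 = 0, whose KKT triple can be computed by hand as x* = (1, 1), λ* = (1, −1), μ* = −1.

```python
import numpy as np

# Toy NSOCP (illustrative):  min x1  s.t.  x in K^2 (|x2| <= x1),  x2 - 1 = 0.
x_star   = np.array([1.0, 1.0])
lam_star = np.array([1.0, -1.0])
mu_star  = -1.0

grad_f = np.array([1.0, 0.0])
Jg = np.eye(2)                   # Jacobian of g(x) = x
Jh = np.array([[0.0, 1.0]])      # Jacobian of h(x) = x2 - 1

# KKT conditions (10) in the equivalent complementarity form:
grad_L = grad_f + Jh.T @ np.array([mu_star]) - Jg.T @ lam_star
in_K = lambda v: np.linalg.norm(v[1:]) <= v[0] + 1e-12

print(np.allclose(grad_L, 0))            # grad_x L(x*, lambda*, mu*) = 0
print(np.isclose(x_star[1] - 1.0, 0))    # h(x*) = 0
print(in_K(lam_star), in_K(x_star))      # lambda* in K and g(x*) in K
print(np.isclose(lam_star @ x_star, 0))  # (lambda*)^T g(x*) = 0
```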

It is well known that second order sufficient conditions are utilized to ensure the existence of local saddle points. In nonlinear programming, this requires the positive definiteness of ∇²_{xx}L over the critical cone. However, due to the non-polyhedrality of the second-order cone, an additional widely known sigma-term (or σ-term), which stands for the curvature of the second-order cone, is required. In particular, it was noted in [4, page 177] that the σ-term vanishes when the cone is polyhedral. Due to the important role played by the σ-term in the analysis of the second-order cone, before developing the sufficient conditions for the existence of local saddle points, we shall study some basic properties of the σ-term which will be used in the subsequent analysis. First, based on the arguments given in [1, Theorem 29], we obtain the following result.

Theorem 2.2 Let x ∈ Q and d ∈ T_Q(x). Then, the support function of the outer second order tangent set T_Q^2(x, d) is

σ( y | T_Q^2(x, d) ) =
    −(y_1/x_1) d^T diag(1, −I_m) d,   for y ∈ N_Q(x) ∩ {d}^⊥, x ∈ ∂Q\{0},
    0,                                for y ∈ N_Q(x) ∩ {d}^⊥, x ∉ ∂Q\{0},
    +∞,                               for y ∉ N_Q(x) ∩ {d}^⊥.

Proof We know from [4, Proposition 3.34] that

T_Q^2(x, d) + T_{T_Q(x)}(d) ⊂ T_Q^2(x, d) ⊂ T_{T_Q(x)}(d).

This implies

σ( y | T_Q^2(x, d) ) + σ( y | T_{T_Q(x)}(d) ) = σ( y | T_Q^2(x, d) + T_{T_Q(x)}(d) ) ≤ σ( y | T_Q^2(x, d) ) ≤ σ( y | T_{T_Q(x)}(d) ).   (12)


Note that

σ( y | T_{T_Q(x)}(d) ) < +∞ ⇐⇒ σ( y | T_{T_Q(x)}(d) ) = 0   (13)
                            ⇐⇒ y ∈ N_{T_Q(x)}(d)   (14)
                            ⇐⇒ y ∈ T_Q(x)^∘ = N_Q(x), y^T d = 0,   (15)

where the first and third equivalences come from the fact that T_{T_Q(x)}(d) and T_Q(x) are cones, respectively. Thus, we only need to establish the exact formula of σ( y | T_Q^2(x, d) ), provided that (15) holds. In addition, it also follows from (12) that σ( y | T_Q^2(x, d) ) = +∞ whenever y ∉ N_Q(x) ∩ {d}^⊥, since T_Q^2(x, d) is nonempty for x ∈ Q and d ∈ T_Q(x) by [1, Lemma 27].

In fact, under condition (15), it follows from (12) and (13) that

σ( y | T_Q^2(x, d) ) ≤ σ( y | T_{T_Q(x)}(d) ) = 0.   (16)

Furthermore, in light of condition (15), we discuss the following four cases.

(i) If x = 0, then 0 ∈ T_Q^2(x, d) = T_Q(d), where the equality is due to [1, Lemma 27]. Thus,

σ( y | T_Q^2(x, d) ) = σ( y | T_Q(d) ) ≥ 0.

This together with (16) implies σ( y | T_Q^2(x, d) ) = 0.

(ii) If x ∈ int(Q), then it follows from (15) that y = 0. Hence, σ( y | T_Q^2(x, d) ) = 0.

(iii) If x ∈ ∂(Q)\{0} and d ∈ int(T_Q(x)), then it follows from (14) that y = 0 since d ∈ int(T_Q(x)). Hence σ( y | T_Q^2(x, d) ) = 0 = −(y_1/x_1)(d_1^2 − ‖d_2‖^2).

(iv) If x ∈ ∂(Q)\{0} and d ∈ ∂(T_Q(x)), then the desired result can be obtained by following the arguments given in [1, p. 222]. We provide the proof for the sake of completeness. Note that σ( y | T_Q^2(x, d) ) is obtained by maximizing y_1 w_1 + y_2^T w_2 over all w = (w_1, w_2) satisfying −w_1 x_1 + w_2^T x_2 ≤ d_1^2 − ‖d_2‖^2 (see [1, Lemma 27]). From y ∈ N_Q(x), i.e., −y ∈ Q, x ∈ Q, and x^T y = 0, we know −y_1 = α x_1 and −y_2 = −α x_2 with α = −y_1/x_1 ≥ 0; see [1, page 208]. Thus,

⟨y, w⟩ = y_1 w_1 + y_2^T w_2 = α ( w_2^T x_2 − w_1 x_1 ) ≤ α ( d_1^2 − ‖d_2‖^2 ) = −(y_1/x_1) ( d_1^2 − ‖d_2‖^2 ).

The maximum is attained at (w_1, w_2) = ( −d_1^2/x_1, −(‖d_2‖^2/‖x_2‖^2) x_2 ). This establishes the desired expression. ∎
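The closed-form expression of Theorem 2.2 in case (iv) can be checked numerically: sampled points of T_Q^2(x, d) should never give ⟨y, w⟩ above −(y_1/x_1)(d_1^2 − ‖d_2‖^2), and the maximizer displayed at the end of the proof should attain this value. The data below are our own illustrative choices.

```python
import numpy as np

# Q is the SOC in IR^3.  Illustrative data:
x = np.array([1.0, 1.0, 0.0])           # x in bd(Q)\{0}
d = np.array([1.0, 1.0, 3.0])           # d in bd(T_Q(x)):  x1*d1 = x2^T d2
y = 2.0 * np.array([-1.0, 1.0, 0.0])    # y in N_Q(x) with y^T d = 0

rhs = d[0] ** 2 - np.linalg.norm(d[1:]) ** 2
sigma_formula = -(y[0] / x[0]) * rhs    # value predicted by Theorem 2.2

# For this x, T^2_Q(x, d) = { w : w_2^T x_2 - w_1 x_1 <= d_1^2 - ||d_2||^2 }.
rng = np.random.default_rng(1)
best = -np.inf
for _ in range(20000):
    w = rng.normal(scale=10.0, size=3)
    if w[1:] @ x[1:] - w[0] * x[0] <= rhs:      # w in T^2_Q(x, d)
        best = max(best, y @ w)

# Maximizer given at the end of the proof (case (iv)).
w_star = np.concatenate(([-d[0] ** 2 / x[0]],
                         -(np.linalg.norm(d[1:]) ** 2 / np.linalg.norm(x[1:]) ** 2) * x[1:]))
print(sigma_formula)                     # -18.0 for this data
print(best <= sigma_formula + 1e-9)      # sampled values never exceed the formula
print(np.isclose(y @ w_star, sigma_formula))
```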

Remark 2.1 Let A be a convex subset in IR^{m+1}. In the proof of Theorem 2.2, we use the inclusion T_A^2(x, d) ⊂ T_{T_A(x)}(d). It is known from [4, page 168] that these two sets are the same if A is polyhedral. But, for the non-polyhedral cone Q, the following example shows that this inclusion may be strict.

Example 2.1 For Q ⊂ IR^3, let x̄ = (1, 1, 0) and d̄ = (1, 1, 1). Then,

T_Q(x̄) = { d = (d_1, d_2, d_3) ∈ IR^3 | (d_2, d_3)^T (x̄_2, x̄_3) − d_1 x̄_1 ≤ 0 } = { d = (d_1, d_2, d_3) | d_2 − d_1 ≤ 0 },

which implies d̄ ∈ ∂T_Q(x̄). Hence,

T_Q^2(x̄, d̄) = { w = (w_1, w_2, w_3) | (w_2, w_3)^T (x̄_2, x̄_3) − w_1 x̄_1 ≤ d̄_1^2 − ‖(d̄_2, d̄_3)‖^2 } = { w = (w_1, w_2, w_3) | w_2 − w_1 ≤ −1 }.

On the other hand, since T_{T_Q(x̄)}(d̄) = cl(R_{T_Q(x̄)}(d̄)), where R_{T_Q(x̄)}(d̄) denotes the radial (or feasible) cone of T_Q(x̄) at d̄, for each w ∈ T_{T_Q(x̄)}(d̄) there exists a sequence w' ∈ R_{T_Q(x̄)}(d̄) with w' → w such that d̄ + t w' ∈ T_Q(x̄) for some t > 0, i.e.,

( (d̄_2, d̄_3) + t (w'_2, w'_3) )^T (x̄_2, x̄_3) − ( d̄_1 + t w'_1 ) x̄_1 ≤ 0,

which ensures that (w'_2, w'_3)^T (x̄_2, x̄_3) − w'_1 x̄_1 ≤ 0. Now, taking the limit yields w_2 − w_1 ≤ 0. Thus, we obtain

T_{T_Q(x̄)}(d̄) = { w = (w_1, w_2, w_3) | w_2 − w_1 ≤ 0 },

which says T_Q^2(x̄, d̄) ⊊ T_{T_Q(x̄)}(d̄). In fact, 0 ∈ T_{T_Q(x̄)}(d̄), but 0 ∉ T_Q^2(x̄, d̄).
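The strict inclusion in Example 2.1 can also be observed directly from the definition of the outer second order tangent set: for w ∈ T_Q^2(x̄, d̄) the distance dist(x̄ + t d̄ + (1/2) t^2 w, Q) must be o(t^2), whereas for w = 0 it is not. The following sketch (ours) compares the two behaviors.

```python
import numpy as np

def proj_soc(p):
    p = np.asarray(p, dtype=float); p1, p2 = p[0], p[1:]
    n2 = np.linalg.norm(p2)
    if n2 <= p1: return p.copy()
    if n2 <= -p1: return np.zeros_like(p)
    a = (p1 + n2) / 2.0
    return np.concatenate(([a], a * p2 / n2))

dist_Q = lambda p: np.linalg.norm(p - proj_soc(p))

x_bar = np.array([1.0, 1.0, 0.0])
d_bar = np.array([1.0, 1.0, 1.0])
w_in  = np.array([0.0, -1.0, 0.0])   # w_2 - w_1 = -1, so w_in lies in T^2_Q(x_bar, d_bar)
w_out = np.zeros(3)                  # w_2 - w_1 =  0, so w_out lies only in T_{T_Q(x_bar)}(d_bar)

for t in (1e-1, 1e-2, 1e-3, 1e-4):
    p_in  = x_bar + t * d_bar + 0.5 * t ** 2 * w_in
    p_out = x_bar + t * d_bar + 0.5 * t ** 2 * w_out
    print(t, dist_Q(p_in) / t ** 2, dist_Q(p_out) / t ** 2)
# The first ratio is 0 for small t (so the distance is o(t^2)), while the second
# stays near 1/(2*sqrt(2)) ~ 0.354, so dist is not o(t^2) for w = 0.
```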

Corollary 2.1 For x ∈ Q and y ∈ N_Q(x), we define

Γ(x, y) := T_Q(x) ∩ {y}^⊥ = { d | d ∈ T_Q(x) and y^T d = 0 }.

Then, σ( y | T_Q^2(x, d) ) is nonpositive and continuous with respect to d over Γ(x, y).

Proof We first show that σ( y | T_Q^2(x, d) ) is nonpositive for d ∈ Γ(x, y). In fact, we know from Theorem 2.2 that σ( y | T_Q^2(x, d) ) = 0 when x = 0, or x ∈ int(Q), or x ∈ ∂(Q)\{0} and d ∈ int(T_Q(x)). If x ∈ ∂(Q)\{0} and d ∈ ∂(T_Q(x)), then we have x_1 d_1 = x_2^T d_2 by the formula of T_Q(x); see [1, Lemma 25]. Hence x_1 |d_1| = |x_2^T d_2| ≤ ‖x_2‖ ‖d_2‖, which implies |d_1| ≤ ‖d_2‖ because x_1 = ‖x_2‖ > 0. Note that −y_1 is nonnegative since −y ∈ Q. Then, applying Theorem 2.2 yields σ( y | T_Q^2(x, d) ) = −(y_1/x_1)(d_1^2 − ‖d_2‖^2) ≤ 0. Thus, in any case, we have verified the nonpositivity of σ( y | T_Q^2(x, d) ) over Γ(x, y).

Next, we show the continuity of σ( y | T_Q^2(x, d) ) with respect to d over Γ(x, y). Indeed, if x = 0 or x ∈ int(Q), then σ( y | T_Q^2(x, d) ) = 0 for all d ∈ Γ(x, y), which, of course, is continuous. If x ∈ ∂Q\{0}, then σ( y | T_Q^2(x, d) ) = −(y_1/x_1)(d_1^2 − ‖d_2‖^2) for d ∈ Γ(x, y), which is continuous with respect to d as well. ∎

Remark 2.2 For a general closed convex cone, σ( y | T^2(x, d) ) can be a discontinuous function of d; see [4, Page 178] or [15, Page 489]. However, when the cone is the second-order cone Q, our result shows that this function is continuous.

For a convex subset A in IR^{m+1}, it is well known that the function dist^2(x, A) is continuously differentiable with ∇dist^2(x, A) = 2(x − Π_A(x)). But, there are very limited results on second order differentiability unless some additional structure is imposed on A, for example, second order regularity; see [2,3,15].

Let φ(x) := dist^2(x, Q) for Q ⊂ IR^{m+1}. Since Q is second order regular, according to [15], φ possesses the following nice property: for any x, d ∈ IR^{m+1}, there holds

lim_{d' → d, t ↓ 0} [ φ(x + t d') − φ(x) − t φ'(x; d') ] / ( (1/2) t^2 ) = V(x, d),   (17)

where V(x, d) is the optimal value of the problem

min  2 ‖d − z‖^2 − 2 σ( x − Π_Q(x) | T_Q^2( Π_Q(x), z ) )
s.t. z ∈ Γ( Π_Q(x), x − Π_Q(x) ).   (18)
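As an illustration of (17)-(18) (ours, not from the paper), take Q ⊂ IR^2, x = (0, 1) and d = (1, 0). A hand computation under the formulas above gives Π_Q(x) = (0.5, 0.5), the feasible set of (18) reduces to the line {(s, s)}, the σ-term vanishes on it by Theorem 2.2, and hence V(x, d) = min_s 2[(1 − s)^2 + s^2] = 1. The finite-difference quotient in (17) reproduces this value.

```python
import numpy as np

def proj_soc(p):
    p = np.asarray(p, dtype=float); p1, p2 = p[0], p[1:]
    n2 = np.linalg.norm(p2)
    if n2 <= p1: return p.copy()
    if n2 <= -p1: return np.zeros_like(p)
    a = (p1 + n2) / 2.0
    return np.concatenate(([a], a * p2 / n2))

phi = lambda p: np.linalg.norm(p - proj_soc(p)) ** 2   # phi = dist^2(., Q), Q = SOC in IR^2

x = np.array([0.0, 1.0])          # x outside Q;  Pi_Q(x) = (0.5, 0.5)
d = np.array([1.0, 0.0])
grad_phi_x = 2.0 * (x - proj_soc(x))                   # gradient of phi at x, equals (-1, 1)

# The hand computation of (18) for this data gives V(x, d) = 1; we check (17) against it.
for t in (1e-1, 1e-2, 1e-3, 1e-4):
    quot = (phi(x + t * d) - phi(x) - t * grad_phi_x @ d) / (0.5 * t ** 2)
    print(t, quot)                # the quotient equals 1 up to rounding, matching V(x, d) = 1
```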

With these preparations, the sufficient conditions for the existence of local saddle points are given below.

Theorem 2.3 Suppose x* is a feasible point of NSOCP (1) satisfying the following:

(i) x* is a KKT point and (λ*, μ*) ∈ Λ(x*), i.e.,

∇_x L(x*, λ*, μ*) = 0 and −λ* ∈ N_K(g(x*)).

(ii) The following second order conditions hold:

∇²_{xx} L(x*, λ*, μ*)(d, d) + d^T H(x*, λ*) d > 0, ∀d ∈ C(x*, λ*)\{0},   (19)

where

C(x*, λ*) := { d ∈ IR^n | ∇h(x*)d = 0, ∇g(x*)d ∈ T_K(g(x*)), (∇g(x*)d)^T λ* = 0 },

and H(x*, λ*) := Σ_{j=1}^J H_j(x*, λ*_j) with

H_j(x*, λ*_j) :=
    −( (λ*_j)_1 / (g_j(x*))_1 ) ∇g_j(x*)^T diag(1, −I_{m_j}) ∇g_j(x*),   if g_j(x*) ∈ ∂(K_j)\{0},
    0,   otherwise.

Then, (x*, λ*, μ*) is a local saddle point of L_c for some c > 0.
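Before the proof, the sketch below illustrates the role of the σ-term in (19) on a small hypothetical instance (our own data): ∇²_{xx}L is indefinite on the critical cone, yet ∇²_{xx}L + H is positive definite there, which is exactly the situation condition (19) is designed to cover.

```python
import numpy as np

# Illustrative instance:  n = 3, J = 1, no equality constraint,
#   min f(x) = (x1 - x2) - 0.25*x3**2 + 0.5*(x1 + x2 - 2)**2   s.t.  g(x) = x in K^3.
# Then x* = (1,1,0) is a KKT point with lambda* = (1,-1,0), since grad f(x*) = lambda*.

x_star   = np.array([1.0, 1.0, 0.0])
lam_star = np.array([1.0, -1.0, 0.0])

hess_L = np.array([[1.0, 1.0,  0.0],   # Hessian of L = f - <lambda*, g> at x*  (g is linear)
                   [1.0, 1.0,  0.0],
                   [0.0, 0.0, -0.5]])

# Sigma-term matrix: g(x*) lies on bd(K^3)\{0} and grad g = I, so by the formula above
#   H = -( (lambda*_1)_1 / (g_1(x*))_1 ) * diag(1, -1, -1).
H = -(lam_star[0] / x_star[0]) * np.diag([1.0, -1.0, -1.0])

# Critical cone C(x*, lambda*) = { d : d in T_K(x*), d^T lambda* = 0 } = span{(1,1,0), (0,0,1)}.
B = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

print(np.linalg.eigvalsh(B.T @ hess_L @ B))        # one negative eigenvalue: indefinite on C
print(np.linalg.eigvalsh(B.T @ (hess_L + H) @ B))  # all positive: condition (19) holds
```

This mirrors the claim made in the introduction that a local saddle point can exist even when ∇²_{xx}L itself is indefinite, as long as the σ-term supplies the missing curvature.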

Proof The first inequality in (3) follows from the facts that L_c(x*, λ*, μ*) = f(x*) by (5), since −λ* ∈ N_K(g(x*)), and that L_c(x*, λ, μ) ≤ f(x*) for all (λ, μ) ∈ IR^p × IR^l, due to x* being feasible.

We prove the second inequality in (3) by contradiction; that is, suppose we cannot find c > 0 and δ > 0 such that f(x*) = L_c(x*, λ*, μ*) ≤ L_c(x, λ*, μ*) for all x ∈ B(x*, δ). In other words, there exists a sequence c_n → ∞ as n → ∞ and, for each fixed c_n, we can always find a sequence {x^n_k} (noting that this sequence depends on c_n) such that x^n_k → x* as k → ∞ and

f(x*) > L_{c_n}(x^n_k, λ*, μ*).   (20)

To proceed, we denote t^n_k := ‖x^n_k − x*‖ and d^n_k := (x^n_k − x*)/‖x^n_k − x*‖. Assume, without loss of generality, that d^n_k → d̃^n as k → ∞. First, we observe that

φ( g_j(x^n_k) − λ*_j/c_n )
  = φ( g_j(x*) − λ*_j/c_n + t^n_k ∇g_j(x*) d^n_k + (1/2)(t^n_k)^2 ∇^2 g_j(x*)(d^n_k, d^n_k) + o((t^n_k)^2) )
  = φ( g_j(x*) − λ*_j/c_n + t^n_k [ ∇g_j(x*) d^n_k + (1/2) t^n_k ∇^2 g_j(x*)(d^n_k, d^n_k) ] ) + o((t^n_k)^2)
  = φ( g_j(x*) − λ*_j/c_n ) + t^n_k φ'( g_j(x*) − λ*_j/c_n ; ∇g_j(x*) d^n_k + (1/2) t^n_k ∇^2 g_j(x*)(d^n_k, d^n_k) )
      + (1/2)(t^n_k)^2 V( g_j(x*) − λ*_j/c_n , ∇g_j(x*) d̃^n ) + o((t^n_k)^2),   (21)

where the second equality follows from the fact that φ is Lipschitz continuous (in fact, φ is continuously differentiable) and the last step is due to (17). From (18), V( g_j(x*) − λ*_j/c_n , ∇g_j(x*) d̃^n ) is the optimal value of the following problem

min  2 ‖ ∇g_j(x*) d̃^n − z ‖^2 − 2 σ( −λ*_j/c_n | T_{K_j}^2( g_j(x*), z ) )
s.t. z ∈ Γ( g_j(x*), −λ*_j ),   (22)

where we have used the facts that Γ( g_j(x*), −λ*_j/c_n ) = Γ( g_j(x*), −λ*_j ) by definition since c_n ≠ 0, and that Π_{K_j}( g_j(x*) − λ*_j/c_n ) = g_j(x*) because −λ*_j ∈ N_{K_j}(g_j(x*)) by (5).

Note that the optimal value of the above problem (22) is finite since σ is nonpositive by Corollary 2.1, and that the objective function is strongly convex (because ‖·‖^2 is strongly convex and −σ is convex [4, Proposition 3.48]). Hence, the optimal solution of problem (22) exists and is unique, say z^n_j, i.e.,

V( g_j(x*) − λ*_j/c_n , ∇g_j(x*) d̃^n ) = 2 ‖ ∇g_j(x*) d̃^n − z^n_j ‖^2 − 2 σ( −λ*_j/c_n | T_{K_j}^2( g_j(x*), z^n_j ) ),   (23)

where z^n_j ∈ Γ( g_j(x*), −λ*_j ). Then, combining (21) and (23) yields

dist^2( g_j(x^n_k) − λ*_j/c_n , K_j ) − ‖λ*_j/c_n‖^2
  = −2 t^n_k ⟨ λ*_j/c_n , ∇g_j(x*) d^n_k + (1/2) t^n_k ∇^2 g_j(x*)(d^n_k, d^n_k) ⟩
      + (t^n_k)^2 [ ‖ ∇g_j(x*) d̃^n − z^n_j ‖^2 − σ( −λ*_j/c_n | T_{K_j}^2( g_j(x*), z^n_j ) ) ] + o((t^n_k)^2),   (24)

where we use the facts that dist( g_j(x*) − λ*_j/c_n , K_j ) = ‖λ*_j/c_n‖ and

∇φ( g_j(x*) − λ*_j/c_n ) = 2 ( g_j(x*) − λ*_j/c_n − Π_{K_j}( g_j(x*) − λ*_j/c_n ) ) = −2 λ*_j/c_n.

Since f(x*) > L_{c_n}(x^n_k, λ*, μ*) by (20), applying the Taylor expansion, we obtain from (24) that

0 > f(x^n_k) − f(x*) + ⟨ μ*, h(x^n_k) ⟩ + (c_n/2) ‖h(x^n_k)‖^2 + (c_n/2) Σ_{j=1}^J [ dist^2( g_j(x^n_k) − λ*_j/c_n , K_j ) − ‖λ*_j/c_n‖^2 ]
  = t^n_k ∇f(x*)^T d^n_k + (1/2)(t^n_k)^2 (d^n_k)^T ∇^2 f(x*) d^n_k + o((t^n_k)^2)
      + ⟨ μ*, t^n_k ∇h(x*) d^n_k + (1/2)(t^n_k)^2 ∇^2 h(x*)(d^n_k, d^n_k) + o((t^n_k)^2) ⟩
      + (c_n/2) ‖ t^n_k ∇h(x*) d^n_k + o(t^n_k) ‖^2
      + (c_n/2) Σ_{j=1}^J [ −2 t^n_k ⟨ λ*_j/c_n , ∇g_j(x*) d^n_k + (1/2) t^n_k ∇^2 g_j(x*)(d^n_k, d^n_k) ⟩
          + (t^n_k)^2 ( ‖ ∇g_j(x*) d̃^n − z^n_j ‖^2 − σ( −λ*_j/c_n | T_{K_j}^2( g_j(x*), z^n_j ) ) ) + o((t^n_k)^2) ].

Dividing both sides by (t^n_k)^2/2 and taking limits as k → ∞ gives

0 ≥ ∇²_{xx} L(x*, λ*, μ*)(d̃^n, d̃^n) + c_n ‖ ∇h(x*) d̃^n ‖^2 + c_n Σ_{j=1}^J [ ‖ ∇g_j(x*) d̃^n − z^n_j ‖^2 − σ( −λ*_j/c_n | T_{K_j}^2( g_j(x*), z^n_j ) ) ],   (25)

where we use the fact that ∇_x L(x*, λ*, μ*) = 0, the first equality in the KKT conditions (10).

Since −λ*_j ∈ N_{K_j}(g_j(x*)) from (10) and z^n_j ∈ Γ( g_j(x*), −λ*_j ), applying Corollary 2.1 yields

σ( −λ*_j/c_n | T_{K_j}^2( g_j(x*), z^n_j ) ) = (1/c_n) σ( −λ*_j | T_{K_j}^2( g_j(x*), z^n_j ) ) ≤ 0,

where the equality is due to the positive homogeneity of the support function; see [11]. Thus, it follows from (25) that

0 ≥ ∇²_{xx} L(x*, λ*, μ*)(d̃^n, d̃^n) + c_n ‖ ∇h(x*) d̃^n ‖^2 + c_n Σ_{j=1}^J ‖ ∇g_j(x*) d̃^n − z^n_j ‖^2.

Since ‖d̃^n‖ = 1 for all n, we may assume, taking a subsequence if necessary, that d̃^n → d̃. Because c_n can be made sufficiently large as n → ∞, we obtain from the above inequality that

∇h(x*) d̃^n → 0 and ∇g_j(x*) d̃^n − z^n_j → 0.

Therefore, ∇h(x*) d̃ = lim_{n→∞} ∇h(x*) d̃^n = 0 and

dist( ∇g_j(x*) d̃ , Γ( g_j(x*), −λ*_j ) ) = lim_{n→∞} dist( ∇g_j(x*) d̃^n , Γ( g_j(x*), −λ*_j ) ) ≤ lim_{n→∞} ‖ ∇g_j(x*) d̃^n − z^n_j ‖ = 0,

which implies ∇g_j(x*) d̃ ∈ Γ( g_j(x*), −λ*_j ) for all j = 1, 2, . . . , J. Thus, we have d̃ ∈ C(x*, λ*). In addition, it follows from (25) again that

0 ≥ ∇²_{xx} L(x*, λ*, μ*)(d̃^n, d̃^n) − c_n Σ_{j=1}^J σ( −λ*_j/c_n | T_{K_j}^2( g_j(x*), z^n_j ) )
  = ∇²_{xx} L(x*, λ*, μ*)(d̃^n, d̃^n) − Σ_{j=1}^J σ( −λ*_j | T_{K_j}^2( g_j(x*), z^n_j ) ).
