

Contents lists available at ScienceDirect

Nonlinear Analysis

journal homepage: www.elsevier.com/locate/na

Interior proximal methods and central paths for convex second-order cone programming

Shaohua Pan^a, Jein-Shan Chen^{b,*,1}

a Department of Mathematics, South China University of Technology, Guangzhou 510640, China
b Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

Article info

Article history:
Received 18 November 2008
Accepted 28 June 2010

Keywords:
Convex second-order cone optimization
Interior proximal methods
Proximal distances with respect to SOCs
Convergence
Central path

Abstract

We present a unified analysis of interior proximal methods for solving convex second-order cone programming problems. These methods use a proximal distance with respect to second-order cones, which can be produced from an appropriate closed proper univariate function in three ways. Under some mild conditions, the generated sequence is bounded with each limit point being a solution, and global convergence rate estimates are obtained in terms of objective values. A class of regularized proximal distances is also constructed which can guarantee the global convergence of the sequence to an optimal solution. These results are illustrated with some examples. In addition, we also study the central paths associated with these distance-like functions, and for the linear SOCP we discuss their relations with the sequence generated by the interior proximal methods. From this, we obtain improved convergence results for the sequence generated by the interior proximal methods using a proximal distance continuous at the boundary of second-order cones.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

We consider the following convex second-order cone programming problem (CSOCP):

\[
\inf\; f(x) \quad \text{s.t.} \quad Ax = b, \;\; x \succeq_{\mathcal{K}} 0, \tag{1}
\]

where $f: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a closed proper convex function, $A$ is an $m \times n$ matrix with full row rank $m$, $b$ is a vector in $\mathbb{R}^m$, $x \succeq_{\mathcal{K}} 0$ means $x \in \mathcal{K}$, and $\mathcal{K}$ is the Cartesian product of some second-order cones (SOCs), also called Lorentz cones [1].

In other words,

\[
\mathcal{K} = \mathcal{K}^{n_1} \times \mathcal{K}^{n_2} \times \cdots \times \mathcal{K}^{n_r}, \tag{2}
\]

where $r, n_1, \ldots, n_r \ge 1$ with $n_1 + \cdots + n_r = n$, and

\[
\mathcal{K}^{n_i} := \bigl\{(x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n_i - 1} \;\big|\; x_1 \ge \|x_2\|\bigr\},
\]

with $\|\cdot\|$ being the Euclidean norm. When $f$ reduces to a linear function, i.e. $f(x) = c^T x$ for some $c \in \mathbb{R}^n$, (1) becomes the standard SOCP. Throughout this paper, we denote by $X^*$ the optimal set of (1), and let $\mathcal{V} := \{x \in \mathbb{R}^n \mid Ax = b\}$.

* Corresponding author. Tel.: +886 2 29325417; fax: +886 2 29332342.
E-mail addresses: shhpan@scut.edu.cn (S. Pan), jschen@math.ntnu.edu.tw (J.-S. Chen).
1 Member of the Mathematics Division, National Center for Theoretical Sciences, Taipei Office.
0362-546X/$ – see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.na.2010.06.079


The CSOCP, as an extension of the standard SOCP, has a wide range of applications, from engineering, control, and finance to robust optimization and combinatorial optimization; see [2,3] and the references therein.

Various methods have been proposed for solving the CSOCP, including interior point methods [4–6], smoothing Newton methods [7,8], the smoothing–regularization method [9], the semismooth Newton method [10], and the merit function method [11]. These methods are all developed by reformulating the KKT optimality conditions as a system of equations or an unconstrained minimization problem. This paper focuses on a proximal-based iterative scheme that handles the CSOCP directly. Specifically, the proximal-type algorithm generates a sequence $\{x^k\}$ via

\[
x^k := \operatorname*{argmin}\bigl\{\lambda_k f(x) + H(x, x^{k-1}) \;\big|\; x \in \mathcal{K} \cap \mathcal{V}\bigr\}, \quad k = 1, 2, \ldots, \tag{3}
\]

where $\{\lambda_k\}$ is a sequence of positive parameters, and $H: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a proximal distance with respect to $\mathrm{int}\,\mathcal{K}$ (see Definition 3.1), which plays the same role as the Euclidean distance $\|x - y\|^2$ in the classical proximal algorithms (see, e.g., [12,13]) but possesses certain more desirable properties for forcing the iterates to stay in $\mathcal{K} \cap \mathcal{V}$, thus eliminating the constraints automatically. As will be shown in Section 4, such proximal distances can be produced with an appropriate closed proper univariate function.

In this paper, under mild assumptions like those used in interior proximal methods for convex programs over nonnegative orthant cones (see, e.g., [14–20]), we show that the sequence $\{x^k\}$ is bounded with every limit point being a solution of (1), and we obtain global rates of convergence in terms of objective values. But, unlike in interior proximal methods for convex programs over nonnegative orthant cones, the global convergence of $\{x^k\}$ to an optimal solution can be guaranteed for the classes of proximal distances $\mathcal{F}_1(\mathcal{K})$ or $\mathcal{F}_2(\mathcal{K})$ only under a very restrictive assumption on $X^*$ (see Theorem 3.2(a)), or for their subclasses $\widehat{\mathcal{F}}_1(\mathcal{K}^n)$ or $\widehat{\mathcal{F}}_2(\mathcal{K}^n)$ under mild assumptions on $X^*$ (see Theorem 3.2(b)), or for the smallest subclass $\overline{\mathcal{F}}_2(\mathcal{K}^n)$. These results are illustrated with some examples.

Just like proximal point methods with generalized distances, the central paths derived from barrier functions have been the object of intensive study. Recently, the central paths for semidefinite programming were under active study (see, e.g., [21–24]). For example, da Cruz Neto et al. [21] established relations among the central paths in semidefinite programming, generalized proximal point methods, and Cauchy trajectories in Riemannian manifolds, extending the results of Iusem et al. [25] for monotone variational inequality problems. Motivated by this, we also investigate the properties of the central paths of (1) with respect to (w.r.t.) the distance-like functions used by interior proximal methods (see Propositions 5.2 and 5.3). For the linear SOCP, we discuss the relations between the central paths and the sequences generated by the interior proximal methods, and show that the sequence generated by interior proximal methods converges under the usual assumptions if the proximal distance satisfies a certain continuity at the boundary of second-order cones (see Theorem 5.2).

Auslender and Teboulle [15] provided a unified technique for analyzing and designing interior proximal methods for convex and conic optimization. However, for the CSOCP, it seems hard to find a proximal distance example in the class $\mathcal{F}_+(\mathcal{K}^n)$ for which global convergence results similar to those of [15, Theorem 2.2] apply. In this paper, we extend their unified analysis technique to interior proximal methods using a proximal distance which can be produced with an appropriate univariate function in three ways, and we establish global convergence results for the smallest class $\overline{\mathcal{F}}_2(\mathcal{K}^n)$ and for the class $\widehat{\mathcal{F}}_2(\mathcal{K}^n)$ under some mild assumptions on $X^*$. Examples from these two classes of proximal distances are easy to find. In particular, for the linear SOCP, we obtain improved convergence results for these interior proximal methods by exploring the relations between the sequence generated by the interior proximal methods and the central path associated with the corresponding proximal distances. In view of these contexts, this paper can be regarded as a refinement of [15] for second-order cone optimization.

Throughout this paper, $I$ denotes an identity matrix of suitable dimension, and $\mathbb{R}^n$ denotes the space of $n$-dimensional real column vectors. For any $x, y \in \mathbb{R}^n$, we write $x \succeq_{\mathcal{K}^n} y$ if $x - y \in \mathcal{K}^n$, and we write $x \succ_{\mathcal{K}^n} y$ if $x - y \in \mathrm{int}\,\mathcal{K}^n$. Given a matrix $E$, $\mathrm{Im}(E)$ means the subspace generated by the columns of $E$. A function is closed if and only if it is lower semicontinuous (lsc), and a function $f$ is proper if $f(x) < \infty$ for at least one $x \in \mathbb{R}^n$ and $f(x) > -\infty$ for all $x \in \mathbb{R}^n$. For an lsc proper convex function $f: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$, we denote its domain by $\mathrm{dom}f := \{x \in \mathbb{R}^n \mid f(x) < \infty\}$ and the $\varepsilon$-subdifferential of $f$ at $\bar{x}$ by

\[
\partial_\varepsilon f(\bar{x}) := \{w \in \mathbb{R}^n \mid f(x) \ge f(\bar{x}) + \langle w, x - \bar{x}\rangle - \varepsilon, \;\forall x \in \mathbb{R}^n\}.
\]

If $f$ is differentiable at $x$, $\nabla f(x)$ means the gradient of $f$ at $x$. For a differentiable $h$ on $\mathbb{R}$, $h'$ and $h''$ denote its first and second derivatives. For any closed set $S$, $\mathrm{int}\,S$ denotes the interior of $S$.

In the rest of this paper, we focus on the case where $\mathcal{K} = \mathcal{K}^n$; all the analysis can be carried over to the case where $\mathcal{K}$ has the direct product structure as in (2). Unless otherwise stated, we make the following minimal assumption for the CSOCP (1):

(A1) $\mathrm{dom}f \cap (\mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n) \neq \emptyset$ and $f_* := \inf\{f(x) \mid x \in \mathcal{V} \cap \mathcal{K}^n\} > -\infty$.

2. Preliminaries

This section recalls some preliminary results that will be used in the subsequent sections. For any $x = (x_1, x_2), y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, their Jordan product [1] is defined as

\[
x \circ y := \bigl(\langle x, y\rangle,\; y_1 x_2 + x_1 y_2\bigr). \tag{4}
\]

It is easy to verify that the identity element under the Jordan product is $e \equiv (1, 0, \ldots, 0)^T \in \mathbb{R}^n$, i.e., $e \circ x = x$ for all $x \in \mathbb{R}^n$. Note that the Jordan product is not associative, but it is power associative, i.e., $x \circ (x \circ x) = (x \circ x) \circ x$ for all $x \in \mathbb{R}^n$. Thus, we may without fear of ambiguity write $x^m$ for the product of $m$ copies of $x$, and $x^{m+n} = x^m \circ x^n$ for all positive integers $m$ and $n$. We stipulate $x^0 = e$. For each $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, let

\[
\det(x) := x_1^2 - \|x_2\|^2 \quad \text{and} \quad \operatorname{tr}(x) := 2x_1. \tag{5}
\]

These are called the determinant and the trace of $x$, respectively. A vector $x$ is said to be invertible if $\det(x) \neq 0$. If $x \in \mathbb{R}^n$ is invertible, then there is a unique $y \in \mathbb{R}^n$ satisfying $x \circ y = y \circ x = e$. We call this $y$ the inverse of $x$ and denote it by $x^{-1}$.

We recall from [1] that each $x$ admits a spectral factorization associated with $\mathcal{K}^n$:

\[
x = \lambda_1(x)\, u_x^{(1)} + \lambda_2(x)\, u_x^{(2)}, \tag{6}
\]

where $\lambda_i(x)$ and $u_x^{(i)}$ for $i = 1, 2$ are the spectral values of $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ and the associated spectral vectors, defined by

\[
\lambda_i(x) = x_1 + (-1)^i \|x_2\|, \qquad u_x^{(i)} = \frac{1}{2}\bigl(1,\; (-1)^i \bar{x}_2\bigr), \tag{7}
\]

with $\bar{x}_2 = x_2/\|x_2\|$ if $x_2 \neq 0$, and otherwise $\bar{x}_2$ being any vector in $\mathbb{R}^{n-1}$ such that $\|\bar{x}_2\| = 1$. If $x_2 \neq 0$, then the factorization is unique.
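For readers who wish to experiment with these objects, the following minimal Python sketch (ours, not part of the original paper) implements the Jordan product (4), the determinant and trace (5), and the spectral factorization (6)–(7), and checks them numerically; a vector $x = (x_1, x_2)$ is stored as a single numpy array with x[0] playing the role of $x_1$.

```python
import numpy as np

def jordan_product(x, y):                  # Eq. (4): x o y = (<x,y>, y1 x2 + x1 y2)
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

def soc_det(x):                            # Eq. (5)
    return x[0]**2 - np.linalg.norm(x[1:])**2

def soc_tr(x):                             # Eq. (5)
    return 2.0 * x[0]

def spectral(x):                           # Eqs. (6)-(7)
    n2 = np.linalg.norm(x[1:])
    if n2 > 0:
        x2bar = x[1:] / n2
    else:                                  # any unit vector works when x2 = 0
        x2bar = np.zeros(len(x) - 1); x2bar[0] = 1.0
    lam = [x[0] - n2, x[0] + n2]
    u = [0.5 * np.concatenate(([1.0], s * x2bar)) for s in (-1.0, 1.0)]
    return lam, u

x = np.random.default_rng(0).standard_normal(5)
lam, u = spectral(x)
assert np.allclose(lam[0] * u[0] + lam[1] * u[1], x)   # factorization (6)
assert np.isclose(soc_det(x), lam[0] * lam[1])         # cf. Lemma 2.1(a) below
assert np.isclose(soc_tr(x), lam[0] + lam[1])
```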

The following lemma is direct by formula (6).

Lemma 2.1. For any $x = (x_1, x_2), y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, the following results hold:

(a) $\det(x) = \lambda_1(x)\lambda_2(x)$, $\operatorname{tr}(x) = \lambda_1(x) + \lambda_2(x)$, and $\|x\|^2 = \frac{1}{2}\bigl[(\lambda_1(x))^2 + (\lambda_2(x))^2\bigr]$.

(b) $x \in \mathcal{K}^n \iff \lambda_1(x) \ge 0$, and $x \in \mathrm{int}\,\mathcal{K}^n \iff \lambda_1(x) > 0$.

(c) $\lambda_1(x)\lambda_2(y) + \lambda_2(x)\lambda_1(y) \le \operatorname{tr}(x \circ y) \le \lambda_1(x)\lambda_1(y) + \lambda_2(x)\lambda_2(y)$.
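The identities in Lemma 2.1 are also easy to confirm numerically. The following self-contained snippet (illustrative only) spot-checks parts (a) and (c) on random vectors, using $\operatorname{tr}(x \circ y) = 2\langle x, y\rangle$, which follows from (4) and (5):

```python
import numpy as np

def spectral_values(x):                    # Eq. (7)
    n2 = np.linalg.norm(x[1:])
    return x[0] - n2, x[0] + n2

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    l1x, l2x = spectral_values(x)
    l1y, l2y = spectral_values(y)
    # (a): det(x) = l1*l2 and ||x||^2 = (l1^2 + l2^2)/2
    assert np.isclose(x[0]**2 - np.linalg.norm(x[1:])**2, l1x * l2x)
    assert np.isclose(np.linalg.norm(x)**2, 0.5 * (l1x**2 + l2x**2))
    # (c): tr(x o y) = 2<x, y> by Eqs. (4)-(5); check both bounds
    t = 2.0 * (x @ y)
    assert l1x*l2y + l2x*l1y <= t + 1e-9
    assert t <= l1x*l1y + l2x*l2y + 1e-9
```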

With the spectral factorization above, one may define a vector-valued function using a univariate function. For any given $h: I_{\mathbb{R}} \to \mathbb{R}$ with $I_{\mathbb{R}} \subseteq \mathbb{R}$, define $h^{\mathrm{soc}}: S \to \mathbb{R}^n$ by

\[
h^{\mathrm{soc}}(x) := h(\lambda_1(x)) \cdot u_x^{(1)} + h(\lambda_2(x)) \cdot u_x^{(2)}, \quad \forall x \in S. \tag{8}
\]

The definition is unambiguous whether $x_2 \neq 0$ or $x_2 = 0$. For example, let $h(t) = t^{-1}$ for any $t > 0$; then using formulas (6) and (8) we can compute that

\[
x^{-1} := h^{\mathrm{soc}}(x) = \frac{1}{x_1^2 - \|x_2\|^2}(x_1, -x_2) = \frac{\operatorname{tr}(x)\, e - x}{\det(x)} \quad \text{for } x \in \mathrm{int}\,\mathcal{K}^n. \tag{9}
\]
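As a concrete illustration of (8) and (9) (again ours, with ad hoc helper names), the sketch below builds $h^{\mathrm{soc}}$ from an arbitrary univariate $h$ and verifies that $h(t) = t^{-1}$ indeed produces the Jordan inverse, i.e. $x \circ x^{-1} = e$:

```python
import numpy as np

def h_soc(h, x):                           # Eq. (8)
    n2 = np.linalg.norm(x[1:])
    x2bar = x[1:] / n2 if n2 > 0 else np.eye(len(x) - 1)[0]
    out = np.zeros_like(x)
    for s in (-1.0, 1.0):                  # spectral value/vector pair, Eq. (7)
        out += h(x[0] + s * n2) * 0.5 * np.concatenate(([1.0], s * x2bar))
    return out

def jordan_product(x, y):                  # Eq. (4)
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

rng = np.random.default_rng(2)
v = rng.standard_normal(4)
x = np.concatenate(([np.linalg.norm(v) + 0.5], v))   # lam1(x) = 0.5 > 0: x in int K^n
x_inv = h_soc(lambda t: 1.0 / t, x)                  # Eq. (9) via Eq. (8)
e = np.zeros_like(x); e[0] = 1.0
assert np.allclose(jordan_product(x, x_inv), e)      # x o x^{-1} = e
# closed form in (9): (tr(x) e - x) / det(x)
assert np.allclose(x_inv, (2*x[0]*e - x) / (x[0]**2 - np.linalg.norm(v)**2))
```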

Moreover, by Lemma 2.2 of [26], $S$ is open whenever $I_{\mathbb{R}}$ is open, and $S$ is closed whenever $I_{\mathbb{R}}$ is closed. The following lemma shows that some favorable properties of $h$ can be transmitted to $h^{\mathrm{soc}}$; the proofs were given in Proposition 5.1 of [8] and Lemma 2.2 of [27].

Lemma 2.2. Given $h: I_{\mathbb{R}} \to \mathbb{R}$ with $I_{\mathbb{R}} \subseteq \mathbb{R}$, let $h^{\mathrm{soc}}: S \to \mathbb{R}^n$ be the vector-valued function induced by $h$ via (8), where $S \subseteq \mathbb{R}^n$. Then, the following results hold:

(a) If $h$ is continuously differentiable on $\mathrm{int}\,I_{\mathbb{R}}$, then $h^{\mathrm{soc}}$ is continuously differentiable on $\mathrm{int}\,S$, and for any $x \in \mathrm{int}\,S$ with $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$,

\[
\nabla h^{\mathrm{soc}}(x) =
\begin{cases}
h'(x_1)\, I & \text{if } x_2 = 0, \\[6pt]
\begin{pmatrix}
b & c\, x_2^T/\|x_2\| \\[2pt]
c\, x_2/\|x_2\| & a I + (b - a)\, x_2 x_2^T/\|x_2\|^2
\end{pmatrix} & \text{otherwise},
\end{cases}
\]

where
\[
a = \frac{h(\lambda_2(x)) - h(\lambda_1(x))}{\lambda_2(x) - \lambda_1(x)}, \qquad
b = \frac{h'(\lambda_2(x)) + h'(\lambda_1(x))}{2}, \qquad
c = \frac{h'(\lambda_2(x)) - h'(\lambda_1(x))}{2}.
\]

(b) If $h$ is continuously differentiable on $\mathrm{int}\,I_{\mathbb{R}}$, then $\operatorname{tr}(h^{\mathrm{soc}}(x))$ is continuously differentiable on $\mathrm{int}\,S$ with $\nabla \operatorname{tr}(h^{\mathrm{soc}}(x)) = 2\nabla h^{\mathrm{soc}}(x)\, e = 2 (h')^{\mathrm{soc}}(x)$.

(c) If $h$ is (strictly) convex on $I_{\mathbb{R}}$, then $\operatorname{tr}(h^{\mathrm{soc}}(x))$ is (strictly) convex on $S$.
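The Jacobian formula in Lemma 2.2(a) can be validated against finite differences; the following sketch (illustrative, with the assumed test function $h(t) = e^t$) does exactly that:

```python
import numpy as np

def h_soc(h, x):                           # Eq. (8)
    n2 = np.linalg.norm(x[1:])
    x2bar = x[1:] / n2 if n2 > 0 else np.eye(len(x) - 1)[0]
    l1, l2 = x[0] - n2, x[0] + n2
    return (h(l1) * 0.5 * np.concatenate(([1.0], -x2bar))
            + h(l2) * 0.5 * np.concatenate(([1.0], x2bar)))

def grad_h_soc(h, dh, x):                  # Lemma 2.2(a), case x2 != 0
    n2 = np.linalg.norm(x[1:])
    l1, l2 = x[0] - n2, x[0] + n2
    a = (h(l2) - h(l1)) / (l2 - l1)
    b = 0.5 * (dh(l2) + dh(l1))
    c = 0.5 * (dh(l2) - dh(l1))
    w = x[1:] / n2
    n = len(x)
    J = np.zeros((n, n))
    J[0, 0], J[0, 1:], J[1:, 0] = b, c * w, c * w
    J[1:, 1:] = a * np.eye(n - 1) + (b - a) * np.outer(w, w)
    return J

x = np.random.default_rng(3).standard_normal(5)
J = grad_h_soc(np.exp, np.exp, x)          # h(t) = e^t, so h' = h
eps = 1e-6                                 # central finite differences, column j
J_fd = np.column_stack([(h_soc(np.exp, x + eps*e) - h_soc(np.exp, x - eps*e)) / (2*eps)
                        for e in np.eye(5)])
assert np.allclose(J, J_fd, atol=1e-5)
```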

Lemma 2.3. (a) The real-valued function $\ln(\det(x))$ is strictly concave on $\mathrm{int}\,\mathcal{K}^n$.

(b) For any $x, y \in \mathrm{int}\,\mathcal{K}^n$ with $x \neq y$, it holds that

\[
\det(\alpha x + (1 - \alpha) y) > (\det(x))^{\alpha} (\det(y))^{1 - \alpha}, \quad \forall \alpha \in (0, 1).
\]

Proof. Clearly, part (b) is a direct consequence of part (a). The proof of part (a) was given in [28, Prop. 2.4(a)] by computing the Hessian matrix of $\ln(\det(x))$. Here, we give a simpler proof. Let $\ln x$ be the vector-valued function induced by $\ln t$ via (8). From Lemma 2.1(a), $\ln(\det(x)) = \ln(\lambda_1(x)) + \ln(\lambda_2(x)) = \operatorname{tr}(\ln x)$ for any $x \in \mathrm{int}\,\mathcal{K}^n$. The result then follows directly from Lemma 2.2(c) and the strict concavity of $\ln t$ $(t > 0)$. □


To close this section, we review the definitions of SOC-convexity and SOC-monotonicity. These two concepts, like matrix-convexity and matrix-monotonicity in semidefinite programming, play an important role in the solution methods of SOCPs.

Definition 2.1 ([28]). Given $h: I_{\mathbb{R}} \to \mathbb{R}$ with $I_{\mathbb{R}} \subseteq \mathbb{R}$, let $h^{\mathrm{soc}}: S \to \mathbb{R}^n$ with $S \subseteq \mathbb{R}^n$ be the vector-valued function induced by $h$ via formula (8). Then,

(a) $h$ is said to be SOC-convex of order $n$ on $I_{\mathbb{R}}$ if for any $x, y \in S$ and $0 \le \beta \le 1$,

\[
h^{\mathrm{soc}}(\beta x + (1 - \beta) y) \preceq_{\mathcal{K}^n} \beta h^{\mathrm{soc}}(x) + (1 - \beta) h^{\mathrm{soc}}(y). \tag{10}
\]

(b) $h$ is said to be SOC-monotone of order $n$ on $I_{\mathbb{R}}$ if for any $x, y \in S$,

\[
x \succeq_{\mathcal{K}^n} y \;\Longrightarrow\; h^{\mathrm{soc}}(x) \succeq_{\mathcal{K}^n} h^{\mathrm{soc}}(y).
\]

We say that $h$ is SOC-convex (respectively, SOC-monotone) on $I_{\mathbb{R}}$ if $h$ is SOC-convex of all orders $n$ (respectively, SOC-monotone of all orders $n$) on $I_{\mathbb{R}}$. A function $h$ is said to be SOC-concave on $I_{\mathbb{R}}$ whenever $-h$ is SOC-convex on $I_{\mathbb{R}}$. When $h$ is continuous on $I_{\mathbb{R}}$, the condition in (10) can be replaced by the more special condition:

\[
h^{\mathrm{soc}}\Bigl(\frac{x + y}{2}\Bigr) \preceq_{\mathcal{K}^n} \frac{1}{2}\bigl(h^{\mathrm{soc}}(x) + h^{\mathrm{soc}}(y)\bigr). \tag{11}
\]

Obviously, the set of SOC-monotone functions and the set of SOC-convex functions are both closed under positive linear combinations and under pointwise limits.

For characterizations of SOC-convexity and SOC-monotonicity, the interested reader may refer to [28,29]. The following lemma collects some common SOC-concave functions; the proofs can be found in [27] or follow directly from Lemma 3.2 of [27].

Lemma 2.4. (a) For any fixed $u \in \mathbb{R}$, the function $h(t) = (t + u)^r$ with $r \in [0, 1]$ is SOC-concave and SOC-monotone on $[-u, +\infty)$.

(b) For any fixed $u \in \mathbb{R}$, the function $h(t) = -(t + u)^{-r}$ with $r \in [0, 1]$ is SOC-concave and SOC-monotone on $(-u, +\infty)$.

(c) For any fixed $\alpha \ge 0$, $\ln(\alpha + t)$ is SOC-concave and SOC-monotone on $(-\alpha, +\infty)$.

(d) For any fixed $u \ge 0$, $\frac{t}{u + t}$ is SOC-concave and SOC-monotone on $(-u, +\infty)$.
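As a numerical spot-check of Lemma 2.4(a) (taking $u = 0$ and $r = 1/2$, i.e. $h(t) = \sqrt{t}$), the snippet below draws random pairs $x \succeq_{\mathcal{K}^n} y$ in $\mathcal{K}^n$ and tests both SOC-monotonicity and the midpoint condition (11) for SOC-concavity; the sampling scheme is ad hoc:

```python
import numpy as np

def h_soc(h, x):                              # Eq. (8)
    n2 = np.linalg.norm(x[1:])
    x2bar = x[1:] / n2 if n2 > 0 else np.eye(len(x) - 1)[0]
    return (h(x[0] - n2) * 0.5 * np.concatenate(([1.0], -x2bar))
            + h(x[0] + n2) * 0.5 * np.concatenate(([1.0], x2bar)))

def lam1(x):                                  # x in K^n iff lam1(x) >= 0
    return x[0] - np.linalg.norm(x[1:])

def random_soc(rng, n):                       # a random point of K^n
    v = rng.standard_normal(n - 1)
    return np.concatenate(([np.linalg.norm(v) + rng.random()], v))

rng = np.random.default_rng(4)
for _ in range(1000):
    y = random_soc(rng, 4)
    x = y + random_soc(rng, 4)                # x - y in K^n, hence x >=_{K^n} y
    # SOC-monotonicity of sqrt: h_soc(x) - h_soc(y) in K^n
    assert lam1(h_soc(np.sqrt, x) - h_soc(np.sqrt, y)) >= -1e-10
    # midpoint SOC-concavity, cf. (11) applied to -sqrt
    gap = h_soc(np.sqrt, 0.5*(x + y)) - 0.5*(h_soc(np.sqrt, x) + h_soc(np.sqrt, y))
    assert lam1(gap) >= -1e-10
```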

3. Interior proximal methods

First of all, we present the definition of a proximal distance w.r.t. the open cone $\mathrm{int}\,\mathcal{K}^n$.

Definition 3.1. An extended-valued function $H: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is called a proximal distance with respect to $\mathrm{int}\,\mathcal{K}^n$ if it satisfies the following properties:

(P1) $\mathrm{dom}\,H(\cdot, \cdot) = \mathcal{C}_1 \times \mathcal{C}_2$ with $\mathrm{int}\,\mathcal{K}^n \times \mathrm{int}\,\mathcal{K}^n \subseteq \mathcal{C}_1 \times \mathcal{C}_2 \subseteq \mathcal{K}^n \times \mathcal{K}^n$.

(P2) For each given $y \in \mathrm{int}\,\mathcal{K}^n$, $H(\cdot, y)$ is continuous and strictly convex on $\mathcal{C}_1$, and it is continuously differentiable on $\mathrm{int}\,\mathcal{K}^n$ with $\mathrm{dom}\,\nabla_1 H(\cdot, y) = \mathrm{int}\,\mathcal{K}^n$.

(P3) $H(x, y) \ge 0$ for all $x, y \in \mathbb{R}^n$, and $H(y, y) = 0$ for all $y \in \mathrm{int}\,\mathcal{K}^n$.

(P4) For each fixed $y \in \mathcal{C}_2$, the sets $\{x \in \mathcal{C}_1 : H(x, y) \le \gamma\}$ are bounded for all $\gamma \in \mathbb{R}$.

Definition 3.1 differs slightly from Definition 2.1 of [15] for a proximal distance w.r.t. $\mathrm{int}\,\mathcal{K}^n$, since here $H(\cdot, y)$ is required to be strictly convex over $\mathcal{C}_1$ for any fixed $y \in \mathrm{int}\,\mathcal{K}^n$. We denote by $\mathcal{D}(\mathrm{int}\,\mathcal{K}^n)$ the family of functions $H$ satisfying Definition 3.1. With a given $H \in \mathcal{D}(\mathrm{int}\,\mathcal{K}^n)$, we have the following basic iterative algorithm for (1).

Interior Proximal Algorithm (IPA). Given $H \in \mathcal{D}(\mathrm{int}\,\mathcal{K}^n)$ and $x^0 \in \mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n$, for $k = 1, 2, \ldots$, with $\lambda_k > 0$ and $\varepsilon_k \ge 0$, generate a sequence $\{x^k\} \subset \mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n$ with $g^k \in \partial_{\varepsilon_k} f(x^k)$ via the following iterative scheme:

\[
x^k := \operatorname*{argmin}\bigl\{\lambda_k f(x) + H(x, x^{k-1}) \;\big|\; x \in \mathcal{V}\bigr\} \tag{12}
\]

such that

\[
\lambda_k g^k + \nabla_1 H(x^k, x^{k-1}) = A^T u^k \quad \text{for some } u^k \in \mathbb{R}^m. \tag{13}
\]

The following proposition implies that the IPA is well-defined; moreover, from its proof we see that the iterative formula (12) is equivalent to the iterative scheme (3). When $\varepsilon_k > 0$ for any $k \in \mathbb{N}$ (the set of natural numbers), the IPA can be viewed as an approximate interior proximal method, and it becomes exact if $\varepsilon_k = 0$ for all $k \in \mathbb{N}$.
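Before analyzing the IPA, it may help to see a runnable toy version. The sketch below is ours, not the paper's implementation: it fixes the linear case $f(x) = c^T x$, takes $\varepsilon_k = 0$, and assumes as proximal distance the log-barrier Bregman distance $H(x, y) = \ln\det(y) - \ln\det(x) + 2\langle y^{-1}, x\rangle - 2$ built from $\Phi(x) = -\ln\det(x)$ (so $\nabla_1 H(x, y) = -2x^{-1} + 2y^{-1}$, using $\nabla \ln\det(x) = 2x^{-1}$ from Lemma 2.2(b)); the constructions in Section 4 are more general. The subproblem (12) is solved by plain gradient descent in the null space of $A$ with an Armijo backtracking line search that also keeps iterates in $\mathrm{int}\,\mathcal{K}^n$.

```python
import numpy as np
from scipy.linalg import null_space

def lam1(x):                # smallest spectral value; x in int K^n iff lam1(x) > 0
    return x[0] - np.linalg.norm(x[1:])

def soc_inv(x):             # x^{-1} = (x1, -x2)/det(x), cf. Eq. (9)
    return np.concatenate(([x[0]], -x[1:])) / (x[0]**2 - np.linalg.norm(x[1:])**2)

def H(x, y):
    # Assumed proximal distance (log-barrier Bregman distance of -ln det):
    # H(x,y) = ln det(y) - ln det(x) + 2<y^{-1}, x> - 2.
    ld = lambda z: np.log(z[0]**2 - np.linalg.norm(z[1:])**2)
    return ld(y) - ld(x) + 2.0 * (soc_inv(y) @ x) - 2.0

def ipa(c, A, x0, lam=1.0, outer=50, inner=100):
    # Exact IPA (eps_k = 0) for min c^T x s.t. Ax = b, x in K^n; x0 must be
    # strictly feasible (A x0 = b, x0 in int K^n), so b enters through x0 only.
    N = null_space(A)                      # then A(x + N z) = b for every z
    x = x0.copy()
    for _ in range(outer):
        def phi(z):                        # subproblem (12) restricted to V
            xz = x + N @ z
            return (lam * (c @ xz) + H(xz, x)) if lam1(xz) > 0 else np.inf
        z = np.zeros(N.shape[1])
        for _ in range(inner):             # gradient descent with Armijo step
            xz = x + N @ z
            # grad phi = N^T(lam*c + grad_1 H(xz, x)), grad_1 H = -2 xz^{-1} + 2 x^{-1}
            g = N.T @ (lam * c - 2.0 * soc_inv(xz) + 2.0 * soc_inv(x))
            if np.linalg.norm(g) < 1e-12:
                break
            t = 1.0
            while phi(z - t * g) > phi(z) - 0.5 * t * (g @ g) and t > 1e-16:
                t *= 0.5                   # also backtracks away from bd K^n
            z = z - t * g
        x = x + N @ z
    return x

# Tiny instance: minimize x3 over {x in K^3 : x1 = 1}; the optimum is (1, 0, -1).
c = np.array([0.0, 0.0, 1.0])
A = np.array([[1.0, 0.0, 0.0]])
x0 = np.array([1.0, 0.1, 0.1])             # strictly feasible start
print(ipa(c, A, x0))                       # approaches (1, 0, -1) as k grows
```

Because $H(\cdot, y)$ blows up at the boundary of $\mathcal{K}^n$, the backtracking step never leaves the interior; this is exactly the constraint-elimination effect described after (3).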


Proposition 3.1. For any given $H \in \mathcal{D}(\mathrm{int}\,\mathcal{K}^n)$ and $y \in \mathrm{int}\,\mathcal{K}^n$, consider the problem

\[
f_*(y, \tau) = \inf\{\tau f(x) + H(x, y) \mid x \in \mathcal{V}\} \quad \text{with } \tau > 0. \tag{14}
\]

Then, for each $\varepsilon \ge 0$, there exist $x(y, \tau) \in \mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n$ and $g \in \partial_\varepsilon f(x(y, \tau))$ such that

\[
\tau g + \nabla_1 H(x(y, \tau), y) = A^T u \tag{15}
\]

for some $u \in \mathbb{R}^m$. Moreover, for such $x(y, \tau)$, we have

\[
\tau f(x(y, \tau)) + H(x(y, \tau), y) \le f_*(y, \tau) + \varepsilon.
\]

Proof. Set $F(x, \tau) := \tau f(x) + H(x, y) + \delta_{\mathcal{V} \cap \mathcal{K}^n}(x)$, where $\delta_{\mathcal{V} \cap \mathcal{K}^n}(x)$ is the indicator function of the set $\mathcal{V} \cap \mathcal{K}^n$. Since $\mathrm{dom}\,H(\cdot, y) = \mathcal{C}_1 \subseteq \mathcal{K}^n$, it is clear that

\[
f_*(y, \tau) = \inf\{F(x, \tau) \mid x \in \mathbb{R}^n\}. \tag{16}
\]

Since $f_* > -\infty$, it is easy to verify that for any $\gamma \in \mathbb{R}$ the following relation holds:

\[
\{x \in \mathbb{R}^n \mid F(x, \tau) \le \gamma\} \subseteq \{x \in \mathcal{V} \cap \mathcal{K}^n \mid H(x, y) \le \gamma - \tau f_*\} \subseteq \{x \in \mathcal{C}_1 \mid H(x, y) \le \gamma - \tau f_*\},
\]

which together with (P4) implies that $F(\cdot, \tau)$ has bounded level sets. In addition, by (P1)–(P3), $F(\cdot, \tau)$ is a closed proper and strictly convex function. Hence, the problem (16) has a unique solution, say $x(y, \tau)$. From the optimality conditions of (16), we get

\[
0 \in \partial F(x(y, \tau)) = \tau \partial f(x(y, \tau)) + \nabla_1 H(x(y, \tau), y) + \partial \delta_{\mathcal{V} \cap \mathcal{K}^n}(x(y, \tau)),
\]

where the equality is due to Theorem 23.8 of [30] and $\mathrm{dom}f \cap (\mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n) \neq \emptyset$. Notice that $\mathrm{dom}\,\nabla_1 H(\cdot, y) = \mathrm{int}\,\mathcal{K}^n$ and $\mathrm{dom}\,\partial\delta_{\mathcal{V} \cap \mathcal{K}^n}(\cdot) = \mathcal{V} \cap \mathcal{K}^n$. Therefore, the last equation implies $x(y, \tau) \in \mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n$, and there exists $g \in \partial f(x(y, \tau))$ such that

\[
-\tau g - \nabla_1 H(x(y, \tau), y) \in \partial\delta_{\mathcal{V} \cap \mathcal{K}^n}(x(y, \tau)).
\]

On the other hand, by the definition of $\delta_{\mathcal{V} \cap \mathcal{K}^n}(\cdot)$, it is not hard to derive that

\[
\partial\delta_{\mathcal{V} \cap \mathcal{K}^n}(x) = \mathrm{Im}(A^T) \quad \forall x \in \mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n.
\]

The last two equations imply that (15) holds for $\varepsilon = 0$. When $\varepsilon > 0$, (15) also holds for such $x(y, \tau)$ and $g$ since $\partial f(x(y, \tau)) \subseteq \partial_\varepsilon f(x(y, \tau))$. Finally, since for each $y \in \mathrm{int}\,\mathcal{K}^n$ the function $H(\cdot, y)$ is strictly convex, and since $g \in \partial_\varepsilon f(x(y, \tau))$, we have

\begin{align*}
\tau f(x) + H(x, y) &\ge \tau f(x(y, \tau)) + H(x(y, \tau), y) + \langle \tau g + \nabla_1 H(x(y, \tau), y),\, x - x(y, \tau)\rangle - \varepsilon \\
&= \tau f(x(y, \tau)) + H(x(y, \tau), y) + \langle A^T u,\, x - x(y, \tau)\rangle - \varepsilon \\
&= \tau f(x(y, \tau)) + H(x(y, \tau), y) - \varepsilon \quad \text{for all } x \in \mathcal{V},
\end{align*}

where the first equality follows from (15) and the last one holds since $x, x(y, \tau) \in \mathcal{V}$. Thus, $f_*(y, \tau) = \inf\{\tau f(x) + H(x, y) \mid x \in \mathcal{V}\} \ge \tau f(x(y, \tau)) + H(x(y, \tau), y) - \varepsilon$. □

In the rest of this section, we focus on the convergence behavior of the IPA with $H$ from several subclasses of $\mathcal{D}(\mathrm{int}\,\mathcal{K}^n)$, which also satisfy one of the following properties.

(P5) For any $x, y \in \mathrm{int}\,\mathcal{K}^n$ and $z \in \mathcal{C}_1$, $H(z, y) - H(z, x) \ge \langle \nabla_1 H(x, y), z - x\rangle$.

(P5′) For any $x, y \in \mathrm{int}\,\mathcal{K}^n$ and $z \in \mathcal{C}_2$, $H(y, z) - H(x, z) \ge \langle \nabla_1 H(x, y), z - x\rangle$.

(P6) For each $x \in \mathcal{C}_1$, the level sets $\{y \in \mathcal{C}_2 : H(x, y) \le \gamma\}$ are bounded for all $\gamma \in \mathbb{R}$.

Specifically, we denote by $\mathcal{F}_1(\mathrm{int}\,\mathcal{K}^n)$ and $\mathcal{F}_2(\mathrm{int}\,\mathcal{K}^n)$ the families of functions $H \in \mathcal{D}(\mathrm{int}\,\mathcal{K}^n)$ satisfying (P5) and (P5′), respectively. If $\mathcal{C}_1 = \mathcal{K}^n$, we denote by $\mathcal{F}_1(\mathcal{K}^n)$ the family of functions $H \in \mathcal{D}(\mathrm{int}\,\mathcal{K}^n)$ satisfying (P5) and (P6). If $\mathcal{C}_2 = \mathcal{K}^n$, we write $\mathcal{F}_2(\mathrm{int}\,\mathcal{K}^n)$ as $\mathcal{F}_2(\mathcal{K}^n)$. It is easy to see that the class of proximal distances $\mathcal{F}(\mathrm{int}\,\mathcal{K}^n)$ (respectively, $\mathcal{F}(\mathcal{K}^n)$) in [15] subsumes the pairs $(H, H)$ with $H \in \mathcal{F}_1(\mathrm{int}\,\mathcal{K}^n)$ (respectively, $\mathcal{F}_1(\mathcal{K}^n)$), but it does not include any $(H, H)$ with $H \in \mathcal{F}_2(\mathrm{int}\,\mathcal{K}^n)$ (respectively, $\mathcal{F}_2(\mathcal{K}^n)$).
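For a log-barrier Bregman proximal distance like the one assumed in the IPA sketch earlier ($H(x, y) = \ln\det(y) - \ln\det(x) + 2\langle y^{-1}, x\rangle - 2$), (P3) holds by construction and (P5) is the classical three-point property of Bregman distances, with slack exactly $H(x, y)$; the snippet below (illustrative only) confirms both numerically:

```python
import numpy as np

def soc_inv(x):
    return np.concatenate(([x[0]], -x[1:])) / (x[0]**2 - np.linalg.norm(x[1:])**2)

def H(x, y):                # same assumed log-barrier Bregman distance as above
    ld = lambda z: np.log(z[0]**2 - np.linalg.norm(z[1:])**2)
    return ld(y) - ld(x) + 2.0 * (soc_inv(y) @ x) - 2.0

def grad1_H(x, y):          # grad_1 H(x,y) = -2 x^{-1} + 2 y^{-1}
    return -2.0 * soc_inv(x) + 2.0 * soc_inv(y)

def random_int_soc(rng, n): # a random point of int K^n
    v = rng.standard_normal(n - 1)
    return np.concatenate(([np.linalg.norm(v) + 0.1 + rng.random()], v))

rng = np.random.default_rng(5)
for _ in range(1000):
    x, y, z = (random_int_soc(rng, 4) for _ in range(3))
    assert H(x, y) >= -1e-12 and abs(H(y, y)) < 1e-12        # (P3)
    # (P5): H(z,y) - H(z,x) >= <grad1_H(x,y), z - x>  (slack equals H(x,y))
    assert H(z, y) - H(z, x) >= grad1_H(x, y) @ (z - x) - 1e-9
```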

Theorem 3.1. Let $\{x^k\}$ be the sequence generated by the IPA with $H \in \mathcal{F}_1(\mathrm{int}\,\mathcal{K}^n)$ or $H \in \mathcal{F}_2(\mathrm{int}\,\mathcal{K}^n)$. Set $\sigma_\nu = \sum_{k=1}^{\nu} \lambda_k$. Then, the following results hold:

(a) $f(x^\nu) - f(x) \le \sigma_\nu^{-1} H(x, x^0) + \sigma_\nu^{-1} \sum_{k=1}^{\nu} \sigma_k \varepsilon_k$ for any $x \in \mathcal{V} \cap \mathcal{C}_1$ if $H \in \mathcal{F}_1(\mathrm{int}\,\mathcal{K}^n)$; and $f(x^\nu) - f(x) \le \sigma_\nu^{-1} H(x^0, x) + \sigma_\nu^{-1} \sum_{k=1}^{\nu} \sigma_k \varepsilon_k$ for any $x \in \mathcal{V} \cap \mathcal{C}_2$ if $H \in \mathcal{F}_2(\mathrm{int}\,\mathcal{K}^n)$.

(b) If $\sigma_\nu \to +\infty$ and $\varepsilon_k \to 0$, then $\liminf_{\nu\to\infty} f(x^\nu) = f_*$.

(c) The sequence $\{f(x^k)\}$ converges to $f_*$ whenever $\sum_{k=1}^{\infty} \varepsilon_k < \infty$.


(d) If $X^* \neq \emptyset$, then $\{x^k\}$ is bounded with all limit points in $X^*$ under (d1) or (d2):

(d1) $X^*$ is bounded and $\sum_{k=1}^{\infty} \varepsilon_k < \infty$;

(d2) $\sum_{k=1}^{\infty} \lambda_k \varepsilon_k < \infty$ and $H \in \mathcal{F}_1(\mathcal{K}^n)$ (or $H \in \mathcal{F}_2(\mathcal{K}^n)$).

Proof. The proofs are similar to those of [15, Theorem 4.1]. For completeness, we here take $H \in \mathcal{F}_2(\mathrm{int}\,\mathcal{K}^n)$ as an example to prove the results.

(a) Since $g^k \in \partial_{\varepsilon_k} f(x^k)$, from the definition of the $\varepsilon$-subdifferential it follows that

\[
f(x) \ge f(x^k) + \langle g^k, x - x^k \rangle - \varepsilon_k \quad \forall x \in \mathbb{R}^n.
\]

This, together with Eq. (13), implies that

\[
\lambda_k\bigl(f(x^k) - f(x)\bigr) \le \langle \nabla_1 H(x^k, x^{k-1}), x - x^k \rangle + \lambda_k \varepsilon_k \quad \forall x \in \mathcal{V} \cap \mathcal{C}_2.
\]

Using (P5′) with $x = x^k$, $y = x^{k-1}$ and $z = x \in \mathcal{V} \cap \mathcal{C}_2$, it then follows that

\[
\lambda_k\bigl(f(x^k) - f(x)\bigr) \le H(x^{k-1}, x) - H(x^k, x) + \lambda_k \varepsilon_k \quad \forall x \in \mathcal{V} \cap \mathcal{C}_2. \tag{17}
\]

Summing this inequality over $k = 1, 2, \ldots, \nu$ yields

\[
-\sigma_\nu f(x) + \sum_{k=1}^{\nu} \lambda_k f(x^k) \le H(x^0, x) - H(x^\nu, x) + \sum_{k=1}^{\nu} \lambda_k \varepsilon_k. \tag{18}
\]

On the other hand, setting $x = x^{k-1}$ in (17), we obtain

\[
f(x^k) - f(x^{k-1}) \le \lambda_k^{-1}\bigl[H(x^{k-1}, x^{k-1}) - H(x^k, x^{k-1})\bigr] + \varepsilon_k \le \varepsilon_k. \tag{19}
\]

Multiplying this inequality by $\sigma_{k-1}$ (with $\sigma_0 \equiv 0$) and summing over $k = 1, \ldots, \nu$, we get

\[
\sum_{k=1}^{\nu} \sigma_{k-1} f(x^k) - \sum_{k=1}^{\nu} \sigma_{k-1} f(x^{k-1}) \le \sum_{k=1}^{\nu} \sigma_{k-1} \varepsilon_k.
\]

Noting that $\sigma_k = \lambda_k + \sigma_{k-1}$ with $\sigma_0 \equiv 0$, the above inequality reduces to

\[
\sigma_\nu f(x^\nu) - \sum_{k=1}^{\nu} \lambda_k f(x^k) \le \sum_{k=1}^{\nu} \sigma_{k-1} \varepsilon_k. \tag{20}
\]

Adding the inequalities (18) and (20) and recalling that $\sigma_k = \lambda_k + \sigma_{k-1}$, it follows that

\[
f(x^\nu) - f(x) \le \sigma_\nu^{-1}\bigl[H(x^0, x) - H(x^\nu, x)\bigr] + \sigma_\nu^{-1} \sum_{k=1}^{\nu} \sigma_k \varepsilon_k \quad \forall x \in \mathcal{V} \cap \mathcal{C}_2,
\]

which immediately implies the desired result due to the nonnegativity of $H(x^\nu, x)$.

(b) If $\sigma_\nu \to +\infty$ and $\varepsilon_k \to 0$, then applying Lemma 2.2(ii) of [15] with $a_k = \varepsilon_k$ and $b_\nu := \sigma_\nu^{-1} \sum_{k=1}^{\nu} \lambda_k \varepsilon_k$ yields $\sigma_\nu^{-1} \sum_{k=1}^{\nu} \lambda_k \varepsilon_k \to 0$. From part (a), it then follows that

\[
\liminf_{\nu \to \infty} f(x^\nu) \le \inf\{f(x) \mid x \in \mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n\}.
\]

This, together with $f(x^\nu) \ge \inf\{f(x) \mid x \in \mathcal{V} \cap \mathcal{K}^n\}$, implies that $\liminf_{\nu \to \infty} f(x^\nu) = \inf\{f(x) \mid x \in \mathcal{V} \cap \mathrm{int}\,\mathcal{K}^n\} = f_*$.

(c) From (19), $0 \le f(x^k) - f_* \le f(x^{k-1}) - f_* + \varepsilon_k$. Using Lemma 2.1 of [15] with $\gamma_k \equiv 0$ and $v_k = f(x^k) - f_*$, we have that $\{f(x^k)\}$ converges to $f_*$ whenever $\sum_{k=1}^{\infty} \varepsilon_k < \infty$.

(d) If condition (d1) holds, then the sets $\{x \in \mathcal{V} \cap \mathcal{K}^n \mid f(x) \le \gamma\}$ are bounded for all $\gamma \in \mathbb{R}$, since $f$ is closed proper convex and $X^* = \{x \in \mathcal{V} \cap \mathcal{K}^n \mid f(x) \le f_*\}$. Note that (19) implies $\{x^k\} \subset \{x \in \mathcal{V} \cap \mathcal{K}^n \mid f(x) \le f(x^0) + \sum_{j=1}^{k} \varepsilon_j\}$. Combining this with $\sum_{k=1}^{\infty} \varepsilon_k < \infty$, we clearly have that $\{x^k\}$ is bounded. Since $\{f(x^k)\}$ converges to $f_*$ and $f$ is lsc, passing to the limit and recalling that $\{x^k\} \subset \mathcal{V} \cap \mathcal{K}^n$ yields that each limit point of $\{x^k\}$ is a solution of (1).

Suppose that condition (d2) holds. If $H \in \mathcal{F}_2(\mathcal{K}^n)$, then inequality (17) holds for each $x \in \mathcal{V} \cap \mathcal{K}^n$, and in particular for $x^* \in X^*$. Consequently,

\[
H(x^k, x^*) \le H(x^{k-1}, x^*) + \lambda_k \varepsilon_k \quad \forall x^* \in X^*. \tag{21}
\]

Summing the last inequality over $k = 1, 2, \ldots, \nu$, we obtain

\[
H(x^\nu, x^*) \le H(x^0, x^*) + \sum_{k=1}^{\nu} \lambda_k \varepsilon_k.
\]
