to appear in Computational Optimization and Applications, 2011

A Proximal Point Algorithm for the Monotone Second-Order Cone Complementarity Problem

Jia Wu

School of Mathematical Sciences Dalian University of Technology

Dalian 116024, China E-mail: jwu dora@mail.dlut.edu.cn

Jein-Shan Chen1 Department of Mathematics National Taiwan Normal University

Taipei 11677, Taiwan E-mail: jschen@math.ntnu.edu.tw

August 4, 2010

(revised on December 7, 2010) (2nd revised on December 21, 2010)

Abstract. This paper is devoted to the study of the proximal point algorithm for solving monotone second-order cone complementarity problems. The proximal point algorithm is to generate a sequence by solving subproblems that are regularizations of the original problem. After given an appropriate criterion for approximate solutions of subproblems by adopting a merit function, the proximal point algorithm is verified to have global and superlinear convergence properties. For the purpose of solving the subproblems efficiently, we introduce a generalized Newton method and show that only one Newton step is even- tually needed to obtain a desired approximate solution that approximately satisfies the ap- propriate criterion under mild conditions. Numerical comparisons are also made with the derivative-free descent method used by [21], which confirm the theoretical results and the effectiveness of the algorithm.

Key words.Complementarity problem, second-order cone, proximal point algorithm, ap- proximation criterion.

1Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author’s work is partially supported by National Science Council of Taiwan.


1 Introduction

The second-order cone complementarity problem (SOCCP), which is a natural extension of the nonlinear complementarity problem (NCP), is to find x ∈ ℜn satisfying

SOCCP(F) : ⟨F(x), x⟩ = 0, F(x) ∈ K, x ∈ K, (1.1)

where ⟨·, ·⟩ is the Euclidean inner product, F is a mapping from ℜn into ℜn, and K is the Cartesian product of second-order cones (SOC), in other words,

K = Kn1 × · · · × Knq, (1.2)

where q, n1, . . . , nq ≥ 1, n1 + · · · + nq = n, and for each i ∈ {1, . . . , q}

Kni := { (x0, x̄) ∈ ℜ × ℜni−1 | ∥x̄∥ ≤ x0 },

with ∥·∥ denoting the Euclidean norm and K1 denoting the set of nonnegative reals ℜ+. If K = ℜn+, then (1.1) reduces to the nonlinear complementarity problem. Throughout this paper, corresponding to the Cartesian structure of K, we write F = (F1, . . . , Fq) and x = (x1, . . . , xq), with Fi a mapping from ℜn to ℜni and xi ∈ ℜni for each i ∈ {1, . . . , q}.
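As a quick numerical illustration of the definitions above, the following pure-Python sketch (the helper name `in_soc` and the sample vectors are ours, not from the paper) checks membership in K3 and verifies a complementary pair:

```python
import math

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def in_soc(x, tol=1e-12):
    """Membership of x = (x0, x_bar) in the second-order cone K^n."""
    return norm(x[1:]) <= x[0] + tol

x = [5.0, 3.0, 4.0]      # boundary point of K^3: ||x_bar|| = 5 = x0
y = [5.0, -3.0, -4.0]    # also in K^3, and complementary to x
assert in_soc(x) and in_soc(y)
assert abs(sum(a * b for a, b in zip(x, y))) < 1e-12   # <x, y> = 0
```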

We also assume that the mapping F is continuously differentiable and monotone.

Until now, a variety of methods for solving the SOCCP have been proposed and investigated. They include interior-point methods [1, 2, 13, 18, 28, 31], smoothing Newton methods [5, 10], the merit function method [4], and the semismooth Newton method [11], where the last three kinds of methods are all based on an SOC complementarity function or a merit function.

The proximal point algorithm (PPA), first proposed by Martinet [16] and further studied by Rockafellar [24], is known for its theoretically nice convergence properties. PPA is a procedure for finding a vector z satisfying 0 ∈ T(z), where T is a maximal monotone operator. Therefore, it can be applied to a broad class of problems such as convex programming problems, monotone variational inequality problems, and monotone complementarity problems.

In this paper, motivated by the work of Yamashita and Fukushima [29] for NCPs, we focus on introducing PPA for solving SOC complementarity problems. For SOCCP(F), given the current point xk, PPA obtains the next point xk+1 by approximately solving the subproblem

SOCCP(Fk) : ⟨Fk(x), x⟩ = 0, Fk(x) ∈ K, x ∈ K, (1.3)

where Fk : ℜn → ℜn is defined by

Fk(x) := F(x) + ck(x − xk) (1.4)

with ck > 0. It is obvious that Fk is strongly monotone when F is monotone. Then, by [8, Theorem 2.3.3], the subproblem SOCCP(Fk), which is more tractable than the original problem, always has a unique solution. Thus, PPA is well defined. It was pointed out in [15, 24] that with appropriate criteria for approximate solutions of the subproblems (1.3), PPA has global and superlinear convergence properties under mild conditions. However, those criteria are usually not easy to check. Inspired by [29], we give a practical criterion based on a new merit function for SOCCP proposed by Chen in [3]. Another implementation issue is how to solve the subproblems efficiently and obtain an approximate solution fulfilling the approximation criterion. We use the generalized Newton method proposed by De Luca et al. [14], which was used in [29] for the NCP case, to solve the subproblems. We also give conditions under which the approximation criterion is eventually approximately fulfilled by a single iteration of the generalized Newton method.

The following notations and terminologies are used throughout the paper. I represents an identity matrix of suitable dimension, ℜn denotes the space of n-dimensional real column vectors, and ℜn1 × · · · × ℜnq is identified with ℜn1+···+nq; thus (x1, . . . , xq) ∈ ℜn1 × · · · × ℜnq is viewed as a column vector in ℜn1+···+nq. For any two vectors u and v, the Euclidean inner product is denoted by ⟨u, v⟩ := uTv, and for any vector w the norm ∥w∥ is the one induced by this inner product, the Euclidean vector norm. For a matrix M, ∥M∥ denotes the matrix norm induced by the Euclidean vector norm, that is, the spectral norm. Given a differentiable mapping F : ℜn → ℜl, we denote by JF(x) the Jacobian of F at x and by ∇F(x) := JF(x)T the adjoint of JF(x). For a symmetric matrix M, we write M ≻ O (respectively, M ≽ O) if M is positive definite (respectively, positive semidefinite). Given a finite number of square matrices Q1, . . . , Qq, we denote the block diagonal matrix with these matrices as diagonal blocks by diag(Q1, . . . , Qq) or by diag(Qi, i = 1, . . . , q). If I and B are index sets with I, B ⊆ {1, 2, . . . , q}, we denote by PIB the block matrix consisting of the sub-matrices Pik ∈ ℜni×nk of P with i ∈ I, k ∈ B, and by xB the vector consisting of the sub-vectors xi ∈ ℜni with i ∈ B.

The organization of this paper is as follows. In Section 2, we recall some notions and background materials. Section 3 is devoted to developing a proximal point method for the monotone second-order cone complementarity problem with a practical approximation criterion based on a new merit function. In Section 4, a generalized Newton method is introduced to solve the subproblems, and we prove that the proximal point algorithm in Section 3 has approximate genuine superlinear convergence under mild conditions, which is the main result of this paper. In Section 5, we report numerical results for several test problems. Section 6 gives conclusions.

2 Preliminaries

In this section, we review some background materials that will be used in the sequel. We first recall some mathematical concepts and the Jordan algebra associated with the SOC.

Then we talk about the complementarity functions and three merit functions for SOCCP.

Finally, we briefly mention the proximal point algorithm.


2.1 Mathematical concepts

Given a set Ω ⊆ ℜn locally closed around x̄ ∈ Ω, define the regular normal cone to Ω at x̄ by

N̂Ω(x̄) := { v ∈ ℜn | lim sup_{x→x̄, x∈Ω} ⟨v, x − x̄⟩ / ∥x − x̄∥ ≤ 0 }.

The (limiting) normal cone to Ω at x̄ is defined by

NΩ(x̄) := lim sup_{x→x̄, x∈Ω} N̂Ω(x),

where "lim sup" is the Painlevé-Kuratowski outer limit of sets (see [25]).

We now recall the definitions of monotonicity of a mapping, which are needed for the assumptions throughout this paper. We say that a mapping G : ℜn → ℜn is monotone if

⟨G(ζ) − G(ξ), ζ − ξ⟩ ≥ 0, ∀ ζ, ξ ∈ ℜn.

Moreover, G is strongly monotone if there exists ρ > 0 such that

⟨G(ζ) − G(ξ), ζ − ξ⟩ ≥ ρ∥ζ − ξ∥², ∀ ζ, ξ ∈ ℜn.

It is well known that, when G is continuously differentiable, G is monotone if and only if ∇G(ζ) is positive semidefinite for all ζ ∈ ℜn, while G is strongly monotone if and only if ∇G(ζ) is positive definite for all ζ ∈ ℜn. For more details about monotonicity, please refer to [8].

There is another kind of concept, the Cartesian P-properties, which is closely related to monotonicity and was introduced by Chen and Qi [6] for a linear transformation. Here we present the definitions of the Cartesian P-properties for a matrix M ∈ ℜn×n and the nonlinear generalization in the setting of K.

A matrix M ∈ ℜn×n is said to have the Cartesian P-property if for any 0 ≠ x = (x1, . . . , xq) ∈ ℜn with xi ∈ ℜni, there exists an index ν ∈ {1, 2, . . . , q} such that ⟨xν, (Mx)ν⟩ > 0. And M is said to have the Cartesian P0-property if the above strict inequality is relaxed to ⟨xν, (Mx)ν⟩ ≥ 0, where the chosen index ν satisfies xν ≠ 0.

Given a mapping G = (G1, . . . , Gq) with Gi : ℜn → ℜni, G is said to have the uniform Cartesian P-property if there is a positive constant ρ > 0 such that for any x = (x1, . . . , xq), y = (y1, . . . , yq) ∈ ℜn there exists an index ν ∈ {1, 2, . . . , q} with

⟨xν − yν, Gν(x) − Gν(y)⟩ ≥ ρ∥x − y∥².

In addition, for a single-valued Lipschitz continuous mapping G : ℜn → ℜm, the B-subdifferential of G at x, denoted by ∂BG(x), is defined as

∂BG(x) := { lim_{k→∞} JG(xk) | xk → x, G is differentiable at xk }.

The convex hull of ∂BG(x) is Clarke's generalized Jacobian of G at x, denoted by ∂G(x); see [7]. We say that G is strongly BD-regular at x if every element of ∂BG(x) is nonsingular.

There is another important concept, semismoothness, which was first introduced in [17] for functionals and extended in [23] to vector-valued functions. Let G : ℜn → ℜm be a locally Lipschitz continuous mapping. We say that G is semismooth at a point x ∈ ℜn if G is directionally differentiable and for any ∆x ∈ ℜn and V ∈ ∂G(x + ∆x) with ∆x → 0,

G(x + ∆x) − G(x) − V(∆x) = o(∥∆x∥).

Furthermore, G is said to be strongly semismooth at x if G is semismooth at x and for any ∆x ∈ ℜn and V ∈ ∂G(x + ∆x) with ∆x → 0,

G(x + ∆x) − G(x) − V(∆x) = O(∥∆x∥²).

2.2 Jordan algebra associated with SOC

It is known that Kl is a closed convex self-dual cone with nonempty interior given by

int(Kl) := { x = (x0, x̄) ∈ ℜ × ℜl−1 | x0 > ∥x̄∥ }.

For any x = (x0, x̄) ∈ ℜl and y = (y0, ȳ) ∈ ℜl, we define their Jordan product as

x ◦ y = (xTy, y0x̄ + x0ȳ).

We write x² to mean x ◦ x and write x + y to mean the usual componentwise addition of vectors. Moreover, if x ∈ Kl, there exists a unique vector in Kl, which we denote by x^{1/2}, such that (x^{1/2})² = x^{1/2} ◦ x^{1/2} = x. We also recall that each x = (x0, x̄) ∈ ℜ × ℜl−1 admits a spectral factorization, associated with Kl, of the form

x = λ1(x)u1x + λ2(x)u2x,

where λ1(x), λ2(x) and u1x, u2x are the spectral values and the associated spectral vectors of x, respectively, defined by

λi(x) = x0 + (−1)^i ∥x̄∥,  uix = (1/2)(1, (−1)^i ω),  i = 1, 2,

with ω = x̄/∥x̄∥ if x̄ ≠ 0, and otherwise ω any vector in ℜl−1 satisfying ∥ω∥ = 1.
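The spectral factorization can be checked numerically. The following pure-Python sketch (helper names `spectral` and `jordan` are ours) reconstructs x from its spectral decomposition and verifies the square-root property for a point in int(K3):

```python
import math

def spectral(x):
    """Spectral values and vectors of x = (x0, x_bar) with respect to K^l."""
    x0, xbar = x[0], x[1:]
    nrm = math.sqrt(sum(t * t for t in xbar))
    w = [t / nrm for t in xbar] if nrm > 0 else [1.0] + [0.0] * (len(xbar) - 1)
    u1 = [0.5] + [-0.5 * t for t in w]
    u2 = [0.5] + [0.5 * t for t in w]
    return (x0 - nrm, x0 + nrm), (u1, u2)

def jordan(a, b):
    """Jordan product a o b = (a^T b, a0*b_bar + b0*a_bar)."""
    head = sum(s * t for s, t in zip(a, b))
    return [head] + [a[0] * bt + b[0] * at for at, bt in zip(a[1:], b[1:])]

x = [3.0, 1.0, 2.0]
(l1, l2), (u1, u2) = spectral(x)

# reconstruction: x = lambda_1 u1 + lambda_2 u2
recon = [l1 * a + l2 * b for a, b in zip(u1, u2)]
assert all(abs(r - t) < 1e-12 for r, t in zip(recon, x))

# x lies in int(K^3) since l1 = 3 - sqrt(5) > 0, so the SOC square root
# x^{1/2} = sqrt(l1) u1 + sqrt(l2) u2 satisfies x^{1/2} o x^{1/2} = x
sq_x = [math.sqrt(l1) * a + math.sqrt(l2) * b for a, b in zip(u1, u2)]
assert all(abs(s - t) < 1e-12 for s, t in zip(jordan(sq_x, sq_x), x))
```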

For each x = (x0, x̄) ∈ ℜl, define the matrix Lx by

Lx := [ x0  x̄T
        x̄   x0I ],  (2.1)

which can be viewed as a linear mapping from ℜl to ℜl.
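The two basic identities of this arrow-shaped matrix, Lx y = x ◦ y and Lx+y = Lx + Ly, can be confirmed with a short sketch (the helper names `L`, `matvec`, `jordan` are ours):

```python
def L(x):
    """Arrow matrix of (2.1): first row (x0, x_bar^T), first column (x0, x_bar),
    and x0 on the remaining diagonal."""
    n = len(x)
    M = [[0.0] * n for _ in range(n)]
    M[0][0] = x[0]
    for i in range(1, n):
        M[0][i] = x[i]
        M[i][0] = x[i]
        M[i][i] = x[0]
    return M

def matvec(M, v):
    return [sum(m * t for m, t in zip(row, v)) for row in M]

def jordan(a, b):
    head = sum(s * t for s, t in zip(a, b))
    return [head] + [a[0] * bt + b[0] * at for at, bt in zip(a[1:], b[1:])]

x, y = [3.0, 1.0, 2.0], [2.0, -1.0, 0.5]
# L_x y = x o y
assert all(abs(a - b) < 1e-12 for a, b in zip(matvec(L(x), y), jordan(x, y)))
# L_{x+y} = L_x + L_y
s = [a + b for a, b in zip(x, y)]
assert all(abs(L(s)[i][j] - (L(x)[i][j] + L(y)[i][j])) < 1e-12
           for i in range(3) for j in range(3))
```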


Lemma 2.1 The mapping Lx defined by (2.1) has the following properties.

(a) Lx y = x ◦ y and Lx+y = Lx + Ly for any y ∈ ℜl.

(b) x ∈ Kl if and only if Lx ≽ O, and x ∈ intKl if and only if Lx ≻ O.

(c) Lx is invertible whenever x ∈ intKl.

Proof. See [4, 10]. □

2.3 Complementarity and merit functions associated with SOC

In this subsection, we discuss three reformulations of SOCCP that play an important role in the sequel of this paper. We deal with the problem SOCCP(F̂), where F̂ : ℜn → ℜn is a mapping with the same structure as F in Section 1, that is, F̂ = (F̂1, . . . , F̂q) with F̂i : ℜn → ℜni.

A mapping ϕ : ℜl × ℜl → ℜl is called an SOC complementarity function associated with the cone Kl if

ϕ(x, y) = 0 ⟺ x ∈ Kl, y ∈ Kl, ⟨x, y⟩ = 0. (2.2)

A popular choice of ϕ is the vector-valued Fischer-Burmeister (FB) function, defined by

ϕFB(x, y) := (x² + y²)^{1/2} − x − y, ∀ x, y ∈ ℜl. (2.3)

This function was shown in [10] to satisfy the equivalence (2.2), and therefore its squared norm

ψFB(x, y) := (1/2)∥ϕFB(x, y)∥² (2.4)

is a merit function. The functions ϕFB and ψFB were studied in [4, 26]: ϕFB was shown to be semismooth in [26], whereas ψFB was proved to be smooth everywhere in [4].
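The FB function (2.3) can be implemented directly from the Jordan product and the spectral square root. The following pure-Python sketch (helper names are ours, not from the paper) verifies that ϕFB vanishes on a complementary pair:

```python
import math

def jordan(a, b):
    head = sum(s * t for s, t in zip(a, b))
    return [head] + [a[0] * bt + b[0] * at for at, bt in zip(a[1:], b[1:])]

def soc_sqrt(x):
    # square root in the Jordan algebra via the spectral factorization:
    # x^{1/2} = sqrt(l1) u1 + sqrt(l2) u2
    x0, xbar = x[0], x[1:]
    nrm = math.sqrt(sum(t * t for t in xbar))
    w = [t / nrm for t in xbar] if nrm > 0 else [1.0] + [0.0] * (len(xbar) - 1)
    s1, s2 = math.sqrt(max(x0 - nrm, 0.0)), math.sqrt(x0 + nrm)
    return [0.5 * (s1 + s2)] + [0.5 * (s2 - s1) * t for t in w]

def phi_fb(x, y):
    """Fischer-Burmeister function (2.3): (x^2 + y^2)^{1/2} - x - y."""
    z = soc_sqrt([a + b for a, b in zip(jordan(x, x), jordan(y, y))])
    return [zi - xi - yi for zi, xi, yi in zip(z, x, y)]

x = [5.0, 3.0, 4.0]
y = [5.0, -3.0, -4.0]     # x, y in K^3 and <x, y> = 0
assert all(abs(t) < 1e-12 for t in phi_fb(x, y))
```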

Due to these favorable properties, the SOCCP(F̂) can be reformulated as the following nonsmooth system of equations:

ΦFB(x) := ( ϕFB(x1, F̂1(x)), . . . , ϕFB(xq, F̂q(x)) ) = 0, (2.5)

where ϕFB is defined as in (2.3) with a suitable dimension l. Moreover, its squared norm induces a smooth merit function, given by

fFB(x) := (1/2)∥ΦFB(x)∥² = Σ_{i=1}^q ψFB(xi, F̂i(x)). (2.6)


Lemma 2.2 The mappings ΦFB and fFB defined in (2.5) and (2.6) have the following properties.

(a) If F̂ is continuously differentiable, then ΦFB is semismooth.

(b) If ∇F̂ is locally Lipschitz continuous, then ΦFB is strongly semismooth.

(c) If F̂ is continuously differentiable, then fFB is continuously differentiable everywhere.

(d) If F̂ is continuously differentiable and ∇F̂(x) has the Cartesian P0-property at every x ∈ ℜn, then every stationary point of fFB is a solution to the SOCCP(F̂).

(e) If F̂ is strongly monotone and x∗ is a nondegenerate solution of SOCCP(F̂), i.e., F̂i(x∗) + xi∗ ∈ intKni for all i ∈ {1, . . . , q}, then ΦFB is strongly BD-regular at x∗.

Proof. Items (a) and (b) come from [26, Corollary 3.3] and the fact that the composite of (strongly) semismooth functions is (strongly) semismooth by [9, Theorem 19]. Item (c) was shown by Chen and Tseng and is an immediate consequence of [4, Proposition 2]. Item (d) is due to [20, Proposition 5.1].

For item (e), since ∇F̂(x∗) is positive definite, which follows from the strong monotonicity of F̂, it has the Cartesian P-property; it then follows from [22, Proposition 2.1] that the conditions in [20, Theorem 4.1] are satisfied, and hence (e) is proved. □

Since the complementarity function ΦFB and its induced merit function fFB have the useful properties described in Lemma 2.2, especially when F̂ is strongly monotone, they play a crucial role in solving the subproblems by a generalized Newton method in Section 4. On the other hand, in [3], Chen extended a new merit function for the NCP to the SOCCP and studied conditions under which this new merit function provides a global error bound and has bounded level sets, properties that are important in convergence analysis. In contrast, the merit function fFB lacks these properties. For this reason, we use the new merit function to describe the approximation criterion.

Let ψ0 : ℜl × ℜl → ℜ+ be defined by

ψ0(x, y) := (1/2)∥ΠKl(x ◦ y)∥²,

where the mapping ΠKl(·) denotes the orthogonal projection onto the set Kl. Taking the fixed parameter as in [3], a new merit function is defined as ψ(x, y) := ψ0(x, y) + ψFB(x, y), where ψFB is given by (2.4). Via this new merit function, it was shown that the SOCCP(F̂) is equivalent to the following global minimization problem:

min_{x∈ℜn} f(x), where f(x) := Σ_{i=1}^q ψ(F̂i(x), xi). (2.7)
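The building block ψ0 uses the projection onto the SOC, which simply clamps both spectral values at zero. A minimal pure-Python sketch, with illustrative helper names of our own:

```python
import math

def jordan(a, b):
    head = sum(s * t for s, t in zip(a, b))
    return [head] + [a[0] * bt + b[0] * at for at, bt in zip(a[1:], b[1:])]

def proj_soc(x):
    """Projection onto K^l: clamp the spectral values l1, l2 at zero."""
    x0, xbar = x[0], x[1:]
    nrm = math.sqrt(sum(t * t for t in xbar))
    w = [t / nrm for t in xbar] if nrm > 0 else [1.0] + [0.0] * (len(xbar) - 1)
    l1, l2 = max(x0 - nrm, 0.0), max(x0 + nrm, 0.0)
    return [0.5 * (l1 + l2)] + [0.5 * (l2 - l1) * t for t in w]

def psi0(x, y):
    """psi_0(x, y) = (1/2) || Proj_K(x o y) ||^2."""
    p = proj_soc(jordan(x, y))
    return 0.5 * sum(t * t for t in p)

# For a complementary pair, x o y = 0, so psi_0 vanishes; for a pair whose
# Jordan product lies in int(K^l), psi_0 is positive.
assert psi0([5.0, 3.0, 4.0], [5.0, -3.0, -4.0]) == 0.0
assert psi0([2.0, 0.0, 0.0], [1.0, 0.0, 0.0]) > 0.0
```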


Here ψ is defined with a suitable dimension l.

The properties of the function f given in [3], including the error bound property and the boundedness of level sets, are summarized in the following three lemmas.

Lemma 2.3 Let f be defined as in (2.7).

(a) If F̂ is smooth, then f is smooth and f^{1/2} is uniformly locally Lipschitz continuous on any compact set.

(b) f(ζ) ≥ 0 for all ζ ∈ ℜn, and f(ζ) = 0 if and only if ζ solves the SOCCP(F̂).

(c) Suppose that the SOCCP(F̂) has at least one solution. Then ζ is a global minimizer of f if and only if ζ solves the SOCCP(F̂).

Proof. By [3, Proposition 3.2], we only need to prove that f^{1/2} is Lipschitz continuous on the set {y | f(y) = 0}. It follows from [3, Proposition 3.1] that if f(y) = 0, then yi ◦ F̂i(y) = 0 for all i = 1, 2, . . . , q. Thus, for any y ∈ {y | f(y) = 0}, we have

f(x)^{1/2} − f(y)^{1/2} = f(x)^{1/2}
≤ (1/√2) Σ_{i=1}^q ( ∥ΠKni(xi ◦ F̂i(x))∥ + ∥ϕFB(xi, F̂i(x))∥ )
= (1/√2) Σ_{i=1}^q ( ∥ΠKni(xi ◦ F̂i(x)) − ΠKni(yi ◦ F̂i(y))∥ + ∥ϕFB(xi, F̂i(x)) − ϕFB(yi, F̂i(y))∥ ).

Note that the functions xi ◦ F̂i(x) and ϕFB(xi, F̂i(x)) are Lipschitz continuous provided that F̂ is smooth. Then, from the Lipschitz continuity of ϕFB and the nonexpansivity of the projection onto a convex set, we obtain that f^{1/2} is Lipschitz continuous at y. □

Lemma 2.4 [3, Proposition 4.1] Suppose that F̂ is strongly monotone with modulus ρ > 0 and that ζ∗ is the unique solution of SOCCP(F̂). Then there exists a scalar τ > 0 such that

τ∥ζ − ζ∗∥² ≤ 3√2 f(ζ)^{1/2}, ∀ ζ ∈ ℜn, (2.8)

where f is given by (2.7) and τ can be chosen as

τ := ρ / max{√2, ∥F̂(ζ∗)∥, ∥ζ∗∥}.


Lemma 2.5 [3, Proposition 4.2] Suppose that F̂ is monotone and that SOCCP(F̂) is strictly feasible, i.e., there exists ζ̂ ∈ ℜn such that F̂(ζ̂), ζ̂ ∈ intK. Then the level set

L(r) := { ζ ∈ ℜn | f(ζ) ≤ r }

is bounded for all r ≥ 0, where f is given by (2.7).

Another SOC complementarity function, usually called the natural residual mapping, is defined by

ϕNR(x, y) := x − ΠKl(x − y), ∀ x, y ∈ ℜl,

based on which we define the mapping ΦNR : ℜn → ℜn as

ΦNR(x) := ( ϕNR(x1, F̂1(x)), . . . , ϕNR(xq, F̂q(x)) ). (2.9)

Then it is straightforward to see that SOCCP(F̂) is equivalent to the system of equations ΦNR(x) = 0.
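This equivalence is easy to check numerically. The sketch below (helper names `proj_soc` and `phi_nr` are ours) evaluates the natural residual at a complementary pair, where it vanishes:

```python
import math

def proj_soc(x):
    """Projection onto K^l (standard closed form via the spectral values)."""
    x0, xbar = x[0], x[1:]
    nrm = math.sqrt(sum(t * t for t in xbar))
    if nrm <= x0:
        return list(x)          # already in K^l
    if nrm <= -x0:
        return [0.0] * len(x)   # in the polar cone: projects to the origin
    a = 0.5 * (x0 + nrm)
    return [a] + [a * t / nrm for t in xbar]

def phi_nr(x, y):
    """Natural residual x - Proj_K(x - y); zero iff x, y in K and <x, y> = 0."""
    p = proj_soc([a - b for a, b in zip(x, y)])
    return [a - b for a, b in zip(x, p)]

x = [5.0, 3.0, 4.0]
y = [5.0, -3.0, -4.0]          # complementary pair in K^3
assert all(abs(t) < 1e-12 for t in phi_nr(x, y))
```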

Lemma 2.6 The mapping ΦNR defined as in (2.9) has the following properties.

(a) If F̂ is continuously differentiable, then ΦNR is semismooth.

(b) If ∇F̂ is locally Lipschitz continuous, then ΦNR is strongly semismooth.

(c) If ∇F̂(x) is positive definite, then every V ∈ ∂BΦNR(x) is nonsingular, i.e., ΦNR is strongly BD-regular at x.

Proof. Items (a) and (b) are obvious after combining [5, Proposition 4.3] and [9, Theorem 19]; note that these two items are also proved in [12] by a different approach. The proof of item (c) is similar to those in [27] and [30] for a more general setting, and we omit it. □

From Lemma 2.6(c) we know that the natural residual mapping ΦNR is strongly BD-regular under weaker conditions than ΦFB. In view of this, we will use ΦNR to explore the condition for superlinear convergence of PPA in Section 3.


2.4 Proximal point algorithm

Let T : ℜn ⇒ ℜn be the set-valued mapping defined by

T(x) := F(x) + NK(x). (2.10)

Then T is a maximal monotone mapping, and SOCCP(F) defined by (1.1) is equivalent to the problem of finding a point x such that

0 ∈ T(x).

The proximal point algorithm generates, for any starting point x0, a sequence {xk} by the approximate rule

xk+1 ≈ Pk(xk),

where Pk := ( I + (1/ck)T )^{−1} is a single-valued mapping from ℜn to ℜn, {ck} is some sequence of positive real numbers, and xk+1 ≈ Pk(xk) means that xk+1 is an approximation to Pk(xk).

Accordingly, for SOCCP(F), Pk(xk) is given by

Pk(xk) = ( I + (1/ck)(F + NK) )^{−1}(xk),

from which we have

Pk(xk) ∈ SOL(SOCCP(Fk)),

where Fk is defined by (1.4) and SOL(SOCCP(Fk)) is the solution set of SOCCP(Fk).

Therefore, xk+1 is given by an approximate solution of SOCCP(Fk). Two general criteria for the approximate calculation of Pk(xk), proposed by Rockafellar [24], are as follows:

Criterion 1. ∥xk+1 − Pk(xk)∥ ≤ εk, Σ_{k=0}^∞ εk < ∞.

Criterion 2. ∥xk+1 − Pk(xk)∥ ≤ ηk∥xk+1 − xk∥, Σ_{k=0}^∞ ηk < ∞.

Results on the convergence of the proximal point algorithm have been established in [15, 24], from which we know that Criterion 1 guarantees global convergence, while Criterion 2, which is rather restrictive, ensures superlinear convergence.

Theorem 2.1 Let {xk} be any sequence generated by the PPA under Criterion 1 with {ck} bounded. Suppose SOCCP(F) has at least one solution. Then {xk} converges to a solution x∗ of SOCCP(F).

Proof. This can be proved by arguments similar to those in [24, Theorem 1]. □


Theorem 2.2 Suppose the solution set X̄ of SOCCP(F) is nonempty, and let {xk} be any sequence generated by PPA under Criteria 1 and 2 with ck → 0. Assume also that

∃ δ > 0, ∃ C > 0 such that dist(x, X̄) ≤ C∥ω∥ whenever x ∈ T^{−1}(ω) and ∥ω∥ ≤ δ. (2.11)

Then the sequence {dist(xk, X̄)} converges to 0 superlinearly.

Proof. This can also be verified by arguments similar to those in [15, Theorem 2.1]. □

3 A proximal point algorithm for solving SOCCP

Based on the previous discussion, in this section we describe PPA for solving SOCCP(F) as defined in (1.1), where F is smooth and monotone. We first list the related mappings that will be used in the remainder of this paper.

The mappings ΦNR, ΦFB and fFB are defined by (2.9), (2.5) and (2.6), respectively, with the mapping F̂ replaced by F; the functions fk, fkFB and ΦkFB are defined by (2.7), (2.6) and (2.5), respectively, with the mapping F̂ replaced by Fk given by (1.4), i.e.,

ΦNR(x) := ( ϕNR(x1, F1(x)), . . . , ϕNR(xq, Fq(x)) ),
ΦkFB(x) := ( ϕFB(x1, F1k(x)), . . . , ϕFB(xq, Fqk(x)) ),
ΦFB(x) := ( ϕFB(x1, F1(x)), . . . , ϕFB(xq, Fq(x)) ),

fk(x) := Σ_{i=1}^q ψ(xi, Fik(x)),  fFB(x) := (1/2)∥ΦFB(x)∥²,  fkFB(x) := (1/2)∥ΦkFB(x)∥².

Now we are in a position to describe the proximal point algorithm for solving Problem (1.1).

Algorithm 3.1

Step 0. Choose parameters α ∈ (0, 1), c0 ∈ (0, 1) and an initial point x0 ∈ ℜn. Set k := 0.

Step 1. If xk satisfies fFB(xk) = 0, then stop.

Step 2. Let Fk(x) = F(x) + ck(x − xk). Compute an approximate solution xk+1 of SOCCP(Fk) that satisfies the condition

fk(xk+1) ≤ ck^6 min{1, ∥xk+1 − xk∥^4} / ( 18 max{√2, ∥Fk(Pk(xk))∥, ∥Pk(xk)∥}² ). (3.1)

Step 3. Set ck+1 = αck and k := k + 1. Go to Step 1.
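To make the outer loop concrete, here is a minimal pure-Python sketch of the PPA skeleton on a toy one-block problem with F(x) = x + q, whose solution is (1, 0, 0). The inner solver is a simple projection fixed-point iteration used only for illustration; it stands in for the subproblem solver of Step 2, and the stopping rule (3.1) is not implemented. All function names are ours:

```python
import math

def proj_soc(x):
    """Projection onto K^l via the spectral values (standard closed form)."""
    x0, xbar = x[0], x[1:]
    nrm = math.sqrt(sum(t * t for t in xbar))
    if nrm <= x0:
        return list(x)
    if nrm <= -x0:
        return [0.0] * len(x)
    a = 0.5 * (x0 + nrm)
    return [a] + [a * t / nrm for t in xbar]

def F(x):
    # illustrative monotone mapping F(x) = x + q with q = (-1, 0, 0)
    q = [-1.0, 0.0, 0.0]
    return [a + b for a, b in zip(x, q)]

def solve_subproblem(Fk, x, gamma=0.5, iters=200):
    # inner solver: fixed-point iteration x <- Proj_K(x - gamma Fk(x)),
    # which converges here since the regularized Fk is strongly monotone
    for _ in range(iters):
        x = proj_soc([a - gamma * b for a, b in zip(x, Fk(x))])
    return x

def ppa(x, c=0.5, alpha=0.5, outer=20):
    # outer PPA loop: regularize (1.4), solve the subproblem, shrink c_k
    for _ in range(outer):
        xk = list(x)
        def Fk(z, ck=c, xk=xk):
            return [f + ck * (a - b) for f, a, b in zip(F(z), z, xk)]
        x = solve_subproblem(Fk, xk)
        c *= alpha
    return x

sol = ppa([2.0, 0.0, 0.0])
assert abs(sol[0] - 1.0) < 1e-6
```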

Theorem 3.1 Let X̄ be the solution set of SOCCP(F). If X̄ ≠ ∅, then the sequence {xk} generated by Algorithm 3.1 converges to a solution x∗ of SOCCP(F).

Proof. From Theorem 2.1, it suffices to prove that {xk} satisfies Criterion 1. Since Fk is strongly monotone with modulus ck > 0 and Pk(xk) is the unique solution of SOCCP(Fk), it follows from Lemma 2.4 that

∥xk+1 − Pk(xk)∥² ≤ (3√2/ck) max{√2, ∥Fk(Pk(xk))∥, ∥Pk(xk)∥} fk(xk+1)^{1/2}, (3.2)

which together with (3.1) implies

∥xk+1 − Pk(xk)∥ ≤ ck. (3.3)

Since ck+1 = αck with α ∈ (0, 1), we have Σ_{k=0}^∞ ck < ∞, so Criterion 1 holds. □

To obtain superlinear convergence properties, we need the following assumption, which will be connected to condition (2.11) in Theorem 2.2.

Assumption 1. ∥x − ΠK(x− F(x))∥ provides a local error bound for SOCCP(F), that is, there exist positive constants ¯δ and ¯C such that

dist(x, ¯X) ≤ ¯C∥x − ΠK(x− F(x))∥, for all x with ∥x − ΠK(x− F(x))∥ ≤ ¯δ, (3.4) where ¯X denotes the solution set of SOCCP(F).

The following lemma helps us understand Assumption 1, as it provides conditions under which Assumption 1 holds.

Lemma 3.1 [19, Proposition 3] If a Lipschitz continuous mapping H is strongly BD-regular at x∗, then there exist a neighborhood N of x∗ and a positive constant α such that, for all x ∈ N and V ∈ ∂BH(x), V is nonsingular and ∥V^{−1}∥ ≤ α. If, furthermore, H is semismooth at x∗ and H(x∗) = 0, then there exist a neighborhood N of x∗ and a positive constant β such that ∥x − x∗∥ ≤ β∥H(x)∥ for all x ∈ N.

Note that when ∇F(x∗) is positive definite at a solution x∗ of SOCCP(F), Assumption 1 holds by Lemma 2.6 and Lemma 3.1 (applied with H = ΦNR).

Theorem 3.2 Let T be defined by (2.10). If X̄ ≠ ∅, then Assumption 1 implies condition (2.11); that is, there exist δ > 0 and C > 0 such that

dist(x, X̄) ≤ C∥ω∥ whenever x ∈ T^{−1}(ω) and ∥ω∥ ≤ δ.


Proof. For any x ∈ T^{−1}(ω) we have

ω ∈ T(x) = F(x) + NK(x).

Therefore there exists v ∈ NK(x) such that ω = F(x) + v. Because K is a convex set, it is easy to see that

ΠK(x + v) = x. (3.5)

Since the projection onto a convex set is nonexpansive, we have from (3.5) that

∥x − ΠK(x − F(x))∥ = ∥ΠK(x + v) − ΠK(x − F(x))∥ ≤ ∥v + F(x)∥ = ∥ω∥.

Setting C = C̄ and δ = δ̄, Assumption 1 then yields the desired condition (2.11). □

The following theorem gives the superlinear convergence of Algorithm 3.1; its proof is based on Theorem 3.2 and can be obtained in the same way as that of Theorem 3.1, so we omit it here.

Theorem 3.3 Suppose that Assumption 1 holds. Let{xk} be generated by Algorithm 3.1.

Then the sequence{dist(xk, ¯X)} converges to 0 superlinearly.

Although we have obtained global and superlinear convergence of Algorithm 3.1 under mild conditions, this does not mean that Algorithm 3.1 is practically efficient: it says nothing about how to obtain an approximate solution of the strongly monotone second-order cone complementarity problem in Step 2 satisfying (3.1), or at what cost. We answer these questions in the next section.

4 Generalized Newton method

In this section, we introduce the generalized Newton method proposed by De Luca, Facchinei, and Kanzow [14] for solving the subproblems in Step 2 of Algorithm 3.1. As mentioned earlier, for each fixed k, Problem (1.3) is equivalent to the following nonsmooth equation

ΦkFB(x)= 0. (4.1)

We now describe the generalized Newton method for solving the nonsmooth system (4.1), following its use in [29] for solving the NCP.

Algorithm 4.1 (generalized Newton method for SOCCP(Fk))

Step 0. Choose β ∈ (0, 1/2) and an initial point x0 ∈ ℜn. Set j := 0.

Step 1. If ∥ΦkFB(xj)∥ = 0, then stop.

Step 2. Select an element Vj ∈ ∂BΦkFB(xj). Find the solution dj of the system

Vj d = −ΦkFB(xj). (4.2)

Step 3. Find the smallest nonnegative integer ij such that

fkFB(xj + 2^{−ij} dj) ≤ (1 − β 2^{1−ij}) fkFB(xj).

Step 4. Set xj+1 := xj + 2^{−ij} dj and j := j + 1. Go to Step 1.
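For the special case K = ℜ+^n (all ni = 1), where ϕFB reduces to the scalar Fischer-Burmeister function, Algorithm 4.1 can be sketched in a few lines of pure Python. The mapping F below and all helper names are illustrative assumptions of ours; F plays the role of a strongly monotone Fk with JF = I, so the B-subdifferential element is diagonal and the Newton system is solved componentwise:

```python
import math

def phi(a, b):
    """Scalar Fischer-Burmeister function: for K^1 = R_+, (2.3) reduces to this."""
    return math.hypot(a, b) - a - b

def F(x):
    # illustrative strongly monotone mapping; its solution is x = (1, 0)
    q = [-1.0, 2.0]
    return [xi + qi for xi, qi in zip(x, q)]

def merit(x):
    return 0.5 * sum(phi(a, b) ** 2 for a, b in zip(x, F(x)))

def newton_fb(x, beta=0.25, iters=50, tol=1e-12):
    for _ in range(iters):
        Fx = F(x)
        Phi = [phi(a, b) for a, b in zip(x, Fx)]
        if max(abs(t) for t in Phi) < tol:
            break
        # an element of the B-subdifferential; since JF = I it is diagonal
        d = []
        for a, b, p in zip(x, Fx, Phi):
            r = math.hypot(a, b)
            va = ((a / r - 1.0) + (b / r - 1.0)) if r > 0 else -1.0
            d.append(-p / va)       # solve V d = -Phi componentwise (Step 2)
        # Step 3: halve the step until the sufficient-decrease test holds
        t, f0 = 1.0, merit(x)
        while merit([a + t * s for a, s in zip(x, d)]) > (1.0 - 2.0 * beta * t) * f0:
            t *= 0.5
        x = [a + t * s for a, s in zip(x, d)]   # Step 4
    return x

sol = newton_fb([3.0, 3.0])
assert abs(sol[0] - 1.0) < 1e-8 and abs(sol[1]) < 1e-8
```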

To guarantee that the descent sequence for fkFB has an accumulation point, Pan and Chen [20] gave the following condition, under which the coerciveness of fkFB for SOCCP(Fk) can be established.

Condition 1. For any sequence {xj} ⊆ ℜn satisfying ∥xj∥ → +∞, if there exists an index i ∈ {1, 2, . . . , q} such that {λ1(xij)} and {λ1(Fi(xj))} are bounded below and λ2(xij), λ2(Fi(xj)) → +∞, then

lim sup_{j→∞} ⟨ xij/∥xij∥, Fi(xj)/∥Fi(xj)∥ ⟩ > 0.

As Fk is strongly monotone, it has the uniform Cartesian P-property. From [20], we then have the following theorem.

Theorem 4.1 [20, Proposition 5.2] For SOCCP(Fk), if Condition 1 holds, then the merit function fkFB is coercive.

To obtain the quadratic convergence of Algorithm 4.1, we need the following two assumptions, which are also essential in the subsequent analysis.

Assumption 2. F is continuously differentiable with a locally Lipschitz Jacobian.

Assumption 3. The limit point x∗ of the sequence {xk} generated by Algorithm 3.1 is nondegenerate, i.e., xi∗ + Fi(x∗) ∈ intKni holds for all i ∈ {1, . . . , q}.

Note that when k is large enough, the unique solution Pk(xk) of SOCCP(Fk) is nondegenerate, that is, (Pk(xk))i + Fik(Pk(xk)) ∈ intKni holds for all i ∈ {1, . . . , q}. Because Fk is strongly monotone, we immediately have the following convergence theorem from Lemma 2.2 and [14, Theorem 3.1].

Theorem 4.2 Suppose the sequence {xj} generated by Algorithm 4.1 has an accumulation point and Assumptions 2 and 3 hold. Then {xj} converges globally to the unique solution Pk(xk), and the rate is quadratic.

Note that condition (3.1) in Algorithm 3.1 is equivalent to the following two criteria:

Criterion 1. fk(xk+1) ≤ ck^6 / ( 18 max{√2, ∥Fk(Pk(xk))∥, ∥Pk(xk)∥}² ).

Criterion 2. fk(xk+1) ≤ ck^6 ∥xk+1 − xk∥^4 / ( 18 max{√2, ∥Fk(Pk(xk))∥, ∥Pk(xk)∥}² ).

It follows from Section 3 that Criterion 1 guarantees global convergence, while Criterion 2, which is rather restrictive, ensures superlinear convergence of PPA. Next, we give conditions under which a single Newton step of the generalized Newton method eventually generates a point satisfying Criterion 1 and, for any given r ∈ (0, 1), the following relaxed criterion:

Criterion 2(r). fk(xk+1) ≤ ck^6 ∥xk+1 − xk∥^{4(1−r)} / ( 18 max{√2, ∥Fk(Pk(xk))∥, ∥Pk(xk)∥}² ).

Thereby the PPA becomes practically efficient; we then say that Algorithm 3.1 has approximate genuine superlinear convergence. First, we present two lemmas that relate ∥xk − Pk(xk)∥ to dist(xk, X̄).

Lemma 4.1 Suppose SOCCP(F) is strictly feasible. Then, for sufficiently large k, there exists a constant B1 ≥ 2 such that

2 ≤ max{√2, ∥Fk(Pk(xk))∥, ∥Pk(xk)∥}² ≤ B1. (4.3)

Proof. From Lemma 2.5, the solution set X̄ of SOCCP(F) is bounded, which implies the boundedness of F(X̄). Let m1 > 0 be such that

max{ sup_{x∈X̄} ∥x∥, sup_{x∈X̄} ∥F(x)∥ } ≤ m1.

Since ck → 0, it follows from Theorem 3.1 that the two sequences {xk} and {Pk(xk)} have the same limit point x∗ ∈ X̄. Then there exists a positive constant m2 such that

∥Pk(xk) − x∗∥ ≤ m2 and ∥F(Pk(xk)) − F(x∗)∥ ≤ m2

when k is large enough. Thus, the two inequalities

∥Fk(Pk(xk))∥ = ∥F(Pk(xk)) + ck(Pk(xk) − xk)∥ ≤ ∥F(Pk(xk))∥ + ck∥Pk(xk) − xk∥ ≤ ∥F(x∗)∥ + 2m2

and

∥Pk(xk)∥ ≤ ∥x∗∥ + m2

hold for sufficiently large k. Letting B1 = max{2, (m1 + 2m2)²}, we complete the proof. □


Lemma 4.2 If SOCCP(F) is strictly feasible, then for sufficiently large k there exists a constant B2 > 0 such that

∥xk − Pk(xk)∥ ≤ (B2/ck^{1/2}) dist(xk, X̄)^{1/2}.

Proof. Let x̄k be the nearest point to xk in X̄. From [8, Theorem 2.3.5] we know that X̄ is convex, and hence the mapping ΠX̄(·) is nonexpansive. Therefore,

∥x̄k − x∗∥ = ∥ΠX̄(xk) − ΠX̄(x∗)∥ ≤ ∥xk − x∗∥.

Since {xk} is bounded, so is {x̄k}. Let X̂ be a bounded set containing {xk} and {x̄k}. From Lemma 2.3, f^{1/2} is uniformly Lipschitz continuous on X̂, so there exists L1 > 0 such that

( f(xk) )^{1/2} = ( f(xk) )^{1/2} − ( f(x̄k) )^{1/2} ≤ L1² ∥xk − x̄k∥ = L1² dist(xk, X̄),

which implies that

( f(xk) )^{1/4} ≤ L1 dist(xk, X̄)^{1/2}.

It follows from Lemma 2.4 that

∥xk − Pk(xk)∥² ≤ (3√2/τk) fk(xk)^{1/2}, where τk = ck / max{√2, ∥Fk(Pk(xk))∥, ∥Pk(xk)∥},

which together with Lemma 4.1 yields

√2/ck ≤ 1/τk ≤ B1/ck.

Hence, we have

∥xk − Pk(xk)∥ ≤ (3√2 B1/ck)^{1/2} ( fk(xk) )^{1/4}.

On the other hand, since Fk(xk) = F(xk), we know fk(xk) = f(xk), and hence

∥xk − Pk(xk)∥ ≤ (3√2 B1/ck)^{1/2} ( f(xk) )^{1/4} ≤ (3√2 B1/ck)^{1/2} L1 dist(xk, X̄)^{1/2}.

Letting B2 = (3√2 B1)^{1/2} L1 leads to the desired inequality. □

The next three lemmas give the relationship between ∥xkN − Pk(xk)∥ and ∥xk − Pk(xk)∥, which is the key to the main result of this section. One point deserving attention is that we would not be able to obtain the inequality in Lemma 4.3 without twice continuous differentiability; the reason is explained in [29, Remark 4.1]. To this end, we make the following assumption.

Assumption 4. F is twice continuously differentiable.

Lemma 4.3 Suppose that Assumptions 3 and 4 hold. Then ΦkFB is twice continuously differentiable in a neighborhood of xk for sufficiently large k, and there exists a positive constant B3 such that

∥JΦkFB(xk)(xk − Pk(xk)) − ΦkFB(xk) + ΦkFB(Pk(xk))∥ ≤ B3∥xk − Pk(xk)∥².

Proof. Clearly, when F is twice continuously differentiable and Assumption 3 holds, ΦkFB is twice continuously differentiable near xk and Pk(xk) for k large enough. Then, from the second-order Taylor expansion and the Lipschitz continuity of ∇ΦkFB near xk, there exist positive constants m3, m4 such that, when k is sufficiently large,

∥ΦkFB(xk) − ΦkFB(Pk(xk)) − JΦkFB(Pk(xk))(xk − Pk(xk))∥ ≤ m3∥xk − Pk(xk)∥²,
∥JΦkFB(xk)(xk − Pk(xk)) − JΦkFB(Pk(xk))(xk − Pk(xk))∥ ≤ m4∥xk − Pk(xk)∥².

Letting B3 = m3 + m4, we have for sufficiently large k

∥JΦkFB(xk)(xk − Pk(xk)) − ΦkFB(xk) + ΦkFB(Pk(xk))∥ ≤ B3∥xk − Pk(xk)∥². □

Now let us denote

xkN := xk− Vk−1ΦkFB(xk), Vk ∈ ∂BΦkFB(xk). (4.4) Then xkN is a point produced by a single Newton iteration of Algorithm 4.1 with the initial point xk.

Lemma 4.4 Suppose that Assumption 3 holds. Then $\Phi_{FB}^k$ is differentiable at $x^k$ and the Jacobian $J\Phi_{FB}^k(x^k)$ is nonsingular for sufficiently large $k$.

Proof. Let
$$z_i(x) = \big(x_i^2 + (F_i^k(x))^2\big)^{\frac12} \tag{4.5}$$
for each $i \in \{1, \dots, q\}$. From Assumption 3 and [20, Lemma 4.2], we have that for every $i \in \{1, \dots, q\}$,
$$x_i^k + F_i^k(x^k) \in \operatorname{int}\mathcal{K}^{n_i} \qquad \text{and} \qquad (x_i^k)^2 + (F_i^k(x^k))^2 \in \operatorname{int}\mathcal{K}^{n_i},$$
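Although the proof continues past this excerpt, the interiority conditions above are easy to check numerically: for the second-order cone, $z = (z_0, \bar z) \in \operatorname{int}\mathcal{K}^n$ iff $z_0 > \|\bar z\|$, and the squares are Jordan squares $x \circ x$. The vectors below are made-up interior points of $\mathcal{K}^3$, used purely for illustration.

```python
import numpy as np

def jordan_square(x):
    # Jordan product x o x associated with the second-order cone:
    # x o y = (<x, y>, x0 * y_bar + y0 * x_bar), specialized to y = x.
    x0, xb = x[0], x[1:]
    return np.concatenate(([x0**2 + xb @ xb], 2.0 * x0 * xb))

def in_int_soc(z):
    # z in int K^n  <=>  z0 > ||z_bar||
    # (i.e. the smaller spectral value lambda_1 = z0 - ||z_bar|| is positive).
    return z[0] > np.linalg.norm(z[1:])

# Hypothetical stand-ins for x_i^k and F_i^k(x^k) in K^3.
x = np.array([2.0, 0.5, 0.3])
Fx = np.array([1.5, -0.2, 0.4])

print(in_int_soc(x + Fx))                                 # True
print(in_int_soc(jordan_square(x) + jordan_square(Fx)))   # True
```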
