
# A proximal point algorithm for the monotone second-order cone complementarity problem


DOI 10.1007/s10589-011-9399-x


Jia Wu · Jein-Shan Chen

Received: 4 August 2010 / Published online: 1 March 2011

Abstract This paper is devoted to the study of the proximal point algorithm for solving monotone second-order cone complementarity problems. The proximal point algorithm generates a sequence by solving subproblems that are regularizations of the original problem. After giving an appropriate criterion for approximate solutions of the subproblems, based on a merit function, we verify that the proximal point algorithm has global and superlinear convergence properties. For the purpose of solving the subproblems efficiently, we introduce a generalized Newton method and show that, under mild conditions, only one Newton step is eventually needed to obtain a desired approximate solution that approximately satisfies the criterion. Numerical comparisons are also made with the derivative-free descent method used by Pan and Chen (Optimization 59:1173–1197, 2010), which confirm the theoretical results and the effectiveness of the algorithm.

Keywords Complementarity problem · Second-order cone · Proximal point algorithm · Approximation criterion

J.-S. Chen is a member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author’s work is partially supported by National Science Council of Taiwan.

J. Wu
School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
e-mail: jwu_dora@mail.dlut.edu.cn

J.-S. Chen (corresponding author)
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
e-mail: jschen@math.ntnu.edu.tw


1 Introduction

The second-order cone complementarity problem (SOCCP), which is a natural extension of the nonlinear complementarity problem (NCP), is to find $x \in \mathbb{R}^n$ satisfying

$$\mathrm{SOCCP}(F):\quad \langle F(x), x\rangle = 0,\quad F(x) \in \mathcal{K},\quad x \in \mathcal{K}, \qquad (1.1)$$

where $\langle \cdot,\cdot\rangle$ is the Euclidean inner product, $F$ is a mapping from $\mathbb{R}^n$ into $\mathbb{R}^n$, and $\mathcal{K}$ is the Cartesian product of second-order cones (SOC); in other words,

$$\mathcal{K} = \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_q}, \qquad (1.2)$$

where $q, n_1, \ldots, n_q \ge 1$, $n_1 + \cdots + n_q = n$, and for each $i \in \{1, \ldots, q\}$

$$\mathcal{K}^{n_i} := \{(x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n_i-1} \mid \|\bar{x}\| \le x_0\},$$

with $\|\cdot\|$ denoting the Euclidean norm and $\mathcal{K}^1$ denoting the set of nonnegative reals $\mathbb{R}_+$. If $\mathcal{K} = \mathbb{R}^n_+$, then (1.1) reduces to the nonlinear complementarity problem.

Throughout this paper, corresponding to the Cartesian structure of $\mathcal{K}$, we write $F = (F_1, \ldots, F_q)$ and $x = (x_1, \ldots, x_q)$, with $F_i$ a mapping from $\mathbb{R}^n$ to $\mathbb{R}^{n_i}$ and $x_i \in \mathbb{R}^{n_i}$ for each $i \in \{1, \ldots, q\}$. We also assume that the mapping $F$ is continuously differentiable and monotone.

Until now, a variety of methods for solving the SOCCP have been proposed and investigated. They include interior-point methods [1, 2, 13, 18, 28, 31], smoothing Newton methods [6, 10], the merit function method [5] and the semismooth Newton method [11], where the last three kinds of methods are all based on an SOC complementarity function or a merit function.

The proximal point algorithm (PPA), first proposed by Martinet [16] and further studied by Rockafellar [24], is known for its theoretically nice convergence properties. PPA is a procedure for finding a vector $z$ satisfying $0 \in \mathcal{T}(z)$, where $\mathcal{T}$ is a maximal monotone operator. Therefore, it can be applied to a broad class of problems such as convex programming problems, monotone variational inequality problems, and monotone complementarity problems.

In this paper, motivated by the work of Yamashita and Fukushima [29] for the NCPs, we focus on introducing PPA for solving the SOC complementarity problems.

For SOCCP$(F)$, given the current point $x^k$, PPA obtains the next point $x^{k+1}$ by approximately solving the subproblem

$$\mathrm{SOCCP}(F^k):\quad \langle F^k(x), x\rangle = 0,\quad F^k(x) \in \mathcal{K},\quad x \in \mathcal{K}, \qquad (1.3)$$

where $F^k: \mathbb{R}^n \to \mathbb{R}^n$ is defined by

$$F^k(x) := F(x) + c_k(x - x^k) \qquad (1.4)$$

with $c_k > 0$. It is obvious that $F^k$ is strongly monotone when $F$ is monotone. Then, by [8, Theorem 2.3.3], the subproblem SOCCP$(F^k)$, which is more tractable than the original problem, always has a unique solution. Thus, PPA is well defined. It was pointed out in [15, 24] that with appropriate criteria for approximate solutions of
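The strong monotonicity of $F^k$ is immediate from the monotonicity of $F$; writing it out,

```latex
\langle F^k(x) - F^k(y),\, x - y \rangle
  = \langle F(x) - F(y),\, x - y \rangle + c_k \|x - y\|^2
  \;\ge\; c_k \|x - y\|^2 ,
```

so $F^k$ is strongly monotone with modulus $c_k$.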


subproblems (1.3), PPA has global and superlinear convergence properties under mild conditions. However, those criteria are usually not easy to check. Inspired by [29], we give a practical criterion based on a new merit function for the SOCCP proposed by Chen in [3]. Another implementation issue is how to solve the subproblems efficiently and obtain an approximate solution such that the approximation criterion for the subproblem is fulfilled. We use a generalized Newton method proposed by De Luca et al. [14], which was used in [29] for the NCP case, to solve the subproblems. We also give conditions under which the approximation criterion is eventually approximately fulfilled by a single Newton iteration of the generalized Newton method.

The following notations and terminologies are used throughout the paper. $I$ represents an identity matrix of suitable dimension, $\mathbb{R}^n$ denotes the space of $n$-dimensional real column vectors, and $\mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_q}$ is identified with $\mathbb{R}^{n_1+\cdots+n_q}$. Thus, $(x_1, \ldots, x_q) \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_q}$ is viewed as a column vector in $\mathbb{R}^{n_1+\cdots+n_q}$. For any two vectors $u$ and $v$, the Euclidean inner product is denoted by $\langle u, v\rangle := u^T v$, and for any vector $w$, the norm $\|w\|$ is the one induced by the inner product, i.e., the Euclidean vector norm. For a matrix $M$, the norm $\|M\|$ is the matrix norm induced by the Euclidean vector norm, that is, the spectral norm. Given a differentiable mapping $F: \mathbb{R}^n \to \mathbb{R}^l$, we denote by $\mathcal{J}F(x)$ the Jacobian of $F$ at $x$ and by $\nabla F(x) := \mathcal{J}F(x)^T$ the adjoint of $\mathcal{J}F(x)$. For a symmetric matrix $M$, we write $M \succ O$ (respectively, $M \succeq O$) if $M$ is positive definite (respectively, positive semidefinite). Given a finite number of square matrices $Q_1, \ldots, Q_q$, we denote the block diagonal matrix with these matrices as block diagonals by $\mathrm{diag}(Q_1, \ldots, Q_q)$ or by $\mathrm{diag}(Q_i, i = 1, \ldots, q)$. If $\mathcal{I}$ and $\mathcal{B}$ are index sets with $\mathcal{I}, \mathcal{B} \subseteq \{1, 2, \ldots, q\}$, we denote by $P_{\mathcal{I}\mathcal{B}}$ the block matrix consisting of the sub-matrices $P_{ik} \in \mathbb{R}^{n_i \times n_k}$ of $P$ with $i \in \mathcal{I}, k \in \mathcal{B}$, and denote by $x_{\mathcal{B}}$ a vector consisting of sub-vectors $x_i \in \mathbb{R}^{n_i}$ with $i \in \mathcal{B}$.

The organization of this paper is as follows. In Sect. 2, we recall some notions and background materials. Section 3 is devoted to developing a proximal point method to solve the monotone second-order cone complementarity problem with a practical approximation criterion based on a new merit function. In Sect. 4, a generalized Newton method is introduced to solve the subproblems, and we prove that the proximal point algorithm of Sect. 3 has approximate genuine superlinear convergence under mild conditions, which is the main result of this paper. In Sect. 5, we report numerical results for several test problems. Section 6 gives conclusions.

2 Preliminaries

In this section, we review some background materials that will be used in the sequel.

We first recall some mathematical concepts and the Jordan algebra associated with the SOC. Then we discuss the complementarity functions and three merit functions for the SOCCP. Finally, we briefly review the proximal point algorithm.


2.1 Mathematical concepts

Given a set $\Omega \subseteq \mathbb{R}^n$ locally closed around $\bar{x} \in \Omega$, define the regular normal cone to $\Omega$ at $\bar{x}$ by

$$\hat{N}_\Omega(\bar{x}) := \Big\{ v \in \mathbb{R}^n \ \Big|\ \limsup_{x \overset{\Omega}{\longrightarrow} \bar{x}} \frac{\langle v, x - \bar{x}\rangle}{\|x - \bar{x}\|} \le 0 \Big\}.$$

The (limiting) normal cone to $\Omega$ at $\bar{x}$ is defined by

$$N_\Omega(\bar{x}) := \limsup_{x \overset{\Omega}{\longrightarrow} \bar{x}} \hat{N}_\Omega(x),$$

where "limsup" is the Painlevé-Kuratowski outer limit of sets (see [25]).

We now recall the definitions of monotonicity of a mapping, which are needed for the assumptions throughout this paper. We say that a mapping $G: \mathbb{R}^n \to \mathbb{R}^n$ is monotone if

$$\langle G(\zeta) - G(\xi), \zeta - \xi\rangle \ge 0, \quad \forall \zeta, \xi \in \mathbb{R}^n.$$

Moreover, $G$ is strongly monotone if there exists $\rho > 0$ such that

$$\langle G(\zeta) - G(\xi), \zeta - \xi\rangle \ge \rho \|\zeta - \xi\|^2, \quad \forall \zeta, \xi \in \mathbb{R}^n.$$

It is well known that, when $G$ is continuously differentiable, $G$ is monotone if and only if $\nabla G(\zeta)$ is positive semidefinite for all $\zeta \in \mathbb{R}^n$, while $G$ is strongly monotone if and only if $\nabla G(\zeta)$ is positive definite for all $\zeta \in \mathbb{R}^n$. For more details about monotonicity, please refer to [8].
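For a linear map $G(x) = Mx$, the characterization above reduces to checking the symmetric part of $M$. A small numerical sketch (the matrices below are illustrative choices of ours, not from the paper):

```python
import numpy as np

def is_monotone_linear(M, tol=1e-12):
    """G(x) = M x is monotone iff (M + M^T)/2 is positive semidefinite,
    since <Mx - My, x - y> = (x - y)^T M (x - y) depends only on the
    symmetric part of M."""
    eigs = np.linalg.eigvalsh(0.5 * (M + M.T))
    return eigs.min() >= -tol

A = np.array([[2.0, -1.0], [1.0, 1.0]])   # skew part drops out: monotone
B = np.array([[0.0,  2.0], [0.0, -1.0]])  # symmetric part is indefinite
```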

There is another kind of concept, called Cartesian $P$-properties, which is closely related to monotonicity and was introduced by Chen and Qi [4] for a linear transformation. Here we present the definitions of Cartesian $P$-properties for a matrix $M \in \mathbb{R}^{n \times n}$ and the nonlinear generalization in the setting of $\mathcal{K}$.

A matrix $M \in \mathbb{R}^{n \times n}$ is said to have the Cartesian $P$-property if for any $0 \ne x = (x_1, \ldots, x_q) \in \mathbb{R}^n$ with $x_i \in \mathbb{R}^{n_i}$, there exists an index $\nu \in \{1, 2, \ldots, q\}$ such that $\langle x_\nu, (Mx)_\nu\rangle > 0$. And $M$ is said to have the Cartesian $P_0$-property if the above strict inequality is relaxed to $\langle x_\nu, (Mx)_\nu\rangle \ge 0$, where the chosen index $\nu$ satisfies $x_\nu \ne 0$.

Given a mapping $G = (G_1, \ldots, G_q)$ with $G_i : \mathbb{R}^n \to \mathbb{R}^{n_i}$, $G$ is said to have the uniform Cartesian $P$-property if for any $x = (x_1, \ldots, x_q),\ y = (y_1, \ldots, y_q) \in \mathbb{R}^n$, there exist an index $\nu \in \{1, 2, \ldots, q\}$ and a constant $\rho > 0$ such that

$$\langle x_\nu - y_\nu, G_\nu(x) - G_\nu(y)\rangle \ge \rho \|x - y\|^2.$$

In addition, for a single-valued Lipschitz continuous mapping $G: \mathbb{R}^n \to \mathbb{R}^m$, the B-subdifferential of $G$ at $x$, denoted by $\partial_B G(x)$, is defined as

$$\partial_B G(x) := \Big\{ \lim_{k \to \infty} \mathcal{J}G(x^k) \ \Big|\ x^k \to x,\ G \text{ is differentiable at } x^k \Big\}.$$

The convex hull of $\partial_B G(x)$ is Clarke's generalized Jacobian of $G$ at $x$, denoted by $\partial G(x)$; see [7]. We say that $G$ is strongly BD-regular at $x$ if every element of $\partial_B G(x)$ is nonsingular.


There is another important concept, named semismoothness, which was first introduced in [17] for functionals and extended in [23] to vector-valued functions. Let $G: \mathbb{R}^n \to \mathbb{R}^m$ be a locally Lipschitz continuous mapping. We say that $G$ is semismooth at a point $x \in \mathbb{R}^n$ if $G$ is directionally differentiable and for any $\Delta x \in \mathbb{R}^n$ and $V \in \partial G(x + \Delta x)$ with $\Delta x \to 0$,

$$G(x + \Delta x) - G(x) - V(\Delta x) = o(\|\Delta x\|).$$

Furthermore, $G$ is said to be strongly semismooth at $x$ if $G$ is semismooth at $x$ and for any $\Delta x \in \mathbb{R}^n$ and $V \in \partial G(x + \Delta x)$ with $\Delta x \to 0$,

$$G(x + \Delta x) - G(x) - V(\Delta x) = O(\|\Delta x\|^2).$$

2.2 Jordan algebra associated with SOC

It is known that $\mathcal{K}^l$ is a closed convex self-dual cone with nonempty interior given by

$$\mathrm{int}(\mathcal{K}^l) := \{x = (x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{l-1} \mid x_0 > \|\bar{x}\|\}.$$

For any $x = (x_0, \bar{x}) \in \mathbb{R}^l$ and $y = (y_0, \bar{y}) \in \mathbb{R}^l$, we define their Jordan product as

$$x \circ y = (x^T y,\ y_0\bar{x} + x_0\bar{y}).$$

We write $x^2$ to mean $x \circ x$ and write $x + y$ to mean the usual componentwise addition of vectors. Moreover, if $x \in \mathcal{K}^l$, there exists a unique vector in $\mathcal{K}^l$, which we denote by $x^{1/2}$, such that $(x^{1/2})^2 = x^{1/2} \circ x^{1/2} = x$. We also recall that each $x = (x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{l-1}$ admits a spectral factorization, associated with $\mathcal{K}^l$, of the form

$$x = \lambda_1(x) u_x^1 + \lambda_2(x) u_x^2,$$

where $\lambda_1(x), \lambda_2(x)$ and $u_x^1, u_x^2$ are the spectral values and the associated spectral vectors of $x$, respectively, defined by

$$\lambda_i(x) = x_0 + (-1)^i \|\bar{x}\|, \qquad u_x^i = \tfrac{1}{2}\,(1, (-1)^i \omega), \quad i = 1, 2,$$

with $\omega = \bar{x}/\|\bar{x}\|$ if $\bar{x} \ne 0$, and otherwise $\omega$ being any vector in $\mathbb{R}^{l-1}$ satisfying $\|\omega\| = 1$.

For each $x = (x_0, \bar{x}) \in \mathbb{R}^l$, define the matrix $L_x$ by

$$L_x := \begin{pmatrix} x_0 & \bar{x}^T \\ \bar{x} & x_0 I \end{pmatrix}, \qquad (2.1)$$

which can be viewed as a linear mapping from $\mathbb{R}^l$ to $\mathbb{R}^l$.
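The spectral factorization makes the Jordan-algebraic operations straightforward to compute. The following sketch is our own illustration of the definitions, not code from the paper:

```python
import numpy as np

def jprod(x, y):
    """Jordan product x ∘ y = (x^T y, y0*xbar + x0*ybar)."""
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

def spectral(x):
    """Spectral values lambda_i and spectral vectors u^i of x w.r.t. K^l."""
    x0, xbar = x[0], x[1:]
    nrm = np.linalg.norm(xbar)
    # any unit vector omega works when xbar = 0
    w = xbar / nrm if nrm > 0 else np.eye(len(xbar))[0]
    lam = np.array([x0 - nrm, x0 + nrm])
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return lam, u1, u2

def soc_sqrt(x):
    """x^{1/2} for x in K^l: take square roots of the spectral values."""
    lam, u1, u2 = spectral(x)
    return np.sqrt(lam[0]) * u1 + np.sqrt(lam[1]) * u2

x = np.array([2.0, 1.0, 1.0])  # spectral values 2 -+ sqrt(2), both >= 0
s = soc_sqrt(x)                # s ∘ s recovers x
```

Because the spectral vectors are idempotents with $u^1 \circ u^2 = 0$, squaring $s$ in the Jordan algebra squares the spectral values, which is why `jprod(s, s)` returns `x`.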

Lemma 2.1 The mapping $L_x$ defined by (2.1) has the following properties.

(a) $L_x y = x \circ y$ and $L_{x+y} = L_x + L_y$ for any $y \in \mathbb{R}^l$.

(b) $x \in \mathcal{K}^l$ if and only if $L_x \succeq O$; and $x \in \mathrm{int}\,\mathcal{K}^l$ if and only if $L_x \succ O$.

(c) $L_x$ is invertible whenever $x \in \mathrm{int}\,\mathcal{K}^l$.


Proof Please see [5, 10].

2.3 Complementarity and merit functions associated with SOC

In this subsection, we discuss three reformulations of the SOCCP that will play an important role in the sequel of this paper. We deal with the problem SOCCP$(\hat{F})$, where $\hat{F}: \mathbb{R}^n \to \mathbb{R}^n$ is a certain mapping with the same structure as $F$ in Sect. 1, that is, $\hat{F} = (\hat{F}_1, \ldots, \hat{F}_q)$ with $\hat{F}_i: \mathbb{R}^n \to \mathbb{R}^{n_i}$.

A mapping $\phi: \mathbb{R}^l \times \mathbb{R}^l \to \mathbb{R}^l$ is called an SOC complementarity function associated with the cone $\mathcal{K}^l$ if

$$\phi(x, y) = 0 \ \Longleftrightarrow\ x \in \mathcal{K}^l,\ y \in \mathcal{K}^l,\ \langle x, y\rangle = 0. \qquad (2.2)$$

A popular choice of $\phi$ is the vector-valued Fischer-Burmeister (FB) function, defined by

$$\phi_{FB}(x, y) := (x^2 + y^2)^{1/2} - x - y, \quad \forall x, y \in \mathbb{R}^l. \qquad (2.3)$$

This function was shown in [10] to satisfy the equivalence (2.2), and therefore its squared norm

$$\psi_{FB}(x, y) := \tfrac{1}{2}\|\phi_{FB}(x, y)\|^2 \qquad (2.4)$$

is a merit function. The functions $\phi_{FB}$ and $\psi_{FB}$ were studied in [5, 26]: $\phi_{FB}$ was shown to be semismooth in [26], whereas $\psi_{FB}$ was proved to be smooth everywhere in [5]. Due to these favorable properties, SOCCP$(\hat{F})$ can be reformulated as the following nonsmooth system of equations

$$\Phi_{FB}(x) := \begin{pmatrix} \phi_{FB}(x_1, \hat{F}_1(x)) \\ \vdots \\ \phi_{FB}(x_i, \hat{F}_i(x)) \\ \vdots \\ \phi_{FB}(x_q, \hat{F}_q(x)) \end{pmatrix} = 0, \qquad (2.5)$$

where $\phi_{FB}$ is defined as in (2.3) with a suitable dimension $l$. Moreover, its squared norm induces a smooth merit function, given by

$$f_{FB}(x) := \tfrac{1}{2}\|\Phi_{FB}(x)\|^2 = \sum_{i=1}^{q} \psi_{FB}(x_i, \hat{F}_i(x)). \qquad (2.6)$$

Lemma 2.2 The mappings $\Phi_{FB}$ and $f_{FB}$ defined in (2.5) and (2.6) have the following properties.

(a) If $\hat{F}$ is continuously differentiable, then $\Phi_{FB}$ is semismooth.

(b) If $\nabla\hat{F}$ is locally Lipschitz continuous, then $\Phi_{FB}$ is strongly semismooth.

(c) If $\hat{F}$ is continuously differentiable, then $f_{FB}$ is continuously differentiable everywhere.

(d) If $\hat{F}$ is continuously differentiable and $\nabla\hat{F}(x)$ has the Cartesian $P_0$-property at every $x \in \mathbb{R}^n$, then every stationary point of $f_{FB}$ is a solution to SOCCP$(\hat{F})$.

(e) If $\hat{F}$ is strongly monotone and $x^*$ is a nondegenerate solution of SOCCP$(\hat{F})$, i.e., $\hat{F}_i(x^*) + x_i^* \in \mathrm{int}\,\mathcal{K}^{n_i}$ for all $i \in \{1, \ldots, q\}$, then $\Phi_{FB}$ is strongly BD-regular at $x^*$.

Proof Items (a) and (b) come from [26, Corollary 3.3] and the fact that the composition of (strongly) semismooth functions is (strongly) semismooth by [9, Theorem 19]. Item (c) was shown by Chen and Tseng; it is an immediate consequence of [5, Proposition 2]. Item (d) is due to [19, Proposition 5.1].

For item (e), since $\nabla\hat{F}(x^*)$ is positive definite (which follows from the strong monotonicity of $\hat{F}$) and hence has the Cartesian $P$-property, it follows from [20, Proposition 2.1] that the conditions in [19, Theorem 4.1] are satisfied, and (e) is proved.

Since the complementarity function $\Phi_{FB}$ and its induced merit function $f_{FB}$ have the many useful properties described in Lemma 2.2, especially when $\hat{F}$ is strongly monotone, they play a crucial role in solving the subproblems by the generalized Newton method in Sect. 4. On the other hand, in [3], Chen extended a new merit function for the NCP to the SOCCP and studied conditions under which this merit function provides a global error bound and has bounded level sets, properties that play an important role in the convergence analysis. In contrast, the merit function $f_{FB}$ lacks these properties. For this reason, we use the new merit function to describe the approximation criterion.

Let $\psi_0: \mathbb{R}^l \times \mathbb{R}^l \to \mathbb{R}_+$ be defined by

$$\psi_0(x, y) := \tfrac{1}{2}\|\Pi_{\mathcal{K}^l}(x \circ y)\|^2,$$

where $\Pi_{\mathcal{K}^l}(\cdot)$ denotes the orthogonal projection onto the set $\mathcal{K}^l$. Taking the fixed parameter as in [3], the new merit function is defined as $\psi(x, y) := \psi_0(x, y) + \psi_{FB}(x, y)$, where $\psi_{FB}$ is given by (2.4). Via this merit function, it was shown that SOCCP$(\hat{F})$ is equivalent to the following global minimization problem:

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{where } f(x) := \sum_{i=1}^{q} \psi(x_i, \hat{F}_i(x)). \qquad (2.7)$$

Here $\psi$ is defined with a suitable dimension $l$.
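The function $\psi$ can be evaluated directly from the spectral factorization, since projecting onto $\mathcal{K}^l$ amounts to clipping both spectral values at zero. The self-contained sketch below is our own illustration of the definitions of $\psi_0$, $\psi_{FB}$ and $\psi$, not code from the paper:

```python
import numpy as np

def _spec(z):
    """Spectral values and vectors of z w.r.t. K^l."""
    z0, zbar = z[0], z[1:]
    nrm = np.linalg.norm(zbar)
    w = zbar / nrm if nrm > 0 else np.eye(len(zbar))[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return z0 - nrm, z0 + nrm, u1, u2

def soc_proj(z):
    """Projection onto K^l: clip both spectral values at zero."""
    l1, l2, u1, u2 = _spec(z)
    return max(l1, 0.0) * u1 + max(l2, 0.0) * u2

def soc_sqrt(z):
    """z^{1/2} for z in K^l (spectral values assumed nonnegative)."""
    l1, l2, u1, u2 = _spec(z)
    return np.sqrt(max(l1, 0.0)) * u1 + np.sqrt(max(l2, 0.0)) * u2

def jprod(x, y):
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

def psi(x, y):
    """psi = psi_0 + psi_FB from the text."""
    phi_fb = soc_sqrt(jprod(x, x) + jprod(y, y)) - x - y
    psi0 = 0.5 * np.linalg.norm(soc_proj(jprod(x, y))) ** 2
    return psi0 + 0.5 * np.linalg.norm(phi_fb) ** 2

# x and y below are complementary (x ∘ y = 0, both on the boundary of K^3),
# so psi(x, y) vanishes; a non-complementary pair gives a positive value.
x = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, -1.0, 0.0])
```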

The properties of the function $f$, including the error bound property and the boundedness of level sets, given in [3] are summarized in the following three lemmas.

Lemma 2.3 Let $f$ be defined as in (2.7).

(a) If $\hat{F}$ is smooth, then $f$ is smooth and $f^{1/2}$ is uniformly locally Lipschitz continuous on any compact set.

(b) $f(\zeta) \ge 0$ for all $\zeta \in \mathbb{R}^n$, and $f(\zeta) = 0$ if and only if $\zeta$ solves SOCCP$(\hat{F})$.

(c) Suppose that SOCCP$(\hat{F})$ has at least one solution. Then $\zeta$ is a global minimizer of $f$ if and only if $\zeta$ solves SOCCP$(\hat{F})$.

Proof From [3, Proposition 3.2], we only need to prove that $f^{1/2}$ is Lipschitz continuous on the set $\{y \mid f(y) = 0\}$. It follows from [3, Proposition 3.1] that if $f(y) = 0$, then $y_i \circ \hat{F}_i(y) = 0$ for all $i = 1, 2, \ldots, q$; moreover, $f(y) = 0$ forces $\phi_{FB}(y_i, \hat{F}_i(y)) = 0$ for all $i$. Thus, for any $y \in \{y \mid f(y) = 0\}$, we have

$$\big|f(x)^{1/2} - f(y)^{1/2}\big| = f(x)^{1/2} \le \frac{1}{\sqrt{2}} \sum_{i=1}^{q} \Big( \big\|\Pi_{\mathcal{K}^{n_i}}(x_i \circ \hat{F}_i(x))\big\| + \big\|\phi_{FB}(x_i, \hat{F}_i(x))\big\| \Big)$$

$$= \frac{1}{\sqrt{2}} \sum_{i=1}^{q} \Big( \big\|\Pi_{\mathcal{K}^{n_i}}(x_i \circ \hat{F}_i(x)) - \Pi_{\mathcal{K}^{n_i}}(y_i \circ \hat{F}_i(y))\big\| + \big\|\phi_{FB}(x_i, \hat{F}_i(x)) - \phi_{FB}(y_i, \hat{F}_i(y))\big\| \Big).$$

Note that the functions $x_i \circ \hat{F}_i(x)$ and $\phi_{FB}(x_i, \hat{F}_i(x))$ are Lipschitz continuous provided that $\hat{F}$ is smooth. Then, from the Lipschitz continuity of $\phi_{FB}$ and the nonexpansivity of the projection onto a convex set, we obtain that $f^{1/2}$ is Lipschitz continuous at $y$.

Lemma 2.4 [3, Proposition 4.1] Suppose that $\hat{F}$ is strongly monotone with modulus $\rho > 0$ and that $\zeta^*$ is the unique solution of SOCCP$(\hat{F})$. Then there exists a scalar $\tau > 0$ such that

$$\tau \|\zeta - \zeta^*\|^2 \le 3\sqrt{2}\, f(\zeta)^{1/2}, \quad \forall \zeta \in \mathbb{R}^n, \qquad (2.8)$$

where $f$ is given by (2.7) and $\tau$ can be chosen as

$$\tau := \frac{\rho}{\max\{\sqrt{2}, \|\hat{F}(\zeta^*)\|, \|\zeta^*\|\}}.$$

Lemma 2.5 [3, Proposition 4.2] Suppose that $\hat{F}$ is monotone and that SOCCP$(\hat{F})$ is strictly feasible, i.e., there exists $\hat{\zeta} \in \mathbb{R}^n$ such that $\hat{F}(\hat{\zeta}), \hat{\zeta} \in \mathrm{int}\,\mathcal{K}$. Then the level set

$$\mathcal{L}(r) := \{\zeta \in \mathbb{R}^n \mid f(\zeta) \le r\}$$

is bounded for all $r \ge 0$, where $f$ is given by (2.7).

Another SOC complementarity function, usually called the natural residual mapping, is defined by

$$\phi_{NR}(x, y) := x - \Pi_{\mathcal{K}^l}(x - y), \quad \forall x, y \in \mathbb{R}^l,$$

based on which we define the mapping $\Phi_{NR}: \mathbb{R}^n \to \mathbb{R}^n$ as

$$\Phi_{NR}(x) := \begin{pmatrix} \phi_{NR}(x_1, \hat{F}_1(x)) \\ \vdots \\ \phi_{NR}(x_i, \hat{F}_i(x)) \\ \vdots \\ \phi_{NR}(x_q, \hat{F}_q(x)) \end{pmatrix}. \qquad (2.9)$$

Then it is straightforward to see that SOCCP$(\hat{F})$ is equivalent to the system of equations $\Phi_{NR}(x) = 0$.
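Since the projection onto $\mathcal{K}^l$ has a closed form via the spectral factorization, $\phi_{NR}$ is cheap to evaluate. A small self-contained sketch of ours (illustration only, not the paper's code):

```python
import numpy as np

def soc_proj(z):
    """Projection onto K^l via the spectral factorization:
    clip both spectral values at zero."""
    z0, zbar = z[0], z[1:]
    nrm = np.linalg.norm(zbar)
    w = zbar / nrm if nrm > 0 else np.eye(len(zbar))[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return max(z0 - nrm, 0.0) * u1 + max(z0 + nrm, 0.0) * u2

def phi_nr(x, y):
    """Natural residual phi_NR(x, y) = x - Pi_K(x - y); it vanishes
    exactly when x, y in K^l and <x, y> = 0."""
    return x - soc_proj(x - y)

# A complementary pair (x ∘ y = 0, x, y in K^3): the residual is zero.
x = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, -1.0, 0.0])
```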

Lemma 2.6 The mapping $\Phi_{NR}$ defined as in (2.9) has the following properties.

(a) If $\hat{F}$ is continuously differentiable, then $\Phi_{NR}$ is semismooth.

(b) If $\nabla\hat{F}$ is locally Lipschitz continuous, then $\Phi_{NR}$ is strongly semismooth.

(c) If $\nabla\hat{F}(x)$ is positive definite, then every $V \in \partial_B \Phi_{NR}(x)$ is nonsingular, i.e., $\Phi_{NR}$ is strongly BD-regular at $x$.

Proof Items (a) and (b) follow by combining [6, Proposition 4.3] and [9, Theorem 19]; these two items are also proved in [12] by a different approach. The proof of item (c) is similar to those in [27] and [30] for a more general setting, and we omit it.

From Lemma 2.6(c) we know that the natural residual mapping $\Phi_{NR}$ is strongly BD-regular under weaker conditions than $\Phi_{FB}$. In view of this, we will use $\Phi_{NR}$ to explore the conditions for superlinear convergence of PPA in Sect. 3.

2.4 Proximal point algorithm

Let $\mathcal{T}: \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ be a set-valued mapping defined by

$$\mathcal{T}(x) := F(x) + N_{\mathcal{K}}(x). \qquad (2.10)$$

Then $\mathcal{T}$ is a maximal monotone mapping, and SOCCP$(F)$ defined by (1.1) is equivalent to the problem of finding a point $x$ such that

$$0 \in \mathcal{T}(x).$$

The proximal point algorithm generates, for any starting point $x^0$, a sequence $\{x^k\}$ by the approximate rule

$$x^{k+1} \approx P_k(x^k),$$

where $P_k := (I + \frac{1}{c_k}\mathcal{T})^{-1}$ is a single-valued mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$, $\{c_k\}$ is some sequence of positive real numbers, and $x^{k+1} \approx P_k(x^k)$ means that $x^{k+1}$ is an approximation to $P_k(x^k)$. Accordingly, for SOCCP$(F)$, $P_k(x^k)$ is given by

$$P_k(x^k) = \Big(I + \frac{1}{c_k}(F + N_{\mathcal{K}})\Big)^{-1}(x^k),$$

from which we have

$$P_k(x^k) \in \mathrm{SOL}(\mathrm{SOCCP}(F^k)),$$

where $F^k$ is defined by (1.4) and $\mathrm{SOL}(\mathrm{SOCCP}(F^k))$ is the solution set of SOCCP$(F^k)$. Therefore, $x^{k+1}$ is given by an approximate solution of SOCCP$(F^k)$.

Two general criteria for the approximate calculation of $P_k(x^k)$, proposed by Rockafellar [24], are as follows:

Criterion 2.1

$$\|x^{k+1} - P_k(x^k)\| \le \varepsilon_k, \qquad \sum_{k=0}^{\infty} \varepsilon_k < \infty.$$

Criterion 2.2

$$\|x^{k+1} - P_k(x^k)\| \le \eta_k \|x^{k+1} - x^k\|, \qquad \sum_{k=0}^{\infty} \eta_k < \infty.$$

Results on the convergence of the proximal point algorithm have already been studied in [15, 24], from which we know that Criterion 2.1 guarantees global convergence while Criterion 2.2, which is rather restrictive, ensures superlinear convergence.

Theorem 2.1 Let $\{x^k\}$ be any sequence generated by the PPA under Criterion 2.1 with $\{c_k\}$ bounded. Suppose SOCCP$(F)$ has at least one solution. Then $\{x^k\}$ converges to a solution $x^*$ of SOCCP$(F)$.

Proof This can be proved by arguments similar to those in [24, Theorem 1].

Theorem 2.2 Suppose the solution set $\bar{X}$ of SOCCP$(F)$ is nonempty, and let $\{x^k\}$ be any sequence generated by PPA under Criteria 2.1 and 2.2 with $c_k \to 0$. Assume also that

$$\exists \delta > 0,\ \exists C > 0 \ \text{ s.t. } \ \mathrm{dist}(x, \bar{X}) \le C\|\omega\| \ \text{ whenever } x \in \mathcal{T}^{-1}(\omega) \text{ and } \|\omega\| \le \delta. \qquad (2.11)$$

Then the sequence $\{\mathrm{dist}(x^k, \bar{X})\}$ converges to 0 superlinearly.

Proof This can also be verified by arguments similar to those in [15, Theorem 2.1].

3 A proximal point algorithm for solving SOCCP

Based on the previous discussion, in this section we describe PPA for solving SOCCP$(F)$ as defined in (1.1), where $F$ is smooth and monotone. We first specify the related mappings that will be used in the remainder of this paper.


The mappings $\Phi_{NR}$, $\Phi_{FB}$, $f_{FB}$ are defined by (2.9), (2.5) and (2.6), respectively, with the mapping $\hat{F}$ replaced by $F$. The functions $f^k$, $f_{FB}^k$ and $\Phi_{FB}^k$ are defined by (2.7), (2.6) and (2.5), respectively, with $\hat{F}$ replaced by $F^k$ as given by (1.4). That is,

$$\Phi_{NR}(x) := \begin{pmatrix} \phi_{NR}(x_1, F_1(x)) \\ \vdots \\ \phi_{NR}(x_i, F_i(x)) \\ \vdots \\ \phi_{NR}(x_q, F_q(x)) \end{pmatrix}, \qquad \Phi_{FB}^k(x) := \begin{pmatrix} \phi_{FB}(x_1, F_1^k(x)) \\ \vdots \\ \phi_{FB}(x_i, F_i^k(x)) \\ \vdots \\ \phi_{FB}(x_q, F_q^k(x)) \end{pmatrix},$$

$$\Phi_{FB}(x) := \begin{pmatrix} \phi_{FB}(x_1, F_1(x)) \\ \vdots \\ \phi_{FB}(x_i, F_i(x)) \\ \vdots \\ \phi_{FB}(x_q, F_q(x)) \end{pmatrix},$$

$$f^k(x) := \sum_{i=1}^{q} \psi(x_i, F_i^k(x)), \qquad f_{FB}(x) := \tfrac{1}{2}\|\Phi_{FB}(x)\|^2, \qquad f_{FB}^k(x) := \tfrac{1}{2}\|\Phi_{FB}^k(x)\|^2.$$

Now we are in a position to describe the proximal point algorithm for solving Problem (1.1).

Algorithm 3.1

Step 0. Choose parameters $\alpha \in (0, 1)$, $c_0 \in (0, 1)$ and an initial point $x^0 \in \mathbb{R}^n$. Set $k := 0$.

Step 1. If $x^k$ satisfies $f_{FB}(x^k) = 0$, then stop.

Step 2. Let $F^k(x) = F(x) + c_k(x - x^k)$. Obtain an approximate solution $x^{k+1}$ of SOCCP$(F^k)$ that satisfies the condition

$$f^k(x^{k+1}) \le \frac{c_k^6 \min\{1, \|x^{k+1} - x^k\|^4\}}{18 \max\{\sqrt{2}, \|F^k(P_k(x^k))\|, \|P_k(x^k)\|\}^2}. \qquad (3.1)$$

Step 3. Set $c_{k+1} = \alpha c_k$ and $k := k + 1$. Go to Step 1.
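To make the outer loop concrete, here is a stripped-down sketch of the PPA iteration for the NCP special case ($\mathcal{K} = \mathbb{R}^n_+$, all $n_i = 1$), on an affine monotone $F$ of our own choosing. A crude projected fixed-point iteration stands in for Step 2; the paper instead solves the subproblems with the Newton method of Sect. 4 and checks (3.1) rather than iterating to high accuracy:

```python
import numpy as np

# Illustrative monotone NCP: F(x) = M x + q, K = R^n_+ (the n_i = 1 case).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

def solve_subproblem(Fk, x, step=0.1, iters=2000):
    """Projected fixed-point iteration for the strongly monotone
    subproblem SOCCP(F^k); max(., 0) is the projection onto R^n_+."""
    for _ in range(iters):
        x = np.maximum(x - step * Fk(x), 0.0)
    return x

def ppa(F, x0, alpha=0.5, c=0.5, outer=30):
    x = x0
    for _ in range(outer):
        xk = x.copy()
        Fk = lambda z, xk=xk, c=c: F(z) + c * (z - xk)  # regularization (1.4)
        x = solve_subproblem(Fk, xk)                    # Step 2 (approximate)
        c *= alpha                                      # Step 3
    return x

x_star = ppa(F, np.zeros(2))  # unique solution here is (1/3, 1/3), since F(x*) = 0
```

Each regularized subproblem is strongly monotone even though the crude inner solver only needs plain monotonicity plus the added $c_k$ term to contract.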

Theorem 3.1 Let $\bar{X}$ be the solution set of SOCCP$(F)$. If $\bar{X} \ne \emptyset$, then the sequence $\{x^k\}$ generated by Algorithm 3.1 converges to a solution $x^*$ of SOCCP$(F)$.

Proof From Theorem 2.1, it suffices to prove that $\{x^k\}$ satisfies Criterion 2.1. Since $F^k$ is strongly monotone with modulus $c_k > 0$ and $P_k(x^k)$ is the unique solution of SOCCP$(F^k)$, it follows from Lemma 2.4 that

$$\|x^{k+1} - P_k(x^k)\|^2 \le \frac{3\sqrt{2}}{c_k} \max\{\sqrt{2}, \|F^k(P_k(x^k))\|, \|P_k(x^k)\|\}\, f^k(x^{k+1})^{1/2}, \qquad (3.2)$$

which together with (3.1) implies

$$\|x^{k+1} - P_k(x^k)\| \le c_k. \qquad (3.3)$$

Since $c_k = \alpha^k c_0$ with $\alpha \in (0, 1)$, the series $\sum_k c_k$ converges, so Criterion 2.1 is satisfied.

To obtain superlinear convergence properties, we need the following assumption, which will be connected to condition (2.11) in Theorem 2.2.

Assumption 3.1 $\|x - \Pi_{\mathcal{K}}(x - F(x))\|$ provides a local error bound for SOCCP$(F)$; that is, there exist positive constants $\bar{\delta}$ and $\bar{C}$ such that

$$\mathrm{dist}(x, \bar{X}) \le \bar{C}\|x - \Pi_{\mathcal{K}}(x - F(x))\| \quad \text{for all } x \text{ with } \|x - \Pi_{\mathcal{K}}(x - F(x))\| \le \bar{\delta}, \qquad (3.4)$$

where $\bar{X}$ denotes the solution set of SOCCP$(F)$.

The following lemma helps in understanding Assumption 3.1, as it implies conditions under which Assumption 3.1 holds.

Lemma 3.1 [22, Proposition 3] If a Lipschitz continuous mapping $H$ is strongly BD-regular at $x^*$, then there are a neighborhood $N$ of $x^*$ and a positive constant $\alpha$ such that for all $x \in N$ and $V \in \partial_B H(x)$, $V$ is nonsingular and $\|V^{-1}\| \le \alpha$. If, furthermore, $H$ is semismooth at $x^*$ and $H(x^*) = 0$, then there exist a neighborhood $N'$ of $x^*$ and a positive constant $\beta$ such that $\|x - x^*\| \le \beta\|H(x)\|$ for all $x \in N'$.

Note that when $\nabla F(x^*)$ is positive definite at a solution $x^*$ of SOCCP$(F)$, Assumption 3.1 holds by Lemmas 2.6 and 3.1.

Theorem 3.2 Let $\mathcal{T}$ be defined by (2.10). If $\bar{X} \ne \emptyset$, then Assumption 3.1 implies condition (2.11); that is, there exist $\delta > 0$ and $C > 0$ such that

$$\mathrm{dist}(x, \bar{X}) \le C\|\omega\| \quad \text{whenever } x \in \mathcal{T}^{-1}(\omega) \text{ and } \|\omega\| \le \delta.$$

Proof For all $x \in \mathcal{T}^{-1}(\omega)$ we have

$$\omega \in \mathcal{T}(x) = F(x) + N_{\mathcal{K}}(x).$$

Therefore there exists $v \in N_{\mathcal{K}}(x)$ such that $\omega = F(x) + v$. Because $\mathcal{K}$ is a convex set, it is easy to obtain that

$$\Pi_{\mathcal{K}}(x + v) = x. \qquad (3.5)$$

Noting that the projection onto a convex set is nonexpansive, we have from (3.5) that

$$\|x - \Pi_{\mathcal{K}}(x - F(x))\| = \|\Pi_{\mathcal{K}}(x + v) - \Pi_{\mathcal{K}}(x - F(x))\| \le \|v + F(x)\| = \|\omega\|.$$

Assumption 3.1 with $C = \bar{C}$ and $\delta = \bar{\delta}$ then yields the desired condition (2.11).

The following theorem gives the superlinear convergence of Algorithm 3.1; its proof is based on Theorem 3.2 and can be obtained in the same way as Theorem 3.1, so we omit it here.

Theorem 3.3 Suppose that Assumption 3.1 holds. Let $\{x^k\}$ be generated by Algorithm 3.1. Then the sequence $\{\mathrm{dist}(x^k, \bar{X})\}$ converges to 0 superlinearly.

Although we have obtained the global and superlinear convergence properties of Algorithm 3.1 under mild conditions, this does not mean that Algorithm 3.1 is practically efficient, since it says nothing about how to obtain an approximate solution of the strongly monotone second-order cone complementarity problem in Step 2 satisfying (3.1), nor about the cost of doing so. We address these questions in the next section.

4 Generalized Newton method

In this section, we introduce the generalized Newton method proposed by De Luca, Facchinei, and Kanzow [14] for solving the subproblems in Step 2 of Algorithm 3.1. As mentioned earlier, for each fixed $k$, Problem (1.3) is equivalent to the following nonsmooth equation

$$\Phi_{FB}^k(x) = 0. \qquad (4.1)$$

We now describe the generalized Newton method for solving the nonsmooth system (4.1), adapted from the method introduced in [29] for solving the NCP.

Algorithm 4.1 (Generalized Newton method for SOCCP$(F^k)$)

Step 0. Choose $\beta \in (0, \tfrac{1}{2})$ and an initial point $x^0 \in \mathbb{R}^n$. Set $j := 0$.

Step 1. If $\|\Phi_{FB}^k(x^j)\| = 0$, then stop.

Step 2. Select an element $V_j \in \partial_B \Phi_{FB}^k(x^j)$. Find the solution $d^j$ of the system

$$V_j d = -\Phi_{FB}^k(x^j). \qquad (4.2)$$

Step 3. Find the smallest nonnegative integer $i_j$ such that

$$f_{FB}^k(x^j + 2^{-i_j} d^j) \le (1 - \beta 2^{1-i_j}) f_{FB}^k(x^j).$$

Step 4. Set $x^{j+1} := x^j + 2^{-i_j} d^j$ and $j := j + 1$. Go to Step 1.
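For the NCP special case (all $n_i = 1$), $\phi_{FB}$ reduces to the scalar Fischer-Burmeister function $\phi(a, b) = \sqrt{a^2 + b^2} - a - b$, and an element of $\partial_B \Phi_{FB}$ has a simple closed form, so the whole algorithm fits in a few lines. The following is our own illustrative sketch on a hypothetical affine problem, not the paper's implementation:

```python
import numpy as np

# Hypothetical strongly monotone NCP: F(x) = M x + q, with solution (1, 0).
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, 1.0])
F = lambda x: M @ x + q
JF = lambda x: M

def phi(a, b):
    """Scalar Fischer-Burmeister function."""
    return np.hypot(a, b) - a - b

def newton_fb(x, beta=0.4, tol=1e-14, max_iter=100):
    for _ in range(max_iter):
        Fx = F(x)
        Phi = phi(x, Fx)
        f = 0.5 * Phi @ Phi                 # merit function f_FB^k
        if f < tol:
            break
        r = np.hypot(x, Fx)
        r = np.where(r > 1e-14, r, 1.0)     # guard at the kink (0, 0): the
                                            # fallback (-1, -1) below is a
                                            # valid Clarke subgradient there
        # row i of V: (x_i/r_i - 1) e_i^T + (F_i/r_i - 1) * (row i of JF)
        V = np.diag(x / r - 1.0) + np.diag(Fx / r - 1.0) @ JF(x)
        d = np.linalg.solve(V, -Phi)        # Step 2: V d = -Phi
        t = 1.0                             # Step 3: t = 2^{-i_j}
        for _ in range(30):
            f_new = 0.5 * np.linalg.norm(phi(x + t * d, F(x + t * d))) ** 2
            if f_new <= (1.0 - 2.0 * beta * t) * f:
                break
            t *= 0.5
        x = x + t * d                       # Step 4
    return x

x_sol = newton_fb(np.array([1.0, 1.0]))
```

The line search always terminates: $\nabla f_{FB} = V^T \Phi$, so along $d$ the directional derivative is $-2f$, which dominates the right-hand side for small $t$ whenever $\beta < 1$.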

To guarantee that the descent sequence of $f_{FB}^k$ has an accumulation point, Pan and Chen [19] gave the following condition, under which the coerciveness of $f_{FB}^k$ for SOCCP$(F^k)$ can be established.


Condition 4.1 For any sequence $\{x^j\} \subseteq \mathbb{R}^n$ satisfying $\|x^j\| \to +\infty$, if there exists an index $i \in \{1, 2, \ldots, q\}$ such that $\{\lambda_1(x_i^j)\}$ and $\{\lambda_1(F_i(x^j))\}$ are bounded below and $\lambda_2(x_i^j), \lambda_2(F_i(x^j)) \to +\infty$, then

$$\limsup_{j \to \infty} \Big\langle \frac{x_i^j}{\|x_i^j\|}, \frac{F_i(x^j)}{\|F_i(x^j)\|} \Big\rangle > 0.$$

As $F^k$ is strongly monotone, it has the uniform Cartesian $P$-property. From [19], we have the following theorem.

Theorem 4.1 [19, Proposition 5.2] For SOCCP$(F^k)$, if Condition 4.1 holds, then the merit function $f_{FB}^k$ is coercive.

To obtain the quadratic convergence of Algorithm 4.1, we need the following two assumptions, which are also essential in the follow-up work.

Assumption 4.1 $F$ is continuously differentiable with a locally Lipschitz Jacobian.

Assumption 4.2 The limit point $x^*$ of the sequence $\{x^k\}$ generated by Algorithm 3.1 is nondegenerate, i.e., $x_i^* + F_i(x^*) \in \mathrm{int}\,\mathcal{K}^{n_i}$ holds for all $i \in \{1, \ldots, q\}$.

Note that when $k$ is large enough, the unique solution $P_k(x^k)$ of SOCCP$(F^k)$ is nondegenerate, that is, $(P_k(x^k))_i + F_i^k(P_k(x^k)) \in \mathrm{int}\,\mathcal{K}^{n_i}$ holds for all $i \in \{1, \ldots, q\}$. Because $F^k$ is strongly monotone, we immediately have the following convergence theorem from Lemma 2.2 and [14, Theorem 3.1].

Theorem 4.2 Suppose the sequence $\{x^j\}$ generated by Algorithm 4.1 has an accumulation point and Assumptions 4.1 and 4.2 hold. Then $\{x^j\}$ converges globally to the unique solution $P_k(x^k)$, and the rate is quadratic.

Note that condition (3.1) in Algorithm 3.1 is equivalent to the following two criteria:

Criterion 4.1

$$f^k(x^{k+1}) \le \frac{c_k^6}{18 \max\{\sqrt{2}, \|F^k(P_k(x^k))\|, \|P_k(x^k)\|\}^2}.$$

Criterion 4.2

$$f^k(x^{k+1}) \le \frac{c_k^6 \|x^{k+1} - x^k\|^4}{18 \max\{\sqrt{2}, \|F^k(P_k(x^k))\|, \|P_k(x^k)\|\}^2}.$$

It follows from Sect. 3 that Criterion 4.1 guarantees global convergence, while Criterion 4.2, which is rather restrictive, ensures superlinear convergence of PPA.

Next, we give conditions under which a single Newton step of the generalized Newton method eventually generates a point that satisfies the following two criteria for any given $r \in (0, 1)$: Criterion 4.1 and

Criterion 4.2(r)

$$f^k(x^{k+1}) \le \frac{c_k^6 \|x^{k+1} - x^k\|^{4(1-r)}}{18 \max\{\sqrt{2}, \|F^k(P_k(x^k))\|, \|P_k(x^k)\|\}^2}.$$

Thereby the PPA can be practically efficient; in this case we say that Algorithm 3.1 has approximate genuine superlinear convergence. First, we have the following two lemmas, which indicate the relationship between $\|x^k - P_k(x^k)\|$ and $\mathrm{dist}(x^k, \bar{X})$.

Lemma 4.1 If SOCCP$(F)$ is strictly feasible, then, for sufficiently large $k$, there exists a constant $B_1 \ge 2$ such that

$$2 \le \max\{\sqrt{2}, \|F^k(P_k(x^k))\|, \|P_k(x^k)\|\}^2 \le B_1. \qquad (4.3)$$

Proof From Lemma 2.5, we obtain that the solution set $\bar{X}$ of SOCCP$(F)$ is bounded, which implies the boundedness of $F(\bar{X})$. Let $m_1 > 0$ be such that

$$\max\Big\{\sup_{x \in \bar{X}} \|x\|,\ \sup_{x \in \bar{X}} \|F(x)\|\Big\} \le m_1.$$

Since $c_k \to 0$, it follows from Theorem 3.1 that the two sequences $\{x^k\}$ and $\{P_k(x^k)\}$ have the same limit point $x^* \in \bar{X}$. Then there exists a positive constant $m_2$ such that

$$\|P_k(x^k) - x^*\| \le m_2 \quad \text{and} \quad \|F(P_k(x^k)) - F(x^*)\| \le m_2$$

when $k$ is large enough. Thus, the two inequalities

$$\|F^k(P_k(x^k))\| = \|F(P_k(x^k)) + c_k(P_k(x^k) - x^k)\| \le \|F(P_k(x^k))\| + c_k\|P_k(x^k) - x^k\| \le \|F(x^*)\| + 2m_2$$

and

$$\|P_k(x^k)\| \le \|x^*\| + m_2$$

hold for sufficiently large $k$. Letting $B_1 = \max\{2, (m_1 + 2m_2)^2\}$ completes the proof.


Lemma 4.2 If SOCCP$(F)$ is strictly feasible, then for sufficiently large $k$ there exists a constant $B_2 > 0$ such that

$$\|x^k - P_k(x^k)\| \le \frac{B_2}{\sqrt{c_k}}\, \mathrm{dist}(x^k, \bar{X})^{1/2}.$$

Proof Let $\bar{x}^k$ be the nearest point in $\bar{X}$ to $x^k$. From [8, Theorem 2.3.5] we know that $\bar{X}$ is convex, and hence the mapping $\Pi_{\bar{X}}(\cdot)$ is nonexpansive. Therefore,

$$\|\bar{x}^k - x^*\| = \|\Pi_{\bar{X}}(x^k) - \Pi_{\bar{X}}(x^*)\| \le \|x^k - x^*\|.$$

Since $\{x^k\}$ is bounded, so is $\{\bar{x}^k\}$. Let $\hat{X}$ be a bounded set containing $\{x^k\}$ and $\{\bar{x}^k\}$. From Lemma 2.3, we know that $f^{1/2}$ is uniformly Lipschitz continuous on $\hat{X}$. Then there exists $L_1 > 0$ such that

$$f(x^k)^{1/2} = f(x^k)^{1/2} - f(\bar{x}^k)^{1/2} \le L_1^2 \|x^k - \bar{x}^k\| = L_1^2\, \mathrm{dist}(x^k, \bar{X}),$$

which implies that

$$f(x^k)^{1/4} \le L_1\, \mathrm{dist}(x^k, \bar{X})^{1/2}.$$

It follows from Lemma 2.4 that

$$\|x^k - P_k(x^k)\|^2 \le \frac{3\sqrt{2}}{\tau_k}\, f^k(x^k)^{1/2}, \quad \text{where } \tau_k = \frac{c_k}{\max\{\sqrt{2}, \|F^k(P_k(x^k))\|, \|P_k(x^k)\|\}},$$

which together with Lemma 4.1 yields

$$\frac{\sqrt{2}}{c_k} \le \frac{1}{\tau_k} \le \frac{\sqrt{B_1}}{c_k}.$$

Hence, we have

$$\|x^k - P_k(x^k)\| \le \Big(\frac{3\sqrt{2B_1}}{c_k}\Big)^{1/2} f^k(x^k)^{1/4}.$$

On the other hand, since $F^k(x^k) = F(x^k)$, we know $f^k(x^k) = f(x^k)$ and hence

$$\|x^k - P_k(x^k)\| \le \Big(\frac{3\sqrt{2B_1}}{c_k}\Big)^{1/2} f(x^k)^{1/4} \le \Big(\frac{3\sqrt{2B_1}}{c_k}\Big)^{1/2} L_1\, \mathrm{dist}(x^k, \bar{X})^{1/2}.$$

Letting $B_2 = (3\sqrt{2B_1})^{1/2} L_1$ leads to the desired inequality.


The next three lemmas give the relationship between $\|x_N^k - P_k(x^k)\|$ and $\|x^k - P_k(x^k)\|$, which is the key to the main result of this section. Note that we would not be able to obtain the inequality in Lemma 4.3 without twice continuous differentiability; the reason is explained in [29, Remark 4.1]. To this end, we make the following assumption.

Assumption 4.3 $F$ is twice continuously differentiable.

Lemma 4.3 Suppose that Assumptions 4.2 and 4.3 hold. Then $\Phi_{FB}^k$ is twice continuously differentiable in a neighborhood of $x^k$ for sufficiently large $k$, and there exists a positive constant $B_3$ such that

$$\|\mathcal{J}\Phi_{FB}^k(x^k)(x^k - P_k(x^k)) - \Phi_{FB}^k(x^k) + \Phi_{FB}^k(P_k(x^k))\| \le B_3\|x^k - P_k(x^k)\|^2.$$

Proof It is obvious that when $F$ is twice continuously differentiable and Assumption 4.2 holds, $\Phi_{FB}^k$ is twice continuously differentiable near $x^k$ and $P_k(x^k)$ when $k$ is large enough. Then, from the second-order Taylor expansion and the Lipschitz continuity of $\mathcal{J}\Phi_{FB}^k$ near $x^k$, there exist positive constants $m_3, m_4$ such that, when $k$ is sufficiently large,

$$\|\Phi_{FB}^k(x^k) - \Phi_{FB}^k(P_k(x^k)) - \mathcal{J}\Phi_{FB}^k(P_k(x^k))(x^k - P_k(x^k))\| \le m_3\|x^k - P_k(x^k)\|^2,$$

$$\|\mathcal{J}\Phi_{FB}^k(x^k)(x^k - P_k(x^k)) - \mathcal{J}\Phi_{FB}^k(P_k(x^k))(x^k - P_k(x^k))\| \le m_4\|x^k - P_k(x^k)\|^2.$$

Letting $B_3 = m_3 + m_4$, we obtain the claimed inequality for sufficiently large $k$.

Now let us denote

$$x_N^k := x^k - V_k^{-1}\Phi_{FB}^k(x^k), \qquad V_k \in \partial_B \Phi_{FB}^k(x^k). \qquad (4.4)$$

Then $x_N^k$ is the point produced by a single Newton iteration of Algorithm 4.1 with the initial point $x^k$.

Lemma 4.4 Suppose that Assumption 4.2 holds. Then $\Phi_{FB}^k$ is differentiable at $x^k$ and the Jacobian $\mathcal{J}\Phi_{FB}^k(x^k)$ is nonsingular for sufficiently large $k$.

Proof Let

$$z_i(x) = \big(x_i^2 + (F_i^k(x))^2\big)^{1/2} \qquad (4.5)$$

for each $i \in \{1, \ldots, q\}$. From Assumption 4.2 and [19, Lemma 4.2], we have that for every $i \in \{1, \ldots, q\}$,

$$x_i^k + F_i^k(x^k) \in \mathrm{int}\,\mathcal{K}^{n_i} \quad \text{and} \quad (x_i^k)^2 + (F_i^k(x^k))^2 \in \mathrm{int}\,\mathcal{K}^{n_i},$$
