DOI 10.1007/s10589-011-9399-x

**A proximal point algorithm for the monotone second-order cone complementarity problem**

**Jia Wu · Jein-Shan Chen**

Received: 4 August 2010 / Published online: 1 March 2011

© Springer Science+Business Media, LLC 2011

**Abstract** This paper is devoted to the study of the proximal point algorithm for solving monotone second-order cone complementarity problems. The proximal point algorithm generates a sequence by solving subproblems that are regularizations of the original problem. After giving an appropriate criterion for approximate solutions of the subproblems by adopting a merit function, the proximal point algorithm is verified to have global and superlinear convergence properties. For the purpose of solving the subproblems efficiently, we introduce a generalized Newton method and show that only one Newton step is eventually needed to obtain a desired approximate solution that approximately satisfies the appropriate criterion under mild conditions. Numerical comparisons are also made with the derivative-free descent method used by Pan and Chen (Optimization 59:1173–1197, 2010), which confirm the theoretical results and the effectiveness of the algorithm.

**Keywords** Complementarity problem · Second-order cone · Proximal point algorithm · Approximation criterion

J.-S. Chen is a member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author’s work is partially supported by National Science Council of Taiwan.

J. Wu
School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
e-mail: jwu_dora@mail.dlut.edu.cn

J.-S. Chen (✉)
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
e-mail: jschen@math.ntnu.edu.tw

**1 Introduction**

The second-order cone complementarity problem (SOCCP), which is a natural extension of the nonlinear complementarity problem (NCP), is to find x ∈ ℝⁿ satisfying

SOCCP(F): ⟨F(x), x⟩ = 0, F(x) ∈ K, x ∈ K, (1.1)

where ⟨·, ·⟩ is the Euclidean inner product, F is a mapping from ℝⁿ into ℝⁿ, and K is the Cartesian product of second-order cones (SOC), in other words,

K = K^{n_1} × ··· × K^{n_q}, (1.2)

where q, n_1, ..., n_q ≥ 1, n_1 + ··· + n_q = n, and for each i ∈ {1, ..., q}

K^{n_i} := { (x_0, x̄) ∈ ℝ × ℝ^{n_i − 1} | ‖x̄‖ ≤ x_0 },

with ‖·‖ denoting the Euclidean norm and K^1 denoting the set of nonnegative reals ℝ₊. If K = ℝⁿ₊, then (1.1) reduces to the nonlinear complementarity problem.
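Membership in K can be tested block by block straight from the definition; the following plain-Python sketch is purely illustrative (the function names are ours), with a vector stored as a flat list and the cone structure given by the block sizes n_1, ..., n_q.

```python
import math

def in_soc(x, tol=0.0):
    """Test x = (x0, xbar) ∈ K^l, i.e. ||xbar|| <= x0 (within tol)."""
    x0, xbar = x[0], x[1:]
    return math.sqrt(sum(v * v for v in xbar)) <= x0 + tol

def in_cartesian_K(x, dims):
    """Test x ∈ K = K^{n1} × ... × K^{nq}; dims = [n1, ..., nq]."""
    checks, i = [], 0
    for n in dims:
        checks.append(in_soc(x[i:i + n]))
        i += n
    return all(checks)

print(in_soc([5.0, 3.0, 4.0]))   # ||(3,4)|| = 5 <= 5, so True
print(in_soc([1.0, 3.0, 4.0]))   # False
```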

Throughout this paper, corresponding to the Cartesian structure of K, we write F = (F_1, ..., F_q) and x = (x_1, ..., x_q), with F_i a mapping from ℝⁿ to ℝ^{n_i} and x_i ∈ ℝ^{n_i} for each i ∈ {1, ..., q}. We also assume that the mapping F is continuously differentiable and monotone.

Until now, a variety of methods for solving the SOCCP have been proposed and investigated. They include interior-point methods [1,2,13,18,28,31], smoothing Newton methods [6,10], the merit function method [5], and the semismooth Newton method [11], where the last three kinds of methods are all based on an SOC complementarity function or a merit function.

The proximal point algorithm (PPA), first proposed by Martinet [16] and further studied by Rockafellar [24], is known for its theoretically nice convergence properties. PPA is a procedure for finding a vector z satisfying 0 ∈ T(z), where T is a maximal monotone operator. It can therefore be applied to a broad class of problems such as convex programming problems, monotone variational inequality problems, and monotone complementarity problems.

In this paper, motivated by the work of Yamashita and Fukushima [29] for the NCPs, we focus on introducing PPA for solving the SOC complementarity problems.

For SOCCP(F), given the current point x^k, PPA obtains the next point x^{k+1} by approximately solving the subproblem

SOCCP(F^k): ⟨F^k(x), x⟩ = 0, F^k(x) ∈ K, x ∈ K, (1.3)

where F^k : ℝⁿ → ℝⁿ is defined by

F^k(x) := F(x) + c_k (x − x^k) (1.4)

with c_k > 0. It is obvious that F^k is strongly monotone when F is monotone. Then, by [8, Theorem 2.3.3], the subproblem SOCCP(F^k), which is more tractable than the original problem, always has a unique solution. Thus, PPA is well defined. It was pointed out in [15, 24] that, with appropriate criteria for approximate solutions of the subproblems (1.3), PPA has global and superlinear convergence properties under mild conditions. However, those criteria are usually not easy to check. Inspired by [29], we give a practical criterion based on a new merit function for the SOCCP proposed by Chen in [3]. Another implementation issue is how to solve the subproblems efficiently and obtain an approximate solution such that the approximation criterion for the subproblem is fulfilled. We use a generalized Newton method proposed by De Luca et al. [14], which was used in [29] for the NCP case, to solve the subproblems. We also give conditions under which the approximation criterion is eventually approximately fulfilled by a single Newton iteration of the generalized Newton method.

The following notations and terminologies are used throughout the paper. I represents an identity matrix of suitable dimension, ℝⁿ denotes the space of n-dimensional real column vectors, and ℝ^{n_1} × ··· × ℝ^{n_q} is identified with ℝ^{n_1+···+n_q}. Thus, (x_1, ..., x_q) ∈ ℝ^{n_1} × ··· × ℝ^{n_q} is viewed as a column vector in ℝ^{n_1+···+n_q}. For any two vectors u and v, the Euclidean inner product is denoted by ⟨u, v⟩ := uᵀv, and for any vector w, the norm ‖w‖ is induced by the inner product, which is called the Euclidean vector norm. For a matrix M, the norm ‖M‖ denotes the matrix norm induced by the Euclidean vector norm, that is, the spectral norm. Given a differentiable mapping F : ℝⁿ → ℝˡ, we denote by JF(x) the Jacobian of F at x and by ∇F(x) := JF(x)* the adjoint of JF(x). For a symmetric matrix M, we write M ≻ O (respectively, M ⪰ O) if M is positive definite (respectively, positive semidefinite). Given a finite number of square matrices Q_1, ..., Q_q, we denote the block diagonal matrix with these matrices as block diagonals by diag(Q_1, ..., Q_q) or by diag(Q_i, i = 1, ..., q). If I and B are index sets such that I, B ⊆ {1, 2, ..., q}, we denote by P_{IB} the block matrix consisting of the sub-matrices P_{ik} ∈ ℝ^{n_i × n_k} of P with i ∈ I, k ∈ B, and denote by x_B a vector consisting of sub-vectors x_i ∈ ℝ^{n_i} with i ∈ B.

The organization of this paper is as follows. In Sect. 2, we recall some notions and background materials. Section 3 is devoted to developing a proximal point method for the monotone second-order cone complementarity problem with a practical approximation criterion based on a new merit function. In Sect. 4, a generalized Newton method is introduced to solve the subproblems, and we prove that the proximal point algorithm of Sect. 3 has approximate genuine superlinear convergence under mild conditions, which is the main result of this paper. In Sect. 5, we report numerical results for several test problems. Section 6 gives conclusions.

**2 Preliminaries**

In this section, we review some background materials that will be used in the sequel.

We first recall some mathematical concepts and the Jordan algebra associated with the SOC. Then we discuss the complementarity functions and three merit functions for the SOCCP. Finally, we briefly review the proximal point algorithm.

2.1 Mathematical concepts

Given a set Ω ⊆ ℝⁿ locally closed around x̄ ∈ Ω, define the regular normal cone to Ω at x̄ by

N̂_Ω(x̄) := { v ∈ ℝⁿ | limsup_{x →_Ω x̄} ⟨v, x − x̄⟩ / ‖x − x̄‖ ≤ 0 }.

The (limiting) normal cone to Ω at x̄ is defined by

N_Ω(x̄) := limsup_{x →_Ω x̄} N̂_Ω(x),

where "limsup" is the Painlevé–Kuratowski outer limit of sets (see [25]).

We now recall definitions of monotonicity of a mapping, which are needed for the assumptions throughout this paper. We say that a mapping G : ℝⁿ → ℝⁿ is monotone if

⟨G(ζ) − G(ξ), ζ − ξ⟩ ≥ 0, ∀ζ, ξ ∈ ℝⁿ.

Moreover, G is strongly monotone if there exists ρ > 0 such that

⟨G(ζ) − G(ξ), ζ − ξ⟩ ≥ ρ‖ζ − ξ‖², ∀ζ, ξ ∈ ℝⁿ.

It is well known that, when G is continuously differentiable, G is monotone if and only if ∇G(ζ) is positive semidefinite for all ζ ∈ ℝⁿ, while G is strongly monotone if and only if ∇G(ζ) is positive definite for all ζ ∈ ℝⁿ. For more details about monotonicity, please refer to [8].

There is another kind of concept, called Cartesian P-properties, which has a close relationship with monotonicity and was introduced by Chen and Qi [4] for a linear transformation. Here we present the definitions of Cartesian P-properties for a matrix M ∈ ℝ^{n×n} and the nonlinear generalization in the setting of K.

A matrix M ∈ ℝ^{n×n} is said to have the Cartesian P-property if for any 0 ≠ x = (x_1, ..., x_q) ∈ ℝⁿ with x_i ∈ ℝ^{n_i}, there exists an index ν ∈ {1, 2, ..., q} such that ⟨x_ν, (Mx)_ν⟩ > 0. And M is said to have the Cartesian P_0-property if the above strict inequality becomes ⟨x_ν, (Mx)_ν⟩ ≥ 0, where the chosen index ν satisfies x_ν ≠ 0.

Given a mapping G = (G_1, ..., G_q) with G_i : ℝⁿ → ℝ^{n_i}, G is said to have the uniform Cartesian P-property if for any x = (x_1, ..., x_q), y = (y_1, ..., y_q) ∈ ℝⁿ, there is an index ν ∈ {1, 2, ..., q} and a constant ρ > 0 such that

⟨x_ν − y_ν, G_ν(x) − G_ν(y)⟩ ≥ ρ‖x − y‖².

In addition, for a single-valued Lipschitz continuous mapping G : ℝⁿ → ℝᵐ, the B-subdifferential of G at x, denoted by ∂_B G(x), is defined as

∂_B G(x) := { lim_{k→∞} JG(x^k) | x^k → x, G is differentiable at x^k }.

The convex hull of ∂_B G(x) is Clarke's generalized Jacobian of G at x, denoted by ∂G(x); see [7]. We say that G is strongly BD-regular at x if every element of ∂_B G(x) is nonsingular.

There is another important concept, named semismoothness, which was first introduced in [17] for functionals and extended in [23] to vector-valued functions. Let G : ℝⁿ → ℝᵐ be a locally Lipschitz continuous mapping. We say that G is semismooth at a point x ∈ ℝⁿ if G is directionally differentiable and for any Δx ∈ ℝⁿ and V ∈ ∂G(x + Δx) with Δx → 0,

G(x + Δx) − G(x) − V(Δx) = o(‖Δx‖).

Furthermore, G is said to be strongly semismooth at x if G is semismooth at x and for any Δx ∈ ℝⁿ and V ∈ ∂G(x + Δx) with Δx → 0,

G(x + Δx) − G(x) − V(Δx) = O(‖Δx‖²).

2.2 Jordan algebra associated with SOC

It is known that K^l is a closed convex self-dual cone with nonempty interior given by

int(K^l) := { x = (x_0, x̄) ∈ ℝ × ℝ^{l−1} | x_0 > ‖x̄‖ }.

For any x = (x_0, x̄) ∈ ℝ^l and y = (y_0, ȳ) ∈ ℝ^l, we define their Jordan product as

x ∘ y = (xᵀy, y_0 x̄ + x_0 ȳ).
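The Jordan product is commutative, and e = (1, 0, ..., 0) acts as the identity element of the algebra. A quick list-based sketch (illustrative only, our function name):

```python
def jordan_product(x, y):
    """x ∘ y = (x^T y, y0*xbar + x0*ybar) for x = (x0, xbar), y = (y0, ybar)."""
    dot = sum(a * b for a, b in zip(x, y))
    return [dot] + [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

x, y = [2.0, 1.0, -1.0], [3.0, 0.5, 2.0]
e = [1.0, 0.0, 0.0]                    # identity element
print(jordan_product(x, y))            # [4.5, 4.0, 1.0]
print(jordan_product(x, e) == x)       # True
```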

We write x² to mean x ∘ x and write x + y to mean the usual componentwise addition of vectors. Moreover, if x ∈ K^l, there exists a unique vector in K^l, which we denote by x^{1/2}, such that (x^{1/2})² = x^{1/2} ∘ x^{1/2} = x. And we recall that each x = (x_0, x̄) ∈ ℝ × ℝ^{l−1} admits a spectral factorization, associated with K^l, of the form

x = λ_1(x) u_x^{(1)} + λ_2(x) u_x^{(2)},

where λ_1(x), λ_2(x) and u_x^{(1)}, u_x^{(2)} are the spectral values and the associated spectral vectors of x, respectively, defined by

λ_i(x) = x_0 + (−1)^i ‖x̄‖, u_x^{(i)} = (1/2)(1, (−1)^i ω), i = 1, 2,

with ω = x̄/‖x̄‖ if x̄ ≠ 0, and otherwise ω being any vector in ℝ^{l−1} satisfying ‖ω‖ = 1.
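The spectral factorization gives a practical recipe for computing x^{1/2}: take square roots of the spectral values and recombine with the same spectral vectors. A self-contained sketch under the assumption l ≥ 2 (helper names are ours):

```python
import math

def spectral(x):
    """Return (lam1, lam2, u1, u2) with x = lam1*u1 + lam2*u2 (assumes l >= 2)."""
    x0, xbar = x[0], x[1:]
    nrm = math.sqrt(sum(v * v for v in xbar))
    # any unit vector omega works when xbar = 0; pick the first coordinate axis
    w = [v / nrm for v in xbar] if nrm > 0 else [1.0] + [0.0] * (len(xbar) - 1)
    lam1, lam2 = x0 - nrm, x0 + nrm
    u1 = [0.5] + [-0.5 * v for v in w]
    u2 = [0.5] + [0.5 * v for v in w]
    return lam1, lam2, u1, u2

def soc_sqrt(x):
    """x^{1/2} for x ∈ K^l: apply sqrt to the spectral values."""
    lam1, lam2, u1, u2 = spectral(x)
    s1, s2 = math.sqrt(max(lam1, 0.0)), math.sqrt(max(lam2, 0.0))
    return [s1 * a + s2 * b for a, b in zip(u1, u2)]

x = [3.0, 1.0, 2.0]                  # x ∈ int K^3 since 3 > ||(1, 2)||
s = soc_sqrt(x)
# check (x^{1/2})^2 = x via the Jordan square z∘z = (||z||^2, 2*z0*zbar)
sq = [sum(v * v for v in s)] + [2.0 * s[0] * v for v in s[1:]]
print(all(abs(a - b) < 1e-12 for a, b in zip(sq, x)))   # True
```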

For each x = (x_0, x̄) ∈ ℝ^l, define the matrix L_x by

L_x := ( x_0   x̄ᵀ
         x̄    x_0 I ),   (2.1)

which can be viewed as a linear mapping from ℝ^l to ℝ^l.

**Lemma 2.1** The mapping L_x defined by (2.1) has the following properties.

(a) L_x y = x ∘ y and L_{x+y} = L_x + L_y for any y ∈ ℝ^l.
(b) x ∈ K^l if and only if L_x ⪰ O. And x ∈ int K^l if and only if L_x ≻ O.
(c) L_x is invertible whenever x ∈ int K^l.

Proof Please see [5, 10].
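The arrow-shaped matrix L_x can be formed explicitly, and Lemma 2.1(a) is easy to check numerically; its eigenvalues are x_0 ± ‖x̄‖ together with x_0, which is why positive (semi)definiteness in Lemma 2.1(b) matches (interior) cone membership. A plain-Python sketch (function names ours, illustrative only):

```python
def arrow_matrix(x):
    """L_x = [[x0, xbar^T], [xbar, x0*I]] for x = (x0, xbar)."""
    x0, xbar = x[0], x[1:]
    m = len(xbar)
    L = [[x0] + list(xbar)]
    for i in range(m):
        L.append([xbar[i]] + [x0 if j == i else 0.0 for j in range(m)])
    return L

def matvec(A, y):
    return [sum(a * b for a, b in zip(row, y)) for row in A]

def jordan(x, y):
    return [sum(a * b for a, b in zip(x, y))] + \
           [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

x, y = [3.0, 1.0, 2.0], [1.0, -1.0, 0.5]
print(matvec(arrow_matrix(x), y) == jordan(x, y))   # L_x y = x ∘ y, prints True
```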
2.3 Complementarity and merit functions associated with SOC

In this subsection, we discuss three reformulations of the SOCCP that will play an important role in the sequel of this paper. We deal with the problem SOCCP(F̂), where F̂ : ℝⁿ → ℝⁿ is a certain mapping that has the same structure as F in Sect. 1, that is, F̂ = (F̂_1, ..., F̂_q) with F̂_i : ℝⁿ → ℝ^{n_i}.

A mapping φ : ℝ^l × ℝ^l → ℝ^l is called an SOC complementarity function associated with the cone K^l if

φ(x, y) = 0 ⇔ x ∈ K^l, y ∈ K^l, ⟨x, y⟩ = 0. (2.2)
A popular choice of φ is the vector-valued Fischer–Burmeister (FB) function, defined by

φ_FB(x, y) := (x² + y²)^{1/2} − x − y, ∀x, y ∈ ℝ^l. (2.3)
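Since x² + y² is a sum of Jordan squares and hence lies in K^l, φ_FB can be evaluated directly from the spectral factorization of x² + y². A self-contained sketch for a single cone with l ≥ 2 (helper names are ours); it checks that φ_FB vanishes on a complementary pair:

```python
import math

def _sqrt_soc(z):
    """z^{1/2} via spectral values (z ∈ K^l assumed, l >= 2)."""
    z0, zbar = z[0], z[1:]
    nrm = math.sqrt(sum(v * v for v in zbar))
    w = [v / nrm for v in zbar] if nrm > 0 else [0.0] * len(zbar)
    s1 = math.sqrt(max(z0 - nrm, 0.0))   # clamp tiny negative round-off
    s2 = math.sqrt(max(z0 + nrm, 0.0))
    return [0.5 * (s1 + s2)] + [0.5 * (s2 - s1) * v for v in w]

def phi_fb(x, y):
    """Fischer-Burmeister function phi_FB(x, y) = (x^2 + y^2)^{1/2} - x - y."""
    def jsq(v):  # Jordan square v∘v = (||v||^2, 2*v0*vbar)
        return [sum(a * a for a in v)] + [2.0 * v[0] * a for a in v[1:]]
    z = [a + b for a, b in zip(jsq(x), jsq(y))]
    r = _sqrt_soc(z)
    return [a - b - c for a, b, c in zip(r, x, y)]

x = [1.0, 1.0, 0.0]          # boundary point of K^3
y = [1.0, -1.0, 0.0]         # x, y ∈ K^3 and <x, y> = 0
print(max(abs(v) for v in phi_fb(x, y)))   # 0.0
```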
The function was shown in [10] to satisfy the equivalence (2.2), and therefore its squared norm

ψ_FB(x, y) := (1/2)‖φ_FB(x, y)‖² (2.4)

is a merit function. The functions φ_FB and ψ_FB were studied in the literature [5, 26], in which φ_FB was shown to be semismooth in [26] whereas ψ_FB was proved to be smooth everywhere in [5]. Due to these favorable properties, SOCCP(F̂) can be reformulated as the following nonsmooth system of equations:

Φ_FB(x) := ( φ_FB(x_1, F̂_1(x)), ..., φ_FB(x_i, F̂_i(x)), ..., φ_FB(x_q, F̂_q(x)) )ᵀ = 0, (2.5)

where φ_FB is defined as in (2.3) with a suitable dimension l. Moreover, its squared norm induces a smooth merit function, given by

f_FB(x) := (1/2)‖Φ_FB(x)‖² = Σ_{i=1}^{q} ψ_FB(x_i, F̂_i(x)). (2.6)

**Lemma 2.2** The mappings Φ_FB and f_FB defined in (2.5) and (2.6) have the following properties.

(a) If F̂ is continuously differentiable, then Φ_FB is semismooth.
(b) If ∇F̂ is locally Lipschitz continuous, then Φ_FB is strongly semismooth.
(c) If F̂ is continuously differentiable, then f_FB is continuously differentiable everywhere.
(d) If F̂ is continuously differentiable and ∇F̂(x) at any x ∈ ℝⁿ has the Cartesian P_0-property, then every stationary point of f_FB is a solution to SOCCP(F̂).
(e) If F̂ is strongly monotone and x* is a nondegenerate solution of SOCCP(F̂), i.e., F̂_i(x*) + x*_i ∈ int K^{n_i} for all i ∈ {1, ..., q}, then Φ_FB is strongly BD-regular at x*.

Proof Items (a) and (b) come from [26, Corollary 3.3] and the fact that the composite of (strongly) semismooth functions is (strongly) semismooth by [9, Theorem 19]. Item (c) was shown by Chen and Tseng, being an immediate consequence of [5, Proposition 2]. Item (d) is due to [19, Proposition 5.1].

For item (e), since ∇F̂(x*) has the Cartesian P-property and is positive definite, which follows from the strong monotonicity of F̂, it follows from [20, Proposition 2.1] that the conditions in [19, Theorem 4.1] are satisfied, and hence (e) is proved.

Since the complementarity function Φ_FB and its induced merit function f_FB have the many useful properties described in Lemma 2.2, especially when F̂ is strongly monotone, they play a crucial role in solving the subproblems by the generalized Newton method in Sect. 4. On the other hand, in [3], Chen extended a new merit function for the NCP to the SOCCP and studied conditions under which the new merit function provides a global error bound and has bounded level sets, properties which play an important role in convergence analysis. In contrast, the merit function f_FB lacks these properties. For this reason, we utilize this new merit function to describe the approximation criterion.

Let ψ_0 : ℝ^l × ℝ^l → ℝ₊ be defined by

ψ_0(x, y) := (1/2)‖Π_{K^l}(x ∘ y)‖²,

where the mapping Π_{K^l}(·) denotes the orthogonal projection onto the set K^l. After taking the fixed parameter as in [3], a new merit function is defined as ψ(x, y) := ψ_0(x, y) + ψ_FB(x, y), where ψ_FB is given by (2.4). Via the new merit function, it was shown that SOCCP(F̂) is equivalent to the following global minimization problem:

min_{x ∈ ℝⁿ} f(x) where f(x) := Σ_{i=1}^{q} ψ(x_i, F̂_i(x)). (2.7)

Here ψ is defined with a suitable dimension l.
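The projection Π_{K^l} has a closed form via the spectral factorization: clamp the spectral values at zero and recombine. The sketch below (helper names ours, assumes l ≥ 2) uses it to evaluate ψ_0; the full merit function ψ then adds the ψ_FB term from (2.4).

```python
import math

def proj_soc(z):
    """Orthogonal projection onto K^l: clamp spectral values at zero (l >= 2)."""
    z0, zbar = z[0], z[1:]
    nrm = math.sqrt(sum(v * v for v in zbar))
    w = [v / nrm for v in zbar] if nrm > 0 else [0.0] * len(zbar)
    lam1, lam2 = max(z0 - nrm, 0.0), max(z0 + nrm, 0.0)
    return [0.5 * (lam1 + lam2)] + [0.5 * (lam2 - lam1) * v for v in w]

def psi0(x, y):
    """psi_0(x, y) = 0.5 * || Pi_{K^l}(x ∘ y) ||^2."""
    xy = [sum(a * b for a, b in zip(x, y))] + \
         [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]
    p = proj_soc(xy)
    return 0.5 * sum(v * v for v in p)

print(proj_soc([0.0, 1.0, 0.0]))   # [0.5, 0.5, 0.0]
```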

The properties of the function f, including the error bound property and the boundedness of level sets, which are given in [3], are summarized in the following three lemmas.

**Lemma 2.3** Let f be defined as in (2.7).

(a) If F̂ is smooth, then f is smooth and f^{1/2} is uniformly locally Lipschitz continuous on any compact set.
(b) f(ζ) ≥ 0 for all ζ ∈ ℝⁿ, and f(ζ) = 0 if and only if ζ solves SOCCP(F̂).
(c) Suppose that SOCCP(F̂) has at least one solution. Then ζ is a global minimizer of f if and only if ζ solves SOCCP(F̂).

Proof From [3, Proposition 3.2], we only need to prove that f^{1/2} is Lipschitz continuous on the set {y | f(y) = 0}. It follows from [3, Proposition 3.1] that if f(y) = 0, then y_i ∘ F̂_i(y) = 0 for all i = 1, 2, ..., q. Thus, for any y ∈ {y | f(y) = 0}, we have

f(x)^{1/2} − f(y)^{1/2} = f(x)^{1/2}
 ≤ (1/√2) Σ_{i=1}^{q} [ ‖Π_{K^{n_i}}(x_i ∘ F̂_i(x))‖ + ‖φ_FB(x_i, F̂_i(x))‖ ]
 = (1/√2) Σ_{i=1}^{q} [ ‖Π_{K^{n_i}}(x_i ∘ F̂_i(x)) − Π_{K^{n_i}}(y_i ∘ F̂_i(y))‖ + ‖φ_FB(x_i, F̂_i(x)) − φ_FB(y_i, F̂_i(y))‖ ].

Note that the functions x_i ∘ F̂_i(x) and φ_FB(x_i, F̂_i(x)) are Lipschitz continuous provided that F̂ is smooth. Then, from the Lipschitz continuity of φ_FB and the nonexpansivity of the projection onto a convex set, we obtain that f^{1/2} is Lipschitz continuous at y.

**Lemma 2.4** [3, Proposition 4.1] Suppose that F̂ is strongly monotone with modulus ρ > 0 and ζ* is the unique solution of SOCCP(F̂). Then there exists a scalar τ > 0 such that

τ‖ζ − ζ*‖² ≤ 3√2 f(ζ)^{1/2}, ∀ζ ∈ ℝⁿ, (2.8)

where f is given by (2.7) and τ can be chosen as

τ := ρ / max{√2, ‖F̂(ζ*)‖, ‖ζ*‖}.

**Lemma 2.5** [3, Proposition 4.2] Suppose that F̂ is monotone and that SOCCP(F̂) is strictly feasible, i.e., there exists ζ̂ ∈ ℝⁿ such that F̂(ζ̂), ζ̂ ∈ int K. Then the level set

L(r) := { ζ ∈ ℝⁿ | f(ζ) ≤ r }

is bounded for all r ≥ 0, where f is given by (2.7).

Another SOC complementarity function, which we usually call the natural residual mapping, is defined by

φ_NR(x, y) := x − Π_{K^l}(x − y), ∀x, y ∈ ℝ^l,

based on which we define the mapping Φ_NR : ℝⁿ → ℝⁿ as

Φ_NR(x) := ( φ_NR(x_1, F̂_1(x)), ..., φ_NR(x_i, F̂_i(x)), ..., φ_NR(x_q, F̂_q(x)) )ᵀ. (2.9)

Then it is straightforward to see that SOCCP(F̂) is equivalent to the system of equations Φ_NR(x) = 0.

**Lemma 2.6** The mapping Φ_NR defined as in (2.9) has the following properties.

(a) If F̂ is continuously differentiable, then Φ_NR is semismooth.
(b) If ∇F̂ is locally Lipschitz continuous, then Φ_NR is strongly semismooth.
(c) If ∇F̂(x) is positive definite, then every V ∈ ∂_B Φ_NR(x) is nonsingular, i.e., Φ_NR is strongly BD-regular at x.

Proof Items (a) and (b) are obvious after combining [6, Proposition 4.3] and [9, Theorem 19]. Note that these two items are also proved in [12] by a different approach. The proof of item (c) is similar to that in [27] and [30] for a more general setting, and we omit it.

From Lemma 2.6(c) we know that the natural residual mapping Φ_NR is strongly BD-regular under weaker conditions than Φ_FB. In view of this, we will use Φ_NR to explore the condition for superlinear convergence of PPA in Sect. 3.

2.4 Proximal point algorithm

Let T : ℝⁿ ⇒ ℝⁿ be the set-valued mapping defined by

T(x) := F(x) + N_K(x). (2.10)

Then T is a maximal monotone mapping, and SOCCP(F) defined by (1.1) is equivalent to the problem of finding a point x such that

0 ∈ T(x).

The proximal point algorithm generates, for any starting point x^0, a sequence {x^k} by the approximate rule

x^{k+1} ≈ P_k(x^k),

where P_k := (I + (1/c_k) T)^{−1} is a single-valued mapping from ℝⁿ to ℝⁿ, {c_k} is some sequence of positive real numbers, and x^{k+1} ≈ P_k(x^k) means that x^{k+1} is an approximation to P_k(x^k). Accordingly, for SOCCP(F), P_k(x^k) is given by

P_k(x^k) = (I + (1/c_k)(F + N_K))^{−1}(x^k),

from which we have

P_k(x^k) ∈ SOL(SOCCP(F^k)),

where F^k is defined by (1.4) and SOL(SOCCP(F^k)) is the solution set of SOCCP(F^k). Therefore, x^{k+1} is given by an approximate solution of SOCCP(F^k).

Two general criteria for the approximate calculation of P_k(x^k), proposed by Rockafellar [24], are as follows:

**Criterion 2.1**

‖x^{k+1} − P_k(x^k)‖ ≤ ε_k, Σ_{k=0}^{∞} ε_k < ∞.

**Criterion 2.2**

‖x^{k+1} − P_k(x^k)‖ ≤ η_k ‖x^{k+1} − x^k‖, Σ_{k=0}^{∞} η_k < ∞.

Results on the convergence of the proximal point algorithm have been studied in [15, 24], from which we know that Criterion 2.1 guarantees global convergence, while Criterion 2.2, which is rather restrictive, ensures superlinear convergence.

**Theorem 2.1** Let {x^k} be any sequence generated by the PPA under Criterion 2.1 with {c_k} bounded. Suppose SOCCP(F) has at least one solution. Then {x^k} converges to a solution x* of SOCCP(F).

Proof This can be proved by arguments similar to those in [24, Theorem 1].

**Theorem 2.2** Suppose the solution set X̄ of SOCCP(F) is nonempty, and let {x^k} be any sequence generated by PPA under Criteria 2.1 and 2.2 with c_k → 0. Let us also assume that

∃δ > 0, ∃C > 0, s.t. dist(x, X̄) ≤ C‖ω‖ whenever x ∈ T^{−1}(ω) and ‖ω‖ ≤ δ. (2.11)

Then the sequence {dist(x^k, X̄)} converges to 0 superlinearly.

Proof This can also be verified by arguments similar to those in [15, Theorem 2.1].

**3 A proximal point algorithm for solving SOCCP**

Based on the previous discussion, in this section we describe PPA for solving SOCCP(F) as defined in (1.1), where F is smooth and monotone. We first specify the related mappings that will be used in the remainder of this paper.

The mappings Φ_NR, Φ_FB, f_FB are defined by (2.9), (2.5) and (2.6), respectively, where the mapping F̂ is substituted by F. And the functions f^k, f^k_FB and Φ^k_FB are defined by (2.7), (2.6) and (2.5), respectively, where the mapping F̂ is replaced by F^k given by (1.4); that is,

Φ_NR(x) := ( φ_NR(x_1, F_1(x)), ..., φ_NR(x_i, F_i(x)), ..., φ_NR(x_q, F_q(x)) )ᵀ,

Φ^k_FB(x) := ( φ_FB(x_1, F^k_1(x)), ..., φ_FB(x_i, F^k_i(x)), ..., φ_FB(x_q, F^k_q(x)) )ᵀ,

Φ_FB(x) := ( φ_FB(x_1, F_1(x)), ..., φ_FB(x_i, F_i(x)), ..., φ_FB(x_q, F_q(x)) )ᵀ,

f^k(x) := Σ_{i=1}^{q} ψ(x_i, F^k_i(x)), f_FB(x) := (1/2)‖Φ_FB(x)‖², f^k_FB(x) := (1/2)‖Φ^k_FB(x)‖².

Now we are in a position to describe the proximal point algorithm for solving Problem (1.1).

**Algorithm 3.1**

**Step 0.** Choose parameters α ∈ (0, 1), c_0 ∈ (0, 1) and an initial point x^0 ∈ ℝⁿ. Set k := 0.

**Step 1.** If x^k satisfies f_FB(x^k) = 0, then stop.

**Step 2.** Let F^k(x) = F(x) + c_k(x − x^k). Obtain an approximate solution x^{k+1} of SOCCP(F^k) that satisfies the condition

f^k(x^{k+1}) ≤ c_k^6 min{1, ‖x^{k+1} − x^k‖^4} / ( 18 max{√2, ‖F^k(P_k(x^k))‖, ‖P_k(x^k)‖}^2 ). (3.1)

**Step 3.** Set c_{k+1} = αc_k and k := k + 1. Go to Step 1.
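The outer loop of Algorithm 3.1 can be sketched on a toy scalar NCP (K = ℝ₊ with affine F(x) = a·x − b, a > 0), where each strongly monotone regularized subproblem has a closed-form solution; for simplicity this sketch solves the subproblems exactly instead of enforcing the inexact criterion (3.1), and the stopping test (the natural residual min(x, F(x))) is our own choice. Illustrative only:

```python
def ppa_scalar_ncp(F_affine, x0, c0=0.5, alpha=0.5, tol=1e-10, max_iter=100):
    """Proximal point loop (Algorithm 3.1 skeleton) for the scalar NCP
    0 <= x  ⟂  a*x - b >= 0 with a > 0, encoded as F_affine = (a, b).
    The subproblem F^k(x) = F(x) + c_k*(x - x^k) is affine and strongly
    monotone, so its unique NCP solution is available in closed form."""
    a, b = F_affine
    x, c = x0, c0
    for _ in range(max_iter):
        # subproblem: (a + c)*x - (b + c*x^k); unique solution of the NCP:
        x_new = max(0.0, (b + c * x) / (a + c))
        if abs(x_new - x) <= tol and abs(min(x_new, a * x_new - b)) <= tol:
            return x_new
        x, c = x_new, alpha * c          # Step 3: shrink the parameter c_k
    return x

print(round(ppa_scalar_ncp((1.0, 1.0), -3.0), 6))   # solution of x ⟂ x-1 is 1.0
```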

**Theorem 3.1** Let X̄ be the solution set of SOCCP(F). If X̄ ≠ ∅, then the sequence {x^k} generated by Algorithm 3.1 converges to a solution x* of SOCCP(F).

Proof From Theorem 2.1, it suffices to prove that {x^k} satisfies Criterion 2.1. Since F^k is strongly monotone with modulus c_k > 0 and P_k(x^k) is the unique solution of SOCCP(F^k), it follows from Lemma 2.4 that

‖x^{k+1} − P_k(x^k)‖² ≤ (3√2 / c_k) max{√2, ‖F^k(P_k(x^k))‖, ‖P_k(x^k)‖} f^k(x^{k+1})^{1/2}, (3.2)

which together with (3.1) implies

‖x^{k+1} − P_k(x^k)‖ ≤ c_k. (3.3)

Since c_{k+1} = αc_k with α ∈ (0, 1), the sequence {c_k} is summable, so Criterion 2.1 holds with ε_k = c_k.

To obtain superlinear convergence properties, we need the following assumption, which will be connected to condition (2.11) in Theorem 2.2.

**Assumption 3.1** ‖x − Π_K(x − F(x))‖ provides a local error bound for SOCCP(F); that is, there exist positive constants δ̄ and C̄ such that

dist(x, X̄) ≤ C̄ ‖x − Π_K(x − F(x))‖ for all x with ‖x − Π_K(x − F(x))‖ ≤ δ̄, (3.4)

where X̄ denotes the solution set of SOCCP(F).

The following lemma helps us understand Assumption 3.1, as it implies conditions under which Assumption 3.1 holds.

**Lemma 3.1** [22, Proposition 3] If a Lipschitz continuous mapping H is strongly BD-regular at x*, then there are a neighborhood N of x* and a positive constant α such that for all x ∈ N and V ∈ ∂_B H(x), V is nonsingular and ‖V^{−1}‖ ≤ α. If, furthermore, H is semismooth at x* and H(x*) = 0, then there exist a neighborhood N′ of x* and a positive constant β such that ‖x − x*‖ ≤ β‖H(x)‖ for all x ∈ N′.

Note that when ∇F(x) is positive definite at a solution x of SOCCP(F), Assumption 3.1 holds by Lemmas 2.6 and 3.1.

**Theorem 3.2** Let T be defined by (2.10). If X̄ ≠ ∅, then Assumption 3.1 implies condition (2.11); that is, there exist δ > 0 and C > 0 such that

dist(x, X̄) ≤ C‖ω‖ whenever x ∈ T^{−1}(ω) and ‖ω‖ ≤ δ.

Proof For all x ∈ T^{−1}(ω) we have

ω ∈ T(x) = F(x) + N_K(x).

Therefore there exists v ∈ N_K(x) such that ω = F(x) + v. Because K is a convex set, it is easy to obtain that

Π_K(x + v) = x. (3.5)

Noting that the projection onto a convex set is nonexpansive, we have from (3.5) that

‖x − Π_K(x − F(x))‖ = ‖Π_K(x + v) − Π_K(x − F(x))‖ ≤ ‖v + F(x)‖ = ‖ω‖.

From Assumption 3.1, letting C = C̄ and δ = δ̄ yields the desired condition (2.11).

The following theorem gives the superlinear convergence of Algorithm 3.1, whose proof is based on Theorem 3.2 and can be obtained in the same way as Theorem 3.1. We omit the proof here.

**Theorem 3.3** Suppose that Assumption 3.1 holds. Let {x^k} be generated by Algorithm 3.1. Then the sequence {dist(x^k, X̄)} converges to 0 superlinearly.

Although we have obtained the global and superlinear convergence properties of Algorithm 3.1 under mild conditions, this does not mean that Algorithm 3.1 is practically efficient, as it says nothing about how to obtain an approximate solution of the strongly monotone second-order cone complementarity problem in Step 2 satisfying (3.1), nor at what cost. We give the answer in the next section.

**4 Generalized Newton method**

In this section, we introduce the generalized Newton method proposed by De Luca, Facchinei, and Kanzow [14] for solving the subproblems in Step 2 of Algorithm 3.1.

As mentioned earlier, for each fixed k, Problem (1.3) is equivalent to the nonsmooth equation

Φ^k_FB(x) = 0. (4.1)

We now describe the generalized Newton method for solving the nonsmooth system (4.1), which is adapted from the method introduced in [29] for solving the NCP.

**Algorithm 4.1** (Generalized Newton method for SOCCP(F^k))

**Step 0.** Choose β ∈ (0, 1/2) and an initial point x^0 ∈ ℝⁿ. Set j := 0.

**Step 1.** If ‖Φ^k_FB(x^j)‖ = 0, then stop.

**Step 2.** Select an element V^j ∈ ∂_B Φ^k_FB(x^j). Find the solution d^j of the system

V^j d = −Φ^k_FB(x^j). (4.2)

**Step 3.** Find the smallest nonnegative integer i_j such that

f^k_FB(x^j + 2^{−i_j} d^j) ≤ (1 − β 2^{1−i_j}) f^k_FB(x^j).

**Step 4.** Set x^{j+1} := x^j + 2^{−i_j} d^j and j := j + 1. Go to Step 1.
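For a feel of Steps 1–4, here is the method specialized to a scalar NCP (K = ℝ₊, n = 1), where Φ_FB(x) = φ_FB(x, F(x)) with φ_FB(a, b) = √(a² + b²) − a − b, the Newton system (4.2) becomes a scalar division, and Step 3 is the halving rule with t = 2^{−i_j}. The B-subdifferential formula below assumes (a, b) ≠ (0, 0) along the iterates; this is purely a sketch with names of our choosing.

```python
import math

def phi_fb(a, b):
    """Scalar Fischer-Burmeister function."""
    return math.hypot(a, b) - a - b

def newton_fb_scalar(F, dF, x, beta=0.25, tol=1e-12, max_iter=50):
    """Algorithm 4.1 sketch for the scalar NCP 0 <= x ⟂ F(x) >= 0.
    Assumes (x, F(x)) != (0, 0) at every iterate so that the
    generalized derivative V below is well defined."""
    for _ in range(max_iter):
        a, b = x, F(x)
        phi = phi_fb(a, b)
        if abs(phi) <= tol:                         # Step 1
            return x
        r = math.hypot(a, b)
        V = (a / r - 1.0) + (b / r - 1.0) * dF(x)   # element of ∂_B Φ_FB(x), Step 2
        d = -phi / V                                # Newton equation (4.2)
        t, f0 = 1.0, 0.5 * phi * phi
        while 0.5 * phi_fb(x + t * d, F(x + t * d)) ** 2 > (1.0 - 2.0 * beta * t) * f0:
            t *= 0.5                                # Step 3: t = 2^{-i_j}
        x = x + t * d                               # Step 4
    return x

print(round(newton_fb_scalar(lambda z: z - 1.0, lambda z: 1.0, 3.0), 6))
```

Note that for strongly monotone F (so dF > 0) the scalar V is strictly negative whenever (a, b) ≠ (0, 0), which makes the Newton direction a descent direction for f_FB and guarantees the halving rule terminates.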

To guarantee that the descent sequence of f^k_FB has an accumulation point, Pan and Chen [19] give the following condition, under which the coerciveness of f^k_FB for SOCCP(F^k) can be established.

**Condition 4.1** For any sequence {x^j} ⊆ ℝⁿ satisfying ‖x^j‖ → +∞, if there exists an index i ∈ {1, 2, ..., q} such that {λ_1(x^j_i)} and {λ_1(F_i(x^j))} are bounded below and λ_2(x^j_i), λ_2(F_i(x^j)) → +∞, then

limsup_{j→∞} ⟨ x^j_i / ‖x^j_i‖, F_i(x^j) / ‖F_i(x^j)‖ ⟩ > 0.

As F^k is strongly monotone, it has the uniform Cartesian P-property. From [19], we have the following theorem.

**Theorem 4.1** [19, Proposition 5.2] For SOCCP(F^k), if Condition 4.1 holds, then the merit function f^k_FB is coercive.

To obtain the quadratic convergence of Algorithm 4.1, we need the following two assumptions, which are also essential in the follow-up work.

**Assumption 4.1** F is a continuously differentiable function with a locally Lipschitz Jacobian.

**Assumption 4.2** The limit point x* of the sequence {x^k} generated by Algorithm 3.1 is nondegenerate, i.e., x*_i + F_i(x*) ∈ int K^{n_i} holds for all i ∈ {1, ..., q}.

Note that when k is large enough, the unique solution P_k(x^k) of SOCCP(F^k) is nondegenerate, that is, (P_k(x^k))_i + F^k_i(P_k(x^k)) ∈ int K^{n_i} holds for all i ∈ {1, ..., q}. Because F^k is strongly monotone, we immediately have the following convergence theorem from Lemma 2.2 and [14, Theorem 3.1].

**Theorem 4.2** Suppose the sequence {x^j} generated by Algorithm 4.1 has an accumulation point and Assumptions 4.1 and 4.2 hold. Then {x^j} converges globally to the unique solution P_k(x^k), and the rate is quadratic.

Note that the condition (3.1) in Algorithm 3.1 is equivalent to the following two criteria:

**Criterion 4.1**

f^k(x^{k+1}) ≤ c_k^6 / ( 18 max{√2, ‖F^k(P_k(x^k))‖, ‖P_k(x^k)‖}^2 ).

**Criterion 4.2**

f^k(x^{k+1}) ≤ c_k^6 ‖x^{k+1} − x^k‖^4 / ( 18 max{√2, ‖F^k(P_k(x^k))‖, ‖P_k(x^k)‖}^2 ).

It follows from Sect. 3 that Criterion 4.1 guarantees global convergence, while Criterion 4.2, which is rather restrictive, ensures superlinear convergence of PPA.

Next, we give conditions under which a single Newton step of the generalized Newton method eventually generates a point satisfying the following two criteria for any given r ∈ (0, 1), i.e., Criterion 4.1 and the following criterion:

**Criterion 4.2(r)**

f^k(x^{k+1}) ≤ c_k^6 ‖x^{k+1} − x^k‖^{4(1−r)} / ( 18 max{√2, ‖F^k(P_k(x^k))‖, ‖P_k(x^k)‖}^2 ).

Thereby the PPA can be practically efficient; we say that Algorithm 3.1 has approximate genuine superlinear convergence. First, we have the following two lemmas, which indicate the relationship between ‖x^k − P_k(x^k)‖ and dist(x^k, X̄).

**Lemma 4.1 If SOCCP(F ) is strictly feasible. Then, for sufficiently large k, there***exists a constant B*1*≥ 2 such that*

2 ≤ max{√2, ‖F^k(P_k(x^k))‖, ‖P_k(x^k)‖}^2 ≤ B_1.   (4.3)

*Proof* From Lemma 2.5, we obtain that the solution set X̄ of SOCCP(F) is bounded, which implies the boundedness of F(X̄). Let m_1 > 0 be such that

max{ sup_{x∈X̄} ‖x‖, sup_{x∈X̄} ‖F(x)‖ } ≤ m_1.

Since c_k → 0, it follows from Theorem 3.1 that the two sequences {x^k} and {P_k(x^k)} have the same limit point x^∗ ∈ X̄. Then there exists a positive constant m_2 such that

‖P_k(x^k) − x^∗‖ ≤ m_2  and  ‖F(P_k(x^k)) − F(x^∗)‖ ≤ m_2

when k is large enough. Thus, the two inequalities

‖F^k(P_k(x^k))‖ = ‖F(P_k(x^k)) + c_k(P_k(x^k) − x^k)‖
  ≤ ‖F(P_k(x^k))‖ + c_k ‖P_k(x^k) − x^k‖
  ≤ ‖F(x^∗)‖ + 2m_2

and

‖P_k(x^k)‖ ≤ ‖x^∗‖ + m_2

hold for sufficiently large k. Letting B_1 = max{2, (m_1 + 2m_2)^2} completes the proof. □

**Lemma 4.2** If SOCCP(F) is strictly feasible, then for sufficiently large k there exists a constant B_2 > 0 such that

‖x^k − P_k(x^k)‖ ≤ (B_2 / √c_k) dist(x^k, X̄)^{1/2}.

*Proof* Let x̄^k be the point of X̄ nearest to x^k. From [8, Theorem 2.3.5] we know that X̄ is convex, and hence the projection mapping Π_X̄(·) is nonexpansive. Therefore,

‖x̄^k − x^∗‖ = ‖Π_X̄(x^k) − Π_X̄(x^∗)‖ ≤ ‖x^k − x^∗‖.

Since {x^k} is bounded, so is {x̄^k}. Let X̂ be a bounded set containing {x^k} and {x̄^k}. From Lemma 2.3, we know that f^{1/2} is uniformly Lipschitz continuous on X̂. Then there exists L_1 > 0 such that

f(x^k)^{1/2} = f(x^k)^{1/2} − f(x̄^k)^{1/2} ≤ L_1^2 ‖x^k − x̄^k‖ = L_1^2 dist(x^k, X̄),

where the first equality uses f(x̄^k) = 0 since x̄^k ∈ X̄, which implies that

f(x^k)^{1/4} ≤ L_1 dist(x^k, X̄)^{1/2}.
It follows from Lemma 2.4 that

‖x^k − P_k(x^k)‖^2 ≤ (3√2 / τ_k) f^k(x^k)^{1/2},

where

τ_k = c_k / max{√2, ‖F^k(P_k(x^k))‖, ‖P_k(x^k)‖},

which together with Lemma 4.1 yields

√2 / c_k ≤ 1/τ_k ≤ √B_1 / c_k.

Hence, we have

‖x^k − P_k(x^k)‖ ≤ (3√(2B_1) / c_k)^{1/2} f^k(x^k)^{1/4}.

On the other hand, since F^k(x^k) = F(x^k), we know f^k(x^k) = f(x^k) and hence

‖x^k − P_k(x^k)‖ ≤ (3√(2B_1) / c_k)^{1/2} f(x^k)^{1/4} ≤ (3√(2B_1) / c_k)^{1/2} L_1 dist(x^k, X̄)^{1/2}.

Then, letting B_2 = (3√(2B_1))^{1/2} L_1 leads to the desired inequality. □

The next three lemmas give the relationship between ‖x_N^k − P_k(x^k)‖ and ‖x^k − P_k(x^k)‖, which is the key to the main result in this section. One point deserving attention is that the inequality in Lemma 4.3 cannot be obtained without twice continuous differentiability; the reason is explained in [29, Remark 4.1]. To this end, we make the following assumption.

**Assumption 4.3 F is twice continuously differentiable.**

**Lemma 4.3** Suppose that Assumptions 4.2 and 4.3 hold. Then Φ^k_FB is twice continuously differentiable in a neighborhood of x^k for sufficiently large k, and there exists a positive constant B_3 such that

‖JΦ^k_FB(x^k)(x^k − P_k(x^k)) − Φ^k_FB(x^k) + Φ^k_FB(P_k(x^k))‖ ≤ B_3 ‖x^k − P_k(x^k)‖^2.

*Proof* It is obvious that when F is twice continuously differentiable and Assumption 4.2 holds, Φ^k_FB is twice continuously differentiable near x^k and P_k(x^k) when k is large enough. Then, from the second-order Taylor expansion and the Lipschitz continuity of ∇Φ^k_FB near x^k, there exist positive constants m_3, m_4 such that, when k is sufficiently large,

‖Φ^k_FB(x^k) − Φ^k_FB(P_k(x^k)) − JΦ^k_FB(P_k(x^k))(x^k − P_k(x^k))‖ ≤ m_3 ‖x^k − P_k(x^k)‖^2,

‖JΦ^k_FB(x^k)(x^k − P_k(x^k)) − JΦ^k_FB(P_k(x^k))(x^k − P_k(x^k))‖ ≤ m_4 ‖x^k − P_k(x^k)‖^2.

Letting B_3 = m_3 + m_4, we have for sufficiently large k

‖JΦ^k_FB(x^k)(x^k − P_k(x^k)) − Φ^k_FB(x^k) + Φ^k_FB(P_k(x^k))‖ ≤ B_3 ‖x^k − P_k(x^k)‖^2. □
Now let us denote

x_N^k := x^k − V_k^{−1} Φ^k_FB(x^k),  V_k ∈ ∂_B Φ^k_FB(x^k).   (4.4)

Then x_N^k is the point produced by a single Newton iteration of Algorithm 4.1 with initial point x^k.
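The single Newton iteration (4.4) can be sketched in a few lines. This is only an illustrative sketch (function names are ours): `Phi` stands for Φ^k_FB, and `Jac` for a routine returning an element V_k of its B-subdifferential at the current point; for a smooth test function this is simply the Jacobian.

```python
import numpy as np

def single_newton_step(x, Phi, Jac):
    """One generalized Newton iteration as in (4.4):
    x_N = x - V^{-1} Phi(x), with V = Jac(x) an element of
    the B-subdifferential of Phi at x."""
    V = Jac(x)                        # n-by-n matrix V_k
    d = np.linalg.solve(V, Phi(x))    # Newton direction: solve V d = Phi(x)
    return x - d
```

For instance, applying one step to the smooth map Phi(x) = x² − 2 from x = 1.5 returns the classical Newton iterate 17/12 for √2, illustrating the fast local contraction that underlies the one-step analysis below.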

**Lemma 4.4** Suppose that Assumption 4.2 holds. Then Φ^k_FB is differentiable at x^k and the Jacobian JΦ^k_FB(x^k) is nonsingular for sufficiently large k.

*Proof* Let

z_i(x) = (x_i^2 + (F_i^k(x))^2)^{1/2},   (4.5)

for each i ∈ {1, . . . , q}. From Assumption 4.2 and [19, Lemma 4.2], we have that for every i ∈ {1, . . . , q},

x_i^k + F_i^k(x^k) ∈ int K^{n_i},

and

(x_i^k)^2 + (F_i^k(x^k))^2 ∈ int K^{n_i},