**On the existence of saddle points for nonlinear second-order cone programming problems**

**Jinchuan Zhou** **· Jein-Shan Chen**

Received: 8 October 2013 / Accepted: 27 October 2014 / Published online: 6 November 2014

© Springer Science+Business Media New York 2014

**Abstract** In this paper, we study the existence of local and global saddle points for nonlinear second-order cone programming problems. The existence of local saddle points is developed by using the second-order sufficient conditions, in which a sigma-term is added to reflect the curvature of the second-order cone. Furthermore, by dealing with a perturbation of the primal problem, we establish the existence of global saddle points, which is applicable to the case of multiple optimal solutions. The close relationship between global saddle points and exact penalty representations is discussed as well.

**Keywords** Local and global saddle points · Second-order sufficient conditions · Augmented Lagrangian · Exact penalty representations

**Mathematics Subject Classification** 90C26 · 90C46

**1 Introduction**

Recall that the second-order cone (SOC), also called the Lorentz cone or ice-cream cone, in $\mathbb{R}^{m+1}$ is defined as

$$\mathcal{K}^{m+1} := \left\{ (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^m \,\middle|\, \|x_2\| \le x_1 \right\},$$

The author’s work is supported by National Natural Science Foundation of China (11101248, 11171247, 11271233), Shandong Province Natural Science Foundation (ZR2010AQ026, ZR2012AM016), and Young Teacher Support Program of Shandong University of Technology. Jein-Shan Chen’s work is supported by Ministry of Science and Technology, Taiwan.

J. Zhou

Department of Mathematics School of Science, Shandong University of Technology, Zibo 255049, People’s Republic of China

e-mail: jinchuanzhou@163.com

J.-S. Chen

Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

e-mail: jschen@math.ntnu.edu.tw

where $\|\cdot\|$ denotes the Euclidean norm. The order relation induced by this pointed closed convex cone $\mathcal{K}^{m+1}$ is given by

$$x \succeq_{\mathcal{K}^{m+1}} 0 \iff x \in \mathbb{R}^{m+1},\ x_1 \ge \|x_2\|.$$
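The membership test $\|x_2\| \le x_1$ and the induced order relation are straightforward to verify numerically. The following short Python sketch is ours and purely illustrative (the function names are not from the paper):

```python
import math

def in_soc(x, tol=1e-12):
    # x = (x_1, x_2) with x_2 = x[1:]; membership in K^{m+1} means ||x_2|| <= x_1
    return math.sqrt(sum(t * t for t in x[1:])) <= x[0] + tol

def soc_geq(x, y):
    # Partial order induced by the SOC: x >=_K y iff x - y lies in K
    return in_soc([a - b for a, b in zip(x, y)])

# (3, 1, 2) lies in K^3 since ||(1, 2)|| = sqrt(5) <= 3,
# while (1, 1, 1) does not since ||(1, 1)|| = sqrt(2) > 1.
print(in_soc([3.0, 1.0, 2.0]), in_soc([1.0, 1.0, 1.0]))  # True False
```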

In this paper, we consider the following nonlinear second-order cone programming problem (NSOCP):

$$\begin{aligned}
\min\ \ & f(x) \\
\text{s.t.}\ \ & g_j(x) \succeq_{\mathcal{K}^{m_j+1}} 0, \quad j = 1, 2, \dots, J, \\
& h(x) = 0,
\end{aligned} \tag{1}$$

where $f : \mathbb{R}^n \to \mathbb{R}$, $h : \mathbb{R}^n \to \mathbb{R}^l$, and $g_j : \mathbb{R}^n \to \mathbb{R}^{m_j+1}$ are twice continuously differentiable functions, and $\mathcal{K}^{m_j+1}$ is the second-order cone in $\mathbb{R}^{m_j+1}$ for $j = 1, 2, \dots, J$.

For a given nonlinear programming problem, we can define another programming problem associated with it by using the traditional Lagrangian function. The original problem is called the primal problem, and the latter one is called the dual problem. Since the weak duality property always holds, our concern is how to obtain the strong duality property (or zero duality gap property). In other words, we want to know when the primal and dual problems have the same optimal values, which provides the theoretical foundation for many primal-dual type methods. However, if we employ the traditional Lagrangian function, then some convexity is necessary for achieving the strong duality property. To overcome this drawback, we resort to augmented Lagrangian functions, whose main advantage is ensuring the strong duality property without requiring convexity. In addition, the zero duality gap property coincides with the existence of global saddle points, provided that the optimal solution sets of the primal and dual problems are both nonempty. Many researchers have studied the properties of augmented Lagrangians and the existence of saddle points. For example, Rockafellar and Wets [13] proposed a class of augmented Lagrangians in which the augmenting function is required to be convex. This was extended by Huang and Yang [6], where the convexity condition is replaced by level-boundedness, and further generalized by Zhou and Yang [21], where the level-boundedness condition is replaced by the so-called valley-at-zero property; see also [14]

for more details. These important works give a unified framework for the augmented Lagrangian function and its duality theory. Meanwhile, Floudas and Jongen [5] pointed out the crucial role of saddle points for the minimization of smooth functions with a finite number of stationary points. Necessary and/or sufficient conditions ensuring the existence of local and/or global saddle points have been investigated by many researchers. For example, the existence of local and global saddle points of Rockafellar's augmented Lagrangian function was studied in [12].

Local saddle points of the generalized Mangasarian augmented Lagrangian were analyzed in [19]. The existence of local and global saddle points of the pth power nonlinear Lagrangian was discussed in [7,8,18]. For more references, please see [9,10,14,16,17,20,22].

All the results mentioned above focus on either standard nonlinear programming or the generalized minimization problems of [13]. The main purpose of this paper is to establish the existence of local and global saddle points of NSOCP (1) by fully exploiting the special structure of the SOC. As is known from nonlinear programming, the positive definiteness of $\nabla^2_{xx} L$ over the critical cone is a sufficient condition for the existence of local saddle points. However, this classical result cannot be extended trivially to NSOCP (1), and the analysis is more complicated, because $\mathbb{R}^n_+$ is polyhedral whereas $\mathcal{K}^{m+1}$ is non-polyhedral. Hence, we particularly study the sigma-term [4], which to some extent stands for the curvature of the second-order cone. Our result shows that a local saddle point exists provided that the sum of $\nabla^2_{xx} L$ and $\mathcal{H}$ is positive definite, even if $\nabla^2_{xx} L$ itself is indefinite (see Theorem 2.3). This undoubtedly clarifies the essential role played by the sigma-term. Moreover, by developing a perturbation of the primal problem, we establish the existence of global saddle points without restricting the optimal solution to be unique, as required in [12,16]. Furthermore, we study another important concept, exact penalty representation, and develop new necessary and sufficient conditions for it. The close relationship between global saddle points and exact penalty representations is established as well.

To end this section, we introduce some basic concepts which will be needed in our subsequent analysis. Let $\mathbb{R}^n$ be the $n$-dimensional real vector space. For $x, y \in \mathbb{R}^n$, the inner product is denoted by $x^T y$ or $\langle x, y\rangle$. Given a convex subset $A \subseteq \mathbb{R}^n$ and a point $x \in A$, the normal cone of $A$ at $x$, denoted by $N_A(x)$, is defined as

$$N_A(x) := \{v \in \mathbb{R}^n \mid \langle v, z - x\rangle \le 0,\ \forall z \in A\},$$

and the tangent cone, denoted by $T_A(x)$, is defined as

$$T_A(x) := N_A(x)^{\circ},$$

where $N_A(x)^{\circ}$ means the polar cone of $N_A(x)$. Given $d \in T_A(x)$, the outer second-order tangent set is defined as

$$T_A^2(x, d) := \left\{ w \in \mathbb{R}^n \,\middle|\, \exists\, t_n \downarrow 0 \text{ such that } \operatorname{dist}\!\left(x + t_n d + \tfrac{1}{2} t_n^2 w,\ A\right) = o(t_n^2) \right\}.$$

The support function of $A$ is

$$\sigma(x \mid A) := \sup\{\langle x, z\rangle \mid z \in A\}.$$

We also write $\operatorname{cl}(A)$, $\operatorname{int}(A)$, and $\partial(A)$ to stand for the closure, interior, and boundary of $A$, respectively. For simplicity of notation, we write $\mathcal{K}_j$ to stand for $\mathcal{K}^{m_j+1}$ and let $\mathcal{K}$ be the Cartesian product of these second-order cones, i.e., $\mathcal{K} := \mathcal{K}_1 \times \mathcal{K}_2 \times \cdots \times \mathcal{K}_J$. In addition, we denote $g(x) := (g_1(x), g_2(x), \dots, g_J(x))$, $p := \sum_{j=1}^{J} (m_j + 1)$, and let $S^*$ be the solution set of NSOCP (1). According to [13, Exercise 11.57], the augmented Lagrangian function for NSOCP (1) is written as

$$\mathcal{L}_c(x, \lambda, \mu) := f(x) + \langle \mu, h(x)\rangle + \frac{c}{2}\|h(x)\|^2 + \frac{c}{2}\sum_{j=1}^{J}\left[\operatorname{dist}^2\!\left(g_j(x) - \frac{\lambda_j}{c},\ \mathcal{K}_j\right) - \left\|\frac{\lambda_j}{c}\right\|^2\right]. \tag{2}$$

Here $c \in \mathbb{R}_{++} := \{\zeta \in \mathbb{R} \mid \zeta > 0\}$ and $(x, \lambda, \mu) \in \mathbb{R}^n \times \mathbb{R}^p \times \mathbb{R}^l$ with $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_J) \in \mathbb{R}^{m_1+1} \times \mathbb{R}^{m_2+1} \times \cdots \times \mathbb{R}^{m_J+1}$.
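To make formula (2) concrete, the sketch below evaluates $\mathcal{L}_c$ for a toy instance with one SOC constraint and one equality constraint. The instance ($f$, $g_1$, $h$ and the chosen point) is our own illustrative construction, not taken from the paper; the projection onto the SOC is the standard closed form.

```python
import math

def proj_soc(x):
    """Euclidean projection onto the second-order cone K^{m+1}."""
    x1, x2 = x[0], list(x[1:])
    s = math.sqrt(sum(t * t for t in x2))
    if s <= x1:                      # already inside the cone
        return list(x)
    if s <= -x1:                     # inside the polar cone -K^{m+1}
        return [0.0] * len(x)
    t = (x1 + s) / 2.0               # remaining case: project to the boundary
    return [t] + [t * v / s for v in x2]

def dist_soc(x):
    """Euclidean distance from x to the second-order cone."""
    p = proj_soc(x)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, p)))

def aug_lagrangian(c, x, lam, mu, f, g_list, h):
    """Evaluate L_c(x, lam, mu) of (2); lam is a list of per-cone multipliers."""
    hx = h(x)
    val = f(x) + sum(m * v for m, v in zip(mu, hx)) + (c / 2.0) * sum(v * v for v in hx)
    for g, lam_j in zip(g_list, lam):
        shifted = [gv - lv / c for gv, lv in zip(g(x), lam_j)]
        val += (c / 2.0) * (dist_soc(shifted) ** 2 - sum((lv / c) ** 2 for lv in lam_j))
    return val

# Toy instance (illustrative choices, not from the paper).
f = lambda x: x[0] ** 2 + x[1] ** 2
g1 = lambda x: [x[0] + 1.0, x[1]]    # hypothetical SOC constraint map into K^2
h = lambda x: [x[0] - x[1]]          # hypothetical equality constraint
print(aug_lagrangian(1.0, [0.5, 0.5], [[0.0, 0.0]], [0.0], f, [g1], h))  # prints 0.5
```

With $\lambda = 0$ and $h(x) = 0$ at the chosen feasible point, every penalty term vanishes and the value reduces to $f(x) = 0.5$.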

**Definition 1.1** Let $\mathcal{L}_c$ be given as in (2) and $(x^*, \lambda^*, \mu^*) \in \mathbb{R}^n \times \mathbb{R}^p \times \mathbb{R}^l$.

(a) The triple $(x^*, \lambda^*, \mu^*)$ is said to be a local saddle point of $\mathcal{L}_c$ for some $c > 0$ if there exists $\delta > 0$ such that

$$\mathcal{L}_c(x^*, \lambda, \mu) \le \mathcal{L}_c(x^*, \lambda^*, \mu^*) \le \mathcal{L}_c(x, \lambda^*, \mu^*), \quad \forall x \in \mathbb{B}(x^*, \delta),\ (\lambda, \mu) \in \mathbb{R}^p \times \mathbb{R}^l, \tag{3}$$

where $\mathbb{B}(x^*, \delta)$ denotes the $\delta$-neighborhood of $x^*$, i.e., $\mathbb{B}(x^*, \delta) := \{x \in \mathbb{R}^n \mid \|x - x^*\| \le \delta\}$.

(b) The triple $(x^*, \lambda^*, \mu^*)$ is said to be a global saddle point of $\mathcal{L}_c$ for some $c > 0$ if

$$\mathcal{L}_c(x^*, \lambda, \mu) \le \mathcal{L}_c(x^*, \lambda^*, \mu^*) \le \mathcal{L}_c(x, \lambda^*, \mu^*), \quad \forall x \in \mathbb{R}^n,\ (\lambda, \mu) \in \mathbb{R}^p \times \mathbb{R}^l. \tag{4}$$

**2 On local saddle points**

In this section, we focus on necessary and sufficient conditions for the existence of local saddle points. For simplicity, we let $Q$ stand for a second-order cone without emphasizing its dimension, while using the notation $Q \subset \mathbb{R}^{m+1}$ to indicate that $Q$ is regarded as a second-order cone in $\mathbb{R}^{m+1}$. In other words, a result holding for $Q$ is also applicable to $\mathcal{K}_i$ for $i = 1, \dots, J$ in the subsequent analysis. According to [13, Example 6.16], we know that for $a \in Q$,

$$-b \in N_Q(a) \iff \Pi_Q(a - b) = a \iff \operatorname{dist}(a - b, Q) = \|b\| \iff a \in Q,\ b \in Q,\ a^T b = 0, \tag{5}$$

where $\Pi_Q$ denotes the Euclidean projection onto $Q$ and the last equivalence comes from the fact that $Q$ is a self-dual cone, i.e., $Q^{\circ} = -Q$.
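The first and last conditions in (5) can be spot-checked numerically: with $a$ on the boundary of $Q$ and $b := \beta(a_1, -a_2)$ one has $a, b \in Q$ and $a^T b = 0$, and the defining inequality of $N_Q(a)$ can be sampled over random points of $Q$. A small illustrative Python check (our own construction, not from the paper):

```python
import math
import random

random.seed(1)
a = [1.0, 0.6, 0.8]        # a in bd(Q): ||(0.6, 0.8)|| = 1 = a_1
b = [2.0, -1.2, -1.6]      # b = 2 * (a_1, -a_2), so b in Q and a^T b = 0
assert abs(sum(p * q for p, q in zip(a, b))) < 1e-12

# -b in N_Q(a) requires <-b, z - a> <= 0 for every z in Q; sample z in Q.
for _ in range(1000):
    z2 = [random.gauss(0.0, 1.0) for _ in range(2)]
    z1 = math.sqrt(sum(t * t for t in z2)) + abs(random.gauss(0.0, 1.0))
    z = [z1] + z2                      # by construction z lies in Q
    inner = sum(-q * (zc - p) for q, zc, p in zip(b, z, a))
    assert inner <= 1e-9               # the normal-cone inequality holds
print("spot-check of (5) passed")
```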

**Lemma 2.1** Let $\mathcal{L}_c$ be given as in (2). Then the augmented Lagrangian function $\mathcal{L}_c(x, \lambda, \mu)$ is nondecreasing with respect to $c > 0$.

*Proof* See [13, Exercise 11.56]. $\square$

We now discuss the necessary conditions for local saddle points.

**Theorem 2.1** Suppose $(x^*, \lambda^*, \mu^*)$ is a local saddle point of $\mathcal{L}_{c^*}$. Then:

(a) $-\lambda^* \in N_{\mathcal{K}}(g(x^*))$;

(b) $\mathcal{L}_c(x^*, \lambda^*, \mu^*) = f(x^*)$ for all $c > 0$;

(c) $x^*$ is a local optimal solution to NSOCP (1).

*Proof* We first show that $x^*$ is a feasible point of NSOCP (1), for which we need to verify two things: (i) $h(x^*) = 0$; (ii) $g_j(x^*) \succeq_{\mathcal{K}_j} 0$ for all $j = 1, 2, \dots, J$.

(i) Suppose $h(x^*) \ne 0$. Taking $\mu = \gamma h(x^*)$ with $\gamma \to \infty$ and applying the first inequality in (3) yields $\mathcal{L}_{c^*}(x^*, \lambda^*, \mu^*) = \infty$, which is a contradiction. Thus $h(x^*) = 0$.

(ii) Suppose $g_j(x^*) \notin \mathcal{K}_j$ for some $j = 1, \dots, J$. Then there exists $\tilde{\lambda}_j \in \mathcal{K}_j$ such that $\eta := \langle \tilde{\lambda}_j, g_j(x^*)\rangle < 0$. Therefore, for $\beta > 0$,

$$\begin{aligned}
\operatorname{dist}^2\!\left(g_j(x^*) - \frac{\beta\tilde{\lambda}_j}{c^*},\ \mathcal{K}_j\right) - \left\|\frac{\beta\tilde{\lambda}_j}{c^*}\right\|^2
&= \left\|g_j(x^*) - \frac{\beta\tilde{\lambda}_j}{c^*} - \Pi_{\mathcal{K}_j}\!\left(g_j(x^*) - \frac{\beta\tilde{\lambda}_j}{c^*}\right)\right\|^2 - \left\|\frac{\beta\tilde{\lambda}_j}{c^*}\right\|^2 \\
&= \left\|g_j(x^*) - \Pi_{\mathcal{K}_j}\!\left(g_j(x^*) - \frac{\beta\tilde{\lambda}_j}{c^*}\right)\right\|^2 - 2\left\langle \frac{\beta\tilde{\lambda}_j}{c^*},\ g_j(x^*) - \Pi_{\mathcal{K}_j}\!\left(g_j(x^*) - \frac{\beta\tilde{\lambda}_j}{c^*}\right)\right\rangle \\
&\ge \operatorname{dist}^2\!\left(g_j(x^*), \mathcal{K}_j\right) - 2\beta\left\langle \frac{\tilde{\lambda}_j}{c^*},\ g_j(x^*)\right\rangle \\
&= \operatorname{dist}^2\!\left(g_j(x^*), \mathcal{K}_j\right) - \frac{2\beta\eta}{c^*}.
\end{aligned} \tag{6}$$

Here the inequality comes from the facts that

$$\left\|g_j(x^*) - \Pi_{\mathcal{K}_j}\!\left(g_j(x^*) - \frac{\beta\tilde{\lambda}_j}{c^*}\right)\right\| \ge \left\|g_j(x^*) - \Pi_{\mathcal{K}_j}(g_j(x^*))\right\| = \operatorname{dist}(g_j(x^*), \mathcal{K}_j)$$

and

$$\left\langle \tilde{\lambda}_j,\ \Pi_{\mathcal{K}_j}\!\left(g_j(x^*) - \beta\tilde{\lambda}_j/c^*\right)\right\rangle \ge 0,$$

because $\tilde{\lambda}_j \in \mathcal{K}_j$ and $\Pi_{\mathcal{K}_j}\!\left(g_j(x^*) - \beta\tilde{\lambda}_j/c^*\right) \in \mathcal{K}_j$. Letting $\beta \to \infty$, it follows from (3) and (6) that $\mathcal{L}_{c^*}(x^*, \lambda^*, \mu^*)$ is unbounded above, which is a contradiction.

Plugging $\lambda = 0$ into the first inequality of (3) (i.e., $\mathcal{L}_{c^*}(x^*, 0, \mu^*) \le \mathcal{L}_{c^*}(x^*, \lambda^*, \mu^*)$), we obtain

$$\sum_{j=1}^{J}\left[\operatorname{dist}^2\!\left(g_j(x^*) - \frac{\lambda_j^*}{c^*},\ \mathcal{K}_j\right) - \left\|\frac{\lambda_j^*}{c^*}\right\|^2\right] \ge 0, \tag{7}$$

where we have used the feasibility of $x^*$ as shown above.

On the other hand, we have

$$\operatorname{dist}\!\left(g_j(x^*) - \frac{\lambda_j^*}{c^*},\ \mathcal{K}_j\right) \le \left\|g_j(x^*) - \frac{\lambda_j^*}{c^*} - g_j(x^*)\right\| = \left\|\frac{\lambda_j^*}{c^*}\right\|,$$

where the inequality is due to the fact that $g_j(x^*) \in \mathcal{K}_j$, as shown above. This together with (7) ensures that

$$\operatorname{dist}\!\left(g_j(x^*) - \frac{\lambda_j^*}{c^*},\ \mathcal{K}_j\right) = \left\|\frac{\lambda_j^*}{c^*}\right\|. \tag{8}$$

Combining (5) and (8) yields $-\lambda_j^* \in N_{\mathcal{K}_j}(g_j(x^*))$ for all $j = 1, \dots, J$, i.e., $-\lambda^* \in N_{\mathcal{K}}(g(x^*))$ by [13, Proposition 6.41]. This establishes part (a). Furthermore, it implies

$$\operatorname{dist}\!\left(g_j(x^*) - \frac{\lambda_j^*}{c},\ \mathcal{K}_j\right) = \left\|\frac{\lambda_j^*}{c}\right\|, \quad \forall c > 0, \tag{9}$$

because $-\lambda_j^*/c \in N_{\mathcal{K}_j}(g_j(x^*))$ for all $c > 0$ (since $N_{\mathcal{K}_j}(g_j(x^*))$ is a cone). Hence $\mathcal{L}_c(x^*, \lambda^*, \mu^*) = f(x^*)$ for all $c > 0$. This establishes part (b).

Now we turn our attention to part (c). Suppose $x \in \mathbb{B}(x^*, \delta)$ is any feasible point of NSOCP (1). Then, from (3), we know

$$f(x) \ge \mathcal{L}_{c^*}(x, \lambda^*, \mu^*) \ge \mathcal{L}_{c^*}(x^*, \lambda^*, \mu^*) = f(x^*),$$

where the first inequality comes from the fact that $x$ is feasible. This means $x^*$ is a local optimal solution to NSOCP (1). The proof is complete. $\square$

For NSOCP (1), we say that Robinson's constraint qualification holds at $x^*$ if $\nabla h_i(x^*)$, $i = 1, \dots, l$, are linearly independent and there exists $d \in \mathbb{R}^n$ such that

$$\nabla h(x^*) d = 0 \quad \text{and} \quad g(x^*) + \nabla g(x^*) d \in \operatorname{int}(\mathcal{K}).$$

It is known that if $x^*$ is a local solution to NSOCP (1) and Robinson's constraint qualification holds at $x^*$, then there exists $(\lambda^*, \mu^*) \in \mathbb{R}^p \times \mathbb{R}^l$ such that the following Karush-Kuhn-Tucker (KKT) conditions hold:

$$\nabla_x L(x^*, \lambda^*, \mu^*) = 0, \quad h(x^*) = 0, \quad -\lambda^* \in N_{\mathcal{K}}(g(x^*)), \tag{10}$$

or equivalently,

$$\nabla_x L(x^*, \lambda^*, \mu^*) = 0, \quad h(x^*) = 0, \quad \lambda^* \in \mathcal{K}, \quad g(x^*) \in \mathcal{K}, \quad (\lambda^*)^T g(x^*) = 0,$$

where $L(x, \lambda, \mu)$ is the standard Lagrangian function of NSOCP (1), i.e.,

$$L(x, \lambda, \mu) := f(x) + \langle \mu, h(x)\rangle - \langle \lambda, g(x)\rangle. \tag{11}$$

For convenience of subsequent analysis, we denote by $\Lambda(x^*)$ the set of all Lagrange multipliers $(\lambda^*, \mu^*)$ satisfying (10).

It is well known that second-order sufficient conditions are utilized to ensure the existence of local saddle points. In nonlinear programming, this requires the positive definiteness of $\nabla^2_{xx} L$ over the critical cone. However, due to the non-polyhedrality of the second-order cone, an additional widely known sigma-term (or $\sigma$-term), which stands for the curvature of the second-order cone, is required. In particular, it was noted in [4, page 177] that the $\sigma$-term vanishes when the cone is polyhedral. Due to the important role played by the $\sigma$-term in the analysis of the second-order cone, before developing sufficient conditions for the existence of local saddle points, we first study some basic properties of the $\sigma$-term that will be used in the subsequent analysis. First, based on the arguments given in [1, Theorem 29], we obtain the following result.

**Theorem 2.2** Let $x \in Q$ and $d \in T_Q(x)$. Then the support function of the outer second-order tangent set $T_Q^2(x, d)$ is

$$\sigma\left(y \mid T_Q^2(x, d)\right) = \begin{cases} -\dfrac{y_1}{x_1}\, d^T \begin{pmatrix} 1 & 0 \\ 0 & -I_m \end{pmatrix} d, & \text{for } y \in N_Q(x) \cap \{d\}^{\perp},\ x \in \partial Q \setminus \{0\}, \\[1ex] 0, & \text{for } y \in N_Q(x) \cap \{d\}^{\perp},\ x \notin \partial Q \setminus \{0\}, \\[1ex] +\infty, & \text{for } y \notin N_Q(x) \cap \{d\}^{\perp}. \end{cases}$$

*Proof* We know from [4, Proposition 3.34] that

$$T_Q^2(x, d) + T_{T_Q(x)}(d) \subset T_Q^2(x, d) \subset T_{T_Q(x)}(d).$$

This implies

$$\sigma\left(y \mid T_Q^2(x, d)\right) + \sigma\left(y \mid T_{T_Q(x)}(d)\right) = \sigma\left(y \mid T_Q^2(x, d) + T_{T_Q(x)}(d)\right) \le \sigma\left(y \mid T_Q^2(x, d)\right) \le \sigma\left(y \mid T_{T_Q(x)}(d)\right). \tag{12}$$

Note that

$$\begin{aligned}
\sigma\left(y \mid T_{T_Q(x)}(d)\right) < +\infty &\iff \sigma\left(y \mid T_{T_Q(x)}(d)\right) = 0 \qquad (13)\\
&\iff y \in N_{T_Q(x)}(d) \qquad (14)\\
&\iff y \in \left(T_Q(x)\right)^{\circ} = N_Q(x) \ \text{ and } \ y^T d = 0, \qquad (15)
\end{aligned}$$

where the first and third equivalences come from the fact that $T_{T_Q(x)}(d)$ and $T_Q(x)$ are cones, respectively. Thus, we only need to establish the exact formula of $\sigma\left(y \mid T_Q^2(x, d)\right)$ provided that (15) holds. In addition, (12) also indicates that $\sigma\left(y \mid T_Q^2(x, d)\right) = +\infty$ whenever $y \notin N_Q(x) \cap \{d\}^{\perp}$, since $T_Q^2(x, d)$ is nonempty for $x \in Q$ and $d \in T_Q(x)$ by [1, Lemma 27].

In fact, under condition (15), it follows from (12) and (13) that

$$\sigma\left(y \mid T_Q^2(x, d)\right) \le \sigma\left(y \mid T_{T_Q(x)}(d)\right) = 0. \tag{16}$$

Furthermore, in light of condition (15), we discuss the following four cases.

(i) If $x = 0$, then $0 \in T_Q^2(x, d) = T_Q(d)$, where the equality is due to [1, Lemma 27]. Thus

$$\sigma\left(y \mid T_Q^2(x, d)\right) = \sigma\left(y \mid T_Q(d)\right) \ge 0.$$

This together with (16) implies $\sigma\left(y \mid T_Q^2(x, d)\right) = 0$.

(ii) If $x \in \operatorname{int}(Q)$, then it follows from (15) that $y = 0$. Hence $\sigma\left(y \mid T_Q^2(x, d)\right) = 0$.

(iii) If $x \in \partial(Q) \setminus \{0\}$ and $d \in \operatorname{int}(T_Q(x))$, then it follows from (14) that $y = 0$ since $d \in \operatorname{int}(T_Q(x))$. Hence $\sigma\left(y \mid T_Q^2(x, d)\right) = 0 = -(y_1/x_1)(d_1^2 - \|d_2\|^2)$.

(iv) If $x \in \partial(Q) \setminus \{0\}$ and $d \in \partial(T_Q(x))$, then the desired result can be obtained by following the arguments given in [1, p. 222]. We provide the proof for the sake of completeness. Note that evaluating $\sigma\left(y \mid T_Q^2(x, d)\right)$ amounts to maximizing $y_1 w_1 + y_2^T w_2$ over all $w$ satisfying $-w_1 x_1 + w_2^T x_2 \le d_1^2 - \|d_2\|^2$ (see [1, Lemma 27]). From $y \in N_Q(x)$, i.e., $-y \in Q$, $x \in Q$, and $x^T y = 0$, we know $-y_1 = \alpha x_1$ and $-y_2 = -\alpha x_2$ with $\alpha = -\frac{y_1}{x_1} \ge 0$; see [1, page 208]. Thus

$$\langle y, w\rangle = y_1 w_1 + y_2^T w_2 = \alpha\left(w_2^T x_2 - w_1 x_1\right) \le \alpha\left(d_1^2 - \|d_2\|^2\right) = -\frac{y_1}{x_1}\left(d_1^2 - \|d_2\|^2\right).$$

The maximum is attained at $(w_1, w_2) = \left(-\dfrac{d_1^2}{x_1},\ -\dfrac{\|d_2\|^2}{\|x_2\|^2}\, x_2\right)$. This establishes the desired expression. $\square$
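As a numerical sanity check on case (iv), one can verify that the candidate maximizer satisfies the constraint of [1, Lemma 27] with equality and attains the value $-(y_1/x_1)(d_1^2 - \|d_2\|^2)$. The data below are arbitrary illustrative choices of ours:

```python
# Arbitrary data for case (iv): x on bd(Q)\{0}, y in N_Q(x), d with y^T d = 0.
x1, x2 = 1.0, [0.6, 0.8]                      # ||x2|| = 1 = x1
beta = 2.0
y1, y2 = -beta * x1, [beta * v for v in x2]   # y = beta * (-x1, x2) in N_Q(x)
d2 = [1.0, 0.5]
d1 = sum(u * v for u, v in zip(x2, d2))       # forces y^T d = 0

n2 = sum(v * v for v in d2)                   # ||d2||^2
w1 = -d1 ** 2 / x1                            # candidate maximizer from the proof
w2 = [-(n2 / sum(v * v for v in x2)) * v for v in x2]

constraint = -w1 * x1 + sum(u * v for u, v in zip(w2, x2))
bound = d1 ** 2 - n2
value = y1 * w1 + sum(u * v for u, v in zip(y2, w2))
sigma = -(y1 / x1) * (d1 ** 2 - n2)
print(abs(constraint - bound) < 1e-12, abs(value - sigma) < 1e-12)  # True True
```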

*Remark 2.1* Let $A$ be a convex subset of $\mathbb{R}^{m+1}$. In the proof of Theorem 2.2, we use the inclusion $T_A^2(x, d) \subset T_{T_A(x)}(d)$. It is known from [4, page 168] that these two sets coincide if $A$ is polyhedral. But for the non-polyhedral cone $Q$, the following example shows that this inclusion may be strict.

*Example 2.1* For $Q \subset \mathbb{R}^3$, let $\bar{x} = (1, 1, 0)$ and $\bar{d} = (1, 1, 1)$. Then

$$T_Q(\bar{x}) = \{d = (d_1, d_2, d_3) \in \mathbb{R}^3 \mid (d_2, d_3)^T(\bar{x}_2, \bar{x}_3) - d_1 \bar{x}_1 \le 0\} = \{d = (d_1, d_2, d_3) \mid d_2 - d_1 \le 0\},$$

which implies $\bar{d} \in \partial T_Q(\bar{x})$. Hence

$$T_Q^2(\bar{x}, \bar{d}) = \{w = (w_1, w_2, w_3) \mid (w_2, w_3)^T(\bar{x}_2, \bar{x}_3) - w_1 \bar{x}_1 \le \bar{d}_1^2 - \|(\bar{d}_2, \bar{d}_3)\|^2\} = \{w = (w_1, w_2, w_3) \mid w_2 - w_1 \le -1\}.$$

On the other hand, $T_{T_Q(\bar{x})}(\bar{d}) = \operatorname{cl}\left(\mathcal{R}_{T_Q(\bar{x})}(\bar{d})\right)$, where $\mathcal{R}_{T_Q(\bar{x})}(\bar{d})$ denotes the radial (or feasible) cone of $T_Q(\bar{x})$ at $\bar{d}$. Hence for each $w \in T_{T_Q(\bar{x})}(\bar{d})$ there exist $w' \in \mathcal{R}_{T_Q(\bar{x})}(\bar{d})$ with $w' \to w$ such that $\bar{d} + t w' \in T_Q(\bar{x})$ for some $t > 0$, i.e.,

$$\left((\bar{d}_2, \bar{d}_3) + t(w'_2, w'_3)\right)^T (\bar{x}_2, \bar{x}_3) - (\bar{d}_1 + t w'_1)\bar{x}_1 \le 0,$$

which ensures that $(w'_2, w'_3)^T(\bar{x}_2, \bar{x}_3) - w'_1 \bar{x}_1 \le 0$. Now, taking the limit yields $w_2 - w_1 \le 0$. Thus we obtain

$$T_{T_Q(\bar{x})}(\bar{d}) = \{w = (w_1, w_2, w_3) \mid w_2 - w_1 \le 0\},$$

which says $T_Q^2(\bar{x}, \bar{d}) \subsetneq T_{T_Q(\bar{x})}(\bar{d})$. In fact, $0 \in T_{T_Q(\bar{x})}(\bar{d})$, but $0 \notin T_Q^2(\bar{x}, \bar{d})$.
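The strict inclusion in Example 2.1 is easy to confirm mechanically: encoding the two sets as inequality tests (an illustrative sketch of ours) shows that $w = 0$ separates them.

```python
def in_outer_second_order_tangent(w):
    # T_Q^2(xbar, dbar) = {w | w_2 - w_1 <= -1}, as computed in Example 2.1
    return w[1] - w[0] <= -1.0

def in_tangent_of_tangent(w):
    # T_{T_Q(xbar)}(dbar) = {w | w_2 - w_1 <= 0}
    return w[1] - w[0] <= 0.0

w = [0.0, 0.0, 0.0]
print(in_tangent_of_tangent(w), in_outer_second_order_tangent(w))  # True False
```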

**Corollary 2.1** For $x \in Q$ and $y \in N_Q(x)$, we define

$$\Gamma(x, y) := T_Q(x) \cap \{y\}^{\perp} = \{d \mid d \in T_Q(x) \text{ and } y^T d = 0\}.$$

Then $\sigma\left(y \mid T_Q^2(x, d)\right)$ is nonpositive and continuous with respect to $d$ over $\Gamma(x, y)$.

*Proof* We first show that $\sigma\left(y \mid T_Q^2(x, d)\right)$ is nonpositive for $d \in \Gamma(x, y)$. In fact, we know from Theorem 2.2 that $\sigma\left(y \mid T_Q^2(x, d)\right) = 0$ when $x = 0$, or $x \in \operatorname{int}(Q)$, or $x \in \partial(Q) \setminus \{0\}$ and $d \in \operatorname{int}(T_Q(x))$. If $x \in \partial(Q) \setminus \{0\}$ and $d \in \partial(T_Q(x))$, then we have $x_1 d_1 = x_2^T d_2$ by the formula of $T_Q(x)$; see [1, Lemma 25]. Hence $x_1 |d_1| = |x_2^T d_2| \le \|x_2\| \|d_2\|$, which implies $|d_1| \le \|d_2\|$ because $x_1 = \|x_2\| > 0$. Note that $-y_1$ is nonnegative since $-y \in Q$. Then applying Theorem 2.2 yields $\sigma\left(y \mid T_Q^2(x, d)\right) = -(y_1/x_1)(d_1^2 - \|d_2\|^2) \le 0$. Thus, in any case, we have verified the nonpositivity of $\sigma\left(y \mid T_Q^2(x, d)\right)$ over $\Gamma(x, y)$.

Next, we show the continuity of $\sigma\left(y \mid T_Q^2(x, d)\right)$ with respect to $d$ over $\Gamma(x, y)$. Indeed, if $x = 0$ or $x \in \operatorname{int}(Q)$, then $\sigma\left(y \mid T_Q^2(x, d)\right) = 0$ for all $d \in \Gamma(x, y)$, which is of course continuous. If $x \in \partial Q \setminus \{0\}$, then $\sigma\left(y \mid T_Q^2(x, d)\right) = -(y_1/x_1)(d_1^2 - \|d_2\|^2)$ for $d \in \Gamma(x, y)$, which is continuous with respect to $d$ as well. $\square$
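The closed-form expression of Theorem 2.2 also makes the nonpositivity claim easy to spot-check by sampling: for $x = (1, u)$ with $\|u\| = 1$, every $y \in N_Q(x)$ has the form $\beta(-1, u)$ with $\beta \ge 0$, and $y^T d = 0$ forces $d_1 = u^T d_2$, so $d_1^2 \le \|d_2\|^2$ by the Cauchy-Schwarz inequality. An illustrative Python check (our own construction):

```python
import math
import random

random.seed(0)
worst = -float("inf")
for _ in range(1000):
    u = [random.gauss(0.0, 1.0) for _ in range(3)]
    s = math.sqrt(sum(t * t for t in u))
    u = [t / s for t in u]                  # x = (1, u) lies on bd(Q)\{0}
    beta = random.uniform(0.1, 5.0)
    y1 = -beta                              # y = beta * (-1, u) in N_Q(x)
    d2 = [random.gauss(0.0, 1.0) for _ in range(3)]
    d1 = sum(a * b for a, b in zip(u, d2))  # enforces y^T d = 0
    sigma = -(y1 / 1.0) * (d1 ** 2 - sum(t * t for t in d2))
    worst = max(worst, sigma)
print(worst <= 1e-9)  # True: no sampled value is positive
```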

*Remark 2.2* For a general closed convex cone $\Theta$, $\sigma\left(y \mid T_{\Theta}^2(x, d)\right)$ can be a discontinuous function of $d$; see [4, Page 178] or [15, Page 489]. But when $\Theta$ is the second-order cone $Q$, our result shows that this function is continuous.

For a convex subset $A$ of $\mathbb{R}^{m+1}$, it is well known that the function $\operatorname{dist}^2(x, A)$ is continuously differentiable with $\nabla \operatorname{dist}^2(x, A) = 2\left(x - \Pi_A(x)\right)$. But there are very limited results on second-order differentiability unless some additional structure is imposed on $A$, for example, second-order regularity; see [2,3,15].

Let $\phi(x) := \operatorname{dist}^2(x, Q)$ for $Q \subset \mathbb{R}^{m+1}$. Since $Q$ is second-order regular, according to [15], $\phi$ possesses the following nice property: for any $x, d \in \mathbb{R}^{m+1}$,

$$\lim_{\substack{d' \to d \\ t \downarrow 0}} \frac{\phi(x + t d') - \phi(x) - t \phi'(x; d')}{\frac{1}{2} t^2} = V(x, d), \tag{17}$$

where $V(x, d)$ is the optimal value of the problem

$$\begin{aligned}
\min\ \ & 2\|d - z\|^2 - 2\sigma\left(x - \Pi_Q(x) \,\middle|\, T_Q^2(\Pi_Q(x), z)\right) \\
\text{s.t.}\ \ & z \in \Gamma\left(\Pi_Q(x),\ x - \Pi_Q(x)\right).
\end{aligned} \tag{18}$$

With these preparations, the sufficient conditions for the existence of local saddle points are given below.

**Theorem 2.3** Suppose $x^*$ is a feasible point of NSOCP (1) satisfying the following:

(i) $x^*$ is a KKT point and $(\lambda^*, \mu^*) \in \Lambda(x^*)$, i.e.,

$$\nabla_x L(x^*, \lambda^*, \mu^*) = 0 \quad \text{and} \quad -\lambda^* \in N_{\mathcal{K}}(g(x^*)).$$

(ii) The following second-order condition holds:

$$\nabla^2_{xx} L(x^*, \lambda^*, \mu^*)(d, d) + d^T \mathcal{H}(x^*, \lambda^*) d > 0, \quad \forall d \in \mathcal{C}(x^*, \lambda^*) \setminus \{0\}, \tag{19}$$

where

$$\mathcal{C}(x^*, \lambda^*) := \left\{ d \in \mathbb{R}^n \,\middle|\, \nabla h(x^*) d = 0,\ \nabla g(x^*) d \in T_{\mathcal{K}}\left(g(x^*)\right),\ \left(\nabla g(x^*) d\right)^T \lambda^* = 0 \right\},$$

and $\mathcal{H}(x^*, \lambda^*) := \sum_{j=1}^{J} \mathcal{H}^j(x^*, \lambda_j^*)$ with

$$\mathcal{H}^j\left(x^*, \lambda_j^*\right) := \begin{cases} -\dfrac{(\lambda_j^*)_1}{(g_j(x^*))_1}\, \nabla g_j(x^*) \begin{pmatrix} 1 & 0 \\ 0 & -I_{m_j} \end{pmatrix} \nabla g_j(x^*)^T, & g_j(x^*) \in \partial(\mathcal{K}_j) \setminus \{0\}, \\[1ex] 0, & \text{otherwise}. \end{cases}$$

Then $(x^*, \lambda^*, \mu^*)$ is a local saddle point of $\mathcal{L}_c$ for some $c > 0$.

*Proof* The first inequality in (3) follows from the facts that $\mathcal{L}_c(x^*, \lambda^*, \mu^*) = f(x^*)$ by (5), since $-\lambda^* \in N_{\mathcal{K}}(g(x^*))$, and that $\mathcal{L}_c(x^*, \lambda, \mu) \le f(x^*)$ for all $(\lambda, \mu) \in \mathbb{R}^p \times \mathbb{R}^l$ due to $x^*$ being feasible.

We prove the second inequality in (3) by contradiction: suppose we cannot find $c > 0$ and $\delta > 0$ such that $f(x^*) = \mathcal{L}_c(x^*, \lambda^*, \mu^*) \le \mathcal{L}_c(x, \lambda^*, \mu^*)$ for all $x \in \mathbb{B}(x^*, \delta)$. In other words, there exists a sequence $c_n \to \infty$ as $n \to \infty$ and, for each fixed $c_n$, a sequence $\{x_k^n\}$ (note that this sequence depends on $c_n$) such that $x_k^n \to x^*$ as $k \to \infty$ and

$$f(x^*) > \mathcal{L}_{c_n}(x_k^n, \lambda^*, \mu^*). \tag{20}$$

To proceed, we denote $t_k^n := \|x_k^n - x^*\|$ and $d_k^n := (x_k^n - x^*)/\|x_k^n - x^*\|$. Assume, without loss of generality, that $d_k^n \to \tilde{d}^n$ as $k \to \infty$.

First, we observe that

$$\begin{aligned}
\phi\left(g_j(x_k^n) - \frac{\lambda_j^*}{c_n}\right)
&= \phi\left(g_j(x^*) - \frac{\lambda_j^*}{c_n} + t_k^n \nabla g_j(x^*) d_k^n + \frac{1}{2}(t_k^n)^2 \nabla^2 g_j(x^*)(d_k^n, d_k^n) + o\!\left((t_k^n)^2\right)\right) \\
&= \phi\left(g_j(x^*) - \frac{\lambda_j^*}{c_n} + t_k^n \left[\nabla g_j(x^*) d_k^n + \frac{1}{2} t_k^n \nabla^2 g_j(x^*)(d_k^n, d_k^n)\right]\right) + o\!\left((t_k^n)^2\right) \\
&= \phi\left(g_j(x^*) - \frac{\lambda_j^*}{c_n}\right) + t_k^n\, \phi'\left(g_j(x^*) - \frac{\lambda_j^*}{c_n};\ \nabla g_j(x^*) d_k^n + \frac{1}{2} t_k^n \nabla^2 g_j(x^*)(d_k^n, d_k^n)\right) \\
&\quad + \frac{1}{2}(t_k^n)^2\, V\left(g_j(x^*) - \frac{\lambda_j^*}{c_n},\ \nabla g_j(x^*) \tilde{d}^n\right) + o\!\left((t_k^n)^2\right),
\end{aligned} \tag{21}$$

where the second equality follows from the fact that $\phi$ is Lipschitz continuous (in fact, $\phi$ is continuously differentiable) and the last step is due to (17). From (18), $V\left(g_j(x^*) - \lambda_j^*/c_n,\ \nabla g_j(x^*)\tilde{d}^n\right)$ is the optimal value of the following problem:

$$\begin{aligned}
\min\ \ & 2\left\|\nabla g_j(x^*)\tilde{d}^n - z\right\|^2 - 2\sigma\left(-\frac{\lambda_j^*}{c_n} \,\middle|\, T_{\mathcal{K}_j}^2(g_j(x^*), z)\right) \\
\text{s.t.}\ \ & z \in \Gamma\left(g_j(x^*), -\lambda_j^*\right),
\end{aligned} \tag{22}$$

where we have used the facts that $\Gamma\left(g_j(x^*), -\lambda_j^*/c_n\right) = \Gamma\left(g_j(x^*), -\lambda_j^*\right)$ by definition since $c_n \ne 0$, and $\Pi_{\mathcal{K}_j}\left(g_j(x^*) - (\lambda_j^*/c_n)\right) = g_j(x^*)$ because $-\lambda_j^* \in N_{\mathcal{K}_j}(g_j(x^*))$ by (5).

Note that the optimal value of problem (22) is finite since $\sigma$ is nonpositive by Corollary 2.1, and that the objective function is strongly convex (because $\|\cdot\|^2$ is strongly convex and $-\sigma$ is convex [4, Proposition 3.48]). Hence the optimal solution of problem (22) exists and is unique, say $z_j^n$, i.e.,

$$V\left(g_j(x^*) - \frac{\lambda_j^*}{c_n},\ \nabla g_j(x^*)\tilde{d}^n\right) = 2\left\|\nabla g_j(x^*)\tilde{d}^n - z_j^n\right\|^2 - 2\sigma\left(-\frac{\lambda_j^*}{c_n} \,\middle|\, T_{\mathcal{K}_j}^2(g_j(x^*), z_j^n)\right), \tag{23}$$

where $z_j^n \in \Gamma(g_j(x^*), -\lambda_j^*)$. Then, combining (21) and (23) yields

$$\begin{aligned}
&\operatorname{dist}^2\!\left(g_j(x_k^n) - \frac{\lambda_j^*}{c_n},\ \mathcal{K}_j\right) - \left\|\frac{\lambda_j^*}{c_n}\right\|^2 \\
&\quad = -2 t_k^n \left\langle \frac{\lambda_j^*}{c_n},\ \nabla g_j(x^*) d_k^n + \frac{1}{2} t_k^n \nabla^2 g_j(x^*)(d_k^n, d_k^n) \right\rangle \\
&\qquad + (t_k^n)^2 \left[ \left\|\nabla g_j(x^*)\tilde{d}^n - z_j^n\right\|^2 - \sigma\left(-\frac{\lambda_j^*}{c_n} \,\middle|\, T_{\mathcal{K}_j}^2(g_j(x^*), z_j^n)\right) \right] + o\!\left((t_k^n)^2\right),
\end{aligned} \tag{24}$$

where we use the facts that $\operatorname{dist}\left(g_j(x^*) - (\lambda_j^*/c_n),\ \mathcal{K}_j\right) = \|\lambda_j^*/c_n\|$ and

$$\nabla\phi\left(g_j(x^*) - \frac{\lambda_j^*}{c_n}\right) = 2\left(g_j(x^*) - \frac{\lambda_j^*}{c_n} - \Pi_{\mathcal{K}_j}\left(g_j(x^*) - \frac{\lambda_j^*}{c_n}\right)\right) = -\frac{2\lambda_j^*}{c_n}.$$
Since $f(x^*) > \mathcal{L}_{c_n}(x_k^n, \lambda^*, \mu^*)$ by (20), applying the Taylor expansion, we obtain from (24) that

$$\begin{aligned}
0 &> f(x_k^n) - f(x^*) + \langle \mu^*, h(x_k^n)\rangle + \frac{c_n}{2}\|h(x_k^n)\|^2 + \frac{c_n}{2}\sum_{j=1}^{J}\left[\operatorname{dist}^2\!\left(g_j(x_k^n) - \frac{\lambda_j^*}{c_n},\ \mathcal{K}_j\right) - \left\|\frac{\lambda_j^*}{c_n}\right\|^2\right] \\
&= t_k^n \nabla f(x^*)^T d_k^n + \frac{1}{2}(t_k^n)^2 (d_k^n)^T \nabla^2 f(x^*) d_k^n + o\!\left((t_k^n)^2\right) \\
&\quad + \left\langle \mu^*,\ t_k^n \nabla h(x^*) d_k^n + \frac{1}{2}(t_k^n)^2 \nabla^2 h(x^*)(d_k^n, d_k^n) + o\!\left((t_k^n)^2\right) \right\rangle + \frac{c_n}{2}\left\| t_k^n \nabla h(x^*) d_k^n + o(t_k^n) \right\|^2 \\
&\quad + \frac{c_n}{2}\sum_{j=1}^{J}\left\{ -2 t_k^n \left\langle \frac{\lambda_j^*}{c_n},\ \nabla g_j(x^*) d_k^n + \frac{1}{2} t_k^n \nabla^2 g_j(x^*)(d_k^n, d_k^n) \right\rangle \right. \\
&\qquad\qquad \left. + (t_k^n)^2 \left[ \left\|\nabla g_j(x^*)\tilde{d}^n - z_j^n\right\|^2 - \sigma\left(-\frac{\lambda_j^*}{c_n} \,\middle|\, T_{\mathcal{K}_j}^2(g_j(x^*), z_j^n)\right) \right] + o\!\left((t_k^n)^2\right) \right\}.
\end{aligned}$$

Dividing both sides by $(t_k^n)^2/2$ and taking limits as $k \to \infty$ gives

$$0 \ge \nabla_{xx}^2 L(x^*, \lambda^*, \mu^*)(\tilde{d}^n, \tilde{d}^n) + c_n \left\|\nabla h(x^*)\tilde{d}^n\right\|^2 + c_n \sum_{j=1}^{J}\left[ \left\|\nabla g_j(x^*)\tilde{d}^n - z_j^n\right\|^2 - \sigma\left(-\frac{\lambda_j^*}{c_n} \,\middle|\, T_{\mathcal{K}_j}^2(g_j(x^*), z_j^n)\right) \right], \tag{25}$$

where we use the fact that $\nabla_x L(x^*, \lambda^*, \mu^*) = 0$, the first equality in the KKT conditions (10).

Since $-\lambda_j^* \in N_{\mathcal{K}_j}(g_j(x^*))$ from (10) and $z_j^n \in \Gamma(g_j(x^*), -\lambda_j^*)$, applying Corollary 2.1 yields

$$\sigma\left(-\frac{\lambda_j^*}{c_n} \,\middle|\, T_{\mathcal{K}_j}^2(g_j(x^*), z_j^n)\right) = \frac{1}{c_n}\,\sigma\left(-\lambda_j^* \,\middle|\, T_{\mathcal{K}_j}^2(g_j(x^*), z_j^n)\right) \le 0,$$

where the equality is due to the positive homogeneity of the support function; see [11]. Thus, it follows from (25) that

$$0 \ge \nabla_{xx}^2 L(x^*, \lambda^*, \mu^*)(\tilde{d}^n, \tilde{d}^n) + c_n \left\|\nabla h(x^*)\tilde{d}^n\right\|^2 + c_n \sum_{j=1}^{J}\left\|\nabla g_j(x^*)\tilde{d}^n - z_j^n\right\|^2.$$

Since $\|\tilde{d}^n\| = 1$ for all $n$, we may assume, taking a subsequence if necessary, that $\tilde{d}^n \to \tilde{d}$. Because $c_n$ can be made sufficiently large as $n \to \infty$, we obtain from the above inequality that $\|\nabla h(x^*)\tilde{d}^n\| \to 0$ and $\|\nabla g_j(x^*)\tilde{d}^n - z_j^n\| \to 0$. Therefore, $\nabla h(x^*)\tilde{d} = \lim_{n\to\infty} \nabla h(x^*)\tilde{d}^n = 0$ and

$$\operatorname{dist}\left(\nabla g_j(x^*)\tilde{d},\ \Gamma(g_j(x^*), -\lambda_j^*)\right) = \lim_{n\to\infty} \operatorname{dist}\left(\nabla g_j(x^*)\tilde{d}^n,\ \Gamma(g_j(x^*), -\lambda_j^*)\right) \le \lim_{n\to\infty} \left\|\nabla g_j(x^*)\tilde{d}^n - z_j^n\right\| = 0,$$

which implies $\nabla g_j(x^*)\tilde{d} \in \Gamma(g_j(x^*), -\lambda_j^*)$ for all $j = 1, 2, \dots, J$. Thus we have $\tilde{d} \in \mathcal{C}(x^*, \lambda^*)$. In addition, it follows from (25) again that

$$\begin{aligned}
0 &\ge \nabla_{xx}^2 L(x^*, \lambda^*, \mu^*)(\tilde{d}^n, \tilde{d}^n) - c_n \sum_{j=1}^{J} \sigma\left(-\frac{\lambda_j^*}{c_n} \,\middle|\, T_{\mathcal{K}_j}^2\left(g_j(x^*), z_j^n\right)\right) \\
&= \nabla_{xx}^2 L(x^*, \lambda^*, \mu^*)(\tilde{d}^n, \tilde{d}^n) - \sum_{j=1}^{J} \sigma\left(-\lambda_j^* \,\middle|\, T_{\mathcal{K}_j}^2\left(g_j(x^*), z_j^n\right)\right).
\end{aligned}$$