PROBLEMS ASSOCIATED WITH SECOND-ORDER CONE

XINHE MIAO^{∗}, WEI-MING HSU, CHIEU THANH NGUYEN,
AND JEIN-SHAN CHEN^{†}

Abstract. In this paper, we study the solvability of three optimization problems associated with the second-order cone: the absolute value equations associated with second-order cone (SOCAVE), the eigenvalue complementarity problem associated with second-order cone (SOCEiCP), and the quadratic eigenvalue complementarity problem associated with second-order cone (SOCQEiCP). More specifically, we characterize under what conditions these problems have a solution and a unique solution, respectively.

1. Introduction

In this paper, we study the solvability of three optimization problems associated with second-order cone. The first problem that we target is the so-called absolute value equation associated with second-order cone, abbreviated as SOCAVE. There are two types of SOCAVEs. The first type is in the form of

(1) Ax − |x| = b.

Another one is a more general SOCAVE, which is in the form of

(2) Ax + B|x| = b,

where A, B ∈ R^{n×n}, B ≠ 0, and b ∈ R^{n}. Note that, unlike the standard absolute value equation presented below, here |x| means the absolute value of x coming from the square root, under the Jordan product “◦” associated with second-order cone (SOC), of x ◦ x, that is, |x| := (x ◦ x)^{1/2}. The second-order cone in R^{n} (n ≥ 1), also called the Lorentz cone or ice-cream cone, is defined as

K^{n} := {(x_{1}, x_{2}) ∈ R × R^{n−1} | ‖x_{2}‖ ≤ x_{1}},

2010 Mathematics Subject Classification. 26B05, 26B35, 90C33.

Key words and phrases. Solvability, eigenvalue, second-order cone, absolute value equations.

∗The author’s work is supported by National Natural Science Foundation of China (No. 11471241).

†Corresponding author. The author’s work is supported by Ministry of Science and Technology, Taiwan.


where ‖ · ‖ denotes the Euclidean norm. If n = 1, then K^{1} is the set of nonnegative reals R_{+}. In general, a general second-order cone K could be the Cartesian product of SOCs, i.e.,

K := K^{n_{1}} × · · · × K^{n_{r}}.

For simplicity, we focus on the single second-order cone K^{n} because all the analysis can be carried over to the setting of Cartesian products. More details about the second-order cone, the Jordan product, and (·)^{1/2} will be introduced in Section 2.

Indeed, the SOCAVE (1) (respectively, SOCAVE (2)) is a natural extension of the standard absolute value equation (AVE for short) as below:

(3) Ax − |x| = b, (respectively, Ax + B|x| = b)

where |x| denotes the componentwise absolute value of vector x ∈ R^{n}. It is
known that the standard absolute value equation (3) was first introduced by
Rohn in [44] and recently has been investigated by many researchers. For
standard absolute value equation, there are two main research directions.

One is on the theoretical side, in which the corresponding properties of the solution for the AVE (3) are studied, see [21, 25, 28, 29, 32, 35, 42, 44, 52].

The other one focuses on algorithms for solving the absolute value equation, see [5, 23, 30, 31, 33, 34, 45, 53, 54].

On the theoretical aspect, Mangasarian and Meyer [35] show that the AVE (3) is equivalent to the bilinear program, the generalized LCP (linear complementarity problem), and the standard LCP provided 1 is not an eigenvalue of A. Prokopyev [42] further improves the above equivalence, showing that the AVE (3) can be equivalently recast as an LCP without any additional assumption, and also provides a relationship with mixed integer programming. In general, if solvable, the AVE (3) can have either a unique solution or multiple (e.g., exponentially many) solutions. Indeed, various sufficient conditions on solvability and non-solvability of the AVE (3) with unique and multiple solutions are discussed in [35, 42]. Moreover, Wu and Guo [52] further study the unique solvability of the AVE (3), and give some new and useful results for it.

Recently, the absolute value equations associated with second-order cone and circular cone were investigated in [22] and [27], respectively. In particular, Hu, Huang and Zhang [22] show that the SOCAVE (2) is equivalent to a class of second-order cone linear complementarity problems, and establish a result regarding the unique solvability of the SOCAVE (2). Along this direction, we further look into the SOCAVEs (1) and (2) in this paper, and obtain some new results about the existence of (unique) solutions.

The second optimization problem that we focus on is the so-called second-order cone eigenvalue complementarity problem, SOCEiCP for short. More specifically, given two matrices B, C ∈ R^{n×n}, the SOCEiCP is to find (x, y, λ) ∈ R^{n} × R^{n} × R such that

(4) SOCEiCP(B, C) :  y = λBx − Cx,  y ⪰_{K^{n}} 0,  x ⪰_{K^{n}} 0,  x^{T}y = 0,  a^{T}x = 1,

where a is an arbitrary fixed point with a ∈ int(K^{n}), and x ⪰_{K^{n}} 0 means that x ∈ K^{n} (the partial order induced by K^{n}). The SOCEiCP(B, C) given as in (4) comes naturally from the traditional eigenvalue complementarity problem [43, 47], which seeks to find (x, y, λ) ∈ R^{n} × R^{n} × R such that

EiCP(B, C) :  y = λBx − Cx,  y ≥ 0,  x ≥ 0,  x^{T}y = 0,  e^{T}x = 1,

where B, C ∈ R^{n×n} and e = (1, 1, · · · , 1)^{T} ∈ R^{n}. Usually, the matrix B is assumed to be positive definite. The scalar λ is called a complementary eigenvalue and x is a complementary eigenvector associated to λ for the pair (B, C). The condition x^{T}y = 0 and the nonnegativity requirements on x and y imply that either x_{i} = 0 or y_{i} = 0 for 1 ≤ i ≤ n. These two variables are called complementary.

A natural extension of the EiCP goes to the quadratic eigenvalue complementarity problem (QEiCP), whose mathematical format is as below. Given A, B, C ∈ R^{n×n}, the QEiCP consists of finding (x, y, λ) ∈ R^{n} × R^{n} × R such that

QEiCP(A, B, C) :  y = λ^{2}Ax + λBx + Cx,  y ≥ 0,  x ≥ 0,  x^{T}y = 0,  e^{T}x = 1,

where e = (1, 1, · · · , 1)^{T} ∈ R^{n}. It is clear that when A = 0, the QEiCP(A, B, C)
reduces to the EiCP(B,−C). The λ component of a solution to the QEiCP(A, B, C)
is called a quadratic complementary eigenvalue for the pair (A, B, C), whereas
the x component is called a quadratic complementary eigenvector for the pair
(A, B, C).

Following the same idea for creating the SOCEiCP(B, C), the third optimization problem that we study in this paper is the so-called second-order cone quadratic eigenvalue complementarity problem (SOCQEiCP). In other words, given matrices A, B, C ∈ R^{n×n}, the SOCQEiCP seeks to find (x, y, λ) ∈ R^{n} × R^{n} × R such that

(5) SOCQEiCP(A, B, C) :  y = λ^{2}Ax + λBx + Cx,  y ⪰_{K^{n}} 0,  x ⪰_{K^{n}} 0,  x^{T}y = 0,  a^{T}x = 1,
with an arbitrary fixed point a ∈ int(K^{n}). The SOCEiCP (4) and the SOCQEiCP (5) have been investigated in [2, 3, 19]. This paper aims to establish the solvability of the SOCEiCP (4) and the SOCQEiCP (5) by reformulating them as a second-order cone complementarity problem (SOCCP) and as a nonsmooth system of equations (see more details in Section 5).

We point out that the last normalization constraint appearing in the above EiCP, QEiCP, SOCEiCP, and SOCQEiCP has been introduced in order to prevent the x component of a solution from vanishing. In other words, “for an arbitrary fixed point a ∈ int(K^{n}), x ∈ K^{n} satisfying a^{T}x > 0 is equivalent to x ≠ 0”. To see this, we provide some arguments as below. First, it is trivial that a^{T}x > 0 implies x ≠ 0. Now, suppose that x = (x_{1}, x_{2}) ∈ K^{n} is nonzero. Then, we must have x_{1} > 0. Using the definition of

int(K^{n}) = {(x_{1}, x_{2}) ∈ R × R^{n−1} | ‖x_{2}‖ < x_{1}},

we have

a^{T}x = a_{1}x_{1} + ⟨a_{2}, x_{2}⟩ > |⟨a_{2}, x_{2}⟩| + ⟨a_{2}, x_{2}⟩ ≥ 0,

where the strict inequality follows from a_{1}x_{1} > ‖a_{2}‖x_{1} ≥ ‖a_{2}‖‖x_{2}‖ ≥ |⟨a_{2}, x_{2}⟩|. This proves that a^{T}x > 0.

Another thing that needs to be pointed out is that the normalization constraint e^{T}x = 1 is good enough for the EiCP and the QEiCP; moreover, this condition was also used in [2] for the SOCEiCP. However, we show that it does not make sense in the settings of SOCEiCP and SOCQEiCP because e ∉ int(K^{n}). Indeed, for a counterexample, we consider λ = 1, x = (1, −1)^{T} ∈ K^{2}, the two matrices C = [1, 2; 2, 5] ∈ R^{2×2} and B := I ∈ R^{2×2}. Then, we have

λBx − Cx = (1, −1)^{T} − [1, 2; 2, 5](1, −1)^{T} = (2, 2)^{T} ∈ K^{2}.

Hence, x^{T}(λBx − Cx) = 0, but e^{T}x = 0. This is why, in this paper, we require a point a ∈ int(K^{n}) such that a^{T}x = 1 to serve as the normalization constraint in SOCEiCP and SOCQEiCP.
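This counterexample is easy to verify numerically. The following Python sketch (ours, using NumPy; not part of the original argument) checks that x and λBx − Cx lie in K^{2}, that complementarity holds, and that e^{T}x = 0:

```python
import numpy as np

# Data of the counterexample: lambda = 1, x = (1, -1)^T, B = I, C = [1, 2; 2, 5].
lam = 1.0
x = np.array([1.0, -1.0])
B = np.eye(2)
C = np.array([[1.0, 2.0], [2.0, 5.0]])

y = lam * B @ x - C @ x          # y = (2, 2)^T

def in_soc(z, tol=1e-12):
    """Membership test for K^n: ||z_2|| <= z_1."""
    return np.linalg.norm(z[1:]) <= z[0] + tol

print(in_soc(x), in_soc(y))      # True True: both lie in K^2 (on its boundary)
print(x @ y)                     # 0.0: complementarity x^T y = 0 holds
print(x.sum())                   # 0.0: e^T x = 1 cannot normalize this x
```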

To close this section, we say a few words about notations. As usual, R^{n} denotes the space of n-dimensional real column vectors, and R_{+} and R_{++} denote the nonnegative and positive reals, respectively. For any x, y ∈ R^{n}, the Euclidean inner product is denoted by ⟨x, y⟩ = x^{T}y, and the Euclidean norm is denoted by ‖x‖ = √⟨x, x⟩. Given a matrix A ∈ R^{n×n}, ‖A‖_{a} denotes an arbitrary matrix norm, for example, ‖A‖_{1}, ‖A‖_{2} and ‖A‖_{∞}. In addition, ρ(A) means the spectral radius of A, that is, ρ(A) := max{|λ| | λ is an eigenvalue of A}, and M(K^{n}) ⊂ K^{n} denotes that for any z ∈ K^{n}, we have Mz ∈ K^{n}. For convenience, we say that a pair (x, λ) ∈ R^{n} × R solves the SOCEiCP(B, C) when the triplet (x, y, λ), with y = λBx − Cx, is a solution to the SOCEiCP(B, C) in the sense defined in (4). Similarly, we say that (x, λ) ∈ R^{n} × R solves the SOCQEiCP(A, B, C) when the same occurs with the triplet (x, y, λ), where y = λ^{2}Ax + λBx + Cx.

2. Preliminaries

In this section, we recall some basic concepts and background materials
regarding second-order cone and the absolute value of x ∈ R^{n}, which will
be extensively used in the subsequent analysis. More details can be found
in [9, 14, 16, 17, 20, 22].

The official definition of the second-order cone (SOC) was already given in Section 1. We begin with introducing the concept of the Jordan product. For any two vectors x = (x_{1}, x_{2}) ∈ R × R^{n−1} and y = (y_{1}, y_{2}) ∈ R × R^{n−1}, the Jordan product of x and y associated with K^{n} is given by

x ◦ y := (x^{T}y, y_{1}x_{2} + x_{1}y_{2})^{T}.

The Jordan product, unlike scalar or matrix multiplication, is not associative, which is a main source of complication in the analysis of optimization problems involving SOC; see [14, 16, 20] and references therein for more details. The identity element under this Jordan product is e = (1, 0, · · · , 0)^{T} ∈ R^{n}. With these definitions, x^{2} means the Jordan product of x with itself, i.e., x^{2} := x ◦ x, while x^{1/2} with x ∈ K^{n} denotes the unique vector in K^{n} such that x^{1/2} ◦ x^{1/2} = x. In light of this, the vector |x| in the SOCAVEs (1) and (2) is computed by

|x| := (x ◦ x)^{1/2}.
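For concreteness, the Jordan product and its basic features (the identity element e, commutativity, and the failure of associativity) can be illustrated with a short NumPy sketch of our own; the sample vectors are arbitrary:

```python
import numpy as np

def jordan(x, y):
    """Jordan product associated with K^n: x o y = (x^T y, y1*x2 + x1*y2)."""
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

x = np.array([2.0, 1.0, 0.0])
y = np.array([1.0, 0.0, 1.0])
z = np.array([0.5, 0.2, -0.3])
e = np.array([1.0, 0.0, 0.0])              # identity element of the Jordan product

print(jordan(x, e))                         # x o e = x
print(jordan(x, y) - jordan(y, x))          # commutative: difference is 0
print(jordan(jordan(x, y), z) - jordan(x, jordan(y, z)))   # NOT associative in general
```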

However, from this definition it is not easy to write out the expression of |x| explicitly. Fortunately, there is another way to reach |x|, via the spectral decomposition and the projection onto the second-order cone. We elaborate on it as below. For x = (x_{1}, x_{2}) ∈ R × R^{n−1}, the spectral decomposition of x with respect to the SOC is given by

(6) x = λ_{1}(x)u^{(1)}_{x} + λ_{2}(x)u^{(2)}_{x},

where λ_{i}(x) = x_{1} + (−1)^{i}‖x_{2}‖ for i = 1, 2 and

u^{(i)}_{x} = (1/2)(1, (−1)^{i}x_{2}^{T}/‖x_{2}‖)^{T} if x_{2} ≠ 0, and u^{(i)}_{x} = (1/2)(1, (−1)^{i}ω^{T})^{T} if x_{2} = 0,

with ω ∈ R^{n−1} being any vector satisfying ‖ω‖ = 1. The two scalars λ_{1}(x) and λ_{2}(x) are called the spectral values (or eigenvalues) of x, while the two vectors u^{(1)}_{x} and u^{(2)}_{x} are called the spectral vectors (or eigenvectors) of x. Moreover, it is obvious that the spectral decomposition of x ∈ R^{n} is unique if x_{2} ≠ 0.
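The spectral decomposition (6) is straightforward to implement; the following NumPy sketch (an illustration of ours, with an arbitrary sample vector) recovers x from its spectral values and spectral vectors:

```python
import numpy as np

def spectral(x):
    """Spectral decomposition (6): x = lam1*u1 + lam2*u2, lam_i = x1 + (-1)^i ||x2||."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    # when x2 = 0, any unit vector w works; take the first coordinate vector
    w = x2 / nx2 if nx2 > 0 else np.eye(len(x2))[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return x1 - nx2, x1 + nx2, u1, u2

x = np.array([1.0, 3.0, -4.0])              # ||x2|| = 5
l1, l2, u1, u2 = spectral(x)
print(l1, l2)                               # -4.0 6.0
print(l1 * u1 + l2 * u2)                    # recovers x = (1, 3, -4)
```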

Next, we talk about the projection onto the second-order cone. Let x_{+} be the projection of x onto K^{n}, and let x_{−} be the projection of −x onto the dual cone of K^{n}. Since the second-order cone K^{n} is self-dual, the dual cone of K^{n} is itself, i.e., (K^{n})^{∗} = K^{n}. In fact, the explicit formula for the projection of x = (x_{1}, x_{2}) ∈ R × R^{n−1} onto K^{n} is characterized in [14, 16, 17, 18, 20] as below:

x_{+} = x if x ∈ K^{n},  x_{+} = 0 if x ∈ −K^{n},  x_{+} = u otherwise,

where

u = ( (x_{1} + ‖x_{2}‖)/2 , ((x_{1} + ‖x_{2}‖)/2) · x_{2}/‖x_{2}‖ )^{T}.

Similarly, the expression of x_{−} is in the form of

x_{−} = 0 if x ∈ K^{n},  x_{−} = −x if x ∈ −K^{n},  x_{−} = w otherwise,

where

w = ( −(x_{1} − ‖x_{2}‖)/2 , ((x_{1} − ‖x_{2}‖)/2) · x_{2}/‖x_{2}‖ )^{T}.

Together with the spectral decomposition (6) of x, it can be verified that x = x_{+} − x_{−} and that x_{+} and x_{−} have the form:

x_{+} = (λ_{1}(x))_{+}u^{(1)}_{x} + (λ_{2}(x))_{+}u^{(2)}_{x},
x_{−} = (−λ_{1}(x))_{+}u^{(1)}_{x} + (−λ_{2}(x))_{+}u^{(2)}_{x},

where (α)_{+} = max{0, α} for α ∈ R.
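These formulas are also easy to check numerically. The NumPy sketch below (ours; the sample vector is arbitrary) computes x_{+} and x_{−} via the spectral decomposition and verifies x = x_{+} − x_{−} as well as the orthogonality of the two projections:

```python
import numpy as np

def proj_soc(x):
    """Projection x_+ of x onto K^n: x_+ = (lam1)_+ u1 + (lam2)_+ u2."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2 if nx2 > 0 else np.eye(len(x2))[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return max(x1 - nx2, 0.0) * u1 + max(x1 + nx2, 0.0) * u2

x = np.array([1.0, 3.0, -4.0])
xp = proj_soc(x)                 # x_+
xm = proj_soc(-x)                # x_-: projection of -x onto (K^n)^* = K^n

print(xp - xm)                   # x = x_+ - x_-
print(xp @ xm)                   # 0.0: the two projections are orthogonal
print(xp + xm)                   # equals |x| = x_+ + x_-
```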

Based on the definitions and expressions of x_{+} and x_{−}, we introduce another expression of |x| associated with the SOC. In fact, the alternative expression is obtained by the so-called SOC-function, which can be found in [10]. For any x ∈ R^{n}, we define the absolute value |x| of x with respect to the SOC as |x| := x_{+} + x_{−}. In the setting of the SOC, the form |x| = x_{+} + x_{−} is equivalent to the form |x| = (x ◦ x)^{1/2}. Combining the above expressions of x_{+} and x_{−}, it is easy to see that the absolute value |x| is in the form of

|x| = [(λ_{1}(x))_{+} + (−λ_{1}(x))_{+}]u^{(1)}_{x} + [(λ_{2}(x))_{+} + (−λ_{2}(x))_{+}]u^{(2)}_{x}
    = |λ_{1}(x)|u^{(1)}_{x} + |λ_{2}(x)|u^{(2)}_{x}.

For the absolute value |x| associated with the SOC, Hu, Huang and Zhang [22] obtained the properties collected in the following lemmas.

Lemma 2.1. [22, Theorem 2.1] The generalized Jacobian of the absolute value function | · | is given as follows:

(a) Suppose that x_{2} = 0. Then, ∂|x| = {tI | t ∈ sgn(x_{1})}.

(b) Suppose that x_{2} ≠ 0.

(i) If x_{1} + ‖x_{2}‖ < 0 and x_{1} − ‖x_{2}‖ < 0, then ∂|x| = {∇|x|} = { [ −1, 0^{T}; 0, −I ] }.

(ii) If x_{1} + ‖x_{2}‖ > 0 and x_{1} − ‖x_{2}‖ > 0, then ∂|x| = {∇|x|} = { [ 1, 0^{T}; 0, I ] }.

(iii) If x_{1} + ‖x_{2}‖ > 0 and x_{1} − ‖x_{2}‖ < 0, then

∂|x| = {∇|x|} = { [ 0, x_{2}^{T}/‖x_{2}‖ ; x_{2}/‖x_{2}‖, (x_{1}/‖x_{2}‖)(I − x_{2}x_{2}^{T}/‖x_{2}‖^{2}) ] }.

(iv) If x_{1} + ‖x_{2}‖ = 0 and x_{1} − ‖x_{2}‖ < 0, then

∂|x| = { (1/2) [ t − 1, (t + 1)x_{2}^{T}/‖x_{2}‖ ; (t + 1)x_{2}/‖x_{2}‖, −2I + (t + 1)x_{2}x_{2}^{T}/‖x_{2}‖^{2} ] | t ∈ sgn(x_{1} + ‖x_{2}‖) }.

(v) If x_{1} + ‖x_{2}‖ > 0 and x_{1} − ‖x_{2}‖ = 0, then

∂|x| = { (1/2) [ t + 1, (1 − t)x_{2}^{T}/‖x_{2}‖ ; (1 − t)x_{2}/‖x_{2}‖, 2I − (1 − t)x_{2}x_{2}^{T}/‖x_{2}‖^{2} ] | t ∈ sgn(x_{1} − ‖x_{2}‖) },

where the matrices are written in the block form [ α, b^{T}; b, M ] with α ∈ R, b ∈ R^{n−1}, M ∈ R^{(n−1)×(n−1)}, and the set-valued function sgn(·) is given by sgn(a) = {1} if a > 0, sgn(a) = [−1, 1] if a = 0, and sgn(a) = {−1} if a < 0.

Lemma 2.2. [22, Theorem 2.2] For any V ∈ ∂|x|, the absolute value of every eigenvalue of V is not greater than 1.

Lemma 2.3. [22, Theorem 2.3] For any V ∈ ∂|x|, we have V x = |x|.
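Lemmas 2.2 and 2.3 can be sanity-checked numerically. The sketch below (an illustration of ours) builds V = ∇|x| from case (iii) of Lemma 2.1 for an arbitrary x with x_{1} + ‖x_{2}‖ > 0 > x_{1} − ‖x_{2}‖ and confirms that Vx = |x| and that every eigenvalue of V lies in [−1, 1]:

```python
import numpy as np

x = np.array([1.0, 3.0, -4.0])     # x1 + ||x2|| = 6 > 0 > x1 - ||x2|| = -4: case (iii)
x1, x2 = x[0], x[1:]
n2 = np.linalg.norm(x2)
w = x2 / n2

# V = grad|x| in case (iii) of Lemma 2.1
V = np.block([[np.zeros((1, 1)), w[None, :]],
              [w[:, None], (x1 / n2) * (np.eye(len(x2)) - np.outer(w, w))]])

abs_x = np.concatenate(([n2], x1 * w))        # |x| = (||x2||, x1 * x2/||x2||) in this case
print(np.allclose(V @ x, abs_x))              # Lemma 2.3: V x = |x|
print(np.max(np.abs(np.linalg.eigvalsh(V))))  # Lemma 2.2: at most 1 (here exactly 1)
```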

3. Existence of solution to the SOCAVEs

This section is devoted to the existence and nonexistence of solution to SOCAVE (1) and SOCAVE (2).

Theorem 3.1. Let C ∈ R^{n×n} and b ∈ R^{n}.
(a) If the following system

(7) (C − I)z = b, z ∈ K^{n}

has a solution, then the SOCAVE (1) with A = C or A = −C has a solution.

(b) If the following system

(C + B)z = b, z ∈ K^{n}

has a solution, then the SOCAVE (2) with A = C or A = −C has a solution.

Proof. (a) Suppose that z := (z_{1}, z_{2}) ∈ R × R^{n−1} is a solution to the system (7), i.e.,

(C − I)z = b, z ∈ K^{n}.

Since z ∈ K^{n}, it follows that z_{1} ≥ ‖z_{2}‖. Take x = ±z (with the same sign as in A = ±C), which means x = (±z_{1}, ±z_{2}) ∈ R × R^{n−1}. Using the spectral form of |x|, we see that

|x| = |λ_{1}(x)|u^{(1)}_{x} + |λ_{2}(x)|u^{(2)}_{x} = |±z_{1} − ‖z_{2}‖| u^{(1)}_{x} + |±z_{1} + ‖z_{2}‖| u^{(2)}_{x} = z,

where the last equality follows by a direct computation in each of the two cases x = z and x = −z, using z_{1} ≥ ‖z_{2}‖. Plugging in A = ±C yields

Ax − |x| = (±C)(±z) − z = Cz − z = (C − I)z = b.

This says that x is a solution to the SOCAVE (1).

(b) The arguments are similar to part (a). □

Theorem 3.2. Suppose that −b ∈ K^{n} and A(K^{n}) ⊆ K^{n} with ρ(A) < 1.

Then, the SOCAVE (1) has a solution x ∈ K^{n}.

Proof. We consider the iterative scheme x^{k+1} = Ax^{k} − b with x^{0} := −b. Since −b ∈ K^{n} and A(K^{n}) ⊆ K^{n}, it follows that x^{k} ∈ K^{n} for every k ∈ N. Moreover, x^{k} = −∑_{j=0}^{k} A^{j}b, and the condition ρ(A) < 1 guarantees that this series converges. Hence, the sequence {x^{k}} converges to a point x^{∗} such that x^{∗} = Ax^{∗} − b. Combining this with the closedness of K^{n} yields x^{∗} ∈ K^{n}, which implies

Ax^{∗} − |x^{∗}| = Ax^{∗} − x^{∗} = b.

Thus, x^{∗} ∈ K^{n} is a solution to the SOCAVE (1). □
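The iterative scheme in the proof can be demonstrated with a small NumPy sketch (our own toy data: A is a positive multiple of the identity, so A(K^{3}) ⊆ K^{3} and ρ(A) = 0.5 < 1, and −b ∈ K^{3}):

```python
import numpy as np

A = 0.5 * np.eye(3)                      # A(K^3) contained in K^3 and rho(A) = 0.5 < 1
b = -np.array([2.0, 1.0, 1.0])           # -b = (2, 1, 1) lies in K^3

x = -b.copy()                            # x^0 := -b
for _ in range(200):                     # x^{k+1} = A x^k - b
    x = A @ x - b

print(x)                                 # limit x* = (I - A)^{-1}(-b) = (4, 2, 2), in K^3
print(np.linalg.norm(A @ x - x - b))     # Ax* - |x*| = Ax* - x* = b, residual ~ 0
```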

Remark 3.1. In fact, if the condition ρ(A) < 1 in Theorem 3.2 is replaced by ‖A‖_{a} < 1, where ‖A‖_{a} denotes an arbitrary matrix norm, then the result of Theorem 3.2 still holds.

Theorem 3.3. Suppose that 0 ≠ b ∈ K^{n}. Then, the following hold.

(a) If the spectral norm ‖A‖ < 1, where ‖A‖ := √(ρ(A^{T}A)), then the SOCAVE (1) has no solution.

(b) If ‖A‖ < 1, B(K^{n}) ⊂ −K^{n} and ‖Bx‖ ≥ ‖x‖ for any x ∈ K^{n}, then the SOCAVE (2) has no solution.

Proof. (a) Suppose, on the contrary, that x is a solution to the SOCAVE (1). From Ax − |x| = b and 0 ≠ b ∈ K^{n}, it follows that Ax − |x| ∈ K^{n} and x ≠ 0. This together with the fact |x| ∈ K^{n} implies Ax + |x| = (Ax − |x|) + 2|x| ∈ K^{n}. Moreover, by the self-duality of K^{n}, we see that

‖Ax‖^{2} − ‖x‖^{2} = ‖Ax‖^{2} − ‖|x|‖^{2} = ⟨Ax + |x|, Ax − |x|⟩ ≥ 0.

Hence, we have

‖x‖ ≤ ‖Ax‖ ≤ ‖A‖‖x‖ < ‖x‖,

where the last inequality is due to ‖A‖ < 1 and x ≠ 0. This is a contradiction. Therefore, the SOCAVE (1) has no solution.

(b) The idea of the proof is similar to part (a); we present it for completeness. Suppose that x is a solution to the SOCAVE (2); note x ≠ 0 since b ≠ 0. From Ax + B|x| = b and 0 ≠ b ∈ K^{n}, we know Ax + B|x| ∈ K^{n}. Then, it follows from B(K^{n}) ⊂ −K^{n} and b ∈ K^{n} that Ax = b − B|x| ∈ K^{n}, which says Ax − B|x| ∈ K^{n}. Moreover, by the self-duality of K^{n}, we have

‖Ax‖^{2} − ‖B|x|‖^{2} = ⟨Ax + B|x|, Ax − B|x|⟩ ≥ 0,

which implies

‖x‖ > ‖Ax‖ ≥ ‖B|x|‖ ≥ ‖|x|‖ = ‖x‖,

where the first inequality is due to ‖A‖ < 1 and the last is due to ‖Bx‖ ≥ ‖x‖ applied to |x| ∈ K^{n}. This is a contradiction. Hence, the SOCAVE (2) has no solution. □

4. The unique solvability for the SOCAVEs

In this section, we further investigate the unique solvability of SOCAVE (1) and SOCAVE (2).

Theorem 4.1. (a) If all singular values of A exceed 1, then the SO- CAVE (1) has a unique solution.

(b) If all singular values of A ∈ R^{n×n} exceed the maximal singular value
of B ∈ R^{n×n}, then the SOCAVE (2) has a unique solution.

Proof. (a) For any V ∈ ∂|x|, by Lemma 2.3, we have |x| = V x, which implies that

Ax − |x| = Ax − V x = (A − V )x,

i.e., the SOCAVE (1) becomes the equation (A − V )x = b. Moreover, by Lemma 2.1, we know that the real matrix V is symmetric, so the singular values of V are the absolute values of the eigenvalues of V. On the other hand, from Lemma 2.2, it follows that all singular values of V are not greater than 1. Combining this with the condition that all singular values of A exceed 1, we can assert that the matrix A − V is nonsingular. If not, there exists 0 ≠ x ∈ R^{n} such that (A − V )x = 0, i.e., Ax = V x. Hence, we have

‖x‖^{2} < ⟨Ax, Ax⟩ = ⟨V x, V x⟩ ≤ ‖x‖^{2},

which is a contradiction. Thus, the matrix A − V is nonsingular, which says the equation (A − V )x = b has a unique solution. Then, the proof is complete.

(b) The proof is similar to that for part (a); we present it for completeness.

For any V ∈ ∂|x|, by Lemma 2.3 again, we have |x| = V x; and hence Ax + B|x| = (A + BV )x.

Moreover, we also know that all singular values of V are not greater than
1 due to Lemma 2.2. Applying the condition that all singular values of A
exceed the maximal singular value of B ∈ R^{n×n} and [22, Theorem 3.1],
we obtain that the matrix A + BV is nonsingular. Thus, the equation
(A + BV )x = b has a unique solution, which says the SOCAVE (2) has a
unique solution. □

Remark 4.1. We point out that in [22], Hu, Huang and Zhang have shown that if all singular values of A ∈ R^{n×n} exceed the maximal singular value of B ∈ R^{n×n}, then the SOCAVE (2) has at least one solution for any b ∈ R^{n}. In Theorem 4.1(b), we study when the SOCAVE (2) has a unique solution, which is a stronger result than the aforementioned one in [22], although the same condition is used. In other words, the condition that all singular values of A ∈ R^{n×n} exceed the maximal singular value of B ∈ R^{n×n} guarantees that the SOCAVE (2) not only has at least one solution, but in fact has a unique solution.

Corollary 4.1. If the matrix A is nonsingular and ‖A^{−1}‖ < 1, then the SOCAVE (1) has a unique solution.

Proof. This is an immediate consequence of Theorem 4.1(a); the proof is similar to that of [35, Proposition 4.1] and hence omitted. □

Theorem 4.2. (a) If the matrix A = [a_{ij}] ∈ R^{n×n} satisfies

|a_{ii}| > √n + ∑_{j≠i} |a_{ij}|  ∀i ∈ N := {1, 2, · · · , n},

then for any b ∈ R^{n} the SOCAVE (1) has a unique solution.

(b) If the matrices A = [a_{ij}] ∈ R^{n×n} and B ∈ R^{n×n} satisfy

|a_{ii}| > ‖B‖_{∞}√n + ∑_{j≠i} |a_{ij}|  ∀i ∈ N := {1, 2, · · · , n},

then for any b ∈ R^{n} the SOCAVE (2) has a unique solution.

Proof. (a) Again, for any V ∈ ∂|x|, we know that |x| = V x and ‖V ‖ ≤ 1, which implies that the SOCAVE (1) is equivalent to the equation (A − V )x = b. Moreover, by the relationship between the spectral norm and the infinity norm, i.e.,

‖V ‖_{∞} ≤ √n ‖V ‖,

it follows that ‖V ‖_{∞} ≤ √n. Let [w_{ij}] = W := A − V = [a_{ij} − v_{ij}]. Then, we note that for any i ∈ N = {1, 2, · · · , n},

|w_{ii}| = |a_{ii} − v_{ii}| ≥ |a_{ii}| − |v_{ii}| > √n + ∑_{j≠i} |a_{ij}| − |v_{ii}| ≥ √n + ∑_{j≠i} |w_{ij}| − ∑_{j=1}^{n} |v_{ij}| ≥ ∑_{j≠i} |w_{ij}|,

where the last inequality is due to ‖V ‖_{∞} ≤ √n. This indicates that the matrix A − V = W is strictly diagonally dominant by rows. Hence, the matrix A − V is nonsingular, which implies that the equation (A − V )x = b has a unique solution. Thus, the SOCAVE (1) has a unique solution.

(b) The proof is similar to part (a) and we omit it here. □
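The row condition of Theorem 4.2(a) is simple to test for a given matrix; a NumPy sketch of ours (the sample matrices are arbitrary):

```python
import numpy as np

def thm42a_condition(A):
    """Check |a_ii| > sqrt(n) + sum_{j != i} |a_ij| for every row i."""
    n = A.shape[0]
    for i in range(n):
        off_diag = np.sum(np.abs(A[i])) - np.abs(A[i, i])
        if not np.abs(A[i, i]) > np.sqrt(n) + off_diag:
            return False
    return True

A = np.array([[5.0, 0.5, 0.3],
              [0.2, -6.0, 0.4],
              [0.1, 0.2, 7.0]])
print(thm42a_condition(A))            # True: SOCAVE (1) with this A is uniquely solvable
print(thm42a_condition(np.eye(3)))    # False: 1 > sqrt(3) fails
```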

Theorem 4.3. If the matrix A ∈ R^{n×n} can be expressed as

A = αI + M, where M(K^{n}) ⊆ K^{n} and α − 1 > ρ(M),

then for any b ∈ R^{n}, the SOCAVE (1) has a unique solution.

Proof. For any x ∈ K^{n} and V ∈ ∂|x|, we know that x = |x| = V x and ‖V ‖ ≤ 1. Note that

Ax − |x| = (αI − V )x + M x = (α − 1)|x| + M x.

Since M(K^{n}) ⊆ K^{n} and α − 1 > ρ(M), the matrix αI + M − V is a generalized M-matrix with respect to K^{n}; hence αI + M − V is nonsingular. In addition, applying the fact that Ax − |x| = (αI + M − V )x, it follows that the SOCAVE (1) has a unique solution. □

Lemma 4.1. For any x, y ∈ R^{n}, let |x| and |y| be the absolute values coming from the square roots of x^{2} and y^{2} under the Jordan product, respectively. Then, we have

‖|x| − |y|‖ ≤ ‖x − y‖.

Proof. First, we note that

‖x − y‖^{2} − ‖|x| − |y|‖^{2} = ⟨x − y, x − y⟩ − ⟨|x| − |y|, |x| − |y|⟩
= 2(⟨|x|, |y|⟩ − ⟨x, y⟩)
= 2(⟨x_{+} + x_{−}, y_{+} + y_{−}⟩ − ⟨x_{+} − x_{−}, y_{+} − y_{−}⟩)
= 4(⟨x_{+}, y_{−}⟩ + ⟨x_{−}, y_{+}⟩)
≥ 0,

where the last inequality follows from the self-duality of K^{n}. With this, it is clear that ‖|x| − |y|‖ ≤ ‖x − y‖. Then, the proof is complete. □
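Lemma 4.1 says that the SOC absolute value is nonexpansive. A quick randomized NumPy check of ours:

```python
import numpy as np

def abs_soc(x):
    """|x| with respect to K^n, via |x| = |lam1| u1 + |lam2| u2."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2 if nx2 > 0 else np.eye(len(x2))[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return abs(x1 - nx2) * u1 + abs(x1 + nx2) * u2

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    ok &= np.linalg.norm(abs_soc(x) - abs_soc(y)) <= np.linalg.norm(x - y) + 1e-10
print(ok)    # || |x| - |y| || <= ||x - y|| held on every sample
```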

Theorem 4.4. Let β ∈ R be such that the matrix βI + A is nonsingular.

(a) If the matrix A satisfies ‖(βI + A)^{−1}‖ < 1/(|β| + 1), then the SOCAVE (1) has a unique solution.

(b) If the matrices A and B satisfy ‖(βI + A)^{−1}‖ < 1/(|β| + ‖B‖), then the SOCAVE (2) has a unique solution.

Proof. (a) For the SOCAVE (1), we know that

Ax − |x| = b ⟺ (βI + A)x = βx + |x| + b.

Since the matrix βI + A is nonsingular, we further have

Ax − |x| = b ⟺ (βI + A)x = βx + |x| + b ⟺ x = (βI + A)^{−1}(βx + |x| + b).

In view of this, we consider the following iterative scheme:

x^{k+1} = (βI + A)^{−1}(βx^{k} + |x^{k}| + b).

With this, it follows that

x^{k+1} − x^{k} = (βI + A)^{−1}[β(x^{k} − x^{k−1}) + (|x^{k}| − |x^{k−1}|)].

Hence, we have

(8) ‖x^{k+1} − x^{k}‖ ≤ ‖(βI + A)^{−1}‖(|β|‖x^{k} − x^{k−1}‖ + ‖|x^{k}| − |x^{k−1}|‖) ≤ ‖(βI + A)^{−1}‖(|β| + 1)‖x^{k} − x^{k−1}‖,

where the last inequality holds due to Lemma 4.1. Together with the assumption ‖(βI + A)^{−1}‖ < 1/(|β| + 1), this shows that the iteration is a contraction, so {x^{k}} is a Cauchy sequence and converges to a solution of the SOCAVE (1).

Next, we verify that the SOCAVE (1) has a unique solution. If there exist x^{∗} and x̄ that both satisfy the SOCAVE (1), then, as in (8), we have

‖x^{∗} − x̄‖ ≤ ‖(βI + A)^{−1}‖(|β| + 1)‖x^{∗} − x̄‖.

Since ‖(βI + A)^{−1}‖ < 1/(|β| + 1), we obtain x^{∗} = x̄. This says that the SOCAVE (1) has a unique solution. Thus, the proof is complete.

(b) The proof is similar to part (a) and we omit it here. □
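The contraction argument in the proof directly yields a fixed-point algorithm. A NumPy sketch of ours, with toy data chosen so that the hypothesis of part (a) holds (β = 0, A = 5I, so ‖(βI + A)^{−1}‖ = 0.2 < 1/(|β| + 1) = 1):

```python
import numpy as np

def abs_soc(x):
    """|x| with respect to K^n via spectral values."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2 if nx2 > 0 else np.eye(len(x2))[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return abs(x1 - nx2) * u1 + abs(x1 + nx2) * u2

beta = 0.0
A = 5.0 * np.eye(3)
b = np.array([1.0, -2.0, 0.5])
Minv = np.linalg.inv(beta * np.eye(3) + A)     # ||Minv|| = 0.2 < 1/(|beta| + 1)

x = np.zeros(3)
for _ in range(200):                           # x^{k+1} = (beta I + A)^{-1}(beta x^k + |x^k| + b)
    x = Minv @ (beta * x + abs_soc(x) + b)

print(np.linalg.norm(A @ x - abs_soc(x) - b))  # residual of SOCAVE (1), ~ 0
```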

5. The solvabilities of SOCEiCP and SOCQEiCP

In this section, we focus on the solvabilities of the other two optimization problems, SOCEiCP(B,C) and SOCQEiCP(A,B,C), which are given as in (4) and (5) respectively. In order to clearly describe our results, we need a few concepts which were introduced in [3, 4].

Definition 5.1. Let K^{n} be a single second-order cone.

(a) A matrix A ∈ R^{n×n} is called K^{n}-regular if x^{T}Ax ≠ 0 for all nonzero x ⪰_{K^{n}} 0.

(b) A matrix A ∈ R^{n×n} is called strictly K^{n}-copositive if x^{T}Ax > 0 for all nonzero x ⪰_{K^{n}} 0.

(c) A triple (A, B, C) with A, B, C ∈ R^{n×n} is called K^{n}-hyperbolic if

(x^{T}Bx)^{2} ≥ 4(x^{T}Ax)(x^{T}Cx)

for all nonzero x ⪰_{K^{n}} 0.

(d) The class R_{0}(K^{n}) ⊆ R^{n×n} consists of those matrices A ∈ R^{n×n} such that there exists no nonzero x ∈ K^{n} satisfying Ax ∈ K^{n} and x^{T}Ax = 0.

(e) The class S_{0}(K^{n}) ⊆ R^{n×n} consists of those matrices A ∈ R^{n×n} such that Ax ∈ K^{n} for at least one nonzero x ∈ K^{n}.

(f) The class R^{0}_{0}(K^{n}) ⊆ R^{n×n} consists of those matrices A ∈ R^{n×n} such that x^{T}Ax = 0 for at least one nonzero x ∈ K^{n} satisfying Ax ∈ K^{n}.

(g) The class S^{0}_{0}(K^{n}) ⊆ R^{n×n} consists of those matrices A ∈ R^{n×n} such that there exists no nonzero x ∈ K^{n} satisfying Ax ∈ K^{n}.

In fact, there exist some studies in [3, 46, 48] which investigate eigenvalue problems involving general cones. The solvability results therein automatically include the solvabilities of SOCEiCP(B,C) and SOCQEiCP(A,B,C) as special cases. For example, we extract some of them from [3, 46, 48], specialized to the case where the cone reduces to an SOC, and list them as below.

Proposition 5.1. Let K^{n} be a single second-order cone and consider the
SOCEiCP(B,C) given as in (4) and the SOCQEiCP(A,B,C) given as in
(5).

(a) If B ∈ R^{n×n} is strictly K^{n}-copositive, then SOCEiCP(B, C) has
solutions for any C ∈ R^{n×n}.

(b) If A is K^{n}-regular and (A, B, C) is K^{n}-hyperbolic, then SOCQEiCP(A, B, C)
has solutions.

(c) The matrix C ∈ R^{0}_{0}(K^{n}) if and only if 0 is a quadratic complementary eigenvalue for SOCQEiCP(A, B, C).

(d) If C ∈ S_{0}^{0}(K^{n}) and A is strictly K^{n}-copositive, there exist at least
one positive and one negative quadratic complementary eigenvalue
for SOCQEiCP(A, B, C).

(e) If A ∈ S_{0}^{0}(K^{n}) and C is strictly K^{n}-copositive, there exist at least
one positive and one negative quadratic complementary eigenvalue
for SOCQEiCP(A, B, C).

In view of the above existing solvability results in the literature, we aim to study the solvabilities of SOCEiCP(B,C) and SOCQEiCP(A,B,C) via a different approach. In this section, we recast these problems in three reformulations, called Reformulation I, Reformulation II and Reformulation III.

The idea of Reformulation I is to recast these problems in the form of a second-order cone complementarity problem (SOCCP), which is a natural extension of the nonlinear complementarity problem (NCP). To proceed, we first recall the mathematical format of the SOCCP; more details can be found in [6, 7, 8, 9, 12, 13, 14, 16, 36, 37, 38, 39, 40, 41, 50, 51]. Given a continuously differentiable mapping F : R^{n} → R^{n}, the SOCCP(F ) is to find x ∈ R^{n} satisfying

(9) SOCCP(F ) :  x ⪰_{K^{n}} 0,  F (x) ⪰_{K^{n}} 0,  x^{T}F (x) = 0.

It is well known that the KKT conditions of a second-order cone programming problem can be rewritten as an SOCCP(F ). We now elaborate how to recast the SOCEiCP(B, C) as an SOCCP(F ). Suppose that we are given an SOCEiCP(B, C) as in (4), where B, C ∈ R^{n×n} and the matrix B is assumed to be positive definite. For any x ∈ R^{n} with x ≠ 0, plugging w = λBx − Cx into the complementarity condition x^{T}w = 0 yields λ = x^{T}Cx / x^{T}Bx. Hence, we obtain

w = (x^{T}Cx / x^{T}Bx) Bx − Cx.

With this, for any x ∈ R^{n} such that x ≠ 0, we define a mapping F : R^{n} → R^{n} by

(10) F (x) := (x^{T}Cx / x^{T}Bx) Bx − Cx.

This mapping F cannot yet be put into the SOCCP (9) because F (0) is not defined. To this end, we show the following lemma, which motivates the definition of F (0).

Lemma 5.1. Consider the SOCEiCP(B, C) given as in (4) where B is positive definite. Let F : R^{n} → R^{n} be defined as in (10) for x ≠ 0. Then,

lim_{x→0} F (x) = 0.

Proof. Since B is positive definite, by the Cholesky factorization there exists an invertible lower triangular matrix L with positive diagonal entries such that B = LL^{T}. Hence, for x ≠ 0, we have

x^{T}Bx = x^{T}LL^{T}x = (L^{T}x)^{T}(L^{T}x)

and

x^{T}Cx = x^{T}L [L^{−1}C(L^{T})^{−1}] L^{T}x = (L^{T}x)^{T} [L^{−1}C(L^{−1})^{T}] (L^{T}x).

For convenience, we denote D := L^{−1}C(L^{−1})^{T} and let M := ‖D‖_{sup} = max_{1≤i,j≤n} |d_{ij}| be the supremum norm of D, where d_{ij} means the (i, j)-entry of D. In addition, for x ≠ 0, we denote y = (y_{1}, · · · , y_{n})^{T} := L^{T}x. Then, we obtain

|x^{T}Cx / x^{T}Bx| = |y^{T}Dy / y^{T}y| ≤ (∑_{i,j=1}^{n} |d_{ij}||y_{i}||y_{j}|) / (∑_{i=1}^{n} |y_{i}|^{2}).

By Cauchy's inequality |y_{i}||y_{j}| ≤ (y_{i}^{2} + y_{j}^{2})/2, we see that

(∑_{i,j=1}^{n} |d_{ij}||y_{i}||y_{j}|) / (∑_{i=1}^{n} |y_{i}|^{2}) ≤ M · (∑_{i,j=1}^{n} (y_{i}^{2} + y_{j}^{2})/2) / (∑_{i=1}^{n} y_{i}^{2}) = (M/2) · (n∑_{i=1}^{n} y_{i}^{2} + n∑_{j=1}^{n} y_{j}^{2}) / (∑_{i=1}^{n} y_{i}^{2}) = nM,

which says

|x^{T}Cx / x^{T}Bx| ≤ nM.

This further implies that

‖F (x)‖ ≤ |x^{T}Cx / x^{T}Bx| · ‖Bx‖ + ‖Cx‖ ≤ (nM)‖Bx‖ + ‖Cx‖.

Applying the continuity of the linear transformations B and C proves lim_{x→0} F (x) = 0. □

Very often, the mapping F in an SOCCP(F ) is required to be differentiable. Therefore, in view of Lemma 5.1, it is natural to redefine F (x) as

(11) F (x) = (x^{T}Cx / x^{T}Bx) Bx − Cx if x ≠ 0,  and F (x) = 0 if x = 0.

This makes the mapping F : R^{n} → R^{n} continuous. Indeed, the mapping F is even smooth, except possibly at 0. In other words, F may not be differentiable at 0. To see this, we give an example as below. For n = 2, we take B = [b_{11}, b_{12}; b_{21}, b_{22}], positive definite with b_{12} > 0, and C = [c_{11}, c_{12}; c_{21}, c_{22}] with c_{22} ≠ 0. Because B is positive definite, the entries b_{11}, b_{22} are positive. If we consider the first term of F (x) as in (10), i.e., (x^{T}Cx / x^{T}Bx) Bx, it can be written out as

[(c_{11}x_{1}^{2} + (c_{12} + c_{21})x_{1}x_{2} + c_{22}x_{2}^{2}) / (b_{11}x_{1}^{2} + (b_{12} + b_{21})x_{1}x_{2} + b_{22}x_{2}^{2})] · (b_{11}x_{1} + b_{12}x_{2}, b_{21}x_{1} + b_{22}x_{2})^{T}.

If we denote

f (x) = (f_{1}(x), f_{2}(x))^{T} := (x^{T}Cx / x^{T}Bx) Bx,

then, using the fact that

lim_{x_{1}→0} [(c_{11}x_{1}^{2} + (c_{12} + c_{21})x_{1}x_{2} + c_{22}x_{2}^{2}) / (b_{11}x_{1}^{2} + (b_{12} + b_{21})x_{1}x_{2} + b_{22}x_{2}^{2})] · (b_{11} + b_{12}x_{2}/x_{1}) = ∞,

we see that ∂f_{1}/∂x_{1}(0) does not exist. This means f is not differentiable at 0, and hence F (x) = f (x) − Cx is not differentiable at 0.

Next, we provide two technical lemmas in order to express the Jacobian matrix of F (x) for x ≠ 0.

Lemma 5.2. Suppose that f : R^{n} → R and g_{i} : R^{n} → R (1 ≤ i ≤ n) are real-valued differentiable functions. Denote G(x) = (g_{1}(x), g_{2}(x), · · · , g_{n}(x))^{T}. Then, the scalar product function f (x)G(x) = (f (x)g_{1}(x), f (x)g_{2}(x), · · · , f (x)g_{n}(x))^{T} is a differentiable function on R^{n} and its Jacobian matrix ∇(f (x)G(x)) is expressed as

∇(f (x)G(x)) = ∇f (x)(G(x))^{T} + f (x)∇G(x).

Proof. The proof comes from direct computation as below.
\[
\nabla\big(f(x)G(x)\big)
=
\begin{pmatrix}
\big(\tfrac{\partial f}{\partial x_{1}}\, g_{1} + f\,\tfrac{\partial g_{1}}{\partial x_{1}}\big)(x) & \big(\tfrac{\partial f}{\partial x_{1}}\, g_{2} + f\,\tfrac{\partial g_{2}}{\partial x_{1}}\big)(x) & \cdots & \big(\tfrac{\partial f}{\partial x_{1}}\, g_{n} + f\,\tfrac{\partial g_{n}}{\partial x_{1}}\big)(x) \\
\big(\tfrac{\partial f}{\partial x_{2}}\, g_{1} + f\,\tfrac{\partial g_{1}}{\partial x_{2}}\big)(x) & \big(\tfrac{\partial f}{\partial x_{2}}\, g_{2} + f\,\tfrac{\partial g_{2}}{\partial x_{2}}\big)(x) & \cdots & \big(\tfrac{\partial f}{\partial x_{2}}\, g_{n} + f\,\tfrac{\partial g_{n}}{\partial x_{2}}\big)(x) \\
\vdots & \vdots & \ddots & \vdots \\
\big(\tfrac{\partial f}{\partial x_{n}}\, g_{1} + f\,\tfrac{\partial g_{1}}{\partial x_{n}}\big)(x) & \big(\tfrac{\partial f}{\partial x_{n}}\, g_{2} + f\,\tfrac{\partial g_{2}}{\partial x_{n}}\big)(x) & \cdots & \big(\tfrac{\partial f}{\partial x_{n}}\, g_{n} + f\,\tfrac{\partial g_{n}}{\partial x_{n}}\big)(x)
\end{pmatrix}
\]
\[
=
\begin{pmatrix}
\tfrac{\partial f}{\partial x_{1}}(x) \\ \tfrac{\partial f}{\partial x_{2}}(x) \\ \vdots \\ \tfrac{\partial f}{\partial x_{n}}(x)
\end{pmatrix}
\begin{pmatrix}
g_{1}(x) & g_{2}(x) & \cdots & g_{n}(x)
\end{pmatrix}
+ f(x)
\begin{pmatrix}
\tfrac{\partial g_{1}}{\partial x_{1}}(x) & \tfrac{\partial g_{2}}{\partial x_{1}}(x) & \cdots & \tfrac{\partial g_{n}}{\partial x_{1}}(x) \\
\tfrac{\partial g_{1}}{\partial x_{2}}(x) & \tfrac{\partial g_{2}}{\partial x_{2}}(x) & \cdots & \tfrac{\partial g_{n}}{\partial x_{2}}(x) \\
\vdots & \vdots & \ddots & \vdots \\
\tfrac{\partial g_{1}}{\partial x_{n}}(x) & \tfrac{\partial g_{2}}{\partial x_{n}}(x) & \cdots & \tfrac{\partial g_{n}}{\partial x_{n}}(x)
\end{pmatrix}
\]
\[
= \nabla f(x)\,(G(x))^{T} + f(x)\,\nabla G(x). \qquad \Box
\]
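The identity in Lemma 5.2 can be checked numerically. The sketch below (not from the paper) picks hypothetical functions f and G on R^{2} and compares the right-hand side of the lemma against a central-difference approximation of ∇(f(x)G(x)); throughout, ∇ denotes the transposed Jacobian, consistent with the lemma.

```python
# Sanity check of Lemma 5.2 with hypothetical f, G on R^2 (not from the paper).
# Convention: nabla is the transposed Jacobian, entry (i, j) = d(.)_j / dx_i.
import math

def f(x):
    return math.sin(x[0]) + x[1] ** 2

def G(x):
    return [x[0] * x[1], math.exp(x[0])]

def H(x):                          # the scalar product function f(x) G(x)
    fx = f(x)
    return [fx * g for g in G(x)]

def num_grad(F, x, m, h=1e-6):
    """Central-difference transposed Jacobian of F : R^2 -> R^m."""
    J = [[0.0] * m for _ in range(2)]
    for i in range(2):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        Fp, Fm = F(xp), F(xm)
        for j in range(m):
            J[i][j] = (Fp[j] - Fm[j]) / (2 * h)
    return J

x = [0.3, 0.7]
fx, Gx = f(x), G(x)
grad_f = [math.cos(x[0]), 2 * x[1]]             # analytic gradient of f
grad_G = [[x[1], math.exp(x[0])],               # analytic transposed Jacobian of G
          [x[0], 0.0]]
# Right-hand side of Lemma 5.2: grad f (G)^T + f grad G
rhs = [[grad_f[i] * Gx[j] + fx * grad_G[i][j] for j in range(2)]
       for i in range(2)]
lhs = num_grad(H, x, 2)
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)   # close to zero (finite-difference error only)
```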

Lemma 5.3. Consider the SOCEiCP(B, C) given as in (4) where B is positive definite. Let F : R^{n} → R^{n} be defined as in (11). Then, F is smooth except at 0 and its Jacobian matrix is expressed as
\[
\nabla F(x) = \frac{\big[(C + C^{T})xx^{T}B - (B + B^{T})xx^{T}C\big]\,xx^{T}B^{T}}{(x^{T}Bx)^{2}} + \frac{x^{T}Cx}{x^{T}Bx}\,B^{T} - C^{T}.
\]

Proof. Denote f(x) = x^{T}Cx/x^{T}Bx and g(x) = Bx. Then, F(x) = f(x)g(x) − Cx. For x ≠ 0, we know
\[
\nabla f(x) = \frac{\nabla(x^{T}Cx)\cdot(x^{T}Bx) - (x^{T}Cx)\cdot\nabla(x^{T}Bx)}{(x^{T}Bx)^{2}}
= \frac{(C + C^{T})x\cdot(x^{T}Bx) - (x^{T}Cx)\cdot(B + B^{T})x}{(x^{T}Bx)^{2}}
= \frac{\big[(C + C^{T})xx^{T}B - (B + B^{T})xx^{T}C\big]\,x}{(x^{T}Bx)^{2}}.
\]
Then, this together with Lemma 5.2 leads to the desired result. □
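The Jacobian formula in Lemma 5.3 can likewise be spot-checked by finite differences. The sketch below (not from the paper) uses hypothetical 2×2 matrices, B positive definite and C arbitrary, and compares the analytic ∇F(x) against a central-difference transposed Jacobian at a point x ≠ 0.

```python
# Finite-difference check of the Jacobian formula in Lemma 5.3
# (hypothetical 2x2 data; nabla is the transposed Jacobian, so the formula
# is compared entrywise against rows dF/dx_i of central differences).
import numpy as np

B = np.array([[2.0, 1.0], [1.0, 3.0]])   # positive definite
C = np.array([[1.0, -1.0], [0.5, 2.0]])

def F(x):
    return (x @ C @ x) / (x @ B @ x) * (B @ x) - C @ x

def grad_F(x):
    # Formula from Lemma 5.3
    xBx = x @ B @ x
    M = (C + C.T) @ np.outer(x, x) @ B - (B + B.T) @ np.outer(x, x) @ C
    return (M @ np.outer(x, x) @ B.T) / xBx**2 + (x @ C @ x) / xBx * B.T - C.T

x = np.array([0.4, 0.9])
h = 1e-6
J = np.zeros((2, 2))
for i in range(2):
    e = np.zeros(2)
    e[i] = h
    J[i] = (F(x + e) - F(x - e)) / (2 * h)   # row i holds dF_j/dx_i
err = np.abs(J - grad_F(x)).max()
print(err)   # close to zero (finite-difference error only)
```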

Now, we summarize the relation between the SOCEiCP(B, C) and the SOCCP(F) in the theorem below, which we call Reformulation I for SOCEiCP.

Theorem 5.1 (Reformulation I for SOCEiCP). Consider the SOCEiCP(B, C) given as in (4) where B is positive definite. Let F : R^{n} → R^{n} be defined as in (11). Then, the mapping F is smooth except at 0 and its Jacobian matrix is given as in Lemma 5.3. Moreover, the following hold.

(a) If (x^{∗}, λ^{∗}) solves the SOCEiCP(B, C), then x^{∗} solves the SOCCP(F).

(b) Conversely, if x̄ is a nonzero solution of the SOCCP(F), then (x^{∗}, λ^{∗}) solves the SOCEiCP(B, C) with λ^{∗} = x̄^{T}Cx̄ / x̄^{T}Bx̄ and x^{∗} = x̄ / (a^{T}x̄).

Proof. Part (a) is trivial, so we only need to prove part (b). Suppose that x̄ is a nonzero solution to the SOCCP(F) with F given as in (11). Then, we have
\[
\frac{\bar{x}^{T}C\bar{x}}{\bar{x}^{T}B\bar{x}}\,B\bar{x} - C\bar{x} \in K^{n}, \qquad \bar{x} \in K^{n}, \qquad\text{and}\qquad
\bar{x}^{T}\left[\frac{\bar{x}^{T}C\bar{x}}{\bar{x}^{T}B\bar{x}}\,B\bar{x} - C\bar{x}\right] = 0.
\]
Since a ∈ int(K^{n}) and x̄ ∈ K^{n}, it yields 1/(a^{T}x̄) > 0 by the same arguments as on page 4. From all the above, we conclude that
\[
y^{*} := \lambda^{*}Bx^{*} - Cx^{*} = \frac{1}{a^{T}\bar{x}}\left[\frac{\bar{x}^{T}C\bar{x}}{\bar{x}^{T}B\bar{x}}\,B\bar{x} - C\bar{x}\right] \in K^{n},
\]
\[
x^{*} := \frac{\bar{x}}{a^{T}\bar{x}} \in K^{n}, \qquad
a^{T}x^{*} = \frac{a^{T}\bar{x}}{a^{T}\bar{x}} = 1,
\]
\[
(x^{*})^{T}y^{*} = \left(\frac{1}{a^{T}\bar{x}}\right)^{2}\bar{x}^{T}\left[\frac{\bar{x}^{T}C\bar{x}}{\bar{x}^{T}B\bar{x}}\,B\bar{x} - C\bar{x}\right] = 0.
\]
Thus, (x^{∗}, λ^{∗}) solves the SOCEiCP(B, C). □

Next, we consider the SOCQEiCP(A, B, C) given as in (5), where A, B, C ∈ R^{n×n} such that the matrix A is positive definite (hence A is K^{n}-regular) and the triplet (A, B, C) is K^{n}-hyperbolic. For any x ∈ R^{n} with x ≠ 0, plugging w = λ^{2}Ax + λBx + Cx into the complementarity condition x^{T}w = 0 yields (x^{T}Ax)λ^{2} + (x^{T}Bx)λ + (x^{T}Cx) = 0. Thus, λ can be obtained by solving this quadratic equation, i.e.,
\[
\lambda_{1}(x) = \frac{-(x^{T}Bx) + \sqrt{(x^{T}Bx)^{2} - 4(x^{T}Ax)(x^{T}Cx)}}{2(x^{T}Ax)}, \tag{12}
\]
\[
\lambda_{2}(x) = \frac{-(x^{T}Bx) - \sqrt{(x^{T}Bx)^{2} - 4(x^{T}Ax)(x^{T}Cx)}}{2(x^{T}Ax)}. \tag{13}
\]
Then, for x ≠ 0, we define F_{i} : R^{n} → R^{n} as
\[
F_{i}(x) = \lambda_{i}^{2}(x)Ax + \lambda_{i}(x)Bx + Cx, \tag{14}
\]
where i = 1, 2. In order to guarantee the well-definedness of F_{i}(0) for i = 1, 2, we need to look into lim_{x→0} F_{i}(x).
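As a quick check that the roots in (12)-(13) indeed solve the scalar quadratic, the sketch below (hypothetical data, not from the paper: A = I, and B, C chosen so that the discriminant is positive) substitutes λ_{1}(x) and λ_{2}(x) back into (x^{T}Ax)λ^{2} + (x^{T}Bx)λ + (x^{T}Cx):

```python
# Back-substitution check of (12)-(13) with hypothetical data chosen so that
# the discriminant (x^T B x)^2 - 4 (x^T A x)(x^T C x) is positive.
import numpy as np

A = np.eye(2)
B = np.array([[3.0, 0.0], [0.0, 4.0]])
C = np.array([[1.0, 0.2], [0.2, 1.0]])
x = np.array([1.0, 2.0])

a, b, c = x @ A @ x, x @ B @ x, x @ C @ x
disc = b ** 2 - 4 * a * c            # positive for this data
lam1 = (-b + np.sqrt(disc)) / (2 * a)
lam2 = (-b - np.sqrt(disc)) / (2 * a)
r1 = a * lam1 ** 2 + b * lam1 + c
r2 = a * lam2 ** 2 + b * lam2 + c
print(r1, r2)   # both residuals are numerically zero
```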

Lemma 5.4. Consider the SOCQEiCP(A, B, C) given as in (5) where A is positive definite. Let F_{i} : R^{n} → R^{n} be defined as in (14) where x ≠ 0. Then, we have lim_{x→0} F_{i}(x) = 0 for i = 1, 2.

Proof. Since A is positive definite, by the Cholesky factorization, there exists an invertible lower triangular matrix L with positive diagonal entries such that A = LL^{T}. Using the same techniques as in the proof of Lemma 5.1, for x ≠ 0, we obtain
\[
x^{T}Ax = (L^{T}x)^{T}(L^{T}x), \qquad
x^{T}Bx = (L^{T}x)^{T}(L^{-1}B(L^{-1})^{T})(L^{T}x), \qquad
x^{T}Cx = (L^{T}x)^{T}(L^{-1}C(L^{-1})^{T})(L^{T}x).
\]
For convenience, we denote D := L^{−1}B(L^{−1})^{T}, E := L^{−1}C(L^{−1})^{T}, M_{1} := ‖D‖_{sup} = max_{1≤i,j≤n} |d_{ij}| the supremum norm of D, and M_{2} := ‖E‖_{sup} = max_{1≤i,j≤n} |e_{ij}| the supremum norm of E, where d_{ij} is the (i, j)-entry of D and e_{ij} is the (i, j)-entry of E. In addition, we also denote y = (y_{1}, · · · , y_{n})^{T} := L^{T}x. Using the same techniques as in the proof of Lemma 5.1, we obtain
\[
|x^{T}Bx| = |y^{T}Dy| \le nM_{1}\sum_{i=1}^{n} y_{i}^{2}, \qquad
|x^{T}Cx| = |y^{T}Ey| \le nM_{2}\sum_{i=1}^{n} y_{i}^{2}.
\]
Hence, for each i and for x ≠ 0, we see that
\[
|\lambda_{i}(x)|
\le \frac{|x^{T}Bx| + \sqrt{|x^{T}Bx|^{2} + 4|x^{T}Bx||x^{T}Cx|}}{2|x^{T}Ax|}
\le \frac{nM_{1}\sum_{i=1}^{n} y_{i}^{2} + \sqrt{\big(nM_{1}\sum_{i=1}^{n} y_{i}^{2}\big)^{2} + 4\big(nM_{1}\sum_{i=1}^{n} y_{i}^{2}\big)\big(nM_{2}\sum_{i=1}^{n} y_{i}^{2}\big)}}{2\sum_{i=1}^{n} y_{i}^{2}}
\le M_{3} := \left(\frac{1+\sqrt{5}}{2}\right) n \max\{M_{1}, M_{2}\}.
\]
This yields
\[
\|F_{i}(x)\| \le M_{3}^{2}\|Ax\| + M_{3}\|Bx\| + \|Cx\|
\]
for each i and x ≠ 0. Then, by the continuity of the linear transformations A, B, and C, the desired result follows. □
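The conclusion of Lemma 5.4 can be observed numerically: since every quadratic form x^{T}Mx scales by t^{2} under x ↦ tx, the ratio λ_{i}(tx) is independent of t > 0, so F_{i}(tx) shrinks linearly with t. The sketch below (hypothetical data, not from the paper, with a positive discriminant) illustrates this:

```python
# Numerical illustration of Lemma 5.4 with hypothetical data: lambda_i stays
# bounded along x = t*v as t -> 0, so ||F_i(t*v)|| -> 0 (linearly in t here).
import numpy as np

A = np.eye(2)
B = np.array([[3.0, 0.0], [0.0, 4.0]])
C = np.array([[1.0, 0.2], [0.2, 1.0]])

def lam(i, x):
    # lambda_i from (12)-(13); i = 0 gives lambda_1, i = 1 gives lambda_2
    a, b, c = x @ A @ x, x @ B @ x, x @ C @ x
    s = np.sqrt(b ** 2 - 4 * a * c)
    return (-b + s) / (2 * a) if i == 0 else (-b - s) / (2 * a)

def F(i, x):
    l = lam(i, x)
    return l ** 2 * (A @ x) + l * (B @ x) + C @ x

v = np.array([1.0, 2.0])
norms = [float(np.linalg.norm(F(0, t * v))) for t in (1e-1, 1e-3, 1e-5)]
print(norms)   # shrinks toward 0 as t decreases
```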

Again, in view of Lemma 5.4, we need to redefine F_{i} at the origin in order to obtain a continuous mapping F_{i}. In other words, we redefine F_{i}(x) by
\[
F_{i}(x) =
\begin{cases}
\lambda_{i}^{2}(x)Ax + \lambda_{i}(x)Bx + Cx & \text{if } x \ne 0, \\
0 & \text{if } x = 0,
\end{cases} \tag{15}
\]

where λ_{i}(x), i = 1, 2, are given as in (12)-(13). From Lemma 5.4, it is clear that the mapping F_{i} : R^{n} → R^{n} is continuous for i = 1, 2. In fact, the mapping F_{i} : R^{n} → R^{n} is smooth except at 0. To see that F_{i} may fail to be differentiable at 0, we give an example as follows. For n = 2, we take
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},
\]
which is positive definite, and
\[
B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}
\]
such that b_{22} ≠ 0. Because A is positive definite, the diagonal entries a_{11}, a_{22} are positive. Now for each i = 1, 2, we consider the first two terms of F_{i}(x) described as in (14), i.e.,
\[
\lambda_{i}^{2}(x)\begin{pmatrix} a_{11}x_{1} + a_{12}x_{2} \\ a_{21}x_{1} + a_{22}x_{2} \end{pmatrix}
+ \lambda_{i}(x)\begin{pmatrix} b_{11}x_{1} + b_{12}x_{2} \\ b_{21}x_{1} + b_{22}x_{2} \end{pmatrix}
:= \begin{pmatrix} g_{i1}(x) \\ g_{i2}(x) \end{pmatrix}.
\]

It can be verified that
\[
\lim_{x_{1}\to 0} \frac{g_{i2}(x)}{x_{1}} = \infty,
\]
which implies that ∂g_{i2}/∂x_{1}(0) does not exist. Therefore, F_{i}(x) is not differentiable at 0.

For x ≠ 0, the Jacobian matrix of F_{i}(x) in (15) is computed as below.

Lemma 5.5. Consider the SOCQEiCP(A, B, C) given as in (5) where A is positive definite. Let F_{i} : R^{n} → R^{n} be defined as in (15) for i = 1, 2. Then, F_{i} is smooth except at 0 and its Jacobian matrix is expressed as
\[
\nabla F_{i}(x) = \nabla\lambda_{i}(x)\big[2\lambda_{i}(x)x^{T}A^{T} + x^{T}B^{T}\big] + \lambda_{i}^{2}(x)A^{T} + \lambda_{i}(x)B^{T} + C^{T},
\]
where
\[
\nabla\lambda_{1}(x) = \frac{1}{2x^{T}Ax}(Bx + B^{T}x)\big[(D(x))^{-\frac{1}{2}}(x^{T}Bx) - 1\big]
- \frac{1}{x^{T}Ax}(D(x))^{-\frac{1}{2}}\big[(Ax + A^{T}x)(x^{T}Cx) + (Cx + C^{T}x)(x^{T}Ax)\big]
+ \frac{1}{2(x^{T}Ax)^{2}}\big[x^{T}Bx - \sqrt{D(x)}\big](Ax + A^{T}x),
\]
\[
\nabla\lambda_{2}(x) = -\frac{1}{2x^{T}Ax}(Bx + B^{T}x)\big[(D(x))^{-\frac{1}{2}}(x^{T}Bx) + 1\big]
+ \frac{1}{x^{T}Ax}(D(x))^{-\frac{1}{2}}\big[(Ax + A^{T}x)(x^{T}Cx) + (Cx + C^{T}x)(x^{T}Ax)\big]
+ \frac{1}{2(x^{T}Ax)^{2}}\big[x^{T}Bx + \sqrt{D(x)}\big](Ax + A^{T}x),
\]
and D(x) := (x^{T}Bx)^{2} − 4(x^{T}Ax)(x^{T}Cx).
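The gradient formulas above can be spot-checked by finite differences. The sketch below (not from the paper) implements ∇λ_{1} from Lemma 5.5 with hypothetical matrices, A = I and B, C chosen so that D(x) > 0 at the test point, and compares it with a central-difference gradient of λ_{1}.

```python
# Finite-difference check of the gradient formula for lambda_1 in Lemma 5.5
# (hypothetical data; D(x) = (x^T B x)^2 - 4 (x^T A x)(x^T C x) > 0 at x).
import numpy as np

A = np.eye(2)
B = np.array([[3.0, 0.0], [0.0, 4.0]])
C = np.array([[1.0, 0.2], [0.2, 1.0]])

def lam1(x):
    a, b, c = x @ A @ x, x @ B @ x, x @ C @ x
    return (-b + np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)

def grad_lam1(x):
    # Formula from Lemma 5.5
    a, b, c = x @ A @ x, x @ B @ x, x @ C @ x
    D = b ** 2 - 4 * a * c
    da, db, dc = (A + A.T) @ x, (B + B.T) @ x, (C + C.T) @ x
    return (db * (D ** -0.5 * b - 1) / (2 * a)
            - D ** -0.5 * (da * c + dc * a) / a
            + (b - np.sqrt(D)) * da / (2 * a ** 2))

x = np.array([0.6, 1.1])
h = 1e-6
fd = np.array([(lam1(x + h * e) - lam1(x - h * e)) / (2 * h)
               for e in np.eye(2)])
err = np.abs(fd - grad_lam1(x)).max()
print(err)   # close to zero (finite-difference error only)
```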