Computational and Applied Mathematics, vol. 37, no. 5, pp. 5727-5749, 2018

### Discovery of new complementarity functions for NCP and SOCCP

Peng-Fei Ma ^{1}

Department of Mathematics

Zhejiang University of Science and Technology Hangzhou, Zhejiang 310023, P.R. China

Jein-Shan Chen ^{2}
Department of Mathematics
National Taiwan Normal University

Taipei 11677, Taiwan E-mail: jschen@math.ntnu.edu.tw

Chien-Hao Huang ^{3}
Department of Mathematics
National Taiwan Normal University

Taipei 11677, Taiwan

Chun-Hsu Ko ^{4}

Department of Electrical Engineering I-Shou University

Kaohsiung 840, Taiwan

February 26, 2017

(1st revised on November 18, 2017) (2nd revised on March 9, 2018)

(3rd revised on May 20, 2018)

1E-mail: mathpengfeima@126.com. This research was supported by a grant from the National Natural Science Foundation of China (No. 11626212).

2Corresponding author. The author’s work is supported by Ministry of Science and Technology, Taiwan.

3E-mail: qqnick0719@ntnu.edu.tw

4E-mail: chko@isu.edu.tw

Abstract. It is well known that complementarity functions play an important role in dealing with complementarity problems. In this paper, we propose a few new classes of complementarity functions for nonlinear complementarity problems and second-order cone complementarity problems. The construction of these new complementarity functions is based on discrete generalization, a novel idea in contrast to the continuous generalization of the Fischer-Burmeister function. Surprisingly, these new families of complementarity functions possess continuous differentiability even though they are discrete-oriented extensions. This feature enables methods such as derivative-free algorithms to be employed directly for solving nonlinear complementarity problems and second-order cone complementarity problems. This is a new discovery in the literature, and we believe that these new complementarity functions can also be used in many other contexts.

Keywords. NCP, SOCCP, natural residual, complementarity function.

### 1 Introduction

In general, the complementarity problem comes from the KKT conditions of linear and nonlinear programming problems. Different types of optimization problems give rise to various complementarity problems, for example, the linear complementarity problem, nonlinear complementarity problem, semidefinite complementarity problem, second-order cone complementarity problem, and symmetric cone complementarity problem. In dealing with complementarity problems, the so-called complementarity functions play an important role. In this paper, we focus on two classes of complementarity functions, which are used for the nonlinear complementarity problem (NCP) and the second-order cone complementarity problem (SOCCP), respectively.

The first class is the nonlinear complementarity problem (NCP), which has attracted much attention since the 1970s because of its wide applications in the fields of economics, engineering, and operations research; see [17, 21, 29] and references therein. In mathematical form, the NCP is to find a point x ∈ R^{n} such that

x ≥ 0, F(x) ≥ 0, ⟨x, F(x)⟩ = 0,

where ⟨·, ·⟩ is the Euclidean inner product and F = (F_{1}, . . . , F_{n})^{T} is a map from R^{n} to R^{n}. For solving the NCP, the so-called NCP-function φ : R^{2} → R, defined by

φ(a, b) = 0 ⇐⇒ a, b ≥ 0, ab = 0,

plays a crucial role. Generally speaking, with such NCP-functions, the NCP can be reformulated as nonsmooth equations [36, 39, 44] or as an unconstrained minimization problem [22, 23, 27, 31, 32, 40, 43]. Then, different kinds of approaches and algorithms are designed based on the aforementioned reformulations and various NCP-functions. During the past four decades, around thirty NCP-functions have been proposed; see [26] for a survey.
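As a concrete illustration of the reformulation idea (ours, not from the paper), a point x solves the NCP exactly when every component of the residual map φ(x_{i}, F_{i}(x)) vanishes. Here we use the natural residual NCP-function φ(a, b) = min{a, b}; the matrix M, vector q, and evaluation point are made-up test data.

```python
# Hypothetical sketch: componentwise NCP residual via phi(a, b) = min{a, b}.
# M, q, and the evaluation point are made-up illustration data.
import numpy as np

def phi_min(a, b):
    # natural residual NCP-function: zero iff a, b >= 0 and a*b = 0
    return np.minimum(a, b)

def F(x):
    # a small affine monotone map chosen only for illustration
    M = np.array([[2.0, 1.0], [1.0, 3.0]])
    q = np.array([-1.0, -2.0])
    return M @ x + q

def ncp_residual(x):
    return phi_min(x, F(x))

# x = (0.2, 0.6) satisfies x >= 0 and F(x) = 0, hence solves this NCP:
# the residual vanishes componentwise (up to rounding)
print(ncp_residual(np.array([0.2, 0.6])))
```

Minimizing ‖φ(x, F(x))‖² over x then turns the NCP into an unconstrained optimization problem, which is exactly the use the paper makes of its new NCP-functions.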

The second class is the second-order cone complementarity problem (SOCCP), which can be viewed as a natural extension of the NCP and seeks a ζ ∈ R^{n} such that

ζ ∈ K, F(ζ) ∈ K, ⟨ζ, F(ζ)⟩ = 0,

where F : R^{n} → R^{n} is a map and K is the Cartesian product of second-order cones (SOC), also called Lorentz cones [19]. In other words, K is expressed as

K = K^{n_1} × · · · × K^{n_m},

where m, n_{1}, . . . , n_{m} ≥ 1, n_{1} + · · · + n_{m} = n, and

K^{n_i} := {(x_{1}, x_{2}) ∈ R × R^{n_i − 1} | ‖x_{2}‖ ≤ x_{1}},

with ‖ · ‖ denoting the Euclidean norm. The SOCCP has important applications in engineering problems [35] and robust Nash equilibria [28]. Another important special case of the SOCCP corresponds to the Karush-Kuhn-Tucker (KKT) optimality conditions for the second-order cone program (SOCP) (see [4] for details):

minimize c^{T}x
subject to Ax = b, x ∈ K,

where A ∈ R^{m×n} has full row rank, b ∈ R^{m}, and c ∈ R^{n}. Many solution methods have been proposed for solving the SOCCP; see [12] for a survey. For example, the merit function approach, based on reformulating the SOCCP as an unconstrained smooth minimization problem, is studied in [4, 6, 38]. In this approach, one seeks a smooth function ψ : R^{n} × R^{n} → R_{+} such that

ψ(x, y) = 0 ⇐⇒ ⟨x, y⟩ = 0, x ∈ K^{n}, y ∈ K^{n}. (1)

Then, the SOCCP can be expressed as an unconstrained smooth (global) minimization problem:

min_{ζ∈R^{n}} ψ(ζ, F(ζ)). (2)

In fact, a function ψ satisfying the condition in (1) (not necessarily smooth) is called a complementarity function for the SOCCP (or a complementarity function associated with K^{n}). Various gradient methods, such as conjugate gradient methods and quasi-Newton methods [2, 20], can be applied for solving (2). In general, for this approach to be effective, the choice of the complementarity function ψ is crucial.

Back to the complementarity functions for the NCP, two popular choices of NCP-functions are the well-known Fischer-Burmeister function (FB function, in short) φ_{FB} : R^{2} → R defined by (see [23, 24])

φ_{FB}(a, b) = √(a^{2} + b^{2}) − (a + b),

and the squared norm of the Fischer-Burmeister function given by

ψ_{FB}(a, b) = (1/2) [φ_{FB}(a, b)]^{2}.

In addition, the generalized Fischer-Burmeister function φ_{p} : R^{2} → R, which includes the Fischer-Burmeister function as a special case, is considered in [5, 7, 8, 11, 30, 42]. In particular, the function φ_{p} is a natural "continuous extension" of φ_{FB}, in which the 2-norm in φ_{FB}(a, b) is replaced by the general p-norm. In other words, φ_{p} : R^{2} → R is defined as

φ_{p}(a, b) = ‖(a, b)‖_{p} − (a + b), p > 1, (3)

and its geometric view is depicted in [42]. The effect of perturbing p on different kinds of algorithms is investigated in [9–11, 14, 15]. We point out that the generalized Fischer-Burmeister function φ_{p} given as in (3) is not differentiable, whereas the squared norm of the generalized Fischer-Burmeister function is smooth, so the latter is usually adopted as a differentiable NCP-function [38]. Moreover, all the aforementioned functions, including the Fischer-Burmeister function, the generalized Fischer-Burmeister function, and their squared norms, can be extended to the setting of the SOCCP via Jordan algebra.
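To make (3) concrete, here is a small numerical sketch (ours, not from the paper) of φ_{p} and the squared-norm merit function; the chosen values of p and the test points are arbitrary.

```python
# Sketch of the generalized FB function phi_p(a, b) = ||(a, b)||_p - (a + b)
# from (3) and the merit function psi_p(a, b) = phi_p(a, b)^2 / 2.
# The values of p and the test points below are arbitrary illustrations.
import numpy as np

def phi_p(a, b, p=2.0):
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

def psi_p(a, b, p=2.0):
    return 0.5 * phi_p(a, b, p) ** 2

# phi_p vanishes on complementary pairs (a, b >= 0 with ab = 0) ...
assert abs(phi_p(0.0, 3.0, p=1.5)) < 1e-12
# ... and is nonzero otherwise, e.g. at (1, 1) with p = 2
assert phi_p(1.0, 1.0, p=2.0) < 0
```

Note that ψ_{p} is everywhere nonnegative and vanishes exactly at complementary pairs, which is what makes it usable as a merit function.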

A different type of popular NCP-function is the natural residual function φ_{NR} : R^{2} → R given by

φ_{NR}(a, b) = a − (a − b)_{+} = min{a, b}.

Recently, Chen et al. [16] proposed a family of generalized natural residual functions φ^{p}_{NR} defined by

φ^{p}_{NR}(a, b) = a^{p} − (a − b)^{p}_{+},

where p > 1 is a positive odd integer, (a − b)^{p}_{+} = [(a − b)_{+}]^{p}, and (a − b)_{+} = max{a − b, 0}. When p = 1, φ^{p}_{NR} reduces to the natural residual function φ_{NR}, i.e.,

φ^{1}_{NR}(a, b) = a − (a − b)_{+} = min{a, b} = φ_{NR}(a, b).

As remarked in [16], this extension is a "discrete generalization", not a "continuous generalization". Nonetheless, it surprisingly possesses twice differentiability, so the squared norm of φ^{p}_{NR} is not needed. Based on this discrete generalization, two families of NCP-functions with the feature of symmetric surfaces are further proposed in [3]. In contrast, it is very natural to ask whether there is a similar "discrete extension" of the Fischer-Burmeister function. We answer this question affirmatively.
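The defining property of φ^{p}_{NR} can be verified numerically; the grid check below (ours, not from the paper) confirms that for odd p the zero set of φ^{p}_{NR} coincides with the complementary pairs a, b ≥ 0, ab = 0.

```python
# Grid check that phi_NR^p(a, b) = a^p - max(a - b, 0)^p with odd p
# vanishes exactly on the complementary pairs a, b >= 0, ab = 0.
# The grid and tolerance are arbitrary illustration choices.
import numpy as np

def phi_nr_p(a, b, p=3):
    return a ** p - max(a - b, 0.0) ** p

for ai in range(-20, 21):
    for bi in range(-20, 21):
        a, b = ai / 10.0, bi / 10.0
        is_zero = abs(phi_nr_p(a, b)) < 1e-12
        is_complementary = a >= 0 and b >= 0 and a * b < 1e-12
        assert is_zero == is_complementary
```

The check passes for any odd p; for even p it would fail (e.g. a < 0, b > a gives a^{p} > 0 but (a − b)_{+} = 0 only masks the sign of a).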

In this paper, we apply the idea of “discrete generalization” to the Fischer-Burmeister
function which gives the following function (denoted by φ^{p}

D−FB):

φ^{p}

D−FB(a, b) =√

a^{2}+ b^{2}p

− (a + b)^{p}, (4)

where p > 1 is a positive odd integer and (a, b) ∈ R^{2}. Notice that when p = 1, φ^{p}

D−FB

reduces to the Fischer-Burmeister function. In Section 3, we will see that φ^{p}

D−FB is an
NCP-function and is twice differentiable directly without taking its squared norm. Note
that if p is even, it is no longer an NCP-function. Even though we have the feature of
differentiability, we point out that the Newton method may not applied directly because
the Jacobian at a degenerate solution to NCP is singular (see [32, 33]). Nonetheless, this
feature may enable that many methods like derivative-free algorithm can be employed
directly for solving NCP. In addition, we investigate the differentiable properties of φ^{p}_{D−FB},
the computable formulas for their gradients and Jacobians. In order to have more in-
sight for this new family of NCP-function, we also depict the surfaces of φ^{p}

D−FB(a, b) with various values of p.
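Definition (4) can be sketched in a few lines; the check below (ours) also illustrates why an odd p is essential, using the counterexample pair (−5, 0) for which even powers cancel the sign.

```python
# Sketch of the discrete generalization (4):
# phi_DFB^p(a, b) = (sqrt(a^2 + b^2))^p - (a + b)^p.
# The test points are illustration values.
import math

def phi_dfb(a, b, p):
    return math.hypot(a, b) ** p - (a + b) ** p

# odd p: zero exactly on complementary pairs a, b >= 0 with ab = 0
assert phi_dfb(0.0, 2.0, 3) == 0.0
assert phi_dfb(-5.0, 0.0, 3) != 0.0
# even p: phi vanishes at the non-complementary pair (-5, 0),
# since (sqrt(25))^2 - (-5)^2 = 25 - 25 = 0
assert phi_dfb(-5.0, 0.0, 2) == 0.0
```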

In Section 4, we show that the new function φ^{p}_{D−FB} can be further employed in the SOCCP setting as a complementarity function and a merit function. In other words, in terms of Jordan algebra, we define φ^{p}_{D−FB} : R^{n} × R^{n} → R^{n} by

φ^{p}_{D−FB}(x, y) = (√(x^{2} + y^{2}))^{p} − (x + y)^{p}, (5)

where p > 1 is a positive odd integer, x, y ∈ R^{n}, x^{2} = x ◦ x is the Jordan product of x with itself, and √x, for x ∈ K^{n}, is the unique vector such that √x ◦ √x = x. We prove that each φ^{p}_{D−FB}(x, y) is a complementarity function associated with K^{n} and establish formulas for its gradient and Jacobian. These properties and formulas can be used to design and analyze non-interior continuation methods for solving second-order cone programs and complementarity problems. In addition, several variants of φ^{p}_{D−FB} are also shown to be complementarity functions for the SOCCP.

Throughout the paper, we assume K = K^{n} for simplicity; all the analysis carries over without difficulty to the case where K is a product of second-order cones. The following notations will be used. The identity matrix is denoted by I, and R^{n} denotes the space of n-dimensional real column vectors. For any given x ∈ R^{n} with n > 1, we write x = (x_{1}, x_{2}), where x_{1} is the first entry of x and x_{2} is the subvector consisting of the remaining entries. For every differentiable function f : R^{n} → R, ∇f(x) denotes the gradient of f at x. For every differentiable mapping F : R^{n} → R^{m}, ∇F(x) is an n × m matrix which denotes the transposed Jacobian of F at x. For nonnegative scalar functions α and β, we write α = o(β) to mean lim_{β→0} α/β = 0.

### 2 Preliminaries

In this section, we review some background materials about the Jordan algebra in [19, 25].

Then, we present some technical lemmas which are needed in subsequent analysis.

For any x = (x_{1}, x_{2}), y = (y_{1}, y_{2}) ∈ R × R^{n−1}, we define the Jordan product associated with K^{n} as

x ◦ y := (⟨x, y⟩, y_{1}x_{2} + x_{1}y_{2}).

The identity element under this product is e := (1, 0, . . . , 0)^{T} ∈ R^{n}. For any given x = (x_{1}, x_{2}) ∈ R × R^{n−1}, we define the symmetric matrix

L_{x} := [ x_{1}    x_{2}^{T}
           x_{2}    x_{1}I ],

which can be viewed as a linear mapping from R^{n} to R^{n}. It is easy to verify that L_{x}y = x ◦ y for all y ∈ R^{n}.

Moreover, L_{x} is invertible for x ≻_{K^{n}} 0, and

L_{x}^{−1} = (1/det(x)) [ x_{1}     −x_{2}^{T}
                          −x_{2}    (det(x)/x_{1})I + (1/x_{1})x_{2}x_{2}^{T} ],

where det(x) := x_{1}^{2} − ‖x_{2}‖^{2}. We next recall from [12, 25] that each x = (x_{1}, x_{2}) ∈ R × R^{n−1} admits a spectral factorization, associated with K^{n}, of the form

x = λ_{1}u^{(1)} + λ_{2}u^{(2)}, (6)

where λ_{1}, λ_{2} and u^{(1)}, u^{(2)} are the spectral values and the associated spectral vectors of x, given by

λ_{i} = x_{1} + (−1)^{i}‖x_{2}‖,

u^{(i)} = (1/2)(1, (−1)^{i} x_{2}/‖x_{2}‖) if x_{2} ≠ 0;  u^{(i)} = (1/2)(1, (−1)^{i} w_{2}) if x_{2} = 0,

for i = 1, 2, with w_{2} being any vector in R^{n−1} satisfying ‖w_{2}‖ = 1. If x_{2} ≠ 0, the factorization is unique.
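The spectral factorization (6) is straightforward to compute; the sketch below (ours, not from the paper) builds λ_{i} and u^{(i)} for a vector in R³ and verifies the reconstruction x = λ_{1}u^{(1)} + λ_{2}u^{(2)}.

```python
# Sketch of the spectral factorization (6) of x = (x1, x2) w.r.t. K^n.
# The test vector is arbitrary illustration data.
import numpy as np

def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    if nrm > 0:
        w = x2 / nrm
    else:
        w = np.zeros_like(x2)
        w[0] = 1.0                       # any unit vector works when x2 = 0
    lam = [x1 - nrm, x1 + nrm]           # lambda_i = x1 + (-1)^i ||x2||
    u = [0.5 * np.concatenate(([1.0], -w)),
         0.5 * np.concatenate(([1.0], w))]
    return lam, u

x = np.array([1.0, 2.0, -2.0])
lam, u = spectral(x)
assert np.allclose(lam[0] * u[0] + lam[1] * u[1], x)
```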

Given a real-valued function g : R → R, we can define a vector-valued SOC-function g^{soc} : R^{n} → R^{n} by

g^{soc}(x) := g(λ_{1})u^{(1)} + g(λ_{2})u^{(2)}.

If g is defined on a subset of R, then g^{soc} is defined on the corresponding subset of R^{n}. The definition of g^{soc} is unambiguous whether x_{2} ≠ 0 or x_{2} = 0. In this paper, we will often use the vector-valued functions corresponding to t^{p} (t ∈ R) and √t (t ≥ 0), respectively, which are expressed as

x^{p} := (λ_{1}(x))^{p}u^{(1)} + (λ_{2}(x))^{p}u^{(2)}, ∀x ∈ R^{n},

√x := √(λ_{1}(x)) u^{(1)} + √(λ_{2}(x)) u^{(2)}, ∀x ∈ K^{n}.

We will see that the above two vector-valued functions play a role in showing that φ^{p}_{D−FB} given as in (5) is well defined in the SOC setting for any x, y ∈ R^{n}. Note that the other way to define x^{p} and √x is through the Jordan product. In other words, x^{p} represents x ◦ x ◦ · · · ◦ x (p times), and √x ∈ K^{n} satisfies √x ◦ √x = x.

Lemma 2.1. Suppose that p = 2k + 1, where k = 1, 2, 3, . . . . Then, for any u, v ∈ R, we have u^{p} = v^{p} if and only if u = v.

Proof. The result is straightforward and can be found in [1, Theorem 1.12]. Here, we provide an alternative proof.

"⇐" It is trivial.

"⇒" For v = 0, since u^{p} = v^{p}, we have u = v = 0. For v ≠ 0, since f(t) = t^{p} − 1 is a strictly increasing function on R, we have (u/v)^{p} − 1 = 0 if and only if u/v = 1, which implies u = v. Thus, the proof is complete. 2

Lemma 2.2. For p = 2m + 1 with m = 1, 2, 3, . . . and x = (x_{1}, x_{2}), y = (y_{1}, y_{2}) ∈ R × R^{n−1}, suppose that x^{p} and y^{p} represent x ◦ x ◦ · · · ◦ x and y ◦ y ◦ · · · ◦ y (p times), respectively. Then, x^{p} = y^{p} if and only if x = y.

Proof. "⇐" This direction is trivial.

"⇒" Suppose that x^{p} = y^{p}. By the spectral decomposition (6), we write

x = λ_{1}(x)u_{x}^{(1)} + λ_{2}(x)u_{x}^{(2)},
y = λ_{1}(y)u_{y}^{(1)} + λ_{2}(y)u_{y}^{(2)}.

Then, x^{p} = (λ_{1}(x))^{p}u_{x}^{(1)} + (λ_{2}(x))^{p}u_{x}^{(2)} and y^{p} = (λ_{1}(y))^{p}u_{y}^{(1)} + (λ_{2}(y))^{p}u_{y}^{(2)}. Since x^{p} = y^{p} and the spectral values are unique, we obtain (λ_{1}(x))^{p} = (λ_{1}(y))^{p} and (λ_{2}(x))^{p} = (λ_{2}(y))^{p}. By Lemma 2.1, this implies λ_{1}(x) = λ_{1}(y) and λ_{2}(x) = λ_{2}(y). Moreover, since {u_{x}^{(1)}, u_{x}^{(2)}} and {u_{y}^{(1)}, u_{y}^{(2)}} are Jordan frames, we have u_{x}^{(1)} + u_{x}^{(2)} = u_{y}^{(1)} + u_{y}^{(2)} = e, where e is the identity element. From x^{p} = y^{p} and u_{x}^{(1)} + u_{x}^{(2)} = u_{y}^{(1)} + u_{y}^{(2)}, we get

[(λ_{1}(x))^{p} − (λ_{2}(x))^{p}] (u_{x}^{(1)} − u_{y}^{(1)}) = 0.

If (λ_{1}(x))^{p} = (λ_{2}(x))^{p}, then λ_{1}(x) = λ_{2}(x) and λ_{1}(y) = λ_{2}(y), that is, x = λ_{1}(x)e = y. Otherwise, if (λ_{1}(x))^{p} ≠ (λ_{2}(x))^{p}, we must have u_{x}^{(1)} = u_{y}^{(1)}, which implies u_{x}^{(2)} = u_{y}^{(2)}, and hence x = y. 2

### 3 New generalized Fischer-Burmeister function for NCP

In this section, we show that the function φ^{p}_{D−FB} defined as in (4) is an NCP-function and present its twice differentiability. At the same time, we also depict the surfaces of φ^{p}_{D−FB} with various values of p to gain more insight into this new family of NCP-functions.

Proposition 3.1. Let φ^{p}_{D−FB} be defined as in (4), where p is a positive odd integer. Then, φ^{p}_{D−FB} is an NCP-function.

Proof. Suppose φ^{p}_{D−FB}(a, b) = 0, which says (√(a^{2} + b^{2}))^{p} = (a + b)^{p}. Using the fact that p is a positive odd integer and applying Lemma 2.1, we have

(√(a^{2} + b^{2}))^{p} = (a + b)^{p} ⇐⇒ √(a^{2} + b^{2}) = a + b.

It is well known that √(a^{2} + b^{2}) = a + b is equivalent to a, b ≥ 0, ab = 0, because φ_{FB} is an NCP-function. This shows that φ^{p}_{D−FB}(a, b) = 0 implies a, b ≥ 0, ab = 0. The converse direction is trivial. Thus, we have proved that φ^{p}_{D−FB} is an NCP-function. 2
Remark 3.1: We elaborate more on the new NCP-function φ^{p}_{D−FB}.

(a) For p an even integer, φ^{p}_{D−FB} is not an NCP-function. A counterexample is given below: for p = 2,

φ^{2}_{D−FB}(−5, 0) = (√(−5)^{2})^{2} − (−5 + 0)^{2} = 25 − 25 = 0,

although (−5, 0) is not a complementary pair.

(b) The surface of φ^{p}_{D−FB} is symmetric, i.e., φ^{p}_{D−FB}(a, b) = φ^{p}_{D−FB}(b, a).

(c) The function φ^{p}_{D−FB}(a, b) is positively homogeneous of degree p, i.e., φ^{p}_{D−FB}(α(a, b)) = α^{p}φ^{p}_{D−FB}(a, b) for α ≥ 0.

(d) The function φ^{p}_{D−FB} is neither convex nor concave. To see this, take p = 3 and note that

5^{3} − 7^{3} = φ^{3}_{D−FB}(3, 4) > (1/2)φ^{3}_{D−FB}(0, 0) + (1/2)φ^{3}_{D−FB}(6, 8) = (1/2) × 0 + (1/2)(10^{3} − 14^{3}) = 4(5^{3} − 7^{3})

and

0 = φ^{3}_{D−FB}(0, 0) < (1/2)φ^{3}_{D−FB}(−2, 0) + (1/2)φ^{3}_{D−FB}(2, 0) = (1/2) × 16 + (1/2) × 0 = 8.

Proposition 3.2. Let φ^{p}_{D−FB} be defined as in (4), where p is a positive odd integer. Then, the following hold.

(a) For p > 1, φ^{p}_{D−FB} is continuously differentiable with

∇φ^{p}_{D−FB}(a, b) = p ( a(√(a^{2} + b^{2}))^{p−2} − (a + b)^{p−1}
                          b(√(a^{2} + b^{2}))^{p−2} − (a + b)^{p−1} ).

(b) For p > 3, φ^{p}_{D−FB} is twice continuously differentiable with

∇^{2}φ^{p}_{D−FB}(a, b) = ( ∂^{2}φ^{p}_{D−FB}/∂a^{2}    ∂^{2}φ^{p}_{D−FB}/∂a∂b
                            ∂^{2}φ^{p}_{D−FB}/∂b∂a    ∂^{2}φ^{p}_{D−FB}/∂b^{2} ),

where

∂^{2}φ^{p}_{D−FB}/∂a^{2} = p{[(p − 1)a^{2} + b^{2}](√(a^{2} + b^{2}))^{p−4} − (p − 1)(a + b)^{p−2}},

∂^{2}φ^{p}_{D−FB}/∂a∂b = p{(p − 2)ab(√(a^{2} + b^{2}))^{p−4} − (p − 1)(a + b)^{p−2}} = ∂^{2}φ^{p}_{D−FB}/∂b∂a,

∂^{2}φ^{p}_{D−FB}/∂b^{2} = p{[a^{2} + (p − 1)b^{2}](√(a^{2} + b^{2}))^{p−4} − (p − 1)(a + b)^{p−2}}.

Proof. The verification of differentiability and the computation of the first and second derivatives are straightforward, so we omit them. 2
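The gradient formula in Proposition 3.2(a) can be sanity-checked against central finite differences; the sketch below (ours, not from the paper) does this for p = 5 at an arbitrary test point.

```python
# Finite-difference check of the gradient formula in Proposition 3.2(a)
# for phi_DFB^p with p = 5.  The test point and step size are arbitrary.
import math

p = 5

def phi(a, b):
    return math.hypot(a, b) ** p - (a + b) ** p

def grad(a, b):
    r = math.hypot(a, b)
    return (p * (a * r ** (p - 2) - (a + b) ** (p - 1)),
            p * (b * r ** (p - 2) - (a + b) ** (p - 1)))

a, b, h = 1.3, -0.7, 1e-6
fd = ((phi(a + h, b) - phi(a - h, b)) / (2 * h),
      (phi(a, b + h) - phi(a, b - h)) / (2 * h))
g = grad(a, b)
assert all(abs(gi - fi) < 1e-4 for gi, fi in zip(g, fd))
```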

Next, we present some variants of φ^{p}_{D−FB}. Indeed, analogous to the functions in [41], the following variants of φ^{p}_{D−FB} can be verified to be NCP-functions:

φ_{1}(a, b) = φ^{p}_{D−FB}(a, b) − α(a)_{+}(b)_{+}, α > 0,

φ_{2}(a, b) = φ^{p}_{D−FB}(a, b) − α((a)_{+}(b)_{+})^{2}, α > 0,

φ_{3}(a, b) = [φ^{p}_{D−FB}(a, b)]^{2} + α((ab)_{+})^{4}, α > 0,

φ_{4}(a, b) = [φ^{p}_{D−FB}(a, b)]^{2} + α((ab)_{+})^{2}, α > 0.

In the above expressions, for any t ∈ R, we define t_{+} as max{0, t}.

Lemma 3.1. Let φ^{p}_{D−FB} be defined as in (4), where p is a positive odd integer. Then, the value of φ^{p}_{D−FB}(a, b) is negative only in the open first quadrant, i.e., φ^{p}_{D−FB}(a, b) < 0 if and only if a > 0, b > 0.

Proof. We know that f(t) = t^{p} is a strictly increasing function when p is odd. Using this fact yields

a > 0, b > 0
⇐⇒ a + b > 0 and ab > 0
⇐⇒ √(a^{2} + b^{2}) < a + b
⇐⇒ (√(a^{2} + b^{2}))^{p} < (a + b)^{p}
⇐⇒ φ^{p}_{D−FB}(a, b) < 0,

which proves the desired result. 2
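The sign pattern established in Lemma 3.1 is easy to confirm numerically; the grid check below (ours, with an arbitrary grid) verifies that φ^{p}_{D−FB} is negative exactly on the open first quadrant.

```python
# Grid check of Lemma 3.1: phi_DFB^p(a, b) < 0 exactly when a > 0 and
# b > 0 (p odd).  The grid spacing is an arbitrary illustration choice.
import math

def phi(a, b, p=3):
    return math.hypot(a, b) ** p - (a + b) ** p

for ai in range(-20, 21):
    for bi in range(-20, 21):
        a, b = ai / 10.0, bi / 10.0
        assert (phi(a, b) < 0) == (a > 0 and b > 0)
```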

Proposition 3.3. All the above functions φ_{i} for i ∈ {1, 2, 3, 4} are NCP-functions.

Proof. Applying Lemma 3.1, the arguments are similar to those in [16, Proposition 2.4], which are omitted here. 2

In fact, in light of Lemma 2.1, we can construct more variants of φ^{p}_{D−FB} which are also new NCP-functions. More specifically, let k and m be positive integers, f : R × R → R, and g : R × R → R with g(a, b) ≠ 0 for all a, b ∈ R. Then the following functions are new variants of φ^{p}_{D−FB}:

φ_{5}(a, b) = [g(a, b)(√(a^{2} + b^{2}) + f(a, b))]^{(2k+1)/(2m+1)} − [g(a, b)(a + b + f(a, b))]^{(2k+1)/(2m+1)},

φ_{6}(a, b) = [g(a, b)(√(a^{2} + b^{2}) − a − b)]^{k/m},

φ_{7}(a, b) = [g(a, b)(√(a^{2} + b^{2}) − a + f(a, b))]^{(2k+1)/(2m+1)} − [g(a, b)(b + f(a, b))]^{(2k+1)/(2m+1)},

φ_{8}(a, b) = [g(a, b)(√(a^{2} + b^{2}) − b + f(a, b))]^{(2k+1)/(2m+1)} − [g(a, b)(a + f(a, b))]^{(2k+1)/(2m+1)},

φ_{9}(a, b) = e^{φ_{i}(a,b)} − 1, where i = 5, 6, 7, 8,

φ_{10}(a, b) = ln(|φ_{i}(a, b)| + 1), where i = 5, 6, 7, 8.

Proposition 3.4. All the above functions φ_{i} for i ∈ {5, 6, 7, 8, 9, 10} are NCP-functions.

Proof. This is an immediate consequence of Propositions 3.1-3.3. By Lemma 2.1 and g(a, b) ≠ 0 for all a, b ∈ R, we have

φ_{5}(a, b) = 0
⇐⇒ [g(a, b)(√(a^{2} + b^{2}) + f(a, b))]^{(2k+1)/(2m+1)} = [g(a, b)(a + b + f(a, b))]^{(2k+1)/(2m+1)}
⇐⇒ {[g(a, b)(√(a^{2} + b^{2}) + f(a, b))]^{(2k+1)/(2m+1)}}^{2m+1} = {[g(a, b)(a + b + f(a, b))]^{(2k+1)/(2m+1)}}^{2m+1}
⇐⇒ [g(a, b)(√(a^{2} + b^{2}) + f(a, b))]^{2k+1} = [g(a, b)(a + b + f(a, b))]^{2k+1}
⇐⇒ g(a, b)(√(a^{2} + b^{2}) + f(a, b)) = g(a, b)(a + b + f(a, b))
⇐⇒ √(a^{2} + b^{2}) + f(a, b) = a + b + f(a, b)
⇐⇒ √(a^{2} + b^{2}) = a + b.

The arguments for the other functions φ_{i}, i ∈ {6, 7, 8, 9, 10}, are similar to that for φ_{5}. 2

According to the above results, we immediately obtain the following theorem.

Theorem 3.1. Suppose that φ(a, b) = ϕ_{1}(a, b) − ϕ_{2}(a, b) is an NCP-function on R × R and that k and m are positive integers. Then [φ(a, b)]^{k/m} and [ϕ_{1}(a, b)]^{(2k+1)/(2m+1)} − [ϕ_{2}(a, b)]^{(2k+1)/(2m+1)} are NCP-functions.

Proof. Using the fact that k and m are positive integers and applying Lemma 2.1, we have

[φ(a, b)]^{k/m} = 0
⇐⇒ {[φ(a, b)]^{k/m}}^{m} = 0
⇐⇒ [φ(a, b)]^{k} = 0
⇐⇒ φ(a, b) = 0.

Similarly, we have

[ϕ_{1}(a, b)]^{(2k+1)/(2m+1)} − [ϕ_{2}(a, b)]^{(2k+1)/(2m+1)} = 0
⇐⇒ [ϕ_{1}(a, b)]^{(2k+1)/(2m+1)} = [ϕ_{2}(a, b)]^{(2k+1)/(2m+1)}
⇐⇒ {[ϕ_{1}(a, b)]^{(2k+1)/(2m+1)}}^{2m+1} = {[ϕ_{2}(a, b)]^{(2k+1)/(2m+1)}}^{2m+1}
⇐⇒ [ϕ_{1}(a, b)]^{2k+1} = [ϕ_{2}(a, b)]^{2k+1}
⇐⇒ ϕ_{1}(a, b) = ϕ_{2}(a, b)
⇐⇒ φ(a, b) = 0.

These arguments, together with the assumption that φ(a, b) is an NCP-function, yield the desired result. 2

Remark 3.2: We elaborate more on Theorem 3.1.

(a) Based on existing well-known NCP-functions, we can construct new NCP-functions in light of Theorem 3.1. This is a novel way to construct new NCP-functions.

(b) When k is a positive integer, [φ(a, b)]^{k} is an NCP-function. This means that perturbing the parameter k gives new NCP-functions. In addition, if φ(a, b) is an NCP-function, then for any positive integer m, [φ(a, b)]^{k/m} is also an NCP-function. Thus, we can determine suitable NCP-functions among these functions according to their numerical performance.

To close this section, we depict the surfaces of φ^{p}_{D−FB} with different values of p so that we may gain deeper insight into this new family of NCP-functions. Figure 1 shows the surface of φ_{D−FB}(a, b), from which we see that it is convex. Figure 2 presents the surface of φ^{3}_{D−FB}(a, b), from which we see that it is neither convex nor concave, as mentioned in Remark 3.1(d). In addition, the value of φ^{p}_{D−FB}(a, b) is negative only when a > 0 and b > 0, as mentioned in Lemma 3.1. The surfaces of φ^{p}_{D−FB} with various values of p are shown in Figure 3.

Figure 1: The surface of z = φ_{D−FB}(a, b) for (a, b) ∈ [−10, 10] × [−10, 10]

### 4 Extending φ^{p}_{D−FB} and φ^{p}_{NR} to SOCCP

In this section, we extend the new functions φ^{p}_{D−FB} and φ^{p}_{NR} to the SOC setting. More specifically, we show that φ^{p}_{D−FB} and φ^{p}_{NR} are complementarity functions associated with K^{n}. In addition, we present computable formulas for their Jacobians.

Figure 2: The surface of z = φ^{3}_{D−FB}(a, b) for (a, b) ∈ [−10, 10] × [−10, 10]

Proposition 4.1. Let φ^{p}_{D−FB} be defined by (5). Then, φ^{p}_{D−FB} is a complementarity function associated with K^{n}, i.e., it satisfies

φ^{p}_{D−FB}(x, y) = 0 ⇐⇒ x ∈ K^{n}, y ∈ K^{n}, ⟨x, y⟩ = 0.

Proof. Suppose φ^{p}_{D−FB}(x, y) = 0; then (√(x^{2} + y^{2}))^{p} = (x + y)^{p}. Using the fact that p is a positive odd integer and applying Lemma 2.2 yield

(√(x^{2} + y^{2}))^{p} = (x + y)^{p} ⇐⇒ √(x^{2} + y^{2}) = x + y.

It is known that φ_{FB}(x, y) := √(x^{2} + y^{2}) − (x + y) is a complementarity function associated with K^{n}. This indicates that φ^{p}_{D−FB} is a complementarity function associated with K^{n}. 2
With a similar technique, we can prove that φ^{p}_{NR} can be extended to a complementarity function for the SOCCP.

Proposition 4.2. The function φ^{p}_{NR} : R^{n} × R^{n} → R^{n} defined by

φ^{p}_{NR}(x, y) = x^{p} − [(x − y)_{+}]^{p} (7)

is a complementarity function associated with K^{n}, where p > 1 is a positive odd integer and (·)_{+} denotes the projection onto K^{n}.

Figure 3: The surfaces of z = φ^{p}_{D−FB}(a, b) with different values of p: (a) p = 3, (b) p = 5, (c) p = 7, (d) p = 9
Proof. From Lemma 2.2, we see that φ^{p}_{NR}(x, y) = 0 if and only if x = (x − y)_{+}. On the other hand, it is known that φ_{NR}(x, y) = x − (x − y)_{+} is a complementarity function for the SOCCP, i.e., x − (x − y)_{+} = 0 if and only if x ∈ K^{n}, y ∈ K^{n}, and ⟨x, y⟩ = 0. Hence, φ^{p}_{NR} is a complementarity function associated with K^{n}. 2
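The SOC projection (x)_{+} used in (7) acts on the spectral values: (x)_{+} = (λ_{1})_{+}u^{(1)} + (λ_{2})_{+}u^{(2)}. The sketch below (ours, for K³) evaluates φ^{p}_{NR} at a complementary pair to illustrate Proposition 4.2; the test vectors are made-up data.

```python
# Sketch: projection onto K^n via the spectral values, used to evaluate
# the SOC natural-residual function phi_NR^p(x, y) = x^p - [(x - y)_+]^p
# from (7).  The test pair (x, y) is made-up illustration data.
import numpy as np

def soc_fun(x, g):
    # apply g to the spectral values of x, keep the spectral vectors
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    w = x2 / nrm if nrm > 0 else np.eye(len(x2))[0]
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return g(x1 - nrm) * u1 + g(x1 + nrm) * u2

def phi_nr_p(x, y, p=3):
    proj = soc_fun(x - y, lambda t: max(t, 0.0))   # (x - y)_+
    return soc_fun(x, lambda t: t ** p) - soc_fun(proj, lambda t: t ** p)

# complementary pair: x on the boundary of K^3, y in K^3 with <x, y> = 0
x = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, -1.0, 0.0])
assert np.isclose(x @ y, 0.0)
assert np.allclose(phi_nr_p(x, y), 0.0)
```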

In order to compute the Jacobian of φ^{p}_{D−FB}, we introduce some notations for convenience. For any x = (x_{1}, x_{2}) ∈ R × R^{n−1} and y = (y_{1}, y_{2}) ∈ R × R^{n−1}, we define

w(x, y) := x^{2} + y^{2} = (w_{1}(x, y), w_{2}(x, y)) ∈ R × R^{n−1} and v(x, y) := x + y.

Then, it is clear that w(x, y) ∈ K^{n} and λ_{i}(w) ≥ 0 for i = 1, 2.

Proposition 4.3. Let φ^{p}_{D−FB} be defined as in (5), and let g^{soc}(x) = (√|x|)^{p} and h^{soc}(x) = x^{p} be the vector-valued functions corresponding to g(t) = |t|^{p/2} and h(t) = t^{p} for t ∈ R, respectively. Then, φ^{p}_{D−FB} is continuously differentiable at any (x, y) ∈ R^{n} × R^{n}. Moreover, we have

∇_{x}φ^{p}_{D−FB}(x, y) = 2L_{x}∇g^{soc}(w) − ∇h^{soc}(v),
∇_{y}φ^{p}_{D−FB}(x, y) = 2L_{y}∇g^{soc}(w) − ∇h^{soc}(v),

where w := w(x, y) = x^{2} + y^{2}, v := v(x, y) = x + y, sign(·) is the sign function, and

∇g^{soc}(w) = (p/2)|w_{1}|^{p/2 − 1} sign(w_{1}) I if w_{2} = 0;

∇g^{soc}(w) = ( b_{1}(w)          c_{1}(w)w̄_{2}^{T}
                c_{1}(w)w̄_{2}    a_{1}(w)I + (b_{1}(w) − a_{1}(w))w̄_{2}w̄_{2}^{T} ) if w_{2} ≠ 0,

with

w̄_{2} = w_{2}/‖w_{2}‖,
a_{1}(w) = (|λ_{2}(w)|^{p/2} − |λ_{1}(w)|^{p/2}) / (λ_{2}(w) − λ_{1}(w)),
b_{1}(w) = (p/4)[|λ_{2}(w)|^{p/2 − 1} + |λ_{1}(w)|^{p/2 − 1}],
c_{1}(w) = (p/4)[|λ_{2}(w)|^{p/2 − 1} − |λ_{1}(w)|^{p/2 − 1}],

and

∇h^{soc}(v) = pv_{1}^{p−1}I if v_{2} = 0;

∇h^{soc}(v) = ( b_{2}(v)          c_{2}(v)v̄_{2}^{T}
                c_{2}(v)v̄_{2}    a_{2}(v)I + (b_{2}(v) − a_{2}(v))v̄_{2}v̄_{2}^{T} ) if v_{2} ≠ 0, (8)

with

v̄_{2} = v_{2}/‖v_{2}‖, (9)
a_{2}(v) = ((λ_{2}(v))^{p} − (λ_{1}(v))^{p}) / (λ_{2}(v) − λ_{1}(v)), (10)
b_{2}(v) = (p/2)[(λ_{2}(v))^{p−1} + (λ_{1}(v))^{p−1}], (11)
c_{2}(v) = (p/2)[(λ_{2}(v))^{p−1} − (λ_{1}(v))^{p−1}]. (12)
Proof. From the definition of φ^{p}_{D−FB}, it is clear that for any (x, y) ∈ R^{n} × R^{n},

φ^{p}_{D−FB}(x, y) = (√(x^{2} + y^{2}))^{p} − (x + y)^{p}
= (√|x^{2} + y^{2}|)^{p} − (x + y)^{p}
= [|λ_{1}(w)|^{p/2}u^{(1)}(w) + |λ_{2}(w)|^{p/2}u^{(2)}(w)] − [(λ_{1}(v))^{p}u^{(1)}(v) + (λ_{2}(v))^{p}u^{(2)}(v)]
= g^{soc}(w) − h^{soc}(v). (13)

For p ≥ 3, since both |t|^{p/2} and t^{p} are continuously differentiable on R, by [13, Proposition 5] and [25, Proposition 5.2], the functions g^{soc} and h^{soc} are continuously differentiable on R^{n}. Moreover, since w(x, y) = x^{2} + y^{2} is continuously differentiable on R^{n} × R^{n}, we conclude that φ^{p}_{D−FB} is continuously differentiable. The formulas for ∇g^{soc}(w) and ∇h^{soc}(v) stated above follow from [13, Proposition 4] and [25, Proposition 5.2]. Differentiating both sides of (13) with respect to x and y, respectively, and applying the chain rule, it follows that

∇_{x}φ^{p}_{D−FB}(x, y) = 2L_{x}∇g^{soc}(w) − ∇h^{soc}(v),
∇_{y}φ^{p}_{D−FB}(x, y) = 2L_{y}∇g^{soc}(w) − ∇h^{soc}(v).

Hence, we complete the proof. 2

With Lemma 2.2 and Proposition 4.1, we can construct more complementarity functions for the SOCCP which are variants of φ^{p}_{D−FB}(x, y). More specifically, let k and m be positive integers and let f^{soc} : R^{n} × R^{n} → R^{n} be the vector-valued function corresponding to a given real-valued function f. Then the following functions are new variants of φ^{p}_{D−FB}(x, y):

φ̃_{1}(x, y) = [√(x^{2} + y^{2}) + f^{soc}(x, y)]^{(2k+1)/(2m+1)} − [x + y + f^{soc}(x, y)]^{(2k+1)/(2m+1)},

φ̃_{2}(x, y) = [√(x^{2} + y^{2}) − x − y]^{k/m},

φ̃_{3}(x, y) = [√(x^{2} + y^{2}) − x + f^{soc}(x, y)]^{(2k+1)/(2m+1)} − [y + f^{soc}(x, y)]^{(2k+1)/(2m+1)},

φ̃_{4}(x, y) = [√(x^{2} + y^{2}) − y + f^{soc}(x, y)]^{(2k+1)/(2m+1)} − [x + f^{soc}(x, y)]^{(2k+1)/(2m+1)}.

Proposition 4.4. All the above functions φ̃_{i} for i ∈ {1, 2, 3, 4} are complementarity functions associated with K^{n}.

Proof. The results follow from applying Lemma 2.2 and Proposition 4.1. 2

In general, for complementarity functions associated with K^{n}, we have the following result parallel to Theorem 3.1.

Theorem 4.1. Suppose that φ(x, y) = ϕ_{1}(x, y) − ϕ_{2}(x, y) is a complementarity function associated with K^{n} on R^{n} × R^{n}, and that k, m are positive integers. Then [φ(x, y)]^{k/m} and [ϕ_{1}(x, y)]^{(2k+1)/(2m+1)} − [ϕ_{2}(x, y)]^{(2k+1)/(2m+1)} are complementarity functions associated with K^{n}.

Proof. Since k and m are positive integers, using Lemma 2.2, we have

[φ(x, y)]^{k/m} = 0
⇐⇒ {[φ(x, y)]^{k/m}}^{m} = 0
⇐⇒ [φ(x, y)]^{k} = 0
⇐⇒ φ(x, y) = 0.

Similarly, we have

[ϕ_{1}(x, y)]^{(2k+1)/(2m+1)} − [ϕ_{2}(x, y)]^{(2k+1)/(2m+1)} = 0
⇐⇒ [ϕ_{1}(x, y)]^{(2k+1)/(2m+1)} = [ϕ_{2}(x, y)]^{(2k+1)/(2m+1)}
⇐⇒ {[ϕ_{1}(x, y)]^{(2k+1)/(2m+1)}}^{2m+1} = {[ϕ_{2}(x, y)]^{(2k+1)/(2m+1)}}^{2m+1}
⇐⇒ [ϕ_{1}(x, y)]^{2k+1} = [ϕ_{2}(x, y)]^{2k+1}
⇐⇒ ϕ_{1}(x, y) = ϕ_{2}(x, y)
⇐⇒ φ(x, y) = 0.

From the above arguments and the assumption, the proof is complete. 2

Remark 4.1: We elaborate more on Theorem 4.1.

(a) Based on existing complementarity functions, we can construct new complementarity functions associated with K^{n} in light of Theorem 4.1.

(b) When k is a positive odd integer, [φ(x, y)]^{k} is a complementarity function associated with K^{n}. This means that by perturbing the odd integer parameter k, we obtain new complementarity functions associated with K^{n}. In addition, if φ(x, y) is a complementarity function, then for any positive integer m, [φ(x, y)]^{k/m} is also a complementarity function. We can determine nice complementarity functions associated with K^{n} among these functions by their numerical performance.

Finally, we establish the formula for the Jacobian of φ^{p}_{NR} and the smoothness of φ^{p}_{NR}. To this aim, we need the following technical lemma.

Lemma 4.1. Let p > 1. Then, the real-valued function f(t) = (t_{+})^{p} is continuously differentiable with f′(t) = p(t_{+})^{p−1}, where t_{+} = max{0, t}.

Proof. By the definition of t_{+}, we have

f(t) = (t_{+})^{p} = t^{p} if t ≥ 0; 0 if t < 0,

which implies

f′(t) = pt^{p−1} if t ≥ 0; 0 if t < 0.

Then, it is easy to see that f′(t) = p(t_{+})^{p−1} is continuous for p > 1. 2

Proposition 4.5. Let φ^{p}_{NR} be defined as in (7), and let h^{soc}(x) = x^{p} and l^{soc}(x) = (x_{+})^{p} be the
vector-valued functions corresponding to the real-valued functions h(t) = t^{p} and l(t) =
(t_{+})^{p}, respectively. Then, φ^{p}_{NR} is continuously differentiable at any (x, y) ∈ R^{n} × R^{n}, and
its Jacobian is given by

∇_{x}φ^{p}_{NR}(x, y) = ∇h^{soc}(x) − ∇l^{soc}(x − y),
∇_{y}φ^{p}_{NR}(x, y) = ∇l^{soc}(x − y),

where ∇h^{soc} satisfies (8)-(12) and

∇l^{soc}(u) = p((u_{1})_{+})^{p−1} I                                               if u_{2} = 0;

∇l^{soc}(u) = [ b_{3}(u)          c_{3}(u)ū_{2}^{T}
              [ c_{3}(u)ū_{2}    a_{3}(u)I + (b_{3}(u) − a_{3}(u)) ū_{2}ū_{2}^{T} ]    if u_{2} ≠ 0,

with

ū_{2} = u_{2}/‖u_{2}‖,
a_{3}(u) = [(λ_{2}(u)_{+})^{p} − (λ_{1}(u)_{+})^{p}] / [λ_{2}(u) − λ_{1}(u)],
b_{3}(u) = (p/2) [(λ_{2}(u)_{+})^{p−1} + (λ_{1}(u)_{+})^{p−1}],
c_{3}(u) = (p/2) [(λ_{2}(u)_{+})^{p−1} − (λ_{1}(u)_{+})^{p−1}].

Proof. In light of [13, Proposition 5] and [25, Proposition 5.2], the results follow from applying Lemma 4.1 and using the chain rule for differentiation. □
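As a sanity check on the Jacobian formula in Proposition 4.5, the following Python sketch (our own illustration; the paper's code is in Matlab) evaluates l^{soc}(u) = (u_{+})^{p} through the standard spectral decomposition of u with respect to K^{n}, assuming the convention λ_{1,2}(u) = u_{1} ∓ ‖u_{2}‖, and compares ∇l^{soc}(u) against central finite differences.

```python
import numpy as np

def lsoc(u, p):
    # l^soc(u) = (u_+)^p via the spectral decomposition of u w.r.t. K^n:
    # lambda_1 = u_1 - ||u_2||, lambda_2 = u_1 + ||u_2||
    u1, u2 = u[0], u[1:]
    nu2 = np.linalg.norm(u2)
    # when u_2 = 0, any unit vector may serve as ubar (result is the same)
    ubar = u2 / nu2 if nu2 > 0 else np.r_[1.0, np.zeros(u2.size - 1)]
    lam1, lam2 = u1 - nu2, u1 + nu2
    v1 = 0.5 * np.r_[1.0, -ubar]
    v2 = 0.5 * np.r_[1.0, ubar]
    return max(lam1, 0.0) ** p * v1 + max(lam2, 0.0) ** p * v2

def grad_lsoc(u, p):
    # Jacobian of l^soc as stated in Proposition 4.5
    u1, u2 = u[0], u[1:]
    nu2 = np.linalg.norm(u2)
    n = u.size
    if nu2 == 0.0:
        return p * max(u1, 0.0) ** (p - 1) * np.eye(n)
    ubar = u2 / nu2
    lam1, lam2 = u1 - nu2, u1 + nu2
    l1, l2 = max(lam1, 0.0), max(lam2, 0.0)
    a3 = (l2 ** p - l1 ** p) / (lam2 - lam1)
    b3 = 0.5 * p * (l2 ** (p - 1) + l1 ** (p - 1))
    c3 = 0.5 * p * (l2 ** (p - 1) - l1 ** (p - 1))
    J = np.empty((n, n))
    J[0, 0], J[0, 1:] = b3, c3 * ubar
    J[1:, 0] = c3 * ubar
    J[1:, 1:] = a3 * np.eye(n - 1) + (b3 - a3) * np.outer(ubar, ubar)
    return J

# central finite-difference check at a random point
rng = np.random.default_rng(0)
p, u = 2.5, rng.standard_normal(4)
J, h = grad_lsoc(u, p), 1e-6
for j in range(u.size):
    e = np.zeros(u.size)
    e[j] = h
    fd = (lsoc(u + e, p) - lsoc(u - e, p)) / (2 * h)
    assert np.allclose(J[:, j], fd, atol=1e-4)
```

Observe that ∇l^{soc}(u) is symmetric, as the formula requires, and that for n = 2 (where ū_{2} = ±1) the a_{3} term drops out.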

### 5 Numerical experiments

As mentioned, the Newton method may not be appropriate for numerical implementation, due to the possible singularity of the Jacobian at a degenerate solution. In view of this, in this section we employ the derivative-free descent method studied in [37] to test the numerical performance for various values of p. The method of [37] mainly targets the SOCCP (second-order cone complementarity problem). Hence, we consider the following SOCCP:

z ∈ K,  Mz + b ∈ K,  z^{T}(Mz + b) = 0,  where  K = K_{1} × · · · × K_{r}.

According to our results, the above SOCCP can be recast as an unconstrained minimization problem:

min_{ζ∈R^{n}} Ψ_{p}(ζ) = (1/2) ‖φ^{p}_{D−FB}(ζ, F(ζ))‖^{2},

where F(ζ) = Mζ + b.
All tests are done on a PC with an Intel Core i7-5600U CPU (2.6GHz) and 8GB RAM, and the codes are written in Matlab 2010b. The test instances are generated randomly.

In particular, we first generate random sparse square matrices N_{i} (i = 1, 2, . . . , r) with
density 0.01, whose nonzero elements are drawn from a normal distribution
with mean −1 and variance 4. Then, we create the positive semidefinite matrices M_{i}
(i = 1, 2, . . . , r) by setting M_{i} := N_{i}N_{i}^{T} and let M := diag(M_{1}, . . . , M_{r}). In addition, we
take the vector b := −Mw with w = (w_{1}, . . . , w_{r}) and w_{i} ∈ K_{i}. With these M and b, it is
not hard to verify that the corresponding SOCCP has at least one solution. To
construct SOCs of various types, we set n_{1} = n_{2} = · · · = n_{r}.
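The instance generation described above can be sketched in Python (the paper's code is in Matlab; the function `make_soccp` and the use of SciPy are our own, we assume r divides n, and a normal distribution with variance 4 has standard deviation 2).

```python
import numpy as np
from scipy import sparse

def make_soccp(n, r, seed=0):
    """Random SOCCP test instance as described in the text:
    M = diag(M_1, ..., M_r) with M_i = N_i N_i^T, N_i sparse with
    density 0.01 and nonzeros ~ N(-1, 4); b = -M w with w_i in K_i,
    so that w solves the SOCCP (w in K, Mw + b = 0 in K, w^T(Mw+b) = 0)."""
    rng = np.random.default_rng(seed)
    ni = n // r                          # n_1 = ... = n_r
    blocks, w = [], []
    for _ in range(r):
        N = sparse.random(ni, ni, density=0.01,
                          data_rvs=lambda s: rng.normal(-1.0, 2.0, s)).toarray()
        blocks.append(N @ N.T)           # positive semidefinite block
        wi = rng.standard_normal(ni)
        wi[0] = np.linalg.norm(wi[1:]) + 1.0  # push w_i into the interior of K_i
        w.append(wi)
    M = sparse.block_diag(blocks).toarray()
    w = np.concatenate(w)
    b = -M @ w
    return M, b, w

M, b, w = make_soccp(100, 10)
assert np.allclose(M @ w + b, 0.0)       # w is a solution of the SOCCP
```

Since Mw + b = 0 ∈ K and w ∈ K, the pair is trivially complementary, which is exactly why the construction guarantees solvability.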

We implement a test problem generated as above with n = 1000 and r = 100. The parameters in the algorithm are set as

β = 0.9, γ = 0.8, σ = 10^{−4}, and ε = 10^{−8}.

We start with the initial point

ζ^{0} = (ζ_{n_{1}}, · · · , ζ_{n_{r}})  where  ζ_{n_{i}} = (10, w_{i}/‖w_{i}‖)

with w_{i} ∈ R^{n_{i}−1} being generated randomly. The stopping criterion is Ψ_{p}(ζ^{k}) ≤ ε; the algorithm is declared to fail when the number of iterations exceeds 10^{5} or the step-length drops below 10^{−12}. Figure 4 depicts the detailed iteration process of the algorithm for different values of p.

The algorithm fails on this problem when p ≥ 5, mainly because the step-length eventually becomes too small. We also suspect that a larger p makes the evaluation of the complementarity function in the Jordan algebra more costly. In any case, this phenomenon indicates that the discrete-type complementarity functions work well only for small values of p.

The convergence curves in Figure 4 show that the method with a bigger p reduces Ψ_{p} faster
at the beginning, whereas the method with a smaller p reduces Ψ_{p} faster eventually.
Moreover, the bigger p is, the smaller the total number of iterations of the algorithm.

In order to check the numerical performance of the algorithm for different
values of p, we solve test problems of different dimensions. The numerical results are
summarized in Tables 1 and 2. "Ψ_{p}(ζ^{∗})" and "Gap" denote the merit function value and the
value of ζ^{T}F(ζ) at the final iteration, respectively. "NF", "Iter", and "Time" indicate
the number of function evaluations of Ψ_{p}, the number of iterations required to
satisfy the termination condition, and the CPU time in seconds for solving each problem,
respectively.

Table 1: Numerical results with different values of p (left block: p = 1; right block: p = 1.4)

| (n, r) | Ψ_p(ζ^∗) | NF | Iter | Gap | time | Ψ_p(ζ^∗) | NF | Iter | Gap | time |
|---|---|---|---|---|---|---|---|---|---|---|
| (100,10) | 9.8e-9 | 5350 | 4952 | 2.75e-4 | 9.3 | 1.0e-8 | 4401 | 1474 | 5.92e-5 | 3.5 |
| (200,20) | 9.4e-9 | 5064 | 4914 | 3.74e-5 | 16.5 | 1.0e-8 | 16179 | 5649 | 3.84e-5 | 25.9 |
| (300,30) | 1.0e-8 | 7445 | 5273 | 2.26e-4 | 30.3 | 9.9e-9 | 7000 | 1266 | 2.40e-5 | 11.5 |
| (400,40) | 9.8e-9 | 5342 | 5016 | 1.62e-4 | 50.0 | 9.9e-9 | 3747 | 857 | 4.31e-5 | 9.5 |
| (500,50) | 1.0e-8 | 23533 | 13749 | 6.81e-4 | 126.4 | 9.6e-9 | 29454 | 6257 | 3.39e-4 | 93.9 |
| (600,60) | 1.0e-8 | 18260 | 11119 | 16.1e-4 | 65.1 | 1.0e-8 | 24685 | 8320 | 8.69e-5 | 119.7 |
| (700,70) | 1.0e-8 | 8320 | 5690 | 6.16e-4 | 38.3 | 1.0e-8 | 13458 | 4493 | 1.79e-4 | 77.7 |
| (800,80) | 1.0e-8 | 29415 | 10149 | 4.43e-5 | 199.2 | 9.3e-9 | 2507 | 1838 | 1.54e-4 | 27.4 |
| (900,90) | 1.0e-8 | 14648 | 10888 | 1.46e-3 | 159.8 | 9.9e-9 | 5970 | 1621 | 8.77e-5 | 44.9 |
| (1000,100) | 1.0e-8 | 14590 | 9672 | 2.78e-4 | 238.3 | 1.0e-8 | 12337 | 2570 | 7.58e-5 | 92.0 |
| (1100,110) | 9.9e-9 | 5994 | 5406 | 4.64e-6 | 109.6 | 1.0e-8 | 13767 | 2948 | 3.51e-4 | 126.5 |
| (1200,120) | 9.8e-9 | 6100 | 5528 | 6.12e-5 | 121.7 | 9.9e-9 | 20990 | 5650 | 1.51e-5 | 211.4 |
| (1300,130) | 9.8e-9 | 4253 | 3612 | 2.42e-4 | 115.5 | 9.7e-9 | 777 | 316 | 5.78e-5 | 10.1 |
| (1400,140) | 1.0e-8 | 9827 | 7136 | 1.46e-4 | 307.5 | 1.0e-8 | 6357 | 2736 | 2.20e-4 | 70.6 |
| (1500,150) | 9.9e-9 | 4701 | 4211 | 3.04e-4 | 156.9 | 9.9e-9 | 7060 | 1823 | 6.56e-6 | 67.8 |
| (1600,160) | 9.9e-9 | 5744 | 3843 | 4.61e-4 | 172.8 | 1.0e-8 | 9434 | 2583 | 1.39e-4 | 82.9 |
| (1700,170) | 1.0e-8 | 11163 | 5581 | 2.74e-4 | 195.1 | 1.0e-8 | 12307 | 2740 | 9.87e-5 | 185.7 |
| (1800,180) | 1.0e-8 | 7449 | 5985 | 3.77e-4 | 204.5 | 1.0e-8 | 38524 | 9469 | 2.43e-4 | 439.8 |
| (1900,190) | 1.0e-8 | 4205 | 2102 | 7.19e-5 | 83.2 | 1.0e-8 | 7413 | 1636 | 3.40e-4 | 125.4 |
| (2000,200) | 9.9e-9 | 5189 | 4953 | 2.12e-4 | 212.9 | 9.15e-9 | 10230 | 480 | 2.32e-5 | 294.9 |

We also use the performance profiles introduced by Dolan and Moré [18] to compare
the performance of the algorithm with different p. The performance profiles are generated
by executing the solvers S on the test set P. Let n_{p,s} be the number of iterations (or the

Table 2: Numerical results with different values of p (left block: p = 2.6; right block: p = 3)

| (n, r) | Ψ_p(ζ^∗) | NF | Iter | Gap | time | Ψ_p(ζ^∗) | NF | Iter | Gap | time |
|---|---|---|---|---|---|---|---|---|---|---|
| (100,10) | 9.9e-9 | 28878 | 1866 | 2.40e-6 | 11.9 | 9.2e-9 | 11281 | 201 | 3.80e-7 | 14.7 |
| (200,20) | 1.0e-8 | 57844 | 3743 | 1.64e-6 | 47.9 | 9.5e-9 | 21221 | 422 | 1.15e-6 | 52.9 |
| (300,30) | 9.9e-9 | 14452 | 963 | 3.14e-6 | 17.3 | 9.2e-9 | 4383 | 89 | 5.97e-7 | 17.5 |
| (400,40) | 9.8e-9 | 20747 | 1417 | 2.31e-6 | 32.7 | 9.9e-9 | 7419 | 133 | 8.34e-7 | 34.0 |
| (500,50) | 9.8e-9 | 13929 | 1084 | 1.53e-6 | 30.7 | 8.4e-9 | 27229 | 474 | 1.04e-6 | 87.8 |
| (600,60) | 9.9e-9 | 28224 | 2032 | 2.48e-7 | 77.1 | 9.9e-9 | 48809 | 878 | 4.19e-7 | 193.8 |
| (700,70) | 9.9e-9 | 16739 | 1230 | 1.93e-5 | 52.8 | 7.9e-9 | 7069 | 140 | 6.16e-4 | 58.4 |
| (800,80) | 9.9e-9 | 72745 | 5342 | 7.69e-7 | 270.5 | 9.8e-9 | 27620 | 534 | 5.95e-7 | 260.1 |
| (900,90) | 9.5e-9 | 7574 | 522 | 6.09e-7 | 37.5 | 8.0e-9 | 10276 | 187 | 1.35e-7 | 129.6 |
| (1000,100) | 1.0e-8 | 145414 | 8664 | 4.92e-7 | 821.6 | 9.6e-9 | 17790 | 325 | 2.26e-7 | 258.2 |
| (1100,110) | 9.7e-9 | 16834 | 1465 | 3.76e-7 | 111.0 | 9.5e-9 | 31750 | 528 | 6.41e-7 | 507.2 |
| (1200,120) | 9.9e-9 | 45621 | 3346 | 1.82e-6 | 271.5 | 9.8e-9 | 20326 | 370 | 4.82e-7 | 437.4 |
| (1300,130) | 1.0e-8 | 25661 | 1739 | 3.21e-6 | 171.8 | 8.9e-9 | 10399 | 185 | 7.16e-7 | 115.5 |
| (1400,140) | 9.8e-9 | 57526 | 4116 | 2.09e-5 | 277.6 | 8.9e-9 | 12529 | 205 | 1.09e-6 | 348.4 |
| (1500,150) | 1.0e-8 | 355478 | 321117 | 1.50e-5 | 2343.0 | 4.7e-3 | 11824 | 217 | 1.54e-5 | 393.5 |
| (1600,160) | 9.3e-9 | 12995 | 5961 | 1.70e-6 | 98.5 | 9.9e-9 | 33843 | 550 | 5.43e-7 | 862.2 |
| (1700,170) | 1.0e-8 | 47367 | 3380 | 8.64e-7 | 441.0 | 1.0e-8 | 80519 | 5084 | 1.73e-7 | 742.8 |
| (1800,180) | 9.8e-9 | 7697 | 536 | 1.67e-6 | 53.0 | 7.4e-9 | 8472 | 154 | 4.15e-8 | 289.6 |
| (1900,190) | 1.0e-8 | 149019 | 10644 | 2.59e-6 | 1577.9 | 1.0e-8 | 16128 | 909 | 5.84e-7 | 161.5 |
| (2000,200) | 1.0e-8 | 27876 | 1991 | 2.64e-6 | 238.5 | 1.0e-8 | 34310 | 630 | 1.37e-7 | 862.2 |

computing time) required to solve problem p ∈ P by solver s ∈ S, and define the performance ratio as

r_{p,s} = n_{p,s} / min{n_{p,s} : 1 ≤ s ≤ n_{s}},

where n_{s} is the number of solvers. Whenever solver s does not solve problem p
successfully, we set r_{p,s} = r_{M}, where r_{M} is a very large preset positive constant. Then,
the performance profile of each solver s is defined by

ρ_{s}(χ) = (1/n_{p}) size{p ∈ P : log_{2}(r_{p,s}) ≤ χ},

where n_{p} is the number of test problems and size{p ∈ P : log_{2}(r_{p,s}) ≤ χ} is the number
of elements in the set {p ∈ P : log_{2}(r_{p,s}) ≤ χ}. Thus, ρ_{s}(χ) represents the probability
that the performance ratio r_{p,s} is within a factor 2^{χ} of the best ratio. In particular, ρ_{s}(0)
is the probability that solver s wins over the rest of the solvers. See [18] for more details
about performance profiles.
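The profile ρ_{s}(χ) is straightforward to compute. The following Python sketch (our own names; failures are marked by `inf` and mapped to the large constant r_{M}) implements the Dolan-Moré construction just described.

```python
import numpy as np

def performance_profile(cost, chi, r_M=1e6):
    """cost[p, s]: iterations (or time) of solver s on problem p; np.inf marks
    failure. Returns rho[s, j] = rho_s(chi[j])."""
    cost = np.asarray(cost, dtype=float)
    best = np.min(cost, axis=1, keepdims=True)        # best solver per problem
    ratio = np.where(np.isfinite(cost), cost / best, r_M)   # r_{p,s}
    n_p = cost.shape[0]
    # rho_s(chi) = |{p : log2(r_{p,s}) <= chi}| / n_p
    return (np.log2(ratio)[:, :, None] <= chi).sum(axis=0) / n_p

# toy data: 3 problems, 2 solvers; solver 0 fails on the last problem
cost = np.array([[100.0, 150.0],
                 [300.0, 200.0],
                 [np.inf, 50.0]])
rho = performance_profile(cost, np.array([0.0, 1.0]))
# solver 0 is best on 1 of 3 problems -> rho_0(0) = 1/3
```

Plotting ρ_{s} against χ for each value of p reproduces curves of the kind shown in Figure 5.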

Figure 5(a) shows that the algorithm with p = 1 and p = 1.4 performs better than with p = 2.6 and p = 3 in terms of function evaluations. Similarly, from Figures 5(b) and 5(c), we observe that the algorithm with p = 3 performs best in the number of iterations, while the algorithm with p = 1.4 is the best one in CPU time. This provides evidence