Computational Optimization and Applications, vol. 40, pp. 389-404, 2008

### A family of NCP functions and a descent method for the nonlinear complementarity problem

Jein-Shan Chen ^{1}
Department of Mathematics
National Taiwan Normal University

Taipei 11677, Taiwan

Shaohua Pan^{2}

School of Mathematical Sciences South China University of Technology

Guangzhou 510641, China

May 12, 2006 (revised, July 15, 2006) (second revised, September 7, 2006)

Abstract. In recent decades, much effort has been devoted to solving and analyzing the nonlinear complementarity problem (NCP) by reformulating it as an unconstrained minimization involving an NCP function. In this paper, we propose a family of new NCP functions, which includes the Fischer-Burmeister function as a special case, based on a p-norm with p being any fixed real number in the interval (1, +∞), and show several favorable properties of the proposed functions. In addition, we propose a descent algorithm that is derivative-free for solving the unconstrained minimization based on the merit functions induced by the proposed NCP functions. Numerical results for the test problems from MCPLIB indicate that the descent algorithm performs better as the parameter p decreases in (1, +∞). This suggests that the merit functions associated with p ∈ (1, 2), for example p = 1.5, are more effective in numerical computations than the Fischer-Burmeister merit function, which corresponds exactly to p = 2.

Key words. NCP, NCP function, merit function, descent method.

1 Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. E-mail: jschen@math.ntnu.edu.tw, FAX: 886-2-29332342. The author's work is partially supported by the National Science Council of Taiwan.

2E-mail: shhpan@scut.edu.cn

### 1 Introduction

The nonlinear complementarity problem (NCP) is to find a point x ∈ IR^n such that

x ≥ 0, F(x) ≥ 0, ⟨x, F(x)⟩ = 0, (1)

where ⟨·, ·⟩ is the Euclidean inner product and F = (F_1, F_2, · · · , F_n)^T is a map from IR^n to IR^n. We assume that F is continuously differentiable throughout this paper. The NCP has attracted much attention due to its various applications in operations research, economics, and engineering [6, 12, 18].

Many methods have been proposed for solving the NCP [9, 12, 18]. Among them, one of the most popular and powerful approaches, studied intensively in recent years, is to reformulate the NCP as a system of nonlinear equations [17, 24] or as an unconstrained minimization problem [5, 7, 10, 14, 15, 16, 23]. A function that yields an equivalent unconstrained minimization problem for the NCP is called a merit function; in other words, a merit function is a function whose global minima coincide with the solutions of the original NCP. For constructing a merit function, the class of so-called NCP-functions, defined below, plays an important role.

*Definition 1.1 A function φ : IR*^{2} *→ IR is called an NCP-function if it satisfies*

*φ(a, b) = 0* *⇐⇒* *a ≥ 0, b ≥ 0, ab = 0.* (2)

Over the past two decades, a variety of NCP-functions have been studied; see [9, 20] and the references therein. Among them, a popular NCP-function studied intensively in recent years is the well-known Fischer-Burmeister function [7, 8] defined by

φ(a, b) = √(a^2 + b^2) − (a + b). (3)
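As a quick numerical illustration (our own sketch, not part of the original paper), the Fischer-Burmeister function (3) can be evaluated directly and its defining property (2) checked on a few sample points; the function name `fb` is our own:

```python
import math

def fb(a: float, b: float) -> float:
    """Fischer-Burmeister NCP-function: sqrt(a^2 + b^2) - (a + b)."""
    return math.hypot(a, b) - (a + b)

# fb(a, b) = 0 exactly on complementary pairs (a >= 0, b >= 0, ab = 0).
assert abs(fb(0.0, 3.0)) < 1e-12
assert abs(fb(2.0, 0.0)) < 1e-12
assert fb(1.0, 1.0) != 0.0    # both positive -> not complementary
assert fb(-1.0, 2.0) != 0.0   # negative component -> not complementary
```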

With the above characterization of φ, the NCP is equivalent to a system of nonsmooth equations:

Φ(x) = ( φ(x_1, F_1(x)), · · · , φ(x_n, F_n(x)) )^T = 0. (4)

Then the function Ψ : IR^n → IR_+ defined by

Ψ(x) := (1/2) ‖Φ(x)‖^2 = (1/2) Σ_{i=1}^{n} φ(x_i, F_i(x))^2 (5)

is a merit function for the NCP, i.e., the NCP can be recast as an unconstrained minimization:

min_{x ∈ IR^n} Ψ(x). (6)
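To make (5)-(6) concrete, here is a small sketch (our own code; the linear map F below is a hypothetical two-dimensional example) showing that the merit function Ψ vanishes exactly at a solution of the NCP:

```python
import math

def fb(a, b):
    # Fischer-Burmeister NCP-function (3)
    return math.hypot(a, b) - (a + b)

def Psi(x, F):
    """Merit function (5): half the squared Euclidean norm of Phi(x)."""
    Fx = F(x)
    return 0.5 * sum(fb(xi, fi) ** 2 for xi, fi in zip(x, Fx))

# Hypothetical 2-d example: F(x) = (x1 + 1, x2 - 1).
F = lambda x: [x[0] + 1.0, x[1] - 1.0]
# x* = (0, 1) satisfies x >= 0, F(x) >= 0, <x, F(x)> = 0.
assert Psi([0.0, 1.0], F) < 1e-12
assert Psi([1.0, 1.0], F) > 0.0
```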

In this paper, we propose and investigate a family of new NCP functions based on the Fischer-Burmeister function (3). In particular, we define φ_p : IR^2 → IR by

φ_p(a, b) := ‖(a, b)‖_p − (a + b), (7)

where p is any fixed real number in the interval (1, +∞) and ‖(a, b)‖_p denotes the p-norm of (a, b), i.e., ‖(a, b)‖_p = (|a|^p + |b|^p)^{1/p}. In other words, in the function φ_p we replace the 2-norm of (a, b) in the Fischer-Burmeister function (3) by a more general p-norm with p ∈ (1, +∞). The function φ_p is still an NCP-function, as was noted in Tseng's paper [21]. Nonetheless, to our knowledge, there has been no further study of this family of NCP functions except for p = 2. We aim to explore the properties of φ_p in this paper. More specifically, we define ψ_p : IR^2 → IR_+ by

ψ_p(a, b) := (1/2) |φ_p(a, b)|^2. (8)
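The definitions (7) and (8) translate directly into code. The following sketch (our own, with our own function names) also checks that p = 2 recovers the Fischer-Burmeister function (3):

```python
import math

def phi_p(a: float, b: float, p: float) -> float:
    """phi_p(a, b) = ||(a, b)||_p - (a + b), definition (7)."""
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

def psi_p(a: float, b: float, p: float) -> float:
    """psi_p(a, b) = 0.5 * phi_p(a, b)^2, definition (8)."""
    return 0.5 * phi_p(a, b, p) ** 2

# p = 2 recovers the Fischer-Burmeister function (3).
for a, b in [(1.3, -0.7), (0.0, 2.0), (-1.0, -1.0)]:
    assert abs(phi_p(a, b, 2.0) - (math.hypot(a, b) - (a + b))) < 1e-12

# phi_p is an NCP-function for any p > 1 (Prop. 3.1(a)).
for p in (1.5, 2.0, 3.0):
    assert abs(phi_p(4.0, 0.0, p)) < 1e-12   # complementary pair
    assert phi_p(-2.0, 5.0, p) > 0.0         # infeasible pair
```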

For any given p > 1, the function ψ_p is a nonnegative NCP-function and smooth on IR^2, as will be seen in Section 3. Analogous to Φ, the function Φ_p : IR^n → IR^n given by

Φ_p(x) = ( φ_p(x_1, F_1(x)), · · · , φ_p(x_n, F_n(x)) )^T (9)

yields a family of merit functions Ψ_p : IR^n → IR for the NCP, where

Ψ_p(x) := (1/2) ‖Φ_p(x)‖^2 = (1/2) Σ_{i=1}^{n} φ_p(x_i, F_i(x))^2 = Σ_{i=1}^{n} ψ_p(x_i, F_i(x)). (10)

As will be seen later, Ψ_p for any given p > 1 is a continuously differentiable merit function for the NCP. Therefore, classical iterative methods such as Newton's method can be applied to the resulting unconstrained smooth minimization reformulation of the NCP, i.e.,

min_{x ∈ IR^n} Ψ_p(x). (11)

On the other hand, derivative-free methods [22], which do not require the computation of derivatives of F, have also attracted much attention. Derivative-free methods, which take advantage of particular properties of a merit function, are suitable for problems where the derivatives of F are unavailable or expensive to compute.

In this paper, we also study a derivative-free descent algorithm for solving the NCP based on the merit function Ψ_p. The algorithm is shown to be convergent for strongly monotone NCPs. In addition, we report numerical experiments with three specific merit functions Ψ_{1.5}, Ψ_2 and Ψ_3 on the test problems from MCPLIB. The numerical results show that the descent algorithm performs better as p decreases in the interval (1, +∞). This means that an NCP function more effective than the Fischer-Burmeister function, at least in numerical computations, can be obtained by choosing p ∈ (1, 2) in φ_p(a, b).

Throughout this paper, IR^n denotes the space of n-dimensional real column vectors and ^T denotes transpose. For any differentiable function f : IR^n → IR, ∇f(x) denotes the gradient of f at x. For any differentiable mapping F = (F_1, · · · , F_m)^T : IR^n → IR^m, ∇F(x) = [∇F_1(x) · · · ∇F_m(x)] denotes the transposed Jacobian of F at x. We denote by ‖x‖_p the p-norm of x and by ‖x‖ the Euclidean norm of x. In addition, unless otherwise stated, we always assume in the sequel that p is any fixed real number in (1, +∞).

### 2 Preliminaries

In this section, we recall some background concepts and materials which will play an important role in the subsequent analysis.

*Definition 2.1 Let F : IR*^{n}*→ IR*^{n}*, then*

*(a) F is monotone if hx − y, F (x) − F (y)i ≥ 0, for all x, y ∈ IR*^{n}*.*

*(b) F is strictly monotone if hx − y, F (x) − F (y)i > 0, for all x, y ∈ IR*^{n}*and x 6= y.*

*(c) F is strongly monotone with modulus µ > 0 if hx − y, F (x) − F (y)i ≥ µkx − yk*^{2}*, for*
*all x, y ∈ IR*^{n}*.*

(d) F is a P_0-function if max_{1≤i≤n, x_i≠y_i} (x_i − y_i)(F_i(x) − F_i(y)) ≥ 0, for all x, y ∈ IR^n with x ≠ y.

(e) F is a P-function if max_{1≤i≤n} (x_i − y_i)(F_i(x) − F_i(y)) > 0, for all x, y ∈ IR^n with x ≠ y.

(f) F is a uniform P-function with modulus µ > 0 if max_{1≤i≤n} (x_i − y_i)(F_i(x) − F_i(y)) ≥ µ‖x − y‖^2, for all x, y ∈ IR^n.

*(g) ∇F (x) is uniformly positive definite with modulus µ > 0 if d*^{T}*∇F (x)d ≥ µkdk*^{2}*, for*
*all x ∈ IR*^{n}*and d ∈ IR*^{n}*.*

*(h) F is Lipschitz continuous if there exists a constant L > 0 such that kF (x) − F (y)k ≤*
*Lkx − yk, for all x, y ∈ IR*^{n}*.*
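These definitions can be probed numerically on sampled pairs of points. Below is our own sketch (not from the paper) using a hypothetical linear map whose matrix M = [[2, 1], [1, 2]] is symmetric with eigenvalues 1 and 3, so the map is strongly monotone with modulus µ = 1 in the sense of Definition 2.1(c):

```python
import random

def F(x):
    # Hypothetical linear map F(x) = M x with M = [[2, 1], [1, 2]];
    # M is symmetric positive definite with smallest eigenvalue 1,
    # hence F is strongly monotone with modulus mu = 1.
    return [2.0 * x[0] + x[1], x[0] + 2.0 * x[1]]

random.seed(0)
mu = 1.0
for _ in range(200):
    x = [random.uniform(-5, 5) for _ in range(2)]
    y = [random.uniform(-5, 5) for _ in range(2)]
    d = [xi - yi for xi, yi in zip(x, y)]
    g = [fi - gi for fi, gi in zip(F(x), F(y))]
    inner = sum(di * gi for di, gi in zip(d, g))
    norm2 = sum(di * di for di in d)
    # Definition 2.1(c): <x - y, F(x) - F(y)> >= mu * ||x - y||^2
    assert inner >= mu * norm2 - 1e-9
```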

From the above definitions, it is obvious that strongly monotone functions are strictly monotone, and strictly monotone functions are monotone. Moreover, F is a P_0-function if F is monotone, and F is a uniform P-function with modulus µ > 0 if F is strongly monotone with modulus µ > 0. In addition, when F is continuously differentiable, we have the following conclusions.

1. F is monotone if and only if ∇F(x) is positive semidefinite for all x ∈ IR^n.

2. F is strictly monotone if ∇F(x) is positive definite for all x ∈ IR^n.

3. F is strongly monotone if and only if ∇F(x) is uniformly positive definite.

*Next, we recall the definition of P*_{0}*-matrix and P -matrix.*

*Definition 2.2 A matrix M ∈ IR*^{n×n}*is a*

*(a) P*_{0}*-matrix if each of its principal minors is nonnegative.*

*(b) P -matrix if each of its principal minors is positive.*

*It is obvious that every P -matrix is also a P*_{0}-matrix. Furthermore, it is known that the
*Jacobian of every continuously differentiable P*0*-function is a P*0-matrix.

Finally, we state one of the characterizations of P_0-matrices that will be used later; for more properties of P-matrices and P_0-matrices, please refer to [4].

*Lemma 2.1 A matrix M ∈ IR*^{n×n}*is a P*_{0}*-matrix if and only if for every nonzero vector*
*x there exists an index i such that x**i* *6= 0 and x**i**(Mx)**i* *≥ 0.*
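Definition 2.2 and Lemma 2.1 can be spot-checked in code for small matrices. The following is our own 2 × 2 sketch (hypothetical matrices, not from the paper):

```python
def is_P0_2x2(M):
    """Definition 2.2(a) for a 2x2 matrix: all principal minors >= 0."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[0][0] >= 0 and M[1][1] >= 0 and det >= 0

assert is_P0_2x2([[1.0, 2.0], [0.0, 1.0]])        # minors 1, 1, det = 1
assert not is_P0_2x2([[-1.0, 0.0], [0.0, 1.0]])   # negative diagonal entry

# Lemma 2.1, spot-checked: for a P_0-matrix M and any nonzero x there is
# an index i with x_i != 0 and x_i * (Mx)_i >= 0.
M = [[1.0, 2.0], [0.0, 1.0]]
for x in [(1.0, 0.0), (0.0, -1.0), (1.0, -1.0), (-2.0, 3.0)]:
    Mx = [M[0][0] * x[0] + M[0][1] * x[1], M[1][0] * x[0] + M[1][1] * x[1]]
    assert any(xi != 0 and xi * mi >= 0 for xi, mi in zip(x, Mx))
```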

### 3 A family of NCP functions and their properties

*In this section, we study a family of NCP functions φ*_{p}*defined as (7) with p > 1, which*
are indeed variants of Fischer-Burmeister function, and show that these functions have
several favorable properties analogous to what Fischer-Burmeister function has. We first
*present some similar properties of φ**p* to those for Fischer-Burmeister function.

*Proposition 3.1 Let φ**p* : IR^{2} *→ IR be defined as (7) with p being any fixed real number*
*in the interval (1, +∞). Then*

*(a) φ**p* *is an NCP-function, i.e., it satisfies (2).*

*(b) φ*_{p}*is sub-additive, i.e., φ*_{p}*(w + w*^{0}*) ≤ φ*_{p}*(w) + φ*_{p}*(w*^{0}*) for all w, w*^{0}*∈ IR*^{2}*.*

*(c) φ*_{p}*is positive homogeneous, i.e., φ*_{p}*(αw) = αφ*_{p}*(w) for all w ∈ IR*^{2} *and α ≥ 0.*

*(d) φ*_{p}*is convex, i.e., φ*_{p}*(αw + (1 − α)w*^{0}*) ≤ αφ*_{p}*(w) + (1 − α)φ*_{p}*(w*^{0}*) for all w, w*^{0}*∈ IR*^{2}
*and α ∈ (0, 1).*

(e) φ_p is Lipschitz continuous with L_1 = √2 + 2^{1/p − 1/2} when 1 < p < 2, and with L_2 = 1 + √2 when p ≥ 2. In other words, |φ_p(w) − φ_p(w')| ≤ L_1 ‖w − w'‖ when 1 < p < 2, and |φ_p(w) − φ_p(w')| ≤ L_2 ‖w − w'‖ when p ≥ 2, for all w, w' ∈ IR^2.
Proof. (a) The proof can be found in [21, page 20]. For completeness, we include it here. Consider any a ≥ 0 and b ≥ 0 satisfying ab = 0. Then we have either a = 0 or b = 0. This implies that φ_p(a, b) = (|a|^p)^{1/p} − a or φ_p(a, b) = (|b|^p)^{1/p} − b, and consequently φ_p(a, b) = 0. Conversely, consider any (a, b) ∈ IR^2 satisfying φ_p(a, b) = 0. Then a ≥ 0 and b ≥ 0 must hold, since otherwise (|a|^p + |b|^p)^{1/p} > a + b and hence φ_p(a, b) > 0. Now we prove that one of a and b must be 0. If not, then ‖(a, b)‖_p < ‖(a, b)‖_1 = a + b, which contradicts φ_p(a, b) = 0. The two directions show that φ_p is indeed an NCP-function.

(b) Let w = (a, b) and w' = (c, d). Then

φ_p(w + w') = ‖(a, b) + (c, d)‖_p − (a + b + c + d)
≤ ‖(a, b)‖_p + ‖(c, d)‖_p − (a + b) − (c + d)
= φ_p(a, b) + φ_p(c, d) = φ_p(w) + φ_p(w'),

where the inequality holds because the triangle inequality is valid for the p-norm when p > 1.

(c) Let w = (a, b) ∈ IR^2 and α ≥ 0. Then the proof follows from

φ_p(αw) = (|αa|^p + |αb|^p)^{1/p} − (αa + αb) = α (|a|^p + |b|^p)^{1/p} − α(a + b) = α φ_p(w).

(d) This is true by part (b) and part (c).

(e) Let w = (a, b) and w' = (c, d). We have

|φ_p(w) − φ_p(w')| = | ‖(a, b)‖_p − (a + b) − ‖(c, d)‖_p + (c + d) |
≤ | ‖(a, b)‖_p − ‖(c, d)‖_p | + |a − c| + |b − d|
≤ ‖(a, b) − (c, d)‖_p + √2 ( |a − c|^2 + |b − d|^2 )^{1/2}
= ‖w − w'‖_p + √2 ‖w − w'‖.

Then, from the inequality (see [13, (1.3)])

‖x‖_{p2} ≤ ‖x‖_{p1} ≤ n^{1/p1 − 1/p2} ‖x‖_{p2} for x ∈ IR^n and 1 < p1 < p2,

we obtain the desired results. □
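The Lipschitz constants of Proposition 3.1(e) can be checked numerically on random pairs of points. This is our own sketch (not from the paper); the random sampling only spot-checks the bound, it does not prove it:

```python
import math, random

def phi_p(a, b, p):
    # definition (7)
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

random.seed(1)
# L_1 = sqrt(2) + 2^(1/p - 1/2) for 1 < p < 2; L_2 = 1 + sqrt(2) for p >= 2.
for p, L in [(1.5, math.sqrt(2) + 2 ** (1 / 1.5 - 0.5)),
             (3.0, 1 + math.sqrt(2))]:
    for _ in range(500):
        w = (random.uniform(-10, 10), random.uniform(-10, 10))
        v = (random.uniform(-10, 10), random.uniform(-10, 10))
        lhs = abs(phi_p(w[0], w[1], p) - phi_p(v[0], v[1], p))
        rhs = L * math.hypot(w[0] - v[0], w[1] - v[1])
        assert lhs <= rhs + 1e-9
```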

As shown below, φ_p has further properties which are key to proving the results of the subsequent section.

Lemma 3.1 Let φ_p : IR^2 → IR be defined as (7) with p > 1. If {(a^k, b^k)} ⊆ IR^2 with a^k → −∞, or b^k → −∞, or (a^k → ∞ and b^k → ∞), then |φ_p(a^k, b^k)| → ∞ as k → ∞.

Proof. This result was mentioned in [21, page 20]. □
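The three divergent regimes of Lemma 3.1 are easy to observe numerically. A small sketch of our own (not from the paper), evaluating |φ_p| along each regime:

```python
def phi_p(a, b, p):
    # definition (7)
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

p = 1.5
# |phi_p| grows without bound along each of the three divergent regimes.
seq1 = [abs(phi_p(-t, 1.0, p)) for t in (1e1, 1e2, 1e3)]  # a^k -> -inf
seq2 = [abs(phi_p(1.0, -t, p)) for t in (1e1, 1e2, 1e3)]  # b^k -> -inf
seq3 = [abs(phi_p(t, t, p)) for t in (1e1, 1e2, 1e3)]     # a^k, b^k -> +inf
for s in (seq1, seq2, seq3):
    assert s[0] < s[1] < s[2]
```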

Next, we study another family of NCP functions ψ_p : IR^2 → IR_+ defined by (8). This class of functions leads to a reformulation of the NCP as an unconstrained minimization; in other words, they form a family of merit functions for the NCP. Furthermore, they have some favorable properties, shown below. In particular, ψ_p for any given p > 1 is continuously differentiable everywhere, whereas φ_p is not.

*Proposition 3.2 Let φ*_{p}*, ψ*_{p}*be defined as (7) and (8), respectively, where p is any fixed*
*real number in the interval (1, +∞). Then*

*(a) ψ*_{p}*is an NCP-function, i.e., it satisfies (2).*

*(b) ψ*_{p}*(a, b) ≥ 0 for all (a, b) ∈ IR*^{2}*.*

*(c) ψ*_{p}*is continuously differentiable everywhere.*

*(d) ∇*_{a}*ψ*_{p}*(a, b) · ∇*_{b}*ψ*_{p}*(a, b) ≥ 0 for all (a, b) ∈ IR*^{2}*. The equality holds if and only if*
*φ*_{p}*(a, b) = 0.*

*(e) ∇*_{a}*ψ*_{p}*(a, b) = 0 ⇐⇒ ∇*_{b}*ψ*_{p}*(a, b) = 0 ⇐⇒ φ*_{p}*(a, b) = 0.*

Proof. (a) Since ψ_p(a, b) = 0 if and only if φ_p(a, b) = 0, the desired result follows from Prop. 3.1(a).

(b) It is clear from the definition of ψ_p.

(c) From direct computation, we obtain ∇_a ψ_p(0, 0) = ∇_b ψ_p(0, 0) = 0. For (a, b) ≠ (0, 0),

∇_a ψ_p(a, b) = ( sgn(a) · |a|^{p−1} / ‖(a, b)‖_p^{p−1} − 1 ) φ_p(a, b),
∇_b ψ_p(a, b) = ( sgn(b) · |b|^{p−1} / ‖(a, b)‖_p^{p−1} − 1 ) φ_p(a, b), (12)

where sgn(·) is the sign function. Clearly,

| sgn(a) · |a|^{p−1} / ‖(a, b)‖_p^{p−1} | ≤ 1 and | sgn(b) · |b|^{p−1} / ‖(a, b)‖_p^{p−1} | ≤ 1 (13)

(i.e., these quotients are uniformly bounded), and moreover φ_p(a, b) → 0 as (a, b) → (0, 0). Therefore, we have ∇_a ψ_p(a, b) → 0 and ∇_b ψ_p(a, b) → 0 as (a, b) → (0, 0). This means that ψ_p is continuously differentiable everywhere.
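The gradient formulas (12) can be validated against central finite differences away from the origin. A sketch of our own (the sample points and step size are illustrative choices):

```python
import math

def phi_p(a, b, p):
    # definition (7)
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

def psi_p(a, b, p):
    # definition (8)
    return 0.5 * phi_p(a, b, p) ** 2

def grad_psi_p(a, b, p):
    """Gradient of psi_p from formula (12); assumes (a, b) != (0, 0)."""
    norm = (abs(a) ** p + abs(b) ** p) ** (1.0 / p)
    phi = phi_p(a, b, p)
    ga = (math.copysign(abs(a) ** (p - 1), a) / norm ** (p - 1) - 1.0) * phi
    gb = (math.copysign(abs(b) ** (p - 1), b) / norm ** (p - 1) - 1.0) * phi
    return ga, gb

# Central-difference check of (12) at a few points away from the origin.
h, p = 1e-6, 1.5
for a, b in [(1.2, -0.4), (-0.7, 2.0), (0.5, 0.5)]:
    ga, gb = grad_psi_p(a, b, p)
    fa = (psi_p(a + h, b, p) - psi_p(a - h, b, p)) / (2 * h)
    fb_ = (psi_p(a, b + h, p) - psi_p(a, b - h, p)) / (2 * h)
    assert abs(ga - fa) < 1e-4 and abs(gb - fb_) < 1e-4
```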

(d) From part (c), if (a, b) = (0, 0), then clearly ∇_a ψ_p(a, b) · ∇_b ψ_p(a, b) = 0 and ψ_p(a, b) = 0. Now assume (a, b) ≠ (0, 0). Then

∇_a ψ_p(a, b) · ∇_b ψ_p(a, b) = ( sgn(a) · |a|^{p−1} / ‖(a, b)‖_p^{p−1} − 1 ) ( sgn(b) · |b|^{p−1} / ‖(a, b)‖_p^{p−1} − 1 ) φ_p^2(a, b). (14)

From (13), it follows immediately that ∇_a ψ_p(a, b) · ∇_b ψ_p(a, b) ≥ 0 for all (a, b) ∈ IR^2. Equality holds if and only if φ_p(a, b) = 0, sgn(a) · |a|^{p−1} / ‖(a, b)‖_p^{p−1} = 1, or sgn(b) · |b|^{p−1} / ‖(a, b)‖_p^{p−1} = 1. In fact, if sgn(a) · |a|^{p−1} / ‖(a, b)‖_p^{p−1} = 1, then a > 0 and |a| = ‖(a, b)‖_p, which forces b = 0 and hence φ_p(a, b) = (|a|^p)^{1/p} − a = a − a = 0. Similarly, φ_p(a, b) = 0 if sgn(b) · |b|^{p−1} / ‖(a, b)‖_p^{p−1} = 1. Thus, we conclude that equality holds if and only if φ_p(a, b) = 0.

(e) This follows from the last part of the proof of part (d). □

It was shown that if F is monotone [10] or a P_0-function [5], then any stationary point of Ψ is a global minimum of the unconstrained minimization min_{x ∈ IR^n} Ψ(x), and hence solves the NCP. Moreover, it was also shown that if F is strongly monotone [10] or a uniform P-function [5], then the level sets of Ψ are bounded. In what follows, we present and prove analogous results for Ψ_p under the same conditions as in [5, 10]. The ideas for proving the following propositions are borrowed from those analogous results in [5, 10].

*Proposition 3.3 Let Ψ**p* : IR^{n}*→ IR be defined as (10) with p > 1. Then Ψ**p**(x) ≥ 0 for*
*all x ∈ IR*^{n}*and Ψ*_{p}*(x) = 0 if and only if x solves the NCP (1). Moreover, suppose that*
*the NCP (1) has at least one solution. Then x is a global minimizer of Ψ*_{p}*if and only if*
*x solves the NCP (1).*

Proof. The results directly follow from Prop. 3.2. *2*

Proposition 3.4 Let Ψ_p : IR^n → IR be defined as (10) with p > 1. Assume that F is either monotone or a P_0-function. Then every stationary point of Ψ_p is a global minimum of (11) and therefore solves the NCP (1).

Proof. (I) First, suppose that F is monotone and let x* be a stationary point of Ψ_p. Then ∇Ψ_p(x*) = 0, which implies that

Σ_{i=1}^{n} ( ∇_a ψ_p(x*_i, F_i(x*)) e_i + ∇_b ψ_p(x*_i, F_i(x*)) ∇F_i(x*) ) = 0, (15)

where e_i = (0, · · · , 1, · · · , 0)^T. We denote ∇_a ψ_p(x*, F(x*)) = ( · · · , ∇_a ψ_p(x*_i, F_i(x*)), · · · )^T and ∇_b ψ_p(x*, F(x*)) = ( · · · , ∇_b ψ_p(x*_i, F_i(x*)), · · · )^T, respectively. Then (15) can be abbreviated as

∇_a ψ_p(x*, F(x*)) + ∇F(x*) ∇_b ψ_p(x*, F(x*)) = 0. (16)

Now, multiplying (16) by ∇_b ψ_p(x*, F(x*))^T leads to

Σ_{i=1}^{n} ∇_a ψ_p(x*_i, F_i(x*)) · ∇_b ψ_p(x*_i, F_i(x*)) + ∇_b ψ_p(x*, F(x*))^T ∇F(x*) ∇_b ψ_p(x*, F(x*)) = 0. (17)

Since F is monotone, ∇F(x*) is positive semidefinite, and hence the second term of (17) is nonnegative. Moreover, each term in the first summation of (17) is nonnegative as well due to Prop. 3.2(d). Therefore, we have

∇_a ψ_p(x*_i, F_i(x*)) · ∇_b ψ_p(x*_i, F_i(x*)) = 0, ∀ i = 1, 2, · · · , n,

which yields φ_p(x*_i, F_i(x*)) = 0 for all i = 1, 2, · · · , n by Prop. 3.2(e). Thus Ψ_p(x*) = 0, which says that x* is a global minimizer of (11).

(II) Next, suppose that F is a P_0-function and let x* be a stationary point of Ψ_p. Then ∇Ψ_p(x*) = 0, which yields (16). Notice that ∇_a ψ_p(a, b) and ∇_b ψ_p(a, b) are given by (12). Let A(x*) and B(x*) denote the n × n diagonal matrices whose diagonal elements are given by

A_ii(x*) = sgn(x*_i) · |x*_i|^{p−1} / ‖(x*_i, F_i(x*))‖_p^{p−1} if (x*_i, F_i(x*)) ≠ (0, 0)

and

B_ii(x*) = sgn(F_i(x*)) · |F_i(x*)|^{p−1} / ‖(x*_i, F_i(x*))‖_p^{p−1} if (x*_i, F_i(x*)) ≠ (0, 0).

If (x*_i, F_i(x*)) = (0, 0), then we set A_ii(x*) = B_ii(x*) = 1, i.e., the corresponding diagonal entries of the n × n identity matrix.

With the notation A(x*), B(x*) and (12), equation (16) can be rewritten as

[ (A(x*) − I) + ∇F(x*)(B(x*) − I) ] Φ_p(x*) = 0. (18)

We want to prove that Φ_p(x*) = 0 (and hence Ψ_p(x*) = 0). Suppose not, i.e., Φ_p(x*) ≠ 0. Recall that Φ_p(x*) = 0 if and only if (1) is satisfied, and that the i-th component of Φ_p(x*) is φ_p(x*_i, F_i(x*)). Thus, φ_p(x*_i, F_i(x*)) ≠ 0 means that one of the following occurs:

1. x*_i ≠ 0 and F_i(x*) ≠ 0.

2. x*_i = 0 and F_i(x*) < 0.

3. x*_i < 0 and F_i(x*) = 0.

In every case, we have B_ii(x*) ≠ 1 (since B_ii(x*) = 1 if and only if φ_p(x*_i, F_i(x*)) = 0 by Prop. 3.2(d)(e)), so that (B_ii(x*) − 1) · φ_p(x*_i, F_i(x*)) ≠ 0. Similar arguments apply to the vector (A(x*) − I)Φ_p(x*). Thus, if Φ_p(x*) ≠ 0, then (B(x*) − I)Φ_p(x*) and (A(x*) − I)Φ_p(x*) are both nonzero; moreover, their nonzero elements occupy the same positions, and such nonzero elements have the same sign. But then, for equation (18) to hold, it would be necessary that ∇F(x*) "revert the sign" of all the nonzero elements of (B(x*) − I)Φ_p(x*), which contradicts the fact that ∇F(x*) is a P_0-matrix by Lemma 2.1. □

Proposition 3.5 Let Ψ_p : IR^n → IR be defined as (10) with p > 1. Assume that F is either strongly monotone or a uniform P-function. Then the level sets

L(Ψ_p, γ) := { x ∈ IR^n | Ψ_p(x) ≤ γ }

are bounded for all γ ∈ IR.

Proof. (I) First, we consider the case where F is strongly monotone. Suppose there exists an unbounded sequence {x^k}_{k∈K} ⊆ L(Ψ_p, γ) with ‖x^k‖ → ∞ for some γ ≥ 0, where K is a subset of N. We define the index set

J := { i ∈ {1, 2, · · · , n} | {x^k_i} is unbounded }.

Since {x^k} is unbounded, J ≠ ∅. Let {z^k} denote the bounded sequence defined by

z^k_i = 0 if i ∈ J, and z^k_i = x^k_i if i ∉ J.

Then from the definition of {z^k} and the strong monotonicity of F, we obtain

µ Σ_{i∈J} (x^k_i)^2 = µ ‖x^k − z^k‖^2
≤ ⟨x^k − z^k, F(x^k) − F(z^k)⟩
= Σ_{i=1}^{n} (x^k_i − z^k_i)(F_i(x^k) − F_i(z^k)) (19)
= Σ_{i∈J} x^k_i (F_i(x^k) − F_i(z^k))
≤ ( Σ_{i∈J} (x^k_i)^2 )^{1/2} Σ_{i∈J} |F_i(x^k) − F_i(z^k)|.

Since Σ_{i∈J} (x^k_i)^2 ≠ 0 for k ∈ K, dividing both sides of (19) by ( Σ_{i∈J} (x^k_i)^2 )^{1/2} yields

µ ( Σ_{i∈J} (x^k_i)^2 )^{1/2} ≤ Σ_{i∈J} |F_i(x^k) − F_i(z^k)|, k ∈ K. (20)

On the other hand, {F_i(z^k)}_{k∈K} is bounded for each i ∈ J, because {z^k}_{k∈K} is bounded and F is continuous. Therefore, from (20), we have

|F_{i0}(x^k)| → ∞ for some i0 ∈ J.

Also, |x^k_{i0}| → ∞ by the definition of the index set J. Thus, Lemma 3.1 yields

|φ_p(x^k_{i0}, F_{i0}(x^k))| → ∞ as k → ∞.

But this contradicts {x^k} ⊆ L(Ψ_p, γ).

(II) If F is a uniform P-function, then the proof follows almost the same arguments as above. In particular, (19) is replaced by

µ Σ_{i∈J} (x^k_i)^2 = µ ‖x^k − z^k‖^2
≤ max_{1≤i≤n} (x^k_i − z^k_i)(F_i(x^k) − F_i(z^k))
= max_{i∈J} x^k_i (F_i(x^k) − F_i(z^k)) (21)
= x^k_{j0} (F_{j0}(x^k) − F_{j0}(z^k))
≤ |x^k_{j0}| · |F_{j0}(x^k) − F_{j0}(z^k)|,

where j0 is one of the indices at which the max is attained. Dividing both sides of (21) by |x^k_{j0}|, the proof follows. □

### 4 A descent method

In this section, we study a descent method for solving the unconstrained minimization (11) that does not require the derivative of the mapping F involved in the NCP. In addition, we prove a global convergence result for this derivative-free descent algorithm. More precisely, we consider the search direction

d^k := −∇_b ψ_p(x^k, F(x^k)), (22)

where ∇_b ψ_p(x^k, F(x^k)) = ( ∇_b ψ_p(x^k_1, F_1(x^k)), · · · , ∇_b ψ_p(x^k_n, F_n(x^k)) )^T. From the following lemma, we see that d^k is a descent direction of Ψ_p at x^k under a monotonicity assumption.

Lemma 4.1 Let x^k ∈ IR^n and let F be a monotone function. Then the search direction defined by (22) satisfies the descent condition ∇Ψ_p(x^k)^T d^k < 0 as long as x^k is not a solution of the NCP (1). Moreover, if F is strongly monotone with modulus µ > 0, then ∇Ψ_p(x^k)^T d^k ≤ −µ ‖d^k‖^2.

Proof. Since ∇Ψ_p(x^k) = ∇_a ψ_p(x^k, F(x^k)) + ∇F(x^k) ∇_b ψ_p(x^k, F(x^k)), we have

∇Ψ_p(x^k)^T d^k = − Σ_{i=1}^{n} ∇_a ψ_p(x^k_i, F_i(x^k)) · ∇_b ψ_p(x^k_i, F_i(x^k)) − (d^k)^T ∇F(x^k) d^k. (23)

From the monotonicity of F, it follows that ∇F(x^k) is positive semidefinite, so the quadratic term (d^k)^T ∇F(x^k) d^k is nonnegative. Also, by Prop. 3.2(d), each term of the summation in (23) is nonnegative. Therefore, ∇Ψ_p(x^k)^T d^k ≤ 0. We next prove that ∇Ψ_p(x^k)^T d^k < 0 by contradiction. Assume that ∇Ψ_p(x^k)^T d^k = 0. Then ∇_a ψ_p(x^k_i, F_i(x^k)) · ∇_b ψ_p(x^k_i, F_i(x^k)) = 0 for all i which, by Prop. 3.2(d) again, yields φ_p(x^k_i, F_i(x^k)) = 0 for all i. Thus Φ_p(x^k) = 0 and Ψ_p(x^k) = 0. Consequently, x^k solves the NCP (1), which contradicts our assumption that x^k is not a solution of the NCP (1).

If F is strongly monotone with modulus µ > 0, then we have

∇Ψ_p(x^k)^T d^k ≤ −(d^k)^T ∇F(x^k) d^k ≤ −µ ‖d^k‖^2,

where the first inequality follows from (23) and Prop. 3.2(d). □
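As a numerical sanity check of the descent condition in Lemma 4.1 (our own sketch; the strongly monotone map F and the point x below are hypothetical examples), one can approximate ∇Ψ_p by central differences and verify ∇Ψ_p(x)^T d < 0:

```python
import math

def phi_p(a, b, p):
    # definition (7)
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

def grad_b_psi(a, b, p):
    # second formula of (12); returns 0 at the origin by Prop. 3.2(c)
    if a == 0.0 and b == 0.0:
        return 0.0
    norm = (abs(a) ** p + abs(b) ** p) ** (1.0 / p)
    return (math.copysign(abs(b) ** (p - 1), b) / norm ** (p - 1) - 1.0) * phi_p(a, b, p)

def F(x):
    # Hypothetical strongly monotone map (Jacobian 2I, modulus mu = 2).
    return [2.0 * x[0] + 1.0, 2.0 * x[1] - 1.0]

def Psi(x, p):
    # merit function (10)
    return 0.5 * sum(phi_p(xi, fi, p) ** 2 for xi, fi in zip(x, F(x)))

p = 1.5
x = [1.0, -2.0]                        # not a solution of the NCP
d = [-grad_b_psi(xi, fi, p) for xi, fi in zip(x, F(x))]   # direction (22)
h, g = 1e-6, []
for i in range(2):                     # central-difference gradient of Psi
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    g.append((Psi(xp, p) - Psi(xm, p)) / (2 * h))
slope = sum(gi * di for gi, di in zip(g, d))
assert slope < 0.0                     # descent condition of Lemma 4.1
```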

The above lemma motivates the following descent algorithm.

Algorithm 4.1

(Step 0) Given a real number p > 1 and x^0 ∈ IR^n, choose the parameters ε ≥ 0, σ ∈ (0, 1) and β ∈ (0, 1). Set k := 0.

(Step 1) If Ψ_p(x^k) ≤ ε, then stop.

(Step 2) Let d^k := −∇_b ψ_p(x^k, F(x^k)).

(Step 3) Compute the step-size t_k := β^{m_k}, where m_k is the smallest nonnegative integer m satisfying the Armijo-type condition

Ψ_p(x^k + β^m d^k) ≤ (1 − σβ^{2m}) Ψ_p(x^k). (24)

(Step 4) Set x^{k+1} := x^k + t_k d^k, k := k + 1, and go to Step 1.

We next show the global convergence of Algorithm 4.1 under the assumption that F is strongly monotone. To this end, we assume that the parameter ε used in Algorithm 4.1 is set to zero and that Algorithm 4.1 generates an infinite sequence {x^k}.

*Proposition 4.1 Suppose that F is strongly monotone. Then the sequence {x*^{k}*} gener-*
*ated by Algorithm 4.1 has at least one accumulation point and any accumulation point is*
*a solution of the NCP (1).*

Proof. First, we show that there exists a nonnegative integer m_k in Step 3 of Algorithm 4.1 whenever x^k is not a solution. Assume the conclusion does not hold. Then for any m > 0,

Ψ_p(x^k + β^m d^k) − Ψ_p(x^k) > −σβ^{2m} Ψ_p(x^k).

Dividing both sides by β^m and taking the limit m → ∞ yields

⟨∇Ψ_p(x^k), d^k⟩ ≥ 0.

Since F is strongly monotone, this contradicts Lemma 4.1. Hence we can find an integer m_k in Step 3.

Second, we show that the sequence {x^k} generated by Algorithm 4.1 has at least one accumulation point. By the descent property of Algorithm 4.1, the sequence {Ψ_p(x^k)} is decreasing. Thus, by Prop. 3.5, the generated sequence is bounded and hence has at least one accumulation point.

Finally, we prove that every accumulation point is a solution of the NCP (1). Let x* be an arbitrary accumulation point of the generated sequence {x^k}. Then there exists a subsequence {x^k}_{k∈K} converging to x*. Since ψ_p is continuously differentiable, −∇_b ψ_p(·, F(·)) is continuous, and therefore {d^k}_{k∈K} → d*. We now discuss two cases. First, consider the case where there exists a constant β̄ such that β^{m_k} ≥ β̄ > 0 for all k ∈ K. Then, from (24), we obtain

Ψ_p(x^{k+1}) ≤ (1 − σβ̄^2) Ψ_p(x^k)

for all k ∈ K, and the entire sequence {Ψ_p(x^k)} is decreasing. Thus, taking the limit, we have Ψ_p(x*) = 0, which says that x* is a solution of the NCP (1). Now consider the other case, where there exists a further subsequence such that β^{m_k} → 0. By the Armijo rule (24) in Step 3, we have

Ψ_p(x^k + β^{m_k − 1} d^k) − Ψ_p(x^k) > −σβ^{2(m_k − 1)} Ψ_p(x^k).

Dividing both sides by β^{m_k − 1} and passing to the limit on the subsequence, we obtain

⟨∇Ψ_p(x*), d*⟩ ≥ 0,

which implies that x* is a solution of the NCP (1) by Lemma 4.1. □

### 5 Numerical experiments

We implemented Algorithm 4.1 in MATLAB 6.1 for all test problems with all available starting points in MCPLIB [1]. All numerical experiments were performed on a PC with a 2.8 GHz CPU and 512 MB of RAM. In order to improve the numerical behavior of Algorithm 4.1, we replaced the standard (monotone) Armijo rule by the nonmonotone line search described in [11], i.e., we computed the smallest nonnegative integer l such that

Ψ_p(x^k + β^l d^k) ≤ W_k − σβ^{2l} Ψ_p(x^k),

where W_k is given by

W_k = max_{j = k − m_k, · · · , k} Ψ_p(x^j)

and where, for given nonnegative integers m̂ and s, we set

m_k = 0 if k ≤ s, and m_k = min{m_{k−1} + 1, m̂} otherwise.

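The reference value W_k above can be sketched as follows (our own helper, not the authors' code): W_k is the maximum of the last m_k + 1 merit values, where the window length m_k grows by one per iteration after a start-up phase of s iterations and is capped at m̂:

```python
def nonmonotone_window(psi_values, m_hat=5, s=5):
    """Compute W_k for each k from a history of merit values Psi(x^k)."""
    refs, m_prev = [], 0
    for k in range(len(psi_values)):
        m_k = 0 if k <= s else min(m_prev + 1, m_hat)
        refs.append(max(psi_values[k - m_k:k + 1]))   # W_k over the window
        m_prev = m_k
    return refs

vals = [10.0, 8.0, 9.0, 7.0, 6.5, 7.5, 5.0, 6.0, 4.0, 3.0]
W = nonmonotone_window(vals)
assert W[0] == 10.0   # k <= s: monotone reference, W_k = Psi(x^k)
assert W[1] == 8.0
assert W[6] == 7.5    # k = 6: m_k = 1, maximum over the last two values
```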
Throughout the experiments, we used m̂ = 5 and s = 5. Moreover, we used the parameters σ = 1.0e−10 and β = 0.2 in Algorithm 4.1. We terminated the iteration when the number of iterations exceeded 500000, or the steplength was less than 1.0e−10, or one of the following conditions was satisfied:

(C1) Ψ_p(x^k) ≤ 1.0e−5 and (x^k)^T F(x^k) ≤ 5.0e−3;

(C2) Ψ_p(x^k) ≤ 3.0e−7 and (x^k)^T F(x^k) ≤ 3.0e−2;

(C3) Ψ_p(x^k) ≤ 3.0e−6 and (x^k)^T F(x^k) ≤ 1.0e−2.

Our computational results are summarized in Tables 1-3 (see the Appendix). In these
tables, the first column lists the name of the problems and the starting point number
*in MCPLIB, Gap denotes the value of x*^{T}*F (x) at the final iteration, NF indicates the*
number of function evaluations of the merit function Ψ* _{p}* for solving each problem, and
Time represents the CPU time in seconds for solving each problem.

The results reported in Tables 1-3 show that our descent method based on the merit function Ψ_{1.5}(x), Ψ_2(x) or Ψ_3(x) was able to solve most complementarity problems in MCPLIB. More precisely, there are seven failures (pgvon105, pgvon106, powell, scarfanum, scarfasum, scarfbnum, scarfbsum) for Algorithm 4.1, all due to a too small steplength. After a careful check, we found that the direction d defined in Algorithm 4.1 is not a descent direction for these problems. In fact, these seven problems are regarded as difficult ones even for Newton-type algorithms [19, 20]. In addition, we see that the descent algorithm using the merit function Ψ_{1.5}(x) gives better numerical results than the one using the Fischer-Burmeister function. In particular, Tables 1-3 indicate that the descent algorithm based on Ψ_p(x) takes more function evaluations and yields a larger value of Gap as the parameter p increases. A reasonable interpretation is that the value of Ψ_p(x) becomes smaller as p increases, which causes some difficulty for the descent Algorithm 4.1. This also implies that the performance of Algorithm 4.1 worsens as the parameter p increases. To the best of our knowledge, this observation, which is useful for constructing new NCP-functions, has not appeared in the literature.

### 6 Final remarks

In this paper, we have studied a family of NCP-functions φ_p(a, b) which includes the well-
known Fischer-Burmeister function as a special case, and we have shown that this class of
functions enjoys several favorable properties shared by other NCP-functions. In addition, we
proposed a descent method for the unconstrained minimization (11), which is a reformulation
of the NCP via the proposed NCP-functions. Numerical results for the test problems
in MCPLIB show that this method is promising when Ψ_p(x) is specified as Ψ_{1.5}(x),
Ψ_2(x) or Ψ_3(x). Moreover, our numerical experiments indicate that the performance of
the descent method improves as p decreases, which is a new and important finding: there
do exist NCP-functions in this family that outperform the Fischer-Burmeister function.
Whether a similar phenomenon occurs for other algorithms is unknown and constitutes an
interesting topic for future research.
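For concreteness, the family studied here can be sketched numerically as follows. This is a minimal sketch assuming, as stated in the abstract, that φ_p is based on the p-norm, i.e. φ_p(a, b) = ‖(a, b)‖_p − (a + b); the merit function Ψ_p shown uses the standard squared-NCP-function construction, whose exact scaling may differ from the paper's.

```python
def phi_p(a, b, p=1.5):
    """Generalized Fischer-Burmeister function
        phi_p(a, b) = ||(a, b)||_p - (a + b),   p in (1, +inf).
    phi_p(a, b) = 0 iff a >= 0, b >= 0 and a*b = 0; p = 2 recovers
    the classical Fischer-Burmeister function."""
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

def Psi_p(x, F, p=1.5):
    """Merit function built from phi_p via the usual squared-function
    construction, Psi_p(x) = (1/2) * sum_i phi_p(x_i, F_i(x))**2
    (assumed scaling; the paper's definition may differ by a factor)."""
    return 0.5 * sum(phi_p(xi, fi, p) ** 2 for xi, fi in zip(x, F(x)))
```

Since phi_p vanishes exactly at complementary pairs, Psi_p(x) = 0 if and only if x solves the NCP, which is what makes the unconstrained minimization reformulation possible.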

Many issues remain to be explored for this family of NCP-functions, as has been done
for other NCP-functions in the literature. For instance, it would be of interest to
establish the semismoothness of ψ_p and the Lipschitz continuity of ∇ψ_p; in fact,
some of these properties have recently been studied in [2, 3]. It would also be
interesting to know whether this class of NCP-functions can be extended to the SDCP
and the SOCCP. Some researchers have begun to investigate this issue, but no results
have been reported so far. We leave these as future research topics.

Acknowledgments. The authors are grateful to Professor P. Tseng for his suggestion to study this family of NCP-functions and thank the referees for their careful reading and helpful suggestions.

### References

*[1] S. C. Billups, S. P. Dirkse and M. C. Soares, A comparison of algorithms*
*for large scale mixed complementarity problems, Computational Optimization and*
Applications, vol. 7, pp. 3-25, 1997.

*[2] J.-S. Chen, On some NCP-functions based on the generalized Fischer-Burmeister*
*function, Asia-Pacific Journal of Operational Research, vol. 24, pp. 401-420, 2007.*

*[3] J.-S. Chen, The semismooth-related properties of a merit function and a descent*
*method for the nonlinear complementarity problem, Journal of Global Optimization,*
vol. 36, pp. 565-580, 2006.

*[4] R.W. Cottle, J.-S. Pang and R.-E. Stone, The Linear Complementarity Prob-*
*lem, Academic Press, New York, 1992.*

*[5] F. Facchinei and J. Soares, A new merit function for nonlinear complementarity*
*problems and a related algorithm, SIAM Journal on Optimization, vol. 7, pp. 225-247,*
1997.

*[6] M. C. Ferris, O. L. Mangasarian, and J.-S. Pang, editors, Complementarity:*
*Applications, Algorithms and Extensions, Kluwer Academic Publishers, Dordrecht,*
2001.

*[7] A. Fischer, A special Newton-type optimization method, Optimization, vol. 24,*
pp. 269-284, 1992.

*[8] A. Fischer, Solution of the monotone complementarity problem with locally Lips-*
*chitzian functions, Mathematical Programming, vol. 76, pp. 513-532, 1997.*

*[9] M. Fukushima, Merit functions for variational inequality and complementarity prob-*
*lems, Nonlinear Optimization and Applications, edited by G. Di Pillo and F. Giannessi,*
Plenum Press, New York, pp. 155-170, 1996.

*[10] C. Geiger and C. Kanzow, On the resolution of monotone complementarity*
*problems, Computational Optimization and Applications, vol. 5, pp. 155-173, 1996.*

*[11] L. Grippo, F. Lampariello and S. Lucidi, A nonmonotone line search tech-*
*nique for Newton’s method, SIAM Journal on Numerical Analysis, vol. 23, pp. 707-716,*
1986.

*[12] P. T. Harker and J.-S. Pang, Finite dimensional variational inequality and*
*nonlinear complementarity problem: A survey of theory, algorithms and applications,*
Mathematical Programming, vol. 48, pp. 161-220, 1990.

*[13] N. J. Higham, Estimating the matrix p-norm, Numerische Mathematik, vol. 62,*
pp. 539-555, 1992.

*[14] H. Jiang, Unconstrained minimization approaches to nonlinear complementarity*
*problems, Journal of Global Optimization, vol. 9, pp. 169-181, 1996.*

*[15] C. Kanzow, Nonlinear complementarity as unconstrained optimization, Journal of*
Optimization Theory and Applications, vol. 88, pp. 139-155, 1996.

*[16] O. L. Mangasarian and M. V. Solodov, Nonlinear complementarity as un-*
*constrained and constrained minimization, Mathematical Programming, vol. 62, pp.*
277-297, 1993.

*[17] O. L. Mangasarian, Equivalence of the complementarity problem to a system of*
*nonlinear equations, SIAM Journal on Applied Mathematics, vol. 31, pp. 89-92, 1976.*

*[18] J.-S. Pang, Complementarity problems, Handbook of Global Optimization, edited*
by R. Horst and P. Pardalos, Kluwer Academic Publishers, Boston, Massachusetts,
pp. 271-338, 1994.

*[19] H.-D. Qi and L.-Z. Liao, A smoothing Newton method for general nonlinear com-*
*plementarity problems, Computational Optimization and Applications, vol. 17, pp.*
231-253, 2000.

*[20] D. Sun and L.-Q. Qi, On NCP-functions, Computational Optimization and Ap-*
plications, vol. 13, pp. 201-220, 1999.

*[21] P. Tseng, Global behaviour of a class of merit functions for the nonlinear com-*
*plementarity problem, Journal of Optimization Theory and Applications, vol. 89, pp.*
17-37, 1996.

*[22] K. Yamada, N. Yamashita, and M. Fukushima, A new derivative-free descent*
*method for the nonlinear complementarity problems, in Nonlinear Optimization and*
Related Topics edited by G.D. Pillo and F. Giannessi, Kluwer Academic Publishers,
Netherlands, pp. 463–489, 2000.

*[23] N. Yamashita and M. Fukushima, On stationary points of the implicit La-*
*grangian for the nonlinear complementarity problems, Journal of Optimization The-*
ory and Applications, vol. 84, pp. 653-663, 1995.

*[24] N. Yamashita and M. Fukushima, Modified Newton methods for solving a semis-*
*mooth reformulation of monotone complementarity problems, Mathematical Program-*
ming, vol. 76, pp. 469-491, 1997.