where $f_j : \mathbb{R}^n \to \mathbb{R}$ for $j \in \{m+1, m+2, \cdots, n\}$. For simplicity, throughout this paper, we denote $f : \mathbb{R}^n \to \mathbb{R}^n$ as

$$
f(x) := \begin{bmatrix} f_I(x) \\ f_E(x) \end{bmatrix}
= \begin{bmatrix} f_1(x) \\ \vdots \\ f_m(x) \\ f_{m+1}(x) \\ \vdots \\ f_n(x) \end{bmatrix}
$$

and assume that $f$ is continuously differentiable. When $E$ is the empty set, the system (1.1) reduces to a system of inequalities, whereas it reduces to a system of equations when $I$ is empty.

Problems in the form of (1.1) arise in many real applications, including data analysis, computer-aided design, image reconstruction, and set separation problems. Many optimization methods have been proposed for solving the system (1.1), for instance, the non-interior continuation method [14], smoothing-type algorithms [7, 13], the Newton algorithm [8], and iteration methods [5, 9, 10, 12]. In this paper, we consider a smoothing-type algorithm similar to those studied in [7, 13] for solving the system (1.1). In particular, we propose a family of smoothing functions, investigate its properties, and report the numerical performance of an algorithm built on this new family.

As seen in [7, 13], the main idea of smoothing-type algorithms for solving the system (1.1) is to reformulate (1.1) as a system of smooth equations via the projection function. More specifically, for any $x = (x_1, x_2, \cdots, x_n) \in \mathbb{R}^n$, one defines

$$
(x)_+ := \begin{bmatrix} \max\{0, x_1\} \\ \vdots \\ \max\{0, x_n\} \end{bmatrix}.
$$

Then, the system (1.1) is equivalent to the following system of equations:

$$
\begin{cases} (f_I(x))_+ = 0 \\ f_E(x) = 0. \end{cases} \tag{1.2}
$$

Since the function $(f_I(x))_+$ in the reformulation (1.2) is nonsmooth, classical Newton methods cannot be directly applied to solve (1.2). To overcome this, a smoothing algorithm was considered in [7, 13], in which the following smoothing function was employed:

$$
\varphi(\mu, t) = \begin{cases}
t & \text{if } t \ge \mu, \\
\dfrac{(t+\mu)^2}{4\mu} & \text{if } -\mu < t < \mu, \\
0 & \text{if } t \le -\mu,
\end{cases} \tag{1.3}
$$

where $\mu > 0$.

In this paper, we propose a family of new smoothing functions, which includes the function $\varphi(\mu, t)$ given in (1.3) as a special case, for solving the reformulation (1.2). More specifically, we consider the family of smoothing functions below:

$$
\varphi_p(\mu, t) = \begin{cases}
t & \text{if } t \ge \dfrac{\mu}{p-1}, \\[4pt]
\dfrac{\mu}{p-1}\left[\dfrac{(p-1)(t+\mu)}{p\mu}\right]^p & \text{if } -\mu < t < \dfrac{\mu}{p-1}, \\[4pt]
0 & \text{if } t \le -\mu,
\end{cases} \tag{1.4}
$$

where $\mu > 0$ and $p \ge 2$. Note that $\varphi_p$ reduces to the smoothing function studied in [13] when $p = 2$. The graphs of $\varphi_p$ with different values of $p$ and various $\mu$ are depicted in Figures 1-3.
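As a quick sanity check, the piecewise formula (1.4) is straightforward to implement. The following Python sketch (the name `phi_p` is ours, for illustration only) evaluates $\varphi_p$ and numerically illustrates the special case $p = 2$ together with the limiting behavior in $\mu$ and $p$:

```python
import numpy as np

def phi_p(mu, t, p=2):
    """Smoothing function phi_p(mu, t) of (1.4); mu >= 0, p >= 2.
    At mu = 0 it returns the plus function (t)_+."""
    if t >= mu / (p - 1):
        return float(t)
    if t <= -mu:
        return 0.0
    return (mu / (p - 1)) * ((p - 1) * (t + mu) / (p * mu)) ** p

# p = 2 recovers (t + mu)^2 / (4 mu) on the middle piece
assert np.isclose(phi_p(1.0, 0.0, p=2), 0.25)
# mu -> 0 recovers the plus function (Proposition 1.1(b))
assert phi_p(0.0, 0.3) == 0.3 and phi_p(0.0, -0.3) == 0.0
# large p approaches (t)_+ for fixed mu (Proposition 1.1(d))
assert abs(phi_p(0.2, 0.1, p=1000) - 0.1) < 1e-12
```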

**Proposition 1.1.** *Let $\varphi_p$ be defined as in (1.4). Then, the following hold.*

*(a) $\varphi_p(\cdot, \cdot)$ is continuously differentiable at any $(\mu, t) \in \mathbb{R}_{++} \times \mathbb{R}$.*

*(b) $\varphi_p(0, t) = (t)_+$.*

*(c) $\dfrac{\partial \varphi_p(\mu, t)}{\partial t} \ge 0$ for any $(\mu, t) \in \mathbb{R}_{++} \times \mathbb{R}$.*

*(d) $\lim_{p \to \infty} \varphi_p(\mu, t) = (t)_+$.*

*Proof.* (a) First, we calculate $\frac{\partial \varphi_p(\mu,t)}{\partial t}$ and $\frac{\partial \varphi_p(\mu,t)}{\partial \mu}$ as below:

$$
\frac{\partial \varphi_p(\mu,t)}{\partial t} =
\begin{cases}
1 & \text{if } t \ge \dfrac{\mu}{p-1}, \\[4pt]
\left[\dfrac{(p-1)(t+\mu)}{p\mu}\right]^{p-1} & \text{if } -\mu < t < \dfrac{\mu}{p-1}, \\[4pt]
0 & \text{if } t \le -\mu,
\end{cases}
$$

$$
\frac{\partial \varphi_p(\mu,t)}{\partial \mu} =
\begin{cases}
0 & \text{if } t \ge \dfrac{\mu}{p-1}, \\[4pt]
\left[\dfrac{(p-1)(t+\mu)}{p\mu}\right]^{p-1} \dfrac{t+\mu-pt}{p\mu} & \text{if } -\mu < t < \dfrac{\mu}{p-1}, \\[4pt]
0 & \text{if } t \le -\mu.
\end{cases}
$$

Then, we see that $\frac{\partial \varphi_p(\mu,t)}{\partial t}$ is continuous because

$$
\lim_{t \to \frac{\mu}{p-1}} \frac{\partial \varphi_p(\mu,t)}{\partial t}
= \lim_{t \to \frac{\mu}{p-1}} \left[\frac{(p-1)\left(\frac{\mu}{p-1}+\mu\right)}{p\mu}\right]^{p-1} = 1,
\qquad
\lim_{t \to -\mu} \frac{\partial \varphi_p(\mu,t)}{\partial t}
= \lim_{t \to -\mu} \left[\frac{(p-1)(-\mu+\mu)}{p\mu}\right]^{p-1} = 0,
$$

and $\frac{\partial \varphi_p(\mu,t)}{\partial \mu}$ is continuous since

$$
\lim_{t \to \frac{\mu}{p-1}} \frac{\partial \varphi_p(\mu,t)}{\partial \mu}
= \lim_{t \to \frac{\mu}{p-1}} \left[\frac{(p-1)\left(\frac{\mu}{p-1}+\mu\right)}{p\mu}\right]^{p-1}
\frac{\frac{\mu}{p-1}+\mu-\frac{p\mu}{p-1}}{p\mu} = 0,
\qquad
\lim_{t \to -\mu} \frac{\partial \varphi_p(\mu,t)}{\partial \mu}
= \lim_{t \to -\mu} \left[\frac{(p-1)(-\mu+\mu)}{p\mu}\right]^{p-1}
\frac{-\mu+\mu-p(-\mu)}{p\mu} = 0.
$$

The above verifications imply that $\varphi_p(\cdot, \cdot)$ is continuously differentiable.

(b) From the definition of $\varphi_p(\mu, t)$, it is clear that

$$
\varphi_p(0, t) = \begin{cases} t & \text{if } t \ge 0, \\ 0 & \text{if } t \le 0, \end{cases} \;=\; (t)_+,
$$

which is the desired result.

(c) When $-\mu < t < \frac{\mu}{p-1}$, we have $t + \mu > 0$. Hence, from the expression of $\frac{\partial \varphi_p(\mu,t)}{\partial t}$, it is obvious that $\left[\frac{(p-1)(t+\mu)}{p\mu}\right]^{p-1} \ge 0$, which says $\frac{\partial \varphi_p(\mu,t)}{\partial t} \ge 0$.

(d) Part (d) is clear from the definition.

The properties of $\varphi_p$ in Proposition 1.1 can be observed from the graphs. In particular, Figures 1-2 show that $\varphi_p(\mu, t)$ approaches $(t)_+$ as $\mu \to 0$, which illustrates Proposition 1.1(b).

*Figure 1: Graphs of ϕ**p**(µ, t) with p = 2 and µ = 0.1, 0.5, 1, 2.*

*Figure 2: Graphs of ϕ*_{p}*(µ, t) with p = 10 and µ = 0.1, 0.5, 1, 2.*

Figure 3 shows that, for fixed $\mu > 0$, $\varphi_p(\mu, t)$ approaches $(t)_+$ as $p \to \infty$, which illustrates Proposition 1.1(d).

*Figure 3: Graphs of ϕ**p**(µ, t) with p = 2, 3, 10, 20 and µ = 0.2.*

Next, we form another reformulation of problem (1.1). To this end, we define

$$
F(z) := \begin{bmatrix} f_I(x) - s \\ f_E(x) \\ \Phi_p(\mu, s) \end{bmatrix}
\quad \text{with} \quad
\Phi_p(\mu, s) := \begin{bmatrix} \varphi_p(\mu, s_1) \\ \vdots \\ \varphi_p(\mu, s_m) \end{bmatrix}
\quad \text{and} \quad z = (\mu, x, s), \tag{1.5}
$$

where $\Phi_p$ is a mapping from $\mathbb{R}^{1+m}$ to $\mathbb{R}^m$. Then, in light of Proposition 1.1(b), we see that

$$
F(z) = 0 \ \text{and} \ \mu = 0 \quad \Longleftrightarrow \quad s = f_I(x), \; (s)_+ = 0, \; f_E(x) = 0.
$$

This, together with Proposition 1.1(a), indicates that one can solve system (1.1) by applying Newton-type methods to $F(z) = 0$ while letting $\mu \downarrow 0$. Furthermore, by introducing an extra parameter $p$, we define a function $H : \mathbb{R}^{1+n+m} \to \mathbb{R}^{1+n+m}$ by

$$
H(z) := \begin{bmatrix} \mu \\ f_I(x) - s + \mu x_I \\ f_E(x) + \mu x_E \\ \Phi_p(\mu, s) + \mu s \end{bmatrix} \tag{1.6}
$$

where $x_I = (x_1, x_2, \cdots, x_m)$, $x_E = (x_{m+1}, x_{m+2}, \cdots, x_n)$, $s \in \mathbb{R}^m$, $x := (x_I, x_E) \in \mathbb{R}^n$, and the functions $\varphi_p$ and $\Phi_p$ are defined as in (1.4) and (1.5), respectively. Thereby, it is obvious that if $H(z) = 0$, then $\mu = 0$ and $x$ solves the system (1.1). It is not difficult to see that, for any $z \in \mathbb{R}_{++} \times \mathbb{R}^n \times \mathbb{R}^m$, the function $H$ is continuously differentiable. Let $H'$ denote the Jacobian of $H$. Then, for any $z \in \mathbb{R}_{++} \times \mathbb{R}^n \times \mathbb{R}^m$, we have

$$
H'(z) =
\begin{bmatrix}
1 & 0_{1\times n} & 0_{1\times m} \\[4pt]
\begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix}_{m\times 1} & A &
\begin{bmatrix} -1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & -1 \end{bmatrix}_{m\times m} \\[4pt]
\begin{bmatrix} x_{m+1} \\ \vdots \\ x_n \end{bmatrix}_{(n-m)\times 1} & B & 0_{(n-m)\times m} \\[4pt]
\begin{bmatrix} s_1 + \frac{\partial \varphi_p(\mu, s_1)}{\partial \mu} \\ \vdots \\ s_m + \frac{\partial \varphi_p(\mu, s_m)}{\partial \mu} \end{bmatrix}_{m\times 1}
& 0_{m\times n} &
\begin{bmatrix} \frac{\partial \varphi_p(\mu, s_1)}{\partial s_1} + \mu & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{\partial \varphi_p(\mu, s_m)}{\partial s_m} + \mu \end{bmatrix}_{m\times m}
\end{bmatrix}
$$

where

$$
A = f_I'(x) + \mu \left[\, I_m \;\; 0_{m\times(n-m)} \,\right] \in \mathbb{R}^{m\times n}
\qquad \text{and} \qquad
B = f_E'(x) + \mu \left[\, 0_{(n-m)\times m} \;\; I_{n-m} \,\right] \in \mathbb{R}^{(n-m)\times n}.
$$

With the above, we can simplify the matrix $H'(z)$ as

$$
H'(z) = \begin{bmatrix}
1 & 0_n & 0_m \\
x_I & f_I'(x) + \mu U & -I_m \\
x_E & f_E'(x) + \mu V & 0_{(n-m)\times m} \\
s + \Phi'_\mu(\mu, s) & 0_{m\times n} & \Phi'_s(\mu, s) + \mu I_m
\end{bmatrix} \tag{1.7}
$$

where

$$
U := \left[\, I_m \;\; 0_{m\times(n-m)} \,\right], \qquad
V := \left[\, 0_{(n-m)\times m} \;\; I_{n-m} \,\right],
$$

$$
s + \Phi'_\mu(\mu, s) = \begin{bmatrix} s_1 + \frac{\partial \varphi_p(\mu, s_1)}{\partial \mu} \\ \vdots \\ s_m + \frac{\partial \varphi_p(\mu, s_m)}{\partial \mu} \end{bmatrix}_{m\times 1},
\qquad
\Phi'_s(\mu, s) + \mu I_m = \begin{bmatrix} \frac{\partial \varphi_p(\mu, s_1)}{\partial s_1} + \mu & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{\partial \varphi_p(\mu, s_m)}{\partial s_m} + \mu \end{bmatrix}_{m\times m}.
$$

Here, we use $0_l$ to denote the $l$-dimensional zero vector and $0_{l\times q}$ to denote the $l \times q$ zero matrix for any positive integers $l$ and $q$. Thus, we may apply Newton-type methods to the system of smooth equations $H(z) = 0$ at each iteration, keeping $\mu > 0$ while driving $H(z) \to 0$, so that a solution of (1.1) can be found. This is the main idea of the smoothing approach for solving system (1.1).
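To make the construction concrete, here is a minimal Python sketch that assembles $H(z)$ of (1.6) for given $f_I$, $f_E$ and verifies that $H(z)$ vanishes at a feasible point with $\mu = 0$. The toy functions `f_I`, `f_E` are our own illustrative choices, not test problems from the paper:

```python
import numpy as np

def phi_p(mu, t, p=2):
    # smoothing function (1.4); returns (t)_+ when mu = 0
    if t >= mu / (p - 1):
        return float(t)
    if t <= -mu:
        return 0.0
    return (mu / (p - 1)) * ((p - 1) * (t + mu) / (p * mu)) ** p

def H(z, f_I, f_E, m, n, p=2):
    """Assemble H(z) of (1.6) for z = (mu, x, s)."""
    mu, x, s = z[0], z[1:1 + n], z[1 + n:]
    Phi = np.array([phi_p(mu, si, p) for si in s])
    return np.concatenate((
        [mu],
        f_I(x) - s + mu * x[:m],   # inequality block, uses x_I
        f_E(x) + mu * x[m:],       # equality block, uses x_E
        Phi + mu * s,
    ))

# toy data (ours): one inequality x_1 - 1 <= 0 and one equality x_2 = 0
f_I = lambda x: np.array([x[0] - 1.0])
f_E = lambda x: np.array([x[1]])

# at mu = 0, x = (0, 0), s = f_I(x) = (-1) the vector H(z) vanishes,
# matching "H(z) = 0 implies mu = 0 and x solves (1.1)"
z0 = np.array([0.0, 0.0, 0.0, -1.0])
assert np.allclose(H(z0, f_I, f_E, m=1, n=2), 0.0)
```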

Alternatively, one may form another smoothing reformulation of system (1.1) without introducing the extra variable $s$. More specifically, we can define $\widehat{H} : \mathbb{R}^{1+n} \to \mathbb{R}^{1+n}$ as

$$
\widehat{H}(\mu, x) := \begin{bmatrix} \mu \\ f_E(x) + \mu x_E \\ \Phi_p(\mu, f_I(x)) + \mu x_I \end{bmatrix}. \tag{1.8}
$$

The Jacobian of $\widehat{H}(\mu, x)$ is similar to $H'(z)$ but rather tedious, so we omit its presentation here. The reformulation $\widehat{H}(\mu, x) = 0$ has lower dimension than $H(z) = 0$, whereas the expression of $\widehat{H}'(\mu, x)$ is more tedious than that of $H'(z)$. Both smoothing approaches lead to a solution of system (1.1). The numerical results based on $H(z) = 0$ and $\widehat{H}(\mu, x) = 0$ are compared in this paper. Moreover, we also investigate how the parameter $p$ affects the numerical performance when different $\varphi_p$ are employed. The new family of smoothing functions, together with these two numerical aspects, constitutes the main motivation and contribution of this paper.

**2 A smoothing-type algorithm**

In this section, we consider a nonmonotone smoothing-type algorithm whose framework is similar to those discussed in [7, 13]. In particular, we correct a flaw in Step 5 of [13] and show that only this modification makes the algorithm truly well-defined. Moreover, since $\widehat{H}(\mu, x)$ is a reformulation of $H(z)$ with lower dimensionality, we will use either $\psi(\cdot) := \|H(z)\|^2$ or $\psi(\cdot) := \|\widehat{H}(\mu, x)\|^2$. Below are the details of the algorithm.

**Algorithm 2.1. (A Nonmonotone Smoothing-Type Algorithm)**

**Step 0** Choose $\delta \in (0, 1)$, $\sigma \in (0, 1/2)$, $\beta > 0$. Take $\tau \in (0, 1)$ such that $\tau\beta < 1$. Let $\mu_0 = \beta$ and let $(x^0, s^0) \in \mathbb{R}^{n+m}$ be an arbitrary vector. Set $z^0 := (\mu_0, x^0, s^0)$. Take $e^0 := (1, 0, \cdots, 0) \in \mathbb{R}^{1+n+m}$, $R_0 := \|H(z^0)\|^2 = \psi(z^0)$ and $Q_0 = 1$. Choose $\eta_{\min}$ and $\eta_{\max}$ such that $0 \le \eta_{\min} \le \eta_{\max} < 1$. Set $\theta(z^0) := \tau \min\{1, \psi(z^0)\}$ and $k := 0$.

**Step 1** If $\|H(z^k)\| = 0$, stop.

**Step 2** Compute $\triangle z^k := (\triangle\mu_k, \triangle x^k, \triangle s^k) \in \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^m$ from

$$
H'(z^k)\,\triangle z^k = -H(z^k) + \beta\,\theta(z^k)\,e^0. \tag{2.1}
$$

**Step 3** Let $\alpha_k$ be the maximum of the values $1, \delta, \delta^2, \cdots$ such that

$$
\psi(z^k + \alpha_k \triangle z^k) \le \left[1 - 2\sigma(1 - \tau\beta)\alpha_k\right] R_k. \tag{2.2}
$$

**Step 4** Set $z^{k+1} := z^k + \alpha_k \triangle z^k$. If $\|H(z^{k+1})\| = 0$, stop.

**Step 5** Choose $\eta_k \in [\eta_{\min}, \eta_{\max}]$. Set

$$
\begin{aligned}
Q_{k+1} &:= \eta_k Q_k + 1, \\
\theta(z^{k+1}) &:= \min\{\tau, \; \tau\psi(z^{k+1}), \; \theta(z^k)\}, \\
R_{k+1} &:= \frac{\eta_k Q_k R_k + \psi(z^{k+1})}{Q_{k+1}},
\end{aligned} \tag{2.3}
$$

and set $k := k + 1$. Go to Step 2.
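The steps above can be condensed into the following Python sketch. It is a simplified, hedged rendition of Algorithm 2.1 (the names `smoothing_algorithm` and `num_jac` are ours): a forward-difference Jacobian stands in for $H'(z^k)$, $\eta_k$ is held constant, and the stopping rule $\|H(z^k)\| = 0$ is relaxed to a small tolerance. As a smoke test it is run on the pure-equality system of Example 4.7 in Section 4, for which $H(\mu, x) = (\mu, f(x) + \mu x)$:

```python
import numpy as np

def num_jac(H, z, h=1e-7):
    # forward-difference Jacobian standing in for H'(z)
    Hz = H(z)
    J = np.empty((Hz.size, z.size))
    for j in range(z.size):
        zj = z.copy()
        zj[j] += h
        J[:, j] = (H(zj) - Hz) / h
    return J

def smoothing_algorithm(H, z0, delta=0.3, sigma=1e-3, beta=1.0,
                        tau=0.5, eta=0.2, tol=1e-8, max_iter=500):
    """Simplified rendition of Algorithm 2.1 with constant eta_k."""
    psi = lambda z: float(np.dot(H(z), H(z)))
    z = np.asarray(z0, dtype=float)
    e0 = np.zeros(z.size); e0[0] = 1.0
    R, Q = psi(z), 1.0                       # Step 0
    theta = tau * min(1.0, psi(z))
    for _ in range(max_iter):
        Hz = H(z)
        if np.linalg.norm(Hz) < tol:         # relaxed Step 1 stopping rule
            break
        dz = np.linalg.solve(num_jac(H, z), -Hz + beta * theta * e0)  # (2.1)
        alpha = 1.0                          # Step 3: line search (2.2)
        while psi(z + alpha * dz) > (1.0 - 2.0 * sigma * (1.0 - tau * beta) * alpha) * R:
            alpha *= delta
            if alpha < 1e-14:
                break
        z = z + alpha * dz                   # Step 4
        Qn = eta * Q + 1.0                   # Step 5: updates (2.3)
        theta = min(tau, tau * psi(z), theta)
        R = (eta * Q * R + psi(z)) / Qn
        Q = Qn
    return z

# smoke test on the equalities of Example 4.7
def f(x):
    return np.array([x[0] - 0.7 * np.sin(x[0]) - 0.2 * np.cos(x[1]),
                     x[1] - 0.7 * np.cos(x[0]) + 0.2 * np.sin(x[1])])

Hfun = lambda z: np.concatenate(([z[0]], f(z[1:]) + z[0] * z[1:]))
z_star = smoothing_algorithm(Hfun, [1.0, 0.0, 0.0])
assert np.max(np.abs(f(z_star[1:]))) < 1e-6 and z_star[0] < 1e-6
```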

In Algorithm 2.1, a nonmonotone line search technique is adopted. It is easy to see that $R_{k+1}$ is a convex combination of $R_k$ and $\psi(z^{k+1})$. Since $R_0 = \psi(z^0)$, it follows that $R_k$ is a convex combination of the function values $\psi(z^0), \psi(z^1), \cdots, \psi(z^k)$. The choice of $\eta_k$ controls the degree of nonmonotonicity: if $\eta_k = 0$ for every $k$, then the line search reduces to the usual monotone Armijo line search. The scheme of Algorithm 2.1 is not exactly the same as the one in [13]. In particular, we set $\theta(z^{k+1}) := \min\{\tau, \tau\psi(z^{k+1}), \theta(z^k)\}$, which differs from $\theta(z^{k+1}) := \min\{\tau, \tau\psi(z^k), \theta(z^k)\}$ given in [13]. Only this modification makes the algorithm truly well-defined, as shown in Theorem 2.3. For convenience, we denote

$$
f'(x) := \begin{bmatrix} f_I'(x) \\ f_E'(x) \end{bmatrix}
$$

and make the following assumption.

**Assumption 2.1. f**^{′}*(x) + µI**n* *is invertible for any x∈ IR*^{n}*and µ∈ IR*++*.*

Some basic properties of Algorithm 2.1 are stated in the following lemma. Since the proof arguments are almost the same as those in [13], they are thus omitted.

**Lemma 2.1.** *Let the sequences $\{R_k\}$ and $\{z^k\}$ be generated by Algorithm 2.1. Then, the following hold.*

*(a) The sequence $\{R_k\}$ is monotonically decreasing.*

*(b) $\psi(z^k) \le R_k$ for all $k \in J$.*

*(c) The sequence $\{\theta(z^k)\}$ is monotonically decreasing.*

*(d) $\beta\theta(z^k) \le \mu_k$ for all $k \in J$.*

*(e) $\mu_k > 0$ for all $k \in J$, and the sequence $\{\mu_k\}$ is monotonically decreasing.*

**Lemma 2.2.** *Suppose $A \in \mathbb{R}^{n\times n}$ is partitioned as $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$, where $A_{11}$ and $A_{22}$ are square matrices. If $A_{12}$ or $A_{21}$ is a zero matrix, then $\det(A) = \det(A_{11}) \cdot \det(A_{22})$.*

*Proof.* This is a well-known result in matrix analysis, which is a special case of Fischer's inequality [2, 6]. Please refer to [11, Theorem 7.3] for a proof.
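A quick numerical illustration of Lemma 2.2 with randomly generated blocks (an informal check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A11 = rng.standard_normal((2, 2))   # square diagonal blocks
A22 = rng.standard_normal((3, 3))
A12 = rng.standard_normal((2, 3))   # A21 is taken to be zero

# block upper-triangular matrix: det(A) = det(A11) * det(A22)
A = np.block([[A11, A12],
              [np.zeros((3, 2)), A22]])
assert np.isclose(np.linalg.det(A),
                  np.linalg.det(A11) * np.linalg.det(A22))
```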

**Theorem 2.3.** *Suppose that $f$ is continuously differentiable and Assumption 2.1 is satisfied. Then Algorithm 2.1 is well defined.*

*Proof.* Applying Lemmas 2.1-2.2 and mimicking the arguments in [13], it is easy to obtain the desired result. However, we point out again that $\theta(z^{k+1})$ in Step 5 differs from the one in [13], and only this modification makes the algorithm truly well-defined.

**3 Convergence analysis**

In this section, we analyze the convergence of the algorithm proposed in the previous section. To this end, the following assumption, introduced in [7], is needed.

**Assumption 3.1.** *For an arbitrary sequence $\{(\mu_k, x^k)\}$ with $\lim_{k\to\infty} \|x^k\| = +\infty$ and $\{\mu_k\} \subset \mathbb{R}_+$ bounded, either*

*(i) there is at least one index $i_0$ such that $\limsup_{k\to\infty} \{f_{i_0}(x^k) + \mu_k x^k_{i_0}\} = +\infty$; or*

*(ii) there is at least one index $i_0$ such that $\limsup_{k\to\infty} \{\mu_k (f_{i_0}(x^k) + \mu_k x^k_{i_0})\} = -\infty$.*

It can be seen that many functions satisfy Assumption 3.1; see [7]. The global convergence of Algorithm 2.1 is stated as follows. In fact, with Proposition 1.1, the main idea of the proof is almost the same as that of [13, Theorem 4.1]; only a few technical parts differ. Thus, we omit the details.

**Theorem 3.1.** *Suppose that $f$ is continuously differentiable and Assumptions 2.1 and 3.1 are satisfied. Then, the sequence $\{z^k\}$ generated by Algorithm 2.1 is bounded. Moreover, any accumulation point of $\{x^k\}$ is a solution to (1.1).*

Next, we analyze the convergence rate of Algorithm 2.1. Before presenting the result, we introduce some concepts that will be used in the subsequent analysis, as well as a technical lemma.

A locally Lipschitz function $F : \mathbb{R}^n \to \mathbb{R}^m$, which has the generalized Jacobian $\partial F(x)$, is said to be semismooth (respectively, strongly semismooth) at $x \in \mathbb{R}^n$ if $F$ is directionally differentiable at $x$ and

$$
F(x + h) - F(x) - Vh = o(\|h\|) \quad (\text{respectively, } = O(\|h\|^2))
$$

holds for any $V \in \partial F(x + h)$. It is well known that convex functions, smooth functions, and piecewise linear functions are examples of semismooth functions. The composition of (strongly) semismooth functions is still a (strongly) semismooth function. It can be verified that the function $\varphi_p$ defined by (1.4) is strongly semismooth on $\mathbb{R}^2$. Thus, $f$ being continuously differentiable implies that the functions $H$ defined by (1.6) and $\widehat{H}$ defined by (1.8) are semismooth (and strongly semismooth if $f'$ is Lipschitz continuous on $\mathbb{R}^n$).

**Lemma 3.2.** *For any $\alpha, \beta \in \mathbb{R}_{++}$, $\alpha = O(\beta)$ means that $\frac{\alpha}{\beta}$ is uniformly bounded, and $\alpha = o(\beta)$ means that $\frac{\alpha}{\beta} \to 0$ as $\beta \to 0$. Then, we have*

*(a) $O(\beta) \pm O(\beta) = O(\beta)$;*

*(b) $o(\beta) \pm o(\beta) = o(\beta)$;*

*(c) if $c \ne 0$, then $O(c\beta) = O(\beta)$ and $o(c\beta) = o(\beta)$;*

*(d) $O(o(\beta)) = o(\beta)$ and $o(O(\beta)) = o(\beta)$;*

*(e) $O(\beta_1)O(\beta_2) = O(\beta_1\beta_2)$, $O(\beta_1)o(\beta_2) = o(\beta_1\beta_2)$, $o(\beta_1)O(\beta_2) = o(\beta_1\beta_2)$;*

*(f) if $\alpha = O(\beta_1)$ and $\beta_1 = o(\beta_2)$, then $\alpha = o(\beta_2)$.*

*Proof.* For parts (a)-(e), please refer to [1] for a proof. Part (f) can be verified straightforwardly.

With Proposition 1.1 and Lemma 3.2, mimicking the arguments in [13, Theorem 5.1] gives the following theorem.

**Theorem 3.3.** *Suppose that $f$ is continuously differentiable and Assumptions 2.1 and 3.1 are satisfied. Let $z^* = (\mu_*, x^*, s^*)$ be an accumulation point of $\{z^k\}$ generated by Algorithm 2.1. If all $V \in \partial H(z^*)$ are nonsingular, then the following hold.*

*(a) $\alpha_k \equiv 1$ for all $z^k$ sufficiently close to $z^*$;*

*(b) the whole sequence $\{z^k\}$ converges to $z^*$;*

*(c) $\|z^{k+1} - z^*\| = o(\|z^k - z^*\|)$ (or $\|z^{k+1} - z^*\| = O(\|z^k - z^*\|^2)$ if $f'$ is Lipschitz continuous on $\mathbb{R}^n$);*

*(d) $\mu_{k+1} = o(\mu_k)$ (or $\mu_{k+1} = O(\mu_k^2)$ if $f'$ is Lipschitz continuous on $\mathbb{R}^n$).*

**4 Numerical Results**

In this section, we present our test problems and report numerical results. In this paper, the function $f$ is assumed to be a mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$, which means the dimension of $x$ is exactly the same as the total number of inequalities and equalities. In reality, this may not be the case. In other words, there may be a system like

$$
\begin{cases}
f_I(x) \le 0, & I = \{1, 2, \cdots, m\} \\
f_E(x) = 0, & E = \{m+1, m+2, \cdots, l\}
\end{cases} \tag{4.1}
$$

That is, $f$ could be a mapping from $\mathbb{R}^n$ to $\mathbb{R}^l$. When $l \ne n$, the scheme of Algorithm 2.1 cannot be applied to the system (4.1) because the dimension of $x$ is not equal to the total number of inequalities and equalities. To make system (4.1) solvable by the proposed algorithm, as remarked in [13, Sec. 6], some additional inequality or variable needs to be added. For example,

(i) if $l < n$, we may add a trivial inequality like $\sum_{i=1}^{n} x_i^2 \le M$, where $M$ is sufficiently large, to system (4.1) so that Algorithm 2.1 can be applied.

(ii) if $l > n$ and $m \ge 1$, we may add a variable $x_{n+1}$ to the inequalities so that

$$
f_i(x) \le 0 \;\rightarrow\; f_i(x) + x_{n+1}^2 \le 0.
$$

(iii) if $l > n$ and $m = 0$, we may add a trivial inequality like $\sum_{i=1}^{n+2} x_i^2 \le M$, where $M$ is sufficiently large, to system (4.1) so that Algorithm 2.1 can be applied.
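Rule (ii) is mechanical to apply in code. The sketch below (the function name and calling convention are our own) augments inequalities with squared slack variables in the spirit of rule (ii), as done per inequality to obtain Example 4.2' from Example 4.2:

```python
import numpy as np

def add_slack_squares(f_ineq, n, k):
    """Augment k inequalities f_i(x) <= 0 with squared slack variables,
    f_i(x) + x_{n+i}^2 <= 0, so extra variables can balance dimensions.
    f_ineq maps R^n to R^k; the returned map takes y = (x, slacks)."""
    def g(y):
        x, w = y[:n], y[n:n + k]
        return f_ineq(x) + w ** 2   # each slack enters one inequality
    return g

# illustrative use: one inequality x_1 - 1 <= 0 gains one slack variable
f_ineq = lambda x: np.array([x[0] - 1.0])
g = add_slack_squares(f_ineq, n=1, k=1)
assert np.allclose(g(np.array([2.0, 3.0])), [10.0])   # (2 - 1) + 3^2
```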

In the real implementation, the $H(z)$ given in (1.6) is replaced by

$$
H(z) := \begin{bmatrix} \mu \\ f_I(x) - s + c\mu x_I \\ f_E(x) + c\mu x_E \\ \Phi_p(\mu, s) + c\mu s \end{bmatrix} \tag{4.2}
$$

where $c$ is a given constant. Likewise, the $\widehat{H}(\mu, x)$ given in (1.8) is replaced by

$$
\widehat{H}(\mu, x) := \begin{bmatrix} \mu \\ f_E(x) + c\mu x_E \\ \Phi_p(\mu, f_I(x)) + c\mu x_I \end{bmatrix}. \tag{4.3}
$$

Adding such a constant $c$ is helpful when coding the algorithm because $\mu$ approaches zero eventually; the theoretical results are not affected in any case. In practice, in order to obtain an interior solution $x^*$ for the inequalities (i.e., $f_I(x^*) < 0$), the following system is considered:

$$
\begin{cases} f_I(x) + \varepsilon e \le 0 \\ f_E(x) = 0 \end{cases}
$$

where $\varepsilon$ is a small number and $e$ is the vector of all ones. Now, we list the test problems, which are taken from [7, 13].

**Example 4.1.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where

$$
\begin{aligned}
f_1(x) &= x_1^2 + x_2^2 - 1 + \varepsilon \le 0, \\
f_2(x) &= -x_1^2 - x_2^2 + (0.999)^2 + \varepsilon \le 0.
\end{aligned}
$$
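The feasible set of Example 4.1 is a thin annulus in the plane, which makes the problem a useful stress test; a quick membership check (our own, purely illustrative):

```python
import numpy as np

eps = 1e-5   # the epsilon used in the experiments

def f(x):
    # Example 4.1: both inequalities must be <= 0
    r2 = x[0] ** 2 + x[1] ** 2
    return np.array([r2 - 1 + eps, -r2 + 0.999 ** 2 + eps])

x_feas = np.array([0.9995, 0.0])    # radius inside the annulus
assert np.all(f(x_feas) <= 0)
x_infeas = np.array([1.1, 0.0])     # outside the unit circle
assert f(x_infeas)[0] > 0
```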

**Example 4.2.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ \vdots \\ f_6(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where

$$
\begin{aligned}
f_1(x) &= \sin(x_1) + \varepsilon \le 0, \\
f_2(x) &= -\cos(x_2) + \varepsilon \le 0, \\
f_3(x) &= x_1 - 3\pi + \varepsilon \le 0, \\
f_4(x) &= x_2 - \frac{\pi}{2} - 2 + \varepsilon \le 0, \\
f_5(x) &= -x_1 - \pi + \varepsilon \le 0, \\
f_6(x) &= -x_2 - \frac{\pi}{2} + \varepsilon \le 0.
\end{aligned}
$$

**Example 4.3.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where

$$
\begin{aligned}
f_1(x) &= \sin(x_1) + \varepsilon \le 0, \\
f_2(x) &= -\cos(x_2) + \varepsilon \le 0.
\end{aligned}
$$

**Example 4.4.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ \vdots \\ f_5(x) \end{bmatrix}$ with $x \in \mathbb{R}^5$, where

$$
\begin{aligned}
f_1(x) &= x_1 + x_3 - 1.6 + \varepsilon \le 0, \\
f_2(x) &= 1.333x_2 + x_4 - 3 + \varepsilon \le 0, \\
f_3(x) &= -x_3 - x_4 + x_5 + \varepsilon \le 0, \\
f_4(x) &= x_1^2 + x_3^2 - 1.25 = 0, \\
f_5(x) &= x_2^{1.5} + 1.5x_4 - 3 = 0.
\end{aligned}
$$

**Example 4.5.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \end{bmatrix}$ with $x \in \mathbb{R}^3$, where

$$
\begin{aligned}
f_1(x) &= x_1 + x_2 e^{0.8x_3} + e^{1.6} + \varepsilon \le 0, \\
f_2(x) &= x_1^2 + x_2^2 + x_3^2 - 5.2675 + \varepsilon \le 0, \\
f_3(x) &= x_1 + x_2 + x_3 - 0.2605 = 0.
\end{aligned}
$$

**Example 4.6.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where

$$
\begin{aligned}
f_1(x) &= 0.8 - e^{x_1 + x_2} + \varepsilon \le 0, \\
f_2(x) &= 1.21e^{x_1} + e^{x_2} - 2.2 = 0, \\
f_3(x) &= x_1^2 + x_2^2 + x_2 - 0.1135 = 0.
\end{aligned}
$$

**Example 4.7.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where

$$
\begin{aligned}
f_1(x) &= x_1 - 0.7\sin(x_1) - 0.2\cos(x_2) = 0, \\
f_2(x) &= x_2 - 0.7\cos(x_1) + 0.2\sin(x_2) = 0.
\end{aligned}
$$

Moreover, in light of the aforementioned discussions, there are corresponding modified problems, Example 4.2', Example 4.6', and Example 4.7', which are stated below. The other examples are kept unchanged; in other words, Example 4.1 and Example 4.1' are the same, and so are Example 4.3 and Example 4.3', Example 4.4 and Example 4.4', and Example 4.5 and Example 4.5'.

**Example 4.2'.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ \vdots \\ f_6(x) \end{bmatrix}$ with $x \in \mathbb{R}^6$, where

$$
\begin{aligned}
f_1(x) &= \sin(x_1) + \varepsilon \le 0, \\
f_2(x) &= -\cos(x_2) + \varepsilon \le 0, \\
f_3(x) &= x_1 - 3\pi + x_3^2 + \varepsilon \le 0, \\
f_4(x) &= x_2 - \frac{\pi}{2} - 2 + x_4^2 + \varepsilon \le 0, \\
f_5(x) &= -x_1 - \pi + x_5^2 + \varepsilon \le 0, \\
f_6(x) &= -x_2 - \frac{\pi}{2} + x_6^2 + \varepsilon \le 0.
\end{aligned}
$$

**Example 4.6'.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \end{bmatrix}$ with $x \in \mathbb{R}^3$, where

$$
\begin{aligned}
f_1(x) &= 0.8 - e^{x_1 + x_2} + x_3^2 + \varepsilon \le 0, \\
f_2(x) &= 1.21e^{x_1} + e^{x_2} - 2.2 = 0, \\
f_3(x) &= x_1^2 + x_2^2 + x_2 - 0.1135 = 0.
\end{aligned}
$$

**Example 4.7'.** Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \end{bmatrix}$ with $x \in \mathbb{R}^3$, where

$$
\begin{aligned}
f_1(x) &= x_1^2 + x_2^2 + x_3^2 - 10000 + \varepsilon \le 0, \\
f_2(x) &= x_1 - 0.7\sin(x_1) - 0.2\cos(x_2) = 0, \\
f_3(x) &= x_2 - 0.7\cos(x_1) + 0.2\sin(x_2) = 0.
\end{aligned}
$$

The numerical implementations are coded in Matlab. In the numerical reports, $x^0$ is the starting point, NI is the total number of iterations, NF denotes the number of function evaluations for $H(z^k)$ or $\widehat{H}(\mu_k, x^k)$, and SOL means the solution obtained by the algorithm. The parameters used in the algorithm are set as

$$
\varepsilon = 0.00001, \quad \delta = 0.3, \quad \sigma = 0.001, \quad \beta = 1.0, \quad \mu_0 = 1.0, \quad Q_0 = 1.0.
$$

In Table 1 and Table 2, we adopt the same $x^0$, $c$, $\tau$, $\eta$ as used in [13] for $p = 2$. Basically, in Table 1 and Table 2, the bottom-half data for the modified problems are the same as those in [13], respectively. Below are our numerical observations and conclusions.

• From Table 1 and Table 2, we see that, when employing the formulation $H(z) = 0$, solving the modified problems is more successful than solving the original problems.

• Table 3 indicates that the numerical results are the same for the original and the modified problems when $\widehat{H}(\mu, x) = 0$ is employed. Hence, in Tables 4-11, we focus on the modified problems when the formulation $H(z) = 0$ is employed, whereas we only test the original problems whenever the implementations are based on $\widehat{H}(\mu, x) = 0$.

• From Table 5 ($p = 2$), we see that the algorithm based on $\widehat{H}(\mu, x) = 0$ can solve more problems than the one in [13] does. In view of the lower dimensionality of $\widehat{H}(\mu, x) = 0$ and this performance, we can confirm the merit of this new reformulation.

• In Table 4 and Table 5, the initial point and other parameters are the same as those in [13]. In Tables 6-7, we fix the initial point $x^0$ for all test problems. In Table 8 and Table 9, $x^0$, $c$, $\tau$ and $\eta$ are all fixed for all test problems. In Table 10 and Table 11, $x^0$ is fixed for all test problems and some of $c$, $\tau$ and $\eta$ are fixed. In general, we observe that the numerical performance based on the formulation $\widehat{H}(\mu, x) = 0$ is better than that based on $H(z) = 0$.

• Moreover, changing the parameter $p$ seems to have no influence on the numerical performance, no matter whether $\widehat{H}(\mu, x) = 0$ or $H(z) = 0$ is adopted. This indicates that the smoothing approach may not be affected when $p$ is perturbed. This phenomenon differs from the one observed for other approaches in [3, 4] and is a new discovery in the literature. We conjecture that the main reason is that $\mu$ dominates the algorithm in the smoothing approach even as $p$ varies. This conjecture still needs further verification and investigation.

In summary, the main contributions of this paper are to propose a new family of smoothing functions and to correct a flaw in the algorithm studied in [13], a correction needed to guarantee its convergence. We believe that the proposed smoothing functions can also be employed in other contexts where the projection function is involved, and the related numerical performance can be investigated accordingly. We leave these as future research topics.

**Acknowledgements.** We are grateful to the anonymous referees for their valuable suggestions, which helped us improve the presentation of the paper.

**References**

*[1] R.G. Bartle, The Elements of Real Analysis, Second Edition, John Wiley, New Jersey,*
1976.

*[2] R. Bhatia, Matrix Analysis, Springer-Verlag, New York, 1997.*

[3] J-S. Chen and S-H. Pan, A family of NCP-functions and a descent method for the
*nonlinear complementarity problem, Comput. Optim. Appl. 40 (2008) 389-404.*

[4] J-S. Chen, S-H. Pan and C-Y. Yang, Numerical comparison of two eﬀective methods
*for mixed complementarity problems, J. Comput. Appl. Math. 234 (2010) 667-683.*

*[5] J.W. Daniel, Newton’s method for nonlinear inequalities, Numer. Math. 21 (1973)*
381-387.

*[6] R. A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, Cam-*
bridge, 1986.

[7] Z-H. Huang, Y. Zhang and W. Wu, A smoothing-type algorithm for solving a system
*of inequalities, J. Comput. Appl. Math. 220 (2008) 355-363.*

[8] S.L. Hu, Z-H. Huang and P. Wang, A non-monotone smoothing Newton algorithm
*for solving nonlinear complementarity problems, Optim. Method. Softw. 24 (2009) 447-*
460.

[9] D.Q. Mayne, E. Polak and A.J. Heunis, Solving nonlinear inequalities in a finite
*number of iterations, J. Optimiz. Theory. App. 33 (1981) 207-221.*

[10] M. Sahba, On the solution of nonlinear inequalities in a finite number of iterations,
*Numer. Math. 46 (1985) 229-236.*

*[11] J.M. Schott, Matrix Analysis for Statistics, 2nd edition, John Wiley, New Jersey,*
2005.

[12] H-X. Ying, Z-H. Huang and L. Qi, The convergence of a Levenberg-Marquardt
*method for the l*_{2}*-norm solution of nonlinear inequalities, Numer. Func. Anal. Opt. 29*
(2008) 687-716.

[13] Y. Zhang and Z-H. Huang, A nonmonotone smoothing-type algorithm for solving a
*system of equalities and inequalities, J. Comput. Appl. Math. 233 (2010) 2312-2321.*

[14] J. Zhu and B. Hao, A new non-interior continuation method for solving a system of
*equalities and inequalities, J. Appl. Math., (2014) Article Number 592540.*

*Manuscript received 3 December 2014*
*revised 12 December 2014*
*accepted for publication 15 December 2014*

Jein-Shan Chen

Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

E-mail address: jschen@math.ntnu.edu.tw

Chun-Hsu Ko

Department of Electrical Engineering, I-Shou University

Kaohsiung 840, Taiwan

E-mail address: chko@isu.edu.tw

Yan-Di Liu

Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

E-mail address: 60140026S@ntnu.edu.tw

Sheng-Pen Wang

Department of Industrial and Business Management, Chang Gung University

Taoyuan 333, Taiwan

E-mail address: wangsp@mail.cgu.edu.tw

*Table 1: Numerical performance when p = 2, stop criterion: 0.001.*

*Note: Based on H(z) = 0 given as in (4.2).*

*Table 2: Numerical performance when p = 2, stop criterion: 1e− 006.*

*Note: Based on H(z) = 0 given as in (4.2).*

*Table 3: Numerical performance when p = 2, stop criterion: 0.001.*

Note: Based on b*H(µ, x) = 0 given as in (4.3).*

*Table 4: Numerical performance with diﬀerent p for modified problems.*

*Note: Based on H(z) = 0 given as in (4.2).*

*Table 5: Numerical performance with diﬀerent p for original problems.*

Note: Based on b*H(µ, x) = 0 given as in (4.3).*

*Table 6: Numerical performance with diﬀerent p for modified problems.*

*Note: Based on H(z) = 0 given as in (4.2), x*^{0} is fixed.

*Table 7: Numerical performance with diﬀerent p for original problems.*

Note: Based on b*H(µ, x) = 0 given as in (4.3), x*^{0}is fixed.

*Table 8: Numerical performance with diﬀerent p for modified problems.*

*Note: Based on H(z) = 0 given as in (4.2), x*^{0}*, c, τ and η are fixed.*

*Table 9: Numerical performance with diﬀerent p for original problems.*

Note: Based on b*H(µ, x) = 0 given as in (4.3), x*^{0} *c, τ and η are fixed.*

*Table 10: Numerical performance when p = 2 for modified problems.*

*Note: Based on H(z) = 0 given as in (4.2), x*^{0} *and τ are fixed.*

*Table 11: Numerical performance when p = 2 for original problems.*

Note: Based on b*H(µ, x) = 0 given as in (4.3), x*^{0} *and τ are fixed.*