to appear in Paciﬁc Journal of Optimization, January, 2016

**New smoothing functions for solving a system of equalities and inequalities**

Jein-Shan Chen ^{1}
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
E-mail: jschen@math.ntnu.edu.tw

Chun-Hsu Ko
Department of Electrical Engineering, I-Shou University, Kaohsiung 840, Taiwan
E-mail: chko@isu.edu.tw

Yan-Di Liu
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
E-mail: 60140026S@ntnu.edu.tw

Sheng-Pen Wang ^{2}
Department of Industrial and Business Management, Chang Gung University, Taoyuan 333, Taiwan
E-mail: wangsp@mail.cgu.edu.tw

September 22, 2014 (revised on November 20, 2014)

**Abstract.** In this paper, we propose a family of new smoothing functions for solving a system of equalities and inequalities, which generalizes the one in [11]. We then investigate an algorithm based on a new reformulation $\widehat{H}$ with lower dimensionality and show, as in [11], that it is globally and locally convergent under suitable assumptions. Numerical evidence shows the better performance of the algorithm in the sense that some examples unsolved in [11] can be solved by our proposed method. Moreover, the parameter involved in the family of new smoothing functions has no influence on the algorithm, which is a new discovery in the literature.

^{1} Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author's work is supported by Ministry of Science and Technology, Taiwan.

^{2} Corresponding author.

**Keywords. Smoothing function, System of equations and inequalities, Convergence**

**1** **Introduction and Motivation**

The target problem of this paper is the following system of equalities and inequalities:

$$
\begin{cases} f_I(x) \le 0 \\ f_E(x) = 0 \end{cases} \qquad (1)
$$

where $I = \{1, 2, \cdots, m\}$ and $E = \{m+1, m+2, \cdots, n\}$. In other words, the function $f_I : \mathbb{R}^n \to \mathbb{R}^m$ is given by

$$
f_I(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ \vdots \\ f_m(x) \end{bmatrix}
$$

where $f_i : \mathbb{R}^n \to \mathbb{R}$ for $i \in \{1, 2, \cdots, m\}$; and the function $f_E : \mathbb{R}^n \to \mathbb{R}^{n-m}$ is given by

$$
f_E(x) = \begin{bmatrix} f_{m+1}(x) \\ f_{m+2}(x) \\ \vdots \\ f_n(x) \end{bmatrix}
$$

where $f_j : \mathbb{R}^n \to \mathbb{R}$ for $j \in \{m+1, m+2, \cdots, n\}$. For simplicity, throughout this paper, we denote $f : \mathbb{R}^n \to \mathbb{R}^n$ as

$$
f(x) := \begin{bmatrix} f_I(x) \\ f_E(x) \end{bmatrix}
= \begin{bmatrix} f_1(x) \\ \vdots \\ f_m(x) \\ f_{m+1}(x) \\ \vdots \\ f_n(x) \end{bmatrix}
$$

and assume that $f$ is continuously differentiable. When $E$ is the empty set, the system (1) reduces to a system of inequalities, whereas it reduces to a system of equations when $I$ is empty.

Problems in the form of (1) arise in many real applications, including data analysis, computer-aided design, image reconstruction, and set separation problems. Many optimization methods have been proposed for solving the system (1), for instance, the non-interior continuation method [12], smoothing-type algorithms [6, 11], Newton algorithms [14], and iteration methods [4, 7, 8, 10]. In this paper, we consider a smoothing-type algorithm similar to the ones studied in [6, 11] for solving the system (1). In particular, we propose a family of smoothing functions, investigate its properties, and report the numerical performance of an algorithm in which this family of new smoothing functions is involved.

As seen in [6, 11], the main idea of the smoothing-type algorithm for solving the system (1) is to reformulate (1) as a system of smooth equations via the projection function. More specifically, for any $x = (x_1, x_2, \cdots, x_n) \in \mathbb{R}^n$, one defines

$$
(x)_+ := \begin{bmatrix} \max\{0, x_1\} \\ \vdots \\ \max\{0, x_n\} \end{bmatrix}.
$$

Then, the system (1) is equivalent to the following system of equations:

$$
\begin{cases} (f_I(x))_+ = 0 \\ f_E(x) = 0. \end{cases} \qquad (2)
$$

Since the function $(f_I(x))_+$ in the reformulation (2) is nonsmooth, the classical Newton methods cannot be directly applied to solve (2). To overcome this, a smoothing algorithm was considered in [6, 11], in which the following smoothing function was employed:

$$
\varphi(\mu, t) = \begin{cases} t & \text{if } t \ge \mu, \\ \dfrac{(t+\mu)^2}{4\mu} & \text{if } -\mu < t < \mu, \\ 0 & \text{if } t \le -\mu, \end{cases} \qquad (3)
$$

where $\mu > 0$.

In this paper, we propose a family of new smoothing functions, which includes the function $\varphi(\mu, t)$ given as in (3) as a special case, for solving the reformulation (2). More specifically, we consider the family of smoothing functions below:

$$
\varphi_p(\mu, t) = \begin{cases} t & \text{if } t \ge \frac{\mu}{p-1}, \\[1ex] \dfrac{\mu}{p-1}\left[\dfrac{(p-1)(t+\mu)}{p\mu}\right]^p & \text{if } -\mu < t < \frac{\mu}{p-1}, \\[1ex] 0 & \text{if } t \le -\mu, \end{cases} \qquad (4)
$$

where $\mu > 0$ and $p \ge 2$. Note that $\varphi_p$ reduces to the smoothing function studied in [11] when $p = 2$. The graphs of $\varphi_p$ with different values of $p$ and various $\mu$ are depicted in Figures 1-3.
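To make the definition (4) concrete, here is a minimal Python sketch of $\varphi_p$ (the function and variable names are our own, not from the paper); the helper `phi_quadratic` encodes the $p = 2$ function (3), so the two should agree everywhere.

```python
def phi_p(mu, t, p):
    """Smoothing function phi_p of (4); assumes mu > 0 and p >= 2."""
    if t >= mu / (p - 1):
        return t
    if t <= -mu:
        return 0.0
    # middle branch: (mu/(p-1)) * [ (p-1)(t+mu) / (p*mu) ]**p
    return (mu / (p - 1)) * ((p - 1) * (t + mu) / (p * mu)) ** p

def phi_quadratic(mu, t):
    """The p = 2 special case, i.e. the function (3) studied in [11]."""
    if t >= mu:
        return t
    if t <= -mu:
        return 0.0
    return (t + mu) ** 2 / (4 * mu)
```

For instance, $\varphi_2(1, 0) = (0+1)^2/4 = 0.25$, while for small $\mu$ the value is already close to $(t)_+$, in line with Proposition 1.1(b) below.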

**Proposition 1.1.** *Let $\varphi_p$ be defined as in (4). For any $(\mu, t) \in \mathbb{R}_{++} \times \mathbb{R}$, we have*

**(a)** *$\varphi_p(\cdot, \cdot)$ is continuously differentiable at any $(\mu, t) \in \mathbb{R}_{++} \times \mathbb{R}$.*

**(b)** *$\varphi_p(0, t) = (t)_+$.*

**(c)** *$\frac{\partial \varphi_p(\mu, t)}{\partial t} \ge 0$ for any $(\mu, t) \in \mathbb{R}_{++} \times \mathbb{R}$.*

**(d)** *$\lim_{p \to \infty} \varphi_p(\mu, t) = (t)_+$.*

**Proof.** (a) First, we compute $\frac{\partial \varphi_p(\mu,t)}{\partial t}$ and $\frac{\partial \varphi_p(\mu,t)}{\partial \mu}$ as below:

$$
\frac{\partial \varphi_p(\mu, t)}{\partial t} = \begin{cases} 1 & \text{if } t \ge \frac{\mu}{p-1}, \\[1ex] \left[\dfrac{(p-1)(t+\mu)}{p\mu}\right]^{p-1} & \text{if } -\mu < t < \frac{\mu}{p-1}, \\[1ex] 0 & \text{if } t \le -\mu, \end{cases}
$$

$$
\frac{\partial \varphi_p(\mu, t)}{\partial \mu} = \begin{cases} 0 & \text{if } t \ge \frac{\mu}{p-1}, \\[1ex] \left[\dfrac{(p-1)(t+\mu)}{p\mu}\right]^{p-1} \dfrac{(t+\mu-pt)}{p\mu} & \text{if } -\mu < t < \frac{\mu}{p-1}, \\[1ex] 0 & \text{if } t \le -\mu. \end{cases}
$$

Then, we see that $\frac{\partial \varphi_p(\mu,t)}{\partial t}$ is continuous because

$$
\lim_{t \to \frac{\mu}{p-1}} \frac{\partial \varphi_p(\mu, t)}{\partial t} = \left[\frac{(p-1)(\frac{\mu}{p-1}+\mu)}{p\mu}\right]^{p-1} = 1,
\qquad
\lim_{t \to -\mu} \frac{\partial \varphi_p(\mu, t)}{\partial t} = \left[\frac{(p-1)(-\mu+\mu)}{p\mu}\right]^{p-1} = 0,
$$

and $\frac{\partial \varphi_p(\mu,t)}{\partial \mu}$ is continuous since

$$
\lim_{t \to \frac{\mu}{p-1}} \frac{\partial \varphi_p(\mu, t)}{\partial \mu} = \left[\frac{(p-1)(\frac{\mu}{p-1}+\mu)}{p\mu}\right]^{p-1} \frac{\bigl(\frac{\mu}{p-1}+\mu-p\frac{\mu}{p-1}\bigr)}{p\mu} = 0,
\qquad
\lim_{t \to -\mu} \frac{\partial \varphi_p(\mu, t)}{\partial \mu} = \left[\frac{(p-1)(-\mu+\mu)}{p\mu}\right]^{p-1} \frac{(-\mu+\mu-p(-\mu))}{p\mu} = 0.
$$

The above verifications imply that $\varphi_p(\cdot, \cdot)$ is continuously differentiable.

(b) From the definition of $\varphi_p(\mu, t)$, it is clear that

$$
\varphi_p(0, t) = \begin{cases} t & \text{if } t \ge 0 \\ 0 & \text{if } t \le 0 \end{cases} = (t)_+,
$$

which is the desired result.

(c) When $-\mu < t < \frac{\mu}{p-1}$, we have $t + \mu > 0$. Hence, from the expression of $\frac{\partial \varphi_p(\mu,t)}{\partial t}$, it is obvious that $\left[\frac{(p-1)(t+\mu)}{p\mu}\right]^{p-1} \ge 0$, which says $\frac{\partial \varphi_p(\mu,t)}{\partial t} \ge 0$.

(d) Part (d) is clear from the definition. $\Box$
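The closed-form derivative obtained in the proof of part (a) can be cross-checked numerically. The sketch below (our own naming) compares it against an independent central finite difference and also spot-checks the sign property of part (c).

```python
def phi_p(mu, t, p):
    """The smoothing function phi_p of (4)."""
    if t >= mu / (p - 1):
        return t
    if t <= -mu:
        return 0.0
    return (mu / (p - 1)) * ((p - 1) * (t + mu) / (p * mu)) ** p

def dphi_dt(mu, t, p):
    """Partial derivative w.r.t. t, as computed in the proof of Proposition 1.1(a)."""
    if t >= mu / (p - 1):
        return 1.0
    if t <= -mu:
        return 0.0
    return ((p - 1) * (t + mu) / (p * mu)) ** (p - 1)

def dphi_dt_fd(mu, t, p, h=1e-6):
    """Independent central finite-difference approximation of the same derivative."""
    return (phi_p(mu, t + h, p) - phi_p(mu, t - h, p)) / (2 * h)
```

For example, with $p = 3$, $\mu = 0.5$, $t = 0.1$ the analytic value is $[2 \cdot 0.6 / 1.5]^2 = 0.64$, which the finite difference reproduces.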

The properties of $\varphi_p$ in Proposition 1.1 can be verified via the graphs. In particular, in Figures 1-2, we see that $\varphi_p(\mu, t)$ goes to $(t)_+$ when $\mu \to 0$, which verifies Proposition 1.1(b).


*Figure 1: Graphs of ϕ*_{p}*(µ, t) with p = 2 and µ = 0.1, 0.5, 1, 2.*

Figure 3 shows that, for fixed $\mu > 0$, $\varphi_p(\mu, t)$ approaches $(t)_+$ as $p \to \infty$. This also verifies Proposition 1.1(d).

Next, we form another reformulation for problem (1). To this end, we define

$$
F(z) := \begin{bmatrix} f_I(x) - s \\ f_E(x) \\ \Phi_p(\mu, s) \end{bmatrix}
\quad \text{with} \quad
\Phi_p(\mu, s) := \begin{bmatrix} \varphi_p(\mu, s_1) \\ \vdots \\ \varphi_p(\mu, s_m) \end{bmatrix}
\quad \text{and} \quad z = (\mu, x, s), \qquad (5)
$$

where $\Phi_p$ is a mapping from $\mathbb{R}^{1+m}$ to $\mathbb{R}^m$. Then, in light of Proposition 1.1(b), we see that

$$
F(z) = 0 \ \text{and}\ \mu = 0 \iff s = f_I(x),\ s_+ = 0,\ f_E(x) = 0.
$$


*Figure 2: Graphs of ϕ*_{p}*(µ, t) with p = 10 and µ = 0.1, 0.5, 1, 2.*


*Figure 3: Graphs of ϕ*_{p}*(µ, t) with p = 2, 3, 10, 20 and µ = 0.2.*

This, together with Proposition 1.1(a), indicates that one can solve system (1) by applying Newton-type methods to $F(z) = 0$ while letting $\mu \downarrow 0$. Furthermore, by introducing an extra parameter $p$, we define a function $H : \mathbb{R}^{1+n+m} \to \mathbb{R}^{1+n+m}$ by

$$
H(z) := \begin{bmatrix} \mu \\ f_I(x) - s + \mu x_I \\ f_E(x) + \mu x_E \\ \Phi_p(\mu, s) + \mu s \end{bmatrix} \qquad (6)
$$

where $x_I = (x_1, x_2, \cdots, x_m)$, $x_E = (x_{m+1}, x_{m+2}, \cdots, x_n)$, $s \in \mathbb{R}^m$, $x := (x_I, x_E) \in \mathbb{R}^n$, and the functions $\varphi_p$ and $\Phi_p$ are defined as in (4) and (5), respectively. Thereby, it is obvious that if $H(z) = 0$, then $\mu = 0$ and $x$ solves the system (1). It is not difficult to see that, for any $z \in \mathbb{R}_{++} \times \mathbb{R}^n \times \mathbb{R}^m$, the function $H$ is continuously differentiable. Let $H'$ denote the Jacobian of the function $H$. Then, for any $z \in \mathbb{R}_{++} \times \mathbb{R}^n \times \mathbb{R}^m$, we have

$$
H'(z) = \begin{bmatrix}
1 & O_{1\times n} & O_{1\times m} \\
x_I & A & -I_m \\
x_E & B & 0_{(n-m)\times m} \\
s + \Phi'_\mu(\mu, s) & 0_{m\times n} & \Phi'_s(\mu, s) + \mu I_m
\end{bmatrix}
$$

where $A = f'_I(x) + \mu\,[\,I_m \ \ 0_{m\times(n-m)}\,]$ is an $m \times n$ matrix and $B = f'_E(x) + \mu\,[\,0_{(n-m)\times m} \ \ I_{n-m}\,]$ is an $(n-m) \times n$ matrix. With the above, we can write the matrix $H'(z)$ compactly as

$$
H'(z) = \begin{bmatrix}
1 & 0_n^\top & 0_m^\top \\
x_I & f'_I(x) + \mu U & -I_m \\
x_E & f'_E(x) + \mu V & 0_{(n-m)\times m} \\
s + \Phi'_\mu(\mu, s) & 0_{m\times n} & \Phi'_s(\mu, s) + \mu I_m
\end{bmatrix} \qquad (7)
$$

where

$$
U := [\,I_m \ \ 0_{m\times(n-m)}\,], \qquad V := [\,0_{(n-m)\times m} \ \ I_{n-m}\,],
$$

$$
s + \Phi'_\mu(\mu, s) = \begin{bmatrix} s_1 + \frac{\partial}{\partial \mu}\varphi_p(\mu, s_1) \\ \vdots \\ s_m + \frac{\partial}{\partial \mu}\varphi_p(\mu, s_m) \end{bmatrix}_{m\times 1},
\qquad
\Phi'_s(\mu, s) + \mu I_m = \begin{bmatrix} \frac{\partial}{\partial s_1}\varphi_p(\mu, s_1) + \mu & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{\partial}{\partial s_m}\varphi_p(\mu, s_m) + \mu \end{bmatrix}_{m\times m}.
$$

Here, we use $0_l$ to denote the $l$-dimensional zero vector and $0_{l\times q}$ to denote the $l \times q$ zero matrix for any positive integers $l$ and $q$. Thus, we may apply some Newton-type method to solve the system of smooth equations $H(z) = 0$ at each iteration, keeping $\mu > 0$ and driving $H(z) \to 0$ so that a solution of (1) can be found. This is the main idea of the smoothing approach for solving system (1).

Alternatively, one may have another smoothing reformulation for system (1) without introducing the extra variable $s$. More specifically, we can define $\widehat{H} : \mathbb{R}^{1+n} \to \mathbb{R}^{1+n}$ as

$$
\widehat{H}(\mu, x) := \begin{bmatrix} \mu \\ f_E(x) + \mu x_E \\ \Phi_p(\mu, f_I(x)) + \mu x_I \end{bmatrix}. \qquad (8)
$$

The Jacobian of $\widehat{H}(\mu, x)$ is similar to $H'(z)$ but indeed a bit tedious, so we omit its presentation here. The reformulation $\widehat{H}(\mu, x) = 0$ has lower dimension than $H(z) = 0$, whereas the expression of $\widehat{H}'(\mu, x)$ is more tedious than that of $H'(z)$. Both smoothing approaches can lead to the solution of system (1). The numerical results based on $H(z) = 0$ and $\widehat{H}(\mu, x) = 0$ are compared in this paper. Moreover, we also investigate how the parameter $p$ affects the numerical performance when different $\varphi_p$ are employed. Proposing the new family of smoothing functions, as well as the above two numerical aspects, constitutes the main motivation and contribution of this paper.
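As a concrete illustration of the two reformulations, the sketch below builds $H(z)$ of (6) and $\widehat{H}(\mu, x)$ of (8) for a small instance of our own choosing (a hypothetical problem with $n = 2$, $m = 1$, not one of the paper's test problems). Both maps vanish at $\mu = 0$ together with a feasible point, while $\widehat{H}$ lives in dimension $1 + n$ instead of $1 + n + m$.

```python
def phi_p(mu, t, p=2):
    """Smoothing function phi_p of (4)."""
    if t >= mu / (p - 1):
        return t
    if t <= -mu:
        return 0.0
    return (mu / (p - 1)) * ((p - 1) * (t + mu) / (p * mu)) ** p

# toy instance: one inequality (I) and one equality (E) in two variables
def fI(x): return [x[0] ** 2 + x[1] ** 2 - 1.0]   # f_1(x) <= 0
def fE(x): return [x[0] - x[1]]                    # f_2(x) = 0

def H(z, p=2):
    """H(z) of (6) with z = (mu, x, s); dimension 1 + n + m = 4 here."""
    mu, x, s = z[0], z[1:3], z[3:]
    return ([mu]
            + [fI(x)[0] - s[0] + mu * x[0]]        # f_I(x) - s + mu*x_I
            + [fE(x)[0] + mu * x[1]]               # f_E(x) + mu*x_E
            + [phi_p(mu, s[0], p) + mu * s[0]])    # Phi_p(mu, s) + mu*s

def H_hat(mu, x, p=2):
    """H-hat(mu, x) of (8); dimension 1 + n = 3, no slack variable s."""
    return ([mu]
            + [fE(x)[0] + mu * x[1]]
            + [phi_p(mu, fI(x)[0], p) + mu * x[0]])
```

At the feasible point $x^* = (0.5, 0.5)$ with $s^* = f_I(x^*) = -0.5$ and $\mu = 0$, both reformulations evaluate to the zero vector, as the equivalence above predicts.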

**2** **A smoothing-type algorithm**

In this section, we consider a non-monotone smoothing-type algorithm, a similar framework of which has been discussed in [6, 11]. In particular, we correct a flaw in Step 5 of [11] and show that this modification is exactly what makes the algorithm well-defined.

Moreover, for $\widehat{H}(\mu, x)$, the reformulation of $H(z)$ with lower dimensionality, we will use the merit function $\psi(\cdot) := \|H(z)\|^2$ or $\psi(\cdot) := \|\widehat{H}(\mu, x)\|^2$, as appropriate. Below are the details of the algorithm.

**Algorithm 2.1. (A Nonmonotone Smoothing-Type Algorithm)**

**Step 0** Choose $\delta \in (0,1)$, $\sigma \in (0, 1/2)$, $\beta > 0$. Take $\tau \in (0,1)$ such that $\tau\beta < 1$. Let $\mu_0 = \beta$ and $(x^0, s^0) \in \mathbb{R}^{n+m}$ be an arbitrary vector. Set $z^0 := (\mu_0, x^0, s^0)$. Take $e^0 := (1, 0, \cdots, 0) \in \mathbb{R}^{1+n+m}$, $R_0 := \|H(z^0)\|^2 = \psi(z^0)$ and $Q_0 = 1$. Choose $\eta_{\min}$ and $\eta_{\max}$ such that $0 \le \eta_{\min} \le \eta_{\max} < 1$. Set $\theta(z^0) := \tau \min\{1, \psi(z^0)\}$ and $k := 0$.

**Step 1** If $\|H(z^k)\| = 0$, stop.

**Step 2** Compute $\triangle z^k := (\triangle\mu_k, \triangle x^k, \triangle s^k) \in \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^m$ from

$$
H'(z^k)\,\triangle z^k = -H(z^k) + \beta\theta(z^k)e^0. \qquad (9)
$$

**Step 3** Let $\alpha_k$ be the maximum of the values $1, \delta, \delta^2, \cdots$ such that

$$
\psi(z^k + \alpha_k \triangle z^k) \le \bigl[1 - 2\sigma(1 - \tau\beta)\alpha_k\bigr]\, R_k. \qquad (10)
$$

**Step 4** Set $z^{k+1} := z^k + \alpha_k \triangle z^k$. If $\|H(z^{k+1})\| = 0$, stop.

**Step 5** Choose $\eta_k \in [\eta_{\min}, \eta_{\max}]$. Set

$$
\begin{aligned}
Q_{k+1} &:= \eta_k Q_k + 1, \\
\theta(z^{k+1}) &:= \min\{\tau,\ \tau\psi(z^{k+1}),\ \theta(z^k)\}, \\
R_{k+1} &:= \frac{\eta_k Q_k R_k + \psi(z^{k+1})}{Q_{k+1}},
\end{aligned} \qquad (11)
$$

and $k := k + 1$. Go to Step 2.
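To illustrate the scheme, here is a Python/NumPy sketch of Algorithm 2.1 for the equality-only special case $I = \emptyset$ (so $m = 0$, there is no slack $s$, and $H(z)$ of (6) collapses to $(\mu,\ f_E(x) + \mu x)$), applied to the two equations of Example 4.7 in Section 4. This is our own simplified illustration, not the authors' Matlab code; the parameter values $\delta = 0.3$, $\sigma = 0.001$, $\beta = 1$, $\tau = 0.006$, $\eta = 0.8$ are borrowed from the experiments in Section 4.

```python
import numpy as np

# Equality-only instance: the two equations of Example 4.7.
def f(x):
    return np.array([x[0] - 0.7 * np.sin(x[0]) - 0.2 * np.cos(x[1]),
                     x[1] - 0.7 * np.cos(x[0]) + 0.2 * np.sin(x[1])])

def fprime(x):
    return np.array([[1.0 - 0.7 * np.cos(x[0]), 0.2 * np.sin(x[1])],
                     [0.7 * np.sin(x[0]),       1.0 + 0.2 * np.cos(x[1])]])

def H(z):
    """H(z) of (6) with m = 0: rows (mu, f_E(x) + mu*x)."""
    mu, x = z[0], z[1:]
    return np.concatenate(([mu], f(x) + mu * x))

def Hprime(z):
    """Jacobian (7) specialized to m = 0."""
    mu, x = z[0], z[1:]
    n = x.size
    J = np.zeros((1 + n, 1 + n))
    J[0, 0] = 1.0
    J[1:, 0] = x                           # d/dmu of f_E(x) + mu*x
    J[1:, 1:] = fprime(x) + mu * np.eye(n)
    return J

def psi(z):
    return float(H(z) @ H(z))

def algorithm21(x0, beta=1.0, tau=0.006, delta=0.3, sigma=0.001, eta=0.8,
                tol=1e-10, kmax=500):
    z = np.concatenate(([beta], x0))       # mu_0 = beta (Step 0)
    e0 = np.zeros(z.size); e0[0] = 1.0
    R, Q = psi(z), 1.0
    theta = tau * min(1.0, psi(z))
    for _ in range(kmax):
        if np.linalg.norm(H(z)) < tol:                                 # Steps 1/4
            break
        dz = np.linalg.solve(Hprime(z), -H(z) + beta * theta * e0)     # (9)
        alpha = 1.0
        while alpha > 1e-14 and \
                psi(z + alpha * dz) > (1.0 - 2.0 * sigma * (1.0 - tau * beta) * alpha) * R:
            alpha *= delta                                             # (10)
        z = z + alpha * dz
        Qn = eta * Q + 1.0                                             # Step 5, (11)
        theta = min(tau, tau * psi(z), theta)
        R = (eta * Q * R + psi(z)) / Qn
        Q = Qn
    return z
```

In a quick run under these assumptions, `algorithm21(np.array([0.0, 1.0]))` drives $\mu$ to zero and the iterates approach $x \approx (0.5265, 0.5079)$, matching the SOL reported for Ex 4.7 in Table 2.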

In Algorithm 2.1, a nonmonotone line search technique is adopted. It is easy to see that $R_{k+1}$ is a convex combination of $R_k$ and $\psi(z^{k+1})$. Since $R_0 = \psi(z^0)$, it follows that $R_k$ is a convex combination of the function values $\psi(z^0), \psi(z^1), \cdots, \psi(z^k)$. The choice of $\eta_k$ controls the degree of nonmonotonicity. If $\eta_k = 0$ for every $k$, then the line search is the usual monotone Armijo line search. The scheme of Algorithm 2.1 is not exactly the same as the one in [11]. In particular, we set $\theta(z^{k+1}) := \min\{\tau, \tau\psi(z^{k+1}), \theta(z^k)\}$, which is different from $\theta(z^{k+1}) := \min\{\tau, \tau\psi(z^k), \theta(z^k)\}$ given in [11]. This modification is exactly what makes the algorithm well-defined, as shown in Theorem 2.1 below. For convenience, we denote

$$
f'(x) := \begin{bmatrix} f'_I(x) \\ f'_E(x) \end{bmatrix}
$$

and make the following assumption.

**Assumption 2.1.** *$f'(x) + \mu I_n$ is invertible for any $x \in \mathbb{R}^n$ and $\mu \in \mathbb{R}_{++}$.*

Some basic properties of Algorithm 2.1 are stated in the following lemma. Since the proof arguments are almost the same as those in [11], they are thus omitted.

**Lemma 2.1.** *Let the sequences $\{R_k\}$ and $\{z^k\}$ be generated by Algorithm 2.1. Then, the following hold.*

**(a)** *The sequence $\{R_k\}$ is monotonically decreasing.*

**(b)** *$\psi(z^k) \le R_k$ for all $k \in J$.*

**(c)** *The sequence $\{\theta(z^k)\}$ is monotonically decreasing.*

**(d)** *$\beta\theta(z^k) \le \mu_k$ for all $k \in J$.*

**(e)** *$\mu_k > 0$ for all $k \in J$, and the sequence $\{\mu_k\}$ is monotonically decreasing.*
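Parts (a) and (b) of Lemma 2.1 can be illustrated with the Step 5 recursion alone: feed in any hypothetical sequence of merit values $\psi_k$ obeying the line-search bound $\psi_{k+1} \le R_k$ (the values below are made up for illustration) and the resulting $R_k$ decreases monotonically while dominating $\psi_k$, even though $\psi_k$ itself need not be monotone.

```python
def averaged_R(psis, eta=0.8):
    """Run the Step 5 recursion (11): Q_{k+1} = eta*Q_k + 1 and
    R_{k+1} = (eta*Q_k*R_k + psi_{k+1}) / Q_{k+1}, with R_0 = psi_0, Q_0 = 1."""
    Q, R = 1.0, psis[0]
    Rs = [R]
    for p in psis[1:]:
        Qn = eta * Q + 1.0
        R = (eta * Q * R + p) / Qn
        Q = Qn
        Rs.append(R)
    return Rs
```

For instance, with $\psi$-values $4.0, 1.0, 2.0, 0.5$ (note the rise from $1.0$ to $2.0$), the recursion gives $R_1 = (0.8 \cdot 4 + 1)/1.8 = 2.33\ldots$, and the sequence $R_k$ keeps decreasing.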

**Lemma 2.2.** *Suppose $A \in \mathbb{R}^{n\times n}$ is partitioned as $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$, where $A_{11}$ and $A_{22}$ are square matrices. If $A_{12}$ or $A_{21}$ is a zero matrix, then $\det(A) = \det(A_{11}) \cdot \det(A_{22})$.*

**Proof.** This is a well-known result in matrix analysis, which is a special case of Fischer's inequality [1, 5]. Please refer to [9, Theorem 7.3] for a proof. $\Box$
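A quick sanity check of Lemma 2.2 in plain Python (cofactor-expansion determinant; the assembled matrix has $A_{12} = 0$, the case used later for $H'(z)$):

```python
def det(M):
    """Determinant via Laplace expansion along the first row (fine for small M)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def block_lower_triangular(A11, A21, A22):
    """Assemble [[A11, 0], [A21, A22]] as a nested list (the A12 = 0 case)."""
    n2 = len(A22)
    top = [row + [0] * n2 for row in A11]
    bottom = [A21[i] + A22[i] for i in range(n2)]
    return top + bottom
```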

**Theorem 2.1.** *Suppose that $f$ is a continuously differentiable function and Assumption 2.1 is satisfied. Then Algorithm 2.1 is well-defined.*

**Proof.** Applying Lemmas 2.1-2.2 and mimicking the arguments in [11], it is easy to achieve the desired result. However, we point out again that $\theta(z^{k+1})$ in Step 5 is different from the one in [11], and it is exactly this modification that makes the algorithm well-defined. $\Box$

**3** **Convergence analysis**

In this section, we analyze the convergence of the algorithm proposed in the previous section. To this end, the following assumption, which was introduced in [6], is needed.

**Assumption 3.1.** *For any sequence $\{(\mu_k, x^k)\}$ with $\lim_{k\to\infty} \|x^k\| = +\infty$ and $\{\mu_k\} \subset \mathbb{R}_+$ bounded, either*

**(i)** *there is at least one index $i_0$ such that $\limsup_{k\to\infty} \{f_{i_0}(x^k) + \mu_k x^k_{i_0}\} = +\infty$; or*

**(ii)** *there is at least one index $i_0$ such that $\limsup_{k\to\infty} \{\mu_k(f_{i_0}(x^k) + \mu_k x^k_{i_0})\} = -\infty$.*

It can be seen that many functions satisfy Assumption 3.1; see [6]. The global convergence of Algorithm 2.1 is stated as follows. In fact, with Proposition 1.1, the main idea of the proof is almost the same as that of [11, Theorem 4.1]; only a few technical parts are different. Thus, we omit the details.

**Theorem 3.1.** *Suppose that $f$ is a continuously differentiable function and Assumptions 2.1 and 3.1 are satisfied. Then, the sequence $\{z^k\}$ generated by Algorithm 2.1 is bounded. Moreover, any accumulation point of $\{x^k\}$ is a solution to (1).*

Next, we analyze the convergence rate of Algorithm 2.1. Before presenting the result, we introduce some concepts that will be used in the subsequent analysis, as well as a technical lemma.

A locally Lipschitz function $F : \mathbb{R}^n \to \mathbb{R}^m$, which has the generalized Jacobian $\partial F(x)$, is said to be semismooth (or strongly semismooth) at $x \in \mathbb{R}^n$ if $F$ is directionally differentiable at $x$ and

$$
F(x+h) - F(x) - Vh = o(\|h\|) \quad (\text{or } = O(\|h\|^2))
$$

holds for any $V \in \partial F(x+h)$. It is well known that convex functions, smooth functions, and piecewise linear functions are examples of semismooth functions. The composition of (strongly) semismooth functions is still a (strongly) semismooth function. It can be verified that the function $\varphi_p$ defined by (4) is strongly semismooth on $\mathbb{R}^2$. Thus, $f$ being continuously differentiable implies that the function $H$ defined by (6) and $\widehat{H}$ defined by (8) are semismooth (or strongly semismooth if $f'$ is Lipschitz continuous on $\mathbb{R}^n$).

**Lemma 3.1.** *For any $\alpha, \beta \in \mathbb{R}_{++}$, $\alpha = O(\beta)$ means that $\alpha/\beta$ is uniformly bounded, and $\alpha = o(\beta)$ means that $\alpha/\beta \to 0$ as $\beta \to 0$. Then, we have*

**(a)** *$O(\beta) \pm O(\beta) = O(\beta)$;*

**(b)** *$o(\beta) \pm o(\beta) = o(\beta)$;*

**(c)** *if $c \ne 0$, then $O(c\beta) = O(\beta)$ and $o(c\beta) = o(\beta)$;*

**(d)** *$O(o(\beta)) = o(\beta)$ and $o(O(\beta)) = o(\beta)$;*

**(e)** *$O(\beta_1)O(\beta_2) = O(\beta_1\beta_2)$, $O(\beta_1)o(\beta_2) = o(\beta_1\beta_2)$, $o(\beta_1)O(\beta_2) = o(\beta_1\beta_2)$;*

**(f)** *if $\alpha = O(\beta_1)$ and $\beta_1 = o(\beta_2)$, then $\alpha = o(\beta_2)$.*

**Proof.** For parts (a)-(e), please refer to [13] for a proof. Part (f) can be verified straightforwardly. $\Box$

With Proposition 1.1 and Lemma 3.1, mimicking the arguments as in [11, Theorem 5.1] gives the following theorem.

**Theorem 3.2.** *Suppose that $f$ is a continuously differentiable function and Assumptions 2.1 and 3.1 are satisfied. Let $z^* = (\mu_*, x^*, s^*)$ be an accumulation point of $\{z^k\}$ generated by Algorithm 2.1. If all $V \in \partial H(z^*)$ are nonsingular, then the following hold.*

**(a)** *$\alpha_k \equiv 1$ for all $z^k$ sufficiently close to $z^*$;*

**(b)** *the whole sequence $\{z^k\}$ converges to $z^*$;*

**(c)** *$\|z^{k+1} - z^*\| = o(\|z^k - z^*\|)$ (or $\|z^{k+1} - z^*\| = O(\|z^k - z^*\|^2)$ if $f'$ is Lipschitz continuous on $\mathbb{R}^n$);*

**(d)** *$\mu_{k+1} = o(\mu_k)$ (or $\mu_{k+1} = O(\mu_k^2)$ if $f'$ is Lipschitz continuous on $\mathbb{R}^n$).*

**4** **Numerical Results**

In this section, we present our test problems and report numerical results. In this paper, the function $f$ is assumed to be a mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$, which means the dimension of $x$ is exactly the same as the total number of inequalities and equalities. In reality, this may not be the case. In other words, there may be a system like this:

$$
\begin{cases} f_I(x) \le 0, & I = \{1, 2, \cdots, m\} \\ f_E(x) = 0, & E = \{m+1, m+2, \cdots, l\}. \end{cases} \qquad (12)
$$

This says $f$ could be a mapping from $\mathbb{R}^n$ to $\mathbb{R}^l$. When $l \ne n$, the scheme in Algorithm 2.1 cannot be applied to the system (12) because the dimension of $x$ is not equal to the total number of inequalities and equalities. To make system (12) solvable under the proposed algorithm, as remarked in [11, Sec. 6], some additional inequality or variable needs to be added. For example,

(i) if $l < n$, we may add a trivial inequality like

$$
\sum_{i=1}^{n} x_i^2 \le M,
$$

where $M$ is sufficiently large, into system (12) so that Algorithm 2.1 can be applied;

(ii) if $l > n$ and $m \ge 1$, we may add a variable $x_{n+1}$ into the inequalities so that

$$
f_i(x) \le 0 \ \longrightarrow\ f_i(x) + x_{n+1}^2 \le 0;
$$

(iii) if $l > n$ and $m = 0$, we may add a trivial inequality like

$$
\sum_{i=1}^{n+2} x_i^2 \le M,
$$

where $M$ is sufficiently large, into system (12) so that Algorithm 2.1 can be applied.
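For remedy (i) in the common case $l = n - 1$, the padding step can be sketched as follows (the function name and the concrete bound $M$ are our own illustration, not prescribed by the paper):

```python
def pad_with_ball(f_list, M=1.0e6):
    """Remedy (i) for l = n - 1: prepend the trivial inequality
    sum_i x_i**2 - M <= 0, so the padded system has n functions."""
    ball = lambda x: sum(xi * xi for xi in x) - M
    return [ball] + f_list
```

For example, padding the single equality $x_1 + x_2 - 1 = 0$ in $\mathbb{R}^2$ yields one inequality plus one equality, matching the dimension requirement of Algorithm 2.1, and any point of practical interest satisfies the added ball inequality strictly.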

In the real implementation, the $H(z)$ given as in (6) is replaced by

$$
H(z) := \begin{bmatrix} \mu \\ f_I(x) - s + c\mu x_I \\ f_E(x) + c\mu x_E \\ \Phi_p(\mu, s) + c\mu s \end{bmatrix} \qquad (13)
$$

where $c$ is a given constant. Likewise, the $\widehat{H}(\mu, x)$ given as in (8) is replaced by

$$
\widehat{H}(\mu, x) := \begin{bmatrix} \mu \\ f_E(x) + c\mu x_E \\ \Phi_p(\mu, f_I(x)) + c\mu x_I \end{bmatrix}. \qquad (14)
$$

Adding such a constant $c$ is helpful when coding the algorithm because $\mu$ approaches zero eventually; the theoretical results are not affected in any case. In practice, in order to obtain an interior solution $x^*$ for the inequalities ($f_I(x^*) < 0$), the following system

$$
\begin{cases} f_I(x) + \varepsilon e \le 0 \\ f_E(x) = 0 \end{cases}
$$

is considered, where $\varepsilon$ is a small number and $e$ is the vector of all ones. Now, we list the test problems, which are taken from [6, 11].

**Example 4.1.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where*

$$
\begin{aligned}
f_1(x) &= x_1^2 + x_2^2 - 1 + \varepsilon \le 0, \\
f_2(x) &= -x_1^2 - x_2^2 + (0.999)^2 + \varepsilon \le 0.
\end{aligned}
$$

**Example 4.2.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ \vdots \\ f_6(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where*

$$
\begin{aligned}
f_1(x) &= \sin(x_1) + \varepsilon \le 0, \\
f_2(x) &= -\cos(x_2) + \varepsilon \le 0, \\
f_3(x) &= x_1 - 3\pi + \varepsilon \le 0, \\
f_4(x) &= x_2 - \tfrac{\pi}{2} - 2 + \varepsilon \le 0, \\
f_5(x) &= -x_1 - \pi + \varepsilon \le 0, \\
f_6(x) &= -x_2 - \tfrac{\pi}{2} + \varepsilon \le 0.
\end{aligned}
$$

**Example 4.3.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where*

$$
\begin{aligned}
f_1(x) &= \sin(x_1) + \varepsilon \le 0, \\
f_2(x) &= -\cos(x_2) + \varepsilon \le 0.
\end{aligned}
$$

**Example 4.4.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ \vdots \\ f_5(x) \end{bmatrix}$ with $x \in \mathbb{R}^5$, where*

$$
\begin{aligned}
f_1(x) &= x_1 + x_3 - 1.6 + \varepsilon \le 0, \\
f_2(x) &= 1.333x_2 + x_4 - 3 + \varepsilon \le 0, \\
f_3(x) &= -x_3 - x_4 + x_5 + \varepsilon \le 0, \\
f_4(x) &= x_1^2 + x_3^2 - 1.25 = 0, \\
f_5(x) &= x_2^{1.5} + 1.5x_4 - 3 = 0.
\end{aligned}
$$

**Example 4.5.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \end{bmatrix}$ with $x \in \mathbb{R}^3$, where*

$$
\begin{aligned}
f_1(x) &= x_1 + x_2 e^{0.8x_3} + e^{1.6} + \varepsilon \le 0, \\
f_2(x) &= x_1^2 + x_2^2 + x_3^2 - 5.2675 + \varepsilon \le 0, \\
f_3(x) &= x_1 + x_2 + x_3 - 0.2605 = 0.
\end{aligned}
$$

**Example 4.6.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where*

$$
\begin{aligned}
f_1(x) &= 0.8 - e^{x_1+x_2} + \varepsilon \le 0, \\
f_2(x) &= 1.21e^{x_1} + e^{x_2} - 2.2 = 0, \\
f_3(x) &= x_1^2 + x_2^2 + x_2 - 0.1135 = 0.
\end{aligned}
$$

**Example 4.7.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \end{bmatrix}$ with $x \in \mathbb{R}^2$, where*

$$
\begin{aligned}
f_1(x) &= x_1 - 0.7\sin(x_1) - 0.2\cos(x_2) = 0, \\
f_2(x) &= x_2 - 0.7\cos(x_1) + 0.2\sin(x_2) = 0.
\end{aligned}
$$

Moreover, in light of the aforementioned discussions, there are corresponding modified problems, Example 4.2', Example 4.6', and Example 4.7', which are stated below. The other examples are kept unchanged; in other words, Example 4.1 and Example 4.1' are the same, and so are Example 4.3 and Example 4.3', Example 4.4 and Example 4.4', and Example 4.5 and Example 4.5'.

**Example 4.2'.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ \vdots \\ f_6(x) \end{bmatrix}$ with $x \in \mathbb{R}^6$, where*

$$
\begin{aligned}
f_1(x) &= \sin(x_1) + \varepsilon \le 0, \\
f_2(x) &= -\cos(x_2) + \varepsilon \le 0, \\
f_3(x) &= x_1 - 3\pi + x_3^2 + \varepsilon \le 0, \\
f_4(x) &= x_2 - \tfrac{\pi}{2} - 2 + x_4^2 + \varepsilon \le 0, \\
f_5(x) &= -x_1 - \pi + x_5^2 + \varepsilon \le 0, \\
f_6(x) &= -x_2 - \tfrac{\pi}{2} + x_6^2 + \varepsilon \le 0.
\end{aligned}
$$

**Example 4.6'.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \end{bmatrix}$ with $x \in \mathbb{R}^3$, where*

$$
\begin{aligned}
f_1(x) &= 0.8 - e^{x_1+x_2} + x_3^2 + \varepsilon \le 0, \\
f_2(x) &= 1.21e^{x_1} + e^{x_2} - 2.2 = 0, \\
f_3(x) &= x_1^2 + x_2^2 + x_2 - 0.1135 = 0.
\end{aligned}
$$

**Example 4.7'.** *Consider $f(x) = \begin{bmatrix} f_1(x) \\ f_2(x) \\ f_3(x) \end{bmatrix}$ with $x \in \mathbb{R}^3$, where*

$$
\begin{aligned}
f_1(x) &= x_1^2 + x_2^2 + x_3^2 - 10000 + \varepsilon \le 0, \\
f_2(x) &= x_1 - 0.7\sin(x_1) - 0.2\cos(x_2) = 0, \\
f_3(x) &= x_2 - 0.7\cos(x_1) + 0.2\sin(x_2) = 0.
\end{aligned}
$$

The numerical implementations are coded in Matlab. In the numerical reports, $x^0$ is the starting point, NI is the total number of iterations, NF denotes the number of function evaluations of $H(z^k)$ or $\widehat{H}(\mu_k, x^k)$, and SOL means the solution obtained by the algorithm. The parameters used in the algorithm are set as

$$
\varepsilon = 0.00001, \quad \delta = 0.3, \quad \sigma = 0.001, \quad \beta = 1.0, \quad \mu_0 = 1.0, \quad Q_0 = 1.0.
$$

*Table 1: Numerical performance when $p = 2$, stop criterion: 0.001.*

| Problem | $x^0$ | $c$ | $\tau$ | $\eta$ | NI | NF | SOL |
|---|---|---|---|---|---|---|---|
| Ex 4.1 | (0, 5) | 100 | 0.006 | 0.01 | 8 | 12 | (-0.6188, 0.7853) |
| Ex 4.2 | (0, 0, 0, 0, 0, 0) | 0.5 | 0.2 | 0.01 | Fail | Fail | Fail |
| Ex 4.3 | (0, 0) | 0.5 | 0.2 | 0.01 | 3 | 4 | (-0.01516, 0.7207) |
| Ex 4.4 | (0.5, 2, 1, 0, 0) | 5 | 0.02 | 0.8 | 4 | 5 | (0.5557, 1.324, 0.9703, 0.984, 1.156) |
| Ex 4.5 | (-1, -1, 1) | 0.5 | 0.2 | 0.8 | 5 | 6 | (-0.8301, -0.8662, 1.957) |
| Ex 4.6 | (0, 0, 0) | 0.5 | 0.02 | 0.8 | 7 | 8 | (0.2743, -0.4975, 1.5e+006) |
| Ex 4.7 | (0, 1, 0) | 0.5 | 0.006 | 0.8 | 9 | 15 | (0.5268, 0.5084, -100) |
| Ex 4.1' | (0, 5) | 100 | 0.006 | 0.01 | 8 | 12 | (-0.6188, 0.7853) |
| Ex 4.2' | (0, 0, 0, 0, 0, 0) | 0.5 | 0.2 | 0.01 | 6 | 9 | (-0.009654, 1.428, 2.846, 1.28, 1.639, 1.666) |
| Ex 4.3' | (0, 0) | 0.5 | 0.2 | 0.01 | 3 | 4 | (-0.01516, 0.7207) |
| Ex 4.4' | (0.5, 2, 1, 0, 0) | 5 | 0.02 | 0.8 | 4 | 5 | (0.5557, 1.324, 0.9703, 0.984, 1.156) |
| Ex 4.5' | (-1, -1, 1) | 0.5 | 0.2 | 0.8 | 5 | 6 | (-0.8301, -0.8662, 1.957) |
| Ex 4.6' | (0, 0, 0) | 0.5 | 0.02 | 0.8 | 5 | 7 | (-0.09533, 0.09533, 0.3321) |
| Ex 4.7' | (0, 1, 0) | 0.5 | 0.006 | 0.8 | 9 | 15 | (0.5268, 0.5084, -100) |

*Note: Based on $H(z) = 0$ given as in (13).*
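The reported SOL columns can be checked directly against the constraint functions. For instance, the sketch below (our own code, not the authors') verifies that the Table 1 solutions of Example 4.1 and Example 4.7 satisfy the corresponding systems to within the 0.001 stopping tolerance.

```python
import math

EPS = 1e-5  # the epsilon = 0.00001 used in the experiments

def ex41(x):
    """Constraint values of Example 4.1 (both should be <= 0 at a solution)."""
    f1 = x[0] ** 2 + x[1] ** 2 - 1.0 + EPS
    f2 = -x[0] ** 2 - x[1] ** 2 + 0.999 ** 2 + EPS
    return f1, f2

def ex47(x):
    """Equation residuals of Example 4.7 (both should be near 0 at a solution)."""
    f1 = x[0] - 0.7 * math.sin(x[0]) - 0.2 * math.cos(x[1])
    f2 = x[1] - 0.7 * math.cos(x[0]) + 0.2 * math.sin(x[1])
    return f1, f2
```

Evaluating `ex41((-0.6188, 0.7853))` gives two small negative numbers, and `ex47((0.5268, 0.5084))` gives residuals below the $10^{-3}$ stop criterion.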

In Table 1 and Table 2, we adopt the same $x^0$, $c$, $\tau$, and $\eta$ used in [11] for $p = 2$. Basically, in Table 1 and Table 2, the bottom-half data for the modified problems are the same as those in [11], respectively. Below are our numerical observations and conclusions.

- From Table 1 and Table 2, we see that, when employing the formulation $H(z) = 0$, solving the modified problems is more successful than solving the original problems.

- Table 3 indicates that the numerical results are the same for the original problems and the modified problems when $\widehat{H}(\mu, x) = 0$ is employed. Hence, in Tables 4-11, we focus on the modified problems when the formulation $H(z) = 0$ is employed, whereas we only test the original problems whenever the implementations are based on $\widehat{H}(\mu, x) = 0$.

- From Table 5 ($p = 2$), we see that the algorithm based on $\widehat{H}(\mu, x) = 0$ can solve more problems than the one in [11] does. In view of the lower dimensionality of $\widehat{H}(\mu, x) = 0$ and this performance, we can confirm the merit of this new reformulation.

- In Table 4 and Table 5, the initial point and the other parameters are the same as those in [11]. In Tables 6-7, we fix the initial point $x^0$ for all test problems. In Table 8 and Table 9, $x^0$, $c$, $\tau$, and $\eta$ are all fixed for all test problems. In Table 10 and Table 11, $x^0$ is fixed for all test problems and parts of $c$, $\tau$, and $\eta$ are fixed. In general, we observe that the numerical performance based on the formulation $\widehat{H}(\mu, x) = 0$ is better than that based on $H(z) = 0$.

*Table 2: Numerical performance when $p = 2$, stop criterion: $1e-006$.*

| Problem | $x^0$ | $c$ | $\tau$ | $\eta$ | NI | NF | SOL |
|---|---|---|---|---|---|---|---|
| Ex 4.1 | (0, 5) | 100 | 0.006 | 0.01 | Fail | Fail | Fail |
| Ex 4.2 | (0, 0, 0, 0, 0, 0) | 0.5 | 0.2 | 0.01 | Fail | Fail | Fail |
| Ex 4.3 | (0, 0) | 0.5 | 0.2 | 0.01 | 4 | 5 | (-0.01516, 0.7206) |
| Ex 4.4 | (0.5, 2, 1, 0, 0) | 5 | 0.02 | 0.8 | 5 | 6 | (0.5563, 1.326, 0.9698, 0.9822, 1.155) |
| Ex 4.5 | (-1, -1, 1) | 0.5 | 0.2 | 0.8 | 6 | 7 | (-0.8299, -0.8663, 1.957) |
| Ex 4.6 | (0, 0, 0) | 0.5 | 0.02 | 0.8 | 7 | 8 | (0.2743, -0.4975, 1.5e+006) |
| Ex 4.7 | (0, 1, 0) | 0.5 | 0.006 | 0.8 | 10 | 16 | (0.5265, 0.5079, -100) |
| Ex 4.1' | (0, 5) | 100 | 0.006 | 0.01 | Fail | Fail | Fail |
| Ex 4.2' | (0, 0, 0, 0, 0, 0) | 0.5 | 0.2 | 0.01 | 7 | 10 | (-0.009276, 1.429, 2.846, 1.279, 1.64, 1.667) |
| Ex 4.3' | (0, 0) | 0.5 | 0.2 | 0.01 | 4 | 5 | (-0.01516, 0.7206) |
| Ex 4.4' | (0.5, 2, 1, 0, 0) | 5 | 0.02 | 0.8 | 5 | 6 | (0.5563, 1.326, 0.9698, 0.9822, 1.155) |
| Ex 4.5' | (-1, -1, 1) | 0.5 | 0.2 | 0.8 | 6 | 7 | (-0.8299, -0.8663, 1.957) |
| Ex 4.6' | (0, 0, 0) | 0.5 | 0.02 | 0.8 | 6 | 8 | (-0.09533, 0.09533, 0.332) |
| Ex 4.7' | (0, 1, 0) | 0.5 | 0.006 | 0.8 | 10 | 16 | (0.5265, 0.5079, -100) |

*Note: Based on $H(z) = 0$ given as in (13).*

- Moreover, changing the parameter $p$ seems to have no influence on the numerical performance, no matter whether $\widehat{H}(\mu, x) = 0$ or $H(z) = 0$ is adopted. This indicates that the smoothing approach may not be affected when $p$ is perturbed. This phenomenon is different from the one observed for other approaches in [2, 3] and is a new discovery in the literature. We conjecture that the main reason is that $\mu$ dominates the algorithm in the smoothing approach even when $p$ is perturbed. This conjecture still needs further verification and investigation.

In summary, the main contribution of this paper is to propose a new family of smoothing functions and to correct a flaw in the algorithm studied in [11], which is needed to guarantee its convergence. We believe that the proposed new smoothing functions can also be employed in other contexts where the projection function is involved, and the related numerical performance can be investigated accordingly. We leave these as future research topics.

**Acknowledgements.** We are greatly indebted to the anonymous referees for their valuable suggestions that helped us improve the presentation of the paper.

**References**

*[1] R. Bhatia, Matrix Analysis, Springer-Verlag, New York, 1997.*

*Table 3: Numerical performance when $p = 2$, stop criterion: 0.001.*

| Problem | $x^0$ | $c$ | $\tau$ | $\eta$ | NI | NF | SOL |
|---|---|---|---|---|---|---|---|
| Ex 4.1 | (0, 5) | 100 | 0.006 | 0.01 | 10 | 15 | (0.5942, -0.8031) |
| Ex 4.2 | (0, 0, 0, 0, 0, 0) | 0.5 | 0.2 | 0.01 | 3 | 4 | (-0.01407, 7.663e-006, 0, 0, 0, 0) |
| Ex 4.3 | (0, 0) | 0.5 | 0.2 | 0.01 | 3 | 4 | (-0.01407, 7.663e-006) |
| Ex 4.4 | (0.5, 2, 1, 0, 0) | 5 | 0.02 | 0.8 | 3 | 4 | (0.5489, 2.066, 0.9741, 0.0204, 9.748e-007) |
| Ex 4.5 | (-1, -1, 1) | 0.5 | 0.2 | 0.8 | 24 | 39 | (0.5031, -1.7, 1.458) |
| Ex 4.6 | (0, 0, 0) | 0.5 | 0.02 | 0.8 | 3 | 4 | (-0.09533, 0.09533, 0.08515) |
| Ex 4.7 | (0, 1, 0) | 0.5 | 0.006 | 0.8 | 3 | 4 | (0.5271, 0.508, 0) |
| Ex 4.1' | (0, 5) | 100 | 0.006 | 0.01 | 10 | 15 | (0.5942, -0.8031) |
| Ex 4.2' | (0, 0, 0, 0, 0, 0) | 0.5 | 0.2 | 0.01 | 3 | 4 | (-0.01407, 7.663e-006, 0, 0, 0, 0) |
| Ex 4.3' | (0, 0) | 0.5 | 0.2 | 0.01 | 3 | 4 | (-0.01407, 7.663e-006) |
| Ex 4.4' | (0.5, 2, 1, 0, 0) | 5 | 0.02 | 0.8 | 3 | 4 | (0.5489, 2.066, 0.9741, 0.0204, 9.748e-007) |
| Ex 4.5' | (-1, -1, 1) | 0.5 | 0.2 | 0.8 | 24 | 39 | (0.5031, -1.7, 1.458) |
| Ex 4.6' | (0, 0, 0) | 0.5 | 0.02 | 0.8 | 3 | 4 | (-0.09533, 0.09533, 0.1698) |
| Ex 4.7' | (0, 1, 0) | 0.5 | 0.006 | 0.8 | 3 | 4 | (0.5271, 0.508, 0) |

*Note: Based on $\widehat{H}(\mu, x) = 0$ given as in (14).*

*[2] J-S. Chen and S-H. Pan, A family of NCP-functions and a descent method for the*
*nonlinear complementarity problem, Computational Optimization and Applications,*
vol. 40, no. 3, pp. 389-404, 2008.

*[3] J-S. Chen, S-H. Pan, and C-Y. Yang, Numerical comparison of two eﬀective*
*methods for mixed complementarity problems, Journal of Computational and Applied*
Mathematics, vol. 234, no. 3, pp. 667-683, 2010.

[4] J.W. Daniel, Newton's method for nonlinear inequalities, Numerische Mathematik, vol. 21, pp. 381-387, 1973.

*[5] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press,*
Cambridge, 1986.

[6] Z-H. Huang, Y. Zhang, and W. Wu, A smoothing-type algorithm for solving systems of inequalities, Journal of Computational and Applied Mathematics, vol. 220, pp. 355-363, 2008.

*[7] D.Q. Mayne, E. Polak, and A.J. Heunis, Solving nonlinear inequalities in a*
*finite number of iterations, Journal of Optimization Theory and Applications, vol.*
33, pp. 207-221, 1981.

*[8] M. Sahba, On the solution of nonlinear inequalities in a finite number of iterations,*
Numerische Mathematik, vol. 46, pp. 229-236, 1985.

*[9] J.M. Schott, Matrix Analysis for Statistics, 2nd edition, John Wiley, New Jersey,*
2005.

*[10] H-X. Ying, Z-H. Huang, and L. Qi, The convergence of a Levenberg-Marquardt*
*method for the l*_{2}*-norm solution of nonlinear inequalities, Numerical Functional*
Analysis and Optimization, vol. 29, pp. 687-716, 2008.

*[11] Y. Zhang and Z-H. Huang, A nonmonotone smoothing-type algorithm for solv-*
*ing a system of equalities and inequalities, Journal of Computational and Applied*
Mathematics, vol. 233, pp. 2312-2321, 2010.

*[12] J. Zhu and B. Hao, A new non-interior continuation method for solving a sys-*
*tem of equalities and inequalities, Journal of Applied Mathematics, Article Number*
592540, 2014.

*[13] R.G. Bartle, The Elements of Real Analysis, 2nd edition, John Wiley, New*
Jersey, 1976.

*[14] S-L. Hu, Z-H. Huang, and P. Wang, A non-monotone smoothing Newton*
*algorithm for solving nonlinear complementarity problems, Optimization Methods and*
Software, vol. 24, pp. 447-460, 2009.

*Table 4: Numerical performance with different p for modified problems.*

*p = 2, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1’ (0, 5) 100 0.006 0.01 Fail Fail Fail

Ex 4.2’ (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 7 10 (-0.009276,1.429,2.846,1.279,1.64,1.667)

Ex 4.3’ (0, 0) 0.5 0.2 0.01 4 5 (-0.01516, 0.7206)

Ex 4.4’ (0.5, 2, 1, 0, 0) 5 0.02 0.8 5 6 (0.5563, 1.326, 0.9698, 0.9822, 1.155)

Ex 4.5’ (-1, -1, 1) 0.5 0.2 0.8 6 7 (-0.8299, -0.8663, 1.957)

Ex 4.6’ (0, 0, 0) 0.5 0.02 0.8 6 8 (-0.09533, 0.09533, 0.332)

Ex 4.7’ (0, 1, 0) 0.5 0.006 0.8 10 16 (0.5265, 0.5079, -100)

*p = 1.5, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1’ (0, 5) 100 0.006 0.01 Fail Fail Fail

Ex 4.2’ (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 7 10 (-0.03532,1.428, 2.849, 1.278,1.63,1.666)

Ex 4.3’ (0, 0) 0.5 0.2 0.01 4 5 (-0.0189, 0.7217)

Ex 4.4’ (0.5, 2, 1, 0, 0) 5 0.02 0.8 5 6 (0.5546, 1.329, 0.9708, 0.979, 1.135)

Ex 4.5’ (-1, -1, 1) 0.5 0.2 0.8 6 7 (-0.8288, -0.8673, 1.957)

Ex 4.6’ (0, 0, 0) 0.5 0.02 0.8 6 8 (-0.09533, 0.09533, 0.3509)

Ex 4.7’ (0, 1, 0) 0.5 0.006 0.8 10 16 (0.5265, 0.5079, -100)

*p = 3, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1’ (0, 5) 100 0.006 0.01 Fail Fail Fail

Ex 4.2’ (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 7 10 (-0.007125,1.43,2.848,1.281, 1.641,1.667)

Ex 4.3’ (0, 0) 0.5 0.2 0.01 4 5 (-0.01061, 0.72)

Ex 4.4’ (0.5, 2, 1, 0, 0) 5 0.02 0.8 6 7 (0.5589, 1.33, 0.9683, 0.9771, 1.17)

Ex 4.5’ (-1, -1, 1) 0.5 0.2 0.8 6 7 (-0.8313, -0.8648, 1.957)

Ex 4.6’ (0, 0, 0) 0.5 0.02 0.8 6 7 (-0.09533, 0.09533, 0.4472)

Ex 4.7’ (0, 1, 0) 0.5 0.006 0.8 10 16 (0.5265, 0.5079, -100)

*p = 8, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1’ (0, 5) 100 0.006 0.01 Fail Fail Fail

Ex 4.2’ (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 7 10 (-0.00355,1.431,2.849,1.282,1.643,1.668)

Ex 4.3’ (0, 0) 0.5 0.2 0.01 4 5 (-0.001338, 0.7196)

Ex 4.4’ (0.5, 2, 1, 0, 0) 5 0.02 0.8 6 7 (0.5622, 1.351, 0.9664, 0.9528, 1.18)

Ex 4.5’ (-1, -1, 1) 0.5 0.2 0.8 6 7 (-0.8338, -0.8624, 1.957)

Ex 4.6’ (0, 0, 0) 0.5 0.02 0.8 5 6 (-0.09533, 0.09533, 0.2985)

Ex 4.7’ (0, 1, 0) 0.5 0.006 0.8 10 16 (0.5265, 0.5079, -100)

*Note: Based on H(z) = 0 given as in (13).*

*Table 5: Numerical performance with different p for original problems.*

*p = 2, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1 (0, 5) 100 0.006 0.01 12 20 (0.5821, -0.812)

Ex 4.2 (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 4 5 (-0.01407, 7.663e-006,0,0,0,0)

Ex 4.3 (0, 0) 0.5 0.2 0.01 4 5 (-0.01407, 7.663e-006)

Ex 4.4 (0.5, 2, 1, 0, 0) 5 0.02 0.8 4 5 (0.549, 2.066, 0.974, 0.02039, 9.747e-007)

Ex 4.5 (-1, -1, 1) 0.5 0.2 0.8 25 40 (0.5029, -1.7, 1.458)

Ex 4.6 (0, 0, 0) 0.5 0.02 0.8 4 5 (-0.09533, 0.09533, 0.08515)

Ex 4.7 (0, 1, 0) 0.5 0.006 0.8 4 5 (0.5265, 0.5079, 0)

*p = 1.5, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1 (0, 5) 100 0.006 0.01 12 20 (0.5821, -0.812)

Ex 4.2 (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 4 5 (-0.01739, 0.002797,0,0,0,0)

Ex 4.3 (0, 0) 0.5 0.2 0.01 4 5 (-0.01739, 0.002797)

Ex 4.4 (0.5, 2, 1, 0, 0) 5 0.02 0.8 4 5 (0.547,2.061,0.9751,0.02811,0.000359)

Ex 4.5 (-1, -1, 1) 0.5 0.2 0.8 Fail Fail Fail

Ex 4.6 (0, 0, 0) 0.5 0.02 0.8 4 5 (-0.09533, 0.09533, 0.1086)

Ex 4.7 (0, 1, 0) 0.5 0.006 0.8 4 5 (0.5265, 0.5079, 0)

*p = 3, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1 (0, 5) 100 0.006 0.01 12 20 (0.5821, -0.812)

Ex 4.2 (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 4 5 (-0.009902, 6.814e-011,0,0,0,0)

Ex 4.3 (0, 0) 0.5 0.2 0.01 4 5 (-0.009902, 6.814e-011)

Ex 4.4 (0.5, 2, 1, 0, 0) 5 0.02 0.8 4 5 (0.5513, 2.071,0.9727,0.01239, 8.581e-012)

Ex 4.5 (-1, -1, 1) 0.5 0.2 0.8 Fail Fail Fail

Ex 4.6 (0, 0, 0) 0.5 0.02 0.8 4 5 (-0.09533, 0.09533, 0.06131)

Ex 4.7 (0, 1, 0) 0.5 0.006 0.8 4 5 (0.5265, 0.5079, 0)

*p = 8, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1 (0, 5) 100 0.006 0.01 12 20 (0.5821, -0.812)

Ex 4.2 (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 4 5 (-0.002048, 6.022e-036, 0, 0, 0, 0)

Ex 4.3 (0, 0) 0.5 0.2 0.01 4 5 (-0.002048, 6.022e-036)

Ex 4.4 (0.5, 2, 1, 0, 0) 5 0.02 0.8 4 5 (0.557, 2.079,0.9694,0.001483,7.47e-037)

Ex 4.5 (-1, -1, 1) 0.5 0.2 0.8 Fail Fail Fail

Ex 4.6 (0, 0, 0) 0.5 0.02 0.8 4 5 (-0.09533, 0.09533, 0.01803)

Ex 4.7 (0, 1, 0) 0.5 0.006 0.8 4 5 (0.5265, 0.5079, 0)

Note: Based on Ĥ(µ, x) = 0 given as in (14).

*Table 6: Numerical performance with different p for modified problems.*

*p = 1.5, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1’ (0, 0) 100 0.006 0.01 Fail Fail Fail

Ex 4.2’ (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 7 10 (-0.03532,1.428,2.849,1.278,1.63,1.666)

Ex 4.3’ (0, 0) 0.5 0.2 0.01 4 5 (-0.0189, 0.7217)

Ex 4.4’ (0, 0, 0, 0, 0) 5 0.02 0.8 Fail Fail Fail

Ex 4.5’ (0, 0, 0) 0.5 0.2 0.8 8 13 (-0.8355, -0.8607, 1.957)

Ex 4.6’ (0, 0, 0) 0.5 0.02 0.8 6 8 (-0.09533, 0.09533, 0.3509)

Ex 4.7’ (0, 0, 0) 5 0.02 0.8 17 21 (0.5265, 0.5079, 100)

*p = 2, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1’ (0, 0) 100 0.006 0.01 Fail Fail Fail

Ex 4.2’ (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 7 10 (-0.009276,1.429,2.846,1.279,1.64,1.667)

Ex 4.3’ (0, 0) 0.5 0.2 0.01 4 5 (-0.01516, 0.7206)

Ex 4.4’ (0, 0, 0, 0, 0) 5 0.02 0.8 Fail Fail Fail

Ex 4.5’ (0, 0, 0) 0.5 0.2 0.8 8 13 (-0.8356, -0.8606, 1.957)

Ex 4.6’ (0, 0, 0) 0.5 0.02 0.8 6 8 (-0.09533, 0.09533, 0.332)

Ex 4.7’ (0, 0, 0) 5 0.02 0.8 17 21 (0.5265, 0.5079, 100)

*p = 3, stop criterion: 1e-006.*

Problem *x*^{0} *c* *τ* *η* NI NF SOL

Ex 4.1’ (0, 0) 100 0.006 0.01 Fail Fail Fail

Ex 4.2’ (0, 0, 0, 0, 0, 0) 0.5 0.2 0.01 7 10 (-0.007125,1.43,2.848,1.281,1.641,1.667)

Ex 4.3’ (0, 0) 0.5 0.2 0.01 4 5 (-0.01061, 0.72)

Ex 4.4’ (0, 0, 0, 0, 0) 5 0.02 0.8 Fail Fail Fail

Ex 4.5’ (0, 0, 0) 0.5 0.2 0.8 8 13 (-0.8356, -0.8606, 1.957)

Ex 4.6’ (0, 0, 0) 0.5 0.02 0.8 6 7 (-0.09533, 0.09533, 0.4472)

Ex 4.7’ (0, 0, 0) 5 0.02 0.8 17 21 (0.5265, 0.5079, 100)

*Note: Based on H(z) = 0 given as in (13); x*^{0} is fixed.