
LEVENBERG-MARQUARDT METHOD FOR ABSOLUTE VALUE EQUATION ASSOCIATED WITH SECOND-ORDER CONE

Xin-He Miao

School of Mathematics, Tianjin University, Tianjin 300072, P.R. China

Kai Yao

School of Mathematics, Tianjin University, Tianjin 300072, P.R. China

Ching-Yu Yang

Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

Jein-Shan Chen*

Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

(Communicated by Ruey-Lin Sheu)

Abstract. In this paper, we suggest the Levenberg-Marquardt method with Armijo line search for solving absolute value equations associated with the second-order cone (SOCAVE for short), which is a generalization of the standard absolute value equation frequently discussed in the literature during the past decade. We analyze the convergence of the proposed algorithm. For numerical reports, we not only show the efficiency of the proposed method, but also present a numerical comparison with the smoothing Newton method. It indicates that the proposed algorithm could also be a good choice for solving the SOCAVE.

1. Introduction. The standard absolute value equation (AVE for short), which is currently being studied extensively, is the main focus of this paper. There are two types of AVE. The first type is of the form

Ax − |x| = b. (1)

Another one is a more general AVE, which is in the form of

Ax + B|x| = b, (2)

2020 Mathematics Subject Classification. Primary: 26B05, 65K10; Secondary: 26B35, 90C33.

Key words and phrases. Second-order cone, Absolute value equations, Levenberg-Marquardt algorithm, Armijo line search.

The first author is supported by National Natural Science Foundation of China (No. 11471241) and Natural Science Foundation of Inner Mongolia (No. 2019LH01001). The last author is supported by Ministry of Science and Technology, Taiwan.

Corresponding author: Jein-Shan Chen.



where A ∈ IRn×n, 0 ≠ B ∈ IRn×n and b ∈ IRn. Here |x| denotes the componentwise absolute value of the vector x ∈ IRn, i.e., |x| = (|x_1|, |x_2|, · · · , |x_n|)^T ∈ IRn. In fact, when B is nonsingular, the AVE (2) reduces to the AVE (1). In particular, the AVE (2) is just the AVE (1) if B = −I, where I is the identity matrix.

It is known that the AVE (1) was first introduced by Rohn [30] in 2004 and that AVEs can be used to formulate many optimization problems [18, 21, 26, 29]. Due to this, the AVE has attracted much attention recently. For the standard absolute value equation, there are two main research directions. One is the theoretical side, while the other focuses on algorithms for solving the absolute value equation. On the theoretical side, Mangasarian and Meyer [24] show that the AVE (1) is equivalent to the bilinear program, the generalized LCP (linear complementarity problem), and the standard LCP provided that 1 is not an eigenvalue of A. In the setting of the second-order cone, Hu, Huang and Zhang [12] have shown that the absolute value equation associated with the second-order cone (SOCAVE) (2) is equivalent to the following problem: find x, y ∈ IRn such that

Mx + Py = c,   x ∈ Kn,  y ∈ Kn,  ⟨x, y⟩ = 0,

where M, P ∈ IRn×n are matrices and c ∈ IRn. Note that the above problem is not a standard second-order cone linear complementarity problem (SOCLCP) because there is one additional equation Mx + Py = c therein. In light of this, Miao et al. [27] have shown that the SOCAVE (2) can be further converted into a standard SOCLCP. In addition, regarding the properties of solutions of the AVE (2), under the condition of solvability, the AVE (2) can have either a unique solution or multiple (e.g., exponentially many) solutions. Moreover, various sufficient conditions on solvability and non-solvability of the AVE (1) and (2) with unique and multiple solutions are discussed in [24, 29]. There are many other theoretical results, see [11, 16, 18, 19, 22, 24, 26, 29, 30]. Regarding solutions of the AVE, various numerical methods have been proposed; for example, Mangasarian [20] and Zhang and Wei [35] both considered a generalized Newton method for the AVE. Yamanaka and Fukushima [32] proposed a branch and bound method for absolute value programs. There are many other numerical methods for solving the standard AVE (2) in the literature, see [2, 14, 15, 20, 21, 23, 31, 35] and references therein.

In this paper, we focus on the absolute value equation associated with the second-order cone, denoted the SOCAVE, which takes the same form as the AVE (1):

Ax − |x| = b, (3)

where A ∈ IRn×n and b ∈ IRn are the same as those in (1). However, |x| in the SOCAVE (3) denotes the square root of the Jordan product "◦" of x with itself associated with the second-order cone (SOC), that is, |x| := √(x²) = √(x ◦ x). In fact, by applying the SOC-function, the Jordan product [4] and their related properties, there is an explicit expression for |x| ∈ K in the SOCAVE (3), where K denotes the general second-order cone. More details about the second-order cone, the Jordan product and |x| will be introduced in the next section.

As mentioned above, several mathematical problems, including linear programming and bimatrix games, can be reduced to a system of absolute value equations.


Accordingly, we see that the SOCAVE (3) plays the same role in various optimization problems involving second-order cones, since the SOCAVE is equivalent to the SOCLCP under suitable conditions. The SOCLCP has various applications in engineering, control, finance, robust optimization and combinatorial optimization.

In this paper, we are interested in the Levenberg-Marquardt method for solving the SOCAVE (3). The Levenberg-Marquardt method [10, 13, 17, 25, 33, 34] was proposed for solving nonlinear equations F(x) = 0. This method can be viewed as a combination of the steepest descent method and the Gauss-Newton method. The classical Levenberg-Marquardt method computes the search direction d_k by

d_k = −(J(x^k)^T J(x^k) + µ_k I)^{−1} J(x^k)^T F(x^k),   (4)

where J(x^k) = F′(x^k) denotes the Jacobian matrix of the function F at x^k, µ_k > 0 is a parameter and I is the identity matrix. As stated in [13], the direction d_k in (4) is unique, which is a nice feature of the Levenberg-Marquardt method compared with the Newton method or the Gauss-Newton method. In our setting, F(x) is defined by

F (x) = Ax − |x| − b.
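To make (4) concrete, the following is a minimal NumPy sketch of the direction computation; the function name lm_direction is our own illustrative choice, and J and F stand for the Jacobian and residual of any smooth F, not necessarily the SOCAVE residual defined above.

```python
import numpy as np

def lm_direction(J, F, mu):
    # Levenberg-Marquardt direction (4): solve (J^T J + mu I) d = -J^T F.
    # Solving the linear system is numerically preferable to forming the
    # inverse; J^T J + mu I is symmetric positive definite for mu > 0,
    # so the direction is unique.
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ F)
```

For small µ_k the step approaches the Gauss-Newton step, while for large µ_k it approaches a short step along the steepest descent direction −J^T F, which reflects the combination mentioned above.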

In this paper, we consider the Levenberg-Marquardt method with Armijo line search. Besides analyzing the convergence of the proposed method, we also present some preliminary numerical results to illustrate its efficiency.

On the other hand, we also compare with the smoothing Newton method, which is a promising approach for solving the SOCAVE (3) [27, 28]. The numerical experiments indicate that the Levenberg-Marquardt method is competitive on certain problems. This suggests that not only is the smoothing Newton method good for solving the SOCAVE (3), but the Levenberg-Marquardt method is also a good choice.

A few words about our notation in this paper. All vectors are column vectors unless transposed to a row vector by a superscript T. IRn denotes the n-dimensional real space. x^T y denotes the inner product of two vectors x and y in IRn. IR₊ and IR₊₊ denote the nonnegative and positive reals, respectively. For x ∈ IRn, the 2-norm (i.e., the Euclidean norm) is denoted by ‖x‖. A^T denotes the transpose of A, and the identity matrix of arbitrary dimension is denoted by I.

The remaining parts of this paper are organized as follows. In Section 2, we recall some basic concepts and background knowledge about the second-order cone, the projection onto the SOC, and the expression of the absolute value function associated with the SOC. Besides, we also define a vector-valued smoothing function Φ based on the SOC. In Section 3, we propose the Levenberg-Marquardt method with Armijo line search for solving the SOCAVE (3), and discuss the convergence of the proposed method. In Section 4, simulations, numerical results, and numerical comparisons are presented.

2. Preliminaries. In this section, we review some basic concepts and useful results regarding the second-order cone and the Jordan product, which will be extensively used in the subsequent analysis. For a comprehensive treatment of the second-order cone, more details can be found in [3, 4, 5, 6, 7, 9].


The second-order cone (SOC) in IRn, also called the Lorentz cone or ice-cream cone, is defined as

Kn := {(x_1, x_2) ∈ IR × IRn−1 | ‖x_2‖ ≤ x_1},

where ‖ · ‖ denotes the Euclidean norm. It is well known that the second-order cone is a special case of symmetric cones [9]. Besides, we can see that for n = 1, the second-order cone Kn reduces to the set of nonnegative reals IR₊. In general, the general second-order cone K is the Cartesian product of SOCs, i.e.,

K := K^{n_1} × · · · × K^{n_r}.

Since all the analysis can be carried out in the setting of Cartesian products, without loss of generality we focus on the single second-order cone Kn in this paper.

Next, we introduce the concept of the Jordan product associated with the SOC Kn. For any two vectors x = (x_1, x_2) ∈ IR × IRn−1 and y = (y_1, y_2) ∈ IR × IRn−1, the Jordan product of x and y associated with Kn is defined by

x ◦ y := ( x^T y , y_1 x_2 + x_1 y_2 )^T.

Unlike scalar or matrix multiplication, the Jordan product is not associative, which is a main source of complication in the analysis of optimization problems involving the SOC, see [7, 9] and references therein. Moreover, the identity element under the Jordan product is e = (1, 0, · · · , 0)^T ∈ IRn. Based on these definitions, x² means the Jordan product of x with itself, i.e., x² = x ◦ x, and √x with x ∈ Kn denotes the vector satisfying √x ◦ √x = x. On the basis of these concepts, the vector |x| in (3) is computed by

|x| := √(x ◦ x).
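As a quick illustration, the Jordan product can be coded in a few lines; this NumPy sketch (with our own function name jordan_product) follows the definition above.

```python
import numpy as np

def jordan_product(x, y):
    # x ∘ y = (x^T y, y1*x2 + x1*y2) for x = (x1, x2), y = (y1, y2).
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

# e = (1, 0, ..., 0)^T is the identity element: x ∘ e = x.
n = 4
e = np.zeros(n); e[0] = 1.0
x = np.random.randn(n)
assert np.allclose(jordan_product(x, e), x)
```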

In order to write out the expression of |x| easily and explicitly, we first recall the spectral decomposition of x with respect to the SOC. For any x = (x_1, x_2) ∈ IR × IRn−1, the spectral decomposition of x with respect to the SOC [4, 6, 7] is given by

x = λ_1(x) u_x^{(1)} + λ_2(x) u_x^{(2)},

where λ_1(x) and λ_2(x) are the spectral values of x with λ_i(x) = x_1 + (−1)^i ‖x_2‖ for i = 1, 2, while u_x^{(1)} and u_x^{(2)} are the corresponding spectral vectors of x given by

u_x^{(i)} = (1/2) ( 1 , (−1)^i x_2^T/‖x_2‖ )^T   if ‖x_2‖ ≠ 0,
u_x^{(i)} = (1/2) ( 1 , (−1)^i ν^T )^T           if ‖x_2‖ = 0,

with ν ∈ IRn−1 being any vector satisfying ‖ν‖ = 1, for i = 1, 2. It is easy to see that the spectral decomposition of x ∈ IRn is unique if ‖x_2‖ ≠ 0.
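The spectral decomposition is straightforward to implement; below is a minimal NumPy sketch (the function name spectral_decomposition is ours), which we reuse in later sketches. When x_2 = 0 the unit vector ν is arbitrary, and we simply take the first coordinate vector.

```python
import numpy as np

def spectral_decomposition(x):
    # Returns spectral values lam = (lam_1, lam_2), lam_i = x1 + (-1)^i ||x2||,
    # and spectral vectors u = (u^(1), u^(2)) of x with respect to K^n.
    x1, x2 = x[0], x[1:]
    norm_x2 = np.linalg.norm(x2)
    lam = np.array([x1 - norm_x2, x1 + norm_x2])
    if norm_x2 > 0:
        w = x2 / norm_x2
    else:
        w = np.zeros_like(x2)
        w[0] = 1.0                      # arbitrary unit vector nu
    u = [0.5 * np.concatenate(([1.0], s * w)) for s in (-1.0, 1.0)]
    return lam, u

# Sanity check: x = lam_1 u^(1) + lam_2 u^(2).
x = np.random.randn(5)
lam, u = spectral_decomposition(x)
assert np.allclose(lam[0] * u[0] + lam[1] * u[1], x)
```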

In the remainder of this section, we recall the projection onto Kn. We let x₊ denote the projection of x onto Kn, and x₋ denote the projection of −x onto the dual cone of Kn, where the dual cone is defined as

(Kn)* := {y ∈ IRn | ⟨x, y⟩ ≥ 0, ∀x ∈ Kn}.

In fact, the dual cone of Kn is itself, i.e., (Kn)* = Kn. Due to the special structure of the second-order cone, the explicit formula for the projection of x = (x_1, x_2) ∈ IR × IRn−1 onto Kn is obtained in [7, 9] as below:

x₊ = x if x ∈ Kn,   0 if x ∈ −Kn,   u otherwise,

where

u = ( (x_1 + ‖x_2‖)/2 , ((x_1 + ‖x_2‖)/2) · x_2/‖x_2‖ )^T.

In the same way, x₋ can be characterized as follows:

x₋ = 0 if x ∈ Kn,   −x if x ∈ −Kn,   w otherwise,

where

w = ( −(x_1 − ‖x_2‖)/2 , ((x_1 − ‖x_2‖)/2) · x_2/‖x_2‖ )^T.

By direct calculation, it is easy to show that x = x₊ − x₋. Moreover, together with the spectral decomposition of x, x₊ and x₋ can be described in the following form, respectively:

x₊ = (λ_1(x))₊ u_x^{(1)} + (λ_2(x))₊ u_x^{(2)},
x₋ = (−λ_1(x))₊ u_x^{(1)} + (−λ_2(x))₊ u_x^{(2)},

where (α)₊ = max{0, α} for any α ∈ IR.
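These spectral formulas give an immediate implementation of both projections; the sketch below reuses spectral_decomposition from the previous sketch.

```python
import numpy as np
# assumes spectral_decomposition from the sketch above

def soc_projections(x):
    # x_+ = sum_i (lam_i)_+ u^(i),  x_- = sum_i (-lam_i)_+ u^(i).
    lam, u = spectral_decomposition(x)
    x_plus = sum(max(l, 0.0) * ui for l, ui in zip(lam, u))
    x_minus = sum(max(-l, 0.0) * ui for l, ui in zip(lam, u))
    return x_plus, x_minus

# Sanity check of the identity x = x_+ - x_-.
x = np.random.randn(5)
xp, xm = soc_projections(x)
assert np.allclose(xp - xm, x)
```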

Now, we discuss the expression of |x| associated with Kn. Indeed, we can use the so-called SOC-function to obtain the expression of |x|, which can be found in [3, 4, 5]. More specifically, for any x ∈ IRn, the definition of |x| with respect to the SOC is |x| := x₊ + x₋. In particular, the form |x| = x₊ + x₋ is equivalent to |x| = √(x ◦ x) in the setting of the SOC. In light of the above expressions of the projections x₊ and x₋, it is easy to obtain the following expression of the absolute value |x| associated with the SOC:

|x| = [(λ_1(x))₊ + (−λ_1(x))₊] u_x^{(1)} + [(λ_2(x))₊ + (−λ_2(x))₊] u_x^{(2)}
    = |λ_1(x)| u_x^{(1)} + |λ_2(x)| u_x^{(2)}.

Combining with the spectral decomposition of x, there is a more explicit expression of |x| as below:

|x| = (1/2) ( |x_1 − ‖x_2‖| + |x_1 + ‖x_2‖| , (|x_1 + ‖x_2‖| − |x_1 − ‖x_2‖|) · x_2/‖x_2‖ )^T   if x_2 ≠ 0,

|x| = ( |x_1| , 0 )^T   if x_2 = 0.
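For completeness, here is a direct NumPy transcription of this explicit formula (with our own function name soc_abs); it agrees with |λ_1(x)| u_x^{(1)} + |λ_2(x)| u_x^{(2)} computed from the spectral decomposition.

```python
import numpy as np

def soc_abs(x):
    # |x| associated with K^n via the explicit formula above.
    x1, x2 = x[0], x[1:]
    norm_x2 = np.linalg.norm(x2)
    if norm_x2 == 0.0:
        return np.concatenate(([abs(x1)], np.zeros_like(x2)))
    a1 = abs(x1 - norm_x2)              # |lambda_1(x)|
    a2 = abs(x1 + norm_x2)              # |lambda_2(x)|
    return np.concatenate(([0.5 * (a1 + a2)],
                           0.5 * (a2 - a1) * x2 / norm_x2))
```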

In order to employ the Levenberg-Marquardt method for solving the SOCAVE (3), we need to adopt a continuously differentiable function. Due to the non-differentiability of |α| for α ∈ IR, we consider a special smoothing function for the absolute value function |α|. To this end, we define the function φ_p(·, ·) : IR² → IR as

φ_p(a, b) := (|a|^p + |b|^p)^{1/p},   p > 1.   (5)

Note that, for b ∈ IR, b → 0 yields φ_p(a, b) → |a|. Therefore, combining the spectral decomposition of x and the function φ_p, it is natural to define a vector-valued smoothing function Φ : IRn → IRn as

Φ(x(ρ)) = φ_p(ρ, λ_1(x)) u_x^{(1)} + φ_p(ρ, λ_2(x)) u_x^{(2)}
        = (|ρ|^p + |λ_1(x)|^p)^{1/p} u_x^{(1)} + (|ρ|^p + |λ_2(x)|^p)^{1/p} u_x^{(2)},


where ρ ∈ IR is a parameter and λ_1(x), λ_2(x) are the spectral values of x. From this, it is easy to check that

lim_{ρ→0} Φ(x(ρ)) = |λ_1(x)| u_x^{(1)} + |λ_2(x)| u_x^{(2)} = |x|,

which means the function Φ(x(ρ)) is a uniformly smoothing function of |x| associated with the SOC. With this function, for tackling the SOCAVE (3), we further define a function H(x(ρ)) : IRn → IRn by

H(x(ρ)) = Ax − Φ(x(ρ)) − b,   ∀ρ ∈ IR, x ∈ IRn.

Then, from the trivial observation

lim_{ρ→0} H(x(ρ)) = 0 ⟺ Ax − |x| − b = 0,

it follows that x is a solution to the SOCAVE (3) if and only if x is the limit of the solution x(ρ) of the equation H(x(ρ)) = 0 as ρ → 0. In practical implementation, we often take ρ ∈ IR₊₊ and let ρ ↓ 0. In addition, it is not difficult to show that the function H(x(ρ)) with any ρ ≠ 0 is continuously differentiable on IRn. By direct calculation, we can write out the explicit formula of the Jacobian matrix of the function H as below:

H′_x(x(ρ)) = A − Φ′_x(x(ρ))   (6)

for all x ∈ IRn with x = (x_1, x_2) ∈ IR × IRn−1, where

Φ′_x(x(ρ)) = [ sgn(x_1)|x_1|^{p−1} / (|ρ|^p + |x_1|^p)^{(p−1)/p} ] I   if x_2 = 0,

and, if x_2 ≠ 0,

Φ′_x(x(ρ)) = [ b               c x_2^T/‖x_2‖
               c x_2/‖x_2‖     a I + (b − a) x_2 x_2^T/‖x_2‖² ],

with

a = ( φ_p(ρ, λ_2(x)) − φ_p(ρ, λ_1(x)) ) / ( λ_2(x) − λ_1(x) ),

b = (1/2) [ sgn(λ_2(x))|λ_2(x)|^{p−1} / [φ_p(ρ, λ_2(x))]^{p−1} + sgn(λ_1(x))|λ_1(x)|^{p−1} / [φ_p(ρ, λ_1(x))]^{p−1} ],

c = (1/2) [ sgn(λ_2(x))|λ_2(x)|^{p−1} / [φ_p(ρ, λ_2(x))]^{p−1} − sgn(λ_1(x))|λ_1(x)|^{p−1} / [φ_p(ρ, λ_1(x))]^{p−1} ],

and the function sgn(·) is defined by

sgn(α) := 1 if α > 0,   0 if α = 0,   −1 if α < 0.

It is noteworthy that the Jacobian matrix is derived from Proposition 4 in [5]. Actually, the Jacobian matrix of the function H is one of the important elements in the considered algorithm. For simplicity, we denote w := x(ρ), from which we obtain the system of nonlinear equations

H(w) = 0.

Thus, x is a solution of the SOCAVE (3) if and only if x is the limit of the solution w of the equation H(w) = 0 as ρ → 0.
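To summarize the preliminaries in executable form, the following sketch implements φ_p, Φ, H and the Jacobian (6); it reuses spectral_decomposition from the earlier sketch, and the local variable bb stands for the scalar b of the formula above (renamed to avoid clashing with the right-hand side vector b).

```python
import numpy as np
# assumes spectral_decomposition from the earlier sketch

def phi_p(a, b, p):
    # Smoothing function (5): phi_p(a, b) = (|a|^p + |b|^p)^(1/p).
    return (abs(a)**p + abs(b)**p)**(1.0 / p)

def Phi(x, rho, p):
    # Vector-valued smoothing function: Phi(x(rho)) -> |x| as rho -> 0.
    lam, u = spectral_decomposition(x)
    return phi_p(rho, lam[0], p) * u[0] + phi_p(rho, lam[1], p) * u[1]

def H(x, rho, A, b, p):
    # H(x(rho)) = A x - Phi(x(rho)) - b.
    return A @ x - Phi(x, rho, p) - b

def H_jacobian(x, rho, A, p):
    # Jacobian (6): H'_x = A - Phi'_x, with a, b, c as in the formula above.
    n = len(x)
    x1, x2 = x[0], x[1:]
    norm_x2 = np.linalg.norm(x2)
    g = lambda t: np.sign(t) * abs(t)**(p - 1) / phi_p(rho, t, p)**(p - 1)
    if norm_x2 == 0.0:
        return A - g(x1) * np.eye(n)
    lam1, lam2 = x1 - norm_x2, x1 + norm_x2
    a = (phi_p(rho, lam2, p) - phi_p(rho, lam1, p)) / (lam2 - lam1)
    bb = 0.5 * (g(lam2) + g(lam1))
    c = 0.5 * (g(lam2) - g(lam1))
    w = x2 / norm_x2
    dPhi = np.empty((n, n))
    dPhi[0, 0] = bb
    dPhi[0, 1:] = c * w
    dPhi[1:, 0] = c * w
    dPhi[1:, 1:] = a * np.eye(n - 1) + (bb - a) * np.outer(w, w)
    return A - dPhi
```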


3. Levenberg-Marquardt method. For the SOCAVE (3), Miao et al. [26] have shown that the SOCAVE (3) has a unique solution if all singular values of A exceed 1. In view of this, we suppose throughout this paper that all singular values of A exceed 1, so that the SOCAVE (3) has a unique solution. Now, we consider the Levenberg-Marquardt method with Armijo line search mentioned in [13] for solving the SOCAVE (3) and show its convergence properties. The general iterative scheme is

x^{k+1} = x^k + α_k d_k,   k = 0, 1, 2, 3, · · · ,

where α_k ∈ IR₊ is a step size and d_k ∈ IRn is the search direction; more specifically, d_k is the Levenberg-Marquardt direction (4) and α_k is determined by the Armijo line search. For convenience, we define the merit function Ψ as

Ψ(w) = (1/2) ‖H(w)‖².

Algorithm 3.1. [Levenberg-Marquardt Algorithm]

Step 0: Choose an initial point w^0 = x^0(ρ_0) ∈ IRn with any ρ_0 ∈ IR₊₊, and parameters β, ϱ ∈ (0, 1), γ ∈ [1, 2], σ ∈ (0, 1/2). Set k := 0.

Step 1: If ‖H(w^k)‖ = 0, then stop. Otherwise, go to Step 2.

Step 2: Set

µ_k = ‖H(w^k)‖^γ,   d_k = −(J(w^k)^T J(w^k) + µ_k I)^{−1} J(w^k)^T H(w^k),

where J(w^k) denotes the Jacobian matrix H′(w^k) of H at w^k given by (6). If d_k satisfies

‖H(w^k + d_k)‖ ≤ ϱ ‖H(w^k)‖,   (7)

then set x^{k+1}(ρ_k) = w^k + d_k, ρ_{k+1} = (µ_k/(1 + µ_k)) ρ_k and w^{k+1} := x^{k+1}(ρ_{k+1}). Otherwise, go to Step 3.

Step 3: Find the smallest nonnegative integer m = m_k such that

Ψ(w^k + β^m d_k) ≤ Ψ(w^k) + σ β^m ∇Ψ(w^k)^T d_k.   (8)

Set α_k := β^{m_k} and x^{k+1}(ρ_k) = w^k + α_k d_k. Then, set ρ_{k+1} = α_k ρ_k, w^{k+1} = x^{k+1}(ρ_{k+1}), and go to Step 4.

Step 4: Set k := k + 1 and go back to Step 1.
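The following is a minimal Python sketch of Algorithm 3.1, reusing H and H_jacobian from the sketch at the end of Section 2; the stopping rule ‖∇Ψ(w^k)‖ ≤ 10⁻⁵ and the default parameter values follow the settings reported in Section 4, while the backtracking cap is our own safeguard.

```python
import numpy as np
# assumes H and H_jacobian from the sketch at the end of Section 2

def levenberg_marquardt_socave(A, b, p=2.0, rho=1e-3, beta=0.5, varrho=0.5,
                               gamma=1.0, sigma=0.2, tol=1e-5, max_iter=100):
    n = len(b)
    x = np.random.rand(n)                            # x^0 = rand(n, 1)
    for _ in range(max_iter):
        Hw = H(x, rho, A, b, p)
        nrm = np.linalg.norm(Hw)
        if nrm == 0.0:                               # Step 1
            break
        J = H_jacobian(x, rho, A, p)
        grad = J.T @ Hw                              # grad Psi(w) = J^T H(w)
        if np.linalg.norm(grad) <= tol:              # stopping rule (Section 4)
            break
        mu = nrm**gamma                              # Step 2
        d = np.linalg.solve(J.T @ J + mu * np.eye(n), -grad)
        if np.linalg.norm(H(x + d, rho, A, b, p)) <= varrho * nrm:
            x = x + d                                # full step accepted, (7)
            rho = mu / (1.0 + mu) * rho
        else:                                        # Step 3: Armijo search (8)
            psi = 0.5 * nrm**2
            alpha = 1.0
            for _ in range(60):                      # backtracking cap (ours)
                trial = H(x + alpha * d, rho, A, b, p)
                if 0.5 * trial @ trial <= psi + sigma * alpha * (grad @ d):
                    break
                alpha *= beta
            x = x + alpha * d
            rho = alpha * rho
    return x
```

Under the standing assumption that all singular values of A exceed 1, Theorem 3.2 below guarantees that the iterates approach the unique solution.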

As indicated in [13], the direction d_k in Step 2 is unique. Hence, Algorithm 3.1 is well defined. The parameter σ guarantees that the line search (8) makes sense, while ϱ allows the algorithm to skip the line search when ‖H(w)‖ drops fast enough.

Theorem 3.1. Suppose that all singular values of A exceed 1 and let x* be a solution of the SOCAVE (3). Then, the function H(w) provides a local error bound on a neighborhood N(x*, r) for the SOCAVE (3), i.e., there exists a constant c > 0 such that

‖w − x*‖ ≤ c ‖H(w)‖,   ∀w ∈ N(x*, r).

Proof. Since x* is a solution of the SOCAVE (3), we have H(w*) := H(x*(0)) = 0. Then, it follows that

H(w) = H(w) − H(w*) = H′(ξ)(w − w*) = (A − Φ′_x(ξ))(w − x*),

where the second equality holds due to the smoothness of Φ for any ρ > 0, with ξ = (1 − t)x* + tw and 0 < t < 1. Since all singular values of A exceed 1, it follows from Theorem 4.1 in [27] that H′(ξ) = A − Φ′_x(ξ) is nonsingular. Hence, we have

w − x* = (A − Φ′_x(ξ))^{−1} H(w),

which implies that

‖w − x*‖ ≤ ‖(A − Φ′_x(ξ))^{−1} H(w)‖ ≤ ‖(A − Φ′_x(ξ))^{−1}‖ ‖H(w)‖ = c ‖H(w)‖,

where c = ‖(A − Φ′_x(ξ))^{−1}‖. The proof is complete. □

In order to establish the global convergence results for the Levenberg-Marquardt Algorithm 3.1, we assume that ∇Ψ(w^k) ≠ 0 for all k, which ensures that the algorithm can generate an infinite sequence {w^k}.

Theorem 3.2. Suppose that the sequence {w^k} is generated by the Levenberg-Marquardt Algorithm 3.1 with Armijo line search. Then, any accumulation point of the sequence {w^k} is a stationary point of Ψ. Moreover, if all singular values of A exceed 1, the sequence {w^k} converges to the solution of the SOCAVE (3).

Proof. Since d_k = −(J(w^k)^T J(w^k) + µ_k I)^{−1} J(w^k)^T H(w^k) and ∇Ψ(w^k) ≠ 0 for all k, we have

(∇Ψ(w^k))^T d_k = −(J(w^k)^T H(w^k))^T (J(w^k)^T J(w^k) + µ_k I)^{−1} J(w^k)^T H(w^k) < 0,

where the inequality holds because J(w^k)^T J(w^k) + µ_k I is a positive definite matrix for any µ_k > 0. Together with the formulae (7) and (8), the sequence {Ψ(w^k)}

is monotonically decreasing. By Theorem 3.1, it is easy to see that the sequence {w^k} is bounded. Therefore, the sequence {w^k} has accumulation points. Moreover, from the monotone decrease of {Ψ(w^k)}, the sequence {µ_k} is also monotonically decreasing, so it has a limit µ*. To proceed, we discuss two cases for the sequence {µ_k}: (i) µ* = 0 and (ii) µ* ≠ 0.

Case (i): µ* = 0, i.e., µ_k → 0. In this case, we have H(w^k) → 0, which shows that any accumulation point of the sequence {w^k} is a stationary point of Ψ.

Case (ii): µ* ≠ 0, i.e., µ* > 0. For convenience and without loss of generality, we assume that the sequence {w^k} itself is convergent. From µ* ≠ 0, it follows that lim_{k→∞} Ψ(w^k) ≠ 0 and

(∇Ψ(w^k))^T d_k = (J(w^k)^T H(w^k))^T d_k
= −((J(w^k)^T J(w^k) + µ_k I) d_k)^T d_k
= −d_k^T (J(w^k)^T J(w^k) + µ_k I) d_k   (9)
≤ −µ_k ‖d_k‖²
≤ −µ* ‖d_k‖².

From (9) and the fact that the gradient ∇Ψ(w^k) is bounded on the convergent sequence {w^k}, we have

lim sup_{k→∞} ‖d_k‖ < ∞.


On the other hand, using lim_{k→∞} µ_k = µ* > 0 and the monotone decrease of {µ_k}, it follows that for all large k there exists a constant κ > 0 such that

‖J(w^k)^T J(w^k) + µ_k I‖ ≤ κ.

This clearly yields

‖d_k‖ ≥ ‖∇Ψ(w^k)‖ / ‖J(w^k)^T J(w^k) + µ_k I‖ ≥ ‖∇Ψ(w^k)‖ / κ.

This together with (9) leads to lim sup_{k→∞} ‖d_k‖ = 0 or lim sup_{k→∞} |(∇Ψ(w^k))^T d_k| > 0. If lim sup_{k→∞} ‖d_k‖ = 0, then lim_{k→∞} ‖∇Ψ(w^k)‖ = 0, i.e., the accumulation point of {w^k} is a stationary point of Ψ. If lim sup_{k→∞} |(∇Ψ(w^k))^T d_k| > 0, then, since lim sup_{k→∞} ‖d_k‖ < ∞, the sequence {d_k} is uniformly gradient related to {w^k}. Therefore, by [1, Proposition 1.2.1], we conclude that any accumulation point of {w^k} is a stationary point of the function Ψ.

If all singular values of A exceed 1, it follows that the Jacobian matrix J(w) = H′(w) is nonsingular. By ∇Ψ(w) = J(w)^T H(w), it is easy to verify that a stationary point w of the function Ψ satisfies H(w) = 0, which implies that the corresponding x is the solution of the SOCAVE (3). The proof is complete. □

Remark 1. (a): As discussed in the literature, the Armijo line search in Algorithm 3.1 could be replaced by the Goldstein line search. The corresponding convergence results can be achieved as well.

(b): Based on the strong semismoothness of the absolute value function associated with the SOC, it is easy to see that the Jacobian matrix H′_x(w) is Lipschitz continuous on some neighborhood of the solution x*. By this, under the assumption that all singular values of A exceed 1, and similar to Theorem 3.1 in [33] or Theorem 2.2 in [10], the sequence {w^k} generated by the Levenberg-Marquardt algorithm converges to the solution of the SOCAVE (3) quadratically.

4. Numerical Experiments. This section is devoted to the numerical implementations. Besides providing some numerical evidence of the efficiency of Algorithm 3.1, we also carry out a numerical comparison between the Levenberg-Marquardt algorithm and the smoothing Newton algorithm (a promising approach for solving the SOCAVE recently studied in [27, 28]). In particular, we adopt the performance profile to describe the influence of perturbing the parameter p.

In our tests, the parameters are set as below:

ρ_0 = 0.001, x^0 = rand(n, 1), β = 0.5, σ = 0.2, ϱ = 0.5 and γ = 1.

We stop the iterations when ‖∇Ψ(w^k)‖ ≤ 10^{−5} or the number of iterations exceeds 100. All the experiments are done on a PC with an Intel(R) CPU of 2.30GHz and 4.00GB of RAM, and all the program codes are written and run in Matlab.

To present the numerical results, we consider the performance profile introduced in [8]. In other words, we regard Algorithm 3.1 corresponding to each p = 1.1, 2, 3, 10, 20, 80 as a solver, and assume that there are n_s solvers and n_q test problems from the test set P. We use the computing time as the performance measure for Algorithm 3.1 with different values of p satisfying the condition p > 1 in (5). For each problem q and solver s, let

f_{q,s} = computing time required to solve problem q by solver s.

We employ the performance ratio

r_{q,s} = f_{q,s} / min{f_{q,s} : s ∈ S},

where S is the set of the six solvers. We assume that a parameter r_M with r_{q,s} ≤ r_M for all q, s is chosen, and that r_{q,s} = r_M if and only if solver s does not solve problem q.

In order to obtain an overall assessment for each solver, we define

ρ_s(τ) := (1/n_q) size{q ∈ P : r_{q,s} ≤ τ},

which is called the performance profile of the computing time for solver s. Then, ρ_s(τ) is the probability for solver s ∈ S that the performance ratio r_{q,s} is within a factor τ ∈ IR of the best possible ratio; see [8].
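A small sketch of this computation in NumPy (our own function name performance_profile): given an (n_q × n_s) array of times f_{q,s}, with failures pre-set to a large value so that r_{q,s} = r_M when solver s fails on problem q, it returns one curve ρ_s(τ) per solver.

```python
import numpy as np

def performance_profile(times, r_M, taus):
    # times[q, s] = f_{q,s}; ratios r_{q,s} = f_{q,s} / min_s f_{q,s}.
    ratios = times / times.min(axis=1, keepdims=True)
    ratios = np.minimum(ratios, r_M)     # cap at r_M (failed runs hit the cap)
    n_q = times.shape[0]
    # rho_s(tau) = (1/n_q) * #{q : r_{q,s} <= tau}; one row per tau value.
    return np.array([(ratios <= tau).sum(axis=0) / n_q for tau in taus])
```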

Additionally, we provide the performance profile for each problem to compare the Levenberg-Marquardt algorithm and the smoothing Newton algorithm, which was recently studied in [27, 28] for solving the SOCAVE (3) as well. The performance profiles for Problems 4.1–4.3 use 300 test problems each, while Problem 4.4 uses 40 test problems; all are run with the fixed value p = 2 and n = 300.

Problem 4.1. Consider the SOCAVE (3) generated in the following way: first, choose a random matrix C ∈ IRn×n with every element drawn from a uniform distribution on [−10, 10]. We compute the minimal singular value σ of C and let σ̄ := min{1, σ}. Next, we divide C by σ̄ multiplied by a random number in the interval [0, 1], and denote the resulting matrix by A. Accordingly, the minimum singular value of A exceeds 1, which ensures that the SOCAVE (3) has a unique solution. We choose b ∈ IRn randomly with every element in [0, 1]; the resulting SOCAVE (3) can then be solved by Algorithm 3.1 in this paper. The initial point is chosen entrywise in the range [0, 1]. Note that a similar way to construct the problem was given in [12].
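A small NumPy sketch of this construction (our own function name generate_problem; the scaling follows the textual description above, so the minimum singular value of A exceeds 1):

```python
import numpy as np

def generate_problem(n, rng=np.random.default_rng()):
    C = rng.uniform(-10.0, 10.0, size=(n, n))
    sigma = np.linalg.svd(C, compute_uv=False).min()   # minimal singular value
    sigma_bar = min(1.0, sigma)
    A = C / (sigma_bar * rng.uniform(0.0, 1.0))        # divide by sigma_bar * rand
    b = rng.uniform(0.0, 1.0, size=n)                  # entries of b on [0, 1]
    x0 = rng.uniform(0.0, 1.0, size=n)                 # initial point on [0, 1]
    return A, b, x0
```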

Problem 4.2. Consider the SOCAVE (3) generated in the following way: choose a random matrix A ∈ IRn×n with every element drawn from a uniform distribution on [−10, 10]. In order to ensure that the SOCAVE (3) is solvable, we update the matrix A as follows: let [U, S, V] = svd(A). If min_i S(i, i) = 0 for i = 1, · · · , n, we set A = U(S + 0.01E)V, and then A = (1.01/λ_min(A^T A)) A. We choose b ∈ IRn randomly with every element in [0, 10]. The initial point is chosen entrywise in the range [0, 1].

Problem 4.3. We consider the SOCAVE (3) generated in the same way as in Problem 4.2. However, here K is the Cartesian product of single SOCs, given by K := K^{n_1} × · · · × K^{n_r}, where n_1 = · · · = n_r = n/r.


Figure 1. Performance profile of computing time of Problem 4.1 with different p (p = 1.1, 2, 3, 10, 20, 80).

Figure 2. Performance profile of computing time of Problem 4.1 with the Levenberg-Marquardt and smoothing Newton methods.

As seen, the above Problems 4.1–4.3 are all generated randomly. In the example below, we consider the case where the matrix A and the vector b in the SOCAVE (3) are fixed.


Figure 3. Performance profile of computing time of Problem 4.2 with different p (p = 1.1, 2, 3, 10, 20, 80).

Figure 4. Performance profile of computing time of Problem 4.2 with the Levenberg-Marquardt and smoothing Newton methods.

Problem 4.4. Consider the SOCAVE (3) generated in the following way: let

A = [ 3 2 2 · · · 2 ;
      0 3 2 · · · 2 ;
      0 0 3 · · · 2 ;
      · · · ;
      2 2 2 · · · 3 ],   b = (−2, 2, −2, 2, . . .)^T.


Figure 5. Performance profile of computing time of Problem 4.3 with different p (p = 1.1, 2, 3, 10, 20, 80).

Figure 6. Performance profile of computing time of Problem 4.3 with the Levenberg-Marquardt and smoothing Newton methods.

The initial point is chosen entrywise in the range [0, 1]. It is obvious that the minimum singular value of A exceeds 1, so the SOCAVE (3) has a unique solution. In this example, the dimension of A is 40.


Figure 7. Performance profile of computing time of Problem 4.4 with different p (p = 1.1, 2, 3, 10, 20, 80).

Figure 8. Performance profile of computing time of Problem 4.4 with the Levenberg-Marquardt and smoothing Newton methods.

In our experiments, each setting of the simulations for every problem is randomly generated ten times, and the success rates of both the Levenberg-Marquardt algorithm and the smoothing Newton method are 100%.

We summarize our numerical observations as below:


• From Figures 1, 3, 5, 7, we see that the proposed algorithm is not much affected when p is perturbed.

• A small value of p, close to 1, seems not to be a good choice for the proposed algorithm.

• In general, the Levenberg-Marquardt algorithm needs more iterations than the smoothing Newton method on each problem. Nonetheless, from Figure 1 to Figure 4, we see that the performance in computing time of the Levenberg-Marquardt algorithm is evidently better than that of the smoothing Newton algorithm.

• All the above results show the efficiency of the Levenberg-Marquardt algorithm. Like the smoothing Newton method, this study suggests that the Levenberg-Marquardt algorithm could be another good choice for solving the SOCAVE.

Acknowledgments. We would like to thank the guest editors for giving us an opportunity to contribute this paper to the special issue in memory of Prof. Hang- Chin Lai.

REFERENCES

[1] D.P. Bertsekas, Nonlinear Programming, Athena Scientific, Massachusetts, 1995.

[2] L. Caccetta, B. Qu and G.-L. Zhou, A globally and quadratically convergent method for absolute value equations, Computational Optimization and Applications, 48 (2011), 45–58.

[3] J.-S. Chen, The convex and monotone functions associated with second-order cone, Optimiza- tion, 55 (2006), 363–385.

[4] J.-S. Chen, SOC Functions and Their Applications, Springer Optimization and Its Applica- tions 143, Springer, Singapore, (2019).

[5] J.-S. Chen, X. Chen and P. Tseng, Analysis of nonsmooth vector-valued functions associated with second-order cones, Mathematical Programming, 101 (2004), 95–117.

[6] J.-S. Chen and S.-H. Pan, A survey on SOC complementarity functions and solution methods for SOCPs and SOCCPs, Pacific Journal of Optimization, 8 (2012), 33–74.

[7] J.-S. Chen and P. Tseng, An unconstrained smooth minimization reformulation of second- order cone complementarity problem, Mathematical Programming, 104 (2005), 293–327.

[8] E.D. Dolan and J.J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming, 91 (2002), 201–213.

[9] J. Faraut and A. Korányi, Analysis on Symmetric Cones, Oxford Mathematical Monographs, Oxford University Press, New York, 1994.

[10] J.-Y. Fan and Y.-X. Yuan, On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption, Computing, 74 (2005), 23–39.

[11] S.-L. Hu and Z.-H. Huang, A note on absolute value equations, Optimization Letters, 4 (2010), 417–424.

[12] S.-L. Hu, Z.-H. Huang and Q. Zhang, A generalized Newton method for absolute value equa- tions associated with second order cones, Journal of Computational and Applied Mathematics, 235 (2011), 1490–1501.

[13] J. Iqbal, A. Iqbal and M. Arif, Levenberg-Marquardt method for solving systems of absolute value equations, Journal of Computational and Applied Mathematics, 282 (2015), 134–138.

[14] X.-Q. Jiang, A Smoothing Newton Method for Solving Absolute Value Equations, Advanced Materials Research, 765-767 (2013), 703–708.

[15] X.-Q. Jiang and Y. Zhang, A smoothing-type algorithm for absolute value equations, Journal of Industrial and Management Optimization, 9 (2013), 789–798.

[16] S. Ketabchi and H. Moosaei, Minimum norm solution to the absolute value equation in the convex case, Journal of Optimization Theory and Applications, 154 (2012), 1080–1087.


[17] K. Levenberg, A method for the solution of certain nonlinear problems in least squares, Quarterly of Applied Mathematics, 2 (1944), 164–168.

[18] O.L. Mangasarian, Absolute value programming, Computational Optimization and Applica- tions, 36 (2007), 43–53.

[19] O.L. Mangasarian, Absolute value equation solution via concave minimization, Optimization Letters, 1 (2007), 3–5.

[20] O.L. Mangasarian, A generalized Newton method for absolute value equations, Optimization Letters, 3 (2009), 101–108.

[21] O.L. Mangasarian, Absolute value equation solution via dual complementarity, Optimization Letters, 7 (2013), 625–630.

[22] O.L. Mangasarian, Linear complementarity as absolute value equation solution, Optimization Letters, 8 (2014), 1529–1534.

[23] O.L. Mangasarian, Absolute value equation solution via linear programming, Journal of Op- timization Theory and Applications, 161 (2014), 870–876.

[24] O.L. Mangasarian and R.R. Meyer, Absolute value equation, Linear Algebra and Its Appli- cations, 419 (2006), 359–367.

[25] D.W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, Journal of the Society for Industrial and Applied Mathematics, 11 (1963), 431–441.

[26] X.-H. Miao, W.-M. Hsu, C.T. Nguyen and J.-S. Chen, The solvabilities of three optimization problems associated with second-order cone, Journal of Nonlinear and Convex Analysis, 22 (2021), 937–967.

[27] X.-H. Miao, J.-T. Yang, B. Saheya and J.-S. Chen, A smoothing Newton method for abso- lute value equation associated with second-order cone, Applied Numerical Mathematics, 120 (2017), 82–96.

[28] C.T. Nguyen, B. Saheya, Y.-L. Chang and J.-S. Chen, Unified smoothing functions for abso- lute value equation associated with second-order cone, Applied Numerical Mathematics, 135 (2019), 206–227.

[29] O.A. Prokopyev, On equivalent reformulations for absolute value equations, Computational Optimization and Applications, 44 (2009), 363–372.

[30] J. Rohn, A theorem of the alternative for the equation Ax + B|x| = b, Linear and Multilinear Algebra, 52 (2004), 421–426.

[31] J. Rohn, An algorithm for solving the absolute value equation, Electronic Journal of Linear Algebra, 18 (2009), 589–599.

[32] S. Yamanaka and M. Fukushima, A branch and bound method for absolute value programs, Optimization, 63 (2014), 305–319.

[33] N. Yamashita and M. Fukushima, On the rate of convergence of the Levenberg-Marquardt method, Topics in Nonlinear Analysis, edited by G. Alefeld and X. Chen, pp. 239–249, Springer-Verlag, 2001.

[34] X. Zhu and G.-H. Lin, Improved convergence results for a modified Levenberg-Marquardt method for nonlinear equations and applications in MPCC, Optimization Methods and Soft- ware, 31 (2016), 791–804.

[35] C. Zhang and Q.-J. Wei, Global and finite convergence of a generalized Newton method for absolute value equations, Journal of Optimization Theory and Applications, 143 (2009), 391–403.

Received February 2020; revised October 2020.

E-mail address: xinhemiao@tju.edu.cn
E-mail address: yaokai@tju.edu.cn
E-mail address: yangcy@math.ntnu.edu.tw
E-mail address: jschen@math.ntnu.edu.tw
