An approximate lower order penalty approach for solving second-order cone linear complementarity problems

Zijun Hao^1 · Chieu Thanh Nguyen^2 · Jein-Shan Chen^2

Received: 24 March 2020 / Accepted: 15 November 2021 / Published online: 3 December 2021

© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021

Abstract

Based on a class of smoothing approximations to the projection function onto the second-order cone, an approximate lower order penalty approach for solving second-order cone linear complementarity problems (SOCLCPs) is proposed, and four kinds of specific smoothing approximations are considered. In light of this approach, the SOCLCP is approximated by asymptotic lower order penalty equations with a penalty parameter and a smoothing parameter. When the penalty parameter tends to positive infinity and the smoothing parameter monotonically decreases to zero, we show that the solution sequence of the asymptotic lower order penalty equations converges to the solution of the SOCLCP at an exponential rate under a mild assumption. A corresponding algorithm is constructed and numerical results are reported to illustrate the feasibility of this approach. The performance profile of the four specific smoothing approximations is presented, and the generalizations of two approximations are also investigated.

Keywords Second-order cone · Linear complementarity problem · Lower order penalty approach · Exponential convergence rate

Mathematics Subject Classification 90C25 · 90C30 · 90C33

The author's work is supported by the National Natural Science Foundation of China (Nos. 11661002, 11871383), the Natural Science Fund of Ningxia (No. 2020AAC03236), and the First-class Disciplines Foundation of Ningxia (No. NXYLXK2017B09). J.-S. Chen: The author's work is supported by the Ministry of Science and Technology, Taiwan.

Jein-Shan Chen (corresponding author)
jschen@math.ntnu.edu.tw

Zijun Hao
zijunhao@126.com

Chieu Thanh Nguyen
thanhchieu90@gmail.com

1 School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
2 Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan


1 Introduction

This paper targets the following second-order cone linear complementarity problem (SOCLCP), which is to find x ∈ IR^n such that

$$x \in \mathcal{K}, \quad Ax - b \in \mathcal{K}, \quad x^T(Ax - b) = 0, \qquad (1)$$

where A is an n × n matrix, b is a vector in IR^n, and K is the Cartesian product of second-order cones (SOCs), also called Lorentz cones [7,18]. In other words,

$$\mathcal{K} := \mathcal{K}^{n_1} \times \cdots \times \mathcal{K}^{n_r} \qquad (2)$$

with r, n_1, . . . , n_r ≥ 1, n_1 + · · · + n_r = n, and

$$\mathcal{K}^{n_i} := \left\{ (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n_i - 1} \;\middle|\; \|x_2\| \le x_1 \right\}, \quad i = 1, \dots, r,$$

where ‖ · ‖ denotes the Euclidean norm and (x_1, x_2) := (x_1, x_2^T)^T. Note that K^1 denotes the set of nonnegative real numbers IR_+. The SOCLCP, as an extension of the linear complementarity problem (LCP), has a wide range of applications in linear and quadratic programming problems, computer science, game theory, economics, finance, engineering, and network equilibrium problems [3,15,17,26,27,30].

During the past several years, many methods have been proposed for solving the SOCLCP (1)–(2), including the interior-point method [1,28,32], the smoothing Newton method [14,19,24], the smoothing-regularization method [23], the semismooth Newton method [25,33], the merit function method [5,10,12], and the matrix splitting method [22,41], etc.

Although the effectiveness of some methods has improved substantially in recent years, the fact remains that many complementarity problems still require efficient and accurate numerical methods. Penalty methods are well known for solving constrained optimization problems and possess many nice properties. More specifically, the l_1 exact penalty function method and the lower order penalty function method hold many nice properties and have attracted much attention [2,20,29,34,39,40].

Smoothing versions of the exact penalty methods have also been proposed [35,37,38]. In addition, Wang and Yang [36] focused on the power of the lower order penalty function and proposed a power penalty method for solving the LCP, based on approximating the LCP by nonlinear equations. They showed that, under some mild assumptions, the solution sequence of the nonlinear equations converges to the solution of the LCP at an exponential rate when the penalty parameter tends to positive infinity. Based on the method in [36], Hao et al. [21] proposed a power penalty method for solving the SOCLCP with a single K = K^n, i.e.,

$$x \in \mathcal{K}^n, \quad Ax - b \in \mathcal{K}^n, \quad x^T(Ax - b) = 0, \qquad (3)$$

where A ∈ IR^{n×n} and b ∈ IR^n. In particular, they consider the power penalty equations

$$Ax - \alpha [x]_-^{1/k} = b, \qquad (4)$$

where k ≥ 1 and α ≥ 1 are parameters,

$$[x]_-^{1/k} = [\lambda_1(x)]_-^{1/k}\, u_x^{(1)} + [\lambda_2(x)]_-^{1/k}\, u_x^{(2)}$$

with [t]_- = max{0, −t} and the spectral decomposition to be introduced later in (5). Under a mild assumption on the matrix A, as α → +∞, the solution sequence of (4) converges to the solution of the SOCLCP (3) at an exponential rate.

In this paper, we further improve and extend the method and the problem studied in [21]. We first generalize [x]_-^{1/k} in (4) to a general lower order penalty term and approximate it by a class of smoothing functions. We then show that the solution sequence of the approximating lower order penalty equations converges to the solution of the SOCLCP (1) at an exponential rate O(α^{−1/σ}) when α → +∞ and μ → 0^+. This generalizes all its counterparts in the literature. Moreover, a corresponding algorithm is constructed and numerical results are also reported to examine the feasibility of the proposed method. The performance profile of the specific smoothing approximations is presented, and the generalizations of two approximations are also investigated.

This paper is organized as follows. In Sect. 2, we review some properties related to the single SOC, which are the basis for our subsequent analysis. In Sect. 3, a class of approximation functions for the lower order penalty function is constructed, and four kinds of specific smoothing approximations are investigated. In Sect. 4, we study the approximating lower order penalty equations for solving the SOCLCP (1) and establish the convergence analysis. In Sect. 5, a corresponding algorithm is constructed and preliminary numerical experiments are presented. The performance profiles of the four specific smoothing approximations considered and the generalizations of two approximations are also discussed. Finally, we draw the conclusion in Sect. 6.

For simplicity, we denote the interior of the single SOC K^n by int(K^n). For any x, y in IR^n, we write x ⪰_{K^n} y if x − y ∈ K^n and write x ≻_{K^n} y if x − y ∈ int(K^n). In other words, we have x ⪰_{K^n} 0 if and only if x ∈ K^n, and x ≻_{K^n} 0 if and only if x ∈ int(K^n). We usually denote (x, y) := (x^T, y^T)^T for the concatenation of two column vectors x, y for simplicity. The notation ‖ · ‖_p denotes the usual l_p-norm on IR^n for any p ≥ 1. In particular, it is the Euclidean norm ‖ · ‖ when p = 2.

2 Preliminary results

In this section, we first recall some basic concepts and preliminary results related to a single SOC K = K^n that will be used in the subsequent analysis. All of the analysis is then carried over to the general structure K in (2). For any x = (x_1, x_2) ∈ IR × IR^{n−1}, y = (y_1, y_2) ∈ IR × IR^{n−1}, their Jordan product [7,18] is defined as

$$x \circ y := \left( \langle x, y \rangle,\; y_1 x_2 + x_1 y_2 \right).$$

We write x + y to mean the usual componentwise addition of vectors and x^2 to mean x ∘ x. The identity element under this product is e = (1, 0, . . . , 0)^T ∈ IR^n. It is known that x^2 ∈ K^n for all x ∈ IR^n. Moreover, if x ∈ K^n, then there is a unique vector in K^n, denoted by x^{1/2}, such that (x^{1/2})^2 = x^{1/2} ∘ x^{1/2} = x. For any x ≠ 0 in IR^n, we define x^0 = e. For any integer k ≥ 1, we recursively define the powers of an element as x^k = x ∘ x^{k−1}, and define x^{−k} = (x^k)^{−1} if x ∈ int(K^n). The Jordan product is not associative for n > 2, but it is power associative, i.e., x ∘ (x ∘ x) = (x ∘ x) ∘ x. Thus, for any positive integer p, the power x^p is well defined, and x^{m+n} = x^m ∘ x^n for all positive integers m and n. Note that K^n is not closed under the Jordan product for n > 2.

In the following, we recall the spectral decomposition of x with respect to the SOC; see [5–8,10–12,18,19,33]. For x = (x_1, x_2) ∈ IR × IR^{n−1}, it is given by

$$x = \lambda_1(x)\, u_x^{(1)} + \lambda_2(x)\, u_x^{(2)}, \qquad (5)$$


where, for i = 1, 2,

$$\lambda_i(x) = x_1 + (-1)^i \|x_2\|, \qquad u_x^{(i)} = \begin{cases} \dfrac{1}{2}\left(1,\; (-1)^i \dfrac{x_2}{\|x_2\|}\right) & \text{if } x_2 \neq 0, \\[6pt] \dfrac{1}{2}\left(1,\; (-1)^i w\right) & \text{if } x_2 = 0, \end{cases} \qquad (6)$$

with w ∈ IR^{n−1} being any unit vector. The two scalars λ_1(x) and λ_2(x) are called the spectral values of x, while the two vectors u_x^{(1)} and u_x^{(2)} are called the spectral vectors of x. Moreover, it is obvious that the spectral decomposition of x ∈ IR^n is unique if x_2 ≠ 0.
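As an illustration of (5)–(6), the following Python sketch (our own, not part of the paper; the test vector and the use of NumPy are arbitrary choices) computes the spectral values and vectors of a given x and verifies the reconstruction (5).

```python
# Illustrative sketch of the spectral decomposition (5)-(6) with respect to K^n.
import numpy as np

def spectral_decomposition(x):
    """Return spectral values (lam1, lam2) and vectors (u1, u2) of x w.r.t. K^n."""
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    if nrm > 0:
        w = x2 / nrm
    else:                          # x2 = 0: any unit vector w works
        w = np.zeros_like(x2)
        if w.size:
            w[0] = 1.0
    lam1, lam2 = x1 - nrm, x1 + nrm
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return (lam1, lam2), (u1, u2)

x = np.array([0.5, 1.0, -2.0])                      # arbitrary test vector
(lam1, lam2), (u1, u2) = spectral_decomposition(x)
assert np.allclose(x, lam1 * u1 + lam2 * u2)        # reconstruction (5)
print(lam1, lam2)                                   # x is in K^n iff lam1 >= 0
```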

Some basic properties of the spectral decomposition in the Jordan algebra associated with SOC are stated as below, whose proofs can be found in [6,7,18,19].

Proposition 2.1 For any x = (x_1, x_2) ∈ IR × IR^{n−1} with the spectral values λ_1(x), λ_2(x) and spectral vectors u_x^{(1)}, u_x^{(2)} given as in (6), we have:

(a) $u_x^{(1)} \circ u_x^{(2)} = 0$, $u_x^{(i)} \circ u_x^{(i)} = u_x^{(i)}$, and $\|u_x^{(i)}\|^2 = 1/2$ for i = 1, 2.
(b) λ_1(x), λ_2(x) are nonnegative (positive) if and only if x ∈ K^n (x ∈ int(K^n)).
(c) For any x ∈ IR^n, x ⪰_{K^n} 0 if and only if ⟨x, y⟩ ≥ 0 for all y ⪰_{K^n} 0.

The spectral decomposition (5)–(6) and Proposition 2.1 indicate that x^k can be described as x^k = λ_1^k(x) u_x^{(1)} + λ_2^k(x) u_x^{(2)}. For any x ∈ IR^n, let [x]_+ denote the projection of x onto K^n, and [x]_- be the projection of −x onto the dual cone (K^n)^* of K^n, where the dual cone (K^n)^* is defined by (K^n)^* := {y ∈ IR^n | ⟨x, y⟩ ≥ 0, ∀x ∈ K^n}. In fact, by Proposition 2.1, the dual cone of K^n is itself, i.e., (K^n)^* = K^n. Due to the special structure of K^n, the explicit formula for the projection of x = (x_1, x_2) ∈ IR × IR^{n−1} onto K^n is obtained in [14,17,19] as below:

$$[x]_+ = \begin{cases} x & \text{if } x \in \mathcal{K}^n, \\ 0 & \text{if } x \in -\mathcal{K}^n, \\ u & \text{otherwise}, \end{cases} \qquad \text{where } u = \begin{pmatrix} \dfrac{x_1 + \|x_2\|}{2} \\[6pt] \dfrac{x_1 + \|x_2\|}{2} \cdot \dfrac{x_2}{\|x_2\|} \end{pmatrix}.$$

Similarly, the expression of [x]_- can be written out as

$$[x]_- = \begin{cases} 0 & \text{if } x \in \mathcal{K}^n, \\ -x & \text{if } x \in -\mathcal{K}^n, \\ v & \text{otherwise}, \end{cases} \qquad \text{where } v = \begin{pmatrix} -\dfrac{x_1 - \|x_2\|}{2} \\[6pt] \dfrac{x_1 - \|x_2\|}{2} \cdot \dfrac{x_2}{\|x_2\|} \end{pmatrix}.$$

It is easy to verify that x = [x]_+ − [x]_- and

$$[x]_+ = [\lambda_1(x)]_+ u_x^{(1)} + [\lambda_2(x)]_+ u_x^{(2)}, \qquad [x]_- = [\lambda_1(x)]_- u_x^{(1)} + [\lambda_2(x)]_- u_x^{(2)},$$

where [α]_+ = max{0, α} and [α]_- = max{0, −α} for α ∈ IR. Thus, it can be seen that [x]_+, [x]_- ∈ K^n and [x]_+ ∘ [x]_- = 0.
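These identities are easy to check numerically. The sketch below (illustrative only; the random test vector is an assumption) forms [x]_+ and [x]_- from the spectral values and verifies x = [x]_+ − [x]_- and [x]_+ ∘ [x]_- = 0.

```python
# Illustrative check of [x]_+ and [x]_- via the spectral values, and of the
# identities x = [x]_+ - [x]_-  and  [x]_+ o [x]_- = 0.
import numpy as np

def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    if nrm > 0:
        w = x2 / nrm
    else:
        w = np.zeros_like(x2)
        if w.size:
            w[0] = 1.0                               # any unit vector when x2 = 0
    lam = np.array([x1 - nrm, x1 + nrm])
    u = 0.5 * np.array([np.concatenate(([1.0], -w)), np.concatenate(([1.0], w))])
    return lam, u

def proj_plus(x):                                    # projection of x onto K^n
    lam, u = spectral(x)
    return np.maximum(lam, 0.0) @ u

def proj_minus(x):                                   # projection of -x onto (K^n)* = K^n
    lam, u = spectral(x)
    return np.maximum(-lam, 0.0) @ u

def jordan(x, y):                                    # Jordan product x o y
    return np.concatenate(([x @ y], y[0] * x[1:] + x[0] * y[1:]))

rng = np.random.default_rng(1)
x = rng.standard_normal(5)                           # arbitrary test vector
xp, xm = proj_plus(x), proj_minus(x)
assert np.allclose(x, xp - xm)                       # x = [x]_+ - [x]_-
assert np.allclose(jordan(xp, xm), np.zeros(5))      # [x]_+ o [x]_- = 0
print(xp, xm)
```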

Applying these analyses to each single SOC K^{n_i}, i = 1, . . . , r, in (2), we can extend them to the general case K = K^{n_1} × · · · × K^{n_r}. More specifically, for any x = (x_1, . . . , x_r) ∈ IR^{n_1} × · · · × IR^{n_r}, y = (y_1, . . . , y_r) ∈ IR^{n_1} × · · · × IR^{n_r}, their Jordan product is defined as

$$x \circ y := (x_1 \circ y_1, \dots, x_r \circ y_r).$$

Let [x]_+, [x]_- respectively denote the projection of x onto K and the projection of −x onto the dual cone K^* = K. Then

$$[x]_+ := ([x_1]_+, \dots, [x_r]_+), \qquad [x]_- := ([x_1]_-, \dots, [x_r]_-), \qquad (7)$$

where [x_i]_+, [x_i]_-, i = 1, . . . , r, respectively denote the projection of x_i onto the single SOC K^{n_i} and the projection of −x_i onto (K^{n_i})^*.


3 A class of smoothing approximation functions

A class of smoothing functions of the plus function was proposed by Chen and Mangasarian [4]. First, we consider a piecewise continuous function d(t) with a finite number of pieces, which is a density (kernel) function. In other words, it satisfies

$$d(t) \ge 0 \quad \text{and} \quad \int_{-\infty}^{+\infty} d(t)\, dt = 1. \qquad (8)$$

Next, we define $\hat{s}(\mu, t) := \frac{1}{\mu}\, d\!\left(\frac{t}{\mu}\right)$, where μ is a positive parameter. If $\int_{-\infty}^{+\infty} |t|\, d(t)\, dt < +\infty$, then a smoothing approximation for [t]_+ is formed. In particular,

$$\phi_+(\mu, t) = \int_{-\infty}^{+\infty} (t - s)_+\, \hat{s}(\mu, s)\, ds = \int_{-\infty}^{t} (t - s)\, \hat{s}(\mu, s)\, ds \approx [t]_+. \qquad (9)$$

The following proposition states the properties of φ_+(μ, t), whose proofs can be found in [4, Proposition 2.2].

Proposition 3.1 Let d(t) be a density function satisfying (8) and $\hat{s}(\mu, t) = \frac{1}{\mu}\, d\!\left(\frac{t}{\mu}\right)$ with positive parameter μ. Suppose that d(t) is piecewise continuous with a finite number of pieces and $\int_{-\infty}^{+\infty} |t|\, d(t)\, dt < +\infty$. Then, the function φ_+(μ, t) defined by (9) possesses the following properties.

(a) φ_+(μ, t) is continuously differentiable.
(b) $-D_2 \mu \le \phi_+(\mu, t) - [t]_+ \le D_1 \mu$, where $D_1 = \int_{-\infty}^{0} |t|\, d(t)\, dt$ and $D_2 = \max\left\{\int_{-\infty}^{+\infty} t\, d(t)\, dt,\; 0\right\}$.
(c) $\frac{\partial}{\partial t}\phi_+(\mu, t)$ is bounded, satisfying $0 \le \frac{\partial}{\partial t}\phi_+(\mu, t) \le 1$.

From Proposition 3.1(b), we have

$$\lim_{\mu \to 0^+} \phi_+(\mu, t) = [t]_+$$

under the assumptions of this proposition. Applying the same way of generating a smoothing function to approximate [t]_- = max{0, −t}, which appears in equation (4), we also obtain a smoothing approximation as follows:

$$\phi_-(\mu, t) = \int_{-\infty}^{-t} (-t - s)\, \hat{s}(\mu, -s)\, ds = \int_{t}^{+\infty} (s - t)\, \hat{s}(\mu, s)\, ds \approx [t]_-. \qquad (10)$$

Similar to Proposition 3.1, we have the following properties for φ_-(μ, t).
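To make the construction (10) concrete, the following Python snippet (an illustration, not part of the paper; the choice of the logistic kernel d_1, the value μ = 0.1, and the test points are assumptions) evaluates φ_-(μ, t) by numerical quadrature and compares it with [t]_- = max{0, −t}. For this kernel, the quadrature values coincide with the closed form φ_1(μ, t) given in (11) below.

```python
# Numerical sketch of the convolution formula (10) for the logistic kernel d_1.
import numpy as np
from scipy.integrate import quad

def d(t):
    """Logistic kernel e^t/(1+e^t)^2, written in an overflow-safe form."""
    return np.exp(-np.abs(t)) / (1.0 + np.exp(-np.abs(t))) ** 2

def s_hat(mu, t):
    """Scaled kernel  s_hat(mu, t) = (1/mu) d(t/mu)."""
    return d(t / mu) / mu

def phi_minus(mu, t):
    """phi_-(mu, t) = int_t^inf (s - t) s_hat(mu, s) ds, by quadrature."""
    val, _ = quad(lambda s: (s - t) * s_hat(mu, s), t, np.inf)
    return val

mu = 0.1
for t in [-1.0, -0.1, 0.0, 0.1, 1.0]:
    closed = -t + mu * np.log1p(np.exp(t / mu))       # phi_1(mu, t) from (11)
    print(t, phi_minus(mu, t), closed, max(0.0, -t))  # quadrature ~ (11), within O(mu) of [t]_-
```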

Proposition 3.2 Let d(t) and $\hat{s}(\mu, t)$ be as in Proposition 3.1 with the same assumptions. Then, the function φ_-(μ, t) defined by (10) possesses the following properties.

(a) φ_-(μ, t) is continuously differentiable.
(b) $-D_2 \mu \le \phi_-(\mu, t) - [t]_- \le D_1 \mu$, where $D_1 = \int_{0}^{+\infty} |t|\, d(t)\, dt$ and $D_2 = \max\left\{\int_{-\infty}^{+\infty} t\, d(t)\, dt,\; 0\right\}$.
(c) $\frac{\partial}{\partial t}\phi_-(\mu, t)$ is bounded, satisfying $-1 \le \frac{\partial}{\partial t}\phi_-(\mu, t) \le 0$.

Similar to Proposition 3.1, we also obtain $\lim_{\mu \to 0^+} \phi_-(\mu, t) = [t]_-$. Therefore, in view of Propositions 3.1 and 3.2, we know that φ_+(μ, t) defined by (9) and φ_-(μ, t) defined by (10) are smoothing functions of [t]_+ and [t]_-, respectively. Accordingly, using the continuity of composite functions and φ_+(μ, t) ≥ 0, φ_-(μ, t) ≥ 0, we can generate approximating functions (not necessarily smooth) for [t]_+^σ and [t]_-^σ; see the lemma below.

Lemma 3.1 Under the assumptions of Proposition 3.1, let φ_+(μ, t), φ_-(μ, t) be the smoothing functions of [t]_+, [t]_- defined by (9) and (10), respectively. Then, for any σ > 0, we have

(a) $\lim_{\mu \to 0^+} \phi_+(\mu, t)^\sigma = [t]_+^\sigma$,
(b) $\lim_{\mu \to 0^+} \phi_-(\mu, t)^\sigma = [t]_-^\sigma$.

By modifying the smoothing functions used in [4,9,31], we have four specific smoothing functions for [t]_- as well:

$$\phi_1(\mu, t) = -t + \mu \ln\left(1 + e^{t/\mu}\right), \qquad (11)$$

$$\phi_2(\mu, t) = \begin{cases} 0 & \text{if } t \ge \frac{\mu}{2}, \\[4pt] \dfrac{1}{2\mu}\left(-t + \dfrac{\mu}{2}\right)^2 & \text{if } -\frac{\mu}{2} < t < \frac{\mu}{2}, \\[4pt] -t & \text{if } t \le -\frac{\mu}{2}, \end{cases} \qquad (12)$$

$$\phi_3(\mu, t) = \frac{\sqrt{4\mu^2 + t^2} - t}{2}, \qquad (13)$$

$$\phi_4(\mu, t) = \begin{cases} 0 & \text{if } t > 0, \\[4pt] \dfrac{t^2}{2\mu} & \text{if } -\mu \le t \le 0, \\[4pt] -t - \dfrac{\mu}{2} & \text{if } t < -\mu, \end{cases} \qquad (14)$$

where the corresponding kernel functions are

$$d_1(t) = \frac{e^t}{(1 + e^t)^2}, \qquad d_2(t) = \begin{cases} 1 & \text{if } -\frac{1}{2} \le t \le \frac{1}{2}, \\ 0 & \text{otherwise}, \end{cases} \qquad d_3(t) = \frac{2}{(t^2 + 4)^{3/2}}, \qquad d_4(t) = \begin{cases} 1 & \text{if } -1 \le t \le 0, \\ 0 & \text{otherwise}. \end{cases}$$

These specific functions (11)–(14) certainly obey Proposition 3.2 and Lemma 3.1. The graphs of [t]_- and φ_i(μ, t), i = 1, 2, 3, 4, with μ = 0.1 are depicted in Fig. 1.

From Fig. 1, we see that, for a fixed μ > 0, the function φ_2(μ, t) seems to be the one which best approximates the function [t]_- among all φ_i(μ, t), i = 1, 2, 3, 4. Indeed, for a fixed μ > 0 and all t ∈ IR, we have

$$\phi_3(\mu, t) \ge \phi_1(\mu, t) \ge \phi_2(\mu, t) \ge [t]_- \ge \phi_4(\mu, t). \qquad (15)$$
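A quick numerical check of (11)–(15) can also be made. The snippet below (purely illustrative; μ = 0.1 and the sampling grid are arbitrary choices) implements φ_1–φ_4 and verifies the ordering (15) on a grid of t values.

```python
# Check of the ordering (15) among the smoothing functions (11)-(14).
import numpy as np

def phi1(mu, t):
    return -t + mu * np.log1p(np.exp(t / mu))

def phi2(mu, t):
    return np.where(t >= mu / 2, 0.0,
           np.where(t <= -mu / 2, -t, (mu / 2 - t) ** 2 / (2 * mu)))

def phi3(mu, t):
    return (np.sqrt(4 * mu ** 2 + t ** 2) - t) / 2

def phi4(mu, t):
    return np.where(t > 0, 0.0,
           np.where(t < -mu, -t - mu / 2, t ** 2 / (2 * mu)))

mu = 0.1
t = np.linspace(-2, 2, 2001)
tminus = np.maximum(0.0, -t)                          # [t]_-
assert np.all(phi3(mu, t) >= phi1(mu, t) - 1e-12)
assert np.all(phi1(mu, t) >= phi2(mu, t) - 1e-12)
assert np.all(phi2(mu, t) >= tminus - 1e-12)
assert np.all(tminus >= phi4(mu, t) - 1e-12)
print("ordering (15) holds on the sampled grid")
```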


Fig. 1 Graphs of [t]_- and φ_i(μ, t), i = 1, 2, 3, 4, with μ = 0.1

Furthermore, we shall show that φ_2(μ, t) is the function closest to [t]_- in the sense of the infinity norm. For any fixed μ > 0, it is clear that

$$\lim_{|t| \to \infty} \left( \phi_i(\mu, t) - [t]_- \right) = 0, \quad i = 1, 2, 3.$$

The functions φ_i(μ, t) − [t]_-, i = 1, 3, have no stationary point but a unique non-differentiable point t = 0, and φ_2(μ, t) − [t]_- is non-zero only on the interval (−μ/2, μ/2) with $\max_{t \in (-\mu/2, \mu/2)} \left( \phi_2(\mu, t) - [t]_- \right) = \phi_2(\mu, 0)$. These imply that

$$\max_{t \in \mathbb{R}} \left( \phi_i(\mu, t) - [t]_- \right) = \phi_i(\mu, 0), \quad i = 1, 2, 3.$$

Since φ_1(μ, 0) = (ln 2)μ ≈ 0.7μ, φ_2(μ, 0) = μ/8, φ_3(μ, 0) = μ, we obtain

$$\|\phi_1(\mu, \cdot) - [\cdot]_-\|_\infty = (\ln 2)\mu, \qquad \|\phi_2(\mu, \cdot) - [\cdot]_-\|_\infty = \mu/8, \qquad \|\phi_3(\mu, \cdot) - [\cdot]_-\|_\infty = \mu.$$

On the other hand, it is obvious that $\max_{t \in \mathbb{R}} \left| \phi_4(\mu, t) - [t]_- \right| = \mu/2$, which says $\|\phi_4(\mu, \cdot) - [\cdot]_-\|_\infty = \mu/2$. In summary, we have

$$\|\phi_3(\mu, \cdot) - [\cdot]_-\|_\infty > \|\phi_1(\mu, \cdot) - [\cdot]_-\|_\infty > \|\phi_4(\mu, \cdot) - [\cdot]_-\|_\infty > \|\phi_2(\mu, \cdot) - [\cdot]_-\|_\infty. \qquad (16)$$

The orderings (15) and (16) indicate the behavior of φ_i(μ, t), i = 1, 2, 3, 4, for fixed μ > 0. When taking μ → 0^+, we know $\lim_{\mu \to 0^+} \phi_i(\mu, t) = [t]_-$, i = 1, 2, 3, 4, and φ_2(μ, t) is the closest to [t]_-, which can be verified by the geometric views depicted in Fig. 2.


Fig. 2 Graphs of φ_i(μ, t), i = 1, 2, 3, 4, with different μ

Remark 3.1 For any μ > 0, σ > 0 and the continuously differentiable φ_-(μ, t) defined by (10), it can be easily seen that φ_-(μ, t)^σ is a continuous function of t, but may not be differentiable. For example, φ_1(μ, t)^σ and φ_3(μ, t)^σ are continuously differentiable, but φ_2(μ, t)^σ and φ_4(μ, t)^σ are not continuously differentiable for σ = 1/2, since the non-differentiable points are t = μ/2 and t = 0, respectively. Their geometric views are depicted in Fig. 3.

With the aforementioned discussions, for any x = (x_1, . . . , x_r) ∈ IR^{n_1} × · · · × IR^{n_r}, we are ready to show how to construct smoothing functions for the vectors [x]_+ and [x]_- associated with K = K^{n_1} × · · · × K^{n_r}. We start by constructing smoothing functions for the vectors [x_i]_+, [x_i]_- on a single SOC K^{n_i}, i = 1, . . . , r, since [x]_+ and [x]_- are given blockwise as in (7).

First, given the smoothing functions φ_+, φ_- in (9), (10) and x_i ∈ IR^{n_i}, i = 1, . . . , r, we define the vector-valued functions $\Phi_i^+, \Phi_i^- : \mathbb{R}_{++} \times \mathbb{R}^{n_i} \to \mathbb{R}^{n_i}$, i = 1, . . . , r, as

$$\Phi_i^+(\mu, x_i) := \phi_+(\mu, \lambda_1(x_i))\, u_{x_i}^{(1)} + \phi_+(\mu, \lambda_2(x_i))\, u_{x_i}^{(2)}, \qquad (17)$$
$$\Phi_i^-(\mu, x_i) := \phi_-(\mu, \lambda_1(x_i))\, u_{x_i}^{(1)} + \phi_-(\mu, \lambda_2(x_i))\, u_{x_i}^{(2)}, \qquad (18)$$

where μ ∈ IR_{++} is a parameter, λ_1(x_i), λ_2(x_i) are the spectral values, and $u_{x_i}^{(1)}, u_{x_i}^{(2)}$ are the spectral vectors of x_i.

Consequently, Φ_i^+(μ, x_i), Φ_i^-(μ, x_i) are also smooth on IR_{++} × IR^{n_i} [8]. Moreover, it is easy to assert that

$$\lim_{\mu \to 0^+} \Phi_i^+(\mu, x_i) = [\lambda_1(x_i)]_+ u_{x_i}^{(1)} + [\lambda_2(x_i)]_+ u_{x_i}^{(2)} = [x_i]_+, \qquad (19)$$


Fig. 3 Graphs of φ_i(μ, t)^σ, i = 1, 2, 3, 4, with different μ and σ = 1/2

$$\lim_{\mu \to 0^+} \Phi_i^-(\mu, x_i) = [\lambda_1(x_i)]_- u_{x_i}^{(1)} + [\lambda_2(x_i)]_- u_{x_i}^{(2)} = [x_i]_-, \qquad (20)$$

which means each function Φ_i^+(μ, x_i), Φ_i^-(μ, x_i) serves as a smoothing function of [x_i]_+, [x_i]_- associated with the single SOC K^{n_i}, i = 1, . . . , r, respectively. Due to Lemma 3.1, Remark 3.1 and the definitions of Φ_i^+(μ, x_i), Φ_i^-(μ, x_i) in (17), (18), it is not difficult to verify that, for any σ > 0, the two functions below

$$\Phi_i^+(\mu, x_i)^\sigma := \phi_+(\mu, \lambda_1(x_i))^\sigma u_{x_i}^{(1)} + \phi_+(\mu, \lambda_2(x_i))^\sigma u_{x_i}^{(2)}, \qquad (21)$$
$$\Phi_i^-(\mu, x_i)^\sigma := \phi_-(\mu, \lambda_1(x_i))^\sigma u_{x_i}^{(1)} + \phi_-(\mu, \lambda_2(x_i))^\sigma u_{x_i}^{(2)} \qquad (22)$$

are continuous functions that approximate $[x_i]_+^\sigma$ and $[x_i]_-^\sigma$, respectively. In other words,

$$\lim_{\mu \to 0^+} \Phi_i^+(\mu, x_i)^\sigma = [\lambda_1(x_i)]_+^\sigma u_{x_i}^{(1)} + [\lambda_2(x_i)]_+^\sigma u_{x_i}^{(2)} = [x_i]_+^\sigma,$$
$$\lim_{\mu \to 0^+} \Phi_i^-(\mu, x_i)^\sigma = [\lambda_1(x_i)]_-^\sigma u_{x_i}^{(1)} + [\lambda_2(x_i)]_-^\sigma u_{x_i}^{(2)} = [x_i]_-^\sigma.$$
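The construction (17)–(18) (and its power in (21)–(22)) simply applies the scalar smoothing function to each spectral value of the block. The following Python sketch is only an illustration of this recipe, using φ_3 from (13) as the scalar function and an arbitrary test vector; it checks numerically that Φ_i^-(μ, x_i) approaches [x_i]_- as μ decreases, in line with (20).

```python
# Illustrative sketch of the vector-valued smoothing function (18), built by
# applying phi_3 of (13) to the spectral values of x_i.
import numpy as np

def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    if nrm > 0:
        w = x2 / nrm
    else:
        w = np.zeros_like(x2)
        if w.size:
            w[0] = 1.0
    lam = np.array([x1 - nrm, x1 + nrm])
    u = 0.5 * np.array([np.concatenate(([1.0], -w)), np.concatenate(([1.0], w))])
    return lam, u

def phi3(mu, t):                                     # smoothing of [t]_-, see (13)
    return (np.sqrt(4.0 * mu ** 2 + t ** 2) - t) / 2.0

def Phi_minus(mu, x):                                # vector-valued smoothing, see (18)
    lam, u = spectral(x)
    return phi3(mu, lam) @ u

def proj_minus(x):                                   # [x]_- for comparison
    lam, u = spectral(x)
    return np.maximum(-lam, 0.0) @ u

x = np.array([0.3, -1.2, 0.4])                       # arbitrary test vector
for mu in [1e-1, 1e-3, 1e-6]:
    err = np.linalg.norm(Phi_minus(mu, x) - proj_minus(x))
    print(mu, err)                                   # error shrinks as mu -> 0+, cf. (20)
```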

Now we construct smoothing functions for the vectors [x]_+ and [x]_- associated with the general cone (2). To this end, we define the vector-valued functions $\Phi^+, \Phi^- : \mathbb{R}_{++} \times \mathbb{R}^{n} \to \mathbb{R}^{n}$ as

$$\Phi^+(\mu, x) := \left( \Phi_1^+(\mu, x_1), \dots, \Phi_r^+(\mu, x_r) \right), \qquad (23)$$
$$\Phi^-(\mu, x) := \left( \Phi_1^-(\mu, x_1), \dots, \Phi_r^-(\mu, x_r) \right), \qquad (24)$$

where Φ_i^+(μ, x_i), Φ_i^-(μ, x_i), i = 1, . . . , r, are defined by (17), (18), respectively. Therefore, from (19), (20) and (7), Φ^+(μ, x), Φ^-(μ, x) serve as smoothing functions for [x]_+, [x]_- associated with K = K^{n_1} × · · · × K^{n_r}, respectively. At the same time, from (21), (22),

$$\Phi^+(\mu, x)^\sigma := \left( \Phi_1^+(\mu, x_1)^\sigma, \dots, \Phi_r^+(\mu, x_r)^\sigma \right), \qquad (25)$$
$$\Phi^-(\mu, x)^\sigma := \left( \Phi_1^-(\mu, x_1)^\sigma, \dots, \Phi_r^-(\mu, x_r)^\sigma \right) \qquad (26)$$

are continuous functions that approximate $[x]_+^\sigma$ and $[x]_-^\sigma$, respectively.

In light of this idea, we establish approximating lower order penalty equations for solving the SOCLCP (1), which will be described in the next section. To end this section, we present a technical lemma for subsequent needs.

Lemma 3.2 Suppose that Φ^+(μ, x) and Φ^-(μ, x) are defined by (23), (24), respectively, and Φ^+(μ, x)^σ and Φ^-(μ, x)^σ are defined for any σ > 0 as in (25), (26), respectively. Then, the following results hold.

(a) Both Φ^+(μ, x) and Φ^-(μ, x) belong to K.
(b) Both Φ^+(μ, x)^σ and Φ^-(μ, x)^σ belong to K.

Proof (a) For any x_i ∈ IR^{n_i}, i = 1, . . . , r, since φ_+(μ, λ_k(x_i)) ≥ 0, φ_-(μ, λ_k(x_i)) ≥ 0 for k = 1, 2 from (9), (10), we have Φ_i^+(μ, x_i), Φ_i^-(μ, x_i) ∈ K^{n_i} according to the definitions (17), (18). Therefore, the conclusion holds due to the definitions (23), (24) and (2).

(b) From part (a) and knowing σ > 0, we have φ_+(μ, λ_k(x_i))^σ ≥ 0, φ_-(μ, λ_k(x_i))^σ ≥ 0, k = 1, 2. Applying (25) and (26), the desired result follows. □

4 Approximate lower order penalty approach and convergence analysis

In this section, we propose an approximate lower order penalty approach for solving the SOCLCP (1). To this end, we consider the approximate lower order penalty equations (LOPEs):

$$Ax - \alpha\, \Phi^-(\mu, x)^\sigma = b, \qquad (27)$$

where σ ∈ (0, 1] is a given power parameter, α ≥ 1 is a penalty parameter, and Φ^-(μ, x)^σ is defined as in (26). Throughout this section, x_{μ,α} denotes the solution of (27), and, corresponding to the structure of (2), we write

$$x_{\mu,\alpha} = \left( (x_{\mu,\alpha})_1, \dots, (x_{\mu,\alpha})_r \right) \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_r}. \qquad (28)$$

For simplicity and without causing confusion, we always denote the spectral values and spectral vectors of (x_{μ,α})_i, i = 1, . . . , r, as λ_k := λ_k((x_{μ,α})_i) and u^{(k)} := u^{(k)}_{(x_{μ,α})_i} for k = 1, 2. Accordingly, [λ_k]_- := [λ_k((x_{μ,α})_i)]_- and φ_-(μ, λ_k) := φ_-(μ, λ_k((x_{μ,α})_i)), k = 1, 2, for instance. Note that for the special case σ = 1, the nonlinear function in (27) is always smooth.
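A minimal numerical sketch of (27)–(28) for a single SOC (r = 1) is given below. It is only an illustration: the test matrix A, the vector b, the use of φ_1 from (11) as the scalar smoothing function, the (α, μ) continuation schedule, and the call to scipy.optimize.fsolve are our own assumptions and not the algorithm constructed in Sect. 5. As α grows and μ shrinks, the printed quantities indicate that x and Ax − b approach K^n and the complementarity gap x^T(Ax − b) shrinks, in the spirit of the convergence analysis below.

```python
# Illustrative sketch of solving the penalty equations (27) for a single SOC.
import numpy as np
from scipy.optimize import fsolve

def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    if nrm > 0:
        w = x2 / nrm
    else:
        w = np.zeros_like(x2)
        if w.size:
            w[0] = 1.0
    lam = np.array([x1 - nrm, x1 + nrm])
    u = 0.5 * np.array([np.concatenate(([1.0], -w)), np.concatenate(([1.0], w))])
    return lam, u

def phi1(mu, t):
    """Smoothing of [t]_- from (11), written in an overflow-safe form."""
    return mu * np.log1p(np.exp(-np.abs(t) / mu)) + np.maximum(0.0, -t)

def Phi_minus_sigma(mu, x, sigma):
    """Vector-valued approximation of [x]_-^sigma as in (22)/(26) with r = 1."""
    lam, u = spectral(x)
    return phi1(mu, lam) ** sigma @ u

def lope_residual(x, A, b, alpha, mu, sigma):
    """Residual of equation (27): A x - alpha * Phi^-(mu, x)^sigma - b."""
    return A @ x - alpha * Phi_minus_sigma(mu, x, sigma) - b

rng = np.random.default_rng(0)
n, sigma = 4, 0.5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                  # positive definite, cf. Assumption 4.1
b = rng.standard_normal(n)

x = np.zeros(n)
for alpha, mu in [(1e1, 1e-2), (1e2, 1e-3), (1e3, 1e-4), (1e4, 1e-5)]:
    x = fsolve(lope_residual, x, args=(A, b, alpha, mu, sigma))
    lam_x, _ = spectral(x)
    lam_y, _ = spectral(A @ x - b)
    print(alpha, lam_x.min(), lam_y.min(), x @ (A @ x - b))
```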

Note that the equations (27) are penalized equations corresponding to the SOCLCP (1), because the penalty term α Φ^-(μ, x)^σ penalizes the "negative part" of x when μ → 0^+. By Lemma 3.2 and from equations (27), it is easy to see that Ax_{μ,α} − b ∈ K (noting α Φ^-(μ, x_{μ,α})^σ ∈ K). Our goal is to show that the solution sequence {x_{μ,α}} converges to the solution of the SOCLCP (1) when α → +∞ and μ → 0^+. In order to achieve this, we need the following assumption on the matrix A.

Assumption 4.1 The matrix A is positive definite, but not necessarily symmetric, i.e., there exists a constant a_0 > 0 such that

$$y^T A y \ge a_0 \|y\|^2, \quad \forall y \in \mathbb{R}^n. \qquad (29)$$


For instance, it is easy to construct a matrix A that is positive definite in the sense of (29) but not symmetric. Under Assumption 4.1, the SOCLCP (1) has a unique solution and the LOPEs (27) also have a unique solution; see [17,21] for more details.

Proposition 4.1 For any α ≥ 1, σ ∈ (0, 1] and sufficiently small μ, the solution of the LOPEs (27) is bounded, i.e., there exists a positive constant M, independent of x_{μ,α}, μ, α and σ, such that ‖x_{μ,α}‖ ≤ M.

Proof By multiplying both sides of (27) by x_{μ,α}^T, we observe that

$$x_{\mu,\alpha}^T A x_{\mu,\alpha} = x_{\mu,\alpha}^T b + \alpha\, x_{\mu,\alpha}^T \Phi^-(\mu, x_{\mu,\alpha})^\sigma = \sum_{i=1}^{r} \left[ (x_{\mu,\alpha})_i^T b_i + \alpha\, (x_{\mu,\alpha})_i^T \Phi_i^-(\mu, (x_{\mu,\alpha})_i)^\sigma \right] \qquad (30)$$

by (26), (28) and denoting b = (b_1, . . . , b_r) ∈ IR^{n_1} × · · · × IR^{n_r}. For any (x_{μ,α})_i, i = 1, . . . , r, to proceed, we consider three cases to evaluate the term

$$\Gamma_i := (x_{\mu,\alpha})_i^T b_i + \alpha\, (x_{\mu,\alpha})_i^T \Phi_i^-(\mu, (x_{\mu,\alpha})_i)^\sigma \le \|x_{\mu,\alpha}\| \left( \|b\| + 1 \right). \qquad (31)$$

Case 1: (x_{μ,α})_i ∈ K^{n_i}. From the Cauchy-Schwarz inequality, the spectral decomposition of (x_{μ,α})_i, and the fact that the norm of a block component is no greater than that of the whole vector, we have

$$\begin{aligned} \Gamma_i &\le \|(x_{\mu,\alpha})_i\| \left( \|b_i\| + \alpha \left\| \Phi_i^-(\mu, (x_{\mu,\alpha})_i)^\sigma \right\| \right) \\ &\le \|x_{\mu,\alpha}\| \left( \|b\| + \alpha \left\| \phi_-(\mu, \lambda_1)^\sigma u^{(1)} + \phi_-(\mu, \lambda_2)^\sigma u^{(2)} \right\| \right) \\ &\le \|x_{\mu,\alpha}\| \left( \|b\| + \sqrt{2}\, \alpha\, \phi_-(\mu, 0)^\sigma \right), \end{aligned} \qquad (32)$$

where the second inequality holds by the definition of Φ_i^-(μ, (x_{μ,α})_i)^σ as in (22), and the last inequality holds by the triangle inequality, the nonnegativity of φ_-(μ, 0)^σ from (10), and the monotone decrease of φ_-(μ, t) in t, since 0 ≤ λ_1 ≤ λ_2 in this case. Now, applying Lemma 3.1, we have $\lim_{\mu \to 0^+} \phi_-(\mu, 0)^\sigma = 0$. This means that, for any penalty parameter α, there exists a positive real number ν such that

$$\sqrt{2}\, \alpha\, \phi_-(\mu, 0)^\sigma \le 1 \quad \text{for all } \mu \in (0, \nu].$$

Therefore, from (32), we obtain the conclusion (31).

Case 2: (x_{μ,α})_i ∈ −K^{n_i}. In light of Lemma 3.2, we know Φ_i^-(μ, (x_{μ,α})_i)^σ ∈ K^{n_i}, and hence

$$(x_{\mu,\alpha})_i^T \Phi_i^-(\mu, (x_{\mu,\alpha})_i)^\sigma \le 0.$$

Thus, we have $\Gamma_i \le (x_{\mu,\alpha})_i^T b_i \le \|(x_{\mu,\alpha})_i\| \|b_i\| \le \|x_{\mu,\alpha}\| \left( \|b\| + 1 \right)$, which says conclusion (31) holds.

Case 3: (x_{μ,α})_i ∉ K^{n_i} ∪ −K^{n_i}. In this case, we know that λ_1 < 0 < λ_2 and [(x_{μ,α})_i]_+ = λ_2 u^{(2)}. From the definition of Φ_i^-(μ, (x_{μ,α})_i)^σ as in (22) and Proposition 2.1, we have

$$\begin{aligned} (x_{\mu,\alpha})_i^T \Phi_i^-(\mu, (x_{\mu,\alpha})_i)^\sigma &= \left( \lambda_1 u^{(1)} + \lambda_2 u^{(2)} \right)^T \left( \phi_-(\mu, \lambda_1)^\sigma u^{(1)} + \phi_-(\mu, \lambda_2)^\sigma u^{(2)} \right) \\ &= \tfrac{1}{2} \left( \lambda_1 \phi_-(\mu, \lambda_1)^\sigma + \lambda_2 \phi_-(\mu, \lambda_2)^\sigma \right) \\ &\le \tfrac{\sqrt{2}}{2} \left( \tfrac{\sqrt{2}}{2} \lambda_2 \right) \phi_-(\mu, \lambda_2)^\sigma \\ &\le \tfrac{\sqrt{2}}{2} \|x_{\mu,\alpha}\|\, \phi_-(\mu, \lambda_2)^\sigma, \end{aligned} \qquad (33)$$

where the first inequality holds due to λ_1 φ_-(μ, λ_1)^σ < 0 < λ_2 φ_-(μ, λ_2)^σ, and the second inequality holds due to $\tfrac{\sqrt{2}}{2} \lambda_2 = \|[(x_{\mu,\alpha})_i]_+\| \le \|(x_{\mu,\alpha})_i\| \le \|x_{\mu,\alpha}\|$. Substituting (33) into Γ_i and using the Cauchy-Schwarz inequality, we obtain

$$\begin{aligned} \Gamma_i &\le \|(x_{\mu,\alpha})_i\| \|b_i\| + \tfrac{\sqrt{2}}{2} \alpha \|x_{\mu,\alpha}\|\, \phi_-(\mu, \lambda_2)^\sigma \\ &\le \|x_{\mu,\alpha}\| \|b\| + \tfrac{\sqrt{2}}{2} \alpha \|x_{\mu,\alpha}\|\, \phi_-(\mu, \lambda_2)^\sigma \\ &\le \|x_{\mu,\alpha}\| \left( \|b\| + \tfrac{\sqrt{2}}{2} \alpha\, \phi_-(\mu, 0)^\sigma \right), \end{aligned} \qquad (34)$$

where the third inequality holds by the monotone decrease of φ_-(μ, t) in t. Similar to Case 1, for any penalty parameter α, there exists a positive real number ν such that $\tfrac{\sqrt{2}}{2} \alpha\, \phi_-(\mu, 0)^\sigma \le 1$ for all μ ∈ (0, ν]. Hence, we reach the conclusion (31) by (34).

From the above three cases, the conclusion (31) holds, which gives an estimate of Γ_i. Thus, from (30) and Assumption 4.1, there exists a constant a_0 > 0 such that

$$a_0 \|x_{\mu,\alpha}\|^2 \le x_{\mu,\alpha}^T A x_{\mu,\alpha} = \sum_{i=1}^{r} \Gamma_i \le r \|x_{\mu,\alpha}\| \left( \|b\| + 1 \right).$$

This implies $\|x_{\mu,\alpha}\| \cdot \left( a_0 \|x_{\mu,\alpha}\| - r(\|b\| + 1) \right) \le 0$, and hence $\|x_{\mu,\alpha}\| \le \frac{r}{a_0}(\|b\| + 1)$. By taking $M = \frac{r}{a_0}(\|b\| + 1)$, the proof is completed. □

It is well known that the affine function g(x) := Ax − b is continuous and, by Proposition 4.1, ‖g(x_{μ,α})‖ is bounded for any α ≥ 1, σ ∈ (0, 1] and sufficiently small μ.

We are able to establish an upper bound for ‖Φ^-(μ, x_{μ,α})‖ in the next proposition. The upper bound is also applicable to ‖[x_{μ,α}]_-‖ (see Remark 4.1), which plays an important role in the convergence analysis. The detailed proof is based on the definition of Φ_i^-(μ, (x_{μ,α})_i) stated as in (18) and uses the same techniques as in [21, Proposition 3.2], by multiplying both sides of the ith block of (27) on the left by Φ_i^-(μ, (x_{μ,α})_i)^T:

$$(A x_{\mu,\alpha})_i - \alpha\, \Phi_i^-(\mu, (x_{\mu,\alpha})_i)^\sigma = b_i.$$

Therefore, we omit it and only present the result, i.e., there exists a positive constant C_i, independent of x_{μ,α}, μ and α, such that

$$\|\Phi_i^-(\mu, (x_{\mu,\alpha})_i)\| \le \frac{C_i}{\alpha^{1/\sigma}} \qquad (35)$$

holds for any α ≥ 1, σ ∈ (0, 1] and sufficiently small μ. By the definition of Φ^-(μ, x_{μ,α}) as shown in (24) and setting C = C_1 + · · · + C_r, we obtain the following proposition.

Proposition 4.2 For any α ≥ 1, σ ∈ (0, 1] and sufficiently small μ, there exists a positive constant C, independent of x_{μ,α}, μ and α, such that

$$\|\Phi^-(\mu, x_{\mu,\alpha})\| \le \frac{C}{\alpha^{1/\sigma}}. \qquad (36)$$

Remark 4.1 For any α ≥ 1, σ ∈ (0, 1] and sufficiently small μ, the ith (i = 1, . . . , r) component vector (x_{μ,α})_i is fixed, since x_{μ,α} with (28) denotes the solution of (27). For the fixed (x_{μ,α})_i with spectral decomposition (x_{μ,α})_i = λ_1 u^{(1)} + λ_2 u^{(2)} and the expression Φ_i^-(μ, (x_{μ,α})_i) = φ_-(μ, λ_1) u^{(1)} + φ_-(μ, λ_2) u^{(2)}, by taking μ → 0^+ in φ_-(μ, λ_1) and φ_-(μ, λ_2), we obtain $\|[\lambda_1]_- u^{(1)} + [\lambda_2]_- u^{(2)}\| \le \frac{C_i}{\alpha^{1/\sigma}}$ from (35), which yields

$$\|[(x_{\mu,\alpha})_i]_-\| \le \frac{C_i}{\alpha^{1/\sigma}}. \qquad (37)$$
