
to appear in Abstract and Applied Analysis, 2013

On Set-Valued Complementarity Problems

Jinchuan Zhou 1
Department of Mathematics, School of Science
Shandong University of Technology, Zibo 255049, P.R. China
E-mail: jinchuanzhou@163.com

Jein-Shan Chen 2
Department of Mathematics, National Taiwan Normal University
Taipei 11677, Taiwan
E-mail: jschen@math.ntnu.edu.tw

Gue Myung Lee 3
Department of Applied Mathematics, Pukyong National University
Busan 608-737, Korea
E-mail: gmlee@pknu.ac.kr

September 18, 2012 (revised on December 14, 2012)

Abstract. This paper investigates set-valued complementarity problems (SVCP), which possess rather different features from classical complementarity problems because the index set is not fixed but depends on x. Comparing set-valued complementarity problems with classical complementarity problems, we analyze the solution set of the SVCP. Moreover, properties of merit functions for the SVCP are studied, such as level-boundedness and error bounds. Finally, some possible research directions are discussed.

1The first author is supported by National Natural Science Foundation of China (11101248, 11271233), Shandong Province Natural Science Foundation (ZR2010AQ026, ZR2012AM016), and Young Teacher Support Program of Shandong University of Technology.

2Corresponding author. Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author's work is supported by National Science Council of Taiwan.

3The third author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2012-0006236).

Keywords. Set-valued complementarity problems, error bound, level-bounded, limit R0-matrix.

AMS subject classifications. 90C33, 90C47.

1 Motivations and Preliminaries

The set-valued complementarity problem (SVCP) is to find x ∈ IRn such that

x ≥ 0, y ≥ 0, xTy = 0, for some y ∈ Θ(x),   (1)

where Θ : IRn+ ⇒ IRn is a set-valued mapping. The set-valued complementarity problem plays an important role in the sensitivity analysis of complementarity problems [6] and economic equilibrium problems [17]. However, there has been very little study of set-valued complementarity problems compared to classical complementarity problems.

In fact, the SVCP (1) can be recast as follows, which is denoted by SVNCP(F, Ω): to find x ∈ IRn such that

x ≥ 0, F (x, w) ≥ 0, xTF (x, w) = 0, for some w ∈ Ω(x),   (2)

where F : IRn × IRm → IRn and Ω : IRn ⇒ IRm is a set-valued mapping. To see this, if we let

Θ(x) = ∪w∈Ω(x){F (x, w)},

then (2) reduces to (1). Conversely, taking F (x, w) = w and Ω(x) = Θ(x) shows that (1) takes the form of (2).

The SVNCP(F, Ω) given in (2) provides a unified framework for several interesting and important optimization problems, described below.

• Nonlinear complementarity problem [6], which is to find x ∈ IRn such that

x ≥ 0, F (x) ≥ 0, ⟨x, F (x)⟩ = 0.

This corresponds to F (x, w) := F (x) + w and Ω(x) = {0} for all x ∈ IRn. In other words, in this case the set-valued complementarity problem reduces to the classical complementarity problem.

• Extended linear complementarity problem [11, 12], which is to find x, w ∈ IRn such that

x ≥ 0, w ≥ 0, xTw = 0, with M1x − M2w ∈ P,


where M1, M2 ∈ IRm×n and P ⊆ IRm is a polyhedron. This corresponds to F (x, w) = w and Ω(x) = {w | M1x − M2w ∈ P }. In particular, when P = {q}, it further reduces to the horizontal linear complementarity problem, and, if in addition M2 is the identity matrix, to the usual linear complementarity problem.

• Implicit complementarity problem [15], which is to find x, w ∈ IRn and z ∈ IRm such that

x ≥ 0, w ≥ 0, xTw = 0, with F (x, w, z) = 0,

where F : IRn × IRn × IRm → IRl. This can be rewritten as

x ≥ 0, w ≥ 0, xTw = 0, with w satisfying F (x, w, z) = 0 for some z.

This is clearly an SVNCP(F, Ω) where F (x, w) = w and Ω(x) = ∪z∈IRm{w | F (x, w, z) = 0}.

• Mixed nonlinear complementarity problem, which is to find x ∈ IRn and w ∈ IRm such that

x ≥ 0, F (x, w) ≥ 0, ⟨x, F (x, w)⟩ = 0, with G(x, w) = 0.

This is an SVNCP(F, Ω) with Ω(x) = {w | G(x, w) = 0}. Note that the mixed nonlinear complementarity problem is a natural extension of the Karush-Kuhn-Tucker (KKT) conditions for the following nonlinear programming problem:

min f (x)
s.t. gi(x) ≤ 0, i = 1, 2, . . . , m,
     hj(x) = 0, j = 1, . . . , l.

To see this, we first write out the KKT conditions:

∇f (x) + Σ_{i=1}^m λi∇gi(x) + Σ_{j=1}^l µj∇hj(x) = 0,
h(x) = 0,
g(x) ≤ 0, λ ≥ 0, ⟨λ, g(x)⟩ = 0,   (3)

where g(x) := (g1(x), . . . , gm(x)), h(x) := (h1(x), . . . , hl(x)), and λ := (λ1, . . . , λm).

Then, letting w := (λ, µ), F (x, w) := −g(x), and

G(x, w) := ( ∇f (x) + Σ_{i=1}^m λi∇gi(x) + Σ_{j=1}^l µj∇hj(x), h(x) )

implies that the KKT system (3) becomes a mixed complementarity problem.
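The reduction above can be checked numerically on a toy instance. The sketch below (a hypothetical example, not from the paper) takes min (x − 2)² subject to x − 1 ≤ 0, whose KKT point is x = 1 with multiplier λ = 2, and verifies the mixed complementarity conditions with F = −g and G the stationarity residual.

```python
import numpy as np

# Toy NLP: min (x-2)^2  s.t.  g(x) = x - 1 <= 0  (no equality constraints).
# KKT point: x* = 1, lambda* = 2.
def grad_f(x): return 2.0 * (x - 2.0)
def g(x):      return x - 1.0
def grad_g(x): return 1.0

x_star, lam_star = 1.0, 2.0

# Mixed complementarity data: the complementarity variable is lambda,
# F(lambda, x) = -g(x), and G = 0 encodes stationarity.
F_val = -g(x_star)                                   # must be >= 0
G_val = grad_f(x_star) + lam_star * grad_g(x_star)   # must be = 0

assert lam_star >= 0 and F_val >= 0
assert abs(lam_star * F_val) < 1e-12   # complementarity <lambda, -g(x)> = 0
assert abs(G_val) < 1e-12              # stationarity G = 0
print("KKT point satisfies the mixed complementarity system")
```
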


Besides the above various complementarity problems, SVNCP(F, Ω) is closely related to the quasi-variational inequality, a special case of the extended general variational inequalities [13, 14], and to min-max programming, as elaborated below.

• Quasi-variational inequality [17]. Given a point-to-point map F from IRn to itself and a point-to-set map K from IRn into subsets of IRn, the Quasi-variational inequality QVI(K, F ) is to find a vector x ∈ K(x) such that

⟨F (x), y − x⟩ ≥ 0, ∀y ∈ K(x).   (4)

It is well known that QVI(K, F ) reduces to the classical nonlinear complementarity problem when K(x) is independent of x, say, K(x) = IRn+ for all x. Now, let us explain why it is related to SVNCP(F, Ω). To this end, given x ∈ IRn, define I(x) = {i | Fi(x) > 0} and let

K(x) = {x | xi ≥ 0 for i ∈ {1, . . . , n}\I(x), and xi = 0 for i ∈ I(x)}.

Clearly, 0 ∈ K(x), so taking y = 0 in (4) gives ⟨x, F (x)⟩ ≤ 0. Note that x ≥ 0 because x ∈ K(x). Next, we claim that Fi(x) ≥ 0 for all i = 1, 2, . . . , n. It is enough to consider the case where i ∈ {1, . . . , n}\I(x). In this case, taking y = βei in (4) with β an arbitrary positive scalar yields βFi(x) ≥ F (x)Tx. Since β can be made arbitrarily large, this forces Fi(x) ≥ 0. Since x ≥ 0 and F (x) ≥ 0, we obtain F (x)Tx ≥ 0, and hence xTF (x) = 0. In summary, QVI(K, F ) becomes

x ≥ 0, F (x) ≥ 0, xTF (x) = 0, with x ∈ K(x),

which is an SVNCP(F, Ω).

• Min-max programming [18], which is to solve the following problem:

min_{x∈IRn+} max_{w∈Ω} f (x, w),   (5)

where f : IRn × Ω → IR is a continuously differentiable function and Ω is a compact subset of IRm. First, we define ψ(x) := max_{w∈Ω} f (x, w). Although ψ is not necessarily Frechet-differentiable, it is directionally differentiable (even semismooth); see [20]. Now, let us check the first-order necessary conditions for problem (5). In fact, if x̄ is a local minimizer of (5), then

ψ′(x̄; x − x̄) = max_{w∈Ω(x̄)} ⟨∇xf (x̄, w), x − x̄⟩ ≥ 0, ∀x ∈ IRn+,

which is equivalent to

inf_{x∈IRn+} max_{w∈Ω(x̄)} ⟨∇xf (x̄, w), x − x̄⟩ = 0,   (6)

where Ω(x̄) denotes the active set at x̄, i.e., Ω(x̄) := {w ∈ Ω | ψ(x̄) = f (x̄, w)}. At first glance, formula (6) seems unrelated to SVNCP(F, Ω). Nonetheless, we will show that if Ω is convex and the function f (x, ·) is concave over Ω, then the first-order necessary conditions form an SVNCP(F, Ω); see the proposition below.

Proposition 1.1. Let Ω be a nonempty, compact, and convex set in IRm. Suppose that, for each x, the function f (x, ·) is concave over Ω. If x̄ is a local optimal solution of (5), then there exists w̄ ∈ Ω(x̄) such that

x̄ ≥ 0, ∇xf (x̄, w̄) ≥ 0, ⟨∇xf (x̄, w̄), x̄⟩ = 0.   (7)

Proof. Note first that for each x the inner problem

ψ(x) := max_{w∈Ω} f (x, w)   (8)

is a concave optimization problem, since f (x, ·) is concave and Ω is convex. This ensures that Ω(x), which denotes the optimal solution set of (8), is convex as well. Now we claim that the function

h(w) := ⟨∇xf (x̄, w), x − x̄⟩

is concave over Ω(x̄). Indeed, for w1, w2 ∈ Ω(x̄) and α ∈ [0, 1], we have

h(αw1 + (1 − α)w2)
= ⟨∇xf (x̄, αw1 + (1 − α)w2), x − x̄⟩
= lim_{t↓0} [f (x̄ + t(x − x̄), αw1 + (1 − α)w2) − f (x̄, αw1 + (1 − α)w2)] / t
= lim_{t↓0} [f (x̄ + t(x − x̄), αw1 + (1 − α)w2) − ψ(x̄)] / t
≥ lim_{t↓0} [αf (x̄ + t(x − x̄), w1) + (1 − α)f (x̄ + t(x − x̄), w2) − ψ(x̄)] / t
= lim_{t↓0} α[f (x̄ + t(x − x̄), w1) − f (x̄, w1)] / t + lim_{t↓0} (1 − α)[f (x̄ + t(x − x̄), w2) − f (x̄, w2)] / t
= α⟨∇xf (x̄, w1), x − x̄⟩ + (1 − α)⟨∇xf (x̄, w2), x − x̄⟩
= αh(w1) + (1 − α)h(w2),

where we use the fact that αw1 + (1 − α)w2 ∈ Ω(x̄) (since Ω(x̄) is convex) and f (x̄, w) = ψ(x̄) for all w ∈ Ω(x̄). On the other hand, applying the Min-Max Theorem [19, Corollary 37.3.2] to (6) yields

max_{w∈Ω(x̄)} inf_{x∈IRn+} ⟨∇xf (x̄, w), x − x̄⟩ = 0.


Hence, for arbitrary ε > 0, we can find wε ∈ Ω(x̄) such that inf_{x∈IRn+} ⟨∇xf (x̄, wε), x − x̄⟩ ≥ −ε, i.e.,

⟨∇xf (x̄, wε), x − x̄⟩ ≥ −ε, ∀x ∈ IRn+.   (9)

In particular, plugging x = 0 into (9) implies

⟨∇xf (x̄, wε), x̄⟩ ≤ ε.   (10)

Since Ω is bounded and Ω(x̄) is closed, we can assume, without loss of generality, that wε → w̄ ∈ Ω(x̄) as ε → 0. Thus, taking the limit in (10) gives

⟨∇xf (x̄, w̄), x̄⟩ ≤ 0.   (11)

Now, let x = x̄ + kei ∈ IRn+. It follows from (9) that

(∇xf (x̄, wε))i ≥ −ε/k,

which implies (∇xf (x̄, wε))i ≥ 0 by letting k → ∞, and hence (∇xf (x̄, w̄))i ≥ 0 for all i = 1, 2, . . . , n, i.e., ∇xf (x̄, w̄) ≥ 0. This together with (11) means that ⟨∇xf (x̄, w̄), x̄⟩ = 0. Thus, (7) holds. 2
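Proposition 1.1 can be illustrated on a hypothetical one-dimensional instance of (5): f(x, w) = wx − w² with Ω = [0, 1]. Here ψ(x) = x²/4 on [0, 2], minimized over x ≥ 0 at x̄ = 0 with active set Ω(x̄) = {0}, and condition (7) holds at (x̄, w̄) = (0, 0). The sketch below checks this on a grid; the instance and discretization are my own, not from the paper.

```python
import numpy as np

# Hypothetical instance of (5): f(x, w) = w*x - w^2, Omega = [0, 1], n = 1.
ws = np.linspace(0.0, 1.0, 1001)

def psi(x):
    return np.max(ws * x - ws**2)

xs = np.linspace(0.0, 2.0, 201)
x_bar = xs[np.argmin([psi(x) for x in xs])]     # numerical minimizer of psi

# Active set Omega(x_bar): maximizers of f(x_bar, .); here it is {0}.
vals = ws * x_bar - ws**2
active = ws[vals >= vals.max() - 1e-12]
w_bar = active[0]

grad = w_bar            # d/dx f(x, w) = w
# Condition (7): x_bar >= 0, grad >= 0, grad * x_bar = 0.
assert x_bar >= 0 and grad >= 0 and abs(grad * x_bar) < 1e-12
print("x_bar =", x_bar, " w_bar =", w_bar)
```
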

From all the above, we have seen that SVNCP(F, Ω) given as in (2) covers a range of optimization problems. Therefore, in this paper, we mainly focus on SVNCP(F, Ω).

Due to its equivalence to SVCP (1), our analysis and results for SVNCP(F, Ω) carry over to SVCP (1). This paper is organized as follows. In Section 1, the connection between SVNCP(F, Ω) and various optimization problems is introduced. We recall some background materials in Section 2. Besides comparing set-valued complementarity problems with classical complementarity problems, we analyze the solution set of SVCP in Section 3. Moreover, properties of merit functions for SVCP, such as level-boundedness and error bounds, are studied in Section 4. Finally, some possible research directions are discussed.

A few words about the notation used throughout the paper. For any x, y ∈ IRn, the inner product is denoted by xTy or ⟨x, y⟩. We write x ≥ y (or x > y) if xi ≥ yi (or xi > yi) for all i = 1, 2, . . . , n. Let e be the vector with all components equal to 1 and let ei be the i-th column of the identity matrix. Denote N := { {n, n + 1, . . . } | n = 1, 2, . . . }. While SVNCP(F, Ω) means the set-valued nonlinear complementarity problem (2), SVLCP(M, q, Ω) denotes the linear case, i.e., F (x, w) = M (w)x + q(w), where M : IRm → IRn×n and q : IRm → IRn. For a continuously differentiable function F : IRn × IRm → IRl, we denote the l × n Jacobian


matrix of partial derivatives of F at (x̄, w̄) with respect to x by JxF (x̄, w̄), whereas the transposed Jacobian is denoted by ∇xF (x̄, w̄). For a mapping H : IRn → IRm, define

lim inf_{x→x̄} H(x) := ( lim inf_{x→x̄} H1(x), lim inf_{x→x̄} H2(x), . . . , lim inf_{x→x̄} Hm(x) ).

Given a set-valued mapping M : IRn ⇒ IRm, define

lim sup_{x→x̄} M (x) := {u | ∃ xn → x̄, ∃ un → u with un ∈ M (xn)}   (12)

and

lim inf_{x→x̄} M (x) := {u | ∀ xn → x̄, ∃ un → u with un ∈ M (xn)}.   (13)

We say M is outer semi-continuous at x̄ if

lim sup_{x→x̄} M (x) ⊂ M (x̄),

and inner semi-continuous at x̄ if

lim inf_{x→x̄} M (x) ⊃ M (x̄).

We say that M is continuous at x̄ if it is both outer semi-continuous and inner semi-continuous at x̄. For more details about these notions, please refer to [1, 20]. Throughout this paper, we always assume that the set-valued mapping Ω : IRn ⇒ IRm is closed-valued, i.e., Ω(x) is closed for all x ∈ IRn [1, Chapter 1].

2 Focus on SVLCP(M, q, Ω)

It is well known that various matrix classes play different roles in the theory of the linear complementarity problem, such as P -matrix, S-matrix, Q-matrix, Z-matrix, etc.; see [3, 6] for more details. Here we recall some of them which will be needed in the subsequent analysis.

Definition 2.1. A matrix M ∈ IRn×n is said to be an S-matrix if there exists x ∈ IRn such that

x > 0 and M x > 0.


Note that M ∈ IRn×n is an S-matrix if and only if the classical linear complementarity problem LCP(M, q) is feasible for all q ∈ IRn, see [3, Prop. 3.1.5]. Moreover, the above condition in Definition 2.1 is equivalent to

x ≥ 0 and M x > 0,

see [8, Remark 2.2]. However, such equivalence fails to hold for the corresponding conditions in the set-valued complementarity problem. In other words,

x > 0 and M (w)x > 0, for some w ∈ Ω(x)   (14)

is not equivalent to

x ≥ 0 and M (w)x > 0, for some w ∈ Ω(x).   (15)

It is clear that (14) implies (15). But the converse implication does not hold, as illustrated in Example 2.1.

Example 2.1. Let

M (w) = [ w 0
          0 w ]

and

Ω(x) = { {0, 1}, x = (1, 0) ∈ IR2;
         {0},    otherwise.

If M (w)x > 0, then w = 1, and this case can occur only when x = (1, 0). Therefore, (15) is satisfied, but (14) is not.

We point out that the set-valued mapping Ω(x) in Example 2.1 is indeed outer semi-continuous. A natural question arises: what happens if Ω(x) is inner semi-continuous? The answer is given in Theorem 2.1 below.

Theorem 2.1. If Ω(x) is inner semi-continuous and M (w) is continuous, then (14) and (15) are equivalent.

Proof. We only need to show (15) =⇒ (14). Let H(x) = max_{w∈Ω(x)} M (w)x (componentwise) and denote by ai(w) the i-th row of M (w). Hence Hi(x) = max_{w∈Ω(x)} ai(w)Tx. Let x0 be an arbitrary but fixed point. For any ε > 0, there exists w0 ∈ Ω(x0) such that ai(w0)Tx0 > Hi(x0) − ε. Since Ω(x) is inner semi-continuous, for any xn → x0, there exists wn ∈ Ω(xn) satisfying wn → w0. This implies

Hi(xn) = max_{w∈Ω(xn)} ai(w)Txn ≥ ai(wn)Txn.

Then, taking the lower limit yields

lim inf_{n→∞} Hi(xn) ≥ lim_{n→∞} ai(wn)Txn = ai(w0)Tx0 > Hi(x0) − ε,


where the equality follows from the continuity of ai(w), which is ensured by the continuity of M (w). Because ε > 0 is arbitrary and {xn} is an arbitrary sequence converging to x0, we obtain

lim inf_{x→x0} Hi(x) ≥ Hi(x0),

which says Hi is lower semi-continuous. This further implies

lim inf_{x→x0} H(x) = ( lim inf_{x→x0} H1(x), . . . , lim inf_{x→x0} Hn(x) ) ≥ ( H1(x0), . . . , Hn(x0) ) = H(x0),

i.e.,

lim inf_{x→x0} max_{w∈Ω(x)} M (w)x ≥ max_{w∈Ω(x0)} M (w)x0.

If x̄ satisfies (15), then

x̄ ≥ 0, M (w̄)x̄ > 0, for some w̄ ∈ Ω(x̄),

which is equivalent to

x̄ ≥ 0 and H(x̄) > 0.

On the other hand, lim inf_{λ→0+} H(x̄ + λe) ≥ H(x̄) > 0 and x̄ + λe > 0 for λ > 0. By taking λ > 0 small enough, we know x̄ + λe satisfies (14). Thus, the proof is complete. 2

There is another point worth mentioning. We noted that the classical linear complementarity problem LCP(M, q) is feasible for all q ∈ IRn if and only if M ∈ IRn×n is an S-matrix, i.e., there exists x ∈ IRn such that

x > 0 and M x > 0.

Is there any analogous result in the set-valued setting? Yes; an answer is given in Theorem 2.2 below.

Theorem 2.2. Consider the set-valued linear complementarity problem SVLCP(M, q, Ω). If there exists x ∈ IRn such that

x ≥ 0, M (w)x > 0, for some w ∈ ∩_{Ñ∈N} ∪_{n∈Ñ} Ω(nx),   (16)

then SVLCP(M, q, Ω) is feasible for every q : IRm → IRn that is bounded from below.

Proof. Let q be any mapping from IRm to IRn that is bounded from below, i.e., there exists β ∈ IR such that q(w) ≥ βe for all w. Suppose that x0 and w0 satisfy (16), which means

x0 ≥ 0, M (w0)x0 > 0, and w0 ∈ ∩_{Ñ∈N} ∪_{n∈Ñ} Ω(nx0).

Then, for any Ñ ∈ N, we have w0 ∈ ∪_{n∈Ñ} Ω(nx0). In particular, we observe the following:


1. taking Ñ = {1, 2, . . . }, there exists n1 such that w0 ∈ Ω(n1x0);

2. taking Ñ = {n1 + 1, n1 + 2, . . . }, there exists n2 with n2 > n1 such that w0 ∈ Ω(n2x0).

Repeating this process yields a sequence {nk} such that w0 ∈ Ω(nkx0) and nk → ∞. Since M (w0)x0 > 0, there exists α > 0 such that M (w0)x0 > αe. Taking k large enough that nk > max{−β/α, 0} gives αnke > −βe ≥ −q(w) for all w. This implies

M (w0)(nkx0) > αnke ≥ −q(w0),

and hence

nkx0 ≥ 0, M (w0)(nkx0) + q(w0) > 0, w0 ∈ Ω(nkx0),

which says nkx0 is a feasible point of SVLCP(M, q, Ω). 2

Definition 2.2. A matrix M ∈ IRn×n is said to be a P -matrix if all its principal minors are positive, or equivalently [3, Theorem 3.3.4],

∀ x ≠ 0, ∃ k ∈ {1, 2, . . . , n} such that xk(M x)k > 0.   (17)

From [3, Corollary 3.3.5], we know every P -matrix is an S-matrix. In other words, if M satisfies (17), then the following system is solvable:

x ≥ 0 and M x > 0.

The corresponding conditions in the set-valued complementarity problem are

∀ x ≠ 0, ∃ k ∈ {1, . . . , n} such that xk(M (w)x)k > 0, for some w ∈ Ω(x),   (18)

and

x ≥ 0 and M (w)x > 0, for some w ∈ Ω(x).   (19)

Example 2.2 shows that the corresponding implication (18) =⇒ (19) is no longer valid in the set-valued complementarity problem.

Example 2.2. Let

M (w) = [ w  0
          0 −w ]

and

Ω(x) = { {−1}, x1 = 0;
         {1},  otherwise.

For x1 ≠ 0, we have M (1)x = (x1, −x2) and hence x1(M (1)x)1 = x1² > 0. For x1 = 0, we know x2 ≠ 0 (since x ≠ 0), which says M (−1)x = (−x1, x2) = (0, x2) and hence x2(M (−1)x)2 = x2² > 0. Therefore, condition (18) is satisfied. But condition (19) fails to hold, because M (w)x = (x1, −x2) or (−x1, x2). Hence, M (w)x > 0 would imply x2 < 0 or x1 < 0, which contradicts x ≥ 0.
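The claims of Example 2.2 can be verified by brute force. The sketch below samples nonzero points to confirm condition (18) and searches a nonnegative grid for a witness of condition (19); the sampling scheme is my own construction.

```python
import numpy as np

# Numerical check of Example 2.2: M(w) = diag(w, -w),
# Omega(x) = {-1} if x1 = 0, {1} otherwise.
def M(w):
    return np.array([[w, 0.0], [0.0, -w]])

def Omega(x):
    return [-1.0] if x[0] == 0 else [1.0]

rng = np.random.default_rng(0)
samples = [rng.uniform(-1, 1, 2) for _ in range(100)] + [np.array([0.0, 1.0])]

# Condition (18): some index k and some w in Omega(x) with x_k (M(w)x)_k > 0.
cond18 = all(
    any(any(x[k] * (M(w) @ x)[k] > 0 for k in range(2)) for w in Omega(x))
    for x in samples if np.any(x != 0)
)

# Condition (19): x >= 0 and M(w)x > 0 for some w in Omega(x) -- grid search.
grid = [np.array([a, b]) for a in np.linspace(0, 2, 21) for b in np.linspace(0, 2, 21)]
cond19 = any(any(np.all(M(w) @ x > 0) for w in Omega(x)) for x in grid)

assert cond18 and not cond19
print("condition (18) holds, condition (19) fails")
```
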


Definition 2.3. A matrix M ∈ IRn×n is said to be semi-monotone if for every nonzero x ≥ 0 there exists an index k with xk > 0 such that (M x)k ≥ 0.

For the classical linear complementarity problem, we know that M is semi-monotone if and only if LCP(M, q) with q > 0 has a unique solution (the zero solution); see [3, Theorem 3.9.3]. One may wonder whether this fact still holds in the set-valued case. Before answering it, we need to generalize the concept of semi-monotonicity to the set-valued case.

Definition 2.4. The set of matrices {M (w) | w ∈ Ω(x)} is said to be

(a) strongly semi-monotone if for any nonzero x ≥ 0,

∃ k with xk > 0 such that (M (w)x)k ≥ 0 for all w ∈ Ω(x);   (20)

(b) weakly semi-monotone if for any nonzero x ≥ 0,

∃ k with xk > 0 such that (M (w)x)k ≥ 0 for some w ∈ Ω(x).   (21)

In contrast to the classical linear complementarity problem, the following are the parallel results for the set-valued linear complementarity problem in which strong (weak) semi-monotonicity plays a role.

Theorem 2.3. For the SVLCP(M, q, Ω), the following statements hold.

(a) If the set of matrices {M (w) | w ∈ Ω(x)} is strongly semi-monotone, then for any positive mapping q, i.e., q(w) > 0 ∀w, SVLCP(M, q, Ω) has zero as its unique solution.

(b) If SVLCP(M, q, Ω) with q(w) > 0 has zero as its unique solution, then the set of matrices {M (w)|w ∈ Ω(x)} is weakly semi-monotone.

Proof. (a) It is clear that, for any positive mapping q, x = 0 is a solution of SVLCP(M, q, Ω). Suppose there is another, nonzero solution x̄, i.e., there exists w̄ ∈ Ω(x̄) such that

x̄ ≥ 0, M (w̄)x̄ + q(w̄) ≥ 0, x̄T(M (w̄)x̄ + q(w̄)) = 0.   (22)

It follows from (20) that there exists k ∈ {1, 2, . . . , n} such that x̄k > 0 and (M (w̄)x̄)k ≥ 0, and hence (M (w̄)x̄ + q(w̄))k > 0, which contradicts (22).

(b) Suppose {M (w) | w ∈ Ω(x)} is not weakly semi-monotone. Then there exists a nonzero x̄ ≥ 0 such that for all k ∈ I+(x̄) := {i | x̄i > 0}, we have (M (w)x̄)k < 0 for all w ∈ Ω(x̄). Choose w̄ ∈ Ω(x̄). Let q(w) = e for all w ≠ w̄ and

qk(w̄) = { −(M (w̄)x̄)k,                 k ∈ I+(x̄);
         { max{−(M (w̄)x̄)k, 0} + 1,     otherwise.

Therefore, q(w) > 0 for all w. According to the above construction, we have

x̄ ≥ 0, M (w̄)x̄ + q(w̄) ≥ 0, x̄T(M (w̄)x̄ + q(w̄)) = 0, with w̄ ∈ Ω(x̄),

i.e., the nonzero vector x̄ is a solution of SVLCP(M, q, Ω), which is a contradiction. 2

Theorem 2.3(b) says that weak semi-monotonicity is a necessary condition for zero to be the unique solution of SVLCP(M, q, Ω). However, it is not a sufficient condition; see Example 2.3.

Example 2.3. Let

M (w) = [ −w 1 0
           0 0 1
           1 0 0 ]

and Ω(x) = {0, 1}.

For any nonzero x = (x1, x2, x3) ≥ 0, we have M (0)x = (x2, x3, x1) ≥ 0, so the set of matrices is weakly semi-monotone. If we take q = (1, 1, 1), a simple calculation shows that x = (1, 0, 0) satisfies

x ≥ 0, M (1)x + q ≥ 0, xT(M (1)x + q) = 0,

which means SVLCP(M, q, Ω) has a nonzero solution. We also note that the set-valued mapping Ω(x) is even continuous in this example.
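A direct computation confirms Example 2.3: with q = (1, 1, 1), the point x = (1, 0, 0) satisfies the complementarity system for w = 1, while M(0) certifies weak semi-monotonicity. The sketch below is a numerical restatement, not part of the paper.

```python
import numpy as np

# Numerical check of Example 2.3 with q = (1, 1, 1).
def M(w):
    return np.array([[-w, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [1.0, 0.0, 0.0]])

q = np.ones(3)
x = np.array([1.0, 0.0, 0.0])
y = M(1.0) @ x + q          # = (0, 1, 2)

# x is a nonzero solution of SVLCP(M, q, Omega) via w = 1.
assert np.all(x >= 0) and np.all(y >= 0) and abs(x @ y) < 1e-12

# Weak semi-monotonicity via w = 0: M(0)x = (x2, x3, x1) >= 0 whenever x >= 0.
rng = np.random.default_rng(1)
for _ in range(100):
    z = rng.uniform(0, 1, 3)
    assert np.all(M(0.0) @ z >= 0)
print("x = (1, 0, 0) is a nonzero solution")
```
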

So far, we have seen major differences between the classical complementarity problem and the set-valued complementarity problem. Such phenomena confirm that studying the set-valued complementarity problem is an interesting, important, and challenging task, which, to some extent, is the main motivation of this paper.

To close this section, we introduce some other concepts which will be used later. A function f : IRn → IR is level-bounded if the level set {x | f (x) ≤ α} is bounded for all α ∈ IR. The metric projection of x onto a closed convex subset A ⊂ IRn is denoted by ΠA(x), i.e., ΠA(x) := argmin_{y∈A} ‖x − y‖. The distance function is defined as dist(x, A) := ‖x − ΠA(x)‖.
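For the nonnegative orthant, which is the set these definitions are applied to later, the projection and distance have simple closed forms; the sketch below illustrates them (a standard fact, shown here only as a worked example).

```python
import numpy as np

# For A = IR^n_+: Pi_A(x) = max(x, 0) componentwise, dist(x, A) = ||min(x, 0)||.
def proj_nonneg(x):
    return np.maximum(x, 0.0)

def dist_nonneg(x):
    return np.linalg.norm(x - proj_nonneg(x))

x = np.array([3.0, -4.0])
assert np.allclose(proj_nonneg(x), [3.0, 0.0])   # negative parts clipped
assert abs(dist_nonneg(x) - 4.0) < 1e-12         # distance = norm of clipped part
```
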

3 Properties of solution sets

Recently, many authors have studied other classes of complementarity problems in which another type of vector w ∈ Ω is involved, for example, the stochastic complementarity problem [2, 4, 7, 21]: to find x ∈ IRn such that

x ≥ 0, F (x, w) ≥ 0, xTF (x, w) = 0, a.e. w ∈ Ω,


where w is a random vector in a given probability space, and the semi-infinite complementarity problem [22]: to find x ∈ IRn such that

x ≥ 0, F (x, w) ≥ 0, xTF (x, w) = 0, ∀w ∈ Ω,

which we denote by SINCP(F, Ω). In addition, the authors of [22] introduce the following two complementarity problems: to find x ∈ IRn such that

x ≥ 0, Fmin(x) ≥ 0, xTFmin(x) = 0

and

x ≥ 0, Fmax(x) ≥ 0, xTFmax(x) = 0,

where

Fmin(x) := ( min_{w∈Ω} F1(x, w), . . . , min_{w∈Ω} Fn(x, w) ) and Fmax(x) := ( max_{w∈Ω} F1(x, w), . . . , max_{w∈Ω} Fn(x, w) ).   (23)

These two problems are denoted by NCP(Fmin) and NCP(Fmax), respectively. Is there any relationship among their solution sets? In order to further describe such relationships, we adopt the following notation:

• SOL(F, Ω) means the solution set of SVNCP(F, Ω),

• SOL(M, q, Ω) means the solution set of SVLCP(M, q, Ω),

• ŜOL(F, Ω) means the solution set of SINCP(F, Ω),

• SOL(Fmin) means the solution set of NCP(Fmin),

• SOL(Fmax) means the solution set of NCP(Fmax).

Besides, for the purpose of comparison, we restrict attention to the case where Ω(x) is fixed, i.e., there exists a subset Ω of IRm such that Ω(x) = Ω for all x ∈ IRn.

It is easy to see that the solution set of SINCP(F, Ω) is ∩_{w∈Ω} SOL(Fw), while that of SVNCP(F, Ω) is ∪_{w∈Ω} SOL(Fw), where Fw(x) := F (x, w). Hence, the solution set of SINCP(F, Ω) is included in that of SVNCP(F, Ω). In other words, we have

ŜOL(F, Ω) ⊆ SOL(F, Ω).   (24)

The inclusion (24) can be strict, as shown in Example 3.1.

Example 3.1. Let F (x, w) = (w, 1) and Ω(x) = [0, 1]. Then, we can verify that ŜOL(F, Ω) = {(0, 0)} whereas SOL(F, Ω) = {x | x1 ≥ 0, x2 = 0}.
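The strictness of the inclusion (24) in Example 3.1 can be confirmed by discretizing Ω = [0, 1]: a point solves the SVNCP if some w works and solves the SINCP only if every w works. The grid check below is my own illustration.

```python
import numpy as np

# Discretized check of Example 3.1: F(x, w) = (w, 1), Omega = [0, 1].
ws = np.linspace(0.0, 1.0, 101)

def is_sol(x, w):
    F = np.array([w, 1.0])
    return bool(np.all(x >= 0) and np.all(F >= 0) and abs(x @ F) < 1e-12)

def in_SVNCP(x):   # complementarity for SOME w in Omega
    return any(is_sol(x, w) for w in ws)

def in_SINCP(x):   # complementarity for EVERY w in Omega
    return all(is_sol(x, w) for w in ws)

x1 = np.array([2.0, 0.0])   # on the claimed SVNCP solution ray {x1 >= 0, x2 = 0}
x0 = np.array([0.0, 0.0])

assert in_SVNCP(x1) and not in_SINCP(x1)   # inclusion (24) is strict
assert in_SINCP(x0)
print("inclusion (24) is strict for this instance")
```
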


However, the solution sets of SVNCP(F, Ω), NCP(Fmin), and NCP(Fmax) are not included in each other, as illustrated in Examples 3.2-3.3.

Example 3.2. SOL(Fmin) ⊄ SOL(F, Ω) and SOL(F, Ω) ⊄ SOL(Fmin).

(a) Let F (x, w) = (1 − w, w) and Ω = [0, 1]. Then, SOL(Fmin) = IR2+ and SOL(F, Ω) = ∪_{w∈Ω} SOL(Fw) = {(x1, x2)T | x1 ≥ 0, x2 ≥ 0, and x1x2 = 0}.

(b) Let F (x, w) = (w − 1, x2) and Ω = [0, 1]. Then, SOL(Fmin) = ∅ and SOL(F, Ω) = {(x1, x2) | x1 ≥ 0, x2 = 0}.

Example 3.3. SOL(Fmax) ⊄ SOL(F, Ω) and SOL(F, Ω) ⊄ SOL(Fmax).

(a) Let F (x, w) = (w − 1, −w) and Ω = [0, 1]. Then, SOL(Fmax) = IR2+ and SOL(F, Ω) = ∅.

(b) Let F (x, w) = (w, −w) and Ω = [0, 1]. Then, SOL(Fmax) = {(x1, x2) | x1 = 0, x2 ≥ 0} and SOL(F, Ω) = IR2+.

Similarly, Example 3.4 shows that the solution sets of NCP(Fmax) and NCP(Fmin) are not included in each other.

Example 3.4. SOL(Fmax) ⊄ SOL(Fmin) and SOL(Fmin) ⊄ SOL(Fmax).

(a) Let F (x, w) = (w − 1, 0) and Ω = [0, 1]. Then, SOL(Fmin) = ∅ and SOL(Fmax) = IR2+.

(b) Let F (x, w) = (w, w) and Ω = [0, 1]. Then, SOL(Fmin) = IR2+ and SOL(Fmax) = {(0, 0)}.

In spite of this, we obtain some results which describe the relationships among these solution sets.

Theorem 3.1. Let Ω(x) = Ω for all x ∈ IRn. Then, we have

(a) SOL(F, Ω) ∩ {x | Fmin(x) ≥ 0} ⊆ SOL(Fmin);

(b) SOL(Fmax) ∩ {x | F (x, w) ≥ 0 for some w ∈ Ω} ⊆ SOL(F, Ω);

(c) SOL(Fmin) ∩ {x | xTFmax(x) ≤ 0} = SOL(Fmax) ∩ {x | Fmin(x) ≥ 0} ⊆ SOL(F, Ω).

Proof. Parts (a) and (b) follow immediately from the fact that

xTFmin(x) ≤ xTF (x, w) ≤ xTFmax(x), ∀w ∈ Ω and x ∈ IRn+.

Part (c) follows from (24), since the two sets on the left side of (c) both equal ŜOL(F, Ω) by [22]. 2

To further characterize the solution sets, we recall that for a set-valued mapping M : IRn ⇒ IRm, its inverse mapping (see [20, Chapter 5]) is defined as

M−1(y) := {x | y ∈ M (x)}.


Theorem 3.2. For SVNCP(F, Ω), we have

SOL(F, Ω) = ∪_{w∈IRm} ( SOL(Fw) ∩ Ω−1(w) ).

Proof. In fact, the desired result follows from

SOL(F, Ω) = {x | x ∈ SOL(Fw) and w ∈ Ω(x) for some w ∈ IRm}
          = {x | x ∈ SOL(Fw) and x ∈ Ω−1(w) for some w ∈ IRm}
          = ∪_{w∈IRm} ( SOL(Fw) ∩ Ω−1(w) ),

where the second equality is due to the definition of the inverse mapping given above. 2

4 Merit functions for SVNCP and SVLCP

It is well known that one of the important approaches for solving complementarity problems is to transform them into a system of equations or an unconstrained optimization problem via NCP-functions or merit functions. Hence, in this section we turn our attention to merit functions for SVNCP(F, Ω) and SVLCP(M, q, Ω).

A function φ : IR2 → IR is called an NCP-function if it satisfies

φ(a, b) = 0 ⇐⇒ a ≥ 0, b ≥ 0, ab = 0.

For example, the natural residual φNR(a, b) = min{a, b} and the Fischer-Burmeister function φFB(a, b) = √(a² + b²) − (a + b) are popular NCP-functions. Please refer to [10] for a detailed survey on existing NCP-functions. In addition, a real-valued function f : IRn → IR is called a merit (or residual) function for a complementarity problem if f (x) ≥ 0 for all x ∈ IRn and f (x) = 0 if and only if x is a solution of the complementarity problem. Given an NCP-function φ, we define

r(x, w) := ‖Φ(x, F (x, w))‖, where Φ(x, y) := (φ(x1, y1), . . . , φ(xn, yn)).

Then, it is not hard to verify that the function given by

r(x) := min_{w∈Ω(x)} r(x, w)   (25)

is a merit function for SVNCP(F, Ω). Note that the merit function (25) is rather different from the traditional one, because the index set is not fixed but depends on x.

We say that a merit function r(x) provides a global error bound with modulus c > 0 if

dist(x, SOL(F, Ω)) ≤ c · r(x), ∀x ∈ IRn.


For more information about error bounds, see the excellent survey paper [16].

Theorem 4.1. Assume that there exists a set Ω ⊂ IRm such that Ω(x) = Ω for all x ∈ IRn, and that for each w ∈ Ω, r(x, w) provides a global error bound for NCP(Fw) with modulus η(w) > 0, i.e.,

dist(x, SOL(Fw)) ≤ η(w)r(x, w), ∀x ∈ IRn.

In addition, if

η := max_{w∈Ω} η(w) < +∞,   (26)

then r(x) = min_{w∈Ω} r(x, w) provides a global error bound for SVNCP(F, Ω) with modulus η.

Proof. Notice that if Ω(x) = Ω for all x ∈ IRn, then

Ω−1(w) = { IRn, w ∈ Ω;
         { ∅,   w ∉ Ω.

It then follows from Theorem 3.2 that

SOL(F, Ω) = ∪_{w∈IRm} ( SOL(Fw) ∩ Ω−1(w) ) = ∪_{w∈Ω} SOL(Fw).

Therefore,

dist(x, SOL(F, Ω)) = dist( x, ∪_{w∈Ω} SOL(Fw) )
                   ≤ min_{w∈Ω} dist(x, SOL(Fw))
                   ≤ min_{w∈Ω} η(w) · r(x, w)
                   ≤ min_{w∈Ω} ( max_{w′∈Ω} η(w′) ) · r(x, w)
                   = ( max_{w∈Ω} η(w) ) · min_{w∈Ω} r(x, w)
                   = η · r(x).

Thus, the proof is complete. 2

One may ask when condition (26) is satisfied. Indeed, condition (26) is satisfied if

(i) Ω is a finite set; or

(ii) F (x, w) = M (w)x + q(w), where M (w) is continuous, Ω is compact, and for each w ∈ Ω the matrix M (w) is a P -matrix. In this case the modulus η(w) takes an explicit form, i.e.,

η(w) = max_{d∈[0,1]n} ‖(I − D + DM (w))−1‖, where D = diag(d),

see [5, 9]. Hence,

η = max_{d∈[0,1]n, w∈Ω} ‖(I − D + DM (w))−1‖

is well defined because M (w) is continuous and Ω is compact.

To simplify notation, we write x → ∞ instead of ‖x‖ → ∞. We now introduce the following definitions, which are similar to (12) and (13):

lim sup_{x→∞} M (x) := {u | ∃ xn → ∞, ∃ un → u with un ∈ M (xn)}

and

lim inf_{x→∞} M (x) := {u | ∀ xn → ∞, ∃ un → u with un ∈ M (xn)}.

Definition 4.1. For SVLCP(M, q, Ω), the set of matrices {M (w) | w ∈ Ω(x)} is said to have the limit-R0 property if

x ≥ 0, M (w)x ≥ 0, xTM (w)x = 0 for some w ∈ lim sup_{x→∞} Ω(x) =⇒ x = 0.   (27)

In the case of the linear complementarity problem, i.e., when Ω(x) is a fixed singleton, Definition 4.1 coincides with the definition of an R0-matrix.

Theorem 4.2. For SVLCP(M, q, Ω), suppose that there exists a bounded set Ω such that Ω(x) ⊂ Ω for all x ∈ IRn, and that M (w) and q(w) are continuous on Ω. If the set of matrices {M (w) | w ∈ Ω(x)} has the limit-R0 property, then the merit function r(x) = min_{w∈Ω(x)} ‖min{x, M (w)x + q(w)}‖ is level-bounded.

Proof. We argue by contradiction. Suppose there exists a sequence {xn} satisfying ‖xn‖ → ∞ with r(xn) bounded. Then,

r(xn)/‖xn‖ = min_{w∈Ω(xn)} ‖min{ xn/‖xn‖, M (w)xn/‖xn‖ + q(w)/‖xn‖ }‖
           = ‖min{ xn/‖xn‖, M (wn)xn/‖xn‖ + q(wn)/‖xn‖ }‖,   (28)

where we assume the minimum is attained at wn ∈ Ω(xn), whose existence is ensured by the compactness of Ω(xn) (since Ω(x) is closed and Ω is bounded). Taking a subsequence if necessary, we may assume that {xn/‖xn‖} and {wn} are both convergent, with limit points x̄ and w̄, respectively. Thus, we have

w̄ ∈ lim sup_{n→∞} Ω(xn) ⊂ lim sup_{‖x‖→∞} Ω(x).

Since r(xn) is bounded, the left-hand side of (28) tends to zero, so taking the limit in (28) yields

‖min{x̄, M (w̄)x̄}‖ = 0,

where we have used the fact that q(wn)/‖xn‖ → 0, because q is continuous and wn ∈ Ω is bounded. Since ‖x̄‖ = 1, x̄ is a nonzero vector satisfying x̄ ≥ 0, M (w̄)x̄ ≥ 0, x̄TM (w̄)x̄ = 0, which contradicts (27). 2

Note that condition (27) is equivalent to

∪_{w∈lim sup_{x→∞} Ω(x)} SOL(M (w)) = {0},

which is also equivalent to saying that each matrix M (w) for w ∈ lim sup_{x→∞} Ω(x) is an R0-matrix.

Theorem 4.3. For SVLCP(M, q, Ω), suppose that there exists a compact set Ω such that Ω(x) ⊂ Ω for all x ∈ IRn, and that M (w) and q(w) are continuous on Ω. If r(x) = min_{w∈Ω(x)} ‖min{x, M (w)x + q(w)}‖ is level-bounded, then the following implication holds:

x ≥ 0, M (w)x ≥ 0, xTM (w)x = 0 for some w ∈ ∩_{Ñ∈N} ∪_{n∈Ñ} Ω(nx) =⇒ x = 0.

Proof. Suppose that there exist a nonzero vector x0 and w0 ∈ ∩_{Ñ∈N} ∪_{n∈Ñ} Ω(nx0) such that

x0 ≥ 0, M (w0)x0 ≥ 0, x0TM (w0)x0 = 0.   (29)

As in the argument of Theorem 2.2, there exists a sequence {nk} with nk → ∞ and w0 ∈ Ω(nkx0). Hence,

r(nkx0) = min_{w∈Ω(nkx0)} ‖min{nkx0, nkM (w)x0 + q(w)}‖
        ≤ ‖min{nkx0, nkM (w0)x0 + q(w0)}‖
        ≤ Σ_{i=1}^n |min{nk(x0)i, nk(M (w0)x0)i + qi(w0)}|.

Next, we proceed by discussing the following two cases.

Case 1. If (x0)i > 0, then (M (w0)x0)i = 0 from (29). Since max_{w∈Ω} ‖q(w)‖ is finite, due to the compactness of Ω and the continuity of q, we have nk(x0)i > qi(w0) for k sufficiently large. Therefore,

|min{nk(x0)i, nk(M (w0)x0)i + qi(w0)}| = |qi(w0)|.

Case 2. If (x0)i = 0, a simple calculation gives

|min{nk(x0)i, nk(M (w0)x0)i + qi(w0)}| { = 0,          if nk(M (w0)x0)i + qi(w0) ≥ 0;
                                       { ≤ |qi(w0)|,  if nk(M (w0)x0)i + qi(w0) < 0,

where the inequality in the latter case comes from the fact that qi(w0) ≤ nk(M (w0)x0)i + qi(w0) < 0. Thus,

r(nkx0) ≤ Σ_{i=1}^n |qi(w0)|.

This contradicts the level-boundedness of r(x), since ‖nkx0‖ → ∞. 2

The above conclusion is equivalent to saying that for each w ∈ ∩_{Ñ∈N} ∪_{n∈Ñ} Ω(nx), the matrix M (w) is an R0-matrix. Finally, let us discuss a special case where the set-valued mapping Ω(x) has an explicit form, e.g., Ω(x) = {w | H(x, w) = 0 and G(x, w) ≥ 0}, where H : IRn × IRm → IRl1 and G : IRn × IRm → IRl2. Then, the solution set can be further characterized.

Theorem 4.4. If Ω(x) := {w | G(x, w) ≥ 0, H(x, w) = 0}, then

SOL(F, Ω) = ∪_{w∈IRm} { x | (x, 0, α) ∈ SOL(Θw) for some α ∈ IRl1++, where 0 denotes the zero vector in IRl2 },

where Θw : IRn → IRn+l1+l2 is defined as

Θw(x) := ( F (x, w), G(x, w), H(x, w) )

and IRl1++ := {α ∈ IRl1 | αi > 0 for all i = 1, . . . , l1}.

Proof. Note that the problem (2) is to find w ∈ IRm and x ∈ IRn such that

x ≥ 0, F(x, w) ≥ 0, ⟨F(x, w), x⟩ = 0, G(x, w) ≥ 0, H(x, w) = 0,

namely, to find w ∈ IRm and x ∈ IRn satisfying

x ≥ 0, F(x, w) ≥ 0, ⟨F(x, w), x⟩ = 0,
0 ≥ 0, G(x, w) ≥ 0, 0 · G(x, w) = 0,
α > 0, H(x, w) ≥ 0, ⟨α, H(x, w)⟩ = 0.

In other words,

(x, 0, α) ≥ 0, Θw(x) = (F(x, w), G(x, w), H(x, w)) ≥ 0, ⟨(x, 0, α), Θw(x)⟩ = 0.

Then, the desired result follows. □

The foregoing result indicates that the set-valued complementarity problem differs from the classical complementarity problem in that it requires some components of the solution to be positive or zero, a restriction not imposed in classical complementarity problems.
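The augmented system in Theorem 4.4 can be checked mechanically. Below is a small Python sketch on a hypothetical one-dimensional instance (the choices F(x, w) = x − w, G(x, w) = 2 − x, H(x, w) = w − 1 are illustrative, not from the paper): x = 1 with w = 1 solves the SVCP, and the augmented vector (x, 0, α) solves the associated complementarity system for any α > 0.

```python
def theta(x, w):
    """Hypothetical Theta_w(x) = (F, G, H) with F = x - w, G = 2 - x, H = w - 1."""
    return [x - w, 2.0 - x, w - 1.0]

def solves_comp_system(z, Fz, tol=1e-9):
    """Check z >= 0, F(z) >= 0, <z, F(z)> = 0."""
    return (all(zi >= -tol for zi in z)
            and all(fi >= -tol for fi in Fz)
            and abs(sum(zi * fi for zi, fi in zip(z, Fz))) <= tol)

x, w, alpha = 1.0, 1.0, 3.0   # any alpha > 0 works, as the theorem predicts
assert solves_comp_system([x, 0.0, alpha], theta(x, w))
assert not solves_comp_system([0.0, 0.0, alpha], theta(0.0, w))   # x = 0 fails: F = -1 < 0
```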

Moreover, the set-valued complementarity problem can be further reformulated as an equation, i.e., finding x ∈ IRn and w ∈ IRm satisfying

Γ(x, w) = ( ξ(x, w), H(x, w), dist²(G(x, w) | IR^{l2}_+) ) = 0, (30)

where ξ(x, w) := ½ ‖Φ_FB(x, F(x, w))‖². Note that when A is a closed convex set, θ(x) := dist²(x, A) is continuously differentiable with ∇θ(x) = 2(x − Π_A(x)). This fact, together with the continuous differentiability of ‖φ_FB‖², immediately implies the following.

Theorem 4.5. Suppose that G and H are continuously differentiable and φ is the Fischer-Burmeister function. Then Γ is continuously differentiable and

JΓ(x, w) =
⎡ Jxξ(x, w)                                          Jwξ(x, w)                                          ⎤
⎢ JxH(x, w)                                          JwH(x, w)                                          ⎥
⎣ 2(G(x, w) − Π_{IR^{l2}_+}(G(x, w)))ᵀ JxG(x, w)     2(G(x, w) − Π_{IR^{l2}_+}(G(x, w)))ᵀ JwG(x, w)     ⎦,

where

Jxξ(x, w) = Φ_FB(x, F(x, w))ᵀ [ Da(x, F(x, w)) + Db(x, F(x, w)) JxF(x, w) ]

and

Jwξ(x, w) = Φ_FB(x, F(x, w))ᵀ Db(x, F(x, w)) JwF(x, w).

Here Da(x, F(x, w)) and Db(x, F(x, w)) denote the sets of n × n diagonal matrices diag(a1(x, F(x, w)), . . . , an(x, F(x, w))) and diag(b1(x, F(x, w)), . . . , bn(x, F(x, w))), respectively, with

(ai(x, F(x, w)), bi(x, F(x, w)))  = (xi, Fi(x, w)) / √(xi² + Fi(x, w)²) − (1, 1),   if (xi, Fi(x, w)) ≠ 0,
                                  ∈ ⋃_{θ∈[0,2π]} {(cos θ, sin θ)} − (1, 1),          if (xi, Fi(x, w)) = 0.
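To make the equation reformulation (30) concrete, here is a minimal Python sketch on a hypothetical scalar instance (the choices F(x, w) = x − w, H(x, w) = w − 1, G(x, w) = 2 − x are illustrative, not from the paper): Γ vanishes exactly at a solution of the SVCP.

```python
import math

def phi_fb(a, b):
    """Fischer-Burmeister function: sqrt(a^2 + b^2) - a - b,
    which equals zero iff a >= 0, b >= 0, and a*b = 0."""
    return math.sqrt(a * a + b * b) - a - b

def Gamma(x, w):
    F = x - w            # complementarity part
    H = w - 1.0          # equality constraint H(x, w) = 0
    G = 2.0 - x          # inequality constraint G(x, w) >= 0
    xi = 0.5 * phi_fb(x, F) ** 2
    dist2 = min(G, 0.0) ** 2   # squared distance of G to IR_+
    return (xi, H, dist2)

assert Gamma(1.0, 1.0) == (0.0, 0.0, 0.0)   # (x, w) = (1, 1) solves this SVCP
assert Gamma(0.0, 1.0)[0] > 0.0             # x = 0 is not a solution here
```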

5 Further discussions

In this paper, we have paid much attention to set-valued complementarity problems, which possess rather different features from those of classical complementarity problems.

As suggested by one referee, we briefly discuss here the relation between stochastic variational inequalities and set-valued complementarity problems. Given F : IRn × Ξ → IRn, Xξ ⊂ IRn, and Ξ ⊂ IRl, a set representing future states of knowledge, the stochastic variational inequality is to find x ∈ Xξ such that

(y − x)ᵀ F(x, ξ) ≥ 0, ∀ y ∈ Xξ, ξ ∈ Ξ.

If Xξ = IRn+, then the stochastic variational inequality reduces to the stochastic complementarity problem

x ≥ 0, F(x, ξ) ≥ 0, xᵀF(x, ξ) = 0, ξ ∈ Ξ. (31)

The optimization problem corresponding to the stochastic complementarity problem is

min_{x∈IRn+} E‖Φ(x, F(x, ξ))‖. (32)

When Ξ is a discrete set, say Ξ := {ξ1, ξ2, . . . , ξv}, then

E‖Φ(x, F(x, ξ))‖ = ∑_{i=1}^{v} P(ξi) ‖Φ(x, F(x, ξi))‖, (33)

where P(ξi) is the probability of ξi. If the optimal value of (32) is zero, then it follows from (33) that (31) coincides with

x ≥ 0, F(x, ξi) ≥ 0, xᵀF(x, ξi) = 0, ∀ ξi ∈ Ξ satisfying P(ξi) > 0.
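For the discrete case, the expected residual in (33) can be evaluated directly. A minimal Python sketch, taking Φ to be the componentwise natural residual min{xi, Fi} and a hypothetical two-scenario instance F(x, ξ) = ξx (an illustrative choice, not from the paper):

```python
import math

def expected_residual(x, scenarios, probs, F):
    """E ||Phi(x, F(x, xi))|| for a discrete distribution, cf. (33),
    with Phi the componentwise natural residual min{x_i, F_i}."""
    total = 0.0
    for xi, p in zip(scenarios, probs):
        y = F(x, xi)
        total += p * math.sqrt(sum(min(x[i], y[i]) ** 2 for i in range(len(x))))
    return total

# Hypothetical instance: F(x, xi) = xi * x, so x = 0 solves every scenario
# and the expected residual attains its zero optimal value there.
F = lambda x, xi: [xi * x[0]]
assert expected_residual([0.0], [1.0, 2.0], [0.5, 0.5], F) == 0.0
assert expected_residual([1.0], [1.0, 2.0], [0.5, 0.5], F) > 0.0
```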

When Ξ is a continuous set, then

E‖Φ(x, F(x, ξ))‖ = ∫_Ξ ‖Φ(x, F(x, ξ))‖ P(ξ) dξ, (34)

where P is the density function. In this case, (31) takes the form

x ≥ 0, F(x, ξ) ≥ 0, xᵀF(x, ξ) = 0, a.e. ξ ∈ Ξ,

or, equivalently, there exists a subset Ξ0 ⊂ Ξ with P(Ξ0) = 0 such that

x ≥ 0, F(x, ξ) ≥ 0, xᵀF(x, ξ) = 0, ∀ ξ ∈ Ξ\Ξ0.

Hence the stochastic complementarity problem is, to a certain extent, a semi-infinite complementarity problem (SICP).

Due to some major differences between set-valued complementarity problems and classical complementarity problems, there remain many interesting, important, and challenging questions for further investigation; we name a few below.

(i) How to extend other important concepts used in classical linear complementarity problems to the set-valued case (such as P0, P, Z, Q, Q0, S, S̄, copositive, column sufficient matrices, ...)?

(ii) How to propose an effective algorithm to solve the equation (30)?

(iii) Can we provide some sufficient conditions to ensure the existence of solutions? One possible direction is to use fixed-point theory. In fact, the set-valued complementarity problem is to find x ∈ IRn such that

x = max{0, x − F(x, w)} = Π_{IRn+}(x − F(x, w)) for some w ∈ Ω(x),

i.e.,

x ∈ Π_{IRn+}(x − F̃(x)), (35)

where F̃(x) := ⋃_{w∈Ω(x)} F(x, w). Note that (35) is a fixed-point equation for the set-valued mapping Π_{IRn+}(I − F̃), where I denotes the identity mapping.
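As a naive illustration of the fixed-point viewpoint (35), the following Python sketch iterates x ← Π_{IRn+}(x − F(x, w)), choosing at each step the w in a finite Ω(x) that brings x closest to being a fixed point. Both the instance and the selection rule are hypothetical; nothing here is claimed to converge in general.

```python
def proj_pos(v):
    """Projection onto the nonnegative orthant IR^n_+."""
    return [max(0.0, vi) for vi in v]

def svcp_fixed_point(x, Omega, F, iters=50):
    """Iterate x <- proj(x - F(x, w)), picking the w in the finite set Omega(x)
    that minimizes the fixed-point error at each step."""
    for _ in range(iters):
        candidates = []
        for w in Omega(x):
            y = proj_pos([x[i] - F(x, w)[i] for i in range(len(x))])
            err = sum((y[i] - x[i]) ** 2 for i in range(len(x)))
            candidates.append((err, y))
        x = min(candidates, key=lambda c: c[0])[1]
    return x

# Hypothetical instance: F(x, w) = [x[0] - w], Omega(x) = {0, 1};
# both x = 0 (with w = 0) and x = 1 (with w = 1) solve the SVCP.
sol = svcp_fixed_point([2.0], lambda x: [0.0, 1.0], lambda x, w: [x[0] - w])
assert sol == [1.0]   # x = 1, w = 1: x >= 0, F = 0 >= 0, x^T F = 0
```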

Acknowledgements. The authors would like to thank the three referees for their careful reading and suggestions, which helped to improve this manuscript.

