
Approximate proximal algorithms for generalized variational inequalities with paramonotonicity and pseudomonotonicity

L.C. Ceng a, T.C. Lai b, J.C. Yao c,∗

a Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
b College of Management, National Taiwan University, Taipei, Taiwan
c Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 804, Taiwan

∗ Corresponding author. E-mail address: yaojc@math.nsysu.edu.tw (J.C. Yao).

Received 9 November 2006; received in revised form 21 June 2007; accepted 27 June 2007

Abstract

We propose an approximate proximal algorithm for solving generalized variational inequalities in Hilbert space. An extension to a Bregman-function-based approximate proximal algorithm is also discussed. Weak convergence of these two algorithms is established under paramonotonicity and pseudomonotonicity assumptions on the operators.

© 2007 Elsevier Ltd. All rights reserved.

Keywords: Generalized variational inequalities; Monotone operators; Approximate proximal algorithms; Weak accumulation points; Weak convergence

1. Introduction and preliminaries

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖. Given T : D(T) ⊂ H → 2^H, where D(T) denotes the domain of T, and a nonempty closed convex set Ω ⊂ H, the generalized variational inequality problem for T and Ω, denoted by GVI(T, Ω), is the problem of finding x* ∈ D(T) such that

x* ∈ Ω, ∃ u* ∈ T(x*): ⟨u*, x − x*⟩ ≥ 0, ∀ x ∈ Ω. (1.1)

The problem GVI(T, Ω) was initially introduced in the 1970s; see, e.g., Bruck [1] and the references therein. Subsequently, Fang and Peterson [2] considered it in 1982 in the setting of finite-dimensional spaces. Since then, this problem has been studied extensively in the literature, mainly concerning the existence of solutions; see, e.g., [3–5] and the references therein.

When T is single-valued, GVI(T, Ω) reduces to the classical variational inequality VI(T, Ω), which has been extensively studied in both finite- and infinite-dimensional spaces; see [6–9] and the references therein. We observe that both GVI(T, Ω) and VI(T, Ω) are closely related to optimization problems; see, e.g., [6,9,10].
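To make the connection with optimization concrete, the following standard observation may help (added here for illustration; it is not stated explicitly in the paper): when T is the subdifferential of a convex function, GVI(T, Ω) is exactly the first-order optimality condition for constrained minimization.

```latex
% Illustration (standard fact, not from the paper): let f : H \to (-\infty,+\infty]
% be proper, convex and lower semicontinuous, and take T = \partial f.
% If x^* solves GVI(\partial f, \Omega), then for some u^* \in \partial f(x^*)
\langle u^*,\, x - x^* \rangle \ge 0 \qquad \forall\, x \in \Omega,
% and the subgradient inequality f(x) \ge f(x^*) + \langle u^*, x - x^* \rangle yields
f(x) \ge f(x^*) \qquad \forall\, x \in \Omega,
% i.e. x^* minimizes f over \Omega.
```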

In this paper we suggest and analyse an approximate proximal algorithm (Algorithm 2.1) and a Bregman-function-based approximate proximal algorithm (Algorithm 3.1) for solving GVI(T, Ω), where T is a paramonotone and pseudomonotone multivalued operator.


The goal of the present work is twofold. First, we consider subproblems on the domains Ω_n ⊃ Ω, n = 1, 2, …, which together form a general approximate proximal point scheme. We prove that this scheme generates a sequence which converges weakly to a solution of GVI(T, Ω). Second, we present an extension to a Bregman-function-based approximate proximal algorithm: given a suitable Bregman function, we define new approximating problems on the domains Ω_n ⊃ Ω, n = 1, 2, …, which form a general Bregman-function-based approximate proximal point scheme for solving GVI(T, Ω). We also prove that this scheme generates a sequence which converges weakly to a solution of GVI(T, Ω). The convergence of Algorithms 2.1 and 3.1 for strongly monotone operators was studied in [11]; the present paper can be regarded as a continuation of that work.

Now we recall some preliminaries which will be used in the rest of this paper.

Definition 1.1. Let T : D(T) ⊂ H → 2^H be an operator, where D(T) is the domain of T. Then T is said to be

(i) monotone if ⟨u − v, x − y⟩ ≥ 0 for all x, y ∈ Ω, u ∈ T(x), and v ∈ T(y);

(ii) paramonotone [12] on Ω if T is monotone and ⟨v − u, y − z⟩ = 0 with y, z ∈ Ω, v ∈ T(y), u ∈ T(z) implies that u ∈ T(y) and v ∈ T(z).
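Two classical examples may help separate the two notions (added here for orientation; they are standard facts, not results of the paper): subdifferentials of convex functions are paramonotone, whereas nontrivial skew-symmetric linear operators are monotone but not paramonotone.

```latex
% (a) If f : H \to (-\infty,+\infty] is proper, convex and l.s.c., then
%     T = \partial f is paramonotone on \Omega (see, e.g., Iusem [12]).
% (b) On H = \mathbb{R}^2, the rotation T(x_1, x_2) = (-x_2, x_1) satisfies
\langle T x - T y,\, x - y \rangle = 0 \qquad \forall\, x, y,
%     so T is monotone; but taking y = (1,0), z = (0,0) gives
%     \langle Ty - Tz, y - z \rangle = 0 with Ty \ne Tz, so T is not paramonotone.
```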

Proposition 1.1 ([12, Proposition 4]). Assume that T is paramonotone on Ω and x̄ is a solution of GVI(T, Ω). Let x* ∈ Ω be such that there exists an element u* ∈ T(x*) with ⟨u*, x* − x̄⟩ ≤ 0. Then x* also solves GVI(T, Ω).

In 2005, Burachik, Lopes and Svaiter [10] studied an outer approximation method for the variational inequality problem. To prove convergence of the method, they employed the paramonotonicity and pseudomonotonicity of multivalued operators. Let T : D(T) ⊂ H → 2^H be an operator whose domain D(T) is closed and convex. T is said to be pseudomonotone [13] if for any sequence {(x_n, u_n)} ⊂ G(T), the graph of T, such that

(a) {x_n} converges weakly to x* ∈ D(T), and

(b) lim sup_n ⟨u_n, x_n − x*⟩ ≤ 0,

it holds that for every w ∈ D(T) there exists an element u* ∈ T(x*) such that

⟨u*, x* − w⟩ ≤ lim inf_n ⟨u_n, x_n − w⟩.

2. Approximate proximal algorithm for GVI(T, Ω)

Let Ω ⊂ H be a nonempty closed and convex set and let T : D(T) ⊂ H → 2^H be a multivalued operator with Ω ∩ D(T) ≠ ∅. Recall that the generalized variational inequality GVI(T, Ω) is the problem of finding x* ∈ Ω ∩ D(T) such that there exists u* ∈ T(x*) with

⟨u*, x − x*⟩ ≥ 0, ∀ x ∈ Ω. (2.1)

S* denotes the solution set of GVI(T, Ω). We fix a sequence {Ω_n} of closed convex subsets of H and two sequences {ε_n}, {λ_n} ⊂ R+ := [0, +∞) satisfying the following conditions:

(A1) Ω ⊂ Ω_n for all n, and there exist x* ∈ S* and u* ∈ T(x*) such that

⟨u*, x − x*⟩ ≥ 0, ∀ x ∈ Ω_n and ∀ n.

(A2) Σ_n (ε_n/λ_n) < +∞ with {λ_n} ⊂ (0, M] for some M > 0.

Observe that there are situations in which (A1) is satisfied. For example, if Ω_n is contained in some bounded, closed, convex subset of H for all n and the operator T is upper semicontinuous along line segments with bounded closed convex values, then (A1) is satisfied (see, e.g., [3]).


Algorithm 2.1. Initialization. Take any initial value x_0 ∈ Ω and Ω_1 ⊃ Ω.

Iterations. For n = 1, 2, …, find x_n ∈ Ω_n ∩ D(T), a solution of the n-th approximating problem, defined as follows: for given Ω_n, ε_n and λ_n,

find x_n ∈ Ω_n ∩ D(T) such that there exists u_n ∈ T(x_n) with

⟨λ_n(x_{n−1} − x_n + e_n) − u_n, x_n − x⟩ ≥ −ε_n, ∀ x ∈ Ω_n, (AP_n)

where {e_n} is an error sequence in H.
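The following minimal sketch illustrates one way a step of Algorithm 2.1 can be realized numerically. It is an illustration only, not the authors' implementation, and the helper names are mine: it assumes H = R^d, a single-valued monotone Lipschitz T (an affine map), Ω_n = Ω a Euclidean ball so the projection is explicit, e_n = 0, and each subproblem (AP_n) solved only approximately by an inner projected fixed-point loop, so the leftover residual plays the role of ε_n.

```python
# Minimal numerical sketch of Algorithm 2.1 (illustration only, not the authors' code).
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto {z : ||z - center|| <= radius}."""
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else center + radius * d / nrm

def solve_subproblem(T, x_prev, lam, project, gamma=0.05, inner_iters=300):
    """Approximate the exact solution of (AP_n), i.e. the fixed point
    x = P_Omega(x - gamma * (T(x) + lam * (x - x_prev)))."""
    x = x_prev.copy()
    for _ in range(inner_iters):
        x = project(x - gamma * (T(x) + lam * (x - x_prev)))
    return x

def approximate_proximal(T, x0, project, lam=1.0, outer_iters=50):
    """Outer loop of Algorithm 2.1 with constant lambda_n = lam and e_n = 0."""
    x = x0
    for _ in range(outer_iters):
        x = solve_subproblem(T, x, lam, project)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    M, S = rng.standard_normal((d, d)), rng.standard_normal((d, d))
    A = 0.1 * (M @ M.T) + 0.1 * (S - S.T)        # monotone: PSD part plus skew part
    b = rng.standard_normal(d)
    T = lambda x: A @ x + b
    project = lambda x: project_ball(x, np.zeros(d), 2.0)

    x_out = approximate_proximal(T, np.zeros(d), project)
    # The natural residual ||x - P_Omega(x - T(x))|| is small near a solution of VI(T, Omega).
    print("natural residual:", np.linalg.norm(x_out - project(x_out - T(x_out))))
```

With these illustrative choices each inner map is a contraction, so the inner loop approximates the exact proximal subproblem; in the multivalued, infinite-dimensional setting of the paper the subproblem would of course have to be handled by a solver appropriate to T.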

Definition 2.1. Let {Ω_n}, {ε_n} and {λ_n} be as in (A1) and (A2).

(a) A sequence {x_n} is called an almost-orbit if x_n solves (AP_n) for all n.

(b) An almost-orbit {x_n} is called asymptotically feasible (AF, for short) if all weak accumulation points of {x_n} belong to Ω.

We remark that if D(T) = H, e_n = x_n − x_{n−1} and λ_n = 1 for all n, then the concepts of almost-orbit and asymptotic feasibility reduce to the concepts of orbit and feasibility in [10, Definition 3.1], respectively.

Lemma 2.1 ([11, Lemma 2.1]). Let {a_n}, {b_n} and {c_n} be nonnegative real sequences satisfying

a_{n+1} ≤ (1 + b_n) a_n + c_n, ∀ n ≥ n_0, (*)

for some integer n_0 ≥ 1, where Σ_n b_n < +∞ and Σ_n c_n < +∞. Then lim_n a_n exists.
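For completeness, here is a short standard argument for Lemma 2.1 (added for the reader; the paper simply cites [11]):

```latex
% Set P_n = \prod_{k=n_0}^{n-1}(1+b_k) and \hat a_n = a_n / P_n.  Since
% \sum_n b_n < +\infty, the products P_n \ge 1 increase to a finite limit P.
% Dividing (*) by P_{n+1} gives
\hat a_{n+1} \;\le\; \hat a_n + \frac{c_n}{P_{n+1}} \;\le\; \hat a_n + c_n,
% so \hat a_n - \sum_{k<n} c_k is nonincreasing and bounded below, hence
% convergent; since \sum_n c_n < +\infty, \hat a_n itself converges, and
a_n = \hat a_n P_n \longrightarrow \bigl(\lim_n \hat a_n\bigr) P .
```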

Now, we state and prove the main result of this section.

Theorem 2.1. Suppose that the sequence {x_n} generated by Algorithm 2.1 is an AF almost-orbit and that (A1) and (A2) hold. Suppose that

(i) T is paramonotone and pseudomonotone with closed domain;
(ii) S* is nonempty.

If Σ_n ‖e_n‖ < +∞, then {x_n} converges weakly to a solution of GVI(T, Ω).

Proof. Following the proof of Theorem 2.1 in [11], we can prove the following conclusions:

(i) For x* ∈ S* as in (A1), there holds

λ_n⟨x_{n−1} − x_n + e_n, x_n − x*⟩ ≥ −ε_n.

(ii) For x* ∈ S* as in (A1), there holds

‖x_n − x*‖² ≤ ‖x_{n−1} − x*‖² − ‖x_n − x_{n−1}‖² + 2⟨e_n, x_n − x*⟩ + 2ε_n/λ_n.

(iii) For x* ∈ S* as in (A1), there exists an integer N_0 ≥ 1 such that for all n ≥ N_0

‖x_n − x*‖² ≤ (1 + β_n)‖x_{n−1} − x*‖² − (1/(1 − ‖e_n‖))‖x_n − x_{n−1}‖² + β_n,

where β_n = (‖e_n‖ + 2ε_n/λ_n)/(1 − ‖e_n‖), ∀ n ≥ N_0.

(iv) The following statements hold:

(a) lim_n ‖x_n − x*‖ exists for x* ∈ S* as in (A1), and hence {x_n} is bounded;

(b) lim_n ‖x_n − x_{n−1}‖ = 0.
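The step from (i) to (ii) is a one-line expansion; we record it here for the reader's convenience (the computation is standard, the paper refers to [11] for it):

```latex
% From the identity
% 2\langle x_{n-1}-x_n,\, x_n-x^*\rangle
%    = \|x_{n-1}-x^*\|^2 - \|x_n-x_{n-1}\|^2 - \|x_n-x^*\|^2
% and (i), i.e. \lambda_n\langle x_{n-1}-x_n+e_n,\, x_n-x^*\rangle \ge -\varepsilon_n,
% dividing by \lambda_n > 0 and multiplying by 2 gives
\|x_{n-1}-x^*\|^2 - \|x_n-x_{n-1}\|^2 - \|x_n-x^*\|^2
  + 2\langle e_n,\, x_n-x^*\rangle \;\ge\; -\,\frac{2\varepsilon_n}{\lambda_n},
% which rearranges to conclusion (ii).
```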

Next, we shall prove that {x_n} converges weakly to a solution of GVI(T, Ω).

Indeed, we first claim that every weak accumulation point of {x_n} is a solution of GVI(T, Ω). Let x̂ be a weak accumulation point of {x_n}. Then there exists a subsequence {x_{n_j}} weakly convergent to x̂. For each j, x_{n_j} solves (AP_{n_j}). Thus there exists u_{n_j} ∈ T(x_{n_j}) such that

⟨λ_{n_j}(x_{n_j−1} − x_{n_j} + e_{n_j}) − u_{n_j}, x_{n_j} − x⟩ ≥ −ε_{n_j}, ∀ x ∈ Ω_{n_j} and ∀ n_j.

By the condition Ω_{n_j} ⊃ Ω, we have

⟨λ_{n_j}(x_{n_j−1} − x_{n_j} + e_{n_j}) − u_{n_j}, x_{n_j} − x⟩ ≥ −ε_{n_j}, ∀ x ∈ Ω and ∀ n_j. (2.2)

Since {x_n} is AF, x̂ ∈ Ω. Therefore

⟨λ_{n_j}(x_{n_j−1} − x_{n_j} + e_{n_j}) − u_{n_j}, x_{n_j} − x̂⟩ ≥ −ε_{n_j}, ∀ n_j,

which implies that

ε_{n_j} + λ_{n_j}⟨x_{n_j−1} − x_{n_j} + e_{n_j}, x_{n_j} − x̂⟩ ≥ ⟨u_{n_j}, x_{n_j} − x̂⟩, ∀ n_j.

Also, utilizing (A2), we have

lim sup_j ⟨u_{n_j}, x_{n_j} − x̂⟩ ≤ lim sup_j [λ_{n_j}⟨x_{n_j−1} − x_{n_j} + e_{n_j}, x_{n_j} − x̂⟩ + ε_{n_j}]
  = lim sup_j λ_{n_j}[⟨x_{n_j−1} − x_{n_j} + e_{n_j}, x_{n_j} − x̂⟩ + ε_{n_j}/λ_{n_j}]
  ≤ lim sup_j M[(‖x_{n_j−1} − x_{n_j}‖ + ‖e_{n_j}‖)‖x_{n_j} − x̂‖ + ε_{n_j}/λ_{n_j}]
  = 0.

(The last expression tends to 0 because ‖x_{n_j} − x_{n_j−1}‖ → 0 by (iv)(b), ‖e_{n_j}‖ → 0 since Σ_n ‖e_n‖ < +∞, {x_n} is bounded, and ε_{n_j}/λ_{n_j} → 0 by (A2).)

Take any x̄ ∈ S*. From the pseudomonotonicity of T, we conclude that there exists û ∈ T(x̂) such that

lim inf_j ⟨u_{n_j}, x_{n_j} − x̄⟩ ≥ ⟨û, x̂ − x̄⟩.

Since x̄ lies in Ω, from (2.2) we have

lim inf_j ⟨u_{n_j}, x_{n_j} − x̄⟩ ≤ lim inf_j [λ_{n_j}⟨x_{n_j−1} − x_{n_j} + e_{n_j}, x_{n_j} − x̄⟩ + ε_{n_j}]
  ≤ lim sup_j λ_{n_j}[⟨x_{n_j−1} − x_{n_j} + e_{n_j}, x_{n_j} − x̄⟩ + ε_{n_j}/λ_{n_j}]
  ≤ lim sup_j M[(‖x_{n_j−1} − x_{n_j}‖ + ‖e_{n_j}‖)‖x_{n_j} − x̄‖ + ε_{n_j}/λ_{n_j}]
  = 0.

Combining the last two inequalities, we infer that

⟨û, x̂ − x̄⟩ ≤ 0.

Now, taking into account the paramonotonicity of T and Iusem [12, Proposition 4], we deduce that x̂ is a solution of GVI(T, Ω).

On the other hand, suppose that x̂ and x̄ are any two weak accumulation points of {x_n} and that two subsequences {x_{n_i}} and {x_{m_j}} of {x_n} converge weakly to x̂ and x̄, respectively. Then both x̂ and x̄ belong to S*. Thus, by conclusion (iv)(a), we know that both lim_n ‖x_n − x̂‖ and lim_n ‖x_n − x̄‖ exist. Now, observe that

lim_n ‖x_n − x̄‖² = lim_i ‖x_{n_i} − x̄‖²
  = lim_i ‖x_{n_i} − x̂ + x̂ − x̄‖²
  = lim_i [‖x_{n_i} − x̂‖² + 2⟨x_{n_i} − x̂, x̂ − x̄⟩ + ‖x̂ − x̄‖²]
  = lim_i ‖x_{n_i} − x̂‖² + ‖x̂ − x̄‖²   (the cross term vanishes because x_{n_i} ⇀ x̂)
  = lim_n ‖x_n − x̂‖² + ‖x̂ − x̄‖². (2.3)

Interchanging the roles of x̂ and x̄, we similarly derive

lim_n ‖x_n − x̂‖² = lim_n ‖x_n − x̄‖² + ‖x̂ − x̄‖². (2.4)

Adding up (2.3) and (2.4), we get ‖x̂ − x̄‖² = 0, i.e. x̂ = x̄. Therefore, {x_n} converges weakly to a solution of GVI(T, Ω). □

3. Extension to Bregman function-based approximate proximal algorithm

Let Λ be a convex open subset of H and let h : Λ̄ → R be a Bregman function, where Λ̄ denotes the closure of the set Λ. We refer to [14, Definition 2.1] for the definition of Bregman functions. We observe that although [14, Definition 2.1] is given in a finite-dimensional setting, it is not difficult to see that it extends to Hilbert space. The Bregman distance between x and y is defined via the "D-function"

D_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩, (3.1)

where x ∈ Λ̄ and y ∈ Λ. From the strict convexity of h, one can prove that D_h(x, y) ≥ 0, and D_h(x, y) = 0 if and only if x = y. If h(x) = ½‖x‖², then D_h(x, y) = ½‖x − y‖². In the following, we will use the class of functions of the form

h(x) = h_0(x) + ½‖x‖²,

where h_0 is a Bregman function. It is easy to see that h is also a Bregman function. Thus, for all x ∈ Λ̄ and y ∈ Λ, we have, as in [11],

D_h(x, y) ≥ ½‖x − y‖². (3.2)
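As a quick sanity check of (3.1) and (3.2), the following snippet evaluates D_h numerically for one particular choice of h_0 (the entropy-type h_0, the domain Λ = (0, +∞)^d, and all numerical values are assumptions made only for this example, not part of the paper):

```python
# Numerical check of (3.1)-(3.2) for h(x) = h_0(x) + 0.5*||x||^2 (illustration only).
# Assumption: h_0(x) = sum_i (x_i*log x_i - x_i) on Lambda = (0, +inf)^d, so
# grad h(y) = log(y) + y.
import numpy as np

def D_h(x, y):
    """Bregman distance D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>."""
    h = lambda z: np.sum(z * np.log(z) - z) + 0.5 * np.dot(z, z)
    grad_h = lambda z: np.log(z) + z
    return h(x) - h(y) - np.dot(grad_h(y), x - y)

rng = np.random.default_rng(1)
for _ in range(5):
    x, y = rng.uniform(0.1, 5.0, size=3), rng.uniform(0.1, 5.0, size=3)
    assert D_h(x, y) >= 0.5 * np.sum((x - y) ** 2) - 1e-12   # inequality (3.2)
print("D_h(x, y) >= 0.5 * ||x - y||^2 held on all samples")
```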

In this section we still consider the GVI(T, Ω) defined by (2.1). We again fix a sequence {Ω_n} of closed convex subsets of H and two sequences {ε_n}, {λ_n} ⊂ R+ := [0, +∞) satisfying assumptions (A1) and (A2) of Section 2. In addition, assume also that

(A3) ∇h(·) is uniformly continuous on every closed bounded subset of H.

These sequences and h define new approximating problems, which form a general Bregman-function-based approximate proximal point scheme.

Algorithm 3.1. Initialization. Take any initial value x_0 ∈ Ω and Ω_1 ⊃ Ω.

Iterations. For n = 1, 2, …, find x_n ∈ Ω_n ∩ D(T) ∩ Λ, a solution of the n-th approximating problem, defined as follows: for given Ω_n, ε_n and λ_n,

find x_n ∈ Ω_n ∩ D(T) ∩ Λ such that there exists u_n ∈ T(x_n) with

⟨λ_n(∇h(x_{n−1}) − ∇h(x_n) + e_n) − u_n, x_n − x⟩ ≥ −ε_n, ∀ x ∈ Ω_n, (BAP_n)

where {e_n} is an error sequence in H.
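Compared with (AP_n), the only change in the subproblem is that the displacement x_{n−1} − x_n is replaced by ∇h(x_{n−1}) − ∇h(x_n). A minimal sketch of one (BAP_n) step in R^d, under illustrative assumptions analogous to the earlier sketch plus an entropy-type h_0 on the positive orthant (again not the authors' implementation, names and parameter values are mine), follows.

```python
# Minimal sketch of (BAP_n) steps (illustration only).
# Assumptions: H = R^d; Omega_n = Omega is a box inside Lambda = (0, +inf)^d;
# h(x) = sum_i (x_i*log x_i - x_i) + 0.5*||x||^2, so grad h(x) = log(x) + x;
# T single-valued affine and monotone; e_n = 0; each subproblem is solved
# approximately by a projected fixed-point loop.
import numpy as np

grad_h = lambda x: np.log(x) + x

def project_box(x, lo=0.5, hi=3.0):
    """Projection onto the box [lo, hi]^d (kept inside Lambda)."""
    return np.clip(x, lo, hi)

def bregman_subproblem(T, x_prev, lam=1.0, gamma=0.05, inner_iters=300):
    """Approximate x_n in (BAP_n): the fixed point of
    x = P_Omega(x - gamma * (T(x) + lam * (grad_h(x) - grad_h(x_prev))))."""
    x = x_prev.copy()
    for _ in range(inner_iters):
        x = project_box(x - gamma * (T(x) + lam * (grad_h(x) - grad_h(x_prev))))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    M = rng.standard_normal((d, d))
    A = 0.05 * (M @ M.T)                      # positive semidefinite, hence monotone
    b = rng.standard_normal(d)
    T = lambda x: A @ x + b

    x = np.ones(d)                            # x_0 in Omega
    for _ in range(30):                       # outer iterations of Algorithm 3.1
        x = bregman_subproblem(T, x)
    print("natural residual:", np.linalg.norm(x - project_box(x - T(x))))
```

With h(x) = ½‖x‖² (so ∇h = identity) this sketch reduces to the one given after Algorithm 2.1.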

Definition 3.1. Let {Ω_n}, {ε_n} and {λ_n} be as in (A1) and (A2).

(a) A sequence {x_n} is called an h-almost-orbit if x_n solves (BAP_n) for all n.

(b) An h-almost-orbit {x_n} is called asymptotically feasible (AF, for short) if all weak accumulation points of {x_n} belong to Ω.

Next we discuss the convergence of Algorithm 3.1 under the paramonotonicity and pseudomonotonicity assumptions imposed on T. To prove the convergence of Algorithm 3.1, we additionally need the following condition:

(A4) ∇h(·) is sequentially continuous from the weak topology of H to the weak topology of H.

Theorem 3.1. Suppose that assumptions (A1)–(A4) hold and that the sequence {x_n} generated by Algorithm 3.1 is an AF h-almost-orbit. Suppose that

(i) T is paramonotone and pseudomonotone with closed domain;
(ii) S* is nonempty.

If Σ_n ‖e_n‖ < +∞, then {x_n} converges weakly to a solution of GVI(T, Ω).

Proof. From the proof of Theorem 3.1 in [11], we can prove the following conclusions:

(i) For x* ∈ S* as in (A1), there holds

λ_n⟨∇h(x_{n−1}) − ∇h(x_n) + e_n, x_n − x*⟩ ≥ −ε_n, ∀ n.

(ii) For x* ∈ S* as in (A1), there holds

D_h(x*, x_n) ≤ D_h(x*, x_{n−1}) − D_h(x_n, x_{n−1}) + ⟨e_n, x_n − x*⟩ + ε_n/λ_n, ∀ n.

(iii) For x* ∈ S* as in (A1), there exists an integer N_0 ≥ 1 such that for all n ≥ N_0

D_h(x*, x_n) ≤ (1 + β_n) D_h(x*, x_{n−1}) − (1/(1 − ‖e_n‖)) D_h(x_n, x_{n−1}) + β_n,

where β_n = (‖e_n‖ + ε_n/λ_n)/(1 − ‖e_n‖), ∀ n ≥ N_0.

(iv) The following statements hold:

(a) lim_n D_h(x*, x_n) exists for x* ∈ S* as in (A1), and hence {x_n} is bounded;

(b) lim_n D_h(x_n, x_{n−1}) = 0 and hence lim_n ‖x_n − x_{n−1}‖ = 0.
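As in Section 2, the step from (i) to (ii) is a direct computation, this time using the three-point identity that follows from expanding (3.1) (recorded here for the reader's convenience):

```latex
% Three-point identity, obtained by expanding (3.1):
\langle \nabla h(x_{n-1}) - \nabla h(x_n),\, x_n - x^* \rangle
  = D_h(x^*, x_{n-1}) - D_h(x^*, x_n) - D_h(x_n, x_{n-1}).
% Combining this with (i), i.e.
% \lambda_n\langle \nabla h(x_{n-1}) - \nabla h(x_n) + e_n,\, x_n - x^*\rangle \ge -\varepsilon_n,
% and dividing by \lambda_n > 0 gives
D_h(x^*, x_{n-1}) - D_h(x^*, x_n) - D_h(x_n, x_{n-1})
  + \langle e_n,\, x_n - x^*\rangle \;\ge\; -\,\frac{\varepsilon_n}{\lambda_n},
% which rearranges to conclusion (ii).
```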

Next, we shall prove that {x_n} converges weakly to a solution of GVI(T, Ω).

Indeed, we first claim that every weak accumulation point of {x_n} is a solution of GVI(T, Ω). Let x̂ be a weak accumulation point of {x_n}. Then there exists a subsequence {x_{n_j}} weakly convergent to x̂. For each j, x_{n_j} solves (BAP_{n_j}). Thus there exists u_{n_j} ∈ T(x_{n_j}) such that

⟨λ_{n_j}(∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}) − u_{n_j}, x_{n_j} − x⟩ ≥ −ε_{n_j}, ∀ x ∈ Ω_{n_j} and ∀ n_j.

By the condition Ω_{n_j} ⊃ Ω, we have

⟨λ_{n_j}(∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}) − u_{n_j}, x_{n_j} − x⟩ ≥ −ε_{n_j}, ∀ x ∈ Ω and ∀ n_j. (3.3)

Since {x_n} is AF and x̂ ∈ Ω, we have

⟨λ_{n_j}(∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}) − u_{n_j}, x_{n_j} − x̂⟩ ≥ −ε_{n_j}, ∀ n_j.

This implies that

ε_{n_j} + λ_{n_j}⟨∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}, x_{n_j} − x̂⟩ ≥ ⟨u_{n_j}, x_{n_j} − x̂⟩, ∀ n_j.

Note that lim_n ‖x_n − x_{n−1}‖ = 0 and {x_n} is bounded. Thus, by virtue of (A3), we derive lim_n ‖∇h(x_n) − ∇h(x_{n−1})‖ = 0. Now, utilizing (A2), we have

lim sup_j ⟨u_{n_j}, x_{n_j} − x̂⟩ ≤ lim sup_j [λ_{n_j}⟨∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}, x_{n_j} − x̂⟩ + ε_{n_j}]
  = lim sup_j λ_{n_j}[⟨∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}, x_{n_j} − x̂⟩ + ε_{n_j}/λ_{n_j}]
  ≤ lim sup_j M[‖∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}‖ ‖x_{n_j} − x̂‖ + ε_{n_j}/λ_{n_j}]
  ≤ lim sup_j M[(‖∇h(x_{n_j−1}) − ∇h(x_{n_j})‖ + ‖e_{n_j}‖) ‖x_{n_j} − x̂‖ + ε_{n_j}/λ_{n_j}]
  = 0.

Take any x̄ ∈ S*. By the pseudomonotonicity of T, we conclude that there exists û ∈ T(x̂) such that

lim inf_j ⟨u_{n_j}, x_{n_j} − x̄⟩ ≥ ⟨û, x̂ − x̄⟩.

Since x̄ lies in Ω, from (3.3) we conclude that

lim inf_j ⟨u_{n_j}, x_{n_j} − x̄⟩ ≤ lim inf_j [λ_{n_j}⟨∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}, x_{n_j} − x̄⟩ + ε_{n_j}]
  ≤ lim sup_j [λ_{n_j}⟨∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}, x_{n_j} − x̄⟩ + ε_{n_j}]
  = lim sup_j λ_{n_j}[⟨∇h(x_{n_j−1}) − ∇h(x_{n_j}) + e_{n_j}, x_{n_j} − x̄⟩ + ε_{n_j}/λ_{n_j}]
  ≤ lim sup_j M[(‖∇h(x_{n_j−1}) − ∇h(x_{n_j})‖ + ‖e_{n_j}‖) ‖x_{n_j} − x̄‖ + ε_{n_j}/λ_{n_j}]
  = 0.

Combining the last two inequalities, we infer that

⟨û, x̂ − x̄⟩ ≤ 0.

Again taking into account the paramonotonicity of T and Iusem [12, Proposition 4], we deduce that x̂ is a solution of GVI(T, Ω).

On the other hand, suppose that x̂ and x̃ are any two weak accumulation points of {x_n} and that two subsequences {x_{n_i}} and {x_{m_j}} of {x_n} converge weakly to x̂ and x̃, respectively. Then both x̂ and x̃ belong to S*. Thus, by conclusion (iv)(a), we know that both lim_n D_h(x̂, x_n) and lim_n D_h(x̃, x_n) exist, that is, there exist l̂, l̃ ∈ R+ such that

lim_n D_h(x̂, x_n) = l̂ and lim_n D_h(x̃, x_n) = l̃. (3.4)

By the definition (3.1) of D_h (the three-point identity),

D_h(x̂, x_n) = D_h(x̃, x_n) + ⟨∇h(x_n) − ∇h(x̃), x̃ − x̂⟩ + D_h(x̂, x̃).

From (3.4), we have

lim_n ⟨∇h(x_n) − ∇h(x̃), x̃ − x̂⟩ = l̂ − l̃ − D_h(x̂, x̃). (3.5)

The left-hand side of (3.5) vanishes, since x̃ is a weak cluster point of {x_n} and ∇h(·) is sequentially continuous from the weak topology of H to the weak topology of H by (A4). So we have

l̂ − l̃ = D_h(x̂, x̃). (3.6)

Reversing the roles of x̂ and x̃, a similar reasoning leads to l̃ − l̂ = D_h(x̃, x̂), which, combined with (3.6), yields D_h(x̂, x̃) + D_h(x̃, x̂) = 0, i.e. D_h(x̂, x̃) = D_h(x̃, x̂) = 0, and hence x̃ = x̂, establishing the uniqueness of the weak cluster point of {x_n}. It follows that {x_n} converges weakly to a solution of GVI(T, Ω). □

Acknowledgements

The first author’s research was partially supported by the Teaching and Research Award Fund for Outstanding Young Teachers in Higher Education Institutions of MOE, China, and by the Dawn Programme Foundation in Shanghai. The third author’s research was partially supported by a grant from the National Science Council of Taiwan.

References

[1] R.E. Bruck, An iterative solution of a variational inequality for certain monotone operators in Hilbert space, Bulletin of the American Mathematical Society 81 (1975) 890–892; Corrigendum, 82 (1976) 353.
[2] S.C. Fang, E.L. Peterson, Generalized variational inequalities, Journal of Optimization Theory and Applications 38 (1982) 363–383.
[3] J.C. Yao, Multi-valued variational inequalities with K-pseudomonotone operators, Journal of Optimization Theory and Applications 83 (1994) 391–403.
[4] J.S. Guo, J.C. Yao, Variational inequalities with nonmonotone operators, Journal of Optimization Theory and Applications 80 (1994) 63–74.
[5] J.C. Yao, J.S. Guo, Variational and generalized variational inequalities with discontinuous mappings, Journal of Mathematical Analysis and Applications 182 (1994) 371–392.
[6] G. Stampacchia, Variational inequalities, in: A. Ghizzetti (Ed.), Theory and Applications of Monotone Operators, Edizioni Oderisi, Gubbio, Italy, 1969, pp. 101–192.
[7] P. Hartman, G. Stampacchia, On some nonlinear elliptic differential functional equations, Acta Mathematica 115 (1966) 271–310.
[8] J.C. Yao, Variational inequality, Applied Mathematics Letters 5 (1992) 39–42.
[9] J.C. Yao, Variational inequalities with generalized monotone operators, Mathematics of Operations Research 19 (1994) 691–705.
[10] R.S. Burachik, J.O. Lopes, B.F. Svaiter, An outer approximation method for the variational inequality problem, SIAM Journal on Control and Optimization 43 (2005) 2071–2088.
[11] L.C. Ceng, J.C. Yao, Approximate proximal algorithms for generalized variational inequalities with pseudomonotone multifunctions, Journal of Computational and Applied Mathematics (2007), doi:10.1016/j.cam.2007.01.034.
[12] A.N. Iusem, On some properties of paramonotone operators, Journal of Convex Analysis 5 (1998) 269–278.
[13] F.E. Browder, Nonlinear operators and nonlinear equations of evolution in Banach spaces, in: Nonlinear Functional Analysis, AMS, Providence, RI, 1976, pp. 1–308.
[14] M.V. Solodov, B.F. Svaiter, An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions, Mathematics of Operations Research 25 (2000) 214–230.
