
Contents lists available at ScienceDirect

Journal of Computational and Applied Mathematics

journal homepage: www.elsevier.com/locate/cam

Numerical comparisons of two effective methods for mixed complementarity problems

Jein-Shan Chen a,∗,1, Shaohua Pan b, Ching-Yu Yang a

a Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

b School of Mathematical Sciences, South China University of Technology, Guangzhou 510640, China

Article info

Article history:

Received 25 March 2009

Received in revised form 4 January 2010

Keywords:

MCP

The generalized FB function

Semismooth

Convergence rate

Abstract

Recently, two different effective methods were proposed by Kanzow et al. in (Kanzow, 2001 [8]) and (Kanzow and Petra, 2004 [9]), respectively; both use the Fischer–Burmeister (FB) function to recast the mixed complementarity problem (MCP), as a constrained minimization problem and as a nonlinear system of equations, respectively. Both papers remark that their algorithms may be improved if the FB function is replaced by other NCP functions. Accordingly, in this paper we employ the generalized Fischer–Burmeister (GFB) function, in which the 2-norm in the FB function is relaxed to a general p-norm (p > 1), within the two methods, and we investigate how much improvement is gained by varying the parameter p, as well as which method is influenced more by this change, via performance profiles of iterations and function evaluations for the two methods with different p on the MCPLIB collection.

© 2010 Elsevier B.V. All rights reserved.

1. Introduction

The mixed complementarity problem (MCP) arises in many applications, including economics, engineering, and operations research [1–4], and has attracted much attention in the past decade [5–10]. A collection of nonlinear mixed complementarity problems called MCPLIB can be found in [11], and the excellent book [12] is a good source of theoretical background and numerical methods for the MCP.

Given a mapping $F : [l, u] \to \mathbb{R}^n$ with $F = (F_1, \ldots, F_n)^T$, where $l = (l_1, \ldots, l_n)^T$ and $u = (u_1, \ldots, u_n)^T$ with $l_i \in \mathbb{R} \cup \{-\infty\}$, $u_i \in \mathbb{R} \cup \{+\infty\}$ and $l_i < u_i$ for $i = 1, 2, \ldots, n$, the MCP is to find a vector $x^* \in [l, u]$ such that each component $x_i^*$ satisfies exactly one of the following implications:
\[
x_i^* = l_i \;\Longrightarrow\; F_i(x^*) \ge 0, \qquad
x_i^* \in (l_i, u_i) \;\Longrightarrow\; F_i(x^*) = 0, \qquad
x_i^* = u_i \;\Longrightarrow\; F_i(x^*) \le 0.
\tag{1}
\]
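To make the componentwise conditions in (1) concrete, here is a minimal sketch (not from the paper; the function name, tolerance, and example mapping are our own choices) that checks whether a candidate vector satisfies the appropriate implication in every component, with infinite bounds handled through NumPy's `inf`.

```python
import numpy as np

def satisfies_mcp(x, F, l, u, tol=1e-8):
    """Check the componentwise implications in (1) for a candidate solution x."""
    Fx = F(x)
    at_lower = np.abs(x - l) <= tol                 # x_i = l_i (never true when l_i = -inf)
    at_upper = np.abs(x - u) <= tol                 # x_i = u_i (never true when u_i = +inf)
    ok = np.where(at_lower, Fx >= -tol,             # x_i = l_i        =>  F_i(x) >= 0
         np.where(at_upper, Fx <= tol,              # x_i = u_i        =>  F_i(x) <= 0
                  np.abs(Fx) <= tol))               # l_i < x_i < u_i  =>  F_i(x) = 0
    return bool(np.all(ok))

# Hypothetical example: with l = 0 and u = +inf (the NCP special case discussed below),
# x = (0, 1) satisfies the conditions for this F.
F = lambda x: np.array([x[0] + 1.0, x[1] - 1.0])
print(satisfies_mcp(np.array([0.0, 1.0]), F, np.zeros(2), np.full(2, np.inf)))   # True
```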

It is easy to see that, when $l_i = -\infty$ and $u_i = +\infty$ for all $i = 1, 2, \ldots, n$, MCP (1) is equivalent to solving the nonlinear system of equations
\[
F(x) = 0;
\tag{2}
\]

∗ Corresponding author. Tel.: +886 2 29325417; fax: +886 2 29332342.

E-mail addresses: jschen@math.ntnu.edu.tw (J.-S. Chen), shhpan@scut.edu.cn (S. Pan), yangcy@abel.math.ntnu.edu.tw (C.-Y. Yang).

1 Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office, Taiwan.

0377-0427/$ – see front matter © 2010 Elsevier B.V. All rights reserved.

doi:10.1016/j.cam.2010.01.004


when $l_i = 0$ and $u_i = +\infty$ for all $i = 1, 2, \ldots, n$, it reduces to the nonlinear complementarity problem (NCP), which is to find a point $x \in \mathbb{R}^n$ such that
\[
x \ge 0, \qquad F(x) \ge 0, \qquad \langle x, F(x) \rangle = 0.
\tag{3}
\]

In fact, from Theorem 2 of [13], MCP (1) itself is equivalent to the famous variational inequality problem (VIP), which is to find a vector $x^* \in [l, u]$ such that
\[
\langle F(x^*),\, x - x^* \rangle \ge 0 \qquad \forall x \in [l, u].
\tag{4}
\]

Unless otherwise stated, the mapping F is assumed to be continuously differentiable.

Many methods have been proposed for the solution of MCP (1), among which two effective methods have attracted much attention recently: the strictly feasible equation-based methods [6–8] and the semismooth Levenberg–Marquardt methods [9,10]. Some other variants of these methods can be found in [14–16]. The idea behind both of the aforementioned methods is to reformulate (1) as a constrained minimization problem or a nonsmooth system of equations by using the Fischer–Burmeister function

\[
\phi_{\mathrm{FB}}(a, b) := \sqrt{a^2 + b^2} - (a + b) \qquad \forall a, b \in \mathbb{R}.
\tag{5}
\]

The strictly feasible Newton-type method was considered in [8] to overcome drawbacks of some typical solution methods for the MCP (see e.g. [7]); for example, such methods either generate feasible iterates but have to solve relatively complicated subproblems, or have simple subproblems but do not necessarily generate feasible iterates. On the other hand, the semismooth Levenberg–Marquardt method was proposed in [9] to overcome some drawbacks of equation-based methods using the FB function. This method has the advantages that gradient steps are not needed to obtain global convergence and that it is more robust than the equation-based methods built on the FB function.

Recently, an extension of the FB function was considered in [17–19] by two of the authors. Specifically, they define the generalized Fischer–Burmeister (GFB) function by

\[
\phi_p(a, b) := \|(a, b)\|_p - (a + b) \qquad \forall a, b \in \mathbb{R},
\tag{6}
\]

where $p$ is an arbitrary fixed real number from the interval $(1, +\infty)$ and $\|(a, b)\|_p$ denotes the $p$-norm of $(a, b)$, i.e., $\|(a, b)\|_p = \sqrt[p]{|a|^p + |b|^p}$. In other words, in the function $\phi_p$ they replace the 2-norm of $(a, b)$ involved in the FB function by a more general $p$-norm. The function $\phi_p$ is still an NCP-function, that is, it satisfies the equivalence
\[
\phi_p(a, b) = 0 \;\Longleftrightarrow\; a \ge 0, \;\; b \ge 0, \;\; ab = 0.
\tag{7}
\]
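As a quick illustration of (6) and of the equivalence (7), the following sketch (the helper name `phi_p` and the sample values are ours) evaluates the generalized FB function for one fixed $p$ and spot-checks that it vanishes exactly on the complementarity set.

```python
import numpy as np

def phi_p(a, b, p=3.0):
    """Generalized FB function (6); p = 2 recovers the FB function (5)."""
    return (abs(a)**p + abs(b)**p)**(1.0 / p) - (a + b)

# (7): phi_p is (numerically) zero on the set {a >= 0, b >= 0, ab = 0} ...
print(phi_p(0.0, 2.5), phi_p(1.7, 0.0))        # both ~0 up to rounding
# ... and nonzero off that set:
print(phi_p(-1.0, 2.0), phi_p(1.0, 1.0))       # e.g. about 1.08 and -0.74
```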

For any given $p > 1$, the function $\phi_p$ was shown to possess all favorable properties of $\phi_{\mathrm{FB}}$; see [17–19]. For example, its square is continuously differentiable everywhere on $\mathbb{R}^2$.

In this paper, we follow the ideas used in the aforementioned two effective methods to solve MCP (1), whose solution may not be unique. For each method, we design a similar algorithm in which the GFB function is involved. We present their convergence results, although these results are analogous to those for the case where $\phi_{\mathrm{FB}}$ is used; in fact, these convergence results are not hard to obtain since $\phi_{\mathrm{FB}}$ and $\phi_p$ share almost the same favorable properties. However, the focus of this paper is on the numerical side, as the title indicates. We apply the two methods to all MCPLIB test problems, and observe and analyze their numerical results. Furthermore, using the notion of performance profile introduced in [20], we plot the performance profile figures of iterations and function evaluations, respectively, for the two algorithms with four values of $p$. The performance profiles clearly and objectively reflect the influence of $p$ on these two methods. Comparing Figs. 1–2 with Figs. 3–4, we see that the value of $p$ has much more influence on the strictly feasible semismooth algorithm than on the semismooth Levenberg–Marquardt algorithm. A larger $p$ (for example, over $10^3$) or a smaller $p$ (for example, in $(1, 1.001]$) leads to worse performance of the strictly feasible semismooth algorithm, whereas a small $p$ (for example, $p = 1.001$) brings good performance to the semismooth Levenberg–Marquardt algorithm.
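For readers unfamiliar with performance profiles, the sketch below (our own minimal tabulation, not the authors' code, with hypothetical iteration counts) shows how such a profile can be computed from a cost table: for each solver, $\rho(\tau)$ is the fraction of problems whose cost is within a factor $\tau$ of the best cost on that problem, and failures can be encoded as infinite cost.

```python
import numpy as np

def performance_profile(T, taus):
    """T[i, j]: cost of solver j on problem i (np.inf marks a failure)."""
    best = T.min(axis=1, keepdims=True)        # best cost on each problem
    ratios = T / best                          # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, j] <= tau) for j in range(T.shape[1])]
                     for tau in taus])         # one row per tau, one column per solver

# Hypothetical iteration counts: 4 problems, 2 solvers (e.g. two values of p).
T = np.array([[12.0, 15.0],
              [30.0, 22.0],
              [np.inf, 40.0],
              [8.0, 8.0]])
print(performance_profile(T, taus=[1.0, 2.0, 5.0]))
```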

Throughout this paper, $\mathbb{R}^n$ denotes the space of $n$-dimensional real column vectors with the usual Euclidean inner product $\langle \cdot, \cdot \rangle$. For every differentiable function $f : \mathbb{R}^n \to \mathbb{R}$, $\nabla f(x)$ denotes the gradient of $f$ at $x$, and for every differentiable mapping $F$, $\nabla F(x)$ denotes the transposed Jacobian of $F$ at $x$. For a vector $x \in \mathbb{R}^n$, the notation $[x]_+$ means the projection of $x$ onto $[l, u]$, whereas for a scalar $s$, $(s)_+$ means the projection of $s$ onto $\mathbb{R}_+$, i.e., $(s)_+ = \max\{0, s\}$. We denote by $\|x\|_p$ the $p$-norm of $x$ and by $\|x\|$ the Euclidean norm of $x$.

2. Preliminaries

In this section, we review some basic concepts that will be used in the subsequent analysis. First, we introduce the concept of the generalized Jacobian of a mapping. Let $G : \mathbb{R}^n \to \mathbb{R}^m$ be a locally Lipschitz continuous mapping. Then, $G$ is almost everywhere differentiable by Rademacher's Theorem (see [21]). In this case, the generalized Jacobian $\partial G(x)$ of $G$ at $x$ (in the Clarke sense) is defined as the convex hull of the B-subdifferential
\[
\partial_B G(x) := \left\{ V \in \mathbb{R}^{m \times n} \;\middle|\; \exists \{x^k\} \subseteq D_G : \{x^k\} \to x \ \text{and} \ G'(x^k) \to V \right\},
\]


where $D_G$ is the set of differentiable points of $G$. In other words, $\partial G(x) = \mathrm{conv}\, \partial_B G(x)$. If $m = 1$, we call $\partial G(x)$ the generalized gradient of $G$ at $x$. The calculation of $\partial G(x)$ is usually difficult in practice, and Qi [22] proposed the so-called C-subdifferential of $G$:
\[
\partial_C G(x)^T := \partial G_1(x) \times \cdots \times \partial G_m(x),
\tag{8}
\]
which is easier to compute than the generalized Jacobian $\partial G(x)$. Here, the right-hand side of (8) denotes the set of matrices in $\mathbb{R}^{n \times m}$ whose $i$th column is given by the generalized gradient of the $i$th component function $G_i$. By Proposition 2.6.2 of [21],
\[
\partial G(x)^T \subseteq \partial_C G(x)^T.
\tag{9}
\]

We next introduce the definition of a (strongly) semismooth function. The semismooth property is very important from a computational point of view. In particular, it plays a fundamental role in the superlinear convergence analysis of generalized Newton methods [23–25]. Assume that $G : \mathbb{R}^n \to \mathbb{R}^m$ is locally Lipschitz continuous. $G$ is called semismooth at $x$ if $G$ is directionally differentiable at $x$ and, for any $V \in \partial G(x + h)$ and $h \to 0$,
\[
G(x + h) - G(x) - Vh = o(\|h\|);
\tag{10}
\]
$G$ is called strongly semismooth at $x$ if $G$ is semismooth at $x$ and, for any $V \in \partial G(x + h)$ and $h \to 0$,
\[
G(x + h) - G(x) - Vh = O(\|h\|^2);
\tag{11}
\]

G is called a (strongly) semismooth function if it is (strongly) semismooth everywhere.
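The following small numerical experiment illustrates definition (11) on a generic piecewise-smooth example of our own choosing (not taken from the paper): for $G(x) = |x^3 - x|$, which is strongly semismooth, the residual $G(x+h) - G(x) - Vh$ with $V = G'(x+h)$ behaves like $O(\|h\|^2)$ near the kink at $x = 1$.

```python
import numpy as np

G  = lambda x: abs(x**3 - x)
dG = lambda x: np.sign(x**3 - x) * (3 * x**2 - 1)    # derivative wherever x^3 - x != 0

x = 1.0                                              # kink of G
for h in [1e-2, 1e-3, 1e-4]:
    V = dG(x + h)                                    # an element of the generalized gradient at x + h
    print(h, (G(x + h) - G(x) - V * h) / h**2)       # stays bounded (about -3), i.e. O(||h||^2)
```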

The following lemma lists some properties of $\phi_p$, whose proofs can be found in [17–19]. These results are the cornerstones for establishing the properties of $\Phi_p$ and $\bar{\Phi}_p$ in what follows.

Lemma 2.1. Let $\phi_p : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be defined as in (6). Then, the following results hold.

(a) $\phi_p$ is a strongly semismooth NCP-function.

(b) $\phi_p$ is Lipschitz continuous with Lipschitz constant $L = \sqrt{2} + 2^{1/p - 1/2}$ when $1 < p < 2$, and $L = 1 + \sqrt{2}$ when $p \ge 2$.

(c) Given any point $(a, b) \in \mathbb{R}^2$, each element in the generalized gradient $\partial \phi_p(a, b)$ has the representation $(\xi - 1, \zeta - 1)$, where, if $(a, b) \ne (0, 0)$,
\[
\xi = \frac{\mathrm{sign}(a) \cdot |a|^{p-1}}{\|(a, b)\|_p^{p-1}}
\quad \text{and} \quad
\zeta = \frac{\mathrm{sign}(b) \cdot |b|^{p-1}}{\|(a, b)\|_p^{p-1}},
\]
with $\mathrm{sign}(\cdot)$ denoting the sign function, and otherwise $(\xi, \zeta) \in \mathbb{R}^2$ denotes an arbitrary vector satisfying $|\xi|^{\frac{p}{p-1}} + |\zeta|^{\frac{p}{p-1}} \le 1$.

(d) For any $a, b \in \mathbb{R}$ and $p > 1$, there holds that
\[
\big(2 - 2^{1/p}\big)\, |\min\{a, b\}| \;\le\; |\phi_p(a, b)| \;\le\; \big(2 + 2^{1/p}\big)\, |\min\{a, b\}|.
\tag{12}
\]

(e) The square of $\phi_p$ is a continuously differentiable NCP-function.
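The next sketch (helper names are ours) numerically spot-checks Lemma 2.1: the gradient representation of part (c) is compared with central finite differences at a point where $\phi_p$ is smooth, and the two-sided bound (12) of part (d) is verified at the same point.

```python
import numpy as np

p = 3.0
phi_p = lambda a, b: (abs(a)**p + abs(b)**p)**(1.0 / p) - (a + b)

def grad_phi_p(a, b):                                  # Lemma 2.1(c) away from the origin
    norm_p = (abs(a)**p + abs(b)**p)**(1.0 / p)
    xi   = np.sign(a) * abs(a)**(p - 1) / norm_p**(p - 1)
    zeta = np.sign(b) * abs(b)**(p - 1) / norm_p**(p - 1)
    return np.array([xi - 1.0, zeta - 1.0])

a, b, eps = 1.3, -0.7, 1e-6
fd = np.array([(phi_p(a + eps, b) - phi_p(a - eps, b)) / (2 * eps),
               (phi_p(a, b + eps) - phi_p(a, b - eps)) / (2 * eps)])
print(np.allclose(grad_phi_p(a, b), fd, atol=1e-6))    # True: matches part (c)

m = abs(min(a, b))                                     # the bound (12) of part (d)
print((2 - 2**(1 / p)) * m <= abs(phi_p(a, b)) <= (2 + 2**(1 / p)) * m)   # True
```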

The following lemma establishes another property of $\phi_p$, which plays a key role in the nonsmooth system reformulation of MCP (1) with the generalized FB function.

Lemma 2.2. Let $\phi_p : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be defined by (6). Then, the following limits hold.

(a) $\lim_{l_i \to -\infty} \phi_p\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big) = -\phi_p(u_i - x_i, -F_i(x))$.

(b) $\lim_{u_i \to \infty} \phi_p\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big) = \phi_p(x_i - l_i, F_i(x))$.

(c) $\lim_{l_i \to -\infty} \lim_{u_i \to \infty} \phi_p\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big) = -F_i(x)$.

Proof. Let $\{a^k\} \subseteq \mathbb{R}$ be any sequence converging to $+\infty$ as $k \to \infty$ and let $b \in \mathbb{R}$ be any fixed number. We will prove $\lim_{k \to \infty} \phi_p(a^k, b) = -b$, and part (a) then follows by continuity arguments. Without loss of generality, assume that $a^k > 0$ for each $k$. Then,
\[
\begin{aligned}
\phi_p(a^k, b)
&= \big( (a^k)^p + |b|^p \big)^{1/p} - (a^k + b)
 = a^k \Big( 1 + \big( |b|/a^k \big)^p \Big)^{1/p} - a^k - b \\
&= a^k \left[ 1 + \frac{1}{p} \Big( \frac{|b|}{a^k} \Big)^{p}
   + \frac{1-p}{2p^2} \Big( \frac{|b|}{a^k} \Big)^{2p}
   + \cdots
   + \frac{(1-p)\cdots(1-pn+p)}{n!\, p^n} \Big( \frac{|b|}{a^k} \Big)^{np}
   + o\Big( \big( |b|/a^k \big)^{pn} \Big) \right] - a^k - b \\
&= \frac{1}{p} \frac{|b|^p}{(a^k)^{p-1}}
   + \frac{1-p}{2p^2} \frac{|b|^{2p}}{(a^k)^{2p-1}}
   + \cdots
   + \frac{(1-p)\cdots(1-pn+p)}{n!\, p^n} \frac{|b|^{np}}{(a^k)^{np-1}}
   + a^k \Big( \frac{|b|}{a^k} \Big)^{np} \frac{o\big( (|b|/a^k)^{pn} \big)}{(|b|/a^k)^{pn}}
   - b,
\end{aligned}
\]
where the third equality uses the Taylor expansion of the function $(1 + t)^{1/p}$ and the notation $o(t)$ means $\lim_{t \to 0} o(t)/t = 0$. Since $a^k \to +\infty$ as $k \to \infty$, we have $|b|^{np}/(a^k)^{np-1} \to 0$ for all $n$. This, together with the last equation, implies $\lim_{k \to \infty} \phi_p(a^k, b) = -b$. This proves part (a). Parts (b) and (c) are direct consequences of part (a) and the continuity of $\phi_p$. $\square$
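The mechanism behind the proof and behind parts (a)–(c) of Lemma 2.2, namely that $\phi_p(a, b) \to -b$ as $a \to +\infty$, can be observed numerically; a tiny sketch with our own parameter choices:

```python
import numpy as np

p = 4.0
phi_p = lambda a, b: (abs(a)**p + abs(b)**p)**(1.0 / p) - (a + b)

b = 2.3
for a in [1e2, 1e4, 1e6]:
    print(a, phi_p(a, b) + b)      # tends to 0, i.e. phi_p(a, b) -> -b as a -> +infinity
```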

To conclude this section, we present a lemma which will be used in the subsequent analysis.

Lemma 2.3 ([7, Proposition 6]). For all negative definite diagonal matrices $D_a, D_b \in \mathbb{R}^{n \times n}$, a matrix of the form $D_a + D_b M$ is nonsingular if and only if $M \in \mathbb{R}^{n \times n}$ is a $P_0$-matrix.
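Lemma 2.3 can be spot-checked numerically (a random test, not a proof; the construction of $M$ as a positive semidefinite matrix, which is in particular a $P_0$-matrix, is our own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
M = B.T @ B                                      # positive semidefinite, hence a P0-matrix
Da = -np.diag(rng.uniform(0.1, 1.0, size=n))     # negative definite diagonal matrices
Db = -np.diag(rng.uniform(0.1, 1.0, size=n))
print(abs(np.linalg.det(Da + Db @ M)) > 1e-10)   # True, in line with Lemma 2.3
```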

3. Strictly feasible Newton-type method

For convenience, in the rest of this paper we adopt the following notation for index sets:
\[
\begin{aligned}
I_l    &:= \{\, i \in \{1, 2, \ldots, n\} \mid -\infty < l_i < u_i = +\infty \,\},\\
I_u    &:= \{\, i \in \{1, 2, \ldots, n\} \mid -\infty = l_i < u_i < +\infty \,\},\\
I_{lu} &:= \{\, i \in \{1, 2, \ldots, n\} \mid -\infty < l_i < u_i < +\infty \,\},\\
I_f    &:= \{\, i \in \{1, 2, \ldots, n\} \mid -\infty = l_i < u_i = +\infty \,\}.
\end{aligned}
\tag{13}
\]
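In code, the classification (13) amounts to testing which bounds are finite; a minimal sketch (the sample bounds are arbitrary):

```python
import numpy as np

l = np.array([0.0, -np.inf, -1.0, -np.inf])
u = np.array([np.inf, 2.0, 1.0, np.inf])

finite_l, finite_u = np.isfinite(l), np.isfinite(u)
I_l  = np.where(finite_l & ~finite_u)[0]     # finite lower bound only
I_u  = np.where(~finite_l & finite_u)[0]     # finite upper bound only
I_lu = np.where(finite_l & finite_u)[0]      # both bounds finite
I_f  = np.where(~finite_l & ~finite_u)[0]    # free components
print(I_l, I_u, I_lu, I_f)                   # [0] [1] [2] [3]
```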

With the generalized FB function, we define an operator $\Phi_p : \mathbb{R}^n \to \mathbb{R}^n$ componentwise as
\[
\Phi_{p,i}(x) :=
\begin{cases}
\phi_p(x_i - l_i,\, F_i(x)) & \text{if } i \in I_l,\\
-\phi_p(u_i - x_i,\, -F_i(x)) & \text{if } i \in I_u,\\
\phi_p\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big) & \text{if } i \in I_{lu},\\
-F_i(x) & \text{if } i \in I_f,
\end{cases}
\tag{14}
\]

where the minus sign for $i \in I_u$ and $i \in I_f$ is motivated by Lemma 2.2. In fact, all results of this paper would be true without the minus sign. Using the equivalence (7), it is easily verified that a vector $x^* \in \mathbb{R}^n$ solves (1) if and only if $x^*$ is a solution of the nonlinear system of equations $\Phi_p(x) = 0$. This means that the squared norm of $\Phi_p$ induces a family of merit functions for (1), in the sense that the solution of (1) is equivalent to finding a minimizer of the unconstrained minimization problem
\[
\min_{x \in \mathbb{R}^n} \Psi_p(x) := \frac{1}{2} \|\Phi_p(x)\|^2,
\tag{15}
\]
with the corresponding objective value equal to 0. In this section, we study the strictly feasible Newton-type method based on the constrained nonlinear system of equations
\[
\Phi_p(x) = 0, \qquad x \in [l, u],
\tag{16}
\]
and globalized by the projected gradient-type method for the constrained minimization
\[
\min_{x \in [l, u]} \Psi_p(x).
\tag{17}
\]
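The following self-contained sketch (all function names are ours, not the authors') assembles the pieces introduced so far: the operator $\Phi_p$ from (14), the merit function $\Psi_p$ from (15), and the box projection $[\cdot]_+$ used in (16)–(17), together with a one-dimensional example whose merit value vanishes at the solution.

```python
import numpy as np

def phi_p(a, b, p):
    return (abs(a)**p + abs(b)**p)**(1.0 / p) - (a + b)

def Phi_p(x, Fx, l, u, p=3.0):
    """Componentwise operator (14); each index is classified as in (13)."""
    out = np.empty_like(x)
    for i in range(x.size):
        fl, fu = np.isfinite(l[i]), np.isfinite(u[i])
        if fl and not fu:                      # i in I_l
            out[i] = phi_p(x[i] - l[i], Fx[i], p)
        elif fu and not fl:                    # i in I_u
            out[i] = -phi_p(u[i] - x[i], -Fx[i], p)
        elif fl and fu:                        # i in I_lu
            out[i] = phi_p(x[i] - l[i], phi_p(u[i] - x[i], -Fx[i], p), p)
        else:                                  # i in I_f
            out[i] = -Fx[i]
    return out

def Psi_p(x, Fx, l, u, p=3.0):
    r = Phi_p(x, Fx, l, u, p)
    return 0.5 * np.dot(r, r)                  # merit function (15)

def project_box(x, l, u):
    return np.minimum(np.maximum(x, l), u)     # [x]_+, the projection onto [l, u]

# Tiny illustration: for F(x) = x - 1 with l = 0, u = +inf, x* = 1 solves the MCP,
# so the merit function vanishes there; the projection maps infeasible points into [l, u].
F = lambda x: x - 1.0
x = np.array([1.0]); l = np.array([0.0]); u = np.array([np.inf])
print(Psi_p(x, F(x), l, u))                    # 0.0
print(project_box(np.array([-0.5]), l, u))     # [0.]
```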

Before describing the specific iterative schemes, we present a few nice properties of the mapping $\Phi_p$ and the merit function $\Psi_p$ that will be used in the subsequent analysis.

3.1. Properties of $\Phi_p$ and $\Psi_p$

The following proposition states the smoothness of $\Psi_p$ and the semismoothness of $\Phi_p$, which follow directly from Lemma 2.1(a) and (e) and Theorem 19 of [26].

Proposition 3.1. Let $\Phi_p$ and $\Psi_p$ be defined as in (14) and (15), respectively. Then,

(a) the mapping $\Phi_p$ is semismooth, and moreover, it is strongly semismooth if $F'$ is locally Lipschitz continuous;

(b) the function $\Psi_p$ is continuously differentiable everywhere.

The following technical lemma gives an expression for each element in the generalized Jacobian of $\Phi_p$ at any point $x$, which plays an important role in the subsequent analysis.

Lemma 3.1. For any given $x \in \mathbb{R}^n$, we have $\partial \Phi_p(x)^T \subseteq \{ D_a(x) + \nabla F(x) D_b(x) \}$, where $D_a(x), D_b(x) \in \mathbb{R}^{n \times n}$ are diagonal matrices whose diagonal elements are defined below:

(a) For $i \in I_l$, if $(x_i - l_i, F_i(x)) \ne (0, 0)$, then
\[
(D_a)_{ii}(x) = \frac{\mathrm{sign}(x_i - l_i)\, |x_i - l_i|^{p-1}}{\|(x_i - l_i,\, F_i(x))\|_p^{p-1}} - 1,
\qquad
(D_b)_{ii}(x) = \frac{\mathrm{sign}(F_i(x))\, |F_i(x)|^{p-1}}{\|(x_i - l_i,\, F_i(x))\|_p^{p-1}} - 1,
\tag{18}
\]
and otherwise
\[
\big((D_a)_{ii}(x), (D_b)_{ii}(x)\big) \in \Big\{ (\xi - 1, \zeta - 1) \in \mathbb{R}^2 \;\Big|\; |\xi|^{\frac{p}{p-1}} + |\zeta|^{\frac{p}{p-1}} \le 1 \Big\}.
\tag{19}
\]


(b) For $i \in I_u$, if $(u_i - x_i, -F_i(x)) \ne (0, 0)$, then
\[
(D_a)_{ii}(x) = \frac{\mathrm{sign}(u_i - x_i)\, |u_i - x_i|^{p-1}}{\|(u_i - x_i,\, -F_i(x))\|_p^{p-1}} - 1,
\qquad
(D_b)_{ii}(x) = -\frac{\mathrm{sign}(F_i(x))\, |F_i(x)|^{p-1}}{\|(u_i - x_i,\, -F_i(x))\|_p^{p-1}} - 1,
\]
and otherwise
\[
\big((D_a)_{ii}(x), (D_b)_{ii}(x)\big) \in \Big\{ (\xi - 1, \zeta - 1) \in \mathbb{R}^2 \;\Big|\; |\xi|^{\frac{p}{p-1}} + |\zeta|^{\frac{p}{p-1}} \le 1 \Big\}.
\]

(c) For $i \in I_{lu}$,
\[
(D_a)_{ii}(x) = a_i(x) + b_i(x)\, c_i(x)
\qquad \text{and} \qquad
(D_b)_{ii}(x) = b_i(x)\, d_i(x),
\]
where, if $\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big) \ne (0, 0)$, then
\[
a_i(x) = \frac{\mathrm{sign}(x_i - l_i) \cdot |x_i - l_i|^{p-1}}{\big\|\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big)\big\|_p^{p-1}} - 1,
\qquad
b_i(x) = \frac{\mathrm{sign}\big(\phi_p(u_i - x_i, -F_i(x))\big) \cdot \big|\phi_p(u_i - x_i, -F_i(x))\big|^{p-1}}{\big\|\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big)\big\|_p^{p-1}} - 1,
\]
and otherwise
\[
(a_i(x), b_i(x)) \in \Big\{ (\xi - 1, \zeta - 1) \in \mathbb{R}^2 \;\Big|\; |\xi|^{\frac{p}{p-1}} + |\zeta|^{\frac{p}{p-1}} \le 1 \Big\};
\]
and if $(u_i - x_i, -F_i(x)) \ne (0, 0)$, then
\[
c_i(x) = -\frac{\mathrm{sign}(u_i - x_i) \cdot |u_i - x_i|^{p-1}}{\|(u_i - x_i,\, -F_i(x))\|_p^{p-1}} + 1,
\qquad
d_i(x) = -\frac{\mathrm{sign}(F_i(x)) \cdot |F_i(x)|^{p-1}}{\|(u_i - x_i,\, -F_i(x))\|_p^{p-1}} + 1,
\]
and otherwise
\[
(c_i(x), d_i(x)) \in \Big\{ (\xi + 1, \zeta + 1) \in \mathbb{R}^2 \;\Big|\; |\xi|^{\frac{p}{p-1}} + |\zeta|^{\frac{p}{p-1}} \le 1 \Big\}.
\]

(d) For $i \in I_f$, $(D_a)_{ii}(x) = 0$ and $(D_b)_{ii}(x) = -1$.

Proof. Let $\Phi_p(x) := \big((\Phi_p)_1(x), (\Phi_p)_2(x), \ldots, (\Phi_p)_n(x)\big)^T$. Then, from (8) and (9),
\[
\partial \Phi_p(x)^T \subseteq \partial (\Phi_p)_1(x) \times \partial (\Phi_p)_2(x) \times \cdots \times \partial (\Phi_p)_n(x),
\tag{20}
\]
where the latter denotes the set of all matrices whose $i$th row belongs to $\partial (\Phi_p)_i(x)$ for each $i$. With this in mind, we proceed to prove the lemma.

(a) For $i \in I_l$, we have $(\Phi_p)_i(x) = \phi_p(x_i - l_i, F_i(x))$. If $(x_i - l_i, F_i(x)) \ne (0, 0)$, then $\phi_p$ is continuously differentiable at such a point, and moreover, by Lemma 2.1(c),
\[
\nabla_a \phi_p(x_i - l_i, F_i(x)) = (D_a)_{ii}(x),
\qquad
\nabla_b \phi_p(x_i - l_i, F_i(x)) = (D_b)_{ii}(x),
\]
with $(D_a)_{ii}(x)$ and $(D_b)_{ii}(x)$ given by (18). Direct calculation with the chain rule gives
\[
\partial (\Phi_p)_i(x)^T = \big\{ (D_a)_{ii}(x)\, e_i + \nabla F_i(x)\, (D_b)_{ii}(x) \big\},
\]
where $e_i \in \mathbb{R}^n$ denotes the column vector whose $i$th element is 1 and whose other elements are zero. If $(x_i - l_i, F_i(x)) = (0, 0)$, then using the generalized chain rule [21, Theorem 2.3.10] yields
\[
\partial (\Phi_p)_i(x)^T \subseteq \big\{ (D_a)_{ii}(x)\, e_i + \nabla F_i(x)\, (D_b)_{ii}(x) \big\},
\]
where $(D_a)_{ii}(x)$ and $(D_b)_{ii}(x)$ are given by (19). Thus, we prove part (a).

(b) Since $(\Phi_p)_i(x) = -\phi_p(u_i - x_i, -F_i(x))$ for $i \in I_u$, following the same arguments as in part (a) gives the desired results.

(c) For $i \in I_{lu}$, $(\Phi_p)_i(x) = \phi_p\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big)$. We denote $g_i(x) := \phi_p(u_i - x_i, -F_i(x))$ and $h_i(x) := (x_i - l_i,\, g_i(x))$. In other words, $(\Phi_p)_i(x) = \phi_p(h_i(x))$. We first argue that
\[
\partial (\Phi_p)_i(x) = \partial \phi_p(h_i(x))\, \partial h_i(x).
\]
If $\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big) \ne (0, 0)$, i.e., $h_i(x) \ne (0, 0)$, then clearly $\phi_p$ is continuously differentiable at $h_i(x)$. In addition, the continuous differentiability of $F$ together with the Lipschitz continuity of $\phi_p$ (by Lemma 2.1(b)) implies that $h_i$ is locally Lipschitz. By [21, Theorem 2.6.6], we then have $\partial (\Phi_p)_i(x) = \partial \phi_p(h_i(x))\, \partial h_i(x)$.

If $\big(x_i - l_i,\, \phi_p(u_i - x_i, -F_i(x))\big) = (0, 0)$, i.e., $h_i(x) = (x_i - l_i, g_i(x)) = (0, 0)$, then $\phi_p$ is continuously differentiable at $(u_i - x_i, -F_i(x))$ since $u_i - x_i = u_i - l_i > 0$. Hence, $h_i$ is continuously differentiable at $x$, which by the corollary to [21, Proposition 2.2.1] implies that $h_i$ is strictly differentiable at $x$. Furthermore, $\phi_p$ is Lipschitz and convex by [19, Proposition 3.1(b)]. This implies that $\phi_p$ is regular everywhere due to [21, Proposition 2.3.6(b)]. Then applying [21, Theorem 2.3.9(iii)] gives $\partial (\Phi_p)_i(x) = \partial \phi_p(h_i(x))\, \partial h_i(x)$.

Next we look into $\partial \phi_p(h_i(x))$ and $\partial h_i(x)$, and try to write them out. Let $(a_i(x), b_i(x)) \in \partial \phi_p(h_i(x))$. Since $h_i(x) = (x_i - l_i,\, g_i(x))$, we have
\[
\partial h_i(x)^T = \big\{ (e_i, \sigma^i) \mid \sigma^i \in \partial g_i(x) \big\}
\]
and
\[
\partial (\Phi_p)_i(x) = \big\{ a_i(x)(e_i)^T + b_i(x)(\sigma^i)^T \mid \sigma^i \in \partial g_i(x) \big\}.
\tag{21}
\]
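The proof continues beyond this excerpt. Independently of it, the expression in Lemma 3.1(a) can be spot-checked numerically: the sketch below (an affine $F$ and all helper names are our own choices) compares the vector $(D_a)_{ii} e_i + \nabla F_i(x)\,(D_b)_{ii}$, with $(D_a)_{ii}$, $(D_b)_{ii}$ as in (18), against a central finite-difference gradient of $(\Phi_p)_i$ at a point where $\Phi_p$ is differentiable.

```python
import numpy as np

p = 3.0
phi_p = lambda a, b: (abs(a)**p + abs(b)**p)**(1.0 / p) - (a + b)

# F(x) = A x + c with lower bounds only, so every index lies in I_l.
A = np.array([[2.0, 0.5], [-0.3, 1.5]])
c = np.array([-1.0, 0.4])
l = np.zeros(2)
F = lambda x: A @ x + c
Phi = lambda x: np.array([phi_p(x[i] - l[i], F(x)[i]) for i in range(2)])

x = np.array([0.8, 1.2])
i, eps = 0, 1e-6
fd_row = np.array([(Phi(x + eps * np.eye(2)[j])[i] - Phi(x - eps * np.eye(2)[j])[i]) / (2 * eps)
                   for j in range(2)])

a, b = x[i] - l[i], F(x)[i]                     # here (a, b) != (0, 0), so (18) applies
norm_p = (abs(a)**p + abs(b)**p)**(1.0 / p)
Da_ii = np.sign(a) * abs(a)**(p - 1) / norm_p**(p - 1) - 1.0
Db_ii = np.sign(b) * abs(b)**(p - 1) / norm_p**(p - 1) - 1.0
row = Da_ii * np.eye(2)[i] + Db_ii * A[i]       # grad F_i(x) = A[i] for this affine F
print(np.allclose(fd_row, row, atol=1e-5))      # True
```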
