
Neural network for solving SOCQP and SOCCVI based on two discrete-type classes of SOC complementarity functions

Juhe Sun 1
School of Science, Shenyang Aerospace University, Shenyang 110136, China
E-mail: juhesun@163.com

Xiao-Ren Wu
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
E-mail: cantor0968@gmail.com

B. Saheya 2
College of Mathematical Science, Inner Mongolia Normal University, Hohhot 010022, Inner Mongolia, China
E-mail: saheya@imnu.edu.cn

Jein-Shan Chen 3
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
E-mail: jschen@math.ntnu.edu.tw

Chun-Hsu Ko
Department of Electrical Engineering, I-Shou University, Kaohsiung 840, Taiwan
E-mail: chko@isu.edu.tw

November 15, 2018 (revised on January 19, 2019)

1This work is supported by National Natural Science Foundation of China (Grant No.11301348).

2The author's work is supported by Natural Science Foundation of Inner Mongolia (Award Number: 2017MS0125) and research fund of IMNU (Award Number: 2017YJRC003).

3Corresponding author. The author’s work is supported by Ministry of Science and Technology, Taiwan.


Abstract. This paper focuses on solving the quadratic programming problem with second-order cone constraints (SOCQP) and the second-order cone constrained variational inequality (SOCCVI) by using neural networks. More specifically, a neural network model based on two discrete-type families of SOC complementarity functions associated with second-order cones is proposed to deal with the Karush-Kuhn-Tucker (KKT) conditions of the SOCQP and SOCCVI. The two discrete-type families of SOC complementarity functions are newly explored. The neural network uses these families of SOC complementarity functions to achieve two unconstrained minimizations, which are merit functions of the Karush-Kuhn-Tucker equations for the SOCQP and SOCCVI. We show that the merit functions for the SOCQP and SOCCVI are Lyapunov functions and that the neural network is asymptotically stable. The main contribution of this paper lies in its simulation part, because we observe a numerical performance different from the existing one. In other words, for our two target problems, more effective SOC complementarity functions, which work well with the proposed neural network, are discovered.

Keywords. Second-order cone, quadratic programming, variational inequality, comple- mentarity function, neural network, Lyapunov stable.

1 Introduction

In the optimization community, it is well known that there are many computational approaches for solving optimization problems such as linear programming, nonlinear programming, variational inequalities, and complementarity problems; see [2, 3, 5, 8, 12, 31] and references therein. These approaches include merit function methods, interior point methods, Newton methods, nonlinear equation methods, and projection methods and their variants. All the aforementioned methods rely on iterative schemes, usually provide only "approximate" solutions to the original optimization problems, and do not offer real-time solutions. However, real-time solutions are desired in many applications, such as force analysis in robot grasping and control applications. Therefore, traditional optimization methods may not be suitable for these applications due to stringent computational time requirements.

The neural network approach, which was proposed by Hopfield and Tank [16, 30] in the 1980s, has an advantage in solving real-time optimization problems. Since then, neural networks have been applied to various optimization problems; see [1, 4, 9–11, 14, 17–19, 21, 23, 32–42] and references therein. Unlike traditional optimization algorithms, the essence of the neural network approach for optimization is to establish a nonnegative Lyapunov function (or energy function) and a dynamic system that represents an artificial neural network. This dynamic system usually adopts the form of a first-order ordinary differential equation, and its trajectory is likely to converge to an equilibrium point, which corresponds to the solution of the considered optimization problem.


Following a similar idea, researchers have also developed many continuous-time neural networks for second-order cone constrained optimization problems. For example, Ko, Chen and Yang [22] proposed two kinds of neural networks with different SOCCP functions for solving the second-order cone program; Sun, Chen and Ko [29] gave two kinds of neural networks (the first one is based on the Fischer-Burmeister function and the second one relies on a projection function) to solve the second-order cone constrained variational inequality (SOCCVI) problem; Miao, Chen and Ko [25] proposed a neural network model for efficiently solving general nonlinear convex programs with second-order cone constraints. In this paper, we are interested in employing the neural network approach for solving two types of SOC constrained problems, the quadratic programming problem with second-order cone constraints (SOCQP for short) and the second-order cone constrained variational inequality (SOCCVI for short), whose mathematical formulations are described below.

The SOCQP is in the form of

min  (1/2) x^T Q x + c^T x
s.t. Ax = b                                          (1)
     x ∈ K

where Q ∈ R^{n×n}, A is an m × n matrix with full row rank, b ∈ R^m, and K is a Cartesian product of second-order cones (SOCs), also called Lorentz cones. In other words,

K = K^{n1} × K^{n2} × · · · × K^{nq},

where n1, · · · , nq, q are positive integers, n1 + · · · + nq = n, and K^{ni} denotes the SOC in R^{ni} defined by

K^{ni} := { xi = (xi1, xi2) ∈ R × R^{ni−1} | ‖xi2‖ ≤ xi1 },          (2)

with K^1 denoting the set of nonnegative real numbers R_+. A special case of K corresponds to the nonnegative orthant cone R^n_+, i.e., q = n and n1 = · · · = nq = 1. We assume that Q is a symmetric positive semi-definite matrix and that problem (1) satisfies a suitable constraint qualification [20], such as the generalized Slater condition that there exists a strictly feasible point x̂. Then x is a solution to problem (1) if and only if there exists a Lagrange multiplier (µ, y) ∈ R^m × R^n such that

Ax − b = 0
c + Qx + A^T µ − y = 0                               (3)
K ∋ y ⊥ x ∈ K

In Section 3, we will employ two new families of SOC-complementarity functions and use (3) to build up the neural network model for solving SOCQP.


We say a few words about why we assume that Q is a symmetric positive semi-definite matrix. First, it is clear that the symmetry assumption is harmless because Q can be replaced by (1/2)(Q^T + Q), which is symmetric. Indeed, with Q being a symmetric positive definite matrix, the SOCQP can be recast as a standard SOCP. To see this, we observe that, by completing the square,

(1/2) x^T Q x + c^T x = (1/2) ‖Q^{1/2}x + Q^{−1/2}c‖^2 − (1/2) c^T Q^{−1} c.

Then, the SOCQP (with K = K^n) is equivalent to

min  ‖Q^{1/2}x + Q^{−1/2}c‖
s.t. Ax = b
     x ∈ K^n,

which is also the same as

min  ‖ȳ‖
s.t. Q^{1/2}x − ȳ = −Q^{−1/2}c
     Ax = b
     x ∈ K^n.

This formulation is further equivalent to

min  y1
s.t. Q^{1/2}x − ȳ = −Q^{−1/2}c
     Ax = b                                          (4)
     x ∈ K^n
     y1 ≥ ‖ȳ‖.

Now, we let y := (y1, ȳ), which says y ∈ K^{n+1}, and denote

v̂ := (x, y) ∈ K^n × K^{n+1},    ĉ := (0, e) ∈ R^{2n+1},

Â := [ A        0   0
       Q^{1/2}  0  −I_n ],    b̂ := [ b
                                      −Q^{−1/2}c ].

Thus, the above reformulation (4) is expressed as a standard SOCP as below:

min  ĉ^T v̂
s.t. Â v̂ = b̂                                         (5)
     v̂ ∈ K^n × K^{n+1}.

In view of this reformulation (5), we focus on SOCQP with Q being symmetric positive semi-definite in this paper.
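As a concrete illustration of this reformulation, the following small NumPy sketch assembles the data (Â, b̂, ĉ) of the standard SOCP (5) from given SOCQP data (Q, A, b, c). It assumes Q is symmetric positive definite; the function name socqp_to_socp and the variable ordering v̂ = (x, y1, ȳ) are our own choices, not from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def socqp_to_socp(Q, A, b, c):
    """Hypothetical helper: map SOCQP data (Q, A, b, c) to the SOCP data of (5)."""
    n, m = Q.shape[0], A.shape[0]
    Q_half = np.real(sqrtm(Q))                 # Q^{1/2}
    Q_minus_half = np.linalg.inv(Q_half)       # Q^{-1/2}
    # decision variable v_hat = (x, y1, y_bar) in K^n x K^{n+1}
    c_hat = np.concatenate([np.zeros(n), [1.0], np.zeros(n)])   # picks out y1
    A_hat = np.block([
        [A,      np.zeros((m, 1)), np.zeros((m, n))],           # Ax = b
        [Q_half, np.zeros((n, 1)), -np.eye(n)],                 # Q^{1/2}x - y_bar = -Q^{-1/2}c
    ])
    b_hat = np.concatenate([b, -Q_minus_half @ c])
    return A_hat, b_hat, c_hat
```

Any SOCP solver accepting the cone K^n × K^{n+1} could then be applied to the assembled data.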


The SOCCVI, our other target problem, is to find x ∈ C satisfying

⟨F(x), y − x⟩ ≥ 0   ∀y ∈ C,          (6)

where the set C is finitely representable and is given by

C = {x ∈ R^n | h(x) = 0, −g(x) ∈ K}.

Here ⟨·, ·⟩ denotes the Euclidean inner product; F : R^n → R^n, h : R^n → R^l and g : R^n → R^m are continuously differentiable functions; and K is a Cartesian product of second-order cones (or Lorentz cones), expressed as

K = K^{m1} × K^{m2} × · · · × K^{mp},          (7)

with l ≥ 0, m1, m2, · · · , mp ≥ 1, and m1 + m2 + · · · + mp = m. When h is affine, an important special case of the SOCCVI corresponds to the KKT conditions of the convex second-order cone program (CSOCP):

min  f(x)
s.t. Ax = b                                          (8)
     −g(x) ∈ K,

where A ∈ R^{l×n} has full row rank, b ∈ R^l, g : R^n → R^m, and f : R^n → R. Furthermore, when f is a convex twice continuously differentiable function, problem (8) is equivalent to the following SOCCVI, which is to find x ∈ C such that

⟨∇f(x), y − x⟩ ≥ 0,   ∀y ∈ C,

where

C = {x ∈ R^n | Ax − b = 0, −g(x) ∈ K}.

In fact, the SOCCVI can be solved by analyzing its KKT conditions:

L(x, µ, λ) = 0,
⟨g(x), λ⟩ = 0,  −g(x) ∈ K,  λ ∈ K,                   (9)
h(x) = 0,

where L(x, µ, λ) = F(x) + ∇h(x)µ + ∇g(x)λ is the variational inequality Lagrangian function, µ ∈ R^l and λ ∈ R^m. We also point out that the neural network approach for the SOCCVI was already studied in [29]. Here we revisit the SOCCVI with different neural network models. More specifically, in our earlier work [29], we employed the neural network approach for the SOCCVI problem (6)-(7), in which the neural networks were aimed at solving the system (9), whose solutions are candidates for solutions of the SOCCVI problem (6)-(7). There were two neural networks considered in [29]. The first one is based on the smoothed Fischer-Burmeister function, while the other one is based on the projection function. Both neural networks possess asymptotic stability under suitable conditions. In Section 4, in light of (9) again, we adopt new and different SOC-complementarity functions to construct our new neural networks.

As mentioned earlier, this paper studies neural networks based on two new classes of SOC-complementarity functions to efficiently solve the SOCQP and SOCCVI. Although the idea and the stability analysis for both problems are routine, we emphasize that the main contribution of this paper lies in its simulations. More specifically, from the numerical performance and comparison, we observe a new phenomenon different from the existing one in the literature. This may suggest updated choices of SOC complementarity functions to work with the neural network approach.

2 Preliminaries

Consider the first-order ordinary differential equation (ODE):

ẇ(t) = H(w(t)),   w(t0) = w0 ∈ R^n,          (10)

where H : R^n → R^n is a mapping. A point w̄ = w(t̄) is called an equilibrium point or a steady state of the dynamic system (10) if H(w̄) = 0. If there is a neighborhood Ω ⊆ R^n of w̄ such that H(w̄) = 0 and H(w) ≠ 0 for all w ∈ Ω \ {w̄}, then w̄ is called an isolated equilibrium point.

Lemma 2.1. Suppose that H : R^n → R^n is a continuous mapping. Then, for any t0 > 0 and w0 ∈ R^n, there exists a local solution w(t) to (10) with t ∈ [t0, τ) for some τ > t0. If, in addition, H is locally Lipschitz continuous at w0, then the solution is unique; if H is Lipschitz continuous on R^n, then τ can be extended to ∞.

Let w(t) be a solution to the dynamic system (10). An isolated equilibrium point w̄ is Lyapunov stable if for any w0 = w(t0) and any ε > 0, there exists a δ > 0 such that ‖w(t) − w̄‖ < ε for all t ≥ t0 whenever ‖w(t0) − w̄‖ < δ. An isolated equilibrium point w̄ is said to be asymptotically stable if, in addition to being Lyapunov stable, it has the property that w(t) → w̄ as t → ∞ for all ‖w(t0) − w̄‖ < δ. An isolated equilibrium point w̄ is exponentially stable if there exists a δ > 0 such that every solution w(t) of (10) with initial condition w(t0) = w0 and ‖w(t0) − w̄‖ < δ is well defined on [0, +∞) and satisfies

‖w(t) − w̄‖ ≤ c e^{−ωt} ‖w(t0) − w̄‖   ∀t ≥ t0,

where c > 0 and ω > 0 are constants independent of the initial point.


Let Ω ⊆ R^n be an open neighborhood of w̄. A continuously differentiable function V : R^n → R is said to be a Lyapunov function at the state w̄ over the set Ω for equation (10) if

V(w̄) = 0,   V(w) > 0   ∀w ∈ Ω \ {w̄},
V̇(w) ≤ 0    ∀w ∈ Ω \ {w̄}.

Lyapunov stability and asymptotic stability can be verified by using a Lyapunov function, which is a useful tool for the analysis.

Lemma 2.2. (a) An isolated equilibrium point w̄ is Lyapunov stable if there exists a Lyapunov function over some neighborhood Ω of w̄.

(b) An isolated equilibrium point w̄ is asymptotically stable if there exists a Lyapunov function over some neighborhood Ω of w̄ such that V̇(w) < 0 for all w ∈ Ω \ {w̄}.

For more details, please refer to any standard ODE textbook, e.g., [26].

Next, we briefly recall some concepts associated with the SOC, which are helpful for understanding the target problems and our analysis techniques. We start with introducing the Jordan product and the SOC-complementarity function. For any x = (x1, x2) ∈ R × R^{n−1} and y = (y1, y2) ∈ R × R^{n−1}, we define their Jordan product associated with K^n as

x ◦ y = ( x^T y , y1 x2 + x1 y2 ).

The Jordan product ◦, unlike scalar or matrix multiplication, is not associative, which is a main source of complication in the analysis of SOC constrained optimization. There exists an identity element under this product, which is denoted by e := (1, 0, · · · , 0)^T ∈ R^n. Note that x^2 means x ◦ x and x + y means the usual componentwise addition of vectors.

It is known that x^2 ∈ K^n for all x ∈ R^n. Moreover, if x ∈ K^n, then there exists a unique vector in K^n, denoted by x^{1/2}, such that (x^{1/2})^2 = x^{1/2} ◦ x^{1/2} = x. We also denote |x| := (x^2)^{1/2}.

A vector-valued function φ : R^n × R^n → R^n is called an SOC-complementarity function if it satisfies

φ(x, y) = 0  ⟺  x ◦ y = 0,  x ∈ K^n,  y ∈ K^n.

There have been many SOC-complementarity functions studied in the literature; see [6, 7, 13, 15, 27] and references therein. Among them, two popular ones are the Fischer-Burmeister function φFB and the natural residual function φNR, which are given by

φFB(x, y) = (x^2 + y^2)^{1/2} − (x + y),
φNR(x, y) = x − (x − y)_+.
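To make the preceding definitions concrete, here is a minimal NumPy sketch (all helper names are ours, not from the paper) of the spectral factorization associated with K^n, the Jordan product, the projection onto K^n, and the two functions φFB and φNR; it follows the spectral formulas recalled later in Section 3.

```python
import numpy as np

def spectral(x):
    """Spectral factorization x = lam1*u1 + lam2*u2 associated with K^n."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    # any unit vector works in the degenerate case x2 = 0
    w = x2 / nx2 if nx2 > 0 else np.ones_like(x2) / max(np.sqrt(x2.size), 1.0)
    lam = np.array([x1 - nx2, x1 + nx2])
    u = [0.5 * np.concatenate(([1.0], s * w)) for s in (-1.0, 1.0)]
    return lam, u

def soc_fun(x, f):
    """Apply a scalar function f through the spectral values of x."""
    lam, u = spectral(x)
    return f(lam[0]) * u[0] + f(lam[1]) * u[1]

def proj_soc(x):                       # (x)_+ : projection onto K^n
    return soc_fun(x, lambda t: max(t, 0.0))

def jordan_prod(x, y):                 # x o y
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def phi_fb(x, y):                      # (x^2 + y^2)^{1/2} - (x + y)
    w = jordan_prod(x, x) + jordan_prod(y, y)
    return soc_fun(w, lambda t: np.sqrt(max(t, 0.0))) - (x + y)

def phi_nr(x, y):                      # x - (x - y)_+
    return x - proj_soc(x - y)
```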


Some existing SOC-complementarity functions are indeed variants of φFB and φNR. Recently, Ma, Chen, Huang and Ko [24] explored the idea of "discrete generalization" of the Fischer-Burmeister function, which yields the following class of functions (denoted by φ^p_D−FB):

φ^p_D−FB(x, y) = ( √(x^2 + y^2) )^p − (x + y)^p,          (11)

where p > 1 is a positive odd integer. Applying a similar idea, they also extended φNR to another family of SOC-complementarity functions, φ^p_NR : R^n × R^n → R^n, whose formula is as below:

φ^p_NR(x, y) = x^p − [(x − y)_+]^p,          (12)

where p > 1 is a positive odd integer and (·)_+ means the projection onto K^n. The functions φ^p_D−FB and φ^p_NR are continuously differentiable SOC-complementarity functions with computable Jacobians, which can be found in [24].
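Building on the helpers sketched above, the two discrete-type families (11) and (12) can be written as follows; this uses the fact that a Jordan power x^p can be computed through the spectral values, and assumes p is an odd integer greater than 1 (the function names are ours).

```python
def jordan_pow(x, p):
    """Jordan power x^p = lam1^p * u1 + lam2^p * u2 (p a positive integer)."""
    return soc_fun(x, lambda t: t ** p)

def phi_dfb_p(x, y, p=3):
    # (sqrt(x^2 + y^2))^p - (x + y)^p, as in (11)
    w = jordan_prod(x, x) + jordan_prod(y, y)
    return soc_fun(w, lambda t: max(t, 0.0) ** (p / 2.0)) - jordan_pow(x + y, p)

def phi_nr_p(x, y, p=3):
    # x^p - [(x - y)_+]^p, as in (12)
    return jordan_pow(x, p) - jordan_pow(proj_soc(x - y), p)
```

For p = 1 these expressions formally reduce to φFB and φNR; the paper takes p > 1 odd, for which [24] establishes continuous differentiability.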

3 Neural networks for SOCQP

In this section, we first show how we achieve the neural network model for SOCQP and prove various stabilities for it accordingly. Then, numerical experiments are reported to demonstrate the effectiveness of the proposed neural network.

3.1 The model and stability analysis

As mentioned earlier, the KKT conditions of the SOCQP are expressed in (3). With the system (3) and a given SOC-complementarity function φ : R^n × R^n → R^n, it is clear that the system (3) is equivalent to

H(u) = [ Ax − b
         c + Qx + A^T µ − y
         φ(x, y) ] = 0,

where u = (x, µ, y) ∈ R^n × R^m × R^n. Moreover, we can specifically describe ∇H(u) as follows:

∇H(u) = [ A^T   Q    ∇_x φ
          0     A    0
          0     −I   ∇_y φ ].

Here φ is a continuously differentiable SOC-complementarity function such as φ^p_D−FB and φ^p_NR introduced in Section 2. It is clear that if u solves H(u) = 0, then u solves (1/2)‖H(u)‖^2 = 0. Accordingly, we consider a specific first-order ordinary differential equation as below:

du(t)/dt = −ρ ∇( (1/2)‖H(u)‖^2 ),   u(t0) = u0,          (13)

where ρ > 0 is a time scaling factor. In fact, letting τ = ρt gives du(t)/dt = ρ du(τ)/dτ; hence, it follows from (13) that du(τ)/dτ = −∇((1/2)‖H(u)‖^2). In view of this, we set ρ = 1 in the subsequent analysis. Next, we show that an equilibrium of the neural network (13) corresponds to a solution of the system (3).

Lemma 3.1. Let u^* be an equilibrium point of the neural network (13) and suppose that ∇H(u^*) is nonsingular. Then u^* solves the system (3).

Proof. Since ∇((1/2)‖H(u)‖^2) = ∇H(u)H(u) and ∇H(u^*) is nonsingular, it is clear that ∇((1/2)‖H(u^*)‖^2) = 0 if and only if H(u^*) = 0. □

Besides, the following results address the existence and uniqueness of the solution trajectory of the neural network (13).

Theorem 3.1. (a) For any initial point u0 = u(t0), there exists a unique continuous maximal solution u(t) with t ∈ [t0, τ) for the neural network (13).

(b) If the level set L(u0) := {u | ‖H(u)‖^2 ≤ ‖H(u0)‖^2} is bounded, then τ can be extended to ∞.

Proof. This proof is exactly the same as the one in [29, Proposition 3.4], so we omit it here. □

Now, we are ready to analyze the stability of an isolated equilibrium u^* of the neural network (13), which means ∇((1/2)‖H(u^*)‖^2) = 0 and ∇((1/2)‖H(u)‖^2) ≠ 0 for u ∈ Ω \ {u^*}, with Ω being a neighborhood of u^*.

Theorem 3.2. Let u^* be an isolated equilibrium point of the neural network (13).

(a) If ∇H(u^*) is nonsingular, then the isolated equilibrium point u^* is asymptotically stable, and hence Lyapunov stable.

(b) If ∇H(u) is nonsingular for all u ∈ Ω, then the isolated equilibrium point u^* is exponentially stable.

Proof. The desired results can be proved by using Lemma 3.1 and mimicking the arguments as in [29, Theorem 3.1]. □


3.2 Numerical experiments

In order to demonstrate the effectiveness of the proposed neural network, we test three examples for our neural network (13). The numerical implementation is coded in Matlab 7.0 and the ordinary differential equation solver adopted here is ode23, which uses a Runge-Kutta (2, 3) formula. As mentioned earlier, in general the parameter ρ is set to 1. For some special examples, the parameter ρ is set to another value.
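A rough Python analogue of this setup is sketched below; it is not the paper's Matlab code. SciPy's RK23 integrator is an explicit Runge-Kutta (2, 3) pair comparable to ode23, the Jacobian of H is approximated here by forward differences instead of the closed-form expressions given below for the examples, and the helper names num_jac and run_network are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

def num_jac(H, u, eps=1e-7):
    """Forward-difference Jacobian of H at u."""
    h0, n = H(u), u.size
    J = np.empty((h0.size, n))
    for i in range(n):
        du = np.zeros(n); du[i] = eps
        J[:, i] = (H(u + du) - h0) / eps
    return J

def run_network(H, u0, t_end=50.0, rho=1.0):
    """Integrate du/dt = -rho * grad(0.5*||H(u)||^2) = -rho * J_H(u)^T H(u)."""
    rhs = lambda t, u: -rho * num_jac(H, u).T @ H(u)
    return solve_ivp(rhs, (0.0, t_end), u0, method='RK23', rtol=1e-6)
```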

Example 3.1. Consider the following SOCQP problem:

min  (x1 − 3)^2 + x2^2 + (x3 − 1)^2 + (x4 − 2)^2 + (x5 + 1)^2
s.t. x ∈ K^5

After a suitable transformation, it can be recast as an SOCQP with Q = 2I_5, c = [−6, 0, −2, −4, 2]^T, A = 0, and b = 0. This problem has an optimal solution x^* = [3, 0, 1, 2, −1]^T. Now, we use the proposed neural network (13) with the two cases φ = φ^p_D−FB and φ = φ^p_NR, respectively, to solve the above SOCQP; the trajectories are depicted in Figures 1-4. For the sake of coding and checking, the following expressions are presented.

For the case of φ = φ^p_D−FB, we have

du(t)/dt = −ρ ∇H(u)H(u),   u(t0) = u0,

H(u) = [ c + 2x − y
         φ^p_D−FB(x, y) ],    u = (x, y),

∇H(u) = [ 2I_5    ∇_x φ^p_D−FB(x, y)
          −I_5    ∇_y φ^p_D−FB(x, y) ],

∇_x φ^p_D−FB(x, y) = 2L_x ∇g^soc(w) − 2L_{x+y} ∇g^soc(v),
∇_y φ^p_D−FB(x, y) = 2L_y ∇g^soc(w) − 2L_{x+y} ∇g^soc(v),

where

w(x, y) := x^2 + y^2 = (w1(x, y), w2(x, y)) = (‖x‖^2 + ‖y‖^2, 2(x1 x2 + y1 y2)) ∈ R × R^4

and

v(x, y) := (x + y)^2 = (‖x + y‖^2, 2(x1 + y1)(x2 + y2)) ∈ R × R^4.

Note that the element w = (w1, w2) ∈ R × R^4 can also be expressed as w = λ1 e1 + λ2 e2, where λi = w1 + (−1)^i ‖w2‖ and ei = (1/2)(1, (−1)^i w2/‖w2‖) (i = 1, 2) if w2 ≠ 0; otherwise ei = (1/2)(1, (−1)^i ν) with ν being any vector in R^4 satisfying ‖ν‖ = 1.


For the case of φ = φ^p_NR, we replace φ^p_D−FB(x, y) by φ^p_NR(x, y); hence, H(u) and ∇H(u) take the following forms:

H(u) = [ c + 2x − y
         φ^p_NR(x, y) ],    u = (x, y),

∇H(u) = [ 2I_5    ∇_x φ^p_NR(x, y)
          −I_5    ∇_y φ^p_NR(x, y) ],

∇_x φ^p_NR(x, y) = ∇h^soc(x) − ∇l^soc(x − y),
∇_y φ^p_NR(x, y) = ∇l^soc(x − y).
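For illustration only, a hypothetical end-to-end run of Example 3.1 could reuse the phi_nr_p and run_network sketches given earlier (so this snippet is not self-contained and is not the paper's Matlab experiment); with u0 = 0 the trajectory is expected to drift toward the optimal solution x^* = (3, 0, 1, 2, −1).

```python
import numpy as np

Q = 2.0 * np.eye(5)
c = np.array([-6.0, 0.0, -2.0, -4.0, 2.0])

def H(u, p=3):
    # u = (x, y); Example 3.1 has no equality constraint, so H stacks
    # the stationarity residual c + Qx - y with phi_nr_p(x, y).
    x, y = u[:5], u[5:]
    return np.concatenate([c + Q @ x - y, phi_nr_p(x, y, p)])

sol = run_network(H, np.zeros(10), t_end=200.0)
print(np.round(sol.y[:5, -1], 3))      # expected to be close to [3, 0, 1, 2, -1]
```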

Figure 1: Transient behavior of the neural network with the φ^p_D−FB function (p = 3) in Example 3.1.

Figure 1 and Figure 3 show the transient behaviors of Example 3.1 for the neural network model (13) based on the smooth SOC-complementarity functions φ^p_D−FB and φ^p_NR with initial state x0 = [0, 0, 0, 0, 0]^T, respectively. In Figure 2, we see the convergence comparison of the neural network model using the φ^p_D−FB function with different values of p = 3, 5, 7. Figure 4 depicts the influence of the parameter p on the norm of the error for the neural network model using the φ^p_NR function.

Example 3.2. Consider the following SOCQP problem:

min  4x1^2 + 10x2^2 + 4x3^2 + 4x1x2 + 12x2x3 − x1 + x2 + 5x3
s.t. 2x1 + x2 − 7 = 0
     3x2 + 2x3 − 1 = 0
     x ∈ K^3


Figure 2: Convergence comparison of the φ^p_D−FB function with different p values for Example 3.1.

For this SOCQP, we have

Q = [ 8   4   0
      4  20  12
      0  12   8 ],    c = [ −1
                             1
                             5 ],    b = [ 7
                                           1 ],    A = [ 2  1  0
                                                         0  3  2 ].

This problem has an approximate solution x^* = (2.6529, 1.6943, −2.0414)^T. Note that the precise solution is

x^* = ( (22 − √37)/6, (−2 + 2√37)/6, (6 − 3√37)/6 )^T.

Indeed, we have

H(u) = [ Ax − b
         c + Qx + A^T µ − y
         φ(x, y) ],    u = (x, µ, y),

and

∇H(u) = [ A^T   Q    ∇_x φ(x, y)
          0     A    0
          0     −I   ∇_y φ(x, y) ].

We also report numerical experiments for the two cases φ = φ^p_D−FB and φ = φ^p_NR; see Figures 5-8.

Figure 5 and Figure 7 show the transient behaviors of Example 3.2 for the neural network model (13) based on φ^p_D−FB and φ^p_NR with initial state x0 = [0, 0, 0]^T, respectively. Figure 6 provides the convergence comparison using the φ^p_D−FB function with different values of p = 3, 5, 7. Figure 8 shows the convergence of the neural network model using the φ^p_NR function, which indicates that this class of φ^p_NR functions does not perform well for this problem.


Figure 3: Transient behavior of the neural network with the φ^p_NR function (p = 3) in Example 3.1.

Example 3.3. Consider the following SOCQP problem:

min  (5/2)x1^2 + 2x2^2 + (5/2)x3^2 + 3x1x2 − 2x2x3 − x1x3 − 47x1 − 35x2 + 2x3
s.t. x ∈ K^3

Here, we have

Q = [  5   3  −1
       3   4  −2
      −1  −2   5 ],    c = [−47, −35, 2]^T,    A = 0,    b = 0.

This problem has an optimal solution x^* = (7, 5, 3)^T.

Figures 9 and 11 show the transient behaviors of Example 3.3 for the neural network model (13) based on φ^p_D−FB and φ^p_NR with initial state x0 = [0, 0, 0]^T, respectively. Figure 10 shows that there is no difference between the neural networks using the φ^p_D−FB function with p = 3, 5. Figure 12 shows that, when p = 5, the neural network based on the φ^p_NR function produces a fast decrease of the norm of the error. We point out that the neural network does not converge when p = 7 for both cases.

4 Neural networks for SOCCVI

This section is devoted to another type of SOC constrained problem, the SOCCVI. As we did for the SOCQP, in this section we first show how we build up the neural network model for the SOCCVI and prove various stabilities for it accordingly. Then, numerical experiments are reported to demonstrate the effectiveness of the proposed neural network.


Figure 4: Convergence comparison of the φ^p_NR function with different p values for Example 3.1.

4.1 The model and stability analysis

Let φ(x, y) be an SOC-complementarity function such as φ^p_D−FB and φ^p_NR defined in (11) and (12), respectively. Mimicking the arguments described in [28], we can verify that the KKT system (9) is equivalent to the following unconstrained smooth minimization problem:

min Ψ(z) := (1/2) ‖S(z)‖^2,          (14)

where z = (x, µ, λ) ∈ R^{n+l+m} and S(z) is given by

S(z) = [ L(x, µ, λ)
         −h(x)
         φ(−g_m1(x), λ_m1)
         ...
         φ(−g_mq(x), λ_mq) ],

with g_mi(x), λ_mi ∈ R^{mi}. In other words, Ψ(z) is a smooth merit function for the KKT system (9).

Figure 5: Transient behavior of the neural network with the φ^p_D−FB function (p = 3) in Example 3.2.

Hence, based on the above smooth minimization problem (14), it is natural to propose a neural network for solving the SOCCVI as below:

dz(t)/dt = −ρ ∇Ψ(z(t)),   z(t0) = z0,          (15)

where ρ > 0 is a scaling factor. To prove the stability of neural network (15), we need to present some properties of Ψ(·).

Proposition 4.1. Let Ψ : R^{n+l+m} → R_+ be defined as in (14). Then, Ψ(z) ≥ 0 for all z = (x, µ, λ) ∈ R^{n+l+m}. Moreover, Ψ(z) = 0 if and only if (x, µ, λ) solves the KKT system (9).

Proof. The proof is straightforward. □
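As a small illustration of how Ψ can be evaluated in practice, the following sketch assembles the residual S(z) of (14) for an SOCCVI instance with a single SOC block K^m; the callables F, h, g and the matrices grad_h = ∇h(x) ∈ R^{n×l} and grad_g = ∇g(x) ∈ R^{n×m} are supplied by the user, phi is one of the SOC-complementarity functions sketched earlier, and all names are our own.

```python
import numpy as np

def S_residual(z, F, h, g, grad_h, grad_g, phi, n, l):
    """Residual S(z) of (14) for a single SOC block; z = (x, mu, lam)."""
    x, mu, lam = z[:n], z[n:n + l], z[n + l:]
    L = F(x) + grad_h(x) @ mu + grad_g(x) @ lam      # VI Lagrangian L(x, mu, lam)
    return np.concatenate([L, -h(x), phi(-g(x), lam)])

def Psi(z, *args):
    """Merit function Psi(z) = 0.5 * ||S(z)||^2 as in (14)."""
    s = S_residual(z, *args)
    return 0.5 * s @ s
```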

Proposition 4.2. Let Ψ : R^{n+l+m} → R_+ be defined as in (14). Then, the following results hold.

(a) The function Ψ is continuously differentiable everywhere with ∇Ψ(z) = ∇S(z)S(z), where

∇S(z) = [ ∇_x L(x, µ, λ)^T   −∇h(x)   −∇g(x) diag{ ∇_{g_mi} φ(−g_mi(x), λ_mi) }_{i=1}^{q}
          ∇h(x)^T             0        0
          ∇g(x)^T             0        diag{ ∇_{λ_mi} φ(−g_mi(x), λ_mi) }_{i=1}^{q} ].


Figure 6: Convergence comparison of the φ^p_D−FB function with different p values for Example 3.2.

(b) If ∇S(z) is nonsingular, then any stationary point (x, µ, λ) ∈ R^{n+l+m} of Ψ is a KKT triple of the SOCCVI problem.

(c) Ψ(z(t)) is nonincreasing with respect to t.

Proof. (a) It follows from the chain rule immediately.

(b) From ∇Ψ(z) = ∇S(z)S(z) and the nonsingularity of the matrix ∇S(z), it is clear that ∇Ψ(z) = 0 if and only if S(z) = 0. Hence, (x, µ, λ) ∈ R^{n+l+m} is a KKT triple of the SOCCVI problem provided that (x, µ, λ) ∈ R^{n+l+m} is a stationary point of Ψ.

(c) From the definition of Ψ(z) and (15), it is easy to verify that

dΨ(z(t))/dt = ∇Ψ(z(t))^T (dz(t)/dt) = −ρ ‖∇Ψ(z(t))‖^2 ≤ 0,          (16)

which says that Ψ(z(t)) is nonincreasing with respect to t. □

Now, we are ready to analyze the behavior of the solution trajectory of neural network (15) and establish three kinds of stabilities for an isolated equilibrium point.

Proposition 4.3. (a) If (x, µ, λ) ∈ R^{n+l+m} is a KKT triple of the SOCCVI problem, then (x, µ, λ) is an equilibrium point of the neural network (15).

(b) If ∇S(z) is nonsingular and (x, µ, λ) ∈ R^{n+l+m} is an equilibrium point of (15), then (x, µ, λ) is a KKT triple of the SOCCVI problem.


Figure 7: Transient behavior of the neural network with the φ^p_NR function (p = 3) in Example 3.2.

Proof. (a) From Proposition 4.1 and (x, µ, λ) ∈ R^{n+l+m} being a KKT triple of the SOCCVI problem, it is clear that S(x, µ, λ) = 0. Besides, by Proposition 4.2, we know that ∇Ψ(x, µ, λ) = ∇S(x, µ, λ)S(x, µ, λ), which implies ∇Ψ(x, µ, λ) = 0. This shows that (x, µ, λ) is an equilibrium point of the neural network (15).

(b) It follows from (x, µ, λ) ∈ R^{n+l+m} being an equilibrium point of the neural network (15) that ∇Ψ(x, µ, λ) = 0. In other words, (x, µ, λ) is a stationary point of Ψ. Then, the result is a direct consequence of Proposition 4.2(b). □

Proposition 4.4. (a) For any initial state z0 = z(t0), there exists exactly one maximal solution z(t) with t ∈ [t0, τ(z0)) for the neural network (15).

(b) If the level set L(z0) = {z ∈ R^{n+l+m} | Ψ(z) ≤ Ψ(z0)} is bounded, then τ(z0) = +∞.

Proof. (a) Since S(·) is continuously differentiable, ∇S(·) is continuous. This means that ∇S(·) is bounded on a compact neighborhood of z, which implies that ∇Ψ(z) is locally Lipschitz continuous. Thus, applying Lemma 2.1 leads to the desired result.

(b) This proof is similar to the proof of Case (i) in [4, Proposition 4.2], so we omit it. □

Figure 8: Convergence comparison of the φ^p_NR function with different p values for Example 3.2.

Remark 4.1. A natural question arises here: when are the level sets

L(Ψ, γ) := {z ∈ R^{n+l+m} | Ψ(z) ≤ γ}

bounded for all γ ∈ R? For the time being, we are not able to answer this question yet.

We suspect that more subtle properties of F, h and g are needed to settle it.

Next, we investigate the convergence of the solution trajectory and the stability of the neural network (15), which are the main results of this section.

Theorem 4.1. (a) Let z(t) with t ∈ [t0, τ(z0)) be the unique maximal solution to the neural network (15). If τ(z0) = +∞ and {z(t)} is bounded, then lim_{t→∞} ∇Ψ(z(t)) = 0.

(b) If ∇S(z) is nonsingular and (x, µ, λ) ∈ R^{n+l+m} is an accumulation point of the trajectory z(t), then (x, µ, λ) is a KKT triple of the SOCCVI problem.

Proof. With Proposition 4.2(b) and (c) and Proposition 4.4, the arguments are exactly the same as those for [23, Corollary 4.3]. Thus, we omit them. □

Theorem 4.2. Let z be an isolated equilibrium point of the neural network (15). Then, the following results hold.

(a) z is asymptotically stable, and hence is also Lyapunov stable.

(b) If ∇S(z) is nonsingular, then it is exponentially stable.


Figure 9: Transient behavior of the neural network with the φ^p_D−FB function (p = 3) in Example 3.3.

Proof. Again, the arguments are similar to those in [29, Theorem 3.1] and we omit them. □

To study conditions for the nonsingularity of ∇S(z) based on φ^p_D−FB and φ^p_NR, we need the following assumptions.

Assumption 4.1. (a) The gradients {∇h_j(x) | j = 1, · · · , l} ∪ {∇g_i(x) | i = 1, · · · , m} are linearly independent.

(b) ∇_x L(x, µ, λ) is positive definite on the null space of the gradients {∇h_j(x) | j = 1, · · · , l}.

When the SOCCVI problem corresponds to the KKT conditions of a convex second-order cone program (CSOCP) as in (8), where both h and g are linear, the above Assumption 4.1(b) is indeed equivalent to the commonly used condition that ∇^2 f(x) is positive definite; see, e.g., [34, Corollary 1].

Assumption 4.2. Let α := w_mi^{p/2} and β := v_mi^{p/2}, where w_mi = g_mi^2 + λ_mi^2 and v_mi = (g_mi + λ_mi)^2. For g_mi(x), λ_mi ∈ K^{mi}, we have

(a) L_{g_mi}^2 − L_β L_α^{−1} L_{g_mi}^2 L_α^{−1} L_β ⪰ 0   or   L_α L_β^{−1} L_{g_mi}^2 L_β^{−1} L_α − L_{g_mi}^2 ⪰ 0;

(b) L_{λ_mi}^2 − L_β L_α^{−1} L_{λ_mi}^2 L_α^{−1} L_β ⪰ 0   or   L_α L_β^{−1} L_{λ_mi}^2 L_β^{−1} L_α − L_{λ_mi}^2 ⪰ 0.


Figure 10: Convergence comparison of the φ^p_D−FB function with different p values for Example 3.3.

Theorem 4.3. Suppose that −g_mi + λ_mi ∈ int K^{mi} for i = 1, 2, · · · , p and that Assumptions 4.1 and 4.2 hold. Then, the matrix

∇S(z) = [ ∇_x L(x, µ, λ)^T   −∇h(x)   −∇g(x) diag{ ∇_{g_mi} φ^p_D−FB(−g_mi(x), λ_mi) }_{i=1}^{q}
          ∇h(x)^T             0        0
          ∇g(x)^T             0        diag{ ∇_{λ_mi} φ^p_D−FB(−g_mi(x), λ_mi) }_{i=1}^{q} ]

is nonsingular.

Proof. We know that ∇S(z) is nonsingular if and only if the following equation has only the zero solution:

∇S(z) [ u
        v
        t ] = 0,   where (u, v, t) ∈ R^n × R^l × R^m.          (17)

To reach the conclusion, we need to prove u = 0, v = 0, t = 0. First, plugging the components of ∇S(z) into (17), we have

(∇_x L)^T u − ∇h(x) v − ∇g(x)(L_{−g+λ} L_β^{−1} − L_{−g} L_α^{−1}) t = 0,          (18)
(∇h(x))^T u = 0,          (19)
(∇g(x))^T u + (L_{−g+λ} L_β^{−1} − L_λ L_α^{−1}) t = 0,          (20)


Figure 11: Transient behavior of the neural network with the φ^p_NR function (p = 3) in Example 3.3.

where

L_{−g+λ} = diag{ L_{−g_m1+λ_m1}, L_{−g_m2+λ_m2}, . . . , L_{−g_mq+λ_mq} },
L_β = diag{ ∇g^soc(v_m1), ∇g^soc(v_m2), . . . , ∇g^soc(v_mq) },
L_{−g} = diag{ L_{−g_m1}, L_{−g_m2}, . . . , L_{−g_mq} },
L_α = diag{ ∇g^soc(w_m1), ∇g^soc(w_m2), . . . , ∇g^soc(w_mq) },
L_λ = diag{ L_{λ_m1}, L_{λ_m2}, . . . , L_{λ_mq} },
α = diag{ α_m1, α_m2, . . . , α_mq },   α_mi = (v_mi)^{p/2} = (−g_mi + λ_mi)^p,   i = 1, 2, . . . , q,
β = diag{ β_m1, β_m2, . . . , β_mq },   β_mi = (w_mi)^{p/2} = ((−g_mi)^2 + λ_mi^2)^{p/2},   i = 1, 2, . . . , q.

From equations (18) and (19), we see that

u^T (∇_x L)^T u − u^T ∇g(x)(L_{−g+λ} L_β^{−1} − L_{−g} L_α^{−1}) t = 0,          (21)

while from equation (20), we have

t^T (L_{−g+λ} L_β^{−1} − L_{−g} L_α^{−1})^T (∇g(x))^T u + t^T (L_{−g+λ} L_β^{−1} − L_{−g} L_α^{−1})^T (L_{−g+λ} L_β^{−1} − L_λ L_α^{−1}) t = 0.          (22)

Next, we claim that

t^T (L_{−g+λ} L_β^{−1} − L_{−g} L_α^{−1})^T (L_{−g+λ} L_β^{−1} − L_λ L_α^{−1}) t ≥ 0.          (23)


Figure 12: Convergence comparison of the φ^p_NR function with different p values for Example 3.3.

To see this, we note that

(L_{−g+λ} L_β^{−1} − L_{−g} L_α^{−1})^T (L_{−g+λ} L_β^{−1} − L_λ L_α^{−1})
= diag{ (L_{−g_m1+λ_m1} L_{β_m1}^{−1} − L_{−g_m1} L_{α_m1}^{−1})^T (L_{−g_m1+λ_m1} L_{β_m1}^{−1} − L_{λ_m1} L_{α_m1}^{−1}), · · · ,
        (L_{−g_mq+λ_mq} L_{β_mq}^{−1} − L_{−g_mq} L_{α_mq}^{−1})^T (L_{−g_mq+λ_mq} L_{β_mq}^{−1} − L_{λ_mq} L_{α_mq}^{−1}) }.

In view of this, to prove inequality (23), it suffices to show that

t_i^T (L_{−g_mi+λ_mi} L_{β_mi}^{−1} − L_{−g_mi} L_{α_mi}^{−1})^T (L_{−g_mi+λ_mi} L_{β_mi}^{−1} − L_{λ_mi} L_{α_mi}^{−1}) t_i ≥ 0          (24)

for i = 1, 2, . . . , q. For convenience, we denote X := −g_mi, Y := λ_mi, A := α_mi, and B := β_mi. With these notations, we have

(L_{−g_mi+λ_mi} L_{β_mi}^{−1} − L_{−g_mi} L_{α_mi}^{−1})^T (L_{−g_mi+λ_mi} L_{β_mi}^{−1} − L_{λ_mi} L_{α_mi}^{−1})
= (L_{X+Y} L_B^{−1} − L_X L_A^{−1})^T (L_{X+Y} L_B^{−1} − L_Y L_A^{−1})
= L_B^{−1} (L_{X+Y} − L_B L_A^{−1} L_X)(L_{X+Y} − L_Y L_A^{−1} L_B) L_B^{−1},

which says that it is enough to show that M := (L_{X+Y} − L_B L_A^{−1} L_X)(L_{X+Y} − L_Y L_A^{−1} L_B) is positive semidefinite in order to prove inequality (24). To this end, we compute that

(1/2)[ (L_{X+Y} − L_B L_A^{−1} L_X)(L_{X+Y} − L_Y L_A^{−1} L_B) + ((L_{X+Y} − L_B L_A^{−1} L_X)(L_{X+Y} − L_Y L_A^{−1} L_B))^T ]
= (1/2)[ 2L_{X+Y}^2 − L_{X+Y}^2 L_A^{−1} L_B − L_B L_A^{−1} L_{X+Y}^2 + L_B L_A^{−1}(L_X L_Y + L_Y L_X) L_A^{−1} L_B ]
= (1/2)[ (I − L_B L_A^{−1}) L_{X+Y}^2 (I − L_A^{−1} L_B) + L_{X+Y}^2 − L_B L_A^{−1} L_{X+Y}^2 L_A^{−1} L_B + L_B L_A^{−1}(L_X L_Y + L_Y L_X) L_A^{−1} L_B ]
= (1/2)[ (I − L_B L_A^{−1}) L_{X+Y}^2 (I − L_A^{−1} L_B) + L_{X+Y}^2 − L_B L_A^{−1}(L_X^2 + L_Y^2) L_A^{−1} L_B ]
= (1/2)[ (I − L_B L_A^{−1}) L_{X+Y}^2 (I − L_A^{−1} L_B) + L_X^2 − L_B L_A^{−1} L_X^2 L_A^{−1} L_B + L_Y^2 − L_B L_A^{−1} L_Y^2 L_A^{−1} L_B + (L_X L_Y + L_Y L_X) ].          (25)

It can be verified that L_{X+Y}^2 and L_X L_Y + L_Y L_X are positive semidefinite. Then, from Assumption 4.2 and (25), we conclude that M := (L_{X+Y} − L_B L_A^{−1} L_X)(L_{X+Y} − L_Y L_A^{−1} L_B) is positive semidefinite; hence inequality (24) holds. Thus, the inequality (23) also holds accordingly. Now, it follows from (21), (22), and (23) that u^T (∇_x L)^T u = 0, which implies that u = 0. Then, equations (18) and (20) become

∇h(x) v + ∇g(x)(L_{−g+λ} L_β^{−1} − L_{−g} L_α^{−1}) t = 0,          (26)
(L_{−g+λ} L_β^{−1} − L_λ L_α^{−1}) t = 0.          (27)

In light of Assumption 4.1(a) and (26), we know

v = 0   and   (L_{−g+λ} L_β^{−1} − L_{−g} L_α^{−1}) t = 0.          (28)

Combining (27) and (28), it is clear that

L_{−g} L_α^{−1} t = L_λ L_α^{−1} t.

Note that −g and λ are strictly complementary. Hence, it yields t = 0. In summary, from equation (17), we deduce u = v = t = 0, which says that the matrix ∇S(z) is nonsingular. □

Theorem 4.4. Suppose that Assumption 4.1 holds and that

∇_{g_mi} φ^p_NR(−g_mi(x), λ_mi) · ∇_{λ_mi} φ^p_NR(−g_mi(x), λ_mi) ⪰ 0.
