
National Science Council (Executive Yuan) Research Project Final Report

Mathematical Studies of Neural Networks (3/3)

Project category: individual project

Project number: NSC94-2115-M-009-001-

Project period: August 1, 2005 to July 31, 2006

Host institution: Department of Applied Mathematics, National Chiao Tung University

Principal investigator: Chih-Wen Shih (石至文)

Report type: complete report

Availability: this project report is open for public access

Date: November 1, 2006

Project title: Mathematical Studies of Neural Networks

Principal investigator: Chih-Wen Shih (石至文)

Abstract (translated from the Chinese):

This three-year project has produced several research results; some have been published and others are still under review. Attached below is a paper published in the SIAM Journal on Applied Mathematics, co-authored with Dr. Chang-Yuan Cheng (research assistant of this project) and Kuang-Hui Lin (M.S.). The abstract of that paper is as follows.

The number of stable stationary solutions is related to the memory storage capacity of a neural network. In this work we study the existence and stability of multiple stationary solutions for the classical Hopfield neural networks and for the corresponding equations with time-delay terms. In the course of the analysis we also obtain basins of attraction for the individual stationary solutions. The main idea rests on geometric properties of the structure of the neural network system. We also provide several numerical simulations that corroborate the theory.


NSC Project Report (2003–2006):

Mathematical Studies in Neural Networks

Chih-Wen Shih

Department of Applied Mathematics

National Chiao Tung University

Hsinchu, Taiwan 300, R.O.C.

November 1, 2006

We have achieved several results in this project. Some manuscripts based on these results have been submitted for publication and some have been accepted for publication. The following report is part of the paper published in SIAM Journal on Applied Mathematics, Vol. 66, No. 4, 2006, pp. 1301–1320.

Corresponding author: Tel: 886-3-5722088, Fax: 886-3-5724679, cwshih@math.nctu.edu.tw


MULTISTABILITY IN RECURRENT NEURAL NETWORKS

CHANG-YUAN CHENG, KUANG-HUI LIN, AND CHIH-WEN SHIH

Abstract. Stable stationary solutions correspond to memory capacity in the application of associative memory for neural networks. In this presentation, existence of multiple stable stationary solutions for Hopfield-type neural networks with delay and without delay is investigated. Basins of attraction for these stationary solutions are also estimated. Such a scenario of dynamics is established through formulating parameter conditions based on a geometrical setting. The present theory is demonstrated by two numerical simulations on the Hopfield neural networks with delays.

Key words. neural network, multistability, delay equations

AMS subject classifications. 34D20, 34D45, 92B20

DOI. 10.1137/050632440

1. Introduction. The studies of neural networks have attracted considerable multidisciplinary research interest in recent years. The developments for neural network models and the theory for the models are, on the one hand, driven by application motif or inspired by biological neuronal behaviors. On the other hand, the neural network theory has motivated and elicited further progress in dynamical system theory. For example, theory for existence of many stable patterns or chaotic dynamics for systems in phase space of large dimension is in strong demand for neural network applications. The progress in this direction of research has also enriched dynamical system theory [6, 17, 27].

The applications of neural networks range from classifications, associative memory, image processing, and pattern recognition to parallel computation and its ability to solve optimization problems. The theory on the dynamics of the networks has been developed according to the purposes of the applications. In the application to parallel computation and signal processing involving finding the solution of an optimization problem, the existence of a computable solution for all possible initial states is the best situation. Mathematically, this means that the network needs to have a unique equilibrium which is globally attractive. Such a convergent behavior is referred to as "monostability" of a network. On the other hand, when a neural network is employed as an associative memory storage or for pattern recognition, the existence of many equilibria is a necessary feature [7, 11, 16, 21]. The notion of "multistability" of a neural network is used to describe coexistence of multiple stable patterns such as equilibria or periodic orbits. In general, if the dynamics for a system are bounded, the existence of multiple stable patterns is accompanied with coexistence of stable and unstable equilibria or periodic orbits. The existence of unstable equilibria is essential in certain applications of neural networks. For example, unstable equilibria are related to digital constraints on selection in winner-take-all problems [32, 33].

Received by the editors May 25, 2005; accepted for publication (in revised form) November 17, 2005; published electronically March 31, 2006. This work was partially supported by The National Science Council and The National Center of Theoretical Sciences of Republic of China on Taiwan.

http://www.siam.org/journals/siap/66-4/63244.html

Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan, Republic of China (chengcy13@yahoo.com.tw, hs3893@mail.nc.hcc.edu.tw).

Corresponding author. Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan, Republic of China (cwshih@math.nctu.edu.tw).


Classical recurrent neural networks are usually systems of ordinary differential equations. Recently, neural network systems with delays have also been studied extensively, thanks to the need from practical applications and mathematical interests. In this presentation, we propose an approach to investigate existence of multiple stationary solutions and their stability for recurrent neural networks with delay and without delay. We shall illustrate our approach through the Hopfield-type model.

Hopfield-type neural networks and their various generalizations have been widely studied and applied in various scientific areas. A typical form for such a network is given by

$$C_i \frac{dx_i(t)}{dt} = -\frac{x_i(t)}{R_i} + \sum_{j=1}^{n} T_{ij}\, g_j(x_j(t-\tau_{ij})) + I_i, \qquad i = 1, 2, \ldots, n, \tag{1.1}$$

where $C_i > 0$ and $R_i > 0$ are, respectively, the input capacitance and resistance associated with neuron $i$; $I_i$ is the constant input; $T_{ij}$ are the connection strengths between neurons; $\tau_{ij} > 0$ are the transmission delays; and $g_i$, $i = 1, 2, \ldots, n$, are the neuron activation functions.

The classical Hopfield-type neural network [16] is system (1.1) without delay, that is, $\tau_{ij} = 0$ for all $i, j$. For the Hopfield-type neural networks, the theory of unique equilibrium and global convergence to the equilibrium has been extensively studied; cf. [9, 10] for the networks without delays and [5, 13, 19, 23, 24, 29, 30, 31, 34, 35] for the delay cases.
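To make the model concrete, the following sketch (not part of the paper) integrates a two-neuron instance of (1.1) with all delays set to zero; the values of $C_i$, $R_i$, $T_{ij}$, $I_i$ and the activation scale are hypothetical placeholders chosen only so the script runs.

```python
# A minimal sketch (not from the paper): integrating the Hopfield-type
# network (1.1) with all tau_ij = 0.  The two-neuron parameters below are
# hypothetical placeholders, chosen only to make the script runnable.
import numpy as np
from scipy.integrate import solve_ivp

def g(x, eps=0.5):
    """Logistic (Fermi) activation, as in (2.4): g(x) = 1/(1 + exp(-x/eps))."""
    return 1.0 / (1.0 + np.exp(-x / eps))

C = np.array([1.0, 1.0])              # input capacitances C_i > 0
R = np.array([1.0, 1.0])              # input resistances  R_i > 0
T = np.array([[4.0, 0.5],
              [0.5, 4.0]])            # connection strengths T_ij
I = np.array([-2.0, -2.0])            # constant inputs I_i

def rhs(t, x):
    # C_i dx_i/dt = -x_i/R_i + sum_j T_ij g_j(x_j) + I_i
    return (-x / R + T @ g(x) + I) / C

x0 = np.array([1.0, -1.0])
sol = solve_ivp(rhs, (0.0, 100.0), x0, rtol=1e-8, atol=1e-10)
print("state at t = 100:", sol.y[:, -1])
```

Different initial states may settle on different equilibria when the parameters admit multistability, which is exactly the regime analyzed in the following sections.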

In contrast to these studies, we propose a treatment to explore the existence of multiple stationary solutions for (1.1) through a geometrical formulation on the parameter conditions. Stability of these equilibria for (1.1) with and without delay shall also be investigated. In addition, estimations of basins of attraction for these stable stationary solutions are derived. The stationary equations are identical for system (1.1) with delay and without delay. Thus, confirmation for the existence of equilibrium points is valid for both cases. However, stability of the equilibrium points and dynamical behaviors can be very different for the systems with delay and without delay. It is very interesting to explore such a difference as well as a possible coincidence of behaviors.

The theory for existence of multiple stable patterns has been developed for cellular neural networks [8, 17, 26, 27]. The neurons in such a system are locally connected, and no time lags were considered therein. Our approach can be adapted to such a network with delays, as remarked in a later section. There are other interesting studies on delayed neural networks in [1, 2, 12, 22, 25].

This presentation is organized as follows. In section 2, we establish conditions for existence of $3^n$ equilibria for the Hopfield network. $2^n$ equilibria among them will be shown to be asymptotically stable for the system without delays, through a linearization analysis. In section 3, we shall verify that under the same conditions, there are $2^n$ regions in $\mathbb{R}^n$, each containing an equilibrium, which are positively invariant under the flow generated by the system with delays and without delays. Subsequently, it is argued that these $2^n$ equilibria are asymptotically stable, even in the presence of delays. We also formulate more sufficient conditions for stability of these $2^n$ equilibria. We extend our theory to more general activation functions, including those with saturations, in section 4. Two numerical simulations on the dynamics of two-neuron networks, which illustrate the present theory, are given in section 5. We summarize our results with a discussion (section 6).

2. Existence of multiple equilibria and their stability. In this section, we shall formulate sufficient conditions for the existence of multiple stationary solutions for Hopfield neural networks with and without delays. Our approach is based on a geometrical observation. The derived parameter conditions are concrete and can be examined easily. We also establish stability criteria of these equilibria for the system without delays, through estimations on the eigenvalues of the linearized system. Stability for the system with delays will be discussed in the next section. After rearranging the parameters, we consider system (1.1) in the following forms: for the network without delay,

$$\frac{dx_i(t)}{dt} = -b_i x_i(t) + \sum_{j=1}^{n} \omega_{ij}\, g_j(x_j(t)) + J_i, \qquad i = 1, 2, \ldots, n, \tag{2.1}$$

and for the network with delays,

$$\frac{dx_i(t)}{dt} = -b_i x_i(t) + \sum_{j=1}^{n} \omega_{ij}\, g_j(x_j(t-\tau_{ij})) + J_i, \qquad i = 1, 2, \ldots, n. \tag{2.2}$$

Herein, $b_i > 0$ and $0 < \tau_{ij} \le \tau := \max_{1\le i,j\le n}\tau_{ij}$. While (2.1) is a system of ordinary differential equations, (2.2) is a system of functional differential equations. The initial condition for (2.2) is

$$x_i(\theta) = \phi_i(\theta), \quad -\tau \le \theta \le 0, \quad i = 1, 2, \ldots, n,$$

and it is usually assumed that $\phi_i \in C([-\tau, 0], \mathbb{R})$. Let $\ell > 0$. For $x \in C([-\tau, \ell], \mathbb{R}^n)$ and $t \in [0, \ell]$, we define

$$x_t(\theta) = x(t + \theta), \quad \theta \in [-\tau, 0]. \tag{2.3}$$

Let us denote $\tilde{F} = (\tilde{F}_1, \ldots, \tilde{F}_n)$, where $\tilde{F}_i$ is the right-hand side of (2.2),

$$\tilde{F}_i(x_t) := -b_i x_i(t) + \sum_{j=1}^{n} \omega_{ij}\, g_j(x_j(t-\tau_{ij})) + J_i,$$

where $x = (x_1, \ldots, x_n)$. A function $x = x(t)$ is called a solution of (2.2) on $[-\tau, \ell)$ if $x \in C([-\tau, \ell), \mathbb{R}^n)$ and $x_t$ defined as in (2.3) lies in the domain of $\tilde{F}$ and satisfies (2.2) for $t \in [0, \ell)$. For a given $\phi \in C([-\tau, 0], \mathbb{R}^n)$, let us denote by $x(t; \phi)$ the solution of (2.2) with $x_0(\theta; \phi) := x(0 + \theta; \phi) = \phi(\theta)$ for $\theta \in [-\tau, 0]$.
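Since (2.2) is a functional differential equation, a numerical solution must carry the whole history segment $x_t$. The sketch below (not from the paper) advances (2.2) with a fixed-step Euler scheme and a history buffer; every parameter in the example call at the bottom is a hypothetical placeholder.

```python
# A minimal sketch (not from the paper): a fixed-step Euler integrator for the
# delayed system (2.2) that keeps a history buffer so that x_j(t - tau_ij)
# can be looked up.  All parameter values here are hypothetical placeholders.
import numpy as np

def simulate_delayed(b, W, J, tau, g, phi, t_end, dt=0.01):
    """Integrate dx_i/dt = -b_i x_i + sum_j W_ij g(x_j(t - tau_ij)) + J_i.

    phi : callable, phi(theta) for theta in [-tau_max, 0], the initial history.
    Returns the time grid and the trajectory (one row per time step).
    """
    n = len(b)
    tau_max = float(np.max(tau))
    n_hist = int(round(tau_max / dt))
    lag = np.rint(tau / dt).astype(int)          # delay in steps, per (i, j)
    steps = int(round(t_end / dt))

    # history[k] holds x(t_k) with t_k = (k - n_hist) * dt
    history = np.array([phi((k - n_hist) * dt) for k in range(n_hist + 1)])
    for _ in range(steps):
        cur = len(history) - 1
        x = history[cur]
        delayed = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                delayed[i, j] = g(history[cur - lag[i, j]][j])
        dx = -b * x + (W * delayed).sum(axis=1) + J
        history = np.vstack([history, x + dt * dx])
    return np.arange(len(history)) * dt - tau_max, history

# Example call with hypothetical parameters and a constant initial history.
if __name__ == "__main__":
    g = lambda x: 1.0 / (1.0 + np.exp(-x / 0.5))
    b = np.array([1.0, 1.0]); W = np.array([[4.0, 0.5], [0.5, 4.0]])
    J = np.array([-2.0, -2.0]); tau = np.full((2, 2), 1.0)
    t, x = simulate_delayed(b, W, J, tau, g, lambda th: np.array([1.0, -1.0]), 50.0)
    print(x[-1])
```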

The activation functions $g_j$ usually have a sigmoidal configuration or are nondecreasing with saturations. Herein, we consider the typical logistic or Fermi function: for all $j = 1, 2, \ldots, n$,

$$g_j(\xi) = g(\xi) := \frac{1}{1 + e^{-\xi/\varepsilon}}, \quad \varepsilon > 0. \tag{2.4}$$

One may also adopt $g_j(\xi) = 1/(1 + e^{-\xi/\varepsilon_j})$, $\varepsilon_j > 0$, or other output functions, as discussed in section 4. Note that the stationary equations for systems (2.1) and (2.2) are identical; namely,

$$F_i(x) := -b_i x_i + \sum_{j=1}^{n} \omega_{ij}\, g_j(x_j) + J_i = 0, \quad i = 1, 2, \ldots, n, \tag{2.5}$$

Fig. 1. The graph for the function $u(y) = y - y^2$ and $y_1 = g(p_i)$, $y_2 = g(q_i)$.

where $x = (x_1, \ldots, x_n)$. For our formulation in the following discussions, we introduce a single-neuron analogue (no interaction among neurons),

$$\frac{d\xi}{dt} = f_i(\xi) := -b_i \xi + \omega_{ii}\, g(\xi) + J_i, \quad \xi \in \mathbb{R}.$$

Let us propose the first parameter condition:

(H1): $0 < \dfrac{b_i \varepsilon}{\omega_{ii}} < \dfrac{1}{4}, \quad i = 1, 2, \ldots, n.$

Lemma 2.1. Under condition (H1), there exist two points $p_i$ and $q_i$ with $p_i < 0 < q_i$ such that $f_i'(p_i) = 0$ and $f_i'(q_i) = 0$ for $i = 1, 2, \ldots, n$.

Proof. We compute that

$$g'(\xi) = \frac{1}{\varepsilon}\,(1 + e^{-\xi/\varepsilon})^{-2}\, e^{-\xi/\varepsilon}. \tag{2.6}$$

Note that $g$ is strictly increasing and that the graph of the function $g'(\xi)$ is concave down and has its maximal value at $\xi = 0$. We let $y = g(\xi)$, $\xi \in \mathbb{R}$. Then $y \in (0, 1)$ and $g(0) = 1/2$. It follows from (2.6) that

$$g'(\xi) = \frac{1}{\varepsilon}\, y^2 \left(\frac{1}{y} - 1\right) = \frac{1}{\varepsilon}\,(y - y^2).$$

On the other hand, for each $i$, since $f_i'(\xi) = -b_i + \omega_{ii}\, g'(\xi)$, we have $f_i'(\xi) = 0$ if and only if $b_i = \omega_{ii}\, g'(\xi)$; equivalently,

$$\frac{b_i \varepsilon}{\omega_{ii}} = y - y^2.$$

From the configuration in Figure 1, it follows that, for each $i$, there exist two points $p_i, q_i$ with $p_i < 0 < q_i$ such that $f_i'(p_i) = f_i'(q_i) = 0$ if the parameter condition $0 < b_i\varepsilon/\omega_{ii} < 1/4$ holds. This completes the proof.

Note that condition (H1) implies $\omega_{ii} > 0$ for all $i = 1, 2, \ldots, n$, since each $b_i$ is already assumed to be a positive constant. We define, for $i = 1, 2, \ldots, n$,

$$\hat{f}_i(\xi) = -b_i \xi + \omega_{ii}\, g(\xi) + k_i^+, \qquad \check{f}_i(\xi) = -b_i \xi + \omega_{ii}\, g(\xi) + k_i^-,$$

where

$$k_i^+ := \sum_{j=1, j\ne i}^{n} |\omega_{ij}| + J_i, \qquad k_i^- := -\sum_{j=1, j\ne i}^{n} |\omega_{ij}| + J_i.$$

It follows that

$$\check{f}_i(x_i) \le F_i(x) \le \hat{f}_i(x_i) \tag{2.7}$$

for all $x = (x_1, \ldots, x_n)$ and $i = 1, 2, \ldots, n$, since $0 \le g_j \le 1$ for all $j$.

Fig. 2. (a) The graph of $g$ with $\varepsilon = 0.5$; (b) Configurations for $\hat{f}_i$ and $\check{f}_i$.

We consider the second parameter condition, which is concerned with the existence of multiple equilibria for (2.1) and (2.2):

(H2): $\hat{f}_i(p_i) < 0, \quad \check{f}_i(q_i) > 0, \quad i = 1, 2, \ldots, n.$

The configuration that motivates (H2) is depicted in Figure 2. Such a configuration is due to the characteristics of the output function $g$. Under assumptions (H1) and (H2), there exist points $\hat{a}_i, \hat{b}_i, \hat{c}_i$ with $\hat{a}_i < \hat{b}_i < \hat{c}_i$ such that $\hat{f}_i(\hat{a}_i) = \hat{f}_i(\hat{b}_i) = \hat{f}_i(\hat{c}_i) = 0$, as well as points $\check{a}_i, \check{b}_i, \check{c}_i$ with $\check{a}_i < \check{b}_i < \check{c}_i$ such that $\check{f}_i(\check{a}_i) = \check{f}_i(\check{b}_i) = \check{f}_i(\check{c}_i) = 0$.
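Numerically, the six points $\hat{a}_i, \hat{b}_i, \hat{c}_i$ and $\check{a}_i, \check{b}_i, \check{c}_i$ can be located by bracketed root finding once $p_i$ and $q_i$ are known, because $\hat{f}_i$ (and likewise $\check{f}_i$) is monotone on each of the three intervals these points delimit. The following sketch is not from the paper; it simply illustrates the bracketing with scipy, and the numbers in the example call come from the first component of Example 5.1 below.

```python
# A minimal sketch (not from the paper): locating the three zeros of f_hat_i
# (and analogously of f_check_i) once (H1) gives the critical points p_i, q_i
# and (H2) gives f_hat_i(p_i) < 0 < f_hat_i(q_i).  Since f_hat_i decreases on
# (-inf, p_i], increases on [p_i, q_i], and decreases on [q_i, inf), the zeros
# a_hat_i < b_hat_i < c_hat_i can be bracketed and found with scipy's brentq.
import numpy as np
from scipy.optimize import brentq

def g(x, eps=0.5):
    return 1.0 / (1.0 + np.exp(-x / eps))

def three_zeros(b_i, omega_ii, k_i, p_i, q_i, eps=0.5):
    f = lambda x: -b_i * x + omega_ii * g(x, eps) + k_i
    # Outer brackets: since 0 < g < 1, f(-M) > 0 and f(M) < 0 for
    # M = (omega_ii + |k_i| + 1) / b_i, so each bracket contains a sign change.
    M = (abs(omega_ii) + abs(k_i) + 1.0) / b_i
    a = brentq(f, -M, p_i)     # leftmost zero, in (-inf, p_i)
    b = brentq(f, p_i, q_i)    # middle zero,  in (p_i, q_i)
    c = brentq(f, q_i, M)      # rightmost zero, in (q_i, inf)
    return a, b, c

# Reproduces the zeros of f_hat_1 in Example 5.1: k_1^+ = |omega_12| + J_1 = -4.
print(three_zeros(1.0, 18.0, -4.0, -1.762747, 1.762747))
# expected approximately (-3.993889, -0.757751, 14.0)
```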

Theorem 2.2. Under (H1) and (H2), there exist $3^n$ equilibria for systems (2.1) and (2.2).

Proof. The equilibria of systems (2.1) and (2.2) are zeros of (2.5). Under conditions (H1) and (H2), the graphs of $\hat{f}_i$ and $\check{f}_i$ defined above are as depicted in Figure 2. According to the configurations, there are $3^n$ disjoint closed regions in $\mathbb{R}^n$. Set $\Omega^\alpha = \{(x_1, x_2, \ldots, x_n) \in \mathbb{R}^n \mid x_i \in \Omega_i^{\alpha_i}\}$ with $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ and $\alpha_i =$ "l", "m", or "r", where

$$\Omega_i^l := \{x \in \mathbb{R} \mid \check{a}_i \le x \le \hat{a}_i\}, \quad \Omega_i^m := \{x \in \mathbb{R} \mid \hat{b}_i \le x \le \check{b}_i\}, \quad \Omega_i^r := \{x \in \mathbb{R} \mid \check{c}_i \le x \le \hat{c}_i\}. \tag{2.8}$$

Herein, "l", "m", and "r" mean, respectively, "left", "middle", and "right". Consider any fixed one of these regions $\Omega^\alpha$. For a given $\tilde{x} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_n) \in \Omega^\alpha$, we solve

$$h_i(x_i) := -b_i x_i + \omega_{ii}\, g(x_i) + \sum_{j=1, j\ne i}^{n} \omega_{ij}\, g(\tilde{x}_j) + J_i = 0$$

for $x_i$, $i = 1, 2, \ldots, n$. According to an estimate similar to (2.7), the graph of $h_i$ lies between the graphs of $\hat{f}_i$ and $\check{f}_i$. In fact, the graph of $h_i$ is a vertical shift of the graph of $\hat{f}_i$ or $\check{f}_i$. Thus, one can always find three solutions, and each of them lies in one of the regions in (2.8) for each $i$. Let us pick the one lying in $\Omega_i^{\alpha_i}$ and set it as $x_i$ for each $i$. We define a mapping $H^\alpha : \Omega^\alpha \to \Omega^\alpha$ by $H^\alpha(\tilde{x}) = x = (x_1, x_2, \ldots, x_n)$. Restated, we set

$$x_i = (h_i|_{\Omega_i^l})^{-1}(0) \ \text{if } \alpha_i = \text{"l"}, \quad x_i = (h_i|_{\Omega_i^m})^{-1}(0) \ \text{if } \alpha_i = \text{"m"}, \quad x_i = (h_i|_{\Omega_i^r})^{-1}(0) \ \text{if } \alpha_i = \text{"r"}.$$

Since $g$ is continuous and $h_i$ is a vertical shift of the function $\xi \mapsto -b_i\xi + \omega_{ii}\, g(\xi)$ by the quantity $\sum_{j=1, j\ne i}^{n} \omega_{ij}\, g(\tilde{x}_j) + J_i$, the map $H^\alpha$ is continuous. It follows from Brouwer's fixed point theorem that there exists one fixed point $\bar{x} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n)$ of $H^\alpha$ in $\Omega^\alpha$ which is also a zero of the function $F$, where $F = (F_1, F_2, \ldots, F_n)$. Consequently, there exist $3^n$ zeros of $F$, hence $3^n$ equilibria for systems (2.1) and (2.2), and each of them lies in one of the $3^n$ regions $\Omega^\alpha$. This completes the proof.
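The proof is constructive in spirit: every box $\Omega^\alpha$ contains an equilibrium. A practical way to locate all of them, sketched below (not from the paper), is to solve $F(x) = 0$ from the center of each box and keep the roots that stay inside their box; convergence of the root finder is a numerical heuristic, not part of the theorem. The two-neuron data in the example call are hypothetical, chosen so that $\hat{f}_i$, $\check{f}_i$ coincide with the functions $\hat{f}_1$, $\check{f}_1$ of Example 5.1 and the intervals of (2.8) can be read off Table 1 below.

```python
# A minimal sketch (not from the paper): locating the 3^n equilibria of (2.5)
# by root finding started from the center of each box Omega^alpha of (2.8).
import itertools
import numpy as np
from scipy.optimize import fsolve

def equilibria(b, W, J, g, boxes):
    """boxes[i] = {'l': (lo, hi), 'm': (lo, hi), 'r': (lo, hi)} as in (2.8)."""
    n = len(b)
    F = lambda x: -b * x + W @ g(x) + J
    found = {}
    for alpha in itertools.product("lmr", repeat=n):
        center = np.array([0.5 * sum(boxes[i][alpha[i]]) for i in range(n)])
        x_bar, _, ok, _ = fsolve(F, center, full_output=True)
        inside = all(boxes[i][alpha[i]][0] - 1e-6 <= x_bar[i] <= boxes[i][alpha[i]][1] + 1e-6
                     for i in range(n))
        if ok == 1 and inside:
            found[alpha] = x_bar
    return found

# Hypothetical symmetric two-neuron system: b = (1, 1), omega = [[18, 5], [5, 18]],
# J = (-9, -9), eps = 0.5, so each coordinate has the boxes of Table 1, row 1.
g = lambda x: 1.0 / (1.0 + np.exp(-x / 0.5))
box1 = {"l": (-14.0, -3.993889), "m": (-0.757751, 0.757751), "r": (3.993889, 14.0)}
eq = equilibria(np.array([1.0, 1.0]),
                np.array([[18.0, 5.0], [5.0, 18.0]]),
                np.array([-9.0, -9.0]), g, [box1, box1])
print(len(eq), "equilibria located")   # Theorem 2.2 guarantees 3^2 = 9 here
```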

We consider the following criterion concerning stability of the equilibria:

(H3): $-b_i + \displaystyle\sum_{j=1}^{n} |\omega_{ij}|\, g'(\eta_j) < 0$, where $g'(\eta_j) := \max\{g'(x_j) \mid x_j = \check{c}_j,\ \hat{a}_j\}$, $i = 1, 2, \ldots, n$. (2.9)

A simplified yet more restrictive version for condition (H3) is that for $i = 1, 2, \ldots, n$,

$$b_i > g'(\eta) \sum_{j=1}^{n} |\omega_{ij}| \quad \text{with} \quad g'(\eta) := \max\{g'(x_j) \mid x_j = \check{c}_j,\ \hat{a}_j,\ j = 1, 2, \ldots, n\}. \tag{2.10}$$

Theorem 2.3. Under conditions (H1), (H2), and (H3), there exist $2^n$ asymptotically stable equilibria for the Hopfield neural networks without delay (2.1).

Proof. Among the $3^n$ equilibria in Theorem 2.2, we consider those $\bar{x} = (\bar{x}_1, \ldots, \bar{x}_n)$ with $\bar{x}_i \in \Omega_i^l$ or $\Omega_i^r$ for each $i$. The linearized system of (2.1) at equilibrium $\bar{x}$ is

$$\frac{dy_i}{dt} = -b_i y_i + \sum_{j=1}^{n} \omega_{ij}\, g_j'(\bar{x}_j)\, y_j, \quad i = 1, 2, \ldots, n.$$

Restated, $\dot{y} = Ay$, where $DF(\bar{x}) =: A = [a_{ij}]_{n \times n}$ with

$$[a_{ij}] = \begin{pmatrix} -b_1 + \omega_{11} g'(\bar{x}_1) & \omega_{12} g'(\bar{x}_2) & \cdots & \omega_{1n} g'(\bar{x}_n) \\ \omega_{21} g'(\bar{x}_1) & -b_2 + \omega_{22} g'(\bar{x}_2) & \cdots & \omega_{2n} g'(\bar{x}_n) \\ \vdots & \vdots & \ddots & \vdots \\ \omega_{n1} g'(\bar{x}_1) & \omega_{n2} g'(\bar{x}_2) & \cdots & -b_n + \omega_{nn} g'(\bar{x}_n) \end{pmatrix}.$$

Let

$$r_i = \sum_{j=1, j\ne i}^{n} |a_{ij}| = \sum_{j=1, j\ne i}^{n} |\omega_{ij}\, g'(\bar{x}_j)| = \sum_{j=1, j\ne i}^{n} |\omega_{ij}|\, g'(\bar{x}_j), \quad i = 1, 2, \ldots, n.$$

According to Gerschgorin's theorem,

$$\lambda_k \in \bigcup_{i=1}^{n} B(a_{ii}, r_i)$$

for all $k = 1, 2, \ldots, n$, where $\lambda_k$ are the eigenvalues of $A$ and $B(a_{ii}, r_i) := \{\zeta \in \mathbb{C} \mid |\zeta - a_{ii}| < r_i\}$. Hence, for each $k$, there exists some $i = i(k)$ such that

$$\mathrm{Re}(\lambda_k) < -b_i + \omega_{ii}\, g'(\bar{x}_i) + \sum_{j=1, j\ne i}^{n} |\omega_{ij}|\, g'(\bar{x}_j).$$

Notice that for each $j$, $g'(\xi) \le g'(\check{c}_j)$ (resp., $g'(\xi) \le g'(\hat{a}_j)$) if $\xi \ge \check{c}_j$ (resp., $\xi \le \hat{a}_j$). Since $\bar{x}$ is such that $\bar{x}_j \in \Omega_j^l$ or $\Omega_j^r$, we have $\bar{x}_j \ge \check{c}_j$ or $\bar{x}_j \le \hat{a}_j$ for all $j = 1, 2, \ldots, n$. It follows that $\mathrm{Re}(\lambda_k) < 0$ by (2.9). Thus, under (H3), all the eigenvalues of $A$ have negative real parts. Therefore, there are $2^n$ asymptotically stable equilibria for system (2.1). The proof is completed.

We certainly can replace condition (H3) by weaker ones, such as an individual condition for each equilibrium. Let $\bar{x}$ be an equilibrium lying in $\Omega^\alpha$ with $\alpha = (\alpha_1, \ldots, \alpha_n)$ and $\alpha_i =$ "r" or $\alpha_i =$ "l", that is, $\bar{x}_i \in \Omega_i^l$ or $\Omega_i^r$, for each $i$. For such an equilibrium we consider, for $i = 1, 2, \ldots, n$,

$$b_i > \omega_{ii}\, g'(\xi_i) + \sum_{j=1, j\ne i}^{n} |\omega_{ij}|\, g'(\xi_j), \quad \xi_k = \check{c}_k \text{ if } \alpha_k = \text{"r"}, \quad \xi_k = \hat{a}_k \text{ if } \alpha_k = \text{"l"}, \quad k = 1, \ldots, n.$$

Such conditions are obviously much more tedious than (H3).
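The eigenvalue estimate above is easy to verify numerically for a given equilibrium. The sketch below (not from the paper) assembles the Jacobian $A = DF(\bar{x})$ and tests whether all Gershgorin discs lie in the open left half-plane; the two-neuron data at the bottom are hypothetical.

```python
# A minimal sketch (not from the paper): checking the Gershgorin estimate used
# in the proof of Theorem 2.3 at a candidate equilibrium x_bar.  It builds the
# Jacobian A = DF(x_bar) with entries -b_i*delta_ij + omega_ij*g'(x_bar_j) and
# verifies that every disc B(a_ii, r_i) lies in the open left half-plane,
# which in particular forces Re(lambda_k) < 0 for all eigenvalues.
import numpy as np

def g_prime(x, eps=0.5):
    y = 1.0 / (1.0 + np.exp(-x / eps))
    return (y - y * y) / eps            # g'(x) = (1/eps) * (y - y^2), cf. (2.6)

def gershgorin_stable(b, W, x_bar, eps=0.5):
    A = W * g_prime(x_bar, eps)[None, :] - np.diag(b)   # A_ij = w_ij g'(x_j) - b_i delta_ij
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)
    print("eigenvalues:", np.linalg.eigvals(A))
    return bool(np.all(centers + radii < 0))

# Hypothetical check at a point deep in an Omega^r x Omega^r type region:
b = np.array([1.0, 1.0]); W = np.array([[18.0, 5.0], [5.0, 18.0]])
print(gershgorin_stable(b, W, np.array([10.0, 10.0])))   # True: all discs in Re < 0
```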

3. Stability of equilibria and the basins of attraction. We plan to investigate the stability of equilibrium for system (2.2), that is, with delays. We shall also explore the basins of attraction for the asymptotically stable equilibria, for both systems (2.1) and (2.2), in this section.

Note that the function $\xi \mapsto [\omega_{ii} + \sum_{j=1, j\ne i}^{n} |\omega_{ij}|]\, g'(\xi)$ is continuous for all $i = 1, 2, \ldots, n$. From (2.9) and $\omega_{ii} > 0$, it follows that there exists a positive constant $\varepsilon_0$ such that

$$b_i > \max\left\{\left[\omega_{ii} + \sum_{j=1, j\ne i}^{n} |\omega_{ij}|\right] g'(\xi) : \xi = \hat{a}_i + \varepsilon_0,\ \check{c}_i - \varepsilon_0\right\}, \quad i = 1, 2, \ldots, n. \tag{3.1}$$

Herein, we choose $\varepsilon_0$ such that $\varepsilon_0 < \min\{|\hat{a}_i - p_i|, |\check{c}_i - q_i|\}$ for all $i = 1, 2, \ldots, n$. For system (2.1), we consider the following $2^n$ subsets of $\mathbb{R}^n$. Let $\alpha = (\alpha_1, \ldots, \alpha_n)$ with $\alpha_i =$ "l" or "r", and set

$$\tilde{\Omega}^\alpha = \{(x_1, x_2, \ldots, x_n) \mid x_i \in \tilde{\Omega}_i^l \text{ if } \alpha_i = \text{"l"},\ x_i \in \tilde{\Omega}_i^r \text{ if } \alpha_i = \text{"r"}\}, \tag{3.2}$$

where $\tilde{\Omega}_i^l := \{\xi \in \mathbb{R} \mid \xi \le \hat{a}_i + \varepsilon_0\}$ and $\tilde{\Omega}_i^r := \{\xi \in \mathbb{R} \mid \xi \ge \check{c}_i - \varepsilon_0\}$. For system (2.2), we consider the following $2^n$ subsets of $C([-\tau, 0], \mathbb{R}^n)$. Let $\alpha = (\alpha_1, \ldots, \alpha_n)$ with $\alpha_i =$ "l" or "r", and set

$$\Lambda^\alpha = \{\varphi = (\varphi_1, \varphi_2, \ldots, \varphi_n) \mid \varphi_i \in \Lambda_i^l \text{ if } \alpha_i = \text{"l"},\ \varphi_i \in \Lambda_i^r \text{ if } \alpha_i = \text{"r"}\}, \tag{3.3}$$

where

$$\Lambda_i^l := \{\varphi_i \in C([-\tau, 0], \mathbb{R}) \mid \varphi_i(\theta) \le \hat{a}_i + \varepsilon_0 \text{ for all } \theta \in [-\tau, 0]\},$$
$$\Lambda_i^r := \{\varphi_i \in C([-\tau, 0], \mathbb{R}) \mid \varphi_i(\theta) \ge \check{c}_i - \varepsilon_0 \text{ for all } \theta \in [-\tau, 0]\}.$$

Theorem 3.1. Assume that (H1) and (H2) hold. Then each $\tilde{\Omega}^\alpha$ and each $\Lambda^\alpha$ are positively invariant with respect to the solution flow generated by systems (2.1) and (2.2), respectively.

Proof. We prove only the delay case, i.e., system (2.2). Consider any one of the $2^n$ sets $\Lambda^\alpha$. For any initial condition $\phi = (\phi_1, \phi_2, \ldots, \phi_n) \in \Lambda^\alpha$, we claim that the solution $x(t; \phi)$ remains in $\Lambda^\alpha$ for all $t \ge 0$. If this is not true, there exists a component $x_i(t)$ of $x(t; \phi)$ which is the first (or one of the first) escaping from $\Lambda_i^l$ or $\Lambda_i^r$. Restated, there exist some $i$ and $t_1 > 0$ such that either $x_i(t_1) = \check{c}_i - \varepsilon_0$, $\frac{dx_i}{dt}(t_1) \le 0$, and $x_i(t) \ge \check{c}_i - \varepsilon_0$ for $-\tau \le t \le t_1$, or $x_i(t_1) = \hat{a}_i + \varepsilon_0$, $\frac{dx_i}{dt}(t_1) \ge 0$, and $x_i(t) \le \hat{a}_i + \varepsilon_0$ for $-\tau \le t \le t_1$. For the first case, $x_i(t_1) = \check{c}_i - \varepsilon_0$ and $\frac{dx_i}{dt}(t_1) \le 0$, we derive from (2.2) that

$$\frac{dx_i}{dt}(t_1) = -b_i(\check{c}_i - \varepsilon_0) + \omega_{ii}\, g(x_i(t_1 - \tau_{ii})) + \sum_{j=1, j\ne i}^{n} \omega_{ij}\, g(x_j(t_1 - \tau_{ij})) + J_i \le 0. \tag{3.4}$$

On the other hand, recalling (H2) and the previous descriptions of $\check{c}_i$ and $\varepsilon_0$, we have $\check{f}_i(\check{c}_i - \varepsilon_0) > 0$, which gives

$$-b_i(\check{c}_i - \varepsilon_0) + \omega_{ii}\, g(\check{c}_i - \varepsilon_0) + k_i^- = -b_i(\check{c}_i - \varepsilon_0) + \omega_{ii}\, g(\check{c}_i - \varepsilon_0) - \sum_{j=1, j\ne i}^{n} |\omega_{ij}| + J_i > 0. \tag{3.5}$$

Notice that $t_1$ is the first time for $x_i$ to escape from $\Lambda_i^r$. We have $g(x_i(t_1 - \tau_{ii})) \ge g(\check{c}_i - \varepsilon_0)$, by the monotonicity of the function $g$. In addition, by $\omega_{ii} > 0$ and $|g(\cdot)| \le 1$, we obtain from (3.5) that

$$-b_i(\check{c}_i - \varepsilon_0) + \omega_{ii}\, g(x_i(t_1 - \tau_{ii})) + \sum_{j=1, j\ne i}^{n} \omega_{ij}\, g(x_j(t_1 - \tau_{ij})) + J_i \ge -b_i(\check{c}_i - \varepsilon_0) + \omega_{ii}\, g(\check{c}_i - \varepsilon_0) - \sum_{j=1, j\ne i}^{n} |\omega_{ij}| + J_i > 0,$$

which contradicts (3.4). Hence, $x_i(t) \ge \check{c}_i - \varepsilon_0$ for all $t > 0$. Similar arguments can be employed to show that $x_i(t) \le \hat{a}_i + \varepsilon_0$ for all $t > 0$ in the situation that $x_i(t_1) = \hat{a}_i + \varepsilon_0$ and $\frac{dx_i}{dt}(t_1) \ge 0$. Therefore, $\Lambda^\alpha$ is positively invariant under the flow generated by system (2.2). The assertion for system (2.1) can be justified similarly.

Theorem 3.2. Under conditions (H1), (H2), and (H3), there exist $2^n$ exponentially stable equilibria for system (2.2).

Proof. Consider an equilibrium $\bar{x} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n) \in \Omega^\alpha$ for some $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$, with $\alpha_i =$ "l" or "r", obtained in Theorem 2.2. We consider the single-variable functions $G_i(\cdot)$, defined by

$$G_i(\zeta) = b_i - \zeta - \sum_{j=1}^{n} |\omega_{ij}|\, g'(\xi_j)\, e^{\zeta \tau_{ij}},$$

where $\xi_j = \hat{a}_j + \varepsilon_0$ (resp., $\check{c}_j - \varepsilon_0$) if $\alpha_j =$ "l" (resp., "r"). Then $G_i(0) > 0$ from (3.1) or (H3). Moreover, there exists a constant $\mu > 0$ such that $G_i(\mu) > 0$ for $i = 1, 2, \ldots, n$, due to continuity of $G_i$. Let $x(t) = x(t; \phi)$ be the solution to (2.2) with initial condition $\phi \in \Lambda^\alpha$ defined in (3.3). Under the translation $y(t) = x(t) - \bar{x}$, system (2.2) becomes

$$\frac{dy_i(t)}{dt} = -b_i y_i(t) + \sum_{j=1}^{n} \omega_{ij}\, [g(x_j(t - \tau_{ij})) - g(\bar{x}_j)], \tag{3.6}$$

where $y = (y_1, \ldots, y_n)$. Now, consider the functions $z_i(\cdot)$ defined by

$$z_i(t) = e^{\mu t} |y_i(t)|, \quad i = 1, 2, \ldots, n. \tag{3.7}$$

The domain of definition for $z_i(\cdot)$ is identical to the interval of existence for $y_i(\cdot)$. We shall see in the following computations that the domain can be extended to $[-\tau, \infty)$. Let $\delta > 1$ be an arbitrary real number and let

$$K := \max_{1 \le i \le n} \left\{\sup_{\theta \in [-\tau, 0]} |x_i(\theta) - \bar{x}_i|\right\} > 0. \tag{3.8}$$

It follows from (3.7) and (3.8) that $z_i(t) < K\delta$ for $t \in [-\tau, 0]$ and all $i = 1, 2, \ldots, n$. Next, we claim that

$$z_i(t) < K\delta \quad \text{for all } t > 0,\ i = 1, 2, \ldots, n. \tag{3.9}$$

Suppose this is not the case. Then there are an $i \in \{1, 2, \ldots, n\}$ (say $i = k$) and a $t_1 > 0$ for the first time such that

$$z_i(t) \le K\delta, \quad t \in [-\tau, t_1], \quad i = 1, 2, \ldots, n,\ i \ne k,$$
$$z_k(t) < K\delta, \quad t \in [-\tau, t_1), \qquad z_k(t_1) = K\delta \ \text{ with } \ \frac{d}{dt} z_k(t_1) \ge 0.$$

Note that $z_k(t_1) = K\delta > 0$ implies $y_k(t_1) \ne 0$. Hence $|y_k(t)|$ and $z_k(t)$ are differentiable at $t = t_1$. From (3.6), we derive that

$$\frac{d}{dt}|y_k(t_1)| \le -b_k |y_k(t_1)| + \sum_{j=1}^{n} |\omega_{kj}|\, g'(\varsigma_j)\, |y_j(t_1 - \tau_{kj})| \tag{3.10}$$

for some $\varsigma_j$ between $x_j(t_1 - \tau_{kj})$ and $\bar{x}_j$. Hence, from (3.7) and (3.10),

$$\frac{dz_k(t_1)}{dt} \le \mu e^{\mu t_1} |y_k(t_1)| + e^{\mu t_1}\left[-b_k |y_k(t_1)| + \sum_{j=1}^{n} |\omega_{kj}|\, g'(\varsigma_j)\, |y_j(t_1 - \tau_{kj})|\right]$$
$$\le \mu z_k(t_1) - b_k z_k(t_1) + \sum_{j=1}^{n} |\omega_{kj}|\, g'(\varsigma_j)\, e^{\mu \tau_{kj}}\, z_j(t_1 - \tau_{kj})$$
$$\le -(b_k - \mu)\, z_k(t_1) + \sum_{j=1}^{n} |\omega_{kj}|\, g'(\xi_j)\, e^{\mu \tau_{kj}} \left[\sup_{\theta \in [t_1 - \tau,\, t_1]} z_j(\theta)\right], \tag{3.11}$$

where $\xi_j = \hat{a}_j + \varepsilon_0$ (resp., $\check{c}_j - \varepsilon_0$) if $\alpha_j =$ "l" (resp., "r"). Herein, the invariance property of $\Lambda^\alpha$ in Theorem 3.1 has been applied. Due to $G_k(\mu) > 0$, and since $z_k(t_1) = K\delta$ and $\sup_{\theta \in [t_1-\tau, t_1]} z_j(\theta) \le K\delta$ for all $j$, we obtain

$$0 \le \frac{dz_k(t_1)}{dt} \le -(b_k - \mu)\, z_k(t_1) + \sum_{j=1}^{n} |\omega_{kj}|\, g'(\xi_j)\, e^{\mu \tau_{kj}} \left[\sup_{\theta \in [t_1 - \tau,\, t_1]} z_j(\theta)\right] \le -\left[b_k - \mu - \sum_{j=1}^{n} |\omega_{kj}|\, g'(\xi_j)\, e^{\mu \tau_{kj}}\right] K\delta < 0, \tag{3.12}$$

which is a contradiction. Hence the claim (3.9) holds. Since $\delta > 1$ is arbitrary, by allowing $\delta \to 1^+$, we have $z_i(t) \le K$ for all $t > 0$, $i = 1, 2, \ldots, n$. We then use (3.7) and (3.8) to obtain

$$|x_i(t) - \bar{x}_i| \le e^{-\mu t} \max_{1 \le j \le n} \left\{\sup_{\theta \in [-\tau, 0]} |x_j(\theta) - \bar{x}_j|\right\}$$

for $t > 0$ and all $i = 1, 2, \ldots, n$. Therefore, $x(t)$ is exponentially convergent to $\bar{x}$. This completes the proof.
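A concrete decay rate $\mu$ with $G_i(\mu) > 0$ for all $i$ can be found by bisection, since each $G_i$ is strictly decreasing in $\mu$. The sketch below is not from the paper; the entries of gp stand for $g'(\xi_j)$ with $\xi_j = \hat{a}_j + \varepsilon_0$ or $\check{c}_j - \varepsilon_0$, and the numbers in the example call are hypothetical.

```python
# A minimal sketch (not from the paper): a bisection search for a decay rate
# mu > 0 with G_i(mu) = b_i - mu - sum_j |omega_ij| gp_j exp(mu*tau_ij) > 0
# for all i, as used in the proof of Theorem 3.2.  The inputs are assumed to
# already satisfy G_i(0) > 0, i.e. (H3)/(3.1).
import numpy as np

def decay_rate(b, W, gp, tau, tol=1e-10):
    G = lambda mu: b - mu - (np.abs(W) * gp[None, :] * np.exp(mu * tau)).sum(axis=1)
    assert np.all(G(0.0) > 0.0), "(H3)/(3.1) fails: no positive rate is guaranteed"
    lo, hi = 0.0, float(np.min(b))       # G_i(b_i) < 0, so the good range ends below min(b)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.all(G(mid) > 0.0) else (lo, mid)
    return lo                             # every mu in (0, lo] keeps all G_i(mu) > 0

# Hypothetical numbers, only to show the call:
b = np.array([1.0, 3.0]); W = np.array([[18.0, 5.0], [5.0, 30.0]])
gp = np.array([6.8e-4, 2.6e-3]); tau = np.full((2, 2), 10.0)
print(decay_rate(b, W, gp, tau))
```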

In the following, we employ the theory of the local Lyapunov functional [15] and the Halanay-type inequality [4, 14] to establish other sufficient conditions for asymptotic stability and exponential stability for the equilibria of system (2.2).

Theorem 3.3. There exist $2^n$ asymptotically stable equilibria for system (2.2) under conditions (H1) and (H2) and one of the following conditions:

(H4): $2b_i > \displaystyle\sum_{j=1}^{n} |\omega_{ij}| + [g'(\eta_i)]^2 \sum_{j=1}^{n} |\omega_{ji}|$ for $\eta_i = \hat{a}_i$ and $\check{c}_i$, $i = 1, 2, \ldots, n$;

(H5): $\displaystyle\min_{1 \le i \le n}\left\{2b_i - \sum_{j=1}^{n} |\omega_{ij}|\, g'(\xi_j)\right\} > \max_{1 \le i \le n}\left\{\sum_{j=1}^{n} |\omega_{ji}|\, g'(\eta_i)\right\}$ for $\xi_j = \hat{a}_j$ and $\check{c}_j$, $\eta_i = \hat{a}_i$ and $\check{c}_i$.

Proof. Similarly to (3.1), there exists $\varepsilon_0 > 0$ such that (H4) holds for $\eta_i = \hat{a}_i + \varepsilon_0$, $\check{c}_i - \varepsilon_0$, and (H5) holds for $\xi_j = \hat{a}_j + \varepsilon_0$, $\check{c}_j - \varepsilon_0$, $\eta_i = \hat{a}_i + \varepsilon_0$, $\check{c}_i - \varepsilon_0$, $i = 1, 2, \ldots, n$, by continuity of $g'$. We thus define $\Lambda^\alpha$ as in (3.3). The following computations are reserved for solutions lying entirely within each of the $2^n$ positively invariant regions $\Lambda^\alpha$.

(i) We employ the following Lyapunov functional:

$$V(y)(t) = \sum_{i=1}^{n} y_i^2(t) + \sum_{i=1}^{n} \sum_{j=1}^{n} |\omega_{ij}| \int_{t-\tau_{ij}}^{t} [g(x_j(s)) - g(\bar{x}_j)]^2\, ds,$$

where $y(t) = x(t) - \bar{x}$. By recalling (3.6) and using (H4), we derive

$$\frac{dV(y)(t)}{dt} = 2\sum_{i=1}^{n} y_i(t)\left[-b_i y_i(t) + \sum_{j=1}^{n} \omega_{ij}\,[g(x_j(t-\tau_{ij})) - g(\bar{x}_j)]\right] + \sum_{i=1}^{n}\sum_{j=1}^{n} |\omega_{ij}|\,[g(x_j(t)) - g(\bar{x}_j)]^2 - \sum_{i=1}^{n}\sum_{j=1}^{n} |\omega_{ij}|\,[g(x_j(t-\tau_{ij})) - g(\bar{x}_j)]^2$$
$$\le -2\sum_{i=1}^{n} b_i y_i^2(t) + \sum_{i=1}^{n}\sum_{j=1}^{n} |\omega_{ij}|\, y_i^2(t) + \sum_{i=1}^{n}\sum_{j=1}^{n} |\omega_{ij}|\,[g'(\eta_j)]^2\, y_j^2(t)$$
$$= \sum_{i=1}^{n}\left[-2b_i + \sum_{j=1}^{n} |\omega_{ij}| + [g'(\eta_i)]^2 \sum_{j=1}^{n} |\omega_{ji}|\right] y_i^2(t) < 0.$$

We thus conclude the asymptotic stability for equilibrium $\bar{x}$ via applying the theory of the local Lyapunov functional; cf. [15].

(ii) Recall (3.6), and let

$$W(y)(t) = \frac{1}{2}\sum_{i=1}^{n} y_i^2(t). \tag{3.13}$$

Then,

$$\frac{dW(y)(t)}{dt} = \sum_{i=1}^{n} y_i(t)\left[-b_i y_i(t) + \sum_{j=1}^{n} \omega_{ij}\,[g(x_j(t-\tau_{ij})) - g(\bar{x}_j)]\right]$$
$$\le \sum_{i=1}^{n}\left[-b_i y_i^2(t) + \frac{1}{2}\sum_{j=1}^{n} |\omega_{ij}|\, g'(\varsigma_j)\,\big(y_i^2(t) + y_j^2(t-\tau_{ij})\big)\right]$$
$$\le -\sum_{i=1}^{n}\left[b_i - \frac{1}{2}\sum_{j=1}^{n} |\omega_{ij}|\, g'(\xi_j)\right] y_i^2(t) + \frac{1}{2}\left[\max_{1 \le i \le n}\sum_{j=1}^{n} |\omega_{ji}|\, g'(\eta_i)\right]\sum_{i=1}^{n}\sup_{t-\tau \le s \le t} y_i^2(s)$$
$$\le -\beta\, W(y)(t) + \zeta \sup_{t-\tau \le s \le t} W(y)(s),$$

where

$$\beta := \min_{1 \le i \le n}\left\{2b_i - \sum_{j=1}^{n} |\omega_{ij}|\, g'(\xi_j),\ \xi_j = \hat{a}_j + \varepsilon_0,\ \check{c}_j - \varepsilon_0\right\}, \qquad \zeta := \max_{1 \le i \le n}\left\{\sum_{j=1}^{n} |\omega_{ji}|\, g'(\eta_i),\ \eta_i = \hat{a}_i + \varepsilon_0,\ \check{c}_i - \varepsilon_0\right\}.$$

By (H5), we have $\beta > \zeta > 0$. By using the Halanay inequality, we obtain that

$$W(y)(t) \le \left[\sup_{-\tau \le s \le 0} W(y)(s)\right] e^{-\gamma t} \tag{3.14}$$

for all $t \ge 0$, where $\gamma$ is the unique solution of $\gamma = \beta - \zeta e^{\gamma\tau}$. It follows that

$$\frac{1}{2}\sum_{i=1}^{n} y_i^2(t) \le \left[\sup_{-\tau \le s \le 0}\left(\frac{1}{2}\sum_{i=1}^{n} y_i^2(s)\right)\right] e^{-\gamma t}. \tag{3.15}$$

Hence, the equilibrium $\bar{x}$ is asymptotically stable.

Corollary 3.4. Under conditions (H1), (H2), and (H5), there exist $2^n$ exponentially stable equilibria for system (2.2).

We observe from (2.1) and (2.2) that, for every $i$,

$$F_i(x),\ \tilde{F}_i(x_t) < 0 \quad \text{whenever } x_i > 0 \text{ is sufficiently large},$$
$$F_i(x),\ \tilde{F}_i(x_t) > 0 \quad \text{whenever } x_i < 0 \text{ and } |x_i| \text{ is sufficiently large},$$

since $b_i > 0$ and both $\sum_{j=1}^{n} \omega_{ij}\, g_j(x_j(t)) + J_i$ and $\sum_{j=1}^{n} \omega_{ij}\, g_j(x_j(t - \tau_{ij})) + J_i$ are bounded for any $x$ and $x_t$. Therefore, it can be concluded that every solution of (2.1) and (2.2) is bounded in forward time.

4. Further extension. We shall extend our studies in sections 2 and 3 to more

general activation functions in this section.

4.1. Activation functions in general form. Let us consider the activation functions $\{g_i(\cdot)\}_{1}^{n}$ which are $C^2$ and satisfy

(C): $u_i \le g_i(\xi) \le v_i$, $\quad g_i'(\xi) > 0$, $\quad (\xi - \sigma_i)\, g_i''(\xi) < 0$ for all $\xi \in \mathbb{R}$, $\quad i = 1, 2, \ldots, n$.

Herein, $u_i$, $v_i$, and $\sigma_i$ are constants with $u_i < v_i$, $i = 1, 2, \ldots, n$. Under these circumstances, (H1) can be modified to

(H1): $0 = \inf_{\xi \in \mathbb{R}} g_i'(\xi) < \dfrac{b_i}{\omega_{ii}} < \max_{\xi \in \mathbb{R}} g_i'(\xi)\ (= g_i'(\sigma_i)), \quad i = 1, 2, \ldots, n.$

As in section 2, we define $f_i(\xi) = -b_i\xi + \omega_{ii}\, g_i(\xi) + J_i$.

Lemma 4.1. For $g_i$ in the class (C), under condition (H1), there exist constants $\{p_i\}_{1}^{n}$ and $\{q_i\}_{1}^{n}$ with $p_i < \sigma_i < q_i$ such that $f_i'(p_i) = f_i'(q_i) = 0$ for each $i = 1, 2, \ldots, n$.

We define

$$\hat{f}_i(\xi) = -b_i\xi + \omega_{ii}\, g_i(\xi) + k_i^+, \qquad \check{f}_i(\xi) = -b_i\xi + \omega_{ii}\, g_i(\xi) + k_i^-, \tag{4.1}$$

where

$$k_i^+ := \sum_{j=1, j\ne i}^{n} \rho_j |\omega_{ij}| + J_i, \qquad k_i^- := -\sum_{j=1, j\ne i}^{n} \rho_j |\omega_{ij}| + J_i \tag{4.2}$$

with $\rho_j = \max\{|u_j|, |v_j|\}$. We locate the points $\hat{a}_i < \hat{b}_i < \hat{c}_i$ and $\check{a}_i < \check{b}_i < \check{c}_i$, where $\hat{f}_i(\hat{a}_i) = \hat{f}_i(\hat{b}_i) = \hat{f}_i(\hat{c}_i) = 0$ and $\check{f}_i(\check{a}_i) = \check{f}_i(\check{b}_i) = \check{f}_i(\check{c}_i) = 0$.

Let $\eta \in \mathbb{R}$ and $k \in \{1, \ldots, n\}$ be such that $g_k'(\eta) = \max\{g_i'(\xi) : \xi = \hat{a}_i, \check{c}_i,\ i = 1, 2, \ldots, n\}$. Consider

(H3): $b_i > g_k'(\eta)\left[\omega_{ii} + \displaystyle\sum_{j=1, j\ne i}^{n} |\omega_{ij}|\right], \quad i = 1, 2, \ldots, n.$

Theorem 4.2. Let $g_i$ be in the class (C). Under conditions (H1), (H2), and (H3), there exist $3^n$ equilibria for systems (2.1) and (2.2), with $2^n$ among them being exponentially stable.

4.2. Saturated activation functions. In this subsection, we investigate systems (2.1) and (2.2) with saturated activation functions. In particular, we consider the following continuous functions:

$$g_i(\xi) = \begin{cases} u_i & \text{if } -\infty < \xi \le p_i, \\ \text{increasing} & \text{if } p_i \le \xi \le q_i, \\ v_i & \text{if } q_i \le \xi < \infty, \end{cases}$$

where $p_i, q_i$ are constants with $p_i < q_i$ for $i = 1, 2, \ldots, n$. Such a class of functions includes the piecewise linear function with saturations:

$$g_i(\xi) = \begin{cases} u_i & \text{if } -\infty < \xi \le p_i, \\ u_i + \dfrac{v_i - u_i}{q_i - p_i}\,(\xi - p_i) & \text{if } p_i \le \xi \le q_i, \\ v_i & \text{if } q_i \le \xi < \infty \end{cases}$$

for each $i$. Typical graphs for these functions are depicted in Figures 3(a) and (c). With such activation functions, existence of multiple equilibria for (2.1) and (2.2) can be obtained under the condition

(Hs): $b_i > 0, \quad -b_i p_i + \omega_{ii} u_i + k_i^+ < 0, \quad -b_i q_i + \omega_{ii} v_i + k_i^- > 0, \quad i = 1, 2, \ldots, n,$

where $k_i^+$, $k_i^-$ are defined as in (4.2). We define $\hat{f}_i$, $\check{f}_i$ as in (4.1). The graphs of $\hat{f}_i$ and $\check{f}_i$ are depicted in Figures 3(b) and (d). Under condition (Hs), we also locate the points $\hat{a}_i < \hat{b}_i < \hat{c}_i$ and $\check{a}_i < \check{b}_i < \check{c}_i$, where $\hat{f}_i(\hat{a}_i) = \hat{f}_i(\hat{b}_i) = \hat{f}_i(\hat{c}_i) = 0$ and $\check{f}_i(\check{a}_i) = \check{f}_i(\check{b}_i) = \check{f}_i(\check{c}_i) = 0$.

Note that we do not need differentiability at the corner points $p_i, q_i$ of $g_i$ in our analysis; moreover, $g_i'(\xi) = 0$ for $\xi < p_i$ and $\xi > q_i$. Thus, (H3) is already satisfied if $b_i > 0$ for $i = 1, 2, \ldots, n$. With these formulations, we can derive that there exist $3^n$ equilibria for systems (2.1) and (2.2), and that $2^n$ of them are exponentially stable under condition (Hs).
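Condition (Hs) is a finite set of inequalities and can be checked directly. The sketch below (not from the paper) implements the piecewise linear saturated activation of this subsection together with a test of (Hs); the two-neuron data at the bottom are hypothetical.

```python
# A minimal sketch (not from the paper): the piecewise linear activation with
# saturations from section 4.2 and a direct check of condition (Hs).
import numpy as np

def g_sat(xi, u, v, p, q):
    """u for xi <= p, linear in between, v for xi >= q (with p < q, u < v)."""
    return np.clip(u + (v - u) * (xi - p) / (q - p), u, v)

def condition_Hs(b, W, J, u, v, p, q):
    """(Hs): b_i > 0, -b_i p_i + w_ii u_i + k_i^+ < 0, -b_i q_i + w_ii v_i + k_i^- > 0."""
    rho = np.maximum(np.abs(u), np.abs(v))
    off = np.abs(W) @ rho - np.abs(np.diag(W)) * rho        # sum_{j != i} rho_j |w_ij|
    k_plus, k_minus = off + J, -off + J                      # as in (4.2)
    left = -b * p + np.diag(W) * u + k_plus
    right = -b * q + np.diag(W) * v + k_minus
    return bool(np.all(b > 0) and np.all(left < 0) and np.all(right > 0))

# Hypothetical two-neuron data, with u = -1, v = 1, p = -1, q = 1 as in (5.1):
b = np.array([1.0, 1.0]); W = np.array([[3.0, 0.5], [0.5, 3.0]]); J = np.array([0.0, 0.0])
ones = np.ones(2)
print(g_sat(np.array([-2.0, 0.3]), -1.0, 1.0, -1.0, 1.0))   # [-1.  0.3]
print(condition_Hs(b, W, J, -ones, ones, -ones, ones))      # True
```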

4.3. Unbounded activation functions. Our theory can also be extended to certain unbounded activation functions with controlled slopes, for example, the activation functions $g_i$ with bounded slopes in Figure 4. Herein, we require that the slopes $m_i^r$ of the right-hand and $m_i^l$ of the left-hand parts of $g_i$ satisfy

$$b_i > \omega_{ii}\, m_i^r, \qquad b_i > \omega_{ii}\, m_i^l \quad \text{for } i = 1, \ldots, n.$$

5. Numerical illustrations. In this section, we present two examples that illustrate the present theory.

Fig. 3. (a) The graph for a continuous activation function $g_i$ with saturations. (b) The graphs for $\hat{f}_i$ and $\check{f}_i$ induced from the activation function in (a). (c) The graph for a piecewise linear activation function $g_i$ with saturations. (d) The graphs for $\hat{f}_i$ and $\check{f}_i$ induced from the activation function in (c).

Fig. 4. (a) The graph for an unbounded piecewise linear activation function. (b) The graph for an unbounded activation function with bounded slopes.

Example 5.1. Consider the two-dimensional neural network

$$\frac{dx_1(t)}{dt} = -x_1(t) + 18\, g_1(x_1(t-10)) + 5\, g_2(x_2(t-10)) - 9,$$
$$\frac{dx_2(t)}{dt} = -3x_2(t) + 5\, g_1(x_1(t-10)) + 30\, g_2(x_2(t-10)) - 15,$$

Table 1
Local extreme points and zeros of $\hat{f}_1$, $\check{f}_1$, $\hat{f}_2$, $\check{f}_2$.

$p_1 = -1.762747$, $q_1 = 1.762747$; $\hat{a}_1 = -3.993889$, $\hat{b}_1 = -0.757751$, $\hat{c}_1 = 14$; $\check{a}_1 = -14$, $\check{b}_1 = 0.757751$, $\check{c}_1 = 3.993889$
$p_2 = -1.443635$, $q_2 = 1.443635$; $\hat{a}_2 = -3.320288$, $\hat{b}_2 = -0.452309$, $\hat{c}_2 = 6.666650$; $\check{a}_2 = -6.666650$, $\check{b}_2 = 0.452309$, $\check{c}_2 = 3.320288$

Fig. 5. Illustrations for the dynamics in Example 5.1.

where $g_1(x) = g_2(x) = g(x)$ in (2.4) with $\varepsilon = 0.5$. A computation gives

$$\hat{f}_1(x_1) = -x_1 + 18\, g(x_1) - 4, \qquad \check{f}_1(x_1) = -x_1 + 18\, g(x_1) - 14,$$
$$\hat{f}_2(x_2) = -3x_2 + 30\, g(x_2) - 10, \qquad \check{f}_2(x_2) = -3x_2 + 30\, g(x_2) - 20.$$

Herein, the parameters satisfy our conditions in Theorem 3.2:

Condition (H1): $0 < \dfrac{b_1\varepsilon}{\omega_{11}} = \dfrac{1}{36} < \dfrac{1}{4}$, $\quad 0 < \dfrac{b_2\varepsilon}{\omega_{22}} = \dfrac{1}{20} < \dfrac{1}{4}$.

Condition (H2): $\hat{f}_1(p_1) = -1.722534 < 0$, $\check{f}_1(q_1) = 1.722534 > 0$, $\hat{f}_2(p_2) = -4.085501 < 0$, $\check{f}_2(q_2) = 4.085501 > 0$.

Condition (H3): $b_1 = 1 > 0.025246 = \omega_{11}\, g'(\eta_1) + |\omega_{12}|\, g'(\eta_2)$, $\quad b_2 = 3 > 0.081566 = |\omega_{21}|\, g'(\eta_1) + \omega_{22}\, g'(\eta_2)$,

where $\eta_1 = \pm 3.993889$ and $\eta_2 = \pm 3.320288$ are defined in (2.9). Local extreme points and zeros of $\hat{f}_1$, $\check{f}_1$, $\hat{f}_2$, $\check{f}_2$ are listed in Table 1. The dynamics of this system are illustrated

in Figure 5, where evolutions of 56 initial conditions have been tracked. The constant initial conditions are plotted in red dots, and the time-dependent initial conditions are plotted in purple curves. The evolutions of components $x_1(t)$ and $x_2(t)$ are depicted in Figures 6 and 7, respectively. There are four exponentially stable equilibria in the system, as confirmed by our theory. The simulations demonstrate the convergence to these four equilibria from initial functions $\phi$ lying in the basin of the respective equilibrium.

Fig. 6. Evolution of state variable $x_1(t)$ in Example 5.1.

Fig. 7. Evolution of state variable $x_2(t)$ in Example 5.1.
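The convergence just described is easy to reproduce. The sketch below (not from the paper) integrates Example 5.1 with a history-buffer Euler scheme. Note that the right-hand side of the second equation is truncated in this copy of the report; the form used here ($b_2 = 3$, $\omega_{21} = 5$, $\omega_{22} = 30$, $J_2 = -15$) is an assumption reconstructed from the stated $\hat{f}_2$, $\check{f}_2$ and condition (H3), not a quotation of the paper.

```python
# A minimal sketch (not from the paper) that reruns Example 5.1 with a simple
# history-buffer Euler scheme.  The coefficients of the second equation are an
# assumption reconstructed from f_hat_2, f_check_2 and (H3) (see note above).
import numpy as np

eps, tau, dt = 0.5, 10.0, 0.005
g = lambda x: 1.0 / (1.0 + np.exp(-x / eps))
b = np.array([1.0, 3.0])
W = np.array([[18.0, 5.0],
              [5.0, 30.0]])            # omega_21 = 5 assumed (see note above)
J = np.array([-9.0, -15.0])

lag = int(round(tau / dt))

def run(x_init, t_end=200.0):
    steps = int(t_end / dt)
    hist = np.empty((lag + 1 + steps, 2))
    hist[: lag + 1] = np.asarray(x_init, float)      # constant initial history
    for k in range(lag + 1, lag + 1 + steps):
        x, x_del = hist[k - 1], hist[k - 1 - lag]    # all tau_ij = 10 here
        hist[k] = x + dt * (-b * x + W @ g(x_del) + J)
    return hist[-1]

# Constant initial conditions taken in the four invariant regions Lambda^alpha:
for x0 in ([6.0, 5.0], [6.0, -5.0], [-6.0, 5.0], [-6.0, -5.0]):
    print(x0, "->", run(x0))   # each settles on one of the four stable equilibria
```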

Example 5.2. In this example, we simulate the neural network

$$\frac{dx_1(t)}{dt} = -x_1(t) + 18\, g_1(x_1(t-10)) + 11\, g_2(x_2(t-10)) + 1,$$
$$\frac{dx_2(t)}{dt} = -3x_2(t) + 11\, g_1(x_1(t-10)) + 30\, g_2(x_2(t-10)) + 4,$$

with the output function $g_i(\xi) = h(\xi)$ for each $i$, where

$$h(\xi) = \frac{1}{2}\big(|\xi + 1| - |\xi - 1|\big). \tag{5.1}$$

The parameters also satisfy the conditions in our formulations with such an output function. We demonstrate the dynamics as well as the evolutions of components $x_1(t)$, $x_2(t)$ for the system in Figures 8, 9, and 10, respectively.
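With the saturated output (5.1), equilibria whose components all lie in the saturated ranges satisfy a linear equation, so stable-equilibrium candidates for Example 5.2 can be written down directly. The following check is not from the paper; it is a quick consistency computation under that saturation assumption.

```python
# A minimal sketch (not from the paper): with the saturated output (5.1),
# h(x) = +1 on [1, inf) and -1 on (-inf, -1], so any equilibrium whose
# components all sit in the saturated ranges solves the linear equation
# b_i x_i = sum_j w_ij s_j + J_i with s_j in {-1, +1}.  The script computes
# these candidates for Example 5.2 and keeps those consistent with their
# assumed saturation pattern.
import itertools
import numpy as np

b = np.array([1.0, 3.0])
W = np.array([[18.0, 11.0], [11.0, 30.0]])
J = np.array([1.0, 4.0])

for s in itertools.product([-1.0, 1.0], repeat=2):
    s = np.array(s)
    x = (W @ s + J) / b                      # candidate equilibrium
    if np.all(np.sign(x) == s) and np.all(np.abs(x) >= 1.0):
        print("pattern", s, "-> equilibrium", x)
# All four sign patterns survive, consistent with 2^2 stable equilibria.
```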

Fig. 8. Illustrations for the dynamics in Example 5.2.

Fig. 9. Evolution of state variable $x_1(t)$ in Example 5.2.

6. Discussions. Our approach can also be adapted to the cellular neural networks with delays. The cellular neural networks (CNNs) were introduced by Chua and Yang [8] in 1988. A model called the delayed cellular neural network [24] is given by

$$\frac{dx_i(t)}{dt} = -x_i(t) + \sum_{j \in N_r(i)} a_{ij}\, h(x_j(t)) + \sum_{j \in N_r(i)} b_{ij}\, h(x_j(t-\tau)) + J_i, \tag{6.1}$$

where $N_r(i) = \{i-1, i, i+1\}$ if $r = 1$. The standard activation function for such a network is the piecewise linear $h$ defined in (5.1). Notably, (6.1) is a system of CNNs with cells coupled in the one-dimensional manner, and its local coupling structure is expressed in the equations. Global exponential stability of a single equilibrium for (6.1) has been studied by many researchers, for instance, the authors of [3, 20]. The CNNs can be built by multidimensional couplings among cells. Since there are at most finitely many cells, the CNNs can always be rewritten in a one-dimensional coupling form by renaming the indices [28]. The system can then be written in a form similar to (1.1). Such an arrangement, however, destroys the local connection representation. While previous studies on multistability for the CNNs without delays [17, 26, 27] employed the structure of local connections among cells of CNNs, our approach does not rely on such a structure. Moreover, our theory generalizes the multistability results to the CNNs with delays (6.1).

Fig. 10. Evolution of state variable $x_2(t)$ in Example 5.2.

In this investigation, we have obtained the existence of $2^n$ stable stationary solutions for recurrent neural networks comprised of $n$ neurons, with delays and without delays. The theory is primarily based upon an observation on the structures of the equations. It is thus rather general and can be applied to at least the Hopfield-type neural networks and the cellular neural networks. The analysis is valid for the networks with various activation functions, including the typical sigmoidal ones and the saturated linear ones, as well as some unbounded activation functions. In fact, our formulation depends on the configuration of the activation functions instead of the precise form of the functions. The theorems thus developed are pertinent to neural network theory.

Stable periodic orbits and limit cycle attractors are also important for memory storage and other neural activities. By similar analysis, we can also establish existence of multiple limit cycles for systems (1.1) and (6.1) with periodic inputs $J_i = J_i(t) = J_i(t + T)$. The result will be reported in another article. The approach in this presentation can be adapted to discrete-time neural networks as well.

The major discussions on neural networks have been centered around monostability, in an abundance of articles in the areas of physics, information sciences, electrical engineering, and mathematics. Multistability in neural networks is, however, essential in numerous applications such as content-addressable memory storage and pattern recognition. Recently, further application potentials of multistability have been found in decision making, digital selection, and analogue amplification [18].

We have exploited further interesting structures of Hopfield-type neural networks in this study. Our investigations have provided computable parameter conditions for multistable dynamics in the recurrent neural networks and are expected to contribute toward practical applications.


Acknowledgment. The authors are grateful to the reviewers for their suggestions on improving the presentation.

REFERENCES

[1] J. Bélair, S. A. Campbell, and P. Van Den Driessche, Frustration, stability, and delay-induced oscillations in a neural network model, SIAM J. Appl. Math., 56 (1996), pp. 245–255.
[2] S. A. Campbell, R. Edwards, and P. Van Den Driessche, Delayed coupling between two neural network loops, SIAM J. Appl. Math., 65 (2004), pp. 316–335.
[3] J. Cao, New results concerning exponential stability and periodic solutions of delayed cellular neural networks, Phys. Lett. A, 307 (2003), pp. 136–147.
[4] J. Cao and J. Wang, Absolute exponential stability of recurrent neural networks with Lipschitz-continuous activation functions and time delays, Neural Netw., 17 (2004), pp. 379–390.
[5] T. Chen, Global exponential stability of delayed Hopfield neural networks, Neural Netw., 14 (2001), pp. 977–980.
[6] S. S. Chen and C. W. Shih, Transversal homoclinic orbits in a transiently chaotic neural network, Chaos, 12 (2002), pp. 654–670.
[7] L. O. Chua, CNN: A Paradigm for Complexity, World Scientific, River Edge, NJ, 1998.
[8] L. O. Chua and L. Yang, Cellular neural networks: Theory, IEEE Trans. Circuits and Systems, 35 (1988), pp. 1257–1272.
[9] M. Forti, On global asymptotic stability of a class of nonlinear systems arising in neural network theory, J. Differential Equations, 113 (1994), pp. 246–264.
[10] M. Forti and A. Tesi, New conditions for global stability of neural networks with application to linear and quadratic programming problems, IEEE Trans. Circuits Systems I Fund. Theory Appl., 42 (1995), pp. 354–366.
[11] J. Foss, A. Longtin, B. Mensour, and J. Milton, Multistability and delayed recurrent loops, Phys. Rev. Lett., 76 (1996), pp. 708–711.
[12] K. Gopalsamy, Stability and Oscillations in Delay Differential Equations of Population Dynamics, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
[13] K. Gopalsamy and X. He, Stability in asymmetric Hopfield nets with transmission delays, Phys. D, 76 (1994), pp. 344–358.
[14] A. Halanay, Differential Equations, Academic Press, New York, 1966.
[15] J. Hale and S. V. Lunel, Introduction to Functional-Differential Equations, Springer-Verlag, New York, 1993.
[16] J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA, 81 (1984), pp. 3088–3092.
[17] J. Juang and S.-S. Lin, Cellular neural networks: Mosaic pattern and spatial chaos, SIAM J. Appl. Math., 60 (2000), pp. 891–915.
[18] R. L. T. Hahnloser, On the piecewise analysis of networks of linear threshold neurons, Neural Netw., 11 (1998), pp. 691–697.
[19] X. Liao, G. Chen, and E. N. Sanchez, Delay-dependent exponential stability analysis of delayed neural networks: An LMI approach, Neural Netw., 15 (2002), pp. 855–866.
[20] S. Mohamad and K. Gopalsamy, Exponential stability of continuous-time and discrete-time cellular neural networks with delays, Appl. Math. Comput., 135 (2003), pp. 17–38.
[21] M. Morita, Associative memory with non-monotone dynamics, Neural Netw., 6 (1993), pp. 115–126.
[22] L. Olien and J. Bélair, Bifurcations, stability, and monotonicity properties of a delayed neural network model, Phys. D, 102 (1997), pp. 349–363.
[23] J. Peng, H. Qiao, and Z. B. Xu, A new approach to stability of neural networks with time-varying delays, Neural Netw., 15 (2002), pp. 95–103.
[24] T. Roska and L. O. Chua, Cellular neural networks with non-linear and delay-type template elements and non-uniform grids, Int. J. Circuit Theory Appl., 20 (1992), pp. 469–481.
[25] L. P. Shayer and S. A. Campbell, Stability, bifurcation, and multistability in a system of two coupled neurons with multiple time delays, SIAM J. Appl. Math., 61 (2000), pp. 673–700.
[26] C.-W. Shih, Pattern formation and spatial chaos for cellular neural networks with asymmetric templates, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 8 (1998), pp. 1907–1936.
[27] C.-W. Shih, Influence of boundary conditions on pattern formation and spatial chaos in lattice systems, SIAM J. Appl. Math., 61 (2000), pp. 335–368.
[28] C.-W. Shih and C. W. Weng, On the template corresponding to cycle-symmetric connectivity in cellular neural networks, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 12 (2002), pp. 2957–2966.
[29] P. Van Den Driessche and X. Zou, Global attractivity in delayed Hopfield neural network models, SIAM J. Appl. Math., 58 (1998), pp. 1878–1890.
[30] P. Van Den Driessche, J. Wu, and X. Zou, Stabilization role of inhibitory self-connections in a delayed neural network, Phys. D, 150 (2001), pp. 84–90.
[31] D. Xu, H. Zhao, and H. Zhu, Global dynamics of Hopfield neural networks involving variable delays, Comput. Math. Appl., 42 (2001), pp. 39–45.
[32] J. F. Yang and C. M. Chen, Winner-take-all neural networks using the highest threshold, IEEE Trans. Neural Networks, 11 (2000), pp. 194–199.
[33] Z. Yi, P. A. Heng, and P. F. Fung, Winner-take-all discrete recurrent neural networks, IEEE Trans. Circuits Syst. II, 47 (2000), pp. 1584–1589.
[34] J. Zhang and X. Jin, Global stability analysis in delayed Hopfield neural network models, Neural Netw., 13 (2002), pp. 745–753.
[35] H. Zhao, Global asymptotic stability of Hopfield neural network involving distributed delays, Neural Netw., 17 (2004), pp. 47–53.
