Contents lists available at ScienceDirect
Neurocomputing
journal homepage: www.elsevier.com/locate/neucom
Neural networks based on three classes of NCP-functions for solving nonlinear complementarity problems
Jan Harold Alcantara, Jein-Shan Chen*
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
Article info
Article history: Received 5 February 2019; Revised 15 April 2019; Accepted 29 May 2019; Available online 31 May 2019. Communicated by Dr. Q. Wei.
Keywords: NCP-function; Natural residual function; Complementarity problem; Neural network; Stability

Abstract
In this paper, we consider a family of neural networks for solving nonlinear complementarity problems (NCP). The neural networks are constructed from the merit functions based on three classes of NCP-functions: the generalized natural residual function and its two symmetrizations. We first characterize the stationary points of the induced merit functions. The growth behavior of the complementarity functions is also described, as this plays an important role in describing the level sets of the merit functions. In addition, the stability of the steepest descent-based neural network model for the NCP is analyzed. We provide numerical simulations to illustrate the theoretical results, and also compare the proposed neural networks with existing neural networks based on other well-known NCP-functions. Numerical results indicate that the performance of the neural network is better when the parameter p associated with the NCP-function is smaller. The efficiency of the neural networks in solving NCPs is also reported.
© 2019 Elsevier B.V. All rights reserved.
1. Introduction and motivation

Given a function F: IR^n → IR^n, the nonlinear complementarity problem (NCP) is to find a point x ∈ IR^n such that

x ≥ 0, F(x) ≥ 0, ⟨x, F(x)⟩ = 0, (1)

where ⟨·,·⟩ is the Euclidean inner product and ≥ means the component-wise order on IR^n. Throughout this paper, we assume that F is continuously differentiable, and let F = (F_1, ..., F_n)^T with F_i: IR^n → IR for i = 1, ..., n.

For decades, substantial research effort has been devoted to the study of nonlinear complementarity problems because of their wide range of applications in areas such as optimization, operations research, engineering, and economics [8,9,12,48]. Source problems of NCPs include models of equilibrium problems in the aforementioned fields and complementarity conditions in constrained optimization problems [9,12].
There are many methods for solving the NCP (1). In general, these solution methods may be categorized into two classes, depending on whether or not they make use of a so-called NCP-function (see Definition 2.1). Techniques that usually exploit NCP-functions include the merit function approach [11,19,26], nonsmooth Newton methods [10,45], smoothing methods [4,31], and regularization approaches [17,37]. On the other hand, interior-point methods [29,30] and the proximal point algorithm [33] are well-known approaches for solving (1) which do not utilize NCP-functions in general. The excellent monograph of Facchinei and Pang [9] provides a thorough survey and discussion of solution methods for complementarity problems and variational inequalities.

The research is supported by Ministry of Science and Technology, Taiwan.
* Corresponding author.
E-mail addresses: 80640005s@ntnu.edu.tw (J.H. Alcantara), jschen@math.ntnu.edu.tw (J.-S. Chen).
The above numerical approaches can solve the NCP efficiently; however, it is often desirable in scientific and engineering applications to obtain a real-time solution. One promising approach that can provide real-time solutions is the use of neural networks, which were first introduced in optimization by Hopfield and Tank in the 1980s [13,38]. Neural networks based on circuit implementation exhibit real-time processing. Furthermore, prior research shows that neural networks can be used efficiently in linear and nonlinear programming, variational inequalities, and nonlinear complementarity problems [2,7,14,15,20,23,42–44,47,49], as well as in other fields [25,28,34,36,39,40,46,50,51,55].

Motivated by the preceding discussion, we construct a new family of neural networks based on recently discovered discrete-type NCP-functions to solve NCPs. Neural networks based on the Fischer-Burmeister (FB) function [23] and the generalized Fischer-Burmeister function [2] have already been studied. The latter NCP-functions, which have been extensively used in different solution methods, are strongly semismooth functions, which often provide efficient performance [9]. In this paper, we explore the use of smooth NCP-functions as building blocks of the proposed neural networks. Moreover, the NCP-functions we consider herein have piecewise-defined formulas, as opposed to the FB and generalized FB functions, which have simple formulations. In turn, the subsequent analysis is more complicated. Nevertheless, we show that the proposed neural networks may offer promising results too. The analysis and numerical reports in this paper, on the other hand, pave the way for the use of piecewise-defined NCP-functions.
https://doi.org/10.1016/j.neucom.2019.05.078
This paper is organized as follows: In Section 2, we revisit equivalent reformulations of the NCP (1) using NCP-functions. We also elaborate on the purpose and limitations of the paper. In Section 3, we review some mathematical preliminaries related to nonlinear mappings and stability analysis. We also summarize some important properties of the three classes of NCP-functions we use in constructing the neural networks. In Section 4, we describe the general properties of the neural networks, which include the characterization of stationary points of the induced merit functions. In Section 5, we look at the growth behavior of the three classes of NCP-functions considered. This result will be used to prove the boundedness of the level sets of the induced merit functions. We also prove some stability properties of the neural networks. In Section 6, we present the results of our numerical simulations. Conclusions and some recommendations for future studies are discussed in Section 7.
Throughout the paper, IR^n denotes the space of n-dimensional real column vectors, IR^{m×n} denotes the space of m×n real matrices, and A^T denotes the transpose of A ∈ IR^{m×n}. For any differentiable function f: IR^n → IR, ∇f(x) means the gradient of f at x. For any differentiable mapping F = (F_1, ..., F_m)^T: IR^n → IR^m, ∇F(x) = [∇F_1(x) ··· ∇F_m(x)] ∈ IR^{n×m} denotes the transposed Jacobian of F at x. We assume that p is an odd integer greater than 1, unless otherwise specified.

2. Overview and contributions of the paper
In this section, we give an overview of this research. We begin by looking at equivalent reformulations of the nonlinear complementarity problem (1) using NCP-functions, which are defined as follows.

Definition 2.1. A function φ: IR × IR → IR is called an NCP-function if it satisfies

φ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0.

The well-known natural-residual function given by

φ_NR(a, b) = a − (a − b)_+ = min{a, b}
is an example of an NCP-function, which is widely used in solving the NCP. Recently, in [3], a discrete-type generalization of φ_NR was proposed, described by

φ_NR^p(a, b) = a^p − (a − b)_+^p, where p > 1 is an odd integer. (2)

It is shown in [3] that φ_NR^p is twice continuously differentiable. However, its surface is not symmetric, which may result in difficulties in designing and analyzing solution methods [16]. To overcome this, two symmetrizations of φ_NR^p are presented in [1]. A natural symmetrization of φ_NR^p is given by

φ_{S-NR}^p(a, b) = { a^p − (a − b)^p   if a > b,
                     a^p (= b^p)       if a = b,
                     b^p − (b − a)^p   if a < b. (3)
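To make definitions (2) and (3) concrete, the following sketch (our own illustration, not part of the paper) implements both functions for p = 3 and spot-checks the defining NCP-function property φ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0 at a few sample points.

```python
def phi_nr_p(a, b, p=3):
    # Generalized natural residual function (2): a^p - (a - b)_+^p, p odd > 1.
    return a**p - max(a - b, 0.0)**p

def phi_snr_p(a, b, p=3):
    # Natural symmetrization (3), defined piecewise on a > b, a = b, a < b.
    if a > b:
        return a**p - (a - b)**p
    if a == b:
        return a**p            # equals b^p on the diagonal
    return b**p - (b - a)**p

# Complementary pairs (a >= 0, b >= 0, ab = 0) should be zeros of both
# functions; non-complementary pairs should not be.
for f in (phi_nr_p, phi_snr_p):
    assert f(2.0, 0.0) == 0.0 and f(0.0, 3.0) == 0.0 and f(0.0, 0.0) == 0.0
    assert f(1.0, 1.0) != 0.0      # a, b > 0 but ab != 0
    assert f(-1.0, 0.0) != 0.0     # a < 0
```

The piecewise branches of (3) are what make the subsequent analysis harder than for (2), even though both agree with min{a, b}-type behavior at complementary pairs.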
The above NCP-function is symmetric, but is only differentiable on {(a, b) | a ≠ b or a = b = 0}. It was, however, shown in [16] that φ_{S-NR}^p is semismooth and directionally differentiable. The second symmetrization of φ_NR^p is described by

ψ_{S-NR}^p(a, b) = { a^p b^p − (a − b)^p b^p   if a > b,
                     a^p b^p (= a^{2p})        if a = b,
                     a^p b^p − (b − a)^p a^p   if a < b, (4)

which possesses both differentiability and symmetry. The functions φ_NR^p, φ_{S-NR}^p and ψ_{S-NR}^p are three of the four recently discovered discrete-type families of NCP-functions, together with the discrete-type generalization of the Fischer-Burmeister function given by

φ_{D-FB}^p(a, b) = (√(a² + b²))^p − (a + b)^p. (5)

A comprehensive discussion of their properties is presented in [16].

To see how an NCP-function φ can be useful in solving the NCP (1), we define Φ: IR^n → IR^n by

Φ(x) = (φ(x_1, F_1(x)), ..., φ(x_n, F_n(x)))^T. (6)

It is easy to see that x* solves the NCP (1) if and only if Φ(x*) = 0 (see also Proposition 4.1(a)). Thus, the NCP is equivalent to the nonlinear system of equations Φ(x) = 0. Meanwhile, if φ is an NCP-function, then ψ: IR × IR → IR_+ given by

ψ(a, b) := (1/2)|φ(a, b)|² (7)

is also an NCP-function. Accordingly, if we define Ψ: IR^n → IR_+ by

Ψ(x) = Σ_{i=1}^n ψ(x_i, F_i(x)) = (1/2)‖Φ(x)‖², (8)

then the NCP can be reformulated as the minimization problem min_{x∈IR^n} Ψ(x). Hence, Ψ given by (8) is a merit function for the NCP; that is, its global minimizers coincide with the solutions of the NCP. Consequently, it is only natural to consider the steepest descent-based neural network
dx(t)/dt = −ρ∇Ψ(x(t)), x(t_0) = x_0, (9)

where ρ > 0 is a time-scaling factor. The above neural network (9) is also motivated by the ones considered in [23] and in [2], where the NCP-functions used are the Fischer-Burmeister (FB) function given by

φ_FB(a, b) = √(a² + b²) − (a + b), (10)

and the generalized Fischer-Burmeister function given by

φ_FB^p(a, b) = ‖(a, b)‖_p − (a + b), where p ∈ (1, +∞), (11)

respectively. We aim to compare the neural networks based on the generalized natural-residual functions (2), (3) and (4) with the well-studied networks based on the FB functions (10) and (11).

One of the contributions of this paper lies in establishing the theoretical properties of the generalized natural residual functions.
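As a rough software illustration of the gradient flow (9) (the paper realizes it as a circuit; the Euler discretization, step size, and test problem below are our own assumptions), one can integrate dx/dt = −ρ∇Ψ(x) for the merit function (8) built from φ_NR^p, here using a numerical gradient for simplicity:

```python
def phi(a, b, p=3):
    # Generalized natural residual NCP-function (2).
    return a**p - max(a - b, 0.0)**p

def F(x):
    # A small illustrative NCP: F(x) = (x1 - 0.5, x2 + 1), solution x* = (0.5, 0).
    return [x[0] - 0.5, x[1] + 1.0]

def merit(x):
    # Merit function (8): Psi(x) = (1/2) * sum_i phi(x_i, F_i(x))^2.
    Fx = F(x)
    return 0.5 * sum(phi(x[i], Fx[i]) ** 2 for i in range(len(x)))

def grad(f, x, h=1e-6):
    # Central-difference gradient; a hardware network would use formula (13).
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def run(x0, rho=1.0, dt=1e-3, steps=5000):
    # Explicit Euler discretization of dx/dt = -rho * grad Psi(x(t)).
    x = list(x0)
    for _ in range(steps):
        g = grad(merit, x)
        x = [x[i] - rho * dt * g[i] for i in range(len(x))]
    return x

x0 = [1.0, 1.0]
x_end = run(x0)
assert merit(x_end) < merit(x0)  # Psi is nonincreasing along the trajectory
```

The assertion mirrors the descent property of (9) (Ψ is nonincreasing along trajectories; see Proposition 4.1(b)); convergence speed and the role of p are studied later in the paper.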
These are fundamental in designing NCP-based solution methods, and in this paper, we use the neural network approach. Basic properties of these functions are already presented in [16]. The purpose of this paper is to elaborate some more properties and applications of the newly discovered discrete-type classes of NCP-functions given by (2), (3) and (4). Specifically, we look at the properties of their induced merit functions given by (8). First, it is important for us to determine the correspondence between the solutions of the NCP (1) and the stationary points of Ψ. From the above discussion (see also Proposition 4.1(d)), we already know that an NCP solution is a stationary point. On the other hand, we also want to determine which stationary points of Ψ are solutions to the NCP. For certain NCP-functions such as the Mangasarian and Solodov function [19], the FB function [11] and the generalized FB function [5], a stationary point of the merit function was shown to be a solution to the NCP when F is monotone or a P_0-function. It should be pointed out that these NCP-functions possess the following nice properties:

(P1) ∇_a ψ(a, b) · ∇_b ψ(a, b) ≥ 0 for all (a, b) ∈ IR²; and
(P2) For all (a, b) ∈ IR², ∇_a ψ(a, b) = 0 ⟺ ∇_b ψ(a, b) = 0 ⟺ φ(a, b) = 0.

However, these properties are not possessed by φ_NR^p, φ_{S-NR}^p and ψ_{S-NR}^p, which leads to some difficulties in the subsequent analysis. Hence, we seek other conditions which will guarantee that a stationary point is an NCP solution. Furthermore, we also want to look at the growth behavior of the functions (2), (3) and (4). This will play a key role in characterizing the level sets of the induced merit functions. It must be noted that since the NCP-functions φ_{S-NR}^p and ψ_{S-NR}^p are piecewise-defined, the analyses of their growth behavior and of the properties of their induced merit functions are more difficult, as compared with the commonly used FB functions (10) and (11), which have simple formulations.

Another purpose of this paper is to discuss the stability properties of the neural networks based on φ_NR^p, φ_{S-NR}^p and ψ_{S-NR}^p. We further look into different examples to see the influence of p on the convergence of trajectories of the neural network to the NCP solution. Finally, we compare the numerical performance of these three types of neural networks with two well-studied neural networks based on the FB function [23] and the generalized FB function [2].
We recall that a solution x* is said to be degenerate if {i | x*_i = F_i(x*) = 0} is not empty. Note that if x* is degenerate and φ is differentiable at x*, then ∇Φ(x*) is singular. Consequently, one should not expect locally fast convergence of numerical methods based on smooth NCP-functions if the computed solution is degenerate [9,18]. Because of the differentiability of φ_NR^p, φ_{S-NR}^p and ψ_{S-NR}^p on the feasible region of the NCP, it is also expected that the convergence of the trajectories of the neural network (9) to a degenerate solution could be slow. Hence, in this paper, we give particular attention to nondegenerate NCPs.
3. Preliminaries

In this section, we review some special nonlinear mappings, some properties of φ_NR^p, φ_{S-NR}^p and ψ_{S-NR}^p, as well as some tools from stability theory in dynamical systems that will be crucial in our analysis. We begin by recalling concepts related to nonlinear mappings.
Definition 3.1. Let F = (F_1, ..., F_n)^T: IR^n → IR^n. Then, the mapping F is said to be

(a) monotone if ⟨x − y, F(x) − F(y)⟩ ≥ 0 for all x, y ∈ IR^n;
(b) strictly monotone if ⟨x − y, F(x) − F(y)⟩ > 0 for all x, y ∈ IR^n with x ≠ y;
(c) strongly monotone with modulus μ > 0 if ⟨x − y, F(x) − F(y)⟩ ≥ μ‖x − y‖² for all x, y ∈ IR^n;
(d) a P_0-function if max_{1≤i≤n, x_i≠y_i} (x_i − y_i)(F_i(x) − F_i(y)) ≥ 0 for all x, y ∈ IR^n with x ≠ y;
(e) a P-function if max_{1≤i≤n} (x_i − y_i)(F_i(x) − F_i(y)) > 0 for all x, y ∈ IR^n with x ≠ y;
(f) a uniform P-function with modulus κ > 0 if max_{1≤i≤n} (x_i − y_i)(F_i(x) − F_i(y)) ≥ κ‖x − y‖² for all x, y ∈ IR^n.

From Definition 3.1, the following one-sided implications can be obtained:

F is strongly monotone ⇒ F is a uniform P-function ⇒ F is a P_0-function.
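For a linear map F(x) = Mx with M symmetric positive definite, the first implication can be seen directly: max_i d_i(Md)_i ≥ (1/n)⟨d, Md⟩ ≥ (μ/n)‖d‖² for d = x − y, so strong monotonicity with modulus μ yields a uniform P-function with modulus κ = μ/n. A small randomized spot-check of this inequality (our own illustration, with M = diag(2, 3), so μ = 2, n = 2, κ = 1):

```python
import random

# F(x) = Mx with M = diag(2, 3): strongly monotone with modulus mu = 2,
# hence a uniform P-function with modulus kappa = mu / n = 1.
M = [[2.0, 0.0], [0.0, 3.0]]

def F(x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(2)]
    y = [random.uniform(-1, 1) for _ in range(2)]
    d = [x[i] - y[i] for i in range(2)]
    Fd = [F(x)[i] - F(y)[i] for i in range(2)]
    lhs = max(d[i] * Fd[i] for i in range(2))
    # Uniform P-inequality with kappa = 1 (small tolerance for rounding).
    assert lhs >= (d[0]**2 + d[1]**2) - 1e-12
```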
It is known that F is monotone (resp. strictly monotone) if and only if ∇F(x) is positive semidefinite (resp. positive definite) for all x ∈ IR^n. In addition, F is a P_0-function if and only if ∇F(x) is a P_0-matrix for all x ∈ IR^n; that is, its principal minors are nonnegative. Further, if ∇F(x) is a P-matrix (that is, its principal minors are positive) for all x ∈ IR^n, then F is a P-function. However, we point out that a P-function does not necessarily have a Jacobian which is a P-matrix.

The following characterization of P-matrices and P_0-matrices will be useful in our analysis.

Lemma 3.1. A matrix M ∈ IR^{n×n} is a P-matrix (resp. a P_0-matrix) if and only if whenever x_i(Mx)_i ≤ 0 (resp. x_i(Mx)_i < 0) for all i, then x = 0.

Proof. Please see [6].
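The principal-minor description of P-matrices mentioned above is straightforward to implement for small matrices. A sketch (our own; for larger matrices one would use an LU-based determinant or the sign test of Lemma 3.1 rather than cofactor expansion):

```python
from itertools import combinations

def det(M):
    # Determinant by cofactor expansion along the first row (fine for small n).
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def is_P_matrix(M):
    # P-matrix: every principal minor is positive.
    n = len(M)
    return all(det([[M[i][j] for j in idx] for i in idx]) > 0
               for k in range(1, n + 1) for idx in combinations(range(n), k))

assert is_P_matrix([[2.0, 1.0], [0.0, 3.0]])       # principal minors 2, 3, 6 > 0
assert not is_P_matrix([[0.0, 1.0], [-1.0, 0.0]])  # zero diagonal minors: P0 but not P
```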
The following two lemmas summarize some properties of φ_NR^p, φ_{S-NR}^p and ψ_{S-NR}^p that will be useful in our subsequent analysis.

Lemma 3.2. Let p > 1 be an odd integer. Then, the following hold.

(a) The function φ_NR^p is twice continuously differentiable. Its gradient is given by

∇φ_NR^p(a, b) = p [a^{p−1} − (a − b)^{p−2}(a − b)_+ , (a − b)^{p−2}(a − b)_+]^T.

(b) The function φ_{S-NR}^p is twice continuously differentiable on the set Ω := {(a, b) | a ≠ b}. Its gradient is given by

∇φ_{S-NR}^p(a, b) = { p [a^{p−1} − (a − b)^{p−1}, (a − b)^{p−1}]^T   if a > b,
                      p [(b − a)^{p−1}, b^{p−1} − (b − a)^{p−1}]^T   if a < b.

Further, φ_{S-NR}^p is differentiable at (0, 0) with ∇φ_{S-NR}^p(0, 0) = [0, 0]^T.

(c) The function ψ_{S-NR}^p is twice continuously differentiable. Its gradient is given by

∇ψ_{S-NR}^p(a, b) = { p [a^{p−1}b^p − (a − b)^{p−1}b^p, a^p b^{p−1} − (a − b)^p b^{p−1} + (a − b)^{p−1}b^p]^T   if a > b,
                      p [a^{p−1}b^p, a^p b^{p−1}]^T = p a^{2p−1} [1, 1]^T                                       if a = b,
                      p [a^{p−1}b^p − (b − a)^p a^{p−1} + (b − a)^{p−1}a^p, a^p b^{p−1} − (b − a)^{p−1}a^p]^T   if a < b.

Proof. Please see [3, Proposition 2.2], [1, Propositions 2.2 and 3.2], and [16, Proposition 4.3].
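The gradient formula in Lemma 3.2(a) is easy to sanity-check against central differences (our own verification sketch, with p = 3 and a few arbitrary test points):

```python
p = 3

def phi(a, b):
    # phi_NR^p(a, b) = a^p - (a - b)_+^p.
    return a**p - max(a - b, 0.0)**p

def grad_phi(a, b):
    # Analytic gradient from Lemma 3.2(a).
    plus = max(a - b, 0.0)
    return (p * a**(p - 1) - p * (a - b)**(p - 2) * plus,
            p * (a - b)**(p - 2) * plus)

def num_grad(a, b, h=1e-6):
    # Central-difference approximation of the gradient.
    return ((phi(a + h, b) - phi(a - h, b)) / (2 * h),
            (phi(a, b + h) - phi(a, b - h)) / (2 * h))

for a, b in [(1.5, 0.5), (-0.3, 0.7), (2.0, -1.0)]:
    ga, gb = grad_phi(a, b)
    na, nb = num_grad(a, b)
    assert abs(ga - na) < 1e-4 and abs(gb - nb) < 1e-4
```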
Lemma 3.3. Let p > 1 be a positive odd integer. Then, the following hold.

(a) If φ ∈ {φ_NR^p, φ_{S-NR}^p}, then φ(a, b) > 0 ⟺ a > 0, b > 0. On the other hand, ψ_{S-NR}^p(a, b) ≥ 0 on IR².

(b) We have

∇_a φ_NR^p(a, b) · ∇_b φ_NR^p(a, b) { > 0 on {(a, b) | a > b > 0 or a > b > 2a},
                                      = 0 on {(a, b) | a ≤ b or a > b = 2a or a > b = 0},
                                      < 0 otherwise,

∇_a φ_{S-NR}^p(a, b) · ∇_b φ_{S-NR}^p(a, b) > 0 on {(a, b) | a > b > 0} ∪ {(a, b) | b > a > 0}, and ∇_a ψ_{S-NR}^p(a, b) · ∇_b ψ_{S-NR}^p(a, b) > 0 on the first quadrant IR²_{++}.

(c) If φ ∈ {φ_NR^p, φ_{S-NR}^p}, then ∇_a φ(a, b) · ∇_b φ(a, b) = 0 provided that φ(a, b) = 0. On the other hand, ψ_{S-NR}^p(a, b) = 0 ⟺ ∇ψ_{S-NR}^p(a, b) = 0. In particular, we have ∇_a ψ_{S-NR}^p(a, b) · ∇_b ψ_{S-NR}^p(a, b) = 0 provided that ψ_{S-NR}^p(a, b) = 0.

Proof. Please see [16, Propositions 3.4, 4.5, and 5.4].

Next, we recall some materials about first-order differential equations (ODEs):
ẋ(t) = H(x(t)), x(t_0) = x_0 ∈ IR^n, (12)

where H: IR^n → IR^n is a mapping. We also introduce three kinds of stability that we will consider later. These materials can be found in ODE textbooks; see [27].

Definition 3.2. A point x* = x(t*) is called an equilibrium point or a steady state of the dynamic system (12) if H(x*) = 0. If there is a neighborhood Ω* ⊆ IR^n of x* such that H(x*) = 0 and H(x) ≠ 0 for all x ∈ Ω*\{x*}, then x* is called an isolated equilibrium point.
Lemma 3.4. Assume that H: IR^n → IR^n is a continuous mapping. Then, for any t_0 ≥ 0 and x_0 ∈ IR^n, there exists a local solution x(t) of (12) with t ∈ [t_0, τ) for some τ > t_0. If, in addition, H is locally Lipschitz continuous at x_0, then the solution is unique; if H is Lipschitz continuous on IR^n, then τ can be extended to ∞.

Definition 3.3 (Stability in the sense of Lyapunov). Let x(t) be a solution of (12). An isolated equilibrium point x* is Lyapunov stable if for any x_0 = x(t_0) and any ε > 0, there exists a δ > 0 such that ‖x(t) − x*‖ < ε for all t ≥ t_0 whenever ‖x(t_0) − x*‖ < δ.

Definition 3.4 (Asymptotic stability). An isolated equilibrium point x* is said to be asymptotically stable if, in addition to being Lyapunov stable, it has the property that x(t) → x* as t → ∞ for all ‖x(t_0) − x*‖ < δ.
Definition 3.5 (Lyapunov function). Let Ω ⊆ IR^n be an open neighborhood of x̄. A continuously differentiable function W: IR^n → IR is said to be a Lyapunov function at the state x̄ over the set Ω for Eq. (12) if

W(x̄) = 0, W(x) > 0 for all x ∈ Ω\{x̄},

dW(x(t))/dt = ∇W(x(t))^T H(x(t)) ≤ 0 for all x ∈ Ω.

Lemma 3.5.

(a) An isolated equilibrium point x* is Lyapunov stable if there exists a Lyapunov function over some neighborhood Ω* of x*.
(b) An isolated equilibrium point x* is asymptotically stable if there is a Lyapunov function over some neighborhood Ω* of x* such that dW(x(t))/dt < 0 for all x ∈ Ω*\{x*}.
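As a textbook illustration of Lemma 3.5(b) (our own example, not from the paper), take the scalar system ẋ = −x with equilibrium x* = 0 and W(x) = x²: then W(0) = 0, W(x) > 0 for x ≠ 0, and dW/dt = 2x·(−x) = −2x² < 0 off the equilibrium, so the origin is asymptotically stable. A short Euler simulation confirms the decay:

```python
# Scalar system x' = -x; W(x) = x^2 is a Lyapunov function with dW/dt = -2x^2 < 0
# for x != 0, so x* = 0 is asymptotically stable by Lemma 3.5(b).
x, dt = 1.0, 0.01
values = [x * x]                 # track W along the trajectory
for _ in range(1000):
    x += dt * (-x)               # explicit Euler step for x' = -x
    values.append(x * x)

assert abs(x) < 1e-3                                       # trajectory approaches x* = 0
assert all(values[i + 1] <= values[i] for i in range(1000))  # W is nonincreasing
```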
Definition 3.6 (Exponential stability). An isolated equilibrium point x* is exponentially stable if there exists a δ > 0 such that an arbitrary solution x(t) of (12) with the initial condition x(t_0) = x_0 and ‖x(t_0) − x*‖ < δ is well-defined on [0, +∞) and satisfies

‖x(t) − x*‖ ≤ c e^{−ωt} ‖x(t_0) − x*‖ for all t ≥ t_0,

where c > 0 and ω > 0 are constants independent of the initial point.

The following result will also be helpful in our stability analysis.

Lemma 3.6. Let F be locally Lipschitzian. If all V ∈ ∂F(x) are nonsingular, then there is a neighborhood N(x) of x and a constant C such that for any y ∈ N(x) and any V ∈ ∂F(y), V is nonsingular and ‖V^{−1}‖ ≤ C.

Proof. Please see [32, Proposition 3.1].

4. Neural network model
In this section, we describe the properties of the neural network (9) based on the functions φ_NR^p, φ_{S-NR}^p and ψ_{S-NR}^p. Before this, we first summarize some important properties of Ψ as defined in (8) for general NCP-functions. Proposition 4.1(a) is in fact Lemma 2.2 in [19]. On the other hand, Proposition 4.1(b) and (e) are true for all gradient systems (9).
Proposition 4.1. Let Ψ: IR^n → IR_+ be defined as in (8), with φ being any NCP-function, and let ψ be as in (7). Suppose that F is continuously differentiable. Then,

(a) Ψ(x) ≥ 0 for all x ∈ IR^n. If the NCP (1) has a solution, x is a global minimizer of Ψ if and only if x solves the NCP.
(b) Ψ(x(t)) is a nonincreasing function of t, where x(t) is a solution of (9).
(c) Let x ∈ IR^n, and suppose that φ is differentiable at (x_i, F_i(x)) for each i = 1, ..., n. Then

∇Ψ(x) = ∇_a ψ(x, F(x)) + ∇F(x) ∇_b ψ(x, F(x)), (13)

where

∇_a ψ(x, F(x)) := [∇_a ψ(x_1, F_1(x)), ..., ∇_a ψ(x_n, F_n(x))]^T,
∇_b ψ(x, F(x)) := [∇_b ψ(x_1, F_1(x)), ..., ∇_b ψ(x_n, F_n(x))]^T.

(d) Let x be a solution to the NCP such that φ is differentiable at (x_i, F_i(x)) for each i = 1, ..., n. Then, x is a stationary point of Ψ.
(e) Every accumulation point of a solution x(t) of the neural network (9) is an equilibrium point.
Proof. (a) It is clear that Ψ ≥ 0. Notice that Ψ(x) = 0 if and only if Φ(x) = 0, which occurs if and only if φ(x_i, F_i(x)) = 0 for all i. Since φ is an NCP-function, this is equivalent to having x_i ≥ 0, F_i(x) ≥ 0 and x_i F_i(x) = 0. Thus, Ψ(x) = 0 if and only if x ≥ 0, F(x) ≥ 0 and ⟨x, F(x)⟩ = 0. This proves part (a).

(b) The desired result follows from

dΨ(x(t))/dt = ∇Ψ(x(t))^T (dx/dt) = ∇Ψ(x(t))^T (−ρ∇Ψ(x(t))) = −ρ‖∇Ψ(x(t))‖² ≤ 0

for all solutions x(t).

(c) The formula is clear from the chain rule.

(d) First, note that from Eq. (7), we have ∇ψ(a, b) = φ(a, b) · ∇φ(a, b). Thus, if x is a solution to the NCP, it gives ∇ψ(x_i, F_i(x)) = 0 for all i = 1, ..., n. Then, it follows from formula (13) in part (c) that ∇Ψ(x) = 0. That is, x is a stationary point of Ψ.

(e) Please see page 232 of [41].
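Formula (13) can be checked numerically for a concrete smooth φ. The sketch below (our own; the map F, the value p = 3, and the test point are arbitrary choices) assembles ∇Ψ via (13) with φ = φ_NR^p and compares it against central differences of Ψ; recall that ∇F(x) in the paper's notation is the transposed Jacobian.

```python
p = 3

def phi(a, b):
    # phi_NR^p(a, b) = a^p - (a - b)_+^p.
    return a**p - max(a - b, 0.0)**p

def grad_phi(a, b):
    # Gradient of phi_NR^p (Lemma 3.2(a)); grad psi = phi * grad phi by (7).
    plus = max(a - b, 0.0)
    return (p * a**(p - 1) - p * (a - b)**(p - 2) * plus,
            p * (a - b)**(p - 2) * plus)

def F(x):
    # A simple smooth map with constant Jacobian J = [[2, 1], [0, 3]].
    return [2 * x[0] + x[1], 3 * x[1]]

JT = [[2, 0], [1, 3]]  # transposed Jacobian, i.e. "nabla F(x)" in the paper

def Psi(x):
    Fx = F(x)
    return 0.5 * sum(phi(x[i], Fx[i]) ** 2 for i in range(2))

def grad_Psi(x):
    # Formula (13): grad Psi = grad_a psi + (nabla F) grad_b psi.
    Fx = F(x)
    ga, gb = [], []
    for i in range(2):
        da, db = grad_phi(x[i], Fx[i])
        ga.append(phi(x[i], Fx[i]) * da)
        gb.append(phi(x[i], Fx[i]) * db)
    return [ga[i] + sum(JT[i][j] * gb[j] for j in range(2)) for i in range(2)]

x = [0.8, -0.4]
h = 1e-6
num = [(Psi([x[0] + h, x[1]]) - Psi([x[0] - h, x[1]])) / (2 * h),
       (Psi([x[0], x[1] + h]) - Psi([x[0], x[1] - h])) / (2 * h)]
ana = grad_Psi(x)
assert all(abs(ana[i] - num[i]) < 1e-4 for i in range(2))
```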
We adopt the neural network (9) with Ψ(x) = (1/2)‖Φ(x)‖², where Φ is given by (6) with φ ∈ {φ_NR^p, φ_{S-NR}^p, ψ_{S-NR}^p}. The function Φ corresponding to φ_NR^p, φ_{S-NR}^p and ψ_{S-NR}^p is denoted, respectively, by Φ_NR^p, Φ_{S1-NR}^p and Φ_{S2-NR}^p. Their corresponding merit functions will be denoted by Ψ_NR^p, Ψ_{S1-NR}^p and Ψ_{S2-NR}^p, respectively. We note that by formula (13) and the differentiability of Ψ ∈ {Ψ_NR^p, Ψ_{S1-NR}^p, Ψ_{S2-NR}^p} (see Proposition 4.2), the neural network (9) can be implemented on hardware as in Fig. 1.

We first establish the existence and uniqueness of the solutions of the neural network (9).
Proposition 4.2. Let p > 1 be an odd integer. Then, the following hold.

(a) Ψ_NR^p and Ψ_{S2-NR}^p are both continuously differentiable on IR^n.
(b) Ψ_{S1-NR}^p is continuously differentiable on the open set Ω = {x ∈ IR^n
{
x∈IRn