
New Smoothing Functions for Absolute Value Equation


Master's Thesis
Department of Mathematics, National Taiwan Normal University

Advisor: Dr. Jein-Shan Chen (陳界山)

New Smoothing Functions for Absolute Value Equation

Graduate student: Cheng-He Yu (余政和)

July 2015

Acknowledgments

Time flies, and my days in the graduate program at NTNU are drawing to a close. I had imagined that these acknowledgments would come easily, yet the flood of memories makes them hard to write. My master's years at NTNU are, moreover, entangled with complicated feelings toward NCCU.

After one failure, it was the encouragement of Professors 陸行 and 姜志銘 at NCCU that gave me the courage to submit my application to NTNU. At the admission interview, Professors 蔡蓉青 and 蔡碧紋 gave me the chance to return to campus. Professor 游森棚's (Sen-Peng Eu) remark in class, "you are someone who can study," restored a confidence I had long since lost; and the back-and-forth discussions with Professor 曹博盛 in class showed me many of the impossibilities one meets in actual teaching. I thank all the teachers along the way, whose encouragement allowed me to learn so much across so many fields during my graduate years.

I thank Professor Jein-Shan Chen (陳界山), whose courses taught me not only a great deal but also how to organize and present material. He gave me many suggestions and much help with this thesis; when I stalled and made no progress, he never applied heavy pressure, offering guidance only at the right moments, so that I could finish this thesis at my own pace, and in a remarkably short time.

I thank Professors Yu-Lin Chang (張毓麟) and Chun-Hsu Ko (柯春旭) for their valuable comments at the oral defense, which let me correct the thesis's shortcomings and make it more complete. They also pointed out problems and limitations that some of the equations may face in numerical computation, teaching me the difference between theory and practical computation.

I thank all the staff of the Extracurricular Activities Section at NCCU, who made it possible for me to stay involved in activities while doing research; this not only refreshed me but also taught me many skills not found in the classroom. I also thank the junior students I met through those activities: those who helped polish my English grammar, those who handled all manner of errands when I was busy, and those who joined me for basketball, meals, games, and movies when the thesis weighed on me. Without your company, this thesis would not have gone so smoothly.

Finally, I thank my family. Without your support, I might have given up on the master's degree long ago; without your encouragement and faith in me, I would not have kept walking this academic road, firmly and step by step. Now your child, your younger brother, your older brother is at last graduating from the master's program, leaving campus for society and the next stage of life. I will carry the passion with which I pursued this degree, and your support, into the work of the next stage. Thank you.

Cheng-He Yu, July 2015

Contents

Abstract
1. Introduction
2. Smooth reformulation
3. A smoothing-type algorithm
4. Convergence
5. Conclusion
References

New Smoothing Functions for Absolute Value Equation

Cheng-He Yu¹
Department of Mathematics
National Taiwan Normal University
Taipei 11677, Taiwan

Abstract. The system of absolute value equations Ax + B|x| = b, denoted by AVEs, is a non-differentiable NP-hard problem, where A, B are arbitrary given n × n real matrices and b is an arbitrary given n-dimensional vector. In this paper, we study four new smoothing functions and propose a smoothing-type algorithm to solve AVEs. Under the assumption that the minimal singular value of the matrix A is strictly greater than the maximal singular value of the matrix B, we prove that the algorithm is globally and locally quadratically convergent with the four smooth equations.

Key words. Smoothing function, singular value, convergence.

1. Introduction

We consider the absolute value equation (AVE):

$$Ax + B|x| = b, \tag{1}$$

where A ∈ ℝⁿˣⁿ, B ∈ ℝⁿˣⁿ, B ≠ 0 and b ∈ ℝⁿ with n ∈ ℕ, and |x| denotes the component-wise absolute value of the vector x ∈ ℝⁿ. In fact, when B = −I, where I is the identity matrix, the AVE (1) reduces to the special form Ax − |x| = b.

To solve the AVE (1), the following method was considered in [6]. Let

$$H_p(\mu, x) := \begin{bmatrix} \mu \\ Ax + B\Phi_p(\mu, x) - b \end{bmatrix}, \qquad \forall \mu \in \mathbb{R},\ \forall x \in \mathbb{R}^n,$$

$$\Phi_p(\mu, x) := \begin{bmatrix} \phi_p(\mu, x_1) \\ \phi_p(\mu, x_2) \\ \vdots \\ \phi_p(\mu, x_n) \end{bmatrix}, \qquad \forall \mu \in \mathbb{R},\ \forall x \in \mathbb{R}^n,$$

where

$$\phi_p(a, b) := \|(a, b)\|_p = \sqrt[p]{|a|^p + |b|^p}, \qquad \forall (a, b) \in \mathbb{R}^2.$$

¹ E-mail: 60240031S@ntnu.edu.tw

Then, Hp(µ, x) = 0 if and only if x solves AVE (1). In this paper, motivated by the aforementioned method, we define

$$H_i(\mu, x) = \begin{bmatrix} \mu \\ Ax + B\Phi_i(\mu, x) - b \end{bmatrix}, \qquad \forall \mu \in \mathbb{R},\ \forall x \in \mathbb{R}^n, \tag{2}$$

$$\Phi_i(\mu, x) = \begin{bmatrix} \phi_i(\mu, x_1) \\ \phi_i(\mu, x_2) \\ \vdots \\ \phi_i(\mu, x_n) \end{bmatrix}, \qquad \forall \mu \in \mathbb{R},\ \forall x \in \mathbb{R}^n, \tag{3}$$

where Φi : ℝ × ℝⁿ → ℝⁿ and

$$\phi_1(\mu, x) = \mu\left[\ln\left(1 + e^{-x/\mu}\right) + \ln\left(1 + e^{x/\mu}\right)\right], \tag{4}$$

$$\phi_2(\mu, x) = \begin{cases} x, & \text{if } x \ge \frac{\mu}{2}, \\[1mm] \dfrac{x^2}{\mu} + \dfrac{\mu}{4}, & \text{if } -\frac{\mu}{2} < x < \frac{\mu}{2}, \\[1mm] -x, & \text{if } x \le -\frac{\mu}{2}, \end{cases} \tag{5}$$

$$\phi_3(\mu, x) = \sqrt{4\mu^2 + x^2}, \tag{6}$$

$$\phi_4(\mu, x) = \begin{cases} \dfrac{x^2}{2\mu}, & \text{if } |x| \le \mu, \\[1mm] |x| - \dfrac{\mu}{2}, & \text{if } |x| > \mu. \end{cases} \tag{7}$$

AVEs have been extensively investigated in the literature. Many mathematical programming problems can be reduced to a linear complementarity problem (LCP), which is equivalent to an absolute value equation (AVE). In [12], Mangasarian and Meyer consider AVEs in the case where all singular values of the matrix A exceed 1; this implies that the AVE has a unique solution for any right-hand side b, although no computational results were given in [12]. In contrast, computational results were given in [8] for a linear-programming-based successive linearization algorithm utilizing a concave minimization model, and in [10] for a primal-dual bilinear model.

On the other hand, various numerical methods for solving AVEs have been proposed.

A parametric successive linearization algorithm for AVE (1) that terminates at a point satisfying necessary optimality conditions was proposed in [8]. A generalized Newton algorithm for the special form of (1) was proposed in [9]; it was proved that this method converges linearly from any starting point to the unique solution of the AVE (1) under the condition that ∥A⁻¹∥ < 1/4. This kind of approach was also discussed in [17], where semismooth and smoothing Newton steps were combined into one algorithm and the global and finite convergence of the method was established. Recently, Hu, Huang and Zhang [2] extended AVE (1) to the case of second-order cones, i.e., they introduced the system of absolute value equations associated with second-order cones.

We will show that Hi(µ, x) = 0 in (2) if and only if x solves the AVE (1). Furthermore, the absolute value |x| in AVE (1) can be approximated by φi(µ, x) as µ decreases to 0. In particular, we will investigate the properties of these four new smoothing functions and consider a smoothing-type algorithm similar to those studied in [5, 16] in order to solve the AVE (1).

In this paper, we are interested in smoothing-type algorithms for solving the general system of absolute value equations (1). We reformulate AVE (1) as a system of parameterized smooth equations and propose a smoothing-type algorithm to solve the AVE (1). We prove that the algorithm is well-defined under the assumption that the minimal singular value of the matrix A is strictly greater than the maximal singular value of the matrix B. Moreover, we show that the proposed algorithm is globally and locally quadratically convergent with the four smooth equations.

2. Smooth reformulation

In [6], the main idea is to solve the AVE (1) via (2) and (3). Here we consider the four smoothing functions defined in (4), (5), (6) and (7), respectively. The function φi for i = 1, 2, 3, 4 possesses the following properties.

Proposition 2.1 Let φi for i = 1, 2, 3, 4 be defined as in (4), (5), (6) and (7), respectively. For µ ↓ 0 and i = 1, 2, 3, 4, we have

(a) φi is continuously differentiable at any (µ, x) ∈ ℝ₊₊ × ℝⁿ;

(b) lim_{µ↓0} φi(µ, x) = |x|.

Proof. (a) First, we compute ∂φi(µ, x)/∂x and ∂φi(µ, x)/∂µ as below.

For i = 1, we verify that

$$\frac{\partial \phi_1(\mu, x)}{\partial x} = \frac{1}{1 + e^{-x/\mu}} - \frac{1}{1 + e^{x/\mu}}$$

and

$$\frac{\partial \phi_1(\mu, x)}{\partial \mu} = \left[\ln\left(1 + e^{-x/\mu}\right) + \ln\left(1 + e^{x/\mu}\right)\right] + \frac{x}{\mu}\left[\frac{-1}{1 + e^{-x/\mu}} + \frac{1}{1 + e^{x/\mu}}\right].$$

Thus, it is clear that ∂φ1(µ, x)/∂x and ∂φ1(µ, x)/∂µ are continuous.

For i = 2, we have

$$\frac{\partial \phi_2(\mu, x)}{\partial x} = \begin{cases} 1, & \text{if } x \ge \frac{\mu}{2}, \\[1mm] \dfrac{2x}{\mu}, & \text{if } -\frac{\mu}{2} < x < \frac{\mu}{2}, \\[1mm] -1, & \text{if } x \le -\frac{\mu}{2}, \end{cases} \qquad \frac{\partial \phi_2(\mu, x)}{\partial \mu} = \begin{cases} 0, & \text{if } x \ge \frac{\mu}{2}, \\[1mm] -\left(\dfrac{x}{\mu}\right)^2 + \dfrac{1}{4}, & \text{if } -\frac{\mu}{2} < x < \frac{\mu}{2}, \\[1mm] 0, & \text{if } x \le -\frac{\mu}{2}. \end{cases}$$

Then, we see that ∂φ2(µ, x)/∂x and ∂φ2(µ, x)/∂µ are continuous because

$$\lim_{x \to \frac{\mu}{2}} \frac{\partial \phi_2(\mu, x)}{\partial x} = \lim_{x \to \frac{\mu}{2}} \frac{2x}{\mu} = 1, \qquad \lim_{x \to -\frac{\mu}{2}} \frac{\partial \phi_2(\mu, x)}{\partial x} = \lim_{x \to -\frac{\mu}{2}} \frac{2x}{\mu} = -1,$$

and

$$\lim_{x \to \frac{\mu}{2}} \frac{\partial \phi_2(\mu, x)}{\partial \mu} = \lim_{x \to \frac{\mu}{2}} \left[-\left(\frac{x}{\mu}\right)^2 + \frac{1}{4}\right] = 0, \qquad \lim_{x \to -\frac{\mu}{2}} \frac{\partial \phi_2(\mu, x)}{\partial \mu} = \lim_{x \to -\frac{\mu}{2}} \left[-\left(\frac{x}{\mu}\right)^2 + \frac{1}{4}\right] = 0.$$

For i = 3, we have

$$\frac{\partial \phi_3(\mu, x)}{\partial x} = \frac{x}{\sqrt{4\mu^2 + x^2}}, \qquad \frac{\partial \phi_3(\mu, x)}{\partial \mu} = \frac{4\mu}{\sqrt{4\mu^2 + x^2}}.$$

Then, it is also clear that ∂φ3(µ, x)/∂x and ∂φ3(µ, x)/∂µ are continuous.

For i = 4, we have

$$\frac{\partial \phi_4(\mu, x)}{\partial x} = \begin{cases} 1, & \text{if } x > \mu, \\[1mm] \dfrac{x}{\mu}, & \text{if } -\mu \le x \le \mu, \\[1mm] -1, & \text{if } x < -\mu, \end{cases}$$

(8)  1   , if x > µ −   2 ( )2   ∂ϕ4 (µ, x) 1 x = − × , if −µ ≤ x ≤ µ  ∂µ 2 µ    1   − , if x < −µ 2 Similarly, we see that. ∂ϕ4 (µ,x) ∂x. ∈ C 1 and. ∂ϕ4 (µ,x) ∂µ. ∈ C 1 because. ∂ϕ4 (µ, x) x = lim = 1, x→µ x→µ µ ∂x ∂ϕ4 (µ, x) x lim = lim = −1. x→−µ x→−µ ∂x µ lim. and. [ ( )2 ] ∂ϕ4 (µ, x) 1 x 1 lim = lim − × =− , x→µ x→µ ∂µ 2 µ 2 [ ] ( )2 x ∂ϕ4 (µ, x) 1 1 lim = lim − × =− . x→−µ x→−µ ∂µ 2 µ 2. The results mentioned above indicate that ϕi is continuously differentiable at any (µ, x) ∈ IR++ × IRn . (b) For i = 1, 2, 3, 4, we have the following property: { ∂ϕi (µ, x) 1 if x > 0, lim = µ→0 −1 if x < 0, ∂x Therefore, part (b) is clear.. 2. In the following figures of ϕi , Proposition 2.1 can be seen via the graphs. In particular, we find that when µ ↓ 0, ϕi will be close to |x|, which verify Proposition 2.1(b).. 5.

Proposition 2.1 can also be seen from the following graphs of φi. In particular, as µ ↓ 0, φi approaches |x|, which verifies Proposition 2.1(b).

Figure 1: Graphs of φ1(µ, x) with µ = 0.1, 0.3, 0.5.

Figure 2: Graphs of φ2(µ, x) with µ = 0.1, 0.3, 0.5.

Figure 3: Graphs of φ3(µ, x) with µ = 0.1, 0.3, 0.5.

Figure 4: Graphs of φ4(µ, x) with µ = 0.1, 0.3, 0.5.

Figure 5: Graphs of all φi(µ, x) with µ = 0.1.

Now, with the equations (2) and (3), we have the following results.

Proposition 2.2 Let Φi(µ, x) for i = 1, 2, 3, 4 be defined as in (3). Then, we have

(a) Hi(µ, x) = 0 if and only if x solves AVE (1);

(b) Hi is continuously differentiable on ℝ₊₊ × ℝⁿ, and when µ > 0 the Jacobian matrix of Hi at (µ, x) is given by

$$\nabla H_i(\mu, x) := \begin{bmatrix} 1 & 0 \\[1mm] B\,\dfrac{\partial \Phi_i(\mu, x)}{\partial \mu} & A + B\,\dfrac{\partial \Phi_i(\mu, x)}{\partial x} \end{bmatrix}, \tag{8}$$

where

$$\frac{\partial \Phi_i(\mu, x)}{\partial \mu} := \left[\frac{\partial \phi_i(\mu, x_1)}{\partial \mu}, \ldots, \frac{\partial \phi_i(\mu, x_n)}{\partial \mu}\right]^T, \qquad \frac{\partial \Phi_i(\mu, x)}{\partial x} := \mathrm{diag}\left[\frac{\partial \phi_i(\mu, x_1)}{\partial x_1}, \ldots, \frac{\partial \phi_i(\mu, x_n)}{\partial x_n}\right].$$

Proof. To verify part (a), recall the AVE (1) and Proposition 2.1(b); it is then clear that part (a) holds. Part (b) follows from Proposition 2.1(a). □
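To illustrate (8), here is a minimal sketch (my own code, not from the thesis; names are hypothetical) that assembles Hi and ∇Hi for the smoothing function φ3 and checks the Jacobian against central finite differences:

```python
import numpy as np

def H(z, A, B, b):
    # H_3(mu, x) from (2)-(3) with phi_3(mu, x) = sqrt(4*mu^2 + x^2).
    mu, x = z[0], z[1:]
    return np.concatenate(([mu], A @ x + B @ np.sqrt(4 * mu**2 + x**2) - b))

def gradH(z, A, B):
    # Jacobian (8): top row (1, 0); below, [B*dPhi/dmu | A + B*diag(dPhi/dx)].
    mu, x = z[0], z[1:]
    r = np.sqrt(4 * mu**2 + x**2)
    n = x.size
    J = np.zeros((n + 1, n + 1))
    J[0, 0] = 1.0
    J[1:, 0] = B @ (4 * mu / r)         # B * dPhi_3/dmu
    J[1:, 1:] = A + B @ np.diag(x / r)  # A + B * dPhi_3/dx
    return J

rng = np.random.default_rng(0)
n = 4
A, B, b = rng.standard_normal((n, n)), rng.standard_normal((n, n)), rng.standard_normal(n)
z = np.concatenate(([0.5], rng.standard_normal(n)))   # mu = 0.5 > 0
eps = 1e-6
J_fd = np.column_stack([(H(z + eps * e, A, B, b) - H(z - eps * e, A, B, b)) / (2 * eps)
                        for e in np.eye(n + 1)])
print(np.max(np.abs(J_fd - gradH(z, A, B))))          # should be tiny, ~1e-9
```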

Next, we state the following assumption.

Assumption 2.3 The minimal singular value of the matrix A is strictly greater than the maximal singular value of the matrix B.

Under Assumption 2.3, we have the following proposition and lemma, whose proofs are similar to those in [6].

Proposition 2.4 The AVE (1) is uniquely solvable for any b ∈ ℝⁿ if Assumption 2.3 is satisfied.

Lemma 2.5 The function ∥H(x)∥ := ∥Ax + B|x| − b∥ is level-bounded if Assumption 2.3 is satisfied.

3. A smoothing-type algorithm

In this section, we propose a smoothing-type algorithm to solve AVE (1). By Proposition 2.2(a)(b), instead of solving the AVE (1) directly, one may solve Hi(µ, x) = 0 by some Newton-type method while driving µ ↓ 0, so that a solution of (1) can be found.

Algorithm 3.1 (A Smoothing-Type Algorithm)

Step 0 Choose δ, σ ∈ (0, 1), µ0 > 0 and x0 ∈ ℝⁿ. Set z0 := (µ0, x0). Denote e0 := (1, 0) ∈ ℝ × ℝⁿ. Choose β > 1 such that (min{1, ∥Hi(z0)∥})² ≤ βµ0. Set k := 0.

Step 1 If ∥Hi(z^k)∥ = 0, stop.

Step 2 Set τk := min{1, ∥Hi(z^k)∥}, and compute Δz^k := (Δµk, Δx^k) ∈ ℝ × ℝⁿ from

$$\nabla H_i(z^k)\,\Delta z^k = -H_i(z^k) + (1/\beta)\,\tau_k^2\, e^0, \tag{9}$$

where ∇Hi(·) is defined by (8).

Step 3 Let αk be the maximum of the values 1, δ, δ², · · · such that

$$\|H_i(z^k + \alpha_k \Delta z^k)\| \le \left[1 - \sigma(1 - 1/\beta)\alpha_k\right] \|H_i(z^k)\|. \tag{10}$$

Step 4 Set z^{k+1} := z^k + αk Δz^k and k := k + 1. Go back to Step 1.
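A minimal, self-contained sketch of Algorithm 3.1 (my own hypothetical NumPy implementation, using the smoothing function φ3; the test instance is constructed so that Assumption 2.3 holds by design):

```python
import numpy as np

def make_H_and_gradH(A, B, b):
    # H_3 from (2)-(3) and its Jacobian (8), with phi_3(mu, x) = sqrt(4*mu^2 + x^2).
    def H(z):
        mu, x = z[0], z[1:]
        return np.concatenate(([mu], A @ x + B @ np.sqrt(4 * mu**2 + x**2) - b))
    def gradH(z):
        mu, x = z[0], z[1:]
        r = np.sqrt(4 * mu**2 + x**2)
        n = x.size
        J = np.zeros((n + 1, n + 1))
        J[0, 0] = 1.0
        J[1:, 0] = B @ (4 * mu / r)
        J[1:, 1:] = A + B @ np.diag(x / r)
        return J
    return H, gradH

def solve_ave(A, B, b, mu0=1.0, delta=0.5, sigma=1e-4, beta=2.0, tol=1e-10, max_iter=100):
    # Algorithm 3.1 with a numerical stopping tolerance in Step 1.
    H, gradH = make_H_and_gradH(A, B, b)
    z = np.concatenate(([mu0], np.zeros(b.size)))            # Step 0: z^0 = (mu_0, x^0)
    e0 = np.eye(b.size + 1)[0]                               # e^0 = (1, 0) in R x R^n
    assert beta > 1 and min(1.0, np.linalg.norm(H(z)))**2 <= beta * mu0
    for _ in range(max_iter):
        normH = np.linalg.norm(H(z))
        if normH <= tol:                                     # Step 1
            break
        tau = min(1.0, normH)                                # Step 2
        dz = np.linalg.solve(gradH(z), -H(z) + (tau**2 / beta) * e0)   # Newton equation (9)
        alpha = 1.0                                          # Step 3: line search (10)
        while alpha > 1e-16 and \
              np.linalg.norm(H(z + alpha * dz)) > (1 - sigma * (1 - 1 / beta) * alpha) * normH:
            alpha *= delta
        z = z + alpha * dz                                   # Step 4
    return z[1:]

# Demo: singular values of A lie in [2, 3] while sigma_max(B) <= 1, so Assumption 2.3 holds.
rng = np.random.default_rng(0)
n = 5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(rng.uniform(2.0, 3.0, size=n)) @ V.T
B = rng.standard_normal((n, n))
B /= max(1.0, np.linalg.svd(B, compute_uv=False).max())
x_true = rng.standard_normal(n)
b = A @ x_true + B @ np.abs(x_true)
x = solve_ave(A, B, b)
print("residual:", np.linalg.norm(A @ x + B @ np.abs(x) - b))   # expected near machine precision
```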

With Algorithm 3.1, some basic results are shown in the following proposition.

Proposition 3.2 Let the sequence {z^k} be generated by Algorithm 3.1. Then,

(a) both {∥Hi(z^k)∥} and {τk} are monotonically decreasing;

(b) τk² ≤ βµk holds for all k;

(c) the sequence {µk} is monotonically decreasing, and µk > 0 for all k.

Proof. (a) Recall from Step 2 of Algorithm 3.1 that τk = min{1, ∥Hi(z^k)∥}, so {∥Hi(z^k)∥} is monotonically decreasing if and only if {τk} is monotonically decreasing. Hence it suffices to show that {∥Hi(z^k)∥} is monotonically decreasing. From the line search (10) and z^{k+1} := z^k + αkΔz^k in Step 4, we obtain

$$\|H_i(z^{k+1})\| = \|H_i(z^k + \alpha_k \Delta z^k)\| \le \left[1 - \sigma(1 - 1/\beta)\alpha_k\right] \|H_i(z^k)\| \le \|H_i(z^k)\|.$$

Thus, both {∥Hi(z^k)∥} and {τk} are monotonically decreasing.

(b) We proceed by induction. By Step 0 of Algorithm 3.1, (min{1, ∥Hi(z0)∥})² ≤ βµ0. Suppose that τn² = (min{1, ∥Hi(z^n)∥})² ≤ βµn for some n. Then,

$$\begin{aligned} \mu_{n+1} - \frac{1}{\beta}\tau_{n+1}^2 &= \mu_n + \alpha_n \Delta\mu_n - \frac{1}{\beta}\tau_{n+1}^2 \\ &= (1 - \alpha_n)\mu_n + \frac{1}{\beta}\alpha_n\tau_n^2 - \frac{1}{\beta}\tau_{n+1}^2 \\ &\ge (1 - \alpha_n)\frac{1}{\beta}\tau_n^2 + \frac{1}{\beta}\alpha_n\tau_n^2 - \frac{1}{\beta}\tau_{n+1}^2 \\ &= \frac{1}{\beta}\tau_n^2 - \frac{1}{\beta}\tau_{n+1}^2 \\ &\ge 0, \end{aligned}$$

where the second equality follows from the first equation of (9) together with µ_{n+1} = µn + αnΔµn; the first inequality holds because (1/β)τn² ≤ µn, and the second inequality follows from part (a). Thus, by induction, the result holds for all k.

(c) From the first equation of (9), it follows that

$$\mu_{n+1} = \mu_n + \alpha_n\Delta\mu_n = (1 - \alpha_n)\mu_n + \frac{1}{\beta}\alpha_n\tau_n^2 \le (1 - \alpha_n)\mu_n + \alpha_n\mu_n = \mu_n,$$

where the inequality uses (1/β)τn² ≤ µn from part (b); moreover, µ_{n+1} = (1 − αn)µn + (1/β)αnτn² > 0 since 0 < αn ≤ 1 and µn > 0. Hence the sequence {µk} is monotonically decreasing and positive. □

Now, we show the solvability of the Newton equations (9).

Theorem 3.3 Let Hi and ∇Hi be given by (2) and (8), respectively. Suppose that Assumption 2.3 holds. Then ∇Hi(µ, x) is invertible at any (µ, x) ∈ ℝ₊₊ × ℝⁿ.

Proof. From (8), it is clear that ∇Hi(µ, x) is invertible if and only if A + B ∂Φi(µ, x)/∂x is invertible. Suppose, to the contrary, that there exists y ≠ 0 such that [A + B ∂Φi(µ, x)/∂x] y = 0. Writing D := diag[∂φi(µ, x1)/∂x1, ..., ∂φi(µ, xn)/∂xn], this gives Ay = −BDy, and hence

$$y^T A^T A y = (Ay)^T (Ay) = (BDy)^T (BDy) = y^T D^T B^T B D y. \tag{11}$$

By [3], there exists a constant C such that 0 ≤ λ_min(DᵀD) ≤ λ_max(DᵀD) ≤ 1 and λ_max(DᵀBᵀBD) = C λ_max(BᵀB). Combining this with Assumption 2.3, which gives λ_min(AᵀA) > λ_max(BᵀB) > 0, we obtain λ_min(AᵀA) > λ_max(DᵀBᵀBD). Hence, we have

$$y^T A^T A y - y^T D^T B^T B D y \ge \lambda_{\min}(A^T A)\,y^T y - \lambda_{\max}(D^T B^T B D)\,y^T y = \left(\lambda_{\min}(A^T A) - C\,\lambda_{\max}(B^T B)\right) y^T y > 0,$$

that is, yᵀAᵀAy > yᵀDᵀBᵀBDy. This contradicts (11), and the proof is complete. □

Now we know from Theorem 3.3 that the system of Newton equations (9) is solvable. Analogous to the results in [4] and [5], the line search (10) is well-defined. Then we have the following corollary.

Corollary 3.4 Suppose that Assumption 2.3 holds. Then Algorithm 3.1 is well-defined.
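Assumption 2.3 is also easy to test numerically for a given instance; a small sketch (my own hypothetical helper, not from the thesis):

```python
import numpy as np

def assumption_2_3_holds(A, B):
    # Checks sigma_min(A) > sigma_max(B), i.e., Assumption 2.3.
    sigma_min_A = np.linalg.svd(A, compute_uv=False).min()
    sigma_max_B = np.linalg.svd(B, compute_uv=False).max()
    return sigma_min_A > sigma_max_B
```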

4. Convergence

In this section, we discuss the global and local quadratic convergence of Algorithm 3.1. First, we establish the boundedness of the sequence {z^k} generated by Algorithm 3.1.

Theorem 4.1 Suppose that Assumption 2.3 holds. Then the sequence {z^k} generated by Algorithm 3.1 is bounded.

Proof. To show that the sequence {z^k} = {(µk, x^k)} is bounded, first note that {µk} is bounded by Proposition 3.2(c).

From Proposition 3.2(a), the sequence {∥Hi(z^k)∥} is bounded. Then, by (2), the sequence {∥Ax^k + BΦi(µk, x^k) − b∥} is bounded, and hence {∥Ax^k + BΦi(µk, x^k)∥} is also bounded. Since ∥Ax^k∥ − ∥BΦi(µk, x^k)∥ ≤ ∥Ax^k + BΦi(µk, x^k)∥, we may assume, without loss of generality, that there exists a constant η > 0 such that

$$\|Ax^k\| - \|B\Phi_i(\mu_k, x^k)\| \le \eta. \tag{12}$$

Moreover, for all k,

$$\|Ax^k\|^2 = (x^k)^T A^T A x^k \ge \lambda_{\min}(A^T A)\,\|x^k\|^2, \qquad \|B\Phi_i(\mu_k, x^k)\|^2 \le \lambda_{\max}(B^T B)\,\|\Phi_i(\mu_k, x^k)\|^2.$$

Choosing a constant ζ such that ∥Φi(µk, x^k)∥² ≤ ζ for all k ∈ ℕ, inequality (12) becomes

$$\eta \ge \|Ax^k\| - \|B\Phi_i(\mu_k, x^k)\| \ge \sqrt{\lambda_{\min}(A^T A)}\,\|x^k\| - \sqrt{\lambda_{\max}(B^T B)\,\zeta}.$$

Then

$$\|x^k\| \le \frac{\eta + \sqrt{\lambda_{\max}(B^T B)\,\zeta}}{\sqrt{\lambda_{\min}(A^T A)}}$$

holds for all k. Hence the sequence {x^k} is bounded, which completes the proof. □

Now, using a proof similar to that of [4], we show the global convergence of Algorithm 3.1.

Theorem 4.2 Suppose that Assumption 2.3 holds and the sequence {z^k} is generated by Algorithm 3.1. Then any accumulation point of {z^k} is a solution of AVE (1).

Proof. By Theorem 4.1, without loss of generality, we may suppose that lim_{k→∞} z^k = z* = (µ*, x*). With Proposition 3.2(a), we have the following facts:

$$H^* := \|H_i(z^*)\| = \lim_{k\to\infty} \|H_i(z^k)\| \qquad \text{and} \qquad \tau_* := \min\{1, H^*\} = \lim_{k\to\infty} \min\{1, \|H_i(z^k)\|\}.$$

Next, we show that H* = 0 by contradiction, assuming H* > 0. In this situation, it follows from Proposition 3.2(b) that µ* > 0. The proof is divided into the following two cases.

• Suppose that αk ≥ c > 0 for all k ∈ ℕ, where c is a constant. By (10), we have

$$\|H_i(z^{k+1})\| \le \|H_i(z^k)\| - \sigma(1 - 1/\beta)\,c\,\|H_i(z^k)\|.$$

Since {∥Hi(z^k)∥} is monotonically decreasing and bounded below, summing this inequality over k yields

$$\sum_{k=0}^{\infty} c\,\sigma(1 - 1/\beta)\,\|H_i(z^k)\| \le \|H_i(z^0)\| < \infty,$$

which implies lim_{k→∞} ∥Hi(z^k)∥ = 0. This contradicts H* > 0.

• Suppose that lim_{k→∞} αk = 0. Then, for large k, α̂k := αk/δ does not satisfy (10), i.e.,

$$\|H_i(z^k + \hat{\alpha}_k \Delta z^k)\| > \left[1 - \sigma(1 - 1/\beta)\hat{\alpha}_k\right] \|H_i(z^k)\|.$$

Hence

$$\left(\|H_i(z^k + \hat{\alpha}_k \Delta z^k)\| - \|H_i(z^k)\|\right)/\hat{\alpha}_k > -\sigma(1 - 1/\beta)\,\|H_i(z^k)\|$$

holds for all large k. Since µ* > 0, Hi is continuously differentiable at z*. Letting k → ∞, the above inequality gives

$$\frac{1}{\|H_i(z^*)\|}\,\langle H_i(z^*), H_i'(z^*)\Delta z^*\rangle \ge -\sigma(1 - 1/\beta)\,\|H_i(z^*)\|. \tag{13}$$

Moreover, by (9) we have

$$\begin{aligned} \frac{1}{\|H_i(z^*)\|}\,\langle H_i(z^*), H_i'(z^*)\Delta z^*\rangle &= -\|H_i(z^*)\| + \frac{\tau_*^2}{\beta\,\|H_i(z^*)\|}\,\langle H_i(z^*), e^0\rangle \\ &\le -\|H_i(z^*)\| + \frac{\tau_*^2\,\|H_i(z^*)\|}{\beta\,\|H_i(z^*)\|} = -\|H_i(z^*)\| + \tau_*^2/\beta \\ &\le -\|H_i(z^*)\| + \tau_*/\beta \le -\|H_i(z^*)\| + \|H_i(z^*)\|/\beta \\ &= (-1 + 1/\beta)\,\|H_i(z^*)\|. \end{aligned}$$

Together with (13), this implies −(1 − 1/β) ≥ −σ(1 − 1/β), i.e., σ ≥ 1, which contradicts the fact that σ ∈ (0, 1) and β > 1.

Combining the two cases above, we obtain Hi(z*) = 0. Hence, x* is a solution of AVE (1), which completes the proof. □

Next, we analyse the local convergence of Algorithm 3.1. First, we need the following assumption.

Assumption 4.3 All generalized Jacobian matrices of the function Hi at the solution point are nonsingular.

The following theorem is the basis for the local convergence analysis of Algorithm 3.1.

Theorem 4.4 Suppose that Assumption 2.3 holds and z* := (µ*, x*) is an accumulation point of the sequence {z^k} generated by Algorithm 3.1.

(a) Define J_{Hi}(z*) := {lim ∇Hi(z^k) : z^k → z*}. Then

$$J_{H_i}(z^*) \subseteq \mathcal{V}, \qquad \text{where } \mathcal{V} := \left\{\begin{pmatrix} 1 & 0 \\ 0 & A + B\,\mathrm{diag}(d_i) \end{pmatrix} : d_i \in [-1, 1],\ i = 1, 2, \cdots, n\right\}.$$

(b) All V ∈ J_{Hi}(z*) are nonsingular.

(c) There exist a neighborhood N(z*) of z* and a constant C such that for any z := (µ, x) ∈ N(z*) with µ > 0, ∇Hi(z) is nonsingular and ∥(∇Hi(z))⁻¹∥ ≤ C.

Proof. Part (a) follows from a direct computation. Part (b) follows from Theorem 3.3, since Assumption 2.3 holds. Finally, part (c) follows from [13]. □

By Proposition 3.2(c), Theorem 4.4 and [14], we can argue in a similar way to obtain the local quadratic convergence of Algorithm 3.1 as follows.

Theorem 4.5 Suppose that Assumption 2.3 holds and z* := (µ*, x*) is an accumulation point of the sequence {z^k} generated by Algorithm 3.1. Then the whole sequence {z^k} converges to z* with ∥z^{k+1} − z*∥ = O(∥z^k − z*∥²) and µ_{k+1} = O(µ_k²).

5. Conclusion

In this paper, we have considered four equations based on different smoothing functions to solve the AVE (1), and we have obtained some properties of these functions. Moreover, we followed the smoothing-type algorithm studied in [6] to solve the AVE (1), and showed that the algorithm is globally and locally quadratically convergent with the four smooth equations. For future work, numerical implementations and comparisons with other existing algorithms are desirable.

References

[1] S.-L. Hu, Z.-H. Huang, and J.-S. Chen, Properties of a family of generalized NCP-functions and a derivative free algorithm for complementarity problems, Journal of Computational and Applied Mathematics, vol. 230, pp. 69-82, 2009.

[2] S.-L. Hu, Z.-H. Huang, and Q. Zhang, A generalized Newton method for absolute value equations associated with second order cones, Journal of Computational and Applied Mathematics, vol. 235, pp. 1490-1501, 2011.

[3] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.

[4] Z.-H. Huang, Locating a maximally complementary solution of the monotone NCP by using non-interior-point smoothing algorithms, Mathematical Methods of Operations Research, vol. 61, pp. 41-45, 2005.

[5] Z.-H. Huang, Y. Zhang, and W. Wu, A smoothing-type algorithm for solving system of inequalities, Journal of Computational and Applied Mathematics, vol. 220, pp. 355-363, 2008.

[6] X. Jiang and Y. Zhang, A smoothing-type algorithm for absolute value equations, Journal of Industrial and Management Optimization, vol. 9, pp. 789-798, 2013.

[7] J.-S. Chen, C.-H. Ko, Y.-D. Liu, and S.-P. Wang, New smoothing functions for solving a system of equalities and inequalities, to appear in Pacific Journal of Optimization, 2016.

[8] O. L. Mangasarian, Absolute value equation solution via concave minimization, Optimization Letters, vol. 1, pp. 3-5, 2007.

[9] O. L. Mangasarian, A generalized Newton method for absolute value equations, Optimization Letters, vol. 3, pp. 101-108, 2009.

[10] O. L. Mangasarian, Primal-dual bilinear programming solution of the absolute value equation, Optimization Letters, vol. 6, pp. 1527-1533, 2012.

[11] O. L. Mangasarian, Absolute value equation solution via dual complementarity, Optimization Letters, vol. 7, pp. 625-630, 2013.

[12] O. L. Mangasarian and R. R. Meyer, Absolute value equation, Linear Algebra and Its Applications, vol. 419, pp. 359-367, 2006.

[13] L. Qi, Convergence analysis of some algorithms for solving nonsmooth equations, Mathematics of Operations Research, vol. 18, pp. 227-244, 1993.

[14] L. Qi, D. Sun, and G.-L. Zhou, A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequality problems, Mathematical Programming, vol. 87, pp. 1-35, 2000.

[15] R. G. Bartle, The Elements of Real Analysis, 2nd edition, Wiley, 1976.

[16] Y. Zhang and Z.-H. Huang, A nonmonotone smoothing-type algorithm for solving a system of equalities and inequalities, Journal of Computational and Applied Mathematics, vol. 233, pp. 2312-2321, 2010.

[17] C. Zhang and Q. J. Wei, Global and finite convergence of a generalized Newton method for absolute value equations, Journal of Optimization Theory and Applications, vol. 143, pp. 391-403, 2009.

