
Journal of Optimization Theory and Applications, vol. 138, pp. 95-113, 2008

Proximal-like algorithm using the quasi D-function for convex second-order cone programming

Shaohua Pan¹

School of Mathematical Sciences, South China University of Technology

Guangzhou 510640, China

Jein-Shan Chen²

Department of Mathematics, National Taiwan Normal University

Taipei 11677, Taiwan

July 28, 2006

(revised on January 30, 2007)

Abstract. In this paper, we present a measure of distance in the second-order cone based on a class of continuously differentiable strictly convex functions on IR++. Since the distance function has some favorable properties similar to those of the D-function [8], we refer to it as a quasi D-function. A proximal-like algorithm using the quasi D-function is then proposed and applied to the second-order cone programming problem, which is to minimize a closed proper convex function with general second-order cone constraints. Like the proximal point algorithm using the D-function [5, 8], we establish, under some mild assumptions, the global convergence of the algorithm expressed in terms of function values, and show that the sequence generated by the proposed algorithm is bounded and that every accumulation point is a solution to the considered problem.

Key words. Quasi D-function, Bregman function, proximal-like method, convex second-order cone programming.

AMS subject classifications. 90C30

¹The author's work is partially supported by the Doctoral Starting-up Foundation (B13B6050640) of GuangDong Province. E-mail: shhpan@scut.edu.cn.

²Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author's work is partially supported by the National Science Council of Taiwan. E-mail: jschen@math.ntnu.edu.tw.


1 Introduction

We consider the following convex second-order cone programming (CSOCP):

min f(ζ)
s.t. Aζ + b ⪰Kn 0,

where A is an n × m matrix with n ≥ m, b ∈ IRn, f : IRm → (−∞, +∞] is a closed proper convex function, and Kn is the second-order cone (SOC for short) given by

Kn := {(x1, x2) ∈ IR × IRn−1 | ‖x2‖ ≤ x1}, (1)

and x ⪰Kn 0 means x ∈ Kn. Note that a function is closed if and only if it is lower semi-continuous (l.s.c. for short), and a function f is proper if f(ζ) < ∞ for at least one ζ ∈ IRm and f(ζ) > −∞ for all ζ ∈ IRm. The CSOCP, as an extension of the standard second-order cone programming (SOCP) (see Section 4), has applications in a broad range of fields, from engineering, control and finance to robust optimization and combinatorial optimization; see [1, 3, 6, 16, 17] and references therein.

Recently, the SOCP has received much attention in optimization, particularly in the context of solution methods. In this paper, we focus on the solution of the more general CSOCP. Note that the CSOCP is a special class of convex programs, and therefore it can be solved via general convex programming methods. One of these methods is the proximal point algorithm for minimizing a convex function f(ζ) defined on IRm, which replaces the problem min_{ζ∈IRm} f(ζ) by a sequence of minimization problems with strictly convex objectives, generating a sequence {ζk} defined by

ζk = argmin_{ζ∈IRm} { f(ζ) + (1/μk)‖ζ − ζk−1‖² }, (2)

where {μk} is a sequence of positive numbers and ‖·‖ denotes the Euclidean norm in IRm. The method is due to Martinet [18], who introduced the above proximal minimization problem based on the Moreau proximal approximation [19] of f. The proximal point algorithm was then further developed and studied by Rockafellar [21, 22]. Later, several researchers [5, 8, 10, 11, 23] proposed and investigated nonquadratic proximal point algorithms for convex programming with nonnegative constraints, by replacing the quadratic distance in (2) with other distance-like functions. Among others, Censor and Zenios [8] replaced the method (2) by a method of the form

ζk = argmin_{ζ∈IRm} { f(ζ) + (1/μk) D(ζ, ζk−1) }, (3)

where D(·, ·), called the D-function, is a measure of distance based on a Bregman function. Recall that a differentiable function ϕ is called a Bregman function [4, 9] if it satisfies the properties listed in Definition 1.1 below, and the induced D-function is given as follows:

D(ζ, ξ) := ϕ(ζ) − ϕ(ξ) − ⟨∇ϕ(ξ), ζ − ξ⟩, (4)

where ⟨·, ·⟩ denotes the inner product in IRm and ∇ϕ denotes the gradient of ϕ.
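For readers who wish to experiment numerically, the following Python sketch (ours, not part of the original development) evaluates the D-function (4) for one admissible choice of Bregman function, the separable entropy ϕ(ζ) = Σ_{i=1}^m ζi ln ζi − ζi with zone S = IRm++:

```python
import numpy as np

# Illustrative choice of Bregman function: phi(z) = sum_i z_i*ln(z_i) - z_i,
# with zone S = IR^m_{++}; its gradient is ln(z) componentwise.
def phi(z):
    return np.sum(z * np.log(z) - z)

def grad_phi(z):
    return np.log(z)

def D(zeta, xi):
    """D-function (4): D(zeta, xi) = phi(zeta) - phi(xi) - <grad phi(xi), zeta - xi>."""
    return phi(zeta) - phi(xi) - grad_phi(xi) @ (zeta - xi)

zeta, xi = np.array([0.5, 2.0]), np.array([1.0, 1.0])
print(D(zeta, xi) > 0)            # strictly positive since zeta != xi
print(np.isclose(D(xi, xi), 0.0)) # D vanishes exactly on the diagonal
```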

Definition 1.1 Let S ⊆ IRm be an open set and S̄ be its closure. Then ϕ : S̄ → IR is called a Bregman function with zone S if the following properties hold:

(i) ϕ is continuously differentiable on S;

(ii) ϕ is strictly convex and continuous on S̄;

(iii) For each γ ∈ IR, the level sets LD(ξ, γ) = {ζ ∈ S̄ : D(ζ, ξ) ≤ γ} and LD(ζ, γ) = {ξ ∈ S : D(ζ, ξ) ≤ γ} are bounded for any ξ ∈ S and ζ ∈ S̄, respectively;

(iv) If {ξk} ⊂ S converges to ξ, then D(ξ, ξk) → 0;

(v) If {ζk} and {ξk} are sequences such that ξk → ξ ∈ S̄, {ζk} is bounded, and D(ζk, ξk) → 0, then ζk → ξ.

The Bregman proximal minimization (BPM) method described in (3) was further extended by Kiwiel [15] with generalized Bregman functions, called B-functions. Compared with Bregman functions, these functions are possibly nondifferentiable and infinite on the boundary of their domain. For the detailed definition of B-functions and the convergence of the BPM method using B-functions, please refer to [15].

The main purpose of this paper is to extend the BPM method (3) so that it can be used to deal with the CSOCP. Specifically, we define a measure of distance in the second-order cone Kn by a class of continuously differentiable strictly convex functions on IR++ which are in fact special B-functions in IR (see Property 3.1). The distance measure, which includes the entropy-like distance in Kn given by [7] as a special case, is shown to have some favorable properties similar to those of a Bregman distance, and hence we refer to it here as a quasi Bregman distance or quasi D-function. The precise definition is given in Section 3. Then, a proximal-like algorithm using the quasi D-function is proposed and applied to solving the CSOCP. As with the proximal point algorithm (3), we establish, under some mild assumptions, the global convergence of the algorithm expressed in terms of function values, and show that the sequence generated is bounded and that each accumulation point is a solution of the CSOCP.

The rest of this paper is organized as follows. In Section 2, we review some basic concepts and properties associated with the SOC. In Section 3, we define a quasi D-function in Kn and explore the relations among the quasi D-function, the D-function, and the double-regularized distance function. In Section 4, we present a proximal-like algorithm using the quasi D-function, apply it to solving the CSOCP, and analyze the convergence of the algorithm. Finally, we close this paper in Section 5.

Some words about our notation. We use IR+ and IR++ to denote the nonnegative and positive reals, respectively, and I to represent an identity matrix of suitable dimension. For a differentiable function φ on IR, φ′ represents its derivative. Given a set S, we use S̄, int(S) and bd(S) to denote the closure, the interior and the boundary of S, respectively. For a closed proper convex function f : IRm → (−∞, +∞], we denote the domain of f by dom(f) := {ζ ∈ IRm | f(ζ) < ∞} and the subdifferential of f at ζ̄ by ∂f(ζ̄) := {w ∈ IRm | f(ζ) ≥ f(ζ̄) + ⟨w, ζ − ζ̄⟩, ∀ζ ∈ IRm}.

If f is differentiable at ζ, we use ∇f(ζ) to denote its gradient at ζ. For any x, y ∈ IRn, we write x ⪰Kn y if x − y ∈ Kn, and x ≻Kn y if x − y ∈ int(Kn). In other words, x ⪰Kn 0 if and only if x ∈ Kn, and x ≻Kn 0 if and only if x ∈ int(Kn).

2 Preliminaries

In this section, we review some basic concepts and properties related to Kn that will be used in the subsequent analysis. For any x = (x1, x2), y = (y1, y2) ∈ IR × IRn−1, we define their Jordan product as

x ◦ y := (⟨x, y⟩, y1x2 + x1y2). (5)

We write x + y to mean the usual componentwise addition of vectors and x² to mean x ◦ x. Then ◦, + and e = (1, 0, · · · , 0)T ∈ IRn have the following basic properties [12, 13]: (1) e ◦ x = x for all x ∈ IRn; (2) x ◦ y = y ◦ x for all x, y ∈ IRn; (3) x ◦ (x² ◦ y) = x² ◦ (x ◦ y) for all x, y ∈ IRn; (4) (x + y) ◦ z = x ◦ z + y ◦ z for all x, y, z ∈ IRn. Note that the Jordan product is not associative, but it is power associative, i.e., x ◦ (x ◦ x) = (x ◦ x) ◦ x for all x ∈ IRn. Thus we may, without fear of ambiguity, write x^m for the product of m copies of x, and x^{m+n} = x^m ◦ x^n for all positive integers m and n. We define x^0 = e. Besides, we point out that Kn is not closed under the Jordan product.
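As a quick editorial illustration (not part of the original text), the Python sketch below implements the Jordan product (5) and checks properties (1)-(3) numerically:

```python
import numpy as np

# Jordan product (5): x o y = (<x, y>, y1*x2 + x1*y2).
def jordan_prod(x, y):
    return np.r_[x @ y, y[0] * x[1:] + x[0] * y[1:]]

x = np.array([2.0, 1.0, -1.0])
y = np.array([3.0, 0.5, 2.0])
e = np.array([1.0, 0.0, 0.0])

print(np.allclose(jordan_prod(e, x), x))                   # (1) identity element
print(np.allclose(jordan_prod(x, y), jordan_prod(y, x)))   # (2) commutativity
x_sq = jordan_prod(x, x)
print(np.allclose(jordan_prod(x, jordan_prod(x_sq, y)),    # (3) x o (x^2 o y) = x^2 o (x o y)
                  jordan_prod(x_sq, jordan_prod(x, y))))
```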

For each x = (x1, x2) ∈ IR × IRn−1, the determinant and the trace of x are defined by

det(x) = x1² − ‖x2‖², tr(x) = 2x1. (6)

In general, det(x ◦ y) ≠ det(x) det(y) unless x and y are collinear, i.e., x = αy for some α ∈ IR. A vector x = (x1, x2) ∈ IR × IRn−1 is said to be invertible if det(x) ≠ 0. If x is invertible, then there exists a unique y ∈ IRn satisfying x ◦ y = y ◦ x = e. We call this y the inverse of x and denote it by x−1. In fact, we have

x−1 = (x1, −x2)/(x1² − ‖x2‖²) = (tr(x)e − x)/det(x).


Therefore, x ∈ int(Kn) if and only if x−1 ∈ int(Kn). Moreover, if x ∈ int(Kn), then x−k = (x^k)−1 is also well-defined. For any x ∈ Kn, it is known that there exists a unique vector in Kn, denoted by x^{1/2}, such that (x^{1/2})² = x^{1/2} ◦ x^{1/2} = x.

Next we introduce the spectral factorization. Let x = (x1, x2) ∈ IR × IRn−1; then x can be decomposed as

x = λ1(x)u(1)x + λ2(x)u(2)x, (7)

where λi(x) and u(i)x are the spectral values and the associated spectral vectors given by

λi(x) = x1 + (−1)^i ‖x2‖,

u(i)x = (1/2)(1, (−1)^i x2/‖x2‖) if x2 ≠ 0, and u(i)x = (1/2)(1, (−1)^i w̄2) if x2 = 0, (8)

for i = 1, 2, with w̄2 being any vector in IRn−1 satisfying ‖w̄2‖ = 1. If x2 ≠ 0, the factorization is unique. In the sequel, for any x ∈ IRn, we write λ(x) := (λ1(x), λ2(x)), where λ1(x), λ2(x) are the spectral values of x.
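The decomposition (7)-(8) is straightforward to implement; the sketch below (our illustration) computes the spectral values and vectors and verifies (7) and the determinant identity of Property 2.1 (c):

```python
import numpy as np

# Spectral factorization (7)-(8) of x with respect to K^n.
def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    # when x2 = 0, any unit vector w2 may be used; we pick a coordinate axis
    w = x2 / nrm if nrm > 0 else np.r_[1.0, np.zeros(x2.size - 1)]
    lam = (x1 - nrm, x1 + nrm)
    u = (0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w])
    return lam, u

x = np.array([3.0, 1.0, 2.0])
(l1, l2), (u1, u2) = spectral(x)
print(np.allclose(l1 * u1 + l2 * u2, x))              # reconstructs x as in (7)
print(np.isclose(l1 * l2, x[0]**2 - x[1:] @ x[1:]))   # det(x) = lam1*lam2, cf. (6)
```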

The spectral decomposition along with the Jordan algebra associated with SOC has some basic properties as below, whose proofs can be found in [12, 13].

Property 2.1 For any x = (x1, x2) ∈ IR × IRn−1 with the spectral values λ1(x), λ2(x) and spectral vectors u(1)x, u(2)x given as above, we have

(a) u(1)x and u(2)x are orthogonal under the Jordan product and have length 1/√2, i.e., u(1)x ◦ u(2)x = 0 and ‖u(1)x‖ = ‖u(2)x‖ = 1/√2.

(b) u(1)x and u(2)x are idempotent under the Jordan product, i.e., u(i)x ◦ u(i)x = u(i)x for i = 1, 2.

(c) The determinant, the trace and the Euclidean norm of x can be represented in terms of λ1(x), λ2(x):

det(x) = λ1(x)λ2(x), tr(x) = λ1(x) + λ2(x), ‖x‖² = (λ1(x)² + λ2(x)²)/2.

(d) λ1(x), λ2(x) are nonnegative (positive) if and only if x ∈ Kn (x ∈ int(Kn)).

Finally, for any function g : IR → IR, one can define a corresponding function gsoc(x) in IRn by applying g to the spectral values of the spectral decomposition of x ∈ IRn with respect to Kn. In [3, 13], the following vector-valued function was considered:

gsoc(x) = g(λ1(x)) u(1)x + g(λ2(x)) u(2)x, ∀x = (x1, x2) ∈ IR × IRn−1. (9)

If g is defined only on a subset of IR, then gsoc is defined on the corresponding subset of IRn. The definition in (9) is unambiguous whether x2 ≠ 0 or x2 = 0.
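For instance, the following sketch (ours) implements (9) and checks that g(t) = √t reproduces the square root x^{1/2} discussed above, i.e., x^{1/2} ◦ x^{1/2} = x:

```python
import numpy as np

# g^soc of (9): apply g to the spectral values of x.
def g_soc(g, x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    w = x2 / nrm if nrm > 0 else np.r_[1.0, np.zeros(x2.size - 1)]
    u1, u2 = 0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w]
    return g(x1 - nrm) * u1 + g(x1 + nrm) * u2

x = np.array([2.0, 1.0, 0.5])             # x in int(K^3): ||x2|| < x1
s = g_soc(np.sqrt, x)
s_sq = np.r_[s @ s, 2.0 * s[0] * s[1:]]   # Jordan square s o s
print(np.allclose(s_sq, x))               # True
```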


Lemma 2.1 ([13, Proposition 5.2] or [3, Proposition 4]) Given a function g : IR → IR, let gsoc(x) be the vector-valued function defined by (9). If g is differentiable (respectively, continuously differentiable), then gsoc(x) is also differentiable (respectively, continuously differentiable), and its Jacobian at x = (x1, x2) ∈ IR × IRn−1 is given by the formula

∇gsoc(x) = g′(x1) I if x2 = 0, and

∇gsoc(x) = [ b           c x2T/‖x2‖
             c x2/‖x2‖   aI + (b − a) x2x2T/‖x2‖² ] if x2 ≠ 0, (10)

where

a = (g(λ2(x)) − g(λ1(x)))/(λ2(x) − λ1(x)), b = (g′(λ2(x)) + g′(λ1(x)))/2, c = (g′(λ2(x)) − g′(λ1(x)))/2. (11)
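The sketch below (our illustration) builds the Jacobian from (10)-(11) and validates it against central finite differences for g = exp; the x2 = 0 branch returns g′(x1)I as in (10):

```python
import numpy as np

def g_soc(g, x):
    """g^soc of (9), via the spectral decomposition."""
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    w = x2 / nrm if nrm > 0 else np.r_[1.0, np.zeros(x2.size - 1)]
    return g(x1 - nrm) * 0.5 * np.r_[1.0, -w] + g(x1 + nrm) * 0.5 * np.r_[1.0, w]

def jacobian_g_soc(g, dg, x):
    """Jacobian of g^soc at x per (10)-(11); dg is the derivative of g."""
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    n = x.size
    if nrm == 0:
        return dg(x1) * np.eye(n)
    l1, l2 = x1 - nrm, x1 + nrm
    a = (g(l2) - g(l1)) / (l2 - l1)      # coefficients of (11)
    b = (dg(l2) + dg(l1)) / 2.0
    c = (dg(l2) - dg(l1)) / 2.0
    w = x2 / nrm
    J = np.empty((n, n))
    J[0, 0], J[0, 1:], J[1:, 0] = b, c * w, c * w
    J[1:, 1:] = a * np.eye(n - 1) + (b - a) * np.outer(w, w)
    return J

x = np.array([1.5, 0.3, -0.4])
J = jacobian_g_soc(np.exp, np.exp, x)
eps = 1e-6
J_fd = np.column_stack([(g_soc(np.exp, x + eps*ei) - g_soc(np.exp, x - eps*ei)) / (2*eps)
                        for ei in np.eye(3)])
print(np.allclose(J, J_fd, atol=1e-5))   # True
```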

3 Quasi D-functions in SOC and their properties

In this section, we present a class of distance measures on the SOC and discuss their relations with the D-function and the double-regularized Bregman distance [24]. To this end, we need a class of functions φ : IR+ → IR satisfying Property 3.1 below, in which the function d : IR+ × IR++ → IR is defined by

d(s, t) = φ(s) − φ(t) − φ′(t)(s − t), ∀s ∈ IR+, t ∈ IR++. (12)

Property 3.1 (a) φ is continuously differentiable on IR++;

(b) φ is strictly convex and continuous on IR+;

(c) For each γ ∈ IR, the level sets {s ∈ IR+ | d(s, t) ≤ γ} and {t ∈ IR++ | d(s, t) ≤ γ} are bounded for any t ∈ IR++ and s ∈ IR+, respectively;

(d) If {tk} ⊂ IR++ is a sequence such that lim_{k→+∞} tk = 0, then for all s ∈ IR++, lim_{k→+∞} φ′(tk)(s − tk) = −∞.

A function φ satisfying (d) is said in [14] to be boundary coercive. If we set φ(x) = +∞ for x ∉ IR+, then φ becomes a closed proper strictly convex function on IR. Furthermore, by [15, Lemma 2.4 (d)] and Property 3.1 (c), it is not difficult to see that φ(x) and Σ_{i=1}^n φ(xi) are B-functions on IR and IRn, respectively. Unless otherwise stated, in the rest of this paper we always assume that φ satisfies Property 3.1.

From the discussions in Section 2, clearly, the vector-valued functions

φsoc(x) = φ(λ1(x)) u(1)x + φ(λ2(x)) u(2)x (13)

and

(φ′)soc(x) = φ′(λ1(x)) u(1)x + φ′(λ2(x)) u(2)x (14)

are well-defined over Kn and int(Kn), respectively. In view of this, we define

H(x, y) := tr[φsoc(x) − φsoc(y) − (φ′)soc(y) ◦ (x − y)] for x ∈ Kn, y ∈ int(Kn), and H(x, y) := +∞ otherwise. (15)

In what follows, we will show that the function H : IRn × IRn → (−∞, +∞] enjoys some favorable properties similar to those of the D-function. Particularly, we prove that H(x, y) ≥ 0 for any x ∈ Kn, y ∈ int(Kn), and moreover, H(x, y) = 0 if and only if x = y.

Consequently, it can be regarded as a distance measure on the SOC.
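The definition (15) is easy to evaluate through the spectral decomposition; the Python sketch below (ours, not from the paper) computes H(x, y) for a user-supplied pair (φ, φ′) satisfying Property 3.1, using the facts that tr[φsoc(x)] = φ(λ1(x)) + φ(λ2(x)) (Lemma 3.2 (b) below) and tr(u ◦ v) = 2⟨u, v⟩:

```python
import numpy as np

def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    w = x2 / nrm if nrm > 0 else np.r_[1.0, np.zeros(x2.size - 1)]
    return (x1 - nrm, x1 + nrm), (0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w])

def H(x, y, phi, dphi):
    """Quasi D-function (15); +inf outside K^n x int(K^n)."""
    (lx1, lx2), _ = spectral(x)
    (ly1, ly2), (uy1, uy2) = spectral(y)
    if lx1 < 0 or ly1 <= 0:                          # x in K^n, y in int(K^n)
        return np.inf
    dphi_soc_y = dphi(ly1) * uy1 + dphi(ly2) * uy2   # (phi')^soc(y), cf. (14)
    return (phi(lx1) + phi(lx2) - phi(ly1) - phi(ly2)
            - 2.0 * dphi_soc_y @ (x - y))            # tr(u o v) = 2<u, v>

# e.g. the entropy choice phi(t) = t*ln(t) - t (Example 3.1 below):
phi, dphi = lambda t: t * np.log(t) - t, np.log
x, y = np.array([2.0, 0.5, 1.0]), np.array([1.5, 0.2, -0.3])
print(H(x, y, phi, dphi) > 0, np.isclose(H(y, y, phi, dphi), 0.0))
```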

We start with two technical lemmas that will be used in the subsequent analysis.

Lemma 3.1 For any x = (x1, x2), y = (y1, y2) ∈ IR × IRn−1, we have

tr(x ◦ y) ≤ ⟨λ(x), λ(y)⟩,

where λ(x) = (λ1(x), λ2(x)) and λ(y) = (λ1(y), λ2(y)), and the inequality holds with equality if and only if x2 = αy2 for some α > 0.

Proof. From equations (5)-(6) and the Cauchy-Schwarz inequality,

tr(x ◦ y) = 2⟨x, y⟩ = 2x1y1 + 2x2Ty2 ≤ 2x1y1 + 2‖x2‖·‖y2‖.

On the other hand, from the definition of the spectral values given by (8),

⟨λ(x), λ(y)⟩ = λ1(x)λ1(y) + λ2(x)λ2(y)
= (x1 − ‖x2‖)(y1 − ‖y2‖) + (x1 + ‖x2‖)(y1 + ‖y2‖)
= 2x1y1 + 2‖x2‖·‖y2‖.

Combining the two displays, we immediately obtain the inequality. In addition, the inequality becomes an equality if and only if x2Ty2 = ‖x2‖·‖y2‖, which is equivalent to saying that x2 = αy2 for some α > 0. □
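A quick numerical sanity check of Lemma 3.1 (our illustration):

```python
import numpy as np

def lam(x):
    """Spectral values of x per (8)."""
    nrm = np.linalg.norm(x[1:])
    return np.array([x[0] - nrm, x[0] + nrm])

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert 2.0 * (x @ y) <= lam(x) @ lam(y) + 1e-12   # tr(x o y) = 2<x, y>

# equality when x2 = alpha*y2 with alpha > 0:
y = rng.standard_normal(4)
x = np.r_[rng.standard_normal(), 2.0 * y[1:]]
print(np.isclose(2.0 * (x @ y), lam(x) @ lam(y)))     # True
```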

Lemma 3.2 Let φsoc(x) and (φ′)soc(x) be given as in (13) and (14), respectively. Then,

(a) φsoc(x) is continuously differentiable on int(Kn) with the gradient ∇φsoc(x) satisfying ∇φsoc(x)e = (φ′)soc(x).

(b) tr[φsoc(x)] = Σ_{i=1}^2 φ(λi(x)) and tr[(φ′)soc(x)] = Σ_{i=1}^2 φ′(λi(x)).

(c) tr[φsoc(x)] is continuously differentiable on int(Kn) with ∇tr[φsoc(x)] = 2∇φsoc(x)e.


(d) tr[φsoc(x)] is strictly convex and continuous on Kn.

(e) If {yk} ⊂ int(Kn) is a sequence such that lim_{k→+∞} yk = ȳ ∈ bd(Kn), then

lim_{k→+∞} ⟨∇tr[φsoc(yk)], x − yk⟩ = −∞ for all x ∈ int(Kn).

In other words, the function tr[φsoc(x)] is boundary coercive.

Proof. (a) The first part follows directly from Lemma 2.1. Now we prove the second part. If x2 ≠ 0, then by formulas (10)-(11) it is easy to compute that

∇φsoc(x)e = ( (φ′(λ1(x)) + φ′(λ2(x)))/2 , ((φ′(λ2(x)) − φ′(λ1(x)))/2) · x2/‖x2‖ ).

In addition, using equations (8) and (14), we can verify that the vector on the right-hand side is exactly (φ′)soc(x). Therefore, ∇φsoc(x)e = (φ′)soc(x). If x2 = 0, then using (10) and (8), we can also prove that ∇φsoc(x)e = (φ′)soc(x).

(b) The result follows directly from Property 2.1 (c) and equations (13)-(14).

(c) From part (a) and the fact that tr[φsoc(x)] = tr[φsoc(x) ◦ e] = 2⟨φsoc(x), e⟩, clearly, tr[φsoc(x)] is continuously differentiable on int(Kn). Applying the chain rule for the inner product of two functions immediately yields ∇tr[φsoc(x)] = 2∇φsoc(x)e.

(d) It is clear that tr[φsoc(x)] is continuous on Kn. We next prove that it is strictly convex on Kn. For any x, y ∈ Kn with x ≠ y and α, β ∈ (0, 1) with α + β = 1, we have

λ1(αx + βy) = αx1 + βy1 − ‖αx2 + βy2‖ ≥ αλ1(x) + βλ1(y),
λ2(αx + βy) = αx1 + βy1 + ‖αx2 + βy2‖ ≤ αλ2(x) + βλ2(y),

which implies that

αλ1(x) + βλ1(y) ≤ λ1(αx + βy) ≤ λ2(αx + βy) ≤ αλ2(x) + βλ2(y).

On the other hand,

λ1(αx + βy) + λ2(αx + βy) = 2αx1 + 2βy1 = [αλ1(x) + βλ1(y)] + [αλ2(x) + βλ2(y)].

The last two relations imply that there exists ρ ∈ [0, 1] such that

λ1(αx + βy) = ρ[αλ1(x) + βλ1(y)] + (1 − ρ)[αλ2(x) + βλ2(y)],
λ2(αx + βy) = (1 − ρ)[αλ1(x) + βλ1(y)] + ρ[αλ2(x) + βλ2(y)].


Thus, from Property 2.1, it follows that

tr[φsoc(αx + βy)] = φ(λ1(αx + βy)) + φ(λ2(αx + βy))
= φ( ρ(αλ1(x) + βλ1(y)) + (1 − ρ)(αλ2(x) + βλ2(y)) ) + φ( (1 − ρ)(αλ1(x) + βλ1(y)) + ρ(αλ2(x) + βλ2(y)) )
≤ ρφ(αλ1(x) + βλ1(y)) + (1 − ρ)φ(αλ2(x) + βλ2(y)) + (1 − ρ)φ(αλ1(x) + βλ1(y)) + ρφ(αλ2(x) + βλ2(y))
= φ(αλ1(x) + βλ1(y)) + φ(αλ2(x) + βλ2(y))
< αφ(λ1(x)) + βφ(λ1(y)) + αφ(λ2(x)) + βφ(λ2(y))
= α tr[φsoc(x)] + β tr[φsoc(y)],

where the first and last equalities follow from Lemma 3.2 (b), and the two inequalities are due to the strict convexity of φ on IR++. From the definition of strict convexity, we thus prove that the conclusion holds.

(e) From parts (a) and (c), we readily obtain the equality

∇tr[φsoc(x)] = 2(φ′)soc(x), ∀x ∈ int(Kn). (16)

Using this relation and Lemma 3.1, we then have

⟨∇tr[φsoc(yk)], x − yk⟩ = 2⟨(φ′)soc(yk), x − yk⟩
= tr[(φ′)soc(yk) ◦ (x − yk)]
= tr[(φ′)soc(yk) ◦ x] − tr[(φ′)soc(yk) ◦ yk]
≤ Σ_{i=1}^2 φ′(λi(yk))λi(x) − tr[(φ′)soc(yk) ◦ yk]. (17)

In addition, by Property 2.1 (a)-(b), for any y ∈ int(Kn) we can compute

(φ′)soc(y) ◦ y = [φ′(λ1(y))u(1)y + φ′(λ2(y))u(2)y] ◦ [λ1(y)u(1)y + λ2(y)u(2)y]
= φ′(λ1(y))λ1(y)u(1)y + φ′(λ2(y))λ2(y)u(2)y, (18)

which implies that

tr[(φ′)soc(yk) ◦ yk] = Σ_{i=1}^2 φ′(λi(yk))λi(yk). (19)

Combining (17) and (19) immediately yields

⟨∇tr[φsoc(yk)], x − yk⟩ ≤ Σ_{i=1}^2 φ′(λi(yk))[λi(x) − λi(yk)]. (20)


Note that λ2(ȳ) ≥ λ1(ȳ) = 0 and λ2(x) ≥ λ1(x) > 0, since ȳ ∈ bd(Kn) and x ∈ int(Kn). Hence, if λ2(ȳ) = 0, then by Property 3.1 (d) and the continuity of λi(·) for i = 1, 2,

lim_{k→+∞} φ′(λi(yk))[λi(x) − λi(yk)] = −∞, i = 1, 2,

which means that

lim_{k→+∞} Σ_{i=1}^2 φ′(λi(yk))[λi(x) − λi(yk)] = −∞. (21)

If λ2(ȳ) > 0, then lim_{k→+∞} φ′(λ2(yk))[λ2(x) − λ2(yk)] is finite and

lim_{k→+∞} φ′(λ1(yk))[λ1(x) − λ1(yk)] = −∞,

and therefore the result in (21) also holds in this case. Combining (21) with (20), we prove that the conclusion holds. □

Using the relation in (16), we have that for any x ∈ Kn and y ∈ int(Kn),

tr[(φ′)soc(y) ◦ (x − y)] = 2⟨(φ′)soc(y), x − y⟩ = ⟨∇tr[φsoc(y)], x − y⟩.

As a consequence, the function H(x, y) in (15) can be rewritten as

H(x, y) = tr[φsoc(x)] − tr[φsoc(y)] − ⟨∇tr[φsoc(y)], x − y⟩ for x ∈ Kn, y ∈ int(Kn), and H(x, y) = +∞ otherwise. (22)

Using this representation, we next investigate several important properties of H(x, y).

Proposition 3.1 Let H(x, y) be the function defined as in (15) or (22). Then,

(a) H(x, y) is continuous on Kn × int(Kn), and for any y ∈ int(Kn), the function H(·, y) is strictly convex on Kn.

(b) For any given y ∈ int(Kn), H(·, y) is continuously differentiable on int(Kn) with

∇xH(x, y) = ∇tr[φsoc(x)] − ∇tr[φsoc(y)] = 2[(φ′)soc(x) − (φ′)soc(y)]. (23)

(c) H(x, y) ≥ Σ_{i=1}^2 d(λi(x), λi(y)) ≥ 0 for any x ∈ Kn and y ∈ int(Kn), where d(·, ·) is defined by (12). Moreover, H(x, y) = 0 if and only if x = y.

(d) For every γ ∈ IR, the partial level sets LH(y, γ) = {x ∈ Kn : H(x, y) ≤ γ} and LH(x, γ) = {y ∈ int(Kn) : H(x, y) ≤ γ} are bounded for any y ∈ int(Kn) and x ∈ Kn, respectively.

(e) If {yk} ⊂ int(Kn) is a sequence converging to y ∈ int(Kn), then H(y, yk) → 0.


(f) If {xk} ⊂ int(Kn) and {yk} ⊂ int(Kn) are sequences such that {yk} → y ∈ int(Kn), {xk} is bounded, and H(xk, yk) → 0, then xk → y.

Proof. (a) Note that φsoc(x), (φ′)soc(y) and (φ′)soc(y) ◦ (x − y) are continuous for any x ∈ Kn and y ∈ int(Kn), and the trace function tr(·) is also continuous; hence H(x, y) is continuous on Kn × int(Kn). From Lemma 3.2 (d), tr[φsoc(x)] is strictly convex over Kn, whereas −tr[φsoc(y)] − ⟨∇tr[φsoc(y)], x − y⟩ is clearly convex on Kn for fixed y ∈ int(Kn).

This means that H(·, y) is strictly convex for any y ∈ int(Kn).

(b) By Lemma 3.2 (c), the function H(·, y) for any given y ∈ int(Kn) is continuously differentiable on int(Kn). The first equality in (23) is obvious and the second is due to (16).

(c) The result follows directly from the following equalities and inequalities:

H(x, y) = tr[φsoc(x)] − tr[φsoc(y)] − tr[(φ′)soc(y) ◦ (x − y)]
= tr[φsoc(x)] − tr[φsoc(y)] − tr[(φ′)soc(y) ◦ x] + tr[(φ′)soc(y) ◦ y]
≥ tr[φsoc(x)] − tr[φsoc(y)] − Σ_{i=1}^2 φ′(λi(y))λi(x) + tr[(φ′)soc(y) ◦ y]
= Σ_{i=1}^2 [ φ(λi(x)) − φ(λi(y)) − φ′(λi(y))λi(x) + φ′(λi(y))λi(y) ]
= Σ_{i=1}^2 [ φ(λi(x)) − φ(λi(y)) − φ′(λi(y))(λi(x) − λi(y)) ]
= Σ_{i=1}^2 d(λi(x), λi(y)) ≥ 0,

where the first equality is due to (15), the second and fourth are obvious, the third follows from Lemma 3.2 (b) and (18), and the last one is from (12); the first inequality follows from Lemma 3.1, and the last one is due to the strict convexity of φ on IR+. Note that tr[φsoc(x)] is strictly convex on Kn by Lemma 3.2 (d), and therefore H(x, y) = 0 if and only if x = y by (22).

(d) From part (c), we have that LH(y, γ) ⊆ {x ∈ Kn | Σ_{i=1}^2 d(λi(x), λi(y)) ≤ γ}. By Property 3.1 (c), the set on the right-hand side is bounded, so LH(y, γ) is bounded for y ∈ int(Kn). Similarly, LH(x, γ) is bounded for x ∈ Kn.

From parts (a)-(d), we immediately obtain the results in (e) and (f). □

Remark 3.1 (i) From (22), it is not difficult to see that H(x, y) is exactly a distance measure induced by tr[φsoc(x)] via formula (4). Therefore, if n = 1 and φ is a Bregman function with zone IR++, i.e., φ also satisfies the property:


(e) if {sk} ⊆ IR+ and {tk} ⊂ IR++ are sequences such that tk → t, {sk} is bounded, and d(sk, tk) → 0, then sk → t;

then H(x, y) reduces to the Bregman distance function d(x, y) in (12).

(ii) When n > 1, H(x, y) is generally not a Bregman distance even if φ is a Bregman function with zone IR++, noting that Proposition 3.1 (e) and (f) do not hold for {xk} ⊆ bd(Kn) and y ∈ bd(Kn). By the proof of Proposition 3.1 (c), the main reason is that, in order to guarantee that

tr[(φ′)soc(y) ◦ x] = Σ_{i=1}^2 φ′(λi(y))λi(x)

for any x ∈ Kn and y ∈ int(Kn), the relation [(φ′)soc(y)]2 = αx2 for some α > 0 is required, where [(φ′)soc(y)]2 is the vector composed of the last n − 1 elements of (φ′)soc(y). It is very stringent for φ to satisfy such a relation. By this, tr[φsoc(x)] is not a B-function [15] on IRn either, even if φ itself is a B-function.

(iii) We observe that H(x, y) is inseparable, whereas the double-regularized distance function proposed by [24] belongs to the separable class of functions. In view of this, H(x, y) cannot become a double-regularized distance function in Kn × int(Kn), even when φ is such that d̃(s, t) = d(s, t)/φ″(t) + (µ/2)(s − t)² is a double-regularized component (see [24]).

In view of Proposition 3.1 and Remark 3.1, we call H(x, y) a quasi D-function in this paper. In the following, we present several specific examples of quasi D-functions.

Example 3.1. Let φ(t) = t ln t − t (with the convention 0 ln 0 = 0). It is easy to verify that φ satisfies Property 3.1. By [13, Proposition 3.2 (b)] and (13)-(14), we can compute that for any x ∈ Kn and y ∈ int(Kn),

φsoc(x) = x ◦ ln x − x and (φ′)soc(y) = ln y.

Therefore,

H(x, y) = tr(x ◦ ln x − x ◦ ln y + y − x) for x ∈ Kn, y ∈ int(Kn), and H(x, y) = +∞ otherwise.

Using this entropy-like distance, Chen [7] proposed a proximal-like algorithm for solving a special case of the CSOCP with A = I and b = 0.
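As an editorial illustration, the closed form of Example 3.1 can be evaluated directly via ln applied through (9); the sketch below checks nonnegativity and H(y, y) = 0:

```python
import numpy as np

# Entropy-like quasi D-function of Example 3.1, using tr(z) = 2*z_1.
def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    w = x2 / nrm if nrm > 0 else np.r_[1.0, np.zeros(x2.size - 1)]
    return (x1 - nrm, x1 + nrm), (0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w])

def ln_soc(x):
    """ln applied to x through (9)."""
    (l1, l2), (u1, u2) = spectral(x)
    return np.log(l1) * u1 + np.log(l2) * u2

def jordan_prod(x, y):
    return np.r_[x @ y, y[0] * x[1:] + x[0] * y[1:]]

def H_entropy(x, y):
    """H(x, y) = tr(x o ln x - x o ln y + y - x) for interior x, y."""
    z = jordan_prod(x, ln_soc(x)) - jordan_prod(x, ln_soc(y)) + y - x
    return 2.0 * z[0]

x, y = np.array([2.0, 0.5, 1.0]), np.array([1.5, 0.2, -0.3])
print(H_entropy(x, y) > 0, np.isclose(H_entropy(y, y), 0.0))   # True True
```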

Example 3.2. Let φ(t) = t² − √t. It is not hard to verify that φ satisfies Property 3.1. From [3, 13], we have that for any x ∈ Kn,

x² = x ◦ x = λ1(x)² u(1)x + λ2(x)² u(2)x and √x = √λ1(x) u(1)x + √λ2(x) u(2)x.

By a direct computation, we then obtain for any x ∈ Kn and y ∈ int(Kn),

φsoc(x) = x ◦ x − √x and (φ′)soc(y) = 2y − (tr(√y)e − √y)/(2√det(y)).

This implies that

H(x, y) = tr[ (x − y)² − (√x − √y) + (tr(√y)e − √y) ◦ (x − y)/(2√det(y)) ] for x ∈ Kn, y ∈ int(Kn), and H(x, y) = +∞ otherwise.

Example 3.3. Take φ(t) = t ln t − (1 + t) ln(1 + t) + (1 + t) ln 2 (with the convention 0 ln 0 = 0). It is easily shown that φ satisfies Property 3.1. Using Property 2.1 (a)-(b), we can compute that for any x ∈ Kn and y ∈ int(Kn),

φsoc(x) = x ◦ ln x − (e + x) ◦ ln(e + x) + (e + x) ln 2

and

(φ′)soc(y) = ln y − ln(e + y) + e ln 2.

Consequently,

H(x, y) = tr[ x ◦ (ln x − ln y) − (e + x) ◦ (ln(e + x) − ln(e + y)) ] for x ∈ Kn, y ∈ int(Kn), and H(x, y) = +∞ otherwise.

In addition, from [14, 23], it follows that Σ_{i=1}^m φ(ζi) generated by φ in the above examples is a Bregman function with zone S = IRm+, and consequently Σ_{i=1}^m d(ζi, ξi) defined as in (12) is a D-function induced by Σ_{i=1}^m φ(ζi).

To close this section, we present another important property of H(x, y).

Proposition 3.2 Let H(x, y) be defined as in (15) or (22). Then, for all x, y ∈ int(Kn) and z ∈ Kn, the following three-points identity holds:

H(z, x) + H(x, y) − H(z, y) = ⟨∇tr[φsoc(y)] − ∇tr[φsoc(x)], z − x⟩
= tr[ ((φ′)soc(y) − (φ′)soc(x)) ◦ (z − x) ].

Proof. Using the definition of H given in (22), we have

⟨∇tr[φsoc(x)], z − x⟩ = tr[φsoc(z)] − tr[φsoc(x)] − H(z, x),
⟨∇tr[φsoc(y)], x − y⟩ = tr[φsoc(x)] − tr[φsoc(y)] − H(x, y),
⟨∇tr[φsoc(y)], z − y⟩ = tr[φsoc(z)] − tr[φsoc(y)] − H(z, y).

Subtracting the first two equations from the last one gives the first equality. By (16),

⟨∇tr[φsoc(y)] − ∇tr[φsoc(x)], z − x⟩ = 2⟨(φ′)soc(y) − (φ′)soc(x), z − x⟩.

This, together with the fact that tr(u ◦ v) = 2⟨u, v⟩, leads to the second equality. □
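The three-points identity is easy to confirm numerically; the sketch below (ours) does so for the entropy φ of Example 3.1:

```python
import numpy as np

def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    w = x2 / nrm if nrm > 0 else np.r_[1.0, np.zeros(x2.size - 1)]
    return (x1 - nrm, x1 + nrm), (0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w])

phi, dphi = lambda t: t * np.log(t) - t, np.log   # entropy phi of Example 3.1

def dphi_soc(y):
    (l1, l2), (u1, u2) = spectral(y)
    return dphi(l1) * u1 + dphi(l2) * u2

def H(x, y):
    (lx1, lx2), _ = spectral(x)
    (ly1, ly2), _ = spectral(y)
    return phi(lx1) + phi(lx2) - phi(ly1) - phi(ly2) - 2.0 * dphi_soc(y) @ (x - y)

rng = np.random.default_rng(1)
def interior_point(n=3):
    v = rng.standard_normal(n)
    v[0] = np.linalg.norm(v[1:]) + rng.uniform(0.1, 1.0)   # forces lam1 > 0
    return v

x, y, z = interior_point(), interior_point(), interior_point()
lhs = H(z, x) + H(x, y) - H(z, y)
rhs = 2.0 * (dphi_soc(y) - dphi_soc(x)) @ (z - x)   # tr[((phi')^soc(y)-(phi')^soc(x)) o (z-x)]
print(np.isclose(lhs, rhs))                          # True
```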


4 Proximal-like algorithm for the CSOCP

In this section, we propose a proximal-like algorithm for solving the CSOCP based on the quasi D-function H(x, y). For notational convenience, we denote by F the set

F = {ζ ∈ IRm | Aζ + b ⪰Kn 0}. (24)

It is easy to verify that F is convex and its interior int(F) is given by

int(F) = {ζ ∈ IRm | Aζ + b ≻Kn 0}. (25)

Let ψ : IRm → (−∞, +∞] be the function defined by

ψ(ζ) = tr[φsoc(Aζ + b)] if ζ ∈ F, and ψ(ζ) = +∞ otherwise. (26)

By Lemma 3.2, it is easily shown that the following conclusions hold for ψ(ζ).

Lemma 4.1 Let ψ(ζ) be given as in (26). If the matrix A has full rank m, then

(a) ψ(ζ) is continuously differentiable on int(F) with ∇ψ(ζ) = 2AT(φ′)soc(Aζ + b).

(b) ψ(ζ) is strictly convex and continuous on F.

(c) ψ(ζ) is boundary coercive, i.e., if {ξk} ⊂ int(F) is such that lim_{k→+∞} ξk = ξ ∈ bd(F), then for all ζ ∈ int(F), there holds lim_{k→+∞} ∇ψ(ξk)T(ζ − ξk) = −∞.

Let D(ζ, ξ) be the function induced by the above ψ(ζ) via formula (4), i.e.,

D(ζ, ξ) = ψ(ζ) − ψ(ξ) − ⟨∇ψ(ξ), ζ − ξ⟩. (27)

Then, from (26) and (22), it is not difficult to see that

D(ζ, ξ) = H(Aζ + b, Aξ + b). (28)

So, by Proposition 3.1 and Lemma 4.1, we can prove the following conclusions.

Lemma 4.2 Let D(ζ, ξ) be given by (27) or (28). If the matrix A has full rank m, then

(a) D(ζ, ξ) is continuous on F × int(F), and for any given ξ ∈ int(F), the function D(·, ξ) is strictly convex on F.

(b) For any fixed ξ ∈ int(F), D(·, ξ) is continuously differentiable on int(F) with

∇ζD(ζ, ξ) = ∇ψ(ζ) − ∇ψ(ξ) = 2AT[ (φ′)soc(Aζ + b) − (φ′)soc(Aξ + b) ].


(c) D(ζ, ξ) ≥ Σ_{i=1}^2 d(λi(Aζ + b), λi(Aξ + b)) ≥ 0 for any ζ ∈ F and ξ ∈ int(F), where d(·, ·) is defined by (12). Moreover, D(ζ, ξ) = 0 if and only if ζ = ξ.

(d) For each γ ∈ IR, the partial level sets LD(ξ, γ) = {ζ ∈ F : D(ζ, ξ) ≤ γ} and LD(ζ, γ) = {ξ ∈ int(F) : D(ζ, ξ) ≤ γ} are bounded for any ξ ∈ int(F) and ζ ∈ F, respectively.

The proximal-like algorithm that we propose for the CSOCP is defined as follows:

ζ0 ∈ int(F), (29)

ζk = argmin_{ζ∈F} { f(ζ) + (1/μk) D(ζ, ζk−1) } (k ≥ 1), (30)

where {μk}k≥1 is a sequence of positive numbers. To establish the convergence of the algorithm, we make the following assumptions for the CSOCP:

(A1) inf{f(ζ) | ζ ∈ F} := f∗ > −∞ and dom(f) ∩ int(F) ≠ ∅.

(A2) The matrix A has maximal rank m.

Remark 4.1 Assumption (A1) is elementary for the solution of the CSOCP. Assumption (A2) is common in the solution of SOCPs, and it is obviously satisfied when F = Kn. Moreover, if we consider the standard SOCP

min cTx s.t. Ax = b, x ∈ Kn, (31)

where A ∈ IRm×n with m ≤ n, b ∈ IRm, and c ∈ IRn, the assumption that A has full row rank m is standard. Consequently, its dual problem, given by

max bTy s.t. c − ATy ⪰Kn 0, (32)

satisfies assumption (A2). This shows that we can solve the SOCP by applying the proximal-like algorithm in (29)-(30) to the dual problem (32).
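To make the iteration (29)-(30) concrete, the following Python sketch (ours) runs it on a small instance with the entropy φ of Example 3.1; the data (A, b), the quadratic f whose unconstrained minimizer lies outside F, µk ≡ 1, and the derivative-free inner solver are all illustrative assumptions, not prescribed by the paper:

```python
import numpy as np
from scipy.optimize import minimize

def spectral(x):
    x1, x2 = x[0], x[1:]
    nrm = np.linalg.norm(x2)
    w = x2 / nrm if nrm > 0 else np.r_[1.0, np.zeros(x2.size - 1)]
    return (x1 - nrm, x1 + nrm), (0.5 * np.r_[1.0, -w], 0.5 * np.r_[1.0, w])

A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5], [0.0, 0.3]])  # full rank m = 2
b = np.array([5.0, 0.0, 0.0, 0.0])           # A*0 + b lies in int(K^4)
f = lambda z: (z[0] + 8.0)**2 + z[1]**2      # its minimizer (-8, 0) is infeasible

def psi(z):
    """psi(zeta) = tr[phi^soc(A zeta + b)] as in (26); +inf outside F."""
    (l1, l2), _ = spectral(A @ z + b)
    return (l1*np.log(l1) - l1) + (l2*np.log(l2) - l2) if l1 > 0 else np.inf

def grad_psi(z):
    """grad psi(zeta) = 2 A^T (phi')^soc(A zeta + b), cf. Lemma 4.1 (a)."""
    (l1, l2), (u1, u2) = spectral(A @ z + b)
    return 2.0 * A.T @ (np.log(l1) * u1 + np.log(l2) * u2)

def D(z, xi):
    """Quasi D-function (27)."""
    return psi(z) - psi(xi) - grad_psi(xi) @ (z - xi)

zeta = np.zeros(2)                                        # (29): zeta^0 in int(F)
for k in range(40):
    zeta = minimize(lambda z, xi=zeta: f(z) + D(z, xi),   # subproblem (30), mu_k = 1
                    zeta, method="Nelder-Mead").x
print(zeta, f(zeta))   # iterates stay interior and approach the boundary solution
```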

In what follows, we prove the convergence of the proximal-like algorithm in (29)-(30) under assumptions (A1) and (A2). We first show that the algorithm is well-defined.

Proposition 4.1 Suppose that assumptions (A1)-(A2) hold. Then the algorithm described by (29)-(30) generates a sequence {ζk} ⊂ int(F) such that

−(2/μk) AT[ (φ′)soc(Aζk + b) − (φ′)soc(Aζk−1 + b) ] ∈ ∂f(ζk). (33)


Proof. The proof proceeds by induction. For k = 0, the claim clearly holds. Assume that ζk−1 ∈ int(F). Let fk(ζ) := f(ζ) + (1/μk) D(ζ, ζk−1). Then assumption (A1) and Lemma 4.2 (d) imply that fk has bounded level sets in F. By the lower semi-continuity of f and Lemma 4.2 (a), the minimization problem min_{ζ∈F} fk(ζ), i.e., the subproblem (30), has solutions. Moreover, the solution ζk is unique due to the convexity of f and the strict convexity of D(·, ξ). In the following, we prove that ζk ∈ int(F).

By [20, Theorem 23.8] and the definition of D(ζ, ξ) given by (27), we can verify that ζk is the only ζ ∈ dom(f) ∩ F such that

(2/μk) AT(φ′)soc(Aζk−1 + b) ∈ ∂( f(ζ) + (1/μk)ψ(ζ) + δ(ζ|F) ), (34)

where δ(ζ|F) = 0 if ζ ∈ F and +∞ otherwise. We will show that

∂( f(ζ) + (1/μk)ψ(ζ) + δ(ζ|F) ) = ∅ for all ζ ∈ bd(F), (35)

which by (34) implies that ζk ∈ int(F). Take ζ ∈ bd(F) and assume that there exists w ∈ ∂( f(ζ) + (1/μk)ψ(ζ) ). Take ζ̂ ∈ dom(f) ∩ int(F) and let

ζl = (1 − εl)ζ + εl ζ̂ (36)

with lim_{l→+∞} εl = 0. From the convexity of int(F) and dom(f), it then follows that ζl ∈ dom(f) ∩ int(F), and moreover, lim_{l→+∞} ζl = ζ. Consequently,

εl wT(ζ̂ − ζ) = wT(ζl − ζ)
≤ f(ζl) − f(ζ) + (1/μk)[ ψ(ζl) − ψ(ζ) ]
≤ f(ζl) − f(ζ) + (1/μk)⟨2AT(φ′)soc(Aζl + b), ζl − ζ⟩
≤ εl(f(ζ̂) − f(ζ)) + (1/μk)(εl/(1 − εl)) tr[ (φ′)soc(Aζl + b) ◦ (Aζ̂ − Aζl) ],

where the first equality is due to (36), the first inequality follows from the definition of the subdifferential and the convexity of f(ζ) + (1/μk)ψ(ζ) on F, the second one is due to the convexity and differentiability of ψ(ζ) on int(F), and the last one is from (36) and the convexity of f. Using Lemma 3.1 and (18), we then have

μk(1 − εl)[ f(ζ) − f(ζ̂) + wT(ζ̂ − ζ) ]
≤ tr[ (φ′)soc(Aζl + b) ◦ (Aζ̂ + b) ] − tr[ (φ′)soc(Aζl + b) ◦ (Aζl + b) ]
≤ Σ_{i=1}^2 [ φ′(λi(Aζl + b))λi(Aζ̂ + b) − φ′(λi(Aζl + b))λi(Aζl + b) ]
= Σ_{i=1}^2 φ′(λi(Aζl + b))[ λi(Aζ̂ + b) − λi(Aζl + b) ].

Since ζ ∈ bd(F), i.e., Aζ + b ∈ bd(Kn), it follows that lim_{l→+∞} λ1(Aζl + b) = 0. Thus, using Property 3.1 (d) and following the same line as the proof of Lemma 3.2 (d),
