
Optimization, vol. 55, pp. 363-385, 2006

The convex and monotone functions associated with second-order cone¹

Jein-Shan Chen²
Department of Mathematics
National Taiwan Normal University
Taipei 11677, Taiwan

November 18, 2004 (revised April 2, 2006)

Abstract. Like the matrix-valued functions used in solution methods for the semidefinite program (SDP) and the semidefinite complementarity problem (SDCP), the vector-valued functions associated with the second-order cone are defined analogously and are also used in solution methods for the second-order cone program (SOCP) and the second-order cone complementarity problem (SOCCP). In this paper, we study these vector-valued functions further. In particular, we define so-called SOC-convex and SOC-monotone functions for any given function $f : \mathbb{R} \to \mathbb{R}$. We discuss the SOC-convexity and SOC-monotonicity of some simple functions, e.g., $f(t) = t^2$, $t^3$, $1/t$, $t^{1/2}$, $|t|$, and $[t]_+$. Some characterizations of SOC-convex and SOC-monotone functions are studied, and some conjectures about the relationship between SOC-convex and SOC-monotone functions are proposed.

Key words. Second-order cone, convex function, monotone function, complementarity, spectral decomposition

AMS subject classifications. 26A27, 26B05, 26B35, 49J52, 90C33.

1 Introduction

The second-order cone (SOC) in $\mathbb{R}^n$, also called the Lorentz cone, is defined by
\[
\mathcal{K}^n = \{(x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1} \mid \|x_2\| \le x_1\}, \tag{1}
\]
where $\|\cdot\|$ denotes the Euclidean norm. If $n = 1$, let $\mathcal{K}^n$ denote the set of nonnegative reals $\mathbb{R}_+$. For any $x, y \in \mathbb{R}^n$, we write $x \succeq_{\mathcal{K}^n} y$ if $x - y \in \mathcal{K}^n$, and write $x \succ_{\mathcal{K}^n} y$ if $x - y \in \operatorname{int}(\mathcal{K}^n)$. In other words, $x \succeq_{\mathcal{K}^n} 0$ if and only if $x \in \mathcal{K}^n$, and $x \succ_{\mathcal{K}^n} 0$ if and only if $x \in \operatorname{int}(\mathcal{K}^n)$. The relation $\succeq_{\mathcal{K}^n}$ is a partial ordering but not a linear ordering on $\mathcal{K}^n$; i.e., there exist $x, y \in \mathcal{K}^n$ such that neither $x \succeq_{\mathcal{K}^n} y$ nor $y \succeq_{\mathcal{K}^n} x$. To see this, for $n = 2$, let $x = (1, 1)$ and $y = (1, 0)$. Then $x - y = (0, 1) \notin \mathcal{K}^n$ and $y - x = (0, -1) \notin \mathcal{K}^n$.

¹ This work is supported by the National Science Council of Taiwan.

² E-mail: jschen@math.ntnu.edu.tw; FAX: 886-2-29332342.

Recently, the second-order cone has received much attention in optimization, particularly in the context of applications and solution methods for the second-order cone program (SOCP) [14] and the second-order cone complementarity problem (SOCCP) [5, 6, 7, 8]. These solution methods rely on the spectral decomposition associated with the SOC, whose basic concepts are as follows. Any $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ can be decomposed as

\[
x = \lambda_1 u^{(1)} + \lambda_2 u^{(2)}, \tag{2}
\]
where $\lambda_1, \lambda_2$ and $u^{(1)}, u^{(2)}$ are the spectral values and the associated spectral vectors of $x$, given by
\[
\lambda_i = x_1 + (-1)^i \|x_2\|, \tag{3}
\]
\[
u^{(i)} =
\begin{cases}
\dfrac{1}{2}\left(1, \, (-1)^i \dfrac{x_2}{\|x_2\|}\right), & \text{if } x_2 \neq 0, \\[2mm]
\dfrac{1}{2}\bigl(1, \, (-1)^i w\bigr), & \text{if } x_2 = 0,
\end{cases} \tag{4}
\]
for $i = 1, 2$, with $w$ being any vector in $\mathbb{R}^{n-1}$ satisfying $\|w\| = 1$. If $x_2 \neq 0$, the decomposition is unique.

For any function $f : \mathbb{R} \to \mathbb{R}$, the following vector-valued function associated with $\mathcal{K}^n$ ($n \ge 1$) was considered in [8, 10]:

\[
f^{\mathrm{soc}}(x) = f(\lambda_1) u^{(1)} + f(\lambda_2) u^{(2)}, \quad \forall x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}. \tag{5}
\]
If $f$ is defined only on a subset of $\mathbb{R}$, then $f^{\mathrm{soc}}$ is defined on the corresponding subset of $\mathbb{R}^n$. The definition (5) is unambiguous whether $x_2 \neq 0$ or $x_2 = 0$. The cases $f^{\mathrm{soc}}(x) = x^{1/2}, x^2, \exp(x)$ are discussed in the book [9]. In fact, the definition (5) is analogous to the one associated with the semidefinite cone $S^n_+$; see [19, 21].
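Definition (5) is straightforward to compute. As an illustration, here is a minimal pure-Python sketch (the helper names `spectral` and `fsoc` are ours, not from the paper):

```python
import math

def spectral(x):
    """Spectral values and vectors of x = (x1, x2), as in (3)-(4)."""
    x1, x2 = x[0], x[1:]
    norm2 = math.sqrt(sum(t * t for t in x2))
    lams = [x1 - norm2, x1 + norm2]
    if norm2 > 0:
        w = [t / norm2 for t in x2]
    else:
        w = [1.0] + [0.0] * (len(x2) - 1)   # any unit vector works when x2 = 0
    us = [[0.5] + [0.5 * ((-1) ** i) * t for t in w] for i in (1, 2)]
    return lams, us

def fsoc(f, x):
    """f_soc(x) = f(lam1) u(1) + f(lam2) u(2), definition (5)."""
    lams, us = spectral(x)
    return [f(lams[0]) * a + f(lams[1]) * b for a, b in zip(us[0], us[1])]

# f(t) = t^2 reproduces the Jordan-product square: for x = (3, 1, -2),
# x o x = (14, 6, -12).
print(fsoc(lambda t: t * t, [3.0, 1.0, -2.0]))
```

Applying `fsoc` with the identity function returns `x` itself, which is a quick sanity check on the decomposition.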

In this paper, we define so-called SOC-convex and SOC-monotone functions (see Sec. 3), which parallel matrix-convex and matrix-monotone functions (see [2, 11]). We study the SOC-convexity and SOC-monotonicity of some simple functions, e.g., $f(t) = t^2$, $t^3$, $1/t$, $t^{1/2}$, $|t|$, and $[t]_+$. Then we explore characterizations of SOC-convex and SOC-monotone functions. In addition, we state some conjectures about the relationship between SOC-convex and SOC-monotone functions. It is our intention to extend the existing properties of matrix-convex and matrix-monotone functions shown in [2, 11]. As will be seen in Sec. 3, the vector-valued functions associated with the SOC are accompanied by the Jordan product (defined in Sec. 2). However, unlike matrix multiplication, the Jordan product is not associative, which causes difficulty when we do the extension. Therefore, the ideas behind the proofs are usually quite different from those for matrix-valued functions. The vector-valued functions associated with the SOC are heavily used in solution methods for SOCP and SOCCP, so further study of these functions will be helpful for developing and analyzing more solution methods. That is one of the main motivations for this paper.

In what follows and throughout the paper, $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product and $\|\cdot\|$ the Euclidean norm. The notation "$:=$" means "define". For any $f : \mathbb{R}^n \to \mathbb{R}$, $\nabla f(x)$ denotes the gradient of $f$ at $x$. For any differentiable mapping $F = (F_1, F_2, \dots, F_m)^T : \mathbb{R}^n \to \mathbb{R}^m$, $\nabla F(x) = [\nabla F_1(x) \cdots \nabla F_m(x)]$ is an $n \times m$ matrix denoting the transposed Jacobian of $F$ at $x$. For any symmetric matrices $A, B \in \mathbb{R}^{n \times n}$, we write $A \succeq B$ (respectively, $A \succ B$) to mean that $A - B$ is positive semidefinite (respectively, positive definite).

2 Jordan product and related properties

For any $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ and $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, we define their Jordan product as
\[
x \circ y = (x^T y, \; y_1 x_2 + x_1 y_2). \tag{6}
\]
We write $x^2$ to mean $x \circ x$ and write $x + y$ to mean the usual componentwise addition of vectors. Then $\circ$, $+$, together with $e = (1, 0, \dots, 0)^T \in \mathbb{R}^n$, have the following basic properties (see [9, 10]):

(1) $e \circ x = x$ for all $x \in \mathbb{R}^n$.
(2) $x \circ y = y \circ x$ for all $x, y \in \mathbb{R}^n$.
(3) $x \circ (x^2 \circ y) = x^2 \circ (x \circ y)$ for all $x, y \in \mathbb{R}^n$.
(4) $(x + y) \circ z = x \circ z + y \circ z$ for all $x, y, z \in \mathbb{R}^n$.

The Jordan product is not associative. For example, for $n = 3$, let $x = (1, -1, 1)$ and $y = z = (1, 0, 1)$; then $(x \circ y) \circ z = (4, -1, 4) \neq x \circ (y \circ z) = (4, -2, 4)$. However, it is power associative, i.e., $x \circ (x \circ x) = (x \circ x) \circ x$ for all $x \in \mathbb{R}^n$. Thus we may, without fear of ambiguity, write $x^m$ for the product of $m$ copies of $x$, and $x^{m+n} = x^m \circ x^n$ for all positive integers $m$ and $n$. We define $x^0 = e$. Moreover, $\mathcal{K}^n$ is not closed under the Jordan product. For example, $x = (\sqrt{2}, 1, 1) \in \mathcal{K}^3$ and $y = (\sqrt{2}, 1, -1) \in \mathcal{K}^3$, but $x \circ y = (2, 2\sqrt{2}, 0) \notin \mathcal{K}^3$.
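The non-associativity example can be verified directly from definition (6); a small Python sketch (the helper name `jordan` is ours):

```python
def jordan(x, y):
    """Jordan product x o y = (<x, y>, y1*x2 + x1*y2), definition (6)."""
    inner = sum(a * b for a, b in zip(x, y))
    return [inner] + [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

x, y, z = [1, -1, 1], [1, 0, 1], [1, 0, 1]
print(jordan(jordan(x, y), z))   # [4, -1, 4]
print(jordan(x, jordan(y, z)))   # [4, -2, 4] -- the two differ, so o is not associative
```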

For each $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, the determinant and the trace of $x$ are defined by
\[
\det(x) = x_1^2 - \|x_2\|^2, \qquad \operatorname{tr}(x) = 2 x_1.
\]

In general, $\det(x \circ y) \neq \det(x)\det(y)$ unless $x_2 = y_2$. A vector $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ is said to be invertible if $\det(x) \neq 0$. If $x$ is invertible, then there exists a unique $y = (y_1, y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ satisfying $x \circ y = y \circ x = e$. We call this $y$ the inverse of $x$ and denote it by $x^{-1}$. In fact, we have
\[
x^{-1} = \frac{1}{x_1^2 - \|x_2\|^2}\,(x_1, -x_2) = \frac{1}{\det(x)}\bigl(\operatorname{tr}(x)\, e - x\bigr).
\]
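The closed form of $x^{-1}$ is easy to check against $x \circ x^{-1} = e$; a quick sketch (helper names are ours):

```python
def jordan(x, y):
    """Jordan product x o y = (<x, y>, y1*x2 + x1*y2)."""
    inner = sum(a * b for a, b in zip(x, y))
    return [inner] + [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

def inverse(x):
    """x^{-1} = (x1, -x2) / det(x), defined when det(x) = x1^2 - ||x2||^2 != 0."""
    d = x[0] ** 2 - sum(t * t for t in x[1:])
    return [x[0] / d] + [-t / d for t in x[1:]]

# x o x^{-1} recovers the identity element e = (1, 0, 0), up to rounding.
print(jordan([2.0, 1.0, 0.0], inverse([2.0, 1.0, 0.0])))
```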


Therefore, $x \in \operatorname{int}(\mathcal{K}^n)$ if and only if $x^{-1} \in \operatorname{int}(\mathcal{K}^n)$. Moreover, if $x \in \operatorname{int}(\mathcal{K}^n)$, then $x^{-k} = (x^k)^{-1}$ is also well-defined. For any $x \in \mathcal{K}^n$, it is known that there exists a unique vector in $\mathcal{K}^n$, denoted by $x^{1/2}$, such that $(x^{1/2})^2 = x^{1/2} \circ x^{1/2} = x$. Indeed,

\[
x^{1/2} = \left(s, \, \frac{x_2}{2s}\right), \qquad \text{where } s = \sqrt{\frac{1}{2}\left(x_1 + \sqrt{x_1^2 - \|x_2\|^2}\right)}.
\]
In the above formula, the term $x_2/s$ is defined to be the zero vector if $x_2 = 0$ and $s = 0$, i.e., if $x = 0$.

For any $x \in \mathbb{R}^n$, we always have $x^2 \in \mathcal{K}^n$ (i.e., $x^2 \succeq_{\mathcal{K}^n} 0$). Hence there exists a unique vector $(x^2)^{1/2} \in \mathcal{K}^n$, denoted by $|x|$. It is easy to verify that $|x| \succeq_{\mathcal{K}^n} 0$ and $x^2 = |x|^2$ for any $x \in \mathbb{R}^n$. It is also known that $|x| \succeq_{\mathcal{K}^n} x$. For any $x \in \mathbb{R}^n$, we define $[x]_+$ to be the nearest-point projection (in the Euclidean norm, since the Jordan product does not induce a norm) of $x$ onto $\mathcal{K}^n$, in analogy with the definition for $\mathbb{R}^n_+$. In other words, $[x]_+$ is the optimal solution of the parametric SOCP: $[x]_+ = \operatorname{argmin}\{\|x - y\| \mid y \in \mathcal{K}^n\}$. It is well known that $[x]_+ = \frac{1}{2}(x + |x|)$; see Property 2.2(f).
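Both $|x|$ and $[x]_+$ arise from definition (5) applied to $f(t) = |t|$ and $f(t) = [t]_+$ (see Property 2.2), so the identity $[x]_+ = \frac{1}{2}(x + |x|)$ can be checked numerically; a sketch with an illustrative helper `apply_soc` (our name):

```python
import math

def apply_soc(f, x):
    """f_soc(x) = f(lam1) u(1) + f(lam2) u(2), with u(i) as in (4)."""
    x2 = x[1:]
    n = math.sqrt(sum(t * t for t in x2))
    w = [t / n for t in x2] if n > 0 else [1.0] + [0.0] * (len(x2) - 1)
    l1, l2 = x[0] - n, x[0] + n
    return [0.5 * (f(l1) + f(l2))] + [0.5 * (f(l2) - f(l1)) * t for t in w]

x = [1.0, 2.0, -2.0]
absx = apply_soc(abs, x)                     # |x| = |lam1| u(1) + |lam2| u(2)
proj = apply_soc(lambda t: max(t, 0.0), x)   # [x]_+ = [lam1]_+ u(1) + [lam2]_+ u(2)
half = [0.5 * (a + b) for a, b in zip(x, absx)]
print(all(abs(a - b) < 1e-12 for a, b in zip(proj, half)))   # True: [x]_+ = (x + |x|)/2
```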

Next, for any $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, we define a linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$ by
\[
L_x : \mathbb{R}^n \to \mathbb{R}^n, \qquad y \mapsto L_x y := \begin{bmatrix} x_1 & x_2^T \\ x_2 & x_1 I \end{bmatrix} y.
\]
It can easily be verified that $x \circ y = L_x y$ for all $y \in \mathbb{R}^n$, and that $L_x$ is positive definite (and hence invertible) if and only if $x \in \operatorname{int}(\mathcal{K}^n)$. However, $L_x^{-1} y \neq x^{-1} \circ y$ for some $x \in \operatorname{int}(\mathcal{K}^n)$ and $y \in \mathbb{R}^n$; i.e., $L_x^{-1} \neq L_{x^{-1}}$ in general.
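The identity $x \circ y = L_x y$ says that the arrow-shaped matrix above represents multiplication by $x$; a quick pure-Python check (helper names are ours):

```python
def arrow(x):
    """The matrix [[x1, x2^T], [x2, x1 I]] representing L_x."""
    n = len(x)
    M = [[0.0] * n for _ in range(n)]
    M[0][0] = x[0]
    for i in range(1, n):
        M[0][i] = M[i][0] = x[i]
        M[i][i] = x[0]
    return M

def jordan(x, y):
    """Jordan product x o y = (<x, y>, y1*x2 + x1*y2)."""
    inner = sum(a * b for a, b in zip(x, y))
    return [inner] + [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

x, y = [3.0, 1.0, -2.0], [1.0, 0.5, 2.0]
Lxy = [sum(m * t for m, t in zip(row, y)) for row in arrow(x)]
print(Lxy == jordan(x, y))   # True
```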

The spectral decomposition, along with the Jordan algebra associated with the SOC, entails some basic properties, listed below. We omit the proofs since they can be found in [9, 10].

Property 2.1 For any $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ with spectral values $\lambda_1, \lambda_2$ and spectral vectors $u^{(1)}, u^{(2)}$ given as in (3)-(4), we have

(a) $u^{(1)}$ and $u^{(2)}$ are orthogonal under the Jordan product and have length $1/\sqrt{2}$, i.e.,
\[
u^{(1)} \circ u^{(2)} = 0, \qquad \|u^{(1)}\| = \|u^{(2)}\| = \frac{1}{\sqrt{2}}.
\]

(b) $u^{(1)}$ and $u^{(2)}$ are idempotent under the Jordan product, i.e., $u^{(i)} \circ u^{(i)} = u^{(i)}$, $i = 1, 2$.


(c) $\lambda_1, \lambda_2$ are nonnegative (positive) if and only if $x \in \mathcal{K}^n$ ($x \in \operatorname{int}(\mathcal{K}^n)$), i.e.,
\[
\lambda_i \ge 0, \ \forall i = 1, 2 \iff x \succeq_{\mathcal{K}^n} 0, \qquad
\lambda_i > 0, \ \forall i = 1, 2 \iff x \succ_{\mathcal{K}^n} 0.
\]

(d) The determinant, the trace and the Euclidean norm of $x$ can all be represented in terms of $\lambda_1, \lambda_2$:
\[
\det(x) = \lambda_1 \lambda_2, \qquad \operatorname{tr}(x) = \lambda_1 + \lambda_2, \qquad \|x\|^2 = \frac{1}{2}(\lambda_1^2 + \lambda_2^2).
\]

Property 2.2 For any $x = (x_1, x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ with spectral values $\lambda_1, \lambda_2$ and spectral vectors $u^{(1)}, u^{(2)}$ given as in (3)-(4), we have

(a) $x^2 = \lambda_1^2 u^{(1)} + \lambda_2^2 u^{(2)}$.

(b) If $x \in \mathcal{K}^n$, then $x^{1/2} = \sqrt{\lambda_1}\, u^{(1)} + \sqrt{\lambda_2}\, u^{(2)}$.

(c) $|x| = |\lambda_1| u^{(1)} + |\lambda_2| u^{(2)}$.

(d) $[x]_+ = [\lambda_1]_+ u^{(1)} + [\lambda_2]_+ u^{(2)}$, $[x]_- = [\lambda_1]_- u^{(1)} + [\lambda_2]_- u^{(2)}$.

(e) $|x| = [x]_+ + [-x]_+ = [x]_+ - [x]_-$.

(f) $[x]_+ = \frac{1}{2}(x + |x|)$, $[x]_- = \frac{1}{2}(x - |x|)$.

Property 2.3 (a) Any $x \in \mathbb{R}^n$ satisfies $|x| \succeq_{\mathcal{K}^n} x$.

(b) For any $x, y \succeq_{\mathcal{K}^n} 0$, if $x \succeq_{\mathcal{K}^n} y$, then $x^{1/2} \succeq_{\mathcal{K}^n} y^{1/2}$.

(c) For any $x, y \in \mathbb{R}^n$, if $x^2 \succeq_{\mathcal{K}^n} y^2$, then $|x| \succeq_{\mathcal{K}^n} |y|$.

(d) For any $x \in \mathbb{R}^n$, $x \succeq_{\mathcal{K}^n} 0$ if and only if $\langle x, y \rangle \ge 0$ for all $y \succeq_{\mathcal{K}^n} 0$.

(e) For any $x \succeq_{\mathcal{K}^n} 0$ and $y \in \mathbb{R}^n$, $x^2 \succeq_{\mathcal{K}^n} y^2 \implies x \succeq_{\mathcal{K}^n} y$.

In the following propositions, we study further characterizations of the spectral values, the determinant and the trace of $x$, as well as the partial order $\succeq_{\mathcal{K}^n}$. In fact, Propositions 2.1-2.4 are results parallel to those associated with the positive semidefinite cone; see [11]. Even though $\mathcal{K}^n$ and $S^n_+$ are both self-dual cones and share similar properties, as we will see, the ideas for proving these results are quite different. One reason is that the Jordan product is not associative, as mentioned earlier.

Proposition 2.1 For any $x \succ_{\mathcal{K}^n} 0$ and $y \succ_{\mathcal{K}^n} 0$, the following results hold.


(a) If $x \succeq_{\mathcal{K}^n} y$, then $\det(x) \ge \det(y)$ and $\operatorname{tr}(x) \ge \operatorname{tr}(y)$.

(b) If $x \succeq_{\mathcal{K}^n} y$, then $\lambda_i(x) \ge \lambda_i(y)$, $\forall i = 1, 2$.

Proof. (a) From the definition, we know that
\[
\det(x) = x_1^2 - \|x_2\|^2, \quad \operatorname{tr}(x) = 2 x_1, \qquad \det(y) = y_1^2 - \|y_2\|^2, \quad \operatorname{tr}(y) = 2 y_1.
\]
Since $x - y = (x_1 - y_1, x_2 - y_2) \succeq_{\mathcal{K}^n} 0$, we have $\|x_2 - y_2\| \le x_1 - y_1$. Thus $x_1 \ge y_1$, and then $\operatorname{tr}(x) \ge \operatorname{tr}(y)$. Besides, the assumption on $x$ and $y$ gives
\[
x_1 - y_1 \ge \|x_2 - y_2\| \ge \bigl|\, \|x_2\| - \|y_2\| \,\bigr|, \tag{7}
\]
which is equivalent to $x_1 - \|x_2\| \ge y_1 - \|y_2\| > 0$ and $x_1 + \|x_2\| \ge y_1 + \|y_2\| > 0$. Hence,
\[
\det(x) = x_1^2 - \|x_2\|^2 = (x_1 + \|x_2\|)(x_1 - \|x_2\|) \ge (y_1 + \|y_2\|)(y_1 - \|y_2\|) = \det(y).
\]

(b) From the definition of spectral values, we know that
\[
\lambda_1(x) = x_1 - \|x_2\|, \quad \lambda_2(x) = x_1 + \|x_2\|
\quad \text{and} \quad
\lambda_1(y) = y_1 - \|y_2\|, \quad \lambda_2(y) = y_1 + \|y_2\|.
\]
Then, by inequality (7) in the proof of part (a), the results follow immediately. $\Box$

Proposition 2.2 For any $x \succeq_{\mathcal{K}^n} 0$ and $y \succeq_{\mathcal{K}^n} 0$, we have

(a) $\det(x + y) \ge \det(x) + \det(y)$.

(b) $\det(x \circ y) \le \det(x) \cdot \det(y)$.

(c) $\det\bigl(\alpha x + (1 - \alpha) y\bigr) \ge \alpha^2 \det(x) + (1 - \alpha)^2 \det(y)$, $\forall\, 0 < \alpha < 1$.

(d) $\bigl(\det(e + x)\bigr)^{1/2} \ge 1 + \det(x)^{1/2}$, $\forall x \succeq_{\mathcal{K}^n} 0$.

(e) $\det(e + x + y) \le \det(e + x) \cdot \det(e + y)$.

Proof. (a) For any $x \succeq_{\mathcal{K}^n} 0$ and $y \succeq_{\mathcal{K}^n} 0$, we know $\|x_2\| \le x_1$ and $\|y_2\| \le y_1$, which implies
\[
|\langle x_2, y_2 \rangle| \le \|x_2\| \cdot \|y_2\| \le x_1 y_1.
\]


Hence we obtain
\begin{align*}
\det(x + y) &= (x_1 + y_1)^2 - \|x_2 + y_2\|^2 \\
&= \bigl(x_1^2 - \|x_2\|^2\bigr) + \bigl(y_1^2 - \|y_2\|^2\bigr) + 2\bigl(x_1 y_1 - \langle x_2, y_2 \rangle\bigr) \\
&\ge \bigl(x_1^2 - \|x_2\|^2\bigr) + \bigl(y_1^2 - \|y_2\|^2\bigr) \\
&= \det(x) + \det(y).
\end{align*}

(b) Applying the Cauchy inequality gives
\begin{align*}
\det(x \circ y) &= \langle x, y \rangle^2 - \|x_1 y_2 + y_1 x_2\|^2 \\
&= \bigl(x_1 y_1 + \langle x_2, y_2 \rangle\bigr)^2 - \bigl(x_1^2 \|y_2\|^2 + 2 x_1 y_1 \langle x_2, y_2 \rangle + y_1^2 \|x_2\|^2\bigr) \\
&= x_1^2 y_1^2 + \langle x_2, y_2 \rangle^2 - x_1^2 \|y_2\|^2 - y_1^2 \|x_2\|^2 \\
&\le x_1^2 y_1^2 + \|x_2\|^2 \cdot \|y_2\|^2 - x_1^2 \|y_2\|^2 - y_1^2 \|x_2\|^2 \\
&= \bigl(x_1^2 - \|x_2\|^2\bigr) \cdot \bigl(y_1^2 - \|y_2\|^2\bigr) \\
&= \det(x) \cdot \det(y).
\end{align*}

(c) For any $x \succeq_{\mathcal{K}^n} 0$ and $y \succeq_{\mathcal{K}^n} 0$, it is clear that $\alpha x \succeq_{\mathcal{K}^n} 0$ and $(1 - \alpha) y \succeq_{\mathcal{K}^n} 0$ for every $0 < \alpha < 1$. In addition, we observe that $\det(\alpha x) = \alpha^2 \det(x)$ for all $\alpha > 0$. Hence,
\[
\det\bigl(\alpha x + (1 - \alpha) y\bigr) \ge \det(\alpha x) + \det\bigl((1 - \alpha) y\bigr) = \alpha^2 \det(x) + (1 - \alpha)^2 \det(y),
\]
where the inequality follows from part (a).

(d) For any $x \succeq_{\mathcal{K}^n} 0$, we know $\det(x) = \lambda_1 \lambda_2 \ge 0$, where $\lambda_i$ are the spectral values of $x$. Hence, by the arithmetic-geometric mean inequality,
\[
\det(e + x) = (1 + \lambda_1)(1 + \lambda_2) \ge \bigl(1 + \sqrt{\lambda_1 \lambda_2}\bigr)^2 = \bigl(1 + \det(x)^{1/2}\bigr)^2.
\]
Taking square roots on both sides yields the desired result.

(e) Again, for any $x \succeq_{\mathcal{K}^n} 0$ and $y \succeq_{\mathcal{K}^n} 0$, we have
\[
x_1 - \|x_2\| \ge 0, \qquad y_1 - \|y_2\| \ge 0, \qquad |\langle x_2, y_2 \rangle| \le \|x_2\| \cdot \|y_2\| \le x_1 y_1. \tag{8}
\]

Also, we know $\det(e + x + y) = (1 + x_1 + y_1)^2 - \|x_2 + y_2\|^2$, $\det(e + x) = (1 + x_1)^2 - \|x_2\|^2$ and $\det(e + y) = (1 + y_1)^2 - \|y_2\|^2$. Hence,
\begin{align*}
& \det(e + x) \cdot \det(e + y) - \det(e + x + y) \\
&= \bigl((1 + x_1)^2 - \|x_2\|^2\bigr)\bigl((1 + y_1)^2 - \|y_2\|^2\bigr) - \bigl((1 + x_1 + y_1)^2 - \|x_2 + y_2\|^2\bigr) \\
&= 2 x_1 y_1 + 2 \langle x_2, y_2 \rangle + 2 x_1 y_1^2 + 2 x_1^2 y_1 - 2 y_1 \|x_2\|^2 - 2 x_1 \|y_2\|^2 \\
&\qquad + x_1^2 y_1^2 - y_1^2 \|x_2\|^2 - x_1^2 \|y_2\|^2 + \|x_2\|^2 \cdot \|y_2\|^2 \\
&= 2\bigl(x_1 y_1 + \langle x_2, y_2 \rangle\bigr) + 2 x_1 \bigl(y_1^2 - \|y_2\|^2\bigr) + 2 y_1 \bigl(x_1^2 - \|x_2\|^2\bigr) + \bigl(x_1^2 - \|x_2\|^2\bigr)\bigl(y_1^2 - \|y_2\|^2\bigr) \\
&\ge 0,
\end{align*}
where we multiply out all the terms to obtain the second equality, and the last inequality holds by (8). $\Box$
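Parts (a) and (b) of Proposition 2.2 are easy to stress-test numerically on random points of $\mathcal{K}^4$; a sketch (the sampling helper `rand_soc` is ours):

```python
import random

def det(x):
    """det(x) = x1^2 - ||x2||^2."""
    return x[0] ** 2 - sum(t * t for t in x[1:])

def jordan(x, y):
    """Jordan product x o y = (<x, y>, y1*x2 + x1*y2)."""
    inner = sum(a * b for a, b in zip(x, y))
    return [inner] + [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

def rand_soc(n):
    """A random point of K^n: head at least the tail's norm."""
    x2 = [random.uniform(-1, 1) for _ in range(n - 1)]
    r = sum(t * t for t in x2) ** 0.5
    return [r + random.random()] + x2

random.seed(0)
for _ in range(1000):
    x, y = rand_soc(4), rand_soc(4)
    s = [a + b for a, b in zip(x, y)]
    assert det(s) >= det(x) + det(y) - 1e-9              # Prop. 2.2(a)
    assert det(jordan(x, y)) <= det(x) * det(y) + 1e-9   # Prop. 2.2(b)
print("Prop. 2.2(a)-(b) hold on all samples")
```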

Proposition 2.3 For any $x, y \in \mathbb{R}^n$, we have

(a) $\operatorname{tr}(x + y) = \operatorname{tr}(x) + \operatorname{tr}(y)$.

(b) $\lambda_1(x)\lambda_2(y) + \lambda_1(y)\lambda_2(x) \le \operatorname{tr}(x \circ y) \le \lambda_1(x)\lambda_1(y) + \lambda_2(x)\lambda_2(y)$.

(c) $\operatorname{tr}\bigl(\alpha x + (1 - \alpha) y\bigr) = \alpha \operatorname{tr}(x) + (1 - \alpha) \operatorname{tr}(y)$, $\forall \alpha \in \mathbb{R}$.

Proof. Parts (a) and (c) are trivial, so it remains to verify (b). Using the fact that $\operatorname{tr}(x \circ y) = 2 \langle x, y \rangle$, we obtain
\begin{align*}
\lambda_1(x)\lambda_2(y) + \lambda_1(y)\lambda_2(x)
&= (x_1 - \|x_2\|)(y_1 + \|y_2\|) + (x_1 + \|x_2\|)(y_1 - \|y_2\|) \\
&= 2\bigl(x_1 y_1 - \|x_2\| \|y_2\|\bigr) \\
&\le 2\bigl(x_1 y_1 + \langle x_2, y_2 \rangle\bigr) \\
&= 2 \langle x, y \rangle = \operatorname{tr}(x \circ y) \\
&\le 2\bigl(x_1 y_1 + \|x_2\| \|y_2\|\bigr) \\
&= (x_1 - \|x_2\|)(y_1 - \|y_2\|) + (x_1 + \|x_2\|)(y_1 + \|y_2\|),
\end{align*}
which completes the proof. $\Box$

The following two lemmas are well-known results in matrix analysis; they are key to proving Proposition 2.4, which is an important extension of the function $\ln \det(\cdot)$ from the positive semidefinite cone to the SOC.

Lemma 2.1 For any nonzero vector $x \in \mathbb{R}^n$, the matrix $x x^T$ is positive semidefinite (p.s.d.) with exactly one nonzero eigenvalue, namely $\|x\|^2$.

Proof. The proof is routine; we omit it. $\Box$


Lemma 2.2 Suppose that a symmetric matrix is partitioned as
\[
\begin{bmatrix} A & B \\ B^T & C \end{bmatrix},
\]
where $A$ and $C$ are square. Then this matrix is positive definite (p.d.) if and only if $A$ is positive definite and $C \succ B^T A^{-1} B$.

Proof. This is Theorem 7.7.6 in [11]. $\Box$

Proposition 2.4 For any $x \succ_{\mathcal{K}^n} 0$ and $y \succ_{\mathcal{K}^n} 0$, we have

(a) the real-valued function $f(x) = \ln(\det(x))$ is concave on $\operatorname{int}(\mathcal{K}^n)$;

(b) $\det\bigl(\alpha x + (1 - \alpha) y\bigr) \ge (\det(x))^{\alpha} (\det(y))^{1 - \alpha}$, $\forall\, 0 < \alpha < 1$;

(c) the real-valued function $f(x) = \ln(\det(x^{-1}))$ is convex on $\operatorname{int}(\mathcal{K}^n)$;

(d) the real-valued function $f(x) = \operatorname{tr}(x^{-1})$ is convex on $\operatorname{int}(\mathcal{K}^n)$.

Proof. (a) Since $\operatorname{int}(\mathcal{K}^n)$ is a convex set, it is enough to show that $\nabla^2 f(x)$ is negative semidefinite. Direct computation gives
\[
\nabla f(x) = \left( \frac{2 x_1}{x_1^2 - \|x_2\|^2}, \; \frac{-2 x_2}{x_1^2 - \|x_2\|^2} \right) = 2 x^{-1},
\]
and, after simplification,
\[
\nabla^2 f(x) = \frac{-2}{(x_1^2 - \|x_2\|^2)^2}
\begin{bmatrix}
x_1^2 + \|x_2\|^2 & -2 x_1 x_2^T \\
-2 x_1 x_2 & (x_1^2 - \|x_2\|^2) I + 2 x_2 x_2^T
\end{bmatrix}.
\]

Denote the matrix in brackets by
\[
\begin{bmatrix} A & B \\ B^T & C \end{bmatrix}
\]
as in Lemma 2.2 (here $A$ is a scalar). Then we have
\begin{align*}
AC - B^T B
&= \bigl(x_1^2 + \|x_2\|^2\bigr)\Bigl(\bigl(x_1^2 - \|x_2\|^2\bigr) I + 2 x_2 x_2^T\Bigr) - 4 x_1^2\, x_2 x_2^T \\
&= \bigl(x_1^4 - \|x_2\|^4\bigr) I - 2\bigl(x_1^2 - \|x_2\|^2\bigr) x_2 x_2^T \\
&= \bigl(x_1^2 - \|x_2\|^2\bigr)\Bigl(\bigl(x_1^2 + \|x_2\|^2\bigr) I - 2 x_2 x_2^T\Bigr) \\
&= \bigl(x_1^2 - \|x_2\|^2\bigr) \cdot M,
\end{align*}
where $M$ denotes the matrix in parentheses in the second-to-last equality. From Lemma 2.1, we know that $x_2 x_2^T$ is p.s.d. with only one nonzero eigenvalue, $\|x_2\|^2$. Hence the eigenvalues of $M$ are $(x_1^2 + \|x_2\|^2) - 2 \|x_2\|^2 = x_1^2 - \|x_2\|^2$ and $x_1^2 + \|x_2\|^2$ with multiplicity $n - 2$, all of which are positive. Thus $M$ is positive definite, so $AC - B^T B \succ 0$; by Lemma 2.2 the bracketed matrix is positive definite, which implies that $\nabla^2 f(x)$ is negative definite and hence negative semidefinite.

(b) From part (a), for all $0 < \alpha < 1$ we have
\begin{align*}
\ln\Bigl(\det\bigl(\alpha x + (1 - \alpha) y\bigr)\Bigr)
&\ge \alpha \ln(\det(x)) + (1 - \alpha) \ln(\det(y)) \\
&= \ln\bigl((\det(x))^{\alpha}\bigr) + \ln\bigl((\det(y))^{1 - \alpha}\bigr) \\
&= \ln\bigl((\det(x))^{\alpha} (\det(y))^{1 - \alpha}\bigr).
\end{align*}
Since the natural logarithm is an increasing function, the desired result follows.

(c) We observe that $\det(x^{-1}) = 1/\det(x)$ for all $x \in \operatorname{int}(\mathcal{K}^n)$. Therefore, $\ln \det(x^{-1}) = -\ln \det(x)$ is a convex function by part (a).

(d) The idea of the proof is the same as for part (a). Since $\operatorname{int}(\mathcal{K}^n)$ is a convex set, it is enough to show that $\nabla^2 f(x)$ is positive semidefinite. Note that
\[
f(x) = \operatorname{tr}(x^{-1}) = \frac{2 x_1}{x_1^2 - \|x_2\|^2}.
\]
Thus, direct computation gives
\[
\nabla^2 f(x) = \frac{2}{(x_1^2 - \|x_2\|^2)^3}
\begin{bmatrix}
2 x_1^3 + 6 x_1 \|x_2\|^2 & -\bigl(6 x_1^2 + 2 \|x_2\|^2\bigr) x_2^T \\
-\bigl(6 x_1^2 + 2 \|x_2\|^2\bigr) x_2 & 2 x_1 \Bigl(\bigl(x_1^2 - \|x_2\|^2\bigr) I + 4 x_2 x_2^T\Bigr)
\end{bmatrix}.
\]

Again, denote the matrix in brackets by
\[
\begin{bmatrix} A & B \\ B^T & C \end{bmatrix}
\]
as in Lemma 2.2 (here $A$ is a scalar). Then we have
\begin{align*}
AC - B^T B
&= 2 x_1 \bigl(2 x_1^3 + 6 x_1 \|x_2\|^2\bigr)\Bigl(\bigl(x_1^2 - \|x_2\|^2\bigr) I + 4 x_2 x_2^T\Bigr) - \bigl(6 x_1^2 + 2 \|x_2\|^2\bigr)^2 x_2 x_2^T \\
&= \bigl(4 x_1^4 + 12 x_1^2 \|x_2\|^2\bigr)\bigl(x_1^2 - \|x_2\|^2\bigr) I - \bigl(20 x_1^4 - 24 x_1^2 \|x_2\|^2 + 4 \|x_2\|^4\bigr) x_2 x_2^T \\
&= \bigl(4 x_1^4 + 12 x_1^2 \|x_2\|^2\bigr)\bigl(x_1^2 - \|x_2\|^2\bigr) I - 4 \bigl(5 x_1^2 - \|x_2\|^2\bigr)\bigl(x_1^2 - \|x_2\|^2\bigr) x_2 x_2^T \\
&= \bigl(x_1^2 - \|x_2\|^2\bigr) \Bigl[\bigl(4 x_1^4 + 12 x_1^2 \|x_2\|^2\bigr) I - 4 \bigl(5 x_1^2 - \|x_2\|^2\bigr) x_2 x_2^T\Bigr] \\
&= \bigl(x_1^2 - \|x_2\|^2\bigr) \cdot M,
\end{align*}
where $M$ denotes the matrix in brackets in the second-to-last equality. From Lemma 2.1, we know that $x_2 x_2^T$ is p.s.d. with only one nonzero eigenvalue, $\|x_2\|^2$. Hence the eigenvalues of $M$ are $4 x_1^4 + 12 x_1^2 \|x_2\|^2 - 20 x_1^2 \|x_2\|^2 + 4 \|x_2\|^4$ and $4 x_1^4 + 12 x_1^2 \|x_2\|^2$ with multiplicity $n - 2$, all of which are positive, since
\[
4 x_1^4 + 12 x_1^2 \|x_2\|^2 - 20 x_1^2 \|x_2\|^2 + 4 \|x_2\|^4
= 4 x_1^4 - 8 x_1^2 \|x_2\|^2 + 4 \|x_2\|^4
= 4 \bigl(x_1^2 - \|x_2\|^2\bigr)^2 > 0.
\]
Thus, by Lemma 2.2, we obtain that $\nabla^2 f(x)$ is positive definite and hence positive semidefinite. Therefore, $f$ is convex on $\operatorname{int}(\mathcal{K}^n)$. $\Box$
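Proposition 2.4(b), the log-concavity inequality for $\det$, can likewise be sampled numerically on $\operatorname{int}(\mathcal{K}^4)$; a sketch (the sampling helper `rand_int_soc` is ours):

```python
import random

def det(x):
    """det(x) = x1^2 - ||x2||^2."""
    return x[0] ** 2 - sum(t * t for t in x[1:])

def rand_int_soc(n):
    """A random point of int(K^n): head strictly larger than the tail's norm."""
    x2 = [random.uniform(-1, 1) for _ in range(n - 1)]
    r = sum(t * t for t in x2) ** 0.5
    return [r + 0.1 + random.random()] + x2

random.seed(1)
for _ in range(1000):
    x, y = rand_int_soc(4), rand_int_soc(4)
    a = random.random()
    z = [a * u + (1 - a) * v for u, v in zip(x, y)]
    # Prop. 2.4(b): det(a x + (1-a) y) >= det(x)^a * det(y)^(1-a)
    assert det(z) >= det(x) ** a * det(y) ** (1 - a) - 1e-9
print("Prop. 2.4(b) holds on all samples")
```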

3 SOC-convex function and SOC-monotone function

In this section, we define SOC-convexity and SOC-monotonicity and study some examples of such functions.

Definition 3.1 Let $f : \mathbb{R} \to \mathbb{R}$. Then

(a) $f$ is said to be SOC-monotone of order $n$ if the corresponding vector-valued function $f^{\mathrm{soc}}$ satisfies
\[
x \succeq_{\mathcal{K}^n} y \implies f^{\mathrm{soc}}(x) \succeq_{\mathcal{K}^n} f^{\mathrm{soc}}(y).
\]

(b) $f$ is said to be SOC-convex of order $n$ if the corresponding vector-valued function $f^{\mathrm{soc}}$ satisfies
\[
f^{\mathrm{soc}}\bigl((1 - \lambda) x + \lambda y\bigr) \preceq_{\mathcal{K}^n} (1 - \lambda) f^{\mathrm{soc}}(x) + \lambda f^{\mathrm{soc}}(y) \tag{9}
\]
for all $x, y \in \mathbb{R}^n$ and $0 \le \lambda \le 1$.

We say $f$ is SOC-monotone (respectively, SOC-convex) if $f$ is SOC-monotone of all orders $n$ (respectively, SOC-convex of all orders $n$). If $f$ is continuous, then condition (9) can be replaced by the weaker midpoint condition
\[
f^{\mathrm{soc}}\left(\frac{x + y}{2}\right) \preceq_{\mathcal{K}^n} \frac{1}{2}\bigl(f^{\mathrm{soc}}(x) + f^{\mathrm{soc}}(y)\bigr). \tag{10}
\]

It is clear that the set of SOC-monotone functions and the set of SOC-convex functions are both closed under positive linear combinations and under pointwise limits.

Proposition 3.1 Let $f : \mathbb{R} \to \mathbb{R}$ be $f(t) = \alpha + \beta t$. Then

(a) $f$ is SOC-monotone on $\mathbb{R}$ for every $\alpha \in \mathbb{R}$ and $\beta \ge 0$;

(b) $f$ is SOC-convex on $\mathbb{R}$ for all $\alpha, \beta \in \mathbb{R}$.

Proof. The proof is straightforward by checking that Definition 3.1 is satisfied. $\Box$

Proposition 3.2 (a) Let $f : \mathbb{R} \to \mathbb{R}$ be $f(t) = t^2$. Then $f$ is SOC-convex on $\mathbb{R}$.

(b) Hence, the function $g(t) = \alpha + \beta t + \gamma t^2$ is SOC-convex on $\mathbb{R}$ for all $\alpha, \beta \in \mathbb{R}$ and $\gamma \ge 0$.

Proof. (a) For any $x, y \in \mathbb{R}^n$, we have
\[
\frac{1}{2}\bigl(f^{\mathrm{soc}}(x) + f^{\mathrm{soc}}(y)\bigr) - f^{\mathrm{soc}}\left(\frac{x + y}{2}\right)
= \frac{x^2 + y^2}{2} - \left(\frac{x + y}{2}\right)^2
= \frac{1}{4}(x - y)^2 \succeq_{\mathcal{K}^n} 0.
\]
Since $f$ is continuous, the above implies that $f$ is SOC-convex.

(b) This is an immediate consequence of (a) and Proposition 3.1(b). $\Box$
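The one-line identity behind Proposition 3.2(a), $\frac{1}{2}(x^2 + y^2) - \bigl(\frac{x+y}{2}\bigr)^2 = \frac{1}{4}(x - y)^2$, holds under the Jordan product exactly as for scalars; a numeric sketch (helper names are ours):

```python
import random

def jordan(x, y):
    """Jordan product x o y = (<x, y>, y1*x2 + x1*y2)."""
    inner = sum(a * b for a, b in zip(x, y))
    return [inner] + [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

random.seed(0)
x = [random.uniform(-2, 2) for _ in range(4)]
y = [random.uniform(-2, 2) for _ in range(4)]
mid = [0.5 * (a + b) for a, b in zip(x, y)]

lhs = [0.5 * (a + b) - m
       for a, b, m in zip(jordan(x, x), jordan(y, y), jordan(mid, mid))]
d = [a - b for a, b in zip(x, y)]
rhs = [0.25 * t for t in jordan(d, d)]
print(all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs)))   # True
```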

Example 3.1 The function $f(t) = t^2$ is not SOC-monotone on $\mathbb{R}$. To see this, let $x = (1, 0)$ and $y = (-2, 0)$; then $x - y = (3, 0) \succeq_{\mathcal{K}^n} 0$, but $x^2 - y^2 = (1, 0) - (4, 0) = (-3, 0) \not\succeq_{\mathcal{K}^n} 0$. $\Box$

By Proposition 3.2(a), $f(t) = t^2$ is of course also SOC-convex on the smaller interval $[0, \infty)$. A natural question arises: is $f(t) = t^2$ SOC-monotone on $[0, \infty)$? The answer is yes for $n = 2$ but no for general $n \ge 3$, as the next example shows.

Example 3.2 (a) The function $f(t) = t^2$ is SOC-monotone on $[0, \infty)$ for $n = 2$.

(b) However, $f(t) = t^2$ is not SOC-monotone on $[0, \infty)$ for $n \ge 3$.

(a) Let $x = (x_1, x_2) \succeq_{\mathcal{K}^2} y = (y_1, y_2) \succeq_{\mathcal{K}^2} 0$. Then we have the inequalities
\[
|x_2| \le x_1, \qquad |y_2| \le y_1, \qquad |x_2 - y_2| \le x_1 - y_1,
\]
which imply
\[
x_1 - x_2 \ge y_1 - y_2 \ge 0, \qquad x_1 + x_2 \ge y_1 + y_2 \ge 0. \tag{11}
\]
We want to prove that $f^{\mathrm{soc}}(x) - f^{\mathrm{soc}}(y) = (x_1^2 + x_2^2 - y_1^2 - y_2^2, \; 2 x_1 x_2 - 2 y_1 y_2) \succeq_{\mathcal{K}^2} 0$, for which it is enough to verify that $x_1^2 + x_2^2 - y_1^2 - y_2^2 \ge |2 x_1 x_2 - 2 y_1 y_2|$. This can be seen from
\begin{align*}
x_1^2 + x_2^2 - y_1^2 - y_2^2 - \bigl| 2 x_1 x_2 - 2 y_1 y_2 \bigr|
&= \begin{cases}
x_1^2 + x_2^2 - y_1^2 - y_2^2 - (2 x_1 x_2 - 2 y_1 y_2), & \text{if } x_1 x_2 - y_1 y_2 \ge 0, \\
x_1^2 + x_2^2 - y_1^2 - y_2^2 - (2 y_1 y_2 - 2 x_1 x_2), & \text{if } x_1 x_2 - y_1 y_2 \le 0,
\end{cases} \\
&= \begin{cases}
(x_1 - x_2)^2 - (y_1 - y_2)^2, & \text{if } x_1 x_2 - y_1 y_2 \ge 0, \\
(x_1 + x_2)^2 - (y_1 + y_2)^2, & \text{if } x_1 x_2 - y_1 y_2 \le 0,
\end{cases} \\
&\ge 0,
\end{align*}
where the inequalities hold due to (11).

(b) For $n \ge 3$, we give a counterexample showing that $f(t) = t^2$ is not SOC-monotone on $[0, \infty)$. Let $x = (3, 1, -2) \in \mathcal{K}^3$ and $y = (1, 1, 0) \in \mathcal{K}^3$. It is clear that $x - y = (2, 0, -2) \succeq_{\mathcal{K}^3} 0$. But $x^2 - y^2 = (14, 6, -12) - (2, 2, 0) = (12, 4, -12) \not\succeq_{\mathcal{K}^3} 0$. $\Box$
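The counterexample in part (b) is easy to double-check with a membership test for $\mathcal{K}^3$ (a sketch; `in_soc` is an illustrative helper):

```python
def in_soc(x):
    """Membership in K^n: ||x2|| <= x1 (tiny tolerance for floats)."""
    return sum(t * t for t in x[1:]) ** 0.5 <= x[0] + 1e-12

x, y = [3, 1, -2], [1, 1, 0]
xx, yy = [14, 6, -12], [2, 2, 0]                 # x o x and y o y
print(in_soc([a - b for a, b in zip(x, y)]))     # True:  x - y is in K^3
print(in_soc([a - b for a, b in zip(xx, yy)]))   # False: x^2 - y^2 = (12, 4, -12) is not
```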

Now we look at the function $f(t) = t^3$. As expected, $f(t) = t^3$ is not SOC-convex. It is SOC-convex on $[0, \infty)$ for $n = 2$, but not for $n \ge 3$. Moreover, $f(t) = t^3$ is neither SOC-monotone on $\mathbb{R}$ nor, in general, SOC-monotone on $[0, \infty)$; nonetheless, it is SOC-monotone on $[0, \infty)$ for $n = 2$. The following two examples show what we have just said.

Example 3.3 (a) The function $f(t) = t^3$ is not SOC-convex on $\mathbb{R}$.

(b) Moreover, $f(t) = t^3$ is not SOC-convex on $[0, \infty)$ for $n \ge 3$.

(c) However, $f(t) = t^3$ is SOC-convex on $[0, \infty)$ for $n = 2$.

To see (a), let $x = (0, -2)$ and $y = (1, 0)$. It can be verified that
\[
\frac{1}{2}\bigl(f^{\mathrm{soc}}(x) + f^{\mathrm{soc}}(y)\bigr) - f^{\mathrm{soc}}\left(\frac{x + y}{2}\right) = \left(-\frac{9}{8}, -\frac{9}{4}\right) \not\succeq_{\mathcal{K}^2} 0,
\]
which says $f(t) = t^3$ is not SOC-convex on $\mathbb{R}$.

To see (b), let $x = (2, 1, -1)$ and $y = (1, 1, 0) \succeq_{\mathcal{K}^3} 0$. Then we have
\[
\frac{1}{2}\bigl(f^{\mathrm{soc}}(x) + f^{\mathrm{soc}}(y)\bigr) - f^{\mathrm{soc}}\left(\frac{x + y}{2}\right) = (3, 1, -3) \not\succeq_{\mathcal{K}^3} 0,
\]
which implies that $f(t) = t^3$ is not SOC-convex even on the interval $[0, \infty)$.

To see (c), it is enough to show that
\[
f^{\mathrm{soc}}\left(\frac{x + y}{2}\right) \preceq_{\mathcal{K}^2} \frac{1}{2}\bigl(f^{\mathrm{soc}}(x) + f^{\mathrm{soc}}(y)\bigr) \quad \text{for any } x, y \succeq_{\mathcal{K}^2} 0.
\]
Let $x = (x_1, x_2) \succeq_{\mathcal{K}^2} 0$ and $y = (y_1, y_2) \succeq_{\mathcal{K}^2} 0$. Then we have
\[
x^3 = \bigl(x_1^3 + 3 x_1 x_2^2, \; 3 x_1^2 x_2 + x_2^3\bigr), \qquad
y^3 = \bigl(y_1^3 + 3 y_1 y_2^2, \; 3 y_1^2 y_2 + y_2^3\bigr),
\]
which yields
\begin{align*}
f^{\mathrm{soc}}\left(\frac{x + y}{2}\right) &= \frac{1}{8}\Bigl((x_1 + y_1)^3 + 3 (x_1 + y_1)(x_2 + y_2)^2, \; 3 (x_1 + y_1)^2 (x_2 + y_2) + (x_2 + y_2)^3\Bigr), \\
\frac{1}{2}\bigl(f^{\mathrm{soc}}(x) + f^{\mathrm{soc}}(y)\bigr) &= \frac{1}{2}\Bigl(x_1^3 + y_1^3 + 3 x_1 x_2^2 + 3 y_1 y_2^2, \; x_2^3 + y_2^3 + 3 x_1^2 x_2 + 3 y_1^2 y_2\Bigr).
\end{align*}
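The gap $(3, 1, -3)$ claimed in Example 3.3(b) can be reproduced directly from the Jordan product (a sketch; `cube` is our helper, well-defined by power associativity):

```python
def jordan(x, y):
    """Jordan product x o y = (<x, y>, y1*x2 + x1*y2)."""
    inner = sum(a * b for a, b in zip(x, y))
    return [inner] + [y[0] * a + x[0] * b for a, b in zip(x[1:], y[1:])]

def cube(x):
    return jordan(x, jordan(x, x))   # x^3, unambiguous by power associativity

x, y = [2.0, 1.0, -1.0], [1.0, 1.0, 0.0]
avg = [0.5 * (a + b) for a, b in zip(cube(x), cube(y))]
mid = cube([0.5 * (a + b) for a, b in zip(x, y)])
gap = [a - b for a, b in zip(avg, mid)]
print(gap)   # [3.0, 1.0, -3.0]; since ||(1, -3)|| > 3, the gap is not in K^3
```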
