

To close this section, we check with one popular smoothing function,
\[
f(t) = \frac{1}{2}\left(\sqrt{t^2+4}+t\right),
\]
which was proposed by Chen and Harker [38], Kanzow [84], and Smale [138], and is called the CHKS function. Its corresponding SOC-function is defined by
\[
f^{\rm soc}(x) = \frac{1}{2}\left[\big(x^2+4e\big)^{1/2}+x\right],
\]
where $e = (1, 0, \cdots, 0)$. The function $f(t)$ is convex and monotone, so we may also wish to know whether it is SOC-convex or SOC-monotone. Unfortunately, it is neither SOC-convex nor SOC-monotone for $n \ge 3$, though it is both SOC-convex and SOC-monotone for $n = 2$. The following example demonstrates what we have just said.

Example 2.8. Let $f : \mathbb{R} \to \mathbb{R}$ be $f(t) = \frac{\sqrt{t^2+4}+t}{2}$. Then,

(a) f is not SOC-monotone of order $n \ge 3$ on $\mathbb{R}$;

(b) however, f is SOC-monotone of order 2 on $\mathbb{R}$;

(c) f is not SOC-convex of order $n \ge 3$ on $\mathbb{R}$;

(d) however, f is SOC-convex of order 2 on $\mathbb{R}$.

Solution. Again, by Remark 2.1, taking $x = (2, 1, -1)$ and $y = (1, 1, 0)$ gives a counterexample for both (a) and (c).

To see (b) and (d), it follows by direct verification, as we have done before. $\Box$
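For readers who wish to check the counterexample numerically, the following small sketch (Python with NumPy; the helper names `spectral`, `f_soc`, and `in_Kn` are ours, not notation from the text) builds $f^{\rm soc}$ through the spectral decomposition (1.9) and tests membership in $\mathcal{K}^3$.

```python
import numpy as np

def spectral(x):
    """Spectral decomposition of x = (x1, x2) with respect to K^n, cf. (1.2)-(1.4)."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2 if nx2 > 0 else np.r_[1.0, np.zeros(len(x2) - 1)]
    lam1, lam2 = x1 - nx2, x1 + nx2
    u1 = 0.5 * np.r_[1.0, -w]
    u2 = 0.5 * np.r_[1.0,  w]
    return lam1, lam2, u1, u2

def f_soc(f, x):
    """Vector-valued SOC-function f^soc(x) = f(lam1) u^(1) + f(lam2) u^(2), cf. (1.9)."""
    lam1, lam2, u1, u2 = spectral(np.asarray(x, dtype=float))
    return f(lam1) * u1 + f(lam2) * u2

def in_Kn(z, tol=1e-12):
    """Membership test z in K^n, i.e., z1 >= ||z2||."""
    return z[0] >= np.linalg.norm(z[1:]) - tol

# CHKS function and the data of Example 2.8
chks = lambda t: 0.5 * (np.sqrt(t**2 + 4) + t)
x, y = np.array([2.0, 1.0, -1.0]), np.array([1.0, 1.0, 0.0])

print(in_Kn(x - y), in_Kn(y))                   # x >=_{K^3} y >=_{K^3} 0
print(in_Kn(f_soc(chks, x) - f_soc(chks, y)))   # expected False: SOC-monotonicity fails
```

With the data of Example 2.8, the last test indeed fails, which is exactly the loss of SOC-monotonicity for $n = 3$.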

2.2 Characterizations of SOC-monotone and SOC-convex functions

Conjecture 2.1. Let $f : (0, \infty) \to \mathbb{R}$ be continuous, convex, and nonincreasing. Then,

(a) f is SOC-convex;

(b) $-f$ is SOC-monotone.

Conjecture 2.2. Let f : [0, ∞) → [0, ∞) be continuous. Then,

−f is SOC-convex ⇐⇒ f is SOC-monotone.

Proposition 2.8. Let f : [0, ∞) → [0, ∞) be continuous. If −f is SOC-convex, then f is SOC-monotone.

Proof. Suppose that $x \succeq_{\mathcal{K}^n} y \succeq_{\mathcal{K}^n} 0$. For any $0 < \lambda < 1$, we can write
\[
\lambda x = \lambda y + (1-\lambda)\,\frac{\lambda}{1-\lambda}(x-y).
\]
Then, using the SOC-convexity of $-f$ yields that
\[
f^{\rm soc}(\lambda x) \;\succeq_{\mathcal{K}^n}\; \lambda f^{\rm soc}(y) + (1-\lambda)\, f^{\rm soc}\!\left(\frac{\lambda}{1-\lambda}(x-y)\right) \;\succeq_{\mathcal{K}^n}\; 0,
\]
where the second inequality is true since f is from $[0, \infty)$ into itself and $x - y \succeq_{\mathcal{K}^n} 0$. This yields $f^{\rm soc}(\lambda x) \succeq_{\mathcal{K}^n} \lambda f^{\rm soc}(y)$. Now, letting $\lambda \to 1$, we obtain that $f^{\rm soc}(x) \succeq_{\mathcal{K}^n} f^{\rm soc}(y)$, which says that f is SOC-monotone. $\Box$

The converse of Proposition 2.8 is not true in general. As a counterexample, consider
\[
f(t) = -\cot\!\left(-\frac{\pi}{2}(1+t)^{-1} + \pi\right), \qquad t \in [0, \infty).
\]
Notice that $-\cot(t)$ is SOC-monotone on $[\pi/2, \pi)$, whereas $-\frac{\pi}{2}(1+t)^{-1}$ is SOC-monotone on $[0, \infty)$. Hence, their composition $f(t)$ is SOC-monotone on $[0, \infty)$. However, $-f(t)$ does not satisfy the inequality (2.36) for all $t \in (0, \infty)$. For example, when $t_1 = 7.7$ and $t_2 = 7.6$, the left-hand side of (2.36) equals 0.0080, whereas the right-hand side equals 27.8884. This shows that f is not SOC-concave of order $n \ge 3$. In summary, only one direction ("$\Longrightarrow$") of Conjecture 2.2 holds. Whether Conjecture 2.1 is true or not will be confirmed at the end of Section 2.3.

We notice that if f is not a function from $[0, \infty)$ into itself, then Proposition 2.8 may fail. For instance, $f(t) = -t^2$ is SOC-concave, but not SOC-monotone. In other words, the domain of f is an important factor in this relation. From now on, we demonstrate various characterizations of SOC-convex and SOC-monotone functions.

Proposition 2.9. Let g : J → IR and h : I → J , where J ⊆ IR and I ⊆ IR. Then, the following hold.

(a) If g is SOC-concave and SOC-monotone on J and h is SOC-concave on I, then their composition g ◦ h = g(h(·)) is also SOC-concave on I.

(b) If g is SOC-monotone on J and h is SOC-monotone on I, then g ◦ h = g(h(·)) is SOC-monotone on I.

Proof. (a) For the sake of notation, let $g^{\rm soc} : \widehat{S} \to \mathbb{R}^n$ and $h^{\rm soc} : S \to \widehat{S}$ be the vector-valued functions associated with g and h, respectively, where $S \subseteq \mathbb{R}^n$ and $\widehat{S} \subseteq \mathbb{R}^n$. Define $\widehat{g}(t) = g(h(t))$. Then, for any $x \in S$, it follows from (1.2) and (1.9) that
\[
g^{\rm soc}\big(h^{\rm soc}(x)\big)
= g^{\rm soc}\!\left[ h(\lambda_1(x))\, u_x^{(1)} + h(\lambda_2(x))\, u_x^{(2)} \right]
= g\big(h(\lambda_1(x))\big)\, u_x^{(1)} + g\big(h(\lambda_2(x))\big)\, u_x^{(2)}
= \widehat{g}^{\rm soc}(x). \tag{2.8}
\]
We next prove that $\widehat{g}(t)$ is SOC-concave on I. For any $x, y \in S$ and $0 \le \beta \le 1$, from the SOC-concavity of $h(t)$ it follows that
\[
h^{\rm soc}\big(\beta x + (1-\beta) y\big) \;\succeq_{\mathcal{K}^n}\; \beta\, h^{\rm soc}(x) + (1-\beta)\, h^{\rm soc}(y).
\]
Using the SOC-monotonicity and SOC-concavity of g, we then obtain that
\[
g^{\rm soc}\!\left[ h^{\rm soc}\big(\beta x + (1-\beta) y\big) \right]
\;\succeq_{\mathcal{K}^n}\; g^{\rm soc}\!\left[ \beta\, h^{\rm soc}(x) + (1-\beta)\, h^{\rm soc}(y) \right]
\;\succeq_{\mathcal{K}^n}\; \beta\, g^{\rm soc}[h^{\rm soc}(x)] + (1-\beta)\, g^{\rm soc}[h^{\rm soc}(y)].
\]
This together with (2.8) implies that for any $x, y \in S$ and $0 \le \beta \le 1$,
\[
\widehat{g}^{\rm soc}\big(\beta x + (1-\beta) y\big) \;\succeq_{\mathcal{K}^n}\; \beta\, \widehat{g}^{\rm soc}(x) + (1-\beta)\, \widehat{g}^{\rm soc}(y).
\]
Consequently, the function $\widehat{g}(t)$, i.e., $g(h(\cdot))$, is SOC-concave on I.

(b) If $x \succeq_{\mathcal{K}^n} y$, then $h^{\rm soc}(x) \succeq_{\mathcal{K}^n} h^{\rm soc}(y)$ due to h being SOC-monotone. Using the SOC-monotonicity of g yields
\[
g^{\rm soc}\big(h^{\rm soc}(x)\big) \;\succeq_{\mathcal{K}^n}\; g^{\rm soc}\big(h^{\rm soc}(y)\big).
\]
Then, by (2.8), we have $(g \circ h)^{\rm soc}(x) \succeq_{\mathcal{K}^n} (g \circ h)^{\rm soc}(y)$, which is the desired result. $\Box$

Proposition 2.10. Suppose that $f : \mathbb{R} \to \mathbb{R}$ and $z \in \mathbb{R}^n$. Let $g_z : \mathbb{R}^n \to \mathbb{R}$ be defined by $g_z(x) := \langle f^{\rm soc}(x), z \rangle$. Then, f is SOC-convex if and only if $g_z$ is a convex function for all $z \succeq_{\mathcal{K}^n} 0$.

Proof. Suppose f is SOC-convex and let $x, y \in \mathbb{R}^n$, $\lambda \in [0, 1]$. Then, we have
\[
f^{\rm soc}\big((1-\lambda)x + \lambda y\big) \;\preceq_{\mathcal{K}^n}\; (1-\lambda)\, f^{\rm soc}(x) + \lambda\, f^{\rm soc}(y),
\]
which implies
\[
\begin{aligned}
g_z\big((1-\lambda)x + \lambda y\big)
&= \big\langle f^{\rm soc}\big((1-\lambda)x + \lambda y\big),\, z \big\rangle \\
&\le \big\langle (1-\lambda)\, f^{\rm soc}(x) + \lambda\, f^{\rm soc}(y),\, z \big\rangle \\
&= (1-\lambda)\big\langle f^{\rm soc}(x), z \big\rangle + \lambda \big\langle f^{\rm soc}(y), z \big\rangle \\
&= (1-\lambda)\, g_z(x) + \lambda\, g_z(y),
\end{aligned}
\]
where the inequality holds by Property 1.3(d). This says that $g_z$ is a convex function.

For the other direction, from the convexity of $g_z$, we obtain
\[
\big\langle f^{\rm soc}\big((1-\lambda)x + \lambda y\big),\, z \big\rangle \le \big\langle (1-\lambda)\, f^{\rm soc}(x) + \lambda\, f^{\rm soc}(y),\, z \big\rangle \quad \text{for all } z \succeq_{\mathcal{K}^n} 0.
\]
Since $z \succeq_{\mathcal{K}^n} 0$, by Property 1.3(d) again, the above yields
\[
f^{\rm soc}\big((1-\lambda)x + \lambda y\big) \;\preceq_{\mathcal{K}^n}\; (1-\lambda)\, f^{\rm soc}(x) + \lambda\, f^{\rm soc}(y),
\]
which says f is SOC-convex. $\Box$

Proposition 2.11. A differentiable function $f : \mathbb{R} \to \mathbb{R}$ is SOC-convex if and only if $f^{\rm soc}(y) \succeq_{\mathcal{K}^n} f^{\rm soc}(x) + \nabla f^{\rm soc}(x)(y-x)$ for all $x, y \in \mathbb{R}^n$.

Proof. From Proposition 1.13, we know that f is differentiable if and only if $f^{\rm soc}$ is differentiable. Using the gradient formula given therein and following the arguments as in [20, Proposition B.3] or [29, Theorem 2.3.5], the proof can be done easily. We omit the details. $\Box$

To discover more characterizations and try to answer the aforementioned conjectures, we develop the second-order Taylor expansion for the vector-valued SOC-function $f^{\rm soc}$ defined as in (1.9), which is crucial to our subsequent analysis. To this end, we assume that $f \in C^{(2)}(J)$ with J being an open interval in $\mathbb{R}$ and that ${\rm dom}(f^{\rm soc})$ is open in $\mathbb{R}^n$ (this is true by Proposition 1.4(a)). Given any $x \in {\rm dom}(f^{\rm soc})$ and $h = (h_1, h_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, we have $x + th \in {\rm dom}(f^{\rm soc})$ for any sufficiently small $t > 0$. We wish to calculate the Taylor expansion of the function $f^{\rm soc}(x+th)$ at x for any sufficiently small $t > 0$. In particular, we are interested in finding matrices $\nabla f^{\rm soc}(x)$ and $A_i(x)$ for $i = 1, 2, \ldots, n$ such that

\[
f^{\rm soc}(x+th) = f^{\rm soc}(x) + t\,\nabla f^{\rm soc}(x)h + \frac{1}{2}t^2
\begin{bmatrix} h^T A_1(x) h \\ h^T A_2(x) h \\ \vdots \\ h^T A_n(x) h \end{bmatrix}
+ o(t^2). \tag{2.9}
\]

Again, for convenience, we omit the variable notation x in $\lambda_i(x)$ and $u_x^{(i)}$ for $i = 1, 2$ in the subsequent discussions.

It is known that $f^{\rm soc}$ is differentiable (respectively, smooth) if and only if f is differentiable (respectively, smooth), see Proposition 1.13. Moreover, there holds that
\[
\nabla f^{\rm soc}(x) =
\begin{bmatrix}
b^{(1)} & c^{(1)} \dfrac{x_2^T}{\|x_2\|} \\[2mm]
c^{(1)} \dfrac{x_2}{\|x_2\|} & a^{(0)} I + \big(b^{(1)} - a^{(0)}\big) \dfrac{x_2 x_2^T}{\|x_2\|^2}
\end{bmatrix} \tag{2.10}
\]
if $x_2 \ne 0$; and otherwise
\[
\nabla f^{\rm soc}(x) = f'(x_1)\, I, \tag{2.11}
\]
where
\[
a^{(0)} = \frac{f(\lambda_2) - f(\lambda_1)}{\lambda_2 - \lambda_1}, \qquad
b^{(1)} = \frac{f'(\lambda_2) + f'(\lambda_1)}{2}, \qquad
c^{(1)} = \frac{f'(\lambda_2) - f'(\lambda_1)}{2}.
\]
Therefore, we only need to derive the formula of $A_i(x)$ for $i = 1, 2, \cdots, n$ in (2.9).
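As a quick sanity check of the gradient formula (2.10)–(2.11), the following sketch (Python/NumPy; all helper names are ours) assembles $\nabla f^{\rm soc}(x)$ from $a^{(0)}, b^{(1)}, c^{(1)}$ and compares it with a central-difference Jacobian of $f^{\rm soc}$ built from (1.9), using $f(t) = e^t$ as a test function.

```python
import numpy as np

def f_soc(f, x):
    """f^soc(x) = f(lam1) u^(1) + f(lam2) u^(2), cf. (1.9)."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2 if nx2 > 0 else np.r_[1.0, np.zeros(len(x2) - 1)]
    return 0.5 * f(x1 - nx2) * np.r_[1.0, -w] + 0.5 * f(x1 + nx2) * np.r_[1.0, w]

def grad_f_soc(f, df, x):
    """Gradient formula (2.10)-(2.11)."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    n = len(x)
    if nx2 == 0:
        return df(x1) * np.eye(n)
    lam1, lam2 = x1 - nx2, x1 + nx2
    a0 = (f(lam2) - f(lam1)) / (lam2 - lam1)
    b1 = (df(lam2) + df(lam1)) / 2.0
    c1 = (df(lam2) - df(lam1)) / 2.0
    w = x2 / nx2
    G = np.zeros((n, n))
    G[0, 0] = b1
    G[0, 1:] = c1 * w
    G[1:, 0] = c1 * w
    G[1:, 1:] = a0 * np.eye(n - 1) + (b1 - a0) * np.outer(w, w)
    return G

# compare with a central-difference Jacobian at a sample point
f, df = np.exp, np.exp
x = np.array([0.3, 0.5, -0.2])
eps = 1e-6
J = np.column_stack([(f_soc(f, x + eps * e) - f_soc(f, x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])
print(np.max(np.abs(J - grad_f_soc(f, df, x))))   # should be tiny (finite-difference accuracy)
```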

We first consider the case where $x_2 \ne 0$ and $x_2 + t h_2 \ne 0$. By the definition (1.9), we see that
\[
\begin{aligned}
f^{\rm soc}(x+th)
&= \frac{1}{2} f\big(x_1 + t h_1 - \|x_2 + t h_2\|\big)
\begin{bmatrix} 1 \\ -\dfrac{x_2 + t h_2}{\|x_2 + t h_2\|} \end{bmatrix}
+ \frac{1}{2} f\big(x_1 + t h_1 + \|x_2 + t h_2\|\big)
\begin{bmatrix} 1 \\ \dfrac{x_2 + t h_2}{\|x_2 + t h_2\|} \end{bmatrix} \\[2mm]
&= \begin{bmatrix}
\dfrac{f\big(x_1 + t h_1 - \|x_2 + t h_2\|\big) + f\big(x_1 + t h_1 + \|x_2 + t h_2\|\big)}{2} \\[3mm]
\dfrac{f\big(x_1 + t h_1 + \|x_2 + t h_2\|\big) - f\big(x_1 + t h_1 - \|x_2 + t h_2\|\big)}{2}\, \dfrac{x_2 + t h_2}{\|x_2 + t h_2\|}
\end{bmatrix}
:= \begin{bmatrix} \Xi_1 \\ \Xi_2 \end{bmatrix}.
\end{aligned}
\]

To derive the Taylor expansion of $f^{\rm soc}(x+th)$ at x with $x_2 \ne 0$, we first write out and expand $\|x_2 + t h_2\|$. Notice that
\[
\|x_2 + t h_2\| = \sqrt{\|x_2\|^2 + 2t\, x_2^T h_2 + t^2 \|h_2\|^2}
= \|x_2\| \sqrt{1 + \frac{2t\, x_2^T h_2}{\|x_2\|^2} + \frac{t^2 \|h_2\|^2}{\|x_2\|^2}}.
\]
Therefore, using the fact that
\[
\sqrt{1+\varepsilon} = 1 + \frac{1}{2}\varepsilon - \frac{1}{8}\varepsilon^2 + o(\varepsilon^2),
\]
we may obtain
\[
\|x_2 + t h_2\| = \|x_2\| \left( 1 + t\, \frac{\alpha}{\|x_2\|} + \frac{1}{2} t^2 \frac{\beta}{\|x_2\|^2} \right) + o(t^2), \tag{2.12}
\]
where
\[
\alpha = \frac{x_2^T h_2}{\|x_2\|}, \qquad
\beta = \|h_2\|^2 - \frac{(x_2^T h_2)^2}{\|x_2\|^2} = \|h_2\|^2 - \alpha^2 = h_2^T M_{x_2} h_2,
\quad \text{with} \quad
M_{x_2} = I - \frac{x_2 x_2^T}{\|x_2\|^2}.
\]

Furthermore, from (2.12) and the fact that $(1+\varepsilon)^{-1} = 1 - \varepsilon + \varepsilon^2 + o(\varepsilon^2)$, it follows that
\[
\|x_2 + t h_2\|^{-1} = \|x_2\|^{-1} \left[ 1 - t\, \frac{\alpha}{\|x_2\|} + \frac{1}{2} t^2 \left( \frac{2\alpha^2}{\|x_2\|^2} - \frac{\beta}{\|x_2\|^2} \right) + o(t^2) \right]. \tag{2.13}
\]
Combining equations (2.12) and (2.13) yields that
\[
\begin{aligned}
\frac{x_2 + t h_2}{\|x_2 + t h_2\|}
&= \frac{x_2}{\|x_2\|} + t \left( \frac{h_2}{\|x_2\|} - \frac{\alpha}{\|x_2\|}\, \frac{x_2}{\|x_2\|} \right)
+ \frac{1}{2} t^2 \left[ \left( \frac{2\alpha^2}{\|x_2\|^2} - \frac{\beta}{\|x_2\|^2} \right) \frac{x_2}{\|x_2\|} - 2\, \frac{h_2}{\|x_2\|}\, \frac{\alpha}{\|x_2\|} \right] + o(t^2) \\
&= \frac{x_2}{\|x_2\|} + t\, M_{x_2} \frac{h_2}{\|x_2\|}
+ \frac{1}{2} t^2 \left( \frac{3\, h_2^T x_2 x_2^T h_2}{\|x_2\|^4}\, \frac{x_2}{\|x_2\|} - \frac{\|h_2\|^2}{\|x_2\|^2}\, \frac{x_2}{\|x_2\|} - \frac{2\, h_2 h_2^T}{\|x_2\|^2}\, \frac{x_2}{\|x_2\|} \right) + o(t^2).
\end{aligned} \tag{2.14}
\]

In addition, from (2.12), we have the following equalities
\[
\begin{aligned}
f\big(x_1 + t h_1 - \|x_2 + t h_2\|\big)
&= f\left( x_1 + t h_1 - \|x_2\| \left( 1 + t\,\frac{\alpha}{\|x_2\|} + \frac{1}{2} t^2 \frac{\beta}{\|x_2\|^2} \right) + o(t^2) \right) \\
&= f\left( \lambda_1 + t(h_1 - \alpha) - \frac{1}{2} t^2 \frac{\beta}{\|x_2\|} + o(t^2) \right) \\
&= f(\lambda_1) + t f'(\lambda_1)(h_1 - \alpha) + \frac{1}{2} t^2 \left( -f'(\lambda_1) \frac{\beta}{\|x_2\|} + f''(\lambda_1)(h_1 - \alpha)^2 \right) + o(t^2)
\end{aligned} \tag{2.15}
\]
and
\[
\begin{aligned}
f\big(x_1 + t h_1 + \|x_2 + t h_2\|\big)
&= f\left( \lambda_2 + t(h_1 + \alpha) + \frac{1}{2} t^2 \frac{\beta}{\|x_2\|} + o(t^2) \right) \\
&= f(\lambda_2) + t f'(\lambda_2)(h_1 + \alpha) + \frac{1}{2} t^2 \left( f'(\lambda_2) \frac{\beta}{\|x_2\|} + f''(\lambda_2)(h_1 + \alpha)^2 \right) + o(t^2).
\end{aligned} \tag{2.16}
\]

For $i = 0, 1, 2$, we define
\[
a^{(i)} = \frac{f^{(i)}(\lambda_2) - f^{(i)}(\lambda_1)}{\lambda_2 - \lambda_1}, \qquad
b^{(i)} = \frac{f^{(i)}(\lambda_2) + f^{(i)}(\lambda_1)}{2}, \qquad
c^{(i)} = \frac{f^{(i)}(\lambda_2) - f^{(i)}(\lambda_1)}{2}, \tag{2.17}
\]
where $f^{(i)}$ means the i-th derivative of f and $f^{(0)}$ is the same as the original f. Then, by the equations (2.15)–(2.17), it can be verified that
\[
\begin{aligned}
\Xi_1 &= \frac{1}{2} \Big[ f\big(x_1 + t h_1 + \|x_2 + t h_2\|\big) + f\big(x_1 + t h_1 - \|x_2 + t h_2\|\big) \Big] \\
&= b^{(0)} + t \big( b^{(1)} h_1 + c^{(1)} \alpha \big) + \frac{1}{2} t^2 \Big( a^{(1)} \beta + b^{(2)}(h_1^2 + \alpha^2) + 2 c^{(2)} h_1 \alpha \Big) + o(t^2) \\
&= b^{(0)} + t \left( b^{(1)} h_1 + c^{(1)} \frac{h_2^T x_2}{\|x_2\|} \right) + \frac{1}{2} t^2\, h^T A_1(x) h + o(t^2),
\end{aligned}
\]
where
\[
A_1(x) =
\begin{bmatrix}
b^{(2)} & c^{(2)} \dfrac{x_2^T}{\|x_2\|} \\[2mm]
c^{(2)} \dfrac{x_2}{\|x_2\|} & a^{(1)} I + \big(b^{(2)} - a^{(1)}\big) \dfrac{x_2 x_2^T}{\|x_2\|^2}
\end{bmatrix}. \tag{2.18}
\]

Note that in the above expression for $\Xi_1$, $b^{(0)}$ is exactly the first component of $f^{\rm soc}(x)$ and $b^{(1)} h_1 + c^{(1)} \frac{h_2^T x_2}{\|x_2\|}$ is the first component of $\nabla f^{\rm soc}(x) h$. Using the same techniques again,
\[
\begin{aligned}
\frac{1}{2} \Big[ f\big(x_1 + t h_1 + \|x_2 + t h_2\|\big) - f\big(x_1 + t h_1 - \|x_2 + t h_2\|\big) \Big]
&= c^{(0)} + t \big( c^{(1)} h_1 + b^{(1)} \alpha \big) + \frac{1}{2} t^2 \left( b^{(1)} \frac{\beta}{\|x_2\|} + c^{(2)}(h_1^2 + \alpha^2) + 2 b^{(2)} h_1 \alpha \right) + o(t^2) \\
&= c^{(0)} + t \big( c^{(1)} h_1 + b^{(1)} \alpha \big) + \frac{1}{2} t^2\, h^T B(x) h + o(t^2),
\end{aligned} \tag{2.19}
\]
where
\[
B(x) =
\begin{bmatrix}
c^{(2)} & b^{(2)} \dfrac{x_2^T}{\|x_2\|} \\[2mm]
b^{(2)} \dfrac{x_2}{\|x_2\|} & c^{(2)} I + \left( \dfrac{b^{(1)}}{\|x_2\|} - c^{(2)} \right) M_{x_2}
\end{bmatrix}. \tag{2.20}
\]

Using equations (2.19) and (2.14), we obtain that
\[
\begin{aligned}
\Xi_2 &= \frac{1}{2} \Big[ f\big(x_1 + t h_1 + \|x_2 + t h_2\|\big) - f\big(x_1 + t h_1 - \|x_2 + t h_2\|\big) \Big]\, \frac{x_2 + t h_2}{\|x_2 + t h_2\|} \\
&= c^{(0)} \frac{x_2}{\|x_2\|}
+ t \left[ \frac{x_2}{\|x_2\|} \big( c^{(1)} h_1 + b^{(1)} \alpha \big) + c^{(0)} M_{x_2} \frac{h_2}{\|x_2\|} \right]
+ \frac{1}{2} t^2 W + o(t^2),
\end{aligned}
\]
where
\[
W = \frac{x_2}{\|x_2\|}\, h^T B(x) h
+ 2 M_{x_2} \frac{h_2}{\|x_2\|} \big( c^{(1)} h_1 + b^{(1)} \alpha \big)
+ c^{(0)} \left( \frac{3\, h_2^T x_2 x_2^T h_2}{\|x_2\|^4}\, \frac{x_2}{\|x_2\|}
- \frac{\|h_2\|^2}{\|x_2\|^2}\, \frac{x_2}{\|x_2\|}
- \frac{2\, h_2 h_2^T}{\|x_2\|^2}\, \frac{x_2}{\|x_2\|} \right).
\]

Now we denote
\[
d := \frac{b^{(1)} - a^{(0)}}{\|x_2\|} = \frac{2\big(b^{(1)} - a^{(0)}\big)}{\lambda_2 - \lambda_1}, \qquad
U := h^T C(x) h, \qquad
V := \frac{2\big(c^{(1)} h_1 + b^{(1)} \alpha\big)}{\|x_2\|} - c^{(0)} \frac{2\, x_2^T h_2}{\|x_2\|^3}
= 2 a^{(1)} h_1 + \frac{2 d\, x_2^T h_2}{\|x_2\|},
\]
where
\[
C(x) :=
\begin{bmatrix}
c^{(2)} & \big(b^{(2)} - a^{(1)}\big) \dfrac{x_2^T}{\|x_2\|} \\[2mm]
\big(b^{(2)} - a^{(1)}\big) \dfrac{x_2}{\|x_2\|} & d\, I + \big(c^{(2)} - 3d\big) \dfrac{x_2 x_2^T}{\|x_2\|^2}
\end{bmatrix}. \tag{2.21}
\]
Then U can be further recast as
\[
U = h^T B(x) h + c^{(0)} \frac{3\, h_2^T x_2 x_2^T h_2}{\|x_2\|^4} - c^{(0)} \frac{\|h_2\|^2}{\|x_2\|^2} - \frac{2\, x_2^T h_2}{\|x_2\|^2} \big( c^{(1)} h_1 + b^{(1)} \alpha \big).
\]
Consequently,
\[
W = \frac{x_2}{\|x_2\|}\, U + h_2 V.
\]

We next consider the case where $x_2 = 0$ and $x_2 + t h_2 \ne 0$. By definition (1.9),
\[
\begin{aligned}
f^{\rm soc}(x+th)
&= \frac{f\big(x_1 + t(h_1 - \|h_2\|)\big)}{2}
\begin{bmatrix} 1 \\ -\dfrac{h_2}{\|h_2\|} \end{bmatrix}
+ \frac{f\big(x_1 + t(h_1 + \|h_2\|)\big)}{2}
\begin{bmatrix} 1 \\ \dfrac{h_2}{\|h_2\|} \end{bmatrix} \\[2mm]
&= \begin{bmatrix}
\dfrac{f\big(x_1 + t(h_1 - \|h_2\|)\big) + f\big(x_1 + t(h_1 + \|h_2\|)\big)}{2} \\[3mm]
\dfrac{f\big(x_1 + t(h_1 + \|h_2\|)\big) - f\big(x_1 + t(h_1 - \|h_2\|)\big)}{2}\, \dfrac{h_2}{\|h_2\|}
\end{bmatrix}.
\end{aligned}
\]
Using the Taylor expansion of f at $x_1$, we can obtain that
\[
\begin{aligned}
\frac{1}{2} \Big[ f\big(x_1 + t(h_1 - \|h_2\|)\big) + f\big(x_1 + t(h_1 + \|h_2\|)\big) \Big]
&= f(x_1) + t f^{(1)}(x_1) h_1 + \frac{1}{2} t^2 f^{(2)}(x_1)\, h^T h + o(t^2), \\
\frac{1}{2} \Big[ f\big(x_1 + t(h_1 + \|h_2\|)\big) - f\big(x_1 + t(h_1 - \|h_2\|)\big) \Big] \frac{h_2}{\|h_2\|}
&= t f^{(1)}(x_1) h_2 + \frac{1}{2} t^2 f^{(2)}(x_1)\, 2 h_1 h_2 + o(t^2).
\end{aligned}
\]
Therefore,
\[
f^{\rm soc}(x+th) = f^{\rm soc}(x) + t f^{(1)}(x_1) h + \frac{1}{2} t^2 f^{(2)}(x_1)
\begin{bmatrix} h^T h \\ 2 h_1 h_2 \end{bmatrix} + o(t^2).
\]
Thus, under this case, we have that
\[
A_1(x) = f^{(2)}(x_1)\, I, \qquad
A_i(x) = f^{(2)}(x_1)
\begin{bmatrix} 0 & \bar{e}_{i-1}^T \\ \bar{e}_{i-1} & O \end{bmatrix}, \quad i = 2, \cdots, n, \tag{2.22}
\]
where $\bar{e}_j \in \mathbb{R}^{n-1}$ is the vector whose j-th component is 1 and the others are 0.

Summing up the above discussions gives the following conclusion.

Proposition 2.12. Let $f \in C^{(2)}(J)$ with J being an open interval in $\mathbb{R}$ (which implies ${\rm dom}(f^{\rm soc})$ is open in $\mathbb{R}^n$). Then, for any $x \in {\rm dom}(f^{\rm soc})$, $h \in \mathbb{R}^n$ and any sufficiently small $t > 0$, there holds
\[
f^{\rm soc}(x+th) = f^{\rm soc}(x) + t\,\nabla f^{\rm soc}(x) h + \frac{1}{2} t^2
\begin{bmatrix} h^T A_1(x) h \\ h^T A_2(x) h \\ \vdots \\ h^T A_n(x) h \end{bmatrix} + o(t^2),
\]
where $\nabla f^{\rm soc}(x)$ and $A_i(x)$ for $i = 1, 2, \cdots, n$ are given by (2.11) and (2.22) if $x_2 = 0$; and otherwise $\nabla f^{\rm soc}(x)$ and $A_1(x)$ are given by (2.10) and (2.18), respectively, and for $i \ge 2$,
\[
A_i(x) = C(x)\, \frac{x_{2,i-1}}{\|x_2\|} + B_i(x),
\qquad \text{where} \qquad
B_i(x) = v e_i^T + e_i v^T, \quad
v = \begin{bmatrix} a^{(1)} & d\, \dfrac{x_2^T}{\|x_2\|} \end{bmatrix}^T = \left( a^{(1)},\ \frac{d}{\|x_2\|} x_2 \right),
\]
and $x_{2,i-1}$ denotes the $(i-1)$-th component of $x_2$.
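To make Proposition 2.12 concrete, the sketch below (Python/NumPy; the helpers `coeffs`, `A_matrices`, and `f_soc` are our own naming) assembles $A_1(x), \ldots, A_n(x)$ from (2.17), (2.18) and (2.21) in the case $x_2 \ne 0$, and compares the quadratic term of the expansion (2.9) with a central second difference of $f^{\rm soc}$ along a direction h.

```python
import numpy as np

def coeffs(f, d1f, d2f, lam1, lam2):
    """a^(i), b^(i), c^(i) of (2.17) for i = 0, 1, 2."""
    a = [(g(lam2) - g(lam1)) / (lam2 - lam1) for g in (f, d1f, d2f)]
    b = [(g(lam2) + g(lam1)) / 2.0 for g in (f, d1f, d2f)]
    c = [(g(lam2) - g(lam1)) / 2.0 for g in (f, d1f, d2f)]
    return a, b, c

def A_matrices(f, d1f, d2f, x):
    """The matrices A_1(x), ..., A_n(x) of Proposition 2.12 (case x2 != 0)."""
    n = len(x)
    x2 = x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2
    lam1, lam2 = x[0] - nx2, x[0] + nx2
    a, b, c = coeffs(f, d1f, d2f, lam1, lam2)
    d = (b[1] - a[0]) / nx2
    def arrow(p, q, r, s):            # [[p, q w^T], [q w, r I + s w w^T]]
        M = np.zeros((n, n))
        M[0, 0] = p
        M[0, 1:] = q * w
        M[1:, 0] = q * w
        M[1:, 1:] = r * np.eye(n - 1) + s * np.outer(w, w)
        return M
    A1 = arrow(b[2], c[2], a[1], b[2] - a[1])            # (2.18)
    C = arrow(c[2], b[2] - a[1], d, c[2] - 3 * d)        # (2.21)
    v = np.r_[a[1], d * w]
    As = [A1]
    for i in range(1, n):             # rows 2..n of the expansion
        e = np.zeros(n); e[i] = 1.0
        Bi = np.outer(v, e) + np.outer(e, v)
        As.append(C * (x2[i - 1] / nx2) + Bi)
    return As

# numerical check of (2.9) against a second-order central difference of f^soc
f, d1f, d2f = np.exp, np.exp, np.exp

def f_soc(x):
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2 if nx2 > 0 else np.r_[1.0, np.zeros(len(x2) - 1)]
    return 0.5 * f(x1 - nx2) * np.r_[1.0, -w] + 0.5 * f(x1 + nx2) * np.r_[1.0, w]

x = np.array([0.4, 0.3, -0.1])
h = np.array([0.7, -0.2, 0.5])
t = 1e-3
second = (f_soc(x + t * h) - 2.0 * f_soc(x) + f_soc(x - t * h)) / t**2
quad = np.array([h @ A @ h for A in A_matrices(f, d1f, d2f, x)])
print(np.max(np.abs(second - quad)))   # should be small (discretization error only)
```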

From Proposition 2.11 and Proposition 2.12, the following consequence is obtained.

Proposition 2.13. Let $f \in C^{(2)}(J)$ with J being an open interval in $\mathbb{R}$ (which implies ${\rm dom}(f^{\rm soc})$ is open in $\mathbb{R}^n$). Then, f is SOC-convex if and only if for any $x \in {\rm dom}(f^{\rm soc})$ and $h \in \mathbb{R}^n$, the vector
\[
\begin{bmatrix} h^T A_1(x) h \\ h^T A_2(x) h \\ \vdots \\ h^T A_n(x) h \end{bmatrix} \in \mathcal{K}^n,
\]
where the $A_i(x)$ are given as in Proposition 2.12.

Now we are ready to show another main result, the characterization of SOC-monotone functions. Two technical lemmas are needed for the proof. The first one is the so-called S-Lemma, whose proof can be found in [124].

Lemma 2.3. Let A, B be symmetric matrices and $y^T A y > 0$ for some y. Then, the implication $z^T A z \ge 0 \Rightarrow z^T B z \ge 0$ is valid if and only if $B \succeq \lambda A$ for some $\lambda \ge 0$.

Lemma 2.4. Given $\theta \in \mathbb{R}$, $a \in \mathbb{R}^{n-1}$, and a symmetric matrix $A \in \mathbb{R}^{n \times n}$, let $B_{n-1} := \{ z \in \mathbb{R}^{n-1} \mid \|z\| \le 1 \}$. Then, the following hold.

(a) $Ah \in \mathcal{K}^n$ for any $h \in \mathcal{K}^n$ is equivalent to $A \begin{bmatrix} 1 \\ z \end{bmatrix} \in \mathcal{K}^n$ for any $z \in B_{n-1}$.

(b) $\theta + a^T z \ge 0$ for any $z \in B_{n-1}$ is equivalent to $\theta \ge \|a\|$.

(c) If $A = \begin{bmatrix} \theta & a^T \\ a & H \end{bmatrix}$ with H being an $(n-1) \times (n-1)$ symmetric matrix, then $Ah \in \mathcal{K}^n$ for any $h \in \mathcal{K}^n$ is equivalent to $\theta \ge \|a\|$ and the existence of $\lambda \ge 0$ such that
\[
\begin{bmatrix}
\theta^2 - \|a\|^2 - \lambda & \theta a^T - a^T H \\
\theta a - H^T a & a a^T - H^T H + \lambda I
\end{bmatrix} \succeq O.
\]

Proof. (a) Suppose that $Ah \in \mathcal{K}^n$ for any $h \in \mathcal{K}^n$. For $z \in B_{n-1}$, let $h = \begin{bmatrix} 1 \\ z \end{bmatrix}$. Then $h \in \mathcal{K}^n$ and the desired result follows. For the other direction, if $h = 0$, the conclusion is obvious. Now let $h := (h_1, h_2)$ be any nonzero vector in $\mathcal{K}^n$. Then, $h_1 > 0$ and $\|h_2\| \le h_1$. Consequently, $\frac{h_2}{h_1} \in B_{n-1}$ and $A \begin{bmatrix} 1 \\ h_2/h_1 \end{bmatrix} \in \mathcal{K}^n$. Since $\mathcal{K}^n$ is a cone, we have
\[
h_1 A \begin{bmatrix} 1 \\ h_2/h_1 \end{bmatrix} = A h \in \mathcal{K}^n.
\]

(b) Suppose $\theta + a^T z \ge 0$ for any $z \in B_{n-1}$. If $a = 0$, then the result is clear since $\theta \ge 0$. If $a \ne 0$, let $z := -\frac{a}{\|a\|}$. Clearly, $z \in B_{n-1}$ and hence $\theta + \frac{-a^T a}{\|a\|} \ge 0$, which gives $\theta - \|a\| \ge 0$.

For the other direction, the result follows from the Cauchy–Schwarz inequality:
\[
\theta + a^T z \ge \theta - \|a\| \cdot \|z\| \ge \theta - \|a\| \ge 0.
\]

(c) From part (a), $Ah \in \mathcal{K}^n$ for any $h \in \mathcal{K}^n$ is equivalent to $A \begin{bmatrix} 1 \\ z \end{bmatrix} \in \mathcal{K}^n$ for any $z \in B_{n-1}$. Notice that
\[
A \begin{bmatrix} 1 \\ z \end{bmatrix}
= \begin{bmatrix} \theta & a^T \\ a & H \end{bmatrix} \begin{bmatrix} 1 \\ z \end{bmatrix}
= \begin{bmatrix} \theta + a^T z \\ a + H z \end{bmatrix}.
\]
Then, $Ah \in \mathcal{K}^n$ for any $h \in \mathcal{K}^n$ is equivalent to the following two things:
\[
\theta + a^T z \ge 0 \quad \text{for any } z \in B_{n-1} \tag{2.23}
\]
and
\[
(a + H z)^T (a + H z) \le (\theta + a^T z)^2 \quad \text{for any } z \in B_{n-1}. \tag{2.24}
\]
By part (b), (2.23) is equivalent to $\theta \ge \|a\|$. Now, we write the expression of (2.24) as below:
\[
z^T \big( a a^T - H^T H \big) z + 2 \big( \theta a^T - a^T H \big) z + \theta^2 - a^T a \ge 0 \quad \text{for any } z \in B_{n-1},
\]
which can be further simplified as
\[
\begin{bmatrix} 1 & z^T \end{bmatrix}
\begin{bmatrix} \theta^2 - \|a\|^2 & \theta a^T - a^T H \\ \theta a - H^T a & a a^T - H^T H \end{bmatrix}
\begin{bmatrix} 1 \\ z \end{bmatrix} \ge 0 \quad \text{for any } z \in B_{n-1}.
\]
Observe that $z \in B_{n-1}$ is the same as
\[
\begin{bmatrix} 1 & z^T \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & -I \end{bmatrix}
\begin{bmatrix} 1 \\ z \end{bmatrix} \ge 0.
\]
Thus, by applying the S-Lemma (Lemma 2.3), there exists $\lambda \ge 0$ such that
\[
\begin{bmatrix} \theta^2 - \|a\|^2 & \theta a^T - a^T H \\ \theta a - H^T a & a a^T - H^T H \end{bmatrix}
- \lambda \begin{bmatrix} 1 & 0 \\ 0 & -I \end{bmatrix} \succeq O.
\]
This completes the proof of part (c). $\Box$

Proposition 2.14. Let $f \in C^{(1)}(J)$ with J being an open interval (which implies ${\rm dom}(f^{\rm soc})$ is open in $\mathbb{R}^n$). Then, the following hold.

(a) f is SOC-monotone of order 2 if and only if $f'(\tau) \ge 0$ for any $\tau \in J$.

(b) f is SOC-monotone of order $n \ge 3$ if and only if the $2 \times 2$ matrix
\[
\begin{bmatrix}
f^{(1)}(t_1) & \dfrac{f(t_2) - f(t_1)}{t_2 - t_1} \\[2mm]
\dfrac{f(t_2) - f(t_1)}{t_2 - t_1} & f^{(1)}(t_2)
\end{bmatrix} \succeq O, \qquad \forall\, t_1, t_2 \in J.
\]

Proof. By the definition of SOC-monotonicity, f is SOC-monotone if and only if
\[
f^{\rm soc}(x+h) - f^{\rm soc}(x) \in \mathcal{K}^n \tag{2.25}
\]
for any $x \in {\rm dom}(f^{\rm soc})$ and $h \in \mathcal{K}^n$ such that $x + h \in {\rm dom}(f^{\rm soc})$. By the first-order Taylor expansion of $f^{\rm soc}$, i.e.,
\[
f^{\rm soc}(x+h) = f^{\rm soc}(x) + \nabla f^{\rm soc}(x + t h)\, h \quad \text{for some } t \in (0,1),
\]
it is clear that (2.25) is equivalent to $\nabla f^{\rm soc}(x+th)\, h \in \mathcal{K}^n$ for any $x \in {\rm dom}(f^{\rm soc})$ and $h \in \mathcal{K}^n$ such that $x + h \in {\rm dom}(f^{\rm soc})$, and some $t \in (0,1)$. Let $y := x + th = \mu_1 v^{(1)} + \mu_2 v^{(2)}$ for such x, h and t. We next proceed with the arguments for the two cases $y_2 \ne 0$ and $y_2 = 0$.

Case (1): $y_2 \ne 0$. Under this case, we notice that
\[
\nabla f^{\rm soc}(y) = \begin{bmatrix} \theta & a^T \\ a & H \end{bmatrix},
\quad \text{where} \quad
\theta = \tilde{b}^{(1)}, \quad a = \tilde{c}^{(1)} \frac{y_2}{\|y_2\|}, \quad
H = \tilde{a}^{(0)} I + \big(\tilde{b}^{(1)} - \tilde{a}^{(0)}\big) \frac{y_2 y_2^T}{\|y_2\|^2},
\]
with
\[
\tilde{a}^{(0)} = \frac{f(\mu_2) - f(\mu_1)}{\mu_2 - \mu_1}, \qquad
\tilde{b}^{(1)} = \frac{f'(\mu_2) + f'(\mu_1)}{2}, \qquad
\tilde{c}^{(1)} = \frac{f'(\mu_2) - f'(\mu_1)}{2}.
\]
In addition, we also observe that
\[
\theta^2 - \|a\|^2 = \big(\tilde{b}^{(1)}\big)^2 - \big(\tilde{c}^{(1)}\big)^2, \qquad
\theta a^T - a^T H = 0,
\]
and
\[
a a^T - H^T H = -\big(\tilde{a}^{(0)}\big)^2 I + \Big( \big(\tilde{c}^{(1)}\big)^2 - \big(\tilde{b}^{(1)}\big)^2 + \big(\tilde{a}^{(0)}\big)^2 \Big) \frac{y_2 y_2^T}{\|y_2\|^2}.
\]
Thus, by Lemma 2.4, f is SOC-monotone if and only if

(i) $\tilde{b}^{(1)} \ge |\tilde{c}^{(1)}|$;

(ii) and there exists $\lambda \ge 0$ such that the matrix
\[
\begin{bmatrix}
\big(\tilde{b}^{(1)}\big)^2 - \big(\tilde{c}^{(1)}\big)^2 - \lambda & 0 \\
0 & \big(\lambda - (\tilde{a}^{(0)})^2\big) I + \Big( \big(\tilde{c}^{(1)}\big)^2 - \big(\tilde{b}^{(1)}\big)^2 + \big(\tilde{a}^{(0)}\big)^2 \Big) \dfrac{y_2 y_2^T}{\|y_2\|^2}
\end{bmatrix} \succeq O.
\]
When $n = 2$, (i) together with (ii) is equivalent to saying that $f'(\mu_1) \ge 0$ and $f'(\mu_2) \ge 0$. Then we conclude that f is SOC-monotone if and only if $f'(\tau) \ge 0$ for any $\tau \in J$. When $n \ge 3$, (ii) is equivalent to saying that $\big(\tilde{b}^{(1)}\big)^2 - \big(\tilde{c}^{(1)}\big)^2 - \lambda \ge 0$ and $\lambda - \big(\tilde{a}^{(0)}\big)^2 \ge 0$, i.e., $\big(\tilde{b}^{(1)}\big)^2 - \big(\tilde{c}^{(1)}\big)^2 \ge \big(\tilde{a}^{(0)}\big)^2$. Therefore, (i) together with (ii) is equivalent to
\[
\begin{bmatrix}
f^{(1)}(\mu_1) & \dfrac{f(\mu_2) - f(\mu_1)}{\mu_2 - \mu_1} \\[2mm]
\dfrac{f(\mu_2) - f(\mu_1)}{\mu_2 - \mu_1} & f^{(1)}(\mu_2)
\end{bmatrix} \succeq O
\]
for any $x \in \mathbb{R}^n$, $h \in \mathcal{K}^n$ such that $x + h \in {\rm dom}(f^{\rm soc})$, and some $t \in (0,1)$. Thus, we conclude that f is SOC-monotone if and only if
\[
\begin{bmatrix}
f^{(1)}(t_1) & \dfrac{f(t_2) - f(t_1)}{t_2 - t_1} \\[2mm]
\dfrac{f(t_2) - f(t_1)}{t_2 - t_1} & f^{(1)}(t_2)
\end{bmatrix} \succeq O \quad \text{for all } t_1, t_2 \in J.
\]

Case (2): $y_2 = 0$. Now we have $\mu_1 = \mu_2$ and $\nabla f^{\rm soc}(y) = f^{(1)}(\mu_1) I = f^{(1)}(\mu_2) I$. Hence, f being SOC-monotone is equivalent to $f^{(1)}(\mu_1) \ge 0$, which is also equivalent to
\[
\begin{bmatrix}
f^{(1)}(\mu_1) & \dfrac{f(\mu_2) - f(\mu_1)}{\mu_2 - \mu_1} \\[2mm]
\dfrac{f(\mu_2) - f(\mu_1)}{\mu_2 - \mu_1} & f^{(1)}(\mu_2)
\end{bmatrix} \succeq O
\]
since $f^{(1)}(\mu_1) = f^{(1)}(\mu_2)$ and $\frac{f(\mu_2) - f(\mu_1)}{\mu_2 - \mu_1} = f^{(1)}(\mu_1) = f^{(1)}(\mu_2)$ by the Taylor formula and $\mu_1 = \mu_2$. Thus, similar to Case (1), the conclusion also holds under this case. $\Box$

The SOC-convexity and SOC-monotonicity are also connected to their counterparts, matrix-convexity and matrix-monotonicity. Before illustrating their relations, we briefly recall definitions of matrix-convexity and matrix-monotonicity.

Definition 2.2. Let $M_n^{\rm sa}$ denote the set of $n \times n$ self-adjoint complex matrices, $\sigma(A)$ be the spectrum of a matrix A, and $J \subseteq \mathbb{R}$ be an interval.

(a) A function $f : J \to \mathbb{R}$ is called matrix monotone of degree n or n-matrix monotone if, for every $A, B \in M_n^{\rm sa}$ with $\sigma(A) \subseteq J$ and $\sigma(B) \subseteq J$, it holds that
\[
A \preceq B \ \Longrightarrow\ f(A) \preceq f(B).
\]

(b) A function $f : J \to \mathbb{R}$ is called operator monotone or matrix monotone if it is n-matrix monotone for all $n \in \mathbb{N}$.

(c) A function $f : J \to \mathbb{R}$ is called matrix convex of degree n or n-matrix convex if, for every $A, B \in M_n^{\rm sa}$ with $\sigma(A) \subseteq J$, $\sigma(B) \subseteq J$, and any $\lambda \in [0, 1]$, it holds that
\[
f\big((1-\lambda) A + \lambda B\big) \preceq (1-\lambda) f(A) + \lambda f(B).
\]

(d) A function $f : J \to \mathbb{R}$ is called operator convex or matrix convex if it is n-matrix convex for all $n \in \mathbb{N}$.

(e) A function f : J → IR is called matrix concave of degree n or n-matrix concave if

−f is n-matrix convex.

(f ) A function f : J → IR is called operator concave or matrix concave if it is n-matrix concave for all n ∈ N.

In fact, from Proposition 2.14 and [75, Theorem 6.6.36], we immediately have the following consequences.

Proposition 2.15. Let $f \in C^{(1)}(J)$ with J being an open interval in $\mathbb{R}$. Then, the following hold.

(a) f is SOC-monotone of order $n \ge 3$ if and only if it is 2-matrix monotone, and f is SOC-monotone of order $n \le 2$ if it is 2-matrix monotone.

(b) Suppose that $n \ge 3$ and f is SOC-monotone of order n. Then, $f'(t_0) = 0$ for some $t_0 \in J$ if and only if $f(\cdot)$ is a constant function on J.

We illustrate a few examples by using either Proposition 2.14 or Proposition 2.15.

Example 2.9. Let $f : (0, \infty) \to \mathbb{R}$ be $f(t) = \ln t$. Then, $f(t)$ is SOC-monotone on $(0, \infty)$.

Solution. To see this, it needs to be verified that the $2 \times 2$ matrix
\[
\begin{bmatrix}
f^{(1)}(t_1) & \dfrac{f(t_2) - f(t_1)}{t_2 - t_1} \\[2mm]
\dfrac{f(t_2) - f(t_1)}{t_2 - t_1} & f^{(1)}(t_2)
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{1}{t_1} & \dfrac{\ln(t_2) - \ln(t_1)}{t_2 - t_1} \\[2mm]
\dfrac{\ln(t_2) - \ln(t_1)}{t_2 - t_1} & \dfrac{1}{t_2}
\end{bmatrix}
\]
is positive semidefinite for all $t_1, t_2 \in (0, \infty)$. Indeed, its determinant condition $\frac{1}{t_1 t_2} \ge \big(\frac{\ln t_2 - \ln t_1}{t_2 - t_1}\big)^2$ holds because the logarithmic mean $\frac{t_2 - t_1}{\ln t_2 - \ln t_1}$ is no smaller than the geometric mean $\sqrt{t_1 t_2}$. $\Box$
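Proposition 2.14(b) is also easy to test numerically. The sketch below (Python/NumPy; the function names are ours, and a finite grid check is only a sample of the condition, not a proof) evaluates the $2 \times 2$ matrix on all pairs of a grid: it accepts $\ln t$, and rejects $t^2$, which is SOC-monotone of order 2 but, as noted after Example 2.10, not 2-matrix monotone and hence not SOC-monotone of order $n \ge 3$.

```python
import numpy as np

def soc_monotone_matrix(f, df, t1, t2):
    """The 2x2 matrix of Proposition 2.14(b)."""
    slope = df(t1) if t1 == t2 else (f(t2) - f(t1)) / (t2 - t1)
    return np.array([[df(t1), slope],
                     [slope,  df(t2)]])

def looks_soc_monotone(f, df, grid, tol=1e-10):
    """Check the criterion on all pairs of a finite grid (a necessary sample, not a proof)."""
    return all(np.linalg.eigvalsh(soc_monotone_matrix(f, df, t1, t2)).min() >= -tol
               for t1 in grid for t2 in grid)

grid = np.linspace(0.1, 10.0, 60)
print(looks_soc_monotone(np.log, lambda t: 1.0 / t, grid))           # ln t: expect True
print(looks_soc_monotone(lambda t: t**2, lambda t: 2.0 * t, grid))   # t^2 on (0,oo): expect False
```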

Example 2.10. (a) For any fixed $\sigma \in \mathbb{R}$, the function $f(t) = \frac{1}{\sigma - t}$ is SOC-monotone on $(\sigma, \infty)$.

(b) For any fixed $\sigma \in \mathbb{R}$, the function $f(t) = \sqrt{t - \sigma}$ is SOC-monotone on $[\sigma, \infty)$.

(c) For any fixed $\sigma \in \mathbb{R}$, the function $f(t) = \ln(t - \sigma)$ is SOC-monotone on $(\sigma, \infty)$.

(d) For any fixed $\sigma \ge 0$, the function $f(t) = \frac{t}{t + \sigma}$ is SOC-monotone on $(-\sigma, \infty)$.

Solution. (a) For any $t_1, t_2 \in (\sigma, \infty)$, it is clear to see that
\[
\begin{bmatrix}
\dfrac{1}{(\sigma - t_1)^2} & \dfrac{1}{(\sigma - t_2)(\sigma - t_1)} \\[2mm]
\dfrac{1}{(\sigma - t_2)(\sigma - t_1)} & \dfrac{1}{(\sigma - t_2)^2}
\end{bmatrix} \succeq O.
\]
Then, applying Proposition 2.14 yields the desired result.

(b) If $x \succeq_{\mathcal{K}^n} \sigma e$, then $(x - \sigma e)^{1/2} \succeq_{\mathcal{K}^n} 0$. Thus, by Proposition 2.14, it suffices to show that
\[
\begin{bmatrix}
\dfrac{1}{2\sqrt{t_1 - \sigma}} & \dfrac{\sqrt{t_2 - \sigma} - \sqrt{t_1 - \sigma}}{t_2 - t_1} \\[2mm]
\dfrac{\sqrt{t_2 - \sigma} - \sqrt{t_1 - \sigma}}{t_2 - t_1} & \dfrac{1}{2\sqrt{t_2 - \sigma}}
\end{bmatrix} \succeq O \quad \text{for any } t_1, t_2 \in (\sigma, \infty),
\]
which is equivalent to proving that
\[
\frac{1}{4\sqrt{t_1 - \sigma}\sqrt{t_2 - \sigma}} - \frac{1}{\big(\sqrt{t_2 - \sigma} + \sqrt{t_1 - \sigma}\big)^2} \ge 0.
\]
This inequality holds by $4\sqrt{t_1 - \sigma}\sqrt{t_2 - \sigma} \le \big(\sqrt{t_2 - \sigma} + \sqrt{t_1 - \sigma}\big)^2$ for any $t_1, t_2 \in (\sigma, \infty)$.

(c) By Proposition 2.14, it suffices to prove that for any $t_1, t_2 \in (\sigma, \infty)$,
\[
\begin{bmatrix}
\dfrac{1}{t_1 - \sigma} & \dfrac{1}{t_2 - t_1} \ln\dfrac{t_2 - \sigma}{t_1 - \sigma} \\[2mm]
\dfrac{1}{t_2 - t_1} \ln\dfrac{t_2 - \sigma}{t_1 - \sigma} & \dfrac{1}{t_2 - \sigma}
\end{bmatrix} \succeq O,
\]
which is equivalent to showing that
\[
\frac{1}{(t_1 - \sigma)(t_2 - \sigma)} - \left( \frac{1}{t_2 - t_1} \ln\frac{t_2 - \sigma}{t_1 - \sigma} \right)^2 \ge 0.
\]
Notice that $\ln t \le t - 1$ for $t > 0$, and hence it is easy to verify that
\[
\left( \frac{1}{t_2 - t_1} \ln\frac{t_2 - \sigma}{t_1 - \sigma} \right)^2 \le \frac{1}{(t_1 - \sigma)(t_2 - \sigma)}.
\]
Consequently, the desired result follows.

(d) Since for any fixed $\sigma \ge 0$ and any $t_1, t_2 \in (-\sigma, \infty)$, there holds that
\[
\begin{bmatrix}
\dfrac{\sigma}{(\sigma + t_1)^2} & \dfrac{\sigma}{(\sigma + t_2)(\sigma + t_1)} \\[2mm]
\dfrac{\sigma}{(\sigma + t_2)(\sigma + t_1)} & \dfrac{\sigma}{(\sigma + t_2)^2}
\end{bmatrix} \succeq O,
\]
we immediately obtain the desired result from Proposition 2.14. $\Box$

We point out that the SOC-monotonicity of order 2 does not imply the 2-matrix monotonicity. For example, $f(t) = t^2$ is SOC-monotone of order 2 on $(0, \infty)$ by Example 2.2(a), but by [75, Theorem 6.6.36] we can verify that it is not 2-matrix monotone.

Proposition 2.15(a) indicates that a continuously differentiable function defined on an open interval must be SOC-monotone if it is 2-matrix monotone.

Next, we exploit the Peirce decomposition to derive some characterizations of SOC-convex functions. Let $f \in C^{(2)}(J)$ with J being an open interval in $\mathbb{R}$ and ${\rm dom}(f^{\rm soc}) \subseteq \mathbb{R}^n$. For any $x \in {\rm dom}(f^{\rm soc})$ and $h \in \mathbb{R}^n$, if $x_2 = 0$, from Proposition 2.12, we have
\[
\begin{bmatrix} h^T A_1(x) h \\ h^T A_2(x) h \\ \vdots \\ h^T A_n(x) h \end{bmatrix}
= f^{(2)}(x_1) \begin{bmatrix} h^T h \\ 2 h_1 h_2 \end{bmatrix}.
\]
Since $(h^T h, 2 h_1 h_2) \in \mathcal{K}^n$, from Proposition 2.13, it follows that f is SOC-convex if and only if $f^{(2)}(x_1) \ge 0$. By the arbitrariness of $x_1$, f is SOC-convex if and only if f is convex on J.

For the case of $x_2 \ne 0$, we let $x = \lambda_1 u^{(1)} + \lambda_2 u^{(2)}$, where $u^{(1)}$ and $u^{(2)}$ are given by (1.4) with $\bar{x}_2 = \frac{x_2}{\|x_2\|}$. Let $u^{(i)} = (0, \upsilon_2^{(i)})$ for $i = 3, \cdots, n$, where $\upsilon_2^{(3)}, \cdots, \upsilon_2^{(n)}$ is any orthonormal set of vectors that span the subspace of $\mathbb{R}^{n-1}$ orthogonal to $x_2$. It is easy to verify that the vectors $u^{(1)}, u^{(2)}, u^{(3)}, \cdots, u^{(n)}$ are linearly independent. Hence, for any given $h = (h_1, h_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, there exist $\mu_i$, $i = 1, 2, \cdots, n$ such that
\[
h = \sqrt{2}\,\mu_1 u^{(1)} + \sqrt{2}\,\mu_2 u^{(2)} + \sum_{i=3}^{n} \mu_i u^{(i)}.
\]

From (2.18), we can verify that $b^{(2)} + c^{(2)}$ and $b^{(2)} - c^{(2)}$ are the eigenvalues of $A_1(x)$ with $u^{(2)}$ and $u^{(1)}$ being the corresponding eigenvectors, and $a^{(1)}$ is the eigenvalue of multiplicity $n-2$ with $u^{(i)} = (0, \upsilon_2^{(i)})$ for $i = 3, \ldots, n$ being the corresponding eigenvectors. Therefore,
\[
h^T A_1(x) h = \mu_1^2 \big(b^{(2)} - c^{(2)}\big) + \mu_2^2 \big(b^{(2)} + c^{(2)}\big) + a^{(1)} \sum_{i=3}^{n} \mu_i^2
= f^{(2)}(\lambda_1)\,\mu_1^2 + f^{(2)}(\lambda_2)\,\mu_2^2 + a^{(1)} \mu^2, \tag{2.26}
\]
where $\mu^2 = \sum_{i=3}^{n} \mu_i^2$.

Similarly, we can verify that $c^{(2)} + b^{(2)} - a^{(1)}$ and $c^{(2)} - b^{(2)} + a^{(1)}$ are the eigenvalues of
\[
\begin{bmatrix}
c^{(2)} & \big(b^{(2)} - a^{(1)}\big) \dfrac{x_2^T}{\|x_2\|} \\[2mm]
\big(b^{(2)} - a^{(1)}\big) \dfrac{x_2}{\|x_2\|} & d\, I + \big(c^{(2)} - d\big) \dfrac{x_2 x_2^T}{\|x_2\|^2}
\end{bmatrix}
\]
with $u^{(2)}$ and $u^{(1)}$ being the corresponding eigenvectors, and d is the eigenvalue of multiplicity $n-2$ with $u^{(i)} = (0, \upsilon_2^{(i)})$ for $i = 3, \cdots, n$ being the corresponding eigenvectors. Notice that $C(x)$ in (2.21) can be decomposed as the sum of the above matrix and
\[
\begin{bmatrix} 0 & 0 \\ 0 & -2d\, \dfrac{x_2 x_2^T}{\|x_2\|^2} \end{bmatrix}.
\]
Consequently,
\[
h^T C(x) h = \mu_1^2 \big(c^{(2)} - b^{(2)} + a^{(1)}\big) + \mu_2^2 \big(c^{(2)} + b^{(2)} - a^{(1)}\big) - d(\mu_2 - \mu_1)^2 + d\mu^2. \tag{2.27}
\]
In addition, by the definition of $B_i(x)$, it is easy to compute that
\[
h^T B_i(x) h = \sqrt{2}\, h_{2,i-1} \Big( \mu_1 \big(a^{(1)} - d\big) + \mu_2 \big(a^{(1)} + d\big) \Big), \tag{2.28}
\]
where $h_2 = (h_{2,1}, \ldots, h_{2,n-1})$. From equations (2.26)–(2.28) and the definition of $A_i(x)$ in Proposition 2.12, we thus have

\[
\begin{aligned}
\sum_{i=2}^{n} \big(h^T A_i(x) h\big)^2
&= \big[h^T C(x) h\big]^2 + 2\|h_2\|^2 \Big(\mu_1\big(a^{(1)} - d\big) + \mu_2\big(a^{(1)} + d\big)\Big)^2
+ 2(\mu_2 - \mu_1)\, h^T C(x) h\, \Big(\mu_1\big(a^{(1)} - d\big) + \mu_2\big(a^{(1)} + d\big)\Big) \\
&= \big[h^T C(x) h\big]^2 + 2\Big(\tfrac{1}{2}(\mu_2 - \mu_1)^2 + \mu^2\Big) \Big(\mu_1\big(a^{(1)} - d\big) + \mu_2\big(a^{(1)} + d\big)\Big)^2
+ 2(\mu_2 - \mu_1)\, h^T C(x) h\, \Big(\mu_1\big(a^{(1)} - d\big) + \mu_2\big(a^{(1)} + d\big)\Big) \\
&= \Big[h^T C(x) h + (\mu_2 - \mu_1)\Big(\mu_1\big(a^{(1)} - d\big) + \mu_2\big(a^{(1)} + d\big)\Big)\Big]^2
+ 2\mu^2 \Big(\mu_1\big(a^{(1)} - d\big) + \mu_2\big(a^{(1)} + d\big)\Big)^2 \\
&= \Big[ -f^{(2)}(\lambda_1)\mu_1^2 + f^{(2)}(\lambda_2)\mu_2^2 + d\mu^2 \Big]^2
+ 2\mu^2 \Big(\mu_1\big(a^{(1)} - d\big) + \mu_2\big(a^{(1)} + d\big)\Big)^2.
\end{aligned} \tag{2.29}
\]
On the other hand, by Proposition 2.13, f is SOC-convex if and only if
\[
A_1(x) \succeq O \quad \text{and} \quad
\sum_{i=2}^{n} \big(h^T A_i(x) h\big)^2 \le \big(h^T A_1(x) h\big)^2. \tag{2.30}
\]
From (2.26), (2.29) and (2.30), we have that f is SOC-convex if and only if $A_1(x) \succeq O$ and
\[
\Big( -f^{(2)}(\lambda_1)\mu_1^2 + f^{(2)}(\lambda_2)\mu_2^2 + d\mu^2 \Big)^2
+ 2\mu^2 \Big(\mu_1\big(a^{(1)} - d\big) + \mu_2\big(a^{(1)} + d\big)\Big)^2
\le \Big( f^{(2)}(\lambda_1)\mu_1^2 + f^{(2)}(\lambda_2)\mu_2^2 + a^{(1)}\mu^2 \Big)^2. \tag{2.31}
\]

When $n = 2$, it is clear that $\mu = 0$. Then, f is SOC-convex if and only if $A_1(x) \succeq O$ and $f^{(2)}(\lambda_1) f^{(2)}(\lambda_2) \ge 0$.

From the previous discussions, we know that $b^{(2)} - c^{(2)} = f^{(2)}(\lambda_1)$, $b^{(2)} + c^{(2)} = f^{(2)}(\lambda_2)$ and $a^{(1)} = \frac{f^{(1)}(\lambda_2) - f^{(1)}(\lambda_1)}{\lambda_2 - \lambda_1}$ are all eigenvalues of $A_1(x)$. Thus, f is SOC-convex if and only if
\[
f^{(2)}(\lambda_2) \ge 0, \qquad f^{(2)}(\lambda_1) \ge 0, \qquad f^{(1)}(\lambda_2) \ge f^{(1)}(\lambda_1),
\]
which by the arbitrariness of x is equivalent to saying that f is convex on J.

When $n \ge 3$, if $\mu = 0$, then from the discussions above, we know that f is SOC-convex if and only if f is convex. If $\mu \ne 0$, without loss of generality, we assume that $\mu^2 = 1$. Then, the inequality (2.31) above is equivalent to
\[
\begin{aligned}
& 4 f^{(2)}(\lambda_1) f^{(2)}(\lambda_2) \mu_1^2 \mu_2^2 + \Big( \big(a^{(1)}\big)^2 - d^2 \Big)
+ 2 f^{(2)}(\lambda_2) \mu_2^2 \big(a^{(1)} - d\big) + 2 f^{(2)}(\lambda_1) \mu_1^2 \big(a^{(1)} + d\big) \\
&\quad - 2 \Big[ \mu_1^2 \big(a^{(1)} - d\big)^2 + \mu_2^2 \big(a^{(1)} + d\big)^2 + 2\mu_1\mu_2 \Big( \big(a^{(1)}\big)^2 - d^2 \Big) \Big] \ge 0
\quad \text{for any } \mu_1, \mu_2.
\end{aligned} \tag{2.32}
\]
Now we show that $A_1(x) \succeq O$ and (2.32) holds if and only if f is convex on J and
\[
f^{(2)}(\lambda_1)\big(a^{(1)} + d\big) \ge \big(a^{(1)} - d\big)^2, \tag{2.33}
\]
\[
f^{(2)}(\lambda_2)\big(a^{(1)} - d\big) \ge \big(a^{(1)} + d\big)^2. \tag{2.34}
\]
Indeed, if f is convex on J, then by the discussions above $A_1(x) \succeq O$ clearly holds. If the inequalities (2.33) and (2.34) hold, then by the convexity of f we have $a^{(1)} \ge |d|$. If $\mu_1 \mu_2 \le 0$, then we readily have the inequality (2.32). If $\mu_1 \mu_2 > 0$, then using $a^{(1)} \ge |d|$ together with (2.33) and (2.34) yields that
\[
f^{(2)}(\lambda_1) f^{(2)}(\lambda_2)\, \mu_1^2 \mu_2^2 \ge \Big( \big(a^{(1)}\big)^2 - d^2 \Big) \mu_1^2 \mu_2^2.
\]
Combining this with (2.33) and (2.34) thus leads to the inequality (2.32). On the other hand, if $A_1(x) \succeq O$, then f must be convex on J by the discussions above, whereas if the inequality (2.32) holds for any $\mu_1, \mu_2$, then letting $\mu_1 = \mu_2 = 0$ yields that
\[
a^{(1)} \ge |d|. \tag{2.35}
\]
Using the inequality (2.35) and letting $\mu_2 = 0$ in (2.32) then yields (2.33), whereas using (2.35) and letting $\mu_1 = 0$ in (2.32) leads to (2.34). Thus, when $n \ge 3$, f is SOC-convex if and only if f is convex on J and (2.33) and (2.34) hold. We notice that (2.33) and (2.34) are equivalent to
\[
\frac{1}{2} f^{(2)}(\lambda_1)\, \frac{f(\lambda_1) - f(\lambda_2) + f^{(1)}(\lambda_2)(\lambda_2 - \lambda_1)}{(\lambda_2 - \lambda_1)^2}
\ge \frac{\big[ f(\lambda_2) - f(\lambda_1) - f^{(1)}(\lambda_1)(\lambda_2 - \lambda_1) \big]^2}{(\lambda_2 - \lambda_1)^4}
\]
and
\[
\frac{1}{2} f^{(2)}(\lambda_2)\, \frac{f(\lambda_2) - f(\lambda_1) - f^{(1)}(\lambda_1)(\lambda_2 - \lambda_1)}{(\lambda_2 - \lambda_1)^2}
\ge \frac{\big[ f(\lambda_1) - f(\lambda_2) + f^{(1)}(\lambda_2)(\lambda_2 - \lambda_1) \big]^2}{(\lambda_2 - \lambda_1)^4}.
\]
Therefore, f is SOC-convex if and only if f is convex on J, and
\[
\frac{1}{2} f^{(2)}(t_0)\, \frac{f(t_0) - f(t) - f^{(1)}(t)(t_0 - t)}{(t_0 - t)^2}
\ge \frac{\big[ f(t) - f(t_0) - f^{(1)}(t_0)(t - t_0) \big]^2}{(t_0 - t)^4}, \qquad \forall\, t_0, t \in J. \tag{2.36}
\]
Summing up the above analysis, we can characterize the SOC-convexity as follows.

Proposition 2.16. Let $f \in C^{(2)}(J)$ with J being an open interval in $\mathbb{R}$ (which implies ${\rm dom}(f^{\rm soc})$ is open in $\mathbb{R}^n$). Then, the following hold.

(a) f is SOC-convex of order 2 if and only if f is convex.

(b) f is SOC-convex of order n ≥ 3 if and only if f is convex and the inequality (2.36) holds for any t0, t ∈ J .
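The criterion in Proposition 2.16(b) can be sampled numerically. The following sketch (Python/NumPy; the function names are ours, and a grid check is only a necessary sample, not a proof) evaluates convexity together with inequality (2.36) on all pairs of a grid; it rejects $e^t$ (see Example 2.11 below) and accepts $t^2$, which is SOC-convex.

```python
import numpy as np

def gap_236(f, d1f, d2f, t0, t):
    """LHS minus RHS of inequality (2.36); nonnegative values are what SOC-convexity requires."""
    lhs = 0.5 * d2f(t0) * (f(t0) - f(t) - d1f(t) * (t0 - t)) / (t0 - t) ** 2
    rhs = (f(t) - f(t0) - d1f(t0) * (t - t0)) ** 2 / (t0 - t) ** 4
    return lhs - rhs

def looks_soc_convex_order_ge3(f, d1f, d2f, grid, tol=1e-10):
    """Sample Proposition 2.16(b) on a finite grid: convexity (f'' >= 0) plus (2.36)."""
    if any(d2f(t) < -tol for t in grid):
        return False
    return all(gap_236(f, d1f, d2f, t0, t) >= -tol
               for t0 in grid for t in grid if t0 != t)

grid = np.linspace(-1.0, 1.0, 40)
print(looks_soc_convex_order_ge3(np.exp, np.exp, np.exp, grid))   # expect False
print(looks_soc_convex_order_ge3(lambda t: t**2, lambda t: 2*t, lambda t: 2.0 + 0*t, grid))  # expect True
```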

By the formulas of divided differences, it is not hard to verify that f is convex on J and (2.36) holds for any $t_0, t \in J$ if and only if
\[
\begin{bmatrix}
\triangle^2 f(t_0, t_0, t_0) & \triangle^2 f(t_0, t, t_0) \\
\triangle^2 f(t, t_0, t_0) & \triangle^2 f(t, t, t_0)
\end{bmatrix} \succeq O. \tag{2.37}
\]

This, together with Proposition 2.16 and [75, Theorem 6.6.52], leads to the following results.

Proposition 2.17. Let $f \in C^{(2)}(J)$ with J being an open interval in $\mathbb{R}$ (which implies ${\rm dom}(f^{\rm soc})$ is open in $\mathbb{R}^n$). Then, the following hold.

(a) f is SOC-convex of order n ≥ 3 if and only if it is 2-matrix convex.

(b) f is SOC-convex of order n ≤ 2 if it is 2-matrix convex.

Proposition 2.17 implies that, if f is a twice continuously differentiable function defined on an open interval J and 2-matrix convex, then it must be SOC-convex. Similar to Proposition 2.15(a), when f is SOC-convex of order 2, it may not be 2-matrix convex. For example, $f(t) = t^3$ is SOC-convex of order 2 on $(0, +\infty)$ by Example 2.3(c), but it is easy to verify that (2.37) does not hold for this function, and consequently, f is not 2-matrix convex. Using Proposition 2.17, we may prove that the direction "$\Longleftarrow$" of Conjecture 2.2 does not hold in general, although the other direction is true due to Proposition 2.8.

Particularly, from Proposition 2.17 and [68, Theorem 2.3], we can establish the following characterizations for SOC-convex functions.

Proposition 2.18. Let $f \in C^{(4)}(J)$ with J being an open interval in $\mathbb{R}$ and ${\rm dom}(f^{\rm soc}) \subseteq \mathbb{R}^n$. If $f^{(2)}(t) > 0$ for every $t \in J$, then f is SOC-convex of order n with $n \ge 3$ if and only if one of the following conditions holds.

(a) For every $t \in J$, the $2 \times 2$ matrix
\[
\begin{bmatrix}
\dfrac{f^{(2)}(t)}{2} & \dfrac{f^{(3)}(t)}{6} \\[2mm]
\dfrac{f^{(3)}(t)}{6} & \dfrac{f^{(4)}(t)}{24}
\end{bmatrix} \succeq O.
\]

(b) There is a positive concave function $c(\cdot)$ on J such that $f^{(2)}(t) = c(t)^{-3}$ for every $t \in J$.

(c) There holds that
\[
\left( \frac{f(t_0) - f(t) - f^{(1)}(t)(t_0 - t)}{(t_0 - t)^2} \right)
\left( \frac{f(t) - f(t_0) - f^{(1)}(t_0)(t - t_0)}{(t_0 - t)^2} \right)
\le \frac{1}{4} f^{(2)}(t_0) f^{(2)}(t), \qquad \forall\, t_0, t \in J. \tag{2.38}
\]

Moreover, f is also SOC-convex of order 2 under one of the above conditions.

Proof. We note that f is convex on J. Therefore, by Proposition 2.16, it suffices to prove the following equivalence:
\[
(2.36) \iff \text{assertion (a)} \iff \text{assertion (b)} \iff \text{assertion (c)}.
\]

Case (1). (2.36) $\Rightarrow$ assertion (a): From the previous discussions, we know that (2.36) is equivalent to (2.33) and (2.34). We expand (2.33) using Taylor's expansion at $\lambda_1$ to the fourth order and get
\[
\frac{3}{4} f^{(2)}(\lambda_1) f^{(4)}(\lambda_1) \ge \big(f^{(3)}(\lambda_1)\big)^2.
\]
We do the same for the inequality (2.34) at $\lambda_2$ and get the inequality
\[
\frac{3}{4} f^{(2)}(\lambda_2) f^{(4)}(\lambda_2) \ge \big(f^{(3)}(\lambda_2)\big)^2.
\]
The above two inequalities are precisely
\[
\frac{3}{4} f^{(2)}(t) f^{(4)}(t) \ge \big(f^{(3)}(t)\big)^2, \qquad \forall\, t \in J, \tag{2.39}
\]
which is clearly equivalent to saying that the $2 \times 2$ matrix in (a) is positive semidefinite.

Case (2). assertion (a) $\Rightarrow$ assertion (b): Take $c(t) = \big[f^{(2)}(t)\big]^{-1/3}$ for $t \in J$. Then c is a positive function and $f^{(2)}(t) = c(t)^{-3}$. By twice differentiation, we obtain
\[
f^{(4)}(t) = 12\, c(t)^{-5} \big[c'(t)\big]^2 - 3\, c(t)^{-4} c''(t).
\]
Substituting the last equality into the matrix in (a) then yields that
\[
-\frac{1}{16}\, c(t)^{-7} c''(t) \ge 0,
\]
which, together with $c(t) > 0$ for every $t \in J$, implies that c is concave.

Case (3). assertion (b) $\Rightarrow$ assertion (c): We first prove the following fact: if $f^{(2)}(t)$ is strictly positive for every $t \in J$ and the function $c(t) = f^{(2)}(t)^{-1/3}$ is concave on J, then
\[
\frac{f(t_0) - f(t) - f^{(1)}(t)(t_0 - t)}{(t_0 - t)^2} \le \frac{1}{2} f^{(2)}(t_0)^{1/3} f^{(2)}(t)^{2/3}, \qquad \forall\, t_0, t \in J. \tag{2.40}
\]
Indeed, using the concavity of the function c, it follows that
\[
\begin{aligned}
\frac{f(t_0) - f(t) - f^{(1)}(t)(t_0 - t)}{(t_0 - t)^2}
&= \int_0^1 \!\! \int_0^{u_1} f^{(2)}\big(t + u_2(t_0 - t)\big)\, du_2\, du_1 \\
&= \int_0^1 \!\! \int_0^{u_1} c\big((1 - u_2) t + u_2 t_0\big)^{-3}\, du_2\, du_1 \\
&\le \int_0^1 \!\! \int_0^{u_1} \big((1 - u_2)\, c(t) + u_2\, c(t_0)\big)^{-3}\, du_2\, du_1.
\end{aligned}
\]
Notice that $g(t) = 1/t$ $(t > 0)$ has the second-order derivative $g^{(2)}(t) = 2/t^3$. Hence,
\[
\begin{aligned}
\frac{f(t_0) - f(t) - f^{(1)}(t)(t_0 - t)}{(t_0 - t)^2}
&\le \frac{1}{2} \int_0^1 \!\! \int_0^{u_1} g^{(2)}\big((1 - u_2)\, c(t) + u_2\, c(t_0)\big)\, du_2\, du_1 \\
&= \frac{1}{2} \left( \frac{g(c(t_0)) - g(c(t))}{(c(t_0) - c(t))^2} - \frac{g^{(1)}(c(t))}{c(t_0) - c(t)} \right) \\
&= \frac{1}{2\, c(t_0)\, c(t)\, c(t)} \\
&= \frac{1}{2} f^{(2)}(t_0)^{1/3} f^{(2)}(t)^{2/3},
\end{aligned}
\]
which implies the inequality (2.40). Now, exchanging $t_0$ with t in (2.40), we obtain
\[
\frac{f(t) - f(t_0) - f^{(1)}(t_0)(t - t_0)}{(t_0 - t)^2} \le \frac{1}{2} f^{(2)}(t)^{1/3} f^{(2)}(t_0)^{2/3}, \qquad \forall\, t, t_0 \in J. \tag{2.41}
\]
Since f is convex on J by the given assumption, the left-hand sides of the inequalities (2.40) and (2.41) are nonnegative, and their product satisfies the inequality of (2.38).

Case (4). assertion (c) $\Rightarrow$ (2.36): We introduce a function $F : J \to \mathbb{R}$ defined by
\[
F(t) = \frac{1}{2} f^{(2)}(t_0) \big[ f(t_0) - f(t) - f^{(1)}(t)(t_0 - t) \big] - \frac{\big[ f(t) - f(t_0) - f^{(1)}(t_0)(t - t_0) \big]^2}{(t_0 - t)^2}
\]
if $t \ne t_0$, and otherwise $F(t_0) = 0$. We next prove that F is nonnegative on J. It is easy to verify that such F(t) is differentiable on J, and moreover,
\[
\begin{aligned}
F'(t) &= \frac{1}{2} f^{(2)}(t_0) f^{(2)}(t)(t - t_0)
- 2(t - t_0)^{-2} \big[ f(t) - f(t_0) - f^{(1)}(t_0)(t - t_0) \big] \big( f^{(1)}(t) - f^{(1)}(t_0) \big) \\
&\quad + 2(t - t_0)^{-3} \big[ f(t) - f(t_0) - f^{(1)}(t_0)(t - t_0) \big]^2 \\
&= \frac{1}{2} f^{(2)}(t_0) f^{(2)}(t)(t - t_0)
- 2(t - t_0)^{-3} \big[ f(t) - f(t_0) - f^{(1)}(t_0)(t - t_0) \big] \big[ f(t_0) - f(t) - f^{(1)}(t)(t_0 - t) \big] \\
&= 2(t - t_0) \Big[ \frac{1}{4} f^{(2)}(t_0) f^{(2)}(t)
- (t - t_0)^{-4} \big( f(t) - f(t_0) - f^{(1)}(t_0)(t - t_0) \big) \big( f(t_0) - f(t) - f^{(1)}(t)(t_0 - t) \big) \Big].
\end{aligned}
\]
Using the inequality in part (c), we can verify that F(t) has a minimum value 0 at $t = t_0$, and therefore, F(t) is nonnegative on J. This implies the inequality (2.36). $\Box$

We demonstrate a few examples by using either Proposition 2.16, Proposition 2.17, or Proposition 2.18.

Example 2.11. Let $f : \mathbb{R} \to [0, \infty)$ be $f(t) = e^t$. Then,

(a) f is SOC-convex of order 2 on $\mathbb{R}$;

(b) f is not SOC-convex of order $n \ge 3$ on $\mathbb{R}$.

Solution. (a) By applying Proposition 2.16(a), it is clear that f is SOC-convex of order 2 because the exponential function is convex on $\mathbb{R}$.

(b) The following counterexample shows that $f(t) = e^t$ is not SOC-convex of order $n \ge 3$. To see this, we compute that
\[
e^{[(2,0,-1) + (6,-4,-3)]/2} = e^{(4,-2,-2)}
= e^4 \Big( \cosh(2\sqrt{2}),\ \sinh(2\sqrt{2}) \cdot (-2,-2)/(2\sqrt{2}) \Big)
\approx (463.48,\ -325.45,\ -325.45)
\]
and
\[
\frac{1}{2}\left( e^{(2,0,-1)} + e^{(6,-4,-3)} \right)
= \frac{1}{2} \Big[ e^{2} \big( \cosh(1),\ 0,\ -\sinh(1) \big) + e^{6} \big( \cosh(5),\ \sinh(5) \cdot (-4,-3)/5 \big) \Big]
\approx (14975,\ -11974,\ -8985).
\]
We see that $14975 - 463.48 = 14511.52$, but
\[
\big\| (-11974, -8985) - (-325.45, -325.45) \big\| \approx 14515 > 14511.52,
\]
which contradicts SOC-convexity. $\Box$
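The computation in Example 2.11(b) can be reproduced in a few lines (Python/NumPy; the helper `exp_soc` is our own name for the SOC exponential built from the spectral decomposition (1.9)):

```python
import numpy as np

def exp_soc(x):
    """exp^soc(x) = e^{lam1} u^(1) + e^{lam2} u^(2) via the spectral decomposition (1.9)."""
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    w = x2 / nx2 if nx2 > 0 else np.r_[1.0, np.zeros(len(x2) - 1)]
    return 0.5 * np.exp(x1 - nx2) * np.r_[1.0, -w] + 0.5 * np.exp(x1 + nx2) * np.r_[1.0, w]

x = np.array([2.0, 0.0, -1.0])
y = np.array([6.0, -4.0, -3.0])
gap = 0.5 * (exp_soc(x) + exp_soc(y)) - exp_soc(0.5 * (x + y))
# SOC-convexity would require gap in K^3, i.e., gap[0] >= ||gap[1:]||
print(gap[0], np.linalg.norm(gap[1:]))   # the first value is smaller, so exp is not SOC-convex here
```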

Example 2.12. (a) For any fixed $\sigma \in \mathbb{R}$, the function $f(t) = (t - \sigma)^{-r}$ with $r \ge 0$ is SOC-convex on $(\sigma, \infty)$ if and only if $0 \le r \le 1$.

(b) For any fixed $\sigma \in \mathbb{R}$, the function $f(t) = (t - \sigma)^{r}$ with $r \ge 0$ is SOC-convex on $[\sigma, \infty)$ if and only if $1 \le r \le 2$, and f is SOC-concave on $[\sigma, \infty)$ if and only if $0 \le r \le 1$.

(c) For any fixed $\sigma \in \mathbb{R}$, the function $f(t) = \ln(t - \sigma)$ is SOC-concave on $(\sigma, \infty)$.

(d) For any fixed $\sigma \ge 0$, the function $f(t) = \frac{t}{t + \sigma}$ is SOC-concave on $(-\sigma, \infty)$.

Solution. (a) For any fixed $\sigma \in \mathbb{R}$, by a simple computation, we have that
\[
\begin{bmatrix}
\dfrac{f^{(2)}(t)}{2} & \dfrac{f^{(3)}(t)}{6} \\[2mm]
\dfrac{f^{(3)}(t)}{6} & \dfrac{f^{(4)}(t)}{24}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{r(r+1)(t-\sigma)^{-r-2}}{2} & \dfrac{-r(r+1)(r+2)(t-\sigma)^{-r-3}}{6} \\[2mm]
\dfrac{-r(r+1)(r+2)(t-\sigma)^{-r-3}}{6} & \dfrac{r(r+1)(r+2)(r+3)(t-\sigma)^{-r-4}}{24}
\end{bmatrix}.
\]
The necessary and sufficient condition for the above matrix to be positive semidefinite is
\[
\frac{r^2(r+1)^2(r+2)(r+3)(t-\sigma)^{-2r-6}}{24} - \frac{r^2(r+1)^2(r+2)^2(t-\sigma)^{-2r-6}}{18} \ge 0, \tag{2.42}
\]
which is equivalent to requiring $0 \le r \le 1$. By Proposition 2.18, it then follows that f is SOC-convex on $(\sigma, +\infty)$ if and only if $0 \le r \le 1$.

(b) For any fixed $\sigma \in \mathbb{R}$, by a simple computation, we have that
\[
\begin{bmatrix}
\dfrac{f^{(2)}(t)}{2} & \dfrac{f^{(3)}(t)}{6} \\[2mm]
\dfrac{f^{(3)}(t)}{6} & \dfrac{f^{(4)}(t)}{24}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{r(r-1)(t-\sigma)^{r-2}}{2} & \dfrac{r(r-1)(r-2)(t-\sigma)^{r-3}}{6} \\[2mm]
\dfrac{r(r-1)(r-2)(t-\sigma)^{r-3}}{6} & \dfrac{r(r-1)(r-2)(r-3)(t-\sigma)^{r-4}}{24}
\end{bmatrix}.
\]
The necessary and sufficient condition for the above matrix to be positive semidefinite is
\[
r \ge 1 \quad \text{and} \quad
\frac{r^2(r-1)^2(r-2)(r-3)(t-\sigma)^{2r-6}}{24} - \frac{r^2(r-1)^2(r-2)^2(t-\sigma)^{2r-6}}{18} \ge 0, \tag{2.43}
\]
whereas the necessary and sufficient condition for it to be negative semidefinite is
\[
0 \le r \le 1 \quad \text{and} \quad
\frac{r^2(r-1)^2(r-2)(r-3)(t-\sigma)^{2r-6}}{24} - \frac{r^2(r-1)^2(r-2)^2(t-\sigma)^{2r-6}}{18} \ge 0. \tag{2.44}
\]
It is easily shown that (2.43) holds if and only if $1 \le r \le 2$, and (2.44) holds if and only if $0 \le r \le 1$. By Proposition 2.18, this shows that f is SOC-convex on $(\sigma, \infty)$ if and only if $1 \le r \le 2$, and f is SOC-concave on $(\sigma, \infty)$ if and only if $0 \le r \le 1$. This together with the definition of SOC-convexity yields the desired result.

(c) Notice that $-f(t) = -\ln(t-\sigma)$ satisfies $(-f)^{(2)}(t) = \frac{1}{(t-\sigma)^2} > 0$ for any $t > \sigma$, and there always holds that
\[
\begin{bmatrix}
\dfrac{(-f)^{(2)}(t)}{2} & \dfrac{(-f)^{(3)}(t)}{6} \\[2mm]
\dfrac{(-f)^{(3)}(t)}{6} & \dfrac{(-f)^{(4)}(t)}{24}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{1}{2(t-\sigma)^2} & -\dfrac{1}{3(t-\sigma)^3} \\[2mm]
-\dfrac{1}{3(t-\sigma)^3} & \dfrac{1}{4(t-\sigma)^4}
\end{bmatrix} \succeq O.
\]
Consequently, from Proposition 2.18(a), $-f$ is SOC-convex, i.e., we conclude that f is SOC-concave on $(\sigma, \infty)$.

(d) Similarly, $-f(t) = -\frac{t}{t+\sigma}$ satisfies $(-f)^{(2)}(t) = \frac{2\sigma}{(t+\sigma)^3} \ge 0$ for any $t > -\sigma$, and it is easy to compute that
\[
\begin{bmatrix}
\dfrac{(-f)^{(2)}(t)}{2} & \dfrac{(-f)^{(3)}(t)}{6} \\[2mm]
\dfrac{(-f)^{(3)}(t)}{6} & \dfrac{(-f)^{(4)}(t)}{24}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{\sigma}{(t+\sigma)^3} & -\dfrac{\sigma}{(t+\sigma)^4} \\[2mm]
-\dfrac{\sigma}{(t+\sigma)^4} & \dfrac{\sigma}{(t+\sigma)^5}
\end{bmatrix} \succeq O.
\]
By Proposition 2.18 again, we then have that the function f is SOC-concave on $(-\sigma, \infty)$.
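Finally, condition (a) of Proposition 2.18 is easy to sample numerically. The sketch below (Python/NumPy; the function names are ours, and the grid test only samples the "for every t" requirement) checks the $2 \times 2$ matrix for the power functions of Example 2.12(b) using their closed-form derivatives.

```python
import numpy as np

def prop218_matrix_power(r, s, t):
    """2x2 matrix of Proposition 2.18(a) for f(t) = (t - s)**r, using closed-form derivatives."""
    d2 = r * (r - 1) * (t - s) ** (r - 2)
    d3 = r * (r - 1) * (r - 2) * (t - s) ** (r - 3)
    d4 = r * (r - 1) * (r - 2) * (r - 3) * (t - s) ** (r - 4)
    return np.array([[d2 / 2.0, d3 / 6.0],
                     [d3 / 6.0, d4 / 24.0]])

def psd_on_grid(r, s=0.0, grid=None, tol=1e-12):
    grid = np.linspace(s + 0.5, s + 5.0, 20) if grid is None else grid
    return all(np.linalg.eigvalsh(prop218_matrix_power(r, s, t)).min() >= -tol for t in grid)

# Example 2.12(b): (t - s)^r is SOC-convex exactly for 1 <= r <= 2
for r in (0.5, 1.5, 2.0, 3.0):
    print(r, psd_on_grid(r))     # expect False, True, True, False
```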