to appear in Linear Algebra and its Applications, 2012

### SOC-monotone and SOC-convex functions vs. matrix-monotone and matrix-convex functions


Shaohua Pan^{†}, Yungyen Chiang^{‡} and Jein-Shan Chen^{§}

September 30, 2010

(First revised on January 13, 2012) (Second revised on April 5, 2012)

Abstract. The SOC-monotone function (respectively, SOC-convex function) is a scalar valued function that induces a map preserving the monotone order (respectively, the convex order), when imposed on the spectral factorization of vectors associated with second-order cones (SOCs) in general Hilbert spaces. In this paper, we provide necessary and sufficient characterizations for these two classes of functions, and in particular establish that the set of continuous SOC-monotone (respectively, SOC-convex) functions coincides with the set of continuous matrix monotone (respectively, matrix convex) functions of order 2.

Keywords: Hilbert space; second-order cone; SOC-monotonicity; SOC-convexity.

AMS subject classifications. 26B05, 26B35, 90C33, 65K05

### 1 Introduction

Let H be a real Hilbert space of dimension dim(H) ≥ 3 endowed with an inner product ⟨·, ·⟩ and its induced norm ‖·‖. Fix a unit vector e ∈ H and denote by ⟨e⟩^{⊥} the orthogonal complement of e, i.e., ⟨e⟩^{⊥} = {x ∈ H | ⟨x, e⟩ = 0}. Then each x ∈ H can be written as

x = x_{e} + x_{0}e for some x_{e} ∈ ⟨e⟩^{⊥} and x_{0} ∈ R.

1This work was supported by National Young Natural Science Foundation (No. 10901058) and the Fundamental Research Funds for the Central Universities.

†Department of Mathematics, South China University of Technology, Wushan Road 381, Tianhe District of Guangzhou 510641, China. Email: shhpan@scut.edu.cn

‡Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan.

Email: chiangyy@math.nsysu.edu.tw

§Corresponding author. Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author’s work is supported by National Science Council of Taiwan. Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan 11677. Email: jschen@math.ntnu.edu.tw

The second-order cone (SOC) in H, also called the Lorentz cone, is the set defined by

K := { x ∈ H | ⟨x, e⟩ ≥ (1/√2)‖x‖ } = { x_{e} + x_{0}e ∈ H | x_{0} ≥ ‖x_{e}‖ }.

From [7, Section 2], we know that K is a pointed closed convex self-dual cone. Hence, H becomes a partially ordered space via the relation ⪰_{K}. In the sequel, for any x, y ∈ H, we always write x ⪰_{K} y (respectively, x ≻_{K} y) when x − y ∈ K (respectively, x − y ∈ int K); and we denote by x̄_{e} the vector x_{e}/‖x_{e}‖ if x_{e} ≠ 0, and otherwise any fixed unit vector from ⟨e⟩^{⊥}.
Associated with the second-order cone K, each x = x_{e} + x_{0}e ∈ H can be decomposed as

x = λ_{1}(x)u_{1}(x) + λ_{2}(x)u_{2}(x), (1)

where λ_{i}(x) ∈ R and u_{i}(x) ∈ H for i = 1, 2 are the spectral values and the associated spectral vectors of x, defined by

λ_{i}(x) = x_{0} + (−1)^{i}‖x_{e}‖, u_{i}(x) = (1/2)(e + (−1)^{i}x̄_{e}). (2)

Clearly, when x_{e} ≠ 0, the spectral factorization of x is unique by definition.

Let f : J ⊆ R → R be a scalar valued function, where J is an interval (finite or infinite, closed or open) in R. Let S be the set of all x ∈ H whose spectral values λ_{1}(x) and λ_{2}(x) belong to J. Unless otherwise stated, in this paper S is always taken in this way. By the spectral factorization of x in (1)-(2), it is natural to define f^{soc} : S ⊆ H → H by

f^{soc}(x) := f(λ_{1}(x))u_{1}(x) + f(λ_{2}(x))u_{2}(x), ∀x ∈ S. (3)

It is easy to see that the function f^{soc} is well defined whether x_{e} = 0 or not. For example, taking f(t) = t^{2} gives f^{soc}(x) = x^{2} = x ◦ x, where “◦” denotes the Jordan product, whose definition is given in the next section. Note that

(λ_{1}(x) − λ_{1}(y))^{2} + (λ_{2}(x) − λ_{2}(y))^{2} = 2(‖x‖^{2} + ‖y‖^{2} − 2x_{0}y_{0} − 2‖x_{e}‖‖y_{e}‖)
≤ 2(‖x‖^{2} + ‖y‖^{2} − 2⟨x, y⟩) = 2‖x − y‖^{2}.

We may verify that the domain S of f^{soc} is open in H if and only if J is open in R. Also, S is always convex since, for any x = x_{e} + x_{0}e, y = y_{e} + y_{0}e ∈ S and β ∈ [0, 1],

λ_{1}[βx + (1 − β)y] = (βx_{0} + (1 − β)y_{0}) − ‖βx_{e} + (1 − β)y_{e}‖ ≥ min{λ_{1}(x), λ_{1}(y)},
λ_{2}[βx + (1 − β)y] = (βx_{0} + (1 − β)y_{0}) + ‖βx_{e} + (1 − β)y_{e}‖ ≤ max{λ_{2}(x), λ_{2}(y)},

which implies that βx + (1 − β)y ∈ S. Thus, f^{soc}(βx + (1 − β)y) is well defined.
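For readers who wish to experiment, the spectral factorization (1)-(2) and the definition (3) are easy to implement when H is taken to be R^{n} with e the first coordinate vector. The code below is an illustrative sketch under that finite-dimensional assumption (not part of the paper's development); the final assertion checks the example f(t) = t^{2} against the Jordan product x ◦ x defined in the next section.

```python
import numpy as np

def spectral(x):
    """Spectral factorization (1)-(2) of x, with H = R^n and e = (1, 0, ..., 0)."""
    x0, xe = x[0], x[1:]
    nrm = np.linalg.norm(xe)
    xbar = xe / nrm if nrm > 0 else np.eye(len(xe))[0]   # any unit vector works if x_e = 0
    lam = [x0 - nrm, x0 + nrm]                           # spectral values lam_1 <= lam_2
    u = [0.5 * np.concatenate(([1.0], -xbar)),           # spectral vectors u_1, u_2
         0.5 * np.concatenate(([1.0], xbar))]
    return lam, u

def f_soc(f, x):
    """f^soc(x) = f(lam_1) u_1 + f(lam_2) u_2, definition (3)."""
    lam, u = spectral(x)
    return f(lam[0]) * u[0] + f(lam[1]) * u[1]

def jordan(x, y):
    """Jordan product x o y = (x_0 y_e + y_0 x_e) + <x, y> e (see Section 2)."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

x = np.array([3.0, 1.0, -2.0])
lam, u = spectral(x)
assert np.allclose(lam[0] * u[0] + lam[1] * u[1], x)          # decomposition (1)
assert np.allclose(f_soc(lambda t: t ** 2, x), jordan(x, x))  # f(t) = t^2 gives x o x
```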

In this paper we are interested in two classes of special scalar valued functions which, via (3), induce maps that preserve the monotone order and the convex order, respectively.

Definition 1.1 A function f : J → R is said to be SOC-monotone if for any x, y ∈ S,

x ⪰_{K} y =⇒ f^{soc}(x) ⪰_{K} f^{soc}(y); (4)

and f is said to be SOC-convex if, for any x, y ∈ S and any β ∈ [0, 1],

f^{soc}(βx + (1 − β)y) ⪯_{K} βf^{soc}(x) + (1 − β)f^{soc}(y). (5)

From Definition 1.1 and equation (3), it is easy to see that the sets of SOC-monotone and SOC-convex functions are closed under positive linear combinations and pointwise limits.

The concept of SOC-monotone (respectively, SOC-convex) functions above is a direct extension of those given in [5, 6] to general Hilbert spaces, and is analogous to that of matrix monotone (respectively, matrix convex) functions and, more generally, operator monotone (respectively, operator convex) functions; see, e.g., [17, 15, 14, 2, 11, 23]. Just as matrix monotone (respectively, matrix convex) functions are important for the solution of convex semidefinite programming [19, 4], SOC-monotone (respectively, SOC-convex) functions play a crucial role in the design and analysis of algorithms for convex second-order cone programming [3, 22]. For matrix monotone and matrix convex functions, after the seminal work of Löwner [17] and Kraus [15], there have been systematic studies and complete characterizations; see [8, 16, 4, 13, 12, 21, 20] and the references therein.

However, the study of SOC-monotone and SOC-convex functions began only recently with [5], and their characterizations are still incomplete. In particular, it is not clear how the SOC-monotone (respectively, SOC-convex) functions are related to the matrix monotone (respectively, matrix convex) functions.

In this work, we provide necessary and sufficient characterizations for SOC-monotone and SOC-convex functions in the setting of Hilbert spaces, and show that the set of continuous SOC-monotone (SOC-convex) functions coincides with that of continuous matrix monotone (matrix convex) functions of order 2. Some of these results generalize those of [5, 6] (see Propositions 3.2 and 4.2), and some are new and difficult to obtain with the techniques of [5, 6] (see, for example, Proposition 4.4). In addition, we discuss the relations between SOC-monotone and SOC-convex functions, verify Conjecture 4.2 of [5] under a slightly stronger condition (see Proposition 6.2), and present a counterexample showing that Conjecture 4.1 of [5] does not hold in general. It is worthwhile to point out that the analysis in this paper depends only on the inner product of Hilbert spaces, whereas most of the results in [5, 6] are obtained with the help of matrix operations.

Throughout this paper, all differentiability means Fréchet differentiability. If F : H → H is (twice) differentiable at x ∈ H, we denote by F′(x) (respectively, F″(x)) the first-order (respectively, second-order) F-derivative of F at x. In addition, we use C^{n}(J) and C^{∞}(J) to denote the sets of n times and infinitely many times continuously differentiable real functions on J, respectively. When f ∈ C^{1}(J), we denote by f^{[1]} the function on J × J defined by

f^{[1]}(λ, µ) := (f(λ) − f(µ))/(λ − µ) if λ ≠ µ, and f^{[1]}(λ, λ) := f′(λ);

and when f ∈ C^{2}(J), we denote by f^{[2]} the function on J × J × J defined by

f^{[2]}(τ_{1}, τ_{2}, τ_{3}) := (f^{[1]}(τ_{1}, τ_{2}) − f^{[1]}(τ_{1}, τ_{3}))/(τ_{2} − τ_{3})

if τ_{1}, τ_{2}, τ_{3} are distinct; for other values of τ_{1}, τ_{2}, τ_{3}, f^{[2]} is defined by continuity, e.g.,

f^{[2]}(τ_{1}, τ_{1}, τ_{3}) = (f(τ_{3}) − f(τ_{1}) − f′(τ_{1})(τ_{3} − τ_{1}))/(τ_{3} − τ_{1})^{2}, f^{[2]}(τ_{1}, τ_{1}, τ_{1}) = (1/2)f″(τ_{1}).

For a linear operator L from H into H, we write L ≥ 0 (respectively, L > 0) to mean that L is positive semidefinite (respectively, positive definite), i.e., ⟨h, Lh⟩ ≥ 0 for any h ∈ H (respectively, ⟨h, Lh⟩ > 0 for any 0 ≠ h ∈ H).
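The divided differences f^{[1]} and f^{[2]} defined above can be computed directly. The short sketch below (plain Python, an illustration only; it uses the standard symmetry of divided differences to reduce the confluent cases) checks them on f(t) = t^{2}, for which f^{[1]}(λ, µ) = λ + µ and f^{[2]} ≡ 1:

```python
import math

def f1(f, fp, lam, mu):
    """First divided difference f^[1](lam, mu); fp is f'."""
    return fp(lam) if lam == mu else (f(lam) - f(mu)) / (lam - mu)

def f2(f, fp, fpp, t1, t2, t3):
    """Second divided difference f^[2]; symmetric in its three arguments."""
    if t1 == t2 == t3:
        return fpp(t1) / 2.0                # fully confluent case
    if t2 == t3:                            # rotate so any repeated value comes first
        t1, t2, t3 = t2, t3, t1
    elif t1 == t3:
        t1, t2, t3 = t3, t1, t2
    if t1 == t2:                            # doubly confluent case from the text
        return (f(t3) - f(t1) - fp(t1) * (t3 - t1)) / (t3 - t1) ** 2
    return (f1(f, fp, t1, t2) - f1(f, fp, t1, t3)) / (t2 - t3)

f, fp, fpp = (lambda t: t * t), (lambda t: 2 * t), (lambda t: 2.0)
assert math.isclose(f1(f, fp, 1.0, 3.0), 4.0)       # f^[1](1,3) = 1 + 3
assert math.isclose(f2(f, fp, fpp, 1.0, 2.0, 5.0), 1.0)
assert math.isclose(f2(f, fp, fpp, 2.0, 2.0, 2.0), 1.0)
```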

### 2 Preliminaries

This section recalls some background material and gives several lemmas that will be used in the subsequent sections. We start with the definition of the Jordan product [9]. For any x = x_{e} + x_{0}e, y = y_{e} + y_{0}e ∈ H, the Jordan product of x and y is defined as

x ◦ y := (x_{0}y_{e} + y_{0}x_{e}) + ⟨x, y⟩e.

A simple computation verifies that for any x, y, z ∈ H and the unit vector e: (i) e ◦ e = e and e ◦ x = x; (ii) x ◦ y = y ◦ x; (iii) x ◦ (x^{2} ◦ y) = x^{2} ◦ (x ◦ y), where x^{2} = x ◦ x; (iv) (x + y) ◦ z = x ◦ z + y ◦ z. For any x ∈ H, define its determinant by

det(x) := λ_{1}(x)λ_{2}(x) = x_{0}^{2} − ‖x_{e}‖^{2}.

Then each x = x_{e} + x_{0}e with det(x) ≠ 0 is invertible with respect to the Jordan product, i.e., there is a unique x^{−1} = (−x_{e} + x_{0}e)/det(x) such that x ◦ x^{−1} = e.

We next give several lemmas where Lemma 2.1 is used in Section 3 to characterize SOC- monotonicity, and Lemmas 2.2 and 2.3 are used in Section 4 to characterize SOC-convexity.

Lemma 2.1 Let B := {z ∈ ⟨e⟩^{⊥} | ‖z‖ ≤ 1}. Then, for any given u ∈ ⟨e⟩^{⊥} with ‖u‖ = 1 and θ, λ ∈ R, the following results hold.

(a) θ + λ⟨u, z⟩ ≥ 0 for any z ∈ B if and only if θ ≥ |λ|.

(b) θ − ‖λz‖^{2} ≥ (θ − λ^{2})⟨u, z⟩^{2} for any z ∈ B if and only if θ − λ^{2} ≥ 0.

Proof. (a) Suppose that θ + λ⟨u, z⟩ ≥ 0 for any z ∈ B. If λ = 0, then θ ≥ |λ| clearly holds. If λ ≠ 0, take z = −sign(λ)u. Since ‖u‖ = 1, we have z ∈ B, and consequently, θ + λ⟨u, z⟩ ≥ 0 reduces to θ − |λ| ≥ 0. Conversely, if θ ≥ |λ|, then the Cauchy–Schwarz inequality yields θ + λ⟨u, z⟩ ≥ 0 for any z ∈ B.

(b) Suppose that θ − ‖λz‖^{2} ≥ (θ − λ^{2})⟨u, z⟩^{2} for any z ∈ B. Then we must have θ − λ^{2} ≥ 0. If not, then for those z ∈ B with ‖z‖ = 1 but ⟨u, z⟩^{2} < ‖u‖^{2}‖z‖^{2}, it holds that

(θ − λ^{2})⟨u, z⟩^{2} > (θ − λ^{2})‖u‖^{2}‖z‖^{2} = θ − ‖λz‖^{2},

which contradicts the given assumption. Conversely, if θ − λ^{2} ≥ 0, the Cauchy–Schwarz inequality implies that (θ − λ^{2})⟨u, z⟩^{2} ≤ θ − ‖λz‖^{2} for any z ∈ B. 2

Lemma 2.2 For any given a, b, c ∈ R and x = x_{e} + x_{0}e with x_{e} ≠ 0, the inequality

a(‖h_{e}‖^{2} − ⟨x̄_{e}, h_{e}⟩^{2}) + b[h_{0} + ⟨x̄_{e}, h_{e}⟩]^{2} + c[h_{0} − ⟨x̄_{e}, h_{e}⟩]^{2} ≥ 0 (6)

holds for all h = h_{e} + h_{0}e ∈ H if and only if a ≥ 0, b ≥ 0 and c ≥ 0.

Proof. Suppose that (6) holds for all h = h_{e} + h_{0}e ∈ H. By letting h_{e} = x̄_{e}, h_{0} = 1 and h_{e} = −x̄_{e}, h_{0} = 1, respectively, we get b ≥ 0 and c ≥ 0 from (6). If a ≥ 0 does not hold, then by taking h_{e} = √((b + c + 1)/|a|) · z_{e}/‖z_{e}‖ with 0 ≠ z_{e} ∈ ⟨e⟩^{⊥} satisfying ⟨z_{e}, x_{e}⟩ = 0, and h_{0} = 1, (6) gives the contradiction −1 ≥ 0. Conversely, if a ≥ 0, b ≥ 0 and c ≥ 0, then (6) clearly holds for all h ∈ H. 2
Lemma 2.3 Let f ∈ C^{2}(J) and u_{e} ∈ ⟨e⟩^{⊥} with ‖u_{e}‖ = 1. For any h = h_{e} + h_{0}e ∈ H, define

µ_{1}(h) := (h_{0} − ⟨u_{e}, h_{e}⟩)/√2, µ_{2}(h) := (h_{0} + ⟨u_{e}, h_{e}⟩)/√2, µ(h) := √(‖h_{e}‖^{2} − ⟨u_{e}, h_{e}⟩^{2}).

Then, for any given a, d ∈ R and λ_{1}, λ_{2} ∈ J, the inequality

4f″(λ_{1})f″(λ_{2})µ_{1}(h)^{2}µ_{2}(h)^{2} + 2(a − d)f″(λ_{2})µ_{2}(h)^{2}µ(h)^{2}
+ 2(a + d)f″(λ_{1})µ_{1}(h)^{2}µ(h)^{2} + (a^{2} − d^{2})µ(h)^{4}
− 2[(a − d)µ_{1}(h) + (a + d)µ_{2}(h)]^{2}µ(h)^{2} ≥ 0 (7)

holds for all h = h_{e} + h_{0}e ∈ H if and only if

a^{2} − d^{2} ≥ 0, f″(λ_{2})(a − d) ≥ (a + d)^{2} and f″(λ_{1})(a + d) ≥ (a − d)^{2}. (8)

Proof. Suppose that (7) holds for all h = h_{e} + h_{0}e ∈ H. Taking h_{0} = 0 and h_{e} ≠ 0 with ⟨h_{e}, u_{e}⟩ = 0, we have µ_{1}(h) = 0, µ_{2}(h) = 0 and µ(h) = ‖h_{e}‖ > 0, and then (7) gives a^{2} − d^{2} ≥ 0. Taking h_{e} ≠ 0 such that |⟨u_{e}, h_{e}⟩| < ‖h_{e}‖ and h_{0} = ⟨u_{e}, h_{e}⟩ ≠ 0, we have µ_{1}(h) = 0, µ_{2}(h) = √2 h_{0} and µ(h) > 0, and then (7) reduces to the inequality

4[(a − d)f″(λ_{2}) − (a + d)^{2}]h_{0}^{2} + (a^{2} − d^{2})(‖h_{e}‖^{2} − h_{0}^{2}) ≥ 0.

This implies that (a − d)f″(λ_{2}) − (a + d)^{2} ≥ 0. If not, by letting h_{0} be sufficiently close to ‖h_{e}‖, the last inequality yields a contradiction. Similarly, taking h with h_{e} ≠ 0 satisfying |⟨u_{e}, h_{e}⟩| < ‖h_{e}‖ and h_{0} = −⟨u_{e}, h_{e}⟩, we get f″(λ_{1})(a + d) ≥ (a − d)^{2} from (7).

Next, suppose that (8) holds. Then the inequalities f″(λ_{2})(a − d) ≥ (a + d)^{2} and f″(λ_{1})(a + d) ≥ (a − d)^{2} imply that the left-hand side of (7) is no less than

4f″(λ_{1})f″(λ_{2})µ_{1}(h)^{2}µ_{2}(h)^{2} − 4(a^{2} − d^{2})µ_{1}(h)µ_{2}(h)µ(h)^{2} + (a^{2} − d^{2})µ(h)^{4},

which is obviously nonnegative if µ_{1}(h)µ_{2}(h) ≤ 0. Now assume that µ_{1}(h)µ_{2}(h) > 0. If a^{2} − d^{2} = 0, then the last expression is clearly nonnegative; and if a^{2} − d^{2} > 0, then the last two inequalities in (8) imply that f″(λ_{1})f″(λ_{2}) ≥ a^{2} − d^{2} > 0, and therefore,

4f″(λ_{1})f″(λ_{2})µ_{1}(h)^{2}µ_{2}(h)^{2} − 4(a^{2} − d^{2})µ_{1}(h)µ_{2}(h)µ(h)^{2} + (a^{2} − d^{2})µ(h)^{4}
≥ 4(a^{2} − d^{2})µ_{1}(h)^{2}µ_{2}(h)^{2} − 4(a^{2} − d^{2})µ_{1}(h)µ_{2}(h)µ(h)^{2} + (a^{2} − d^{2})µ(h)^{4}
= (a^{2} − d^{2})[2µ_{1}(h)µ_{2}(h) − µ(h)^{2}]^{2} ≥ 0.

Thus, inequality (7) holds. The proof is complete. 2
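Since (µ_{1}(h), µ_{2}(h), µ(h)) ranges over R × R × [0, ∞) as h varies (recall dim(H) ≥ 3), the equivalence of (7) and (8) can be spot-checked numerically. The sketch below is an illustration only; the parameter values are arbitrary choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lhs7(fpp1, fpp2, a, d, m1, m2, m):
    """LHS of (7) with f''(lam_1) = fpp1, f''(lam_2) = fpp2, m1 = mu_1(h), m2 = mu_2(h), m = mu(h)."""
    return (4 * fpp1 * fpp2 * m1**2 * m2**2
            + 2 * (a - d) * fpp2 * m2**2 * m**2
            + 2 * (a + d) * fpp1 * m1**2 * m**2
            + (a**2 - d**2) * m**4
            - 2 * ((a - d) * m1 + (a + d) * m2)**2 * m**2)

# For fpp1 = fpp2 = 2, a = 2, d = 0, condition (8) holds with equality
# (both products equal (a +- d)^2 = 4), so (7) must hold for every h:
samples = rng.normal(size=(1000, 3))
assert all(lhs7(2.0, 2.0, 2.0, 0.0, m1, m2, abs(m)) >= -1e-9
           for m1, m2, m in samples)

# With fpp1 = fpp2 = 1 the second condition in (8) fails, and so does (7):
assert lhs7(1.0, 1.0, 2.0, 0.0, 0.0, 1.0, 0.5) < 0.0
```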

To close this section, we introduce the regularization of a locally integrable real function.

Let ϕ be a real function of class C^{∞} with the following properties: ϕ ≥ 0, ϕ is even, supp ϕ = [−1, 1], and ∫_{R} ϕ = 1. For each ε > 0, let ϕ_{ε}(t) = (1/ε)ϕ(t/ε). Then supp ϕ_{ε} = [−ε, ε] and ϕ_{ε} has all the properties of ϕ listed above. If f is a locally integrable real function, we define its regularization of order ε as the function

f_{ε}(s) := ∫ f(s − t)ϕ_{ε}(t)dt = ∫ f(s − εt)ϕ(t)dt. (9)

Note that f_{ε} is a C^{∞} function for each ε > 0, and lim_{ε→0} f_{ε}(x) = f(x) if f is continuous.
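The regularization (9) is straightforward to approximate numerically. The sketch below uses the standard bump ϕ(t) ∝ exp(−1/(1 − t^{2})) on (−1, 1), which is a common choice satisfying the listed properties rather than one fixed by the paper, together with a simple midpoint quadrature:

```python
import numpy as np

def mollify(f, eps, s, n=4000):
    """f_eps(s) = int f(s - eps*t) phi(t) dt from (9), by midpoint quadrature.

    phi(t) is proportional to exp(-1/(1 - t^2)) on (-1, 1): nonnegative, even,
    C^infinity, supported on [-1, 1], and normalized below so its integral is 1.
    """
    dt = 2.0 / n
    t = -1.0 + dt * (np.arange(n) + 0.5)      # midpoints, strictly inside (-1, 1)
    bump = np.exp(-1.0 / (1.0 - t ** 2))
    bump /= bump.sum() * dt                   # normalize: int phi = 1
    return float((f(s - eps * t) * bump).sum() * dt)

# f(t) = |t| is not differentiable at 0, but each f_eps is smooth and f_eps -> f
assert abs(mollify(np.abs, 1e-3, 2.0) - 2.0) < 1e-6   # away from the kink, f_eps ~ f
assert mollify(np.abs, 0.5, 0.0) > 0.0                # smoothing lifts the value at the kink
```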

### 3 Characterizations of SOC-monotone functions

In this section we present some characterizations for SOC-monotone functions, by which the set of continuous SOC-monotone functions is shown to coincide with that of continuous matrix monotone functions of order 2. To this end, we need the following technical lemma.

Lemma 3.1 For any given f : J → R with J open, let f^{soc} : S → H be defined by (3).

(a) f^{soc} is continuous on S if and only if f is continuous on J.

(b) f^{soc} is (continuously) differentiable on S if and only if f is (continuously) differentiable on J. Moreover, when f is differentiable on J, for any x = x_{e} + x_{0}e ∈ S and v = v_{e} + v_{0}e ∈ H,

(f^{soc})′(x)v = f′(x_{0})v if x_{e} = 0;
(f^{soc})′(x)v = (b_{1}(x) − a_{0}(x))⟨x̄_{e}, v_{e}⟩x̄_{e} + c_{1}(x)v_{0}x̄_{e} + a_{0}(x)v_{e} + b_{1}(x)v_{0}e + c_{1}(x)⟨x̄_{e}, v_{e}⟩e if x_{e} ≠ 0, (10)

where a_{0}(x) = (f(λ_{2}(x)) − f(λ_{1}(x)))/(λ_{2}(x) − λ_{1}(x)), b_{1}(x) = (f′(λ_{2}(x)) + f′(λ_{1}(x)))/2, and c_{1}(x) = (f′(λ_{2}(x)) − f′(λ_{1}(x)))/2.

(c) If f is differentiable on J, then for any given x ∈ S and all v ∈ H,

(f^{soc})′(x)e = (f′)^{soc}(x) and ⟨e, (f^{soc})′(x)v⟩ = ⟨v, (f′)^{soc}(x)⟩.

(d) If f′ is nonnegative (respectively, positive) on J, then for each x ∈ S,

(f^{soc})′(x) ≥ 0 (respectively, (f^{soc})′(x) > 0).

Proof. (a) Suppose that f^{soc} is continuous. Let Ω be the set composed of those x = te with t ∈ J. Clearly, Ω ⊆ S, and f^{soc} is continuous on Ω. Noting that f^{soc}(x) = f(t)e for any x ∈ Ω, it follows that f is continuous on J. Conversely, if f is continuous on J, then f^{soc} is continuous at any x = x_{e} + x_{0}e ∈ S with x_{e} ≠ 0, since λ_{i}(x) and u_{i}(x) for i = 1, 2 are continuous at such points. Next, let x = x_{e} + x_{0}e be an arbitrary element of S with x_{e} = 0; we prove that f^{soc} is continuous at x. Indeed, for any z = z_{e} + z_{0}e ∈ S sufficiently close to x, it is not hard to verify that

‖f^{soc}(z) − f^{soc}(x)‖ ≤ |f(λ_{2}(z)) − f(x_{0})|/2 + |f(λ_{1}(z)) − f(x_{0})|/2 + |f(λ_{2}(z)) − f(λ_{1}(z))|/2.

Since f is continuous on J, and λ_{1}(z), λ_{2}(z) → x_{0} as z → x, it follows that

f(λ_{1}(z)) → f(x_{0}) and f(λ_{2}(z)) → f(x_{0}) as z → x.

The last two limits imply that f^{soc} is continuous at x.

(b) When f^{soc} is (continuously) differentiable, arguments similar to those in part (a) show that f is (continuously) differentiable. Next assume that f is differentiable. Fix any x = x_{e} + x_{0}e ∈ S. We first consider the case where x_{e} ≠ 0. Since λ_{i}(x) for i = 1, 2 and x_{e}/‖x_{e}‖ are continuously differentiable at such x, it follows that f(λ_{i}(x)) and u_{i}(x) are differentiable and continuously differentiable, respectively, at x. Then f^{soc} is differentiable at such x by the definition of f^{soc}. Also, an elementary computation shows that

[λ_{i}(x)]′v = ⟨v, e⟩ + (−1)^{i}⟨x_{e}, v − ⟨v, e⟩e⟩/‖x_{e}‖ = v_{0} + (−1)^{i}⟨x̄_{e}, v_{e}⟩, (11)

[x_{e}/‖x_{e}‖]′v = (v − ⟨v, e⟩e)/‖x_{e}‖ − ⟨x_{e}, v − ⟨v, e⟩e⟩x_{e}/‖x_{e}‖^{3} = v_{e}/‖x_{e}‖ − ⟨x_{e}, v_{e}⟩x_{e}/‖x_{e}‖^{3} (12)

for any v = v_{e} + v_{0}e ∈ H, and consequently,

[f(λ_{i}(x))]′v = f′(λ_{i}(x))(v_{0} + (−1)^{i}⟨x̄_{e}, v_{e}⟩),
[u_{i}(x)]′v = (1/2)(−1)^{i}(v_{e}/‖x_{e}‖ − ⟨x_{e}, v_{e}⟩x_{e}/‖x_{e}‖^{3}).

Together with the definition of f^{soc}, we calculate that (f^{soc})′(x)v equals

(f′(λ_{1}(x))/2)(v_{0} − ⟨x̄_{e}, v_{e}⟩)(e − x̄_{e}) − (f(λ_{1}(x))/2)(v_{e}/‖x_{e}‖ − ⟨x_{e}, v_{e}⟩x_{e}/‖x_{e}‖^{3})
+ (f′(λ_{2}(x))/2)(v_{0} + ⟨x̄_{e}, v_{e}⟩)(e + x̄_{e}) + (f(λ_{2}(x))/2)(v_{e}/‖x_{e}‖ − ⟨x_{e}, v_{e}⟩x_{e}/‖x_{e}‖^{3})

= b_{1}(x)v_{0}e + c_{1}(x)⟨x̄_{e}, v_{e}⟩e + c_{1}(x)v_{0}x̄_{e} + b_{1}(x)⟨x̄_{e}, v_{e}⟩x̄_{e} + a_{0}(x)v_{e} − a_{0}(x)⟨x̄_{e}, v_{e}⟩x̄_{e},

where λ_{2}(x) − λ_{1}(x) = 2‖x_{e}‖ is used for the last equality. Thus, we get (10) for x_{e} ≠ 0.

We next consider the case where x_{e} = 0. In this case, for any v = v_{e} + v_{0}e ∈ H,

f^{soc}(x + v) − f^{soc}(x)
= (f(x_{0} + v_{0} − ‖v_{e}‖)/2)(e − v̄_{e}) + (f(x_{0} + v_{0} + ‖v_{e}‖)/2)(e + v̄_{e}) − f(x_{0})e
= (f′(x_{0})(v_{0} − ‖v_{e}‖)/2)e + (f′(x_{0})(v_{0} + ‖v_{e}‖)/2)e
+ (f′(x_{0})(v_{0} + ‖v_{e}‖)/2)v̄_{e} − (f′(x_{0})(v_{0} − ‖v_{e}‖)/2)v̄_{e} + o(‖v‖)
= f′(x_{0})(v_{0}e + ‖v_{e}‖v̄_{e}) + o(‖v‖),

where v̄_{e} = v_{e}/‖v_{e}‖ if v_{e} ≠ 0, and otherwise v̄_{e} is an arbitrary unit vector from ⟨e⟩^{⊥}. Hence,

‖f^{soc}(x + v) − f^{soc}(x) − f′(x_{0})v‖ = o(‖v‖).

This shows that f^{soc} is differentiable at such x with (f^{soc})′(x)v = f′(x_{0})v.

Assume that f is continuously differentiable. From (10), it is easy to see that (f^{soc})′(x) is continuous at every x with x_{e} ≠ 0. We next argue that (f^{soc})′(x) is continuous at every x with x_{e} = 0. Fix any x = x_{0}e with x_{0} ∈ J. For any z = z_{e} + z_{0}e with z_{e} ≠ 0, we have

‖(f^{soc})′(z)v − (f^{soc})′(x)v‖ ≤ |b_{1}(z) − a_{0}(z)|‖v_{e}‖ + |b_{1}(z) − f′(x_{0})||v_{0}|
+ |a_{0}(z) − f′(x_{0})|‖v_{e}‖ + |c_{1}(z)|(|v_{0}| + ‖v_{e}‖). (13)

Since f is continuously differentiable on J and λ_{2}(z) → x_{0}, λ_{1}(z) → x_{0} as z → x, we have

a_{0}(z) → f′(x_{0}), b_{1}(z) → f′(x_{0}) and c_{1}(z) → 0.

Together with (13), we obtain that (f^{soc})′(z) → (f^{soc})′(x) as z → x.

(c) The result follows directly from the definition of (f′)^{soc} and a simple computation using (10).

(d) Suppose that f′(t) ≥ 0 for all t ∈ J. Fix any x = x_{e} + x_{0}e ∈ S. If x_{e} = 0, the result is direct. It remains to consider the case x_{e} ≠ 0. Since f′(t) ≥ 0 for all t ∈ J, we have b_{1}(x) ≥ 0, b_{1}(x) − c_{1}(x) = f′(λ_{1}(x)) ≥ 0, b_{1}(x) + c_{1}(x) = f′(λ_{2}(x)) ≥ 0 and a_{0}(x) ≥ 0. From part (b) and the definitions of b_{1}(x) and c_{1}(x), it follows that for any h = h_{e} + h_{0}e ∈ H,

⟨h, (f^{soc})′(x)h⟩ = (b_{1}(x) − a_{0}(x))⟨x̄_{e}, h_{e}⟩^{2} + 2c_{1}(x)h_{0}⟨x̄_{e}, h_{e}⟩ + b_{1}(x)h_{0}^{2} + a_{0}(x)‖h_{e}‖^{2}
= a_{0}(x)(‖h_{e}‖^{2} − ⟨x̄_{e}, h_{e}⟩^{2}) + (1/2)(b_{1}(x) − c_{1}(x))[h_{0} − ⟨x̄_{e}, h_{e}⟩]^{2}
+ (1/2)(b_{1}(x) + c_{1}(x))[h_{0} + ⟨x̄_{e}, h_{e}⟩]^{2} ≥ 0.

This implies that the operator (f^{soc})′(x) is positive semidefinite. In particular, if f′(t) > 0 for all t ∈ J, then ⟨h, (f^{soc})′(x)h⟩ > 0 for all h ≠ 0. The proof is complete. 2
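Formula (10) can be sanity-checked against a finite-difference approximation in the model H = R^{n} with e = (1, 0, ..., 0); this is an assumption of the sketch below, and f = exp is just an arbitrary smooth test function:

```python
import numpy as np

def spectral(x):
    """Spectral values and the unit vector x_e/||x_e|| (case x_e != 0)."""
    x0, xe = x[0], x[1:]
    nrm = np.linalg.norm(xe)
    return np.array([x0 - nrm, x0 + nrm]), xe / nrm

def f_soc(f, x):
    lam, xbar = spectral(x)
    return np.concatenate(([0.5 * (f(lam[0]) + f(lam[1]))],
                           0.5 * (f(lam[1]) - f(lam[0])) * xbar))

def df_soc(f, fp, x, v):
    """(f^soc)'(x)v via formula (10), with fp = f' (case x_e != 0)."""
    lam, xbar = spectral(x)
    a0 = (f(lam[1]) - f(lam[0])) / (lam[1] - lam[0])
    b1 = 0.5 * (fp(lam[1]) + fp(lam[0]))
    c1 = 0.5 * (fp(lam[1]) - fp(lam[0]))
    v0, ve = v[0], v[1:]
    s = xbar @ ve
    return np.concatenate(([b1 * v0 + c1 * s],
                           (b1 - a0) * s * xbar + c1 * v0 * xbar + a0 * ve))

f, fp = np.exp, np.exp
x = np.array([0.5, 0.3, -0.4])
v = np.array([0.2, -0.1, 0.7])
h = 1e-6
fd = (f_soc(f, x + h * v) - f_soc(f, x - h * v)) / (2 * h)   # central difference
assert np.allclose(df_soc(f, fp, x, v), fd, atol=1e-6)
```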

Lemma 3.1(d) shows that the differential operator (f^{soc})′(x) corresponding to a differentiable nondecreasing f is positive semidefinite. So, the differential operator (f^{soc})′(x) associated with a differentiable SOC-monotone function is also positive semidefinite.

Proposition 3.1 Assume that f ∈ C^{1}(J) with J open. Then f is SOC-monotone if and only if (f^{soc})′(x)h ∈ K for any x ∈ S and h ∈ K.

Proof. If f is SOC-monotone, then for any x ∈ S, h ∈ K and t > 0, we have

f^{soc}(x + th) − f^{soc}(x) ⪰_{K} 0,

which, by the continuous differentiability of f^{soc} and the closedness of K, implies that (f^{soc})′(x)h ⪰_{K} 0.

Conversely, for any x, y ∈ S with x ⪰_{K} y, the given assumption yields

f^{soc}(x) − f^{soc}(y) = ∫_{0}^{1} (f^{soc})′(y + t(x − y))(x − y)dt ∈ K.

This shows that f^{soc}(x) ⪰_{K} f^{soc}(y), i.e., f is SOC-monotone. The proof is complete. 2
Proposition 3.1 shows that the differential operator (f^{soc})′(x) associated with a differentiable SOC-monotone function f leaves K invariant. If, in addition, the linear operator (f^{soc})′(x) is bijective, then (f^{soc})′(x) belongs to the automorphism group of K. Such linear operators are important for studying the structure of the cone K (see [9]).

Corollary 3.1 Assume that f ∈ C^{1}(J) with J open. If f is SOC-monotone, then

(a) (f′)^{soc}(x) ∈ K for any x ∈ S;

(b) f^{soc} is a monotone function, i.e., ⟨f^{soc}(x) − f^{soc}(y), x − y⟩ ≥ 0 for any x, y ∈ S.

Proof. Part (a) follows directly from Proposition 3.1 with h = e and Lemma 3.1(c). By part (a), f′(τ) ≥ 0 for all τ ∈ J. Together with Lemma 3.1(d), (f^{soc})′(x) ≥ 0 for any x ∈ S. Applying the integral mean-value theorem, it then follows that

⟨f^{soc}(x) − f^{soc}(y), x − y⟩ = ∫_{0}^{1} ⟨x − y, (f^{soc})′(y + t(x − y))(x − y)⟩dt ≥ 0.

This proves the desired result of part (b). The proof is complete. 2

Note that the converse of Corollary 3.1(a) does not hold. For example, for the function f(t) = −t^{−2} (t > 0), it is clear that (f′)^{soc}(x) ∈ K for any x ∈ int K, but f is not SOC-monotone by Example 5.1(ii). The following proposition provides another necessary and sufficient characterization of differentiable SOC-monotone functions.

Proposition 3.2 Let f ∈ C^{1}(J) with J open. Then f is SOC-monotone if and only if the 2 × 2 matrix

[ f^{[1]}(τ_{1}, τ_{1})  f^{[1]}(τ_{1}, τ_{2}) ]     [ f′(τ_{1})                                (f(τ_{2}) − f(τ_{1}))/(τ_{2} − τ_{1}) ]
[ f^{[1]}(τ_{2}, τ_{1})  f^{[1]}(τ_{2}, τ_{2}) ]  =  [ (f(τ_{1}) − f(τ_{2}))/(τ_{1} − τ_{2})   f′(τ_{2}) ]  ≥ 0, ∀τ_{1}, τ_{2} ∈ J. (14)

Proof. The equality in (14) is direct by the definition of f^{[1]}. It suffices to prove that f is SOC-monotone if and only if the matrix inequality in (14) holds for any τ_{1}, τ_{2} ∈ J. Assume that f is SOC-monotone. By Proposition 3.1, (f^{soc})′(x)h ∈ K for any x ∈ S and h ∈ K. Fix any x = x_{e} + x_{0}e ∈ S. It suffices to consider the case where x_{e} ≠ 0. Since (f^{soc})′(x)h ∈ K for any h ∈ K, we particularly have (f^{soc})′(x)(z + e) ∈ K for any z ∈ B, where B is the set defined in Lemma 2.1. From Lemma 3.1(b), it follows that

(f^{soc})′(x)(z + e) = [(b_{1}(x) − a_{0}(x))⟨x̄_{e}, z⟩ + c_{1}(x)]x̄_{e} + a_{0}(x)z + [b_{1}(x) + c_{1}(x)⟨x̄_{e}, z⟩]e.

This means that (f^{soc})′(x)(z + e) ∈ K for any z ∈ B if and only if

b_{1}(x) + c_{1}(x)⟨x̄_{e}, z⟩ ≥ 0, (15)

[b_{1}(x) + c_{1}(x)⟨x̄_{e}, z⟩]^{2} ≥ ‖[(b_{1}(x) − a_{0}(x))⟨x̄_{e}, z⟩ + c_{1}(x)]x̄_{e} + a_{0}(x)z‖^{2}. (16)

By Lemma 2.1(a), inequality (15) holds for any z ∈ B if and only if b_{1}(x) ≥ |c_{1}(x)|. Since a simple computation shows that the inequality in (16) can be rewritten as

b_{1}(x)^{2} − c_{1}(x)^{2} − a_{0}(x)^{2}‖z‖^{2} ≥ (b_{1}(x)^{2} − c_{1}(x)^{2} − a_{0}(x)^{2})⟨z, x̄_{e}⟩^{2},

applying Lemma 2.1(b) yields that (16) holds for any z ∈ B if and only if

b_{1}(x)^{2} − c_{1}(x)^{2} − a_{0}(x)^{2} ≥ 0.

This shows that (f^{soc})′(x)(z + e) ∈ K for any z ∈ B if and only if

b_{1}(x) ≥ |c_{1}(x)| and b_{1}(x)^{2} − c_{1}(x)^{2} − a_{0}(x)^{2} ≥ 0. (17)

The first condition in (17) is equivalent to b_{1}(x) ≥ 0, b_{1}(x) − c_{1}(x) ≥ 0 and b_{1}(x) + c_{1}(x) ≥ 0, which, by the expressions of b_{1}(x) and c_{1}(x) and the arbitrariness of x, is equivalent to f′(τ) ≥ 0 for all τ ∈ J; whereas the second condition in (17) is equivalent to

f′(τ_{1})f′(τ_{2}) − ((f(τ_{2}) − f(τ_{1}))/(τ_{2} − τ_{1}))^{2} ≥ 0, ∀τ_{1}, τ_{2} ∈ J.

Together, these show that the inequality in (14) holds for all τ_{1}, τ_{2} ∈ J.

Conversely, if the inequality in (14) holds for all τ_{1}, τ_{2} ∈ J, then the arguments above show that (f^{soc})′(x)(z + e) ∈ K for any x = x_{e} + x_{0}e ∈ S and z ∈ B. This implies that (f^{soc})′(x)h ∈ K for any x ∈ S and h ∈ K. By Proposition 3.1, f is SOC-monotone. 2
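Condition (14) is easy to test numerically on a grid: it asks the 2 × 2 divided-difference (Löwner) matrix to be positive semidefinite. The sketch below is an illustration only; it confirms (14) for f(t) = −1/t on (0, ∞), a classical monotone example, and rejects f(t) = t^{2}, which fails (14) because (τ_{1} + τ_{2})^{2} ≥ 4τ_{1}τ_{2}:

```python
import numpy as np

def loewner_matrix(f, fp, t1, t2):
    """The 2x2 matrix of first divided differences appearing in (14)."""
    off = fp(t1) if t1 == t2 else (f(t2) - f(t1)) / (t2 - t1)
    return np.array([[fp(t1), off], [off, fp(t2)]])

def seems_soc_monotone(f, fp, grid):
    """Sample check of (14): positive semidefiniteness on all grid pairs."""
    return all(np.linalg.eigvalsh(loewner_matrix(f, fp, a, b)).min() >= -1e-12
               for a in grid for b in grid)

grid = np.linspace(0.1, 10.0, 25)
# -1/t passes (14) on (0, oo): the divided-difference matrix is rank one, PSD
assert seems_soc_monotone(lambda t: -1.0 / t, lambda t: 1.0 / t**2, grid)
# t^2 fails (14) on (0, oo): f'(t1) f'(t2) = 4 t1 t2 < (t1 + t2)^2 for t1 != t2
assert not seems_soc_monotone(lambda t: t**2, lambda t: 2.0 * t, grid)
```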

Propositions 3.1 and 3.2 provide the characterizations for continuously differentiable
SOC-monotone functions. When f does not belong to C^{1}(J ), one may check the SOC-
monotonicity of f by combining the following proposition with Propositions 3.1 and 3.2.

Proposition 3.3 Let f : J → R be a continuous function on the open interval J = (a, b), and let f_{ε} be its regularization defined by (9). Then f is SOC-monotone if and only if f_{ε} is SOC-monotone on J_{ε} := (a + ε, b − ε) for every sufficiently small ε > 0.

Proof. Throughout the proof, for every sufficiently small ε > 0, we let S_{ε} be the set of all x ∈ H whose spectral values λ_{1}(x), λ_{2}(x) belong to J_{ε}. Assume that f_{ε} is SOC-monotone on J_{ε} for every sufficiently small ε > 0. Let x, y be arbitrary vectors from S with x ⪰_{K} y. Then, for any sufficiently small ε > 0, we have x + εe, y + εe ∈ S_{ε} and x + εe ⪰_{K} y + εe. Using the SOC-monotonicity of f_{ε} on J_{ε} yields f_{ε}^{soc}(x + εe) ⪰_{K} f_{ε}^{soc}(y + εe). Taking the limit ε → 0 and using the convergence f_{ε}^{soc}(x) → f^{soc}(x) and the continuity of f^{soc} on S implied by Lemma 3.1(a), we readily obtain f^{soc}(x) ⪰_{K} f^{soc}(y). This shows that f is SOC-monotone.

Now assume that f is SOC-monotone. Let ε > 0 be an arbitrary sufficiently small real number. Fix any x, y ∈ S_{ε} with x ⪰_{K} y, and write λ_{i} := λ_{i}(x), µ_{i} := λ_{i}(y) for i = 1, 2. Then, for all t ∈ [−1, 1], we have x − tεe, y − tεe ∈ S and x − tεe ⪰_{K} y − tεe. Therefore, f^{soc}(x − tεe) ⪰_{K} f^{soc}(y − tεe), which is equivalent to

(f(λ_{1} − tε) + f(λ_{2} − tε))/2 − (f(µ_{1} − tε) + f(µ_{2} − tε))/2
≥ ‖((f(λ_{1} − tε) − f(λ_{2} − tε))/2)x̄_{e} − ((f(µ_{1} − tε) − f(µ_{2} − tε))/2)ȳ_{e}‖.

Together with the definition of f_{ε}, it then follows that

(f_{ε}(λ_{1}) + f_{ε}(λ_{2}))/2 − (f_{ε}(µ_{1}) + f_{ε}(µ_{2}))/2
= ∫ [ (f(λ_{1} − tε) + f(λ_{2} − tε))/2 − (f(µ_{1} − tε) + f(µ_{2} − tε))/2 ] ϕ(t)dt
≥ ∫ ‖((f(λ_{1} − tε) − f(λ_{2} − tε))/2)x̄_{e} − ((f(µ_{1} − tε) − f(µ_{2} − tε))/2)ȳ_{e}‖ ϕ(t)dt
≥ ‖ ∫ [ ((f(λ_{1} − tε) − f(λ_{2} − tε))/2)x̄_{e} − ((f(µ_{1} − tε) − f(µ_{2} − tε))/2)ȳ_{e} ] ϕ(t)dt ‖
= ‖((f_{ε}(λ_{1}) − f_{ε}(λ_{2}))/2)x̄_{e} − ((f_{ε}(µ_{1}) − f_{ε}(µ_{2}))/2)ȳ_{e}‖.

By the definition of f_{ε}^{soc}, this shows that f_{ε}^{soc}(x) ⪰_{K} f_{ε}^{soc}(y), i.e., f_{ε} is SOC-monotone. 2
From Proposition 3.2 and [2, Theorem V.3.4], f ∈ C^{1}(J) is SOC-monotone if and only if it is matrix monotone of order 2. When the continuous f is not in the class C^{1}(J), the result still holds, by Proposition 3.3 and the fact that f is matrix monotone of order n if and only if f_{ε} is matrix monotone of order n. Thus, we have the following main result.

Theorem 3.1 The set of continuous SOC-monotone functions on the open interval J co- incides with that of continuous matrix monotone functions of order 2 on J .

Remark 3.1 Combining Theorem 3.1 with Löwner’s theorem [17] shows that if f : J → R is a continuous SOC-monotone function on the open interval J, then f ∈ C^{1}(J).

### 4 Characterizations of SOC-convex functions

This section is devoted to the characterizations of SOC-convex functions, and shows that a continuous f is SOC-convex if and only if it is matrix convex of order 2. First, for first-order differentiable SOC-convex functions, we have the following characterizations.

Proposition 4.1 Assume that f ∈ C^{1}(J) with J open. Then, the following results hold.

(a) f is SOC-convex if and only if for any x, y ∈ S,

f^{soc}(y) − f^{soc}(x) − (f^{soc})′(x)(y − x) ⪰_{K} 0. (18)

(b) If f is SOC-convex, then (f′)^{soc} is a monotone function on S.

Proof. By following the arguments in [1, Proposition B.3(a)], the proof of part (a) can be done easily, and we omit the details. From part (a), it follows that for any x, y ∈ S,

f^{soc}(x) − f^{soc}(y) − (f^{soc})′(y)(x − y) ⪰_{K} 0,
f^{soc}(y) − f^{soc}(x) − (f^{soc})′(x)(y − x) ⪰_{K} 0.

Adding the last two inequalities, we immediately obtain

[(f^{soc})′(y) − (f^{soc})′(x)](y − x) ⪰_{K} 0.

Using the self-duality of K and Lemma 3.1(c) then yields

0 ≤ ⟨e, [(f^{soc})′(y) − (f^{soc})′(x)](y − x)⟩ = ⟨y − x, (f′)^{soc}(y) − (f′)^{soc}(x)⟩.

This shows that (f′)^{soc} is monotone. The proof is complete. 2

To provide necessary and sufficient characterizations for twice differentiable SOC-convex functions, we need the following lemma, which gives the second-order differential of f^{soc}.

Lemma 4.1 For any given f : J → R with J open, let f^{soc} : S → H be defined by (3).

(a) f^{soc} is twice (continuously) differentiable on S if and only if f is twice (continuously) differentiable on J. Furthermore, when f is twice differentiable on J, for any given x = x_{e} + x_{0}e ∈ S and u = u_{e} + u_{0}e, v = v_{e} + v_{0}e ∈ H, we have

(f^{soc})″(x)(u, v) = f″(x_{0})u_{0}v_{0}e + f″(x_{0})(u_{0}v_{e} + v_{0}u_{e}) + f″(x_{0})⟨u_{e}, v_{e}⟩e

if x_{e} = 0; and otherwise

(f^{soc})″(x)(u, v) = (b_{2}(x) − a_{1}(x))u_{0}⟨x̄_{e}, v_{e}⟩x̄_{e} + (c_{2}(x) − 3d(x))⟨x̄_{e}, u_{e}⟩⟨x̄_{e}, v_{e}⟩x̄_{e}
+ d(x)[⟨u_{e}, v_{e}⟩x̄_{e} + ⟨x̄_{e}, v_{e}⟩u_{e} + ⟨x̄_{e}, u_{e}⟩v_{e}] + c_{2}(x)u_{0}v_{0}x̄_{e}
+ (b_{2}(x) − a_{1}(x))⟨x̄_{e}, u_{e}⟩v_{0}x̄_{e} + a_{1}(x)(v_{0}u_{e} + u_{0}v_{e})
+ b_{2}(x)u_{0}v_{0}e + c_{2}(x)[v_{0}⟨x̄_{e}, u_{e}⟩ + u_{0}⟨x̄_{e}, v_{e}⟩]e
+ a_{1}(x)⟨u_{e}, v_{e}⟩e + (b_{2}(x) − a_{1}(x))⟨x̄_{e}, u_{e}⟩⟨x̄_{e}, v_{e}⟩e, (19)

where

c_{2}(x) = (f″(λ_{2}(x)) − f″(λ_{1}(x)))/2, b_{2}(x) = (f″(λ_{2}(x)) + f″(λ_{1}(x)))/2,
a_{1}(x) = (f′(λ_{2}(x)) − f′(λ_{1}(x)))/(λ_{2}(x) − λ_{1}(x)), d(x) = (b_{1}(x) − a_{0}(x))/‖x_{e}‖.

(b) If f is twice differentiable on J, then for any given x ∈ S and u, v ∈ H,

(f^{soc})″(x)(u, v) = (f^{soc})″(x)(v, u),
⟨u, (f^{soc})″(x)(u, v)⟩ = ⟨v, (f^{soc})″(x)(u, u)⟩.

Proof. (a) The first part follows from the given conditions and Lemma 3.1(b), so we only need to derive the differential formula. Fix any u = u_{e} + u_{0}e, v = v_{e} + v_{0}e ∈ H. We first consider the case where x_{e} = 0. Without loss of generality, assume that u_{e} ≠ 0. For any sufficiently small t > 0, using Lemma 3.1(b) and x + tu = (x_{0} + tu_{0})e + tu_{e}, we have

(f^{soc})′(x + tu)v = [b_{1}(x + tu) − a_{0}(x + tu)]⟨ū_{e}, v_{e}⟩ū_{e} + c_{1}(x + tu)v_{0}ū_{e}
+ a_{0}(x + tu)v_{e} + b_{1}(x + tu)v_{0}e + c_{1}(x + tu)⟨ū_{e}, v_{e}⟩e.

In addition, from Lemma 3.1(b), we also have (f^{soc})′(x)v = f′(x_{0})v_{0}e + f′(x_{0})v_{e}. Using the definitions of b_{1}(x) and a_{0}(x), and the differentiability of f′ on J, it follows that

lim_{t→0} (b_{1}(x + tu)v_{0}e − f′(x_{0})v_{0}e)/t = f″(x_{0})u_{0}v_{0}e,
lim_{t→0} (a_{0}(x + tu)v_{e} − f′(x_{0})v_{e})/t = f″(x_{0})u_{0}v_{e},
lim_{t→0} (b_{1}(x + tu) − a_{0}(x + tu))/t = 0,
lim_{t→0} c_{1}(x + tu)/t = f″(x_{0})‖u_{e}‖.

Using the above four limits, it is not hard to obtain that

(f^{soc})″(x)(u, v) = lim_{t→0} ((f^{soc})′(x + tu)v − (f^{soc})′(x)v)/t
= f″(x_{0})u_{0}v_{0}e + f″(x_{0})(u_{0}v_{e} + v_{0}u_{e}) + f″(x_{0})⟨u_{e}, v_{e}⟩e.

We next consider the case where x_{e} ≠ 0. From Lemma 3.1(b), it follows that

(f^{soc})′(x)v = (b_{1}(x) − a_{0}(x))⟨x̄_{e}, v_{e}⟩x̄_{e} + c_{1}(x)v_{0}x̄_{e} + a_{0}(x)v_{e} + b_{1}(x)v_{0}e + c_{1}(x)⟨x̄_{e}, v_{e}⟩e,

which in turn implies that

(f^{soc})″(x)(u, v) = [(b_{1}(x) − a_{0}(x))⟨x̄_{e}, v_{e}⟩x̄_{e}]′u + [c_{1}(x)v_{0}x̄_{e}]′u
+ [a_{0}(x)v_{e} + b_{1}(x)v_{0}e]′u + [c_{1}(x)⟨x̄_{e}, v_{e}⟩e]′u. (20)

By the expressions of a_{0}(x), b_{1}(x) and c_{1}(x) and equations (11)-(12), we calculate that

(b_{1}(x))′u = f″(λ_{2}(x))[u_{0} + ⟨x̄_{e}, u_{e}⟩]/2 + f″(λ_{1}(x))[u_{0} − ⟨x̄_{e}, u_{e}⟩]/2 = b_{2}(x)u_{0} + c_{2}(x)⟨x̄_{e}, u_{e}⟩,
(c_{1}(x))′u = c_{2}(x)u_{0} + b_{2}(x)⟨x̄_{e}, u_{e}⟩,
(a_{0}(x))′u = ((f′(λ_{2}(x)) − f′(λ_{1}(x)))/(λ_{2}(x) − λ_{1}(x)))u_{0} + ((b_{1}(x) − a_{0}(x))/‖x_{e}‖)⟨x̄_{e}, u_{e}⟩ = a_{1}(x)u_{0} + d(x)⟨x̄_{e}, u_{e}⟩,
(⟨x̄_{e}, v_{e}⟩)′u = ⟨(1/‖x_{e}‖)u_{e} − (⟨x̄_{e}, u_{e}⟩/‖x_{e}‖)x̄_{e}, v_{e}⟩.

Using these equalities and noting that a_{1}(x) = c_{1}(x)/‖x_{e}‖, we obtain that

[(b_{1}(x) − a_{0}(x))⟨x̄_{e}, v_{e}⟩x̄_{e}]′u
= [(b_{2}(x) − a_{1}(x))u_{0} + (c_{2}(x) − d(x))⟨x̄_{e}, u_{e}⟩]⟨x̄_{e}, v_{e}⟩x̄_{e}
+ (b_{1}(x) − a_{0}(x))⟨(1/‖x_{e}‖)u_{e} − (⟨x̄_{e}, u_{e}⟩/‖x_{e}‖)x̄_{e}, v_{e}⟩x̄_{e}
+ (b_{1}(x) − a_{0}(x))⟨x̄_{e}, v_{e}⟩((1/‖x_{e}‖)u_{e} − (⟨x̄_{e}, u_{e}⟩/‖x_{e}‖)x̄_{e})
= [(b_{2}(x) − a_{1}(x))u_{0} + (c_{2}(x) − d(x))⟨x̄_{e}, u_{e}⟩]⟨x̄_{e}, v_{e}⟩x̄_{e}
+ d(x)⟨u_{e}, v_{e}⟩x̄_{e} − 2d(x)⟨x̄_{e}, v_{e}⟩⟨x̄_{e}, u_{e}⟩x̄_{e} + d(x)⟨x̄_{e}, v_{e}⟩u_{e};

[a_{0}(x)v_{e} + b_{1}(x)v_{0}e]′u = [a_{1}(x)u_{0} + d(x)⟨x̄_{e}, u_{e}⟩]v_{e} + [b_{2}(x)u_{0} + c_{2}(x)⟨x̄_{e}, u_{e}⟩]v_{0}e;

[c_{1}(x)v_{0}x̄_{e}]′u = [c_{2}(x)u_{0} + b_{2}(x)⟨x̄_{e}, u_{e}⟩]v_{0}x̄_{e} + c_{1}(x)v_{0}(u_{e} − ⟨x̄_{e}, u_{e}⟩x̄_{e})/‖x_{e}‖
= [c_{2}(x)u_{0} + b_{2}(x)⟨x̄_{e}, u_{e}⟩]v_{0}x̄_{e} + a_{1}(x)v_{0}[u_{e} − ⟨x̄_{e}, u_{e}⟩x̄_{e}]; and

[c_{1}(x)⟨x̄_{e}, v_{e}⟩e]′u = [c_{2}(x)u_{0} + b_{2}(x)⟨x̄_{e}, u_{e}⟩]⟨x̄_{e}, v_{e}⟩e + c_{1}(x)⟨(u_{e} − ⟨x̄_{e}, u_{e}⟩x̄_{e})/‖x_{e}‖, v_{e}⟩e
= c_{2}(x)u_{0}⟨x̄_{e}, v_{e}⟩e + (b_{2}(x) − a_{1}(x))⟨x̄_{e}, u_{e}⟩⟨x̄_{e}, v_{e}⟩e + a_{1}(x)⟨u_{e}, v_{e}⟩e.

Adding the above equalities and using equation (20) yields the formula in (19).

(b) By the formula in part (a), a simple computation yields the desired result. 2
Proposition 4.2 Assume that f ∈ C^{2}(J) with J open. Then, the following results hold.

(a) f is SOC-convex if and only if (f^{soc})″(x)(h, h) ∈ K for any x ∈ S and h ∈ H.

(b) f is SOC-convex if and only if f is convex and for any τ_{1}, τ_{2} ∈ J,

(f″(τ_{2})/2) · (f(τ_{2}) − f(τ_{1}) − f′(τ_{1})(τ_{2} − τ_{1}))/(τ_{2} − τ_{1})^{2}
≥ ((f(τ_{1}) − f(τ_{2}) − f′(τ_{2})(τ_{1} − τ_{2}))/(τ_{2} − τ_{1})^{2})^{2}. (21)

(c) f is SOC-convex if and only if f is convex and for any τ_{1}, τ_{2} ∈ J,

(1/4)f″(τ_{1})f″(τ_{2}) ≥ ((f(τ_{2}) − f(τ_{1}) − f′(τ_{1})(τ_{2} − τ_{1}))/(τ_{2} − τ_{1})^{2}) · ((f(τ_{1}) − f(τ_{2}) − f′(τ_{2})(τ_{1} − τ_{2}))/(τ_{2} − τ_{1})^{2}). (22)

(d) f is SOC-convex if and only if for any τ_{1}, τ_{2} ∈ J and s = τ_{1}, τ_{2}, the 2 × 2 matrix

[ f^{[2]}(τ_{2}, s, τ_{2})  f^{[2]}(τ_{2}, s, τ_{1}) ]
[ f^{[2]}(τ_{1}, s, τ_{2})  f^{[2]}(τ_{1}, s, τ_{1}) ]  ≥ 0.

Proof. (a) Suppose that f is SOC-convex. Since f^{soc} is twice continuously differentiable by Lemma 4.1(a), for any given x ∈ S, h ∈ H and sufficiently small t > 0, we have

f^{soc}(x + th) = f^{soc}(x) + t(f^{soc})′(x)h + (t^{2}/2)(f^{soc})″(x)(h, h) + o(t^{2}).