
A Study on Sigmoid Kernels for SVM and the Training of non-PSD Kernels by SMO-type Methods

Hsuan-Tien Lin and Chih-Jen Lin
Department of Computer Science and Information Engineering
National Taiwan University
Taipei 106, Taiwan
cjlin@csie.ntu.edu.tw

Abstract

The sigmoid kernel was quite popular for support vector machines due to its origin from neural networks. Although it is known that the kernel matrix may not be positive semi-definite (PSD), other properties are not fully studied. In this paper, we discuss such non-PSD kernels from the viewpoint of separability. The results help to validate the possible use of non-PSD kernels. One example shows that the sigmoid kernel matrix is conditionally positive definite (CPD) for parameters in certain ranges and is thus a valid kernel there. However, we also explain that the sigmoid kernel is not better than the RBF kernel in general. Experiments are given to illustrate our analysis. Finally, we discuss how to solve the non-convex dual problems by SMO-type decomposition methods. Suitable modifications for any symmetric non-PSD kernel matrices are proposed with convergence proofs.

Keywords

Sigmoid Kernel, non-Positive Semi-Definite Kernel, Sequential Minimal Optimization, Support Vector Machine


1 Introduction

Given training vectors $x_i \in R^n$, $i = 1, \dots, l$, in two classes labeled by the vector $y \in \{+1, -1\}^l$, the support vector machine (SVM) (Boser, Guyon, and Vapnik 1992; Cortes and Vapnik 1995) separates the training vectors in a $\phi$-mapped (and possibly infinite dimensional) space, with an error cost $C > 0$:
\[
\begin{aligned}
\min_{w,b,\xi}\quad & \tfrac{1}{2}w^T w + C\sum_{i=1}^{l}\xi_i \\
\text{subject to}\quad & y_i(w^T\phi(x_i) + b) \ge 1 - \xi_i, \qquad (1)\\
& \xi_i \ge 0,\ i = 1, \dots, l.
\end{aligned}
\]

Due to the high dimensionality of the vector variable w, we usually solve (1) through its Lagrangian dual problem:

\[
\begin{aligned}
\min_{\alpha}\quad & F(\alpha) = \tfrac{1}{2}\alpha^T Q\alpha - e^T\alpha \\
\text{subject to}\quad & 0 \le \alpha_i \le C,\ i = 1, \dots, l, \qquad (2)\\
& y^T\alpha = 0,
\end{aligned}
\]

where Qij ≡ yiyjφ(xi)Tφ(xj) and e is the vector of all ones. Here,

\[
K(x_i, x_j) \equiv \phi(x_i)^T\phi(x_j) \qquad (3)
\]

is called the kernel function, where some popular ones are, for example, the polynomial kernel $K(x_i, x_j) = (a x_i^T x_j + r)^d$ and the RBF (Gaussian) kernel $K(x_i, x_j) = e^{-\gamma\|x_i - x_j\|^2}$. By the definition (3), the matrix Q is symmetric and positive semi-definite (PSD). After (2) is solved, $w = \sum_{i=1}^{l} y_i\alpha_i\phi(x_i)$, so the decision function for any test vector x is
\[
\operatorname{sgn}\Big(\sum_{i=1}^{l} \alpha_i y_i K(x_i, x) + b\Big), \qquad (4)
\]

where b is calculated through the primal-dual relationship.
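As a concrete illustration (not part of the original text), the kernel matrices above and the decision function (4) can be sketched in a few lines of NumPy; the toy data, the parameter values, and the function names below are arbitrary choices for demonstration only.

```python
import numpy as np

def sigmoid_kernel(X1, X2, a=1.0, r=-1.0):
    """Sigmoid kernel K(x_i, x_j) = tanh(a * x_i^T x_j + r); may be non-PSD."""
    return np.tanh(a * X1 @ X2.T + r)

def rbf_kernel(X1, X2, gamma=1.0):
    """RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2); always PSD."""
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-gamma * sq)

def decision_values(alpha, y, b, X_train, X_test, kernel):
    """Decision function (4): sign(sum_i alpha_i y_i K(x_i, x) + b)."""
    K = kernel(X_train, X_test)          # shape (l, m)
    return np.sign((alpha * y) @ K + b)

# Toy check: Q = diag(y) K diag(y) from a sigmoid kernel may have
# negative eigenvalues, i.e., the dual (2) may be non-convex.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = np.where(rng.standard_normal(20) > 0, 1.0, -1.0)
K = sigmoid_kernel(X, X, a=1.0, r=1.0)
Q = (y[:, None] * y[None, :]) * K
print("smallest eigenvalue of Q:", np.linalg.eigvalsh(Q).min())
```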

In practice, some non-PSD matrices are used in (2). An important one is the sigmoid kernel $K(x_i, x_j) = \tanh(a x_i^T x_j + r)$, which is related to neural networks. It was known that the kernel matrix of this function may not be PSD for certain values of the parameters a and r. More discussions are in, for instance, (Burges 1998; Schölkopf and Smola 2002). When K is not PSD, (3) cannot be satisfied and the primal-dual relationship between (1) and (2) does not exist. Thus, it is unclear what kind of classification problems we are solving. Surprisingly, the sigmoid kernel has been used in several practical cases. Some explanations are in (Schölkopf 1997).

Recently, quite a few kernels specific to different applications have been proposed. However, similar to the sigmoid kernel, some of them are not PSD either (e.g., kernel jittering in (DeCoste and Schölkopf 2002) and tangent distance kernels in (Haasdonk and Keysers 2002)). Thus, it is essential to analyze such non-PSD kernels. In Section 2, we discuss them by considering the separability of training data. Then in Section 3, we explain the practical viability of the sigmoid kernel by showing that for parameters in certain ranges, it is conditionally positive definite (CPD). We discuss in Section 4 the similarity between the sigmoid kernel and the RBF kernel, which shows that the sigmoid kernel is

less preferable. Section 5 presents experiments showing that the linear constraint yTα = 0

in the dual problem is essential for a CPD kernel matrix to work for SVM.

In addition to unknown behavior, non-PSD kernels also cause difficulties in solving (2). The original decomposition method for solving (2) was designed for the case when Q is PSD, and existing software may have difficulties such as endless loops when using non-PSD kernels. In Section 6, we propose simple modifications for SMO-type decomposition methods which guarantee the convergence to stationary points for non-PSD kernels. Section 7 then discusses some modifications to convex formulations. A comparison between SVM and kernel logistic regression (KLR) is performed. Finally, some discussions are in Section 8.

2 The Separability when Using non-PSD Kernel Matrices

When using non-PSD kernels such as the sigmoid, K(xi, xj) cannot be separated as the

inner product form in (3). Thus, (1) is not well-defined. After obtaining α from (2), it is not clear how the training data are classified. To analyze what we actually obtained


when using a non-PSD Q, we consider a new problem:
\[
\begin{aligned}
\min_{\alpha,b,\xi}\quad & \tfrac{1}{2}\alpha^T Q\alpha + C\sum_{i=1}^{l}\xi_i \\
\text{subject to}\quad & Q\alpha + by \ge e - \xi, \qquad (5)\\
& \xi_i \ge 0,\ i = 1, \dots, l.
\end{aligned}
\]
It is from substituting $w = \sum_{i=1}^{l} y_i\alpha_i\phi(x_i)$ into (1) so that $w^Tw = \alpha^TQ\alpha$ and $y_iw^T\phi(x_i) = (Q\alpha)_i$. Note that in (5), $\alpha_i$ may be negative. This problem was used in (Osuna and Girosi 1998) and some subsequent work. In (Lin and Lin 2003), it is shown that if Q is symmetric PSD, the optimal solution $\alpha$ of the dual problem (2) is also optimal for (5). However, the opposite may not be true unless Q is symmetric positive definite (PD).

From now on, we assume that Q (or K) is symmetric but may not be PSD. The next theorem is about the stationary points of (2), that is, the points that satisfy the Karush-Kuhn-Tucker (KKT) condition. By this condition, we can get a relation between (2) and (5).

Theorem 1 Any stationary point ˆα of (2) is a feasible point of (5).

Proof.

Assume that $\hat\alpha$ is a stationary point, so it satisfies the KKT condition. For a symmetric Q, the KKT condition of (2) is that there are a scalar p and non-negative vectors $\lambda$ and $\mu$ such that
\[
\begin{aligned}
& Q\hat\alpha - e - \mu + \lambda - py = 0, \\
& \mu_i \ge 0,\ \mu_i\hat\alpha_i = 0, \\
& \lambda_i \ge 0,\ \lambda_i(C - \hat\alpha_i) = 0,\ i = 1, \dots, l.
\end{aligned}
\]
If we consider $\alpha_i = \hat\alpha_i$, $b = -p$, and $\xi_i = \lambda_i$, then $\mu_i \ge 0$ implies that $(\hat\alpha, -p, \lambda)$ is feasible for (5). □

An immediate implication is that if ˆα, a stationary point of (2), does not have many

nonzero components, the training error would not be large. Thus, even if Q is not PSD, it is still possible that the training error is small. Next, we give a more formal analysis on the separability of training data:


Theorem 2 Consider the problem (2) without C:
\[
\begin{aligned}
\min_{\alpha}\quad & \tfrac{1}{2}\alpha^T Q\alpha - e^T\alpha \\
\text{subject to}\quad & 0 \le \alpha_i,\ i = 1, \dots, l, \qquad (6)\\
& y^T\alpha = 0.
\end{aligned}
\]
If there exists an attained stationary point $\hat\alpha$, then

1. (5) has a feasible solution with ξi = 0, for i = 1, . . . , l.

2. If C is large enough, then ˆα is also a stationary point of (2).

The proof follows directly from Theorem 1, which shows that $\hat\alpha$ is feasible for (5) with $\xi_i = 0$. The second property comes from the fact that when $C \ge \max_i \hat\alpha_i$, $\hat\alpha$ is also stationary for (2).

Thus, if (6) has at least one stationary point, the kernel matrix has the ability to fully separate the training data. This theorem gives an explanation why sometimes

non-PSD kernels work. Furthermore, if a global minimum ˆα of (6) is attained, it can be the

stationary point to have the separability. On the contrary, if ˆα is not attained and the

optimal objective value goes to −∞, for every C, the global minimum ˆα of (2) would

have at least one ˆαi = C. In this case, the separability of the kernel matrix is not clear.

Next we would like to see if any conditions on a kernel matrix imply an attained global minimum and hence an optimal objective value that is not −∞. Several earlier works have given useful results. In particular, it has been shown that a conditionally PSD (CPSD) kernel is good enough for SVM. A matrix K is CPSD (CPD) if $v^TKv \ge 0$ ($> 0$) for all $v \ne 0$ with $\sum_{i=1}^{l} v_i = 0$. Note that some earlier works use different names: conditionally PD (strictly PD) for the case of ≥ 0 (> 0). More properties can be seen in, for example, (Berg, Christensen, and Ressel 1984). Then, the use of a CPSD kernel is equivalent to the use of a PSD one, as $y^T\alpha = 0$ in (2) plays a similar role to $\sum_{i=1}^{l} v_i = 0$ in the definition of CPSD (Schölkopf 2000). For easier analysis here, we will work only on the kernel matrices but not the kernel functions. The following theorem gives properties which imply the existence of optimal solutions of (6).
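As an illustrative sketch (not from the paper), the CPD-ness of a given symmetric matrix K can be tested numerically by restricting the quadratic form to the subspace {v : e^T v = 0}: pick any basis B of that subspace and check whether B^T K B is positive definite. The helper names below are mine.

```python
import numpy as np

def _restricted_min_eig(K):
    """Smallest eigenvalue of K restricted to {v : sum(v) = 0}, using the
    basis of difference vectors e_j - e_{j+1} for that subspace."""
    l = K.shape[0]
    B = np.zeros((l, l - 1))
    idx = np.arange(l - 1)
    B[idx, idx] = 1.0
    B[idx + 1, idx] = -1.0
    M = B.T @ K @ B
    return np.linalg.eigvalsh((M + M.T) / 2).min()

def is_cpd(K, tol=1e-10):
    """K is CPD if v^T K v > 0 for all v != 0 with sum(v) = 0."""
    return bool(_restricted_min_eig(K) > tol)

def is_cpsd(K, tol=1e-10):
    """K is CPSD if v^T K v >= 0 for all v with sum(v) = 0."""
    return bool(_restricted_min_eig(K) >= -tol)
```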

Theorem 3

1. K is CPD if and only if there exists $\Delta$ such that $K + \Delta ee^T$ is PD.

2. If K is CPD, then the solution of (6) is attained and its optimal objective value is greater than −∞.

Proof.

The "if" part of the first result is very simple by definition. For any $v \ne 0$ with $e^Tv = 0$,
\[
v^TKv = v^T(K + \Delta ee^T)v > 0,
\]
so K is CPD.

On the other hand, if K is CPD but there is no $\Delta$ such that $K + \Delta ee^T$ is PD, there are infinitely many $\{v_i, \Delta_i\}$ with $\|v_i\| = 1$, $\forall i$, and $\Delta_i \to \infty$ as $i \to \infty$ such that
\[
v_i^T(K + \Delta_i ee^T)v_i \le 0, \quad \forall i. \qquad (7)
\]
As $\{v_i\}$ is in a compact region, there is a subsequence $\{v_i\}_{i \in \mathcal{K}}$ which converges to $v^*$. Since $v_i^TKv_i \to (v^*)^TKv^*$ and $e^Tv_i \to e^Tv^*$,
\[
\lim_{i\to\infty,\ i\in\mathcal{K}} \frac{v_i^T(K + \Delta_i ee^T)v_i}{\Delta_i} = (e^Tv^*)^2 \le 0.
\]
Therefore, $e^Tv^* = 0$. By the CPD of K, $(v^*)^TKv^* > 0$, so
\[
v_i^T(K + \Delta_i ee^T)v_i > 0 \quad\text{after } i \text{ is large enough},
\]
a situation which contradicts (7).

For the second result of this theorem, if K is CPD, we have shown that $K + \Delta ee^T$ is PD. Hence, (6) is equivalent to
\[
\begin{aligned}
\min_{\alpha}\quad & \tfrac{1}{2}\alpha^T(Q + \Delta yy^T)\alpha - e^T\alpha \\
\text{subject to}\quad & 0 \le \alpha_i,\ i = 1, \dots, l, \qquad (8)\\
& y^T\alpha = 0,
\end{aligned}
\]
which is a strictly convex programming problem. Hence, (8) attains a unique global minimum and so does (6). □

Unfortunately, the property that (6) has a finite objective value is not equivalent to the CPD-ness of K, because of the additional constraints $0 \le \alpha_i$, $i = 1, \dots, l$. We illustrate this by a simple example: if
\[
K = \begin{bmatrix} 1 & 2 & -1 \\ 2 & 1 & -1 \\ -1 & -1 & 0 \end{bmatrix}
\quad\text{and}\quad
y = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix},
\]
then on the feasible region of (6) we can get that
\[
\tfrac{1}{2}\alpha^T Q\alpha - e^T\alpha = \tfrac{1}{2}\Big[3\big(\alpha_1 - \tfrac{2}{3}\big)^2 + 3\big(\alpha_2 - \tfrac{2}{3}\big)^2 + 8\alpha_1\alpha_2 - \tfrac{8}{3}\Big] \ge -\tfrac{4}{3}
\]
if $\alpha_1 \ge 0$ and $\alpha_2 \ge 0$. However, K is not CPD, as we can set $\alpha_1 = -\alpha_2 = 1$, $\alpha_3 = 0$, which satisfies $e^T\alpha = 0$ but $\alpha^TK\alpha = -2 < 0$.

Moreover, the first result of the above theorem may not hold if K is only CPSD. For example, $K = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ is CPSD, as for any $\alpha_1 + \alpha_2 = 0$, $\alpha^TK\alpha = 0$. However, for any $\Delta \ne 0$, $K + \Delta ee^T$ has an eigenvalue $\Delta - \sqrt{\Delta^2 + 1} < 0$. Therefore, there is no $\Delta$ such that $K + \Delta ee^T$ is PSD. On the other hand, even though $K + \Delta ee^T$ being PSD implies that K is CPSD, neither property guarantees that the optimal objective value of (6) is finite. For example, $K = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ satisfies both properties but the objective value of (6) can be −∞.
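Reusing the is_cpd and is_cpsd sketches above, the two matrices in this example can be checked numerically (the comments restate the values derived in the text):

```python
import numpy as np

K1 = np.array([[ 1.,  2., -1.],
               [ 2.,  1., -1.],
               [-1., -1.,  0.]])
K2 = np.array([[1.,  0.],
               [0., -1.]])
print(is_cpd(K1))     # False: v = (1, -1, 0) gives v^T K1 v = -2 < 0
print(is_cpd(K2))     # False, although is_cpsd(K2) is True: the quadratic
                      # form only vanishes on alpha_1 + alpha_2 = 0
```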

Next we use concepts given in this section to analyze the sigmoid kernel.

3 The Behavior of the Sigmoid Kernel

In this section, we consider the sigmoid kernel $K(x_i, x_j) = \tanh(a x_i^T x_j + r)$, which takes

two parameters: a and r. For a > 0, we can view a as a scaling parameter of the input data, and r as a shifting parameter that controls the threshold of mapping. For a < 0, the dot-product of the input data is not only scaled but reversed. In the following table we summarize the behavior in different parameter combinations, which will be discussed in the rest of this section. It concludes that the first case, a > 0 and r < 0, is more suitable for the sigmoid kernel.

a    r    results
+    −    K is CPD after r is small; similar to RBF for small a
+    +    in general not as good as the (+, −) case
−    +    objective value of (6) is −∞ after r is large enough
−    −    easily the objective value of (6) is −∞


Case 1: a > 0 and r < 0

We analyze the limiting case of this region and show that when r is small enough, the matrix K is CPD. We begin with a lemma about the sigmoid function:

Lemma 1 Given any δ,
\[
\lim_{x\to-\infty} \frac{1 + \tanh(x + \delta)}{1 + \tanh(x)} = e^{2\delta}.
\]

The proof is by a direct calculation from the definition of the sigmoid function. With this lemma, we can prove that the sigmoid kernel matrices are CPD when r is small enough:

Theorem 4 Given any training set, if $x_i \ne x_j$ for $i \ne j$ and $a > 0$, there exists $\hat r$ such that for all $r \le \hat r$, $K + ee^T$ is PD.

Proof.

Let $H^r \equiv (K + ee^T)/(1 + \tanh(r))$, where $K_{ij} = \tanh(a x_i^T x_j + r)$. From Lemma 1,
\[
\lim_{r\to-\infty} H^r_{ij} = \lim_{r\to-\infty} \frac{1 + \tanh(a x_i^T x_j + r)}{1 + \tanh(r)} = e^{2a x_i^T x_j}.
\]
Let $\bar H = \lim_{r\to-\infty} H^r$. Thus, $\bar H_{ij} = e^{2a x_i^T x_j} = e^{a\|x_i\|^2} e^{-a\|x_i - x_j\|^2} e^{a\|x_j\|^2}$. If written in matrix products, the first and last terms would form the same diagonal matrices with positive elements, and the middle one is in the form of an RBF kernel matrix. From (Micchelli 1986), if $x_i \ne x_j$ for $i \ne j$, the RBF kernel matrix is PD. Therefore, $\bar H$ is PD.

If $H^r$ is not PD after r is small enough, there is an infinite sequence $\{r_i\}$ with $\lim_{i\to\infty} r_i = -\infty$ and $H^{r_i}$, $\forall i$, not PD. Thus, for each $r_i$, there exists $\|v_i\| = 1$ such that $v_i^T H^{r_i} v_i \le 0$.

Since $\{v_i\}$ is an infinite sequence in a compact region, there is a subsequence which converges to $\bar v \ne 0$. Therefore, $\bar v^T \bar H \bar v \le 0$, which contradicts the fact that $\bar H$ is PD. Thus, there is $\hat r$ such that for all $r \le \hat r$, $H^r$ is PD. By the definition of $H^r$, $K + ee^T$ is PD as well. □

With Theorems 3 and 4, K is CPD after r is small enough. Theorem 4 also provides a connection between the sigmoid and a special PD kernel related to the RBF kernel when a is fixed and r gets small enough. More discussions are in Section 4.
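As an illustration of Theorem 4 (a numerical sketch with arbitrary random data and an arbitrary a; not an experiment from the paper), the smallest eigenvalue of K + ee^T should eventually become positive as r decreases:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 5))            # distinct training points
a = 0.5
ones = np.ones((30, 30))
for r in [2.0, 0.0, -2.0, -6.0]:
    K = np.tanh(a * X @ X.T + r)
    # By Theorem 4, the minimum eigenvalue should be positive once r is
    # negative enough.
    print(f"r = {r:5.1f}   min eig of K + ee^T = "
          f"{np.linalg.eigvalsh(K + ones).min(): .3e}")
```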


Case 2: a > 0 and r ≥ 0

It was stated in (Burges 1999) that if $\tanh(a x_i^T x_j + r)$ is PD, then $r \ge 0$ and $a \ge 0$.

However, the converse does not hold, so the practical viability is not clear. Here we will discuss this case by checking the separability of training data.

Compared to Case 1, we show that it is more likely that the objective value of (6) goes to −∞. Therefore, with experiments in Section 4, we conclude that in general using a > 0 and r ≥ 0 is not as good as a > 0 and r < 0. The following theorem discusses possible situations in which (6) has the objective value −∞:

Theorem 5

1. If there are i and j such that $y_i \ne y_j$ and $K_{ii} + K_{jj} - 2K_{ij} \le 0$, (6) has the optimal objective value −∞.

2. For the sigmoid kernel, if
\[
\max_i\,(a\|x_i\|^2 + r) \le 0, \qquad (9)
\]
then $K_{ii} + K_{jj} - 2K_{ij} > 0$ for any $x_i \ne x_j$.

Proof.

For the first result, let $\alpha_i = \alpha_j = \Delta$ and $\alpha_k = 0$ for $k \ne i, j$. Then the objective value of (6) is $\tfrac{\Delta^2}{2}(K_{ii} - 2K_{ij} + K_{jj}) - 2\Delta$. Thus, $\Delta \to \infty$ leads to a feasible solution of (6) with objective value −∞.

For the second result, now
\[
K_{ii} - 2K_{ij} + K_{jj} = \tanh(a\|x_i\|^2 + r) - 2\tanh(a x_i^T x_j + r) + \tanh(a\|x_j\|^2 + r). \qquad (10)
\]
Since $\max_i(a\|x_i\|^2 + r) \le 0$, by the monotonicity of tanh(x) and its strict convexity when $x \le 0$,
\[
\begin{aligned}
\frac{\tanh(a\|x_i\|^2 + r) + \tanh(a\|x_j\|^2 + r)}{2}
&\ge \tanh\Big(\frac{(a\|x_i\|^2 + r) + (a\|x_j\|^2 + r)}{2}\Big) \qquad (11)\\
&= \tanh\Big(a\,\frac{\|x_i\|^2 + \|x_j\|^2}{2} + r\Big) \\
&> \tanh(a x_i^T x_j + r). \qquad (12)
\end{aligned}
\]
Note that the last inequality uses the property that $x_i \ne x_j$. Then, by (10) and (12), $K_{ii} - 2K_{ij} + K_{jj} > 0$, so the proof is complete. □

The requirement that $x_i \ne x_j$ is in general true if there are no duplicated training instances. Apparently, for a > 0, (9) can hold only when r is negative. If (9) does not hold, it is possible that $a\|x_i\|^2 + r \ge 0$ and $a\|x_j\|^2 + r \ge 0$. Then, due to the concavity of tanh(x) on the positive side, the "≥" in (11) is changed to "≤." Thus, $K_{ii} - 2K_{ij} + K_{jj}$ may be ≤ 0 and (6) may have the optimal objective value −∞.
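A quick numerical check of condition (9) and the second claim of Theorem 5 (random data; the choice of a and the margin 0.5 below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((15, 4))
a = 0.1
r = -a * np.sum(X**2, axis=1).max() - 0.5    # forces max_i(a||x_i||^2 + r) <= 0
K = np.tanh(a * X @ X.T + r)
d = np.diag(K)
gap = d[:, None] + d[None, :] - 2.0 * K      # K_ii + K_jj - 2 K_ij
off = ~np.eye(15, dtype=bool)
print("condition (9) holds:", a * np.sum(X**2, axis=1).max() + r <= 0)
print("min of K_ii + K_jj - 2K_ij (i != j):", gap[off].min())   # should be > 0
```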

Case 3: a < 0 and r > 0

The following theorem tells us that a < 0 and large r > 0 may not be a good choice.

Theorem 6 For any given training set, if a < 0 and each class has at least one data

point, there exists ¯r > 0 such that for all r ≥ ¯r, (6) has optimal objective value −∞.

Proof.

Since $K_{ij} = \tanh(a x_i^T x_j + r) = -\tanh(-a x_i^T x_j - r)$, by Theorem 4 there is $-\bar r < 0$ such that for all $-r \le -\bar r$, $-K + ee^T$ is PD. That is, there exists $\bar r > 0$ such that for all $r \ge \bar r$, any $\alpha$ with $y^T\alpha = 0$ and $\alpha \ne 0$ satisfies $\alpha^TQ\alpha < 0$.

Since there is at least one data point in each class, we can find $y_i = +1$ and $y_j = -1$. Let $\alpha_i = \alpha_j = \Delta$ and $\alpha_k = 0$ for $k \ne i, j$ be a feasible solution of (6). The objective value decreases to −∞ as $\Delta \to \infty$. Therefore, for all $r \ge \bar r$, (6) has optimal objective value −∞. □

Case 4: a < 0 and r ≤ 0

The following theorem shows that, in this case, the optimal objective value of (6) easily goes to −∞:

Theorem 7 For any given training set, if a < 0, r ≤ 0, and there are $x_i$, $x_j$ such that $x_i^T x_j \le \min(\|x_i\|^2, \|x_j\|^2)$ and $y_i \ne y_j$, then (6) has the optimal objective value −∞.

Proof.

By $x_i^T x_j \le \min(\|x_i\|^2, \|x_j\|^2)$, (10), and the monotonicity of tanh(x), we can get $K_{ii} - 2K_{ij} + K_{jj} \le 0$. Then the proof follows from Theorem 5. □

Note that the situation $x_i^T x_j < \min(\|x_i\|^2, \|x_j\|^2)$ with $y_i \ne y_j$ easily happens if the two classes of training data are not close in the input space. Thus, a < 0 and r ≤ 0 are generally not a good choice of parameters.

4 Relation with the RBF Kernel

In this section we extend Case 1 (i.e. a > 0, r < 0) in Section 3 to show that the sigmoid kernel behaves like the RBF kernel when (a, r) are in a certain range.

Lemma 1 implies that when r < 0 is small enough,
\[
1 + \tanh(a x_i^T x_j + r) \approx (1 + \tanh(r))\,e^{2a x_i^T x_j}. \qquad (13)
\]
If we further make a close to 0, $e^{a\|x\|^2} \approx 1$, so
\[
e^{2a x_i^T x_j} = e^{a\|x_i\|^2} e^{-a\|x_i - x_j\|^2} e^{a\|x_j\|^2} \approx e^{-a\|x_i - x_j\|^2}.
\]
Therefore, when r < 0 is small enough and a is close to 0,
\[
1 + \tanh(a x_i^T x_j + r) \approx (1 + \tanh(r))\,e^{-a\|x_i - x_j\|^2}, \qquad (14)
\]
a form of the RBF kernel.
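The approximation (14) can be checked numerically; a toy sketch (random data, arbitrary small a and very negative r) comparing the two sides:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((10, 3))
a, r = 0.01, -8.0                              # small a, very negative r
S = X @ X.T
sig = 1.0 + np.tanh(a * S + r)                 # left-hand side of (14)
sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * S
rbf = (1.0 + np.tanh(r)) * np.exp(-a * sq)     # right-hand side of (14)
# The relative difference should be small; it does not vanish entirely
# because a * ||x||^2 is only approximately 0 here.
print("max relative difference:", np.max(np.abs(sig - rbf) / rbf))
```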

However, the closeness of kernel elements does not directly imply similar generalization performance. Hence, we need to show that they have nearly the same decision functions. Note that the objective function of (2) is the same as
\[
\begin{aligned}
\tfrac{1}{2}\alpha^T Q\alpha - e^T\alpha
&= \tfrac{1}{2}\alpha^T(Q + yy^T)\alpha - e^T\alpha \qquad (15)\\
&= \frac{1}{1 + \tanh(r)}\Big(\tfrac{1}{2}\tilde\alpha^T \frac{Q + yy^T}{1 + \tanh(r)}\tilde\alpha - e^T\tilde\alpha\Big),
\end{aligned}
\]
where $\tilde\alpha \equiv (1 + \tanh(r))\alpha$ and (15) follows from the equality constraint in (2). Multiplying the objective function by $1 + \tanh(r) > 0$ and defining $\tilde C \equiv (1 + \tanh(r))C$, solving (2) is the same as solving
\[
\begin{aligned}
\min_{\tilde\alpha}\quad & F^r(\tilde\alpha) = \tfrac{1}{2}\tilde\alpha^T \frac{Q + yy^T}{1 + \tanh(r)}\tilde\alpha - e^T\tilde\alpha \\
\text{subject to}\quad & 0 \le \tilde\alpha_i \le \tilde C,\ i = 1, \dots, l, \qquad (16)\\
& y^T\tilde\alpha = 0.
\end{aligned}
\]

Given a fixed $\tilde C$, as $r \to -\infty$, since $(Q + yy^T)_{ij} = y_iy_j(K_{ij} + 1)$, the problem approaches
\[
\begin{aligned}
\min_{\tilde\alpha}\quad & F_T(\tilde\alpha) = \tfrac{1}{2}\tilde\alpha^T\bar Q\tilde\alpha - e^T\tilde\alpha \\
\text{subject to}\quad & 0 \le \tilde\alpha_i \le \tilde C,\ i = 1, \dots, l, \qquad (17)\\
& y^T\tilde\alpha = 0,
\end{aligned}
\]
where $\bar Q_{ij} = y_iy_je^{2a x_i^T x_j}$ is a PD kernel matrix when $x_i \ne x_j$ for all $i \ne j$. Then, we can prove the following theorem:

Theorem 8 Given fixed a and $\tilde C$, assume that $x_i \ne x_j$ for all $i \ne j$, and that the optimal b of the decision function from (17) is unique. Then for any data point x,
\[
\lim_{r\to-\infty}\big(\text{decision value at } x \text{ using the sigmoid kernel in (2)}\big) = \text{decision value at } x \text{ using (17)}.
\]

We leave the proof in Appendix A. Theorem 8 tells us that when r < 0 is small enough, the separating hyperplanes of (2) and (17) are almost the same. Similar cross-validation (CV) accuracy will be shown in the later experiments.

(Keerthi and Lin 2003, Theorem 2) shows that when $a \to 0$, for any given $\bar C$, the decision value by the SVM using the RBF kernel $e^{-a\|x_i - x_j\|^2}$ with the error cost $\frac{\bar C}{2a}$ approaches the decision value of the following linear SVM:
\[
\begin{aligned}
\min_{\bar\alpha}\quad & \tfrac{1}{2}\sum_i\sum_j \bar\alpha_i\bar\alpha_j y_iy_j x_i^T x_j - \sum_i \bar\alpha_i \\
\text{subject to}\quad & 0 \le \bar\alpha_i \le \bar C,\ i = 1, \dots, l, \qquad (18)\\
& y^T\bar\alpha = 0.
\end{aligned}
\]


Similarly, assuming that the optimal b of the decision function from (18) is unique, for any data point x,
\[
\begin{aligned}
\lim_{a\to 0}\big(\text{decision value at } x \text{ using the RBF kernel with } \tilde C = \tfrac{\bar C}{2a}\big)
&= \text{decision value at } x \text{ using (18) with } \bar C \\
&= \lim_{a\to 0}\big(\text{decision value at } x \text{ using (17) with } \tilde C = \tfrac{\bar C}{2a}\big).
\end{aligned}
\]

Then we can get the similarity between the sigmoid and the RBF kernels as follows:

Theorem 9 Given a fixed $\bar C$, assume that $x_i \ne x_j$ for all $i \ne j$, and that each of (17) (for a close enough to 0) and (18) has a unique optimal b. Then for any data point x,
\[
\begin{aligned}
\lim_{a\to 0}\lim_{r\to-\infty}\big(\text{decision value at } x \text{ using the sigmoid kernel with } C = \tfrac{\tilde C}{1+\tanh(r)}\big)
&= \lim_{a\to 0}\big(\text{decision value at } x \text{ using (17) with } \tilde C = \tfrac{\bar C}{2a}\big) \\
&= \lim_{a\to 0}\big(\text{decision value at } x \text{ using the RBF kernel with } \tilde C = \tfrac{\bar C}{2a}\big) \\
&= \text{decision value at } x \text{ using the linear kernel with } \bar C.
\end{aligned}
\]

We can observe the result of Theorems 8 and 9 from Figure 1. The contours show five-fold cross-validation accuracy of the data set heart in different r and C. The contours with a = 1 are on the left-hand side, while those with a = 0.01 are on the right-hand

side. Other parameters considered here are log2C from −2 to 13, with grid space 1, and

log2(−r) from 0 to 4.5, with grid space 0.5. Detailed description of the data set will be

given later in Section 7.

From both sides of Figure 1, we can see that the middle contour (using (17)) is similar to the top one (using tanh) when r gets small. This verifies our approximation in (13) as well as Theorem 8. However, on the left-hand side, since a is not small enough,

the data-dependent scaling term $e^{a\|x_i\|^2}$ between (13) and (14) is large and causes a

difference between the middle and bottom contours. When a is reduced to 0.01 on the right-hand side, the top, middle, and bottom contours are all similar when r is small. This observation corresponds to Theorem 9.

We observe this on other data sets, too. However, Figure 1 and Theorem 9 can only provide a connection between the sigmoid and the RBF kernels when (a, r) are in a limited range. Thus, in Section 7, we try to compare the two kernels using parameters in other ranges.


Figure 1: Performance of different kernels. Five-fold CV accuracy contours over lg(C) and lg(−r) on the heart data; left column a = 1, right column a = 0.01. Panels: (a), (b) heart-tanh; (c), (d) heart-(17); (e), (f) heart-RBF-C̃.


5 Importance of the Linear Constraint y^Tα = 0

In Section 3 we showed that for certain parameters, the kernel matrix using the sigmoid

kernel is CPD. This is strongly related to the linear constraint yTα = 0 in the dual

problem (2). Hence, we can use it to verify the CPD-ness of a given kernel matrix.

Recall that yTα = 0 of (2) is originally derived from the bias term b of (1). It has

been known that if the kernel function is PD and $x_i \ne x_j$ for all $i \ne j$, Q will be PD and

the problem (6) attains an optimal solution. Therefore, for PD kernels such as the RBF, in many cases, the performance is not affected much if the bias term b is not used. By doing so, the dual problem is

\[
\begin{aligned}
\min_{\alpha}\quad & \tfrac{1}{2}\alpha^T Q\alpha - e^T\alpha \\
\text{subject to}\quad & 0 \le \alpha_i \le C,\ i = 1, \dots, l. \qquad (19)
\end{aligned}
\]

For the sigmoid kernel, we may think that (19) is also acceptable. It turns out that without $y^T\alpha = 0$, in more cases (19) without the upper bound C has the objective value −∞. Thus, training data may not be properly separated. The following theorem gives an example of such cases:

Theorem 10 If there is one $K_{ii} < 0$ and there is no upper bound C on $\alpha$, (19) has the optimal objective value −∞.

Proof.

Let $\alpha_i = \Delta$ and $\alpha_k = 0$ for $k \ne i$. We can easily see that $\Delta \to \infty$ leads to an objective value of −∞. □
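A small sketch of the situation in Theorem 10 (arbitrary toy data; a and r chosen so that a‖x_i‖² + r < 0 and hence K_ii < 0; not one of the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((20, 6))
a, r = 0.05, -2.0
K = np.tanh(a * X @ X.T + r)
i = int(np.argmin(np.diag(K)))
print("most negative diagonal entry K_ii:", K[i, i])
# With alpha = Delta * e_i (feasible for (19) once C is removed), the
# objective is 0.5 * K_ii * Delta^2 - Delta, which tends to -infinity:
for Delta in [1.0, 10.0, 100.0]:
    print(Delta, 0.5 * K[i, i] * Delta**2 - Delta)
```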

Note that for sigmoid kernel matrices, this situation happens when $\min_i(a\|x_i\|^2 + r) < 0$. Thus, when a > 0 but r is small, unlike our analysis in Case 1 of Section 3, solving (19) may lead to very different results. This will be shown in the following experiments. We compare the five-fold cross-validation accuracy of problems (2) and (19). Four data sets, which will be described in Section 7, are considered. We use LIBSVM for solving (2), and a modification of BSVM (Hsu and Lin 2002) for (19). Results of CV accuracy with parameters a = 1/n and $(\log_2 C, r) \in [-2, -1, \dots, 13] \times [-2, -1.8, \dots, 2]$ are shown in Figure 2: results of (2) are on the left, while those of (19)


are on the right. For each contour, the horizontal axis is log2C, while the vertical axis is r. The internal optimization solver of BSVM can handle non-convex problems, so its decomposition procedure guarantees the strict decrease of function values throughout all iterations. However, unlike LIBSVM which always obtains a stationary point of (2) using the analysis in Section 6, for BSVM, we do not know whether its convergent point is a stationary point of (19) or not.

When (2) is solved, from Figure 2, higher accuracy generally happens when r < 0 (especially for german and diabete). This corresponds to our analysis about the CPD-ness of K when a > 0 and r is small enough. However, sometimes the CV accuracy is also high when r > 0. We have also tried the cases of a < 0; the results are worse.

The good regions for the right column shift to r ≥ 0. This confirms our analysis in Theorem 10 as when r < 0, (19) without C tends to have the objective value −∞. In

other words, without yTα = 0, CPD of K for small r is not useful.

The experiments fully demonstrate the importance of incorporating constraints of the dual problem into the analysis of the kernel. An earlier paper (Sellathurai and Haykin

1999) says that each Kij of the sigmoid kernel matrix is from a hyperbolic inner product.

Thus, a special type of maximal margin still exists. However, as shown in Figure 2,

without yTα = 0, the performance is very bad. Thus, the separability of non-PSD

kernels may not come from their own properties, and a direct analysis may not be useful.

6 SMO-type Implementation for non-PSD Kernel Matrices

First we discuss how decomposition methods work for PSD kernels and the difficulties for non-PSD cases. In particular, we explain that the algorithm may stay at the same point, so the program never ends. The decomposition method (e.g., (Osuna, Freund, and Girosi 1997; Joachims 1998; Platt 1998; Chang and Lin 2001)) is an iterative process. In each step, the index set of variables is partitioned into two sets B and N, where B is the working set. Then in that iteration, variables corresponding to N are fixed while a sub-problem on the variables corresponding to B is minimized.


Figure 2: Comparison of cross-validation rates between problems with the linear constraint (left: (a) heart, (c) german, (e) diabete, (g) a1a) and without it (right: (b) heart-nob, (d) german-nob, (f) diabete-nob, (h) a1a-nob).


That is, if $\alpha^k$ denotes the current solution, the following sub-problem is solved:
\[
\begin{aligned}
\min_{\alpha_B}\quad & \tfrac{1}{2}\begin{bmatrix}\alpha_B^T & (\alpha_N^k)^T\end{bmatrix}
\begin{bmatrix} Q_{BB} & Q_{BN} \\ Q_{NB} & Q_{NN} \end{bmatrix}
\begin{bmatrix}\alpha_B \\ \alpha_N^k\end{bmatrix}
- \begin{bmatrix} e_B^T & e_N^T \end{bmatrix}\begin{bmatrix}\alpha_B \\ \alpha_N^k\end{bmatrix} \\
\text{subject to}\quad & y_B^T\alpha_B = -y_N^T\alpha_N^k, \qquad (20)\\
& 0 \le \alpha_i \le C,\ i \in B.
\end{aligned}
\]
The objective function of (20) can be simplified to
\[
\min_{\alpha_B}\ \tfrac{1}{2}\alpha_B^T Q_{BB}\alpha_B + (Q_{BN}\alpha_N^k - e_B)^T\alpha_B
\]
after removing constant terms.

The extreme of the decomposition method is the Sequential Minimal Optimization (SMO) algorithm (Platt 1998) whose working sets are restricted to two elements. The advantage of SMO is that (20) can be easily solved without an optimization package. A simple and common way to select the two variables is through the following form of optimal conditions (Keerthi, Shevade, Bhattacharyya, and Murthy 2001; Chang and Lin 2001): α is a stationary point of (2) if and only if α is feasible and

\[
\max_{t\in I_{up}(\alpha,C)} -y_t\nabla F(\alpha)_t \le \min_{t\in I_{low}(\alpha,C)} -y_t\nabla F(\alpha)_t, \qquad (21)
\]
where
\[
\begin{aligned}
I_{up}(\alpha, C) &\equiv \{t \mid \alpha_t < C,\ y_t = 1 \text{ or } \alpha_t > 0,\ y_t = -1\},\\
I_{low}(\alpha, C) &\equiv \{t \mid \alpha_t < C,\ y_t = -1 \text{ or } \alpha_t > 0,\ y_t = 1\}.
\end{aligned}
\]
Thus, when $\alpha^k$ is feasible but not optimal for (2), (21) does not hold, so a simple selection of $B = \{i, j\}$ is
\[
i \equiv \arg\max_{t\in I_{up}(\alpha^k,C)} -y_t\nabla F(\alpha^k)_t \quad\text{and}\quad j \equiv \arg\min_{t\in I_{low}(\alpha^k,C)} -y_t\nabla F(\alpha^k)_t. \qquad (22)
\]
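A minimal sketch of the selection rule (22) in NumPy (the function name and array-based layout are mine, not LIBSVM's actual code):

```python
import numpy as np

def select_working_set(alpha, y, grad, C):
    """Maximal-violating-pair selection (22): i maximizes -y_t * grad_t over
    I_up(alpha, C) and j minimizes it over I_low(alpha, C)."""
    vals = -y * grad
    in_up = ((alpha < C) & (y > 0)) | ((alpha > 0) & (y < 0))
    in_low = ((alpha < C) & (y < 0)) | ((alpha > 0) & (y > 0))
    up_idx, low_idx = np.where(in_up)[0], np.where(in_low)[0]
    i = int(up_idx[np.argmax(vals[up_idx])])
    j = int(low_idx[np.argmin(vals[low_idx])])
    return i, j
```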

By considering the variable $\alpha_B = \alpha_B^k + d$, and defining
\[
\hat d_i \equiv y_id_i \quad\text{and}\quad \hat d_j \equiv y_jd_j,
\]
the two-variable sub-problem is
\[
\begin{aligned}
\min_{\hat d_i, \hat d_j}\quad & \tfrac{1}{2}\begin{bmatrix}\hat d_i & \hat d_j\end{bmatrix}
\begin{bmatrix} K_{ii} & K_{ij} \\ K_{ji} & K_{jj} \end{bmatrix}
\begin{bmatrix}\hat d_i \\ \hat d_j\end{bmatrix}
+ \begin{bmatrix} y_i\nabla F(\alpha^k)_i & y_j\nabla F(\alpha^k)_j \end{bmatrix}\begin{bmatrix}\hat d_i \\ \hat d_j\end{bmatrix} \\
\text{subject to}\quad & \hat d_i + \hat d_j = 0, \qquad (23)\\
& 0 \le \alpha_i^k + y_i\hat d_i,\ \alpha_j^k + y_j\hat d_j \le C.
\end{aligned}
\]


To solve (23), we can substitute $\hat d_i = -\hat d_j$ into its objective function:
\[
\begin{aligned}
\min_{\hat d_j}\quad & \tfrac{1}{2}(K_{ii} - 2K_{ij} + K_{jj})\hat d_j^2 + (-y_i\nabla F(\alpha^k)_i + y_j\nabla F(\alpha^k)_j)\hat d_j \qquad (24a)\\
\text{subject to}\quad & L \le \hat d_j \le H, \qquad (24b)
\end{aligned}
\]
where L and H are lower and upper bounds on $\hat d_j$ after including the information on $\hat d_i$: $\hat d_i = -\hat d_j$ and $0 \le \alpha_i^k + y_i\hat d_i \le C$. For example, if $y_i = y_j = 1$,
\[
L = \max(-\alpha_j^k,\ \alpha_i^k - C) \quad\text{and}\quad H = \min(C - \alpha_j^k,\ \alpha_i^k).
\]

Since i ∈ Iup(αk, C) and j ∈ Ilow(αk, C), we can clearly see L < 0 but H only ≥ 0. If Q

is PSD, Kii+ Kjj− 2Kij ≥ 0 so (24) is a convex parabola or a straight line. In addition,

from the working set selection strategy in (22), −yi∇F (αk)i+ yj∇F (αk)j > 0, so (24) is

like Figure 3. Thus, there exists ˆdj < 0 such that the objective value of (24) is strictly

decreasing. In addition, ˆdj < 0 also shows the direction toward the minimum of the

function.

If $K_{ii} + K_{jj} - 2K_{ij} > 0$, the way to solve (24) is by calculating the unconstrained minimum of (24a) first:
\[
-\frac{-y_i\nabla F(\alpha^k)_i + y_j\nabla F(\alpha^k)_j}{K_{ii} - 2K_{ij} + K_{jj}} < 0. \qquad (25)
\]
Then, if $\hat d_j$ defined by the above is less than L, we reduce $\hat d_j$ to the lower bound. If the kernel matrix is only PSD, it is possible that $K_{ii} - 2K_{ij} + K_{jj} = 0$, as shown in Figure 3(b). In this case, using the trick under the IEEE floating-point standard (Goldberg 1991), we can make sure that (25) becomes −∞, which is still well defined. Then, a comparison with L still reduces $\hat d_j$ to the lower bound. Thus, a direct (but careful) use of (25) does not cause any problem. More details are in (Chang and Lin 2001). The above procedure explains how we solve (24) in SMO-type software.
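The clipped update just described can be sketched as follows (a simplified illustration, not LIBSVM's actual code; it assumes the working set {i, j} was chosen by (22), so the linear term is positive and only the lower bound L can be active):

```python
import numpy as np

def solve_subproblem_psd(K, y, grad, alpha, i, j, C):
    """Solve the two-variable sub-problem (24) for a PSD kernel matrix:
    minimize 0.5*(K_ii - 2K_ij + K_jj)*d^2 + (-y_i*grad_i + y_j*grad_j)*d
    over L <= d <= H, where d stands for dhat_j and dhat_i = -d."""
    # Lower bound L on dhat_j from 0 <= alpha_i - y_i*d <= C and
    # 0 <= alpha_j + y_j*d <= C; the upper bound H >= 0 is never binding
    # here because the step is non-positive.
    L = max(alpha[i] - C if y[i] > 0 else -alpha[i],
            -alpha[j] if y[j] > 0 else alpha[j] - C)
    quad = K[i, i] - 2.0 * K[i, j] + K[j, j]
    lin = -y[i] * grad[i] + y[j] * grad[j]        # > 0 by the selection (22)
    d = -lin / quad if quad > 0 else -np.inf      # unconstrained minimum (25)
    d = max(d, L)                                 # clip at the lower bound
    return alpha[i] - y[i] * d, alpha[j] + y[j] * d   # new alpha_i, alpha_j
```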

If Kii− 2Kij+ Kjj < 0, which may happen if the kernel is not PSD, (25) is positive.

That is, the quadratic function (24a) is concave (see Figure 4) and a direct use of (25) moves the solution toward the maximum of (24a). Therefore, the decomposition method may not have the objective value strictly decreasing, a property usually required for an optimization algorithm. Moreover, it may not be feasible to move along a positive direction $\hat d_j$, as the upper bound H in (24b) may be zero; then neither $\alpha_i$ nor $\alpha_j$ can be changed by such a step.

Figure 3: Solving the convex sub-problem (24). (a) $K_{ii} + K_{jj} - 2K_{ij} > 0$; (b) $K_{ii} + K_{jj} - 2K_{ij} = 0$.

Thus, under the current setting for PSD kernels, it is possible that

the next solution stays at the same point so the program never ends. In the following we propose different approaches to handle this difficulty.

Figure 4: Solving the concave sub-problem (24). (a) L is the minimum; (b) H is the minimum.

6.1 Restricting the Range of Parameters

The first approach is to restrict the parameter space. In other words, users are allowed to specify only certain kernel parameters. Then the sub-problem is guaranteed to be convex, so the original procedure for solving sub-problems works without modification.

Lemma 2 If a > 0 and
\[
\max_i\,(a\|x_i\|^2 + r) \le 0, \qquad (26)
\]
then any two-variable sub-problem of an SMO algorithm is convex.

We have explained that the sub-problem can be reformulated as (24), so the proof is reduced to showing that $K_{ii} - 2K_{ij} + K_{jj} \ge 0$. This, in fact, is nearly the same as the proof of Theorem 5. The only change is that without assuming $x_i \ne x_j$, "> 0" is changed to "≥ 0."

Therefore, if we require that a and r satisfy (26), we will never have an endless loop

staying at one αk.

6.2 An SMO-type Method for General non-PSD Kernels

Results in Section 6.1 depend on properties of the sigmoid kernel. Here we propose an SMO-type method which is able to handle all kernel matrices, no matter whether they are PSD or not. To have such a method, the key is solving the sub-problem when $K_{ii} - 2K_{ij} + K_{jj} < 0$. In this case, (24a) is a concave quadratic function like that in Figure 4. The two sub-figures clearly show that the global optimal solution of (24) can be obtained by checking the objective values at the two bounds L and H.

A disadvantage is that this procedure of checking two points is different from the solution procedure for $K_{ii} - 2K_{ij} + K_{jj} \ge 0$. Thus, we propose to consider only the lower bound L, which, as L < 0, always ensures the strict decrease of the objective function. Therefore, the algorithm is as follows:
\[
\begin{aligned}
&\text{If } K_{ii} - 2K_{ij} + K_{jj} > 0, \text{ then } \hat d_j \text{ is the maximum of (25) and } L,\\
&\text{Else } \hat d_j = L.
\end{aligned}
\qquad (27)
\]
Practically, the change in the code may be only from (25) to

\[
-\frac{-y_i\nabla F(\alpha^k)_i + y_j\nabla F(\alpha^k)_j}{\max(K_{ii} - 2K_{ij} + K_{jj},\ 0)}. \qquad (28)
\]
When $K_{ii} + K_{jj} - 2K_{ij} < 0$, (28) is −∞. Then, the same as in the situation of $K_{ii} + K_{jj} - 2K_{ij} = 0$, $\hat d_j = L$ is taken.
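In code, the modification amounts to the following one-line change of the step computation; a sketch under the same conventions as the earlier sub-problem snippet (the explicit -inf mirrors the IEEE floating-point trick mentioned above):

```python
import numpy as np

def step_direction_nonpsd(K, y, grad, i, j):
    """Step (28): like (25), but with the curvature replaced by
    max(K_ii - 2K_ij + K_jj, 0).  Non-positive curvature then yields -inf,
    so the subsequent max(., L) takes dhat_j = L, which strictly decreases
    the objective because L < 0 and the linear term of (24a) is positive."""
    quad = max(K[i, i] - 2.0 * K[i, j] + K[j, j], 0.0)
    lin = -y[i] * grad[i] + y[j] * grad[j]
    return -lin / quad if quad > 0 else -np.inf
```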

An advantage of this strategy is that we do not have to exactly solve (24). (28) also shows that a very simple modification from the PSD-kernel version is possible. Moreover, it is easier to prove the asymptotic convergence. The reason will be discussed after


Lemma 3. In the following we prove that any limit point of the decomposition procedure discussed above is a stationary point of (2). In earlier convergence results, Q is PSD so a stationary point is already a global minimum.

If the working set selection is via (22), existing convergence proofs for PSD kernels (Lin 2001; Lin 2002) require the following important lemma which is also needed here:

Lemma 3 There exists σ > 0 such that for any k,
\[
F(\alpha^{k+1}) \le F(\alpha^k) - \frac{\sigma}{2}\|\alpha^{k+1} - \alpha^k\|^2. \qquad (29)
\]

Proof.

If $K_{ii} + K_{jj} - 2K_{ij} \ge 0$ in the current iteration, (Lin 2002) shows that by selecting σ as the following number,
\[
\min\Big\{C^2,\ \min_{t,r}\Big\{\frac{K_{tt} + K_{rr} - 2K_{tr}}{2} \,\Big|\, K_{tt} + K_{rr} - 2K_{tr} > 0\Big\}\Big\}, \qquad (30)
\]
(29) holds.

If $K_{ii} + K_{jj} - 2K_{ij} < 0$, $\hat d_j = L < 0$ is the step chosen, so $(-y_i\nabla F(\alpha^k)_i + y_j\nabla F(\alpha^k)_j)\hat d_j < 0$. As $\|\alpha^{k+1} - \alpha^k\|^2 = 2\hat d_j^2$ from $\hat d_i = -\hat d_j$, (24a) implies that
\[
\begin{aligned}
F(\alpha^{k+1}) - F(\alpha^k) &< \tfrac{1}{2}(K_{ii} + K_{jj} - 2K_{ij})\hat d_j^2 \qquad (31)\\
&= \tfrac{1}{4}(K_{ii} + K_{jj} - 2K_{ij})\|\alpha^{k+1} - \alpha^k\|^2 \\
&\le -\frac{\sigma'}{2}\|\alpha^{k+1} - \alpha^k\|^2,
\end{aligned}
\]
where
\[
\sigma' \equiv -\max_{t,r}\Big\{\frac{K_{tt} + K_{rr} - 2K_{tr}}{2} \,\Big|\, K_{tt} + K_{rr} - 2K_{tr} < 0\Big\}. \qquad (32)
\]

Therefore, by defining σ as the minimum of (30) and (32), the proof is complete. □

Next we give the main convergence result:

Theorem 11 For the decomposition method using (22) for the working set selection and (27) for solving the sub-problem, any limit point of $\{\alpha^k\}$ is a stationary point of (2).

Proof.


If we carefully check the proof in (Lin 2001; Lin 2002), it can be extended to non-PSD Q if (1) (29) holds and (2) a local minimum of the sub-problem is obtained in each

iteration. Now we have (29) from Lemma 3. In addition, $\hat d_j = L$ is essentially one of the two local minima of problem (24), as clearly seen from Figure 4. Thus, the same proof follows. □

There is an interesting remark about Lemma 3. If we exactly solve (24), so far

we have not been able to establish Lemma 3. The reason is that if $\hat d_j = H$ is taken, $(-y_i\nabla F(\alpha^k)_i + y_j\nabla F(\alpha^k)_j)\hat d_j > 0$, so (31) may not be true. Therefore, the convergence is not clear. In the whole convergence proof, Lemma 3 is used to obtain $\|\alpha^{k+1} - \alpha^k\| \to 0$ as

k → ∞. A different way to have this property is by slightly modifying the sub-problem (20) as shown in (Palagi and Sciandrone 2002). Then the convergence holds when we exactly solve the new sub-problem.

Although Theorem 11 shows only that the improved SMO algorithm converges to a stationary point rather than a global minimum, the algorithm nevertheless shows a way to design robust SVM software with separability in mind. Theorem 1 indicates that a stationary point is feasible for the separability problem (5). Thus, if the number of support vectors of this stationary point is not too large, the training error would not be too large, either. Furthermore, with the additional constraints $y^T\alpha = 0$ and $0 \le \alpha_i \le C$, $i = 1, \dots, l$, a stationary point may already be a global one. If this happens at parameters with better accuracy, we do not worry about multiple stationary points at others. An example is the sigmoid kernel, where the discussion in Section 3 indicates that parameters with better accuracy tend to come with CPD kernel matrices.

It is well known that neural networks have similar problems with local minima (Sarle 1997), and a popular way to avoid being trapped in a bad one is to use multiple random initializations. Here we adapt this method and present an empirical study in Figure 5. We use the heart data set, with the same setting as in Figure 2. Figure 5(a) is the contour which uses the zero vector as the initial $\alpha^0$. Figure 5(b) is the contour obtained by choosing the solution with the smallest of five objective values from different random initial $\alpha^0$.

The performance of Figures 5(a) and 5(b) is similar, especially in regions with good rates. For example, when r < −0.5, the two contours are almost the same, a property which may explain the CPD-ness in that region.


Figure 5: Comparison of cross-validation rates between approaches without (left, (a) heart-zero) and with (right, (b) heart-random5) five random initializations.

In the regions where multiple stationary

points may occur (e.g., $C > 2^6$ and r > +0.5), the two contours are different but the rates are

still similar. We observe similar results on other datasets, too. Therefore, the stationary point obtained by zero initialization seems good enough in practice.

7 Modification to Convex Formulations

While (5) is non-convex, it is possible to slightly modify the formulation to be convex. If the objective function is replaced by

\[
\tfrac{1}{2}\alpha^T\alpha + C\sum_{i=1}^{l}\xi_i,
\]
then (5) becomes convex. Note that the non-PSD kernel K still appears in the constraints. The main drawback of this approach is that the $\alpha_i$ are in general non-zero, so unlike standard SVM, the sparsity is lost.

There are other formulations which use a non-PSD kernel matrix but remain convex. For example, we can consider the kernel logistic regression (KLR) (e.g., (Wahba 1998)) and use a convex regularization term:

\[
\min_{\alpha,b}\quad \tfrac{1}{2}\sum_{r=1}^{l}\alpha_r^2 + C\sum_{r=1}^{l}\log(1 + e^{\xi_r}), \qquad (33)
\]
where
\[
\xi_r \equiv -y_r\Big(\sum_{j=1}^{l}\alpha_j K(x_r, x_j) + b\Big).
\]

By defining an (l + 1) × l matrix $\tilde K$ with
\[
\tilde K_{ij} \equiv
\begin{cases}
K_{ij} & \text{if } 1 \le i, j \le l,\\
1 & \text{if } i = l + 1,
\end{cases}
\]
the Hessian matrix of (33) is
\[
\tilde I + C\tilde K\,\mathrm{diag}(\tilde p)\,\mathrm{diag}(1 - \tilde p)\,\tilde K^T,
\]
where $\tilde I$ is an (l + 1) by (l + 1) identity matrix with the last diagonal element replaced by zero, $\tilde p \equiv [1/(1 + e^{\xi_1}), \dots, 1/(1 + e^{\xi_l})]^T$, and $\mathrm{diag}(\tilde p)$ is a diagonal matrix generated by $\tilde p$. Clearly, the Hessian matrix is always positive semi-definite, so (33) is convex.
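A small sketch of (33) and its Hessian (the function name, toy data, and parameter values are mine), which can be used to check the PSD-ness numerically even when K itself is non-PSD:

```python
import numpy as np

def klr_objective_and_hessian(alpha, b, K, y, C):
    """Objective (33) with the regularizer 0.5*||alpha||^2 and its Hessian
    with respect to (alpha, b); the Hessian is PSD regardless of K."""
    l = K.shape[0]
    xi = -y * (K @ alpha + b)
    obj = 0.5 * alpha @ alpha + C * np.sum(np.log1p(np.exp(xi)))
    p = 1.0 / (1.0 + np.exp(xi))                  # p_r = 1 / (1 + e^{xi_r})
    Ktil = np.vstack([K, np.ones((1, l))])        # (l+1) x l, last row of ones
    Itil = np.eye(l + 1)
    Itil[l, l] = 0.0                              # no penalty on b
    H = Itil + C * Ktil @ np.diag(p * (1.0 - p)) @ Ktil.T
    return obj, H

# Toy check with a (possibly non-PSD) sigmoid kernel matrix:
rng = np.random.default_rng(5)
X = rng.standard_normal((25, 4))
y = np.where(rng.standard_normal(25) > 0, 1.0, -1.0)
K = np.tanh(0.5 * X @ X.T + 0.5)
obj, H = klr_objective_and_hessian(np.zeros(25), 0.0, K, y, 1.0)
print("min eigenvalue of the KLR Hessian:", np.linalg.eigvalsh(H).min())
```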

In the following we compare SVM (RBF and sigmoid kernels) and KLR (sigmoid kernel). Four data sets are tested: heart, german, diabete, and a1a. They are from (Michie, Spiegelhalter, and Taylor 1994) and (Blake and Merz 1998). The first three data sets are linearly scaled, so values of each attribute are in [-1, 1]. For a1a, its values are binary (0 or 1), so we do not scale it. We train SVM (RBF and sigmoid kernels) by LIBSVM (Chang and Lin 2001), which, as an SMO-type decomposition implementation, uses the techniques in Section 6 for solving non-convex optimization problems. For KLR, two optimization procedures are compared. The first one, KLR-NT, is a Newton method implemented by modifying the software TRON (Lin and Moré 1999). The second one, KLR-CG, is a conjugate gradient method (see, for example, (Nash and Sofer 1996)). The stopping criteria for the two procedures are set the same to ensure that the solutions are comparable.

For the comparison, we conduct a two-level cross validation. At the first level, data are separated into five folds. Each fold is predicted by training the remaining four folds. For each training set, we perform another five-fold cross validation and choose the best parameter by CV accuracy. We try all $(\log_2 C, \log_2 a, r)$ in the region $[-3, 0, \dots, 12] \times [-12, -9, \dots, 3] \times [-2.4, -1.8, \dots, 2.4]$. Then the average testing accuracy is reported in Table 1. Note that for the parameter selection, the RBF kernel $e^{-a\|x_i - x_j\|^2}$ does not need the parameter r.


The resulting accuracy is similar for all three approaches. The sigmoid kernel seems to work well in practice, but it is not better than RBF. As RBF has the properties of being PD and having fewer parameters, there is no strong reason to use the sigmoid. KLR with the sigmoid kernel is competitive with SVM, and a nice property is that it solves a convex problem. However, without sparsity, the training and testing time for KLR is much longer. Moreover, CG is worse than NT for KLR. These points are clearly shown in Table 2. The experiments are run on Pentium IV 2.8 GHz machines with 1024 MB RAM. Optimized linear algebra subroutines (Whaley, Petitet, and Dongarra 2000) are linked to reduce the computational time of the KLR solvers. The time is measured in CPU seconds. The number of support vectors (#SV) and the training/testing time are averaged from the results of the first level of five-fold CV. This means that the maximum possible #SV here is 4/5 of the original data size, and we can see that KLR models are dense to this extent.

Table 1: Comparison of test accuracy

                                      e^{-a||x_i-x_j||^2}   tanh(a x_i^T x_j + r)
data set   #data   #attributes        SVM                   SVM      KLR-NT   KLR-CG
heart        270        13            83.0%                 83.0%    83.7%    83.7%
german      1000        24            76.6%                 76.1%    75.6%    75.6%
diabete      768         8            77.6%                 77.3%    77.1%    76.7%
a1a         1605       123            83.6%                 83.1%    83.7%    83.8%

Table 2: Comparison of time usage

                          tanh(a x_i^T x_j + r)
                #SV                              training/testing time
data set   SVM      KLR-NT   KLR-CG      SVM          KLR-NT       KLR-CG
heart      115.2     216      216        0.02/0.01    0.12/0.02    0.45/0.02
german     430.2     800      800        0.51/0.07    5.76/0.10    73.3/0.11
diabete    338.4     614.4    614.4      0.09/0.03    2.25/0.04    31.7/0.05
a1a        492      1284     1284        0.39/0.08    46.7/0.25    80.3/0.19


8 Discussions

From the results in Sections 3 and 5, we clearly see the importance of the CPD-ness

which is directly related to the linear constraint yTα = 0. We suspect that for many

non-PSD kernels used so far, their viability is based on it as well as inequality constraints

0 ≤ αi ≤ C, i = 1, . . . , l of the dual problem. It is known that some non-PSD kernels are

not CPD. For example, the tangent distance kernel matrix in (Haasdonk and Keysers 2002) may contain more than one negative eigenvalue, a property that indicates the matrix is not CPD. Further investigation on such non-PSD kernels and the effect of

inequality constraints 0 ≤ αi ≤ C will be interesting research directions.

Even though the CPD-ness of the sigmoid kernel for certain parameters gives an explanation of its practical viability, the quality of the local minimum solution for other parameters may not be guaranteed. This makes it hard to select suitable parameters for the sigmoid kernel. Thus, in general we do not recommend the use of the sigmoid kernel. In addition, our analysis indicates that for certain parameters the sigmoid kernel behaves like the RBF kernel. Experiments also show that their performances are similar. Therefore, with the result in (Keerthi and Lin 2003) showing that the linear kernel is essentially a special case of the RBF kernel, among existing kernels, RBF should be the first choice for general users.

Acknowledgments

This work was supported in part by the National Science Council of Taiwan via the grant NSC 90-2213-E-002-111. We thank users of LIBSVM (in particular, Carl Staelin), who somewhat forced us to study this issue. We also thank Bernhard Schölkopf and Bernard Haasdonk for some helpful discussions.

References

Berg, C., J. P. R. Christensen, and P. Ressel (1984). Harmonic Analysis on Semigroups. New York: Springer-Verlag.


Blake, C. L. and C. J. Merz (1998). UCI repository of machine learning databases. Technical report, University of California, Department of Information and Computer Science, Irvine, CA. Available at http://www.ics.uci.edu/~mlearn/MLRepository.html.

Boser, B., I. Guyon, and V. Vapnik (1992). A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory.

Burges, C. J. C. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2 (2), 121–167.

Burges, C. J. C. (1999). Geometry and invariance in kernel based methods. In B. Schölkopf, C. Burges, and A. Smola (Eds.), Advances in Kernel Methods: Support Vector Learning, pp. 89–116. MIT Press.

Chang, C.-C. and C.-J. Lin (2001). LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

Cortes, C. and V. Vapnik (1995). Support-vector network. Machine Learning 20, 273– 297.

DeCoste, D. and B. Schölkopf (2002). Training invariant support vector machines. Machine Learning 46, 161–190.

Goldberg, D. (1991). What every computer scientist should know about floating-point arithmetic. ACM Computing Surveys 23 (1), 5–48.

Haasdonk, B. and D. Keysers (2002). Tangent distance kernels for support vector machines. In Proceedings of the 16th ICPR, pp. 864–868.

Hsu, C.-W. and C.-J. Lin (2002). A simple decomposition method for support vector machines. Machine Learning 46, 291–314.

Joachims, T. (1998). Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola (Eds.), Advances in Kernel Methods - Support Vector Learning, Cambridge, MA. MIT Press.


Keerthi, S. S. and C.-J. Lin (2003). Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation 15 (7), 1667–1689.

Keerthi, S. S., S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy (2001). Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation 13, 637–649.

Lin, C.-J. (2001). On the convergence of the decomposition method for support vector machines. IEEE Transactions on Neural Networks 12 (6), 1288–1298.

Lin, C.-J. (2002). Asymptotic convergence of an SMO algorithm without any assumptions. IEEE Transactions on Neural Networks 13 (1), 248–250.

Lin, C.-J. and J. J. Moré (1999). Newton's method for large-scale bound constrained problems. SIAM Journal on Optimization 9, 1100–1127.

Lin, K.-M. and C.-J. Lin (2003). A study on reduced support vector machines. IEEE Transactions on Neural Networks 14 (6), 1449–1559.

Micchelli, C. A. (1986). Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approximation 2, 11–22.

Michie, D., D. J. Spiegelhalter, and C. C. Taylor (1994). Machine Learning, Neural and Statistical Classification. Englewood Cliffs, N.J.: Prentice Hall. Data available at http://www.ncc.up.pt/liacc/ML/statlog/datasets.html.

Nash, S. G. and A. Sofer (1996). Linear and Nonlinear Programming. McGraw-Hill.

Osuna, E., R. Freund, and F. Girosi (1997). Training support vector machines: An application to face detection. In Proceedings of CVPR'97, New York, NY, pp. 130–136. IEEE.

Osuna, E. and F. Girosi (1998). Reducing the run-time complexity of support vector machines. In Proceedings of the International Conference on Pattern Recognition.

Palagi, L. and M. Sciandrone (2002). On the convergence of a modified version of SVMlight algorithm. Technical Report IASI-CNR 567.

Platt, J. C. (1998). Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola (Eds.), Advances in Kernel Methods - Support Vector Learning, Cambridge, MA. MIT Press.

Sarle, W. S. (1997). Neural Network FAQ. Periodic posting to the Usenet newsgroup comp.ai.neural-nets.

Schölkopf, B. (1997). Support Vector Learning. Ph.D. thesis.

Schölkopf, B. (2000). The kernel trick for distances. In NIPS, pp. 301–307.

Schölkopf, B. and A. J. Smola (2002). Learning with Kernels. MIT Press.

Sellathurai, M. and S. Haykin (1999). The separability theory of hyperbolic tangent kernels and support vector machines for pattern classification. In Proceedings of ICASSP99.

Vapnik, V. (1995). The Nature of Statistical Learning Theory. New York, NY: Springer-Verlag.

Wahba, G. (1998). Support vector machines, reproducing kernel Hilbert spaces, and randomized GACV. In B. Schölkopf, C. J. C. Burges, and A. J. Smola (Eds.), Advances in Kernel Methods: Support Vector Learning, pp. 69–88. MIT Press.

Whaley, R. C., A. Petitet, and J. J. Dongarra (2000). Automatically tuned linear algebra software and the ATLAS project. Technical report, Department of Computer Sciences, University of Tennessee.

A Proof of Theorem 8

The proof of Theorem 8 contains three parts: the convergence of the optimal solution, the convergence of the decision value without the bias term, and the convergence of the bias term. Before entering the proof, we first need to know that (17) has a PD kernel

under our assumption $x_i \ne x_j$ for all $i \ne j$. Therefore, the optimal solution $\hat\alpha^*$ of (17) is unique. From now on we denote $\hat\alpha^r$ as a local optimal solution of (2), and $b^r$ as the associated optimal b value. For (17), $b^*$ denotes its optimal b.

1. The convergence of the optimal solution:
\[
\lim_{r\to-\infty}\theta_r\hat\alpha^r = \hat\alpha^*, \quad\text{where } \theta_r \equiv 1 + \tanh(r). \qquad (34)
\]


Proof.

By the equivalence between (2) and (16), $\theta_r\hat\alpha^r$ is the optimal solution of (16). The convergence to $\hat\alpha^*$ comes from (Keerthi and Lin 2003, Lemma 2), since $\bar Q$ is PD and the kernel of (16) approaches $\bar Q$ by Lemma 1. □

2. The convergence of the decision value without the bias term: For any x,
\[
\lim_{r\to-\infty}\sum_{i=1}^{l} y_i\hat\alpha_i^r\tanh(a x_i^T x + r) = \sum_{i=1}^{l} y_i\hat\alpha_i^* e^{2a x_i^T x}. \qquad (35)
\]

Proof.

\[
\begin{aligned}
\lim_{r\to-\infty}\sum_{i=1}^{l} y_i\hat\alpha_i^r\tanh(a x_i^T x + r)
&= \lim_{r\to-\infty}\sum_{i=1}^{l} y_i\hat\alpha_i^r\big(1 + \tanh(a x_i^T x + r)\big) \qquad (36)\\
&= \lim_{r\to-\infty}\sum_{i=1}^{l} y_i\theta_r\hat\alpha_i^r\,\frac{1 + \tanh(a x_i^T x + r)}{\theta_r} \\
&= \sum_{i=1}^{l} y_i \lim_{r\to-\infty}\theta_r\hat\alpha_i^r \lim_{r\to-\infty}\frac{1 + \tanh(a x_i^T x + r)}{\theta_r} \\
&= \sum_{i=1}^{l} y_i\hat\alpha_i^* e^{2a x_i^T x}. \qquad (37)
\end{aligned}
\]
(36) comes from the equality constraint in (2), and (37) comes from (34) and Lemma 1. □

3. The convergence of the bias term:
\[
\lim_{r\to-\infty} b^r = b^*. \qquad (38)
\]

Proof.

By the KKT condition that $b^r$ must satisfy,
\[
\max_{i\in I_{up}(\hat\alpha^r, C)} -y_i\nabla F(\hat\alpha^r)_i \le b^r \le \min_{i\in I_{low}(\hat\alpha^r, C)} -y_i\nabla F(\hat\alpha^r)_i,
\]
where $I_{up}$ and $I_{low}$ are defined in (21). In addition, because $b^*$ is unique,
\[
\max_{i\in I_{up}(\hat\alpha^*, \tilde C)} -y_i\nabla F_T(\hat\alpha^*)_i = b^* = \min_{i\in I_{low}(\hat\alpha^*, \tilde C)} -y_i\nabla F_T(\hat\alpha^*)_i.
\]
Note that the equivalence between (2) and (16) implies $\nabla F(\hat\alpha^r)_i = \nabla F^r(\theta_r\hat\alpha^r)_i$. Thus,
\[
\max_{i\in I_{up}(\theta_r\hat\alpha^r, \tilde C)} -y_i\nabla F^r(\theta_r\hat\alpha^r)_i \le b^r \le \min_{i\in I_{low}(\theta_r\hat\alpha^r, \tilde C)} -y_i\nabla F^r(\theta_r\hat\alpha^r)_i.
\]
By the convergence of $\theta_r\hat\alpha^r$ when $r \to -\infty$, after r is small enough, all indices i satisfying $\hat\alpha_i^* < \tilde C$ would have $\theta_r\hat\alpha_i^r < \tilde C$. That is, $I_{up}(\hat\alpha^*, \tilde C) \subseteq I_{up}(\theta_r\hat\alpha^r, \tilde C)$. Therefore, when r is small enough,
\[
\max_{i\in I_{up}(\hat\alpha^*, \tilde C)} -y_i\nabla F^r(\theta_r\hat\alpha^r)_i \le \max_{i\in I_{up}(\theta_r\hat\alpha^r, \tilde C)} -y_i\nabla F^r(\theta_r\hat\alpha^r)_i.
\]
Similarly,
\[
\min_{i\in I_{low}(\hat\alpha^*, \tilde C)} -y_i\nabla F^r(\theta_r\hat\alpha^r)_i \ge \min_{i\in I_{low}(\theta_r\hat\alpha^r, \tilde C)} -y_i\nabla F^r(\theta_r\hat\alpha^r)_i.
\]
Thus, for r < 0 small enough,
\[
\max_{i\in I_{up}(\hat\alpha^*, \tilde C)} -y_i\nabla F^r(\theta_r\hat\alpha^r)_i \le b^r \le \min_{i\in I_{low}(\hat\alpha^*, \tilde C)} -y_i\nabla F^r(\theta_r\hat\alpha^r)_i.
\]
Taking $\lim_{r\to-\infty}$ on both sides, using Lemma 1 and (34),
\[
\lim_{r\to-\infty} b^r = \max_{i\in I_{up}(\hat\alpha^*, \tilde C)} -y_i\nabla F_T(\hat\alpha^*)_i = \min_{i\in I_{low}(\hat\alpha^*, \tilde C)} -y_i\nabla F_T(\hat\alpha^*)_i = b^*. \qquad (39)
\]
□
