Support Vector Machines
Chih-Jen Lin
Department of Computer Science National Taiwan University
Talk at Machine Learning Summer School 2006, Taipei
Outline
Basic concepts
SVM primal/dual problems
Training linear and nonlinear SVMs
Parameter/kernel selection and practical issues
Multi-class classification
Discussion and conclusions
Basic concepts
Outline
Basic concepts
SVM primal/dual problems
Training linear and nonlinear SVMs
Parameter/kernel selection and practical issues
Multi-class classification
Discussion and conclusions
Basic concepts
Why SVM and Kernel Methods
SVM: in many cases competitive with existing classification methods
Relatively easy to use
Kernel techniques: many extensions
Regression, density estimation, kernel PCA, etc.
Basic concepts
Support Vector Classification
Training vectors: $x_i$, $i = 1, \dots, l$
Feature vectors. For example, a patient = [height, weight, ...]
Consider a simple case with two classes:
Define an indicator vector $y$:
$$y_i = \begin{cases} 1 & \text{if } x_i \text{ in class 1}\\ -1 & \text{if } x_i \text{ in class 2}\end{cases}$$
A hyperplane which separates all data
Basic concepts
[Figure: hyperplanes $w^Tx + b = +1, 0, -1$]
A separating hyperplane: $w^Tx + b = 0$
$w^Tx_i + b > 0$ if $y_i = 1$
$w^Tx_i + b < 0$ if $y_i = -1$
Decision function $f(x) = \mathrm{sgn}(w^Tx + b)$, $x$: test data
Many possible choices of $w$ and $b$
Basic concepts
Maximal Margin
Distance between $w^Tx + b = 1$ and $-1$:
$$2/\|w\| = 2/\sqrt{w^Tw}$$
A quadratic programming problem [Boser et al., 1992]:
$$\min_{w,b}\ \frac{1}{2}w^Tw$$
subject to $y_i(w^Tx_i + b) \ge 1$, $i = 1, \dots, l$.
Basic concepts
Data May Not Be Linearly Separable
An example:
Allow training errors
Higher dimensional (maybe infinite) feature space $\phi(x) = (\phi_1(x), \phi_2(x), \dots)$.
Basic concepts
Standard SVM [Cortes and Vapnik, 1995]
$$\min_{w,b,\xi}\ \frac{1}{2}w^Tw + C\sum_{i=1}^{l}\xi_i$$
subject to $y_i(w^T\phi(x_i) + b) \ge 1 - \xi_i$, $\xi_i \ge 0$, $i = 1, \dots, l$.
Example: $x \in R^3$, $\phi(x) \in R^{10}$
$$\phi(x) = (1, \sqrt{2}x_1, \sqrt{2}x_2, \sqrt{2}x_3, x_1^2, x_2^2, x_3^2, \sqrt{2}x_1x_2, \sqrt{2}x_1x_3, \sqrt{2}x_2x_3)$$
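A minimal sketch of training this soft-margin formulation, assuming scikit-learn's SVC is available; the parameter C below plays exactly the role of C above, and the data are made up for illustration.

```python
# Minimal sketch (illustration only): soft-margin SVM with scikit-learn's SVC.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 3.0]])  # training vectors x_i
y = np.array([-1, -1, 1, 1])                                    # labels y_i in {+1, -1}

clf = SVC(C=1.0, kernel="rbf", gamma=0.5)   # C penalizes the slack variables xi_i
clf.fit(X, y)
print(clf.predict([[2.5, 2.5]]))            # sign of the decision function
```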
Basic concepts
Finding the Decision Function
$w$: maybe infinite variables
The dual problem
$$\min_\alpha\ \frac{1}{2}\alpha^TQ\alpha - e^T\alpha$$
subject to $0 \le \alpha_i \le C$, $i = 1, \dots, l$, $y^T\alpha = 0$,
where $Q_{ij} = y_iy_j\phi(x_i)^T\phi(x_j)$ and $e = [1, \dots, 1]^T$
At optimum
$$w = \sum_{i=1}^{l}\alpha_i y_i\phi(x_i)$$
A finite problem: #variables = #training data
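For a toy data set, this dual can be solved directly with a general-purpose solver. A rough sketch, assuming scipy is available (real SVM software uses the specialized decomposition methods discussed later, not SLSQP):

```python
# Rough sketch: solve the SVM dual for a tiny problem with a generic solver.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 2.0], [3.0, 2.5]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
C, l = 10.0, len(y)

K = (X @ X.T + 1.0) ** 2            # polynomial kernel (1 + x_i^T x_j)^2
Q = (y[:, None] * y[None, :]) * K   # Q_ij = y_i y_j K(x_i, x_j)

obj = lambda a: 0.5 * a @ Q @ a - a.sum()      # (1/2) a^T Q a - e^T a
res = minimize(obj, np.zeros(l), method="SLSQP",
               bounds=[(0.0, C)] * l,                               # 0 <= alpha_i <= C
               constraints={"type": "eq", "fun": lambda a: a @ y})  # y^T alpha = 0
print("alpha =", np.round(res.x, 3))
```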
Basic concepts
Kernel Tricks
$Q_{ij} = y_iy_j\phi(x_i)^T\phi(x_j)$ needs a closed form
Example: $x_i \in R^3$, $\phi(x_i) \in R^{10}$
$$\phi(x_i) = (1, \sqrt{2}(x_i)_1, \sqrt{2}(x_i)_2, \sqrt{2}(x_i)_3, (x_i)_1^2, (x_i)_2^2, (x_i)_3^2, \sqrt{2}(x_i)_1(x_i)_2, \sqrt{2}(x_i)_1(x_i)_3, \sqrt{2}(x_i)_2(x_i)_3)$$
Then $\phi(x_i)^T\phi(x_j) = (1 + x_i^Tx_j)^2$.
Kernel: $K(x, y) = \phi(x)^T\phi(y)$; common kernels:
$e^{-\gamma\|x_i - x_j\|^2}$ (Radial Basis Function)
$(x_i^Tx_j/a + b)^d$ (Polynomial kernel)
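A quick numerical check (my own illustration, assuming numpy) that the explicit 10-dimensional map above really gives the closed form $(1 + x^Tz)^2$:

```python
# Check that phi(x)^T phi(z) equals (1 + x^T z)^2 for the explicit map above.
import numpy as np

def phi(x):
    s = np.sqrt(2.0)
    return np.array([1.0, s*x[0], s*x[1], s*x[2],
                     x[0]**2, x[1]**2, x[2]**2,
                     s*x[0]*x[1], s*x[0]*x[2], s*x[1]*x[2]])

x = np.array([1.0, 2.0, 3.0])
z = np.array([-1.0, 0.5, 2.0])
print(phi(x) @ phi(z), (1.0 + x @ z) ** 2)  # the two numbers agree
```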
Basic concepts
Can be inner product in infinite dimensional space
Assume $x \in R^1$ and $\gamma > 0$.
$$e^{-\gamma\|x_i-x_j\|^2} = e^{-\gamma(x_i-x_j)^2} = e^{-\gamma x_i^2 + 2\gamma x_i x_j - \gamma x_j^2}$$
$$= e^{-\gamma x_i^2 - \gamma x_j^2}\left(1 + \frac{2\gamma x_i x_j}{1!} + \frac{(2\gamma x_i x_j)^2}{2!} + \frac{(2\gamma x_i x_j)^3}{3!} + \cdots\right)$$
$$= e^{-\gamma x_i^2 - \gamma x_j^2}\left(1\cdot 1 + \sqrt{\frac{2\gamma}{1!}}x_i\cdot\sqrt{\frac{2\gamma}{1!}}x_j + \sqrt{\frac{(2\gamma)^2}{2!}}x_i^2\cdot\sqrt{\frac{(2\gamma)^2}{2!}}x_j^2 + \sqrt{\frac{(2\gamma)^3}{3!}}x_i^3\cdot\sqrt{\frac{(2\gamma)^3}{3!}}x_j^3 + \cdots\right) = \phi(x_i)^T\phi(x_j),$$
where
$$\phi(x) = e^{-\gamma x^2}\left(1, \sqrt{\frac{2\gamma}{1!}}x, \sqrt{\frac{(2\gamma)^2}{2!}}x^2, \sqrt{\frac{(2\gamma)^3}{3!}}x^3, \cdots\right)^T.$$
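The expansion can be checked numerically: truncating the series after a few terms already matches the closed form. A small sketch of my own, for one-dimensional x and numpy assumed:

```python
# Verify that the truncated series above approaches exp(-gamma*(xi-xj)^2).
import numpy as np
from math import factorial

gamma, xi, xj = 0.8, 0.7, -0.4
exact = np.exp(-gamma * (xi - xj) ** 2)

# k-th term of the series: exp(-gamma xi^2 - gamma xj^2) * (2 gamma)^k (xi xj)^k / k!
terms = 12
approx = sum(np.exp(-gamma*xi**2 - gamma*xj**2) *
             (2*gamma)**k / factorial(k) * (xi*xj)**k
             for k in range(terms))
print(exact, approx)  # nearly identical after 12 terms
```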
Basic concepts
More about Kernels
How do we know kernels help to separate data?
In $R^l$, any $l$ independent vectors
⇒ linearly separable
$$\begin{bmatrix} x_1^T\\ \vdots\\ x_l^T\end{bmatrix} w = \begin{bmatrix} +e\\ -e\end{bmatrix}$$
If $K$ positive definite ⇒ data linearly separable
$K = LL^T$
Transforming training points to independent vectors in $R^l$
Basic concepts
So what kind of kernel should I use?
What kind of functions are valid kernels?
How to decide kernel parameters?
Will be discussed later
Basic concepts
Decision function
At optimum
$$w = \sum_{i=1}^{l}\alpha_i y_i\phi(x_i)$$
Decision function
$$w^T\phi(x) + b = \sum_{i=1}^{l}\alpha_i y_i\phi(x_i)^T\phi(x) + b = \sum_{i=1}^{l}\alpha_i y_i K(x_i, x) + b$$
Only $\phi(x_i)$ of $\alpha_i > 0$ used ⇒ support vectors
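A small sketch, assuming scikit-learn, showing that the decision function really is a sum over support vectors only: after fitting, dual_coef_ stores $\alpha_i y_i$ for the support vectors, so the formula above can be rebuilt by hand.

```python
# Rebuild sum_i alpha_i y_i K(x_i, x) + b from a fitted SVC (binary case).
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(40, 2)
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

gamma = 0.5
clf = SVC(C=1.0, kernel="rbf", gamma=gamma).fit(X, y)

x_new = np.array([[0.3, -0.2]])
K = np.exp(-gamma * np.sum((clf.support_vectors_ - x_new) ** 2, axis=1))  # K(x_i, x)
manual = clf.dual_coef_ @ K + clf.intercept_    # sum_i (alpha_i y_i) K(x_i, x) + b
print(manual, clf.decision_function(x_new))     # the two values match
```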
Basic concepts
Support Vectors: More Important Data
[Figure: two-class training data illustrating support vectors]
Basic concepts
So we have roughly shown basic ideas of SVM
A 3-D demonstration:
www.csie.ntu.edu.tw/~cjlin/libsvmtools/svmtoy3d
Further references, for example,
[Cristianini and Shawe-Taylor, 2000, Schölkopf and Smola, 2002]
Also see discussion on kernel machines blackboard www.kernel-machines.org/phpbb/
SVM primal/dual problems
Outline
Basic concepts
SVM primal/dual problems
Training linear and nonlinear SVMs
Parameter/kernel selection and practical issues
Multi-class classification
Discussion and conclusions
SVM primal/dual problems
Deriving the Dual
Consider the problem without ξi
$$\min_{w,b}\ \frac{1}{2}w^Tw$$
subject to $y_i(w^T\phi(x_i) + b) \ge 1$, $i = 1, \dots, l$.
Its dual
$$\min_\alpha\ \frac{1}{2}\alpha^TQ\alpha - e^T\alpha$$
subject to $0 \le \alpha_i$, $i = 1, \dots, l$, $y^T\alpha = 0$.
SVM primal/dual problems
Lagrangian Dual
$$\max_{\alpha\ge 0}\ \min_{w,b}\ L(w, b, \alpha), \text{ where}$$
$$L(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{l}\alpha_i\big(y_i(w^T\phi(x_i) + b) - 1\big)$$
Strong duality (be careful about this):
$$\min \text{ Primal} = \max_{\alpha\ge 0}\ \min_{w,b}\ L(w, b, \alpha)$$
SVM primal/dual problems
Simplify the dual. When $\alpha$ is fixed,
$$\min_{w,b} L(w, b, \alpha) = \begin{cases} -\infty & \text{if } \sum_{i=1}^{l}\alpha_i y_i \ne 0,\\ \min_w\ \frac{1}{2}w^Tw - \sum_{i=1}^{l}\alpha_i[y_i w^T\phi(x_i) - 1] & \text{if } \sum_{i=1}^{l}\alpha_i y_i = 0.\end{cases}$$
If $\sum_{i=1}^{l}\alpha_i y_i \ne 0$, decrease
$$-b\sum_{i=1}^{l}\alpha_i y_i$$
in $L(w, b, \alpha)$ to $-\infty$
SVM primal/dual problems
If $\sum_{i=1}^{l}\alpha_i y_i = 0$, the optimum of the strictly convex
$$\frac{1}{2}w^Tw - \sum_{i=1}^{l}\alpha_i[y_i w^T\phi(x_i) - 1]$$
happens when
$$\frac{\partial}{\partial w}L(w, b, \alpha) = 0.$$
Thus,
$$w = \sum_{i=1}^{l}\alpha_i y_i\phi(x_i).$$
SVM primal/dual problems
Note that
$$w^Tw = \left(\sum_{i=1}^{l}\alpha_i y_i\phi(x_i)\right)^T\left(\sum_{j=1}^{l}\alpha_j y_j\phi(x_j)\right) = \sum_{i,j}\alpha_i\alpha_j y_i y_j\phi(x_i)^T\phi(x_j)$$
The dual is
$$\max_{\alpha\ge 0}\ \begin{cases}\sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j y_i y_j\phi(x_i)^T\phi(x_j) & \text{if } \sum_{i=1}^{l}\alpha_i y_i = 0,\\ -\infty & \text{if } \sum_{i=1}^{l}\alpha_i y_i \ne 0.\end{cases}$$
SVM primal/dual problems
Lagrangian dual: $\max_{\alpha\ge 0}\ \min_{w,b} L(w, b, \alpha)$
$-\infty$ is definitely not the maximum of the dual
The dual optimal solution does not happen when $\sum_{i=1}^{l}\alpha_i y_i \ne 0$.
Dual simplified to
$$\max_{\alpha\in R^l}\ \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}\alpha_i\alpha_j y_i y_j\phi(x_i)^T\phi(x_j)$$
subject to $y^T\alpha = 0$,
$\alpha_i \ge 0$, $i = 1, \dots, l$.
SVM primal/dual problems
More about Dual Problems
After SVM became popular
Quite a few people think that for any optimization problem
⇒ Lagrangian dual exists and strong duality holds
Wrong! We usually need
Convex programming; constraint qualification
We have them:
SVM primal is convex; linear constraints
SVM primal/dual problems
Our problems may be infinite dimensional
Can still use Lagrangian duality
See a rigorous discussion in [Lin, 2001]
Training linear and nonlinear SVMs
Outline
Basic concepts
SVM primal/dual problems
Training linear and nonlinear SVMs
Parameter/kernel selection and practical issues
Multi-class classification
Discussion and conclusions
Training linear and nonlinear SVMs
Training Nonlinear SVMs
If using kernels, we solve the dual
$$\min_\alpha\ \frac{1}{2}\alpha^TQ\alpha - e^T\alpha$$
subject to $0 \le \alpha_i \le C$, $i = 1, \dots, l$, $y^T\alpha = 0$
Large dense quadratic programming: $Q_{ij} \ne 0$, $Q$: an $l$ by $l$ fully dense matrix
30,000 training points: 30,000 variables
$(30{,}000^2 \times 8 / 2)$ bytes ≈ 3GB RAM to store $Q$
Traditional methods:
Newton, quasi-Newton cannot be directly applied
Training linear and nonlinear SVMs
Decomposition Methods
Working on some variables each time (e.g., [Osuna et al., 1997, Joachims, 1998, Platt, 1998])
Similar to coordinate-wise minimization
Working set $B$, $N = \{1, \dots, l\}\setminus B$ fixed
Sub-problem at each iteration:
$$\min_{\alpha_B}\ \frac{1}{2}\begin{bmatrix}\alpha_B^T & (\alpha_N^k)^T\end{bmatrix}\begin{bmatrix}Q_{BB} & Q_{BN}\\ Q_{NB} & Q_{NN}\end{bmatrix}\begin{bmatrix}\alpha_B\\ \alpha_N^k\end{bmatrix} - \begin{bmatrix}e_B^T & e_N^T\end{bmatrix}\begin{bmatrix}\alpha_B\\ \alpha_N^k\end{bmatrix}$$
subject to $0 \le \alpha_t \le C$, $t \in B$, $y_B^T\alpha_B = -y_N^T\alpha_N^k$
Training linear and nonlinear SVMs
Avoid Memory Problems
The new objective function:
$$\frac{1}{2}\alpha_B^TQ_{BB}\alpha_B + (-e_B + Q_{BN}\alpha_N^k)^T\alpha_B + \text{constant}$$
$|B|$ columns of $Q$ needed
Calculated when used
Trade time for space
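A toy sketch of my own (not LIBSVM's code) of the "calculated when used" idea: kernel columns are produced on demand, and a small cache keeps the most recently used ones.

```python
# Toy illustration of computing kernel columns on demand, trading time for space.
import numpy as np
from collections import OrderedDict

class KernelColumnCache:
    def __init__(self, X, y, gamma, max_columns=100):
        self.X, self.y, self.gamma = X, y, gamma
        self.cache = OrderedDict()            # column index -> Q[:, j]
        self.max_columns = max_columns

    def column(self, j):
        if j in self.cache:                   # recently used column: no recomputation
            self.cache.move_to_end(j)
            return self.cache[j]
        d = np.sum((self.X - self.X[j]) ** 2, axis=1)
        col = self.y * self.y[j] * np.exp(-self.gamma * d)   # Q_ij = y_i y_j K(x_i, x_j)
        if len(self.cache) >= self.max_columns:
            self.cache.popitem(last=False)    # evict the least recently used column
        self.cache[j] = col
        return col

# Usage: only the columns indexed by the working set B are ever formed.
rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
y = np.where(rng.rand(1000) > 0.5, 1.0, -1.0)
Q = KernelColumnCache(X, y, gamma=0.5)
cols = [Q.column(j) for j in (3, 17, 3)]      # second access to column 3 hits the cache
```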
Training linear and nonlinear SVMs
Does it Really Work?
Compared to Newton, quasi-Newton: slow convergence
However, no need to have very accurate $\alpha$:
$$\mathrm{sgn}\Big(\sum_{i=1}^{l}\alpha_i y_i K(x_i, x) + b\Big)$$
Prediction not affected much
In some situations, # support vectors ≪ # training points
Initial α1 = 0, some elements never used
Machine learning knowledge affects optimization
Training linear and nonlinear SVMs
An example of training 50,000 instances using LIBSVM
$ ./svm-train -m 200 -c 16 -g 4 22features
optimization finished, #iter = 24981
Total nSV = 3370
time 5m1.456s
On a Pentium M 1.4 GHz Laptop
Calculating Q may have taken more than 5 minutes
#SVs = 3,370 ≪ 50,000
A good case where some remain at zero all the time
Training linear and nonlinear SVMs
Issues of Decomposition Methods
Working set size/selection
Asymptotic convergence
Finite termination & stopping conditions
Convergence rate
Numerical issues
Optimization researchers are now also interested in these issues
If interested in them, check my talk to optimization researchers in Rome last year:
http://www.csie.ntu.edu.tw/~cjlin/talks/rome.pdf
Training linear and nonlinear SVMs
Caching and Shrinking
Speed up decomposition methods
Caching [Joachims, 1998]
Store recently used kernel columns in computer memory
100K cache:
$ time ./libsvm-2.81/svm-train -m 0.01 a4a
11.463s
40M cache:
$ time ./libsvm-2.81/svm-train -m 40 a4a
7.817s
Training linear and nonlinear SVMs
Shrinking [Joachims, 1998]
Some bounded elements remain until the end
Heuristically resized to a smaller problem
After certain iterations, most bounded elements identified and not changed [Lin, 2002]
So caching and shrinking are useful
Training linear and nonlinear SVMs
Caching: Issues
A simple way:
Store recently used columns
What if, in working set selection, we deliberately select some indices in the cache?
Goal: minimize the total number of columns calculated
Difficult to connect the algorithm and this goal
Training linear and nonlinear SVMs
SVM doesn’t Scale Up
Yes, if you use kernels
Training millions of data is time consuming
But other nonlinear methods face the same problem
e.g., kernel logistic regression
Two possibilities
1 Linear SVMs: in some situations, can solve much larger problems
2 Approximation
Training linear and nonlinear SVMs
Training Linear SVMs
Linear kernel:
$$\min_{w,b,\xi}\ \frac{1}{2}w^Tw + C\sum_{i=1}^{l}\xi_i$$
subject to $y_i(w^Tx_i + b) \ge 1 - \xi_i$, $\xi_i \ge 0$.
At optimum:
$$\xi_i = \max\big(0,\ 1 - y_i(w^Tx_i + b)\big)$$
Training linear and nonlinear SVMs
Remaining variables: $w, b$
$$\min_{w,b}\ \frac{1}{2}w^Tw + C\sum_{i=1}^{l}\max\big(0,\ 1 - y_i(w^Tx_i + b)\big)$$
#variables = #features + 1
If #features small, easier to solve
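A rough subgradient-descent sketch of my own (not the methods in [Kao et al., 2004] or other papers) for minimizing this unconstrained primal when the number of features is small; numpy is assumed.

```python
# Rough sketch: minimize (1/2) w^T w + C * sum_i max(0, 1 - y_i (w^T x_i + b))
# by subgradient descent. Illustration only; not a state-of-the-art linear solver.
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.001, epochs=200):
    l, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                      # points with nonzero hinge loss
        grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -C * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.RandomState(1)
X = rng.randn(200, 3)
y = np.where(X @ np.array([1.0, -2.0, 0.5]) > 0, 1.0, -1.0)
w, b = train_linear_svm(X, y, C=1.0)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```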
Training linear and nonlinear SVMs
Traditional optimization methods can be applied
Training time similar to methods such as logistic regression
What if #features and #instances both large?
Very challenging
Some language/document problems are of this type
Training linear and nonlinear SVMs
Decomposition Methods for Linear SVMs
Could we still solve the dual by decomposition methods?
Even if #features small
Slow convergence when C is large
$bsvm-train -b 500 -c 500 -t 0 australian_scale
optimization finished, #iter = 260092
obj = -99310.588975, rho = 0.000000
$K_{ij} = x_i^Tx_j$, rank ≤ #features
positive semi-definite only
Still a research topic in understanding this
Training linear and nonlinear SVMs
Decomposition Methods for Linear SVMs
But no need to use large C
C large enough, w the same [Keerthi and Lin, 2003]
decision function the same
Remember
$$w = \sum_{i=1}^{l}\alpha_i y_i x_i \in R^n,\quad b \in R^1$$
$\#\{i : 0 < \alpha_i < C\} \le n + 1$
As $C$ changes, optimal $\alpha$ share many elements at 0 and $C$
Training linear and nonlinear SVMs
Decomposition Methods for Linear SVMs (Cont’d)
Warm start very effective [Kao et al., 2004]
Starting from small $C$, faster convergence
Using $C = 1, 2, 4, 8, \dots$
$bsvm-train -c 500 -t 0 australian_scale
optimization finished, #iter = 10087
So decomposition methods can still handle large linear SVMs
Training linear and nonlinear SVMs
Approximations
#instances large and using nonlinear kernels
Difficult to solve the dual
Subsampling
Simple and often effective
From this, many more advanced techniques
E.g., stratified subsampling
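A small sketch, assuming scikit-learn, of stratified subsampling: keep, say, 10% of the data while preserving the class proportions, then train on the subset.

```python
# Sketch: stratified subsampling before training, preserving class proportions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(5000, 10)
y = np.where(X[:, 0] > 0.8, 1, -1)          # imbalanced two-class problem

# Keep 10% of the instances; stratify=y keeps the class ratio in the subset.
X_sub, _, y_sub, _ = train_test_split(X, y, train_size=0.1, stratify=y, random_state=0)
clf = SVC(C=1.0, kernel="rbf", gamma=0.1).fit(X_sub, y_sub)
print("subset size:", len(y_sub), "class balance:", np.mean(y_sub == 1))
```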
Training linear and nonlinear SVMs
Approximations (Cont’d)
Incremental way (e.g., [Syed et al., 1999]):
Data ⇒ 10 parts
train 1st part ⇒ SVs, train SVs + 2nd part, ...
Select good points first: KNN or heuristics, e.g., [Bakır et al., 2005]
Hierarchical settings (e.g., [Yu et al., 2003]):
Clustering training data into several groups
SVM models built for each group
Training linear and nonlinear SVMs
Approximations (Cont’d)
Using only a subset to construct w
$$w = \sum_{i\in B}\alpha_i y_i\phi(x_i).$$
Put this into the primal
$$\min_{\alpha_B,b,\xi}\ \frac{1}{2}\alpha_B^TQ_{BB}\alpha_B + C\sum_{i=1}^{l}\xi_i$$
subject to $Q_{:,B}\alpha_B + by \ge e - \xi$
Without considering $\xi_i$, #variables = $|B| + 1$
Training linear and nonlinear SVMs
Approximations (Cont’d)
Selecting B:
random [Lee and Mangasarian, 2001], incremental [Keerthi et al., 2006], and many other ways
Training linear and nonlinear SVMs
Approximations (Cont’d)
All these approaches
some simple, some sophisticated
In machine learning, very often a balance between simplification and performance is needed
Parameter/kernel selection and practical issues
Outline
Basic concepts
SVM primal/dual problems
Training linear and nonlinear SVMs
Parameter/kernel selection and practical issues
Multi-class classification
Discussion and conclusions
Parameter/kernel selection and practical issues
Let’s Try a Practical Example
A problem from astroparticle physics
1 1:2.6173e+01 2:5.88670e+01 3:-1.89469e-01 4:1.25122e+02
1 1:5.7073e+01 2:2.21404e+02 3:8.60795e-02 4:1.22911e+02
1 1:1.7259e+01 2:1.73436e+02 3:-1.29805e-01 4:1.25031e+02
1 1:2.1779e+01 2:1.24953e+02 3:1.53885e-01 4:1.52715e+02
1 1:9.1339e+01 2:2.93569e+02 3:1.42391e-01 4:1.60540e+02
1 1:5.5375e+01 2:1.79222e+02 3:1.65495e-01 4:1.11227e+02
1 1:2.9562e+01 2:1.91357e+02 3:9.90143e-02 4:1.03407e+02
Training and testing sets available: 3,089 and 4,000
Parameter/kernel selection and practical issues
The Story Behind this Data Set
User:
I am using libsvm in a astroparticle physics application .. First, let me congratulate you to a really easy to use and nice package. Unfortunately, it gives me astonishingly bad results...
OK. Please send us your data
I am able to get 97% test accuracy. Is that good enough for you?
User:
You earned a copy of my PhD thesis
Parameter/kernel selection and practical issues
Training and Testing
Training
$./svm-train train.1
optimization finished, #iter = 6131
nSV = 3053, nBSV = 724
Total nSV = 3053
Testing
$./svm-predict test.1 train.1.model test.1.out
Accuracy = 66.925% (2677/4000)
nSV and nBSV: number of SVs and bounded SVs (αi = C ).
Parameter/kernel selection and practical issues
Why this Fails
After training, nearly 100% support vectors
Training and testing accuracy different
$./svm-predict train.1 train.1.model o
Accuracy = 99.7734% (3082/3089)
Most kernel elements:
$$K_{ij} = e^{-\|x_i - x_j\|^2/4}\ \begin{cases} = 1 & \text{if } i = j,\\ \to 0 & \text{if } i \ne j.\end{cases}$$
Some features in rather large ranges
Parameter/kernel selection and practical issues
Data Scaling
Without scaling
Attributes in greater numeric ranges may dominate
Example:
height gender
x1 150 F
x2 180 M
x3 185 M
and
y1 = 0, y2 = 1, y3 = 1.
Parameter/kernel selection and practical issues
The separating hyperplane is almost vertical
[Figure: points x1, x2, x3 with a nearly vertical separating hyperplane]
Strongly depends on the first attribute; but the second may also be important
Linearly scale the first attribute to [0, 1] by:
$$\frac{\text{1st attribute} - 150}{185 - 150}$$
Scaling generally helps, but not always
Parameter/kernel selection and practical issues
Other ways for scaling
Needed for k Nearest Neighbor, Neural networks as well
unless the method is scale-invariant
Parameter/kernel selection and practical issues
Data Scaling: Same Factors
A common mistake
$./svm-scale -l -1 -u 1 train.1 > train.1.scale
$./svm-scale -l -1 -u 1 test.1 > test.1.scale
Same factor on training and testing:
$./svm-scale -s range1 train.1 > train.1.scale
$./svm-scale -r range1 test.1 > test.1.scale
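The same rule applies in other toolkits. A small sketch, assuming scikit-learn, that mirrors svm-scale -s / -r: fit the scaler on the training data only, then apply the stored factors to both sets.

```python
# Sketch: scale training and testing data with the SAME factors (like -s / -r above).
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[150.0, 0.0], [180.0, 1.0], [185.0, 1.0]])
X_test  = np.array([[170.0, 1.0]])

scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_train)  # factors from training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled  = scaler.transform(X_test)                   # reuse the stored factors
```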
Parameter/kernel selection and practical issues
After Data Scaling
Train scaled data and then prediction
$./svm-train train.1.scale
$./svm-predict test.1.scale train.1.scale.model test.1.predict
Accuracy = 96.15%
Training accuracy now is
$./svm-predict train.1.scale train.1.scale.model
Accuracy = 96.439% (2979/3089)
Default parameter: C = 1, γ = 0.25
Parameter/kernel selection and practical issues
Different Parameters
If we use C = 20, γ = 400
$./svm-train -c 20 -g 400 train.1.scale
$./svm-predict train.1.scale train.1.scale.model
Accuracy = 100% (3089/3089)
100% training accuracy but
$./svm-predict test.1.scale train.1.scale.model
Accuracy = 82.7% (3308/4000)
Very bad test accuracy
Overfitting happens
Parameter/kernel selection and practical issues
Overfitting
In theory
You can easily achieve 100% training accuracy
This is useless
When training and predicting data, we should
Avoid underfitting: small training error
Avoid overfitting: small testing error
Parameter/kernel selection and practical issues
[Figure: ● and ▲: training data; ○ and △: testing data]
Parameter/kernel selection and practical issues
Parameter Selection
Is important
Now parameters are $C$ and kernel parameters
Example:
$\gamma$ of $e^{-\gamma\|x_i-x_j\|^2}$
$a, b, d$ of $(x_i^Tx_j/a + b)^d$
How to select them so that performance is better?
Parameter/kernel selection and practical issues
Parameter Selection (Cont’d)
Also how to select kernels?
e.g., RBF or polynomial
Moreover, how to select methods?
e.g., SVM or decision trees?
Parameter/kernel selection and practical issues
Performance Evaluation
$l$ training data, $x_i \in R^n$, $y_i \in \{+1, -1\}$, $i = 1, \dots, l$, a learning machine:
$$x \to f(x, \alpha),\quad f(x, \alpha) = 1 \text{ or } -1.$$
Different $\alpha$: different machines
The expected test error (generalization error):
$$R(\alpha) = \int \frac{1}{2}|y - f(x, \alpha)|\, dP(x, y)$$
$y$: class of $x$ (i.e., 1 or $-1$)
Parameter/kernel selection and practical issues
$P(x, y)$ unknown; empirical risk (training error):
$$R_{\mathrm{emp}}(\alpha) = \frac{1}{2l}\sum_{i=1}^{l}|y_i - f(x_i, \alpha)|$$
Training errors not important; only test errors count
$\frac{1}{2}|y_i - f(x_i, \alpha)|$: loss; choose $0 \le \eta \le 1$, with probability at least $1 - \eta$:
$$R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \text{another term}$$
A good classification method:
minimize both terms at the same time
Parameter/kernel selection and practical issues
But $R_{\mathrm{emp}}(\alpha) \to 0$ ⇒ another term large
SVM:
$$\min_{w,b,\xi}\ \frac{1}{2}w^Tw + C\sum_{i=1}^{l}\xi_i$$
subject to $y_i(w^T\phi(x_i) + b) \ge 1 - \xi_i$, $\xi_i \ge 0$, $i = 1, \dots, l$.
$\sum_{i=1}^{l}\xi_i$ related to the training error
$w^Tw/2$ related to the other term: called the regularization term
$C$: balance between the two
Parameter/kernel selection and practical issues
Performance Evaluation (Cont’d)
In practice
Available data ⇒ training and validation
Train on the training set
Test on the validation set
k-fold cross validation:
Data randomly separated into $k$ groups
Each time $k-1$ groups as training and one as testing
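A one-line version of k-fold cross validation, assuming scikit-learn: the train/validate split is repeated k times and the accuracies are averaged.

```python
# Sketch: 5-fold cross validation accuracy for one (C, gamma) setting.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(300, 4)
y = np.where(X[:, 0] - X[:, 1] > 0, 1, -1)

scores = cross_val_score(SVC(C=1.0, kernel="rbf", gamma=0.25), X, y, cv=5)
print("CV accuracy:", scores.mean())
```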
Parameter/kernel selection and practical issues
Using CV on training + validation
Predict testing with the best parameters from CV
Parameter/kernel selection and practical issues
CV and Test Accuracy
If we select parameters so that CV accuracy is the highest, does CV represent future test accuracy?
Slightly different
If we have enough parameters, we can achieve 100% CV accuracy as well
e.g., more parameters than # of training data
Available data with class labels
⇒ training, validation, testing
Parameter/kernel selection and practical issues
Selecting Kernels
RBF, polynomial, or others?
or even combinations
Two situations:
Too many kernels complicates the selection
Design kernels suitable for target applications
Parameter/kernel selection and practical issues
Selecting Kernels (Cont’d)
Contradicting but practically OK
We have few general kernels
RBF, polynomial, etc. somewhat related
Beginners don't have many choices
On the other hand,
researchers design many special ones, e.g., string kernels
Parameter/kernel selection and practical issues
Selecting Kernels (Cont’d)
For beginners, use RBF first
Linear kernel: special case of RBF
Performance of linear is the same as RBF under certain parameters [Keerthi and Lin, 2003]
Polynomial: numerical difficulties
$(<1)^d \to 0$, $(>1)^d \to \infty$
More parameters than RBF
Parameter/kernel selection and practical issues
A Simple Procedure
1 Conduct simple scaling on the data
2 Consider RBF kernel $K(x, y) = e^{-\gamma\|x-y\|^2}$
3 Use cross-validation to find the best parameter C and γ
4 Use the best C and γ to train the whole training set
5 Test
This procedure is for beginners only; you can do a lot more
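The whole procedure can be written in a few lines; a sketch assuming scikit-learn (the talk itself uses the LIBSVM command-line tools instead), with made-up data:

```python
# Sketch of the procedure above: scale, RBF kernel, grid search over (C, gamma), retrain, test.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(400, 4) * np.array([100.0, 1.0, 5.0, 0.1])   # features on very different scales
y = np.where(X[:, 1] + X[:, 3] * 10 > 0, 1, -1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = make_pipeline(MinMaxScaler(feature_range=(-1, 1)), SVC(kernel="rbf"))
grid = {"svc__C": 2.0 ** np.arange(-1, 8), "svc__gamma": 2.0 ** np.arange(-7, 2)}
search = GridSearchCV(pipe, grid, cv=5).fit(X_train, y_train)   # CV picks C and gamma
print(search.best_params_, search.score(X_test, y_test))        # best model retrained on all training data
```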
Parameter/kernel selection and practical issues
Contour of Parameter Selection
[Figure: contour of cross-validation accuracy (levels 97 to 98.8) over lg(C) and lg(gamma)]
Parameter/kernel selection and practical issues
The good region of parameters is quite large
SVM is sensitive to parameters, but not that sensitive
Sometimes default parameters work
but it’s good to select them if time is allowed
Parameter/kernel selection and practical issues
Efficient Parameter Selection
CV on grid points may be time consuming
OK if one or two parameters
But if more than two?
E.g., feature scaling:
$$K(x, y) = e^{-\sum_{i=1}^{n}\gamma_i(x_i - y_i)^2}$$
Some features more important
Still a challenging research issue
Parameter/kernel selection and practical issues
Remember: given parameters $C$ and $\gamma$, we solve SVM to obtain optimal $w$ or $\alpha$
Model a function of the parameters:
$$\min_{C,\gamma_1,\dots,\gamma_n}\ f(\alpha(C, \gamma_1, \dots, \gamma_n), C, \gamma_1, \dots, \gamma_n)$$
But usually non-convex
The function $f$:
from Bayesian frameworks (e.g., [Chu et al., 2003]) or
a smoothed CV bound
$$\mathrm{CV}(C, \gamma_1, \dots, \gamma_n) \le f(\alpha(C, \gamma_1, \dots, \gamma_n), C, \gamma_1, \dots, \gamma_n)$$
Parameter/kernel selection and practical issues
The minimization:
Gradient-type methods or
global optimization (e.g., genetic algorithms)
The difficulty:
Certainly more effort than one single $\gamma$
But performance may be just similar?
Parameter/kernel selection and practical issues
Kernel Combination
How about using
$$t_1K_1 + t_2K_2 + \cdots + t_rK_r, \text{ where } t_1 + \cdots + t_r = 1$$
as the kernel
Related to parameter selection:
$$t_1e^{-\gamma_1\|x-y\|^2} + \cdots + t_re^{-\gamma_r\|x-y\|^2}$$
If $\gamma_1$ good ⇒ $t_1$ close to 1, others close to 0
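A small sketch of my own, assuming scikit-learn's precomputed-kernel interface, of using a fixed convex combination of two RBF gram matrices; learning the weights automatically, as in [Lanckriet et al., 2004], is a separate optimization problem.

```python
# Sketch: fixed convex combination t1*K1 + t2*K2 of two RBF kernels, fed to SVC
# as a precomputed kernel. Choosing t1, t2 automatically is the kernel-learning problem.
import numpy as np
from sklearn.svm import SVC

def rbf(A, B, gamma):
    d = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d)

rng = np.random.RandomState(0)
X_train, X_test = rng.randn(100, 3), rng.randn(20, 3)
y_train = np.where(X_train[:, 0] > 0, 1, -1)

t1, t2 = 0.7, 0.3                                   # t1 + t2 = 1
K_train = t1 * rbf(X_train, X_train, 0.1) + t2 * rbf(X_train, X_train, 10.0)
K_test  = t1 * rbf(X_test,  X_train, 0.1) + t2 * rbf(X_test,  X_train, 10.0)

clf = SVC(C=1.0, kernel="precomputed").fit(K_train, y_train)
pred = clf.predict(K_test)                          # rows: test points, columns: training points
```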
Parameter/kernel selection and practical issues
[Lanckriet et al., 2004] form a convex $f(\alpha(t_1, \dots, t_r), t_1, \dots, t_r)$ when $C$ is fixed
Semi-definite programming problem
But computational cost is also high
Need more empirical studies
Parameter/kernel selection and practical issues
Design Kernels
Still a research issue
e.g., in bioinformatics and vision, many new kernels
But we should be careful whether the function is a valid one:
$$K(x, y) = \phi(x)^T\phi(y)$$
For example, for any two strings $s_1, s_2$ we can define the edit distance and
$$e^{-\gamma\,\mathrm{edit}(s_1, s_2)}$$
It's not a valid kernel [Cortes et al., 2003]
Parameter/kernel selection and practical issues
Mercer condition
What kind of $K_{ij}$ can be represented as $\phi(x_i)^T\phi(x_j)$?
$K(x, y) = \phi(x)^T\phi(y)$ if and only if for all $g$ such that $\int g(x)^2\,dx$ is finite,
$$\int K(x, y)g(x)g(y)\,dx\,dy \ge 0$$
A condition developed early last century
However, still not easy to check
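On a finite sample the condition reduces to the kernel matrix being positive semi-definite, which is easy to test numerically; a quick sketch of my own, assuming numpy:

```python
# Sketch: on a finite sample, a Mercer kernel must give a positive semi-definite matrix.
# Checking eigenvalues is an easy necessary-condition test for a candidate kernel.
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-0.5 * d2)                      # RBF kernel matrix

eigs = np.linalg.eigvalsh(K)               # symmetric matrix: real eigenvalues
print("min eigenvalue:", eigs.min())       # >= 0 up to rounding error for a valid kernel
# Repeating this with a similarity that is not a valid kernel (e.g., one built from
# edit distance as above) typically produces clearly negative eigenvalues.
```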
Multi-class classification
Outline
Basic concepts
SVM primal/dual problems
Training linear and nonlinear SVMs
Parameter/kernel selection and practical issues
Multi-class classification
Discussion and conclusions
Multi-class classification
Multi-class Classification
k classes
One-against-the-rest: train $k$ binary SVMs:
1st class vs. the (2, ..., k)th classes
2nd class vs. the (1, 3, ..., k)th classes
...
$k$ decision functions:
$(w^1)^T\phi(x) + b_1$
...
$(w^k)^T\phi(x) + b_k$
Multi-class classification
Prediction:
$$\arg\max_j\ (w^j)^T\phi(x) + b_j$$
Reason: if $x$ is in the 1st class, then we should have
$(w^1)^T\phi(x) + b_1 \ge +1$
$(w^2)^T\phi(x) + b_2 \le -1$
...
$(w^k)^T\phi(x) + b_k \le -1$
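A tiny sketch of my own of this one-against-the-rest rule, using linear decision functions for simplicity: evaluate the k decision values and take the argmax.

```python
# Sketch: one-against-the-rest prediction = argmax over the k decision values.
import numpy as np

def ovr_predict(x, W, b):
    """W: k x n matrix whose rows are w_j, b: length-k vector of b_j."""
    decision_values = W @ x + b        # (w_j)^T x + b_j for j = 1, ..., k
    return np.argmax(decision_values)  # index of the predicted class

W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])   # k = 3 linear models
b = np.array([0.0, -0.5, 0.2])
print(ovr_predict(np.array([2.0, 0.1]), W, b))          # -> 0 (the first class)
```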
Multi-class classification
Multi-class Classification (Cont’d)
One-against-one: train $k(k-1)/2$ binary SVMs
$(1,2), (1,3), \dots, (1,k), (2,3), (2,4), \dots, (k-1,k)$
If 4 classes ⇒ 6 binary SVMs
yi = 1    yi = −1    Decision functions
class 1   class 2    $f_{12}(x) = (w^{12})^Tx + b_{12}$
class 1   class 3    $f_{13}(x) = (w^{13})^Tx + b_{13}$
class 1   class 4    $f_{14}(x) = (w^{14})^Tx + b_{14}$
class 2   class 3    $f_{23}(x) = (w^{23})^Tx + b_{23}$
class 2   class 4    $f_{24}(x) = (w^{24})^Tx + b_{24}$
class 3   class 4    $f_{34}(x) = (w^{34})^Tx + b_{34}$
Multi-class classification
For a testing instance, predict with all binary SVMs:
Classes    winner
1 2        1
1 3        1
1 4        1
2 3        2
2 4        4
3 4        3
Select the one with the largest vote:
class      1  2  3  4
# votes    3  1  1  1
May use decision values as well
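A sketch of my own of the one-against-one vote counting in the table above:

```python
# Sketch: one-against-one prediction by voting over all k(k-1)/2 pairwise classifiers.
import numpy as np

def ovo_predict(pairwise_winners, k):
    """pairwise_winners: dict mapping a class pair (i, j) to the winning class."""
    votes = np.zeros(k, dtype=int)
    for winner in pairwise_winners.values():
        votes[winner - 1] += 1
    return np.argmax(votes) + 1, votes

# The example from the table above (classes 1..4):
winners = {(1, 2): 1, (1, 3): 1, (1, 4): 1, (2, 3): 2, (2, 4): 4, (3, 4): 3}
label, votes = ovo_predict(winners, k=4)
print(label, votes)   # -> 1, [3 1 1 1]
```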
Multi-class classification
More Complicated Forms
For example,
[Vapnik, 1998, Weston and Watkins, 1999]:
$$\min_{w,b,\xi}\ \frac{1}{2}\sum_{m=1}^{k}w_m^Tw_m + C\sum_{i=1}^{l}\sum_{m\ne y_i}\xi_i^m$$
$$w_{y_i}^T\phi(x_i) + b_{y_i} \ge w_m^T\phi(x_i) + b_m + 2 - \xi_i^m,$$
$$\xi_i^m \ge 0,\ i = 1,\dots,l,\ m \in \{1,\dots,k\}\setminus\{y_i\}.$$
$y_i$: class of $x_i$
$kl$ constraints
Dual: $kl$ variables; very large
Multi-class classification
There are many other methods
A comparison in [Hsu and Lin, 2002]
Accuracy similar for many problems
But 1-against-1 fastest for training
Multi-class classification
Why 1vs1 Faster in Training
1 vs. 1:
$k(k-1)/2$ problems, each $2l/k$ data on average
1 vs. all:
$k$ problems, each $l$ data
If solving the optimization problem:
polynomial of the size with degree $d$
Their complexities:
$$\frac{k(k-1)}{2}\,O\!\left(\left(\frac{2l}{k}\right)^{d}\right) \quad \text{vs.} \quad k\,O(l^d)$$
Discussion and conclusions
Outline
Basic concepts
SVM primal/dual problems
Training linear and nonlinear SVMs
Parameter/kernel selection and practical issues
Multi-class classification
Discussion and conclusions
Discussion and conclusions
Future Directions
I mentioned quite a few. Here are others.
Better ways to handle unbalanced data
i.e., some classes have few data, some classes a lot
Multi-label classification
An instance associated with ≥ 2 labels, e.g., a document in both politics and sports
Structural data sets
An instance may not be a vector, e.g., a tree from a sentence
Discussion and conclusions
Conclusions
Dealing with data is interesting, especially if you get good accuracy
Some basic understandings are essential when applying classification methods
SVM is a rather mature topic
but still quite a few interesting research issues
References I
Bakır, G. H., Bottou, L., and Weston, J. (2005).
Breaking SVM complexity with cross-training.
In Saul, L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17, pages 81–88. MIT Press, Cambridge, MA.
Boser, B., Guyon, I., and Vapnik, V. (1992).
A training algorithm for optimal margin classifiers.
In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144–152. ACM Press.
Chu, W., Keerthi, S., and Ong, C. (2003).
Bayesian trigonometric support vector classifier.
Neural Computation, 15(9):2227–2254.
Cortes, C., Haffner, P., and Mohri, M. (2003).
Positive definite rational kernels.
In Proceedings of the 16th Annual Conference on Learning Theory, pages 41–56.
Cortes, C. and Vapnik, V. (1995).
Support-vector networks.
Machine Learning, 20:273–297.
References II
Cristianini, N. and Shawe-Taylor, J. (2000).
An Introduction to Support Vector Machines.
Cambridge University Press, Cambridge, UK.
Hsu, C.-W. and Lin, C.-J. (2002).
A comparison of methods for multi-class support vector machines.
IEEE Transactions on Neural Networks, 13(2):415–425.
Joachims, T. (1998).
Making large-scale SVM learning practical.
In Schölkopf, B., Burges, C. J. C., and Smola, A. J., editors, Advances in Kernel Methods - Support Vector Learning, Cambridge, MA. MIT Press.
Kao, W.-C., Chung, K.-M., Sun, C.-L., and Lin, C.-J. (2004).
Decomposition methods for linear support vector machines.
Neural Computation, 16(8):1689–1704.
Keerthi, S. S., Chapelle, O., and DeCoste, D. (2006).
Building support vector machines with reduced classifier complexity.
Journal of Machine Learning Research, 7:1493–1515.
References III
Keerthi, S. S. and Lin, C.-J. (2003).
Asymptotic behaviors of support vector machines with Gaussian kernel.
Neural Computation, 15(7):1667–1689.
Lanckriet, G., Cristianini, N., Bartlett, P., El Ghaoui, L., and Jordan, M. (2004).
Learning the Kernel Matrix with Semidefinite Programming.
Journal of Machine Learning Research, 5:27–72.
Lee, Y.-J. and Mangasarian, O. L. (2001).
RSVM: Reduced support vector machines.
In Proceedings of the First SIAM International Conference on Data Mining.
Lin, C.-J. (2001).
Formulations of support vector machines: a note from an optimization point of view.
Neural Computation, 13(2):307–317.
Lin, C.-J. (2002).
A formal analysis of stopping criteria of decomposition methods for support vector machines.
IEEE Transactions on Neural Networks, 13(5):1045–1052.
References IV
Osuna, E., Freund, R., and Girosi, F. (1997).
Training support vector machines: An application to face detection.
In Proceedings of CVPR’97, pages 130–136, New York, NY. IEEE.
Platt, J. C. (1998).
Fast training of support vector machines using sequential minimal optimization.
In Schölkopf, B., Burges, C. J. C., and Smola, A. J., editors, Advances in Kernel Methods - Support Vector Learning, Cambridge, MA. MIT Press.
Schölkopf, B. and Smola, A. J. (2002).
Learning with kernels.
MIT Press.
Syed, N. A., Liu, H., and Sung, K. K. (1999).
Incremental learning with support vector machines.
In Workshop on Support Vector Machines, IJCAI99.
Vapnik, V. (1998).
Statistical Learning Theory.
Wiley, New York, NY.
References V
Weston, J. and Watkins, C. (1999).
Multi-class support vector machines.
In Verleysen, M., editor, Proceedings of ESANN99, Brussels. D. Facto Press.
Yu, H., Yang, J., and Han, J. (2003).
Classifying large data sets using SVMs with hierarchical clusters.
In KDD ’03: Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 306–315, New York, NY, USA. ACM Press.