Large-scale Linear Classification: Status and Challenges
Chih-Jen Lin
Department of Computer Science, National Taiwan University
San Francisco Machine Learning Meetup, October 30, 2014
Outline
1 Introduction
2 Optimization methods
3 Sample applications
4 Big-data linear classification
5 Conclusions
Introduction
Linear Classification
The model is a weight vector w (for binary classification)
The decision function is
    sgn(w^T x)
Although many new and advanced techniques are available (e.g., deep learning), linear classifiers remain useful because of their simplicity
We will give an overview of this topic in this talk
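As a minimal illustration (my own sketch, not from the slides), the following Python snippet applies the linear decision function sgn(w^T x) to a few instances; the weight vector w here is a made-up placeholder rather than a trained model.

```python
import numpy as np

# Hypothetical "trained" weight vector for 3 features
w = np.array([0.8, -1.2, 0.5])

# Two instances, one per row
X = np.array([[1.0, 0.3, 2.0],
              [0.1, 2.5, 0.4]])

# Decision values w^T x and predicted labels sgn(w^T x)
decision_values = X @ w
labels = np.where(decision_values > 0, 1, -1)
print(labels)   # [ 1 -1]
```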
Linear and Kernel Classification
[Figure: two toy data sets, labeled "Linear" and "Nonlinear"]
Linear: data in the original input space; nonlinear: data mapped to other spaces
Original: [height, weight]
Nonlinear: [height, weight, weight/height^2]
Kernel is one of the nonlinear methods
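A hedged sketch of the mapping example above: the feature weight/height^2 is appended explicitly, and a linear classifier on the mapped vector is then nonlinear in the original inputs (the numbers are made up).

```python
import numpy as np

def map_features(height_m, weight_kg):
    """Nonlinear mapping [height, weight] -> [height, weight, weight/height^2]."""
    return np.array([height_m, weight_kg, weight_kg / height_m**2])

x = map_features(1.75, 80.0)
print(x)  # [ 1.75  80.    26.12...]  -- a linear model on this vector is nonlinear in the original space
```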
Linear and Nonlinear Classification
Methods such as SVM and logistic regression can be used in two ways
• Kernel methods: data mapped to another space x ⇒ φ(x)
  φ(x)^T φ(y) easily calculated; no good control on φ(·)
• Linear classification + feature engineering:
Directly use x without mapping. But x may have been carefully generated. Full control on x
We will focus on the second type of approach in this talk
Why Linear Classification?
• If φ(x) is high dimensional, the decision function
    sgn(w^T φ(x))
  is expensive
• Kernel methods:
    w ≡ Σ_{i=1}^{l} α_i φ(x_i) for some α,  K(x_i, x_j) ≡ φ(x_i)^T φ(x_j)
  New decision function: sgn( Σ_{i=1}^{l} α_i K(x_i, x) )
• Special φ(x) so calculating K(x_i, x_j) is easy. Example:
    K(x_i, x_j) ≡ (x_i^T x_j + 1)^2 = φ(x_i)^T φ(x_j),  φ(x) ∈ R^{O(n^2)}
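A small sketch (my own illustration, not from the talk) contrasting the two prediction forms for the degree-2 polynomial kernel K(x_i, x_j) = (x_i^T x_j + 1)^2: the kernel form sums over all l training points, while the linear form is a single inner product. The α and w below are random placeholders, not trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
l, n = 1000, 50
X_train = rng.standard_normal((l, n))          # l training instances
alpha   = rng.standard_normal(l)               # placeholder dual coefficients
x       = rng.standard_normal(n)               # one test instance

# Kernel-style prediction: sum_i alpha_i * K(x_i, x), cost O(n*l)
kernel_decision = np.sum(alpha * (X_train @ x + 1.0) ** 2)

# Linear-style prediction with some weight vector w, cost O(n)
w = rng.standard_normal(n)
linear_decision = w @ x

print(np.sign(kernel_decision), np.sign(linear_decision))
```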
Why Linear Classification? (Cont’d)
Prediction:
    w^T x   versus   Σ_{i=1}^{l} α_i K(x_i, x)
If K(x_i, x_j) takes O(n), then
    O(n)   versus   O(nl)
Kernel: cost related to the size of the training data
Linear: cheaper and simpler
Linear is Useful in Some Places
For certain problems, the accuracy of a linear classifier is as good as that of a nonlinear one,
but training and testing are much faster
This is especially true for document classification:
the number of features (bag-of-words model) is very large; data are large and sparse
Millions of instances can be trained in just a few seconds
Comparison Between Linear and Nonlinear (Training Time & Testing Accuracy)
              Linear               RBF kernel
Data set      Time      Accuracy   Time        Accuracy
MNIST38       0.1       96.82      38.1        99.70
ijcnn1        1.6       91.81      26.8        98.69
covtype       1.4       76.37      46,695.8    96.11
news20        1.1       96.95      383.2       96.90
real-sim      0.3       97.44      938.3       97.82
yahoo-japan   3.1       92.63      20,955.2    93.31
webspam       25.7      93.35      15,681.8    99.26

Time in seconds, accuracy in %
Size reasonably large: e.g., yahoo-japan: 140k instances and 830k features
Binary Linear Classification
Training data {(y_i, x_i)}, x_i ∈ R^n, i = 1, . . . , l, y_i = ±1
l: # of data, n: # of features

    min_w f(w),   f(w) ≡ w^T w / 2 + C Σ_{i=1}^{l} ξ(w; x_i, y_i)

w^T w / 2: regularization term (we have no time to talk about L1 regularization here)
ξ(w; x, y): loss function; we hope y w^T x > 0
C: regularization parameter
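As a hedged illustration of the objective above (my own sketch, not code from the talk), f(w) can be evaluated as follows once a loss function of the margin y_i w^T x_i is chosen; the concrete loss functions are defined on the next slide.

```python
import numpy as np

def primal_objective(w, X, y, C, loss):
    """f(w) = w^T w / 2 + C * sum_i loss(y_i * w^T x_i)."""
    margins = y * (X @ w)                 # y_i w^T x_i for every instance
    return 0.5 * (w @ w) + C * np.sum(loss(margins))
```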
Loss Functions
Some commonly used ones:
    ξ_L1(w; x, y) ≡ max(0, 1 − y w^T x),        (1)
    ξ_L2(w; x, y) ≡ max(0, 1 − y w^T x)^2,      (2)
    ξ_LR(w; x, y) ≡ log(1 + e^{−y w^T x}).      (3)
SVM (Boser et al., 1992; Cortes and Vapnik, 1995): (1)-(2)
Logistic regression (LR): (3)
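A small sketch of the three losses as functions of the margin m = y w^T x; these plug directly into the primal_objective sketch above (again, my own illustration rather than code from the slides).

```python
import numpy as np

def l1_loss(m):                      # xi_L1 = max(0, 1 - y w^T x)
    return np.maximum(0.0, 1.0 - m)

def l2_loss(m):                      # xi_L2 = max(0, 1 - y w^T x)^2
    return np.maximum(0.0, 1.0 - m) ** 2

def lr_loss(m):                      # xi_LR = log(1 + exp(-y w^T x))
    return np.log1p(np.exp(-m))

m = np.array([-1.0, 0.0, 1.0, 2.0])  # sample margin values
print(l1_loss(m))                    # [2. 1. 0. 0.]
print(l2_loss(m))                    # [4. 1. 0. 0.]
print(lr_loss(m))                    # [1.313... 0.693... 0.313... 0.126...]
```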
Loss Functions (Cont’d)
[Figure: ξ_L1, ξ_L2, and ξ_LR plotted as functions of −y w^T x]
Their performance is usually similar
The choice of optimization method may differ because the losses differ in differentiability
Optimization Methods
Many unconstrained optimization methods can be applied
For kernel, optimization is over a variable α, where
    w = Σ_{i=1}^{l} α_i φ(x_i)
We cannot minimize over w because it may be infinite dimensional
However, for linear, minimizing over w or α is ok
Optimization Methods (Cont’d)
Among unconstrained optimization methods,
• Low-order methods: quickly get a model, but slow final convergence
• High-order methods: more robust and useful for ill-conditioned situations
We will quickly discuss some examples and show both types of optimization methods are useful for linear classification
Optimization: 2nd Order Methods
Newton direction (if f is twice differentiable):
    min_s ∇f(w_k)^T s + (1/2) s^T ∇²f(w_k) s
This is the same as solving the Newton linear system
    ∇²f(w_k) s = −∇f(w_k)
The Hessian matrix ∇²f(w_k) is too large to be stored:
    ∇²f(w_k): n × n,  n: number of features
But the Hessian has a special form
    ∇²f(w) = I + C X^T D X,
Optimization: 2nd Order Methods (Cont’d)
where X is the data matrix and D is diagonal
Using conjugate gradient (CG) to solve the linear system, only Hessian-vector products are needed:
    ∇²f(w) s = s + C · X^T (D (X s))
Therefore, we have a Hessian-free approach
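A hedged sketch of this Hessian-free idea: the product s + C·X^T(D(X s)) is formed without ever building the n × n Hessian, and a textbook CG loop uses only that product. This is an illustration, not the exact TRON/LIBLINEAR implementation, and the data, D, and gradient below are placeholders.

```python
import numpy as np
from scipy import sparse

def hessian_vector_product(s, X, D_diag, C):
    """Return (I + C X^T D X) s without forming the Hessian explicitly."""
    return s + C * (X.T @ (D_diag * (X @ s)))

def conjugate_gradient(matvec, b, tol=1e-6, max_iter=100):
    """Solve A x = b given only the matrix-vector product A x (textbook CG)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy usage: solve the Newton system (Hessian) s = -gradient for made-up data
X = sparse.random(100, 20, density=0.1, format="csr", random_state=0)
D = np.ones(100)                      # placeholder diagonal entries
grad = np.ones(20)                    # placeholder gradient
s = conjugate_gradient(lambda v: hessian_vector_product(v, X, D, C=1.0), -grad)
```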
Optimization: 1st Order Methods
We consider the L1 loss and the dual SVM problem
    min_α f(α)
    subject to 0 ≤ α_i ≤ C, ∀i,
where
    f(α) ≡ (1/2) α^T Q α − e^T α  and  Q_ij = y_i y_j x_i^T x_j,  e = [1, . . . , 1]^T
We will apply coordinate descent (CD) methods
The situation for the L2 or LR loss is very similar
1st Order Methods (Cont’d)
Coordinate descent: a simple and classic technique; change one variable at a time
Given the current α, let e_i = [0, . . . , 0, 1, 0, . . . , 0]^T. Then
    min_d f(α + d e_i) = (1/2) Q_ii d^2 + ∇_i f(α) d + constant
Without constraints, the optimal step is
    d = −∇_i f(α) / Q_ii
With the constraint 0 ≤ α_i + d ≤ C,
    α_i ← min( max( α_i − ∇_i f(α)/Q_ii, 0 ), C )
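A hedged, simplified Python sketch of the update above for the L1-loss SVM, in the spirit of Hsieh et al. (2008) but without shrinking, random permutation, or the other tricks used in LIBLINEAR. It maintains w = Σ_i α_i y_i x_i so that ∇_i f(α) = y_i w^T x_i − 1 is cheap.

```python
import numpy as np

def dual_cd_l1_svm(X, y, C=1.0, epochs=10):
    """Dual coordinate descent for L1-loss SVM (simplified sketch)."""
    l, n = X.shape
    alpha = np.zeros(l)
    w = np.zeros(n)
    Qii = np.einsum("ij,ij->i", X, X)          # Q_ii = x_i^T x_i
    for _ in range(epochs):
        for i in range(l):
            if Qii[i] == 0.0:
                continue
            G = y[i] * (w @ X[i]) - 1.0        # grad_i f(alpha) = y_i w^T x_i - 1
            new_alpha = min(max(alpha[i] - G / Qii[i], 0.0), C)
            w += (new_alpha - alpha[i]) * y[i] * X[i]
            alpha[i] = new_alpha
    return w, alpha

# Toy usage with made-up, nearly separable data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)
w, alpha = dual_cd_l1_svm(X, y, C=1.0)
print(np.mean(np.sign(X @ w) == y))            # training accuracy of the sketch
```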
Comparisons
L2-loss SVM is used
DCDL2: dual coordinate descent
DCDL2-S: DCDL2 with shrinking
PCD: primal coordinate descent
TRON: trust region Newton method
This result is from Hsieh et al. (2008)
Objective Values (Time in Seconds)
[Figures: objective values versus training time on news20, rcv1, yahoo-japan, and yahoo-korea]
Low- versus High-order Methods
• We saw that low-order methods are efficient for quickly obtaining a model. However, high-order methods may be useful for difficult situations
• An example: # instances: 32,561, # features: 123
[Figures: objective value and accuracy]
# features is small ⇒ solving primal is more suitable
Sample applications
Two sample applications are discussed:
• Dependency parsing using feature combination
• Transportation-mode detection in a sensor hub
Dependency Parsing: an NLP Application
                 Kernel                      Linear
                 RBF          Poly-2         Linear    Poly-2
Training time    3h34m53s     3h21m51s       3m36s     3m43s
Parsing speed    0.7x         1x             1652x     103x
UAS              89.92        91.67          89.11     91.71
LAS              88.55        90.60          88.07     90.71

We get faster training/testing while maintaining good accuracy
But how is this achieved?
Linear Methods to Explicitly Train φ(x_i)
Example: low-degree polynomial mapping:
    φ(x) = [1, x_1, . . . , x_n, x_1^2, . . . , x_n^2, x_1 x_2, . . . , x_{n−1} x_n]^T
For this mapping, # features = O(n^2)
Recall O(n) for linear versus O(nl) for kernel
Now O(n^2) versus O(nl)
Sparse data:
    n ⇒ n̄, the average # of non-zeros per instance for sparse data
    n̄ ≪ n ⇒ O(n̄^2) may be much smaller than O(l n̄)
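A hedged sketch of explicit low-degree expansion: for each sparse instance, degree-2 combinations are generated from its nonzero entries only, so the per-instance cost is O(n̄^2) rather than O(n^2). The index encoding here is a simple illustration (it also omits the constant term), not the scheme used in Chang et al. (2010).

```python
def degree2_expand(x_nonzeros, n):
    """x_nonzeros: list of (index, value) pairs of one sparse instance.

    Returns the nonzero entries of phi(x) = [x_j, x_j * x_k (j <= k)] as a dict
    mapping an integer feature id to a value; O(nbar^2) work per instance.
    """
    phi = {}
    for j, vj in x_nonzeros:
        phi[j] = vj                                # original (degree-1) features
    for a, (j, vj) in enumerate(x_nonzeros):
        for k, vk in x_nonzeros[a:]:
            phi[n + j * n + k] = vj * vk           # pair feature id for x_j * x_k
    return phi

# Toy usage: an instance with 3 nonzeros out of n = 1000 features
print(degree2_expand([(2, 0.5), (10, 1.0), (999, -2.0)], n=1000))
```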
Handling the High Dimensionality of φ(x)
A multi-class problem with sparse data:

    n         Dim. of φ(x)      l          n̄       w's # nonzeros
    46,155    1,065,165,090     204,582    13.3     1,438,456

n̄: average # of nonzeros per instance; a degree-2 polynomial mapping is used
The dimensionality of w is very high, but w is sparse
Some training feature columns of x_i x_j are entirely zero
Hashing techniques are used to handle the sparse w
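A hedged sketch of the hashing idea mentioned above: pair features are hashed into a fixed number of buckets so the model stays bounded in size. This is a generic hashing-trick illustration, not the exact technique used in the cited work.

```python
def hashed_degree2(x_nonzeros, n_buckets=2**20):
    """Hash degree-2 combinations of a sparse instance into n_buckets slots."""
    phi = {}
    for a, (j, vj) in enumerate(x_nonzeros):
        for k, vk in x_nonzeros[a:]:
            b = hash((j, k)) % n_buckets          # bucket for the pair (j, k)
            phi[b] = phi.get(b, 0.0) + vj * vk    # collisions simply add up
    return phi

print(hashed_degree2([(2, 0.5), (10, 1.0), (999, -2.0)]))
```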
Discussion
See more details in Chang et al. (2010)
If φ(x) is too high dimensional, projection or hashing techniques have been proposed to use fewer features as approximations
Examples: Kar and Karnick (2012); Pham and Pagh (2013)
This has been used in computational advertising (Chapelle et al., 2014)
Transportation-mode detection in a sensor hub
Example: Classifier in a Small Device
In a sensor application (Yu et al., 2013), the classifier can use less than 16KB of RAM

    Classifiers            Test accuracy    Model size
    Decision Tree          77.77            76.02KB
    AdaBoost (10 trees)    78.84            1,500.54KB
    SVM (RBF kernel)       85.33            1,287.15KB

Number of features: 5
We consider a degree-3 polynomial mapping, so the
    dimensionality = C(5+3, 3) + bias term = 57
Example: Classifier in a Small Device (Cont'd)
One-against-one strategy for 5-class classification:
    C(5, 2) × 57 × 4 bytes = 2.28KB
(assuming single precision)

Results:
    SVM method          Test accuracy    Model size
    RBF kernel          85.33            1,287.15KB
    Polynomial kernel   84.79            2.28KB
    Linear kernel       78.51            0.24KB
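A small arithmetic check of the model-size figure above (my own sketch): 5-class one-against-one training yields C(5, 2) = 10 binary models, each a 57-dimensional single-precision weight vector.

```python
from math import comb

n_classes, dim, bytes_per_float = 5, 57, 4   # 57 = C(5+3, 3) + bias term, single precision
n_models = comb(n_classes, 2)                # one-against-one: 10 binary classifiers
size_kb = n_models * dim * bytes_per_float / 1000
print(n_models, size_kb)                     # 10 2.28  (using 1 KB = 1000 bytes, as on the slide)
```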
Big-data Linear Classification
Nowadays data can be easily larger than memory capacity
Disk-level linear classification: Yu et al. (2012) and subsequent developments
Distributed linear classification: recently an active research topic
Example: we can parallelize the 2nd-order method discussed earlier. Recall the Hessian-vector product
    ∇²f(w) s = s + C · X^T (D (X s))
Parallel Hessian-vector Product
Hessian-vector products are the computational bottleneck
    X^T D X s
The data matrix X is now stored in a distributed way: node 1 holds X_1, node 2 holds X_2, . . ., node p holds X_p
    X^T D X s = X_1^T D_1 X_1 s + · · · + X_p^T D_p X_p s
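A hedged mpi4py sketch of this distributed Hessian-vector product: each node holds a row block X_p of the data, computes its partial product X_p^T D_p (X_p s), and an allreduce sums the partial results. This illustrates the idea only and is not the actual MPI LIBLINEAR implementation.

```python
import numpy as np
from mpi4py import MPI

def distributed_hessian_vector_product(s, X_local, D_local, C, comm=MPI.COMM_WORLD):
    """Compute (I + C X^T D X) s when rows of X are split across MPI processes."""
    partial = X_local.T @ (D_local * (X_local @ s))   # X_p^T D_p X_p s on this node
    total = np.empty_like(partial)
    comm.Allreduce(partial, total, op=MPI.SUM)        # sum the p partial products
    return s + C * total
```

Run under mpirun, with each process loading only its own row block of X and the corresponding entries of D.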
Instance-wise and Feature-wise Data Splits
In an instance-wise split, each node stores a block of rows (X_iw,1, X_iw,2, X_iw,3); in a feature-wise split, each node stores a block of columns (X_fw,1, X_fw,2, X_fw,3)
We won't have time to get into the details, but their communication costs are different
Data moved per Hessian-vector product: instance-wise O(n), feature-wise O(l)
Discussion: Distributed Training or Not?
One can always subsample data to one machine for deep analysis
Deciding to do distributed classification or not is an issue
In some areas distributed training has been successfully applied
One example is CTR (click-through rate) prediction in computational advertising
Discussion: Platform Issues
For the above-mentioned Newton methods, we have MPI and Spark implementations
We are preparing the integration into Spark MLlib
Other existing distributed linear classifiers include Vowpal Wabbit from Yahoo!/Microsoft and Sibyl from Google
Platforms such as Spark are still changing rapidly, which is a bit annoying
A careful implementation may sometimes be thousands of times faster than a casual one
Discussion: Design of Distributed Algorithms
On one computer, often we do batch rather than online learning
Online and streaming learning may be more useful for big-data applications
The example (Newton method) we showed is a synchronous parallel algorithm
Maybe asynchronous ones are better for big data?
Conclusions
Resources on Linear Classification
• Since 2007, we have been actively developing the software LIBLINEAR for linear classification: www.csie.ntu.edu.tw/~cjlin/liblinear (a minimal usage sketch appears below)
• A distributed extension (MPI and Spark) is now available
• An earlier survey on linear classification is Yuan et al. (2012), "Recent Advances of Large-scale Linear Classification," Proceedings of the IEEE, 2012
It contains many references on this subject
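For completeness, a minimal usage sketch through scikit-learn, whose LinearSVC (and LogisticRegression with solver='liblinear') wraps LIBLINEAR; the data here is a toy placeholder, not one of the sets discussed in the talk.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy placeholder data: 200 instances, 50 features, labels +1/-1
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = np.where(X[:, 0] > 0, 1, -1)

clf = LinearSVC(C=1.0)        # L2-regularized linear SVM, solved by LIBLINEAR
clf.fit(X, y)
print(clf.score(X, y))        # training accuracy
```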
Conclusions
Linear classification is an old topic, but recently there have been new and interesting applications
Kernel methods are still useful for many applications, but linear classification + feature engineering is suitable for some others
Linear classification will continue to be used in situations ranging from small-model to big-data applications
Acknowledgments
Many students have contributed to our research on large-scale linear classification
We also thank the National Science Council of Taiwan for its partial support