Recent Advances in Large-scale Linear Classification

(1)

Recent Advances in Large-scale Linear Classification

Chih-Jen Lin

Department of Computer Science National Taiwan University

Talk at Asian Conference on Machine Learning, November, 2013


(2)

This talk is based on our survey paper (Yuan et al., 2012):

Recent Advances of Large-scale Linear Classification. Proceedings of the IEEE, 2012

It's also related to our development of the software LIBLINEAR

www.csie.ntu.edu.tw/~cjlin/liblinear

Due to time constraints, we will give overviews instead of deep technical details.


(3)

Introduction

Optimization Methods

Extension of Linear Classification

Discussion and Conclusions


(4)

Outline

Introduction

Optimization Methods

Extension of Linear Classification

Discussion and Conclusions


(5)

Linear and Nonlinear Classification

(Figures: a linearly separable data set and a nonlinearly separable one)

By linear we mean a linear function is used to separate data in the original input space

Original: [height, weight]

Nonlinear: [height, weight, weight/height²]

Kernels are one of the methods for nonlinear classification
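To make this concrete, here is a small numpy sketch (my own illustration; the numbers are made up) of exactly this kind of feature engineering: appending weight/height² to the raw [height, weight] features so that a linear model can use it.

```python
import numpy as np

# Toy data: each row is [height (m), weight (kg)]; values are made up.
X = np.array([[1.70, 65.0],
              [1.60, 80.0],
              [1.85, 72.0]])

# Feature engineering: append weight / height^2 (the nonlinear feature
# from the slide) so that a *linear* model can exploit it.
bmi = X[:, 1] / X[:, 0] ** 2
X_engineered = np.hstack([X, bmi[:, None]])

print(X_engineered.shape)  # (3, 3): [height, weight, weight/height^2]
```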


(6)

Linear and Nonlinear Classification (Cont’d)

Methods such as SVM and logistic regression can be used in two ways

Kernel methods: data mapped to another space, x ⇒ φ(x);
φ(x)^T φ(y) is easily calculated, but there is no good control on φ(·)

Linear classification + feature engineering:
we have x without any mapping (alternatively, we can say that φ(x) is our x); full control on x or φ(x)

We will focus on linear classification here


(7)

Why Linear Classification?

• If φ(x) is high dimensional, the decision function
    sgn(w^T φ(x))
  is expensive to compute. So kernel methods use
    w ≡ Σ_{i=1}^{l} α_i φ(x_i)  for some α,   K(x_i, x_j) ≡ φ(x_i)^T φ(x_j)
  Then the new decision function is
    sgn( Σ_{i=1}^{l} α_i K(x_i, x) )

• A special φ(x) is chosen so that calculating K(x_i, x_j) is easy. Example:
    K(x_i, x_j) ≡ (x_i^T x_j + 1)^2 = φ(x_i)^T φ(x_j),   φ(x) ∈ R^{O(n²)}
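For intuition, the identity above can be checked numerically. The sketch below (my own illustration, not from the talk) uses the standard √2-scaled degree-2 mapping, which has O(n²) components and satisfies φ(x_i)^T φ(x_j) = (x_i^T x_j + 1)².

```python
import numpy as np
from itertools import combinations

def phi(x):
    """Explicit degree-2 mapping with phi(x) . phi(y) == (x . y + 1)**2."""
    n = len(x)
    return np.concatenate([
        [1.0],
        np.sqrt(2.0) * x,                        # sqrt(2) * x_i
        x ** 2,                                  # x_i^2
        [np.sqrt(2.0) * x[i] * x[j]              # sqrt(2) * x_i x_j, i < j
         for i, j in combinations(range(n), 2)],
    ])

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
print(np.isclose(phi(x) @ phi(y), (x @ y + 1.0) ** 2))  # True
```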


(8)

Why Linear Classification? (Cont’d)

However, the kernel approach is still expensive. Prediction:
    w^T x   versus   Σ_{i=1}^{l} α_i K(x_i, x)
If evaluating K(x_i, x_j) takes O(n), then the cost is
    O(n)   versus   O(nl)

Nonlinear: more powerful to separate data
Linear: cheaper and simpler


(9)

Linear is Useful in Some Places

For certain problems, the accuracy of linear is as good as nonlinear, but training and testing are much faster

This is especially true for document classification:
the number of features (bag-of-words model) is very large; data are large and sparse

Training millions of instances takes only a few seconds


(10)

Comparison Between Linear and Nonlinear (Training Time & Testing Accuracy)

                    Linear                RBF kernel
Data set         Time    Accuracy      Time       Accuracy
MNIST38           0.1      96.82         38.1       99.70
ijcnn1            1.6      91.81         26.8       98.69
covtype           1.4      76.37     46,695.8       96.11
news20            1.1      96.95        383.2       96.90
real-sim          0.3      97.44        938.3       97.82
yahoo-japan       3.1      92.63     20,955.2       93.31
webspam          25.7      93.35     15,681.8       99.26

Sizes are reasonably large: e.g., yahoo-japan has 140k instances and 830k features


(13)

Binary Linear Classification

Training data {(y_i, x_i)}, x_i ∈ R^n, y_i = ±1, i = 1, . . . , l
l: # of data, n: # of features

    min_w  f(w),    f(w) ≡ (1/2) w^T w + C Σ_{i=1}^{l} ξ(w; x_i, y_i)

w^T w / 2: regularization term (we have no time to talk about L1 regularization here)
ξ(w; x, y): loss function; we hope y w^T x > 0
C: regularization parameter


(14)

Loss Functions

Some commonly used ones:

    ξ_L1(w; x, y) ≡ max(0, 1 − y w^T x),        (1)
    ξ_L2(w; x, y) ≡ max(0, 1 − y w^T x)²,       (2)
    ξ_LR(w; x, y) ≡ log(1 + e^{−y w^T x}).      (3)

SVM (Boser et al., 1992; Cortes and Vapnik, 1995): losses (1)-(2)

Logistic regression (LR): loss (3); no reference because it can be traced back to the 19th century
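As a sanity check of these definitions, here is a small numpy sketch (my own illustration) of the three losses together with the regularized objective f(w) from the previous slide; all data are made up.

```python
import numpy as np

def l1_loss(w, X, y):          # hinge loss, Eq. (1)
    return np.maximum(0.0, 1.0 - y * (X @ w))

def l2_loss(w, X, y):          # squared hinge loss, Eq. (2)
    return np.maximum(0.0, 1.0 - y * (X @ w)) ** 2

def lr_loss(w, X, y):          # logistic loss, Eq. (3)
    return np.log1p(np.exp(-y * (X @ w)))

def primal_objective(w, X, y, C, loss=l2_loss):
    """f(w) = w'w/2 + C * sum_i xi(w; x_i, y_i)."""
    return 0.5 * w @ w + C * loss(w, X, y).sum()

# Tiny made-up example
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
y = np.sign(rng.standard_normal(6))
w = np.zeros(3)
print(primal_objective(w, X, y, C=1.0))  # equals C * l at w = 0 for the L2 loss
```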


(15)

Loss Functions (Cont’d)

(Figure: ξ_L1, ξ_L2, and ξ_LR plotted as functions of −y w^T x)

Their performance is usually similar


(16)

Loss Functions (Cont’d)

However, optimization methods for them may be different

ξ_L1: not differentiable

ξ_L2: differentiable but not twice differentiable

ξ_LR: twice differentiable


(17)

Outline

Introduction

Optimization Methods

Extension of Linear Classification

Discussion and Conclusions


(18)

Optimization: 2nd Order Methods

Newton direction:
    min_s  ∇f(w^k)^T s + (1/2) s^T ∇²f(w^k) s
This is the same as solving the Newton linear system
    ∇²f(w^k) s = −∇f(w^k)
The Hessian matrix ∇²f(w^k) is too large to be stored:
    ∇²f(w^k): n × n,  n: number of features
But the Hessian has a special form:
    ∇²f(w) = I + C X^T D X,


(19)

Optimization: 2nd Order Methods (Cont’d)

where X is the data matrix and D is diagonal. For logistic regression,
    D_ii = e^{−y_i w^T x_i} / (1 + e^{−y_i w^T x_i})²

We use CG (conjugate gradient) to solve the linear system. Only Hessian-vector products are needed:
    ∇²f(w) s = s + C · X^T (D (X s))
Therefore, we have a Hessian-free approach
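A minimal numpy sketch of this Hessian-free Hessian-vector product for logistic regression follows (my own illustration; names are made up and this is not the LIBLINEAR code). Only products with X and X^T are needed, so ∇²f(w) is never formed, and the function can be handed to any CG solver.

```python
import numpy as np

def lr_hessian_vector_product(s, w, X, y, C):
    """Compute (I + C * X' D X) s for L2-regularized logistic regression,
    with D_ii = exp(-y_i w'x_i) / (1 + exp(-y_i w'x_i))**2."""
    z = y * (X @ w)
    sigma = 1.0 / (1.0 + np.exp(-z))      # sigma_i = 1 / (1 + e^{-y_i w'x_i})
    D = sigma * (1.0 - sigma)             # = e^{-z} / (1 + e^{-z})^2
    return s + C * (X.T @ (D * (X @ s)))

# Tiny made-up example: pass this product to a CG solver for the Newton direction.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))
y = np.sign(rng.standard_normal(8))
w = np.zeros(4)
print(lr_hessian_vector_product(np.ones(4), w, X, y, C=1.0).shape)  # (4,)
```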


(20)

2nd-order Methods (Cont’d)

In LIBLINEAR, we use the trust-region + CG approach by Steihaug (1983); see details in Lin et al. (2008)

What if we use the L2 loss? It's differentiable but not twice differentiable:
    ξ_L2(w; x, y) ≡ max(0, 1 − y w^T x)²
We can use a generalized Hessian (Mangasarian, 2002). Details are not discussed here


(21)

Optimization: 1st Order Methods

We consider the L1 loss and the dual SVM problem:
    min_α  f(α)
    subject to  0 ≤ α_i ≤ C, ∀i,
where
    f(α) ≡ (1/2) α^T Q α − e^T α
and
    Q_ij = y_i y_j x_i^T x_j,   e = [1, . . . , 1]^T

We will apply coordinate descent methods
The situation for the L2 or LR loss is very similar
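For intuition (my own illustration), the dual objective is easy to write down directly; note that forming Q explicitly needs O(l²) memory, which is why the coordinate descent method on the next slides maintains the vector u instead.

```python
import numpy as np

def dual_objective(alpha, X, y):
    """f(alpha) = 0.5 * alpha' Q alpha - e' alpha, with Q_ij = y_i y_j x_i' x_j.
    Forming Q is O(l^2) memory, so this is only for small toy problems."""
    Yx = y[:, None] * X
    Q = Yx @ Yx.T
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))
y = np.sign(rng.standard_normal(10))
print(dual_objective(np.zeros(10), X, y))  # 0.0 at alpha = 0
```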


(22)

1st Order Methods (Cont’d)

Coordinate descent: a simple and classic technique; change one variable at a time

Given the current α, let e_i = [0, . . . , 0, 1, 0, . . . , 0]^T and solve
    min_d  f(α + d e_i) = (1/2) Q_ii d² + ∇_i f(α) d + constant
Without constraints, the optimal step is
    d = −∇_i f(α) / Q_ii
With the constraint 0 ≤ α_i + d ≤ C, the update becomes
    α_i ← min( max( α_i − ∇_i f(α) / Q_ii , 0 ), C )


(23)

1st Order Methods (Cont’d)

The gradient component is
    ∇_i f(α) = (Qα)_i − 1 = Σ_{j=1}^{l} Q_ij α_j − 1 = Σ_{j=1}^{l} y_i y_j x_i^T x_j α_j − 1
which costs O(ln); l: # of data, n: # of features. But we can define
    u ≡ Σ_{j=1}^{l} y_j α_j x_j
Then the gradient calculation is easy and costs only O(n):
    ∇_i f(α) = (y_i x_i)^T Σ_{j=1}^{l} y_j x_j α_j − 1 = y_i u^T x_i − 1


(24)

1st Order Methods (Cont’d)

All we need is to maintain
    u = Σ_{j=1}^{l} y_j α_j x_j
If ᾱ_i is the old value and α_i the new one, then
    u ← u + (α_i − ᾱ_i) y_i x_i
which also costs O(n)

References: first used for SVM probably by Mangasarian and Musicant (1999) and Friess et al. (1998), but popularized for linear SVM by Hsieh et al. (2008)


(25)

1st Order Methods (Cont’d)

Summary of the dual coordinate descent method:

Given initial α and compute u = Σ_i y_i α_i x_i
While α is not optimal (outer iteration)
    For i = 1, . . . , l (inner iteration)
        (a) ᾱ_i ← α_i
        (b) G = y_i u^T x_i − 1
        (c) If α_i can be changed:
                α_i ← min(max(α_i − G/Q_ii, 0), C)
                u ← u + (α_i − ᾱ_i) y_i x_i
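Below is a minimal numpy sketch of this procedure for the L1-loss dual (my own simplified illustration: a fixed number of outer iterations, no shrinking, and no random permutation, unlike the LIBLINEAR implementation of Hsieh et al. (2008)).

```python
import numpy as np

def dual_coordinate_descent(X, y, C, outer_iters=50):
    """Simplified dual CD for the L1-loss SVM; maintains u = sum_j y_j alpha_j x_j."""
    l, n = X.shape
    alpha = np.zeros(l)
    u = np.zeros(n)
    Qii = np.einsum("ij,ij->i", X, X)         # Q_ii = x_i' x_i (since y_i^2 = 1)
    for _ in range(outer_iters):              # outer iteration
        for i in range(l):                    # inner iteration over variables
            if Qii[i] == 0.0:
                continue
            alpha_old = alpha[i]
            G = y[i] * (u @ X[i]) - 1.0       # gradient: y_i u'x_i - 1
            alpha[i] = min(max(alpha[i] - G / Qii[i], 0.0), C)
            u += (alpha[i] - alpha_old) * y[i] * X[i]
    return u, alpha                           # u plays the role of w

# Tiny made-up example: predict with sign(u' x)
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
y = np.sign(X @ rng.standard_normal(5))
w, alpha = dual_coordinate_descent(X, y, C=1.0)
print(np.mean(np.sign(X @ w) == y))           # training accuracy, typically close to 1
```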


(26)

Comparisons

The L2-loss SVM is used

DCDL2: dual coordinate descent
DCDL2-S: DCDL2 with shrinking
PCD: primal coordinate descent
TRON: trust region Newton method

These results are from Hsieh et al. (2008)


(27)

Objective values (Time in Seconds)

(Figures: objective value versus training time on news20, rcv1, yahoo-japan, and yahoo-korea)


(28)

Analysis

First-order methods can quickly get a usable model, but second-order methods are more robust and faster in ill-conditioned situations

Both types of optimization methods are useful for linear classification


(29)

An Example When # of Features Is Small

# of instances: 32,561, # of features: 123

(Figures: objective value and accuracy versus training time)

If the number of features is small, solving the primal problem is more suitable


(30)

Outline

Introduction

Optimization Methods

Extension of Linear Classification

Discussion and Conclusions


(31)

Extension of Linear Classification

Linear classification can be extended in different ways

An important one is to approximate nonlinear classifiers

Goal: the better accuracy of nonlinear methods, but with faster training/testing

Examples

1. Explicit data mappings + linear classification
2. Kernel approximation + linear classification

I will focus on the first


(32)

Linear Methods to Explicitly Train φ(x_i)

Example: low-degree polynomial mapping:
    φ(x) = [1, x_1, . . . , x_n, x_1², . . . , x_n², x_1x_2, . . . , x_{n−1}x_n]^T
For this mapping, # of features = O(n²)

When is it useful?
Recall the comparison was O(n) for linear versus O(nl) for kernel; now it is O(n²) versus O(nl)

Sparse data:
    n ⇒ n̄, the average # of non-zeros per instance
    n̄ ≪ n ⇒ O(n̄²) may be much smaller than O(l n̄)
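One way to experiment with this idea, assuming scikit-learn is available (the talk itself uses LIBLINEAR), is to expand the features explicitly and then train an ordinary linear SVM on the expanded data:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import LinearSVC

# Made-up dense data; for large sparse data one would expand only the
# nonzero entries of each instance, as discussed on the slide (O(n_bar^2) per instance).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = np.sign(X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(200))  # a nonlinear concept

# Degree-2 mapping: [1, x_1, ..., x_n, x_1^2, ..., x_i x_j, ...]
X_poly = PolynomialFeatures(degree=2, include_bias=True).fit_transform(X)

clf = LinearSVC(C=1.0).fit(X_poly, y)          # linear SVM trained on phi(x)
print(X_poly.shape[1], clf.score(X_poly, y))   # O(n^2) features, training accuracy
```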


(33)

Example: Dependency Parsing

A multi-class problem with sparse data

      n     Dim. of φ(x)          l      n̄    w's # nonzeros
 46,155    1,065,165,090    204,582   13.3         1,438,456

n̄: average # of nonzeros per instance
A degree-2 polynomial mapping is used
The dimensionality of w is too large, but w is sparse
Some interesting hashing techniques are used to handle the sparse w


(34)

Example: Dependency Parsing (Cont’d)

                     LIBSVM                   LIBLINEAR
                  RBF        Poly          Linear      Poly
Training time   3h34m53s   3h21m51s        3m36s      3m43s
Parsing speed     0.7x        1x           1652x       103x
UAS              89.92      91.67          89.11      91.71
LAS              88.55      90.60          88.07      90.71

We get faster training/testing, but maintain good accuracy

See detailed discussion in Chang et al. (2010)


(35)

Example: Classifier in a Small Device

In a sensor application (Yu et al., 2013), the classifier must use less than 16KB of RAM

Classifiers            Test accuracy    Model size
Decision Tree              77.77          76.02KB
AdaBoost (10 trees)        78.84       1,500.54KB
SVM (RBF kernel)           85.33       1,287.15KB

Number of features: 5
We consider a degree-3 polynomial mapping; its dimensionality is
    (5 + 3 choose 3) + bias term = 57


(36)

Example: Classifier in a Small Device (Cont’d)

A one-against-one strategy is used for this 5-class problem, so the model size is
    (5 choose 2) × 57 × 4 bytes = 2.28KB
assuming single precision

Results:

SVM method           Test accuracy    Model size
RBF kernel               85.33        1,287.15KB
Polynomial kernel        84.79            2.28KB
Linear kernel            78.51            0.24KB
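The arithmetic behind these numbers can be checked in a few lines (my own illustration): 57 is the dimensionality of the degree-3 mapping of 5 features plus a bias term, and (5 choose 2) = 10 is the number of one-against-one binary classifiers.

```python
from math import comb

n_features, degree, n_classes = 5, 3, 5

dim = comb(n_features + degree, degree) + 1    # (5+3 choose 3) + bias term = 57
n_classifiers = comb(n_classes, 2)             # one-against-one: (5 choose 2) = 10
model_bytes = n_classifiers * dim * 4          # 4 bytes per single-precision weight

print(dim, n_classifiers, model_bytes)         # 57 10 2280  (2280 bytes ~= 2.28KB)
```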


(37)

Example: Classifier in a Small Device (Cont’d)

Running time (in seconds):

                  LIBSVM    LIBLINEAR primal    LIBLINEAR dual
Training time   30,519.10          1,368.25           4,039.20

LIBSVM: polynomial kernel
LIBLINEAR: training on the polynomial expansions; primal: 2nd-order method, dual: 1st-order method
LIBLINEAR dual converges slowly here because now #data ≫ #features = 57


(38)

Discussion

Unfortunately, polynomial mappings easily cause high dimensionality. Some have therefore proposed "projection" techniques that use fewer features as approximations

Examples: Kar and Karnick (2012); Pham and Pagh (2013)

Recently, ensembles of tree models (e.g., random forests or GBDT) have become very useful. But under model-size constraints (as in the 2nd application), linear may still be the way to go


(39)

Outline

Introduction

Optimization Methods

Extension of Linear Classification

Discussion and Conclusions


(40)

Big-data Linear Classification

Shared-memory and distributed scenarios are very different
Here I discuss more about distributed classification

The major saving is parallel data loading
But the high communication cost is a big concern


(41)

Big-data Linear Classification (Cont’d)

Data classification is often only one component of the whole workflow

Example: distributed feature generation may be more time consuming than the classification itself

This explains why so far not many effective packages are available for big-data classification
Many research and engineering issues remain to be solved


(42)

Conclusions

Linear classification is an old topic, but recently there have been new and interesting applications

Kernel methods are still useful for many applications, but linear classification + feature engineering is suitable for some others

Advantage of linear: because we work directly on x, feature engineering is easier

We expect that linear classification can be widely used in situations ranging from small-model to big-data classification


(43)

References I

B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pages 144–152. ACM Press, 1992.

Y.-W. Chang, C.-J. Hsieh, K.-W. Chang, M. Ringgaard, and C.-J. Lin. Training and testing low-degree polynomial data mappings via linear SVM. Journal of Machine Learning Research, 11:1471–1490, 2010. URL

http://www.csie.ntu.edu.tw/~cjlin/papers/lowpoly_journal.pdf.

C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.

T.-T. Friess, N. Cristianini, and C. Campbell. The kernel adatron algorithm: a fast and simple learning procedure for support vector machines. In Proceedings of the 15th International Conference on Machine Learning. Morgan Kaufmann Publishers, 1998.

C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In Proceedings of the Twenty Fifth International Conference on Machine Learning (ICML), 2008. URL

http://www.csie.ntu.edu.tw/~cjlin/papers/cddual.pdf.

P. Kar and H. Karnick. Random feature maps for dot product kernels. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 583–591, 2012.


(44)

References II

C.-J. Lin, R. C. Weng, and S. S. Keerthi. Trust region Newton method for large-scale logistic regression. Journal of Machine Learning Research, 9:627–650, 2008. URL

http://www.csie.ntu.edu.tw/~cjlin/papers/logistic.pdf.

O. L. Mangasarian. A finite Newton method for classification. Optimization Methods and Software, 17(5):913–929, 2002.

O. L. Mangasarian and D. R. Musicant. Successive overrelaxation for support vector machines. IEEE Transactions on Neural Networks, 10(5):1032–1037, 1999.

N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 239–247, 2013.

T. Steihaug. The conjugate gradient method and trust regions in large scale optimization. SIAM Journal on Numerical Analysis, 20:626–637, 1983.

T. Yu, D. Wang, M.-C. Yu, C.-J. Lin, and E. Y. Chang. Careful use of machine learning methods is needed for mobile applications: A case study on transportation-mode detection. Technical report, Studio Engineering, HTC, 2013. URL

http://www.csie.ntu.edu.tw/~cjlin/papers/transportation-mode/casestudy.pdf.

G.-X. Yuan, C.-H. Ho, and C.-J. Lin. Recent advances of large-scale linear classification. Proceedings of the IEEE, 100(9):2584–2603, 2012. URL

http://www.csie.ntu.edu.tw/~cjlin/papers/survey-linear.pdf.
