(1)

Optimization and Machine Learning

Chih-Jen Lin

Department of Computer Science National Taiwan University

Talk at 25th Simon Stevin Lecture, K. U. Leuven Optimization in Engineering Center, January 17, 2013

(2)

Outline

1 Introduction

2 Optimization methods for kernel support vector machines

3 Optimization methods for linear support vector machines

4 Discussion and conclusions

(3)

Outline

1 Introduction

2 Optimization methods for kernel support vector machines

3 Optimization methods for linear support vector machines

4 Discussion and conclusions

(4)

What is Machine Learning

Extract knowledge from data

Representative tasks: classification, clustering, and others

[Figures: illustrations of classification and clustering]

An old area, but many new and interesting applications/extensions: ranking, etc.

(5)

Data Classification

Given training data in different classes (labels known)

Predict test data (labels unknown)

Classic example:

1. Find a patient's blood pressure, weight, etc.
2. After several years, know if he/she recovers
3. Build a machine learning model
4. New patient: find blood pressure, weight, etc.
5. Prediction

Two main stages: training and testing

(6)

Data Classification (Cont’d)

Representative methods

Nearest neighbor, naive Bayes
Decision tree, random forest

Neural networks, support vector machines

(7)

Why Is Optimization Used?

Usually the goal of classification is to minimize the test error

Therefore, many classification methods solve optimization problems

(8)

Optimization and Machine Learning

Standard optimization packages may be directly applied to machine learning applications

However, efficiency and scalability are issues
Very often, machine learning knowledge must be considered in designing suitable optimization methods

We will discuss some examples in this talk

(9)

Outline

1 Introduction

2 Optimization methods for kernel support vector machines

3 Optimization methods for linear support vector machines

4 Discussion and conclusions

(10)

Kernel Methods

Kernel methods are a class of classification techniques in which the major operations are conducted by kernel evaluations

A representative example is the support vector machine

(11)

Support Vector Classification

Training data $(x_i, y_i)$, $i = 1, \ldots, l$, $x_i \in \mathbb{R}^n$, $y_i = \pm 1$

Maximizing the margin (Boser et al., 1992; Cortes and Vapnik, 1995)

$$\min_{w,b} \quad \frac{1}{2} w^T w + C \sum_{i=1}^{l} \max\left(1 - y_i (w^T \phi(x_i) + b),\ 0\right)$$

High dimensional (maybe infinite) feature space

$$\phi(x) = (\phi_1(x), \phi_2(x), \ldots)$$

$w$: maybe infinitely many variables
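To make the primal form concrete, here is a minimal sketch (not from the talk) that evaluates this objective in the linear case $\phi(x) = x$; the names X, y, w, b, C are illustrative.

```python
# A minimal sketch of the primal objective above, using the identity map
# phi(x) = x (the linear case); a general phi may be infinite dimensional
# and cannot be written down explicitly.
import numpy as np

def primal_objective(w, b, X, y, C):
    """0.5 * w'w + C * sum_i max(1 - y_i (w'x_i + b), 0)."""
    hinge = np.maximum(1.0 - y * (X @ w + b), 0.0)
    return 0.5 * w @ w + C * hinge.sum()
```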

(12)

Support Vector Classification (Cont’d)

The dual problem (finite number of variables):

$$\min_{\alpha} \quad \frac{1}{2} \alpha^T Q \alpha - e^T \alpha$$
$$\text{subject to} \quad 0 \le \alpha_i \le C,\ i = 1, \ldots, l, \quad y^T \alpha = 0,$$

where $Q_{ij} = y_i y_j \phi(x_i)^T \phi(x_j)$ and $e = [1, \ldots, 1]^T$

At optimum,
$$w = \sum_{i=1}^{l} \alpha_i y_i \phi(x_i)$$

Kernel: $K(x_i, x_j) \equiv \phi(x_i)^T \phi(x_j)$; closed form
Example: Gaussian (RBF) kernel: $e^{-\gamma \|x_i - x_j\|^2}$
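As a small illustration of the closed-form kernel, the following sketch (function name and arguments are illustrative) computes the full Gaussian (RBF) kernel matrix with NumPy. Storing the whole l-by-l matrix this way is exactly what becomes infeasible for large l, as the next slides show.

```python
# Illustrative sketch: the Gaussian (RBF) kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
# evaluated for all pairs of rows of X. Forming the full l-by-l matrix is only viable for small l.
import numpy as np

def rbf_kernel_matrix(X, gamma):
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))  # clip tiny negatives from round-off
```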

(13)

Support Vector Classification (Cont’d)

Only the xi with αi > 0 are used ⇒ support vectors


(14)

Large Dense Quadratic Programming

$$\min_{\alpha} \quad \frac{1}{2} \alpha^T Q \alpha - e^T \alpha$$
$$\text{subject to} \quad 0 \le \alpha_i \le C,\ i = 1, \ldots, l, \quad y^T \alpha = 0$$

$Q_{ij} \ne 0$; $Q$ is an $l \times l$ fully dense matrix

50,000 training points ⇒ 50,000 variables:
$(50{,}000^2 \times 8 / 2)$ bytes = 10 GB of RAM to store $Q$
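A quick check of this memory estimate (storing only one triangle of the symmetric Q as 8-byte doubles):

```python
# Memory needed to store the symmetric l-by-l matrix Q (upper triangle only,
# 8-byte doubles), matching the estimate on the slide.
l = 50_000
print(l * l * 8 / 2 / 1e9, "GB")  # -> 10.0 GB
```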

(15)

Large Dense Quadratic Programming (Cont’d)

For quadratic programming problems, traditionally we would use Newton or quasi-Newton methods

However, they cannot be directly applied here because Q cannot even be stored

Currently, decomposition methods (a type of coordinate descent method) are what is used in practice

(16)

Decomposition Methods

Working on some variables each time (e.g., Osuna et al., 1997; Joachims, 1998; Platt, 1998)

Similar to coordinate-wise minimization
Working set $B$; $N = \{1, \ldots, l\} \setminus B$ is fixed
Sub-problem at the $k$th iteration:

$$\min_{\alpha_B} \quad \frac{1}{2}
\begin{bmatrix} \alpha_B^T & (\alpha_N^k)^T \end{bmatrix}
\begin{bmatrix} Q_{BB} & Q_{BN} \\ Q_{NB} & Q_{NN} \end{bmatrix}
\begin{bmatrix} \alpha_B \\ \alpha_N^k \end{bmatrix}
-
\begin{bmatrix} e_B^T & e_N^T \end{bmatrix}
\begin{bmatrix} \alpha_B \\ \alpha_N^k \end{bmatrix}$$

$$\text{subject to} \quad 0 \le \alpha_t \le C,\ t \in B, \qquad y_B^T \alpha_B = -y_N^T \alpha_N^k$$

(17)

Avoid Memory Problems

The new objective function over $\alpha_B$:

$$\frac{1}{2} \alpha_B^T Q_{BB} \alpha_B + (-e_B + Q_{BN} \alpha_N^k)^T \alpha_B + \text{constant}$$

Only the $|B|$ columns of $Q$ indexed by $B$ are needed
$|B| \ge 2$ due to the equality constraint; in general $|B| \le 10$ is used
Columns are calculated when used: trade time for space
But is such an approach practical?
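To make the |B| = 2 case concrete, below is a rough sketch of one analytic two-variable update in the spirit of SMO (Platt, 1998). It is illustrative only: it takes a fully stored kernel matrix and recomputes gradient-like quantities from scratch, whereas a real solver such as LIBSVM computes only the needed columns of Q, maintains the gradient incrementally, and selects the working set more carefully.

```python
# Rough sketch of one two-variable (|B| = 2) update in the spirit of SMO (Platt, 1998).
# Illustrative only: K is a fully stored kernel matrix and the E values are recomputed
# from scratch; a real solver computes only the needed columns of Q and keeps the
# gradient up to date incrementally.
import numpy as np

def smo_update(alpha, y, K, C, i, j):
    """Jointly optimize alpha[i], alpha[j], keeping y^T alpha fixed and 0 <= alpha <= C."""
    if i == j:
        return False
    # E_k = sum_m alpha_m y_m K(x_m, x_k) - y_k; any bias term cancels in E[i] - E[j]
    E = K @ (alpha * y) - y
    s = y[i] * y[j]
    # Feasible interval [L, H] for alpha[j], implied by the equality constraint and the box
    if s < 0:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    if L >= H:
        return False
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]  # curvature along the feasible direction
    if eta <= 0:
        return False                          # skip the non-positive-curvature case in this sketch
    aj = float(np.clip(alpha[j] + y[j] * (E[i] - E[j]) / eta, L, H))
    ai = alpha[i] + s * (alpha[j] - aj)       # keeps y^T alpha unchanged
    alpha[i], alpha[j] = ai, aj
    return True
```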

(18)

How Do Decomposition Methods Perform?

Convergence is not very fast; this is expected because only first-order information is used

But there is no need to have a very accurate $\alpha$, since the decision function is
$$\sum_{i=1}^{l} \alpha_i K(x_i, x) + b$$
Prediction may still be correct with a rough $\alpha$

Further, in some situations,
# support vectors ≪ # training points
With the initial $\alpha^1 = 0$, some instances are never used

(19)

How Do Decomposition Methods Perform?

(Cont’d)

An example of training 50,000 instances using the software LIBSVM

$svm-train -c 16 -g 4 -m 400 22features
Total nSV = 3370

Time 79.524s

This was done on a typical desktop

Calculating the whole Q takes more time

#SVs = 3,370 ≪ 50,000

A good case where some αi remain at zero all the time

(20)

How Do Decomposition Methods Perform?

(Cont’d)

Because many αi = 0 in the end, we can develop a shrinking technique

Variables are removed during the optimization procedure, so smaller problems are solved
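The following is a rough sketch of the shrinking idea under a simplifying assumption: it checks only the box constraints and ignores the equality constraint y^T α = 0, which LIBSVM's actual shrinking rule does take into account (and shrunk variables are re-examined before termination). Names are illustrative.

```python
# Illustrative sketch of shrinking for the dual min 0.5*a'Qa - e'a with 0 <= a_i <= C.
# Simplification: the equality constraint y'a = 0 is ignored here; LIBSVM's rule
# accounts for it and re-checks shrunk variables before declaring convergence.
import numpy as np

def remaining_variables(alpha, grad, C, eps=1e-3):
    """Indices still worth optimizing; the rest are likely to stay at their bound."""
    stuck_at_zero = (alpha <= 0.0) & (grad >= eps)  # gradient keeps pushing alpha_i below 0
    stuck_at_C = (alpha >= C) & (grad <= -eps)      # gradient keeps pushing alpha_i above C
    return np.where(~(stuck_at_zero | stuck_at_C))[0]
```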

(21)

Machine Learning Properties are Useful in Designing Optimization Algorithms

We have seen that special properties of SVM contributed to the viability of decomposition methods

For machine learning applications, there is no need to accurately solve the optimization problem
Because some optimal αi = 0, decomposition methods may not need to update all the variables
Also, we can use shrinking techniques to reduce the problem size during the decomposition procedure

(22)

Differences between Optimization and Machine Learning

The two topics may have different focuses. We give the following example

The decomposition method we just discussed converges more slowly when C is large

Using C = 1 on a data set:
# iterations: 508

Using C = 5,000:
# iterations: 35,241

(23)

Optimization researchers may rush to solve difficult cases of large C

That’s what I did before

It turns out that a large C is used less often than a small C

Recall that SVM solves
$$\frac{1}{2} w^T w + C \cdot (\text{sum of training losses})$$
A large C tends to overfit the training data
This does not give good testing accuracy

(24)

Outline

1 Introduction

2 Optimization methods for kernel support vector machines

3 Optimization methods for linear support vector machines

4 Discussion and conclusions

(25)

Linear and Kernel Classification

We have

Kernel ⇒ map data to a higher dimensional space
Linear ⇒ use the original data

Intuitively, kernel should give higher accuracy than linear

There are even some theoretical results

We optimization people may think there is no need to specially consider linear SVM

However, this is wrong if we consider their practical use

(26)

Linear and Kernel Classification (Cont’d)

Methods such as SVM and logistic regression can be used in two ways

Kernel methods: data mapped to a higher dimensional space

x ⇒ φ(x)

φ(xi)Tφ(xj) is easily calculated; little control on φ(·)

Linear classification + feature engineering:

We have x without mapping. Alternatively, we can say that φ(x) is our x; full control on x or φ(x)

(27)

Linear and Kernel Classification (Cont’d)

For some problems, accuracy by linear is as good as nonlinear

But training and testing are much faster

This particularly happens for document classification
Number of features (bag-of-words model) very large
Data very sparse (i.e., few non-zeros)

Recently, linear classification has become a popular research topic.

(28)

Comparison Between Linear and Kernel (Training Time & Testing Accuracy)

Data set        #data     #features    Linear time   Linear acc. (%)   RBF time     RBF acc. (%)
MNIST38          11,982         752            0.1            96.82         38.1           99.70
ijcnn1           49,990          22            1.6            91.81         26.8           98.69
covtype         464,810          54            1.4            76.37     46,695.8           96.11
news20           15,997   1,355,191            1.1            96.95        383.2           96.90
real-sim         57,848      20,958            0.3            97.44        938.3           97.82
yahoo-japan     140,963     832,026            3.1            92.63     20,955.2           93.31
webspam         280,000         254           25.7            93.35     15,681.8           99.26

Therefore, there is a need to develop optimization methods for large linear classification

(31)

Why Linear is Faster in Training and Testing?

Let's check the prediction cost:

$$w^T x + b \quad \text{versus} \quad \sum_{i=1}^{l} \alpha_i K(x_i, x) + b$$

If $K(x_i, x_j)$ costs $O(n)$, then the two costs are $O(n)$ versus $O(nl)$

Linear is much cheaper. The reason: for linear, $x_i$ is available, so $w = \sum_{i=1}^{l} \alpha_i y_i x_i$ can be formed and stored explicitly; for kernel, $\phi(x_i)$ is not, so the sum over support vectors must be evaluated at prediction time
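A small sketch (names illustrative, bias term omitted) contrasting the two prediction formulas:

```python
# Sketch of the O(n) vs O(n*l) prediction costs discussed above (bias omitted).
import numpy as np

def predict_linear(w, x):
    # O(n): one inner product with the explicitly stored weight vector
    return np.sign(w @ x)

def predict_kernel(alpha_y, sv, x, gamma):
    # O(n*l): one RBF kernel evaluation per support vector
    # alpha_y[i] = alpha_i * y_i, sv holds the support vectors as rows
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))
    return np.sign(alpha_y @ k)
```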

(32)

Optimization for Linear Classification

Now a popular topic in both machine learning and optimization

Most are based on first-order information:

coordinate descent, stochastic gradient descent, or cutting plane

The reason is again that there is no need to accurately solve the optimization problem

Let’s see another development for linear classification

(33)

Optimization for Linear Classification (Cont’d)

Martens (2010) and Byrd et al. (2011) propose the so-called "Hessian-free" approach

Let's rewrite linear SVM in the following form:

$$\min_{w} \quad \frac{1}{2} w^T w + \frac{C}{l} \sum_{i=1}^{l} \max\left(1 - y_i w^T x_i,\ 0\right)$$

What if we use only a subset $B$ of the data in the second term?

$$\frac{C}{|B|} \sum_{i \in B} \max\left(1 - y_i w^T x_i,\ 0\right)$$

(34)

Optimization for Linear Classification (Cont’d)

Then both the gradient and Hessian-vector products become cheaper

That is, if there are enough data, the average training loss over the subset should be similar to that over the full set

This is a good example of taking machine learning properties into account when designing optimization algorithms
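As a sketch of why the subset helps, the code below computes a subsampled gradient and Hessian-vector product for a linear SVM. One assumption to note: it uses the squared hinge (L2) loss so that a generalized Hessian exists, since the L1 hinge loss on the slide is not twice differentiable; the C/|B| scaling follows the form above, and all names are illustrative.

```python
# Sketch of subsampled gradient and Hessian-vector products for a linear SVM with
# the squared hinge (L2) loss (an assumption; the slide's L1 hinge has no Hessian):
#   f(w) = 0.5 * w'w + (C/|B|) * sum_{i in B} max(1 - y_i w'x_i, 0)^2
# Only the |B| rows of X indexed by the subset B are ever touched.
import numpy as np

def subsampled_grad(w, X, y, C, B):
    XB, yB = X[B], y[B]
    margin = 1.0 - yB * (XB @ w)
    active = margin > 0                      # instances with non-zero loss
    g = w.copy()
    g -= (2.0 * C / len(B)) * XB[active].T @ (yB[active] * margin[active])
    return g

def subsampled_hessian_vec(w, v, X, y, C, B):
    XB, yB = X[B], y[B]
    active = (1.0 - yB * (XB @ w)) > 0
    XA = XB[active]
    # H v = v + (2C/|B|) * X_A^T (X_A v): two matrix-vector products, no Hessian stored
    return v + (2.0 * C / len(B)) * XA.T @ (XA @ v)
```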

(35)

Optimization for Linear Classification (Cont’d)

Lessons

We must know the practical use of machine learning in order to design suitable optimization algorithms

Here is how I started developing optimization algorithms for linear SVM:

In 2006, I visited Yahoo! for six months. I learned that

1. Document classification is heavily used

2. Accuracy of linear and nonlinear is similar for documents

(36)

Outline

1 Introduction

2 Optimization methods for kernel support vector machines

3 Optimization methods for linear support vector machines

4 Discussion and conclusions

(37)

Machine Learning Software

Algorithms discussed in this talk are related to my machine learning software

LIBSVM (Chang and Lin, 2011):

One of the most popular SVM packages; cited more than 11,000 times on Google Scholar

LIBLINEAR (Fan et al., 2008):

A library for large linear classification; popular in Internet companies

The core of an SVM package is an optimization solver

(38)

Machine Learning Software (Cont’d)

But designing machine learning software is quite different from designing optimization packages

You need to consider prediction, validation, and others

Also, issues related to users (e.g., ease of use, interface) are very important for machine learning packages

(39)

Conclusions

Optimization has been very useful for machine learning

We need to take machine learning knowledge into account when designing suitable optimization algorithms

The interaction between optimization and machine learning is very interesting and exciting.
