
(1)

Infinite Ensemble Learning with Support Vector Machines

Hsuan-Tien Lin

in collaboration with Ling Li
Learning Systems Group, Caltech

Second Symposium on Vision and Learning, 2005/09/21

(2)

Outline

1 Setup of our Learning Problem

2 Motivation of Infinite Ensemble Learning

3 Connecting SVM and Ensemble Learning

4 SVM-Based Framework of Infinite Ensemble Learning

5 Examples of the Framework

6 Experimental Comparison

7 Conclusion and Discussion

(3)

Setup of our Learning Problem

Setup of our Learning Problem

binary classification problem:

does this image represent an apple?

features of the image: a vector x ∈ X ⊆ R^D.

e.g., (x)_1 can describe the shape, (x)_2 can describe the color, etc.

difference from the features in vision: a vector of properties, not a “set of interest points.”

label (whether the image is an apple): y ∈ {+1, −1}.

learning problem: given many images and their labels (training examples {(x_i, y_i)}_{i=1}^N), find a classifier g(x): X → {+1, −1} that predicts unseen images well.

hypotheses (classifiers): functions from X → {+1, −1}.

(4)

Motivation of Infinite Ensemble Learning

Motivation of Infinite Ensemble Learning

g(x) : X → {+1, −1}

ensemble learning: popular paradigm.

ensemble: weighted vote of a committee of hypotheses.

g(x) = sign(Σ_t w_t h_t(x)), w_t ≥ 0.

traditional ensemble learning: infinite-size committee, but only a finite number of nonzero weights.

is finiteness a restriction and/or regularization?

how to handle an infinite number of nonzero weights?

SVM (large-margin hyperplane): also popular.

hyperplane: a weighted combination of features.

SVM: infinite dimensional hyperplane through kernels.

g(x) = sign(Σ_d w_d φ_d(x) + b).

can we use SVM for infinite ensemble learning?

(5)

Connecting SVM and Ensemble Learning

Illustration of SVM

[Figure: the SVM architecture. Training examples {(x_i, y_i)}_{i=1}^N are mapped to features φ_1(x), φ_2(x), · · · (the φ_d are implicitly computed), which are combined with weights w_1, w_2, · · · obtained via duality from the dual variables (λ_i)_{i=1}^N.]

g(x) = sign(Σ_d w_d φ_d(x) + b); SVM computes this implicitly with K(x, x') = Σ_d φ_d(x) φ_d(x').

optimal solution (w, b): represented by the dual variables λ_i.

(6)

Connecting SVM and Ensemble Learning

Property of SVM

g(x) = sign(Σ_d w_d φ_d(x) + b) = sign(Σ_{i=1}^N λ_i y_i K(x_i, x) + b)

optimal hyperplane: represented through duality.

key for handling infinity: kernel trick K(x, x') = Σ_d φ_d(x) φ_d(x').

quadratic programming of a margin-related criterion.

goal: (infinite dimensional) large-margin hyperplane.

min_{w,b}  (1/2)||w||_2^2 + C Σ_{i=1}^N ξ_i,  s.t.  y_i (Σ_d w_d φ_d(x_i) + b) ≥ 1 − ξ_i,  ξ_i ≥ 0.

regularization: controlled with the trade-off parameter C.
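For completeness, the textbook dual of the soft-margin problem above (a standard form, not spelled out on the slide): only the dual variables λ_i and the kernel values K(x_i, x_j) appear, which is why the infinite-dimensional primal stays tractable.

\[
\max_{\lambda}\;\sum_{i=1}^{N}\lambda_i-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\lambda_i\lambda_j\,y_i y_j\,K(x_i,x_j)
\quad\text{s.t.}\quad \sum_{i=1}^{N}\lambda_i y_i=0,\;\;0\le\lambda_i\le C.
\]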

(7)

Connecting SVM and Ensemble Learning

Illustration of AdaBoost

[Figure: the AdaBoost architecture. From training examples {(x_i, y_i)}_{i=1}^N, hypotheses h_1(x), h_2(x), · · ·, h_T(x) with h_t ∈ H are iteratively selected, and weights w_1, w_2, · · ·, w_T ≥ 0 are iteratively assigned, guided by the example weights u_1(i), u_2(i), · · ·.]

g(x) = sign(Σ_{t=1}^T w_t h_t(x))   (AdaBoost)

most successful ensemble learning algorithm.

boosts up the performance of each individual h_t.

emphasizes difficult examples by u_t and finds (h_t, w_t) iteratively.

(8)

Connecting SVM and Ensemble Learning

Property of AdaBoost

g(x) = sign(Σ_{t=1}^T w_t h_t(x))

iterative coordinate descent of a margin-related criterion.

min  Σ_{i=1}^N exp(−ρ_i),  s.t.  ρ_i = y_i (Σ_t w_t h_t(x_i)),  w_t ≥ 0.

goal: asymptotically, large-margin ensemble.

min_{w,h}  ||w||_1,  s.t.  y_i (Σ_t w_t h_t(x_i)) ≥ 1,  w_t ≥ 0.

optimal ensemble: approximated by a finite one.

key for good approximation: sparsity

– some optimal ensemble has many zero weights.

regularization: finite approximation.

(9)

Connecting SVM and Ensemble Learning

Connection between SVM and AdaBoost

φ_d(x) ⇔ h_t(x)

                      SVM                                    AdaBoost
ensemble              G(x) = Σ_k w_k φ_k(x) + b              G(x) = Σ_k w_k h_k(x), w_k ≥ 0
hard-goal             min ||w||_2, s.t. y_i G(x_i) ≥ 1       min ||w||_1, s.t. y_i G(x_i) ≥ 1
optimization          quadratic programming                  iterative coordinate descent
key for infinity      kernel trick                           sparsity
regularization        soft-margin trade-off                  finite approximation

(10)

SVM-Based Framework of Infinite Ensemble Learning

Challenge

designing an infinite ensemble learning algorithm:

traditional ensemble learning: iterative and cannot be directly generalized.

another approach: embed an infinite number of hypotheses in the SVM kernel, i.e., K(x, x') = Σ_t h_t(x) h_t(x').

then, SVM classifier: g(x) = sign(Σ_t w_t h_t(x) + b).

does the kernel exist?

how to ensure w_t ≥ 0?

our main contribution: a framework that conquers the challenge.

(11)

SVM-Based Framework of Infinite Ensemble Learning

Embedding Hypotheses into the Kernel

Definition

The kernel that embodies H = {h_α: α ∈ C} is defined as

K_{H,r}(x, x') = ∫_C φ_x(α) φ_{x'}(α) dα,

where C is a measure space, φ_x(α) = r(α) h_α(x), and r: C → R+ is chosen such that the integral always exists.

integral instead of sum: works even for uncountable H.

K_{H,r}(x, x'): an inner product for φ_x and φ_{x'} in F = L_2(C).

the classifier: g(x) = sign(∫_C w(α) r(α) h_α(x) dα + b).

(12)

SVM-Based Framework of Infinite Ensemble Learning

Negation Completeness and Constant Hypotheses

g(x) = sign(∫_C w(α) r(α) h_α(x) dα + b)



not an ensemble classifier yet.

w(α) ≥ 0?

hard to handle: possibly uncountable constraints.

simple with the negation completeness assumption on H.

negation completeness: h ∈ H if and only if (−h) ∈ H.

for any w, there exists a nonnegative w̃ that produces the same g.

What is b?

equivalently, the weight on a constant hypothesis.

another assumption: H contains a constant hypothesis.

both assumptions: mild in practice.

g(x) is equivalent to an ensemble classifier.
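A one-line sketch of why negation completeness suffices: a negative weight on h_α acts exactly like a positive weight on its negation, which is also in H,

\[
w(\alpha)\,h_\alpha(x)=\bigl(-w(\alpha)\bigr)\bigl(-h_\alpha(x)\bigr),\qquad -h_\alpha\in\mathcal{H},
\]

so the negative part of w can be shifted onto the negated hypotheses, yielding a nonnegative w̃ with the same g.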

(13)

SVM-Based Framework of Infinite Ensemble Learning

Framework of Infinite Ensemble Learning

Algorithm

1 Consider a hypothesis set H (negation complete and contains a constant hypothesis).

2 Construct a kernel K_{H,r} with a proper r(·).

3 Properly choose other SVM parameters.

4 Train SVM with K_{H,r} and {(x_i, y_i)}_{i=1}^N to obtain λ_i and b.

5 Output g(x) = sign(Σ_{i=1}^N y_i λ_i K_{H,r}(x_i, x) + b).

easy: SVM routines (sketched below).

hard: kernel construction.

shall inherit the profound properties of SVM.
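A minimal sketch of steps 2–5 under the assumption that an off-the-shelf SVM solver is available; here scikit-learn's SVC with a precomputed Gram matrix stands in for the SVM routine, and `kernel_fn` is a placeholder for any K_{H,r} (e.g., the stump or perceptron kernels defined later).

```python
import numpy as np
from sklearn.svm import SVC

def gram_matrix(kernel_fn, X1, X2):
    """Evaluate K_{H,r}(x, x') for every pair of rows in X1 and X2."""
    return np.array([[kernel_fn(x1, x2) for x2 in X2] for x1 in X1])

def train_infinite_ensemble(kernel_fn, X_train, y_train, C=1.0):
    """Steps 2-5 of the framework: train a standard soft-margin SVM on the
    embedded-hypothesis kernel; the lambda_i and b live inside the SVC."""
    svm = SVC(C=C, kernel="precomputed")
    svm.fit(gram_matrix(kernel_fn, X_train, X_train), y_train)
    return svm

def predict(svm, kernel_fn, X_train, X_test):
    """g(x) = sign(sum_i y_i lambda_i K_{H,r}(x_i, x) + b), as computed by SVC."""
    return svm.predict(gram_matrix(kernel_fn, X_test, X_train))
```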

(14)

Examples of the Framework

Decision Stump

decision stump: s_{q,d,α}(x) = q · sign((x)_d − α).

simplicity: popular for ensemble learning (e.g., Viola and Jones).







[Figure: illustration of the decision stump s_{+1,2,α}(x). (a) Decision Process: if (x)_2 ≥ α, predict +1; otherwise predict −1. (b) Decision Boundary: the horizontal line (x)_2 = α in the ((x)_1, (x)_2) plane, with s_{+1,2,α}(x) = +1 above it.]

(15)

Examples of the Framework

Stump Kernel

consider the set of decision stumps S = {s_{q,d,α_d}: q ∈ {+1, −1}, d ∈ {1, . . . , D}, α_d ∈ [L_d, R_d]}.

when X ⊆ [L_1, R_1] × [L_2, R_2] × · · · × [L_D, R_D], S is negation complete and contains a constant hypothesis.

Definition

The stump kernel K_S is defined for S with r(q, d, α_d) = 1/2:

K_S(x, x') = ∆_S − Σ_{d=1}^D |(x)_d − (x')_d| = ∆_S − ||x − x'||_1,

where ∆_S = (1/2) Σ_{d=1}^D (R_d − L_d) is a constant.
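A short sketch of the closed form, assuming the ranges [L_d, R_d] are supplied (e.g., the per-dimension minima and maxima of the training data); the function name `stump_kernel` is illustrative, not from the original work's code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def stump_kernel(X1, X2, lower, upper):
    """K_S(x, x') = Delta_S - ||x - x'||_1, with
    Delta_S = 0.5 * sum_d (R_d - L_d) taken from the supplied ranges."""
    delta_s = 0.5 * np.sum(np.asarray(upper) - np.asarray(lower))
    return delta_s - cdist(X1, X2, metric="cityblock")
```

Plugged into the precomputed-kernel sketch on the previous slide, this trains an SVM over the infinite ensemble of decision stumps.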

(16)

Examples of the Framework

Property of Stump Kernel

simple to compute: the constant ∆_S can even be dropped, K̃(x, x') = −||x − x'||_1.

infinite power: under mild assumptions, SVM with C = ∞ can perfectly classify training examples with the stump kernel.

the popular Gaussian kernel exp(−γ||x − x'||_2^2) has this property as well.

fast parameter selection: scaling the stump kernel is equivalent to scaling the soft-margin parameter C.

the Gaussian kernel depends on a good (γ, C) pair.

the stump kernel only needs a good C: roughly ten times faster (see the sketch below).

feature-space explanation for ℓ1-norm similarity.

well suited in some specific applications:

cancer prediction with gene expressions.
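To illustrate the parameter-selection point, a hedged sketch comparing a one-dimensional search over C for the stump kernel with the two-dimensional (γ, C) grid for the Gaussian kernel; the toy data set and parameter grids are arbitrary choices for the example, not from the original experiments.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy data for the sketch.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Stump kernel with Delta_S estimated from the per-dimension data ranges.
lo, hi = X.min(axis=0), X.max(axis=0)
K = 0.5 * np.sum(hi - lo) - cdist(X, X, metric="cityblock")

# Stump kernel: only C needs to be searched (1-D grid).
stump_search = GridSearchCV(SVC(kernel="precomputed"),
                            {"C": [0.01, 0.1, 1, 10, 100]}, cv=5)
stump_search.fit(K, y)

# Gaussian kernel: both gamma and C need to be searched (2-D grid).
gauss_search = GridSearchCV(SVC(kernel="rbf"),
                            {"C": [0.01, 0.1, 1, 10, 100],
                             "gamma": [0.01, 0.1, 1, 10, 100]}, cv=5)
gauss_search.fit(X, y)

print(stump_search.best_params_, gauss_search.best_params_)
```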

(17)

Examples of the Framework

Perceptron

perceptron: p_{θ,α}(x) = sign(θᵀx − α).

not easy for ensemble learning: hard to design a good algorithm.







[Figure: illustration of the perceptron p_{θ,α}(x). (a) Decision Process: if θᵀx ≥ α, predict +1; otherwise predict −1. (b) Decision Boundary: the hyperplane θᵀx = α in the ((x)_1, (x)_2) plane, with p_{θ,α}(x) = +1 on one side.]

(18)

Examples of the Framework

Perceptron Kernel

consider the set of perceptrons P = {p_{θ,α}: θ ∈ R^D, ||θ||_2 = 1, α ∈ [−R, R]}.

when X is within a ball of radius R centered at the origin, P is negation complete and contains a constant hypothesis.

Definition

The perceptron kernel is K_P with r(θ, α) = r_P:

K_P(x, x') = ∆_P − ||x − x'||_2,

where r_P and ∆_P are constants.
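A sketch in the same style as the stump kernel; `delta_p` stands for the constant ∆_P, and setting it to 0 (dropping the constant, by analogy with the stump kernel) is an assumption of this sketch rather than something stated on the slide.

```python
import numpy as np
from scipy.spatial.distance import cdist

def perceptron_kernel(X1, X2, delta_p=0.0):
    """K_P(x, x') = Delta_P - ||x - x'||_2; delta_p plays the role of the
    constant Delta_P (0 by default, i.e., the constant shifted away)."""
    return delta_p - cdist(X1, X2, metric="euclidean")
```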

(19)

Examples of the Framework

Property of Perceptron Kernel

similar properties to the stump kernel.

also simple to compute.

infinite power: equivalent to a D-∞-1 neural network.

fast parameter selection: also shown by Fleuret and Sahbi (ICCV 2003 workshop), who call it the triangular kernel, but without a feature-space explanation.

(20)

Examples of the Framework

Histogram Intersection Kernel

introduced for scene recognition (Odone et al., IEEE TIP, 2005).

assume (x)_d: counts in the histogram (how many pixels are red?) – an integer in [0, size of image].

histogram intersection kernel: K(x, x') = Σ_{d=1}^D min((x)_d, (x')_d).

generalized with difficult math when (x)_d is not an integer (Boughorbel et al., ICIP, 2005), for similar tasks.

let ŝ(x) = (s(x) + 1)/2: HIK can be constructed easily from the framework.

furthermore, HIK is equivalent to the stump kernel.

insights on why the HI (stump) kernel works well for the task?
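A sketch of the histogram intersection kernel exactly as written above; `hik` is an illustrative name. The identity min(a, b) = (a + b − |a − b|)/2 also makes the link to the ℓ1 distance inside the stump kernel visible.

```python
import numpy as np

def hik(X1, X2):
    """Histogram intersection kernel: K(x, x') = sum_d min((x)_d, (x')_d),
    for X1 of shape (n1, D) and X2 of shape (n2, D)."""
    # Broadcast to shape (n1, n2, D), take element-wise minima, sum over D.
    return np.minimum(X1[:, None, :], X2[None, :, :]).sum(axis=-1)
```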

(21)

Examples of the Framework

Other Kernels

Laplacian kernel: K(x, x') = exp(−γ||x − x'||_1).

provably embodies an infinite number of decision trees.

generalized Laplacian: K(x, x') = exp(−γ Σ_d |(x)_d^a − (x')_d^a|); can be similarly constructed with a slightly different r function.

standard kernel for histogram-based image classification with SVM (Chapelle et al., IEEE TNN, 1999).

insights on why it should work well?

exponential kernel: K(x, x') = exp(−γ||x − x'||_2).

provably embodies an infinite number of decision trees of perceptrons.
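For reference, both kernels from this slide as straightforward pairwise computations (a sketch; scikit-learn also ships a Laplacian kernel, but writing them out keeps the ℓ1 vs. ℓ2 distinction explicit).

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_kernel(X1, X2, gamma=1.0):
    """K(x, x') = exp(-gamma * ||x - x'||_1)."""
    return np.exp(-gamma * cdist(X1, X2, metric="cityblock"))

def exponential_kernel(X1, X2, gamma=1.0):
    """K(x, x') = exp(-gamma * ||x - x'||_2)."""
    return np.exp(-gamma * cdist(X1, X2, metric="euclidean"))
```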

(22)

Experimental Comparison

Comparison between SVM and AdaBoost

[Figure: test error (%) on fourteen data sets (tw, twn, th, thn, ri, rin, aus, bre, ger, hea, ion, pim, son, vot). Left: SVM-Stump vs. AdaBoost-Stump(100) and AdaBoost-Stump(1000). Right: SVM-Perc vs. AdaBoost-Perc(100) and AdaBoost-Perc(1000).]

Results

fair comparison between AdaBoost and SVM.

SVM is usually the best – there are benefits to going to infinity.

sparsity (finiteness) is a restriction.

(23)

Experimental Comparison

Comparison of SVM Kernels

[Figure: test error (%) on the same fourteen data sets, comparing SVM-Stump, SVM-Perc, and SVM-Gauss.]

Results

SVM-Perc very similar to SVM-Gauss.

SVM-Stump comparable to, but sometimes a bit worse than, the others.

(24)

Conclusion and Discussion

Conclusion and Discussion

constructed: general framework for infinite ensemble learning.

infinite ensemble learning could be better – existing AdaBoost-Stump applications may switch.

derived new and meaningful kernels.

stump kernel: succeeded in specific applications.

perceptron kernel: similar to Gaussian, faster in parameter selection.

gave novel interpretation to existing kernels.

histogram intersection kernel: equivalent to stump kernel.

Laplacian kernel: ensemble of decision trees.

possible thoughts for vision

would fast parameter selection be important for some problems?

any vision applications in which those kernel models are reasonable?

do the novel interpretations give any insights?

any domain knowledge that can be brought into kernel construction?
