(1)

Machine Learning Techniques (機器學習技法)

Lecture 7: Blending and Bagging

Hsuan-Tien Lin (林軒田) htlin@csie.ntu.edu.tw

Department of Computer Science & Information Engineering
National Taiwan University
(國立台灣大學資訊工程系)

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 0/23

(2)

Blending and Bagging

Roadmap

1 Embedding Numerous Features: Kernel Models

Lecture 6: Support Vector Regression
kernel ridge regression (dense) via ridge regression + representer theorem;
support vector regression (sparse) via regularized tube error + Lagrange dual

2 Combining Predictive Features: Aggregation Models

Lecture 7: Blending and Bagging
Motivation of Aggregation
Uniform Blending
Linear and Any Blending
Bagging (Bootstrap Aggregation)

3 Distilling Implicit Features: Extraction Models

(3)

Blending and Bagging Motivation of Aggregation

An Aggregation Story

Your T friends g_1, ..., g_T predict whether the stock will go up, as g_t(x).

You can . . .

• select the most trustworthy friend from their usual performance —validation!

• mix the predictions from all your friends uniformly —let them vote!

• mix the predictions from all your friends non-uniformly —let them vote, but give some more ballots

• combine the predictions conditionally —if [t satisfies some condition] give some ballots to friend t

• ...

aggregation models: mix or combine hypotheses (for better performance)

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 2/23

(4)

Blending and Bagging Motivation of Aggregation

Aggregation with Math Notations

Your T friends g_1, ..., g_T predict whether the stock will go up, as g_t(x).

• select the most trustworthy friend from their usual performance:
G(x) = g_{t*}(x) with t* = argmin_{t ∈ {1,2,...,T}} E_val(g_t)

• mix the predictions from all your friends uniformly:
G(x) = sign( Σ_{t=1}^T 1 · g_t(x) )

• mix the predictions from all your friends non-uniformly:
G(x) = sign( Σ_{t=1}^T α_t · g_t(x) ) with α_t ≥ 0
  • include select: α_t = ⟦E_val(g_t) smallest⟧
  • include uniformly: α_t = 1

• combine the predictions conditionally:
G(x) = sign( Σ_{t=1}^T q_t(x) · g_t(x) ) with q_t(x) ≥ 0
  • include non-uniformly: q_t(x) = α_t

aggregation models: a rich family

(5)

Blending and Bagging Motivation of Aggregation

Recall: Selection by Validation

G(x) = g_{t*}(x) with t* = argmin_{t ∈ {1,2,...,T}} E_val(g_t)

• simple and popular

• what if use E_in(g_t) instead of E_val(g_t)? complexity price on d_VC, remember? :-)

• need one strong g_t to guarantee small E_val (and small E_out)

selection: rely on one strong hypothesis
aggregation: can we do better with many (possibly weaker) hypotheses?

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 4/23

(6)

Blending and Bagging Motivation of Aggregation

Why Might Aggregation Work?

• mix different weak hypotheses uniformly —G(x) ‘strong’
  aggregation =⇒ feature transform (?)

• mix different random-PLA hypotheses uniformly —G(x) ‘moderate’
  aggregation =⇒ regularization (?)

proper aggregation =⇒ better performance

(7)

Blending and Bagging Motivation of Aggregation

Fun Time

Consider three decision stump hypotheses from R to {−1, +1}:
g_1(x) = sign(1 − x), g_2(x) = sign(1 + x), g_3(x) = −1.
When mixing the three hypotheses uniformly, what is the resulting G(x)?

1  2⟦|x| ≤ 1⟧ − 1
2  2⟦|x| ≥ 1⟧ − 1
3  2⟦x ≤ −1⟧ − 1
4  2⟦x ≥ +1⟧ − 1

Reference Answer: 1

The ‘region’ that gets two positive votes from g_1 and g_2 is |x| ≤ 1, and thus G(x) is positive within the region only. We see that the three decision stumps g_t can be aggregated to form a more sophisticated hypothesis G.

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 6/23

(9)

Blending and Bagging Uniform Blending

Uniform Blending (Voting) for Classification

uniform blending: known g_t, each with 1 ballot

G(x) = sign( Σ_{t=1}^T 1 · g_t(x) )

• same g_t (autocracy): as good as one single g_t
• very different g_t (diversity + democracy): majority can correct minority

similar results with uniform voting for multiclass:

G(x) = argmax_{1 ≤ k ≤ K} Σ_{t=1}^T ⟦g_t(x) = k⟧

how about regression?

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 7/23
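The following is not part of the original slides: a minimal Python sketch of the one-ballot voting rule above, reusing the three decision stumps from the earlier Fun Time question as the (made-up) committee.

```python
def uniform_vote(gs, x):
    """Uniform blending for binary classification: G(x) = sign(sum_t g_t(x))."""
    votes = sum(g(x) for g in gs)
    return 1 if votes > 0 else -1   # with an odd number of ±1 votes a tie never occurs

# the three decision stumps from the Fun Time question
g1 = lambda x: 1 if 1 - x > 0 else -1   # sign(1 - x)
g2 = lambda x: 1 if 1 + x > 0 else -1   # sign(1 + x)
g3 = lambda x: -1

for x in (-2.0, 0.0, 2.0):
    print(x, uniform_vote([g1, g2, g3], x))   # +1 only where g1 and g2 both vote +1
```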

(10)

Blending and Bagging Uniform Blending

Uniform Blending for Regression

G(x) = (1/T) Σ_{t=1}^T g_t(x)

• same g_t (autocracy): as good as one single g_t
• very different g_t (diversity + democracy):
  =⇒ some g_t(x) > f(x), some g_t(x) < f(x)
  =⇒ average could be more accurate than individual

diverse hypotheses: even simple uniform blending can be better than any single hypothesis

(11)

Blending and Bagging Uniform Blending

Theoretical Analysis of Uniform Blending

G(x) = (1/T) Σ_{t=1}^T g_t(x)

avg( (g_t(x) − f(x))² )
= avg( g_t² − 2 g_t f + f² )
= avg( g_t² ) − 2 G f + f²
= avg( g_t² ) − G² + (G − f)²
= avg( g_t² ) − 2 G² + G² + (G − f)²
= avg( g_t² − 2 g_t G + G² ) + (G − f)²
= avg( (g_t − G)² ) + (G − f)²

avg( E_out(g_t) ) = avg( 𝔼[(g_t − G)²] ) + E_out(G) ≥ E_out(G)

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 9/23
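Not in the original slides: a quick numerical check of the identity avg((g_t − f)²) = avg((g_t − G)²) + (G − f)² at a single point x, with made-up prediction values.

```python
import numpy as np

rng = np.random.default_rng(0)
f_x = 1.0                        # target value f(x) at a fixed x (made up)
g_x = rng.normal(1.0, 0.5, 20)   # predictions g_t(x) of T = 20 hypotheses (made up)
G_x = g_x.mean()                 # uniform blend G(x)

lhs = np.mean((g_x - f_x) ** 2)                      # average squared error of individual g_t
rhs = np.mean((g_x - G_x) ** 2) + (G_x - f_x) ** 2   # deviation-to-blend + error of blend
print(lhs, rhs, np.isclose(lhs, rhs))                # identical up to rounding
```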

(12)

Blending and Bagging Uniform Blending

Some Special g_t

consider a virtual iterative process that for t = 1, 2, . . . , T

1 request size-N data D_t from P^N (i.i.d.)
2 obtain g_t by A(D_t)

ḡ = lim_{T→∞} G = lim_{T→∞} (1/T) Σ_{t=1}^T g_t = 𝔼_D A(D)

avg( E_out(g_t) ) = avg( 𝔼[(g_t − ḡ)²] ) + E_out(ḡ)

expected performance of A = expected deviation to consensus + performance of consensus

• performance of consensus: called bias
• expected deviation to consensus: called variance

uniform blending: reduces variance for more stable performance
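Not from the lecture: a small simulation of the virtual process above under a made-up data distribution, with A chosen for illustration only as least-squares regression through the origin. It estimates the bias and variance terms at one test point and checks that they add up to the average error of the individual g_t.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 30, 2000                              # dataset size and number of virtual draws (made up)

def sample_dataset():
    x = rng.uniform(-1, 1, N)
    y = 2.0 * x + rng.normal(0, 0.5, N)      # made-up target f(x) = 2x plus noise
    return x, y

def A(x, y):
    return x @ y / (x @ x)                   # least-squares slope through the origin

x0 = 0.8                                     # evaluate everything at one test point
w = np.array([A(*sample_dataset()) for _ in range(T)])
g_x0 = w * x0                                # g_t(x0) for each virtual dataset D_t
g_bar_x0 = g_x0.mean()                       # consensus g_bar(x0)

variance = np.mean((g_x0 - g_bar_x0) ** 2)   # expected deviation to consensus
bias = (g_bar_x0 - 2.0 * x0) ** 2            # consensus versus the noiseless target
print(np.mean((g_x0 - 2.0 * x0) ** 2), bias + variance)   # the two numbers agree
```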

(13)

Blending and Bagging Uniform Blending

Fun Time

Consider applying uniform blending G(x) = (1/T) Σ_{t=1}^T g_t(x) on linear regression hypotheses g_t(x) = innerprod(w_t, x). Which of the following properties best describes the resulting G(x)?

1 a constant function of x
2 a linear function of x
3 a quadratic function of x
4 none of the other choices

Reference Answer: 2

G(x) = innerprod( (1/T) Σ_{t=1}^T w_t, x ), which is clearly a linear function of x. Note that we write ‘innerprod’ instead of the usual ‘transpose’ notation to avoid symbol conflict with T (number of hypotheses).

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 11/23


(15)

Blending and Bagging Linear and Any Blending

Linear Blending

linear blending: known g_t, each to be given α_t ballot

G(x) = sign( Σ_{t=1}^T α_t · g_t(x) ) with α_t ≥ 0

computing ‘good’ α_t: min_{α_t ≥ 0} E_in(α)

linear blending for regression:
min_{α_t ≥ 0} (1/N) Σ_{n=1}^N ( y_n − Σ_{t=1}^T α_t g_t(x_n) )²

LinReg + transformation:
min_{w_i} (1/N) Σ_{n=1}^N ( y_n − Σ_{i=1}^{d̃} w_i φ_i(x_n) )²

like two-level learning, remember? :-)

linear blending = LinModel + hypotheses as transform + constraints

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 12/23
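Not from the slides: a sketch of linear blending for regression as plain linear regression on the transformed features Φ(x) = (g_1(x), . . . , g_T(x)). The three hypotheses and the validation-style data are made up, and the α_t ≥ 0 constraint is dropped, which the following slide notes is often done in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

# made-up hypotheses g_t, assumed already obtained elsewhere
gs = [lambda x: x, lambda x: x ** 2, lambda x: np.sin(x)]

# made-up data for learning the blending weights
x_val = rng.uniform(-2, 2, 100)
y_val = 1.5 * x_val + 0.5 * np.sin(x_val) + rng.normal(0, 0.1, 100)

Z = np.column_stack([g(x_val) for g in gs])        # Phi(x_n): hypotheses as transform
alpha, *_ = np.linalg.lstsq(Z, y_val, rcond=None)  # ordinary linear regression on Z

def G(x):
    return sum(a * g(x) for a, g in zip(alpha, gs))   # blended prediction

print(alpha)                                       # weights found for (x, x^2, sin x)
```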

(16)

Blending and Bagging Linear and Any Blending

Constraint on α_t

linear blending = LinModel + hypotheses as transform + constraints:

min_{α_t ≥ 0} (1/N) Σ_{n=1}^N err( y_n, Σ_{t=1}^T α_t g_t(x_n) )

linear blending for binary classification:
if α_t < 0 =⇒ α_t g_t(x) = |α_t| (−g_t(x))
negative α_t for g_t ≡ positive |α_t| for −g_t
if you have a stock up/down classifier with 99% error, tell me! :-)

in practice, often
linear blending = LinModel + hypotheses as transform, without the constraints

(17)

Blending and Bagging Linear and Any Blending

Linear Blending versus Selection

in practice, often g_1 ∈ H_1, g_2 ∈ H_2, . . . , g_T ∈ H_T by minimum E_in

recall: selection by minimum E_in
—best of best, paying d_VC( ∪_{t=1}^T H_t )

recall: linear blending includes selection as special case
—by setting α_t = ⟦E_val(g_t) smallest⟧

complexity price of linear blending with E_in (aggregation of best):
≥ d_VC( ∪_{t=1}^T H_t )

like selection, blending practically done with
(E_val instead of E_in) + (g_t from minimum E_train)

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 14/23

(18)

Blending and Bagging Linear and Any Blending

Any Blending

Given g_1, g_2, . . . , g_T from D_train, transform (x_n, y_n) in D_val to (z_n = Φ(x_n), y_n), where Φ(x) = (g_1(x), . . . , g_T(x))

Linear Blending
1 compute α = LinearModel( {(z_n, y_n)} )
2 return G_LINB(x) = LinearHypothesis_α( Φ(x) )

Any Blending (Stacking)
1 compute g̃ = AnyModel( {(z_n, y_n)} )
2 return G_ANYB(x) = g̃( Φ(x) )

where Φ(x) = (g_1(x), . . . , g_T(x))

any blending:
• powerful, achieves conditional blending
• but danger of overfitting, as always :-(
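Not part of the slides: a sketch of the two boxes above with scikit-learn models standing in for the g_t and for AnyModel (an assumption about tooling; the lecture does not prescribe any library). The g_t come from a training split, and the meta-model g̃ is fit on Φ(x_n) over a validation split.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] > 0.5).astype(int)        # made-up classification task

# split into D_train (to obtain the g_t) and D_val (to learn the blender)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

gs = [DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr),
      KNeighborsClassifier(5).fit(X_tr, y_tr),
      LogisticRegression().fit(X_tr, y_tr)]           # level-0 hypotheses g_1, ..., g_T

def phi(X):
    return np.column_stack([g.predict(X) for g in gs])  # Phi(x) = (g_1(x), ..., g_T(x))

# "any blending": fit any model g_tilde on the transformed validation data
g_tilde = DecisionTreeClassifier(max_depth=2, random_state=0).fit(phi(X_val), y_val)

def G(X):
    return g_tilde.predict(phi(X))                    # G_ANYB(x) = g_tilde(Phi(x))

print((G(X_val) == y_val).mean())   # accuracy on the blending data itself: optimistic,
                                    # which is exactly the overfitting danger noted above
```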

(19)

Blending and Bagging Linear and Any Blending

Blending in Practice

(Chen et al., A Linear Ensemble of Individual and Blended Models for Music Rating Prediction, 2012)

KDDCup 2011 Track 1: World Champion Solution by NTU

• validation set blending: a special any blending
  model E_test (squared): 519.45 =⇒ 456.24
  —helped secure the lead in last two weeks

• test set blending: linear blending using Ẽ_test
  E_test (squared): 456.24 =⇒ 442.06
  —helped turn the tables in last hour

blending ‘useful’ in practice, despite the computational burden

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 16/23

(20)

Blending and Bagging Linear and Any Blending

Fun Time

Consider three decision stump hypotheses from R to {−1, +1}:
g_1(x) = sign(1 − x), g_2(x) = sign(1 + x), g_3(x) = −1.
When x = 0, what is the resulting Φ(x) = (g_1(x), g_2(x), g_3(x)) used in the returned hypothesis of linear/any blending?

1 (+1, +1, +1)
2 (+1, +1, −1)
3 (+1, −1, −1)
4 (−1, −1, −1)

Reference Answer: 2

Too easy? :-)

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 17/23


(22)

Blending and Bagging Bagging (Bootstrap Aggregation)

What We Have Done

blending: aggregate after getting g_t; learning: aggregate as well as getting g_t

aggregation type | blending          | learning
uniform          | voting/averaging  | ?
non-uniform      | linear            | ?
conditional      | stacking          | ?

learning g_t for uniform aggregation: diversity important

• diversity by different models: g_1 ∈ H_1, g_2 ∈ H_2, . . . , g_T ∈ H_T
• diversity by different parameters: GD with η = 0.001, 0.01, . . . , 10
• diversity by algorithmic randomness: random PLA with different random seeds
• diversity by data randomness: within-cross-validation hypotheses g_v⁻

next: diversity by data randomness without g⁻

(23)

Blending and Bagging Bagging (Bootstrap Aggregation)

Revisit of Bias-Variance

expected performance of A = expected deviation to consensus + performance of consensus

consensus ḡ = expected g_t from D_t ∼ P^N

• consensus more stable than direct A(D), but comes from many more D_t than the D on hand

want: approximate ḡ by
• finite (large) T
• approximate g_t = A(D_t) from D_t ∼ P^N using only D

bootstrapping: a statistical tool that re-samples from D to ‘simulate’ D_t

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 19/23

(24)

Blending and Bagging Bagging (Bootstrap Aggregation)

Bootstrap Aggregation

bootstrapping

bootstrap sample D̃_t: re-sample N examples from D uniformly with replacement
—can also use arbitrary N′ instead of the original N

virtual aggregation

consider a virtual iterative process that for t = 1, 2, . . . , T
1 request size-N data D_t from P^N (i.i.d.)
2 obtain g_t by A(D_t)
G = Uniform({g_t})

bootstrap aggregation

consider a physical iterative process that for t = 1, 2, . . . , T
1 request size-N′ data D̃_t from bootstrapping
2 obtain g_t by A(D̃_t)
G = Uniform({g_t})

bootstrap aggregation (BAGging): a simple meta algorithm on top of base algorithm A
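Not from the slides: a minimal sketch of the bagging meta-algorithm wrapped around an arbitrary base algorithm A. The base learner here (a mean-threshold decision stump) and the data are made up; only the re-sample-then-uniformly-aggregate loop is the point.

```python
import numpy as np

def bagging(A, X, y, T=25, n_prime=None, seed=0):
    """Bootstrap AGgregation: run base algorithm A on T bootstrap samples of (X, y)
    and return the uniform (majority-vote) blend of the resulting hypotheses."""
    rng = np.random.default_rng(seed)
    N = len(X)
    n_prime = N if n_prime is None else n_prime      # arbitrary N' is allowed, as noted above
    gs = []
    for _ in range(T):
        idx = rng.integers(0, N, size=n_prime)       # re-sample uniformly with replacement
        gs.append(A(X[idx], y[idx]))
    return lambda x: np.sign(sum(g(x) for g in gs))  # G = Uniform({g_t})

def stump_learner(X, y):
    """A deliberately simple, data-sensitive base algorithm (illustration only):
    a decision stump on feature 0 with threshold at the sample mean."""
    theta = X[:, 0].mean()
    pred = (X[:, 0] > theta) * 2 - 1                 # stump outputs on this sample
    s = 1.0 if pred @ y > 0 else -1.0                # pick the better orientation
    return lambda x: s * (1.0 if x[0] > theta else -1.0)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] - 0.3)                           # made-up labels in {-1, +1}
G = bagging(stump_learner, X, y, T=25)
print(np.mean([G(x) == yi for x, yi in zip(X, y)]))  # in-sample accuracy of the bag
```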

(25)

Blending and Bagging Bagging (Bootstrap Aggregation)

Bagging Pocket in Action

T_POCKET = 1000; T_BAG = 25

• very diverse g_t from bagging
• proper non-linear boundary after aggregating binary classifiers

bagging works reasonably well if base algorithm sensitive to data randomness

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 21/23

(26)

Blending and Bagging Bagging (Bootstrap Aggregation)

Fun Time

When using bootstrapping to re-sample N examples D̃_t from a data set D with N examples, what is the probability of getting D̃_t exactly the same as D?

1 0/N^N = 0
2 1/N^N
3 N!/N^N
4 N^N/N^N = 1

Reference Answer: 3

Consider re-sampling in an ordered manner for N steps. Then there are N^N possible outcomes D̃_t, each with equal probability. Most importantly, N! of the outcomes are permutations of the original D, and thus the answer.

Hsuan-Tien Lin (NTU CSIE) Machine Learning Techniques 22/23
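Not in the slides: a brute-force check of the N!/N^N answer for a made-up small N.

```python
from itertools import product
from math import factorial

N = 4
D = list(range(N))                                   # N distinct examples
outcomes = product(D, repeat=N)                      # all N^N ordered bootstrap draws
same = sum(1 for draw in outcomes if sorted(draw) == D)
print(same / N ** N, factorial(N) / N ** N)          # both print 24/256 = 0.09375
```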


(28)

Blending and Bagging Bagging (Bootstrap Aggregation)

Summary

1 Embedding Numerous Features: Kernel Models

2 Combining Predictive Features: Aggregation Models

Lecture 7: Blending and Bagging
• Motivation of Aggregation: aggregated G strong and/or moderate
• Uniform Blending: diverse hypotheses, ‘one vote, one value’
• Linear and Any Blending: two-level learning with hypotheses as transform
• Bagging (Bootstrap Aggregation): bootstrapping for diverse hypotheses

next: getting more diverse hypotheses to make G strong

3 Distilling Implicit Features: Extraction Models
