(1)

Machine Learning Foundations (機器學習基石)

Lecture 13: Hazard of Overfitting

Hsuan-Tien Lin (林軒田) htlin@csie.ntu.edu.tw

Department of Computer Science

& Information Engineering

National Taiwan University

(國立台灣大學資訊工程系)

(2)

Hazard of Overfitting

Roadmap

1 When Can Machines Learn?

2 Why Can Machines Learn?

3 How Can Machines Learn?

Lecture 12: Nonlinear Transform

nonlinear models via nonlinear feature transform Φ plus linear models, with the price of model complexity

4 How Can Machines Learn Better?

Lecture 13: Hazard of Overfitting
What is Overfitting?
The Role of Noise and Data Size
Deterministic Noise
Dealing with Overfitting

(3)

Hazard of Overfitting What is Overfitting?

Bad Generalization

regression for x ∈ R with N = 5 examples

• target f(x) = 2nd order polynomial
• label y_n = f(x_n) + very small noise
• linear regression in Z-space + Φ = 4th order polynomial
• unique solution passing through all examples ⇒ E_in(g) = 0
• E_out(g) huge

[figure: Data, Target, and Fit]

bad generalization: low E_in, high E_out
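The N = 5 setup above is easy to reproduce. The sketch below is a minimal illustration: the specific quadratic target, input range, and noise level are assumptions (the slide does not fix them), but any similar choice shows the same effect, with the 4th-order fit passing through every point while generalizing badly.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative 2nd-order target (the slide does not fix its coefficients)
def f(x):
    return 1.0 - 2.0 * x + 3.0 * x ** 2

# N = 5 examples with very small noise: y_n = f(x_n) + noise
N, sigma = 5, 0.05
x_train = rng.uniform(-1, 1, N)
y_train = f(x_train) + sigma * rng.standard_normal(N)

# 4th-order polynomial fit: 5 coefficients, 5 points -> passes through every example
g = np.poly1d(np.polyfit(x_train, y_train, 4))

# squared-error estimates of E_in and E_out
x_test = rng.uniform(-1, 1, 10_000)
E_in = np.mean((g(x_train) - y_train) ** 2)
E_out = np.mean((g(x_test) - f(x_test)) ** 2)
print(f"E_in  = {E_in:.2e}")   # essentially 0
print(f"E_out = {E_out:.2e}")  # typically much larger than E_in: bad generalization
```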

(4)

Hazard of Overfitting What is Overfitting?

Bad Generalization and Overfitting

take d_VC = 1126 for learning: bad generalization — (E_out − E_in) large

switch from d_VC = d_VC* to d_VC = 1126: overfitting — E_in ↓, E_out ↑

switch from d_VC = d_VC* to d_VC = 1: underfitting — E_in ↑, E_out ↑

[figure: Error vs. VC dimension d_VC, showing in-sample error, out-of-sample error, and model complexity]

bad generalization: low E_in, high E_out; overfitting: lower E_in, higher E_out

(5)

Hazard of Overfitting What is Overfitting?

Cause of Overfitting: A Driving Analogy

[figure: 'good fit' ⇒ overfit (Data, Target, Fit)]

learning              | driving
overfit               | commit a car accident
use excessive d_VC    | 'drive too fast'
noise                 | bumpy road
limited data size N   | limited observations about road condition

next: how does noise & data size affect overfitting?

(6)

Hazard of Overfitting What is Overfitting?

Fun Time

Based on our discussion, for data of fixed size, which of the following situations has relatively the lowest risk of overfitting?

1 small noise, fitting from small d_VC to median d_VC
2 small noise, fitting from small d_VC to large d_VC
3 large noise, fitting from small d_VC to median d_VC
4 large noise, fitting from small d_VC to large d_VC

Reference Answer: 1

Two causes of overfitting are noise and excessive d_VC. So if both are relatively 'under control', the risk of overfitting is smaller.

(8)

Hazard of Overfitting The Role of Noise and Data Size

Case Study (1/2)

10th order target function + noise

[figure: Data and Target]

          g2 ∈ H2   g10 ∈ H10
E_in      0.050     0.034
E_out     0.127     9.00

50th order target function, noiseless

[figure: Data and Target]

          g2 ∈ H2   g10 ∈ H10
E_in      0.029     0.00001
E_out     0.120     7680

overfitting from best g2 ∈ H2 to best g10 ∈ H10?

(9)

Hazard of Overfitting The Role of Noise and Data Size

Case Study (2/2)

10th order target function + noise

[figure: Data, 2nd Order Fit, 10th Order Fit]

          g2 ∈ H2   g10 ∈ H10
E_in      0.050     0.034
E_out     0.127     9.00

50th order target function, noiseless

[figure: Data, 2nd Order Fit, 10th Order Fit]

          g2 ∈ H2   g10 ∈ H10
E_in      0.029     0.00001
E_out     0.120     7680

overfitting from g2 to g10? both yes!
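A sketch of the noisy case of this study: the exact target coefficients, N, and noise level from the slide are not given, so the values below are placeholders. What matters is the qualitative pattern, where g10 wins in E_in but loses badly in E_out.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_and_eval(x, y, f, degree, n_test=10_000):
    """Fit a polynomial of the given degree; return squared-error (E_in, E_out)."""
    g = np.poly1d(np.polyfit(x, y, degree))
    x_test = rng.uniform(-1, 1, n_test)
    return np.mean((g(x) - y) ** 2), np.mean((g(x_test) - f(x_test)) ** 2)

# illustrative 10th-order target plus noise (target, N, and sigma are assumed)
f10 = np.poly1d(rng.standard_normal(11))
N, sigma = 15, 0.5
x = rng.uniform(-1, 1, N)
y = f10(x) + sigma * rng.standard_normal(N)

for d in (2, 10):
    E_in, E_out = fit_and_eval(x, y, f10, d)
    print(f"H{d:>2}: E_in = {E_in:.3f}, E_out = {E_out:.3f}")
# expected pattern: H10 has the lower E_in but the (much) higher E_out
```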

(10)

Hazard of Overfitting The Role of Noise and Data Size

Irony of Two Learners

[figures: Data, 2nd Order Fit, 10th Order Fit; Data and Target]

learner O (Overfit): pick g10 ∈ H10
learner R (Restrict): pick g2 ∈ H2

when both know that target = 10th order: R 'gives up' the ability to fit, but R wins in E_out a lot!

philosophy: concession for advantage? :-)

(11)

Hazard of Overfitting The Role of Noise and Data Size

Learning Curves Revisited

[figure: learning curves for H2 and H10: Expected Error vs. Number of Data Points N, showing E_in and E_out]

H10: lower E_out when N → ∞, but much larger generalization error for small N

gray area: O overfits! (E_in ↓, E_out ↑)

R always wins in E_out if N small!
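The crossover in the learning curves can be checked with a small sweep over N. This is a sketch under assumed settings (a random 10th-order target, noise level 0.5, a handful of trials); it is meant to show the qualitative crossover, not to reproduce the exact figure.

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.poly1d(rng.standard_normal(11))   # assumed 10th-order target
sigma, trials = 0.5, 200
x_test = rng.uniform(-1, 1, 5_000)

for N in (15, 30, 60, 120, 500):
    e_out = {2: [], 10: []}
    for _ in range(trials):
        x = rng.uniform(-1, 1, N)
        y = f(x) + sigma * rng.standard_normal(N)
        for d in (2, 10):
            g = np.poly1d(np.polyfit(x, y, d))
            e_out[d].append(np.mean((g(x_test) - f(x_test)) ** 2))
    print(f"N={N:4d}  E_out(H2)={np.mean(e_out[2]):8.3f}  "
          f"E_out(H10)={np.mean(e_out[10]):8.3f}")
# small N: H2 tends to win in E_out; large N: H10 eventually overtakes H2
```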

(12)

Hazard of Overfitting The Role of Noise and Data Size

The ‘No Noise’ Case

[figures: Data, 2nd Order Fit, 10th Order Fit; Data and Target]

learner O (Overfit): pick g10 ∈ H10
learner R (Restrict): pick g2 ∈ H2

when both know that there is no noise: R still wins

is there really no noise? 'target complexity' acts like noise

(13)

Hazard of Overfitting The Role of Noise and Data Size

Fun Time

When having limited data, in which of the following cases would learner R perform better than learner O?

1 limited data from a 10th order target function with some noise
2 limited data from a 1126th order target function with no noise
3 limited data from a 1126th order target function with some noise
4 all of the above

Reference Answer: 4

We discussed 1 and 2, but you should be able to 'generalize' :-) that R also wins in the more difficult case 3.

(15)

Hazard of Overfitting Deterministic Noise

A Detailed Experiment

y = f(x) + ε ∼ Gaussian( Σ_{q=0}^{Q_f} α_q x^q , σ² ), where the sum is f(x)

• Gaussian iid noise ε with level σ²
• some 'uniform' distribution on f(x) with complexity level Q_f
• data size N

[figure: Data and Target]

goal: 'overfit level' for different (N, σ²) and (N, Q_f)?

(16)

Hazard of Overfitting Deterministic Noise

The Overfit Measure

[figure: Data, 2nd Order Fit, 10th Order Fit]

• g2 ∈ H2
• g10 ∈ H10
• E_in(g10) ≤ E_in(g2) for sure

[figure: Data and Target]

overfit measure: E_out(g10) − E_out(g2)
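A minimal sketch of computing this overfit measure. The lecture's actual experiment uses a carefully normalized family of targets, which is not reproduced here; the plain random polynomial target and the specific (N, σ², Q_f) values below are simplifying assumptions, chosen only to show how E_out(g10) − E_out(g2) is estimated and averaged.

```python
import numpy as np

rng = np.random.default_rng(3)

def overfit_level(N, sigma2, Qf, trials=200, n_test=2_000):
    """Average E_out(g10) - E_out(g2) over random targets and data sets.
    Uses a plain random polynomial target of order Qf (a simplification of the lecture's setup)."""
    levels = []
    for _ in range(trials):
        f = np.poly1d(rng.standard_normal(Qf + 1))
        x = rng.uniform(-1, 1, N)
        y = f(x) + np.sqrt(sigma2) * rng.standard_normal(N)
        x_test = rng.uniform(-1, 1, n_test)
        e_out = {}
        for d in (2, 10):
            g = np.poly1d(np.polyfit(x, y, d))
            e_out[d] = np.mean((g(x_test) - f(x_test)) ** 2)
        levels.append(e_out[10] - e_out[2])
    return np.mean(levels)

print(overfit_level(N=20, sigma2=1.0, Qf=20))   # small N, noisy: typically strongly positive (overfit)
print(overfit_level(N=120, sigma2=1.0, Qf=20))  # larger N: the overfit level shrinks (can go negative)
```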

(17)

Hazard of Overfitting Deterministic Noise

The Results

impact of σ² versus N (fixed Q_f = 20); impact of Q_f versus N (fixed σ² = 0.1)

[figures: overfit level over (Number of Data Points N, Noise Level σ²) and over (N, Target Complexity Q_f), color scale −0.2 to 0.2]

ring a bell? :-)

(18)

Hazard of Overfitting Deterministic Noise

Impact of Noise and Data Size

impact of σ² versus N: stochastic noise
impact of Q_f versus N: deterministic noise

[figures: the same overfit-level plots over (N, σ²) and (N, Q_f)]

four reasons of serious overfitting:

data size N ↓           overfit ↑
stochastic noise ↑      overfit ↑
deterministic noise ↑   overfit ↑
excessive power ↑       overfit ↑

overfitting 'easily' happens

(19)

Hazard of Overfitting Deterministic Noise

Deterministic Noise

if f ∉ H: something of f cannot be captured by H

deterministic noise: difference between the best h ∈ H and f

acts like 'stochastic noise' — not new to CS: pseudo-random generator

difference from stochastic noise:
• depends on H
• fixed for a given x

[figure: f and the best h ∈ H on the x–y plane]

philosophy: when teaching a kid, perhaps better not to use examples from a complicated target function? :-)

(20)

Hazard of Overfitting Deterministic Noise

Fun Time

Consider the target function being sin(1126x) for x ∈ [0, 2π]. When x is uniformly sampled from the range, and we use all possible linear hypotheses h(x) = w · x to approximate the target function with respect to the squared error, what is the level of deterministic noise for each x?

1 |sin(1126x)|
2 |sin(1126x) − x|
3 |sin(1126x) + x|
4 |sin(1126x) − 1126x|

Reference Answer: 1

You can try a few different w and convince yourself that the best hypothesis h* is h*(x) = 0. The deterministic noise is the difference between f and h*.
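A quick numerical sanity check of this answer, as a sketch: on a dense grid over [0, 2π], the least-squares optimal w for h(x) = w · x against sin(1126x) comes out essentially zero, so h*(x) ≈ 0 and the deterministic noise at each x is ≈ |sin(1126x)|.

```python
import numpy as np

# dense uniform grid on [0, 2*pi]; spacing is much finer than the period of sin(1126x)
x = np.linspace(0.0, 2.0 * np.pi, 2_000_001)
fx = np.sin(1126.0 * x)

# the least-squares optimum of E[(w*x - f(x))^2] over w is E[x f(x)] / E[x^2]
w_star = np.sum(x * fx) / np.sum(x ** 2)
print(w_star)  # on the order of 1e-4 or smaller: the best linear hypothesis is essentially h*(x) = 0
```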

(22)

Hazard of Overfitting Dealing with Overfitting

Driving Analogy Revisited

learning                  | driving
overfit                   | commit a car accident
use excessive d_VC        | 'drive too fast'
noise                     | bumpy road
limited data size N       | limited observations about road condition
start from simple model   | drive slowly
data cleaning/pruning     | use more accurate road information
data hinting              | exploit more road information
regularization            | put on the brakes
validation                | monitor the dashboard

all very practical techniques to combat overfitting

(23)

Hazard of Overfitting Dealing with Overfitting

Data Cleaning/Pruning

if we ‘detect’ the outlier ‘5’ at the top by
• too close to other ◦, or too far from other ×
• wrong by current classifier
• . . .

possible action 1: correct the label (data cleaning)
possible action 2: remove the example (data pruning)

possibly helps, but the effect varies
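One possible way to implement the 'detect the outlier' step (an assumption for illustration, not the lecture's prescribed method) is a neighborhood-disagreement check: flag examples whose label disagrees with most of their nearest neighbors, then relabel them (cleaning) or drop them (pruning).

```python
import numpy as np

def flag_suspect_labels(X, y, k=5):
    """Flag examples whose label disagrees with the majority of their k nearest neighbors.
    X: inputs of shape (N, d); y: labels of shape (N,)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    flags = np.zeros(len(y), dtype=bool)
    for i in range(len(y)):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                          # exclude the point itself
        neighbors = np.argsort(dist)[:k]          # indices of the k nearest neighbors
        if np.mean(y[neighbors] == y[i]) < 0.5:   # neighborhood majority disagrees
            flags[i] = True
    return flags

# data pruning: drop the flagged examples; data cleaning: relabel them instead
# flags = flag_suspect_labels(X, y); X_pruned, y_pruned = X[~flags], y[~flags]
```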

(24)

Hazard of Overfitting Dealing with Overfitting

Data Hinting

slightly shifted/rotated digits carry the same meaning

possible action: add virtual examples by shifting/rotating the given digits (data hinting)

possibly helps, but watch out — virtual examples are not iid ∼ P(x, y)!
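A sketch of data hinting for digit images, assuming the data comes as an (N, H, W) array: one-pixel shifts preserve the label. np.roll is used only for brevity (it wraps around the image border, so careful padding/cropping is the safer choice in practice).

```python
import numpy as np

def shifted_hints(images, labels, shifts=((0, 1), (0, -1), (1, 0), (-1, 0))):
    """Create virtual examples by shifting each image one pixel up/down/left/right.

    images: array of shape (N, H, W); labels: array of shape (N,).
    Returns the virtual images and their (unchanged) labels."""
    virtual_x = [np.roll(images, shift=s, axis=(1, 2)) for s in shifts]
    virtual_y = [labels] * len(shifts)
    return np.concatenate(virtual_x), np.concatenate(virtual_y)

# as the slide warns, virtual examples are not iid ~ P(x, y):
# use them to supplement the original data, not to replace it
```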

(25)

Hazard of Overfitting Dealing with Overfitting

Fun Time

Assume we know that f(x) is symmetric for some 1D regression application. That is, f(x) = f(−x). One possibility of using the knowledge is to consider symmetric hypotheses only. On the other hand, you can also generate virtual examples from the original data {(x_n, y_n)} as hints. What virtual examples suit your needs best?

1 {(x_n, −y_n)}
2 {(−x_n, −y_n)}
3 {(−x_n, y_n)}
4 {(2x_n, 2y_n)}

Reference Answer: 3

We want the virtual examples to encode the invariance when x → −x .
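Concretely, the hint from choice 3 is just the mirrored inputs with unchanged labels; a tiny sketch with made-up numbers:

```python
import numpy as np

# original 1D regression data (illustrative values only)
x = np.array([0.3, 0.7, 1.2])
y = np.array([0.9, 0.4, 1.5])

# symmetry hint f(x) = f(-x): add {(-x_n, y_n)} as virtual examples
x_all = np.concatenate([x, -x])
y_all = np.concatenate([y, y])
```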

(27)

Hazard of Overfitting Dealing with Overfitting

Summary

1 When Can Machines Learn?

2 Why Can Machines Learn?

3 How Can Machines Learn?

Lecture 12: Nonlinear Transform

4 How Can Machines Learn Better?

Lecture 13: Hazard of Overfitting
What is Overfitting?
  lower E_in but higher E_out
The Role of Noise and Data Size
  overfitting 'easily' happens!
Deterministic Noise
  what H cannot capture acts like noise
Dealing with Overfitting
  data cleaning/pruning/hinting, and more

next: putting on the brakes with regularization
