
(1)

# Machine Learning Foundations (機器學習基石)

### Lecture 13: Hazard of Overfitting

Hsuan-Tien Lin (林軒田), htlin@csie.ntu.edu.tw
Department of Computer Science and Information Engineering, National Taiwan University (國立台灣大學資訊工程系)

(2)

### Roadmap

Lecture 12: Nonlinear Transformation
nonlinear models via nonlinear feature transform Φ, plus linear models, with the price of model complexity

### 4 How Can Machines Learn Better?

Lecture 13: Hazard of Overfitting
• What is Overfitting?
• The Role of Noise and Data Size
• Deterministic Noise
• Dealing with Overfitting

(3)

Hazard of Overfitting: What is Overfitting?

## Bad Generalization

• regression for x ∈ R with N = 5 examples
• label y_n = f(x_n) + very small noise
• linear regression in Z-space with Φ = 4th-order polynomial
• unique solution passing all examples: E_in(g) = 0
• E_out(g): huge

bad generalization: low E_in, high E_out
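A quick numerical sketch of this effect (not from the lecture; an illustration with an assumed simple target): fit a 4th-order polynomial to N = 5 noisy samples and compare in-sample and out-of-sample squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical target: a gentle quadratic, with very small label noise
f = lambda x: 0.5 * x**2 + x
x_train = np.linspace(-1, 1, 5)
y_train = f(x_train) + 0.01 * rng.standard_normal(5)

# a 4th-order polynomial through 5 points interpolates exactly: E_in ~ 0
g = np.polyfit(x_train, y_train, deg=4)

x_test = np.linspace(-1.2, 1.2, 1000)   # test on a slightly wider range
e_in = np.mean((np.polyval(g, x_train) - y_train) ** 2)
e_out = np.mean((np.polyval(g, x_test) - f(x_test)) ** 2)

print(e_in, e_out)   # E_in is essentially 0; E_out is much larger
```

The fit passes through every sample, so the tiny noise is memorized and the polynomial wiggles away from the target between and beyond the samples.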

(4)

## Bad Generalization and Overfitting

• take dVC = 1126 for learning: bad generalization, (E_out − E_in) large
• switch from dVC = dVC* to dVC = 1126: overfitting, E_in ↓ but E_out ↑
• switch from dVC = dVC* to dVC = 1: underfitting, E_in ↑ and E_out ↑

[figure: Error versus VC dimension dVC, with curves for in-sample error E_in, model complexity, and out-of-sample error E_out]

bad generalization: low E_in, high E_out
overfitting: lower E_in, higher E_out

(5)

## Cause of Overfitting: A Driving Analogy

[figure: 'good fit' versus overfit]

| learning | driving |
| --- | --- |
| overfit | commit a car accident |
| use excessive dVC | 'drive too fast' |

how do noise & data size affect overfitting?

(6)

## Fun Time

Based on our discussion, for data of fixed size, which of the following situations is relatively of the lowest risk of overfitting?

1. small noise, fitting from small dVC to median dVC
2. small noise, fitting from small dVC to large dVC
3. large noise, fitting from small dVC to median dVC
4. large noise, fitting from small dVC to large dVC

Correct answer: 1. Two causes of overfitting are noise and excessive dVC, so if both are relatively 'under control', the risk of overfitting is smaller.


(8)

Hazard of Overfitting: The Role of Noise and Data Size

## Case Study (1/2)

data from a 10th-order target plus noise, and data from a 50th-order noiseless target; fit each with the best g2 ∈ H2 and the best g10 ∈ H10:

| | 10th order + noise, g2 ∈ H2 | 10th order + noise, g10 ∈ H10 | 50th order noiseless, g2 ∈ H2 | 50th order noiseless, g10 ∈ H10 |
| --- | --- | --- | --- | --- |
| E_in | 0.050 | 0.034 | 0.029 | 0.00001 |
| E_out | 0.127 | 9.00 | 0.120 | 7680 |

overfitting from the best g2 ∈ H2 to the best g10 ∈ H10?

(9)

## Case Study (2/2)

• 10th-order target with noise: E_in drops from g2 to g10, yet E_out explodes
• 50th-order noiseless target: same story

overfitting from g2 to g10? both yes!
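A small simulation in the spirit of the case study (illustrative only; the target, noise level, and sample size here are assumptions, not the lecture's exact setup): fit degree-2 and degree-10 polynomials to a few noisy samples and compare their errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical noisy target, standing in for the higher-order target of the slides
f = lambda x: np.polyval([1, 0, -3, 0, 1, 0], x)   # x^5 - 3x^3 + x
x = np.sort(rng.uniform(-1, 1, 15))
y = f(x) + 0.3 * rng.standard_normal(15)

x_test = np.linspace(-1, 1, 2000)

def errors(deg):
    g = np.polyfit(x, y, deg)
    e_in = np.mean((np.polyval(g, x) - y) ** 2)
    e_out = np.mean((np.polyval(g, x_test) - f(x_test)) ** 2)
    return e_in, e_out

e_in2, e_out2 = errors(2)
e_in10, e_out10 = errors(10)

# the richer model always fits the sample at least as well...
print(e_in10 <= e_in2 + 1e-9)
# ...but on this little noisy data it typically generalizes worse
print(e_out2, e_out10)
```

Since H2 is contained in H10, the least-squares fit in H10 can never have larger E_in; the question is what happens to E_out.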

(10)

## Irony of Two Learners

• learner O picks the best g10 ∈ H10; learner R picks the best g2 ∈ H2
• even when both know that target = 10th order, R 'gives up' the ability to fit the target exactly, but wins in E_out a lot!

philosophy: concession for advantage

(11)

## Learning Curves Revisited

[figure: learning curves of E_in and E_out versus data size N, for H2 (left) and H10 (right)]

• H2: lower E_out for small N; when N → ∞, its E_out levels off slightly above that of H10
• H10: better E_out in the limit, but much larger generalization error for small N
• gray area: H10 overfits! (E_in ↓, E_out ↑)

the simpler H2 always wins in E_out if N is small!

(12)

## The 'No Noise' Case

• learner O versus learner R, when both know there is no noise: R still wins
• is there really no noise? 'target complexity' acts like noise

(13)

## Fun Time

When having limited data, in which of the following cases would learner R perform better than learner O?

1. limited data from a 10th-order target function with some noise
2. limited data from a 1126th-order target function with no noise
3. limited data from a 1126th-order target function with some noise
4. all of the above

Correct answer: 4. We discussed cases 1 and 2, but you should be able to infer that R also wins in the more difficult case 3.


(15)

Hazard of Overfitting: Deterministic Noise

## A Detailed Experiment

• y = f(x) + ε: target values plus noise ε sampled i.i.d. from a Gaussian with noise level σ²
• some 'uniform' distribution over targets f(x) with complexity level Q_f
• data size N

goal: how does the 'overfit level' E_out(g10) − E_out(g2) vary with (N, σ²) and (N, Q_f)?
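The experiment above can be sketched roughly as follows (an illustrative reimplementation with assumed details: random-coefficient polynomial targets, Gaussian noise, Monte-Carlo averaging), measuring the overfit level E_out(g10) − E_out(g2):

```python
import numpy as np

rng = np.random.default_rng(0)

def overfit_level(N, sigma, q_f=10, trials=200):
    """Average E_out(g10) - E_out(g2) over random targets and data sets."""
    x_test = np.linspace(-1, 1, 500)
    total = 0.0
    for _ in range(trials):
        f = rng.standard_normal(q_f + 1)        # a random Q_f-th order target
        x = rng.uniform(-1, 1, N)
        y = np.polyval(f, x) + sigma * rng.standard_normal(N)
        y_test = np.polyval(f, x_test)
        e_out = {}
        for deg in (2, 10):
            g = np.polyfit(x, y, deg)
            e_out[deg] = np.mean((np.polyval(g, x_test) - y_test) ** 2)
        total += e_out[10] - e_out[2]
    return total / trials

level = overfit_level(N=15, sigma=0.5)
print(level)
```

With small N and noticeable noise, the averaged overfit level comes out positive: g10 loses to g2 out of sample despite fitting the sample better.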

(16)

## The Overfit Measure

• E_in(g10) ≤ E_in(g2) for sure
• overfit measure: E_out(g10) − E_out(g2)

(17)

## The Results

[figure: overfit level E_out(g10) − E_out(g2) as color maps, left over (N, σ²) with fixed Q_f = 20, right over (N, Q_f) with fixed σ² = 0.1]

ring a bell? :-)

(18)

## Impact of Noise and Data Size

[figure: stochastic noise, overfit level over (Number of Data Points N, Noise Level σ²); deterministic noise, overfit level over (Number of Data Points N, Target Complexity Q_f)]

four reasons of serious overfitting:

• data size N ↓: overfit ↑
• stochastic noise ↑: overfit ↑
• deterministic noise ↑: overfit ↑
• excessive power ↑: overfit ↑

overfitting 'easily' happens

(19)

## Deterministic Noise

• if f ∉ H: something of f cannot be captured by H
• deterministic noise: the difference between the best h∗ ∈ H and f
• acts like 'stochastic noise', and not new to CS
• difference to stochastic noise: depends on H, and fixed for a given x

philosophy: when teaching a kid, perhaps better not to use examples from a complicated target function? :-)

(20)

## Fun Time

Consider the target function being sin(1126x) for x ∈ [0, 2π]. When x is uniformly sampled from the range, and we use all possible linear hypotheses h(x) = w · x to approximate the target function with respect to the squared error, what is the level of deterministic noise for each x?

1. |sin(1126x)|
2. |sin(1126x) − x|
3. |sin(1126x) + x|
4. |sin(1126x) − 1126x|

Correct answer: 1. You can try a few different w and convince yourself that the best hypothesis is h∗(x) = 0. The deterministic noise is the difference between f and h∗, namely |sin(1126x)|.
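A quick numerical check of this answer (not part of the lecture; a dense-grid least-squares sketch): the best linear hypothesis has essentially w = 0, so the deterministic noise at each x is |sin(1126x)|.

```python
import numpy as np

# dense grid approximating x sampled uniformly from [0, 2*pi]
x = np.linspace(0, 2 * np.pi, 200_000)
y = np.sin(1126 * x)

# least-squares w for h(x) = w * x (no intercept): w = <x, y> / <x, x>
w = np.dot(x, y) / np.dot(x, x)
print(w)   # essentially 0: the best linear hypothesis is h*(x) = 0

det_noise = np.abs(y - w * x)   # ~ |sin(1126x)| for each x
```

The rapidly oscillating target averages out against any linear hypothesis, so no w does meaningfully better than 0.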


(22)

Hazard of Overfitting: Dealing with Overfitting

## Driving Analogy Revisited

| learning | driving |
| --- | --- |
| overfit | commit a car accident |
| use excessive dVC | 'drive too fast' |
| start from simple model | drive slowly |
| regularization | put the brakes |
| validation | monitor the dashboard |

all very practical techniques to combat overfitting

(23)

## Data Cleaning/Pruning

• if the outlier (the example at the top of the figure) can be 'detected' by some means
• possible action 1: correct the label (data cleaning)
• possible action 2: remove the example (data pruning)

possibly helps, but the effect varies
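One minimal way to realize this idea (a sketch with assumed details, not the lecture's method): flag the example with the largest residual from a simple fit, then prune it and refit.

```python
import numpy as np

# hypothetical data on a line, with one corrupted label (the 'outlier')
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[3] = -20.0                      # mislabeled example

# fit a simple model and inspect residuals
w, b = np.polyfit(x, y, 1)
residuals = np.abs((w * x + b) - y)
outlier = int(np.argmax(residuals))
print(outlier)                    # index 3 stands out

# data pruning: remove the example and refit
x2, y2 = np.delete(x, outlier), np.delete(y, outlier)
w2, b2 = np.polyfit(x2, y2, 1)
print(w2, b2)                     # recovers the true line (2, 1)
```

Data cleaning would instead replace `y[3]` with a corrected label; either way the refit is no longer dragged by the bad example.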

(24)

## Data Hinting

• slightly shifted/rotated digits carry the same meaning
• add virtual examples by shifting/rotating the given digits (data hinting)

possibly helps, but watch out: virtual examples are not i.i.d. ∼ P(x, y)!
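A sketch of generating such virtual examples (illustrative; assumes digit images stored as a batch of 2D numpy arrays): shift each image by one pixel to create extra examples with the same label.

```python
import numpy as np

def shifted_virtual_examples(images, labels):
    """Augment (images, labels) with one-pixel horizontal shifts (data hinting)."""
    shifts = [np.roll(images, s, axis=2) for s in (-1, 1)]  # shift along width
    virtual_x = np.concatenate([images] + shifts)
    virtual_y = np.concatenate([labels] * 3)                # labels unchanged
    return virtual_x, virtual_y

# tiny fake 'digit' batch: 4 images of 8x8 pixels with one lit pixel
images = np.zeros((4, 8, 8))
images[:, 4, 4] = 1.0
labels = np.array([0, 1, 2, 3])

vx, vy = shifted_virtual_examples(images, labels)
print(vx.shape, vy.shape)   # (12, 8, 8) (12,)
```

The shifted copies carry the original labels, which is exactly the slide's warning: they encode useful invariance but are no longer i.i.d. draws from P(x, y).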

(25)

## Fun Time

Assume we know that f(x) is symmetric for some 1D regression application. That is, f(x) = f(−x). One possibility of using the knowledge is to consider symmetric hypotheses only. On the other hand, you can also generate virtual examples from the original data {(x_n, y_n)} as hints. What virtual examples suit your needs best?

1. {(x_n, −y_n)}
2. {(−x_n, −y_n)}
3. {(−x_n, y_n)}
4. {(2x_n, 2y_n)}

Correct answer: 3. We want the virtual examples to encode the invariance when x → −x.
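A small check of why {(−x_n, y_n)} works (an illustrative sketch with an assumed quadratic target): after adding the mirrored virtual examples, a least-squares polynomial fit has its odd, asymmetric coefficient vanish.

```python
import numpy as np

rng = np.random.default_rng(2)

# noisy samples of a symmetric target f(x) = f(-x)
x = rng.uniform(-1, 1, 20)
y = x**2 + 0.1 * rng.standard_normal(20)

# data hinting: add the virtual examples {(-x_n, y_n)}
x_aug = np.concatenate([x, -x])
y_aug = np.concatenate([y, y])

a, b, c = np.polyfit(x_aug, y_aug, 2)   # fit a*x^2 + b*x + c
print(b)   # 0 up to round-off: the fit is forced to be symmetric
```

Because every (x, y) is paired with (−x, y), the odd-degree normal equations cancel exactly, so the hint enforces the symmetry without restricting the hypothesis set by hand.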


(27)

## Summary

### 4 How Can Machines Learn Better?

Lecture 13: Hazard of Overfitting
• What is Overfitting? lower E_in but higher E_out
• The Role of Noise and Data Size: overfitting 'easily' happens
• Deterministic Noise: what H cannot capture acts like noise
• Dealing with Overfitting: data cleaning/pruning/hinting, and more

• next: putting the brakes with regularization
