# Machine Learning Foundations (機器學習基石)

(1)

### Lecture 8: Noise and Error

Hsuan-Tien Lin (林軒田) htlin@csie.ntu.edu.tw

(2)

### 2 Why Can Machines Learn?

Lecture 7: The VC Dimension: learning happens if finite d_VC, large N, and low E_in

Lecture 8: Noise and Error

### 4 How Can Machines Learn Better?

(3)

Noise and Error / Noise and Probabilistic Target

## Recap: The Learning Flow

[learning-flow diagram: unknown target f, training examples (x_1, y_1), ..., (x_N, y_N), learning algorithm A, hypothesis set H, final hypothesis g]

### what if there is noise?

(4)

## Noise

briefly introduced before (the pocket algorithm)

customer record: year in job 0.5 year, current debt 200,000; credit? {no(−1), yes(+1)}

- noise in y: a good customer, but a wrong label?
- noise in y: same customers, different labels?
- noise in x: inaccurate customer information?

does the VC bound work under noise?

(5)

## Probabilistic Marbles

one key of the VC bound: the bin of marbles (top: sample, bottom: bin)

- deterministic marble: x ∼ P(x), deterministic color ⟦f(x) ≠ h(x)⟧
- probabilistic marble: x ∼ P(x), probabilistic color ⟦y ≠ h(x)⟧ with y ∼ P(y|x)

same nature: can estimate P[orange] if the marbles are i.i.d.

VC holds for (x, y) i.i.d. ∼ P(x, y)
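The probabilistic-marble view can be simulated directly. Below is a minimal Monte Carlo sketch (the distribution, the hypothesis, and all names are illustrative assumptions, not from the lecture): each marble is an (x, y) pair drawn i.i.d. from P(x, y), and the in-sample fraction of marbles with y ≠ h(x) estimates h's error even though the color is probabilistic.

```python
import random

def sample_point(rng):
    """One marble (x, y) ~ P(x, y): x uniform on [0, 1],
    y agrees with sign(x - 0.5) only with probability 0.8."""
    x = rng.random()
    ideal = 1 if x > 0.5 else -1
    y = ideal if rng.random() < 0.8 else -ideal
    return x, y

def h(x):
    """A hypothesis that happens to match the ideal mini-target."""
    return 1 if x > 0.5 else -1

rng = random.Random(0)
N = 100_000
errors = 0
for _ in range(N):
    x, y = sample_point(rng)
    if y != h(x):
        errors += 1
est = errors / N
print(est)  # close to the 0.2 noise level
```

Even a perfect h cannot go below the noise level here, which previews why the ideal mini-target is the best one can do.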

(6)

## Target Distribution P(y|x)

characterizes the label behavior on one x

- can be viewed as 'ideal mini-target' + noise
- a deterministic target f is a special case:
  - P(y|x) = 1 for y = f(x)
  - P(y|x) = 0 for y ≠ f(x)

goal of learning: predict the ideal mini-target (w.r.t. P(y|x)) on often-seen inputs (w.r.t. P(x))

(7)

## The New Learning Flow

[learning-flow diagram: the unknown target is now a distribution P(y|x), and the training examples (x_1, y_1), ..., (x_N, y_N) are drawn i.i.d. from P(x, y)]

VC still works; pocket algorithm explained :-)

(8)

## Fun Time

Which of the following statements is true?

1. In practice, we should try to compute if D is linearly separable before deciding to use PLA.
2. If we know that D is not linearly separable, then the target function f must not be a linear function.
3. If we know that D is linearly separable, then the target function f must be a linear function.
4. None of the above.

Reference answer: 4

1. After computing if D is linearly separable, we would know w∗, and then there is no need to use PLA. 2. What about noise? 3. What about 'sampling luck'? :-)

(9)

Noise and Error / Error Measure

## Error Measure

how well does the final hypothesis g approximate f? previously, considered the out-of-sample measure

E_out(g) = E_{x∼P} ⟦g(x) ≠ f(x)⟧

more generally, an error measure E(g, f); naturally considered:

- out-of-sample: averaged over unknown x
- pointwise: evaluated on one x
- classification: ⟦prediction ≠ target⟧

the classification error ⟦...⟧ is often also called the '0/1 error'

Noise and Error Error Measure

## Pointwise Error Measure

can often express E (g, f ) = averaged err(g(x), f (x)), like E

(g) = E

—err: called

E

(g) = 1 N

X

err(g(x

),f (x

))

E

(g) = E

### x∼P

err(g(x),f (x))

will mainly consider pointwise

### err

for simplicity
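The two averages above are a one-liner once err is pluggable; a minimal sketch in Python (the toy hypothesis and data are made up for illustration):

```python
def e_in(g, err, xs, ys):
    """In-sample error: average of the pointwise err over the N examples."""
    return sum(err(g(x), y) for x, y in zip(xs, ys)) / len(xs)

err_01 = lambda y_hat, y: int(y_hat != y)   # 0/1 error
err_sq = lambda y_hat, y: (y_hat - y) ** 2  # squared error

g = lambda x: 1 if x >= 0 else -1           # a toy hypothesis
xs = [-2.0, -0.5, 0.3, 1.0]
ys = [-1, +1, +1, +1]
print(e_in(g, err_01, xs, ys))  # 0.25: one of the four points is misclassified
```

Swapping err_01 for err_sq changes what the same E_in computation measures, which is exactly why the choice of err matters.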

(11)

## Two Important Pointwise Error Measures

- 0/1 error: err(ỹ, y) = ⟦ỹ ≠ y⟧; correct or incorrect? often used for classification
- squared error: err(ỹ, y) = (ỹ − y)²; how far is ỹ from y? often used for regression

how does err 'guide' learning?

(12)

## Ideal Mini-Target

interplay between noise and error: P(y|x) and err define the ideal mini-target f(x)

P(y = 1|x) = 0.2, P(y = 2|x) = 0.7, P(y = 3|x) = 0.1

0/1 error, err(ỹ, y) = ⟦ỹ ≠ y⟧:

- ỹ = 1: avg. err 0.8
- ỹ = 2: avg. err 0.3 (∗)
- ỹ = 3: avg. err 0.9
- ỹ = 1.9: avg. err 1.0 (really? :-))

f(x) = argmax_{y∈Y} P(y|x)

squared error, err(ỹ, y) = (ỹ − y)²:

- ỹ = 1: avg. err 1.1
- ỹ = 2: avg. err 0.3
- ỹ = 3: avg. err 1.5
- ỹ = 1.9: avg. err 0.29 (∗)

f(x) = Σ_{y∈Y} y · P(y|x)
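The table's averages are easy to verify numerically; a short sketch (helper names are mine, not the lecture's):

```python
P = {1: 0.2, 2: 0.7, 3: 0.1}  # target distribution P(y|x) at one fixed x

def avg_err(y_hat, err):
    """Expected pointwise error of predicting y_hat under P(y|x)."""
    return sum(p * err(y_hat, y) for y, p in P.items())

err_01 = lambda y_hat, y: int(y_hat != y)
err_sq = lambda y_hat, y: (y_hat - y) ** 2

print(round(avg_err(2, err_01), 6))    # 0.3: best for 0/1 error (argmax P)
mean = sum(y * p for y, p in P.items())
print(round(mean, 6))                  # 1.9: the weighted mean
print(round(avg_err(1.9, err_sq), 6))  # 0.29: best for squared error
```

The same P(y|x) yields a different ideal prediction for each err, which is the whole point of the slide.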

(13)

## Learning Flow with Error Measure

[learning-flow diagram: same flow as before, with the error measure err now entering both the learning algorithm A and the evaluation of the final hypothesis g]

extended VC theory/'philosophy' works for most H and err

(14)

## Fun Time

Consider the following P(y|x) and err(ỹ, y) = |ỹ − y|. Which of the following is the ideal mini-target f(x)?

P(y = 1|x) = 0.10, P(y = 2|x) = 0.35, P(y = 3|x) = 0.15, P(y = 4|x) = 0.40

1. 3 = weighted median from P(y|x)
2. 2.5 = average within Y = {1, 2, 3, 4}
3. 2.85 = weighted mean from P(y|x)
4. 4 = argmax P(y|x)

Reference answer: 1

For the 'absolute error', the weighted median provably results in the minimum average err.
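The reference answer can be double-checked in a few lines (helper names are mine): the weighted median is the smallest y whose cumulative probability reaches 1/2, and it beats the other candidates under the absolute error.

```python
P = {1: 0.10, 2: 0.35, 3: 0.15, 4: 0.40}

def weighted_median(dist):
    """Smallest y whose cumulative probability reaches 1/2."""
    cum = 0.0
    for y in sorted(dist):
        cum += dist[y]
        if cum >= 0.5:
            return y

def avg_abs_err(y_hat):
    """Expected |y_hat - y| under P(y|x)."""
    return sum(p * abs(y_hat - y) for y, p in P.items())

print(weighted_median(P))                         # 3
print(round(avg_abs_err(weighted_median(P)), 6))  # 0.95
print(round(avg_abs_err(2.85), 6))                # 0.965: weighted mean loses
```

Note that the weighted mean (2.85), optimal for squared error, is strictly worse here: 0.965 vs. 0.95 average absolute error.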

(15)

Noise and Error Algorithmic Error Measure

## Choice of Error Measure

### −1 intruder

two types of error:

and

g

+1 -1

f +1

-1

### false accept no error

0/1 error penalizes both types

### equally

(16)

## Fingerprint Verification for Supermarket

f = +1 (you), f = −1 (intruder); two types of error: false accept and false reject

cost matrix:

|        | g = +1 | g = −1 |
|--------|--------|--------|
| f = +1 | 0      | 10     |
| f = −1 | 1      | 0      |

- supermarket: fingerprint for discount
- false reject: very unhappy customer, may lose future business
- false accept: give away a minor discount, intruder left fingerprint :-)

(17)

## Fingerprint Verification for CIA

f = +1 (you), f = −1 (intruder); two types of error: false accept and false reject

cost matrix:

|        | g = +1 | g = −1 |
|--------|--------|--------|
| f = +1 | 0      | 1      |
| f = −1 | 1000   | 0      |

- CIA: fingerprint for entrance
- false accept: very serious consequences!
- false reject: unhappy employee, but so what? :-)

(18)

## Take-home Message for Now

err is application/user-dependent; algorithmic error measures êrr:

- true: just use err, but often hard to optimize
- plausible: 0/1 (minimum 'flipping' noise), squared (minimum Gaussian noise)
- friendly: easy to optimize for A, e.g. a convex objective function

êrr: more in next lectures

(19)

## Learning Flow with Algorithmic Error Measure

[learning-flow diagram: the true error measure err enters the evaluation; the algorithmic error measure êrr enters the learning algorithm A]

err: application goal; êrr: a key part of many A

(20)

## Fun Time

Consider the CIA cost matrix. Which of the following is a reasonable weighted in-sample error E_in^w(g) for it?

1. Σ_n ⟦y_n ≠ g(x_n)⟧
2. Σ_{y_n = +1} ⟦y_n ≠ g(x_n)⟧ + 1000 Σ_{y_n = −1} ⟦y_n ≠ g(x_n)⟧
3. Σ_{y_n = +1} ⟦y_n ≠ g(x_n)⟧ − 1000 Σ_{y_n = −1} ⟦y_n ≠ g(x_n)⟧
4. 1000 Σ_{y_n = +1} ⟦y_n ≠ g(x_n)⟧ + Σ_{y_n = −1} ⟦y_n ≠ g(x_n)⟧

Reference answer: 2

When y_n = −1, the err(g(x_n), y_n) is penalized 1000 times more!

(21)

Noise and Error / Weighted Classification

## Weighted Classification

E_out^w(h) = E_{(x,y)∼P} { 1 if y = +1; 1000 if y = −1 } · ⟦y ≠ h(x)⟧

E_in^w(h) = (1/N) Σ_{n=1}^{N} { 1 if y_n = +1; 1000 if y_n = −1 } · ⟦y_n ≠ h(x_n)⟧

weighted classification: different 'weight' for different (x, y)
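E_in^w(h) is a one-line change from the plain 0/1 version; a minimal sketch (the constant hypothesis and data are illustrative, not from the lecture):

```python
def e_in_weighted(h, xs, ys, w_pos=1, w_neg=1000):
    """Weighted in-sample error: a mistake on y = -1 (false accept)
    costs w_neg, a mistake on y = +1 (false reject) costs w_pos."""
    total = 0
    for x, y in zip(xs, ys):
        if h(x) != y:
            total += w_pos if y == +1 else w_neg
    return total / len(xs)

h = lambda x: +1          # lazy constant hypothesis: accept everyone
xs = [0.1, 0.4, 0.6, 0.9]
ys = [+1, +1, +1, -1]     # the last visitor is an intruder
print(e_in_weighted(h, xs, ys))  # 250.0: the single false accept costs 1000
```

Under plain 0/1 error the same h would score only 0.25, which shows how the weights change which hypotheses look good.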

(22)

## Minimizing E_in for Weighted Classification

E_in^w(h) = (1/N) Σ_{n=1}^{N} { 1 if y_n = +1; 1000 if y_n = −1 } · ⟦y_n ≠ h(x_n)⟧

- PLA: doesn't matter, if D is linearly separable :-)
- pocket: modify the replacement rule: if w_{t+1} reaches smaller E_in^w than ŵ, replace ŵ by w_{t+1}

pocket: some guarantee on E_in^{0/1}; modified pocket: similar guarantee on E_in^w?

(23)

## Systematic Route: Connect E_in^w and E_in^{0/1}

original problem D: ..., (x_n, −1), ...; equivalent problem D': copy each (x_n, −1) example 1000 times

after copying: E_in^w for the LHS ≡ E_in^{0/1} for the RHS!
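The 'virtual copying' equivalence can be checked numerically; a sketch under the CIA weights (names are mine): the weighted mistake total on the original data equals the plain 0/1 mistake count on the data with each (x, −1) example copied 1000 times.

```python
def total_weighted_err(h, data, w_neg=1000):
    """Unnormalized E_in^w: each mistake on a -1 example costs w_neg."""
    return sum((w_neg if y == -1 else 1) * (h(x) != y) for x, y in data)

def total_01_err(h, data):
    """Unnormalized E_in^{0/1}: plain mistake count."""
    return sum(h(x) != y for x, y in data)

data = [(0.1, +1), (0.4, +1), (0.6, -1), (0.9, -1)]
copied = [(x, y) for x, y in data for _ in range(1000 if y == -1 else 1)]

h = lambda x: +1 if x < 0.7 else -1   # misclassifies only (0.6, -1)
assert total_weighted_err(h, data) == total_01_err(h, copied)
print(total_weighted_err(h, data))  # 1000
```

Only the totals match exactly; the two averages differ by the constant factor |D'|/|D|, which does not change which h minimizes them.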

(24)

## Weighted Pocket Algorithm

cost matrix:

|        | g = +1 | g = −1 |
|--------|--------|--------|
| f = +1 | 0      | 1      |
| f = −1 | 1000   | 0      |

using 'virtual copying', the changes include:

- weighted PLA: randomly check the mistakes on (x_n, −1) examples with 1000 times more probability
- weighted pocket replacement: if w_{t+1} reaches smaller E_in^w than ŵ, replace ŵ by w_{t+1}

systematic route (called 'reduction'): can be applied to many other algorithms!
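Putting the two modifications together, here is a compact sketch of a weighted pocket (a plain-Python illustration of the idea, not the course's reference code; the toy data set is mine):

```python
import random

def predict(w, x):
    """Perceptron output with bias term w[0]."""
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if s > 0 else -1

def weighted_err(w, data, weights):
    """Unnormalized E_in^w of the perceptron w."""
    return sum(c for (x, y), c in zip(data, weights) if predict(w, x) != y)

def weighted_pocket(data, weights, T=200, seed=0):
    """Weighted PLA + pocket: sample a mistake with probability proportional
    to its weight (the 'virtual copying' view), update as in PLA, and keep
    the best weight vector seen so far under E_in^w."""
    rng = random.Random(seed)
    w = [0.0] * (len(data[0][0]) + 1)
    best_w, best_e = list(w), weighted_err(w, data, weights)
    for _ in range(T):
        mistakes = [i for i, (x, y) in enumerate(data) if predict(w, x) != y]
        if not mistakes:
            return w  # separable: everything classified correctly
        i = rng.choices(mistakes, weights=[weights[j] for j in mistakes])[0]
        x, y = data[i]
        w = [w[0] + y] + [wj + y * xj for wj, xj in zip(w[1:], x)]
        e = weighted_err(w, data, weights)
        if e < best_e:
            best_w, best_e = list(w), e
    return best_w

data = [((1.0,), +1), ((2.0,), +1), ((-1.0,), -1)]
weights = [1, 1, 1000]  # mistakes on the -1 example are 1000x as costly
g = weighted_pocket(data, weights)
print(weighted_err(g, data, weights))  # 0 on this separable toy set
```

The only departures from plain pocket are the weighted sampling of which mistake to correct and the E_in^w comparison in the pocket-replacement step.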

(25)

## Fun Time

1. 0.001
2. 0.01
3. 0.1
4. 1

While the quiz is a simple evaluation, it is not uncommon that the data is very unbalanced for such an application. Properly 'setting' the weights can be used to avoid the lazy constant prediction.

(26)

## Summary

### 2 Why Can Machines Learn?

Lecture 8: Noise and Error

- Noise and Probabilistic Target
- Error Measure
- Algorithmic Error Measure
- Weighted Classification

### 4 How Can Machines Learn Better?
