(1)

-Artificial Neural Network-

Chapter 5 Back Propagation Network

朝陽科技大學

資訊管理系

李麗華 教授

(2)

Introduction (1)

• BPN = Back Propagation Network

• BPN is a layered feedforward supervised network.

• BPN provides an effective means of allowing a computer to examine data patterns that may be incomplete or noisy.

• BPN can take various types of input, e.g., binary data or real-valued data.

• The output range of BPN depends on the transfer function used.

(1) If the sigmoid function is used, then the output satisfies 0 ≤ y ≤ 1.

(2) If the hyperbolic tangent function is used, then the output satisfies -1 ≤ y ≤ 1.
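As a quick illustration of the two output ranges above, the short Python sketch below evaluates both transfer functions at a few points (the function names are ours, not part of the original slides):

```python
import math

def sigmoid(x):
    # Logistic sigmoid: every output lies in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def hyperbolic_tangent(x):
    # Hyperbolic tangent: every output lies in (-1, 1).
    return math.tanh(x)

for x in (-5.0, 0.0, 5.0):
    print(f"x = {x:+.1f}   sigmoid = {sigmoid(x):.4f}   tanh = {hyperbolic_tangent(x):+.4f}")
```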

(3)


Introduction (2)

Architecture:

[Architecture diagram: input nodes X1, X2, …, Xn fully connected to hidden nodes H1, H2, …, Hh (with thresholds θ1, θ2, …, θh), which are connected to output nodes Y1, Y2, …, Yj]

(4)

Introduction (3)

•Input layer: [X1, X2, …, Xn].

•Hidden layer: there can be more than one hidden layer. Each hidden node derives its weighted sum net1, net2, …, neth and the transferred output H1, H2, …, Hh; these Hh values are then used as the input for deriving the output-layer results.

•Output layer: [Y1, …, Yj].

•Weights: Wij.

•Transfer function: nonlinear → sigmoid function.

(*) The nodes in the hidden layers organize themselves through the sigmoid transfer function:

f(netj) = 1 / (1 + e^(-netj))

(5)


Introduction (4)

•The applications of BPN are quite broad:

– Pattern Recognition (sample identification; character recognition)

– Prediction (stock market prediction)

– Classification (customer segmentation)

– Learning (learning from data)

– Control (feedback and control)

– CRM (customer-service segmentation)

(6)

Processing Steps (1)

The processing steps can be briefly described as follows.

1. Based on the problem domain, set up the network.

2. Randomly generate weights Wij.

3. Feed a training set, [X1, X2, …, Xn], into the BPN.

4. Compute the weighted sum and apply the transfer function at each node in each layer, feeding the transferred data to the next layer until the output layer is reached.

5. The output pattern is compared to the desired output and an error is computed for each unit.

(7)


Processing Steps (2)

6. Feed the error back to each node in the hidden layer.

7. Each unit in the hidden layer receives only a portion of the total error; these errors are then fed back toward the input layer.

8. Go to step 4 until the error is very small.

9. Repeat from step 3 for another training pattern (the whole loop is sketched below).
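The following Python sketch shows one way the loop in steps 1–9 could be organized for a single-hidden-layer BPN with sigmoid transfer functions; the function name, array shapes, and default parameters are our assumptions, and the per-pattern update rules are the ones detailed on the later slides.

```python
import numpy as np

def train_bpn(train_x, train_t, n_hidden, eta=0.5, tol=1e-3, max_epochs=10000):
    """Sketch of BPN training steps 1-9 (single hidden layer, sigmoid transfer)."""
    rng = np.random.default_rng(0)
    n_in, n_out = train_x.shape[1], train_t.shape[1]
    # Step 2: randomly generate the weights and thresholds.
    W_ih = rng.uniform(-1, 1, (n_in, n_hidden))
    W_hj = rng.uniform(-1, 1, (n_hidden, n_out))
    theta_h = rng.uniform(-1, 1, n_hidden)
    theta_j = rng.uniform(-1, 1, n_out)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(max_epochs):
        worst = 0.0
        for x, t in zip(train_x, train_t):            # steps 3 and 9: one pattern at a time
            H = sigmoid(x @ W_ih - theta_h)           # step 4: input layer -> hidden layer
            Y = sigmoid(H @ W_hj - theta_j)           # step 4: hidden layer -> output layer
            delta_j = Y * (1 - Y) * (t - Y)           # step 5: error at each output unit
            delta_h = H * (1 - H) * (W_hj @ delta_j)  # steps 6-7: error fed back to hidden layer
            W_hj += eta * np.outer(H, delta_j)        # weight/threshold corrections
            theta_j += -eta * delta_j
            W_ih += eta * np.outer(x, delta_h)
            theta_h += -eta * delta_h
            worst = max(worst, float(np.max(np.abs(t - Y))))
        if worst < tol:                               # step 8: stop when the error is very small
            break
    return W_ih, theta_h, W_hj, theta_j
```

A per-pattern (online) update is used here to match the step-by-step description; batch updates are an equally common design choice.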

(8)

Computation Processes(1/10)

•The detailed computation processes of BPN.

1. Set up the network according to the required input nodes and output nodes. Also, properly choose the number of hidden layers and hidden nodes.

2. Randomly assign the weights.

3. Feed the training pattern (set) into the network and do the following computation.

[Network diagram: inputs X1, …, Xi, …, Xn; hidden nodes H1, …, Hh with weights Wih, weighted sums net1, …, neth and thresholds θ1, …, θh; output nodes Yj with thresholds θj]

(9)


Computation Processes(2/10)

4. Compute from the input layer to the hidden layer for each node (see the sketch below):

neth = Σi Wih Xi - θh,   Hh = f(neth) = 1 / (1 + e^(-neth))

5. Compute from the hidden layer to the output layer for each node:

netj = Σh Whj Hh - θj,   Yj = f(netj) = 1 / (1 + e^(-netj))
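A minimal vectorized sketch of steps 4 and 5, assuming NumPy arrays for the weights and thresholds (the helper names and shapes are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_ih, theta_h, W_hj, theta_j):
    """Steps 4-5: propagate one input pattern through the hidden and output layers."""
    net_h = x @ W_ih - theta_h   # net_h = sum_i W_ih * X_i - theta_h
    H = sigmoid(net_h)           # hidden outputs H_h = f(net_h)
    net_j = H @ W_hj - theta_j   # net_j = sum_h W_hj * H_h - theta_j
    Y = sigmoid(net_j)           # network outputs Y_j = f(net_j)
    return H, Y
```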

(10)

Computation Processes(3/10)

6. Calculate the total error and find the difference terms for correction:

δj = Yj (1 - Yj)(Tj - Yj),   δh = Hh (1 - Hh) Σj Whj δj

7. Compute the corrections:

ΔWhj = η δj Hh,   ΔΘj = -η δj,   ΔWih = η δh Xi,   ΔΘh = -η δh

8. Update the weights and thresholds (see the sketch after this list):

Whj = Whj + ΔWhj,   Wih = Wih + ΔWih,   Θj = Θj + ΔΘj,   Θh = Θh + ΔΘh

9. Repeat steps 4~8 until the error is very small.

10. Repeat steps 3~9 until all the training patterns are learned.
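A companion sketch of steps 6–8, continuing the assumed NumPy layout above (the function name and default learning rate are ours):

```python
import numpy as np

def backward_update(x, H, Y, T, W_ih, theta_h, W_hj, theta_j, eta=0.5):
    """Steps 6-8: compute the deltas and apply the weight/threshold corrections in place."""
    delta_j = Y * (1 - Y) * (T - Y)            # output-layer delta
    delta_h = H * (1 - H) * (W_hj @ delta_j)   # hidden-layer delta
    W_hj += eta * np.outer(H, delta_j)         # ΔW_hj = η δ_j H_h
    theta_j += -eta * delta_j                  # ΔΘ_j = -η δ_j
    W_ih += eta * np.outer(x, delta_h)         # ΔW_ih = η δ_h X_i
    theta_h += -eta * delta_h                  # ΔΘ_h = -η δ_h
    return W_ih, theta_h, W_hj, theta_j
```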

(11)


EX: Use BPN to solve XOR (1)

• Use BPN to solve the XOR problem

• Let W11=1, W21= -1, W12= -1, W22=1, W13=1, W23=1, Θ1=1, Θ2=1,Θ3=1, η=10

T    X1   X2
0     1    1
1     1   -1
1    -1    1
0    -1   -1

[Network diagram: inputs X1, X2; hidden nodes H1 (weights W11, W21, threshold Θ1) and H2 (weights W12, W22, threshold Θ2); output Y1 (weights W13, W23, threshold Θ3)]

(12)

EX: Use BPN to solve XOR (2)

• ΔW12=ηδ1 X1 =(10)(-0.018)(-1)=0.18

• ΔW21=ηδ1 X2 =(10)(-0.018)(-1)=0.18

• ΔΘ1 =-ηδ1 = -(10)(-0.018)=0.18

• The weight values after the first correction are shown below (and reproduced in the code sketch that follows).

[Updated network diagram after the first correction: W11 = 1.18, W22 = 1.18, W12 = -0.82, W21 = -0.82, W13 = W23 = 0.754, Θ1 = Θ2 = 1.18, Θ3 ≈ 1.915]
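The update above can be reproduced with the following sketch, assuming the pattern presented is (X1, X2) = (-1, -1) with target T = 0, which is consistent with the δ ≈ -0.018 shown on this slide; the array layout is ours.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Initial weights and thresholds from the example, eta = 10.
W_ih = np.array([[1.0, -1.0],    # rows: X1, X2; columns: H1, H2 (W11 W12 / W21 W22)
                 [-1.0, 1.0]])
W_hj = np.array([1.0, 1.0])      # W13, W23 (H1, H2 -> Y1)
theta_h = np.array([1.0, 1.0])   # Θ1, Θ2
theta_j = 1.0                    # Θ3
eta = 10.0

x, t = np.array([-1.0, -1.0]), 0.0   # assumed first training pattern

H = sigmoid(x @ W_ih - theta_h)           # hidden outputs
Y = sigmoid(H @ W_hj - theta_j)           # network output
delta_j = Y * (1 - Y) * (t - Y)           # output delta
delta_h = H * (1 - H) * W_hj * delta_j    # hidden deltas (about -0.018 each)

W_hj = W_hj + eta * delta_j * H           # -> about 0.754, 0.754
theta_j = theta_j - eta * delta_j         # -> about 1.92
W_ih = W_ih + eta * np.outer(x, delta_h)  # -> W11 = 1.18, W12 = W21 = -0.82, W22 = 1.18
theta_h = theta_h - eta * delta_h         # -> 1.18, 1.18

print(delta_h, W_hj, theta_j, W_ih, theta_h)
```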

(13)


BPN Discussion

1. As the number of hidden nodes increases, convergence becomes slower, but the error can be reduced further.

2. Common rules of thumb for choosing the number of hidden nodes (see the small helper below):

# of hidden nodes = (input nodes + output nodes) / 2, or

# of hidden nodes = (input nodes × output nodes)^(1/2)

3. Usually 1~2 hidden layers are enough for learning a complex problem. Too many layers make learning very slow. When the problem is high-dimensional and very complex, an extra layer can be used.

4. The learning rate η is usually set within [0.1, 1.0], but it depends on how fast and how finely the network should learn.
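A tiny helper evaluating the two rules of thumb from point 2 (the function name is ours):

```python
import math

def suggested_hidden_nodes(n_input, n_output):
    """Return both rule-of-thumb hidden-node counts from the discussion above."""
    arithmetic_mean_rule = (n_input + n_output) / 2
    geometric_mean_rule = math.sqrt(n_input * n_output)
    return arithmetic_mean_rule, geometric_mean_rule

print(suggested_hidden_nodes(8, 2))   # -> (5.0, 4.0)
```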

(14)

The Gradient Steepest Descent Method(SDM) (1)

•The gradient steepest descent method.

•Recall:

netj^n = Σi Wij Ai^(n-1),   E = (1/2) Σj (Tj - Aj)^2

•We want the difference between the computed output and the expected output to get close to 0.

•Therefore, we want to obtain ∂E/∂Wij so that we can update the weights to improve the network results:

ΔWij = -η ∂E/∂Wij

(15)


The Gradient Steepest Descent Method(SDM) (2)

Apply the chain rule to ∂E/∂Wij:

∂E/∂Wij = (∂E/∂netj^n)(∂netj^n/∂Wij) = (∂E/∂Aj^n)(∂Aj^n/∂netj^n)(∂netj^n/∂Wij)   ……(3)(2)(1)

For (1):

∂netj^n/∂Wij = ∂(Σk Wkj Ak^(n-1))/∂Wij = Ai^(n-1)

For (2):

∂Aj^n/∂netj^n = ∂f(netj^n)/∂netj^n = f'(netj^n)

For (3-1), when n is the output layer:

∂E/∂Aj^n = ∂[(1/2) Σk (Tk - Ak^n)^2]/∂Aj^n = -(Tj - Aj^n)

For (3-2), when n is a hidden layer:

∂E/∂Aj^n = Σk (∂E/∂netk^(n+1))(∂netk^(n+1)/∂Aj^n) = Σk (∂E/∂netk^(n+1)) Wjk

(16)

The Gradient Steepest Descent Method(SDM) (3)

From (1), (2), and (3) we obtain two types of δ values, where δj^n ≡ -∂E/∂netj^n.

When n is the output layer:

∂E/∂netj^n = -(Tj - Aj^n) f'(netj^n)   ……(A)

∂E/∂Wij = (∂E/∂netj^n) Ai^(n-1)   ……(B)

Substituting (A) into (B), we get

δj^n = (Tj - Aj^n) f'(netj^n)   and   ∂E/∂Wij = -δj^n Ai^(n-1)


The Gradient Steepest Descent Method(SDM) (4)

When n is a hidden layer:

∂E/∂netj^n = -[Σk δk^(n+1) Wjk] f'(netj^n)   ……(A)

∂E/∂Wij = (∂E/∂netj^n) Ai^(n-1)   ……(B)

Substituting (A) into (B), we get

δj^n = [Σk δk^(n+1) Wjk] f'(netj^n)

Therefore the weight update is

ΔWij = -η ∂E/∂Wij = η δj^n Ai^(n-1),   Wij = Wij + ΔWij

(18)

The Gradient Steepest Descent Method(SDM) (5)

Derivative of the sigmoid transfer function:

f(netj) = 1 / (1 + e^(-netj))

f'(netj) = d/dnetj [(1 + e^(-netj))^(-1)]
         = [-(1 + e^(-netj))^(-2)] (-e^(-netj))
         = e^(-netj) / (1 + e^(-netj))^2
         = f(netj)(1 - f(netj))

Therefore:

δj = (Tj - Yj) Yj (1 - Yj)          if n is the output layer

δj = [Σk Wjk δk] Hj (1 - Hj)        if n is a hidden layer
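The identity f'(net) = f(net)(1 - f(net)) derived above can be checked numerically; the finite-difference step size below is our own choice:

```python
import math

f = lambda x: 1.0 / (1.0 + math.exp(-x))   # sigmoid transfer function

def f_prime_analytic(x):
    # Closed form derived above: f'(x) = f(x) * (1 - f(x)).
    return f(x) * (1.0 - f(x))

def f_prime_numeric(x, h=1e-6):
    # Central finite difference for comparison.
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in (-2.0, 0.0, 2.0):
    print(x, f_prime_analytic(x), f_prime_numeric(x))
```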

(19)


The Gradient Steepest Descent Method(SDM) (6)

• Learning computation

1. Compute the value of the hidden layer:

neth = Σi Wih Xi - θh,   Hh = f(neth) = 1 / (1 + e^(-neth))

2. Compute the value of the output layer:

netj = Σh Whj Hh - θj,   Yj = f(netj) = 1 / (1 + e^(-netj))

3. Compute the value differences for correction:

δj = Yj (1 - Yj)(Tj - Yj)

δh = Hh (1 - Hh) Σj Whj δj

(20)

The Gradient Steepest Descent Method(SDM) (7)

4. Compute the values to be updated:

ΔWhj = η δj Hh,   ΔΘj = -η δj

ΔWih = η δh Xi,   ΔΘh = -η δh

5. Update the weights and thresholds:

Whj = Whj + ΔWhj,   Wih = Wih + ΔWih

Θj = Θj + ΔΘj,   Θh = Θh + ΔΘh
