(1)

- Artificial Neural Network (ANN) -

Chapter 1 Introduction

朝陽科技大學 (Chaoyang University of Technology)

Department of Information Management (資訊管理系)

Professor 李麗華

(2)

Background

• Since ancient times, people have wondered why humans possess so many abilities. We have long been curious about the functions of the brain, and have continually observed, from a biological perspective, how human intelligence arises and how it operates.

(Image sources: http://www.dls.ym.edu.tw/neuroscience/nsdivide_c.htm#cns and http://www.dls.ym.edu.tw/neuroscience/alz_c.html)

• As early as the 1940s, when computers first appeared, many scholars had already begun to study these questions.

(3)

Anecdotes of human curiosity and exploration

1. Master anatomist Leonardo da Vinci:

Da Vinci was not only an Italian architect, sculptor, inventor, engineer, and painter, but also an anatomist. He dissected some 30 human bodies and, based on those dissections, produced more than 200 drawings, including the form of the skull and various cross sections of the brain (transverse, sagittal, and coronal). His manuscripts also discuss human memory, intelligence, and related topics.

(Source: http://zh.wikipedia.org/
http://zh.wikipedia.org/w/index.php?title=%E5%88%97%E5%A5%A5%E7%BA%B3%E5%A4%9A%C2%B7%E8%BE%BE%E8%8A%AC%E5%A5%87&variant=zh-tw)

(4)

Anecdotes of human curiosity and exploration

2. Everyone loves Einstein's brain

- In 1985, a team led by the American neuroscientist Marian Diamond published the first study of Einstein's brain.

- The report found that in Einstein's left parietal lobe, the ratio of neurons to glial cells was lower than in ordinary people.

- According to earlier research, the mammalian neuron-to-glia ratio tends to decrease from mouse to human, so some scholars speculated that the more complex the functions neurons perform, the more glial support they need.

- A second paper, published in 1996 by Britt Anderson, an assistant professor of neurology, found a higher density of neurons in Einstein's cerebral cortex. This suggests that his cortical neurons signaled with greater efficiency, which could explain his extraordinary genius.

(5)

Biological Neurons

• Humans have long had a great interest in unraveling the brain and how we think. Early scholars sought to grasp the brain's mysteries through its structure and mode of operation.

• The human brain: roughly 100 billion neurons, each with about 1,000 neural connections (about 100 trillion connections in total).

• Main elements of a neuron:

- Axon
- Cell body (neuron)
- Nucleus
- Dendrites
- Synapse

• Mode of operation: transmits chemical substances (neurotransmitters).

Two statuses: excitatory / inhibitory.

(6)

A Slice of Neurons

Question: What is this?

(7)

[Figure: a spinal motor neuron, a hippocampal pyramidal cell, and a Purkinje cell of the cerebellum]

(8)

[Figure: the anatomy of a neuron — presynaptic cell and postsynaptic cells; cell body (perikaryon) with nucleus; basal and apical dendrites; axon hillock, initial segment, myelin sheath, and node of Ranvier; excitatory and inhibitory terminal fibers of axons; presynaptic terminal, synaptic cleft, and postsynaptic dendrite]

(9)
(10)

Introduction to ANN

Def: A structure (network) composed of a number of interconnected units (artificial neurons). Each unit has an I/O characteristic and implements a local computation or function.

The output of any unit is determined by its I/O characteristic, its interconnections with other units, and (possibly) external inputs.

Although “hand crafting” of the network is possible, the network usually develops an overall functionality through one or more forms of training.

(11)

Introduction to ANN

Definition of ANN (translated from the Chinese):

An artificial neural network is a computing system, encompassing both software and hardware, that uses a large number of simple, interconnected artificial neurons to imitate the abilities of biological neural networks.

An artificial neuron is a simple model of a biological neuron: it receives information from the external environment or from other artificial neurons, performs a simple computation on it, and outputs the result to the external environment or to other artificial neurons.

(12)

ANN History

1. Creation Age (before 1956)
2. Birth Age (1957-1968)
3. Dark Age (1969-1981)
4. Reborn Age (1982-1986)
5. Mature Age (1987-present)

(13)

Creation Age

• In 1943, McCulloch & Pitts proposed the first mathematical model of the neuron, even before the first computers were built. Their premises (assumptions), sketched in code below, are:

1. A neuron has exactly two states, i.e., all or none (excitatory or inhibitory).

2. A neuron is triggered once a certain amount of synaptic input has accumulated, and the triggering is independent of the neuron's previous state.

3. The only significant delay is synaptic delay.

4. A neuron in the inhibitory state cannot be triggered.

5. The structure of the neural net does not change.
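
To make these premises concrete, here is a minimal Python sketch (ours, not from the slides) of a McCulloch-Pitts unit; the AND example, weights, and threshold are illustrative assumptions.

def mcculloch_pitts(inputs, weights, threshold, inhibitory=()):
    # premise 4: an active inhibitory input blocks firing entirely
    if any(inhibitory):
        return 0
    total = sum(w * x for w, x in zip(weights, inputs))
    # premises 1-2: all-or-none output once the accumulated input reaches the threshold
    return 1 if total >= threshold else 0

# Example: a 2-input AND unit (fixed weights; the net structure never changes)
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))   # -> 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))   # -> 0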

(14)

Creation Age

• In 1949, Hebb proposed the Hebb learning rule (a small sketch follows the list):

1. Information resides in the synapses.

2. Learning takes place by modifying the synapses.

3. Weights are symmetrical.

4. When a group of weak neurons is triggered together, the strength of the connections between them increases. (That is, signals from neighboring neurons can accumulate and strengthen the weight values.)
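
As an illustration of point 4, a minimal sketch of a Hebbian weight update, assuming the commonly used form delta_w = eta * x * y (the slide states the principle, not this exact formula):

def hebb_update(w, x, y, eta=0.1):
    # strengthen each weight whose input fires together with the output
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0]
w = hebb_update(w, x=[1, 1], y=1)   # co-activation strengthens both weights
print(w)                            # -> [0.1, 0.1]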

(15)

Birth Age

• In 1957, Rosenblatt proposed the first network model, the Perceptron. (At the time it had only a single-layer architecture.)

• In 1960, Widrow proposed another model, Adaline. (A linear network with continuous-valued outputs; it already employed a learning rule, quite unlike the logical inference of the AI of that era.)

(16)

Dark Age

• In 1969, Minsky & Papert proved that the Perceptron has limited learning power, because the model cannot solve the XOR problem (see the sketch below).

(Note: AI was then in fashion, computers were slow, and researchers were stymied by Minsky's proof, so ANN research sank into a slump.) Even in this period, however, Kohonen in Finland proposed the Self-Organization Map (SOM), and Grossberg and colleagues proposed the Adaptive Resonance Theory (ART) model.
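
To see the limitation concretely, here is a minimal sketch (ours, with an illustrative learning rate and epoch count) of the perceptron learning rule on XOR; it never converges because no single line separates the four points:

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR truth table
w, b, eta = [0.0, 0.0], 0.0, 0.5

for epoch in range(100):
    errors = 0
    for x, t in data:
        y = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0    # step activation
        if y != t:                                            # perceptron learning rule
            w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]
            b += eta * (t - y)
            errors += 1
print(errors)   # still nonzero after 100 epochs: XOR is not linearly separable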

(17)

Reborn Age

• In 1982, Hopfield proposed the Hopfield Network and the auto-associative memory models.

• In 1985, Hopfield proposed another model, the Hopfield & Tank Network, to solve the Traveling Salesman Problem.

• After these studies, ANN models were once again regarded as a promising research area.

(18)

Reborn Age

• In 1986, Rumelhart et al. introduced the BPN in their book “Parallel Distributed Processing”, in which the generalized delta rule is explained. In addition, they explain how BPN can solve the XOR problem.

• By 1990, BPN had become one of the most popular and most heavily utilized ANN models.
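
To show why one hidden layer is enough for XOR, here is a minimal sketch with hand-picked weights (ours, for illustration; a BPN would learn such weights with the generalized delta rule):

step = lambda s: 1 if s >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # hidden unit acting as OR
    h2 = step(x1 + x2 - 1.5)          # hidden unit acting as AND
    return step(h1 - 2 * h2 - 0.5)    # output: OR AND (NOT AND) = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))    # prints the XOR truth table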

(19)

Mature Age

• Up to now, ANN models have been widely studied and many models have been proposed.

• Conferences and journals have been created for ANN studies, such as ICNN (International Conference on NN) and IJCNN (International Joint Conference on NN, held by IEEE & INNS).

• Besides, many tools and software packages, such as SNNS and MATLAB, have been developed to make applying NN easier.

(20)

The Node Characteristics of ANN

1. Input: training sets (or training patterns), X = [X1, X2, …, Xn].

2. Output: the computed output Y = [Y1, Y2, …, Yj] and the target (testing) sets T = [T1, T2, …, Tj].

3. Connections: weights, Wij.

4. Processing Element (PE): summation function, activity function, and transfer function.

[Figure: a processing element — inputs X1…Xn are combined by the summation function, then passed through the activity and transfer functions to produce the output]
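
A minimal sketch of such a processing element, assuming the common form net = sum(Wi * Xi) - theta with a sigmoid transfer function (the slide names the three functions without fixing them):

import math

def processing_element(x, w, theta):
    net = sum(wi * xi for wi, xi in zip(w, x)) - theta   # summation function
    return 1.0 / (1.0 + math.exp(-net))                  # sigmoid transfer function

print(processing_element([1.0, 0.5], [0.4, -0.2], theta=0.1))   # -> ~0.55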

(21)

A vision application example

[Figure: the human brain]

(22)

A vision application example

Shape? Size? Contour?

— the key judgment factors to model when simulating the human brain

(23)

Types of ANN

• According to learning type:

- Supervised learning: trains the network with a set of known input examples and their expected answers, e.g., Perceptron, BPN, PNN, LVQ, CPN.

- Unsupervised learning: keeps learning and adjusting the network from the input examples alone, e.g., SOM, ART.

- Associative memory learning: directly trains on and memorizes all the paired data or patterns presented, e.g., Hopfield, Bidirectional Associative Memory (BAM), Hopfield-Tank.

- Optimization application: searches for the best solution, e.g., ANN, HTN.

(24)

Types of ANN

• According to network structure:

- Feedforward (one-way)
- Feedforward (two-way)
- Feedback

(25)

Feedforward (one-way)

[Figure: a one-way feedforward network with inputs X1, X2, …, Xn and output Y]

(26)

Feedforward (two-way)

[Figure: a two-way feedforward network with inputs X1, X2, …, Xn and output Y]

(27)

Feedback (one-way)

[Figure: a feedback network with inputs X1, X2, …, Xn and output Y]

(28)
(29)

[Figures: (a) the delta rule, (b) the generalized delta rule, (c) the Boltzmann machine learning rule]

(30)

Problem Solving Area

• Classification
• Clustering
• Prediction
• Memorizing
• Learning
• Optimization
• Control
• Recognition
• Decision-making

(31)

Classification problem

[Figure: a simple classification example — data points described by height (150–190 cm) and weight (40–90 kg), separated by a classification line y whose parameters must be determined]

EX: A simple case of a classification problem
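
A minimal sketch of such a classifier with a hand-picked line (ours; all numbers are made up for illustration), labeling a person by which side of the line the (height, weight) point falls on:

def classify(height_cm, weight_kg, a=0.6, b=-40.0):
    # decision boundary: weight = a * height + b
    return "above" if weight_kg > a * height_cm + b else "below"

print(classify(170, 80))   # -> 'above' the classification line
print(classify(170, 55))   # -> 'below' the classification line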

(32)

Application Area

• Credit evaluation

• Control chart interpretation

• Prediction of production process variables

• Customer screening

• Sales forecasting

• Process monitoring

• Oil exploration

• Automobile diagnostics

• Factory scheduling

• Investment decisions

• Tax auditing

• Loan approval

• Bond rating

• Medical diagnosis

• Weather forecasting

• Instrument analysis

• Target tracking

• Computer music

(33)

Example using ANN model

• Credit prediction for a customer who applies for a loan. The inputs below are fed into the ANN model, which produces the decision as its output.

<<Input>>

1. Customer's salary
2. Customer's house debt
3. Customer's car debt
4. Customer's average living cost
5. Customer's credit history

<<Output>>

1. O.K., qualified for the loan.
2. Sorry, no new loan is available for this borrower.

(34)

The comparison of ANN with Regression (1/2)

• Variable prediction vs. regression analysis

- For regression, the main task is to find the parameters a0, a1, a2, a3, …, an in

Y = a0 + a1·X1 + a2·X2 + a3·X3 + … + an·Xn

Therefore, regression can be used for classification or prediction.

- However, if the problem is of a non-linear type, it becomes difficult to solve this way. ANN is well suited to such non-linear problems.
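
For contrast, a minimal sketch of "finding the parameters" by ordinary least squares with numpy (an illustration with made-up data, not the slide's method):

import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0]])   # inputs X1, X2
Y = np.array([5.0, 4.0, 9.0, 14.0])                              # here Y = X1 + 2*X2
A = np.column_stack([np.ones(len(X)), X])        # prepend a column of 1s for a0
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)     # least-squares fit
print(coef)                                      # -> approx [0.0, 1.0, 2.0]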

(35)

The comparison of ANN with Regression (2/2)

• ANN vs. regression — the ANN advantages:

- Can solve non-linear problems
- Parameters can be modified easily
- Easy to construct the model
- Accepts any type of input

• ANN vs. regression — the disadvantages:

- Takes time to find the global minimum (the best solution)
- May over-learn (overfit)
- Accepts any type of input

(36)

The comparison of ANN with Time Series

• Time series

- Predicts future results based on past (time-history) values.

- EX: prediction of the stock market.

X_t = a0 + a1·X_{t-1} + a2·X_{t-2} + a3·X_{t-3} + … + ap·X_{t-p}
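
A minimal sketch of this autoregressive forecast (ours; the coefficients and prices are made up for illustration):

def ar_forecast(history, coefs, a0=0.0):
    # predict X_t from the last p observations X_{t-1}, ..., X_{t-p}
    lags = history[-len(coefs):][::-1]
    return a0 + sum(a * x for a, x in zip(coefs, lags))

prices = [100.0, 101.0, 103.0, 102.0]
print(ar_forecast(prices, coefs=[0.7, 0.3]))   # 0.7*102 + 0.3*103 = 102.3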

(37)

The comparison of ANN with Decision Making

•Decision Making

– By Applying the same inputs to find out which fi has the best outcome. The decision is made based on the best outcome.

– EX: Credit evaluation, Scheduling, Strategic decision

f_1(X1, X2, X3, …, Xn) = a10 + a11·X1 + a12·X2 + … + a1n·Xn
f_2(X1, X2, X3, …, Xn) = a20 + a21·X1 + a22·X2 + … + a2n·Xn
…
f_m(X1, X2, X3, …, Xn) = am0 + am1·X1 + am2·X2 + … + amn·Xn
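
A minimal sketch of this decision rule (ours; the coefficients are made up), scoring the same inputs with every f_i and choosing the best:

def decide(x, coef_rows):
    # coef_rows[i] = [a_i0, a_i1, ..., a_in]; returns (best index, best score)
    scores = [row[0] + sum(a * xi for a, xi in zip(row[1:], x))
              for row in coef_rows]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best, scores[best]

print(decide([1.0, 2.0], [[0.0, 1.0, 1.0],     # f_1 -> 3.0
                          [0.5, 2.0, 0.0]]))   # f_2 -> 2.5; picks f_1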

(38)

List of neural network models (1/3)

• Perceptron (感知機)
- Main developer: F. Rosenblatt
- Year: 1957
- Main applications: letter recognition (today mostly replaced by multilayer networks)
- Main features: the simplest model; the earliest developed; cannot solve the XOR problem

• Backpropagation Network (倒傳遞網路)
- Main developers: P. Werbos, D. Parker, D. Rumelhart
- Years: 1974-1985
- Main applications: pattern recognition, classification, function synthesis, adaptive control
- Main features: can solve the XOR problem; most widely applied; many success stories; high learning accuracy

(39)

List of neural network models (2/3)

• Learning Vector Quantization network (學習向量量化網路)
- Main developer: T. Kohonen
- Year: 1988
- Main applications: pattern recognition, classification
- Main features: fast learning; fast recall; simple, clear theory

• Probabilistic Neural Network (機率神經網路)
- Main developer: D. F. Specht
- Year: 1988
- Main applications: pattern recognition, classification
- Main features: fast learning; slow recall; simple, clear theory

(40)

List of neural network models (3/3)

• Hopfield-Tank network (霍普菲爾坦克網路)
- Main developers: J. Hopfield, D. Tank
- Year: 1985
- Main applications: combinatorial optimization problems
- Main features: less prone to getting trapped in local minima

• Adaptive Resonance Theory (自適應共振理論)
- Main developers: G. A. Carpenter, S. Grossberg
- Years: 1976-1986
- Main applications: pattern recognition, clustering
- Main features: network stability; network plasticity; fast learning; uses a vigilance value

• Self-Organizing Map (自組織映射圖)
- Main developer: T. Kohonen
- Year: 1980
- Main applications: clustering, topological mapping
- Main features: incorporates a neighborhood concept; fast learning

(41)

Typical Learning Methods for Basic Learning Strategies

• Supervised: Delta rule, Backpropagation, Hebbian, Stochastic

• Unsupervised: Competitive, Hebbian

• Reinforcement: Learning automata

(42)

Categories of Network Types by Broad Learning Method

• Supervised: ADALINE, Boltzmann, Cascade Correlation, GRNN

• Unsupervised: ART, Hopfield, LVQ, Neocognitron

• Reinforcement

(43)

Categories of Network Types by Learning Type

• Error correction: ADALINE, CCN, GRNN, Hopfield, MLFF with BP, Perceptron, RBF, RNN

• Stochastic: Boltzmann Machine, Cauchy Machine

• Competitive: ART, CPN, LVQ, SOFM

• Hebbian: AM, BSB, BAM, Hopfield, Neocognitron

(44)

Categories of Network Types by Architectural Type

• Single-layer feedforward: ADALINE, AM, Hopfield, LVQ

• Recurrent: ART, BAM, Boltzmann Machine, Cauchy Machine

• Multilayer feedforward: CCN, GRNN, MADALINE, MLFF with BP

(45)

Categories of Network Types by Application Type

• Associative memory: ART, AM, BAM, BSB, Hopfield, MLFF with BP

• Optimization: ADALINE, Boltzmann, Hopfield, MLFF with BP, RNN, SOFM

• General mapping: CCN, GRNN

• Prediction: ADALINE, CCN, GRNN, MADALINE, MLFF with BP, RBF, RNN, SOFM

• Pattern recognition: ART, CCN, CPN, GRNN, LVQ, MLFF with BP, RBF, Neocognitron, RCE, SOFM

• Classification: ADALINE, ART, CCN, CPN, GRNN, LVQ, MLFF with BP, RBF, RCE

(46)

Neural Network Taxonomies

• Perceptron

• Hopfield

• ADALINE (Adaptive Linear Neural Element)

• MADALINE (Multilayer ADALINE)

• BPN (Back Propagation Network)

• ART (Adaptive Resonance Theory)

• AM (Associative Memories)

• BAM (Bidirectional Associative Memory)

• Boltzmann Machine

• CCN (Cascade Correlation)

(47)

Neural Network Taxonomies

• LVQ (Learning Vector Quantization)

• MLFF with BP (Multilayer Feedforward Backpropagation)

• PNN (Probabilistic Neural Network)

• RBF (Radial Basis Function)

• RNN (Recurrent Neural Networks)

• SOM (Self-Organizing Map) or SOFM (Self-Organizing Feature Map)

• NLN (Neurologic Networks)
