(1)

Slides credited from Dr. David Silver & Hung-Yi Lee

(2)

Outline

Machine Learning

Supervised Learning vs. Reinforcement Learning

Reinforcement Learning vs. Deep Learning

Introduction to Reinforcement Learning

Agent and Environment

Action, State, and Reward

Markov Decision Process

Reinforcement Learning Approach

Value-Based

Policy-Based

Model-Based

Problems within RL

Learning and Planning

Exploration and Exploitation

RL for Unsupervised Model


(3)

Outline

Machine Learning

Supervised Learning vs. Reinforcement Learning

Reinforcement Learning vs. Deep Learning

Introduction to Reinforcement Learning

Agent and Environment

Action, State, and Reward

Markov Decision Process

Reinforcement Learning Approach

Value-Based

Policy-Based

Model-Based

Problems within RL

Learning and Planning

Exploration and Exploitation

(4)

Machine Learning


Machine Learning

Supervised Learning

Unsupervised Learning

Reinforcement Learning

(5)

Supervised vs. Reinforcement

Supervised Learning

Training based on supervisor/label/annotation

Feedback is instantaneous

Time does not matter

Reinforcement Learning

Training only based on reward signal

Feedback is delayed

Time matters

Agent actions affect subsequent data

(6)

Supervised vs. Reinforcement

Supervised

Reinforcement


Learning from teacher vs. learning from critics

(Figure: a supervised learner is told the correct response, e.g. say “Hi” to “Hello” and say “Good bye” to “Bye bye”; a reinforcement learner only receives a critic’s feedback such as “Bad”.)

(7)

Reinforcement Learning

RL is a general purpose framework for decision making

RL is for an agent with the capacity to act

Each action influences the agent’s future state

Success is measured by a scalar reward signal

Goal: select actions to maximize future reward

(8)

Deep Learning

DL is a general purpose framework for representation learning

Given an objective

Learn representation that is required to achieve objective

Directly from raw inputs

Use minimal domain knowledge

(Figure: a deep network mapping an input vector x = (x1, …, xN) through hidden layers to an output vector y = (y1, …, yM).)

(9)

Deep Reinforcement Learning

AI is an agent that can solve human-level tasks

RL defines the objective

DL gives the mechanism

RL + DL = general intelligence

(10)

Deep RL AI Examples

Play games: Atari, poker, Go, …

Explore worlds: 3D worlds, …

Control physical systems: manipulate, …

Interact with users: recommend, optimize, personalize, …


(11)

Introduction to RL

Reinforcement Learning

(12)

Outline

Machine Learning

Supervised Learning vs. Reinforcement Learning

Reinforcement Learning vs. Deep Learning

Introduction to Reinforcement Learning

Agent and Environment

Action, State, and Reward

Markov Decision Process

Reinforcement Learning Approach

Value-Based

Policy-Based

Model-Based

Problems within RL

Learning and Planning

Exploration and Exploitation

RL for Unsupervised Model


(13)

Reinforcement Learning

RL is a general purpose framework for decision making

RL is for an agent with the capacity to act

Each action influences the agent’s future state

Success is measured by a scalar reward signal

Big three: action, state, reward

(14)

Agent and Environment


(Figure: the agent-environment loop. At each step the agent receives observation ot and reward rt from the environment and sends back action at, e.g. MoveLeft or MoveRight.)

(15)

Agent and Environment

At time step t

The agent

Executes action at

Receives observation ot

Receives scalar reward rt

The environment

Receives action at

Emits observation ot+1

Emits scalar reward rt+1

t increments at env. step

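To make the loop concrete, here is a minimal Python sketch of this interaction (not from the original slides); the Environment and Agent classes and the MoveLeft/MoveRight actions are illustrative placeholders:

    import random

    class Environment:
        def step(self, action):
            # Receives action a_t, then emits observation o_{t+1} and scalar reward r_{t+1}.
            observation = random.choice(["wall", "open_space"])
            reward = 1.0 if action == "MoveRight" else 0.0
            return observation, reward

    class Agent:
        def act(self, observation, reward):
            # Executes action a_t after receiving observation o_t and scalar reward r_t.
            return random.choice(["MoveLeft", "MoveRight"])

    env, agent = Environment(), Agent()
    observation, reward = None, 0.0
    for t in range(10):                      # t increments at each environment step
        action = agent.act(observation, reward)
        observation, reward = env.step(action)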

(16)

State

Experience is the sequence of observations, actions, and rewards

State is the information used to determine what happens next

what happens next depends on the history of experience

The agent selects actions

The environment selects observations/rewards

The state is a function of the history of experience


(17)


Environment State

The environment state s_t^e is the environment’s private representation

i.e. whatever data the environment uses to pick the next observation/reward

may not be visible to the agent

may contain irrelevant information

(18)


Agent State

The agent state s_t^a is the agent’s internal representation

i.e. whatever data the agent uses to pick the next action

the information used by RL algorithms

can be any function of experience


(19)

Information State

An information state (a.k.a. Markov state) contains all useful information from history

The future is independent of the past given the present

Once the state is known, the history may be thrown away

The state is a sufficient statistic of the future

A state s_t is Markov if and only if P[s_{t+1} | s_t] = P[s_{t+1} | s_1, …, s_t]

(20)

Fully Observable Environment

Full observability: agent directly observes environment state

information state = agent state = environment state


This is a Markov decision process (MDP)

(21)

Partially Observable Environment

Partial observability: agent indirectly observes environment

agent state ≠ environment state

Agent must construct its own state representation s_t^a, for example:

Complete history: s_t^a = (o_1, r_1, a_1, …, a_t, o_t, r_t)

Beliefs of environment state: s_t^a = (P[s_t^e = s^1], …, P[s_t^e = s^n])

Hidden state (from RNN): s_t^a = σ(s_{t-1}^a W_s + o_t W_o)

This is a partially observable Markov decision process (POMDP)

(22)

Reward

Reinforcement learning is based on the reward hypothesis

A reward rt is a scalar feedback signal

Indicates how well the agent is doing at step t

Reward hypothesis: all agent goals can be described by maximizing expected cumulative reward

(23)

Sequential Decision Making

Goal: select actions to maximize total future reward

Actions may have long-term consequences

Reward may be delayed

It may be better to sacrifice immediate reward to gain more long-term reward

(24)

Scenario of Reinforcement Learning

(Figure: the agent observes the state of the environment, takes an action that changes the environment, and receives a reward, e.g. “Don’t do that”.)

(25)

Scenario of Reinforcement Learning

(Figure: the same loop, now with a positive reward, e.g. “Thank you.”, after the action changes the environment.)

(26)

Machine Learning ≈ Looking for a Function

Observation is the function input; Action is the function output

Actor/Policy: Action = π(Observation)

Reward: used to pick the best function

(27)

Learning to Play Go

(Figure: the observation is the board position, the action is the next move, and the environment provides a reward.)

(28)

Learning to Play Go

Agent learns to take actions maximizing expected reward.

If win, reward = 1; if loss, reward = -1; reward = 0 in most other cases

(29)

Learning to Play Go

Supervised: learning from a teacher, who gives the next move (e.g. “5-5”, then “3-3”)

Reinforcement Learning: learning from experience, from the first move through many moves to “Win!”

(Two agents play with each other.)

(30)

Learning a Chatbot

Machine obtains feedback from user

(Figure: the dialogue “How are you?” → “Bye bye” receives reward -10; “Hello” → “Hi” receives reward 3.)

Chatbot learns to maximize the expected reward

(31)

Learning a Chatbot

Let two agents talk to each other (sometimes generate good dialogue, sometimes bad)

How old are you?

See you.

See you.

How old are you?

I am 16.

I thought you were 12.

What makes you think so?

(32)

Learning a chat-bot

By this approach, we can generate a lot of dialogues.

Use pre-defined rules to evaluate the goodness of a dialogue (Dialogue 1, Dialogue 2, …, Dialogue 8)

Machine learns from the evaluation as rewards

(33)

Learning to Play Video Game

Space Invaders: the game terminates when all the aliens are killed or your spaceship is destroyed

(Figure: the game screen, showing the “fire” action, the shields, and the score, which serves as the reward for killing the aliens.)

(34)

Learning to Play Video Game

Start with observation s1

Action a1: “right”; obtain reward r1 = 0

Observation s2

Action a2: “fire” (kill an alien); obtain reward r2 = 5

Observation s3

Usually there is some randomness in the environment

(35)

Learning to Play Video Game

Start with observation s1, then observation s2, observation s3, …

After many turns, Game Over (spaceship destroyed)

This is an episode.

Learn to maximize the expected cumulative reward per episode

(36)

More applications

Flying Helicopter

https://www.youtube.com/watch?v=0JL04JJjocc

Driving

https://www.youtube.com/watch?v=0xo1Ldx3L5Q

Robot

https://www.youtube.com/watch?v=370cT-OAzzM

Google Cuts Its Giant Electricity Bill With DeepMind-Powered AI

http://www.bloomberg.com/news/articles/2016-07-19/google-cuts-its-giant- electricity-bill-with-deepmind-powered-ai

Text Generation

https://www.youtube.com/watch?v=pbQ4qe8EwLo

(37)

Markov Decision Process

Fully Observable Environment

(38)

Outline

Machine Learning

Supervised Learning vs. Reinforcement Learning

Reinforcement Learning vs. Deep Learning

Introduction to Reinforcement Learning

Agent and Environment

Action, State, and Reward

Markov Decision Process

Reinforcement Learning Approach

Value-Based

Policy-Based

Model-Based

Problems within RL

Learning and Planning

Exploration and Exploitation

RL for Unsupervised Model


(39)

Markov Process

A Markov process is a memoryless random process

i.e. a sequence of random states S1, S2, ... with the Markov property

Student Markov chain

Sample episodes from S1=C1

• C1 C2 C3 Pass Sleep

• C1 FB FB C1 C2 Sleep

• C1 C2 C3 Pub C2 C3 Pass Sleep

• C1 FB FB C1 C2 C3 Pub

• C1 FB FB FB C1 C2 C3 Pub C2 Sleep

(40)

Markov Reward Process (MRP)

A Markov reward process is a Markov chain with values

The return G_t = r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + ⋯ is the total discounted reward from time-step t

(Figure: Student MRP)
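As a minimal sketch (not from the slides), the return can be computed from a finite list of sampled rewards; the reward values below are made up:

    def discounted_return(rewards, gamma=0.9):
        # G_t = r_{t+1} + gamma * r_{t+2} + gamma^2 * r_{t+3} + ...
        g = 0.0
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    # Rewards observed after time-step t (illustrative values only).
    print(discounted_return([0.0, 0.0, 5.0, 0.0, -1.0]))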

(41)

Markov Decision Process (MDP)

A Markov decision process is an MRP with decisions

It is an environment in which all states are Markov

(Figure: Student MDP)

(42)

Markov Decision Process (MDP)

S: finite set of states/observations

A: finite set of actions

P: transition probability

R: immediate reward

γ: discount factor

Goal is to choose the policy π that maximizes the expected overall return E[ r_0 + γ r_1 + γ^2 r_2 + ⋯ ]
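To make the tuple (S, A, P, R, γ) concrete, here is a hypothetical two-state MDP written out as plain Python data; the states, actions, and numbers are invented purely for illustration:

    S = ["s0", "s1"]                 # states
    A = ["stay", "go"]               # actions
    gamma = 0.9                      # discount factor

    # P[s][a]: list of (transition probability, next state)
    P = {
        "s0": {"stay": [(1.0, "s0")], "go": [(0.8, "s1"), (0.2, "s0")]},
        "s1": {"stay": [(1.0, "s1")], "go": [(1.0, "s0")]},
    }

    # R[s][a]: immediate reward for taking action a in state s
    R = {
        "s0": {"stay": 0.0, "go": 1.0},
        "s1": {"stay": 2.0, "go": 0.0},
    }

    # A policy maps each state to an action; the goal is the policy whose
    # expected return E[ r_0 + gamma*r_1 + gamma^2*r_2 + ... ] is largest.
    policy = {"s0": "go", "s1": "stay"}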

(43)

Reinforcement Learning

(44)

Outline

Machine Learning

Supervised Learning vs. Reinforcement Learning

Reinforcement Learning vs. Deep Learning

Introduction to Reinforcement Learning

Agent and Environment

Action, State, and Reward

Markov Decision Process

Reinforcement Learning

Value-Based

Policy-Based

Model-Based

Problems within RL

Learning and Planning

Exploration and Exploitation

RL for Unsupervised Model


(45)

Major Components in an RL Agent

An RL agent may include one or more of these components

Value function: how good is each state and/or action

Policy: agent’s behavior function

Model: agent’s representation of the environment

(46)

Value Function

A value function is a prediction of future reward (given action a in state s)

Q-value function gives expected total reward

from state s and action a

under policy π

with discount factor γ: Q^π(s, a) = E[ r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + ⋯ | s, a ]

Value functions decompose into a Bellman equation: Q^π(s, a) = E[ r + γ Q^π(s', a') | s, a ]

(47)

Optimal Value Function

An optimal value function is the maximum achievable value: Q*(s, a) = max_π Q^π(s, a)

The optimal value function allows us to act optimally: π*(s) = argmax_a Q*(s, a)

The optimal value informally maximizes over all decisions

Optimal values decompose into a Bellman equation: Q*(s, a) = E[ r + γ max_{a'} Q*(s', a') | s, a ]

(48)

Value Function Approximation

If the value function is represented by a lookup table

there are too many states and/or actions to store

it is too slow to learn the value of each entry individually

Values can instead be estimated with function approximation

(49)

Q-Networks

Q-networks represent value functions with weights θ: Q(s, a; θ) ≈ Q*(s, a)

generalize from seen states to unseen states

update parameters θ for function approximation
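A minimal sketch of the idea, assuming a linear Q-function over hand-crafted state features rather than a deep network; the features, actions, and learning rate are invented for illustration:

    ACTIONS = [0, 1]
    w = {a: [0.0, 0.0, 0.0] for a in ACTIONS}      # one weight vector per action

    def q_value(features, action):
        # Q(s, a; w) as a dot product of state features and weights.
        return sum(f * wi for f, wi in zip(features, w[action]))

    def update(features, action, target, lr=0.1):
        # Move Q(s, a; w) toward a learning target by a gradient step.
        error = target - q_value(features, action)
        w[action] = [wi + lr * error * f for wi, f in zip(w[action], features)]

    update([1.0, 0.0, 0.5], action=0, target=2.0)
    print(q_value([0.9, 0.1, 0.5], 0))             # similar features generalize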

(50)

Q-Learning

Goal: estimate optimal Q-values

Optimal Q-values obey a Bellman equation: Q*(s, a) = E[ r + γ max_{a'} Q*(s', a') | s, a ]

Value iteration algorithms solve the Bellman equation by repeatedly applying this backup; r + γ max_{a'} Q(s', a') is the learning target
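A tabular Q-learning sketch of this idea, updating Q toward the learning target from sampled transitions; the tiny chain environment below is made up purely for illustration:

    import random

    # Toy environment: states 0..3, reaching state 3 gives reward 1 and ends the episode.
    def step(s, a):
        s2 = min(s + 1, 3) if a == "right" else max(s - 1, 0)
        reward = 1.0 if s2 == 3 else 0.0
        return s2, reward, s2 == 3

    ACTIONS = ["left", "right"]
    Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
    gamma, alpha, epsilon = 0.9, 0.5, 0.1

    for episode in range(200):
        s, done = 0, False
        while not done:
            a = random.choice(ACTIONS) if random.random() < epsilon \
                else max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            # Learning target: r + gamma * max_a' Q(s', a')
            target = r if done else r + gamma * max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2

    print(Q[(0, "right")], Q[(0, "left")])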

(51)

Policy

A policy is the agent’s behavior

A policy maps from state to action

Deterministic policy: a = π(s)

Stochastic policy: π(a|s) = P[a|s]
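As a tiny illustration (with invented states and probabilities), the two kinds of policies can be written as plain functions:

    import random

    # Deterministic policy: a = pi(s)
    def pi_deterministic(s):
        return "right" if s < 3 else "left"

    # Stochastic policy: pi(a|s) = P[a|s]; the action is sampled from a distribution.
    def pi_stochastic(s):
        probs = {"left": 0.2, "right": 0.8}          # assumed action distribution
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print(pi_deterministic(1), pi_stochastic(1))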

(52)

Policy Networks

Represent the policy by a network with weights θ

stochastic policy: a ~ π(a|s; θ); deterministic policy: a = π(s; θ)

Objective: maximize the total discounted reward L(θ) = E[ r_1 + γ r_2 + γ^2 r_3 + ⋯ | π(·; θ) ] by SGD

(53)

Policy Gradient

The gradient of a stochastic policy is given by ∂L/∂θ = E[ ∂ log π(a|s; θ)/∂θ · Q^π(s, a) ]

The gradient of a deterministic policy is given by ∂L/∂θ = E[ ∂Q^π(s, a)/∂a · ∂a/∂θ ], where a = π(s; θ)
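The sketch below illustrates only the stochastic case, using the score-function form ∇θ log πθ(a|s) · Q^π(s, a) for a softmax policy over two actions in a single state; the Q-value estimates and step size are invented, and in practice Q would itself be estimated from sampled returns or a critic:

    import math, random

    theta = [0.0, 0.0]                 # one preference per action (single state)
    q_estimates = [1.0, 3.0]           # made-up Q-values for the two actions
    lr = 0.1

    def softmax(prefs):
        exps = [math.exp(p) for p in prefs]
        total = sum(exps)
        return [e / total for e in exps]

    for _ in range(500):
        probs = softmax(theta)
        a = random.choices([0, 1], weights=probs)[0]
        q = q_estimates[a]
        for k in range(2):
            # d log pi(a) / d theta_k = 1[k == a] - pi(k) for a softmax policy
            grad_log = (1.0 if k == a else 0.0) - probs[k]
            theta[k] += lr * grad_log * q

    print(softmax(theta))              # probability mass shifts toward the higher-Q action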

(54)

Model


A model predicts what the environment will do next

P predicts the next state

R predicts the next immediate reward
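A minimal sketch of a learned tabular model, estimating P and R by counting observed transitions; the logged transitions below are made up:

    from collections import defaultdict

    # Hypothetical logged transitions (s, a, r, s').
    transitions = [
        ("s0", "go", 1.0, "s1"),
        ("s0", "go", 1.0, "s1"),
        ("s0", "go", 0.0, "s0"),
        ("s1", "stay", 2.0, "s1"),
    ]

    counts = defaultdict(lambda: defaultdict(int))   # counts[(s, a)][s']
    reward_sum = defaultdict(float)
    visits = defaultdict(int)

    for s, a, r, s2 in transitions:
        counts[(s, a)][s2] += 1
        reward_sum[(s, a)] += r
        visits[(s, a)] += 1

    # P predicts the next state; R predicts the next immediate reward.
    P_hat = {sa: {s2: n / visits[sa] for s2, n in nexts.items()}
             for sa, nexts in counts.items()}
    R_hat = {sa: reward_sum[sa] / visits[sa] for sa in visits}

    print(P_hat[("s0", "go")], R_hat[("s0", "go")])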

(55)

Reinforcement Learning Approach

Value-based RL

Estimate the optimal value function Q*(s, a), the maximum value achievable under any policy

Policy-based RL

Search directly for the optimal policy π*, the policy achieving maximum future reward

Model-based RL

Build a model of the environment

Plan (e.g. by lookahead) using the model

(56)

Maze Example

Rewards: -1 per time-step

Actions: N, E, S, W

States: agent’s location


(57)

Maze Example: Value Function

Rewards: -1 per time-step

Actions: N, E, S, W

States: agent’s location

Numbers represent the value Q(s) of each state s

(58)

Maze Example: Policy

Rewards: -1 per time-step

Actions: N, E, S, W

States: agent’s location


Arrows represent policy π(s) for each state s

(59)

Maze Example: Value Function

Rewards: -1 per time-step

Actions: N, E, S, W

States: agent’s location

Grid layout represents transition model P

(60)

Categorizing RL Agents

Value-Based: value function; no policy (implicit)

Policy-Based: policy; no value function

Actor-Critic: both policy and value function

Model-Free: policy and/or value function; no model

Model-Based: policy and/or value function; a model


(61)

RL Agent Taxonomy

(62)

Problems within RL


(63)

Outline

Machine Learning

Supervised Learning vs. Reinforcement Learning

Reinforcement Learning vs. Deep Learning

Introduction to Reinforcement Learning

Agent and Environment

Action, State, and Reward

Markov Decision Process

Reinforcement Learning

Value-Based

Policy-Based

Model-Based

Problems within RL

Learning and Planning

Exploration and Exploitation

(64)

Learning and Planning

In sequential decision making

Reinforcement learning

The environment is initially unknown

The agent interacts with the environment

The agent improves its policy

Planning

A model of the environment is known

The agent performs computations with its model (w/o any external interaction)

The agent improves its policy (a.k.a. deliberation, reasoning, introspection, pondering, thought, search)


(65)

Atari Example: Reinforcement Learning

Rules of the game are unknown

Learn directly from interactive game-play

Pick actions on joystick, see pixels and scores

(66)

Atari Example: Planning


Rules of the game are known

Query emulator based on the perfect model inside agent’s brain

If I take action a from state s:

what would the next state be?

what would the score be?

Plan ahead to find the optimal policy, e.g. by tree search
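A small depth-limited lookahead sketch in that spirit, assuming the agent has a perfect deterministic model; the toy model and its numbers are invented:

    # Known model: model[s][a] = (next state, reward).
    model = {
        "s0": {"left": ("s0", 0.0), "right": ("s1", 0.0)},
        "s1": {"left": ("s0", 0.0), "right": ("s2", 1.0)},
        "s2": {"left": ("s1", 0.0), "right": ("s2", 0.0)},
    }
    gamma = 0.9

    def lookahead(s, depth):
        # Best discounted return reachable from s within `depth` simulated steps.
        if depth == 0:
            return 0.0
        return max(r + gamma * lookahead(s2, depth - 1)
                   for s2, r in model[s].values())

    def plan(s, depth=3):
        # "If I take action a from state s, what would the next state and score be?"
        return max(model[s], key=lambda a: model[s][a][1]
                   + gamma * lookahead(model[s][a][0], depth - 1))

    print(plan("s0"))   # the lookahead prefers "right" in this toy model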

(67)

Exploration and Exploitation

Reinforcement learning is like trial-and-error learning

The agent should discover a good policy from its experience without losing too much reward along the way

Exploration finds more information about the environment

Exploitation exploits known information to maximize reward

When to try?

It is usually important to explore as well as exploit
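One common way to balance the two (not specific to these slides) is epsilon-greedy action selection, sketched here with invented Q-value estimates:

    import random

    def epsilon_greedy(q_values, epsilon=0.1):
        # Explore with probability epsilon, otherwise exploit the best-known action.
        if random.random() < epsilon:
            return random.choice(list(q_values))      # exploration
        return max(q_values, key=q_values.get)        # exploitation

    q = {"left": 0.2, "right": 0.7, "fire": 0.4}      # assumed Q-value estimates
    print(epsilon_greedy(q))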

(68)

Outline

Machine Learning

Supervised Learning vs. Reinforcement Learning

Reinforcement Learning vs. Deep Learning

Introduction to Reinforcement Learning

Agent and Environment

Action, State, and Reward

Markov Decision Process

Reinforcement Learning

Policy-Based

Value-Based

Model-Based

Problems within RL

Learning and Planning

Exploration and Exploitation

RL for Unsupervised Model


(69)

RL for Unsupervised Model: Modularizing Unsupervised Sense Embeddings (MUSE)

(70)

Word2Vec Polysemy Issue

Words are polysemous

An apple a day, keeps the doctor away.

Smartphone companies including apple, …

If words are polysemous, are their embeddings polysemous?

No

What’s the problem?

(Figure: a single word embedding space containing tree, trees, rock, rocks.)

(71)

Modular Framework

Two key mechanisms

Sense selection given a text context

Sense representation to embed statistical characteristics of sense identity

(Figure: for a context such as “Smartphone companies including blackberry, and sony will be invited.”, sense selection picks a sense of “apple” (apple-1 vs. apple-2) via reinforcement learning, and sense representation learns an embedding for the selected sense identity.)

(72)

Sense Selection Module

Input: a text context C̄_t = {C_{t-m}, …, C_t = w_i, …, C_{t+m}}

Output: the fitness for each sense z_{i1}, …, z_{i3}

Model architecture: Continuous Bag-of-Words (CBOW) for efficiency

Sense selection

Policy-based

Value-based (Q-value)

(Figure: the sense selection module takes the context words around the target word C_t = w_i, e.g. “companies”, “including”, “blackberry”, “and”, through matrices P and Q_i to produce fitness scores q(z_{i1}|C̄_t), q(z_{i2}|C̄_t), q(z_{i3}|C̄_t) for the target word’s senses.)

(73)

Sense Representation Module

Input: sense collocation (z_{ik}, z_{jl})

Output: collocation likelihood estimation

Model architecture: skip-gram architecture

Sense representation learning

(Figure: the sense representation module takes a sense z_{i1} and, through matrices U and V, estimates collocation likelihoods such as P(z_{j2}|z_{i1}) and P(z_{uv}|z_{i1}).)

(74)

A Summary of MUSE

Corpus: { Smartphone companies including apple blackberry, and sony will be invited. }

(Figure: (1) the sense selection module picks a sense for the target word C_t = w_i (“apple”) from its context; (2) a collocated word C_t′ = w_j (“blackberry”) goes through sense selection as well, and a sense collocation is sampled; (3) the sense representation module estimates collocation likelihoods such as P(z_{j2}|z_{i1}) with negative sampling, and this likelihood is fed back as the reward signal for sense selection.)

The first purely sense-level embedding learning framework with efficient sense selection.

(75)

Qualitative Analysis

Context: … braves finish the season in tie with the los angeles dodgers …
k-NN: scoreless otl shootout 6-6 hingis 3-3 7-7 0-0

Context: … his later years proudly wore tie with the chinese characters for …
k-NN: pants trousers shirt juventus blazer socks anfield

(76)

Qualitative Analysis

Context: … of the mulberry or the blackberry and minos sent him to …
k-NN: cranberries maple vaccinium apricot apple

Context: … of the large number of blackberry users in the us federal …
k-NN: smartphones sap microsoft ipv6 smartphone

(77)

Demonstration

(78)

Concluding Remarks

RL is a general purpose framework for decision making under interactions between agent and environment

RL is for an agent with the capacity to act

Each action influences the agent’s future state

Success is measured by a scalar reward signal

Goal: select actions to maximize future reward

An RL agent may include one or more of these components

Value function: how good is each state and/or action

Policy: agent’s behavior function

Model: agent’s representation of the environment


(79)

References

Course materials by David Silver: http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Teaching.html

ICLR 2015 Tutorial: http://www.iclr.cc/lib/exe/fetch.php?media=iclr2015:silver-iclr2015.pdf

ICML 2016 Tutorial: http://icml.cc/2016/tutorials/deep_rl_tutorial.pdf
