Chapter 6 CPU Scheduling


* All rights reserved, Tei-Wei Kuo, National Taiwan University, 2002.

Contents

1. Introduction
2. Computer-System Structures
3. Operating-System Structures
4. Processes
5. Threads
6. CPU Scheduling
7. Process Synchronization
8. Deadlocks
9. Memory Management
10. Virtual Memory
11. File Systems


CPU Scheduling

- Objective:
  - Basic scheduling concepts
  - CPU scheduling algorithms
- Why multiprogramming?
  - Maximize CPU/resource utilization (based on some criteria)

CPU Scheduling

- Process Execution
  - CPU-bound programs tend to have a few very long CPU bursts.
  - I/O-bound programs tend to have many very short CPU bursts.

[Figure: process execution alternates between CPU bursts and I/O bursts, from New to Terminate.]


CPU Scheduling

- The distribution of CPU-burst durations can help in selecting an appropriate CPU-scheduling algorithm.

[Figure: frequency vs. burst duration (ms) – a large number of short CPU bursts and a small number of long ones.]

CPU Scheduling

- CPU Scheduler – the selection of a process for execution
  - A short-term scheduler

[Figure: process state diagram – New, Ready, Running, Waiting, Terminated; the scheduler dispatches processes from the ready queue to the CPU.]


CPU Scheduling

- Nonpreemptive Scheduling
  - A running process keeps the CPU until it voluntarily releases the CPU.
  - Advantage: easy to implement (at the cost of better resource sharing)
  - E.g., Windows 3.1

CPU Scheduling

- Preemptive Scheduling
  - Besides the cases above for nonpreemptive scheduling, CPU scheduling also occurs whenever a process becomes ready or the running process leaves the running state!
  - Issues involved:
    - Protection of resources, such as I/O queues
    - Synchronization, e.g., interrupts and system calls


CPU Scheduling

- Dispatcher
  - Functionality:
    - Switching context
    - Switching to user mode
    - Restarting a user program
  - Dispatch latency: the time it takes to stop one process and start another – it must be fast.

Scheduling Criteria

- Why?
  - Different scheduling algorithms may favor one class of processes over another!
- Criteria
  - CPU utilization
  - Throughput
  - Turnaround time
  - Waiting time
  - Response time


Scheduling Criteria

- How to measure the performance of a CPU scheduling algorithm?
  - Optimization of what?
- General considerations
  - Average measure
  - Minimum or maximum values
  - Variance → predictable behavior

Scheduling Algorithms

- First-Come, First-Served Scheduling (FCFS)
- Shortest-Job-First Scheduling (SJF)
- Priority Scheduling
- Round-Robin Scheduling (RR)
- Multilevel Queue Scheduling
- Multilevel Feedback Queue Scheduling
- Multiple-Processor Scheduling


First-Come, First-Served Scheduling (FCFS)

- The process that requests the CPU first is allocated the CPU.
- Properties:
  - Nonpreemptive scheduling
  - The CPU might be held for an extended period.

[Figure: CPU requests enter a FIFO ready queue and are dispatched to the CPU in arrival order.]

First-Come, First-Served Scheduling (FCFS)

- Example (Gantt charts)

  Process         P1   P2   P3
  CPU Burst Time  24    3    3

  Arrival order P1, P2, P3:
  | P1 | P2 | P3 |
  0    24   27   30
  Average waiting time = (0 + 24 + 27) / 3 = 17

  Arrival order P2, P3, P1:
  | P2 | P3 | P1 |
  0    3    6    30
  Average waiting time = (6 + 0 + 3) / 3 = 3

* The average waiting time is highly affected by process CPU burst times!
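As a quick cross-check of the arithmetic above, here is a minimal sketch (not part of the original slides) that recomputes both Gantt charts; the helper name fcfs_waiting_times is an illustrative choice.

```python
# Minimal FCFS sketch: waiting time of each process given its position in the ready queue.
def fcfs_waiting_times(bursts):
    """bursts: CPU-burst lengths in arrival order; returns the per-process waiting times."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)      # a process waits for all earlier arrivals to finish
        clock += burst
    return waits

for order, bursts in [("P1,P2,P3", [24, 3, 3]), ("P2,P3,P1", [3, 3, 24])]:
    waits = fcfs_waiting_times(bursts)
    print(order, waits, "AWT =", sum(waits) / len(waits))   # AWT = 17.0, then 3.0
```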


First-Come, First-Served Scheduling (FCFS)

- Example: Convoy Effect
  - One CPU-bound process + many I/O-bound processes

[Figure: the CPU-bound process occupies the CPU while the I/O-bound processes pile up in the ready queue and the I/O devices sit idle.] All other processes wait for it to get off the CPU!

Shortest-Job-First Scheduling (SJF)

- Non-Preemptive SJF
  - Shortest next CPU burst first

  Process         P1   P2   P3   P4
  CPU Burst Time   6    8    7    3

  | P4 | P1 | P3 | P2 |
  0    3    9    16   24


Shortest-Job-First Scheduling (SJF)

- Nonpreemptive SJF is optimal when processes are all ready at time 0.
  - The minimum average waiting time!
- Prediction of the next CPU burst time?
  - Long-term scheduler: a specified amount at submission time
  - Short-term scheduler: exponential average

    τ_{n+1} = α t_n + (1 − α) τ_n
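The exponential average can be folded over a burst history in a few lines. The sketch below is illustrative (not from the slides); the initial guess τ_0 = 10 and α = 1/2 are assumed values in the spirit of the usual textbook example.

```python
# Exponential-average prediction: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
def predict_next_burst(observed_bursts, tau0=10.0, alpha=0.5):
    """observed_bursts: measured CPU-burst lengths t_0..t_n; returns tau_{n+1}."""
    tau = tau0
    for t in observed_bursts:
        tau = alpha * t + (1 - alpha) * tau   # blend the newest burst with past history
    return tau

print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))   # prediction drifts toward 13
```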

Shortest-Job-First Scheduling (SJF)

- Preemptive SJF: shortest-remaining-time-first

  Process         P1   P2   P3   P4
  CPU Burst Time   8    4    9    5
  Arrival Time     0    1    2    3

  | P1 | P2 | P4 | P1 | P3 |
  0    1    5    10   17   26

  Average waiting time = ((10−1) + (1−1) + (17−2) + (5−3)) / 4 = 26/4 = 6.5
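A one-time-unit simulation is enough to reproduce this schedule. The sketch below is an illustrative shortest-remaining-time-first simulator (the function name srtf is assumed), not code from the slides.

```python
# Shortest-remaining-time-first (preemptive SJF), simulated one time unit at a time.
def srtf(processes):
    """processes: list of (pid, arrival, burst); returns {pid: waiting_time}."""
    arrival = {pid: arr for pid, arr, _ in processes}
    remaining = {pid: burst for pid, _, burst in processes}
    finish, time = {}, 0
    while remaining:
        ready = [pid for pid in remaining if arrival[pid] <= time]
        if not ready:                                   # CPU idles until the next arrival
            time = min(arrival[pid] for pid in remaining)
            continue
        pid = min(ready, key=lambda p: remaining[p])    # pick the shortest remaining time
        remaining[pid] -= 1                             # run it for one time unit
        time += 1
        if remaining[pid] == 0:
            del remaining[pid]
            finish[pid] = time
    # waiting time = completion - arrival - burst
    return {pid: finish[pid] - arr - burst for pid, arr, burst in processes}

waits = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(waits, sum(waits.values()) / 4)   # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} and 6.5
```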


Shortest-Job-First Scheduling (SJF)

- Preemptive or non-preemptive?
  - Compare criteria such as AWT (average waiting time).
  - Example: P1 (burst 10) arrives at time 0; P2 (burst 1) arrives at time 1.

    Non-preemptive: | P1 | P2 |        (0, 10, 11)
    AWT = (0 + (10 − 1)) / 2 = 9/2 = 4.5

    Preemptive:     | P1 | P2 | P1 |   (0, 1, 2, 11)
    AWT = ((2 − 1) + 0) / 2 = 0.5

* Context switching cost ~ modeling & analysis

Priority Scheduling

- The CPU is assigned to the process with the highest priority – a framework for various scheduling algorithms:
  - FCFS: equal priority with tie-breaking by FCFS
  - SJF: priority = 1 / next CPU burst length


Priority Scheduling

- Example (Gantt chart; a smaller number means a higher priority)

  Process         P1   P2   P3   P4   P5
  CPU Burst Time  10    1    2    1    5
  Priority         3    1    3    4    2

  | P2 | P5 | P1 | P3 | P4 |
  0    1    6    16   18   19

Priority Scheduling

- Priority Assignment
  - Internally defined – use some measurable quantity, such as the # of open files or the ratio of average I/O burst to average CPU burst.
  - Externally defined – set by criteria external to the OS, such as the criticality levels of jobs.


Priority Scheduling

- Preemptive or non-preemptive?
  - Preemptive scheduling – CPU scheduling is invoked whenever a process arrives at the ready queue or the running process relinquishes the CPU.
  - Non-preemptive scheduling – CPU scheduling is invoked only when the running process relinquishes the CPU.

Priority Scheduling

- Major Problem
  - Indefinite blocking (starvation)
    - Low-priority processes could starve to death!
  - A solution: aging
    - A technique that increases the priority of processes that have waited in the system for a long time.
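A small sketch of aging (illustrative only; the boost size and interval are arbitrary assumptions, and smaller numbers mean higher priority here):

```python
# Aging sketch: the longer a process has waited in the ready queue, the more it is boosted.
def age(ready_queue, now, boost=1, interval=10):
    """ready_queue: dicts with 'priority' (smaller = higher) and 'enqueued_at' timestamps."""
    for proc in ready_queue:
        waited = now - proc["enqueued_at"]
        proc["priority"] = max(proc["priority"] - boost * (waited // interval), 0)
    return ready_queue

queue = [{"pid": "A", "priority": 120, "enqueued_at": 0},    # has waited a long time
         {"pid": "B", "priority": 10, "enqueued_at": 95}]    # just arrived
print(age(queue, now=100))   # A is boosted to priority 110; B is unchanged
```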


Round-Robin Scheduling (RR)

- RR is similar to FCFS except that preemption is added to switch between processes.
- Goal: fairness – time sharing

[Figure: ready processes wait in a FIFO queue; the running process is interrupted at every time quantum (time slice) and, once its quantum is used up, rejoins the queue behind newly arrived processes.]

Round-Robin Scheduling (RR)

  Process         P1   P2   P3
  CPU Burst Time  24    3    3

  Time slice = 4

  | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
  0    4    7    10   14   18   22   26   30

  AWT = ((10 − 4) + (4 − 0) + (7 − 0)) / 3 = 17/3 ≈ 5.66
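The same numbers fall out of a small round-robin simulation; the sketch below (illustrative, not from the slides) assumes all three processes are ready at time 0.

```python
# Round-robin sketch: a FIFO ready queue served with a fixed time slice.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: burst length}, all ready at time 0; returns {pid: waiting_time}."""
    remaining = dict(bursts)
    queue = deque(bursts)                          # arrival order
    time = 0
    last_ran = {pid: 0 for pid in bursts}
    waiting = {pid: 0 for pid in bursts}
    while queue:
        pid = queue.popleft()
        waiting[pid] += time - last_ran[pid]       # time spent in the queue since last run
        run = min(quantum, remaining[pid])
        time += run
        remaining[pid] -= run
        last_ran[pid] = time
        if remaining[pid] > 0:
            queue.append(pid)                      # quantum used up: back to the tail
    return waiting

waits = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(waits, sum(waits.values()) / 3)   # {'P1': 6, 'P2': 4, 'P3': 7} and ~5.67
```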


Round-Robin Scheduling (RR)

- Service Size and Interval
  - Time quantum = q → service interval ≤ (n − 1) × q if n processes are ready.
  - If q = ∞, then RR → FCFS.
  - If q = ε, then RR → processor sharing, and the # of context switches increases!

  Example: a process with a 10-unit CPU burst
    quantum = 12 → 0 context switches
    quantum = 6  → 1 context switch
    quantum = 1  → 9 context switches

  If the context-switch cost is 10% of the time quantum, then 1/11 of the CPU is wasted!

Round-Robin Scheduling (RR)

- Turnaround Time
  - Example: three processes P1, P2, P3, each with a 10 ms CPU burst
    - quantum = 10: turnaround times are 10, 20, 30 → average turnaround time = (10 + 20 + 30) / 3 = 20
    - quantum = 1: average turnaround time = (28 + 29 + 30) / 3 = 29
  - Rule of thumb: 80% of CPU bursts should be shorter than the time slice.


Multilevel Queue Scheduling

- Partition the ready queue into several separate queues => processes are classified into different groups and permanently assigned to one queue (e.g., Process Group 1, Process Group 2, ..., Process Group n).

Multilevel Queue Scheduling

- Intra-queue scheduling
  - Independent choice of scheduling algorithms.
- Inter-queue scheduling
  a. Fixed-priority preemptive scheduling
     - e.g., foreground queues always have absolute priority over background queues.
  b. Time slice between queues
     - e.g., 80% of the CPU is given to foreground processes and 20% to background processes.
  c. More??


Multilevel Feedback Queue Scheduling

- Differs from multilevel queue scheduling by allowing processes to migrate among queues.
- Configurable parameters:
  a. # of queues
  b. The scheduling algorithm for each queue
  c. The method to determine when to upgrade a process to a higher-priority queue
  d. The method to determine when to demote a process to a lower-priority queue
  e. The method to determine which queue a newly ready process will enter

* Inter-queue scheduling: fixed-priority preemptive?!

Multilevel Feedback Queue Scheduling

- Example: three queues – quantum = 8, quantum = 16, FCFS

* Idea: separate processes with different CPU-burst characteristics!
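A compact sketch of this three-queue example (illustrative assumptions: all processes arrive at time 0, preemption between queues is ignored, and a process that exhausts its quantum drops one level):

```python
# Multilevel feedback queue sketch: quantum 8, quantum 16, then FCFS; demotion on expiry.
from collections import deque

def mlfq(bursts, quanta=(8, 16, None)):            # None means FCFS (run to completion)
    queues = [deque() for _ in quanta]
    for pid in bursts:
        queues[0].append(pid)                      # newly ready processes enter the top queue
    remaining, time, finish = dict(bursts), 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        pid = queues[level].popleft()
        run = remaining[pid] if quanta[level] is None else min(quanta[level], remaining[pid])
        time += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = time
        else:                                      # used the whole quantum: demote
            queues[min(level + 1, len(quanta) - 1)].append(pid)
    return finish

print(mlfq({"A": 5, "B": 30, "C": 12}))   # A ends in queue 0, C in queue 1, B reaches FCFS
```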


Multiple-Processor Scheduling

- CPU scheduling in a system with multiple CPUs
- A homogeneous system
  - Processors are identical in terms of their functionality.
  → Can any process run on any processor?
- A heterogeneous system
  - Programs must be compiled for the instruction set of the proper processor.

Multiple-Processor Scheduling

- Load sharing – load balancing!!
  - A queue for each processor?
- Self-scheduling – symmetric multiprocessing
  - A common ready queue for all processors.
  - Needs synchronization to access common data structures, e.g., queues.
- Master-slave – asymmetric multiprocessing
  - One processor accesses the system data structures → no need for data sharing.


Real-Time Scheduling

- Definition
  - Real-time means on-time, not fast!
  - Hard real-time systems:
    - Failure to meet the timing constraints (such as deadlines) of processes may result in a catastrophe!
  - Soft real-time systems:
    - Results that miss their timing constraints may still contribute some value to the system.

Real-Time Scheduling

- Dispatch Latency
  1. Preemption of the running process
  2. Releasing resources needed by the higher-priority process
  3. Context switching to the higher-priority process

[Figure: dispatch latency = conflict-resolution phase (steps 1-2) + dispatch phase (step 3).]


Real-Time Scheduling

- Minimization of dispatch latency?
  - Context switching in many OSs, e.g., some UNIX versions, can only be done after a system call completes or I/O blocking occurs.
  - Solutions:
    1. Insert safe preemption points in long-duration system calls.
    2. Protect kernel data with synchronization mechanisms to make the entire kernel preemptible.

Real-Time Scheduling

- Priority Inversion:
  - A higher-priority process must wait for the execution of a lower-priority process.

[Figure: a high-priority process requests resource D, which is held by a low-priority process; while it is blocked, a medium-priority process preempts the holder, so the high-priority process also waits for the medium-priority process to complete.]


Real-Time Scheduling

- Priority Inheritance
  - The process that causes the blocking inherits the priority of the blocked (higher-priority) process.

[Figure: with priority inheritance, the low-priority holder of D runs at the requester's priority, so the medium-priority process can no longer preempt it; D is released sooner and the high-priority process proceeds.]
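A data-structure sketch of the inheritance rule (illustrative only; the Process and Resource classes are assumptions, no real locking or scheduling is performed, and a larger number means a higher priority):

```python
# Priority inheritance sketch: the holder of a resource inherits a blocked requester's priority.
class Process:
    def __init__(self, name, priority):
        self.name, self.base, self.effective = name, priority, priority

class Resource:
    def __init__(self):
        self.holder = None

    def acquire(self, proc):
        if self.holder is None:
            self.holder = proc
            return True                                    # resource granted
        # Requester blocks: boost the holder up to the requester's priority.
        self.holder.effective = max(self.holder.effective, proc.effective)
        return False

    def release(self):
        self.holder.effective = self.holder.base           # revert to the base priority
        self.holder = None

low, high = Process("P_low", 1), Process("P_high", 10)
d = Resource()
d.acquire(low)
d.acquire(high)              # P_high blocks on D
print(low.effective)         # 10: a medium-priority process can no longer preempt P_low
d.release()
print(low.effective)         # 1
```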

Real-Time Scheduling

- Earliest Deadline First Scheduling (EDF)
  - Processes with closer deadlines have higher priorities:

    priority(τ_i) = 1 / d_i,  where d_i is the (absolute) deadline of τ_i

  - An optimal dynamic-priority-driven scheduling algorithm for periodic and aperiodic processes!
  - Liu & Layland [JACM 73] showed that EDF is optimal in the sense that a process set is schedulable by EDF whenever it is schedulable by any other algorithm.


Real-Time Scheduling – EDF

  Process               P1   P2   P3
  CPU Burst Time         4    5    6
  Period                20   15   16
  Initial Arrival Time   0    1    2

  EDF schedule of the first instances (deadline = arrival + period):
  | P1 | P2   | P3     | P1  |
  0    1      6        12    15

  Average waiting time = (11 + 0 + 4) / 3 = 5
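The schedule above can be reproduced with a unit-time EDF simulation. The sketch below is illustrative (only the first instance of each process is released, and deadlines are taken as arrival + period):

```python
# EDF sketch: at every time unit, run the ready process with the earliest absolute deadline.
def edf(tasks, horizon):
    """tasks: list of (pid, burst, period, first_arrival); single instance per task."""
    arrival = {pid: arr for pid, _, _, arr in tasks}
    deadline = {pid: arr + period for pid, _, period, arr in tasks}
    remaining = {pid: burst for pid, burst, _, _ in tasks}
    schedule = []
    for t in range(horizon):
        ready = [pid for pid in remaining if arrival[pid] <= t and remaining[pid] > 0]
        if not ready:
            schedule.append(None)                          # CPU idle
            continue
        pid = min(ready, key=lambda p: deadline[p])        # earliest deadline first
        remaining[pid] -= 1
        schedule.append(pid)
    return schedule

print(edf([("P1", 4, 20, 0), ("P2", 5, 15, 1), ("P3", 6, 16, 2)], horizon=15))
# P1 for one unit, then P2 (1-6), P3 (6-12), and P1 again (12-15)
```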

A General Architecture of RTOS's

- Objectives in the design of many RTOS's
  - Efficient scheduling mechanisms
  - Good resource management policies
  - Predictable performance
- Common functionality of many RTOS's
  - Task management
  - Memory management
  - Resource control, including devices
  - Process synchronization


A General Architecture

[Figure: user-space processes on top of the OS (top half and bottom half), with the hardware below.]

- Timer expiration:
  - Expires the running process's time quota
  - Keeps the accounting information for each process
- System calls, such as I/O requests, may cause the running process to release the CPU!
- Interrupts for services

A General Architecture

- 2-Step Interrupt Services
  - Immediate interrupt service
    - Interrupt priorities > process priorities
    - Time: completion of a higher-priority ISR, context switch, disabling of certain interrupts, starting of the right ISR (urgent/low-level work, set events)
  - Scheduled interrupt service
    - Usually done by preemptable threads
  - Remark: reduction of non-preemptable code, priority tracking/inheritance (LynxOS), etc.

[Figure: interrupt → interrupt/ISR latency → immediate interrupt service (ISR) → IST latency → scheduled interrupt service.]


A General Architecture

- Scheduler
  - A central part of the kernel
  - The scheduler is usually driven by a periodic clock interrupt, except when voluntary context switches occur – thread quantum?
- Timer resolution
  - Tick size vs. interrupt frequency
  - 10 ms? 1 ms? 1 us? 1 ns?
  - Fine-grained hardware clock

A General Architecture

- Memory Management
  - No protection in many embedded systems
  - Memory locking to avoid paging
- Process Synchronization
  - Sources of priority inversion:
    - Non-preemptible code
    - Critical sections
    - A limited number of priority levels, etc.


Algorithm Evaluation

- A general procedure
  - Select criteria that may include several measures, e.g., maximize CPU utilization while confining the maximum response time to 1 second.
  - Evaluate various algorithms.
- Evaluation methods:
  - Deterministic modeling
  - Queueing models
  - Simulation
  - Implementation

Deterministic Modeling

- A typical type of analytic evaluation
  - Takes a particular predetermined workload and defines the performance of each algorithm for that workload.
- Properties
  - Simple and fast
  - Through repeated runs over a number of examples, trends might be identified.
  - But it needs exact numbers for inputs, and its answers apply only to those cases.
  - Being too specific, it requires too exact knowledge to be generally useful!


Deterministic Modeling

  Process         P1   P2   P3   P4   P5
  CPU Burst Time  10   29    3    7   12

  FCFS:
  | P1 | P2 | P3 | P4 | P5 |
  0    10   39   42   49   61
  Average waiting time (AWT) = (0 + 10 + 39 + 42 + 49) / 5 = 28

  Nonpreemptive Shortest-Job-First:
  | P3 | P4 | P1 | P5 | P2 |
  0    3    10   20   32   61
  AWT = (10 + 32 + 0 + 3 + 20) / 5 = 13

  Round Robin (quantum = 10):
  | P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |
  0    10   20   23   30   40   50   52   61
  AWT = (0 + (10 + 20 + 2) + 20 + 23 + (30 + 10)) / 5 = 115/5 = 23
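The three averages can be checked mechanically. The sketch below (illustrative, all processes ready at time 0) reuses the same back-to-back and round-robin logic as the earlier examples:

```python
# Deterministic modeling sketch: AWT of one workload under FCFS, nonpreemptive SJF, and RR.
from collections import deque

bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def awt_in_order(order):                     # FCFS/SJF: processes run back to back
    clock, total = 0, 0
    for pid in order:
        total += clock
        clock += bursts[pid]
    return total / len(order)

def awt_rr(quantum):                         # round robin with a FIFO ready queue
    remaining, queue = dict(bursts), deque(bursts)
    clock = 0
    last = dict.fromkeys(bursts, 0)
    waits = dict.fromkeys(bursts, 0)
    while queue:
        pid = queue.popleft()
        waits[pid] += clock - last[pid]
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        last[pid] = clock
        if remaining[pid]:
            queue.append(pid)
    return sum(waits.values()) / len(waits)

print("FCFS:", awt_in_order(["P1", "P2", "P3", "P4", "P5"]))      # 28.0
print("SJF :", awt_in_order(sorted(bursts, key=bursts.get)))      # 13.0
print("RR  :", awt_rr(10))                                        # 23.0
```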

Queueing Models

- Motivation:
  - Workloads vary, and there is no static set of processes.
- Models (~ queueing-network analysis)
  - Workload:
    a. Arrival rate: the distribution of times when processes arrive
    b. The distributions of CPU & I/O bursts
  - Service rate


Queueing Models

- Model a computer system as a network of servers. Each server has a queue of waiting processes.
  - Compute the average queue length, waiting time, and so on.
- Properties:
  - Generally useful, but with limited application to the classes of algorithms & distributions that can be handled
  - Assumptions are made to make problems solvable => possibly inaccurate results

Queueing Models

- Example: Little's formula (in steady state)

    n = λ × w

  where
    n = average # of processes in the queue
    λ = average arrival rate
    w = average waiting time in the queue

  If n = 14 and λ = 7 processes/sec, then w = 2 seconds.
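A trivial helper makes the steady-state relation explicit (illustrative; any one unknown follows from the other two):

```python
# Little's formula: n = lambda * w in steady state.
def littles_law(n=None, lam=None, w=None):
    """Pass exactly two of n (queue length), lam (arrival rate), w (waiting time)."""
    if n is None:
        return lam * w
    if lam is None:
        return n / w
    return n / lam                      # solve for w

print(littles_law(n=14, lam=7.0))       # 2.0 seconds, as in the example above
```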


Simulation

- Motivation:
  - Get a more accurate evaluation.
- Procedures:
  - Program a model of the computer system.
  - Drive the simulation with various data sets:
    - Randomly generated according to some probability distributions
      => may be inaccurate: distributions indicate only the occurrence frequency of events and miss the order & the relationships of events.
    - Trace tapes: monitor the real system & record the sequence of actual events.

Simulation

- Properties:
  - Accurate results can be obtained, but it could be expensive in terms of computation time and storage space.
  - The coding, design, and debugging of a simulation can be a big job.


Implementation

- Motivation:
  - Get more accurate results than a simulation!
- Procedure:
  - Code the scheduling algorithm.
  - Put it in the OS.
  - Evaluate the real behavior.

Implementation

- Difficulties:
  - Cost of coding the algorithm and modifying the OS
  - Reaction of users to a constantly changing OS
  - The environment in which the algorithm is used will change
    - e.g., users may adjust their behaviors according to the selected algorithm
    => Separation of policy and mechanism!


Process Scheduling Model

- Process local scheduling
  - User-level threads
- System global scheduling
  - Kernel-level threads

Process Scheduling Model – Solaris 2

- Priority-based process scheduling, with classes listed from high to low priority:
  - Real-Time
  - System
    - Kernel processes
  - Time-Sharing
    - The default class
  - Interactive
- Each LWP inherits its class from its parent process.


Process Scheduling Model – Solaris 2

- Real-Time
  - A guaranteed response
- System
  - The priorities of system processes are fixed.
- Time-Sharing
  - Multilevel feedback queue scheduling – priorities inversely proportional to time slices
- Interactive
  - Prefers windowing processes

Process Scheduling Model – Solaris 2

- The selected thread runs until one of the following occurs:
  - It blocks.
  - It uses up its time slice (if it is not a system thread).
  - It is preempted by a higher-priority thread.


Process Scheduling Model – Windows 2000

- Priority-based preemptive scheduling
- Priority classes/relative priorities: 0..31
- Dispatcher: a process runs until
  - It is preempted by a higher-priority process.
  - It terminates.
  - Its time quantum ends.
  - It calls a blocking system call.
- Idle thread
- Time quantum – per thread
  - Increased after some waiting
  - Decreased after some computation
  - Favors foreground processes

Process Scheduling Model – Windows 2000

  Relative priority vs. priority class (the first five columns form the variable class,
  the last column is the real-time class; the "Normal" row gives each class's base priority):

                  Idle      Below    Normal   Above    High   Real-
                  priority  normal            normal          time
  Time-critical      15       15       15       15      15     31
  Highest             6        8       10       12      15     26
  Above normal        5        7        9       11      14     25
  Normal              4        6        8       10      13     24
  Below normal        3        5        7        9      12     23
  Lowest              2        4        6        8      11     22
  Idle                1        1        1        1       1     16


Process Scheduling Model – Linux

- A time-sharing algorithm for fairness (see the sketch below)
  - credits = (credits / 2) + priority
  - Recrediting is done when no runnable process has any credits left.
  - Favors interactive or I/O-bound processes.
- Real-time scheduling algorithms
  - FCFS & RR, as required by POSIX.1b
  - No scheduling (kernel preemption) in kernel space for conventional Linux.
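A minimal sketch of the recrediting rule described above (illustrative only; the process-table layout and field names are assumptions, not the actual kernel data structures):

```python
# Classic Linux time-sharing recrediting: credits = credits // 2 + priority, applied
# only when no runnable process has credits left; sleepers keep part of their credits.
def recredit(processes):
    """processes: {pid: {"credits", "priority", "runnable"}} - updated in place."""
    runnable = [p for p in processes.values() if p["runnable"]]
    if all(p["credits"] == 0 for p in runnable):
        for p in processes.values():
            p["credits"] = p["credits"] // 2 + p["priority"]
    return processes

procs = {"cpu_hog": {"credits": 0, "priority": 20, "runnable": True},
         "editor":  {"credits": 8, "priority": 20, "runnable": False}}  # sleeping on I/O
print(recredit(procs))   # cpu_hog -> 20 credits, editor -> 24 (I/O-bound process favored)
```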
