* All rights reserved, Tei-Wei Kuo, National Taiwan University, 2002.
Contents
1. Introduction
2. Computer-System Structures
3. Operating-System Structures
4. Processes
5. Threads
6. CPU Scheduling
7. Process Synchronization
8. Deadlocks
9. Memory Management
10. Virtual Memory
11. File Systems
Chapter 10
Virtual Memory
Virtual Memory
A technique that allows the execution of a process that is not completely in memory.
Motivation:
An entire program in execution may not all be needed at the same time!
e.g., error-handling routines, a large array, rarely used program features, etc.
Virtual Memory
Potential Benefits
Programs can be much larger than the amount of physical memory. Users can concentrate on programming their problems.
The level of multiprogramming increases because each process occupies less physical memory.
Each user program may run faster because less I/O is needed for loading or swapping user programs.
Implementation: demand paging, demand segmentation (more difficult), etc.
Demand Paging – Lazy Swapping
The process image may reside on the backing store. Rather than swap the entire process image into memory, a lazy swapper swaps in a page only when that page is needed!
Pure Demand Paging – Pager vs. Swapper
A mechanism is required to recover from references to non-resident pages.
Page Fault: occurs when a program references a non-memory-resident page.
Demand Paging – Lazy Swapping
[Figure: logical memory holds pages A–F; the page table maps A→frame 4, C→frame 6, and F→frame 9 with the valid bit set, while the other entries are invalid. The valid–invalid bit alone cannot tell an illegal reference from a non-memory-resident page. Non-resident pages stay on the backing store.]
A Procedure to Handle A Page Fault
[Figure: the steps of page-fault handling among CPU, page table, OS, and backing store:]
1. Reference a page (the invalid bit traps to the OS).
2. Trap to the OS (a valid but disk-resident page).
3. Issue a "read" instruction & find a free frame.
4. Bring in the missing page.
5. Reset the page table.
6. Return to re-execute the instruction.
A Procedure to Handle A Page Fault
Pure Demand Paging:
Never bring a page into memory until it is required!
Pre-Paging:
Bring into memory at one time all of the pages that "will" be needed!
Locality of reference
Hardware Support for Demand Paging
New Bits in the Page Table
To indicate whether a page is in memory or not.
Secondary Storage
Swap space in the backing store:
A contiguous section of space in secondary storage, for better performance.
Crucial Issues
Example 1 – Cost in restarting an instruction
Assembly Instruction: ADD a, b, c
Restarting it is only a short job!
Re-fetch the instruction, decode, fetch operands, execute, save, etc.
Strategy:
Restart the instruction from the beginning!
Crucial Issues
Example 2 – Block-Moving Assembly Instruction
MVC x, y, 256
IBM System 360/370
Characteristics
More expensive to restart!
Overlapping ("self-modifying") operands
Solutions:
Pre-load pages
Pre-save & recover before page-fault services
[Figure: MVC x, y, 4 copies bytes A B C D from y to x. If x and y overlap and a page fault occurs partway through, part of x has already been destroyed — the instruction cannot simply be restarted ("Return??").]
Crucial Issues
Example 3 – Addressing Modes
MOV (R2)+, -(R3)
[Figure: the source operand uses auto-increment (R2)+ and the destination uses auto-decrement -(R3).]
If a page fault occurs while the instruction executes, R2 and R3 may already have been modified!
– Undo the effects!
Performance of Demand Paging
Effective Access Time:
EAT = (1 − p) × ma + p × pft, where
ma: memory access time for paging
p: probability of a page fault
pft: page fault time
Performance of Demand Paging
Page fault time – major components
Components 1 & 3 (about 10^3 ns ~ 10^5 ns)
Service the page-fault interrupt
Restart the process
Component 2 (about 24 ms)
Read in the page (multiprogramming can hide part of this latency; however, let's get a taste of the cost!)
pft ≈ 25 ms = 25,000,000 ns
Effective Access Time (when ma = 100 ns)
EAT = (1 − p) × 100 ns + p × 25,000,000 ns
    = 100 ns + 24,999,900 ns × p
Performance of Demand Paging
Example
p = 1/1000
Effective Access Time ≈ 25,000 ns
→ Slowed down by 250 times!
How to limit the slow-down to 10%?
110 > 100 + 25,000,000 × p
⇒ p < 0.0000004, i.e., p < 1/2,500,000
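The arithmetic above is easy to reproduce; the following is a minimal C sketch using the lecture's example values (ma = 100 ns, pft = 25 ms):

/* Effective access time under demand paging, with the lecture's
 * example values: ma = 100 ns, pft = 25 ms. */
#include <stdio.h>

int main(void) {
    double ma  = 100.0;           /* memory access time, in ns     */
    double pft = 25000000.0;      /* page-fault time: 25 ms, in ns */
    double p   = 1.0 / 1000.0;    /* page-fault probability        */

    double eat = (1.0 - p) * ma + p * pft;
    printf("EAT = %.1f ns (slowdown: %.0fx)\n", eat, eat / ma);

    /* For at most a 10% slowdown we need EAT <= 110 ns: */
    double p_max = (110.0 - ma) / (pft - ma);
    printf("p must be below %.10f (about 1 in %.0f)\n", p_max, 1.0 / p_max);
    return 0;
}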
Performance of Demand Paging
How to keep the page fault rate low?
Effective Access Time ≈ 100ns + 24,999,900ns * p
Handling of Swap Space – A Way to Reduce Page Fault Time (pft)
Disk I/O to swap space is generally faster than that to the file system.
Preload processes into the swap space before they start up.
Demand-page from the file system, but write replaced pages to the swap space and page them from there afterwards. (BSD UNIX)
Process Creation
Copy-on-Write
Rapid process creation with a reduced number of new pages for the new process
fork(); execve()
Shared pages → copy-on-write pages
Only the pages that are modified are copied!
[Figure: after fork(), Page Table 1 (P1) and Page Table 2 (P2) map pages such as data1, ed1, ed2, and ed3 to the same frames (3, 4, 6, 1); a frame is copied only when one of the processes writes to it.]
Process Creation
Copy-on-Write
zero-fill-on-demand
Zero-filled pages, allocated on their first use
vfork() vs. fork() with copy-on-write
vfork() lets the parent and child processes share the page table and pages.
Where to keep the copy-on-write information for pages?
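As a concrete POSIX illustration of copy-on-write after fork() — a minimal sketch of the observable behavior, not of the kernel mechanism:

/* fork() with copy-on-write: parent and child initially share
 * physical pages; the child's store below makes the kernel copy
 * just the touched page. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char buf[4 * 4096];       /* a few pages, shared COW after fork */

int main(void) {
    buf[0] = 'A';
    pid_t pid = fork();
    if (pid == 0) {              /* child */
        buf[0] = 'B';            /* write fault: one private page copy */
        printf("child sees %c\n", buf[0]);     /* B */
        _exit(0);
    }
    wait(NULL);
    printf("parent still sees %c\n", buf[0]);  /* A */
    return 0;
}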
Memory-Mapped Files
File writes might not cause any disk write!
Solaris 2 uses memory-mapped files for open(), read(), write(), etc.
[Figure: pages 1–6 of a disk file are mapped into the virtual memories of P1 and P2; both processes access the file contents through shared memory-resident pages.]
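A minimal POSIX sketch of the idea (the file name "data.bin" is just a placeholder): loads and stores on the mapping replace read()/write() calls, and the disk write may be deferred until eviction or msync().

/* Memory-mapped file I/O: ordinary loads/stores instead of read()/write(). */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDWR);           /* placeholder file */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) return 1;

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    p[0] = 'X';                     /* a store, not a write() system call */
    msync(p, st.st_size, MS_SYNC);  /* force the disk write now           */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}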
Page Replacement
Demand paging increases the multiprogramming level of a system by "potentially" over-allocating memory.
Total physical memory = 40 frames
Run six processes, each of size 10 frames but each using only five frames ⇒ 10 spare frames
Most of the time, the average memory usage is close to the physical memory size if we increase a system's multiprogramming level!
Page Replacement
Q: Should we run the 7th process?
What if the six processes start to ask for their full shares?
What should we do if all memory is in use and more is needed?
Answers
Kill a user process!
But paging should be transparent to users!
Swap out a process!
Do page replacement!
Page Replacement
A Page-Fault Service
Find the desired page on the disk!
Find a free frame
Select a victim and write the victim page out when there is no free frame!
Read the desired page into the selected frame.
Update the page and frame tables, and restart the user process.
Page Replacement
[Figure: processes P1 and P2, their page tables, and a fully occupied physical memory; with no free frame, servicing a page fault requires choosing a victim frame.]
Two page transfers per page fault if no frame is available!
[Figure: a page table extended with a modify (dirty) bit next to the valid–invalid bit.]
Modify (Dirty) Bit! To "eliminate" the "swap out" of unmodified victim pages
⇒ Reduce I/O time by one-half!
The modify bit is set by the hardware automatically!
Page Replacement
Two Major Pieces for Demand Paging
Frame Allocation Algorithms
How many frames are allocated to a process?
Page Replacement Algorithms
When page replacement is required, select the frame that is to be replaced!
Goal: A low page fault rate!
Note that a bad replacement choice does not cause an incorrect execution!
Page Replacement Algorithms
Evaluation of Algorithms
Calculate the number of page faults on strings of memory references, called reference strings, for a set of algorithms
Sources of Reference Strings
Reference strings are generated artificially.
Reference strings are recorded as system traces:
How to reduce the amount of data?
Page Replacement Algorithms
Two Observations to Reduce the Number of Data:
Consider only the page numbers if the page size is fixed.
Reduce memory references into page references
If a page p is referenced, any immediately following references to page p will never cause a page fault.
Reduce consecutive page references of page p into one page reference.
Page Replacement Algorithms
Does the number of page faults decrease when the number of available page frames increases?
Example (each address = 2-digit page # + 2-digit offset):
Memory references: 0100, 0432, 0101, 0612, 0103, 0104, 0101, 0611
→ Page references: 01, 04, 01, 06, 01, 01, 01, 06
→ After merging consecutive references: 01, 04, 01, 06, 01, 06
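A minimal C sketch of this trace reduction (the divisor 100 matches the 2-digit-offset format of the example):

/* Reduce a memory-reference trace to a page-reference string:
 * keep only page numbers and merge consecutive duplicates. */
#include <stdio.h>

int main(void) {
    int refs[] = {100, 432, 101, 612, 103, 104, 101, 611};
    int n = sizeof refs / sizeof refs[0];
    int last = -1;                    /* previously emitted page */

    for (int i = 0; i < n; i++) {
        int page = refs[i] / 100;     /* drop the offset */
        if (page != last) {           /* merge consecutive references */
            printf("%02d ", page);
            last = page;
        }
    }
    printf("\n");                     /* prints: 01 04 01 06 01 06 */
    return 0;
}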
FIFO Algorithm
A FIFO Implementation
1. Each page is given a time stamp when it is brought into memory.
2. Select the oldest page for replacement!
[Figure: FIFO replacement on the reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 page frames; the FIFO queue tracks arrival order, the oldest page is evicted on each fault, and the run incurs 15 page faults.]
FIFO Algorithm
The Idea behind FIFO
The oldest page is unlikely to be used again.
?? But should we save a page that will be used again in the near future ??
Belady’s anomaly
For some page-replacement algorithms, the page fault rate may increase as the number of allocated frames increases.
FIFO Algorithm
Run the FIFO algorithm on the following reference string:
1 2 3 4 1 2 5 1 2 3 4 5
[Figure: frame contents after each reference — with 3 frames FIFO incurs 9 page faults, with 4 frames it incurs 10!]
FIFO may push out pages that will be used again soon!
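The counts above are easy to check by simulation. A minimal C sketch of FIFO replacement; running it reproduces Belady's anomaly on this reference string:

/* FIFO page replacement: count page faults for a given frame count. */
#include <stdio.h>

static int fifo_faults(const int *ref, int n, int nframes) {
    int frames[16], used = 0, oldest = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) {
            frames[used++] = ref[i];          /* fill a free frame    */
        } else {
            frames[oldest] = ref[i];          /* evict the oldest one */
            oldest = (oldest + 1) % nframes;
        }
    }
    return faults;
}

int main(void) {
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof ref / sizeof ref[0];
    printf("3 frames: %d faults\n", fifo_faults(ref, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(ref, n, 4));  /* 10 */
    return 0;
}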
Optimal Algorithm (OPT)
Optimality
One with the lowest page fault rate.
Replace the page that will not be used for the longest period of time. ↔ Future prediction!
[Figure: OPT on the reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames; each victim is the resident page whose next use (e.g., "next 7", "next 0", "next 1") lies farthest in the future, giving 9 page faults.]
Least-Recently-Used Algorithm (LRU)
The Idea:
OPT is concerned with when a page will next be used!
But we don't have knowledge about the future!
So use the history of page referencing in the past to predict the future!
LRU on S ↔ OPT on S^R (S^R is the reverse of S!)
LRU Algorithm
[Figure: LRU on the reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames; the least-recently-used page is evicted at each fault, giving 12 page faults — including one annotated case where LRU makes a wrong prediction.]
LRU Implementation – Counters
[Figure: each page-table entry carries a frame #, a valid–invalid bit, and a time tag; a logical clock (cnt++) is incremented on every memory reference, and the referenced entry's "time-of-use" field is updated with the clock value.]
LRU Implementation – Counters
Overheads
The logical clock is incremented for every memory reference.
Update the “time-of-use” field for each referenced page.
Search the LRU page for replacement.
Prevent overflow of the clock & maintain the "time-of-use" field of each page-table entry.
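A minimal C sketch of the counter scheme (the frame count and reference string are the lecture's example; real hardware would update the tags, not software):

/* LRU with counters: a logical clock ticks on every reference; each
 * resident page records its time of last use, and the victim is the
 * page with the smallest time tag. */
#include <stdio.h>

#define NFRAMES 3

static int page[NFRAMES];             /* page in each frame (-1 = free) */
static unsigned long tag[NFRAMES];    /* time of last use               */

static int lru_faults(const int *ref, int n) {
    unsigned long clk = 0;
    int faults = 0;
    for (int f = 0; f < NFRAMES; f++) { page[f] = -1; tag[f] = 0; }
    for (int i = 0; i < n; i++) {
        clk++;                        /* one tick per memory reference */
        int hit = 0, victim = 0;
        for (int f = 0; f < NFRAMES; f++) {
            if (page[f] == ref[i]) { tag[f] = clk; hit = 1; break; }
            if (page[f] == -1 || tag[f] < tag[victim]) victim = f;
        }
        if (hit) continue;
        faults++;
        page[victim] = ref[i];        /* replace the LRU or a free frame */
        tag[victim] = clk;
    }
    return faults;
}

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("LRU faults: %d\n", lru_faults(ref, 20));   /* prints 12 */
    return 0;
}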
LRU Implementation – Stack
[Figure: a doubly linked stack of memory-resident page numbers; on every reference, the referenced page is moved to the head (MRU end), so the tail always holds the LRU page.]
Overheads: stack maintenance on every memory reference, but no search is needed for page replacement!
A Stack Algorithm
Needs hardware support for an efficient implementation.
Note that LRU maintenance needs to be done for every memory reference.
{memory-resident pages with n frames available} ⊆ {memory-resident pages with (n + 1) frames available}
Hence a stack algorithm such as LRU never exhibits Belady's anomaly.
LRU Approximation Algorithms
Motivation
No sufficient hardware support
Most systems provide only a "reference bit", which indicates only whether a page has been used or not, not the order of use.
Additional-Reference-Bits Algorithm
Second-Chance Algorithm
Enhanced Second Chance Algorithm
Counting-Based Page Replacement
Additional-Reference-Bits Algorithm
Motivation
Keep a history of reference bits
[Figure: each memory-resident page has a reference bit and a one-byte history register (e.g., 01101101). At each regular interval, the OS shifts every history register right by one bit and copies the reference bit into the most significant bit.]
History Registers
But, how many bits per history register should be used?
It must be fast and cost-effective!
The more bits, the better the approximation.
[Figure: history registers interpreted as unsigned integers — 00000000 (not used in any of the last 8 periods) marks the LRU page (smallest value), while 11111111 (used at least once in every period) marks the MRU page.]
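A minimal C sketch of the aging step (the page count and the one-byte history width are illustrative):

/* Additional-reference-bits (aging): one history byte per page; at
 * each timer interval, shift the history right and OR the hardware
 * reference bit into the most significant bit. */
#include <stdint.h>
#include <stdio.h>

#define NPAGES 4

static uint8_t history[NPAGES];   /* one byte of history per page */
static uint8_t refbit[NPAGES];    /* hardware reference bit (0/1) */

static void on_timer_interval(void) {
    for (int p = 0; p < NPAGES; p++) {
        history[p] = (uint8_t)((history[p] >> 1) | (refbit[p] << 7));
        refbit[p] = 0;            /* clear for the next interval */
    }
}

static int lru_victim(void) {     /* smallest history value ~ LRU page */
    int v = 0;
    for (int p = 1; p < NPAGES; p++)
        if (history[p] < history[v]) v = p;
    return v;
}

int main(void) {
    refbit[2] = 1;                /* page 2 referenced this interval */
    on_timer_interval();
    refbit[2] = 1;                /* ... and in the next one as well */
    on_timer_interval();
    printf("victim: page %d\n", lru_victim());   /* not page 2 */
    return 0;
}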
Second-Chance (Clock) Algorithm
Motivation
Use the reference bit only
Basic Data Structure:
Circular FIFO Queue
Basic Mechanism
When a page is selected
Take it as a victim if its reference bit = 0
Otherwise, clear the bit and advance to the next page
[Figure: a circular FIFO queue of pages with their reference bits; the hand sweeps forward, clearing reference bits that are 1 and stopping at the first page whose bit is already 0.]
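A minimal C sketch of the clock sweep (the frame contents are made-up values):

/* Second-chance (clock): scan circularly; a page with reference
 * bit 1 gets a second chance (bit cleared); the first page found
 * with bit 0 is the victim. */
#include <stdio.h>

#define NFRAMES 4

static int page[NFRAMES]   = {10, 11, 12, 13};
static int refbit[NFRAMES] = { 1,  0,  1,  1};
static int hand = 0;                 /* the clock hand */

static int clock_victim(void) {
    for (;;) {
        if (refbit[hand] == 0) {     /* no second chance left */
            int v = hand;
            hand = (hand + 1) % NFRAMES;
            return v;
        }
        refbit[hand] = 0;            /* give a second chance  */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    int v = clock_victim();
    printf("evict page %d in frame %d\n", page[v], v);   /* page 11 */
    return 0;
}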
Enhanced Second-Chance Algorithm
Motivation:
Consider the cost of swapping out pages.
4 classes (reference bit, modify bit):
(0,0) – not recently used and not "dirty" → lowest replacement cost
(0,1) – not recently used but "dirty"
(1,0) – recently used but not "dirty"
(1,1) – recently used and "dirty" → highest replacement cost
Enhanced Second-Chance Algorithm
Use the second-chance algorithm to replace the first page encountered in the lowest nonempty class.
⇒ May have to scan the circular queue several times before finding the right page.
Used in Macintosh virtual memory management.
Counting Algorithms
Motivation:
Count the # of references made to each page, rather than recording their reference times.
Least Frequently Used Algorithm (LFU)
LFU pages are the less actively used pages!
Potential Hazard: Some heavily used pages may no longer be used !
A Solution – Aging
Shift counters right by one bit at regular intervals.
Counting Algorithms
Most Frequently Used Algorithm (MFU)
Pages with the smallest number of references were probably just brought in and have yet to be used!
LFU & MFU replacement schemes can be fairly expensive!
They do not approximate OPT very well!
Page Buffering
Basic Idea
a. Systems keep a pool of free frames.
b. Desired pages are first "swapped into" frames from the pool.
c. When the selected page (victim) is later written out, its frame is returned to the pool.
Variation 1
a. Maintain a list of modified pages.
b. Whenever the paging device is idle, a modified page is written out and its "modify bit" is reset.
Page Buffering
Variation 2
a. Remember which page was in each frame of the pool.
b. When a page fault occurs, first check whether the desired page is there already.
Pages which were in frames of the pool must be “clean”.
“Swapping-in” time is saved!
VAX/VMS adopts this, together with the FIFO replacement algorithm, to improve FIFO's performance.
Frame Allocation – Single User
Basic Strategy:
User process is allocated any free frame.
User process requests free frames from the free-frame list.
When the free-frame list is exhausted, page replacement takes place.
All allocated frames are released when the process terminates.
Variations
O.S. can share with users some free frames for special purposes.
Page Buffering – frames kept to save "swapping" time
Frame Allocation – Multiple Users
Fixed Allocation
a. Equal Allocation
m frames, n processes → m/n frames per process
b. Proportional Allocation
1. Ratio of frames ∝ size: S = Σ Si, Ai ≈ (Si / S) × m, where (Σ Ai ≤ m) & (Ai ≥ minimum # of frames required)
2. Ratio of frames ∝ priority, with Si: relative importance
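For instance (an illustrative calculation, not from the slides): with m = 62 free frames and two processes of sizes S1 = 10 and S2 = 127 pages, S = 137, so A1 = ⌊(10/137) × 62⌋ = 4 frames and A2 = ⌊(127/137) × 62⌋ = 57 frames.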
Frame Allocation – Multiple Users
Dynamic Allocation
a. Allocated frames ∝ the multiprogramming level
b. Others
The minimum number of frames required by a process is determined by the instruction-set architecture.
ADD A, B, C → 4 frames needed (the instruction plus three operands)
ADD (A), (B), (C) → 1 + 2 + 2 + 2 = 7 frames, where (A) denotes indirect addressing (each indirect operand may touch a pointer page and an operand page)
Frame Allocation – Multiple Users
Minimum Number of Frames (Continued)
How many levels of indirect addressing should be supported?
Unbounded indirection may touch every page in the logical address space of a process ⇒ virtual memory collapses!
[Figure: a 16-bit address word whose top bit selects indirect (1) or direct (0) addressing.]
A long instruction may cross a page boundary.
MVC X, Y, 256 → 2 + 2 + 2 = 6 frames
(The instruction and each operand may span two pages.)
Frame Allocation – Multiple Users
Global Allocation
Processes can take frames from each other. For example, high-priority processes can increase their frame allocations at the expense of the low-priority processes!
Local Allocation
Processes can select frames only from their own allocated frames → Fixed Allocation
The set of pages in memory for a process is affected by the paging behavior of only that process.
Frame Allocation – Multiple Users
Remarks
a. Global replacement generally results in better system throughput.
b. Processes cannot control their own page-fault rates, so processes can easily affect one another.
Thrashing
Thrashing – A High Paging Activity:
A process is thrashing if it is spending more time paging than executing.
Why thrashing?
Too few frames allocated to a process!
Thrashing
[Figure: CPU utilization vs. degree of multiprogramming — utilization climbs, peaks, then collapses once thrashing sets in; low CPU utilization tempts the OS to dispatch new processes under a global page-replacement algorithm, which worsens the thrashing.]
Thrashing
Solutions:
Decrease the multiprogramming level → swap out processes!
Use local page-replacement algorithms
This only limits the thrashing effects "locally":
a thrashing process's page faults still slow down other processes (the paging device is shared).
Give processes as many frames as they need!
But, how do you know the right number of frames for a process?
Locality Model
A program is composed of several different (possibly overlapping) localities.
Localities are defined by the program structure and data structures (e.g., an array, a hash table).
How do we know that we allocate enough frames to a process to accommodate its current locality?
locality_i = {P_i,1, P_i,2, …, P_i,ni} —(control flow)→ locality_j = {P_j,1, P_j,2, …, P_j,nj}
Working-Set Model
The working set is an approximation of a program’s locality.
Page references:
… 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 …
working-set window Δ ending at t1 → working-set(t1) = {1, 2, 5, 6, 7}
working-set window Δ ending at t2 → working-set(t2) = {3, 4}
The working-set size gives the minimum allocation for the process.
Δ → ∞ → the working set is all touched pages, possibly covering several localities.
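A minimal C sketch that computes the working set at a time t as the distinct pages among the last Δ references (Δ = 10 reproduces the two sets in the example above):

/* Working set over a sliding window of DELTA page references. */
#include <stdio.h>

#define DELTA 10

static void print_ws(const int *ref, int t) {  /* t = index just past window */
    int seen[16] = {0};
    printf("working-set(t=%d) = {", t);
    for (int i = t - DELTA; i < t; i++) {
        if (!seen[ref[i]]) {                   /* first occurrence in window */
            seen[ref[i]] = 1;
            printf(" %d", ref[i]);
        }
    }
    printf(" }\n");
}

int main(void) {
    int ref[] = {2,6,1,5,7,7,7,7,5,1,6,2,3,4,1,2,3,4,4,4,3,4,3,4,4,4};
    print_ws(ref, 10);    /* pages {2 6 1 5 7} */
    print_ws(ref, 26);    /* pages {3 4}       */
    return 0;
}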
Working-Set Model
D = Σ_i (working-set size of process i), where M is the total number of available frames.
D ≤ M → "Safe": extra frames are available; new processes may be initiated.
D > M → suspend some processes and swap out their pages.
Working-Set Model
The maintenance of working sets is expensive!
Approximation by a timer and the reference bit
Accuracy vs. timeout interval!
[Figure: each page keeps a reference bit plus in-memory history bits; at each timer interrupt, the reference bits are shifted (copied) into the history bits and then cleared.]
Page-Fault Frequency
Motivation
Control thrashing directly by observing the page-fault rate!
[Figure: page-fault rate vs. number of allocated frames, with an upper bound and a lower bound — above the upper bound, increase the # of frames; below the lower bound, decrease the # of frames.]
* Processes are suspended and swapped out if the number of available frames falls below the minimum needed.
OS Examples – NT
Virtual Memory – Demand Paging with Clustering
Clustering brings in more pages surrounding the faulting page!
Working Set
Min and max working-set bounds for each process
Local page replacement when the max number of frames is allocated.
Automatic working-set trimming reduces a process's allocated frames to its min when the system threshold on available frames is reached.
OS Examples – Solaris
The pageout process first clears the reference bits of all pages and later returns the pages whose reference bit is still 0 to the system (the two clock hands are separated by handspread).
pageout runs at 4 Hz, rising to 100 Hz when free memory falls below desfree!
Swapping starts if free memory stays below desfree for 30 s.
pageout runs on every request for a new page when minfree is reached.
[Figure: the page scan rate grows from slowscan (100) toward fastscan (8192) as free memory falls through lotsfree, desfree, and minfree.]
Other Considerations
Pre-Paging
Bring into memory at one time all the pages that will be needed!
Issue
Pre-paging cost vs. the cost of servicing the page faults saved
e.g., when suspended processes are resumed, or swapped-out processes rejoin the ready queue, their remembered pages can be pre-paged.
Do pre-paging if the working set is known!
Not every page in the working set will be used, though!
Other Considerations
Page Size
Trends - Large Page Size
∵ The CPU speed and the memory capacity grow much faster than the disk speed!
[Figure: the page-size spectrum, from 512 B (2^9) to 16,384 B (2^14). A small page size gives better resolution for locality and less internal fragmentation; a large page size gives a smaller page table and better I/O efficiency.]
Other Considerations
TLB Reach
TLB Reach = TLB-Entry-Number × Page-Size
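For instance (an illustrative configuration, not from the slides): a 64-entry TLB with 8 KB pages has a reach of 64 × 8 KB = 512 KB, so a working set larger than 512 KB cannot be fully covered by the TLB.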
Wish
The working set is stored in the TLB!
Solutions
Increase the page size
Have multiple page sizes – UltraSparc II (8KB - 4MB)
Other Considerations
Inverted Page Table
The objective is to reduce the amount of physical memory needed for page tables. However, a per-process page table is still needed when a page fault occurs, and it may itself be paged!
⇒ More page faults, now for the page tables themselves, will occur!!!
Other Considerations
Program Structure
Motivation – Improve the system performance by an awareness of the underlying demand paging.
var A: array [1..128, 1..128] of integer;
for j := 1 to 128 do
    for i := 1 to 128 do
        A[i, j] := 0;
[Figure: A is stored in row-major order — row A(1,1) … A(1,128) fills one 128-word page, so the 128 rows occupy 128 pages.]
128 × 128 page faults if the process has fewer than 128 frames!!
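The fix is to traverse in storage order. A minimal C rendering of the same example (here a "row" is 128 ints rather than 128 words, but the access pattern is the point):

/* Column-major vs. row-major traversal of a row-major array. */
#include <string.h>

int A[128][128];   /* stored row by row; one row ~ one page in the example */

void zero_column_major(void) {   /* bad: up to 128 x 128 page faults */
    for (int j = 0; j < 128; j++)
        for (int i = 0; i < 128; i++)
            A[i][j] = 0;
}

void zero_row_major(void) {      /* good: about 128 page faults */
    for (int i = 0; i < 128; i++)
        for (int j = 0; j < 128; j++)
            A[i][j] = 0;
}

int main(void) {
    zero_column_major();
    zero_row_major();            /* or simply: memset(A, 0, sizeof A); */
    return 0;
}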
Other Considerations
Program Structures:
Data Structures
Locality: e.g., a stack gives good locality, while a hash table scatters references.
Search speed, # of memory references, # of pages touched, etc.
Programming Language
Lisp, PASCAL, etc.
Compiler & Loader
Separate code and data
Pack inter-related routines into the same page
Routine placement (across page boundary?)
I/O Interlock
[Figure: a device (drive) DMAs directly into a buffer in physical memory.]
DMA is given the following information about the buffer:
• Base address in memory
• Chunk size
Could the page holding the buffer be paged out before the I/O completes?
I/O Interlock
Solutions
I/O Device ↔ System Memory ↔ User Memory
Extra data copying!!
Lock pages into memory
The lock bit of a page-faulting page is set until the faulting process is dispatched!
Lock bits might never be turned off!
Multi-user systems usually take locks as "hints" only!
Real-Time Processing
Solution:
Go beyond locking hints ⇒ allow privileged users to require that pages be locked in memory!
Predictable Behavior
Virtual memory introduces unexpected, long-term delays in the execution of a program.
Demand Segmentation
Motivation
Segmentation better captures the logical structure of a process!
Demand paging needs a significant amount of hardware!
Mechanism
Like demand paging!
However, compaction may be needed!
Considerable overheads!