
(1)

Lecture 9:

Improving Cache Performance:

• Reduce miss rate

• Reduce miss penalty

• Reduce hit time

(2)

Review

• ABC of Cache:

– Associativity
– Block size
– Capacity

• Cache organization

– Direct-mapped cache: A = 1, S = C/B
– N-way set-associative: A = N, S = C/(B×A)
– Fully associative: S = 1

• 3 kinds of cache misses (3Cs)

– Compulsory
– Conflict

– Capacity

• How to improve cache performance?

– Average memory access time (AMAT) = hit time + miss rate × miss penalty
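For example, with illustrative numbers of hit time = 1 cycle, miss rate = 5%, and miss penalty = 40 cycles: AMAT = 1 + 0.05 × 40 = 3 cycles.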

(3)

1. Reduce Misses via Larger Block Size

• Larger block size reduces compulsory misses

– Spatial locality

– Example: access pattern 0x0000, 0x0004, 0x0008, 0x000C, ...

Block size = 2 words: 0x0000 (miss), 0x0004 (hit), 0x0008 (miss), 0x000C (hit)

Block size = 4 words: 0x0000 (miss), 0x0004 (hit), 0x0008 (hit), 0x000C (hit)

(4)

1. Reduce Misses via Larger Block Size

• Larger block size increases the miss penalty

• Larger block size may increase capacity misses & conflict misses

– Example 1 (conflict misses):

» access pattern 0,2,4,6,9,11,13,15,0,2,4,6,…

[Figure: cache contents with block size = 1 B vs. block size = 2 B, both direct-mapped with the same capacity; with 2 B blocks the two halves of the access pattern map to the same sets and evict each other, so the repeating pattern keeps missing.]

(5)

Reduce Misses via Larger Block Size

• Example 2 (capacity misses)

– Access pattern 0,2,4,6,8,12,14,16,0,2,4,…

[Figure: cache contents with block size = 1 B vs. block size = 2 B, both fully associative with the same capacity; with 2 B blocks each access also brings in an unreferenced neighbor, so the working set no longer fits and capacity misses occur.]

(6)

1. Reduce Misses via Larger Block Size

[Figure: miss rate (0%–25%) vs. block size (16–256 bytes) for cache sizes 1 KB–256 KB; larger blocks first lower the miss rate, then raise it again as blocks grow too large relative to the cache, most sharply for the small caches.]

(7)

2. Reduce Misses via Higher Associativity

• 2:1 Cache Rule:

– Miss rate of a direct-mapped cache of size N ≈ miss rate of a 2-way set-associative cache of size N/2

• Beware: execution time is the only final measure!

– Will Clock Cycle time increase?

– Hill [1988] suggested hit time grows about +10% for external caches and +2% for internal caches, for 2-way vs. 1-way

(8)

2:1 Cache Rule

[Figure: miss rate (0–0.14) vs. cache size (1–128 KB) for 1-way, 2-way, 4-way, and 8-way associativity, with the total split into compulsory, capacity, and conflict components; the direct-mapped miss rate at size N roughly equals the 2-way miss rate at size N/2.]

(9)

Example: Avg. Memory Access Time vs. Miss Rate

• Example: assume clock cycle time (CCT) = 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way, relative to the CCT of a direct-mapped cache

Cache Size (KB)   1-way   2-way   4-way   8-way
  1               2.33    2.15    2.07    2.01
  2               1.98    1.86    1.76    1.68
  4               1.72    1.67    1.61    1.53
  8               1.46    1.48    1.47    1.43
 16               1.29    1.32    1.32    1.32
 32               1.20    1.24    1.25    1.27
 64               1.14    1.20    1.21    1.23
128               1.10    1.17    1.18    1.20

(10)

3. Reducing Misses via Victim Cache

• How to combine fast hit time of Direct Mapped yet still avoid conflict misses?

• Add buffer to place data discarded from cache

• Jouppi [1990]: 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct mapped data cache

[Figure: a direct-mapped cache (sets 0–7) with a small victim cache; when block 12 displaces block 4, block 4 moves into the victim cache, and a later miss on block 4 is satisfied by swapping it back from the victim cache instead of going to memory.]
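To make the mechanism concrete, here is a minimal C sketch of a direct-mapped cache backed by a tiny fully-associative victim buffer (illustrative only, not Jouppi's hardware design; the sizes, FIFO replacement, and the use of full block addresses as victim tags are assumptions):

#include <stdbool.h>
#include <stdint.h>

#define SETS 128            /* direct-mapped main cache (assumed size) */
#define VICTIM_ENTRIES 4    /* small fully-associative victim cache */

typedef struct { bool valid; uint32_t tag; } Line;

static Line cache[SETS];
static Line victim[VICTIM_ENTRIES];   /* victim tags hold full block addresses */
static int victim_next = 0;           /* FIFO replacement pointer */

bool access_block(uint32_t blk) {
    uint32_t set = blk % SETS;
    uint32_t tag = blk / SETS;

    if (cache[set].valid && cache[set].tag == tag)
        return true;                              /* fast hit in the main cache */

    for (int i = 0; i < VICTIM_ENTRIES; i++) {    /* probe the victim cache */
        if (victim[i].valid && victim[i].tag == blk) {
            Line displaced = cache[set];          /* swap: recovered block returns */
            cache[set] = (Line){ true, tag };
            if (displaced.valid)
                victim[i] = (Line){ true, displaced.tag * SETS + set };
            else
                victim[i].valid = false;
            return true;                          /* slow hit, but no memory access */
        }
    }

    if (cache[set].valid) {                       /* true miss: evicted block */
        victim[victim_next] = (Line){ true, cache[set].tag * SETS + set };
        victim_next = (victim_next + 1) % VICTIM_ENTRIES;
    }
    cache[set] = (Line){ true, tag };             /* block fetched from memory (not shown) */
    return false;
}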

(11)

4. Reducing Misses via Pseudo- Associativity

• How to combine fast hit time of Direct Mapped and have the lower conflict misses of 2-way SA cache?

• Divide cache: on a miss, check other half of cache to see if there, if so have a pseudo-hit (slow hit)

• Drawback: CPU pipeline design is hard if a hit can take either 1 or 2 cycles

– Better for caches not tied directly to the processor

[Figure: access time line showing Hit Time < Pseudo Hit Time < Miss Penalty.]
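A rough sketch of the lookup policy (an assumed scheme; real designs also swap the two blocks on a pseudo hit so the next access is fast, and store tag bits rather than full block addresses):

#include <stdbool.h>
#include <stdint.h>

#define SETS 128   /* power of two; illustrative size */

typedef struct { bool valid; uint32_t blk; } Line;  /* full block address for simplicity */
static Line cache[SETS];

/* Returns 0 = fast hit, 1 = pseudo (slow) hit, 2 = miss. */
int pa_lookup(uint32_t blk) {
    uint32_t set = blk % SETS;
    if (cache[set].valid && cache[set].blk == blk)
        return 0;                           /* hit in the primary set */

    uint32_t pseudo = set ^ (SETS / 2);     /* flip the top index bit */
    if (cache[pseudo].valid && cache[pseudo].blk == blk)
        return 1;                           /* slow hit in the pseudo set */
    return 2;                               /* genuine miss */
}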

(12)

5. Reducing Misses by HW Prefetching of Instruction & Data

• Bring a cache block up the memory hierarchy before the processor requests it

• Example: Stream Buffer for instruction prefetching (alpha 21064)

– Alpha 21064 fetches 2 blocks on a miss
– The extra block is placed in the stream buffer
– On a miss, check the stream buffer before going to memory

[Figure: CPU, cache (sets 0–7), stream buffer, and memory; a miss installs the requested block in the cache, places the next sequential block in the stream buffer, and issues a prefetch request for the block after that.]
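In rough C, the stream-buffer policy above might look like this (the function names and one-entry buffer are assumptions for illustration, not the 21064's actual organization):

#include <stdbool.h>
#include <stdint.h>

typedef struct { bool valid; uint32_t blk; } StreamEntry;
static StreamEntry stream_buf;           /* one-entry stream buffer */

void fetch_block(uint32_t blk);          /* assumed: request a block from memory */
void install_in_cache(uint32_t blk);     /* assumed: place a block in the I-cache */

void icache_miss(uint32_t blk) {
    if (stream_buf.valid && stream_buf.blk == blk) {
        install_in_cache(blk);           /* hit in the stream buffer: no memory wait */
    } else {
        fetch_block(blk);                /* true miss: fetch the requested block */
        install_in_cache(blk);
    }
    fetch_block(blk + 1);                /* prefetch the next sequential block */
    stream_buf = (StreamEntry){ true, blk + 1 };
}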

(13)

5. Reducing Misses by HW Prefetching of Instruction & Data

• Works with data blocks too:

– Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4 KB cache; 4 streams caught 43%

– Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64 KB 4-way set-associative caches

• Prefetching relies on extra memory bandwidth that can be used without penalty

(14)

6. Reducing Misses by SW Prefetching Data

• Data Prefetch

– Compiler inserts prefetch instructions to request the data before it is needed

– Register prefetch: load the data into a register (HP PA-RISC loads)

– Cache prefetch: load into cache (MIPS IV, PowerPC, SPARC v. 9)
– Special prefetching instructions cannot cause faults; a form of speculative execution
– Need a non-blocking cache

• Issuing prefetch instructions takes time

– Is the cost of prefetch issues < savings in reduced misses? (see the sketch below)
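As a sketch of what the inserted prefetches look like in source form, using GCC/Clang's __builtin_prefetch (the prefetch distance is an illustrative assumption that a real compiler tunes against miss latency):

#define PREFETCH_DISTANCE 16   /* assumed; depends on miss latency and loop cost */

void scale(double *a, const double *b, long n) {
    for (long i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n)   /* guard + prefetch are the issue cost */
            __builtin_prefetch(&b[i + PREFETCH_DISTANCE], 0, 1);  /* read, low reuse */
        a[i] = 2.0 * b[i];
    }
}

The guard and the prefetch instruction itself are exactly the "cost of prefetch issues" weighed above.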

(15)

7. Reducing Misses by Compiler Optimizations

• Instructions

– Reorder procedures in memory so as to reduce misses
– Use profiling to look at conflicts

– McFarling [1989] reduced cache misses by 75% on an 8 KB direct-mapped cache with 4-byte blocks

• Data

– Merging Arrays: improve spatial locality by single array of compound elements vs. 2 arrays

– Loop Interchange: change nesting of loops to access data in order stored in memory

– Loop Fusion: Combine 2 independent loops that have same looping and some variables overlap

– Blocking: Improve temporal locality by accessing “blocks” of data repeatedly vs. going down whole columns or rows

(16)

Merging Arrays Example

/* Before */
int val[SIZE];
int key[SIZE];

/* After */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];

Reducing conflicts between val & key

[Figure: with separate arrays, val[i] and key[i] can map to the same cache line and conflict; in the merged array, each val/key pair sits together in one block.]

(17)

Loop Interchange Example

/* Before */
for (k = 0; k < 100; k = k+1)
    for (j = 0; j < 100; j = j+1)
        for (i = 0; i < 5000; i = i+1)
            x[i][j] = 2 * x[i][j];

/* After */
for (k = 0; k < 100; k = k+1)
    for (i = 0; i < 5000; i = i+1)
        for (j = 0; j < 100; j = j+1)
            x[i][j] = 2 * x[i][j];

Sequential accesses instead of striding through memory every 100 words (improves spatial locality)

(18)

Loop Fusion Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }

Before fusion: 2 misses per access to a & c; after fusion: one miss per access (improves temporal locality)

(19)

Blocking Example

for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

[Figure: iteration-space diagrams for x (indexed i, j), y (indexed i, k), and z (indexed k, j); z is traversed down whole columns while y is traversed along rows.]

(20)

Blocking Example

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }

• Capacity misses are a function of N & cache size:

– If all 3 N×N matrices fit, no capacity misses; otherwise ...

• Idea: compute on a B×B submatrix that fits

(21)

Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

(22)

Blocking Example

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B, N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B, N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

• Capacity misses fall from 2N³ + N² to 2N³/B + N² (see the sizing example below)

• B is called the blocking factor

• Conflict Misses Too?
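A rough sizing example for choosing B (illustrative numbers, using the common rule of thumb that the blocked working set is about 3B² elements): with an 8 KB cache and 8-byte doubles, 3B² × 8 bytes ≤ 8192 bytes gives B ≈ 18.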

(23)

Reducing Conflict Misses by Blocking

• Conflict misses in non-fully-associative caches vs. blocking factor

[Figure: miss rate (0–0.1) vs. blocking factor (0–150) for a fully associative cache and a direct-mapped cache; the direct-mapped cache incurs additional conflict misses that vary with the blocking factor.]

(24)

Summary of Compiler Optimizations to Reduce Cache Misses

[Figure: performance improvement (1×–3×) from merged arrays, loop interchange, loop fusion, and blocking on compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7).]

(25)

Reducing Miss Rate Summary

• Reducing Miss Rate

1. Reduce misses via larger block size
2. Reduce misses via higher associativity
3. Reduce misses via victim cache
4. Reduce misses via pseudo-associativity
5. Reduce misses by HW prefetching of instructions & data
6. Reduce misses by SW prefetching of data
7. Reduce misses by compiler optimizations

• Remember danger of concentrating on just one parameter when evaluating performance

• Next: reducing miss penalty

(26)

1. Reducing Miss Penalty: Read Priority over Write on Miss

• Write buffer dilemma:

– Decreases the write-miss penalty but can increase the read-miss penalty

SW 512(R0), R3    ; M[512] <- R3   (cache index 0)
LW R1, 1024(R0)   ; R1 <- M[1024]  (cache index 0)
LW R2, 512(R0)    ; R2 <- M[512]   (cache index 0)

RAW Hazard (assume direct-mapped, write-through cache)

[Figure: the CPU's write of the new value to address R0+512 sits in the write buffer while memory still holds the old value; if the following read of R0+512 misses the cache and goes straight to memory before the buffer drains, it returns the stale old value.]

(27)

1. Reducing Miss Penalty: Read Priority over Write on Miss

• Simply waiting for the write buffer to empty can increase the read-miss penalty by 50% (old MIPS 1000)

• Check write buffer contents before a read; if there are no conflicts, let the memory access continue (sketched in C below)

• Write Back?

– Read miss replacing dirty block

– Normal: Write dirty block to memory, and then do the read

– Instead copy the dirty block to a write buffer, then do the read, and then do the write

– CPU stalls less since it can restart as soon as the read completes
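In rough C, the buffer-check policy might look like this (a sketch of the policy only, not a hardware design; the entry count and ordering are assumptions):

#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4

typedef struct { bool valid; uint32_t addr; uint32_t data; } WBEntry;
static WBEntry write_buffer[WB_ENTRIES];    /* oldest at index 0 (illustrative) */

uint32_t read_memory(uint32_t addr);        /* assumed backing-memory access */

uint32_t read_with_priority(uint32_t addr) {
    /* Scan for the newest buffered write to this address (avoids the RAW hazard). */
    for (int i = WB_ENTRIES - 1; i >= 0; i--)
        if (write_buffer[i].valid && write_buffer[i].addr == addr)
            return write_buffer[i].data;
    /* No conflict: the read may proceed ahead of the pending writes. */
    return read_memory(addr);
}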

(28)


2. Subblock Placement to Reduce Miss Penalty

• Don’t have to load full block on a miss

• Have bits per subblock to indicate valid

• (Originally invented to reduce tag storage)

[Figure: a cache with tags 100, 300, 200, 204 and per-subblock valid bits; on a miss only the needed subblock is loaded and its valid bit set, so each block may hold a mix of valid and invalid subblocks.]

(29)

3. Early Restart and Critical Word First

• Don’t wait for full block to be loaded before restarting CPU

– Early restart—As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution

– Critical Word First—Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first

• Generally useful only with large blocks

• Spatial locality can be a problem: the CPU tends to want the next sequential word, so it is not clear how much early restart benefits

(30)

4. Non-blocking Caches to reduce stalls on misses

A non-blocking cache (or lockup-free cache) allows the data cache to continue to supply cache hits during a miss

• “hit under miss” reduces the effective miss penalty by being helpful during a miss instead of ignoring the requests of the CPU

• “hit under multiple miss” or “miss under miss” may further lower the effective miss penalty by overlapping multiple misses

– Significantly increases the complexity of the cache controller as there can be multiple outstanding memory accesses

(31)

5. Reducing Miss Penalty: Second Level Cache

• L2 Equations

AMAT = Hit TimeL1 + Miss RateL1 × Miss PenaltyL1

Miss PenaltyL1 = Hit TimeL2 + Miss RateL2 × Miss PenaltyL2

=> AMAT = Hit TimeL1 + Miss RateL1 × (Hit TimeL2 + Miss RateL2 × Miss PenaltyL2)

• Definitions:

– Local miss rate— misses in this cache divided by the total number of memory accesses to this cache (Miss rateL2)

– Global miss rate—misses in this cache divided by the total number of memory accesses generated by the CPU (Miss RateL1 × Miss RateL2)
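A worked example with assumed numbers: Hit TimeL1 = 1 cycle, Miss RateL1 = 4%, Hit TimeL2 = 10 cycles, local Miss RateL2 = 50%, Miss PenaltyL2 = 100 cycles. Then AMAT = 1 + 0.04 × (10 + 0.5 × 100) = 3.4 cycles, and the global L2 miss rate is 0.04 × 0.5 = 2%.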

(32)

Comparing Local and Global Miss Rates

Figure 5.23

• 32 KB 1st-level cache; increasing 2nd-level cache size

• Global miss rate close to single level cache rate provided L2 >> L1

• Don’t use local miss rate

(33)

L2 Cache Design Principle

• L2 not tied to CPU clock cycle

• Consider Cost & A.M.A.T.

• Generally want fast hit times and fewer misses

• Since hits are few, target miss reduction

– Larger cache, higher associativity and larger blocks

(34)

L2 cache block size & A.M.A.T.

Block size (bytes):   16     32     64     128    256    512
Relative CPU time:    1.36   1.28   1.27   1.34   1.54   1.95

• 32KB L1, 8 byte path to memory

(35)

Reducing Miss Penalty Summary

• Five techniques

– Read priority over write on miss
– Subblock placement
– Early Restart and Critical Word First on miss
– Non-blocking Caches (Hit Under Miss)
– Second Level Cache

• Can be applied recursively to Multilevel Caches

– Danger is that time to DRAM will grow with multiple levels in between

(36)

Review: Improving Cache Performance

1. Reduce the miss rate,

2. Reduce the miss penalty, or

3. Reduce the time to hit in the cache.

(37)

1. Fast Hit times via Small and Simple Caches

• Why the Alpha 21164 has an 8KB instruction cache and an 8KB data cache + a 96KB second-level cache

• Direct Mapped, on chip

(38)

2. Fast hits by Avoiding Address Translation

• Address translation – from virtual to physical addresses

• Physical cache:

– 1. Translate virtual -> physical address
– 2. Use the physical address to index the cache => longer hit time

• Virtual cache:

– Use the virtual address to index the cache => shorter hit time
– Problem: aliasing

More on this after covering virtual memory issues !

(39)

3. Fast Hit Times Via Pipelined Writes

• Pipeline tag check and cache update as separate stages: the current write's tag check overlaps the previous write's cache update

• Only writes are in the pipeline; it is empty during a miss

• In color is the Delayed Write Buffer; it must be checked on reads to either complete the write or read from the buffer

[Diagram: write x1 performs tag check x1 then write data; write x2's tag check x2 overlaps write x1's data write.]

(40)

3. Fast Hit Times Via Pipelined Writes

• In color is Delayed Write Buffer; must be checked on reads; either complete write or read from buffer

[Figure: datapath with CPU, tag and data arrays, a delayed write buffer feeding the data array through a mux, two tag comparators, a write buffer, and lower-level memory.]

(41)

4. Fast Writes on Misses Via Small Subblocks

• If most writes are 1 word, subblock size is 1 word, & write through, then always write the subblock & valid bit immediately

– Tag match and valid bit already set: writing the block was proper, & nothing is lost by setting the valid bit on again.

– Tag match and valid bit not set: The tag match means that this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on.

– Tag mismatch: this is a miss and will modify the data portion of the block. As this is a write-through cache, however, no harm was done; memory still has an up-to-date copy of the old value. Only the tag (set to the address of the write) and the valid bits of the other subblocks need be changed, because the valid bit for this subblock has already been set.

• Doesn’t work with write back due to last case

(42)

Cache Optimization Summary

Technique                           MR   MP   HT   Complexity
Larger Block Size                   +              0
Higher Associativity                +              1
Victim Caches                       +              2
Pseudo-Associative Caches           +              2
HW Prefetching of Instr/Data        +              2
Compiler Controlled Prefetching     +              3
Compiler Reduce Misses              +              0
Priority to Read Misses                  +         1
Subblock Placement                       +         1
Early Restart & Critical Word 1st        +         2
Non-Blocking Caches                      +         3
Second Level Caches                      +         2
Small & Simple Caches                         +    0
Avoiding Address Translation                  +    2
Pipelining Writes                             +    1
