(1)

Advanced Architecture

Yung-Yu Chuang

with slides by S. Dandamudi, Peng-Sheng Chen, Kip Irvine, Robert Sedgwick and Kevin Wayne

(2)

Basic architecture

(3)

Basic microcomputer design

• clock synchronizes CPU operations

• control unit (CU) coordinates sequence of execution steps

• ALU performs arithmetic and logic operations

[Figure: block diagram of a basic microcomputer. The Central Processor Unit (CPU), containing registers, the ALU, the CU, and the clock, connects to the Memory Storage Unit and to I/O Devices #1 and #2 over the data bus, address bus, and control bus.]

(4)

Basic microcomputer design

• The memory storage unit holds instructions and data for a running program

• A bus is a group of parallel wires that transfer data from one part of the computer to another (data bus, address bus, control bus)

[Figure: same block diagram as in (3).]

(5)

Clock

• synchronizes all CPU and bus operations

• a machine (clock) cycle measures the time of a single operation

• the clock is used to trigger events

[Figure: square-wave clock signal, one cycle spanning one high and one low phase.]

• Basic unit of time; at 1 GHz, one clock cycle = 1 ns

• An instruction can take multiple cycles to complete, e.g., multiply on the 8088 takes 50 cycles

(6)

Instruction execution cycle

• Fetch

• Decode

• Fetch operands

• Execute

• Store output

[Figure: instruction execution data path. The program counter selects instruction I-1 from program memory; it is fetched into the instruction queue and instruction register, decoded, operands op1 and op2 are read from the registers, the ALU executes, and the result is written back to the registers and flags (output).]

(7)

Pipeline

(8)

Multi-stage pipeline

• Pipelining makes it possible for the processor to execute instructions in parallel

• Instruction execution is divided into discrete stages

[Diagram: instructions I-1 and I-2 passing through stages S1-S6 over cycles 1-12, with only one stage busy in any cycle.]

Example of a non-pipelined processor, e.g., the 80386. Many wasted cycles.

(9)

Pipelined execution

• More efficient use of cycles, greater throughput of instructions (the 80486 started to use pipelining):

[Diagram: pipelined execution of I-1 and I-2 through stages S1-S6; a new instruction enters the pipeline each cycle, and both finish within 7 cycles.]

For k stages and n instructions, the number of required cycles is k + (n – 1), compared to k*n for a non-pipelined processor.
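A quick check of these formulas in C (the stage and instruction counts mirror the diagrams; they are illustrative values, not fixed by the slides):

#include <stdio.h>

/* Cycle counts from the formulas above: a k-stage pipeline finishes
   n instructions in k + (n - 1) cycles, versus k * n cycles without
   pipelining. */
static long pipelined(long k, long n)   { return k + (n - 1); }
static long unpipelined(long k, long n) { return k * n; }

int main(void)
{
    long k = 6, n = 2;  /* six stages, two instructions, as in the diagrams */
    printf("pipelined:   %ld cycles\n", pipelined(k, n));    /* 7  */
    printf("unpipelined: %ld cycles\n", unpipelined(k, n));  /* 12 */
    return 0;
}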

(10)

• Pipelining requires buffers

– Each buffer holds a single value

– Ideal scenario: equal work for each stage

• Sometimes this is not possible

• The slowest stage determines the flow rate of the entire pipeline

Pipelined execution

(11)

Pipelined execution

• Some reasons for unequal work among stages

– A complex step cannot be subdivided conveniently

– An operation takes a variable amount of time to execute, e.g., operand fetch time depends on where the operands are located:

• Registers

• Cache

• Memory

– Complexity of operation depends on the type of operation

• Add: may take one cycle

• Multiply: may take several cycles

(12)

• Operand fetch of I2 takes three cycles

– The pipeline stalls for two cycles

• Stalls are caused by hazards

– Pipeline stalls reduce overall throughput

Pipelined execution

(13)

Wasted cycles (pipelined)

• When one of the stages requires two or more clock cycles, clock cycles are again wasted.

[Diagram: pipelined execution of I-1, I-2, and I-3 through stages S1-S6 where the execute stage takes two cycles per instruction, stalling the instructions behind it; 11 cycles in total.]

For k stages and n instructions, the number of required cycles is k + (2n – 1).

(14)

Superscalar

A superscalar processor has multiple execution pipelines. In the following, note that Stage S4 has left and right pipelines (u and v).

[Diagram: superscalar pipeline with stages S1, S2, S3, S4 (split into u and v), S5, S6; instructions I-1 through I-4 proceed two at a time through the duplicated stage and finish within 10 cycles.]

For k stages and n instructions, the number of required cycles is k + n.

Pentium: 2 pipelines; Pentium Pro: 3.

(15)

Pipeline stages

• Pentium 3: 10

• Pentium 4: 20~31

• Next-generation micro-architecture: 14

• ARM7: 3

(16)

Hazards

• Three types of hazards

– Resource hazards

• Occurs when two or more instructions use the same resource, also called structural hazards

– Data hazards

• Caused by data dependencies between instructions, e.g. result produced by I1 is read by I2

– Control hazards

• Default: sequential execution suits pipelining

• Altering control flow (e.g., branching) causes problems, introducing control dependencies

(17)

Data hazards

add r1, r2, #10   ; write r1
sub r3, r1, #20   ; read r1

add: fetch decode reg ALU wb
sub: fetch decode stall reg ALU wb

(18)

Data hazards

• Forwarding: provides the output result as soon as possible

add r1, r2, #10   ; write r1
sub r3, r1, #20   ; read r1

add: fetch decode reg ALU wb
sub: fetch decode stall reg ALU wb

(19)

Data hazards

• Forwarding: provides the output result as soon as possible

add r1, r2, #10   ; write r1
sub r3, r1, #20   ; read r1

I-1: fetch decode reg ALU wb
I-2: fetch decode stall reg ALU wb
I-3: fetch decode stall reg ALU wb

(20)

Control hazards

bz r1, target
add r2, r4, 0
...
target: add r2, r3, 0

[Pipeline diagram: instructions entering the pipeline (fetch decode reg ALU wb) in consecutive cycles; the instructions fetched after bz must be discarded if the branch is taken.]

(21)

Control hazards

• Branches alter control flow

– Require special attention in pipelining

– Need to throw away some instructions in the pipeline

• Depends on when we know the branch is taken

• The pipeline wastes three clock cycles

– This is called the branch penalty

– Reducing the branch penalty:

• Determine the branch decision early

(22)

Control hazards

• Delayed branch execution

– Effectively reduces the branch penalty

– We always fetch the instruction following the branch

• Why throw it away?

• Place a useful instruction there instead

• This is called the delay slot

Before scheduling:          After scheduling:
    add R2,R3,R4                branch target
    branch target               add R2,R3,R4   ; delay slot
    sub R5,R6,R7                sub R5,R6,R7
    . . .                       . . .

(23)

Branch prediction

• Three prediction strategies

– Fixed

• Prediction is fixed

– Example: branch-never-taken

» Not appropriate for loop structures

– Static

• Strategy depends on the branch type

– Conditional branch: predict not taken
– Loop: predict taken

– Dynamic

• Takes run-time history to make more accurate predictions

(24)

Branch prediction

• Static prediction

– Improves prediction accuracy over Fixed

Instruction type       Distribution (%)   Prediction: branch taken?   Correct prediction (%)
Unconditional branch   70*0.4 = 28        Yes                         28
Conditional branch     70*0.6 = 42        No                          42*0.6 = 25.2
Loop                   10                 Yes                         10*0.9 = 9
Call/return            20                 Yes                         20

Overall prediction accuracy = 28 + 25.2 + 9 + 20 = 82.2%

(25)

Branch prediction

• Dynamic branch prediction

– Uses runtime history

• Takes the past n branch executions of the branch type and makes the prediction

– Simple strategy (sketched in C below)

• Prediction of the next branch is the majority of the previous n branch executions

• Example: n = 3

– If two or more of the last three executions were taken, the prediction is “branch taken”

• Depending on the type of mix, we get more than 90% prediction accuracy
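A minimal C sketch of this majority-vote strategy for n = 3; the single-branch history buffer and the function names are ours, not from the slides:

#include <stdbool.h>

/* Majority-of-last-3 dynamic prediction: predict "taken" when at
   least two of the last three executions of this branch were taken.
   A real predictor keeps one history per branch; this tracks one. */
static bool history[3];   /* true = taken; starts as all not-taken */
static int  pos;          /* next slot to overwrite */

static bool predict_taken(void)
{
    return history[0] + history[1] + history[2] >= 2;
}

static void record_outcome(bool taken)
{
    history[pos] = taken;
    pos = (pos + 1) % 3;
}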

(26)

Branch prediction

• Impact of past n branches on prediction accuracy

Type of mix (prediction accuracy, %):

n    Compiler   Business   Scientific
0    64.1       64.4       70.4
1    91.9       95.2       86.6
2    93.3       96.5       90.8
3    93.7       96.6       91.0
4    94.5       96.8       91.8
5    94.7       97.0       92.0

(27)

Branch prediction

[State diagram: a 2-bit predictor as a saturating counter. States 00 and 01 predict “no branch”; states 10 and 11 predict “branch”. A taken branch moves the state one step toward 11; a not-taken branch moves it one step toward 00.]
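Read as a 2-bit saturating counter, the diagram can be sketched in C as follows (the type and function names are ours):

#include <stdbool.h>

/* 2-bit saturating counter: states 0 (00) and 1 (01) predict
   "no branch", states 2 (10) and 3 (11) predict "branch".
   A taken branch moves the state up toward 3; a not-taken
   branch moves it down toward 0. */
typedef struct { unsigned state; } two_bit_predictor;   /* 0..3 */

static bool predict_branch(const two_bit_predictor *p)
{
    return p->state >= 2;
}

static void train(two_bit_predictor *p, bool taken)
{
    if (taken) {
        if (p->state < 3) p->state++;
    } else {
        if (p->state > 0) p->state--;
    }
}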

(28)

Multitasking

• OS can run multiple programs at the same time.

• Multiple threads of execution within the same program.

• Scheduler utility assigns a given amount of CPU time to each running program.

• Rapid switching of tasks

– gives the illusion that all programs are running at once
– the processor must support task switching
– scheduling policy: round-robin, priority (a toy sketch follows)
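A toy C sketch of round-robin time slicing (the task names and slice count are invented):

#include <stdio.h>

/* Round-robin in miniature: hand each task one fixed time slice in
   turn. Cycling fast enough gives the illusion of simultaneity. */
int main(void)
{
    const char *tasks[] = { "editor", "compiler", "player" };
    int ntasks = 3;

    for (int slice = 0; slice < 6; slice++)   /* six time slices */
        printf("slice %d: run %s\n", slice, tasks[slice % ntasks]);
    return 0;
}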

(29)

Cache

(30)

SRAM vs DRAM

        Tran. per bit   Access time   Needs refresh?   Cost   Applications
SRAM    4 or 6          1X            No               100X   cache memories
DRAM    1               10X           Yes              1X     main memories, frame buffers


(31)

The CPU-Memory gap

The gap widens between DRAM, disk, and CPU speeds.

[Plot: access time in ns (log scale, 1 to 100,000,000) versus year, 1980-2000, for disk seek time, DRAM access time, SRAM access time, and CPU cycle time.]

Typical access times (cycles): register 1, cache 1-10, memory 50-100, disk 20,000,000

(32)

Memory hierarchies

• Some fundamental and enduring properties of hardware and software:

– Fast storage technologies cost more per byte, have less capacity, and require more power (heat!).

– The gap between CPU and main memory speed is widening.

– Well-written programs tend to exhibit good locality.

• They suggest an approach for organizing memory and storage systems known as a memory hierarchy.

(33)

Memory system in practice

Smaller, faster, and more expensive (per byte) storage devices sit at the top; larger, slower, and cheaper (per byte) devices toward the bottom:

L0: registers
L1: on-chip L1 cache (SRAM)
L2: off-chip L2 cache (SRAM)
L3: main memory (DRAM)
L4: local secondary storage (local disks; virtual memory)
L5: remote secondary storage (tapes, distributed file systems, Web servers)

(34)

Reading from memory

• Multiple machine cycles are required when reading from memory, because it responds much more slowly than the CPU (e.g., 33 MHz). The wasted clock cycles are called wait states.

Pentium III cache hierarchy:

– L1 data cache: 16 KB, 4-way associative, write-through, 32 B lines, 1-cycle latency
– L1 instruction cache: 16 KB, 4-way associative, 32 B lines
– L2 unified cache: 128 KB-2 MB, 4-way associative, write-back, write-allocate, 32 B lines
– Main memory: up to 4 GB

(35)

Cache memory

• High-speed, expensive static RAM both inside and outside the CPU

– Level-1 cache: inside the CPU
– Level-2 cache: outside the CPU

• Cache hit: when data to be read is already in cache memory

• Cache miss: when data to be read is not in cache memory. Misses are classified as compulsory, capacity, and conflict misses.

• Cache design: cache size, n-way associativity, block size, replacement policy

(36)

Caching in a memory hierarchy

[Figure: the larger, slower, cheaper storage device at level k+1 is partitioned into blocks 0-15; the smaller, faster, more expensive device at level k caches a subset of them (e.g., blocks 3, 4, 8, 9, 10, 14). Data is copied between levels in block-sized transfer units.]

(37)


General caching concepts

• Program needs object d, which is stored in some block b.

• Cache hit

– Program finds b in the cache at level k. E.g., block 14.

• Cache miss

– b is not at level k, so the level k cache must fetch it from level k+1 (e.g., block 12)

– If the level k cache is full, then some current block must be replaced (evicted). Which one is the “victim”?

• Placement policy: where can the new block go? E.g., b mod 4 (a small sketch follows)

• Replacement policy: which block should be evicted? E.g., LRU

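A small sketch of these policies in C, assuming the figure's four-slot level-k cache with b mod 4 placement (so each block has exactly one candidate slot, and the victim is whatever occupies it); the names are ours:

#include <stdbool.h>

/* Level-k cache with 4 slots and "b mod 4" placement: block b from
   level k+1 may live only in slot b % 4, so on a miss the current
   occupant of that slot is the victim. */
#define SLOTS 4

struct slot { int block; bool valid; };
static struct slot cache_k[SLOTS];

static bool access_block(int b)
{
    struct slot *s = &cache_k[b % SLOTS];
    if (s->valid && s->block == b)
        return true;          /* cache hit, e.g., block 14       */
    s->block = b;             /* cache miss: fetch b from k+1,   */
    s->valid = true;          /* evicting the previous occupant  */
    return false;
}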

(38)

Locality

• Principle of Locality: programs tend to reuse data and instructions near those they have used recently, or that were recently referenced themselves.

– Temporal locality: recently referenced items are likely to be referenced in the near future.

– Spatial locality: items with nearby addresses tend to be referenced close together in time.

• In general, programs with good locality run faster than programs with poor locality

• Locality is the reason caches and virtual memory appear in computer architectures and operating systems. Another example: a web browser caches recently visited webpages.

(39)

Locality example

• Data

– Reference array elements in succession (stride-1 reference pattern): spatial locality
– Reference sum each iteration: temporal locality

• Instructions

– Reference instructions in sequence: spatial locality
– Cycle through loop repeatedly: temporal locality

sum = 0;
for (i = 0; i < n; i++)
    sum += a[i];
return sum;

(40)

Locality example

• Being able to look at code and get a qualitative sense of its locality is important. Does this function have good locality?

int sum_array_rows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

stride-1 reference pattern

(41)

Locality example

• Does this function have good locality?

int sum_array_cols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

stride-N reference pattern

(42)

Blocked matrix multiply performance

• Blocking (bijk and bikj) improves performance by a factor of two over the unblocked versions (ijk and jik), and is relatively insensitive to array size (a sketch of bijk follows the plot)

[Plot: cycles/iteration (0-60) versus array size n (25-400) for loop orderings kji, jki, kij, ikj, jik, ijk and the blocked versions bijk (bsize = 25) and bikj (bsize = 25).]
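A plausible C sketch of the bijk blocked multiply referenced above; the loop order and bsize follow the plot's legend, while min() and the row-major flat-array layout are our assumptions:

/* Blocked C = C + A*B on n-by-n doubles: walk B in bsize-wide column
   strips and A in bsize-wide row strips so the active tiles stay in
   cache while they are reused. */
static int min(int a, int b) { return a < b ? a : b; }

void bijk(int n, int bsize, const double *A, const double *B, double *C)
{
    for (int kk = 0; kk < n; kk += bsize)
        for (int jj = 0; jj < n; jj += bsize)
            for (int i = 0; i < n; i++)
                for (int j = jj; j < min(jj + bsize, n); j++) {
                    double sum = C[i*n + j];
                    for (int k = kk; k < min(kk + bsize, n); k++)
                        sum += A[i*n + k] * B[k*n + j];
                    C[i*n + j] = sum;   /* one tile-sized partial sum */
                }
}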

(43)

Cache-conscious programming

• Make sure that memory is cache-aligned

• Split data into hot and cold parts (list example; a sketch follows)

• Use unions and bitfields to reduce size and increase locality
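One way to carry out the hot/cold split for the list example; the node layout and field names below are hypothetical:

/* Split list node: fields touched on every traversal (key, next)
   stay small and contiguous, so more nodes fit per cache line; the
   rarely used payload lives behind a pointer and is only brought
   into cache when actually needed. */
struct cold_part {
    char   description[64];   /* rarely accessed payload */
    double stats[8];
};

struct list_node {            /* hot part, kept compact */
    int                key;
    struct list_node  *next;
    struct cold_part  *cold;  /* dereferenced only on the slow path */
};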

(44)

RISC vs. CISC

(45)

Trade-offs of instruction sets

• Before 1980, the trend was to increase instruction complexity (one-to-one mapping to high-level constructs where possible) to bridge the semantic gap and to reduce fetches from memory. Selling points: number of instructions, addressing modes. (CISC)

• Around 1980, RISC: simplify and regularize instructions to make advanced architectural techniques practical for better performance: pipelining, caches, superscalar execution.

[Figure: the semantic gap between high-level languages (C, C++, Lisp, Prolog, Haskell…) and machine code, bridged by the compiler.]

(46)

RISC

• 1980, Patterson and Ditzel (Berkeley), RISC

• Features

– Fixed-length instructions
– Load-store architecture
– Register file

• Organization

– Hard-wired logic
– Single-cycle instructions
– Pipeline

• Pros: small die size, short development time, high performance

• Cons: low code density, not x86 compatible

(47)

RISC Design Principles

• Simple operations

– Simple instructions that can execute in one cycle

• Register-to-register operations

– Only load and store operations access memory

– Rest of the operations on a register-to-register basis

• Simple addressing modes

– A few addressing modes (1 or 2)

• Large number of registers

– Needed to support register-to-register operations
– Minimize the procedure call and return overhead

(48)

RISC Design Principles

• Fixed-length instructions

– Facilitates efficient instruction execution

• Simple instruction format

– Fixed boundaries for various fields

• opcode, source operands,…

(49)

CISC and RISC

• CISC – complex instruction set

– large instruction set

– high-level operations (simpler for compiler?)

– requires microcode interpreter (could take a long time)

– examples: Intel 80x86 family

• RISC – reduced instruction set

– small instruction set

– simple, atomic instructions

– directly executed by hardware very quickly

– easier to incorporate advanced architecture design

– examples: ARM (Advanced RISC Machines), DEC Alpha (now Compaq), PowerPC, MIPS

(50)

CISC and RISC

                    CISC (Intel 486)   RISC (MIPS R4000)
#instructions       235                94
Addressing modes    11                 1
Instruction size    1-12 bytes         4 bytes
GP registers        8                  32

(51)

Why RISC?

• Simple instructions are preferred

– Complex instructions are mostly ignored by compilers

• Due to semantic gap

• Simple data structures

– Complex data structures are used relatively infrequently

– Better to support a few simple data types efficiently

• Synthesize complex ones

• Simple addressing modes

– Complex addressing modes lead to variable length instructions

• Lead to inefficient instruction decoding and scheduling

(52)

Why RISC? (cont’d)

• Large register set

– Efficient support for procedure calls and returns

• Patterson and Sequin’s study

– Procedure call/return: 1215% of HLL statements

» Constitute 3133% of machine language instructions

» Generate nearly half (45%) of memory references

– Small activation record

• Tanenbaum’s study

– Only 1.25% of calls have more than 6 arguments
– More than 93% have fewer than 6 local scalar variables
– A large register set can avoid memory references
