Academic year: 2022


(1)

Chapter 3

Transport Layer

Computer Networking: A Top-Down Approach Featuring the Internet, 3rd edition

(2)

Chapter 3 outline

• 3.1 Transport-layer services
• 3.2 Multiplexing and demultiplexing
• 3.3 Connectionless transport: UDP
• 3.4 Principles of reliable data transfer
• 3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
• 3.6 Principles of congestion control
• 3.7 TCP congestion control

(3)

TCP Flow Control

• receive side of TCP connection has a receive buffer
• speed-matching service: matching the send rate to the receiving app's drain rate

flow control: sender won't overflow receiver's buffer by transmitting too much, too fast

(4)

TCP Flow Control: how it works

(Suppose TCP receiver discards out-of-order segments)

• spare room in buffer
  = RcvWindow
  = RcvBuffer - [LastByteRcvd - LastByteRead]
• receiver advertises spare room by including value of RcvWindow in segments
• sender limits unACKed data to RcvWindow
  - guarantees receive buffer doesn't overflow
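The spare-room arithmetic above can be sketched directly (a minimal illustration; the variable names mirror the slide's RcvBuffer/LastByteRcvd/LastByteRead, and the concrete byte counts are made up):

```java
public class RcvWindowDemo {
    // Receive-buffer bookkeeping, named after the slide's variables.
    static long rcvBuffer = 4096;   // total receive buffer size in bytes
    static long lastByteRcvd = 0;   // highest byte received from the network
    static long lastByteRead = 0;   // highest byte consumed by the application

    // Spare room = RcvBuffer - [LastByteRcvd - LastByteRead]
    static long rcvWindow() {
        return rcvBuffer - (lastByteRcvd - lastByteRead);
    }

    public static void main(String[] args) {
        lastByteRcvd = 3000;  // 3000 bytes have arrived...
        lastByteRead = 1000;  // ...but the app has only read 1000 of them
        System.out.println("advertised RcvWindow = " + rcvWindow()); // 2096
    }
}
```

The receiver would carry this value in the receive-window field of the segments it sends back, which is what lets the sender cap its unACKed data.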

(5)

Chapter 3 outline

• 3.1 Transport-layer services
• 3.2 Multiplexing and demultiplexing
• 3.3 Connectionless transport: UDP
• 3.4 Principles of reliable data transfer
• 3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
• 3.6 Principles of congestion control
• 3.7 TCP congestion control

(6)

TCP Connection Management

Recall: TCP sender, receiver establish "connection" before exchanging data segments

• initialize TCP variables:
  - seq. #s
  - buffers, flow control info (e.g. RcvWindow)
• client: connection initiator
  Socket clientSocket = new Socket("hostname", portNumber);
• server: contacted by client
  Socket connectionSocket = welcomeSocket.accept();

Three-way handshake:

Step 1: client host sends TCP SYN segment to server
  - specifies initial seq #
  - no data

Step 2: server host receives SYN, replies with SYNACK segment
  - server allocates buffers
  - specifies server initial seq. #

Step 3: client receives SYNACK, replies with ACK segment, which may contain data
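The two socket calls above can be exercised end-to-end in a small loopback sketch (illustrative only; the three-way handshake itself happens inside the OS during connect()/accept()):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    // Returns true once a client/server pair has connected over loopback.
    static boolean connectOnce() throws IOException, InterruptedException {
        // welcomeSocket: the server's listening socket (port 0 = any free port)
        ServerSocket welcomeSocket = new ServerSocket(0);
        Thread server = new Thread(() -> {
            // accept() returns once the three-way handshake completes
            try (Socket connectionSocket = welcomeSocket.accept()) {
                // connection established; nothing else to do in this sketch
            } catch (IOException ignored) {
            }
        });
        server.start();
        boolean ok;
        // new Socket(...) triggers the SYN / SYNACK / ACK exchange
        try (Socket clientSocket = new Socket("localhost", welcomeSocket.getLocalPort())) {
            ok = clientSocket.isConnected();
        }
        server.join();
        welcomeSocket.close();
        return ok;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("connected: " + connectOnce());
    }
}
```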

(7)

TCP Connection Management (cont.)

Closing a connection:

client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server

Step 2: server receives FIN, replies with ACK; closes connection, sends FIN

[timeline figure: client sends FIN; server replies with ACK, then its own FIN; both sides close]

(8)

TCP Connection Management (cont.)

Step 3: client receives FIN, replies with ACK.
  - Enters "timed wait" - will respond with ACK to received FINs

Step 4: server receives ACK. Connection closed.

[timeline figure: after the FIN/ACK exchange the client enters timed wait, then closed; the server goes from closing to closed on the final ACK]

(9)

TCP Connection Management (cont.)

[state diagrams: TCP client lifecycle and TCP server lifecycle]

(10)

Chapter 3 outline

• 3.1 Transport-layer services
• 3.2 Multiplexing and demultiplexing
• 3.3 Connectionless transport: UDP
• 3.4 Principles of reliable data transfer
• 3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
• 3.6 Principles of congestion control
• 3.7 TCP congestion control

(11)

Principles of Congestion Control

Congestion:
• informally: "too many sources sending too much data too fast for network to handle"
• different from flow control!
• manifestations:
  - lost packets (buffer overflow at routers)
  - long delays (queueing in router buffers)
• a top-10 problem!

(12)

Causes/costs of congestion: scenario 1

• two senders, two receivers
• one router, infinite buffers
• no retransmission
• large delays when congested
• maximum achievable throughput: R/2

[figure: Host A sends λin (original data) into a router with unlimited shared output link buffers; Host B receives λout, which approaches R/2]

(13)

Causes/costs of congestion: scenario 2

• one router, finite buffers
• sender retransmission of lost packet

[figure: Host A offers λin (original data) plus λ'in (original data plus retransmitted data) into a router with finite shared output link buffers; Host B receives λout]

(14)

Causes/costs of congestion: scenario 2

• "perfect" retransmission only when loss: λ'in > λout
• retransmission of delayed (not lost) packet makes λ'in larger (than perfect case) for same λout

"costs" of congestion:
• more work (retrans) for given "goodput"

[figure: three plots (a, b, c) of λout vs. λin, axes up to R/2, with goodput marks at R/2, R/3 and R/4]

(15)

Causes/costs of congestion: scenario 3

• four senders
• multihop paths
• timeout/retransmit

Q: what happens as λin and λ'in increase?

[figure: Hosts A and C send λin (original data) plus λ'in (original data plus retransmitted data) through routers with finite shared output link buffers; λout measured at Host B]

(16)

Causes/costs of congestion: scenario 3

Another "cost" of congestion:
• when packet dropped, any "upstream" transmission capacity used for that packet was wasted!

[figure: λout vs. offered load for traffic from Host A to Host B]

(17)

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
• no explicit feedback from network
• congestion inferred from end-system observed loss, delay
• approach taken by TCP

Network-assisted congestion control:
• routers provide feedback to end systems
  - single bit indicating congestion (IBM SNA, DEC DECbit, TCP/IP ECN, ATM)

(18)

Chapter 3 outline

• 3.1 Transport-layer services
• 3.2 Multiplexing and demultiplexing
• 3.3 Connectionless transport: UDP
• 3.4 Principles of reliable data transfer
• 3.5 Connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
• 3.6 Principles of congestion control
• 3.7 TCP congestion control

(19)

TCP Congestion Control

• end-end control (no network assistance)
• sender limits transmission:
  LastByteSent - LastByteAcked ≤ CongWin
• roughly,
  rate = CongWin / RTT  bytes/sec

How does sender perceive congestion?
• loss event = timeout or 3 duplicate ACKs
• TCP sender reduces rate (CongWin) after loss event

three mechanisms:
  - AIMD
  - slow start
  - conservative after timeout events

(20)

TCP AIMD

multiplicative decrease: cut CongWin in half after loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

[figure: congestion window sawtooth over time, oscillating between 8, 16 and 24 Kbytes]
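The sawtooth in the figure can be reproduced in a few lines (a toy model: the window is counted in MSS, and a loss event is assumed whenever it reaches a fixed, made-up lossAt value):

```java
import java.util.ArrayList;
import java.util.List;

public class AimdDemo {
    // One AIMD sawtooth: grow CongWin by 1 MSS per RTT, halve on a loss event.
    // lossAt = window size (in MSS) at which a loss occurs (illustrative).
    static List<Integer> sawtooth(int start, int lossAt, int rtts) {
        List<Integer> trace = new ArrayList<>();
        int congWin = start;
        for (int i = 0; i < rtts; i++) {
            trace.add(congWin);
            if (congWin >= lossAt) congWin = Math.max(1, congWin / 2); // multiplicative decrease
            else congWin += 1;                                         // additive increase
        }
        return trace;
    }

    public static void main(String[] args) {
        // climbs 8..16, halves back to 8, climbs again
        System.out.println(sawtooth(8, 16, 12));
    }
}
```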

(21)

TCP Slow Start

• When connection begins, CongWin = 1 MSS
  - Example: MSS = 500 bytes & RTT = 200 msec
  - initial rate is about 20 kbps
• available bandwidth may be >> MSS/RTT
  - desirable to quickly ramp up to respectable rate
• When connection begins, increase rate exponentially fast until first loss event

(22)

TCP Slow Start (more)

• When connection begins, increase rate exponentially until first loss event:
  - double CongWin every RTT
  - done by incrementing CongWin for every ACK received
• Summary: initial rate is slow but ramps up exponentially fast

[figure: Host A sends one segment, then two, then four to Host B, one batch per RTT]
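The per-ACK increment described above is exactly what makes the window double each RTT; a small loss-free sketch (window counted in MSS, plus the previous slide's 500-byte/200-ms numbers for the initial rate):

```java
public class SlowStartDemo {
    // Slow start: CongWin += 1 MSS per ACK received. With CongWin segments in
    // flight, one RTT yields CongWin ACKs, so the window doubles every RTT.
    static int afterRtts(int rtts) {
        int congWin = 1; // in MSS
        for (int r = 0; r < rtts; r++) {
            int acks = congWin;                     // one ACK per in-flight segment
            for (int a = 0; a < acks; a++) congWin += 1;
        }
        return congWin;
    }

    public static void main(String[] args) {
        // 1 -> 2 -> 4 -> 8 segments, matching the one/two/four-segment figure
        for (int r = 0; r <= 3; r++)
            System.out.println("after " + r + " RTTs: " + afterRtts(r) + " MSS");

        // Initial rate from the example: MSS = 500 bytes, RTT = 200 msec
        double bps = 500 * 8 / 0.2;
        System.out.println("initial rate = " + bps / 1000 + " kbps"); // 20.0 kbps
    }
}
```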

(23)

Refinement

• After 3 dup ACKs:
  - CongWin is cut in half
  - window then grows linearly
• But after timeout event:
  - CongWin instead set to 1 MSS;
  - window then grows exponentially to a threshold, then grows linearly

Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"

(24)

Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
• Variable Threshold
• At loss event, Threshold is set to 1/2 of CongWin just before loss event

[figure: Congestion Window (in segments) over time, showing the Threshold]

(25)

Summary: TCP Congestion Control

• When CongWin is below Threshold, sender in slow-start phase, window grows exponentially.
• When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.
• When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.
• When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.

(26)

TCP sender congestion control

Event: ACK receipt for previously unACKed data
State: Slow Start (SS)
TCP Sender Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: Resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
State: Congestion Avoidance (CA)
TCP Sender Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.

Event: Timeout
State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
TCP Sender Action: Increment duplicate ACK count for segment being ACKed
Commentary: CongWin and Threshold not changed
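The event table can be collapsed into a small state machine (a toy sketch: CongWin/Threshold in bytes, a fixed MSS, made-up initial values, and none of the retransmission machinery a real sender needs):

```java
public class TcpSenderFsm {
    static final int MSS = 1000; // segment size in bytes (illustrative)

    enum State { SLOW_START, CONGESTION_AVOIDANCE }

    State state = State.SLOW_START;
    int congWin = MSS;
    int threshold = 8 * MSS;

    // ACK for previously unACKed data
    void onNewAck() {
        if (state == State.SLOW_START) {
            congWin += MSS;                        // doubles CongWin every RTT
            if (congWin > threshold) state = State.CONGESTION_AVOIDANCE;
        } else {
            congWin += MSS * MSS / congWin;        // ~1 MSS per RTT (additive increase)
        }
    }

    // Loss detected by triple duplicate ACK: fast recovery
    void onTripleDupAck() {
        threshold = congWin / 2;
        congWin = Math.max(MSS, threshold);        // never below 1 MSS
        state = State.CONGESTION_AVOIDANCE;
    }

    // Timeout: back to slow start
    void onTimeout() {
        threshold = congWin / 2;
        congWin = MSS;
        state = State.SLOW_START;
    }

    public static void main(String[] args) {
        TcpSenderFsm s = new TcpSenderFsm();
        for (int i = 0; i < 10; i++) s.onNewAck();
        System.out.println("after 10 ACKs: CongWin=" + s.congWin + " state=" + s.state);
        s.onTripleDupAck();
        System.out.println("after 3 dup ACKs: CongWin=" + s.congWin);
        s.onTimeout();
        System.out.println("after timeout: CongWin=" + s.congWin + " state=" + s.state);
    }
}
```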

(27)

TCP throughput

• What's the average throughput of TCP as a function of window size and RTT?
  - Ignore slow start
• Let W be the window size when loss occurs.
• When window is W, throughput is W/RTT
• Just after loss, window drops to W/2, throughput to W/(2·RTT).
• Average throughput: 0.75 W/RTT
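Because the window ramps linearly from W/2 back up to W between losses, the average of the two endpoint throughputs gives 0.75·W/RTT; a numerical check (the W and RTT values are arbitrary):

```java
public class TcpThroughputDemo {
    // Throughput ramps linearly from W/(2·RTT) up to W/RTT between losses,
    // so the time-average is the midpoint: 0.75 · W / RTT.
    static double avgThroughputBytesPerSec(double wBytes, double rttSec) {
        double peak = wBytes / rttSec;
        double trough = wBytes / (2 * rttSec);
        return (peak + trough) / 2;   // = 0.75 * W / RTT
    }

    public static void main(String[] args) {
        // Illustrative numbers: W = 100 KB, RTT = 100 ms
        double avg = avgThroughputBytesPerSec(100_000, 0.1);
        System.out.println("average throughput = " + avg + " bytes/sec"); // 750000.0
    }
}
```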

(28)

TCP Futures

• Example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput
• Requires window size W = 83,333 in-flight segments
• Throughput in terms of loss rate:

  Throughput = (1.22 · MSS) / (RTT · √L)

• ➜ L = 2·10^-10  Wow!
• New versions of TCP for high-speed needed!
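Both numbers on this slide can be recovered by inverting the formulas (the 1.22·MSS/(RTT·√L) throughput relation is taken as given from the slide):

```java
public class TcpFuturesDemo {
    // Window (in segments) needed to sustain targetBps: W = target · RTT / MSS
    static double windowSegments(double mssBytes, double rttSec, double targetBps) {
        return targetBps / 8 * rttSec / mssBytes;
    }

    // Loss rate from inverting Throughput = 1.22 · MSS / (RTT · sqrt(L))
    static double requiredLossRate(double mssBytes, double rttSec, double targetBps) {
        double sqrtL = 1.22 * mssBytes * 8 / (rttSec * targetBps);
        return sqrtL * sqrtL;
    }

    public static void main(String[] args) {
        // Slide's example: 1500-byte segments, 100 ms RTT, 10 Gbps target
        System.out.printf("W ≈ %.0f segments%n", windowSegments(1500, 0.1, 10e9)); // ≈ 83333
        System.out.printf("L ≈ %.1e%n", requiredLossRate(1500, 0.1, 10e9));        // ≈ 2.1e-10
    }
}
```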

(29)

TCP Fairness

Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K

[figure: two TCP connections share a bottleneck router of capacity R]

(30)

Why is TCP fair?

Two competing sessions:
• Additive increase gives slope of 1, as throughput increases
• multiplicative decrease decreases throughput proportionally

[figure: Connection 2 throughput vs. Connection 1 throughput, converging toward the equal bandwidth share line along capacity R, through alternating congestion-avoidance (additive increase) and loss (decrease window by factor of 2) steps]
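The converging trajectory in the figure can be simulated: both flows add the same constant per round, and a shared loss halves both, so the gap between them shrinks geometrically (a toy fluid model with made-up starting rates and capacity):

```java
public class AimdFairnessDemo {
    // Two AIMD flows sharing a link of capacity R: each adds 1 per round
    // (additive increase, slope 1); when the sum exceeds R, a loss halves
    // both (multiplicative decrease). Rates converge toward R/2 each.
    static double[] converge(double r1, double r2, double R, int rounds) {
        for (int i = 0; i < rounds; i++) {
            r1 += 1;
            r2 += 1;
            if (r1 + r2 > R) { r1 /= 2; r2 /= 2; }
        }
        return new double[]{r1, r2};
    }

    public static void main(String[] args) {
        double[] rates = converge(5, 45, 60, 200); // start far from fair share
        System.out.printf("flow 1: %.2f, flow 2: %.2f%n", rates[0], rates[1]);
    }
}
```

The difference between the two rates is unchanged by additive increase but halved by every loss, which is why the trajectory zig-zags onto the equal-share line.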

(31)

Fairness (more)

Fairness and UDP
• Multimedia apps often do not use TCP
  - do not want rate throttled by congestion control
• Instead use UDP:
  - pump audio/video at constant rate, tolerate packet loss
• Research area: TCP friendly

Fairness and parallel TCP connections
• nothing prevents app from opening parallel connections between 2 hosts.
• Web browsers do this
• Example: link of rate R supporting 9 connections;
  - new app asks for 1 TCP, gets rate R/10
  - new app asks for 11 TCPs, gets R/2!

(32)

Chapter 3: Summary

• principles behind transport layer services:
  - multiplexing, demultiplexing
  - reliable data transfer
  - flow control
  - congestion control
• instantiation and implementation in the Internet
  - UDP
  - TCP

Next:
• leaving the network "edge" (application, transport layers)
• into the network "core"
