Chapter 3
Transport Layer
Computer Networking:
A Top-Down Approach Featuring the Internet, 3rd edition.
Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
  segment structure
  reliable data transfer
  flow control
  connection management
3.6 Principles of congestion control
3.7 TCP congestion control
TCP Flow Control
receive side of TCP connection has a receive buffer
flow control: sender won’t overflow receiver’s buffer by transmitting too much, too fast
speed-matching service: matching the send rate to the receiving app’s drain rate
TCP Flow control: how it works
(Suppose TCP receiver discards out-of-order segments.)
spare room in buffer:
  RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
receiver advertises spare room by including current value of RcvWindow in segments
sender limits unACKed data to RcvWindow: guarantees receive buffer doesn’t overflow
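The spare-room bookkeeping above can be sketched in a few lines. This is a toy illustration of the slide's notation (RcvBuffer, LastByteRcvd, LastByteRead), not code from any real TCP stack:

```java
// Sketch of receiver-side flow control as described on this slide.
// All byte counters and the buffer size are illustrative values.
public class FlowControl {

    // Spare room in the receive buffer, advertised as RcvWindow.
    static int rcvWindow(int rcvBuffer, long lastByteRcvd, long lastByteRead) {
        return rcvBuffer - (int) (lastByteRcvd - lastByteRead);
    }

    // Sender-side check: can we transmit n more bytes without
    // risking receive-buffer overflow?
    static boolean canSend(long lastByteSent, long lastByteAcked,
                           int rcvWindow, int n) {
        return (lastByteSent - lastByteAcked) + n <= rcvWindow;
    }

    public static void main(String[] args) {
        int win = rcvWindow(4096, 3000, 1000);   // 4096 - 2000 = 2096 bytes spare
        System.out.println("advertised RcvWindow = " + win);
        System.out.println("can send 500 more?  " + canSend(1500, 1000, win, 500));
        System.out.println("can send 2000 more? " + canSend(1500, 1000, win, 2000));
    }
}
```

The sender's constraint (unACKed bytes ≤ RcvWindow) is exactly what keeps LastByteRcvd - LastByteRead from exceeding RcvBuffer at the receiver.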
TCP Connection Management
Recall: TCP sender, receiver establish “connection”
before exchanging data segments
initialize TCP variables:
seq. #s
buffers, flow control info (e.g. RcvWindow)
client: connection initiator
  Socket clientSocket = new Socket("hostname", portNumber);
server: contacted by client
Socket connectionSocket = welcomeSocket.accept();
Three-way handshake:
Step 1: client host sends TCP SYN segment to server
specifies initial seq #
no data
Step 2: server host receives SYN, replies with SYNACK segment
server allocates buffers
specifies server initial seq. #
Step 3: client receives SYNACK, replies with ACK segment, which may contain data
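The two API calls from this slide can be combined into a runnable sketch. The SYN / SYNACK / ACK exchange happens inside the kernel: the client's Socket constructor blocks until the handshake completes, and the server's accept() returns a socket for the established connection. The helper method and message below are illustrative, not part of any standard API:

```java
import java.io.*;
import java.net.*;

public class HandshakeDemo {
    // Sends one line over a real TCP connection on the loopback interface
    // and returns what the server received. Illustrative helper only.
    static String roundTrip(String msg) throws Exception {
        try (ServerSocket welcomeSocket = new ServerSocket(0)) { // 0 = any free port
            int port = welcomeSocket.getLocalPort();
            final String[] received = new String[1];
            Thread server = new Thread(() -> {
                // accept() returns once the three-way handshake is done
                try (Socket connectionSocket = welcomeSocket.accept()) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(connectionSocket.getInputStream()));
                    received[0] = in.readLine();
                } catch (IOException e) { throw new RuntimeException(e); }
            });
            server.start();
            // Client: the Socket constructor blocks until the handshake completes.
            try (Socket clientSocket = new Socket("localhost", port)) {
                new PrintWriter(clientSocket.getOutputStream(), true).println(msg);
            } // close() triggers the FIN/ACK teardown
            server.join();
            return received[0];
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("server got: " + roundTrip("hello"));
    }
}
```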
TCP Connection Management (cont.)
Closing a connection:
client closes socket:
clientSocket.close();
Step 1: client end system sends TCP FIN control segment to server
Step 2: server receives FIN, replies with ACK; closes connection, sends FIN
[Figure: timing diagram of the FIN/ACK/FIN/ACK exchange between client and server]
TCP Connection Management (cont.)
Step 3: client receives FIN, replies with ACK.
  Enters “timed wait”: will respond with ACK to received FINs
Step 4: server receives ACK. Connection closed.
[Figure: timing diagram showing the client in “timed wait” before both sides reach “closed”]
TCP Connection Management (cont.)
[Figures: TCP client lifecycle and TCP server lifecycle state diagrams]
Principles of Congestion Control
Congestion:
informally: “too many sources sending too much data too fast for network to handle”
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem!
Causes/costs of congestion: scenario 1
two senders, two receivers
one router, infinite buffers
no retransmission
costs: large delays when congested; maximum achievable per-connection throughput is R/2
[Figure: Hosts A and B send λin (original data) into a router with unlimited shared output link buffers; λout saturates at R/2]
Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmits lost packets
λin: original data; λ'in: original data plus retransmitted data
[Figure: Hosts A and B share a router with finite output link buffers]
Causes/costs of congestion: scenario 2
“perfect” retransmission only when loss: λ'in > λout
retransmission of delayed (not lost) packets makes λ'in larger (than the perfect case) for the same λout
“costs” of congestion:
  more work (retransmissions) for a given “goodput”
  unneeded retransmissions: the link carries multiple copies of a packet
[Figure: three plots of λout vs. λin, each with capacity R/2: (a) ideal case, λout reaches R/2; (b) and (c), with retransmissions, λout saturates below R/2 (at roughly R/3 and R/4)]
Causes/costs of congestion: scenario 3
four senders
multihop paths
timeout/retransmit
Q: what happens as λin and λ'in increase?
[Figure: four hosts send over multihop paths through routers with finite shared output link buffers; λin: original data, λ'in: original data plus retransmitted data, λout: throughput]
Causes/costs of congestion: scenario 3
Another “cost” of congestion:
when a packet is dropped, any “upstream” transmission capacity used for that packet was wasted!
[Figure: λout collapses as λ'in grows large]
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
  no explicit feedback from network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP
Network-assisted congestion control:
  routers provide feedback to end systems
  single bit indicating congestion (IBM SNA, DEC DECbit, TCP/IP ECN, ATM)
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission:
  LastByteSent - LastByteAcked ≤ CongWin
Roughly, rate = CongWin/RTT bytes/sec
How does sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
TCP sender reduces rate (CongWin) after loss event
three mechanisms: AIMD, slow start, conservative behavior after timeout events
TCP AIMD
multiplicative decrease: cut CongWin in half after loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
[Figure: sawtooth of the congestion window over time, oscillating between roughly 8, 16, and 24 Kbytes]
TCP Slow Start
When connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec
initial rate is about 20 kbps
available bandwidth may be >> MSS/RTT
desirable to quickly ramp up to a respectable rate
When connection begins, increase rate exponentially fast until first loss event
TCP Slow Start (more)
When connection begins, increase rate exponentially until first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast
[Figure: Host A sends one segment, then two, then four, each batch one RTT apart, with Host B ACKing each segment]
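The "increment CongWin for every ACK received" rule is what produces the per-RTT doubling. A toy model (window measured in whole MSS units, one ACK per segment, no losses or delayed ACKs):

```java
public class SlowStart {
    // One round trip of slow start: every segment in the current window
    // is ACKed, and each ACK grows CongWin by 1 MSS -- so the window doubles.
    static int oneRtt(int congWinInMss) {
        int acksReceived = congWinInMss;     // one ACK per segment sent
        return congWinInMss + acksReceived;  // +1 MSS per ACK => doubling
    }

    public static void main(String[] args) {
        int congWin = 1;                     // connection begins at 1 MSS
        for (int rtt = 0; rtt < 4; rtt++) {
            System.out.println("RTT " + rtt + ": CongWin = " + congWin + " MSS");
            congWin = oneRtt(congWin);
        }
        // prints CongWin = 1, 2, 4, 8: exponential growth until first loss
    }
}
```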
Refinement
Philosophy:
  3 dup ACKs indicate the network is capable of delivering some segments
  a timeout before 3 dup ACKs is “more alarming”
After 3 dup ACKs:
  CongWin is cut in half
  window then grows linearly
But after timeout event:
  CongWin instead set to 1 MSS
  window then grows exponentially to a threshold, then grows linearly
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.
Implementation:
  variable Threshold
  at loss event, Threshold is set to 1/2 of CongWin just before the loss event
[Figure: congestion window (in segments) over time, growing exponentially up to Threshold and linearly thereafter]
Summary: TCP Congestion Control
When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.
When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
TCP sender congestion control
Event: ACK receipt for previously unACKed data
State: Slow Start (SS)
TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to “Congestion Avoidance”
Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unACKed data
State: Congestion Avoidance (CA)
TCP sender action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = Threshold; set state to “Congestion Avoidance”
Commentary: fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.

Event: timeout
State: SS or CA
TCP sender action: Threshold = CongWin/2; CongWin = 1 MSS; set state to “Slow Start”
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
TCP sender action: increment duplicate ACK count for the segment being ACKed
Commentary: CongWin and Threshold not changed
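The table above condenses into a small event-driven state machine. The sketch below tracks CongWin and Threshold in MSS units for readability (the initial Threshold of 64 MSS is an arbitrary illustrative value); it mirrors the table's actions, not any particular TCP implementation:

```java
public class TcpSender {
    enum State { SLOW_START, CONGESTION_AVOIDANCE }

    State state = State.SLOW_START;
    double congWin = 1;        // in MSS
    double threshold = 64;     // in MSS (arbitrary initial value)

    // ACK receipt for previously unACKed data
    void onNewAck() {
        if (state == State.SLOW_START) {
            congWin += 1;                      // doubles CongWin every RTT
            if (congWin > threshold) state = State.CONGESTION_AVOIDANCE;
        } else {
            congWin += 1.0 / congWin;          // +1 MSS per RTT (additive increase)
        }
    }

    // Loss detected by triple duplicate ACK: fast recovery
    void onTripleDupAck() {
        threshold = congWin / 2;
        congWin = Math.max(threshold, 1);      // CongWin never drops below 1 MSS
        state = State.CONGESTION_AVOIDANCE;
    }

    // Loss detected by timeout: back to slow start
    void onTimeout() {
        threshold = congWin / 2;
        congWin = 1;
        state = State.SLOW_START;
    }

    public static void main(String[] args) {
        TcpSender s = new TcpSender();
        for (int i = 0; i < 10; i++) s.onNewAck();   // slow start: 1 -> 11 MSS
        System.out.println("after 10 ACKs: CongWin = " + s.congWin);
        s.onTripleDupAck();
        System.out.println("after 3 dup ACKs: CongWin = " + s.congWin
                + ", state = " + s.state);
        s.onTimeout();
        System.out.println("after timeout: CongWin = " + s.congWin
                + ", state = " + s.state);
    }
}
```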
TCP throughput
What’s the average throughput of TCP as a function of window size and RTT?
  ignore slow start
Let W be the window size when loss occurs.
When window is W, throughput is W/RTT.
Just after loss, window drops to W/2, throughput to W/(2·RTT).
The window then grows linearly back to W, so average throughput is 0.75·W/RTT.
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate:
  throughput = (1.22 · MSS) / (RTT · √L)
➜ to achieve 10 Gbps requires loss rate L = 2·10^-10. Wow!
New versions of TCP for high-speed needed!
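Plugging the slide's numbers (1500-byte segments, 100 ms RTT, 10 Gbps target) into throughput = 1.22·MSS/(RTT·√L) reproduces both figures. A quick check:

```java
public class TcpFutures {
    // Window (in segments) needed to sustain targetBps:
    // throughput = W * MSS / RTT  =>  W = throughput * RTT / MSS
    static double requiredWindow(double targetBps, double mssBytes, double rttSec) {
        return targetBps * rttSec / (mssBytes * 8);
    }

    // Loss rate from throughput = 1.22 * MSS / (RTT * sqrt(L)), solved for L.
    static double tolerableLoss(double targetBps, double mssBytes, double rttSec) {
        double sqrtL = 1.22 * mssBytes * 8 / (rttSec * targetBps);
        return sqrtL * sqrtL;
    }

    public static void main(String[] args) {
        System.out.printf("required window W ~ %.0f segments%n",
                requiredWindow(10e9, 1500, 0.100));   // ~83333 segments
        System.out.printf("tolerable loss rate L ~ %.1e%n",
                tolerableLoss(10e9, 1500, 0.100));    // ~2e-10
    }
}
```

A loss rate of 2·10^-10 means at most one lost segment per ~5 billion, far below what real links deliver; hence the call for new high-speed TCP variants.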
TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
Why is TCP fair?
Two competing sessions:
  additive increase gives slope of 1, as throughput increases
  multiplicative decrease decreases throughput proportionally
[Figure: phase plot of Connection 1 vs. Connection 2 throughput; alternating congestion avoidance (additive increase, slope 1) and loss (window decrease by factor of 2) moves the operating point toward the equal bandwidth share line]
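The convergence argument can be checked with a toy simulation of two AIMD flows sharing a bottleneck of capacity R: both add 1 unit per round, and both halve their rate whenever their sum exceeds R. All the numbers are illustrative; real TCP dynamics (RTT differences, timeouts) are ignored:

```java
public class AimdFairness {
    // Returns the two flows' rates after `rounds` AIMD rounds on a
    // bottleneck of capacity r.
    static double[] simulate(double x1, double x2, double r, int rounds) {
        for (int i = 0; i < rounds; i++) {
            if (x1 + x2 > r) {      // loss: multiplicative decrease for both
                x1 /= 2; x2 /= 2;
            } else {                // no loss: additive increase for both
                x1 += 1; x2 += 1;
            }
        }
        return new double[]{x1, x2};
    }

    public static void main(String[] args) {
        // Start far from fair: flow 1 hogs the link.
        double[] rates = simulate(90, 10, 100, 1000);
        System.out.printf("flow 1: %.2f, flow 2: %.2f%n", rates[0], rates[1]);
        // Additive increase preserves the gap x1 - x2, but every halving
        // cuts it in half, so the rates converge toward the equal share.
    }
}
```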
Fairness (more)
Fairness and UDP:
  multimedia apps often do not use TCP
    do not want rate throttled by congestion control
  instead use UDP: pump audio/video at constant rate, tolerate packet loss
  research area: TCP-friendly congestion control
Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts
  web browsers do this
  Example: link of rate R supporting 9 connections; a new app asking for 1 TCP gets rate R/10, but asking for 11 TCPs gets R/2!
Chapter 3: Summary
principles behind transport layer services:
multiplexing, demultiplexing
reliable data transfer
flow control
congestion control
instantiation and implementation in the Internet:
  UDP
  TCP
Next:
  leaving the network “edge” (application, transport layers)
  into the network “core”