

Chapter 2 Background knowledge

2.3 Simulation results of TCP flows using the adaptive control algorithm for dispersity routing

The adaptive control algorithm is a congestion control method that helps us measure the metric for routing decisions, and it also approaches an optimal rate allocation among different sources through dispersity routing. However, the throughput performance is poor when we apply TCP in such a network: if we route the packets of a TCP flow over different paths, the RTTs of those paths may vary widely, and we may choose a path with low available bandwidth for the TCP flow. The poor simulation result of dispersity routing is shown in the following figure:

Fig 3 Throughput of TCP flow (without flow identification in the router)

In the figure, we see that the oscillation of the throughput of the TCP flow is acute. We address the problem by identifying each flow at the edge routers of the network. One of the modified results is shown in the following figure:

Fig 4 Throughput of TCP flow

As the figure shows, the oscillation is lower than in the original case. In this thesis we use TFRC in place of TCP. TFRC (TCP-Friendly Rate Control) is used to obtain a more stable, smoother throughput than standard TCP. The TCP throughput function is as follows:

X = \frac{s}{R\sqrt{\dfrac{2p}{3}} + t_{RTO}\left(3\sqrt{\dfrac{3p}{8}}\right) p \left(1 + 32p^{2}\right)}

In the throughput function, s is the packet size, R is the round-trip time, p is the steady-state loss event rate, and t_RTO is the TCP retransmit timeout value. TFRC calculates the loss event rate using the average loss interval method, which helps TCP flows obtain a smoother throughput in steady state. The following sub-section introduces the average loss interval method and how TFRC defines the loss interval.
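As a minimal illustration (not part of the thesis simulation), the throughput function above can be evaluated directly in Python; the numbers used here are hypothetical and only show how the loss event rate p drives the allowed rate:

    from math import sqrt

    def tfrc_throughput(s, R, p, t_RTO):
        """Allowed sending rate (bytes/s) from the TCP throughput equation above."""
        denom = R * sqrt(2.0 * p / 3.0) \
                + t_RTO * (3.0 * sqrt(3.0 * p / 8.0)) * p * (1.0 + 32.0 * p ** 2)
        return s / denom

    # Hypothetical values: 1000-byte packets, 100 ms RTT, 1% loss event rate, 400 ms RTO.
    print(tfrc_throughput(s=1000, R=0.1, p=0.01, t_RTO=0.4))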

2.4 Determining a new loss event interval and the average loss interval method

The average loss interval method calculates the loss event rate more smoothly than the original TCP congestion detection method, that is, the congestion window method. The original congestion window control follows AIMD (additive increase/multiplicative decrease). The key to the smooth throughput of TFRC is that the new loss probability calculation defines a loss event not by detecting a single lost packet but by counting how many packets fall in each loss interval, which gives a more uniform estimate of the TCP loss probability. TCP declares a packet lost when the receiver sees a packet whose sequence number is more than three beyond that of the previously received packet. When a packet is lost, we calculate a false arrival time for the lost packet to determine whether a new loss interval needs to be started. The algorithm is shown in the following:

Before calculating the false arrival time, we define the following parameters:

S_loss is the sequence number of a lost packet.

S_before is the sequence number of the last packet to arrive with sequence number before S_loss.

S_after is the sequence number of the first packet to arrive with sequence number after S_loss.

T_before is the reception time of S_before.

T_after is the reception time of S_after.

T_loss is the false reception time of S_loss.

The calculation of T_loss is then:

T_{loss} = T_{before} + (T_{after} - T_{before}) \cdot \frac{S_{loss} - S_{before}}{S_{after} - S_{before}}

If T_loss + R > T_after, we need to close the old loss event interval and start a new one, and the newly arriving packets are counted into the new interval.
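As a small sketch (variable names follow the definitions above; R, the round-trip time, is assumed to be known), the interpolation and the interval-update test could be coded as:

    def estimate_loss_time(S_loss, S_before, S_after, T_before, T_after):
        """Interpolate the false (estimated) arrival time T_loss of the lost packet."""
        return T_before + (T_after - T_before) * (S_loss - S_before) / (S_after - S_before)

    def starts_new_loss_interval(T_loss, R, T_after):
        """Mirror the condition in the text: open a new loss event interval
        when T_loss + R > T_after; otherwise the loss stays in the current interval."""
        return T_loss + R > T_after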

Fig 5 Illustration of average loss interval method

The algorithm is as follows:

Assume that w_i is the weight of loss interval i, for i = 0, 1, …, n.

Weights w_0 to w_n are calculated as:

w_i = \begin{cases} 1, & i < n/2 \\ \dfrac{2(n - i)}{n + 2}, & \text{otherwise} \end{cases}

After determining weights of all intervals, we calculate the average loss interval as follows:

\hat{I} = \frac{\sum_{i=0}^{n} w_i I_i}{\sum_{i=0}^{n} w_i}

where I_i is the number of packets in loss interval i and the weights w_i are those defined above. The loss event rate is then p = 1/\hat{I}.
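A minimal Python sketch of this calculation, assuming the sizes of the most recent loss intervals are kept in a list with the newest interval first (and indexing the weights 0..n-1 rather than 0..n as in the text):

    def loss_interval_weights(n):
        """Newest half of the intervals gets full weight; older ones decay linearly."""
        return [1.0 if i < n / 2 else 2.0 * (n - i) / (n + 2) for i in range(n)]

    def average_loss_interval(intervals):
        """Weighted average of the loss interval sizes; its inverse is the loss event rate p."""
        w = loss_interval_weights(len(intervals))
        return sum(wi * Ii for wi, Ii in zip(w, intervals)) / sum(w)

    # Hypothetical interval sizes (packets per loss interval, newest first):
    I = [90, 110, 100, 120, 80, 95, 105, 100]
    I_avg = average_loss_interval(I)
    print(I_avg, 1.0 / I_avg)   # average loss interval and loss event rate p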

Chapter 3

Unified routing and congestion control

Routing algorithms make their routing decisions based on feedback information provided by measurement mechanisms. In chapter 2 we introduced the adaptive control algorithm. In this chapter, we introduce how to use the adaptive control algorithm to help our routing method make its decisions.

Our routing algorithm is based on the adaptive control algorithm. As mentioned in chapter 2, the control algorithm returns the bandwidth of each path for each source, and we use this information as our routing metric. We propose two methods: proportion routing and chosen maximum bandwidth routing. Both methods are very simple and can easily be implemented in real routing management.

3.1 Introduction of unified routing and congestion control

Let us trace the flowchart in the following figure. From the start point, we randomly route each flow of the different sources onto one of its available routing paths. After a period of time, the unified mechanism measures the buffer occupancy of each link to determine whether the buffer is full or still has free space. If the buffer of a link is full, the mechanism treats that link as congested. In other words, we define congestion as the arrival rate of packets exceeding the capacity of the link. Once a node detects congestion, it uses the BECN mechanism to feed congestion messages back to each source. After the link state is measured, the congestion information is handled by the congestion control mechanism, which must take the different traffic demands into account in its control law. In other words, the congestion control block assigns different traffic demands to obtain different control laws. The feedback information is then sent to the sources, and the sources apply routing strategies to lead their flows onto the "correct path". The routing algorithms adjust their decision weights whenever they receive new feedback information. This scenario continues until the global goal is approached. The complete flowchart unifying our routing methods and the adaptive control algorithm is shown in the following figure.

Fig 6 Flowchart of unified routing and congestion control
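As a rough conceptual sketch only (the function below is hypothetical and greatly simplified, not the control law of chapter 2), the flowchart above can be pictured as: random initial routing, periodic congestion measurement, binary feedback, rate adjustment, and re-routing:

    import random

    def unified_control_loop(paths_per_source, capacity, demand, rounds=50):
        """Toy version of Fig 6: route randomly, then react to binary congestion feedback."""
        choice = {s: random.choice(p) for s, p in paths_per_source.items()}   # random start
        rate = dict(demand)
        for _ in range(rounds):
            load = {path: 0.0 for path in capacity}
            for s, path in choice.items():                 # measure the load on each path
                load[path] += rate[s]
            for s, path in choice.items():
                if load[path] > capacity[path]:            # congested: binary (BECN-like) feedback
                    rate[s] *= 0.5                         # back off the sending rate
                    choice[s] = random.choice(paths_per_source[s])   # try another path
                else:
                    rate[s] = min(demand[s], rate[s] + 0.1)          # probe for more bandwidth
        return choice, rate

    # Example: two sources, one shared path "a" and one private path "b" (hypothetical numbers).
    print(unified_control_loop({"s1": ["a", "b"], "s2": ["a"]},
                               capacity={"a": 1.0, "b": 1.0},
                               demand={"s1": 1.0, "s2": 1.0}))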

3.2 Unified chosen maximum bandwidth routing and adaptive control algorithm

In the following subsection, we introduce how to use the information x_{i,j} as our routing metric. As described in section 3.1, when the congestion information is received by the sources, they add the b_{i,j} value of the adaptive congestion control laws, and the adaptive congestion control mechanism re-assigns bandwidth to each routing path. According to this information about the available bandwidth of each path, we propose the chosen maximum bandwidth routing method: each newly arriving flow of a source is routed onto the path with the largest available bandwidth for that source. This kind of routing decision resembles the gradient descent method of optimization theory: we approach the optimal point by leading each new arrival flow away from the most congested path.

The routing decision of the "chosen maximum bandwidth" method takes the form:

j^{*} = \arg\max_{j \in \{1,\dots,n_i\}} x_{i,j}

where x_{i,j} is the sending rate (available bandwidth) of source i on path j, and n_i is the number of paths source i can choose from to route its flows.

Let us introduce the method more explicitly with the following example:

Consider a network with two sources, where source 1 has four paths to route its flow to the destination and source 2 has three paths. We periodically obtain the available bandwidth of each path belonging to each source. Let us denote the four paths of source 1 as p11, p12, p13, p14 and the three paths of source 2 as p21, p22, p23, and their respective available bandwidths as x11, x12, x13, x14 and x21, x22, x23. When we use the chosen maximum bandwidth method to choose a suitable routing path for a newly arriving flow, we compare the values as follows:

For source 1:
    route the new flow to the path j maximizing x1j over {x11, x12, x13, x14};
    if no such path can be determined, take the shortest path;
end

For source 2:
    route the new flow to the path j maximizing x2j over {x21, x22, x23};
    if no such path can be determined, take the shortest path;
end
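The same decision can be written as a minimal Python sketch; the bandwidth values below are hypothetical and would come from the adaptive control algorithm's feedback in practice:

    def choose_max_bandwidth_path(x, shortest=0):
        """Index of the path with the largest available bandwidth x[j];
        fall back to the shortest path when no positive feedback is available."""
        if not x or max(x) <= 0:
            return shortest
        return max(range(len(x)), key=x.__getitem__)

    x1 = [3.0, 1.5, 2.2, 0.8]   # hypothetical x11..x14 for source 1
    x2 = [0.5, 0.9, 0.4]        # hypothetical x21..x23 for source 2
    print(choose_max_bandwidth_path(x1))   # -> 0, i.e. route on p11
    print(choose_max_bandwidth_path(x2))   # -> 1, i.e. route on p22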

3.3 Unified proportion routing and adaptive control algorithm

In this section we introduce another method, which uses the available bandwidth of each routing path as a weight to randomly route a newly arriving flow onto a path. This kind of routing method is generally called proportion routing. Proportion routing is commonly applied in the routing area, and it can efficiently distribute flows over different routing paths based on the weight proportions of the routing metric. Our method uses the bandwidth of each path fed back by the adaptive control algorithm. When a new flow arrives, we assign it to a path with a probability generated from the ratio of the available bandwidths of the routing paths. The routing decision of proportion routing has the following form:

P_{i,j} = \frac{x_{i,j}}{\sum_{k=1}^{n_i} x_{i,k}}

where x_{i,j} is the sending rate (available bandwidth) of source i on path j, and n_i is the number of paths source i can choose from to route its flows.

Let us introduce the method more explicitly by reusing the example from section 3.2:

In that case, there are two sources in the network; source 1 has four available routing paths and source 2 has three. Once again, we use the available bandwidth of each path provided by the adaptive control algorithm as our routing metric. When a new flow arrives, we take the following strategy:

For source 1: route the new flow onto path p1j with probability x1j / (x11 + x12 + x13 + x14), j = 1, …, 4;
For source 2: route the new flow onto path p2j with probability x2j / (x21 + x22 + x23), j = 1, 2, 3.
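A minimal sketch of this strategy uses Python's random.choices, which draws a path index with probability proportional to its available bandwidth (hypothetical numbers again):

    import random

    def choose_path_proportionally(x):
        """Pick path index j with probability x[j] / sum(x)."""
        return random.choices(range(len(x)), weights=x, k=1)[0]

    x1 = [3.0, 1.5, 2.2, 0.8]   # hypothetical x11..x14 for source 1
    print(choose_path_proportionally(x1))   # p11 is chosen with probability 3.0 / 7.5 = 0.4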

Chapter 4 Simulation

In the following figure, we take this network as our simulation topology. The figure shows eight sources together with the transmission delay and capacity of each link. We assume that all sources generate TCP traffic randomly. All TCP traffic is implemented with TFRC, and the queue management for the link buffers is FIFO. We also assume a set of paths for the eight source-destination pairs, listed in the following table.

The paths that these sources can use form a routing path table, shown as follows:

Table 1 All available paths for all SD pairs

The notation n_i, i = 1, 2, …, 8, denotes the number of paths that each source-destination pair can use in the network topology.

4.1 Simulation model introduction

We assume that all TCP flows appear randomly and that each flow has a different lifetime. We use first-in-first-out (FIFO) queue management in our simulation; when the queue of a link is full, we apply the tail drop strategy. Our simulation uses UML to build the model. The network environment we built is shown in the following object main diagram:

Fig 8 Our UML model

In the diagram we can see that the network is the main object connecting the other classes: source, node, link, and receiver. In other words, the network is composed of these objects and can obtain any information from these four classes. The model is a simplification of a real network environment. The source class produces TCP flows and injects them into the network; the node class acts like a router, and each node owns a one-to-one routing table in the object model. When a packet arrives, the only thing a node needs to do is examine the packet header and find the source and destination of the packet. With this information, the node knows on which path it should transport the packets passing through it. When a node transports these packets to the next node, the state chart of the link starts and checks whether the buffer is full or still has space.
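For illustration, the lookup a node performs could be sketched as a table keyed by the (source, destination) pair taken from the packet header; the class and field names here are hypothetical, not the actual UML classes:

    class Node:
        """Simplified router: forwards a packet according to a (source, destination) table."""
        def __init__(self, routing_table):
            self.routing_table = routing_table   # (source, destination) -> outgoing link

        def forward(self, packet):
            key = (packet["source"], packet["destination"])
            return self.routing_table[key]       # the link on which to transport the packet

    node = Node({("S1", "D1"): "link_a", ("S2", "D2"): "link_b"})
    print(node.forward({"source": "S1", "destination": "D1", "seq": 42}))   # -> link_a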

The state chart of the link is shown in the following figure.

Fig 9 State chart of link

When a new packet arrives, the link checks its buffer size and transmits the packet at the front of the link buffer to the next node. The link state chart also needs to distinguish the forward direction from the backward direction. In other words, a link is the bridge between any two nodes in the network and its transmission is bi-directional.
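A matching sketch of the link buffer behaviour (FIFO with tail drop, as assumed in section 4.1; the names and buffer size are illustrative only):

    from collections import deque

    class Link:
        """FIFO link buffer with tail drop: a packet arriving at a full buffer is lost."""
        def __init__(self, buffer_size=20):
            self.buffer = deque()
            self.buffer_size = buffer_size

        def enqueue(self, packet):
            if len(self.buffer) >= self.buffer_size:
                return False                       # tail drop: the buffer is full
            self.buffer.append(packet)
            return True

        def transmit(self):
            return self.buffer.popleft() if self.buffer else None   # send the front packet onward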

4.2 Implementation of TCP transmitter and TCP receiver by UML

As the background knowledge above introduces, TCP is a well-known protocol and it can be replaced by a TCP-friendly protocol. We also know that the stability of TCP is based on its end-to-end congestion control mechanism. In the following, we implement our TFRC flow model in UML and illustrate how the components work together. The state chart of the TCP transmitter is shown in the following figure:

Fig 10 State chart of TCP transmitter

In the figure, we can see that the TCP transmitter sends the packets of its flows into the ingress node of the network and receives ack messages at the same time. The TCP transmitter also needs to record the RTT value in order to determine the sending time of the next packet. As the state chart shows, in the "intime" state, if "end_time > nofeedback_time" then TCP is in the slow-start state and needs to double the original TCP data rate; otherwise TCP is in the stable state and the data rate does not need to change (see [10] p.10).

The TCP lifetime is also determined by the transmitter: the transmitter keeps sending packets until the parameter "TCPlifetime" drops below 0. When the receiver receives a packet, it must check whether the packet is out of order (the sequence number of the new arrival packet is more than 3 beyond the number of the old packet) and whether the loss event interval needs to be updated, as introduced in the background knowledge section. The state chart of the receiver is illustrated in the following figure. In the figure, we can see that the receiver only needs to do one thing, namely the "out of order packet check" mentioned before. The main idea of the TFRC design is that the receiver should be as simple as possible, letting the transmitter deal with most of the work.

Fig 11 State chart of TCP receiver
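A rough sketch of the receiver-side check described above (illustrative only): a gap is declared a loss once the arriving sequence number is more than three beyond the last in-order packet, after which the loss interval routines of section 2.4 would be invoked:

    def packet_considered_lost(last_in_order_seq, new_seq, threshold=3):
        """True when the new packet's sequence number is more than `threshold`
        ahead of the last in-order packet, i.e. the gap is treated as a loss."""
        return new_seq - last_in_order_seq > threshold

    print(packet_considered_lost(10, 12))   # False: within the reordering tolerance
    print(packet_considered_lost(10, 15))   # True: packets in between are treated as lost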

4.3 Comparison of SPR and our routing methods

Our simulation compares the results of three methods: the first is the shortest path routing method (SPR), the second is the proportion routing method, and finally we show the results of the chosen maximum bandwidth routing method. Although all eight sources are simulated, we only discuss the results of sources 5, 7, and 8, because these sources have a unique meaning for path selection in the overall network topology.

The following figures show the mean data rates of sources 5, 7, and 8 using the shortest path routing method. From our simulation, we observe that the original shortest path method gives the TCP flows of source 7 a non-uniform throughput, as shown in the following figures. This means that some flows of source 7 may obtain bad throughput, because source 7 needs to share bandwidth with sources 3 and 4. Sources 5 and 8 show a uniform throughput distribution in Fig 12 and Fig 14, because no other sources share the bandwidth on their paths.

Fig 12 Mean data rate of SD pair 5 with SPR method

Fig 14 Mean data rate of SD pair 8 with SPR method

Because of the bad result for source 7, we try the proportion routing method and the chosen maximum bandwidth method to improve throughput fairness among all users and to let the network manager approach maximum utilization of the link buffers. The results are shown in the following figures:

Fig 15 Mean data rate of SD pair 5 with proportion routing method

Fig 17 Mean data rate of SD pair 8 with proportion routing method

Fig 18 Mean data rate of SD pair 5 with chosen maximum bandwidth method

Fig 19 Mean data rate of SD pair 7 with chosen maximum bandwidth method

In the results of the chosen maximum bandwidth routing method, the throughput of the flows of each individual source-destination pair is more uniformly distributed. The individual source-destination pairs achieve good utilization on each link because we unify routing and the adaptive control algorithm.

It appears that the fairness of the chosen maximum bandwidth routing method is better than that of proportion routing and shortest path routing. However, the global fairness of chosen maximum bandwidth routing is not the best. To show this, we compare the fairness of all source-destination pairs using our global fairness index.

The global fairness index is defined as follows:

F = \frac{\left(\sum_{i}\sum_{j} x_{i,j}\right)^{2}}{N \sum_{i}\sum_{j} x_{i,j}^{2}}

where the double sums run over all source-destination pairs i and their paths j, x_{i,j} here denotes the mean throughput measured on path j of pair i, and N is the total number of terms in the sums.
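If the global fairness index is the standard Jain index over the per-flow mean throughputs (an assumption consistent with the double summation, not spelled out in the text), it could be computed as:

    def jain_fairness(throughputs):
        """Jain's fairness index: (sum x)^2 / (N * sum x^2), which lies in (0, 1]."""
        n = len(throughputs)
        total = sum(throughputs)
        return total * total / (n * sum(x * x for x in throughputs))

    # Hypothetical per-flow mean throughputs collected over all source-destination pairs:
    print(jain_fairness([900, 950, 1100, 400, 870, 1000]))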

Table 2 lists the results of the three methods:

Table 2 Global fairness index of three methods

In Table 2, we can see that the global fairness of the proportion routing method is the best; chosen maximum bandwidth routing is lower by about 0.02 percent, and shortest path routing is lower by about 0.2 percent. From the above simulation results, we conjecture that as the fairness index increases, the throughput of each flow decreases. To verify this phenomenon, we also measure the mean throughput of the three methods in Table 3.

Table 3 Average throughput of three methods

In Table 3, we can see that the mean throughput of SPR is the highest; the throughput of chosen maximum bandwidth routing is about 600 packets per flow lower than SPR, and the worst case is proportion routing, which is about 1100 packets per flow lower than SPR. This shows that if we want good global fairness, the total throughput decreases at the same time.

Chapter 5

Conclusion and Future work

In this thesis we propose two routing methods that are unified with congestion control and that improve the overall fairness among different source-destination pairs. The two methods rely only on binary feedback information to configure their routing strategies. According to our simulation, the proportion routing method can achieve 82 percent on the global fairness index. However, the mean throughput is reduced to about 2900 packets per flow. We conjecture that this is because the SPR method does not consume the bandwidth of other TCP flows on their original paths, whereas proportion routing may consume the bandwidth of other TCP flows, causing their throughput to decrease. How to solve this problem is our biggest remaining task. For further research, we may be able to trade off fairness against throughput by adding some parameters, such as the number of flows on a path, and tuning them to approach a balanced situation.

References

[1] Emilio Leonardi, Marco Mellia, Marco Ajmone Marsan, and Fabio Neri, "Joint Optimal Scheduling and Routing for Maximum Network Throughput," in Proc. IEEE INFOCOM, vol. 2, 2005, pp. 819-830.

[2] Constantino Lagoa, Hao Che, and Bernardo A. Movsichoff, "Adaptive Control Algorithms for Decentralized Optimal Traffic Engineering in the Internet," IEEE/ACM Transactions on Networking, vol. 12, no. 3, pp. 415-428, 2004.

[3] Anindya Basu, Alvin Lin, Sharad Ramanathan, “Routing Using Potentials: A Dynamic Traffic-Aware Routing Algorithm”, ACM SIGCOMM, August 25-29, 2003.

[4] Sally Floyd, Mark Handley, and Jitendra Padhye, "Equation-Based Congestion Control for Unicast Applications: the Extended Version," ICSI Technical Report TR-00-03, March 2000. URL http://www.aciri.org/tfrc/.

[5] R. J. La and V. Anantharam, "Charge-sensitive TCP and rate control in the Internet," in Proc. IEEE INFOCOM, Mar. 2000, pp. 1166-1175.

[6] F. Bonomi and K. W. Fendick, “The rate-based flow control framework for the available bit rate ATM service,” IEEE Network, vol. 9, no. 2, pp. 25–39, Mar./Apr. 1995.

[7] N. F. Maxemchuk, "Dispersity Routing," in Proc. ICC, June 1975, pp. 41.10-41.13.

[8] P. Newman, "Traffic Management for ATM Local Area Networks," IEEE Communications Magazine, pp. 44-50, August 1994.

[9] S. Floyd and E. Kohler, "TCP Friendly Rate Control (TFRC): The Small-Packet (SP) Variant," April 2007. URL http://www.ietf.org/rfc/rfc4828.txt.

Appendix A

Fig 21 Mean data rate of SD pair 1 with SPR method

Fig 23 Mean data rate of SD pair 3 with SPR method

Fig 24 Mean data rate of SD pair 4 with SPR method

Fig 25 Mean data rate of SD pair 6 with SPR method

Fig 27 Mean data rate of SD pair 2 with proportion routing method

Fig 28 Mean data rate of SD pair 3 with proportion routing method

Fig 29 Mean data rate of SD pair 4 with proportion routing method

Fig 30 Mean data rate of SD pair 6 with proportion routing method

Fig 31 Mean data rate of SD pair 1 with chosen maximum bandwidth routing method

Fig 32 Mean data rate of SD pair 2 with chosen maximum bandwidth routing method

Fig 33 Mean data rate of SD pair 3 with chosen maximum bandwidth routing method

Fig 34 Mean data rate of SD pair 4 with chosen maximum bandwidth routing method

Fig 35 Mean data rate of SD pair 6 with chosen maximum bandwidth routing method

Appendix B

Table 4 Local fairness index of three methods
