
Performance Analysis of a Generic GMPLS Switching Architecture with Flush Capability†

Ling-Chih Kao and Zsehong Tsai

Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan

† This work was supported by the National Science Council, R.O.C., under Grant NSC 90-2213-E-002-076, and by the Ministry of Education, R.O.C., under Grant 89E-FA06-2-4-7.

Abstract— The performance of a GMPLS switching architecture with the flush capability is studied. For this switching architecture, we propose a queueing model that includes the control plane, the switching buffer mechanism, and the flush mechanism. The flush capability is included to reduce the out-of-sequence problem due to dynamic path changes. The behavior of aggregated streams, the label-setup and release policies, and the mechanisms for efficient resource allocation are all covered. With the proposed model, one can select appropriate parameters for the label-setup policy and the label-release policy to match the traffic load and network environment. Key performance metrics, such as the throughput, the label-setup rate, and the fast path bandwidth utilization, can all be evaluated by this mathematical model. Numerical results and simulations are used to verify the accuracy of our proposed queueing model. Finally, the trade-off among these performance metrics can be observed as well.

Keywords—GMPLS, switching, routing, performance analysis.

I. INTRODUCTION

In recent years, it has been a trend to adopt new technologies to overcome the scalability and complexity issues in routing table lookup and packet forwarding. In the Internet Engineering Task Force (IETF), Multi-Protocol Label Switching (MPLS) [1] is proposed and considered to be one of the most important solutions. The basic concept of MPLS is that packet forwarding is based on a fixed short-length label instead of a longest-matching search, which can shorten packet transit time. There are two most popular approaches for connection setup in MPLS. The traffic-driven method is to trigger label-setup according to traffic demand, while the topology-driven system is based on routing information. When it is necessary to combine the traffic engineering aspect of MPLS with the bandwidth provisioning capability of DWDM, Multi-Protocol Lambda Switching (MPλS) [2] is found to play a major role. Meanwhile, considering that there are many different underlying data-link and physical layer technologies, Generalized Multi-Protocol Label Switching (GMPLS) [3], [4] is thus suggested to extend MPLS to encompass time-division, wavelength (lambda), and spatial switching.

In order to control different switching operations under GMPLS, the label defined for various switches is required to be of different formats [5], and related signaling and routing protocols also need modifying [3], [6]. However, the key operations of the control plane of these various switching protocol suites are found to be similar. Although basic functions of the (G)MPLS control plane have been discussed or defined in the literature, operation procedures for efficient resource control have not been defined completely and their impact on performance is still under investigation. At the same time, a sophisticated queueing model which can evaluate the performance of the (G)MPLS switching network is found not quite easy to build. In [7], [8], Nakazawa et al. presented a mathematical model for an MPLS switch. However, many operation details of the MPLS control plane were not well covered in their work. Such examples are the occasion to set up a Label Switched Path (LSP), the allowed lifetime for an LSP, and the appropriate time to release an LSP. Our previous work [9], [10] analyzed the operations of an MPLS switch including the above-mentioned behavior, but only under heavy load and long-duration traffic. Therefore, a model embracing detailed operations of the GMPLS control plane is still strongly needed.

In this paper, we develop a queueing model to characterize the behavior of detailed operations of a generic GMPLS switch with a flush mechanism. Aggregation of IP streams is assumed so that the label (or lambda) usage can be reduced and the processing load of the GMPLS controller can be alleviated. The label-setup policy we propose is based on the accumulated packets in the default path buffer. According to this policy, the path is set up only when the number of packets has reached a threshold. The label-release policy is controlled by an adjustable label-release timer. An efficient resource allocation mechanism is thus achieved by fine tuning the flexible label-setup policy and the adjustable label-release timer. In ATM LAN Emulation [11], there is a flush mechanism that administers the change from the Broadcast Unknown Server (BUS) forwarding path onto the Data Direct VCC path, which ensures that none of the arriving packets are out-of-sequence. The necessity to include the flush mechanism in the GMPLS switch architecture is similar. Hence, the flush mechanism is invoked when the fast path becomes available. Under this mechanism, the packets accumulated in the default path will be switched to the fast path as soon as the fast path becomes available. Although our queueing model is traffic-driven oriented, the behavior of the topology-driven system can be approximately obtained via an extreme case of this traffic-driven model. The key performance measures, such as the throughput, the label-setup rate, and the fast path bandwidth utilization, can all be derived in the proposed model.

The remainder of the paper is organized as follows. In Section II, the queueing model for a GMPLS switch is described. In Section III, the analysis procedure is presented. In Section IV, three performance measures are derived. Numerical experiments are discussed in Section V. Conclusions are drawn in Section VI.

II. QUEUEING MODEL

In this section, a queueing model characterizing the behavior of an aggregated IP stream passing through a GMPLS switch is presented. The number of labels is assumed to be enough for all incoming flows. The bandwidth allocated to each label (or a flow) is fixed. Therefore, we can focus on investigating the steady-state performance of a GMPLS switch without label contention. For simplicity, we focus on the case that only one single flow is included in this queueing model. The results can then be easily extended to the general case. Regarding the traffic source, an aggregated stream (equivalently, a flow) is assumed to consist of $N$ homogeneous IPPs (Interrupted Poisson Processes), where each IPP has an exponentially distributed on (off) duration with mean $1/\beta$ ($1/\alpha$), and $\lambda$ is the arrival rate in the on state. Note that this traffic source model includes four parameters and can match most Markovian traffic patterns (self-similar traffic is not considered in this paper).
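As a consequence of these definitions (with the off-to-on rate $\alpha$ and the on-to-off rate $\beta$, the convention read off from Fig. 2), the stationary on-probability of one IPP and the mean rate of the aggregated stream follow directly; with the Section V values $1/\alpha = 0.2$ s and $1/\beta = 0.8$ s, the on-probability is 0.8:

$$\Pr[\text{on}] = \frac{1/\beta}{1/\alpha + 1/\beta} = \frac{\alpha}{\alpha+\beta}, \qquad \bar{\lambda}_{\text{agg}} = N\lambda\,\frac{\alpha}{\alpha+\beta}.$$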

The queueing model for a GMPLS switch, the GMPLS queueing model, is shown in Fig. 1. The solid lines in Fig. 1 denote the paths that packets go through, and the dotted lines are the signaling paths. There are three major functional blocks in this model: the GMPLS controller, the default route module, and the fast route module. The functions of the control plane are included in the GMPLS controller. The default route module stands for the IP layer (layer 3) and the data-link layer (layer 2) on the default path. The fast path is represented by the fast route module. In this GMPLS architecture, the label is used as a generic term. When GMPLS is used to control TDM such as SONET, time slots are labels. Each frequency (or $\lambda$) corresponds to a label when FDM such as WDM is taken as the underlying switching technology. When the switching mechanism is space-division multiplexing based, labels are referred to as ports. Six queueing nodes are included in this model: Default Route, Fast Route, Label Pool, Fast Route Setup, Label Release, and Label Release Timer. Traffic served by the traditional routing protocol is served by the Default Route node, whose buffer stores the packets which cannot be processed in time by the IP processor on the default path. Meanwhile, the Fast Route node serves packets whose stream has been assigned a label. The fast path buffer stores the packets which cannot be processed in time by the fast route. The Label Pool stores the labels which represent the availability of the fast path; the fast path is available if there is a label in the Label Pool. The Fast Route Setup node represents the time required to set up an LSP for an aggregated stream. The Label Release node represents the time required to release a label. The Label Release Timer node represents a label-release timer. This timer indicates the maximum length of the idle period of an aggregated stream before its label is released for other use. Once an aggregated stream is granted a label, it is served by its own Fast Route node and uses its own label-release mechanism. As a result, this model is used to examine the protocol efficiency instead of label competition.

When an aggregated IP stream arrives, two possible operations may occur. In the first case, the incoming traffic has already been assigned a fast path; its packets are then sent directly to the Fast Route node. In the second case, the incoming traffic has not been assigned a fast path. In this situation, all packets are continuously served by the Default Route node during the label-setup operations. If the accumulated packets in the buffer of the Default Route node have not reached the triggering threshold ($m$), the stream is served by the Default Route node through the traditional IP routing protocol. However, if the accumulated packets in the buffer of the Default Route node reach the triggering threshold, the flow classifier & label-setup policy manager triggers the default route module to send a setup request packet through the Fast Route Setup node to its downstream LSRs (Label Switch Routers), up to the egress LSR, to negotiate an appropriate LSP according to the current network resources. The GMPLS controller then sets up a fast path for this stream and assigns it a label. The packets accumulated in the buffer of the Default Route node are rerouted to the Fast Route node as soon as the fast path becomes available; this procedure is called the flush mechanism.

Fig. 1. GMPLS queueing model. (The figure shows the GMPLS controller, containing the flow classifier & label-setup policy, Label_Pool, Label_Release, Label_Release_Timer, and Fast_Route_Setup with rates $\mu_P$, $\mu_S$, $\mu_F$; the default route module with rate $\mu_D$; and the fast route module with rate $\mu_O$. Solid lines denote packet flow, dotted lines signaling.)

The label manager maintains an activity timer to control the label-release operation of the flow. The label is released only if the activity timer indicates that the maximum allowed inactive duration has been reached. Incoming packets are blocked if the accumulated packets in the Default Route node exceed the buffer size of the Default Route node while the stream has not been assigned a fast path, or if the accumulated packets in the Fast Route node exceed the buffer size of the Fast Route node when the stream has been assigned a fast path.
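To make the control-plane rules above concrete, the following is a minimal event-handler sketch of the routing, triggering, flushing, blocking, and timer rules. It is not the authors' simulator: the names (SwitchState, the handler functions) are ours, and the label-release service time ($1/\mu_S$) is collapsed into a single step.

from dataclasses import dataclass

@dataclass
class SwitchState:
    n_default: int = 0            # packets queued at the Default Route node (n_I)
    n_fast: int = 0               # packets queued at the Fast Route node (n_S)
    label_assigned: bool = False  # stream currently holds a label (fast path up)
    setup_pending: bool = False   # label-setup (Fast Route Setup) in progress
    timer_running: bool = False   # label-release (activity) timer armed

N_BUF_DEFAULT = 50   # n: Default Route buffer size (Section V value)
T_BUF_FAST = 30      # t: Fast Route buffer size (Section V value)
M_THRESHOLD = 3      # m: label-setup triggering threshold

def on_packet_arrival(s: SwitchState) -> bool:
    """Apply the routing/blocking rules; return False if the packet is blocked."""
    if s.label_assigned:
        s.timer_running = False          # activity on the fast path disarms the timer
        if s.n_fast >= T_BUF_FAST:
            return False                 # fast path buffer full: block
        s.n_fast += 1
        return True
    if s.n_default >= N_BUF_DEFAULT:
        return False                     # default buffer full, no fast path yet: block
    s.n_default += 1
    if s.n_default >= M_THRESHOLD and not s.setup_pending:
        s.setup_pending = True           # threshold m reached: send setup request
    return True

def on_setup_complete(s: SwitchState) -> None:
    """Fast path becomes available: flush default-queue packets onto it."""
    s.setup_pending = False
    s.label_assigned = True
    moved = min(s.n_default, T_BUF_FAST - s.n_fast)
    s.n_fast += moved                    # flush mechanism: reroute accumulated packets
    s.n_default -= moved

def on_fast_queue_empty(s: SwitchState) -> None:
    """Fast Route queue drained: arm the label-release timer."""
    if s.label_assigned:
        s.timer_running = True

def on_release_timeout(s: SwitchState) -> None:
    """Maximum idle period (T_rel) reached: release the label."""
    if s.label_assigned and s.timer_running and s.n_fast == 0:
        s.label_assigned = False         # label returned to the Label Pool
        s.timer_running = False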

III. STEADY-STATE ANALYSIS

We here present a procedure to calculate the steady-state distribution of the GMPLS switch model shown in Fig. 1. We adopt the following additional notation:

$1/\mu$: average packet length (bits per packet).
$C_D$: default path capacity (bps).
$C_O$: fast path capacity (bps).
$\mu_O = \mu C_O$: service rate in the Fast Route node.
$T_{rel} = 1/\mu_P$: the average sojourn time of the Label Release Timer node.
$\mu_S$: service rate in the Label Release node.
$\mu_F$: service rate in the Fast Route Setup node.
$T_{LSP}$: label-setup latency.
$\mu_D = \mu C_D$: service rate in the Default Route node.
$n$: buffer size of the Default Route node.
$t$: buffer size of the Fast Route node.
$n_I$: the number of packets in the Default Route node.
$n_S$: the number of packets in the Fast Route node.
$n_T$: the number of labels in the Label Pool ($n_T = 1$ if the fast path is available; $n_T = 0$ otherwise).
$n_O$: the number of IPPs in the on state.
$\delta_T$: the state of the Label Release Timer node ($\delta_T = 0$ if the Label Release Timer node is idle; $\delta_T = 1$ otherwise).
$\delta_R$: the state of the Label Release node ($\delta_R = 0$ if the Label Release node is idle; $\delta_R = 1$ otherwise).
$m$: the triggering threshold, i.e., the minimum number of accumulated packets that will trigger the label-setup operation.

The aggregated system state of the GMPLS queueing model is defined as the number of IPPs in the on state, and we use $\pi_k$ to denote its steady-state probability, where $k$ (ranging from $0$ to $N$) is the state. The aggregated transition diagram is shown in Fig. 2. We then employ the state vector $(a,b,c,d,e,f)$ to represent the state of the GMPLS queueing model within aggregated state $k$, where $n_I = a$, $n_S = b$, $n_O = c$, $\delta_T = d$, $\delta_R = e$, and $n_T = f$. The behavior of the GMPLS queueing model can then be predicted by a Markov chain.

Fig. 2. Aggregated state-transition diagram of the GMPLS queueing model (a birth-death chain on $k$ with forward rates $(N-k)\alpha$ and backward rates $k\beta$, $k = 0, 1, \ldots, N$).
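Since Fig. 2 is a finite birth-death chain, $\pi_k$ has the closed form of a binomial distribution with parameter $\alpha/(\alpha+\beta)$. The short numpy sketch below (our own illustration, using the Section V values) solves the chain numerically and checks it against that closed form.

import numpy as np
from math import comb

N = 5
alpha = 1 / 0.2   # off -> on rate (mean silence 0.2 s)
beta = 1 / 0.8    # on -> off rate (mean burst 0.8 s)

# Generator of the birth-death chain on k = number of IPPs in the on state.
Q = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N:
        Q[k, k + 1] = (N - k) * alpha   # one more IPP turns on
    if k > 0:
        Q[k, k - 1] = k * beta          # one IPP turns off
    Q[k, k] = -Q[k].sum()

# Solve pi Q = 0 with sum(pi) = 1 via a least-squares formulation.
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Cross-check: pi_k = C(N,k) p^k (1-p)^(N-k) with p = alpha/(alpha+beta) = 0.8.
p = alpha / (alpha + beta)
pi_closed = np.array([comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)])
assert np.allclose(pi, pi_closed)
print(pi)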

In this model, the service time is assumed to be exponentially distributed in all nodes. At the same time, the silence interval, the burst length, and the IP packet size are also assumed to be exponentially distributed.

According to the above definitions, the global balance equations in state $k$ of the aggregated state-transition diagram are listed as follows.

$P_{n-t,t,k,0,0,0} = (1-\mu_O-\mu_D)\,P_{n-t,t,k,0,0,0} + k\lambda\,P_{n-t,t-1,k,0,0,0} + \mu_F\,P_{n,0,k,0,0,1}$ (1)

$P_{n-i,t,k,0,0,0} = (1-\mu_O-\mu_D)\,P_{n-i,t,k,0,0,0} + \mu_D\,P_{n-i+1,t,k,0,0,0} + k\lambda\,P_{n-i,t-1,k,0,0,0} + \mu_F\,P_{n-i+t,0,k,0,0,1}, \quad t+1 \le i \le n-1$ (2)

$P_{0,t,k,0,0,0} = (1-\mu_O)\,P_{0,t,k,0,0,0} + \mu_D\,P_{1,t,k,0,0,0} + k\lambda\,P_{0,t-1,k,0,0,0} + \mu_F\,P_{t,0,k,0,0,1}$ (3)

$P_{0,j,k,0,0,0} = (1-\mu_O-k\lambda)\,P_{0,j,k,0,0,0} + k\lambda\,P_{0,j-1,k,0,0,0} + \mu_D\,P_{1,j,k,0,0,0} + \mu_O\,P_{0,j+1,k,0,0,0}, \quad 2 \le j \le m-1$ (4)

$P_{0,j,k,0,0,0} = (1-\mu_O-k\lambda)\,P_{0,j,k,0,0,0} + k\lambda\,P_{0,j-1,k,0,0,0} + \mu_D\,P_{1,j,k,0,0,0} + \mu_F\,P_{j,0,k,0,0,1} + \mu_O\,P_{0,j+1,k,0,0,0}, \quad m \le j \le t-1$ (5)

$P_{0,1,k,0,0,0} = (1-\mu_O-k\lambda)\,P_{0,1,k,0,0,0} + \mu_O\,P_{0,2,k,0,0,0} + k\lambda\,P_{0,0,k,1,0,0} + \mu_D\,P_{1,1,k,0,0,0}$ (6)

$P_{0,0,k,1,0,0} = (1-\mu_P-k\lambda)\,P_{0,0,k,1,0,0} + \mu_O\,P_{0,1,k,0,0,0} + \mu_D\,P_{1,0,k,1,0,0}$ (7)

$P_{n-t,t-j,k,0,0,0} = (1-\mu_O-\mu_D-k\lambda)\,P_{n-t,t-j,k,0,0,0} + \mu_O\,P_{n-t,t-j+1,k,0,0,0} + k\lambda\,P_{n-t,t-j-1,k,0,0,0}, \quad 1 \le j \le t-2$ (8)

$P_{n-t,1,k,0,0,0} = (1-\mu_O-\mu_D-k\lambda)\,P_{n-t,1,k,0,0,0} + \mu_O\,P_{n-t,2,k,0,0,0} + k\lambda\,P_{n-t,0,k,1,0,0}$ (9)

$P_{n-t,0,k,1,0,0} = (1-\mu_D-\mu_P-k\lambda)\,P_{n-t,0,k,1,0,0} + \mu_O\,P_{n-t,1,k,0,0,0}$ (10)

$P_{n-i,t-j,k,0,0,0} = (1-\mu_O-\mu_D-k\lambda)\,P_{n-i,t-j,k,0,0,0} + \mu_D\,P_{n-i+1,t-j,k,0,0,0} + k\lambda\,P_{n-i,t-j-1,k,0,0,0} + \mu_O\,P_{n-i,t-j+1,k,0,0,0}, \quad t+1 \le i \le n-1, \; 1 \le j \le t-2$ (11)

$P_{n-i,1,k,0,0,0} = (1-\mu_O-\mu_D-k\lambda)\,P_{n-i,1,k,0,0,0} + \mu_O\,P_{n-i,2,k,0,0,0} + k\lambda\,P_{n-i,0,k,1,0,0} + \mu_D\,P_{n-i+1,1,k,0,0,0}, \quad t+1 \le i \le n-1$ (12)

$P_{n-i,0,k,1,0,0} = (1-\mu_P-\mu_D-k\lambda)\,P_{n-i,0,k,1,0,0} + \mu_O\,P_{n-i,1,k,0,0,0} + \mu_D\,P_{n-i+1,0,k,1,0,0}, \quad t+1 \le i \le n-1$ (13)

$P_{n,0,k,0,1,0} = (1-\mu_S-\mu_D)\,P_{n,0,k,0,1,0} + k\lambda\,P_{n-1,0,k,0,1,0}$ (14)

$P_{n-i,0,k,0,1,0} = (1-\mu_S-\mu_D-k\lambda)\,P_{n-i,0,k,0,1,0} + k\lambda\,P_{n-i-1,0,k,0,1,0} + \mu_D\,P_{n-i+1,0,k,0,1,0}, \quad 1 \le i \le t-1$ (15)

$P_{n-i,0,k,0,1,0} = (1-\mu_S-\mu_D-k\lambda)\,P_{n-i,0,k,0,1,0} + \mu_P\,P_{n-i,0,k,1,0,0} + k\lambda\,P_{n-i-1,0,k,0,1,0} + \mu_D\,P_{n-i+1,0,k,0,1,0}, \quad t \le i \le n-1$ (16)

$P_{0,0,k,0,1,0} = (1-\mu_S-k\lambda)\,P_{0,0,k,0,1,0} + \mu_P\,P_{0,0,k,1,0,0} + \mu_D\,P_{1,0,k,0,1,0}$ (17)

$P_{n,0,k,0,0,1} = (1-\mu_F-\mu_D)\,P_{n,0,k,0,0,1} + \mu_S\,P_{n,0,k,0,1,0} + k\lambda\,P_{n-1,0,k,0,0,1}$ (18)

$P_{n-i,0,k,0,0,1} = (1-\mu_F-\mu_D-k\lambda)\,P_{n-i,0,k,0,0,1} + \mu_S\,P_{n-i,0,k,0,1,0} + k\lambda\,P_{n-i-1,0,k,0,0,1} + \mu_D\,P_{n-i+1,0,k,0,0,1}, \quad 1 \le i \le n-m$ (19)

$P_{n-i,0,k,0,0,1} = (1-\mu_D-k\lambda)\,P_{n-i,0,k,0,0,1} + \mu_S\,P_{n-i,0,k,0,1,0} + k\lambda\,P_{n-i-1,0,k,0,0,1} + \mu_D\,P_{n-i+1,0,k,0,0,1}, \quad n-m+1 \le i \le n-1$ (20)

$P_{0,0,k,0,0,1} = (1-k\lambda)\,P_{0,0,k,0,0,1} + \mu_S\,P_{0,0,k,0,1,0} + \mu_D\,P_{1,0,k,0,0,1}$ (21)

where $P_{a,b,c,d,e,f}$ is the steady-state probability of the state vector $(a,b,c,d,e,f)$. The detailed state-transition diagram corresponding to equations (1)-(21) is shown in Fig. 3.
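Note that the form of (1)-(21), with factors such as $(1-\mu_O-\mu_D-k\lambda)$, is that of a uniformized chain: each equation states $P = PT$ for the uniformized transition matrix $T = I + Q/\Lambda$, with the rates implicitly normalized by the uniformization constant $\Lambda$. A small self-contained sketch of this solve step is shown below on a toy 3-state generator standing in for the full $(a,b,d,e,f)$ state space; the function name and the toy matrix are ours.

import numpy as np

def stationary_uniformized(Q: np.ndarray, tol: float = 1e-12) -> np.ndarray:
    """Stationary distribution of a CTMC generator Q via uniformization
    and power iteration, i.e. the fixed point of P = P T as in (1)-(21)."""
    lam = -np.diag(Q).min()              # uniformization constant (max total rate)
    T = np.eye(Q.shape[0]) + Q / lam     # uniformized transition matrix
    pi = np.full(Q.shape[0], 1.0 / Q.shape[0])
    while True:
        nxt = pi @ T
        if np.abs(nxt - pi).max() < tol:
            return nxt / nxt.sum()
        pi = nxt

# Toy 3-state generator standing in for the detailed GMPLS state space.
Q = np.array([[-2.0, 1.5, 0.5],
              [1.0, -1.0, 0.0],
              [0.2, 0.8, -1.0]])
print(stationary_uniformized(Q))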

IV. PERFORMANCE MEASURES

One key performance metric is the throughput. We define $T_d$ and $T_f$ as the average throughput at the Default Route node and the Fast Route node, respectively; $T_{total} = T_d + T_f$ is the total throughput. Their formulas are given by

$T_d = \mu_D \Big\{ \sum_{k=0}^{N} \sum_{i=1}^{n} \sum_{j=1}^{t} P_{i,j,k,0,0,0}\,\pi_k + \sum_{k=0}^{N} \sum_{i=1}^{n} \big(P_{i,0,k,1,0,0} + P_{i,0,k,0,1,0} + P_{i,0,k,0,0,1}\big)\,\pi_k \Big\}$ (22)

$T_f = \mu_O \sum_{k=0}^{N} \sum_{i=0}^{n} \sum_{j=1}^{t} P_{i,j,k,0,0,0}\,\pi_k$ (23)

Since the label-setup rate is proportional to the required label processing load, it is included as another key metric. The label-setup rate $S_R$ is defined as the average number of label-setup operations in the Fast Route Setup node per unit time and is given by

$S_R = \mu_F \sum_{k=0}^{N} \sum_{i=m}^{n} P_{i,0,k,0,0,1}\,\pi_k$ (24)

Fig. 3. Detailed state-transition diagram in state $k$ of the aggregated state-transition diagram.

We also include the fast path bandwidth utilization so that one can predict the ratio of wasted bandwidth. Here, the time periods considered to be "reserved" by an aggregated stream include the packet transmission time of the Fast Route node (with time ratio $B_f$), the idle period waiting for the label-release timeout (with time ratio $B_t$), and the time required to release a label (with time ratio $B_r$). However, only the period during which packets are transmitted by the Fast Route node is considered effectively utilized. Hence, the fast path bandwidth utilization $U_F$ is given by $U_F = B_f/(B_f + B_t + B_r)$, where $B_f = \sum_{k=0}^{N}\sum_{i=0}^{n}\sum_{j=1}^{t} P_{i,j,k,0,0,0}\,\pi_k$, $B_t = \sum_{k=0}^{N}\sum_{i=0}^{n} P_{i,0,k,1,0,0}\,\pi_k$, and $B_r = \sum_{k=0}^{N}\sum_{i=0}^{n} P_{i,0,k,0,1,0}\,\pi_k$.
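Once the joint probabilities have been solved, (22)-(24) and $U_F$ reduce to weighted sums over the state space. A hedged sketch follows, assuming the solved distribution is held in a dict keyed by the state vector $(i, j, k, \delta_T, \delta_R, n_T)$ together with the aggregated distribution $\pi_k$; both containers and the function name are ours.

def metrics(P, pi, mu_D, mu_O, mu_F, n, t, m, N):
    """Compute T_d, T_f (22)-(23), S_R (24), and U_F from the solved chain.

    P:  dict mapping (i, j, k, dT, dR, nT) -> steady-state probability
    pi: list of aggregated-state probabilities pi[k], k = 0..N
    """
    g = lambda *s: P.get(s, 0.0)   # missing states carry zero probability
    T_d = mu_D * sum(
        (sum(g(i, j, k, 0, 0, 0) for i in range(1, n + 1) for j in range(1, t + 1))
         + sum(g(i, 0, k, 1, 0, 0) + g(i, 0, k, 0, 1, 0) + g(i, 0, k, 0, 0, 1)
               for i in range(1, n + 1))) * pi[k]
        for k in range(N + 1))
    T_f = mu_O * sum(
        g(i, j, k, 0, 0, 0) * pi[k]
        for k in range(N + 1) for i in range(n + 1) for j in range(1, t + 1))
    S_R = mu_F * sum(
        g(i, 0, k, 0, 0, 1) * pi[k]
        for k in range(N + 1) for i in range(m, n + 1))
    B_f = T_f / mu_O                                   # fast-path transmission ratio
    B_t = sum(g(i, 0, k, 1, 0, 0) * pi[k]              # idle-timer ratio
              for k in range(N + 1) for i in range(n + 1))
    B_r = sum(g(i, 0, k, 0, 1, 0) * pi[k]              # label-release ratio
              for k in range(N + 1) for i in range(n + 1))
    U_F = B_f / (B_f + B_t + B_r) if (B_f + B_t + B_r) > 0 else 0.0
    return T_d, T_f, S_R, U_F

# Example with a trivial one-state distribution (illustration only):
# print(metrics({(0, 0, 0, 0, 0, 1): 1.0}, [1.0], 1.0, 1.0, 1.0, 1, 1, 1, 0))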

V. NUMERICAL EXAMPLES

In this section, we demonstrate the applicability of the queueing model and present analytical and simulation results for the proposed generic GMPLS switch. We also illustrate the trade-off among key system parameters. Throughout this section, we set the number of IPPs ($N$) to 5, the average silence interval ($1/\alpha$) to 0.2 sec, the average burst length ($1/\beta$) to 0.8 sec, the average IP packet size to 512 bytes, the fast path capacity to 150 Mbps, the default path capacity to 100 Mbps, the average label-release latency ($1/\mu_S$) to 0.2 ms, the buffer size of the Default Route node ($n$) to 50 packets, and the buffer size of the Fast Route node ($t$) to 30 packets. The normalized offered load is defined as $N\lambda\alpha/((\alpha+\beta)\mu_D)$.

Fig. 4. Label-setup rate as a function of normalized offered load with $T_{rel}$ = 1 ms and $m$ = 3 under different $T_{LSP}$.

Fig. 5. Label-setup rate as a function of normalized offered load with $T_{rel}$ = 50 ms and $m$ = 3 under different $T_{LSP}$.
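Translating this parameter setting into the model's rates is mechanical but worth making explicit. The short sketch below (variable names ours, load formula as reconstructed above) computes the rates and the per-IPP on-state arrival rate $\lambda$ corresponding to unit normalized load.

# Section V parameters translated into model rates.
N = 5
alpha = 1 / 0.2               # 1/alpha = 0.2 s mean silence  -> alpha = 5.0 /s
beta = 1 / 0.8                # 1/beta  = 0.8 s mean burst    -> beta = 1.25 /s
bits_per_pkt = 512 * 8        # average IP packet size: 512 bytes
mu_D = 100e6 / bits_per_pkt   # default path 100 Mbps -> ~24414 pkt/s
mu_O = 150e6 / bits_per_pkt   # fast path 150 Mbps    -> ~36621 pkt/s
mu_S = 1 / 0.2e-3             # label-release latency 0.2 ms -> 5000 /s

def normalized_load(lam: float) -> float:
    """Normalized offered load N * lam * alpha / ((alpha + beta) * mu_D)."""
    return N * lam * alpha / ((alpha + beta) * mu_D)

# Example: the per-IPP on-state packet rate lam that yields unit load.
lam_unit = (alpha + beta) * mu_D / (N * alpha)   # ~6104 pkt/s
print(normalized_load(lam_unit))                 # -> 1.0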

From Figs. 4 and 5, one can observe that the longer the label-setup latency ($T_{LSP}$), the lower the label-setup rate. In other words, when it takes a long time to set up an LSP due to a long $T_{LSP}$, more traffic will go through the default path. Hence, a switch with a very large $T_{LSP}$ should not be considered a topology-driven system, because most traffic still goes through its default path. When the traffic load becomes heavy, we also notice an increase in LSP lifetime. As a result, the label-setup rate decreases as traffic increases.

Although the total throughput is almost the same under different $T_{LSP}$ and $T_{rel}$, the difference exists in the behavior of the default path and the fast path. With our model, one can determine how much traffic is served by the fast path. We plot the throughput as a function of the normalized offered load with $m = 3$ under different $T_{LSP}$ and $T_{rel}$ in Fig. 6 and Fig. 7. From these two figures, one can find that the default path throughput increases with the traffic load if the total traffic load is light. When most traffic starts to be switched to the fast path, the default path throughput decreases. Additionally, one can observe that most traffic goes through the fast path with larger $T_{rel}$ if the load is heavy enough, even under different $T_{LSP}$. Another phenomenon is that most traffic goes through the default path with smaller $T_{rel}$ and large $T_{LSP}$ in the range of small to medium traffic conditions. This result shows that a larger $T_{rel}$ is favorable for a longer $T_{LSP}$.

Fig. 6. Throughput as a function of normalized offered load with $T_{LSP}$ = 1 ms and $m$ = 3 under different $T_{rel}$.

Fig. 7. Throughput as a function of normalized offered load with $T_{LSP}$ = 100 ms and $m$ = 3 under different $T_{rel}$.

The ratio of wasted bandwidth can be predicted by the fast path bandwidth utilization. In Fig. 8, we demonstrate that the fast path bandwidth utilization for smaller $T_{rel}$ is always higher than that for larger $T_{rel}$, which is the key feature of a data-driven GMPLS switch. Although the fast path bandwidth utilization becomes higher for a smaller value of $T_{rel}$, the label-setup rate also increases, which is a trade-off.

Fig. 8. Fast path bandwidth utilization as a function of normalized offered load with $T_{LSP}$ = 10 ms and $m$ = 3 under different $T_{rel}$.

From the above results, one can see that when $T_{rel}$ is small, the system behavior is traffic-driven oriented. However, when $T_{rel}$ is sufficiently large, the system behavior approaches that of a topology-driven GMPLS switch.

VI. CONCLUSIONS

A generic GMPLS switching architecture with flush capability is presented. With this architecture and its flush mechanism, the GMPLS out-of-sequence problem is relieved. According to the presented queueing model, one can effectively fine tune the resource utilization level or the label processing load. In addition, the trade-off between the fast path bandwidth utilization and the label-setup rate can be observed. We conclude that an appropriate value of the label-release timer $T_{rel}$ can be carefully selected to meet both requirements. For a network with a large round-trip time and sufficient resources in the fast path, if one uses a small value of $T_{rel}$, most traffic will go through the default path instead of the fast path; therefore, choosing a large value of $T_{rel}$ is preferred. For a network with a small round-trip delay and insufficient resources in the fast path, it is adequate to use the system with a small value of $T_{rel}$.

Our study shows that although the GMPLS switch does operate efficiently in both local and wide areas, its best performance can be achieved only when its control plane parameters are appropriately tuned.

REFERENCES

[1] E. Rosen, A. Viswanathan, and R. Callon, "Multiprotocol Label Switching Architecture," RFC 3031, Jan. 2001.

[2] D. Awduche, Y. Rekhter, J. Drake, and R. Coltun, "Multi-Protocol Lambda Switching: Combining MPLS Traffic Engineering Control with Optical Crossconnects," IETF Internet draft-awduche-mpls-te-optical-02.txt, July 2000.

[3] P. Ashwood-Smith et al., "Generalized Multi-Protocol Label Switching (GMPLS) Architecture," IETF Internet draft-many-gmpls-architecture-00.txt, Feb. 2001.

[4] A. Banerjee et al., "Generalized Multiprotocol Label Switching: An Overview of Routing and Management Enhancements," IEEE Commun. Mag., pp. 2–8, Jan. 2001.

[5] J. Sadler et al., "Generalized Switch Management Protocol (GSMP)," IETF Internet draft-sadler-gsmp-tdm-labels-00.txt, Feb. 2001.

[6] P. Ashwood-Smith et al., "Generalized MPLS - Signaling Functional Description," IETF Internet draft-ietf-mpls-generalized-signaling-02.txt, March 2001.

[7] S. Nakazawa, K. Kawahara, S. Yamaguchi, and Y. Oie, "Performance Comparison with Layer 3 Switches in Case of Flow- and Topology-Driven Connection Setup," IEEE GLOBECOM '99, Rio de Janeiro, Brazil, pp. 79–86.

[8] S. Nakazawa, H. Tamura, K. Kawahara, and Y. Oie, "Performance Analysis of IP Datagram Transmission Delay in MPLS: Impact of Both the Number and the Bandwidth of LSPs of Layer 2," IEEE ICC 2001, Helsinki, Finland, pp. 1006–1010.

[9] L.-C. Kao and Z. Tsai, "Performance Analysis of Flow-Based Label Switching: the Single IP Flow Model," IEICE Trans. Commun., vol. E83-B, no. 7, pp. 1417–1425, July 2000.

[10] L.-C. Kao and Z. Tsai, "Steady-State Performance Analysis of MPLS Label Switching," IEICE Trans. Commun., vol. E84-B, no. 8, pp. 2279–2291, Aug. 2001.

