
Performance Analysis of a Generic GMPLS Switching Architecture under ON–OFF Traffic Sources





Ling-Chih Kao and Zsehong Tsai

Department of Electrical Engineering, National Taiwan University, Taipei, TAIWAN

d6942004@yahoo.com.tw, ztsai@cc.ee.ntu.edu.tw

Abstract. This paper proposes a queueing model, including the control plane and the switching buffer mechanism of a GMPLS switch, for evaluating the performance of a GMPLS switching architecture. With the proposed model, one can select appropriate parameters for the label-setup policy and the label-release policy to match the traffic load and network environment. Key performance metrics, including the throughput, the label-setup rate, and the fast path bandwidth utilization, can be obtained via the analytical results. Numerical results and simulations are used to verify the accuracy of our proposed queueing model. The trade-off among these performance metrics can be observed as well.

1 Introduction

In recent years, it has been a trend to provide a wide range of data services over the same backbone network via newly adopted technologies such as Multi-Protocol Label Switching (MPLS) [1] and Multi-Protocol Lambda Switching (MPλS) [2] to overcome the scalability and complexity issues. Many other layer-3 switching technologies [3] have also been specially designed to solve the dilemma of routing table scalability and overload of routing processing. But these proposed technologies may not be compatible with one another. In order to integrate these techniques, the Internet Engineering Task Force (IETF) proposed MPLS. The basic concept of MPLS is that packet forwarding is based on a fixed short-length label instead of a longest-matching search, which can shorten packet transit time. There are two popular approaches to connection setup in MPLS: the traffic-driven method triggers label setup according to traffic demand, while the topology-driven method is based on routing information. In addition, by way of the Constraint-based Routed Label Distribution Protocol (CR-LDP) [4] or the Resource ReSerVation Protocol (RSVP) [5], it is possible to include QoS mechanisms in MPLS. When it is necessary to combine the traffic engineering aspect of MPLS with the bandwidth provisioning capability of DWDM, MPλS [2] is found to play a major role. Meanwhile, considering that there are many different underlying data-link and physical layer technologies, Generalized Multi-Protocol Label Switching (GMPLS) [6,7] is thus suggested to extend MPLS to encompass time-division, wavelength (lambda), and spatial switching.
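The forwarding difference mentioned above can be shown with a toy example; the table contents and function names below are invented for illustration. A label-switched router performs a single exact-match lookup on a fixed short-length label, whereas conventional IP forwarding must find the longest matching prefix among all candidates.

```python
import ipaddress

# Invented tables for illustration only.
label_table = {17: ("out_if1", 42), 99: ("out_if2", 7)}   # in-label -> (port, out-label)
prefix_table = {"10.0.0.0/8": "if1", "10.1.0.0/16": "if2"}

def forward_mpls(label):
    """One exact-match (hash) lookup on a fixed short-length label."""
    return label_table[label]

def forward_ip(addr):
    """Longest-prefix match: every candidate prefix must be examined."""
    ip = ipaddress.ip_address(addr)
    matches = [p for p in prefix_table if ip in ipaddress.ip_network(p)]
    if not matches:
        return None
    longest = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return prefix_table[longest]
```

Here forward_ip("10.1.2.3") must compare against both prefixes and pick the /16, while forward_mpls(17) needs no search at all.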

In order to control different switching operations under GMPLS, the labels defined for the various switches are required to be of different formats [8], and the related signaling and routing protocols also need modifying [6,9]. However, the key operations of the control plane of these various switching protocol suites are found to be similar. Although the basic functions of the GMPLS control plane have been discussed or defined in the literature, operation procedures for efficient resource control are still being defined and their impact on performance is still being investigated. Some papers [10]–[12] proposed performance queueing models for MPLS or GMPLS, but most detailed operations of the GMPLS control plane are not well covered. At the same time, it is often found that a sophisticated queueing model which can evaluate the performance of a GMPLS switching network is not easy to build. Therefore, a model embracing the detailed operations of the GMPLS control plane is strongly needed.

This work was supported by National Science Council, R.O.C., under Grant NSC 90-2213-E-002-076, and by Ministry of Education, R.O.C., under Grant 89E-FA06-2-4-7.

I. Chong (Ed.): ICOIN 2002, LNCS 2343, pp. 407–418, 2002.

In this paper, we develop a queueing model to characterize the behavior of most operations of a GMPLS switch. The aggregation of IP streams can save the usage of labels (or lambdas) and thus alleviate the processing load of the GMPLS controller. The label-setup policy is based on the accumulated packets in the default path buffer. The label-release policy is controlled by an adjustable label-release timer. Efficient resource allocation is thus achieved by fine-tuning the flexible label-setup policy and the adjustable label-release timer. Although our queueing model is traffic-driven oriented, the behavior of a topology-driven system can be approximately obtained via an extreme case of this traffic-driven model. Some key performance measures, such as the throughput, the label-setup rate, and the path bandwidth utilization, can all be derived in the proposed model.

The remainder of the paper is organized as follows. In Section 2, the queueing model for a GMPLS switch is described. In Section 3, the analysis procedure is proposed. In Section 4, three performance measures are derived. Numerical experiments and simulation results are discussed in Section 5. Conclusions are drawn in Section 6.

2 Queueing Model

In this section, a queueing model characterizing the behavior of an aggregated IP stream passing through a GMPLS switch is proposed. The number of labels is assumed to be sufficient for all incoming flows. The bandwidth allocated to each label (or a flow) is fixed. Therefore, we can focus on investigating the steady-state performance of a GMPLS switch without label contentions. For simplicity, we focus on the case that only one single flow is included in this queueing model. The results can then be easily extended to the general case. Regarding the traffic source, an aggregated stream (equivalently, a flow) is assumed to consist of N homogeneous IPPs (Interrupted Poisson Processes), where each IPP has an exponentially distributed off (on) duration with mean 1/α (1/β), and λ is the arrival rate in the on state. Note that this traffic source model includes four parameters to match most Markovian traffic patterns.
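As a quick sanity check on this source model, the superposition of N IPPs can be simulated directly. The sketch below is our own (all names are hypothetical); it uses the numerical values later given in Section 5, where 1/α is the mean silence (off) interval and 1/β the mean burst (on) length, so each source is on a fraction α/(α+β) of the time.

```python
import random

def ipp_superposition_arrivals(n_sources=5, alpha=5.0, beta=1.25,
                               lam=100.0, t_end=100.0, seed=1):
    """Count arrivals generated by N independent IPPs over [0, t_end].

    Each source alternates exponential off periods (mean 1/alpha) and on
    periods (mean 1/beta); while on, it emits Poisson arrivals at rate lam.
    """
    rng = random.Random(seed)
    arrivals = 0
    for _ in range(n_sources):
        t, on = 0.0, False                      # start every source off
        while t < t_end:
            if on:
                dur = rng.expovariate(beta)     # on-period length
                s = rng.expovariate(lam)        # Poisson arrivals within it
                while s < min(dur, t_end - t):
                    arrivals += 1
                    s += rng.expovariate(lam)
                t += dur
            else:
                t += rng.expovariate(alpha)     # silent period
            on = not on
    return arrivals
```

With these defaults the expected count is N · t_end · λ · α/(α+β) = 5 · 100 · 100 · 0.8 = 40000, and a run should land close to that value.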

The queueing model for a GMPLS switch (the GMPLS queueing model) is shown in Fig. 1. The solid lines in Fig. 1 denote the paths that packets go through, and the dotted lines are the signaling paths. There are three major functional blocks in this model: the GMPLS controller, the default route module, and the fast route module. The functions of the control plane are included in the GMPLS controller. The default route module stands for the IP layer (layer 3) and the data-link layer (layer 2) on the default path. The fast route data link is represented by the fast route module. In this GMPLS architecture, the label is used as a generic term. When GMPLS is used to control TDM such as SONET, time slots are labels. Each frequency (or λ) corresponds to a label when FDM such as WDM is taken as the underlying switching technology. When the switching mechanism is space-division-multiplexing based, labels are referred to as ports. Six queueing nodes are included in this model: Default Route, Fast Route, Label Pool, Fast Route Setup, Label Release, and Label Release Timer. Traffic served by the traditional routing protocol is served by the Default Route node, whose buffer stores the packets which cannot be processed in time by the IP processor on the default path. Meanwhile, the Fast Route node serves packets whose stream has been assigned a label. The fast path buffer stores the packets which cannot be processed in time by the fast route. The Label Pool stores the labels which represent the availability of the fast path; the fast path is available if there is a label in the Label Pool. The Fast Route Setup node represents the time required to set up an LSP (Label Switched Path) for an aggregated stream. The Label Release node represents the time required to release a label. The Label Release Timer node represents a label-release timer. This timer indicates the maximum length of the idle period of an aggregated stream before its label is released for other use. Once an aggregated stream is granted a label, it is served with its own Fast Route node and uses its own label-release mechanism. As a result, this model is used to examine the protocol efficiency instead of label competitions. The assumed details of the label operations over this queueing model are described as follows.

When an aggregated IP stream arrives, two possible operations may occur. In the first case, the incoming traffic has been assigned a fast path. Then its packets will be directly sent to the Fast Route node. In the second case, the incoming traffic has not been assigned a fast path. In this situation, all the packets are continuously served by the Default Route node (via the default path) during the label-setup operations. If the accumulated packets in the buffer of the Default Route node have not reached the triggering threshold (m), the stream is served by the Default Route node through the traditional IP routing protocol. However, if the accumulated packets in the buffer of the Default Route node reach the triggering threshold, the flow classifier & label-setup policy manager will trigger the default route module to send a setup request packet through the Fast Route Setup node to its downstream LSRs (Label Switch Routers), up to the egress LSR, to negotiate an appropriate LSP according to the current network resources. The GMPLS controller will set up a path, called the fast path, for this stream and assign it a label.

The label manager maintains an activity timer to control the label-release operation of the flow. The label is released only if the activity timer indicates that the maximum allowed inactive duration has been reached. Incoming packets will be blocked if the accumulated packets in the Default Route node exceed the buffer size of the Default Route node while the stream has not been assigned a fast path, or if the accumulated packets in the Fast Route node exceed the buffer size of the Fast Route node when the stream has been assigned a fast path.
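The setup and release rules above can be captured in a small state machine. The sketch below is our own (class and method names are hypothetical); it encodes only the two rules stated in the text: setup is triggered when the default-path backlog reaches the threshold m, and the label is released after the flow stays idle for Trel.

```python
class LabelPolicy:
    """Sketch of the label-setup/label-release control logic.

    The paper specifies the rules, not this interface: setup is triggered
    when the default-path backlog reaches threshold m, and the label is
    released when the flow stays idle for t_rel seconds.
    """
    def __init__(self, m, t_rel):
        self.m = m                 # triggering threshold (packets)
        self.t_rel = t_rel         # label-release timer (seconds)
        self.has_label = False
        self.idle_since = None

    def on_packet(self, backlog):
        """Return the routing action for an arriving packet."""
        self.idle_since = None     # activity cancels the release timer
        if self.has_label:
            return "fast_path"
        if backlog >= self.m:
            return "default_path+trigger_setup"
        return "default_path"

    def on_setup_complete(self):
        self.has_label = True      # LSP established, label assigned

    def on_idle(self, now):
        """Called while the flow has no packets; runs the release timer."""
        if not self.has_label:
            return
        if self.idle_since is None:
            self.idle_since = now
        elif now - self.idle_since >= self.t_rel:
            self.has_label = False # timer expired: release the label
            self.idle_since = None
```

For example, with m = 3 the third backlogged packet triggers setup, and after setup completes an idle gap longer than t_rel returns the label to the pool.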

Fig. 1. GMPLS queueing model. (The figure shows the traffic sources feeding the default route module, containing the Default_Route node with service rate µD, and the fast route module, containing the Fast_Route node with service rate µO, together with the GMPLS controller, which contains the flow classifier & label-setup policy manager, the Fast_Route_Setup node (µF), the Label_Pool, the Label_Release node (µS), and the Label_Release_Timer node (µP). Solid lines denote packet paths; dotted lines denote signaling.)

3 Steady-State Analysis

We here propose a procedure to calculate the steady-state distribution of the GMPLS switch model shown in Fig. 1. We adopt the following notation:

1/µ: average packet length (bits per packet).

CD: default path capacity (bps).

CO: fast path capacity (bps).

µO = µCO: service rate of the Fast Route node.

Trel = 1/µP: the average sojourn time of the Label Release Timer node, where µP is the service rate of the Label Release Timer node.

µS: service rate of the Label Release node.

µF: service rate of the Fast Route Setup node.

TLSP: label-setup latency.

µD = µCD: service rate of the Default Route node.

n: buffer size of the Default Route node.

t: buffer size of the Fast Route node.

nI: the number of packets in the Default Route node.

nS: the number of packets in the Fast Route node.

nT: the number of labels in the Label Pool (nT = 1 if the fast path is available; nT = 0 otherwise).

nO: the number of IPPs in the on state.

πT: the state of the Label Release Timer node (πT = 0 if the Label Release Timer node is idle; πT = 1 otherwise).

πR: the state of the Label Release node (πR = 0 if the Label Release node is idle; πR = 1 otherwise).

m: the triggering threshold, which represents the minimum number of accumulated packets that will trigger the label-setup operation.

In order to avoid the state-size explosion problem, we employ the technique of state-aggregation approximation. The aggregated system state of the GMPLS queueing model is defined as the number of IPPs in the on state, and we use πk to denote its steady-state probability, where k (ranging from 0 to N) is the aggregated state. The aggregated transition diagram is shown in Fig. 2. We then employ the state vector (a, b, c, d, e, f) to represent the internal state of the aggregated state k of the GMPLS queueing model when nI = a, nS = b, nO = c, πT = d, πR = e, and nT = f. The behavior of the GMPLS queueing model can then be predicted by a Markov chain.

Fig. 2. Aggregated state-transition diagram of the GMPLS queueing model (a birth–death chain on k = 0, 1, …, N, with up-rate (N − k)α from state k to k + 1 and down-rate kβ from state k to k − 1).
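Because the aggregated chain in Fig. 2 is a birth–death process with up-rate (N − k)α and down-rate kβ (reading α as the off-to-on rate, consistent with the Section 5 values where 1/α is the silence interval), detailed balance gives πk in closed form: the number of on sources is binomially distributed with per-source on-probability α/(α+β). A sketch with our own naming:

```python
from math import comb

def aggregated_state_probs(n_sources, alpha, beta):
    """Steady-state pi_k of the Fig. 2 birth-death chain.

    Detailed balance, pi_{k+1} = pi_k * (N - k) * alpha / ((k + 1) * beta),
    solves to a binomial distribution with p_on = alpha / (alpha + beta).
    """
    p_on = alpha / (alpha + beta)
    return [comb(n_sources, k) * p_on ** k * (1 - p_on) ** (n_sources - k)
            for k in range(n_sources + 1)]

pi = aggregated_state_probs(5, alpha=5.0, beta=1.25)   # Section 5 values
```

With the Section 5 parameters the probabilities sum to one and the mean number of on sources is 5 × 0.8 = 4.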

In this model, the service time is assumed to be exponentially distributed at all nodes. At the same time, the silence interval, the burst length, and the IP packet size are also assumed to be exponentially distributed. According to the above definitions, the global balance equations in state k of the aggregated state-transition diagram are listed as follows.

Pn,t,k,0,0,0 = (1 − µO − µD)Pn,t,k,0,0,0 + kλPn,t−1,k,0,0,0   (1)

Pn−i,t,k,0,0,0 = (1 − µO − µD)Pn−i,t,k,0,0,0 + µDPn−i+1,t,k,0,0,0 + kλPn−i,t−1,k,0,0,0,  1 ≤ i ≤ n − 1   (2)

P0,t,k,0,0,0 = (1 − µO)P0,t,k,0,0,0 + µDP1,t,k,0,0,0 + kλP0,t−1,k,0,0,0   (3)

Pn−i,t−j,k,0,0,0 = (1 − µO − µD − kλ)Pn−i,t−j,k,0,0,0 + µDPn−i+1,t−j,k,0,0,0 + kλPn−i,t−j−1,k,0,0,0 + µOPn−i,t−j+1,k,0,0,0,  1 ≤ i ≤ n − 1, 1 ≤ j ≤ t − 2   (4)

Pn,t−j,k,0,0,0 = (1 − µO − µD − kλ)Pn,t−j,k,0,0,0 + µOPn,t−j+1,k,0,0,0 + kλPn,t−j−1,k,0,0,0,  1 ≤ j ≤ t − 2   (5)

P0,t−j,k,0,0,0 = (1 − µO − kλ)P0,t−j,k,0,0,0 + µOP0,t−j+1,k,0,0,0 + kλP0,t−j−1,k,0,0,0 + µDP1,t−j,k,0,0,0,  1 ≤ j ≤ t − 2   (6)

Fig. 3. Detailed state-transition diagram in aggregated state k.

Pn,1,k,0,0,0 = (1 − µO − µD − kλ)Pn,1,k,0,0,0 + µOPn,2,k,0,0,0 + kλPn,0,k,1,0,0   (7)

Pn−i,1,k,0,0,0 = (1 − µO − µD − kλ)Pn−i,1,k,0,0,0 + µOPn−i,2,k,0,0,0 + kλPn−i,0,k,1,0,0 + µDPn−i+1,1,k,0,0,0,  1 ≤ i ≤ n − 1   (8)

P0,1,k,0,0,0 = (1 − µO − kλ)P0,1,k,0,0,0 + µOP0,2,k,0,0,0 + kλP0,0,k,1,0,0 + µDP1,1,k,0,0,0   (9)

Pn,0,k,1,0,0 = (1 − µD − µP − kλ)Pn,0,k,1,0,0 + µOPn,1,k,0,0,0 + µFPn,0,k,0,0,1   (10)

Pn−i,0,k,1,0,0 = (1 − µD − µP − kλ)Pn−i,0,k,1,0,0 + µOPn−i,1,k,0,0,0 + µDPn−i+1,0,k,1,0,0 + µFPn−i,0,k,0,0,1,  1 ≤ i ≤ n − m   (11)

Pn−i,0,k,1,0,0 = (1 − µD − µP − kλ)Pn−i,0,k,1,0,0 + µOPn−i,1,k,0,0,0 + µDPn−i+1,0,k,1,0,0,  n − m + 1 ≤ i ≤ n − 1   (12)

P0,0,k,1,0,0 = (1 − µP − kλ)P0,0,k,1,0,0 + µOP0,1,k,0,0,0 + µDP1,0,k,1,0,0   (13)

Pn,0,k,0,1,0 = (1 − µS − µD)Pn,0,k,0,1,0 + µPPn,0,k,1,0,0 + kλPn−1,0,k,0,1,0   (14)

Pn−i,0,k,0,1,0 = (1 − µS − µD − kλ)Pn−i,0,k,0,1,0 + µPPn−i,0,k,1,0,0 + kλPn−i−1,0,k,0,1,0 + µDPn−i+1,0,k,0,1,0,  1 ≤ i ≤ n − 1   (15)

P0,0,k,0,1,0 = (1 − µS − kλ)P0,0,k,0,1,0 + µPP0,0,k,1,0,0 + µDP1,0,k,0,1,0   (16)

Pn,0,k,0,0,1 = (1 − µF − µD)Pn,0,k,0,0,1 + µSPn,0,k,0,1,0 + kλPn−1,0,k,0,0,1   (17)

Pn−i,0,k,0,0,1 = (1 − µF − kλ − µD)Pn−i,0,k,0,0,1 + µSPn−i,0,k,0,1,0 + kλPn−i−1,0,k,0,0,1 + µDPn−i+1,0,k,0,0,1,  1 ≤ i ≤ n − m   (18)

Pn−i,0,k,0,0,1 = (1 − µD − kλ)Pn−i,0,k,0,0,1 + µSPn−i,0,k,0,1,0 + kλPn−i−1,0,k,0,0,1 + µDPn−i+1,0,k,0,0,1,  n − m + 1 ≤ i ≤ n − 1   (19)

P0,0,k,0,0,1 = (1 − kλ)P0,0,k,0,0,1 + µSP0,0,k,0,1,0 + µDP1,0,k,0,0,1   (20)

where Pa,b,c,d,e,f is the steady-state probability of the state vector (a, b, c, d, e, f). The detailed state-transition diagram corresponding to equations (1)–(20) is shown in Fig. 3.
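The balance equations above are written in uniformized form; numerically, they amount to solving π Q = 0 together with Σ π = 1 for the generator Q of the detailed chain. The pure-Python solver below is our own generic sketch, demonstrated on a two-state on/off example rather than the full GMPLS state space.

```python
def stationary_distribution(Q):
    """Solve pi Q = 0, sum(pi) = 1 by Gaussian elimination.

    One balance equation is redundant, so the last row of Q^T is
    replaced by the normalization constraint.
    """
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # A = Q^T
    b = [0.0] * n
    A[n - 1] = [1.0] * n                                 # sum(pi) = 1
    b[n - 1] = 1.0
    for col in range(n):                                 # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):                       # back substitution
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

# Two-state on/off example: off->on rate 5, on->off rate 1.25.
Q = [[-5.0, 5.0], [1.25, -1.25]]
```

Here stationary_distribution(Q) returns approximately [0.2, 0.8], the off/on split of a single source; the same routine applies once a generator is assembled from the full set of balance equations.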

4 Performance Measures

One key performance metric is the throughput. We define Td and Tf as the average throughput at the Default Route node and the Fast Route node, respectively. Ttotal = Td + Tf is the total throughput. Their formulas are given by

Td = µD { Σ_{k=0}^{N} Σ_{i=1}^{n} Σ_{j=1}^{t} Pi,j,k,0,0,0 πk + Σ_{k=0}^{N} Σ_{i=1}^{n} (Pi,0,k,1,0,0 + Pi,0,k,0,1,0 + Pi,0,k,0,0,1) πk }   (21)

Tf = µO Σ_{k=0}^{N} Σ_{i=0}^{n} Σ_{j=1}^{t} Pi,j,k,0,0,0 πk   (22)

Since the label-setup rate is proportional to the required label processing load, it is included as another key metric. The label-setup rate SR is defined as the average number of label-setup operations in the Fast Route Setup node per unit time and is given by

SR = µF Σ_{k=0}^{N} Σ_{i=m}^{n} Pi,0,k,0,0,1 πk   (23)

Regarding the path bandwidth utilization, we focus on predicting the ratio of wasted bandwidth on the fast path. For the fast path, the time periods considered to be “reserved” by an aggregated stream include the packet transmission time by the Fast Route node (with time ratio Bf), the idle period waiting for the label-release timeout (with time ratio Bt), and the time required to release a label (with time ratio Br). However, only the period during which packets are transmitted by the Fast Route node is considered effectively utilized. Hence, the fast path bandwidth utilization UF is given by UF = Bf / (Bf + Bt + Br), where Bf = Σ_{k=0}^{N} Σ_{i=0}^{n} Σ_{j=1}^{t} Pi,j,k,0,0,0 πk, Bt = Σ_{k=0}^{N} Σ_{i=0}^{n} Pi,0,k,1,0,0 πk, and Br = Σ_{k=0}^{N} Σ_{i=0}^{n} Pi,0,k,0,1,0 πk.
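Given the joint probabilities, equations (21)–(23) and UF are straightforward sums. The sketch below uses a dict keyed by the state vector (i, j, k, πT, πR, nT); this interface is our own, not from the paper.

```python
def performance_metrics(P, pi, mu_D, mu_O, mu_F, n, t, N, m):
    """Evaluate Td (21), Tf (22), SR (23), and U_F from steady-state probs.

    P[(i, j, k, d, e, f)] holds the internal-state probability within
    aggregated state k, and pi[k] the aggregated-state probability.
    """
    g = lambda *s: P.get(s, 0.0)       # missing states have probability 0
    ks = range(N + 1)
    T_d = mu_D * (sum(g(i, j, k, 0, 0, 0) * pi[k] for k in ks
                      for i in range(1, n + 1) for j in range(1, t + 1))
                  + sum((g(i, 0, k, 1, 0, 0) + g(i, 0, k, 0, 1, 0)
                         + g(i, 0, k, 0, 0, 1)) * pi[k]
                        for k in ks for i in range(1, n + 1)))
    T_f = mu_O * sum(g(i, j, k, 0, 0, 0) * pi[k] for k in ks
                     for i in range(n + 1) for j in range(1, t + 1))
    S_R = mu_F * sum(g(i, 0, k, 0, 0, 1) * pi[k]
                     for k in ks for i in range(m, n + 1))
    B_f = T_f / mu_O                   # same sum as (22) without mu_O
    B_t = sum(g(i, 0, k, 1, 0, 0) * pi[k] for k in ks for i in range(n + 1))
    B_r = sum(g(i, 0, k, 0, 1, 0) * pi[k] for k in ks for i in range(n + 1))
    reserved = B_f + B_t + B_r
    U_F = B_f / reserved if reserved > 0 else 0.0
    return {"Td": T_d, "Tf": T_f, "SR": S_R, "UF": U_F}
```

The function only restates the sums above; the state probabilities themselves must come from solving the balance equations first.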

Fig. 4. Label-setup rate as a function of normalized offered load with Trel = 1 ms and m = 3 under different TLSP (analysis vs. simulation, TLSP = 1, 10, 100 ms).

5 Numerical Examples

In this section, we demonstrate the applicability of the queueing model and discuss the analytical and simulation results of the proposed generic GMPLS switch. We also illustrate the trade-off among key system parameters. Throughout this section, we set the number of IPPs (N) to 5, the average silence interval (1/α) to 0.2 sec, the average burst length (1/β) to 0.8 sec, the average IP packet size to 512 bytes, the fast path capacity to 150 Mbps, the default path capacity to 100 Mbps, the average label-release latency (1/µS) to 0.2 ms, the buffer size of the Default Route node (n) to 50 packets, and the buffer size of the Fast Route node (t) to 30 packets. The normalized offered load is defined as Nλ(α/(α+β))/µD.
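Under exponential packet lengths, the service rates implied by these settings follow directly; the arithmetic below is our own, and the Nλ factor in the load definition is our reading of the formula (an assumption).

```python
packet_bits = 512 * 8                    # average IP packet size in bits
mu_D = 100e6 / packet_bits               # default path: ~24414 packets/s
mu_O = 150e6 / packet_bits               # fast path:    ~36621 packets/s
alpha, beta, N = 1 / 0.2, 1 / 0.8, 5     # off rate, on rate, number of IPPs
p_on = alpha / (alpha + beta)            # per-IPP on-probability = 0.8

def normalized_load(lam):
    """Normalized offered load for per-source on-state arrival rate lam."""
    return N * lam * p_on / mu_D
```

For example, a normalized load of 1.0 then corresponds to λ = µD / (N · p_on) ≈ 6104 packets/s per source.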

From Figs. 4 and 5, one can observe that the longer the label-setup latency (TLSP), the lower the label-setup rate. In other words, when it takes a long time to set up an LSP due to a long TLSP, more traffic will go through the default path. Hence, a switch with a very large TLSP should not be considered a topology-driven system, because almost all of its traffic still goes through the default path. When the traffic load becomes large, we also notice the increase of the LSP lifetime. As a result, the label-setup rate decreases as traffic increases.

Fig. 5. Label-setup rate as a function of normalized offered load with Trel = 50 ms and m = 3 under different TLSP (analysis vs. simulation, TLSP = 1, 10, 100 ms).

Fig. 6. Throughput (total, fast route, and default route; analysis vs. simulation) as a function of normalized offered load with TLSP = 1 ms and m = 3 under different Trel (Trel = 1, 50 ms).

Although the total throughput is almost the same under different TLSP and label-release timer values (Trel), the behavior of the default path and the fast path differs. With our model, one can determine how much traffic is served by the fast path. We plot the throughput as a function of normalized offered load with m = 3 under different TLSP and Trel in Fig. 6 and Fig. 7. From these two figures, one can find that the default path throughput increases with the traffic load while the total traffic load is light. When most traffic starts to be switched to the fast path, the default path throughput decreases. Additionally, one can observe that most traffic will go through the fast path

Fig. 7. Throughput (total, fast route, and default route; analysis vs. simulation) as a function of normalized offered load with TLSP = 100 ms and m = 3 under different Trel (Trel = 1, 50 ms).

Fig. 8. Fast path bandwidth utilization as a function of normalized offered load with TLSP = 10 ms and m = 3 under different Trel (Trel = 1, 10, 50 ms).

when Trel is large, whereas most traffic goes through the default path with a small Trel and a large TLSP in the range of small-to-medium traffic conditions.

The ratio of wasted bandwidth can be predicted by the fast path bandwidth utilization. From Fig. 8, one can observe that the fast path bandwidth utilization for a smaller Trel is always higher than that for a larger Trel under arbitrary traffic load, as long as Trel is set sufficiently long. However, one can also observe that the fast path bandwidth utilization with a small Trel (such as 1 ms) is lower than that with a larger Trel (such as 10 ms and 50

ms) under light traffic load, because the time ratio of the idle period waiting for the label-release timeout (Bt) under a smaller value of Trel will increase under such load. When traffic is heavy, this phenomenon diminishes because Bt becomes small.

From the above results, one can see that when Trel is small, the system behavior is traffic-driven oriented. However, when Trel is extremely large, the system behavior approaches that of a topology-driven GMPLS switch.

6 Conclusions

A queueing model for a generic GMPLS switching architecture has been proposed. On the basis of the approximate analysis and simulation results, one can effectively fine-tune the resource utilization level or the label processing load. Furthermore, the trade-off between the fast path bandwidth utilization and the label-setup rate can be observed. Hence, an appropriate value of the label-release timer Trel can be carefully selected to meet both requirements. For a network with a large round-trip time and sufficient resources in the fast path, a small value of Trel sends most traffic through the default path instead of the fast path; therefore, choosing a large value of Trel is preferred. For a network with a small round-trip delay and insufficient resources in the fast path, it is adequate to use a small value of Trel.

Our study shows that the best performance of a GMPLS switch can be achieved only when its control plane parameters are appropriately tuned. In the future, we will investigate a mechanism to reduce the out-of-sequence problem due to dynamic path changes in GMPLS.

References

1. E. Rosen, A. Viswanathan, and R. Callon, “Multiprotocol Label Switching Architecture,” RFC 3031, Jan. 2001.

2. D. Awduche, Y. Rekhter, J. Drake, and R. Coltun, “Multi-Protocol Lambda Switching: Combining MPLS Traffic Engineering Control with Optical Crossconnects,” IETF Internet draft-awduche-mpls-te-optical-02.txt, July 2000.

3. C. Y. Metz, IP Switching: Protocols and Architectures, McGraw-Hill, 1999.

4. O. Aboul-Magd, et al., “Constraint-Based LSP Setup using LDP,” IETF Internet draft-ietf-mpls-cr-ldp-05.txt, Feb. 2001.

5. D. O. Awduche, et al., “RSVP-TE: Extensions to RSVP for LSP Tunnels,” IETF Internet draft-ietf-mpls-rsvp-lsp-tunnel-09.txt, Aug. 2001.

6. P. Ashwood-Smith, et al., “Generalized Multi-Protocol Label Switching (GMPLS) Architecture,” IETF Internet draft-ietf-ccamp-gmpls-architecture-00.txt, June 2001.

7. A. Banerjee, et al., “Generalized Multiprotocol Label Switching: An Overview of Routing and Management Enhancements,” IEEE Commun. Mag., pp. 2–8, Jan. 2001.

8. J. Sadler, et al., “Generalized Switch Management Protocol (gsmp),” IETF Internet draft-sadler-gsmp-tdm-labels-00.txt, Feb. 2001.

9. P. Ashwood-Smith, et al., “Generalized MPLS – Signaling Functional Description,” IETF Internet draft-ietf-mpls-generalized-signaling-05.txt, July 2001.

10. S. Nakazawa, K. Kawahara, S. Yamaguchi, and Y. Oie, “Performance Comparison with Layer 3 Switches in Case of Flow- and Topology-Driven Connection Setup,” IEEE GLOBECOM ’99, pp. 79–86, Rio de Janeiro, Brazil.

11. L.-C. Kao and Z. Tsai, “Performance Analysis of Flow-Based Label Switching: the Single IP Flow Model,” IEICE Trans. Commun., vol. E83-B, no. 7, pp. 1417–1425, July 2000.

12. L.-C. Kao and Z. Tsai, “Steady-State Performance Analysis of MPLS Label Switching,”
