

Chapter 2 Related Works

2.3. Optimal Queue Management Algorithm for ATM Networks

To support heterogeneous QoS differentiation such as delay bound and packet loss probability, it is necessary to jointly design time priority and loss priority schemes. In [42]-[48], relative differentiated service, one approach in the DiffServ framework, was proposed to provide heterogeneous QoS differentiation. In relative differentiated service, packets are grouped into multiple classes so that a packet belonging to a higher priority class receives better service than a packet belonging to a lower one. The proportional differentiation model was proposed to refine relative differentiated service with quantified QoS spacing. In the proportional differentiation model, performance metrics such as average delay and/or packet loss probability are controlled to be proportional to differentiation parameters chosen by network operators.

Assume that there are N service classes. The average experienced delay and suffered packet loss probability of the i-th service class, denoted by d_i and P_i, respectively, are spaced from those of the j-th service class as d_i/d_j = δ_i/δ_j and P_i/P_j = σ_i/σ_j, 1 ≤ i, j ≤ N. Here, δ_i and σ_i denote, respectively, the delay and packet loss probability differentiation parameters of the i-th service class. The work presented for relative differentiated service successfully controls the average delays and packet loss probabilities in a proportional sense. However, this service model is not practical for real-time traffic.

The reasons are stated as follows. 1) For real-time traffic, we believe it is more meaningful for a multiplexer to guarantee delay bounds rather than providing proportional average delays. 2) Since packets of real-time traffic have to be dropped whenever they violate their delay bound, buffer overflow can be eliminated by engineering the buffer space according to the delay bound of all real-time traffic and the service capability of the system. As a result, it is reasonable to assume that packet loss only results from deadline violation for a multiplexer dealing with real-time traffic.

In [49], the authors generalized the QoS scheme [11] and combined it with the earliest deadline first (EDF) service discipline to support multiple delay bound and cell loss probability requirements for real-time traffic flows in ATM networks, assuming cell loss only results from deadline violation.

This generalized version is named G-QoS. It was proved that the G-QoS scheme is optimal in the sense that it minimizes the effective bandwidth among all stable and generalized space-conserving schemes. A scheme is said to be generalized space-conserving if a packet is discarded only when it or some other packet buffered in the system will violate its delay bound. Moreover, effective bandwidth refers to the minimum bandwidth required to meet the QoS requirements of all traffic flows. Two drawbacks of the G-QoS scheme are that 1) it only handles fixed-length packets and 2) when batches of packets arrive, packet-by-packet processing requires high computational complexity.

The G-QoS scheme and its original version, the QoS scheme, are related to our work and will be reviewed in the following paragraphs.

It is assumed that there are K traffic flows, namely, f_1, f_2, …, and f_K, which are multiplexed into a system with transmission capability C and a single queue of size B. Consider f_k. Let P_k represent its packet loss probability requirement. The numbers of packets (or cells) arrived and discarded by time t are denoted by A_k(t) and L_k(t), respectively. The running packet loss probability P_k(t) is defined as P_k(t) = L_k(t) / A_k(t).

The QoS scheme [11]

The QoS scheme operates as follows. Assume that a packet arrives at time t and the buffer is fully occupied. Define D(t) as the set which contains the indices of traffic flows that have at least one packet in the buffer (excluding the one under transmission). Let f_j be the flow in D(t) such that P_j(t)/P_j ≤ P_k(t)/P_k for all k ∈ D(t), 1 ≤ k ≤ K. If the arriving packet belongs to f_j, then this packet is discarded. Otherwise, a packet which belongs to f_j is discarded and the arriving packet is admitted to the buffer. As was proved in [12], the QoS scheme is optimal in the sense that it achieves maximum bandwidth utilization among all stable and space-conserving schemes.
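The discard rule above can be sketched in a few lines of Python. The sketch is illustrative only: the `Flow` record and the sample numbers are assumptions, not part of [11]; it implements the rule of discarding from the flow with the smallest normalized running loss probability P_k(t)/P_k.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    loss_req: float   # packet loss probability requirement P_k
    arrived: int      # A_k(t): packets arrived by time t
    discarded: int    # L_k(t): packets discarded by time t

    def normalized_running_loss(self) -> float:
        # P_k(t) / P_k with P_k(t) = L_k(t) / A_k(t)
        if self.arrived == 0:
            return 0.0
        return (self.discarded / self.arrived) / self.loss_req

def choose_victim(flows_in_buffer):
    """Pick the flow in D(t) whose normalized running loss probability is
    smallest; one of its packets is discarded on buffer overflow."""
    return min(flows_in_buffer, key=Flow.normalized_running_loss)

flows = [Flow("f1", 1e-2, 1000, 8), Flow("f2", 1e-3, 1000, 1)]
victim = choose_victim(flows)   # f1: normalized loss 0.8 vs. 1.0 for f2
```

Discarding from the least-penalized flow keeps the normalized running loss probabilities of all flows close to each other, which is what drives the optimality result of [12].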

The G-QoS scheme [49]

In the G-QoS scheme, it was assumed that the buffer is sufficiently large so that there is no cell loss due to lack of buffer space. The EDF policy was adopted as its service discipline. Upon arrival, a cell is marked with its deadline, which is equal to its arrival time plus the requested delay bound. Then, the schedulability test of the EDF scheduler is performed according to the deadlines of the newly arrived cell and all the existing ones. The newly arrived cell is admitted into the buffer without discarding any cell if no cell will violate its delay bound, assuming that there is no more cell arrival in the future. Otherwise, a cell in the discarding set is lost. The discarding set S(t) is the maximum subset of the cells existing at time t, including the newly arrived one, such that the remaining cells in the system are schedulable if cell c is discarded, for any c ∈ S(t). Which cell is to be discarded is determined by the normalized running cell loss probabilities of the traffic flows having cells in the discarding set. Among these traffic flows, a cell which belongs to the traffic flow with the smallest normalized running cell loss probability is discarded. It was proved that the G-QoS scheme is optimal in the sense that it minimizes the effective bandwidth among all stable and generalized space-conserving schemes.
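The EDF schedulability test at the heart of G-QoS can be sketched as follows, assuming fixed-size cells with a common per-cell transmission time (an assumption of this sketch, matching the fixed-length-cell setting of ATM): transmit cells in increasing deadline order and check that every cell finishes by its deadline.

```python
def edf_schedulable(deadlines, cell_time, now=0.0):
    """EDF schedulability test: serve cells in increasing deadline order
    and check that each cell finishes no later than its deadline.
    cell_time is the common transmission time of a fixed-size cell."""
    finish = now
    for deadline in sorted(deadlines):
        finish += cell_time
        if finish > deadline:
            return False
    return True

# three cells, one time unit each: feasible with deadlines 1, 2, 3 ...
feasible = edf_schedulable([1.0, 2.0, 3.0], 1.0)    # True
# ... but not when two cells both expire at time 1
infeasible = edf_schedulable([1.0, 1.0, 3.0], 1.0)  # False
```

When the test fails, G-QoS does not necessarily drop the newly arrived cell; it drops a cell of the flow with the smallest normalized running loss probability among those with cells in the discarding set.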

Chapter 3

Resource Allocation for Real-Time Traffic in IEEE 802.11e WLANs

The Medium Access Control (MAC) of IEEE 802.11e defines a novel coordination function, namely, the Hybrid Coordination Function (HCF), which allocates Transmission Opportunities (TXOPs) to stations taking their quality of service (QoS) requirements into account. However, the reference TXOP allocation scheme of HCF Controlled Channel Access (HCCA), a contention-free channel access function of HCF, is only suitable for constant bit rate (CBR) traffic. For variable bit rate (VBR) traffic, serious packet loss may occur. In this chapter, we generalize the reference design with an efficient TXOP allocation algorithm, a multiplexing mechanism, and the associated admission control unit to guarantee QoS for VBR flows with different delay bound and packet loss probability requirements. We define equivalent flows and aggregate packet loss probability to take advantage of both intra-flow and inter-flow multiplexing gains so that high bandwidth efficiency can be achieved. Moreover, the concept of proportional-loss fair service scheduling is adopted to allocate the aggregate TXOP to individual flows. From numerical results obtained by computer simulations, we found that our proposed scheme meets QoS requirements and results in much higher bandwidth efficiency than previous algorithms.

3.1. System Model

The studied system consists of K QSTAs, called QSTA_1, QSTA_2, …, and QSTA_K, such that QSTA_i has n_i existing VBR flows. Transmission over the wireless medium is divided into service intervals (SIs), and the duration of each SI, denoted by SI, is a sub-multiple of the length of a beacon interval T_b. Moreover, an SI is further divided into a contention period and a contention-free period. The HCCA protocol is adopted during contention-free periods.

It is assumed that every QSTA has the capability to measure channel quality to determine a feasible transmission rate which yields a frame error rate sufficiently smaller than the packet loss probability requirements requested by all traffic flows attached to the QSTA. The relationship between measured channel quality and frame error rate can be found in [52].

The QoS requirements of traffic flows are specified with delay bound and packet loss probability. Every QSTA is equipped with a sufficiently large buffer so that a packet is dropped if and only if (iff) it violates the delay bound. It is assumed that there are I different packet loss probabilities, represented by P_1, P_2, …, and P_I with P_1 < P_2 < ... < P_I, and J possible delay bounds, denoted by D_1, D_2, …, and D_J with D_1 < D_2 < ... < D_J. We assume that D_1 = SI and D_j = m_j·SI for some integer m_j ≥ 1.

The HC allocates TXOPs to QSTAs based on a static and periodic schedule. As illustrated in Fig. 3.1, the TXOP for QSTA_k, denoted by TXOP_k, is allocated every SI and is of fixed length. The length of the scheduled SI is chosen to be the minimum of all requested delay bounds. Note that SI is updated if a new flow with delay bound smaller than those of existing ones is admitted, or if the existing flow with the smallest delay bound is disconnected and there is no other existing flow with the same delay bound. In this case, the TXOPs allocated to QSTAs have to be recalculated accordingly.
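The SI selection rule above amounts to taking the minimum requested delay bound; a minimal sketch (with hypothetical numeric bounds) is:

```python
def scheduled_si(delay_bounds):
    """The scheduled SI is the minimum of all requested delay bounds."""
    return min(delay_bounds)

bounds = [100, 50, 150]                 # requested delay bounds in ms, hypothetical
si = scheduled_si(bounds)               # 50
# admitting a flow with a tighter bound shrinks SI (all TXOPs must be recalculated)
si_admit = scheduled_si(bounds + [20])  # 20
# disconnecting the only flow with the smallest bound enlarges SI
si_leave = scheduled_si([100, 150])     # 100
```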


Fig. 3.1 Static and periodic schedule for 802.11e HCCA.

Consider the existing flows of a specific QSTA, say QSTA_a. The n_a flows attached to QSTA_a are classified into groups according to their QoS requirements. Let F_{i,j} represent the set which contains all traffic flows with packet loss probability P_i and delay bound D_j. Furthermore, let F_i = ∪_{1≤j≤J} F_{i,j} and F = ∪_{1≤i≤I} F_i. To reduce computational complexity, we assume that the traffic arrivals of different flows are independent Gaussian processes. Since the sum of independent Gaussian random variables remains Gaussian, the aggregated flow of all the flows in set F_{i,j} is Gaussian and will be represented by f_{i,j}. For convenience, we shall consider f_{i,j} as a single flow. A separate queue is maintained for each f_{i,j}, whose traffic is characterized by the number of packets which arrive in one SI and the packet size.

Our proposed scheme consists of an aggregate TXOP allocation algorithm, the proportional-loss fair service scheduler, and the associated admission control unit. As mentioned before, TXOP allocation and admission control are performed in HC and proportional-loss fair service scheduler is implemented in QSTAs. An overview of our proposed scheme is depicted in Fig. 3.2. Once again, let us consider QSTAa with na traffic flows, which are classified into IJ groups according to their QoS requirements.

Fig. 3.2 The system architecture of our proposed scheme.

3.2. Aggregate TXOP Allocation Algorithm

For ease of presentation, we first consider the case in which flows have identical packet loss probability requirements and then generalize the results to the case in which flows have different packet loss probability requirements.

Flows with identical packet loss probability requirements

It is assumed that flows request different delay bounds but identical packet loss probabilities.

Without loss of generality, assume that the packet loss probability requested by all flows is P_1. As a result, we have F = F_1. Further, for ease of description, we assume that there is at least one traffic flow with delay bound D_1.

Consider QSTA_a, which has n_a flows. The n_a flows are classified into J disjoint sets F_{1,1}, F_{1,2}, …, and F_{1,J} such that a flow belongs to F_{1,j} iff its delay bound is m_j·SI. Let f_{1,j}, 1 ≤ j ≤ J, with traffic arrival distribution N(μ_{1,j}, σ²_{1,j}), denote the aggregated flow of all the flows in set F_{1,j}. The first come first serve (FCFS) service discipline is adopted for packet transmission. The effective bandwidth c_{1,j} of flow f_{1,j} is computed to take advantage of intra-flow multiplexing gain. The effective bandwidth c_{1,j} is defined as the minimum TXOP allocated to flow f_{1,j} to guarantee a packet loss probability smaller than or equal to P_1 for flow f_{1,j}. Since the delay bound of flow f_{1,j} is m_j·SI, the effective bandwidth c_{1,j} can be determined with a finite-buffer queueing model where the buffer size is m_j·c_{1,j}, the server transmission capability is c_{1,j}, and the desired packet loss probability is P_1. Given the traffic arrival distribution N(μ_{1,j}, σ²_{1,j}), the effective bandwidth can be written as c_{1,j} = μ_{1,j} + α_{1,j}·σ_{1,j}, where α_{1,j} is called the QoS parameter of flow f_{1,j}. Derivation of the packet loss probability for a finite-buffer system is complicated. Reference [50] provided a good approximation based on the tail probability of an infinite-buffer system and the loss probability of a bufferless system, as shown in equation (8).
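For intuition, the QoS parameter and the resulting effective bandwidth can be computed numerically. The sketch below uses the bufferless Gaussian tail criterion Q(α) = P, a simplification of the finite-buffer approximation of equation (8), and obtains α by bisection since Q is strictly decreasing; the numeric inputs are hypothetical.

```python
import math

def q_function(a: float) -> float:
    # Gaussian tail probability Q(a) = P(X > a) for X ~ N(0, 1)
    return 0.5 * math.erfc(a / math.sqrt(2.0))

def qos_parameter(loss_prob: float) -> float:
    """Solve Q(alpha) = loss_prob by bisection (Q is strictly decreasing)."""
    lo, hi = 0.0, 40.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if q_function(mid) > loss_prob:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def effective_bandwidth(mean: float, std: float, loss_prob: float) -> float:
    # Gaussian effective-bandwidth form: c = mu + alpha * sigma
    return mean + qos_parameter(loss_prob) * std

alpha = qos_parameter(1e-3)                   # roughly 3.09
c = effective_bandwidth(100.0, 10.0, 1e-3)    # roughly 130.9 packets per SI
```

The tighter the loss requirement, the larger α, and hence the larger the bandwidth margin above the mean arrival rate.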

In the above equation, P_L(x) represents the packet loss probability of a finite-buffer system with buffer size x, and P(X > x) denotes the tail probability above level x of an infinite-buffer system. This approximation gives the (approximate) packet loss probability of a finite-buffer system with server transmission capability c_{1,j} and buffer size m_j·c_{1,j}, which in turn can be used to derive the effective bandwidth c_{1,j} = μ_{1,j} + α_{1,j}·σ_{1,j} for flow f_{1,j}. Here, R represents the feasible physical transmission rate of QSTA_a.

As mentioned before, using a buffer to store packets achieves intra-flow multiplexing gain. To further achieve inter-flow multiplexing gain, an equivalent flow of delay bound D_1, denoted by f̂_{1,j}, is constructed for each flow f_{1,j}, where α̂_{1,j} is the QoS parameter of the equivalent flow. Since the delay bound of every equivalent flow f̂_{1,j} is D_1, one can determine the aggregate equivalent flow f̂_1 from the flows f̂_{1,j}, 1 ≤ j ≤ J. Let N(μ̂_1, σ̂²_1) denote the distribution of traffic arrival in one SI for the aggregate equivalent flow f̂_1. Since the sum of independent Gaussian random variables remains Gaussian, we have μ̂_1 = Σ_{j=1}^{J} μ̂_{1,j} and σ̂²_1 = Σ_{j=1}^{J} σ̂²_{1,j}. The criterion shown in equation (4) was used for admission control.

Clearly, assuming all traffic flows have identical packet loss probabilities is a big constraint of the above scheme. A straightforward solution to handle flows with different packet loss probabilities is to assume that all flows have the most stringent requirement. Unfortunately, such a solution increases the effective bandwidths of flows which allow packet loss probabilities greater than the smallest one. Another possible solution is to compute separately the effective bandwidth ĉ_i for each aggregated equivalent flow f̂_i, 1 ≤ i ≤ I, and allocate TXOP_a according to Σ_{i=1}^{I} ĉ_i. Such a solution, however, does not take advantage of inter-flow multiplexing gain. In the following sub-section, we present our proposed scheme, which considers different packet loss probabilities and takes advantage of inter-flow multiplexing gain.
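The inter-flow multiplexing gain argued above can be checked numerically: variances of independent Gaussian flows add, while standard deviations do not, so the effective bandwidth of the aggregate is smaller than the sum of per-class effective bandwidths. The sketch below uses a single common QoS parameter for all classes, which is a simplification of the actual scheme (where each class has its own parameter and P_ultimate governs the ultimate flow); all numbers are hypothetical.

```python
import math

ALPHA = 3.09  # common QoS parameter, roughly Q^{-1}(1e-3); an assumption here

def eff_bw(mean: float, var: float) -> float:
    # Gaussian effective bandwidth: mean + alpha * standard deviation
    return mean + ALPHA * math.sqrt(var)

# (mean, variance) of per-class aggregated equivalent flows, hypothetical values
flows = [(50.0, 100.0), (80.0, 225.0), (40.0, 64.0)]

# allocating per class forgoes inter-flow multiplexing gain
separate = sum(eff_bw(m, v) for m, v in flows)

# the ultimate equivalent flow: means and variances of independent Gaussians add
aggregated = eff_bw(sum(m for m, _ in flows), sum(v for _, v in flows))

gain = separate - aggregated  # > 0 since sqrt(sum of variances) <= sum of std devs
```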

Flows with different packet loss probability requirements

First of all, an aggregate equivalent flow, denoted by f̂_i, is determined using the technique described in the last section for flows f_{i,1}, f_{i,2}, …, and f_{i,J}, for all i, 1 ≤ i ≤ I. Note that the packet loss probability requirement of f̂_i is P_i. Let N(μ̂_i, σ̂²_i) represent the traffic arrival distribution for flow f̂_i. Define f̂ as the ultimate equivalent flow with traffic arrival distribution N(Σ_{i=1}^{I} μ̂_i, Σ_{i=1}^{I} σ̂²_i); its QoS parameter is computed using equation (9) with desired packet loss probability P_ultimate. The aggregate TXOP allocated to QSTA_a can be calculated using equation (13), except that the aggregate effective bandwidth and the average number of packets which can be served in one SI are obtained by equations (18) and (19), respectively.

In equation (19), L denotes the weighted average nominal packet size of all the flows in F and is calculated by L = (Σ_{i=1}^{I} N_i·L_i) / (Σ_{i=1}^{I} N_i), where N_i and L_i can be obtained using equations (15) and (16), respectively. The aggregate TXOP allocation procedure for QSTA_a is summarized below.

Step 1. For 1 ≤ i ≤ I, determine the aggregate equivalent flow f̂_i with packet loss probability requirement P_i for flows f_{i,1}, f_{i,2}, …, and f_{i,J}.

Step 2. Determine the packet loss probability P_ultimate using equation (17).

Step 3. Compute the QoS parameter of the ultimate equivalent flow using equation (9) with P_ultimate as the desired packet loss probability.

Step 4. Compute the aggregate transmission duration TXOP_a allocated to QSTA_a using equation (13), with the effective bandwidth and the average number of packets served in one SI obtained from equations (18) and (19).

3.3. Proportional-loss Service Scheduler

When polled, QSTA_a needs to determine how the flows attached to it share the allocated TXOP. Let Queue_{i,j} denote the queue maintained in QSTA_a that is used to save packets of flow f_{i,j}. As shown in Fig. 3.3, Queue_{i,j} is divided into m_j virtual sub-queues such that the p-th sub-queue, represented by Queue^p_{i,j}, 1 ≤ p ≤ m_j, contains packets which can be kept for up to p SIs before violating the delay bound. How the allocated TXOP is shared is controlled by our proposed proportional-loss fair service scheduler.
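The virtual sub-queue structure can be sketched as a list of FIFO queues that shift one position toward the deadline at every SI boundary. Transmission is not modeled in this sketch; packets reaching sub-queue 1 without being served would violate the delay bound at the next boundary.

```python
from collections import deque

class VirtualSubQueues:
    """Queue_{i,j} split into virtual sub-queues: sub-queue p holds packets
    that can be kept for up to p SIs before violating the delay bound."""

    def __init__(self, num_subqueues: int):
        self.sub = [deque() for _ in range(num_subqueues)]  # sub[p-1] = Queue^p

    def enqueue(self, packet) -> None:
        # a fresh packet may wait the full delay bound, i.e. the last sub-queue
        self.sub[-1].append(packet)

    def advance_si(self):
        """At an SI boundary every packet moves one sub-queue closer to its
        deadline; packets leaving sub-queue 1 unserved violate the bound."""
        expired = list(self.sub[0])
        self.sub = self.sub[1:] + [deque()]
        return expired

q = VirtualSubQueues(2)      # delay bound of two SIs
q.enqueue("pkt-a")
first = q.advance_si()       # [] : "pkt-a" moves to sub-queue 1
second = q.advance_si()      # ["pkt-a"] : deadline violated if still unserved
```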

Consider the n-th SI. The proposed proportional-loss fair service scheduler is similar to the earliest deadline first (EDF) scheduler [53]. If the allocated TXOP is large enough to transmit all buffered packets, no traffic is lost in the n-th SI; in this case, our proposed proportional-loss fair service scheduler is the same as the EDF scheduler. Otherwise, let m be such that all packets which can be kept for fewer than m SIs are transmitted according to the EDF scheduler. Any packet which can be kept for longer than m SIs stays in the queue. Packets in Queue^m_{i,j}, 1 ≤ i ≤ I, m_j ≥ m, are handled differently by our proposed proportional-loss fair service scheduler and the EDF scheduler. In the proposed proportional-loss fair service scheduler, which packets should stay in the queue (if m > 1) or be dropped (if m = 1) is decided based on running packet loss probabilities. Once the decision is made, the service order of the packets to be transmitted is determined by the EDF scheduler.

Our proposed proportional-loss fair service scheduler tries to minimize the total amount of packet loss while maintaining fairness in the sense that the (pseudo) running packet loss probabilities of traffic flows are kept proportional to their packet loss probability requirements: the proportionality constraint is stated in equation (20), the total-loss constraint in equation (21), and the resulting loss assignment l_{i,j}(n) is given by equation (22).

However, the solution obtained by equation (22) may be infeasible, i.e., it is possible to have l_{i,j}(n) > Q^m_{i,j}(n) or l_{i,j}(n) < 0 for some (i,j) ∈ U_active. If this happens, an adjustment is necessary to make the solution feasible. The adjustment is accomplished by the loss computation algorithm shown in Appendix B. Its basic idea is described below. There are four possible cases for the solution obtained by equation (22).

Case 1 0li j,

 

nQi jm,

 

n for all

 

i j,Uactive. Theorem 3.1 below, the updated solution should fall in either Case 1 or Case 2. If it falls in Case 1, then a feasible solution is obtained. Otherwise, the same process is repeated. Eventually, a feasible solution will be obtained because it holds that , ,

   

m

Note that the proofs of all Lemmas and Theorems are provided in Appendix A. Theorem 3.1 says that if we set l_{r,s}(n) = Q^m_{r,s}(n) for the violating queue and solve equations (20) and (21) again, the updated solution falls in either Case 1 or Case 2. We point out that although Theorem 3.1 is stated for one (r,s) which satisfies l_{r,s}(n) > Q^m_{r,s}(n), it actually implies the same conclusion if multiple queues satisfy the condition.

Case 3: l_{i,j}(n) < 0 for some (i,j) ∈ U_active. We set l_{i,j}(n) = 0 for such a queue and solve equations (20) and (21) again; by Theorem 3.2 stated below, the updated solution will fall in either Case 1 or Case 3. Similarly, a feasible solution is found if the updated solution falls in Case 1. Otherwise, the same process is repeated until a feasible solution appears. The proof for Theorem 3.2 is similar to that for Theorem 3.1 and is omitted. Although Theorem 3.2 is stated for one queue, it implies the same conclusion if multiple queues satisfy the condition. Therefore, for Case 3, we can repeatedly set l_{i,j}(n) = 0 for all (i,j) such that l_{i,j}(n) < 0 and solve equations (20) and (21) for the updated solution.

Case 4: the solution contains both over-sized losses, l_{i,j}(n) > Q^m_{i,j}(n), and negative losses, l_{i,j}(n) < 0; it is handled by two sub-cases. Sub-case 1 fixes the over-sized losses as in Case 2. Sub-case 2 sets the negative losses to zero, as in Case 3, so as to achieve the equality described in equation (20) for the queues in the updated set V_2.

In summary, no adjustment is necessary if the solution falls in Case 1. If the solution falls in Case 2, then the Case 2 adjustment is performed repeatedly until a feasible solution is found. Similarly, if the solution falls in Case 3, then the Case 3 adjustment is repeatedly executed until a feasible solution is obtained. Finally, if the solution falls in Case 4, then either Sub-case 1 or Sub-case 2 is performed again.

The computational complexity of the loss computation algorithm is stated in the following Theorem 3.3.

Theorem 3.3 The loss computation algorithm takes at most 2(N−1) iterations to find the feasible solution, where N = |U_active|, the size of U_active.
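The idea of the loss computation algorithm, solving for proportional losses and then clipping infeasible values and re-solving, can be sketched with a simplified linear model. Equations (20)-(22) are not reproduced in this excerpt, so the proportionality model below, (L_k + l_k)/A_k = θ·P_k subject to a total-loss constraint, is an assumption of the sketch rather than the thesis' exact formulation.

```python
def loss_allocation(queues, total_loss):
    """Distribute total_loss so the updated running loss probabilities
    (L_k + l_k) / A_k are proportional to the requirements P_k, then repair
    infeasible values by clipping and re-solving (Case 2 / Case 3 style).
    queues: name -> (A, L, P, Q) = arrivals, past losses, requirement,
    and the at most Q packets that may be dropped from that queue."""
    alloc = {}
    active = dict(queues)
    remaining = total_loss
    while active:
        denom = sum(A * P for A, _, P, _ in active.values())
        theta = (remaining + sum(L for _, L, _, _ in active.values())) / denom
        trial = {k: theta * P * A - L for k, (A, L, P, _) in active.items()}
        if all(0.0 <= trial[k] <= active[k][3] for k in active):
            alloc.update(trial)          # Case 1: feasible, done
            break
        over = {k for k in active if trial[k] > active[k][3]}   # Case 2
        under = {k for k in active if trial[k] < 0.0}           # Case 3
        for k in over:
            alloc[k] = active[k][3]      # clip to the droppable maximum
            remaining -= active[k][3]
        for k in under:
            alloc[k] = 0.0               # negative loss is clipped to zero
        for k in over | under:
            del active[k]                # fixed queues leave the active set
    return alloc

queues = {"a": (100, 0, 0.01, 5.0), "b": (100, 0, 0.02, 1.5)}
alloc = loss_allocation(queues, 3.0)   # b is clipped to 1.5, a picks up 1.5
```

Each iteration either terminates or removes at least one queue from the active set, which mirrors the bounded iteration count of Theorem 3.3.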

After the feasible solution is found, TD_{i,j}(n) can be obtained according to equation (23). If data are dropped (i.e., m = 1), L_{i,j}(n) is updated as follows:

L_{i,j}(n) = L_{i,j}(n−1) + l_{i,j}(n).   (24)

Since the number of real-time flows attached to each QSTA is normally small, the complexity of the loss computation algorithm should be acceptable. Furthermore, because of the static and periodic TXOP allocation, each QSTA has one SI of time to compute the solution. Therefore, the proposed proportional-loss fair service scheduler should be feasible for real systems.

3.4. The Associated Admission Control Unit

Assume that QSTA_a is negotiating with the HC for a new traffic flow, i.e., the (n_a+1)-th flow of QSTA_a, that requires packet loss probability P_i and delay bound D_j. Define the available bandwidth BW_ava as recalculated using equation (17) with the above updated parameters as input. Finally, the effective bandwidth and the required TXOP, denoted by TXOP_a*, can be computed, respectively, by equations (9) and (13). Define ΔTXOP = TXOP_a* − TXOP_a. The new flow is admitted iff the following inequality is

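This excerpt ends before the admission inequality itself, so the following sketch only illustrates the natural form such a test could take; the condition "ΔTXOP fits the available budget" is an assumption, not the thesis' actual criterion.

```python
def admit_new_flow(txop_required: float, txop_current: float,
                   txop_available: float) -> bool:
    """Hypothetical admission test. Delta-TXOP = TXOP_a* - TXOP_a is the
    extra time QSTA_a needs per SI; admit iff it fits the unallocated
    budget. The actual inequality is not shown in this excerpt."""
    delta = txop_required - txop_current
    return delta <= txop_available

ok = admit_new_flow(3.2, 2.5, 1.0)        # True : 0.7 <= 1.0
rejected = admit_new_flow(4.0, 2.5, 1.0)  # False: 1.5 > 1.0
```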
