
Chapter 4 Resource Allocation for Real-Time and Non-Real-Time Traffic in

4.2. The Proposed Scheme

In this chapter, we present a resource allocation scheme which considers both the delay bound and the loss probability requirements requested by real-time traffic flows. As shown in Fig. 4.1, the minimum requested bandwidths of real-time flows are computed, summed for each SS, and then used together with queue occupancy as constraints in resource allocation. After the solution is obtained, a PL scheduler is adopted to determine how multiple real-time traffic flows attached to the same SS share the allocated bandwidth. In case the available resource is not sufficient to provide each flow its minimum requested bandwidth, a pre-processor is required to maximize the number of real-time flows attached to each SS that meet their QoS requirements. We describe the calculation of the minimum requested bandwidth, the resource allocation, the PL scheduler, and the pre-processor separately below.

Fig. 4.1 Architecture of the proposed scheme.

The minimum requested bandwidth

The minimum requested bandwidth $R^*_{n,k}(t)$ of flow $f_{n,k}$ in the $t$th frame is chosen such that, if the bandwidth allocated to the flow is smaller than $R^*_{n,k}(t)$, then the running loss probability is still greater than or equal to the pre-defined level $P_{n,k}$.

Lemma 4.1. It holds that

The minimum requested bandwidth for all cases is summarized in Table 4.1. Note that the actual allocated bandwidth could be different from $R^*_{n,k}(t)$. After obtaining $R^*_{n,k}(t)$ for all $k$, $1 \le k \le K_n$, the minimum requested bandwidth of SS $n$ in the $t$th frame is obtained by summing over all of its real-time flows, i.e., $R^*_n(t) = \sum_{k=1}^{K_n} R^*_{n,k}(t)$.
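For illustration, the flavor of this computation can be shown with a minimal Python sketch. It assumes, purely for illustration, that the running loss probability of a flow is its cumulative lost data divided by the sum of its cumulative served and lost data, and that any data queued in the current frame which is not served in that frame is lost; under these simplifying assumptions, the minimum requested bandwidth is the smallest portion of the current queue that must be served to keep the running loss probability at or below the pre-defined level. The actual scheme additionally distinguishes the cases of Table 4.1 and accounts for the per-deadline sub-queues.

```python
def min_requested_bandwidth(served, lost, queue, p_req):
    """Smallest portion of the current queue that must be served so that the
    running loss probability stays at or below p_req (illustrative sketch).

    served : cumulative data served in previous frames (S)
    lost   : cumulative data lost in previous frames (L)
    queue  : data queued in the current frame; any part of it that is not
             served in this frame is assumed to be lost
    p_req  : pre-defined loss probability level, 0 < p_req < 1
    """
    total = served + lost + queue                 # denominator after this frame
    # Require (lost + queue - r) / total <= p_req, i.e. r >= lost + queue - p_req * total.
    r = lost + queue - p_req * total
    return min(queue, max(0.0, r))

# Example: 9000 units served, 800 lost so far, 500 queued, 10% loss requirement.
print(min_requested_bandwidth(9000.0, 800.0, 500.0, 0.10))  # 270.0
```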

Resource allocation for maximum-throughput with QoS constraints

As described in Problem P1, the proposed resource allocation algorithm maximizes system throughput while providing QoS guarantees to real-time traffic flows. In Problem P1, we let $R^*_n(t) = 0$ for all SSs $n \in \mathcal{N}_{NRT}$. As in the previous section, we use $r_{n,m}(t)$ to denote the maximum achievable transmission rate on the $m$th sub-channel for SS $n$ in the $t$th frame. The variable $x_{n,m}(t)$ indicates whether the $m$th sub-channel is assigned to SS $n$ in the $t$th frame.

Problem P1 can be solved by some integer linear programming algorithm [55]. If there is no feasible solution, meaning that the available resource is smaller than the summation of all minimum requested bandwidths, we set $x_{n,m}(t) = 0$ for all $n \in \mathcal{N}_{NRT}$, $1 \le m \le M$, and solve a modified problem, called Problem P2, which is basically the same as Problem P1 except that the constraint shown in equation (30) is replaced by $0 \le \sum_{m=1}^{M} x_{n,m}(t)\, r_{n,m}(t) \le R^*_n(t)$ for every SS $n \in \mathcal{N}_{RT}$. Note that the solution of Problem P2 always exists because $x_{n,m}(t) = 0$, for all $n$, $1 \le m \le M$, is one feasible solution. Unfortunately, the complexity of integer linear programming is NP-complete [56]. One possible strategy to mitigate the computational complexity is to set $u_{n,m} = r_{n,m}(t)$ for all $n$, $1 \le m \le M$, and conduct the matrix-based scheduling algorithm for one or two rounds. In the first round, we only consider SSs contained in $\mathcal{N}_{RT}$, assuming that the queue occupancy of SS $n$ is equal to $R^*_n(t)$. The algorithm ends if the resource is exhausted in the first round. Otherwise, the second round is performed to allocate the remaining resource to all SSs, assuming the queue occupancy of SS $n$ is equal to $Q_n(t) - R^*_n(t)$. According to the analysis provided in Chapter 2.2, the computational complexity of the modified matrix-based scheduling algorithm is $O(\max(M^2 |\mathcal{N}|, M |\mathcal{N}|^2))$.
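For illustration, the following Python sketch poses a problem of this kind with an off-the-shelf ILP solver (PuLP). The function name, the data layout, and the assumption that each sub-channel is assigned to at most one SS per frame are illustrative; only the throughput objective and the per-SS minimum-bandwidth constraint mirror the description above.

```python
import pulp

def solve_p1(r, R_star, rt_sss):
    """Sketch of Problem P1: maximize frame throughput subject to the minimum
    requested bandwidths of real-time SSs.

    r       : dict {ss: [rate on sub-channel 0, 1, ...]}  (r_{n,m}(t))
    R_star  : dict {ss: minimum requested bandwidth}      (R*_n(t), zero for non-real-time SSs)
    rt_sss  : iterable of SSs carrying real-time flows
    """
    sss = list(r.keys())
    subch = range(len(next(iter(r.values()))))

    prob = pulp.LpProblem("P1_max_throughput", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", [(n, m) for n in sss for m in subch], cat="Binary")

    # Objective: total throughput in the frame.
    prob += pulp.lpSum(r[n][m] * x[(n, m)] for n in sss for m in subch)

    # Each sub-channel serves at most one SS (assumption of this sketch).
    for m in subch:
        prob += pulp.lpSum(x[(n, m)] for n in sss) <= 1

    # Every real-time SS receives at least its minimum requested bandwidth
    # (the role played by equation (30)).
    for n in rt_sss:
        prob += pulp.lpSum(r[n][m] * x[(n, m)] for m in subch) >= R_star[n]

    status = prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[status] != "Optimal":
        # Infeasible: Problem P2 would drop the non-real-time SSs and relax the
        # minimum-bandwidth constraints to upper bounds (not shown here).
        return None
    return {(n, m): int(x[(n, m)].value()) for n in sss for m in subch}
```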

Let $y_{n,m}(t)$ be the solution obtained either from the integer linear programming or from the matrix-based scheduling algorithm. We have $R_n(t) = \sum_{m=1}^{M} y_{n,m}(t)\, r_{n,m}(t)$, which in general differs from $R^*_n(t)$. In this case, we need a user-level resource allocation algorithm for the attached flows to share the allocated bandwidth. In the following sub-section, we define the PL scheduler to solve this problem.

Proportional-loss (PL) scheduler

Consider SS $n$ and assume that it is attached with multiple real-time traffic flows. Define three disjoint sets $U_Z$, $U_P$, and $U_A$ such that flow $f_{n,k}$ is contained in $U_Z$, $U_P$, or $U_A$ iff it is allocated zero bandwidth, part of its queue occupancy, or its entire queue occupancy, respectively. The proposed PL scheduler achieves min-max optimality, as stated in Lemma 4.2. In Theorem 4.3, we show that if there exists a scheduler which guarantees the loss probability requirements, so does the PL scheduler.

Lemma 4.2. Given $R_n(t) \ge 0$, $S_{n,k}(t-1)$, $L_{n,k}(t-1)$, and $Q^m_{n,k}[t]$, $1 \le m \le D_{n,k}$, $1 \le k \le K_n$, the proposed PL scheduler minimizes the maximum normalized running loss probability of all the traffic flows attached to SS $n$.

Theorem 4.3. Given $R_n(t) \ge 0$, $S_{n,k}(t-1)$, $L_{n,k}(t-1)$, and $Q^m_{n,k}[t]$, $1 \le m \le D_{n,k}$, $1 \le k \le K_n$, if there exists a scheduler which can guarantee the loss probability requirements of all the $K_n$ traffic flows, so can the PL scheduler.

Theorem 4.3 explains why the PL scheduler is proposed as the user-level resource allocation algorithm. Define $[R_n(t), S_{n,k}(t-1), L_{n,k}(t-1), \{Q^m_{n,k}[t]\}_{m=1}^{D_{n,k}}\ (1 \le k \le K_n)]$ as the state of SS $n$ at the beginning of the $t$th frame. Given the state at the beginning of the first frame, the PL scheduler is preferred over other schedulers in the first frame, according to Theorem 4.3.

Assume that the PL scheduler is adopted in the first frame. The state at the beginning of the second frame is determined once the traffic arrivals at the beginning of the second frame are known and $R_n(2)$ is provided. Based on Theorem 4.3 again, the PL scheduler is still the preferred scheduler in the second frame. The same argument can be applied to all frames.

In the rest of this sub-section, we present a realization of the PL scheduler. Again, consider SS $n$ in the $t$th frame and assume that $R_n(t)$ is given. We need to determine $R_{n,k}(t)$, $1 \le k \le K_n$, so that equations (32) and (33) are satisfied.

Lemma 4.4. If $R_n(t) \ge R^*_n(t)$, equations (32) and (33) are satisfied for $R_{n,k}(t) \ge R^*_{n,k}(t)$, $1 \le k \le K_n$.

For Case 1, we increase $R_{n,k}(t)$ gradually, keeping equation (32) satisfied, until Event 1 occurs (no solution is found earlier than Event 1). The times for other events to happen can be similarly determined. After all flows are placed in the correct sets, the solution can be obtained by solving equations (32) and (33). To summarize, we repeatedly check the inequality shown in equation (35). If it holds, flow $f_{n,k^*}$ is moved from $U_Z$ to $U_{P1}$, from $U_{P1}$ to $U_{P2}$, or from $U_{P2}$ to $U_A$, and the process continues. All flows are placed in their correct sets once the inequality shown in (35) becomes false. The solution can then be obtained as follows. Set $R_{n,k}(t) = 0$ if $f_{n,k} \in U_Z$, or $R_{n,k}(t) = Q_{n,k}(t)$ if $f_{n,k} \in U_A$. For $f_{n,k} \in U_{P1} \cup U_{P2}$, $R_{n,k}(t)$ can be obtained by $R_{n,k}(t) = h_{n,k}\big(P_{n,k}\, P^F_n(t); t\big)$, where $P^F_n(t)$ represents the normalized running loss probability for any $f_{n,k} \in U_{P1} \cup U_{P2}$ at the end of the $t$th frame and is derived in Appendix A.

Case 2: $R_n(t) < R^*_n(t)$.

Case 2 is similar to Case 1, except that we need to decrease $R_{n,k}(t)$ for $f_{n,k} \in U_{P1} \cup U_{P2} \cup U_A$. For this case, we repeatedly check the inequality shown in (38) until it becomes false. If it is true, flow $f_{n,k^*}$ is moved from $U_A$ to $U_{P2}$, from $U_{P2}$ to $U_{P1}$, or from $U_{P1}$ to $U_Z$.

After the inequality shown in (38) becomes false, the solution can be obtained as follows. Set $R_{n,k}(t) = 0$ if $f_{n,k} \in U_Z$, or $R_{n,k}(t) = Q_{n,k}(t)$ if $f_{n,k} \in U_A$. For $f_{n,k} \in U_{P1} \cup U_{P2}$, $R_{n,k}(t)$ can be obtained by $R_{n,k}(t) = h_{n,k}\big(P_{n,k}\, P^F_n(t); t\big)$. The pseudo code of the above realization of the PL scheduler is provided in Appendix B.

Note that, for Case 1, the maximum number of iterations needed for the PL scheduler is $3K_n$, which happens when each flow is moved from $U_Z$ to $U_{P1}$, from $U_{P1}$ to $U_{P2}$, and then from $U_{P2}$ to $U_A$. In each iteration, the computational complexity is $O(K_n)$. Therefore, the total computational complexity is $O(K_n^2)$. Obviously, the complexity for Case 2 is the same.
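For illustration, the proportional-loss split can also be computed numerically rather than event by event. The sketch below assumes, purely for illustration, that the running loss probability of a flow is its cumulative lost data divided by the sum of its cumulative served and lost data, that data queued in the current frame and not served in it is lost, and that the normalized level is the running loss probability divided by the flow's requirement. Each flow's allocation is the amount needed to reach a common normalized level, clamped to $[0, Q_{n,k}(t)]$, and the common level is found by bisection; flows clamped at zero or at their queue occupancy play the roles of $U_Z$ and $U_A$.

```python
def pl_split(total_bw, flows, iters=60):
    """Split total_bw so that the flows' normalized running loss probabilities
    are equalized as far as possible (min-max), clamping each allocation to
    [0, queue occupancy].  Illustrative sketch only.

    Each flow is a dict with keys 'served', 'lost' (cumulative data up to the
    previous frame), 'queue' (data queued in the current frame) and 'p_req'
    (loss probability requirement).  Returns a list of allocations.
    """
    def demand(flow, level):
        # Bandwidth needed so that the flow's normalized loss is at most `level`.
        total = flow['served'] + flow['lost'] + flow['queue']
        r = flow['lost'] + flow['queue'] - level * flow['p_req'] * total
        return min(flow['queue'], max(0.0, r))

    lo, hi = 0.0, 1.0 / min(f['p_req'] for f in flows)   # normalized level is at most 1/p_req
    for _ in range(iters):                                # bisection on the common level
        mid = 0.5 * (lo + hi)
        if sum(demand(f, mid) for f in flows) > total_bw:
            lo = mid                                      # too demanding: accept a worse level
        else:
            hi = mid
    return [demand(f, hi) for f in flows]

# Example: two flows sharing 400 units of bandwidth in the current frame.
flows = [
    {'served': 9000.0, 'lost': 800.0, 'queue': 500.0, 'p_req': 0.10},
    {'served': 4000.0, 'lost': 300.0, 'queue': 300.0, 'p_req': 0.05},
]
print(pl_split(400.0, flows))
```

Under these simplified assumptions the bisection converges to the min-max level of Lemma 4.2, although it replaces the $O(K_n^2)$ event-driven procedure with an iterative numeric search.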

Pre-processor

Assume that $R_n(t) < R^*_n(t)$ (i.e., Case 2 occurs) and $R_{n,k}(t) > 0$. In this case, flow $f_{n,k}$ will violate its loss probability requirement if the PL scheduler is adopted. As a consequence, all flows attached to SS $n$ violate their loss probability requirements if $R_{n,k}(t) > 0$ for all $k$. This is clearly not desirable. One possible remedy is to place a pre-processor in front of the PL scheduler to maximize the number of flows which meet their loss probability requirements. The pre-processor examines the flows iteratively and ends its operation once its stopping condition is met; otherwise, it repeats the process. After the operation of the pre-processor ends, the remaining resource is allocated to the remaining flows belonging to $U_{P1} \cup U_{P2} \cup U_A$ by the PL scheduler. Clearly, the computational complexity of the pre-processor is $O(K_n \log K_n)$, where $K_n$ here counts the flows $f_{n,k} \in U_{P1} \cup U_{P2} \cup U_A$ with $P_{n,k}(t) \le P_{n,k}$. As will be seen in the next section, adoption of the pre-processor can significantly increase the number of real-time flows which meet their QoS requirements. A greedy realization consistent with this complexity is sketched below.
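For illustration, one greedy realization consistent with the $O(K_n \log K_n)$ complexity is sketched below: flows are examined in increasing order of their minimum requested bandwidths and kept as long as they fit into the bandwidth actually granted to the SS. The function name, the data layout, and the greedy rule itself are illustrative assumptions rather than the exact selection procedure of the proposed scheme.

```python
def pre_process(min_bw, granted_bw):
    """Greedy pre-processor sketch: keep as many flows as possible whose
    minimum requested bandwidths fit into the granted bandwidth.

    min_bw     : dict {flow_id: minimum requested bandwidth R*_{n,k}(t)}
    granted_bw : bandwidth R_n(t) actually allocated to the SS
    Returns (protected_flows, leftover_bandwidth).
    """
    protected, remaining = [], granted_bw
    # Sorting dominates the cost, giving O(K_n log K_n) overall.
    for flow_id, bw in sorted(min_bw.items(), key=lambda item: item[1]):
        if bw <= remaining:
            protected.append(flow_id)
            remaining -= bw
        else:
            break
    return protected, remaining

# Example: three flows, 250 units granted; the two smallest demands are kept.
print(pre_process({'f1': 180.0, 'f2': 90.0, 'f3': 120.0}, 250.0))
# (['f2', 'f3'], 40.0)
```

In this sketch the flows left unprotected would be the ones moved to $U_Z$, and the PL scheduler then serves the protected flows with the remaining resource.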

4.3. Simulation Results

In our simulations, SSs are uniformly distributed in a circular area of radius 2 km and the BS is located at the center. Two types of real-time traffic flows are studied. Parameters of the simulation environment, AMC schemes, traffic specifications, and QoS requirements of real-time flows are summarized in Table 4.2. A frame is decomposed into downlink and uplink sub-frames. We only consider downlink transmission, which is assumed to occupy 30 time slots in a frame. The other time slots are used for uplink transmission and signaling overhead. For non-real-time traffic, we assume that its queue is always non-empty. Two scenarios are investigated. In both scenarios, we assume that $|\mathcal{N}_{NRT}| = 40$ and the minimum requested bandwidth of every non-real-time flow is zero.

In the first scenario, in addition to the 40 non-real-time flows, there is a varying number of SSs each attached with one Type I real-time flow. The second scenario has 13 SSs each attached with two real-time flows, one of Type I and the other of Type II. Simulations are performed for 10,000 frames using Matlab on a PC with an Intel Core 2 Quad CPU operating at 2.83 GHz with 3072 MB of RAM.

For the first scenario, we compare our proposed scheme with the pure maximum-throughput algorithm, the three scheduling policies proposed in [36], and the M-LWDF scheme. To maximize system throughput, the minimum requested bandwidth of any real-time traffic flow is zero for the pure maximum-throughput algorithm. For a fair comparison, we change the resource granularity from sub-channel to time slot for the three policies proposed in [36]. With such a change, their performances are better than those of the original versions. We label our proposed scheme "proposed:ILP" or "proposed:Matrix" if the resource allocation problem is solved by integer linear programming or by the matrix-based scheduling algorithm, respectively. Both the PL scheduler and the pre-processor are adopted in Scenario 2 for all investigated schemes, except the M-LWDF scheme.

Table 4.2 Parameters of the simulation environment, traffic characteristics, and QoS requirements.

Simulation environment:
  Doppler frequency                4.6 Hz (speed: 2 km/hr)
  Path loss exponent               4

Real-time traffic                  Type I           Type II
  Content                          Voice            Video streaming (Star Wars II)
  Codec format                     G.711            MPEG-4
  Mean inter-arrival time          20 ms            40 ms
  Mean packet size                 200 bytes        267 bytes
  Delay bound                      80 ms            160 ms
  Loss probability requirement     10%              5, 10, 15, 20, 25%

The adopted modulation and coding schemes [35]:
  Mode   Modulation   Coding rate   Receiver SNR (dB)
  1      QPSK         1/2           5

In Fig. 4.3 and Fig. 4.4, we compare, respectively, the total system throughput and the loss probability of the investigated schemes for SSs attached with Type I real-time traffic flows in the first scenario. Compared with the schemes presented in [36] with the parameter set to 0 or 1, our proposed scheme achieves better system throughput. The maximum improvement is about 28% (6.018 Mbps versus 4.696 Mbps), which occurs when $|\mathcal{N}_{RT}| = 60$. Although the pure maximum-throughput algorithm and the scheme presented in [36] with the parameter set to $\infty$ have better throughput performance than our proposed scheme, their loss probabilities are higher than the specified value. In fact, a large proportion (about 80%) of real-time data is lost for the pure maximum-throughput algorithm. The reason is that there are many SSs attached with non-real-time traffic flows that are assumed to always have data for transmission. The improvement of our proposed scheme stops when $|\mathcal{N}_{RT}|$ reaches 70. The reason is that, for $|\mathcal{N}_{RT}| \ge 70$, the average running loss probability is greater than the loss probability requirement and, therefore, the resource is allocated to users with good channel qualities by our proposed scheme and by the scheme presented in [36] with the parameter set to 0 or 1. Compared with the M-LWDF scheme, our proposed algorithm achieves higher throughput without sacrificing the QoS guarantee.

Fig. 4.3 Throughputs (Mbps) of various schemes versus the number of SSs attached with Type I traffic flows in the first scenario.

Fig. 4.4 Loss probabilities (%) of SSs attached with real-time traffic flows versus the number of SSs attached with Type I traffic flows in the first scenario.

In Fig. 4.5 and Fig. 4.6, we compare the performances of our proposed:ILP and proposed:Matrix schemes. Results show that the difference is not significant. For $|\mathcal{N}_{RT}| = 30$, the execution time of the proposed:Matrix scheme is 0.9 ms, which is much smaller than 47.4 ms, the execution time of the proposed:ILP scheme.

Fig. 4.7 compares the throughput performances of the investigated schemes which guarantee the QoS of all the real-time flows in the second scenario. As one can see, our proposed:Matrix scheme outperforms M-LWDF and the scheme of [36] with the parameter set to 0 or 1. The improvement increases as the loss probability requirement increases. The reason is simply that our proposed:Matrix scheme takes loss probability requirements into consideration when calculating the minimum requested bandwidth of every real-time flow. As shown in Table 4.3, both M-LWDF and the scheme of [36] (with the parameter set to 0 or 1) do not take full advantage of the data-loss tolerance of real-time flows. By keeping the actual loss probabilities close to the requirements, our proposed scheme improves system throughput.

To study the effect of the pre-processor, we conduct simulations for our proposed:Matrix scheme with and without the pre-processor. The results are shown in Table 4.4. For comparison, we also include simulation results of the M-LWDF scheme. In this table, the loss probability requirement of Type II real-time flows is chosen to be 10%. As one can see, the number of Type II flows which meet their QoS requirements with the pre-processor is much larger than that without the pre-processor when $|\mathcal{N}_{RT}|$ is large. The reason is that, under the PL scheduler, the denominator of the running loss probability, i.e., $S_{n,k}(t) + L_{n,k}(t)$, is often smaller for a real-time flow with a smaller data arrival rate. As a result, a flow with a smaller data arrival rate tends to have a smaller minimum requested bandwidth and is more likely to be selected by the pre-processor. In our simulations, a flow of Type II has a smaller data arrival rate than a flow of Type I. When compared with M-LWDF, the proposed:Matrix scheme with the pre-processor yields more flows which meet their QoS requirements.

One interesting observation is that M-LWDF favors Type I flows. This is because Type I flows require more stringent delay bounds than Type II flows, which implies Type I flows are assigned higher priority than Type II flows when loss probability requirements are identical. We also conducted simulations for a scenario where all SSs are attached with two Type II flows. The loss probability requirement is 10% for one flow and 20% for the other. Results show that the pre-processor favors flows with 20% loss probability requirement. This is intuitively true because, under the same data arrival distribution, a flow with a larger loss probability requirement tends to have a smaller minimum requested bandwidth than one which has a smaller loss probability requirement. Owing to space limitation, we do not show these results.

We have presented in this chapter an efficient resource allocation scheme which tries to maximize system throughput while providing QoS support to real-time traffic flows. The basic idea of our proposed scheme is to calculate a dynamic minimum requested bandwidth for each traffic flow and use it as a constraint in an optimization problem which maximizes system throughput.

The minimum requested bandwidth is a function of the pre-defined loss probability and the running loss probability. In addition, a user-level PL scheduler is proposed to determine the bandwidth share of multiple real-time flows attached to the same SS. A pre-processor is adopted to maximize the number of real-time flows attached to each SS which meet their QoS requirements when the resource is not sufficient to provide every flow its minimum requested bandwidth. Computer simulations were conducted to evaluate the performance of our proposed scheme. Results show that the running loss probabilities of traffic flows attached to the same SS are effectively controlled to be proportional to their loss probability requirements. Besides, compared with previous designs, our proposed scheme achieves higher throughput while providing QoS support. Although we present our designs for the long-time average of loss probabilities, the idea can be applied to other measurements such as the exponentially weighted moving average. How to design a pre-processor which meets users' needs is an interesting topic for further study. Evaluating the impact of various performance measurements on users' perceived satisfaction is another potential topic for future research.

Table 4.3 Loss probabilities for users attached with one Type I and one Type II real-time flow.

Table 4.4 Number of Type I and Type II flows which meet their QoS requirements in the second scenario.

Fig. 4.5 Throughput comparison between the proposed:ILP and proposed:Matrix schemes.

Fig. 4.6 Loss probability comparison between the proposed:ILP and proposed:Matrix schemes.

Fig. 4.7 Throughputs (Mbps) of various schemes versus the loss probability requirement (%) in the second scenario.

Chapter 5 Optimal Queue Management Algorithm for Real-Time Traffic

As real-time applications proliferate rapidly, QoS guarantees for traffic flows become an important issue. A generalized quality of service (G-QoS) scheme coupled with the earliest deadline first (EDF) service discipline was proposed to support multiple delay bounds and cell loss probabilities in ATM networks. The G-QoS scheme, however, is only suitable for ATM networks, which transport fixed-length packets. In this chapter, we study a multiplexing system which handles variable-length packets. A proportional loss (PL) queue management algorithm is proposed for packet discarding which, combined with the work-conserving EDF service discipline, can provide QoS guarantees for real-time traffic flows with different delay bound and loss probability requirements. We show that the proposed PL queue management algorithm is optimal because it minimizes the effective bandwidth among all stable and generalized space-conserving schemes.

The PL queue management algorithm is presented for fluid-flow models. Two packet-based algorithms are investigated for real packet-switched networks. One of the two algorithms is a direct extension of the G-QoS scheme and the other is derived from the proposed fluid-flow based PL queue management algorithm. Simulation results show that the scheme derived from our proposed PL queue management algorithm performs better than the one directly extended from the G-QoS scheme.

5.1. System Model

As illustrated in Fig. 5.1, the system investigated in this chapter is a multiplexer handling variable-length packets. Assume that there are $K$ traffic flows, namely, $f_1$, $f_2$, …, and $f_K$. In the investigated multiplexer, each traffic flow is allocated a separate queue, denoted by $Queue_1$, $Queue_2$, …, and $Queue_K$. Time is divided into slots of the same duration $T$. In each time slot, the service capability of the multiplexer for each flow is identical and equal to $C$. The service scheduler arranges the data of each flow for service according to the work-conserving EDF discipline. It is assumed that data always arrive at the beginning of each time slot. Upon data arrival, the queue management algorithm decides whether the data are schedulable. If so, no further action is taken. Otherwise, some data are discarded so that the remaining data can be transmitted before their own deadlines.
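For illustration, a minimal sketch of such a schedulability test is given below. It assumes that data amounts can be treated as fluid quantities, that deadlines are expressed in whole slots, and that $C$ is the total amount of data the multiplexer can serve per slot; the per-deadline aggregation mirrors the virtual sub-queue structure shown in Fig. 5.1.

```python
def edf_schedulable(backlog, capacity):
    """Check whether all queued data can meet their deadlines under
    work-conserving EDF in a slotted system (illustrative sketch).

    backlog  : dict {remaining_deadline_in_slots: amount_of_data}, i.e. the
               virtual sub-queue occupancies aggregated over all flows
    capacity : amount of data served per slot (assumed total capability C)
    """
    cumulative = 0.0
    for deadline in sorted(backlog):
        cumulative += backlog[deadline]
        # Data due within `deadline` slots must fit into `deadline` slots of service.
        if cumulative > capacity * deadline:
            return False
    return True

# Example: capacity 100 per slot; 150 units due in one slot is not schedulable,
# while spreading the same load over two slots is.
print(edf_schedulable({1: 150.0}, 100.0))          # False
print(edf_schedulable({1: 80.0, 2: 70.0}, 100.0))  # True
```

Under work-conserving EDF, the backlog is feasible exactly when, for every horizon $d$, the data due within $d$ slots does not exceed $C \cdot d$, which is the condition the sketch checks.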

QoS is specified by the delay bound and the loss probability. Consider $f_k$, $1 \le k \le K$. Its delay bound and loss probability requirement are denoted by $D_k$ and $P_k$, respectively.

Fig. 5.1 Architecture of the investigated multiplexer system and the structure of the virtual sub-queues, $Queue_{km}$, $1 \le m \le D_k$, for $Queue_k$.

5.2. The Proposed PL Queue Management Algorithm

It is assumed that packets are infinitesimally divisible, which is referred to as the fluid-flow model in this dissertation. A more realistic system which manages the queues packet by packet, namely,

