
Chapter 2 Uplink Scheduling in EPON Access Network

2.6 Concluding Remarks

In this chapter, we proposed two scheduling schemes to achieve an overall fairness that combines the fairness of packet delay and the fairness of packet blocking probability.

The proposed schemes are Hybrid LQF-QLP and Hybrid EQL-QLP. Each scheme combines two basic sub-schemes, using a queue length threshold as the adjusting parameter. In the simulation, we define three classes of service, i.e. voice, video, and data. The basic requirement of the scheduling algorithm is to meet the delay bound of voice service. Under this condition, we try to improve the fairness of packet delay and the fairness of packet blocking probability simultaneously.

Simulation results show that the proposed schemes improve the overall fairness of data service compared with traditional scheduling schemes such as QLP and LQF. They also show that the Hybrid LQF-QLP scheme outperforms the Hybrid EQL-QLP scheme. We conclude that the proposed schemes not only maintain the QoS criterion of real-time service but also provide good fairness for non-real-time service in terms of packet delay and packet blocking probability.

Chapter 3

Prediction-Based Scheduling Algorithms


3.1 Introduction

The scheduling architecture proposed in Chapter 2 can guarantee the delay criterion of voice service and provide good fairness to best-effort data service. In order to maintain the delay criterion of voice service, the maximum cycle time should be bounded [7]. We have shown the relationship between the average packet delay of voice service and the maximum cycle time (Tmax) in Chapter 2, where the average packet delay reaches one and a half cycle times. In [7], the authors proposed a CBR-credit method to eliminate a phenomenon called the light-load penalty. The principle of the CBR-credit method is to reserve more resource for CBR traffic, so that the granted transmission data volume of low-priority service can be transmitted without being preempted by CBR traffic. In [10], the authors also use a method similar to CBR-credit to guarantee the packet delay of the highest-priority service. The resource reserved for the highest-priority service equals the sum of the data volumes granted and reported in the last cycle. Thus, the average packet delay of the highest-priority service can be guaranteed.

In [13], the authors proposed a prediction-based LQF scheduling algorithm and proved that its packet blocking probability is lower than that obtained with the LQF scheduling algorithm alone.

From IPACT, we know that if the per-cycle overhead, such as control messages and guard times, remains fixed, the throughput increases with the maximum cycle time (Tmax). Thus, we want to increase the cycle time to improve bandwidth efficiency. To counter the resulting increase in packet delay, we add a predictor to our scheduler architecture. Based on the predicted queue occupancies, we can better optimize our system.
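As a rough numerical illustration of this overhead argument, the per-cycle efficiency can be sketched as the fraction of the cycle left for user data when each ONU contributes a fixed guard-time and control-message overhead. The parameter values below (16 ONUs, 5 µs of overhead per ONU) are assumptions for illustration only, not figures from this thesis:

```python
# Why throughput grows with T_max when per-cycle overhead is fixed:
# efficiency = (T_max - total_overhead) / T_max. All values are assumed.

def cycle_efficiency(t_max_ms: float, n_onus: int = 16,
                     overhead_per_onu_ms: float = 0.005) -> float:
    """Fraction of the cycle usable for user data."""
    total_overhead = n_onus * overhead_per_onu_ms  # guard times + control msgs
    return (t_max_ms - total_overhead) / t_max_ms

for t_max in (0.66, 1.0, 2.0):
    print(f"T_max = {t_max} ms -> efficiency = {cycle_efficiency(t_max):.3f}")
```

With these assumed numbers, extending Tmax from 0.66 ms to 2 ms raises the usable fraction of each cycle, which matches the direction of the throughput gain reported in Section 3.5.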

In the remainder of this chapter, we first introduce the system model and the prediction-based scheduler architecture in Section 3.2, and the principle of the prediction in Section 3.3.

Then, the prediction-based scheduling algorithm is presented in Section 3.4. Finally, the simulation results are given in Section 3.5.

3.2 System Model

In the prediction-based system model, the EPON network remains the same as the non-prediction-based model, except that the scheduler is replaced by a prediction-based scheduler, as shown in Figure 3.1. The prediction-based EPON architecture also consists of an OLT and N ONUs, and each ONU still supports three classes of service, denoted as P1, P2, and P3, by means of three priority queues. The line rate is RN Mbps between the OLT and the ONUs and RU Mbps between the ONUs and the user side, in both the downlink and uplink directions. In an ONU, incoming packets are first classified into the different classes of service and then stored in the corresponding priority queues. When an ONU receives a GATE message, the queue manager can transmit packets according to the granted transmission data volume. After transmitting the user information, the queue manager generates a REPORT message to request additional resource.

Figure 3.1: A prediction-based EPON model

The proposed prediction-based scheduler architecture is shown in Figure 3.2. Note that a predictor now exists in the scheduler architecture. When a REPORT message arrives at the OLT, it is divided into two parts of information, i.e. the requested data volume of each class of service and the timestamp information of the corresponding ONU. The request information is passed to the predictor to predict the data volume of new packets received by the ONU after it generated the REPORT message. In other words, the predictor tries to estimate the occupancy of each queue in the corresponding ONU at the time that ONU receives the GATE message in the next cycle. We describe the principle of our predictor in a later section. In the meantime, the timestamp information is passed to the timing function to calculate the latest round-trip time (RTT); the method of updating the RTT is described in Figure 2.2. After the operations of the timing function and the predictor, the latest RTT and the predicted queue occupancies are stored in RAM. During a cycle, the OLT receives N REPORT messages from the N ONUs, and all of them are stored until the next scheduling time.

Figure 3.2: The prediction-based scheduler architecture

Before the beginning of the next cycle, the Decision Maker calculates the timing and the data volume each queue is granted to transmit in the next cycle. Its input is all the information stored in RAM, i.e. the latest RTT of each ONU and the predicted occupancy of each queue in each ONU. After scheduling, the Grant Table is updated. Finally, based on the Grant Table, the OLT sends GATE messages to the ONUs so that each ONU knows how much data volume each service can transmit.

In addition, we adopt IPACT as the interaction method between the OLT and the ONUs. In principle, IPACT uses an interleaved polling approach, in which the next ONU is polled before the transmission from the previous one has arrived. The details of the IPACT-based polling procedure are given in Figure 2.5.

3.3 Predictor

The concept of prediction is shown in Figure 3.3. Before the beginning of Cycle (n+1), a scheduling operation must be performed so that the packets transmitted by ONU i (we assume that ONU i is the first candidate to transmit in a cycle) arrive at the OLT exactly when the last candidate finishes its transmission in Cycle (n).

Figure 3.3: The concept of prediction method

Assume that the OLT must perform a scheduling operation at time t0. During Cycle (n), the OLT has received user data, whose volume is denoted as D_{i,j}[n], and REPORT messages, in which the requested data volume is denoted as Q_{i,j}[n], where i = 1, ..., N and j = 1, 2, 3. In addition, we denote by V_{i,j}[n] the data volume of newly arrived packets, where i = 1, ..., N and j = 1, 2, 3.

When the OLT receives a REPORT message, in which the requested data volume Q_{i,j}[n] is included, the data volume of newly arrived packets V_{i,j}[n] can be obtained by

    V_{i,j}[n] = D_{i,j}[n] + Q_{i,j}[n] − Q_{i,j}[n−1],    (3.1)

where i ranges from 1 to N and j from 1 to 3. Now we define the prediction order K as the number of samples referenced in the past history. The predicted arrival volume V′_{i,j}[n+1] is then derived by the K-sample moving average

    V′_{i,j}[n+1] = (1/K) Σ_{k=0}^{K−1} V_{i,j}[n−k],    (3.2)

and the predicted queue occupancy O_{i,j}[n+1] follows from

    O_{i,j}[n+1] = Q_{i,j}[n] + V′_{i,j}[n+1].    (3.3)

When the OLT receives a REPORT message, it generates the predicted queue occupancy for each class of service and stores them in RAM. Then, at the scheduling time t0, the scheduler can use the predicted queue occupancies to assign the resource fairly.
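As a concrete sketch, the per-queue bookkeeping of equations (3.1)–(3.3) might look as follows in Python; the class name, method signature, and the K-sample moving average used for V′ are our assumptions (the thesis fixes only the prediction order K):

```python
from collections import deque

class MovingAveragePredictor:
    """One instance per (ONU i, priority j) queue, kept at the OLT."""

    def __init__(self, k: int = 3):
        self.history = deque(maxlen=k)  # last K arrival volumes V[n]
        self.prev_request = 0           # Q[n-1] from the previous REPORT

    def update(self, granted: int, request: int) -> float:
        """Process one REPORT: `granted` is D[n] (volume served this cycle),
        `request` is Q[n] (reported queue length). Returns the predicted
        occupancy O[n+1] to be stored in RAM until the next scheduling time."""
        v = granted + request - self.prev_request       # eq. (3.1): new arrivals
        self.prev_request = request
        self.history.append(v)
        v_pred = sum(self.history) / len(self.history)  # moving average V'[n+1]
        return request + v_pred                         # eq. (3.3): O[n+1]
```

For example, after p = MovingAveragePredictor(k=3), the call p.update(granted=0, request=10) returns 20.0: all 10 reported bytes are new arrivals, and the one-sample average predicts another 10 before the next GATE.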

3.4 Prediction-based Scheduling Algorithm

Suppose that we have a set of predicted queue occupancies, i.e. O_{i,j}, where i = 1, ..., N and j = 1, 2, 3. At scheduling time, we want to determine the granted data volumes G_{i,j}, where i = 1, ..., N and j = 1, 2, 3, that each queue may transmit during the next cycle.

Also, we consider three kinds of service. The real-time services, i.e. voice and video, have higher priority, and the non-real-time service, i.e. data, has lower priority. In our scheduling algorithm, we allocate as much resource as possible to the real-time services, that is,

    G^1 = min(R_max, Σ_{i=1}^{N} O_{i,1})  and  G^2 = min(R_max − G^1, Σ_{i=1}^{N} O_{i,2}),

where R_max is the available capacity per cycle and G^1 and G^2 are the total grants for voice and video service, respectively. Then we can derive the residual available capacity R as

    R = R_max − G^1 − G^2.
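The strict-priority step above might be sketched as follows; the function name and the aggregate (summed over ONUs) form of the grants are our assumptions:

```python
def allocate_realtime(o1, o2, r_max):
    """o1, o2: predicted P1 (voice) and P2 (video) occupancies per ONU.
    Returns (G1, G2, R): total voice grant, total video grant, and the
    residual capacity R = R_max - G1 - G2 left for data service."""
    g1 = min(r_max, sum(o1))        # voice is served first
    g2 = min(r_max - g1, sum(o2))   # video gets what voice left over
    return g1, g2, r_max - g1 - g2

g1, g2, r = allocate_realtime([100, 200], [300, 400], r_max=1500)
print(g1, g2, r)  # 300 700 500
```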

In the following sections, we introduce the scheduling mechanisms for the non-real-time data service.

They are similar to the ones introduced in Chapter 2, except that the input of the scheduler is replaced by the predicted queue occupancies. In Section 3.4.1 we explain the operation of the Hybrid PEQL-PQLP scheme, and in Section 3.4.2 a detailed description of the Hybrid PLQF-PQLP scheme is given.

3.4.1 Hybrid PEQL-PQLP Scheme

We rename the prediction-based Hybrid EQL-QLP scheme as the Hybrid prediction-based Equal Queue Length – prediction-based Queue Length Proportion (Hybrid PEQL-PQLP) scheme. The goal of the Hybrid PEQL-PQLP scheme is also to make the fairness index F, defined in equation (2.3), as close to 1 as possible. The idea of this scheme is similar to the previous one. The difference is that if the predicted queue occupancy of any data service queue exceeds the queue length threshold Qth, the mechanism switches to the PEQL scheme to balance the queue occupancies; otherwise, the PQLP mechanism is used.

Also, at the beginning of scheduling, the scheduler has the set of predicted queue occupancies, i.e. O_{i,3}, where i = 1, ..., N. The granted data volume of each data service queue is then determined as follows.

If the sum of the predicted queue occupancies of data service is smaller than the residual available capacity R, every data service queue is allowed to transmit a data volume up to its predicted queue occupancy. If the sum of the predicted queue occupancies is larger than R, but all of them are smaller than Qth, then the resource assigned to each data service queue is proportional to its predicted queue occupancy. However, if the sum of the predicted queue occupancies is larger than R and one or more of them exceed Qth, then the PEQL method is adopted. The method is described as follows.

Define an index set K = {k1, k2, ..., kn}, where K ⊆ {1, 2, ..., N} and each index kj satisfies O_{kj,3} > avg, with the common level avg chosen so that

    Σ_{j=1}^{n} (O_{kj,3} − avg) = R.

Then the resource assigned to each data service queue is

    G_{i,3} = O_{i,3} − avg  if i ∈ K,  and  G_{i,3} = 0  otherwise.

That is, each queue whose predicted occupancy is larger than avg is allocated the difference between its occupancy and avg, while no resource is assigned to the queues whose occupancies are smaller than avg.
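The PEQL rule above is a water-filling allocation: find a common level avg such that granting each queue its excess above avg exactly consumes R. A minimal sketch follows, with the level search being our implementation choice:

```python
def peql_grants(occupancies, r):
    """Grant max(O_i - avg, 0) to each data queue, with avg chosen so the
    grants sum to the residual capacity R (or grant everything if R suffices)."""
    o = sorted(occupancies, reverse=True)
    avg = 0.0
    for n in range(1, len(o) + 1):
        level = (sum(o[:n]) - r) / n          # candidate avg over the top n queues
        if n == len(o) or level >= o[n]:      # queues below the level stay at 0
            avg = max(level, 0.0)
            break
    return [max(q - avg, 0.0) for q in occupancies]

print(peql_grants([10, 6, 2], r=6))  # [5.0, 1.0, 0.0]
```

Queues longer than avg are trimmed down to avg (equalizing the remaining lengths, here 5, 5, 2), and queues already shorter than avg receive nothing, which is exactly the behavior described in the paragraph above.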

3.4.2 Hybrid PLQF-PQLP scheme

Also, we rename the prediction-based Hybrid LQF-QLP scheme as the Hybrid prediction-based Longest Queue First – prediction-based Queue Length Proportion (Hybrid PLQF-PQLP) scheme. The goal of the Hybrid PLQF-PQLP scheme is also to make the fairness index F, defined in equation (2.3), as close to 1 as possible. The inputs are the predicted queue occupancies, i.e. O_{i,3}, where i = 1, ..., N, and the residual available capacity R; the outputs are the granted data volumes, i.e. G_{i,3}, where i = 1, ..., N. The resource assignment is written as

    G_{i,3} = O_{i,3},                              if Σ_{k=1}^{N} O_{k,3} ≤ R,
    G_{i,3} = R · O_{i,3} / Σ_{k=1}^{N} O_{k,3},    if Σ_{k=1}^{N} O_{k,3} > R and O_{k,3} ≤ Qth for all k,

while the remaining case is handled by the two-part allocation described below.

Similarly, if the sum of the predicted queue occupancies of data service is smaller than the residual available capacity R, all the requested packets are allowed to be transmitted in the next cycle, because the resource is adequate for every data service queue. If the sum of the predicted queue occupancies is larger than R, but all of them are smaller than Qth, then the resource assigned to each data service queue is proportional to its requested data volume. However, if the sum of the predicted queue occupancies is larger than R and one or more of them exceed Qth, then the resource is assigned to each data service queue as follows.

Define the permutation function π: [1, N] → [1, N] such that O_{π(1),3} ≥ O_{π(2),3} ≥ ... ≥ O_{π(N),3}. Note that π maps the original indices to an ordering of the queues in descending order of predicted queue size.

In equation (3.12), the resource allocated to each data service queue consists of two parts. In the first part, the scheduler allocates to the emergent queues, whose predicted queue occupancies exceed the queue length threshold, an amount equal to the difference between the predicted queue occupancy and the threshold. In addition, the longest queue has the highest priority in sharing the resource.

In the second part, the scheduler allocates the remaining resource, denoted as R′ and derived by equation (3.13) as the resource left after the first-part allocation, to the ONUs again. Different from the first part, however, this allocation is proportional to the remaining queue occupancies.
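The two-part allocation above might be sketched as follows for the over-capacity case (sum of occupancies larger than R, with at least one queue above Qth); the function name and tie handling are our assumptions, not the thesis notation:

```python
def plqf_pqlp_grants(occupancies, r, q_th):
    """Part 1: in descending (longest-first) order, grant each queue above
    q_th its excess O_i - q_th while budget remains. Part 2: split the
    leftover R' in proportion to the remaining queue occupancies."""
    n = len(occupancies)
    grants = [0.0] * n
    budget = float(r)
    # Part 1: emergent queues, longest queue first.
    for i in sorted(range(n), key=lambda k: -occupancies[k]):
        if budget <= 0 or occupancies[i] <= q_th:
            break  # remaining queues are all at or below the threshold
        g = min(occupancies[i] - q_th, budget)
        grants[i] += g
        budget -= g
    # Part 2: remaining capacity R', shared proportionally to what is left.
    remaining = [occupancies[i] - grants[i] for i in range(n)]
    total = sum(remaining)
    if budget > 0 and total > 0:
        for i in range(n):
            grants[i] += budget * remaining[i] / total
    return grants
```

For example, plqf_pqlp_grants([12, 8, 4], r=10, q_th=10) first grants the emergent queue its excess of 2, then splits the remaining 8 in proportion 10:8:4; the grants sum exactly to R.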

3.5 Simulation Result

3.5.1 Simulation Environment

The system parameters are described as follows:

Parameter  Description                                                           Value
K          Number of samples referenced in the past history (prediction order)   3

Table 3.1: System parameters used in the prediction-based environment

The setting of the loads for all ONUs is the same as that described in Section 2.5.


The average loads of all P1 services, all P2 services, and ten of the P3 services remain fixed in every experiment, while the average loads of the other six P3 services vary from 15 Mbps to 80 Mbps across experiments. As a result, the average system load varies approximately from 600 Mbps to 1000 Mbps.

3.5.2 Simulation Result and Conclusion

During the simulation, we choose Qth = 0.7 in the Hybrid PLQF-PQLP scheme and Qth = 0.9 in the Hybrid PEQL-PQLP scheme, based on the results discussed in Section 2.5. First of all, we want to see the effect of adding a predictor to our scheduler, compared with the case in which no predictor is implemented. Figure 3.4 shows the system throughput of the two proposed scheduling schemes, each in its prediction-based and non-prediction-based versions. We can see that the system throughput of the prediction-based schemes is slightly lower than that of the non-prediction-based schemes, because prediction introduces a prediction error. In other words, the moving average prediction method cannot perfectly match the behavior (variance) of self-similar traffic.

Figure 3.4: The System Throughput (Tmax = 0.66ms)

Next, the difference in the average packet delay of voice service between the prediction-based and non-prediction-based schemes is shown in Figure 3.5, again with Tmax equal to 0.66 ms in each scheme. The result shows that the prediction-based scheme reduces the average packet delay further: it is half a cycle time in the prediction-based scheme, whereas it is one and a half cycle times in the non-prediction-based scheme.

Figure 3.5: Average packet delay of Voice Service (Tmax = 0.66ms)
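As a quick sanity check of the delay figures above (half a cycle versus one and a half cycles), the expected averages at Tmax = 0.66 ms work out as follows:

```python
# Average voice delay as a multiple of the cycle time, per the discussion above.
T_MAX_MS = 0.66

delay_prediction = 0.5 * T_MAX_MS      # prediction-based: half a cycle
delay_no_prediction = 1.5 * T_MAX_MS   # non-prediction-based: 1.5 cycles

print(f"prediction-based:     {delay_prediction:.2f} ms")     # 0.33 ms
print(f"non-prediction-based: {delay_no_prediction:.2f} ms")  # 0.99 ms
```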

Then, the fairness of packet blocking probability and of packet delay are illustrated in Figure 3.6 and Figure 3.7, respectively. Figure 3.6 shows that the fairness in packet blocking probability is improved by the prediction-based schemes. Because additional arriving packets are considered in our schemes, the predicted queue occupancy may exceed the buffer size, so high-loading ONUs can request more resource than under the schemes with no predictor. As a result, the difference in packet blocking probability between high-loading and low-loading ONUs decreases. In Figure 3.7, the results show that the fairness is better in the prediction-based schemes than in the non-prediction-based schemes at the beginning of the curve, because queue occupancies closer to the real ones are taken into account. When the system load is high, the performance of the prediction-based schemes decreases, even below that of the schemes with no predictor. The reason is as follows. Under high load, the predicted queue occupancy exceeds the buffer size; arriving packets are blocked when the buffer overflows, and the system ignores the packet delay of blocked packets. This ignored delay of blocked packets results in a more unfair environment. Nevertheless, if we combine the two fairness indexes using the definition of the overall fairness index, we find that the performance is better in the prediction-based environment than in the non-prediction-based environment.

Figure 3.6: Packet blocking probability fairness index of data service (Tmax = 0.66ms)

Figure 3.7: Packet delay fairness index of data service (Tmax = 0.66ms)

As shown in Figure 3.5, the average packet delay of voice service in the prediction-based schemes is much lower than the delay criterion. Thus, the maximum cycle time in the prediction-based environment can be extended. We now set the maximum cycle time to 2 ms and examine the difference compared with Tmax = 0.66 ms.

First, we check the average packet delay of voice service when the maximum cycle time is extended. The result is shown in Figure 3.8: both algorithms keep the average packet delay below the specified delay bound (1.5 ms). With this basic requirement satisfied, we can then examine the other performance measures.

Figure 3.8: Average Packet Delay of Voice Service

In Figure 3.9, the system throughput is examined, where we define the system throughput as the ratio between the traffic load injected into the ONUs and the traffic load transmitted from the ONUs to the OLT, normalized to the maximum system bandwidth. Besides the four proposed schemes, we also compare the performance with the limited service proposed in [7]. We can find that by adopting the prediction-based schemes, the throughput can

be improved by about 7% compared with the non-prediction-based schemes. The reason is that with the prediction-based schemes, Tmax can be extended further than with the non-prediction-based schemes, so the percentage of overhead, such as control messages and guard times, becomes smaller. Additionally, we can see that the Hybrid (P)LQF-(P)QLP scheme achieves better system throughput than the Hybrid (P)EQL-(P)QLP scheme, because the packet truncation error is smaller in the (P)LQF-(P)QLP scheme than in the (P)EQL-(P)QLP scheme. In the IPACT scheme, the total data volume, including real-time and non-real-time service, that an ONU can transmit during one cycle is constrained by Lmax. Even if high-loading ONUs request resource exceeding Lmax, the OLT still grants these ONUs only Lmax. Because the OLT grants some ONUs the full Lmax but grants the others less than Lmax, the average cycle time is smaller than Tmax. As a result, the throughput of limited service is lower than that of the proposed prediction-based hybrid schemes.

Figure 3.9: The System Throughput

The performance of the packet blocking probability fairness index is shown in Figure 3.10, where we compare the two proposed prediction-based schemes and limited service. We can see that the Hybrid PLQF-PQLP scheme achieves better fairness of packet blocking probability than the other two schemes. With the Hybrid PLQF-PQLP scheme, the high-loading ONUs can always obtain resource, so the packet blocking probability is lower in high-loading ONUs and higher in low-loading ONUs. As a result, the difference in packet blocking probability between high-loading and low-loading ONUs is the smallest among the three schemes. By adopting Hybrid

