
Chapter 3 Prediction-Based Scheduling Algorithms


The average loads of all P1 services, all P2 services, and ten of the P3 services remain fixed in every experiment, while the average loads of the other six P3 services vary from 15 Mbps to 80 Mbps across experiments. As a result, the average system load varies approximately from 600 Mbps to 1000 Mbps.

3.5.2 Simulation Result and Conclusion

During the simulation, we choose $Q_{th} = 0.7$ in the Hybrid PLQF-PQLP scheme and $Q_{th} = 0.9$ in the Hybrid PEQL-PQLP scheme, based on the results discussed in Section 2.5. First, we examine the effect of adding a predictor to our scheduler, compared with the case in which no predictor is implemented. Figure 3.4 shows the system throughput of the two proposed scheduling schemes. For each scheme, the prediction-based version is compared with the non-prediction-based version. We can see that the system throughput of the prediction-based schemes is slightly lower than that of the non-prediction-based schemes, because the predictor introduces prediction error. In other words, the moving-average prediction method cannot perfectly match the behavior (variance) of self-similar traffic.
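As an aside, the moving-average predictor discussed above can be sketched as follows. This is only an illustrative sketch: the window length and the per-cycle arrival counts are hypothetical, not values taken from the thesis simulations.

```python
from collections import deque

class MovingAveragePredictor:
    """Predict next-cycle arrivals as the mean of the last few
    per-cycle arrival counts (simple moving average)."""

    def __init__(self, window=8):
        self.samples = deque(maxlen=window)

    def observe(self, arrived_bytes):
        # Record the arrivals measured during the cycle just finished.
        self.samples.append(arrived_bytes)

    def predict(self):
        # With no history yet, predict zero additional arrivals.
        if not self.samples:
            return 0.0
        return sum(self.samples) / len(self.samples)

# The OLT would add the prediction to the reported queue occupancy
# before granting, e.g. predicted_queue = Q_i + predictor.predict().
predictor = MovingAveragePredictor(window=4)
for arrivals in [1200, 800, 1000, 1400]:
    predictor.observe(arrivals)
print(predictor.predict())  # -> 1100.0
```

Because the window average lags behind a bursty self-similar input, this simple predictor over- or under-shoots whenever the variance is large, which is exactly the prediction error observed in Figure 3.4.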

Figure 3.4: The System Throughput (Tmax = 0.66ms)

Next, the difference in average packet delay of the voice service between the prediction-based and non-prediction-based schemes is shown in Figure 3.5. Again, we set Tmax to 0.66 ms in each scheme. The result shows that the prediction-based scheme reduces the average packet delay further: the average packet delay of the prediction-based scheme is half a cycle, whereas it is one and a half cycles in the non-prediction-based scheme.
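With $T_{\max} = 0.66$ ms, the stated half-cycle and one-and-a-half-cycle relations give the following worked values:

```latex
\bar{D}_{\mathrm{pred}} = \tfrac{1}{2}\, T_{\max} = 0.33\ \mathrm{ms},
\qquad
\bar{D}_{\mathrm{non\text{-}pred}} = \tfrac{3}{2}\, T_{\max} = 0.99\ \mathrm{ms}.
```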

Figure 3.5: Average packet delay of Voice Service (Tmax = 0.66ms)

Next, the fairness of packet blocking probability and of packet delay is illustrated in Figures 3.6 and 3.7, respectively. Figure 3.6 shows that the fairness of packet blocking probability is improved in the prediction-based schemes. Because our schemes account for additional arriving packets, the predicted queue occupancy may exceed the buffer size. Thus, high-loading ONUs can request more resources than in the schemes with no predictor. As a result, the difference in packet blocking probability between high-loading and low-loading ONUs decreases. In Figure 3.7, the fairness is better in the prediction-based schemes than in the non-prediction-based schemes at the beginning of the curve, because queue occupancies closer to the real ones are taken into account. When the system load is high, however, the performance of the prediction-based schemes decreases, even falling below that of the schemes with no predictor. The reason is as follows. Under high load, the predicted queue occupancy exceeds the buffer size, arriving packets are blocked when the buffer overflows, and the system ignores the packet delay of blocked packets. Ignoring the delay of blocked packets therefore results in a less fair environment. If we combine the two fairness indexes using the definition of the overall fairness index, we find that the performance is better in the prediction-based environment than in the non-prediction-based environment.

Figure 3.6: Packet blocking probability fairness index of data service (Tmax = 0.66ms)

Figure 3.7: Packet delay fairness index of data service (Tmax = 0.66ms)

As shown in Figure 3.5, the average packet delay of the voice service in the prediction-based schemes is much lower than the delay criterion. Thus, the maximum cycle time in the prediction-based environment can be extended. We now set the maximum cycle time to 2 ms and examine the difference compared with Tmax = 0.66 ms.

We first check the average packet delay of the voice service with the extended maximum cycle time. The result is shown in Figure 3.8: both algorithms keep the average packet delay below the specified delay bound (1.5 ms). With this basic requirement satisfied, we can then observe the other performance metrics.

Figure 3.8: Average Packet Delay of Voice Service

In Figure 3.9, the system throughput is observed, where we define the system throughput as the ratio between the traffic load injected into the ONUs and the traffic load transmitted from the ONUs to the OLT, normalized to the maximum system bandwidth. In this figure, besides the four proposed schemes, we also compare the performance with the limited service proposed in [7]. We find that the prediction-based schemes improve the throughput by about 7% over the non-prediction-based schemes. The reason is that with prediction, Tmax can be extended beyond that of the non-prediction-based schemes.

The percentage of overhead, such as control messages and guard times, therefore becomes smaller. Additionally, the Hybrid (P)LQF-(P)QLP scheme achieves better system throughput than the Hybrid (P)EQL-(P)QLP scheme, because the packet truncation error is smaller in (P)LQF-(P)QLP than in (P)EQL-(P)QLP. In the IPACT scheme, the total data volume, including real-time and non-real-time services, that an ONU can transmit during one cycle is constrained by Lmax. Even when high-loading ONUs request resources exceeding Lmax, the OLT grants them only up to Lmax. Because the OLT grants some ONUs the full Lmax but grants the others less than Lmax, the average cycle time is smaller than Tmax. As a result, the throughput is lower than that of the proposed prediction-based hybrid schemes.
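The overhead argument can be illustrated numerically. The guard-time and control-message figures below are hypothetical placeholders chosen only to show the trend, not values from the simulation environment.

```python
def overhead_fraction(t_cycle_s, n_onus, guard_s, control_s):
    """Fraction of a polling cycle spent on per-ONU guard times and
    control messages rather than payload transmission."""
    return n_onus * (guard_s + control_s) / t_cycle_s

# Hypothetical figures: 16 ONUs, 1 us guard time, 0.5 us control message.
short = overhead_fraction(0.66e-3, 16, 1e-6, 0.5e-6)
long_ = overhead_fraction(2.0e-3, 16, 1e-6, 0.5e-6)
print(f"Tmax=0.66ms: {short:.1%}, Tmax=2ms: {long_:.1%}")
```

Since the per-cycle overhead is fixed while the cycle grows, extending Tmax from 0.66 ms to 2 ms shrinks the overhead fraction proportionally, which is the source of the throughput gain.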

Figure 3.9: The System Throughput

The packet blocking probability fairness index is shown in Figure 3.10. We compare the two proposed prediction-based schemes and the limited service together. The Hybrid PLQF-PQLP scheme achieves better fairness of packet blocking probability than the other two schemes. With the Hybrid PLQF-PQLP scheme, the high-loading ONUs can always obtain resources, so their packet blocking probability is lower while that of the low-loading ONUs is higher. As a result, the difference in packet blocking probability between high-loading and low-loading ONUs is the smallest of the three schemes. With the Hybrid PEQL-PQLP scheme, when the system load is high the scheduler tries to balance the occupancies of all queues, so the low-loading ONUs have more chance to obtain resources, and the difference in packet blocking probability between high-loading and low-loading ONUs is larger than with the PLQF-PQLP scheme. With the limited-service scheme, the packet blocking probabilities of high-loading and low-loading ONUs are independent of each other: the packet blocking probability of high-loading ONUs increases with the system load, whereas that of low-loading ONUs remains small even as the system load increases. Thus, the difference in packet blocking probability between high-loading and low-loading ONUs is large and the fairness is low.

Figure 3.10: Fairness index of packet blocking probability

The packet delay fairness index is shown in Figure 3.11. The PEQL-PQLP scheme is fairer than the PLQF-PQLP scheme, because the scheduler can assign more resources to the low-loading ONUs when PEQL-PQLP is adopted. For the limited-service scheme, the average packet delays of high-loading and low-loading ONUs are likewise independent of each other, so its packet delay fairness is poor.

Finally, combining the packet delay fairness and the packet blocking probability fairness, we obtain the overall fairness index shown in Figure 3.12. The PLQF-PQLP scheme is better than the other two schemes.

Figure 3.11: Fairness index of packet delay

Figure 3.12: Overall fairness index of data service

3.6 Concluding Remarks

In this chapter, we proposed a prediction-based scheduler architecture. A moving-average method is chosen to estimate the number of packets arriving during a cycle. The results show that the prediction error slightly decreases the throughput but reduces the average packet delay, especially for the voice service. With the reduced delay, the maximum cycle time can be extended to improve the system throughput.

In the simulation, we can see that the moving-average method cannot perfectly estimate the behavior of self-similar traffic. Because the variation of self-similar traffic is large, the predictor frequently overestimates. We believe that better predictors exist that can reduce the prediction error in this model.

Chapter 4 Conclusion

In this thesis, we first proposed, in Chapter 2, a scheduling method suitable for EPON access networks. Three classes of service are considered, i.e., real-time voice, real-time video, and non-real-time data service. For the real-time services, delay-considered scheduling is introduced, in which the average delay of voice packets is taken into account. Then, for the non-real-time service, the fairness-considered scheduling method is discussed. We proposed two scheduling algorithms to obtain the overall fairness, which combines the fairness of packet delay with the fairness of packet blocking probability: the Hybrid EQL-QLP and Hybrid LQF-QLP schemes.

Each scheme combines two basic sub-schemes, with a queue-length threshold as the adjusting parameter. The basic requirement of our scheme is to maintain the delay bound of the voice service. Under this condition, we try to improve the fairness of packet delay and the fairness of packet blocking probability for the non-real-time data service.

Simulation results show that, by adopting the proposed schemes, the average packet delay of the voice service can be guaranteed and the overall fairness of the data service can be improved compared with traditional scheduling schemes such as QLP and LQF. They also show that the Hybrid LQF-QLP scheme performs better than the Hybrid EQL-QLP scheme. We conclude that the proposed schemes not only maintain the QoS criterion of the voice service but also support good fairness for the non-real-time service in terms of packet delay and packet blocking probability.

In Chapter 3, we proposed a prediction-based scheduler architecture. A moving-average method is chosen to estimate the number of packets arriving during a cycle. The results show that the prediction error slightly decreases the throughput but reduces the average packet delay, especially for the voice service. With the reduced delay, the maximum cycle time can be extended to improve the system throughput.

In the simulation, we can see that the moving-average method cannot perfectly estimate the behavior of self-similar traffic. Because the variation of self-similar traffic is large, the predictor frequently overestimates. However, the prediction of real-time traffic allows an extended cycle time.

When the cycle time is extended, the system throughput can be improved compared with the non-prediction-based schemes. Thus, we believe the moving average is a cost-effective solution for improving the system throughput in an EPON environment.

Appendix A

LRD and Self-Similar

The characteristics of LRD and self-similarity can be specified as follows. Let $X(t)$ be a wide-sense stationary stochastic process, i.e., a process with a stationary mean $\mu = E[X(t)]$ and an auto-covariance function $\gamma(k)$ that depends only on the lag $k$. Let $X^{(m)}(t)$ denote the aggregated process obtained by averaging $X(t)$ over non-overlapping blocks of size $m$:

$$X^{(m)}(t) = \frac{1}{m}\sum_{i=(t-1)m+1}^{tm} X(i). \qquad (A.1)$$

The process is exactly (second-order) self-similar if the auto-covariance function is preserved across different time scales, i.e., $\gamma^{(m)}(k)$ equals $\gamma(k)$ for all $m$ and $k$. The process $X(t)$ is said to be asymptotically self-similar if $\gamma^{(m)}(k) \to \gamma(k)$ as $m \to \infty$. The measure of a process's self-similarity is the Hurst parameter $H$, where $\tfrac{1}{2} < H < 1$. Self-similarity can be viewed as the ability of an aggregated process to "preserve" the burstiness of the original process, i.e., the property of slowly decaying variance

$$\operatorname{var}\!\left[X^{(m)}(t)\right] \sim m^{2H-2}. \qquad (A.2)$$

The property of long-range dependence refers to a non-summable auto-correlation function $\rho(k)$:

$$\sum_{k=1}^{\infty} \rho(k) = \infty. \qquad (A.3)$$

The long-range dependence results from a heavy-tailed distribution of the corresponding stochastic process. In a heavy-tailed distribution, the decay obeys a power law:

$$P[X > x] \sim c\,x^{-\alpha}, \quad \text{as } x \to \infty \text{ and } 1 < \alpha < 2. \qquad (A.4)$$

As a result, the probability of an extremely large observation in LRD process is non-negligible.

On the other hand, this means that extremely large bursts of data (packet trains) and extremely long periods of silence (inter-arrival times) will occur from time to time.
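As an illustrative aside (not part of the thesis simulations), the slowly-decaying-variance property in (A.2) suggests the classical variance-time estimate of $H$: the slope of $\log \operatorname{var}[X^{(m)}]$ versus $\log m$ is $2H-2$. The sketch below applies it to i.i.d. noise, where $H$ should come out near $0.5$.

```python
import math
import random

def aggregate(series, m):
    """Non-overlapping block means, i.e. the aggregated process X^(m)."""
    n = len(series) // m
    return [sum(series[i * m:(i + 1) * m]) / m for i in range(n)]

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def hurst_variance_time(series, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate H from the least-squares slope of log var[X^(m)]
    versus log m, using var[X^(m)] ~ m^(2H-2) from (A.2)."""
    xs = [math.log(m) for m in block_sizes]
    ys = [math.log(variance(aggregate(series, m))) for m in block_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1 + slope / 2

random.seed(0)
iid = [random.gauss(0, 1) for _ in range(4096)]
print(round(hurst_variance_time(iid), 2))  # near 0.5 for i.i.d. noise
```

A genuinely long-range-dependent trace would instead give a shallower slope and an estimate in the range $\tfrac{1}{2} < H < 1$.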

Appendix B

Reference Scheduling Algorithms

The reference schemes are observed and compared with the proposed algorithms. These reference schemes are limited service [7], Queue Length Proportional (QLP) [11], and Longest Queue First (LQF) [12]. Before presenting the proposed algorithms, we introduce these schemes first.

At the beginning of Section 2.4, we define the queue occupancy of class $j$ service in ONU $i$ as $Q_i^j$ and the granted data volume of class $j$ service in ONU $i$ as $G_i^j$. By using equation (2.10), the residual available capacity $R$ for the best-effort data service can be derived. Then, by adopting limited service, the granted transmission data volume of the best-effort data service is:

$$G_i^3 = \min\!\left(L_{\max} - G_i^1 - G_i^2,\; Q_i^3\right), \quad i = 1, 2, \ldots, N. \qquad (A.5)$$

The limited-service discipline grants the requested number of bytes, but no more than the constraint $\left(L_{\max} - G_i^1 - G_i^2\right)$. That means the overall data volume, including real-time packets and non-real-time packets, that an ONU can transmit in one cycle will not exceed the bound $L_{\max}$. If the QLP scheme is adopted, the granted transmission data volume assigned to each best-effort data service is proportional to the requested data volume, that is

$$G_i^3 = \frac{Q_i^3}{\sum_{j=1}^{N} Q_j^3}\, R, \quad i = 1, 2, \ldots, N. \qquad (A.6)$$

The goal of the QLP scheme is to guarantee fair service: by adopting QLP, the average packet delay is the same for every ONU. Finally, we show the assignment method of the LQF scheme. We sort the requested data volumes $\{Q_1^3, Q_2^3, \ldots, Q_N^3\}$ from the maximum value to the minimum and serve the queues in that order until the residual capacity $R$ is exhausted.

By adopting the LQF scheme, the queue with the largest occupancy is assigned the highest priority, and the scheduler always serves the longest queue first. The goal of LQF is thus to reduce the longest queue length in order to absorb imminent bursty traffic.
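The three reference disciplines can be sketched as follows. The limited-service grant follows (A.5); the queue occupancies and capacities below are hypothetical values chosen only for illustration.

```python
def limited_service(q3, g12, l_max):
    """(A.5): grant min(Lmax - G^1 - G^2, Q^3) to each ONU,
    where g12[i] = G_i^1 + G_i^2."""
    return [min(l_max - g, q) for q, g in zip(q3, g12)]

def qlp(q3, r):
    """QLP: grant proportional to the requested data volume."""
    total = sum(q3)
    if total == 0:
        return [0.0] * len(q3)
    return [r * q / total for q in q3]

def lqf(q3, r):
    """LQF: serve the longest queues first until R is exhausted."""
    grants = [0.0] * len(q3)
    for i in sorted(range(len(q3)), key=lambda i: q3[i], reverse=True):
        grants[i] = min(q3[i], r)
        r -= grants[i]
    return grants

q3 = [500.0, 300.0, 200.0]   # hypothetical data-queue occupancies
print(qlp(q3, 600.0))        # [300.0, 180.0, 120.0]
print(lqf(q3, 600.0))        # [500.0, 100.0, 0.0]
```

The contrast is visible in the example: QLP shares the residual capacity in proportion to demand, whereas LQF exhausts it on the longest queues and may leave short queues unserved.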

Appendix C

Examples of EQL scheme

Assume that all the queue lengths are larger than the value $\mathrm{Avg}$, as shown in Figure A.1. Then the capacity each queue is allowed to transmit is:

$$G_i^3 = Q_i^3 - \mathrm{Avg}, \quad i = 1, 2, \ldots, N, \qquad (A.7)$$

where $R$ is the available residual resource for the data service. Obviously, after allocating bandwidth, the queue occupancies are the same, i.e., $Q_i^3 = \mathrm{Avg}$ for $i = 1, 2, \ldots, N$, and the packet blocking probability is decreased. The average queue occupancy $\mathrm{Avg}$ is

$$\mathrm{Avg} = \frac{\sum_{i=1}^{N} Q_i^3 - R}{N}. \qquad (A.8)$$

Figure A.1: First example of EQL scheme

Moreover, when the occupancies of the queues differ greatly, it is possible that we cannot achieve the goal of equal occupancies, as shown in Figure A.2. Suppose the queue with the shortest occupancy is $Q_k^3$; the optimal allocation is to exclude that queue and allocate the total available resource to the others. That is

$$G_k^3 = 0, \qquad G_i^3 = Q_i^3 - \mathrm{Avg}', \quad i \neq k. \qquad (A.9)$$

After allocating bandwidth, the new average queue occupancy $\mathrm{Avg}'$ is:

$$\mathrm{Avg}' = \frac{\sum_{i \neq k} Q_i^3 - R}{N-1}. \qquad (A.10)$$

Figure A.2: Second example of EQL mechanism

If we still cannot achieve the goal of equal occupancy, we then pick the second smallest queue out and allocate the resource to the others. This step may be repeated several times until we find the optimal solution. Obviously, by adopting the EQL mechanism, balance between the queue occupancies is achieved, and the probability of packet dropping at urgent queues can be decreased.

The EQL mechanism described above can also be written in mathematical form as follows:

$$Q_1^3 - G_1^3 = Q_2^3 - G_2^3 = \cdots = Q_N^3 - G_N^3, \qquad \sum_{i=1}^{N} G_i^3 = R. \qquad (A.11)$$

One or more of the solutions, denoted $G_j^3$ with $j \le N$, may be negative. In this situation, we set the most negative one, $G_j^3 = \min_{l}\{G_l^3\}$, to zero and recalculate the grant transmission capacities in equation (A.11). The number of variables and equations then becomes $N-1$. This operation is repeated until all the grant transmission capacities we solve are nonnegative.
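The iterative solution of (A.11) can be sketched as follows. The occupancies and residual capacity are hypothetical, and the sketch assumes $R \le \sum_i Q_i^3$ so that a nonnegative solution exists.

```python
def eql_grants(q3, r):
    """Solve (A.11) iteratively: equalize post-grant occupancies,
    dropping the most negative grant (the shortest queue) and
    re-solving until every grant is nonnegative."""
    active = list(range(len(q3)))
    grants = [0.0] * len(q3)
    while active:
        # Post-grant occupancy if all active queues are equalized (A.8/A.10).
        avg = (sum(q3[i] for i in active) - r) / len(active)
        candidate = {i: q3[i] - avg for i in active}
        worst = min(active, key=lambda i: candidate[i])
        if candidate[worst] >= 0:
            for i in active:
                grants[i] = candidate[i]
            break
        active.remove(worst)  # the shortest queue gets no grant
    return grants

# Hypothetical occupancies, residual capacity R = 90:
print(eql_grants([100.0, 60.0, 20.0], 90.0))  # [65.0, 25.0, 0.0]
```

In this example the first pass gives a negative grant for the shortest queue ($20 - 30 = -10$), so it is excluded and the remaining two queues are equalized at occupancy 35, matching the second example in Figure A.2.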

Bibliography

[1] G. Kramer and G. Pesavento, “Ethernet Passive Optical Network (EPON): Building a Next-Generation Optical Access Network,” IEEE Commun. Mag., vol. 40, pp. 66-73, Feb. 2002.

[2] H. Ueda, K. Okada, B. Ford, G. Mahony, S. Hornung, D. Faulkner, J. Abiven, S. Durel, R. Ballart, and J. Erickson, “Deployment status and common technical specifications for a B-PON system,” IEEE Commun. Mag., vol. 39, pp. 134-141, Dec. 2001.

[3] Jingshown Wu, F. R. Gu, and H. W. Tsao, “Jitter performance analysis of SOCDMA-based EPON using perfect difference codes,” IEEE Journal of Lightwave Technology, vol. 22, no. 5, pp. 1309-1319, May 2004.

[4] G. Kramer, B. Mukherjee, and G. Pesavento, “Ethernet PON (ePON): design and analysis of an optical access network,” Photonic Network Commun., vol. 3, no. 3, pp. 307-319, July 2001.

[5] G. Kramer, B. Mukherjee, and G. Pesavento, “IPACT: A Dynamic Protocol for an Ethernet PON (EPON),” IEEE Commun. Mag., vol. 40, no. 2, pp. 74-80, Feb. 2002.

[6] K. Rege et al., “QoS management in trunk-and-branch switched Ethernet networks,” IEEE Commun. Mag., vol. 40, pp. 30-36, Dec. 2002.

[7] G. Kramer, B. Mukherjee, S. Dixit, Y. Ye, and R. Hirth, “Supporting differentiated classes of service in Ethernet passive optical networks,” J. Opt. Networks, vol. 1, no. 8/9, pp. 280-298, Aug. 2002.

[8] S. Choi and J. Huh, “Dynamic bandwidth allocation algorithm for multimedia services over Ethernet PONs,” ETRI J., vol. 24, no. 6, pp. 465-468, Dec. 2002.

[9] M. Ma, Y. Zhu, and T. H. Cheng, “A bandwidth guaranteed polling MAC protocol for Ethernet passive optical networks,” in Proc. IEEE INFOCOM, San Francisco, CA, Mar.-Apr. 2003, pp. 22-31.

[10] C. M. Assi, Y. Ye, S. Dixit, and M. A. Ali, “Dynamic bandwidth allocation for quality-of-service over Ethernet PONs,” IEEE Journal on Selected Areas in Commun., vol. 21, no. 9, pp. 1467-1477, Nov. 2003.

[11] D. Liu, N. Ansari, and E. Hou, “QLP: A Joint Buffer Management and Scheduling Scheme for Input Queued Switches,” IEEE Workshop on High Performance Switching and Routing, pp. 29-31, May 2001.

[12] D.-S. Lee, “Generalized longest queue first: An adaptive scheduling discipline for ATM networks,” in Proc. IEEE INFOCOM ’97, vol. 3, pp. 1096-1104, 1997.

[13] Mu Si, Ding Quanlong, and Ko Chi Chung, “Improving the network performance using prediction based longest queue first (PLQF) scheduling algorithm,” IEEE International Conference on ATM and High Speed Intelligent Internet Symposium, pp. 344-348, Apr. 2001.

[14] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, “An Architecture for Differentiated Services,” IETF, RFC 2475, Dec. 1998.

[15] Virtual Bridged Local Area Networks, IEEE Standard 802.1Q, 1998.

[16] IEEE standard 802.3ah task force home page [Online]. Available: http://www.ieee802.org/3/fem

[17] A. Adas, “Traffic models in broadband networks,” IEEE Commun. Mag., vol. 35, no. 7, pp. 82-89, July 1997.

[18] W. Willinger, M. Taqqu, R. Sherman, and D. Wilson, “Self-similarity through high-variability: statistical analysis of Ethernet LAN traffic at the source level,” in Proc. ACM SIGCOMM ’95, Cambridge, MA, Aug. 1995, pp. 100-113.

[19] W. Leland, M. Taqqu, W. Willinger, and D. Wilson, “On the Self-Similar Nature of Ethernet Traffic (Extended Version),” IEEE/ACM Transactions on Networking, vol. 2, no. 1, pp. 1-15, Feb. 1994.

[20] R. Jain, The Art of Computer Systems Performance Analysis, John Wiley and Sons Inc, 1991.

Vita

Name: 彭崇禎

Education:
2002-2004 Institute of Communication Engineering, National Chiao Tung University
2000-2002 Department of Electrical Engineering, National Taiwan University of Science and Technology
1995-2000 Department of Electrical Engineering, National Taipei University of Technology

E-mail: jackpeng.cm91g@nctu.edu.tw
