
Chapter 5. Experiments

5.2 Experimental Results

In scenario 1, the number of tasks was varied to compare the makespan of the seven scheduling algorithms. As shown in Figure 5.1 ~ Figure 5.3, the experimental results show that as the number of tasks increased, the time to finish the tasks increased as well. In all situations, the GA-based scheduling algorithms performed much better than the other conventional scheduling algorithms. For example, compared with Random, Figure 5.1 shows that when the number of tasks was 2100 and the number of nodes was 20, Z-GA shortened the makespan by 44% (202510 sec. less), GDSA shortened the makespan by 45% (204955 sec. less), and EDSA shortened the makespan by 49% (223044 sec. less). Figure 5.2 shows that when the number of tasks was 2100 and the number of nodes was 40, Z-GA shortened the makespan by 46% (129647 sec. less), GDSA by 38% (108431 sec. less), and EDSA by 55% (15629 sec. less). Further, Figure 5.3 shows that when the number of tasks was 2100 and the number of nodes was 60, Z-GA shortened the makespan by 49% (108186 sec. less), GDSA by 41% (90980 sec. less), and EDSA by 60% (132165 sec. less).
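For reference, the percentage improvements quoted throughout this chapter can be read as the relative reduction with respect to the baseline makespan. A minimal sketch of that calculation (with illustrative numbers, not values taken from the figures) is:

```python
def makespan_reduction(baseline_sec: float, algorithm_sec: float):
    """Return (seconds saved, percentage reduction) of a schedule versus a baseline."""
    saved = baseline_sec - algorithm_sec          # e.g. Random vs. a GA-based schedule
    return saved, 100.0 * saved / baseline_sec    # percentage of the baseline makespan

# Illustrative numbers only (not taken from the figures):
saved, pct = makespan_reduction(460000.0, 255000.0)
print(f"{saved:.0f} sec. less, {pct:.0f}% shorter makespan")
```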

In Figure 5.1, the difference in performance between GDSA and Z-GA is small throughout the experiments. Z-GA performed better than GDSA in the cases of 300 and 600 tasks; for example, compared with GDSA, Figure 5.1 shows that when the number of tasks was 300 and the number of nodes was 20, Z-GA shortened the makespan by 13% (5191 sec. less).


Conversely, GDSA performed better than Z-GA when the number of tasks was larger than 900; for example, compared with Z-GA, Figure 5.1 shows that when the number of tasks was 2100 and the number of nodes was 20, GDSA shortened the makespan by 10% (2445 sec. less). However, in Figure 5.2 and Figure 5.3, when the number of computing nodes increased to 40 and 60, the performance of GDSA decreased; for example, compared with GDSA, Figure 5.2 shows that when the number of tasks was 2100 and the number of nodes was 40, Z-GA shortened the makespan by 12% (21216 sec. less), and Figure 5.3 shows that when the number of tasks was 2100 and the number of nodes was 60, Z-GA shortened the makespan by 13% (17206 sec. less). With a larger number of computing nodes, the solution space of GDSA grows and its search efficiency is affected. In contrast, even as the number of computing nodes increased, EDSA performed best throughout the experiments. Compared with Z-GA, Figure 5.1 shows that when the number of tasks was 300, EDSA shortened the makespan by 9% (3065 sec. less). When the number of tasks increased to 2100, EDSA shortened the makespan by 8% (20500 sec. less).

Figure 5.2 shows that when the number of computing nodes increased to 40, EDSA shortened the makespan by 22% (4869 sec. less) in the case of 300 tasks and by 17% (26882 sec. less) in the case of 2100 tasks. Even when the number of computing nodes increased to 60, Figure 5.3 shows that EDSA shortened the makespan by 31% (5500 sec. less) in the case of 300 tasks and by 22% (23979 sec. less) in the case of 2100 tasks. Thus, the experiments show that EDSA performed better than Z-GA throughout. This is because EDSA uses different crossover and mutation operators and changes their probabilities dynamically, rather than only changing the probabilities after a fixed number of generations. Because it is hard to predict whether the fitness value is going to stay invariant or converge, changing the crossover and mutation probabilities according to a fixed number of generations is not suitable.
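A minimal sketch of this dynamic adjustment is shown below. It assumes, purely for illustration, that the controller watches the variance of the best fitness value over a recent window and raises the mutation probability (while easing the crossover probability) once the fitness appears invariant; the window size, thresholds, and bounds are hypothetical and are not the parameters used in the thesis.

```python
from statistics import pvariance

def adapt_probabilities(best_fitness_history, p_cross, p_mut,
                        window=10, stagnation_var=1e-6,
                        p_cross_min=0.5, p_mut_max=0.3):
    """Adjust crossover/mutation probabilities from the recent fitness trend
    instead of after a fixed number of generations (illustrative thresholds)."""
    if len(best_fitness_history) < window:
        return p_cross, p_mut                      # not enough history yet
    recent = best_fitness_history[-window:]
    if pvariance(recent) < stagnation_var:         # fitness nearly invariant: likely stuck
        p_mut = min(p_mut * 1.5, p_mut_max)        # explore more aggressively
        p_cross = max(p_cross * 0.9, p_cross_min)  # rely a bit less on recombination
    return p_cross, p_mut
```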


Figure 5.1 Makespan for different numbers of tasks (nodes = 20)

Figure 5.2 Makespan for different numbers of tasks (nodes = 40)

Figure 5.3 Makespan for different numbers of tasks (nodes = 60)


In scenario 2, the number of computing nodes was varied to compare the makespan of the seven scheduling algorithms. As shown in Figure 5.4 ~ Figure 5.7, because of the use of GA, Z-GA, GDSA, and EDSA achieved respectable performance. For example, compared with Random, Figure 5.4 shows that when the number of tasks was 500 and the number of nodes was 60, Z-GA shortened the makespan by 55% (32468 sec. less), GDSA by 43% (25327 sec. less), and EDSA by 67% (39354 sec. less). Figure 5.5 shows that when the number of tasks was 1000 and the number of nodes was 60, Z-GA shortened the makespan by 49% (48941 sec. less), GDSA by 32% (31749 sec. less), and EDSA by 63% (63038 sec. less). Figure 5.6 shows that when the number of tasks was 1500 and the number of nodes was 60, Z-GA shortened the makespan by 48% (69460 sec. less), GDSA by 36% (51737 sec. less), and EDSA by 59% (85312 sec. less). Figure 5.7 shows that when the number of tasks was 2000 and the number of nodes was 60, Z-GA shortened the makespan by 49% (98697 sec. less), GDSA by 41% (83119 sec. less), and EDSA by 58% (116689 sec. less).

Moreover, when the number of computing nodes was small and the number of tasks was large, GDSA performed better than Z-GA; for example, compared with Z-GA, Figure 5.7 shows that GDSA shortened the makespan by 5% (27187 sec. less) in the case of 10 nodes and by 4% (9459 sec. less) in the case of 20 nodes. However, as mentioned in scenario 1, the search efficiency of GDSA was affected when the number of computing nodes increased; for example, compared with GDSA, Figure 5.7 shows that Z-GA shortened the makespan by 16% (23376 sec. less) in the case of 50 nodes and by 13% (15578 sec. less) in the case of 60 nodes. From the above results, we can observe that as the number of computing nodes increases, the search efficiency of GDSA decreases because of the larger solution space.
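To make the solution-space argument concrete: under the GDSA encoding summarized in the conclusion (one gene per computing node, storing the number of tasks that node processes), the possible gene vectors for T tasks on N nodes correspond to the compositions of T into N non-negative parts. The small illustrative count below is a sketch under that assumption, not code from the simulator:

```python
from math import comb

def gdsa_assignment_count(tasks: int, nodes: int) -> int:
    """Number of distinct gene vectors when each gene stores only a per-node task
    count, i.e. compositions of `tasks` into `nodes` non-negative parts."""
    return comb(tasks + nodes - 1, nodes - 1)

# For a fixed batch of 2100 tasks the space grows steeply with the node count:
for n in (20, 40, 60):
    print(n, gdsa_assignment_count(2100, n))
```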

Compared with Z-GA, Figure 5.4 shows that EDSA shortened the makespan by 7% (9506 sec. less) in the case of 10 nodes and by 26% (6885 sec. less) in the case of 60 nodes. In Figure 5.5, when the number of scheduled tasks was 1000, EDSA shortened the makespan by 6% (16815 sec. less) in the case of 10 nodes and by 27% (14098 sec. less) in the case of 60 nodes. In Figure 5.6, when the number of scheduled tasks was 1500, EDSA shortened the makespan by 7% (26001 sec. less) in the case of 10 nodes and by 21% (15851 sec. less) in the case of 60 nodes. In Figure 5.7, when the number of scheduled tasks increased to 2000, EDSA shortened the makespan by 7% (39024 sec. less) in the case of 10 nodes and by 18% (17990 sec. less) in the case of 60 nodes. Thus, it was observed that, owing to the use of different operators and dynamic probabilities, the performance of EDSA was not affected even when the number of computing nodes or tasks increased, and the proposed EDSA outperformed Z-GA throughout the experiments.

Figure 5.4 Makespan for different numbers of computing nodes (tasks = 500)


Figure 5.5 Makespan for different numbers of computing nodes (tasks = 1000)

Figure 5.6 Makespan for different numbers of computing nodes (tasks = 1500)

Figure 5.7 Makespan for different numbers of computing nodes (tasks = 2000)


From the above two scenarios, we can observe that the scheduling algorithms that use the optimal-search technique of GA perform better than the other conventional scheduling algorithms. Although the performance of the GDSA scheduling algorithm is affected by the number of computing nodes joining the computation, it achieves respectable performance when the number of computing nodes is small and the number of scheduled tasks is large. The EDSA scheduling algorithm outperformed the others in both scenarios because it uses different crossover and mutation operators and changes their rates dynamically.


Chapter 6. Conclusion and Future Work

Although grid computing can integrate computational resources located in different geographical or network areas, scheduling tasks onto the computing nodes efficiently is an NP-hard problem. To solve this scheduling problem, we propose two scheduling algorithms, GDSA and EDSA, which use the genetic algorithm to search for a near-optimal schedule in a grid computing environment. In GDSA, the chromosome is encoded according to the number of computing nodes to shorten the chromosome length, and the gene information, which represents the number of tasks each node processes, limits the solution space. Thus, evolution is accelerated by the restricted solution space and the shorter chromosome. In EDSA, the use of hybrid crossover and incremental mutation enhances the efficiency of evolution.
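As a rough illustration of this encoding and of the makespan objective, the following sketch assumes one gene per computing node holding that node's task count, and equally sized, independent tasks with hypothetical per-task processing times; it is not the thesis's actual implementation:

```python
import random

def random_chromosome(num_tasks: int, num_nodes: int) -> list[int]:
    """One gene per computing node; each gene counts the tasks that node processes."""
    genes = [0] * num_nodes
    for _ in range(num_tasks):
        genes[random.randrange(num_nodes)] += 1   # assign every task to some node
    return genes

def makespan(chromosome: list[int], secs_per_task: list[float]) -> float:
    """Makespan = finishing time of the slowest node (tasks assumed independent and
    equally sized; per-node speeds are illustrative)."""
    return max(count * rate for count, rate in zip(chromosome, secs_per_task))

# Example: 2100 tasks on 20 nodes with hypothetical per-task processing times.
rates = [random.uniform(50, 200) for _ in range(20)]
print(makespan(random_chromosome(2100, 20), rates))
```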

Furthermore, the crossover and mutation probabilities, controlled by the variance of the fitness value, prevented the solution from being trapped in a local optimum or losing the chance to evolve toward a better solution. To evaluate the performance of the proposed GDSA and EDSA, simulations were performed. Compared with the traditional random approach, GDSA shortened the makespan by 49% (in the case of 20 nodes and 2100 tasks) and EDSA shortened the makespan by 60% (in the case of 60 nodes and 2100 tasks). Moreover, the results show that EDSA performed best throughout the experiments. In other words, the proposed EDSA can schedule a batch of tasks onto the fittest computing nodes and efficiently shorten the time needed to complete them.

Although the genetic algorithm is reasonably efficient at searching for an optimal solution, it spends considerable time on evolution. In the future, we will study how to combine other optimal-search techniques or utilize other approaches to shorten the evolution time. If this can be done, the number of evolutionary generations can be reduced, and the time spent finishing jobs can be shortened greatly.

