
Chapter 6 Comparisons and Discussions

6.2 Discussions

In the IPSO and BFPSO approaches, we investigated hybridization by combining PSO with IA and BFO, respectively. In the IPSO method, the major parameter learning process is carried out by the IA; to avoid becoming trapped in a local optimum and to preserve the ability to search near the global optimum, we exploit the advantages of PSO to improve the mutation mechanism of the IA. The balance between exploration of the search space and exploitation of potentially good solutions is a fundamental issue in nature-inspired systems: too much emphasis on exploration results in a pure random search, whereas too much exploitation results in a pure local search. An intelligent search must therefore self-adaptively combine exploration of new regions of the space with evaluation of the potential solutions already identified. The BFPSO method combines BFO and PSO to balance these exploration and exploitation abilities.
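The exploration-exploitation trade-off discussed above is visible directly in the standard PSO update rule that all three hybrids build upon. The following one-dimensional sketch uses illustrative parameter values (w, c1, and c2 are assumptions, not the settings used in this dissertation):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update for one-dimensional particles: the inertia term
    sustains exploration, while the attractions toward the personal
    best (pbest) and global best (gbest) exploit known good regions."""
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel
```

The mutation mechanisms introduced in IPSO, BFPSO, and DMPSO act on top of this basic balance between the inertia term and the two attraction terms.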

Unlike the IPSO and BFPSO approaches, which use PSO as an enhancement mechanism to improve the performance of the basic IA and BFO, the DMPSO approach bases its parameter learning directly on the PSO algorithm and introduces a distance-based mutation operator to increase population diversity. This strongly encourages a global search, giving the particles a better chance of escaping from local optima and converging to the global optimum.

It should be noted that, because PSO plays a different role in each of the proposed IPSO, BFPSO, and DMPSO methods, the PSO parameters are not identical across the three parameter learning algorithms. The functions of IA, BFO, and PSO are summarized in Table 6.3. Furthermore, the predefined fuzzy rule number in the IPSO method is set to 4, which differs from the other two methods.

Table 6.3: The roles of IA, BFO and PSO in the proposed learning algorithms.

Method                | IPSO                          | BFPSO                         | DMPSO
----------------------|-------------------------------|-------------------------------|------------------------------
Fuzzy Rule Number     | 4                             | 3                             | 3
Basic/Main Algorithm  | IA                            | BFO                           | PSO
Enhancement Mechanism | PSO                           | PSO                           | Mutation operator
Function              | Increase population diversity | Improve global search ability | Increase population diversity

Moreover, to obtain good simulation results, the proposed learning algorithms require sufficient and suitable training data. However, no single procedure or rule for choosing training data suits all cases. One rule of thumb is that the training data should cover the entire expected input space, and that training-vector pairs should then be selected randomly from this set during the training process.
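As an illustration of this rule of thumb, the following hypothetical helper (not part of the proposed algorithms) draws training-vector pairs at random from a data set that is assumed to already cover the expected input space:

```python
import random

def sample_training_pairs(data, n_samples, rng=None):
    """Randomly draw (input vector, target) training pairs without
    replacement from a data set assumed to span the input space."""
    rng = rng or random.Random()
    # Never request more pairs than the data set contains.
    return rng.sample(data, min(n_samples, len(data)))
```

During training, such a helper would be called once per epoch (or per batch) so that the learner sees a varied, representative subset of the input space each time.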

Chapter 7

Conclusions and Future Works

Fuzzy logic and artificial neural networks are complementary technologies in the design of intelligent systems. The combination of these two technologies into an integrated system appears to be a promising path toward the development of intelligent systems capable of capturing qualities characterizing the human brain.

Both neural networks and fuzzy logic are powerful design techniques that have their strengths and weaknesses. The integrated neuro-fuzzy systems possess the advantages of both neural networks (e.g. learning abilities, optimization abilities and connectionist structures) and fuzzy systems (e.g. humanlike IF-THEN rules thinking and ease of incorporating expert knowledge). In this way, it is possible to bring the low-level learning and computational power of neural networks into fuzzy systems and also high-level humanlike IF-THEN thinking and reasoning of fuzzy systems into neural networks.

A neuro-fuzzy system is a fuzzy system, whose parameters are learned by a learning algorithm. It has a neural network architecture constructed from fuzzy reasoning, and can always be interpreted as a system of fuzzy rules. Learning is used to adaptively adjust the rules in the rule base, and to produce or optimize the membership functions of a fuzzy system. Structured knowledge is codified as fuzzy rules. Modern neuro-fuzzy systems are usually represented as special multilayer feedforward neural networks. Hayashi et al. [126] showed that a feedforward neural network could approximate any fuzzy rule based system and any feedforward neural network may be approximated by a rule based fuzzy inference system.

In this dissertation, the neuro-fuzzy architecture we used is the functional-link-based neuro-fuzzy network (FLNFN) model. The FLNFN model uses a functional link neural network as the consequent part of the fuzzy rules. FLNFN is a multilayer feedforward network in which each node performs a particular function (node function) on incoming signals, using a set of parameters pertaining to that node. The FLNFN model can be constructed automatically, and its parameters can be adjusted by performing structure/parameter learning schemes.
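As an illustration of the consequent part, a functional link network typically expands each input through a set of basis functions and combines them linearly; the trigonometric expansion below is a common choice and is shown here only as a sketch (the exact basis used by the FLNFN model may differ):

```python
import math

def functional_link_expansion(x):
    """Expand each input component into a small trigonometric basis,
    a common functional-link choice (assumed here for illustration)."""
    phi = []
    for xi in x:
        phi.extend([xi, math.sin(math.pi * xi), math.cos(math.pi * xi)])
    return phi

def consequent_output(x, weights):
    """One rule's consequent: a linear combination of expanded terms."""
    phi = functional_link_expansion(x)
    return sum(w * p for w, p in zip(weights, phi))
```

The expansion lets each rule consequent represent a nonlinear function of the inputs while the weights remain linear parameters that a learning algorithm can tune.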

In Chapter 3, the proposed IPSO method combines the IA and PSO to perform parameter learning. The advantages of the proposed IPSO method are summarized as follows: 1) we employ the advantages of PSO to improve the mutation mechanism; 2) complicated problems can be solved better than with IA or PSO alone; 3) a global optimum is more likely to be reached than with heuristic methods; and 4) the experimental results have shown that our method achieves better accuracy rates and convergence speed than other existing methods.

In Chapter 4, an innovative BFPSO algorithm is applied to the design of a neuro-fuzzy classifier. Conventional BFO depends on random search directions, which may delay reaching the global solution, while PSO is prone to becoming trapped in local optima. To obtain better optimization, the new algorithm combines the advantages of both algorithms, i.e., PSO's ability to exchange social information and BFO's ability to find new solutions through elimination and dispersal. The BFPSO algorithm combines a PSO-based mutation operator with bacterial chemotaxis in order to make judicious use of the exploration and exploitation abilities of the search space and to avoid false and premature convergence. The simulation results showed that the overall performance of the hybrid algorithm surpasses that of conventional BFO and PSO.
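The chemotaxis component of BFO can be sketched as follows: a bacterium tumbles in a random direction and keeps swimming while its fitness improves. Minimization, a fixed step size, and the one-dimensional setting are illustrative simplifications of the full BFPSO algorithm:

```python
import random

def chemotaxis_step(position, fitness, step_size=0.1, rng=None):
    """One BFO-style chemotactic move: tumble (pick a random direction),
    then swim in that direction for as long as fitness keeps improving."""
    rng = rng or random.Random()
    direction = rng.uniform(-1.0, 1.0)          # tumble
    best_x, best_f = position, fitness(position)
    x = position + step_size * direction
    while fitness(x) < best_f:                  # swim while improving
        best_x, best_f = x, fitness(x)
        x = x + step_size * direction
    return best_x
```

In BFPSO, a PSO-based mutation operator would be interleaved with such chemotactic moves so that the random-direction search is guided by socially shared information.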

Unlike the IPSO and BFPSO approaches, which use PSO as an enhancement mechanism to improve the performance of the basic IA and BFO, Chapter 5 presents a PSO-based learning algorithm, called DMPSO, for the neural fuzzy system. In the DMPSO approach, the parameter learning method is based on the PSO algorithm, and a distance-based mutation operator is introduced to increase population diversity; this strongly encourages a global search, giving the particles a better chance of escaping from local optima and converging to the global optimum. The simulation results have shown that the proposed DMPSO method yields better performance than other existing models under some circumstances in nonlinear system control applications.
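One plausible reading of a distance-based mutation operator is sketched below; the trigger condition, threshold, and mutation scale are assumptions for illustration, not the exact operator of Chapter 5. The idea is that when a particle drifts within a small distance of the global best, it is perturbed with a Gaussian step so that the swarm retains diversity:

```python
import random

def distance_mutation(position, gbest, threshold=0.05, sigma=0.5, rng=None):
    """If a particle is too close to the global best (low diversity),
    kick it with a Gaussian perturbation so the swarm keeps exploring;
    otherwise leave it unchanged. One-dimensional sketch only."""
    rng = rng or random.Random()
    if abs(position - gbest) < threshold:       # distance criterion
        return position + rng.gauss(0.0, sigma)
    return position
```

Applied after each PSO update, such an operator prevents the whole swarm from collapsing onto a single point, which is precisely what enables escape from local optima.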

In Chapter 6, the well-known skin color detection problem is used as a benchmark to demonstrate the performance and efficiency of the proposed IPSO, BFPSO, and DMPSO methods. The aim of skin color detection is to distinguish between skin and non-skin pixels based on the Y, Cb, and Cr information. The average accuracy rates on the training and testing data for the different approaches are given in Table 6.2. Because the predefined rule numbers are not identical, a fully fair comparison cannot be made; from the simulation results we can only conclude that DMPSO outperforms BFPSO, and that IPSO appears to be over-trained.
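For comparison, a commonly used fixed-threshold YCbCr baseline classifies a pixel as skin from its Cb and Cr components alone; the ranges below are widely cited defaults, not the decision boundary learned by the proposed methods:

```python
def is_skin_pixel(y, cb, cr):
    """Fixed-threshold YCbCr baseline: skin if Cb and Cr fall in
    commonly cited ranges (the Y component is ignored here, though
    the learned classifiers in this chapter do use it)."""
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

A learned neuro-fuzzy classifier replaces these hard rectangles in Cb-Cr space with smooth membership functions over all three components, which is what allows it to reach higher accuracy.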

Although the proposed algorithms yield better performance in classification and nonlinear system control applications, several advanced topics should still be addressed in future research.

In general, synthesizing a neuro-fuzzy system requires two major types of learning: structure learning algorithms to find appropriate fuzzy logic rules, and parameter learning algorithms to fine-tune the membership functions and other parameters. Structure learning and parameter learning can be combined in a neuro-fuzzy system in several ways. They can be performed sequentially: structure learning is used first to find an appropriate structure for the neuro-fuzzy system, and parameter learning is then used to fine-tune the parameters. In some situations only parameter learning or only structure learning is necessary, namely when the structure (fuzzy rules) or the parameters (membership functions) are provided by experts; in some neuro-fuzzy systems the structure is fixed. Identifying fuzzy rules has been one of the most important aspects of neuro-fuzzy system design. Identified, concise rules can provide an initial network structure so that the learning process is fast, reliable, and highly intuitive. To overcome the limitations of using expert knowledge to define the fuzzy rules, data-driven methods for creating fuzzy systems are needed.

Therefore, the first advanced research topic is to generate fuzzy rules from numerical data more efficiently.

The choice of the model's structure is very important, as it determines the flexibility of the model in approximating (unknown) systems. Despite the research that has already been done on neuro-fuzzy systems, the recurrent variants of this architecture are still rarely studied. In contrast to pure feed-forward architectures, which have static input-output behavior, recurrent models are able to store information about the past (e.g., prior system states) and are thus more appropriate for the analysis of dynamic systems. The second advanced research topic is to apply the proposed IPSO, BFPSO, and DMPSO to recurrent neural networks in order to learn and optimize a hierarchical fuzzy rule base with feedback connections.

In this dissertation, a systematic method was not used to determine the initial parameters. The initial parameters are determined by practical experimentation or by trial-and-error. In future works, we will try to develop a well-defined method to automatically determine the initial parameters, and thus inexperienced users could design a neuro-fuzzy system with ease.

Bibliography

1. H. Takagi, N. Suzuki, T. Koda and Y. Kojima, “Neural Networks Designed on Approximate Reasoning Architecture and Their Applications,” IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 752-760, 1992.

2. E. Sanchez, T. Shibata and L. A. Zadeh, Genetic Algorithms and Fuzzy Logic Systems: Soft Computing Perspectives. World Scientific, 1997.

3. O. Cordon, F. Gomide, F. Herrera, F. Hoffmann and L. Magdalena, “Ten Years of Genetic Fuzzy Systems: Current Framework and New Trends,” Fuzzy Sets and Systems, vol. 141, no. 1, pp. 5-31, 2004.

4. A. Homaifar and E. McCormick, “Simultaneous Design of Membership Functions and Rule Sets for Fuzzy Controllers Using Genetic Algorithms,” IEEE Transactions on Fuzzy Systems, vol. 3, no. 2, pp. 129-139, 1995.

5. J. R. Velasco, “Genetic-Based On-Line Learning for Fuzzy Process Control,” International Journal of Intelligent Systems, vol. 13, no. 10-11, pp. 891-903, 1998.

6. H. Ishibuchi, T. Nakashima and T. Murata, “Performance Evaluation of Fuzzy Classifier Systems for Multidimensional Pattern Classification Problems,” IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, vol. 29, no. 5, pp. 601-618, 1999.

7. J. Vieira, F. M. Dias, A. Mota, “Neuro-Fuzzy Systems: A Survey,” WSEAS Transactions on Systems, vol. 3, no. 2, pp. 414-419, 2004.

8. C.-T. Lin and C. S. G. Lee, Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems. Prentice-Hall, 1996.

9. S. Mitra and Y. Hayashi, “Neuro-Fuzzy Rule Generation: Survey in Soft Computing Framework,” IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 748-768, 2000.

10. A. V. Nandedkar and P. K. Biswas, “A Granular Reflex Fuzzy Min-Max Neural Network for Classification,” IEEE Transactions on Neural Networks, vol. 20, no. 7, pp. 1117-1134, 2009.

11. G.-D. Wu and P.-H. Huang, “A Maximizing-Discriminability-Based Self-Organizing Fuzzy Network for Classification Problems,” IEEE Transactions on Fuzzy Systems, vol. 18, no. 2, pp. 362-373, 2010.

12. O. Cordon, F. Herrera, F. Hoffmann and L. Magdalena, Genetic Fuzzy Systems: Evolutionary Tuning and Learning of Fuzzy Knowledge Bases. World Scientific, 2001.

13. P. P. Angelov, Evolving Rule-Based Models: A Tool for Design of Flexible Adaptive Systems. Physica-Verlag, 2002.

14. A. Gonzalez and R. Perez, “SLAVE: A Genetic Learning System Based on an Iterative Approach,” IEEE Transactions on Fuzzy Systems, vol. 7, no. 2, pp. 176-191, 1999.

15. M. Russo, “FuGeNeSys – A Fuzzy Genetic Neural System for Fuzzy Modeling,” IEEE Transactions on Fuzzy Systems, vol. 6, no. 3, pp. 373-388, 1998.

16. H. Ishibuchi, K. Nozaki, N. Yamamoto and H. Tanaka, “Selecting Fuzzy If-Then Rules for Classification Problems Using Genetic Algorithms,” IEEE Transactions on Fuzzy Systems, vol. 3, no. 3, pp. 260-270, 1995.

17. H. Ishibuchi, T. Murata and I. B. Turksen, “Single-Objective and Two-Objective Genetic Algorithms for Selecting Linguistic Rules for Pattern Classification Problems,” Fuzzy Sets and Systems, vol. 89, no. 2, pp. 135-150, 1997.

18. H. Ishibuchi and Y. Nojima, “Analysis of Interpretability-Accuracy Tradeoff of Fuzzy Systems by Multiobjective Fuzzy Genetics-Based Machine Learning,” International Journal of Approximate Reasoning, vol. 44, no. 1, pp. 4-31, 2007.

19. L.-X. Wang and J. M. Mendel, “Generating Fuzzy Rules by Learning from Examples,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 22, no. 6, pp. 1414-1427, 1992.

20. C.-J. Lin and C.-T. Lin, “An ART-Based Fuzzy Adaptive Learning Control Network,” IEEE Transactions on Fuzzy Systems, vol. 5, no. 4, pp. 477-496, 1997.

21. C.-T. Lin, C.-J. Lin and C. S. G. Lee, “Fuzzy Adaptive Learning Control Network with On-Line Neural Learning,” Fuzzy Sets and Systems, vol. 71, no. 1, pp. 25-45, 1995.

22. W.-S. Lin, C.-H. Tsai and J.-S. Liu, “Robust Neuro-Fuzzy Control of Multivariable Systems by Tuning Consequent Membership Functions,” Fuzzy Sets and Systems, vol. 124, no. 2, pp. 181-195, 2001.

23. T. Takagi and M. Sugeno, “Fuzzy Identification of Systems and Its Applications to Modeling and Control,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, no. 1, pp. 116-132, 1985.

24. J.-S. R. Jang, “ANFIS: Adaptive-Network-Based Fuzzy Inference System,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 665-685, 1993.

25. C.-F. Juang and C.-T. Lin, “An On-Line Self-Constructing Neural Fuzzy Inference Network and Its Applications,” IEEE Transactions on Fuzzy Systems, vol. 6, no.1, pp. 12-32, 1998.

26. D. Nauck and R. Kruse, “A Neuro-Fuzzy Method to Learn Fuzzy Classification Rules from Data,” Fuzzy Sets and Systems, vol. 89, no.3, pp. 277-288, 1997.

27. S. Paul and S. Kumar, “Subsethood-Product Fuzzy Neural Inference System (SuPFuNIS),” IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 578-599, 2002.

28. J.-S. Wang and C. S. G. Lee, “Self-Adaptive Neuro-Fuzzy Inference Systems for Classification Applications,” IEEE Transactions on Fuzzy Systems, vol. 10, no. 6, pp. 790-802, 2002.

29. C.-J. Lin and C.-T. Lin, “Reinforcement Learning for an ART-Based Fuzzy Adaptive Learning Control Network,” IEEE Transactions on Neural Networks, vol. 7, no. 3, pp. 709-731, 1996.

30. F.-J. Lin, C.-H. Lin and P.-H. Shen, “Self-Constructing Fuzzy Neural Network Speed Controller for Permanent-Magnet Synchronous Motor Drive,” IEEE Transactions on Fuzzy Systems, vol. 9, no. 5, pp. 751-759, 2001.

31. C.-J. Lin and C.-H. Chen, “Identification and Prediction Using Recurrent Compensatory Neuro-Fuzzy Systems,” Fuzzy Sets and Systems, vol. 150, no. 2, pp. 307-330, 2005.

32. C.-H. Chen, C.-J. Lin and C.-T. Lin, “A Functional-Link-Based Neurofuzzy Network for Nonlinear System Control,” IEEE Transactions on Fuzzy Systems, vol. 16, no. 5, pp. 1362-1378, 2008.

33. M.-T. Su, C.-H. Chen, C.-J. Lin and C.-T. Lin, “A Rule-Based Symbiotic Modified Differential Evolution for Self-Organizing Neuro-Fuzzy Systems,” Applied Soft Computing, vol. 11, no. 8, pp. 4847-4858, 2011.

34. R. Fuller, Introduction to Neuro-Fuzzy Systems, Studies in Fuzziness and Soft Computing. Physica-Verlag, 2000.

35. C.-T. Lin and C. S. G. Lee, “Neural-Network-Based Fuzzy Logic Control and Decision System,” IEEE Transactions on Computers, vol. 40, no. 12, pp. 1320-1336, 1991.

36. H. Bunke and A. Kandel, Neuro-Fuzzy Pattern Recognition. World Scientific, 2000.

37. S. K. Pal and S. Mitra, Neuro-Fuzzy Pattern Recognition: Methods in Soft Computing. John Wiley & Sons, 1999.

38. L. Chen, D. H. Cooley and J. Zhang, “Possibility-Based Fuzzy Neural Networks and Their Application to Image Processing,” IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, vol. 29, no. 1, pp. 119-126, 1999.

39. S.-W. Lin, S.-C. Chen, W.-J. Wu and C.-H. Chen, “Parameter Determination and Feature Selection for Back-Propagation Network by Particle Swarm Optimization,” Knowledge and Information Systems, vol. 21, no. 2, pp. 249-266, 2009.

40. T. Weise, Global Optimization Algorithms: Theory and Application. it-weise.de (self-published), 2009. (http://www.it-weise.de/projects/book.pdf)

41. J. Kennedy and R. Eberhart, “Particle Swarm Optimization,” in Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.

42. R. Eberhart and J. Kennedy, “A New Optimizer Using Particle Swarm Theory,” in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43, 1995.

43. J. Kennedy, R. C. Eberhart and Y. Shi, Swarm Intelligence. Morgan Kaufmann, 2001.

44. M. P. Wachowiak, R. Smolikova, Y. Zheng, J. M. Zurada and A. S. Elmaghraby, “An Approach to Multimodal Biomedical Image Registration Utilizing Particle Swarm Optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 289-301, 2004.

45. W.-F. Leong and G. G. Yen, “PSO-Based Multiobjective Optimization with Dynamic Population Size and Adaptive Local Archives,” IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, vol. 38, no. 5, pp. 1270-1293, 2008.

46. E. Mezura-Montes and C. A. C. Coello, “A Simple Multimembered Evolution Strategy to Solve Constrained Optimization Problems,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 1, pp. 1-17, 2005.

47. C. A. C. Coello, G. T. Pulido and M. S. Lechuga, “Handling Multiple Objectives with Particle Swarm Optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, 2004.

48. R. Xu, G. C. Anagnostopoulos and D. C. Wunsch II, “Multiclass Cancer Classification Using Semisupervised Ellipsoid ARTMAP and Particle Swarm Optimization with Gene Expression Data,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 1, pp. 65-77, 2007.

49. F. Melgani and Y. Bazi, “Classification of Electrocardiogram Signals with Support Vector Machines and Particle Swarm Optimization,” IEEE Transactions on Information Technology in Biomedicine, vol. 12, no. 5, pp. 667-677, 2008.

50. M. Sugisaka and X. Fan, “An Effective Search Method for Neural Network Based Face Detection Using Particle Swarm Optimization,” IEICE Transactions on Information and Systems, vol. E88-D, no. 2, pp. 214-222, 2005.

51. C.-F. Juang, “A Hybrid of Genetic Algorithm and Particle Swarm Optimization for Recurrent Network Design,” IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, vol. 34, no. 2, pp. 997-1006, 2004.

52. C.-T. Lin, C.-T. Yang and M.-T. Su, “A Hybridization of Immune Algorithm with Particle Swarm Optimization for Neuro-Fuzzy Classifiers,” International Journal of Fuzzy Systems, vol. 10, no. 3, pp. 139-149, 2008.

53. M.-T. Su and C.-T. Lin, “Nonlinear System Control Using Functional-Link-Based Neuro-Fuzzy Network Model Embedded with Modified Particle Swarm Optimizer,” in Proceedings of the 19th National Conference on Fuzzy Theory and Its Application, 2011.

54. D. Sedighizadeh and E. Masehian, “Particle Swarm Optimization Methods, Taxonomy and Applications,” International Journal of Computer Theory and Engineering, vol. 1, no. 5, pp. 486-502, 2009.

55. J. J. Liang, A. K. Qin, P. N. Suganthan and S. Baskar, “Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281-295, 2006.

56. R. Poli, J. Kennedy and T. Blackwell, “Particle Swarm Optimization: An Overview,” Swarm Intelligence, vol. 1, no. 1, pp. 33-57, 2007.

57. A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence. John Wiley & Sons, 2006.

58. Y. Shi and R. Eberhart, “A Modified Particle Swarm Optimizer,” in Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, pp. 69-73, 1998.

59. Y. Shi and R. C. Eberhart, “Parameter Selection in Particle Swarm Optimization,” in Proceedings of the 7th International Conference on Evolutionary Programming VII, vol. 160, no. 4, pp. 591-600, 1998.

60. Y. Shi and R. Eberhart, “Particle Swarm Optimization with Fuzzy Adaptive Inertia Weight,” in Proceedings of the Workshop on Particle Swarm Optimization, pp. 101-106, 2001.

61. A. Ratnaweera, S. K. Halgamuge and H. C. Watson, “Self-Organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, 2004.

62. H.-Y. Fan and Y. Shi, “Study on Vmax of Particle Swarm Optimization,” in Proceedings of the Workshop on Particle Swarm Optimization, 2001.

63. M. Clerc and J. Kennedy, “The Particle Swarm-Explosion, Stability, and Convergence in a Multidimensional Complex Space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58-73, 2002.

64. J. Kennedy, “Small Worlds and Mega-Minds: Effects of Neighborhood Topology on Particle Swarm Performance,” in Proceedings of the 1999 Congress on Evolutionary Computation, vol. 3, pp. 1931-1938, 1999.

65. J. Kennedy and R. Mendes, “Population Structure and Particle Swarm Performance,” in Proceedings of the 2002 Congress on Evolutionary Computation, vol. 2, pp. 1671-1676, 2002.

66. P. N. Suganthan, “Particle Swarm Optimizer with Neighborhood Operator,” in Proceedings of the 1999 Congress on Evolutionary Computation, vol. 3, pp. 1958-1962, 1999.

67. X. Hu and R. Eberhart, “Multiobjective Optimization Using Dynamic Neighborhood Particle Swarm Optimization,” in Proceedings of the 2002 Congress on Evolutionary Computation, vol. 2, pp. 1677-1681, 2002.

68. K. E. Parsopoulos and M. N. Vrahatis, “UPSO: A Unified Particle Swarm Optimization Scheme,” in Lecture Series on Computer and Computational Sciences, vol. 1, pp. 868-873, 2004.

69. K. E. Parsopoulos and M. N. Vrahatis, “Unified Particle Swarm Optimization for Solving Constrained Engineering Optimization Problems,” in Lecture Notes in Computer Science, vol. 3612, pp. 582-591, 2005.

70. R. Mendes, J. Kennedy and J. Neves, “The Fully Informed Particle Swarm: Simpler, Maybe Better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204-210, 2004.

71. T. Peram, K. Veeramachaneni and C. K. Mohan, “Fitness-Distance-Ratio Based Particle Swarm Optimization,” in Proceedings of the 2003 IEEE Swarm Intelligence Symposium, pp. 174-181, 2003.

72. P. J. Angeline, “Using Selection to Improve Particle Swarm Optimization,” in Proceedings of the 1998 IEEE Congress on Evolutionary Computation, pp. 84-89, 1998.

73. M. Lovbjerg, T. K. Rasmussen and T. Krink, “Hybrid Particle Swarm Optimizer with Breeding and Subpopulations,” in Proceedings of the Third Genetic and Evolutionary Computation Conference, vol. 1, pp. 469-476, 2001.

74. V. Miranda and N. Fonseca, “New Evolutionary Particle Swarm Algorithm
