
Chaotic catfish particle swarm optimization for solving global numerical optimization problems

Li-Yeh Chuang a, Sheng-Wei Tsai b, Cheng-Hong Yang b,c,*

a Institute of Biotechnology and Chemical Engineering, I-Shou University, Kaohsiung 84001, Taiwan
b Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 80778, Taiwan
c Department of Network Systems, Toko University, Chiayi 61363, Taiwan

* Corresponding author.
E-mail addresses: chuang@isu.edu.tw (L.-Y. Chuang), 1096305108@cc.kuas.edu.tw (S.-W. Tsai), chyang@cc.kuas.edu.tw (C.-H. Yang).

Keywords: Particle swarm optimization; Chaos; Chaotic map; Catfish effect; CatfishPSO

Abstract

Chaotic catfish particle swarm optimization (C-CatfishPSO) is a novel optimization algorithm proposed in this paper. C-CatfishPSO introduces chaotic maps into catfish particle swarm optimization (CatfishPSO), which increases the search capability of CatfishPSO via the chaos approach. Simple CatfishPSO relies on the incorporation of catfish particles into particle swarm optimization (PSO). The introduced catfish particles improve the performance of PSO considerably. Unlike ordinary particles, the catfish particles initialize a new search from extreme points of the search space when the gbest fitness value (the global optimum at each iteration) has not changed for a certain number of consecutive iterations. This creates further opportunities of finding better solutions for the swarm by guiding the entire swarm to promising new regions of the search space and accelerating the search. The introduced chaotic maps strengthen the solution quality of PSO and CatfishPSO significantly. The resulting improved PSO and CatfishPSO are called chaotic PSO (C-PSO) and chaotic CatfishPSO (C-CatfishPSO), respectively. PSO, C-PSO, CatfishPSO, C-CatfishPSO, as well as other advanced PSO procedures from the literature, were extensively compared on several benchmark test functions. Statistical analysis of the experimental results indicates that C-CatfishPSO performs better than PSO, C-PSO and CatfishPSO, and that it is also superior to advanced PSO methods from the literature.

© 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2011.01.081

1. Introduction

Evolutionary algorithms, with their heuristic and stochastic properties, often suffer from getting stuck in local optima. This common characteristic has driven the development of evolutionary computation into an increasingly important field. A genetic algorithm (GA) is a stochastic search procedure based on the mechanics of natural selection, genetics and evolution [1]. Since this type of algorithm simultaneously evaluates many points in the search space, it is more likely to find a global solution to a given problem. PSO describes a solution process in which each particle moves through a multidimensional search space [2]. The particle velocity and position are constantly updated according to the best previous performance of the particle or of the particle's neighbors, as well as the best performance of all particles in the entire population. GAs have demonstrated the ability to reach near-optimal solutions for large problems; however, they may require a long processing time to reach a near-optimal solution. Similar to GAs, binary particle swarm optimization (BPSO) is also a population-based optimizer. BPSO has a memory, so knowledge of good solutions is retained by all the particles, and optimal solutions are found by the swarm particles if they follow the best particle. Unlike GAs, BPSO does not contain any crossover and mutation processes [3]. Hybridization of evolutionary algorithms with local search has



been investigated in many studies [4,5]. Such hybrids are often referred to as memetic algorithms (MAs). An MA can be treated as a genetic algorithm coupled with a local search procedure [6]. The shuffled frog leaping (SFL) algorithm combines the benefits of an MA and the social PSO algorithm. Unlike in MAs and PSO, the population consists of a set of solutions (frogs), which is partitioned into subsets referred to as memeplexes. In the search space, each group performs a local search and then exchanges information with other groups [7]. Ant colony optimization (ACO) algorithms were developed by Dorigo et al. Similar to PSO, they evolve based not on genetics but on social behavior. Unlike PSO, ACO uses ants to find the shortest route between their ant hill and a source of food; ants deposit pheromone trails wherever they travel as a form of indirect communication [8].

Generating an ideal random sequence is of great importance in the fields of numerical analysis, sampling and heuristic optimization. Recently, a technique that employs chaotic sequences via the chaos approach (chaotic maps) has gained a lot of attention and has been widely applied in different areas, such as chaotic neural networks (CNN) [9], chaotic optimization algorithms (COA) [10,11], nonlinear circuits [12], DNA computing [13], and image processing [14]. All of the above-mentioned methods rely on the same pivotal operation, namely the adoption of a chaotic sequence instead of a random sequence, and thereby improve the results due to the unpredictability of the chaotic sequence [15].

Chaos can be described as a bounded nonlinear system with deterministic dynamic behavior that has ergodic and stochastic properties [16]. It is very sensitive to the initial conditions and the parameters used. In other words, in chaos, cause and effect are not proportional: small differences in the initial values produce disproportionately large effects. In what is called the "butterfly effect", small variations of an initial variable result in huge differences in the solutions after a number of iterations. Mathematically, chaos appears random and unpredictable, yet it also possesses an element of regularity.

PSO shows promising performance on nonlinear function optimization and has thus received much attention [17]. However, the local search capability of PSO is poor [18], since premature convergence occurs often, especially in the case of complex multi-peak search problems [19]. In order to overcome these disadvantages of PSO, many improvements have been proposed. One approach introduces a fuzzy system to adapt the inertia weight, evaluated on three benchmark test functions [20]. Liu et al. proposed center particle swarm optimization (CenterPSO), which introduces a center particle into LDWPSO to improve performance [17]. Xi et al. proposed an improved quantum-behaved PSO (QPSO), which introduces a weight parameter into the calculation of the mean best position in QPSO to reflect the importance of the particles in the swarm as they evolve; this method is called weighted QPSO (WQPSO) [21]. Jiao et al. proposed dynamic inertia weight PSO (IPSO), which uses a dynamic inertia weight to decrease the inertia factor in the velocity update equation of the original PSO [22]. Yang et al. proposed another dynamic inertia weight to modify the velocity update formula in a method called modified particle swarm optimization with dynamic adaptation (DAPSO) [23]. Shelokar et al. proposed particle swarm ant colony optimization (PSACO), a hybrid of particle swarm optimization and ant colony optimization, which uses cooperative, population-based global search swarm intelligence metaheuristics [24]. Zhihua et al. proposed two strategies, FUSS and RWS, to improve the exploration and exploitation capability. FUSS is a fitness uniform selection strategy, in which weak selection pressure is incorporated into standard PSO; RWS is designed to further enhance the exploration capability in order to escape from a local optimum [25]. Jing et al. developed knowledge-based cooperative particle swarm optimization (KCPSO), which mainly mimics the self-cognitive and self-learning process of evolutionary agents in a special environment and introduces a knowledge billboard into PSO to record various kinds of search information [26]. Ali and Kaelo proposed an efficiency value to identify the causes of slow convergence in PSO, and modified the position update rule of PSO in order to make convergence faster [27]. Yan et al. proposed a shuffling master–slave swarm evolutionary algorithm based on particle swarm optimization (MSSE-PSO). The population is sampled randomly from the feasible space and partitioned into several sub-swarms (one master swarm and additional slave swarms), each slave swarm independently executing PSO. The master swarm is enhanced by its own social knowledge and by that of the slave swarms [28]. Tsoulos and Stavrakoudis proposed a stopping rule and a similarity check that can enhance the speed of any PSO variant [29]. These PSO variants propose interesting strategies for avoiding premature convergence by sustaining variety amongst individuals, and they also contain properties that ultimately evolve a population towards higher fitness (a global or local optimum). Recently, numerous improvements that rely on the chaos approach have been proposed for PSO in order to overcome this disadvantage. Chaotic maps (including logistic maps) can easily be implemented and help avoid entrapment in local optima [30–34]. The inherent characteristics of chaos can enhance PSO by enabling it to escape from local solutions, and thus improve the global search capability of PSO [32]. Logistic maps were introduced in the study of the nonlinear dynamics of biological populations exhibiting chaotic behavior [35] and are often cited as an archetypal example.

In this paper, we propose chaotic CatfishPSO (C-CatfishPSO), in which chaotic maps are applied to improve the performance of the CatfishPSO algorithm. In CatfishPSO, the catfish effect is applied to improve the performance of particle swarm optimization (PSO). This effect is the result of the introduction of new particles into the search space ("catfish particles"), which replace the particles with the worst fitness; these catfish particles are initialized at extreme points of the search space when the fitness of the global best particle has not improved for a certain number of consecutive iterations. This creates further opportunities of finding better solutions for the swarm by guiding the whole swarm to promising new regions of the search space [36]. The logistic map was introduced into our study to improve the search behavior and to prevent entrapment of the particles in a locally optimal solution. The proposed method was applied to several benchmark functions


from the literature. Statistical analysis of the experimental results shows that the performance of C-CatfishPSO is superior to that of PSO, C-PSO, CatfishPSO, and other advanced PSO methods.

2. Method

2.1. Particle swarm optimization (PSO)

In original PSO [2], each particle is analogous to an individual "fish" in a school of fish. PSO is a population-based optimization technique, in which the population is called a swarm. A swarm consists of N particles moving around in a D-dimensional search space. The position of the ith particle is represented by $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and its velocity by $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. The positions and velocities of the particles are confined within $[X_{min}, X_{max}]^D$ and $[V_{min}, V_{max}]^D$, respectively. Each particle coexists and evolves simultaneously based on knowledge shared with neighboring particles; it makes use of its own memory and of knowledge gained by the swarm as a whole to find the best solution. The best previously encountered position of the ith particle is denoted its individual best position $p_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, a value called $pbest_i$. The best of all individual $pbest_i$ values is denoted the global best position $g = (g_1, g_2, \ldots, g_D)$, called gbest. The PSO process is initialized with a population of random particles, and the algorithm then searches for optimal solutions by continuously updating generations. At each generation, the position and velocity of the ith particle are updated by $pbest_i$ and gbest in the swarm. The update equations can be formulated as:

$v_{id}^{new} = w \cdot v_{id}^{old} + c_1 \cdot r_1 \cdot (pbest_{id} - x_{id}^{old}) + c_2 \cdot r_2 \cdot (gbest_d - x_{id}^{old})$,   (1)

$x_{id}^{new} = x_{id}^{old} + v_{id}^{new}$,   (2)

where $r_1$ and $r_2$ are random numbers between (0, 1), and $c_1$ and $c_2$ are acceleration constants, which control how far a particle moves in a single generation. $v_{id}^{new}$ and $v_{id}^{old}$ denote the new and old velocity of the particle, respectively; $x_{id}^{old}$ is the current particle position, and $x_{id}^{new}$ is the new, updated particle position. The inertia weight w controls the impact of the previous velocity of a particle on its current one [37]. In general, the inertia weight is decreased linearly from 0.9 to 0.4 throughout the search process to effectively balance the local and global search abilities of the swarm [38]. The equation for the inertia weight w can be written as:

$w = (w_{max} - w_{min}) \cdot \dfrac{Iteration_{max} - Iteration_i}{Iteration_{max}} + w_{min}$.   (3)

In Eq. (3), $w_{max}$ is 0.9, $w_{min}$ is 0.4 and $Iteration_{max}$ is the maximum number of allowed iterations. The pseudo-code of the PSO process is shown below.

PSO pseudo-code
begin
  Randomly initialize the particle swarm
  while (the maximum number of iterations or the stopping criterion is not met)
    Evaluate the fitness of the particle swarm
    for n = 1 to number of particles
      Find pbest
      Find gbest
      for d = 1 to number of dimensions of the particle
        Update the position of the particle by Eqs. (1) and (2)
      next d
    next n
    Update the inertia weight value by Eq. (3)
  next generation until stopping criterion
end
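To make the update rules concrete, the following minimal Python sketch implements Eqs. (1)–(3) for a whole swarm at once. The function names, the NumPy vectorization and the default values c1 = c2 = 2.0 are our own illustrative choices; the paper does not prescribe this implementation.

import numpy as np

def inertia_weight(iteration, iteration_max, w_max=0.9, w_min=0.4):
    # Eq. (3): inertia weight decreasing linearly from w_max to w_min
    return (w_max - w_min) * (iteration_max - iteration) / iteration_max + w_min

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0,
             x_min=-100.0, x_max=100.0, v_max=100.0):
    # One generation of Eqs. (1)-(2) for an (n, d) swarm of positions x.
    n, d = x.shape
    r1 = np.random.rand(n, d)  # r1, r2: random numbers in (0, 1)
    r2 = np.random.rand(n, d)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    v = np.clip(v, -v_max, v_max)      # confine velocities to [Vmin, Vmax]^D
    x = np.clip(x + v, x_min, x_max)   # Eq. (2), confined to [Xmin, Xmax]^D
    return x, v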

2.2. Chaotic particle swarm optimization (C-PSO)

In PSO, the parameters w, $r_1$ and $r_2$ are the key factors affecting the convergence behavior [39,40]. The inertia weight controls the balance between the global exploration and the local search ability. A large inertia weight favors the global search, while a small inertia weight favors the local search. For this reason, an inertia weight that linearly decreases from 0.9 to 0.4 throughout the search process is usually used [38]. Logistic maps are frequently used chaotic maps; chaotic sequences can be quickly generated and easily stored, so there is no need to store long sequences [14]. In C-PSO, sequences generated by the logistic map substitute the random parameters $r_1$ and $r_2$ in PSO. The parameters $r_1$ and $r_2$ are generated by the logistic map:


$Cr^{(t+1)} = k \cdot Cr^{(t)} \cdot (1 - Cr^{(t)})$.   (4)

In Eq. (4), $Cr^{(0)}$ is generated randomly for each independent run, with $Cr^{(0)}$ not being equal to {0, 0.25, 0.5, 0.75, 1}, and k is equal to 4. The driving parameter k of the logistic map controls the behavior of $Cr^{(t)}$ (as t goes to infinity) [41].

The velocity update equation for C-PSO can be formulated as:

$v_{id}^{new} = w \cdot v_{id}^{old} + c_1 \cdot Cr \cdot (pbest_{id} - x_{id}^{old}) + c_2 \cdot (1 - Cr) \cdot (gbest_d - x_{id}^{old})$.   (5)

In Eq. (5), Cr is a function based on the results of the logistic map, with values between 0.0 and 1.0. Fig. 1 shows the chaotic Cr value using a logistic map for 300 iterations, where $Cr^{(0)}$ = 0.001. The pseudo-code of C-PSO is shown below.

C-PSO pseudo-code
begin
  Randomly initialize the particle swarm
  Randomly generate Cr(0)
  while (the maximum number of iterations or the stopping criterion is not met)
    Evaluate the fitness of the particle swarm
    for n = 1 to number of particles
      Find pbest
      Find gbest
      for d = 1 to number of dimensions of the particle
        Update the chaotic Cr value by Eq. (4)
        Update the position of the particle by Eqs. (5) and (2)
      next d
    next n
    Update the inertia weight value by Eq. (3)
  next generation until stopping criterion
end
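For illustration, the chaotic sequence of Eq. (4) and the C-PSO velocity update of Eq. (5) can be sketched as follows, building on the PSO sketch above; the helper names are hypothetical, not from the paper.

def logistic_map(cr, k=4.0):
    # Eq. (4): one chaotic iterate; Cr(0) must avoid {0, 0.25, 0.5, 0.75, 1}
    return k * cr * (1.0 - cr)

def cpso_velocity(x, v, pbest, gbest, w, cr, c1=2.0, c2=2.0):
    # Eq. (5): chaotic Cr and (1 - Cr) replace the random numbers r1 and r2
    return w * v + c1 * cr * (pbest - x) + c2 * (1.0 - cr) * (gbest - x)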

2.3. Catfish particle swarm optimization (CatfishPSO)

The underlying idea for the development of CatfishPSO was derived from the catfish effect observed when catfish were introduced into large holding tanks of sardines [36]. The catfish, in competition with the sardines, stimulate renewed movement amongst the sardines. Similarly, the introduced catfish particles stimulate a renewed search by the other "sardine" particles in CatfishPSO. In other words, the catfish particles can guide particles trapped in a local optimum to new regions of the search space, and thus to potentially better particle solutions.

In CatfishPSO, a population is randomly initialized in a first step, and the particles are distributed over the D-dimensional search space. The position and velocity of each particle are updated by Eqs. (1)–(3). If the distance between gbest and the surrounding particles is small, each particle is considered part of the cluster around gbest and will only move a very small distance in the next generation. To avoid this premature convergence, catfish particles are introduced; they replace the 10% of the original particles with the worst fitness values in the swarm. These catfish particles are essential for the success of a given optimization task. The pseudo-code for CatfishPSO is shown below. Further details on the CatfishPSO mechanism can be found in Chuang et al. [36].

[Fig. 1. Dynamics of the logistic map: chaotic Cr value over 300 generations (x-axis: Number of Generations; y-axis: Chaotic Cr value); Cr(0) = 0.001.]


CatfishPSO pseudo-code
begin
  Randomly initialize the particle swarm
  while (the maximum number of iterations or the stopping criterion is not met)
    Evaluate the fitness of the particle swarm
    for n = 1 to number of particles
      Find pbest
      Find gbest
      for d = 1 to number of dimensions of the particle
        Update the position of the particle by Eqs. (1) and (2)
      next d
    next n
    if the fitness of gbest has remained the same for seven consecutive iterations then
      Sort the particle swarm by fitness, from best to worst
      for n = nine-tenths of the number of particles to the number of particles
        for d = 1 to number of dimensions of the particle
          Randomly select an extreme point at Max or Min of the search space
          Reset the velocity to 0
        next d
      next n
    end if
    Update the inertia weight value by Eq. (3)
  next generation until stopping criterion
end
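The catfish step itself can be sketched as below. This is our own Python rendering of the pseudo-code, assuming minimization and NumPy arrays; catfish_step and its signature are illustrative, not from the paper.

def catfish_step(x, v, fitness, x_min=-100.0, x_max=100.0):
    # Replace the worst 10% of particles (largest fitness values, assuming
    # minimization) with catfish particles at extreme points of the search
    # space; each dimension is set randomly to Xmin or Xmax, velocity to 0.
    n, d = x.shape
    worst = np.argsort(fitness)[-max(1, n // 10):]
    x[worst] = np.where(np.random.rand(len(worst), d) < 0.5, x_min, x_max)
    v[worst] = 0.0
    return x, v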

2.4. Chaotic catfish particle swarm optimization (C-CatfishPSO)

In C-CatfishPSO, a logistic map is embedded into CatfishPSO, which updates the parameters $r_1$ and $r_2$ based on Eq. (4). The logistic map improves the search capability of CatfishPSO significantly. The particle velocities are updated according to Eq. (5). The pseudo-code for C-CatfishPSO is shown below.

C-CatfishPSO pseudo-code

begin
  Randomly initialize the particle swarm
  Randomly generate Cr(0)
  while (the maximum number of iterations or the stopping criterion is not met)
    Evaluate the fitness of the particle swarm
    for n = 1 to number of particles
      Find pbest
      Find gbest
      for d = 1 to number of dimensions of the particle
        Update the chaotic Cr value by Eq. (4)
        Update the position of the particle by Eqs. (5) and (2)
      next d
    next n
    if the fitness of gbest has remained the same for seven consecutive iterations then
      Sort the particle swarm by fitness, from best to worst
      for n = nine-tenths of the number of particles to the number of particles
        for d = 1 to number of dimensions of the particle
          Randomly select an extreme point at Max or Min of the search space
          Reset the velocity to 0
        next d
      next n
    end if
    Update the inertia weight value by Eq. (3)
  next generation until stopping criterion
end
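Putting the pieces together, a compact and entirely illustrative driver loop built from the sketches above might look like this; the stagnation counter and the choice Cr(0) = 0.7 are implementation details left open by the pseudo-code.

def c_catfish_pso(f, n=20, d=10, iters=1000, x_min=-100.0, x_max=100.0):
    # Illustrative C-CatfishPSO loop: Eqs. (2)-(5) plus the catfish step.
    x = np.random.uniform(x_min, x_max, (n, d))
    v = np.zeros((n, d))
    fit = np.apply_along_axis(f, 1, x)
    pbest, pbest_fit = x.copy(), fit.copy()
    gbest = pbest[pbest_fit.argmin()].copy()
    gbest_fit = pbest_fit.min()
    cr, stall = 0.7, 0   # Cr(0): any start outside {0, 0.25, 0.5, 0.75, 1}
    for it in range(iters):
        w = inertia_weight(it, iters)
        cr = logistic_map(cr)                          # Eq. (4)
        v = cpso_velocity(x, v, pbest, gbest, w, cr)   # Eq. (5)
        x = np.clip(x + v, x_min, x_max)               # Eq. (2)
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        if pbest_fit.min() < gbest_fit:
            gbest = pbest[pbest_fit.argmin()].copy()
            gbest_fit = pbest_fit.min()
            stall = 0
        else:
            stall += 1   # gbest fitness unchanged this iteration
        if stall >= 7:   # stagnation for seven consecutive iterations
            x, v = catfish_step(x, v, fit, x_min, x_max)
            stall = 0
    return gbest, gbest_fit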


3. Numerical simulation

3.1. Benchmark functions

In order to illustrate, compare and analyze the effectiveness and performance of the PSO, C-PSO, CatfishPSO and C-CatfishPSO algorithms on optimization problems, ten representative benchmark functions were used to test the algorithms. These ten benchmark functions are shown below.

Rosenbrock:
$f_1(x) = \sum_{i=1}^{D-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]$   (6)

Rastrigin:
$f_2(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$   (7)

Griewank:
$f_3(x) = \frac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$   (8)

Sphere:
$f_4(x) = \sum_{i=1}^{D} x_i^2$   (9)
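For reference, the first four benchmarks of Eqs. (6)–(9) are easy to express directly; the sketch below (again NumPy-based, with names of our own choosing) can be passed to the illustrative driver above.

def rosenbrock(x):   # Eq. (6)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def rastrigin(x):    # Eq. (7)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):     # Eq. (8)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def sphere(x):       # Eq. (9)
    return np.sum(x ** 2)

# Example run, using the Rastrigin search space of Table 1:
# gbest, gbest_fit = c_catfish_pso(rastrigin, n=20, d=30, iters=2000,
#                                  x_min=-10.0, x_max=10.0)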

Table 1
Parameter settings of the ten benchmark functions.

Name                     Trait       Search space       Asymmetric initialization range   Xmin   Xmax   Optimum
Rosenbrock               Unimodal    −100 ≤ xi ≤ 100    15 ≤ xi ≤ 30                      −100   100    0
Rastrigin                Multimodal  −10 ≤ xi ≤ 10      2.56 ≤ xi ≤ 5.12                  −10    10     0
Griewank                 Multimodal  −600 ≤ xi ≤ 600    300 ≤ xi ≤ 600                    −600   600    0
Sphere                   Unimodal    −100 ≤ xi ≤ 100    50 ≤ xi ≤ 100                     −100   100    0
Ackley                   Multimodal  −100 ≤ xi ≤ 100    50 ≤ xi ≤ 100                     −100   100    0
Schwefel                 Multimodal  −500 ≤ xi ≤ 500    −500 ≤ xi ≤ −250                  −500   500    0
Ellipsoid                Unimodal    −100 ≤ xi ≤ 100    50 ≤ xi ≤ 100                     −100   100    0
Sum of different powers  Unimodal    −3 ≤ xi ≤ 3        1.5 ≤ xi ≤ 3.0                    −3     3      0
Cigar                    Unimodal    −100 ≤ xi ≤ 100    50 ≤ xi ≤ 100                     −100   100    0
Ridge                    Unimodal    −100 ≤ xi ≤ 100    50 ≤ xi ≤ 100                     −100   100    0

Table 2

Mean function value for Rosenbrock function.

Pop.  Dim.  Gen.   Optimal  PSO               C-PSO             CatfishPSO     C-CatfishPSO
20    10    1000   0        95.893±230.136    28.178±426.317    5.855±0.413    3.597±3.708
20    20    1500   0        167.604±318.927   27.770±248.084    16.257±0.385   4.527±6.290
20    30    2000   0        268.148±421.517   27.707±43.068     26.555±0.456   4.359±7.528
40    10    1000   0        69.868±187.713    7.186±39.456      5.443±0.392    2.294±3.082
40    20    1500   0        135.475±269.301   24.960±251.540    15.993±0.445   2.327±4.525
40    30    2000   0        207.524±345.617   42.313±338.070    26.368±0.523   2.085±4.835
80    10    1000   0        39.886±94.309     9.819±88.879      5.086±0.418    1.628±2.667
80    20    1500   0        105.067±237.020   15.334±7.234      15.761±0.490   1.387±3.092
80    30    2000   0        156.822±288.267   27.809±83.319     26.224±0.584   1.081±2.949
160   10    1000   0        30.009±76.045     6.956±56.861      4.742±0.485    1.278±2.382
160   20    1500   0        70.063±125.544    26.696±343.886    15.501±0.539   0.987±2.586
160   30    2000   0        105.813±185.191   27.066±57.849     26.005±0.631   0.558±1.955
Average                     121.014±231.632   22.650±165.380    15.817±0.480   2.179±3.795

C-CatfishPSO vs.   R+   R−   R=   P-value   Significant (α = 0.05)
PSO                12   0    0    0.002     YES
C-PSO              12   0    0    0.002     YES
CatfishPSO         12   0    0    0.002     YES


Ackley:
$f_5(x) = -20 \exp\left( -0.2 \sqrt{ \frac{1}{D} \sum_{i=1}^{D} x_i^2 } \right) - \exp\left( \frac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i) \right) + 20 + e$   (10)

Schwefel:
$f_6(x) = 418.9809\,D - \sum_{i=1}^{D} x_i \sin\left( \sqrt{|x_i|} \right)$   (11)

Ellipsoid:
$f_7(x) = \sum_{i=1}^{D} i\,x_i^2$   (12)

Sum of different powers:
$f_8(x) = \sum_{i=1}^{D} |x_i|^{i+1}$   (13)

Table 3

Mean function value for Rastrigin function.

Pop.  Dim.  Gen.   Optimal  PSO             C-PSO           CatfishPSO     C-CatfishPSO
20    10    1000   0        5.128±2.627     4.399±2.753     0.000±0.000    0.000±0.000
20    20    1500   0        22.021±7.176    12.675±7.048    0.000±0.000    0.000±0.000
20    30    2000   0        47.735±12.037   22.693±10.869   0.000±0.000    0.000±0.000
40    10    1000   0        3.375±1.748     2.934±1.783     0.000±0.000    0.000±0.000
40    20    1500   0        16.446±5.294    10.061±5.714    0.000±0.000    0.000±0.000
40    30    2000   0        36.987±9.692    18.849±9.129    0.000±0.000    0.000±0.000
80    10    1000   0        2.173±1.322     2.093±1.410     0.000±0.000    0.000±0.000
80    20    1500   0        12.676±4.377    8.197±4.355     0.000±0.000    0.000±0.000
80    30    2000   0        29.579±7.703    15.560±7.870    0.000±0.000    0.000±0.000
160   10    1000   0        1.260±1.007     1.260±1.259     0.000±0.000    0.000±0.000
160   20    1500   0        9.517±3.119     6.660±4.002     0.000±0.000    0.000±0.000
160   30    2000   0        23.468±6.178    13.147±8.279    0.000±0.000    0.000±0.000
Average                     17.530±5.190    9.877±5.373     0.000±0.000    0.000±0.000

C-CatfishPSO vs.   R+   R−   R=   P-value   Significant (α = 0.05)
PSO                12   0    0    0.002     YES
C-PSO              12   0    0    0.002     YES
CatfishPSO         0    0    12   1.000     NO

Table 4

Mean function value for Griewank function.

Pop.  Dim.  Gen.   Optimal  PSO            C-PSO          CatfishPSO     C-CatfishPSO
20    10    1000   0        0.102±0.056    0.061±0.050    0.000±0.000    0.000±0.000
20    20    1500   0        0.480±6.361    0.002±0.009    0.000±0.000    0.000±0.000
20    30    2000   0        2.455±14.650   0.361±5.692    0.000±0.000    0.000±0.000
40    10    1000   0        0.087±0.043    0.067±0.035    0.000±0.000    0.000±0.000
40    20    1500   0        0.120±2.846    0.096±0.847    0.000±0.000    0.000±0.000
40    30    2000   0        1.010±9.452    0.273±0.957    0.000±0.000    0.000±0.000
80    10    1000   0        0.074±0.032    0.061±0.027    0.000±0.000    0.000±0.000
80    20    1500   0        0.030±0.026    0.011±0.037    0.000±0.000    0.000±0.000
80    30    2000   0        0.193±4.034    0.091±2.858    0.000±0.000    0.000±0.000
160   10    1000   0        0.066±0.028    0.058±0.031    0.000±0.000    0.000±0.000
160   20    1500   0        0.032±0.027    0.016±0.038    0.000±0.000    0.000±0.000
160   30    2000   0        0.012±0.015    0.002±0.037    0.000±0.000    0.000±0.000
Average                     0.388±3.131    0.135±1.385    0.000±0.000    0.000±0.000

C-CatfishPSO vs.   R+   R−   R=   P-value   Significant (α = 0.05)
PSO                12   0    0    0.002     YES
C-PSO              12   0    0    0.002     YES
CatfishPSO         0    0    12   1.000     NO


References

[1] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[2] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, Perth, Australia, 1995, pp. 1942–1948.
[3] E. Elbeltagi, T. Hegazy, D. Grierson, Comparison among five evolutionary-based optimization algorithms, Advanced Engineering Informatics 19 (2005) 43–53.
[4] Y.-T. Kao, E. Zahara, A hybrid genetic algorithm and particle swarm optimization for multimodal functions, Applied Soft Computing 8 (2008) 849–857.
[5] C.-F. Juang, A hybrid of genetic algorithm and particle swarm optimization for recurrent network design, IEEE Transactions on Systems, Man, and Cybernetics, Part B 34 (2004) 997–1006.
[6] K. Sorensen, M. Sevaux, MA|PM: memetic algorithms with population management, Computers & Operations Research 33 (2006) 1214–1225.
[7] M.M. Eusuff, K.E. Lansey, Optimization of water distribution network design using the shuffled frog leaping algorithm, Journal of Water Resources Planning and Management 129 (2003) 210–225.
[8] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics, Part B 26 (1996) 29–41.
[9] K. Aihara, T. Takabe, M. Toyoda, Chaotic neural networks, Physics Letters A 144 (1990) 333–340.
[10] B. Li, W.S. Jiang, Optimizing complex functions by chaos search, Cybernetics and Systems 29 (1998) 409–419.
[11] Z. Lu, L.S. Shieh, G.R. Chen, On robust control of uncertain chaotic systems: a sliding-mode synthesis via chaotic optimization, Chaos, Solitons & Fractals 18 (2003) 819–827.
[12] P. Arena, R. Caponetto, L. Fortuna, A. Rizzo, M.L. Rosa, Self organization in non recurrent complex systems, International Journal of Bifurcation and Chaos 10 (2000) 1115–1125.
[13] G. Manganaro, J. Pineda de Gyvez, DNA computing based on chaos, Evolutionary Computation (2002) 255–260.
[14] H. Gao, Y. Zhang, S. Liang, D. Li, A new chaotic algorithm for image encryption, Chaos, Solitons & Fractals 29 (2006) 393–399.
[15] B. Alatas, E. Akin, A. Bedri Ozer, Chaos embedded particle swarm optimization algorithms, Chaos, Solitons & Fractals 40 (2009) 1715–1734.
[16] H.G. Schuster, Deterministic Chaos: An Introduction, second revised ed., Physik-Verlag GmbH, Weinheim, Federal Republic of Germany, 1988.
[17] Y. Liu, Z. Qin, Z. Shi, J. Lu, Center particle swarm optimization, Neurocomputing 70 (2007) 672–679.
[18] P.J. Angeline, Evolutionary optimization versus particle swarm optimization: philosophy and performance differences, in: Lecture Notes in Computer Science, Springer, Berlin, 1998, pp. 601–610.
[19] Y. Jiang, T. Hu, C.C. Huang, X. Wu, An improved particle swarm optimization algorithm, Applied Mathematics and Computation 193 (2007) 231–239.
[20] Y. Shi, R.C. Eberhart, Fuzzy adaptive particle swarm optimization, in: Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, 2001, pp. 101–106.
[21] M. Xi, J. Sun, W. Xu, An improved quantum-behaved particle swarm optimization with weighted mean best position, Applied Mathematics and Computation 205 (2008) 751–759.
[22] B. Jiao, Z. Lian, X.S. Gu, A dynamic inertia weight particle swarm optimization algorithm, Chaos, Solitons & Fractals 37 (2008) 698–705.
[23] X. Yang, J. Yuan, J. Yuan, H. Mao, A modified particle swarm optimization with dynamic adaptation, Applied Mathematics and Computation 189 (2007) 1205–1213.
[24] P.S. Shelokar, P. Siarry, V.K. Jayaraman, B.D. Kulkarni, Particle swarm ant colony optimization hybridized for improved continuous optimization, Applied Mathematics and Computation 188 (2007) 129–142.
[25] C. Zhihua, C. Xingjuan, Z. Jianchao, S. Guoji, Particle swarm optimization with FUSS and RWS for high dimensional functions, Applied Mathematics and Computation 205 (2008) 98–108.
[26] J. Jing, Z. Jianchao, H. Chongzhao, W. Qinghua, Knowledge-based cooperative particle swarm optimization, Applied Mathematics and Computation 205 (2008) 861–873.
[27] M.M. Ali, P. Kaelo, Improved particle swarm algorithm for global optimization, Applied Mathematics and Computation 196 (2008) 578–593.
[28] J. Yan, L. Changmin, H. Chongchao, W. Xianing, Improved particle swarm algorithm for hydrological parameter optimization, Applied Mathematics and Computation 217 (2010) 3207–3215.
[29] I.G. Tsoulos, A. Stavrakoudis, Enhancing PSO methods for global optimization, Applied Mathematics and Computation 216 (2010) 2988–3001.
[30] L. Wang, D.Z. Zheng, Q.S. Lin, Survey on chaotic optimization methods, Computing Technology and Automation 20 (2001) 1–5.
[31] J. Chuanwen, E. Bompard, A self-adaptive chaotic particle swarm algorithm for short term hydroelectric system scheduling in deregulated environment, Energy Conversion and Management 46 (2005) 2689–2696.
[32] B. Liu, L. Wang, Y.H. Jin, F. Tang, D.X. Huang, Improved particle swarm optimization combined with chaos, Chaos, Solitons & Fractals 25 (2005) 1261–1271.
[33] T. Xiang, X. Liao, K.W. Wong, An improved particle swarm optimization algorithm combined with piecewise linear chaotic map, Applied Mathematics and Computation 190 (2007) 1637–1645.
[34] L. dos Santos Coelho, V.C. Mariani, A novel chaotic particle swarm optimization approach using Hénon map and implicit filtering local search for economic load dispatch, Chaos, Solitons & Fractals 39 (2009) 510–518.
[35] R.M. May, Simple mathematical models with very complicated dynamics, Nature 261 (1976) 459–467.
[36] L.Y. Chuang, S.W. Tsai, C.H. Yang, Catfish particle swarm optimization, in: IEEE Swarm Intelligence Symposium (SIS 2008), St. Louis, Missouri, 2008.
[37] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE International Conference on Evolutionary Computation, Anchorage, AK, 1998, pp. 69–73.
[38] Y. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, in: Proceedings of the Congress on Evolutionary Computation, Washington, DC, 1999, pp. 1945–1949.
[39] I.C. Trelea, The particle swarm optimization algorithm: convergence analysis and parameter selection, Information Processing Letters 85 (2003) 317–325.
[40] S. Naka, T. Genji, T. Yura, Y. Fukuyama, A hybrid particle swarm optimization for distribution state estimation, IEEE Transactions on Power Systems 18 (2003) 60–68.
[41] D. Kuo, Chaos and its computing paradigm, IEEE Potentials 24 (2005) 13–15.
[42] X. Yang, J. Yuan, J. Yuan, H. Mao, A modified particle swarm optimizer with dynamic adaptation, Applied Mathematics and Computation 189 (2007) 1205–1213.
[43] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8 (2004) 240–255.
[44] D.E. Knuth, The Art of Computer Programming, vol. 2: Seminumerical Algorithms, third ed., Addison-Wesley, 1997, Section 3.2.1: The Linear Congruential Method, pp. 10–26.
