
An improved particle swarm optimization with double-bottom chaotic maps for numerical optimization

Cheng-Hong Yang a, Sheng-Wei Tsai a, Li-Yeh Chuang b, Cheng-Huei Yang c,*

a Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 80778, Taiwan
b Institute of Biotechnology and Chemical Engineering, I-Shou University, Kaohsiung 80041, Taiwan
c Department of Electronic Communication Engineering, National Kaohsiung Marine University, Kaohsiung 81157, Taiwan

Article info

Keywords: Particle swarm optimization; Chaotic adjustment schemes; Chaotic maps

Abstract

Chaos theory studies the behavior of dynamical systems that are highly sensitive to their initial conditions. This effect is popularly referred to as the butterfly effect. Small differences in the initial conditions yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible in general. In mathematics, a chaotic map is a map (i.e., an evolution function) that exhibits some sort of chaotic behavior. Chaotic maps occur in the study of dynamical systems and often generate fractals. In this paper, an improved logistic map, namely a double-bottom map, with particle swarm optimization was applied to the test functions. Simple PSO adopts a random sequence with a random starting point as a parameter, and relies on this parameter to update the positions and velocities of the particles. However, PSO often leads to premature convergence, especially in complex multi-peak search problems. In recent years, the use of chaotic sequences in optimization techniques, rather than random sequences with random seeds, has been growing steadily. Chaotic sequences, which are created by means of chaotic maps, have been proven easy and fast to generate and are more easily stored than random seed processes. They can improve the performance of PSO due to their unpredictability. The double-bottom map is designed around the updating equation of PSO in order to balance the exploration and exploitation capabilities. We embedded many commonly used chaotic maps, as well as our double-bottom map, into PSO to improve performance, and compared these versions to each other to demonstrate the effectiveness of the PSO with the double-bottom map. We call this improved PSO method Double-Bottom Map PSO (DBMPSO). In the conducted experiments, PSO, DBMPSO and other chaotic PSOs were extensively compared on 22 benchmark test functions. The experimental results indicate that the performance of DBMPSO is significantly better than that of the other PSOs tested.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

Generating an ideal random sequence is of great importance for numerical analysis, sampling and heuristic optimization. Recently, chaotic sequences produced via a chaos approach (chaotic maps) have gained much attention and are increasingly used in different areas, e.g. chaotic optimization algorithms (COA) [10]. The processes involved in the above-mentioned methods all include operations that adopt a chaotic sequence instead of a random sequence, which improves the results due

0096-3003/$ - see front matter © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2012.06.015

* Corresponding author.

E-mail addresses: chyang@cc.kuas.edu.tw (C.-H. Yang), 1096305108@cc.kuas.edu.tw (S.-W. Tsai), chuang@isu.edu.tw (L.-Y. Chuang), chyang@mail.nkmu.edu.tw (C.-H. Yang).

Applied Mathematics and Computation

to the unpredictability of the sequences [1]. Chaos can be described as a bounded nonlinear system with deterministic dynamic behavior. It has both ergodic and stochastic properties [12]. In addition, chaos is very sensitive to its initial conditions and parameters. In other words, cause and effect are not proportional in chaotic systems: small differences in the initial values have a profound influence after a number of iterations (the "butterfly effect"). Mathematically, chaos is random and unpredictable, yet it also possesses an element of regularity.

Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Kennedy and Eberhart in 1995 [9], inspired by the simulation of the social behavior of organisms, such as birds in a flock and fish in a school. During the past decade, PSO has been successfully employed to effectively solve optimization problems in various areas, including function optimization and parameter optimization, amongst others [2]. PSO shows a promising performance on nonlinear function optimization and has thus received much attention. However, the local search capability of PSO is poor, and it often leads to premature convergence, especially in complex multi-peak search problems [8]. Recently, further improvements using the chaos approach have been proposed in order to overcome the inherent disadvantages of PSO. Chaotic maps constitute an improvement, since they are easily implemented and have the ability to avoid entrapment of the PSO in local optima. The chaos characteristics can enhance the PSO by enabling it to escape from local solutions, and thus improve the global search capability of PSO [17].

For high-performance optimization techniques, a suitable tradeoff between exploration and exploitation is essential. One of the most important considerations in PSO is how to effectively balance the global and local search abilities of the swarm, because the proper balance of global and local search over the entire run is critical to the success of PSO. For this reason, a strategy parameter, the well-known inertia weight factor, was introduced in an effort to strike a better balance between global exploration and local exploitation [14]. The inertia weight factor was originally introduced into the velocity update equation of PSO. A large inertia weight facilitates global exploration and thus enables PSO to search over various regions of the problem space, while a small inertia weight facilitates local exploitation, which searches a promising area for a precise optimum. After the inertia weight strategy parameter had been proposed, an increasing number of investigations focused on ways to modify the inertia weight during the search process in order to further improve the performance of PSO. Jiang and Bompard adopted sequences generated by a chaotic map as a substitute for the inertia weight parameter in PSO [6]. All of the above-mentioned methods control the inertia weight of particles and result in an improved performance of the PSO algorithm. A common characteristic of these methods is that an initially large inertia weight value decreases to a small value at the end, which allows particles to cover a wider search space early and to converge on promising regions of the search space later. Hence, we also adopt an inertia weight that linearly decreases (LDW) from 0.9 to 0.4 during the search process, an operation that greatly improves the performance of PSO [15].

In order to further improve the performance of PSO, we investigated more efficient ways of utilizing the r1 and r2 factors, which affect the convergence behavior of PSO. We propose a novel chaotic map that treats 0.0, 0.5 and 1.0 as three important values for adjusting the PSO search direction. Different distribution ratios of 0.0, 0.5 and 1.0 can be produced by adjusting the parameter n of a double-bottom map. Twenty-two benchmark functions with unimodal and multimodal traits were selected as test functions, and the parameters n = 1, 2, 4 and 8 were used to test them. Statistical analysis of the experimental results indicates that the performance of DBMPSO (n = 2) is better than that of DBMPSO (n = 1), DBMPSO (n = 4), DBMPSO (n = 8) and other chaotic PSOs.

2. Method

2.1. Particle swarm optimization (PSO)

In PSO, the swarm consists of N particles moving around a D-dimensional search space. Each particle evaluates the objective function at its current location, and then determines its movement through the search space by combining some aspects of the history of its own current and best locations with those of one or more members of the swarm, with some random perturbations. The pbest_i value is the best previously visited location of the ith particle, denoted p_i = (p_i1, p_i2, ..., p_iD). The gbest value is the best location visited so far by any particle of the swarm, denoted g = (g_1, g_2, ..., g_D). The current location of the ith particle is represented by x_i = (x_i1, x_i2, ..., x_iD), x ∈ (Xmin, Xmax)^D, and its velocity is represented by v_i = (v_i1, v_i2, ..., v_iD), v ∈ (Vmin, Vmax)^D.

The PSO algorithm can be divided into the following steps within a processing period. The process for implementing PSO is as follows:

Step 1. Initialization of a population array of particles with random locations and velocities in D dimensions in the search space of the problem.

Step 2. Evaluation of the desired optimization fitness function in D variables for each particle.

Step 3. Comparison of each particle's fitness value with its pbest. If the current value is better than the fitness value of pbest, then set the fitness value of pbest equal to the current value, and the location of pbest equal to the current location in D-dimensional space.

Step 4. Identification of the neighborhood particle with the best success so far; its index is assigned to gbest.

Step 5. Change of the velocity and position of each particle according to Eqs. (1) and (2).


Step 6. If a termination criterion is met, the PSO process is terminated. Otherwise go to Step 2.

In the first step, particles are initialized as a population of random solutions. Each particle then finds its own pbest_i by comparing its current fitness value to the fitness of its previous locations (Steps 2 and 3). In Step 4, the gbest of all particles in the population is determined. In Step 5, the PSO algorithm searches for optimal solutions by updating the generations: in each generation, the position and velocity of the ith particle are updated with pbest_i and the gbest of the swarm population. The update equations can be formulated as:

[Fig. 1. Frequency spectra of the parameter r over [0.0, 1.0] (frequency 0–100) for 1000 iterations, with Cr0 = DBMr0 = 0.3: (a) random seed, (b) logistic map, (c) double-bottom map with n = 1 (2nπ = 2π), (d) double-bottom map with n = 2 (2nπ = 4π), (e) double-bottom map with n = 4 (2nπ = 8π), (f) double-bottom map with n = 8 (2nπ = 16π).]

v_id^new = w × v_id^old + c1 × r1 × (pbest_id − x_id^old) + c2 × r2 × (gbest_d − x_id^old)    (1)

x_id^new = x_id^old + v_id^new    (2)

where r1 and r2 are random numbers between (0, 1), and c1 and c2 are acceleration constants that control how far a particle moves in a single generation. Velocities v_id^new and v_id^old denote the velocities of the new and old particle, respectively; x_id^old is the current particle position, and x_id^new is the updated particle position. The inertia weight w controls the impact of the previous velocity of a particle on its current one; w is designed to replace Vmax and adjusts the influence of previous particle velocities on the optimization process [14]. In general, the inertia weight decreases linearly from 0.9 to 0.4 throughout the search process [13]. The respective equation can be written as:

w = (w_max − w_min) × (Iteration_max − Iteration_i) / Iteration_max + w_min    (3)

In Eq. (3), w_max is 0.9, w_min is 0.4, and Iteration_max is the maximum number of allowed iterations. In Eqs. (1) and (2), the velocity expresses the degree to which a particle's position should change at a particular moment in time so that it can approach the global best position, i.e., the velocity of the particle flying toward the best position. To constrain the search, the particles' velocities in each dimension are limited to [Vmin, Vmax]^D, and the particles' positions are limited to [Xmin, Xmax]^D, thus determining the size of the steps a particle is allowed to take through the solution space. Finally, if a termination criterion is met, usually a sufficiently good fitness or a maximum number of iterations, the PSO process ends; otherwise Steps 2–6 are repeated in the next iteration.
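As a concrete reading of Eqs. (1)–(3), the following is a minimal NumPy sketch of one PSO generation. It is a sketch under stated assumptions (vectorized swarm arrays, symmetric bounds); the function name and default values are illustrative choices, not details from the paper.

```python
import numpy as np

def pso_update(x, v, pbest, gbest, iteration, iteration_max,
               c1=2.0, c2=2.0, w_max=0.9, w_min=0.4,
               x_bounds=(-100.0, 100.0), v_bounds=(-100.0, 100.0)):
    """One PSO generation implementing Eqs. (1)-(3).

    x, v   : (N, D) arrays of particle positions and velocities
    pbest  : (N, D) array of personal best positions
    gbest  : (D,) array, global best position
    """
    n, d = x.shape
    # Eq. (3): inertia weight decreasing linearly from w_max to w_min
    w = (w_max - w_min) * (iteration_max - iteration) / iteration_max + w_min
    # r1 and r2 are uniform random numbers in (0, 1)
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    # Eq. (1): velocity update, then clamp to [Vmin, Vmax]^D
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, *v_bounds)
    # Eq. (2): position update, kept inside [Xmin, Xmax]^D
    x = np.clip(x + v, *x_bounds)
    return x, v
```

In CPSO and DBMPSO (Sections 2.2 and 2.3) the same skeleton is reused, with r1 and r2 replaced by chaotic sequences.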

2.2. Chaotic particle swarm optimization (CPSO)

In PSO, the parameters w, r1 and r2 are the key factors affecting the convergence behavior of the PSO. The inertia weight w controls the balance between the global exploration and the local search ability. A large inertia weight facilitates the global search, while a small inertia weight facilitates the local search. For this reason, an inertia weight that linearly decreases from 0.9 to 0.4 throughout the search process is usually adopted [13]. Additionally, chaotic sequences can be quickly generated and easily stored, so there is no need to store long sequences [5]. The eight chaotic maps (Appendix A) are applied to substitute the random parameters r1 and r2 in the original PSO; this substitution results in a more efficient search process. Cr_k is randomly generated for each independent run and Cr_{k+1} is the next state.
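As a minimal illustration of this substitution, the sketch below iterates the logistic map (one of the eight maps in Appendix A) and feeds its state into the velocity update in place of r1 and r2; the map constant 4.0 and the example seed are our assumptions.

```python
def logistic_map(cr):
    """One step of the logistic map: Cr_{k+1} = 4 * Cr_k * (1 - Cr_k)."""
    return 4.0 * cr * (1.0 - cr)

# Cr_0 is drawn randomly once per independent run; each generation the map is
# iterated and its state replaces the random parameters in Eq. (1), e.g.
#   v_new = w * v + c1 * Cr_k * (pbest - x) + c2 * Cr_k * (gbest - x)
cr = 0.3   # example seed; must avoid the map's fixed points (0 and 0.75)
for k in range(5):
    cr = logistic_map(cr)
    print(f"Cr_{k + 1} = {cr:.6f}")
```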

2.3. Double-bottom map PSO (DBMPSO)

The double-bottom map is proposed in this study. Unlike other chaotic maps, the double-bottom map provides high frequencies in three regions of the value range, i.e., near 0.0, 0.5, and 1.0. In PSO, r1 and r2 separately influence the exploitation and exploration abilities, so how r1 and r2 affect the convergence behavior of PSO is very important. Ideally, different distribution ratios of 0.0, 0.5, and 1.0 can effectively balance the search behavior; the double-bottom map is designed to satisfy this PSO property. Fig. 1c–f clearly shows that the frequency spectra of double-bottom maps peak near 0.0, 0.5, and 1.0; the frequencies near these values can be adjusted by the variable n.


In double-bottom map PSO (DBMPSO), sequences generated by the double-bottom map substitute the random parameters r1 and r2 in PSO. The parameters r1 and r2 are modified by the double-bottom map based on the following equation:

DBMr_{k+1} = [sin(2nπ × DBMr_k) + 1] / 2,  n ∈ N    (4)

The initial range of DBMr_0 is [0, 1], but DBMr_0 must not equal {0, 0.25, 0.5, 1} when n is an integer. The velocity update equation for DBMPSO can accordingly be formulated as:

v_id^new = w × v_id^old + c1 × DBMr × (pbest_id − x_id^old) + c2 × DBMr × (gbest_d − x_id^old)    (5)

In Eq. (5), DBMr is a function based on the results of the double-bottom map, with values between 0.0 and 1.0. The pseudo-code of DBMPSO is shown below.

DBMPSO pseudo-code (the particle loop variable is written i here to avoid clashing with the map parameter n):

Begin
  Randomly initialize the particle swarm and DBMr0
  While (the number of iterations or the stopping criterion is not met)
    Evaluate the fitness of the particle swarm
    For i = 1 to number of particles
      Find pbest
      Find gbest
      For d = 1 to number of dimensions of the particle
        Update the position of the particle by Eqs. (5) and (2)
      Next d
    Next i
    Update the inertia weight value by Eq. (3)
    Update the value of DBMr by Eq. (4)
  Next generation until the stopping criterion is met
End
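The pseudo-code translates directly into a short NumPy program. The following is a minimal sketch under stated assumptions: a placeholder sphere objective, Vmax = Xmax (as in Section 3.2), and a single shared DBMr value updated once per generation; none of the identifiers are from the paper.

```python
import numpy as np

def double_bottom_map(r, n=2):
    """Eq. (4): DBMr_{k+1} = [sin(2 * n * pi * DBMr_k) + 1] / 2."""
    return (np.sin(2.0 * n * np.pi * r) + 1.0) / 2.0

def dbmpso(fitness, dim=30, particles=30, iters=1000, n=2,
           c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, xmax=100.0):
    """Minimal DBMPSO sketch following the pseudo-code above (minimization)."""
    x = np.random.uniform(-xmax, xmax, (particles, dim))
    v = np.random.uniform(-xmax, xmax, (particles, dim))
    dbmr = np.random.uniform(0.0, 1.0)   # DBMr_0; must avoid {0, 0.25, 0.5, 1}
    pbest = x.copy()
    pbest_f = fitness(x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for it in range(iters):
        f = fitness(x)
        improved = f < pbest_f                                  # find pbest
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()                # find gbest
        w = (w_max - w_min) * (iters - it) / iters + w_min      # Eq. (3)
        v = w * v + c1 * dbmr * (pbest - x) + c2 * dbmr * (gbest - x)  # Eq. (5)
        v = np.clip(v, -xmax, xmax)        # assumes Vmax = Xmax, per Section 3.2
        x = np.clip(x + v, -xmax, xmax)    # Eq. (2)
        dbmr = double_bottom_map(dbmr, n)  # Eq. (4)
    return gbest, pbest_f.min()

# usage with a placeholder sphere objective (global minimum 0 at the origin)
sphere = lambda x: np.sum(x ** 2, axis=1)
best_x, best_f = dbmpso(sphere, dim=30, particles=30, iters=1000, n=2)
```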

To show the characteristics of random seeds, the logistic map, and the double-bottom map, we draw the frequency spectra of the latter for different values of n, i.e., n = 1, 2, 4 and 8. The respective spectra can be found in Fig. 1a–f. The distribution of the random seed sequence is almost flat (Fig. 1a). The distribution of the logistic map sequence with LMr0 = 0.3 over 1000 iterations is shown in Fig. 1b. The output sequence has more occurrences near the extremes of 0 and 1, i.e., values near 0 and 1 occur with a higher probability. Nevertheless, the logistic map can successfully replace the parameters r1 and r2 of PSO over the entire space due to its ergodicity, even though the distribution


is not flat [7]. Double-bottom maps raise the probability near the 0.5 mark to compensate for this deficiency of the logistic map; the probability near 0.5 can be adjusted by the variable n (Fig. 1c–f). This may be a possible strategy for improving the performance of PSO.
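This frequency behavior is easy to reproduce by histogramming long sequences from each map: the logistic map piles up near 0 and 1, while the double-bottom map also shows mass around 0.5. A minimal sketch, assuming a 10-bin histogram and arbitrary seeds:

```python
import numpy as np

def sequence(map_fn, r0, length=1000):
    """Iterate a 1-D map from seed r0 and collect the trajectory."""
    seq, r = [], r0
    for _ in range(length):
        r = map_fn(r)
        seq.append(r)
    return np.asarray(seq)

logistic = lambda r: 4.0 * r * (1.0 - r)
double_bottom = lambda r, n=2: (np.sin(2.0 * n * np.pi * r) + 1.0) / 2.0

for name, seq in [("logistic", sequence(logistic, 0.3)),
                  ("double-bottom n=2", sequence(double_bottom, 0.3))]:
    hist, _ = np.histogram(seq, bins=10, range=(0.0, 1.0))
    # the logistic map concentrates in the outermost bins; the double-bottom
    # map additionally shows a peak in the bins around 0.5
    print(name, hist)
```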

We use Devaney’s definition of chaos[4], which implies chaotic properties of our functions. Devaney’s definition of chaos is based on three conditions: (1) the sensitive dependence upon the initial condition, (2) the topological transitivity, and (3) the dense distribution of the periodic orbits. Another popular method chaos definition is the Lyapunov chaos (positive Lyapunov exponent)[11,16]. According to the sensitivity to initial conditions, the orbits inside a strange attractor would only separate and then never meet again. The sensitivity or insensitivity to initial conditions is quantified by the dominant Lyapunov Exponent (LE), which is also a very useful approach for distinguishing chaotic and nonchaotic dynamics. LE is the rate of growth over time of the effects of a small perturbation to the system state; a system with k > 0 implies that the property of sensitive dependence on initial conditions and effects of perturbation are magnified over time by the system’s intrinsic dynamic (i.e., it is chaotic), while k <= 0 implies that the effects of external perturbations decay asymptotically to zero over time (i.e., the system is not chaotic).Fig. 2shows the LE of the double-bottom map with the control parameter n ranging from 0 to 20. Analyzing the LE in the double-bottom map, it can be seen that the first chaotic start is at n = 0.7, in which k <= 0 still exists with [0.7, 3.07]. However, the chaotic properties arise when n > 3.07 until 1

[Fig. 2. Lyapunov exponent of a double-bottom map.]
[Fig. 3. Time series plot of the double-bottom map (n = 4) for 100 generations of the Cr value with four different initial values.]
[Fig. 4. Time series plot of the double-bottom map (n = 1) for 100 generations of the Cr value with four different initial values.]

Table 1
Parameter settings for the 22 benchmark functions.

Function | Trait | Search space | Asymmetric initialization range | Xmin | Xmax | Optimum (value = 0)
Sphere | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | 0^D
Ellipsoid | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | 0^D
Sum of difference power | Unimodal | −3 ≤ xi ≤ 3 | 1.5 ≤ xi ≤ 3.0 | −3 | 3 | 0^D
Cigar | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | 0^D
Ridge | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | 0^D
Step | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | 0^D
Colville | Unimodal | −100 ≤ xi ≤ 100 | 15 ≤ xi ≤ 30 | −100 | 100 | (1, 1, 1, 1)
Easom | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | (π, π)
Matyas | Unimodal | −10 ≤ xi ≤ 10 | 5 ≤ xi ≤ 10 | −100 | 100 | (0, 0)
Beale | Unimodal | −4.5 ≤ xi ≤ 4.5 | −4.50 ≤ xi ≤ −2.25 | −4.5 | 4.5 | (3, 0.5)
Bohachevsky 1 | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | (0, 0)
Bohachevsky 2 | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | (0, 0)
Bohachevsky 3 | Unimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | (0, 0)
Rosenbrock | Unimodal | −100 ≤ xi ≤ 100 | 15 ≤ xi ≤ 30 | −100 | 100 | 1^D
Rastrigin | Multimodal | −10 ≤ xi ≤ 10 | 2.56 ≤ xi ≤ 5.12 | −10 | 10 | 0^D
Griewank | Multimodal | −600 ≤ xi ≤ 600 | 300 ≤ xi ≤ 600 | −600 | 600 | 0^D
Ackley | Multimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | 0^D
Schwefel | Multimodal | −500 ≤ xi ≤ 500 | −500 ≤ xi ≤ −250 | −500 | 500 | 420.9687^D
Schaffer f6 | Multimodal | −100 ≤ xi ≤ 100 | 50 ≤ xi ≤ 100 | −100 | 100 | (0, 0)
Goldstein-Price_2D | Multimodal | −2 ≤ xi ≤ 2 | 1 ≤ xi ≤ 2 | −2 | 2 | (0, −1)
Six-Hump camel back | Multimodal | −5 ≤ xi ≤ 5 | 2.5 ≤ xi ≤ 5.0 | −5 | 5 | (−0.0898, 0.7126), (0.0898, −0.7126)


We initiate the system by providing a vector with an initial value to observe how the system evolves. Figs. 3 and 4 show four initial values with small differences in the double-bottom map, i.e., 0.77777, 0.7777, 0.777, and 0.77. We use n = 4 (Fig. 3) and n = 1 (Fig. 4) in the double-bottom map and run 100 iterations to analyze the variation (plotted at intervals of five iterations). We then start four trajectories from similar initial conditions and subject two trajectories to the saved sequence of random perturbations. Fig. 3 illustrates that the starting points of the four initial values are very close,

Table 2
Mean best fitness values for 14 unimodal benchmark functions for DBMPSO. Each cell gives the mean ± standard deviation of the best fitness values over 30 runs, with the best fitness value over 30 runs in parentheses.

Function | Pop | DBMPSO (n = 1) | DBMPSO (n = 2) | DBMPSO (n = 4) | DBMPSO (n = 8)
Sphere | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Sphere | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Ellipsoid | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Ellipsoid | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Sum of difference power | 30 | 1.121E−15 ± 4.886E−15 (0.000) | 0.000 ± 0.000 (0.000) | 1.878E−16 ± 1.011E−15 (0.000) | 5.795E−18 ± 2.639E−17 (0.000)
Sum of difference power | 50 | 2.229E−14 ± 1.199E−13 (0.000) | 0.000 ± 0.000 (0.000) | 1.114E−14 ± 5.138E−14 (0.000) | 1.052E−17 ± 5.666E−17 (0.000)
Cigar | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Cigar | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Ridge | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Ridge | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Step | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Step | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Colville | 30 | 8.876E−13 ± 6.828E−13 (2.104E−13) | 3.229E−13 ± 1.108E−13 (1.64E−14) | 6.229E−13 ± 2.265E−13 (6.636E−14) | 1.978E−1 ± 1.065E−2 (4.393E−15)
Colville | 50 | 1.978E−1 ± 1.065E−2 (3.446E−14) | 2.8E−13 ± 1.156E−13 (3.369E−14) | 1.978E−1 ± 1.065E−2 (5.114E−14) | 6.203E−13 ± 1.777E−13 (2.624E−13)
Easom | 30 | 5.218E−13 ± 2.486E−13 (5.829E−14) | 5.369E−14 ± 3.776E−14 (0.000) | 4.882E−13 ± 2.521E−13 (2.665E−14) | 5.421E−13 ± 2.736E−13 (2.576E−14)
Easom | 50 | 4.08E−13 ± 2.72E−13 (3.331E−16) | 7.773E−14 ± 5.642E−14 (1.554E−15) | 5.112E−13 ± 2.855E−13 (1.665E−14) | 4.361E−13 ± 2.561E−13 (3.186E−14)
Matyas | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Matyas | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Beale | 30 | 5.08E−2 ± 1.901E−1 (2.565E−14) | 1.243E−13 ± 6.316E−14 (1.792E−15) | 3.948E−13 ± 2.844E−13 (7.583E−16) | 4.422E−13 ± 2.709E−13 (6.505E−15)
Beale | 50 | 4.52E−13 ± 2.628E−13 (7.281E−15) | 6.985E−14 ± 5.291E−14 (3.359E−15) | 4.45E−13 ± 2.863E−13 (1.445E−14) | 4.637E−13 ± 2.54E−13 (2.294E−14)
Bohachevsky 1 | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Bohachevsky 1 | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Bohachevsky 2 | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Bohachevsky 2 | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Bohachevsky 3 | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Bohachevsky 3 | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Rosenbrock | 30 | 1.246E−1 ± 1.42E−1 (9.447E−10) | 7.038E−5 ± 1.321E−4 (8.031E−15) | 1.724E−1 ± 1.407E−1 (8.176E−13) | 2.202E−1 ± 1.215E−1 (2.258E−13)
Rosenbrock | 50 | 1.259E−1 ± 1.412E−1 (3.462E−13) | 3.602E−5 ± 5.71E−5 (9.095E−13) | 1.62E−1 ± 1.417E−1 (2.597E−9) | 2.197E−1 ± 1.212E−1 (4.54E−13)

The best results are indicated in bold type in the original.


but their Cr values generate a huge divergence over time. The divergence amongst the four Cr values after 10 iterations should be noted: the Cr values vary (i.e., exhibit the chaotic property) throughout the achievable space, from (0.5, 1) to (0, 1). Fig. 4 shows that the trajectories quickly converge and thereafter follow identical paths, whereas the Fig. 3 dynamics show that the trajectories soon diverge, leaving no hint of their initial similarity. The above verifies that our proposed double-bottom map is indeed chaotic.
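Following the LE definition above, the dominant Lyapunov exponent of a one-dimensional map can be estimated numerically as the trajectory average of ln|f′(x_k)|. The sketch below applies this standard estimator to Eq. (4), whose derivative is f′(x) = nπ cos(2nπx); the burn-in and trajectory lengths are arbitrary choices and the resulting estimates are only indicative.

```python
import numpy as np

def lyapunov_exponent(n=4, r0=0.3, burn_in=200, steps=5000):
    """Estimate the dominant LE of the double-bottom map, Eq. (4).

    For f(x) = [sin(2*n*pi*x) + 1] / 2 the derivative is
    f'(x) = n * pi * cos(2*n*pi*x); lambda > 0 indicates chaos.
    """
    r = r0
    for _ in range(burn_in):                       # discard the transient
        r = (np.sin(2.0 * n * np.pi * r) + 1.0) / 2.0
    total = 0.0
    for _ in range(steps):                         # average ln|f'(x_k)|
        total += np.log(abs(n * np.pi * np.cos(2.0 * n * np.pi * r)))
        r = (np.sin(2.0 * n * np.pi * r) + 1.0) / 2.0
    return total / steps

for n in (1, 2, 4, 8):
    print(f"n = {n}: estimated LE = {lyapunov_exponent(n=n):+.3f}")
```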

3. Numerical simulation

3.1. Benchmark functions

The 22 most commonly used representative benchmark functions were employed in our experiments to illustrate and analyze the effectiveness and performance of the PSO algorithm with chaotic adjustment for numerical optimization problems. These 22 functions are shown in Appendices A–C and can be grouped into unimodal functions (Sphere, Ellipsoid, Sum of difference power, Cigar, Ridge, Step, Colville, Easom, Matyas, Beale, Bohachevsky 1, Bohachevsky 2, Bohachevsky 3, and Rosenbrock) and multimodal functions (Rastrigin, Griewank, Ackley, Schwefel, Schaffer f6, Goldstein-Price, Six-Hump camel back, and Booth), for which the number of local minima increases exponentially with the problem dimension.
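For reference, here are two of the listed benchmarks, one unimodal and one multimodal, in their standard textbook forms; these definitions are the commonly used ones and are assumed to match Appendices A–C.

```python
import numpy as np

def sphere(x):
    """Unimodal: f(x) = sum(x_i^2), global minimum 0 at x = 0."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Multimodal: f(x) = 10*D + sum(x_i^2 - 10*cos(2*pi*x_i)).

    The cosine term creates a grid of local minima whose count grows
    exponentially with the dimension D; the global minimum is 0 at x = 0.
    """
    d = x.size
    return 10.0 * d + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

x = np.zeros(30)
print(sphere(x), rastrigin(x))   # both print 0.0 at the global optimum
```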

3.2. Parameter settings

In our experiments, a dimension size of 30 was tested for each function, and the corresponding maximum number of generations was set to 1000. Two population sizes were used for each function, i.e., 30 and 50 (see Tables 2 and 3). The same set of parameters was assigned for all PSOs, i.e., c1 = c2 = 2. The inertia weight w we used was recommended by Shi and Eberhart [13] and linearly decreased from 0.9 to 0.4. Xmax was equal to Vmax and Xmin was equal to Vmin. For each

Table 3
Mean best fitness values for eight multimodal benchmark functions for DBMPSO. Each cell gives the mean ± standard deviation of the best fitness values over 30 runs, with the best fitness value over 30 runs in parentheses.

Function | Pop | DBMPSO (n = 1) | DBMPSO (n = 2) | DBMPSO (n = 4) | DBMPSO (n = 8)
Rastrigin | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Rastrigin | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Griewank | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Griewank | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Ackley | 30 | 8.882E−16 ± 0.000 (8.882E−16) | 8.882E−16 ± 0.000 (8.882E−16) | 8.882E−16 ± 0.000 (8.882E−16) | 8.882E−16 ± 0.000 (8.882E−16)
Ackley | 50 | 8.882E−16 ± 0.000 (8.882E−16) | 8.882E−16 ± 0.000 (8.882E−16) | 8.882E−16 ± 0.000 (8.882E−16) | 8.882E−16 ± 0.000 (8.882E−16)
Schwefel | 30 | 6.391E+3 ± 2.1E+3 (4.014E+3) | 4.375E+2 ± 8.288E+2 (3.818E−4) | 5.635E+3 ± 2.785E+3 (5.18E−4) | 4.776E+3 ± 3.316E+3 (3.818E−4)
Schwefel | 50 | 6.297E+3 ± 1.904E+3 (3.569E+2) | 2.261E+2 ± 6.404E+2 (3.818E−4) | 5.768E+3 ± 2.621E+3 (1.937E+1) | 5.781E+3 ± 2.65E+3 (4.757E+2)
Schaffer f6 | 30 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Schaffer f6 | 50 | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000) | 0.000 ± 0.000 (0.000)
Goldstein-Price_2D | 30 | 5.161E−13 ± 2.508E−13 (5.551E−14) | 1.693E−13 ± 5.116E−14 (3.73E−14) | 5.446E−13 ± 2.407E−13 (5.373E−14) | 4.807E−13 ± 2.557E−13 (9.193E−14)
Goldstein-Price_2D | 50 | 5.383E−13 ± 2.134E−13 (1.79E−13) | 2.15E−13 ± 6.736E−14 (7.905E−14) | 5.314E−13 ± 2.472E−13 (7.683E−14) | 5.016E−13 ± 2.062E−13 (1.652E−13)
Six-Hump camel back | 30 | 5.523E−13 ± 2.11E−13 (5.462E−14) | 1.219E−13 ± 7.455E−14 (1.044E−14) | 5.58E−13 ± 2.794E−13 (5.329E−15) | 5.564E−13 ± 3.132E−13 (1.021E−14)
Six-Hump camel back | 50 | 3.946E−13 ± 2.479E−13 (1.221E−14) | 9.261E−14 ± 4.616E−14 (8.438E−15) | 4.348E−13 ± 2.287E−13 (4.818E−14) | 4.895E−13 ± 3.076E−13 (2.975E−14)
Booth | 30 | 4.817E−13 ± 2.475E−13 (1.185E−14) | 1.167E−13 ± 6.152E−14 (7.508E−16) | 4.693E−13 ± 2.867E−13 (1.371E−14) | 4.346E−13 ± 2.544E−13 (2.997E−14)
Booth | 50 | 4.93E−13 ± 2.781E−13 (1.402E−14) | 1.123E−13 ± 6.709E−14 (1.664E−15) | 4.204E−13 ± 2.583E−13 (6.019E−14) | 4.888E−13 ± 2.715E−13 (2.232E−14)


experimental setting we executed 30 independent runs. The parameter settings of the 22 benchmark functions are listed in Table 1.

3.3. Results and discussion

The performances of PSO, DBMPSO and the other chaotic PSOs were compared by the mean fitness value and the best fitness value over 30 independent runs. The main experimental results for the 22 benchmark functions are listed in Tables 2 and 3. Additional test data (Tables 4 and 5) can be found in Appendix B. Each particle is randomly initialized and all methods use the same initial swarm. If any mean fitness value or best fitness value is < 10^−15, a value of 0.00E+00 is displayed. Fig. 5 (Appendix C) plots the mean best fitness values over the number of generations for PSO [13] and for 30 runs of DBMPSO with 30 particles on the 22 benchmark functions.

3.3.1. Performance of DBMPSO (n = 1), DBMPSO (n = 2), DBMPSO (n = 4) and DBMPSO (n = 8)

In DBMPSO, we used a double-bottom map to raise the probability near 0.5 in order to make up for the inherent shortcomings of logistic maps. To observe the effect of different probabilities near 0.5, we adjusted the variable n (n = 1, 2, 4 and 8). In our experiments, the most stable solutions were obtained when n = 2. DBMPSO with n = 2 outperformed PSO and logistic-map PSO on all benchmark functions under all of the experimental settings used. The experimental results also confirmed that raising the probability near 0.5 effectively compensates for the deficiency of the logistic map.

3.3.2. Performance of PSO, DBMPSO and other chaotic PSOs

Fig. 5 (Appendix C) plots the mean best fitness, as a logarithmic value, over the number of generations for the various PSO methods tested with 30 particles on the 22 30-dimensional functions. Tables 4 and 5 (Appendix B) indicate that DBMPSO (n = 2) is capable of finding optimal solutions in the cases where a value is < 10^−15. For the sake of convenience, we show the graphs for the test functions down to the value 10^−1. Fig. 5 (Appendix C) indicates that the search rate of DBMPSO is clearly superior to that of PSO, CPSO(Logistic), CPSO(Sinusoidal), CPSO(Tent), CPSO(Gauss), CPSO(Circle), CPSO(Arnold), CPSO(Sinai) and CPSO(Zaslavskii) on all 22 benchmark functions. In addition, the graphs in Fig. 5 (Appendix C) demonstrate that DBMPSO is capable of finding optimal solutions within 200 generations, a fact also reflected in Tables 4 and 5 (Appendix B), where the mean fitness value was equal to the best fitness value of 30 runs. The experimental results attest to the faster convergence of DBMPSO, which means that an optimal solution can be found faster. It can further be concluded from the data that raising the probability near 0.5 is an effective strategy to counteract the inherent flaws of the logistic map, and that the search capability can be improved by substituting the parameters r1 and r2 of PSO.

3.3.3. Statistical analysis

The 10 PSO methods were compared on the average function values of the PSO [13] with two statistical tests, i.e., the Friedman test and the multiple comparison approach [3]. The Friedman test is used to test whether the performances of the different PSO methods are equal. The multiple comparison approach is used to determine which methods had significantly different accuracies if the Friedman test was rejected.

3.3.3.1. Friedman test. The Friedman test is a nonparametric counterpart of the parametric two-way analysis of variance; it was used to compare the function values of the PSO methods because the distribution of the underlying population was not specified. The hypothesis being tested was that all methods have equal performance (function values); the alternative hypothesis was that not all methods have equal performance. R_ij is the rank (from 1 to k) assigned to method j on problem i; the rank equals 1 for the lowest value among the methods. In the case of a tie, average ranks are used. The test statistics are defined by the following equations:

T_f = (n − 1){B_f − nk(k + 1)^2 / 4} / (A_f − B_f)    (6)

R_j = Σ_{i=1}^{n} R_ij,  for j = 1, 2, ..., k    (7)

A_f = Σ_{i=1}^{n} Σ_{j=1}^{k} R_ij^2    (8)

B_f = (1/n) Σ_{j=1}^{k} R_j^2    (9)
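The statistic is straightforward to compute. Below is a direct transcription of Eqs. (6)–(9), assuming an (n problems × k methods) matrix of function values; scipy's rankdata is used for ranking, and its default method assigns average ranks on ties, as stated above.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_statistic(values):
    """Friedman-type test statistic of Eqs. (6)-(9).

    values : (n, k) array, one row per problem, one column per method;
             lower values rank better (rank 1 = lowest).
    """
    n, k = values.shape
    ranks = np.apply_along_axis(rankdata, 1, values)   # R_ij, average ranks on ties
    rj = ranks.sum(axis=0)                             # Eq. (7): column rank sums
    af = np.sum(ranks ** 2)                            # Eq. (8)
    bf = np.sum(rj ** 2) / n                           # Eq. (9)
    tf = (n - 1) * (bf - n * k * (k + 1) ** 2 / 4) / (af - bf)   # Eq. (6)
    return tf

# toy example: 5 problems, 3 methods; method 0 wins most problems
vals = np.array([[0.1, 0.5, 0.9],
                 [0.4, 0.2, 0.8],
                 [0.1, 0.6, 0.7],
                 [0.3, 0.9, 0.5],
                 [0.2, 0.4, 0.6]])
print(friedman_statistic(vals))
```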


References

[1] B. Alatas, E. Akin, A.B. Ozer, Chaos embedded particle swarm optimization algorithms, Chaos, Solitons & Fractals 40 (2009) 1715–1734.
[2] D.B. Chen, C.X. Zhao, Particle swarm optimization with adaptive population size and its application, Applied Soft Computing 9 (2009) 39–48.
[3] W.J. Conover, Practical Nonparametric Statistics, Wiley and Sons Inc., New York, 1998.
[4] R.L. Devaney, An Introduction to Chaotic Dynamical Systems, Addison-Wesley, 1989.
[5] G. Heidari-Bateni, C.D. McGillem, A chaotic direct-sequence spread-spectrum communication system, IEEE Transactions on Communications 42 (2002) 1524–1527.
[6] C.W. Jiang, E. Bompard, A hybrid method of chaotic particle swarm optimization and linear interior for reactive power optimisation, Mathematics and Computers in Simulation 68 (2005) 57–65.
[7] C.W. Jiang, E. Bompard, A self-adaptive chaotic particle swarm algorithm for short term hydroelectric system scheduling in deregulated environment, Energy Conversion and Management 46 (2005) 2689–2696.
[8] Y. Jiang, T.S. Hu, C. Huang, X.N. Wu, An improved particle swarm optimization algorithm, Applied Mathematics and Computation 193 (2007) 231–239.
[9] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942–1948.
[10] Z. Lu, L.S. Shieh, G.R. Chen, On robust control of uncertain chaotic systems: a sliding-mode synthesis via chaotic optimization, Chaos, Solitons & Fractals 18 (2003) 819–827.
[11] C. Robinson, Dynamical Systems: Stability, Symbolic Dynamics, and Chaos, Studies in Advanced Mathematics, CRC Press, 1998.
[12] H.G. Schuster, W. Just, Deterministic Chaos: An Introduction, fourth ed., Wiley-VCH, Weinheim, 2005.
[13] Y. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, in: Congress on Evolutionary Computation, Washington, DC, USA, 1999, pp. 1945–1949.
[14] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: IEEE World Congress on Computational Intelligence, Anchorage, AK, USA, 1998, pp. 69–73.
[15] Y. Shi, R.C. Eberhart, Parameter selection in particle swarm optimization, Lecture Notes in Computer Science 1447 (1998) 591–600.
[16] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer, 2003.
[17] T. Xiang, X. Liao, K.W. Wong, An improved particle swarm optimization algorithm combined with piecewise linear chaotic map, Applied Mathematics and Computation 190 (2007) 1637–1645.
