
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 21, 809-818 (2005)

Short Paper


A Parallel Particle Swarm Optimization Algorithm with Communication Strategies

JUI-FANG CHANG1, SHU-CHUAN CHU2, JOHN F. RODDICK3 AND JENG-SHYANG PAN4

1 Department of International Trade and 4 Department of Electronic Engineering
National Kaohsiung University of Applied Sciences
Kaohsiung, 807 Taiwan

2 Department of Information Management
Cheng Shiu University
Kaohsiung, 833 Taiwan

3 School of Informatics and Engineering
Flinders University of South Australia
Adelaide 5001, South Australia

Particle swarm optimization (PSO) is an alternative population-based evolutionary computation technique. It has been shown to be capable of optimizing hard mathematical problems in continuous or binary space. We present here a parallel version of the particle swarm optimization (PPSO) algorithm together with three communication strategies which can be used according to the independence of the data. The first strategy is designed for solution parameters that are independent or only loosely correlated, such as those of the Rosenbrock and Rastrigin functions. The second communication strategy can be applied to parameters that are more strongly correlated, such as those of the Griewank function. In cases where the properties of the parameters are unknown, a third hybrid communication strategy can be used. Experimental results demonstrate the usefulness of the proposed PPSO algorithm.

Keywords: particle swarm optimization (PSO), parallel particle swarm optimization (PPSO), communication strategies, Rosenbrock and Rastrigin functions, Griewank function

Received August 26, 2003; revised August 9, 2004; accepted September 23, 2004. Communicated by Chin-Teng Lin.

1. INTRODUCTION

The particle swarm optimization (PSO) algorithm is based on the evolutionary computation technique [1-3]. PSO optimizes an objective function by conducting a population-based search. The population consists of potential solutions, called particles, analogous to birds in a flock. The particles are randomly initialized and then fly freely across the multi-dimensional search space. While flying, every particle updates its velocity and position based on its own best experience and that of the entire population. This updating policy causes the particle swarm to move toward a region with a higher objective value. Eventually, all the particles gather around the point with the highest objective value. PSO attempts to simulate social behavior, which differs from the natural selection schemes of genetic algorithms.

PSO processes the search scheme using populations of particles, which corresponds to the use of individuals in genetic algorithms. Each particle is equivalent to a candidate solution of a problem. The particle moves according to an adjusted velocity, which is based on that particle's experience and the experience of its companions. For the D-dimensional function $f(\cdot)$, the $i$th particle at the $t$th iteration can be represented as

$$X_i^t = (x_i^t(1), x_i^t(2), \ldots, x_i^t(D)). \tag{1}$$

Assume that the best previous position of the $i$th particle at the $t$th iteration is represented as

$$P_i^t = (p_i^t(1), p_i^t(2), \ldots, p_i^t(D)); \tag{2}$$

then

$$f(P_i^t) \le f(P_i^{t-1}) \le \ldots \le f(P_i^1). \tag{3}$$

The velocity of the $i$th particle at the $t$th iteration, $V_i^t$, can be represented as

$$V_i^t = (v_i^t(1), v_i^t(2), \ldots, v_i^t(D)). \tag{4}$$

The best position amongst all the particles, $G^t$, from the first iteration to the $t$th iteration, where best is defined by some function of the swarm, is

$$G^t = (g^t(1), g^t(2), \ldots, g^t(D)). \tag{5}$$

The original particle swarm optimization algorithm can be expressed as follows:

$$V_i^{t+1} = V_i^t + C_1 \cdot r_1 \cdot (P_i^t - X_i^t) + C_2 \cdot r_2 \cdot (G^t - X_i^t), \tag{6}$$

$$X_i^{t+1} = X_i^t + V_i^{t+1}, \quad i = 0, 1, \ldots, N-1, \tag{7}$$

where $N$ is the particle size, $-V_{\max} \le V_i^{t+1} \le V_{\max}$ ($V_{\max}$ is the maximum velocity), and $r_1$ and $r_2$ are random variables such that $0 \le r_1, r_2 \le 1$.
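As a concrete illustration of Eqs. (6) and (7), the following is a minimal sketch in Python; it is not the authors' implementation, and the function name, NumPy usage, and default $C_1 = C_2 = 2$ are our assumptions:

```python
import numpy as np

def pso_step(X, V, P, G, C1=2.0, C2=2.0, v_max=1.0):
    """One iteration of the original PSO update, Eqs. (6)-(7).

    X, V, P: (N, D) arrays of positions, velocities, and per-particle
    best positions; G: (D,) best position found by the whole swarm.
    """
    N, D = X.shape
    r1 = np.random.rand(N, D)   # random variables, 0 <= r1 <= 1
    r2 = np.random.rand(N, D)   # random variables, 0 <= r2 <= 1
    V = V + C1 * r1 * (P - X) + C2 * r2 * (G - X)   # Eq. (6)
    V = np.clip(V, -v_max, v_max)   # enforce -Vmax <= V <= Vmax
    X = X + V                       # Eq. (7)
    return X, V
```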

A modified version of the particle swarm optimizer [4] and an adaptation using the inertia weight, $W$, a parameter for controlling the flying dynamics of the modified particle swarm [5], have also been presented. The latter version of the modified particle swarm optimizer can be expressed as

$$V_i^{t+1} = W^t \cdot V_i^t + C_1 \cdot r_1 \cdot (P_i^t - X_i^t) + C_2 \cdot r_2 \cdot (G^t - X_i^t), \tag{8}$$

$$X_i^{t+1} = X_i^t + V_i^{t+1}, \quad i = 0, 1, \ldots, N-1, \tag{9}$$

where $W^t$ is the inertia weight at the $t$th iteration, and $C_1$ and $C_2$ are factors used to control the relative weighting of the corresponding terms. The weighting factors $C_1$ and $C_2$ achieve a compromise between exploration and exploitation. In this paper, the concept of parallel processing is applied to particle swarm optimization, and Parallel Particle Swarm Optimization (PPSO) is presented based on a different solution space.
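The inertia-weighted update of Eq. (8) changes only the velocity term. A sketch under the same assumptions as above, with $W^t$ supplied per iteration (a decreasing schedule is common in the literature, though none is specified here):

```python
import numpy as np

def pso_step_inertia(X, V, P, G, W, C1=2.0, C2=2.0, v_max=1.0):
    """One iteration of the modified PSO update, Eqs. (8)-(9)."""
    N, D = X.shape
    r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
    V = W * V + C1 * r1 * (P - X) + C2 * r2 * (G - X)  # W^t scales the old velocity
    V = np.clip(V, -v_max, v_max)
    X = X + V
    return X, V
```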

2. PARALLEL PARTICLE SWARM OPTIMIZATION

Parallel processing uses multiple processors to produce the same results in reduced run time. In this study, the spirit of the data parallelism method was utilized to create a parallel particle swarm optimization (PPSO) algorithm. The purpose of applying parallel processing to particle swarm optimization goes beyond merely providing hardware acceleration. Rather, a distributed formulation is developed which gives better solutions with reduced overall computation.

It is difficult to find an algorithm which is efficient and effective for all types of problems. Our research has indicated that the performance of PPSO can be highly dependent on the level of correlation between parameters and on the nature of the communication strategy. The mathematical form of the parallel particle swarm optimization algorithm can be expressed as follows:

$$V_{i,j}^{t+1} = W^t \cdot V_{i,j}^t + C_1 \cdot r_1 \cdot (P_{i,j}^t - X_{i,j}^t) + C_2 \cdot r_2 \cdot (G_j^t - X_{i,j}^t), \tag{10}$$

$$X_{i,j}^{t+1} = X_{i,j}^t + V_{i,j}^{t+1}, \tag{11}$$

$$f(G^t) \le f(G_j^t), \tag{12}$$

where $i = 0, 1, 2, \ldots, N_j - 1$, $j = 0, 1, 2, \ldots, S - 1$, $S$ ($= 2^m$) is the number of groups (and $m$ is a positive integer), $N_j$ is the particle size for the $j$th group, $X_{i,j}^t$ is the position of the $i$th particle in the $j$th group at the $t$th iteration, $V_{i,j}^t$ is the velocity of the $i$th particle in the $j$th group at the $t$th iteration, $G_j^t$ is the best position among all the particles of the $j$th group from the first iteration to the $t$th iteration, and $G^t$ is the best position among all the particles in all the groups from the first iteration to the $t$th iteration.
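To make the group indexing of Eqs. (10)-(12) concrete, the sketch below evolves $S$ independent groups and maintains both the per-group bests $G_j^t$ and the overall best $G^t$. It reuses the hypothetical pso_step_inertia above; the dictionary layout is our own choice:

```python
def ppso_iteration(groups, f, W):
    """One PPSO iteration over S groups, Eqs. (10)-(12); minimizes f.

    groups: list of S dicts with keys "X", "V", "P" ((N_j, D) arrays)
    and "G" ((D,) array, the group's best position so far).
    """
    for g in groups:  # each group runs an independent PSO update
        g["X"], g["V"] = pso_step_inertia(g["X"], g["V"], g["P"], g["G"], W)
        for i in range(len(g["X"])):          # refresh per-particle bests P_{i,j}
            if f(g["X"][i]) < f(g["P"][i]):
                g["P"][i] = g["X"][i].copy()
        best = min(range(len(g["P"])), key=lambda i: f(g["P"][i]))
        g["G"] = g["P"][best].copy()          # per-group best G_j^t
    # overall best G^t: f(G^t) <= f(G_j^t) for every group j, Eq. (12)
    return min((g["G"] for g in groups), key=f)
```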

Three communication strategies have been developed for PPSO. The first strategy, shown in Fig. 1, is based on the observation that if parameters are independent or only loosely correlated, the better particles are likely to obtain good results quite quickly. Thus, multiple copies of the best particle among all groups, $G^t$, are mutated, and these mutated particles migrate to and replace the poorer particles in the other groups every $R_1$ iterations.
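A sketch of this first strategy follows; the mutation operator (Gaussian perturbation) and the choice of which "poorer" particles to replace (the worst by objective value) are our assumptions, since the paper does not fix them in this excerpt:

```python
import numpy as np

def migrate_best(groups, G, f, n_replace=1, sigma=0.1):
    """Communication strategy 1: every R1 iterations, copies of the
    overall best G^t are mutated and replace the poorest particles
    in each group (minimization assumed)."""
    for g in groups:
        n = len(g["X"])
        # indices of the n_replace worst particles in this group
        worst = sorted(range(n), key=lambda i: f(g["X"][i]))[-n_replace:]
        for i in worst:
            g["X"][i] = G + np.random.normal(0.0, sigma, size=G.shape)  # mutated copy of G^t
            g["V"][i] = np.zeros_like(G)   # reset velocity (an assumption)
```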

However, if the parameters of a solution are more strongly correlated, the better particles in each group will tend not to reach optimal results particularly quickly. In this case, a second communication strategy may be applied, as depicted in Fig. 2.


REFERENCES

1. R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the 6th International Symposium on Micro Machine and Human Science, 1995, pp. 39-43.
2. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942-1948.
3. P. Tarasewich and P. R. McMullen, "Swarm intelligence," Communications of the ACM, Vol. 45, 2002, pp. 63-67.
4. Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proceedings of the IEEE World Congress on Computational Intelligence, 1998, pp. 69-73.
5. Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, 1999, pp. 1945-1950.

Jui-Fang Chang (張瑞芳) received the Ph.D. degree in Finance from United States International University, U.S.A., in 1997. She currently chairs the International Business Department, National Kaohsiung University of Applied Sciences, Taiwan. She has a combined academic background and research experience in finance, international finance, international marketing, and international business management. Her current research concentrates on genetic algorithms, particle swarm optimization (PSO), fuzzy logic, and neural networks applied to portfolios and the stock market.

Shu-Chuan Chu (朱淑娟) received the B.S. degree from National Taiwan University of Science and Technology, Taiwan, in 1988, and the Ph.D. degree from the School of Informatics and Engineering at the Flinders University of South Australia in 2004. She is currently an Assistant Professor in the Department of Information Management, Cheng Shiu University. Her current research interests include soft computing, pattern recognition, and data mining.

John F. Roddick received the B.S. (Eng) (Hons) degree from Imperial College, London, the M.S. degree from Deakin University, and the Ph.D. degree from La Trobe University. He currently holds the SACITT Chair of Information Technology in the School of Informatics and Engineering at the Flinders University of South Australia. He has also held positions at the Universities of South Australia and Tasmania, and was a project leader and a consultant in the information technology industry. His technical interests include data mining and knowledge discovery, schema versioning, and enterprise systems. He is editor-in-chief of the Journal of Research and Practice in Information Technology, a fellow of the Australian Computer Society and the Institution of Engineers, Australia, and a member of the IEEE Computer Society and the ACM.


Jeng-Shyang Pan (潘正祥) received the B.S. degree in Electronic Engineering from the National Taiwan University of Science and Technology in 1986, the M.S. degree in Communication Engineering from Chiao Tung University, Taiwan, in 1988, and the Ph.D. degree in Electrical Engineering from the University of Edinburgh, U.K., in 1996. He is currently a Professor in the Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, Taiwan. Professor Pan has published 40 international journal papers and 100 conference papers. He is the co-editor-in-chief of the International Journal of Innovative Computing, Information and Control. His current research interests include computational intelligence, information security, and signal processing.
