
An improved vector particle swarm optimization for constrained optimization problems

Chao-li Sun a,*, Jian-chao Zeng a, Jeng-shyang Pan b,c

a Complex System and Computational Intelligence Laboratory, Taiyuan University of Science and Technology, Taiyuan, Shanxi 030024, China
b Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong 518055, China
c Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 807, Taiwan

Article info

Article history: Received 11 March 2009; Received in revised form 9 August 2010; Accepted 27 November 2010

Keywords: Particle swarm optimization; Constrained optimization problems; Multi-dimensional search algorithm

Abstract

Increasing attention is being paid to solving constrained optimization problems (COPs) frequently encountered in real-world applications. In this paper, an improved vector particle swarm optimization (IVPSO) algorithm is proposed to solve COPs. The constraint-handling technique is based on the simple constraint-preserving method. Velocity and position of each particle, as well as the corresponding changes, are all expressed as vectors in order to present the optimization procedure in a more intuitively comprehensible manner. The NVPSO algorithm [30], which uses one-dimensional search approaches to find a new feasible position on the flying trajectory of a particle when it escapes from the feasible region, has previously been proposed to solve COPs. Experimental results showed that searching only on the flying trajectory for a feasible position harmed the diversity of the swarm and thus reduced the global search capability of the NVPSO algorithm. In order to avoid neglecting any worthy position in the feasible region and to improve optimization efficiency, a multi-dimensional search algorithm is proposed here to search within a local region for a new feasible position. The local region is composed of all dimensions of the escaped particle's parent and current positions; the flying trajectory of the particle is therefore also contained in this local region. The new position not only lies in the feasible region but also has a better fitness value within this local region. The performance of IVPSO is tested on 13 well-known benchmark functions. Experimental results show that the proposed IVPSO algorithm is simple, competitive and stable.

© 2010 Elsevier Inc. All rights reserved.

1. Introduction

Constrained optimization problems (COPs) are mathematical programming problems frequently encountered and difficult to solve in applications such as engineering design, VLSI design, structural optimization, and location and allocation problems [23]. Consequently, it is important for both academicians and practitioners to solve constrained optimization problems efficiently and effectively. Generally, a constrained problem can be described as follows:

$$
\begin{aligned}
\min\;\; & f(\vec{x}) \\
\text{s.t.}\;\; & g_k(\vec{x}) \le 0, \quad k = 1, 2, \ldots, m \\
& h_p(\vec{x}) = 0, \quad p = 1, 2, \ldots, l \\
& \vec{x}_{\min} \le \vec{x} \le \vec{x}_{\max}
\end{aligned}
\tag{1}
$$


* Corresponding author. Tel.: +86 3516998016.
E-mail addresses: clsun1225@163.com (C.-l. Sun), zengjianchao@263.net (J.-c. Zeng), jspan@cc.kuas.edu.tw (J.-s. Pan).


where $\vec{x} = (x_1, x_2, \ldots, x_D)^T$ is the solution vector; $\vec{x} \in S \subseteq \mathbb{R}^D$, where $S$ is a D-dimensional space with lower and upper bounds $[\vec{x}_{\min}, \vec{x}_{\max}]$ denoted as $\vec{x}_{\min} = (x_{1\min}, x_{2\min}, \ldots, x_{D\min})^T$ and $\vec{x}_{\max} = (x_{1\max}, x_{2\max}, \ldots, x_{D\max})^T$, respectively; $m$ and $l$ are the numbers of inequality and equality constraints, respectively; and the feasible region $F \subseteq S$ is the region of $S$ in which all constraints are satisfied. $\vec{x}$ is called a feasible solution when it lies in the feasible region $F$. Normally, when solving constrained optimization problems, an equality constraint is replaced by the two inequality constraints $h_p(\vec{x}) \le \delta$ and $h_p(\vec{x}) \ge -\delta$, $p = 1, 2, \ldots, l$, where $\delta$ is the allowed tolerance (a very small positive value).
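As an illustration of this formulation, here is a minimal Python sketch of the feasibility test implied by (1) and the δ-relaxation of equality constraints; the helper names `g_funcs`, `h_funcs` and the default value of `delta` are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def is_feasible(x, g_funcs, h_funcs, x_min, x_max, delta=1e-4):
    """Test whether x lies in the feasible region F of problem (1):
    bound constraints, g_k(x) <= 0, and |h_p(x)| <= delta (the
    relaxed form of the equality constraints)."""
    x = np.asarray(x, dtype=float)
    if np.any(x < x_min) or np.any(x > x_max):
        return False                                  # bound constraints
    if any(g(x) > 0.0 for g in g_funcs):
        return False                                  # inequality constraints
    return all(abs(h(x)) <= delta for h in h_funcs)   # relaxed equalities
```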

Deterministic algorithms, such as gradient methods [10,28], the penalty function method [4] and sequential quadratic programming [15], have been proposed to solve constrained optimization problems. However, most deterministic algorithms make assumptions on the continuity and differentiability of objective functions, and the results they obtain are often only local optima. This has led to evolutionary algorithms (EAs), such as evolutionary programming [5,29], genetic algorithms [1,2], simulated annealing [18,19], particle swarm optimization [6,16], the gravitational search algorithm [7], the heuristic Kalman algorithm [32] and some hybrid algorithms [22,34], being applied to constrained optimization problems. Compared to deterministic algorithms, evolutionary algorithms require fewer parameters and do not require the objective function to be differentiable or even continuous. Furthermore, they can act as global optimization techniques without specific knowledge of the problem because of the good balance between exploration and exploitation of the whole search space [12].

Kennedy and Eberhart proposed particle swarm optimization (PSO) as a global evolutionary algorithm in 1995 [6,16]. The idea of PSO was based on the simulation of simplified social models such as bird flocking and fish schooling. Like other stochastic evolutionary algorithms, PSO is independent of the mathematical characteristics of the objective problem. Unlike those algorithms, however, each particle in PSO has its own memory to "remember" its own best solution, so each particle has its own "idea" of whether it has found the optimum; in other algorithms, previous knowledge of the problem is destroyed once the population changes. PSO has attracted increasing attention because of its simple concept, easy implementation and quick convergence. These advantages have allowed it to be applied successfully in a variety of fields, mainly to unconstrained continuous optimization problems [17]. PSO has also been proposed for solving constrained optimization problems; however, like other evolutionary algorithms, it lacks an explicit constraint-handling mechanism, and compared with other kinds of evolutionary algorithms it has been less explored for this purpose. In this paper, an appropriate constraint-handling mechanism is built into the particle swarm optimization algorithm to solve constrained optimization problems.

Until now, the penalty function method, the feasibility-based rules method and the constraint-preserving method have been the main approaches used in PSO algorithms for handling constraints. The penalty function method is the most popular constraint-handling technique due to its simple principle: it converts a constrained optimization problem to an unconstrained one by adding a penalty item to the objective function [20,25]. This approach may work quite well for some problems if the penalty parameters are carefully tuned, but the tuning itself then becomes a difficult optimization problem [27], since too little or too much penalization may result in unsuccessful optimization. The feasibility-based rules method introduces rules for determining the best solution of the swarm (gbest) and the best historical solution of each particle (pbest) in PSO [3,12]; no additional parameters need to be designed. However, under feasibility-based rules, feasible solutions are always considered better than infeasible ones, which may create selection pressure toward feasible solutions and result in premature convergence. The constraint-preserving method is the easiest mechanism for understanding the handling of constraint conflicts. It reduces the search space by ensuring that all candidate solutions satisfy the constraints at all times [14]: solutions are initialized within the feasible space and all subsequent solutions iterate within the feasible region. This method requires initialization of particles inside the feasible region, which may require a long initialization process and may be difficult to achieve. However, particle swarm optimization has been shown to produce initial feasible solutions quickly even for highly constrained optimization problems [31].

Based on this analysis of the different constraint-handling mechanisms, the constraint-preserving method is selected to handle all constraint conflicts in this paper. In order to make the constraint-handling procedure in the hyperspace easier to follow, the position, velocity and the corresponding changes of each particle are all expressed as vectors. The fly-back mechanism [13] is common in the constraint-preserving method; the idea that each particle flies back to its parental position when it escapes from the feasible region is very simple, but whether the next position is within the feasible region depends completely on the changes in its velocity, which may result in the algorithm's stagnation if the new position keeps violating the constraints. The new vector particle swarm optimization (NVPSO) also uses the constraint-preserving method as the constraint-handling mechanism for solving constrained optimization problems [30]. In [30], two operations were proposed to draw the escaped particle back into the feasible region. First, a shrinkage coefficient was used to ensure that all particles were within the upper and lower bounds; then one-dimensional search approaches were employed to pull a particle back into the feasible space if it did not satisfy the functional equality or inequality constraints. The first step is performed separately to ensure that NVPSO can find optimal solutions that lie exactly on the upper or lower bounds. Since the parental position of the escaped particle is within the feasible region, a feasible position can certainly be found on the flying trajectory of the particle. Thus the NVPSO algorithm can find optimal solutions that do not violate the upper and lower bounds and satisfy all functional constraints. However, it may fail to find the optimum when the upper or lower bounds are separated from the boundaries of the functional constraints and the optimal solutions lie only on the latter, because one-dimensional search algorithms may not be able to reach the exact boundaries of the functional constraints. In order to avoid the drawbacks of these methods and find global optimal solutions for a constrained optimization problem, a multi-dimensional search approach is proposed in this paper to find a feasible position for the escaped particle.
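To make the limitation concrete, here is a rough sketch (not the exact procedure of [30]) of a one-dimensional search that bisects between the feasible parental position and the escaped position; by construction, every point it can return lies on the flying trajectory, which is exactly the restriction the multi-dimensional search removes. The `is_feasible` helper is the hypothetical test sketched earlier, and `parent` and `current` are assumed to be NumPy arrays.

```python
def trajectory_search(parent, current, is_feasible, steps=20):
    """Bisect along the flying trajectory from a feasible parent
    position toward an infeasible current position; returns a
    feasible point on that segment only."""
    lo, hi = 0.0, 1.0            # fractions of the step parent -> current
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if is_feasible(parent + mid * (current - parent)):
            lo = mid             # still feasible: advance toward current
        else:
            hi = mid             # infeasible: pull back toward parent
    return parent + lo * (current - parent)
```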

The remainder of this paper is organized as follows. Section 2 introduces the standard particle swarm optimization. Section 3 reviews recent PSO-based approaches for constrained optimization problems. In Section 4, the improved vector particle swarm optimization algorithm is proposed and explained in detail. Simulations and comparisons are presented in Section 5, along with some discussion. Finally, conclusions and suggestions for future work are provided in Section 6.

2. Standard particle swarm optimization

Particle swarm optimization is a stochastic, population-based, global evolutionary algorithm proposed by Kennedy and Eberhart in 1995 [6,16]. It has gained considerable acceptance because of its simplicity and effectiveness in performing difficult optimization tasks. In standard particle swarm optimization, each particle of the swarm adjusts its trajectory according to its own flying experience and the flying experiences of other particles within its topological neighborhood in a D-dimensional space $S$. The velocity and position of particle $i$ are represented as $\vec{v}_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ and $\vec{x}_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, respectively. Its best historical position is recorded as $\vec{p}_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, also called pbest. The best historical position that the entire swarm has passed is denoted $\vec{p}_g = (p_{g1}, p_{g2}, \ldots, p_{gD})$, also called gbest. The velocity and position of particle $i$ on dimension $d$ ($d = 1, 2, \ldots, D$) in iteration $t+1$ are updated as follows:

$$v_{id}(t+1) = \omega v_{id}(t) + c_1 r_1 (p_{id}(t) - x_{id}(t)) + c_2 r_2 (p_{gd}(t) - x_{id}(t)) \tag{2}$$

$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1) \tag{3}$$

where $\omega$ is a parameter called the inertia weight, $c_1$ and $c_2$ are positive constants referred to as the cognitive and social parameters, respectively, and $r_1$ and $r_2$ are random numbers generated from a uniform distribution on [0, 1].
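A minimal Python sketch of update rules (2) and (3), vectorized over a swarm of shape (N, D); the parameter values `w`, `c1`, `c2` are illustrative defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng()

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One iteration of Eqs. (2)-(3) for all particles at once."""
    r1 = rng.random(x.shape)    # r1, r2 ~ U[0, 1], drawn fresh each step
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (2)
    return x + v, v             # Eq. (3)
```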

3. Related work

As we know, evolutionary algorithms were developed primarily as unconstrained optimization methods. However, relative to deterministic algorithms, stochastic evolutionary algorithms do not need strong assumptions on the continuity and differentiability of the objective function, and they search the whole solution space. Therefore, evolutionary algorithms are increasingly being applied to constrained optimization problems. As a global evolutionary algorithm, PSO is attracting more and more attention because of the ease with which it can be understood and implemented, as well as its quick convergence. In comparison with other kinds of evolutionary algorithms, however, it has been explored less for solving constrained optimization problems.

The penalty function method is the most common approach for solving constrained optimization problems. Parsopoulos and Vrahatis [25] proposed the minimization of a non-stationary multi-stage assignment penalty function to tackle constrained optimization problems. Simulation results showed that PSO outperformed other evolutionary algorithms, but the design of the multi-stage assignment penalty function was very complex. He and Wang [11,12] proposed a co-evolutionary particle swarm optimization (CPSO) approach for solving constrained optimization problems that employs the notion of co-evolution to adapt penalty factors; with CPSO, the penalty functions evolve alongside PSO in a self-tuning way. Simulation results showed that some of the solutions were better than those previously reported in the literature.
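The conversion that the penalty function method performs can be sketched as follows. This is a static penalty with a single factor `r`, shown for illustration only; [25] and [11,12] use considerably more elaborate non-stationary or co-evolved schemes.

```python
def penalized(f, g_funcs, h_funcs, r=1e6, delta=1e-4):
    """Wrap problem (1) as an unconstrained objective: the penalty
    item grows with the total constraint violation, so infeasible
    points are driven out by the minimization itself."""
    def f_pen(x):
        viol = sum(max(0.0, g(x)) for g in g_funcs)                # g_k(x) <= 0
        viol += sum(max(0.0, abs(h(x)) - delta) for h in h_funcs)  # |h_p(x)| <= delta
        return f(x) + r * viol
    return f_pen
```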

Pulido and Coello [26] proposed a comparatively simple constraint-handling mechanism, later called the feasibility-based rules method, for choosing leaders in the particle swarm optimization algorithm. In order to improve the exploratory capabilities of the proposed algorithm, a turbulence operator was incorporated into it. Experimental results obtained by this approach were highly competitive and in some cases even improved on the results obtained by other algorithms such as Homomorphous Maps [21] and ASCHEA [9]. He and Wang [12] proposed a hybrid particle swarm optimization with feasibility-based rules for constrained optimization. Similar to [26], simulated annealing was applied as a turbulence operator on the best solution of the swarm to help the algorithm escape local optima.
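Feasibility-based rules of this kind are usually stated as three pairwise comparisons. The sketch below shows how pbest and gbest candidates would be compared; the `violation` helper (summing constraint violations, as in the penalty example) is a hypothetical name, not from the cited papers.

```python
def better(x1, x2, f, violation):
    """Feasibility-based comparison: (i) a feasible point beats an
    infeasible one; (ii) two feasible points compare by objective
    value; (iii) two infeasible points compare by total violation."""
    v1, v2 = violation(x1), violation(x2)
    if v1 == 0.0 and v2 == 0.0:
        return f(x1) < f(x2)     # both feasible: lower objective wins
    if v1 == 0.0 or v2 == 0.0:
        return v1 == 0.0         # exactly one feasible: it wins
    return v1 < v2               # both infeasible: smaller violation wins
```

Rule (i) is precisely why these rules always prefer feasible solutions, which is the source of the selection pressure discussed above.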

The simplest constraint-handling mechanism is to pull escaped particles back into the feasible region. Hu and Eberhart [14] and Guo et al. [8] employed the fly-back mechanism as a feasibility-preserving strategy to deal with constraint conflicts; it is capable of maintaining a feasible population throughout the lifetime of the entire swarm. In addition, Sun et al. [30] proposed a vector particle swarm optimization algorithm to solve constrained optimization problems, in which one-dimensional search methods were used to find a feasible position for each escaped particle.
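The fly-back mechanism itself amounts to a one-line rule per particle (a sketch; `x_parent` stands for the feasible position of the previous iteration, and `is_feasible` is the hypothetical test from Section 1):

```python
def fly_back(x_new, x_parent, is_feasible):
    """Fly-back rule: keep the new position only if it is feasible;
    otherwise the particle returns to its parental position."""
    return x_new if is_feasible(x_new) else x_parent
```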

4. IVPSO algorithm for constrained optimization problems

PSO has gained increasing popularity for its simplicity and effectiveness in a variety of fields, mainly for unconstrained optimization problems. However, optimization problems come with many kinds of complicated constraints, which PSO is unable to handle directly; hence, the constraint-handling mechanism is quite important for PSO to solve constrained optimization problems. In this paper, the constraint-preserving method is selected to handle constraint conflicts: particles keep only feasible solutions in their memory, thereby reducing the search space. The common handling mechanism for escaped particles in the constraint-preserving method is the ''fly-back'' mechanism, where escaped particles fly back to their parental positions.
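As described in the abstract, IVPSO instead searches a local region spanned, dimension by dimension, by the parent and current positions of the escaped particle, a region that contains the flying trajectory as a strict subset. A hedged sketch of that idea follows; uniform sampling and the trial count are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng()

def local_region_search(x_parent, x_current, f, is_feasible, n_trials=50):
    """Search the axis-aligned region between the feasible parent and
    the escaped current position; return the feasible candidate with
    the best fitness, falling back to the parent if none is found."""
    lo = np.minimum(x_parent, x_current)   # local region bounds,
    hi = np.maximum(x_parent, x_current)   # dimension by dimension
    best_x, best_f = x_parent, f(x_parent)
    for _ in range(n_trials):
        cand = rng.uniform(lo, hi)         # a point anywhere in the region
        if is_feasible(cand):
            fc = f(cand)
            if fc < best_f:                # feasible and better fitness
                best_x, best_f = cand, fc
    return best_x
```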


Table 7 shows the worst results obtained by the different algorithms. The worst results obtained by IVPSO are better than those obtained by the other algorithms in almost all cases, except for the solutions found by SR and CHMPSO on problem g02, and by CHMPSO and PESO on problem g13.

On the whole, the performance of the IVPSO algorithm is very competitive vis-à-vis state-of-the-art approaches. Although the IVPSO algorithm could not reach the known optimum in all 30 runs, it found the best known solutions at least once among the runs. For problems g02 and g13, the difference between the best and the worst result is quite large. As we know, standard particle swarm optimization is a global evolutionary algorithm but can easily fall into local optima. The improved vector particle swarm optimization algorithm proposed in this paper only addresses the constraint-handling mechanism; it does not consider possible improvements to the exploration ability of the algorithm, which is why it can fall into local optima on problems g02 and g13.

The computational cost, measured as the number of fitness function evaluations (FFE) performed by IVPSO, is 40 × 1000 × (40 × 100) = 160,000,000 FFE, which is higher than that of the other algorithms. However, the proposed IVPSO algorithm is well suited to optimization problems that need more accurate optimal results.

6. Conclusions

In this paper, an improved vector particle swarm optimization algorithm is proposed to solve constrained optimization problems. The constraint-handling technique used is based on the constraint-preserving method: when a particle flies away from the feasible region, a new feasible position is found for the escaped particle within a local region. For a better understanding of how each particle is updated, the velocity and position of each particle, and the corresponding changes, are all denoted as vectors. One-dimensional search algorithms had previously been used to search for a feasible position on the flying trajectory of the escaped particle. Such algorithms can perform well on problems whose optimal solutions lie inside the feasible region, but not as well on problems whose optima lie on the boundaries of the functional constraints. Thus, in this paper, a multi-dimensional search algorithm is proposed to search for a feasible position in a local region composed of all dimensions of the parent and current positions of the escaped particle. In order to improve optimization efficiency, the new position is required not only to be in the feasible region but also to have a better fitness value within the local region.

Experimental results show that the proposed IVPSO algorithm is simple, competitive and comparatively stable. However, because the IVPSO algorithm must search for a feasible position for each escaped particle using the multi-dimensional search approach, the computation speed of the proposed algorithm is comparatively slow. Therefore, our future work is to improve the implementation efficiency of the IVPSO algorithm while achieving the same results.

Acknowledgements

This work is supported in part by the National Science Foundation of China under Grant No. 60674104 and by the Natural Science Foundation of Shanxi Province of China under Grant No. 20081030.

References

[1] D. Beasley, D.R. Bull, R.R. Martin, An overview of genetic algorithms: Part 2, Research topics, University Computing 15 (2) (1993) 170–181.

[2] D. Beasley, D.R. Bull, R.R. Martin, An overview of genetic algorithms: Part 1, Fundamentals, University Computing 15 (2) (1993) 58–69.

[3] K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2) (2000) 311–338.

[4] R. Divi, H.K. Kesavan, A shifted penalty function approach for optimal load-flow, IEEE Transactions on Power Apparatus and Systems PAS-101 (9) (1982) 3502–3512.

[5] J. Dou, X. Wang, An efficient evolutionary programming, in: International Symposium on Information Science and Engineering, 2008, pp. 401–404.

[6] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995.

[7] E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: a gravitational search algorithm, Information Sciences 179 (13) (2009) 2232–2248.

[8] C. Guo et al., Swarm intelligence for mixed-variable design optimization, Journal of Zhejiang University Science 5 (7) (2004) 851–860.

[9] S.B. Hamida, M. Schoenauer, ASCHEA: new results using adaptive segregational constraint handling, in: Proceedings of the Congress on Evolutionary Computation 2002 (CEC'2002), 2002.

[10] A.A. Hasan, M.A. Hasan, Constrained gradient descent and line search for solving optimization problem with elliptic constraints, in: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), 2003, pp. 793–796.

[11] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Engineering Applications of Artificial Intelligence 20 (1) (2007) 89–99.

[12] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Applied Mathematics and Computation 186 (2) (2007) 1407–1422.

[13] S. He, E. Prempain, Q.H. Wu, An improved particle swarm optimizer for mechanical design optimization problems, Engineering Optimization 36 (5) (2004) 585–605.

[14] X. Hu, R.C. Eberhart, Solving constrained nonlinear optimization problems with particle swarm optimization, in: Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics, 2002.

[15] T.A. Johansen, T.I. Fossen, S.P. Berge, Constrained nonlinear control allocation with singularity avoidance using sequential quadratic programming, IEEE Transactions on Control Systems Technology 12 (1) (2004) 211–216.

[16] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, 1995, pp. 1942–1948.

[17] J. Kennedy, R. Eberhart, Y. Shi, Swarm Intelligence, 2001.

[18] S. Kirkpatrick, Optimization by simulated annealing: quantitative studies, Journal of Statistical Physics 34 (5) (1984) 975–986.

[19] S. Kirkpatrick et al., Optimization by simulated annealing, Science 220 (4598) (1983) 671–680.

[20] S. Kitayama, M. Arakawa, K. Yamazaki, Penalty function approach for the mixed discrete nonlinear problems by particle swarm optimization, Structural and Multidisciplinary Optimization 32 (2006) 191–202.

[21] S. Koziel, Z. Michalewicz, Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization, Evolutionary Computation 7 (1999) 19–44.

[22] H. Liu, Z. Cai, Y. Wang, Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization, Applied Soft Computing (2009) 1–12.

[23] H. Lu, W. Chen, Self-adaptive velocity particle swarm optimization for solving constrained optimization problems, Journal of Global Optimization 41 (3) (2008) 427–445.

[24] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1–32.

[25] K.E. Parsopoulos, M.N. Vrahatis, Particle swarm optimization method for constrained optimization problems, in: Proceedings of the 2nd Euro-International Symposium on Computational Intelligence, 2002, pp. 214–220.

[26] G.T. Pulido, C.A.C. Coello, A constraint-handling mechanism for particle swarm optimization, in: Congress on Evolutionary Computation, 2004.

[27] T.P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284–294.

[28] E. Sandgren, Nonlinear integer and discrete programming in mechanical design optimization, Journal of Mechanical Design 112 (2) (1990) 223–229.

[29] S.R. Sathyanarayan, H.K. Birru, K. Chellapilla, Evolving nonlinear time-series models using evolutionary programming, in: Proceedings of the 1999 Congress on Evolutionary Computation, 1999.

[30] C. Sun, J. Zeng, J. Pan, A new vector particle swarm optimization for constrained optimization problems, in: Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, 2009, pp. 485–488.

[31] C. Sun, J. Zeng, J. Pan, A new method for constrained optimization problems to produce initial values, in: 2009 Chinese Control and Decision Conference (CCDC 2009), 2009, pp. 2690–2692.

[32] R. Toscano, P. Lyonnet, A new heuristic approach for non-convex optimization problems, Information Sciences 180 (2010) 1955–1966.

[33] Z. Yu, D. Wang, H.S. Wong, Nearest neighbor evolutionary algorithm for constrained optimization problem, in: IEEE Congress on Evolutionary Computation, 2008.

[34] E. Zahara, Y.T. Kao, Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems, Expert Systems with Applications 36 (2) (2009) 3880–3886.

[35] A.E. Muñoz Zavala, A. Hernández Aguirre, E.R. Villa Diharce, Particle evolutionary swarm optimization algorithm (PESO), in: Sixth Mexican International Conference on Computer Science (ENC 2005), 2005.

Chao-li Sun received her M.S. in computer application technology from Hohai University, Nanjing, China, in 2000. She is currently a lecturer at Taiyuan University of Science and Technology and is working toward the Ph.D. degree in the Mechanical & Electric Engineering College, Taiyuan University of Science and Technology, Shanxi, China, focusing on swarm intelligent optimization, swarm systems and the optimization of complex mechanical systems.

Jian-chao Zeng received his Ph.D. degree in system engineering from Xi'an Jiaotong University, Xi'an, China, in 1990. Currently, he is a professor at Taiyuan University of Science and Technology, Shanxi, China. His current research interests include evolutionary swarm intelligent optimization, swarm systems, complex system simulation, estimation of distribution algorithms, swarm robots and the optimization of complex mechanical systems.

Jeng-shyang Pan received his Ph.D. in electrical engineering from the University of Edinburgh in 1996 and became a full professor at National Kaohsiung University of Applied Sciences in 2000. He is a Fellow of the IET and the founder of two international conferences (the IEEE International Conference on Intelligent Information Hiding and Multimedia Signal Processing, and the International Conference on Innovative Computing, Information and Control). His work focuses on image processing, information security and information hiding, intelligent watermarking, and optimization algorithms.
