

CHAPTER 2 LITERATURE REVIEW

2.4 Open Shop Scheduling Problem

The open shop scheduling problem (OSSP) can be stated as follows (Gonzalez &

Sahni, 1976): there is a set of n jobs that have to be processed on a set of m machines.

Every job consists of m operations, each of which must be processed on a different machine for a given processing time. The operations of each job can be processed in any order. At any time, at most one operation can be processed on each machine, and at most one operation of each job can be processed. In this research, the problem is to find a non-preemptive schedule that minimizes the makespan (Cmax), that is, the time required to complete all jobs.

The constraints in the classical OSSP are similar to the classical JSSP but there are no precedence constraints among the same job operations. The OSSP is NP-hard

for m ≥ 3 (Gonzalez & Sahni, 1976), so it cannot be solved exactly in a reasonable computation time. Guéret and Prins (1998) proposed two fast heuristics whose results are better than those of other classical heuristics. Dorndorf et al. (2001) proposed a branch-and-bound method, which is currently the best method for solving the OSSP exactly.

Many metaheuristic algorithms have been developed in the last decade to solve OSSP, such as simulated annealing (SA) (Liaw, 1999), tabu search (TS) (Alcaide & Sicilia, 1997; Liaw, 1999), genetic algorithm (GA) (Liaw, 2000; Prins, 2000), ant colony optimization (ACO) (Blum, 2005), and neural network (NN) (Colak & Agarwal, 2005).

CHAPTER 3

DEVELOPING A PARTICLE SWARM OPTIMIZATION FOR A DISCRETE OPTIMIZATION PROBLEM

The original PSO is suited to a continuous solution space, so we have to modify it to better suit discrete optimization problems. In this chapter, we discuss the probable success factors in developing a PSO design for a discrete optimization problem. We separate a PSO design into several parts for discussion:

particle position representation, particle velocity, particle movement, decoding operator, and other search strategies.

3.1 Particle Position Representation

PSO represents solutions by particle positions. There are various particle position representations for a discrete optimization problem, and choosing how to represent solutions by particle positions is a key issue when developing a PSO design.

Generally, the Lamarckian property is used to discriminate between good and bad representations. The Lamarckian property is that the offspring can inherit goodness from its parents. For example, if there are six operations to be sorted on a machine, and we implement the random key representation (Bean, 1994) to represent a sequence, consider the following two particle positions:

position 1: [0.25, 0.27, 0.21, 0.24, 0.26, 0.23]

position 2: [0.22, 0.25, 0.23, 0.26, 0.24, 0.21]

Then the operation sequence can be generated by sorting the operations in increasing order of their position values, as follows:

permutation 1: [3 6 4 1 5 2] permutation 2: [6 1 3 5 2 4]

We can see that these two permutations are quite different even though their positions are very close to each other. This is because the location of one operation in the permutation depends on the position values of the other operations. Hence, the random key representation does not have the Lamarckian property.
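The decoding step above can be sketched in a few lines. This is a generic illustration of random-key decoding (Bean, 1994), not the thesis implementation:

```python
def decode_random_key(position):
    """Decode a random-key vector into an operation permutation:
    operations (1-based indices) are sorted in increasing order of key value."""
    return [op for _, op in sorted((key, j + 1) for j, key in enumerate(position))]

position1 = [0.25, 0.27, 0.21, 0.24, 0.26, 0.23]
position2 = [0.22, 0.25, 0.23, 0.26, 0.24, 0.21]

print(decode_random_key(position1))  # [3, 6, 4, 1, 5, 2]
print(decode_random_key(position2))  # [6, 1, 3, 5, 2, 4]
```

Although the two key vectors differ by at most 0.03 in any coordinate, the decoded permutations share no common prefix, which is exactly the missing Lamarckian property.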

If we directly implement the original PSO design (i.e. the particles search solutions in a continuous solution space) to a scheduling problem, we can implement the random key representation to represent a sequence of operations on a machine.

However, the PSO will be more efficient if the particle position representation has a higher degree of the Lamarckian property.

3.2 Particle Velocity and Particle Movement

The particle velocity and particle movement are designed for the specific particle position representation. In each iteration, a particle moves toward the pbest and gbest positions; that is, the next particle position is determined by the current position, the pbest position, and the gbest position. Furthermore, a particle moves according to its velocity and movement mechanisms. Each particle moves from its current position (solution) to one of the neighboring positions (solutions). Therefore, the particle movement mechanism should be designed according to the neighborhood structure. The advantages of neighborhood designs can be estimated by the following properties (Mattfeld, 1996):

Correlation: the solution resulting from a move should not differ much from the starting one.

Feasibility: the feasibility of a solution should be preserved by all moves.

Improvement: all moves in the neighborhood should have a good chance to improve the objective of a solution.

Size: the number of moves in the neighborhood should be reasonably small to avoid excessive computational cost of their evaluation.

Connectivity: it should be possible to reach the optimal solution from any starting one by performing a finite number of neighborhood moves.

We believe that the PSO will be more efficient if we design the particle velocity and the particle movement mechanisms according to these properties.

3.3 Decoding Operator

The decoding operator is used to decode a particle position into a solution. It is designed according to the specific particle position representation and the characteristics of the problem. A superior decoding operator maps the positions into a smaller region of the solution space without excluding the optimal solution. In chapter 6, we design four decoding operators for the OSSP. The results show that the decoding operator design strongly influences the solution quality.

3.4 Other Search Strategies

Diversification Strategy

We can also consider implementing other search strategies. The purpose of most search strategies is to control intensification and diversification. One such strategy concerns the structure of the gbest and pbest solutions. In the original PSO design, each particle has its own pbest solution and the swarm has only one gbest solution. Eberhart and Shi (2001) describe a "local" version of the particle swarm. In this version, particles have information only of their own and their neighbors' bests, rather than that of the entire group. Instead of moving toward a kind of stochastic average of pbest and gbest (the best location of the entire group), particles move toward points defined by pbest and "lbest," which is the index of the particle with the best evaluation in the particle's neighborhood.

In this research, we propose a diversification strategy. In this strategy, the pbest solution of each particle is not the best solution found by the particle itself, but one of the best N solutions found by the swarm so far, where N is the size of the swarm.

Selection Strategy

Angeline (1998) proposed a selection strategy, which is performed as follows.

After all the particles move to new positions, select the s best particles. The better particle set S = {k_1, k_2, …, k_s} replaces the positions and velocities of the other particles. The addition of this selection strategy should provide the swarm with a more exploitative search mechanism that should find better optima more consistently.

In chapter 4, we modified the method proposed by Angeline (1998) based on the concept of building blocks where a block is part of a solution, and a good solution should include some superior blocks. The concept of building blocks is that if we can precisely find out the superior blocks and accumulate the superior blocks in the population, the genetic algorithm will perform better (Goldberg, 2002).

3.5 The Process to Develop a New Particle Swarm Optimization

As mentioned above, we separate PSO into several parts. The particle position representation determines the program data structure, and the other parts are designed for the specific particle position representation. Therefore, the first step in developing a new PSO is to determine the particle position representation. Then, design the particle velocity, particle movement, and decoding operator for the specific particle position representation. Finally, implement some search strategies to further improve solution quality. Figure 3.1 shows the process to develop a new particle swarm optimization. All the PSOs in this research are developed by the process described in Figure 3.1.

Figure 3.1 The process to develop a new particle swarm optimization:
(1) design the particle position representation;
(2) design the particle velocity and particle movement for the specific particle position representation;
(3) design the decoding operator for the specific particle position representation;
(4) implement other search strategies (optional).

CHAPTER 4

A DISCRETE BINARY PARTICLE SWARM

OPTIMIZATION FOR THE MULTIDIMENSIONAL 0-1 KNAPSACK PROBLEM

Kennedy and Eberhart (1997) proposed a discrete binary particle swarm optimization (DPSO), which was designed for a discrete binary solution space. Rastegar et al. (2004) proposed another DPSO based on learning automata. In both of these DPSOs, when particle k moves, its position on the jth variable, x_kj, is set to 0 or 1 at random. The probability that x_kj equals 1 is obtained by applying a sigmoid transformation to the velocity v_kj, S(v_kj) = 1 / (1 + exp(−v_kj)). In the above two DPSOs, the positions and velocities of the particles are updated by the following equations:

v_kj = w·v_kj + c1·rand()·(pbest_kj − x_kj) + c2·rand()·(gbest_j − x_kj)   (4.1)

x_kj = 1 if rand() < S(v_kj), and x_kj = 0 otherwise   (4.2)

where the velocity v_kj is usually clamped to a maximum magnitude so as to keep S(v_kj) between 0.9975 and 0.0025.
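As a sketch, the binary PSO update of Kennedy and Eberhart (1997) can be written as follows. The parameter values (w, c1, c2, and the clamp V_MAX = 6, which keeps the sigmoid between roughly 0.0025 and 0.9975) are common illustrative choices, not values taken from the original papers:

```python
import math
import random

V_MAX = 6.0  # clamping |v| at 6 keeps sigmoid(v) within about [0.0025, 0.9975]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_pso_step(x, v, pbest, gbest, w=1.0, c1=2.0, c2=2.0):
    """One position/velocity update for a single particle (bit vector x)."""
    for j in range(len(x)):
        # equation (4.1): inertia plus attraction toward pbest and gbest
        v[j] = (w * v[j]
                + c1 * random.random() * (pbest[j] - x[j])
                + c2 * random.random() * (gbest[j] - x[j]))
        v[j] = max(-V_MAX, min(V_MAX, v[j]))
        # equation (4.2): bit j becomes 1 with probability S(v_j)
        x[j] = 1 if random.random() < sigmoid(v[j]) else 0
    return x, v
```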

The DPSOs proposed by Kennedy and Eberhart (1997) and Rastegar et al. (2004) are designed for discrete binary optimization problems with no constraint. However, there are resource constraints in MKP. If we want to solve MKP by DPSO, we have to modify DPSO to fit the MKP characteristics.

There are two main differences between the DPSO in this chapter and the DPSOs

in previous research: (i) the particle velocity and (ii) the particle movement. The particle velocity is modified based on the tabu list and the concept of building blocks, and the particle movement is modified based on the crossover and mutation of the genetic algorithm (GA). In addition, we apply a repair operator to repair solution infeasibility, a diversification strategy to prevent particles from becoming trapped in local optima, a selection strategy to exchange good blocks between particles, and a local search to further improve solution quality. The computational results show that our DPSO solves the MKP effectively and performs better than other traditional algorithms.

4.1 Particle Position Representation

In our DPSO, the particle position is represented by binary variables. For an MKP with n items, we represent the position of particle k by n binary variables, i.e.

x_k = [x_k1, x_k2, …, x_kn]

where x_kj ∈ {0, 1} denotes the value of the jth variable of particle k's solution. Each time we start a run, DPSO initializes a population of particles with random positions, and initializes the gbest solution by a surrogate duality approach (Pirkul, 1987). We determine the pseudo-utility ratio u_j = p_j / Σ_{i=1}^{m} y_i r_ij for each variable, where y_i is the shadow price of the ith constraint in the LP relaxation of the MKP. To initialize the gbest solution, we set gbest_j ← 0 for every variable j and then add variables (gbest_j ← 1) into the gbest solution in descending order of u_j as far as possible without violating any constraint.
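The greedy gbest initialization can be sketched as follows. In this sketch the shadow prices y_i are passed in as given data rather than computed, since obtaining them requires solving the LP relaxation with a solver:

```python
def init_gbest(p, r, b, y):
    """Initialize gbest by descending pseudo-utility ratio u_j = p_j / sum_i y_i*r_ij.
    p[j]: profit of item j; r[i][j]: resource use; b[i]: capacity;
    y[i]: shadow price of constraint i from the LP relaxation (given)."""
    n, m = len(p), len(b)
    u = [p[j] / sum(y[i] * r[i][j] for i in range(m)) for j in range(n)]
    gbest = [0] * n
    used = [0] * m
    for j in sorted(range(n), key=lambda j: -u[j]):  # descending order of u_j
        if all(used[i] + r[i][j] <= b[i] for i in range(m)):
            gbest[j] = 1
            used = [used[i] + r[i][j] for i in range(m)]
    return gbest
```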

Initializing the gbest solution has two purposes. The first is to improve the consistency of run results. Because the solutions of DPSO are generated randomly, the computational results will be different in each run. If we give DPSO the same initial point of gbest solution in each run, it may improve result consistency. The second

purpose is to improve result quality. Similar to other local search approaches, a good initial solution can accelerate solution convergence with better results.

4.2 Particle Velocity

When PSO is applied to problems in a continuous solution space, due to inertia, the velocity calculated by equation (4.1) not only moves the particle to a better position, but also prevents the particle from moving back to its previous position. The larger the inertia weight, the harder it is for the particle to move back. The DPSO velocities proposed by Kennedy and Eberhart (1997) and Rastegar et al. (2004) move particles toward the better position, but cannot prevent the particles from being trapped in local optima.

We modified the particle velocity based on the tabu list, which is applied to prevent the solution from being trapped in local optima. In our DPSO, each particle has its own tabu list, the velocity. There are two velocity values, v_kj and v′_kj, for each variable x_kj. If x_kj changes when particle k moves, we set v_kj ← 1 and v′_kj ← x_kj. When v_kj equals 1, it means that x_kj has changed, variable j has been added to the tabu list of particle k, and we should not change the value of x_kj in the next few iterations. Therefore, the velocity can prevent particles from moving back to the last position in the next few iterations. The value of v′_kj is used to record the value of x_kj after x_kj has been changed. The set of values v′_kj is a "block", a part of a solution that particle k obtained from the pbest and gbest solutions. It is applied in the selection strategy with the concept of building blocks, which we describe in section 4.6.

In our DPSO, we also implement an inertia weight w to control particle velocities, where w is between 0 and 1. We randomly update velocities at the beginning of each iteration: for each particle k and jth variable, if v_kj equals 1, v_kj will be set to 0 with probability (1 − w). This means that if variable x_kj is in the tabu list of particle k, it will be dropped from the tabu list with probability (1 − w). Moreover, exploration and exploitation can be controlled by w: the variable x_kj is held in the tabu list for more iterations with a larger w, and vice versa. The pseudo code for updating velocities is given in Figure 4.1.

for each particle k and variable j do
    rand ~ U(0, 1)
    if (v_kj = 1) and (rand ≥ w) then
        v_kj ← 0
    end if
end for

Figure 4.1 Pseudo code of updating velocities.

4.3 Particle Movement

In the DPSO we propose, particle movement is similar to the crossover and mutation of GA. When particle k moves, if x_kj is not in the tabu list of particle k (i.e. v_kj = 0), the value of x_kj will be set to pbest_kj with probability c1 (if x_kj ≠ pbest_kj), set to gbest_j with probability c2 (if x_kj ≠ gbest_j), set to (1 − x_kj) with probability c3, or left unchanged with probability (1 − c1 − c2 − c3), where c1, c2, and c3 are parameters of the algorithm with c1 + c2 + c3 ≤ 1 and c_i ≥ 0, i = 1, 2, 3.

Since the value of x_kj may be changed by the repair operator or the local search procedure (described in sections 4.4 and 4.7, respectively), if x_kj is in the tabu list of particle k (i.e. v_kj = 1), the value of x_kj will be set to the value of v′_kj, which is the latest value that x_kj obtained from the pbest or gbest solution. At the same time, if the value of x_kj changes, we update v_kj and v′_kj as described in section 4.2. The pseudo code of particle movement is given in Figure 4.2.
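The movement rule, combined with the tabu-list velocity of section 4.2, can be sketched as below. The use of a single uniform draw with cumulative thresholds c1, c1+c2, c1+c2+c3 is one natural reading of the rule, and the parameter values are illustrative:

```python
import random

def move_particle(x, v, v2, pbest, gbest, c1=0.4, c2=0.4, c3=0.1):
    """One move of particle k. v[j] is the tabu flag; v2[j] records the value
    x[j] took when it last changed (the v'_kj of the text)."""
    for j in range(len(x)):
        if v[j] == 1:
            # variable j is tabu: restore the latest pbest/gbest-derived value
            x[j] = v2[j]
            continue
        rnd = random.random()
        new = x[j]
        if rnd < c1 and x[j] != pbest[j]:
            new = pbest[j]            # crossover with the pbest solution
        elif rnd < c1 + c2 and x[j] != gbest[j]:
            new = gbest[j]            # crossover with the gbest solution
        elif rnd < c1 + c2 + c3:
            new = 1 - x[j]            # mutation
        if new != x[j]:
            x[j] = new
            v[j], v2[j] = 1, new      # add j to the tabu list, record the value
    return x, v, v2
```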

Figure 4.2 Pseudo code of particle movement.

4.4 Repair Operator

After a particle generates a new solution, we apply the repair operator to repair solution infeasibility and to improve it. There are two phases to the repair operator.

The first is the drop phase: if the particle generates an infeasible solution, we drop variables (x_kj ← 0 for some j with x_kj = 1) to make it feasible. The second is the add phase: if the particle has a feasible solution, we add variables (x_kj ← 1 for some j with x_kj = 0) to improve it. Each phase is performed twice: the first time we consider the particle velocities, and the second time we do not.

Similar to initializing the gbest solution as described in section 4.1, we apply the Pirkul (1987) surrogate duality approach to determine the variable priority for adding or dropping. First, we determine the pseudo-utility ratio u_j = p_j / Σ_{i=1}^{m} y_i r_ij for each variable, where y_i is the shadow price of the ith constraint in the LP relaxation of the MKP. We drop variables in ascending order of u_j until the solution is feasible, and then we add variables in descending order of u_j as far as possible without

violating any constraint. The pseudo code of repair operator is given in Figure 4.3.

R_i = the accumulated resources of constraint i
U = a permutation of (1, 2, …, n) with u_U[j] ≥ u_U[j+1] (j = 1, …, n−1)

Figure 4.3 Pseudo code of repair operator
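A simplified sketch of the two-phase repair is given below. It orders both phases by the pseudo-utility ratio u_j, which is passed in as given; the velocity-aware first pass of each phase described above is omitted for brevity:

```python
def repair(x, r, b, u):
    """Drop phase then add phase, ordered by pseudo-utility ratio u_j.
    x: bit vector; r[i][j]: resource use; b[i]: capacity; u[j]: given ratios."""
    m, n = len(b), len(x)
    used = [sum(r[i][j] * x[j] for j in range(n)) for i in range(m)]
    # drop phase: remove variables in ascending order of u_j until feasible
    for j in sorted(range(n), key=lambda j: u[j]):
        if all(used[i] <= b[i] for i in range(m)):
            break
        if x[j] == 1:
            x[j] = 0
            used = [used[i] - r[i][j] for i in range(m)]
    # add phase: add variables in descending order of u_j while feasible
    for j in sorted(range(n), key=lambda j: -u[j]):
        if x[j] == 0 and all(used[i] + r[i][j] <= b[i] for i in range(m)):
            x[j] = 1
            used = [used[i] + r[i][j] for i in range(m)]
    return x
```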

4.5 The Diversification Strategy

If the pbest solutions are all the same, the particles will be trapped in local optima. To prevent such a situation, we propose a diversification strategy to keep the pbest solutions different. In the diversification strategy, the pbest solution of each particle is not the best solution found by the particle itself, but one of the best N solutions found by the swarm so far where N is the size of the swarm.

The diversification strategy is performed according to the following process.

After all of the particles generate new solutions, for each particle, compare the particle’s fitness value with pbest solutions. If the particle’s fitness value is better than the worst pbest solution and the particle’s solution is not equal to any of the pbest solutions, replace the worst pbest solution with the solution of the particle. At the same time, if the particle’s fitness value is better than the fitness value of the gbest solution, replace the gbest solution with the solution of the particle. The pseudo code of updating pbest solutions is given in Figure 4.4.

k* is the index of the worst pbest solution

Figure 4.4 Pseudo code of updating pbest solutions.
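The pbest pool update can be sketched as follows, for a maximization problem such as the MKP. Names and data layout are illustrative:

```python
def update_pbest_pool(pbests, fitnesses, x, fx):
    """Maintain the N best distinct solutions found by the swarm as the pbest pool.
    Replace the worst pool member when solution x is better and not a duplicate."""
    worst = min(range(len(pbests)), key=lambda k: fitnesses[k])
    if fx > fitnesses[worst] and x not in pbests:
        pbests[worst] = list(x)
        fitnesses[worst] = fx
    return pbests, fitnesses
```

The duplicate check (`x not in pbests`) is what keeps the pbest solutions distinct and so preserves diversity.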

4.6 The Selection Strategy

Angeline (1998) proposed a selection strategy, which is performed as follows.

After all the particles move to new positions, select the s best particles. The better particle set S = {k_1, k_2, …, k_s} replaces the positions and velocities of the other particles. The addition of this selection strategy should provide the swarm with a more exploitative search mechanism that should find better optima more consistently.

We modified the method proposed by Angeline (1998) based on the concept of building blocks where a block is part of a solution, and a good solution should include some superior blocks. The concept of building blocks is that if we can precisely find out the superior blocks and accumulate the superior blocks in the population, the genetic algorithm will perform better (Goldberg, 2002).

Find the s best particles S = {k_1, k_2, …, k_s}

Figure 4.5 Pseudo code of selection strategy.

In our DPSO, the velocity v′_k = {v′_k1, v′_k2, …, v′_kn} is a block that particle k obtained from the pbest and gbest solutions in each iteration. The v′_k may be a superior block if the solution of particle k is better than the others. Therefore, in our modified selection strategy, the better particle set S replaces only the velocities (i.e. v_k and v′_k) of the other particles. The pseudo code of the selection strategy is given in Figure 4.5.
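The modified selection can be sketched as below. How each surviving particle is paired with a donor from S is not specified above, so random pairing is assumed here:

```python
import random

def selection(particles, s):
    """particles: list of dicts with keys 'fitness', 'v', 'v2' (v2 is v'_k).
    The s best particles donate their velocities (the blocks) to the rest;
    positions are left untouched, unlike Angeline's original scheme."""
    ranked = sorted(particles, key=lambda p: p['fitness'], reverse=True)
    best, rest = ranked[:s], ranked[s:]
    for p in rest:
        donor = random.choice(best)   # assumed pairing; the thesis may differ
        p['v'] = list(donor['v'])
        p['v2'] = list(donor['v2'])
    return particles
```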

4.7 Local Search

We implement a local search procedure after a particle generates a new solution, to further improve solution quality. In the classical add/drop neighborhood, we remove a variable from the current solution and simultaneously add another variable to it without violating any constraint. We modified the neighborhood with the concept of building blocks to reduce the neighborhood size, focusing on the block when we perform the local search. The variables are classified into four sets:

J_0 = {j | x_kj = 0}, J_1 = {j | x_kj = 1}, J′_0 = {j | x_kj = 0 ∧ v_kj = 1}, and J′_1 = {j | x_kj = 1 ∧ v_kj = 1}. The modified neighborhood is defined as follows: add (or drop) one variable from J′_0 (or J′_1) and drop (or add) one variable from J_1 (or J_0) without violating any constraint at the same time. In our experiment, the size of the modified neighborhood is about twenty times smaller than that of the classical one. Besides, after the add/drop process we add variables in descending order of p_j as far as possible without violating any constraint.

R_i = the accumulated resources of constraint i
// add/drop local search
// add more variables in descending order of p_j

Figure 4.6 Pseudo code of local search procedure.

We do not repeat the local search until the solution reaches a local optimum. The local search procedure is performed at most four times, to reduce computation time and to prevent the swarm from being trapped in local optima. The pseudo code of the local search is given in Figure 4.6.
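A partial sketch of the modified neighborhood search is given below. It implements only one of the two move types (add a variable from J′_0, drop one from J_1, taking the first improving feasible swap) and omits the symmetric J′_1/J_0 move and the final greedy add pass, so it is an illustration of the idea rather than the full Figure 4.6 procedure:

```python
def add_drop_local_search(x, v, r, b, p, max_rounds=4):
    """Add one variable from J0' = {j : x_j=0, v_j=1} while dropping one from
    J1 = {j : x_j=1}; accept the first improving feasible swap. Repeated at
    most max_rounds times, per the text."""
    m, n = len(b), len(x)
    def used():
        return [sum(r[i][j] * x[j] for j in range(n)) for i in range(m)]
    def feasible(u):
        return all(u[i] <= b[i] for i in range(m))
    for _ in range(max_rounds):
        improved = False
        J0p = [j for j in range(n) if x[j] == 0 and v[j] == 1]
        J1 = [j for j in range(n) if x[j] == 1]
        for a in J0p:
            for d in J1:
                if p[a] <= p[d]:
                    continue          # swap would not improve the objective
                x[a], x[d] = 1, 0     # tentative swap
                if feasible(used()):
                    improved = True
                    break
                x[a], x[d] = 0, 1     # undo infeasible swap
            if improved:
                break
        if not improved:
            return x
    return x
```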

4.8 Computational Results

Our DPSO was tested on the problems proposed by Chu and Beasley (1998).

These problems are available on the OR-Library web site (Beasley, 1990) (URL:

http://people.brunel.ac.uk/~mastjjb/jeb/info.html). The number of constraints m was set to 5, 10, and 30, and the number of variables n was set to 100, 250, and 500. For each m-n combination, thirty problems were generated, and the tightness ratio α (α = b_i / Σ_{j=1}^{n} r_ij) was set to 0.25 for the first ten problems, 0.5 for the next ten problems, and 0.75 for the remaining ten. Therefore, there are 27 problem sets for different n-m-α combinations, ten problems for each problem set, and 270 problems in total.
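The capacities implied by the tightness ratio can be computed as below; the resource matrix here is made up purely for illustration:

```python
def capacities(r, alpha):
    """b_i = alpha * sum_j r_ij, per the Chu and Beasley (1998) generation scheme."""
    return [alpha * sum(row) for row in r]

r = [[8, 12, 20], [5, 5, 10]]   # hypothetical resource matrix (2 constraints, 3 items)
print(capacities(r, 0.25))      # [10.0, 5.0]
```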

The program was coded in Visual C++, optimized by speed, and run on an AMD

