
CHAPTER 4 A DISCRETE BINARY PARTICLE SWARM OPTIMIZATION

4.10 Appendix

A pseudo code of the DPSO for the MKP is given below:

Initialize a population of particles with random positions and velocities.
Initialize the gbest solution by the surrogate duality approach.
repeat
  update velocities according to Figure 4.1.
  for each particle k do
    move particle k according to Figure 4.2.
    repair the solution of particle k according to the repair operator (Figure 4.3).
    calculate the fitness value of particle k.
    perform local search on particle k (Figure 4.6).
  end for
  update the gbest and pbest solutions according to the diversification strategy (Figure 4.4).
  perform selection according to the selection strategy (Figure 4.5).
until the maximum number of iterations is attained.
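Assuming the operators of Figures 4.1 to 4.6 are supplied as functions, the loop above can be sketched in Python. The helper names, their signatures, and the trivial gbest initialization (standing in for the surrogate duality approach) are illustrative only:

```python
import random

def dpso_mkp(n_particles, n_bits, fitness, max_iter,
             update_velocities, move, repair, local_search,
             update_bests, selection):
    # Random binary positions and zero velocities (the concrete operator
    # definitions live in Figures 4.1-4.6 and are passed in as functions).
    particles = [[random.randint(0, 1) for _ in range(n_bits)]
                 for _ in range(n_particles)]
    velocities = [[0.0] * n_bits for _ in range(n_particles)]
    gbest = max(particles, key=fitness)   # stub for the surrogate duality start
    pbest = [p[:] for p in particles]
    for _ in range(max_iter):
        update_velocities(velocities, particles, pbest, gbest)   # Figure 4.1
        for k in range(n_particles):
            move(particles[k], velocities[k])                    # Figure 4.2
            repair(particles[k])                                 # Figure 4.3
            fitness(particles[k])                                # fitness value
            local_search(particles[k])                           # Figure 4.6
        gbest, pbest = update_bests(particles, pbest, gbest, fitness)  # Fig. 4.4
        particles = selection(particles, fitness)                # Figure 4.5
    return gbest
```

Only the control flow mirrors the pseudo code; every operator must be replaced by the MKP-specific versions for the algorithm to be meaningful.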

CHAPTER 5

A PARTICLE SWARM OPTIMIZATION FOR THE JOB SHOP SCHEDULING PROBLEM

The original PSO is used to solve continuous optimization problems. Since the solution space of a shop scheduling problem is discrete, we modify the particle position representation, the particle movement, and the particle velocity to make PSO better suited to scheduling problems. In the PSO for the JSSP, we represent particle positions by preference lists and implement particle movement with a swap operator.

Moreover, we propose a diversification strategy and a local search procedure for better performance.

5.1 Particle Position Representation

We implement the preference list-based representation (Davis, 1985), which has the half-Lamarckian property (Cheng et al., 1996). In this representation, there is a preference list for each machine: for an n-job, m-machine problem, the position of particle k is an m×n matrix Xk whose ith row is the preference list of machine i. Similar to a GA decoding a chromosome into a schedule (Kobayashi et al., 1995), we use Giffler and Thompson's heuristic (Giffler & Thompson, 1960) to decode a particle's position into an active schedule. The G&T algorithm is shown in Figure 5.1. As an example, consider the 4-job, 4-machine problem shown in Table 5.1, with the position of particle k given by a 4×4 preference matrix Xk.

Table 5.1 A 4×4 job shop problem example

jobs  machine sequence   processing times
1     1, 2, 4, 3         p11 = 5, p21 = 4, p41 = 2, p31 = 2
2     2, 1, 3, 4         p22 = 4, p12 = 3, p32 = 3, p42 = 2
3     4, 1, 3, 2         p43 = 2, p13 = 2, p33 = 3, p23 = 4
4     3, 1, 4, 2         p34 = 3, p14 = 2, p44 = 3, p24 = 4

We can decode Xk into an active schedule following the G&T algorithm:

Iteration 1

s11 = 0, s22 = 0, s43 = 0, s34 = 0; f11 = 5, f22 = 4, f43 = 2, f34 = 3; f* = min{f11, f22, f43, f34} = 2, m* = 4.

Identify the operation set O = {o43}; choose operation o43, which is ahead of the others in the preference list of machine 4, and add it into schedule S, as illustrated in Figure 5.2(a).

Update Ω = {o11, o22, o13, o34}.

Notation:

oij : the operation of job j that needs to be processed on machine i.

S : the partial schedule that contains scheduled operations.

Ω : the set of schedulable operations.

sij : the earliest time at which oij ∈ Ω could be started.

pij : the processing time of oij.

fij : the earliest time at which oij ∈ Ω could be finished, fij = sij + pij.

G&T algorithm:

Initialize S ← φ; Ω is initialized to contain all operations without predecessors.
repeat
  Determine f* ← min{fij | oij ∈ Ω} and the machine m* on which f* could be realized.
  Identify the operation set O = {oij | sij < f*, oij ∈ Ω, i = m*}.
  Choose oij* from the operation set O, where oij* is ahead of the others in the preference list of machine m*; add oij* to S, and assign sij* as its starting time.
  Delete oij* from Ω and include its immediate successor in Ω if oij* is not the last operation of job j.
until Ω is empty.

Figure 5.1 The G&T algorithm
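A minimal Python sketch of this decoding, using the data of Table 5.1. The preference matrix X below is a hypothetical example (the matrix of particle k is not reproduced here), and ties in f* are broken arbitrarily, so the resulting active schedule need not coincide with the trace in Figure 5.2:

```python
# Table 5.1 data: job -> machine order, and (machine, job) -> processing time.
machine_seq = {1: [1, 2, 4, 3], 2: [2, 1, 3, 4],
               3: [4, 1, 3, 2], 4: [3, 1, 4, 2]}
proc = {(1, 1): 5, (2, 1): 4, (4, 1): 2, (3, 1): 2,
        (2, 2): 4, (1, 2): 3, (3, 2): 3, (4, 2): 2,
        (4, 3): 2, (1, 3): 2, (3, 3): 3, (2, 3): 4,
        (3, 4): 3, (1, 4): 2, (4, 4): 3, (2, 4): 4}
# Hypothetical preference lists: machine -> job order.
X = {1: [1, 2, 4, 3], 2: [2, 1, 3, 4],
     3: [4, 2, 3, 1], 4: [3, 4, 2, 1]}

def gt_decode(X, machine_seq, proc):
    jobs = list(machine_seq)
    nxt = {j: 0 for j in jobs}            # index of each job's next operation
    job_ready = {j: 0 for j in jobs}      # earliest time each job is free
    mach_free = {i: 0 for i in X}         # earliest time each machine is free
    schedule = []                         # (machine, job, start, finish)
    while any(nxt[j] < len(machine_seq[j]) for j in jobs):
        omega = []                        # schedulable operations (i, j, s, f)
        for j in jobs:
            if nxt[j] < len(machine_seq[j]):
                i = machine_seq[j][nxt[j]]
                s = max(job_ready[j], mach_free[i])
                omega.append((i, j, s, s + proc[(i, j)]))
        f_star, m_star = min((f, i) for i, j, s, f in omega)
        conflict = [(i, j, s, f) for i, j, s, f in omega
                    if i == m_star and s < f_star]
        # pick the job that is ahead in machine m_star's preference list
        i, j, s, f = min(conflict, key=lambda t: X[m_star].index(t[1]))
        schedule.append((i, j, s, f))
        mach_free[i], job_ready[j] = f, f
        nxt[j] += 1
    return schedule
```

The decoder always returns a feasible schedule: every machine processes one operation at a time and each job follows its machine sequence.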

M1

M2

M3

M4 (4,3)

time 5 10 15 20 25

Figure 5.2(a) Partial schedule after the operation o43 scheduled.

Iteration 2

s11 = 0, s22 = 0, s13 = 2, s34 = 0; f11 = 5, f22 = 4, f13 = 4, f34 = 3; f* = min{f11, f22, f13, f34} = 3, m* = 3.

Identify the operation set O = {o34}; choose operation o34, which is ahead of the others in the preference list of machine 3, and add it into schedule S, as illustrated in Figure 5.2(b).

Update Ω={o11,o22,o13,o14}.

M1 M2

M3 (3,4)

M4 (4,3)

time 5 10 15 20 25

Figure 5.2(b) Partial schedule after the operation o34 scheduled.

Iteration 3

s11 = 0, s22 = 0, s13 = 2, s14 = 3; f11 = 5, f22 = 4, f13 = 4, f14 = 5; f* = min{f11, f22, f13, f14} = 4, m* = 1.

Identify the operation set O = {o11, o13}; choose operation o11, which is ahead of the others in the preference list of machine 1, and add it into schedule S, as illustrated in Figure 5.2(c).

Update Ω={o21,o22,o13,o14}.

M1 (1,1)

M2

M3 (3,4)

M4 (4,3)

time 5 10 15 20 25

Figure 5.2(c) Partial schedule after the operation o11 scheduled.

Iteration 4

s21 = 5, s22 = 0, s13 = 5, s14 = 5; f21 = 9, f22 = 4, f13 = 7, f14 = 7; f* = min{f21, f22, f13, f14} = 4, m* = 2.

Identify the operation set O = {o22}; choose operation o22, which is ahead of the others in the preference list of machine 2, and add it into schedule S, as illustrated in Figure 5.2(d).

Update Ω={o21,o12,o13,o14}.

M1 (1,1)

M2 (2,2)

M3 (3,4)

M4 (4,3)

time 5 10 15 20 25

Figure 5.2(d) Partial schedule after the operation o22 scheduled.

Iteration 5

s21 = 5, s12 = 5, s13 = 5, s14 = 5; f21 = 9, f12 = 8, f13 = 7, f14 = 7; f* = min{f21, f12, f13, f14} = 7, m* = 1.

Identify the operation set O = {o12, o13, o14}; choose operation o12, which is ahead of the others in the preference list of machine 1, and add it into schedule S, as illustrated in Figure 5.2(e).

Update Ω={o21,o32,o13,o14}.

M1 (1,1) (1,2)

M2 (2,2)

M3 (3,4)

M4 (4,3)

time 5 10 15 20 25

Figure 5.2(e) Partial schedule after the operation o12 scheduled.

Iteration 6

s21 = 5, s32 = 8, s13 = 8, s14 = 8; f21 = 9, f32 = 11, f13 = 10, f14 = 10; f* = min{f21, f32, f13, f14} = 9, m* = 2.

Identify the operation set O = {o21}; choose operation o21, which is ahead of the others in the preference list of machine 2, and add it into schedule S, as illustrated in Figure 5.2(f).

Update Ω={o41,o32,o13,o14}.

M1 (1,1) (1,2)

M2 (2,2) (2,1)

M3 (3,4)

M4 (4,3)

time 5 10 15 20 25

Figure 5.2(f) Partial schedule after the operation o21 scheduled.

Iteration 7

s41 = 9, s32 = 8, s13 = 8, s14 = 8; f41 = 11, f32 = 11, f13 = 10, f14 = 10; f* = min{f41, f32, f13, f14} = 10, m* = 1.

Identify the operation set O = {o13, o14}; choose operation o14, which is ahead of the others in the preference list of machine 1, and add it into schedule S, as illustrated in Figure 5.2(g).

Update Ω={o41,o32,o13,o44}.

M1 (1,1) (1,2) (1,4)

M2 (2,2) (2,1)

M3 (3,4)

M4 (4,3)

time 5 10 15 20 25

Figure 5.2(g) Partial schedule after the operation o14 scheduled.

Iteration 8

s41 = 9, s32 = 8, s13 = 10, s44 = 10; f41 = 11, f32 = 11, f13 = 12, f44 = 13; f* = min{f41, f32, f13, f44} = 11, m* = 4.

Identify the operation set O = {o41, o44}; choose operation o44, which is ahead of the others in the preference list of machine 4, and add it into schedule S, as illustrated in Figure 5.2(h).

Update Ω={o41,o32,o13,o24}.

M1 (1,1) (1,2) (1,4)

M2 (2,2) (2,1)

M3 (3,4)

M4 (4,3) (4,4)

time 5 10 15 20 25

Figure 5.2(h) Partial schedule after the operation o44 scheduled.

Iteration 9

s41 = 13, s32 = 8, s13 = 10, s24 = 13; f41 = 15, f32 = 11, f13 = 12, f24 = 17; f* = min{f41, f32, f13, f24} = 11, m* = 3.

Identify the operation set O = {o32}; choose operation o32, which is ahead of the others in the preference list of machine 3, and add it into schedule S, as illustrated in Figure 5.2(i).

Update Ω={o41,o42,o13,o24}.

M1 (1,1) (1,2) (1,4)

M2 (2,2) (2,1)

M3 (3,4) (3,2)

M4 (4,3) (4,4)

time 5 10 15 20 25

Figure 5.2(i) Partial schedule after the operation o32 scheduled.

Iteration 10

s41 = 13, s42 = 11, s13 = 10, s24 = 13; f41 = 15, f42 = 13, f13 = 12, f24 = 17; f* = min{f41, f42, f13, f24} = 12, m* = 1.

Identify the operation set O = {o13}; choose operation o13, which is ahead of the others in the preference list of machine 1, and add it into schedule S, as illustrated in Figure 5.2(j).

Update Ω={o41,o42,o33,o24}.

M1 (1,1) (1,2) (1,4) (1,3)

M2 (2,2) (2,1)

M3 (3,4) (3,2)

M4 (4,3) (4,4)

time 5 10 15 20 25

Figure 5.2(j) Partial schedule after the operation o13 scheduled.

Iteration 11

s41 = 13, s42 = 11, s33 = 12, s24 = 13; f41 = 15, f42 = 13, f33 = 15, f24 = 17; f* = min{f41, f42, f33, f24} = 13, m* = 4.

Identify the operation set O = {o42}; choose operation o42, which is ahead of the others in the preference list of machine 4, and add it into schedule S, as illustrated in Figure 5.2(k).

Update Ω={o41,o33,o24}.

M1 (1,1) (1,2) (1,4) (1,3)

M2 (2,2) (2,1) (4,2)

M3 (3,4) (3,2)

M4 (4,3) (4,4)

time 5 10 15 20 25

Figure 5.2(k) Partial schedule after the operation o42 scheduled.

Iteration 12

s41 = 13, s33 = 12, s24 = 13; f41 = 15, f33 = 15, f24 = 17; f* = min{f41, f33, f24} = 15, m* = 4.

Identify the operation set O = {o41}; choose operation o41, which is ahead of the others in the preference list of machine 4, and add it into schedule S, as illustrated in Figure 5.2(l).

Update Ω={o31,o33,o24}.

M1 (1,1) (1,2) (1,4) (1,3)

M2 (2,2) (2,1) (4,2)

M3 (3,4) (3,2)

M4 (4,3) (4,4) (4,1)

time 5 10 15 20 25

Figure 5.2(l) Partial schedule after the operation o41 scheduled.

Iteration 13

s31 = 15, s33 = 12, s24 = 13; f31 = 17, f33 = 15, f24 = 17; f* = min{f31, f33, f24} = 15, m* = 3.

Identify the operation set O = {o31, o33}; choose operation o33, which is ahead of the others in the preference list of machine 3, and add it into schedule S, as illustrated in Figure 5.2(m).

Update Ω={o31,o23,o24}.

M1 (1,1) (1,2) (1,4) (1,3)

M2 (2,2) (2,1) (4,2)

M3 (3,4) (3,2) (3,3)

M4 (4,3) (4,4) (4,1)

time 5 10 15 20 25

Figure 5.2(m) Partial schedule after the operation o33 scheduled.

Iteration 14

s31 = 15, s23 = 15, s24 = 13; f31 = 17, f23 = 19, f24 = 17; f* = min{f31, f23, f24} = 17, m* = 3.

Identify the operation set O = {o31}; choose operation o31, which is ahead of the others in the preference list of machine 3, and add it into schedule S, as illustrated in Figure 5.2(n).

Update Ω={o23, o24}.

M1 (1,1) (1,2) (1,4) (1,3)

M2 (2,2) (2,1) (4,2)

M3 (3,4) (3,2) (3,3) (3,1)

M4 (4,3) (4,4) (4,1)

time 5 10 15 20 25

Figure 5.2(n) Partial schedule after the operation o31 scheduled.

Iteration 15

s23 = 15, s24 = 13; f23 = 19, f24 = 17; f* = min{f23, f24} = 17, m* = 2.

Identify the operation set O = {o23, o24}; choose operation o23, which is ahead of the others in the preference list of machine 2, and add it into schedule S, as illustrated in Figure 5.2(o).

Update Ω = {o24}.

M1 (1,1) (1,2) (1,4) (1,3)

M2 (2,2) (2,1) (4,2) (2,3)

M3 (3,4) (3,2) (3,3) (3,1)

M4 (4,3) (4,4) (4,1)

time 5 10 15 20 25

Figure 5.2(o) Partial schedule after the operation o23 scheduled.

Iteration 16

s24 = 19; f24 = 23; f* = min{f24} = 23, m* = 2.

Identify the operation set O = {o24}; choose operation o24, which is ahead of the others in the preference list of machine 2, and add it into schedule S, as illustrated in Figure 5.2(p).

Update Ω = φ, and the algorithm stops.

M1 (1,1) (1,2) (1,4) (1,3)

M2 (2,2) (2,1) (4,2) (2,3) (2,4)

M3 (3,4) (3,2) (3,3) (3,1)

M4 (4,3) (4,4) (4,1)

time 5 10 15 20 25

Figure 5.2(p) Partial schedule after the operation o24 scheduled.

Figure 5.2 An illustration of decoding a particle position into a schedule.

5.2 Particle Velocity

When a particle moves in a continuous solution space, its velocity, due to inertia, not only moves the particle toward a better position but also prevents it from moving back to its previous position. The velocity can be controlled by the inertia weight w in equation (2.1): the larger the inertia weight, the harder it is for the particle to return to its previous position.

If we implement the preference list-based representation, the velocity of operation oij of particle k is denoted by vijk, vijk ∈ {0, 1}, where oij is the operation of job j that needs to be processed on machine i. When vijk equals 1, it means that operation oij in the preference list of particle k (the position matrix Xk) has just been moved to its current location, and we should not move it in this iteration. Accordingly, when operation oij is moved to a new location in an iteration, we set vijk ← 1, indicating that oij has just been moved and should not be moved in the next few iterations. The particle velocity thus prevents recently moved operations from moving back to their original locations in subsequent iterations.

Just as in the original PSO applied to a continuous solution space, the inertia weight w is used to control particle velocities. We randomly update velocities at the beginning of each iteration. For each particle k and operation oij, if vijk equals 1, vijk will be set to 0 with probability (1 − w). This means that if operation oij is fixed at its current location in the preference list of particle k, oij is allowed to move in this iteration with probability (1 − w). Newly moved operations are thus fixed for more iterations with a larger inertia weight and for fewer iterations with a smaller one. The pseudo code for updating velocities is given in Figure 5.3.

for each particle k and operation oij do
  rand ~ U(0, 1)
  if (vijk ≠ 0) and (rand > w) then
    vijk ← 0
  end if
end for

Figure 5.3 The pseudo code of updating velocities.
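The update of Figure 5.3 can be sketched as follows; the dictionary layout and the rnd hook are illustrative assumptions:

```python
import random

def update_velocities(V, w, rnd=random.random):
    """Release each fixed operation (v = 1) with probability 1 - w.

    V maps (machine i, job j) -> 0/1 for one particle; w is the inertia weight.
    """
    for op, v in V.items():
        if v != 0 and rnd() > w:   # holds with probability 1 - w
            V[op] = 0
```

With w close to 1 most recently moved operations stay frozen for several iterations; with w close to 0 they are released almost immediately.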

5.3 Particle Movement

The particle movement is based on a swap operator. If vijk = 0, job j in xik will be moved to the corresponding location of pbestik with probability c1, and to the corresponding location of gbesti with probability c2, where xik is the preference list of machine i of particle k, pbestik is the preference list of machine i of the kth pbest solution, gbesti is the preference list of machine i of the gbest solution, c1 and c2 are constants between 0 and 1, and c1 + c2 ≤ 1. The process is described as follows:

Step 1: Randomly choose a location l in xik.

Step 2: Denote the job at location l in xik by J1.

Step 3: Find the location of J1 in pbestik with probability c1, or find the location of J1 in gbesti with probability c2. Denote the location found in pbestik or gbesti by l′, and denote the job at location l′ in xik by J2.

Step 4: If J2 has been denoted, viJ1k = 0, and viJ2k = 0, then swap J1 and J2 in xik, and set viJ1k ← 1.

Step 5: If all the locations in xik have been considered, then stop. Otherwise, if l < n, set l ← l + 1, else set l ← 1, and go to Step 2, where n is the number of jobs.

For example, consider a 5-job problem in which xik, pbestik, gbesti, and vik are as shown in Figure 5.4(a). We set c1 = 0.5 and c2 = 0.3 in this instance.

In Step 1, we randomly choose location l = 3. In Step 2, the job at the 3rd location in xik is job 4, i.e., J1 = 4. In Step 3, we generate a random variable rand between 0 and 1; the generated value is 0.6. Since c1 < rand ≤ c1 + c2, we find the location of J1 in gbesti. The location l′ = 5, and the job at the 5th location in xik is job 5, i.e., J2 = 5. Steps 1 to 3 are shown in Figure 5.4(b). In Step 4, since vik4 = 0 and vik5 = 0, we swap job 4 and job 5 in xik and set vik4 ← 1, as shown in Figure 5.4(c). In Step 5, we set l ← 4 and go to Step 2, repeating the procedure until all the locations in xik have been considered.

We also adopt a mutation operator in our algorithm. After a particle moves to a new position, we randomly choose a machine and two jobs on the machine, and then swap these two jobs, disregarding vijk. The particle movement pseudo code is given as Figure 5.5.


Figure 5.4 An instance of particle movement.

for i←1 to m do //for machine 1 to machine m

Figure 5.5 Pseudo code of particle movement.
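Steps 1 to 5 and the mutation swap can be sketched for a single machine's preference list as follows; the function names, the per-machine velocity dictionary, and the rng hook are illustrative assumptions:

```python
import random

def move_machine_list(x_i, pbest_i, gbest_i, v_i, c1, c2, rng=random):
    """One machine's swap-based move. x_i, pbest_i, gbest_i are job
    sequences for machine i; v_i maps job -> velocity bit."""
    n = len(x_i)
    l = rng.randrange(n)                       # Step 1: random start location
    for _ in range(n):                         # Step 5: visit every location
        j1 = x_i[l]                            # Step 2
        r = rng.random()                       # Step 3: choose the guide list
        guide = pbest_i if r < c1 else (gbest_i if r < c1 + c2 else None)
        if guide is not None:
            l2 = guide.index(j1)               # J1's location in the guide
            j2 = x_i[l2]                       # the job at that location in x_i
            if j1 != j2 and v_i[j1] == 0 and v_i[j2] == 0:   # Step 4
                x_i[l], x_i[l2] = j2, j1       # swap J1 and J2
                v_i[j1] = 1                    # freeze the moved job
        l = (l + 1) % n

def mutate(x_i, rng=random):
    """Mutation: swap two random jobs on one machine, disregarding v."""
    a, b = rng.sample(range(len(x_i)), 2)
    x_i[a], x_i[b] = x_i[b], x_i[a]
```

With r < c1 the pbest list guides the swap (probability c1) and with c1 ≤ r < c1 + c2 the gbest list does (probability c2), matching Step 3.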

5.4 The Diversification Strategy

If all the particles have the same pbest solutions, they will be trapped in local optima. To prevent such a situation, we propose a diversification strategy that keeps the pbest solutions different (i.e., keeps the makespans of the pbest solutions different). In the diversification strategy, the pbest solution of each particle is not the best solution found by the particle itself, but one of the best N solutions found by the swarm so far, where N is the size of the swarm. Once any particle generates a new solution, the pbest and gbest solutions are updated according to three situations:

1. If the particle’s fitness value is better than the fitness value of the gbest solution, set the worst pbest solution equal to the current gbest solution, and set the gbest solution equal to the particle solution.

2. If the particle's fitness value is worse than that of the gbest solution, but better than that of the worst pbest solution and not equal to that of any gbest or pbest solution, set the worst pbest solution equal to the particle solution.

3. If the particle’s fitness value is equal to any pbest or gbest solution, replace the pbest or gbest solution (whose fitness value is equal to the particle fitness value) with the particle solution.

The pseudo code for updating the pbest solution and gbest solution with diversification strategy is given as Figure 5.6.

N: the size of the swarm

Sk: the schedule generated by particle k

pbestworst : the worst of the pbest solutions

Cmax(Sk): the makespan of Sk


Figure 5.6 Pseudo code of updating pbest solution and gbest solution with diversification strategy.
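The three situations can be sketched as follows, assuming solutions are stored as (makespan, schedule) pairs with smaller makespan being better; the names are illustrative:

```python
def update_bests(new, gbest, pbests):
    """Apply the diversification update for one newly generated solution.

    new, gbest: (makespan, schedule) pairs; pbests: list of such pairs.
    """
    cost = new[0]
    pcosts = [c for c, _ in pbests]
    worst = max(range(len(pbests)), key=lambda i: pbests[i][0])
    if cost < gbest[0]:                   # situation 1
        pbests[worst] = gbest             # the old gbest becomes a pbest
        gbest = new
    elif cost == gbest[0]:                # situation 3 (equal to gbest)
        gbest = new
    elif cost in pcosts:                  # situation 3 (equal to a pbest)
        pbests[pcosts.index(cost)] = new
    elif cost < pbests[worst][0]:         # situation 2
        pbests[worst] = new
    return gbest, pbests
```

Because equal-makespan solutions replace rather than accumulate, the stored makespans stay pairwise distinct, which is the point of the strategy.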

5.5 Local Search

The tabu search is a metaheuristic approach and a strong local search mechanism.

In the tabu search, the algorithm starts from an initial solution and improves it iteratively to find a near-optimal solution. This method was proposed and formalized primarily by Glover (1986, 1989, 1990). We applied the tabu search proposed by Nowicki and Smutnicki (1996) but without back jump tracking. We briefly describe Nowicki and Smutnicki’s method as follows:

The neighborhood structure

Nowicki and Smutnicki’s method randomly chooses a critical path in the current schedule, and then represents the critical path in terms of blocks. The neighborhood exchanges the first two and the last two operations in every block, but excludes the first and last operations in the critical path. The research of Jain et al. (2000) shows that the strategy used to generate the critical path does not materially affect the final solution. Therefore, in this research, we randomly choose one critical path if there is more than one critical path. For example, there is a schedule for a 4-job, 3-machine problem, as shown in Figure 5.7(a). We can find that there are two critical paths:

CP1={o31, o11, o13, o33} and CP2={o31, o32, o22, o21, o24, o14}, where oij is the operation of job j that needs to be processed on machine i. If we randomly choose CP2, we can represent CP2 in terms of blocks: {o31, o32}, {o22, o21, o24}, and {o14}. The possible moves in this schedule are exchanging {o22, o21} or {o21, o24} (see Figure 5.7(b)).
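The move generation described above can be sketched as follows, taking the blocks of the chosen critical path as input; the function name and the string operation labels are illustrative:

```python
def candidate_moves(blocks, path_first, path_last):
    """Swap the first two and the last two operations of each block,
    excluding moves that touch the critical path's first or last operation.

    blocks: the critical path partitioned into same-machine blocks.
    """
    moves = set()
    for b in blocks:
        if len(b) >= 2:
            moves.add((b[0], b[1]))      # first two operations of the block
            moves.add((b[-2], b[-1]))    # last two operations of the block
    return sorted(m for m in moves
                  if path_first not in m and path_last not in m)
```

On the CP2 example above, the blocks {o31, o32}, {o22, o21, o24}, {o14} yield exactly the two moves {o22, o21} and {o21, o24}, since o31 and o14 are the endpoints of the path.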

M1: o11 o13 o12 o14
M2: o23 o22 o21 o24
M3: o31 o32 o34 o33

(a) An instance of job shop schedule.

M1: o11 o13 o12 o14
M2: o23 o22 o21 o24
M3: o31 o32 o34 o33

(b) The candidate exchanges {o22, o21} and {o21, o24} on the chosen critical path CP2.

Figure 5.7 An illustration of neighborhoods in tabu search.

Tabu list

The tabu list consists of the maxt operation pairs that have been moved in the last maxt moves of the tabu search. If a move {oiJ1, oiJ2} has been performed, it replaces the oldest move in the tabu list, and swapping these same two operations is not permitted while the move remains recorded in the tabu list.
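A tabu list of this kind is naturally a fixed-length queue; the following sketch (with maxt = 8 as the default, per section 5.6) stores moves as unordered pairs:

```python
from collections import deque

class TabuList:
    """The last maxt swapped operation pairs are forbidden; the oldest
    pair drops out automatically when a new move is recorded."""
    def __init__(self, maxt=8):
        self.moves = deque(maxlen=maxt)
    def record(self, op_a, op_b):
        self.moves.append(frozenset((op_a, op_b)))
    def is_tabu(self, op_a, op_b):
        return frozenset((op_a, op_b)) in self.moves
```

Using frozenset makes the pair order irrelevant, so {oiJ1, oiJ2} and {oiJ2, oiJ1} are treated as the same move.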

Back jump tracking

When finding a new best solution, store the current state (the new best solution, set of moves, and tabu list) in a list L. After the tabu search algorithm performs maxiter_tabu iterations, restart the tabu search algorithm from the latest recorded state, and repeat it until the list L is empty. We did not implement the back jump tracking in our algorithm to reduce computation time.

We implement a tabu search procedure after a particle generates a new solution to further improve solution quality. The tabu search is stopped after 100 moves that do not improve the solution. The research of Jain et al. (2000) shows that the solution quality of the tabu search (Nowicki & Smutnicki, 1996) is mainly affected by its initial solution. Therefore, in the hybrid PSO, the purpose of the PSO process is to provide good and diverse initial solutions to the tabu search.

5.6 Computational Results

There are three PSOs we tested: (1) the priority-based PSO, in which the particle position is represented by the priorities of operations and which implements the original PSO design; (2) the preference list-based PSO, in which the particle position is represented by the machines' preference lists; and (3) the hybrid PSO (HPSO), which is the preference list-based PSO with a local search mechanism. The PSOs were tested on the Fisher and Thompson (1963) (FT06, FT10, and FT20), Lawrence (1984) (LA01 to LA40), and Taillard (1993) (TA01 to TA80) test problems. These problems are available on the OR-Library web site (Beasley, 1990) (URL: http://people.brunel.ac.uk/~mastjjb/jeb/info.html) and Taillard's web site (URL: http://ina2.eivd.ch/Collaborateurs/etd/problemes.dir/ordonnancement.dir/ordonnancement.html).

In a preliminary experiment, four swarm sizes N (10, 20, 30, 50) were tested; N = 30 was superior and was used for all further studies. The other parameters of the priority-based PSO were set to the common settings used in most previous research: c1 = 2.0, c2 = 2.0, the inertia weight w is decreased linearly from 0.9 to 0.4 during a run, and the maximum values of xij and vij, Xmax and Vmax, are set to the number of jobs n and to n/5, respectively.

The parameters of the preference list-based PSO are determined experimentally.

The parameters c1 and c2 were tested between 0.1 and 0.5 in increments of 0.1, and the parameter w was tested between 0 and 0.9 in increments of 0.1. The settings c1 = 0.5, c2 = 0.3, and w = 0.5 were superior. The length of the tabu list, maxt, was set to 8, a value derived from Nowicki and Smutnicki (1996). The tabu search is stopped after 100 moves that do not improve the solution. The priority-based PSO and the preference list-based PSO are terminated after 10^5 iterations, and HPSO is terminated after 10^3 iterations. The number of iterations was determined by the computation time in comparison with Pezzella and Merelli (2000) and Gonçalves et al. (2005).

The program was coded in Visual C++, optimized by speed, and run on an AMD

Athlon 1700+ PC twenty times for each of the 123 problems. The proposed algorithm is compared with Shifting Bottleneck (Adams et al., 1988; Balas & Vazacopoulos, 1998), Tabu Search (Sun et al., 1995; Nowicki & Smutnicki, 1996; Pezzella & Merelli, 2000), and Genetic Algorithm (Wang & Zheng, 2001; Gonçalves et al., 2005).

The computational results for the FT and LA test problems are shown in Table 5.2. The results show that the preference list-based PSO we propose is much better than the original design, the priority-based PSO. Since the number of instances tested by each method differs, we cannot compare the results by average gap directly. Nevertheless, the result obtained by HPSO is better than those of the other algorithms that were tested on all 43 instances, and HPSO obtained the best-known solution for 41 of the 43 instances.

Table 5.3 shows the average computation time on the FT and LA test problems in CPU seconds. The ‘best-solution time’ is the average time the algorithm takes to first reach its final best solution, and the ‘total time’ is the average total computation time of a run. In HPSO, about 99% of the computation time is spent on the local search process. As mentioned in section 5.5, the solution quality of the tabu search (Nowicki & Smutnicki, 1996) is mainly affected by its initial solution, and the main purpose of the PSO process is to provide good and diverse initial solutions to the tabu search. The computational results show that the hybrid method, HPSO, performs better than both TSAB and PSO, and its average gap is 0.356% less than that of PSO.

We further tested HPSO on the TA test problems (Taillard, 1993). The computational results are shown in Table 5.4, and we compare HPSO with TSSB (Pezzella & Merelli, 2000) in particular in Table 5.5. Since the maximum computation time of TSSB is about 3×10^4 seconds and our machine is about ten times faster than the one used for TSSB (Pezzella & Merelli, 2000), we limited the maximum computation time of HPSO to 3×10^3 seconds. As mentioned above, 99% of the computation time is spent on the local search process in HPSO. Therefore, we do not reduce the computation time by decreasing the number of iterations, but by decreasing the percentage of particles that perform the local search procedure. The HPSO is still terminated after 10^3 iterations, but only 34.6% of the particles are randomly chosen to perform the local

