
4. The Genetic Algorithm for WPSP with min Cmax

4.2 The Hybrid Genetic Algorithm

4.2.4 Genetic operators

Crossover and mutation are the two genetic operators of our hybrid genetic algorithm. Crossover generates offspring by combining the features of two chromosomes, while mutation operates on a single chromosome by randomly selecting two genes and swapping them. Generally speaking, the crossover operator plays an important role in the performance of the GA cycle, and the performance of each operator affects the performance of the GA, so we adopt different design rules for crossover and for mutation. Both operators must handle the job permutation and the setup times on the identical parallel machines, so the crossover and mutation methods have to be suitable for this representation.

Crossover

Because a chromosome may obtain a bad fitness value, or even become an infeasible solution, through a traditional crossover operator, we provide a new crossover that uses the concept of time postponement to handle the due date restriction. The time postponement is the maximum amount of time by which a non-expected event can be delayed. The crossover operates on two parents and creates a single offspring. It breeds the primary sub-schedules of one parent into the offspring and fills the offspring with the remaining genes derived from the other parent. The selection of sub-schedules considers the job slackness, i.e., the time postponement, on each identical parallel machine. The crossover copies the better sub-schedules on some identical parallel machines from one parent to the offspring in order to preserve good job permutations. The remaining empty positions of the offspring are filled in one way, namely a left-to-right scan of the other parent.

We let C_νk be the completion time of the job on the νth position of machine m_k, d_νk be the due date of the job on the νth position of machine m_k, n be the number of jobs under consideration on machine m_k, and N_k be the number of jobs on machine m_k. Equation (38) gives the slackness of the job on the νth position of machine m_k. SSL_νnk in equation (39) is the sum of the slackness values of n consecutive jobs on machine m_k, located from the νth to the (ν + n − 1)th position. After calculating SSL_νnk, we can estimate the average slackness value ASSL_νnk for the n jobs from the νth to the (ν + n − 1)th position on machine m_k. The slackness values are defined as follows:

SL_νk = d_νk − C_νk,   ν = 1, 2, …, N_k;  k = 1, 2, …, K   (38)

SSL_νnk = Σ_{i=ν}^{ν+n−1} SL_ik,   ν = 1, 2, …, N_k;  1 ≤ n ≤ N_k − ν + 1;  k = 1, 2, …, K   (39)

ASSL_νnk = SSL_νnk × n^(−1)   (40)
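As a minimal sketch (the function and variable names are ours, not the paper's), the slackness quantities of equations (38) to (40) can be computed for one machine as follows, assuming the completion times and due dates are listed in position order:

def slackness(completion, due):
    # SL_vk = d_vk - C_vk for every position v on one machine (equation (38)).
    return [d - c for c, d in zip(completion, due)]

def sum_slackness(sl, v, n):
    # SSL_vnk: sum of the slackness of n consecutive jobs starting at position v
    # (equation (39)); positions are 1-based, as in the paper.
    return sum(sl[v - 1:v - 1 + n])

def avg_slackness(sl, v, n):
    # ASSL_vnk = SSL_vnk / n (equation (40)).
    return sum_slackness(sl, v, n) / n

# Hypothetical example with four jobs on one machine (times in minutes).
completion = [100, 250, 420, 600]   # C_vk
due        = [180, 300, 500, 650]   # d_vk
sl = slackness(completion, due)     # [80, 50, 80, 50]
print(avg_slackness(sl, v=2, n=3))  # average slackness of positions 2 to 4 -> 60.0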

At the beginning of the crossover, the GA chooses the parent with the better fitness value and copies the partitioning structure of that parent into the offspring, as shown in Figure 3.

P1: 3 7 * 9 5 * 4 1 6 * 2 8
P2: 8 1 5 * 2 * 3 7 * 4 9 6
Offspring: the partitioning symbols (*) of P1 only

Figure 3. Copy the partitioning structure to the offspring.

Then, for all jobs on each machine, the GA calculates SSL_νnk for all combinations of consecutive jobs and derives each ASSL_νnk from the corresponding SSL_νnk. It chooses the smallest ASSL_νnk on each machine and puts those job combinations into the sub-schedules, as shown in Figure 4.

P1: 3 7 * 9 5 * 4 1 6 * 2 8
P2: 8 1 5 * 2 * 3 7 * 4 9 6
Offspring: 3 _ * 9 _ * 4 1 6 * _ 8

Figure 4. Copy the sub-schedules to the offspring from the parent with the better fitness value.

Finally, the GA uses a left-to-right scan to fill the offspring with the remaining genes derived from the other parent, as shown in Figure 5.

P1: 3 7 * 9 5 * 4 1 6 * 2 8
P2: 8 1 5 * 2 * 3 7 * 4 9 6

Figure 5. Fill the empty positions of the offspring from the other parent (P2), scanning left to right from a randomly chosen starting point on the offspring.

To recap the execution of the crossover, the procedure is divided into three steps.

Step 1. Copy the partitioning symbols * from the parent with the better fitness value.

Step 2. Choose the sub-schedules from the parent with the better fitness value. Let the parameters k and ν be one. The selection of sub-schedules is as follows:

Step 2-1. For the νth position of machine m_k on the chromosome, calculate the slackness value SL_νk of job r_i on the νth position.

Step 2-2. Let ν = ν + 1. If ν is larger than N_k, go to Step 2-3. Otherwise, calculate the slackness value SL_νk of the job on the νth position of machine m_k and go to Step 2-1.

Step 2-3. For ν = 1, 2, …, N_k and 1 ≤ n ≤ N_k − ν + 1, estimate all the average slackness values ASSL_νnk. Select the largest one, ASSL_ν′n′k, among all ASSL_νnk and put the job combination from the ν′th position to the (ν′ + n′ − 1)th position of machine m_k into the sub-schedule of machine m_k. Then let k = k + 1 and check the constraint on the number of available machines K. If k is larger than the number of available machines K, go to Step 2-4. Otherwise, let the index ν be one and go to Step 2-1.

Step 2-4. Select the K sub-schedules of all machines and copy the K sub-schedules to the offspring.

Step 3. Fill the empty positions of the offspring with the unscheduled genes by making a left-to-right scan of the other parent, without violating the job due date restriction. The starting point of the filling can be generated randomly.
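A minimal sketch of this three-step crossover, assuming a flat chromosome of job numbers separated by '*' partitioning symbols and a pre-computed slackness list (equation (38)) for each machine of the better parent; the due-date check and the random starting point of Step 3 are omitted, and every name below is illustrative rather than the paper's implementation.

def split_machines(chromosome):
    # Split a flat chromosome (jobs and '*' separators) into per-machine job lists.
    machines, current = [], []
    for gene in chromosome:
        if gene == '*':
            machines.append(current)
            current = []
        else:
            current.append(gene)
    machines.append(current)
    return machines

def best_subschedule(slack):
    # Step 2: pick the consecutive block with the largest average slackness (equation (40)).
    best, best_avg = None, float('-inf')
    for v in range(len(slack)):
        for n in range(1, len(slack) - v + 1):
            avg = sum(slack[v:v + n]) / n
            if avg > best_avg:
                best, best_avg = (v, n), avg
    return best  # (start index, length), 0-based

def crossover(better_parent, other_parent, slack_per_machine):
    # Step 1 is implicit: the offspring reuses the better parent's partitioning.
    machines = split_machines(better_parent)
    offspring_machines, kept = [], set()
    for jobs, slack in zip(machines, slack_per_machine):
        if not jobs:                       # empty machine: nothing to copy
            offspring_machines.append([])
            continue
        v, n = best_subschedule(slack)
        block = [None] * len(jobs)
        block[v:v + n] = jobs[v:v + n]     # Step 2: copy the selected sub-schedule
        kept.update(jobs[v:v + n])
        offspring_machines.append(block)
    fill = iter(g for g in other_parent if g != '*' and g not in kept)
    for block in offspring_machines:       # Step 3: left-to-right fill from the other parent
        for i, gene in enumerate(block):
            if gene is None:
                block[i] = next(fill)
    flat = []                              # flatten back to the chromosome encoding
    for i, block in enumerate(offspring_machines):
        if i:
            flat.append('*')
        flat.extend(block)
    return flat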

Mutation

We use the swapping technique as the mutation method in this paper. The mutation proceeds by randomly choosing two genes on the chromosome and then swapping them. If the schedule becomes infeasible after the mutation, we preserve the original chromosome to avoid a due date violation. There are three possible exchanges in the swapping mutation: (1) swapping two jobs on the same identical parallel machine, (2) swapping two jobs on different identical parallel machines, and (3) swapping one job and one partitioning symbol.

Figures 6(a), 6(b), and 6(c) below illustrate these swapping methods.

8 5 * 2 3 * 4 1 6 * 7 9 Before

8 5 * 3 2 * 4 1 6 * 7 9 After

(a)

8 5 * 2 3 * 4 1 6 * 7 9 Before

2 5 * 8 3 * 4 1 6 * 7 9 After

(b)

8 5 * 2 3 * 4 1 6 * 7 9 Before

8 5 * 2 3 4 * 1 6 * 7 9 After

(c)

Figure 6. Illustration of swapping technique
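A minimal sketch of this swapping mutation, using the same flat chromosome encoding as above; is_feasible stands in for a due-date check whose details are not specified here.

import random

def swap_mutation(chromosome, is_feasible):
    # Randomly swap two genes (jobs or '*' partitioning symbols). The swap may fall
    # within one machine, across machines, or move a partitioning symbol, matching
    # cases (1) to (3) above.
    mutated = list(chromosome)
    i, j = random.sample(range(len(mutated)), 2)
    mutated[i], mutated[j] = mutated[j], mutated[i]
    # If the mutated schedule violates a due date, keep the original chromosome.
    return mutated if is_feasible(mutated) else list(chromosome)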

4.2.5 Elitist Strategy

When the number of offspring in the pool reaches the expected level, we mix the offspring with the original parents to obtain an enlarged population. Then we use the roulette wheel as the elitist strategy for choosing the better part of the enlarged population; that is, a fitter chromosome is more likely to be selected to survive to the next generation. In our GA cycle, the elitist strategy preserves the better chromosomes in each generation and reduces the errors of stochastic sampling. Through the elitist strategy, the number of chromosomes in each generation remains equal to the original population size determined at the beginning.
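A minimal sketch of this elitist roulette-wheel selection, assuming larger (positive) fitness values are better; sampling with replacement is one possible reading of the roulette wheel, and all names are illustrative.

import random

def elitist_selection(parents, offspring, fitness, pop_size):
    # Merge parents and offspring into the enlarged population, then draw pop_size
    # survivors with probability proportional to fitness (roulette wheel).
    enlarged = parents + offspring
    weights = [fitness(c) for c in enlarged]   # assumed non-negative
    return random.choices(enlarged, weights=weights, k=pop_size)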

4.2.6 Stopping Criteria

After the genetic operators and the elitist strategy, the fitness of each chromosome in the population is re-estimated by using the fitness function. There are several rules for deciding whether the GA cycle should be stopped: (1) check whether the chromosomes in the current population are fitter than those in the previous population by calculating the total and average fitness values of all chromosomes in every cycle; (2) check whether the best chromosome in the current GA pool is fitter than the best one in the old GA pool by calculating the fitness value of the best chromosome in every cycle; (3) check whether the number of generations has reached the level we requested. We choose rule (3) for the convenience of GA operation. Therefore, in our experiments, the GA is terminated when the number of generations reaches the prescribed value.
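A minimal sketch of stopping rule (3); the limit of 1500 generations is the value used later in Section 6.2 and serves only as an example here.

def should_stop(generation, max_generations):
    # Rule (3): terminate once the prescribed number of generations is reached.
    return generation >= max_generations

generation = 0
while not should_stop(generation, max_generations=1500):
    # ... apply crossover, mutation, and the elitist strategy here ...
    generation += 1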

5. Problem Design and Testing

For the sake of comparing the performance of the improving heuristics and the hybrid genetic algorithm, we design a set of 16 problems with different circumstances for testing. Each problem includes 25 parallel identical machines and 100 jobs, which are divided into 30 product types and should be completed before the given due dates. The 100 jobs are processed on the 25 parallel identical machines, and each machine capacity is set to three days, i.e., 4320 minutes. Here the minute is used as the time unit for job processing times, job due dates, setup times, and machine capacity.

In this paper, we highlight the impact of the setup times of consecutive jobs from different product families or different operation temperatures. The time spent changing the probe card before the machine is ready to process a coming job of a different product family is set to 80 or 120 minutes, depending on the product family. The time required to adjust the temperature from room to high is set to 60 minutes, from high to room to 80 minutes, and from high to high to 140 minutes. Because adjusting the temperature from room to room requires neither warming up nor cooling down the machine, it is set to 0 minutes.

The time for loading the test code before the machine is ready to process a coming job of a different product type is set to 5 minutes, and the initial setup time of a machine from the idle state to the processing state is set to 100 minutes. The setup time of consecutive jobs of the same product type is set to 0 minutes under all operation temperatures.
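A hedged sketch of how these setup-time rules might be encoded, assuming each job record carries its product type, product family, and operation temperature (only room and high temperatures are modeled); the field names and the 80-versus-120-minute probe-card choice per family are our assumptions.

def setup_time(prev, job, probe_card_minutes=80):
    # Setup time in minutes between two consecutive jobs on one machine.
    if prev is None:
        return 100                        # initial setup from the idle state
    if job["ptype"] == prev["ptype"]:
        return 0                          # same product type: no setup at any temperature
    setup = 5                             # load the test code for a different product type
    if job["family"] != prev["family"]:
        setup += probe_card_minutes       # change the probe card: 80 or 120 by family
    temperature_change = {("room", "room"): 0, ("room", "high"): 60,
                          ("high", "room"): 80, ("high", "high"): 140}
    setup += temperature_change[(prev["temp"], job["temp"])]  # drop this term when Te is not considered
    return setup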

The problem design is based on the wafer probing shop floor of an IC manufacturing factory in the Science-based Industrial Park, Taiwan. The test problems are built around four factors: (1) the product family ratio, with two grouping levels, R2 and R6; (2) the tightness of due dates, with stable and increasing states; (3) the consideration of adjusting temperature, i.e., setup time with or without temperature consideration; and (4) the total processing time, with low and high levels.
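As an illustration, the 16 test problems correspond to the full factorial combination of these four two-level factors; the dictionary below only mirrors the factor labels in the text and is not the paper's data.

from itertools import product

factors = {
    "R":        ["R2", "R6"],              # product family ratio
    "T_Due":    ["stable", "increasing"],  # tightness of due dates
    "Te":       ["yes", "no"],             # temperature consideration
    "Total_PT": ["low", "high"],           # total processing time level
}
problems = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(problems))  # 16 combinations, one per test problem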

Product Family Ratio (R)

The distribution of jobs over the product families is related to the setup times of consecutive jobs, and we evaluate the influence of product families on the performance of the scheduling solutions via the factor called the product family ratio. If a product family contains a large number of jobs, it may lead to a smaller total setup time in the scheduling solutions; conversely, if a product family contains a small number of jobs, it may result in a larger total setup time in the machine schedules. We therefore define an index, the product family ratio, as the number of job product types divided by the number of job product families. There are 100 jobs divided into 30 product types in our test problems. For example, if the product family ratio is 2, the 30 product types of the 100 jobs are distributed randomly into 15 product families. In our design there are two levels for testing, R2 and R6, which means the 30 product types of jobs are divided into 15 and 5 product families respectively. The evaluation of the product family ratio is expressed in equation (41):

product family ratio = (number of product types) / (number of product families)   (41)

Tightness of Due Dates (T_Due)

Here we use the tightness of due dates to evaluate the density of the job due dates. It includes the job processing times, the expected setup time, the machine capacity before the due dates, and the number of jobs with given due dates. The tightness index is defined in equation (42), where the number of available machines K and the expected setup time ES are as expressed in Sections 2 and 3. The due dates of the jobs in the test problems are divided into three time points, which are 1, 2, and 3 days. P(Y) denotes the total processing time of the jobs whose due dates are given before the Yth due-date point, Cap(Y) denotes the available capacity of the machines before the Yth due-date point, and Num(Y) expresses the number of jobs whose due dates are given before the Yth due-date point. According to equation (42), we can evaluate three tightness indexes under the three due-date points.
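The exact expression of equation (42) is not reproduced above; one plausible form, offered here only as an assumption consistent with the quantities just defined, is a workload-to-capacity ratio before each due-date point:

tightness(Y) = [P(Y) + Num(Y) × ES] / Cap(Y),   Y = 1, 2, 3

Under this reading, a larger value indicates tighter due dates, because more processing and setup work competes for the capacity available before the Yth due-date point.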

If the tightness of due dates is stable, 30 jobs are randomly assigned due dates of 1440 minutes, 35 jobs due dates of 2880 minutes, and 35 jobs due dates of 4320 minutes, and the tightness values of the three due dates are nearly equal. If the tightness of due dates is increasing, 5 jobs are randomly assigned due dates of 1440 minutes, 15 jobs due dates of 2880 minutes, and 80 jobs due dates of 4320 minutes. In that case the tightness at the 1440-minute due date is smaller than the tightness at the 2880-minute due date, which in turn is smaller than the tightness at the 4320-minute due date.

Temperature Consideration (Te)

Because the setup time for loading a temperature is longer than the setup time for the peripheral hardware, we take the factor of temperature changing into consideration in our problem design. The value of the setup time is generally related to the product types and product families of two consecutive jobs. If the temperature change of a machine is considered in the testing situation, it should be added to the setup time. Our problems are therefore designed to consider setup time both with and without temperature changing.

Total Processing Time (Total_PT)

The value of the total processing time influences the degree of scheduling difficulty, so it is an index for evaluating the performance of the scheduling heuristics. We generate two levels of total processing time, low and high, to represent the size of the total machine workload. The low and high levels of total processing time are set to 54126 minutes and 66379 minutes, which correspond to about 1.5 and 1.85 days of machine utilization respectively. Table 2 below summarizes the 16 test problems; other related information, such as product types, product families, tightness of due dates, and setup times of two consecutive jobs, is shown in the appendix.

Table 2. Summary of the 16 problem designs

6. Computational Results

Because our objective is minimum makespan, both the total setup time and the distribution of jobs over the parallel machines are related to the quality of the testing solutions. In order to find the parameter settings of the improving heuristics that reduce the total setup time, we ran these WPSP algorithms through a large number of pre-tests. The following settings of the WPSP algorithms are efficient in solving identical parallel-machine scheduling problems: the parameters A and B of the modified sequential savings algorithm (MSSA) are set to 0.975 and 0.55, the parameter λ of the parallel insertion algorithm with slackness (PIA II) is set to 0.7, and the parameter φ of PIA IV is set to 0.8. The improving heuristics are coded in Visual Basic 6.0 and run in compiled form on a PC with an AMD 1150 MHz CPU and 512 MB RAM.

6.1 ANOVA Analysis of Improving Heuristics

The CPU time required by the savings algorithms is 1137.396 seconds on average, and the CPU time required by the insertion algorithms is 73.629 seconds on average. The computational results of the WPSP algorithms on the 16 problems are given in Table 3.

The testing results show that SSA and SIA are not robust on our test problems, because they generate infeasible solutions in some cases. Therefore, we look for the best solutions excluding SSA and SIA, and Table 3 shows that PIA III obtains the largest number of best solutions. Moreover, the average of the solutions found by PIA III over the 16 problems is the smallest, which means PIA III is the most efficient algorithm for minimum makespan in solving the 16 test problems of the WPSP. By considering one experimental factor at a time, the computational results of the 16 problems can be transformed into the performance comparison over all situations shown in Table 4. Comparing the means and standard deviations of the solutions in all situations, PIA III, which has the smallest mean in 8 situations and the smallest standard deviation in 7, outperforms the other WPSP algorithms except in the situation where the total processing time is low.

Table 3. Computational results of WPSP algorithms in 16 test problems.

Problem No. | Expected Capacity | SSA | MSSA | SIA | PIA | PIA I | PIA II | PIA III | PIA IV
1 | 2730 | 2965 | 2724 | 2837 | 2660 | 2599* | 2600 | 2639 | 2653
2 | 3297 | - | 3709 | - | 3240 | 3194 | 3218 | 3157* | 3291
3 | 2730 | 2661 | 2611* | 2611 | 2703 | 2616 | 2632 | 2611* | 2669
4 | 3297 | 3186 | 3083* | 3116 | 3197 | 3115 | 3158 | 3105 | 3140
5 | 3196 | 3152 | 2795 | 2837 | 2879 | 2728* | 2757 | 2730 | 2778
6 | 3763 | - | 4308 | - | 3461 | 3265 | 3357 | 3248* | 3955
7 | 3196 | 2682 | 2739 | 2611 | 2805 | 2683 | 2758 | 2659* | 2770
8 | 3763 | 3253 | 3223 | 3116 | 3376 | 3201 | 3223 | 3194* | 3242
9 | 2687 | 2692 | 2697 | 2664 | 2635 | 2566* | 2569 | 2594 | 2644
10 | 3253 | 3648 | 3476 | - | 3193 | 3140 | 3111* | 3125 | 3225
11 | 2687 | 2548 | 2597 | 2604 | 2627 | 2577 | 2624 | 2565* | 2624
12 | 3253 | 3113 | 3127 | 3063 | 3094 | 3081 | 3066* | 3083 | 3116
13 | 3153 | 3600 | 2766 | 2977 | 2782 | 2732 | 2714* | 2731 | 2930
14 | 3719 | - | 3596 | - | 3447 | 3284 | 3274* | 3274* | 3675
15 | 3153 | 2721 | 2713 | 2696 | 2728 | 2662 | 2669 | 2654* | 2729
16 | 3719 | 3201 | 3235 | 3245 | 3295 | 3223* | 3228 | 3223* | 3269
Mean | | | 3087.438 | | 3007.625 | 2916.625 | 2934.875 | 2912 | 3044.375
No. of * | | | 2 | | 0 | 4 | 4 | 9 | 0

All values are Cmax in minutes. An entry marked '-' (grey background) indicates an infeasible solution in that scenario. The label * marks the best solution among all algorithms excluding SSA and SIA.

Table 4. Computational results of improving heuristics under all kinds of situations.

(Upper block: SSA, MSSA, SIA, and PIA)

Situation | n | SSA Mean | SSA Stdev | MSSA Mean | MSSA Stdev | SIA Mean | SIA Stdev | PIA Mean | PIA Stdev
Total | 16 | - | - | 3087.44 | 485.11 | - | - | 3007.63 | 309.04
R=2 | 8 | - | - | 3149.00 | 589.21 | - | - | 3040.13 | 314.85
R=6 | 8 | - | - | 3025.88 | 385.29 | - | - | 2975.13 | 321.12
Te=Yes | 8 | - | - | 3171.88 | 558.31 | - | - | 3096.63 | 325.18
Te=No | 8 | - | - | 3003.00 | 419.85 | - | - | 2918.63 | 284.27
T_Due=Stable | 8 | - | - | 3258.88 | 600.47 | - | - | 3037.13 | 339.57
T_Due=Increase | 8 | 2920.63 | 292.66 | 2916.00 | 276.65 | 2882.75* | 275.89 | 2978.13 | 295.58
Total_PT=Low | 8 | 2877.63 | 350.02 | 2705.25 | 69.66 | 2729.63 | 138.16 | 2727.38 | 89.24
Total_PT=High | 8 | - | - | 3469.63 | 406.89 | - | - | 3287.88 | 131.07
Number of * | | 0 | | 0 | | 1 | | 0 |

(Lower block: PIA I, PIA II, PIA III, and PIA IV)

Situation | n | PIA I Mean | PIA I Stdev | PIA II Mean | PIA II Stdev | PIA III Mean | PIA III Stdev | PIA IV Mean | PIA IV Stdev
Total | 16 | 2916.63 | 287.85 | 2934.88 | 289.49 | 2912.00* | 279.90 | 3044.38 | 392.77
R=2 | 8 | 2925.13 | 292.62 | 2962.88 | 305.12 | 2917.87* | 280.73 | 3062.25 | 443.37
R=6 | 8 | 2908.13 | 302.93 | 2906.88 | 291.02 | 2906.13* | 298.32 | 3026.50 | 365.06
Te=Yes | 8 | 2972.25 | 291.64 | 2997.50 | 295.96 | 2964.13* | 291.52 | 3168.50 | 455.44
Te=No | 8 | 2861.00 | 292.28 | 2872.25 | 288.14 | 2859.88* | 276.92 | 2920.25 | 296.54
T_Due=Stable | 8 | 2938.50 | 310.03 | 2950.00 | 322.71 | 2937.25* | 289.28 | 3143.88 | 483.84
T_Due=Increase | 8 | 2894.75 | 283.45 | 2919.75 | 273.71 | 2886.75 | 287.65 | 2944.88 | 271.75
Total_PT=Low | 8 | 2645.37* | 65.48 | 2665.38 | 71.45 | 2647.88 | 59.79 | 2724.63 | 101.39
Total_PT=High | 8 | 3187.88 | 71.37 | 3204.38 | 92.00 | 3176.13* | 69.67 | 3364.13 | 294.29
Number of * | | 1 | | 0 | | 7 | | 0 |

Cells marked '-' (grey background) correspond to scenarios containing infeasible solutions. The label * indicates the best among the algorithms for each situation, considering single factors and the whole set of problems.

In order to find the effects of the improving heuristics and the experimental factors on the problem design, we perform a statistical analysis with the statistical software SAS.

First of all, we check the normality assumption for the 96 observations of Table 3, excluding SSA and SIA. The check of the normality assumption is reported in Table 5, and the solutions are normally distributed. We then use ANOVA to check the significance of all experimental factors and their interactions. The ANOVA summary in Table 6 shows that the five single factors, product family ratio, temperature changing consideration, tightness of due dates, total processing time level, and algorithm, significantly affect the solutions of the WPSP with minimum makespan at the 99% confidence level. In addition, the interactions with p-values less than 0.01 also have a significant effect on the performance of the test problems.

Through Duncan's multiple comparison, shown in Table 7, the statistical results divide the WPSP algorithms into two groups, A and B. The same Duncan group letter indicates that there is no significant difference between the WPSP algorithms. The first group consists of MSSA, PIA IV, and PIA, whose solution performance is inferior to that of the second group, PIA II, PIA I, and PIA III.

Table 5. Check of normality assumption for 96 solutions in 16 test problems.

Test | Statistic | Value | p Value
Shapiro-Wilk | W | 0.886611 | Pr < W: <0.0001
Kolmogorov-Smirnov | D | 0.17839 | Pr > D: <0.0100
Cramer-von Mises | W-Sq | 0.58319 | Pr > W-Sq: <0.0050
Anderson-Darling | A-Sq | 3.464467 | Pr > A-Sq: <0.0050

Table 6. Summary of the ANOVA table under 99% confidence intervals.

Factor | SS | d.f. | MS | F value | p-value
R | 63500 | 1 | 63500 | 10.548 | <0.01
Te | 583908 | 1 | 583908 | 96.991 | <0.01
T_Due | 350779 | 1 | 350779 | 58.267 | <0.01
Total_PT | 8516246 | 1 | 8516246 | 1414.611 | <0.01
Algorithm | 432625 | 5 | 86525 | 14.372 | <0.01
R*Te | 184 | 1 | 184 | 0.031 |
R*T_Due | 11726 | 1 | 11726 | 1.948 |
Te*T_Due | 47126 | 1 | 47126 | 7.828 | <0.01
R*Total_PT | 13325 | 1 | 13325 | 2.213 |
Te*Total_PT | 32893 | 1 | 32893 | 5.464 |
T_Due*Total_PT | 170775 | 1 | 170775 | 28.367 | <0.01
R*Algorithm | 33404 | 5 | 6681 | 1.110 |
Te*Algorithm | 59144 | 5 | 11829 | 1.965 |
T_Due*Algorithm | 313319 | 5 | 62664 | 10.409 | <0.01
Total_PT*Algorithm | 168813 | 5 | 33763 | 5.608 | <0.01
Error | 156525.3 | 26 | 6020 | |

F(0.01, 1, 26) = 7.72; F(0.01, 5, 26) = 3.82

Table 7. Duncan’s multiple comparisons for the performance of WPSP algorithms.

Duncan Grouping | Mean | No. of problems | Algorithm

A 3087.44 16 MSSA

A 3044.38 16 PIA IV

A 3007.63 16 PIA

B 2934.88 16 PIA II

B 2916.63 16 PIA I

B 2912 16 PIA III

6.2 Computational Results of GA with Initial Population from WPSP algorithms

We use the WPSP algorithms, except for SSA and SIA, to generate the initial population of the GA. Because these initial solutions alone are not sufficient, we make use of them to enlarge our population size and obtain a greater variety of chromosomes. In our problem design, we set the population size to 30. The other genetic factors, the mutation rate and the generation size, are considered in the testing problems because they affect the genetic combinations of chromosomes. We select problems No. 7 and No. 8 of Table 2 for testing the performance and solution time of the GA. One is the situation in which the product family ratio is 2, temperature changing is considered, the total processing time level is low, and the tightness of due dates is increasing. The other is the situation in which the product family ratio is 2, temperature changing is considered, the total processing time level is high, and the tightness of due dates is increasing.

The mutation rate (denoted as pm) is divided into five levels: 0, 0.25, 0.5, 0.75, and 1. The hybrid GA proceeds until the number of generations (denoted as gen) is equal to 1500. Each problem is solved by the hybrid GA with each mutation rate and repeated four times to check whether the mutation rate has a significant effect.

The statistical results of the hybrid GA on problems No. 7 and No. 8 are shown in Table 8. They show that the hybrid GA improves the initial solutions whenever the mutation rate is larger than 0, and that the generation number is proportional to the performance of the scheduling solutions of the WPSP with minimum makespan. The running times of the GA for 250, 500, and 750 generations are about 191 seconds, 401 seconds, and 563 seconds respectively, which are clearly larger than the running times of the improving heuristics. Because the roulette wheel method selects chromosomes based on fitness values randomly, there is not a definite mutation rate
