Executive Yuan National Science Council Research Project Final Report


Application of Particle Swarm Optimization to Multi-Objective Scheduling Problems (Year 3)
Final Report (Complete Version)

Project Type: Individual
Project Number: NSC 96-2221-E-216-052-MY3
Execution Period: August 1, 2009 to July 31, 2010
Executing Institution: Department of Industrial Management, Chung Hua University
Principal Investigator: D. Y. Sha (沙永傑)
Project Staff: Doctoral student, part-time assistant: Hsing-Hung Lin (林信宏)
Report Appendices: Report on attendance at an international conference and published papers
Availability: This project report is publicly accessible

September 24, 2010


National Science Council Research Project Final Report

Application of Particle Swarm Optimization to Multi-Objective Scheduling Problems

Project Type: ■ Individual Project □ Integrated Project
Project Number: NSC 96-2221-E-216-052-MY3
Execution Period: August 1, 2007 to July 31, 2010
Principal Investigator: D. Y. Sha (沙永傑)
Executing Institution: Department of Industrial Engineering and Management, Chung Hua University

September 10, 2010


National Science Council Research Project Final Report

A Particle Swarm Optimization for Multi-Objective Scheduling Problems
(粒子群最佳化於多目標排程問題之應用)

Project Number: NSC 96-2221-E-216-052-MY3
Execution Period: August 1, 2007 to July 31, 2010
Principal Investigator: D. Y. Sha, Department of Industrial Engineering and Management, Chung Hua University

1. Chinese Abstract

Particle Swarm Optimization (PSO) is a population-based optimization algorithm whose population consists of independent particles. PSO simulates the movement of particles in a space, using a search space to represent the solution space of a problem. Each position in the search space corresponds to a solution in the problem's solution space, and the particle swarm cooperatively searches the space (the solution space) for the best position (the best solution). PSO was originally applied to continuous optimization problems, but in recent years it has also been applied to combinatorial optimization problems.

Most research on scheduling problems addresses the optimization of a single objective, such as total completion time, total tardiness cost, or the number of tardy jobs. In practice, decision makers must make decisions that optimize several such objectives at once, yet the objectives conflict with one another: pursuing an optimal schedule for one objective degrades another. The main purpose of this research is to construct a PSO capable of solving multi-objective scheduling problems, helping decision makers reach reasonable decisions when facing such complex scheduling problems.

This research was carried out in stages over three years. The first year addressed the flow shop scheduling problem, the second year the job shop scheduling problem, and the third year the open shop scheduling problem. For these three fundamental scheduling problems, we used makespan, total flow time, and total tardiness as the objective functions. Applied to benchmark problems, the proposed PSO outperformed previous heuristic algorithms in both solution speed and solution quality.

Keywords: particle swarm optimization, flow shop scheduling, job shop scheduling, open shop scheduling, multi-objective scheduling

Abstract

Particle Swarm Optimization (PSO) is a population-based optimization algorithm. Each particle is an individual and the swarm is composed of particles. PSO mimics the particle movement in a space. In PSO, the problem solution space is formulated as a search space.

Each position in the search space corresponds to a solution of the problem. Particles cooperate to find the best position (best solution) in the search space (solution space). PSO was originally designed to solve continuous optimization problems, but in recent years it has been applied to many combinatorial optimization problems.

In most previous research on scheduling, only a single objective is optimized, for example total completion time, total tardiness, or makespan. In fact, decision makers have to optimize these objectives simultaneously, but the objectives conflict with each other: optimizing only one of them sacrifices another. In this research, we construct a PSO to solve multi-objective scheduling problems, which can help decision makers form a strategy for handling such complex scheduling problems.

We executed this research over three years. In the first year, we focused on the flow shop scheduling problem; in the second year, on the job shop scheduling problem; and in the third year, on the open shop scheduling problem. In these three scheduling problems, the objective functions are maximum completion time, total weighted completion time, and total weighted tardiness. Moreover, we added constraints to the scheduling problems, such as limited intermediate storage and dependent setup times.

Keywords: Particle swarm optimization, Multi-objective scheduling problem

2. Research Objectives

Most research on scheduling problems addresses the optimization of a single objective, such as total completion time, total tardiness cost, or the number of tardy jobs. In practice, decision makers must make decisions that optimize several such objectives at once, yet the objectives conflict with one another: pursuing an optimal schedule for one objective degrades another.

Most scheduling problems are NP-hard; that is, an optimal solution cannot be obtained within a reasonable computation time. Researchers have therefore developed various heuristics that attempt to find near-optimal solutions, and evolutionary computation has been the most widely applied of these over the past decade. Among evolutionary methods, Particle Swarm Optimization (PSO) is the newest and has been the most widely studied and discussed over the past three years. For single-objective scheduling, we have already applied PSO to job shop and open shop problems and found that it obtains better results than other methods (Sha & Hsu, 2006a, 2006b). The main purpose of this research is therefore to construct a PSO that can solve multi-objective scheduling problems, helping decision makers reach reasonable decisions when facing such complex scheduling problems.

The main goal of this research is thus a PSO suited to solving multi-objective scheduling problems. In addition, since PSO was originally designed for continuous optimization problems, its application to combinatorial optimization is not yet mature, leaving much room for research and improvement. Scheduling is itself a combinatorial optimization problem, so the PSO developed here can serve as a basis for subsequent research applying PSO to combinatorial optimization; we will also build on this work to apply the proposed PSO to other combinatorial optimization problems.

3. Literature on Particle Swarm Optimization

Particle Swarm Optimization (PSO) was proposed by Kennedy and Eberhart in 1995. It is an optimization method developed by imitating the movement of particles in a space. Like the Genetic Algorithm (GA), PSO is a population-based optimization algorithm. In PSO, the swarm is composed of particles; the relationship between the swarm and its particles is similar to that between the population and the chromosomes in a GA.

PSO represents a problem's solution space as a search space. Each position in the search space corresponds to a solution in the solution space, and the particle swarm cooperatively searches the space for the best position (best solution). Particle movement is governed mainly by three factors: inertia, the personal best (pbest) position, and the global best (gbest) position. Inertia is the velocity a particle retains from the previous iteration and is controlled by the inertia weight; its purpose is to keep particles from lingering in one region so that they can escape local optima. The pbest position is the best position (best solution) the particle itself has found so far, so each particle has its own pbest position. The gbest position is the best position (best solution) found by the swarm so far, and the whole swarm has exactly one gbest position.

In PSO, a particle's velocity is represented by a vector, and in each iteration the particle moves its position according to this velocity. In each iteration, particles move toward the pbest and gbest positions, with velocities generated randomly based on the pbest and gbest positions.

4. Research Results

This research has completed a PSO for the flow shop scheduling problem. To demonstrate its applicability, solution quality, and efficiency, we compared our PSO with TSP-GA (Ponnambalam, 2004) on 21 test problems, using three objectives: makespan, mean flow time, and machine idle time. In terms of average relative error, the PSO outperformed TSP-GA on 17 problems for makespan, on 18 problems for mean flow time, and on 19 problems for machine idle time. Overall, the PSO beat TSP-GA on all three objectives simultaneously in 19 of the 21 cases. These results were presented at The 38th International Conference on Computers and Industrial Engineering (2008).

We also compared the PSO with the traditional heuristics CDS and NEH, again using makespan, mean flow time, and machine idle time as objectives, on 161 benchmark problems comprising Rec01 to Rec41 and Tai20×5 to Tai500×20. The results show that the PSO has a clear advantage over CDS and NEH. These results were presented at The 9th Asia Pacific Industrial Engineering & Management Systems Conference (2008).

The results of applying PSO to multi-objective flow shop scheduling have been published in International Journal of Advanced Manufacturing Technology, Vol. 45, No. 7, pp. 749-762, 2009 (SCI) (see Appendix 1).

This research has also completed a PSO for the job shop scheduling problem, again with three objectives: makespan, total tardiness, and machine idle time. Compared against MOGA (Ponnambalam, 2001) on 23 benchmark problems, the PSO led MOGA completely on makespan and total tardiness, and was competitive on machine idle time in 22 of the problems.

The results of applying PSO to multi-objective job shop scheduling have been published in Expert Systems with Applications, Vol. 37, No. 2, pp. 1065-1070, 2010 (SCI) (see Appendix 2).

For the open shop scheduling problem, this research likewise completed a PSO design, with makespan, total flow time, and machine idle time as objectives. Because the literature on open shop scheduling is comparatively sparse, we additionally designed and implemented a genetic algorithm for comparison, testing on the Guéret and Prins (1999) benchmark problems. The results show that the proposed PSO outperforms the GA overall. These results have been submitted to Journal of Industrial and Management Optimization.


5. References

Coello, C.A., & Lechuga, M.S. (2002). "MOPSO: a proposal for multiple objective particle swarm optimization," Proceedings of the 2002 Congress on Evolutionary Computation, Vol. 2, 1051-1056.

Giffler, B., & Thompson, G.L. (1960). "Algorithms for solving production scheduling problems," Operations Research, Vol. 8, 487-503.

Goldberg, D.E. (1989). Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley.

Gonçalves, J.F., Mendes, J.J. de M., & Resende, M.G.C. (2005). "A hybrid genetic algorithm for the job shop scheduling problem," European Journal of Operational Research, Vol. 167, No. 1, 77-95.

Hu, X., & Eberhart, R.C. (2002). "Multiobjective optimization using dynamic neighborhood particle swarm optimization," Proceedings of the 2002 Congress on Evolutionary Computation, Vol. 2, 1677-1681.

Kennedy, J., & Eberhart, R.C. (1995). "Particle swarm optimization," Proceedings of the 1995 IEEE International Conference on Neural Networks, Vol. 4 (pp. 1942-1948). Piscataway, NJ: IEEE Press.

Liaw, C.-F. (2000). "A hybrid genetic algorithm for the open shop scheduling problem," European Journal of Operational Research, Vol. 124, 28-42.

Lourenço, H.R. (1995). "Local optimization and the job-shop scheduling problem," European Journal of Operational Research, Vol. 83, 347-364.

Ponnambalam, S.G., Ramkumar, V., & Jawahar, N. (2001). "A multiobjective genetic algorithm for job shop scheduling," Production Planning and Control, 12(8), 764-774.

Ponnambalam, S.G., Jagannathan, H., et al. (2004). "A TSP-GA multi-objective algorithm for flow-shop scheduling," International Journal of Advanced Manufacturing Technology, 23(11), 909-915.

Sha, D.Y., & Hsu, C.-Y. (2006a). "A hybrid particle swarm optimization for job shop scheduling problem," Computers & Industrial Engineering, Vol. 51, No. 4, 791-808.

Sha, D.Y., & Hsu, C.-Y. (2006b). "A modified parameterized active schedule generation algorithm for the job shop scheduling problem," Proceedings of the 36th International Conference on Computers and Industrial Engineering (ICCIE 2006) (pp. 702-712).

Shi, Y., & Eberhart, R.C. (1998a). "Parameter selection in particle swarm optimization," In V.W. Porto, N. Saravanan, D. Waagen, & A.E. Eiben (Eds.), Proceedings of the 7th International Conference on Evolutionary Programming (pp. 591-600). New York: Springer-Verlag.

Shi, Y., & Eberhart, R.C. (1998b). "A modified particle swarm optimizer," In D. Fogel (Ed.), Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (pp. 69-73). Piscataway, NJ: IEEE Press.

Sun, D., Batta, R., & Lin, L. (1995). "Effective job shop scheduling through active chain manipulation," Computers & Operations Research, 22(2), 159-172.

Wang, L., & Zheng, D. (2001). "An effective hybrid optimization strategy for job-shop scheduling problems," Computers & Operations Research, 28, 585-596.

Zhang, H., Li, X., Li, H., & Huang, F. (2005). "Particle swarm optimization-based schemes for resource-constrained project scheduling," Automation in Construction, 14, 393-404.

Zhang, L.B., Zhou, C.G., Liu, X.H., Ma, Z.Q., Ma, M., & Liang, Y.C. (2003). "Solving multi objective optimization problems using particle swarm optimization," Proceedings of the 2003 Congress on Evolutionary Computation, Vol. 4, 2400-2405.

6. Appendices

1. A particle swarm optimization for multi-objective flow-shop scheduling

2. A multi-objective PSO for job-shop scheduling problems


ORIGINAL ARTICLE

A particle swarm optimization for multi-objective flowshop scheduling

D. Y. Sha & Hsing-Hung Lin

Received: 5 September 2008 / Accepted: 6 February 2009

© Springer-Verlag London Limited 2009

Abstract The academic approach of single-objective flowshop scheduling has been extended to multiple objectives to meet the requirements of realistic manufacturing systems.

Many algorithms have been developed to search for optimal or near-optimal solutions due to the computational cost of determining exact solutions. This paper provides a particle swarm optimization-based multi-objective algorithm for flowshop scheduling. The proposed evolutionary algorithm searches the Pareto optimal solution for objectives by considering the makespan, mean flow time, and machine idle time. The algorithm was tested on benchmark problems to evaluate its performance. The results show that the modified particle swarm optimization algorithm performed better in terms of searching quality and efficiency than other traditional heuristics.

Keywords PSO · Multi-objective · Flowshop scheduling · Pareto optimal

1 Introduction

Production scheduling in real environments has become a significant challenge for enterprises maintaining their competitive positions in rapidly changing markets. Flowshop scheduling problems have attracted much attention in academic circles in the last five decades since Johnson's initial research. Most of these studies have focused on finding the exact optimal solution. A brief overview of the evolution of flowshop scheduling problems and possible approaches to their solution over the last 50 years has been provided by Gupta and Stafford [5]. That survey indicated that most research on flowshop scheduling has focused on single-objective problems, such as minimizing completion time, total flow time, or total tardiness. Numerous heuristic techniques have been developed for obtaining the approximate optimal solution to NP-hard scheduling problems. A complete survey of flowshop scheduling problems with makespan criterion and contributions, including exact methods, constructive heuristics, improved heuristics, and evolutionary approaches from 1954 to 2004, was offered by Hejazi et al. [7]. Ruiz et al. [24] also presented a review and comparative evaluation of heuristics and meta-heuristics for permutation flowshop problems with the makespan criterion. The NEH algorithm [17] has been shown to be the best constructive heuristic for Taillard's benchmarks [28], while the iterated local search [27] method and the genetic algorithm (GA) [23] are better than other meta-heuristic algorithms.

Most studies of flowshop scheduling have focused on a single objective that could be optimized independently.

However, empirical scheduling decisions might not only involve the consideration of more than one objective, but also require minimizing the conflict between two or more objectives. In addition, finding the exact solution to scheduling problems is computationally expensive because such problems are NP-hard. Solving a scheduling problem with multiple objectives is even more complicated than solving a single-objective problem. Approaches including meta-heuristics and memetics have been developed to reduce the complexity and improve the efficiency of solutions.

Int J Adv Manuf Technol DOI 10.1007/s00170-009-1970-6

D. Y. Sha

Department of Industrial Engineering and System Management, Chung Hua University, Hsinchu, Taiwan, Republic of China

H.-H. Lin (*)
Department of Industrial Engineering and Management, National Chiao Tung University, Hsinchu, Taiwan, Republic of China
e-mail: hsinhung@gmail.com


Hybrid heuristics combining the features of different methods in a complementary fashion have been a hot issue in the fields of computer science and operational research [15]. Ponnambalam et al. [19] considered a weighted sum of multiple objectives, including minimizing the makespan, mean flow time, and machine idle time as a performance measurement, and proposed a multi-objective algorithm using a traveling salesman algorithm and the GA for the flowshop scheduling problem. Rajendran et al. [21] approached the problem of scheduling in permutation flowshop using two ant colony optimization (ACO) approaches, first to minimize the makespan, and then to minimize the sum of the total flow time. Yagmahan [30] was the first to apply ACO meta-heuristics to flowshop scheduling with the multiple objectives of makespan, total flow time, and total machine idle time.

The literature on multi-objective flowshop scheduling problems can be divided into two groups: a priori approaches with assigned weights for each objective, and a posteriori approaches involving a set of non-dominated solutions [18]. There is also a multi-objective GA (MOGA) called PGA-ALS, designed to search for non-dominated sequences with the objectives of minimizing makespan and total flow time. The multi-objective solutions are called non-dominated solutions (or Pareto optimal solutions in the case of Pareto optimality). Eren et al. [4] tackled a multi-criteria two-machine flowshop scheduling problem with minimization of the weighted sum of total completion time, total tardiness, and makespan.

Particle swarm optimization (PSO) is an evolutionary technique for unconstrained continuous optimization prob- lems proposed by Kennedy et al. [10] The PSO concept is based on observations of the social behavior of animals such as birds in flocks, fish in schools, and swarm theory.

To minimize the objective of maximum completion time (i.e., the makespan), Liu et al. [15] invented an effective PSO-based memetic algorithm for the permutation flowshop scheduling problem. Jarboui et al. [9] developed a PSO algorithm for solving the permutation flowshop scheduling problem; this was an improved procedure based on simulated annealing. PSO was recommended by Tasgetiren et al. [29] to solve the permutation flowshop scheduling problem with the objectives of minimizing makespan and the total flow time of jobs. Rahimi-Vahed et al. [22] tackled a bi-criteria permutation flowshop scheduling problem in which the weighted mean completion time and the weighted mean tardiness were minimized simultaneously. They exploited a new concept called the ideal point, along with a new approach to specifying the superior particle's position vector in the swarm, to find the locally Pareto optimal frontier of the problem. Due to the discrete nature of the flowshop scheduling problem, Lian et al. [14] addressed permutation flowshop scheduling with a minimized makespan using a novel PSO. All these approaches have demonstrated the advantages of the PSO method: simple structure, immediate applicability to practical problems, ease of implementation, quick solution, and robustness.

The aim of this paper is to explore the development of PSO for elaborate multi-objective flowshop scheduling problems. The original PSO was used to solve continuous optimization problems. Due to the discrete solution spaces of scheduling optimization problems, we modified the particle position representation, particle movement, and particle velocity in this study.

The remainder of this paper is organized as follows. Section 2 contains a formulation of the flowshop scheduling problem with two objectives. Section 3 describes the algorithm of the proposed PSO approach. Section 4 contains the simulated results of benchmark problems. Section 5 provides some conclusions and future directions.

2 Problem formulation

The problem of scheduling in flowshops has been the subject of much investigation. The primary elements of flowshop scheduling include a set of m machines and a collection of n jobs to be scheduled on the set of machines.

Each job follows the same process of machines and passes through each machine only once. Each job can be processed on one and only one machine at a time, whereas each machine can process only one job at a time. The processing time of each job on each machine is fixed and known in advance. We formulate the multi-objective flowshop scheduling problem using the following notation:

– n is the total number of jobs to be scheduled,
– m is the total number of machines in the process,
– t(i, j) is the processing time for job i on machine j (i = 1, 2,…, n; j = 1, 2,…, m), and
– {π1, π2,…, πn} is the permutation of jobs.

The objectives considered in this paper can be calculated as follows:

– Completion time $C(\pi_i, j)$:

$$C(\pi_1, 1) = t(\pi_1, 1)$$
$$C(\pi_i, 1) = C(\pi_{i-1}, 1) + t(\pi_i, 1), \quad i = 2, \ldots, n$$
$$C(\pi_1, j) = C(\pi_1, j-1) + t(\pi_1, j), \quad j = 2, \ldots, m$$
$$C(\pi_i, j) = \max\{C(\pi_{i-1}, j),\, C(\pi_i, j-1)\} + t(\pi_i, j), \quad i = 2, \ldots, n;\ j = 2, \ldots, m$$

– Makespan: $f_{C_{\max}} = C(\pi_n, m)$,
– Mean flow time: $f_{MFT} = \left(\sum_{i=1}^{n} C(\pi_i, m)\right)/n$, and
– Machine idle time: $f_{MIT} = \sum_{j=2}^{m}\left[ C(\pi_1, j-1) + \sum_{i=2}^{n} \max\{C(\pi_i, j-1) - C(\pi_{i-1}, j),\, 0\} \right]$
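The recurrences above translate directly into code. The following sketch (function and variable names are ours, not from the paper) computes all three objectives for a given permutation, assuming `t` is an n×m matrix of processing times:

```python
def flowshop_objectives(t, perm):
    """Compute makespan, mean flow time, and machine idle time for a
    permutation flowshop, following the completion-time recurrences above.
    t[i][j]: processing time of job i on machine j; perm: job order."""
    n, m = len(perm), len(t[0])
    # C[i][j]: completion time of the i-th scheduled job on machine j
    C = [[0.0] * m for _ in range(n)]
    for i, job in enumerate(perm):
        for j in range(m):
            prev_machine = C[i][j - 1] if j > 0 else 0.0  # C(pi_i, j-1)
            prev_job = C[i - 1][j] if i > 0 else 0.0      # C(pi_{i-1}, j)
            C[i][j] = max(prev_machine, prev_job) + t[job][j]
    makespan = C[n - 1][m - 1]
    mean_flow_time = sum(C[i][m - 1] for i in range(n)) / n
    # Idle time on machine j: wait before the first job plus gaps between jobs.
    idle = sum(C[0][j - 1]
               + sum(max(C[i][j - 1] - C[i - 1][j], 0.0) for i in range(1, n))
               for j in range(1, m))
    return makespan, mean_flow_time, idle
```

For a two-job, two-machine instance with processing times t = [[3, 2], [1, 4]] and the natural order, this yields a makespan of 9, a mean flow time of 7, and an idle time of 3.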

3 Basic PSO concept

PSO is an evolutionary technique (Kennedy et al. [10]) for solving unconstrained continuous optimization problems. The PSO concept is based on observations of the social behavior of animals. The population, consisting of individuals (particles), is assigned a randomized initial velocity, and each particle moves according to its own movement experience and that of the rest of the population. The relationship between the swarm and the particles in PSO is similar to the relationship between the population and the chromosomes in the GA.

The PSO problem solution space is formulated as a search space. Each position of a particle in the search space corresponds to a solution of the problem. Particles cooperate to determine the best position (solution) in the search space (solution space).

Suppose that the search space is D-dimensional and there are m particles in the swarm. Each particle is located at position Xi = {xi1, xi2,…, xiD} with velocity Vi = {vi1, vi2,…, viD}, where i = 1, 2,…, m. In the PSO algorithm, each particle moves toward its own best position (pbest), denoted as Pbesti = {pbesti1, pbesti2,…, pbestiD}, and toward the best position of the whole swarm (gbest), denoted as Gbest = {gbest1, gbest2,…, gbestD}, which is updated in each iteration. Each particle changes its position according to its velocity, which is randomly generated toward the pbest and gbest positions. For each particle r and dimension s, the new velocity v_rs and position x_rs of particles can be calculated by the following equations:

$$v_{rs}^{t} = w \cdot v_{rs}^{t-1} + c_1 \cdot rand_1 \cdot \left(pbest_{rs}^{t-1} - x_{rs}^{t-1}\right) + c_2 \cdot rand_2 \cdot \left(gbest_{s}^{t-1} - x_{rs}^{t-1}\right) \qquad (1)$$

$$x_{rs}^{t} = x_{rs}^{t-1} + v_{rs}^{t} \qquad (2)$$

where t is the iteration number. The inertial weight w is used to control exploration and exploitation. A large value of w keeps particles at high velocity and prevents them from becoming trapped in local optima. A small value of w maintains particles at low velocity and encourages them to exploit the same search area. The constants c1 and c2 are acceleration coefficients that determine whether particles prefer to move closer to the pbest or gbest positions. The rand1 and rand2 are independent random numbers uniformly distributed between 0 and 1. The termination criterion of the PSO algorithm includes the maximum number of generations, the designated value of pbest, and no further improvement in pbest. The standard PSO process outline is as follows.

Step 1: initialize a population of particles with random positions and velocities on D dimensions in the search space.

Step 2: update the velocity of each particle according to Eq. (1).

Step 3: update the position of each particle according to Eq. (2).

Step 4: map the position of each particle into the solution space and evaluate its fitness value according to the desired optimization fitness function. Simultaneously update the pbest and gbest positions if necessary.

Step 5: loop to Step 2 until an exit criterion is met, usually a sufficient goodness of fitness or a maximum number of iterations.
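The five steps above amount to a short loop. A minimal continuous-space sketch of Steps 1-5 and Eqs. (1) and (2) follows; the toy sphere objective, bounds, and parameter values are our assumptions, not values from the paper:

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal standard PSO loop (minimization) following Steps 1-5."""
    rng = random.Random(seed)
    # Step 1: random initial positions and velocities.
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_fit = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda k: pbest_fit[k])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):                      # Step 5: loop until exit criterion
        for r in range(n_particles):
            for s in range(dim):
                # Step 2 / Eq. (1): inertia + cognitive + social components.
                V[r][s] = (w * V[r][s]
                           + c1 * rng.random() * (pbest[r][s] - X[r][s])
                           + c2 * rng.random() * (gbest[s] - X[r][s]))
                X[r][s] += V[r][s]              # Step 3 / Eq. (2)
            f = fitness(X[r])                   # Step 4: evaluate and update bests
            if f < pbest_fit[r]:
                pbest[r], pbest_fit[r] = X[r][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = X[r][:], f
    return gbest, gbest_fit

# Toy usage: minimize the sphere function; the optimum is the origin.
best, fit = pso(lambda x: sum(v * v for v in x), dim=3)
```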

The original PSO was designed for a continuous solution space. We modified the PSO position representation, particle velocity, and particle movement as described in the next section to make PSO suitable for combinational optimization problems.

4 Formation of the proposed PSO

There are two different representations of particle position associated with a schedule. Zhang [31] demonstrated that permutation-based position representation outperforms priority-based representation. While we have chosen to implement permutation-based position representation, we must also adjust the particle velocity and particle movement as described in Sections 4.2 and 4.3. We have also included the maintenance of Pareto optima and local search procedures to achieve better performance.

4.1 Position representation

In this study, we randomly generated a group of particles (solutions) represented by a permutation sequence that is an ordered list of operations. The following example is a permutation sequence for a six-job permutation flowshop scheduling problem, where jn is the operation of job n.

Index : 1 2 3 4 5 6

Permutation : j4 j3 j1 j6 j2 j5

An operation earlier in the list has a higher priority of being placed into the schedule. We used a list of length n for an n-job problem in our algorithm to represent the position of particle k, i.e.,

Xk = [xk1 xk2 … xkn],

where xki is the priority value of ji in particle k.

Then, we convert the permutation list to a priority list. Each xki is randomly initialized to a value between (p − 0.5) and (p + 0.5); that is, xki ← p + rand − 0.5, where p is the location (index) of ji in the permutation list and rand is a random number between 0 and 1. Consequently, an operation with a smaller xki has a higher priority for scheduling. The permutation list mentioned above can be converted to

Xk = [2.7 5.2 1.8 0.6 6.3 3.9]
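The conversion between the two representations can be sketched as follows (helper names are ours): each job's priority is its 1-based index in the permutation plus uniform noise in (−0.5, 0.5), so sorting jobs by ascending priority recovers the permutation.

```python
import random

def to_priority_list(perm, n_jobs, rng=random.random):
    """x[i] = location p of job (i+1) in perm, perturbed by rand - 0.5."""
    x = [0.0] * n_jobs
    for p, job in enumerate(perm, start=1):
        x[job - 1] = p + rng() - 0.5
    return x

def to_permutation(x):
    """Jobs sorted by ascending priority (smaller value = scheduled earlier)."""
    return sorted(range(1, len(x) + 1), key=lambda job: x[job - 1])
```

For the example in the text, the permutation (j4 j3 j1 j6 j2 j5) maps to a priority list such as Xk = [2.7, 5.2, 1.8, 0.6, 6.3, 3.9] for jobs j1 through j6, and `to_permutation` maps that list back to (j4 j3 j1 j6 j2 j5).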

4.2 Particle velocity

The original PSO velocity concept is that each particle moves according to the velocity determined by the distance between the previous position of the particle and the gbest (pbest) solution. The two major purposes of the particle velocity are to move the particle toward the gbest and pbest solutions, and to maintain the inertia to prevent particles from becoming trapped in local optima.

In the proposed PSO, we concentrated on preventing particles from becoming trapped in local optima rather than moving them toward the gbest (pbest) solution. If the priority value increases or decreases with the present velocity in this iteration, we maintain the priority value increasing or decreasing at the beginning of the next iteration with probability w, which is the PSO inertial weight. The larger the value of w is, the greater the number of iterations over which the priority value keeps increasing or decreasing, and the greater the difficulty the particle has returning to the current position. For an n-job problem, the velocity of particle k can be represented as

Vk = [vk1 vk2 … vkn], vki ∈ {−1, 0, 1},

where vki is the velocity of ji of particle k.

The initial particle velocities are generated randomly. Instead of considering the distance from xki to pbestki (gbesti), our PSO considers whether the value of xki is larger or smaller than pbestki (gbesti). If xki has decreased in the present iteration, this means that pbestki (gbesti) is smaller than xki, and xki is set moving toward pbestki (gbesti) by letting vki ← −1. Therefore, in the next iteration, xki keeps decreasing by one (i.e., xki ← xki − 1) with probability w. Conversely, if xki has increased in this iteration, this means that pbestki (gbesti) is larger than xki, and xki is set moving toward pbestki (gbesti) by letting vki ← 1. Therefore, in the next iteration, xki keeps increasing by one (i.e., xki ← xki + 1) with probability w.

The inertial weight w influences the velocity of particles in PSO. We randomly update velocities at the beginning of each iteration. For each particle k and operation ji, if vki is not equal to 0, vki is set to 0 with probability (1 − w). This ensures that xki stops increasing or decreasing continuously in this iteration with probability (1 − w).

4.3 Particle movement

The particle movement is based on the insertion operator proposed by Sha et al. [25, 26]. The insertion operator is applied to the priority list to reduce computational complexity. We illustrate the effect of the insertion operator using the permutation list example described above. If we wish to insert j4 into the third location of the permutation list, we must move j6 to the sixth location, move j1 to the fifth location, move j2 to the fourth location, and then insert j4 in the third location. The insertion operation comprising these actions costs O(n/2) on average. However, the insertion operator used in this study need only set xk4 ← 3 + rand − 0.5 when we want to insert j4 into the third location of the permutation. This requires only one step for each insertion. If the random number rand equals 0.1, for example, after j4 is inserted into the third location, Xk becomes Xk = [2.7 5.2 1.8 2.6 6.3 3.9].

If we wish to insert ji into the pth location in the permutation list, we set xki ← p + rand − 0.5. The locations of operation ji in the permutation sequences of the kth pbest and gbest solutions are pbestki and gbesti, respectively. As particle k moves, for each ji with vki equal to 0, xki is set to pbestki + rand − 0.5 with probability c1 and set to gbesti + rand − 0.5 with probability c2, where rand is a random number between 0 and 1, c1 and c2 are constants between 0 and 1, and c1 + c2 ≤ 1. We explain this concept by assuming specific values for Vk, Xk, pbestk, gbest, c1, and c2.

Vk = [−1 0 0 1 0 0],
Xk = [2.7 5.2 1.8 0.6 6.3 3.9],
pbestk = [5 1 4 6 3 2],
gbest = [6 3 4 5 1 2], c1 = 0.8, c2 = 0.1.

– For j1, since vk1 ≠ 0, xk1 ← xk1 + vk1, so xk1 = 1.7.
– For j2, since vk2 = 0, the generated random number rand1 = 0.6. Since rand1 ≤ c1, the generated random number rand2 = 0.3. Since pbestk2 ≤ xk2, set vk2 ← −1 and xk2 ← pbestk2 + rand2 − 0.5, i.e., xk2 = 0.8.
– For j3, since vk3 = 0, the generated random number rand1 = 0.93. Since rand1 > c1 + c2, xk3 and vk3 do not need to be changed.
– For j4, since vk4 = 1, xk4 ← xk4 + vk4, i.e., xk4 = 1.6.
– For j5, since vk5 = 0, the generated random number rand1 = 0.85. Since c1 < rand1 ≤ c1 + c2, the generated random number rand2 = 0.7. Since gbest5 ≤ xk5, set vk5 ← −1. Then xk5 ← gbest5 + rand2 − 0.5, i.e., xk5 = 1.2.
– For j6, since vk6 = 0, the generated random number rand1 = 0.95. Since rand1 > c1 + c2, xk6 and vk6 do not need to be changed.

Therefore, after particle k moves, Vk and Xk are

Vk = [−1 −1 0 1 −1 0]
Xk = [1.6 0.8 1.8 1.7 1.2 3.9]

In addition, we use a mutation operator in our PSO algorithm. After moving a particle to a new position, we randomly choose an operation and then mutate its priority value xki in accordance with vki. If xki ≤ n/2, we randomly set xki to a value between n/2 and n, and set vki ← 1. If xki > n/2, we randomly set xki to a value between 0 and n/2, and set vki ← −1.
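One movement step for a particle, combining the velocity rules of Section 4.2 with the insertion and mutation operators of Section 4.3, might be sketched as follows; the helper names and the in-place update style are our assumptions, not the authors' implementation:

```python
import random

def move_particle(x, v, pbest_loc, gbest_loc, w, c1, c2, rng=random.Random(0)):
    """One movement step. x: priority values; v: velocities in {-1, 0, 1};
    pbest_loc/gbest_loc: location (index) of each job in the pbest/gbest
    permutation, as in the worked example."""
    n = len(x)
    for i in range(n):
        # Velocity reset: a nonzero velocity survives only with probability w.
        if v[i] != 0 and rng.random() < 1 - w:
            v[i] = 0
    for i in range(n):
        if v[i] != 0:
            x[i] += v[i]                      # keep increasing/decreasing by one
        else:
            r = rng.random()
            if r <= c1:                       # insert toward the pbest location
                v[i] = 1 if pbest_loc[i] > x[i] else -1
                x[i] = pbest_loc[i] + rng.random() - 0.5
            elif r <= c1 + c2:                # insert toward the gbest location
                v[i] = 1 if gbest_loc[i] > x[i] else -1
                x[i] = gbest_loc[i] + rng.random() - 0.5
    # Mutation: flip one randomly chosen job into the other half of the range.
    i = rng.randrange(n)
    if x[i] <= n / 2:
        x[i], v[i] = rng.uniform(n / 2, n), 1
    else:
        x[i], v[i] = rng.uniform(0, n / 2), -1
    return x, v
```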

4.4 Pareto optimal set maintenance

Real empirical scheduling decisions often involve not only the consideration of more than one objective at a time, but also the need to handle conflicts between two or more objectives. The set of solutions of a multi-objective optimization problem with conflicting objective functions such that no other solution is better in all objectives is called Pareto optimal. A multi-objective minimization problem with m decision variables and n objectives is given below to describe the concept of Pareto optimality.

Minimize $F(x) = \left(f_1(x), f_2(x), \ldots, f_n(x)\right)$, where $x \in \mathbb{R}^m$ and $F(x) \in \mathbb{R}^n$.

A solution p is said to dominate solution q if and only if

$f_k(p) \le f_k(q) \;\; \forall k \in \{1, 2, \ldots, n\}$, and
$f_k(p) < f_k(q)$ for at least one $k \in \{1, 2, \ldots, n\}$.

Non-dominated solutions are solutions that are not dominated by any other solution. Solution p is said to be a Pareto optimal solution if there exists no other solution q in the feasible space that dominates p. The set including all Pareto optimal solutions is referred to as the Pareto optimal set. A graph plotted using the collected Pareto optimal solutions in objective space is referred to as the Pareto front.
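For minimization, the dominance test above translates directly into code (a sketch; the function names are ours):

```python
def dominates(p, q):
    """True if objective vector p dominates q (minimization): p is no worse
    in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the objective vectors (1, 2), (2, 1), and (2, 2), the first two are mutually non-dominated while (2, 2) is dominated by (1, 2).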

The external Pareto optimal set is used to maintain a limited number of non-dominated solutions (Knowles et al. [11]; Zitzler et al. [32]). The maximum size of the archive set is specified in advance. This method is used to avoid missing fragments of the non-dominated front during the search process. The Pareto optimal front is formed as the archive is updated iteratively. While the archive set is not yet full and a new non-dominated solution is detected, the new solution enters the archive set. As the new solution enters the archive set, any solution already there that is dominated by this solution is removed.

When the number of solutions in the archive reaches its preset maximum, the archive set must decide which solution to replace. In this study, we propose a novel Pareto archive update process to avoid losing non-dominated solutions when the archive is full. When a new non-dominated solution is discovered, the archive is updated as follows: if the number of solutions in the archive is less than the maximum, the new solution is simply added; otherwise, the solution in the archive that is most dissimilar to the new solution is replaced by it. We measure dissimilarity by Euclidean distance, a longer distance implying higher dissimilarity, so the non-dominated solution in the archive with the longest distance to the newly found solution is the one replaced.

For example, the distance d_ij between X1 and X2 is calculated as

X1 = [2.7  5.2  1.8  0.6  6.3  3.9]
X2 = [1.6  0.8  1.8  1.7  1.2  3.9]

d_ij = sqrt[(2.7 − 1.6)^2 + (5.2 − 0.8)^2 + (0.6 − 1.7)^2 + (6.3 − 1.2)^2] = 6.91

(the third and sixth components are equal and contribute zero).

The Pareto archive set is updated at the end of each iteration in the proposed PSO.
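A minimal sketch of this distance-based archive update follows; the function names are illustrative, and objective vectors are represented as plain lists:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two objective vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def update_archive(archive, new_sol, max_size):
    """Insert a non-dominated new_sol into the archive. If the archive is
    full, replace the member farthest (most dissimilar) from new_sol,
    following the rule described above."""
    if len(archive) < max_size:
        archive.append(new_sol)
    else:
        farthest = max(range(len(archive)),
                       key=lambda i: euclidean(archive[i], new_sol))
        archive[farthest] = new_sol
    return archive
```

Running `euclidean` on the X1 and X2 vectors of the worked example above reproduces the value 6.91.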

4.5 Diversification strategy

If all the particles hold the same non-dominated solutions, they become trapped in a local optimum. To prevent this, a diversification strategy is proposed to keep the non-dominated solutions distinct. Whenever a new solution is generated by a particle, the non-dominated solution set is updated according to one of three situations:

1. If the solution of the particle is dominated by the gbest solution, assign the particle solution to gbest.

2. If the solution of the particle equals any solution in the non-dominated solution set, replace the non-dominated solution with the particle solution.

3. If the solution of the particle is dominated by the worst non-dominated solution and not equal to any non- dominated solution, set the worst non-dominated solution equal to the particle solution.
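The three situations above can be sketched as follows. This is a simplified sketch in which objective vectors stand in for full schedules, and the choice of "worst" non-dominated solution (largest objective sum) is an assumption, since the text does not specify how it is identified:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective, better in one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def diversify(sol, gbest, nd_set):
    """Apply the three diversification rules to one new particle solution."""
    if dominates(gbest, sol):
        # case 1: the particle's solution is dominated by gbest
        gbest = sol
    elif sol in nd_set:
        # case 2: an identical solution already sits in the set; in the full
        # algorithm the stored schedule is swapped for the particle's schedule
        nd_set[nd_set.index(sol)] = sol
    else:
        # case 3: replace the 'worst' non-dominated solution if it dominates
        # the particle's solution (assumption: worst = largest objective sum)
        worst = max(range(len(nd_set)), key=lambda i: sum(nd_set[i]))
        if dominates(nd_set[worst], sol):
            nd_set[worst] = sol
    return gbest, nd_set
```

Note that cases 1 and 3 deliberately overwrite good solutions with dominated ones; that is the point of the strategy, trading a little elitism for diversity in the swarm.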

Int J Adv Manuf Technol


5 Computational results

The proposed PSO algorithm was verified using benchmark problems from the OR-Library contributed by Carlier [2], Heller [8], and Reeves [23]. The test program was coded in Visual C++ and run 20 times on each problem on an Intel Pentium 4 3.0-GHz processor with 1 GB of RAM running Windows XP. We tested four swarm sizes N (10, 20, 60, and 80) in a pilot experiment; N = 80 performed best and was used in all subsequent tests. The algorithm parameters were set as follows: c1 and c2 were tested over the range 0.1–0.7 in increments of 0.2, and the inertia weight w was reduced from wmax to wmin during the iterations. Parameter wmax was set to 0.5, 0.7, and 0.9, corresponding to wmin values of 0.1, 0.3, and 0.5. The settings c1 = 0.7, c2 = 0.1, wmax = 0.7, and wmin = 0.3 worked best.

The proposed PSO algorithm was compared with five heuristic algorithms: CDS [1], NEH [17], RAJ [20], GAN-RAJ [6], and Laha [13], which we also coded in Visual C++. The CDS heuristic [1] takes its name from its three authors and is a heuristic generalization of Johnson's algorithm. It generates a set of m−1 artificial two-machine problems, each of which is then solved by Johnson's rule. In this study, we modified the original CDS and compared the makespan, mean flow time, and machine idle time of all m−1 generated problems; the non-dominated solution was selected for comparison with the solutions obtained from our PSO algorithm. The second comparison was based on the NEH algorithm introduced by Nawaz et al. [17], which investigates n(n+1)/2 permutations to find near-optimal solutions. As with CDS, we modified the original NEH and compared the three objectives of all n(n+1)/2 sequences; the non-dominated solution from these sequences was compared with the solutions from our PSO.
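For reference, the classic single-objective CDS construction (m−1 artificial two-machine problems, each solved by Johnson's rule) can be sketched as below. This is the standard textbook version, not the multi-objective modification described above, which would additionally evaluate mean flow time and machine idle time for each of the m−1 sequences:

```python
def johnson(a, b):
    """Johnson's rule for a two-machine flow shop: jobs with a < b go first
    in increasing order of a; the rest go last in decreasing order of b."""
    n = len(a)
    first = sorted((j for j in range(n) if a[j] < b[j]), key=lambda j: a[j])
    last = sorted((j for j in range(n) if a[j] >= b[j]), key=lambda j: -b[j])
    return first + last

def makespan(p, seq):
    """Completion time of the last job on the last machine, where p[i][j]
    is the processing time of job j on machine i."""
    m = len(p)
    c = [0.0] * m
    for j in seq:
        c[0] += p[0][j]
        for i in range(1, m):
            c[i] = max(c[i], c[i - 1]) + p[i][j]
    return c[-1]

def cds(p):
    """CDS heuristic: build m-1 artificial two-machine problems and keep
    the Johnson sequence with the smallest makespan on the original problem."""
    m, n = len(p), len(p[0])
    best = None
    for k in range(1, m):
        a = [sum(p[i][j] for i in range(k)) for j in range(n)]          # first k machines
        b = [sum(p[i][j] for i in range(m - k, m)) for j in range(n)]   # last k machines
        seq = johnson(a, b)
        if best is None or makespan(p, seq) < makespan(p, best):
            best = seq
    return best
```

For a pure two-machine instance (m = 2), CDS reduces to a single application of Johnson's rule, which is then optimal for makespan.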

The following two performance measures are used in this study: the average relative percentage deviation (ARPD) and the maximum percentage deviation (MPD), where MS stands for makespan, TFT for total flow time, MIT for machine idle time, and H for the heuristic under evaluation.

ARPD_MS = (100/10) * Σ_{i=1}^{10} (MS_{H,i} − BestMS_i) / BestMS_i          (3)

MPD_MS = MAX_{i=1..10} [(MS_{H,i} − BestMS_i) / BestMS_i] × 100             (4)

ARPD_TFT = (100/10) * Σ_{i=1}^{10} (TFT_{H,i} − BestTFT_i) / BestTFT_i      (5)

MPD_TFT = MAX_{i=1..10} [(TFT_{H,i} − BestTFT_i) / BestTFT_i] × 100         (6)

ARPD_MIT = (100/10) * Σ_{i=1}^{10} (MIT_{H,i} − BestMIT_i) / BestMIT_i      (7)

MPD_MIT = MAX_{i=1..10} [(MIT_{H,i} − BestMIT_i) / BestMIT_i] × 100         (8)
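Equations (3) and (4), and their TFT and MIT analogues, amount to the following sketch, which generalizes the fixed count of 10 instances to the length of the input lists (names are illustrative):

```python
def arpd(values, best):
    """Average relative percentage deviation, as in Eqs. (3), (5), (7):
    (100/N) * sum over instances of (H_i - Best_i) / Best_i."""
    return 100.0 / len(values) * sum((v - b) / b for v, b in zip(values, best))

def mpd(values, best):
    """Maximum percentage deviation, as in Eqs. (4), (6), (8):
    max over instances of (H_i - Best_i) / Best_i, times 100."""
    return max((v - b) / b for v, b in zip(values, best)) * 100.0
```

For instance, heuristic makespans of 110 and 105 against best-known values of 100 and 100 give an ARPD of 7.5 and an MPD of 10.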

We tested our PSO on nine different problem sizes (n = 20, 50, 100 and m = 5, 10, 20) from Taillard's [28] benchmarks. Table 1 compares the six methods using the ARPD and MPD measures.

Table 1 Comparison of makespan (MS) for different heuristics

Problem size   NEH [17]     CDS [1]      RAJ [20]     GAN-RAJ [6]  Laha [13]    PSO
n    m         ARPD  MPD    ARPD  MPD    ARPD  MPD    ARPD  MPD    ARPD  MPD    ARPD  MPD
20   5         1.84  0.25   0.76  0.15   0.44  0.12   0.63  0.14   1.55  0.20   0.00  0.00
20   10        1.78  0.23   0.71  0.12   0.85  0.17   0.83  0.14   1.50  0.20   0.00  0.00
20   20        1.27  0.17   0.44  0.06   0.88  0.14   0.82  0.12   1.06  0.15   0.00  0.00
50   5         1.24  0.17   0.83  0.14   0.26  0.05   0.37  0.08   1.29  0.22   0.02  0.02
50   10        1.28  0.19   0.59  0.08   0.48  0.09   0.53  0.10   1.29  0.18   0.01  0.01
50   20        1.08  0.17   0.07  0.02   0.35  0.07   0.39  0.07   1.02  0.16   0.06  0.03
100  5         1.04  0.19   0.46  0.12   0.36  0.07   0.23  0.07   1.05  0.16   0.07  0.07
100  10        0.28  0.06   0.47  0.07   0.29  0.06   0.24  0.04   0.89  0.13   0.01  0.01
100  20        0.65  0.11   0.16  0.04   0.21  0.05   0.18  0.04   0.72  0.10   0.01  0.01

NEH: Nawaz et al. [17]; CDS: Campbell et al. [1]; RAJ: Rajendran [20]; GAN-RAJ: Gangadharan and Rajendran [6]; Laha: Laha and Chakraborty [12, 13]; PSO: the proposed PSO.
