
In this chapter, we discuss demand forecasting and machine learning for supply chain demand forecasting. In the first part, we examine why the demand that W company faces is so unpredictable; once we understand the demand characteristics, we can identify the breaking point more easily. In the second part, we discuss how our research differs from previous studies and explain our reasons for choosing a particular machine learning method.

2.1 The Problems of Demand Prediction

Hendry, Simangunsong, and Stevenson (2011) have listed three kinds of uncertainty in supply chain management: demand uncertainty, manufacturing uncertainty, and supply uncertainty. Demand uncertainty results from uncertainty about the demand quantity and the period in which the demand occurs. Manufacturing uncertainty results from uncertainty about yield rates, the breakdown frequency of the machines, and other logistical problems. Supply uncertainty results from uncertainty about the quality of the components, the reliability of imports, and the backup level of the components. Manufacturing uncertainty can be reduced by using more precise equipment, and supply uncertainty can be reduced by using historical data to roughly predict the expected supply. However, no simple method exists to resolve the problem of demand uncertainty, as demand prediction is too complex.

The bullwhip effect is one of the main reasons why demand is so unpredictable. It results from poor supply chain integration. Babai et al. (2015) found that a small increase or decrease in the demand of the final customers ultimately leads to a huge difference for the upstream suppliers. Each supply chain unit adjusts its order quantity to maintain a safe stock level. By the time the information is delivered to the upstream suppliers, it has already been distorted by each unit in the supply chain. As a result, the bullwhip effect makes the observed demand change more than the real demand changes.
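The amplification described above can be sketched with a toy simulation. The over-ordering rule below (each echelon orders what it observed plus an adjustment proportional to the change in demand, with an illustrative parameter `k`) is our own simplifying assumption, not the model in Babai et al. (2015); it serves only to show that the standard deviation of orders grows at each level upstream.

```python
import random
import statistics

def propagate_orders(demand, k=0.5):
    """Each echelon orders the quantity it observed plus a safety adjustment
    proportional to the change in demand (a hypothetical over-reaction rule)."""
    orders = []
    prev = demand[0]
    for d in demand:
        orders.append(max(0.0, d + k * (d - prev)))
        prev = d
    return orders

random.seed(0)
# final-customer demand: roughly stable around 100 units per week
retail = [100 + random.gauss(0, 5) for _ in range(200)]

levels = [retail]
for _ in range(3):  # final customer -> retailer -> wholesaler -> factory
    levels.append(propagate_orders(levels[-1]))

for name, series in zip(["final customer", "retailer", "wholesaler", "factory"],
                        levels):
    print(name, round(statistics.stdev(series), 2))
```

Even though final-customer demand barely moves, the order variability seen by the factory is several times larger, which is exactly the distortion the bullwhip effect describes.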

DOI:10.6814/THE.NCCU.MIS.003.2019.A05


W company has a serious forecasting problem. First, much of the research (e.g., Bischak, Naseraldin and Silver, 2008; Esper and Waller, 2014) assumes that the demand pattern is normally distributed and stationary. However, the demand distributions that W company faces are intermittent or lumpy. Components of the latest 3C products very likely did not even exist one year ago, which means the components are replaced frequently. Therefore, the demand for a component has a finite horizon rather than being stationary.
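One common way to check whether a demand history is intermittent or lumpy is the Syntetos–Boylan categorization, which combines the average inter-demand interval (ADI) with the squared coefficient of variation (CV²) of the non-zero demand sizes, using the standard cutoffs 1.32 and 0.49. The sketch below uses a made-up weekly history for illustration; it is not W company's data.

```python
def classify_demand(series, adi_cut=1.32, cv2_cut=0.49):
    """Syntetos-Boylan style categorization of a demand history.

    ADI  = number of periods per non-zero demand occurrence
    CV^2 = squared coefficient of variation of the non-zero demand sizes
    """
    nonzero = [x for x in series if x > 0]
    if not nonzero:
        return "no demand"
    adi = len(series) / len(nonzero)
    mean = sum(nonzero) / len(nonzero)
    var = sum((x - mean) ** 2 for x in nonzero) / len(nonzero)
    cv2 = var / mean ** 2
    if adi > adi_cut:
        return "lumpy" if cv2 > cv2_cut else "intermittent"
    return "erratic" if cv2 > cv2_cut else "smooth"

# made-up weekly demand: many zero weeks and highly variable order sizes
history = [0, 0, 40, 0, 0, 0, 5, 0, 90, 0, 0, 12, 0, 0, 0, 60]
print(classify_demand(history))  # → lumpy
```

A lumpy series like this violates both the normality and the stationarity assumptions of the classical models cited above, which is why they fit W company's components poorly.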

The demand pattern of W company has a finite horizon with a long lead time. Deng, Paul, Tan, and Wei (2017) proved that a periodic review system tends to have more stock on hand at the end of the horizon when the demand has a finite horizon.

The difference between the minimum and maximum values of the future demand increases as the lead time increases. In fact, it is not easy to find a method to predict the demand that W company faces.

2.2 Machine Learning for Supply Chain Demand Forecasting

The study by Crone, Fildes, Nikolopoulos, and Syntetos (2008) shows that many time series models have been used in demand forecasting, such as the autoregressive integrated moving average (ARIMA) model and exponential smoothing. In addition, Carbonneau, Laframboise, and Vahidov (2008) tried several machine learning models, including neural networks and support vector machines, to forecast supply chain demand.

However, all these models work at the item level; in other words, the models are fitted to individual items. We want to forecast a block of many weeks at once, where the number of weeks in a block equals the supplier lead time in weeks. In this situation, our data size decreases significantly: if the supplier lead time is 8 weeks and we have 72 weeks of data in total, we only have 72/8 = 9 blocks, which is not enough for the machine to learn from. As a solution to this problem, we use different items together as the input and fit the model with cross-item aggregation: in every round, we use all the available items to fit the models.
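The blocking and cross-item pooling described above can be sketched as follows. The item names and demand histories are made up for illustration; only the 8-week lead time and the 72-week horizon come from the example in the text.

```python
def to_blocks(weekly_demand, lead_time_weeks):
    """Cut a weekly demand history into non-overlapping blocks of
    lead-time length; trailing weeks that do not fill a block are dropped."""
    n = len(weekly_demand) // lead_time_weeks
    return [weekly_demand[i * lead_time_weeks:(i + 1) * lead_time_weeks]
            for i in range(n)]

# made-up histories for three hypothetical items, 72 weeks each
items = {
    "item_a": [10 + (w % 5) for w in range(72)],
    "item_b": [3 * (w % 4) for w in range(72)],
    "item_c": [7 for _ in range(72)],
}

LEAD_TIME = 8
per_item_blocks = {name: to_blocks(hist, LEAD_TIME) for name, hist in items.items()}

# item-level fitting: only 72 // 8 = 9 training examples per item
print(len(per_item_blocks["item_a"]))  # → 9

# cross-item aggregation: pool every item's blocks into one training set
pooled = [block for blocks in per_item_blocks.values() for block in blocks]
print(len(pooled))  # → 27
```

Pooling multiplies the number of training blocks by the number of items, which is what makes the block-level model trainable at all.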

Unlike neural networks, the Gradient Boosting Machine (GBM) does not use a single strong machine learning model for forecasting. On the contrary, GBM is a collection of many weak machine learning models (Knoll and Natekin, 2013). GBM can overcome some weaknesses of a strong machine learning model: a strong model has a complex algorithm, which does not generalize easily, whereas a GBM built from a number of weak models can easily avoid this problem. Another advantage of using weak machine learning models is the time saved: the computation time of a weak model is much less than that of a strong model.

Gradient boosting is the process of minimizing the loss function through iterative execution. Each iteration has three steps: the first step finds the direction, the second decides the step size, and the last step updates the value. The machine decides the direction and the step size according to the gradient.
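A minimal sketch of these three steps for squared loss is shown below, using one-split regression stumps as the weak learners. The toy data, the shrinkage value `lr`, and the stump learner are illustrative assumptions, not the configuration used in this thesis; for squared loss, the negative gradient at each point is simply the current residual, so step 1 (direction) amounts to computing residuals, and steps 2–3 scale the fitted stump by the learning rate and add it to the ensemble.

```python
def fit_stump(x, residual):
    """Weak learner: a one-split regression stump fitted to the residuals
    (the negative gradient of the squared loss). Returns a predict function."""
    best = None
    for split in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= split]
        right = [r for xi, r in zip(x, residual) if xi > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda xi: lmean if xi <= split else rmean

def gbm(x, y, rounds=50, lr=0.3):
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        residual = [yi - pi for yi, pi in zip(y, pred)]  # step 1: direction
        stump = fit_stump(x, residual)                   # fit a weak learner to it
        pred = [pi + lr * stump(xi)                      # steps 2-3: scale and update
                for pi, xi in zip(pred, x)]
        stumps.append(stump)
    return lambda xi: sum(lr * s(xi) for s in stumps)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [5, 5, 5, 5, 20, 20, 20, 20]
model = gbm(x, y)
print(round(model(2), 1), round(model(7), 1))  # → 5.0 20.0
```

Each weak stump alone is a poor predictor, but the sequence of small, gradient-guided corrections drives the residuals toward zero, which is the sense in which many weak models form one strong ensemble.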

Click, Lanford, Malohlava, Parmar, and Roark (2015) published the H2O package in R and Python to make GBM, Random Forest, and other machine learning models easier to use. Unlike Random Forest, which averages independent regression trees, GBM uses an ensemble of trees in which each tree sequentially learns from the previous trees' prediction errors. Compared with Deep Learning, which is sensitive to the scale of the variables and is time consuming, GBM is easy to train and has been shown to perform best in many prediction tasks on structured data.



CHAPTER 3 INVENTORY MODEL AND POLICY
