## Department of Electrical Engineering

### College of Electrical Engineering and Computer Science

## National Taiwan University Master Thesis

## Recipe Prediction of Dyed Textile Using Deep Neural Network

## Keng-Chang Huang

## Advisor: Chin-Laung Lei, Ph.D.

## July, 2020

## Acknowledgements

To complete this thesis, I must first thank my father and my wife. Thank you, Father, for raising my brother and me single-handedly through many hardships. Thank you to my wife for taking care of our household so that I could focus on my studies without worry. This milestone of my life is dedicated to you both.

I thank my advisor, Professor Chin-Laung Lei, for the many suggestions and reminders in my research that helped me see past my blind spots and complete this thesis. I also thank Everest Textile, and Professor Sheng-Wei Chen for the introduction, for providing the data for this research. Thanks as well to Chih-Fan, Chia-Ching, Kuan-Chih, and Kai-Chuan; our discussions gave me much inspiration. I am also grateful to my alma mater, National Taiwan University, for providing an excellent environment and a better vantage point from which to explore this unknown world.

Finally, I thank everyone and everything I have encountered so far for being part of my life. My heartfelt thanks.

## Chinese Abstract

In the textile industry, the trial and error of color reproduction often takes several days. An accurate dye recipe prediction model can reduce the time colorists spend on color matching. However, finding a recipe prediction model with good scalability has long been a vexing problem. The methods proposed in previous related studies carry many restrictions: for example, the number of dyes used in each recipe is fixed, or only three to four candidate dyes are available. Moreover, none of these methods can find novel recipes, that is, recipes that never appear in the training data.

In this thesis, we propose a method without the above restrictions. First, we use a deep neural network to build a color prediction model whose input is a dye recipe and whose output is a color representation of the fabric, such as CIEL\*a\*b\* values or reflectance spectra. After the model is trained, we take the representation of the target fabric as input and search for the corresponding inverse of the model to predict the dye recipe. Finally, we use soft-proofing techniques to check whether the predicted recipe can reproduce the same color.

In our experiments, we use ten-fold validation to evaluate the model, on 7604 samples in total with 3 candidate fabrics and 38 candidate dyes. The results show that the method predicts recipes well: for more than 87% of the test samples, the color difference CIE ΔE\*ab from the ground truth is below 2.3, the just noticeable difference in CIEL\*a\*b\* [2].

Keywords: Color Matching, Color Reproduction, Color Management, Dye Recipe Prediction, Recipe Prediction, Color Prediction, Deep Neural Network

## ABSTRACT

In the textile industry, a good recipe prediction model (also known as a colorant prediction model) can help colorists reduce the time needed for color reproduction, which may take days because of trial and error. However, finding a scalable recipe prediction model has long been a problem. Although many approaches have been proposed in previous studies, they carry several restrictions: for example, the size of recipes is fixed, or the number of candidate dyes is limited to 3 or 4 primary colorants. Moreover, none of these methods can find novel recipes, whose combinations are not shown in the training sets.

In this thesis, we propose a method for predicting dye recipes of fabrics without the restrictions mentioned above. First, we leverage a deep neural network to build a color prediction model that takes dye recipes as input and outputs color representations of fabrics, such as CIEL\*a\*b\* values or reflectance spectra. After the model is trained, we use it to predict dye recipes by finding the inverse value of the model with the CIEL\*a\*b\* values or reflectance spectra of a given fabric. Last, we use soft-proofing techniques to validate whether the predicted recipe can produce the same color.

We use 10-fold validation on 7604 samples in total, where 38 different dyes and 3 different fabrics are involved. The result is promising: more than 87% of the samples in the test set yield CIE ΔE\*ab < 2.3 (the just noticeable difference [2]).

**Keywords:** Color Matching, Color Reproduction, Color Management, Recipe Prediction, Color Prediction, Colorant Prediction, Deep Neural Network

**Contents**

**1 Introduction**

**2 Preliminary**

2.1 Neural Network

2.1.1 Structure of a Perceptron

2.1.2 Structure of a Neural Network

2.1.3 Training of Neural Networks

2.1.4 Overfitting

2.2 CIE Color Spaces

2.2.1 CIERGB Color Space

2.2.2 CIEXYZ Color Space

2.2.3 CIEL\*a\*b\* Color Space and Just Noticeable Difference

**3 Related Work**

3.1 Color Prediction

3.1.1 Kubelka-Munk Theory

3.1.2 Color Prediction Using Neural Networks

3.2 Colorant Prediction

3.2.1 Colorant Prediction Leveraging K-M Theory

3.2.2 Colorant Prediction Leveraging Neural Networks

3.2.3 Colorant Prediction by Inverse of CPM

3.2.4 Restrictions

**4 Proposed Approach**

4.1 Data Preprocessing

4.2 Training Phase

4.3 Prediction Phase

4.4 Finding Inverse Value

**5 Result and Discussion**

5.1 Prediction of Novel Recipes

5.2 Soft-Proofing Model

5.3 Physical Validation

5.4 Prediction Error

**6 Conclusion and Future Work**

**Reference**

**List of Tables**

5.1 Result using different methods of finding the inverse in two color spaces. The values are calculated by taking the mean of each fold in ten-fold validation.

5.2 Result of predicting novel recipes.

5.3 Result of comparing two soft-proofing models in different spaces, divided by the convex hull formed by the training sets.

5.4 Result of physical validation. The color differences, ΔE\*ab, are calculated pairwise among the CIEL\*a\*b\* values derived from the soft-proofing model, the ground truth, and the physically dyed cloths, denoted as "Real".

**List of Figures**

1.1 Illustration showing the definitions of candidate dyes, candidate fabrics, a dye combination, a dye recipe, and the size of a recipe.

1.2 Activity diagram showing general steps in color matching.

2.1 Illustration of a perceptron.

2.2 Illustration of a neural network.

2.3 Training of a perceptron using gradient descent.

2.4 Color matching experiment done by Guild [14].

2.5 CIERGB color matching functions.

2.6 CIEXYZ color matching functions.

3.1 Illustration of a typical recipe prediction model leveraging a neural network.

3.2 Illustration showing the approach adopted by Chen, Yang, and Ouhyoung in [25].

4.1 The overview of the method proposed in the thesis, where L, a, b are CIEL\*a\*b\* values and $\hat{C}$ is the vector of the recipe. CPM is a color prediction model, and a soft-proofing model is used to evaluate how good the predicted recipe vectors are.

4.2 Visualization of finding an inverse value using gradient descent. $C^{i}_{j}$ is the $i$-th recipe vector at epoch $j$, $(L^{*}, a^{*}, b^{*})^{i}_{j}$ is the $i$-th CIEL\*a\*b\* value at epoch $j$, $(L^{*}, a^{*}, b^{*})$ is the CIEL\*a\*b\* value of the target fabric, and $\mu$ is the learning rate. When initializing and updating $C^{i}_{j}$, only the values of the dimensions involved are changed and the rest remain 0.

5.1 Results of physical validation of 16 randomly picked recipes. In each set, the left color is the prediction of the soft-proofing model, the center one is the ground truth, and the right one is the color produced physically using the predicted recipe, denoted as "Real".

5.2 The relation between MAE and recipe size (left) and the relation between ΔE\*ab and recipe size (right).

5.3 Relations between errors and L\*.

5.4 Relations between errors and a\*.

5.5 Relations between errors and b\*.

5.6 Relations between errors and C\*.

5.7 Relations between errors and h.

**Chapter 1** **Introduction**

In the textile industry, fabric dyeing is a process that attaches color onto fabrics.

The general laboratory-scale dyeing process may include the dye bath, heating and color fixation, washing, and drying. The dye bath is a step that soaks the fabric in dyes to attach color onto it. The fabric is then heated so that the color is fixed to the fabric permanently. After that, the fabric is washed to remove unattached colorants and then dried. The details and the time needed for each phase depend on the dyes and fabric used, and are also business secrets that vary among companies. In general, the dyeing process usually lasts hours.

After having a big picture of the dyeing process in the lab-scale, we need to

Figure 1.1: Illustration showing the definitions of candidate dyes, candidate fabrics, a dye combination, a dye recipe, and the size of a recipe.

Figure 1.2: Activity diagram showing general steps in color matching.

define some terms used in our thesis. Figure 1.1 shows the terms we use to describe the components of our experiment. Candidate dyes are the dyes that can be picked, of which there are 38 in the experiment. Candidate fabrics are the fabrics that can be picked to form a recipe, and there are 3 of them in the experiment. A dye combination is the set of dyes picked; in the case shown in figure 1.1, the dye combination is "green and purple". A dye combination, the concentrations of the dyes, and a piece of fabric are together called a recipe. The size of a recipe refers to how many dyes are picked in the recipe, which is 2 in the case shown in figure 1.1.

Now, let us consider a common scenario in the industry: reproducing the color of a given piece of colored fabric from customers. We refer to the given fabric as the "target fabric" in the rest of the thesis. We must pick a dye combination, find the concentrations of the dyes, and use the corresponding fabric to reproduce the same color before putting them into mass production. That is, we need to find the recipe for that colored fabric first. A typical color matching flow is shown as an activity diagram in figure 1.2. A colorist must find a dye recipe for a target fabric based on their own experience, then dye the fabric and compare the reproduced color on the fabric with the one on the target fabric. The measurement of color difference is ΔE\*ab, and the preferred difference is under 2.3, which is the just noticeable difference [2].

It is apparent that the color matching process requires much trial and error, and also a lot of time, since the dyeing process usually lasts hours. Therefore, a precise recipe prediction model will considerably reduce the repetition of trial and error and hence the time and effort needed. Our purpose is to find a precise recipe prediction model that can predict the recipe for a given target fabric.

In this thesis, we first employ a deep neural network to build a color prediction model, which takes a dye recipe as the input and produces the CIEL\*a\*b\* values or reflectance spectra of the fabric as the output. When conducting color reproduction, we are given a target fabric, whose CIEL\*a\*b\* values and reflectance spectra can be measured, and we find the inverse value (i.e., the recipe) of the color prediction model with those values. We then use soft-proofing to see whether the predicted recipe can produce the same color.

Our contribution is threefold. First, to our knowledge, we are the first to propose an approach that can be applied to predictions involving up to 38 candidate dyes and various recipe sizes (1 to 4 components per recipe). This shows that our approach has good scalability with regard to the size of recipes and the number of candidate dyes. Second, our method can still yield acceptable results for novel dye combinations that do not appear in the training set, which has not been discussed in previous research either. Third, the approach we propose is data-driven, easily adopted, and yields promising results.

In the following parts, we go through the preliminaries used in the thesis in chapter 2, including neural networks and color systems. Then we review related work in chapter 3. In chapter 4, we elaborate on the way we vectorize a recipe, train the model, and find the inverse value of the model. We then discuss the findings of our experiments in chapter 5. Last, in chapter 6, we draw conclusions on our approach and point out possible improvements that could be made in the future.

**Chapter 2** **Preliminary**

**2.1 Neural Network**

Neural networks are well-developed techniques that can be applied in many fields such as image classification, image processing, natural language processing, and time series prediction. Here, we only briefly cover the parts related to the thesis: the mathematical definition of a neuron (perceptron), the mathematical definition of a neural network, and supervised learning. For more detailed information, one can refer to the book written by Nielsen [22] and the references cited [20], [23].

Figure 2.1: Illustration of a perceptron.

**2.1.1** **Structure of a Perceptron**

A typical perceptron is shown in figure 2.1. In the figure, the input vector $x_{1 \dots N}$ is denoted as $\vec{x}$. A perceptron is simply a function that takes $\vec{x}$ as input and outputs a value $y$. The function consists of two parts: a linear part and a nonlinear part. The linear part is the multiplication of the input vector $\vec{x}$ and a weight vector $\vec{w}$ plus a bias $b$. The common functions used in the nonlinear part, also known as the activation function and denoted $f$ in the figure, are the threshold function and the sigmoid function. A threshold function outputs 1 if the input exceeds the threshold and 0 otherwise. The sigmoid function is given in equation 2.1, and the behavior of a perceptron can be defined as equation 2.2.

$$ S(x) = \frac{e^{x}}{e^{x} + 1} \tag{2.1} $$

$$ y = f(\vec{x} \cdot \vec{w}^{T} + b) \tag{2.2} $$
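As a concrete illustration, equations 2.1 and 2.2 can be sketched in a few lines of Python. The function names `sigmoid` and `perceptron` are ours, not from the thesis:

```python
import math

def sigmoid(x):
    # Equation 2.1: S(x) = e^x / (e^x + 1), algebraically equal to 1 / (1 + e^-x).
    return math.exp(x) / (math.exp(x) + 1.0)

def perceptron(x, w, b, f=sigmoid):
    # Equation 2.2: y = f(x . w^T + b) -- dot product plus bias, then activation.
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return f(z)
```

Passing a threshold function for `f` instead of the sigmoid reproduces the other activation mentioned above.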

**2.1.2** **Structure of a Neural Network**

A neural network consists of many perceptrons, and the perceptrons are layered into what are known as hidden layers, as shown in figure 2.2.

Figure 2.2: Illustration of a neural network.

Figure 2.3: Training of a perceptron using gradient descent.

In the figure, the input is again denoted as $\vec{x}$. Since there are many perceptrons in each layer, we denote the weight vector of each perceptron as $\vec{w}^{l}_{s}$, where $l$ indicates the layer and $s$ is the position of the perceptron within the layer, with maximum value $S^{l}$ for each layer $l$. Instead of a single weight vector as in the one-perceptron case, we have a weight matrix $W^{l} = \left[\vec{w}^{l}_{1} \dots \vec{w}^{l}_{S^{l}}\right]$ for each layer $l$. The activation function at each layer is denoted $f^{l}$, and the output of each layer is denoted $y^{l}_{s}$ or $\vec{y}^{l}$. The bias of each layer is omitted for brevity. With this notation, we can derive the output of the final layer $\vec{y}^{L}$ with the following equation:

$$ \vec{y}^{\,l} = f^{l}\!\left(\vec{y}^{\,l-1} \cdot W^{l}\right), \qquad \vec{y}^{\,0} = \vec{x} \tag{2.3} $$
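The layer-by-layer computation of equation 2.3 can be sketched as follows; storing each $W^{l}$ as a list of column vectors is a representation we chose for clarity (real implementations use matrix libraries):

```python
def forward(x, weights, activations):
    # Equation 2.3 applied layer by layer: y^l = f^l(y^{l-1} . W^l), with y^0 = x.
    # weights[l] holds W^l as a list of columns, one column vector per perceptron;
    # biases are omitted, as in the text.
    y = x
    for W, f in zip(weights, activations):
        y = [f(sum(yi * wi for yi, wi in zip(y, col))) for col in W]
    return y
```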

**2.1.3** **Training of Neural Networks**

Now we know how the output values are calculated in a neural network, but not yet how to find the weights that form it. To find the weights, we must "train" the neural network with a training set consisting of inputs and the corresponding outputs known in advance. This kind of training is known as supervised learning. For simplicity, let us consider training the perceptron shown in figure 2.3; the bias and activation function are omitted for brevity. In figure 2.3, we define a value called the loss, which is calculated via a loss function $L(y, \hat{y})$. For each sample in the training set, we feed the input to the network and calculate the prediction $\hat{y}$. Each $\hat{y}$ would equal the ground truth in the training set if the network were a perfect model; what counts as "the same" is defined by the loss function $L(y, \hat{y})$. Common loss functions are the absolute deviation or the root mean square deviation between $y$ and $\hat{y}$. So the problem becomes finding the weights of the perceptron that produce the minimum loss over all samples in the training set, which can be expressed as the following equation:

$$ \operatorname*{argmin}_{\vec{w}} \; L(y, \hat{y}) = L(y,\, \vec{x} \cdot \vec{w}), \quad \text{for all } \vec{x}, y \text{ in the training set} \tag{2.4} $$
To find the weights that minimize the loss, an iterative algorithm called gradient descent can be leveraged. First, we initialize the weights with values other than zero, and then we use the following equation to update them:

$$ \vec{w}_{t+1} = \vec{w}_{t} - \mu \nabla L(y, \hat{y}) \tag{2.5} $$

where $\vec{w}$ is the weight vector of the perceptron, $t$ is the iteration, usually called an "epoch" in the field of machine learning, $\mu$ is the learning rate, which controls the extent to which the weights are updated, and $L$ is the loss function. The process continues for a certain number of epochs or until the sum of the losses over all samples is small enough. The concept of finding the weights of a full neural network is the same as described above.
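A minimal sketch of this training loop, assuming the simplified linear perceptron of figure 2.3 and a squared loss; the function name and hyperparameter defaults are our choices:

```python
def train_perceptron(samples, epochs=300, mu=0.05):
    # Gradient descent on a linear perceptron y_hat = x . w with squared loss
    # L(y, y_hat) = (y - y_hat)^2; bias and activation omitted as in figure 2.3.
    dim = len(samples[0][0])
    w = [0.1] * dim                                        # non-zero initialization
    for _ in range(epochs):
        for x, y in samples:
            y_hat = sum(xi * wi for xi, wi in zip(x, w))
            grad = [-2.0 * (y - y_hat) * xi for xi in x]   # gradient of L w.r.t. w
            w = [wi - mu * gi for wi, gi in zip(w, grad)]  # equation 2.5
    return w
```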

**2.1.4 Overfitting**

Overfitting is a phenomenon in which a neural network predicts outputs very well for inputs in the training set but performs badly on inputs that are not. There are many techniques that can be employed to moderate overfitting. One of the most common is to use a test set to calculate the loss and use it to decide when the iterative gradient descent process should stop; the samples in the test set are not present in the training set. Another trick is to restrict the weights from growing too large. This can be achieved by regularization, which adds another term, consisting of the weight values, to the loss function. For example, the sum of the squares of the weights is added to the loss when L2 regularization is used.
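A small sketch of the L2-regularized loss just described; the function name and the default λ are our choices:

```python
def l2_penalised_loss(y, y_hat, w, lam=1e-3):
    # Squared loss plus lam times the sum of squared weights (L2 regularization),
    # which discourages individual weights from growing too large.
    return (y - y_hat) ** 2 + lam * sum(wi * wi for wi in w)
```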

**2.2** **CIE Color Spaces**

The Commission Internationale de l'Eclairage (CIE) is the organization that has developed many standards, such as color spaces, in fields related to color science and illumination. CIEL\*a\*b\* is one of the color spaces developed by CIE and is a fairly common color space in the textile industry. In this section, we introduce the CIEL\*a\*b\* color space and those related to it. For more detailed information on the CIE systems, one can refer to the book edited by Schanda [14].

**2.2.1** **CIERGB Color Space**

Figure 2.4: Color matching experiment done by Guild [14].

In 1931, Guild conducted an experiment to determine the coordinates of stimuli, that is, the colors of lights. The experiment required respondents to answer whether a monochromatic stimulus (light) is the same as one mixed from three primary monochromatic stimuli: red, green, and blue. The experiment is shown in figure 2.4. Seven observers, known as the CIE 1931 Standard Observer, were involved in the experiment. The wavelength of [R] is 700.0 nm, that of [G] is 546.1 nm, and that of [B] is 435.8 nm. The color of a stimulus can be defined as in equation 2.6.

$$ [C] \equiv R[R] + G[G] + B[B] \tag{2.6} $$

where [C] is the unknown stimulus; ≡ reads as "matches"; [R], [G], [B] are the units of the monochromatic primary stimuli; and R, G, B represent the amounts of the primary stimuli to be used to match an unknown light, known as the CIERGB values. The experiment was conducted on many visible monochromatic lights and the corresponding CIERGB values were collected.

Figure 2.5: CIERGB color matching functions.

By plotting the values of R, G, B on the y-axis against the wavelength of the monochromatic stimuli on the x-axis, we obtain the CIERGB color matching functions $\bar{r}(\lambda)$, $\bar{g}(\lambda)$, and $\bar{b}(\lambda)$ shown in figure 2.5.

To determine CIERGB values of non-monochromatic stimuli, equation 2.7 can be leveraged.

$$ [C] = \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} \bar{r}(\lambda) P(\lambda)\, d\lambda \cdot [R] + \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} \bar{g}(\lambda) P(\lambda)\, d\lambda \cdot [G] + \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} \bar{b}(\lambda) P(\lambda)\, d\lambda \cdot [B] \tag{2.7} $$

where [C] is the color of a non-monochromatic stimulus, $P(\lambda)$ is the spectrum of the non-monochromatic stimulus, $\bar{r}(\lambda)$, $\bar{g}(\lambda)$, $\bar{b}(\lambda)$ are the color matching functions, and [R], [G], [B] are the monochromatic primary stimuli. For clarity, the RGB color space here is different from the common RGB coordinates used by computers.

**2.2.2** **CIEXYZ color space**

As one can see, there are negative values in $\bar{r}(\lambda)$, which led to difficulty in calculation in the era without computers. The RGB values are therefore converted to XYZ with equation 2.8, giving the CIEXYZ color matching functions $\bar{x}(\lambda)$, $\bar{y}(\lambda)$, and $\bar{z}(\lambda)$ shown in figure 2.6.

$$ \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 2.7689 & 1.7517 & 1.1302 \\ 1.0000 & 4.5907 & 0.0601 \\ 0 & 0.0565 & 5.5943 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{2.8} $$

Figure 2.6: CIEXYZ color matching functions.

CIEXYZ values of a non-monochromatic light can be calculated by equation 2.9, where X, Y, Z are the CIEXYZ values, $\bar{x}(\lambda)$, $\bar{y}(\lambda)$, and $\bar{z}(\lambda)$ are the matching functions, and $\phi(\lambda)$ is the spectrum reflected by a surface. The value of $k$ is calculated by setting Y = 100 with the spectrum of the "reference white" as $\phi(\lambda)$. The so-called "reference white" is the color reflected by a perfect surface under a certain standard illuminant. A perfect surface is a surface that reflects all visible light at every wavelength. Standard illuminants are defined by CIE, and D65 is a common standard illuminant used today [14]. Generally speaking, the X value also represents red, Y represents green, and Z represents blue.

$$ X = k \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} \bar{x}(\lambda)\,\phi(\lambda)\, d\lambda, \qquad Y = k \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} \bar{y}(\lambda)\,\phi(\lambda)\, d\lambda, \qquad Z = k \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} \bar{z}(\lambda)\,\phi(\lambda)\, d\lambda \tag{2.9} $$
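A sketch of equation 2.9 as a Riemann sum over sampled spectra. The function names and the sampling convention are our assumptions; real CIE color matching function tables would be supplied by the caller:

```python
def tristimulus(cmf, phi, dl):
    # One integral of equation 2.9, discretized over samples spaced dl nm apart:
    # integral of cmf(lambda) * phi(lambda) d(lambda) ~ sum(cmf_i * phi_i) * dl.
    return sum(c * p for c, p in zip(cmf, phi)) * dl

def xyz_from_spectra(cmfs, phi, phi_white, dl):
    # cmfs = (x_bar, y_bar, z_bar) sampled on the same wavelength grid as phi.
    # k is chosen so that the reference white gets Y = 100, as in the text.
    k = 100.0 / tristimulus(cmfs[1], phi_white, dl)
    return tuple(k * tristimulus(cmf, phi, dl) for cmf in cmfs)
```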

**2.2.3** **CIEL\*a\*b\* Color Space and Just Noticeable Difference**

CIEXYZ values are the basis of other color spaces developed by CIE. A set of XYZ values represents a color under a certain standard illuminant. However, the CIEXYZ color space is not uniform, which means that the distance between two points in CIEXYZ space does not represent the actual color difference perceived by human eyes. For example, one may find a huge color difference between two different green colors but a small difference between two different purple colors, even though the distances between the colors in each pair are the same.

To solve this problem, CIEXYZ values are transformed to CIEL\*a\*b\* by equation 2.10:

$$
\begin{aligned}
L^{*} &= 116\, f(Y/Y_{n}) - 16 \\
a^{*} &= 500\, [f(X/X_{n}) - f(Y/Y_{n})] \\
b^{*} &= 200\, [f(Y/Y_{n}) - f(Z/Z_{n})]
\end{aligned}
\qquad \text{where } f(t) =
\begin{cases}
\dfrac{1}{3}\left(\dfrac{29}{6}\right)^{2} t + \dfrac{4}{29}, & \text{if } t \le \left(\dfrac{6}{29}\right)^{3} \\[2ex]
t^{1/3}, & \text{if } t > \left(\dfrac{6}{29}\right)^{3}
\end{cases}
\tag{2.10}
$$
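Equation 2.10 translates directly into code. Note that the default white point below assumes D65; the caller should substitute the white point of the illuminant actually used:

```python
def f(t):
    # The piecewise function in equation 2.10, with delta = 6/29.
    delta = 6.0 / 29.0
    if t > delta ** 3:
        return t ** (1.0 / 3.0)
    return t / (3.0 * delta * delta) + 4.0 / 29.0

def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    # Equation 2.10; the default (Xn, Yn, Zn) is the D65 reference white,
    # an assumption on our part -- replace it with the white point in use.
    L = 116.0 * f(Y / Yn) - 16.0
    a = 500.0 * (f(X / Xn) - f(Y / Yn))
    b = 200.0 * (f(Y / Yn) - f(Z / Zn))
    return L, a, b
```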

where $L^{*}$, $a^{*}$, and $b^{*}$ are the CIEL\*a\*b\* values; X, Y, and Z are the CIEXYZ values; and $X_{n}$, $Y_{n}$, $Z_{n}$ are the CIEXYZ values of the reference white.

_{n}The commonly accepted hypothesis assumes that the signals generated by the
cone cells will be processed to three signals, lightness, red/green and blue/yellow. In
*CIEL*^{∗}*a*^{∗}*b*^{∗}*space, L** ^{∗}* represents lightness which means how bright or dark a color

*is. The value of a*

^{∗}*represents the extent of magenta and green a color is. And b*

*represents the blue the extent of blue and yellow a color is.*

The value of ΔE\*ab, which represents color difference, is defined as the 2-norm distance between two points. Although CIEL\*a\*b\* space is known as one of the so-called uniform color spaces, it is not perfectly uniform. The value 2.3 in ΔE\*ab is known as the just noticeable difference [2]. Generally, two colors with ΔE\*ab < 2.3 will be perceived as the same by human eyes.
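The color-difference check just described can be sketched as:

```python
def delta_e_ab(lab1, lab2):
    # Delta E*ab: the 2-norm distance between two CIELAB points.
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

def perceptibly_same(lab1, lab2, jnd=2.3):
    # Two colors under the just noticeable difference read as identical.
    return delta_e_ab(lab1, lab2) < jnd
```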

**Chapter 3**

**Related Work**

In this chapter, we first state the difference and the relation between color prediction and colorant prediction. Then we introduce the most famous color prediction model, known as the Kubelka-Munk equation. Later, we discuss research that leverages neural networks in this field and state the pros and cons of current methods.

A color prediction model is a model that takes representations of colorants, pigments, or dyes as input and outputs color representations, such as CIEL\*a\*b\* values or reflectance spectra, of the color painted on the substrate. On the other hand, colorant prediction models, also known as recipe prediction models, take the representation of the color on the substrate as input and output representations of the colorants. Taking painting as an example, if one wants to know what color two mixed pigments produce on paper, a color prediction model can be employed to figure that out. And if one wants to know how to reproduce the color of a painted paper, then a colorant prediction model comes into play. Color prediction models and colorant prediction models are inverses of each other.

**3.1** **Color Prediction**

**3.1.1** **Kubelka-Munk Theory**

The Kubelka-Munk (K-M) theory [1] is a commonly accepted method to derive the reflectance spectra of a painted surface. The reflectance of a painted surface at a certain wavelength $\lambda$ is given by the following equations [4], [10]:

$$ R = \frac{1 - \xi \left(a - b \coth(bSh)\right)}{a - \xi + b \coth(bSh)} \tag{3.1a} $$

$$ a = 1 + \frac{A}{S}, \qquad b = \sqrt{a^{2} - 1} \tag{3.1b} $$

where $h$ is the thickness of the pigment layer, $S$ and $A$ are the pigment's scattering and absorption coefficients, $\xi$ is the substrate reflectance, and $R$ is the resulting reflectance of the pigment layer.
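A sketch of equation 3.1; the function name and argument order are our choices:

```python
import math

def km_reflectance(xi, A, S, h):
    # Equation 3.1: R = (1 - xi*(a - b*coth(b*S*h))) / (a - xi + b*coth(b*S*h)),
    # with a = 1 + A/S and b = sqrt(a^2 - 1).
    a = 1.0 + A / S
    b = math.sqrt(a * a - 1.0)
    coth = math.cosh(b * S * h) / math.sinh(b * S * h)
    return (1.0 - xi * (a - b * coth)) / (a - xi + b * coth)
```

As the thickness grows, the result approaches $R_\infty = a - b$ regardless of the substrate reflectance, which connects this formula to equation 3.2 below.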

However, equation 3.1 is not directly applicable when mixing colorants, since the scattering and absorption coefficients are themselves determined by the concentrations of the pigments. Now consider the K-M equation applied to a turbid material, shown below:

$$ \frac{A}{S} = \frac{(1 - R_{\infty})^{2}}{2 R_{\infty}} \tag{3.2} $$

where $S$ and $A$ are the pigment's scattering and absorption coefficients, and $R_{\infty}$ is the reflectance of the pigment at infinite thickness. Equation 3.2 establishes the relation among the scattering coefficient, the absorption coefficient, and the reflectance. Saunderson further indicated that the $A$ and $S$ of mixed colorants can be calculated by adding them in proportion to the concentrations [3], which results in the following equation:

$$ \frac{A}{S} = \frac{C_{0} A_{0} + C_{1} A'_{1} + C_{2} A'_{2} + \dots + C_{n} A'_{n}}{C_{0} S_{0} + C_{1} S'_{1} + C_{2} S'_{2} + \dots + C_{n} S'_{n}} \tag{3.3} $$

where $A$ and $S$ are the mixture's absorption and scattering coefficients; $A'_{i}$ and $S'_{i}$ are the absorption and scattering coefficients of each colorant; $C_{i}$ is the concentration of each colorant; and $A_{0}$ and $S_{0}$ are the values for the substrate, added by Allen in 1980 [6] and adopted since.

Thus, through equations 3.2 and 3.3, we can calculate the absorption and scattering coefficients of the mixed colorants, and then calculate the reflectance spectra of the painted surface by equation 3.1.
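Equations 3.2 and 3.3, together with the closed-form inverse of 3.2, can be sketched as follows; fixing the substrate concentration $C_0$ to 1 is our simplification:

```python
def k_over_s(r_inf):
    # Equation 3.2: A/S = (1 - R_inf)^2 / (2 * R_inf).
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def r_inf_from_k_over_s(ks):
    # Closed-form inverse of equation 3.2 (root below 1 of the quadratic):
    # R_inf = 1 + A/S - sqrt((A/S)^2 + 2*A/S).
    return 1.0 + ks - (ks * ks + 2.0 * ks) ** 0.5

def mixture_k_over_s(concs, A_primes, S_primes, A0=0.0, S0=1.0):
    # Equation 3.3 with the substrate concentration C0 fixed to 1 (our choice).
    num = A0 + sum(c * a for c, a in zip(concs, A_primes))
    den = S0 + sum(c * s for c, s in zip(concs, S_primes))
    return num / den
```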

However, Nickols and Orchard showed that absorption and scattering coefficients calculated via equation 3.3 result in great error [5], and it has also been shown that the behavior of the coefficients of mixed colorants is nonlinear [yang2010kubelka]. This is the main reason why color prediction and colorant prediction models that leverage K-M theory are not well adopted in the dyeing industry.

**3.1.2** **Color Prediction Using Neural Networks**

Neural networks are an alternative to K-M theory that avoids the measurements and corrections needed to take the reflection between paint and air into account [17]. Westland [12] and Bishop, Bushnell, and Westland [8] also indicated that the assumptions of K-M theory may not be met in several cases and that neural networks are an alternative. Much research has leveraged neural networks as color prediction models in related fields such as painting and printing since the 1980s [12], [25]–[27]. These models all take a recipe representation as input and output a representation of the painted surface in CIEL\*a\*b\* or reflectance spectra.

**3.2** **Colorant Prediction**

In the previous sections, we discussed color prediction models. These models cannot be directly used to find the recipes needed to reproduce the color on a surface. We need colorant prediction models, also called recipe prediction models, for this purpose.

We categorize colorant prediction models into three categories: those that employ K-M theory, those that employ neural networks directly, and those that employ a color prediction model and find its inverse.

**3.2.1 Colorant Prediction Leveraging K-M theory**

Colorant prediction models that employ K-M theory use equation 3.3 to derive their model. For example, Nateri and Ekrami derived an equation that takes the derivative of the $A/S$ of the fabric and the $A/S$ of a standard sample to predict the concentrations of colorants in bi-component mixing [15].

**3.2.2** **Colorant Prediction Leveraging Neural Networks**

The second kind of colorant prediction model employs a neural network directly, taking the representation of the color on a substrate as input and outputting recipes that can reproduce the color.

Bishop, Bushnell, and Westland used a two-hidden-layer neural network taking the CIEL\*a\*b\* values of a target fabric as input and generating the concentrations of three primary colorants (red, yellow, blue) as output [8].

Tominaga input a set of CIEL\*a\*b\* values of a painted paper to a neural network and output CMYK values [9]. He also built another neural network as a soft-proofing model, which takes CMYK as input and generates CIEL\*a\*b\* values as output. This model is used to check whether the predicted CMYK values can reproduce the same CIEL\*a\*b\* values. Physical experiments were also done to validate the framework. It turned out that ΔE\*ab is under 3 between the colors produced by printers and the colors produced by the soft-proofing model.

Figure 3.1: Illustration of a typical recipe prediction model leveraging a neural network.

Almodarresi, Mokhtari, Almodarresi, et al. scanned a target fabric to get its image, extracted the CIEL\*a\*b\* values of each pixel of the image, and aggregated the CIEL\*a\*b\* values into a histogram [18]. They then used a neural network taking these features as input and generating the concentrations of three primary colors (red, yellow, blue) as output.

Sennaroglu et al. used a neural network that outputs dye concentrations for fluorescent-dyed acrylic fabric from reflectance spectra [sennaroglu2014colour].

Tarasov, Milder, and Tyagunov took the logarithm of reflectance spectra to obtain spectral densities, which are the input of a neural network outputting CMYK values [28], [30].

The methods leveraging neural networks can be summarized as figure 3.1: a neural network is used directly, taking reflectance spectra, CIEL\*a\*b\*, or values of another color space, and producing the amounts or concentrations of three to four primary colorants.

**3.2.3** **Colorant Prediction by Inverse of CPM**

Models of the third kind predict a recipe by finding the inverse value of a color prediction model. For example, Westland pointed out that it is feasible to find a recipe by inputting many different recipes into a color prediction model implemented as an ANN until finding one that generates output, such as CIEL\*a\*b\*, close enough to the color of the target fabric [12].

Figure 3.2: Illustration showing the approach adopted by Chen, Yang, and Ouhyoung in [25].

In [25], Chen et al. built a color prediction model that takes transmittance spectra, reflectance spectra, and the amounts of two pigments chosen from thirteen candidate pigments, and outputs the reflectance spectra of the mixed pigment painted on a square piece of paper. They entered many recipes into that model and recorded the amounts of the pigments, the predicted reflectance, and the CIEL\*a\*b\* values in a lookup table. When predicting the recipe needed for a target color, one just has to find the recipe in the lookup table whose CIEL\*a\*b\* is closest to the given color, as shown in figure 3.2.
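The lookup-table approach can be sketched as follows, with `cpm` standing in for any trained color prediction model; the names and table layout are ours:

```python
def build_lookup_table(recipes, cpm):
    # cpm maps a recipe to its predicted (L*, a*, b*) triple.
    return [(recipe, cpm(recipe)) for recipe in recipes]

def nearest_recipe(target_lab, table):
    # Return the recipe whose predicted CIELAB lies closest to the target,
    # using the 2-norm (i.e. Delta E*ab) as the distance.
    def dist(lab):
        return sum((p - q) ** 2 for p, q in zip(lab, target_lab)) ** 0.5
    return min(table, key=lambda entry: dist(entry[1]))[0]
```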

**3.2.4** **Restrictions**

The current methods have several restrictions. First, the number of candidate colorants that can be used is limited: in the previous studies, the total number of colorants is typically 3 or 4, the thirteen used in [25] being the exception.

Second, the size of recipes (i.e., the number of colorants used in a recipe) is fixed in all the models. Moreover, models of the second kind are fundamentally limited by metamerism [14], the phenomenon that different reflectance spectra may produce the same color, which implies that different recipes can produce colors that are very close. If the training set contains samples that produce close colors with different dye combinations, the performance of the neural network model is bound to be poor, since no model can map one input to many outputs. Finally, the studies discussed above cannot predict novel recipes: there is no way to use those models to find a recipe whose dye combination does not appear in the training set.

The colorant prediction model that we propose is of the third kind. The way we vectorize recipes and find the inverse values accommodates recipes of different sizes as well as metamerism present in the training set. There is no restriction on the number of candidate colorants either.

**Chapter 4**

**Proposed Approach**

The overview of our approach in the prediction phase is visualized in figure 4.1 and will be explained in the following sections.

**4.1** **Data Preprocessing**

The samples used in the experiment are provided by Everest Textile Co., Ltd., Taiwan. Each sample consists of a recipe with size ranging from one to four, reflectance spectra, CIEL^{∗}a^{∗}b^{∗} values, and CIELCh values. The reflectance spectra and CIEL^{∗}a^{∗}b^{∗} values are measured and recorded by machines and software developed by Datacolor, and then entered manually into a CSV file.

Figure 4.1: Overview of the method proposed in the thesis, where L, a, b are CIEL^{∗}a^{∗}b^{∗} values and Ĉ is the recipe vector. CPM is a color prediction model, and a soft-proofing model is used to evaluate how good the predicted recipe vectors are.

There are 8518 real-world samples, 487 standard samples, and 3 white samples.

White samples are the data of the three fabrics that are not dyed by any colorant.

Standard samples are obtained by dyeing the three fabrics with each colorant at concentrations of 0.06, 0.5, 1.5, and 3. The real-world data are the samples that are dyed to match colors that customers ask for. The data is then cleaned by the following procedure. First, we drop any sample with values outside reasonable ranges or that are not numbers. In terms of CIEL^{∗}a^{∗}b^{∗}, values of L should be between 0 and 100, and values of a or b should be between -128 and 128. The values of reflectance spectra should be between 0 and 200% (to allow for fluorescent dyes). The values of concentration should be between 0 and 4 mg/L. Second, we drop samples whose reflectance spectra have second derivatives exceeding a certain threshold, because a spectrum that sharp usually indicates a typo. Third, we calculate CIELCh values from the CIEL^{∗}a^{∗}b^{∗} values; a sample is dropped if the calculated CIELCh value does not match the given one. Last, the fabric of each sample is categorized into one of three basic fabrics: cotton-based, polyester-based, and nylon-based.

After dirty samples are removed, 7604 samples are left. Three of them are the white samples, 450 of them are standard samples, and 7151 are real-world samples.

In real-world samples, 24 of the recipes have sizes of 1, 733 have sizes of 2, 6333 have sizes of 3, and 61 have sizes of 4.

To make recipes suitable as inputs to our model, they must be vectorized. The way we vectorize recipes is straightforward. First, we create a 38-dimensional vector corresponding to the 38 candidate dyes, where the value of each dimension is the concentration (between 0 and 4) of that dye in the recipe. Then we concatenate this vector with the one-hot vector of the three fabrics, producing a 41-dimensional recipe vector. As for the outputs, we use standardization to normalize the CIEL^{∗}a^{∗}b^{∗} values as well as the reflectance spectra.
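A minimal sketch of this vectorization (the dye indices and fabric names here are hypothetical stand-ins):

```python
import numpy as np

NUM_DYES = 38
FABRICS = ["cotton", "polyester", "nylon"]  # hypothetical fabric labels

def vectorize_recipe(recipe, fabric):
    """Turn a {dye_index: concentration} mapping and a fabric name into
    the 41-dimensional recipe vector described above: 38 concentration
    dimensions followed by a 3-dimensional one-hot fabric encoding."""
    vec = np.zeros(NUM_DYES + len(FABRICS))
    for dye_index, concentration in recipe.items():
        assert 0.0 <= concentration <= 4.0  # valid range after data cleaning
        vec[dye_index] = concentration
    vec[NUM_DYES + FABRICS.index(fabric)] = 1.0
    return vec
```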

**4.2** **Training Phase**

In the training phase, we train two color prediction models, one is the model
that will later be used to find inverse values (denoted as CPM) and the other is used
as a soft-proofing model (denoted as SPM). Different structures and seeds are used
to train the two models. We use the recipe vectors as input and the CIEL^{∗}a^{∗}b^{∗} values
or reflectance spectra as output, to train the models. The models are implemented
with TensorFlow and MSE is the loss function. We use ten-fold validation on the
real-world samples, and the training sets of each fold will be concatenated with
standard and white samples. The standard and white samples are used as anchor
points in the training set. We will discuss it later in section 5.1.
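The fold construction can be sketched as follows (the arrays are random stand-ins for the 7151 real-world samples and the 453 standard and white anchor samples; data loading is elided):

```python
import numpy as np

# Stand-ins: 41-dim recipe vectors, 3-dim standardized Lab targets.
X_real, y_real = np.random.rand(7151, 41), np.random.rand(7151, 3)
X_anchor, y_anchor = np.random.rand(453, 41), np.random.rand(453, 3)

rng = np.random.default_rng(0)
splits = np.array_split(rng.permutation(len(X_real)), 10)

folds = []
for k in range(10):
    test_idx = splits[k]
    train_idx = np.concatenate([splits[j] for j in range(10) if j != k])
    # The standard and white anchor samples join every fold's training set.
    X_train = np.concatenate([X_real[train_idx], X_anchor])
    y_train = np.concatenate([y_real[train_idx], y_anchor])
    folds.append((X_train, y_train, X_real[test_idx], y_real[test_idx]))
```

Only the real-world samples are split; the anchors are never held out, so every CPM sees the pure-dye behavior of each colorant.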

**4.3** **Prediction Phase**

In the prediction phase, we are given a target fabric whose color we want to reproduce. In our experiment, we assume that the dye combination of the recipe is known and we only find the concentrations. In practice, dye combinations can easily be picked by colorists, and decoupling the choice of dye combination from the choice of concentrations provides more flexibility: a colorist can try different dye combinations to get different recipes for one color. Alternatively, we can pick the dye combination that produces the closest color among the real-world samples.

After a dye combination is picked, we can find the inverse value of the CPM, that is, the recipe, from the CIEL^{∗}a^{∗}b^{∗} values of the target fabric. We discuss the ways we adopted to find inverse values in the next section. The predicted recipe is then input to the SPM to get a set of predicted CIEL^{∗}a^{∗}b^{∗} values or reflectance spectra. There are two metrics by which we evaluate the method. The first is the similarity of a predicted recipe to the ground-truth recipe: we calculate the mean absolute error (MAE) as well as the relative error of the recipe vectors in the experiment. For a recipe, the MAE and relative error are given in equations 4.1 and 4.2 respectively. For the other metric, we calculate the CIE ∆E_{ab}^{∗} between the predicted values generated by the SPM and those of the target fabric.

MAE = sum(|C_gt − Ĉ|) / (size of the recipe)    (4.1)

Relative Error = sum(|C_gt − Ĉ|) / sum(C_gt)    (4.2)
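Equations 4.1 and 4.2, together with ∆E_{ab}^{∗} (the Euclidean distance in CIEL^{∗}a^{∗}b^{∗} space), can be sketched as follows, assuming the recipe size is taken as the number of nonzero concentrations in the ground-truth vector:

```python
import numpy as np

def recipe_errors(c_gt, c_pred):
    """MAE (eq. 4.1) and relative error (eq. 4.2) of a predicted recipe."""
    c_gt, c_pred = np.asarray(c_gt, float), np.asarray(c_pred, float)
    abs_diff = np.abs(c_gt - c_pred).sum()
    size = np.count_nonzero(c_gt)  # number of dyes in the ground-truth recipe
    return abs_diff / size, abs_diff / c_gt.sum()

def delta_e_ab(lab1, lab2):
    """CIE ∆E*_ab: Euclidean distance between two CIEL*a*b* triples."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))
```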

**4.4 Finding Inverse Value**

Two ways can be used to find the inverse value of CPM, grid search and gradient
descent. Grid search is to enter as many different concentrations to the CPM as
*possible to get the corresponding CIEL*^{∗}*a*^{∗}*b** ^{∗}* output, and the concentration that

*yields the smallest error to the CIEL*

^{∗}*a*

^{∗}*b*

*of the target fabric is the recipe we want. We use stride equals 0.01 for the recipes whose size is 1 or 2 and for those with size greater than 2, the stride must be increased otherwise it will take minutes even hours in run time.*
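A hedged sketch of this grid search, where the `cpm_predict` callable stands in for the trained CPM:

```python
import itertools
import numpy as np

def grid_search_inverse(cpm_predict, base_vector, dye_indices,
                        target_lab, stride=0.01, max_conc=4.0):
    """Exhaustively try concentration combinations for the chosen dyes and
    keep the one whose predicted CIEL*a*b* is closest to the target.
    `base_vector` already carries the one-hot fabric encoding."""
    levels = np.arange(0.0, max_conc + stride, stride)
    best_vec, best_err = None, float("inf")
    for combo in itertools.product(levels, repeat=len(dye_indices)):
        vec = base_vector.copy()
        vec[list(dye_indices)] = combo
        err = np.linalg.norm(cpm_predict(vec) - np.asarray(target_lab))
        if err < best_err:
            best_vec, best_err = vec, err
    return best_vec, best_err
```

With a stride of 0.01 over [0, 4] there are 401 levels per dye, so a size-2 recipe already needs 401² ≈ 160,000 CPM evaluations; this combinatorial growth is exactly why the stride must be coarsened for larger recipes.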

Gradient descent is the other way to find the inverse value of a neural network. This technique is leveraged in many works. For example, in the "deep dream" project [21] developed by Mordvintsev, Olah, and Tyka, the input image is updated by the gradient of the loss between the output value and the chosen one-hot vector. And in asm2vec [29], the predicted vector of an assembly function is likewise trained at run time.

The way we use gradient descent to find the inverse values is visualized in figure 4.2. First, we initialize recipe vectors with random values at the dimensions that correspond to the chosen dye combination, 1 at the dimension that corresponds to the fabric, and 0 at the rest of the dimensions. Then we input the initialized vectors to the CPM and calculate the gradients of the losses, which are the MSE between the predicted results and the target fabric, to update the input recipe vectors instead of the weights. Each time the recipe vectors are updated, we only update the values at the dimensions of the chosen dye combination and the fabric, keeping the rest at zero. The process is repeated for a certain number of epochs or until the losses are small enough. After that, we take either the mean of the recipe vectors or the one that gives the minimum loss with respect to the CIEL^{∗}a^{∗}b^{∗}/reflectance spectra of the fabric. The former is denoted as "G.D. mean" and the latter as "G.D. min loss" in the results.

Figure 4.2: Visualization of finding an inverse value using gradient descent. C_j^i is the i-th recipe vector at epoch j, (L^{∗}, a^{∗}, b^{∗})_j^i is the i-th CIEL^{∗}a^{∗}b^{∗} value at epoch j, (L^{∗}, a^{∗}, b^{∗}) is the CIEL^{∗}a^{∗}b^{∗} value of the target fabric, and µ is the learning rate. When initializing and updating C_j^i, only the values of the dimensions involved are changed and the rest remain 0.
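A minimal sketch of this input-space gradient descent with TensorFlow, for a single vector (the thesis updates a batch of randomly initialized vectors; the learning rate and epoch count are illustrative assumptions):

```python
import numpy as np
import tensorflow as tf

def gd_inverse(cpm, mask, init_vec, target_lab, lr=0.01, epochs=500):
    """Find a recipe vector by running gradient descent on the CPM's *input*
    instead of its weights. `mask` is 1 at the dimensions of the chosen dye
    combination and 0 elsewhere, so the fabric one-hot and unused dyes stay
    frozen at their initial values."""
    vec = tf.Variable(init_vec, dtype=tf.float32)
    init = tf.constant(init_vec, dtype=tf.float32)
    target = tf.constant(target_lab, dtype=tf.float32)
    m = tf.constant(mask, dtype=tf.float32)
    loss = tf.constant(0.0)
    for _ in range(epochs):
        with tf.GradientTape() as tape:
            pred = cpm(tf.expand_dims(vec, 0))[0]
            loss = tf.reduce_mean(tf.square(pred - target))  # MSE to target
        grad = tape.gradient(loss, vec)
        vec.assign_sub(lr * grad * m)                        # update dyes only
        # Clip to the valid concentration range; restore frozen dimensions.
        vec.assign(tf.clip_by_value(vec, 0.0, 4.0) * m + init * (1.0 - m))
    return vec.numpy(), float(loss)
```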

**Chapter 5**

**Result and Discussion**

We implement our models with Keras and TensorFlow in Python. The color prediction model consists of 9 hidden layers: 128 neurons in each of the first seven layers, 32 in the eighth, and 16 in the ninth. The activation function on all hidden layers is leaky ReLU with a slope of 0.5. The soft-proofing model also has 9 hidden layers but with different widths: 512 neurons in each of the first two layers, 256 in the following two, 128 in the fifth to seventh, 32 in the eighth, and 16 in the ninth. The activation function in the soft-proofing model is also leaky ReLU with a slope of 0.5.

Table 5.1: Results using different methods of finding the inverse in two color spaces. The values are means over the folds of ten-fold validation.

| Color space | Inverse Method | MAE | Relative Error | Mean ∆E_{ab}^{∗} | ∆E_{ab}^{∗} < 5 | ∆E_{ab}^{∗} < 2.3 |
|---|---|---|---|---|---|---|
| CIELab | G.D. mean | 0.1343 | 10.80% | 1.5820 | 96.94% | 84.37% |
| CIELab | G.D. min loss | 0.1385 | 10.97% | 1.3969 | 97.80% | 87.20% |
| CIELab | Grid search | 0.1276 | 10.24% | 1.6105 | 97.13% | 82.59% |
| Reflectance Spectra | G.D. mean | 0.1531 | 12.53% | 2.6294 | 91.43% | 62.26% |
| Reflectance Spectra | G.D. min loss | 0.1068 | 8.45% | 2.2944 | 93.82% | 66.83% |
| Reflectance Spectra | Grid search | 0.1067 | 8.65% | 2.4979 | 92.46% | 61.35% |

We use 10-fold validation in the experiment, so a CPM is trained for each fold. There is only one soft-proofing model, with a ratio of training size to test size of 9 to 1. The models are light enough to be trained on a laptop with an Intel i5 CPU and an NVIDIA GTX 1650 GPU.
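The two architectures described above can be written down as follows (a sketch: the optimizer and any training hyperparameters beyond the MSE loss of section 4.2 are assumptions):

```python
import tensorflow as tf

def leaky(t):
    return tf.nn.leaky_relu(t, alpha=0.5)  # leaky ReLU with slope 0.5

def build_mlp(hidden_widths, input_dim=41, output_dim=3):
    """Fully connected regression model: recipe vector in, Lab (or spectra) out."""
    layers = [tf.keras.layers.Input(shape=(input_dim,))]
    layers += [tf.keras.layers.Dense(w, activation=leaky) for w in hidden_widths]
    layers += [tf.keras.layers.Dense(output_dim)]  # linear regression head
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="mse")    # MSE loss as in section 4.2
    return model

cpm = build_mlp([128] * 7 + [32, 16])                          # color prediction model
spm = build_mlp([512] * 2 + [256] * 2 + [128] * 3 + [32, 16])  # soft-proofing model
```

For the reflectance-spectra variant, only `output_dim` would change to the number of sampled wavelengths.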

We use the CIEL^{∗}a^{∗}b^{∗} values and reflectance spectra of fabrics to predict dye recipes. Grid search as well as gradient descent is used to find inverse values. The results are shown in table 5.1. The MAE and relative error of recipes show each method's performance at finding inverse values, while ∆E_{ab}^{∗} indicates how well the predicted recipes reproduce the colors of the fabrics. For the MAE and relative error with reflectance spectra as the color space, G.D. min loss and grid search have almost equal capability to find the inverse, and G.D. mean performs worse than the others. For the MAE and relative error with CIEL^{∗}a^{∗}b^{∗}, grid search still performs best, and G.D. min loss performs worse than the others but close to G.D. mean. Thus, according to the results, it may be fair to say that grid search has the best stability and capability at finding inverse values, and G.D. mean is the least stable.

As to ∆E_{ab}^{∗}, G.D. min loss shows the best performance no matter which color space is used: up to 87.20% of samples yield ∆E_{ab}^{∗} < 2.3. G.D. mean and grid search also yield good results, 84.37% and 82.59% respectively, with CIEL^{∗}a^{∗}b^{∗} as the color space.

MAE and relative error of recipes do not necessarily follow the same tendency as ∆E_{ab}^{∗}. For example, the MAE of G.D. min loss and grid search with reflectance spectra as the color space are lower than those with CIEL^{∗}a^{∗}b^{∗}, but the ∆E_{ab}^{∗} are higher. And although grid search is more capable of finding the inverse, G.D. min loss still yields CIEL^{∗}a^{∗}b^{∗} values that are closer to the fabrics. We discuss this further in section 5.4.

**5.1** **Prediction of Novel Recipes**

In this section, we discuss whether our approach can be applied to predict recipes whose dye combinations do not appear in the training set. We conduct two experiments to find out. In the first experiment, we exclude all samples whose recipe size equals 1 from the training set. In the second, we randomly pick 9 dye combinations, 8 of which have a size of 3 and the last a size of 2. We exclude all samples with those dye combinations from the training set and use them as the test sets in both experiments. The results are shown in table 5.2.

For the novel recipes of size 1, the performance of the predictions is bad, which means that the model cannot learn the characteristics of a single dye from mixtures alone. For those with sizes of 2 or 3, the performance is still acceptable: more than 56% of the test samples yield ∆E_{ab}^{∗} < 2.3 and 90% yield ∆E_{ab}^{∗} < 5. We infer that the model learns how to mix novel recipes from the experience of mixing other combinations in the training set.

In section 4.2, we mention that we put all the standard recipes in the training sets as anchor points; the results in this section help explain why. It might also be helpful to build training sets in which anchor points are created beforehand for the candidate dye combinations. This is a possible direction worth studying to further reduce the size of the training set and enhance the performance of the approach.

Table 5.2: Results of predicting novel recipes.

| Size of Novel Recipes | MAE | Relative Error | Mean ∆E_{ab}^{∗} | ∆E_{ab}^{∗} < 5 | ∆E_{ab}^{∗} < 2.3 |
|---|---|---|---|---|---|
| 1 | 0.4728 | 19.55% | 7.7298 | 49.79% | 20.63% |
| 2 or 3 | 0.1840 | 15.85% | 2.5953 | 90.39% | 56.26% |

Table 5.3: Results comparing two soft-proofing models on the regions outside and inside the convex hull formed by the training sets.

| Soft-Proofing Model | Outside/Inside Convex Hull | Mean ∆E_{ab}^{∗} | ∆E_{ab}^{∗} < 5 | ∆E_{ab}^{∗} < 2.3 |
|---|---|---|---|---|
| Xgboost | Outside | 4.1425 | 76.68% | 49.78% |
| Xgboost | Inside | 2.1685 | 91.55% | 71.77% |
| Xgboost | Overall | 3.6624 | 80.30% | 55.13% |
| DNN | Outside | 1.4816 | 97.47% | 86.40% |
| DNN | Inside | 1.1330 | 98.85% | 89.71% |
| DNN | Overall | 1.3968 | 97.80% | 87.20% |

**5.2** **Soft-Proofing Model**

In this section, we discuss the performance of two types of soft-proofing models, an xgboost regression model and a DNN, on extrapolation.

The details of regression trees and xgboost can be found in [7]. In short, xgboost with regression kernels can be viewed as a model that uses different linear regression models in different regions of the input space to predict a target value.

Since predicted recipe vectors are found by inversion, many of them lie outside the convex hull formed by all 7604 samples. For such inputs, the soft-proofing models are extrapolating, which is harder and greatly affects their accuracy. Of the predicted recipes, 5412 are outside the convex hull and 1739 are inside.
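Hull membership in 41 dimensions cannot practically be tested by constructing the hull explicitly; one workable check, shown here as an assumption about how such a test could be implemented rather than the thesis's actual procedure, is a linear feasibility program:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, x):
    """x lies in conv(points) iff some λ ≥ 0 with Σλ = 1 satisfies Σ λᵢ pᵢ = x.
    Solving this LP feasibility problem avoids building the hull itself."""
    points = np.asarray(points, float)
    n = len(points)
    A_eq = np.vstack([points.T, np.ones(n)])  # rows: Σ λᵢ pᵢ = x, then Σλ = 1
    b_eq = np.append(np.asarray(x, float), 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.status == 0                    # status 0 means a feasible λ exists
```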

We train two soft-proofing models. The soft-proofing model implemented with a DNN is trained in the same way as described in section 4.2, and we use all 7604 samples to train the xgboost model. Then we input the predicted recipes to both models and compare the results to the ground truth. The results are shown in table 5.3.

We can see in the table that for each type of model, the inputs outside the convex hull yield bigger errors than those inside, and that the ∆E_{ab}^{∗} yielded by xgboost regression is bigger than that yielded by the DNN. As mentioned earlier, xgboost can be viewed as a model that uses different regression models in different regions of the input space, so it is unsurprising that its error is much bigger for inputs outside the convex hull, since the behavior of dye mixing is nonlinear. For inputs inside the convex hull, xgboost still yields bigger errors than the DNN, showing that the DNN is better at transforming dye recipes into CIEL^{∗}a^{∗}b^{∗}.
The impact of extrapolation on soft-proofing models discussed in this section shows that implementations resilient to extrapolation make a big difference. Based on our findings, a DNN may be a better choice for implementing soft-proofing models than regression-based alternatives. The poor performance under extrapolation also implies that anchor points with different dye combinations may improve performance, since extrapolation can be greatly reduced if enough anchor points are chosen to form a bigger convex hull.

**5.3** **Physical Validation**

To validate that the approach works, we randomly picked 16 samples from the test set, predicted 16 recipes, and reproduced the colors physically. The results are shown in table 5.4 and figure 5.1. We denote the results of the physical experiments as "Real" in the table and report the ∆E_{ab}^{∗} between ground truth and soft-proofing (prediction), between ground truth and Real, and between soft-proofing and Real.

Although the color differences between the ground-truth colors and the reproduced ones are not as small as the differences predicted by our approach, the results are still promising: 8 of them yield ∆E_{ab}^{∗} < 2.3, 13 of them yield ∆E_{ab}^{∗} < 5, and the mean color difference is 3.2898.

Figure 5.1: Results of physical validation of 16 randomly picked recipes. In each set, the left color is the prediction of soft-proofing model, the center one is the ground truth and the right one is the color produced physically using the predicted recipes, denoted as “Real”.

Table 5.4: Results of physical validation. The color differences, ∆E_{ab}^{∗}, are calculated pairwise among the CIEL^{∗}a^{∗}b^{∗} values derived from the soft-proofing model, the ground truth, and the physically dyed cloths, denoted as "Real".

| Recipe | ∆E_{ab}^{∗}: Ground Truth vs Prediction | ∆E_{ab}^{∗}: Ground Truth vs Real | ∆E_{ab}^{∗}: Prediction vs Real |
|---|---|---|---|
| 1 | 3.2215 | 5.1120 | 2.5935 |
| 2 | 2.0067 | 2.0320 | 2.1013 |
| 3 | 0.7018 | 1.4934 | 1.8500 |
| 4 | 0.4960 | 3.7275 | 3.9344 |
| 5 | 1.9591 | 3.9153 | 2.1870 |
| 6 | 1.5011 | 8.0488 | 7.6269 |
| 7 | 0.9397 | 1.8122 | 2.6959 |
| 8 | 1.7136 | 1.4008 | 1.1579 |
| 9 | 1.5422 | 3.0418 | 2.2836 |
| 10 | 1.3205 | 2.3179 | 1.7505 |
| 11 | 0.9488 | 10.9681 | 11.5454 |
| 12 | 0.6185 | 0.5524 | 1.0937 |
| 13 | 1.0505 | 1.7749 | 2.5386 |
| 14 | 0.7425 | 1.6478 | 1.1910 |
| 15 | 1.0665 | 2.6682 | 1.8422 |
| 16 | 0.4021 | 2.1251 | 2.1365 |
| Mean | 1.2644 | 3.2899 | 3.0330 |

**5.4** **Prediction Error**

In this section, we discuss the errors observed in the experiments. One can easily notice that the MAE and relative error of recipes only represent how good the inverse value is; small MAE and relative error do not necessarily mean that the predicted color is close to the ground truth. One reason is that MAE does not take the total amount of dye used into account: if the sum of the concentrations of one recipe is 3 and that of another recipe with the same dye combination is 4, then for the same absolute error, the ∆E_{ab}^{∗} of the former recipe will be bigger.

To take this into account, we introduce the relative error. However, the same relative error may still lead to different color differences. For example, consider a recipe of a grey color consisting of two dyes, red and green, each with concentration 1. We

Figure 5.2: The relation between MAE and recipe size is shown in the left panel (a), and the relation between ∆E_{ab}^{∗} and recipe size in the right panel (b).

denote this recipe as R(red = 1, green = 1). The color difference between R(red = 1, green = 1) and R(red = 1.1, green = 0.9) will be bigger than the difference between R(red = 1, green = 1) and R(red = 1.1, green = 1.1), although the relative errors of both pairs are the same. Moreover, the color behavior of each dye and each recipe is different: some dyes remain light even at large concentrations, and vice versa for others. For some recipes, small differences in the recipe may yield big color differences, while for others, large differences in the recipe may still yield small color differences.

We group the MAE and ∆E_{ab}^{∗} by recipe size and show them as box plots in figures 5.2a and 5.2b. In figure 5.2a, we can see that larger recipe sizes yield larger MAE, which we attribute to the simple fact that the space in which inverse values are searched grows with the size of a recipe. However, we see no tendency in figure 5.2b; possible sources of these errors are the recipe behaviors addressed in the last paragraph as well as the imbalance of the data.

Last, we try to find relations between the errors and L^{∗}, a^{∗}, b^{∗}, C^{∗}, and h. The relations are shown in figures 5.3 through 5.7. We can only see a slight positive relation between ∆E_{ab}^{∗} and L^{∗}, and a slight negative relation between MAE and L^{∗}. The former is consistent with the fact that adding or removing dye makes less color difference when the color is dark than when it is light. For example, if one drips pigment into a glass of water, the color changes drastically for the first few drips; but if one drips pigment into a glass of black ink, the change is very small. For the latter, we could not find a good interpretation of the data we have.

Figure 5.3: Relations between errors and L^{∗}: (a) L^{∗} vs ∆E_{ab}^{∗}, (b) L^{∗} vs MAE.

Figure 5.4: Relations between errors and a^{∗}: (a) a^{∗} vs ∆E_{ab}^{∗}, (b) a^{∗} vs MAE.

Figure 5.5: Relations between errors and b^{∗}: (a) b^{∗} vs ∆E_{ab}^{∗}, (b) b^{∗} vs MAE.

Figure 5.6: Relations between errors and C^{∗}: (a) C^{∗} vs ∆E_{ab}^{∗}, (b) C^{∗} vs MAE.

Figure 5.7: Relations between errors and h: (a) h vs ∆E_{ab}^{∗}, (b) h vs MAE.

**Chapter 6**

**Conclusion and Future Work**

In this thesis, we proposed a straightforward data-driven approach that can be easily adopted. The method can predict the concentrations of a chosen dye combination and reproduce the color of the target fabric.

The approach leverages a color prediction model implemented with deep neural networks. We find the inverse value of the model, which is the recipe, from the CIEL^{∗}a^{∗}b^{∗} values or reflectance of a given fabric, and we use a soft-proofing model to predict the color, that is, the CIEL^{∗}a^{∗}b^{∗} values or reflectance, of the predicted recipe.

We use two different ways to find the inverse values of the color prediction models: grid search and gradient descent. Grid search performs better at finding the inverse value, but both perform almost equally well in terms of ∆E_{ab}^{∗}.

Our approach is flexible in the size of the recipes, which ranges from 1 to 4 here and can easily be scaled up. The approach also avoids the metamerism problems that arise in some other methods. Moreover, it still performs acceptably on novel recipes, which have not been discussed in other studies.

Under the structure of our approach, possible improvements lie mainly in feature engineering. Currently, fabrics are represented as one-hot vectors in the recipe vectors. One can study the physical properties of fabrics