Modeling and optimization of a plastic thermoforming process


Journal of Reinforced Plastics and Composites
http://jrp.sagepub.com/

Modeling and Optimization of a Plastic Thermoforming Process
Chyan Yang and Shiu-Wan Hung
Journal of Reinforced Plastics and Composites, 2004, 23: 109
DOI: 10.1177/0731684404029324

The online version of this article can be found at: http://jrp.sagepub.com/content/23/1/109
Published by SAGE Publications (http://www.sagepublications.com)
Version of Record - Jan 1, 2004

Modeling and Optimization of a Plastic Thermoforming Process

CHYAN YANG1 AND SHIU-WAN HUNG1,2,*

1 Institute of Business and Management, National Chiao Tung University
2 Department of International Business, Ming-Chuan University, 250 Chung Shan N. Road, Section 5, Taipei 111, Taiwan

ABSTRACT: Thermoforming of plastic sheets has become an important process in industry because of its low cost and good formability. However, some unsolved problems confound the overall success of this technique; nonuniform thickness distribution caused by inappropriate processing conditions is one of them. In this study, experimental results were used to develop a process model for the thermoforming process via a supervised-learning back propagation neural network. An "inverse" neural network model was proposed to predict the optimum processing conditions. The network inputs included the thickness distribution at different positions of the molded parts; the output of the processing parameters was obtained by neural computing. Good agreement was reached between the results computed by the neural network and the experimental data. Optimum processing parameters can thus be obtained using the proposed neural network scheme. This provides significant advantages in terms of improved product quality.

KEY WORDS: inverse back propagation neural network, thermoforming, modeling and optimization, processing parameter.

INTRODUCTION

A significant amount of research continues in the area of the thermoforming process [1] in industry due to its low cost and good formability. It is most widely used in the packaging industries. Other applications include making large parts such as refrigerator door liners, bathtubs, signs, and automotive interior trim. A thick sheet is clamped in a frame and heated to a temperature well above its glass transition temperature, so that it becomes rubbery and soft. It is then placed over a mold and stretched to take the contours of the mold, either by a plug assist or by a differential pressure (Figure 1(a) and (b)). Thermoforming has advantages over its better-known competitor processes, such as injection molding and compression molding, because it uses simpler molds and a much lower forming pressure. Thermoforming is the process of choice where short production runs cannot justify the expense of the more expensive injection tooling, or where short lead times from design to production are critical. Larger parts like bathtubs or refrigerator door liners are only economically feasible by thermoforming.

*Author to whom correspondence should be addressed. E-mail: shiuwan@hotmail.com

Journal of Reinforced Plastics and Composites, Vol. 23, No. 1/2004


Figure 1. (a) Schematic of the thermoforming process; (b) axisymmetric geometry of the mold and the assist plug.


Although the thermoforming process has been developed for over two decades, there are still some unsolved problems that confound the overall success of this technology. Nonuniform thickness distribution, caused by inappropriate processing conditions and trade-off effects, is one of them. During forming the sheet thins, which makes it necessary to optimize the process before molding a part. Conventionally, molders optimize the thickness of thermoformed parts by a time-consuming trial-and-error process. Research [2–6] has examined the thermoformability of various thermoplastic materials, but only limited effort has gone into optimizing the thermoforming of polymeric sheets.

Previous research has shown that the quality of thermoformed parts depends on many processing variables, such as the temperature of the heating pipes, vacuum pressure, plug material, plug moving speed, and plug displacement. Thermoforming is therefore a highly complex, multivariable, and nonlinear process, which makes it difficult to model theoretically.

In this work, an inverse back propagation neural network [7–9] was proposed to model the thermoforming process of polyethylene terephthalate (PET) materials and to predict the optimum processing parameters. The network inputs included the thickness distribution at different positions of the molded parts; the output of the processing parameters was obtained by neural computing. The network training was based on 47 sets of training samples, and the trained network was tested with 10 sets of test samples, which were different from the training data. The final goal of this study is to optimize the thermoforming process of PET sheets using the neural network method. This provides significant advantages in terms of improved product quality.

NEURAL NETWORK METHODOLOGY

A neural network is a computer system that mimics the structure of the human brain and imitates intelligent behavior [10]. It consists of many simple and highly connected neurons (processing elements, or nodes) and processes information by its dynamic-state response to external inputs. It can deal with highly dimensional and nonlinear systems. The parallel distributed processing of neural networks promises high computation rates through massive parallelism, a greater degree of robustness or fault tolerance, and the ability to adapt and continue to improve performance. Learning is based on samples, so neural networks are especially suitable for complicated processes with nontransparent mechanisms. Therefore, neural networks, one of the most active branches of artificial intelligence in recent years, have been widely used in the process industries, including fault diagnosis and pattern recognition, process control and optimization, system modeling, and on-line measurement and prediction.

The architecture of a neural network depends on three key factors: network topology, node characteristics (activation functions), and the learning algorithm (learning rule). There are different types of neural networks and, among them, back propagation neural networks (BPNNs) are the most popular and widely used in various fields. In this work, an "inverse" BPNN was proposed and used to model the thermoforming process of PET materials and predict the optimum processing conditions [11].

A back propagation neural network (BPNN), as shown in Figure 2, is composed of one input layer, one (or more) hidden layer(s), and one output layer. There is no theoretical limit on the number of hidden layers, but typically there are one or two. Each hidden layer has an adjustable number of nodes. The number of nodes in the input and output layers depends heavily on the properties of the problem being studied. The weights (W) on the connections are adjustable and their initial values are generally obtained from a randomizing routine.

Figure 2. Diagram of a neuron.
Figure 3. A typical back propagation network.
Figure 4. Sigmoid function.

The inputs enter the first layer, whose outputs are exactly the same as its inputs. The weighted sums (see Figure 3) of the first-layer outputs become the inputs to the second layer (the first hidden layer), where they are passed through a transfer (activation) function, generally a sigmoid function (see Figure 4), to obtain the neuron outputs. The weighted sum of these outputs forms the inputs to the next layer (or the output layer), and the forward calculation proceeds in the same way until the outputs of the neural network are finally reached. This is the so-called feed-forward calculation of BPNNs and can be expressed by the following equations [12]:

For the input layer,

\[ o_i^k = x_i^k \tag{1} \]

where \(x_i^k\) is the input of the \(i\)th node in the input layer for sample \(k\) and \(o_i^k\) is its output.

For the hidden layer,

\[ I_i^k = \sum_{m=1}^{n} w_{im} o_m^k + w_{i0} \tag{2} \]

\[ o_i^k = \phi(I_i^k) \tag{3} \]

where \(\phi\) is the sigmoidal function,

\[ \phi(a) = \frac{1}{1 + e^{-a}} \tag{4} \]

\(I_i^k\) is the input of node \(i\) for sample \(k\); \(w_{im}\) the connection weight from the previous-layer node \(m\) to node \(i\); \(o_m^k\) and \(o_i^k\) the outputs of the previous-layer node \(m\) and the current-layer node \(i\), respectively; \(w_{i0}\) the threshold value of node \(i\); and \(n\) the node number of the previous layer.
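As a hedged illustration, the feed-forward pass of Equations (1)–(4) can be sketched in NumPy. This is a minimal sketch assuming a fully connected sigmoid network; the layer sizes follow the 6-13-5 structure used later in the paper, but the weights here are random placeholders, not trained values:

```python
import numpy as np

def sigmoid(a):
    # Equation (4): phi(a) = 1 / (1 + e^(-a))
    return 1.0 / (1.0 + np.exp(-a))

def feed_forward(x, weights, biases):
    """Feed-forward calculation of a BPNN, Equations (1)-(3).

    x       : input vector (the input layer's output equals its input, Eq. (1))
    weights : list of weight matrices, one per non-input layer
    biases  : list of threshold vectors w_i0, one per non-input layer
    """
    o = x  # Eq. (1): o_i^k = x_i^k
    for W, b in zip(weights, biases):
        I = W @ o + b   # Eq. (2): weighted sum of previous-layer outputs plus threshold
        o = sigmoid(I)  # Eq. (3): o_i^k = phi(I_i^k)
    return o

# Toy 6-13-5 network with random initial weights (a randomizing routine, as in the text)
rng = np.random.default_rng(0)
weights = [rng.normal(size=(13, 6)), rng.normal(size=(5, 13))]
biases = [np.zeros(13), np.zeros(5)]
y = feed_forward(np.ones(6), weights, biases)
print(y.shape)  # (5,)
```

Because the transfer function is the sigmoid of Equation (4), every network output lies strictly between 0 and 1.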

During the training (or learning) sequence, the final outputs of the neural network are calculated feedforward as described above. They are compared with the actual outputs of the training samples from measurement to yield an error profile. The error profile is propagated back through the network by a learning rule to update the weights on the connections. The weights are adjusted in such a way as to minimize the mean square error (MSE):

\[ \min \{\mathrm{MSE}\} = \frac{1}{2} \sum_{i=1}^{P} E_i \tag{5} \]

where \(P\) is the number of training samples and \(E_i\) is the sum of the square error of training sample \(i\):

\[ E_i = \sum_{j=1}^{N_{out}} (t_{ij} - y_{ij})^2 \tag{6} \]

where \(N_{out}\) denotes the number of nodes in the output layer, \(t_{ij}\) is the prediction value of the \(j\)th output of sample \(i\), and \(y_{ij}\) is the actual value of the \(j\)th output of sample \(i\). The weight change on iteration \(q\), \(\Delta W_q\), was calculated according to

\[ \Delta W_q = \eta M_q + \alpha \Delta W_{q-1} \tag{7} \]

where \(M_q\) is the overall gradient; \(\alpha\) the momentum factor aiding in convergence; and \(\eta\) the step size [14]. The search interval was determined by a scanning and bracketing procedure. However, as the iteration got closer to the optimum, a conjugate gradient was introduced to improve the convergence rate. The iteration was based on

\[ \Delta W_q = \eta M'_q + \alpha \Delta W_{q-1} \tag{8} \]

\[ M'_q = M_q + \beta M'_{q-1} \tag{9} \]

\[ \beta = \frac{\| M_q \|^2}{\| M_{q-1} \|^2} \tag{10} \]

where \(M'\) is the overall conjugate gradient. If the MSE fell below a preset positive tolerance, the training procedure stopped [13,14].
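A minimal sketch of the update rules in Equations (7)–(10) follows, with η standing for the step size and α for the momentum factor. Note one simplification: the paper determines the step by a scanning-and-bracketing line search, while here η is simply fixed, and the gradient computation is stubbed with a toy quadratic:

```python
import numpy as np

def update(w, grad, prev_grad, prev_conj, prev_delta, eta, alpha):
    """One weight update following Equations (7)-(10).

    grad is M_q, the overall descent direction (the negative error gradient);
    on the first iteration no conjugate history exists, so Eq. (7) applies.
    """
    if prev_grad is None:
        conj = grad                                               # Eq. (7): plain gradient step
    else:
        beta = np.dot(grad, grad) / np.dot(prev_grad, prev_grad)  # Eq. (10)
        conj = grad + beta * prev_conj                            # Eq. (9)
    delta = eta * conj + alpha * prev_delta                       # Eq. (8)
    return w + delta, grad, conj, delta

# Two iterations on a toy quadratic E(w) = w^2 / 2, whose descent direction is -w
w = np.array([3.0])
prev_grad = prev_conj = None
delta = np.zeros_like(w)
for _ in range(2):
    w, prev_grad, prev_conj, delta = update(w, -w, prev_grad, prev_conj, delta,
                                            eta=0.1, alpha=0.5)
print(w)  # [2.037]
```

The second step illustrates the conjugate correction: β = (2.7/3.0)² = 0.81 amplifies the previous direction, so the combined step (−0.663) is larger than a plain gradient step would be.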

To build up a practical neural network, the first task is to study the process, analyze the cause–effect relationships between the various variables, and determine the inputs and outputs; the next is to obtain sufficient training samples for the training sequence. Sample acquisition is a time-consuming process, and it is very important because neural networks learn and obtain their problem-solving ability from the training samples.

In this study, an "inverse" neural network was proposed. Most neural networks use the processing parameters as the inputs and the molded product qualities as the computed outputs. We allocated the inputs and outputs inversely in our model, i.e., we assigned the product qualities as the inputs and the desired processing parameters as the outputs. The input variables in this research are the measured thickness profile at six different positions on the molded parts (Figure 1(b)). The output variables consisted of five processing parameters: temperature of the heating pipe, vacuum pressure, plug moving speed, plug displacement, and the plug material's thermal conductivity. Thirteen nodes were selected for the hidden layer, corresponding to twice the number of input variables plus one [7]. That is, the 6-13-5 network (six nodes in the input layer, thirteen nodes in the hidden layer, five nodes in the output layer) was chosen as the final structure, using a sigmoid function as its transfer function (Figure 4). By adopting the inverse neural network, we are able to determine the optimum processing-parameter sets for a desired thickness profile of the parts.
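The inverse allocation can be stated compactly: where a conventional model fits f(processing parameters) → thickness, this model fits g(thickness) → processing parameters, so the design matrix and targets are simply swapped. A schematic sketch follows; the function name is illustrative and the array contents are placeholders, not the paper's measurements:

```python
import numpy as np

# Conventional ("forward") allocation:  X = process parameters, y = thickness
# Inverse allocation used here:         X = thickness profile,  y = parameters

def make_inverse_dataset(parameters, thicknesses):
    """Swap the roles of inputs and outputs for the inverse model.

    parameters  : (n_runs, 5) array - heating temperature, vacuum pressure,
                  plug speed, plug displacement, plug material code
    thicknesses : (n_runs, 6) array - thickness at six measurement points
    """
    X = np.asarray(thicknesses)  # network inputs: product quality (6 input nodes)
    y = np.asarray(parameters)   # network outputs: processing parameters (5 output nodes)
    return X, y

# Placeholder arrays shaped like the 57 experimental runs
params = np.zeros((57, 5))
thick = np.zeros((57, 6))
X, y = make_inverse_dataset(params, thick)
print(X.shape, y.shape)  # (57, 6) (57, 5)
```

The resulting shapes match the 6-13-5 structure: six input nodes for the thickness profile and five output nodes for the processing parameters.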


The whole training procedure adjusted all of the weights on the connections of the network according to the learning rule. The iteration continued until the computed outputs reached the required precision of agreement with the actual outputs [15].

EXPERIMENTAL PROCEDURE

The plastic sheet used was polyethylene terephthalate (PET) with an initial thickness of 0.5 mm. Thermoforming experiments were conducted on a lab-scale thermoforming machine composed of an electrical heating system, a vacuum pump, a pneumatically actuated assist plug, and a mold. The heating system had twenty electric pipe heaters, ten above the sheet and ten below. The top pipe heaters were oriented perpendicularly to the bottom ones for more uniform heating, and the power level of each heater could be controlled independently. The assist plug's velocity was controlled by an adjustable pneumatic pressure. Three materials were used for the plugs: wood, phenol formaldehyde, and wood covered with a woven blanket. An axisymmetric mold and a truncated-cone assist plug were used in this study (Figure 1(b)). After molding, the thickness profile at six different positions of the molded parts was measured with a micrometer.

As mentioned in earlier sections, neural networks learn on the basis of actual samples, so it is necessary to obtain sufficient training samples to build effective networks. The network training was based on 47 sets of training samples and the trained network was tested with 10 sets of test samples, which were different from the training data. Three specimens were completed for each test trial. Table 1 shows the input data (thickness distribution of molded parts) used in this study, while Table 2 lists the output data (processing parameters) used for BPNN training and testing.

Table 1. Part thickness distribution measured in the main experiment.

Run  Point 1 (mm)  Point 2 (mm)  Point 3 (mm)  Point 4 (mm)  Point 5 (mm)  Point 6 (mm)
1    0.1325        0.065         0.0625        0.13          0.295         0.3575
2    0.1255        0.0775        0.06          0.1075        0.3975        0.3825
3    0.13          0.0825        0.1           0.125         0.2725        0.46
4    0.135         0.0625        0.065         0.0975        0.4375        0.374
5    0.1575        0.083         0.095         0.212         0.2025        0.493
6    0.1345        0.0375        0.0615        0.071         0.3           0.4
7    0.1335        0.0635        0.0665        0.1025        0.2615        0.359
8    0.1685        0.1545        0.077         0.139         0.578         0.351
9    0.2115        0.101         0.076         0.145         0.6775        0.3675
10   0.2725        0.2765        0.2505        0.3695        0.325         0.33
11   0.1815        0.186         0.2605        0.2225        0.495         0.3415
12   0.1965        0.3125        0.2685        0.271         0.48          0.345
13   0.2115        0.2535        0.2555        0.385         0.5925        0.3385
14   0.128         0.0635        0.072         0.154         0.29          0.3625
15   0.135         0.073         0.065         0.14          0.2705        0.3695
16   0.1275        0.0875        0.0575        0.0825        0.275         0.445
17   0.1225        0.06          0.0525        0.06          0.2375        0.3625
18   0.1275        0.0775        0.057         0.0925        0.335         0.3625
(continued)


RESULTS AND DISCUSSION

Cross-validation

A neural network with fewer hidden nodes will be unable to generate a complicated function; however, too many hidden nodes can cause oscillation of the fitted curve. In fact, the BPNN learning algorithm itself does not guarantee a meaningful generalization for the available data, so a validation phase is inevitable in neural network modeling. Cross-validation [15] is a widely acknowledged testing method in neural network learning. Using this method, a training data set is randomly selected from the available sample. After training the neural network, the remaining observations are used to test the generated function. Usually, the sample is equally divided into two subsets, each of which serves in turn as the training data set and the test data set in two trials.

Table 1. Continued.

Run  Point 1 (mm)  Point 2 (mm)  Point 3 (mm)  Point 4 (mm)  Point 5 (mm)  Point 6 (mm)
19   0.141         0.0725        0.0775        0.1725        0.3545        0.4175
20   0.1355        0.085         0.1075        0.185         0.3105        0.3945
21   0.1325        0.095         0.059         0.124         0.2775        0.3785
22   0.1325        0.077         0.0635        0.0825        0.3785        0.435
23   0.1525        0.0815        0.0795        0.1155        0.257         0.3725
24   0.1615        0.134         0.0745        0.1255        0.5465        0.3565
25   0.1655        0.1035        0.06          0.1635        0.4425        0.573
26   0.1475        0.0945        0.073         0.1465        0.316         0.377
27   0.1595        0.126         0.1135        0.201         0.4595        0.5675
28   0.1505        0.1165        0.1055        0.171         0.404         0.3585
29   0.125         0.0685        0.056         0.056         0.248         0.51
30   0.129         0.0745        0.0575        0.1285        0.272         0.369
31   0.147         0.101         0.0905        0.1225        0.458         0.3905
32   0.16          0.0625        0.064         0.0695        0.3375        0.377
33   0.13          0.0675        0.0635        0.08          0.201         0.3325
34   0.1385        0.065         0.0635        0.1225        0.207         0.3475
35   0.1465        0.086         0.0795        0.144         0.3           0.4015
36   0.1315        0.0935        0.875         0.145         0.251         0.372
37   0.138         0.0765        0.064         0.0775        0.2285        0.3415
38   0.1325        0.0685        0.073         0.0735        0.353         0.3495
39   0.2395        0.39          0.384         0.1025        0.1325        0.3125
40   0.319         0.212         0.22          0.09          0.105         0.314
41   0.3725        0.451         0.406         0.104         0.1085        0.319
42   0.243         0.2335        0.1915        0.0915        0.1355        0.3255
43   0.306         0.47          0.2775        0.13          0.134         0.315
44   0.316         0.377         0.357         0.1825        0.155         0.348
45   0.133         0.07          0.064         0.079         0.182         0.488
46   0.1405        0.078         0.061         0.09          0.19          0.367
47   0.239         0.273         0.427         0.204         0.124         0.399
48   0.1705        0.2225        0.365         0.424         0.4755        0.358
49   0.1595        0.1345        0.3345        0.245         0.4685        0.35
50   0.1425        0.0835        0.09          0.205         0.3165        0.3545
51   0.146         0.174         0.3575        0.333         0.64          0.355
52   0.138         0.0925        0.089         0.1505        0.2575        0.372
53   0.135         0.0815        0.069         0.126         0.275         0.364
54   0.1365        0.104         0.0745        0.145         0.2885        0.3575
55   0.215         0.2985        0.22          0.1115        0.17          0.329
56   0.163         0.1625        0.155         0.1055        0.1135        0.334
57   0.2255        0.2225        0.281         0.2135        0.114         0.3305

Table 2. Processing parameters used in the experiments.

Run  Heating Temperature (°C)  Vacuum Pressure (bar)  Plug Speed (cm/s)  Plug Displacement (cm)  Plug Material*
1    240                       1                      26.62              8                       2
2    230                       1                      26.62              8                       2
3    220                       1                      26.62              8                       2
4    210                       1                      26.62              8                       2
5    200                       1                      26.62              8                       2
6    230                       1                      26.62              8.3                     2
7    230                       1                      26.62              8.6                     2
8    240                       1                      26.62              9                       2
9    240                       1                      26.62              9.5                     2
10   240                       1                      26.62              9.8                     2
11   240                       1.5                    26.62              9                       2
12   240                       2                      26.62              9                       2
13   240                       2.5                    26.62              9                       2
14   240                       2.5                    26.62              7.5                     2
15   240                       2.5                    26.62              7                       2
16   240                       1                      26.62              8                       1
17   230                       1                      26.62              8                       1
18   220                       1                      26.62              8                       1
19   210                       1                      26.62              8                       1
20   200                       1                      26.62              8                       1
21   230                       1                      26.62              8.3                     1
22   230                       1                      26.62              8.6                     1
23   240                       1                      26.62              9                       1
24   240                       1                      26.62              9.5                     1
25   240                       1                      26.62              9.8                     1
26   240                       1.5                    26.62              9                       1
27   240                       2                      26.62              9                       1
28   240                       2.5                    26.62              9                       1
29   240                       2.5                    26.62              7.5                     1
30   240                       2.5                    26.62              7                       1
31   200                       1                      26.62              9                       1
32   240                       1                      26.62              8                       3
33   230                       1                      26.62              8                       3
34   220                       1                      26.62              8                       3
35   210                       1                      26.62              8                       3
36   200                       1                      26.62              8                       3
37   230                       1                      26.62              8.3                     3
38   230                       1                      26.62              8.6                     3
39   240                       1                      26.62              9                       3
40   240                       1                      26.62              9.5                     3
41   240                       1                      26.62              9.8                     3
42   240                       1.5                    26.62              9                       3
43   240                       2                      26.62              9                       3
44   240                       2.5                    26.62              9                       3
45   240                       2.5                    26.62              7.5                     3
46   240                       2.5                    26.62              7                       3
47   240                       2.5                    22.66              9                       3
48   240                       2.5                    22.66              9                       2
49   240                       2.5                    18.16              9                       2
50   240                       2.5                    30.74              9                       2
(continued)

In this study, forty-seven sets of data (runs 1–47 in Tables 1 and 2) were used as training samples and the trained neural network was tested with 10 sets of data (runs 48–57 in Tables 1 and 2), which were different from the training samples. The network inputs included the thickness distribution at different positions of the molded parts (Figure 1(b)). The output of the processing parameters was obtained by neural computing. To facilitate the training process, numbers were assigned to the different plug materials: 1 for wood covered with a woven blanket, 2 for the wood plug, and 3 for the phenol formaldehyde plug. The whole training procedure adjusted all of the weights on the connections of the network according to the learning rule. The iteration continued until the computed outputs reached the required precision of agreement with the actual outputs. The test results in Figures 5–8 show good agreement with the actual measurements, except for the plug material in Figure 9.

Table 2. Continued.

Run  Heating Temperature (°C)  Vacuum Pressure (bar)  Plug Speed (cm/s)  Plug Displacement (cm)  Plug Material*
51   240                       2.5                    34.25              9                       2
52   240                       2.5                    18.16              9                       1
53   240                       2.5                    30.74              9                       1
54   240                       2.5                    34.25              9                       1
55   240                       2.5                    18.16              9                       3
56   240                       2.5                    30.74              9                       3
57   240                       2.5                    34.25              9                       3

*Numbers were assigned to different plug materials: 1 for woven blanket plug, 2 for wood plug, and 3 for phenol formaldehyde plug.
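The bookkeeping described in the cross-validation discussion (categorical plug materials mapped to numeric codes, and a fixed split of the 57 experimental runs into 47 training and 10 test sets) can be sketched as follows; the function names are illustrative, not from the paper:

```python
# Map categorical plug materials to the numeric codes used in the paper:
# 1 = woven blanket, 2 = wood, 3 = phenol formaldehyde
PLUG_CODES = {"woven blanket": 1, "wood": 2, "phenol formaldehyde": 3}

def encode_plug(material):
    """Return the numeric code assigned to a plug material."""
    return PLUG_CODES[material]

def split_runs(runs, n_train=47):
    """Fixed split: the first n_train runs (1-47) for training, the rest (48-57) for testing."""
    return runs[:n_train], runs[n_train:]

runs = list(range(1, 58))      # 57 experimental runs
train, test = split_runs(runs)
print(len(train), len(test))   # 47 10
print(encode_plug("wood"))     # 2
```

Encoding the plug material as an ordinal number lets the categorical variable share the same output node type as the continuous processing parameters, at the cost of imposing an arbitrary ordering on the materials.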

Thermoforming a polymeric sheet with an assisting plug is the most widely used technique at present; the majority of deep-drawn parts are thermoformed by this method. The assisting plug helps stretch and push the heated thermoplastic sheet into the mold half, so as to mold parts with a more uniform thickness distribution. The temperature of the plug can cause variations in the forming. With the normal escape of heat into the equipment, the plug will usually be at a lower temperature than the preheated plastic sheet, and it will absorb heat from the sheet when contact is made. This will decrease the prestretching of the sheet. To minimize the plug's chilling effect, the plug is usually made of insulating materials, such as wood or other plastics. In this study, three materials were used for the assisting plug: wood covered with a woven blanket, wood, and phenol formaldehyde. The computed neural network results predicted that a wood plug could form parts with the required thickness distribution, while the plug used in the experiments was a phenol formaldehyde one. This may be because wood has a lower thermal conductivity than phenol formaldehyde [16] and keeps the preheated sheet warm for a longer time, making it easier for the plug to prestretch the sheet and obtain the desired part thickness distribution.

Figure 6. Comparison of predicted vacuum pressure to actual measurement.

Figure 9. Comparison of predicted plug materials to actual measurement (1: woven blanket, 2: wood, 3: phenol formaldehyde).

It should be noted here that if high precision is required of the neural network results, more accurate training samples (or measurements) should be provided.

Process Optimization

Using the "inverse" neural network proposed in this study, we were able to predict the optimum set of processing parameters for an inputted part thickness distribution. The inputs of the neural network are not limited to the part thickness distribution; they can be other specifications according to different production processes and requirements, such as mechanical properties, molecular orientation, and manufacturing costs. The outputs of the network can also include other processing parameters, such as heating time, temperature of the assisting plug, and geometry of the plug. The most valuable result of this research is not only the development of a practical neural network for the thermoforming process of plastic sheets, but also a technique that has proved suitable for modeling and predicting the thermoforming process. It is valuable for the optimum control of the process and of practical significance to advanced thermoforming processes. Further research should consider the effects of different thermoplastic materials and of the assisting plug's geometry and temperature, because they may significantly influence the performance of thermoforming.

CONCLUSIONS

This study has proposed an inverse neural network model to predict the optimum processing conditions. The network inputs included the thickness distribution at different positions of the molded parts; the output of the processing parameters was obtained by neural computing. Good agreement was reached between the results computed by the neural network and the experimental data. Using the neural network method, one is able to optimize the thermoforming process of PET sheets. This provides significant advantages in terms of improved product quality.

REFERENCES

1. Throne, J.L. (1986). Thermoforming, Hanser Publisher, New York.
2. Briken, F. and Potente, H. (1980). Polym. Eng. Sci., 20: 1009.
3. Malpass, V.E., Kempthorn, J.T. and Dean, A.F. (Jan 27 1989). Plast. Eng.
4. Muzzy, J.D., Wu, X. and Colton, J.S. (1990). Polym. Comp., 11: 280.
5. Machida, T. and Lee, D. (1988). Polym. Eng. Sci., 28: 405.
6. Liu, S.J. (1999). Int. Polym. Process., 14: 98.
7. Tsoukalas, L.H. and Uhrig, R.E. (1997). Neural Network and Its Applications in Engineering, Wiley, New York.
8. Simpson, P.K. (1990). Artificial Neural Systems, Pergamon Press, New York.
9. Fausett, L. (1994). Fundamentals of Neural Network, Prentice Hall, New York.
10. Sun, Q., Zhang, D., Chen, B. and Wadsworth, L.C. (1998). J. Appl. Polym. Sci., 62: 1605.
11. Lee, S.C. and Youn, J.R. (1999). J. Reinf. Plast. Comp., 18: 186.
12. El-Bouri, A., Balakrishnan, S. and Popplewell, N. (2000). Europ. J. Oper. Res., 126: 474.
13. Kattan, M.W. and Cooper, R.B. (2000). Omega, 28: 510.
14. Spoerre, J.K. and Kendall, K.N. (1998). Comput. Ind. Eng., 35: 45.
15. Keshavarai, R., Tock, R.W. and Nusholtz, G.S. (1995). J. Appl. Polym. Sci., 57: 1127.
16. Holman, J.P. (1990). Heat Transfer, McGraw Hill, New York.
