
Searching Effective Points by Surrogate Models and Time Series Predictions

by Chia-Lung Hsu

Advisors: Weichung Wang and Ray-Bing Chen

Institute of Statistics, National University of Kaohsiung, Kaohsiung, Taiwan 811, R.O.C.

July 2005

Contents

1 Introduction
2 The New Algorithm
   2.1 The Basis-Based Response Surface Method
   2.2 New Algorithm
3 Computing Experiments
   3.1 The Lyapunov Exponents
   3.2 The simulation cases
4 Experimental Results
   4.1 Simulation results
   4.2 The results for the two experiments of L.E.
   4.3 Comparison
5 Conclusion
A Appendix: The figures of evolution processes for Lyapunov exponents
References


Searching Effective Points by Surrogate Models and Time Series Predictions

Advisors: Dr. Weichung Wang (Department of Applied Mathematics, National University of Kaohsiung) and Dr. Ray-Bing Chen (Institute of Statistics, National University of Kaohsiung)

Student: Chia-Lung Hsu (Institute of Statistics, National University of Kaohsiung)

Abstract

We develop an algorithm to find so-called effective points x ∈ R^n such that the corresponding responses f(x) ∈ R belong to a specific region of interest. Examples of a region of interest include extreme values, bounded intervals, positivity, and others. Here the responses are obtained iteratively with respect to the evolution variable t, and these evolution processes are fitted by auto-regressive (AR) processes. To find the effective points, the true yet unknown response surface is approximated by a surrogate model. Possible effective points are then selected from two surrogate surfaces, constructed from the predictions of the AR processes and from the current values of the evolution processes, respectively. Convergence criteria for the evolution processes are also used to improve the efficiency of our novel algorithm. Several simulations and two real examples of finding positive Lyapunov exponents of a dynamical system are demonstrated. Computational results show that the novel algorithm is efficient and practical.

Key words and phrases: basis-based response surface method, auto-regression, uniform design, optimization, region of interest, image representation.

1. Introduction

We consider the problem of finding effective points, x ∈ X ⊂ R^n, such that the response values, f(x) ∈ R, belong to a region of interest (ROI), where X is the experimental region. Many applications can be formulated as problems of this kind. For example, if the ROI represents extreme values, our problem is equivalent to an optimization problem. Here we assume: (1) the response surface is complex and difficult to model by simple functions; (2) computing the response values is expensive; and (3) the response functions are unknown and complicated. Therefore, a surrogate approach is used here. That is, instead of searching on the complex true surface directly, a simple and cheap surrogate surface is used to approximate the true yet complicated response surface, and the effective points are then found from this surrogate surface.

For optimization problems alone, response surface methodology (RSM), proposed by Box and Wilson (1951), is a famous and useful statistical tool based on the surrogate approach. The surrogate models in RSM are low-order polynomial models, and central composite designs are chosen for sampling experimental points to fit these models. Another surrogate approach for optimization problems is Design and Analysis of Computer Experiments (DACE; Sacks et al., 1989). In DACE, space-filling designs are employed to sample experiment points, and the surrogate approximation is accomplished by the kriging method.

The basis-based response surface method (BRSM), proposed by Wang and Chen (2004) and Chen et al. (2006), uses the surrogate approach for finding effective points. The BRSM first discretizes the experimental region into a grid. By doing so, the continuous response surface is transformed into a finite number of pixels, so the response surface can be treated as an image, and existing techniques in image representation can be applied to construct the surrogate surface.
Here the surrogate model is a linear combination of basis functions, a popular model assumption in image representation. In the BRSM, a space-filling design, the uniform design, is also used for selecting the initial experiment points, because knowledge about the true surface is not sufficient.

Besides the above three assumptions on the response function, we suppose that each response value is evaluated by an evolution process, which is time- and cost-consuming, and these evolution processes of the responses cannot be ignored here. Thus, in order to search for the effective points efficiently, we not only want to minimize the number of explored experiment points but also need to reduce the cost of the evolution processes. Hence, in this thesis, based on the same surrogate model as in BRSM, a novel algorithm is proposed to find the effective points on X when f(x) is computed by an evolution process.

The thesis is organized as follows. The new algorithm for finding effective points is proposed in Section 2. The problems of our experiments and the construction of the simulations used to show the performance of our novel algorithm are described in Section 3. The numerical results of our algorithm are shown in Section 4. Finally, a conclusion is given in Section 5.

2. The New Algorithm

In this section, a novel algorithm is proposed for searching effective points efficiently when the response value of each point x ∈ X is obtained by an evolution process. When the evolution processes of the responses can be ignored, i.e. we get the response values "directly," the BRSM is an efficient method for this kind of problem. However, these evolution processes should not be ignored here, especially when the cost of computing responses is very expensive. Hence, we not only want to find the effective points efficiently, but also want to reduce the cost of obtaining the responses. When the evolution processes of the response values are ignored, our algorithm is similar to BRSM; hence, our novel algorithm can be treated as a generalization of BRSM.

2.1 The Basis-Based Response Surface Method

In this subsection, BRSM is introduced. In Wang and Chen (2004) and Chen et al.
(2006), several experiments successfully demonstrated that BRSM is capable of finding multiple effective points even when the unknown response surface is not smooth. The first step of the BRSM is to discretize the experimental region into a grid,

and then experiment points are chosen on this grid for evaluating the corresponding responses. In BRSM, the surrogate surface is constructed using a set of predefined basis functions, and effective points are then identified from this simple surrogate surface. The outline of the BRSM algorithm is as follows:

(1) Generate a grid over the experimental region.

(2) Choose initial experiment points.

(3) Generate basis functions.

(4) Repeat until the effective points are found:

   (4.1) Evaluate the response variables.

   (4.2) Construct the surrogate surface from the basis functions.

   (4.3) Predict possible effective points and choose the next experiment point according to the constructed surrogate surface.

The key point of BRSM is that, after discretizing the experimental region, the continuous response surface is treated as a multi-dimensional image with a finite number of pixels. Thus, existing image representation techniques can be applied to construct the surrogate surfaces. Here the surrogate model is chosen as a linear combination of overcomplete basis functions.

How to choose the initial experiment points is an important issue that affects the intermediate search results and the overall performance. Because there is only little knowledge about the true surface, a space-filling design, the uniform design (Fang et al., 2000), is applied for selecting the points.

We now introduce the basis functions in BRSM. The basis dictionary used in BRSM is an "overcomplete dictionary," which is widely used in image representation. For such a dictionary, a popular choice of basis functions is the Gabor basis functions, and the two-dimensional Gabor basis functions are defined as

g(u, v) = (1/Z) exp[−(σu u² + σv v²)/2] cos[2πu/λ + ϕ],    (1)

Figure 1: The diagrams of Gabor basis functions for (a) σu = 0.5, θ = 0, ϕ = 0, and (b) σu = 1, θ = 3π/8, ϕ = 0. The center of the function is (u0, v0) = (6, 6).

u = u0 + x1 cos θ − x2 sin θ,    (2)

v = v0 + x1 sin θ − x2 cos θ,    (3)

where Z is the normalizing constant, (x1, x2) are coordinates of X, u0, v0, σu, σv are user-chosen parameters of a two-dimensional Gaussian window satisfying the relations σv = √2 σu and λ = √(2π) σu, λ and ϕ are parameters of a sinusoidal grating, and θ is the angle between the x1-axis of the image and the u-axis of the Gabor dictionary. Two examples of Gabor basis functions are shown in Figure 1.

To infer these parameters of the surrogate model, the matching pursuit algorithm proposed by Mallat and Zhang (1993) is employed. Basically, the matching pursuit algorithm iteratively minimizes the 2-norm distance between the true model and the surrogate model.
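Equations (1)–(3) together with the matching pursuit step can be illustrated numerically. The sketch below is illustrative, not the thesis code: it takes Z = 1, assumes the relations σv = √2 σu and λ = √(2π) σu, interprets (u0, v0) as the center of the Gaussian window (matching Figure 1), and builds the dictionary from an arbitrary small parameter sweep.

```python
import numpy as np

def gabor(x1, x2, u0, v0, sigma_u, theta, phi):
    """A 2-D Gabor atom in the spirit of Eqs. (1)-(3), with Z = 1.

    The rotated coordinates are taken relative to (u0, v0) so that the
    Gaussian window actually sits at that center, as in Figure 1; the
    relations sigma_v = sqrt(2)*sigma_u and lam = sqrt(2*pi)*sigma_u
    are assumptions about the garbled printed formulas."""
    sigma_v = np.sqrt(2.0) * sigma_u
    lam = np.sqrt(2.0 * np.pi) * sigma_u
    u = (x1 - u0) * np.cos(theta) - (x2 - v0) * np.sin(theta)
    v = (x1 - u0) * np.sin(theta) + (x2 - v0) * np.cos(theta)
    return np.exp(-0.5 * (sigma_u * u ** 2 + sigma_v * v ** 2)) \
        * np.cos(2.0 * np.pi * u / lam + phi)

# A small overcomplete dictionary over a 10x10 grid.
pts = np.arange(1.0, 11.0)
X1, X2 = np.meshgrid(pts, pts)
atoms = [gabor(X1, X2, u0, v0, s, th, 0.0).ravel()
         for u0 in (3.0, 6.0, 9.0) for v0 in (3.0, 6.0, 9.0)
         for s in (0.5, 1.0) for th in (0.0, 3.0 * np.pi / 8.0)]
Phi = np.column_stack(atoms)                       # 100 pixels x 36 atoms

# Matching pursuit (Mallat and Zhang, 1993): repeatedly pick the atom most
# correlated with the residual and subtract its projection.
target = 1.5 * Phi[:, 5] - 0.7 * Phi[:, 20]        # a toy 2-sparse "surface"
residual = target.copy()
coeffs = np.zeros(Phi.shape[1])
norms = np.linalg.norm(Phi, axis=0)
for _ in range(30):
    corr = Phi.T @ residual
    j = int(np.argmax(np.abs(corr) / norms))       # best-matching atom
    c = corr[j] / norms[j] ** 2                    # projection coefficient
    coeffs[j] += c
    residual -= c * Phi[:, j]
print(round(np.linalg.norm(residual) / np.linalg.norm(target), 6))
```

Because the target is an exact sparse combination of dictionary atoms, the residual norm shrinks rapidly over the greedy iterations.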

2.2 New Algorithm

In this thesis, the situation we are interested in is that every response value is obtained by an evolution process, and the cost of this evolution process is very expensive. Thus, how to reduce the cost of response evaluation is an important issue and should be considered carefully.

Here each response value, f(x), is treated as the limit of f(x, t), i.e. lim(t→∞) f(x, t) = f(x). Since the response values are obtained with respect to the evolution variable t, the evolution can be viewed as a time-series process. In this thesis, a simple time-series model, the auto-regressive process, is used to model the evolution process {f(x, t)} of each response f(x), and the auto-regressive process of order k is represented as

yt = β0 + β1 yt−1 + … + βk yt−k + ε,

where yt = f(x, t) is the current response value of the corresponding evolution process at t, the βi's are the auto-regressive coefficients, and ε is Gaussian white noise. The auto-regressive process of order k is denoted by AR(k).

When the response values are available directly, BRSM can efficiently find the effective points on X owing to the power of surrogate models and the uniform design. Therefore, the surrogate model of the BRSM and the uniform design are also employed here. In addition, we use AR processes to fit the evolution processes of the responses, and a prediction surrogate surface is built from the AR predictions. We then search for possible effective points on both the current surrogate surface and this prediction surrogate surface. Additionally, to save the cost of evaluating evolution processes, we set up criteria for checking the convergence of the evolution process of each response.

According to these ideas, the outline of our new algorithm is as follows. At first, as in BRSM, we generate a grid over the experimental region, choose initial experiment points by a uniform design, and generate basis functions.
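The AR(k) model above can be fitted by ordinary least squares on lagged values, and the same fit also yields the in-sample sum of squared errors used later (Sections 4.1 and 4.2) to select the order k. A minimal numpy sketch, illustrative rather than the thesis code:

```python
import numpy as np

def fit_ar(y, k):
    """Least-squares fit of y_t = b0 + b1*y_{t-1} + ... + bk*y_{t-k}.

    Returns (beta, sse): the coefficient vector (b0, ..., bk) and the
    in-sample sum of squared errors, so different orders k can be compared."""
    y = np.asarray(y, dtype=float)
    # Rows t = k..n-1: a constant column plus the k lagged values of y.
    X = np.column_stack([np.ones(len(y) - k)] +
                        [y[k - j:len(y) - j] for j in range(1, k + 1)])
    beta, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    r = y[k:] - X @ beta
    return beta, float(r @ r)

def predict_ar(y, beta, l):
    """Iterate the fitted AR recursion l steps beyond the end of y."""
    y = list(map(float, y))
    k = len(beta) - 1
    for _ in range(l):
        y.append(beta[0] + sum(beta[j] * y[-j] for j in range(1, k + 1)))
    return y[-l:]

# Toy check: a noiseless AR(1) evolution y_t = 1 + 0.5*y_{t-1}, which
# converges to the fixed point 2.
y = [0.0]
for _ in range(60):
    y.append(1.0 + 0.5 * y[-1])
beta, sse = fit_ar(y, 1)
forecast = predict_ar(y, beta, 5)
print(np.round(beta, 3), round(forecast[-1], 3))
```

On this noiseless series the fit recovers the generating coefficients, and the forecast stays at the limit of the evolution process.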
Let Pexp denote the set of grid points whose corresponding responses are being computed. When there is a grid point x in Pexp such that {f(x, t)} satisfies the convergence criterion, we stop evaluating the evolution process of that point and then do the following steps to search for the possible effective points:

(1) Predict f(x, tx + l) by the auto-regressive process, ∀x ∈ Pexp, where tx is the current evolution variable of x.

(2) Construct the surrogate surface with the current response values f(x, tx).

(3) Construct the prediction surrogate surface with the predictions f̂(x, tx + l).

(4) Choose possible effective points from the two surrogate surfaces in (2) and (3), respectively.

Thus, at least two new experiment points are added into Pexp and begin their evolution processes. The whole algorithm ends when the stopping conditions hold. Here we describe the details of our algorithm:

• Since it is hard to obtain lim(t→∞) f(x, t), no matter whether {f(x, t)} converges or not, each evolution process is terminated at tlimit; that is, we set f(x) = f(x, tlimit). Since we are not interested in small variations of f(x, t), we check the convergence criterion every Δt iterations of each evolution process.

• The convergence criteria employed to check the convergence of an evolution process are based on the variance and the trend of the process, and we temporarily stop evolution processes that satisfy the criteria. A stopped process may be restarted if its point is chosen from a surrogate surface again.

• In order to detect the global trend component of the evolution process {f(x, t)}, given a period d, we define mt = (f(x, t) + f(x, t − 1) + … + f(x, t − d + 1))/d, and then the n most recent averages, {m(h−n+1)d, …, mhd}, are fitted by an AR(k) process, where t = hd. Thus f(x, t + l) can be predicted through mhd+l by this AR(k) process, and we set f̂(x, t + l) = m̂hd+l.

The grid points whose f(x, tlimit)'s are obtained are denoted by Pexp_final; the grid points whose evolution processes satisfy the convergence criteria are denoted by Pexp_stop; and the grid points in Pexp_eva are those whose corresponding evolution

processes are still being computed. Then Pexp = Pexp_final ∪ Pexp_stop ∪ Pexp_eva. Finally, our new algorithm, called the Time-series Basis-Based Response Surface Method (TBRSM for short), is as follows:

(1) Generate a grid P containing N points on the experimental region X.

(2) Choose Pinit initial experiment points by a uniform design and define Pexp = Pinit.

(3) Generate a Gabor dictionary {φj, j = 1, …, M}.

(4) Choose tlimit and Δt.

(5) Evaluate f(x, tlimit) for all x ∈ Pinit, and update Pexp_final = Pexp.

(6) Repeat until the effective points are found:

   (6.1) If there exists x1 ∈ Pexp_eva such that f(x1, tlimit) is obtained, or the evolution process of x2 ∈ Pexp_eva satisfies the convergence criteria, then go to Step (6.2). Otherwise evaluate f(x, t) until t = tx + Δt, ∀x ∈ Pexp_eva.

   (6.2) Update Pexp_final = Pexp_final ∪ {x1}, Pexp_stop = Pexp_stop ∪ {x2}, and Pexp_eva = Pexp_eva \ {x1, x2}.

   (6.3) Construct the current surrogate surface by matching pursuit.

   (6.4) Predict f̂(x, tx + l) by the corresponding AR(k) process, ∀x ∈ Pexp_eva; if x ∈ (Pexp_final ∪ Pexp_stop), then set f̂(x, tx + l) = f(x, tx). Construct the prediction surrogate surface by matching pursuit.

   (6.5) Find a new possible effective point xnew1 on the current surrogate surface from P \ (Pexp_final ∪ Pexp_eva), and search for another possible effective point xnew2 on the prediction surrogate surface from P \ (Pexp_final ∪ Pexp_eva ∪ {xnew1}). Update Pexp_eva = Pexp_eva ∪ {xnew1, xnew2}, Pexp_stop = Pexp_stop \ {xnew1, xnew2}, and Pexp = Pexp_final ∪ Pexp_stop ∪ Pexp_eva.
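The loop in steps (6.1)–(6.5) can be sketched structurally. Everything below is a stand-in rather than the thesis implementation: the Gabor-dictionary surrogate is replaced by a simple inverse-distance interpolation, the AR prediction surface is stubbed to equal the current surface, the convergence criteria (and hence Pexp_stop) are omitted, and the toy response surface and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = [(i, j) for i in range(21) for j in range(21)]            # step (1): a 21x21 grid
true_f = {p: float(np.exp(-((p[0] - 5) ** 2 + (p[1] - 5) ** 2) / 10.0)) for p in grid}

P_exp_final, P_exp_stop, P_exp_eva = set(), set(), set()
t_of = {}                                                        # current t_x of each point
T_LIMIT, DT = 100, 20

def start(p):
    """Begin the evolution process of a new experiment point."""
    P_exp_eva.add(p)
    t_of[p] = 0

def surrogate(observed):
    """Stand-in surrogate surface: inverse-distance interpolation of the
    observed responses (the thesis fits a Gabor dictionary instead)."""
    surf = {}
    for q in grid:
        num = den = 0.0
        for p, f in observed.items():
            w = 1.0 / (1e-9 + (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
            num += w * f
            den += w
        surf[q] = num / den
    return surf

for idx in rng.choice(len(grid), size=5, replace=False):         # step (2): initial design
    start(grid[idx])

for _ in range(30):                                              # step (6)
    for p in list(P_exp_eva):                                    # (6.1)-(6.2): advance each
        t_of[p] += DT                                            # evolution by Δt and retire
        if t_of[p] >= T_LIMIT:                                   # finished processes
            P_exp_eva.discard(p)
            P_exp_final.add(p)
    observed = {p: true_f[p] for p in P_exp_final | P_exp_eva}   # current values f(x, t_x)
    surf_now = surrogate(observed)                               # (6.3) current surface
    surf_pred = surf_now                                         # (6.4) AR prediction stubbed
    explored = P_exp_final | P_exp_stop | P_exp_eva
    x_new1 = max((q for q in grid if q not in explored), key=surf_now.get)
    x_new2 = max((q for q in grid if q not in explored and q != x_new1),
                 key=surf_pred.get)
    start(x_new1)                                                # (6.5) two new points
    start(x_new2)

print(len(P_exp_final), len(P_exp_eva))
```

Each pass through the loop retires finished evolutions into Pexp_final and adds one candidate from each of the two surfaces, which is the bookkeeping that steps (6.2) and (6.5) describe.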

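The trend-detection device described above (block averages mt over a period d, an AR(k) fit on the last n averages, then an l-step-ahead prediction m̂) can be sketched as follows, with illustrative values of d, n, k, and l and a plain least-squares AR fit:

```python
import numpy as np

def block_means(f_vals, d):
    """The trend component: m_t = (f(x,t) + ... + f(x,t-d+1))/d at t = d, 2d, ..."""
    n_blocks = len(f_vals) // d
    return np.array([np.mean(f_vals[h * d:(h + 1) * d]) for h in range(n_blocks)])

def ar_fit_predict(series, k, l):
    """Least-squares AR(k) fit on `series`, then iterate l steps ahead."""
    y = np.asarray(series, dtype=float)
    X = np.column_stack([np.ones(len(y) - k)] +
                        [y[k - j:len(y) - j] for j in range(1, k + 1)])
    beta, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    out = list(y)
    for _ in range(l):
        out.append(beta[0] + sum(beta[j] * out[-j] for j in range(1, k + 1)))
    return out[-l:]

# A converging toy evolution process f(x, t) = 1 - exp(-t/50), t = 1..300.
f_vals = 1.0 - np.exp(-np.arange(1, 301) / 50.0)
m = block_means(f_vals, d=3)                 # block averages over period d = 3
m_hat = ar_fit_predict(m[-13:], k=2, l=6)    # AR(2) on the last n = 13, predict l = 6
print(round(m_hat[-1], 3))
```

Averaging over blocks of length d suppresses the small oscillations the algorithm is told to ignore, so the AR fit sees only the global trend of the evolution.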
3. Computing Experiments

In this section, we first introduce a real problem, and then describe the construction of three simulations used to show the performance of our new algorithm.

3.1 The Lyapunov Exponents

A dynamical system modeling absorptive bistable laser diodes with an electronically controlled external drive has been studied in Wang et al. (2001). The dynamical system can be represented by the following rate equations:

dNe1/dT = Sp1 + mc sin(2π · mf · T) − α1 Ne1² − (α2 Ne1² + Ne1 + α3) Np − Ne1,

η dNe2/dT = Sp2 − α2 Ne2² − (α2 Ne2² + Ne2 + α3) Np − α4 Ne2,        (4)

dNp/dT = Np [γ1 (α2 Ne1² + Ne1 + α3) + γ2 (α2 Ne2² + Ne2 + α3)] − Np/η + ε (γ1 Ne1² + γ2 Ne2²).

This study aimed to assert the existence of chaotic light output of the system and then to apply the light output to secure optical communications. One essential indicator in characterizing this dynamical system is the Lyapunov exponent (L.E.). A positive L.E. implies that the system is chaotic for the corresponding parameter settings. Here we intend to find parameter values of Sp1 (the pump rate) and mc (the modulation current) in Eq. (4) such that the associated L.E. is positive.

It is difficult to find suitable parameter sets for a positive L.E. for two reasons: firstly, computing the L.E. of the dynamical system is extremely time-consuming, and the response values are obtained iteratively with respect to the evolution variable t; secondly, the relations between the parameters and the L.E. are exceedingly complicated. Therefore, how to identify the target parameter sets among all the possible parameters efficiently is an important question.

First we study the evolution processes of the L.E. computation; usually these processes can be grouped into ten types. Two types of evolution processes are shown in Figure 2, and figures of the other types are displayed in the Appendix.
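The iterative character of an L.E. computation can be illustrated on a much cheaper system. The sketch below estimates the largest Lyapunov exponent of the one-dimensional logistic map (a stand-in, not the laser-diode model of Eq. (4)) as a running average, so the "response" settles only gradually with the evolution variable t:

```python
import numpy as np

def lyapunov_evolution(r, x0=0.3, n=20000):
    """Running estimate of the largest Lyapunov exponent of the logistic map
    x_{t+1} = r x_t (1 - x_t): the average of log|f'(x_t)| up to each t."""
    x, acc, history = x0, 0.0, []
    for t in range(1, n + 1):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))     # log|d/dx r x (1 - x)|
        x = r * x * (1.0 - x)
        x = min(max(x, 1e-12), 1.0 - 1e-12)         # guard against rounding out of [0, 1]
        history.append(acc / t)                     # the estimate at "time" t
    return history

evo = lyapunov_evolution(4.0)   # r = 4 is chaotic; the exact L.E. is ln 2
print(round(evo[-1], 3))
```

As with the L.E. responses in the thesis, the useful quantity is the limit of this evolution, and early iterates fluctuate before the running average stabilizes.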
From these figures, we find that the trends of these processes can be very smooth or oscillate

Figure 2: (a)(b): The evolution processes of (Sp1, mc) = (20, 5) and (Sp1, mc) = (28, 9.5); (c)(d): their parts for t ∈ (20000, 40000), respectively.

extremely. Nevertheless, we think it is reasonable to regard these evolution processes as time-series processes.

Two experiment regions are considered here. The first experiment region is (Sp1, mc) ∈ [20, 30] × [5, 15], and the true surface is shown in Figure 3. The other region is (Sp1, mc) ∈ [25.5, 40] × [10, 24.5], and the corresponding true surface is shown in Figure 4.

3.2 The simulation cases

We construct three response surfaces similar to the L.E. problem in Section 3.1. The responses are a combination of three two-dimensional exponential functions with normal noise, i.e.

y = 15 exp(−(0.2(x1 − 5.7)² + 0.2(x2 + 3.4)²)) − 8 exp(−(0.12(x1 − 11.3)² + 0.15(x2 − 2.3)²))

Figure 3: The true surface of the first L.E. experiment.

Figure 4: The true surface of the second L.E. experiment.

Figure 5: The response surface of the first simulation.

+ 10 exp(−(0.1(x1 − 16.4)² + 0.2(x2 − 1.9)²)) + η,

where η comes from Normal(0, 3). The experiment region is [0, 20] × [−10, 10]. In order to obtain evolution processes similar to those of the L.E., we use different AR(k) processes to generate the evolution processes in our simulations. To decide the parameters of each AR(k) process, the order k is sampled from DiscreteUniform(2, 5); the corresponding AR coefficients β1, …, βk are drawn i.i.d. from Uniform(−1.5, 1.5); and the noise term ε comes from Normal(0, ω). In the first simulation, we choose ω = 0.2, and the response surface is shown in Figure 5. We construct another simulation with ω = 0.15; the response surface of the second simulation is shown in Figure 6. Finally, the third simulation is constructed under the same assumptions as the second simulation, and the corresponding surface is shown in Figure 7.
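The simulated surface and its effective points can be generated directly from the formula above. A sketch in which Normal(0, 3) is read as variance 3 (an assumption), so the count of ROI points varies somewhat with the noise:

```python
import numpy as np

rng = np.random.default_rng(1)

def response(x1, x2):
    """The simulated response: three 2-D exponential bumps plus normal noise
    (Normal(0, 3) is read here as variance 3, i.e. std sqrt(3))."""
    y = (15 * np.exp(-(0.2 * (x1 - 5.7) ** 2 + 0.2 * (x2 + 3.4) ** 2))
         - 8 * np.exp(-(0.12 * (x1 - 11.3) ** 2 + 0.15 * (x2 - 2.3) ** 2))
         + 10 * np.exp(-(0.1 * (x1 - 16.4) ** 2 + 0.2 * (x2 - 1.9) ** 2)))
    return float(y + rng.normal(0.0, np.sqrt(3.0)))

# Count the effective points {x : f(x) > 7} on the 21x21 grid of Section 4.1.
grid = [(x1, x2) for x1 in range(0, 21) for x2 in range(-10, 11)]
effective = [p for p in grid if response(*p) > 7]
print(len(effective))
```

The count lands near the 22–24 effective points reported for the three simulations, since only points near the two positive bumps can exceed the threshold.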

Figure 6: The response surface of the second simulation.

Figure 7: The response surface of the third simulation.

4. Experimental Results

First we introduce the surrogate construction for our simulations and real experiments. Let the N × 1 response vector associated with the response variables over the grid P be VP = (f(x1), …, f(xN))^T, where N is the number of grid points, and let the set of basis functions {φj, j = 1, …, M} be chosen as an overcomplete Gabor dictionary, where M > N and φj is defined by (1) to (3). Without loss of generality, assume that Pexp contains p experiment points, denoted x1, …, xp, and define the p × 1 vector ṼPexp = (f(x1), …, f(xp))^T. Let e(xi) be the N × 1 unit vector whose entries are all zero except the one corresponding to the point xi, which is one, and let Ip = (e(x1), …, e(xp))^T be the p × N matrix whose ith row is e(xi)^T. Thus ṼPexp = Ip VP. Hence, the surrogate surface Σj c̃j φ̃j, j = 1, …, M, is constructed by choosing the c̃j's to minimize the modeling error on Pexp,

‖ṼPexp − Σj c̃j φ̃j‖,

where φ̃j = Ip φj, and this minimization is accomplished by the matching pursuit algorithm.

Due to limited computer resources, in our simulations and real experiments we allow at most four evolution processes to run in the computer system at a time; that is, the number of points in Pexp_eva is less than or equal to 4.

This section is divided into three parts. In Section 4.1, we present the results of our simulations; the results of the L.E. experiments are shown in Section 4.2. Finally, we compare the performance of TBRSM with other similar algorithms in Section 4.3.

4.1 Simulation results

In this subsection, TBRSM is applied to the three simulations of Section 3.2. In these simulations, the experiment region is [0, 20] × [−10, 10], and the grid is set to be {(x1, x2) | x1 ∈ {0, 1, …, 20}, x2 ∈ {−10, −9, …, 10}}. That is, the grid

contains 21 × 21 points in the experiment region. The ROI we set here is {x | f(x) > 7}; thus there are 22, 23, and 24 effective points in the three simulations, respectively. We set tlimit = 1000 and Δt = 20. A uniform design with 2 factors is used to select 21 initial experiment points.

Since the evolution processes are generated by AR processes with different orders, we need to choose a suitable order k for the AR processes in our algorithm. Since we have the full evolution processes {f(x, t), t = 0, …, tlimit} for all initial experiment points x ∈ Pinit, we fit AR processes with orders k = 2, …, 6 to these evolution processes and, based on the minimum sum of squared errors, choose k = 6. Finally, for the other parameters of the algorithm, we choose n = 13, d = 3, and l = 6. The convergence criterion we consider in the simulations is that the event "the average of the last 60 response values is less than 5 and their variance is less than 10⁻¹" occurs 5 times in a row.

We summarize the simulation results in the following:

• In the first simulation, the TBRSM successfully identifies all 22 effective points while 248 experiment points are used. Except for 3 effective points found among the initial points, the others are located when the TBRSM uses 29, 33, 35, 37, 39, 43, 51, 55, 85, 93, 95, 99, 131, 133, 147, 161, 167, 185, and 248 points. Except for the last effective point, our algorithm is efficient, identifying most effective points with only one third of the grid points used. The processes of choosing experiment points in this simulation after 40, 80, 120, and 160 experiment points are shown in Figure 8, where 9, 12, 15, and 19 effective points have been identified, respectively. The black x's are initial points; the red o's are effective points; the dots are possible points chosen from the current surrogate surface, and the *'s are chosen from the prediction surrogate surface.
From Figure 5, we know that the effective points are located around two mounds, and the TBRSM locates these two mounds quickly; hence our algorithm correctly detects the trend of the surface. Figure 9 shows the locations of all experiment points used in this simulation. In order to find the last effective point, (17, −3), the algorithm has to spend many experiment points, because this point is the only effective point in its neighborhood.

Figure 8: The processes of choosing experiment points of TBRSM in the first simulation.

Figure 9: Profile of the 248 experiment points in the first simulation.

• In the second simulation, the TBRSM uses 275 experiment points to identify the 23 effective points. Except for 2 effective points found among the initial points, the others are located when 22, 23, 31, 49, 57, 69, 77, 93, 115, 119, 161, 168, 169, 190, 195, 196, 200, 212, 224, 261, and 275 experiment points are used. Our algorithm thus uses about half of the grid points to identify all effective points. We also show the processes of choosing experiment points in the second simulation after 40, 80, 120, and 160 experiment points in Figure 10, where 5, 9, 12, and 13 effective points have been identified, respectively. According to the true response surface (Figure 6), the effective points are located around two mounds. From Figure 10, we find that the TBRSM identifies the effective points around the left mound quickly and then moves on to search for the effective points in the right mound. Figure 11 shows the locations of all explored experiment points; our algorithm uses many points to search the right mound. In Figure 10(c)(d), many points are chosen around the left mound.

• Finally, in the third simulation, the TBRSM identifies the 24 effective points while 166 points are chosen. The TBRSM uses 28, 29, 37, 39, 41, 47, 51, 57, 59, 73, 75, 77, 97, 107, 115, 119, 123, 129, 133, 151, 160, and 166 experiment points to locate all effective points except 2 effective points found among the initial points. Thus, our algorithm identifies the effective points with one third of the grid points used. The processes of choosing experiment points in the third simulation after 40, 80, 120, and 160 experiment points are shown in Figure 12, where 6, 13, 17, and 23 effective points have been identified, respectively. Figure 13 presents all 166 experiment points located on the grid. We find that the TBRSM only chooses experiment points around the two mounds, because the response values of the effective points in Figure 7 are significantly different from the others.

4.2
The results for the two experiments of L.E.

There are two experiment regions for the L.E. problem. The grid of the first experiment region is {(Sp1, mc) | Sp1 ∈ {20, 20.5, …, 29.5, 30}, mc ∈ {5, 5.5, …, 14.5, 15}}, and there are 21 effective points in the right part of this experiment region. The other grid is

Figure 10: The processes of choosing experiment points of TBRSM in the second simulation.

Figure 11: Profile of the 275 experiment points in the second simulation.

Figure 12: The processes of choosing experiment points of TBRSM in the third simulation.

Figure 13: Profile of the 166 experiment points in the third simulation.

{(Sp1, mc) | Sp1 ∈ {25.5, 26, …, 39.5, 40}, mc ∈ {10, 10.5, …, 24, 24.5}}. In the second experiment, a total of 25 effective points are separated into three areas of the second experiment region.

Here we set the upper iteration limit tlimit to 100000 and Δt to 1000, so that the evolution processes can be fitted well by AR processes without wasting too much computing resource. In order to choose a suitable order k for all AR processes, we randomly choose ten periods from the evolution processes of the initial points and fit them by AR processes with different orders. The results for two points are shown in Figure 14, and we find that k = 20 is sufficient to fit these processes, based on the minimum error sum of squares. The other parameters of TBRSM are n = 50, d = 40, and l = 10 in the L.E. experiments. After analyzing the evolution processes of the initial points, we set the convergence criterion to be that the event "the average of the last 1000 response values is less than 10⁻⁷ and their variance is less than 10⁻⁷" occurs 5 times in a row. The results of the two real experiments are described in the following:

• In the first L.E. experiment, the TBRSM successfully identifies the 21 effective points with 133 experiment points. Except for 2 effective points found among the initial points, the TBRSM uses 31, 45, 49, 51, 62, 73, 75, 83, 85, 86, 87, 88, 89, 101, 127, 129, 131, and 133 experiment points to locate the other effective points. Our algorithm uses only one third of the grid points to identify all effective points. From Figure 3, we know that the effective points are located in the right part of the experiment region, and the left part is a smooth surface. Figure 15 shows the processes of choosing experiment points after 40, 80, 120, and 131 experiment points are used, where 5, 12, 18, and 21 effective points have been identified, respectively. Figure 16 shows the locations of all 133 experiment points; the TBRSM focuses on the hot-spot area.
• In the second L.E. experiment, the TBRSM successfully identifies the 25 effective points while 322 experiment points are used. Except for one effective point found among the initial points, the TBRSM uses 39, 40, 58, 88, 101, 102, 105, 106, 120, 137, 138, 144, 168, 176, 196, 212, 216, 222, 228, 248, 266, 270, 276, and 322 experiment points to locate the other effective points. It seems that, to

Figure 14: The broken-line graphs of the SSE between the original data and the fitted AR models against the AR order, for two evolution processes in the L.E. experiments.
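The order-selection step described above can be sketched as follows: fit an AR(k) model to each sampled period by least squares and pick the order with the smallest error sum of squares, as in Figure 14. This is a minimal sketch; the function names and the synthetic data are our assumptions, not the thesis's notation.

```python
import numpy as np

def fit_ar_sse(y, k):
    """Fit an AR(k) model y_t = c + a_1*y_{t-1} + ... + a_k*y_{t-k}
    by least squares and return the error sum of squares (SSE)."""
    n = len(y)
    # Design matrix: intercept column plus the k lagged columns.
    X = np.column_stack([np.ones(n - k)] +
                        [y[k - j:n - j] for j in range(1, k + 1)])
    target = y[k:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    return float(resid @ resid)

def choose_order(y, orders):
    """Return the order minimizing the SSE, together with all SSEs."""
    sses = {k: fit_ar_sse(y, k) for k in orders}
    return min(sses, key=sses.get), sses

# Example on a synthetic period resembling an evolution process (hypothetical data).
rng = np.random.default_rng(0)
y = np.zeros(1000)
for t in range(2, 1000):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(scale=1e-4)
best_k, sses = choose_order(y, range(1, 11))
```

In the thesis the same comparison is run for orders up to about 26 on ten randomly chosen periods, and k = 20 is adopted for all AR processes.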

Figure 15: The process of choosing experiment points by the TBRSM in the first L.E. experiment.

Figure 16: Profile of the 133 experiment points in the first L.E. experiment.

Figure 17: The process of choosing experiment points by the TBRSM in the second L.E. experiment.

Figure 18: Profile of the 322 experiment points in the second L.E. experiment.
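The stopping rule used in these experiments (the mean and variance of the last 1000 response values both below 10−7, five checks in a row) can be sketched as follows. The function name, the one-check-per-window schedule, and the use of the absolute mean are our assumptions; the thesis only states the rule in words.

```python
import numpy as np

def converged(process, window=1000, tol=1e-7, required=5):
    """Stopping rule sketch: return True once the mean (in absolute value)
    and the variance of the last `window` response values are both below
    `tol` for `required` consecutive checks, one check per window."""
    hits = 0
    for end in range(window, len(process) + 1, window):
        chunk = process[end - window:end]
        if abs(np.mean(chunk)) < tol and np.var(chunk) < tol:
            hits += 1
            if hits >= required:
                return True
        else:
            hits = 0  # the streak must be consecutive
    return False
```

A process that has settled to a constant value passes the check after five windows; a process still drifting resets the streak and keeps evolving, up to the limit tlim.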

identify all of the effective points, the TBRSM chooses only one third of the grid points. Because the initial experiment points provide less information, the selected experiment points spread over the whole experiment region, but the TBRSM still identifies the three hot spots. Figure 17 shows the process of choosing experiment points when 60, 120, 180, and 240 points are used; at these stages 4, 10, 15, and 21 effective points are identified respectively. Figure 18 shows the locations of all 322 experiment points.

4.3 Comparison

In this subsection, we compare the TBRSM with three other algorithms, which we introduce first: (1) In the TBRSM, AR processes are used to fit the evolution processes; based on the predictions of the AR processes, the second surrogate surface is constructed and one possible effective point is chosen from this surface. To isolate the effect of the AR processes, we form another algorithm, named the Iterated Basis-Based Response Surface Method (IBRSM). The only difference between the TBRSM and the IBRSM is that the latter does not use AR processes to characterize the evolution processes; instead of adding a point from the prediction surrogate surface, it chooses two points from the current surrogate surface. (2) Since the response function f is assumed to be unknown or very complicated, an intuitive way to find the effective points is to fix a grid, randomly pick a grid point, and check whether the corresponding response belongs to the ROI. This algorithm is called RAND, because each grid point is picked with equal probability. (3) We also apply the BRSM, which ignores the evolution processes, in these simulations and experiments. To compare the results of these algorithms, we evaluate the computing cost of each. The computing cost can be divided into two parts. One is the number of

explored experiment points, and the other is the evolution processes of the experiment points. The total computing cost of each of the four algorithms is defined in one of two ways, according to whether the evolution processes are ignored. If the evolution processes are ignored, as in RAND and the BRSM, the total computing cost equals the number of experiment points times tlim, i.e. |Pexp| × tlim. Otherwise, the total computing cost is the sum of the numbers of steps in the evolution processes of the points in Pexp, i.e. Σ_{x∈Pexp} t_x. In addition to the computing cost, we use the result of the BRSM as a baseline for the efficiency of locating n effective points, namely the ratio of the computing costs for identifying n effective points, where n ≤ m, the total number of effective points. The efficiency of an algorithm for finding n effective points is thus defined as Costs_Alg(n) / Costs_BRSM(n), n = 1, . . ., m, where Costs_Alg(n) is the total computing cost of that algorithm for finding n effective points. First we consider the results of the three simulations. Figures 19, 20, and 21 plot the total computing cost against the number of effective points found. The computing cost of RAND appears as an oblique line that divides each graph into two parts; a broken line in the upper-left part means that the corresponding algorithm performs better than RAND. That is, in all three simulations, the BRSM, TBRSM, and IBRSM perform better than RAND. Among the other three algorithms, the TBRSM usually performs best, especially in the first simulation, where the IBRSM and BRSM spend more than the TBRSM in searching for the last effective points. The relative efficiencies for the three simulations are shown in Figures 22 to 24. RAND spends more than twice the cost of the BRSM to identify the effective points, whereas the TBRSM and IBRSM save considerably. Clearly, the TBRSM performs best among these algorithms.
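The two cost formats and the relative efficiency defined above can be sketched as follows; the function names are ours, and the t_x values in the example are hypothetical:

```python
def total_cost_ignoring_evolution(n_points, t_lim):
    """Cost for RAND and BRSM: every experiment point is charged
    the full evolution length, |Pexp| * t_lim."""
    return n_points * t_lim

def total_cost_with_evolution(steps_per_point):
    """Cost for TBRSM and IBRSM: the sum of the actual evolution
    lengths t_x over the points x in Pexp."""
    return sum(steps_per_point)

def relative_efficiency(cost_alg, cost_brsm):
    """Costs_Alg(n) / Costs_BRSM(n); a value below 1 means the
    algorithm is cheaper than the BRSM baseline for n effective points."""
    return cost_alg / cost_brsm

# e.g. 133 experiment points at t_lim = 100000 versus early-stopped evolutions
fixed = total_cost_ignoring_evolution(133, 100000)
adaptive = total_cost_with_evolution([20000, 35000, 100000])
```

Early stopping is exactly why the evolution-aware algorithms can land below the ratio 1 in Figures 22 to 24: most t_x are far smaller than tlim.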
Now we present the comparisons for the two L.E. experiments. Figures 25 and 26 show the total cost graphs of the algorithms. Again, RAND does not perform well. The TBRSM and IBRSM also perform better than the BRSM, especially in searching for the last effective points in the first L.E. experiment and in the beginning stages of the second L.E. experiment. However, the difference between the TBRSM and the

Figure 19: The total cost of the first simulation.

Figure 20: The total cost of the second simulation.

Figure 21: The total cost of the third simulation.

Figure 22: The relative efficiencies of the first simulation (rand 2.1758, brsm 1, tbrsm 0.64188, ibrsm 0.83515).

Figure 23: The relative efficiencies of the second simulation (rand 1.8047, brsm 1, tbrsm 0.64338, ibrsm 0.72216).

Figure 24: The relative efficiencies of the third simulation (rand 2.4236, brsm 1, tbrsm 0.63691, ibrsm 0.66274).

Figure 25: The total cost of the first L.E. experiment.

Figure 26: The total cost of the second L.E. experiment.

Figure 27: The relative efficiencies of the first L.E. experiment (rand 3.1629, brsm 1, tbrsm 0.70405, ibrsm 0.66662).

Figure 28: The relative efficiencies of the second L.E. experiment (rand 2.2277, brsm 1, tbrsm 0.40573, ibrsm 0.48857).

IBRSM is not significant in these two experiments; thus the AR processes may not provide enough useful information for exploring experiment points. The relative efficiencies are shown in Figures 27 and 28. In the first L.E. experiment, the relative efficiencies of the TBRSM and IBRSM decrease steadily and are below 50% of the BRSM's cost when all effective points are identified. In the second experiment, the relative efficiencies of the TBRSM stay below 50% throughout the whole procedure.

5 Conclusion

In this thesis, we propose a novel algorithm, the TBRSM, for searching for the effective points of a response surface, where the response value of each experiment point is computed by an evolution process modeled as a simple time-series model, the auto-regressive (AR) process. The TBRSM is an iterative algorithm: at each iteration, possible effective points are selected from two surrogate surfaces, constructed from the current values and from the AR predictions respectively. As in the BRSM, each surrogate surface is a linear combination of atoms from a predefined overcomplete basis dictionary, and the coefficients of the surrogate surface are inferred by the matching pursuit algorithm. In our simulation studies and the two real experiments, the TBRSM successfully locates all effective points with few experiment points. However, in the first real experiment the IBRSM performs better than the TBRSM. The reason may lie in the process assumption for the evolution processes: the TBRSM should outperform the IBRSM if the AR process is a correct model for the evolution process, because the purpose of using the AR process is to detect the trend of the evolution process and to predict its next steps. Besides the AR process, the spline method proposed in Yiu et al. (2001) would be another possible choice.
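The one-step-ahead use of the fitted AR model mentioned above can be sketched as follows; the function signature and coefficient ordering are our assumptions, and the coefficients in the usage line are hypothetical:

```python
import numpy as np

def ar_predict(y, coef, intercept, steps):
    """Iterate a fitted AR(k) model forward: each new value is the
    intercept plus the dot product of the coefficients with the k most
    recent values (coef[0] multiplies the newest value y_{t-1}).
    Such predictions feed the second, prediction-based surrogate surface."""
    k = len(coef)
    hist = list(y[-k:])          # last k observed values, oldest first
    preds = []
    for _ in range(steps):
        nxt = intercept + float(np.dot(coef, hist[::-1]))
        preds.append(nxt)
        hist = hist[1:] + [nxt]  # slide the window forward
    return preds

# e.g. an AR(1) process with coefficient 0.5 decays geometrically
preds = ar_predict([1.0], np.array([0.5]), 0.0, 3)
```

When the AR fit is accurate, these predictions reveal the trend of a still-running evolution process several steps early, which is the advantage the TBRSM is designed to exploit.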
In Section 4, we evaluate these evolution processes sequentially and limit the number of points whose evolution processes are running simultaneously to at most 4. Here we assume these evolution processes run independently, so that we can evaluate them separately. When more computing resources are available, these processes

could be distributed to different resources. Therefore, parallel computing techniques can be applied to improve the efficiency of the TBRSM.
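The matching pursuit step used to infer the surrogate coefficients (Mallat and Zhang, 1993) can be sketched as follows; this minimal version assumes unit-norm dictionary atoms stored as columns, whereas the thesis's overcomplete basis dictionary is defined elsewhere:

```python
import numpy as np

def matching_pursuit(y, D, n_iter=10):
    """Greedy matching pursuit: at each step select the dictionary atom
    (column of D, assumed unit-norm) most correlated with the residual,
    add its projection to the coefficients, and subtract it from the
    residual.  Returns the coefficient vector of the approximation."""
    resid = y.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ resid                 # correlations with all atoms
        j = int(np.argmax(np.abs(corr)))   # best-matching atom
        coef[j] += corr[j]
        resid -= corr[j] * D[:, j]
    return coef
```

With an orthonormal dictionary the algorithm recovers the exact expansion in at most as many iterations as there are nonzero coefficients; with an overcomplete dictionary it yields a sparse greedy approximation.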

A Appendix: The figures of evolution processes for Lyapunov exponents

Figure 29: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (20, 5) and its part for t ∈ (20000, 40000).

Figure 30: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (28, 9.5) and its part for t ∈ (20000, 40000).

Figure 31: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (26, 5) and its part for t ∈ (20000, 40000).

Figure 32: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (26.5, 5) and its part for t ∈ (20000, 40000).

Figure 33: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (29.5, 10.5) and its part for t ∈ (20000, 40000).

Figure 34: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (29.5, 13) and its part for t ∈ (20000, 40000).

Figure 35: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (32.5, 10) and its part for t ∈ (20000, 40000).

Figure 36: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (30, 7.5) and its part for t ∈ (20000, 40000).

Figure 37: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (30, 10.5) and its part for t ∈ (20000, 40000).

Figure 38: The evolution process of the Lyapunov exponent at (Sp1 , mc ) = (33.5, 16.5) and its part for t ∈ (20000, 40000).

References

[1] F. Bergeaud and S. Mallat (1995). Matching pursuit of images. 1995 International Conference on Image Processing, 1, 53-56.

[2] G. E. P. Box and K. B. Wilson (1951). On the experimental attainment of optimum conditions. Journal of the Royal Statistical Society, Ser. B, 13, 1-45.

[3] R.-B. Chen, W. Wang and F. Tsai (2006). A basis-based response surface method for computer experiment optimization. Technical report, Department of Applied Mathematics and Institute of Statistics, National University of Kaohsiung.

[4] K. T. Fang, D. K. J. Lin, P. Winker and Y. Zhang (2000). Uniform design: theory and application. Technometrics, 42, 237-248.

[5] B. MacLennan (1991). Gabor representations of spatiotemporal visual images. Technical report CS-91-144.

[6] S. Mallat and Z. Zhang (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41, 3397-3415.

[7] T. S. Parker and L. O. Chua (1989). Practical Numerical Algorithms for Chaotic Systems. Springer-Verlag, New York.

[8] J. Sacks, W. J. Welch, T. J. Mitchell and H. P. Wynn (1989). Design and analysis of computer experiments. Statistical Science, 4(4), 409-435.

[9] W. Wang and R.-B. Chen (2004). Basis representation methodology for response surfaces. Technical report, Department of Applied Mathematics and Institute of Statistics, National University of Kaohsiung.

[10] W. Wang, T.-M. Hwang, C. Juang, J. Juang, C.-Y. Liu and W.-W. Lin (2001). Chaotic behaviors of bistable laser diodes and its application in synchronization of optical communication. Japanese Journal of Applied Physics, 40(10), 5914-5919.

[11] K. F. C. Yiu, S. Wang, K. L. Teo and A. C. Tsoi (2001). Nonlinear system modeling via knot-optimizing B-spline networks. IEEE Transactions on Neural Networks, 12(5), 1013-1022.
