
Institute of Statistics, National University of Kaohsiung
Master's Thesis

Searching Effective Points by Surrogate Models and Time Series Predictions
(一個以代用模型與時間序列預測值來搜尋有效點的演算法)

Graduate student: Chia-Lung Hsu
Advisors: Weichung Wang, Ray-Bing Chen

July 2006 (Year 95 of the Republic of China)

Searching Effective Points by Surrogate Models and Time Series Predictions

by Chia-Lung Hsu

Advisors: Weichung Wang and Ray-Bing Chen

Institute of Statistics, National University of Kaohsiung
Kaohsiung, Taiwan 811, R.O.C.

July 2005

Contents

1 Introduction
2 The New Algorithm
  2.1 The Basis-Based Response Surface Method
  2.2 New Algorithm
3 Computing Experiments
  3.1 The Lyapunov Exponents
  3.2 The simulation cases
4 Experimental Results
  4.1 Simulation results
  4.2 The results for the two experiments of L.E.
  4.3 Comparison
5 Conclusion
A Appendix: The figures of evolution processes for Lyapunov exponents
References


Searching Effective Points by Surrogate Models and Time Series Predictions

Advisors: Dr. Weichung Wang (a) and Dr. Ray-Bing Chen (b)
(a) Department of Applied Mathematics, National University of Kaohsiung
(b) Institute of Statistics, National University of Kaohsiung

Student: Chia-Lung Hsu, Institute of Statistics, National University of Kaohsiung

Abstract

We develop an algorithm to find so-called effective points $x \in \mathbb{R}^n$ such that the corresponding responses $f(x) \in \mathbb{R}$ belong to a specific region of interest, for example extreme values, bounded intervals, or positivity. The responses are obtained iteratively with respect to an evolution variable $t$, and these evolution processes are fitted by auto-regressive (AR) processes. To find the effective points, the true yet unknown response surface is approximated by a surrogate model. Possible effective points are then selected from two surrogate surfaces, constructed from the predictions of the AR processes and from the current values of the evolution processes, respectively. Convergence criteria for the evolution processes are also used to improve the efficiency of our novel algorithm. Several simulations and two real examples of finding positive Lyapunov exponents of a dynamical system are demonstrated. Computational results show that the novel algorithm is efficient and practical.

Key words and phrases: basis-based response surface method, auto-regression, uniform design, optimization, region of interest, image representation.

1. Introduction

We consider the problem of finding effective points $x \in \mathcal{X} \subset \mathbb{R}^n$ such that the response values $f(x) \in \mathbb{R}$ belong to a region of interest (ROI), where $\mathcal{X}$ is the experimental region. Many applications can be formulated as problems of this kind; for example, if the ROI represents extreme values, the problem is equivalent to an optimization problem. Here we assume that (1) the response surface is complex and difficult to model by simple functions, (2) the computing cost of each response value is expensive, and (3) the response function is unknown and complicated. A surrogate approach is therefore used: instead of searching the complex true surface directly, a simple and cheap surrogate surface is used to approximate the true yet complicated response surface, and the effective points are then found from this surrogate surface.

When only optimization problems are considered, response surface methodology (RSM), proposed by Box and Wilson (1951), is a well-known statistical tool that takes the surrogate approach. The surrogate models in RSM are lower-order polynomial models, and central composite designs are chosen for sampling the experimental points used to fit these models. Another surrogate approach for optimization problems is Design and Analysis of Computer Experiments (DACE, Sacks et al., 1989). In DACE, space-filling designs are employed to sample experiment points, and the surrogate approximation is accomplished by the kriging method.

The basis-based response surface method (BRSM), proposed by Wang and Chen (2004) and Chen et al. (2006), uses the surrogate approach to find effective points. The BRSM first discretizes the experimental region into a grid; the continuous response surface is thereby transformed into a finite number of pixels, so the response surface can be treated as an image, and existing image representation techniques can be applied to construct the surrogate surface. The surrogate model is a linear combination of basis functions, a popular model assumption in image representation. Because knowledge about the true surface is limited, the BRSM also uses a space-filling design, the uniform design, to select the initial experiment points.

Besides the above three assumptions on the response function, we suppose that each response value is evaluated by an evolution process, which is time- and cost-consuming, so these evolution processes cannot be ignored here. Thus, to search for the effective points efficiently, we not only want to minimize the number of explored experiment points but also need to reduce the cost of the evolution processes. In this thesis, based on the same surrogate model as in the BRSM, a novel algorithm is proposed to find the effective points on $\mathcal{X}$ when $f(x)$ is computed by an evolution process.

The thesis is organized as follows. The new algorithm for finding effective points is proposed in Section 2. The problems in our experiments and the construction of the simulations used to show the performance of our novel algorithm are described in Section 3. The numerical results are presented in Section 4, and a conclusion is given in Section 5.

2. The New Algorithm

In this section, a novel algorithm is proposed for searching effective points efficiently when the response value of each point $x \in \mathcal{X}$ is obtained by an evolution process. When the evolution processes of the responses can be ignored, i.e. the response values are obtained "directly," the BRSM is an efficient method for this kind of problem. Here, however, the evolution processes should not be ignored, especially when the cost of computing responses is very expensive. Hence we not only want to find the effective points efficiently but also want to reduce the cost of obtaining the responses. When the evolution processes of the response values are ignored, our algorithm is similar to the BRSM, so the novel algorithm can be treated as a generalization of the BRSM.

2.1. The Basis-Based Response Surface Method

In this subsection the BRSM is introduced. Wang and Chen (2004) and Chen et al. (2006) demonstrated in several experiments that the BRSM is capable of finding multiple effective points even when the unknown response surface is not smooth.

The first step of the BRSM is to discretize the experimental region into a grid; experiment points are then chosen on this grid for evaluating the corresponding responses. In the BRSM, the surrogate surface is constructed from a set of predefined basis functions, and effective points are identified from this simple surrogate surface. The outline of the BRSM is as follows:

(1) Generate a grid over the experimental region.
(2) Choose initial experiment points.
(3) Generate basis functions.
(4) Repeat until the effective points are found:
    (4.1) Evaluate the response values.
    (4.2) Construct the surrogate surface from the basis functions.
    (4.3) Predict possible effective points and choose the next experiment point according to the constructed surrogate surface.

The key point of the BRSM is that, after the experimental region has been discretized, the continuous response surface is treated as a multi-dimensional image with a finite number of pixels, so existing image representation techniques can be applied to construct the surrogate surfaces. The surrogate model is chosen as a linear combination of overcomplete basis functions. How the initial experiment points are chosen is an important issue that affects both the intermediate search results and the overall performance; because little is known about the true surface, a space-filling design, the uniform design (Fang et al., 2000), is applied to select these points.

We now introduce the basis functions used in the BRSM. The basis dictionary is an "overcomplete dictionary," a choice widely used in image representation. A popular family of overcomplete basis functions is the Gabor basis, and the two-dimensional Gabor basis functions are defined as

$$g(u, v) = \frac{1}{Z} \exp\Big[-\frac{1}{2}\big(\sigma_u u^2 + \sigma_v v^2\big)\Big] \cos\Big[\frac{2\pi u}{\lambda} + \varphi\Big], \tag{1}$$

Figure 1: Two Gabor basis functions, for (a) $\sigma_u = 0.5$, $\theta = 0$, $\varphi = 0$, and (b) $\sigma_u = 1$, $\theta = 3\pi/8$, $\varphi = 0$; the center of each function is $(u_0, v_0) = (6, 6)$.

$$u = u_0 + x_1 \cos\theta - x_2 \sin\theta, \tag{2}$$
$$v = v_0 + x_1 \sin\theta - x_2 \cos\theta, \tag{3}$$

where $Z$ is the normalizing constant; $(x_1, x_2)$ are coordinates of $\mathcal{X}$; $u_0$, $v_0$, $\sigma_u$, $\sigma_v$ are user-chosen parameters of a two-dimensional Gaussian window satisfying $\sigma_v = \sqrt{2}\,\sigma_u$ and $\lambda = \sqrt{2}\,\pi\sigma_u$; $\lambda$ and $\varphi$ are parameters of a sinusoidal grating; and $\theta$ is the angle between the $x_1$-axis of the image and the $u$-axis of the Gabor dictionary. Two examples of Gabor basis functions are shown in Figure 1.

To infer these parameters in the surrogate model, the matching pursuit algorithm proposed by Mallat and Zhang (1993) is employed. Essentially, matching pursuit iteratively minimizes the 2-norm distance between the true model and the surrogate model.
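To make the dictionary construction concrete, the following sketch (ours, not from the thesis) evaluates one two-dimensional Gabor atom on a grid according to Eqs. (1)-(3); reading $Z$ as a unit-norm normalization and the example parameter values are our assumptions.

```python
import numpy as np

def gabor_atom(grid_x1, grid_x2, u0, v0, sigma_u, theta, phi):
    """Evaluate a 2-D Gabor basis function g(u, v) on a grid, per Eqs. (1)-(3).

    sigma_v and lambda are tied to sigma_u as stated in the text:
    sigma_v = sqrt(2)*sigma_u and lambda = sqrt(2)*pi*sigma_u.
    """
    sigma_v = np.sqrt(2.0) * sigma_u
    lam = np.sqrt(2.0) * np.pi * sigma_u
    x1, x2 = np.meshgrid(grid_x1, grid_x2, indexing="ij")
    u = u0 + x1 * np.cos(theta) - x2 * np.sin(theta)   # Eq. (2)
    v = v0 + x1 * np.sin(theta) - x2 * np.cos(theta)   # Eq. (3)
    g = (np.exp(-0.5 * (sigma_u * u**2 + sigma_v * v**2))
         * np.cos(2.0 * np.pi * u / lam + phi))
    return g / np.linalg.norm(g)  # 1/Z: we assume Z gives the atom unit 2-norm

# Illustrative parameters (not the thesis's settings): an atom centered on the grid.
atom = gabor_atom(np.linspace(-5, 5, 11), np.linspace(-5, 5, 11),
                  u0=0.0, v0=0.0, sigma_u=0.5, theta=0.0, phi=0.0)
```

Varying $u_0$, $v_0$, $\sigma_u$, $\theta$, and $\varphi$ over a lattice of values produces the overcomplete dictionary $\{\phi_j\}$ used below.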

2.2. New Algorithm

In this thesis we are interested in the situation where every response value is obtained by an evolution process, and the cost of this evolution process is very expensive. How to reduce the cost of response evaluation is therefore an important issue and should be considered carefully.

Each response value $f(x)$ is treated as the limit of $f(x, t)$, i.e. $\lim_{t \to \infty} f(x, t) = f(x)$. Since the response values are obtained with respect to the evolution variable $t$, each evolution process can be viewed as a time series. In this thesis a simple time-series model, the auto-regressive process, is used to model the evolution process $\{f(x, t)\}$ of each response $f(x)$. The auto-regressive process of order $k$, denoted AR($k$), is

$$y_t = \beta_0 + \beta_1 y_{t-1} + \ldots + \beta_k y_{t-k} + \varepsilon,$$

where $y_t = f(x, t)$ is the current response value of the corresponding evolution process at $t$, the $\beta_i$'s are the auto-regressive coefficients, and $\varepsilon$ is Gaussian white noise.

When the response values are available directly, the BRSM can find the effective points on $\mathcal{X}$ efficiently thanks to the power of the surrogate models and the uniform design, so the surrogate model of the BRSM and the uniform design are also employed here. In addition, we use AR processes to fit the evolution processes of the responses, and a prediction surrogate surface is built from the resulting predictions; we then search for possible effective points on both the surrogate surface and this prediction surrogate surface. Finally, to save the cost of evaluating the evolution processes, we set up criteria for checking the convergence of the evolution process of each response.

Based on these ideas, the outline of our new algorithm is as follows. First, as in the BRSM, we generate a grid over the experimental region, choose initial experiment points by a uniform design, and generate basis functions. Let $P_{exp}$ denote the set of grid points whose responses are being computed. When there is a grid point $x$ in $P_{exp}$ whose $\{f(x, t)\}$ satisfies the convergence criterion, we stop evaluating the evolution process of that point and then carry out the following steps to search for possible effective points:

(1) Predict $f(x, t_x + l)$ by the auto-regressive process for all $x \in P_{exp}$, where $t_x$ is the current evolution variable of $x$.
(2) Construct the surrogate surface from the current response values $f(x, t_x)$.
(3) Construct the prediction surrogate surface from the predictions $\hat{f}(x, t_x + l)$.
(4) Choose the possible effective points from the two surrogate surfaces in (2) and (3), respectively.

Thus at least two new experiment points are added to $P_{exp}$ and their evolution processes begin. The whole algorithm ends when the stopping conditions hold. The details of our algorithm are as follows (the trend-and-prediction step in the last item is sketched right after this list):

• Since it is hard to obtain $\lim_{t \to \infty} f(x, t)$, whether $\{f(x, t)\}$ converges or not, each evolution process is terminated at $t_{limit}$; that is, we set $f(x) = f(x, t_{limit})$. Because we are not interested in small variations of $f(x, t)$, we check the convergence criterion every $\Delta t$ iterations of each evolution process.

• The convergence criteria used to check an evolution process are based on its variance and trend, and we temporarily stop evolution processes that satisfy the criteria. A stopped process may be restarted if its point is chosen from the surrogate surface again.

• To detect the global trend component of the evolution process $\{f(x, t)\}$, given a period $d$ we define $m_t = \sum_{i=0}^{d-1} f(x, t - i)/d$, and then $n$ of the $m_i$'s, $\{m_{(h-n+1)d}, \ldots, m_{hd}\}$, are fitted by an AR($k$) process, where $t = hd$. Thus $\hat{m}_{hd+l}$ can be predicted by this AR($k$) process, and we set $\hat{f}(x, t + l) = \hat{m}_{hd+l}$.
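A minimal sketch of this trend-then-predict step follows. The thesis does not specify its estimation method, so an ordinary least-squares AR($k$) fit is assumed, and the helper names (`fit_ar`, `forecast_ar`, `predict_response`) are ours.

```python
import numpy as np

def fit_ar(series, k):
    """Least-squares fit of y_t = b0 + b1*y_{t-1} + ... + bk*y_{t-k}."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n - k)] +
                        [y[k - j: n - j] for j in range(1, k + 1)])
    beta, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    return beta  # (b0, b1, ..., bk)

def forecast_ar(history, beta, steps):
    """Iterate the fitted recursion `steps` steps ahead, with the noise set to 0."""
    h = list(history)
    k = len(beta) - 1
    for _ in range(steps):
        lags = h[-1:-k - 1:-1]                        # y_{t-1}, ..., y_{t-k}
        h.append(float(beta[0] + np.dot(beta[1:], lags)))
    return h[-1]

def predict_response(f_vals, d, n, k, l):
    """fhat(x, t+l) as in the text: block means m_d, m_{2d}, ... of {f(x,t)},
    the last n means fitted by AR(k), then the mean series extrapolated l steps."""
    m = [np.mean(f_vals[i - d:i]) for i in range(d, len(f_vals) + 1, d)]
    recent = m[-n:]
    return forecast_ar(recent, fit_ar(recent, k), l)
```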

The grid points whose $f(x, t_{limit})$'s have been obtained are denoted by $P_{exp,final}$; the grid points whose evolution processes satisfy the convergence criteria are denoted by $P_{exp,stop}$; and the grid points in $P_{exp,eva}$ are those whose evolution processes are still being computed. Then $P_{exp} = P_{exp,final} \cup P_{exp,stop} \cup P_{exp,eva}$. Finally, our new algorithm, called the Time-series Basis-Based Response Surface Method (TBRSM for short), is as follows:

(1) Generate a grid $P$ containing $N$ points on the experimental region $\mathcal{X}$.
(2) Choose $P_{init}$ initial experiment points by a uniform design and define $P_{exp} = P_{init}$.
(3) Generate a Gabor dictionary $\{\phi_j, j = 1, \ldots, M\}$.
(4) Choose $t_{limit}$ and $\Delta t$.
(5) Evaluate $f(x, t_{limit})$ for all $x \in P_{init}$ and update $P_{exp,final} = P_{exp}$.
(6) Repeat until the effective points are found:
    (6.1) If there exists $x_1 \in P_{exp,eva}$ such that $f(x_1, t_{limit})$ has been obtained, or the evolution process of some $x_2 \in P_{exp,eva}$ satisfies the convergence criteria, go to Step (6.2); otherwise evaluate $f(x, t)$ until $t = t_x + \Delta t$ for all $x \in P_{exp,eva}$.
    (6.2) Update $P_{exp,final} = P_{exp,final} \cup x_1$, $P_{exp,stop} = P_{exp,stop} \cup x_2$, and $P_{exp,eva} = P_{exp,eva} \setminus (x_1 \cup x_2)$.
    (6.3) Construct the current surrogate surface by matching pursuit.
    (6.4) Predict $\hat{f}(x, t_x + l)$ by the corresponding AR($k$) process for all $x \in P_{exp,eva}$; if $x \in (P_{exp,final} \cup P_{exp,stop})$, set $\hat{f}(x, t_x + l) = f(x, t_x)$. Construct the prediction surrogate surface by matching pursuit.
    (6.5) Find a new possible effective point $x_{new1}$ on the first surrogate surface from $P \setminus (P_{exp,final} \cup P_{exp,eva})$, and search for another possible effective point $x_{new2}$ on the prediction surrogate surface from $P \setminus (P_{exp,final} \cup P_{exp,eva} \cup x_{new1})$. Update $P_{exp,eva} = P_{exp,eva} \cup x_{new1} \cup x_{new2}$, $P_{exp,stop} = P_{exp,stop} \setminus (x_{new1} \cup x_{new2})$, and $P_{exp} = P_{exp,final} \cup P_{exp,stop} \cup P_{exp,eva}$.

A skeleton of this loop is sketched below.
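The following Python skeleton mirrors Steps (5)-(6.5) at a high level. All the callables are stand-ins we introduce for illustration (the thesis defines no such API), and the stopping rule here, stop once every grid point has been explored, is a simplification of "until the effective points are found."

```python
def tbrsm(grid, init_points, evolve, converged, ar_predict, pick_from_surrogate,
          in_roi, t_limit, dt):
    """Skeleton of the TBRSM loop.  `evolve(x, hist, n)` appends n evolution
    steps, `converged(hist)` applies the convergence criteria, `ar_predict(hist)`
    returns fhat(x, t_x + l), and `pick_from_surrogate(values, candidates)`
    fits the Gabor surrogate by matching pursuit and returns one promising point."""
    hist = {x: evolve(x, [], t_limit) for x in init_points}      # Step (5)
    final = set(init_points)                                     # P_exp,final
    stopped, running = set(), set()                              # P_exp,stop / eva
    while True:
        for x in list(running):                                  # Step (6.1)
            hist[x] = evolve(x, hist[x], dt)
            if len(hist[x]) >= t_limit:
                running.discard(x); final.add(x)                 # Step (6.2)
            elif converged(hist[x]):
                running.discard(x); stopped.add(x)               # Step (6.2)
        explored = final | stopped | running
        current = {x: hist[x][-1] for x in explored}             # Step (6.3)
        predicted = {x: (ar_predict(hist[x]) if x in running else current[x])
                     for x in explored}                          # Step (6.4)
        candidates = [x for x in grid if x not in final and x not in running]
        if not candidates:                                       # simplified stop
            break
        x1 = pick_from_surrogate(current, candidates)            # Step (6.5)
        rest = [x for x in candidates if x != x1]
        x2 = pick_from_surrogate(predicted, rest) if rest else x1
        for x in {x1, x2}:
            hist.setdefault(x, [])
            stopped.discard(x)            # a stopped process may be restarted
            running.add(x)
    return {x for x in final if in_roi(hist[x][-1])}
```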

3. Computing Experiments

In this section we first introduce a real problem and then describe the construction of the three simulations used to show the performance of our new algorithm.

3.1. The Lyapunov Exponents

A dynamical system modeling absorptive bistable laser diodes with an electronically controlled external drive has been studied in Wang et al. (2001). The dynamical system can be represented by the following rate equations:

$$
\begin{aligned}
\frac{dN_{e1}}{dT} &= S_{p1} + m_c \sin(2\pi \cdot m_f \cdot T) - \alpha_1 N_{e1}^2 - (\alpha_2 N_{e1}^2 + N_{e1} + \alpha_3) N_p - N_{e1}, \\
\eta\,\frac{dN_{e2}}{dT} &= S_{p2} - \alpha_2 N_{e2}^2 - (\alpha_2 N_{e2}^2 + N_{e2} + \alpha_3) N_p - \alpha_4 N_{e2}, \\
\frac{dN_p}{dT} &= N_p \big[\gamma_1 (\alpha_2 N_{e1}^2 + N_{e1} + \alpha_3) + \gamma_2 (\alpha_2 N_{e2}^2 + N_{e2} + \alpha_3)\big] - \frac{N_p}{\eta} + \varepsilon (\gamma_1 N_{e1}^2 + \gamma_2 N_{e2}^2).
\end{aligned}
\tag{4}
$$

The study aimed to establish the existence of chaotic light output from the system and then apply the light output to secure optical communications. One essential indicator characterizing this dynamical system is the Lyapunov exponent (L.E.): a positive L.E. implies that the system is chaotic for the corresponding parameter settings. Here we intend to find parameter values of $S_{p1}$ (the pump rate) and $m_c$ (the modulation current) in Eq. (4) such that the associated L.E. is positive. Finding suitable parameter sets with positive L.E. is difficult for two reasons: first, computing the L.E. of the dynamical system is extremely time-consuming, and the response values are obtained iteratively with respect to the evolution variable $t$; second, the relation between the parameters and the L.E. is exceedingly complicated. Therefore, identifying the target parameter sets efficiently among all possible parameters is an important question.

We first study the evolution processes of the L.E. computation; they can usually be grouped into ten types. Two types are shown in Figure 2, and the other types are displayed in the Appendix. From these figures we find that the trends of these processes can be very smooth or can oscillate extremely.
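The thesis's L.E. computation for system (4) is not reproduced here; as a toy stand-in, the sketch below shows why an L.E. arrives as an evolution process $\{f(x, t)\}$: the running estimate for the logistic map settles only as $t$ grows, exactly the shape of the curves in Figure 2.

```python
import numpy as np

def lyapunov_evolution(r=4.0, x0=0.2, t_max=100_000):
    """Running estimate of the largest Lyapunov exponent of x -> r*x*(1-x).
    history[t-1] plays the role of f(x, t): an iterative estimate that only
    settles as the evolution variable t grows (the limit is ln 2 for r = 4)."""
    x, acc, history = x0, 0.0, []
    for t in range(1, t_max + 1):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))  # log |d/dx r*x*(1-x)| at x_t
        x = r * x * (1.0 - x)                    # advance the dynamical system
        history.append(acc / t)                  # running average = lambda_t
    return np.array(history)

evolution = lyapunov_evolution()
print(evolution[-1])  # close to 0.693; positive => chaotic for these parameters
```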

Figure 2: (a)(b) The evolution processes of $(S_{p1}, m_c) = (20, 5)$ and $(S_{p1}, m_c) = (28, 9.5)$; (c)(d) their parts for $t \in (20000, 40000)$, respectively.

Nevertheless, we think it is reasonable to regard these evolution processes as time series. Two experiment regions are considered here. The first experiment region is $(S_{p1}, m_c) \in [20, 30] \times [5, 15]$, whose true surface is shown in Figure 3. The other region is $(S_{p1}, m_c) \in [25.5, 40] \times [10, 24.5]$, whose true surface is shown in Figure 4.

Figure 3: The true surface of the first L.E. experiment.

Figure 4: The true surface of the second L.E. experiment.

3.2. The simulation cases

We construct three response surfaces similar to the L.E. problem of Section 3.1. The responses are a combination of three two-dimensional exponential functions plus normal noise, i.e.

$$
\begin{aligned}
y ={}& 15 \exp\big(-(0.2(x_1 - 5.7)^2 + 0.2(x_2 + 3.4)^2)\big) \\
&- 8 \exp\big(-(0.12(x_1 - 11.3)^2 + 0.15(x_2 - 2.3)^2)\big) \\
&+ 10 \exp\big(-(0.1(x_1 - 16.4)^2 + 0.2(x_2 - 1.9)^2)\big) + \eta,
\end{aligned}
$$

where $\eta$ comes from Normal$(0, 3)$. The experiment region is $[0, 20] \times [-10, 10]$. To obtain evolution processes similar to those of the L.E. problem, we use different AR($k$) processes to generate the evolution processes in our simulations. To decide the parameters of each AR($k$) process, the order $k$ is sampled from DiscreteUniform$(2, 5)$; the corresponding AR coefficients $\beta_1, \ldots, \beta_k$ are drawn i.i.d. from Uniform$(-1.5, 1.5)$; and the noise term $\varepsilon$ comes from Normal$(0, \omega)$. In the first simulation we choose $\omega = 0.2$, and the response surface is shown in Figure 5. We construct a second simulation with $\omega = 0.15$; its response surface is shown in Figure 6. Finally, the third simulation is constructed under the same assumptions as the second, and the corresponding surface is shown in Figure 7.

Figure 5: The response surface of the first simulation.
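A sketch of this simulation scheme follows. The thesis states only that AR($k$) processes generate the evolutions, so how the AR disturbance is attached to the limit value $f(x)$, the rejection of non-stationary coefficient draws, and the damping envelope are all our assumptions, made for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_response(x1, x2):
    """The three-bump test surface above (the Normal(0,3) surface noise omitted)."""
    return (15.0 * np.exp(-(0.2 * (x1 - 5.7) ** 2 + 0.2 * (x2 + 3.4) ** 2))
            - 8.0 * np.exp(-(0.12 * (x1 - 11.3) ** 2 + 0.15 * (x2 - 2.3) ** 2))
            + 10.0 * np.exp(-(0.1 * (x1 - 16.4) ** 2 + 0.2 * (x2 - 1.9) ** 2)))

def simulate_evolution(x1, x2, t_limit=1000, omega=0.2):
    """One evolution process {f(x,t)} with the parameter distributions above."""
    while True:
        k = int(rng.integers(2, 6))                  # k ~ DiscreteUniform(2, 5)
        beta = rng.uniform(-1.5, 1.5, size=k)        # beta_i ~ Uniform(-1.5, 1.5)
        companion = np.diag(np.ones(k - 1), -1)      # AR(k) companion matrix
        companion[0, :] = beta
        if np.max(np.abs(np.linalg.eigvals(companion))) < 1.0:
            break                                    # keep only stationary draws
    y = np.zeros(t_limit + k)
    for t in range(k, t_limit + k):                  # AR(k) recursion with
        y[t] = beta @ y[t - k:t][::-1] + rng.normal(0.0, omega)  # eps~N(0,omega)
    damp = np.exp(-5.0 * np.arange(t_limit) / t_limit)  # force convergence to f(x)
    return true_response(x1, x2) + damp * y[k:]
```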

Figure 6: The response surface of the second simulation.

Figure 7: The response surface of the third simulation.

4. Experimental Results

We first introduce the surrogate construction used in our simulations and real experiments. Let the $N \times 1$ response vector over the grid $P$ be $V_P = (f(x_1), \ldots, f(x_N))^T$, where $N$ is the number of grid points, and let the set of basis functions $\{\phi_j, j = 1, \ldots, M\}$ be an overcomplete Gabor dictionary with $M > N$, where $\phi_j$ is defined by (1) to (3). Without loss of generality, assume that $P_{exp}$ contains $p$ experiment points, denoted $x_1, \ldots, x_p$, and define the $p \times 1$ vector $\tilde{V}_{P_{exp}} = (f(x_1), \ldots, f(x_p))^T$. Let $e_{x_i}$ be the $N \times 1$ unit vector whose entries are all zero except the one corresponding to the point $x_i$, which equals one, and let $I_p = (e_{x_1}, \ldots, e_{x_p})^T$ be the $p \times N$ identification matrix whose $i$th row is $e_{x_i}^T$. Thus $\tilde{V}_{P_{exp}} = I_p V_P$. The surrogate surface $\sum_{j=1}^{M} \tilde{c}_j \phi_j$ is then constructed by choosing the $\tilde{c}_j$'s to minimize the modeling error on $P_{exp}$,

$$\Big\| \tilde{V}_{P_{exp}} - \sum_{j=1}^{M} \tilde{c}_j \tilde{\phi}_j \Big\|,$$

where $\tilde{\phi}_j = I_p \phi_j$; this minimization is accomplished by the matching pursuit algorithm. Owing to limited computer resources, in our simulations and real experiments at most four evolution processes run in the computer system at any time; that is, the number of points in $P_{exp,eva}$ is at most 4.
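A compact sketch of the matching pursuit step used to choose the $\tilde{c}_j$'s follows, under the usual greedy formulation of Mallat and Zhang (1993); the column-normalization details and the stopping parameters are our choices.

```python
import numpy as np

def matching_pursuit(v, Phi, n_atoms=50, tol=1e-8):
    """Greedy matching pursuit: repeatedly pick the dictionary column most
    correlated with the residual and subtract its projection.

    v   : p-vector, the observed responses V~_Pexp
    Phi : p x M restricted dictionary whose columns are phi~_j = I_p phi_j
    Returns the coefficient vector c~ (length M, mostly zero)."""
    norms = np.linalg.norm(Phi, axis=0)
    norms = np.where(norms > 0, norms, 1.0)     # guard against empty columns
    residual = np.asarray(v, dtype=float).copy()
    coef = np.zeros(Phi.shape[1])
    for _ in range(n_atoms):
        corr = (Phi.T @ residual) / norms       # normalized inner products
        j = int(np.argmax(np.abs(corr)))        # best-matching atom
        c = (Phi[:, j] @ residual) / norms[j] ** 2
        coef[j] += c
        residual -= c * Phi[:, j]               # peel the atom off the residual
        if np.linalg.norm(residual) < tol:
            break
    return coef

# The surrogate over the whole grid is then full_Phi @ coef, with full_Phi the
# N x M unrestricted dictionary; its largest entries suggest effective points.
```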

This section is divided into three parts. Section 4.1 presents the results of our simulations; the results of the L.E. experiments are shown in Section 4.2; finally, the performance of TBRSM is compared with that of similar algorithms in Section 4.3.

4.1. Simulation results

In this subsection TBRSM is applied to the three simulations of Section 3.2. In all three, the experiment region is $[0, 20] \times [-10, 10]$, and the grid is $\{(x_1, x_2) \mid x_1 \in \{0, 1, \ldots, 20\},\ x_2 \in \{-10, -9, \ldots, 10\}\}$; that is, the grid contains $21 \times 21$ points. The ROI is $\{x \mid f(x) > 7\}$, giving 22, 23, and 24 effective points in the three simulations, respectively. We set $t_{limit} = 1000$ and $\Delta t = 20$. A uniform design with 2 factors is used to select 21 initial experiment points. Since the evolution processes are generated by different AR processes with different orders, we need to choose a suitable order $k$ for the AR processes in our algorithm. Having the evolution processes of the initial experiments, $\{f(x, t), t = 0, \ldots, t_{limit}\}$ for all $x \in P_{init}$, we fit AR processes of orders $k = 2, \ldots, 6$ to them and, based on the minimum sum of squared errors, choose $k = 6$. For the other parameters of the algorithm we choose $n = 13$, $d = 3$, and $l = 6$. The convergence criterion used in the simulations is that the event "the average of the last 60 response values is less than 5 and their variance is less than $10^{-1}$" occurs 5 times in a row; a sketch of such a checker follows the simulation results below. We summarize the simulation results as follows:

• In the first simulation, TBRSM successfully identifies all 22 effective points using 248 experiment points. Aside from 3 effective points found among the initial points, the others are located when TBRSM has used 29, 33, 35, 37, 39, 43, 51, 55, 85, 93, 95, 99, 131, 133, 147, 161, 167, 185, and 248 points. Except for the last effective point, our algorithm is efficient: most effective points are identified with only about one third of the grid points used. The processes of choosing experiment points in this simulation after 40, 80, 120, and 160 experiment points are shown in Figure 8, where 9, 12, 15, and 19 effective points have been identified, respectively. The black x's are initial points; the red o's are effective points; the dots are possible points chosen from the surrogate surface; and the *'s are chosen from the prediction surrogate surface. From Figure 5 we know the effective points are located around two mounds, and TBRSM appears to locate these two mounds quickly, so the algorithm correctly detects the trend of the surface. Figure 9 shows the locations of all experiment points used in this simulation. To find the last effective point, $(17, -3)$, the algorithm has to spend many experiment points, because it is the only effective point in its neighborhood.

Figure 8: The processes of choosing experiment points of TBRSM in the first simulation, after 40, 80, 120, and 160 experiment points.

Figure 9: Profile of the 248 experiment points in the first simulation.

• In the second simulation, TBRSM uses 275 experiment points to identify the 23 effective points. Aside from 2 effective points found among the initial points, the others are located when 22, 23, 31, 49, 57, 69, 77, 93, 115, 119, 161, 168, 169, 190, 195, 196, 200, 212, 224, 261, and 275 experiment points have been used; about half of the grid points suffice to identify all effective points. The processes of choosing experiment points after 40, 80, 120, and 160 experiment points are shown in Figure 10, where 5, 9, 12, and 13 effective points have been identified, respectively. According to the true response surface (Figure 6), the effective points are located around two mounds. From Figure 10 we find that TBRSM identifies the effective points around the left mound quickly and then moves on to search the right mound. Figure 11 shows the locations of all explored experiment points; the algorithm uses many points to search the right mound, and in Figure 10(c)(d) many points are chosen around the left mound.

• Finally, in the third simulation, TBRSM identifies the 24 effective points while choosing 166 points. TBRSM uses 28, 29, 37, 39, 41, 47, 51, 57, 59, 73, 75, 77, 97, 107, 115, 119, 123, 129, 133, 151, 160, and 166 experiment points to locate all effective points except the 2 found among the initial points; thus the algorithm identifies the effective points using about one third of the grid points. The processes of choosing experiment points after 40, 80, 120, and 160 experiment points are shown in Figure 12, where 6, 13, 17, and 23 effective points have been identified, respectively. Figure 13 presents all 166 experiment points on the grid. TBRSM chooses experiment points only around the two mounds, because the response values of the effective points in Figure 7 differ significantly from the others.

Figure 10: The processes of choosing experiment points of TBRSM in the second simulation, after 40, 80, 120, and 160 experiment points.

Figure 11: Profile of the 275 experiment points in the second simulation.

Figure 12: The processes of choosing experiment points of TBRSM in the third simulation, after 40, 80, 120, and 160 experiment points.

Figure 13: Profile of the 166 experiment points in the third simulation.
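The convergence criterion quoted in Section 4.1 can be implemented as a small stateful checker; the sketch below reads the criterion literally, and the helper name `make_convergence_checker` is ours. One checker instance is needed per evolution process, since the consecutive-hit counter is per point.

```python
import numpy as np

def make_convergence_checker(window=60, mean_tol=5.0, var_tol=0.1, needed=5):
    """The event 'mean of the last `window` values < mean_tol and their
    variance < var_tol' must hold in `needed` consecutive checks (one check
    every Delta-t evolution steps)."""
    hits = 0
    def check(history):
        nonlocal hits
        if len(history) < window:
            return False
        tail = np.asarray(history[-window:], dtype=float)
        hits = hits + 1 if (tail.mean() < mean_tol and tail.var() < var_tol) else 0
        return hits >= needed
    return check

converged = make_convergence_checker()  # simulation settings of Section 4.1
# For the L.E. experiments below: window=1000, mean_tol=1e-7, var_tol=1e-7.
```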

4.2. The results for the two experiments of L.E.

There are two experiment regions for the L.E. problem. The grid set of the first experiment region is $\{(S_{p1}, m_c) \mid S_{p1} \in \{20, 20.5, \ldots, 29.5, 30\},\ m_c \in \{5, 5.5, \ldots, 14.5, 15\}\}$, and its 21 effective points lie in the right part of the region. The grid set of the second experiment region is $\{(S_{p1}, m_c) \mid S_{p1} \in \{25.5, 26, \ldots, 39.5, 40\},\ m_c \in \{10, 10.5, \ldots, 24, 24.5\}\}$, and its 25 effective points are spread over three areas of the region. Here we set the iteration upper limit $t_{limit}$ to 100000 and $\Delta t$ to 1000, so that the evolution processes can be fitted well by AR processes without wasting too much computing resource. To choose a suitable order $k$ for all the AR processes, we randomly choose ten periods from the evolution processes of the initial points and fit them by AR processes of different orders; the results for two points are shown in Figure 14 (this order selection is sketched below, after Figure 14), and based on the minimum error sum of squares we find that $k = 20$ is sufficient to fit these processes. The other parameters of TBRSM are $n = 50$, $d = 40$, and $l = 10$ in the L.E. experiments. After analyzing the evolution processes of the initial points, we set the stopping criterion to be that the event "the average of the last 1000 response values is less than $10^{-7}$ and their variance is less than $10^{-7}$" occurs 5 times in a row. The results of the two real experiments are described in the following:

• In the first L.E. experiment, TBRSM identifies the 21 effective points successfully with 133 experiment points. Except for 2 effective points found among the initial points, TBRSM uses 31, 45, 49, 51, 62, 73, 75, 83, 85, 86, 87, 88, 89, 101, 127, 129, 131, and 133 experiment points to locate the other effective points; only about one third of the grid points are needed to identify all of them. From Figure 3 we know that the effective points lie in the right part of the experiment region, while the left part is a smooth surface. Figure 15 shows the processes of choosing experiment points after 40, 80, 120, and 131 experiment points, where 5, 12, 18, and 21 effective points have been identified, respectively. Figure 16 shows the locations of all 133 experiment points: TBRSM focuses on the hot spot area.

• In the second L.E. experiment, TBRSM identifies the 25 effective points successfully while 322 experiment points are used. Except for one effective point found among the initial points, TBRSM uses 39, 40, 58, 88, 101, 102, 105, 106, 120, 137, 138, 144, 168, 176, 196, 212, 216, 222, 228, 248, 266, 270, 276, and 322 experiment points to locate the other effective points; again only about one third of the grid points are chosen. Because the initial experiment points provide less information here, the selected experiments spread over the whole experiment region, but TBRSM finally identifies the three hot spots. The processes of choosing experiment points after 60, 120, 180, and 240 points are shown in Figure 17, where 4, 10, 15, and 21 effective points have been identified, respectively. Figure 18 shows the locations of all 322 experiment points.

Figure 14: Broken-line graphs of SSE against AR order in the L.E. experiments, for two points.
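The order selection behind Figure 14 amounts to fitting AR models of several orders to the sampled segments and keeping the order with the smallest in-sample error sum of squares; the sketch below assumes a least-squares fit, and the range of candidate orders is our guess from the figure's axis.

```python
import numpy as np

def ar_sse(y, k):
    """In-sample SSE of a least-squares AR(k) fit to the series y."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n - k)] +
                        [y[k - j: n - j] for j in range(1, k + 1)])
    beta, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    return float(np.sum((y[k:] - X @ beta) ** 2))

def choose_order(segments, orders=range(12, 27)):
    """Total SSE over all sampled segments for each candidate order; the
    minimizer is the order used for every AR process (k = 20 in Section 4.2)."""
    totals = {k: sum(ar_sse(seg, k) for seg in segments) for k in orders}
    return min(totals, key=totals.get)
```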

Figure 15: The processes of choosing experiment points of TBRSM in the first L.E. experiment.

Figure 16: Profile of the 133 experiment points in the first L.E. experiment.

Figure 17: The processes of choosing experiment points of TBRSM in the second L.E. experiment.

Figure 18: Profile of the 322 experiment points in the second L.E. experiment.

4.3. Comparison

In this subsection we compare TBRSM with three other algorithms, introduced first:

(1) In TBRSM, AR processes are used to fit the evolution processes; based on their predictions, the second surrogate surface is constructed and one possible effective point is chosen from it. To show the effect of the AR processes, we form another algorithm, the Iterated Basis-Based Response Surface Method (IBRSM). The only difference between TBRSM and IBRSM is that IBRSM does not use AR processes to characterize the evolution processes; instead of adding a point from the prediction surrogate surface, it chooses two points from the current surrogate surface.

(2) Since the response function $f$ is assumed unknown or very complicated, an intuitive way to find the effective points is to fix a grid, randomly pick grid points, and check whether the corresponding responses belong to the ROI. We call this algorithm RAND, because each grid point is picked with equal probability.

(3) We also apply the BRSM to these simulations and experiments; for the BRSM, the evolution processes are ignored.

To compare the results of these algorithms, the computing cost of each algorithm is evaluated.

The computing cost can be divided into two parts: the number of explored experiment points, and the evolution processes of those points. The total computing cost of each of the four algorithms therefore takes one of two forms, according to whether the evolution processes are ignored. If they are ignored, as in RAND and BRSM, the total computing cost equals the number of experiment points times $t_{limit}$, i.e. $|P_{exp}| \times t_{limit}$. Otherwise, the total computing cost is the sum of the numbers of steps in the evolution processes of the points in $P_{exp}$, i.e. $\sum_{x \in P_{exp}} t_x$. In addition to the computing cost, we use the result of the BRSM as a baseline for the efficiency of locating $n$ effective points, i.e. the ratio of the computing costs for identifying $n$ effective points, where $n \le m$, the total number of effective points. The efficiency of an algorithm for finding $n$ effective points is defined as

$$\frac{Costs_{Alg}(n)}{Costs_{BRSM}(n)}, \quad n = 1, \ldots, m,$$

where $Costs_{Alg}(n)$ is the total computing cost of that algorithm for finding $n$ effective points. A small sketch of these two cost measures appears below.

First consider the results of the three simulations. Figures 19, 20, and 21 plot the total computing cost against the number of effective points found. The computing cost of RAND forms an oblique line that divides each graph into two parts: a broken line in the upper-left part means the corresponding algorithm performs better than RAND. In all three simulations, the performances of BRSM, TBRSM, and IBRSM are better than that of RAND. Among the other three algorithms, TBRSM usually performs best, especially in the first simulation, where IBRSM and BRSM spend more than TBRSM in searching for the last effective points. The relative efficiencies for the three simulations are shown in Figures 22 to 24. RAND spends more than twice the cost of BRSM to identify the effective points, whereas TBRSM and IBRSM save a great deal; clearly, TBRSM performs best among these algorithms.

Now we present the comparisons for the two L.E. experiments. Figures 25 and 26 show the total cost graphs. RAND again does not perform well. TBRSM and IBRSM also outperform BRSM, especially in searching for the last effective points in the first L.E. experiment and in the beginning stages of the second L.E. experiment.
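The two cost measures and the relative efficiency translate directly into code; this small sketch simply mirrors the definitions above, with helper names of our choosing.

```python
def total_cost(t_steps, t_limit=None):
    """Total computing cost of one run.  When the evolution processes are
    ignored (RAND, BRSM) every explored point costs t_limit; otherwise the
    cost is the sum of the evolution steps t_x actually spent per point."""
    if t_limit is not None:
        return len(t_steps) * t_limit
    return sum(t_steps)

def relative_efficiency(costs_alg, costs_brsm):
    """Costs_Alg(n) / Costs_BRSM(n), n = 1, ..., m, with BRSM as the baseline."""
    return [a / b for a, b in zip(costs_alg, costs_brsm)]
```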

Figure 19: The total cost of the first simulation.

Figure 20: The total cost of the second simulation.

Figure 21: The total cost of the third simulation.

Figure 22: The relative efficiencies of the first simulation (final values: RAND 2.1758, BRSM 1, TBRSM 0.64188, IBRSM 0.83515).

Figure 23: The relative efficiencies of the second simulation (final values: RAND 1.8047, BRSM 1, TBRSM 0.64338, IBRSM 0.72216).

Figure 24: The relative efficiencies of the third simulation (final values: RAND 2.4236, BRSM 1, TBRSM 0.63691, IBRSM 0.66274).

Figure 25: The total cost of the first L.E. experiment.

Figure 26: The total cost of the second L.E. experiment.

Figure 27: The relative efficiencies of the first L.E. experiment (final values: RAND 3.1629, BRSM 1, TBRSM 0.70405, IBRSM 0.66662).

Figure 28: The relative efficiencies of the second L.E. experiment (final values: RAND 2.2277, BRSM 1, TBRSM 0.40573, IBRSM 0.48857).

However, the difference between TBRSM and IBRSM is not significant in these two experiments; the AR processes may therefore not provide enough useful information for exploring experiment points. The relative efficiencies are shown in Figures 27 and 28. The relative efficiencies of TBRSM and IBRSM decrease steadily and are below 50% of the BRSM cost for identifying all effective points in the first L.E. experiment; in the second experiment, the relative efficiencies of TBRSM stay below 50% throughout the whole procedure.

5. Conclusion

In this thesis we propose a novel algorithm, TBRSM, for searching the effective points of response surfaces when the response value of each experimental point is computed by an evolution process, which is modeled as a simple time series, the auto-regressive process. TBRSM is an iterative algorithm: at each iteration, possible effective points are found from two surrogate surfaces constructed from the current values and the predictions, respectively. As in the BRSM, each surrogate surface is a linear combination drawn from a predefined overcomplete basis dictionary, and its coefficients are inferred by the matching pursuit algorithm. In our simulation studies and two real experiments, TBRSM successfully locates all effective points with few experiment points. However, in the first real experiment IBRSM performs better than TBRSM. The reason may lie in the process assumption on the evolution processes: TBRSM should outperform IBRSM when the AR process is a correct model of the evolution process, because the purpose of the AR process is to detect the trend of the evolution process and then to predict its next steps. Besides the AR process, the spline method proposed in Yiu et al. (2001) would be another possible choice.

In Section 4 we evaluate the evolution processes sequentially and limit the number of points whose evolution processes are running to at most four. We assume these evolution processes run independently, so that they can be evaluated separately.

With more computing resources, these processes could be distributed to different resources, and parallel computing techniques could then be applied to improve the efficiency of TBRSM.

A. Appendix: The figures of evolution processes for Lyapunov exponents

Figure 29: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (20, 5)$ and its part for $t \in (20000, 40000)$.

Figure 30: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (28, 9.5)$ and its part for $t \in (20000, 40000)$.

Figure 31: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (26, 5)$ and its part for $t \in (20000, 40000)$.

Figure 32: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (26.5, 5)$ and its part for $t \in (20000, 40000)$.

Figure 33: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (29.5, 10.5)$ and its part for $t \in (20000, 40000)$.

Figure 34: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (29.5, 13)$ and its part for $t \in (20000, 40000)$.

Figure 35: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (32.5, 10)$ and its part for $t \in (20000, 40000)$.

Figure 36: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (30, 7.5)$ and its part for $t \in (20000, 40000)$.

Figure 37: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (30, 10.5)$ and its part for $t \in (20000, 40000)$.

Figure 38: The evolution process of the Lyapunov exponent at $(S_{p1}, m_c) = (33.5, 16.5)$ and its part for $t \in (20000, 40000)$.

References

[1] F. Bergeaud and S. Mallat (1995). Matching pursuit of images. 1995 International Conference on Image Processing, 1, 53-56.
[2] G. E. P. Box and K. B. Wilson (1951). On the experimental attainment of optimum conditions. Journal of the Royal Statistical Society, Ser. B, 13, 1-45.
[3] R.-B. Chen, W. Wang and F. Tsai (2006). A basis-based response surface method for computer experiment optimization. Technical report, Department of Applied Mathematics and Institute of Statistics, National University of Kaohsiung.
[4] K. T. Fang, D. K. J. Lin, P. Winker and Y. Zhang (2000). Uniform design: theory and application. Technometrics, 42, 237-248.
[5] B. MacLennan (1991). Gabor representations of spatiotemporal visual images. Technical report CS-91-144.
[6] S. Mallat and Z. Zhang (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41, 3397-3415.
[7] T. S. Parker and L. O. Chua (1989). Practical Numerical Algorithms for Chaotic Systems. Springer-Verlag, New York.
[8] J. Sacks, W. J. Welch, T. J. Mitchell and H. P. Wynn (1989). Design and analysis of computer experiments. Statistical Science, 4(4), 409-435.
[9] W. Wang and R.-B. Chen (2004). Basis representation methodology for response surfaces. Technical report, Department of Applied Mathematics and Institute of Statistics, National University of Kaohsiung.
[10] W. Wang, T.-M. Hwang, C. Juang, J. Juang, C.-Y. Liu and W.-W. Lin (2001). Chaotic behaviors of bistable laser diodes and its application in synchronization of optical communication. Japanese Journal of Applied Physics, 40(10), 5914-5919.
[11] K. F. C. Yiu, S. Wang, K. L. Teo and A. C. Tsoi (2001). Nonlinear system modeling via knot-optimizing B-spline networks. IEEE Transactions on Neural Networks, 12(5), 1013-1022.
