

4.4.2 Multiple Dynamic Trajectories Prediction

For a dynamic trajectory prediction problem, the goal is to estimate the initial position and velocity of a moving object from the measured data. For the trajectory prediction of multiple targets, we assume here that target detection has already been carried out, and we focus on estimating the initial states of the multiple moving objects. Through a learning process, the NSOMS may determine the most probable initial state of each target by repeatedly comparing the measured data with the trajectories predicted from the candidate initial states stored in the neurons of the SOM.
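As a point of reference, the following minimal sketch (in Python) illustrates the basic idea of scoring a candidate initial state by simulating a trajectory from it and comparing the result with the measured position data. The simple constant-gravity propagator used here is only a stand-in for the nonlinear dynamic and measurement models of Chapter 3 (Eqs. (3.25) and (3.28)), and all names are illustrative.

    import numpy as np

    def propagate(initial_state, n_steps, dt=1.0, g=9.8):
        """Stand-in ballistic propagator: constant velocity plus gravity.

        initial_state = [x, y, z, vx, vy, vz]; the dissertation instead uses
        the nonlinear missile dynamics of Chapter 3.
        """
        pos = np.asarray(initial_state[:3], dtype=float)
        vel = np.asarray(initial_state[3:], dtype=float)
        trajectory = []
        for _ in range(n_steps):
            trajectory.append(pos.copy())
            pos = pos + vel * dt
            vel = vel + np.array([0.0, 0.0, -g]) * dt
        return np.array(trajectory)                  # shape: (n_steps, 3)

    def trajectory_error(initial_state, measured_positions, dt=1.0):
        """Sum of squared distances between predicted and measured positions."""
        predicted = propagate(initial_state, len(measured_positions), dt)
        return float(np.sum((predicted - np.asarray(measured_positions)) ** 2))

A candidate initial state with a small trajectory error is one whose predicted trajectory closely follows the measurements; the learning algorithm below searches for such states.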

In this application, the nonlinear dynamic equation describing the trajectory of the moving object and the measurement equation are as those described in Chapter 3. The learning algorithm for multiple trajectories prediction is organized as follows.

Algorithm for multiple trajectory prediction based on the NSOMS: predict an optimal initial state for the trajectory of every moving target using the measured position data.

Step 1: Set the stage of learning k = 0. Choose the number of niches H, the number of neurons N within each niche, and the reference value P_r. Estimate the ranges of the possible positions and velocities of the moving objects, and randomly store possible initial states w_j^h(0) into the neurons, where j = 1, . . . , N and h = 1, . . . , H.

Step 2: Send w_j^h(k) into the dynamic model, described in Eqs. (3.25) and (3.28), to compute p_j^h(k).

Step 3: For each neuron j of every niche h, compute its output O_j^h(k) as the product of the per-object terms over the M detected objects:

O_j^h(k) = ∏_{m=1}^{M} ( · ),

where M is the number of objects detected.

Find the winning neuron j* with the minimum O_j^h(k):

O_{j*}^h(k) = min_j O_j^h(k).    (4.18)

Step 4: Update the weight vectors of the winning neuron j* and its neighbors in every niche, and update the positions of the neurons of the entire network.

Step 5: If ∑_{i=1}^{q} σ̃_i^h(k) < P_r for every niche, w_j^h(k) is determined to be an effective optimal solution, with duplicate optimal solutions excluded. The prediction process outputs the predicted optimal initial states to the dynamic model to derive the object trajectories. Following Eqs. (4.6) and (4.7), new w_j^h(k) are then randomly regenerated and added into the candidate set.

Step 6: Check whether the number of iterations is smaller than a pre-specified maximum number of iterations. If it is, let k = k + 1 and go to Step 2; otherwise, the prediction process is completed and the optimal initial states of all objects are output. The final network mapping provides a visualization of the distribution of the optimal states. In addition, a number of learning iterations is performed within each stage of learning to increase the SOM learning speed. This number is set to a large value in the initial stages, so that the NSOMS converges faster at the price of more oscillation, and is gradually decreased to achieve smooth learning in the later stages.
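To make the flow of Steps 1-6 concrete, the sketch below outlines one possible realization of the learning loop. It is not the dissertation's implementation: the niche update of Step 4 is replaced by a plain SOMO-style move toward the winner, the convergence test of Step 5 uses the sum of per-dimension standard deviations of the niche weights as a proxy for the σ̃ criterion, and it reuses the trajectory_error helper from the earlier sketch; all parameter names are illustrative.

    import numpy as np

    def nsoms_predict(measured, ranges, H=5, N=225, max_iters=200,
                      P_r=1e3, lr=0.4, dt=1.0, rng=None):
        """Illustrative skeleton of the multiple-trajectory prediction loop.

        measured : list of M arrays of measured positions, one per detected target
        ranges   : (6, 2) array of lower/upper bounds on the initial state
        """
        rng = np.random.default_rng() if rng is None else rng
        # Step 1: store random initial states w_j^h(0) inside the estimated ranges.
        w = rng.uniform(ranges[:, 0], ranges[:, 1], size=(H, N, 6))
        solutions = []

        for k in range(max_iters):                    # Step 6 bounds the loop
            for h in range(H):
                # Steps 2-3: evaluate each neuron; the output is the product of
                # the per-target trajectory errors over the M detected objects.
                O = np.array([
                    np.prod([trajectory_error(w[h, j], z, dt) for z in measured])
                    for j in range(N)
                ])
                j_star = int(np.argmin(O))            # winner, Eq. (4.18)

                # Step 4 (simplified): move all neurons of the niche toward the
                # winner with a Gaussian neighborhood plus a small random term,
                # standing in for the NSOMS dynamic weight updating rule.
                dist = np.abs(np.arange(N) - j_star)
                nbhd = np.exp(-(dist ** 2) / (2.0 * (N / 10.0) ** 2))
                w[h] += lr * nbhd[:, None] * (w[h, j_star] - w[h]) \
                        + 0.01 * rng.normal(size=(N, 6))

                # Step 5 (simplified): if the niche has collapsed, record its
                # best state as an effective solution and re-seed the niche.
                if w[h].std(axis=0).sum() < P_r:
                    solutions.append(w[h, j_star].copy())
                    w[h] = rng.uniform(ranges[:, 0], ranges[:, 1], size=(N, 6))
        return solutions

In the actual NSOMS, Step 4 also updates the 2D positions of the neurons, which is what produces the cluster visualizations shown later in Figures 4.8 and 4.10.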

To demonstrate the effectiveness of the proposed NSOMS and its weight updating rule, we performed a series of simulations for dynamic trajectory prediction using the proposed NSOMS and two variants without the proposed dynamic weight updating rule (named SOMSO-1 and SOMSO-2, which use the SOMO and SOSENs weight updating rules, respectively, in place of that of the NSOMS). The trajectory to be predicted was designed to emulate that of a missile; its governing equations of motion in the 3D Cartesian coordinate system are as described in Chapter 3. The ranges of the possible initial states w_j(0) were estimated to be

1.14 × 10^6 m ≤ x1(0) ≤ 2.14 × 10^6 m    (4.19)

Within the ranges described in Eq. (4.19), the possible initial positions and velocities of the missile were selected and stored into the 1125 (5 × 225) neurons of the 2D SOM. Three targets are considered in the following simulations. The parameters of the NSOMS were set to n1 = 2 and n0 = 1, and the additional adaptation term ε, described in Eq. (4.4), was set to 0.1. For comparison, the learning rate was set the same as that of the NSOMS described in Sect. 4.4.1, and several parameters of the SOMSO-1 and SOMSO-2 were adjusted via a trial-and-error process to yield salient performance.

The number of learning iterations was set to 20 during each stage of learning.
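As an illustration of the initialization described above, the snippet below draws the 1125 (5 × 225) candidate initial states uniformly from the estimated ranges. Only the x1(0) bound of Eq. (4.19) survives in the text, so the remaining position and velocity bounds below are placeholders, not the values used in the dissertation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Estimated ranges of the initial state [x1, x2, x3, v1, v2, v3].
    # Only the x1(0) bound comes from Eq. (4.19); the rest are placeholders.
    ranges = np.array([
        [1.14e6, 2.14e6],     # x1(0) in metres, Eq. (4.19)
        [1.0e6,  2.0e6],      # x2(0)  (placeholder bound)
        [0.0,    1.0e5],      # x3(0)  (placeholder bound)
        [-3.0e3, 3.0e3],      # v1(0)  (placeholder bound)
        [-3.0e3, 3.0e3],      # v2(0)  (placeholder bound)
        [-1.0e3, 1.0e3],      # v3(0)  (placeholder bound)
    ])

    H, N = 5, 225             # 5 niches x 225 neurons = 1125 neurons
    w0 = rng.uniform(ranges[:, 0], ranges[:, 1], size=(H, N, 6))
    print(w0.shape)           # (5, 225, 6): one candidate initial state per neuron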

We first applied the NSOMS, SOMSO-1, and SOMSO-2 for trajectory prediction with a good estimate of the initial state. The three ideal initial states of the missiles were assumed to lie within the estimated range, and the variance of the measurement noise was set to (15 m)^2. Figure 4.7 shows the simulation results. The ideal and measured trajectories are shown in Figure 4.7(a). All three methods predicted the initial state quite well and thus resulted in very small estimation errors, except in the initial stage of the prediction, as shown in Figures 4.7(b)-(d) (the estimated initial state error is shown for illustration).

We observed that the NSOMS converged faster than the other methods did. Figure 4.8 shows the neighboring relationship of neurons of the best three niches using the NSOMS.

In Figure 4.8(a), starting from a random distribution of neurons at the beginning of the learning, the mapping structure gradually formed three clusters as the learning progressed.

Figure 4.8(b) shows how the neighborhood function values D_i and F(w_i(k)) varied during the SOM learning process; they eventually became very close to each other.

In the second set of simulations, we investigated the performance of the three methods under a bad estimate of the initial state. The ranges of the possible initial states w_j(0) were estimated to be

In this simulation, the three ideal initial states were assumed to lie outside the estimated range, and the variance of the measurement noise was enlarged to (50 m)^2. All other parameters were set the same as in the first simulation. Figure 4.9(a) shows the ideal and measured trajectories, and Figures 4.9(b)-(d) show the estimated initial state errors. From the results, the influence of the bad estimate on these methods was felt mostly at the initial stage of the prediction. After the transient, the NSOMS still managed to find the optimal initial states of all targets.

Figure 4.7 Simulation results for the multiple trajectories prediction using the NSOMS, SOMSO-1, and SOMSO-2 with a good estimate of the initial state: (a) the ideal and measured TBM trajectories; (b)-(d) the estimated initial state (position) errors, in m, obtained by the NSOMS, SOMSO-1, and SOMSO-2, respectively.

Figure 4.8 Final results obtained by the NSOMS for the multiple trajectories prediction: (a) projection result in the 2D neuron space; (b) final neighborhood function values (D_i, F(w_i(k)), and D_i*).

Meanwhile, we also observed that the NSOMS converged much faster than the other methods did. As for the SOMSO-1 and SOMSO-2, they converged very slowly because the optimal initial states did not fall within the estimated range. In Figure 4.10(a), starting from a random distribution of neurons at the beginning of the learning, the mapping structure gradually formed three clusters as the learning progressed. Figure 4.10(b) shows how the neighborhood function values D_i and F(w_i(k)) varied during the SOM learning process; they eventually became very close to each other.

To further demonstrate the search ability of the NSOMS, we ran these optimization algorithms for the dynamic trajectory prediction with four different network sizes and learning parameters. Only one target was considered for this comparison, and the ideal initial state of the missile was assumed to lie within the estimated range described in Eq. (4.19). Figure 4.11 shows how the population size affects these algorithms. We observed that the NSOMS performed better than the SOMSO-1 and SOMSO-2; as Figure 4.11 shows, the SOMSO-1 and SOMSO-2 did not converge to the optimal state when the network size was very small. Figure 4.12 shows the influence of different learning parameters on these algorithms under a fixed network size (1 × 225). As Figure 4.12 illustrates, a large learning parameter may speed up the learning of the NSOMS; although a small learning parameter made the NSOMS converge slightly more slowly, it still converged faster than the SOMSO-1 and SOMSO-2 did. Table 4.2 details how the different network (population) sizes and learning parameters affect the learning results of the NSOMS, SOMSO-1, and SOMSO-2. We calculated the RMS (root-mean-square) value of the error between the ideal and predicted trajectories at k = 100 to evaluate their performance; the comparison results, averaged over 30 repeated runs, are listed in Table 4.2. We observed that the NSOMS performed better than the other methods did. Figure 4.13 shows the performance of 5 runs with the same initial weights for the three algorithms under a very small network size (H = 1, N = 2 × 2).
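The RMS evaluation used for Table 4.2 can be expressed as in the short sketch below. Here run_predictor is a hypothetical hook that performs one prediction up to stage k = 100 and returns the predicted trajectory, and the averaging over 30 repeated runs mirrors how the table entries were obtained.

    import numpy as np

    def rms_error(ideal_traj, predicted_traj):
        """RMS of the point-wise position error between two trajectories."""
        diff = np.asarray(ideal_traj) - np.asarray(predicted_traj)
        return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

    def evaluate(run_predictor, ideal_traj, n_runs=30, k_eval=100):
        """Mean and standard deviation of the RMS error over repeated runs."""
        errors = [rms_error(ideal_traj, run_predictor(k_eval))
                  for _ in range(n_runs)]
        return float(np.mean(errors)), float(np.std(errors))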

Figure 4.9 Simulation results for the multiple trajectories prediction using the NSOMS, SOMSO-1, and SOMSO-2 with a bad estimate of the initial state: (a) the ideal and measured TBM trajectories; (b)-(d) the estimated initial state (position) errors, in m, obtained by the NSOMS, SOMSO-1, and SOMSO-2, respectively.

Figure 4.10 Final results obtained by the NSOMS for the multiple trajectories prediction: (a) projection result in the 2D neuron space; (b) final neighborhood function values (D_i, F(w_i(k)), and D_i*).

Figure 4.11 Performance (estimated position error, in m) for different network sizes using the NSOMS, SOMSO-1, and SOMSO-2: (a) H = 1, N = 3 × 3; (b) H = 1, N = 5 × 5; (c) H = 1, N = 10 × 10; (d) H = 1, N = 20 × 20.

Figure 4.12 Performance (estimated position error, in m) for different learning parameters using the NSOMS, SOMSO-1, and SOMSO-2: (a) 0.2; (b) 0.4; (c) 0.6; (d) 0.8.

Table 4.2: Comparison results for the NSOMS, SOMSO-1, and SOMSO-2 on the dynamic trajectory prediction.

Mean and standard deviation of RMS values (m), by network size and learning parameter:
25.981 ± 1.233, 1.056 × 10^2 ± 1.016 × 10^2, 44.012 ± 35.543, 25.711 ± 0.786, 25.838 ± 0.752, 25.834 ± 0.761, 25.775 ± 1.075, 25.834 ± 1.258, 25.845 ± 1.281


Figure 4.14 shows the performance of 5 runs with different initial weights. From the results shown in Table 4.2 and Figures 4.11-4.14, the NSOMS was more robust and converged faster than the other two algorithms did.

We also performed simulations using the RCS-PSM. We modified the internal parameters of the PSM method to enhance its search ability; however, determining its parameters properly was not straightforward, and the process was time-consuming. The SOMSO-1, SOMSO-2, and RCS-PSM may not be effective under circumstances in which the ranges of the possible initial states are uncertain and varying in noisy, unknown environments. We thus conclude that the NSOMS performed better than the SOMSO-1, SOMSO-2, and RCS-PSM for this dynamic trajectory prediction application, and that the proposed dynamic weight updating rule was effective.

In summary, in this chapter a niching SOM-based search algorithm has been proposed for the identification and visualization of multiple optimal solutions. To greatly reduce the network size required for search in a high-dimensional space, we have also proposed a niche weight updating rule that raises the learning efficiency. The final network structure allows us to easily classify the optimal solutions into clusters, thus yielding useful information for solution selection.

Figure 4.13 Performance (estimated position error, in m) of 5 runs with the same initial weights using the NSOMS, SOMSO-1, and SOMSO-2: (a) SOMSO-1; (b) SOMSO-2; (c) NSOMS.

Figure 4.14 Performance (estimated position error, in m) of 5 runs with different initial weights using the NSOMS, SOMSO-1, and SOMSO-2: (a) SOMSO-1; (b) SOMSO-2; (c) NSOMS.

Chapter 5

Conclusion

In this dissertation, we have proposed an SOM-based search algorithm (SOMS), which can be used for both static and dynamic functions in real time. To achieve high learning efficiency for system parameters over different working ranges, we have also proposed a new SOM weight updating rule. An intelligent radar predictor for trajectory estimation was developed as the first application. With a simplified target dynamic model, the unsupervised SOM in the predictor can achieve salient prediction in noisy, unknown environments, and the SOM is more robust to uncertainty in the dynamic model than the Kalman filter and the GA.

Furthermore, the SOM's search abilities have been adequately exploited in a multimodal domain. A new niche method (deterministic competition) has been proposed to extend the SOM-based search algorithm for the identification of multiple optimal solutions. To reduce the network size, another new SOM weight updating rule is proposed to enhance the learning efficiency; with this dynamic weight updating, the NSOMS converges faster than other algorithms such as SOMO, KSOM-ES, SOSENs, and RCS-PSM. Moreover, a new adaptive mapping model is proposed to visualize the distribution and structure of the optimal solutions in the 2D neuron space.

In our proposed NSOMS, only two learning parameters need to be determined in the weight and position updating rules. The applications of the proposed NSOMS to both function optimization in a multimodal domain and dynamic trajectory prediction involving multiple targets have clearly demonstrated its effectiveness.

5.1 Future Research

In this dissertation, by combining the SOM with the dynamic model, the SOM is able to tackle spatiotemporal data, and it has been applied to search for optimal parameters of dynamic systems. To further exploit its search ability, one direction of future work is to apply the NSOMS to system identification and control problems. Because the current search process involves both the weight updating rules and the learning parameters, it is not easy to choose the learning rate, the number of neurons, and the termination criteria appropriately. Although these parameters can be selected through a trial-and-error process, the time response of the learning affects the performance of the dynamic systems in system identification and control problems; we will therefore also examine the convergence issue in detail. As the SOM also possesses the appealing feature of responding to distinct properties exhibited by the input data by forming corresponding clusters, another worthwhile future work is to extend the proposed NSOMS to a wide range of applications such as image processing, speaker recognition, and machine learning.

