
3.3 Applications

3.3.2 Dynamic Trajectory Prediction

For a dynamic trajectory prediction problem, the goal may be to estimate the launching position and velocity of a moving object using the measured data. Through a learning process, the SOMS may determine the most probable initial state by repeatedly comparing the measured data with the predicted trajectories derived from the possible initial states stored in the neurons of the SOM. We consider the SOMS very suitable for this application, because the relationship between an initial state and its resultant trajectory is not utterly random. We can thus distribute the initial states over the SOM in an organized fashion and make the search a guided one.

In this application, the nonlinear dynamic equation describing the trajectory of the moving object and the measurement equation are first formulated as

x(k + 1) = f_k(x(k)) + ξ_k (3.12)

v(k) = g_k(x(k)) + ζ_k (3.13)

where f_k and g_k are vector-valued functions defined in R^q and R^l (q and l being the dimensions), respectively, whose first-order partial derivatives with respect to all the elements of x(k) are continuous. ξ_k and ζ_k are zero-mean Gaussian white noise sequences in R^q and R^l, respectively, with

E[ξ_k] = 0 (3.14)

E[ξ_j ξ_k^T] = Q δ_jk (3.15)

E[ζ_k] = 0 (3.16)

E[ζ_j ζ_k^T] = U δ_jk (3.17)

E[ξ_j ζ_k^T] = 0 (3.18)

where E[·] stands for the expectation operator, Q and U are the covariance matrices of the input noise and output noise, respectively, and δ_jk is the Kronecker delta. Q and U are expected to be uncertain and varying in noisy, unknown environments, and their estimated values may be imprecise or even incorrect. Since the statistical properties of the dynamic model are unknown, the SOMS is utilized to find the optimal initial state via learning.
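As a minimal illustration (not part of the original formulation), the following Python sketch generates a state trajectory and its measurements from a model of the form of Eqs.(3.12) and (3.13); the functions f and g are placeholders to be supplied by the application.

import numpy as np

def simulate(f, g, x0, Q, U, steps, seed=0):
    """Generate a noisy state trajectory and measurements from a model of
    the form x(k+1) = f(x(k)) + xi_k, v(k) = g(x(k)) + zeta_k."""
    rng = np.random.default_rng(seed)
    q, l = Q.shape[0], U.shape[0]
    x = np.asarray(x0, dtype=float)
    states, measurements = [], []
    for _ in range(steps):
        xi = rng.multivariate_normal(np.zeros(q), Q)    # xi_k ~ N(0, Q), Eq.(3.15)
        zeta = rng.multivariate_normal(np.zeros(l), U)  # zeta_k ~ N(0, U), Eq.(3.17)
        x = f(x) + xi                                   # state update, Eq.(3.12)
        states.append(x.copy())
        measurements.append(g(x) + zeta)                # measurement, Eq.(3.13)
    return np.array(states), np.array(measurements)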

The learning algorithm for dynamic trajectory prediction is organized as follows.

Algorithm for dynamic trajectory prediction based on the SOMS: Predict an optimal initial state for the trajectory of a moving object using the measured position data.

Step 1: Set the stage of learning k = 0. Estimate the ranges of the possible launching position and velocity of the moving object, and randomly store the possible initial states w_j(0) into the neurons, where j = 1, . . . , m × n, with m × n the total number of neurons in the 2D (m × n) space.

Step 2: Send w_j(k) into the dynamic model, described in Eq.(3.12) and Eq.(3.13), to compute the predicted position p_j(k).

Step 3: For each neuron j, compute its output O_j(k) as the accumulated Euclidean distance between the measured position data v(i) and the predicted positions p_j(i):

O_j(k) = Σ_{i=0}^{k} ‖p_j(i) − v(i)‖ (3.19)

Find the winning neuron j* with the minimum O_j(k):

O_{j*}(k) = min_j O_j(k) (3.20)

Step 4: Update the weight vectors of the winning neuron j* and its neighbors.

Step 5: Check whether the minimum O_j(k) is smaller than a pre-specified value ε:

O_{j*}(k) < ε (3.21)

If Eq.(3.21) does not hold, let k = k + 1 and go to Step 2; otherwise, the prediction process is completed, and the predicted optimal initial state is output to the dynamic model to derive the object trajectory. Note that the value of ε is chosen empirically according to the resolution demanded in learning, and we chose it very close to zero. In addition, during each stage of learning, we perform a number of learning iterations to increase the SOM learning speed. This number is set large in the initial stage of the learning process, such that the SOMS may converge faster at the price of more oscillations, and is decreased gradually to achieve smooth learning in the later stages.
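To make the flow of Steps 1–5 concrete, the sketch below gives one plausible Python realization. The helpers measure (returning the measured positions v(0), . . . , v(k)) and predict_traj (integrating the dynamic model from a candidate initial state) are hypothetical, and Step 4 uses a generic Gaussian neighborhood that pulls neurons toward the winner; the actual SOMS rule with center and width adjustment, Eq.(3.2), is not reproduced here.

import numpy as np

def soms_predict(measure, predict_traj, w_ranges, m=27, n=27,
                 eps=1e-3, max_stages=200, seed=0):
    """Sketch of Steps 1-5: search the SOM for the optimal initial state."""
    rng = np.random.default_rng(seed)
    # Step 1: randomly distribute candidate initial states over the m x n map
    W = np.stack([rng.uniform(lo, hi, size=(m, n)) for lo, hi in w_ranges],
                 axis=-1)                                # shape (m, n, dim)
    grid = np.stack(np.meshgrid(np.arange(m), np.arange(n), indexing="ij"),
                    axis=-1)                             # neuron coordinates
    for k in range(max_stages):
        # Steps 2-3: accumulated distance between predicted and measured positions
        v = measure(k)                                   # v(0..k), shape (k+1, 3)
        O = np.array([[np.linalg.norm(predict_traj(W[i, j], k) - v, axis=1).sum()
                       for j in range(n)] for i in range(m)])
        win = np.unravel_index(np.argmin(O), O.shape)    # winning neuron j*
        # Step 4: pull the winner's neighbors toward the winning weight vector
        eta = 0.8 * np.exp(-k / 50.0) + 0.2              # learning rate, cf. Eq.(3.31)
        sigma = max(3.0 * np.exp(-k / 50.0), 0.5)        # shrinking neighborhood width
        h = np.exp(-((grid - np.array(win)) ** 2).sum(axis=-1) / (2 * sigma ** 2))
        W += eta * h[..., None] * (W[win] - W)
        # Step 5: stop once the minimum output falls below eps, Eq.(3.21)
        if O[win] < eps:
            break
        # otherwise k <- k + 1 and repeat from Step 2
    return W[win]                                        # predicted initial state

In each stage the whole map is evaluated against all measurements collected so far, so the search stays guided by the organized distribution of candidate initial states over the map.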

To demonstrate the effectiveness of the proposed SOMS and weight updating rule, we performed a series of simulations for dynamic trajectory prediction using the SOMS, the SOMS without the proposed center and width adjustment of the neighborhood function (denoted SOMSO), and the GA. The trajectory to predict in the simulations was designed to emulate that of a missile. Its governing equations of motion in the 3D Cartesian coordinate system are described as

ẍ = −g_m·x/r³ + ω²x + 2ω·ẏ + ξ_x (3.22)

ÿ = −g_m·y/r³ + ω²y − 2ω·ẋ + ξ_y (3.23)

z̈ = −g_m·z/r³ + ξ_z (3.24)

where r = (x² + y² + z²)^{1/2}, and g_m and ω stand for the gravitational constant and the rotational velocity of the earth, respectively, set to g_m = 3.986 × 10^5 km³/s² and ω = 7.2722 × 10^−5 rad/s. (ξ_x, ξ_y, ξ_z) are assumed to be continuous-time uncorrelated zero-mean Gaussian white noise processes. Referring to Eq.(3.12) and letting x = (x, y, z, ẋ, ẏ, ż)^T = (x_1, x_2, x_3, x_4, x_5, x_6)^T, we can obtain the discretized dynamic equation as

x(k + 1) = f(x(k)) + ξ_k (3.25)

where the elements of ξ_k are Gaussian white noise sequences with a constant variance σ_f² = (0.1 m/s²)². Referring to Eq.(3.13), the measurement equation is formulated as

v(k) = [x_1(k), x_2(k), x_3(k)]^T + ζ_k (3.28)

and

ζ_k = [ζ_{x1}, ζ_{x2}, ζ_{x3}]^T (3.29)

where (ζ_{x1}, ζ_{x2}, ζ_{x3}) are the measurement noise sequences with zero mean and constant variance σ_m² = (15 m)². The ranges of the possible initial states w_j(0) were estimated as given in Eq.(3.30).
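For concreteness, the following sketch shows one way to realize the discretized dynamics of Eq.(3.25) and the measurement model of Eqs.(3.28)–(3.29) in Python, using a first-order Euler discretization of the equations of motion above; the discretization scheme and the way the noise enters are assumptions, since the original derivation of f is not shown here.

import numpy as np

GM = 3.986e5 * 1e9   # gravitational constant g_m, converted to m^3/s^2
OMEGA = 7.2722e-5    # rotational velocity of the earth omega (rad/s)
T = 0.5              # sampling time (s)

def f(x, rng=None):
    """One step of the discretized dynamics, Eq.(3.25), via Euler integration.
    State x = (x_1..x_3 position in m, x_4..x_6 velocity in m/s)."""
    pos, vel = x[:3], x[3:]
    r = np.linalg.norm(pos)
    acc = -GM * pos / r ** 3                 # gravity, cf. Eqs.(3.22)-(3.24)
    acc[0] += OMEGA ** 2 * pos[0] + 2 * OMEGA * vel[1]   # earth-rotation terms
    acc[1] += OMEGA ** 2 * pos[1] - 2 * OMEGA * vel[0]
    x_next = np.concatenate([pos + T * vel, vel + T * acc])
    if rng is not None:                      # process noise, sigma_f = 0.1 m/s^2
        x_next[3:] += rng.normal(0.0, 0.1, size=3) * T
    return x_next

def g(x, rng=None):
    """Measurement equation, Eq.(3.28): the position corrupted by noise."""
    v = x[:3].copy()
    if rng is not None:                      # measurement noise, sigma_m = 15 m
        v += rng.normal(0.0, 15.0, size=3)
    return v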

Within the ranges described in Eq.(3.30), the possible launching positions and velocities of the missile were selected and stored into the 729 (27 × 27) neurons of the 2D SOM. The learning rate for the SOMS was chosen to be

η(k) = 0.8 · e^{−k/50} + 0.2 (3.31)

The sampling time t was 0.5 s. For the GA, the population size was selected to be 729 to match the SOM, and the crossover and mutation probabilities were 0.6 and 0.0333, respectively. The number of learning iterations was set to 20 during each stage of learning.
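As a quick sanity check of Eq.(3.31), the schedule starts at η(0) = 1.0 and decays toward 0.2, matching the intent of large early adjustments followed by smooth late-stage learning:

import math

def eta(k):
    """Learning rate schedule of Eq.(3.31)."""
    return 0.8 * math.exp(-k / 50.0) + 0.2

print([round(eta(k), 3) for k in (0, 50, 200)])   # [1.0, 0.494, 0.215]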

We first applied the SOMS, SOMSO, and GA to trajectory prediction with a good estimate of the initial state. The ideal initial state of the missile was assumed to be (68.7 × 10^5 m, 2.7 × 10^5 m, 4.8 × 10^5 m, 130 m/s, 820 m/s, 1370 m/s), which was within the estimated range, and the variance of the measurement noise was set to (15 m)². Figure 3.7 shows the simulation results. The SOMS, SOMSO, and GA all predicted the initial state quite well and thus resulted in very small estimation errors, except in the initial stage of the prediction, as shown in Figure 3.7(a) (only the position error in the X-direction (x_1) is shown for illustration). Figure 3.7(b) shows how the neighborhood function F(w_j(k)), described in Eq.(3.2), varied during the SOM learning process: from a random distribution at the beginning of the learning, F(w_j(k)) gradually approximated the expected Gaussian distribution as the learning stages proceeded.

In the second set of simulations, we investigated the performance under a bad estimate of the initial state. In this simulation, the ideal initial state was assumed to be (64 × 10^5 m, 4.8 × 10^5 m, 2.4 × 10^5 m, 215 m/s, 2130 m/s, 1030 m/s), which was outside the estimated range, and the variance of the measurement noise was enlarged to (30 m)². From the simulation results shown in Fig. 3.8, the influence of the bad estimate on the SOMS and SOMSO was mostly at the initial stage of the prediction.

After the transient, the SOMS and SOMSO still managed to find the optimal initial state.

Meanwhile, we also observed that the SOMS converged faster than the SOMSO. As for the GA, it converged very slowly because the optimal initial state did not fall within the estimated range. We thus conclude that the SOMS performed better than the GA for this dynamic trajectory prediction application, and that the proposed dynamic weight updating rule was effective.

In this chapter we have proposed an SOM-based algorithm for optimization problems, which can be applied to both static and dynamic functions in real time. To achieve high learning efficiency for system parameters in different working ranges, we have also proposed a new SOM weight updating rule. The applications of the proposed SOMS to both function optimization problems and dynamic trajectory prediction have clearly demonstrated its effectiveness.

Figure 3.7 Simulation results for dynamic trajectory prediction using the SOMS, SOMSO, and GA with a good estimate of the initial state: (a) the estimated position error in the X-direction and (b) the variation of the neighborhood function F(w_j(k)) during the SOMS learning process.

Figure 3.8 Simulation results for dynamic trajectory prediction using the SOMS, SOMSO, and GA with a bad estimate of the initial state.

Chapter 4

Niching SOM-Based Search Algorithm

Many global optimization techniques based on population evolution have been successfully applied to finding a global optimum [17, 12, 21], but they cannot cope with optimization problems that have multiple optimal solutions. The approach usually employed to tackle this problem is to repeatedly execute the optimization process with different initial populations. However, the same optimal solution may be found even with different initial populations, while several solutions remain undiscovered. Consequently, finding all the solutions may require a considerable amount of computational time. Thus, a niching method is proposed to extend the ability of the SOMS, as discussed below.
