
Chapter 5 Conclusions and Discussions

5.2   Discussions

Parameter settings. In the cooling schedule, the values of Tmax and Nt are set a priori.

For a trial, which includes a change of the center, a change of a and b, a change of θ, and a change of f for every pattern, there are three possible outcomes of accepting or rejecting the change under the Metropolis criterion (a minimal code sketch follows the list):

1. The new parameter gives a smaller error and is accepted.

2. The new parameter gives a larger error but is still accepted.

3. The new parameter gives a larger error and is rejected.
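
The following minimal Python sketch illustrates this acceptance rule; the function and variable names are our own, not taken from the implementation:

    import math
    import random

    def metropolis_accept(old_error, new_error, T):
        """Decide whether a trial change is accepted at temperature T.

        Case 1: smaller error -> always accepted.
        Case 2: larger error  -> accepted with probability exp(-dE / T).
        Case 3: larger error  -> rejected otherwise.
        """
        dE = new_error - old_error
        if dE < 0:
            return True                               # case 1
        return random.random() < math.exp(-dE / T)    # cases 2 and 3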

To determine Tmax, we consider the accept ratio of the larger-error trials. If Tmax is not high enough, trials with larger error are almost always rejected; that is, only trials with smaller error are accepted, so the search may settle in a local minimum. Fig. 46 shows this situation, where Tmax = 1, with 500 iterations, initial center (0, 0), a = 1, b = 1, θ = 0, and f = 1. Fig. 47 shows the result when Tmax = 10. In Fig. 47 (b), the accept ratio of the center, angle, and size has increased, and the result is good for this simple example. Fig. 48 shows the result when Tmax = 100,000. After 500 iterations, T = 4.1, but this temperature is not low enough, and the high accept ratio of larger-error parameters results in instability. To solve this problem, we can increase the number of iterations to 1,000; the result, shown in Fig. 49, is still good, but it takes more time. In conclusion, for the initial temperature Tmax, we have to choose a temperature high enough to give a high accept ratio of larger-error parameters.

In addition, we need enough iterations to cool the temperature and ensure stability.

We also find that the appropriate Tmax is proportional to the scale of the input points. In Fig. 47, Tmax = 10 provides a good result. In Fig. 50, we enlarge the scale of the data by a factor of two: Fig. 50 (a) with Tmax = 10 cannot give a good result, but Fig. 50 (b) with Tmax = 100 can. In our simulation experiments, we choose Tmax = 500 and 500 iterations to ensure a high enough initial temperature and a low final temperature T ≈ 0.02.
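
Assuming a geometric cooling schedule T(t+1) = r · T(t) (an assumption that is consistent with the numbers above, though the exact schedule may differ), the cooling ratio that takes Tmax = 500 down to T ≈ 0.02 within 500 iterations can be checked in a few lines of Python:

    # Geometric cooling (assumed): the temperature after n iterations
    # is T_max * r**n.  Solve T_max * r**n = T_final for the ratio r.
    T_max, T_final, n_iter = 500.0, 0.02, 500
    r = (T_final / T_max) ** (1.0 / n_iter)
    print(r)                    # ≈ 0.9800
    print(T_max * r ** n_iter)  # ≈ 0.02, the final temperature quoted above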

As for Nt, if there are too few trials, we cannot get a good result. A larger Nt takes more time but gives more chances, so we can use as many trials as the available computational power allows. Fig. 51 shows that too few trials cannot provide a good result; for this simple example, Nt = 10 is enough to obtain a good result. A sketch of the full annealing loop, combining Tmax, the cooling ratio, and Nt, follows.
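
Under the geometric-cooling assumption above, the loop might look as follows; propose and error_fn are hypothetical placeholders for the perturbation of one parameter (center, a, b, θ, or f) and the fitting error, and metropolis_accept is the function sketched earlier. The quantity tracked in the inner loop corresponds to the accept ratio of larger-error parameters plotted in Figs. 46-49:

    def anneal(params, error_fn, propose,
               T_max=500.0, r=0.98, n_iter=500, Nt=10):
        # Skeleton of the annealing loop: Nt trial changes per temperature step.
        T = T_max
        E = error_fn(params)
        for _ in range(n_iter):
            worse, worse_accepted = 0, 0
            for _ in range(Nt):
                candidate = propose(params)   # returns a perturbed copy
                E_new = error_fn(candidate)
                if E_new >= E:
                    worse += 1
                if metropolis_accept(E, E_new, T):
                    if E_new >= E:
                        worse_accepted += 1
                    params, E = candidate, E_new
            # worse_accepted / max(worse, 1) is the accept ratio of
            # larger-error trials at this temperature (cf. Figs. 46-49).
            T *= r                            # geometric cooling (assumed)
        return params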


Fig. 46. Illustration of low initial temperature Tmax: (a) Detection result of Tmax = 1. (b) Accept ratio of larger-error parameters.


Fig. 47. Illustration of a suitable initial temperature Tmax: (a) Detection result of Tmax = 10. (b) Accept ratio of larger-error parameters.


Fig. 48. Illustration of high initial temperature Tmax: (a) Detection result of Tmax = 100,000. (b) Accept ratio of larger-error parameters.


Fig. 49. Illustration of high initial temperature Tmax with more iterations: (a) Detection result of Tmax = 100,000. (b) Accept ratio of larger-error parameters.


Fig. 50. Enlarging the scale of the points by two: (a) Detection result of Tmax = 10. (b) Detection result of Tmax = 100.


Fig. 51. Relationship between Nt and the detection result (Tmax = 500): (a) Detection result of Nt = 1. (b) Detection result of Nt = 10.

Time consumption. Table III shows that the CPU time is proportional to the number of patterns and to the number of parameters: the larger the number of parameters, the more time the algorithm takes to obtain the solution.

Memory requirement. The traditional HT needs an accumulation matrix, and the size of the accumulation matrix grows as the number of parameters increases. Moreover, the higher the required precision, the larger the accumulator matrix must be. On the other hand, the SA algorithm for parameter detection needs memory only for the original parameters and the trial parameters, which depends on the number of patterns K. Furthermore, the SA algorithm can represent the parameters with high precision, since we do not need to quantize the parameter space as in the traditional HT.
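
To make the contrast concrete, here is a back-of-the-envelope count with purely illustrative numbers (100 bins per axis and six parameters per hyperbola, i.e. center (x, y), a, b, θ, and f; none of these figures are measurements from our experiments):

    # HT: accumulator cells grow exponentially with the parameter count.
    bins, n_params = 100, 6        # illustrative assumptions
    ht_cells = bins ** n_params    # 100**6 = 10**12 accumulator cells

    # SA: only the current and trial parameter vectors, per pattern.
    K = 3                          # number of patterns
    sa_values = 2 * K * n_params   # 36 stored parameter values in total

    print(ht_cells, sa_values)     # 1000000000000 versus 36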

Preprocessing. In the seismic application, we impose no constraint on the center. However, in the ideal case the hyperbola has its center on the x-axis, i.e., t = 0. In simulated seismic data, we find that the center does not lie on the x-axis, since convolution produces a shift. Preprocessing is therefore quite critical; wavelet processing and deconvolution may be needed in the preprocessing stage to improve the detection result.


