

Chapter 5 Experimental Results

5.3 Horizontal Object Tracking Control

This section further focuses on the tracking control of horizontal trajectories. As in the set-point control, the maximal velocities are set to vmax = 2000 rpm for the neck, 70 rpm for the individual eye, and 30 rpm for the eye working together with the neck. The horizontal trajectory is sinusoidal and is expressed as

x(t) = A cos(2πt / T)    (5.3-1)

where A is the magnitude and T is the period. In the experiments, A = 70 and T = 6, 9, 12, and 15 seconds. For the case of T = 6 seconds, the experimental results for the image position errors are shown in Figure 5.10 to Figure 5.12. From these results, it is clear that all three controllers C3NN, C2NN, and C1NN are indeed able to control the Eye-robot to trace the moving object with image position errors of less than 0.09, i.e., within 30 pixels.
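As a concrete illustration, the following Python sketch evaluates the reference trajectory of Eq. (5.3-1) for the experimental parameters; the 100 Hz sampling rate is an assumption made here for illustration and is not taken from the thesis.

```python
import numpy as np

A = 70                     # trajectory magnitude, as in the experiments
PERIODS = [6, 9, 12, 15]   # trajectory periods T in seconds

def reference_position(t, A, T):
    """Horizontal reference trajectory x(t) = A*cos(2*pi*t/T), Eq. (5.3-1)."""
    return A * np.cos(2.0 * np.pi * t / T)

# Sample each trajectory over one full period (100 Hz is an assumed rate)
for T in PERIODS:
    t = np.arange(0.0, T, 0.01)
    x = reference_position(t, A, T)
    print(f"T = {T:2d} s: x(t) spans [{x.min():.1f}, {x.max():.1f}]")
```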

Figure 5.9 Set-point control error results of C1NN

Such image position errors are acceptable since the radius of the object is 6 pixels, which corresponds to a normalized error of about 0.02, and the resulting tracking motion is quite smooth.

Figure 5.10 Tracking control error results of C3NN for T = 6 seconds


Figure 5.11 Tracking control error results of C2NN for T = 6 seconds

Figure 5.12 Tracking control error results of C1NN for T = 6 seconds


Intuitively, a slower-moving object will lead to more precise tracking motion.

To demonstrate such tracking behavior, the controllers C3NN, C2NN, and C1NN are also applied to the cases of T = 9, 12, and 15 seconds. Figure 5.13 to Figure 5.15 show the experimental results for T = 9 seconds, Figure 5.16 to Figure 5.18 those for T = 12 seconds, and Figure 5.19 to Figure 5.21 those for T = 15 seconds. As expected, the image position errors are reduced to 20 pixels for T = 9 seconds and to 15 pixels for T = 12 seconds. However, limited by the physical features of the cameras, the image position errors are no longer improvable by further increasing the period to T = 15 seconds, for which the image position error remains around 15 pixels.

Experiments have been conducted under various conditions, and the results show that the proposed scheme enables the Eye-robot to track a moving object accurately and rapidly. When the object is far from the image center, the highest velocity is commanded in the tracking control, so the left eye together with the neck and the right eye of the Eye-robot can bring the object to the image center smoothly and quickly at the same time. To demonstrate the proposed scheme, both computer simulations and experiments are carried out in this thesis. In the experiments, different cases are executed to evaluate the feasibility of the proposed scheme, and the results show that it achieves excellent Eye-robot tracking performance.
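To make the velocity-saturation behavior described above concrete, here is a minimal Python sketch of a saturated velocity command. This is only an illustrative stand-in for the described behavior, not the thesis's actual ANN controller, and the gain value is a made-up assumption.

```python
import numpy as np

V_MAX_NECK = 2000.0   # maximal neck velocity (rpm), from Section 5.3
V_MAX_EYE = 70.0      # maximal individual-eye velocity (rpm)

def saturated_velocity(error, gain, v_max):
    """Map a normalized image position error to a velocity command,
    saturated at v_max so that far-off objects get the highest speed."""
    return float(np.clip(gain * error, -v_max, v_max))

# Hypothetical gain; errors of 0.02-1.0 span smooth tracking to saturation
for e in (0.02, 0.09, 0.5, 1.0):
    v = saturated_velocity(e, gain=5000.0, v_max=V_MAX_NECK)
    print(f"error {e:4.2f} -> neck command {v:7.1f} rpm")
```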


Figure 5.13 Tracking control error results of C3NN for T = 9 seconds

Figure 5.14 Tracking control error results of C2NN for T = 9 seconds


Figure 5.15 Tracking control error results of C1NN for T = 9 seconds

Figure 5.16 Tracking control error results of C3NN for T = 12 seconds


Figure 5.17 Tracking control error results of C2NN for T = 12 seconds

Figure 5.18 Tracking control error results of C1NN for T = 12 seconds


Figure 5.19 Tracking control error results of C3NN for T = 15 seconds

Figure 5.20 Tracking control error results of C2NN for T = 15 seconds


Figure 5.21 Tracking control error results of C1NN for T = 15 seconds


Chapter 6

Conclusions and Future Work

This thesis proposes an intelligent object tracking controller design that drives the Eye-robot to trace an object, together with an object detection method that locates the object in the image.

The Eye-robot tracking scheme proposed in this thesis has shown that it is possible to use a neural network controller to compute the required velocity with good accuracy. An ANN is trained on the training patterns such that, wherever the object is currently located, the network automatically computes the velocity V corresponding to the object position p. The Eye-robot tracking controller design used in this thesis is very simple in concept, independent of the Eye-robot model used and of the quality of the obtained image, and yields very good results.
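As a rough sketch of the kind of mapping the ANN learns, the following Python code trains a one-hidden-layer network to map a normalized object position p to a normalized velocity command V. The network size, learning rate, and synthetic training targets are all illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training patterns: normalized object positions p and the
# normalized velocity targets V they should map to (made-up targets;
# a real system would scale the output by the axis's vmax).
p_train = rng.uniform(-1.0, 1.0, size=(200, 1))
V_train = 0.7 * p_train

# One hidden layer of 10 tanh units (an assumed, not reported, size)
W1 = rng.normal(scale=0.5, size=(1, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=(10, 1)); b2 = np.zeros(1)

def forward(p):
    h = np.tanh(p @ W1 + b1)   # hidden-layer activations
    return h @ W2 + b2         # linear output: velocity command V

for _ in range(2000):          # plain gradient descent on squared error
    h = np.tanh(p_train @ W1 + b1)
    err = (h @ W2 + b2) - V_train
    gW2 = h.T @ err / len(p_train); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)        # backprop through tanh
    gW1 = p_train.T @ dh / len(p_train); gb1 = dh.mean(axis=0)
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1

# Learned (normalized) velocity for object position p = 0.5
print(forward(np.array([[0.5]])))
```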

The experimental results in this thesis show that an acceptable accuracy can be obtained, but it seems that high accuracy is not easy to reach using neural networks alone. Neural networks have good generalization capability within the range over which they are trained. During object tracking, capturing an image, processing it, and issuing a velocity command to drive the Eye-robot takes only around 0.15 second.
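Schematically, that per-cycle budget reads as below; the stage functions and their durations are placeholders invented for illustration, chosen only so the total roughly matches the reported 0.15 second.

```python
import time

def capture_image():          # placeholder: camera grab
    time.sleep(0.03)

def locate_object():          # placeholder: image processing / detection
    time.sleep(0.09)
    return 0.0                # normalized object position

def send_velocity(p):         # placeholder: ANN lookup + motor command
    time.sleep(0.03)

t0 = time.perf_counter()
capture_image()
send_velocity(locate_object())
print(f"cycle time: {time.perf_counter() - t0:.2f} s")  # ~0.15 s
```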

The Eye-robot was successfully applied to object tracking via offline training of the neural network. For future studies, some suggestions and directions are described as follows:

1. Extend the system to five motor axes, adding the vertical direction so that objects moving vertically can also be tracked.

2. Track a designated object among multiple objects, even when they are moving together or interacting with each other.

