
Chapter 7 Conclusion and Future Work

7.2 Future Directions

Some directions for future study are recommended below:

(1) In the future, practical applications of PTZ cameras will be further studied. For instance, most image-based traffic surveillance methods adopt a virtual window to detect vehicles [71]. If the view of a PTZ camera is changed, the position and size of the window must be adjusted manually. Using the dynamic calibration procedure developed in this thesis, the detection window can be arranged automatically. Moreover, the effects of lens distortion and of a non-fixed principal point need to be handled in order to increase the accuracy of PTZ camera calibration.

(2) Several directions in vehicle detection and tracking deserve further study. On the one hand, heavy occlusion of vehicles degrades the accuracy of image measurement, and methods need to be developed to distinguish individual vehicles; the color information of individually tracked cars can be very useful for solving this problem [80]. On the other hand, in order to increase the accuracy of foreground segmentation, we will focus on new methods for selecting adaptive thresholds that handle changes of the environmental illumination on the road. To achieve dependable performance, a neuro-fuzzy classifier may be desirable for an ITMS.

(3) For future shadow-detection studies, more emphasis will be placed on increasing the robustness of shadow detection under various illumination conditions. First, the Gaussian ratio model built under a specific illumination condition might fail under a considerably different illumination. In traffic monitoring applications, it will therefore be beneficial to build a database of ratio models for different illumination conditions. Additionally, shadow pixels that lie near moving vehicles or overlap other vehicles might be misclassified as moving-vehicle pixels. Because the pixels of the same shadow region tend to have similar color information in the traffic imagery, the color distribution can be used to find uniform sub-regions, which can then be used to verify the actual shadow region [81].


Appendix A

Derivation of Focal Length Equation

In this appendix, the focal length equation is derived using only two parallel lines in the image. As shown in Fig. 2-1(a), L1 and L2 are two parallel lines; they intersect the X-axis and the Y-axis at P1, P2, P3, and P4. The image coordinates of these four points follow from the perspective projection, and the resulting expression for v can be rewritten in terms of the camera parameters.

Applying trigonometric function properties, one can easily find the relationship between r, s, and t. Computing r² + f²s² and r²t², one can obtain (A.12).

Rearranging (A.12), we obtain an expression involving the tilt angle φ.

Substituting (A.21) into (A.13), we obtain an expression involving the pan angle θ. Using the vanishing point constraints and trigonometric function properties, one can then derive the equation containing sec²θ, which finally leads to (A.26).


Rearranging (A.26), we arrive at

am² + bm + c = 0, (A.27)

where m is f² and the other variables are listed in Table 2-1. This governing equation is presented in Section 2.2 as the focal length equation.
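To make (A.27) concrete, the short Python sketch below solves the quadratic for m = f² and returns the candidate focal lengths. It is a minimal illustration only, assuming the coefficients a, b, and c have already been computed from the image measurements listed in Table 2-1; the function name is ours, not the thesis's.

import math

def focal_length_candidates(a, b, c):
    # Solve a*m^2 + b*m + c = 0 for m = f^2, as in (A.27), and return
    # every candidate focal length f = sqrt(m) obtained from a positive root.
    if abs(a) < 1e-12:                      # degenerate linear case (see a = 0 below)
        roots = [] if abs(b) < 1e-12 else [-c / b]
    else:
        disc = b * b - 4.0 * a * c          # discriminant of the quadratic
        if disc < 0.0:
            return []
        roots = [(-b + math.sqrt(disc)) / (2.0 * a),
                 (-b - math.sqrt(disc)) / (2.0 * a)]
    return [math.sqrt(m) for m in roots if m > 0.0]

In practice only one root corresponds to a physically meaningful focal length for the observed scene geometry, and the linear case a = 0 discussed below is handled directly.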

Next, let us discuss how the camera parameters affect the sign of coefficient a. For simplicity, the sign of at² is discussed instead of the sign of a. Expression (A.28) reveals that the magnitudes of the tilt angle φ and the pan angle θ determine the sign of coefficient a; the details are listed below:

a > 0, if |φ| > |θ|,

a = 0, if |φ| = |θ| or Y3 = 0,

a < 0, if |φ| < |θ|.

It is clear that the difference between the absolute values of the tilt angle and the pan angle determines the sign of coefficient a. When a = 0, the focal length equation becomes linear and the focal length can easily be estimated. This completes the derivation of the focal length equation.

When the vanishing point is far from the image center or disappears from the image frame (for instance, when the tilt angle equals 90 deg), (A.27) cannot be used to find the focal length. Instead, the focal length can easily be obtained from the perspective projection equation:

f = wp h / w, (A.29)

where wp is the width between the parallel lanes in the image frame, w is the corresponding width between the lanes on the road plane, and h is the camera height.
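As a purely hypothetical numerical illustration of (A.29) (the numbers below are invented for illustration, not measurements from this thesis): with a camera height h = 8 m, an actual lane spacing w = 3.5 m, and a measured image spacing wp = 350 pixels, the focal length is f = wp h / w = 350 × 8 / 3.5 = 800 pixels.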


Appendix B

Conversion between Pixel Coordinates and World Coordinates

In this appendix, we derive the transformation between pixel coordinates and world coordinates. We will explain how focal length and tilt angle are used to obtain the world coordinates of a feature in the ground plane.

A pixel coordinate (u, v) is expressed as a function of a world coordinate (X, Y, Z) through the perspective projection relations (B.1)–(B.5). For a feature point on the ground plane, these relations can be solved to give (B.6), which expresses the world coordinate Y in terms of the image coordinate v, the principal point coordinate v0, the focal length f, the camera height h, and the tilt angle φ.

Substituting (B.6) into (B.1), it is easy to obtain

(B.7), which expresses the world coordinate X in terms of the image coordinates u and v, the principal point, the focal length f, the camera height h, and the tilt angle φ.

From (B.6) and (B.7), one can transform the pixel coordinates into their world coordinates.
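Since the closed forms of (B.6) and (B.7) are not reproduced above, the Python sketch below illustrates the same pixel-to-world conversion with a generic pinhole back-projection onto the road plane. The conventions are assumptions made only for this illustration (camera at height h looking along +Y and tilted down by φ, principal point (u0, v0), focal length f in pixels, image v-axis pointing downward) and may differ from the thesis in sign or axis conventions; the function name pixel_to_world is ours.

import math

def pixel_to_world(u, v, f, h, phi, u0=0.0, v0=0.0):
    # Back-project pixel (u, v) onto the road plane Z = 0 and return (X, Y).
    # Assumed conventions: camera at world position (0, 0, h), optical axis
    # tilted down by phi radians from the horizontal, v increasing downward.
    du, dv = u - u0, v - v0
    denom = f * math.sin(phi) + dv * math.cos(phi)
    if abs(denom) < 1e-9:
        raise ValueError("pixel ray is (nearly) parallel to the road plane")
    Y = h * (f * math.cos(phi) - dv * math.sin(phi)) / denom
    X = h * du / denom
    return X, Y

Pixels at or above the horizon make the denominator vanish; such rays never intersect the road plane, which is why the degenerate case is rejected.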


Appendix C

RGB Color Ratio Model of Shadow Pixels

In an outdoor daytime environment, there are two light sources, namely, a point light source (the sun) and a diffuse extended light source (the sky). In the following derivation, the road is assumed to be Lambertian, with a constant reflectance in a traffic scene. The radiance L_lit of the light reflected at a given point on a surface in the scene is formulated as in [82] in terms of the ambient and direct illumination terms, where λ is the wavelength; i is the angle of incidence between the illumination direction and the surface normal at the considered point; e is the reflection angle between the surface normal and the viewing direction; and g is the phase angle between the illumination direction and the viewing direction. When sunlight occlusion creates shadows, the direct sunlight contribution vanishes and the radiance L_shadow of the reflected light reduces to the ambient contribution alone,

where L′_a(λ) is the ambient reflection term in the presence of the occluding object. To simplify the analysis, the derivation assumes that the ambient light coming from the sky is not influenced by the presence of the occluding objects, that is, L′_a(λ) = L_a(λ).

The model is derived based on the RGB color space. The color components of the reflected intensity reaching the RGB sensors at a point (x, y) in the 2-D image plane can be expressed in terms of the reflected radiance and the spectral sensitivities of the sensors over the visible wavelengths λ. We assume that the scene radiance and the image irradiance are the same because the situation is considered a Lambertian scene under uniform illumination [83]. For a point in direct light, the sensor measurements are obtained from L_lit; when the point is in shadow, they are obtained from L_shadow. Because the illumination and viewing geometry are similar across a traffic scene, for each object point of the road surface the RGB measurement ratio between the lit and the shadow conditions is approximately constant.
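To show how such a ratio model can be applied, the sketch below accepts a pixel as cast shadow when its channel-wise ratio to the corresponding background (lit) color lies close to a reference ratio learned from sample shadow pixels. The function name, the tolerance test, and the default threshold k are illustrative assumptions, not the exact classifier of this thesis.

import numpy as np

def is_shadow_pixel(pixel_rgb, background_rgb, ratio_mean, ratio_std, k=2.5):
    # pixel_rgb / background_rgb: length-3 current and background (lit) colors.
    # ratio_mean / ratio_std: per-channel shadow-to-lit ratio statistics learned
    # from sample shadow pixels under the current illumination condition.
    bg = np.asarray(background_rgb, dtype=float) + 1e-6    # avoid divide-by-zero
    ratio = np.asarray(pixel_rgb, dtype=float) / bg
    darker = np.all(ratio < 1.0)                           # shadows attenuate every channel
    within = np.all(np.abs(ratio - np.asarray(ratio_mean)) <= k * np.asarray(ratio_std))
    return bool(darker and within)

The learned ratio_mean and ratio_std would be re-estimated for each illumination condition, in line with the database of ratio models suggested in Section 7.2.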


Bibliography

[1] B. McQueen and J. McQueen, Intelligent Transportation Systems Architectures. Norwood, MA: Artech House, 1999, pp. 19–49.

[2] K. Hayashi and M. Sugimoto, “Signal control system (MODERATO) in Japan,” in Proc. IEEE Int. Conf. Intell. Transport. Syst., Tokyo, Japan, 1999, pp. 988–992.

[3] G. K. H. Pang, K. Takabashi, T. Yokota and H. Takenaga, “Adaptive route selection for dynamic route guidance system based on fuzzy-neural approaches,” IEEE Trans. Veh. Technol., vol. 48, no. 6, pp. 2028–2041, Nov. 1999.

[4] V. Kastrinaki, M. Zervakis and K. Kalaitzakis, “A survey of video processing techniques for traffic applications,” Image and Vision Computing, vol. 21, no. 4, pp. 359–381, Dec. 2003.

[5] R. Cucchiara, M. Piccardi and P. Mello, “Image analysis and rule-based reasoning for a traffic monitoring system,” IEEE Trans. Intell. Transport. Syst., vol. 1, no. 2, pp. 119–130, Jun. 2000.

[6] N. J. Ferrier, S. M. Rowe and A. Blake, “Real-time traffic monitoring,” in Proc. Second IEEE Workshop on Applications of Computer Vision, Sarasota, Florida, 1994, pp. 81–88.

[7] D. Koller, K. Daniilidis and H. H. Nagel, “Model-based object tracking in monocular image sequences of road traffic scenes,” International Journal of Computer Vision, vol. 10, no.3, pp. 257-281, Jun. 1993.

[8] H. Veeraraghavan, O. Masoud and N. Papanikolopoulos, “Computer vision algorithms for intersection monitoring,” IEEE Trans. Intell. Transport. Syst., vol. 4, no. 2, pp.78-89, Jun. 2003.

[9] K. T. Song and J. C. Tai, “Dynamic calibration of pan-tilt-zoom cameras,” IEEE Trans. Syst., Man, Cybern. B, vol. 36, no. 5, in press.

[10] D. Beymer, P. F. McLauchlan, B. Coifman and J. Malik, “A real-time computer vision system for measuring traffic parameters,” in Proc. IEEE Comput. Vis. and Pattern Recogn., San Juan, Puerto Rico, 1997, pp. 495–501.

[11] O. Masoud, N. P. Papanikolopoulos and E. Kwon, “The use of computer vision in monitoring weaving sections,” IEEE Trans. Intell. Transport. Syst., vol. 2, no. 1, pp. 18–25, Mar. 2001.

[12] J. C. Tai, S. T. Tseng, C. P. Lin and K. T. Song, “Real-time image tracking for automatic traffic monitoring and enforcement applications,” Image and Vision Computing Journal, vol. 22, no. 6, pp. 485–501, Jun. 2004.

[13] Z. Zhu, G. Xu, B. Yang, D. Shi and X. Lin, “VISATRAM: a real-time vision system for automatic traffic monitoring,” Image and Vision Computing Journal, vol. 18, no. 10, pp. 485–501, July 2000.


[14] A. M. Sabatini, V. Genovese and E. S. Maini, “Toward low-cost vision-based 2D localisation systems for applications in rehabilitation robotics,” in Proc. IEEE Int. Conf. Intell. Robots and Syst., Lausanne, Switzerland, 2002, pp. 1355–1360.

[15] F. Y. Wang, “A simple and analytical procedure for calibrating extrinsic camera parameters,” IEEE Trans. Robot. Automat., vol. 20, no. 1, pp. 121–124, Feb. 2004.

[16] S. Ying and G. W. Boon, “Camera self-calibration from video sequences with changing focal length,” in Proc. IEEE Int. Conf. Image Processing, Chicago, Illinois, 1998, vol. 2, pp. 176–180.

[17] E. Izquierdo, “Efficient and accurate image based camera registration,” IEEE Trans. Multimedia, vol. 5, no. 3, pp. 293–302, Sept. 2003.

[18] L. L. Wang and W. H. Tsai, “Camera calibration by vanishing lines for 3-D computer vision,” IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 4, pp. 370–376, Apr. 1991.

[19] T. Echigo, “A camera calibration technique using three sets of parallel lines,” Machine Vision and Applications, vol. 3, no. 3, pp. 159–167, 1990.

[20] E. K. Bas and J. D. Crisman, “An easy to install camera calibration for traffic monitoring,” in Proc. IEEE Conf. Intell. Transport. Syst., Boston, Massachusetts, 1997, pp. 362–366.

[21] C. Zhaoxue and S. Pengfei, “Efficient method for camera calibration in traffic scenes,” IEE Electron. Lett., vol. 40, no. 6, pp. 368–369, Mar. 2004.

[22] A. H. S. Lai and N. H. C. Yung, “Lane detection by orientation and length discrimination,” IEEE Trans. Syst., Man, Cybern. Part B, vol. 30, no. 4, pp. 539–548, Aug. 2000.

[23] T. N. Schoepflin and D. J. Dailey, “Dynamic camera calibration of roadside traffic management cameras for vehicle speed estimation,” IEEE Trans. Intell. Transport. Syst., vol. 4, no. 2, pp. 90–98, Jun. 2003.

[24] N. Li, J. Bu and C. Chen, “Real-time video object segmentation using HSV space,” in Proc. IEEE Conference on Image Processing, Rochester, New York, 2002, pp. 85–88.

[25] R. Cucchiara, M. Piccardi and P. Mello, “Image analysis and rule-based reasoning for a traffic monitoring system,” IEEE Trans. on Intelligent Transportation Systems, vol. 1, no. 2, pp.119-130, 2000.

[26] C. Eveland, K. Konolige and R. Bolles, “Background modeling for segmentation of video-rate stereo sequences,” in Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, 1998, pp. 266–271.

[27] J. B. Zheng, D. D. Feng, W. C. Siu, Y. N. Zhang, X. Y. Wang and R. C. Zhao, “The accurate extraction and tracking of moving objects for video surveillance,” in Proc. International Conference on Machine Learning and Cybernetics, Beijing, China, 2002, pp. 1909–1913.

[28] P. Kumar, K. Sengupta and A. Lee, “A comparative study of different color spaces for foreground and shadow detection for traffic monitoring system,” in Proc. IEEE 5th International Conference on Intelligent Transportation Systems, Singapore, 2002, pp. 100–105.

[29] C. Stauffer and W.E.L. Grimson, “Learning patterns of activity using real-time tracking,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747-757, 2000.

[30] D. Butler, S. Sridharan and V. M. Bove, “Real-time adaptive background segmentation,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, Hong Kong, 2003, pp. 349–352.

[31] R. A. Johnson and G. K. Bhattacharyya, Statistics: Principles and Methods, John Wiley & Sons, New York, 2001.

[32] P. Kumar, S. Ranganath, W. Huang and K. Sengupta, “Framework for real-time behavior interpretation from traffic video,” IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 1, pp. 43–53, 2005.

[33] J. W. Hsieh, S. H. Yu, Y. S. Chen and W. F. Hu, “A shadow elimination method for vehicle analysis,” in Proc. IEEE Int. Conf. on Pattern Recognition, Cambridge, UK, 2004, pp. 372–375.

[34] A. Yoneyama, C. H. Yeh and C. C. J. Kuo, “Moving cast shadow elimination for robust vehicle extraction based on 2D joint vehicle/shadow models,” in Proc. IEEE Int. Conf. on Advanced Video and Signal Based Surveillance, Miami, Florida, 2003, pp. 21–22.

[35] S. Nadimi and B. Bhanu, “Physical models for moving shadow and object detection in video,” IEEE Trans. Pattern Anal. Mach. Intel., vol. 26, no. 8, pp. 1079-1087, 2004.

[36] A. Prati, I. Mikic, M. M. Trivedi and R. Cucchiara, “Detecting moving shadows: algorithms and evaluation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 7, pp. 918–923, Jul. 2003.

[37] T. Thongkamwitoon, S. Aramvith and T. H. Chalidabhongse, “An adaptive real-time background subtraction and moving shadows detection,” in Proc. IEEE Int. Conf. on Multimedia and Expo, Taipei, 2004, pp. 1459–1462.

[38] R. Cucchiara, C. Grana, M. Piccardi and A. Prati, “Detecting moving objects, ghosts and shadows in video streams,” Trans. Pattern Anal. Mach. Intel., vol. 25, no. 10, pp. 1337–1342, Oct. 2003.

[39] E. Salvador, A. Cavallaro and T. Ebrahimi, “Cast shadow segmentation using invariant colour features,” Computer Vision and Image Understanding, vol. 95, no. 2, pp. 238–259, 2004.

[40] A. Bevilacqua, “Effective shadow detection in traffic monitoring applications,” Journal of WSCG, vol. 11, no. 1, pp. 57–64, 2003.


[41] Y. Sato and K. Ikeuchi, “Reflectance analysis under solar illumination,” in Proc. IEEE Workshop on Physics-Based Modeling and Computer Vision '95, Cambridge, Massachusetts, 1995, pp. 180–187.

[42] E. E. Hilbert, C. Carl, W. Gross, G. R. Hanson, M. J. Olasaby and A. R. Johnson, “Wide area detection system - conceptual design study,” Report No. FHWA-RD-77-86, Federal Highway Administration, Washington, D.C., USA, 1978.

[43] D. Koller, K. Daniilidis and H. H. Nagel, “Model-based object tracking in monocular image sequences of road traffic scenes,” Int. J. Comput. Vis. vol. 10, pp. 257–281, 1993.

[44] A. E. C. Pece and A. D. Worrall, “Tracking with the EM contour algorithm,” in Proc. Eur. Conf. Computer Vision, Copenhagen, 2002, pp. 28–31.

[45] D. W. Lim, S. H. Choi and J. S. Jun, “Automated detection of all kinds of violations at street intersection using real time individual vehicle tracking,” in IEEE Southwest Symp. Image Anal. Interpretation, Santa Fe, New Mexico, 2002, pp. 126–129.

[46] S. Kamijo, Y. Matsushita, K. Ikeuchi and M. Sakauchi, “Traffic monitoring and accident detection at intersections,” IEEE Trans. Intell. Transport. Syst., vol. 1, pp. 108–118, 2000.

[47] W. L. Hsu, H. Y. M. Liao, B. S. Jeng and K. C. Fan, “Real-time traffic parameter extraction using entropy,” IEE Proc. Vis. Image Signal Process., vol. 151, no. 3, pp. 194–202, 2004.

[48] A. H. S. Lai and N. H. C. Yung, “Vehicle-type identification through automated virtual loop assignment and block-based direction-biased motion estimation,” IEEE Trans. Intell. Transport. Syst., vol. 1, no. 2, pp. 86–97, Jun. 2000.

[49] S. T. Tseng and K. T. Song, “Real-time image tracking for traffic monitoring,” in Proc. IEEE Int. Conf. Intell. Transport. Syst., Singapore, 2002, pp. 1–6.

[50] M. V. D. Berg, B. D. Schutter, A. Hegyi and J. Hellendoorn, “Model predictive control for mixed urban and freeway networks,” in Proc. 83rd Annual Meeting Transport. Research Board, Washington, D.C., 2004, pp. 1–19.

[51] H. C. Liu and M. Kuwahara, “A study on real-time signal control for an oversaturated network,” in Proc. 7th World Congress Intell. Transport. Syst., Torino, 2000.

[52] R. Camus, G. Fenu, G. Longo, F. Pampanin and T. Parisini, “Identification of freeway-traffic dynamic models: a real case study,” in Proc. IEEE American Control Conf., Denver, 2003, pp. 4579–4584.

[53] V. D. Zijpp, “Dynamic origin-destination matrix estimation on motorway networks,” Ph.D. dissertation, Dept. Transport. Planning Traffic Eng., Delft Univ. Technology, GA Delft, Netherlands, 1996.

[54] Y. Asakura, “Origin-destination matrices estimation model using automatic vehicle identification data and its application to the Han-Shin expressway network,” Transport. Research, vol. 27, no. 4, pp. 419–438, Jan. 2000.


[55] C. Oh, S. G. Ritchie, J. Oh and R. Jayakrishnan, “Real-time origin-destination (OD) estimation via anonymous vehicle tracking,” in Proc. IEEE Int. Conf. Intell. Transport. Syst., Singapore, 2002, pp. 582–586.

[56] A. G. Hobeika and C. K. Kim, “Traffic flow prediction systems based on upstream traffic,” in Proc. IEEE-IEE Vehicle Navigation and Information Syst. Conf., Yokohama, Japan, 1994, pp. 345 -350.

[57] S. Chen, Z. P. Sun and B. Bridge, “Automatic traffic monitoring by intelligent sound detection,” in Proc. IEEE Conf. Intell. Transport. Syst., Boston, MA, 1997, pp. 171–176.

[58] S. S. Beauchemin and J. L. Barron, “The computation of optical flow,” ACM Computing Surveys, vol. 27, no. 3, pp. 433–467, Sep. 1995.

[59] B. K. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 17, no. 1, pp. 185–203, Aug. 1981.

[60] D. J. Fleet and K. Langley, “Recursive filters for optical flow,” IEEE Trans. Pattern Anal. Machine Intell., vol. 17, no. 1, pp. 61–67, Jan. 1995.

[61] G. Tziritas, “Recursive and/or iterative estimation of the two-dimensional velocity field and reconstruction of three-dimensional motion,” Signal Processing, vol. 16, no. 1, pp. 53–72, 1989.

[62] M. Elad and A. Feuer, “Recursive optical flow estimation - adaptive filtering approach,” J. Visual Comm. and Image Representation, vol. 9, no. 2, pp. 119–138, Jun. 1998.

[63] Y. U. Yim and S. Y. Oh, “Three-feature based automatic lane detection algorithm (TFALDA) for autonomous driving,” IEEE Trans. Intell. Transport. Syst., vol. 4, no. 4, pp. 219–225, Dec. 2003.

[64] R. C. Jain, R. Kasturi and B.G. Schunck, Machine Vision. McGraw-Hill, New York, 1995.

[65] S. T. Bow, Pattern Recognition and Image Preprocessing. New York: Marcel Dekker, 2002.

[66] A. C. Bovik, S. J. Aggarwal, F. Merchant, N. H. Kim and K. R. Diller, “Automatic area and volume measurement from digital biomedical images,” in Image Analysis: Methods and Applications, D. P. Hader (Editor), CRC Press, Boca Raton, FL, 2001, pp. 23–64.

[67] J. T. McClave, T. Sincich and W. Mendenhall, Statistics (8th Edition), Prentice Hall, New Jersey, 1999.

[68] L. G. Shapiro and G. C. Stockman, Computer Vision, Prentice Hall, New Jersey, 2001.

[69] See http://www.itstaiwan.org/Home_English.asp, last visited July 14, 2006.

[70] See http://www.cmlab.csie.ntu.edu.tw/cml/dsp/training/coding/h263/h263.html, last visited July 14, 2006.


[71] J. C. Tai and K.T. Song, “Automatic contour initialization for image tracking of multi-lane vehicles and motorcycles,” in Proc. IEEE Conf. Intell. Transport. Syst., Shanghai, China, 2003, pp. 808–813.

[72] I. Haritaoglu, D. Harwood and L. S. Davis, “W4: Real-time surveillance of people and their activities,” Trans. Pattern Anal. Mach. Intel., vol. 22, pp. 809–830, Aug. 2000.

[73] M. Baumberg and D.C. Hogg, “An efficient method for contour tracking using active shape models,” in IEEE Motion Non-rigid Articulated Objects Workshop, Austin, Texas, 1994, pp. 194–199.

[74] A. Koschan, S. K. Kang, J. K. Paik, B. R. Abidi and M. A. Abidi, “Video object tracking based on extended active shape models with color information,” in Proc. Eur. Conf. Color Graphics Imaging Vision, Poitiers, France, 2002, pp. 126–131.

[75] G. Iannizzotto and L. Vita, “Real-time object tracking with movels and affine transformations,” in Int. Conf. Image Processing, 2000, pp. 316–318.

[76] A. Blake and M. Isard, Active contours, Springer Press, London, England, 1998.

[77] S. M. Bozic, Digital and Kalman filtering, Edward Arnold, London, England, 1994.

[78] C. J. Harris and M. Stephens, “A combined corner and edge detector,” in Proc. 4th Alvey Vision Conf., Manchester, 1988, pp. 147–151.

[79] A. Singh, Optic Flow Computation: A Unified Perspective. Los Alamitos, CA: IEEE Computer Society Press, 1991, pp. 33–36.

[80] W. Hu, X. Xiao, D. Xie, T. Tan and S. J. Maybank, “Traffic accident prediction using 3-D model-based vehicle tracking,” IEEE Trans. Veh. Technol. vol. 53, no. 3, pp. 677–694, 2004.

[81] D. Comaniciu and P. Meer, “Mean shift: a robust approach toward feature space analysis,” Trans. Pattern Anal. Mach. Intel., vol. 24, no. 5, pp. 603–619, 2002.

[82] E. Salvador, “Shadow segmentation and tracking in real-world conditions,” Ph.D. thesis, Signal Processing Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2004.

[83] P. Favaro and S. Soatto, “A variational approach to scene reconstruction and image segmentation from motion blur cues,” in Proc. IEEE Intl. Conf. Comp. Vis. and Patt. Recog., Washington, DC, USA, 2004, pp. 631–637.


Vita

Name: 戴任詔 (J. C. Tai)    Gender: Male

Date of birth: April 22, 1962    Place of origin: Taichung County, Taiwan

Thesis title:
Chinese: 交通參數估測系統之攝影機參數校正與影像追蹤
English: Camera Calibration and Image Tracking for Traffic Parameter Estimation

Education:
1. June 1985: Graduated from the Department of Power Mechanical Engineering, National Tsing Hua University
2. June 1992: Graduated from the Institute of Control Engineering, National Chiao Tung University
3. September 1999: Entered the Ph.D. program of the Institute of Electrical and Control Engineering, National Chiao Tung University

Experience:
1. June 1987 - November 1987: Assistant Engineer, 羽田機械
2. December 1987 - March 1990: Assistant Researcher (Grade 2), 三陽工業
3. September 1992 - present: Lecturer, Department of Mechanical Engineering, Minghsin University of Science and Technology


Publication List

Journal Papers

[1] K. T. Song and J. C. Tai, “Dynamic Calibration of Pan-Tilt-Zoom Cameras,” IEEE Trans. Systems, Man, and Cybernetics, Part B, vol. 36, no. 5, in press. [4 points]

[2] J. C. Tai, S. T. Tseng, C. P. Lin, and K. T. Song, “Real-Time Image Tracking for Automatic Traffic Monitoring and Enforcement Applications,” Image and Vision Computing, vol. 22, no. 6, pp. 485–501, 2004. [1.2 points]

[3] J. C. Tai and K. T. Song, “On the Parametric Approach to the MEMS Mask Design,” (in Chinese) Journal of Technology, vol. 17, no. 1, pp. 73–82, 2002. [0 points]

[4] K. T. Song and J. C. Tai, “Real-Time Background Estimation of Traffic Imagery Using Group-Based Histogram,” revised, Journal of Information Science and Engineering.

[5] K. T. Song and J. C. Tai, “Image-Based Traffic Monitoring with Shadow Suppression,” revised, Proceedings of the IEEE.

[6] K. T. Song and J. C. Tai, “Automatic Contour Initialization and Multi-Vehicle Tracking for Vision-Based Traffic Monitoring,” submitted to International Journal of Imaging Systems and Technology.

Conference Papers

[1] K. T. Song and J. C. Tai, “Image-Based Turn Ratio Measurement at Road Intersection,” in Proc. of the IEEE International Conference on Image Processing, Genova, Italy, 2005, pp. I-1077–1080.

[2] J. C. Tai and K. T. Song, “Background Segmentation and Its Application to Traffic Monitoring Using Modified Histogram,” in Proc. of the 2004 IEEE International Conference on Networking, Sensing & Control, Taipei, Taiwan, 2004, pp. 13–18.

[3] J. C. Tai and K. T. Song, “Automatic Contour Initialization for Image Tracking of Multi-Lane Vehicles and Motorcycles,” in Proc. of the IEEE 6th International Conference on Intelligent Transportation Systems, Shanghai, 2003, pp. 808–813.

[4] C. P. Lin, J. C. Tai and K. T. Song, “Traffic Monitoring Based on Real-Time Image Tracking,” in Proc. of the 2003 IEEE International Conference on Robotics and Automation, Taipei, 2003, pp. 2091–2096.

[5] J. C. Tai and K. T. Song, “Design of a Novel Electrostatic Linear Stepping Micromotor,” in Proc. of the 6th International Conference on Mechatronics Technology, Kitakyushu, Japan, 2002, pp. 411–416.