


CHAPTER 3 A NOVEL FACE DETECTION METHOD UNDER VARIOUS ENVIRONMENTS

3.2.3 True Eye Locating Procedure

Yow and Cipolla [26] use some geometric constraints to detect feature points, including the two eye points; however, when a face picture is taken under a biased lighting condition, these feature points cannot be detected correctly. Here, a two-step procedure is designed to locate the two true eyes. In the first step, two rules are provided to identify the true eyes. With the located eye centers (x_le, y_le) and (x_re, y_re), a rectangle is then defined to be the face area (see Figs. 3.11(b) and 3.11(d)), where de is the horizontal distance between the two eyes. In order to include the mouth area, if the distance de is shorter than a threshold 1.2w, then the bottom-right point of the face area is refined as …
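The face-area construction above can be sketched as follows. The de/2 horizontal margins and the 1.5de downward extent are read off the labels of Fig. 3.11; they are our reading of the figure, not a formula stated explicitly in the text.

```python
# Sketch of the face-area construction from the two located eye centers.
# The de/2 margins and 1.5*de vertical extent are assumptions taken from
# the labels of Fig. 3.11, not a formula given verbatim in the text.

def face_area(left_eye, right_eye):
    """Return (top_left, bottom_right) of the face rectangle
    given the two located eye centers (x, y)."""
    (xl, yl), (xr, yr) = left_eye, right_eye
    de = abs(xr - xl)                 # horizontal inter-eye distance
    eye_y = (yl + yr) / 2.0           # mean height of the eye line
    x0 = min(xl, xr) - de / 2.0       # extend de/2 beyond each eye,
    x1 = max(xl, xr) + de / 2.0       # giving a total width of 2*de
    y0 = eye_y - de / 2.0             # de/2 above the eye line
    y1 = eye_y + 1.5 * de             # 1.5*de below, to reach the mouth
    return (x0, y0), (x1, y1)

print(face_area((40, 50), (80, 50)))
# ((20.0, 30.0), (100.0, 110.0))
```

With this reading, the face rectangle is 2de wide and 2de tall, which is consistent with the proportions drawn in Figs. 3.11(b) and 3.11(d).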

Rule 1: If exactly one eye-like rectangle is located on each side of the symmetric line, these two eye-like rectangles are considered the true eyes and the face rectangle is located (see Figs. 3.11(a) and 3.11(b)).

Rule 2: If only one eye-like rectangle (ER) is located on one side of the symmetric line and more than one on the other side, consider the single rectangle ER as one true eye and identify the rectangle nearest to ER on the other side as the other true eye.

Figs. 3.11(c) and 3.11(d) show an example of the located true eyes and the corresponding face location.
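The two rules can be sketched as a small selection routine. Candidates are represented here only by their center points, and `sym_x` denotes the x-coordinate of the vertical symmetric line; both representations are simplifications for illustration.

```python
# Minimal sketch of Rules 1 and 2 for picking the true eyes from candidate
# eye-like rectangles. Candidates are center points (x, y); sym_x is the
# x-coordinate of the vertical symmetric line (a simplified representation).

def pick_true_eyes(candidates, sym_x):
    left = [c for c in candidates if c[0] < sym_x]
    right = [c for c in candidates if c[0] > sym_x]

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # Rule 1: exactly one candidate on each side -> both are true eyes.
    if len(left) == 1 and len(right) == 1:
        return left[0], right[0]
    # Rule 2: one side has a single candidate ER; the candidate nearest
    # to ER on the other side becomes the second true eye.
    if len(left) == 1 and len(right) > 1:
        er = left[0]
        return er, min(right, key=lambda c: dist(c, er))
    if len(right) == 1 and len(left) > 1:
        er = right[0]
        return min(left, key=lambda c: dist(c, er)), er
    return None  # neither rule applies; fall through to the second step

print(pick_true_eyes([(30, 40), (70, 42), (75, 80)], sym_x=50))
# ((30, 40), (70, 42))
```

Returning None when neither rule fires mirrors the text: those cases are handed to the second step of the procedure.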


(a) A case satisfying Rule 1. (b) The detected face location based on the eye locations from (a).


(c) A case satisfying Rule 2. (d) The detected face location based on the eye locations of (c).

Fig. 3.11. Two examples for true eye and face location.

For those cases not satisfying Rule 1 or Rule 2, the second step is conducted. Based on the relative locations among eyebrows, glasses, and eyes, and on the eye-like rectangle height (h) and width (w), two kinds of eye patterns (two-layered and three-layered) are defined to help locate the true eyes.

Definition 1: A pair of vertically stacked eye-like rectangles is called a two-layered pattern, and a triple a three-layered pattern, if the horizontal distance between any two neighboring rectangles is less than w and the vertical distance between them is below a bound given in terms of h. Fig. 3.12(a) shows an example of the two-layered pattern and Fig. 3.12(b) an example of the three-layered pattern.

In the second step, if we find two two-layered patterns, one on each side of the symmetric line, then both bottom rectangles are identified as the true eyes (see Figs. 3.13(a) and 3.13(b)). If only one two-layered pattern is detected, then choose its bottom eye-like rectangle as the true eye and remove all eye-like rectangles that intersect the vertical symmetric line; the nearest eye-like rectangle on the other side of the symmetric line is considered the other eye. Figs. 3.13(c) and 3.13(e) show two examples of this case. If there is no two-layered eye pattern on either side, then three-layered patterns are considered. For a three-layered eye pattern, the middle rectangle is identified as the true eye. If a three-layered pattern is detected on one side, then the other eye is identified as the closest eye-like rectangle on the other side. Fig. 3.13(g) shows an example of this case.

If neither kind of pattern is found, the rectangles nearest to the symmetric and horizontal lines are considered the true eyes (see Fig. 3.13(i)).
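The layered-pattern test of Definition 1 can be sketched as below. The horizontal bound (< w) is stated in the text; the vertical bound used here (< h) is an assumption, since that part of the definition did not survive in the source.

```python
# Sketch of the two-layered pattern test (Definition 1). Rectangles are
# (x, y, w, h) with (x, y) the top-left corner. The horizontal bound (< w)
# is from the text; the vertical bound (< h) is an assumption.

def is_two_layered(top, bottom):
    """True if `top` is stacked just above `bottom` closely enough to
    form a two-layered pattern (e.g. an eyebrow above an eye)."""
    x1, y1, w1, h1 = top
    x2, y2, w2, h2 = bottom
    horiz_gap = abs(x1 - x2)
    vert_gap = y2 - (y1 + h1)       # vertical gap between the two boxes
    w = max(w1, w2)
    h = max(h1, h2)
    return horiz_gap < w and 0 <= vert_gap < h

# An eyebrow-like rectangle stacked just above an eye-like rectangle:
print(is_two_layered((10, 20, 30, 10), (12, 34, 30, 12)))
# True
```

In a detected two-layered pattern, the bottom rectangle is the one taken as the true eye, matching the eyebrow-above-eye (or glasses-frame-above-eye) layout the definition is built on.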

(a) The two-layered pattern. (b) The three-layered pattern.

Fig. 3.12. Two examples for the two kinds of eye patterns.


(a) Two-layered patterns at each side of the symmetric line.

(b) The detected face location based on the true eye locations of (a).



(c) A two-layered pattern detected at only one side.

Fig. 3.13. Some examples for true eye and face rectangle location.

3.3 EXPERIMENTAL RESULTS

In order to show the effectiveness of the proposed method, we apply it to the HHI face database [19] of 206 images (see Fig. 3.14) and the Champion face database [20] of 227 images (see Fig. 3.15). We also collect some images from our laboratory, the Internet, and MPEG7 video clips to evaluate the performance. These contain images of people of different races under different kinds of lighting conditions (such as overhead, side, and color lighting) and different poses. The size of the faces ranges from 252x229 down to 79x79 pixels. Fig. 3.16 shows successful results of applying our method to faces with different skin colors. Successful results for faces with different poses and eyeglasses are shown in Fig. 3.17. Even if there is a shadow on a face, the face can still be detected.

Fig. 3.14. A part of images from HHI face database.

Fig. 3.15. A part of images from Champion face database.

Fig. 3.16. The detection results for persons with different skin colors.

Fig. 3.17. The detection results for persons with different poses.

In this dissertation, if the located face rectangle bounds a face, it is considered a correct detection; if a face rectangle is reported for a non-face region, we call it a false positive. For the HHI database, the correct detection rate is 93.68%, and for the Champion database it is 95.15%. Tables 3.1 and 3.2 show the detailed detection results for both databases, and Table 3.3 shows the detection results for the non-profile faces in HHI. To show the effectiveness of the proposed method, the original results of Hsu [18] on both databases and of Chow [7] on the HHI database are also given in Tables 3.1-3.3 for comparison.

Table 3.1 Detection results on HHI database (image size 640 x 480).

                                          Proposed method                    Hsu [18]
Head pose                                 Frontal / Near …                   …
Average execution time per image (sec)    … (on the mobile 1.4 GHz CPU)      22.97 (on the 1.7 GHz CPU)

Table 3.2 Detection results on Champion database (image size ~ 150 x 220).

                                          Proposed method                    Hsu [18]
No. of images                             227                                227
No. of false positives                    7                                  14
No. of misses                             11                                 19
Correct detection rate (%)                95.15                              91.63
Average execution time per image (sec)    4 (on the mobile 1.4 GHz CPU)      5.78 (on the 1.7 GHz CPU)
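The correct detection rates quoted above follow directly from the miss counts. As a check of the arithmetic for Table 3.2 (227 images):

```python
# Reproducing the correct-detection rates of Table 3.2:
# rate = (images - misses) / images, as a percentage.

def detection_rate(n_images, n_miss):
    return round(100.0 * (n_images - n_miss) / n_images, 2)

print(detection_rate(227, 11))   # 95.15  (proposed method)
print(detection_rate(227, 19))   # 91.63  (Hsu [18])
```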

Table 3.3 Detection results on non-profile faces of HHI database.

                                          Proposed method                    Chow [7]
No. of images (non-profile faces)         195 (206-11)                       151 (selected)
No. of false positives                    12                                 4
Correct detection rate (%)                93.8                               92.7

The ROC curves for both databases are given in Fig. 3.18. Fig. 3.19 shows successful detection results for a set of images from our laboratory, the Internet, and MPEG7 video clips, and Fig. 3.20 shows multiple-face detection results.

Fig. 3.18. The ROC curves for our face detector on HHI and Champion databases.

Fig. 3.19. The detection results of face images collected from our laboratory, the Internet, and MPEG7 video clips.

We have developed a face detector that can detect faces with different poses (left/right profile and non-profile) under various environments. However, detection may fail for some faces, such as strongly upward- or downward-tilted ones: we use a fixed ratio threshold to detect the horizontal eye line for a given face candidate, and this may cause the horizontal eye line detection to fail for such faces. Another failure case is a profile face taken against a skin-like background. Because we use only skin-shape information to detect profile face features, if we cannot segment the profile face skin region, we cannot detect the profile face.

Fig. 3.20. The detection results for multiple faces.

CHAPTER 4

CONCLUSIONS AND FUTURE WORKS

In this dissertation, we have presented a method to address the problem of face detection with various poses under various environments. The main contribution of the work is a face detector that can handle various poses (left/right profile and non-profile faces) and different races, and is robust over a wide range of lighting conditions (such as overhead, side, and color lighting). Even if an eye is closed or there is a shadow on the face, the face can still be detected.

The experimental results show that the proposed method is efficient and robust. The proposed face detector has a higher correct detection rate than those of Hsu et al. and Chow et al. In addition, we also present a novel, efficient algorithm to extract the head region of a person wearing skin-colored clothes; on the basis of its output, we can easily separate the head and shoulder parts of the body.

How to locate the eyes of a driver under extreme lighting conditions is an important issue for a successful intelligent transportation system. In the future, we will develop an efficient eye-blink detection system based on the results of the proposed method to handle this problem.

REFERENCES

[1] T. Hayami, K. Matsunaga, K. Shidoji, and Y. Matsuki, "Detecting drowsiness while driving by measuring eye movement – a pilot study," IEEE Proc. 5th Int'l Conf. Intelligent Transportation Systems, pp. 156-161, 2002.

[2] E. Hjelmas and B. K. Low, "Face detection: a survey," Computer Vision and Image Understanding, vol. 83, no. 3, pp. 236-274, Sep. 2001.

[3] M. H. Yang, D. J. Kriegman, and N. Ahuja, "Detecting faces in images: a survey," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34-58, Jan. 2002.

[4] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.

[5] H. Rowley, S. Baluja, and T. Kanade, "Neural network-based face detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, Jan. 1998.

[6] C. A. Waring and X. Liu, "Face detection using spectral histograms and SVMs," IEEE Trans. Systems, Man and Cybernetics, Part B: Cybernetics, vol. 35, no. 3, pp. 467-476, June 2005.

[7] T. Y. Chow, K. M. Lam, and K. W. Wong, "Efficient color face detection algorithm under different lighting conditions," J. Electronic Imaging, vol. 15, issue 1, pp. 013015(1)-013015(10), May 2006.

[8] F. Y. Shih and C. F. Chuang, "Automatic extraction of head and face boundaries and facial features," Information Sciences, vol. 158, pp. 117-130, Jan. 2004.

[9] J. Wu and Z. Zhou, "Efficient face candidates selector for face detection," Pattern Recognition, vol. 36, issue 5, pp. 1175-1186, May 2003.

[10] J. Miao, B. Yin, K. Wang, L. Shen, and X. Chen, "A hierarchical multiscale and multiangle system for human face detection in a complex background using gravity-center template," Pattern Recognition, vol. 32, issue 7, pp. 1237-1248, July 1999.

[11] J. Song, Z. Chi, and J. Liu, "A robust eye detection method using combined binary edge and intensity information," Pattern Recognition, vol. 39, issue 6, pp. 1110-1125, June 2006.

[12] V. Perlibakas, "Automatical detection of face features and exact face contour," Pattern Recognition Letters, vol. 24, issue 16, pp. 2977-2985, Dec. 2003.

[13] H. Wang, P. Li, and T. Zang, "Boosted Gaussian classifier with integral histogram for face detection," Int. J. Pattern Recognition and Artificial Intelligence, vol. 21, issue 7, pp. 1127-1139, 2007.

[14] F. Y. Shih, S. Cheng, C. F. Chuang, and Patrick S. P. Wang, "Extracting faces and facial features from color images," Int. J. Pattern Recognition and Artificial Intelligence, vol. 22, issue 3, pp. 515-534, 2008.

[15] P. Kakumanu, S. Makrogiannis, and N. Bourbakis, "A survey of skin-color modeling and detection methods," Pattern Recognition, vol. 40, issue 3, pp. 1106-1122, Mar. 2007.

[16] S. Satoh, Y. Nakamura, and T. Kanade, "Name-it: naming and detecting faces in news videos," IEEE Multimedia, vol. 6, no. 1, pp. 22-35, 1999.

[17] D. Saxe and R. Foulds, "Toward robust skin identification in video images," Proc. Second Int'l Conf. Automatic Face and Gesture Recognition, pp. 379-384, 1996.

[18] R. L. Hsu, M. A. Mottaleb, and A. K. Jain, "Face detection in color images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 696-706, May 2002.

[19] MPEG7 Content Set from Heinrich Hertz Institute, http://www.darmstadt.gmd.de/mobile/hm/projects/MPEG7/Documents/N2466.html. Oct. 1998.

[20] The Champion Database, http://www.libfind.unl.edu/alumni/events/breakfastt_for_champions.htm. Mar. 2001.

[21] T. Sim, S. Baker, and M. Bsat, "The CMU pose, illumination, and expression database," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1615-1618, Dec. 2003.

[22] M. Q. Jing and L. H. Chen, "A novel method for horizontal eye line detection under various environments," accepted by the International Journal of Pattern Recognition and Artificial Intelligence.

[23] N. Nakao, W. Ohyama, T. Wakabayashi, and F. Kimura, "Automatic detection of facial midline and its contributions to facial feature extraction," Electronic Letters on Computer Vision and Image Analysis, vol. 6, no. 3, pp. 55-65, 2008.

[24] X. Chen, P. J. Flynn, and K. W. Bowyer, "Fully automated facial symmetry axis detection in frontal color images," Fourth IEEE Workshop on Automatic Identification Advanced Technologies, pp. 106-111, 2005.

[25] H. J. So, M. H. Kim, Y. S. Chung, and N. C. Kim, "Face detection using sketch operators and vertical symmetry," Lecture Notes in Computer Science, vol. 4027, pp. 541-551, 2006.

[26] K. C. Yow and R. Cipolla, "Feature-based human face detection," CUED/F-INFENG/TR 249, University of Cambridge, Department of Engineering, England, Aug. 1996.

PUBLICATION LIST

The publication status of the proposed methods and our related research is summarized below.

(1) M. Q. Jing and L. H. Chen, "A novel method for horizontal eye line detection under various environments," accepted by the International Journal of Pattern Recognition and Artificial Intelligence.

(2) M. Q. Jing and L. H. Chen, "A novel face detection method under various environments," accepted by Opt. Eng.

(3) M. Q. Jing, C. H. Yu, H. L. Lee, and L. H. Chen, "Solving Japanese puzzles with logical rules and depth first search algorithm," accepted by the International Conference on Machine Learning and Cybernetics, 12-15 July, 2009.

(4) M. Q. Jing, W. C. Yang, and L. H. Chen, "A new steganography method via various animation timing effects in PowerPoint files," accepted by the International Conference on Machine Learning and Cybernetics, 12-15 July, 2009.

(5) M. Q. Jing, W. J. Ho, and L. H. Chen, "A novel method for shoeprints recognition and classification," accepted by the International Conference on Machine Learning and Cybernetics, 12-15 July, 2009.

(6) M. Q. Jing, C. C. Wang, and L. H. Chen, "A real-time unusual voice detector based on nursing at home," accepted by the International Conference on Machine Learning and Cybernetics, 12-15 July, 2009.

(7) M. Q. Jing, C. R. Weng, C. H. Lee, and L. H. Chen, "An algorithm for eye blink detection," submitted to Electronics Letters.

VITA

Min-Quan Jing was born in Ilan, Taiwan, Republic of China on December 15, 1973.

He received the B.S. degree in Computer Science Engineering from Chung Hua University, Hsinchu, Taiwan, in 1997, and the M.S. degree in Computer and Information Science from National Chiao Tung University in 1999. He is now a Ph.D. candidate in the Department of Computer Science at National Chiao Tung University. His current research interests include image processing, pattern recognition, and face detection.

