
CHAPTER 4 SIMULATION AND RESULTS

4.1 Eye Detection

4.1.2 Eye Detection on Images of Faces Wearing Dark Sunglasses

In Figs. 4.10–4.11, we show two examples of eye detection on images of faces wearing dark sunglasses. In each figure, (a) shows the input face image; (b) is the face segment derived from (a); (c) is the 1-D curve obtained by summing the row-wise intensities of the face segment in (b); and (d) marks the located eye region in (b).
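
As a minimal sketch of this projection step (assuming the face segment is available as a grayscale NumPy array; the function name and the smoothing window are our own illustrative choices), the eye row can be located as follows:

import numpy as np

def locate_eye_row(face_segment):
    """Locate the vertical eye position in a grayscale face segment.

    Sums pixel intensities along each row; dark sunglasses produce a
    pronounced valley in this 1-D profile (cf. Fig. 4.10(c)), so the
    minimum of the lightly smoothed profile is taken as the eye row.
    """
    # 1-D profile: total intensity of each row.
    profile = face_segment.sum(axis=1).astype(np.float64)
    # Short moving average to suppress pixel noise (window of 5 rows).
    smoothed = np.convolve(profile, np.ones(5) / 5.0, mode="same")
    # The darkest band (lowest row sum) corresponds to the sunglasses.
    return int(np.argmin(smoothed))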


Fig. 4.10. Example 1 of eye location on a face wearing dark sunglasses. (a) The input face image with sunglasses. (b) The face segment derived from (a). (c) The 1-D curve obtained by summing the row-wise intensities of the face segment in (b). (d) The located eye region marked in (b).


Fig. 4.11. Example 2 of eye location on a face wearing dark sunglasses. (a) The input face image with sunglasses. (b) The face segment derived from (a). (c) The 1-D curve obtained by summing the row-wise intensities of the face segment in (b). (d) The located eye region marked in (b).

4.2 Reflection Separation

In Section 4.2.1, we show two examples of one-dimensional reflection separation with their cost values, to confirm that the cost functions of Eqs. (3.20) and (3.21) work well. In Section 4.2.2, we first show three examples of reflection separation by discretization, to illustrate the importance of the relative feature intensity between the reflection layer and the foreground layer. We then directly show the results of four examples with different reflections.

4.2.1 One-Dimensional Reflection Separation

In this section, we demonstrate a one-dimensional family of solutions following [19]. First, we define s(x, y), shown in Fig. 4.12(b), which is the "correct" decomposition for Fig. 4.12(a). We then consider decompositions of the form I1 = γ·s(x, y), I2 = I − I1, and evaluate the cost for different values of γ by Eq. (3.19). Fig. 4.12 indeed shows that the minimum over this one-dimensional subspace of solutions is obtained at the "correct" solution (γ = 1).
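
The sweep below is a minimal sketch of this experiment. Since Eq. (3.19) is not reproduced in this section, a generic sparsity-of-derivatives cost in the spirit of [19] (the sum of |gradient|^α over both layers, with α < 1) stands in for it; the thesis cost function may differ in its details.

import numpy as np

def sparsity_cost(layer, alpha=0.5):
    # Sparse-derivative prior: |gradients|**alpha with alpha < 1
    # favors layers whose gradients are mostly zero.
    gx = np.diff(layer, axis=1)
    gy = np.diff(layer, axis=0)
    return float(np.sum(np.abs(gx) ** alpha) + np.sum(np.abs(gy) ** alpha))

def sweep_gamma(I, s, gammas=np.linspace(0.0, 2.0, 41)):
    # Evaluate the one-parameter family I1 = gamma * s, I2 = I - I1.
    costs = []
    for g in gammas:
        I1 = g * s
        I2 = I - I1
        costs.append(sparsity_cost(I1) + sparsity_cost(I2))
    return gammas, np.array(costs)

Plotting costs against gammas should reproduce a curve like Fig. 4.12(c), with the minimum at γ = 1 whenever the prior favors the correct decomposition.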

However, when we apply the cost function of Eq. (3.19) to a real image, it does not favor the "correct" decomposition; the reason is the "spurious" response mentioned in Section 3.3.2. We should instead apply the refined cost function of Eq. (3.21) to natural images, and then the "correct" decomposition is indeed favored in the one-dimensional subspace (the minimum is obtained at γ = 1), as shown in Fig. 4.13.


Fig. 4.12. Example 1 for testing a one-dimensional family of solutions. (a) The original image I. (b) The correct foreground layer. (c) The cost values of the one-dimensional family of solutions for (b).


Fig. 4.13. Example 2 of the one-dimensional family of solutions. (a) The reflection image. (b) The correct foreground layer. (c) The cost values of the one-dimensional family of solutions for (b), calculated by Eq. (3.19). (d) The cost values of the one-dimensional family of solutions for (b), calculated by Eq. (3.21).

4.2.2 Reflection Separation by Discretization

In Figs. 4.14–4.16, we show three examples of reflection separation by discretization, to illustrate the importance of the relative feature intensity between the reflection layer and the foreground layer. We then directly show the results of four examples with different reflections in Fig. 4.17.
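
The inputs for Figs. 4.14–4.17 are synthetic mixtures. A minimal sketch of how such a test image can be composed follows (the 0.8/0.2 and 0.85/0.15 weights come from the captions; the function and its arguments are illustrative):

import numpy as np

def compose_reflection_image(foreground, reflection, fg_weight=0.8):
    # Linear blend of the two layers, e.g. I = 0.8 * F + 0.2 * R
    # as in Fig. 4.14(a); both layers must share the same shape.
    assert foreground.shape == reflection.shape
    return fg_weight * foreground + (1.0 - fg_weight) * reflection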


Fig. 4.14. Example 1 of separation results of images with reflections using discretization. (a) The input image, consisting of the foreground image multiplied by 0.8 and the reflection image multiplied by 0.2. (b) The separated foreground layer of (a). (c) The separated reflection layer of (a). (d) The input image, consisting of the foreground image multiplied by 0.85 and the reflection image multiplied by 0.15. (e) The separated foreground layer of (d). (f) The separated reflection layer of (d).


Fig. 4.15. Example 2 of separation results of images with reflections using discretization. (a) The input image, consisting of the foreground image multiplied by 0.8 and the reflection image multiplied by 0.2. (b) The separated foreground layer of (a). (c) The separated reflection layer of (a). (d) The input image, consisting of the foreground image multiplied by 0.85 and the reflection image multiplied by 0.15. (e) The separated foreground layer of (d). (f) The separated reflection layer of (d).


Fig. 4.16. Example 3 of separation results of images with reflections using discretization. (a) The input image, consisting of the foreground image multiplied by 0.8 and the reflection image multiplied by 0.2. (b) The separated foreground layer of (a). (c) The separated reflection layer of (a). (d) The input image, consisting of the foreground image multiplied by 0.85 and the reflection image multiplied by 0.15. (e) The separated foreground layer of (d). (f) The separated reflection layer of (d).


Fig. 4.17. Examples of separation results for images with different reflections using discretization. (a), (d), (g), and (j) The input images, each consisting of the foreground image multiplied by 0.85 and the reflection image multiplied by 0.15. (b), (e), (h), and (k) The separated foreground layers for (a), (d), (g), and (j), respectively. (c), (f), (i), and (l) The separated reflection layers for (a), (d), (g), and (j), respectively.

CHAPTER 5 CONCLUSION

In this thesis, we develop an eye detection algorithm to locate the eye region. We also introduce a reflection separation algorithm to reduce the side effects of reflections arising from glasses or sunglasses. Among the relevant applications, we are concerned with drowsiness detection systems, which provide early detection of and warning against fatigue at the wheel. Our eye detection algorithm locates the eyes with high accuracy; in particular, it remains applicable when the subject wears glasses or sunglasses. Our proposed reflection separation algorithm can retrieve and recover the eye information behind the glasses.

In our eye detection system, we first utilize the universal skin-color map to extract the face region. We then use a corner operator, an edge operator, and anisotropic diffusion to locate the eye region, detect the presence of glasses, and separate reflections. Anisotropic diffusion plays an important role in the corner- and edge-finding stages above because it reduces noise while preserving features at the same time.
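
As a minimal sketch of the Perona–Malik diffusion scheme [25] underlying this step (the parameters kappa, lam, and the iteration count are illustrative; np.roll implies periodic borders, which a production version would replace with replicated borders):

import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion [25].

    Smooths homogeneous regions while preserving edges: the diffusion
    coefficient g = exp(-(grad/kappa)**2) shuts diffusion off across
    strong gradients. lam <= 0.25 keeps the explicit scheme stable.
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences to the four nearest neighbors (periodic borders).
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Edge-stopping function: small where the gradient is large.
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        u += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u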

For future work, we shall increase the number of patches in the patch database. Besides, in order to optimize the recovered image, we should not only pick the best patch based on the error of Eq. (3.24), but also take the cost of neighboring patches into account. The optimization procedure could be improved by techniques such as belief propagation (BP), max-product BP, and graph cuts.
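
As a minimal sketch of combining the data error with a neighbor cost (Eq. (3.24) is not reproduced here, so a sum-of-squared-differences term stands in for it; the function, its weighting, and the greedy left-to-right strategy are illustrative assumptions, whereas BP or graph cuts would optimize all patches jointly):

import numpy as np

def select_patch(target, database, left_neighbor=None, w_neighbor=0.5):
    """Pick the database patch minimizing data error plus neighbor cost.

    The data term (a stand-in for Eq. (3.24)) measures the fit to the
    target region; the pairwise term penalizes seam mismatch with the
    already-placed left neighbor. BP, max-product BP, or graph cuts
    would replace this greedy choice with a joint optimization.
    """
    best, best_cost = None, np.inf
    for patch in database:
        # Unary term: sum of squared differences to the target region.
        cost = float(np.sum((patch - target) ** 2))
        if left_neighbor is not None:
            # Pairwise term: mismatch along the shared boundary column.
            seam = patch[:, 0] - left_neighbor[:, -1]
            cost += w_neighbor * float(np.sum(seam ** 2))
        if cost < best_cost:
            best, best_cost = patch, cost
    return best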

References

[1] C. H. Chang, “Drowsiness detection using fuzzy integral based information fusion,” Master Thesis, Department of Electrical and Control Engineering, National Chiao Tung University, Taiwan, June 2005.

[2] E. Hjelmas and B. K. Low, “Face detection: a survey,” Computer Vision and Image Understanding, vol. 83, pp. 236–274, 2001.

[3] D. Chai and K. N. Ngan, "Face segmentation using skin-color map in videophone applications," IEEE Trans. Circuits Syst. Video Technol., vol. 9, pp. 551–564, 1999.

[4] H. Wu, Q. Chen, and M. Yachida, “Face detection from color images using a fuzzy pattern matching method,” IEEE Trans. Pattern Anal. Machine Intell., vol. 21, pp. 557–563, 1999.

[5] J. Yang and A. Waibel, “A real-time face tracker,” in Proc. 3rd IEEE Workshop on Application of Computer Vision, 1996, pp. 142–147.

[6] R. Féraud, O. J. Bernier, J. E. Viallet, and M. Collobert, "A fast and accurate face detector based on neural networks," IEEE Trans. Pattern Anal. Machine Intell., vol. 23, pp. 42–53, 2001.

[7] D. Maio and D. Maltoni, "Real-time face location on gray-scale static images," Pattern Recognition, vol. 33, pp. 1525–1539, 2000.

[8] C. Garcia and G. Tziritas, "Face detection using quantized skin color regions merging and wavelet packet analysis," IEEE Trans. Multimedia, vol. 1, pp. 264–277, 1999.

[9] K. K. Sung and T. Poggio, “Example-based learning for view-based human face detection,” IEEE Trans. Pattern Anal. Machine Intell., vol. 20, pp. 39–51, 1998.

[10] K. C. Yow and R. Cipolla, “Feature-based human face detection,” Image and Vision Computing, vol. 15, pp. 713–735, 1997.

[11] S. A. Sirohey and A. Rosenfeld, “Eye detection in a face image using linear and nonlinear filters,” Pattern Recognition, vol. 34, pp. 1367–1391, 2001.

[12] G. C. Feng and P. C. Yuen, "Multi-cues eye detection on gray intensity image," Pattern Recognition, vol. 34, pp. 1033–1046, 2001.

[13] R. T. Kumar, S. K. Raja, and A. G. Ramakrishnan, “Eye detection using color cues and projection functions,” in Proc. IEEE Int. Conf. Image Processing, 2002, vol. 3, pp. 24–28.

[14] Z. Liu, X. He, J. Zhou, and G. Xiong, “A novel method for eye region detection in gray-level image,” in Proc. IEEE Int. Conf. Communications, Circuits and Systems and West Sino Expositions, 2002, vol. 2, pp. 1118–1121.

[15] S. Kawato and J. Ohya, "Two-step approach for real-time eye tracking with a new filtering technique," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 2000, vol. 2, pp. 1366–1371.

[16] A. Levin and Y. Weiss, "User assisted separation of reflections from a single image using a sparsity prior," in Proc. European Conf. Computer Vision (ECCV), Prague, May 2004.

[17] M. Irani and S. Peleg, “Image sequence enhancement using multiple motions analysis,” in Proc. IEEE Conf. Comput. Vision Pattern Recog., Champaign, Illinois, 1992, pp. 216–221.

[18] R. Szeliski, S. Avidan, and P. Anandan, "Layer extraction from multiple images containing reflections and transparency," in Proc. IEEE Conf. Comput. Vision Pattern Recog., 2000.

[19] A. Levin, A. Zomet, and Y. Weiss, "Separating reflections from a single image using local features," in Proc. IEEE Conf. Comput. Vision Pattern Recog., Washington, DC, June 2004.

[20] D. C. Marr and E. Hildreth, "Theory of edge detection," Proc. Roy. Soc. London, vol. B 207, pp. 187–217, 1980.

[21] A. Rosenfeld and M. Thurston, "Edge and curve detection for visual scene analysis," IEEE Trans. Comput., vol. C-20, no. 5, pp. 562–569, 1971.

[22] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, no. 6, pp. 679–698, Nov. 1986.

[23] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. 4th Alvey Vision Conference, 1988, pp. 147–151.

[24] M. J. Black, G. Sapiro, D. H. Marimont, and D. Heeger, "Robust anisotropic diffusion," IEEE Trans. Image Processing, vol. 7, no. 3, March 1998.

[25] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Machine Intell., vol. 12, no. 7, pp. 629–639, July 1990.

[26] J. Fan, D. K. Y. Yau, A. K. Elmagarmid, and W. G. Aref, "Automatic image segmentation by integrating color-edge extraction and seeded region growing," IEEE Trans. Image Processing, vol. 10, pp. 1454–1466, 2001.

[27] J. Fan, R. Wang, L. Zhang, D. Xing, and F. Gan, "Image sequence segmentation based on 2-D temporal entropy," Pattern Recognition Lett., vol. 17, pp. 1101–1107, 1996.

[28] A. Levin, A. Zomet, and Y. Weiss, “Learning to perceive transparency from the statistics of natural scenes,” in S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, 2002.

[29] J. Malik, S. Belongie, T. Leung, and J. Shi, "Contour and texture analysis for image segmentation," in K. L. Boyer and S. Sarkar, editors, Perceptual Organization for Artificial Vision Systems, Kluwer Academic, 2000.

[30] J. S. Yedidia, W. T. Freeman, and Y. Weiss, "Constructing free energy approximations and generalized belief propagation algorithms," MERL Technical Report TR2002-35, 2002. Available online at http://www.merl.com/papers/TR2002-35/.
