
Detecting Faces in Images: A Survey

Ming-Hsuan Yang, Member, IEEE, David J. Kriegman, Senior Member, IEEE, and Narendra Ahuja, Fellow, IEEE

Abstract—Images containing faces are essential to intelligent vision-based human computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation, and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face regardless of its three-dimensional position, orientation, and lighting conditions. Such a problem is challenging because faces are nonrigid and have a high degree of variability in size, shape, color, and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics, and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.

Index Terms—Face detection, face recognition, object recognition, view-based recognition, statistical pattern recognition, machine learning.

1 INTRODUCTION

With the ubiquity of new information technology and media, more effective and friendly methods for human computer interaction (HCI) are being developed which do not rely on traditional devices such as keyboards, mice, and displays. Furthermore, the ever decreasing price/performance ratio of computing coupled with recent decreases in video image acquisition cost imply that computer vision systems can be deployed in desktop and embedded systems [111], [112], [113]. The rapidly expanding research in face processing is based on the premise that information about a user's identity, state, and intent can be extracted from images, and that computers can then react accordingly, e.g., by observing a person's facial expression.

In the last five years, face and facial expression recognition have attracted much attention though they have been studied for more than 20 years by psychophysicists, neuroscientists, and engineers. Many research demonstrations and commercial applications have been developed from these efforts. A first step of any face processing system is detecting the locations in images where faces are present.

However, face detection from a single image is a challenging task because of variability in scale, location, orientation (up-right, rotated), and pose (frontal, profile). Facial expression, occlusion, and lighting conditions also change the overall appearance of faces.

We now give a definition of face detection: Given an arbitrary image, the goal of face detection is to determine whether or not there are any faces in the image and, if present, return the image location and extent of each face.

The challenges associated with face detection can be attributed to the following factors:

. Pose. The images of a face vary due to the relative camera-face pose (frontal, 45 degree, profile, upside down), and some facial features such as an eye or the nose may become partially or wholly occluded.

. Presence or absence of structural components.

Facial features such as beards, mustaches, and glasses may or may not be present and there is a great deal of variability among these components including shape, color, and size.

. Facial expression. The appearance of faces is directly affected by a person's facial expression.

. Occlusion. Faces may be partially occluded by other objects. In an image with a group of people, some faces may partially occlude other faces.

. Image orientation. Face images directly vary for different rotations about the camera’s optical axis.

. Imaging conditions. When the image is formed, factors such as lighting (spectra, source distribution and intensity) and camera characteristics (sensor response, lenses) affect the appearance of a face.

There are many closely related problems of face detection. Face localization aims to determine the image position of a single face; this is a simplified detection problem with the assumption that an input image contains only one face [85], [103]. The goal of facial feature detection is to detect the presence and location of features, such as eyes, nose, nostrils, eyebrow, mouth, lips, ears, etc., with the assumption that there is only one face in an image [28], [54].

Face recognition or face identification compares an input image (probe) against a database (gallery) and reports a match, if any [163], [133], [18]. The purpose of face authentication is to verify the claim of the identity of an individual in an input image [158], [82], while face tracking methods continuously estimate the location and possibly the orientation of a face in an image sequence in real time [30], [39], [33]. Facial expression recognition concerns identifying the affective states (happy, sad, disgusted, etc.) of humans [40], [35].

. M.-H. Yang is with Honda Fundamental Research Labs, 800 California Street, Mountain View, CA 94041. E-mail: myang@hra.com.

. D.J. Kriegman is with the Department of Computer Science and Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801. E-mail: kriegman@uiuc.edu.

. N. Ahuja is with the Department of Electrical and Computer Engineering and Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801. E-mail: ahuja@vision.ai.uiuc.edu.

Manuscript received 5 May 2000; revised 15 Jan. 2001; accepted 7 Mar. 2001. Recommended for acceptance by K. Bowyer.

Evidently, face detection is the first step in any automated system which solves the above problems. It is worth mentioning that many papers use the term “face detection,”

but the methods and the experimental results only show that a single face is localized in an input image. In this paper, we differentiate face detection from face localization since the latter is a simplified problem of the former.

Meanwhile, we focus on face detection methods rather than tracking methods.

While numerous methods have been proposed to detect faces in a single intensity or color image, we are unaware of any surveys on this particular topic. A survey of early face recognition methods before 1991 was written by Samal and Iyengar [133]. Chellappa et al. wrote a more recent survey on face recognition and some detection methods [18].

Among the face detection methods, the ones based on learning algorithms have attracted much attention recently and have demonstrated excellent results. Since these data-driven methods rely heavily on the training sets, we also discuss several databases suitable for this task. A related and important problem is how to evaluate the performance of the proposed detection methods. Many recent face detection papers compare the performance of several methods, usually in terms of detection and false alarm rates. It is also worth noticing that many metrics have been adopted to evaluate algorithms, such as learning time, execution time, the number of samples required in training, and the ratio between detection rates and false alarms.

Evaluation becomes more difficult when researchers use different definitions for detection and false alarm rates. In this paper, detection rate is defined as the ratio between the number of faces correctly detected and the number of faces determined by a human. An image region identified as a face by a classifier is considered to be correctly detected if the image region covers more than a certain percentage of a face in the image (see Section 3.3 for details). In general, detectors can make two types of errors: false negatives, in which faces are missed resulting in low detection rates, and false positives, in which an image region is declared to be a face but it is not. A fair evaluation should take these factors into consideration since one can tune the parameters of one's method to increase the detection rates while also increasing the number of false detections. In this paper, we discuss the benchmarking data sets and the related issues in a fair evaluation.
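To make these definitions concrete, the following sketch counts correct detections and false positives against human-annotated faces. The coverage function and the 50 percent threshold are placeholders for illustration only; the exact overlap criterion varies from paper to paper (see Section 3.3).

```python
def evaluate(detections, annotations, coverage, min_coverage=0.5):
    """Count correct detections and false positives: a detected region is
    correct if it covers at least `min_coverage` of some annotated face.
    `coverage(det, ann)` is left abstract; the exact criterion varies by paper."""
    matched = set()
    false_positives = 0
    for det in detections:
        hit = next((i for i, ann in enumerate(annotations)
                    if i not in matched and coverage(det, ann) >= min_coverage), None)
        if hit is None:
            false_positives += 1
        else:
            matched.add(hit)
    detection_rate = len(matched) / len(annotations) if annotations else 0.0
    return detection_rate, false_positives
```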

With over 150 reported approaches to face detection, the research in face detection has broader implications for computer vision research on object recognition. Nearly all model-based or appearance-based approaches to 3D object recognition have been limited to rigid objects while attempting to robustly perform identification over a broad range of camera locations and illumination conditions. Face detection can be viewed as a two-class recognition problem

in which an image region is classified as being a “face” or

“nonface.” Consequently, face detection is one of the few attempts to recognize from images (not abstract representations) a class of objects for which there is a great deal of within-class variability (described previously). It is also one of the few classes of objects for which this variability has been captured using large training sets of images and, so, some of the detection techniques may be applicable to a much broader class of recognition problems.

Face detection also provides interesting challenges to the underlying pattern classification and learning techniques.

When a raw or filtered image is considered as input to a pattern classifier, the dimension of the feature space is extremely large (i.e., the number of pixels in normalized training images). The classes of face and nonface images are decidedly characterized by multimodal distribution functions and effective decision boundaries are likely to be nonlinear in the image space. To be effective, classifiers must either be able to extrapolate from a modest number of training samples or be efficient when dealing with a very large number of these high-dimensional training samples.

With an aim to give a comprehensive and critical survey of current face detection methods, this paper is organized as follows: In Section 2, we give a detailed review of techniques to detect faces in a single image. Benchmarking databases and evaluation criteria are discussed in Section 3.

We conclude this paper with a discussion of several promising directions for face detection in Section 4.

Though we report error rates for each method when available, tests are often done on unique data sets and, so, comparisons are often difficult. We indicate those methods that have been evaluated with a publicly available test set. It can be assumed that a unique data set was used if we do not indicate the name of the test set.

2 DETECTING FACES IN A SINGLE IMAGE

In this section, we review existing techniques to detect faces from a single intensity or color image. We classify single image detection methods into four categories; some methods clearly overlap category boundaries and are discussed at the end of this section.

1. Knowledge-based methods. These rule-based methods encode human knowledge of what constitutes a typical face. Usually, the rules capture the relationships between facial features. These methods are designed mainly for face localization.

2. Feature invariant approaches. These algorithms aim to find structural features that exist even when the pose, viewpoint, or lighting conditions vary, and then use these to locate faces. These methods are designed mainly for face localization.

3. Template matching methods. Several standard patterns of a face are stored to describe the face as a whole or the facial features separately. The correlations between an input image and the stored patterns are computed for detection. These methods have been used for both face localization and detection.

4. Appearance-based methods. In contrast to template matching, the models (or templates) are learned from a set of training images which should capture the representative variability of facial appearance. These learned models are then used for detection. These methods are designed mainly for face detection.

1. An earlier version of this survey paper appeared at http://vision.ai.uiuc.edu/mhyang/face-dectection-survey.html in March 1999.

Table 1 summarizes algorithms and representative works for face detection in a single image within these four categories. Below, we discuss the motivation and general approach of each category. This is followed by a review of specific methods including a discussion of their pros and cons. We suggest ways to further improve these methods in Section 4.

2.1 Knowledge-Based Top-Down Methods

In this approach, face detection methods are developed based on the rules derived from the researcher’s knowledge of human faces. It is easy to come up with simple rules to describe the features of a face and their relationships. For example, a face often appears in an image with two eyes that are symmetric to each other, a nose, and a mouth. The relationships between features can be represented by their

relative distances and positions. Facial features in an input image are extracted first, and face candidates are identified based on the coded rules. A verification process is usually applied to reduce false detections.

One problem with this approach is the difficulty in translating human knowledge into well-defined rules. If the rules are detailed (i.e., strict), they may fail to detect faces that do not pass all the rules. If the rules are too general, they may give many false positives. Moreover, it is difficult to extend this approach to detect faces in different poses since it is challenging to enumerate all possible cases. On the other hand, heuristics about faces work well in detecting frontal faces in uncluttered scenes.

TABLE 1
Categorization of Methods for Face Detection in a Single Image

Fig. 1. (a) n = 1, original image. (b) n = 4. (c) n = 8. (d) n = 16. Original and corresponding low resolution images. Each square cell consists of n × n pixels in which the intensity of each pixel is replaced by the average intensity of the pixels in that cell.

Yang and Huang used a hierarchical knowledge-based method to detect faces [170]. Their system consists of three levels of rules. At the highest level, all possible face candidates are found by scanning a window over the input image and applying a set of rules at each location. The rules at a higher level are general descriptions of what a face looks like while the rules at lower levels rely on details of facial features. A multiresolution hierarchy of images is created by averaging and subsampling, and an example is shown in Fig. 1. Examples of the coded rules used to locate face candidates in the lowest resolution include: "the center part of the face (the dark shaded parts in Fig. 2) has four cells with a basically uniform intensity," "the upper round part of a face (the light shaded parts in Fig. 2) has a basically uniform intensity," and "the difference between the average gray values of the center part and the upper round part is significant." The lowest resolution (Level 1) image is searched for face candidates and these are further processed at finer resolutions. At Level 2, local histogram equalization is performed on the face candidates received from Level 1, followed by edge detection. Surviving candidate regions are then examined at Level 3 with another set of rules that respond to facial features such as the eyes and mouth.

Evaluated on a test set of 60 images, this system located faces in 50 of the test images while there are 28 images in which false alarms appear. One attractive feature of this method is that a coarse-to-fine or focus-of-attention strategy is used to reduce the required computation. Although it does not result in a high detection rate, the ideas of using a multiresolution hierarchy and rules to guide searches have been used in later face detection works [81].
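The averaging-and-subsampling hierarchy of Fig. 1 is straightforward to reproduce. The sketch below is a minimal illustration (not Yang and Huang's implementation) that replaces each n × n cell of a grayscale image with its average intensity.

```python
import numpy as np

def mosaic(image: np.ndarray, n: int) -> np.ndarray:
    """Replace every n x n cell of a grayscale image by its mean intensity,
    as in the averaging-and-subsampling hierarchy of Fig. 1."""
    h, w = image.shape
    h, w = h - h % n, w - w % n              # crop so the image tiles evenly
    cells = image[:h, :w].reshape(h // n, n, w // n, n)
    means = cells.mean(axis=(1, 3))          # one value per n x n cell
    return np.kron(means, np.ones((n, n)))   # paint the mean back over each cell

# Levels used by the rule hierarchy, e.g., n = 1 (original), 4, 8, 16:
# pyramid = [mosaic(img, n) for n in (1, 4, 8, 16)]
```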

Kotropoulos and Pitas [81] presented a rule-based localization method which is similar to [71] and [170]. First, facial features are located with a projection method that Kanade successfully used to locate the boundary of a face [71].

Let $I(x, y)$ be the intensity value of an $m \times n$ image at position $(x, y)$. The horizontal and vertical projections of the image are defined as $HI(x) = \sum_{y=1}^{n} I(x, y)$ and $VI(y) = \sum_{x=1}^{m} I(x, y)$.

The horizontal profile of an input image is obtained first, and then the two local minima, determined by detecting abrupt changes in HI, are said to correspond to the left and right side of the head. Similarly, the vertical profile is obtained and the local minima are determined for the locations of mouth lips, nose tip, and eyes. These detected features constitute a facial candidate. Fig. 3a shows one example where the boundaries

of the face correspond to the local minimum where abrupt intensity changes occur. Subsequently, eyebrow/eyes, nostrils/nose, and the mouth detection rules are used to validate these candidates. The proposed method has been tested using a set of faces in frontal views extracted from the European ACTS M2VTS (MultiModal Verification for Teleservices and Security applications) database [116] which contains video sequences of 37 different people. Each image sequence contains only one face in a uniform background. Their method provides correct face candidates in all tests. The detection rate is 86.5 percent if successful detection is defined as correctly identifying all facial features. Fig. 3b shows one example in which it becomes difficult to locate a face in a complex background using the horizontal and vertical profiles. Furthermore, this method cannot readily detect multiple faces as illustrated in Fig. 3c. Essentially, the projection method can be effective if the window over which it operates is suitably located to avoid misleading interference.
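The projection profiles defined above reduce to row and column sums, and the face boundary candidates are the deepest local minima of those profiles. The following sketch illustrates this; the helper names and the choice of returning the two deepest minima are assumptions for illustration, not Kotropoulos and Pitas' code.

```python
import numpy as np

def projection_profiles(image: np.ndarray):
    """HI(x) and VI(y) for an image indexed as image[y, x]:
    HI sums intensities over rows for each column x,
    VI sums intensities over columns for each row y."""
    HI = image.sum(axis=0)
    VI = image.sum(axis=1)
    return HI, VI

def deepest_minima(profile: np.ndarray, k: int = 2) -> np.ndarray:
    """Indices of the k lowest interior local minima of a 1D profile,
    e.g., the left/right head boundaries in HI."""
    i = np.arange(1, len(profile) - 1)
    is_min = (profile[i] < profile[i - 1]) & (profile[i] < profile[i + 1])
    candidates = i[is_min]
    return candidates[np.argsort(profile[candidates])[:k]]
```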

2.2 Bottom-Up Feature-Based Methods

In contrast to the knowledge-based top-down approach, researchers have been trying to find invariant features of faces for detection. The underlying assumption is based on the observation that humans can effortlessly detect faces and objects in different poses and lighting conditions and, so, there must exist properties or features which are invariant over these variabilities. Numerous methods have been proposed to first detect facial features and then to infer the presence of a face. Facial features such as eyebrows, eyes, nose, mouth, and hair-line are commonly extracted using edge detectors. Based on the extracted features, a statistical model is built to describe their relationships and to verify the existence of a face. One problem with these feature-based algorithms is that the image features can be severely corrupted due to illumination, noise, and occlusion. Feature boundaries can be weakened for faces, while shadows can cause numerous strong edges which together render perceptual grouping algorithms useless.

2.2.1 Facial Features

Sirohey proposed a localization method to segment a face from a cluttered background for face identification [145]. It uses an edge map (Canny detector [15]) and heuristics to remove and group edges so that only the ones on the face

Fig. 2. A typical face used in knowledge-based top-down methods: Rules are coded based on human knowledge about the characteristics (e.g., intensity distribution and difference) of the facial regions [170].

Fig. 3. (a) and (b) n = 8. (c) n = 4. Horizontal and vertical profiles. It is feasible to detect a single face by searching for the peaks in horizontal and vertical profiles. However, the same method has difficulty detecting faces in complex backgrounds or multiple faces as shown in (b) and (c).


contour are preserved. An ellipse is then fit to the boundary between the head region and the background. This algorithm achieves 80 percent accuracy on a database of 48 images with cluttered backgrounds. Instead of using edges, Chetverikov and Lerch presented a simple face detection method using blobs and streaks (linear sequences of similarly oriented edges) [20]. Their face model consists of two dark blobs and three light blobs to represent eyes, cheekbones, and nose. The model uses streaks to represent the outlines of the faces, eyebrows, and lips. Two triangular configurations are utilized to encode the spatial relationship among the blobs.

A low resolution Laplacian image is generated to facilitate blob detection. Next, the image is scanned to find specific triangular occurrences as candidates. A face is detected if streaks are identified around a candidate.

Graf et al. developed a method to locate facial features and faces in gray scale images [54]. After band pass filtering, morphological operations are applied to enhance regions with high intensity that have certain shapes (e.g., eyes). The histogram of the processed image typically exhibits a prominent peak. Based on the peak value and its width, adaptive threshold values are selected in order to generate two binarized images. Connected components are identified in both binarized images to identify the areas of candidate facial features. Combinations of such areas are then evaluated with classifiers, to determine whether and where a face is present. Their method has been tested with head-shoulder images of 40 individuals and with five video sequences where each sequence consists of 100 to 200 frames. However, it is not clear how morphological operations are performed and how the candidate facial features are combined to locate a face.

Leung et al. developed a probabilistic method to locate a face in a cluttered scene based on local feature detectors and random graph matching [87]. Their motivation is to formulate the face localization problem as a search problem in which the goal is to find the arrangement of certain facial features that is most likely to be a face pattern. Five features (two eyes, two nostrils, and nose/lip junction) are used to describe a typical face. For any pair of facial features of the same type (e.g., left-eye, right-eye pair), their relative distance is computed, and over an ensemble of images the distances are modeled by a Gaussian distribution. A facial template is defined by averaging the responses to a set of multiorientation, multiscale Gaussian derivative filters (at the pixels inside the facial feature) over a number of faces in a data set. Given a test image, candidate facial features are identified by matching the filter response at each pixel against a template vector of responses (similar to correlation in spirit). The top two feature candidates with the strongest response are selected to search for the other facial features. Since the facial features cannot appear in arbitrary arrangements, the expected locations of the other features are estimated using a statistical model of mutual distances. Furthermore, the covariance of the estimates can be computed. Thus, the expected feature locations can be estimated with high probability. Constellations are then formed only from candidates that lie inside the appropriate locations, and the most face-like constellation is determined. Finding the best constellation is formulated as a random graph matching problem in which the nodes of the

graph correspond to features on a face, and the arcs represent the distances between different features. Ranking of constellations is based on a probability density function that a constellation corresponds to a face versus the probability it was generated by an alternative mechanism (i.e., nonface).

They used a set of 150 images for experiments in which a face is considered correctly detected if any constellation correctly locates three or more features on the face. This system is able to achieve a correct localization rate of 86 percent.

Instead of using mutual distances to describe the relationships between facial features in constellations, an alternative method for modeling faces was also proposed by Leung et al. [13], [88]. The representation and ranking of the constellations is accomplished using the statistical theory of shape, developed by Kendall [75] and Mardia and Dryden [95]. The shape statistics is a joint probability density function over $N$ feature points, represented by $(x_i, y_i)$ for the $i$th feature, under the assumption that the original feature points are positioned in the plane according to a general $2N$-dimensional Gaussian distribution. They applied the same maximum-likelihood (ML) method to determine the location of a face. One advantage of these methods is that partially occluded faces can be located. However, it is unclear whether these methods can be adapted to detect multiple faces effectively in a scene.

In [177], [178], Yow and Cipolla presented a feature-based method that uses a large amount of evidence from the visual image together with contextual evidence. The first stage applies a second derivative Gaussian filter, elongated at an aspect ratio of three to one, to a raw image. Interest points, detected at the local maxima in the filter response, indicate the possible locations of facial features. The second stage examines the edges around these interest points and groups them into regions. The perceptual grouping of edges is based on their proximity and similarity in orientation and strength. Measurements of a region's characteristics, such as edge length, edge strength, and intensity variance, are computed and stored in a feature vector. From the training data of facial features, the mean and covariance matrix of each facial feature vector are computed. An image region becomes a valid facial feature candidate if the Mahalanobis distance between the corresponding feature vectors is below a threshold. The labeled features are further grouped based on model knowledge of where they should occur with respect to each other. Each facial feature and grouping is then evaluated using a Bayesian network. One attractive aspect is that this method can detect faces at different orientations and poses. The overall detection rate on a test set of 110 images of faces with different scales, orientations, and viewpoints is 85 percent [179]. However, the reported false detection rate is 28 percent and the implementation is only effective for faces larger than 60 × 60 pixels. Subsequently, this approach has been enhanced with active contour models [22], [179]. Fig. 4 summarizes their feature-based face detection method.
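The Mahalanobis gating step above lends itself to a short sketch. The following is a minimal illustration (not Yow and Cipolla's code), assuming feature vectors built from region measurements such as edge length, edge strength, and intensity variance.

```python
import numpy as np

def fit_feature_model(train_vectors: np.ndarray):
    """Mean and inverse covariance of feature vectors gathered from
    example facial features in training images."""
    mean = train_vectors.mean(axis=0)
    cov = np.cov(train_vectors, rowvar=False)
    return mean, np.linalg.inv(cov)

def is_candidate(v: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray,
                 threshold: float) -> bool:
    """Accept a region as a facial-feature candidate when its Mahalanobis
    distance to the trained feature class falls below a threshold."""
    d = v - mean
    mahalanobis = np.sqrt(d @ cov_inv @ d)
    return mahalanobis < threshold
```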

Takacs and Wechsler described a biologically motivated face localization method based on a model of retinal feature extraction and small oscillatory eye movements [157]. Their algorithm operates on the conspicuity map or region of interest, with a retina lattice modeled after the magnocellular


ganglion cells in the human vision system. The first phase computes a coarse scan of the image to estimate the location of the face, based on the filter responses of receptive fields. Each receptive field consists of a number of neurons which are implemented with Gaussian filters tuned to specific orientations. The second phase refines the conspicuity map by scanning the image area at a finer resolution to localize the face. The error rate on a test set of 426 images (200 subjects from the FERET database) is 4.69 percent.

Han et al. developed a morphology-based technique to extract what they call eye-analogue segments for face detection [58]. They argue that eyes and eyebrows are the most salient and stable features of the human face and, thus, useful for detection. They define eye-analogue segments as edges on the contours of eyes. First, morphological operations such as closing, clipped difference, and thresholding are applied to extract pixels at which the intensity values change significantly. These pixels become the eye-analogue pixels in their approach. Then, a labeling process is performed to generate the eye-analogue segments. These segments are used to guide the search for potential face regions with a geometrical combination of eyes, nose, eyebrows, and mouth. The candidate face regions are further verified by a neural network similar to [127]. Their experiments demonstrate a 94 percent accuracy rate using a test set of 122 images with 130 faces.

Recently, Amit et al. presented a method for shape detection and applied it to detect frontal-view faces in still intensity images [3]. Detection follows two stages: focusing and intensive classification. Focusing is based on spatial arrangements of edge fragments extracted from a simple edge detector using intensity difference. A rich family of such spatial arrangements, invariant over a range of photometric and geometric transformations, is defined. From a set of 300 training face images, particular spatial arrangements of edges which are more common in faces than backgrounds are selected using an inductive method developed in [4]. Meanwhile, the CART algorithm [11] is applied to grow a classification tree from the training images and a collection of false positives identified from generic background images.

Given a test image, regions of interest are identified from the spatial arrangements of edge fragments. Each region of interest is then classified as face or background using the learned CART tree. Their experimental results on a set of 100 images from the Olivetti (now AT&T) data set [136] report a false positive rate of 0.2 percent per 1,000 pixels and a false negative rate of 10 percent.

2.2.2 Texture

Human faces have a distinct texture that can be used to separate them from other objects. Augusteijn and Skufca developed a method that infers the presence of a face through the identification of face-like textures [6]. The textures are computed using second-order statistical features (SGLD) [59] on subimages of 16 × 16 pixels. Three types of features are considered: skin, hair, and others. They used a cascade correlation neural network [41] for supervised classification of textures and a Kohonen self-organizing feature map [80] to form clusters for different texture classes. To infer the presence of a face from the texture labels, they suggest using votes of the occurrence of hair and skin textures. However, only the result of texture classification is reported, not face localization or detection.

Dai and Nakano also applied the SGLD model to face detection [32]. Color information is incorporated with the face-texture model. Using the face texture model, they designed a scanning scheme for face detection in color scenes in which the orange-like parts including the face areas are enhanced. One advantage of this approach is that it can detect faces which are not upright or have features such as beards and glasses. The reported detection rate is perfect for a test set of 30 images with 60 faces.

2.2.3 Skin Color

Human skin color has been used and proven to be an effective feature in many applications from face detection to hand tracking. Although different people have different skin color, several studies have shown that the major difference lies largely in intensity rather than chrominance [54], [55], [172]. Several color spaces have been utilized to label pixels as skin including RGB [66], [67], [137], normalized RGB [102], [29], [149], [172], [30], [105], [171], [77], [151], [120], HSV (or HSI) [138], [79], [147], [146], YCrCb [167], [17], YIQ [31], [32], YES [131], CIE XYZ [19], and CIE LUV [173].

Fig. 4. (a) Yow and Cipolla model a face as a plane with six oriented facial features (eyebrows, eyes, nose, and mouth) [179]. (b) Each facial feature is modeled as pairs of oriented edges. (c) The feature selection process starts with interest points, followed by edge detection and linking, and tested by a statistical model (Courtesy of K.C. Yow and R. Cipolla).

Many methods have been proposed to build a skin color model. The simplest model is to define a region of skin tone pixels using $Cr$, $Cb$ values [17], i.e., $R(Cr, Cb)$, from samples of skin color pixels. With carefully chosen thresholds $[Cr_1, Cr_2]$ and $[Cb_1, Cb_2]$, a pixel is classified to have skin tone if its values $(Cr, Cb)$ fall within the ranges, i.e., $Cr_1 \leq Cr \leq Cr_2$ and $Cb_1 \leq Cb \leq Cb_2$. Crowley and Coutaz used a histogram $h(r, g)$ of $(r, g)$ values in normalized RGB color space to obtain the probability of obtaining a particular RGB-vector given that the pixel observes skin [29], [30]. In other words, a pixel is classified to belong to skin color if $h(r, g) \geq \tau$, where $\tau$ is a threshold selected empirically from the histogram of samples. Saxe and Foulds proposed an iterative skin identification method that uses histogram intersection in HSV color space [138]. An initial patch of skin color pixels, called the control seed, is chosen by the user and is used to initiate the iterative algorithm. To detect skin color regions, their method moves through the image, one patch at a time, and presents the control histogram and the current histogram from the image for comparison. Histogram intersection [155] is used to compare the control histogram and current histogram. If the match score or number of instances in common (i.e., intersection) is greater than a threshold, the current patch is classified as being skin color. Kjeldsen and Kender defined a color predicate in HSV color space to separate skin regions from background [79]. In contrast to the nonparametric methods mentioned above, Gaussian density functions [14], [77], [173] and a mixture of Gaussians [66], [67], [174] are often used to model skin color. The parameters in a unimodal Gaussian distribution are often estimated using maximum-likelihood [14], [77], [173]. The motivation for using a mixture of Gaussians is based on the observation that the color histogram for the skin of people with different ethnic background does not form a unimodal distribution, but rather a multimodal distribution. The parameters in a mixture of Gaussians are usually estimated using an EM algorithm [66], [174]. Recently, Jones and Rehg conducted a large-scale experiment in which nearly 1 billion labeled skin tone pixels are collected (in normalized RGB color space) [69].

Comparing the performance of histogram and mixture models for skin detection, they find histogram models to be superior in accuracy and computational cost.
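The two simplest models above, fixed Cr/Cb ranges and a thresholded histogram h(r, g) in normalized RGB, can be sketched in a few lines. The numeric ranges and the bin count below are illustrative assumptions, not values taken from the cited papers.

```python
import numpy as np

# Illustrative Cr/Cb bounds; real systems learn them from labeled skin samples.
CR_RANGE, CB_RANGE = (133, 173), (77, 127)

def skin_mask_crcb(cr: np.ndarray, cb: np.ndarray) -> np.ndarray:
    """Pixel-wise skin test: (Cr, Cb) must fall inside the chosen ranges."""
    return ((cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]) &
            (cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]))

def skin_mask_histogram(rgb: np.ndarray, hist: np.ndarray, tau: float,
                        bins: int = 32) -> np.ndarray:
    """Histogram model h(r, g) in normalized RGB: skin if h(r, g) >= tau.
    `hist` is a (bins x bins) table built from labeled skin pixels."""
    s = rgb.sum(axis=-1).clip(min=1)
    r, g = rgb[..., 0] / s, rgb[..., 1] / s
    ri = np.minimum((r * bins).astype(int), bins - 1)
    gi = np.minimum((g * bins).astype(int), bins - 1)
    return hist[ri, gi] >= tau
```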

Color information is an efficient tool for identifying facial areas and specific facial features if the skin color model can be properly adapted for different lighting environments. However, such skin color models are not effective where the spectrum of the light source varies significantly. In other words, color appearance is often unstable due to changes in both background and foreground lighting. Though the color constancy problem has been addressed through the formulation of physics-based models [45], several approaches have been proposed to use skin color in varying lighting conditions. McKenna et al. presented an adaptive color mixture model to track faces under varying illumination conditions [99]. Instead of relying on a skin color model based on color constancy, they used a stochastic model to estimate an object's color distribution online and adapt to accommodate changes in the viewing and lighting conditions.

Preliminary results show that their system can track faces within a range of illumination conditions. However, this method cannot be applied to detect faces in a single image.

Skin color alone is usually not sufficient to detect or track faces. Recently, several modular systems using a combination of shape analysis, color segmentation, and motion information for locating or tracking heads and faces in an image sequence have been developed [55], [173], [172], [99], [147]. We review these methods in the next section.

2.2.4 Multiple Features

Recently, numerous methods that combine several facial features have been proposed to locate or detect faces. Most of them utilize global features such as skin color, size, and shape

to find face candidates, and then verify these candidates using local, detailed features such as eyebrows, nose, and hair. A typical approach begins with the detection of skin-like regions as described in Section 2.2.3. Next, skin-like pixels are grouped together using connected component analysis or clustering algorithms. If the shape of a connected region has an elliptic or oval shape, it becomes a face candidate. Finally, local features are used for verification. However, others, such as [17], [63], have used different sets of features.

Yachida et al. presented a method to detect faces in color images using fuzzy theory [19], [169], [168]. They used two fuzzy models to describe the distribution of skin and hair color in CIE XYZ color space. Five (one frontal and four side views) head-shape models are used to abstract the appearance of faces in images. Each shape model is a 2D pattern consisting of m × n square cells where each cell may contain several pixels. Two properties are assigned to each cell: the skin proportion and the hair proportion, which indicate the ratios of the skin area (or the hair area) within the cell to the area of the cell. In a test image, each pixel is classified as hair, face, hair/face, and hair/background based on the distribution models, thereby generating skin-like and hair-like regions. The head shape models are then compared with the extracted skin-like and hair-like regions in a test image. If they are similar, the detected region becomes a face candidate. For verification, eye-eyebrow and nose-mouth features are extracted from a face candidate using horizontal edges.

Sobottka and Pitas proposed a method for face localization and facial feature extraction using shape and color [147].

First, color segmentation in HSV space is performed to locate skin-like regions. Connected components are then determined by region growing at a coarse resolution. For each connected component, the best fit ellipse is computed using geometric moments. Connected components that are well approximated by an ellipse are selected as face candidates.

Subsequently, these candidates are verified by searching for facial features inside of the connected components. Features, such as eyes and mouths, are extracted based on the observation that they are darker than the rest of a face. In [159], [160], a Gaussian skin color model is used to classify skin color pixels. To characterize the shape of the clusters in the binary image, a set of 11 lowest-order geometric moments is computed using Fourier and radial Mellin transforms. For detection, a neural network is trained with the extracted geometric moments. Their experiments show a detection rate of 85 percent based on a test set of 100 images.
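The best-fit ellipse used in the shape test above can be derived from low-order geometric moments of a connected component. The sketch below is a minimal illustration (not Sobottka and Pitas' implementation, and assuming a non-empty region) that returns the centroid, orientation, and semi-axes of the equivalent ellipse.

```python
import numpy as np

def ellipse_from_moments(mask: np.ndarray):
    """Best-fit ellipse of a binary region: center, orientation, and semi-axes
    derived from first- and second-order geometric moments of its pixels."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                    # centroid (first moments)
    mu20 = ((xs - cx) ** 2).mean()                   # normalized central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # orientation of major axis
    common = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    lam1 = (mu20 + mu02 + common) / 2                # eigenvalues of the covariance
    lam2 = (mu20 + mu02 - common) / 2
    major, minor = 2 * np.sqrt(lam1), 2 * np.sqrt(lam2)  # semi-axes of equivalent ellipse
    return (cx, cy), theta, (major, minor)
```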

The symmetry of face patterns has also been applied to face localization [131]. Skin/nonskin classification is carried out using the class-conditional density function in YES color space followed by smoothing in order to yield contiguous regions. Next, an elliptical face template is used to determine the similarity of the skin color regions based on Hausdorff distance [65]. Finally, the eye centers are localized using several cost functions which are designed to take advantage of the inherent symmetries associated with face and eye locations. The tip of the nose and the center of the mouth are then located by utilizing the distance between the eye centers. One drawback is that it is effective only for a single frontal-view face and when both


eyes are visible. A similar method using color and local symmetry was presented in [151].

In contrast to pixel-based methods, a detection method based on structure, color, and geometry was proposed in [173]. First, multiscale segmentation [2] is performed to extract homogeneous regions in an image. Using a Gaussian skin color model, regions of skin tone are extracted and grouped into ellipses. A face is detected if facial features such as eyes and mouth exist within these elliptic regions.

Experimental results show that this method is able to detect faces at different orientations with facial features such as beard and glasses.

Kauth et al. proposed a blob representation to extract a compact, structurally meaningful description of multispectral satellite imagery [74]. A feature vector at each pixel is formed by concatenating the pixel's image coordinates to the pixel's spectral (or textural) components; pixels are then clustered using this feature vector to form coherent connected regions, or "blobs." To detect faces, each feature vector consists of the image coordinates and normalized chrominance, i.e., $X = (x, y, \frac{r}{r+g+b}, \frac{g}{r+g+b})$ [149], [105]. A connectivity algorithm is then used to grow blobs and the resulting skin blob whose size and shape is closest to that of a canonical face is considered as a face.

Range and color have also been employed for face detection by Kim et al. [77]. Disparity maps are computed and objects are segmented from the background with a disparity histogram using the assumption that background pixels have the same depth and they outnumber the pixels in the foreground objects. Using a Gaussian distribution in normalized RGB color space, segmented regions with a skin- like color are classified as faces. A similar approach has been proposed by Darrell et al. for face detection and tracking [33].

2.3 Template Matching

In template matching, a standard face pattern (usually frontal) is manually predefined or parameterized by a function. Given an input image, the correlation values with the standard patterns are computed for the face contour, eyes, nose, and mouth independently. The existence of a face is determined based on the correlation values. This approach has the advantage of being simple to implement.

However, it has proven to be inadequate for face detection since it cannot effectively deal with variation in scale, pose, and shape. Multiresolution, multiscale, subtemplates, and deformable templates have subsequently been proposed to achieve scale and shape invariance.
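As a concrete instance of the correlation test described above, the sketch below slides a single stored face template over an image and records locations whose zero-mean normalized correlation exceeds a threshold. The threshold value and the exhaustive single-scale scan are illustrative simplifications; the methods surveyed here add multiresolution search, subtemplates, or deformable templates.

```python
import numpy as np

def normalized_correlation(window: np.ndarray, template: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between an image window and a
    stored face template of the same size; values near 1 indicate a match."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(t)
    return float((w * t).sum() / denom) if denom > 0 else 0.0

def match_template(image: np.ndarray, template: np.ndarray, tau: float = 0.6):
    """Slide the template over the image and keep locations whose correlation
    exceeds a threshold; a multiscale search would repeat this over a pyramid."""
    th, tw = template.shape
    hits = []
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = normalized_correlation(image[y:y + th, x:x + tw], template)
            if score >= tau:
                hits.append((x, y, score))
    return hits
```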

2.3.1 Predefined Templates

An early attempt to detect frontal faces in photographs is reported by Sakai et al. [132]. They used several subtemplates for the eyes, nose, mouth, and face contour to model a face.

Each subtemplate is defined in terms of line segments. Lines in the input image are extracted based on greatest gradient change and then matched against the subtemplates. The correlations between subimages and contour templates are computed first to detect candidate locations of faces. Then, matching with the other subtemplates is performed at the candidate positions. In other words, the first phase determines focus of attention or region of interest and the second phase examines the details to determine the existence of a

face. The idea of focus of attention and subtemplates has been adopted by later works on face detection.

Craw et al. presented a localization method based on a shape template of a frontal-view face (i.e., the outline shape of a face) [27]. A Sobel filter is first used to extract edges.

These edges are grouped together to search for the template of a face based on several constraints. After the head contour has been located, the same process is repeated at different scales to locate features such as eyes, eyebrows, and lips. Later, Craw et al. describe a localization method using a set of 40 templates to search for facial features and a control strategy to guide and assess the results from the template-based feature detectors [28].

Govindaraju et al. presented a two-stage face detection method in which face hypotheses are generated and tested [52], [53], [51]. A face model is built in terms of features defined by the edges. These features describe the curves of the left side, the hair-line, and the right side of a frontal face. The Marr-Hildreth edge operator is used to obtain an edge map of an input image. A filter is then used to remove objects whose contours are unlikely to be part of a face. Pairs of fragmented contours are linked based on their proximity and relative orientation. Corners are detected to segment the contour into feature curves. These feature curves are then labeled by checking their geometric properties and relative positions in the neighborhood. Pairs of feature curves are joined by edges if their attributes are compatible (i.e., if they could arise from the same face). The ratio of the feature pair forming an edge is compared with the golden ratio and a cost is assigned to the edge. If the cost of a group of three feature curves (with different labels) is low, the group becomes a hypothesis.

When detecting faces in newspaper articles, collateral information, which indicates the number of persons in the image, is obtained from the caption of the input image to select the best hypotheses [52]. Their system reports a detection rate of approximately 70 percent based on a test set of 50 photographs. However, the faces must be upright, unoccluded, and frontal. The same approach has been extended by extracting edges in the wavelet domain by Venkatraman and Govindaraju [165].

Tsukamoto et al. presented a qualitative model for face pattern (QMF) [161], [162]. In QMF, each sample image is divided into a number of blocks, and qualitative features are estimated for each block. To parameterize a face pattern,

“lightness” and “edgeness” are defined as the features in this model. Consequently, this blocked template is used to calculate “faceness” at every position of an input image. A face is detected if the faceness measure is above a predefined threshold.

Silhouettes have also been used as templates for face localization [134]. A set of basis face silhouettes is obtained using principal component analysis (PCA) on face examples in which the silhouette is represented by an array of bits.

These eigen-silhouettes are then used with a generalized Hough transform for localization. A localization method based on multiple templates for facial components was proposed in [150]. Their method defines numerous hypotheses for the possible appearances of facial features. A set of hypotheses for the existence of a face is then defined in terms of the hypotheses for facial components using the Dempster-Shafer theory [34]. Given an image, feature detectors compute


confidence factors for the existence of facial features. The confidence factors are combined to determine the measures of belief and disbelief about the existence of a face. Their system is able to locate faces in 88 images out of 94 images.

Sinha used a small set of spatial image invariants to describe the space of face patterns [143], [144]. His key insight for designing the invariant is that, while variations in illumination change the individual brightness of different parts of faces (such as eyes, cheeks, and forehead), the relative brightness of these parts remains largely unchanged.

Determining pairwise ratios of the brightness of a few such regions and retaining just the "directions" of these ratios (i.e., Is one region brighter or darker than the other?) provides a robust invariant. Thus, observed brightness regularities are encoded as a ratio template which is a coarse spatial template of a face with a few appropriately chosen subregions that roughly correspond to key facial features such as the eyes, cheeks, and forehead. The brightness constraints between facial parts are captured by an appropriate set of pairwise brighter-darker relationships between subregions. A face is located if an image satisfies all the pairwise brighter-darker constraints. The idea of using intensity differences between local adjacent regions has later been extended to a wavelet-based representation for pedestrian, car, and face detection [109].

Sinha’s method has been extended and applied to face localization in an active robot vision system [139], [10]. Fig. 5 shows the enhanced template with 23 defined relations.

These defined relations are further classified into 11 essential relations (solid arrows) and 12 confirming relations (dashed arrows). Each arrow in the figure indicates a relation, with the head of the arrow denoting the second region (i.e., the denominator of the ratio). A relation is satisfied for the face template if the ratio between two regions exceeds a threshold, and a face is localized if the number of satisfied essential and confirming relations exceeds a threshold.
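A ratio-template test of this kind reduces to comparing mean intensities of predefined subregions. The region coordinates, the brightness ratio, and the relation list in the sketch below are hypothetical placeholders standing in for the 16 regions and 23 relations of Fig. 5; this is a schematic illustration, not the implementation from [139].

```python
import numpy as np

# Hypothetical layout: each region is (top, left, bottom, right) inside a small
# normalized window; each relation says the first region should be brighter.
REGIONS = {"left_eye": (3, 2, 6, 6), "right_eye": (3, 10, 6, 14),
           "forehead": (0, 2, 3, 14), "cheeks": (7, 2, 10, 14)}
RELATIONS = [("forehead", "left_eye"), ("forehead", "right_eye"),
             ("cheeks", "left_eye"), ("cheeks", "right_eye")]

def mean_intensity(window: np.ndarray, box) -> float:
    top, left, bottom, right = box
    return float(window[top:bottom, left:right].mean())

def satisfies_ratio_template(window: np.ndarray, ratio: float = 1.2,
                             min_relations: int = 4) -> bool:
    """Declare a face if enough brighter-darker relations hold, i.e., the mean
    intensity of the first region exceeds that of the second by a fixed ratio."""
    held = sum(mean_intensity(window, REGIONS[a]) >
               ratio * mean_intensity(window, REGIONS[b])
               for a, b in RELATIONS)
    return held >= min_relations
```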

A hierarchical template matching method for face detection was proposed by Miao et al. [100]. At the first stage, an input image is rotated from -20° to 20° in steps of 5°, in order to handle rotated faces. A multiresolution image hierarchy is formed (see Fig. 1) and edges are extracted using the Laplacian operator. The face template consists of the edges produced by six facial components: two eyebrows, two eyes, one nose, and one mouth. Finally, heuristics are applied to determine the existence of a face. Their experimental results show better results in images containing a single face (frontal or rotated) than in images with multiple faces.

2.3.2 Deformable Templates

Yuille et al. used deformable templates, which fit an a priori elastic model to facial features (e.g., eyes) [180]. In this approach, facial features are described by parameterized templates. An energy function is defined to link edges, peaks, and valleys in the input image to corresponding parameters in the template. The best fit of the elastic model is found by minimizing an energy function of the parameters. Although their experimental results demonstrate good performance in tracking nonrigid features, one drawback of this approach is that the deformable template must be initialized in the proximity of the object of interest.

In [84], a detection method based on snakes [73], [90] and templates was developed. An image is first convolved with a blurring filter and then a morphological operator to enhance edges. A modified n-pixel (n is small) snake is used to find and eliminate small curve segments. Each face is approximated by an ellipse and a Hough transform of the remaining snakelets is used to find a dominant ellipse.

Thus, sets of four parameters describing the ellipses are obtained and used as candidates for face locations. For each of these candidates, a method similar to the deformable template method [180] is used to find detailed features. If a substantial number of the facial features are found and if their proportions satisfy ratio tests based on a face template, a face is considered to be detected. Lam and Yan also used snakes to locate the head boundaries with a greedy algorithm in minimizing the energy function [85].

Lanitis et al. described a face representation method with both shape and intensity information [86]. They start with sets of training images in which sampled contours such as the eye boundary, nose, and chin/cheek are manually labeled, and a vector of sample points is used to represent shape. They used a point distribution model (PDM) to characterize the shape vectors over an ensemble of individuals, and an approach similar to Kirby and Sirovich [78] to represent shape-normalized intensity appearance. A face-shape PDM can be used to locate faces in new images by using active shape model (ASM) search to estimate the face location and shape parameters. The face patch is then deformed to the average shape, and intensity parameters are extracted. The shape and intensity parameters can be used together for classification.

Cootes and Taylor applied a similar approach to localize a face in an image [25]. First, they define rectangular regions of the image containing instances of the feature of interest.

Factor analysis [5] is then applied to fit these training features and obtain a distribution function. Candidate features are determined if the probabilistic measures are above a threshold and are verified using the ASM. After training this method with 40 images, it is able to locate 35 faces in 40 test images. The ASM approach has also been extended with two Kalman filters to estimate the shape-free intensity parameters and to track faces in image sequences [39].

2.4 Appearance-Based Methods

Contrasted to the template matching methods where templates are predefined by experts, the "templates" in appearance-based methods are learned from examples in images. In general, appearance-based methods rely on techniques from statistical analysis and machine learning to find the relevant

Fig. 5. A 14 × 16 pixel ratio template for face localization based on Sinha's method. The template is composed of 16 regions (the gray boxes) and 23 relations (shown by arrows) [139] (Courtesy of B. Scassellati).


characteristics of face and nonface images. The learned characteristics are in the form of distribution models or discriminant functions that are consequently used for face detection. Meanwhile, dimensionality reduction is usually carried out for the sake of computation efficiency and detection efficacy.

Many appearance-based methods can be understood in a probabilistic framework. An image or feature vector derived from an image is viewed as a random variable $\mathbf{x}$, and this random variable is characterized for faces and nonfaces by the class-conditional density functions $p(\mathbf{x}\,|\,\mathrm{face})$ and $p(\mathbf{x}\,|\,\mathrm{nonface})$. Bayesian classification or maximum likelihood can be used to classify a candidate image location as face or nonface. Unfortunately, a straightforward implementation of Bayesian classification is infeasible because of the high dimensionality of $\mathbf{x}$, because $p(\mathbf{x}\,|\,\mathrm{face})$ and $p(\mathbf{x}\,|\,\mathrm{nonface})$ are multimodal, and because it is not yet understood if there are natural parameterized forms for $p(\mathbf{x}\,|\,\mathrm{face})$ and $p(\mathbf{x}\,|\,\mathrm{nonface})$.

Hence, much of the work in an appearance-based method concerns empirically validated parametric and nonparametric approximations to $p(\mathbf{x}\,|\,\mathrm{face})$ and $p(\mathbf{x}\,|\,\mathrm{nonface})$.
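In its simplest form, the decision rule compares the two class-conditional densities. The sketch below uses single multivariate Gaussians as stand-ins for p(x|face) and p(x|nonface); as noted above, the true densities are multimodal, so this is only a schematic illustration of the Bayesian decision, not any particular published detector.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GaussianDensity:
    """A single multivariate Gaussian as a stand-in approximation to
    p(x|face) or p(x|nonface); real systems use richer multimodal models."""
    mean: np.ndarray
    cov_inv: np.ndarray
    log_norm: float  # -0.5 * (d * log(2*pi) + log|cov|)

    def log_prob(self, x: np.ndarray) -> float:
        d = x - self.mean
        return self.log_norm - 0.5 * float(d @ self.cov_inv @ d)

def is_face(x, face: GaussianDensity, nonface: GaussianDensity,
            log_prior_ratio: float = 0.0) -> bool:
    """Bayes decision: face iff log p(x|face) - log p(x|nonface)
    exceeds -log(P(face)/P(nonface))."""
    return face.log_prob(x) - nonface.log_prob(x) >= -log_prior_ratio
```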

Another approach in appearance-based methods is to find a discriminant function (i.e., decision surface, separating hyperplane, threshold function) between face and nonface classes. Conventionally, image patterns are projected to a lower dimensional space and then a discriminant function is formed (usually based on distance metrics) for classification [163], or a nonlinear decision surface can be formed using multilayer neural networks [128]. Recently, support vector machines and other kernel methods have been proposed.

These methods implicitly project patterns to a higher dimensional space and then form a decision surface between the projected face and nonface patterns [107].

2.4.1 Eigenfaces

An early example of employing eigenvectors in face recognition was done by Kohonen [80] in which a simple neural network is demonstrated to perform face recognition for aligned and normalized face images. The neural network computes a face description by approximating the eigenvectors of the image’s autocorrelation matrix.

These eigenvectors are later known as Eigenfaces.

Kirby and Sirovich demonstrated that images of faces can be linearly encoded using a modest number of basis images [78]. This demonstration is based on the Karhunen-Loève transform [72], [93], [48], which also goes by other names, e.g., principal component analysis [68] and the Hotelling transform [50]. The idea was arguably first proposed by Pearson in 1901 [110] and then by Hotelling in 1933 [62].

Given a collection of n by m pixel training images represented as vectors of size m × n, basis vectors spanning an optimal subspace are determined such that the mean square error between the projection of the training images onto this subspace and the original images is minimized.

They call the set of optimal basis vectors eigenpictures since these are simply the eigenvectors of the covariance matrix computed from the vectorized face images in the training set.

Experiments with a set of 100 images show that a face image of 91 × 50 pixels can be effectively encoded using only 50 eigenpictures, while retaining a reasonable likeness (i.e., capturing 95 percent of the variance).

Turk and Pentland applied principal component analysis to face recognition and detection [163]. Similar to [78], principal component analysis on a training set of face images is performed to generate the Eigenpictures (here called Eigenfaces) which span a subspace (called the face space) of the image space. Images of faces are projected onto the subspace and clustered. Similarly, nonface training images are projected onto the same subspace and clustered.

Images of faces do not change radically when projected onto the face space, whereas the projections of nonface images appear quite different. To detect the presence of a face in a scene, the distance between an image region and the face space is computed for all locations in the image. The distance from face space is used as a measure of “faceness,” and the result of calculating the distance from face space at every location is a “face map.”

A face can then be detected from the local minima of the face map. Many works on face detection, recognition, and feature extraction have adopted the idea of eigenvector decomposition and clustering.
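
Continuing the sketch above, the distance from face space for one window is simply its reconstruction error after projection onto the eigenpicture basis, and scanning this measure over the image yields the face map (helper names are ours):

```python
import numpy as np

def distance_from_face_space(window, mean, basis):
    """Reconstruction error of a vectorized window with respect to the
    face space spanned by `basis`; small values indicate high faceness."""
    x = window.reshape(-1).astype(np.float64) - mean
    coeffs = basis.T @ x              # project onto the face space
    recon = basis @ coeffs            # reconstruct from the projection
    return np.linalg.norm(x - recon)  # distance from face space

def face_map(image, mean, basis, win=19, step=2):
    """Slide a win x win window over the image (win must match the size of
    the training images) and record the distance from face space at every
    location; candidate faces appear as local minima of this map."""
    h, w = image.shape
    rows = (h - win) // step + 1
    cols = (w - win) // step + 1
    out = np.full((rows, cols), np.inf)
    for i, y in enumerate(range(0, h - win + 1, step)):
        for j, x in enumerate(range(0, w - win + 1, step)):
            out[i, j] = distance_from_face_space(
                image[y:y + win, x:x + win], mean, basis)
    return out
```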

2.4.2 Distribution-Based Methods

Sung and Poggio developed a distribution-based system for face detection [152], [154] which demonstrated how the distributions of image patterns from one object class can be learned from positive and negative examples (i.e., images) of that class. Their system consists of two components: distribution-based models for face/nonface patterns and a multilayer perceptron classifier. Each face and nonface example is first normalized and processed to a 19 × 19 pixel image and treated as a 361-dimensional vector or pattern.

Next, the patterns are grouped into six face and six nonface clusters using a modified k-means algorithm, as shown in Fig. 6. Each cluster is represented as a multidimensional Gaussian function with a mean image and a covariance matrix. Fig. 7 shows the distance measures in their method.

Fig. 6. Face and nonface clusters used by Sung and Poggio [154]. Their method estimates density functions for face and nonface patterns using a set of Gaussians. The centers of these Gaussians are shown on the right (Courtesy of K.-K. Sung and T. Poggio).

Two distance metrics are computed between an input image pattern and the prototype clusters. The first distance component is the normalized Mahalanobis distance between the test pattern and the cluster centroid, measured within a lower-dimensional subspace spanned by the cluster’s 75 largest eigenvectors. The second distance component is the Euclidean distance between the test pattern and its projection onto the 75-dimensional subspace. This distance component accounts for pattern differences not captured by the first distance component. The last step is to use a multilayer perceptron (MLP) network to classify face window patterns from nonface patterns using the twelve pairs of distances to each face and nonface cluster. The classifier is trained using standard backpropagation from a database of 47,316 window patterns. There are 4,150 positive examples of face patterns and the rest are nonface patterns. Note that it is easy to collect a representative sample of face patterns, but much more difficult to get a representative sample of nonface patterns.
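
A sketch of the per-cluster, two-value distance just described (names and structure are ours, not the authors’ code); `eigvecs` holds a cluster’s 75 leading eigenvectors as columns and `eigvals` the corresponding eigenvalues:

```python
import numpy as np

def two_value_distance(x, centroid, eigvecs, eigvals):
    """Return (D1, D2) for a vectorized test pattern x and one cluster:
    D1 is the normalized Mahalanobis distance within the 75-dimensional
    subspace, D2 the Euclidean distance to the pattern's projection."""
    d = x - centroid
    coeffs = eigvecs.T @ d                       # coordinates in the subspace
    d1 = np.sqrt(np.sum(coeffs ** 2 / eigvals))  # Mahalanobis within subspace
    recon = eigvecs @ coeffs                     # projection back to image space
    d2 = np.linalg.norm(d - recon)               # residual outside the subspace
    return d1, d2
```

The 12 face and nonface clusters thus yield 24 distance values per window, which form the input to the MLP classifier.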

This problem is alleviated by a bootstrap method that selectively adds images to the training set as training progresses. Starting with a small set of nonface examples in the training set, the MLP classifier is trained with this database of examples. Then, they run the face detector on a sequence of random images and collect all the nonface patterns that the current system wrongly classifies as faces.

These false positives are then added to the training database as new nonface examples. This bootstrap method avoids the problem of explicitly collecting a representative sample of nonface patterns and has been used in later works [107], [128].
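
The bootstrap loop can be summarized in pseudocode-style Python (a sketch under assumed interfaces such as `sliding_windows`; it is not the authors’ implementation):

```python
def bootstrap_nonfaces(classifier, faces, nonfaces, face_free_images, rounds=5):
    """After each training round, scan images known to contain no faces and
    add every window the detector wrongly accepts to the nonface set."""
    for _ in range(rounds):
        classifier.train(faces, nonfaces)          # assumed training interface
        for image in face_free_images:
            for window in sliding_windows(image):  # hypothetical scanning helper
                if classifier.is_face(window):     # a false positive
                    nonfaces.append(window)
    return classifier
```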

A probabilistic visual learning method based on density estimation in a high-dimensional space using an eigenspace decomposition was developed by Moghaddam and Pentland [103]. Principal component analysis (PCA) is used to define the subspace best representing a set of face patterns. These principal components preserve the major linear correlations in the data and discard the minor ones. This method decomposes the vector space into two mutually exclusive and complementary subspaces: the principal subspace (or feature space) and its orthogonal complement. Therefore, the target density is decomposed into two components: the density in the principal subspace (spanned by the principal components) and its orthogonal complement (which is discarded in standard PCA) (See Fig. 8). A multivariate Gaussian and a mixture of Gaussians are used to learn the statistics of the local features of a face. These probability densities are then used for object detection based on maximum likelihood estimation. The proposed method has been applied to face localization, coding, and recognition.
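
One common way to write the resulting two-part estimate (following the standard eigenspace decomposition; the notation here is ours): with y_i the projections onto the M principal components with eigenvalues λ_i, ε²(x) the residual (DFFS) energy in the orthogonal complement, ρ an average of the discarded eigenvalues, and N the dimension of the image space,

\[
\hat{P}(x \mid \mathrm{face}) \;=\;
\left[\frac{\exp\!\left(-\tfrac{1}{2}\sum_{i=1}^{M} y_i^{2}/\lambda_i\right)}
{(2\pi)^{M/2}\prod_{i=1}^{M}\lambda_i^{1/2}}\right]
\left[\frac{\exp\!\left(-\,\epsilon^{2}(x)/2\rho\right)}{(2\pi\rho)^{(N-M)/2}}\right],
\]

where the first factor is the Gaussian density in the principal subspace and the second is the estimate in its orthogonal complement.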

Compared with the classic eigenface approach [163], the proposed method shows better performance in face recognition. In terms of face detection, this technique has only been demonstrated on localization; see also [76].

In [175], a detection method based on a mixture of factor analyses was proposed. Factor analysis (FA) is a statistical method for modeling the covariance structure of high dimensional data using a small number of latent variables.

FA is analogous to principal component analysis (PCA) in several aspects. However, PCA, unlike FA, does not define a proper density model for the data since the cost of coding a data point is equal anywhere along the principal component subspace (i.e., the density is unnormalized along these directions). Further, PCA is not robust to independent noise in the features of the data since the principal components maximize the variances of the input data, thereby retaining unwanted variations. Synthetic and real examples in [36], [37], [9], [7] have shown that the projected samples from different classes in the PCA subspace can often be smeared.
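
For reference, a single factor analyzer models an observation x through a small number of latent variables z (standard formulation; the notation is ours):

\[
x = \mu + \Lambda z + u, \qquad z \sim \mathcal{N}(0, I), \qquad u \sim \mathcal{N}(0, \Psi) \ \text{with } \Psi \ \text{diagonal},
\]

so that x ∼ N(μ, ΛΛ^T + Ψ). Unlike PCA, this defines a proper density, and the diagonal Ψ absorbs independent per-feature noise.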

For the cases where the samples have certain structure, PCA is suboptimal from the classification standpoint. Hinton et al. have applied FA to digit recognition, and they compare the performance of PCA and FA models [61]. A mixture model of factor analyzers has recently been extended [49] and applied to face recognition [46]. Both studies show that FA performs better than PCA in digit and face recognition. Since pose, orientation, expression, and lighting affect the appearance of a human face, the distribution of faces in the image space can be better represented by a multimodal density model where each modality captures certain characteristics of certain face appearances. They present a probabilistic method that uses a mixture of factor analyzers (MFA) to detect faces with wide variations. The parameters in the mixture model are estimated using an EM algorithm.

Fig. 7. The distance measures used by Sung and Poggio [154]. Two distance metrics are computed between an input image pattern and the prototype clusters. (a) Given a test pattern, the distance between that image pattern and each cluster is computed, yielding a set of 12 distances between the test pattern and the model’s 12 cluster centroids. (b) Each distance measurement between the test pattern and a cluster centroid is a two-value distance metric. D1 is a Mahalanobis distance between the test pattern’s projection and the cluster centroid in a subspace spanned by the cluster’s 75 largest eigenvectors. D2 is the Euclidean distance between the test pattern and its projection in the subspace. Therefore, a distance vector of 24 values is formed for each test pattern and is used by a multilayer perceptron to determine whether the input pattern belongs to the face class or not (Courtesy of K.-K. Sung and T. Poggio).

Fig. 8. Decomposition of a face image space into the principal subspace F and its orthogonal complement F̄ for an arbitrary density. Every data point x is decomposed into two components: distance in feature space (DIFS) and distance from feature space (DFFS) [103] (Courtesy of B. Moghaddam and A. Pentland).

A second method in [175] uses Fisher’s Linear Discriminant (FLD) to project samples from the high-dimensional image space to a lower-dimensional feature space. Recently, the Fisherface method [7] and others [156], [181] based on linear discriminant analysis have been shown to outperform the widely used Eigenface method [163] in face recognition on several data sets, including the Yale face database where face images are taken under varying lighting conditions. One possible explanation is that FLD provides a better projection than PCA for pattern classification since it aims to find the most discriminant projection direction. Consequently, the classification results in the projected subspace may be superior to those of other methods. (See [97] for a discussion about training set size.) In the second proposed method, they decompose the training face and nonface samples into several subclasses using Kohonen’s Self-Organizing Map (SOM) [80].

Fig. 9 shows a prototype of each face class. From these relabeled samples, the within-class and between-class scatter matrices are computed, thereby generating the optimal projection based on FLD. For each subclass, its density is modeled as a Gaussian whose parameters are estimated using maximum-likelihood [36]. To detect faces, each input image is scanned with a rectangular window in which the class-dependent probability is computed. The maximum- likelihood decision rule is used to determine whether a face is detected or not. Both methods in [175] have been tested using the databases in [128], [154] which together consist of 225 images with 619 faces, and experimental results show that these two methods have detection rates of 92.3 percent for MFA and 93.6 percent for the FLD-based method.
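
As an illustration of the FLD step (a generic sketch, not the code of [175]), the discriminant projection can be obtained from the within-class and between-class scatter matrices by solving a generalized eigenproblem; in image-sized spaces the within-class scatter is typically singular, so a PCA step or regularization is applied first:

```python
import numpy as np
from scipy.linalg import eigh

def fld_projection(samples, labels, k):
    """Return the k most discriminant directions for samples of shape (n, d)
    by solving S_b w = lambda S_w w (Fisher's Linear Discriminant)."""
    classes = np.unique(labels)
    mean = samples.mean(axis=0)
    d = samples.shape[1]
    s_w = np.zeros((d, d))   # within-class scatter
    s_b = np.zeros((d, d))   # between-class scatter
    for c in classes:
        xc = samples[labels == c]
        mc = xc.mean(axis=0)
        s_w += (xc - mc).T @ (xc - mc)
        diff = (mc - mean)[:, None]
        s_b += len(xc) * (diff @ diff.T)
    s_w += 1e-6 * np.eye(d)                      # mild regularization
    vals, vecs = eigh(s_b, s_w)                  # generalized eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:k]]   # top-k discriminant directions
```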

2.4.3 Neural Networks

Neural networks have been applied successfully in many pattern recognition problems, such as optical character recognition, object recognition, and autonomous robot driving. Since face detection can be treated as a two-class pattern recognition problem, various neural network architectures have been proposed. The advantage of using neural networks for face detection is the feasibility of training a system to capture the complex class conditional density of face patterns.

However, one drawback is that the network architecture has to be extensively tuned (number of layers, number of nodes, learning rates, etc.) to get exceptional performance.

An early method using hierarchical neural networks was proposed by Agui et al. [1]. The first stage consists of two parallel subnetworks in which the inputs are intensity values from an original image and intensity values from a filtered image obtained with a 3 × 3 Sobel filter. The inputs to the second-stage network consist of the outputs from the subnetworks and extracted feature values such as the standard deviation of the pixel values in the input pattern, a ratio of the number of white pixels to the total number of binarized pixels in a window, and geometric moments. An output value at the second stage indicates the presence of a face in the input region. Experimental results show that this method is able to detect faces if all faces in the test images have the same size.

Propp and Samal developed one of the earliest neural networks for face detection [117]. Their network consists of four layers with 1,024 input units, 256 units in the first hidden layer, eight units in the second hidden layer, and two output units. A similar hierarchical neural network was later proposed in [70]. The early method by Soulie et al. [148] scans an input image with a time-delay neural network [166] (with a receptive field of 20 × 25 pixels) to detect faces. To cope with size variation, the input image is decomposed using wavelet transforms. They reported a false negative rate of 2.7 percent and a false positive rate of 0.5 percent on a test of 120 images.

In [164], Vaillant et al. used convolutional neural networks to detect faces in images. Examples of face and nonface images of 20 × 20 pixels are first created. One neural network is trained to find approximate locations of faces at some scale.

Another network is trained to determine the exact position of faces at some scale. Given an image, areas which may contain faces are selected as face candidates by the first network.

These candidates are verified by the second network. Burel and Carel [12] proposed a neural network for face detection in which the large number of training examples of faces and nonfaces are compressed into fewer examples using Kohonen’s SOM algorithm [80]. A multilayer perceptron is used to learn these examples for face/background classification. The detection phase consists of scanning each image at various resolutions. For each location and size of the scanning window, the contents are normalized to a standard size, and the intensity mean and variance are scaled to reduce the effects of lighting conditions. Each normalized window is then classified by an MLP.
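
The scanning-and-normalization procedure just described is common to many of these detectors; the following generic sketch assumes a grayscale image and any classifier exposing a predict(window) -> bool method (the interface is an assumption, not taken from [12]):

```python
import numpy as np
from skimage.transform import resize

def scan_image(image, classifier, win=20, step=4, scales=(1.0, 0.8, 0.64)):
    """Multiscale sliding-window scan with per-window photometric
    normalization (zero mean, unit variance)."""
    detections = []
    for s in scales:
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        resized = resize(image, (h, w))
        for y in range(0, h - win + 1, step):
            for x in range(0, w - win + 1, step):
                patch = resized[y:y + win, x:x + win].astype(np.float64)
                patch = (patch - patch.mean()) / (patch.std() + 1e-8)
                if classifier.predict(patch):
                    # map the detection back to the original image scale
                    detections.append((x / s, y / s, win / s))
    return detections
```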

Feraud and Bernier presented a detection method using autoassociative neural networks [43], [42], [44]. The idea is based on [83], which shows that an autoassociative network with five layers is able to perform nonlinear principal component analysis. One autoassociative network is used to detect frontal-view faces and another one is used to detect faces turned up to 60 degrees to the left and right of the frontal view.

A gating network is also utilized to assign weights to frontal and turned face detectors in an ensemble of autoassociative networks. On a small test set of 42 images, they report a detection rate similar to [126]. The method has also been employed in LISTEN [23] and MULTRAK [8].

Lin et al. presented a face detection system using a probabilistic decision-based neural network (PDBNN) [91].

The architecture of the PDBNN is similar to a radial basis function (RBF) network with modified learning rules and a probabilistic interpretation. Instead of converting a whole face image into a training vector of intensity values for the neural network, they first extract feature vectors based on intensity and edge information in the facial region that contains the eyebrows, eyes, and nose. The two extracted feature vectors are fed into two PDBNNs, and the fusion of the outputs determines the classification result. Based on a set of 23 images provided by Sung and Poggio [154], their experimental results show comparable performance with the other leading neural network-based face detectors [154], [128].

Among all the face detection methods that use neural networks, the most significant work is arguably that of Rowley et al. [127], [126], [128]. A multilayer neural network is used to learn the face and nonface patterns from face/nonface images.

Fig. 9. Prototype of each face class using Kohonen’s SOM by Yang et al. [175]. Each prototype corresponds to the center of a cluster.
