
* Corresponding author. Tel.: +886-2-788-3799 x1811; fax: +886-2-782-4814.

E-mail address: liao@iis.sinica.edu.tw (H.-Y.M. Liao).

A new LDA-based face recognition system which can solve the small sample size problem

Li-Fen Chen, Hong-Yuan Mark Liao*, Ming-Tat Ko, Ja-Chen Lin, Gwo-Jong Yu

Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan
Institute of Information Science, Academia Sinica, Taiwan
Institute of Computer Science and Information Engineering, National Central University, Chung-Li, Taiwan

Received 16 June 1998; received in revised form 21 June 1999; accepted 21 June 1999

Abstract

A new LDA-based face recognition system is presented in this paper. Linear discriminant analysis (LDA) is one of the most popular linear projection techniques for feature extraction. The major drawback of applying LDA is that it may encounter the small sample size problem. In this paper, we propose a new LDA-based technique which can solve the small sample size problem. We also prove that the most expressive vectors derived in the null space of the within-class scatter matrix using principal component analysis (PCA) are equal to the optimal discriminant vectors derived in the original space using LDA. The experimental results show that the new LDA process improves the performance of a face recognition system significantly. © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

Keywords: Face recognition; Feature extraction; Linear discriminant analysis; Linear algebra

1. Introduction

Face recognition has been a very hot research topic in recent years [1-4]. A complete face recognition system includes two steps, i.e., face detection [5,6] and face recognition [7,8]. In this paper, attention will be focused on the face recognition part. In the last 10 years, a great number of successful face recognition systems have been developed and reported in the literature [7-13]. Among these works, the systems reported in Refs. [7,10-13] all adopted the linear discriminant analysis (LDA) approach to enhance class separability of all sample images for recognition purposes. LDA is one of the most popular linear projection techniques for feature extraction. It finds the set of the most discriminant projection vectors which can map high-dimensional samples onto a low-dimensional space. Using the set of projection vectors determined by LDA as the projection axes, all projected samples will form the maximum between-class scatter and the minimum within-class scatter simultaneously in the projective feature space. The major drawback of applying LDA is that it may encounter the so-called small sample size problem [14]. This problem arises whenever the number of samples is smaller than the dimensionality of the samples. Under these circumstances, the sample scatter matrix may become singular, and the execution of LDA may encounter computational difficulty.
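To make the dimensionality argument concrete, the following small numpy sketch (the class counts, dimension, and variable names are made up purely for illustration) builds a within-class scatter matrix from fewer samples than feature dimensions and checks that it is rank deficient, i.e., singular:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: K = 3 classes, M = 4 samples per class, and
# feature dimension n = 20, so KM = 12 < n and S_w cannot have full rank.
K, M, n = 3, 4, 20
X = rng.normal(size=(K, M, n))        # X[k, m] is the m-th sample of class k

class_means = X.mean(axis=1)          # shape (K, n)
S_w = np.zeros((n, n))
for k in range(K):
    D = X[k] - class_means[k]         # deviations from the class mean
    S_w += D.T @ D                    # within-class scatter contribution of class k

# rank(S_w) <= K*(M-1) = 9 < n = 20, so S_w is singular and has no inverse.
print(np.linalg.matrix_rank(S_w), n)  # prints something like: 9 20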

In recent years, many researchers have noticed this problem and tried to solve it using different methods. In Ref. [11], Goudail et al. proposed a technique which calculated 25 local autocorrelation coefficients from each sample image to achieve dimensionality reduction. Similarly, Swets and Weng [12] applied the PCA approach to accomplish reduction of image dimensionality. Besides image dimensionality reduction, some researchers have tried to overcome the computational difficulty directly using linear algebra. Instead of calculating eigenvalues and eigenvectors from an n×n matrix, Fukunaga [14] proposed a more efficient algorithm that calculates eigenvalues and eigenvectors from an m×m matrix, where n is the dimensionality of the samples and m is the rank of the within-class scatter matrix S_w. In Ref. [15], Tian et al. used a positive pseudoinverse matrix S_w^+ instead of calculating the inverse matrix S_w^{-1}. For the same purpose, Hong and Yang [16] added a singular value perturbation to S_w to make it a nonsingular matrix. In Ref. [17], Cheng et al. proposed another method based on the principle of rank decomposition of matrices. The above three methods are all based on the conventional Fisher's criterion function. In 1992, Liu et al. [18] modified the conventional Fisher's criterion function and conducted a number of studies [10,18,19] based on the new criterion function. They used the total scatter matrix S_t (= S_b + S_w) as the divisor of the original Fisher's function instead of merely using the within-class scatter matrix. They then proposed another algorithm based on the Foley-Sammon transform [20] to select the set of the most discriminant projection vectors. It is known that the purpose of an LDA process is to maximize the between-class scatter while simultaneously minimizing the within-class scatter. When the small sample size problem occurs, the within-class scatter matrix S_w is singular. The theory of linear algebra tells us that it is possible to find projection vectors q such that q^T S_w q = 0 and q^T S_b q ≠ 0. Under these special circumstances, the modified Fisher's criterion function proposed by Liu et al. [10] will definitely reach its maximum value, i.e., 1. However, an arbitrary projection vector q satisfying the maximum value of the modified Fisher's criterion cannot guarantee maximum class separability unless q^T S_b q is further maximized. Liu et al.'s [10] approach also suffers from a stability problem because the eigenvalues determined using their method may be very close to each other, which results in instability of the projection vector determination process. Another drawback of Liu et al.'s approach is that their method still has to calculate an inverse matrix; most of the time, calculation of an inverse matrix is a bottleneck which reduces efficiency.

In this paper, a more efficient, accurate, and stable method is proposed to calculate the most discriminant projection vectors based on the modified Fisher's criterion. For feature extraction, a two-stage procedure is devised. In the first stage, the homogeneous regions of a face image are grouped into the same partition based on geometric characteristics, such as the eyes, nose, and mouth. For each partition, we use the mean gray value of all the pixels within the partition to represent it. Therefore, every face image is reduced to a feature vector. In the second stage, we use the feature vectors extracted in the first stage to determine the set of the most discriminant projection axes based on a new LDA process. The proposed new LDA process starts by calculating the projection vectors in the null space of the within-class scatter matrix S_w. This null space can be spanned by the eigenvectors corresponding to the zero eigenvalues of S_w. If this subspace does not exist, i.e., S_w is nonsingular, then S_t is also nonsingular. Under these circumstances, we choose the eigenvectors corresponding to the largest eigenvalues of the matrix (S_b + S_w)^{-1} S_b as the most discriminant vector set; otherwise, the small sample size problem occurs, in which case we choose the vector set that maximizes the between-class scatter of the transformed samples as the projection axes. Since the within-class scatter of all the samples is zero in the null space of S_w, the projection vector that satisfies the objective of an LDA process is the one that maximizes the between-class scatter. A similar concept has been mentioned in Ref. [13]. However, they did not show any investigation results, nor did they draw any conclusions concerning the concept. We have conducted a series of experiments and compared our results with those of Liu et al.'s approach [10] and the template matching approach. The experimental results show that our method is superior to both Liu et al.'s approach and the template matching approach in terms of recognition accuracy. Furthermore, we have also shown that our method is better than Liu et al.'s approach in terms of training efficiency as well as stability. This indicates that the new LDA process significantly improves the performance of a face recognition system.

The organization of the rest of this paper is as follows. In Section 2, the complete two-phase feature extraction procedure will be introduced. Experimental results, including those of database construction, experiments on the small sample size problem, and comparisons with two well-known approaches, will be presented in Section 3. Finally, concluding remarks will be given in Section 4.

2. Feature extraction

In this section, we shall describe in detail the proposed feature extraction technique, which includes two phases: pixel grouping and generalized LDA based on the modified Fisher's function.

2.1. Pixel grouping

According to the conclusion drawn in Ref. [21], a statistics-based face recognition system should base its recognition solely on the "pure" face portion. In order to fulfill this requirement, we have built a face-only database using a previously developed morphology-based filter [6]. Using this morphological filter, the eye-analogue segments are grouped into pairs and used to locate potential face regions. Thus, every constituent of


Fig. 1. Examples of normalized face-only images. The top two rows of images are of the same person, and the bottom two rows are of another person.

Fig. 2. Illustration of the pixel grouping process. N normalized face images are piled up and aligned into the same orientation. Suppose the image size is P×P; then P² N-dimensional vectors are obtained, and the elements of a vector are the gray values of the pixels in N different images.

the face-only database is the face portion containing only the eyes, nose and mouth. Some examples of this face database are shown in Fig. 1. In order to execute pixel grouping, the above-mentioned face-only images are transformed into normalized sizes. Let the training database be comprised of N normalized face-only images of size P×P. We pile up these N images and align them into the same orientation, as shown in Fig. 2. Therefore, we obtain P² N-dimensional vectors whose elements are the gray values of the pixels. These P² N-dimensional vectors are then clustered into m groups using the k-means clustering method, where m is the resolution of the transformed images. After clustering, each image is partitioned into m groups, and each pixel is assigned to one of the groups. For each image, we calculate the average gray value of each group and use these m mean values to represent the whole image. Thus, the P²-dimensional images are now reduced to m-dimensional ones, with m ≪ P². Fig. 3 shows some examples of the transformed images. The images in the leftmost column are the original images of size 60×60, and the others are the transformed images with increasing resolutions of 2^5, 2^6, 2^7, and 2^8 (i.e., m = 32, 64, 128, and 256), respectively, from left to right. After pixel grouping, we use the transformed images to execute the second phase, generalized LDA.
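A minimal sketch of this pixel-grouping phase, assuming the normalized face-only images are available as a numpy array of gray values; the plain k-means loop below stands in for whichever k-means implementation was actually used, and the function and variable names are ours, not the authors':

import numpy as np

def pixel_grouping(images, m, n_iter=20, seed=0):
    # images: (N, P, P) array of N normalized gray-value face images.
    # Returns an (N, m) feature matrix (each image represented by the mean gray
    # value of each of the m pixel groups) and the per-pixel group labels.
    rng = np.random.default_rng(seed)
    N = images.shape[0]
    flat = images.reshape(N, -1).astype(float)        # (N, P*P)
    pixels = flat.T                                   # (P*P, N): one N-dim vector per pixel

    # Plain k-means on the P*P pixel vectors.
    centers = pixels[rng.choice(len(pixels), size=m, replace=False)]
    for _ in range(n_iter):
        # squared Euclidean distance from every pixel vector to every center
        d2 = ((pixels ** 2).sum(1)[:, None]
              - 2.0 * pixels @ centers.T
              + (centers ** 2).sum(1)[None, :])
        labels = d2.argmin(axis=1)
        for j in range(m):                            # update centers; keep the old one if a group empties
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)

    # Represent every image by the mean gray value of each pixel group.
    features = np.zeros((N, m))
    for j in range(m):
        mask = labels == j
        if mask.any():
            features[:, j] = flat[:, mask].mean(axis=1)
    return features, labels

# Usage sketch: 1280 images of size 60 x 60 reduced to m = 128 features each.
# feats, groups = pixel_grouping(imgs, m=128)        # imgs: (1280, 60, 60)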

2.2. Generalized LDA

The purpose of pixel grouping is to reduce the dimensionality of the samples and to extract geometric features; however, it does not take class separability into consideration at all. In the literature [10-12], LDA is a well-known technique for dealing with the class separability problem. LDA can be used to determine the set of the most discriminant projection axes. After projecting all the samples onto these axes, the projected samples will form the maximum between-class scatter and the minimum within-class scatter in the projective feature space. In what follows, we shall first introduce the LDA approach and some related works. In the second subsection, we shall describe our approach in detail.

2.2.1. Conventional LDA and its potential problem

Let the training set comprise K classes, where each class contains M samples. In LDA, one has to determine the mapping

x̃_{km} = A^T x_{km},   (1)

where x_{km} denotes the n-dimensional feature vector extracted from the mth sample of the kth class, and x̃_{km} denotes the d-dimensional projective feature vector of x_{km} transformed by the n×d transformation matrix A. One way to find the mapping A is to use Fisher's criterion [22]:

F(q) = (q^T S_b q) / (q^T S_w q),   (2)

where q ∈ R^n, S_b = Σ_{k=1}^{K} (x̄_k − x̄)(x̄_k − x̄)^T and S_w = Σ_{k=1}^{K} Σ_{m=1}^{M} (x_{km} − x̄_k)(x_{km} − x̄_k)^T are the between-class scatter matrix and the within-class scatter matrix, respectively, where x̄_k = (1/M) Σ_{m=1}^{M} x_{km} is the mean of class k and x̄ = (1/KM) Σ_{k=1}^{K} Σ_{m=1}^{M} x_{km} is the global sample mean.
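These definitions translate directly into a few lines of numpy. The sketch below (function and variable names are ours, not the authors' code) builds S_b and S_w from a labeled sample matrix and, for the case where S_w is nonsingular, extracts the conventional LDA projection as the leading eigenvectors of S_w^{-1} S_b; later sketches in this section reuse the scatter_matrices helper:

import numpy as np

def scatter_matrices(X, y):
    # X: (N, n) samples as rows, y: (N,) integer class labels.
    # Returns the between-class scatter S_b and within-class scatter S_w of Eq. (2).
    classes = np.unique(y)
    n = X.shape[1]
    mean_all = X.mean(axis=0)
    S_b = np.zeros((n, n))
    S_w = np.zeros((n, n))
    for k in classes:
        Xk = X[y == k]
        mean_k = Xk.mean(axis=0)
        diff = (mean_k - mean_all)[:, None]
        S_b += diff @ diff.T              # (x_bar_k - x_bar)(x_bar_k - x_bar)^T
        D = Xk - mean_k
        S_w += D.T @ D                    # sum of (x_km - x_bar_k)(x_km - x_bar_k)^T
    return S_b, S_w

def conventional_lda(S_b, S_w, d):
    # Top-d discriminant vectors as eigenvectors of S_w^{-1} S_b; valid only
    # when S_w is nonsingular (i.e., no small sample size problem).
    evals, evecs = np.linalg.eig(np.linalg.solve(S_w, S_b))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:d]].real       # columns form the n x d matrix A of Eq. (1)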

Fig. 3. Results obtained after performing pixel grouping. The images in the leftmost column are the original images, and the others are the transformed images with increasing resolutions of 2^5, 2^6, 2^7, and 2^8, from left to right.

The column vectors of A can be chosen from the set of q̃'s, where

q̃ = arg max_{q ∈ R^n} F(q).   (3)

After projecting all the x_{km}'s (k = 1, ..., K; m = 1, ..., M) onto the q̃ axis, the projected samples x̃_{km} (k = 1, ..., K; m = 1, ..., M) will form the maximum between-class scatter and the minimum within-class scatter. The vector q̃ is called the optimal discriminant projection vector. According to linear algebra, all q̃'s can be chosen as eigenvectors corresponding to the set of largest eigenvalues of S_w^{-1} S_b. The major drawback of applying the LDA approach is that it may encounter the small sample size problem [14]. The small sample size problem occurs whenever the number of samples is smaller than the dimensionality of the samples. Whenever this happens, the matrix S_w becomes singular, and the computation of S_w^{-1} becomes complex and difficult. Liu et al. seriously addressed this problem in [10,18,19]. One of their efforts was to propose a modified Fisher's criterion function, F̂(q), to replace the original Fisher's function, F(q). They have proved [19] that F̂(q) is exactly equivalent to F(q). That is,

arg max_{q ∈ R^n} F̂(q) = arg max_{q ∈ R^n} F(q).   (4)

In what follows, we shall directly describe two theorems of Ref. [19] which are related to our work.

Theorem 1. Suppose that R is a set in the n-dimensional space and that, for all x ∈ R, f(x) ≥ 0, g(x) ≥ 0, and f(x) + g(x) > 0. Let h(x) = f(x)/g(x) and ĥ(x) = f(x)/(f(x) + g(x)). Then h(x) has its maximum (including positive infinity) at a point x0 in R if and only if ĥ(x) has its maximum at the point x0 [19].

Theorem 2. Fisher's criterion function F(q) can be replaced by

F̂(q) = (q^T S_b q) / (q^T S_w q + q^T S_b q)   (5)

in the course of solving for the discriminant vectors of the optimal set [19].

From the above two theorems, we know that F(q) and F̂(q) are functionally equivalent in terms of solving for the optimal set of projection axes (or discriminant vectors). Therefore, one can choose either F(q) or F̂(q) to derive the optimal projection axes. In this paper, we propose a new method to calculate the optimal projection axes based on F̂(q). According to the normal process of LDA, the solutions of max_{q ∈ R^n} F̂(q) should be the eigenvectors corresponding to the set of the largest eigenvalues of the matrix (S_b + S_w)^{-1} S_b. If the small sample size problem occurs at this point, the eigenvectors of (S_b + S_w)^{-1} S_b will be very difficult to compute due to the singularity problem. In order to avoid direct computation of (S_b + S_w)^{-1} S_b, Liu et al. [19] suggested deriving the discriminant vectors in the complementary subspace of the null space of S_t (S_t = S_b + S_w, the total scatter matrix), where the null space of S_t is spanned by the eigenvectors corresponding to the zero eigenvalues of S_t. Since the total scatter matrix S_t is nonsingular in the complementary subspace, it is feasible to follow the normal LDA process to derive the discriminant vectors in this subspace. However, there are still some critical problems associated with this approach. The first problem with Liu et al.'s approach is the validity of the discriminant vector set. It is known that the purpose of LDA is to maximize the between-class scatter while minimizing the within-class scatter simultaneously. In the special case where q^T S_w q = 0 and q^T S_b q ≠ 0, Eq. (5) will definitely reach the maximum value of F̂(q). However, an arbitrary projection vector q satisfying the above conditions cannot guarantee derivation of the maximum q^T S_b q value. Under these circumstances, a correct LDA process cannot be completed because only the within-class scatter is minimized while the between-class scatter is not surely maximized. The second problem associated with Liu et al.'s approach is the stability problem. In Ref. [23], the author stated that an eigenvector will be very sensitive to small perturbations if its corresponding eigenvalue is close to another eigenvalue of the same matrix. Unfortunately, in Ref. [18], the matrix used to derive the optimal projection vectors suffers from the above-mentioned problem. In other words, their optimal projection vector determination process may be severely influenced whenever a small perturbation is added. The third problem associated with Liu et al.'s approach [18] is the singularity problem. This is because their approach still has to calculate the inverse of the matrix S_t. In this paper, we propose a more efficient, accurate, and stable method to derive the most discriminant vectors from LDA based on the modified Fisher's criterion. In the proposed approach, we calculate the projection vectors in the null space of the within-class scatter matrix S_w because the projection vectors found in this subspace make all the projected samples form zero within-class scatter. Furthermore, we will also prove that finding the optimal projection vector in the original sample space is equivalent to calculating the most expressive vector [12] (via principal component analysis) in the above-mentioned subspace. In what follows, we shall describe the proposed method in detail.

2.2.2. The proposed method

Let the database comprise K classes, where each class contains M distinct samples, and let x_{km} be an n-dimensional column vector which denotes the feature vector extracted from the mth sample of the kth class. Suppose S_w and S_b are, respectively, the within-class scatter matrix and the between-class scatter matrix of the x_{km}'s (k = 1, ..., K; m = 1, ..., M), and suppose the total scatter matrix S_t = S_w + S_b. According to linear algebra [24] and the definitions of the matrices S_t, S_w, and S_b, rank(S_t) ≤ rank(S_b) + rank(S_w), where rank(S_t) = min(n, KM − 1), rank(S_b) = min(n, K − 1), and rank(S_w) = min(n, K(M − 1)). In this paper, we shall determine a set of discriminant projection vectors from the null space of S_w. Therefore, the rank of S_w certainly is the major focus of this research. Suppose the rank of S_w is r, i.e., r = min(n, K(M − 1)). If r = n, this implies that K(M − 1) ≥ n ⟹ KM ≥ n + K ⟹ KM − 1 ≥ n + K − 1 ≥ n. The above inequality means that the rank of S_t is equal to n. Consequently, if S_w is nonsingular, then S_t is nonsingular, too. Under these circumstances, there will be no singularity problem when the matrix S_t^{-1} S_b is computed in the normal LDA process. On the other hand, if r is smaller than n, the small sample size problem will occur. For this case, we propose a new method to derive the optimal projection vectors.

Fig. 4 illustrates graphically the process of deriving the optimal projection vectors when r < n. In the top part of Fig. 4, V stands for the original sample space, and T represents a linear transformation: T(x) = S_w x, x ∈ V. Since the rank of S_w is smaller than the dimensionality of V (r < n), there must exist a subspace V' ⊂ V such that V' = span{α_i | S_w α_i = 0, for i = 1, ..., n − r}. V' here is called the null space of S_w. In the bottom part of Fig. 4, the flow chart of the discriminant vector determination process is illustrated. Let Q = [α_1, ..., α_{n−r}]. First, all samples X are transformed from V into its subspace V' through the transformation Q Q^T. Then, the eigenvectors corresponding to the largest eigenvalues of the between-class scatter matrix S̃_b (a new matrix formed by the transformed samples) in the subspace V' are selected as the most discriminant vectors. In what follows, we shall describe our approach in detail.

First of all, Lemma 1 shows the subspace where we can derive the discriminant vectors based on maximizing the modified Fisher's criterion.

Lemma 1. Suppose V' = span{α_i | S_w α_i = 0, α_i ∈ R^n, i = 1, ..., n − r}, where n is the dimensionality of the samples, S_w is the within-class scatter matrix of the samples, and r is the rank of S_w. Let S_b denote the between-class scatter matrix of the samples. Then each q̃ ∈ V' which satisfies q̃^T S_b q̃ ≠ 0 maximizes the function F̂(q) = (q^T S_b q) / (q^T S_b q + q^T S_w q).

Proof. 1. Since both S_b and S_w are real symmetric, q^T S_b q ≥ 0 and q^T S_w q ≥ 0 for all q ∈ R^n, it follows that

0 ≤ q^T S_b q ≤ q^T S_b q + q^T S_w q  ⟹  0 ≤ F̂(q) = (q^T S_b q) / (q^T S_b q + q^T S_w q) ≤ 1.

It is obvious that F̂(q) = 1 if and only if q^T S_b q ≠ 0 and q^T S_w q = 0.

2. Each q̃ ∈ V' can be represented as a linear combination of the set {α_i}, i.e., q̃ = Σ_{i=1}^{n−r} a_i α_i, where a_i is the projection coefficient of q̃ with respect to α_i. Therefore, we have

S_w q̃ = S_w Σ_{i=1}^{n−r} a_i α_i = Σ_{i=1}^{n−r} a_i S_w α_i = 0  ⟹  q̃^T S_w q̃ = 0.

From 1 and 2, we conclude that, for each q̃ ∈ V' which satisfies q̃^T S_b q̃ ≠ 0, the function F̂(q) is maximized.
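A quick numerical check of Lemma 1 takes only a few lines; the sketch below uses a made-up toy configuration in which the small sample size problem occurs and reuses the scatter_matrices helper sketched in Section 2.2.1 (all names are ours):

import numpy as np

rng = np.random.default_rng(1)

# Made-up toy configuration: K = 4 classes, M = 3 samples per class, n = 30 > K*M,
# so the small sample size problem occurs and S_w is singular.
K, M, n = 4, 3, 30
class_means = rng.normal(size=(K, n)) * 3.0
X = np.repeat(class_means, M, axis=0) + rng.normal(size=(K * M, n))
y = np.repeat(np.arange(K), M)

S_b, S_w = scatter_matrices(X, y)       # helper sketched in Section 2.2.1

# Null space of S_w via its SVD: right singular vectors with (numerically) zero singular values.
U, s, Vt = np.linalg.svd(S_w)
r = int((s > 1e-8 * s.max()).sum())
null_basis = Vt[r:].T                   # columns alpha_i satisfying S_w alpha_i ~ 0

q = null_basis @ rng.normal(size=null_basis.shape[1])   # an arbitrary q in V'
num = q @ S_b @ q
F_hat = num / (num + q @ S_w @ q)
print(F_hat)                            # ~1.0 whenever q^T S_b q != 0, as Lemma 1 states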


Fig. 4. Illustration of the projection vector set determination process. At the top of the figure, T is a linear transformation from V to W: T(x) = S_w x, x ∈ V; V' is the null space of S_w. In the middle of the figure, X stands for the original sample set, and X̃ is the transformed sample feature set of X obtained through the transformation Q Q^T, where Q = [α_1, ..., α_{n−r}], n is the dimensionality of the samples, r is the rank of S_w, and S_w α_i = 0 for each α_i. The most discriminant vectors for LDA can be computed from the between-class scatter matrix S̃_b of X̃.

Lemma 1 has a critical implication for LDA. That is, when the small sample size problem occurs, an arbitrary vector q̃ ∈ V' that maximizes F̂(q) is not necessarily the optimal discriminant vector of LDA. This is because, under the above situation, q̃^T S_b q̃ is not guaranteed to reach its maximal value. Therefore, one can conclude that it is not sufficient to derive the discriminant vector set simply based on the modified Fisher's criterion when the small sample size problem occurs. In what follows, Lemma 2 will show that the within-class scatter matrix of all the transformed samples in V' is a complete zero matrix. Lemma 2 is very important because once it is proved correct, determination of the discriminant vector set no longer depends on the total scatter matrix. Instead, the discriminant vector set can be derived directly from the between-class scatter matrix.

Lemma 2. Let Q Q^T be a transformation which transforms the samples in V into the subspace V', where Q = [α_1, ..., α_{n−r}] is an n×(n−r) matrix, each α_i satisfies S_w α_i = 0 for i = 1, ..., n − r, and the subspace V' is spanned by the orthonormal set of α_i's. If all the samples are transformed into the subspace V' through Q Q^T, then the within-class scatter matrix S̃_w of the transformed samples in V' is a complete zero matrix.

Proof. Suppose x_{km} is the feature vector extracted from the mth sample of the kth class, and that the database comprises K classes, where each class contains M samples. Let y_{km} denote the transformed feature vector of x_{km} through the transformation Q Q^T. That is, y_{km} = Q Q^T x_{km}, ȳ_k = Q Q^T x̄_k, and ȳ = Q Q^T x̄, where x̄_k = (1/M) Σ_{m=1}^{M} x_{km} and x̄ = (1/KM) Σ_{k=1}^{K} Σ_{m=1}^{M} x_{km}. Thus,

S̃_w = Σ_{k=1}^{K} Σ_{m=1}^{M} (y_{km} − ȳ_k)(y_{km} − ȳ_k)^T
    = Σ_{k=1}^{K} Σ_{m=1}^{M} (Q Q^T x_{km} − Q Q^T x̄_k)(Q Q^T x_{km} − Q Q^T x̄_k)^T
    = Q Q^T [ Σ_{k=1}^{K} Σ_{m=1}^{M} (x_{km} − x̄_k)(x_{km} − x̄_k)^T ] Q Q^T
    = Q Q^T S_w Q Q^T = 0, since S_w Q = 0.   (6)

We have mentioned earlier that the LDA process is used to determine the set of the most discriminant projection axes for all the samples. After projection, all the projected samples form the minimum within-class scatter and the maximum between-class scatter. Lemma 1 tells us that for any q̃ ∈ V', as long as it satisfies q̃^T S_b q̃ ≠ 0, the modified Fisher's criterion F̂(q) will be maximized to 1.


However, Lemma 1 also tells us that we should add another criterion to perform LDA, not just depend on Fisher's criterion. Lemma 2, on the other hand, tells us that the selection of q̃ ∈ V' enforces S̃_w = 0. That is to say, S̃_t = S̃_w + S̃_b = S̃_b. Since S̃_w is consistently equal to 0, we have to select a set of projection axes that can maximize the between-class scatter in V'. From the above two lemmas, we know that maximizing the between-class scatter in V' is equal to maximizing the total scatter in V'. Under these circumstances, we can apply the principal component analysis (PCA) method [25] to derive the set of the most discriminant projection vectors and fulfill the requirement of LDA. The physical meaning of PCA is to find a set of the most expressive projection vectors such that the projected samples retain the most information about the original samples. The most expressive vectors derived from a PCA process are the l eigenvectors corresponding to the l largest eigenvalues of S̃_t, where l satisfies

(Σ_{i=1}^{l} λ_i) / (Σ_{i=1}^{n} λ_i) ≥ p,

n is the dimensionality of the samples, and λ_i represents the eigenvalue of S̃_t ordered in the ith place; the λ_i are in decreasing order from 1 to n. If p = 0.95, a good enough representation is obtained [26].

In what follows, we shall show the proposed method in Theorem 3 based on the above two lemmas.

Theorem 3. Suppose that Q = [α_1, ..., α_{n−r}], and that the α_i's are the eigenvectors corresponding to the zero eigenvalues of the within-class scatter matrix S_w in the original feature space V, where n is the dimensionality of the feature vectors and r is the rank of S_w. Let V' denote the subspace spanned by the set of eigenvectors α_1, ..., α_{n−r}. If r is smaller than n, the most expressive vector q̃ in V' obtained through the transformation Q Q^T will be the most discriminant vector in V.

Proof. 1. From Lemma 2, we know that the within-class scatter matrix S̃_w in V' is a complete zero matrix. Thus, the between-class scatter matrix S̃_b in V' is equal to the total scatter matrix S̃_t in V'.

2. The most expressive projection vector q̃ in V' satisfies q̃^T S̃_b q̃ > 0. Suppose S_b = S̃_b + Ŝ_b, where S_b, S̃_b, and Ŝ_b are all real symmetric. Then

q̃^T S_b q̃ = q̃^T S̃_b q̃ + q̃^T Ŝ_b q̃ ≥ q̃^T S̃_b q̃ > 0  ⟹  q̃^T S_b q̃ ≠ 0.

3. We can show that q̃ is the optimal solution within V' that maximizes F̂(q). Since the most expressive projection vector q̃ in V' maximizes the value of q̃^T S̃_b q̃, and q̃^T S_w q̃ = 0 is known, we conclude that the most expressive projection vector in V' is the most discriminant projection vector in V for LDA.

After projecting all the samples onto the projective feature space based on Theorem 3, a Euclidean distance classifier is used to perform classification in the projective feature space.
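As a concrete illustration of this classification step, the following sketch (our own naming; the paper does not spell out whether the Euclidean rule compares against class means or individual training samples, so the nearest-class-mean variant below is an assumption) projects training and test samples with the discriminant matrix and assigns each test sample to the class whose projected mean is closest:

import numpy as np

def classify_euclidean(A, train_X, train_y, test_X):
    # A: (n, d) matrix whose columns are the most discriminant vectors.
    # Nearest-class-mean rule in the projective feature space (an assumed
    # reading of the paper's "Euclidean distance classifier").
    proj_train = train_X @ A                      # (N, d) projected training samples
    proj_test = test_X @ A                        # (T, d) projected test samples
    classes = np.unique(train_y)
    centers = np.stack([proj_train[train_y == k].mean(axis=0) for k in classes])
    d2 = ((proj_test[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return classes[d2.argmin(axis=1)]             # predicted label for each test sample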

The proposed algorithm

Input: N n-dimensional vectors.
Output: The optimal discriminant vector set of all N input vectors.

Algorithm:
Step 1. Calculate the within-class scatter matrix S_w and the between-class scatter matrix S_b.
Step 2. Suppose the rank of S_w is r. If r = n, then the discriminant set is the eigenvectors corresponding to the set of the largest eigenvalues of the matrix (S_b + S_w)^{-1} S_b; otherwise, go on to the next step.
Step 3. Perform the singular value decomposition of S_w as S_w = U Σ V^T, where U = V because S_w is symmetric.
Step 4. Let V = [v_1, ..., v_r, v_{r+1}, ..., v_n] and Q = [v_{r+1}, ..., v_n]. (It has been shown in Ref. [24] that the null space of S_w can be spanned by v_{r+1}, ..., v_n.)
Step 5. Compute S̃_b = Q Q^T S_b (Q Q^T)^T.
Step 6. Calculate the eigenvectors corresponding to the set of the largest eigenvalues of S̃_b and use them to form the most discriminant vector set for LDA.
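Assuming numpy-style matrices, the six steps above translate into the short sketch below. It reuses the scatter_matrices helper sketched in Section 2.2.1; the function and variable names are ours, and it is an illustrative reconstruction of the listed steps rather than the authors' implementation:

import numpy as np

def proposed_lda(X, y, d, tol=1e-10):
    # X: (N, n) feature vectors (e.g. the m-dimensional pixel-grouping features),
    # y: (N,) class labels, d: number of discriminant vectors to return.

    # Step 1: within-class and between-class scatter matrices.
    S_b, S_w = scatter_matrices(X, y)
    n = X.shape[1]

    # Step 3: SVD of the symmetric S_w (also used to decide the rank for Step 2).
    U, s, Vt = np.linalg.svd(S_w)
    r = int((s > tol * s.max()).sum())

    # Step 2: if S_w has full rank, fall back to the conventional solution.
    if r == n:
        evals, evecs = np.linalg.eig(np.linalg.solve(S_b + S_w, S_b))
        order = np.argsort(evals.real)[::-1]
        return evecs[:, order[:d]].real

    # Step 4: Q spans the null space of S_w (the last n - r right singular vectors).
    Q = Vt[r:].T                                     # (n, n - r)

    # Step 5: between-class scatter of the samples transformed by Q Q^T.
    P = Q @ Q.T
    S_b_tilde = P @ S_b @ P.T

    # Step 6: most discriminant vectors = leading eigenvectors of S_b_tilde.
    evals, evecs = np.linalg.eigh(S_b_tilde)         # symmetric, so eigh is safe
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:d]]

The returned columns can then be fed to a Euclidean distance classifier, e.g. the classify_euclidean sketch given earlier.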

3. Experimental results

3.1. Database construction and feature extraction

The facial image database contained 128 persons (classes); for each person, 10 different face images with frontal views were obtained. The process for collecting facial images was as follows: each person was asked to sit down in front of a CCD camera, with a neutral expression and slight head movement in frontal view, and a 30-s period was recorded on videotape under a well-controlled lighting condition. Later, a frame grabber was used to grab 10 image frames from the videotape and store them with a resolution of 155×175 pixels. According to the conclusion drawn in Ref. [21], which stated that a statistics-based face recognition system should base its recognition solely on the "pure" face portion, a face-only database was built using a previously developed morphology-based filter [6]. Part of the database is shown in Fig. 1. For pixel grouping, each database image was transformed into a normalized size, 60×60. Then, all the 1280 database images (128×10) were piled up and aligned into the same orientation. After this process, 3600 1280-dimensional vectors were obtained. These vectors were then clustered into m groups (where m stands for the required resolution) using the k-means clustering method. For each image, the average gray value of each group was calculated, and then these m mean values were used to represent the whole image. Therefore, the dimensionality of each image was reduced from 3600 to m. Since m is a variable which stands for the dimensionality of the feature vectors used in the experiments, we designed an


Table 1
Face recognition results obtained by applying different numbers of features extracted from the images. The training database contains 128 persons, where each person contains six distinct samples.

Number of features | Number of projection axes used | Recognition rate (%) | Training time (s)
m = 32  | 29 | 95.28 | 0.3039
m = 64  | 53 | 96.27 | 1.1253
m = 128 | 70 | 97.54 | 3.5746
m = 256 | 98 | 97.34 | 17.1670

Fig. 5. Experimental results obtained using our method under the small sample size problem. The '+' sign means that each class in the database contains two samples. The '×' and '○' signs mean that each class in the database contains three and six samples, respectively.

experiment to decide the best value of m for subsequent experiments. For this experiment, we chose a training database containing 128 persons, with six frontal-view samples for each person. For testing purposes, we used a 128-person testing database, in which we obtained 10 samples for each person. Since the database used was a large one, the projection vectors for LDA could be directly computed from S_t^{-1} S_b. Table 1 shows a set of experimental results obtained by applying different m values (m = 32, 64, 128, and 256). The data shown in the second column of Table 1 are the number of projection axes used at a certain resolution. The number of projection axes adopted was decided by checking the p value mentioned in Section 2.2.2. If p reached 0.95, then we used the corresponding number of projection axes as the maximum number of projection axes. Therefore, for m = 32 and 64, the corresponding numbers of projection axes adopted were 29 and 53, respectively. From Table 1, we find that m = 128 was the most suitable number of features in terms of recognition rate and training efficiency. Therefore, in the subsequent sets of experiments, this number (m = 128) was used throughout.

3.2. Experiments on the small sample size problem

In order to evaluate how our method interacts with the small sample size problem, including factors such as the number of samples in each class and the total number of classes used, we conducted a set of experiments and show the results in Fig. 5. The horizontal axis in Fig. 5 represents the number of classes used for recognition, and the vertical axis represents the corresponding recognition rate. The '+', '×', and '○' signs in Fig. 5 indicate that there were 2, 3, and 6 samples in each class, respectively. The results shown in Fig. 5 reflect that the proposed approach performed fairly well when the size of the database was small. However, when K (the number of classes) multiplied by M − 1 (the number of samples minus 1) was close to n (n = 128), the performance dropped significantly. This phenomenon was especially true for the case where M = 6. In Section 2.2.2, we mentioned that the information for deriving the most discriminant vectors depends on the null space of S_w, i.e., V'. The dimension of V', dim(V'), was equal to n − (KM − K), where n is equal to 128, K is the number of classes, and M is the number of samples in each class. When M = 6 and K approached 25, K(M − 1) was very close to n (128). Under these circumstances, the recognition rate dropped significantly (see Fig. 5). The reason this phenomenon emerged was the low value of dim(V'). When the dim(V') value was small, little room was available for deriving the discriminant projection axes; hence, the recognition rate dropped. Inspecting another curve (the '+' sign) in Fig. 5, it can be seen that, since there were only two samples in each class, the corresponding recognition-rate curve is not as monotonic as those for the cases that contained 3 and 6 samples in a class. This part of the experiment provides a good guide for making better decisions regarding the number of samples in each class and the number of classes in a database. When one wants to solve the small sample size problem with good performance, the above experimental results can be used as a reference.
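As a concrete illustration with the numbers above: for n = 128, M = 6, and K = 25, dim(V') = n − K(M − 1) = 128 − 25 × 5 = 3, so only three directions remain in the null space from which to draw the discriminant projection axes, which explains the sharp drop in recognition rate.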

3.3. Comparison with other approaches

In order to demonstrate the effectiveness of our approach, we conducted a series of experiments and compared our results with those obtained using two other well-known approaches. Fig. 6 shows the experimental results obtained using our approach, Liu et al.'s approach [10], and the template matching approach. The horizontal axes and vertical axes in Fig. 6 represent the number of classes in the database and the corresponding


Fig. 6. The experimental results obtained using our method ('+' sign), Liu's method ('×' sign), and template matching ('○' sign). The horizontal axis represents the number of classes in the database, and the vertical axis stands for the recognition rate. (a) The results obtained when each class contains only two samples; (b) the results obtained when each class contains three samples; (c) the results obtained when each class contains six samples.

recognition rate, respectively. In addition, the '+', '×', and '○' signs shown in Fig. 6 stand for the recognition results obtained using our approach, Liu et al.'s approach, and the template-matching approach, respectively. Furthermore, the data shown in Fig. 6(a)-(c) are the experimental results obtained when each class contained, respectively, 2, 3, and 6 samples. Among the three approaches, the template-matching approach performed recognition based solely on the original feature vectors; therefore, there was no LDA involved in this process. From the data shown in Fig. 6, it is obvious that Liu et al.'s approach was the worst. Basically, the most serious problem which occurred in Liu's approach was the degraded discriminating capability. Although the derived discriminant vectors maximized the modified Fisher's criterion function, the optimal class separability condition, which is the objective of an LDA process, was not surely satisfied. Therefore, the projection axes determined by Liu et al.'s approach could not guarantee the best class separability of all the database samples, and it is no wonder that the performance of Liu et al.'s approach was even worse than that of the template-matching approach. On the other hand, our approach was apparently superior because we forced the within-class scatter in the subspace to be zero. This constraint restricted the problem to a small domain; hence, it could be solved in a much easier way. Another advantage of our approach is that we do not need to


Fig. 7. Training time required by our method ('+' sign) and Liu's method ('×' sign). The horizontal axis represents the number of classes in the database, and the vertical axis stands for the training time. (a) The results obtained when each class contains only two samples; (b) the results obtained when each class contains three samples; (c) the results obtained when each class contains six samples.

compute the inverse matrix. In Liu et al. [10], computation of the inverse matrix is indispensable. However, since we project all the samples onto an appropriate subspace, the computation of the inverse matrix, which is considered a time bottleneck, can be avoided.

Another advantage of our approach over Liu et al.'s approach is the training time requirement. Fig. 7 shows three sets of experiments; in each set of experiments we used different numbers of samples in a class (2 in (a), 3 in (b), and 6 in (c)). The '+' and '×' signs represent, respectively, the results obtained using our approach and Liu et al.'s approach. From Fig. 7(a)-(c), it is obvious that the training time required by Liu et al.'s approach grew exponentially when the database was augmented. The reason for this outcome was the projection axes determination process. In Liu et al.'s method, the projection axes are determined iteratively. In each iteration, their algorithm has to derive the projection vector in a recalculated subspace. Therefore, their training time grows exponentially with the number of classes adopted in the database. In comparison with Liu et al.'s approach, our approach requires a constant time for training. This is because our approach only has to calculate the subspace once and then derive all the projection vectors in this subspace.

The experimental results shown in Figs. 6 and 7 are comparisons between Liu et al.'s approach and ours in terms of accuracy and efficiency. In what follows, we shall compare our method with Liu et al.'s method using another important criterion: the stability criterion. Table 2 shows a set of experimental results regarding the stability test between our method and Liu et al.'s. In this


Table 2
Stability test executed during the derivation of the first optimal projection vector. The training database comprised 10 classes, where each class contains three samples. The elements shown in the second and the fourth columns represent the orientation difference between the current optimal projection vector and the projection vector derived in the previous iteration.

Iteration | Our method: orientation difference (degrees) | Our method: recognition rate (%) | Liu's method: orientation difference (degrees) | Liu's method: recognition rate (%)
1  | --     | 90.56 | --       | 90.56
2  | 0.0006 | 90.56 | 92.3803  | 90.56
3  | 0.0005 | 90.56 | 98.7039  | 90.56
4  | 0.0005 | 90.56 | 127.1341 | 90.56
5  | 0.0005 | 90.56 | 100.4047 | 90.56
6  | 0.0006 | 90.56 | 94.8684  | 90.56
7  | 0.0006 | 90.56 | 97.4749  | 88.33
8  | 0.0007 | 90.56 | 77.8006  | 90.56
9  | 0.0006 | 90.56 | 99.7971  | 90.56
10 | 0.0006 | 90.56 | 75.0965  | 90.56

Table 3
The eigenvalues used to derive the first optimal projection vector. The elements shown in the left column are the eigenvalues determined using our method. The ones shown in the right column were determined using Liu et al.'s method.

Eigenvalues determined using our method | Eigenvalues determined using Liu's method
3.31404839e+04 | 1.00000000e+00
2.39240384e+04 | 1.00000000e+00
1.67198579e+04 | 1.00000000e+00
1.01370563e+04 | 1.00000000e+00
6.88308959e+03 | 1.00000000e+00
7.41289737e+03 | 1.00000000e+00
2.70253079e+03 | 1.00000000e+00
5.53323313e+03 | 1.00000000e+00
3.46817376e+03 | 1.00000000e+00

set of experiments, we tried to compute the first optimal projection vector in 10 iterations. The leftmost column of Table 2 indicates the iteration number. The elements shown in the second and the fourth columns of Table 2 are the orientation differences (in degrees) between the current optimal projection vector and the projection vector derived in the previous iteration. The data shown in the second column were obtained by applying our method, while the data shown in the fourth column were obtained by applying Liu et al.'s method. Theoretically, the optimal projection vector determined based on the same set of data should stay the same or only change slightly over 10 consecutive iterations. From Table 2, it is obvious that the projection vector determined by our method was very stable during the 10 consecutive iterations. On the other hand, the projection vector determined by Liu et al.'s method changed significantly between every two consecutive iterations. Linear algebra [23] tells us that an eigenvector will be very sensitive to small perturbations if its corresponding eigenvalue is close to another eigenvalue of the same matrix. Table 3 shows the eigenvalues obtained by our method and by Liu et al.'s. It is obvious that the eigenvalues obtained by our method are quite different from each other, whereas the eigenvalues obtained by Liu et al.'s method are almost the same. These data confirm that our method was much more stable than Liu et al.'s.

Another important issue which needs to be discussed is the influence of the reserved percentage of dim(V') on the recognition rate. Since the construction of V' is the most time-consuming task in our approach, we would like to show empirically that, by using only part of the space V', our approach can still obtain good recognition results. Fig. 8 illustrates the influence of the reserved percentage of dim(V') on the recognition rate when the number of classes is changed. The '+', '×', and '○' signs indicate that there were 10, 20, and 30 classes in the database, respectively. In all of the above-mentioned cases, each class contained three distinct samples. From the three curves shown in Fig. 8, it is obvious that by reserving only 10% of dim(V'), the recognition rate could still be maintained at 94%. Fig. 9, on the other hand, illustrates the influence of the reserved percentage of dim(V') on the recognition rate when the number of samples in each class is changed. The '+', '×', and '○' signs indicate that there were 2, 3, and 6 samples in each class, respectively. From Fig. 9, we can see that by reserving only 10% of dim(V'), the recognition rate could always reach 91%.

Moreover, the results shown in Fig. 8 reflect that the information retained in the space V' (the null space of S_w) was more sensitive to the number of classes. This means


Fig. 8. Illustration of the influence of the reserved percentage of dim(V') on the recognition rate. The '+', '×', and '○' signs mean that there are 10, 20, and 30 classes in the database, respectively. Each class contains three distinct samples. This figure shows that the information contained in the null space of S_w was more sensitive to the number of classes in the database.

Fig. 9. Illustration of the influence of the reserved percentage of dim(V') on the recognition rate. The '+', '×', and '○' signs mean that there are two, three, and six samples in each class, respectively. The database comprised 10 classes. This figure shows that the information for the same person was uniformly distributed over the null space of S_w. Therefore, the percentage of dim(V') did not influence the recognition results very much.

that when more classes are contained in the database, a higher percentage of V' should be reserved to obtain good recognition results. On the other hand, Fig. 9 shows that the information about the same person was uniformly distributed over the null space of S_w. Therefore, the percentage of dim(V') did not influence the recognition results very much.

4. Concluding remarks

In this paper, we have proposed a new LDA-based face recognition system. It is known that the major drawback of applying LDA is that it may encounter the small sample size problem. When the small sample size problem occurs, the within-class scatter matrix S_w becomes singular. We have applied a theory from linear algebra to find projection vectors q such that q^T S_w q = 0 and q^T S_b q ≠ 0. Under these special circumstances, the modified Fisher's criterion function proposed by Liu et al. [10] can reach its maximum value, i.e., 1. However, we have found that an arbitrary projection vector q satisfying the maximum value of the modified Fisher's criterion cannot guarantee the maximum class separability unless q^T S_b q is further maximized. Therefore, we have proposed a new LDA process, starting with the calculation of the projection vectors in the null space of the within-class scatter matrix S_w. If this subspace does not exist, i.e., S_w is nonsingular, then a normal LDA process can be used to solve the problem. Otherwise, the small sample size problem occurs, and we choose the vector set that maximizes the between-class scatter of the transformed samples as the projection axes. Since the within-class scatter of all the samples is zero in the null space of S_w, the projection vector that can satisfy the objective of an LDA process is the one that can maximize the between-class scatter. The experimental results have shown that our method is superior to Liu et al.'s approach [10] in terms of recognition accuracy, training efficiency, and stability.

References

[1] R. Chellappa, C. Wilson, S. Sirohey, Human and machine recognition of faces: a survey, Proc. IEEE 83 (5) (1995) 705-740.

[2] D. Valentin, H. Abdi, A. O'Toole, G. Cottrell, Connectionist models of face processing: a survey, Pattern Recognition 27 (9) (1994) 1209-1230.

[3] R. Brunelli, T. Poggio, Face recognition: features versus templates, IEEE Trans. Pattern Anal. Mach. Intell. 15 (10) (1993) 1042-1052.

[4] A. Samal, P. Iyengar, Automatic recognition and analysis of human faces and facial expressions: a survey, Pattern Recognition 25 (1) (1992) 65-77.

[5] S.H. Jeng, H.Y. Mark Liao, C.C. Han, M.Y. Chern, Y.T. Liu, Facial feature detection using geometrical face model: an efficient approach, Pattern Recognition 31 (3) (1998) 273-282.

[6] C.C. Han, H.Y. Mark Liao, G.J. Yu, L.H. Chen, Fast face detection via morphology-based pre-processing, Pattern Recognition 1999, to appear.


About the Author - LI-FEN CHEN received the B.S. degree in computer science from the National Chiao Tung University, Hsinchu, Taiwan, in 1993, and has been a Ph.D. student in the Department of Computer and Information Science at National Chiao Tung University since 1993. Her research interests include image processing, pattern recognition, computer vision, and wavelets.

About the Author - MARK LIAO received his B.S. degree in physics from the National Tsing-Hua University, Hsin-Chu, Taiwan, in 1981, and the M.S. and Ph.D. degrees in electrical engineering from Northwestern University in 1985 and 1990, respectively. He was a research associate in the Computer Vision and Image Processing Laboratory at Northwestern University during 1990-1991. In July 1991, he joined the Institute of Information Science, Academia Sinica, Taiwan, as an assistant research fellow. He was promoted to associate research fellow and then research fellow in 1995 and 1998, respectively. Currently, he is the deputy director of the same institute. Dr. Liao's current research interests are in computer vision, multimedia signal processing, wavelet-based image analysis, content-based image retrieval, and image watermarking. He was the recipient of the Young Investigators' award of Academia Sinica in 1998, the best paper award of the Image Processing and Pattern Recognition Society of Taiwan in 1998, and the paper award of the same society in 1996. Dr. Liao served as the program chair of the International Symposium on Multimedia Information Processing (ISMIP), 1997. He also served on the program committees of the International Symposium on Artificial Neural Networks, 1994-1995; the 1996 International Symposium on Multi-technology Information Processing; and the 1998 International Conference on Tools for AI. Dr. Liao is an Associate Editor of the IEEE Transactions on Multimedia (1998-2001) and the Journal of Information Science and Engineering. He is a member of the IEEE Computer Society and the International Neural Network Society (INNS).

About the Author - JA-CHEN LIN was born in 1955 in Taiwan, Republic of China. He received his B.S. degree in computer science in 1977 and M.S. degree in applied mathematics in 1979, both from the National Chiao Tung University, Taiwan. In 1988 he received his Ph.D. degree in mathematics from Purdue University, USA. In 1981-1982, he was an instructor at the National Chiao Tung University. From 1984 to 1988, he was a graduate instructor at Purdue University. He joined the Department of Computer and Information Science at National Chiao Tung University in August 1988, and is currently a professor there. His recent research interests include pattern recognition and image processing. Dr. Lin is a member of the Phi-Tau-Phi Scholastic Honor Society.

[7] H.Y. Mark Liao, C.C. Han, G.J. Yu, H.R. Tyan, M.C. Chen, L.H. Chen, Face recognition using a face-only database: a new approach, Proceedings of the Third Asian Conference on Computer Vision, Hong Kong, Lecture Notes in Computer Science, Vol. 1352, 1998, pp. 742-749.
[8] B. Moghaddam, A. Pentland, Probabilistic visual learning for object representation, IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 696-710.

[9] M. Turk, A. Pentland, Eigenfaces for recognition, J. Cognitive Neurosci. 3 (1) (1991) 71-86.

[10] K. Liu, Y. Cheng, J. Yang, Algebraic feature extraction for image recognition based on an optimal discriminant criterion, Pattern Recognition 26 (6) (1993) 903-911.
[11] F. Goudail, E. Lange, T. Iwamoto, K. Kyuma, N. Otsu, Face recognition system using local autocorrelations and multiscale integration, IEEE Trans. Pattern Anal. Mach. Intell. 18 (10) (1996) 1024-1028.

[12] D. Swets, J. Weng, Using discriminant eigenfeatures for image retrieval, IEEE Trans. Pattern Anal. Mach. Intell. 18 (8) (1996) 831-836.

[13] P.N. Belhumeur, J.P. Hespanha, D.J. Kriegman, Eigenfaces vs. Fisherfaces: recognition using class specific linear projection, IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 711-720.

[14] K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, New York, 1990.

[15] Q. Tian, M. Barbero, Z.H. Gu, S.H. Lee, Image classification by the Foley-Sammon transform, Opt. Eng. 25 (7) (1986) 834-840.

[16] Zi-Quan Hong, Jing-Yu Yang, Optimal discriminant plane for a small number of samples and design method of classifier on the plane, Pattern Recognition 24 (4) (1991) 317-324.

[17] Y.Q. Cheng, Y.M. Zhuang, J.Y. Yang, Optimal Fisher discriminant analysis using the rank decomposition, Pattern Recognition 25 (1) (1992) 101-111.

[18] K. Liu, Y. Cheng, J. Yang, A generalized optimal set of discriminant vectors, Pattern Recognition 25 (7) (1992) 731-739.

[19] K. Liu, Y.Q. Cheng, J.Y. Yang, X. Liu, An efficient algorithm for Foley-Sammon optimal set of discriminant vectors by algebraic method, Int. J. Pattern Recog. Artif. Intell. 6 (5) (1992) 817-829.

[20] D.H. Foley, J.W. Sammon, An optimal set of discriminant vectors, IEEE Trans. Comput. 24 (1975) 281-289.
[21] L.F. Chen, H.Y.M. Liao, C.C. Han, J.C. Lin, Why a statistics-based face recognition system should base its recognition on the pure face portion: a probabilistic decision-based proof, Proceedings of the 1998 Symposium on Image, Speech, Signal Processing and Robotics, The Chinese University of Hong Kong, September 3-4, 1998 (invited), pp. 225-230.

[22] A. Fisher, The Mathematical Theory of Probabilities, Macmillan, New York, 1923.

[23] G.W. Stewart, Introduction to Matrix Computations, Academic Press, New York, 1973.

[24] B. Noble, J.W. Daniel, Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1988.

[25] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA, 1992.

[26] A.K. Jain, R.C. Dubes, Algorithms for Clustering Data, Prentice-Hall, Englewood Cliffs, NJ, 1988.


About the Author - MING-TAT KO received a B.S. and an M.S. in mathematics from the National Taiwan University in 1979 and 1982, respectively. He received a Ph.D. in computer science from the National Tsing Hua University in 1988. He then joined the Institute of Information Science as an associate research fellow. Dr. Ko's major research interests include the design and analysis of algorithms, computational geometry, graph algorithms, real-time systems, and computer graphics.

About the Author - GWO-JONG YU was born in Keelung, Taiwan, in 1967. He received the B.S. degree in Information Computer Engineering from the Chung-Yuan Christian University, Chung-Li, Taiwan, in 1989. He is currently working toward the Ph.D. degree in Computer Science. His research interests include face recognition, statistical pattern recognition, and neural networks.
