
Automatic face authentication with self compensation

Tzung-Han Lin, Wen-Pin Shih *

Department of Mechanical Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei 106, Taiwan

Received 30 March 2006; accepted 9 October 2007

* Corresponding author. Tel.: +886 2 3366 4511; fax: +886 2 2363 1755. E-mail address: wpshih@ntu.edu.tw (W.-P. Shih).

Image and Vision Computing 26 (2008) 863–870. doi:10.1016/j.imavis.2007.10.002

Abstract

This paper presents a novel method for automatic face authentication in which the variance of faces due to aging has been considered. A bilateral symmetrical plane is used for weighting the correspondences of the scanned model and database model upon model verification. This bilateral symmetrical plane is determined by the nose tip and two canthus features. A coupled 2D and 3D feature extraction method is introduced to determine the positions of these canthus features. The central profile on this bilateral symmetrical plane is the foundation of the face recognition. A weighting function is used to determine the rational points for the correspondences of the optimized iterative closest point method. The discrepancy value is evaluated for the authentication and compensation between different models. We have implemented this method on the practical authentication of human faces. The results illustrate that this method works well in both self authentication and mutual authentication.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Face recognition; Face authentication; Automatic feature detection; Self compensation

1. Introduction

Human face authentication provides an alternative to conventional biometric authentication methods that verify fingerprints, voiceprints, or face images in applications of access security and intelligent robots. Most previous studies of face recognition exploited 2D images, and the employment of entire 2.5D range images has also been reported [14]. Compared with 3D images, 2D images are suitable only in constrained environments and poses because their information can be significantly influenced by changes of illumination and pose. Although good performance of 2D face recognition has been achieved [15], it is still difficult to overcome the influence caused by head poses and expressions. On the other hand, 3D face recognition can be easily applied under various illuminations and head poses because the curvatures of 3D models with arbitrary orientations are invariant. Curvature analysis has been a critical tool for 3D feature extraction and pattern

demarcation [4,5,17,18]. However, automatic 3D human face recognition is still a challenging task. It requires complex computation and comparisons among a great number of images. In typical 3D face recognition, the range images are used for recording the texture contours and depth values. When a front face is being captured, the nose tip is typically assumed to be the closest point to the camera. In this case, principal component analysis (PCA) performs well in 3D face identification with the same position and orientation [7]. The database of face images is then transformed into a finite principal set, i.e., eigenvectors. After training the data set, tested images with various expressions can be permitted. It should be noted that PCA does not work well when the data possess excessive noise. In contrast to PCA, eigen-decomposition can be employed to extract the intrinsic geometric features of facial surfaces with geometric invariants. The eigen-decompositions can be applied to four different data sets, which include range images, range images with textures, canonical images, and flattened textures, respectively. The eigenform method with the eigen-decomposition of flattened textures and canonical images has successfully distinguished twins [3].
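As background, the following is a minimal sketch of the eigenface-style PCA matching described above, assuming flattened, pre-aligned range images (e.g. nose tip as the closest point); the function names and the nearest-neighbor matching rule are illustrative, not taken from the cited papers:

```python
import numpy as np

def pca_basis(range_images, k):
    """Fit an eigenface-style PCA basis to flattened range images.

    range_images: (n_samples, n_pixels) array of flattened depth maps,
    assumed pre-aligned. Returns the mean image and top-k axes.
    """
    X = np.asarray(range_images, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal axes directly.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, axes):
    """Project one flattened range image into the eigenspace."""
    return axes @ (np.ravel(image) - mean)

def match(probe, gallery_codes, mean, axes):
    """Return the gallery index whose eigenspace code is nearest."""
    code = project(probe, mean, axes)
    dists = np.linalg.norm(gallery_codes - code, axis=1)
    return int(np.argmin(dists))
```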


The biomorphic symmetry is a crucial behavior for automatic face recognition and authentication. Zhang et al. [17] proposed a bilateral symmetry analysis for face authentication. They introduced a MarkSkirt operator to determine the symmetrical plane of each facial surface. The attained symmetry profiles of different faces are then used for similarity comparisons. Their authentication function consists of three weighted differences of distance between two compared 3D models.

The iterative closest point (ICP) algorithm is widely used for model registration [1]. The singular value decomposition (SVD) in ICP determines the minimum distance of the correspondences of two related models. ICP is also a significant method for estimating the difference between two models. However, ICP has a local minimum solution in the estimation of the rigid registration. For non-rigid registration using the standard ICP approach, an additional warping function is required. Grid-resampled control points have been used for matching distance in each hierarchical step [10,11]. The fine alignment with Hybrid ICP has two steps for estimating one transformation matrix in a single iteration. The shape index, which is a function of the principal curvatures, represents the normalized scale value of the curvature. With the employment of convolution [9], the identification of specified features can be facilitated by 2D shape index images which evolve from 3D curvatures. This coupled method, which incorporates both 2D and 3D characteristics, is more feasible than either a pure 3D or a pure 2D method for automatic feature extraction. However, the accuracy of the feature positions will be compromised after convolution. To solve this issue, Song et al. adopted an error-compensated singular value decomposition (ECSVD) method to estimate the error and consequently compensate for it after each image rotation [13]. Because ECSVD tunes the head pose from an arbitrary view into the frontal view, reliable range images for the PCA method can be easily obtained.

The 2.5D range images have the advantage of maintaining 3D points and 2D images simultaneously. The curvatures in 2.5D range images are simpler to calculate, with higher efficiency, than those of 3D data [6]. The comparison among serial profiles, presented by Beumier and Acheroy [2], is used for 3D authentication from striped images. In our method, the 2D images are also used for acquiring the positions of the eyeballs. The pair of features on the nose bridge and the nose tip determine the bilateral symmetrical plane [8]. Our coarse alignment for ICP is based on the bilateral symmetrical plane. We use the frontal range images for the human face database and implement the fine alignment with a modified ICP. A threshold value is designed to judge the result of the fine alignment of the modified ICP. Once the tested face has a weighted distance less than the threshold, the person is successfully authenticated. The 2.5D range images also have the capability of blending two related models with tunable depth values. Generally speaking, a successfully authenticated model is also another kind of database model and represents this person in the current period. In this paper, we provide a linear blending function to compensate the database models. This method ensures that the compensated database model will be updated to be close to the current state of the face. Our experimental results show that the discrepancy between two different persons is obvious after training.

2. Automatic face authentication

In our application, a portable 3D laser scanner is used for the enrollment and verification of human faces. The 3D laser scanner consists of an infrared device and a digital camera. The infrared device acquires the depth values of the range images by sensing the reflection of a ribbon-shaped laser beam. The digital camera captures the instant scene and stores it as a high resolution image. Because the geometrical position between the digital camera and the infrared device is fixed, the transformation matrix for mapping the texture captured from the digital camera can be determined. Before using the device, the distortion due to the optical lens is calibrated and compensated. The scanned models have their corresponding textures (Fig. 1a). One range image has 10–30 thousand effective 3D vertices, and its texture image has 0.1–3 million pixels. In order to improve the efficiency of image processing, the size of the texture image is reduced to one quarter. We use the 3D data for detecting features and use the 2D image data as assisting features. Initially, we retrieve all possible 3D features as candidates for the eye sockets. Then, the assisting 2D eye features help us to identify the 3D eye socket features correctly.

Fig. 1. (a) A captured face with 2.5D data; (b) first principal curvature κ1; (c) second principal curvature κ2; (d) mean curvature; (e) Gaussian curvature; (f) …

2.1. Specified 3D curvatures

‘‘Features’’ are the foundation for building the correspondences between two related images. The 3D features can be defined by a gross local shape or a specified shape. We prefer the specified shapes as the features in human face authentication because human faces have similar morphological shapes such as eyes, nose, ears and mouth. Some features are invariant in geometrical behavior, such as invariant curvature and symmetry. For example, the mouth, lips, eyelids and nose are features with either extra high or extra low values of curvature (Fig. 1b). In addition, a face with lots of wrinkles may also contain lots of features. It is noticed that the dimple features near the eye sockets are always invariant under different facial expressions and have similar curvatures. The dimple features near the eyes are critical in our method to determine the bilateral symmetrical plane. The facial symmetry is one significant behavior for our approach. We suppose that the central line which lies on the bilateral symmetrical plane exists in each scanning procedure.

The curvature is an important geometrical property. It indicates the slope tendency of the neighboring region of a single point. The mathematical form of the curvature is a 2 × 2 matrix. In discrete computing, the curvature is calculated by a finite difference method instead of a partial differential method. An efficient approximation method has been developed by Hamann [6]. Following Hamann's approach, we calculate the principal curvatures according to a bivariate polynomial. Specifically, the roots of the bivariate polynomial are the principal curvatures. The topologically connected vertices on the surface and their derivatives can also be mapped into the Gauss–Weingarten map. As a result, the eigenvalues and eigenvectors of the matrix are the principal curvatures and directions, respectively.
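For illustration, a minimal sketch of this eigen-decomposition, assuming the first (E, F, G) and second (L, M, N) fundamental form coefficients at a vertex have already been estimated, e.g. by Hamann's bivariate polynomial fit; the interface is hypothetical:

```python
import numpy as np

def principal_curvatures(E, F, G, L, M, N):
    """Principal curvatures and directions at one vertex from the
    first and second fundamental form coefficients.

    The Weingarten (shape) operator is S = I^{-1} II; its eigenvalues
    are the principal curvatures and its eigenvectors the principal
    directions, as used in Section 2.1.
    """
    I = np.array([[E, F], [F, G]], dtype=float)
    II = np.array([[L, M], [M, N]], dtype=float)
    S = np.linalg.solve(I, II)             # 2x2 Weingarten map
    evals, evecs = np.linalg.eig(S)
    evals, evecs = evals.real, evecs.real  # surface curvatures are real
    order = np.argsort(evals)[::-1]        # kappa_1 >= kappa_2
    return evals[order], evecs[:, order]

# Mean and Gaussian curvature then follow directly:
# H = (kappa_1 + kappa_2) / 2,  K = kappa_1 * kappa_2
```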

The two principal curvatures of a local surface, κ1 and κ2, can be represented by the mean curvature and Gaussian curvature (Fig. 1), which are the average and the product of the first and second principal curvatures, respectively. A positive Gaussian curvature value means that the surface is locally either a peak or a valley. On the other hand, a negative value means the surface has a local saddle point. If the Gaussian curvature is zero, the surface is flat in at least one direction. For example, both a plane and a cylinder have zero Gaussian curvature. Although features with similar curvatures may have many vertices, most of these vertices are neighboring and on the same patch. Occasionally, some unexpected features due to noise or irregular shapes may be obtained. These noises should be smoothed or excluded. A simple procedure is used to extract the effective features for our method, as follows (a code sketch is given after the list):

1. Calculate the first principal curvature for each vertex on the face;

2. Calculate the statistical variances of the first principal curvatures for each vertex over its neighboring vertices;

3. Retrieve these vertices if their principal curvatures κ1 are greater than −0.03 and less than 0.01, and their variances of curvature σ1 are less than 0.02;

4. Group these vertices if they are topologically connected.

A group which has many vertices with similar curvatures is called a feature. The grouped vertices must not only have similar curvatures but also be topologically connected. Sometimes, two neighboring features may exist due to noise occurring in between. Under this circumstance, a simple merge or smoothing procedure is needed. If the two neighboring features are on the same patch, the merge procedure will be implemented (Fig. 2). If these two neighboring features are on different patches or there are holes between them, they must be taken as individual features. Because the filter is designed for the dimple features around the eye sockets, all the dimple features will remain after the filtration.
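A sketch of the four steps above, assuming the first principal curvatures and the mesh adjacency are given; breadth-first search stands in for the (unspecified) grouping of topologically connected vertices:

```python
import numpy as np
from collections import deque

def extract_dimple_features(k1, adjacency, lo=-0.03, hi=0.01, var_max=0.02):
    """Filter vertices by first principal curvature and its local
    variance, then group surviving vertices into connected features.

    k1:        (n,) first principal curvature per vertex
    adjacency: list of neighbor-index lists per vertex (mesh topology)
    Thresholds follow the values quoted in Section 2.1.
    """
    n = len(k1)
    # Step 2: variance of k1 over each vertex's one-ring neighborhood.
    var = np.array([np.var([k1[v]] + [k1[u] for u in adjacency[v]])
                    for v in range(n)])
    # Step 3: keep vertices inside the curvature band with low variance.
    keep = (np.asarray(k1) > lo) & (np.asarray(k1) < hi) & (var < var_max)

    # Step 4: group kept vertices that are topologically connected.
    features, seen = [], set()
    for seed in np.flatnonzero(keep):
        if seed in seen:
            continue
        group, queue = [], deque([seed])
        seen.add(seed)
        while queue:
            v = queue.popleft()
            group.append(v)
            for u in adjacency[v]:
                if keep[u] and u not in seen:
                    seen.add(u)
                    queue.append(u)
        features.append(group)
    return features
```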

2.2. Coupled 2D and 3D features

2D image analysis is a mature technique. The feature extraction techniques in 2D images have been well developed [12,15]. Nevertheless, 2D image processing depends highly on the image quality. Environmental factors such as illumination and radiation are critical for image analysis. A mask-based operator is always helpful for detecting specified features. But it is not easy to detect a 2D face feature in various poses or expressions because some features vanish or change easily. On the other hand, detecting anchor points such as the eyes is easier than detecting other features. In our method, the environmental light is restricted to be near white light. If the environmental light has been changed, the color should be calibrated. A specified mask is applied to the region of the skin color for determining the 2D eye features. The 2D eye features are assistant features for locating the exact 3D eye socket features.

Fig. 2. Merging procedure for two neighboring features. If two features (ellipses with solid lines) are too close, they will be combined into one feature (ellipse with dotted line) by a linear combination.


Once the eyes are found in one 2D texture image, the mapping for matching 3D features becomes a Voronoi problem. The affine map of the 3D features will form finite cells of the Voronoi diagram, and the two 2D eye features will be located on two individual cells (Fig. 3). A person with lots of wrinkles may have many dimple features, and the cells of the Voronoi diagram will be complex. In this case, we need to shift the 2D features close to the canthi in order to match the correct 3D canthus features. The canthus features are the foundation for determining the frontal directions and the nose tip.
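Since a query point lies in the Voronoi cell of its nearest site, the cell lookup can be sketched as a nearest-neighbor search over the projected 3D features; the `projection` callable below is a hypothetical stand-in for the scanner's calibrated camera mapping:

```python
import numpy as np

def match_eye_features(features_2d, features_3d, projection):
    """Assign each 2D eye feature to the Voronoi cell of the projected
    3D candidate features.

    features_2d: (2, 2) pixel positions of the detected eyes
    features_3d: (m, 3) centroids of the 3D dimple-feature candidates
    projection:  callable mapping (m, 3) points to (m, 2) image points
    """
    sites = projection(np.asarray(features_3d, dtype=float))
    matches = []
    for p in np.asarray(features_2d, dtype=float):
        d = np.linalg.norm(sites - p, axis=1)  # nearest site = cell owner
        matches.append(int(np.argmin(d)))
    return matches  # indices of the matched 3D canthus features
```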

2.3. Bilateral symmetrical plane

The bilateral symmetry is an important behavior of biological morphology. Our method uses the bilateral symmetrical plane for reducing the effect of facial expressions. It can be seen that the bilateral symmetrical plane penetrates the nose bridge and divides the face into two symmetrical portions (Fig. 4). The bilateral symmetrical plane can be determined by knowing the position of the nose tip, one vertex of the nose bridge, and one normal direction. We choose the average of the two canthus features as one vertex of the bridge, and the normal direction of the bilateral symmetrical plane is perpendicular to the mean normal of the canthus features. Once the bilateral symmetrical plane is determined, the orientation of the model (either scanned model or database model) is decided.

Fig. 4 demonstrates our method of determining the bilateral symmetrical plane. The bilateral symmetrical plane is used not only for verification but also for enrollment. In enrollment, the human face with a formal expression is captured and stored as one database model. Every database model is transformed according to its orientation. Each vertex in the database model has a weighting dependent on the distance from itself to the bilateral symmetrical plane (Fig. 5).
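A sketch of this construction under the stated choices (bridge vertex = average of the canthi; plane normal kept perpendicular to the mean canthus normal); the cross-product step is our reading of how these two constraints pin down the plane, not a construction spelled out in the text:

```python
import numpy as np

def bilateral_symmetry_plane(nose_tip, canthus_l, canthus_r,
                             normal_l, normal_r):
    """Estimate the bilateral symmetrical plane (Section 2.3).

    The plane contains the nose tip and the bridge vertex (average of
    the two canthus features); its normal is chosen perpendicular to
    both the in-plane bridge-to-nose direction and the mean canthus
    normal. Returns (point_on_plane, unit_plane_normal).
    """
    nose_tip = np.asarray(nose_tip, dtype=float)
    bridge = 0.5 * (np.asarray(canthus_l, float) + np.asarray(canthus_r, float))
    mean_normal = np.asarray(normal_l, float) + np.asarray(normal_r, float)
    mean_normal /= np.linalg.norm(mean_normal)
    axis = nose_tip - bridge              # in-plane direction
    n = np.cross(axis, mean_normal)       # perpendicular to both
    return nose_tip, n / np.linalg.norm(n)

def signed_distance(vertices, point, n):
    """Signed distance x_i of each vertex from the plane; |x_i| feeds
    the weighting w_i = exp(-|x_i| / d) used later for rational ICP."""
    return (np.asarray(vertices, dtype=float) - point) @ n
```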

3. Self compensation

The enrollment for each new face should be done before the implementation of the verification. The face models stored in the database only represent the states at the instant of enrollment. However, an invariant database model will not match a scanned model of a face that has been aging for a long period of time. Due to aging, fattening, and thinning, the database model should be updated to adapt to the varying scanned models of the same person. In this paper, a linear blending function, which is a function of the discrepancy, is used for updating the database model after each successful verification.

3.1. Rational point–point ICP

The iterative closest point (ICP) is an algorithm for solving the transformation matrix of two rigid models by minimizing the distance of their correspondences. The correspondences between two models often have identical geometrical behaviors such as curvatures and normal directions. We modify the corresponding points as rational points and implement the point–point ICP on the registration. All of the correspondences have been weighted as rational points. The weighting of each correspondence depends on the distance from itself to the bilateral symmetrical plane. The rational points can be mapped onto a homogeneous coordinate by applying the weighting to each component, as illustrated in (1). We suppose the central profiles of the faces are nearly invariant under variant expressions. Due to the biological symmetry, the weighting function is defined as a function of the distance from the bilateral symmetrical plane with exponential decay, as shown in (2). The factor d is used for tuning the rate of the decay and should be a positive value. In our approach, one coarse alignment and several fine alignments for ICP are needed. The coarse alignment uses two correspondences, which are the nose tip and the nose bridge. The bilateral symmetrical plane is an additional condition to satisfy the third unknown in a 3D transformation. The coarse alignment of the scanned model and database model pulls the scanned model close to the database model. In the enrollment, the bilateral symmetrical plane has been detected and stored as additional information of the database model. One database model contains vertex positions, curvatures, normals and weightings. Before storing these data, the database model has been transformed so that its bilateral symmetrical plane aligns with the yz plane. The coarse alignment procedure also transforms the scanned models according to their orientations (Fig. 6). Since the vertices far from the bilateral symmetrical plane have small weighting values, the effects of various expressions are reduced.

The coarse alignment puts the scanned model as close to the database model as possible, so the correspondences between these two models can be established readily. The correlations of the corresponding vertices of these two models, which are neighboring to each other after coarse alignment, are defined in (3). The correlation factor c_ij, which relates vertex i in the database model to vertex j in the scanned model, consists of three parameters. A matched pair has similar principal curvatures and a similar normal vector simultaneously. If the correlation factor c_ij is greater than a specified value (the recommended value is 2.5), one pair of correspondences is determined. An Oct-tree is used for improving the efficiency of querying neighboring vertices. When the matched pairs are found, the rational point–point ICP is implemented for precise alignment. When the iterations of ICP converge, the exact vertices on the scanned model will be calculated by a resampling procedure.

Fig. 3. Six 3D features are labeled on the human face. (a) The 3D features are mapped onto the capturing plane, and the Voronoi diagram is formed by these mapped features. (b) The two eyes are located in separate individual cells.

Fig. 4. Procedure for determining the bilateral symmetrical plane. (a) The scanned face model is shown. (b) The principal curvature of each vertex on the face model is demonstrated by various spectrums. (c) The specified features are retrieved, and each feature is represented by a point and a normal. (d) The bilateral symmetrical plane is determined by the canthus features and the nose tip.

Fig. 5. Flowcharts of our method for face authentication. (a) The model acquired in the enrollment procedure is requested strictly. (b) Sharing the same sub-routine for determining the bilateral symmetrical plane as the enrollment, the verification procedure has additional sub-routines for evaluating the discrepancy and compensating the database model.

Fig. 6. The models with frontal view and formal expression are taken as the database models such as (a) and (b). In coarse alignment, the scanned models with smiley expression or tilts, such as (c) and (d), respectively, are transformed to the proper orientation in order to conduct further comparisons with database models.


We restrict most of the vertices to be compared to lie on the nose and forehead because these vertices change little among various expressions. If the number of matched pairs increases, the result of ICP will be more reliable. The verification result is more robust if more vertices are compared between the scanned model and the database model.

$$p_i = [\,w_i x_i,\; w_i y_i,\; w_i z_i,\; 1\,]^{T} \tag{1}$$

$$w_i = e^{-|x_i|/d} \tag{2}$$

$$c_{ij} = e^{-|\kappa_1^i - \kappa_1^j|} + e^{-|\kappa_2^i - \kappa_2^j|} + e^{(N_i \cdot N_j - 1)} \tag{3}$$
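A minimal sketch of Eqs. (1)–(3) together with one weighted rigid-alignment step. The closed-form weighted SVD (Kabsch) solution stands in for the ICP inner step, which the text does not spell out, and the models are assumed to be pre-transformed so that the bilateral symmetrical plane aligns with the yz plane (hence |x_i| is simply the x coordinate):

```python
import numpy as np

def weight(points, d):
    """Eq. (2): exponential decay with distance |x_i| from the
    bilateral symmetrical plane (here the yz plane)."""
    return np.exp(-np.abs(points[:, 0]) / d)

def correlation(k1_i, k2_i, n_i, k1_j, k2_j, n_j):
    """Eq. (3): correspondence score from principal curvatures and
    unit normals; pairs scoring above ~2.5 are accepted."""
    return (np.exp(-abs(k1_i - k1_j)) + np.exp(-abs(k2_i - k2_j))
            + np.exp(n_i @ n_j - 1.0))

def weighted_rigid_step(P, Q, w):
    """One rational point-point ICP step: the rigid motion (R, t)
    minimizing sum_i w_i ||R p_i + t - q_i||^2 via weighted SVD."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    cp = (w[:, None] * P).sum(axis=0)     # weighted centroids
    cq = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - cp)).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # guard against reflections
    t = cq - R @ cp
    return R, t
```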

3.2. Self compensation with linear blending function

In order to improve the robustness of verification, we use a linear blending function to compensate the database. The symbol S_1 denotes the database model which is acquired first. This database model stores finite vertices and corresponding normals as well as curvatures and weightings. The vertices in S_1 are located on the nose, a part of the forehead, and a part of the cheeks. We use a linear combination for blending the specified database model and the successfully verified model, as illustrated in (4). The blending factor α is between 0 and 1, and it can be a function of the discrepancy D, as shown in (5). The discrepancy between these two models is defined as the normalized value which represents the distance error of the correspondences. In (5), the symbols p_{V,k} and p_{R,k} are the k-th correspondences of the scanned model and the database model, respectively. The symbol S'_1 denotes the original database model, and S'_n denotes the evolution of the database model after n successful verifications. The condition that the discrepancy is smaller than the threshold D_t defines a successful verification. If the discrepancy value is smaller, the evolved database model S'_n will be more similar to S'_{n-1}. After n successful verifications, S'_n becomes the current database model in place of the previous database model S'_{n-1}.

$$S'_1 = S_1, \quad S'_2 = (1-\alpha)\, S'_1 + \alpha S_2, \quad S'_n = (1-\alpha)\, S'_{n-1} + \alpha S_n \tag{4}$$

$$\alpha(D) = \begin{cases} 1 - (D/D_t), & 0 < D < D_t \\ 0, & D_t < D \end{cases}, \qquad D = \sum_{k=1}^{n} w_k \, |p_{V,k} - p_{R,k}|, \quad \sum_{k=1}^{n} w_k = 1 \tag{5}$$
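Eqs. (4) and (5) translate directly into a short update routine; the array shapes and names are illustrative:

```python
import numpy as np

def discrepancy(p_verified, p_database, w):
    """Eq. (5), second part: weighted distance of the correspondences,
    with the weights normalized to sum to one."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    d = np.linalg.norm(p_verified - p_database, axis=1)
    return float(w @ d)

def blend_factor(D, D_t):
    """Eq. (5), first part: alpha = 1 - D/D_t inside the threshold,
    zero (no update) for a failed verification."""
    return 1.0 - D / D_t if 0.0 < D < D_t else 0.0

def compensate(S_prev, S_new, alpha):
    """Eq. (4): linear blending of the evolving database model with
    the newly verified scan, S'_n = (1-alpha) S'_{n-1} + alpha S_n."""
    return (1.0 - alpha) * S_prev + alpha * S_new
```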

3.3. Selection of individual threshold

The word ‘‘discrepancy’’ is too ambiguous to describe the degree of difference between two persons. It is interesting to quantify the value of the discrepancy between two persons. We calculate the discrepancy value of two compared models, and we compare the discrepancy with a threshold for recognizing these two models. The discrepancy value can be defined as the normalized distance of all correspondences between two models. It is evident that the discrepancy between the scanned model and the database model must be small enough for them to be considered the same person. A threshold D_t is taken as the critical value for separation. In self authentication, a person's face is scanned and compared with his/her own database model. There are differences among the models scanned for one person in different periods of time. These differences are induced by various expressions, poses, and capturing devices. In mutual authentication, the person is scanned and compared with other database models. Generally speaking, the probability of the discrepancy in either self authentication or mutual authentication follows a normal distribution. Fig. 7 illustrates how to select a proper threshold for a person. The probability curves of the self authentication and mutual authentication are presented as two individual Gaussian curves. These two curves may have an overlap region. The upper and lower bounds of this region are D_U and D_L, respectively. The two curves intersect at D_C. If the threshold is set in the confused region between D_L and D_U, there will be failures in both the self authentication and the mutual authentication. A compromise threshold for this database model will be D_C. A strict selection for the threshold is a value less than the lower bound (D_L) of the confused region, but it may induce failures of self authentication. If the threshold is greater than the upper bound (D_U) of the confused region, this person will undertake the risk of counterfeit. If there is no confused region between these two curves, this person will be readily distinguished from other persons.
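A sketch of picking the compromise threshold D_C as the crossing point of the two fitted Gaussians, obtained by equating their log-densities; the equal-variance fallback (midpoint of the means) is the limiting case of the quadratic:

```python
import numpy as np

def gaussian_crossing(mu_self, sd_self, mu_mutual, sd_mutual):
    """Compromise threshold D_C where the fitted self- and
    mutual-authentication Gaussian curves intersect (Fig. 7).

    Equating the two log-densities gives a quadratic in D; the root
    lying between the two means is returned.
    """
    a = 1.0 / sd_self**2 - 1.0 / sd_mutual**2
    b = -2.0 * (mu_self / sd_self**2 - mu_mutual / sd_mutual**2)
    c = (mu_self**2 / sd_self**2 - mu_mutual**2 / sd_mutual**2
         + 2.0 * np.log(sd_self / sd_mutual))
    if abs(a) < 1e-12:                     # equal variances: midpoint
        return 0.5 * (mu_self + mu_mutual)
    roots = np.roots([a, b, c])
    roots = roots[np.isreal(roots)].real
    lo, hi = sorted((mu_self, mu_mutual))
    inside = roots[(roots > lo) & (roots < hi)]
    return float(inside[0]) if inside.size else float(roots[0])
```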

4. Results and discussion

A method for face authentication has been introduced. This method can compensate the database model after successful verification. We have tested 63 adult faces with various poses and expressions. A frontal pose with a formal expression is acquired from each person for enrollment.

Fig. 7. Selection for an individual threshold. A proper threshold is recommended to be the lower bound (D_L) of the confused region, which is filled with black color.


The poses of all persons in verification are flexible, but both inner canthi must be visible to the camera. Head poses with slants or tilts are acceptable. In our method, it is recommended that the facial expression not be too severely deformed. In a surveillance system, we request that users provide their identity numbers before authentication. Then, the system starts to scan each user's face and compares the scanned model with the specified database model. When the discrepancy value between the scanned face and the database model is less than the threshold, the user is permitted entrance. The scanned face is then blended with the database model for compensation, and the current database model is replaced by the blended model. The discrepancy value may have a slight increment if the scanned model deforms or has been tilted and slanted. Faces with smiley expressions are also permitted, since the change of the central profiles is too small to affect the discrepancy value, as depicted in Fig. 8.

An experimental result for 20 persons is shown in Fig. 9. At first, the front views of these 20 persons are captured, and these captured faces are stored as 20 database models. Then, every person uses these 20 IDs for 20 verifications, and the verification results are represented by their discrepancy values. A constant 0.5 is used as the threshold. These discrepancy values are shown spatially. In order to reveal their inequalities, all discrepancy values are mapped onto three orthogonal planes (xy, yz and zx planes). In Fig. 9, the values less than the threshold are shown as solid circles below the dashed lines on the zx or yz planes. When person ‘‘T’’ uses another person's ID (‘‘L’’) for verification, he/she gets a high discrepancy value. That means persons ‘‘T’’ and ‘‘L’’ are very different. It can be found that if person ‘‘L’’ uses the ID of ‘‘D’’ for verification, he/she also gets a high discrepancy value. If a person uses his/her own ID, the discrepancy value is relatively small and less than the predefined threshold. It can also be found that the self-matched discrepancy value is always the smallest, both when many persons verify with one ID and when one person verifies with many IDs.

The self compensation is the mechanism for refreshing the current database model. It also reduces the discrepancy in self authentication and increases the discrepancy in mutual authentication. In order to perform statistical analysis, we use a histogram to count the probability

Fig. 8. The central profiles in the queried region are nearly invariant under various conditions. (a) Smiley expression; (b) slightly twisted facial expression; (c) tilted pose.

Fig. 9. Comparisons among 20 persons. The solid circles denote discrepancy values less than the threshold of 0.5. The solid rectangles denote values larger than the threshold. Pure green color represents zero discrepancy, and color closer to pure red denotes a larger discrepancy value.

Fig. 10. Determination of a proper individual threshold, comparing persons (A) and (B) against database A, each with and without compensation. After self compensation, the range of the available threshold is enlarged.


of each discrete discrepancy value. The curve for the discrete values is fitted as a normal distribution, and each histogram has at least 120 data points. The experimental results show that the Gaussian mean of the discrepancy value of the self authentication has shifted leftward and has been reduced by the employment of compensation, as shown in Fig. 10. Similarly, the Gaussian mean of the discrepancy value of the mutual authentication shifts rightward with the use of compensation. The compensation enlarges the range of the available threshold, and the discrepancy between different faces is consequently more distinct. The lower bound of the available threshold is recommended to be the discrepancy which gives a 90–95% probability of successful self verification. The upper bound of the threshold is the discrepancy which gives a 5–10% probability of failure of mutual authentication. We assume that all scanning procedures are effective. A scanning process needs about 2 s, and the enrollment procedure needs less than 2 s. The verification procedure needs an additional 3 s (on a Pentium 4 at 1.8 GHz), including self compensation. We suppose that the faces of all persons are held steady during the scanning processes.

5. Conclusion

We have successfully implemented our feature extraction method in a face authentication system. This method couples 3D and 2D features and is able to retrieve the canthus features. The bilateral symmetrical plane for coarse alignment was also determined. The weighted point–point ICP was used to determine the optimal transformation matrix. The experimental results verified that our method performs correctly in both self authentication and mutual authentication. The range of the available threshold has been enlarged for robust authentication.

Acknowledgements

The authors thank Mr. Wen-Chao Chen and Mr. Wei-Yih Ho for establishing the scanning devices and the systematic experiments. We also thank Mr. Ming-Hui Lin for providing experimental devices.

References

[1] P. Besl, N. McKay, A method for registration of 3D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (2) (1992) 239–256.

[2] C. Beumier, M. Acheroy, Automatic 3D face authentication, Image and Vision Computing 18 (4) (2000) 315–321.

[3] A.M. Bronstein, M.M. Bronstein, R. Kimmel, Expression-invariant 3D face recognition, in: International Conference on Audio- and Video-based Biometric Person Authentication, 2003, pp. 62–70.

[4] K.I. Chang, K.W. Bowyer, P.J. Flynn, Face recognition using 2D and 3D facial data, in: Workshop in Multimodal User Authentication, 2003, pp. 25–32.

[5] G.G. Gordon, Face recognition based on depth and curvature features, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1992, pp. 108–110.

[6] B. Hamann, Curvature approximation for triangulated surfaces, Computing 8 (1993) 139–153.

[7] C. Hesher, A. Srivastava, G. Erlebacher, A novel technique for face recognition using range imaging, in: Proceedings of the 7th IEEE International Symposium on Signal Processing and its Applications, 2003, pp. 201–204.

[8] T.H. Lin, W.C. Chen, W.Y. Ho, W.P. Shih, 3D face authentication by mutual coupled 3D and 2D feature extraction, in: Proceedings of the 44th ACM Southeast Conference, 2006, pp. 423–427.

[9] X. Lu, D. Colbry, A.K. Jain, Three-dimensional model based face recognition, in: Proceedings of the 17th IEEE International Conference on Pattern Recognition, 2004, pp. 362–366.

[10] X. Lu, A.K. Jain, Deformation analysis for 3D face matching, in: Proceedings of the 7th IEEE International Workshop on Applications of Computer Vision, 2005, pp. 99–104.

[11] X. Lu, A.K. Jain, Integrating range and texture information for 3D face recognition, in: Proceedings of the 7th IEEE International Workshop on Applications of Computer Vision, 2005, pp. 156–163.

[12] R. Mariani, Sub-pixellic eyes detection, in: Proceedings of the 10th IEEE International Conference on Image Analysis and Processing, 1999, pp. 496–501.

[13] H. Song, S. Lee, J. Kim, K. Sohn, Three-dimensional sensor-based face recognition, Applied Optics 44 (5) (2005) 677–687.

[14] Y. Wang, C.S. Chua, Robust face recognition from 2D and 3D images using structural Hausdorff distance, Image and Vision Computing 24 (2) (2006) 176–185.

[15] W. Yu, X. Teng, C. Liu, Face recognition using discriminant locality preserving projections, Image and Vision Computing 24 (3) (2006) 239–248.

[16] A. Yuille, D. Cohen, P. Hallinan, Feature extraction from faces using deformable templates, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1989, pp. 104–109.

[17] L. Zhang, A. Razdan, G. Farin, J. Femiani, M. Bae, C. Lockwood, 3D face authentication and recognition based on bilateral symmetry analysis, The Visual Computer 22 (2006) 43–55.

[18] W. Zhao, R. Chellappa, A. Rosenfeld, Face recognition: a literature survey, ACM Computing Surveys 35 (2003) 399–458.
