
Institute of Computer Science and Engineering

Photo Tagging Recommendation Using Group Prior on Social Network

Student: Yee-Choy Chean

Advisor: Prof. Suh-Yin Lee

Photo Tagging Recommendation Using Group Prior on Social Network

Student: Yee-Choy Chean
Advisor: Prof. Suh-Yin Lee

A Thesis
Submitted to Institute of Computer Science and Engineering
College of Computer Science
National Chiao Tung University
in Partial Fulfillment of the Requirements
for the Degree of Master
in
Computer Science

August 2011
Hsinchu, Taiwan, Republic of China


Photo Tagging Recommendation Using Group Prior on Social Network

Student: Yee-Choy Chean
Advisor: Prof. Suh-Yin Lee

Institute of Computer Science and Engineering, National Chiao Tung University

Abstract (Chinese)

On social networks, the amount of photo data is growing rapidly because users like to share the moments of their lives through photos. To share these photos with the people who appear in them, the system provides a face tagging function; through this mechanism, a photo can be quickly shared with others. However, manually tagging a large number of photos is time-consuming, so, to automate face tagging, many face detection and recognition methods have gradually been integrated into these social networking systems. By recognizing faces, we can recommend a name list for the photo that a user wants to tag, which speeds up tagging. However, the huge number of people on a social network and the variability of the conditions under which photos are captured create a considerable bottleneck for face recognition.

In most group photos, the people in the photo are related to some degree. Therefore, some studies have analyzed the interactions between people to measure the strength of their relationships, and then combined this with face recognition methods to improve recognition accuracy.

In this thesis, we propose a novel method for face tagging recommendation. Unlike other methods that only analyze the relationships between the photo uploader and other people on the social networking site, we also consider the group relationship of the faces to be tagged. Our experiments show that the proposed method indeed brings good results.

Keywords: social network, face detection, face recognition, photo tagging


Photo Tagging Recommendation Using Group Prior

on Social Network

Student: Yee-Choy Chean Advisor: Prof. Suh-Yin Lee

Institute of Computer Science and Engineering National Chiao Tung University

Abstract

In social networking sites, photo images captured in daily life occupy a large proportion of web content, since most people like to upload their photos to share their activities. To share a photo with the people appearing in it, the users have to manually tag those people with their names, and then the system will recommend the photo to them immediately. Thus, face tagging has become a mechanism for photo sharing on social networking sites, which makes face recognition on social networking sites a modern and useful application. Therefore, more and more studies discuss the use of face recognition on social networking sites to help people tag photos automatically. However, traditional approaches that use face recognition to recognize people in the social network are not robust, because of the large number of people and the uncontrolled conditions in the photos. Hence, social contexts, which are kinds of user relationships learned from the online social network, are adopted to improve the accuracy of face recognition.

In this thesis, we propose a novel system to address the task of tagging recommendation for a photo. Not only the face recognition technique but also the relationships among the people on the social network are considered. Different from other studies that just use the relationship between the uploading user and the faces to be tagged, we also utilize the group relationship of the faces that appear in the photo. The experiments show that the recognition performance is truly improved by employing the proposed mechanism.

Keywords: face detection, face recognition, social network, social context, photo tagging


Acknowledgement

First of all, I greatly appreciate my advisor, Prof. Suh-Yin Lee, and my seniors Hui-Zhen Gu and Chien-Li Chou for their graceful ideas, precious experience and technical assistance. Besides, I am grateful to my best friends Li-Wu Tsai, Jing-Hong Low, Ivy Chuah, Chun-Siong Ong, Geok-Huat Sia, Kee-Loong Wong, Chun-Yong Tan, Kwee-Hong Tang, Chong-Kae Seah, Shi-Sheng Foo, William Liong, Kim-Boon Low, Jee-Seng Chew, Lan-Xin Tang, Yong-Tzer Lim, Shu-Tyng Pang, Shu-Xian Chean, Siew-Ying Chean, Sook-Fui Chean, Chao-Ying Wu, Tuan-Hsien Lee, and Andy Goh for their inspiration. Last but not least, I appreciate my parents Poh-Seng Wong and Kim-Thye Chan and my brother Jin-Li Wong. Without their support and encouragement, I would not have been able to complete this achievement. I devoutly dedicate this thesis to them.


Table of Contents

Abstract (Chinese)
Abstract (English)
Acknowledgement
Table of Contents
List of Figures
List of Tables

Chapter 1. Introduction
Chapter 2. Background and Related Work
2.1 Overview of Social Network
2.2 Face Detection
2.3 Face Recognition
2.4 Face Recognition in Consumer Photos
2.5 Face Recognition Using Social Context in Social Network
Chapter 3. Proposed System Architecture
3.1 Problem Definition
3.2 System Overview
3.3 Face Recognition based on PCA
3.4 Importance of Social Network
3.4.1 The Most Popular Social Networking Site – Facebook.com
3.5 Construction of Social Graph Model
3.6 Similarity Score Estimator
3.7 Group Prior Estimator
Chapter 4. Experimental Results
4.1 Experimental Dataset Collection
4.2 Evaluation Method
4.3 Experimental Results
Chapter 5. Conclusions and Future Works


List of Figures

Figure 1. (a) Original image. (b) Integral image.
Figure 2. Use of integral image.
Figure 3. (a) Edge Features. (b) Line Features. (c) Special diagonal line feature.
Figure 4. Adaboost learning pseudo code.
Figure 5. The concept of Cascading Classifier.
Figure 6. An example for using first name to recognize people.
Figure 7. Graph for group prior recognition.
Figure 8. Centralized face recognition engine framework.
Figure 9. Collaborative face recognition engine framework.
Figure 10. An example for face tagging recommendation.
Figure 11. Framework of our face tagging recommendation system.
Figure 12. Dominant features of eigenfaces.
Figure 13. An example illustrating the influence of social network.
Figure 14. A user profile page on the Facebook.com.
Figure 15. An example of user comment on Facebook.
Figure 16. An example of video tagging on Facebook.
Figure 17. An example of photo tagging on Facebook.
Figure 18. Social graph model for co-occurrence relationship.
Figure 19. Social graph model for mutual friend relationship.
Figure 20. Construction of Similarity Score.
Figure 21. An example of query face.
Figure 22. Combine the co-occurrence relationship and mutual friend relationship.
Figure 23. Combine the Face Score and Relation Score.
Figure 24. The basic idea of group prior.
Figure 25. N candidates for each query face.
Figure 26. Detected communities in the graph.
Figure 27. Ranking community by average of co-occurrence relationship.
Figure 28. Photos in experimental dataset.
Figure 29. Faces appearing in the photos.
Figure 30. Ratio of Training data and testing data.
Figure 31. Histogram of people per image. (Testing data)
Figure 32. Histogram of people per image. (Training data)
Figure 33. Average H-hit rate for α.
Figure 34. Average H-hit rate for β.
Figure 35. Influence of relationship.
Figure 36. Different H-hit rate for Similarity Score.


List of Tables

Table 1. The comparison of the face recognition methods.


Chapter 1. Introduction

More and more photos are captured in our daily life due to the ubiquitous presence of capture devices such as digital cameras, mobile phones, and camcorders. It is possible to capture thousands of photos on a single trip, which we could not have imagined before. With such a large number of photos, most of them are just stored in the computer, because it is difficult to manage the photos manually.

Nowadays, the social network has become a popular new platform for people to interact with each other. Thus, the users of social networks upload their photos onto the social network not only to share their life experience but also to jog their own memory. Moreover, users can tag the people appearing in a photo with their names so that the system can notify the tagged users, and tagging also provides a good way to manage and search the photos easily afterwards. However, tagging all uploaded photos is a time-consuming task for general users. Hence, if some tags can be recommended automatically by the system, users can accomplish the tagging much more quickly. For that reason, with the benefits arising from social network development, photo tagging recommendation is the goal of this thesis.

To provide an appropriate tagging recommendation, using face detection and recognition to identify the people in a photo is a direct method to achieve this goal. However, with an increasing number of people, the features of faces are insufficient to distinguish them from one another. As a result, the accuracy of face recognition decreases when the number of people increases. In addition, uncontrolled conditions such as ambient illumination and the capture angle of faces are also bottlenecks for practical face detection and recognition. Furthermore, most people might use a graphics editing program to make the photos look better before uploading.

In general, when we look at the people in a photo, we can quickly make judgments regarding many aspects, including their demographic description and their identity if they are familiar to us. We can also answer questions related to the activities of, emotional states of, and relationships between the people in a photo. In other words, we draw conclusions based not just on what we see, but also on a lifetime of experience of living and interacting with other people. Based on this observation, we adopt the relationships among people to filter out a portion of the people who are more likely to appear in the photo, and then apply face recognition to the filtered people. In the real world, it is difficult to quantize the relationships among people. Thanks to the rapid development of social networks, the relationships between people can be retrieved from their interactions on the social network. For instance, if person A and person B usually appear in photos simultaneously, we can say that person B is closer to person A than other people. Afterwards, when person A uploads a new photo, the probability that person B appears in it is higher than for other persons. This kind of information becomes complementary when we recognize a person in a photo.

In the social network, the data which we use to retrieve the relationship is called the social context. Two kinds of social context are used in this thesis: the number of co-occurrences of two persons in photos and the number of common friends that two persons share. The relationship can be evaluated from the social contexts, and we are able to apply the relationship to face recognition. For example, we define a photo that is going to be tagged as the query photo and the faces in the photo as the query faces. Assume that a person uploads a photo with one face already tagged. We can find a group of people that are related to the person of the tagged face, and only the people in this group need to be considered in face recognition. As a result, we can recommend a list of people who are more likely to appear in the photo for the faces that are not recognized yet. In existing studies, only the relationship between the user who uploads the photo and other users is considered. Actually, the relationship among the people in a photo is also an important cue for recommending a list of people.

In this thesis, for a query photo, we assume that the uploading user has tagged one face, termed the known face, and our goal is to find a group of people who are related to the identity of the known face. The people are first retrieved from their faces by using face recognition. Then the relationships among the people are evaluated, and a list of people is recommended for the photo tagging.

In experiments, we use the photos on Facebook from 94 volunteers to demonstrate the performance of the proposed framework, and satisfactory experimental results are obtained.

In Chapter 2, we review previous works on tag recommendation using face detection and face recognition. In Chapter 3, we present our proposed system, including the face recognition method and the social contexts used. Chapter 4 shows the experimental results. Finally, we make a conclusion and discuss future work in Chapter 5.


Chapter 2. Background and Related Work

Social network [1] is a social structure made up of people who are connected by socially meaningful relationships. Because of the ubiquitous presence of digital capture devices such as digital cameras, mobile phones and camcorders, most people upload their personal photos onto social networking sites not only to share their life experience but also to jog their own memory. To share photos with each other on social networking sites, a user tags the faces appearing in a photo with their names. This not only helps the system recommend the photo to the users we want to share with but also provides a good way to manage and search photo albums afterwards [2]. In addition, analyzing those tags is useful for friend recommendation [3,4] and tag recommendation [5-7].

Many studies recommend tags for photos only by low-level image features [8,9]. For example, some studies combine face detection [10-13] and face recognition [14,15] to perform photo annotation automatically. However, uncontrolled conditions such as lighting, shooting angle and image resolution are bottlenecks for practical face recognition. Besides, to make the photos look better, most people use a graphics editing program to edit the photos before uploading, which also poses a great challenge for face recognition.

When we recognize the people in a photo, we can quickly make judgments regarding many aspects including their demographic description and identity if they are familiar to us. Some questions related to the activities of, emotional states of, and relationships between people in an image can be answered by us. Thus, we draw conclusions based on not just what we see, but also a lifetime of experience of living and interacting with other people.


In consumer images, Gallagher et al. [16,17] improve face recognition by learning the prior probability of different individuals appearing together in a photo. In addition, analysis of the context in an album is also used to improve recognition [18]. Those contexts include information related to the scene surrounding the person, digital camera context such as location and photo capture time, and the interactions among people. In social networking sites, the social context can be analyzed to obtain the relationships among people. Sigurbjornsson et al. [19] combine information from four contexts: all the photos in the system, the photos of a user, the photos of a user’s social contacts, and the photos posted in the groups which a user joins.

In this chapter, we first present an overview of the social network. Then, previous work on face detection, face recognition, auto-labeling with group prior, and photo annotation with social context is introduced in the following sections.

2.1 Overview of Social Network

Less than a decade ago, connecting to people meant communicating via snail mail, fax machines, phone calls and beepers. Since then, communication has evolved into email, IM (instant messaging) and SMS messaging through mobile phones. Today, as a way of communication, the social network is undoubtedly a modern-day phenomenon.

Social networks especially advance communication and self-expression. Since millions of people log in to a variety of social networks, it is natural for businesses to promote their products and services through these social networks. Social networks can easily group users according to the given information, which makes it much easier to send appropriate advertisements to users. Besides, job seekers and those who want to promote themselves online also use social networks as a way to achieve their goals. Many people use social networking websites to post their music, photos and other works so that their skills can be noticed by other professionals. They have already achieved fame through this technique and are still using social networks to connect to as many people as possible for self-promotion. According to the data collected by BrandZ [20], with almost 700 million users worldwide, the social networking giant Facebook.com (FB) [21] was valued at a bit over 190 billion. It seems like a field with unlimited potential.

In view of the fact that photos with faces make up a large portion of the uploaded photos, face recognition on a social network platform is becoming an important application area.

2.2 Face Detection

Given an arbitrary image, the goal of face detection is to determine whether there is any face in the image and, if present, return the location of the face in the image [11].

Human face detection has been one of the most popular topics in the computer vision literature, since face detection is the stepping stone to many applications such as face recognition, face tracking, face verification/authentication and the analysis of facial expressions. The face detection algorithm proposed by Viola and Jones [22] has had the most impressive impact in the 2000s. It contains three main ideas that make it possible to build a successful detector that can run in real time: the integral image for fast feature evaluation, classifier learning with AdaBoost [23] for feature selection, and the cascade for fast rejection of non-face windows.


The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y), so the sum of the pixels in any rectangular subset of a grid can be computed with only four references to the integral image. Figure 1 and Figure 2 show an example of using the integral image. When we need to compute the sum of the pixels in region D, we can use the points 1, 2, 3, 4 of the integral image: D = 4 - 2 - 3 + 1.

Figure 1. (a) Original image. (b) Integral image.

Figure 2. Use of integral image.
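To make the rectangle-sum trick concrete, the following is a minimal NumPy sketch of the integral image and the four-corner lookup (D = 4 - 2 - 3 + 1) described above; the function names and the toy 5x5 image are illustrative, not part of the thesis.

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[:y+1, :x+1] (cumulative sum over rows, then columns).
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    # Sum of pixels in the inclusive rectangle using four corner lookups: D = 4 - 2 - 3 + 1.
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]       # point 2
    if left > 0:
        total -= ii[bottom, left - 1]     # point 3
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]    # point 1
    return total

img = np.arange(25, dtype=np.int64).reshape(5, 5)   # toy image
ii = integral_image(img)
# A two-rectangle (edge) Haar-like feature: left half minus right half of a window.
feature = rect_sum(ii, 1, 1, 3, 2) - rect_sum(ii, 1, 3, 3, 4)
print(feature)
```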


Figure 4. Adaboost learning pseudo code.

Using the integral image, we can compute simple Haar-like rectangular features, as shown in Figure 3.

Boosting is a method of finding a highly accurate hypothesis by combining many “weak” hypotheses, each with moderate accuracy. For an introduction on boosting, we refer the readers to [24] and [25]. Figure 4 shows the Adaboost learning pseudo code.

Figure 5. The concept of Cascading Classifier.

Figure 5 shows the concept of the cascading classifier. First, each classifier makes a binary decision that either keeps a sub-window (positive sub-window) for the next round or rejects it (negative sub-window) immediately. Afterwards, a positive response from the first classifier triggers the evaluation of the second classifier, and so on. After this process, the remaining sub-windows are the faces we want.
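For reference, a pretrained Viola-Jones cascade can be run with OpenCV as sketched below. This is only an illustration of the cascade idea under the assumption that OpenCV is available; the thesis does not state which implementation was used, and the input file photo.jpg is hypothetical.

```python
import cv2

# Load OpenCV's pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")                       # hypothetical input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each sliding window passes through the classifier stages; windows rejected by an
# early stage are discarded immediately, and the surviving windows are reported as faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_faces.jpg", img)
```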

2.3 Face Recognition

As mentioned above, face detection is highly related to face recognition. Based on efficient face detection, face recognition has a wide variety of applications. The face recognition problem can be defined as: given static images, identify one or more persons in the scene by comparing them with the faces stored in the database. Face recognition is one of the most challenging problems in computer vision. This is the reason why it has received researchers' attention and sustained development in recent years. The applications of face recognition techniques can be categorized into two main types: law enforcement applications and commercial applications. Law enforcement applications include video camera surveillance and mug shot albums. Commercial applications include static matching against face databases for credit cards, ATM cards and so on.

Many face recognition techniques can only be applied to frontal faces, including Eigenfaces, Neural Networks (NN), Graph Matching (GM), Geometrical Feature Matching (GFM) and Template Matching (TM) [26]. The advantages and disadvantages of those methods are shown in Table 1.


Table 1. The comparison of the face recognition methods.

Eigenfaces
Advantage: a fast, simple and practical method.
Disadvantages: does not provide invariance over changes in scale and lighting conditions.

NN
Advantage: efficient in feature extraction.
Disadvantages: not suitable for a single-model-image recognition test, because multiple model images per person are necessary for training.

GM
Advantage: rotation invariance.
Disadvantages: computationally expensive.

GFM
Advantage: useful for finding possible matches in a large database such as a mug shot album.
Disadvantages: dependent on the accuracy of the feature location algorithms; requires considerable computational time.

TM
Advantage: more logical than other feature matching methods.
Disadvantages: computational complexity.

2.4 Face Recognition in Consumer Photos

Most research on face recognition targets the problem of recognizing a person given a face image. In consumer photos, the photographer captures photos not in a random fashion, but rather to remember or document meaningful events in her/his life. In addition, contexts such as information about the scene surrounding the person, camera context such as location and photo capture time, and the social context describing the interactions between people can all help to improve the accuracy of face recognition.

In [18,27], the contexts in the photo are used to recognize the persons in a photo. Context is broadly defined as information relevant to something under consideration. The different types of context that are useful for recognizing people are shown in Table 2.


Table 2. Different types of context for face recognition.

Pixel Context: clothing, color, other people, relative pose.
Camera Context: image capture time, GPS data.
Social Context: first name, age and gender, person's height.

Gallagher et al. [28] considered the relationship among name, age and gender to construct a model for identity recognition. As shown in Figure 6(a), for example, given the names Linda and Lydia, we want to associate the people in the photo with the names. In Figure 6(b), the authors use the context provided by the U.S. Social Security baby name database to compute statistics related to the distribution over birth year, gender and first name. Therefore, the name Linda most likely belongs to the woman on the right and Lydia to the other woman on the left.

(a) (b)

Figure 6. An example for using first name to recognize people. [28]

In addition, Gallagher et al. [16,17] considered the group prior to identify people in consumer images with multiple people. The goal of the authors is to provide the computer with the same intuition that humans would use for analyzing images of people. The group prior describes the probability of a group of individuals appearing together in an image.

(a) (b)

Figure 7. Graph for group prior recognition. [16]

Let P = {p_1, p_2, p_3, ..., p_m} denote all the faces in a photo and S_feature = {f_1, f_2, f_3, ..., f_m} be the set of observed features of each person. This paper graphically models the relationship between the identities of the people in the photo. A particular person in the image is p_m, the associated features are f_m, and the name assigned to the person is n_m. The appearance features f_m are derived solely from the pixels of the face region in the image.

Figure 7(a) shows a graph that represents the appearance features and the identities in a photo. Each person p has an undirected connection to all other people. Figure 7(b) shows the recognition accuracy using the individual prior ("Indiv") and the group prior ("Group"), respectively. The individual prior considers only the prior probability of an individual appearing in a photo. As a whole, the accuracy of recognition using the group prior is much better than that using just the individual prior.


2.5 Face Recognition Using Social Context in Social Network

Thanks to the rapid development of social networks, many online personal photo albums are embedded in some form of social network. Since the photos in a personal photo album are related and collected from a trip or activity, the social contexts of the social network are abundant and help us not only with photo understanding but also with management efficiency.

In recent years, there are lots of studies focusing on improving face recognition using social contexts. Moreover, based on the improvement, friend recommendation can be provided as well. Once the face recognition is resolved, we can recommend the people who appear in the album to the album owner as potential friends.

Stone et al. [29] demonstrate a simple method to enhance face recognition with social network context. The goal of this work is to infer a joint labeling of face identities over all nodes (identities) in the graph (social network) by applying a pair-wise conditional random field (CRF) [30]. The potential functions of the CRF are categorized into two classes: (1) single-node potentials and (2) pair-wise potentials. First, the single-node potential includes the face similarity and computes the distribution reflecting the number of times that each person has been labeled in the user's existing personal photo collection. Second, the pair-wise potential includes the friendship potential describing whether two users are friends on the social network. For example, if user B exists in the friend list of user A, user A and user B are friends. In addition, other unary, pair-wise and higher-order potentials could also be considered to incorporate other forms of information.


Figure 8. Centralized face recognition engine framework.

Many existing face recognition systems were developed using a centralized face recognition approach, as shown in Figure 8. Put simply, a single face recognition engine is constructed to recognize the query faces. Therefore, all the training face images in the database are used to train this face recognition engine. This approach uses only one face recognition engine to perform the recognition from beginning to end.


Figure 9. Collaborative face recognition engine framework.

In contrast to the centralized approach, Choi et al. [31] propose collaborative face recognition for face annotation on social networks. Different from other face recognition systems used on social networks, the goal of this work is to lower the computational cost and come up with a design suitable for deployment in decentralized social networking sites. Figure 9 shows the framework of the collaborative face recognition system. The "current user" is the user who uploads the query image. The collaborative face recognition framework for the current user is constructed with multiple face recognition engines. Each user in the social network has their own face recognition engine trained from their own photo collection.

Then, suitable face recognition engines are selected effectively according to the social contexts. This increases the possibility of selecting face recognition engines trained from more face images. This work uses occurrence and co-occurrence probabilities to estimate the strength of the relationship between two users.


Other studies use the occurrences of a person in a photo album to recommend a list of friends. The purpose of these papers is different from our work. However, their algorithms for using the social context can serve as a reference for our work.


Chapter 3. Proposed System Architecture

In this chapter, we illustrate the framework for incorporating social network context into the face tagging recommendation system. First, we give a clear definition of our problem in section 3.1. To make the process of our work easy to understand, an overview of the system is provided in section 3.2. The face recognition approach that we use is described in section 3.3. As the most popular social networking site and the source of the dataset we use, Facebook.com is introduced in section 3.4. The construction of the social graph model in our system is discussed in section 3.5. A detailed description of the Similarity Score Estimator, which estimates the Similarity Score, is presented in section 3.6. The Group Prior Estimator is described in great detail in section 3.7.

3.1 Problem Definition


Given an arbitrary photo with one known person, the proposed system recommends a tag list for the rest of the people appearing in this photo. For example, as shown in Figure 10, there are three people in the photo. Assume that we know the face related to John, who exists in the photo; we define this face as the known face and the rest as the query faces. In addition, we estimate the strength of the relationships between John and the others. Simultaneously, we recognize query face B and query face C, which are detected by the face detection algorithm, through face recognition. To combine the face recognition result and the social relationship result, a fusion approach is proposed. Finally, we apply the Group Prior Estimator and recommend a list of names for this photo.

3.2 System Overview

Figure 11. Framework of our face tagging recommendation system.

Figure 11 shows the framework of our face tagging recommendation system. There are two databases used in our system: face database and relationship database.


We build the face database with the training faces of people who exist in the social network. The relationship database is constructed from the social context extracted from the social network and is represented by a graph model. We quantify the social relationships among the social network users by making use of the identity co-occurrence in photo collections on the social network and the number of mutual friends in the friend lists that users create on the social network. The input of our system is the photo we want to tag, which is called the query image. Assume that the user who uploads this photo has already tagged a face in the photo, which is defined as the known face u_known. The rest of the people appearing in this photo are defined as the query faces and are detected by face detection. For each query face, the Face Score is computed by face recognition in the Face Score Estimator. Meanwhile, the co-occurrence relationship and the number of mutual friends for the known face are retrieved from the social graph model. Through normalization, we map the co-occurrence relationship, the number of mutual friends and the Face Score onto the same scale. Then, the Similarity Score is computed by the Similarity Score Estimator. Finally, in the Group Prior Estimator, we find a group of people who have a higher probability of appearing in this photo than other groups, and those people become the recommendation list for the photo.

3.3 Face Recognition based on PCA

In this section, we introduce face recognition based on the Principal Component Analysis (PCA) method [33], which is one of the most successful techniques used in image recognition and compression. The purpose of PCA is to reduce the large dimensionality of the data space to the smaller intrinsic dimensionality of the feature space, which is needed to describe the data economically. This is the case when there is a strong correlation between observed variables.

Let D = {d_1, d_2, ..., d_{N_D}} denote the training dataset of N_D samples. The average D_avg is defined as follows:

D_avg = (1/N_D) * Σ_{i=1}^{N_D} d_i. (1)

Each element in the training dataset differs from D_avg by the vector Y_i = d_i - D_avg. The covariance matrix M_cov is obtained as:

M_cov = (1/N_D) * Σ_{i=1}^{N_D} Y_i * Y_i^T. (2)

Since the covariance matrix M_cov is square, we can calculate the eigenvectors and eigenvalues of M_cov. Hence, we choose N_D significant eigenvectors of M_cov as E_k and compute the weight vectors W_ik for each element in the training dataset, where k ∈ {1, 2, 3, ..., N_D}.

W_ik = E_k^T * (d_i - D_avg)  for all i, k. (3)

Using the PCA method, eigenface face recognition [14] can be achieved. Let F_training = {f_t1, f_t2, ..., f_t_{N_Ft}} denote the set of N_Ft training faces. We calculate each face's difference vector from the average face D_avg by Equation (1), and the covariance matrix M_cov is obtained by Equation (2). Then we compute the eigenvectors E_k of the covariance matrix M_cov, which define the face feature space. Finally, we compute the weights W_ik by Equation (3) for each face image in F_training.

When a query face image is encountered, we calculate a set of weights W_Test^k forming a vector T_p = (w_1, w_2, ..., w_{N_D})^T that describes the contribution of each eigenface in representing the input face image.


Finally, we classify the weight pattern by computing the minimum distance between the W_Test^k of the query face image and the vectors T_p. The testing image is classified into class P_class when minimum(d_p) < D_Threshold, where d_p = ||W_Test^k - T_p||. Figure 12 shows some dominant features of eigenfaces.

Figure 12. Dominant features of eigenfaces. [34]
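The eigenface pipeline of Equations (1)-(3) and the minimum-distance classification can be sketched in a few lines of NumPy; the variable names and the distance threshold below are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def train_eigenfaces(faces, k):
    # faces: (N, D) matrix with one flattened training face per row.
    mean = faces.mean(axis=0)                      # D_avg, Eq. (1)
    Y = faces - mean                               # difference vectors Y_i
    # Use the small N x N Gram matrix instead of the D x D covariance of Eq. (2).
    gram = Y @ Y.T / len(faces)
    vals, vecs = np.linalg.eigh(gram)
    order = np.argsort(vals)[::-1][:k]             # k most significant components
    eigenfaces = Y.T @ vecs[:, order]              # map back to image space: E_k
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    weights = Y @ eigenfaces                       # W_ik, Eq. (3)
    return mean, eigenfaces, weights

def recognize(query, mean, eigenfaces, weights, labels, threshold=np.inf):
    # Project the query face and return the label of the nearest training face.
    w_test = (query - mean) @ eigenfaces
    dists = np.linalg.norm(weights - w_test, axis=1)
    best = int(np.argmin(dists))
    return labels[best] if dists[best] < threshold else None
```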

3.4 Importance of Social Network

Social networks are more and more important in our daily life. They can help us find a job, make a new friend or find a partner. Figure 13 shows how the social network affects our life. Assume that Bob is your friend and he knows Mary, and Mary's friend John has a job for you. Through the recommendations of Bob and Mary, you can get the job more easily. The problem with the social network in the real world is that most of the connections between people are hidden. Your network may have huge potential, but that potential is only available when you can see the people in those connections. Put simply, as in the example mentioned before, you do not know who the friends of your friends are, and this makes you a stranger to John. This problem is solved by a type of website, the so-called social networking sites.


Figure 13. An example illustrating the influence of social network.

3.4.1 The Most Popular social networking site – Facebook.com

Facebook is the most popular social networking site in several countries. We conduct our experiments using a small portion of Facebook data. Figure 14 shows a user profile page on Facebook. We can split the functions of Facebook into several parts: A: profile image, B: friend list (contact list), C: photo collection, D: posts shared by the current user, and E: friend events.


Figure 14. A user profile page on the Facebook.com.

A. Profile image: a profile image is the featured picture of the user in an online profile. This image usually gives others their first impression of the user and can also potentially be used to build the training face database for the face recognition algorithm.

B. Friend list: some studies call it “contact list”. It lists the friends who are added by the user.

C. Photo collections: the photo albums uploaded by the user. In addition, the photos tagged with the user's name are regarded as one of the photo collections.


D. Posts shared by user: the comments shared by friends and the user.

E. Friend's events: events created by the user or his/her friends, birthday reminders, and activities created by the groups which the user joins.

Facebook provides a mechanism for users to share photos, video, music, comments, and so on to their friend. If you want to share something from your friend to other friends, there is a share button below those data.

In addition, one of the most popular services on Facebook is “name tagging”, which gives you the ability to identify and reference people in photos, videos and notes. The name tagging on the photos and videos can make a connection among people.

People update the status on the social networking site to reflect their thoughts and feelings. Sometimes, the status includes referenced friends, groups or even events they are attending. For instance, we can post "Grabbing lunch with Meredith Chin" or "I'm heading to Starbucks Coffee Company.”

Figure 15 shows an example of how a social networking site user posts a message on his/her wall and the friends can give some comments following the message. A “like” button is provided to let user’s friends express if they like it.

Figure 16 shows an example of how a social network user shares a video via the social network. Both the user and his friends can add tags to the video.


Figure 15. An example of user comment on Facebook.


Figure 17. An example of photo tagging on Facebook.

Figure 17 shows a photo shared by the user on the social network. The user and his friends can add tags or comments to the photo. The comments and tags added to the photo become important information for face recognition. Because these contexts are most relevant to the photo, we can understand the photo by analyzing the comments and tags.


3.5 Construction of Social Graph Model

In this section, we discuss the construction of the Social Graph Model. In social networking sites, the interactions of people can represent the strength of their relationships. For example, Allen and Mary are in the same school and they have many mutual friends. Then, we can make the assumption that Allen and Mary have a closer relationship than others. In this thesis, the relationships in the social network are represented by weighted graphs, described as follows.

1. Co-occurrence relation graph:

Such relationships naturally exist in photo collections due to the natural social interaction of humans. In our system, the relationships emerge from the tagging of photos. For example, the system may learn that John often appears in photos where Mary appears, which happens when John and Mary usually join the same parties or activities. Afterwards, when John uploads a photo, there is a high possibility that Mary appears in the photo too.

Figure 18. Social graph model for co-occurrence relationship.

Figure 18 illustrates the structure of the co-occurrence relation graph. Let U = {u_m | m = 1, 2, 3, ..., M} be the set of nodes representing the M users on the social network, and E = {e_ij | i, j = 1, 2, 3, ..., M, i ≠ j} be the set of edges connecting the users. W = {w_ij | i, j = 1, 2, 3, ..., M} represents the set of co-occurrence counts of u_i and u_j from the photo collections on the social network.

2. Mutual friend relation graph:

The number of mutual friends is another way to estimate the relationship between people. According to our observations, if two people have more mutual friends, their social circles in real life are very similar. Maybe they are in the same school or work in the same company. For this reason, they may have more chances to take photos together.

Figure 19. Social graph model for mutual friend relationship.

Figure 19 illustrates the structure of the mutual friend relation graph. Let U = {u_m | m = 1, 2, 3, ..., M} be the set of nodes representing M different users on the social network, and E = {e_ij | i, j = 1, 2, 3, ..., M, i ≠ j} be the set of edges connecting the users. W = {w_ij | i, j = 1, 2, 3, ..., M} represents the number of mutual friends of u_i and u_j on the social network.

The co-occurrence relation graph and the mutual friend relation graph not only represent the strength of the relationships among users, but also serve as a basis for estimating the closeness among the users.
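Both relation graphs can be derived directly from the tagged photos and contact lists. The sketch below assumes a hypothetical data layout (each photo as a set of tagged user ids, each contact list as a set of friend ids) and counts the co-occurrences w_ij and the mutual friends for every user pair.

```python
from collections import defaultdict
from itertools import combinations

def build_relation_graphs(photos, contact_lists):
    # photos: list of sets of user ids tagged in each photo.
    # contact_lists: dict mapping user id -> set of friend ids.
    co_occurrence = defaultdict(int)
    for tagged in photos:
        for u, v in combinations(sorted(tagged), 2):
            co_occurrence[(u, v)] += 1             # edge weight of the co-occurrence graph

    mutual_friends = {}
    for u, v in combinations(sorted(contact_lists), 2):
        mutual_friends[(u, v)] = len(contact_lists[u] & contact_lists[v])
    return co_occurrence, mutual_friends

# Toy example with three users and three photos.
photos = [{"alice", "bob"}, {"alice", "bob", "carol"}, {"bob", "carol"}]
contacts = {"alice": {"bob", "carol", "dave"},
            "bob": {"alice", "carol"},
            "carol": {"alice", "bob", "dave"}}
co, mf = build_relation_graphs(photos, contacts)
print(co[("alice", "bob")], mf[("alice", "carol")])
```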

3.6 Similarity Score Estimator

Figure 20. Construction of Similarity Score.

In this section, we define a Similarity Score Estimator to combine the Face Score and the Relation Score. As shown in Figure 20, the Similarity Score includes the Face Score and the Relation Score. The Relation Score contains the co-occurrence relationship and the mutual friend relationship.

O'_j(u_known, u_j) = Σ_{i=1}^{P} δ_i(u_known, u_j), where u_known ≠ u_j. (4)

δ_i(u_known, u_j) = 1 if u_known and u_j are both in photo p_i, and 0 otherwise. (5)

O_j(u_known, u_j) = (O'_j(u_known, u_j) - O'_min) / (O'_max - O'_min). (6)

Here u_known is the person who is already known in the photo and u_j is one of the persons in the social network. First, let O'_j(u_known, u_j) be the co-occurrence count indicating the number of photos in which u_known and u_j appear together. We rank the people in the social network by computing the co-occurrence O'_j(u_known, u_j) with u_known.


O'_min and O'_max are the minimum and maximum co-occurrence counts among the network users. To obtain O_j(u_known, u_j), we normalize O'_j(u_known, u_j) as in Equation (6).

M'_j(u_known, u_j) = |CL_{u_known} ∩ CL_{u_j}|. (7)

M_j(u_known, u_j) = (M'_j(u_known, u_j) - M'_min) / (M'_max - M'_min). (8)

R_j(u_known, u_j) = α * M_j(u_known, u_j) + (1 - α) * O_j(u_known, u_j). (9)

Second, we compute the number of mutual friends M'_j(u_known, u_j) for u_known and u_j. CL_{u_known} is the contact list belonging to u_known and CL_{u_j} is the contact list belonging to u_j. M'_min and M'_max are the minimum and maximum numbers of mutual friends among the network users. To obtain M_j(u_known, u_j), we normalize M'_j(u_known, u_j) as in Equation (8).

The Relation Score R_j(u_known, u_j) for u_known and u_j is then computed by combining the co-occurrence relationship and the mutual friend relationship, as in Equation (9). α is a parameter that adjusts the relative weight between the mutual friend relationship and the co-occurrence relationship, and its range is 0 to 1. In our experiments, the best value of α is 0.3, which brings good results.

To estimate the Similarity Score S_j(q, u_j) of a query face q with respect to a user u_j in the social network, we combine the Face Score F_j(q, u_j) and the Relation Score R_j(u_known, u_j) as follows:

S_j(q, u_j) = β * F_j(q, u_j) + (1 - β) * R_j(u_known, u_j).


Note that the importance of the relationship becomes higher as β decreases. To make this easier to understand, let us take an example. Assume that the social network system is constructed from 10 candidates with their face image data, photo collections and friend lists. A query image Q consisting of one known person u_known and query faces q_1 and q_2 is given, as shown in Figure 21. The goal is to recognize the query face q_1, who takes the photo together with u_known.

Figure 21. An example of query face.

Let F_j(q_1, u_j) indicate the face recognition result for q_1 and u_j. Since u_known and q_1 are captured in a photo together, some relationship must exist between them, so a user who is close to u_known has a higher possibility of appearing in this query image Q.

We combine O_j(u_known, u_j) and M_j(u_known, u_j) into the Relation Score R_j(u_known, u_j) for u_known and u_j. Figure 22 shows the combination of the co-occurrence relationship and the mutual friend relationship. O_1 to O_10 are the co-occurrence relationships between each network user u_j and u_known. M_1 to M_10 are the numbers of mutual friends between each network user u_j and u_known. R_1 to R_10 are the Relation Scores between u_j and u_known.

Figure 22. Combine the co-occurrence relationship and mutual friend relationship.

Next, the Similarity Score S_j(q_1, u_j) can be computed by integrating the Face Score F_j(q_1, u_j) and the Relation Score R_j(u_known, u_j) for each u_j. The higher the Similarity Score of u_j is, the more likely it is that u_j is the query face q_1. An example of combining the Face Score and the Relation Score is shown in Figure 23. F_1 to F_10 are the Face Scores between the query face q_1 and each u_j. R_1 to R_10 are the Relation Scores and S_1 to S_10 are the Similarity Scores.

Figure 23. Combine the Face Score and Relation Score.
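The score fusion of Equations (4)-(9) and the Similarity Score formula can be sketched as follows, assuming the raw co-occurrence counts, mutual-friend counts and face-recognition scores for each candidate user are already available; the toy values are illustrative only.

```python
def min_max_normalize(scores):
    # Map raw scores (dict: user -> value) onto [0, 1].
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {u: 0.0 for u in scores}
    return {u: (v - lo) / (hi - lo) for u, v in scores.items()}

def similarity_scores(face_scores, cooccur, mutual, alpha=0.3, beta=0.5):
    # face_scores: F_j for the query face; cooccur/mutual: raw O'_j and M'_j w.r.t. u_known.
    O = min_max_normalize(cooccur)                                # Eq. (6)
    M = min_max_normalize(mutual)                                 # Eq. (8)
    R = {u: alpha * M[u] + (1 - alpha) * O[u] for u in O}         # Eq. (9)
    return {u: beta * face_scores[u] + (1 - beta) * R[u] for u in R}

# Toy example with three candidate users.
face = {"bob": 0.8, "carol": 0.6, "dave": 0.3}
co = {"bob": 5, "carol": 2, "dave": 0}
mf = {"bob": 10, "carol": 30, "dave": 4}
print(similarity_scores(face, co, mf))
```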

3.7 Group Prior Estimator


The people appearing together in a photo usually not only join the same activity, but some relation also exists among them. By considering the relationships of a group of people, we can find a group of people who are likely to appear together in a photo. In this section, we propose a framework to recommend a group of tags for the group of people in the photo.

Figure 24. The basic idea of group prior.

Figure 24 shows a photo from the social network. In general, to recognize the people in the social network, many studies use the social context to find the people who are most relevant to u_known and predict the query faces one by one. This considers only the relationship between u_known and the query faces. In this section, we consider the relationships among the users to find a group of people who have the highest likelihood of being the people appearing in the photo.


Figure 25. N candidates for each query face.

As mentioned in section 3.6, we compute the Similarity Score between every network user u_j and each query face. For each query face, we keep the top N candidates, ranked by Similarity Score, as its candidate set. In the example shown in Figure 25, there are two query faces q_1 and q_2 in the photo, so we get two candidate sets consisting of 2*N candidates in total. We take the union of the two candidate sets to form the Fusion Set, which contains K candidates, where K ≤ 2N.


Figure 26. Detected communities in the graph.

To find a group of people who are close to each other in the Fusion Set, we build a social graph model and use a community detection algorithm to detect the communities. Figure 26 shows the built graph. Each node is an individual in the Fusion Set, each edge is a connection between two individuals, and each edge weight is the Relation Score between the two individuals. The SHRINK [35] algorithm is used to detect the communities for its fast execution and accuracy.


Figure 27. Ranking community by average of co-occurrence relationship.

In Figure 27, after using the SHRINK algorithm to detect the communities in the graph, we intend to find the community which is most relevant to u_known. First, for each community, we average the co-occurrence relationship between u_known and the corresponding candidates. The communities are then ranked according to the average of the co-occurrence relationships. The higher the average value is, the closer u_known and the candidates in the community are.

The individuals in a community are ranked by the co-occurrence relationship between u_known and the individual. Then, the individuals in the first community are added to the recommendation list in order until the size of the recommendation list equals the number of query faces in the photo. If the number of individuals in the first community is less than the number of query faces in the photo, the individuals of the second community are added to the recommendation list in order. Finally, we obtain the recommendation list for the query image.
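The whole group prior procedure can be sketched as below. Since the SHRINK algorithm [35] used in the thesis is not assumed to be available as a library, the sketch substitutes networkx's greedy modularity community detection as a stand-in; all function and variable names are illustrative.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def recommend_group(candidate_sets, relation_score, cooccur_with_known, num_faces):
    # candidate_sets: one list of top-N candidate users per query face.
    # relation_score(u, v): pairwise Relation Score used as the edge weight.
    # cooccur_with_known: dict user -> co-occurrence count with the known face.
    fusion_set = set().union(*candidate_sets)

    graph = nx.Graph()
    graph.add_nodes_from(fusion_set)
    for u in fusion_set:
        for v in fusion_set:
            if u < v and relation_score(u, v) > 0:
                graph.add_edge(u, v, weight=relation_score(u, v))

    # Stand-in for SHRINK: detect communities, then rank them by the average
    # co-occurrence of their members with the known face.
    communities = greedy_modularity_communities(graph, weight="weight")
    ranked = sorted(communities,
                    key=lambda c: sum(cooccur_with_known.get(u, 0) for u in c) / len(c),
                    reverse=True)

    recommendation = []
    for community in ranked:
        members = sorted(community, key=lambda u: cooccur_with_known.get(u, 0), reverse=True)
        for u in members:
            if len(recommendation) < num_faces:
                recommendation.append(u)
    return recommendation
```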


Chapter 4. Experimental Results

In this chapter, we present a comparison between the performance of a baseline face recognition system and the performance of the same system but combined with social network context. In section 4.1, we illustrate the collected dataset. Section 4.2 introduces the evaluation methods, and the experimental results are discussed in section 4.3.

4.1 Experimental Dataset Collection

To make our experiments reflect real behavior on a social networking site, all the photos and contact lists are collected from Facebook.com. In addition, to verify our experimental results, we mark the locations of the faces and tag the user names on the photos to construct the ground truth manually. Some photos in our database are shown in Figure 28. To extract the relationships between people from the photos, the number of people in each photo must be at least two. Figure 29 shows some faces appearing in the photos. Only frontal faces are used in our database to decrease the difficulty of face recognition.


Figure 28. Photos in experimental dataset.

Figure 29. Faces appearing in the photos.

Moreover, if face detection cannot correctly extract the faces in the photos, face recognition cannot work successfully either. To show that the social relationship makes a great impact on the photo tagging recommendation system, we extract the faces manually to avoid unsuccessful face detection.

Figure 30 shows the distribution of the dataset, in which 15% (165 photos) are the testing data and 85% (909 photos) are the training data. Figure 31 and Figure 32 show the histograms of the number of people per image for the testing data and the training data, respectively. From the distribution of the photo collection, we discover that the number of photos that contain two people is larger than that of the others.

Figure 30. Ratio of Training data and testing data.

Figure 31. Histogram of people per image. (Testing data)

(Figure 31 data: 85, 35, 23, and 22 testing photos contain 2, 3, 4, and >=5 people, respectively; the y-axis shows the number of images.)


Figure 32. Histogram of people per image. (Training data: 131, 444, 166, 75, and 93 photos contain 1, 2, 3, 4, and >=5 people, respectively.)

4.2 Evaluation Method

To evaluate the performance of the face annotation, the H-hit rate is used. Let F = {f_1, f_2, f_3, ..., f_{N_F}} denote the set of N_F query faces in a photo. In our experiments, all the faces are sorted by an order that we assign manually. To evaluate the prediction performance, it is assumed that the faces are annotated in this same order. For each query face f_i, i ≤ N_F, the system generates a list of H candidate names. If the ground truth of face f_i is in the list, we call this a successful prediction and say the prediction is hit by the name list. So, we can calculate the H-hit rate of a photo given H, the length of the candidate name list. The H-hit rate is defined as follows.

H-hit rate = (1/N_F) * Σ_{i=1}^{N_F} hit_H(f_i), (11)

where hitH(fi) is 1 if fi is hit by the name list of H names, and 0 otherwise.



Let P = {p_1, p_2, p_3, ..., p_{N_P}} denote the set of N_P photos in the database. In our experiments, for each query photo with N_F query faces, we recommend a list of N_F candidate names for this query photo. To evaluate the performance of our system, the precision of the recommendation list is defined as follows:

Precision = (1/N_P) * Σ_{p∈P} [ (1/N_F) * Σ_{i=1}^{N_F} hit_{N_F}(f_i) ]. (12)
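Both metrics can be computed directly from the ranked candidate lists. A small sketch, assuming each query face is paired with its ground-truth name and a ranked list of candidate names:

```python
def h_hit_rate(photo_faces, h):
    # photo_faces: list of (ground_truth_name, ranked_candidate_names) per query face.
    # Fraction of faces whose ground truth appears in the top-h candidates (Eq. (11)).
    hits = sum(1 for truth, candidates in photo_faces if truth in candidates[:h])
    return hits / len(photo_faces)

def precision(photos):
    # photos: list of photos, each a list of (ground_truth, ranked_candidates).
    # For a photo with N_F query faces, the list length used is N_F itself (Eq. (12)).
    per_photo = [h_hit_rate(faces, len(faces)) for faces in photos]
    return sum(per_photo) / len(per_photo)

# Toy example: one photo with two query faces.
photo = [("bob", ["bob", "carol"]), ("carol", ["dave", "carol"])]
print(h_hit_rate(photo, 1), precision([photo]))
```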

4.3 Experimental Results

In our proposed method, the Relation Score is a fusion score of the co-occurrence relationship and the mutual friend relationship. Let U = {u_1, u_2, u_3, ..., u_N} denote the set of N network users in the database, and let u_known be the person we already know in the photo. To find a suitable weight that reflects the real relationship between the network users and u_known, we use α as an experimental parameter to control the relative weight of the co-occurrence relationship and the mutual friend relationship. R_j, M_j and O_j are the Relation Score, mutual friend relationship and co-occurrence relationship between u_known and each network user, respectively. The formula to compute the Relation Score is defined as follows:

R_j(u_known, u_j) = α * M_j(u_known, u_j) + (1 - α) * O_j(u_known, u_j), where j ≤ N and α ∈ [0, 1]. (13)

We adjust the value of α in increments of 0.1. Figure 33 shows the average top-10 hit rate with respect to the adjusted value of α. Note that the average top-10 hit rate is the average from the 1-hit rate to the 10-hit rate. If α is 0, we only consider the co-occurrence relationship; if α is 1, we only consider the mutual friend relationship. A good compromise is found by setting α to 0.3.

Figure 33. Average H-hit rate for α.

Figure 34. Average H-hit rate for β.

In addition, the Similarity Score is a fusion score of the Face Score and the Relation Score. S_j and F_j are the Similarity Score and Face Score between the query face and each network user, respectively. R_j is the Relation Score between u_known and each network user. The formula to compute the Similarity Score is defined as follows:

S_j(q, u_j) = β * F_j(q, u_j) + (1 - β) * R_j(u_known, u_j), where j ≤ N and β ∈ [0, 1].

We adjust the value of β in increments of 0.1. Figure 34 shows the average top-10 hit rate with respect to the adjusted value of β. Note that the average top-10 hit rate is the average from the 1-hit rate to the 10-hit rate. If β is 0, we only consider the Relation Score; if β is 1, we only consider the Face Score. A good compromise is found by setting β to 0.5.

Figure 35. Influence of relationship.

Figure 35 shows the influence of the relationship in our system. We compare the H-hit rate for four different cases: 1. Similarity Score, 2. Co-occurrence, 3. Mutual friend, 4. Face Score. In case 1, we set α = 0.3 and β = 0.5. In case 2, we only consider the influence of the co-occurrence relationship. Case 3 only considers the influence of the mutual friend relationship. In case 4, we use the Face Score computed by the face recognition algorithm directly.


Figure 36. Different H-hit rate for Similarity Score.

In the group prior estimator mentioned above, we select the top-10 results for each query face as the candidates for finding a group of users to be recommended. From Figure 36, the curve becomes more stable after H reaches 10. That means most of the correct users can be found in the top-10 results.

Figure 37. Reliability of recommendation list.


Figure 37 shows that the larger the number of query faces is, the higher the precision of the recommendation list is. Different numbers of query faces lead to different influences on the robustness of the relationship. The relationship is more useful when the number of query faces is larger.


Chapter 5. Conclusions and Future Works

In this thesis, we present a system for face tagging recommendation using group prior on a social network. We propose a simple but efficient approach to finding a group of people to recommend for a query photo. In addition, most previous studies using social context for face tagging only consider the relationship between the known face and the query faces. Thus, if a query face has no close relationship with the known face, their methods cannot work well. Instead, our approach can correctly tag the query faces that are not close to the known face via the relationships among the query faces in the photo. Community detection is therefore used to find a group of users that is close to the known face even though some users in the group are not familiar with the known face. To improve the performance and the robustness of the system, some enhancements can be made in the future:

(i) Usage of text-based social context: on social websites, text-based social context contains a lot of information that can be used to learn the relationships among network users. For instance, the profile page has information about users such as gender, name of high school, occupation and interests. Using this information, we can assume that two people have a strong relationship if they attended the same school.

(ii) A user-friendly interface: a good user interface design can make face tagging easier. For example, when we want to tag a face in a photo, a recommendation list that pops up can help us quickly find the target user instead of searching among all the users.

(iii) Improvement of face recognition: although the use of relationships improves the accuracy of face recommendation, face recognition is still important for face tagging. A more accurate face recognition approach can make the face tagging recommendation system better.


Bibliography

[1] “Social Network.” http://en.wikipedia.org/wiki/Social_network.

[2] H.-N. Kim, A. El Saddik, K.-S. Lee, Y.-H. Lee, and G.-S. Jo, “Photo Search in A Personal Photo Diary by Drawing Face Position with People Tagging,” in

Proceedings of the 16th International Conference on Intelligent User Interfaces,

New York, NY, USA, 2011, pp. 443–444.

[3] Z. Wu, S. Jiang, and Q. Huang, “Friend Recommendation According to Appearances on Photos,” in Proceedings of the 17th ACM International

Conference on Multimedia, New York, NY, USA, 2009, pp. 987–988.

[4] M. Moricz, Y. Dosbayev, and M. Berlyant, “PYMK: Friend Recommendation at MySpace,” in Proceedings of the International Conference on Management of

Data - SIGMOD’10, Indianapolis, Indiana, USA, 2010, pp. 999.

[5] B. Sigurbjörnsson and R. van Zwol, “Flickr Tag Recommendation based on Collective Knowledge,” in Proceeding of the 17th International Conference on

World Wide Web, New York, NY, USA, 2008, pp. 327–336.

[6] H. Chen, M. Chang, P. Chang, M. Tien, W. Hsu, and J. Wu, “SheepDog: Group and Tag Recommendation for Flickr Photos by Automatic Search-based Learning,” in Proceeding of the 16th ACM International Conference on

Multimedia, Vancouver, British Columbia, Canada, 2008, pp. 737-740.

[7] Microsoft, “Automatic Tag Recommendation Algorithms for Social Recommend Systems.” http://research.microsoft.com/apps/pubs/default.aspx?id=79896.

[8] L. Zhang, L. Chen, M. Li, and H. Zhang, “Bayesian Face Annotation in Family Albums,” in Proceedings of the International Conference on Computer Vision (ICCV), Nice, France, 2003, pp. 2.

[9] J. Y. Choi, D. N. W, Y. M. Ro, and K. N. Plataniotis, “Automatic Face Annotation in Personal Photo Collections Using Context-Based Unsupervised Clustering and Face Information Fusion,” Circuits and Systems for Video

Technology, IEEE Transactions on, vol. 20, no. 10, pp. 1292-1309, Oct. 2010.

[10] W.-K. Tsao, A. J. T. Lee, Y.-H. Liu, T.-W. Chang, and H.-H. Lin, “A Data Mining Approach to Face Detection,” Pattern Recognition, vol. 43, no. 3, pp. 1039-1049, Mar. 2010.

[11] Ming-Hsuan Yang, D. J. Kriegman, and N. Ahuja, “Detecting faces in images: A Survey,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 24, no. 1, pp. 34-58, Jan. 2002.

[12] J. Zheng, G. A. Ramírez, and O. Fuentes, “Face Detection in Low-Resolution Color Images,” in Image Analysis and Recognition, vol. 6111, A. Campilho and M. Kamel, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 454-463.

[13] “Introduction Face detection.” http://www.stanford.edu/class/ee368.

[14] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face Recognition: A Literature Survey,” ACM Computing Surveys (CSUR), vol. 35, pp. 399–458, Dec. 2003.

[15] M. Turk and A. Pentland, “Eigenfaces for Recognition,” Journal of Cognitive

Neuroscience, vol. 3, pp. 71–86, Jan. 1991.

[16] A. C. Gallagher and T. Chen, “Understanding Images of Groups of People,” in

Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, Miami, FL, USA, 2009, vol. 0, pp. 256-263.

[17] A. C. Gallagher and T. Chen, “Using Group Prior to Identify People in Consumer Images,” in Computer Vision and Pattern Recognition, Minneapolis, MN, 2007, pp. 1-8.

[18] A. C. Gallagher and T. Chen, “Using Context to Recognize People in Consumer Images,” IPSJ Transactions on Computer Vision and Applications, vol. 1, pp. 115-126, 2009.

[19] B. Sigurbjornsson, R. Zwol, and A. Rae, “Improving Tag Recommendation Using Social Networks,” in Proceeding of the 9th International Conference on

Adaptivity, Personalization and Fusion of Heterogeneous Information, Paris,

France, 2010.

[20] “BrandZ 2011.” http://www.brandz.com/output/.

[21] “Facebook.” http://www.facebook.com/.

[22] P. Viola and M. Jones, “Rapid Object Detection Using A Boosted Cascade of Simple Features,” in Proceedings of the IEEE Computer Society Conference on

Computer Vision and Pattern Recognition, vol. 1, pp. I-511-I-518, 2001.

[23] T. Mita, T. Kaneko, and O. Hori, “Joint Haar-like features for face detection,” vol. 2, pp. 1619-1626, Oct. 2005.

[24] R. Meir and G. Rätsch, “An Introduction to Boosting and Leveraging,” New York, NY, USA: Springer-Verlag New York, Inc., 2003, pp. 118–183.

[25] J. Friedman, T. Hastie, and R. Tibshirani, “Additive Logistic Regression: A Statistical View of Boosting,” Annals of Statistics, vol. 28, 1998.

[26] A. S. Tolba, A.H. El-Baz, and A.A. El-Harby, “Face Recognition: A Literature Review,” International Journal of Signal Processing, vol. 2, no. 2, pp. 88–103, 2005.

[27] T. Zhang, H. Chao, C. Willis, and D. Tretter, “Consumer Image Retrieval by Estimating Relation Tree from Family Photo Collections,” in Proceedings of the


USA, 2010, pp. 143–150.

[28] A. C. Gallagher and Tsuhan Chen, “Estimating Age, Gender, and Identity Using First Name Priors,” in Proceedings of the IEEE Conference on Computer Vision

and Pattern Recognition, 2008, pp. 1-8.

[29] Z. Stone, T. Zickler, and T. Darrell, “Autotagging Facebook: Social Network Context Improves Photo Annotation,” in Computer Vision and Pattern

Recognition Workshop, Los Alamitos, CA, USA, 2008, vol. 0, pp. 1-8.

[30] A. Mccallum and C. Sutton, “An Introduction to Conditional Random Fields for Relational Learning,” Graphical Models, no. x, pp. 93.

[31] Jae Young Choi, W. De Neve, K. N. Plataniotis, and Y. M. Ro, “Collaborative Face Recognition for Improved Face Annotation in Personal Photo Collections Shared on Online Social Networks,” Multimedia, IEEE Transactions on, vol. 13, no. 1, pp. 14-28, Feb. 2011.

[32] R. K. C. Lai, J. C. K. Tang, A. K. Y. Wong, and P. I. S. Lei, “Design and Implementation of An Online Social Network With Face Recognition,” Journal

of Advances in Information Technology, vol. 1, no. 1, Feb. 2010.

[33] Guangda Su, Cuiping Zhang, Rong Ding, and Cheng Du, “MMP-PCA Face Recognition Method,” Electronics Letters, vol. 38, no. 25, pp. 1654- 1656, Dec. 2002.

[34] “Face Detection and Recognition.” http://www.shervinemami.co.cc.

[35] J. Huang, H. Sun, J. Han, H. Deng, Y. Sun, and Y. Liu, “SHRINK: A Structural Clustering Algorithm for Detecting Hierarchical Communities in Networks,” in

Proceedings of the 19th ACM international Conference on Information and Knowledge Management, Toronto, ON, Canada, 2010, pp. 219–228.
