
Unsupervised Auxiliary Visual Words Discovery for Large-Scale Image Object Retrieval

Yin-Hsi Kuo¹,², Hsuan-Tien Lin¹, Wen-Huang Cheng², Yi-Hsuan Yang¹, and Winston H. Hsu¹

¹National Taiwan University and ²Academia Sinica, Taipei, Taiwan

Abstract

Image object retrieval – locating image occurrences of specific objects in large-scale image collections – is essential for managing the sheer amount of photos. Current solutions, mostly based on the bag-of-words model, suffer from a low recall rate and do not resist noise caused by changes in lighting, viewpoints, and even occlusions. We propose to augment each image with auxiliary visual words (AVWs), semantically relevant to the search targets. The AVWs are automatically discovered by feature propagation and selection in textual and visual image graphs in an unsupervised manner. We investigate various optimization methods for effectiveness and scalability in large-scale image collections. Experimenting on large-scale consumer photos, we find that the proposed method significantly improves the traditional bag-of-words model (111% relatively). Meanwhile, the selection process also notably reduces the number of features (to 1.4%) and can further facilitate indexing in large-scale image object retrieval.

1. Introduction

Image object retrieval – retrieving images (partially) containing the target image object – is one of the key techniques for managing the exponentially growing image/video collections. It is a challenging problem because the target object may cover only a small region of a database image, as shown in Figure 1. Many promising applications, such as annotation by search [17, 18] and geographical information estimation [7], depend on the accuracy and efficiency of image object retrieval.

The bag-of-words (BoW) model is popular and has been shown effective for image object retrieval [14]. The BoW representation quantizes high-dimensional local features into discrete visual words (VWs). However, traditional BoW-like methods fail to address issues related to noisily quantized visual features and the vast variations in viewpoints, lighting conditions, occlusions, etc., commonly observed in large-scale image collections [12, 21]. Thus, they suffer from a low recall rate, as shown in Figure 1(b).



  

 !"#$" 

##"" #% #%$!!#%

 !& 

Figure 1. Comparison of the retrieval performance of the traditional BoW model [14] and the proposed approach. (a) An example of an object-level query image. (b) The retrieval results of a BoW model, which generally suffers from a low recall rate. (c) The results of the proposed system, which obtains more accurate and diverse results. Note that the number below each image is its rank in the retrieval results, and the number in parentheses represents the rank predicted by the BoW model.

Due to varying capture conditions and the large VW vocabulary (e.g., 1 million visual words), the features of the target image objects might be quantized to different VWs (cf. Figure 1(c)). Besides, it is also difficult to obtain these VWs through query expansion (e.g., [1]) or even varying quantization methods (e.g., [12]) because of the large differences in visual appearance between the query and the target objects.

We observe that the visual words in the BoW model are too sparse to cover the whole search targets and that semantically related features are lacking when only these visual features are used, as discussed in Section 3. In this work, we argue for augmenting each image in the image collection with auxiliary visual words (AVWs), that is, additional visual features semantically relevant to the search targets (cf. Figure 1(c)). Targeting large-scale image collections serving different queries, we mine the auxiliary visual words in an unsupervised manner by incorporating both visual and (noisy) textual information. We construct graphs of images by visual and textual information (if available) respectively. We then automatically propagate the semantics and select the informative AVWs across the visual and textual graphs, since these two modalities can boost each other (cf. Figure 3). The two processes are formulated as optimization problems and applied iteratively through the subtopics in the image collections. Meanwhile, we also address scalability by leveraging a distributed computation framework (e.g., MapReduce).

Experiments show that the proposed method greatly improves the recall rate for image object retrieval. Specifically, the unsupervised auxiliary visual words discovery greatly outperforms BoW models (by 111% relatively) and is complementary to conventional pseudo-relevance feedback. Meanwhile, AVW discovery also derives a very compact (i.e., 1.4% of the original features) and informative feature representation, which benefits the indexing structure [14].

The primary contributions of the paper include:

• Observing the problems of the conventional BoW model in large-scale image object retrieval (Section 3).

• Proposing auxiliary visual words discovery through visual and textual clusters in an unsupervised and scalable fashion (Section 4).

• Investigating various optimization methods for efficiency and accuracy in AVW discovery (Section 5).

• Conducting experiments on consumer photos and showing great improvement in recall rate for image object retrieval (Section 7).

2. Related Work

Most image object retrieval systems adopt the scale-invariant feature transform (SIFT) descriptor [8] to capture local information and adopt the bag-of-words (BoW) model [14] to conduct object matching [1, 11]. The SIFT descriptors are quantized into visual words (VWs), such that indexing techniques well developed in the text domain can be directly applied.

The learned VW vocabulary directly affects image object retrieval performance. The traditional BoW model adopts k-means clustering to generate the vocabulary. A few attempts impose extra information for visual word generation, such as visual constraints [13] or textual information [19]. However, such supervised learning usually needs extra (manual) information, which can be prohibitive for large-scale image collections.

Instead of generating a new VW vocabulary, some research works with the original VW vocabulary, e.g., [15], which suggests selecting useful features from neighboring images to enrich the feature description. However, its performance is limited for large-scale problems because of the need to perform spatial verification, which is computationally expensive. Moreover, it only considers neighboring images in the visual graph, which provides very limited semantic information.

Figure 2. Cumulative distribution of the frequency of VW occurrence (images (%) versus visual words (%)) in two image databases, Flickr11K (11,282 images) and Flickr550 (540,321 images), cf. Section 3.1. Half of the VWs occur in less than 0.11% of the database images (0.106% and 0.114%, i.e., 12 and 617 images, respectively). The statistics show that VWs are distributed over the database images very sparsely.

Other selection methods for useful features, such as [6] and [10], are based on different criteria (the number of inliers after spatial verification, and pairwise constraints for each image) and thus suffer from limited scalability and accuracy.

The authors in [9] consider both visual and textual information and adopt unsupervised learning methods. However, they only use global features and adopt a random-walk-like process for post-processing in image retrieval. Similar limitations are observed in [16], where only image similarity scores are propagated between the textual and visual graphs.

Different from the prior works, we use local features for image object retrieval and propagate the VWs directly between the textual and visual graphs. The discovered auxiliary VWs are thus readily effective in retrieving diverse search results, eliminating the need to apply a random walk on the graphs again.

To augment images with their informative features, we propose auxiliary visual words discovery, which can efficiently propagate semantically relevant VWs and select representative visual features by exploiting both textual and visual graphs. The discovered auxiliary visual words demonstrate significant improvement over the BoW baseline and are shown to be orthogonal and complementary to conventional pseudo-relevance feedback. Besides, as the dataset grows, all the processes can be run in parallel (e.g., with MapReduce).

3. Key Observations: Requiring Auxiliary Visual Words

Nowadays, the bag-of-words (BoW) representation [14] is widely used in image object retrieval and has been shown promising in several content-based image retrieval (CBIR) tasks (e.g., [11]). However, most existing systems simply apply the BoW model without carefully considering the sparseness of the VW space, as detailed in Section 3.1.


Another observation (explained in Section 3.2) is that VWs merely describe visual appearances and lack the semantic descriptions needed to retrieve more diverse results (cf. Figure 1(b)). The proposed AVW discovery method is targeted at addressing these issues.

3.1. Sparseness of the Visual Words

For better retrieval accuracy, most systems adopt 1 million VWs for image object retrieval, as suggested in [11]. As mentioned in [2], one observation is the uniqueness of VWs: a visual word usually does not appear more than once in an image. Moreover, our statistics show that the occurrence of VWs across different images is very sparse.

We compute this statistic on two image databases of different sizes, Flickr550 and Flickr11K (cf. Section 6.1), and obtain similar curves, as shown in Figure 2. We find that half of the VWs occur in less than 0.11% of the database images, and most of the VWs (around 96%) occur in less than 0.5% of them (i.e., 57 and 2,702 images, respectively).

That is to say, two images sharing one specific VW seldom contain similar features; in other words, similar images might have only a few common VWs. This phenomenon is the sparseness of the VWs, partly due to quantization errors and noisy features. Therefore, in Section 4, we propose to augment each image with auxiliary visual words.
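As an illustration of this statistic, the following sketch (not from the paper; it assumes a sparse image-by-VW occurrence matrix `bow`) computes, for each visual word, the percentage of database images in which it occurs, from which a cumulative distribution like Figure 2 can be plotted.

```python
# Sketch only: reproducing the sparseness statistic of Section 3.1 under the
# assumption that `bow` is a scipy.sparse CSR matrix of shape (num_images, vocab_size).
import numpy as np
import scipy.sparse as sp

def vw_occurrence_percentage(bow: sp.csr_matrix) -> np.ndarray:
    """For each visual word, the percentage of database images containing it."""
    num_images = bow.shape[0]
    doc_freq = np.asarray((bow > 0).sum(axis=0)).ravel()  # number of images per VW
    return 100.0 * doc_freq / num_images

# Example usage (hypothetical data):
# occ = vw_occurrence_percentage(bow)
# print("Half of the VWs occur in <= %.3f%% of the images" % np.median(occ))
```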

3.2. Lacking Semantics Related Features

Since VWs are merely low-level visual features, it is very difficult to retrieve object images with different viewing angles, lighting conditions, partial occlusions, etc. An example is shown in Figure 3. Using BoW models, the query image (the top-left one) can easily obtain visually similar results (e.g., the bottom-left one) but often fails to retrieve the ones in a different viewing angle (e.g., the right-hand-side image). This problem can be alleviated by benefiting from textual semantics. That is, by using the textual information associated with images, we are able to obtain semantically similar images, as shown in the red dotted rectangle in Figure 3. If those semantically similar images can share (propagate) their VWs with each other, the query image can still obtain similar but more visually and semantically diverse results.

4. Auxiliary Visual Words Discovery

Based on the previous observations, it is necessary to propagate VWs to visually or semantically similar images. Consequently, we propose an offline stage for unsupervised auxiliary visual words discovery. We augment each image with auxiliary visual words (features) by considering semantically related VWs in its textual cluster and representative VWs in its visual cluster. When facing a large-scale dataset, we can deploy all the processes in a parallel way (e.g., MapReduce [3]).

Figure 3. Illustration of the role of semantically related features in image object retrieval. Images in the blue rectangle are visually similar, whereas images in the red dotted rectangle are textually similar (sharing noisy titles and tags such as "Golden Gate Bridge" or "SanFrancisco", or having none at all). The semantic (textual) features are promising for establishing the in-between connection (Section 4) that helps the query image (the top-left one) retrieve the right-hand-side image.

Besides, AVW discovery reduces the number of VWs to be indexed (i.e., better efficiency in time and memory). The AVWs can benefit subsequent image queries and greatly improve the recall rate, as demonstrated in Section 7.1 and Figure 7. For mining AVWs, we first generate image graphs and image clusters in Section 4.1. Based on the image clusters, we then propagate auxiliary VWs in Section 4.2 and select representative VWs in Section 4.3. Finally, we combine the propagation and selection methods in Section 4.4.

4.1. Graph Construction and Image Clustering

The image clustering is based on graph construction. The images are represented by 1M VWs and 90K text tokens expanded by Google snippets from their associated (noisy) tags. We take advantage of the sparsity and use the cosine measure as the similarity measure. The measure is essentially an inner product of two feature vectors, and only the non-zero dimensions affect the similarity value, i.e., we skip the dimensions where either feature has a zero value. We observe that the textual and visual features are sparse for each image and that the correlations between images are sparse as well. We adopt efficient algorithms (e.g., [4]) to construct the large-scale image graph with MapReduce. To cluster images on the image graph, we apply affinity propagation (AP) [5] for graph-based clustering. AP passes and updates messages among nodes on the graph iteratively and locally, associating with the sparse neighbors only. AP's advantages include automatically determining the number of clusters and automatically detecting an exemplar (canonical image) within each cluster.
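For a small in-memory shard, the construction and clustering steps can be sketched as follows; this is an assumption-laden illustration, not the paper's MapReduce implementation, and it uses scikit-learn's AffinityPropagation on a `features` matrix of sparse VW or text histograms.

```python
# Sketch: cosine-similarity graph + affinity propagation clustering for one shard.
# The paper builds the full graph with MapReduce [4]; this in-memory version is an assumption.
import scipy.sparse as sp
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import AffinityPropagation

def cluster_images(features: sp.csr_matrix):
    """features: (num_images x dim) sparse VW or text histograms."""
    sim = cosine_similarity(features)               # inner products of L2-normalized sparse vectors
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(sim)                    # cluster label per image
    exemplars = ap.cluster_centers_indices_         # index of the canonical image of each cluster
    return labels, exemplars
```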

The image clustering results are sampled in Figure 4. Note that if an image is close to the canonical image (center image), it has a higher AP score, indicating that it is more strongly associated with the cluster.


(a) A visual cluster sample. (b) A textual cluster sample.

Figure 4. Sample image clusters (cf. Section 4.1). The visual cluster groups visually similar images in the same cluster, whereas the textual cluster favors semantic similarities. The two clusters facilitate representative VW selection and semantic (auxiliary) VW propagation, respectively.

4.2. Semantic Visual Words Propagation

Seeing the limitations of the BoW model, we propose to augment each image with additional VWs propagated from the visual and textual clusters (Figure 5(a)). Propagating VWs from both the visual and textual domains enriches the visual descriptions of the images and benefits further image object queries. For example, it is promising to derive more semantic VWs by simply exchanging the VWs among the (visually diverse but semantically consistent) images of the same textual cluster (cf. Figure 4(b)).

We actually conduct the propagation on each extended visual cluster, containing the images in a visual cluster and the additional images co-occurring with them in certain textual clusters. The intuition is to balance visual and semantic consistency for further VW propagation and selection (cf. Section 4.3). Figure 5(b) shows two extended visual clusters derived from Figure 5(a). More interestingly, image E is singular in the textual clusters because it has no tags; however, E still belongs to a visual cluster and can still receive AVWs in its associated extended visual cluster. Similarly, if there is a single image in a visual cluster, such as image H, it can also obtain auxiliary VWs (i.e., from images B and F) in the extended visual cluster.

Assume the matrix X ∈ R^{N×D} represents the N image histograms in the extended visual cluster, where each image has D (i.e., 1 million) dimensions, and X_i stands for the VW histogram of image i. Assume M of the N images come from the same visual cluster; for example, N = 8 and M = 4 in the left extended visual cluster in Figure 5(b). The visual propagation is conducted by the propagation matrix P ∈ R^{M×N}, which controls the contributions from different images in the extended visual cluster.¹

¹ Note that here we first measure the images from the same visual cluster only. However, by propagating through each extended visual cluster, we can derive the AVWs for every image.

Figure 5. Illustration of the propagation operation. (a) Visual and textual graphs. (b) Two extended visual clusters derived from the (left) visual and textual clusters. Based on the visual and textual graphs in (a), we can propagate auxiliary VWs among the associated images in the extended visual clusters. (b) shows the two extended visual clusters used as the units for propagation; each extended visual cluster includes the visually similar images and those co-occurring with them in other textual clusters.

P(i, j) weights the features propagated from image j to image i. Multiplying the propagation matrix P with X (i.e., PX) yields a new M × D matrix of VW histograms, the AVWs, for the M images augmented by the N images.

For each extended visual cluster, we seek a better propagation matrix P, given the initial propagation matrix P_0 (i.e., P_0(i, j) = 1 if i and j are semantically related, that is, within the same textual cluster). We propose to formulate the propagation operation as

    f_P = \min_P \; \alpha \, \frac{\|P X\|_F^2}{N_{P1}} + (1 - \alpha) \, \frac{\|P - P_0\|_F^2}{N_{P2}}, \quad (1)

The goal of the first term is to avoid propagating too many VWs (i.e., to propagate conservatively), since PX becomes the new VW histogram matrix after the propagation. The second term keeps the propagation matrix similar to the original one (i.e., similar within the textual cluster). N_{P1} = \|P_0 X\|_F^2 and N_{P2} = \|P_0\|_F^2 are two normalization terms, and α modulates the importance of the first term relative to the second; we investigate its effect in Section 7.2. Note that the propagation process updates the propagation matrix P on each extended visual cluster separately, as shown in Figure 5(b); therefore, the method is scalable to large-scale datasets and easy to parallelize.
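For concreteness, a minimal sketch of the objective in Eq. (1), assuming dense NumPy arrays for one extended visual cluster (X is N × D, while P and P_0 are M × N); this is an illustration rather than the paper's implementation.

```python
# Sketch: evaluate the propagation objective f_P of Eq. (1) for one extended visual cluster.
import numpy as np

def propagation_objective(P, X, P0, alpha):
    NP1 = np.linalg.norm(P0 @ X, "fro") ** 2               # normalization by the initial propagation
    NP2 = np.linalg.norm(P0, "fro") ** 2
    term1 = np.linalg.norm(P @ X, "fro") ** 2 / NP1        # propagate conservatively
    term2 = np.linalg.norm(P - P0, "fro") ** 2 / NP2       # stay close to the textual-cluster prior
    return alpha * term1 + (1.0 - alpha) * term2
```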

4.3. Common Visual Words Selection

Though the propagation operation is important for obtaining different VWs, it may include too many VWs and thus decrease precision. To mitigate this effect and remove irrelevant or noisy VWs, we propose to select the representative VWs in each visual cluster. We observe that images in the same visual cluster are visually similar to each other (cf. Figure 4(a)); therefore, the selection operation retains the representative VWs in each visual cluster.


Figure 6. Illustration of the selection operation. (a) Common VW selection: X_i and X_j are the VW histograms of two images in the cluster, and the selection S assigns a weight to each dimension. (b) Two examples. The VWs should be similar within the same visual cluster; therefore, we select the representative visual features (red rectangle). (b) illustrates the importance (representativeness) of different VWs; we can further remove noisy (less representative) features, such as those appearing on the people or the boat.

As shown in Figure 6(a), X_i (X_j) represents the VW histogram of image i (j), and the selection S indicates the weight on each dimension, so XS indicates the total number of features retained after the selection. The goal of selection is to keep the common VWs within the same visual cluster (cf. Figure 6(b)). That is to say, if S places more emphasis on the common (representative) VWs, XS will be relatively large. The selection operation can then be formulated as

    f_S = \min_S \; \beta \, \frac{\|X S_0 - X S\|_F^2}{N_{S1}} + (1 - \beta) \, \frac{\|S\|_F^2}{N_{S2}}. \quad (2)

The second term reduces the number of selected features in the visual clusters. The selection is expected to be compact but should not incur too much distortion from the original features in the visual clusters; this is regularized by the first term, which measures the difference between the feature counts before (S_0) and after (S) the selection. Note that S_0 is set to all ones, meaning that all dimensions are initially selected. N_{S1} = \|X S_0\|_F^2 and N_{S2} = \|S_0\|_F^2 are the normalization terms, and β balances the first and second terms; its influence is investigated in Section 7.2.
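Analogously, a minimal sketch of the objective in Eq. (2), assuming S and S_0 are length-D weight vectors (S_0 all ones) and X is the N × D histogram matrix of one visual cluster; again an illustration only.

```python
# Sketch: evaluate the selection objective f_S of Eq. (2) for one visual cluster.
import numpy as np

def selection_objective(S, X, S0, beta):
    NS1 = np.linalg.norm(X @ S0) ** 2                       # normalization by the unselected feature counts
    NS2 = np.linalg.norm(S0) ** 2
    term1 = np.linalg.norm(X @ S0 - X @ S) ** 2 / NS1       # limit distortion w.r.t. original features
    term2 = np.linalg.norm(S) ** 2 / NS2                    # keep the selection compact
    return beta * term1 + (1.0 - beta) * term2
```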

4.4. Iteration of Propagation and Selection

The propagation and selection operations described above can be performed iteratively. The propagation operation obtains semantically relevant VWs to improve the recall rate, whereas the selection operation removes visually irrelevant VWs and improves memory usage and efficiency. An empirical combination of the propagation and selection methods is reported in Section 7.1.

5. Optimization

In this section, we study solvers for the two formulations above (Eq. (1) and (2)). Before we start, note that the two formulations are very similar. In particular, letting S̃ = S − S_0, the selection formulation (2) is equivalent to

    \min_{\tilde S} \; \beta \, \frac{\|X \tilde S\|_F^2}{N_{S1}} + (1 - \beta) \, \frac{\|\tilde S + S_0\|_F^2}{N_{S2}}. \quad (3)

Given the similarity between Eq. (1) and (3), we can focus on solving the former and then apply the same technique to the latter.

5.1. Convexity of the Formulations

We start by computing the gradient and the Hessian of Eq. (1) with respect to the propagation matrix P. Consider the M-by-N matrices P and P_0. We first stack the columns of the matrices to form two vectors p = vec(P) and p_0 = vec(P_0), each of length MN. Then, we replace vec(PX) with (X^T ⊗ I_M) p, where I_M is an identity matrix of size M and ⊗ is the Kronecker product. Letting α_1 = α / N_{P1} > 0 and α_2 = (1 − α) / N_{P2} > 0, the objective function of Eq. (1) becomes

    f(p) = \alpha_1 \|(X^T \otimes I_M) p\|_2^2 + \alpha_2 \|p - p_0\|_2^2
         = \alpha_1 p^T (X \otimes I_M)(X^T \otimes I_M) p + \alpha_2 (p - p_0)^T (p - p_0).

Thus, the gradient and the Hessian are

    \nabla_p f(p) = 2 \big( \alpha_1 (X \otimes I_M)(X^T \otimes I_M) p + \alpha_2 (p - p_0) \big), \quad (4)

    \nabla_p^2 f(p) = 2 \big( \alpha_1 (X \otimes I_M)(X^T \otimes I_M) + \alpha_2 I_{MN} \big). \quad (5)

Note that the Hessian (Eq. (5)) is a constant matrix. Its first term is positive semi-definite, and its second term is positive definite because α_2 > 0. Thus, Eq. (1) is strictly convex and enjoys a unique optimal solution.

From the analysis above, we see that Eq. (1) and (2) are strictly convex, unconstrained quadratic programming problems. Thus, any quadratic programming solver can be used to find their optimal solutions. Next, we study two specific solvers: the gradient descent solver, which iteratively updates p and easily scales to large problems, and the analytic one, which obtains the optimal p by solving a linear equation and reveals a connection with the Tikhonov regularization technique in statistics and machine learning.

5.2. Gradient Descent Solver (GD)

The gradient descent solver optimizes Eq. (1) by starting from an arbitrary vector p_start and iteratively updating the vector by

    p_{new} \leftarrow p_{old} - \eta \, \nabla_p f(p_{old}),

where a small η > 0 is called the learning rate. We can then use Eq. (4) to compute the gradient for the updates. Nevertheless, computing (X ⊗ I_M)(X^T ⊗ I_M) may be unnecessarily time- and memory-consuming.


We can re-arrange the matrices and get

    (X \otimes I_M)(X^T \otimes I_M) p = (X \otimes I_M)\,\mathrm{vec}(P X) = \mathrm{vec}(P X X^T).

Then,

    \nabla_p f(p) = 2 \alpha_1 \mathrm{vec}(P X X^T) + 2 \alpha_2 \mathrm{vec}(P - P_0) = \mathrm{vec}\big( 2 \alpha_1 P X X^T + 2 \alpha_2 (P - P_0) \big).

That is, we can update p_old as a matrix P_old with the gradient also represented in its matrix form. Coupling the update scheme with an adaptive learning rate η, we update the propagation matrix by

    P_{new} = P_{old} - 2\eta \big( \alpha_1 P_{old} X X^T + \alpha_2 (P_{old} - P_0) \big). \quad (6)

Note that we simply initialize p_start to vec(P_0).

For the selection formulation (Section 4.3), we can adopt similar steps with two changes. Let β_1 = β / N_{S1} and β_2 = (1 − β) / N_{S2}. First, Eq. (6) is replaced with

    S_{new} = S_{old} - 2\eta \big( -\beta_1 X^T X (S_0 - S_{old}) + \beta_2 S_{old} \big). \quad (7)

Second, the initial point S_start is set to the zero matrix, since the goal of the selection formulation is to select representative visual words (i.e., retain only a few dimensions).

There is one potential caveat of directly using Eq. (7) for the updates: the matrix X^T X can be huge (e.g., 1M × 1M). To speed up the computation, we can keep only the dimensions that occur in the same visual cluster, because the other dimensions would contribute 0 to X^T X.
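A minimal sketch of the matrix-form updates in Eq. (6) and (7); the fixed learning rate, iteration count, and convergence handling are assumptions (the paper uses an adaptive learning rate and a loose convergence criterion).

```python
# Sketch: gradient descent updates of Eq. (6) (propagation) and Eq. (7) (selection).
import numpy as np

def gd_propagation(X, P0, alpha, eta=1e-3, iters=100):
    a1 = alpha / np.linalg.norm(P0 @ X, "fro") ** 2        # alpha_1 = alpha / N_P1
    a2 = (1.0 - alpha) / np.linalg.norm(P0, "fro") ** 2    # alpha_2 = (1 - alpha) / N_P2
    P = P0.copy()                                          # p_start = vec(P0)
    XXt = X @ X.T                                          # N x N, reused across iterations
    for _ in range(iters):
        P = P - 2 * eta * (a1 * P @ XXt + a2 * (P - P0))   # Eq. (6)
    return P

def gd_selection(X, S0, beta, eta=1e-3, iters=100):
    b1 = beta / np.linalg.norm(X @ S0) ** 2                # beta_1 = beta / N_S1
    b2 = (1.0 - beta) / np.linalg.norm(S0) ** 2            # beta_2 = (1 - beta) / N_S2
    S = np.zeros_like(S0)                                  # S_start = 0: keep only a few dimensions
    XtX = X.T @ X                                          # in practice, restrict to dimensions seen in the cluster
    for _ in range(iters):
        S = S - 2 * eta * (-b1 * XtX @ (S0 - S) + b2 * S)  # Eq. (7)
    return S
```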

5.3. Analytic Solver (AS)

Next, we compute the unique optimal solution p^* of Eq. (1) analytically. The optimal solution must satisfy \nabla_p f(p^*) = 0. From Eq. (4),

    \nabla_p f(p) = H p - 2 \alpha_2 p_0,

where H is the constant, positive-definite Hessian matrix. Thus,

    p^* = 2 \alpha_2 H^{-1} p_0.

Similar to the derivation for the gradient descent solver, we can write down the matrix form of the solution, which is

    P^* = \alpha_2 P_0 \big( \alpha_1 X X^T + \alpha_2 I_N \big)^{-1}.

For the selection formulation, a direct solution from the steps above would lead to

    S^* = \beta_1 \big( \beta_1 X^T X + \beta_2 I_D \big)^{-1} X^T X S_0. \quad (8)

Nevertheless, as mentioned in the previous subsection, the X^T X matrix in Eq. (8) can be huge (e.g., 1M × 1M), and computing the inverse of a 1M × 1M matrix is time-consuming. Thus, instead of calculating X^T X directly, we work with X X^T, which is N by N and much smaller (e.g., 100 × 100). The transformation is based on the matrix-inverse identity

    (A + B B^T)^{-1} B = A^{-1} B \big( I + B^T A^{-1} B \big)^{-1}.

Then, we can re-write Eq. (8) as

    S^* = \beta_1 X^T \big( \beta_1 X X^T + \beta_2 I_N \big)^{-1} X S_0. \quad (9)

Note that the analytic solutions of Eq. (1) and (2) take a similar form to the solutions of ridge regression (Tikhonov regularization) in statistics and machine learning. This is no coincidence: generally speaking, we are seeking to obtain some parameters (P and S) from some data (X, P_0, and S_0) while regularizing by the norm of the parameters. The regularization not only ensures the strict convexity of the optimization problem, but also eases the hazard of overfitting with a suitable choice of α and β.
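A minimal sketch of the closed-form solutions above, with the selection solved through the smaller N × N system of Eq. (9); dense NumPy arrays and a direct linear solve are simplifying assumptions.

```python
# Sketch: analytic solutions for propagation (matrix form of p*) and selection (Eq. (9)).
import numpy as np

def analytic_propagation(X, P0, alpha):
    a1 = alpha / np.linalg.norm(P0 @ X, "fro") ** 2
    a2 = (1.0 - alpha) / np.linalg.norm(P0, "fro") ** 2
    N = X.shape[0]
    # P* = a2 * P0 (a1 X X^T + a2 I_N)^{-1}
    A = a1 * X @ X.T + a2 * np.eye(N)
    return a2 * np.linalg.solve(A.T, P0.T).T               # right-multiplication by A^{-1}

def analytic_selection(X, S0, beta):
    b1 = beta / np.linalg.norm(X @ S0) ** 2
    b2 = (1.0 - beta) / np.linalg.norm(S0) ** 2
    N = X.shape[0]
    # S* = b1 X^T (b1 X X^T + b2 I_N)^{-1} X S0  -- avoids inverting the D x D matrix X^T X
    A = b1 * X @ X.T + b2 * np.eye(N)
    return b1 * X.T @ np.linalg.solve(A, X @ S0)
```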

6. Experimental Setup

6.1. Dataset

We use Flickr550 [20] as the main dataset in our experiments. To evaluate the proposed approach, we select 56 query images (1,282 ground-truth images) belonging to the following 7 query categories: Colosseum, Eiffel Tower, Golden Gate Bridge, Tower de Pisa, Starbucks logo, Tower Bridge, and Arc de Triomphe. We also randomly pick 10,000 images from Flickr550 to form a smaller subset called Flickr11K.² Some query examples are shown in Figure 7.

6.2. Performance Metrics

In the experiments, we use average precision, a performance metric commonly used in previous work [11, 20], to evaluate retrieval accuracy. It approximates the area under a non-interpolated precision-recall curve for a query; a higher average precision indicates better retrieval accuracy. Since average precision only reflects the performance of a single image query, we also compute the mean average precision (MAP) over all queries to evaluate overall system performance.
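A minimal sketch of this metric (the exact ranking conventions, e.g., tie handling, are assumptions): non-interpolated average precision over a ranked result list, and MAP as its mean over the queries.

```python
# Sketch: non-interpolated average precision and mean average precision (MAP).
import numpy as np

def average_precision(ranked_ids, relevant_ids):
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant:
            hits += 1
            precisions.append(hits / rank)     # precision at each relevant position
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(ranked_lists, ground_truths):
    return float(np.mean([average_precision(r, g)
                          for r, g in zip(ranked_lists, ground_truths)]))
```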

6.3. Evaluation Protocols

As suggested by previous work [11], our image object retrieval system adopts 1 million visual words as the basic vocabulary. Retrieval is then conducted by comparing (indexing) the AVW features of each database image.

To further improve the recall rate of the retrieval results, we apply the query expansion technique of pseudo-relevance feedback (PRF) [1].

² http://www.cmlab.csie.ntu.edu.tw/%7Ekuonini/Flickr11K


Table 1. The MAP of the AVW results with the best iteration number and PRF on Flickr11K, which contains 22M (SIFT) feature points in total. Note that the MAP of the baseline BoW model [14] is 0.245, and 0.297 after PRF (+21.2%). #F represents the total number of features retained; M is short for million. '%' indicates the relative MAP gain over the BoW baseline.

Propagation → Selection (propagation first):
  Gradient descent solver (GD): MAP 0.375, MAP by PRF 0.516 (+110.6%), #F 0.3M
  Analytic solver (AS):         MAP 0.384, MAP by PRF 0.483 (+97.1%),  #F 5.2M

Selection → Propagation (selection first):
  Gradient descent solver (GD): MAP 0.342, MAP by PRF 0.497 (+102.9%), #F 0.2M
  Analytic solver (AS):         MAP 0.377, MAP by PRF 0.460 (+87.8%),  #F 13.0M

    

Figure 7. More search results obtained with auxiliary VWs. The number below each image represents its retrieval ranking. The results show that the proposed AVW method, though conducted in an unsupervised manner over the image collections, can retrieve more diverse and semantically related results.

PRF expands the image query set by taking the top-ranked results as new query images. This step also helps us understand the impact of the discovered AVWs because, in our system, the ranking of retrieved images depends on the associated auxiliary visual words; they are the key to retrieving more diverse and accurate images, as shown in Figure 7 and Section 7.1. We take the L1 distance as our baseline for the BoW model [14]. The MAP of the baseline is 0.245 with 22M (million) feature points, and the MAP after PRF is 0.297 (+21.2%).
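A minimal sketch of this PRF step as described here; the number of expanded queries k and fusion by summing similarity scores are assumptions rather than settings reported in the paper.

```python
# Sketch: pseudo-relevance feedback by re-querying with the top-ranked results.
import numpy as np

def prf_rerank(query_hist, db_hists, score_fn, k=5):
    """score_fn(query_hist, db_hists) -> similarity score for every database image."""
    base_scores = score_fn(query_hist, db_hists)
    top = np.argsort(-base_scores)[:k]                 # top-ranked results become new queries
    fused = base_scores.astype(float)
    for i in top:
        fused += score_fn(db_hists[i], db_hists)       # fuse the expanded queries' scores
    return np.argsort(-fused)                          # final ranking after feedback
```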

7. Results and Discussions

7.1. The Performance of Auxiliary Visual Words

The overall retrieval accuracy is listed in Table 1. As mentioned in Section 4.4, we can iteratively update the features according to Eq. (1) and (2). The iteration with propagation first (propagation → selection) gives the best results: the first propagation shares all the VWs among related images, and the selection then chooses the common VWs as representative ones. In contrast, if we iterate with selection first (selection → propagation), we may lose some useful VWs after the first selection. Experimental results show that only one or two iterations are needed to achieve better results, because the informative and representative VWs have already been propagated or selected in the early iterations. Besides, the number of features is significantly reduced from 22.2M to 0.3M (only 1.4% retained), which is essential for indexing the features with an inverted file structure [14]; the memory required for indexing is proportional to the number of features.

To obtain a timely solution with the gradient descent solver, we set a loose convergence criterion for both the propagation and selection operations. Therefore, the solutions of the two solvers might differ. Nevertheless, Table 1 shows that the retrieval accuracies of the two solvers are very similar. The learning time for the first propagation is 2720s (GD) and 123s (AS), whereas the first selection needs 1468s and 895s for GD and AS, respectively. Here we fixed α = 0.5 and β = 0.5 to evaluate the learning time.³ The analytic solver gives a direct solution and is much faster than the gradient descent method. Note that the number of features directly affects the running time; therefore, in the remaining iterations, the required time decreases further since the number of features is greatly reduced iteratively, and only a very small portion of the visual features is retained.

Besides, we find that the proposed AVW method is complementary to PRF, since we obtain another significant improvement after conducting PRF on the AVW retrieval results. For example, the MAP of AVW is 0.375, and it reaches 0.516 (+37.6%) after applying PRF. This relative improvement is even higher than that of PRF over the traditional BoW model (i.e., 0.245 to 0.297, +21.2%). More retrieval results by AVW + PRF are illustrated in Figure 7, which shows that the proposed AVW method can even retrieve semantically consistent but visually diverse images. Note that AVW discovery is conducted in an unsupervised manner over the image collections and requires no manual labels.

7.2. Parameter Sensitivity

Finally, we report sensitivity tests on two important parameters: α in the propagation formulation and β in the selection formulation. The results are shown in Figure 8.

³ The learning time is measured in MATLAB on a regular Linux server with an Intel CPU and 16 GB RAM.


Figure 8. Parameter sensitivity for alpha and beta: (a) alpha (α, for propagation in Eq. (1)); (b) beta (β, for selection in Eq. (2)). Each plot shows the MAP, the MAP after PRF, and the number of features (in millions) as the parameter varies from 0 to 1. (a) shows that propagating too many features does not help retrieval accuracy. (b) shows that only some of the features are important (representative) for each image. More details are discussed in Section 7.2. Note that retrieval accuracy can be further improved by iteratively updating the AVWs with the propagation and selection processes.

In the propagation formulation, α decides the number of features to be propagated. Figure 8(a) shows that if we propagate all possible features to each image (i.e., α = 0), we obtain too many irrelevant and noisy features, which does not help retrieval accuracy. Besides, the curve drops quickly for α ≥ 0.8 because few VWs are preserved, and they might not appear in the query images. The figure also shows that setting α to around 0.6 yields better results with fewer features, which is essential for large-scale indexing.

For the selection formulation, β similarly influences the number of dimensions to be retained. For example, β = 0 selects no dimensions for each image, whereas β = 1 retains all the features and gives a result equal to the BoW baseline. Figure 8(b) shows that if we keep only a few dimensions of VWs, the MAP stays close to the BoW baseline with only a slight decrease in retrieval accuracy. Because of the sparseness of the large VW vocabulary, as mentioned in Section 3.1, we only need to keep the important VWs.

8. Conclusions and Future Work

In this work, we show the problems of the current BoW model and the need for semantic visual words to improve the recall rate in image object retrieval. We propose to augment each database image with semantically related auxiliary visual words by propagating and selecting the informative and representative VWs in visual and textual clusters, and we formulate the processes as unsupervised optimization problems. Experimental results show that we can greatly improve the retrieval accuracy compared to the BoW model (111% relatively). In the future, we will further investigate L2-loss L1-norm optimization, which might preserve the sparseness of visual words, and we will study different solvers to maximize retrieval accuracy and efficiency.

References

[1] O. Chum et al. Total recall: Automatic query expansion with a generative feature model for object retrieval. In IEEE ICCV, 2007.

[2] O. Chum et al. Geometric min-hashing: Finding a (thick) needle in a haystack. In IEEE CVPR, 2009.

[3] J. Dean et al. MapReduce: Simplified data processing on large clusters. In OSDI, 2004.

[4] T. Elsayed et al. Pairwise document similarity in large collections with MapReduce. In Proceedings of ACL-08: HLT, Short Papers, pages 265–268, 2008.

[5] B. J. Frey et al. Clustering by passing messages between data points. Science, 2007.

[6] S. Gammeter et al. I know what you did last summer: Object-level auto-annotation of holiday snaps. In IEEE ICCV, 2009.

[7] J. Hays et al. im2gps: estimating geographic information from a single image. In IEEE CVPR, 2008.

[8] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.

[9] H. Ma et al. Bridging the semantic gap between image contents and tags. IEEE TMM, 2010.

[10] P. K. Mallapragada et al. Online visual vocabulary pruning using pairwise constraints. In IEEE CVPR, 2010.

[11] J. Philbin et al. Object retrieval with large vocabularies and fast spatial matching. In IEEE CVPR, 2007.

[12] J. Philbin et al. Lost in quantization: Improving particular object retrieval in large scale image databases. In IEEE CVPR, 2008.

[13] J. Philbin et al. Descriptor learning for efficient retrieval. In ECCV, 2010.

[14] J. Sivic et al. Video Google: A text retrieval approach to object matching in videos. In IEEE ICCV, 2003.

[15] P. Turcot et al. Better matching with fewer features: The selection of useful features in large database recognition problems. In IEEE ICCV Workshop on WS-LAVD, 2009.

[16] X.-J. Wang et al. Multi-model similarity propagation and its application for web image retrieval. In ACM MM, 2004.

[17] X.-J. Wang et al. Annosearch: Image auto-annotation by search. In IEEE CVPR, 2006.

[18] X.-J. Wang et al. Arista - image search to annotation on billions of web photos. In CVPR, 2010.

[19] L. Wu et al. Semantics-preserving bag-of-words models for efficient image annotation. In ACM workshop on LSMMRM, 2009.

[20] Y.-H. Yang et al. Contextseer: Context search and recommendation at query time for shared consumer photos. In ACM MM, 2008.

[21] X. Zhang et al. Efficient indexing for large scale visual search. In IEEE ICCV, 2009.
