INTERACTIVE REGION-BASED IMAGE RETRIEVAL

Shih-Huan Tseng, Chuech-Yu Li, and Chiou-Ting Hsu
Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan

Abstract

This paper proposes an interactive region-based image retrieval system. Initially, we segment an image into regions by K-means color clustering and region labeling. Several geometrically invariant features, namely the dominant color, color histogram, moment invariants, and co-occurrence texture features, are extracted to index each region. We then describe each image as a combination of the feature vectors of its segmented regions. To measure the distance between images, we define a hierarchical distance function as a linear combination of region features. The retrieved results can be refined via interactive relevance feedback: to learn the "ideal" query regions that users really want, we derive the weighting parameters of the distance measurement with an optimized learning technique. A series of experiments demonstrates the effectiveness and performance of our work.

1. Introduction

Since a great number of images have become available through networks and high-capacity storage, an image retrieval scheme is indispensable for efficient indexing, browsing, and retrieval.

Traditionally, we annotate and query images by keywords. Keyword-based annotation, however, takes considerable time, cost, and effort for voluminous image collections. In addition, owing to the rich content of images and the subjectivity of human perception, different people often give different annotations to the same content, so users cannot retrieve the desired image if the queried keyword was not properly annotated. Hence, instead of indexing and querying by keywords, content-based image retrieval (CBIR) aims to retrieve images according to their own content. Most CBIR techniques represent each image as a combination of low-level features. Once a user submits an example image as a query, the retrieval system automatically ranks and displays the retrieved results in order of similarity.

Much of the previous work [18-22, 32] treats an image as a single entity and describes it by global features. However, for a complex image containing multiple objects, global features cannot capture local information (e.g., shape) or the spatial relationships between objects. Representing an image as a combination of multiple objects is thus more advantageous than processing it as a single entity. Nevertheless, since current segmentation techniques cannot determine meaningful objects in arbitrary images, regions with homogeneous features are usually employed to describe the image objects. Such a region representation also supports partial queries against a specific part of an image.

Relevance feedback [11-13, 21, 23, 26, 27] is an interactive process that uses the users' feedback to reduce the gap between semantic concepts and low-level features. In each iteration of feedback, the users rate the retrieved images according to their preferences.

The retrieval system then updates the distance measurement or the probability structure from the modified ratings, so the retrieved results gradually converge to the desired images or move to other search paths. In [12-13], an optimization-based learning technique is used to dynamically adjust the weights of different features. In [23], the weights of each segmented region are automatically determined from relevant and irrelevant images through a formulated linear system. In [21], PicHunter employs a Bayesian learning technique to predict the target image.

In this paper, we propose an interactive region-based image retrieval system. Figure 1 illustrates the overall flow. Images are initially segmented into regions by K-means color clustering and a region labeling algorithm. Next, we extract low-level features to describe each region, and we measure the distance between the query image and the database images with a hierarchical distance function. At each feedback step, users may identify some of the retrieved images as relevant, and the retrieval system interactively updates the distance measurements and returns the refined results. Experiments show that querying with an image represented as regions performs better than querying with the image as a single entity.

The rest of this paper is organized as follows. Sec. 2 describes our region segmentation algorithm based on color clustering and labeling. Sec. 3 presents the feature extraction and indexing for each segmented region. Sec. 4 discusses our learning algorithm based on the optimized learning technique. The experiments on two query types are compared and discussed in Sec. 5, and Sec. 6 concludes our work.

2. Region Segmentation

Representing an image as a combination of multiple regions is more advantageous than processing it as a single entity. In order to incorporate local features into image retrieval, we have to partition images into regions beforehand. With segmented regions, local features (e.g., shape, color, texture) can be estimated and indexed more easily, and such locally indexed features enable queries about more specific content of an image. Since automatic segmentation of an image into semantically meaningful regions is still one of the most challenging problems, most existing works [15, 17, 23, 24] define regions as areas with homogeneous low-level features. In this work, we define a region as a connected area with color homogeneity. We first partition the color space into several clusters, and then carry out a labeling algorithm to connect pixels within the same color cluster into regions.

2.1. Color Clustering

Color is a widely used visual feature in region segmentation [1][4] and is commonly represented as a point in a three-dimensional color space, such as RGB, HSV, or CIE L*u*v* [5][9]. In the CIE L*u*v* color space, the perceptual distance between colors is approximated well by the Euclidean distance. Hence, we apply the K-means algorithm [1-4] to partition the color components in L*u*v* space.
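For illustration, the preprocessing step might look like the following minimal sketch. The paper does not name an implementation, so the use of scikit-image and the file name are our own assumptions.

```python
import numpy as np
from skimage import io, color

# Convert an RGB image to CIE L*u*v*, where Euclidean distance roughly
# matches perceived color difference. Library and file name are
# illustrative assumptions, not specified by the paper.
rgb = io.imread("query.jpg")
luv = color.rgb2luv(rgb)        # (H, W, 3) array of L*, u*, v* values
pixels = luv.reshape(-1, 3)     # one row per pixel, ready for K-means
```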

Since different images have different color distributions, the number of clusters varies with the complexity of the content. Here, we apply a splitting technique [10] to adjust the number of clusters: in each splitting step, if the average distortion of a cluster is larger than a threshold, we add a perturbation vector to that cluster to construct a new cluster center.

To reduce the computation cost, the L*u*v* space is first quantized into 256 bins (4 × 8 × 8). The color histogram of an image X is represented by h = [h_1, h_2, ..., h_256]^T, where h_i (1 ≤ i ≤ 256) is the number of pixels of X with quantized color component c_i = [L_i, u_i, v_i]^T. Let Z^t = {z_1^t, z_2^t, ..., z_m^t} denote the set of m clusters at the t-th (t ≥ 0) iteration, and let y_i^t be the center of cluster z_i^t. The initial cluster center is set to the histogram-weighted mean color, y_1^0 = (\sum_{i=1}^{256} h_i c_i) / (\sum_{i=1}^{256} h_i). At each step, we calculate the average distortion σ_i^t of each cluster z_i^t and the total average distortion σ_mean^t over all clusters:

\sigma_i^t = \frac{1}{|z_i^t|} \sum_{c_j \in z_i^t} \lVert c_j - y_i^t \rVert^2 ,  (1)

\sigma_{mean}^t = \frac{\sum_{i=1}^{m} \sum_{c_j \in z_i^t} \lVert c_j - y_i^t \rVert^2}{\sum_{i=1}^{m} |z_i^t|} .  (2)

If the stopping condition is not satisfied, we apply the splitting technique to increase the number of clusters: for each cluster z_i^t with σ_i^t > σ_mean^t, we construct a new cluster center y' = y_i^t + [σ_L, σ_u, σ_v]^T, where σ_L, σ_u, and σ_v are the standard deviations of z_i^t in the L*, u*, and v* components, respectively. We then iteratively update the clustering result and the cluster centers by the nearest-neighbor rule. The splitting and clustering processes stop when the total average distortion σ_mean^t comes close enough to σ_mean^{t-1}.
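The clustering loop of Eqs. (1)-(2) can be sketched as below. This is a minimal interpretation of the splitting scheme, not the authors' code; the function name, relative tolerance, and iteration cap are our own assumptions.

```python
import numpy as np

def split_kmeans(points, rel_tol=1e-3, max_iters=20):
    """K-means with cluster splitting: a cluster whose average distortion
    (Eq. 1) exceeds the total average distortion (Eq. 2) is split by a
    per-component standard-deviation perturbation, then all clusters are
    refined by the nearest-neighbor rule until the distortion levels off."""
    centers = points.mean(axis=0, keepdims=True)      # single initial center
    labels = np.zeros(len(points), dtype=int)
    prev_total = np.inf
    for _ in range(max_iters):
        # nearest-neighbor assignment of every point
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        total = d2[np.arange(len(points)), labels].mean()   # Eq. (2)
        if prev_total - total < rel_tol * prev_total:
            break                                     # distortion has converged
        prev_total = total
        # recenter each cluster and split the over-distorted ones
        new_centers = []
        for i in range(len(centers)):
            members = points[labels == i]
            if len(members) == 0:
                continue                              # drop empty clusters
            new_centers.append(members.mean(axis=0))
            if d2[labels == i, i].mean() > total and len(members) > 1:
                # perturbation vector [sigma_L, sigma_u, sigma_v]
                new_centers.append(members.mean(axis=0) + members.std(axis=0))
        centers = np.array(new_centers)
    return centers, labels
```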

However, since the total average distortion decreases as the number of clusters increases, it is difficult to define a threshold that yields an optimal number of clusters. Thus, we further include a cluster-validity method [3] to refine the above procedure. We calculate the cluster separation measure ρ(n) as

\rho(n) = \frac{1}{n} \sum_{i=1}^{n} \max_{1 \le j \le n,\, j \ne i} \frac{\sigma_i + \sigma_j}{\mu_{ij}} , \quad n \ge 2 ,  (3)

where σ_i = \frac{1}{|z_i^t|} \sum_{c_k \in z_i^t} \lVert c_k - y_i^t \rVert, μ_ij = \lVert y_i^t - y_j^t \rVert, and n is the number of clusters. The smallest ρ(n) indicates the best separation, and the corresponding n is taken as the optimal number of clusters. For example, in Fig. 2, the optimal segmentation occurs at the first valley of the ρ(n) curve, and Fig. 2(b) shows the segmentation result.

2.2. Region Labeling

Since color clustering ignores the spatial information of the partitioned colors, we use the labeling algorithm [6] with eight-connectivity to build connected regions. Instead of applying region merging to process fragmented regions, we simply drop small regions whose area is less than 1% of the whole image. We observed that such small regions are usually insignificant, and reducing the number of regions is preferable for the indexing step.
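A sketch of the labeling step is shown below, using SciPy's connected-component labeling as a stand-in for the algorithm of [6]; the 1% area threshold follows the text, while the function name is ours.

```python
import numpy as np
from scipy import ndimage

def label_regions(cluster_map, min_frac=0.01):
    """Connect pixels of the same color cluster into regions using
    eight-connectivity, then drop regions covering less than 1% of
    the image. Returns a region-id map; 0 marks dropped pixels."""
    eight = np.ones((3, 3), dtype=int)     # 8-connected structuring element
    regions = np.zeros(cluster_map.shape, dtype=int)
    next_id = 1
    for c in np.unique(cluster_map):
        labeled, n = ndimage.label(cluster_map == c, structure=eight)
        for i in range(1, n + 1):
            mask = labeled == i
            if mask.sum() >= min_frac * cluster_map.size:
                regions[mask] = next_id
                next_id += 1
    return regions
```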

3. Feature Extraction and Indexing

To index each segmented region, we must find several representative features to describe it. Many visual features have been widely used in image retrieval, such as color, texture, and shape.

3.1. Color

Because color is insensitive to background complication and independent of image size and orientation, it is one of the most widely used visual features in image retrieval. Several color representations have been applied in image retrieval, such as color histograms [5], color moments [14], and color sets [5]. Here, we extract the dominant color and the color histogram to index a region. The dominant color is the mean color of a region, represented by its three components in L*u*v* space. The color histogram counts the pixels of a region into appropriate histogram bins; this description is invariant to translation, rotation about the image axis, small off-axis rotations, scale changes, and partial occlusion [6]. As in Sec. 2, we quantize the L*u*v* color space into 256 histogram bins (4 × 8 × 8).

3.2. Shape

In this work, we use moment invariants to represent the shape feature of each region. The moment invariants φ_1 ~ φ_7 [5, 7, 8] are invariant to translation, rotation, and scaling of shapes. Because the dynamic range of φ_1 ~ φ_7 is very large, it is more convenient to work with sgn(φ_i) log|φ_i| rather than φ_i, where sgn(φ_i) retains the sign of φ_i.
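For example, with OpenCV (our choice of library, not the paper's), the seven invariants and the sign-preserving log mapping could be computed as follows; the small epsilon guarding log(0) is our own addition.

```python
import cv2
import numpy as np

def shape_features(region_mask):
    """Hu's seven moment invariants of a binary region mask, mapped to
    sgn(phi) * log|phi| to compress their large dynamic range."""
    m = cv2.moments(region_mask.astype(np.uint8), binaryImage=True)
    phi = cv2.HuMoments(m).flatten()                    # phi_1 ... phi_7
    return np.sign(phi) * np.log(np.abs(phi) + 1e-30)   # epsilon avoids log(0)
```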

3.3. Texture

Texture carries important information about the structural arrangement of surfaces and their relationship to the surrounding environment. The co-occurrence matrix [6] is a statistical texture description that measures the repeated occurrence of certain configurations within a region. However, the co-occurrence matrix itself is rarely used for similarity comparison [6]; instead, several numeric features computed from it represent the texture in a more compact form. We therefore compute five standard features [6], namely energy, entropy, contrast, homogeneity, and correlation, from a normalized co-occurrence matrix to index the texture of each region.
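As a sketch, scikit-image's gray-level co-occurrence utilities can produce these five statistics. The single-distance, single-angle configuration is an assumption, since the paper does not specify one, and entropy is computed directly because graycoprops does not provide it.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray_region):
    """Five statistics from a normalized co-occurrence matrix.
    `gray_region` must be a uint8 array (levels=256)."""
    glcm = graycomatrix(gray_region, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([
        graycoprops(glcm, "ASM")[0, 0],          # energy (angular second moment)
        entropy,
        graycoprops(glcm, "contrast")[0, 0],
        graycoprops(glcm, "homogeneity")[0, 0],
        graycoprops(glcm, "correlation")[0, 0],
    ])
```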

4. Retrieval with Relevance Feedback

As described above, we represent an image as a combination of regions and describe each region by its low-level features. We now aim to find the user's desired image via relevance feedback with an optimization-based learning process [12-13]. We start with our notation. An image with N_X regions is denoted by a set X = {r_1, r_2, ..., r_{N_X}}, where each region vector r_i = [f_i1, f_i2, f_i3, f_i4]^T consists of four feature vectors: the dominant color, color histogram, moment invariants, and texture features, respectively. Each feature vector f_ij = [f_ij1, f_ij2, ..., f_ijn_fj]^T contains n_fj components, where n_fj is 3, 256, 7, and 5 for j = 1~4.

4.1. Feature Distance Measurement

As described in Sec. 3, the dominant color of a region is represented by the feature vector f_1, which consists of three components in L*u*v* space; the shape feature vector f_3 is made up of the seven moment invariants; and the texture feature vector f_4 comprises the five numeric features derived from the co-occurrence matrix. We define the distance measurement for these three feature vectors as a generalized Euclidean distance [12, 6, 16, 25]:

d(f_i, q_i) = (f_i - q_i)^T W (f_i - q_i) , \quad i \in \{1, 3, 4\} ,  (4)

where q_i denotes the corresponding feature vector of a query region q = [q_1, q_2, q_3, q_4]^T. Although the distance between two color histograms can also be measured by a Euclidean distance, such measurements usually perform poorly [5]. Thus, following the idea of histogram intersection, we measure the distance between two color histograms as the reciprocal of their intersection:

d(f_2, q_2) = \frac{\sum_{k=1}^{n} f_{2,k}}{\sum_{k=1}^{n} \min(f_{2,k}, q_{2,k})} ,  (5)

where k indexes the histogram bins. If the denominator equals zero, d(f_2, q_2) is set to the maximum value available on the implementation platform.
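The two measurements of Eqs. (4)-(5) translate directly into code. In this sketch the fallback constant stands in for the platform-dependent maximum mentioned above.

```python
import numpy as np

def generalized_euclidean(f, q, W):
    """Eq. (4): d(f, q) = (f - q)^T W (f - q)."""
    diff = f - q
    return diff @ W @ diff

def histogram_distance(f2, q2, d_max=1e12):
    """Eq. (5): reciprocal of the histogram intersection."""
    inter = np.minimum(f2, q2).sum()
    return f2.sum() / inter if inter > 0 else d_max
```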

4.2. Relevance Feedback by a Single Region or an Image

Having defined the distance measurement for each feature vector, we define the overall distance between region vectors as a linear combination over the feature vectors [13]. In the following, the symmetric matrix W_j (j = 1, 3, 4) weights the individual entries within feature vector f_j, and u_j (j = 1~4) weights feature vector f_j as a whole. The distance between a region r = [f_1, f_2, f_3, f_4]^T and the ideal query region q = [q_1, q_2, q_3, q_4]^T is then

d(r, q) = \sum_{j=1}^{4} u_j d(f_j, q_j) = u_2 d(f_2, q_2) + \sum_{j \in \{1,3,4\}} u_j (f_j - q_j)^T W_j (f_j - q_j) .  (6)

Let N be the number of relevant images or regions. The above distance function leads to the optimization problem

\min \sum_{i=1}^{N} d(r_i, q) \quad \text{subject to} \quad \sum_{j=1}^{4} \frac{1}{u_j} = 1 \ \text{and} \ \det(W_j) = 1 \ \text{for } j = 1, 3, 4 .  (7)

Note that without the constraints on u_j and W_j, this problem degenerates to the all-zero solution. To solve the minimization, we use Lagrange multipliers to reduce the constrained problem to an unconstrained one. The optimal solution for feature vector q_j of the ideal query region q is simply the average of the feature vectors of the relevant regions [12]:

q_j = \frac{1}{N} \sum_{i=1}^{N} f_{ij} , \quad j = 1 \sim 4 .  (8)

The optimal solution for W_j is derived as [12]

W_j = (\det(C_j))^{1/K_j} \, C_j^{-1} ,  (9)

where C_j = [c_{rs}]_{K_j \times K_j} is the covariance matrix of f_j over the relevant regions, with

c_{rs} = \sum_{i=1}^{N} (f_{ijr} - q_{jr})(f_{ijs} - q_{js}) .  (10)

Finally, the optimal solution for u_j is [13]

u_j = \sum_{l=1}^{4} \sqrt{\frac{d_l}{d_j}} ,  (11)

where d_j = \sum_{i=1}^{N} d(f_{ij}, q_j).
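A minimal sketch of these update rules follows. The ridge term added to the covariance is our own safeguard for the few-sample case (C_j can be singular when N is small) and is not part of the paper's derivation; the overall scale of np.cov cancels in Eq. (9).

```python
import numpy as np

def update_query_and_weights(F_list):
    """Single-region relevance-feedback update. F_list[j] is an (N, K_j)
    matrix stacking feature j of the N relevant regions, in the order
    dominant color, histogram, moments, texture. Returns the ideal query
    vectors (Eq. 8), the matrices W_j (Eq. 9), and the weights u (Eq. 11)."""
    q, W, d = [], [], []
    for j, F in enumerate(F_list):
        q_j = F.mean(axis=0)                          # Eq. (8)
        q.append(q_j)
        diffs = F - q_j
        if j == 1:                                    # histogram: Eq. (5), no W_j
            W.append(None)
            inter = np.maximum(np.minimum(F, q_j).sum(axis=1), 1e-12)
            d.append((F.sum(axis=1) / inter).sum())
        else:
            K = F.shape[1]
            C = np.cov(F, rowvar=False) + 1e-6 * np.eye(K)          # ridge for small N
            W_j = np.linalg.det(C) ** (1.0 / K) * np.linalg.inv(C)  # Eq. (9)
            W.append(W_j)
            d.append(np.einsum("ik,kl,il->", diffs, W_j, diffs))    # total Eq. (4)
    d = np.asarray(d)
    u = np.array([np.sqrt(d / dj).sum() for dj in d])  # Eq. (11)
    return q, W, u
```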

This solution indicates that if the total distance d_j for feature j is small, the feature should be assigned a higher weight.

4.3. Relevance Feedback by Multiple Regions

To measure the overall distance between an image X = {r_1, r_2, ..., r_{N_X}} with N_X regions and the query image Q = {r_1^q, r_2^q, ..., r_{N_Q}^q} with N_Q regions, we define the distance function

d(X, Q) = \sum_{a=1}^{N_Q} w_a \min_{b \in [1, N_X]} d(r_b, r_a^q) ,  (12)

where r_b = [f_b1, f_b2, f_b3, f_b4]^T, r_a^q = [q_a1, q_a2, q_a3, q_a4]^T, and

d(r_b, r_a^q) = \sum_{j=1}^{4} u_{aj} d(f_{bj}, q_{aj}) = u_{a2} d(f_{b2}, q_{a2}) + \sum_{j \in \{1,3,4\}} u_{aj} (f_{bj} - q_{aj})^T W_{aj} (f_{bj} - q_{aj}) .  (13)

Let N be the number of relevant images. The optimization problem is formulated as

\min \sum_{i=1}^{N} d(X_i, Q) = \sum_{i=1}^{N} \sum_{a=1}^{N_Q} w_a \min_{b \in [1, N_X]} d(r_{ib}, r_a^q) ,  (14)

subject to the constraints \sum_{a=1}^{N_Q} 1/w_a = 1, \sum_{j=1}^{4} 1/u_{aj} = 1, and \det(W_{aj}) = 1 for j = 1, 3, 4.

Since the regions within an image do not overlap, we model the overall distance as a linear combination of region distances. For a specific region a of the query image Q, the optimal solutions for q_a, W_aj, and u_aj are the same as in Sec. 4.2. However, because q_a, W_aj, and u_aj must be estimated for every query region, the time complexity is much higher than in the single-region case. The optimal solution for w_a (a = 1, 2, ..., N_Q) is derived as

w_a = \sum_{l=1}^{N_Q} \sqrt{\frac{D_l}{D_a}} ,  (15)

where D_a = \sum_{i=1}^{N} \min_{b \in [1, N_X]} d(r_{ib}, r_a^q).
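Eqs. (12) and (15) reduce to a few lines of code. Here region_dist stands for the per-region distance of Eq. (13) and is passed in; the function names are illustrative.

```python
import numpy as np

def image_distance(X_regions, Q_regions, w, region_dist):
    """Eq. (12): each query region is matched to its closest region of X,
    and the best matches are combined with the per-region weights w_a."""
    return sum(w[a] * min(region_dist(rb, rq) for rb in X_regions)
               for a, rq in enumerate(Q_regions))

def region_weights(D):
    """Eq. (15): w_a = sum_l sqrt(D_l / D_a), where D_a accumulates the
    best-match distances of query region a over the N relevant images."""
    D = np.asarray(D, dtype=float)
    return np.array([np.sqrt(D / Da).sum() for Da in D])
```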

5. Experimental Results and Analysis

The following experiments use the Corel data set as test data. We select 2000 images, including sunsets, skies and mountains, animals, fruits, foods, objects, etc., from the Corel collection; each category contains at least 100 images of size 192 × 128 or 128 × 192 pixels. Many existing systems also use the Corel data set to evaluate their performance [13][18], and some [19] use pre-selected categories. In our work, we do not classify the data set; we simply mix all categories into one test database.

We use precision and recall to measure retrieval performance. Precision P is the number of retrieved relevant images over the total number of retrieved images, while recall R is the number of retrieved relevant images over the total number of relevant images. Which retrieved results count as "relevant" depends strongly on the user's subjectivity. Fig. 3 shows three test images from the Food Objects, Sunset and Evening Skies, and Museum Duck Decoys directories; we choose 50, 150, and 100 images from these three directories as the respective relevant sets. The experiments are performed on two query types: query by an image and query by multiple regions. The former treats the whole query image as a single region and adjusts the weights with the technique of Sec. 4.2, while the latter represents the image as a combination of multiple regions and adjusts the weights with the technique of Sec. 4.3.
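In code, the two measures defined above reduce to a set intersection; the identifiers here are illustrative.

```python
def precision_recall(retrieved_ids, relevant_ids):
    """P = hits / |retrieved|, R = hits / |relevant|."""
    hits = len(set(retrieved_ids) & set(relevant_ids))
    return hits / len(retrieved_ids), hits / len(relevant_ids)
```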

5.1. Query by an Image

Fig. 4 shows the retrieved results. The retrieved images in Fig. 4(a) contain both relevant and irrelevant ones: although the irrelevant images differ from the query subjectively, their low-level features are quite similar to those of the query. After we select some relevant images to update the weighting matrix, the retrieved images of the first feedback are shown in Fig. 4(b). The feature weights, indicated by the positions of the slider bars, have changed, and the system re-ranks the retrieved images according to the newly weighted distance measurement. If we continue to select relevant images from Fig. 4(b), the results of the second feedback are shown in Fig. 4(c); the feature weights and the ranks of the retrieved images change again.

The precision-recall curves of the three test images of Fig. 3 are shown in Fig. 5, where "no RF" indicates retrieval without relevance feedback, and "1st RF" and "2nd RF" indicate retrieval with the first and second feedback. Fig. 5 shows that relevance feedback did improve the performance. However, once the recall rate exceeds 0.4, the performance degrades quickly: when the relevant images are few, a single dissimilar image with similar low-level features that is ranked highly makes the precision drop quickly at high recall. In addition, the second feedback does not always perform better than the first.

We observe that users usually select the obviously relevant images at the first feedback but may find it difficult to select more relevant images at the second; the learning of the new query region and feature weights thus depends heavily on the users' selections.

5.2. Query by Multiple Regions

Fig. 6 shows the results of query by multiple regions. The result in Fig. 6(a) is better than that in Fig. 4(a) because the local features of regions are now combined into the distance measurement. Figs. 6(b) and (c) show the results after the first and second feedback. The overall performance in Fig. 7 is better than that in Fig. 5 because this query type takes similar regions into consideration. Fig. 7(a) performs best because its relevant images contain obvious objects with noticeable shape features. The relevant images of Fig. 7(b) are natural images without similar local features: for example, the sun may be partially covered by clouds, the sea, or other objects, and the color of the sunset images ranges over red, yellow, and orange. Overall, we observe that when the regions of an image are not distinct and well segmented, the extracted features carry poor local information, and the learning process cannot derive a good query region and feature weights from such ill-segmented regions.

6. Conclusion

We have developed an interactive region-based image retrieval system with relevance feedback.

We perform color clustering with the K-means technique, followed by a labeling algorithm, to segment an image into regions. To describe each region, we extract the dominant color and color histogram as color features, moment invariants as the shape feature, and co-occurrence statistics as the texture feature. Combining these features into a hierarchical feature vector, we apply the optimized learning technique [12][13] to derive an optimized distance function via relevance feedback. Our experiments show that query by multiple regions outperforms query by an image, because local features are more easily extracted from segmented regions. Nevertheless, the performance of query by multiple regions depends strongly on the segmentation results, since good shape features can only be extracted from well-segmented regions.

References

[1] I. Kompatsiaris and M.G. Strintzis, "Spatiotemporal segmentation and tracking of objects for visualization of videoconference image sequences," IEEE Trans. CSVT, vol. 10, no. 8, pp. 1388-1402, Dec. 2000.
[2] S.Z. Selim and M.A. Ismail, "K-means-type algorithms: a generalized convergence theorem and characterization of local optimality," IEEE Trans. PAMI, vol. 6, no. 1, pp. 81-87, Jan. 1984.
[3] A. Hanjalic and H. Zhang, "An integrated scheme for automated video abstraction based on unsupervised cluster-validity analysis," IEEE Trans. CSVT, vol. 9, no. 8, pp. 1280-1289, Dec. 1999.
[4] T. Uchiyama and M.A. Arbib, "Color image segmentation using competitive learning," IEEE Trans. PAMI, vol. 16, no. 12, pp. 1197-1206, Dec. 1994.
[5] A. Del Bimbo, Visual Information Retrieval, Morgan Kaufmann, 1999.

[6] L.G. Shapiro and G.C. Stockman, Computer Vision, Prentice Hall, 2001.
[7] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision, 2nd ed., Brooks/Cole, 1999.
[8] R.C. Gonzalez and R.E. Woods, Digital Image Processing, Addison-Wesley, 1993.
[9] Color Space, http://cs.fit.edu/wds/classes/cse5255/cse5255/davis/index.html.
[10] Y. Linde, A. Buzo, and R.M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Communications, vol. COM-28, no. 1, pp. 84-95, Jan. 1980.
[11] Y. Rui and T.S. Huang, "Relevance feedback: a power tool for interactive content-based image retrieval," IEEE Trans. CSVT, vol. 8, no. 5, pp. 644-655, Sep. 1998.
[12] Y. Ishikawa, R. Subramanya, and C. Faloutsos, "MindReader: querying databases through multiple examples," Proc. 24th VLDB Conference, 1998.
[13] Y. Rui and T.S. Huang, "Optimizing learning in image retrieval," Proc. CVPR, Jun. 2000.
[14] Y. Rui, T.S. Huang, and S.F. Chang, "Image retrieval: current techniques, promising directions, and open issues," Journal of Visual Communication and Image Representation, vol. 10, pp. 39-62, Jan. 1999.
[15] C.S. Fuh, S.W. Cho, and K. Essig, "Hierarchical color image region segmentation for content-based image retrieval system," IEEE Trans. IP, vol. 9, no. 1, pp. 156-162, Jan. 2000.
[16] J.S. Payne, L. Hepplewhite, and T.J. Stonham, "Perceptually based metrics for the evaluation of textural image retrieval methods," Proc. IEEE Int. Conf. on Multimedia Computing and Systems, vol. 2, pp. 793-797, 1999.
[17] D. Zhong and S.F. Chang, "An integrated approach for content-based video object segmentation and retrieval," IEEE Trans. CSVT, vol. 9, no. 8, pp. 1259-1268, Dec. 1999.
[18] H.W. Yoo et al., "Visual information retrieval system via content-based approach," Pattern Recognition, vol. 35, no. 3, pp. 749-769, Mar. 2002.
[19] A. Mojsilovic et al., "Matching and retrieval based on the vocabulary and grammar of color patterns," IEEE Trans. IP, vol. 9, no. 1, pp. 38-54, Jan. 2000.

[20] X.S. Zhou and T.S. Huang, "Edge-based structural features for content-based image retrieval," Pattern Recognition Letters, pp. 457-468, 2001.
[21] I.J. Cox et al., "The Bayesian image retrieval system, PicHunter: theory, implementation, and psychophysical experiments," IEEE Trans. IP, vol. 9, no. 1, pp. 20-37, Jan. 2000.
[22] A. Vailaya, M.A.T. Figueiredo, A.K. Jain, and H.J. Zhang, "Image classification for content-based indexing," IEEE Trans. IP, vol. 10, no. 1, pp. 117-129, Jan. 2001.
[23] J.W. Hsieh et al., "Region-based image retrieval," Proc. ICIP, 2000.
[24] H. Grecu and P. Lambert, "Image retrieval by partial queries," Proc. ICIP, 2001.
[25] M.K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Information Theory, vol. 8, no. 2, pp. 179-187, 1962.
[26] T.S. Huang and X.S. Zhou, "Image retrieval with relevance feedback: from heuristic weight adjustment to optimal learning methods," Proc. ICIP, 2001.
[27] J. Yoon and N. Jayant, "Relevance feedback for semantics based image retrieval," Proc. ICIP, 2001.
[28] B.S. Manjunath et al., "Color and texture descriptors," IEEE Trans. CSVT, vol. 11, no. 6, pp. 703-715, Jun. 2001.
[29] M. Bober, "MPEG-7 visual shape descriptors," IEEE Trans. CSVT, vol. 11, no. 6, pp. 716-719, Jun. 2001.
[30] R. Milanese, "A rotation, translation, and scale-invariant approach to content-based image retrieval," Journal of Visual Communication and Image Representation, pp. 186-196, 1999.
[31] Y. Rui, T.S. Huang, and S. Mehrotra, "Content-based image retrieval with relevance feedback in MARS," Proc. ICIP, 1997.
[32] X. Wan and C.-C.J. Kuo, "A new approach to image retrieval with hierarchical color clustering," IEEE Trans. CSVT, vol. 8, no. 5, pp. 628-643, Sep. 1998.

Figure 1. Flowchart of our proposed retrieval system: query images are segmented into regions (color clustering and region labeling), features are extracted (dominant color, color histogram, moment invariants, co-occurrence features) and matched against the database, and relevance feedback refines the results. [Figure not reproduced.]

Figure 2. Experiment on color clustering using the cluster-validity method: (a) original image, (b) segmentation result, (c) cluster separation ρ(n) curve. [Figure not reproduced.]

Figure 3. Test images (a), (b), and (c). [Figure not reproduced.]

Figure 4. Result of query by an image: (a) no relevance feedback, (b) the first feedback, and (c) the second feedback. [Figure not reproduced.]

Figure 5. P-R curves of query by an image for Fig. 3(a), (b), and (c); each panel compares no RF, 1st RF, and 2nd RF. [Figure not reproduced.]

Figure 6. Result of query by multiple regions: (a) no relevance feedback, (b) the first feedback, and (c) the second feedback. [Figure not reproduced.]

Figure 7. P-R curves of query by multiple regions for Fig. 3(a), (b), and (c); each panel compares no RF, 1st RF, and 2nd RF. [Figure not reproduced.]
