
Hadamard transform based fast codeword search algorithm for high-dimensional VQ encoding

Shu-Chuan Chu^a, Zhe-Ming Lu^b, Jeng-Shyang Pan^c,*

^a Department of Information Management, Cheng Shiu University, Taiwan
^b Department of Automatic Test and Control, Harbin Institute of Technology, China
^c Department of Electronic Engineering, National Kaohsiung University of Applied Science, Kaohsiung 807, Taiwan

Received 29 December 2004; received in revised form 6 June 2006; accepted 9 June 2006

* Corresponding author. Tel.: +886 73814526; fax: +886 73811182. E-mail address: jspan@cc.kuas.edu.tw (J.-S. Pan). doi:10.1016/j.ins.2006.06.010

Abstract

An efficient nearest neighbor codeword search algorithm for vector quantization based on the Hadamard transform is presented in this paper. Four elimination criteria are derived from two important inequalities based on three characteristic values in the Hadamard transform domain. Before the encoding process, the Hadamard transform is performed on all the codewords in the codebook and the transformed codewords are sorted in ascending order of their first elements. During the encoding process, firstly the Hadamard transform is applied to the input vector and its characteristic values are calculated; secondly, the codeword search is initialized with the codeword whose Hadamard-transformed first element is nearest to that of the input vector; and finally the closest codeword is found by an up-and-down search procedure using the four elimination criteria. Experimental results demonstrate that the proposed algorithm is much more efficient than most existing nearest neighbor codeword search algorithms for problems of high dimensionality.

© 2006 Elsevier Inc. All rights reserved.

Keywords: Vector quantization; Image coding; Fast codeword search; Hadamard transform

1. Introduction

Vector quantization (VQ) is a block-based lossy compression technique, which has been successfully used in image compression [5,22], image filtering [6] and speech coding [14]. The main idea of VQ is to exploit the statistical dependency among the vector components to reduce the spatial or temporal redundancy and obtain a high compression ratio. VQ can be defined as a mapping from the k-dimensional Euclidean space R^k into a finite subset C of R^k called the codebook: C = {y_1, y_2, . . . , y_N}, where y_i is a codeword and N is the codebook size.

There are two key problems involved in VQ, i.e., codebook design and codeword search. The task of codebook design is to generate the N most representative codewords from a large training set that consists of M training vectors, where M ≫ N. One well-known codebook design method is the LBG algorithm, or GLA [14].
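Codebook design is not the focus of this paper, but for concreteness the following is a minimal GLA-style sketch in Python/NumPy. It is our own illustration of the generic Lloyd iteration (nearest-neighbor partition, then centroid update), not the exact procedure of [14]; the random initialization and fixed iteration count are assumptions.

```python
import numpy as np

def gla_codebook(training, N, iters=20, seed=0):
    """GLA sketch: alternate nearest-neighbor partitioning and centroid
    updates over M training vectors of dimension k."""
    rng = np.random.default_rng(seed)
    M, k = training.shape
    # Illustrative initialization: N codewords drawn from the training set.
    codebook = training[rng.choice(M, N, replace=False)].copy()
    for _ in range(iters):
        # Partition: assign each training vector to its nearest codeword.
        d2 = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update: replace each codeword by the centroid of its cell.
        for j in range(N):
            cell = training[labels == j]
            if len(cell) > 0:
                codebook[j] = cell.mean(axis=0)
    return codebook
```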


The task of codeword search is to find the best-match codeword from the given codebook for each input vector. That is to say, the nearest codeword y_j = (y_{j1}, y_{j2}, . . . , y_{jk}) in the codebook C is found for each input vector x = (x_1, x_2, . . . , x_k) such that the distortion between this codeword and the input vector is the smallest among all codewords. The most common distortion measure between x and y_i is the squared Euclidean distance:

d(x, y_i) = \sum_{l=1}^{k} (x_l - y_{il})^2    (1)

From the above equation, we can see that each distortion computation requires k multiplications and 2k − 1 additions.

Nomenclature

C: codebook
y_i: spatial codeword
R^k: Euclidean space
N: codebook size
M: number of training vectors
x: spatial input vector
k: vector dimension
X: transformed input vector
Y_i: transformed codeword
S_x: sum of spatial input vector
S_i: sum of spatial codeword
m_x: mean of spatial input vector
m_i: mean of spatial codeword
v_x: deviation of spatial input vector
v_i: deviation of spatial codeword
V_X: deviation of transformed input vector
V_i: deviation of transformed codeword
||X||: norm of transformed input vector
||Y_i||: norm of transformed codeword
X_1: first element of transformed input vector
Y_{i1}: first element of transformed codeword
d_min: current minimum distortion
H_n: the Hadamard matrix
VQ: vector quantization
FS: full search
IFS: improved full search
PDS: partial distortion search
TIE: triangular inequality elimination
ENNS: equal-average nearest neighbor search
EENNS: equal-average equal-variance nearest neighbor search
IENNS: improved equal-average nearest neighbor search
IEENNS: improved equal-average equal-variance nearest neighbor search
EEENNS: equal-average equal-variance equal-norm nearest neighbor search
SVEENNS: sub-vector equal-average equal-variance nearest neighbor search
WTPDS: wavelet transform based PDS
HTPDS: Hadamard transform based PDS
NOS: norm-ordered search
TNOS: transform-domain norm-ordered search


For an exhaustive full search (FS) algorithm, encoding each input vector requires N distortion calculations and N − 1 comparisons. Therefore, it is necessary to perform kN multiplications, (2k − 1)N additions and N − 1 comparisons to encode each input vector. For a VQ system with a large codebook size and high dimension, the computational load during the encoding stage is very high. To reduce the search complexity of the FS algorithm, many fast nearest neighbor codeword search algorithms have been presented. These algorithms can be grouped into three categories: spatial domain inequality based [1–4,7–9,12,16,17,19–21,23–25], pyramid structure based [13,18,23] and transform domain inequality based [10,11,15].
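To make the baseline concrete, a minimal full-search encoder can be sketched as follows; the function name and NumPy usage are ours, and the distortion is the squared Euclidean distance of Eq. (1).

```python
import numpy as np

def full_search(x, codebook):
    """Exhaustive full search (FS): N distortion computations per input
    vector, each costing k multiplications and 2k - 1 additions (Eq. (1))."""
    best_j, d_min = 0, np.inf
    for j, y in enumerate(codebook):
        d = float(((x - y) ** 2).sum())  # d(x, y_j) per Eq. (1)
        if d < d_min:                    # N - 1 comparisons in total
            best_j, d_min = j, d
    return best_j, d_min
```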

The spatial (or temporal) inequality based algorithms eliminate impossible codewords by utilizing inequalities based on characteristic values such as the sum, mean, deviation, and L2 norm of the spatial vector. These inequalities can be mainly classified into five types: triangle inequalities, absolute error inequalities, mean inequalities, variance inequalities, and norm inequalities. The partial distortion search (PDS) algorithm [2] is a simple and efficient codeword search algorithm that allows early termination of the distortion calculation between an input vector and a codeword by introducing a premature exit condition in the searching process. The improved full search algorithm [4] reduces the computation time of the full search method for VQ by nearly 50% by using Winograd's identity. The triangular inequality elimination (TIE) criterion is used in [3,8,9,25] to reject a large number of impossible codewords. However, the TIE criterion requires considerable memory space of size (N − 1)N/2 to store the distance between every pair of codewords. The equal-average nearest neighbor search (ENNS) algorithm [7,23] uses the mean value to reject impossible codewords. This algorithm greatly reduces the computational time compared with the conventional full search algorithm, with only N additional memory locations. An improved algorithm, the equal-average equal-variance nearest neighbor search (EENNS) algorithm [12], uses the variance as well as the mean value to reject more codewords; it reduces the computational time further, with 2N additional memory locations. The improved algorithm [1] termed IEENNS uses the mean and the variance of an input vector like EENNS, but develops a new inequality between these features and the distance. Another improved ENNS method [17], referred to as IENNS, is based on the inequality derived from the improved absolute error inequality (IAEI) criterion [20,21]. In that method, a vector is separated into two sub-vectors: one is composed of the first half of the vector components and the other consists of the remaining components. Two inequalities based on the sums of the two sub-vectors' components are used to reject those codewords that cannot be rejected by ENNS. Ref. [19] presents a so-called sub-vector based equal-average equal-variance nearest neighbor search algorithm (SVEENNS), where a vector is again separated into two sub-vectors in the same way. For each codeword and its two sub-vectors, the sums and variances of the vector components are computed and saved off-line, and the codewords are sorted in ascending order of the sum of their components. In the encoding phase, compared to IEENNS, four extra inequalities are used to reject those codewords that cannot be rejected by IEENNS. Wu and Lin presented a new kick-out condition [26] based on the norms of codewords; the corresponding search method is called the NOS (norm-ordered search) algorithm in this paper. Recently, Lu and Sun [16] have presented the equal-average equal-variance equal-norm nearest neighbor search (EEENNS) algorithm, which uses three significant features of a vector, the mean value, variance, and norm, to reject many impossible codewords and saves a great deal of computational time. Because the variance of a vector can be calculated from its norm and mean, the EEENNS algorithm only needs to compute and store the N mean values and N norms of all codewords off-line.

The pyramid structure based algorithms reject impossible codewords by using inequalities layer by layer. Lee and Chen [13] presented a fast codeword search algorithm based on mean pyramids for image coding in which the vector dimension is 2^n × 2^n. Pan et al. [18] presented a more efficient pyramid structure called the mean–variance pyramid, which can be used to reject a large number of unmatched codewords. Recently, Song and Ra [24] have used the L2-norm pyramid of codewords to reject impossible codewords.

The transform domain based algorithms efficiently perform the PDS algorithm in the wavelet or Hadamard transform domain, i.e., the so-called WTPDS [10] or HTPDS [15] algorithm. Recently, Jiang et al. [11] presented a new Hadamard transform based NOS algorithm. The latter two methods use only one characteristic value of the vector to reject impossible codewords, so they are not very efficient. In this paper, we present a new fast codeword search algorithm based on the Hadamard transform with four elimination criteria, which is very efficient for problems of high dimensionality. We derive four inequalities based on the first element, the deviation, and the norm of the transformed vector, which can reject many more impossible codewords than previous transform domain based methods; the search time is thus dramatically reduced.
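For reference, a textbook in-place fast Walsh–Hadamard transform is sketched below; it is not code from the paper. It uses only additions and subtractions (O(k log k) of them), which is what makes the transform overhead small; we leave the transform unnormalized, a convention we assume here, so the first output element is the component sum.

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform of a vector whose length is a power
    of 2. Only additions and subtractions are needed."""
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b  # butterfly: no multiplications
        h *= 2
    return v  # unnormalized: the first element equals the component sum
```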

This paper is organized as follows. In Section 2, some related nearest neighbor codeword search algorithms are introduced and analysed. In Section 3, some basic definitions are described and properties related to the proposed algorithm are derived. In Section 4, the proposed algorithm is discussed in detail. In Section 5, the simulation results are given. Section 6 concludes the paper.

2. Related existing nearest neighbor codeword search algorithms

Some existing nearest neighbor codeword search algorithms such as PDS, ENNS, IENNS, EENNS, IEENNS, SVEENNS, NOS and EEENNS are reviewed in this section. Assume S_x and m_x are the sum and mean of the components of a vector x, respectively. In the Euclidean space R^k, the central line l is defined as the line on which the coordinates (components) of any point (vector) have the same value. The hyperplane orthogonal to l is called an equal-average hyperplane. The deviation of a k-dimensional vector x = (x_1, x_2, . . . , x_k) is defined as

v_x = \sqrt{\sum_{l=1}^{k} (x_l - m_x)^2} = \sqrt{d(x, L_x)}    (2)

where L_x = (m_x, m_x, . . . , m_x) is a k-dimensional vector, the projection of x on the central line l.

The L2 norm of a k-dimensional vector x = (x_1, x_2, . . . , x_k) is defined as

||x|| = \sqrt{\sum_{l=1}^{k} x_l^2}    (3)

From the above notation, we can easily prove that the deviation, the mean, and the norm of the vector x satisfy the following equation:

v_x^2 = ||x||^2 - k m_x^2    (4)
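Eq. (4) follows by expanding the square in Eq. (2). A quick numerical check (ours, purely illustrative):

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
k = len(x)
m_x = x.mean()                    # mean of the components
v_x2 = ((x - m_x) ** 2).sum()     # squared deviation, Eq. (2)
norm2 = (x ** 2).sum()            # squared L2 norm, Eq. (3)
assert np.isclose(v_x2, norm2 - k * m_x ** 2)  # Eq. (4)
```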

In our paper, the "so far" smallest distortion is denoted as

d_min = \min\{d(x, y_i) \mid y_i \text{ has been inspected}\}    (5)

For convenience, we assume d_min = d(x, y_p); in other words, we assume the so-far best-match codeword is y_p.

2.1. Partial distortion search

The partial distortion search (PDS) algorithm [2] allows early termination of the distortion calculation between an input vector and a codeword by introducing a premature exit condition in the search process. Assume the "so far" smallest distortion is d_min. If an uninspected codeword y_i satisfies the condition

\sum_{l=1}^{q} (x_l - y_{il})^2 \ge d_min    (6)

where 1 ≤ q ≤ k, which guarantees that d(x, y_i) ≥ d_min, then the codeword y_i can be rejected without computing the whole distance d(x, y_i). Although the PDS algorithm is not efficient enough on its own, it can be combined with other fast search algorithms to reject the codewords that they cannot eliminate.
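A minimal sketch of the premature-exit test in Eq. (6) (our illustration, not the authors' implementation):

```python
def pds_search(x, codebook):
    """Partial distortion search (PDS): abandon a codeword as soon as the
    partial sum of squared errors reaches the current minimum (Eq. (6))."""
    best_j = 0
    d_min = float(((x - codebook[0]) ** 2).sum())
    for j in range(1, len(codebook)):
        partial = 0.0
        for l in range(len(x)):
            partial += (x[l] - codebook[j][l]) ** 2
            if partial >= d_min:   # Eq. (6): y_j cannot beat the best so far
                break
        else:
            # No premature exit occurred, so `partial` is the full distortion
            # and it is below d_min: y_j is the new best-match codeword.
            best_j, d_min = j, partial
    return best_j, d_min
```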

2.2. Equal-average nearest neighbor search

The ENNS algorithm [7,23] takes advantage of the fact that the mean of the nearest codeword is usually close to the mean of the input vector. Assume m_x and m_i are the mean values of x and y_i, respectively; then

k (m_x - m_i)^2 \le d(x, y_i)    (7)
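Inequality (7) gives the standard ENNS rejection rule: if k(m_x − m_i)^2 ≥ d_min, the codeword y_i can be rejected without a distortion computation. Below is a hedged sketch of an ENNS-style search: codewords are pre-sorted by mean and the scan moves outward from the codeword whose mean is closest to m_x, abandoning a side once Eq. (7) rejects it. The organization (full scan down, then up) is a simplification of the usual up-and-down alternation; details of the implementation in [7,23] may differ.

```python
import numpy as np

def enns_search(x, codebook, means, order):
    """ENNS-style search. `means` holds codeword means and `order` sorts
    them ascending (both precomputed off-line). Scan outward from the
    codeword whose mean is nearest m_x; Eq. (7) bounds each side."""
    k, m_x = len(x), float(x.mean())
    pos = int(np.clip(np.searchsorted(means[order], m_x), 0, len(order) - 1))
    best_j = order[pos]
    d_min = float(((x - codebook[best_j]) ** 2).sum())
    for step in (-1, +1):                  # scan downward, then upward
        idx = pos + step
        while 0 <= idx < len(order):
            i = order[idx]
            if k * (m_x - means[i]) ** 2 >= d_min:
                break                      # Eq. (7) rejects this whole side
            d = float(((x - codebook[i]) ** 2).sum())
            if d < d_min:
                best_j, d_min = i, d
            idx += step
    return best_j, d_min
```

Here means = codebook.mean(axis=1) and order = np.argsort(means) are computed off-line, matching the N additional memory locations mentioned in Section 1.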


For example, for the Lena image encoded with 512 codewords of dimension 8 × 8, the PSNR value is 28.50 dB for all algorithms. Because the Lena image is in the training set, whereas the Baboon image is a high-detail image outside the training set, the encoding time of the Baboon image is much longer than that of the Lena image; the reason is that a high-detail image has more high-frequency coefficients. From Tables 1 and 2, we can see that the proposed algorithm is superior to all other algorithms for both low-detail and high-detail images, especially in the case of high dimensionality. For the Lena image encoded with the codebook of size 1024, the CPU time of the proposed algorithm IHTEEENNS is only about 1.3% of that of the full search algorithm.

From these two tables, we can also draw the following conclusions. (1) A transform domain based fast codeword search algorithm is, by and large, better than its corresponding spatial domain algorithm. (2) For the case of high dimension 16 × 16 and a large codebook of size 1024, the more characteristic values are used, the shorter the CPU time needed. However, more characteristic values require more extra space and overhead computation, so for the case of low dimension and a small codebook, the results may be slightly worse when more characteristic values are used. (3) Because the Hadamard transform requires no multiplications and has good energy compaction, the algorithms based on the HT are much better than all the other algorithms.

6. Conclusions

This paper presents a fast codeword search algorithm based on four elimination criteria in the Hadamard transform domain. The proposed algorithm makes full use of three transform domain characteristic values, i.e., the first element, the deviation, and the norm of the transformed vector. The algorithm can dramatically reduce the complexity in the case of high-detail and high-dimensional image vector quantization. Experimental results demonstrate that the proposed algorithm is superior to most existing fast codeword search algorithms.

References

[1] S.J. Baek, B.K. Jeon, K.M. Sung, A fast encoding algorithm for vector quantization, IEEE Signal Processing Letters 4 (2) (1997) 325–327.

[2] C.D. Bei, R.M. Gray, An improvement of the minimum distortion encoding algorithm for vector quantization, IEEE Transactions on Communications 33 (10) (1985) 1132–1133.

[3] S.H. Chen, J.S. Pan, Fast search algorithm for VQ-based recognition of isolated word, IEE Proceedings – I 136 (6) (1989) 391–396.

[4] K.L. Chung, W.M. Yan, J.G. Wu, A simple improved full search for vector quantization based on Winograd's identity, IEEE Signal Processing Letters 7 (12) (2000) 342–344.

[5] A. Gersho, R.M. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, Boston, 1992.

[6] A.I. González, M. Graña, J.R. Cabello, A. D'Anjou, F.X. Albizuri, Experimental results of an evolution-based adaptation strategy for VQ image filtering, Information Sciences 133 (3–4) (2001) 249–266.

[7] L. Guan, M. Kamel, Equal-average hyperplane partitioning method for vector quantization of image data, Pattern Recognition Letters 13 (10) (1992) 693–699.

[8] C.M. Huang, Q. Bi, G.S. Stiles, R.W. Harris, Fast full search equivalent encoding algorithms for image compression using vector quantization, IEEE Transactions on Image Processing 1 (3) (1992) 413–416.

[9] S.H. Huang, S.H. Chen, Fast encoding algorithm for VQ-based image coding, Electronics Letters 26 (19) (1990) 1618–1619.

[10] W.J. Hwang, S.S. Jeng, B.Y. Chen, Fast codeword search algorithm using wavelet transform and partial distance search techniques, Electronics Letters 33 (5) (1997) 365–366.

[11] S.D. Jiang, Z.M. Lu, Q. Wang, Fast norm-ordered codeword search algorithms for image vector quantization, Chinese Journal of Electronics 12 (3) (2003) 373–376.

[12] C.H. Lee, L.H. Chen, Fast closest codeword search algorithm for vector quantization, IEE Proceedings – Vision, Image and Signal Processing 141 (3) (1994) 143–148.

[13] C.H. Lee, L.H. Chen, A fast search algorithm for vector quantization using mean pyramids of codewords, IEEE Transactions on Communications 43 (2–4) (1995) 1697–1702.

[14] Y. Linde, A. Buzo, R.M. Gray, An algorithm for vector quantizer design, IEEE Transactions on Communications 28 (1) (1980) 84–95.

[15] Z.M. Lu, J.S. Pan, S.H. Sun, Efficient codeword search algorithm based on Hadamard transform, Electronics Letters 36 (16) (2000) 1364–1365.

[16] Z.M. Lu, S.H. Sun, Equal-average equal-variance equal-norm nearest neighbor search algorithm for vector quantization, IEICE Transactions on Information and Systems E86-D (3) (2003) 660–663.


[17] J.S. Pan, K.C. Huang, A new vector quantization image coding algorithm based on the extension of the bound for Minkowski metric, Pattern Recognition 31 (11) (1998) 1757–1760.

[18] J.S. Pan, Z.M. Lu, S.H. Sun, Fast codeword search algorithm for image coding based on mean–variance pyramids of codewords, Electronics Letters 36 (3) (2000) 210–211.

[19] J.S. Pan, Z.M. Lu, S.H. Sun, An efficient encoding algorithm for vector quantization based on sub-vector technique, IEEE Transactions on Image Processing 12 (3) (2003) 265–270.

[20] J.S. Pan, F.R. McInnes, M.A. Jack, Bound for Minkowski metric or quadratic metric applied to VQ codeword search, IEE Proceedings – Vision Image and Signal Processing 143 (1) (1996) 67–71.

[21] J.S. Pan, F.R. McInnes, M.A. Jack, Fast clustering algorithm for vector quantization, Pattern Recognition 29 (3) (1996) 511–518.

[22] F. Rizzo, J.A. Storer, B. Carpentieri, Overlap and channel errors in adaptive vector quantization for image coding, Information Sciences 171 (1–3) (2005) 125–143.

[23] S.W. Ra, J.K. Kim, Fast mean-distance-ordered partial codebook search algorithm for image vector quantization, IEEE Transactions on Circuits and Systems II 40 (9) (1993) 576–579.

[24] B.C. Song, J.B. Ra, A fast search algorithm for vector quantization using L2-norm pyramid of codewords, IEEE Transactions on Image Processing 11 (1) (2002) 10–15.

[25] E. Vidal, An algorithm for finding nearest neighbours in (approximately) constant average time, Pattern Recognition Letters 4 (1986) 145–157.

[26] K.S. Wu, J.C. Lin, Fast VQ encoding by an efficient kick-out condition, IEEE Transactions on Circuits and Systems for Video Technology 10 (1) (2000) 59–62.
