
Low-density constructions can achieve the Wyner-Ziv and Gelfand-Pinsker bounds

Emin Martinian
Mitsubishi Electric Research Labs, Cambridge, MA 02139, USA
Email: martinian@merl.com

Martin J. Wainwright
Dept. of Statistics and Dept. of EECS, UC Berkeley, Berkeley, CA 94720
Email: wainwrig@{eecs,stat}.berkeley.edu

Abstract— We describe and analyze sparse graphical code constructions for the problems of source coding with decoder side information (the Wyner-Ziv problem), and channel coding with encoder side information (the Gelfand-Pinsker problem).

Our approach relies on a combination of low-density parity check (LDPC) codes and low-density generator matrix (LDGM) codes, and produces sparse constructions that are simultaneously good as both source and channel codes. In particular, we prove that under maximum likelihood encoding/decoding, there exist low-density codes (i.e., with finite degrees) from our constructions that can saturate both the Wyner-Ziv and Gelfand-Pinsker bounds.

I. INTRODUCTION

Sparse graphical codes, particularly low-density parity check (LDPC) codes, are widely used and well understood in application to channel coding problems [11]. For other communication problems, especially those involving aspects of both channel and source coding, there remain various open questions associated with using low-density code constructions. Two important examples are source coding with side information (the Wyner-Ziv problem), and channel coding with side information (the Gelfand-Pinsker problem).

This paper focuses on the design and analysis of low-density codes—more specifically, constructions based on a combination of LDPC and low-density generator matrix (LDGM) codes—for source and channel coding with side information. It builds on our previous work [8], in which we proved that low-density constructions and ML decoding can saturate the rate-distortion bound for a symmetric Bernoulli source.

Related work: It is well-known that random constructions of nested codes can saturate the Wyner-Ziv and Gelfand-Pinsker bounds [14], [16]. However, an unconstrained random construction leads to a high-density code, which is of limited practical use. One practically viable approach to lossy compression is trellis coded quantization (TCQ) [7].

A number of researchers have exploited TCQ as a quantizer for the Wyner-Ziv and related multiterminal source coding problems [2], [15], as well as for channel coding with side information [5]. A disadvantage of TCQ is that saturating rate-distortion bounds requires that the trellis constraint length be taken to infinity [12]; consequently, the computational complexity of decoding, even using message-passing algorithms, grows exponentially. It is therefore of considerable interest to develop low-density graphical constructions for such problems. Past work by a number of researchers [9], [13], [3], [10] has suggested that LDGM codes, which arise as the duals of LDPC codes, are well-suited to various types of quantization.

EM was supported by Mitsubishi Electric Research Labs and MJW was supported by an Alfred P. Sloan Foundation Fellowship, an Okawa Foundation Research Grant, and NSF Grant DMS-0528488.

Our contributions: In this paper, we describe a sparse graphical construction for generating nested codes that are simultaneously good as both source and channel codes. We build on our previous work [8], in which we analyzed constructions, based on a combination of LDPC and LDGM codes, for the problem of standard lossy compression. Here we prove that there exist variants of these joint LDPC/LDGM constructions with finite degrees that, when encoded/decoded using maximum likelihood, can saturate the Wyner-Ziv and Gelfand-Pinsker bounds. Although ML decoding is not practically viable, the low-density nature of our constructions means that they have low degree and, with high probability (w.h.p.), high girth and expansion, all of which are important for the application of efficient message-passing algorithms.

The remainder of this paper is organized as follows.

Section II provides background on source coding with side information (SCSI, or the Wyner-Ziv problem), and channel coding with side information (CCSI, or the Gelfand-Pinsker problem). Section III introduces our joint LDGM/LDPC construction, and provides a high-level overview of its use for the SCSI and CCSI problems. In Section IV, we prove that our construction produces codes that are simultaneously "good" for both source and channel coding. We conclude with a discussion in Section V.

Notation: Vectors/sequences are denoted in bold (e.g., s), random variables in sans serif font (e.g., s), and random vectors/sequences in bold sans serif (e.g., s). Similarly, matrices are denoted using bold capital letters (e.g., G) and random matrices with bold sans serif capitals (e.g., G). We use I(·; ·), H(·), and D(·||·) to denote mutual information, entropy, and relative entropy (Kullback-Leibler distance), respectively. Finally, we use card{·} to denote the cardinality of a set, ||·||p to denote the p-norm of a vector, Ber(t) to denote a Bernoulli-t distribution, and Hb(t) to denote the entropy of a Ber(t) random variable.

II. BACKGROUND

A. Source and channel coding

We begin with definitions of “good” source and channel codes that are useful for future reference.

Definition 1. (a) A code family is a good D-distortion binary symmetric source code if for any ε > 0, there exists a code with rate R < 1 − Hb(D) + ε that achieves distortion D.

(b) A code family is a good BSC(p)-noise channel code if for any ε > 0 there exists a code with rate R > 1 − Hb(p) − ε with error probability less than ε.

B. Wyner-Ziv problem

Suppose that we wish to compress a symmetric Bernoulli source s ∼ Ber(1/2) so as to be able to reconstruct it with Hamming distortion D. By classical rate-distortion theory [4], the minimum achievable rate is given by R(D) = 1 − Hb(D).

In the Wyner-Ziv extension [14], there is an additional source of side information about s—say in the form y = s ⊕ w, where w ∼ Ber(p) is observation noise—that is available only at the decoder. In this setting, the minimum achievable rate takes the form

RWZ(D, p) = l.c.e.{ Hb(D ∗ p) − Hb(D) , (p, 0) } ,

where l.c.e. denotes the lower convex envelope. Note that in the special case p = 1/2, the side information is useless, so that the Wyner-Ziv rate reduces to the classical rate-distortion function.
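To make the envelope concrete, the following Python sketch (our illustration, not from the paper; the helper names are hypothetical) samples the curve Hb(D ∗ p) − Hb(D) on [0, p], appends the time-sharing point (p, 0), and computes the lower convex envelope numerically with a standard monotone-chain lower hull.

```python
import numpy as np

def hb(t):
    """Binary entropy Hb(t) in bits, with Hb(0) = Hb(1) = 0 (via clipping)."""
    t = np.clip(t, 1e-12, 1 - 1e-12)
    return -t * np.log2(t) - (1 - t) * np.log2(1 - t)

def bconv(a, b):
    """Binary convolution a * b = a(1-b) + (1-a)b."""
    return a * (1 - b) + (1 - a) * b

def wyner_ziv_rate(p, num=2001):
    """Sample the curve Hb(D*p) - Hb(D) on [0, p], append the point (p, 0),
    and return the vertices of the lower convex envelope."""
    d = np.linspace(0.0, p, num)
    pts = np.column_stack([d, hb(bconv(d, p)) - hb(d)])
    pts[-1] = (p, 0.0)  # time-sharing endpoint (p, 0)
    lower = []
    for q in pts:  # points are already sorted by D
        while len(lower) >= 2:
            (x1, y1), (x2, y2) = lower[-2], lower[-1]
            # drop lower[-1] if it lies on or above the chord lower[-2]--q
            if (x2 - x1) * (q[1] - y1) - (q[0] - x1) * (y2 - y1) <= 0:
                lower.pop()
            else:
                break
        lower.append(tuple(q))
    return np.array(lower)  # vertices (D, RWZ(D, p)) of the envelope

if __name__ == "__main__":
    env = wyner_ziv_rate(p=0.25)
    print(env[0])   # at D = 0: rate Hb(0.25) ~ 0.811
    print(env[-1])  # at D = p: rate 0
```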

C. Gelfand-Pinsker problem

Now consider the binary information embedding problem:

the channel has the form y = u ⊕ s ⊕ z, where u is the channel input, s is a host signal (not under control of the encoder), and z ∼ Ber(p) is channel noise. The encoder is free to choose the input vector u ∈ {0, 1}^n, subject to the channel constraint ||u||1 ≤ wn, so as to maximize the rate of information transfer. We write u ≡ u(m), where m is the underlying message to be transmitted. The decoder wishes to recover the embedded message from the corrupted observation y. It can be shown [1] that the capacity in this set-up is given by

RIE(w, p) = u.c.e.{ Hb(w) − Hb(p) , (0, 0) } ,

where u.c.e. denotes the upper convex envelope.

III. GENERALIZED COMPOUND CONSTRUCTION

In this section, we describe a compound construction that produces codes that are simultaneously “good”, in the senses previously defined, as source and channel codes. We then describe how the nested codes generated by this compound construction apply to the SCSI and CCSI problems.

A. Code construction

Consider the compound code construction illustrated in Fig. 1, defined by a factor graph with three layers. The top layer consists of n bits, each attached to an associated parity check. These parity checks connect to m variable nodes in the middle layer, and in turn these middle variable nodes are connected to k = k1 + k2 parity checks in the bottom layer.

Fig. 1. Illustration of the compound LDGM and LDPC code construction. The top section consists of an (n, m) LDGM code with generator matrix G and constant check degrees γt = 4; its rate is R(G) = m/n. The bottom section consists of (m, k1) and (m, k2) LDPC codes with degrees (γv, γc) = (3, 6), described by parity check matrices H1 and H2 and rates R(H1) = 1 − k1/m and R(H2) = 1 − k2/m respectively. The overall rate of the compound construction is Rcom = R(G)R(H), where R(H) = R(H1) + R(H2) − 1.

Random LDGM ensemble: The top two layers define an (n, m) LDGM code. We construct it by connecting each of the n checks at the top to γt variable nodes in the middle layer chosen uniformly at random. We use G ∈ {0, 1}^{m×n} to denote the resulting generator matrix; by construction, each column of G has exactly γt ones, whereas each row (corresponding to a variable node) has an (approximately) Poisson number of ones. An advantage of this regular-Poisson degree ensemble is that the resulting distribution of a random codeword is extremely easy to characterize:

Lemma 1. Let G ∈ {0, 1}^{m×n} be a random generator matrix obtained by randomly placing γt ones per column. Then for any vector w ∈ {0, 1}^m with a fraction v of ones, the distribution of the corresponding codeword wG is Bernoulli(δ(v; γt)), where

δ(v; γt) = (1/2) [1 − (1 − 2v)^{γt}] .   (1)

Random LDPC code: The bottom two layers define a pair of LDPC codes, with parameters (m, k1) and (m, k2); we choose these codes from a standard (γv, γc)-regular LDPC ensemble originally studied by Gallager. Specifically, each of the m variable nodes in the middle layer connects to γv check nodes in the bottom layer. Similarly, each of the k check nodes in the bottom layer connects to γc variable nodes in the middle layer. For convenience, we restrict ourselves to even check degrees γc. Dividing the k check bits into two subsets, of size k1 and k2 with respective parity check matrices H1 and H2, allows for the construction of nested codes, which will be needed for both the Wyner-Ziv and Gelfand-Pinsker problems.
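As a quick sanity check on Lemma 1, the following Python sketch (ours, not from the paper) draws the γt neighbors of each top check uniformly with replacement, for which equation (1) is exact, and compares the empirical fraction of ones in wG against δ(v; γt).

```python
import numpy as np

rng = np.random.default_rng(0)

def delta(v, gamma_t):
    """Equation (1): Bernoulli parameter of a codeword bit when the
    input vector w contains a fraction v of ones."""
    return 0.5 * (1.0 - (1.0 - 2.0 * v) ** gamma_t)

def ldgm_codeword(w, n, gamma_t):
    """Each of the n top checks XORs gamma_t positions of w drawn
    uniformly at random (with replacement)."""
    idx = rng.integers(0, len(w), size=(n, gamma_t))
    return w[idx].sum(axis=1) % 2

m, n, gamma_t, v = 1000, 200000, 4, 0.11
w = np.zeros(m, dtype=int)
w[: int(v * m)] = 1                         # input vector of weight vm
print(ldgm_codeword(w, n, gamma_t).mean())  # empirical fraction of ones
print(delta(v, gamma_t))                    # predicted: ~0.3149
```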

B. Good source and channel codes

The key theoretical properties of this joint LDGM/LDPC construction are summarized in the following results:

Theorem 1 (Good source code). With appropriate finite degrees, there exist (n, m, k) constructions that are good D-distortion source codes for all rates above R(D) = 1 − Hb(D).


Theorem 2 (Good channel code). With appropriate finite degrees, there exist (n, m, k) constructions that are good p-noise channel codes for all rates below capacity C = 1 − Hb(p).

Theorem 1 on source coding was proved in our previous work [8], whereas a proof of Theorem 2 is given in Section IV.

We now describe how these two theorems allow us to establish that our low-density construction achieves the Wyner-Ziv and Gelfand-Pinsker bounds. At a high level, our approach is closely related to standard approaches to SCSI/CCSI coding; the key novelty is that appropriately nested codes can be constructed using low-density architectures.

C. Coding for Wyner-Ziv

We focus only on achieving rates of the form Hb(D ∗ p) − Hb(D), as any remaining rates on the Wyner-Ziv curve can be achieved by time-sharing with the point (p, 0). To do this, we use the compound code in Fig. 1. Specifically, a source s is encoded to H2w, where w is chosen to minimize the distortion ||s − wG||1 subject to the constraint that H1w = 0.

Theorems 1 and 2 show that maximum likelihood decoding of H2w using side information y approaches the Wyner-Ziv bound, in the sense that this construction yields a good D-distortion binary source code, and a nested subcode that is a good D ∗ p-noise channel code. Details follow.

Source coding component: The D-distortion source code component of the construction involves the n variable nodes representing the source bits, the m intermediate variable nodes, and the subset of k1 lower layer check nodes. This subgraph, represented by the generator matrix G and parity check matrix H1 (see Fig. 1), defines a code (on the n variable nodes) with effective rate

R1 := m(1 − k1/m)/n = (m − k1)/n .   (2)

Choosing the middle and lower layer sizes m and k1 such that R1 = 1 − Hb(D) guarantees (from Theorem 1) the existence of finite degrees such that this code is a good D-distortion source code.

Channel coding component: Now suppose that the source s has been quantized, and is represented (up to distortion D) by the compressed sequence x ∈ {0, 1}^m. We transmit the associated sequence H2x ∈ {0, 1}^{k2} of parity bits associated with the code H2; doing so requires rate Rtrans = k2/n. The task of the decoder is as follows: given these k2 parity bits as well as the k1 zero-valued parity bits, the decoder seeks to recover the quantized sequence x on the basis of the observed side information y. Note that from the decoder's perspective, the effective code rate is given by

R2 = (m − k1 − k2)/n .   (3)

Suppose that we choose k2 such that R2 = 1 − Hb(D ∗ p); then Theorem 2 guarantees that the decoder will (w.h.p.) be able to recover a codeword corrupted by (D ∗ p)-Bernoulli noise. Note that the side information can be written as y = ŝ ⊕ e ⊕ v, where e := ŝ ⊕ s is the quantization noise, and v ∼ Ber(p) is the channel noise. If the quantization noise e were i.i.d. Ber(D), then the overall effective noise e ⊕ v would be i.i.d. Ber(D ∗ p). In reality, the quantization noise is not exactly i.i.d. Ber(D), but it can be shown [16] that it can be treated as such for theoretical purposes.

In summary then, the overall transmission rate of this scheme for the Wyner-Ziv problem is given by

Rtrans = (m − k1)/n − (m − k1 − k2)/n = k2/n = Hb(D ∗ p) − Hb(D) .   (4)

Thus, by applying Theorems 1 and 2, we conclude that our low-density scheme saturates the Wyner-Ziv bound.
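The rate bookkeeping in equations (2)-(4) translates directly into layer sizes. The sketch below (hypothetical function name; the LDGM rate R_G = m/n is a free design choice here, and the ε slack and integer rounding of the theorems are glossed over) computes (m, k1, k2) for given (n, D, p).

```python
import math

def hb(t):
    """Binary entropy in bits."""
    return 0.0 if t in (0.0, 1.0) else -t * math.log2(t) - (1 - t) * math.log2(1 - t)

def bconv(a, b):
    """Binary convolution a * b = a(1-b) + (1-a)b."""
    return a * (1 - b) + (1 - a) * b

def wz_layer_sizes(n, D, p, R_G=0.9):
    """Size the layers of Fig. 1 for Wyner-Ziv coding: k1 enforces
    R1 = (m - k1)/n = 1 - Hb(D), and k2 enforces
    R2 = (m - k1 - k2)/n = 1 - Hb(D * p)."""
    m = round(R_G * n)
    k1 = m - round(n * (1 - hb(D)))             # equation (2)
    k2 = round(n * (hb(bconv(D, p)) - hb(D)))   # transmitted bits, equation (4)
    return m, k1, k2

m, k1, k2 = wz_layer_sizes(n=100000, D=0.1, p=0.05)
print(m, k1, k2, k2 / 100000)  # rate ~ Hb(0.14) - Hb(0.1) ~ 0.115
```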

D. Coding for Gelfand-Pinsker

The construction for the Gelfand-Pinsker problem is similar, but with the order of the code nesting reversed. In particular, the Gelfand-Pinsker problem requires a good p-noise channel code, and a nested subcode that is a good w-distortion source code. As before, we focus only on achieving rates of the form Hb(w) − Hb(p). To encode a message m when the host signal s is known at the encoder, the channel input is the quantization error e = wG ⊕ s, where w is chosen to minimize ||s ⊕ wG||1 subject to H1w = 0 and H2w = m. Details follow.

Source coding component: We begin by describing the nested subcode for the source coding component. The idea is to embed a message into the transmitted signal during the quantization process. The first set of k1 lower parity bits remains fixed to zero throughout the scheme. On the other hand, we use the remaining k2 lower parity bits to specify a particular message m ∈ {0, 1}^{k2} that the decoder would like to recover. With the lower parity bits specified in this way, we use the resulting code to quantize a given source sequence s to a compressed version ŝ. If we choose n, m and k such that

R1 = (m − k1 − k2)/n = 1 − Hb(w) ,   (5)

then Theorem 1 guarantees that the resulting code is a good w-distortion source code. Otherwise stated, we are guaranteed that, w.h.p., the error e := ŝ ⊕ s in our quantization has Hamming weight upper bounded by wn. Thus, transmitting the error e ensures that the channel constraint is met.

Channel coding component: At the decoder, the k1 lower parity bits remain set to zero; the remaining k2 parity bits, which represent the message m, are unknown to the decoder. We now choose k1 such that the effective code used by the decoder has rate

R2 = (m − k1)/n = 1 − Hb(p) .   (6)

In addition, the decoder is given a noisy channel observation of the form y = e ⊕ s ⊕ v = ŝ ⊕ v, and its task is to recover ŝ. With the channel coding rate chosen as in equation (6) and channel noise v ∼ Ber(p), Theorem 2 guarantees that the decoder will w.h.p. be able to recover ŝ, and hence the underlying middle-layer sequence w. By design of the quantization procedure, we have the equivalence m = H2w, so that a simple syndrome-forming procedure allows the decoder to recover the hidden message. Thus, by applying Theorems 1 and 2, we conclude that our low-density scheme saturates the Gelfand-Pinsker bound under ML encoding/decoding.
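A matching sketch for the Gelfand-Pinsker scheme (again with hypothetical names and the same rounding caveats) makes the reversed nesting explicit: k1 now fixes the decoder's channel-code rate (6), and the embedded-message rate is k2/n = Hb(w) − Hb(p).

```python
import math

def hb(t):
    """Binary entropy in bits."""
    return 0.0 if t in (0.0, 1.0) else -t * math.log2(t) - (1 - t) * math.log2(1 - t)

def gp_layer_sizes(n, w, p, R_G=0.9):
    """Size the layers of Fig. 1 for Gelfand-Pinsker coding: the subcode
    with the k2 message bits fixed is the quantizer, with rate
    R1 = (m - k1 - k2)/n = 1 - Hb(w) (equation (5)), while the decoder
    sees the larger code of rate R2 = (m - k1)/n = 1 - Hb(p) (equation (6))."""
    m = round(R_G * n)
    k1 = m - round(n * (1 - hb(p)))    # equation (6)
    k2 = round(n * (hb(w) - hb(p)))    # message bits; (5) then holds
    return m, k1, k2

m, k1, k2 = gp_layer_sizes(n=100000, w=0.25, p=0.1)
print(m, k1, k2, k2 / 100000)  # message rate ~ Hb(0.25) - Hb(0.1) ~ 0.34
```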


IV. PROOF OF THEOREM 2

As described in the previous sections, Theorems 1 and 2 allow us to establish that the Wyner-Ziv and Gelfand-Pinsker bounds can be saturated under ML encoding/decoding. The source coding part—namely Theorem 1—was proved in our earlier work [8]. Here we provide a proof of Theorem 2, which ensures that these joint LDGM/LDPC constructions are good channel codes. We consider a joint construction, as illustrated in Fig. 1, consisting of a rate R(G) LDGM top code, and a rate R(H) lower LDPC code. Recall that the overall rate of this compound construction is given by Rcom = R(G)R(H). Note that an LDGM code on its own (i.e., without the lower LDPC code) is a special case of this construction with R(H) = 1. However, a standard LDGM code of this variety is not a good channel code, due to the large number of low-weight codewords. Essentially, the following proof shows that using a non-trivial LDPC lower code (with R(H) < 1) can eliminate these troublesome low-weight codewords.

If the codeword c is transmitted, then the receiver observes y = c ⊕ v, where v is a Ber(p) random vector. Our goal is to bound the probability that maximum likelihood (ML) decoding fails, where the probability is taken over the randomness in both the channel noise and the code construction. To simplify the analysis, we focus on the following sub-optimal (non-ML) decoding procedure:

Definition 2 (Decoding Rule). With threshold d(n) := (p + n^{−1/3}) n, decode to codeword ci if and only if ||ci ⊕ y||1 ≤ d(n) and no other codeword is within d(n) of y.

(The extra factor of n^{−1/3} in the threshold d(n) is of theoretical convenience.) Due to the linearity of the code construction, we may assume without loss of generality that the all-zeros codeword 0^n was transmitted (i.e., c = 0^n). In this case, the channel output is simply y = v, and so our decoding procedure will fail if and only if either (i) ||v||1 > d(n), or (ii) there exists some "middle layer codeword" z ∈ {0, 1}^m satisfying the parity check equation Hz = 0¹ and corresponding to a codeword ci = zG such that ||zG ⊕ v||1 ≤ d(n). Using the following two lemmas, we establish that this procedure has arbitrarily small probability of error, whence ML decoding (which is at least as good) also has arbitrarily small error probability.
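For intuition only, here is a direct (exponential-time) Python rendering of the decoding rule of Definition 2; it is an analysis device usable at toy block lengths, not a practical decoder, and ML decoding can only do better.

```python
import numpy as np

def threshold_decode(y, codewords, p):
    """Definition 2: decode to c_i iff ||c_i xor y||_1 <= d(n) and no
    other codeword lies within d(n) of y; return None on failure.
    `codewords` is a 2-D 0/1 array with one codeword per row."""
    n = len(y)
    d = (p + n ** (-1.0 / 3.0)) * n       # threshold d(n) = (p + n^{-1/3}) n
    dists = (codewords ^ y).sum(axis=1)   # Hamming distances to y
    hits = np.flatnonzero(dists <= d)     # codewords inside the ball
    return codewords[hits[0]] if len(hits) == 1 else None
```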

Lemma 2. The probability of decoding error vanishes asymptotically provided that

R(G)A(v) − D(p || δ(v; γt) ∗ p) < 0 for all v ∈ (0, 1/2] ,   (7)

where A(v) := lim_{m→+∞} Am(v) is the asymptotic log-domain weight enumerator of the LDPC code, with Am(v) being the average log-domain weight enumerator defined as

Am(v) := (1/m) log E card{ z : Hz = 0, ||z||1 = vm } .   (8)

¹To be more precise, for the channel decoding step of the Wyner-Ziv problem, the middle layer codeword must satisfy H1z = 0 and H2z = m, where m is the output of the Wyner-Ziv encoder. For the channel decoding step of the Gelfand-Pinsker problem, the middle layer codeword must only satisfy H1z = 0, since m is unknown until decoding is complete.

Proof. Let N = 2^{nRcom} denote the total number of codewords in the joint LDGM/LDPC code. Then we can upper bound the probability of error using the union bound as follows:

perr ≤ P[ ||v||1 > d(n) ] + Σ_{i=2}^{N} P[ ||zi G ⊕ v||1 ≤ d(n) ] .   (9)

By Bernstein's inequality, the probability of the first error event vanishes for large n. Now focusing on the second sum, let us condition on the event that ||z||1 = ℓ. Then Lemma 1 guarantees that zG has i.i.d. Ber(δ(ℓ/m; γt)) elements, so that the vector zG ⊕ v has i.i.d. Ber(δ(ℓ/m; γt) ∗ p) elements. Applying Sanov's theorem yields the upper bound

P[ ||zG ⊕ v||1 ≤ d(n) | ||z||1 = ℓ ] ≤ 2^{−nD(p || δ(ℓ/m; γt) ∗ p)} .

We can then upper bound the second error term in (9) via

2^{nRcom} Σ_{ℓ=0}^{m} P[ ||z||1 = ℓ ] 2^{−nD(p || δ(ℓ/m; γt) ∗ p)}
  = Σ_{ℓ=0}^{m} 2^{n{Rcom + (m/n)[Am(ℓ/m) − R(H)]} − nD(p || δ(ℓ/m; γt) ∗ p)}
  = Σ_{ℓ=0}^{m} 2^{n{R(G)Am(ℓ/m) − D(p || δ(ℓ/m; γt) ∗ p)}}
  = Σ_{ℓ=0}^{m} 2^{n{R(G)[Am(ℓ/m) − A(ℓ/m) + A(ℓ/m)] − D(p || δ(ℓ/m; γt) ∗ p)}}
  ≤ Σ_{ℓ=0}^{m} 2^{n{R(G)|Am(ℓ/m) − A(ℓ/m)|+ + R(G)A(ℓ/m) − D(p || δ(ℓ/m; γt) ∗ p)}} ,

where we have used Rcom = R(G)R(H) and m = nR(G) in the third line, and the notation |x|+ denotes max(0, x). Finally, we notice that by the definition of the asymptotic weight enumerator A(v), the |Am(v) − A(v)|+ term converges to zero uniformly² for v ∈ [0, 1], leaving only the error exponent (7), which is negative by assumption.

Lemma 3. For any p ∈ (0, 1) and total rate Rcom := R(G)R(H) < 1 − Hb(p), it is possible to choose the code parameters γt, γc and γv such that (7) is satisfied.

Proof. For brevity, let F(v) := R(G)A(v) − D(p || δ(v; γt) ∗ p). It is well-known that a regular LDPC code with rate R(H) = 1 − γv/γc < 1 has linear minimum distance; in particular, there exists a threshold ν = ν(γv, γc) such that A(v) ≤ 0 for all v ∈ [0, ν]. Hence, for v ∈ (0, ν], we have F(v) < 0. Turning now to the interval [ν, 1/2], consider the function

G(v) := Rcom Hb(v) − D(p || δ(v; γt) ∗ p) .

Since A(v) ≤ R(H)Hb(v), we have F(v) ≤ G(v), so that it suffices to upper bound G. Observe that G(1/2) = Rcom − (1 − Hb(p)) < 0. Therefore, it suffices to show that, by appropriate choice of γt, we can ensure that G(v) ≤ G(1/2). Noting that G is infinitely differentiable and taking derivatives (details omitted), it can be shown that G′(1/2) = 0 and G″(1/2) < 0. Hence, a second-order Taylor series expansion yields that G(v) ≤ G(1/2) for all v ∈ (µ, 1/2] for some µ < 1/2. It remains to bound G on the interval [ν, µ]. On this interval, we have G(v) ≤ Rcom Hb(µ) − D(p || δ(ν; γt) ∗ p). By examining (1), we see that choosing γt sufficiently large will ensure that on the interval [ν, µ], the RHS is less than Rcom − (1 − Hb(p)), as required.

²The definition of A(v) implies pointwise convergence of |Am(v) − A(v)|+ for v ∈ [0, 1]. But since the domain is compact, pointwise convergence implies uniform convergence.

Fig. 2. Plots of the different terms in the error exponent (7). The combined curve must remain negative for all ω in order for the error probability to vanish asymptotically. (a) An LDGM (γt = 4) construction without any LDPC lower code: here the weight enumerator A is given by Hb(ω), and it dominates the Kullback-Leibler term for low ω. (b) The same γt = 4 LDGM combined with a (γv, γc) = (3, 6) LDPC lower code: here the LDPC weight enumerator is dominated for all ω by the KL error exponent.

Theorem 2 follows by combining the previous lemmas.

At first glance, Lemma 3 may seem unsatisfying, since it might require a very large top degree γt. Note, however, that this degree does not depend on the block length, hence our claim that good low-density codes can be constructed with finite degree. Of course, for the claim of finite-degree codes to be practically meaningful, the degree required for γt should be reasonably small. To investigate this issue, we plot the error exponent (7) in Figure 2 for rate Rcom = 0.5, LDGM top degree γt = 4, and different choices of lower code rate R(H). Without any lower LDPC code, R(H) = 1 and the effective asymptotic weight enumerator is simply Hb(ω).

Panel (a) shows the behavior in this case: note that the error exponent exceeds zero in a region around v = 0 where the weight enumerator dominates the negative KL term. In contrast, panel (b) shows the case of a (γv, γc) = (3, 6) LDPC code, where we have used the results of Litsyn and Shevelev [6] in plotting the asymptotic weight enumerator.

This code family has a linear minimum distance, so that the log-domain weight enumerator is negative in a region around v = 0, and the error exponent (7) remains negative for all v ∈ [0, 0.5]. Thus, provided that a (3, 6) lower LDPC code is used, a very reasonable top degree of γt = 4 is sufficient.
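The LDGM-only curve of panel (a) is easy to reproduce numerically. The sketch below evaluates the exponent R(G)Hb(v) − D(p || δ(v; γt) ∗ p) for Rcom = R(G) = 0.5 and γt = 4, with p = 0.1 as an assumed noise level (the paper does not state the value used in Fig. 2), and confirms that the exponent is positive near v = 0; panel (b) additionally requires the (3, 6) weight enumerator of Litsyn and Shevelev [6], which is not reproduced here.

```python
import numpy as np

def hb(t):
    """Binary entropy in bits (clipped away from 0 and 1)."""
    t = np.clip(t, 1e-12, 1 - 1e-12)
    return -t * np.log2(t) - (1 - t) * np.log2(1 - t)

def kl(a, b):
    """Binary KL divergence D(a || b) in bits."""
    a = np.clip(a, 1e-12, 1 - 1e-12)
    b = np.clip(b, 1e-12, 1 - 1e-12)
    return a * np.log2(a / b) + (1 - a) * np.log2((1 - a) / (1 - b))

def delta(v, gamma_t):
    return 0.5 * (1 - (1 - 2 * v) ** gamma_t)   # equation (1)

def bconv(a, b):
    return a * (1 - b) + (1 - a) * b            # binary convolution

R_com, gamma_t, p = 0.5, 4, 0.1                 # R(H) = 1, so A(v) = Hb(v)
v = np.linspace(1e-4, 0.5, 5000)
F = R_com * hb(v) - kl(p, bconv(delta(v, gamma_t), p))
print(F.max() > 0)     # True: the exponent is positive near v = 0
print(v[F > 0].max())  # right edge of the bad region near the origin
```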

V. DISCUSSION

We have established that sparse graphical constructions that exploit both LDGM and LDPC codes can saturate fundamental bounds for problems of source coding with side information, and channel coding with side information. Although the present results are based on ML encoding/decoding, the sparsity and graphical structure of our constructions render them suitable candidates for practical message-passing schemes, which remain to be investigated in future work.

REFERENCES

[1] R. J. Barron, B. Chen, and G. W. Wornell. The duality between information embedding and source coding with side information and some applications. IEEE Trans. Info. Theory, 49(5):1159–1180, May 2003.

[2] J. Chou, S. S. Pradhan, and K. Ramchandran. Turbo and trellis-based constructions for source coding with side information. In Data Compression Conference (DCC), 2003.

[3] S. Ciliberti, M. Mézard, and R. Zecchina. Message-passing algorithms for non-linear nodes and data compression. Technical report, November 2005. arXiv:cond-mat/0508723.

[4] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., New York, 1991.

[5] U. Erez and S. ten Brink. A close-to-capacity dirty paper coding scheme. IEEE Trans. Info. Theory, 51(10):3417–3432, October 2005.

[6] S. Litsyn and V. Shevelev. On ensembles of low-density parity-check codes: asymptotic distance distributions. IEEE Trans. Info. Theory, 48(4):887–908, April 2002.

[7] M. W. Marcellin and T. R. Fischer. Trellis coded quantization of memoryless and Gauss-Markov sources. IEEE Trans. Comm., 38(1):82–93, 1990.

[8] E. Martinian and M. J. Wainwright. Low density codes achieve the rate-distortion bound. In Data Compression Conference, March 2006. To appear.

[9] E. Martinian and J. S. Yedidia. Iterative quantization using codes on graphs. In Allerton Conference on Communication, Control, and Computing, Monticello, IL, October 2003.

[10] T. Murayama. Thouless-Anderson-Palmer approach for lossy compression. Physical Review E, 69:035105(1)–035105(4), 2004.

[11] T. J. Richardson and R. L. Urbanke. The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. Info. Theory, 47(2):599–618, February 2001.

[12] A. J. Viterbi and J. K. Omura. Trellis encoding of memoryless discrete-time sources with a fidelity criterion. IEEE Trans. Info. Theory, IT-20(3):325–332, 1974.

[13] M. J. Wainwright and E. Maneva. Lossy source coding via message-passing and decimation over generalized codewords of LDGM codes. In Proc. Int. Symp. Info. Theory, September 2005.

[14] A. D. Wyner and J. Ziv. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Info. Theory, IT-22(1):1–10, January 1976.

[15] Y. Yang, V. Stankovic, Z. Xiong, and W. Zhao. On multiterminal source code design. In Data Compression Conference, 2005.

[16] R. Zamir, S. Shamai (Shitz), and U. Erez. Nested linear/lattice codes for structured multiterminal binning. IEEE Trans. Info. Theory, 48(6):1250–1276, 2002.
