

5.4 Gaussian Approximation of Log-Cost-Ratios

In terms of the log-cost-ratios μ_s = μ_s(1) − μ_s(0), μ_{s,E} = μ_{s,E}(1) − μ_{s,E}(0), and μ_{E,s} = μ_{E,s}(1) − μ_{E,s}(0), the final costs of the min-sum algorithm are computed as

μ_s = γ_s + Σ_{E∈Q(s)} μ_{E,s}.   (5.23)

Clearly, if the check-to-site costs μ_{E,s} were truly Gaussian, then so would be the final costs μ_s and the site-to-check costs μ_{s,E}, since they are computed as sums of the former (Equations 5.21 and 5.23). Moreover, if the check-to-site costs are only "close" to Gaussian, then, according to the central limit theorem, the other costs will be even "closer" to Gaussian. The critical point is therefore the distribution of the check-to-site costs, as computed according to (5.22). Figure 5.11 illustrates the distribution of the check-to-site costs for a cycle code with three bits per parity check, indicating that the distribution is approximated relatively well by a Gaussian distribution in this case.
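As a quick numerical illustration (our own sketch, not part of the thesis), the following Python fragment samples the check-to-site output of a degree-3 check, Z = sign(X1)·sign(X2)·min(|X1|, |X2|) — the form (5.22) takes for a cycle code with three bits per parity check — and measures its sample skewness and excess kurtosis (both zero for an exact Gaussian). It then repeats the measurement for a sum of several such outputs, as formed in (5.21) and (5.23). The parameters m = 1 and σ² = 2 are chosen to mirror the upper diagrams of Figure 5.11.

```python
# A minimal numerical sketch (ours, not from the thesis): the degree-3
# check-to-site output Z = sign(X1) sign(X2) min(|X1|, |X2|) is only
# approximately Gaussian, while sums of several independent such
# outputs -- as formed in (5.21) and (5.23) -- are noticeably closer
# to Gaussian.  Skewness and excess kurtosis are both 0 for a Gaussian.
import numpy as np

rng = np.random.default_rng(0)
m, sigma, n = 1.0, np.sqrt(2.0), 1_000_000   # mirrors Figure 5.11 (top)

def check_to_site(size):
    x1 = rng.normal(m, sigma, size)
    x2 = rng.normal(m, sigma, size)
    return np.sign(x1) * np.sign(x2) * np.minimum(np.abs(x1), np.abs(x2))

def shape(z):
    zc = z - z.mean()
    return (zc**3).mean() / zc.std()**3, (zc**4).mean() / zc.std()**4 - 3.0

print("one check output (skew, ex.kurt):", shape(check_to_site(n)))
print("sum of 4 outputs (skew, ex.kurt):", shape(sum(check_to_site(n) for _ in range(4))))
```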

In fact, for cycle codes with three bits per parity check, it is possible to derive the mean and variance of the check-to-site costs analytically, assuming that the site-to-check costs μ_{s,E} are independent and Gaussian distributed. The derivation is rather lengthy (it can be found in Appendix A.4) and leads to the formulas

E[μ_{E,s}] = m − 2m Q(√2·m/σ) − (σ/√π)(1 − e^{−m²/σ²}),   (5.24)

E[(μ_{E,s})²] = m² + σ² − (2σ²/π) e^{−m²/σ²} − (2mσ/√π) erf(m/σ),   (5.25)

where m and σ are the mean and standard deviation of the site-to-check costs μ_{s,E}, respectively, and Q(·) and erf(·) are the usual "error functions".

Figure 5.11  Probability distribution of the check-to-site cost μ_{E,s} as computed by the min-sum algorithm on a cycle code with three bits per parity check. The site-to-check costs μ_{s,E} are assumed to have Gaussian distribution with mean 1. In the upper two diagrams, the variance of μ_{s,E} is 2, and in the lower two diagrams, the variance is 1/2. The solid curves are the actual probability distributions; the dotted curves are Gaussian distributions with the same mean and variance. [Plots omitted; each distribution is shown on both a linear and a logarithmic vertical scale.]
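Formulas (5.24) and (5.25) are easy to cross-check by simulation. The sketch below is our addition, not the thesis' code; the helper Q is the Gaussian tail function and erf is the standard error function from Python's math module. It compares the sampled first and second moments of Z = sign(X1)·sign(X2)·min(|X1|, |X2|), with X1, X2 i.i.d. N(m, σ²), against the two formulas.

```python
# Monte Carlo cross-check (our addition) of formulas (5.24) and (5.25):
# Z = sign(X1) sign(X2) min(|X1|, |X2|) with X1, X2 i.i.d. N(m, sigma^2)
# is the check-to-site cost of a parity check with three bits.
import numpy as np
from math import erf, erfc, exp, pi, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

m, sigma = 1.0, sqrt(2.0)          # the upper diagrams of Figure 5.11
rng = np.random.default_rng(1)
x1 = rng.normal(m, sigma, 2_000_000)
x2 = rng.normal(m, sigma, 2_000_000)
z = np.sign(x1) * np.sign(x2) * np.minimum(np.abs(x1), np.abs(x2))

mean_524 = m - 2.0 * m * Q(sqrt(2.0) * m / sigma) \
           - (sigma / sqrt(pi)) * (1.0 - exp(-m**2 / sigma**2))
second_525 = m**2 + sigma**2 - (2.0 * sigma**2 / pi) * exp(-m**2 / sigma**2) \
             - (2.0 * m * sigma / sqrt(pi)) * erf(m / sigma)

print("E[Z]    sampled %.4f   (5.24) %.4f" % (z.mean(), mean_524))
print("E[Z^2]  sampled %.4f   (5.25) %.4f" % ((z**2).mean(), second_525))
```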

So, using this cycle code on a Gaussian channel, it is easy to compute estimates of the error probability, under the assumption that the check-to-site cost functions are almost Gaussian. In Figure 5.12, we have plotted the estimates obtained this way along with simulation results. As can be seen in the figure, the estimate is somewhat optimistic.
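A plausible way to produce such estimates — hedged as our own reconstruction of the procedure, not the thesis' actual computation — is to propagate the mean and variance of the log-cost-ratios through the update rules: sums as in (5.21) and (5.23) add means and, assuming independence, variances, while each degree-3 check output is replaced by a Gaussian with the moments (5.24) and (5.25). The sketch assumes a cycle code in which every bit participates in exactly two parity checks of three bits each; the parameterization of the channel by σ² is ours. For zero iterations the estimate reduces to the uncoded error rate Q(1/σ).

```python
# A hedged reconstruction (ours, not the thesis' code) of a Gaussian-
# approximation BER estimate for a cycle code in which every bit takes
# part in exactly 2 parity checks with 3 bits each.  Local costs
# gamma_s are modelled as N(1, sigma2); every check output is replaced
# by a Gaussian with the moments given by (5.24) and (5.25).
from math import erf, erfc, exp, pi, sqrt

def Q(x):
    """Gaussian tail function."""
    return 0.5 * erfc(x / sqrt(2.0))

def check_update(m, var):
    """Mean and variance out of a degree-3 check, via (5.24)/(5.25)."""
    s = sqrt(var)
    mean = m - 2.0 * m * Q(sqrt(2.0) * m / s) \
           - (s / sqrt(pi)) * (1.0 - exp(-m**2 / s**2))
    second = m**2 + s**2 - (2.0 * s**2 / pi) * exp(-m**2 / s**2) \
             - (2.0 * m * s / sqrt(pi)) * erf(m / s)
    return mean, second - mean**2

def ber_estimate(sigma2, iterations):
    m_sc, v_sc = 1.0, sigma2    # initial site-to-check cost = local cost
    m_cs, v_cs = 0.0, 0.0       # no check-to-site messages yet
    for _ in range(iterations):
        m_cs, v_cs = check_update(m_sc, v_sc)
        # (5.21): local cost plus the message from the *other* check.
        m_sc, v_sc = 1.0 + m_cs, sigma2 + v_cs
    # (5.23): final cost = local cost + messages from both checks.
    m_fin, v_fin = 1.0 + 2.0 * m_cs, sigma2 + 2.0 * v_cs
    return Q(m_fin / sqrt(v_fin))   # Pr{final log-cost-ratio <= 0}

for it in (0, 2, 6, 20):
    print("%2d iterations: BER estimate %.3e" % (it, ber_estimate(1.0, it)))
```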

For more interesting realizations, such as turbo codes, it appears difficult to obtain a theoretical expression for the mean and variance of the log-cost-ratios obtained during the computations. By simulating the decoding process for a single trellis, it would be possible to approximate the relation between the distributions of the incoming and outgoing log-cost-ratios numerically.


Figure 5.12  Bit error rate with a cycle code and the min-sum algorithm, plotted against SNR [dB]. The solid curves are the estimates obtained with the Gaussian assumption, while the dashed curves are obtained from simulations. The number of decoding iterations (0, 2, 4, 6, 10, and 20) is shown in the diagram. [Plot omitted.]

Chapter 6

Decoding Performance with Cycles

We now turn to the case where the cycles do in fact influence the decoder, i.e., when there are multiple copies of some sites (and checks) in the computation tree. It is not at all obvious that the performance will continue to improve if the decoding process goes on after this point. However, simulation results strongly indicate that this may in fact be the case, at least for certain code constructions. This was already observed by Gallager in [5]; it is also clear that the dramatic performance of turbo codes [7] is achieved when the cycles have closed.

Gallager explained this behavior by stating that “the dependencies have a relatively minor effect and tend to cancel each other out somewhat”.

While the analysis of Chapter 5 does not apply in this case, the concept of computation trees and deviation sets, and in particular Theorem 4.2, still applies: a necessary condition for a decoding error to occur is that the global cost of some deviation is negative.

However, the cost of a deviation is not a sum of independent local costs, as it is in the cycle-free case. Instead, some of the local costs are the same (since they correspond to the same sites). Therefore, the probability that the cost is negative depends not only on the weight of e, but also on the number of multiple occurrences. Still, it is possible to formalize the situation in a nice way, as follows.

Thus, let (N, W, B) be a system with n = |N| sites, where the visible sites V ⊆ N are all binary, and let (N̂, Ŵ, B̂) be a corresponding tree system rooted at the site s. Then we have

Definition 6.1  The multiplicity vector of a tree configuration u ∈ Ŵ is an integer-valued n-tuple [u] ∈ Zⁿ, where [u]_s is the number of tree sites t with u_t = 1 that correspond to s (for a visible site s).
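To make the definition concrete, here is a toy computation (our own, with hypothetical data) of a multiplicity vector; the dictionary tree_to_site, which maps tree sites to the original sites they copy, and the configuration u are both invented for illustration.

```python
# Toy illustration (ours, hypothetical data) of Definition 6.1: the
# multiplicity vector [u] counts, for each visible site s, the tree
# sites t with u_t = 1 that are copies of s.
from collections import Counter

tree_to_site = {0: "s0", 1: "s1", 2: "s2", 3: "s1", 4: "s2", 5: "s1"}
u = {0: 1, 1: 1, 2: 0, 3: 1, 4: 1, 5: 0}   # a tree configuration

multiplicity = Counter(tree_to_site[t] for t, ut in u.items() if ut == 1)
print(dict(multiplicity))                   # {'s0': 1, 's1': 2, 's2': 1}
```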

With this definition, the global cost of the tree configuration u may be expressed as

G(u) = Σ_{s∈V} [u]_s γ_s.   (6.1)

We now assume that the channel is Gaussian and (as usual) that the all-zero codeword was transmitted (using antipodal signaling); this means that the local costs γ_s are mutually independent and normally distributed with mean 1 and variance σ². Furthermore, from (6.1) we see that G(u) is normally distributed with mean E[G(u)] = Σ_{s∈V} [u]_s and variance V[G(u)] = σ² Σ_{s∈V} [u]_s², giving

Pr{G(u) ≤ 0} = Q( (Σ_{s∈V} [u]_s) / (σ √(Σ_{s∈V} [u]_s²)) ),   (6.2)

where Q(·) is the usual error function.
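Equation (6.2) is straightforward to verify numerically. The following sketch is ours; the multiplicity vector and σ are arbitrary choices. It samples G(u) = Σ_s [u]_s γ_s directly and compares the empirical probability of a nonpositive cost with the Q-expression.

```python
# Monte Carlo sanity check (ours; [u] and sigma are arbitrary) of (6.2):
# G(u) = sum_s [u]_s gamma_s with gamma_s i.i.d. N(1, sigma^2).
import numpy as np
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

mult = np.array([1, 2, 1, 3])               # hypothetical [u]
sigma = 1.5
rng = np.random.default_rng(2)
gamma = rng.normal(1.0, sigma, size=(1_000_000, mult.size))
g = gamma @ mult                            # samples of G(u)

print("sampled Pr{G(u) <= 0}:", (g <= 0).mean())
print("(6.2) prediction     :", Q(mult.sum() / (sigma * sqrt((mult**2).sum()))))
```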

The expression in (6.2) closely resembles the corresponding expression Q(√(weight(u))/σ) in the cycle-free case, making it natural to define the generalized weight ω(u) of a tree configuration u as the quantity

ω(u) ≜ (Σ_{s∈V} [u]_s)² / (Σ_{s∈V} [u]_s²),   (6.3)

obtaining the convenient formulation

Pr{G(u) ≤ 0} = Q(√(ω(u)) / σ).   (6.4)

Note that the generalized weight of u is inversely proportional to a kind of "normalized empirical variance" of [u], so that ω(u) is large when the components of [u] are similar, whereas ω(u) is small when some components dominate over the rest, i.e., when a few sites occur more often than others in the support of u.

We now consider the deviation set E and the probability of decoding error. Corollary 4.4 applies regardless of the presence of cycles, and we have the error probability bound

Pr{decoding error} ≤ Pr{G(e) ≤ 0 for some e ∈ E}.   (6.5)
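The effect of the multiplicities on ω(u) is easy to see numerically. The sketch below (our addition, with two invented multiplicity vectors of the same total weight) evaluates (6.3) and (6.4) for an evenly spread [u] and for one concentrated on a few sites; the latter gives a markedly smaller ω(u) and hence a larger Pr{G(u) ≤ 0}.

```python
# Generalized weight (6.3) and error probability (6.4) for two invented
# multiplicity vectors with the same total weight: spreading the support
# evenly maximizes omega(u); concentrating it (e.g. on a short cycle)
# shrinks omega(u) and inflates Pr{G(u) <= 0}.
import numpy as np
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def omega(mult):
    mult = np.asarray(mult, dtype=float)
    return mult.sum()**2 / (mult**2).sum()   # (6.3)

sigma = 1.0
for mult in ([1, 1, 1, 1, 1, 1], [3, 3, 0, 0, 0, 0]):
    w = omega(mult)
    print(mult, " omega = %.1f  Pr{G(u)<=0} = %.4f" % (w, Q(sqrt(w) / sigma)))
```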

From (6.4), it is clear that ω(e) should be as large as possible to give a small error probability. This indicates that there should be no short cycles in the Tanner graph, because otherwise it is possible for a deviation to concentrate a lot of its support on such a short cycle and thus obtain a small generalized weight, thereby contributing to a large error probability. This idea will be formalized into a strict result regarding cycle codes below.

The upper bound on the error probability (6.5) may, in turn, be upper-bounded with the usual union bound, giving

Pr{decoding error} ≤ Σ_{e∈E} Q(√(ω(e)) / σ).   (6.6)

In parallel with the cycle-free case, it is even possible to define a "generalized weight enumerator" T(x) = Σ_ω A_ω x^ω, where A_ω is the number of deviations with generalized weight ω, and the sum in T(x) runs over all generalized weights ω that occur in E. This would make it possible to write (a weakened version of) the bound (6.6) in the compact form Pr{decoding error} ≤ T(z), where, as usual, z = e^{−1/(2σ²)} for a Gaussian channel. Unfortunately, no method is known for computing the generalized weight distribution A_ω, apart from going through all deviations explicitly. In particular, it is not possible to apply the sum-product algorithm in the straightforward fashion described in Section 5.1, since the generalized weight of a configuration e is not additive over disjoint site subsets (as is the conventional weight).
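Although no method is known for computing the generalized weight distribution, both bounds are trivial to evaluate once a multiset of generalized weights is given. The sketch below is our addition; the list of weights is entirely hypothetical, and the compact form uses the standard weakening Q(x) ≤ e^{−x²/2}.

```python
# Evaluating the union bound (6.6) and the weakened compact form T(z)
# (ours; the multiset of generalized weights below is hypothetical --
# no method is known for computing it short of full enumeration).
from collections import Counter
from math import erfc, exp, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

weights = [2.0, 2.0, 3.5, 3.5, 3.5, 6.0]   # omega(e) over deviations e
A = Counter(weights)                        # A_omega
sigma = 0.8
z = exp(-1.0 / (2.0 * sigma**2))

print("union bound (6.6) :", sum(a * Q(sqrt(w) / sigma) for w, a in A.items()))
print("T(z), z=e^(-1/2s^2):", sum(a * z**w for w, a in A.items()))
```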

For a limited class of systems, however, it is possible to analyze in more detail the behavior of the deviations when the number of decoding iterations tends to infinity. This is possible for systems in which the deviations have a "chain-like" structure, i.e., their support forms a single walk through the computation tree, starting and ending in two leaf sites and going through the middle (root) site. There are two interesting kinds of systems in this class: cycle codes and tailbiting trellises.

The idea is, essentially, that the lowest-cost deviation will have a periodic structure: the walk formed by such a deviation will spend most of its time in the lowest-cost part of the graph. We begin with the simplest case, namely cycle codes.
