
Further Unifications

In document: Codes and Decoding on General Graphs (pages 36–39)

The reader may have noted that (for both algorithms presented) the updating formulas for the site-to-check cost functions, on the one hand, and the check-to-site cost functions, on the other, are very similar (compare, for example, Equations 3.1 and 3.2). In fact, sites could have been treated as a special kind of check, which would have formally unified the update formulas.

It seems, however, that the present notation is more natural and easier to understand.

It is also possible to unify (and possibly generalize) the min-sum and sum-product algorithms formally, using a “universal algebra” approach with two binary operators ⊕ and ⊗.

For the sum-product algorithm the operator ⊕ is addition and the operator ⊗ is multiplication, while for the min-sum algorithm ⊕ is minimization (taking the smaller of two numbers) and ⊗ is addition. Theorems 3.1 and 3.2 have a counterpart in this universal setting provided that the operators ⊕ and ⊗ are associative and commutative, and that the operator ⊗ is distributive over ⊕, i.e., a ⊗ (b ⊕ c) = (a ⊗ b) ⊕ (a ⊗ c). The proof of such a general theorem is readily obtained by substituting ⊕ for “min” and ⊗ for “+” in the proof of Theorem 3.1, cf. Appendix A.1.
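To make the unification concrete, the sketch below (helper names are illustrative, not from the thesis) folds the same branch structure with either operator pair, treating each pair as a commutative semiring:

```python
from functools import reduce

# Hypothetical sketch: min-sum and sum-product differ only in the pair of
# operators (⊕, ⊗), which forms a commutative semiring in both cases.
SUM_PRODUCT = (lambda a, b: a + b, lambda a, b: a * b)  # ⊕ = +,   ⊗ = ×
MIN_SUM = (min, lambda a, b: a + b)                     # ⊕ = min, ⊗ = +

def combine(branches, semiring):
    """⊕ over branches of the ⊗-product of each branch's local factors."""
    oplus, otimes = semiring
    return reduce(oplus, (reduce(otimes, branch) for branch in branches))

branches = [[0.5, 0.2], [0.1, 0.8]]
print(combine(branches, SUM_PRODUCT))  # 0.5*0.2 + 0.1*0.8
print(combine(branches, MIN_SUM))      # min(0.5+0.2, 0.1+0.8)
```

It is exactly the distributive law a ⊗ (b ⊕ c) = (a ⊗ b) ⊕ (a ⊗ c) that licenses pushing the ⊗-products inside the ⊕-fold in an update formula.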


Chapter 4

Analysis of Iterative Decoding

Iterative decoding is a generic term for decoding algorithms whose basic operation is to modify some internal state in small steps until a valid codeword is reached. In our framework, iterative decoding occurs when the min-sum or sum-product algorithm is applied to a code realization with cycles in the Tanner graph. (There are other iterative decoding methods too, cf. e.g. our previous work [13].) As opposed to the cycle-free case, there is no obvious termination point: even when the cost contributions from each site have reached all other sites, there is no guarantee that this information is used optimally in any sense. Typically, the updating process is continued after this point, updating each intermediate cost function several times before computing the final costs and making a decision.
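As a concrete (and deliberately simplified) illustration of such an iterative schedule, the following sketch runs min-sum message passing on a plain parity-check realization with no hidden sites, using cost-difference messages and a fixed number of iterations chosen in advance; the function names are hypothetical, not the thesis's notation:

```python
def min_sum_decode(H, gamma, iterations=10):
    """Toy min-sum decoder: H is a list of parity-check rows over n sites,
    gamma[j] is the normalised site cost difference γ_j(1) - γ_j(0)."""
    m, n = len(H), len(H[0])
    checks = [[j for j in range(n) if H[i][j]] for i in range(m)]
    sites = [[i for i in range(m) if H[i][j]] for j in range(n)]
    # site-to-check messages, initialised to the local cost differences
    s2c = {(j, i): gamma[j] for j in range(n) for i in sites[j]}
    c2s = {}
    for _ in range(iterations):
        # check-to-site update: sign product and minimum magnitude of the
        # other incoming messages (the min-sum rule for cost differences)
        for i in range(m):
            for j in checks[i]:
                others = [s2c[(k, i)] for k in checks[i] if k != j]
                sign = -1 if sum(v < 0 for v in others) % 2 else 1
                c2s[(i, j)] = sign * min(abs(v) for v in others)
        # site-to-check update: local cost plus the other incoming messages
        for j in range(n):
            for i in sites[j]:
                s2c[(j, i)] = gamma[j] + sum(c2s[(k, j)] for k in sites[j] if k != i)
    # final cost differences; decide x_j = 1 where the difference is negative
    final = [gamma[j] + sum(c2s[(i, j)] for i in sites[j]) for j in range(n)]
    return [1 if f < 0 else 0 for f in final]

# Toy code with checks x0 + x1 = 0 and x1 + x2 = 0; the middle observation
# weakly favours 1, but the neighbouring checks pull it back to all-zero.
print(min_sum_decode([[1, 1, 0], [0, 1, 1]], [0.9, -0.4, 0.8]))  # [0, 0, 0]
```

Note that the iteration count is simply a parameter here; as the text observes, nothing in the algorithm itself tells us when to stop.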

Since the theorems of Chapter 3 do not apply when there are cycles in the Tanner graph, it is somewhat surprising that, in fact, iterative decoding often works quite well. This is primarily demonstrated by simulation results, the most well-known being the turbo codes of Berrou et al. [7]. A much older (and lesser known) indication of this is Gallager’s decoding method for low-density parity-check codes [5].

A natural approach to analyzing iterative decoding is simply to disregard the influence of the cycles. In fact, if the decoding process is terminated after only a few decoding iterations, the algorithm will have operated on cycle-free subgraphs of the Tanner graph; consequently, the theorems of Chapter 3 apply to these “subgraph codes”, a fact that can be used to estimate the decoding performance. Put another way, the intermediate cost functions coming into each site (or check) are statistically independent as long as the cycles have not “closed”.

One aspect of this approach is to consider the performance obtained after infinitely many cycle-free decoding iterations. In particular, the error probability may in some cases tend to zero when the number of decoding iterations increases. Of course, the number of cycle-free decoding iterations for any fixed realization is limited by the length of the shortest cycles (the girth) of the Tanner graph, so such an analysis is strictly applicable only to infinite realization families having asymptotically unbounded girth. By considering such families, however, it is possible to construct coding systems with arbitrarily low probability of decoding error for a given channel.
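Since the number of cycle-free iterations is governed by the girth, one may wish to compute it for a given realization. A minimal sketch, assuming an unweighted, undirected adjacency-list representation of the Tanner graph (vertex names are illustrative):

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle in an undirected graph (inf if acyclic).
    adj maps each vertex to an iterable of its neighbours."""
    best = float("inf")
    for root in adj:                     # BFS from every vertex
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:
                    # a non-tree edge closes a cycle through the BFS root
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Tanner graph of a tiny parity-check code: sites s0..s2, check c0 on all
# three sites, check c1 on s0 and s1; the 4-cycle is s0-c0-s1-c1-s0.
adj = {
    "s0": ["c0", "c1"], "s1": ["c0", "c1"], "s2": ["c0"],
    "c0": ["s0", "s1", "s2"], "c1": ["s0", "s1"],
}
print(girth(adj))  # 4
```

Tanner graphs are bipartite, so the girth is always even; the minimum over all BFS roots yields the exact girth for unweighted graphs.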

On the other hand, even if the number of decoding iterations is too large for a strict application of this analysis, the initial influence of the cycles is probably relatively small (in fact, Gallager states in [5] that “the dependencies have a relatively minor effect and tend to cancel each other out somewhat”); in any case, it appears highly unlikely that the actual performance would be significantly better than the estimate obtained from such a cycle-free analysis.

Chapter 5 is devoted to performance analyses based on cycle-free subgraphs.

For any fixed code, the decoding performance is limited by the theoretical code performance obtained with an optimal decoder. Ideally, an iterative decoding algorithm would approach this performance when the number of iterations increases. Very little is actually known about the asymptotic behavior of iterative decoding for fixed codes. Our results in this area, along with a deeper discussion of the problem, are presented in Chapter 6.

The rest of this chapter contains some fundamental observations that are used in both Chapter 5 and Chapter 6. In particular, we generalize the concept of trellis “detours” (i.e., paths that diverge from the all-zero path exactly once) to the case of arbitrary realizations.

Most of the analysis applies only to the min-sum algorithm, since we have found its operation easier to characterize. For further simplification, we will also assume the following:

• The code is binary and linear, and all codewords are equally probable. In our setting, this implies that the local check costs can be neglected, and that there are site costs on binary sites only (we do allow nonbinary hidden sites, though, including trellis state spaces).

• The channel is stationary, memoryless, and has binary input; the transition probabilities are symmetric, i.e., p_s(0 | x_s = 0) = p_s(1 | x_s = 1) and p_s(0 | x_s = 1) = p_s(1 | x_s = 0).
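For concreteness, the binary symmetric channel satisfies this symmetry condition. A small hypothetical check (the crossover value and helper name are illustrative only, not from the thesis):

```python
from math import log

eps = 0.1  # hypothetical crossover probability of a binary symmetric channel
p = {(y, x): (1 - eps) if y == x else eps for y in (0, 1) for x in (0, 1)}

# The symmetry condition: p(0|0) = p(1|1) and p(0|1) = p(1|0).
assert p[(0, 0)] == p[(1, 1)] and p[(0, 1)] == p[(1, 0)]

def site_cost(y):
    """Normalised site cost γ_s = log p(y|0) - log p(y|1), so that γ_s(0) = 0."""
    return log(p[(y, 0)]) - log(p[(y, 1)])

# Receiving 0 makes the hypothesis x_s = 1 costly, and symmetrically for 1.
print(site_cost(0), site_cost(1))  # ±log((1-ε)/ε)
```

The symmetry of the channel is what makes the two received symbols produce costs of equal magnitude and opposite sign.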

These restrictions allow us to assume (without loss of generality) that the all-zero codeword was transmitted. Furthermore, we will assume (also without loss of generality) that the local site cost functions are normalized so that γ_s(0) = 0 for all s, and use the shorthand notation γ_s ≜ γ_s(1). Then the global cost may be written as

G(x) = Σ_s γ_s(x_s) = Σ_{s ∈ supp(x)} γ_s ,   (4.1)

where supp(x) is the support of x, i.e., the set of nonzero visible sites of x. Under these restrictions, we will be interested in the following problems:
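With the normalization γ_s(0) = 0, Equation (4.1) amounts to a one-line computation; a numeric sketch with made-up cost values:

```python
def global_cost(gamma, x):
    """G(x): sum of the normalised site costs γ_s over the support of x."""
    return sum(g for g, bit in zip(gamma, x) if bit == 1)

gamma = [0.25, -1.5, 0.75, 0.5]  # hypothetical normalised site costs γ_s
assert global_cost(gamma, (0, 0, 0, 0)) == 0.0  # the all-zero word costs 0
print(global_cost(gamma, (1, 0, 1, 0)))  # γ_0 + γ_2 = 1.0
```

That the all-zero codeword always has global cost zero is precisely what makes it a convenient reference point in the error analysis that follows.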

• What combinations of local costs (error patterns) lead to a decoding error?

• For a given channel, what is the probability of decoding error?

By decoding error, we will usually mean a symbol error at some site s, i.e., the event μ_s(1) ≤ μ_s(0) given that the all-zero codeword was transmitted. This may seem a little odd since we are analyzing the min-sum algorithm rather than the sum-product algorithm. This has been the most fruitful approach, however. In addition, while it is true for cycle-free systems that the min-sum algorithm minimizes the block error probability and the sum-product algorithm minimizes the symbol error probability, this need not be the case for systems with cycles.

