(From the document Codes and Decoding on General Graphs, pages 92–96)

A.2 Derivations for Section 3.3

, (A.12)

which matches the formula above, and the recursive process described above can be applied to the terms of the last sum in (A.12).


We now turn to the complexity of updating the check-to-site cost function $\mu_{E,s}$. We may divide the computation into two steps. In the first step, we loop through all local configurations $x_E \in B_E$ and compute their cost sums; this takes $|B_E|(|E|-2)$ operations (if there is no local check cost function $\gamma_E$; otherwise $|B_E|(|E|-1)$ are needed). In the second step, we consider, for each site value $a \in A_s$, all local configurations $x_E \in B_E$ that match with $a$, i.e., with $x_s = a$, and take the minimum of their costs. So, let $M_a \triangleq |\{x_E \in B_E : x_s = a\}|$ be the number of local configurations that match with $a$. Then the number of operations in this second step to compute $\mu_{E,s}(a)$ is $M_a - 1$, and the total number of operations in the second step is $\sum_{a \in A_s} (M_a - 1) = |B_E| - |A_s|$. By adding the complexity of the two steps, we get

$$C(\mu_{E,s}) = |B_E|(|E|-2) + |B_E| - |A_s| = |B_E|(|E|-1) - |A_s|. \quad (A.16)$$

For large $|B_E|$ and $|E|$, there are often more efficient ways of updating the check $E$ than just looping through the local behavior. Such "optimizations" can often be described as replacing the check with a small cycle-free realization of $B_E$. For example, a parity check on $k$ sites has a local behavior of size $2^{k-1}$; if the corresponding check-to-site cost functions $\mu_{E,s}$ are computed in the "naive" way, by applying the updating formula (3.2) $k$ times (once for each site of $E$), the total complexity of that check is (using (A.16)) $k(2^{k-1}(k-1) - 2)$. By implementing the parity check as a small cycle-free system, a much lower complexity is obtained, at least for large values of $k$. A parity check of size $k$ can be replaced by $k-2$ parity checks of size three and $k-3$ intermediate (hidden) binary sites, as in Figure A.4. Using (A.16) again, the number of operations for each such small check is 18, so the total complexity of the refined check structure is $18(k-2)$ (the new hidden sites require no computation, and the other sites are excluded from the comparison).

Since the savings that can be obtained in this way depend heavily on $B_E$, we will not consider this issue further, but instead assume that the local behaviors are small, so that (A.16) is useful.
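To make the two-step update concrete, it can be sketched in Python. The function below is our own illustrative code (the names `check_to_site`, `behavior`, and `incoming` are not from the thesis); it also counts the additions and comparisons so that the count can be checked against (A.16).

```python
from itertools import product

def check_to_site(behavior, incoming, site):
    """Min-sum check-to-site update: compute mu_{E,s} for one check E.

    behavior : list of local configurations x_E (tuples over the sites of E)
    incoming : {site_index: {value: cost}} -- the site-to-check cost
               functions mu_{s',E} for every site s' of E except the target
    site     : tuple index of the target site s
    Returns (mu, adds, mins): the cost function mu_{E,s} together with the
    number of additions (step one) and comparisons (step two).
    """
    adds = mins = 0
    mu = {}
    for x in behavior:
        # Step one: sum the incoming costs over the other sites of E
        # (|E| - 2 additions per configuration, since gamma_E is absent).
        terms = [incoming[i][x[i]] for i in incoming if i != site]
        cost = terms[0]
        for t in terms[1:]:
            cost += t
            adds += 1
        # Step two: minimize over the configurations with x_s = a
        # (M_a - 1 comparisons for each site value a).
        a = x[site]
        if a in mu:
            mu[a] = min(mu[a], cost)
            mins += 1
        else:
            mu[a] = cost
    return mu, adds, mins

# Example: a parity check E on three binary sites (local behavior of size
# 2^{3-1} = 4), computing the message towards site 2.
parity3 = [x for x in product((0, 1), repeat=3) if sum(x) % 2 == 0]
```

For this check the counts are $|B_E|(|E|-2) = 4$ additions and $|B_E| - |A_s| = 2$ comparisons, i.e., $|B_E|(|E|-1) - |A_s| = 6$ operations in total, in agreement with (A.16).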

A.2.1 Updating All Intermediate Cost Functions

When using the min-sum (or sum-product) algorithm on a realization with cycles, all intermediate cost functions typically have to be computed for each iteration. Here we consider the number of operations needed for each such iteration.

Figure A.4 A refinement of a check E (left) into a small cycle-free check structure (right).

Consider first the computation of the final cost function $\mu_s$ and all the site-to-check cost functions from a site $s$, i.e., $\mu_s$ and $\mu_{s,E}$ for all check sets $E$ with $s \in E$. In a naive implementation, we might compute these cost functions independently, with the total number of operations $C(\mu_s) + \sum_{E \in Q : s \in E} C(\mu_{s,E}) = |s|^2 |A_s|$. A more economical way is to save and reuse the temporary sums that occur in the computations, as follows. Let $E_1, \ldots, E_{|s|}$ be the check sets that $s$ belongs to. Let $L_0 = \gamma_s$ and $L_i = L_{i-1} + \mu_{E_i,s}$ for $1 \le i < |s|$. Similarly, let $R_{|s|+1} = 0$ and $R_i = R_{i+1} + \mu_{E_i,s}$ for $1 < i \le |s|$. Then $\mu_{s,E_i} = L_{i-1} + R_{i+1}$. Computing $L_i$ for all $i$ requires $|A_s|(|s|-1)$ operations, while computing $R_i$ (for all $i$) requires $|A_s|(|s|-2)$ operations. The final additions to get the cost functions $\mu_{s,E_i}$ require $|A_s|(|s|-2)$ operations (not counting addition with a zero constant). The total number of operations involved in computing the site-to-check cost functions from the site $s$ in this way is thus

$$C_s = |A_s|(3|s| - 5) \quad \text{if $s$ is visible.} \quad (A.17)$$

For hidden sites, the number of additions is one less, as usual. The computation of the final cost function $\mu_s$ requires only a single addition (for each element of the site alphabet), since $\mu_s = L_{|s|-1} + \mu_{E_{|s|},s}$. Also, it is not necessary that all outgoing cost functions of the site $s$ are computed at the same time for this updating scheme to work.
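The $L_i$/$R_i$ scheme can be sketched as follows. This is our own illustrative Python (the name `site_to_checks` is not from the thesis), and for simplicity it performs the few additions with zero constants that the operation count behind (A.17) excludes.

```python
def site_to_checks(gamma, incoming):
    """Compute all outgoing site-to-check cost functions mu_{s,E_i} of a
    visible site s by reusing prefix sums (the L_i) and suffix sums (the R_i).

    gamma    : {value: cost}, the local site cost gamma_s
    incoming : list of check-to-site cost functions mu_{E_i,s}, one per
               check set E_1, ..., E_{|s|} containing s
    """
    alphabet = list(gamma)
    n = len(incoming)
    # prefix[i] = gamma_s + mu_{E_1,s} + ... + mu_{E_i,s}   (L_i in the text)
    prefix = [dict(gamma)]
    for mu in incoming[:-1]:
        prefix.append({a: prefix[-1][a] + mu[a] for a in alphabet})
    # suffix[i] = mu_{E_{i+1},s} + ... + mu_{E_n,s}, with suffix[n] = 0
    suffix = [{a: 0.0 for a in alphabet} for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        suffix[i] = {a: incoming[i][a] + suffix[i + 1][a] for a in alphabet}
    # mu_{s,E_i} = L_{i-1} + R_{i+1}: all terms except mu_{E_i,s} itself.
    return [{a: prefix[i][a] + suffix[i + 1][a] for a in alphabet}
            for i in range(n)]
```

Each outgoing function thus equals $\gamma_s$ plus all incoming functions except the one coming from the destination check, while each incoming function is read only twice.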

In fact, this "optimized" site updating scheme can be obtained by replacing the site with a small cycle-free realization of it (just as with checks), consisting of several sites that are copies of the original one. Each site with degree larger than two is replaced by a small check structure, as illustrated in Figure A.5. The refined check structure consists of several new sites with the same alphabet as the original site, only one of which is visible, and each of which is connected to only two check sets. The new checks that are introduced are "repetition codes", i.e., they force their sites to have the same value. In other words, the new hidden sites are "copies" of the original site s.

Consider a realization $(N, W, B)$ of the output code $B_V$ (where $V \subseteq N$ are the visible sites) and a corresponding check structure $Q$. The number of binary operations of a complete update of all cost functions can be expressed using (A.17) and (A.16). Updating all site-to-check cost functions thus amounts to

$$\sum_{s \in V} |A_s|(3|s| - 5) + \sum_{s \in N \setminus V} |A_s|(3|s| - 6) \quad (A.18)$$

Figure A.5 A site, s, which is connected to three checks (left), is refined into several sites, each connected to only two checks (right).

binary operations, while updating all check-to-site functions requires

$$\sum_{E \in Q} \sum_{s \in E} \bigl[ |B_E|(|E|-1) - |A_s| \bigr] = \sum_{E \in Q} |B_E|(|E|^2 - |E|) - \sum_{s \in N} |A_s| |s| \quad (A.19)$$

binary operations. The total number of binary operations of a complete update (excluding the final cost functions) is thus

$$\sum_{E \in Q} |B_E|(|E|^2 - |E|) + \sum_{s \in V} |A_s|(2|s| - 5) + \sum_{s \in N \setminus V} |A_s|(2|s| - 6). \quad (A.20)$$

Computing the final cost functions requires only $|A_s|$ additions for each site $s$ for which the final cost function is desired.
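For a quick numerical check, the counts (A.18)–(A.20) can be evaluated mechanically. The helper below and the example realization (the cycle code of $K_4$: six binary edge sites of degree two and four size-three parity checks) are our own illustration, not taken from the thesis.

```python
def complete_update_counts(alph, visible, checks, bsize):
    """Operation counts (A.18)-(A.20) for one complete update.

    alph    : {site: |A_s|}
    visible : set of visible sites V
    checks  : {check: list of its sites}  (the check structure Q)
    bsize   : {check: |B_E|}
    Note: the closed-form site count assumes site degrees of at least two.
    """
    deg = {s: 0 for s in alph}
    for members in checks.values():
        for s in members:
            deg[s] += 1
    site_ops = sum(alph[s] * (3 * deg[s] - (5 if s in visible else 6))
                   for s in alph)                                    # (A.18)
    check_ops = (sum(bsize[E] * (len(m) ** 2 - len(m))
                     for E, m in checks.items())
                 - sum(alph[s] * deg[s] for s in alph))              # (A.19)
    return site_ops, check_ops, site_ops + check_ops                 # (A.20)

# Cycle code of K_4: sites 0..5 are the edges, checks are the vertices.
k4_checks = {0: [0, 1, 2], 1: [0, 3, 4], 2: [1, 3, 5], 3: [2, 4, 5]}
```

For this example, (A.18) gives 12 site operations, (A.19) gives 72 check operations, and the total agrees with evaluating the closed form (A.20) directly.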

A.2.2 The Min-Sum Algorithm on Cycle-Free Realizations

In a cycle-free code realization, it is natural to use a smart updating order, as discussed in Section 3.3. Such an updating order consists of two updating phases. In the first phase, all cost functions pointing "towards" a given site r, called the "root site", are computed, as well as the final cost function at r. In the second phase, the remaining intermediate cost functions, which point "outwards" from r, are computed, together with the final cost functions. With the min-sum algorithm, the second updating phase may be replaced by a simple "backtracking" procedure (as in the Viterbi algorithm), if the goal is just to find the lowest-cost valid configuration. We now consider only the first phase.

For each site $s$ except $r$, we need to compute $\mu_{s,E(s)}$, where $E(s)$ is the check set containing $s$ which is "closest" to the root site $r$. For each check set $E$, we need to compute $\mu_{E,s(E)}$, where $s(E)$ is the site of $E$ which is "closest" to $r$. Finally, we need to compute $\mu_r$. Thus, the complexity $C_1$ of the first phase may be expressed as

$$C_1 = \sum_{s \in N : s \ne r} C(\mu_{s,E(s)}) + \sum_{E \in Q} C(\mu_{E,s(E)}) + C(\mu_r). \quad (A.21)$$

Now, let $\{L, I, H, \{r\}\}$ be a partition of the site set $N$ such that $L$ are the leaf sites, $I$ are the visible interior sites except for the root site $r$, and $H$ are the hidden interior sites except for $r$. Then we can expand (A.21) as

$$C_1 = \sum_{s \in I} |A_s|(|s|-1) + \sum_{s \in H} |A_s|(|s|-2) + \sum_{E \in Q} \bigl[ |B_E|(|E|-1) - |A_{s(E)}| \bigr] + |r|\,|A_r| \quad (A.22)$$

$$= \sum_{E \in Q} |B_E|(|E|-1) - \sum_{s \in H} |A_s|, \quad (A.23)$$


where the last equality follows from $\sum_{E \in Q} |A_{s(E)}| = \sum_{s \in I \cup H} |A_s|(|s|-1) + |r|\,|A_r|$, which holds because the sites $s(E)$, $E \in Q$, comprise each interior site $s$ exactly $|s|-1$ times, except for the root site $r$, which occurs $|r|$ times.
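As a sanity check of the equality between (A.22) and (A.23), both sides can be evaluated mechanically on a small cycle-free realization. The code below, including the example tree, is our own illustration and not from the thesis.

```python
from collections import deque

def first_phase_counts(alph, checks, bsize, hidden, root):
    """Evaluate the two expressions (A.22) and (A.23) for a cycle-free
    realization, so that their equality can be verified numerically.

    alph   : {site: |A_s|}
    checks : {check: list of its sites}, forming a tree with the sites
    bsize  : {check: |B_E|}
    hidden : set of hidden sites
    root   : the chosen root site r
    """
    deg = {s: 0 for s in alph}
    for members in checks.values():
        for s in members:
            deg[s] += 1
    # Breadth-first search from the root gives, for each check E, the
    # site s(E) of E that is closest to the root.
    closest = {}
    seen_sites, seen_checks = {root}, set()
    queue = deque([root])
    while queue:
        s = queue.popleft()
        for E, members in checks.items():
            if s in members and E not in seen_checks:
                seen_checks.add(E)
                closest[E] = s
                for t in members:
                    if t not in seen_sites:
                        seen_sites.add(t)
                        queue.append(t)
    interior = {s for s in alph if deg[s] >= 2 and s != root}
    I = interior - hidden
    H = interior & hidden
    a22 = (sum(alph[s] * (deg[s] - 1) for s in I)
           + sum(alph[s] * (deg[s] - 2) for s in H)
           + sum(bsize[E] * (len(m) - 1) - alph[closest[E]]
                 for E, m in checks.items())
           + deg[root] * alph[root])
    a23 = (sum(bsize[E] * (len(m) - 1) for E, m in checks.items())
           - sum(alph[s] for s in H))
    return a22, a23
```

On a small tree with a visible root of degree two, one hidden interior site, and three visible leaves, both expressions give the same operation count.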

As mentioned, with the min-sum algorithm it often suffices to find the best configuration, and then it only remains to find the optimal value at $r$ and trace the stored pointers outwards through the Tanner graph. The complexity of finding the smallest of $|A_r|$ values is $|A_r| - 1$, so the overall complexity in this case is $C_1 + |A_r| - 1$ plus the complexity of following the pointers.
