

4.2.4 Generalized maximal contention-free and intra-block permutation

The maximal contention-free property is proposed in [92]; an interleaver possessing this property supports a flexible parallelism degree. The definition is given below.

Definition 21 A length-K interleaver Π is maximal contention-free if both φ = π and φ = π^{-1} satisfy

⌊φ(k + iL)/L⌋ ≠ ⌊φ(k + jL)/L⌋,  (4.17)

for all factors L of K, where 0 ≤ k < L and 0 ≤ i < j < K/L.

When the memory mapping functions in eqns. (4.9) and (4.10) are applied, such an interleaver is memory contention-free for any parallelism degree that is a factor of K.
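The condition in eqn. (4.17) can be verified exhaustively for a given permutation. The following Python sketch checks Definition 21 for every factor L of K; the function name and the example parameters (the LTE quadratic permutation polynomial f1 = 3, f2 = 10 for K = 40) are illustrative choices, not taken from the text.

def is_maximal_contention_free(pi):
    # Check eqn. (4.17) for phi = pi and phi = pi^{-1}: for every factor L of K,
    # the K/L addresses k, k+L, ..., k+(K/L-1)L must fall into distinct
    # memory banks floor(phi(.)/L).
    K = len(pi)
    inv = [0] * K
    for x, y in enumerate(pi):
        inv[y] = x
    for phi in (pi, inv):
        for L in (d for d in range(1, K + 1) if K % d == 0):
            windows = K // L
            for k in range(L):
                banks = {phi[k + i * L] // L for i in range(windows)}
                if len(banks) != windows:   # two windows hit the same bank
                    return False
    return True

# Example: a QPP interleaver pi(x) = (f1*x + f2*x^2) mod K; QPP interleavers
# are known to be maximal contention-free.
K, f1, f2 = 40, 3, 10
qpp = [(f1 * x + f2 * x * x) % K for x in range(K)]
print(is_maximal_contention_free(qpp))  # expected: True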

Definition 21 is narrow in the sense that some good interleavers satisfying Definition 18 may be excluded by eqn. (4.17): when the memory mapping functions in eqns. (4.13) and (4.14) are applied, an interleaver that violates eqn. (4.17) can still avoid memory contention. Conversely, an interleaver satisfying Definition 21 is not guaranteed to be memory contention-free when the memory mapping functions in eqns. (4.13) and (4.14) are applied. Definition 21 is therefore too restrictive and needs modification. We give a generalized definition below.

Definition 22 A length-K interleaver is generalized maximal contention-free if, for every factor of K, there exist memory mapping functions satisfying Definition 18.

This definition gives a concrete criterion for finding interleavers with both good error-rate performance and more flexibility in the memory contention-free property.

A B-IBP interleaver whose block interleaver satisfies the generalized maximal contention-free property is itself a generalized maximal contention-free interleaver. In most cases, however, a B-IBP interleaver is not a generalized maximal contention-free interleaver, because the intra-block permutation is optional. Section 4.2.2 shows that a length-NL B-IBP interleaver supports any parallelism degree that is a factor of the total number of blocks N, but it does not promise that every factor of NL is supported. However, if the block interleaver is a generalized maximal contention-free interleaver, there exist memory mapping functions supporting any parallelism degree that is a factor of L. Therefore there exist composite memory mapping functions such that the B-IBP interleaver is also a generalized maximal contention-free interleaver. The theorem is provided as follows.

Theorem 4.7 If the block interleaver is a generalized maximal contention-free interleaver, the B-IBP interleaver is also a generalized maximal contention-free interleaver.

Proof: Since the block interleaver is a generalized maximal contention-free interleaver, there exist memory mapping functions M_{i,block} and Md_{i,block} avoiding memory contention for any parallelism degree i that is a factor of the block length L. Suppose there are N blocks in the B-IBP interleaver; Theorem 4.6 shows that there exist memory mapping functions M_{k,B-IBP} and Md_{k,B-IBP} avoiding memory contention for any degree k that is a factor of N. Let Q = gcd(N/k, i). We construct composite memory mapping functions M_{ki,C-B-IBP} and Md_{ki,C-B-IBP} supporting parallelism degree ki as

M_{ki,C-B-IBP}(mL + n) = M_{kQ,B-IBP}(mL + n) · (i/Q) + M_{i/Q,block}(n),
Md_{ki,C-B-IBP}(mL + n) = Md_{kQ,B-IBP}(mL + n) · (i/Q) + Md_{i/Q,block}(n).

Since Q divides N/k and Q divides i, kQ and i/Q are factors of N and L respectively, and the memory mapping functions M_{kQ,B-IBP}, Md_{kQ,B-IBP}, M_{i/Q,block} and Md_{i/Q,block} avoid memory contention by Theorem 4.6 and by the generalized maximal contention-free property of the block interleaver.

We apply both composite memory mapping functions to Π_{B-ibp} and Π_{B-ibp}^{-1} and verify that Definition 18 is satisfied for parallelism degree ki. Since every factor of NL can be written as such a product ki with k a factor of N and i a factor of L, the B-IBP interleaver is a generalized maximal contention-free interleaver.

Theorem 4.7 introduces memory mapping functions under which the B-IBP interleaver possesses the generalized maximal contention-free property, provided that the block interleaver is a generalized maximal contention-free interleaver. Therefore we can search among existing good short block interleavers, such as QPP or ARP interleavers, to construct the B-IBP interleaver; the resulting distance property is generally good and the generalized maximal contention-free property is satisfied.
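As an illustration of the degree arithmetic used in the proof of Theorem 4.7, the sketch below combines a block-level mapping of degree kQ with an intra-block mapping of degree i/Q, where Q = gcd(N/k, i). The product form of the combination mirrors eqns. (4.28) and (4.29) and is an assumption here, as are all function and variable names.

from math import gcd

def composite_degree_mapping(M_kQ_bibp, M_iQ_block, N, L, k, i):
    # k divides N and i divides L; Q = gcd(N/k, i) re-balances the two levels
    # so that kQ divides N and i/Q divides L, as noted in the proof.
    assert N % k == 0 and L % i == 0
    Q = gcd(N // k, i)
    assert N % (k * Q) == 0 and L % (i // Q) == 0
    intra_banks = i // Q
    def M(addr):                      # addr = m*L + n, 0 <= addr < N*L
        n = addr % L
        # assumed product form: block-level bank index scaled by the number of
        # intra-block banks, plus the intra-block bank index -> ki banks total
        return M_kQ_bibp(addr) * intra_banks + M_iQ_block(n)
    return M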

4.2.5 High-radix APP decoder and intra-block permutation

The high-radix APP decoder [16] improves turbo decoding throughput, network complexity and storage at the cost of trellis complexity. The APP decoder processes multiple trellis segments, or multiple information bits, in each unit time to increase decoding throughput. Since the decoding throughput increases, fewer APP decoders are needed to achieve the same decoding throughput compared to the baseline radix-2 APP decoder, and the associated network complexity is lower. Fewer APP decoders also require less storage for state metrics, received samples and extrinsic information in the turbo decoder.

However, processing more trellis segments or information bits in each unit time induces exponential growth of the trellis complexity in the APP decoder. For example, processing 6 bits in one unit time implies 2^6 = 64 edges leaving each state, and 64 · |Σ| edges in total, whereas processing 1 bit per unit time means 2 edges leave each state, and in total 2 · 6 · |Σ| = 12 · |Σ| edges are needed over the same 6 bits, where |Σ| is the total number of states. Therefore the high-radix APP decoder enlarges the trellis, or edge-routing, complexity.
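The edge-count comparison above is easy to reproduce. The short sketch below evaluates it for an assumed number of states |Σ| = 8; the state count and the function name are illustrative choices, not taken from the text.

def total_edges(bits_per_step, total_bits, num_states):
    # Each state emits 2**bits_per_step edges per merged trellis step, and
    # total_bits // bits_per_step steps are needed to cover total_bits bits.
    steps = total_bits // bits_per_step
    return steps * (2 ** bits_per_step) * num_states

num_states = 8                                   # assumed |Sigma|
print(total_edges(1, 6, num_states))             # radix-2:  12*|Sigma| = 96
print(total_edges(6, 6, num_states))             # radix-64: 64*|Sigma| = 512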

Take the radix-4 APP decoder [16] as an example to compare the routing complexity.

Fig. 4.7 draws two trellises to compare the radix-2 and radix-4 APP decoders. Fig. 4.7 (a) is the trellis composed of two trellis segments, referring to Fig. 2.3 (c); the radix-2 APP decoder processes this trellis in two unit times. Fig. 4.7 (b) plots the merged trellis, in which two bits are processed in one unit time. If parallelism degree 32 is necessary, a 32 × 32 network can be replaced by two 16 × 16 networks when the radix-4 APP decoder substitutes for the radix-2 APP decoder. The associated network complexity decreases, and the number of APP decoders is halved.

In order to support the radix-2^B APP decoder, the APP decoder has to access and write B consecutive information bits without memory contention. Suppose the block interleaver Π_{block} has length L and B is a factor of L, where the condition that B divides L ensures that the trellis segment size does not change at the end of the block for the APP decoder. We give a definition as follows.

Figure 4.7: (a) Two connected trellis segments referring to Fig. 2.3 (c); (b) the merged trellis segment for the radix-4 APP decoder.

Definition 23 The block interleaver Π_{block} supports memory contention-free radix-2^B APP decoding if there exist memory mapping functions M_{B,block} and Md_{B,block} such that

M_{B,block}(iB + j) ≠ M_{B,block}(iB + k),  (4.22)

Md_{B,block}(iB + j) ≠ Md_{B,block}(iB + k),  (4.23)

M_{B,block}(Π_{block}^{-1}(iB + j)) ≠ M_{B,block}(Π_{block}^{-1}(iB + k)),  (4.24)

Md_{B,block}(Π_{block}(iB + j)) ≠ Md_{B,block}(Π_{block}(iB + k)),  (4.25)

where 0 ≤ i < L/B, 0 ≤ j < k < B.
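Conditions (4.22)-(4.25) are straightforward to check for a candidate pair of memory mapping functions. The Python sketch below verifies them exhaustively for a given length-L block interleaver; the function and argument names are illustrative, not from the text.

def supports_radix(pi_block, B, M, Md):
    # Check eqns. (4.22)-(4.25): within every group of B consecutive addresses
    # iB, iB+1, ..., iB+B-1, the banks assigned by M and Md must be distinct,
    # both in natural order and through the interleaver.
    L = len(pi_block)
    assert L % B == 0
    inv = [0] * L
    for x, y in enumerate(pi_block):
        inv[y] = x
    for i in range(L // B):
        group = range(i * B, (i + 1) * B)
        if len({M(a) for a in group}) != B:            # (4.22)
            return False
        if len({Md(a) for a in group}) != B:           # (4.23)
            return False
        if len({M(inv[a]) for a in group}) != B:       # (4.24)
            return False
        if len({Md(pi_block[a]) for a in group}) != B: # (4.25)
            return False
    return True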

Recall the length-L ARP interleaver in eqn. (4.4) and the proof of Theorem 4.5: the permutation function moves the elements of S_k to S_{A(k)}, where S_k = {iC + k | 0 ≤ i < L/C}, 0 ≤ k < C. Therefore the ARP interleaver can support the radix-2^C APP decoder with the memory mapping functions

M_{C,block}(iC + j) = j,  (4.26)

Md_{C,block}(iC + j) = j.  (4.27)
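In code, eqns. (4.26) and (4.27) reduce to taking the address modulo C; under the assumption that the ARP interleaver maps each set S_k onto some S_{A(k)}, this pair passes the check of Definition 23 with B = C. The sketch below is illustrative.

def arp_memory_mappings(C):
    # Eqns. (4.26)-(4.27): address iC + j is stored in bank j both before and
    # after interleaving, so M_{C,block} and Md_{C,block} coincide.
    M_C_block = lambda addr: addr % C
    Md_C_block = lambda addr: addr % C
    return M_C_block, Md_C_block

# Usage with the checker above, given some length-L ARP permutation pi_arp:
#   M, Md = arp_memory_mappings(C)
#   supports_radix(pi_arp, C, M, Md)   # expected to hold for a valid ARP design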

The B-IBP interleaver supports the radix-2^B APP decoder if the block interleaver supports the radix-2^B APP decoder. A theorem is given below.

Theorem 4.8 If the block interleaver supports the radix-2^B APP decoder, the associated B-IBP interleaver also supports the radix-2^B APP decoder.

Proof: Since the block interleaver supports the radix-2^B APP decoder, there exist memory mapping functions M_{B,block} and Md_{B,block} avoiding memory contention, where B is a factor of L. Suppose there are N blocks in the B-IBP interleaver; Theorem 4.6 shows that there exist memory mapping functions M_{k,B-IBP} and Md_{k,B-IBP} avoiding memory contention, where k is a factor of N. We construct composite memory mapping functions M_{kB,C-B-IBP} and Md_{kB,C-B-IBP} as

M_{kB,C-B-IBP}(mL + n) = M_{k,B-IBP}(mL + n) · B + M_{B,block}(n),  (4.28)

Md_{kB,C-B-IBP}(mL + n) = Md_{k,B-IBP}(mL + n) · B + Md_{B,block}(n).  (4.29)

Then we have

MkB,C−B−IBPB−ibp−1 (mL + n))

= Mk,B−IBPB−ibp−1 (mL + n))B + MB,block−1block(n)), (4.30) MdkB,C−B−IBPB−ibp(mL + n))

= Mdk,B−IBPB−ibp(mL + n))B + MdB,blockblock(n)). (4.31)