
Turbo code, also named parallel concatenated convolutional code (PCCC), convolutional turbo code (CTC), or turbo convolutional code (TCC), was first proposed by C. Berrou, A. Glavieux, and P. Thitimajshima in 1993 [9, 10]. It has been shown that the performance of turbo codes can approach the Shannon limit using simple recursive systematic convolutional (RSC) codes concatenated through an interleaver of length N. The interleaver permutes the information sequence before the second encoding, introducing code diversity.

2.2.1 Turbo Encoder

The turbo encoder is composed of two RSC encoders and an interleaver that reorders the information sequence. Note that the RSC encoders must be recursive for good performance [11]. In Fig. 2.3, the information symbols are encoded into the systematic part v0(D) and the parity part v1(D); thus, v0(D) = u(D). The second encoder encodes the interleaved information symbols ũ(D) into the parity part v2(D).

[Fig. 2.3 block diagram: the information sequence u(D) drives Encoder 1 directly and Encoder 2 through the interleaver (as ũ(D)); the outputs are the systematic part v0(D) and the parity parts v1(D) and v2(D).]

Figure 2.3: Turbo encoder
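The structure in Fig. 2.3 can be sketched in a few lines of Python. This is only an illustrative toy, assuming a memory-2 constituent RSC with feedback polynomial 7 and feedforward polynomial 5 (octal) and a random interleaver; the actual generators and permutation depend on the standard in use.

```python
# Toy rate-1/3 turbo (PCCC) encoder sketch: two identical RSC encoders
# concatenated in parallel through a length-N interleaver.
import random

def rsc_encode(bits):
    """Encode bits with an assumed (1, 5/7)_8 RSC code; return the parity sequence."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        fb = u ^ s1 ^ s2          # feedback 1 + D + D^2 (7 octal)
        p = fb ^ s2               # feedforward 1 + D^2 (5 octal)
        parity.append(p)
        s1, s2 = fb, s1
    return parity

def turbo_encode(bits, interleaver):
    v0 = list(bits)                                  # systematic part v0(D) = u(D)
    v1 = rsc_encode(bits)                            # parity v1(D) from encoder 1
    v2 = rsc_encode([bits[i] for i in interleaver])  # parity v2(D) from encoder 2
    return v0, v1, v2

random.seed(0)
N = 8
u = [random.randint(0, 1) for _ in range(N)]
pi = random.sample(range(N), N)                      # random interleaver of length N
v0, v1, v2 = turbo_encode(u, pi)
print(v0, v1, v2)
```

Note that the systematic bits are transmitted only once; the code diversity comes entirely from the two parity streams computed on the original and permuted orderings of the same data.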

2.2.2 Turbo Interleaver

The main reason that turbo code performance comes so close to the Shannon limit is the interleaver. As shown in Fig. 2.3, the interleaver permutes the information sequence u(D) into ũ(D). Therefore, the interleaver can spread out burst errors and further eliminate the correlation between the inputs of the two RSC encoders, so that the iterative decoding algorithm, which exchanges uncorrelated information between the two decoders, can be applied.

Also, the interleaver can break low-weight codewords to improve the coding gain.

The code distance spectrum dominates the error-correcting performance of the turbo code. Referring to [12], the reshaping of the spectrum by the interleaver, called spectral thinning, reduces the error probability contributed by low-weight codewords. If we assume the interleaver performs a random permutation, the error probability can be reduced by a factor of 1/N [11, 13], where N is the interleaver size; 1/N is also referred to as the interleaver gain. Both the size and the permutation considerably affect turbo code performance. At low SNRs, the interleaver size has the most important effect, whereas the permutation dominates the error performance at high SNRs. Consequently, the interleaver structure should break low-weight input patterns. In that case, the input sequence to the second encoder, which is generated by the interleaver, will most likely produce a high-weight parity sequence and thus increase the overall turbo codeword weight.
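The burst-spreading effect described above can be seen directly with a small numerical sketch. The permutation below is random and the burst positions are arbitrary choices for illustration.

```python
# Sketch of how a random interleaver disperses a burst: positions that are
# adjacent at the input of encoder 1 land far apart at the input of
# encoder 2, so a low-weight input pattern rarely stays low-weight in both.
import random

random.seed(1)
N = 64
pi = random.sample(range(N), N)    # random permutation of size N
# convention assumed here: interleaved sequence is u_tilde[t] = u[pi[t]]

burst = [10, 11, 12, 13]                       # adjacent error positions in u(D)
spread = sorted(pi.index(i) for i in burst)    # their positions in u_tilde(D)
print(spread)
```

Running this shows the four adjacent positions scattered across the block, which is exactly the decorrelation the iterative decoder relies on.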

2.2.3 Turbo Decoder

The iterative turbo decoding process based on the MAP algorithm exchanges soft information among soft-in/soft-out (SISO) decoders to calculate the a posteriori probability of each information bit ut [2]. For a rate-1/n RSC encoder, each codeword frame consists of one systematic bit and (n − 1) parity bits. In the receiver, the received codeword has the systematic symbol r0,t and the parity symbols r1,t to rn−1,t. The a priori information is defined as

La(ut) ≜ ln ( P(ut = +1) / P(ut = −1) ), (2.30)

and the channel reliability value Lc is defined to be 4Es/N0 for the AWGN channel [14]. The branch metric in the logarithmic domain, which follows from (2.9) and (2.13), is then

γ̄(Sm(t), Sm′(t+1)) = (1/2) ut [La(ut) + Lc r0,t] + (1/2) Lc Σ_{j=1}^{n−1} vj,t rj,t. (2.31)

As a result, the APP information from the SISO decoder can be derived as

L(ut) = ln ( P(ut = +1 | r) / P(ut = −1 | r) ) = Lc r0,t + La(ut) + Le(ut), (2.32)

where the term Le(ut) is the extrinsic information corresponding to the information bit ut [9, 10].

In the decoder, we receive the systematic sequence r0(D) as well as the parity sequences r1(D) and r2(D) from encoder 1 and encoder 2, respectively. In the decoding flow shown in Fig. 2.4, there are two SISO decoders for the two constituent encoders in Fig. 2.3. Initially, we set the a priori information La1(ut) for the first decoder to zero and apply the BCJR algorithm to calculate the a posteriori information L1(ut). From (2.32), the extrinsic information Le1(ut) can be obtained as

Le1(ut) = L1(ut) − Lc r0,t − La1(ut), (2.33)

where La1(ut) = 0 initially. In SISO decoder-2, the inputs are r̃0(D), permuted from the systematic part r0(D), and the parity sequence r2(D), while the a priori information La2(ũt) is the extrinsic output Le1(ut) from decoder-1 after permutation. Consequently, we can evaluate the a posteriori output L2(ũt) and the extrinsic information Le2(ũt) corresponding to the second constituent code by

Le2(ũt) = L2(ũt) − Lc r̃0,t − La2(ũt). (2.34)

As shown in Fig. 2.4, the information Le2(ũt) can be regarded as the a priori information La1(ut) for SISO decoder-1 after being reordered by the de-interleaver. The BCJR algorithm then proceeds again for the first constituent code based on the information La1(ut) from SISO decoder-2. The turbo decoding proceeds iteratively with the extrinsic

information passing between the two SISO decoders. When the stopping criteria are reached, which may be the maximum iteration number or a correctly decoded codeword, the APP information L2(˜ut) through the de-interleaver is exported for hard decision.

Notice that each SISO decoder in Fig. 2.4 runs exactly once within each decoding iteration.
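The bookkeeping of (2.33)–(2.34) and the interleave/de-interleave exchange can be sketched as follows. This is only a structural skeleton: `siso()` is a stand-in that a real decoder would replace with the BCJR forward/backward recursion, and the numeric inputs below are arbitrary illustrative LLRs.

```python
def siso(sys_llr, parity, La):
    # stand-in for the BCJR recursion: returns a posteriori LLRs;
    # a real SISO decoder would run the forward/backward algorithm here
    return [s + a for s, a in zip(sys_llr, La)]

def turbo_decode(Lc_r0, r1, r2, pi, iterations=4):
    N = len(Lc_r0)
    inv = [0] * N
    for t, p in enumerate(pi):
        inv[p] = t                                   # de-interleaver index map
    La1 = [0.0] * N                                  # a priori, zero initially
    Lc_r0i = [Lc_r0[p] for p in pi]                  # interleaved systematic
    for _ in range(iterations):
        L1 = siso(Lc_r0, r1, La1)
        Le1 = [L1[t] - Lc_r0[t] - La1[t] for t in range(N)]    # (2.33)
        La2 = [Le1[p] for p in pi]                   # interleave extrinsic
        L2 = siso(Lc_r0i, r2, La2)
        Le2 = [L2[t] - Lc_r0i[t] - La2[t] for t in range(N)]   # (2.34)
        La1 = [Le2[inv[t]] for t in range(N)]        # de-interleave
    # hard decision on the de-interleaved a posteriori output
    return [1 if L2[inv[t]] > 0 else 0 for t in range(N)]

pi = [2, 0, 3, 1]
decisions = turbo_decode([2.0, -1.0, 0.5, -0.5], [0.0] * 4, [0.0] * 4, pi)
print(decisions)
```

Only the extrinsic part is passed between decoders; subtracting the systematic and a priori terms before the exchange keeps the information between the two SISO decoders uncorrelated across iterations.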

The BER curve of a turbo code can be divided into three regions [15]. At very low SNRs, the signal is so heavily corrupted by channel noise that the decoder cannot improve the error rate and may even degrade it; this non-convergence region has an almost constant, high error probability. As the SNR increases, a waterfall region is encountered where the error rate drops sharply. As the SNR increases still further, an error floor region is encountered where the curve becomes less steep, limiting the performance gains. This error floor region is primarily a function of the distance properties of the code, which can be expressed by (2.35):

Pb ∝ Q( √( 2 dfree R Eb / N0 ) ), (2.35)

where dfree is the minimum free distance of the code, R is the code rate, and Eb/N0 is the SNR.
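The error-floor asymptote (2.35) is easy to evaluate numerically. In the sketch below, dfree = 6 and R = 1/3 are illustrative values, not the parameters of any particular code, and the proportionality constant is taken as 1.

```python
# Numerical sketch of the free-distance asymptote (2.35):
# the error floor scales with Q(sqrt(2 * dfree * R * Eb/N0)).
import math

def qfunc(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_floor(dfree, rate, ebn0_db):
    ebn0 = 10.0 ** (ebn0_db / 10.0)          # dB -> linear Eb/N0
    return qfunc(math.sqrt(2.0 * dfree * rate * ebn0))

for snr_db in (1.0, 2.0, 3.0):
    print(snr_db, error_floor(dfree=6, rate=1/3, ebn0_db=snr_db))
```

The slow decay of Q(·) in the argument's square root is what makes the high-SNR part of the curve flatten: once the waterfall has passed, only increasing dfree (or the multiplicity of low-weight codewords) lowers the floor.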

2.3 Double-binary Convolutional Turbo Code Decoding Algorithm

Double-binary convolutional turbo code (CTC) can provide better performance than single-binary turbo code at equivalent complexity [16]. This section introduces double-binary CTC with the tail-biting technique, which avoids reducing the code rate and increasing the transmission bandwidth. Using double-binary CTC, the latency of the decoder is halved [17], and it is easily adopted in many standards, such as the DVB-RCS and WiMAX standards [1, 18].

2.3.1 Double-binary CTC Encoder

The double-binary CTC encoder is shown in Fig. 2.5. Compared to the conventional turbo code, there are two systematic bits, so the number of branches connected to each state in the trellis diagram increases from two to four.


Figure 2.5: Double-binary Convolutional Turbo encoder

For the conventional turbo encoder, tail bits must be added to force the trellis to terminate in the zero state. The trellis termination ensures that the initial state for the next block is the all-zero state, but the tail bits decrease the code rate and degrade the transmission efficiency, and the degradation is greater for shorter blocks. With the tail-biting technique, also called circulation states, the state of the encoder at the beginning of the encoding process is not necessarily the all-zero state. The fundamental idea behind tail-biting is that the encoder is controlled in such a way that it starts and ends the encoding process in the same state [19].

The circular coding ensures that, at the end of the encoding operation, the encoder retrieves the initial state, so that data encoding may be represented by a circular trellis.

Assume there exists such a circulation state Sc: if the encoder starts from state Sc, it returns to the same state when the encoding process finishes. The derivation of the circulation state Sc requires a pre-encoding operation. First, the encoder is initialized in the all-zero state, and the data sequence of length N is encoded once, leading to a final state Sm(N). Second, we find Sc from the final state Sm(N) by the following equation [19]:

Sc = (I + G^N)^{−1} × Sm(N), (2.36)

where G is the state-transition (generator) matrix of the encoder, I is the identity matrix, and the arithmetic is over GF(2).

Finally, data are encoded starting from the state Sc calculated by (2.36).
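The pre-encoding step and (2.36) can be sketched concretely. The matrix G below is the zero-input state-transition matrix of an assumed memory-2 RSC with feedback polynomial 1 + D + D^2; a real encoder's G follows from its own feedback polynomial. Since the state space is tiny, the sketch solves (I + G^N)·Sc = Sm(N) over GF(2) by exhaustive search rather than matrix inversion.

```python
def mat_mul(A, B):
    # matrix product over GF(2)
    return [[sum(A[i][k] & B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(G, N):
    m = len(G)
    R = [[int(i == j) for j in range(m)] for i in range(m)]  # identity
    for _ in range(N):
        R = mat_mul(R, G)
    return R

def circulation_state(G, N, s_final):
    """Solve (I + G^N) * Sc = Sm(N) over GF(2) by exhaustive state search."""
    m = len(G)
    GN = mat_pow(G, N)
    M = [[(int(i == j) + GN[i][j]) % 2 for j in range(m)] for i in range(m)]
    for s in range(2 ** m):
        cand = [(s >> i) & 1 for i in range(m)]
        if [sum(M[i][k] & cand[k] for k in range(m)) % 2
                for i in range(m)] == s_final:
            return cand
    return None   # (I + G^N) singular: no circulation state for this N

# assumed zero-input transition matrix of a memory-2 RSC (feedback 1 + D + D^2)
G = [[1, 1], [1, 0]]
print(circulation_state(G, 4, [1, 0]))
```

The `None` branch reflects a real restriction of tail-biting: when N makes (I + G^N) singular (here, N a multiple of the state-cycle period 3), no circulation state exists and that block length cannot be used.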

2.3.2 Decoding Procedure for Double-binary CTC

According to the iterative decoding algorithm of turbo codes in Section 2.2, the goal of the MAP decoding algorithm is to compute the extrinsic and LLR values.

Therefore, for the input bits u0,t and u1,t, the LLR for i = 1, 2, 3 can be represented as

Li(dt) ≜ ln ( Pr{dt = i | r} / Pr{dt = 0 | r} ), (2.37)

where dt ∈ GF(2^2) is defined as the pair of input bits (u0,t, u1,t) from time (t − 1) to time t, with elements {0, 1, 2, 3} (that is, dt = 00, 01, 10, 11; we use decimal notation instead of binary for simplicity), and r is the received symbol after QPSK mapping. The above equation can be decomposed as

Li(dt) = ln ( Σ_{(m,m′)∈B^i_t} α(Sm(t)) γ(Sm(t), Sm′(t+1)) β(Sm′(t+1)) / Σ_{(m,m′)∈B^0_t} α(Sm(t)) γ(Sm(t), Sm′(t+1)) β(Sm′(t+1)) ), (2.38)

where B^i_t denotes the set of state transitions (Sm(t), Sm′(t+1)) caused by the input symbol dt = i. Applying the Log-MAP algorithm to (2.38), the LLR can be rewritten as

Li(dt) = max*_{(m,m′)∈B^i_t} [ᾱ(Sm(t)) + γ̄(Sm(t), Sm′(t+1)) + β̄(Sm′(t+1))] − max*_{(m,m′)∈B^0_t} [ᾱ(Sm(t)) + γ̄(Sm(t), Sm′(t+1)) + β̄(Sm′(t+1))], (2.39)

and the Max-Log-MAP approximation becomes

Li(dt) ≈ max_{(m,m′)∈B^i_t} [ᾱ(Sm(t)) + γ̄(Sm(t), Sm′(t+1)) + β̄(Sm′(t+1))] − max_{(m,m′)∈B^0_t} [ᾱ(Sm(t)) + γ̄(Sm(t), Sm′(t+1)) + β̄(Sm′(t+1))]. (2.40)
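As a toy numeric illustration of the symbol-wise LLR definition (2.37): given four posterior probabilities for dt ∈ {0, 1, 2, 3} (the values below are assumed, not derived from any received sequence), the three LLRs are all taken against dt = 0.

```python
# Symbol-wise LLRs against the reference symbol dt = 0, per (2.37).
import math

post = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.40}   # assumed posteriors, sum to 1
llr = {i: math.log(post[i] / post[0]) for i in (1, 2, 3)}
print(llr)
```

A positive Li means symbol i is more likely than symbol 0; the decoder never needs a fourth LLR because L0(dt) is identically zero by construction.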

Since tail-biting is applied on a circular trellis diagram, all states are equally likely at the block boundary. Thus, the initial conditions of the state metrics become

ᾱ(Sx(0)) = 0 for all Sx(0) ∈ S,
β̄(Sx(N)) = 0 for all Sx(N) ∈ S. (2.41)


Figure 2.6: Double-binary CTC decoder

For a rate-1/n double-binary RSC encoder, each codeword frame consists of two systematic bits and 2(n − 1) parity bits. In the receiver, the received codeword has the systematic symbols r0,t, r1,t and the parity symbols r2,t to r2n−1,t. Moreover, in order to reduce the computational complexity, increase the throughput, or reduce the power consumption, we can further simplify the branch metrics to

γ̄(Sm(t), Sm′(t+1)) = Lia(dt) + Σ_{j=0}^{2n−1} bj · rj,t, (2.42)

where each value bj ∈ {+1, −1} depends on the encoding polynomial after BPSK mapping and can be pre-calculated for all state transitions. The a priori information in (2.42) is represented by

Lia(dt) ≜ ln ( P(dt = i) / P(dt = 0) ). (2.43)

From the decoding flow shown in Fig. 2.6, the extrinsic information for the next stage can be calculated as

Lie(dt) = Li(dt) − [(b0 · r0,t + b1 · r1,t) − (r0,t + r1,t)] − Lia(dt). (2.44)

The symbol probabilities for the next decoder are then computed from the previous decoder as

Lia(dt) = Lie(d̃t) = ln ( P(dt = i) / P(dt = 0) ). (2.45)

To save hardware resources, we can define ln P(dt = 0) to be 0. Hence, the a priori information can be rewritten as

Lia(dt) = ln P(dt = i), i = 1, 2, 3.

Assuming the information symbols are equally probable, we initialize the a priori information for the first iteration as

Lia(dt) = 0, i = 1, 2, 3.



The double-binary turbo decoding proceeds iteratively with the extrinsic information passing between the two SISO decoders. When the stopping criteria are reached, which may be the maximum iteration number or a correctly decoded codeword, the final decisions are made according to

d̂t = arg max_{i ∈ {0,1,2,3}} Li(dt), with L0(dt) = 0.
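As a sketch, the final symbol-wise decision (largest LLR against the implicit reference L0(dt) = 0) can be written as follows; the LLR values in the usage lines are arbitrary examples.

```python
# Hard decision for a double-binary symbol: pick the largest of the three
# LLRs Li(dt), i = 1..3, against the implicit reference L0(dt) = 0.
def decide(llrs):
    """llrs: dict mapping i in {1, 2, 3} to Li(dt); symbol 0 has LLR 0."""
    best, best_val = 0, 0.0
    for i, v in llrs.items():
        if v > best_val:
            best, best_val = i, v
    return best

print(decide({1: -0.4, 2: 1.3, 3: 0.2}))    # symbol 2 wins
print(decide({1: -2.0, 2: -0.5, 3: -1.1}))  # all negative: symbol 0 wins
```

If every Li(dt) is negative, symbol 0 is the most likely pair of input bits, which is why the reference LLR of zero must be included in the comparison.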
