
Convolutionally Encoded Multiple Descriptions

4.4 The MD-SISO Source Decoder

The goal of the MD-SISO source decoder is to compute the APPs of the transmitted quantizer indexes by jointly exploiting the channel information, the residual source redundancy, and the inter-description correlation induced by the MDSQ. In previous work on this problem [22][35], the source decoder uses two separate MAP detectors, each operating on a single description $\tilde{Y}_{D,1}^T$ to compute the APP $P(u_{D,t} \mid \tilde{Y}_{D,1}^T)$ of a decoded systematic symbol $u_{D,t} = l_D$. The source decoder then makes a MAP decision on the two symbols $\{l_I, l_J\}$ and uses their combination to locate the corresponding quantizer index in the index assignment matrix. Because the two MAP symbol estimates are decoded separately, this approach may report invalid codeword combinations corresponding to the empty cells of the index assignment matrix. To compensate for this shortcoming, we propose a joint MAP decoding algorithm which combines the reliability information received on the different channels and computes the APP of each possibly transmitted quantizer index $u_t = l$. For ease of presentation, the MD-SISO source decoding algorithm is separated into two parts. The first step computes the APP $P(u_t \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)$ of a decoded quantizer index $u_t$, given the two received code-symbol sequences $\{\tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T\}$. In the second step, identical for both descriptions, these index APPs are combined with a priori knowledge of the index assignment to extract the extrinsic information $L^{[ext]}_{SD}(u_{D,t})$ on every symbol $u_{D,t}$ of description $D$. This extrinsic information contains the new part of the information produced by MD-SISO source decoding and is delivered back to the corresponding channel decoder as new a priori information for the next iteration.
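The empty-cell problem of separate MAP decisions can be made concrete with a small sketch. The matrix below is a hypothetical two-diagonal index assignment (the actual assignment used here may differ); rows are description-$J$ symbols, columns are description-$I$ symbols, and the $\delta$ helpers are illustrative implementations of the column/row mappings used later in the text:

```python
import numpy as np

EMPTY = -1  # marks an unused cell of the index assignment matrix

# Hypothetical two-diagonal index assignment with 7 quantizer indexes;
# rows = symbols l_J of description J, columns = symbols l_I of description I.
A = np.array([
    [0,     1,     EMPTY, EMPTY],
    [EMPTY, 2,     3,     EMPTY],
    [EMPTY, EMPTY, 4,     5],
    [EMPTY, EMPTY, EMPTY, 6],
])

def delta_I(n):
    """Column (description-I symbol) of quantizer index n."""
    return int(np.argwhere(A == n)[0][1])

def delta_J(n):
    """Row (description-J symbol) of quantizer index n."""
    return int(np.argwhere(A == n)[0][0])

# Two per-description MAP decisions that are each plausible on their own
# can combine into a cell no quantizer index occupies:
l_I, l_J = 3, 0
invalid = (A[l_J, l_I] == EMPTY)  # True: the MDSQ never emits this pair
```

A joint decoder avoids this by assigning probability mass only to occupied cells, i.e. to actual quantizer indexes.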

The source decoding algorithm starts by computing the APP of a decoded quantizer index $u_t = l$ as follows:

$$P(u_t = l \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T) = \frac{P(u_t = l, \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)}{P(\tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)}. \qquad (4.15)$$

Since the received sequences of systematic symbols are de-interleaved and then processed by the source decoder, we have $P(u_t = l, \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T) = P(u_t = l, \tilde{U}_{I,1}^T, \tilde{Z}_{I,1}^T, \tilde{U}_{J,1}^T, \tilde{Z}_{J,1}^T)$, where $\tilde{U}_{D,1}^T = \Phi^{-1}(\tilde{X}_{D,1}^T)$. These probabilities can be further decomposed by the Bayes theorem as

$$P(u_t = l, \tilde{U}_{I,1}^T, \tilde{Z}_{I,1}^T, \tilde{U}_{J,1}^T, \tilde{Z}_{J,1}^T) = P(u_t = l, \tilde{U}_{I,1}^T, \tilde{U}_{J,1}^T)\, P(\tilde{Z}_{I,1}^T, \tilde{Z}_{J,1}^T \mid u_t = l, \tilde{U}_{I,1}^T, \tilde{U}_{J,1}^T) = \alpha_t^u(l)\, \beta_t^u(l) \prod_{D \in \{I,J\}} P(\tilde{Z}_{D,1}^T \mid u_{D,t} = l_D, \tilde{U}_{D,1}^T) \qquad (4.16)$$

where $\alpha_t^u(l) = P(u_t = l, \tilde{U}_{I,1}^t, \tilde{U}_{J,1}^t)$ and $\beta_t^u(l) = P(\tilde{U}_{I,t+1}^T, \tilde{U}_{J,t+1}^T \mid u_t = l, \tilde{U}_{I,1}^t, \tilde{U}_{J,1}^t)$. Using the Markov property of the indexes and the memoryless assumption of the channel, the forward-backward recursions of the algorithm can be expressed in the logarithmic domain, yielding the metrics $\hat{\alpha}_t^u(l)$ and $\hat{\beta}_t^u(l)$. With these metrics, the a posteriori LLR corresponding to the index APP $P(u_t = l \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)$ can be expressed as

$$L(u_t = l \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T) = \hat{\alpha}_t^u(l) + \hat{\beta}_t^u(l) - \hat{\alpha}_t^u(0) - \hat{\beta}_t^u(0) + \sum_{D \in \{I,J\}} \left\{ L^{[ext]}_{CD}(u_{D,t} = l_D) - L^{[ext]}_{CD}(u_{D,t} = 0_D) \right\}. \qquad (4.20)$$
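The structure of (4.20) can be illustrated with a small numerical sketch. All metric values, the number of indexes, and the $\delta$ mappings below are invented for illustration; computing $\hat{\alpha}$ and $\hat{\beta}$ themselves requires the full forward-backward recursions, which are not reproduced here:

```python
import numpy as np

# Hypothetical log-domain forward/backward metrics for 4 quantizer indexes
# at one time step, plus channel-decoder extrinsic LLRs per description symbol.
alpha_hat = np.array([0.0, -0.4, -1.1, -2.0])
beta_hat  = np.array([0.0, -0.2, -0.9, -1.5])
delta_I   = [0, 0, 1, 1]            # assumed description-I symbol of each index
delta_J   = [0, 1, 0, 1]            # assumed description-J symbol of each index
ext_cd_I  = np.array([0.0, 1.2])    # L^[ext]_CD(u_I,t = l_I), made-up values
ext_cd_J  = np.array([0.0, -0.5])   # L^[ext]_CD(u_J,t = l_J), made-up values

def index_app_llr(l):
    """A posteriori LLR of index l relative to index 0, following (4.20)."""
    return (alpha_hat[l] + beta_hat[l] - alpha_hat[0] - beta_hat[0]
            + ext_cd_I[delta_I[l]] - ext_cd_I[delta_I[0]]
            + ext_cd_J[delta_J[l]] - ext_cd_J[delta_J[0]])

L = np.array([index_app_llr(l) for l in range(4)])
```

Note that the LLR of the reference index $l = 0$ is zero by construction, and each description contributes its channel-decoder extrinsic information only through the symbol its $\delta$ mapping selects.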

In the next step, the APP of each decoded symbol in every description is calculated from the index APPs and used to compute the extrinsic information of the source decoder. By the properties of the index assignment matrix, this is accomplished by summing the APPs of the quantizer indexes assigned to a given description symbol. For example, the APP of a decoded symbol $u_{I,t} = l_I$ of description $I$ is given by

$$P(u_{I,t} = l_I \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T) = \sum_{n \in R_{l_I}} P(u_t = n \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T) \qquad (4.21)$$

where $R_{l_I} = \{n \mid \delta_I(n) = l_I\}$ is the subset of quantizer indexes located in column $l_I$ of the matrix. Substituting (4.17) and (4.19) into (4.21) leads to
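The marginalization in (4.21) is simply a sum over the occupied cells of one column. A minimal sketch, with assumed index APPs and an assumed $\delta_I$ mapping:

```python
import numpy as np

# Toy, already-normalized index APPs for 4 quantizer indexes (assumed values)
# and an assumed delta_I mapping from index n to its column l_I.
index_app = np.array([0.5, 0.2, 0.2, 0.1])
delta_I = [0, 0, 1, 1]

def symbol_app(l_I):
    """(4.21): sum the index APPs over R_{l_I} = {n : delta_I(n) = l_I}."""
    return sum(p for n, p in enumerate(index_app) if delta_I[n] == l_I)
```

Because every index belongs to exactly one column, the symbol APPs of a description again sum to one.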

$$\log P(u_{I,t} = l_I \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T) = \log P(\tilde{u}_{I,t} \mid u_{I,t} = l_I) + \log P(\tilde{Z}_{I,1}^T \mid u_{I,t} = l_I, \tilde{U}_{I,1}^T) + L^{[ext]}_{SD}(u_{I,t} = l_I) \qquad (4.22)$$

where the first term is the channel-related term, the second is the a priori term contributed by the channel decoder, and the third is the extrinsic term $L^{[ext]}_{SD}(u_{I,t} = l_I)$. In order to determine each of the three terms, we rewrite (4.22) in log-likelihood algebra, expressing the a posteriori LLR as the sum of the channel-related LLR $L_c(u_{I,t} = l_I)$, the a priori LLR $L_a(u_{I,t} = l_I)$, and the extrinsic LLR $L^{[ext]}_{SD}(u_{I,t} = l_I)$.

As shown in Appendix B, the a priori LLR in (4.24) is equal to the de-interleaved extrinsic information resulting from channel decoding, i.e., $L_a(u_{I,t} = l_I) = L^{[ext]}_{CD}(u_{I,t} = l_I)$. The extrinsic LLR $L^{[ext]}_{SD}(u_{I,t} = l_I)$ contains the new part of the information determined by the source decoder by exploiting the residual source redundancy as well as the inter-description correlation induced by the MDSQ. With respect to (4.23), the extrinsic LLR resulting from the source decoding can be calculated by

$$L^{[ext]}_{SD}(u_{I,t} = l_I) = L(u_{I,t} = l_I \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T) - L^{[ext]}_{CD}(u_{I,t} = l_I) - L_c(u_{I,t} = l_I) \qquad (4.27)$$

which is used, after interleaving, as a priori information in the next channel decoding round. Finally, we summarize the proposed MD-ISCD scheme as follows:

1. Initialization: Set the extrinsic information of the source decoder to $L^{[ext]}_{SD}(x_{I,t}) = L^{[ext]}_{SD}(x_{J,t}) = 0$. Set the iteration counter to $n = 0$ and define an exit condition $n_{\max}$.

2. Read the series of received sequences $\tilde{Y}_{D,1}^T$ and map all received systematic symbols $\tilde{x}_{D,t}$ to channel-related LLRs $L_c(x_{D,t})$.

3. Perform log-MAP channel decoding on each description to compute the extrinsic LLR $L^{[ext]}_{CD}(x_{D,t})$ using (4.14).

4. Perform MD-SISO source decoding by inserting the de-interleaved extrinsic LLRs $L^{[ext]}_{CD}(u_{I,t})$ and $L^{[ext]}_{CD}(u_{J,t})$ into (4.20) to compute the index a posteriori LLR $L(u_t \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)$ and into (4.23) to compute the symbol a posteriori LLR $L(u_{I,t} \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)$. Then compute the extrinsic LLR $L^{[ext]}_{SD}(u_{I,t})$ by (4.27) and forward it to the channel decoder as a priori information. Extraction of the extrinsic LLR $L^{[ext]}_{SD}(u_{J,t})$ of description $J$ from the jointly decoded sequences proceeds in the same manner.

5. Increase the iteration counter n ← n + 1. If the exit condition n = nmax is fulfilled, then continue with step 6, otherwise proceed with step 3.

6. Compute the APP of each decoded index $u_t = l$ as follows:

$$P(u_t = l \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T) = \frac{e^{L(u_t = l \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)}}{\sum_{j=0}^{2^M - 1} e^{L(u_t = j \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)}}. \qquad (4.28)$$

7. Estimate the decoder output signals $\hat{v}_t$ by (4.1) using the index APPs obtained in step 6.
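Steps 6 and 7 can be sketched together: the exponential normalization converts the index LLRs to APPs, and the APPs then weight the quantizer reproduction levels to form the output estimate. The LLR values and reproduction levels below are invented, and the estimator is assumed to be the usual APP-weighted mean-square estimate, which is what (4.1) is taken to denote:

```python
import numpy as np

def decode_output(llrs, codebook):
    """Turn index a posteriori LLRs into APPs by exponential normalization
    (step 6), then form the APP-weighted estimate over the quantizer
    reproduction levels (step 7)."""
    llrs = np.asarray(llrs, dtype=float)
    p = np.exp(llrs - llrs.max())   # subtract the max for numerical stability
    p /= p.sum()
    return p, float(p @ np.asarray(codebook, dtype=float))

# Hypothetical 2-bit example: four index LLRs and four reproduction levels.
app, v_hat = decode_output([0.0, -1.0, -2.0, -3.0], [-1.5, -0.5, 0.5, 1.5])
```

Subtracting the maximum LLR before exponentiating leaves the normalized APPs unchanged but avoids overflow for large LLR magnitudes.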

4.5 Experimental Results

Computer simulations were conducted to compare the performance of various MD-ISCD schemes for transmission of convolutionally encoded multiple descriptions over AWGN channels. First, a bit-level iterative decoding scheme, MD-ISCD1 [22], is considered for error mitigation; it uses the classical BCJR algorithm for soft-output channel decoding, assisted by the bit reliability information provided by soft-bit source decoding [36]. For the MD-ISCD1 scheme with bit interleaving, the source decoder applies two separate MAP detectors and performs turbo cross decoding to exploit the inter-description correlation [22]. Two approaches to symbol-level iterative decoding, denoted MD-ISCD2 and MD-ISCD3, are presented and investigated. Both apply a symbol interleaver and perform log-MAP symbol decoding of binary convolutional codes based on sectionalized code trellises. Unlike MD-ISCD1 and MD-ISCD2, which use two MAP detectors with each detector decoding one description, MD-ISCD3 applies a joint MAP source decoder to improve the estimation of the transmitted quantizer indexes by combining the reliability information received on the different channels. Specifically, the APP computed by MD-ISCD3 is $P(u_t \mid \tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T)$ in (4.15), whereas the other two schemes compute $\{P(u_{I,t} \mid \tilde{Y}_{I,1}^T), P(u_{J,t} \mid \tilde{Y}_{J,1}^T)\}$. The input signals considered here are first-order Gauss-Markov sources described by $v_t = \rho v_{t-1} + w_t$, where $w_t$ is zero-mean, unit-variance white Gaussian noise, with correlation coefficients $\rho = 0.8$ and $\rho = 0.95$. As indicated in [37], a value of $\rho = 0.95$ is representative of the scale factors determined in the MPEG audio codec for digital audio broadcasting. On the other hand, $\rho = 0.8$ provides a good fit to the long-time-averaged autocorrelation function of 8 kHz-sampled speech that is bandpass-filtered to the range (300 Hz, 3400 Hz) [38].
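The Gauss-Markov test sources are easy to reproduce. Below is a minimal generator following $v_t = \rho v_{t-1} + w_t$ with unit-variance $w_t$ as stated above; starting the process in its stationary distribution is a convenience not taken from the text:

```python
import numpy as np

def gauss_markov(n, rho, seed=0):
    """First-order Gauss-Markov source v_t = rho * v_{t-1} + w_t with
    w_t zero-mean, unit-variance white Gaussian noise."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    v = np.empty(n)
    v[0] = w[0] / np.sqrt(1.0 - rho**2)  # draw v_0 from the stationary distribution
    for t in range(1, n):
        v[t] = rho * v[t - 1] + w[t]
    return v

v = gauss_markov(200_000, rho=0.8)
lag1 = np.corrcoef(v[:-1], v[1:])[0, 1]  # empirical lag-1 autocorrelation ~ rho
```

With unit-variance innovations the stationary source variance is $1/(1-\rho^2)$, so the $\rho = 0.95$ source is both more correlated and much higher in power than the $\rho = 0.8$ source.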
A total of 3,000,000 input samples was processed by a scalar M-bit Lloyd-Max quantizer, and each quantizer index was mapped to two descriptions, each with R bits per symbol per channel. For each of the two descriptions, the bitstream was spread by an interleaver of length 300 bits and then channel encoded by a rate-1/2 recursive systematic convolutional code with memory order 2 and generator polynomial $G(D) = (1, (1 + D^2)/(1 + D + D^2))$.

[Figure: Parameter SNR (dB) versus $E_s/N_0$ (dB).] Figure 4.4: MD-ISCD3 performance for Gauss-Markov sources with ρ = 0.95 and (M, R) = (5, 3).
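The channel code is small enough to sketch directly. A bit-level encoder for $G(D) = (1, (1 + D^2)/(1 + D + D^2))$ under the usual convention (feedback taps from the denominator, feedforward taps from the numerator); trellis termination, interleaving, and symbol mapping are omitted:

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder with memory 2
    and G(D) = (1, (1 + D^2)/(1 + D + D^2)), as described in the text.
    Returns (systematic stream, parity stream); unterminated."""
    s1 = s2 = 0                  # shift-register state (a_{k-1}, a_{k-2})
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2          # feedback taps: 1 + D + D^2
        z = a ^ s2               # feedforward taps: 1 + D^2
        parity.append(z)
        s1, s2 = a, s1
    return list(bits), parity

sys_bits, par_bits = rsc_encode([1, 0, 0, 0])  # par_bits == [1, 1, 1, 0]
```

Because the encoder is recursive, a single input 1 excites an infinite parity response; the four-bit example above shows only its first taps.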

A preliminary experiment was first performed to examine the step-wise quality gains due to the turbo-like evaluation of channel-code and source-code redundancies.

The variation of the parameter signal-to-noise ratio (SNR) as a function of the channel SNR $E_s/N_0$ for the MD-ISCD3 simulation of Gauss-Markov sources with ρ = 0.95 and (M, R) = (5, 3) is shown in Figure 4.4. The results indicate that a turbo-like refinement of the extrinsic information from both constituent decoders makes substantial quality improvements possible. The full gain in parameter SNR is reached after three iterations. The investigation further showed that the improved performance achievable with MD-ISCD3 is more noticeable at lower channel SNR. To elaborate further, the SNR performance of the various MD-ISCD schemes was examined for Gauss-Markov sources with ρ = 0.8 and ρ = 0.95. We provide results for experiments on MDSQ with (M, R) = (4, 3) and (5, 3) in Figures 4.5 and 4.6, respectively.

[Figure: Parameter SNR (dB) versus $E_s/N_0$ (dB).] Figure 4.5: SNR performance of different decoders for (M, R) = (4, 3) and Gauss-Markov sources (ρ = 0.8, 0.95).

Note that the maximal SNR is 20.22 dB for M = 4 and 26.01 dB for M = 5 due to the Lloyd-Max quantization distortion. Three iterations of the algorithm were performed by each decoder, as further iterations did not result in a significant improvement. The results clearly demonstrate the improved performance achievable using the symbol decoders MD-ISCD2 and MD-ISCD3 in comparison to the bit-based MD-ISCD1. Furthermore, the improvement tends to increase for lower channel SNR and for more heavily correlated Gaussian sources. This indicates that the extrinsic information between the source and channel decoders is better exploited at the symbol level. The investigation further showed that there is a considerable gap between the MD-ISCD2 and MD-ISCD3 schemes. Moreover, the performance gain achievable using MD-ISCD3 increases as more diagonals are included in the index assignment.

[Figure: Parameter SNR (dB) versus $E_s/N_0$ (dB); curves for MD-ISCD1/2/3 at ρ = 0.8 and ρ = 0.95.] Figure 4.6: SNR performance of different decoders for (M, R) = (5, 3) and Gauss-Markov sources (ρ = 0.8, 0.95).

For the case of (M, R) = (4, 3) and ρ = 0.95, MD-ISCD3 yields about 1.97 dB improvement at $E_b/N_0 = -2$ dB relative to MD-ISCD2. For the case of (M, R) = (5, 3), the parameter SNR can be further improved by up to 9.91 dB. The difference between them is due to the fact that MD-ISCD2 only accounts for the information received on a single description, through the knowledge of the symbol APP $P(u_{D,t} \mid \tilde{Y}_{D,1}^T)$. In contrast, MD-ISCD3 uses the total channel outputs $\{\tilde{Y}_{I,1}^T, \tilde{Y}_{J,1}^T\}$ in its APP computation and makes the final decision by incorporating the inter-description correlation resulting from the MDSQ.
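The parameter SNR reported throughout this section is the ratio of source power to reconstruction MSE in dB; the exact normalization used in the simulations is assumed to be this conventional one:

```python
import numpy as np

def parameter_snr_db(v, v_hat):
    """Parameter SNR in dB: source power over reconstruction MSE, the
    quality measure plotted on the vertical axes of Figures 4.4-4.6."""
    v = np.asarray(v, dtype=float)
    v_hat = np.asarray(v_hat, dtype=float)
    return 10.0 * np.log10(np.mean(v**2) / np.mean((v - v_hat)**2))

# A 10% relative reconstruction error on every sample gives 20 dB:
snr = parameter_snr_db([1.0, -1.0, 1.0, -1.0], [0.9, -0.9, 0.9, -0.9])
```

Under this definition, the quoted ceilings (20.22 dB for M = 4, 26.01 dB for M = 5) are reached when the only remaining distortion is that of the Lloyd-Max quantizer itself.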

Chapter 5