


5.4 SISO Channel Decoding of Convolutionally-Encoded VLC

For the transmission scheme with channel coding, a soft-output channel decoder can be used to provide both the decoded bits and their reliability information for further processing to improve the system performance. The commonly used BCJR algorithm is a trellis-based MAP decoding algorithm for both linear block and convolutional codes.

The derivation presented in [23] leads to a forward-backward recursive computation on a bit-level trellis diagram, in which two branches leave each state and every branch represents a single symbol bit. Proper sectionalization of a bit-level code trellis may yield useful trellis structural properties [31][32] and allows us to devise SISO channel decoding algorithms that incorporate parameter-oriented extrinsic information from the source decoder. To this end, we propose a modified BCJR algorithm which parses the received code-bit sequence into blocks and computes the APP of each parameter symbol on a symbol-by-symbol basis. Unlike the classical BCJR algorithm, which decodes one bit at a time, our scheme decodes the parameter symbols as nonbinary symbols matched to the number of bits in a symbol. By parsing the code-bit sequence into l(c(x_t))-bit symbols, we are in essence merging l(c(x_t)) stages of the original bit-level code trellis into one. The sectionalized trellis of previous work [31][32], originally proposed for fixed-length codevectors, is not suited to symbol-by-symbol channel decoding of convolutionally-encoded VLCs.

In the case of variable-length branches, different paths entering a state have consumed different numbers of bits from the received sequence and must therefore be extended differently [48]. For this reason, we extend the 1-dimensional state s_t to a 2-dimensional state σ_t = (s_t, g_t). As an example, Figure 5.3 illustrates three stages of the bit-level trellis diagram of a rate-1/2 convolutional encoder with generator polynomials (7, 5)_8; the solid and dashed lines correspond to input bits 0 and 1, respectively. Figure 5.3 also shows the sectionalized trellis diagram obtained when two stages of the original bit-level trellis are merged together. In general, there are 2^M branches leaving and entering each state in an l(c(λ))-stage merged trellis diagram. Having defined the trellis structure in this way, there is one symbol APP corresponding to each branch, which represents a particular parameter symbol x_t = λ. For convenience, we say that the sectionalized trellis diagram forms a finite-state machine defined by its state transition function Fσ(x_t, σ_{t−1}) and output function Fp(x_t, σ_{t−1}). Viewed from this perspective, the code-bit combination associated with the branch from state σ_{t−1} to state σ_t = Fσ(x_t, σ_{t−1}) can be written as y_t = (x_t, z_t), where z_t = Fp(x_t, σ_{t−1}) is the variable-length codevector generated by the convolutional encoder with memory order ν. An example of the 3-dimensional trellis diagram is shown in Figure 5.4 for N = 5, T = 2 and ν = 2.
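To make this construction concrete, the following Python sketch builds the merged symbol-level branches of such a trellis by stepping the bit-level (7, 5)_8 encoder of Figure 5.3 through all bits of a VLC codeword; the VLC codebook, the function names, and the treatment of the branch label as a plain list of code bits are illustrative assumptions, not the exact configuration used in this chapter.

# Illustrative sketch (hypothetical VLC codebook): merge the l(c(lambda)) bit-level
# trellis stages of the rate-1/2 (7,5)_8 convolutional encoder into one symbol-level
# branch with 2-dimensional state sigma = (s, g).

VLC = {0: [0], 1: [1, 0], 2: [1, 1, 0], 3: [1, 1, 1]}   # symbol -> VLC codeword bits (example)

def conv_step(s, bit):
    """One bit-level trellis stage of the (7,5)_8 encoder (memory order nu = 2).
    The state s holds the last two input bits; returns (next state, two output bits)."""
    b1 = (s >> 1) & 1                 # input bit from two stages ago
    b0 = s & 1                        # previous input bit
    out1 = bit ^ b0 ^ b1              # generator 7 (octal) = 1 + D + D^2
    out2 = bit ^ b1                   # generator 5 (octal) = 1 + D^2
    return ((s << 1) | bit) & 0b11, [out1, out2]

def merged_branch(symbol, sigma):
    """Branch of the merged trellis for parameter symbol `symbol` leaving sigma = (s, g):
    returns the successor state (s', g + l(c(symbol))) and the code bits on the branch."""
    s, g = sigma
    code_bits = []
    for bit in VLC[symbol]:
        s, out = conv_step(s, bit)
        code_bits += out
    return (s, g + len(VLC[symbol])), code_bits

# Example: the branch taken by symbol 2 from encoder state 0 with bit counter 0.
print(merged_branch(2, (0, 0)))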

We next apply this new sectionalized code trellis to compute the APP of a systematic symbol x_t = λ given the received code sequence ˜Y_1^T = {˜y_1, ˜y_2, . . . , ˜y_T}:

P(x_t = λ | ˜y_1^T) = 1/P(˜y_1^T) · Σ_{σ_t} α_{x_t}(λ, σ_t) β_{x_t}(λ, σ_t)   (5.14)

in which 1/P(˜y_1^T) is a normalizing factor. For the recursive implementation, the forward and backward recursions compute the following metrics:

α_{x_t}(λ, σ_t) = Σ_q Σ_{σ_{t−1}} γ_{λ,q}(˜y_t, σ_t, σ_{t−1}) α_{x_{t−1}}(q, σ_{t−1})   (5.15)

β_{x_t}(λ, σ_t) = Σ_q Σ_{σ_{t+1}} γ_{q,λ}(˜y_{t+1}, σ_{t+1}, σ_t) β_{x_{t+1}}(q, σ_{t+1})   (5.16)

where the branch metric used in (5.15) and (5.16) is

γ_{λ,q}(˜y_t, σ_t, σ_{t−1}) = P(x_t = λ, σ_t, ˜y_t | x_{t−1} = q, σ_{t−1})
= P(s_t | x_t = λ, g_t, ˜y_t, x_{t−1} = q, σ_{t−1}) · P(˜y_t | x_t = λ, g_t, x_{t−1} = q, σ_{t−1}) · P(x_t = λ, g_t | x_{t−1} = q, σ_{t−1})
= P(s_t | x_t = λ, s_{t−1}) · P(˜y_t | x_t = λ, g_t, σ_{t−1}) · P(x_t = λ, g_t | x_{t−1} = q, σ_{t−1}).   (5.17)

Having a proper representation of the branch metric γ_{λ,q}(˜y_t, σ_t, σ_{t−1}) is the critical step in applying symbol decoding to error mitigation, and one that conditions all subsequent steps of the implementation. As a practical matter, several additional factors must be considered to take advantage of the symbol-level trellis structure and the AWGN channel assumption. First, making use of the merged variable-length code trellis, the value of P(s_t | x_t = λ, s_{t−1}) is either one or zero, depending on whether symbol λ is associated with the transition from state σ_{t−1} to state σ_t = Fσ(x_t = λ, σ_{t−1}). For AWGN channels, the second term in (5.17) reduces to

P(˜y_t | x_t = λ, g_t, σ_{t−1}) = P(˜x_t | x_t = λ, ˜z_t, g_t, σ_{t−1}) · P(˜z_t | x_t = λ, σ_{t−1}, g_t)
= P(˜x_t | x_t = λ, g_t) · P(˜z_t | z_t = Fp(x_t = λ, σ_{t−1}), g_t)   (5.18)

where the conditional pdfs for the received systematic and parity symbols can be computed analogously to (5.6). The third term in (5.17) reduces to P(x_t = λ, g_t) under the assumption that x_t is uncorrelated with x_{t−1}, which is indeed the case since x_t is the interleaved version of the parameter symbols. Within the iterations, the a priori information P(x_t = λ, g_t) can be improved by additional a priori knowledge provided by the SISO source decoder in the form of its extrinsic probability PSD[ext](x_t = λ, g_t).
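A minimal sketch of how the three factors of (5.17) can be evaluated under the AWGN assumption is given below; the BPSK mapping, the noise variance, and the a priori table (which absorbs the source decoder's extrinsic probability across iterations) are assumptions made for illustration and do not follow from the text.

import math

def gaussian_pdf(received, code_bit, sigma2):
    """Likelihood of a received soft value given a BPSK-mapped code bit (0 -> +1, 1 -> -1)."""
    mean = 1.0 - 2.0 * code_bit
    return math.exp(-((received - mean) ** 2) / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

def branch_metric(lam, valid_transition, rx_sys, sys_bits, rx_par, par_bits, a_priori, sigma2):
    """Branch metric for the hypothesis x_t = lam on one merged-trellis branch:
    trellis indicator * P(x~_t | x_t) * P(z~_t | z_t) * a priori probability."""
    if not valid_transition:              # first factor of (5.17): 0 or 1 on the merged trellis
        return 0.0
    likelihood = 1.0
    for r, b in zip(rx_sys, sys_bits):    # systematic part, first factor of (5.18)
        likelihood *= gaussian_pdf(r, b, sigma2)
    for r, b in zip(rx_par, par_bits):    # parity part z_t = F_p(x_t, sigma_{t-1}), second factor
        likelihood *= gaussian_pdf(r, b, sigma2)
    return likelihood * a_priori[lam]     # third factor P(x_t = lam, g_t), refined by the source decoder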

The decoder's next step is to compute the deinterleaved extrinsic information for each symbol. We first compute the APPs for the deinterleaved symbols u_t = λ, taking the VLC trellis states into consideration, as follows:

P_CD(g_{t−1}, u_t = λ, g_t | ˜y_1^T) = C · P_CD(u_t = λ | ˜y_1^T)   (5.19)

where the normalization factor is

C = Σ_q Σ_{(g_t, g_{t−1}) ∈ {(g_t, g_{t−1}) : g_t − g_{t−1} = l(q)}} P_CD(u_t = q | ˜y_1^T)

and P_CD(u_t = q | ˜y_1^T) = Φ^{−1}(P_CD(x_t = q | ˜y_1^T)).

An iterative process using the SISO channel decoder as a constituent decoder is realizable if the APP P_CD(g_{t−1}, u_t = λ, g_t | ˜y_1^T) can be separated, according to the Bayes theorem, into three terms: the a priori probability P_a(u_t = λ, g_t), the channel-related probability P_c(u_t = λ, g_t) = P(˜u_t | u_t = λ, g_t), and an extrinsic term P_CD[ext](u_t = λ, g_t). Equation (5.19) leads to

P_CD(g_{t−1}, u_t = λ, g_t | ˜y_1^T) = P(u_t = λ, g_{t−1}, g_t, ˜u_1^T, ˜z_1^T) / P(˜y_1^T)
= P(u_t = λ, g_{t−1}, g_t, ˜u_1^T) · P(˜z_1^T | u_t = λ, g_{t−1}, g_t, ˜u_1^T) / P(˜y_1^T)
= P(˜u_t | u_t = λ, g_{t−1}, g_t, ˜u_1^{t−1}, ˜u_{t+1}^T) · P(u_t = λ, g_t, g_{t−1}) · P(˜u_1^{t−1}, ˜u_{t+1}^T) · P(˜z_1^T | u_t = λ, g_{t−1}, g_t, ˜u_1^T) / P(˜y_1^T)
= C · P_c(u_t = λ, g_t) · P(u_t = λ, g_t) · P_CD[ext](u_t = λ, g_t)
(5.20)

where C is a normalization factor and

P_CD[ext](u_t = λ, g_t) = P(˜z_1^T | u_t = λ, g_{t−1}, g_t, ˜u_1^T).   (5.21)

With this, the deinterleaved extrinsic probability P_CD[ext](u_t = λ, g_t) can be calculated by

P_CD[ext](u_t = λ, g_t) = P_CD(g_{t−1}, u_t = λ, g_t | ˜y_1^T) / [P_c(u_t = λ, g_t) · P_SD[ext](u_t = λ, g_t) · P(u_t = λ, g_t)]   (5.22)

and used as new a priori information for the source decoder. Similarly, the interleaved extrinsic probability P_CD[ext](x_t = λ, g_t) can be calculated by

P_CD[ext](x_t = λ, g_t) = P_CD(g_{t−1}, x_t = λ, g_t | ˜y_1^T) / [P_c(x_t = λ, g_t) · P_SD[ext](x_t = λ, g_t) · P(x_t = λ, g_t)].   (5.23)
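The divisions in (5.22) and (5.23) amount to stripping the channel-related term and the old a priori terms from the symbol APP; the sketch below illustrates this with hypothetical dictionaries indexed by the symbol value and with an explicit renormalization, which is a common practical choice rather than something prescribed by the equations.

def extrinsic_from_app(app, channel_prob, source_extrinsic, a_priori, eps=1e-30):
    """Per-symbol extrinsic probability: the APP divided by the channel-related term,
    the source-decoder extrinsic term and the a priori term, then renormalized."""
    ext = {}
    for lam, p in app.items():
        denom = channel_prob[lam] * source_extrinsic[lam] * a_priori[lam]
        ext[lam] = p / max(denom, eps)    # guard against division by zero
    total = sum(ext.values())
    return {lam: v / total for lam, v in ext.items()} if total > 0.0 else ext

# Example with a hypothetical 4-symbol alphabet.
app = {0: 0.1, 1: 0.6, 2: 0.2, 3: 0.1}
flat = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
print(extrinsic_from_app(app, channel_prob=flat, source_extrinsic=flat, a_priori=flat))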

Finally, we summarize the proposed symbol-based ISCD of convolutionally-encoded VLC as follows:

1. Initialization:

Set the extrinsic information of source decoding to PSD[ext](xt, gt) = 1. Set the iteration counter to n = 0 and define an exit condition nmax.

2. Read the series of received sequences ˜y_1^T and map all received systematic symbols ˜x_t to channel-related probabilities P_c(x_t, g_t).

3. Perform MAP channel decoding to compute the symbol APP P_CD(x_t = λ | ˜y_1^T) by substituting (5.15) and (5.16) into (5.14). Then the symbol APP for each interleaved symbol that takes the VLC trellis states into account, P(x_t = λ, g_t, g_{t−1} | ˜y_1^T), is computed by (5.19). The de-interleaved and interleaved extrinsic probabilities P_CD[ext](u_t, g_t) and P_CD[ext](x_t, g_t) are calculated by (5.22) and (5.23), respectively. These extrinsic probabilities are used as a priori information for the source decoder.

4. Perform SISO source decoding of the VLC by inserting the de-interleaved extrinsic probability P_CD[ext](u_t, g_t) into (5.5); the de-interleaved extrinsic probability for each symbol is then calculated by (5.11). Compute the symbol a posteriori probability P_SD(u_t, g_t, g_{t−1} | ˜y_1^T) by (5.9) and substitute these probabilities into (5.1) to obtain the APPs P_SD(u_t | ˜y_1^T). Interleave these APPs to compute the interleaved symbol APP for each symbol that takes the trellis states into account, P(x_t = λ, g_t, g_{t−1} | ˜y_1^T), by (5.13). Then the interleaved extrinsic probability P_SD[ext](x_t, g_t) is computed by (5.12) and forwarded to the channel decoder as a priori information.

5. Increase the iteration counter n ← n + 1. If the exit condition n = nmax is fulfilled, then continue with step 6, otherwise proceed with step 3.

6. Use the symbol APPs obtained from step 4 to calculate the decoder output signals ˆv_t by MAP estimation, where the estimated values are given by

ˆv_t = c_ˆλ,   ˆλ = arg max_λ P_SD(u_t = λ | ˜y_1^T).   (5.24)
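The schedule of steps 1-6 can be summarized by the following loop; channel_decode, source_decode, interleave and deinterleave are placeholders standing in for the operations referenced by (5.5), (5.11)-(5.16), (5.19), (5.22) and (5.23), so this is only a structural sketch of the iteration, not an implementation of the decoders themselves.

def iscd_decode(received, n_max, channel_decode, source_decode, interleave, deinterleave):
    """Structural sketch of the symbol-based iterative source-channel decoding loop."""
    ext_sd = None            # step 1: non-informative source extrinsic, P_SD[ext] = 1
    estimates = None
    for _ in range(n_max):   # steps 3-5 repeated until the exit condition n = n_max
        # Step 3: symbol-based MAP channel decoding with the current a priori information.
        ext_cd_interleaved = channel_decode(received, ext_sd)
        ext_cd = deinterleave(ext_cd_interleaved)
        # Step 4: SISO source decoding of the VLC with the channel extrinsic as a priori.
        ext_sd_deinterleaved, estimates = source_decode(received, ext_cd)
        ext_sd = interleave(ext_sd_deinterleaved)
    # Step 6: the MAP estimates v^_t computed from the final symbol APPs in step 4.
    return estimates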

Bibliography

[1] J. Rosenberg, L. Qiu, and H. Schulzrinne, "Integrating packet FEC into adaptive voice playout buffer algorithms on the Internet," in Proc. IEEE INFOCOM 2000, vol. 3, Tel Aviv, Israel, Mar. 2000, pp. 1705-1714.

[2] W. Jiang and A. Ortega, "Multiple description speech coding for robust communication over lossy packet networks," in Proc. IEEE International Conference on Multimedia and Expo, New York, USA, Aug. 2000, vol. 1, pp. 444-447.

[3] Y. J. Liang, E. G. Steinbach, and B. Girod, "Multi-stream voice over IP using packet path diversity," in Proc. IEEE Fourth Workshop on Multimedia Signal Processing, 2001, pp. 555-560.

[4] J. Balam and J. D. Gibson, "Multiple descriptions and path diversity for voice communications over wireless mesh networks," IEEE Transactions on Multimedia, Aug. 2007.

[5] V. K. Goyal, "Multiple description coding: compression meets the network," IEEE Signal Processing Magazine, Sep. 2001.

[6] A. Ingle and V. A. Vaishampayan, "DPCM system design for diversity systems with applications to packetized speech," IEEE Trans. Speech and Audio Processing, vol. 3, no. 1, pp. 48-58, Jan. 1995.

[7] W. Jiang and A. Ortega, "Multiple description speech coding for robust communication over lossy packet networks," in Proc. IEEE Int. Conf. Multimedia and Expo, 2000, vol. 1, pp. 444-447.

[8] B. Bessette, R. Salami, R. Lefebvre, M. Jelinek, J. Rotola-Pukkila, J. Vainio, H. Mikkola, and K. Jarvinen, "The adaptive multirate wideband speech codec (AMR-WB)," IEEE Trans. Speech and Audio Processing, Nov. 2002.

[9] International Telecommunication Union, "Coding of Speech at 8 kbit/s Using Conjugate-Structure Algebraic-Code-Excited Linear-Prediction (CS-ACELP)," ITU-T Recommendation G.729, Nov. 2000.

[10] V. A. Vaishampayan, "Design of multiple description scalar quantizers," IEEE Trans. Inform. Theory, vol. 39, pp. 821-834, May 1993.

[11] S. B. Moon, J. Kurose, and D. Towsley, "Packet audio playout delay adjustment: performance bounds and algorithms," Multimedia Systems, vol. 6, no. 1, pp. 17-28, Jan. 1998.

[12] L. Sun and E. Ifeachor, "Voice quality prediction models and their application in VoIP networks," IEEE Transactions on Multimedia, Aug. 2006.

[13] K. Fujimoto, S. Ata, and M. Murata, "Adaptive playout buffer algorithm for enhancing perceived quality of streaming applications," in Proc. IEEE GLOBECOM, Nov. 2002.

[14] S. Lin and D. J. Costello, Error Control Coding, Pearson Prentice Hall, New Jersey, 2004.

[15] J. Rosenberg, L. Qiu, and H. Schulzrinne, "Integrating packet FEC into adaptive voice playout buffer algorithms on the Internet," in Proc. IEEE INFOCOM 2000, vol. 3, Tel Aviv, Israel, Mar. 2000, pp. 1705-1714.

[16] C. Boutremans and J. Boudec, "Adaptive joint playout buffer and FEC adjustment for Internet telephony," in Proc. IEEE INFOCOM, 2003.

[17] Chia-Chen Kuo, Ming-Syan Chen, and Jeng-Chun Chen, "An adaptive transmission scheme for audio and video synchronization based on Real-time Transport Protocol," in Proc. IEEE International Conference on Multimedia and Expo, Tokyo, Japan, Aug. 2001.

[18] International Telecommunication Union, “The E-model, a computational model for use in transmission planning, ” ITU-T Recommendation G.107, July 2000.

[19] N. Gortz, "A generalized framework for iterative source-channel decoding," Annals of Telecommunications, Special Issue on Turbo Codes, pp. 435-446, July/August 2001.

[20] M. Adrat, P. Vary, and J. Spittka, "Iterative source-channel decoder using extrinsic information from softbit source decoding," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 4, pp. 2653-2656, Salt Lake City, Utah, USA, May 2001.

[21] M. Srinivasan, "Iterative decoding of multiple descriptions," in Proc. IEEE ICC, March 1999, pp. 3-12.

[22] J. Barros, J. Hagenauer, and N. Gortz, "Turbo cross decoding of multiple descriptions," in Proc. IEEE ICC, vol. 3, 2002, pp. 1398-1402.

[23] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, Mar. 1974.

[24] R. Cole and J. Rosenbluth, "Voice over IP performance monitoring," Computer Communication Review, vol. 31, no. 2, Apr. 2001.

[25] International Telecommunication Union, "Perceptual Evaluation of Speech Quality (PESQ), an Objective Method for End-to-end Speech Quality Assessment of Narrow-band Telephone Networks and Speech Codecs," ITU-T Recommendation P.862, Feb. 2001.

[26] P. A. Barrett, R. M. Voelcker, and A. V. Lewis, "Speech transmission over digital mobile radio channels," BT Technology Journal, vol. 14, no. 1, pp. 45-56, Jan. 1996.

[27] L. Ding and R. A. Goubran, "Assessment of effects of packet loss on speech quality in VoIP," in Proc. IEEE International Workshop on Haptic, Audio and Visual Environments and Their Applications, pp. 49-54, Sep. 2003.

[28] International Telecommunication Union, "Objective measuring apparatus, Appendix 1: Test signals," ITU-T Recommendation P.50, Feb. 1998.

[29] E. K. P. Chong and S. H. Zak, An Introduction to Optimization, John Wiley & Sons, Inc., 2001.

[30] Chun-Feng Wu and Wen-Whei Chang, "Perceptual optimization of playout buffer in VoIP applications," in Proc. ChinaCom, Oct. 2006.

[31] Y. Liu, S. Lin, and M. P. C. Fossorier, "MAP algorithms for decoding linear block codes based on sectionalized trellis diagrams," IEEE Trans. Commun., vol. 48, pp. 577-587, April 2000.

[32] M. Bingeman and A. K. Khandani, "Symbol-based turbo codes," IEEE Communications Letters, vol. 3, pp. 285-287, Oct. 1999.

[33] J. A. Erfanian, S. Pasupathy, and G. Gulak, "Reduced complexity symbol detectors with parallel structures for ISI channels," IEEE Trans. Commun., vol. 42, pp. 1661-1671, 1994.

[34] P. Robertson, E. Villebrun, and P. Hoeher, “A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain,” in Proc. IEEE International Conference on Communication, vol. 2, pp. 1009-1013, Jun 1995.

[35] M. Srinivasan, “Iterative decoding of multiple descriptions,” in Proc. IEEE ICC, March 1999, pp. 3-12.

[36] T. Fingscheidt and P. Vary, “Softbit speech decoding: a new approach to error concealment,” IEEE Trans. Speech and Audio Processing, vol. 9, no. 3, pp. 240-251, March 2001.

[37] N. Gortz, "On the iterative approximation of optimal joint source-channel decoding," IEEE J. Select. Areas Commun., vol. 19, no. 9, pp. 1662-1670, 2001.

[38] N. S. Jayant and P. Noll, Digital Coding of Waveforms, Prentice-Hall, Englewood Cliffs, N.J., 1984

[39] R. Bauer and J. Hagenauer, "On variable length codes for iterative source/channel decoding," in Proc. IEEE Data Compression Conference, March 2001, pp. 273-282.

[40] R. Thobaben and J. Kliewer, "Low-complexity iterative joint source-channel decoding for variable-length encoded Markov sources," IEEE Transactions on Communications, vol. 53, no. 12, pp. 2054-2064, Dec. 2005.

[41] M. A. Bernard and B. D. Sharma, "Some combinatorial results on variable length error correcting codes," Ars Combinatoria, vol. 25B, 1988, pp. 181-194.

[42] R. Bauer and J. Hagenauer, "Iterative source/channel-decoding using reversible variable length codes," in Proc. IEEE Data Compression Conference, Snowbird, USA, March 2000, pp. 93-102.

[43] Y. Takishima, M. Wada, and H. Murakami, "Reversible variable length codes," IEEE Trans. on Comm., vol. COM-43, no. 2/3/4, 1995, pp. 158-162.

[44] X. Wang and X. Wu, "Joint source-channel decoding of multiple description quantized and variable length coded Markov sequences," in Proc. IEEE International Conference on Multimedia and Expo, July 2006, pp. 1429-1432.

[45] T. Guionnet, C. Guillemot, and E. Fabre, "Soft decoding of multiple descriptions," in Proc. IEEE International Conference on Multimedia and Expo (ICME), vol. 2, Lausanne, Switzerland, August 26-29, 2002, pp. 601-604.

[46] J. Kliewer and R. Thobaben, "Iterative joint source-channel decoding of variable-length codes using residual source redundancy," IEEE Transactions on Wireless Communications, vol. 4, no. 3, pp. 919-929, May 2005.

[47] R. Bauer and J. Hagenauer, "Symbol-by-symbol MAP decoding of variable length codes," in Proc. 3rd ITG Conf. Source and Channel Coding, Munich, Germany, Jan. 2000, pp. 111-116.

[48] K. Sayood, H. H. Otu, and N. Demir, "Joint source/channel coding for variable length codes," IEEE Transactions on Communications, vol. 48, no. 5, pp. 787-794, May 2000.

Appendix A

This section gives the detailed computation of R(l)(m, n, DF,i) and S(l)(m, n, DF,i) when (1) a Reed-Solomon code (N, K) is used, (2) packets are sent over a Gilbert channel and (3) the FEC delay of packet i is DF,i. For m = 1, n ≥ 1, R(l)(1, n, DF,i) is the probability that none of the packets are missing in the next n − 1 packets following the network loss of packet i, and is given by

R^(l)(1, n, D_F,i) = Pr(W_{i+1}^{i+n−1} = 0^{n−1} | W_i = 1)
= q^(l) (1 − p^(l))^{n−2} · ∏_{h=1}^{n−1} (1 − e^(l)_{b,i+h}).   (A.1)
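As a numerical illustration of (A.1), the sketch below evaluates R^(l)(1, n, D_F,i) for a Gilbert channel; the parameter values and the per-packet late-loss probabilities e_b are arbitrary illustrative numbers, and the FEC delay D_F,i enters only implicitly through those probabilities.

def r_first(n, q, p, e_b, i):
    """R(1, n): probability that none of the n-1 packets following the network loss of
    packet i are missing, per (A.1): q * (1-p)^(n-2) * prod_{h=1}^{n-1} (1 - e_b[i+h])."""
    prob = q * (1.0 - p) ** (n - 2)
    for h in range(1, n):
        prob *= 1.0 - e_b[i + h]
    return prob

# Example with illustrative parameters (not taken from the thesis experiments).
e_b = [0.05] * 20            # hypothetical late-loss probabilities e_b,i for a packet run
print(r_first(n=4, q=0.7, p=0.1, e_b=e_b, i=3))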

For 2 ≤ m ≤ n, we compute R^(l)(m, n, D_F,i) conditioned on the events {A_j, B_j, C_j, j = 0, 1, . . . , n − m} on the arriving states of packets:

A_j = {W_i^{i+j+1} = 1 0^j 1}
B_j = {W_i^{i+j+1} = 1 0^j 2}
C_j = {m − 2 missing packets in W_{i+j+2}^{i+n−1}}
(A.2)

where 0^j is shorthand for j successive 0's. For a Gilbert loss model with parameters p^(l) and q^(l), we have

Pr(A_j) = (1 − q^(l)),   j = 0
Pr(A_j) = q^(l) (1 − p^(l))^{j−1} p^(l) ∏_{h=1}^{j} (1 − e^(l)_{b,i+h}),   j ≥ 1   (A.3)

Pr(B_j) = q^(l) (1 − p^(l))^{j} ∏_{h=1}^{j} (1 − e^(l)_{b,i+h}) · e^(l)_{b,i+j+1},   j ≥ 0   (A.4)

Pr(C_j | A_j) = Pr(m − 2 missing packets in W_{i+j+2}^{i+n−1} | W_i^{i+j+1} = 1 0^j 1)

From the total probability theorem, R^(l)(m, n, D_F,i) is then obtained by summing the contributions of the events A_j and B_j over j = 0, 1, . . . , n − m. Similarly, the probability ˜R^(l)(m, n, D_F,i) can be computed by recurrence. For m = 1, n ≥ 1, S^(l)(1, n, D_F,i) is the probability that none of the packets are missing in the next n − 1 packets following the late loss of packet i, and is given by

S^(l)(1, n, D_F,i) = Pr(W_{i+1}^{i+n−1} = 0^{n−1} | W_i = 2)
= e^(l)_{b,i} (1 − p^(l))^{n−1} · ∏_{h=1}^{n−1} (1 − e^(l)_{b,i+h}).   (A.9)

For 2 ≤ m ≤ n, we compute S^(l)(m, n, D_F,i) conditioned on the events {C_j, D_j, E_j, j = 0, 1, . . . , n − m}.

For a Gilbert loss model with parameters p^(l) and q^(l), we have

Pr(D_j) = e^(l)_{b,i} (1 − p^(l))^{j} p^(l) ∏_{h=1}^{j} (1 − e^(l)_{b,i+h}).

From the total probability theorem, S^(l)(m, n, D_F,i) is then obtained by summing the contributions of the events D_j and E_j over j = 0, 1, . . . , n − m. Similarly, ˜S^(l)(m, n, D_F,i) can be computed by recurrence as

˜S^(l)(m, n, D_F,i) =
e^(l)_{b,i} (1 − p^(l))^{n−1} · ∏_{h=1}^{n−1} (1 − e^(l)_{b,i−h}),   m = 1, n ≥ 1
Σ_{j=0}^{n−m} { e^(l)_{b,i} (1 − p^(l))^{j} ∏_{h=1}^{j} (1 − e^(l)_{b,i−h}) · [ p^(l) ˜R^(l)(m − 1, n − j − 1, D_F,i−j−1) + (1 − p^(l)) e^(l)_{b,i−j−1} ˜S^(l)(m − 1, n − j − 1, D_F,i−j−1) ] },   2 ≤ m ≤ n
(A.16)
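Read this way, (A.16) is naturally evaluated by a memoized recursion; the sketch below follows the reconstruction above, folds the FEC delay argument D_F,i into the packet index, and takes the companion recurrence ˜R^(l) as a supplied callable, so it only shows the structure of the computation under those assumptions.

from functools import lru_cache

def make_s_tilde(p, e_b, r_tilde):
    """Returns a function s_tilde(m, n, i) implementing the recurrence (A.16);
    r_tilde(m, n, i) stands in for the companion recurrence R~ and is assumed given."""
    @lru_cache(maxsize=None)
    def s_tilde(m, n, i):
        if m == 1:                                   # first case of (A.16), n >= 1
            prob = e_b[i] * (1.0 - p) ** (n - 1)
            for h in range(1, n):
                prob *= 1.0 - e_b[i - h]
            return prob
        total = 0.0                                  # second case of (A.16), 2 <= m <= n
        for j in range(0, n - m + 1):
            term = e_b[i] * (1.0 - p) ** j
            for h in range(1, j + 1):
                term *= 1.0 - e_b[i - h]
            total += term * (p * r_tilde(m - 1, n - j - 1, i - j - 1)
                             + (1.0 - p) * e_b[i - j - 1] * s_tilde(m - 1, n - j - 1, i - j - 1))
        return total
    return s_tilde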

Appendix B

In this appendix we show that the a priori LLR in (4.24) is equal to the de-interleaved sequence of extrinsic information provided by the SISO channel decoder, i.e., L_a(u_{I,t} = l_I) = L_CD^[ext](u_{I,t} = l_I). The APP of a systematic symbol x_{D,t} = l_D, given the received code sequences ˜Y_{D,1}^T = (˜X_{D,1}^T, ˜Z_{D,1}^T), can be decomposed by using the Bayes theorem as

P(x_{D,t} = l_D | ˜Y_{D,1}^T)
= P(x_{D,t} = l_D, ˜X_{D,1}^T) · P(˜Z_{D,1}^T | x_{D,t} = l_D, ˜X_{D,1}^T) / P(˜Y_{D,1}^T)
= P(˜x_{D,t} | x_{D,t} = l_D, ˜X_{D,1}^{t−1}, ˜X_{D,t+1}^T) · P(x_{D,t} = l_D, ˜X_{D,1}^{t−1}, ˜X_{D,t+1}^T) · P(˜Z_{D,1}^T | x_{D,t} = l_D, ˜X_{D,1}^T) / P(˜Y_{D,1}^T)
= C · P(˜x_{D,t} | x_{D,t} = l_D) · P(x_{D,t} = l_D) · P(˜Z_{D,1}^T | x_{D,t} = l_D, ˜X_{D,1}^T)
(B.1)

where C = P(˜X_{D,1}^{t−1}, ˜X_{D,t+1}^T) / P(˜Y_{D,1}^T). We rewrite (B.1) in log-likelihood algebra as

L(x_{D,t} = l_D | ˜Y_{D,1}^T) = L_a(x_{D,t} = l_D) + L_c(x_{D,t} = l_D) + L_CD^[ext](x_{D,t} = l_D)   (B.2)

with

L_CD^[ext](x_{D,t} = l_D) = log [ P(˜Z_{D,1}^T | x_{D,t} = l_D, ˜X_{D,1}^T) / P(˜Z_{D,1}^T | x_{D,t} = 0, ˜X_{D,1}^T) ].   (B.3)

Since the de-interleaved sequence of L[ext]CD (xD,t = lD) is used by the source decoder, we have

L_CD^[ext](u_{D,t} = l_D) = log [ P(˜Z_{D,1}^T | u_{D,t} = l_D, ˜U_{D,1}^T) / P(˜Z_{D,1}^T | u_{D,t} = 0, ˜U_{D,1}^T) ]   (B.4)

where ˜UD,1T = Φ−1( ˜XD,1T ). Once the LLR L[ext]CD (uD,t= lD) has been determined, we can compute the probability as follows

P(˜Z_{D,1}^T | u_{D,t} = l_D, ˜U_{D,1}^T) = e^{L_CD^[ext](u_{D,t} = l_D)} / Σ_{j=0}^{2^R − 1} e^{L_CD^[ext](u_{D,t} = j)}.   (B.5)
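Equation (B.5) is the standard conversion from extrinsic LLR values back to probabilities; a small sketch, assuming the LLRs for one symbol position are stored in a list indexed by j = 0, . . . , 2^R − 1, is given below (the maximum is subtracted before exponentiation purely for numerical stability, which leaves the ratio in (B.5) unchanged).

import math

def llr_to_prob(llrs):
    """Convert extrinsic LLRs L[ext](u_D,t = j), j = 0..2^R-1, into probabilities via (B.5):
    P(j) = exp(L_j) / sum_k exp(L_k)."""
    m = max(llrs)
    exps = [math.exp(l - m) for l in llrs]
    total = sum(exps)
    return [e / total for e in exps]

# Example: a four-symbol alphabet (R = 2) with illustrative LLR values.
print(llr_to_prob([0.0, 1.3, -0.4, 2.1]))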