IEICE TRANS. COMMUN., VOL.E93-B, NO.12, DECEMBER 2010, pp.3555-3563. DOI: 10.1587/transcom.E93.B.3555

PAPER

Iterative Source-Channel Decoding Using Symbol-Level Extrinsic Information

Chun-Feng WU, Nonmember and Wen-Whei CHANG, Member
The authors are with National Chiao Tung University, Hsinchu, Taiwan, ROC. E-mail: wwchang@cc.nctu.edu.tw
Manuscript received July 1, 2010.

SUMMARY  Transmission of convolutionally encoded source-codec parameters over noisy channels can benefit from the turbo principle through iterative source-channel decoding. We first formulate a recursive implementation based on sectionalized code trellises for MAP symbol decoding of binary convolutional codes. Performance is further enhanced by the use of an interpolative softbit source decoder that takes into account the channel outputs within an interleaving block. Simulation results indicate that our proposed scheme allows its constituent decoders to exchange symbol-level extrinsic information and achieves high robustness against channel noise.

key words: iterative source-channel decoding, sectionalized code trellis, extrinsic information

1. Introduction

With the rapid development of wireless and multimedia communications, reliable transmission of speech and video signals over band-limited noisy channels is becoming more and more widespread. The basic strategy consists in using a source encoder to extract characteristic parameters of the source signals, which are then error-protected by channel codes. According to Shannon's separation theorem [1], conventional source coding schemes are usually designed in a way that disregards any knowledge about the channel characteristics; likewise, channel codes are developed with no regard for the source statistics. The source-channel separation theorem holds only under impractical requirements, namely unlimited computational complexity and infinite coding delay. These requirements cannot be strictly met in a practical communication system with limited block lengths. As a consequence, residual redundancy remains in the set of source-codec parameters after source coding, and residual errors may remain in the bit sequence after channel decoding.

In order to provide a system of higher robustness against channel noise, several paths have been taken toward the joint design of source and channel coders. These methods include pseudo-Gray coding [2], channel-optimized quantization [3], source-optimized channel codes [4], and, more recently, the exploitation of residual source redundancies [5]-[8]. In general, source encoders represent the source signal by a small set of characteristic parameters taken from a finite quantizer codebook, but due to block-length constraints, the quantizer indexes will still exhibit considerable redundancies. Such residual redundancy appears on the parameter level, either as a non-uniform distribution or as time-correlation between consecutive indexes. The residual source redundancy can be used for enhancing channel decoding [5], [6] or for effective source decoding [7], [8]. In [7], the quantizer-channel tandem is described as a discrete hidden Markov model and the source decoding is formulated as a sequence-based approximate minimum mean-squared error (SAMMSE) estimation problem. The softbit source decoding (SBSD) algorithm in [8] processes the soft channel outputs and combines them with source a priori information to estimate the decoded output.
The estimation is carried out for the quantizer indexes rather than for single index-bits, since the dependencies between indexes are stronger than the correlations between index-bits. The error-concealing capabilities of SBSD can be further improved if channel coding algorithms add artificial redundancy at the transmitter side. The entire system can then be viewed as having the residual source correlations act as an implicit outer channel code that is serially concatenated with the inner explicit channel code. Under this interpretation, the concept of extrinsic information from turbo decoding [9] can be adopted for iterative source-channel decoding (ISCD) [10]-[13]. In an ISCD scheme, the decoder is decomposed into two parts, which can be identified as the constituent decoders for the channel-code and source-code redundancies. According to the turbo principle, extracting the extrinsic information from one decoder and using it as additional a priori knowledge in the other is expected to improve the reliability of the decoded signal step by step.

With respect to an implementation of ISCD, it has to be emphasized that the major part of the iterative process runs on the bit level, whereas SBSD itself is realized on the parameter level. This is justified by the fact that many practical systems utilize binary convolutional codes, so soft-output channel decoding can be implemented efficiently by the BCJR algorithm [14]. This causes the problem that only bitwise source a priori knowledge can be exploited by the channel decoder, since the BCJR algorithm is derived on a bit-level code trellis. A technique for combining the parameter-level source a priori knowledge with the bit-level extrinsic information of the channel decoder has not been widely established so far. Classical ISCD algorithms [10]-[13] require the conversion of index-probabilities to bit-probabilities in each passing of the extrinsic information between the two constituent decoders. This processing step destroys the bit-correlations within an index, thus reducing

the effectiveness of iterative decoding. This drawback can be avoided if the extrinsic information due to the source-code and channel-code redundancies is not treated separately on different levels, but jointly on the parameter level. Therefore, we focus on symbol-based trellis decoding algorithms throughout this paper, since in contrast to the classical BCJR algorithm [14] a symbol-by-symbol decoding approach is especially well suited to generate reliability information on the quantizer indexes. The first step toward realization is to use quantizer indexes rather than single index-bits as the basis for the soft-output decoding of a convolutional channel code. This is used in conjunction with sectionalized code trellises to take advantage of the statistical a priori information on an index basis. Furthermore, the source decoder differs from the SBSD [8] in that, besides the present and past channel outputs, some future channel outputs belonging to the same interleaving block are also evaluated to provide additional reliability gains. For the purpose of applicability, we derive a new formula that shows how the past and future channel outputs can be transformed into extrinsic information utilizable for iterative decoding.

2. System Model

The transmission of continuous-valued, autocorrelated source samples is considered. Figure 1 shows our model of a transmission system. Suppose at time t the input source sample v_t is quantized to the index u_t which, after bit mapping, is represented by a bit combination consisting of M bits. For notational convenience, the index u_t is regarded as an integer representing the decimal equivalent of a bitvector (u_t(0), u_t(1), ..., u_t(M-1)). The quantizer's reproduction level corresponding to the index u_t = i is denoted by c_i, where i ∈ I = {0, 1, ..., 2^M - 1}. We can generally assume that a certain amount of residual redundancy remains in the index sequence due to delay and complexity constraints on the quantization stage. In the following, the time-correlations of the quantizer indexes are modelled by a first-order stationary Markov process with index-transition probabilities P(u_t | u_{t-1}).

Fig. 1  Model of the transmission system.

Fig. 2  Symbol-based iterative source-channel decoding.

After source encoding, a block of T bitvectors, written as U_1^T = {u_1, u_2, ..., u_T} for brevity, is interleaved by an interleaver Φ on a symbol-by-symbol basis. Such an interleaver permutes the input data block in M-bit indexes, but does not change the order of the bits within each index. The interleaved index sequence is denoted by X_1^T = {x_1, x_2, ..., x_T}, where each index x_t = Φ(u_t) is associated with a bitvector (x_t(0), x_t(1), ..., x_t(M-1)). Afterwards the interleaved bitstream is encoded by a binary convolutional channel encoder prior to transmission. We consider a rate-1/2 systematic channel encoder whose outputs corresponding to each bit x_t(m) are the systematic bit x_t(m) and the parity bit z_t(m). These outputs are modulated with a BPSK modulator and then transmitted over an AWGN channel. The code-bits are assumed to be bipolar, i.e., x_t(m), z_t(m) ∈ {-1, +1}. For notational convenience, the code-symbol sequence Y_1^T = {y_1, y_2, ..., y_T} with y_t = (x_t, z_t) will refer to the sequence of pairs of systematic and parity symbols. Let the possibly noisy received code sequence corresponding to Y_1^T = (X_1^T, Z_1^T) be denoted by Ỹ_1^T = (X̃_1^T, Z̃_1^T).
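To make the front end of Fig. 1 concrete, the following minimal Python sketch runs the transmitter chain for one block: it generates a first-order Gauss-Markov source, quantizes it with a scalar codebook, estimates the index-transition probabilities P(u_t | u_{t-1}), and applies a symbol interleaver followed by natural binary mapping to bipolar code-bits. The random codebook (a stand-in for the Lloyd-Max levels of Sect. 5), the block length, and the permutation are illustrative assumptions, not values fixed by the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def gauss_markov(n, rho):
        """First-order Gauss-Markov source v_t = rho * v_{t-1} + w_t."""
        v = np.zeros(n)
        for t in range(1, n):
            v[t] = rho * v[t - 1] + rng.standard_normal()
        return v

    def quantize(v, codebook):
        """Nearest-neighbor scalar quantization; returns M-bit indexes."""
        return np.argmin(np.abs(v[:, None] - codebook[None, :]), axis=1)

    def index_transitions(u, levels):
        """Empirical first-order transition matrix P(u_t | u_{t-1})."""
        P = np.full((levels, levels), 1e-12)        # floor avoids log(0) later
        for a, b in zip(u[:-1], u[1:]):
            P[a, b] += 1.0
        return P / P.sum(axis=1, keepdims=True)

    M = 3
    codebook = np.sort(rng.standard_normal(2 ** M))  # stand-in for Lloyd-Max levels
    u = quantize(gauss_markov(30000, rho=0.95), codebook)
    P_trans = index_transitions(u, 2 ** M)

    T = 100                                    # indexes per interleaving block
    perm = rng.permutation(T)                  # symbol (index-level) interleaver
    x = u[:T][perm]                            # bit order inside each index is kept
    bits = (x[:, None] >> np.arange(M)) & 1    # natural binary mapping
    bpsk = 2.0 * bits - 1.0                    # bipolar {-1, +1} code-bits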
At the receiver side, the decoder is designed to minimize the mean squared error in signal reconstruction when channel errors occur. For the concatenation of quantization and channel coding, the turbo-like evaluation of residual source redundancy and of artificial channel-code redundancy makes step-wise quality gains possible by iterative decoding. As shown in Fig. 2, the receiver consists of two constituent decoders with soft inputs and soft outputs (SISO). In each iteration, the channel decoder processes the received code sequence Ỹ_1^T and combines it with the source a priori information to compute the extrinsic information L_CD^[ext](x_t) on each individual systematic symbol x_t. The goal of the source decoder is to jointly exploit the channel information and the source a priori information to compute the a posteriori probability (APP) for each possibly transmitted quantizer index u_t = i, denoted by P(u_t = i | Ỹ_1^T). It also generates extrinsic information L_SD^[ext](u_t), which becomes additional a priori knowledge for the channel decoder in the next iteration. The exchange of extrinsic information between the two constituent decoders is repeated until the reliability gain becomes insignificant. After the last iteration, the APPs are combined with the quantizer's reproduction levels to provide the signal estimates as follows:

v̂_t = Σ_i P(u_t = i | Ỹ_1^T) c_i.   (1)
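Once the index APPs are available, the estimator (1) is a single matrix-vector product per block; a minimal sketch, assuming the APPs for a block are stored as a (T, 2^M) array:

    import numpy as np

    def mmse_estimate(app, codebook):
        """Eq. (1): signal estimates from index APPs.

        app      -- array of shape (T, 2**M); app[t, i] = P(u_t = i | Y~_1^T)
        codebook -- quantizer reproduction levels c_i, shape (2**M,)
        """
        return app @ codebook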

3. SISO Source Decoding

The determination rules for the extrinsic information of SBSD were derived in [12]; here a slight modification is proposed which allows a delay of T samples in the decoding process. We have chosen the length T in compliance with the defined size of an interleaving block. If the time-correlation between consecutive indexes is to be utilized on the basis of a first-order Markov model, then the entire history of received codewords Ỹ_1^t and, when available, the future codewords Ỹ_{t+1}^T have to be considered as well. To advance with this, we derive a forward-backward recursive algorithm that shows how the past and future received codewords can be transformed into extrinsic information utilizable in the iterative decoding process. The basic strategy is to jointly exploit the channel information, the source a priori information, as well as the extrinsic information from the channel decoder. The APP for each possibly transmitted index u_t = i, given the received code sequence Ỹ_1^T = (X̃_1^T, Z̃_1^T), is given by

P(u_t = i | Ỹ_1^T) = P(u_t = i, Ỹ_1^T) / P(Ỹ_1^T).   (2)

Since the received sequence is de-interleaved and then processed by the source decoder, we have P(u_t = i, Ỹ_1^T) = P(u_t = i, Ũ_1^T, Z̃_1^T), where Ũ_1^T = Φ^{-1}(X̃_1^T). These probabilities can be further decomposed, analogous to the SAMMSE algorithm [7], by using the Bayes theorem as

P(u_t = i, Ũ_1^T, Z̃_1^T) = P(u_t = i, Ũ_1^T) P(Z̃_1^T | u_t = i, Ũ_1^T)
= α_{u_t}(i) β_{u_t}(i) P(Z̃_1^T | u_t = i, Ũ_1^T)   (3)

where α_{u_t}(i) = P(u_t = i, Ũ_1^t) and β_{u_t}(i) = P(Ũ_{t+1}^T | u_t = i, Ũ_1^t). Using the Markov property of the indexes and the memoryless assumption of the channel, the forward and backward recursions of the algorithm can be expressed as

α_{u_t}(i) = Σ_j P(u_t = i, u_{t-1} = j, ũ_t, Ũ_1^{t-1})   (4)
= Σ_j P(ũ_t | u_t = i, u_{t-1} = j, Ũ_1^{t-1}) ⋅ P(u_t = i | u_{t-1} = j, Ũ_1^{t-1}) ⋅ P(u_{t-1} = j, Ũ_1^{t-1})
= P(ũ_t | u_t = i) Σ_j P(u_t = i | u_{t-1} = j) α_{u_{t-1}}(j)

and

β_{u_t}(i) = Σ_j P(u_t = i, u_{t+1} = j, Ũ_1^T) / P(u_t = i, Ũ_1^t)   (5)
= Σ_j P(Ũ_{t+2}^T | u_{t+1} = j, Ũ_1^{t+1}) ⋅ P(ũ_{t+1} | u_{t+1} = j) ⋅ P(u_t = i, u_{t+1} = j, Ũ_1^t) / P(u_t = i, Ũ_1^t)
= Σ_j P(ũ_{t+1} | u_{t+1} = j) P(u_{t+1} = j | u_t = i) β_{u_{t+1}}(j).

The MAP algorithm is likely to be considered too complex for real-time implementation in a practical system. To avoid the large number of complicated operations and also numerical representation problems, realizations of the MAP algorithm in the logarithmic domain have been proposed in [15], [16]. The goal of the classical log-MAP algorithm is to provide the logarithm of the ratio of the APP of each information bit being 1 to the APP of it being 0. In this work, however, the use of a symbol-based ISCD scheme implies that the reliability information should be defined for the quantizer indexes in the form of index APPs. Recognizing this, we define the reliability of each nonzero index u_t = i, i = 1, 2, ..., 2^M - 1, with respect to u_t = 0, by considering a log-likelihood ratio (LLR) of the following type:

L(u_t = i | Ỹ_1^T) = log [ P(u_t = i | Ỹ_1^T) / P(u_t = 0 | Ỹ_1^T) ].   (6)

This definition of the LLR values allows for easy conversion between the a posteriori LLR values and the index APPs. The other LLRs are related to the corresponding probabilities in a similar fashion. For instance, the a priori information about the random variable u_t taking the value i is expressed by the LLR L_a(u_t = i) = log[P(u_t = i)/P(u_t = 0)]. The conditional LLR of the received value ũ_t at the channel output, given that the index u_t = i has been transmitted, is L_c(u_t = i) = log[p(ũ_t | u_t = i)/p(ũ_t | u_t = 0)]. Assuming an AWGN channel with zero mean and noise variance σ_n² = N_0/2E_s, the conditional probability density function (pdf) of ũ_t can be formulated as

p(ũ_t | u_t) = Π_{m=0}^{M-1} p(ũ_t(m) | u_t(m))   (7)
= (1/(√(2π) σ_n))^M exp( -(E_s/N_0) Σ_{m=0}^{M-1} (ũ_t(m) - u_t(m))² ).

The next step is to reduce the large computational complexity required for computing the logarithmic values of the α_{u_t}(i) and β_{u_t}(i) terms in (3). This problem can be solved by using the Jacobian logarithm [15], defined by the property

log(e^{δ_1} + e^{δ_2}) = max{δ_1, δ_2} + log(1 + e^{-|δ_2 - δ_1|}).   (8)

Then, for a finite set of real numbers {δ_1, δ_2, ..., δ_n}, the logarithm of their sum can be computed recursively. Suppose that the logarithm of Δ_{l-1} = Σ_{j=1}^{l-1} e^{δ_j}, with 1 < l ≤ n, is known; then

log(Δ_l) = log(Δ_{l-1} + e^{δ_l}) = max{log Δ_{l-1}, δ_l} + log(1 + e^{-|log Δ_{l-1} - δ_l|}).   (9)

For brevity, we use the shorthand notation max*_j {δ_j} = log(Σ_j e^{δ_j}).
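The Jacobian logarithm (8) and its recursive extension (9) translate directly into code; this small helper is reused by the sketches that follow:

    import math

    def max_star(a, b):
        """Jacobian logarithm, Eq. (8): log(e^a + e^b)."""
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    def max_star_n(deltas):
        """Recursive extension, Eq. (9): log(sum_j e^{delta_j})."""
        acc = deltas[0]
        for d in deltas[1:]:
            acc = max_star(acc, d)
        return acc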
By taking the logarithm of α_{u_t}(i) derived in (4), we obtain

α̂_{u_t}(i) = log α_{u_t}(i)   (10)
= log P(ũ_t | u_t = i) + max*_j { log P(u_t = i | u_{t-1} = j) + α̂_{u_{t-1}}(j) }

and similarly,

β̂_{u_t}(i) = log β_{u_t}(i)   (11)
= max*_j { log P(ũ_{t+1} | u_{t+1} = j) + log P(u_{t+1} = j | u_t = i) + β̂_{u_{t+1}}(j) }.
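A minimal sketch of the log-domain recursions (10) and (11), reusing max_star_n from above. The array layout and the initialization of α̂ at t = 0 from the channel term alone (i.e., a uniform prior) are assumptions; the paper does not spell out the boundary treatment.

    import numpy as np

    def source_forward_backward(log_ch, log_trans):
        """Log-domain recursions (10)-(11) of the SISO source decoder.

        log_ch    -- (T, K) array, log_ch[t, i] = log P(u~_t | u_t = i)
        log_trans -- (K, K) array, log_trans[j, i] = log P(u_t = i | u_{t-1} = j)
        """
        T, K = log_ch.shape
        alpha = np.zeros((T, K))
        beta = np.zeros((T, K))                   # beta at t = T-1 is log 1 = 0
        alpha[0] = log_ch[0]                      # assumed uniform prior at t = 0
        for t in range(1, T):                     # forward pass, Eq. (10)
            for i in range(K):
                alpha[t, i] = log_ch[t, i] + max_star_n(
                    [log_trans[j, i] + alpha[t - 1, j] for j in range(K)])
        for t in range(T - 2, -1, -1):            # backward pass, Eq. (11)
            for i in range(K):
                beta[t, i] = max_star_n(
                    [log_ch[t + 1, j] + log_trans[i, j] + beta[t + 1, j]
                     for j in range(K)])
        return alpha, beta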

An iterative process using the SISO source decoder as a constituent decoder is realizable if the a posteriori LLR L(u_t = i | Ỹ_1^T) can be separated into four additive terms: the a priori LLR L_a(u_t = i), the channel-related LLR L_c(u_t = i), and two extrinsic terms resulting from source and channel decoding. In order to determine each of the four terms, we rewrite (6) in log-likelihood algebra as

L(u_t = i | Ỹ_1^T) = α̂_{u_t}(i) + β̂_{u_t}(i) - α̂_{u_t}(0) - β̂_{u_t}(0) + log [ P(Z̃_1^T | u_t = i, Ũ_1^T) / P(Z̃_1^T | u_t = 0, Ũ_1^T) ]   (12)
= L_a(u_t = i) + L_c(u_t = i) + L_SD^[ext](u_t = i) + L_CD^[ext](u_t = i)

where

L_SD^[ext](u_t = i) = β̂_{u_t}(i) + max*_j { log P(u_t = i | u_{t-1} = j) + α̂_{u_{t-1}}(j) } - β̂_{u_t}(0) - max*_j { log P(u_t = 0 | u_{t-1} = j) + α̂_{u_{t-1}}(j) }   (13)

and

L_CD^[ext](u_t = i) = log [ P(Z̃_1^T | u_t = i, Ũ_1^T) / P(Z̃_1^T | u_t = 0, Ũ_1^T) ].   (14)

A detailed derivation of (14) is presented in the Appendix. The extrinsic LLR L_SD^[ext](u_t = i) contains the new part of information resulting from the source decoder's exploitation of the residual source redundancy. With respect to (12), this extrinsic LLR can be separated out according to

L_SD^[ext](u_t = i) = L(u_t = i | Ỹ_1^T) - L_a(u_t = i) - L_CD^[ext](u_t = i) - L_c(u_t = i)   (15)

and is used, after interleaving, as a priori information in the next channel decoding round.

4. SISO Channel Decoding

For a transmission scheme with channel coding, a soft-output channel decoder can be used to provide both the estimated bits and their reliability information for further processing to improve the system performance. The commonly used BCJR algorithm is a trellis-based MAP decoding algorithm for both linear block and convolutional codes. The derivation presented in [14] leads to a forward-backward recursive computation on the basis of a bit-level trellis diagram, which has two branches leaving each state, with every branch representing a single index-bit. Proper sectionalization of a bit-level code trellis may result in useful trellis structural properties [17], [18] and allows us to devise SISO channel decoding algorithms which incorporate parameter-oriented extrinsic information from the source decoder. To proceed with this, we propose a modified BCJR algorithm which parses the received code-bit sequence into M-bit blocks and computes the APP for each quantizer index on a symbol-by-symbol basis. Unlike the classical BCJR algorithm that decodes one bit at a time, our scheme decodes the quantizer indexes as nonbinary symbols matched to the number of bits in an index. By parsing the code-bit sequence into M-bit symbols, we are in essence merging M stages of the original bit-level code trellis into one. As an example, Fig. 3 illustrates two stages of the bit-level trellis diagram of a rate-1/2 convolutional encoder with generator polynomial (7, 5)_8. The solid lines and dashed lines correspond to input bits of 0 and 1, respectively. Figure 3 also shows the sectionalized trellis diagram when two stages of the original bit-level trellis are merged together.

Fig. 3  Bit-level and merged trellis diagrams.

In general, there are 2^M branches leaving and entering each state in an M-stage merged trellis diagram. Having defined the trellis structure as such, there will be one symbol APP corresponding to each branch, which represents a particular quantizer index x_t = i. For convenience, we say that the sectionalized trellis diagram forms a finite-state machine defined by its state transition function F_s(x_t, s_t) and output function F_p(x_t, s_t).
Viewing from this perspective, the code-symbol associated with the branch from state s_t to state s_{t+1} = F_s(x_t, s_t) can be written as y_t = (x_t, z_t), where z_t = F_p(x_t, s_t) is the parity symbol given state s_t and input x_t. A construction of this merged trellis is sketched below.
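The sketch builds F_s and F_p for the recursive systematic code G(D) = (1, (1 + D^2)/(1 + D + D^2)) used in Sect. 5 (memory order ν = 2) by stepping the bit-level encoder M times per symbol. Feeding bit x_t(0) of an index to the encoder first and labeling states by the integer (x1 << 1) | x2 are illustrative assumptions.

    from itertools import product

    NU = 2                                   # memory order of the RSC code

    def rsc_step(state, b):
        """One bit-level step of G(D) = (1, (1+D^2)/(1+D+D^2)).

        state = (x1, x2) holds the two memory bits; returns (next_state, parity).
        """
        x1, x2 = state
        a = b ^ x1 ^ x2                      # feedback polynomial 1 + D + D^2
        z = a ^ x2                           # feedforward polynomial 1 + D^2
        return (a, x1), z

    def merged_trellis(M):
        """Merge M bit-level stages into one symbol-level stage.

        Returns F_s[(i, s)] -> next state and F_p[(i, s)] -> parity bit-tuple.
        """
        F_s, F_p = {}, {}
        for s, i in product(range(2 ** NU), range(2 ** M)):
            state, parity = ((s >> 1) & 1, s & 1), []
            for m in range(M):               # step the bit-level encoder M times
                state, z = rsc_step(state, (i >> m) & 1)
                parity.append(z)
            F_s[(i, s)] = (state[0] << 1) | state[1]
            F_p[(i, s)] = tuple(parity)
        return F_s, F_p

    F_s, F_p = merged_trellis(M=3)           # 2^M = 8 branches per state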

We next apply the sectionalized code trellis to compute the APP of a systematic symbol x_t = i given the received code sequence Ỹ_1^T = {ỹ_1, ỹ_2, ..., ỹ_T}, in which ỹ_t = (x̃_t, z̃_t). Taking the trellis state s_t into consideration, we rewrite the APP as follows:

P(x_t = i | Ỹ_1^T) = C Σ_{s_t} P(x_t = i, s_t, Ỹ_1^T)   (16)
= C Σ_{s_t} α_t^x(i, s_t) β_t^x(i, s_t),

where α_t^x(i, s_t) = P(x_t = i, s_t, Ỹ_1^t), β_t^x(i, s_t) = P(Ỹ_{t+1}^T | x_t = i, s_t, Ỹ_1^t), and C = 1/P(Ỹ_1^T) is a normalizing factor. For the recursive implementation, the forward and backward recursions compute the following metrics:

α_t^x(i, s_t) = Σ_{s_{t-1}} Σ_j P(x_t = i, s_t, x_{t-1} = j, s_{t-1}, ỹ_t, Ỹ_1^{t-1})   (17)
= Σ_{s_{t-1}} Σ_j α_{t-1}^x(j, s_{t-1}) γ_{i,j}(ỹ_t, s_t, s_{t-1})

β_t^x(i, s_t) = Σ_{s_{t+1}} Σ_j P(x_{t+1} = j, s_{t+1}, Ỹ_{t+1}^T | x_t = i, s_t, Ỹ_1^t)   (18)
= Σ_{s_{t+1}} Σ_j β_{t+1}^x(j, s_{t+1}) γ_{j,i}(ỹ_{t+1}, s_{t+1}, s_t)

and, in (17),

γ_{i,j}(ỹ_t, s_t, s_{t-1}) = P(x_t = i, s_t, ỹ_t | x_{t-1} = j, s_{t-1}, Ỹ_1^{t-1})   (19)
= P(s_t | x_{t-1} = j, s_{t-1}) P(x_t = i | x_{t-1} = j) ⋅ P(ỹ_t | x_t = i, s_t).

Having a proper representation of the branch metric γ_{i,j}(ỹ_t, s_t, s_{t-1}) is the critical step in applying symbol decoding to error mitigation, and one that conditions all subsequent steps of the implementation. As a practical matter, several additional factors must be considered to take advantage of the sectionalized trellis structure and the AWGN channel assumption. First, making use of the merged code trellis, the value of P(s_t | x_{t-1} = j, s_{t-1}) is either one or zero, depending on whether index j is associated with the transition from state s_{t-1} to state s_t = F_s(x_{t-1} = j, s_{t-1}). The second term in (19) reduces to P(x_t = i) under the assumption that x_t is uncorrelated with x_{t-1}, which is indeed the case since x_t is the interleaved version of the quantizer indexes. For AWGN channels, the third term in (19) reduces to

P(ỹ_t | x_t = i, s_t) = P(x̃_t | x_t = i) P(z̃_t | z_t = F_p(x_t = i, s_t))   (20)

where the conditional pdfs for the received systematic and parity symbols can be computed analogous to (7). Next, the Jacobian logarithm is used for computing the logarithmic APPs and the corresponding logarithmic values of α_t^x(i, s_t) and β_t^x(i, s_t). By formulating the algorithm in the log-likelihood algebra, we obtain

α̂_t^x(i, s_t) = log α_t^x(i, s_t)   (21)
= max*_{s_{t-1}} { max*_j { α̂_{t-1}^x(j, s_{t-1}) + γ̂_{i,j}(ỹ_t, s_t, s_{t-1}) } }

and

β̂_t^x(i, s_t) = log β_t^x(i, s_t)   (22)
= max*_{s_{t+1}} { max*_j { β̂_{t+1}^x(j, s_{t+1}) + γ̂_{j,i}(ỹ_{t+1}, s_{t+1}, s_t) } }

and, in (21),

γ̂_{i,j}(ỹ_t, s_t, s_{t-1}) = log γ_{i,j}(ỹ_t, s_t, s_{t-1})   (23)
= log P(x_t = i) + log P(s_t | x_{t-1} = j, s_{t-1}) + log P(x̃_t | x_t = i) + log P(z̃_t | z_t = F_p(x_t = i, s_t)).

The goal of the SISO channel decoder is to compute the a posteriori LLR for each systematic symbol x_t = i, which can be written as

L(x_t = i | Ỹ_1^T) = log [ P(x_t = i | Ỹ_1^T) / P(x_t = 0 | Ỹ_1^T) ]   (24)
= max*_{s_t} { α̂_t^x(i, s_t) + β̂_t^x(i, s_t) } - max*_{s_t} { α̂_t^x(0, s_t) + β̂_t^x(0, s_t) }.

Assuming a memoryless channel and channel encoding of systematic form, this a posteriori LLR can be separated according to Bayes' theorem into three additive terms: the a priori term L_a(x_t = i) = log[P(x_t = i)/P(x_t = 0)], the channel-related term L_c(x_t = i) = log[P(x̃_t | x_t = i)/P(x̃_t | x_t = 0)], and an extrinsic term L_CD^[ext](x_t = i). Substituting (21) and (22) into (24) leads to

L(x_t = i | Ỹ_1^T) = L_a(x_t = i) + L_c(x_t = i) + L_CD^[ext](x_t = i)   (25)

with the extrinsic LLR

L_CD^[ext](x_t = i)
= max*_{s_t} { β̂_t^x(i, s_t) + log P(z̃_t | z_t = F_p(x_t = i, s_t)) + max*_{s_{t-1}} { max*_j { log P(s_t | x_{t-1} = j, s_{t-1}) + α̂_{t-1}^x(j, s_{t-1}) } } }
- max*_{s_t} { β̂_t^x(0, s_t) + log P(z̃_t | z_t = F_p(x_t = 0, s_t)) + max*_{s_{t-1}} { max*_j { log P(s_t | x_{t-1} = j, s_{t-1}) + α̂_{t-1}^x(j, s_{t-1}) } } }.   (26)
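A minimal sketch of the log branch metric (23) and one step of the forward recursion (21), reusing max_star_n and the F_s/F_p maps from the earlier sketches. The containers log_px, log_sys, and log_par are hypothetical per-time tables of log P(x_t = i), log P(x̃_t | x_t = i), and log P(z̃_t | z_t), the last keyed by the parity bit-tuple.

    NEG_INF = -1e30   # stands in for log(0) on forbidden transitions

    def log_gamma(i, j, s_t, s_prev, t, log_px, log_sys, log_par, F_s, F_p):
        """Log branch metric, Eq. (23)."""
        if F_s[(j, s_prev)] != s_t:          # P(s_t | x_{t-1} = j, s_{t-1}) = 0
            return NEG_INF
        return log_px[t][i] + log_sys[t][i] + log_par[t][F_p[(i, s_t)]]

    def forward_step(alpha_prev, t, log_px, log_sys, log_par, F_s, F_p, K, S):
        """One step of the forward recursion, Eq. (21)."""
        alpha = [[NEG_INF] * K for _ in range(S)]
        for s_t in range(S):
            for i in range(K):
                alpha[s_t][i] = max_star_n(
                    [alpha_prev[s_prev][j] +
                     log_gamma(i, j, s_t, s_prev, t,
                               log_px, log_sys, log_par, F_s, F_p)
                     for s_prev in range(S) for j in range(K)])
        return alpha

The backward step of (22) mirrors this with the roles of s_{t-1} and s_{t+1} exchanged.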
With respect to an implementation of ISCD, the a priori LLR in (25) is initialized to L_a(x_t = i) in terms of the source distribution P(x_t = i). Within the iterations, the precision of the APP estimation can be enhanced by replacing L_a(x_t = i) with the interleaved extrinsic LLR L_SD^[ext](x_t = i) provided by the SISO source decoder. Therefore, the extrinsic LLR resulting from the channel decoding can be calculated by

L_CD^[ext](x_t = i) = L(x_t = i | Ỹ_1^T) - L_SD^[ext](x_t = i) - L_c(x_t = i)   (27)

and is passed to the source decoder as new a priori information for the next iteration. Finally, we summarize the proposed symbol-level ISCD scheme as follows (a sketch of the decoding loop is given after step 7):

1. Initialization: Set the extrinsic information of source decoding to L_SD^[ext](x_t) = 0. Set the iteration counter to n = 0 and define an exit condition n_max.
2. Read the series of received sequences Ỹ_1^T and map all received systematic symbols x̃_t to the channel-related LLR L_c(x_t).
3. Perform log-MAP channel decoding by an efficient realization of (24) and then compute the extrinsic LLR L_CD^[ext](x_t) using (27).
4. Perform source decoding by inserting the de-interleaved extrinsic LLR L_CD^[ext](u_t) into (12) to compute the a posteriori LLR L(u_t | Ỹ_1^T). Then the extrinsic LLR L_SD^[ext](u_t) is computed by (15) and forwarded to the channel decoder as a priori information.
5. Increase the iteration counter n ← n + 1. If the exit condition n = n_max is fulfilled, continue with step 6; otherwise return to step 3.

6. Compute the APP for each estimated index u_t = i as follows:

P(u_t = i | Ỹ_1^T) = e^{L(u_t = i | Ỹ_1^T)} / Σ_{j=0}^{2^M - 1} e^{L(u_t = j | Ỹ_1^T)}.   (28)

7. Estimate the decoder output signals v̂_t by (1) using the index APPs from step 6.
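The loop in steps 1-7 can be outlined as follows. Here channel_decode and source_decode stand for full log-MAP realizations of (24)/(27) and (12)/(15) built from the sketches above; they are placeholders, not complete implementations.

    import numpy as np

    def iscd_decode(L_c, perm, codebook, channel_decode, source_decode, n_max=3):
        """Outline of the symbol-level ISCD scheme, steps 1-7.

        L_c  -- (T, K) channel-related LLRs of the systematic symbols, step 2
        perm -- interleaver permutation of length T
        """
        T, K = L_c.shape
        inv = np.argsort(perm)                   # de-interleaver
        L_SD_ext = np.zeros((T, K))              # step 1: zero extrinsic info
        for _ in range(n_max):                   # steps 3-5
            L_CD_ext = channel_decode(L_c, L_SD_ext)           # step 3
            L_post, L_SD_ext_u = source_decode(L_CD_ext[inv])  # step 4
            L_SD_ext = L_SD_ext_u[perm]          # re-interleave for next round
        app = np.exp(L_post - L_post.max(axis=1, keepdims=True))
        app /= app.sum(axis=1, keepdims=True)    # step 6, Eq. (28)
        return app @ codebook                    # step 7, Eq. (1)

Since the rows of L_post are LLRs relative to index 0, the row-wise softmax above reproduces (28) exactly; subtracting the row maximum is only a numerical safeguard.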
5. Experimental Results

Computer simulations were conducted to compare the performance of various ISCD schemes for transmission over AWGN channels. First, a bit-level iterative decoding scheme ISCD1 [11] is considered for error mitigation, using the BCJR algorithm for soft-output channel decoding assisted with the bit reliability information provided by the SBSD. For the ISCD1 scheme with bit interleaving, the source correlation of the quantizer indexes is not exploited, so each received channel code-bit is decoded independently. Two approaches to symbol-level iterative decoding, denoted by ISCD2 and ISCD3, are presented and investigated. They both apply a symbol interleaver and perform log-MAP symbol decoding of binary convolutional codes based on sectionalized code trellises. The source decoder in ISCD3 differs from the SBSD of ISCD2 in that a total of T channel outputs within an interleaving block are evaluated to provide additional reliability gains using an interpolation technique. Specifically, the index APP computed by the interpolative SBSD is P(u_t = i | Ỹ_1^T) in (2), and P(u_t = i | Ỹ_1^t) for the SBSD. Following the work of [11], the input signals were first-order Gauss-Markov sources described by v_t = ρ v_{t-1} + w_t, where w_t is a zero-mean, unit-variance white Gaussian noise, with correlation coefficients of ρ = 0.8 and ρ = 0.95. As indicated in [11], a value of ρ = 0.95 can be found for scale factors determined in the MPEG audio codec for digital audio broadcasting. On the other hand, ρ = 0.8 provides a good fit to the long-time-averaged autocorrelation function of 8-kHz-sampled telephone speech that is bandpass-filtered to the frequency range of 0.3-3.4 kHz [19]. A total of 3,000,000 input samples was processed by a scalar M-bit Lloyd-Max quantizer. After natural binary encoding of the quantizer indexes with M bits per index, the resulting bitstreams were divided into blocks of 300 bits. Each of these blocks, consisting of T = 300/M indexes, was spread by an interleaver and afterwards encoded by a rate-1/2 recursive systematic convolutional channel code with memory order ν = 2 and generator polynomial G(D) = (1, (1 + D²)/(1 + D + D²)).

A preliminary experiment was first performed to examine the step-wise quality gains due to the turbo-like evaluation of channel-code and source-code redundancies. The variation of the parameter signal-to-noise ratio (SNR) as a function of the channel SNR E_s/N_0 for the ISCD3 simulation of Gauss-Markov sources with ρ = 0.95 and M = 3 is shown in Fig. 4. One iteration consists of log-MAP channel decoding followed by interpolative SBSD. Within the iterations, the curves obtained after the channel decoding process are marked by an upper "+." The lowest curve shows the experimental results of a separate decoding scheme which applies classical channel decoding and source decoding by hard decision and table lookup.

Fig. 4  ISCD3 performance for Gauss-Markov sources with ρ = 0.95 and quantizer rate M = 3.

For the 0+-th iteration, the log-MAP algorithm for channel decoding with zero extrinsic information L_SD^[ext](x_t) = 0 is carried out, resulting in an SNR gain of 1.43 dB for a channel condition of E_b/N_0 = -4 dB. Next, the first iteration is completed when, in addition, the de-interleaved extrinsic LLR from channel decoding L_CD^[ext](u_t) is exploited in the source decoding process. Due to the high correlation coefficient of ρ = 0.95, the parameter SNR can be further improved by up to 9.04 dB. If the updated L_SD^[ext](x_t) is fed back to the channel decoder as additional a priori information, one additional iteration improves the performance by about 1.37 dB for E_b/N_0 = -4 dB. We see that a turbo-like refinement of the extrinsic information from both constituent decoders makes substantial quality improvements possible. The full gain in parameter SNR is reached after three iterations. Figure 5 shows the SNR performance for the ISCD3 simulation of Gauss-Markov sources with ρ = 0.95 and M = 4. The investigation further showed that the improved performance achievable using ISCD3 is more noticeable for higher quantizer rates. Compared to a separate decoding approach, the maximum gain in parameter SNR amounts to 13.86 dB and 16.0 dB for M = 3 and M = 4, respectively, at E_b/N_0 = -4 dB.

To elaborate further, the SNR performances of the various ISCD schemes were examined for Gauss-Markov sources with ρ = 0.8 and ρ = 0.95. We provide results for experiments on rate M = 3 and M = 4 Lloyd-Max quantizers in Figs. 6 and 7, respectively. Three iterations of the algorithm were performed by each decoder, as further iterations did not result in a significant improvement. The performances of the ISCD schemes were also compared with a non-iterative joint

source-channel decoder (NIJSCD), which consists of a hard-decision channel decoder followed by a SAMMSE source decoder [7]. Compared with the NIJSCD, the better performance of ISCD can be attributed to its ability to iteratively exchange the reliability gains resulting from SBSD and from channel decoding. The results also show the improved performance achievable using the symbol decoders ISCD2 and ISCD3 in comparison to the bit-based ISCD1. Furthermore, the improvement tends to increase for worse channel conditions and for more heavily correlated Gaussian sources. This indicates that the extrinsic information exchanged between the two constituent decoders is better exploited at the symbol level. The investigation further showed that there is a considerable gap between the ISCD2 and ISCD3 schemes. The difference between them is due to the fact that ISCD2 only accounts for the past and current channel outputs through the knowledge of the APP P(u_t = i | Ỹ_1^t). The ISCD3 decoder, on the other hand, takes into account not only the past channel outputs but also a look-ahead of some future channel observations belonging to the same interleaving block.

Fig. 5  ISCD3 performance for Gauss-Markov sources with ρ = 0.95 and quantizer rate M = 4.

Fig. 6  SNR performance of different decoders for quantizer rate M = 3 and Gauss-Markov sources (ρ = 0.8, 0.95).

Fig. 7  SNR performance of different decoders for quantizer rate M = 4 and Gauss-Markov sources (ρ = 0.8, 0.95).

Finally, we investigate the decoding complexity of the ISCD schemes working at the bit and symbol levels. The difference between our channel decoding algorithm and the known BCJR algorithm is the substitution of the bit-level code trellis by the sectionalized code trellis. The merging of M stages of the bit-level trellis results in fewer stages for the forward and backward recursions, and consequently fewer α̂_t^x and β̂_t^x values need to be computed. On the other hand, the merged code trellis has 2^M branches leaving and entering each state, which requires more LLR values to be calculated. Consider the computational complexity of the log-MAP algorithm for symbol decoding of a rate-1/2 convolutional code with memory order ν. Since three additions are required to compute the Jacobian logarithm in (8), a total of 3N_1(2^M + 3N_2) and 3N_1(2^ν + 3N_2) additions are required to compute the α̂_t^x and β̂_t^x values in (21) and (22), respectively, where N_1 = 2^ν - 1 and N_2 = 2^M - 1. Table 1 gives the number of additions required for symbol-based log-MAP decoding of convolutional codes. The complexity analysis of the ISCD based on the bit-level trellis is also included, taking into account the additional complexity for converting between the index-level and bit-level extrinsic information of the constituent decoders. For the case of (T, M, ν) = (100, 3, 2), the symbol-based channel decoder is about five times as complex as the bit-based channel decoder.

Table 1  Complexity analyses of bit-level and symbol-level channel decoders.

Scheme                 Variable                    Number of additions
Symbol-level decoder   α̂                           3N_1 (2^M + 3N_2) 2^ν T
                       β̂                           3N_1 (2^ν + 3N_2) 2^ν T
Bit-level decoder      α̂                           3N_1 2^ν T M
                       β̂                           3N_1 2^ν T M
                       Conversion between bit      T M [6(2^{M-1} - 1) + 1] + 2^{M+1} T
                       and index APPs
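As a quick check on this comparison, the sketch below simply evaluates the Table 1 expressions for a given (T, M, ν). Table 1 leaves some bookkeeping open (e.g., how the APP-conversion terms are attributed), so the printed ratio should be read as reproducing the order of magnitude of the roughly five-fold gap quoted above rather than the exact factor.

    def addition_counts(T, M, nu):
        """Evaluate the Table 1 addition counts."""
        N1, N2 = 2 ** nu - 1, 2 ** M - 1
        symbol = (3 * N1 * (2 ** M + 3 * N2) * 2 ** nu * T      # alpha, Eq. (21)
                  + 3 * N1 * (2 ** nu + 3 * N2) * 2 ** nu * T)  # beta, Eq. (22)
        bit = (2 * 3 * N1 * 2 ** nu * T * M                     # alpha and beta
               + T * M * (6 * (2 ** (M - 1) - 1) + 1)           # bit/index APP
               + 2 ** (M + 1) * T)                              # conversion
        return symbol, bit

    symbol, bit = addition_counts(T=100, M=3, nu=2)
    print(symbol, bit, symbol / bit)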

6. Conclusions

In this paper we presented a new ISCD scheme which permits the exchange of symbol-level extrinsic information between its two constituent decoders. First, a log-MAP symbol decoding scheme was proposed to decode convolutionally encoded quantizer indexes over AWGN channels, and was shown to be superior to the bit-level trellis decoding algorithms. Performance is further enhanced by the use of an interpolative SBSD scheme that exploits a look-ahead of some future channel outputs, in addition to the past observations, for better decoding. Experimental results indicate that the proposed symbol-level iterative decoding algorithm achieves significant improvements in error robustness for the quantization of Gauss-Markov sources over AWGN channels.

Acknowledgement

This study was supported by the National Science Council, Republic of China, under contract NSC 98-2221-E-009090-MY3.

References

[1] C.E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol.27, pp.379-429, July 1948.
[2] K. Zeger and A. Gersho, "Pseudo-Gray coding," IEEE Trans. Commun., vol.38, no.12, pp.2147-2158, Dec. 1990.
[3] N. Farvardin and V. Vaishampayan, "On the performance and complexity of channel-optimized vector quantizers," IEEE Trans. Inf. Theory, vol.37, no.1, pp.155-160, Jan. 1991.
[4] S. Heinen and P. Vary, "Source optimized channel codes for parameter protection," Proc. Int. Symp. on Information Theory, Sorrento, Italy, June 2000.
[5] J. Hagenauer, "Source-controlled channel decoding," IEEE Trans. Commun., vol.43, no.9, pp.2449-2457, Sept. 1995.
[6] S. Lin and D.J. Costello, Error Control Coding, Prentice-Hall, Englewood Cliffs, N.J., 1983.
[7] D.J. Miller and M. Park, "A sequence-based approximate MMSE decoder for source coding over noisy channels using discrete hidden Markov models," IEEE Trans. Commun., vol.46, no.2, pp.222-231, Feb. 1998.
[8] T. Fingscheidt and P. Vary, "Softbit speech decoding: A new approach to error concealment," IEEE Trans. Speech Audio Process., vol.9, no.3, pp.240-251, March 2001.
[9] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo codes," IEEE Trans. Commun., vol.44, no.10, pp.1261-1271, Oct. 1996.
[10] N. Gortz, "A generalized framework for iterative source-channel decoding," Annals of Telecommunications, Special Issue on Turbo Codes, pp.435-446, July/Aug. 2001.
[11] N. Gortz, "On the iterative approximation of optimal joint source-channel decoding," IEEE J. Sel. Areas Commun., vol.19, no.9, pp.1662-1670, Sept. 2001.
[12] M. Adrat, P. Vary, and J. Spittka, "Iterative source-channel decoder using extrinsic information from softbit source decoding," Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol.4, pp.2653-2656, Salt Lake City, Utah, USA, May 2001.
[13] M. Adrat and P. Vary, "Iterative source-channel decoding: Improved system design using EXIT charts," EURASIP J. Applied Signal Processing, vol.2005, no.6, pp.928-941, May 2005.
[14] L.R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inf. Theory, vol.IT-20, no.2, pp.284-287, March 1974.
[15] J.A. Erfanian, S. Pasupathy, and G. Gulak, "Reduced complexity symbol detectors with parallel structures for ISI channels," IEEE Trans. Commun., vol.42, no.2/3/4, pp.1661-1671, 1994.
[16] P. Robertson, E. Villebrun, and P. Hoeher, "A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain," Proc. IEEE International Conference on Communications, vol.2, pp.1009-1013, June 1995.
[17] Y. Liu, S. Lin, and M.P.C. Fossorier, "MAP algorithms for decoding linear block codes based on sectionalized trellis diagrams," IEEE Trans. Commun., vol.48, no.4, pp.577-587, April 2000.
[18] M. Bingeman and A.K. Khandani, "Symbol-based turbo codes," IEEE Commun. Lett., vol.3, no.10, pp.285-287, Oct. 1999.
[19] N.S. Jayant and P. Noll, Digital Coding of Waveforms, Prentice-Hall, Englewood Cliffs, N.J., 1984.

Appendix

In this Appendix we present a detailed derivation of (14). The APP of a systematic symbol x_t = i, given the received code sequence Ỹ_1^T = (X̃_1^T, Z̃_1^T), can be decomposed by using the Bayes theorem as

P(x_t = i | Ỹ_1^T)   (A·1)
= P(x_t = i, X̃_1^T, Z̃_1^T) / P(Ỹ_1^T)
= P(x_t = i, X̃_1^T) ⋅ P(Z̃_1^T | x_t = i, X̃_1^T) / P(Ỹ_1^T)
= P(x̃_t | x_t = i, X̃_1^{t-1}, X̃_{t+1}^T) ⋅ P(x_t = i, X̃_1^{t-1}, X̃_{t+1}^T) ⋅ P(Z̃_1^T | x_t = i, X̃_1^T) / P(Ỹ_1^T)
= P(x̃_t | x_t = i, X̃_1^{t-1}, X̃_{t+1}^T) ⋅ P(x_t = i) ⋅ P(X̃_1^{t-1}, X̃_{t+1}^T) ⋅ P(Z̃_1^T | x_t = i, X̃_1^T) / P(Ỹ_1^T)
= C ⋅ P(x̃_t | x_t = i) ⋅ P(x_t = i) ⋅ P(Z̃_1^T | x_t = i, X̃_1^T)

where C = P(X̃_1^{t-1}, X̃_{t+1}^T) / P(Ỹ_1^T). We rewrite (A·1) in log-likelihood algebra as

L(x_t = i | Ỹ_1^T) = L_a(x_t = i) + L_c(x_t = i) + L_CD^[ext](x_t = i)   (A·2)

with

L_CD^[ext](x_t = i) = log [ P(Z̃_1^T | x_t = i, X̃_1^T) / P(Z̃_1^T | x_t = 0, X̃_1^T) ].   (A·3)

Since the de-interleaved sequence of L_CD^[ext](x_t = i) is used by the source decoder, we have

L_CD^[ext](u_t = i) = log [ P(Z̃_1^T | u_t = i, Ũ_1^T) / P(Z̃_1^T | u_t = 0, Ũ_1^T) ]   (A·4)

where u_t = Φ^{-1}(x_t) and Ũ_1^T = Φ^{-1}(X̃_1^T).

Chun-Feng Wu received the B.S. degree in mathematics from National Taiwan University and the M.S. degree in communication engineering from National Chiao Tung University in 2003 and 2006, respectively. Currently, he is working toward the Ph.D. degree in communication engineering at National Chiao Tung University. His research interests include multimedia communication, joint source-channel coding, and wireless communication.

Wen-Whei Chang received the B.S. degree in communication engineering from National Chiao Tung University, Hsinchu, Taiwan, ROC, in 1980, and the M.Eng. and Ph.D. degrees in electrical engineering from Texas A&M University, College Station, TX, in 1985 and 1989, respectively. Since August 2010, he has been a professor with the Department of Communication Engineering, National Chiao Tung University, Hsinchu, Taiwan, ROC. His current research interests include speech processing, joint source-channel coding, and wireless communication.
