

5.3 SISO Source Decoding of Variable-Length Codes

The transmission of continuous-valued, autocorrelated source samples is considered.

Figure 5.1 shows our model of the transmission system, where a sequence of $T$ source samples is given by $\mathbf{v} = [v_1, v_2, \ldots, v_T]$. The transmitter consists of a variable-length source encoder and a convolutional channel encoder separated by an interleaver. Suppose that at time $t$ the input sample $v_t$ is quantized to the symbol $u_t$ with $M$ bits. The quantizer's reproduction level corresponding to the symbol $u_t = \lambda$ is denoted by $c_\lambda$, where $\lambda$ is drawn from the finite alphabet $\mathbb{I} = \{0, 1, \ldots, 2^M - 1\}$. Due to delay and complexity constraints on the quantization stage, we can generally assume that a certain amount of residual redundancy remains in the symbol sequence. The scalar quantizer is followed by a VLC encoder, which maps a fixed-length symbol $u_t = \lambda$ to

Figure 5.1: Model of the transmission system.

a variable-length bit vector $\mathbf{c}(\lambda) = (u_t(1), u_t(2), \ldots, u_t(l(\mathbf{c}(\lambda))))$ of length $l(\mathbf{c}(\lambda))$ using the VLC code $\mathbb{C}$. The output of the VLC encoder is the binary sequence $\mathbf{b} = [b(1), \ldots, b(n), \ldots, b(N)]$ with total bit length $N$, where $b(n)$ denotes a single bit at bit instant $n$. Following the work of [40], we regard the binary sequence $\mathbf{b}$ as one particular codeword of a code $\mathbb{B}$ whose codewords are all possible concatenations of VLC codewords with total length $N$. A block of $T$ symbols, written as $\mathbf{u}_1^T = \{u_1, \ldots, u_t, \ldots, u_T\}$, is interleaved by a symbol interleaver $\Phi$. The interleaved symbol sequence is denoted by $\mathbf{x}_1^T = \{x_1, \ldots, x_t, \ldots, x_T\}$, where each symbol $x_t = \Phi(u_t)$ is associated with a bit vector $(x_t(1), x_t(2), \ldots, x_t(l(\mathbf{c}(\lambda))))$. The interleaved bitstream is then encoded by a rate-1/2 systematic convolutional channel encoder whose output is denoted by $\mathbf{y}_1^T = \{y_1, \ldots, y_t, \ldots, y_T\} = \{(x_1, z_1), \ldots, (x_t, z_t), \ldots, (x_T, z_T)\}$, where $x_t(l)$ and $z_t(l)$ represent the systematic and parity symbols, respectively. For all simulations, binary phase shift keying (BPSK) is used as the modulation scheme and an AWGN channel is assumed for transmission. The transmission of the symbol sequence $\mathbf{y}_1^T$ (equivalently, a bit sequence $\mathbf{b}_1^{2N}$) over the AWGN channel yields the symbol sequence $\tilde{\mathbf{y}}_1^T$ (equivalently, the soft-bit sequence $\tilde{\mathbf{b}}_1^{2N}$) at the channel output.
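For illustration, a minimal sketch of the VLC encoding step using the code of Figure 5.2 (the helper name `vlc_encode` and the code-table layout are assumptions for demonstration, not from the thesis):

```python
# Minimal sketch of the VLC encoder: each fixed-length symbol u_t = lambda
# is mapped to its variable-length codeword c(lambda); the concatenation is
# the bitstream b of total length N. Code table taken from Figure 5.2.
C = {0: "11", 1: "00", 2: "101", 3: "010"}

def vlc_encode(symbols):
    """Map the symbol block u_1^T to the binary sequence b(1)...b(N)."""
    return "".join(C[u] for u in symbols)

bits = vlc_encode([0, 2, 3, 1])   # T = 4 symbols
# len(bits) = 10, matching the total bit length N of the trellis example
```

Note that $N$ varies with the symbol realization; only sequences whose codeword lengths sum exactly to $N$ are codewords of $\mathbb{B}$.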

For the concatenation of VLC and channel coding, the turbo-like evaluation of residual source redundancy and of artificial channel-code redundancy makes stepwise quality gains possible through iterative decoding. The ISCD scheme consists of two constituent decoders with soft inputs and soft outputs (SISO). The channel decoder processes the received code sequence $\tilde{\mathbf{y}}_1^T = \{\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_T\}$ and combines it with the source a priori information to compute extrinsic information. The source decoder computes an extrinsic value which, after interleaving, can be exploited as additional a priori knowledge by the channel decoder in the next iteration. Exchanging

extrinsic information between the two constituent decoders is repeated iteratively until the reliability gain becomes insignificant. After the last iteration, the a posteriori information from the source decoder is used as a priori knowledge in a MAP VLC sequence estimation [46]; a subsequent dequantization yields the estimate $\hat{v}_t$ of the transmitted source sample $v_t$.
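The iterative exchange described above can be sketched as a simple loop. All function names here (`cd`, `sd`, and the interleavers) are illustrative assumptions; each constituent decoder is assumed to map received and a priori information to extrinsic information:

```python
# High-level skeleton of the ISCD iteration loop described above. The
# constituent decoders cd (channel) and sd (source) and the (de)interleavers
# are passed in as functions; all names are illustrative, not from the thesis.
def iscd_decode(rx, n_iter, cd, sd, interleave, deinterleave):
    ext_sd = None                         # no a priori knowledge in round 1
    for _ in range(n_iter):
        ext_cd = cd(rx, a_priori=ext_sd)  # SISO channel decoding
        # source decoding in the deinterleaved domain, then re-interleave
        ext_sd = interleave(sd(deinterleave(ext_cd)))
    return ext_sd   # feeds the final MAP VLC sequence estimation
```

In practice the loop would be terminated early once the reliability gain between iterations becomes insignificant.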

The determination rules for the extrinsic information of SBSD have been derived in [20], but a slight modification is proposed here which allows a delay of $T$ samples in the decoding process. We choose the length $T$ in compliance with the defined size of an interleaving block. If the time correlation between consecutive symbols is to be exploited on the basis of a first-order Markov model, then the entire history of received codewords $\tilde{\mathbf{u}}_1^t$ and, if available, the future codewords $\tilde{\mathbf{u}}_{t+1}^T$ have to be considered as well.

To this end, we derive a forward-backward recursive algorithm that shows how the past and future received codewords can be transformed into extrinsic information usable in the iterative decoding process. The basic strategy is to jointly exploit the channel information, the source a priori information, and the extrinsic information resulting from SISO channel decoding. We consider a sequence of $T$ source symbols, each of which is encoded by a VLC with alphabet size $2^M$. All possible bit sequences with symbol length $T$ and bit length $N$ can be represented in a VLC trellis diagram. An example of a VLC trellis representation is shown in Figure 5.2 for $T = 4$ and $N = 10$, where $\mathcal{N}_t$ denotes the set of all possible bit positions $g_t$ at time instant $t$. Note that, given an input $u_t = \lambda$, the state transition runs from $g_{t-1}$ to $g_t = l(\mathbf{c}(\lambda)) + g_{t-1}$. Taking the trellis states $g_t$ and $g_{t-1}$ into consideration, the a posteriori probability (APP) for each possibly transmitted symbol $u_t = \lambda$, given the received sequence $\tilde{\mathbf{y}}_1^T = \{\tilde{\mathbf{u}}_1^T, \tilde{\mathbf{z}}_1^T\}$, is given by

$$P_{SD}(u_t = \lambda \mid \tilde{\mathbf{y}}_1^T) = \sum_{g_t} \sum_{g_{t-1}} P_{SD}(g_{t-1}, u_t = \lambda, g_t \mid \tilde{\mathbf{y}}_1^T) \qquad (5.1)$$

where $P_{SD}(g_{t-1}, u_t = \lambda, g_t \mid \tilde{\mathbf{y}}_1^T)$ can be further decomposed by using Bayes' theorem

Figure 5.2: VLC trellis representation for T = 4, N = 10 and C = {c(0) = 11, c(1) = 00, c(2) = 101, c(3) = 010}.

as

$$P_{SD}(g_{t-1}, u_t = \lambda, g_t \mid \tilde{\mathbf{y}}_1^T) = \frac{\alpha_t^u(\lambda)\,\beta_t^u(\lambda) \cdot P(\tilde{\mathbf{z}}_1^T \mid g_{t-1}, u_t = \lambda, g_t, \tilde{\mathbf{u}}_1^T)}{P(\tilde{\mathbf{y}}_1^T)} \qquad (5.2)$$

where $\alpha_t^u(\lambda) = P(g_{t-1}, u_t = \lambda, g_t, \tilde{\mathbf{u}}_1^t)$ and $\beta_t^u(\lambda) = P(\tilde{\mathbf{u}}_{t+1}^T \mid g_{t-1}, u_t = \lambda, g_t, \tilde{\mathbf{u}}_1^t)$. Using the Markov property of the symbols and the memoryless assumption of the channel,

the forward and backward recursions of the algorithm can be expressed as
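A sketch of these recursions in the standard BCJR form over the VLC trellis, written here under the definitions of $\alpha_t^u$ and $\beta_t^u$ above and consistent with the transition probability of (5.6) and the summation of (5.11) (the original equations (5.3)-(5.5) may differ in detail):

```latex
% Forward recursion (sketch): channel term times transition term
\alpha_t^u(\lambda) \;=\; P(\tilde{\mathbf{u}}_t \mid u_t = \lambda, g_t)
  \sum_{g_{t-2}} \sum_{q \in \mathbb{I}}
  P(u_t = \lambda, g_t \mid u_{t-1} = q, g_{t-1}, g_{t-2})\,
  \alpha_{t-1}^u(q)

% Backward recursion (sketch)
\beta_t^u(\lambda) \;=\;
  \sum_{g_{t+1}} \sum_{q \in \mathbb{I}}
  P(\tilde{\mathbf{u}}_{t+1} \mid u_{t+1} = q, g_{t+1})\,
  P(u_{t+1} = q, g_{t+1} \mid u_t = \lambda, g_t, g_{t-1})\,
  \beta_{t+1}^u(q)
```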

where

$$P(u_t = \lambda, g_t \mid u_{t-1} = q, g_{t-1}, g_{t-2}) = \frac{1}{C(q, g_{t-1})} \begin{cases} P(u_t = \lambda \mid u_{t-1} = q), & \text{for } l(\mathbf{c}(\lambda)) = g_t - g_{t-1} \\ 0, & \text{otherwise} \end{cases} \qquad (5.6)$$

with the normalization factor

$$C(q, g_{t-1}) = \sum_{g_t} \sum_{\lambda \in \mathbb{I}:\, l(\mathbf{c}(\lambda)) = g_t - g_{t-1}} P(u_t = \lambda \mid u_{t-1} = q). \qquad (5.7)$$
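The trellis constraint and the normalization (5.6)-(5.7) can be illustrated numerically. The sketch below first computes the admissible position sets $\mathcal{N}_t$ of Figure 5.2 and then the normalized transition probabilities; the Markov table `P_cond` and all helper names are made-up assumptions for demonstration, not from the thesis:

```python
# Illustrative sketch of the trellis-constrained transition probability
# (5.6)-(5.7) for the code of Figure 5.2: C = {11, 00, 101, 010}.
lengths = {0: 2, 1: 2, 2: 3, 3: 3}      # codeword lengths l(c(lambda))
T, N = 4, 10
# Hypothetical first-order Markov table P(u_t = lam | u_{t-1} = q)
P_cond = {q: {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1} for q in range(4)}

def trellis_positions():
    """Sets N_t of bit positions g_t reachable at time t that can still
    terminate exactly at bit position N after T symbols (cf. Figure 5.2)."""
    fwd = [{0}]
    for _ in range(T):
        fwd.append({g + l for g in fwd[-1] for l in lengths.values()})
    bwd = [{N}]
    for _ in range(T):
        bwd.append({g - l for g in bwd[-1] for l in lengths.values()})
    return [f & b for f, b in zip(fwd, reversed(bwd))]

def transition(lam, g_t, q, g_prev, allowed):
    """P(u_t = lam, g_t | u_{t-1} = q, g_{t-1} = g_prev) per (5.6), with
    g_t restricted to the admissible position set `allowed` at time t."""
    if g_t not in allowed or g_t - g_prev != lengths[lam]:
        return 0.0
    # normalization factor C(q, g_prev) of (5.7)
    C = sum(P_cond[q][l] for l in lengths if g_prev + lengths[l] in allowed)
    return P_cond[q][lam] / C
```

With this normalization, the transition probabilities out of any state $(q, g_{t-1})$ sum to one over all admissible pairs $(\lambda, g_t)$.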

Assuming an AWGN channel with zero-mean noise of variance $\sigma_n^2 = N_0 / (2 E_s)$, the conditional probability density function (pdf) of $\tilde{\mathbf{u}}_t$ can be formulated as

$$P(\tilde{\mathbf{u}}_t \mid u_t = \lambda, g_t) = \prod_{m=1}^{l(\mathbf{c}(\lambda))} p\big(\tilde{b}(g_{t-1} + m) \mid u_t(m)\big) = \left(\frac{1}{2\pi\sigma_n^2}\right)^{\!l(\mathbf{c}(\lambda))/2} \cdot e^{-\frac{E_s}{N_0} \sum_{m=1}^{l(\mathbf{c}(\lambda))} \left(\tilde{b}(g_{t-1}+m) - u_t(m)\right)^2} \qquad (5.8)$$
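A minimal numerical sketch of the codeword likelihood (5.8): a product of per-bit Gaussian pdfs for BPSK over AWGN, with $\sigma_n^2 = N_0/(2E_s)$ and $E_s = 1$ assumed. The function name and interface are illustrative:

```python
import math

# Sketch of the codeword likelihood (5.8): the received soft bits rx_bits
# are the b-tilde(g_{t-1}+m), code_bits are the BPSK symbols (+/-1) of the
# codeword c(lambda); es_n0 is the ratio Es/N0 (assumption: Es = 1).
def codeword_likelihood(rx_bits, code_bits, es_n0):
    """P(u_t-tilde | u_t = lambda, g_t) as a product of per-bit Gaussians."""
    sigma2 = 1.0 / (2.0 * es_n0)     # noise variance sigma_n^2 = N0/(2Es)
    p = 1.0
    for r, b in zip(rx_bits, code_bits):
        p *= math.exp(-(r - b) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
    return p
```

The exponent $-(r-b)^2/(2\sigma_n^2)$ equals $-(E_s/N_0)(r-b)^2$, matching the exponential term of (5.8); a noiseless match yields a strictly higher likelihood than any mismatching codeword.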

An iterative process using the SISO source decoder as a constituent decoder becomes realizable if the APP $P_{SD}(g_{t-1}, u_t = \lambda, g_t \mid \tilde{\mathbf{y}}_1^T)$ can be separated into four factors: the a priori probability $P(u_t = \lambda, g_t)$ in terms of the a priori probability $P(u_t = \lambda)$, the channel-related probability $P_c(u_t = \lambda, g_t) = P(\tilde{\mathbf{u}}_t \mid u_t = \lambda, g_t)$, and two extrinsic terms resulting from source and channel decoding. In order to determine each of the four terms, we substitute (5.5) into (5.3) and rewrite (5.2) as follows:

$$P_{SD}(g_{t-1}, u_t = \lambda, g_t \mid \tilde{\mathbf{y}}_1^T) = P(u_t = \lambda, g_t) \cdot P_c(u_t = \lambda, g_t) \cdot P_{SD}^{[ext]}(u_t = \lambda, g_t) \cdot P_{CD}^{[ext]}(u_t = \lambda, g_t) \qquad (5.9)$$

where

$$P_{CD}^{[ext]}(u_t = \lambda, g_t) = P(\tilde{\mathbf{z}}_1^T \mid g_{t-1}, u_t = \lambda, g_t, \tilde{\mathbf{u}}_1^T) \qquad (5.10)$$

and

$$P_{SD}^{[ext]}(u_t = \lambda, g_t) = \beta_t^u(\lambda) \cdot \sum_{g_{t-2}} \sum_{q} P(u_t = \lambda, g_t \mid u_{t-1} = q, g_{t-1}, g_{t-2})\, \alpha_{t-1}^u(q). \qquad (5.11)$$

A detailed derivation of (5.10) is presented in the next section. Within the iterations, the precision of the APP estimation can be enhanced by multiplying $P(u_t = \lambda, g_t \mid u_{t-1} = q, g_{t-1}, g_{t-2})$ in (5.5) by $P_{CD}^{[ext]}(u_t = \lambda, g_t)$ from the channel decoder. The interleaved extrinsic probability can be computed according to

$$P_{SD}^{[ext]}(x_t = \lambda, g_t) = \frac{P_{SD}(g_{t-1}, x_t = \lambda, g_t \mid \tilde{\mathbf{y}}_1^T)}{P(x_t = \lambda, g_t)\, P_c(x_t = \lambda, g_t)\, P_{CD}^{[ext]}(x_t = \lambda, g_t)} \qquad (5.12)$$

and used as new a priori information in the next channel decoding round. Notice that the term $P_{SD}(g_{t-1}, x_t = \lambda, g_t \mid \tilde{\mathbf{y}}_1^T)$ represents the APP of each interleaved symbol and can be computed by

$$P_{SD}(g_{t-1}, x_t = \lambda, g_t \mid \tilde{\mathbf{y}}_1^T) = C \cdot P_{SD}(x_t = \lambda \mid \tilde{\mathbf{y}}_1^T) \qquad (5.13)$$

with the normalization factor

$$C = \sum_{q} \sum_{\{g_t, g_{t-1}:\, g_t - g_{t-1} = l(\mathbf{c}(q))\}} P_{SD}(x_t = q \mid \tilde{\mathbf{y}}_1^T),$$

where $P_{SD}(x_t = q \mid \tilde{\mathbf{y}}_1^T) = \Phi\!\left(P_{SD}(u_t = q \mid \tilde{\mathbf{y}}_1^T)\right)$.
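The factor-wise separation in (5.12) can be sketched numerically as follows. The scalar inputs and the function name are illustrative assumptions; in a real decoder these would be per-symbol probabilities over the trellis:

```python
# Sketch of the extrinsic computation (5.12): divide the APP by the a
# priori, channel-related, and channel-extrinsic factors so that only the
# "new" information is fed back to the channel decoder. Toy scalar values.
def source_extrinsic(app, a_priori, p_channel, cd_ext, eps=1e-30):
    """P_SD^[ext](x_t = lambda, g_t) per (5.12); eps guards division by 0."""
    return app / max(a_priori * p_channel * cd_ext, eps)

# If the APP equals the product of the three other factors, the extrinsic
# term is exactly 1, i.e. the source decoder contributes no new information.
assert source_extrinsic(0.4 * 0.5 * 0.9, 0.4, 0.5, 0.9) == 1.0
```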

5.4 SISO Channel Decoding of