
Accumulate Codes Based on 1+D Convolutional Outer Codes

Mao-Ching Chiu, Member, IEEE, and Hsiao-feng (Francis) Lu, Member, IEEE

Abstract—A new construction of good, easily encodable, and soft-decodable codes is proposed in this paper. The construction is based on serially concatenating several simple 1 + D convolutional codes as the outer code and a rate-1 1/(1 + D) accumulate code as the inner code. These codes have very low encoding complexity and require only one shift-forward register for each encoding branch. The input-output weight enumerators of these codes are also derived. Divsalar's simple bound technique is applied to analyze the bit error rate performance and to assess the minimal required signal-to-noise ratio (SNR) for these codes to achieve reliable communication over the AWGN channel. Simulation results show that the proposed codes can provide good performance under iterative decoding.

Index Terms—Low-density parity-check (LDPC) codes, accumulate codes, convolutional codes.

I. INTRODUCTION

Low-density parity-check (LDPC) codes [1] have attracted much attention during the last decade due to their remarkable capability of achieving near-capacity error performance. LDPC codes can be decoded by the well-known "belief propagation" (BP) algorithm, also known as the sum-product algorithm (SPA) or min-sum algorithm [2], [3]. Most good LDPC codes found in the literature, while having near-capacity error performance, have a computer-generated pseudo-random parity-check matrix and do not possess a simple encoding structure. Thus the encoding complexity of these pseudo-random LDPC codes can be as high as O(n^2), where n is the codeword length, if direct matrix multiplication is performed.

In order to have a low-complexity encoder, several capacity-approaching LDPC codes with structure have been proposed [4]–[6], and the encoders of these codes can be easily implemented by using shift-register circuits. Another type of capacity-approaching codes, called repeat-accumulate (RA) codes, is proposed in [7]. The RA code is a serial concatenation of a repetition code and a rate-1 accumulate code. RA codes can be easily encoded and have remarkable error performance under iterative decoding [7]–[9]. By parallel concatenating several 1/(1 + D) convolutional codes with a regular puncturing pattern, the resulting codes, termed zigzag codes, are proposed in [10]. The encoders of zigzag codes are also very simple, and the codes can be decoded using low-complexity soft-in/soft-out decoders.

Paper approved by K. Narayanan, the Editor for Optical Communication of the IEEE Communications Society. Manuscript received March 19, 2006; revised May 5, 2007 and January 4, 2008. This work was supported by the National Science Council of the Republic of China under Grants NSC 96-2219-E-194-004, NSC 97-2628-E-194-001-MY2, NSC 97-2219-E-009-014, and NSC 96-2628-E-009-172-MY3.

M.-C. Chiu is with the Department of Communications Engineering, National Chung Cheng University, Min-Hsiung, Chia-Yi, 621, Taiwan, R.O.C. (e-mail: ieemcc@ccu.edu.tw).

H.-f. Lu is with the Department of Communications Engineering, National Chiao Tung University, Hsinchu, 300, Taiwan, R.O.C. (e-mail: francis@cc.nctu.edu.tw).

Digital Object Identifier 10.1109/TCOMM.2009.02.060165

Recently, in [12], the authors proposed two classes of product accumulate (PA) codes, termed PA-I and PA-II codes. The PA codes are constructed from the serial concatenation of an outer product code, an interleaver, and an accumulate code. For serially concatenated codes, it was shown in [7], [13] that, to have a better interleaving gain, the outer code should have minimum Hamming distance greater than or equal to 3. Therefore, the key task is to construct an outer code meeting this condition, or at least one in which the number of weight-2 codewords is as small as possible [12]. In [12], the outer code of the PA codes is a direct product or turbo product of single-parity-check (SPC) codes. In this paper, we propose another method to construct an outer code meeting the above requirement. The motivation is to construct easily encodable and soft-decodable codes with good error performance. The proposed code consists of several simple 1 + D convolutional codes as the outer code and a rate-1 1/(1 + D) accumulate code as the inner code. The outer code has a very simple encoder, just one shift-forward register for each encoding branch, and has a relatively low density of 1's in its parity-check matrix; hence, it is suitable for decoding with the conventional BP algorithm. To further understand these codes, we derive the corresponding input-output weight enumerators (IOWEs) and apply the simple bound technique [14] to analyze the BER performance of these codes as well as the minimal required signal-to-noise ratio (SNR) for these codes to achieve reliable communication over the AWGN channel. Through theoretical analysis and computer simulations, we find that the error performance of the proposed codes is remarkably good.

This paper is organized as follows. The code construction is given in Section II. Section III presents the IOWEs of these codes. Simulations and numerical results are provided in Section IV. Section V concludes the paper.

II. CODE CONSTRUCTION

Fig. 1 shows the overall structure of the proposed single-feedforward-register convolutional accumulate (SFRCA) codes. The k = mr information bits are first de-multiplexed into m blocks, each of size r. Then the m blocks are independently encoded by a rate-1/2 convolutional code with systematic generator matrix [1, 1 + D], which can be easily implemented by a feed-forward register, as shown in Fig. 1. The parity bits from each encoder, except those from the first encoder, are further independently interleaved by interleavers Π_1, ..., Π_{m−1}. The interleaved bits from each 1 + D encoder are summed together, using modulo-2 addition, at the encoder output to yield r parity bits. Thus the total number of coded bits at the outer encoder output, including the information bits, equals (m + 1)r. These (m + 1)r coded bits are interleaved again by an interleaver of size (m + 1)r, denoted by Π in Fig. 1. Finally, a rate-1 convolutional code with generator 1/(1 + D), also known as the accumulate code, is employed as the inner code for the second-round encoding. It is obvious that the overall code is an (n, k) = ((m + 1)r, mr) linear code with rate R = m/(m + 1), for m ≥ 1.

Fig. 1. Proposed accumulate code with 1 + D convolutional encoders.
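To make the encoding flow concrete, the following is a minimal Python sketch of the construction just described. The parameters m and r, the random interleavers, and all function names are illustrative assumptions, not the authors' implementation.

    # Hedged sketch of the SFRCA encoder described above; parameters and
    # interleavers are illustrative placeholders.
    import numpy as np

    def encode_1_plus_d(u):
        """Parity branch of the rate-1/2 [1, 1+D] encoder, truncated to len(u):
        p[i] = u[i] XOR u[i-1], with the register initialized to zero."""
        p = u.copy()
        p[1:] ^= u[:-1]
        return p

    def encode_accumulate(v):
        """Rate-1 1/(1+D) accumulate code: running XOR (cumulative sum mod 2)."""
        return np.cumsum(v) % 2

    def sfrca_encode(info, m, r, outer_perms, inner_perm):
        """info: length m*r bit vector; returns the (m+1)*r codeword."""
        blocks = info.reshape(m, r)                  # de-multiplex into m blocks
        parity = encode_1_plus_d(blocks[0])          # first branch: no interleaver
        for i in range(1, m):
            # interleave branch parity, then mod-2 sum at the outer encoder output
            parity = parity ^ encode_1_plus_d(blocks[i])[outer_perms[i - 1]]
        outer_cw = np.concatenate([info, parity])    # systematic outer codeword, length (m+1)*r
        return encode_accumulate(outer_cw[inner_perm])   # interleaver Pi + inner accumulate code

    # Example with arbitrary parameters.
    rng = np.random.default_rng(0)
    m, r = 3, 8
    outer_perms = [rng.permutation(r) for _ in range(m - 1)]
    inner_perm = rng.permutation((m + 1) * r)
    codeword = sfrca_encode(rng.integers(0, 2, m * r), m, r, outer_perms, inner_perm)

Each branch indeed needs only a single feed-forward register (the u[i-1] term), and the overall rate is mr/((m + 1)r) = m/(m + 1), matching the construction above.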

It can be shown that the parity-check matrix of the proposed outer code has the following form:

$$ H = \left[\; G_{1+D}^T \;\; \Pi_1^T G_{1+D}^T \;\; \cdots \;\; \Pi_{m-1}^T G_{1+D}^T \;\; I_r \;\right], \qquad (1) $$

where

$$ G_{1+D} = \begin{bmatrix} 1 & 1 & 0 & \cdots & 0 \\ 0 & 1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}, \qquad (2) $$

and Π_i, i = 1, ..., m − 1, are permutation matrices acting as interleavers. It can be easily seen from (1) and (2) that the number of 1's in H is small. The row weights of H are at most 2m + 1 and the column weights are at most 2; hence, the proposed code can be efficiently decoded by the BP algorithm. The iterative decoding algorithm can be found in [12]. In addition, the decoder does not have to store the whole parity-check matrix (1) for decoding, since the parity-check matrix can be easily generated if the interleavers are known at the decoder end.
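The structure of (1) and (2) is easy to reproduce numerically. Below is a small sketch, with arbitrary random permutations standing in for Π_1, ..., Π_{m−1}, that assembles H and checks the row- and column-weight claims above.

    # Build the outer parity-check matrix of (1)-(2) and verify its sparsity.
    import numpy as np

    def g_1_plus_d(r):
        """r x r truncated 1+D generator: 1's on the diagonal and first superdiagonal."""
        return np.eye(r, dtype=int) + np.eye(r, k=1, dtype=int)

    def outer_parity_check(m, r, perm_matrices):
        G = g_1_plus_d(r)
        blocks = [G.T]                                  # first branch: no interleaver
        for P in perm_matrices:                         # P: r x r permutation matrix Pi_i
            blocks.append((P.T @ G.T) % 2)
        blocks.append(np.eye(r, dtype=int))             # identity part for the r parity bits
        return np.hstack(blocks)                        # r x (m+1)r

    m, r = 3, 6
    rng = np.random.default_rng(1)
    perms = [np.eye(r, dtype=int)[rng.permutation(r)] for _ in range(m - 1)]
    H = outer_parity_check(m, r, perms)
    assert H.sum(axis=1).max() <= 2 * m + 1             # row weights at most 2m + 1
    assert H.sum(axis=0).max() <= 2                     # column weights at most 2

Since the column weights never exceed 2 and the matrix is sparse, running BP on the Tanner graph of H (together with the inner accumulate code, as in [12]) is straightforward.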

For codes with accumulate codes acting as inner codes, it is known [12] that the number of weight-2 codewords of the outer code should be as small as possible. In our construction it can be shown that the average number of weight-2 outer codewords is of the order O(m^2). As a result, the number of weight-2 outer codewords depends only on the parameter m, which is related only to the rate of the constructed code, not to the block length. Though the outer code itself is not a good code, the interleaver Π preceding the inner 1/(1 + D) code works as a random scrambler that can map low-weight codewords to high-weight codewords, and therefore yields a good distance spectrum.
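As a quick empirical illustration of the weight-2 claim, note that a weight-2 codeword of the outer code corresponds exactly to a pair of identical columns in H of (1). The sketch below counts such pairs for randomly drawn interleavers; the sizes are arbitrary choices for illustration, not from the paper.

    # Count weight-2 outer codewords (= pairs of identical columns of H) for
    # random interleavers; the counts grow with m but stay roughly flat in r,
    # consistent with the O(m^2) statement above.
    from collections import Counter
    import numpy as np

    def num_weight2_codewords(m, r, rng):
        G = np.eye(r, dtype=np.int8) + np.eye(r, k=1, dtype=np.int8)   # eq. (2)
        blocks = [G.T]
        for _ in range(m - 1):
            blocks.append(G.T[rng.permutation(r)])   # a row permutation of G^T = Pi^T G^T
        blocks.append(np.eye(r, dtype=np.int8))
        H = np.hstack(blocks)
        supports = Counter(tuple(np.flatnonzero(H[:, j])) for j in range(H.shape[1]))
        return sum(c * (c - 1) // 2 for c in supports.values())

    rng = np.random.default_rng(2)
    for m, r in [(3, 100), (3, 400), (6, 100), (6, 400)]:
        avg = np.mean([num_weight2_codewords(m, r, rng) for _ in range(30)])
        print(f"m={m}, r={r}: average number of weight-2 outer codewords ~ {avg:.1f}")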

III. INPUT-OUTPUT WEIGHT ENUMERATOR

In this section, we will investigate the input-output weight enumerator (IOWE) of the proposed code. The IOWE of an (n, k) binary linear block code C is defined as

$$ A^C(x, y) = \sum_{w=0}^{k} \sum_{h=0}^{n} A^C_{w,h}\, x^w y^h, \qquad (3) $$

where A^C_{w,h} is the number of codewords in C having Hamming weight h, provided that the corresponding input message vectors are of Hamming weight w. Let C be an (n, k) systematic binary linear block code generated by a k × n generator matrix G = [I_k | P]. For every codeword c ∈ C, its first k bits are termed input (or message) bits, and the last n − k bits are termed redundancies. The input-redundancy weight enumerator (IRWE) of C is defined as

$$ B^C(x, y) = \sum_{w=0}^{k} \sum_{h=0}^{n-k} B^C_{w,h}\, x^w y^h, \qquad (4) $$

where B^C_{w,h} is the number of codewords in C having redundancy weight h, provided that the corresponding input message vectors are of Hamming weight w. It is easy to see that A^C(x, y) = B^C(xy, y).
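As a concrete illustration of definitions (3) and (4), the short brute-force sketch below tabulates A^C_{w,h} and B^C_{w,h} for a small systematic code (a [7,4] Hamming code in systematic form, chosen only as a toy example) and checks the identity A^C(x, y) = B^C(xy, y), i.e., A^C_{w,h} = B^C_{w,h−w}.

    # Brute-force IOWE and IRWE of a toy systematic code, and a check of
    # A(x, y) = B(xy, y); the generator below is an illustrative example.
    import itertools
    import numpy as np

    P = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 1, 1],
                  [1, 0, 1]])
    G = np.hstack([np.eye(4, dtype=int), P])        # G = [I_k | P], k = 4, n = 7

    k, n = G.shape
    A = np.zeros((k + 1, n + 1), dtype=int)         # A[w, h]: input weight w, codeword weight h
    B = np.zeros((k + 1, n - k + 1), dtype=int)     # B[w, h]: input weight w, redundancy weight h
    for u in itertools.product([0, 1], repeat=k):
        u = np.array(u)
        c = (u @ G) % 2
        A[u.sum(), c.sum()] += 1
        B[u.sum(), c[k:].sum()] += 1

    # Codeword weight = input weight + redundancy weight, hence A_{w,h} = B_{w,h-w}.
    assert all(A[w, h] == B[w, h - w]
               for w in range(k + 1) for h in range(w, w + n - k + 1))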

We first focus on the IRWE of the outer code. We will assume that {Π_1, ..., Π_{m−1}} are independent and uniform interleavers [15]. Let C_o denote the set of codewords of the outer code. It can be shown that the dual code C_o^⊥ of C_o is actually a simple turbo code, concatenated in parallel with m identical 1 + D encoders and m − 1 interleavers. Therefore, by extending the results of [15], we have the following theorem.

Theorem 1: The IRWE of the code C_o^⊥ is given by

$$ B^{C_o^\perp}(x, y) = \sum_{w=0}^{r} \sum_{h=0}^{mr} B^{C_o^\perp}_{w,h}\, x^w y^h = \sum_{w=0}^{r} x^w\, \frac{1}{\binom{r}{w}^{m-1}} \left[ \sum_{q=0}^{r} A^{1+D}_{w,q}\, y^q \right]^{m}, \qquad (5) $$

where

$$ A^{1+D}_{w,h} = \binom{r - w}{\lfloor h/2 \rfloor} \binom{w - 1}{\lceil h/2 \rceil - 1} $$

is the IOWE of a 1 + D convolutional code truncated to length r.
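The closed form for A^{1+D}_{w,h} can be checked directly for small r. In the sketch below, the binomials are read with ⌊h/2⌋ and ⌈h/2⌉ − 1 (our reading of the truncated formula); the exhaustive enumeration is the ground truth against which the expression is compared.

    # Exhaustive check of the truncated-1+D IOWE used in Theorem 1 (small r).
    import itertools
    from math import comb

    def binom(n, k):
        return comb(n, k) if 0 <= k <= n else 0

    def iowe_1_plus_d_bruteforce(r):
        counts = {}
        for u in itertools.product([0, 1], repeat=r):
            p = [u[0]] + [u[i] ^ u[i - 1] for i in range(1, r)]    # parity of [1, 1+D]
            key = (sum(u), sum(p))                                 # (input weight, parity weight)
            counts[key] = counts.get(key, 0) + 1
        return counts

    def iowe_1_plus_d_formula(r, w, h):
        if w == 0:
            return 1 if h == 0 else 0
        return binom(r - w, h // 2) * binom(w - 1, (h + 1) // 2 - 1)

    r = 8
    brute = iowe_1_plus_d_bruteforce(r)
    for w in range(r + 1):
        for h in range(r + 1):
            assert brute.get((w, h), 0) == iowe_1_plus_d_formula(r, w, h)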

We are now in a position to relate the IOWE of the dual code C_o^⊥ to that of C_o. Given the generator matrix of C_o^⊥ in the form [P | I_r], we will set the generator of C_o as [I_{mr} | P^T]. In this case, the first mr bits of a codeword c ∈ C_o are termed input bits and the last r bits are termed redundancies. On the other hand, the first mr bits of a codeword c ∈ C_o^⊥ are termed redundancies and the last r bits are termed input bits. Now, we may keep track of the weights of the two parts separately. Such weight separation falls into the category of the split weight enumerator considered in [11], [16] and is reproduced below.

Definition 1: For any c = (c_1, c_2, ..., c_{mr}, c_{mr+1}, ..., c_{(m+1)r}) ∈ C_o^⊥, let w_L(c) = w_H(c_1, c_2, ..., c_{mr}) and w_R(c) = w_H(c_{mr+1}, ..., c_{(m+1)r}) be respectively the left- and right-weights of c, where w_H(·) is the Hamming weight of a binary vector. The split weight enumerator of C_o^⊥ is defined as

$$ \Lambda^{C_o^\perp}(x, y, X, Y) = \sum_{c \in C_o^\perp} x^{r - w_R(c)}\, y^{w_R(c)}\, X^{mr - w_L(c)}\, Y^{w_L(c)}. $$


Notice that the IRWE B^{C_o^⊥}(x, y) = Λ^{C_o^⊥}(1, x, 1, y). Analogous to the well-known MacWilliams transform that relates the weight enumerator of a code to that of its dual, such a relation also exists for split weight enumerators.

Lemma 2: Let Λ^{C_o^⊥} and Λ^{C_o} be respectively the split weight enumerators of C_o^⊥ and its dual C_o. Then

$$ \Lambda^{C_o}(x, y, X, Y) = \frac{1}{|C_o^\perp|}\, \Lambda^{C_o^\perp}(x + y,\; x - y,\; X + Y,\; X - Y). $$

Lemma 2 implies the following result, which relates B^{C_o}(x, y) to B^{C_o^⊥}(x, y).

Theorem 3: Let B^{C_o^⊥}(x, y) be the IRWE of the code C_o^⊥. Then the IRWE of the outer code C_o is given by

$$ B^{C_o}(x, y) = \frac{(1 + x)^{mr} (1 + y)^{r}}{2^{r}}\, B^{C_o^\perp}\!\left( \frac{1 - y}{1 + y},\; \frac{1 - x}{1 + x} \right). \qquad (6) $$

Proof: Note that

$$ B^{C_o}(x, y) = \sum_{w=0}^{mr} \sum_{h=0}^{r} B^{C_o}_{w,h}\, x^w y^h = \Lambda^{C_o}(1, y, 1, x), $$

and the rest follows from Lemma 2.
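Theorem 3 is a purely mechanical substitution once B^{C_o^⊥}(x, y) is known. The toy check below builds a small dual pair [I_k | P^T] and [P | I_r] from an arbitrary parity matrix P (not one from the paper), computes both IRWEs by brute force, and verifies (6) symbolically.

    # Symbolic verification of (6) on a small dual pair of systematic codes.
    import itertools
    import numpy as np
    import sympy as sp

    x, y = sp.symbols('x y')
    k, r = 4, 3                                # k plays the role of mr in the paper
    P = np.array([[1, 0, 1, 1],
                  [0, 1, 1, 0],
                  [1, 1, 0, 1]])               # arbitrary r x k example

    def irwe(G):
        """Brute-force IRWE of the systematic code generated by G = [I | .]."""
        kk = G.shape[0]
        poly = sp.Integer(0)
        for u in itertools.product([0, 1], repeat=kk):
            c = (np.array(u) @ G) % 2
            poly += x ** int(sum(u)) * y ** int(c[kk:].sum())
        return sp.expand(poly)

    # C_o^perp is generated by [P | I_r]; reordering its bits to [I_r | P] does
    # not change the IRWE (input and redundancy weights are preserved).
    B_perp = irwe(np.hstack([np.eye(r, dtype=int), P]))
    B_o = irwe(np.hstack([np.eye(k, dtype=int), P.T]))    # generator [I_k | P^T] of C_o

    rhs = ((1 + x) ** k * (1 + y) ** r / 2 ** r
           * B_perp.xreplace({x: (1 - y) / (1 + y), y: (1 - x) / (1 + x)}))
    assert sp.cancel(rhs - B_o) == 0           # right-hand side of (6) equals B^{C_o}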

The IOWE of a 1/(1 + D) convolutional code truncated to length (m + 1)r is given by

$$ A^{1/(1+D)}_{w,h} = \binom{(m+1)r - h}{\lfloor w/2 \rfloor} \binom{h - 1}{\lceil w/2 \rceil - 1}. $$

Finally, by taking into account the interleaver Π and the rate-1 accumulate convolutional encoder 1/(1 + D), the overall IOWE of the proposed code is

$$ A^{C}(x, y) = \sum_{w=0}^{mr} \sum_{h=0}^{(m+1)r} \underbrace{\left( \sum_{p=0}^{r} \frac{B^{C_o}_{w,p}\, A^{1/(1+D)}_{p+w,h}}{\binom{(m+1)r}{p+w}} \right)}_{\equiv\, A^{C}_{w,h}} x^w y^h, $$

where B^{C_o}_{w,p} is the IRWE of the outer code given in (6). Based on the above, we apply the simple bound technique [14] to analyze the BER performance of these codes as well as to assess the minimal required SNR for these codes to achieve reliable communication over the AWGN channel. The results are presented in the next section.
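As a final illustration, the following sketch combines an already computed outer IRWE table B_outer[w][p] (assumed to come from (5) and (6); it is a placeholder here) with the inner accumulate code under the uniform-interleaver assumption, following the expression above. The floor/ceiling reading of the accumulate IOWE is our interpretation of the truncated formula.

    # Combine the outer IRWE with the inner 1/(1+D) code through the uniform
    # interleaver Pi; B_outer is assumed to be precomputed by the reader.
    from fractions import Fraction
    from math import comb

    def binom(n, k):
        return comb(n, k) if 0 <= k <= n else 0

    def iowe_accumulate(n, w, h):
        """IOWE of the 1/(1+D) code truncated to length n."""
        if w == 0:
            return 1 if h == 0 else 0
        return binom(n - h, w // 2) * binom(h - 1, (w + 1) // 2 - 1)

    def overall_iowe(B_outer, m, r):
        """A[w][h] of the concatenated code; B_outer[w][p] is the outer IRWE."""
        n = (m + 1) * r
        A = [[Fraction(0) for _ in range(n + 1)] for _ in range(m * r + 1)]
        for w in range(m * r + 1):
            for p in range(r + 1):
                if not B_outer[w][p]:
                    continue
                d = p + w                                  # weight entering the inner code
                for h in range(n + 1):
                    A[w][h] += Fraction(B_outer[w][p] * iowe_accumulate(n, d, h),
                                        binom(n, d))
        return A

The resulting table A[w][h] is exactly what the simple bound of [14] consumes to produce the BER bounds and SNR thresholds reported below.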

IV. SIMULATION AND NUMERICAL RESULTS

Here we consider four codes with design parameters (n, k) = (10000, 5000), (3000, 1500), (3000, 2000), and (2000, 1500), respectively. The IOWE A^C(x, y) for each code is first computed based on the results in Section III. The simple bound [14] on the BER is then employed to yield a performance upper bound as a benchmark. We also conduct computer simulations of these codes. The simulation results are obtained with 100 iterations and with at least 5 × 10^9 information bits in the high-SNR region. Two types of interleavers are considered here. The first is the random interleaver, which is generated uniformly. The second is the S-random interleaver [17]. An S-random interleaver (where S is a positive integer) is a "semirandom" interleaver constructed as follows. Each randomly selected integer is compared with the S previously selected random integers. If the difference between the current selection and any of the S previous selections is smaller than S, the random integer is rejected. This process is repeated until all distinct integers have been selected. The S parameters for the inner interleaver Π and the outer interleavers Π_i, i = 1, ..., m − 1, are denoted by S_I and S_O, respectively. Note that the interleavers in our simulations are generated only once for each code and are fixed for subsequent simulation runs.

Fig. 2. Simulation and simple bound on the bit error rate. (Curves: rate-1/2 (10000, 5000) and (3000, 1500) codes, rate-2/3 (3000, 2000) code, and rate-3/4 (2000, 1500) code, each with a uniform interleaver and with an S-random interleaver, S_I = 40, 28, 28, 24, respectively, and S_O = 16 and 15 for the rate-2/3 and rate-3/4 codes, together with the corresponding bounds.)

For the cases of random interleavers, as observed from the simulation results shown in Fig. 2, the iterative decoding converges very well and the simple bounds faithfully predict the BER performance, even the error floors. As the error floors may still be too pronounced for certain applications, S-random interleavers are often employed in place of random interleavers to reduce the correlation of bit metrics between two successive decoding processes [17] and thereby improve performance. We remark that in our coding scheme, the ordering of the coded bits from the outer code does affect the performance when an S-random interleaver is employed. In our simulations, the coded bits from the outer code are ordered such that every m systematic bits are followed by one parity bit generated by the encoders at the lower branches shown in Fig. 1. The S parameters S_I and S_O of each simulation case are indicated in Fig. 2. The results show that the slopes of the error floors are improved significantly when S-random interleavers are employed.

A. Asymptotic Performance and Long Code Simulation

We consider eleven codes of different rates from 1/2 to 32/33. For all codes, the information length is approximately 64000 bits. For a clear comparison, in Fig. 3 we plot the Shannon capacity of BPSK modulation over the AWGN channel, the SNR thresholds based on the simple bound [14], and the SNR required in actual code simulations for a BER of 10^−5. The results show that the proposed codes perform fairly close to the Shannon limits. For the low-rate code, i.e., the rate-1/2 code, the SNR gap is about 0.85 dB. However, for the high-rate codes, e.g., the rate-32/33 code, the SNR gap is reduced to only 0.47 dB. These codes are not the best codes found in the literature; however, from a practical point of view, their simple encoding and decoding schemes are a major advantage.

Fig. 3. Asymptotic performance of the proposed codes. (Curves: BPSK AWGN capacity, simple bound thresholds, and simulation with k ≈ 64000 at BER = 10^−5.)

B. Comparison with PA-I Codes

Since PA-I codes in general have better performance than PA-II codes [12], in this simulation we compare the performance of the proposed codes with that of PA-I codes. We consider rate-3/4 codes with parameters (n, k) = (1336, 1002), (5336, 4002), and (21336, 16002), respectively. Simulation results are given in Fig. 4, and it can be seen that there is no significant difference between these codes at the same length.

Fig. 4. Performance comparison between SFRCA and PA-I codes.

V. CONCLUSIONS

A new construction of concatenated codes, with the simplest [1, 1 + D] convolutional code as the outer code and the rate-1 accumulate code as the inner code, is presented in this paper. Since the outer code encoder contains only one feedforward register for each branch, the code is called a single-feedforward-register convolutional accumulate (SFRCA) code. Although the code is very simple in terms of encoding structure, it provides remarkable performance, especially in the high-rate regime. We derive the corresponding input-output weight enumerators (IOWEs) and apply Divsalar's simple bound technique to analyze the BER performance of these codes. Through the simple bound, we compute the minimal required SNR for the codes to achieve reliable communication over the AWGN channel. Simulations are also provided to show that the iterative decoder of these codes converges very well and provides good performance.

REFERENCES

[1] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.

[2] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Kaufmann, 1988.

[3] D. MacKay, "Good error correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, pp. 399–431, Mar. 1999.

[4] R. Lucas, M. P. C. Fossorier, Y. Kou, and S. Lin, "Iterative decoding of one-step majority logic decodable codes based on belief propagation," IEEE Trans. Commun., vol. 48, pp. 931–937, June 2000.

[5] Y. Kou, S. Lin, and M. P. C. Fossorier, "Low-density parity-check codes based on finite geometries: a rediscovery and new results," IEEE Trans. Inform. Theory, vol. 47, pp. 2711–2736, Nov. 2001.

[6] R. M. Tanner, D. Sridhara, A. Sridharan, T. E. Fuja, and D. J. Costello, "LDPC block and convolutional codes based on circulant matrices," IEEE Trans. Inform. Theory, vol. 50, pp. 2966–2984, Dec. 2004.

[7] D. Divsalar, H. Jin, and R. J. McEliece, "Coding theorems for 'turbo-like' codes," Proc. 1998 Allerton Conf. Commun. and Control, pp. 201–210, Sept. 1998.

[8] H. Jin, A. Khandekar, and R. McEliece, "Irregular repeat-accumulate codes," Proc. 2nd Int. Symp. on Turbo Codes and Related Topics, Brest, France, pp. 1–8, Sept. 2000.

[9] M. Yang, Y. Li, and W. E. Ryan, "Design of efficiently-encodable moderate-length high-rate irregular LDPC codes," Proc. 40th Annual Allerton Conference on Communication, Control, and Computing, pp. 1415–1424, Oct. 2002.

[10] P. Li, X. Huang, and N. Phamdo, "Zigzag codes and concatenated zigzag codes," IEEE Trans. Inform. Theory, vol. 47, pp. 800–807, Feb. 2001.

[11] H.-F. Lu, P. V. Kumar, and E.-H. Yang, "On the input-output weight enumerators of product accumulate codes," IEEE Commun. Lett., vol. 8, no. 8, pp. 520–522, Aug. 2004.

[12] J. Li, K. R. Narayanan, and C. N. Georghiades, "Product accumulate codes: a class of codes with near-capacity performance and low decoding complexity," IEEE Trans. Inform. Theory, vol. 50, pp. 31–46, Jan. 2004.

[13] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding," IEEE Trans. Inform. Theory, vol. 44, pp. 909–926, May 1998.

[14] D. Divsalar, "A simple tight bound on error probability of block codes with application to turbo codes," TMO Progress Rep. 42-139, Nov. 1999.

[15] S. Benedetto and G. Montorsi, "Unveiling turbo codes: some results on parallel concatenated coding schemes," IEEE Trans. Inform. Theory, vol. 42, pp. 409–428, Mar. 1996.

[16] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error Correcting Codes. New York: North-Holland, 1977.

[17] H. R. Sadjadpour, N. J. A. Sloane, M. Salehi, and G. Nebe, "Interleaver design for turbo codes," IEEE J. Select. Areas Commun., vol. 19, pp. 831–837, May 2001.
