
Chapter 1 Introduction

1.2 Thesis Organization

This thesis is organized as follows. In Chapter 2, the basic concepts of LDPC codes are introduced: code construction, the encoding concept, and various decoding algorithms. Chapter 3 first introduces the modified min-sum algorithm, which uses the normalization technique, and then proposes a new dynamic normalized-offset technique for the min-sum decoding algorithm. In Chapter 4, simulation results for the LDPC codes discussed in Chapters 2 and 3 are presented. In Chapter 5, the hardware architecture of the LDPC decoder is discussed. Finally, brief conclusions and future work are given in Chapter 6.

Chapter 2

Low-Density Parity-Check Codes

In this chapter, an introduction to low-density parity-check (LDPC) codes will be given, including the fundamental concepts of LDPC codes, code construction, encoding mechanisms, and decoding algorithms.

2.1 Fundamental Concept of LDPC Code

A binary LDPC code is a binary linear block code that can be defined by a sparse binary m×n parity-check matrix. The matrix is called sparse because only a small fraction of its entries are ones; in other words, most entries of the parity-check matrix are zeros and the rest are ones.

An m×n parity-check matrix H defines an (n, k, r, c)-regular LDPC code if every column vector of H has the same weight c and every row vector of H has the same weight r. Here the weight of a vector is the number of ones in the vector.

The message length is k = n − m. By counting the ones in H, it follows that n × c = m × r. Hence if m < n, then c < r. Suppose the parity-check matrix has full rank; then the code rate of H is (r − c)/r = 1 − c/r. If the column weights or the row weights are not all the same, the LDPC code is said to be irregular.

As suggested by Tanner [7], an LDPC code can be represented by a bipartite graph: an LDPC code corresponds to a unique bipartite graph, and a bipartite graph corresponds to a unique LDPC code. In a bipartite graph, one type of node, called the variable (bit) nodes, corresponds to the symbols in a codeword; the other type, called the check nodes, corresponds to the set of parity-check equations.

If the parity-check matrix H is an m×n matrix, the graph has m check nodes and n variable nodes. A variable node vi is connected to a check node cj by an edge, denoted (vi, cj), if and only if the corresponding entry of H in row j and column i is one. A cycle in a graph of nodes and edges is a sequence of connected edges that starts and ends at the same node and satisfies the condition that no node (except the initial and final node) appears more than once. The number of edges on a cycle is called the length of the cycle, and the length of the shortest cycle in a Tanner graph is called the girth of the graph.

Regular LDPC codes are those in which all nodes of the same type have the same degree; the degree of a node is the number of edges connected to it. For example, Figure 2.1 shows an (8, 4, 4, 2)-regular LDPC code and its corresponding Tanner graph.

Figure 2.1 (8, 4, 4, 2)-regular LDPC code and its corresponding Tanner graph.

In this example, there are 8 variable nodes (vi), 4 check nodes (ci), the row weight is 4, and the column weight is 2. The edges (c1, v3), (v3, c3), (c3, v7), and (v7, c1) form a cycle in the Tanner graph. Since this turns out to be the shortest cycle, the girth of this Tanner graph is 4. Irregular LDPC codes were introduced in [8] and [9].
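To make the correspondence between a parity-check matrix and its Tanner graph concrete, the following small Python sketch (an illustration of ours; the matrix is a hypothetical (8, 4, 4, 2)-regular example, not necessarily the one in Figure 2.1) checks the row and column weights, enumerates the edges, and tests for 4-cycles by looking for two columns that overlap in more than one row:

```python
import numpy as np

# A hypothetical (8, 4, 4, 2)-regular parity-check matrix (not necessarily
# the matrix of Figure 2.1).
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 1, 0, 0, 0, 0, 1, 1]])

m, n = H.shape
print("row weights   :", H.sum(axis=1))    # r = 4 at every check node
print("column weights:", H.sum(axis=0))    # c = 2 at every variable node

# One Tanner-graph edge (v_i, c_j) per 1-entry of H.
edges = [(i, j) for j in range(m) for i in range(n) if H[j, i] == 1]
print("edges:", len(edges))                # n*c = m*r = 16

# A 4-cycle exists iff two columns share 1's in two common rows,
# i.e. some off-diagonal entry of H^T H is at least 2.
overlap = H.T @ H
np.fill_diagonal(overlap, 0)
print("girth is 4:", bool((overlap >= 2).any()))
```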

2.2 Constructions of LDPC Codes

This section discusses the parity-check matrix H of an LDPC code. The design of H is where the asymptotic constraints (the parameters of the designed code class, such as the degree distribution and the rate) have to meet the practical constraints (finite dimension, girth).

Here, we describe some recipes that take practical constraints into account. Two families of techniques exist in the literature: random and deterministic ones. The design compromise is that, to increase the girth, the density of ones has to be decreased, which degrades the code performance through a low minimum distance. Conversely, for a high minimum distance, the density of ones has to be increased; because the dimensions of H are finite, this creates short cycles and thus yields poor convergence of the sum-product algorithm.

2.2.1 Random Code Construction

The first constructions of LDPC codes were random ones, proposed by Gallager [1] and MacKay [10]. The parity-check matrix in Gallager's method is a concatenation and/or superposition of sub-matrices; these sub-matrices are created by performing permutations on a particular (random or not) sub-matrix, which usually has a column weight of 1. The parity-check matrix in MacKay's method is computer-generated. These two methods are introduced below.

Gallager’s method [1]

Define an (n, r, c) parity-check matrix as a matrix of n columns that has c ones in each column, r ones in each row, and zeros elsewhere. Following this definition, an (n, r, c) parity-check matrix has nc/r rows and thus a code rate of at least 1 − c/r. In order to construct an ensemble of (n, r, c) matrices, consider first the special (n, r, c) matrix in Figure 2.2, where n, r, and c are 20, 4, and 3, respectively.

Figure 2.2 Example of an LDPC code matrix, where (n, r, c) = (20, 4, 3)

This matrix is divided into c sub-matrices, each containing a single 1 in each column. The first of these sub-matrices contains all its 1's in descending order: the i-th row contains 1's in columns (i−1)r + 1 to ir. The other sub-matrices are merely column permutations of the first. We define the ensemble of (n, r, c) codes as the ensemble resulting from random permutations of the columns of each of the bottom (c−1) sub-matrices of a matrix such as in Figure 2.2, with equal probability assigned to each permutation. This definition is somewhat arbitrary and is made for mathematical convenience. In fact, such an ensemble does not include all (n, r, c) codes as just defined. Also, at least (c−1) rows in each matrix of the ensemble are linearly dependent. This simply means that the codes have a slightly higher information rate than the matrix indicates.
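A minimal sketch of this ensemble construction (our own Python illustration; function and variable names are hypothetical) stacks the deterministic first sub-matrix with c−1 randomly column-permuted copies:

```python
import numpy as np

def gallager_parity_check(n, r, c, rng=np.random.default_rng(0)):
    """Random (n, r, c) parity-check matrix from Gallager's ensemble."""
    assert n % r == 0, "n must be a multiple of r"
    rows_per_sub = n // r                       # each sub-matrix has n/r rows
    # First sub-matrix: row i has 1's in columns (i-1)r+1 .. ir (1-based),
    # i.e. columns (i-1)*r .. i*r - 1 in 0-based indexing.
    first = np.zeros((rows_per_sub, n), dtype=int)
    for i in range(rows_per_sub):
        first[i, i * r:(i + 1) * r] = 1
    # Bottom (c-1) sub-matrices: random column permutations of the first.
    subs = [first] + [first[:, rng.permutation(n)] for _ in range(c - 1)]
    return np.vstack(subs)                      # (nc/r) x n matrix

H = gallager_parity_check(n=20, r=4, c=3)
print(H.shape)            # (15, 20), matching nc/r rows
print(H.sum(axis=0))      # every column weight is c = 3
print(H.sum(axis=1))      # every row weight is r = 4
```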

MacKay’s method [10]

A computer-generated code was introduced by MacKay [10], in which the parity-check matrix is randomly generated. First, the parameters n, m, r, and c are chosen to form an (n, m, r, c)-regular LDPC code, where n, r, and c are the same as in Gallager's code and m is the number of parity-check equations in H. Then, 1's are randomly placed in c different positions of the first column. The second column is generated in the same way, but checks are made to ensure that no two columns have 1's in more than one common position, in order to avoid 4-cycles in the Tanner graph.

If there is a 4-cycle in the Tanner graph, the decoding performance will be reduced by about 0.5 dB. Avoidance of 4-cycles in a parity-check matrix is therefore required.

The subsequent columns are generated sequentially, and checks for 4-cycles must be performed for each generated column. In this procedure, the number of 1's in each row must be recorded, and if any row already has r ones, the generation of the next column will not select that row.
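A simplified sketch of this column-by-column generation might look as follows (our own Python illustration; the retry strategy and the slightly loose parameters, chosen so the construction does not get stuck, are assumptions, not MacKay's exact procedure):

```python
import numpy as np

def mackay_parity_check(n, m, c, r, rng=np.random.default_rng(1)):
    """Column-by-column random construction that avoids 4-cycles:
    two columns sharing 1's in more than one row would close a 4-cycle."""
    H = np.zeros((m, n), dtype=int)
    row_fill = np.zeros(m, dtype=int)               # 1's already placed per row
    for col in range(n):
        for _ in range(1000):                       # retry until constraints hold
            candidates = np.flatnonzero(row_fill < r)   # rows not yet full
            if len(candidates) < c:
                raise RuntimeError("construction stuck; restart with new seed")
            rows = rng.choice(candidates, size=c, replace=False)
            column = np.zeros(m, dtype=int)
            column[rows] = 1
            # overlap with every earlier column must be at most 1
            if col == 0 or (H[:, :col].T @ column).max() <= 1:
                H[:, col] = column
                row_fill[rows] += 1
                break
        else:
            raise RuntimeError("could not place column %d" % col)
    return H

H = mackay_parity_check(n=16, m=12, c=3, r=5)
print(H.sum(axis=0))      # every column weight equals c = 3
```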

2.2.2 Deterministic Code Construction

A parity-check matrix H obtained by random construction is sparse, but its corresponding generator matrix is not, which increases the encoding complexity. To circumvent this problem, deterministic code construction schemes, which can lead to low encoding complexity, have been proposed. Some forms of deterministic code construction include the block-LDPC code, the quasi-cyclic code [5], and the quasi-cyclic based code [21]. They are introduced below.

Block-LDPC Code

The parity-check matrix H of a block-LDPC code is composed of several sub-matrices. The size of H is m-by-n, and the sub-matrices are shifted identity matrices or zero matrices. The matrix form of H is shown in Figure 2.3. Each sub-matrix P(i,j) is either one of a set of z-by-z permutation matrices or the z-by-z zero matrix. The matrix H is expanded from a binary base matrix Hb of size mb-by-nb, where m = z·mb, n = z·nb, and z is an integer ≥ 1. The base matrix is expanded by replacing each 1 in the base matrix with a z-by-z permutation matrix, and each 0 with the z-by-z zero matrix. The permutations used are circular right shifts, so the set of permutation matrices contains the z-by-z identity matrix and its circularly right-shifted versions. The details of the block-LDPC code can be found in Appendix A.

Figure 2.3 The parity-check matrix H of a block-LDPC code
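The expansion of a base matrix into H can be sketched as follows (our own Python illustration; the base matrix and shift values are invented, and the convention of marking zero sub-matrices with −1 is borrowed from common QC-LDPC descriptions rather than from this thesis):

```python
import numpy as np

def expand_base_matrix(Hb, z):
    """Expand a base matrix into H: an entry s >= 0 becomes the z-by-z
    identity circularly right-shifted by s columns; an entry -1 becomes
    the z-by-z zero matrix."""
    I = np.eye(z, dtype=int)
    blocks = [[np.roll(I, s, axis=1) if s >= 0 else np.zeros((z, z), dtype=int)
               for s in row]
              for row in Hb]
    return np.block(blocks)

# Invented 2x4 base matrix of shift values (for illustration only).
Hb = [[0, 2, -1, 1],
      [3, -1, 0, 2]]
H = expand_base_matrix(Hb, z=4)
print(H.shape)        # (8, 16): m = z*m_b and n = z*n_b
```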

Quasi-Cyclic Code [5]

A code is quasi-cyclic if, for any cyclic shift of a codeword by l places, the resulting word is also a codeword. A cyclic code is a quasi-cyclic code with l = 1. Consider the binary quasi-cyclic codes described by a parity-check matrix

    H = [A1, A2, ..., Al],                              (2.10)

where A1, ..., Al are v×v binary circulant matrices. The algebra of v×v binary circulant matrices is isomorphic to the algebra of polynomials modulo x^v − 1 over GF(2). A circulant matrix A is completely characterized by the polynomial

    a(x) = a0 + a1x + ... + a(v−1)x^(v−1),

where the coefficients are taken from the first row of A, and a code C with a parity-check matrix of the form (2.10) can be completely characterized by the polynomials a1(x), a2(x), ..., al(x). Figure 2.4(a) shows an example of a rate-1/2 quasi-cyclic code, where a1(x) = 1 + x and a2(x) = 1 + x^2 + x^4. Figure 2.4(b) shows the corresponding Tanner graph representation. For this example, the edges (c1, v6), (v6, c4), (c4, v8), and (v8, c1) form a 4-cycle in the graph, which is to be avoided for performance reasons.

(a) A parity-check matrix with two circulant matrices

(b) Tanner graph representation

Figure 2.4 Example of a rate-1/2 quasi-cyclic code from two circulant matrices, where a1(x) = 1 + x and a2(x) = 1 + x^2 + x^4.
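The following Python sketch (ours; it reproduces the polynomials of Figure 2.4) builds the two circulants from their first-row polynomials and verifies the quasi-cyclic property, here expressed as a cyclic shift of each length-v block of a codeword:

```python
import numpy as np

def circulant_from_poly(exponents, v):
    """v-by-v binary circulant whose first-row polynomial has the given exponents."""
    first_row = np.zeros(v, dtype=int)
    first_row[list(exponents)] = 1
    # row i is the first row circularly right-shifted by i positions
    return np.array([np.roll(first_row, i) for i in range(v)])

v = 5
A1 = circulant_from_poly([0, 1], v)        # a1(x) = 1 + x
A2 = circulant_from_poly([0, 2, 4], v)     # a2(x) = 1 + x^2 + x^4
H = np.hstack([A1, A2])                    # H = [A1, A2]: rate-1/2, n = 10

def is_codeword(bits):
    return not ((H @ bits) % 2).any()

# Brute-force a nonzero codeword (n = 10 is small enough), then check the
# quasi-cyclic property: shifting each length-v block by one place yields
# another codeword.
for word in range(1, 2 ** (2 * v)):
    bits = np.array([(word >> b) & 1 for b in range(2 * v)])
    if is_codeword(bits):
        break
shifted = np.concatenate([np.roll(bits[:v], 1), np.roll(bits[v:], 1)])
print(is_codeword(bits), is_codeword(shifted))   # True True
```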

Quasi-Cyclic Based Code [21]

The code is constructed with a quasi-cyclic code as its base; its parity-check matrix takes the form given in [21]. To avoid any 4-cycles in the new structure of the parity-check matrix, a new difference family is provided to solve this problem. First, construct two (v, γ, 1) difference families, Family A and Family B, and combine the two families to form a new difference Family C, subject to the following two constraints.

Constraint 1: The differences [(ai,x − ai,y) mod v] and [(bi,x − bi,y) mod v], where i = 1, 2, ..., l−1; x, y = 1, 2, ..., γ; x ≠ y, must not coincide for any element.

Constraint 2: The differences [(ai,x − aj,y) mod v] and [(bi,x − bj,y) mod v], where i, j = 1, 2, ..., l−1, i ≠ j; x, y = 1, 2, ..., γ, must not coincide for any element.
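A small checker for Constraint 1 can be sketched as follows (our own Python illustration; the example blocks and the parameter choice (v, γ) = (13, 2) are invented for demonstration and are not taken from [21]; Constraint 2 would be checked analogously over cross-block differences with i ≠ j):

```python
from itertools import combinations

def internal_differences(family, v):
    """Multiset of differences (a_{i,x} - a_{i,y}) mod v within each block."""
    diffs = []
    for block in family:
        for x, y in combinations(range(len(block)), 2):
            diffs += [(block[x] - block[y]) % v, (block[y] - block[x]) % v]
    return diffs

def satisfies_constraint1(fam_a, fam_b, v):
    """Constraint 1: the internal differences of Family A and Family B
    must not coincide."""
    return not (set(internal_differences(fam_a, v)) &
                set(internal_differences(fam_b, v)))

# Invented (v, gamma) = (13, 2) example blocks, for illustration only.
family_a = [(0, 1), (0, 3)]      # internal differences: 1, 12, 3, 10 (mod 13)
family_b = [(0, 2), (0, 6)]      # internal differences: 2, 11, 6, 7 (mod 13)
print(satisfies_constraint1(family_a, family_b, 13))   # True: disjoint
```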

2.3 Encoding of LDPC Codes

Since an LDPC code is a linear block code, it can be encoded by conventional methods. However, conventional methods require an encoding complexity proportional to the square of the code length. This high encoding cost becomes a major drawback of LDPC codes when compared to turbo codes, which have linear-time encoding complexity. In this section, we will introduce some improved methods.

2.3.1 Conventional Method

Let u = [u0, u1, u2, ..., uk−1] be a row vector of message bits with length k, and let G be the k×n generator matrix of this code. The codeword is then obtained as

    x = uG.                                            (2.12)

Since every codeword must satisfy all parity-check equations, the generator matrix satisfies

    GH^T = 0.                                          (2.13)

Suppose a sparse parity-check matrix H with full rank is constructed. Gaussian elimination and column reordering can be used to derive an equivalent parity-check matrix in the systematic form Hsystematic = [P Ir]. Thus equation (2.13) can be solved to get the generator matrix in systematic form as

    Gsystematic = [Ik P^T].                            (2.14)

Finally, the generator matrix G can be obtained by applying the reverse column reordering to Gsystematic.
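A sketch of this conventional derivation over GF(2) is given below (our own Python illustration; it tracks the column permutation so that it can be reversed as described above):

```python
import numpy as np

def systematic_from_H(H):
    """Gaussian elimination over GF(2): returns (G_sys, perm) such that
    the column-permuted H equals [P | I_m] and G_sys = [I_k | P^T]."""
    H = H.copy() % 2
    m, n = H.shape
    pivots, row = [], 0
    for col in range(n):                       # Gauss-Jordan elimination
        rows = [r for r in range(row, m) if H[r, col]]
        if not rows:
            continue
        H[[row, rows[0]]] = H[[rows[0], row]]  # swap pivot row up
        for r in range(m):                     # clear the pivot column
            if r != row and H[r, col]:
                H[r] ^= H[row]
        pivots.append(col)
        row += 1
        if row == m:
            break
    assert row == m, "H must have full rank"
    free = [c for c in range(n) if c not in pivots]
    perm = free + pivots                       # move pivot columns to the end
    Hp = H[:, perm]                            # Hp = [P | I_m]
    P = Hp[:, : n - m]
    G = np.hstack([np.eye(n - m, dtype=int), P.T])
    assert not ((Hp @ G.T) % 2).any()          # every row of G is a codeword
    return G, perm

H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]])
G, perm = systematic_from_H(H)
print(G)                                       # systematic generator [I_k | P^T]
```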

Triangularized parity-check matrix form [4]

In [4], it is suggested to force the parity-check matrix into a lower triangular form. This restriction guarantees linear-time encoding complexity, but, in general, it also results in some loss of performance.

2.3.2 Richardson’s Method [3]

Richardson's method is the most extensively used among LDPC encoding algorithms. Figure 2.5 shows how to bring a parity-check matrix into an approximate lower triangular form using row and column permutations.

Figure 2.5 The parity-check matrix in an approximate lower triangular form

Note that since this transformation is accomplished solely by permutations, the parity-check matrix H is still sparse. The method cuts the parity-check matrix H into six sub-matrices: A, B, T, C, D, E. In particular, the sub-matrix T is in lower triangular form.

More precisely, it is assumed that the matrix is written in the form

    H = [ A  B  T ]
        [ C  D  E ]                                    (2.15)

where A is (m−g)×(n−m), B is (m−g)×g, T is (m−g)×(m−g), C is g×(n−m), D is g×g, and E is g×(m−g). Further, all these matrices are sparse and T is lower triangular with ones along the diagonal. Let x = (s, p1, p2) denote the codeword of this parity-check matrix, where s denotes the message bits of length n−m, p1 denotes the first part of the parity bits with length g, and p2 denotes the second part with length m−g. Every codeword must satisfy

    Hx^T = 0^T.                                        (2.16)

Multiplying H from the left by

    [ I          0 ]
    [ −ET^(−1)   I ]

brings equation (2.16) into the equivalent form

    [ A              B              T ]
    [ −ET^(−1)A+C    −ET^(−1)B+D    0 ]  x^T = 0^T.    (2.17)

Expanding equation (2.17), one can get equations (2.18) and (2.19):

    As^T + Bp1^T + Tp2^T = 0                           (2.18)

    (−ET^(−1)A + C)s^T + (−ET^(−1)B + D)p1^T = 0       (2.19)

Define φ = −ET^(−1)B + D and suppose for the moment that φ is non-singular. From equation (2.19) we conclude that

    p1^T = −φ^(−1)(−ET^(−1)A + C)s^T.                  (2.20)

Hence, once the g×(n−m) matrix −φ^(−1)(−ET^(−1)A + C) has been pre-computed, the determination of p1 can be accomplished with a time complexity of O(g×(n−m)) simply by performing a multiplication with this (generally dense) matrix. This complexity can be further reduced as shown in Table 2.1. Rather than pre-computing −φ^(−1)(−ET^(−1)A + C) and then multiplying with s^T, p1 can be determined by breaking the computation into several smaller steps, each of which is computationally efficient. To this end, we first determine As^T, which has complexity O(n) because A is sparse. Next, we multiply the result by T^(−1). Since T^(−1)[As^T] = y^T is equivalent to the system [As^T] = Ty^T, this can also be accomplished in O(n) time by the back-substitution method, because T is lower triangular and sparse. The remaining steps are fairly straightforward. It follows that the overall complexity of determining p1 is O(n + g^2). In a similar manner, noting from equation (2.18) that p2^T = −T^(−1)(As^T + Bp1^T), we can determine p2 with time complexity O(n), as shown step by step in Table 2.2.

A summary of this efficient encoding procedure is given in Table 2.3. It contains two stages, the preprocessing stage and the actual encoding stage. In the preprocessing stage, we first perform row and column permutations to bring the parity-check matrix into the approximate lower triangular form with as small a gap g as possible. The actual encoding then consists of the steps listed in Tables 2.1 and 2.2. The overall encoding complexity is O(n + g^2), where g is the gap of the approximate triangularization.

Table 2.1 Efficient computation steps of p1^T = −φ^(−1)(−ET^(−1)A + C)s^T

Operation                       | Comment                                          | Complexity
As^T                            | Multiplication by sparse matrix                  | O(n)
T^(−1)[As^T]                    | Back-substitution, T sparse and lower triangular | O(n)
−E[T^(−1)As^T]                  | Multiplication by sparse matrix                  | O(n)
Cs^T                            | Multiplication by sparse matrix                  | O(n)
[−ET^(−1)As^T] + [Cs^T]         | Addition                                         | O(n)
−φ^(−1)[−ET^(−1)As^T + Cs^T]    | Multiplication by dense g×g matrix               | O(g^2)

Table 2.2 Efficient computation steps of p2^T = −T^(−1)(As^T + Bp1^T)

Operation                  | Comment                                          | Complexity
As^T                       | Multiplication by sparse matrix                  | O(n)
Bp1^T                      | Multiplication by sparse matrix                  | O(n)
[As^T] + [Bp1^T]           | Addition                                         | O(n)
−T^(−1)[As^T + Bp1^T]      | Back-substitution, T sparse and lower triangular | O(n)

Table 2.3 Summary of Richardson’s encoding procedure

Preprocessing: Input: Non-singular parity-check matrix H. Output: An equivalent parity-check matrix of the form (2.15) such that φ = −ET^(−1)B + D is non-singular.

1. [Triangularization] Perform row and column permutations to bring the parity-check matrix H into the approximate lower triangular form (2.15), with as small a gap g as possible.

2. [Check φ] Check that φ = −ET^(−1)B + D is non-singular, performing further column permutations if necessary to ensure this property.

Encoding: Input: Parity-check matrix of the form (2.15) such that φ = −ET^(−1)B + D is non-singular, and a vector s of message bits with length n−m. Output: The vector x = (s, p1, p2), where p1 has length g and p2 has length (m−g), such that Hx^T = 0^T.

1. Determine p1 as shown in Table 2.1.

2. Determine p2 as shown in Table 2.2.
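Assuming H has already been brought into the form (2.15), the computations of Tables 2.1 and 2.2 can be sketched in Python as follows (our own illustration; all arithmetic is modulo 2, so the minus signs disappear, and the tiny example blocks are invented so that φ is non-singular):

```python
import numpy as np

def gf2_inverse(M):
    """Invert a square binary matrix over GF(2) by Gauss-Jordan elimination."""
    g = M.shape[0]
    aug = np.hstack([M % 2, np.eye(g, dtype=int)])
    for i in range(g):
        pivot = next(r for r in range(i, g) if aug[r, i])
        aug[[i, pivot]] = aug[[pivot, i]]
        for r in range(g):
            if r != i and aug[r, i]:
                aug[r] ^= aug[i]
    return aug[:, g:]

def back_substitute(T, b):
    """Solve T y = b (mod 2) for lower-triangular T with unit diagonal."""
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] + T[i, :i] @ y[:i]) % 2
    return y

def richardson_encode(A, B, T, C, D, E, s):
    """Compute p1 (Table 2.1) and p2 (Table 2.2); signs vanish modulo 2."""
    # phi = E T^(-1) B + D (dense g-by-g), pre-computed once per code
    TinvB = np.column_stack([back_substitute(T, B[:, j]) for j in range(B.shape[1])])
    phi_inv = gf2_inverse((E @ TinvB + D) % 2)
    As = (A @ s) % 2                             # sparse multiply, O(n)
    t = back_substitute(T, As)                   # T^(-1)[A s^T], O(n)
    p1 = (phi_inv @ ((E @ t + C @ s) % 2)) % 2   # dense g-by-g multiply, O(g^2)
    p2 = back_substitute(T, (As + B @ p1) % 2)   # Table 2.2, O(n)
    return p1, p2

# Tiny worked example with g = 1; the blocks are invented so that phi = [1].
A = np.array([[1, 0], [0, 1]]); B = np.array([[1], [0]])
T = np.array([[1, 0], [1, 1]])
C = np.array([[1, 1]]); D = np.array([[0]]); E = np.array([[0, 1]])
s = np.array([1, 0])
p1, p2 = richardson_encode(A, B, T, C, D, E, s)
H = np.block([[A, B, T], [C, D, E]])
x = np.concatenate([s, p1, p2])
print((H @ x) % 2)                               # [0 0 0]: x is a valid codeword
```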

2.3.3 Quasi-Cyclic Code [5]

As reviewed in section 2.2, the quasi-cyclic code can be described by a parity-check matrix H = [A1, A2, ..., Al], where each circulant matrix Aj is completely determined by the polynomial aj(x) = a0 + a1x + ... + a(v−1)x^(v−1) with coefficients taken from its first row. A code C with parity-check matrix H can be completely characterized by the polynomials a1(x), a2(x), ..., al(x). As for the encoding, if one of the circulant matrices is invertible (say Al), the generator matrix for the code can be constructed in the following systematic form:

    G = [ I_v(l−1) | ((Al^(−1)A1)^T; (Al^(−1)A2)^T; ...; (Al^(−1)A(l−1))^T) ]

where I_v(l−1) is the v(l−1)×v(l−1) identity matrix and the semicolons denote vertically stacked circulant blocks. The encoding process can then be achieved with linear complexity using a v(l−1)-stage shift register.

Regarding the algebraic computation, the polynomial transpose is defined as

    a(x)^T = a0 + a(v−1)x + a(v−2)x^2 + ... + a1x^(v−1),

so that the circulant matrix with first-row polynomial a(x)^T is the matrix transpose A^T. As an example, consider a rate-1/2 quasi-cyclic code with v = 5 and l = 2, the first circulant matrix described by the polynomial a1(x) = 1 + x and the second by a2(x) = 1 + x^2 + x^4. Figure 2.6 shows the example parity-check matrix and the corresponding generator matrix.

Figure 2.6 Example of a rate-1/2 quasi-cyclic code. (a) Parity-check matrix with two circulants, where a1(x) = 1 + x and a2(x) = 1 + x^2 + x^4. (b) Corresponding generator matrix in systematic form.
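The systematic generator of this example can be reproduced with a short Python sketch (ours; matrix arithmetic replaces the shift-register view, and gf2_inverse is a hypothetical helper implementing Gauss-Jordan inversion over GF(2)):

```python
import numpy as np

def circulant(exponents, v):
    """Binary circulant defined by the exponents of its first-row polynomial."""
    row = np.zeros(v, dtype=int)
    row[list(exponents)] = 1
    return np.array([np.roll(row, i) for i in range(v)])

def gf2_inverse(M):
    """Gauss-Jordan inversion over GF(2)."""
    n = M.shape[0]
    aug = np.hstack([M % 2, np.eye(n, dtype=int)])
    for i in range(n):
        p = next(r for r in range(i, n) if aug[r, i])
        aug[[i, p]] = aug[[p, i]]
        for r in range(n):
            if r != i and aug[r, i]:
                aug[r] ^= aug[i]
    return aug[:, n:]

v = 5
A1 = circulant([0, 1], v)          # a1(x) = 1 + x
A2 = circulant([0, 2, 4], v)       # a2(x) = 1 + x^2 + x^4 (invertible)
H = np.hstack([A1, A2])

# Systematic generator G = [ I_v | (A2^(-1) A1)^T ], so that
# H G^T = A1 + A2 (A2^(-1) A1) = 0 (mod 2).
P = ((gf2_inverse(A2) @ A1) % 2).T
G = np.hstack([np.eye(v, dtype=int), P])
print((H @ G.T) % 2)               # all-zero 5x5 matrix

msg = np.array([1, 0, 1, 1, 0])    # arbitrary message block of length v
codeword = (msg @ G) % 2
print((H @ codeword) % 2)          # all zeros: valid codeword
```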

Quasi-Cyclic Based Code [21]

As reviewed in section 2.2, the quasi-cyclic based code can be described by a parity-check matrix built from two families of circulant matrices (see [21] for the explicit form). For encoding with the quasi-cyclic based structure, suppose that two of the circulant matrices, say Al−1 and Bl, are invertible. Two generator matrices, G1 and G2, can then be derived in systematic form. The message is divided into blocks of bits, each having the same length v, and the encoding procedure is partitioned into two steps.

Encoding Step 1: The first parity bits p1 are derived from the generator matrix G1 and the message bits d. Then, combine the parity bits p1 with the message bits d to form an intermediate codeword c′, where c′ = [d, p1].

Encoding Step 2: The last parity bits p2 can be derived from the generator matrix G2 and the intermediate codeword c′. That is

p2 =c′×G2. (2.30)

2.4 Conventional LDPC Code Decoding Algorithm

There are several decoding algorithms for LDPC codes. They can be categorized into two classes: the bit-flipping algorithm [20] and the message-passing algorithm [11]. In the following, we introduce these decoding algorithms.

2.4.1 Bit-Flipping Algorithm [20]

The idea behind this decoding is that, for low-density parity-check matrices, the syndrome weight increases, on average, with the number of errors, as long as the error weight is not much larger than half the minimum distance. Therefore, the idea is to flip one bit in each iteration, the bit to be flipped being chosen such that the syndrome weight decreases. It should be noted that not only the rows of the parity-check matrix can be used for decoding, but in principle all vectors of the dual code with minimum (or small) weight. In the following, we introduce two of the bit-flipping algorithms [20].

Notation and Basic Definitions

The idea behind this algorithm is to "flip" the least number of bits until the parity-check equation Hx^T = 0 is satisfied. Suppose a binary (n, k) LDPC code is used for error control over a binary-input additive white Gaussian noise (BIAWGN) channel whose noise has zero mean and variance σ^2. The letter n is the code length and k is the message length. Assume binary phase-shift-keying (BPSK) signaling with unit energy is adopted. A codeword c = (c0, c1, ..., cn−1) ∈ {GF(2)}^n is mapped into the bipolar sequence x = (x0, x1, ..., xn−1) before its transmission, where xi = 2ci − 1, 0 ≤ i ≤ n−1. Let y = (y0, y1, ..., yn−1) be the soft-decision received sequence at the output of the receiver matched filter. For 0 ≤ i ≤ n−1, yi = xi + ni, where ni is a Gaussian random variable with zero mean and variance σ^2. An initial binary hard decision of the received sequence, z^(0) = (z0^(0), z1^(0), ..., zn−1^(0)), is determined as follows:

    zi^(0) = { 1,  yi ≥ 0
             { 0,  yi < 0                              (2.31)

For the tentative binary hard decision z made at the end of each decoding iteration, we can compute the syndrome vector as s = Hz^T. One can define the log-likelihood ratio (LLR) for each channel output yi, 0 ≤ i ≤ n−1, as

    Li = ln [ p(ci = 1 | yi) / p(ci = 0 | yi) ].       (2.32)

The absolute value of Li, |Li|, is called the reliability of the initial decision zi^(0). For any binary vector v = (v0, v1, ..., vn−1), let wt(v) be the Hamming weight of v. Let ui be the n-dimensional unit vector, i.e., a vector with "1" at the i-th position and "0" everywhere else.
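As a small numerical illustration of this setup (our own Python sketch; σ and the transmitted word are arbitrary, and the closed form Li = 2yi/σ^2 is the standard BIAWGN evaluation of (2.32) for equiprobable inputs):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 0.8                                    # noise standard deviation
c = np.array([1, 0, 1, 1, 0, 0, 1, 0])         # an arbitrary binary word
x = 2 * c - 1                                  # BPSK mapping x_i = 2c_i - 1
y = x + rng.normal(0.0, sigma, size=c.size)    # AWGN channel output

z0 = (y >= 0).astype(int)                      # hard decision, equation (2.31)
L = 2 * y / sigma**2                           # LLR of (2.32) for this channel
print("hard decisions:", z0)
print("reliabilities :", np.round(np.abs(L), 2))
```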

Algorithm I

Step (1) Initialization: Set the iteration counter k = 0. Calculate z^(0) and S^(0) = wt(H·z^(0)^T).

Step (2) If S^(k) = 0, then go to Step (8).

Step (3) k ← k + 1. If k > kmax, where kmax is the maximum number of iterations, go to Step (9).

Step (4) For each i = 0, 1, ..., n−1, calculate Si^(k) = wt[H·(z^(k−1) + ui)^T].

Step (5) Find j^(k) ∈ {0, 1, ..., n−1} with j^(k) = arg min(0≤i<n) Si^(k).

Step (6) If j^(k) = j^(k−1), then go to Step (9).

Step (7) Calculate z^(k) = z^(k−1) + u(j^(k)) and S^(k) = wt(H·z^(k)^T). Go to Step (2).

Step (8) Stop the decoding and return z^(k).

Step (9) Declare a decoding failure and return z^(k−1).

So the algorithm flips only one bit at each iteration, and the bit to be flipped is chosen according to the fact that, on average, the weight of the syndrome increases with the weight of the error. Note that in some cases the decoder can choose a wrong position j, and thus introduce a new error, but there is still a high likelihood that this new error will be corrected in some later step of the algorithm.
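A compact sketch of Algorithm I is given below (our own Python illustration; the optional reliability argument anticipates Algorithm II, which adds |Li| to every Si, and the small test matrix is invented):

```python
import numpy as np

def bit_flip_decode(H, z0, k_max=50, reliability=None):
    """Bit-flipping decoding: Algorithm I, or Algorithm II when a vector
    of reliabilities |L_i| is supplied (it is added to every S_i)."""
    n = H.shape[1]
    offset = np.zeros(n) if reliability is None else reliability
    z = z0.copy()
    j_prev = -1
    for _ in range(k_max):
        if not ((H @ z) % 2).any():            # S = 0: z is a codeword
            return z, True
        # S_i: syndrome weight after tentatively flipping bit i
        S = np.array([((H @ ((z + u) % 2)) % 2).sum()
                      for u in np.eye(n, dtype=int)]) + offset
        j = int(np.argmin(S))                  # flip the most helpful bit
        if j == j_prev:                        # no progress: declare failure
            return z, False
        z[j] ^= 1
        j_prev = j
    return z, not ((H @ z) % 2).any()

# Invented toy code: correct a single flipped bit.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
codeword = np.array([1, 1, 0, 0, 1, 1])       # satisfies H c^T = 0 (mod 2)
received = codeword.copy(); received[0] ^= 1  # one bit error
decoded, ok = bit_flip_decode(H, received)
print(ok, (decoded == codeword).all())        # True True
```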

Algorithm II

Algorithm I can be modified, with almost no increase in complexity, to achieve better error performance by including some reliability information (or measure) of the received symbols. Many algorithms for decoding linear block codes based on this reliability measure have been devised. Consider the received soft-decision sequence y = (y0, y1, ..., yn−1). For the AWGN channel, a simple measure of the reliability Li of a received symbol yi is its magnitude |yi|: the larger the magnitude |yi|, the larger the reliability of the hard-decision digit zi. If the reliability of a received symbol yi is high, we want to prevent the decoding algorithm from flipping this symbol, because the probability of this symbol being erroneous is smaller than the probability of it being correct. This can be achieved by appropriately increasing the values Si in the decoding algorithm; the solution is to increase Si by the term |Li|, since a larger |Li| implies that the hard decision zi is more reliable. The steps of the soft version of the decoding algorithm are described in detail below:

Step (1) Initialization: Set the iteration counter k = 0. Calculate z^(0) and S^(0) = wt(H·z^(0)^T).

Step (2) If S^(k) = 0, then go to Step (8).

Step (3) k ← k + 1. If k > kmax, go to Step (9).

Step (4) For each i = 0, 1, ..., n−1, calculate Si^(k) = wt[H·(z^(k−1) + ui)^T] + |Li|.

Step (5) Find j^(k) ∈ {0, 1, ..., n−1} with j^(k) = arg min(0≤i<n) Si^(k).