
Low-Density Parity-Check Codes for Digital Subscriber Lines

E. Eleftheriou and S. Ölçer IBM Research, Zurich Research Laboratory

8803 Rüschlikon, Switzerland

Abstract- The paper investigates the application of low-density parity-check (LDPC) codes to digital subscriber-line (DSL) transmission systems that employ discrete multitone modulation. A family of linear-time encodable binary LDPC codes that are well-suited for DSL transmission is introduced.

Encoding and symbol mapping for multilevel modulation are described. Simulation results show that even under tight latency constraints good net coding gains can be achieved. Implementation complexity is analyzed and compared with that of trellis-coded modulation as employed in current asymmetric DSL transceivers. The incorporation of powerful LDPC coding techniques into next-generation DSL modems appears to be possible with a reasonable increase in transceiver complexity.

I. INTRODUCTION

Low-density parity-check (LDPC) codes [1,2] have mainly been considered for data transmission systems employing binary modulation. In many communication systems, however, multilevel modulation with more than two levels is employed to maximize the rate of information transfer under strict constraints on transmit signal bandwidth. An example is multicarrier digital-subscriber-line (DSL) transmission [3], where symbol constellations of possibly different sizes are used for quadrature amplitude modulation (QAM) on each subcarrier. The study of LDPC coding schemes that are suitable for bandwidth-efficient modulation represents, therefore, a topic of considerable practical interest.

In this paper, we describe an LDPC-coded multilevel modulation technique and investigate its application to DSL transmission. Binary LDPC codes are employed together with multilevel-symbol mapping based on set partitioning and so-called “double Gray-code labeling.” Our approach differs from that in [4], where multilevel coding with binary LDPC component codes is proposed. We introduce a family of binary LDPC codes that offer good performance, are encodable in linear time, and do not suffer from error floors at significantly low bit-error rates. These LDPC codes can be constructed efficiently for any code rate and block size of interest for DSLs.

In current asymmetric DSL (ADSL) specifications [5], coding is achieved by a concatenated scheme that includes an outer Reed–Solomon (RS) code and an inner trellis code.

Depending on the choice of code parameters and interleaving depth, this scheme can provide a net coding gain of up to ~5.5 dB with respect to uncoded modulation. LDPC coding, as described in this paper, is intended as a replacement of the inner trellis code with the objective of operating the ADSL link closer to its capacity limits than is currently possible. Our approach is applicable to both ADSL [6] and very-high-speed DSL (VDSL) systems.

In DSL transmission, overall delay, or latency, is a critical issue. “Voice” applications are known to demand rather low latency, whereas other applications, such as video streaming, tolerate larger delays but need stronger error-correction capability. Thus, in studying new coding techniques for DSLs, trade-offs between coding gain and latency have to be well characterized. Another important issue is transceiver complexity. It is a critical parameter especially at the central-office access multiplexers or at remote terminals because it directly affects equipment cost and power consumption.

LDPC coding is attractive for DSL transmission because it permits a wide range of trade-offs between latency, complexity, and system performance.

II. MULTICARRIER ADSL TRANSMISSION

The block diagram of Fig. 1 shows the components of a discrete-multitone (DMT)-based ADSL system that are relevant for the discussion in this paper.

Information bits, representing user data and control messages, are encoded by an outer RS code with code symbols from GF(2^8), convolutionally interleaved, and further encoded by an inner-coding stage. In the current ADSL standard, the inner code is a four-dimensional 16-state trellis code. Here we investigate replacing the inner trellis code by an LDPC coding scheme.¹ In either case, the encoded data are mapped into frequency-domain modulation symbols and then transformed by an inverse discrete Fourier transform (IDFT) operation to yield a frame of time-domain signals.

These signals are converted from parallel to serial (P/S) form and sent over the communication channel. At the receiver, the inverse of the transmit operations takes place to recover the information bits. The “soft demapper” block shown in Fig. 1, which is not needed in the case of trellis decoding by the Viterbi algorithm, computes soft information on LDPC code bits for subsequent soft iterative decoding.

The telephone-twisted-pair channel introduces frequency-dependent signal distortion as well as several other forms of disturbance, of which crosstalk is the most important. If each DMT subchannel has sufficiently narrow bandwidth, then each one independently approximates an additive white Gaussian noise (AWGN) channel with a particular signal-to-noise ratio (SNR) [3].

¹ Outer RS coding is included in the above description because this function is mandatory in current ADSL specifications. We focus on the inner coding scheme.

Fig. 1. DMT-based ADSL transmission system.


Impulse noise represents a further source of disturbance that an ADSL system must be able to cope with. Finally, we note that narrowband interference of various origins, e.g., AM radio signals, also affects the reliability of communications in ADSL.

III. LDPC PARITY-CHECK MATRIX CONSTRUCTION

For ADSL transmission, LDPC codes with high code rates are desirable. Besides achieving high spectral efficiencies in a bandwidth-constrained transmission situation, such codes involve fewer parity checks than low-rate codes do, resulting in more tractable decoder implementations at the envisaged multi-megabit-per-second data rates. It is also desirable that the generation of the parity-check matrix involves only a small number of preprocessing operations, rendering “on-the-fly” construction of LDPC codes practical. Furthermore, linear-time encodable LDPC codes are attractive because low implementation complexity can also be achieved at the transmitter. Finally, the ability to specify the LDPC codes via a small number of parameters is critical because it minimizes the overhead during initialization, when the receiver must indicate to the transmitter which LDPC code to use for encoding. Codes that can be described by a small number of parameters are also well suited for standardization purposes.

The deterministic parity-check matrix construction presented in this section meets the above objectives. The construction is based on “array codes,” which are two-dimensional codes that have been proposed for detecting and correcting burst errors [7]. When array codes are viewed as binary codes, their parity-check matrices exhibit sparseness, which can be exploited for decoding them as LDPC codes using the sum-product algorithm [8]. Therefore, array codes provide the framework for defining a family of LDPC codes that lend themselves to deterministic constructions.

The array-code parity-check matrix is specified by three parameters: a prime number p and two integers k and j such that k, j ≤ p. It has dimensions jp × kp and is given by [7]

$$
\mathbf{H}_A =
\begin{bmatrix}
I & I & I & \cdots & I \\
I & \alpha & \alpha^{2} & \cdots & \alpha^{k-1} \\
I & \alpha^{2} & \alpha^{4} & \cdots & \alpha^{2(k-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
I & \alpha^{j-1} & \alpha^{2(j-1)} & \cdots & \alpha^{(j-1)(k-1)}
\end{bmatrix}, \qquad (1)
$$

where I is the p × p identity matrix and α is a p × p permutation matrix representing a single left or right cyclic shift. For example, for p = 5,

$$
\alpha =
\begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0
\end{bmatrix}
\quad \text{or} \quad
\alpha =
\begin{bmatrix}
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0
\end{bmatrix}. \qquad (2)
$$

The parameters j and k provide the column and row weights of H_A, respectively. By construction, the matrix H_A is 4-cycle free because no two rows have overlapping “1”s in more than one position.
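For illustration, the construction in (1) can be expressed in a few lines of code. The following Python sketch is not from the original text; it is a minimal numpy-based illustration, and the helper names alpha_power and build_HA are hypothetical.

# Sketch of Eq. (1): block (i, m) of H_A is alpha^(i*m), with alpha the single-cyclic-shift matrix.
import numpy as np

def alpha_power(p: int, e: int) -> np.ndarray:
    """alpha^e, where alpha is the p x p permutation matrix of a single cyclic shift (cf. Eq. (2))."""
    M = np.zeros((p, p), dtype=np.uint8)
    M[np.arange(p), (np.arange(p) + e) % p] = 1
    return M

def build_HA(p: int, j: int, k: int) -> np.ndarray:
    """Array-code parity-check matrix H_A of Eq. (1); requires j, k <= p with p prime."""
    assert j <= p and k <= p
    return np.block([[alpha_power(p, (i * m) % p) for m in range(k)] for i in range(j)])

if __name__ == "__main__":
    H_A = build_HA(p=5, j=3, k=5)
    print(H_A.shape)                                    # (15, 25), i.e. jp x kp
    print(set(H_A.sum(axis=0)), set(H_A.sum(axis=1)))   # column weight j, row weight k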

To achieve efficient encoding, a parity-check matrix in triangular form is desirable; see, e.g., [9]. Although Gaussian elimination could be used to this end, the resulting increase in processing complexity makes this approach unattractive.

Instead, we define a new matrix H_S by cyclically shifting the rows of the matrix H_A in a blockwise manner. The amount of cyclic shift for each block row is such that the jp × jp leftmost subblock of H_S contains the identity matrix I along its diagonal:

$$
\mathbf{H}_S =
\begin{bmatrix}
I & I & I & \cdots & I & \cdots & I \\
\alpha^{k-1} & I & \alpha & \cdots & \alpha^{j-2} & \cdots & \alpha^{k-2} \\
\alpha^{2(k-2)} & \alpha^{2(k-1)} & I & \cdots & \alpha^{2(j-3)} & \cdots & \alpha^{2(k-3)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & & \vdots \\
\alpha^{(j-1)(k-j+1)} & \alpha^{(j-1)(k-j+2)} & \alpha^{(j-1)(k-j+3)} & \cdots & I & \cdots & \alpha^{(j-1)(k-j)}
\end{bmatrix}. \qquad (3)
$$

The matrix H_S is 4-cycle free and has the same column and row weights as H_A.

To obtain the parity-check matrix in the desired form, the lower-triangular elements of the jp × jp leftmost subblock of H_S are set to zero, yielding

$$
\mathbf{H} =
\begin{bmatrix}
I & I & I & \cdots & I & \cdots & I \\
O & I & \alpha & \cdots & \alpha^{j-2} & \cdots & \alpha^{k-2} \\
O & O & I & \cdots & \alpha^{2(j-3)} & \cdots & \alpha^{2(k-3)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & & \vdots \\
O & O & O & \cdots & I & \cdots & \alpha^{(j-1)(k-j)}
\end{bmatrix}, \qquad (4)
$$

where O is the p × p null matrix.

By successive row and column permutations, H_A and H_S can be brought to a form similar to that defined in [10].

Therefore, using similar counting arguments as in [10], it can be shown that H_A and H_S both lead to a minimum Hamming distance of d_min = 6 for j = 3 and d_min ≥ 8 for j = 4 (these are the values of j of most practical interest). Furthermore, for j = 3, it can be shown that forcing H_S to the triangular form H does not decrease the minimum distance of the code. This property is conjectured to also hold for j = 4, but could only be verified via an exhaustive search for the codes employed in the simulations.

The LDPC code defined by H has code-word length N = kp, number of parity checks M = jp, and information block length K = (k – j)p. An LDPC code with N’ < N is easily obtained by discarding the N – N’ rightmost columns of H.

The parity-check matrix H is fully determined by p, j, k, and the code length N’.
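Continuing the sketch given earlier (same caveats: illustrative numpy code with hypothetical names, not code from the paper), the blockwise shifts of (3), the zeroing of (4), and the shortening to N' columns can be written as:

# H_S shifts block row i of H_A right by i block positions; H of Eq. (4) replaces the blocks
# below the diagonal of the leftmost jp x jp part by the null matrix O; shortening to N' code
# bits discards the N - N' rightmost columns.
import numpy as np

def alpha_power(p: int, e: int) -> np.ndarray:
    M = np.zeros((p, p), dtype=np.uint8)
    M[np.arange(p), (np.arange(p) + e) % p] = 1
    return M

def build_H(p: int, j: int, k: int, n_prime=None) -> np.ndarray:
    """Upper-triangular parity-check matrix H of Eq. (4), optionally shortened to n_prime columns."""
    O = np.zeros((p, p), dtype=np.uint8)
    blocks = [[O if m < i else alpha_power(p, (i * (m - i)) % p)   # H_S(i, m) = H_A(i, m - i)
               for m in range(k)] for i in range(j)]
    H = np.block(blocks)
    return H if n_prime is None else H[:, :n_prime]

if __name__ == "__main__":
    p, j, k = 5, 3, 5
    H = build_H(p, j, k)
    T = H[:, :j * p]                                   # leftmost jp x jp part
    assert np.array_equal(T, np.triu(T)) and np.all(np.diag(T) == 1)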

Efficient encoding is achieved directly from H without the need to compute the generator matrix of the code. Recall that


because LDPC codes are linear block codes, an N-tuple x is an LDPC code word if and only if Hx^T = 0^T. Let us express the vector x in the form x^T = [p^T s^T], where the jp × 1 vector p represents the parity part and the (k – j)p × 1 vector s represents the systematic part of the code word x.

Then, it is easy to see that the jp parity bits in p can be obtained in a recursive manner by employing H · [p^T s^T]^T = 0^T and exploiting the upper-triangular form of H. This encoding process can be shown to require (N/2)[r(j + 3) + (j – 3)] XOR operations, where r is the rate of the code. Hence, the code is linear-time encodable.
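A minimal sketch of this recursive encoding (back-substitution over GF(2)) is given below, assuming an M × N parity-check matrix whose leftmost M × M part is upper triangular with unit diagonal, as in (4). The function name ldpc_encode and the toy matrix are illustrative choices, not taken from the paper.

import numpy as np

def ldpc_encode(H: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Return x = [parity | systematic] such that H x^T = 0^T (mod 2)."""
    M, N = H.shape
    x = np.zeros(N, dtype=np.uint8)
    x[M:] = s                                    # systematic part
    for i in range(M - 1, -1, -1):               # last parity bit first
        x[i] = np.bitwise_xor.reduce(H[i, i + 1:] & x[i + 1:])
    return x

if __name__ == "__main__":
    # Toy stand-in for H: leftmost 3 x 3 part is upper triangular with ones on the diagonal.
    H = np.array([[1, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 1, 1],
                  [0, 0, 1, 1, 1, 0]], dtype=np.uint8)
    x = ldpc_encode(H, np.array([1, 0, 1], dtype=np.uint8))
    assert not np.any((H.astype(int) @ x) % 2)   # valid code word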

IV. BANDWIDTH-EFFICIENT LDPC-CODED MODULATION

Soft demapping at the receiver is greatly simplified if square-shaped constellations are employed, because the real and imaginary parts of the received noisy complex signals can then be demapped independently. We thus assume that transmit symbols are chosen from a 2^b-QAM symbol set, where the integer b = 1, or b > 1 and even. The block diagram of the multilevel encoding and symbol mapping functions is shown in Fig. 2.

When b > 1 and even, the two binary b/2-tuples (v_{b/2–1}, v_{b/2–2}, …, v_1, v_0) and (w_{b/2–1}, w_{b/2–2}, …, w_1, w_0) independently select two L-ary real symbols, L = 2^{b/2}, representing the real and imaginary parts, respectively, of the complex QAM symbol to be transmitted. The L-ary symbols belong to the set

$$
\mathcal{A} = \{\, A_l : A_l = 2l - (L-1),\; l = 0, 1, \ldots, L-1 \,\}. \qquad (5)
$$

Each 2^b-QAM symbol conveys b_cv and b_cw LDPC code bits on its real and imaginary parts, respectively; the remaining bits are uncoded. It is generally sufficient to allow up to six code bits per QAM symbol for best trade-off in terms of spectral efficiency, performance, and implementation complexity.

Symbol mapping relies on the partition of the set A into 2^{b_cv} [2^{b_cw}] subsets such that the minimum Euclidean distance between the symbols within each subset is maximized. The b_cv [b_cw] least-significant bits (LSBs) of v [w] label the subsets of A following a Gray-coding rule. The remaining most-significant bits (MSBs) label symbols within a subset following a separate Gray-coding rule. Table I gives an example of this double Gray-code labeling technique for the case of 256 QAM (L = 16) and b_cv = b_cw = 2. When b = 1, only the code bit v0 is employed. This case corresponds to BPSK modulation.
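The labeling of Table I can be generated programmatically. The following Python sketch reflects one reading of the double Gray-code rule (Gray-coded LSBs over adjacent levels, Gray-coded MSBs within each subset); the function names are hypothetical and not from the paper. Running it reproduces the 16 rows of Table I.

def gray(x: int) -> int:
    return x ^ (x >> 1)

def double_gray_label(level: int, L: int, b_c: int) -> int:
    """Label of the level-th symbol A_level = 2*level - (L-1), 0 <= level < L."""
    lsb = gray(level % (1 << b_c))          # coded bits: Gray code over adjacent levels
    msb = gray(level >> b_c)                # uncoded bits: Gray code within each subset
    return (msb << b_c) | lsb

if __name__ == "__main__":
    L, b_c = 16, 2                          # 256-QAM per dimension, two coded bits (Table I)
    for level in reversed(range(L)):        # print from +15 down to -15, as in Table I
        A = 2 * level - (L - 1)
        bits = format(double_gray_label(level, L, b_c), "04b")
        print(f"{A:+3d}  {' '.join(bits)}")  # e.g. '+15  1 0 1 0'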

We note that Gray-code labeling is optimum in an information-theoretic sense as it leads to largest capacity for bit-interleaved coded modulation [11,12]. Intuitively, from the observation of a noisy symbol, the most reliable soft information on each underlying bit can be generated if Gray-code labeling is used because here the variation of a symbol value between two adjacent levels corresponds to flipping a single bit only. Gray-code labeling is thus adopted for the LSBs on which the soft demapper needs to generate reliability information. Furthermore, as the uncoded MSBs are obtained via simple thresholding at the receiver, labeling those bits with a (separate) Gray code within each subset allows lowering the bit-error rate on the MSBs.

Let now y denote the real part of a noisy received signal,

$$
y = A + n, \qquad (6)
$$

with A ∈ A and n an AWGN sample with variance σ_n^2 (the imaginary part of the received signal is processed similarly). The soft demapper shown in Fig. 1 computes the a posteriori probability (APP) of code bit i_m being equal to x = 0, 1 according to (7) below.

Fig. 2. Multilevel LDPC encoding and symbol mapping for 2^b QAM.

TABLE I. EXAMPLE OF SYMBOL LABELING FOR THE CASE OF 256 QAM (L = 16), b_cv = b_cw = 2.

L-ary symbol   v3 (w3)   v2 (w2)   v1 (w1)   v0 (w0)
   +15            1         0         1         0
   +13            1         0         1         1
   +11            1         0         0         1
    +9            1         0         0         0
    +7            1         1         1         0
    +5            1         1         1         1
    +3            1         1         0         1
    +1            1         1         0         0
    –1            0         1         1         0
    –3            0         1         1         1
    –5            0         1         0         1
    –7            0         1         0         0
    –9            0         0         1         0
   –11            0         0         1         1
   –13            0         0         0         1
   –15            0         0         0         0

$$
\Pr(i_m = x \mid y) \;=\;
\frac{\sum_{A_l \in \mathcal{A}_{m,x}} e^{-(y - A_l)^2 / 2\sigma_n^2}}
     {\sum_{A_l \in \mathcal{A}} e^{-(y - A_l)^2 / 2\sigma_n^2}}\,,
\qquad m = 0, 1, \ldots, b_{cv} - 1, \qquad (7)
$$


where A_{m,x} denotes the set of symbols A_l for which i_m = x. If sum-product decoding is based on the computation of log-likelihood ratios, then ln[Pr(i_m = 0 | y) / Pr(i_m = 1 | y)] can be employed. For a practical implementation, it is not necessary to include all the terms in the summations in Eq. (7). Given a received signal |y| < L − 1, it is usually sufficient to determine the two closest nominal symbols A_l and include only those in the summation terms. If the received signal does not fall within the constellation boundaries, i.e., |y| ≥ L − 1, then the APP is set to 1 or to 0, depending on the symbol found at the constellation edge. In this way, the computational effort for soft demapping is not only reduced but also made essentially independent of the constellation size L.
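One possible realization of such a reduced-complexity demapper is sketched below in Python. This is a max-log-style approximation of Eq. (7) substituted here purely for illustration: the exact rule by which the paper restricts the sums of (7), as well as the function names, are assumptions, and the coded LSBs are taken to follow the double Gray labeling of Table I. The search window is kept small so that the cost does not grow with L, in the spirit of the simplification described above.

import math

def gray(x: int) -> int:
    return x ^ (x >> 1)

def llr_coded_lsbs(y: float, L: int, b_c: int, sigma2: float):
    """Approximate LLRs ln[Pr(i_m = 0 | y) / Pr(i_m = 1 | y)], m = 0 .. b_c - 1."""
    P = 1 << b_c                                                 # period of the coded-LSB labeling
    l0 = min(max(int(round((y + (L - 1)) / 2.0)), 0), L - 1)     # nearest level index, clipped
    window = range(max(0, l0 - P), min(L, l0 + P + 1))           # window size independent of L
    llrs = []
    for m in range(b_c):
        best = {0: None, 1: None}                # smallest squared distance per bit hypothesis
        for l in window:
            A_l = 2 * l - (L - 1)
            bit = (gray(l % P) >> m) & 1         # m-th coded LSB under the Table I labeling
            d2 = (y - A_l) ** 2
            if best[bit] is None or d2 < best[bit]:
                best[bit] = d2
        # ln( e^{-d0/(2 s2)} / e^{-d1/(2 s2)} ) = (d1 - d0) / (2 s2)
        llrs.append((best[1] - best[0]) / (2.0 * sigma2))
    return llrs

if __name__ == "__main__":
    # 256-QAM per dimension (L = 16), two coded bits, unit noise variance.
    print(llr_coded_lsbs(y=0.3, L=16, b_c=2, sigma2=1.0))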

The (approximate) channel APPs generated in this manner are finally used in the sum-product algorithm (SPA) [1,2] for soft iterative decoding. Note that various simplifications of the SPA have been proposed in the literature. For example, the simplified algorithm presented in [13] operates entirely in the log-likelihood-ratio domain and offers a substantial reduction in complexity with essentially the same performance as the full SPA.

V. IMPLEMENTATION COMPLEXITY

In this section, we compare, for a particular example, the implementation complexity of the proposed LDPC coding scheme with that of trellis-coded modulation (TCM) as specified in [5].

A. Encoding Complexity

Let us consider a DMT system with 200 tones (subchannels) and 16 QAM on each tone. As the computational complexity for each trellis-encoding step amounts to 7 XOR operations for TCM, a total complexity of 100 × 7 = 700 XOR operations per DMT symbol is obtained. For LDPC coding, we need a code of length 200 × 4 = 800 bits in this case. An appropriate choice is the code with j = 3, k = 25 and rate r = K/N ≈ 0.8863, resulting in a complexity of 2127 XOR operations per DMT symbol. Therefore, the complexity of LDPC encoding is about three times that of TCM encoding.
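For reference, substituting these numbers into the encoding-cost expression of Section III gives (N/2)[r(j + 3) + (j – 3)] = (800/2)[0.8863 × (3 + 3) + (3 – 3)] ≈ 2127 XOR operations per DMT symbol, compared with 100 × 7 = 700 for TCM; this is the arithmetic behind the factor of about three quoted above.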

B. Decoding Complexity

Consider again the above example. The computational complexity of the trellis decoding approximately amounts to 119 additions and 4 multiplications per trellis step, not accounting for the complexity of subset decoding and the updating of survivor sequences (backtracing).

Using the algorithm in [13], the complexity for LDPC decoding amounts to 3(k – 2) + 2j additions per iteration, assuming a block-parallel implementation. Furthermore, for soft demapping, 8 multiplications and 6 additions are required per QAM symbol. The complexity of decoding for the uncoded bits, which we did not account for, can be assumed to be similar to that of subset decoding in TCM. Using the same LDPC code parameters as above, assuming 20 iterations for soft decoding and a DMT-symbol rate of 4000 Hz, the results of Table II are obtained.
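For reference, the Table II entries can be reproduced from the figures above (200 tones × 4000 DMT symbols/s = 8 × 10^5 QAM symbols/s): soft demapping costs 6 × 0.8 M = 4.8 M additions/s and 8 × 0.8 M = 6.4 M multiplications/s, while LDPC decoding with 3(k – 2) + 2j = 3 × 23 + 2 × 3 = 75 additions per iteration, 20 iterations, and 4000 DMT symbols/s costs 75 × 20 × 4000 = 6.0 M additions/s.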

Note that if an LDPC code spanning more than one DMT symbol is used, the complexity due to sum-product decoding (which, fortunately, represents the less intensive part of the decoding process) will grow, whereas the soft-demapping complexity will remain fixed because soft demapping is performed on a DMT-symbol basis.

C. Memory Requirements

For TCM, a memory size of about 20 × 2 × 16 = 640 words, where the factor 20 accounts for 5 constraint lengths, is needed to store the survivor sequences for 16-state Viterbi decoding.

For the above LDPC code example, the parity-check matrix has 800 × 3 = 2400 nonzero entries, the locations of which have to be stored for encoding purposes. Assume, for decoding with a fully parallel and pipelined structure, that each memory block is implemented as two buffers alternating between read and write. Then, the required memory for sum-product decoding is 4 × 2400 = 9600 words. Clearly, longer codes will lead to more stringent memory requirements.

VI. SIMULATION RESULTS

A. Performance in AWGN

In the simulations, the full SPA is employed with the number of iterations limited to 20. We represent bit and block-error rates as a function of Eb/N0, the ratio of energy per bit to noise power spectral density, and symbol-error rates as a function of the normalized SNR. Recall that for a modulation and coding scheme transmitting β information bits per symbol, the normalized SNR is defined as [14]

$$
\mathrm{SNR}_{\mathrm{norm}} \;=\; \frac{\beta}{2^{\beta} - 1}\,\frac{E_b}{N_0}\,. \qquad (8)
$$

For uncoded QAM, SNR_norm ≈ 9.8 dB at a symbol-error rate of 10^–7.

Figs. 3 to 5 show the bit-error rate (BER) and block-error rate (BLER) performance of three LDPC codes for binary transmission over the AWGN channel. The codes have lengths N = 529, 2209, and 4489, and assume j = 3, j = 4, and j = 4, respectively. Uncoded performance and capacity are also plotted in these figures.

The performance achieved is as good as or better than the performance of the randomly constructed LDPC codes [2] of comparable lengths and rates. Note also the absence of error floors at error rates of 10–7, which are of interest for ADSL. It is therefore expected that good performance will also be achieved for multilevel modulation. Figs. 6 to 8 show the symbol-error rate performance for 16, 256, and 4096 QAM over the AWGN channel using the three LDPC codes.

TABLE II. DECODING COMPLEXITY FOR LDPC-CODED MODULATION AND TCM.

                                  Additions/s       Multiplications/s
TCM                               47.6 M            1.6 M
Soft demapping + LDPC decoding    4.8 M + 6.0 M     6.4 M + 0

(5)

Fig. 9 shows the performance of the (2209, 2021) LDPC code in the spectral-efficiency versus power-efficiency plane, at a BER of 10^–7 (triangles). The figure also shows the capacity of the employed signal sets (squares), shedding light onto the effectiveness of the proposed LDPC coding scheme. The gap between the capacity limit and the power efficiency of the LDPC schemes remains fairly constant, nearly independent of the spectral efficiency. Similar results have been obtained for the other LDPC codes and other spectral efficiencies but, for space reasons, are not shown here.

Fig. 3. Performance of LDPC code (529, 460) for binary transmission over the AWGN channel.

Fig. 4. Performance of LDPC code (2209, 2021) for binary transmission over an AWGN channel.

Fig. 5. Performance of LDPC code (4489, 4221) for binary transmission over an AWGN channel.

Fig. 6. Performance of LDPC code (529, 460) for transmission over the AWGN channel using 16, 256, and 4096 QAM.

Fig. 7. Performance of LDPC code (2209, 2021) for transmission over the AWGN channel using 16, 256, and 4096 QAM.

Fig. 8. Performance of LDPC code (4489, 4221) for transmission over the AWGN channel using 16, 256, and 4096 QAM.


B. Performance in AWGN with Latency Constraints

To determine the net coding gain as a function of coding latency, we consider a DMT system with a total number of 100 or 200 tones. DMT symbols are assumed to be sent at the nominal rate of 4000 Hz, as is the case for ADSL.

The results summarized in Table III show the net coding gains in dB at a symbol-error rate of 10–7 for different values of coding latency (no outer RS code). We did not run simulations for codes longer than 7200 bits, hence some entries in the table are not provided. The code rates were chosen in the range of 0.82 to 0.95. It can be seen that good coding gains can be achieved even for very tight latency constraints.

VII. CONCLUSIONS

LDPC codes are finding their way into a number of applications, e.g., for wireless communications and storage channels. They also offer unique advantages for DSL transmission.

The simulation results presented here show that, even under tight latency constraints, good net coding gains can be achieved by LDPC coding. Furthermore, LDPC codes do not exhibit “error floors” at the low bit-error rates of interest for DSL transmission. Another advantage is their low implementation complexity as compared, for example, to turbo codes.

In fact, many implementation trade-offs are possible owing to the inherent parallelism in the sum-product algorithm, opening the way for very-low-power VLSI realizations.

Clearly, further study is needed to fully characterize the benefits of LDPC coding for DSLs, including VDSL, and to assess performance with actual loop and noise characteristics.

However, the incorporation of powerful LDPC coding techniques into next-generation DSL modems appears to be attractive in terms of performance gains and also possible with only a reasonable increase in transceiver complexity.

REFERENCES

[1] R. G. Gallager, “Low-density parity-check codes,” IRE Trans. Inform. Theory, vol. IT-8, pp. 21-28, Jan. 1962.

[2] D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices,” IEEE Trans. Inform. Theory, vol. 45, no. 2, pp. 399-431, Mar. 1999.

[3] T. Starr, J. M. Cioffi, and P. J. Silverman, Digital Subscriber Line Technology, Upper Saddle River, NJ: Prentice Hall, 1999.

[4] K. Narayan and J. Li, “Bandwidth efficient low density parity check coding using multilevel coding and iterative multistage decoding,” in Proc. 2nd Int’l Symposium on Turbo Codes and Related Topics, Brest, France, pp. 165-168, Sept. 2000.

[5] “Asymmetric digital subscriber line (ADSL) transceivers,” ITU-T Recommendation G.992.1, June 1999.

[6] E. Eleftheriou and S. Ölçer, “Proposed text on LDPC coding for inclusion in Draft Recommendation,” Document SC-065, ITU-Telecommunication Standardization Sector, Study Group 15, San Francisco, CA, Aug. 2001.

[7] M. Blaum, P. Farrell, and H. van Tilborg, “Array codes,” in Handbook of Coding Theory, V. S. Pless and W. C. Huffman, Eds., Elsevier, 1998.

[8] J. L. Fan, “Array codes as low-density parity-check codes,” in Proc. 2nd Int’l Symposium on Turbo Codes and Related Topics, Brest, France, pp. 543-546, Sept. 2000.

[9] T. J. Richardson and R. Urbanke, “Efficient encoding of low-density parity-check codes,” IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 638-656, Feb. 2001.

[10] D. Hösli, E. Svensson, and D. Arnold, “High-rate low-density parity-check codes: construction and application,” in Proc. 2nd Int’l Symposium on Turbo Codes and Related Topics, Brest, France, pp. 447-450, Sept. 2000.

[11] G. Caire, G. Taricco, and E. Biglieri, “Bit-interleaved coded modulation,” IEEE Trans. Inform. Theory, vol. 44, no. 3, pp. 927-946, May 1998.

[12] E. Eleftheriou, X. Hu, and S. Ölçer, “An information-theoretic framework for comparing the coding schemes proposed for G.dmt.bis and G.lite.bis,” Temporary Document IC-070, ITU-T, Study Group 15, Question 4, Irvine, CA, Apr. 9-13, 2001.

[13] E. Eleftheriou, T. Mittelholzer, and A. Dholakia, “Reduced-complexity decoding algorithm for low-density parity-check codes,” Electron. Lett., vol. 37, no. 2, pp. 102-104, Jan. 2001.

[14] M. V. Eyuboglu and G. D. Forney, “Trellis precoding: combined coding, precoding and shaping for intersymbol interference channels,” IEEE Trans. Inform. Theory, vol. 38, no. 2, pp. 301-314, Mar. 1992.

TABLE III. NET CODING GAINS (IN dB) ACHIEVED BY SELECTED LDPC CODES AS A FUNCTION OF LATENCY.

                           Latency (in ms)
Modulation   # of tones    0.5    1      2      4      6      8
16 QAM       100           4.5    4.8    5.2    6.0    6.1    6.2
16 QAM       200           4.8    5.2    6.0    6.2    –      –
4096 QAM     100           4.1    4.6    5.2    5.9    6.1    –
4096 QAM     200           4.6    5.2    5.9    –      –      –

Fig. 9. Performance of LDPC code (2209, 2021) for various spectral efficiencies. The numbers in parentheses indicate the gap in Eb/N0 between the coded scheme and the signal-set capacity at BER = 10^–7.
