
2.1.1 Digital Fountain Codes

Digital Fountain Codes are probabilistic erasure-correction codes that allow an encoder to derive a practically unlimited number of randomized output symbols from a fixed block of source symbols. They permit decoders to improve their probability of successful decoding simply by capturing more output symbols. These codes are regarded as rateless codes because the ratio between the number of source symbols and output symbols is not fixed; rather, it is determined by the decoding process. The encoding process of Digital Fountain Codes usually generates an output symbol by adding a random collection of source symbols over the finite field of the code. Both the number of source symbols (governed by the degree distribution) and the selection of source symbols (expressed as the connectivity matrix) are determined by a pseudo-random process. The decoding process, on the other hand, usually uses a belief propagation algorithm to recover the source symbols from the captured symbols. Successful decoding is not guaranteed; however, its probability can be enhanced simply by capturing more output symbols. Currently, there are several implementations of Digital Fountain Codes, including Luby Transform (LT) codes, Tornado codes, and Raptor codes. Figure 2 shows a systematic code that produces n output symbols from k source symbols by adding l redundant symbols to the source symbols.
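The rateless property described above can be sketched as follows. This is a minimal illustration assuming single-bit symbols, XOR as the finite-field addition, and (for simplicity) a uniform degree distribution rather than the Soliton-style distributions used in practice; all names are illustrative:

```python
import itertools
import random

def fountain_encoder(source, rng):
    """Yield an endless stream of output symbols, each the XOR of a
    pseudo-randomly chosen subset of the fixed source block."""
    k = len(source)
    while True:
        d = rng.randint(1, k)            # degree (uniform here, for simplicity)
        idx = rng.sample(range(k), d)    # pseudo-random selection of neighbors
        out = 0
        for i in idx:
            out ^= source[i]             # addition over GF(2)
        yield out

rng = random.Random(42)
stream = fountain_encoder([1, 0, 1, 1], rng)
# The stream is rateless: take as many output symbols as the channel requires.
symbols = list(itertools.islice(stream, 10))
```

Because the generator never terminates, the effective code rate is decided only by how many symbols the receiver ends up needing.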

Figure 2: Relations between source and output symbols of a Digital Fountain Code


2.1.2 Short-Length LT Codes

A Luby Transform (LT) code is completely defined by the number of source symbols K and its degree distribution P(d). The degree d of each output symbol is an arbitrary integer between 1 and K. Every block of source symbols has s symbols.

(A) Rateless LT Codes

The LT encoding process of an output symbol Ci can be divided into three steps:

1. Choose the degree d of an output symbol Ci according to the degree distribution.

2. Choose randomly and uniformly d distinct source symbols as the neighbors of output symbol Ci. Connections between output symbol Ci and its neighbors are also generated.

3. Set the value of output symbol Ci to the sum (addition over the finite field) of the values of all d chosen source symbols.
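The three steps above can be sketched in code. This is a minimal sketch assuming bit-valued symbols and XOR as the finite-field addition; the function and parameter names are illustrative:

```python
import random
from functools import reduce

def lt_encode_symbol(source, degree_weights, rng):
    """Generate one LT output symbol from a block of source bits.

    degree_weights[d] is the (unnormalized) probability of degree d;
    index 0 is unused.
    """
    # Step 1: choose the degree d according to the degree distribution.
    d = rng.choices(range(1, len(degree_weights)), weights=degree_weights[1:])[0]
    # Step 2: choose d distinct source symbols uniformly at random
    # as the neighbors of the output symbol.
    neighbors = rng.sample(range(len(source)), d)
    # Step 3: the output value is the XOR of all chosen neighbors.
    value = reduce(lambda a, b: a ^ b, (source[i] for i in neighbors))
    return value, neighbors

rng = random.Random(1)
source = [1, 0, 1, 1, 0]
# Hypothetical weights giving degrees 1-3 probabilities 0.5, 0.3, 0.2.
value, neighbors = lt_encode_symbol(source, [0, 0.5, 0.3, 0.2], rng)
```

Running the function repeatedly yields one codeword per call, mirroring the iterative operation described next.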


Figure 3: BP Decoding Process

An encoder operates steps 1 through 3 iteratively to generate each codeword. A simple LT encoding example of the output symbol C1 with d1 = 2 is depicted in Figure 3. After two source symbols S1 and S3 are chosen uniformly and randomly as the neighbors of C1, the value of C1 is generated by performing addition on the values of S1 and S3. The generation of the first output symbol C1 is completed and C1 is sent to receivers.


(B) Degree Distribution

The degree distribution is derived according to the release probability of an output symbol in the BP decoding procedure. When K is fixed, the degree distribution of an LT code influences not only the recovery probability of the source symbols but also the complexity of encoding and decoding. Thus, a good degree distribution should have these properties:

1. A high probability of recovering the source symbols can be achieved with as few output symbols as possible.

2. The average number of connections per source symbol should be as low as possible.

The first property minimizes transmission bandwidth, while the second minimizes the complexity of encoding and decoding. Based on these two properties, a mathematical degree distribution called the Soliton distribution ρ(d) is derived:

ρ(d) = 1/K for d = 1, and ρ(d) = 1/(d(d−1)) for d = 2, …, K. (1)

Notice that the probability of degree one in a Soliton distribution is 1/K, so that only one output symbol with degree one is generated on average. All output symbols with degree one may be erased during transmission, in which case BP decoding fails to start. As a result, the robust Soliton distribution is developed for practical usage and is described as follows:

μ(d) = (ρ(d) + τ(d))/β, where β = Σ_{d=1}^{K} (ρ(d) + τ(d)), (2)

with τ(d) = R/(dK) for d = 1, …, K/R − 1, τ(d) = R·ln(R/δ)/K for d = K/R, and τ(d) = 0 otherwise, where R = c·ln(K/δ)·√K for suitable constants c > 0 and 0 < δ < 1. In comparison with the Soliton distribution, the robust Soliton distribution has a higher probability of degree one to ensure that BP decoding succeeds with high probability.
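The two distributions can be computed directly from their definitions. This sketch rounds the spike position K/R to the nearest integer, a common discretization; parameter defaults (c = 0.1, δ = 0.5) are illustrative choices, not values from the text:

```python
import math

def ideal_soliton(K):
    """Ideal Soliton distribution rho(d); index 0 is unused."""
    rho = [0.0] * (K + 1)
    rho[1] = 1.0 / K
    for d in range(2, K + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho

def robust_soliton(K, c=0.1, delta=0.5):
    """Robust Soliton distribution mu(d) = (rho(d) + tau(d)) / beta,
    with R = c * ln(K/delta) * sqrt(K)."""
    rho = ideal_soliton(K)
    R = c * math.log(K / delta) * math.sqrt(K)
    spike = int(round(K / R))          # nearest-integer spike position
    tau = [0.0] * (K + 1)
    for d in range(1, min(spike, K + 1)):
        tau[d] = R / (d * K)
    if 1 <= spike <= K:
        tau[spike] = R * math.log(R / delta) / K
    beta = sum(rho[d] + tau[d] for d in range(1, K + 1))
    return [(rho[d] + tau[d]) / beta for d in range(K + 1)]

mu = robust_soliton(100)
```

For K = 100 the degree-one probability μ(1) exceeds the ideal value 1/K, which is exactly the extra mass that keeps the BP ripple from drying up.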

(C) Belief Propagation (BP) Decoding

After a receiver collects sufficient output symbols, belief propagation (BP) decoding, which operates iteratively, can start. Every iteration can be divided into two steps:

1. Find output symbols with degree one. Assign their values to their neighboring source symbols, which are thereby decoded. Remove the output symbols with degree one.



2. Perform addition on each remaining output symbol with those of its neighbors that are already decoded. Remove these neighbors and their connections from each codeword.

A receiver repeats steps 1 and 2 iteratively until the source symbols are all decoded or no output symbol with degree one remains. A simple LT decoding example is illustrated in Figure 3. Initially, four output symbols [C1, C2, C3, C4] = [1, 0, 1, 1] are received to recover source symbols S1~S3 as follows:

1. The value of C1 is assigned to S1; C1 is removed and S1 is decoded. Addition is performed on C2 and C4 with S1, respectively, and S1 is removed from the neighbors of C2 and C4.

2. The value of C4 is assigned to S2; C4 is removed and S2 is decoded. Addition is performed on C2 and C3 with S2, respectively, and S2 is removed from the neighbors of C2 and C3.

3. The value of C2 is assigned to S3. C2 and C3 are removed. S3 is decoded and BP decoding is complete.
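The walkthrough above can be reproduced with a minimal peeling-style BP decoder over GF(2). The neighbor sets below are an assumption inferred from the steps described (C1→{S1}, C2→{S1,S2,S3}, C3→{S2,S3}, C4→{S1,S2}); the figure itself is not available here:

```python
def bp_decode(values, neighbors, k):
    """Belief-propagation (peeling) decoder over GF(2).

    values:    received output-symbol values (bits)
    neighbors: list of sets; neighbors[i] holds the indices of the
               source symbols XORed into output symbol i
    k:         number of source symbols
    Returns the decoded source symbols, or None if BP stalls.
    """
    values = list(values)
    neighbors = [set(n) for n in neighbors]
    source = [None] * k
    while any(s is None for s in source):
        # Step 1: find an output symbol of degree one and release it.
        ripple = [i for i, n in enumerate(neighbors) if len(n) == 1]
        if not ripple:
            return None          # no degree-one symbol left: BP stalls
        i = ripple[0]
        s = neighbors[i].pop()
        source[s] = values[i]
        # Step 2: XOR the decoded symbol out of every remaining output
        # symbol that still lists it as a neighbor, and drop the edge.
        for j, n in enumerate(neighbors):
            if s in n:
                values[j] ^= source[s]
                n.remove(s)
    return source

# Received [C1, C2, C3, C4] = [1, 0, 1, 1] with the assumed neighbor sets.
decoded = bp_decode([1, 0, 1, 1], [{0}, {0, 1, 2}, {1, 2}, {0, 1}], 3)
```

This sketch releases one degree-one symbol per iteration rather than a whole batch, but the recovered values match the three steps in the example: [S1, S2, S3] = [1, 0, 1].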

(D) Drawbacks

BP decoding may stop at an arbitrary iteration when no degree-one output symbol is left. The information contained in the remaining output symbols cannot be exploited by the BP algorithm. In cases where the received output symbols cannot be decoded by BP, the remaining output symbols represent wasted transmission bandwidth.
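As a tiny illustration with hypothetical neighbor sets: if every received output symbol has degree two, BP has no degree-one symbol to release and cannot even start, even though three symbols were received for three unknowns:

```python
# Hypothetical received set: C1 = S1 xor S2, C2 = S2 xor S3, C3 = S1 xor S3.
# No output symbol has degree one, so the peeling process cannot release
# any source symbol and the whole received set is wasted.
neighbors = [{0, 1}, {1, 2}, {0, 2}]
bp_can_start = any(len(n) == 1 for n in neighbors)  # False
```

Note that in this particular set C3 also equals C1 xor C2, so the three symbols are linearly dependent; the stall here is a structural property of the degree-one requirement, not merely bad luck.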

