
In this chapter, we assess via simulations the error performance of the found VLECPCs in terms of the reconstructed source symbol error rate (SER).1 In all simulations, the source is assumed memoryless and the channel is the BPSK-modulated AWGN channel. The decoding complexity of the proposed two-phase sequence MAP (TP-SMAP) decoder is also examined. Furthermore, comparisons with other systems in the literature, including three known VLECPC schemes and a traditional SSCC system, are provided. For measuring the time required to search for the optimal and suboptimal VLECPCs, the experiments were carried out in the C programming language under a 64-bit Linux operating system (Ubuntu 10.04 LTS) on a desktop computer with an Intel Core2 Duo E6600 2.4 GHz CPU and 4 GB of memory. It should be noted that, unless otherwise specified, the VLECPCs in the following simulations are decoded by the TP-SMAP decoder.

As usual, the system signal-to-noise ratio (SNR) is given by SNR ≜ E/N0, where E is the signal energy per channel use and N0/2 is the variance of the zero-mean additive channel noise sample. To account for the coding redundancy of systems with different code rates, the SNR per source symbol is used in presenting the simulation results, which is given by

SNRs = Es/N0 = (E/N0) · (1/R),    (6.1)

where Es is the energy per source symbol, and R is the overall (average) system rate, defined as the number of transmitted source symbols per channel use. For an SSCC system, the overall rate R satisfies R = Rc/Rs, where Rs is the source coding rate (in coded bits/source symbol) and Rc is the channel coding rate (in coded bits/channel use).

1 As a convention, the SER here is the Levenshtein distance between the transmitted sequence and the reconstructed sequence.
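For concreteness, the symbol-level Levenshtein distance entering this SER can be computed with a standard dynamic program; the following is a minimal C sketch (function and variable names are ours and are not part of the simulation code described in this chapter), with the SER of a run then taken, under our reading of the above convention, as the accumulated distance divided by the total number of transmitted source symbols:

```c
#include <stdlib.h>

/* Levenshtein (edit) distance between the transmitted symbol sequence
 * tx[0..tx_len-1] and the reconstructed sequence rx[0..rx_len-1],
 * using the usual two-row dynamic-programming recursion.                */
static int levenshtein(const int *tx, int tx_len, const int *rx, int rx_len)
{
    int *prev = malloc((rx_len + 1) * sizeof *prev);
    int *curr = malloc((rx_len + 1) * sizeof *curr);
    for (int j = 0; j <= rx_len; j++)
        prev[j] = j;                          /* distance from the empty prefix */
    for (int i = 1; i <= tx_len; i++) {
        curr[0] = i;
        for (int j = 1; j <= rx_len; j++) {
            int sub = prev[j - 1] + (tx[i - 1] != rx[j - 1]);  /* substitution */
            int del = prev[j] + 1;                             /* deletion     */
            int ins = curr[j - 1] + 1;                         /* insertion    */
            int min = sub < del ? sub : del;
            curr[j] = (min < ins) ? min : ins;
        }
        int *tmp = prev; prev = curr; curr = tmp;              /* swap rows */
    }
    int dist = prev[rx_len];
    free(prev);
    free(curr);
    return dist;
}
```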

Hence, for an SSCC system employing a kth-order Huffman VLC2 followed by a tail-biting convolutional code, Rs is the average codeword length of the Huffman code divided by k, and Rc is the rate of the tail-biting convolutional code. Note that a VLECPC (or a single-step JSCC) can be regarded as having Rc = 1, with Rs being its average source coding rate, since no explicit channel coding is performed.
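As a worked instance of (6.1), consider the SSCC used later in Figure 6.7: a first-order Huffman code with an average codeword length of 4.156 bits/symbol (so Rs = 4.156) followed by a rate-1/3 TBCC (Rc = 1/3). Then

\[
R \;=\; \frac{R_c}{R_s} \;=\; \frac{1/3}{4.156} \;\approx\; 0.080,
\qquad
\mathrm{SNR}_s\,[\mathrm{dB}] \;=\; \mathrm{SNR}\,[\mathrm{dB}] + 10\log_{10}\frac{1}{R} \;\approx\; \mathrm{SNR}\,[\mathrm{dB}] + 11\ \mathrm{dB},
\]

so for that SSCC the SNR per source symbol on the horizontal axis is roughly 11 dB above the channel SNR E/N0.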

Table 6.1: Average codeword length per grouped symbol of an 8-ary alphabet generated from binary non-uniform memoryless sources with different p0.

             Buttigieg’s      Lamy’s           Wang’s           Opt. VLECPC
p0           0.7      0.8     0.7      0.8     0.7      0.8     0.7      0.8
dfree = 3    4.500    4.000   4.500    4.000   4.500    4.000   4.473    3.992
dfree = 5    6.443    5.912   6.443    5.912   6.443    5.912   6.340    5.592
dfree = 7    8.326    7.864   8.473    7.936   8.326    7.864   8.016    7.240

In Table 6.1, we compare the VLECPCs found by the proposed method in Chapter 3 with Buttigieg’s codes [7], Lamy’s codes [23] and the codes by Wang et al. [30]. Here, we group three information bits, generated from a binary non-uniform memoryless source with bit probability p0 ≜ Pr(0) ∈ {0.7, 0.8}, into one source symbol; hence, the VLECPCs are 3rd-order VLCs (i.e., k = 3), and the size of the source alphabet is K = 2³ = 8. Since our proposed algorithm is guaranteed to find the VLECPC with minimal average codeword length under a fixed dfree, the resulting VLECPCs have a shorter average codeword length than any other code with the same free distance.
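As a quick sanity check on these entries, the average codeword length of a 3rd-order code is simply the expectation of the codeword lengths under the grouped-symbol probabilities p0^(3−w)(1−p0)^w, where w is the number of 1-bits in the grouped symbol. The following minimal C sketch (our own illustration, not part of the search code) reproduces the 7.240 entry for p0 = 0.8 and dfree = 7 using the codeword lengths of the code listed in Table 6.2 below:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double p0 = 0.8;   /* Pr(0) of the binary memoryless source */
    /* Codeword lengths of the dfree = 7, p0 = 0.8 code of Table 6.2,
     * indexed by the value of the 3-bit grouped symbol 000, 001, ..., 111. */
    const int len[8] = {5, 8, 9, 11, 10, 12, 12, 13};
    double avg = 0.0;

    for (int s = 0; s < 8; s++) {
        int w = ((s >> 2) & 1) + ((s >> 1) & 1) + (s & 1);  /* number of 1-bits        */
        double prob = pow(p0, 3 - w) * pow(1.0 - p0, w);    /* grouped-symbol probability */
        avg += prob * len[s];
    }
    printf("average codeword length = %.3f bits/grouped symbol\n", avg);
    /* Prints 7.240, matching the (dfree = 7, p0 = 0.8) entry of Table 6.1. */
    return 0;
}
```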

We then investigate the improvement in both error performance and decoding complexity offered by the proposed TP-SMAP decoder. In Figure 6.1, 30 information bits (i.e., 10 grouped symbols) are encoded by the optimal VLECPC with dfree = 7 and p0 = 0.8 of Table 6.1, which is listed in Table 6.2. The dotted line shows the performance of the MAP decoder under the assumption that the receiver knows only the number of transmitted bits, N. The solid line portrays the MAP decoder’s performance under the assumption that the receiver knows both the number of transmitted symbols, L, and the number of transmitted bits, N.

2 Recall that a kth-order VLC maps a block of k source symbols onto a variable-length codeword, so its average source coding rate is given by the average codeword length divided by k.

Table 6.2: The optimal VLECPC with dfree = 7 and p0 = 0.8 (the one with an average codeword length of 7.240) of Table 6.1.

Grouped symbol    Probability    Codeword (optimal VLECPC with dfree = 7, p0 = 0.8)
000               0.512          00100
001               0.128          01011010
010               0.128          100111001
100               0.128          1111111111
011               0.032          11010110011
101               0.032          000110010011
110               0.032          110011101011
111               0.008          1111110001011

[Figure 6.1 plot: symbol error rate versus SNRs (dB); curves: MAP on TN and MAP on TL,N.]

Figure 6.1: Error performance of different decoders applied to the same code, namely the optimal VLECPC listed in Table 6.2. The number of 3-bit source symbols per transmission block is 10, which is equivalent to 30 source information bits.

Table 6.3: Average (AVG) and maximum (MAX) numbers of decoder branch metric computations for the decoders of Figure 6.1.

                    Eb/N0 = 1 dB    Eb/N0 = 2 dB    Eb/N0 = 3 dB    Eb/N0 = 4 dB
Decoder             AVG    MAX      AVG    MAX      AVG    MAX      AVG    MAX
Viterbi on TN       459    768      459    768      459    768      459    768
Viterbi on TL,N     1651   2600     1651   2600     1651   2600     1651   2600
TP-SMAP on TL,N     461    2970     460    1619     459    863      459    768

As shown in Figure 6.1, about 0.3 dB of coding gain is realized by knowing L (in addition to N).

Table 6.3 summarizes the decoding complexities of the different decoders in terms of branch metric computations. From the table, we observe that the TP-SMAP decoder has a decoding complexity similar to that of the Viterbi algorithm on TN while achieving about 0.3 dB of coding gain in error performance. For identical error performance, the TP-SMAP decoding algorithm requires, on average, roughly four times fewer branch computations than the Viterbi algorithm on TL,N.
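For reference, the quantity being counted above is one branch metric evaluation per trellis branch. On the BPSK-modulated AWGN channel, a MAP-style branch metric can be sketched as follows; this is only an illustrative form (up to additive constants), and the exact metric used by the TP-SMAP decoder is the one defined in Chapter 5:

```c
#include <math.h>

/* Illustrative MAP branch metric for one VLECPC codeword (one trellis branch)
 * over the BPSK-modulated AWGN channel: the Gaussian log-likelihood of the
 * received samples given the codeword bits plus the log prior of the grouped
 * source symbol labeling the branch.
 *   y      : received channel samples for this branch
 *   bits   : codeword bits (0/1), mapped to BPSK as 0 -> +1, 1 -> -1
 *   n      : codeword length in bits
 *   sigma2 : noise variance N0/2
 *   log_pr : log probability of the grouped source symbol                  */
static double branch_metric(const double *y, const int *bits, int n,
                            double sigma2, double log_pr)
{
    double metric = log_pr;
    for (int j = 0; j < n; j++) {
        double x = bits[j] ? -1.0 : 1.0;                    /* BPSK mapping        */
        metric -= (y[j] - x) * (y[j] - x) / (2.0 * sigma2); /* log-likelihood term */
    }
    return metric;                                          /* larger is better    */
}
```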

[Figure 6.2 plot: average number of branch metric computations versus L; curves: Viterbi algorithm on TN, Viterbi algorithm on TL,N, and TP-SMAP on TL,N.]

Figure 6.2: Average number of decoder branch metric computations for different decoders applied to the same VLECPC, for different L at SNRs = 3 dB. The VLECPC is the one listed in Table 6.2.

We further test the decoding complexities of the different decoders for different L. In Figure 6.2, the optimal VLECPC of Table 6.2 is transmitted at SNRs = 3.0 dB. The figure indicates that the decoding complexity of the TP-SMAP decoder is similar to that of the Viterbi algorithm on TN. The result also shows that the decoding complexity of the TP-SMAP decoder grows proportionally with the transmission block size L. It should be emphasized that the decoding complexity of the TP-SMAP decoder is an order of magnitude lower than that of the Viterbi algorithm on TL,N, even though both achieve the same error performance.

[Figure 6.3 plot: symbol error rate versus SNRs (dB); curves: optimal VLECPC with p0 = 0.7 and optimal VLECPC with p0 = 0.8.]

Figure 6.3: Error performances of optimal VLECPCs for different p0. The VLECPCs are obtained from the optimal VLECPCs with dfree = 7 in Table 6.1. The number of 3-bit source symbols per transmission block is 10, which is equivalent to 30 source information bits.

We next investigate the error performance of the optimal VLECPCs for different values of p0 and L. Figure 6.3 shows that the optimal VLECPC for p0 = 0.8 performs about 0.8 dB better than the optimal VLECPC for p0 = 0.7 at an SER of 10−3. Figure 6.4 shows that the optimal VLECPC performs better when the transmission block size L is smaller. These two figures indicate that the optimal VLECPCs perform better when the source distribution is more biased (i.e., for larger p0) and when the transmission block is shorter.

[Figure 6.4 plot: symbol error rate versus SNRs (dB); curves: L = 50, 40, 30, 20, 10.]

Figure 6.4: Error performances of the optimal VLECPC for different L. The optimal VLECPC is obtained from Table 6.2.

Table 6.4: Average (AVG) and maximum (MAX) numbers of decoder branch metric computations for the codes of Figure 6.5.

                                       SNRs = 1 dB    SNRs = 2 dB    SNRs = 3 dB    SNRs = 4 dB
VLECPC                                 AVG    MAX     AVG    MAX     AVG    MAX     AVG    MAX
Lamy’s VLECPC                          511    3631    510    1858    510    970     510    731
Buttigieg’s and Wang’s VLECPCs         500    3439    499    1303    499    720     499    670
Optimal VLECPC                         461    2970    460    1119    459    719     459    668
Optimal VLECPC with smallest Bdfree    462    3040    460    1144    459    712     459    668

[Figure 6.5 plot: symbol error rate versus SNRs (dB); curves: Lamy’s VLECPC (R = Rc/Rs = 1/2.645 = 0.378), Buttigieg’s and Wang’s VLECPCs (R = 1/2.621 = 0.381), optimal VLECPC with Bdfree = 1.8268 (R = 1/2.413 = 0.414), and optimal VLECPC with smallest Bdfree = 0.0164 (R = 1/2.413 = 0.414).]

Figure 6.5: Error performances of different (3rd-order) VLECPCs for a binary non-uniform source with p0 = 0.8. The number of 3-bit source symbols per transmission block is 10, which is equivalent to 30 source information bits. All VLECPCs have free distance dfree = 7.

We next examine in Figure 6.5 the difference in error performance between the optimal code construction of Chapter 3 and the modified optimal construction of Section 4.1 (which is guaranteed to output the optimal VLECPC with the smallest Bdfree). Here, we group three information bits, generated from a binary non-uniform memoryless source with bit probability p0 ≜ Pr(0) = 0.8, into one source symbol; hence, the VLECPCs are 3rd-order VLCs (i.e., k = 3), and the size of the source alphabet is K = 2³ = 8. Also shown in the same figure are the error performances of three VLECPCs obtained by Buttigieg’s [7], Lamy’s [23] and Wang’s [30] code construction algorithms, respectively, all having the same free distance dfree = 7 as the optimal VLECPCs we constructed; Buttigieg’s and Wang’s algorithms coincidentally yield an identical code in this case. In each simulation, 10 source symbols (equivalently, 30 source information bits) are encoded and transmitted as a block. All codes are decoded using the TP-SMAP decoder of Chapter 5. Figure 6.5 shows that our optimal VLECPC, constructed by the algorithm proposed in Chapter 3, has around 0.8 dB of coding gain over the three existing VLECPCs; it also indicates that minimizing Bdfree yields a further 0.1 dB of gain.

Table 6.4 summarizes the decoding complexity of the TP-SMAP decoder for the VLECPCs of Figure 6.5. We notice that a VLECPC with a larger average codeword length requires a higher decoding complexity. This is to be expected, since the decoding trellis is larger for a VLECPC with a larger average codeword length. In line with this observation, the optimal VLECPC and the optimal VLECPC with the smallest Bdfree have, as expected, similar decoding complexities because they have identical average codeword lengths. In addition, having a smaller (in fact, the minimum) average codeword length, our optimal VLECPC is decoded faster by the TP-SMAP decoder than the other three VLECPCs.

We next test the performance of the suboptimal code construction algorithm of Section 4.2 for the 26-symbol English data source. Since two different distributions for the English alphabet are commonly used in the literature for constructing VLECPCs (e.g., compare [25, 30, 20, 26] with [7, 9, 13, 18]), we provide simulation results for both distributions; we will refer to them as Distributions 1 and 2, respectively. The VLECPCs we obtain via our suboptimal code construction algorithm are presented in Tables 6.5 and 6.6 for Distributions 1 and 2, respectively.

In Table 6.7(a), we list, for different values of dfree, the average codeword lengths (ALs) of the resulting VLECPCs under Distribution 1, as well as the execution time needed for their construction via our suboptimal algorithm and the three algorithms referenced above. For the sake of completeness, the parameters used in each algorithm are reported in Table 6.7(b).3 These parameters were chosen through a number of trials, aiming for a VLECPC with a small average codeword length. The results indicate that, by suitably setting the parameters, the VLECPCs obtained by our suboptimal code construction algorithm outperform the other three VLECPCs in average codeword length. Table 6.7(a) also shows that our suboptimal code construction algorithm is slower than Lamy’s or Wang’s algorithm for dfree ≤ 9; however, we can prevent the construction complexity of our algorithm from growing too quickly for dfree ≥ 10 by properly adjusting its parameters, while still obtaining a better code than the other three algorithms. Similar conclusions can be drawn about the performance of the above algorithms under Distribution 2; the results are presented in Table 6.8.

Analogously to other schemes, many combinations of parameters need to be tested in our suboptimal algorithm to arrive at a good VLECPC. The main parameters controlling the algorithm’s complexity are the early-elimination window ∆ and the Encoding Stack size Γ. Complexity usually increases when either ∆ or Γ increases, albeit with the benefit of improving the VLECPC average codeword length. In general, it is not straightforward to decide on the right choice of values for these parameters before testing them. Despite this inconvenience, the proposed suboptimal approach is efficient enough to test many combinations of parameters in reasonable time. For example, to obtain the suboptimal VLECPC with dfree = 3 in Table 6.5, we simulated all combinations of the following parameters: ∆ ∈ {1, 3, 5, 7, 9, 11, 13}, Γ ∈ {20, 40, 60, 80, 100, 200, 300, 400, 500, 1000} and D ∈ {Dm, Dl}.

3Buttigieg’s algorithm (specifically, MVA in [7]) and Wang’s algorithm [30] are characterized by two parameters, L1 and Lmax. An additional parameter Ls is needed for Lamy’s algorithm (specifically, noHole+L in [23]).

Table 6.5: The VLECPCs for the English alphabet with Distribution 1 obtained by the suboptimal code construction algorithm for different values of free distance.

Alphabet Probability dfree= 3 dfree= 5 dfree= 7 dfree= 9 dfree= 10 dfree= 11

E 0.14878610 0111 00001 00000000 00101101 000100000 0000000000

T 0.09354149 00101 011110 11111111 111111100 0000011110 00001011111

A 0.08833733 11011 0101011 000011111 1111000111 00101100111 000111101001

O 0.07245769 000110 1010000 111100001 11001000100 11011011000 0011010101111

R 0.06872164 010011 00110100 0011010100 110001111011 010111101100 00111100111001

N 0.06498532 101111 10010011 1100110011 0101010010100 101010010011 11101011100110

H 0.05831331 111010 11101111 01011010010 1001001100011 0110111000010 010101110010101

I 0.05644515 0001011 011001011 10101010101 00010000010001 1111100111101 111011101111010
S 0.05537763 1000100 101111100 11000101001 10100010101010 10110011110101 0111110110110011
D 0.04376834 1011001 110000100 001111001100 001100101001000 11001100001011 1100011111011100
L 0.04123298 1110010 1011110111 010101100010 100000110110011 011010110110011 01101101110011010
U 0.02762209 00000011 1101000010 101010010001 0001101010110111 100111011101111 11010010111110110
P 0.02575393 00000100 11000100111 110000111100 0100011011001010 111101101010100 101001001101110100
F 0.02455297 10001111 11110101000 0101001110110 1000000001110000 0110001101111001 1101011110110111100
M 0.02361889 10010101 110001010011 0110011000011 01000010011110111 1011110110000101 1110100001100110110
C 0.02081665 10100001 110111001100 0110100111001 01011011101011001 01001001110110110 01110010111111001010
W 0.01868161 10100110 1100010101000 1001011011001 10000110110001010 10100010110010001 011110001011111010011

G 0.01521216 11000000 1101110000010 1001110000110 000101101101110010 11010101111101001 0110011110111111000011
Y 0.01521216 010000011 11011101010111 1010001101100 010000110101010101 010010011101010011 1101010001011110010110
B 0.01267680 010000100 11011101101000 00110011110010 100110111010001000 110000101110001100 10100110101111111000101
V 0.01160928 100100000 110111000010111 01011001100111 0001101011011010001 111111100111111010 11101100000001111010010
K 0.00867360 110001101 110111010111001 01100101111100 0100001011101101010 1000110101001001101 010001101100111111000011
X 0.00146784 1000001001 110111011001100 01101100001011 0100001100000010000 1100101011110000111 101110010000111111010010
J 0.00080064 1100001111 1101110001111001 10011001011001 00010110111110110001 1110010001011110110 111011000001000110110110
Q 0.00080064 1100011100 1101110101100100 10100110010101 00011010110010011110 11100010011011000111 0101010101000111111000011
Z 0.00053376 10000010100 1101110110111111 001101101111001 01000011010011001000 11100101110110101011 1010110010001001110110010


Table 6.6: The VLECPCs for the English alphabet with Distribution 2 obtained by the suboptimal code construction algorithm for different values of free distance.

Alphabet Probability dfree= 3 dfree= 5 dfree= 7 dfree= 9 dfree= 10 dfree= 11

E 0.1270 0111 000000 0011111 00000101 000100000 0000000001

T 0.0906 00011 111111 01000110 001110011 0000011110 00001111101

A 0.0817 11101 0001110 000010000 0101101000 00101100111 011100011010

O 0.0751 001010 1111000 111101101 01110111001 11011011000 0111011101010

I 0.0697 010011 00101001 0001001001 001111100110 010111101100 10110110111000

N 0.0674 101111 11010110 1110111000 110010011000 101010010011 11011101000111

S 0.0633 110110 010110100 00000101100 0100111011111 0110111000010 101100101001110

H 0.0609 0010010 101100110 10001110011 01110110110110 1111100111101 111011010110000

R 0.0599 0100000 110010011 11110000001 10001011011100 10110011110101 1011100111010011

D 0.0425 1000110 111001101 010110100010 100011100011010 11001100001011 1110011000101100

L 0.0403 1011001 0100010101 100111010001 110100010101110 011010110110011 11011010110010100
C 0.0278 1101011 0101001011 101001111010 0000101011001010 100111011101111 110110100010011110
U 0.0276 10001011 1000110010 111000100101 1010111100111110 111101101010100 111011101101100010
M 0.0241 10010100 1010011001 0001001110101 1101011111010000 0110001101111001 1011101101111000000
W 0.0236 10100001 1011100101 1000011101010 01010000110010010 1011110110000101 1101101111000101111
F 0.0223 10100110 01001010101 1100100100001 10011111111001011 01001001110110110 1110010010011010010
G 0.0202 11000010 01011001011 00100010100011 11111011010111110 10100010110010001 10111011000010101110
Y 0.0197 11000101 10001100011 10110011110100 010111111101011110 11010101111101001 11101010111101110100
P 0.0193 000000100 10110100101 11001111110011 100111001010001010 010010011101010011 111001001001011110110
B 0.0149 100000001 011001000111 11011000101010 111000010011001110 011101110110101011 111010110000101101010
V 0.0098 100001111 100001010011 001001000100011 0010111010011001110 111000001010001101 111110100111111011101
K 0.0077 100100010 111010100011 011011001111100 1001000011010101010 1000110101001001101 1011101100000110000110

J 0.0015 0000001111 0011111100011 100111011100000 1110011011110010010 1011011101101111010 1110101101111111101101
X 0.0014 0000011010 00100111100011 0010001011100000 01101000011111001011 1110001001011110101 10111011011011111101100
Q 0.0010 00000111010 11000010100011 1101000011110100 10100000011010111110 11100111011011000111 11101000100101100010111
Z 0.0007 000001110010 011101111100011 1110001100100011 11011101111011001110 11111100100110111001 11111010001100010111010


Table 6.7: List of the VLECPCs obtained by three existing code construction schemes and the VLECPCs obtained by our suboptimal code construction algorithm for the 26-symbol English alphabet with Distribution 1 given in Table 6.5: (a) Average codeword lengths (ALs) of the found codes and execution time for each code construction algorithm; (b) Parameters used in each algorithm. The suboptimal algorithm is initialized with Ub set to equal the smallest of the average codeword lengths of the VLECPCs by Buttigieg, Lamy and Wang.

(a)

                Buttigieg’s             Lamy’s                 Wang’s                 Suboptimal
Algorithm       AL          Time        AL          Time       AL          Time       AL          Time
dfree = 3       6.272617    2m2s        6.309980    4s         6.266612    <1s        6.189350    18s
dfree = 5       8.378035    6m42s       8.400986    44s        8.378035    12s        8.333866    2m27s
dfree = 7       10.559646   4h31m       10.599945   5m43s      10.488923   27s        10.302508   8m41s
dfree = 9       12.737255   6h27m       12.806644   9m52s      12.737255   2m30s      12.532291   5m29s
dfree = 10      12.757672   11h45m      12.867893   17m54s     12.757672   47m46s     12.593140   9m35s
dfree = 11      14.876166   19h14m      15.354549   21m43s     15.024952   2h15m      14.580329   14m53s

(b)

Algorithm       Buttigieg’s    Lamy’s            Wang’s        Suboptimal
Parameters      (L1, Lmax)     (L1, Lmax, Ls)    (L1, Lmax)    (∆, Γ, D, I)
dfree = 3       (4, 13)        (4, 13, 10)       (4, 13)       (5, 300, Dm, 2)
dfree = 5       (6, 15)        (6, 15, 12)       (6, 15)       (3, 500, Dl, 1)
dfree = 7       (7, 16)        (7, 16, 13)       (7, 16)       (5, 2000, Dm, 1)
dfree = 9       (9, 18)        (9, 18, 15)       (9, 18)       (1, 60, Dm, 1)
dfree = 10      (10, 19)       (10, 19, 15)      (10, 19)      (1, 40, Dl, 2)
dfree = 11      (12, 21)       (12, 21, 17)      (12, 21)      (1, 4, Dl, 1)

Table 6.8: List of the VLECPCs obtained by three existing code construction schemes and the VLECPCs obtained by our suboptimal code construction algorithm for the 26-symbol English alphabet with Distribution 2 given in Table 6.6: (a) Average codeword lengths (ALs) of the found codes and execution time for each code construction algorithm; (b) Parameters used in each algorithm. The suboptimal algorithm is initialized with Ub set to equal the smallest of the average codeword lengths of the VLECPCs by Buttigieg, Lamy and Wang.

(a)

                Buttigieg’s           Lamy’s                Wang’s              Suboptimal
Algorithm       AL         Time       AL         Time       AL        Time      AL        Time
dfree = 3       6.4038     20s        6.4047     14s        6.3574    <1s       6.2560    7s
dfree = 5       8.4740     5m16s      8.5049     47s        8.4740    9s        8.3223    1m13s
dfree = 7       10.5388    1h55m      10.5110    12m01s     10.5388   47s       10.3615   12m13s
dfree = 9       12.8898    3h14m      12.9644    13m04s     12.8898   4m22s     12.6647   6m03s
dfree = 10      12.8959    9h10m      13.0095    58m29s     12.8959   19m41s    12.7507   8m49s
dfree = 11      15.0345    17h37m     15.0846    38m53s     15.0345   1h20m     14.6521   16m12s

(b)

Algorithm       Buttigieg’s    Lamy’s            Wang’s        Suboptimal
Parameters      (L1, Lmax)     (L1, Lmax, Ls)    (L1, Lmax)    (∆, Γ, D, I)
dfree = 3       (4, 13)        (4, 13, 13)       (4, 13)       (6, 200, Dm, 1)
dfree = 5       (6, 15)        (6, 15, 13)       (6, 15)       (2, 250, Dm, 1)
dfree = 7       (7, 18)        (7, 18, 15)       (7, 18)       (1, 3000, Dm, 1)
dfree = 9       (9, 18)        (9, 18, 16)       (9, 18)       (1, 20, Dl, 1)
dfree = 10      (10, 20)       (10, 20, 17)      (10, 20)      (3, 40, Dl, 1)
dfree = 11      (11, 21)       (11, 21, 18)      (11, 21)      (1, 12, Dl, 1)

It took only about 29 minutes to simulate all these 140 combinations within a single computer experiment.
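Operationally, such a sweep is just a driver loop around the construction routine. A minimal C sketch is shown below; suboptimal_construct() is a placeholder name standing in for the Section 4.2 algorithm (it is not a function of our actual implementation), and its stub body would be replaced by the real construction call:

```c
#include <stdio.h>

enum { D_M = 0, D_L = 1 };   /* the two distance metrics Dm and Dl */

/* Placeholder for the suboptimal construction routine of Section 4.2.
 * In the real experiment it runs the code search with the given
 * early-elimination window delta, stack size gamma and metric D, and
 * returns the average codeword length of the VLECPC it finds.        */
static double suboptimal_construct(int delta, int gamma, int metric_D)
{
    (void)delta; (void)gamma; (void)metric_D;
    return 6.2560;   /* dummy value for this sketch */
}

int main(void)
{
    const int deltas[]  = {1, 3, 5, 7, 9, 11, 13};
    const int gammas[]  = {20, 40, 60, 80, 100, 200, 300, 400, 500, 1000};
    const int metrics[] = {D_M, D_L};
    double best_al = 1e30;
    int best_d = 0, best_g = 0, best_m = 0;

    /* 7 x 10 x 2 = 140 combinations, as in the dfree = 3 search above. */
    for (int i = 0; i < 7; i++)
        for (int j = 0; j < 10; j++)
            for (int k = 0; k < 2; k++) {
                double al = suboptimal_construct(deltas[i], gammas[j], metrics[k]);
                if (al < best_al) {
                    best_al = al;
                    best_d = deltas[i];
                    best_g = gammas[j];
                    best_m = metrics[k];
                }
            }

    printf("best AL = %.4f at (delta = %d, gamma = %d, D = %s)\n",
           best_al, best_d, best_g, best_m == D_M ? "Dm" : "Dl");
    return 0;
}
```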

Table 6.9: The complexities and performances of some different suboptimal code constructions for dfree = 4 for the 26-symbol English alphabet (Distribution 2 given in Table 6.6).

(∆, Γ, D, I) Ub AL # of node computations Time

We next provide efficiency comparisons with the recent works of Diallo et al. [13] and Hijazi et al. [18].4 Notably, unlike our work and the main referenced works in this dissertation (i.e., Buttigieg’s [7], Lamy’s [23] and Wang’s [30]), Diallo et al. and Hijazi et al. do not construct codes for a given distribution but for a pre-specified set of codeword lengths. The distributions assumed in their papers are therefore used primarily for computing the resulting average codeword length. To compare with the VLECPC of [13], we ran our suboptimal code construction for dfree = 4 under the same distribution used there for the 26-symbol English alphabet (Distribution 2 given in Table 6.6). The VLECPC designed in [13, Table IV] has an average codeword length of 7.3375 and an execution time of 310 hours. Our suboptimal code construction algorithm, when initialized with an upper bound given by the average length of the code in [13] (i.e., with Ub = 7.3375) and parameters (∆ = 3, Γ = 200, Dl, I = 1), yields an improved VLECPC with an average codeword length of 6.4794 within only 2 seconds of execution. Furthermore, a similar result can be obtained without making use of the Ub parameter (i.e., by setting Ub = ∞).

The (∆ = 3, Γ = 200, Dl, I = 1) suboptimal algorithm still yields a VLECPC with an average codeword length of 6.4794 within only 2 seconds of execution. This further shows that the proposed suboptimal algorithm is highly efficient. The complexities and performances of other suboptimal VLECPCs found for this case are summarized in Table 6.9.

4 It should be mentioned that we did not actually implement the systems of [13] and [18]; instead, the efficiency results of these systems are taken directly from the respective papers. Due to differences in the experimental platforms, the comparisons between our system and those of [13] and [18], especially in terms of execution time, may not be on a fully equal footing; they are nevertheless provided here for reference.

In [18, Table 3], Hijazi et al. provide a VLECPC for dfree= 7 within an execution time of 13 minutes and 31 seconds for a given set of codeword lengths. For Distribution 2 in Table 6.6, the resulting average codeword length is 10.4213. In [18, Table 4], they provide another VLECPC for dfree = 7, resulting in a better average codeword length of 10.1138 under Distribution 2, but no execution time is given.

Table 6.10: The complexities and performances of some other suboptimal code constructions for dfree = 7 for the 26-symbol English alphabet (Distribution 2 given in Table 6.6).

(∆, Γ, D, I)        Ub         AL         # of node computations    Time
(1, 3000, Dm, 1)    ∞          10.3615    459403                    12m37s
(1, 3000, Dm, 1)    10.5110    10.3615    452237                    12m13s

In contrast, our best suboptimal code construction to date, shown in Table 6.8 with parameters (∆ = 1, Γ = 3000, Dm, I = 1) and Ub = 10.5110, outputs a VLECPC for dfree = 7 with an average codeword length of 10.3615, which lies between the 10.4213 of [18, Table 3] and the 10.1138 of [18, Table 4], within an execution time of 12 minutes and 13 seconds.

On the other hand, our current suboptimal code construction algorithm, when initialized with Ub = 10.4213 (and likewise with Ub = 10.1138), either reports a code search failure or cannot converge to a solution in reasonable time, depending on the choice of parameters (∆, Γ, D, I). It should be pointed out, however, that unlike our suboptimal algorithm, the scheme of [18] requires a priori knowledge of all codeword lengths before it is run. Hence, arriving at the right choice of codeword lengths for any given dfree and alphabet size requires additional trials (whose execution durations are not reported in [18]). Nonetheless, it is certainly of interest to further improve the efficiency of our algorithm and to assess whether or not the average codeword length of 10.1138 is optimal for dfree = 7.

The complexities and performances of other suboptimal VLECPCs found for this case are summarized in Table 6.10.

Figure 6.6 illustrates the SER performance of the VLECPCs of Table 6.7 with dfree = 11. Again, 10 source symbols are encoded and transmitted as a block in each simulation, and all codes are decoded using the TP-SMAP decoder of Chapter 5.

[Figure 6.6 plot: symbol error rate versus SNRs (dB); curves: Wang’s VLECPC (R = Rc/Rs = 1/15.025 = 0.067), Buttigieg’s VLECPC (R = 1/14.876 = 0.067), Lamy’s VLECPC (R = 1/15.355 = 0.065), and suboptimal VLECPC (R = 1/14.580 = 0.069).]

Figure 6.6: Error performances of the VLECPCs of Table 6.7 with dfree = 11 for the 26-symbol English alphabet (with Distribution 1). The number of source symbols per transmission block is L = 10.

Table 6.11: Average (AVG) and maximum (MAX) numbers of decoder branch metric computations for the codes of Figure 6.6.

                       SNRs = 8 dB      SNRs = 9 dB     SNRs = 10 dB    SNRs = 11 dB
VLECPC system          AVG     MAX      AVG     MAX     AVG     MAX     AVG     MAX
Wang’s VLECPC          3124    11131    3123    4524    3123    4093    3123    4000
Buttigieg’s VLECPC     3112    16433    3111    5950    3111    4001    3111    4001
Lamy’s VLECPC          3211    14675    3210    5959    3209    4391    3209    4391
Suboptimal VLECPC      3108    10096    3104    4349    3104    3995    3104    3995

We observe from Figure 6.6 that the VLECPC obtained by our suboptimal code construction algorithm outperforms the other three VLECPCs by at least 0.15 dB. The decoding complexities of these systems are summarized in Table 6.11. As anticipated, the VLECPC obtained by our suboptimal code construction algorithm has the smallest average codeword length and hence a lower decoding complexity than the other three VLECPCs, particularly in the maximum number of branch metric computations.

Finally, we compare the SER performance of one suboptimal VLECPC of Table 6.7 with that of a traditional SSCC system for the case where the source is the memoryless 26-symbol English source. The SSCC system consists of a Huffman source coder followed by a tail-biting convolutional channel (TBCC) coder. We use (3, 1, 3), (3, 1, 4), (3, 1, 5) and (3, 1, 6) TBCCs with generator polynomials [54, 64, 74], [52, 66, 76], [47, 53, 75] and [564, 624, 754] (in octal) [29], respectively, such that the resulting SSCC systems have approximately the same code rate R ≈ 0.08 as the VLECPC they are compared with. Also, the dfree of the chosen VLECPC is 10, while the largest minimum Hamming distances dmin of the (3, 1, 3), (3, 1, 4), (3, 1, 5) and (3, 1, 6) TBCCs are 10, 12, 13 and 15, respectively.

Both the VLECPC and the TBCCs are decoded by sequence decoders, where the one for the VLECPC is the TP-SMAP proposed in Chapter 5, and the one for the TBCCs is the priority-first search decoding algorithm (PFSA) introduced in [16]. The results are illustrated in Figure 6.7.

Table 6.12: Average (AVG) and maximum (MAX) numbers of decoder branch metric computations for the codes of Figure 6.7. The parameter λ used in PFSA is indicated inside the parentheses.

                                                SNRs = 8 dB      SNRs = 9 dB      SNRs = 10 dB    SNRs = 11 dB
Code                              Decoder       AVG     MAX      AVG     MAX      AVG     MAX     AVG     MAX
(3, 1, 3) TBCC [54, 64, 74]       PFSA(3)       753     2049     739     1518     731     1483    730     1253
(3, 1, 4) TBCC [52, 66, 76]       PFSA(4)       1466    4192     1444    3298     1435    2916    1432    2528
(3, 1, 5) TBCC [47, 53, 75]       PFSA(5)       2907    8909     2875    6437     2865    4851    2862    4661
(3, 1, 6) TBCC [564, 624, 754]    PFSA(6)       5773    21062    5734    12814    5724    9063    5721    8687
Suboptimal VLECPC                 TP-SMAP       2698    8322     2695    7362     2694    3840    2694    3840

We remark from Figure 6.7 that, for almost all simulated SNRs, the suboptimal VLECPC outperforms the SSCCs using TBCCs of memory order no larger than 5. In comparison with the SSCC equipped with the (3, 1, 6) TBCC, the suboptimal VLECPC still performs better when the SNR is less than 9 dB. Table 6.12 summarizes the decoding complexities of the suboptimal VLECPC and the TBCCs in terms of branch metric computations.

[Figure 6.7 plot: symbol error rate versus SNRs (dB); curves: (3, 1, 3) TBCC with dmin = 10 + Huffman (R = 0.333/4.156 = 0.080), (3, 1, 4) TBCC with dmin = 12 + Huffman (R = 0.080), (3, 1, 5) TBCC with dmin = 13 + Huffman (R = 0.080), (3, 1, 6) TBCC with dmin = 15 + Huffman (R = 0.080), and suboptimal VLECPC with dfree = 10 (R = 1/12.593 = 0.079).]

Figure 6.7: Error performances of the SSCCs (specifically, first-order Huffman + TBCC) and the VLECPC of Table 6.7 with dfree = 10 for the 26-symbol English alphabet (with Distribution 1). The number of source symbols per transmission block is L = 10.
