Academic year: 2022


Problem 4.

a)

b)

If we sort the probabilities in descending order, we can see that the two letters with the lowest probabilities are a2 and a4. These become the leaves on the lowest level of the binary tree. Their parent node has probability 0.04 + 0.05 = 0.09. If we treat this parent node as a letter in a reduced alphabet, it is again one of the two letters with the lowest probability, the other being a1. Continuing in this manner, we get the binary tree shown in Figure 1, and the code is

a1 → 110, a2 → 1111, a3 → 10, a4 → 1110, a5 → 0.

Figure 1: Huffman code for the five-letter alphabet.

c) lavg = 0.15 × 3 + 0.04 × 4 + 0.26 × 2 + 0.05 × 4 + 0.5 × 1 = 1.83 bits/symbol.
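The merging procedure in b) can be sketched as a few lines of Python. This is a minimal sketch, not part of the original solution: it tracks only codeword lengths (not the bit labels), and the use of `heapq` and the tie-breaking it implies are implementation choices.

```python
import heapq

def huffman_lengths(probs):
    """Return Huffman codeword lengths for the given probabilities.
    Each heap entry is (probability, [symbol indices]); merging two
    entries adds one bit to every symbol's codeword under that node."""
    lengths = [0] * len(probs)
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)  # two lowest-probability nodes
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))  # reduced alphabet
    return lengths

probs = [0.15, 0.04, 0.26, 0.05, 0.50]   # a1 .. a5
lengths = huffman_lengths(probs)
avg = sum(p * l for p, l in zip(probs, lengths))
print(lengths, round(avg, 2))
```

The first merge combines 0.04 and 0.05 into the 0.09 node, exactly as in the solution, and the resulting lengths reproduce lavg = 1.83 bits/symbol from part c).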


Problem 5.

Figure 2: Huffman code for the four-letter alphabet in Problem 5.

a) The Huffman code tree and the resulting code are shown in Figure 2.

The average length of the code is 0.1 × 3 + 0.3 × 2 + 0.25 × 3 + 0.35 × 1 = 2 bits/symbol.

b) The minimum variance Huffman code tree and the resulting code are shown in Figure 3.

The average length of this code is also 2 bits/symbol, since every codeword has length 2.

Figure 3: Minimum variance Huffman code for the four-letter alphabet in Problem 5.

The average codeword length is the same for both codes, so they are equally efficient in terms of rate. However, the second code has zero variance in codeword length. This means that we would not have any problems with buffer control if we were using this code in a communication system. We cannot make the same assertion about the first code.
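The rate/variance comparison above can be checked numerically. A small sketch, with the codeword lengths {3, 2, 3, 1} and {2, 2, 2, 2} read off from the two average-length computations:

```python
# Compare the two Problem 5 codes on the source p = (0.1, 0.3, 0.25, 0.35).
probs = [0.1, 0.3, 0.25, 0.35]
codes = {"standard": [3, 2, 3, 1], "minimum variance": [2, 2, 2, 2]}
for name, lens in codes.items():
    avg = sum(p * l for p, l in zip(probs, lens))            # mean length
    var = sum(p * (l - avg) ** 2 for p, l in zip(probs, lens))  # length variance
    print(f"{name}: avg = {avg:.2f} bits/symbol, variance = {var:.2f}")
```

Both codes average 2 bits/symbol, but the standard code has length variance 0.7 while the minimum variance code has exactly 0.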


Problem 6.

Examining the Huffman code generated in Problem 4 (not 3!) along with the associated probabilities, we have

The proportion of zeros in a given sequence can be obtained by first computing the expected number of zeros in a codeword and then dividing that by the average length of a codeword. The expected number of zeros in a codeword is

1 × 0.15 + 0 × 0.04 + 1 × 0.26 + 1 × 0.05 + 1 × 0.50 = 0.96.

The proportion of zeros is then 0.96/1.83 ≈ 0.52, close to one half. If we examine Huffman codes for sources with dyadic probabilities, we find that the proportion is exactly one half. Thus, the use of a Huffman code will not lead to inefficient channel usage.
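This computation can be reproduced directly from the codewords. The code table below is a reconstruction, not the original figure: it is the assignment consistent with the codeword lengths in Problem 4 c) and the per-codeword zero counts in the sum above.

```python
# Proportion of zeros emitted by the Problem 4 Huffman code.
# Codewords reconstructed from the stated lengths and zero counts.
code = {"a1": "110", "a2": "1111", "a3": "10", "a4": "1110", "a5": "0"}
probs = {"a1": 0.15, "a2": 0.04, "a3": 0.26, "a4": 0.05, "a5": 0.50}

exp_zeros = sum(probs[s] * c.count("0") for s, c in code.items())  # 0.96
avg_len = sum(probs[s] * len(c) for s, c in code.items())          # 1.83
print(exp_zeros, avg_len, exp_zeros / avg_len)
```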

Problem 10.

Depending on how you count, five characters are received in error before the first correctly decoded character.

For the minimum variance code the situation is different.


Again, only a single character is received in error.
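The error-propagation behavior discussed here can be simulated. Since the received sequences of Problem 10 are not reproduced in this text, the sketch below uses the Problem 4 code as a stand-in (the code dictionary is the same reconstructed assignment as above, and the message is an arbitrary example): a single flipped bit garbles the first few decoded symbols until the decoder resynchronizes.

```python
# Effect of a single bit error on Huffman decoding (illustrative only;
# uses the reconstructed Problem 4 code, not Problem 10's sequences).
code = {"a1": "110", "a2": "1111", "a3": "10", "a4": "1110", "a5": "0"}

def decode(bits, code):
    """Greedy prefix decoding; any trailing partial codeword is dropped."""
    inv = {v: k for k, v in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:          # a complete codeword has accumulated
            out.append(inv[buf])
            buf = ""
    return out

msg = ["a3", "a1", "a5", "a2", "a3"]
bits = "".join(code[s] for s in msg)
corrupted = ("1" if bits[0] == "0" else "0") + bits[1:]  # flip first bit
print(decode(bits, code))       # recovers msg exactly
print(decode(corrupted, code))  # wrong symbols until resynchronization
```

In this run the corrupted stream decodes to [a5, a5, a1, a5, a2, a3]: the first four symbols are wrong, after which the decoder falls back into step and the tail [a2, a3] is recovered correctly.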

Problem 13.
