
Introduction to Information Theory (NTU, Fall 2019) instructor: Hsuan-Tien Lin

Homework #3

RELEASE DATE: 12/11/2019

DUE DATE: 01/14/2020, BEFORE 17:00 ON GRADESCOPE

QUESTIONS ABOUT HOMEWORK MATERIALS ARE WELCOMED ON THE FACEBOOK FORUM.

Unless granted an exception by the instructor in advance, you must upload your solution to Gradescope as instructed by the TA.

Any form of cheating, lying, or plagiarism will not be tolerated. Students can get zero scores and/or fail the class and/or be kicked out of school and/or receive other punishments for such misconduct.

Discussions on course materials and homework solutions are encouraged. But you should write the final solutions alone and understand them fully. Books, notes, and Internet resources can be consulted, but not copied from.

Since everyone needs to write the final solutions alone, there is absolutely no need to lend your homework solutions and/or source codes to your classmates at any time. In order to maximize the level of fairness in this class, lending and borrowing homework solutions are both regarded as dishonest behaviors and will be punished according to the honesty policy.

You should write your solutions in English or Chinese with the common math notations introduced in class or in the problems. We do not accept solutions written in any other languages.

This homework set comes with 200 points and 20 bonus points. In general, every homework set would come with a full credit of 200 points, with some possible bonus points.

1.

For Questions 1–6, we will deal with binary symmetric channels with crossover probability ε, as shown in Figure 3.2(a) of the IAC notes. The channel can be characterized by

p(y = 1|x = 0) = p(y = 0|x = 1) = ε.

An ensemble X with four equiprobable elements is represented by the four codewords: 000, 011, 101, 110, for sequential transmission over a binary symmetric channel with ε = 0.1. Upon receiving the first (leftmost) digit y1, calculate the information provided about X. That is, let Y1 be a binary ensemble with p(y1) defined by p(x) (or more specifically, p(x1)) and the channel p(y|x). Compute I(X; Y1).
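For readers who want to sanity-check the number, here is a minimal Python sketch (not part of the original assignment) that enumerates the joint distribution p(x, y1) and evaluates the mutual information directly; the variable names and enumeration style are illustrative choices, not the intended solution method.

from math import log2

# Illustrative sketch: four equiprobable codewords sent over a BSC with
# crossover probability eps; y1 depends only on the first digit of x.
eps = 0.1
codewords = ['000', '011', '101', '110']
p_x = 1 / len(codewords)

joint = {}                                   # joint distribution p(x, y1)
for x in codewords:
    for y1 in (0, 1):
        flip = eps if y1 != int(x[0]) else 1 - eps
        joint[(x, y1)] = p_x * flip

p_y1 = {y1: sum(joint[(x, y1)] for x in codewords) for y1 in (0, 1)}

# I(X; Y1) = sum_{x, y1} p(x, y1) * log2( p(x, y1) / (p(x) p(y1)) )
mi = sum(p * log2(p / (p_x * p_y1[y1])) for (x, y1), p in joint.items() if p > 0)
print(f"I(X; Y1) ~ {mi:.4f} bits")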

2.

Following the previous question, compute I(X; Y2|Y1), where Y2 is the binary ensemble for the second received digit.
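As a possible numerical cross-check (again not part of the assignment), the same enumeration extends to two received digits, and the chain rule I(X; Y1, Y2) = I(X; Y1) + I(X; Y2 | Y1) isolates the conditional term; the helper below is an illustrative sketch under the same assumptions as the previous one.

from math import log2

eps = 0.1
codewords = ['000', '011', '101', '110']
p_x = 1 / len(codewords)

def p_channel(y, x_bit):
    # BSC transition probability p(y | x_bit) with crossover probability eps
    return eps if y != x_bit else 1 - eps

# Joint distribution p(x, y1, y2): the two received digits are flipped independently.
joint = {(x, y1, y2): p_x * p_channel(y1, int(x[0])) * p_channel(y2, int(x[1]))
         for x in codewords for y1 in (0, 1) for y2 in (0, 1)}

def mutual_info(joint, x_of, y_of):
    # I(X; Y) from a joint dict; x_of / y_of pick out the two coordinates.
    px, py, pxy = {}, {}, {}
    for k, p in joint.items():
        x, y = x_of(k), y_of(k)
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
        pxy[(x, y)] = pxy.get((x, y), 0.0) + p
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items() if p > 0)

i_x_y1 = mutual_info(joint, lambda k: k[0], lambda k: k[1])
i_x_y12 = mutual_info(joint, lambda k: k[0], lambda k: (k[1], k[2]))
print(f"I(X; Y2 | Y1) ~ {i_x_y12 - i_x_y1:.4f} bits")   # chain rule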

3.

Let C denote a binary symmetric channel with crossover probability ε = 0.4. Let CN denote the channel obtained by cascading N statistically independent channels C. Show that CN is a binary symmetric channel and derive its crossover probability in terms of N.
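A quick numerical sanity check (not a proof, and not required by the problem) is to raise the single-channel transition matrix to the N-th power: the cascade of N independent channels has exactly this matrix, so one can confirm that the two off-diagonal entries stay equal and compare them against whatever closed form the derivation produces. The sketch below is illustrative.

# Transition matrix of one BSC with crossover probability eps,
# indexed as T[x][y] = p(y | x).
eps, N = 0.4, 5
T = [[1 - eps, eps],
     [eps, 1 - eps]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

TN = [[1.0, 0.0], [0.0, 1.0]]          # identity: cascade of zero channels
for _ in range(N):
    TN = mat_mul(TN, T)                # N statistically independent copies of C

print("p(y = 1 | x = 0) after the cascade:", TN[0][1])
print("p(y = 0 | x = 1) after the cascade:", TN[1][0])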

4.

Following the previous question, show that the information capacity of CN is strictly positive for all positive integers N, and find the smallest N for which the capacity is less than 0.001 bits.
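Once the cascaded crossover probability from Question 3 is available, the capacity check is short, since a binary symmetric channel with crossover probability p has capacity 1 − H2(p) bits per use. Below is a small helper (an illustrative sketch, not the required derivation); the scan over N is left to the reader.

from math import log2

def binary_entropy(p):
    # H2(p) in bits, with the convention 0 * log 0 = 0
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(crossover):
    # capacity of a binary symmetric channel: 1 - H2(crossover) bits per use
    return 1 - binary_entropy(crossover)

# Example: capacity of a single channel with eps = 0.4; for the cascaded channel,
# plug in the crossover probability derived in Question 3 and scan over N.
print(bsc_capacity(0.4))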

5.

Consider a channel coding scheme where only two codewords 000 and 111 are sent over a binary symmetric channel with crossover probability ε = 0.1, and a 'majority vote' decoder is employed (i.e., a received binary string with more 0's than 1's is decoded to 000; otherwise it is decoded to 111). Calculate the rate of the scheme as well as its probability of error (the maximum over both codewords).
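For a concrete check of the three-digit case (illustrative only), one can enumerate all eight possible received strings and add up the probability mass on which majority-vote decoding disagrees with the transmitted codeword; by the symmetry of the code and the channel, it suffices to condition on 000 being sent.

from itertools import product

eps = 0.1
sent = (0, 0, 0)                       # by symmetry, 111 gives the same error probability

def prob_received(received, sent):
    # probability of a received string given the sent codeword (independent flips)
    p = 1.0
    for r, s in zip(received, sent):
        p *= eps if r != s else 1 - eps
    return p

# Majority vote: two or more 1's among the three digits decodes to 111, an error here.
p_error = sum(prob_received(r, sent) for r in product((0, 1), repeat=3) if sum(r) >= 2)
print("rate =", 1 / 3, "bits/use,  P(error) =", p_error)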

6.

Consider a channel coding scheme where only two codewords 0^N and 1^N are sent over a binary symmetric channel with crossover probability ε = 0.1, and a 'majority vote' decoder is employed (i.e., a received binary string with more 0's than 1's is decoded to 0^N; otherwise it is decoded to 1^N). For simplicity, assume that N is an odd integer. Calculate the maximum rate of such a scheme (i.e., by minimizing N) such that the probability of error is ≤ 0.0001.
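For general odd N (again an illustrative sketch, not the intended derivation), majority-vote decoding of the length-N repetition code fails exactly when more than half of the digits are flipped, so the error probability is a binomial tail; one can then compare the rate log2(2)/N = 1/N against the target error probability for a few odd N.

from math import comb

eps = 0.1

def repetition_error_prob(N, eps):
    # P(error) for majority-vote decoding of the length-N repetition code (N odd):
    # an error occurs iff at least (N + 1) / 2 of the digits are flipped.
    return sum(comb(N, k) * eps ** k * (1 - eps) ** (N - k)
               for k in range((N + 1) // 2, N + 1))

for N in (1, 3, 5, 7, 9):              # a few odd blocklengths to illustrate the trade-off
    print(N, 1 / N, repetition_error_prob(N, eps))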


7.

Let ⊕ denote the exclusive-or operation on binary strings. Show that for any two binary strings s and t of length N, K(s ⊕ t) ≤ K(s) + K(t) + c1 log N + c2, where c1, c2 are positive constants.

8.

Following the previous question, prove or disprove that there exist s and t such that K(s) ≥ N, K(t) ≥ N, but K(s ⊕ t) ≤ N/1126.

9.

Prove that K(s^N) ≤ K(s) + c1 log N + c2, where s^N denotes s concatenated N times, and c1, c2 are positive constants.

10.

For length-N binary strings s with k zeros, show that K(s) ≤ c1 N · H(k/N) + c2 log N + c3 for some positive constants c1, c2, c3, where H(θ) is the entropy of the binary ensemble with p(0) = θ and p(1) = 1 − θ.
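One standard counting fact that may be worth recalling here (a reminder only, not necessarily the intended route): there are exactly \binom{N}{k} length-N binary strings with k zeros, and this binomial coefficient is bounded by an exponential in the binary entropy.

% standard bound on binomial coefficients via the binary entropy
\binom{N}{k} \;\le\; 2^{N \, H(k/N)}, \qquad 0 \le k \le N.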

Bonus: Incompressible Strings

11.

(Bonus 20 points) A Kolmogorov-incompressible string satisfies K(s) ≥ |s|. Call any string such that K(s) ≥ |s|/1126 weakly Kolmogorov-incompressible. Prove or disprove that the 'weakly Kolmogorov-incompressible' indicator function f(s) = ⟦K(s) ≥ |s|/1126⟧ is computable.
