Robust Code Design over Fast Fading Channels
Institute of Communication Engineering

在快速衰減通道下的強健編碼設計

Robust Code Design over Fast Fading Channels

Student: Ming-Hsin Kuo (郭明鑫)
Robust Code Design over Fast Fading Channels

Student: Ming-Hsin Kuo

Advisor: Po-Ning Chen

A Thesis

Submitted to the Institute of Communication Engineering, College of Electrical and Computer Engineering,

National Chiao Tung University in Partial Fulfillment of the Requirements

for the Degree of Master of Science

in

Communication Engineering

June 2008


Robust Code Design over Fast Fading Channels
(在快速衰減通道下的強健編碼設計)

Student: Ming-Hsin Kuo    Advisor: Prof. Po-Ning Chen

National Chiao Tung University

Institute of Communication Engineering

Chinese Abstract (translated)

In 2002, Skoglund et al. proposed nonlinear block codes that combine channel coding, channel estimation, and error correction, and showed that such codes improve system performance [10]. Their codes, however, were obtained by computer search, so the resulting structure is inefficient to decode. In 2007, Wu et al. proposed rule-constructed block codes whose performance is nearly identical to that of the computer-searched codes, while the decoding efficiency is greatly improved [13]. In this thesis, we extend the concept of rule-constructed self-orthogonal codes to fast fading channels. In the analysis and simulations, a first-order Gauss-Markov channel is adopted to model the variation of the fading coefficients within a transmission block. We verify that the designed codes perform well whether they are transmitted over the targeted fast fading channels or over quasi-static fading channels.


Robust Code Design over Quasi-static Fast Fading Channels

Student: Ming-Hsin Kuo

Advisor: Prof. Po-Ning Chen

Institute of Communication Engineering

National Chiao Tung University





Abstract

Nonlinear block codes that combine channel estimation, channel equalization and error protection were proposed by Skoglund et al. in 2002, who confirmed their improvement in system performance [10]. The design of Skoglund et al.'s block codes, however, is based on computer search and thus has no efficient structure for decoding. In 2007, Wu et al. proposed a rule-constructed structural block code and showed that it has performance comparable to the computer-searched nonlinear code of equal rate [13]. In this thesis, we extend the concept of rule-constructed self-orthogonal code design from the quasi-static fading channel to the non-static fading channel. We then examine the performance of our extended code over the first-order Gauss-Markov fading channel and find that it performs well not only in the target fast fading channel but also in the quasi-static fading channel.


Acknowledgements

I would like to express my gratitude to my advisor, Prof. Po-Ning Chen, for his patient guidance and support. This thesis would not have been possible without his advice. I would also like to thank the Department of Communication Engineering, National Chiao Tung University, for providing many resources for study and research. Finally, I would like to thank all NTL labmates for their support and friendship, as well as all those who have helped me in the past two years.


Contents

Acknowledgements i

List of Figures iv

1 Introduction 1

1.1 Background . . . 1

1.2 Objective of the Research . . . 2

1.3 Organization of the Thesis . . . 3

2 System Model 4

2.1 Overview . . . 4

2.2 Gauss-Markov Model . . . 8

3 Code Design 10

3.1 Self-orthogonality condition for SNR-optimized codewords . . . 10

3.2 Codeword Selection . . . 11

3.3 Decoding Criterion . . . 15


4 Simulation Results 20

4.1 Codes Designed For Quasi-Static Block Fading Channels . . . 20

4.2 Codes Designed For Non-Static Fading Channels . . . 33

5 Conclusions 61


List of Figures

2.1 System model for combined channel estimation and error protection codes . 8

4.1 The maximum-likelihood word error rates for Code(12, 2, 12) over Channel(2, 6)

with different degree of channel variation factors α. . . . 22

4.2 The maximum-likelihood word error rates for Code(14, 2, 14) over Channel(2, 7)

with different degree of channel variation factors α. . . . 23

4.3 The maximum-likelihood word error rates for Code(16, 2, 16) over Channel(2, 8)

with different degree of channel variation factors α. . . . 24

4.4 The maximum-likelihood word error rates for Code(18, 2, 18) over Channel(2, 9)

with different degree of channel variation factors α. . . . 25

4.5 The maximum-likelihood word error rates for Code(20, 2, 20) over Channel(2, 10)

with different degree of channel variation factors α. . . . 26

4.6 The maximum-likelihood word error rates for Code(22, 2, 22) over Channel(2, 11)

with different degree of channel variation factors α. . . . 27

4.7 The maximum-likelihood word error rates for Code(24, 2, 24) over Channel(2, 12)

with different degree of channel variation factors α. . . . 28

4.8 The maximum-likelihood word error rates for Code(12, 2, 12) over Channel(2, 3)

with different degree of channel variation factors α. . . . 29

4.9 The maximum-likelihood word error rates for Code(16, 2, 16) over Channel(2, 4)

with different degree of channel variation factors α. . . . 30

4.10 The maximum-likelihood word error rates for Code(20, 2, 20) over Channel(2, 5)

with different degree of channel variation factors α. . . . 31

4.11 The maximum-likelihood word error rates for Code(24, 2, 24) over Channel(2, 6)

with different degree of channel variation factors α. . . . 32

4.12 The maximum-likelihood word error rates for Code(12, 2, 6) over Channel(2, 6)

with different degree of channel variation factors α. . . . 34

4.13 The maximum-likelihood word error rates for Code(14, 2, 7) over Channel(2, 7)

with different degree of channel variation factors α. . . . 35

4.14 The maximum-likelihood word error rates for Code(16, 2, 8) over Channel(2, 8)

with different degree of channel variation factors α. . . . 36

4.15 The maximum-likelihood word error rates for Code(18, 2, 9) over Channel(2, 9)

with different degree of channel variation factors α. . . . 37

4.16 The maximum-likelihood word error rates for Code(20, 2, 10) over Channel(2, 10)

with different degree of channel variation factors α. . . . 38

4.17 The maximum-likelihood word error rates for Code(22, 2, 11) over Channel(2, 11)

with different degree of channel variation factors α. . . . 39

4.18 The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, 12)

with different degree of channel variation factors α. . . . 40

4.19 The maximum-likelihood word error rates for Code(12, 2, 6) over Channel(2, 3)

with different degree of channel variation factors α. . . . 41

4.20 The maximum-likelihood word error rates for Code(16, 2, 8) over Channel(2, 4)

with different degree of channel variation factors α. . . . 42

4.21 The maximum-likelihood word error rates for Code(20, 2, 10) over Channel(2, 5)

with different degree of channel variation factors α. . . . 43

4.22 The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, 6)

with different degree of channel variation factors α. . . . 44

4.23 The maximum-likelihood word error rates for Code(12, 2, 3) over Channel(2, 3)

with different degree of channel variation factors α. . . . 45

4.24 The maximum-likelihood word error rates for Code(16, 2, 4) over Channel(2, 4)

with different degree of channel variation factors α. . . . 46

4.25 The maximum-likelihood word error rates for Code(20, 2, 5) over Channel(2, 5)

with different degree of channel variation factors α. . . . 47

4.26 The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, 6)

with different degree of channel variation factors α. . . . 48

4.27 The maximum-likelihood word error rates for Code(12, 2, 3) over Channel(2, 6)

with different degree of channel variation factors α. . . . 49

4.28 The maximum-likelihood word error rates for Code(16, 2, 4) over Channel(2, 8)

with different degree of channel variation factors α. . . . 50

4.29 The maximum-likelihood word error rates for Code(20, 2, 5) over Channel(2, 10)

with different degree of channel variation factors α. . . . 51

4.30 The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, 12)

with different degree of channel variation factors α. . . . 52

4.31 The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, Q)

with channel variation factor α = 0 and different values of Q. . . . 53

4.32 The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, Q)

with channel variation factor α = 0.568084 and different values of Q. . . . . 54

4.33 The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, Q)

with channel variation factor α = 0.753713 and different values of Q. . . . . 55

4.34 The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, Q)

with channel variation factor α = 0.910057 and different values of Q. . . . . 56

4.35 The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, Q)

with channel variation factor α = 0 and different values of Q. . . . 57

4.36 The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, Q)

with channel variation factor α = 0.568084 and different values of Q. . . . . 58

4.37 The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, Q)

with channel variation factor α = 0.753713 and different values of Q. . . . . 59

4.38 The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, Q)


Chapter 1

Introduction

1.1

Background

In modern communications, high-quality data transmission usually relies on accurate knowledge of the channel characteristics. The measurement of the channel characteristics thus becomes an essential technique in real systems. Often, a training sequence that contains no data information is pre-transmitted before the data entity for the receiver to estimate the channel characteristics. Alternatively, pilot signals are inserted within the data entity to help improve the accuracy of the channel estimation [1, 8]. Recently, there has been growing interest in blind channel estimation, where no training sequence or pilot signals are transmitted [12, 3]. The receiver accordingly has to retrieve the data information with an implicit measurement of the channel characteristics based on the data entity [10].

In a quasi-static block fading environment [10], the channel statistics are assumed to change at a very slow rate, and hence remain almost constant within a block transmission period. Such block-static nature of the channel coefficients facilitates their estimation. However, for a non-quasi-static block fading channel, such as that experienced by highly mobile devices, the channel coefficients may vary evidently during a block transmission period. In such a case, obtaining good data transmission quality, even by means of blind channel estimation techniques, is an engineering challenge.

In this thesis, the channel coefficients within a transmission block are assumed deterministic but unknown. In fact, this should be the case from the standpoint of practical block transmission, since the channel coefficients that affect the current decoding are ones that have already occurred in the past. We further assume that in a static environment, these deterministic channel coefficients are the same for the entire block, while in a non-static environment, they are allowed to change several times within a block. Both the static and non-static situations will be considered in this work. Our target is to design a robust code that can be used simultaneously in both the quasi-static block fading channels and the non-static ones.

1.2

Objective of the Research

In 2002, Skoglund et al. [10] proposed introducing the computer-searched best non-linear block code to combine channel estimation, equalization and error protection. In their work, the channel coefficients were assumed unchanged during the transmission of each codeword block, but allowed to vary across blocks. Their simulations indicated that the computer-searched best code can outperform a typical training-sequence-enhanced communication system with perfect channel estimation by at least 2 dB. Nevertheless, due to the structureless nature of the computer-searched best code, time-consuming exhaustive decoding had to be adopted, which limited the practical use of their code.

In 2007, Wu et al. [13] replaced the computer-optimized non-linear code by a rule-based constructed code, and showed that the constructed code can yield performance comparable to the computer-searched best codes. Enabled by the structure of the rule-based construction, Wu et al. subsequently derived a maximum-likelihood recursive metric for use in priority-first decoding, by which the decoding complexity relative to exhaustive decoding is greatly reduced.

In this thesis, we further extend the idea of Wu et al. to the design of a code that performs well not only in the quasi-static block fading environment but also in non-static channels. The only assumption in our extension is that the receiver knows exactly when the channel coefficients change within a codeword block. We will then examine the robustness of our rule-based constructed code over the Gauss-Markov fading channels [2] with different channel parameters.

1.3

Organization of the Thesis

The organization of the thesis is as follows. In Chapter 2, channel models for quasi-static block fading channels and their extensions to non-static block fading are described. Also introduced are the Gauss-Markov fading channels, as well as how the channel parameters correspond to different degrees of Doppler effect. In Chapter 3, the rule used for code construction is presented, followed by the derivation of an error probability upper bound. In Chapter 4, simulation results are summarized and remarked upon. Chapter 5 concludes this work.


Chapter 2

System Model

2.1

Overview

Suppose the signal is transmitted through a linear time-invariant filter channel. Then, the received signal y(t) can be expressed as

y(t) = \int_{-\infty}^{\infty} h(\tau)\, x(t-\tau)\, d\tau + n(t),   (2.1)

where x(t) is the transmitted signal, h(t) is the impulse response of the linear time-invariant channel filter, and n(t) is the additive noise. The discrete equivalence of (2.1) is given by

y(t) = \sum_{\tau=-\infty}^{\infty} h(\tau)\, x(t-\tau) + n(t).   (2.2)

In case the channel filter becomes time-variant, (2.2) is refined to

y(t) = \sum_{\tau=-\infty}^{\infty} h(\tau; t)\, x(t-\tau) + n(t),

where in h(\tau; t), \tau is the convolutional argument for filtering, and t represents the dependence of the filter on time.

Similar to [10], assume a codeword b = [b_1, \ldots, b_N]^T is transmitted through the so-called quasi-static block fading channel, of which the channel coefficients remain constant within each block. Denote by P the memory order of the quasi-static fading channel. We can then re-formulate (2.2) as

y = Hb + n,   (2.3)

where y = [y_1, y_2, \ldots, y_L]^T is the complex output vector observed at the receiver,

H = \begin{bmatrix}
h_1    & 0      & \cdots & 0      \\
h_2    & h_1    & \ddots & \vdots \\
\vdots & h_2    & \ddots & 0      \\
h_P    & \vdots & \ddots & h_1    \\
0      & h_P    & \ddots & h_2    \\
\vdots & \ddots & \ddots & \vdots \\
0      & \cdots & 0      & h_P
\end{bmatrix}_{L \times N}

is formed by L = N + P − 1 shifted copies of the channel coefficients h = [h_1, h_2, \ldots, h_P]^T, and n = [n_1, n_2, \ldots, n_L]^T is the zero-mean complex Gaussian noise vector with E[n n^H] = \sigma_n^2 I_L, where I_L is the L \times L identity matrix. In the above notation, the superscripts "T" and "H" respectively represent the matrix transpose and the Hermitian matrix transpose. It is assumed that the channel coefficients are flat and normalized in the sense that \{h_i\}_{i=1}^{P} are independent and identically distributed (i.i.d.) and E[h^H h] = 1.

Formula (2.3) supposes that a codeword of length N experiences the same channel characteristics during its transmission. In a non-static environment, or for a large N, the channel coefficients may however vary within the coding block. In such a case, (2.3) should be modified as

y = H_v b + n,   (2.4)

where

H_v = \begin{bmatrix}
H^{(1)}_{N_1 \times N_1} &        &        & \\
& H^{(2)}_{N_2 \times N_2}        &        & \\
&        & \ddots &                        \\
&        &        & H^{(q)}_{N_q \times N_q}
\end{bmatrix}   (2.5)

corresponds to q channel coefficient changes during the transmission of codeword b. The off-block entries of H_v are all zero, and consecutive blocks overlap by P − 1 columns, so that N_1 + N_2 + \cdots + N_q = L while the total number of columns is N. In (2.5),

H^{(1)}_{N_1 \times N_1} = \begin{bmatrix}
h_{1,1} & 0       & \cdots  & \cdots  & 0       \\
h_{2,1} & h_{1,1} & 0       & \cdots  & 0       \\
\vdots  & \ddots  & \ddots  & \ddots  & \vdots  \\
h_{P,1} & \cdots  & h_{1,1} & 0       & \cdots  \\
0       & h_{P,1} & \cdots  & h_{1,1} & \ddots  \\
\vdots  & \ddots  & \ddots  & \ddots  & \vdots  \\
0       & \cdots  & 0       & h_{P,1} & \cdots\ h_{1,1}
\end{bmatrix},

and for 1 < i < q,

H^{(i)}_{N_i \times N_i} = \begin{bmatrix}
h_{P,i} & h_{P-1,i} & \cdots    & h_{1,i}   & 0       & \cdots & 0 \\
0       & h_{P,i}   & h_{P-1,i} & \cdots    & h_{1,i} & \ddots & \vdots \\
\vdots  & \ddots    & \ddots    & \ddots    &         & \ddots & 0 \\
0       & \cdots    & 0         & h_{P,i}   & h_{P-1,i} & \cdots & h_{1,i}
\end{bmatrix},

and

H^{(q)}_{N_q \times N_q} = \begin{bmatrix}
h_{P,q} & h_{P-1,q} & \cdots & h_{1,q}   & 0       & \cdots & 0 \\
0       & h_{P,q}   & \cdots & h_{2,q}   & h_{1,q} & \ddots & \vdots \\
\vdots  & \ddots    & \ddots &           & \ddots  & \ddots & \\
0       & \cdots    & 0      & h_{P,q}   & h_{P-1,q} & \cdots & h_{1,q} \\
0       & \cdots    & \cdots & 0         & h_{P,q} & \cdots & h_{2,q} \\
\vdots  &           &        &           & \ddots  & \ddots & \vdots
\end{bmatrix}.

Here we assume that N_i \geq P for every i, and N_i may be different for different i.

As similarly done in [10, 13], we can re-formulate (2.4) as

y = B_v h_v + n,

where h_v = [h_{1,1}, h_{2,1}, \ldots, h_{P,1},\ h_{1,2}, h_{2,2}, \ldots, h_{P,2},\ \ldots,\ h_{1,q}, \ldots, h_{P,q}]^T, and

B_v = \begin{bmatrix}
B^{(1)}_{N_1 \times P} & 0_{N_1 \times P} & \cdots & 0_{N_1 \times P} \\
0_{N_2 \times P} & B^{(2)}_{N_2 \times P} & \cdots & 0_{N_2 \times P} \\
\vdots & \vdots & \ddots & \vdots \\
0_{N_q \times P} & 0_{N_q \times P} & \cdots & B^{(q)}_{N_q \times P}
\end{bmatrix}_{L \times qP}.   (2.6)

In (2.6), with S_i \triangleq \sum_{k=1}^{i} N_k,

B^{(1)}_{N_1 \times P} = \begin{bmatrix}
b_1     & 0       & \cdots & 0 \\
b_2     & b_1     & \cdots & 0 \\
\vdots  & \vdots  & \ddots & \vdots \\
b_P     & b_{P-1} & \cdots & b_1 \\
b_{P+1} & b_P     & \cdots & b_2 \\
\vdots  & \vdots  &        & \vdots \\
b_{N_1} & b_{N_1-1} & \cdots & b_{N_1-(P-1)}
\end{bmatrix},

for 1 < i < q,

B^{(i)}_{N_i \times P} = \begin{bmatrix}
b_{S_{i-1}+1} & b_{S_{i-1}}   & \cdots & b_{S_{i-1}-(P-1)+1} \\
b_{S_{i-1}+2} & b_{S_{i-1}+1} & \cdots & b_{S_{i-1}-(P-1)+2} \\
\vdots        & \vdots        &        & \vdots \\
b_{S_i}       & b_{S_i-1}     & \cdots & b_{S_i-(P-1)}
\end{bmatrix},

and

B^{(q)}_{N_q \times P} = \begin{bmatrix}
b_{S_{q-1}+1} & b_{S_{q-1}}   & \cdots & b_{S_{q-1}-(P-1)+1} \\
b_{S_{q-1}+2} & b_{S_{q-1}+1} & \cdots & b_{S_{q-1}-(P-1)+2} \\
\vdots        & \vdots        &        & \vdots \\
b_N           & b_{N-1}       & \cdots & b_{N-(P-1)} \\
0             & b_N           & \cdots & b_{N-(P-1)+1} \\
\vdots        & \vdots        & \ddots & \vdots \\
0             & 0             & \cdots & b_N
\end{bmatrix}.

Figure 2.1: System model for combined channel estimation and error protection codes (the codeword b passes through the channel filter h_v, the noise n is added to yield y, and the joint ML decoder outputs b̂).

Since n is a complex zero-mean Gaussian vector and h_v is assumed to be an unknown constant vector, the optimal decision for the transmitted codeword is

\hat{b} = \arg\min_{B_v} \min_{h_v} \| y - B_v h_v \|^2.   (2.7)

For a given b, the least-squares (LS) estimate of h_v is given by

\hat{h}_v = (B_v^T B_v)^{-1} B_v^T y.

Taking the above LS estimate into (2.7) yields

\hat{b} = \arg\min_{B_v} \left\| y - B_v (B_v^T B_v)^{-1} B_v^T y \right\|^2   (2.8)
        = \arg\min_{P_B^{\perp}} \left\| P_B^{\perp} y \right\|^2,

where P_B^{\perp} \triangleq I_L - P_B and P_B \triangleq B_v (B_v^T B_v)^{-1} B_v^T.
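As a numerical sanity check, the LS channel estimate and the projection metric of (2.7)–(2.8) can be sketched for the single-sub-block case (q = 1). This is a minimal real-valued sketch; the helper `build_B` and all parameter values are illustrative assumptions, whereas the thesis works with complex coefficients:

```python
import numpy as np

def build_B(b, P):
    """Build the L x P convolution matrix B (q = 1 case) from a (+/-1)
    codeword b, so that y = B h + n is equivalent to y = H b + n."""
    N = len(b)
    B = np.zeros((N + P - 1, P))
    for j in range(P):
        B[j:j + N, j] = b
    return B

# Toy example with an assumed codeword, channel and noise realization.
rng = np.random.default_rng(0)
P = 2
b = np.array([-1.0, 1.0, 1.0, -1.0, 1.0, -1.0])
h = rng.normal(size=P) / np.sqrt(P)
B = build_B(b, P)
y = B @ h + 0.05 * rng.normal(size=B.shape[0])

# Least-squares channel estimate h_hat = (B^T B)^{-1} B^T y ...
h_hat, *_ = np.linalg.lstsq(B, y, rcond=None)

# ... and the decoding metric ||(I - P_B) y||^2 from (2.8).
P_B = B @ np.linalg.inv(B.T @ B) @ B.T
metric = np.linalg.norm(y - P_B @ y) ** 2
```

Note that B ĥ_v equals P_B y by construction, so minimizing the residual over codewords is exactly the blind projection rule in (2.8).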

2.2

Gauss-Markov Model

In the simulations for quasi-static block fading channels in [10, 13], the channel coefficients h are generated independently in every block. We can extendedly assume that the sub-block channel coefficients h_i = [h_{1,i}, h_{2,i}, \ldots, h_{P,i}]^T in the non-static environment are also generated independently across sub-blocks.

A more general relationship between consecutive sub-block channel coefficients, however, is the first-order Gauss-Markov model that is usually adopted in time-varying environments [1, 5, 11, 2]. Specifically,

h_i = \alpha h_{i-1} + v_i = \alpha(\alpha h_{i-2} + v_{i-1}) + v_i = \cdots = \alpha^i h_0 + \sum_{j=1}^{i} \alpha^{j-1} v_{i-j+1},   (2.9)

where \Pr\{h_0 = 0_{P \times 1}\} = 1, and \{v_i\}_{i=1}^{q} are zero-mean complex Gaussian distributed with E[v_i v_i^H] = \sigma_{v_i}^2 I_P. The special case is \sigma_{h_1}^2 = \sigma_{v_1}^2, since h_1 equals v_1. Notably, the parameter \alpha characterizes the rate of channel variation between consecutive sub-blocks. Its value lies between zero and one, and is controlled by the Doppler spread and transmission bandwidth [11] as

\alpha = \exp(-\omega_d T_s) = \exp(-\pi B_d T_s) = \exp\!\left(-2\pi f_c \frac{v}{c} T_s\right),

where B_d = \omega_d/\pi denotes the Doppler spread, f_c is the carrier frequency, v is the velocity of the transmitter, c is the velocity of light, and T_s is the symbol period (i.e., the sub-block period in our case). For vehicle speed v = 180 km/hr, carrier frequency f_c = 900 MHz, and sub-block period 10^{-4} seconds,

\omega_d T_s = 2\pi f_c \frac{v}{c} T_s = 2\pi \times (900 \times 10^6\ \text{Hz}) \times \frac{180\ \text{km/hr}}{1.08 \times 10^9\ \text{km/hr}} \times 10^{-4}\ \text{seconds} = 0.03\pi.

This yields \alpha = \exp(-0.03\pi) = 0.910057 [5]. If we increase f_c to 2.7 GHz and 5.4 GHz, \alpha becomes \exp(-0.09\pi) = 0.753713 and \exp(-0.18\pi) = 0.568084, respectively. These \alpha-values will be used in our simulations.
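The α-values above and the recursion (2.9) can be reproduced with a short script. This is a sketch: the function names are ours, and σ_v is left as a free parameter rather than tied to the normalization used in Chapter 4:

```python
import math
import numpy as np

def alpha_from_doppler(fc_hz, v_kmh, Ts_s, c_kmh=1.08e9):
    """Channel variation factor alpha = exp(-2*pi*fc*(v/c)*Ts)."""
    return math.exp(-2 * math.pi * fc_hz * (v_kmh / c_kmh) * Ts_s)

# The three alpha-values quoted in the text (v = 180 km/hr, Ts = 1e-4 s).
alphas = [alpha_from_doppler(fc, 180, 1e-4) for fc in (900e6, 2.7e9, 5.4e9)]

def gauss_markov_coeffs(q, P, alpha, sigma_v, rng):
    """Sub-block coefficients h_i = alpha * h_{i-1} + v_i with h_0 = 0
    (eq. 2.9); v_i is zero-mean complex Gaussian, E[v_i v_i^H] = sigma_v^2 I_P."""
    h = np.zeros(P, dtype=complex)
    blocks = []
    for _ in range(q):
        v = sigma_v * (rng.normal(size=P) + 1j * rng.normal(size=P)) / math.sqrt(2)
        h = alpha * h + v
        blocks.append(h.copy())
    return blocks

blocks = gauss_markov_coeffs(q=4, P=2, alpha=alphas[0], sigma_v=1.0,
                             rng=np.random.default_rng(0))
```

Setting α = 0 recovers independently drawn sub-blocks, while α close to 1 makes the channel nearly block-static.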


Chapter 3

Code Design

In this chapter, the codeword condition under which the signal-to-noise ratio (SNR) is guaranteed maximal is derived, followed by the code construction approach proposed based on this condition. Also derived in this chapter is an error probability upper bound that will be used as the criterion to search for the best code for comparison with the constructed one.

3.1

Self-orthogonality condition for SNR-optimized codewords

A known inequality [9] for the product of two positive semi-definite Hermitian matrices A and B is

\mathrm{tr}(AB) \leq \mathrm{tr}(A)\, \lambda_{\max}(B),   (3.1)

where tr(·) represents the matrix trace operation, and \lambda_{\max}(B) is the maximal eigenvalue of B. From the system model defined in (2.4), the average SNR is given by

SNR = \frac{E[\|H_v b\|^2]}{E[\|n\|^2]}
    = \frac{E[\mathrm{tr}(h_v^H B_v^T B_v h_v)]}{L \sigma_n^2}
    = \frac{\mathrm{tr}(E[h_v h_v^H]\, B_v^T B_v)}{L \sigma_n^2}
    \leq \frac{\mathrm{tr}(E[h_v h_v^H])}{L \sigma_n^2}\, \lambda_{\max}(B_v^T B_v)
    = \frac{N}{L} \cdot \frac{\mathrm{tr}(E[h_v h_v^H])}{\sigma_n^2} \cdot \lambda_{\max}\!\left(\tfrac{1}{N} B_v^T B_v\right).

The above inequality holds with equality when (1/N) B_v^T B_v is an identity matrix [6], namely,

B_v^T B_v = N I_{qP} = \begin{bmatrix}
N & 0 & \cdots & 0 \\
0 & N & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & N
\end{bmatrix}_{qP \times qP}.   (3.2)

As a result, the maximum SNR equals

SNR_{\max} = \frac{N}{L} \cdot \frac{\mathrm{tr}(E[h_v h_v^H])}{\sigma_n^2}.   (3.3)
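For P = 2 and q = 1, condition (3.2) reads B^T B = N I_2, i.e., a (±1)-sequence whose adjacent-product sum is zero. A small numerical check (the helper is ours, real-valued):

```python
import numpy as np

def btb(b, P):
    """Compute B^T B for the q = 1 convolution matrix built from codeword b."""
    N = len(b)
    B = np.zeros((N + P - 1, P))
    for j in range(P):
        B[j:j + N, j] = b
    return B.T @ B

# A length-7 (+/-1) sequence whose adjacent products sum to zero (P = 2,
# N odd), hence meeting the self-orthogonality condition B^T B = N I exactly.
b = np.array([-1, 1, 1, -1, -1, 1, 1], dtype=float)
G = btb(b, 2)   # equals 7 * I_2
```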

3.2

Codeword Selection

The previous section established the self-orthogonality condition under which the system SNR is maximized. However, codeword sequences satisfying (3.2) may not exist for certain N, P and q. In such cases, one can only choose codewords that best approximate (3.2), for example as follows.

Case 1. For P = 2 and q = 1, the codewords can be chosen according to

B_v^T B_v = B^T B = \begin{cases}
\begin{bmatrix} N & 0 \\ 0 & N \end{bmatrix}, & \text{for } N \text{ odd}, \\[1ex]
\begin{bmatrix} N & \pm 1 \\ \pm 1 & N \end{bmatrix}, & \text{for } N \text{ even}.
\end{cases}

Case 2. For P = 2 and q > 1 with N_1 = N_2 = \cdots = N_{q-1} = Q, we observe that

B_v = B^{(1)} \oplus B^{(2)} \oplus \cdots \oplus B^{(q)},   (3.4)

where "\oplus" is the direct sum operator of two matrices.¹ Then, the codewords can be chosen according to

(B^{(1)})^T B^{(1)} = \begin{cases}
\begin{bmatrix} Q & 0 \\ 0 & Q-1 \end{bmatrix}, & \text{for } Q \text{ odd}, \\[1ex]
\begin{bmatrix} Q & \pm 1 \\ \pm 1 & Q-1 \end{bmatrix}, & \text{for } Q \text{ even},
\end{cases}

and for 1 < i < q,

(B^{(i)})^T B^{(i)} = \begin{cases}
\begin{bmatrix} Q & \pm 1 \\ \pm 1 & Q \end{bmatrix}, & \text{for } Q \text{ odd}, \\[1ex]
\begin{bmatrix} Q & 0 \\ 0 & Q \end{bmatrix}, & \text{for } Q \text{ even},
\end{cases}

and

(B^{(q)})^T B^{(q)} = \begin{cases}
\begin{bmatrix} N-(q-1)Q & \pm 1 \\ \pm 1 & N-(q-1)Q+1 \end{bmatrix}, & \text{for } [N-(q-1)Q] \text{ odd}, \\[1ex]
\begin{bmatrix} N-(q-1)Q & 0 \\ 0 & N-(q-1)Q+1 \end{bmatrix}, & \text{for } [N-(q-1)Q] \text{ even}.
\end{cases}

¹For two matrices A and B, the direct sum of A and B is defined as A \oplus B = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}.

Case 3. For P > 2 and q = 1, the codewords can be chosen according to

B_v^T B_v = B^T B = \begin{cases}
\begin{bmatrix}
N & 0 & \pm 1 & 0 & \pm 1 & \cdots \\
0 & N & 0 & \pm 1 & 0 & \cdots \\
\pm 1 & 0 & N & 0 & \pm 1 & \cdots \\
0 & \pm 1 & 0 & N & 0 & \cdots \\
\pm 1 & 0 & \pm 1 & 0 & N & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix}_{P \times P}, & \text{for } N \text{ odd}, \\[3ex]
\begin{bmatrix}
N & \pm 1 & 0 & \pm 1 & 0 & \cdots \\
\pm 1 & N & \pm 1 & 0 & \pm 1 & \cdots \\
0 & \pm 1 & N & \pm 1 & 0 & \cdots \\
\pm 1 & 0 & \pm 1 & N & \pm 1 & \cdots \\
0 & \pm 1 & 0 & \pm 1 & N & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix}_{P \times P}, & \text{for } N \text{ even}.
\end{cases}

Case 4. For P = 3 and q = 2 with N_1 = Q, the codewords can be chosen according to

(B^{(1)})^T B^{(1)} = \begin{cases}
\begin{bmatrix} Q & 0 & \pm 1 \\ 0 & Q-1 & \pm 1 \\ \pm 1 & \pm 1 & Q-2 \end{bmatrix}, & \text{for } Q \text{ odd}, \\[2ex]
\begin{bmatrix} Q & \pm 1 & 0 \\ \pm 1 & Q-1 & 0 \\ 0 & 0 & Q-2 \end{bmatrix}, & \text{for } Q \text{ even},
\end{cases}

and

(B^{(2)})^T B^{(2)} = \begin{cases}
\begin{bmatrix} N-Q & \pm 1 & \pm 1 \\ \pm 1 & N-Q+1 & 0 \\ \pm 1 & 0 & N-Q+2 \end{bmatrix}, & \text{for } (N-Q) \text{ odd}, \\[2ex]
\begin{bmatrix} N-Q & 0 & 0 \\ 0 & N-Q+1 & \pm 1 \\ 0 & \pm 1 & N-Q+2 \end{bmatrix}, & \text{for } (N-Q) \text{ even}.
\end{cases}

For convenience, the numbers of sequences that satisfy Cases 1 and 2 are listed in the lemma forms in the following.


Lemma 3.1. The number of sequences that fulfill Case 1 with b_1 = −1 is equal to

\bigl(2 - (N \bmod 2)\bigr) \binom{N-1}{\lfloor (N-1)/2 \rfloor}.   (3.5)

Proof. The sequences must satisfy

c = b_1 b_2 + b_2 b_3 + \cdots + b_{N-1} b_N,   (3.6)

where c = 0 for N odd, and c = ±1 for N even. Therefore, for N even, either exactly ⌊(N−1)/2⌋ of the terms b_1b_2, b_2b_3, \ldots, b_{N-1}b_N equal −1, or exactly ⌊(N−1)/2⌋ of them equal 1. For N odd, Case 1 is satisfied only when exactly (N−1)/2 of the terms equal −1. The lemma is then completed by noting that (b_1b_2, b_2b_3, \ldots, b_{N-1}b_N) and (b_1, b_2, \ldots, b_N) are in one-to-one correspondence given that b_1 = −1.
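Lemma 3.1 can be verified by brute-force enumeration for small N (a sketch; the function names are ours):

```python
from itertools import product
from math import comb

def count_case1(N):
    """Count (+/-1)-sequences with b_1 = -1 whose adjacent-product sum c
    equals 0 (N odd) or +/-1 (N even), by exhaustive enumeration."""
    targets = {0} if N % 2 == 1 else {-1, 1}
    count = 0
    for tail in product([-1, 1], repeat=N - 1):
        b = (-1,) + tail
        c = sum(b[i] * b[i + 1] for i in range(N - 1))
        if c in targets:
            count += 1
    return count

def lemma_3_1(N):
    """Closed form (3.5): (2 - (N mod 2)) * C(N-1, floor((N-1)/2))."""
    return (2 - (N % 2)) * comb(N - 1, (N - 1) // 2)
```

For instance, N = 7 gives 20 admissible sequences under either computation.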

Lemma 3.2. The number of sequences that fulfill Case 2 with b_1 = −1 is equal to

\bigl[2 - (Q \bmod 2)\bigr] \binom{Q-1}{\lfloor (Q-1)/2 \rfloor} \cdot \left( \bigl[1 + (Q \bmod 2)\bigr] \binom{Q}{\lfloor Q/2 \rfloor} \right)^{q-2} \cdot \bigl[1 + \bigl((N-(q-1)Q) \bmod 2\bigr)\bigr] \binom{N-(q-1)Q}{\lfloor (N-(q-1)Q)/2 \rfloor}.   (3.7)

Proof. Case 2 requires

c_1 = b_1 b_2 + b_2 b_3 + \cdots + b_{Q-1} b_Q,
c_2 = b_Q b_{Q+1} + b_{Q+1} b_{Q+2} + \cdots + b_{2Q-1} b_{2Q},
\ \vdots
c_q = b_{(q-1)Q} b_{(q-1)Q+1} + b_{(q-1)Q+1} b_{(q-1)Q+2} + \cdots + b_{N-1} b_N,   (3.8)

where c_1 = 0 for Q odd and c_1 = ±1 for Q even; for 1 < i < q, c_i = ±1 for Q odd and c_i = 0 for Q even; and c_q = ±1 for [N−(q−1)Q] odd and c_q = 0 for [N−(q−1)Q] even. By following the same reasoning as in Lemma 3.1, the numbers of sequences that satisfy the equations for c_1, \{c_i\}_{i=2}^{q-1} and c_q are respectively

\bigl[2 - (Q \bmod 2)\bigr] \binom{Q-1}{\lfloor (Q-1)/2 \rfloor},\quad \bigl[1 + (Q \bmod 2)\bigr] \binom{Q}{\lfloor Q/2 \rfloor},\quad \text{and}\quad \bigl[1 + \bigl((N-(q-1)Q) \bmod 2\bigr)\bigr] \binom{N-(q-1)Q}{\lfloor (N-(q-1)Q)/2 \rfloor}.

The numbers of sequences that fulfill Cases 3 and 4 may not have closed-form formulas, and hence they are omitted.

For clarity, the codeword selection procedure is summarized below.

Step 1. (Initialization) Let b_1 = −1, and let r_max be the total number of sequences satisfying the required B_v^T B_v. Sort the (±1)-sequences according to their lexical order, starting from the all-(−1) sequence, and denote them by b(1), b(2), b(3), \ldots, b(r_max).

Step 2. (Codeword Selection) For an (N, K) code, compute

\Delta = \left\lfloor \frac{r_{\max}}{2^K} \right\rfloor.

Then, the codewords selected are \{b(j \times \Delta)\}_{j=1}^{2^K}.
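The two-step selection procedure above can be sketched as follows (illustrative; `satisfies` stands in for whichever B_v^T B_v condition applies):

```python
from itertools import product

def select_codewords(N, K, satisfies):
    """Step 1: enumerate (+/-1)-sequences with b_1 = -1 in lexical order and
    keep those satisfying the required B^T B condition.
    Step 2: pick 2^K of them at uniform spacing Delta = floor(r_max / 2^K)."""
    pool = []
    for tail in product([-1, 1], repeat=N - 1):
        b = (-1,) + tail
        if satisfies(b):
            pool.append(b)
    r_max = len(pool)
    delta = r_max // (2 ** K)
    # Codewords b(j * Delta) for j = 1, ..., 2^K (1-based indexing).
    return [pool[j * delta - 1] for j in range(1, 2 ** K + 1)]

# Example: P = 2, q = 1, N = 7 (odd), so the Case 1 condition is a zero
# adjacent-product sum.
cond = lambda b: sum(b[i] * b[i + 1] for i in range(len(b) - 1)) == 0
codebook = select_codewords(7, 2, cond)   # a (7, 2) code with 4 codewords
```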

3.3

Decoding Criterion

In the previous section, the rule for codeword selection was introduced; only 2^K codewords are picked and the others are discarded. By assuming that the decoder knows N_1, N_2, \ldots, N_q, the optimal decision criterion in (2.7) is further explored. Notably, there are at least L^2 - \sum_{i=1}^{q} N_i^2 zero elements in P_B^{\perp}, which are of no use in the decision, and the computation for these zero elements is accordingly avoided in the equivalent optimal decoding criterion derived in this section.


In the general non-static environment, by the fact that

(A_1 \oplus A_2 \oplus \cdots \oplus A_q)(C_1 \oplus C_2 \oplus \cdots \oplus C_q) = A_1 C_1 \oplus A_2 C_2 \oplus \cdots \oplus A_q C_q

for square A_i and C_i of the same size, we have from (2.6) that

B_v^T B_v = \left[(B^{(1)})^T B^{(1)}\right] \oplus \left[(B^{(2)})^T B^{(2)}\right] \oplus \cdots \oplus \left[(B^{(q)})^T B^{(q)}\right].

Similarly, P_B is a block-diagonal matrix satisfying

P_B = P_B^{(1)} \oplus P_B^{(2)} \oplus \cdots \oplus P_B^{(q)},   (3.9)

where P_B^{(i)} = B^{(i)} \left[(B^{(i)})^T B^{(i)}\right]^{-1} (B^{(i)})^T. Putting (3.9) into (2.8) yields

\hat{b} = \arg\min_{P_B} \|y - P_B y\|^2
        = \arg\min \left\| \left[ (I_{N_1} - P_B^{(1)}) \oplus (I_{N_2} - P_B^{(2)}) \oplus \cdots \oplus (I_{N_q} - P_B^{(q)}) \right] y \right\|^2
        = \arg\min \sum_{i=1}^{q} \left\| (P_B^{(i)})^{\perp} y^{(i)} \right\|^2,   (3.10)

where (P_B^{(i)})^{\perp} \triangleq I_{N_i} - P_B^{(i)} and

y^{(i)} = \left[ y_{\sum_{j=1}^{i-1} N_j + 1},\ y_{\sum_{j=1}^{i-1} N_j + 2},\ \ldots,\ y_{\sum_{j=1}^{i-1} N_j + N_i} \right]^T

represents the i-th sub-block of y.
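The equivalence between the full projection metric in (2.8) and the sub-block sum in (3.10) can be checked numerically for a block-diagonal B_v (a toy real-valued check with random stand-in blocks):

```python
import numpy as np

def proj(B):
    """Orthogonal projector onto the column space of B."""
    return B @ np.linalg.inv(B.T @ B) @ B.T

rng = np.random.default_rng(1)
N1, N2, P = 6, 5, 2
B1 = rng.normal(size=(N1, P))      # stand-ins for B^(1), B^(2)
B2 = rng.normal(size=(N2, P))
y = rng.normal(size=N1 + N2)

# Full metric ||(I - P_B) y||^2 with block-diagonal B_v = B1 (+) B2 ...
Bv = np.block([[B1, np.zeros((N1, P))], [np.zeros((N2, P)), B2]])
full = np.linalg.norm(y - proj(Bv) @ y) ** 2

# ... equals the sum of per-sub-block metrics from (3.10).
blockwise = (np.linalg.norm(y[:N1] - proj(B1) @ y[:N1]) ** 2
             + np.linalg.norm(y[N1:] - proj(B2) @ y[N1:]) ** 2)
```

The block-wise form touches only the q dense N_i × N_i diagonal blocks, which is exactly the computational saving noted at the start of this section.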

3.4

Error Rate Evaluation

In this section, we derive the union bound for the error probability.


Let \{b_i\}_{i=1}^{2^K} denote the codewords of the code considered. Then the error probability P_e can be bounded above by

P_e = \frac{1}{2^K} \sum_{i=1}^{2^K} \Pr(\hat{b} \neq b_i \mid b_i \text{ transmitted})
    = \frac{1}{2^K} \sum_{i=1}^{2^K} \Pr\left( \exists\, j \neq i \text{ such that } \|P_{B_j}^{\perp} y\|^2 \leq \|P_{B_i}^{\perp} y\|^2 \,\middle|\, b_i \text{ transmitted} \right)
    \leq \frac{1}{2^K} \sum_{i=1}^{2^K} \sum_{j=1, j \neq i}^{2^K} \Pr\left( \|P_{B_j}^{\perp} y\|^2 \leq \|P_{B_i}^{\perp} y\|^2 \,\middle|\, b_i \text{ transmitted} \right)
    = \frac{1}{2^K} \sum_{i=1}^{2^K} \sum_{j=1, j \neq i}^{2^K} \Pr\left( y^H \left[ (P_{B_j}^{\perp})^H P_{B_j}^{\perp} - (P_{B_i}^{\perp})^H P_{B_i}^{\perp} \right] y \leq 0 \,\middle|\, b_i \text{ transmitted} \right),   (3.11)

where P_{B_i}^{\perp} corresponds to the codeword b_i.

Since P_{B_i}^{\perp} is idempotent for every b_i, and P_{B_j}^{\perp} - P_{B_i}^{\perp} = P_{B_i} - P_{B_j}, (3.11) can be reformulated as

P_e \leq \frac{1}{2^K} \sum_{i=1}^{2^K} \sum_{j=1, j \neq i}^{2^K} \Pr\left( y^H (P_{B_j}^{\perp} - P_{B_i}^{\perp}) y \leq 0 \,\middle|\, b_i \text{ transmitted} \right)
     = \frac{1}{2^K} \sum_{i=1}^{2^K} \sum_{j=1, j \neq i}^{2^K} \Pr\left( y^H (P_{B_j} - P_{B_i}) y \geq 0 \,\middle|\, b_i \text{ transmitted} \right).   (3.12)

By following a similar argument as in [10], the covariance matrix S_y(i) of the received vector y, given the transmitted codeword b_i and zero-mean complex-Gaussian distributed h, is real and symmetric, and is always positive definite for positive noise variance. We can then define G_i = S_y^{1/2}(i), and obtain

G_i (P_{B_j} - P_{B_i}) G_i = \sum_{\ell=1}^{L} \lambda_{\ell;i,j}\, q_{\ell;i,j}\, q_{\ell;i,j}^T,

where \{\lambda_{\ell;i,j}\}_{\ell=1}^{L} and \{q_{\ell;i,j}\}_{\ell=1}^{L} represent the eigenvalues and eigenvectors of G_i (P_{B_j} - P_{B_i}) G_i. As a result, given that b_i is transmitted,

y^H (P_{B_j} - P_{B_i}) y = (G_i^{-1} y)^H \left( G_i (P_{B_j} - P_{B_i}) G_i \right) (G_i^{-1} y) = \sum_{\ell=1}^{L} \lambda_{\ell;i,j} \left| q_{\ell;i,j}^T G_i^{-1} y \right|^2 = \sum_{\ell=1}^{L} \lambda_{\ell;i,j} |z_{\ell;i,j}|^2,

where \{z_{\ell;i,j}\}_{\ell=1}^{L} are independent zero-mean complex Gaussian with variance 1, and

P_e \leq \frac{1}{2^K} \sum_{i=1}^{2^K} \sum_{j=1, j \neq i}^{2^K} \Pr\left( \sum_{\ell=1}^{L} \lambda_{\ell;i,j} |z_{\ell;i,j}|^2 \geq 0 \right).   (3.13)

Without loss of generality, assume \bar{\lambda}_{1;i,j} > \bar{\lambda}_{2;i,j} > \cdots > \bar{\lambda}_{r;i,j} > 0 > \bar{\lambda}_{r+1;i,j} > \cdots > \bar{\lambda}_{L_c;i,j} are the distinct eigenvalues obtained by removing the identical ones among \{\lambda_{\ell;i,j}\}_{\ell=1}^{L}, and let their respective orders of multiplicity be \{o_{\ell;i,j}\}_{\ell=1}^{L_c}. Then |z_{\ell;i,j}|^2 is a central \chi^2-random variable with 2 o_{\ell;i,j} degrees of freedom.

By elementary probability theory, the cumulative distribution function of the random variable \sum_{\ell=1}^{L} \lambda_{\ell;i,j} |z_{\ell;i,j}|^2 is

F_{i,j}(\upsilon) = \frac{1}{2} - \frac{1}{\pi} \int_{0}^{\infty} \frac{1}{t}\, \mathrm{Im}\{\phi_{i,j}(t)\, e^{-\mathrm{i} t \upsilon}\}\, dt,   (3.14)

where Im{·} represents the imaginary part,

\phi_{i,j}(t) = \prod_{\ell=1}^{L_c} \left( 1 - 2 \mathrm{i}\, t\, \bar{\lambda}_{\ell;i,j} \right)^{-o_{\ell;i,j}},   (3.15)

and \mathrm{i} = \sqrt{-1}. Finally, by denoting d_{i,j} = \sum_{\ell=1}^{L_c} o_{\ell;i,j}, we obtain [7] that

F_{i,j}(\upsilon) = 1 - \sum_{\ell=1}^{r} \frac{1}{(o_{\ell;i,j}-1)!} \left[ \frac{\partial^{o_{\ell;i,j}-1}}{\partial x^{o_{\ell;i,j}-1}} \bar{F}_{\ell}(x, \upsilon) \right]_{x=\bar{\lambda}_{\ell;i,j}},   (3.16)

where

\bar{F}_{\ell}(x, \upsilon) = x^{d_{i,j}-1} e^{-\upsilon/(2x)} \prod_{m=1, m \neq \ell}^{r} (x - \bar{\lambda}_{m;i,j})^{-o_{m;i,j}}.   (3.17)
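The pairwise probabilities in (3.13) can also be estimated by Monte Carlo as a cross-check of the closed form (3.16). The sketch below uses a hypothetical eigenvalue profile; for one positive and one negative unit eigenvalue, the quadratic form is the difference of two i.i.d. exponentials, so the exact probability is 1/2:

```python
import numpy as np

def pep_mc(lams, mults, n_trials=200_000, seed=2):
    """Monte Carlo estimate of Pr{ sum_l lambda_l |z_l|^2 >= 0 } for
    independent unit-variance complex Gaussian z_l, as in (3.13);
    `mults` are the multiplicities o_l of the distinct eigenvalues `lams`."""
    rng = np.random.default_rng(seed)
    lams = np.repeat(np.asarray(lams, dtype=float), mults)
    z = (rng.normal(size=(n_trials, lams.size))
         + 1j * rng.normal(size=(n_trials, lams.size))) / np.sqrt(2)
    q = (np.abs(z) ** 2) @ lams          # realizations of the quadratic form
    return np.mean(q >= 0.0)

# Hypothetical profile: one positive and one negative eigenvalue.
p = pep_mc([1.0, -1.0], [1, 1])
```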


Chapter 4

Simulation Results

In this chapter, we will examine the robustness of the proposed coding scheme. Specifically, several designed codes will be simulated over quasi-static Gaussian and non-static Gauss-Markov block fading channels in order to verify that the codes designed for non-static block fading channels are robust over both channels. As a convention, the zero-mean channel coefficients are normalized as E[|h_{i,j}|^2] = 1/P for 1 \leq i \leq P and 1 \leq j \leq q, and \{h_{i,j}\}_{i=1}^{P} are assumed independent.

4.1

Codes Designed for Quasi-Static Block Fading Channels

This section summarizes the simulations over the non-static Gauss-Markov fading channels with Gaussian distributed channel coefficients. In our notation, a designed code of length N that targets transmission over a memory-order-(P − 1) non-static fading channel whose channel coefficients change every Q symbols is denoted by Code(N, P, Q). The simulated channel, whose channel coefficients change every Q symbols and whose memory order is (P − 1), is similarly denoted by Channel(P, Q). Five different channel variation factors (i.e., α-values) of the first-order Gauss-Markov fading channel will be used


in our simulations: 0, 0.568084, 0.753713, 0.910057, and 1. Notably, Channel(P, Q) reduces to the quasi-static block fading channel of memory order (P − 1) when α = 1.

The performance of Code(12, 2, 12) over Channel(2, 6) is shown in Fig. 4.1. The simulations indicate that the code designed for quasi-static fading channels performs well only over a quasi-static fading environment, namely, α = 1. As α decreases, which means that the degree of channel variation increases, the performance degrades accordingly. Similar simulations have been performed for Code(14, 2, 14), Code(16, 2, 16), Code(18, 2, 18), Code(20, 2, 20), Code(22, 2, 22), Code(24, 2, 24), Code(12, 2, 12), Code(16, 2, 16), Code(20, 2, 20), and Code(24, 2, 24), respectively over Channel(2, 7), Channel(2, 8), Channel(2, 9), Channel(2, 10), Channel(2, 11), Channel(2, 12), Channel(2, 3), Channel(2, 4), Channel(2, 5), and Channel(2, 6); the results are summarized respectively in Figs. 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 4.10, and 4.11. The same performance behaviors can be observed from these figures. As a consequence, we conclude that Code(N, P, N) performs well only over the quasi-static block fading channel (namely, α = 1), and its performance degrades considerably for moderate-to-high degrees of channel variation.
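The word-error-rate curves in this chapter come from Monte Carlo simulation; the loop can be sketched as below. This is a simplified real-valued stand-in: q = 1, coefficients redrawn independently per block, a tiny two-word codebook, and a brute-force evaluation of metric (2.8); the actual thesis simulations use the complex Gauss-Markov channel and the codebooks of Chapter 3:

```python
import numpy as np

def build_B(b, P):
    """L x P convolution matrix of codeword b (q = 1)."""
    N = len(b)
    B = np.zeros((N + P - 1, P))
    for j in range(P):
        B[j:j + N, j] = b
    return B

def wer(codebook, P, snr_db, n_blocks=1000, seed=3):
    """Word error rate of the blind ML decoder (2.8) when the channel
    coefficients are redrawn independently for each block."""
    rng = np.random.default_rng(seed)
    Bs = [build_B(np.asarray(b, dtype=float), P) for b in codebook]
    projs = [B @ np.linalg.inv(B.T @ B) @ B.T for B in Bs]
    sigma = 10 ** (-snr_db / 20)          # noise standard deviation
    errors = 0
    for _ in range(n_blocks):
        i = rng.integers(len(codebook))
        h = rng.normal(size=P) / np.sqrt(P)
        y = Bs[i] @ h + sigma * rng.normal(size=Bs[i].shape[0])
        metrics = [np.linalg.norm(y - Pb @ y) ** 2 for Pb in projs]
        errors += int(np.argmin(metrics)) != i
    return errors / n_blocks

codebook = [(-1, 1, 1, -1, -1, 1, 1), (-1, 1, -1, 1, -1, 1, -1)]
w = wer(codebook, P=2, snr_db=25)
```

Fixing b_1 = −1, as in the selection procedure of Chapter 3, avoids the sign ambiguity of blind projection decoding (b and −b span the same column space).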

Figure 4.1: The maximum-likelihood word error rates for Code(12, 2, 12) over Channel(2, 6) with different degree of channel variation factors α.

Figure 4.2: The maximum-likelihood word error rates for Code(14, 2, 14) over Channel(2, 7) with different degree of channel variation factors α.

Figure 4.3: The maximum-likelihood word error rates for Code(16, 2, 16) over Channel(2, 8) with different degree of channel variation factors α.

Figure 4.4: The maximum-likelihood word error rates for Code(18, 2, 18) over Channel(2, 9) with different degree of channel variation factors α.

Figure 4.5: The maximum-likelihood word error rates for Code(20, 2, 20) over Channel(2, 10) with different degree of channel variation factors α.

Figure 4.6: The maximum-likelihood word error rates for Code(22, 2, 22) over Channel(2, 11) with different degree of channel variation factors α.

Figure 4.7: The maximum-likelihood word error rates for Code(24, 2, 24) over Channel(2, 12) with different degree of channel variation factors α.

Figure 4.8: The maximum-likelihood word error rates for Code(12, 2, 12) over Channel(2, 3) with different degree of channel variation factors α.

Figure 4.9: The maximum-likelihood word error rates for Code(16, 2, 16) over Channel(2, 4) with different degree of channel variation factors α.

Figure 4.10: The maximum-likelihood word error rates for Code(20, 2, 20) over Channel(2, 5) with different degree of channel variation factors α.

Figure 4.11: The maximum-likelihood word error rates for Code(24, 2, 24) over Channel(2, 6) with different degree of channel variation factors α.


4.2

Codes Designed For Non-Static Fading Channels

In this section, we turn to the examination of codes designed for non-static fading channels. The performance of Code(12, 2, 6) over Channel(2, 6) is shown in Fig. 4.12. As expected, the performance remains largely intact across different values of α. We observe, however, that for SNR larger than 5 dB, the best performance is actually obtained at α = 0, contrary to what was observed in the previous section, and the performance degrades as α grows. Since the design of Code(12, 2, 6) in fact assumes an abrupt change of channel coefficients at the middle of the codeword, whereby [h1,1, h2,1] is allowed to be totally different from [h1,2, h2,2] in the code derivation, it is reasonable that the larger the degree of channel variation, the better the simulated channel fits the target channel of the code design. Yet, the performance deviation between α = 0 and α = 1 is very small, and in certain cases, such as Fig. 4.13, the performance even improves slightly with larger α.

Simulations for Code(16, 2, 8), Code(18, 2, 9), Code(20, 2, 10), Code(22, 2, 11), and Code(24, 2, 12) respectively over Channel(2, 8), Channel(2, 9), Channel(2, 10), Channel(2, 11), and Channel(2, 12) are illustrated in Figs. 4.14, 4.15, 4.16, 4.17, and 4.18, respectively. The same performance behaviors as in Fig. 4.12 can be observed from these figures.
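As a concrete illustration of this kind of word-error-rate simulation, the sketch below pairs a piecewise-constant Gauss-Markov channel with a minimum-least-squares-residual decoder, a common noncoherent surrogate for maximum-likelihood decoding when the taps are unknown. The toy ±1 codebook, the real-valued signal model, and the SNR normalization are all stand-ins for illustration, not the actual Code(N, P, Q) construction or the exact decoder of the thesis.

```python
import numpy as np

def conv_matrix(x, P):
    """X[n, p] = x[n - p] (zeros before the block), so y = X @ h + noise."""
    X = np.zeros((len(x), P))
    for p in range(P):
        X[p:, p] = x[:len(x) - p]
    return X

def ls_metric(y, x, P, Q):
    """Least-squares residual of y against codeword x, fitting the P taps
    independently on every Q-symbol sub-block (unknown-channel surrogate)."""
    X = conv_matrix(x, P)
    cost = 0.0
    for s in range(0, len(x), Q):
        Xs, ys = X[s:s + Q], y[s:s + Q]
        h_hat = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        cost += float(np.sum((ys - Xs @ h_hat) ** 2))
    return cost

def wer(codebook, P, Q, alpha, snr_db, trials, rng):
    """Monte-Carlo word error rate of the minimum-residual decoder."""
    sigma = 10.0 ** (-snr_db / 20.0)            # nominal SNR mapping (assumed)
    errors = 0
    for _ in range(trials):
        i = int(rng.integers(len(codebook)))
        x = codebook[i]
        X = conv_matrix(x, P)
        # Gauss-Markov taps, constant within each Q-symbol sub-block.
        h = rng.standard_normal(P)
        y = np.empty(len(x))
        for s in range(0, len(x), Q):
            y[s:s + Q] = X[s:s + Q] @ h
            h = alpha * h + np.sqrt(1.0 - alpha ** 2) * rng.standard_normal(P)
        y += sigma * rng.standard_normal(len(x))
        if min(range(len(codebook)), key=lambda j: ls_metric(y, codebook[j], P, Q)) != i:
            errors += 1
    return errors / trials

rng = np.random.default_rng(0)
# Toy codebook standing in for Code(12, 2, 6); the real codewords come from
# the rule-constructed self-orthogonal design of [13] and are not listed here.
codebook = [
    np.ones(12),
    np.tile([1.0, -1.0], 6),
    np.r_[np.ones(6), -np.ones(6)],
    np.tile([1.0, 1.0, -1.0, -1.0], 3),
]
rate = wer(codebook, P=2, Q=6, alpha=0.0, snr_db=12.0, trials=200, rng=rng)
assert 0.0 <= rate <= 1.0
```

Sweeping α over the five values used in the chapter and plotting the resulting rates against SNR reproduces the shape of the curves in Figs. 4.12-4.18, up to the substituted codebook and metric.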

Figure 4.12: The maximum-likelihood word error rates for Code(12, 2, 6) over Channel(2, 6) with different degree of channel variation factors α.

Figure 4.13: The maximum-likelihood word error rates for Code(14, 2, 7) over Channel(2, 7) with different degree of channel variation factors α.

Figure 4.14: The maximum-likelihood word error rates for Code(16, 2, 8) over Channel(2, 8) with different degree of channel variation factors α.

Figure 4.15: The maximum-likelihood word error rates for Code(18, 2, 9) over Channel(2, 9) with different degree of channel variation factors α.

Figure 4.16: The maximum-likelihood word error rates for Code(20, 2, 10) over Channel(2, 10) with different degree of channel variation factors α.

Figure 4.17: The maximum-likelihood word error rates for Code(22, 2, 11) over Channel(2, 11) with different degree of channel variation factors α.

Figure 4.18: The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, 12) with different degree of channel variation factors α.

Figure 4.19: The maximum-likelihood word error rates for Code(12, 2, 6) over Channel(2, 3) with different degree of channel variation factors α.

Next, we examine the situation where the update rate of the channel coefficients is twice that considered in the code design.

The performance of Code(12, 2, 6) over Channel(2, 3) is shown in Fig. 4.19. The simulations indicate that the code designed for Channel(2, 6) performs well over Channel(2, 3) only when α = 1, in which case the channel is equivalent to the code-target Channel(2, 6). This is analogous to what we have observed in Section 4.1. Similar simulations have been performed for Code(16, 2, 8), Code(20, 2, 10), and Code(24, 2, 12) respectively over Channel(2, 4), Channel(2, 5), and Channel(2, 6), and are summarized in Figs. 4.20, 4.21, and 4.22, respectively. The same performance behaviors can be observed from these figures.

Figure 4.20: The maximum-likelihood word error rates for Code(16, 2, 8) over Channel(2, 4) with different degree of channel variation factors α.

Figure 4.21: The maximum-likelihood word error rates for Code(20, 2, 10) over Channel(2, 5) with different degree of channel variation factors α.

Figure 4.22: The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, 6) with different degree of channel variation factors α.


Now, we demonstrate the performance of Code(12, 2, 3) over Channel(2, 3) in Fig. 4.23. Again, when the update rate of the channel coefficients fits that of the code-target channel, the performance remains intact with respect to different values of α. Simulations for Code(16, 2, 4), Code(20, 2, 5), and Code(24, 2, 6) over Channel(2, 4), Channel(2, 5), and Channel(2, 6) are illustrated in Figs. 4.24, 4.25, and 4.26, respectively. We observe from these figures that the performance of these codes is best at α = 0, since the resultant simulated channel then fits the code-target channel best.


Figure 4.23: The maximum-likelihood word error rates for Code(12, 2, 3) over Channel(2, 3) with different degree of channel variation factors α.

Figure 4.24: The maximum-likelihood word error rates for Code(16, 2, 4) over Channel(2, 4) with different degree of channel variation factors α.

Figure 4.25: The maximum-likelihood word error rates for Code(20, 2, 5) over Channel(2, 5) with different degree of channel variation factors α.

Figure 4.26: The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, 6) with different degree of channel variation factors α.

Figure 4.27: The maximum-likelihood word error rates for Code(12, 2, 3) over Channel(2, 6) with different degree of channel variation factors α.

Next, we simulate the case in which the update rate of the channel coefficients is slower than that of the code-target channel. Figure 4.27 illustrates the performance of Code(12, 2, 3) over Channel(2, 6). The simulation result is almost the same as that in Fig. 4.23, which indicates the robustness of the code design over simulated channels with slower coefficient changes. Simulations for Code(16, 2, 4), Code(20, 2, 5), and Code(24, 2, 6) respectively over Channel(2, 8), Channel(2, 10), and Channel(2, 12) are then summarized in Figs. 4.28, 4.29, and 4.30, respectively.

Figure 4.28: The maximum-likelihood word error rates for Code(16, 2, 4) over Channel(2, 8) with different degree of channel variation factors α.

Figure 4.29: The maximum-likelihood word error rates for Code(20, 2, 5) over Channel(2, 10) with different degree of channel variation factors α.

Figure 4.30: The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, 12) with different degree of channel variation factors α.


Finally, we simulate the performance of the designed codes with block length N = 24 over different sub-block lengths Q. The performance of Code(24, 2, 12) over Channel(2, Q) with different sub-block lengths Q and channel variation factor α = 0 is illustrated in Fig. 4.31. Simulation results indicate that the code designed for Channel(2, 12) performs well only over Channel(2, 12) and Channel(2, 24) (i.e., the quasi-static channel). As Q differs from 12 or 24, the performance degrades considerably. Similar simulations for Code(24, 2, 12) over Channel(2, Q) with different sub-block lengths Q and three channel variation factors α = 0.568084, α = 0.753713, and α = 0.910057 are illustrated in Figs. 4.32, 4.33, and 4.34, respectively. These simulation results show that as α increases, the performance of Code(24, 2, 12) over Channel(2, Q) with Q ≠ 24 tends to be closer to that over Channel(2, 24).


Figure 4.31: The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, Q) with channel variation factor α = 0 and different values of Q.

Figure 4.32: The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, Q) with channel variation factor α = 0.568084 and different values of Q.

Figure 4.33: The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, Q) with channel variation factor α = 0.753713 and different values of Q.

Figure 4.34: The maximum-likelihood word error rates for Code(24, 2, 12) over Channel(2, Q) with channel variation factor α = 0.910057 and different values of Q.


The performance of Code(24, 2, 6) over Channel(2, Q) with different sub-block lengths Q and channel variation factor α = 0 is shown in Fig. 4.35. Simulation results indicate that the code designed for Channel(2, 6) performs well over Channel(2, 6), Channel(2, 12), Channel(2, 18), and Channel(2, 24). The performance degrades when Q differs from 6, 12, 18, or 24. Similar simulations for Code(24, 2, 6) over Channel(2, Q) with different sub-block lengths Q and three channel variation factors α = 0.568084, α = 0.753713, and α = 0.910057 are illustrated in Figs. 4.36, 4.37, and 4.38. From these simulation results, we find that the performance remains good regardless of the value of α whenever Q is a multiple of the target sub-block length of the code design.
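The "Q a multiple of the design sub-block length" observation has a simple combinatorial reading: every tap change then falls on a design sub-block boundary, so each 6-symbol design sub-block still sees a single tap vector, exactly as the design assumes. A small check of this boundary-alignment condition (the helper names are hypothetical):

```python
def change_points(N, Q_sim):
    """Symbol indices at which Channel(P, Q_sim) may redraw its taps."""
    return set(range(Q_sim, N, Q_sim))

def design_blocks_intact(N, Q_design, Q_sim):
    """True iff no tap change falls strictly inside a Q_design sub-block,
    i.e. every change point is also a design sub-block boundary."""
    return all(p % Q_design == 0 for p in change_points(N, Q_sim))

# Code(24, 2, 6): design sub-blocks of 6 symbols.
assert design_blocks_intact(24, 6, 6)       # Q = 6: matches the design
assert design_blocks_intact(24, 6, 12)      # Q = 12: change at 12 only
assert design_blocks_intact(24, 6, 18)      # Q = 18: change at 18 only
assert design_blocks_intact(24, 6, 24)      # quasi-static: no change at all
assert not design_blocks_intact(24, 6, 8)   # Q = 8: change at 8 splits a sub-block
assert not design_blocks_intact(24, 6, 10)  # Q = 10: change at 10 splits a sub-block
```

The four Q-values that pass are exactly those for which Fig. 4.35 reports good performance, and the failing ones are those with degraded performance.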


Figure 4.35: The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, Q) with channel variation factor α = 0 and different values of Q.

Figure 4.36: The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, Q) with channel variation factor α = 0.568084 and different values of Q.

Figure 4.37: The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, Q) with channel variation factor α = 0.753713 and different values of Q.

Figure 4.38: The maximum-likelihood word error rates for Code(24, 2, 6) over Channel(2, Q) with channel variation factor α = 0.910057 and different values of Q.


Chapter 5

Conclusions

In this work, a binary block code design for combined channel estimation and error protection, extended from [13] specifically for non-static fading channels, is proposed and examined. Simulations suggest that as long as the update rate of the channel coefficients is equal to or slower than that of the code-target channel, the performance remains robust. However, when the channel coefficients change faster than those of the channel presumed in the code design, the performance degrades considerably. A future work is to examine whether the proposed code remains robust over non-stationary fading channels in which the channel coefficients change in a non-stationary, non-periodic fashion.


References

[1] I. Abou-Faycal, M. Medard, and U. Madhow, “Binary adaptive coded pilot symbol assisted modulation over Rayleigh fading channels without feedback,” IEEE Trans. Commun., vol. 53, no. 6, pp. 1036-1046, June 2005.

[2] S. Akin and M. C. Gursoy, “Training optimization for Gauss-Markov Rayleigh fading channel,” IEEE International Conference on Communications (ICC’07), pp. 5999-6004, June 2007.

[3] J. Giese and M. Skoglund, “Single and Multiple-Antenna Constellations for Communication over Unknown Frequency-Selective Fading Channels,” IEEE Trans. Inform. Theory, vol. 53, no. 4, pp. 1584-1594, April 2007.

[4] F. A. Graybill, Theory and Application of the Linear Model, Duxbury Press, North Scituate, Mass., 1976.

[5] R. Haeb, “A comparison of coherent and differentially coherent detection schemes for fading channels,” Vehicular Technology Conference, 1988 IEEE 38th, pp. 364-370, June 1988.

[6] D. Harville, Matrix Algebra From a Statistician’s Perspective, 1st edition, Springer.

[7] J. P. Imhof, “Computing the Distribution of Quadratic Forms in Normal Variables,” Biometrika, vol. 48, no. 3/4, pp. 419-426, 1961.

[8] M. Medard, I. Abou-Faycal, and U. Madhow, “Adaptive coding with pilot signals,” in Proc. 38th Annual Allerton Conf. on Communication, Control and Computing, October 2000.

[9] R. Patel and M. Toda, “Trace inequalities involving Hermitian matrices,” Linear Algebra and Its Applications, vol. 23, pp. 13-20, 1979.

[10] M. Skoglund, J. Giese, and S. Parkvall, “Code design for combined channel estimation and error protection,” IEEE Trans. Inform. Theory, vol. 48, no. 5, pp. 1162-1171, May 2002.

[11] M. Stojanovic, J. G. Proakis, and J. A. Catipovic, “Analysis of the impact of channel estimation errors on the performance of a decision-feedback equalizer in fading multipath channels,” IEEE Trans. Commun., vol. 43, no. 2/3/4, pp. 877-886, February/March/April 1995.

[12] J. K. Tugnait, L. Tong, and Z. Ding, “Single-user channel estimation and equalization,” IEEE Signal Processing Magazine, vol. 17, pp. 16-28, May 2000.

[13] C.-L. Wu, P.-N. Chen, and Y. S. Han, “Maximum-likelihood priority-first search decodable codes for combined channel estimation and error protection,” submitted to IEEE.