

2.3 Existing Techniques for OFDM Systems over Time-Varying Channels

Mobile environments introduce time-selective channels. From the analysis in Section 2.2, we show that the traditional OFDM system is sensitive to time-varying channels. Some techniques have been proposed to mitigate the ICI. We will first introduce the ICI self-cancellation scheme [1], [2], which was originally used for compensating frequency errors and is also valid over time-varying channels. Then we will show the frequency domain equalizer technique for compensating the ICI.

2.3.1 Intercarrier Interference Self-Cancellation Scheme

This method is proposed in [1], [2]. The ICI self-cancellation scheme is a simple way of suppressing ICI in OFDM systems: it modulates one data symbol onto adjacent pairs of subcarriers rather than onto single subcarriers. In this way, the ICI generated by these adjacent subcarriers can be "self-cancelled" by each other. This scheme is also called polynomial cancellation coding (PCC). We will first analyze the ICI with a single frequency offset as in [1], [2], and then show why the ICI self-cancellation works.

The system architecture of the ICI self-cancellation scheme is shown in Figures 2.5-2.6. The only difference between the OFDM system with the ICI self-cancellation scheme and the conventional OFDM system is the ICI self-canceling modulation/demodulation block. We will show how this block works.


Figure 2.5: Transmitter architecture of the ICI self-cancellation scheme

Figure 2.6: Receiver architecture of the ICI self-cancellation scheme

Let the n-th OFDM transmission data be $s_k$, $k = 0, 1, \ldots, N-1$, so that the transmitted baseband signal is

$$x(t) = \frac{1}{N}\sum_{k=0}^{N-1} s_k\, e^{j2\pi kt/T}, \quad 0 \le t < T, \qquad (2.20)$$

where $T$ is the OFDM symbol duration. In the receiver, we ignore the noise terms and assume that the signal is mixed with a local oscillator which has frequency mismatch $\Delta f$ with the transmitter oscillator. Then the demodulated signal in the receiver before FFT can be written as

$$r(t) = x(t)\, e^{j2\pi\Delta f t} = \frac{1}{N}\sum_{k=0}^{N-1} s_k\, e^{j2\pi(k/T + \Delta f)t}, \qquad (2.21)$$

and we assume this signal is sampled at the optimum timing. After the sampling, the signal can be rewritten as

$$r(m) = \frac{1}{N}\sum_{k=0}^{N-1} s_k\, e^{j2\pi m(k+\varepsilon)/N}, \quad m = 0, 1, \ldots, N-1, \qquad (2.22)$$

where $\varepsilon = \Delta f\, T$ is the normalized frequency offset. The sampled signal after DFT is given by

$$y(l) = \sum_{m=0}^{N-1} r(m)\, e^{-j2\pi ml/N}, \quad l = 0, 1, \ldots, N-1. \qquad (2.23)$$

By Equations (2.22) and (2.23), we have the received signal after sampling and DFT,

$$y(l) = \sum_{k=0}^{N-1} s_k\, \frac{1}{N}\sum_{m=0}^{N-1} e^{j2\pi m(k-l+\varepsilon)/N}. \qquad (2.24)$$

The analysis of the ICI terms can be done by defining the complex weighting $S(k-l)$ such that

$$y(l) = \sum_{k=0}^{N-1} s_k\, S(k-l). \qquad (2.25)$$

Comparing Equation (2.24) with (2.25), we have the complex weighting written as

$$S(k-l) = \frac{\sin\big(\pi(k-l+\varepsilon)\big)}{N\sin\big(\pi(k-l+\varepsilon)/N\big)}\, \exp\!\Big(j\pi\Big(1-\frac{1}{N}\Big)(k-l+\varepsilon)\Big). \qquad (2.26)$$

By the aforementioned equation, we show the complex weighting values in Figure 2.7.

The figure shows a smooth curve; this will be the key point of why the ICI self-cancellation works.

Zhao and Häggman have proposed a method to mitigate ICI called the ICI self-cancellation scheme. This scheme maps the data onto adjacent subcarriers with different signs, such as $s_0 = -s_1,\ s_2 = -s_3,\ \ldots,\ s_{N-2} = -s_{N-1}$. The difference between adjacent ICI coefficients is small, and the adjacent subcarriers map to the same data with different signs, so the ICI produced by adjacent subcarriers will be self-canceled by each other. The received signal after FFT can be written as

$$y(d) = \frac{1}{2}\big(y(2d) - y(2d+1)\big), \qquad (2.27)$$

where $y(d)$ is the symbol that will be demodulated to obtain the information bits. In this way, the overall system SNR increases by a factor of 2, due to the coherent addition.
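To make the cancellation concrete, the following minimal sketch (example parameter values, not from the thesis) evaluates the weighting $S(k-l)$ of Equation (2.26) and compares the ICI contribution of an adjacent subcarrier pair with and without the PCC mapping.

```python
import numpy as np

N = 64     # number of subcarriers (example value)
eps = 0.1  # normalized frequency offset, eps = delta_f * T (example value)

def S(m, N, eps):
    """Complex ICI weighting S(m) of Equation (2.26), m = k - l."""
    x = m + eps
    return (np.sin(np.pi * x) / (N * np.sin(np.pi * x / N))
            * np.exp(1j * np.pi * (1.0 - 1.0 / N) * x))

# ICI leaking from the adjacent pair (k, k+1) into subcarrier l = 0:
# with the PCC mapping s_{k+1} = -s_k the two contributions nearly cancel,
# because S(.) varies smoothly between neighboring offsets.
k = 2
plain = abs(S(k, N, eps)) + abs(S(k + 1, N, eps))  # worst-case magnitude sum
pcc = abs(S(k, N, eps) - S(k + 1, N, eps))         # self-cancelled pair
print("without PCC ~", plain, " with PCC ~", pcc)
```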

Figure 2.7: ICI coefficients of different subcarriers

A disadvantage of this system is that its bandwidth efficiency is only half of that of the conventional OFDM system. There is some newer research on improving the bandwidth efficiency, such as [13]. In the above description we only show that the ICI self-cancellation scheme can self-cancel the ICI produced by a frequency error due to mismatch between the transmitter and receiver oscillators. In real mobile environments, the received signal on each subcarrier can be seen as a linear combination of signals received via different paths with different Doppler shifts. So this scheme can also be used in practical mobile environments that have significant Doppler spread.

2.3.2 Frequency Domain Equalizer Scheme

The frequency domain equalizer is a method frequently used for compensating the ICI of OFDM systems in mobile environments. The transmitter architecture is the same as in the conventional OFDM system; in addition, a frequency domain equalizer after the FFT is added in the receiver, as shown in Figure 2.8. The signals of OFDM systems in the receiver before the FFT are usually called the "time domain signals", and those after the FFT are called the "frequency domain signals". By this definition, the frequency domain equalizer equalizes the received signal after the FFT. Many equalizer techniques with different performance and complexity can be used here, such as the block minimum mean square error (MMSE) equalizer, the block zero-forcing (ZF) equalizer, the FIR equalizer, etc. The weights of the block ZF and MMSE equalizers are shown below:

$$\mathbf{W}_{ZF} = (\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H, \qquad \mathbf{W}_{MMSE} = \Big(\mathbf{H}^H\mathbf{H} + \frac{\sigma_\eta^2}{\sigma_s^2}\,\mathbf{I}\Big)^{-1}\mathbf{H}^H, \qquad (2.28)$$

where W is the equalizer weighting, H is the channel matrix in the time domain, $\sigma_\eta^2$ is the noise power, and $\sigma_s^2$ is the signal power. Some low-complexity equalizers have already been proposed, as in [3], [4], [5], [6], [7]. This thesis also proposes a low-complexity technique based on the block MMSE equalizer, which will be shown in Chapter 4.
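As an illustration (not thesis code), the block weights of Equation (2.28) can be formed directly for a small example channel; the matrix size and the signal and noise powers below are example values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                       # small example block size
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
sigma_s2, sigma_eta2 = 1.0, 0.1             # example signal and noise powers

W_zf = np.linalg.inv(H.conj().T @ H) @ H.conj().T
W_mmse = np.linalg.inv(H.conj().T @ H
                       + (sigma_eta2 / sigma_s2) * np.eye(N)) @ H.conj().T

s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
eta = np.sqrt(sigma_eta2 / 2) * (rng.standard_normal(N)
                                 + 1j * rng.standard_normal(N))
r = H @ s + eta
print(np.linalg.norm(W_zf @ r - s), np.linalg.norm(W_mmse @ r - s))
```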

Figure 2.8: OFDM receiver architecture with frequency domain equalizer

2.4 Summary

In this chapter, we first introduce the traditional OFDM system, and show the challenges to the OFDM system over time-varying channels. In particular, the ICI is the major subject of interest. We also introduce some existing techniques for solving this problem, such as the ICI self-cancellation schemes and the frequency domain equalizer schemes. Finally, modified OFDM systems for time-varying channels are presented.


Chapter 3

Introduction to Conjugate Gradient (CG) Algorithm

We will first show the basic concept of projection and then introduce the Krylov subspace and some Krylov subspace methods that are the predecessors of the conjugate gradient method. The evolution from the basic Krylov method to the conjugate gradient method is shown in Section 3.4. The Krylov subspace method is currently the most important class of iterative techniques for solving large linear systems, and the CG algorithm is a mature algorithm in this area.

3.1 Projection Methods

We will first show the general projection theory [9], [10] and then show that the projection can minimize the error between the real solution and the approximate solution obtained by the projection methods.

3.1.1 General Projection

Most of the existing practical iterative techniques for solving linear systems utilize a projection method. A projection method can be seen as a scheme of extracting an approximate solution of a linear system from a subspace. We call this subspace the search subspace or the candidate approximants, denoted by $K$. Assume that it has dimension $m$. In general, $m$ constraints should be imposed to extract the approximate solution, and a typical choice of these constraints is $m$ independent orthogonality conditions. We define a subspace $L$, which is called the subspace of constraints or the left subspace. There are two kinds of projection methods, orthogonal and oblique. An orthogonal projection means that the subspace $L$ is the same as $K$. An oblique projection means that the subspace $L$ is different from $K$; the two subspaces can have some relationship or be totally uncorrelated.

We now show the mathematical approach of the projection technique. A projection technique onto the subspace $K$ obtains an approximate solution $\hat{x}$ by

$$\text{Search } \hat{x} \in K \text{ such that } b - A\hat{x} \perp L. \qquad (3.1)$$

Let $x_0$ be an initial guess of the solution and $r_0 = b - A x_0$ the initial residual. Searching the correction in the subspace $K$, the approximate solution in Equation (3.1) can be written as

$$\hat{x} = x_0 + \delta, \quad \delta \in K, \qquad (3.2)$$

with the orthogonality condition

$$(r_0 - A\delta,\ w) = 0, \quad \forall w \in L. \qquad (3.3)$$

Let $P = [p_1, \ldots, p_m]$ be a basis of $K$ and $Q = [q_1, \ldots, q_m]$ be a basis of $L$. Then $\delta$ can be represented as

$$\delta = P y, \qquad (3.4)$$

and Equation (3.3) becomes

$$Q^T A P\, y = Q^T r_0. \qquad (3.5)$$

By Equations (3.4) and (3.5), we have the projection method based on Equation (3.2) in the matrix form, which is

$$\hat{x} = x_0 + P\,(Q^T A P)^{-1} Q^T r_0. \qquad (3.6)$$
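A small numerical sketch of Equation (3.6) may make this concrete; the choice $L = AK$ below (a residual-minimizing oblique projection, see Section 3.1.2) and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10, 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x0 = np.zeros(n)
r0 = b - A @ x0

P = rng.standard_normal((n, m))  # columns span the search subspace K
Q = A @ P                        # L = A*K: an oblique, residual-minimizing choice

y = np.linalg.solve(Q.T @ A @ P, Q.T @ r0)   # small m x m projected system
x_hat = x0 + P @ y                           # Equation (3.6)
print(np.linalg.norm(b - A @ x_hat), "<=", np.linalg.norm(r0))
```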

3.1.2 Property of the Projection Method

We will show that the orthogonal projection solution can minimize the error between the desired solution and the approximate solution, as in [9]. Let $P$ be the orthogonal projector onto a subspace $K$, $x$ the desired vector, and $y$ an arbitrary vector in the subspace $K$. Because of the orthogonality between $x - Px$ and $Px - y$, we have

$$\|x - y\|_2^2 = \|x - Px\|_2^2 + \|Px - y\|_2^2 \ \ge\ \|x - Px\|_2^2, \qquad (3.7)$$

so $y = Px$ minimizes the 2-norm error over $K$.

If $A$ is a symmetric and positive definite matrix, we can derive the similar result that the orthogonal projection can minimize the A-norm error between $x$ and $y$. Using the A-inner product $(u, v)_A = (Au, v)$ in the above argument, the minimizer $y$ satisfies

$$\big(A(x - y),\ q\big) = 0, \quad \forall q \in K. \qquad (3.9)$$

Since $Ax = b$, Equation (3.9) can be rewritten as

$$(b - Ay,\ q) = 0, \quad \forall q \in K. \qquad (3.10)$$

This is called the Galerkin condition, which defines an orthogonal projection [9].

Let $A$ be an arbitrary matrix, and $L = AK$. The oblique projection onto $K$ and orthogonal to $L$ will minimize the 2-norm of the residual vector $b - Ay$. The derivation is similar to that of the orthogonal projection. Then we have

$$(b - Ay,\ w) = 0, \quad \forall w \in L = AK. \qquad (3.11)$$

This is called the Petrov-Galerkin condition, which defines an oblique projection [9].

3.2 Krylov Subspace

The Krylov subspace is a subspace of the form [8], [9], [10]

$$\mathcal{K}_m(A, r) = \mathrm{span}\{r,\ Ar,\ A^2 r,\ \ldots,\ A^{m-1} r\}. \qquad (3.12)$$

By this definition, we know that $\mathcal{K}_m(A, r)$ is the subspace of all vectors that can be written as $p(A)\,r$ with $p$ a polynomial of degree not exceeding $m-1$, built on the residual $r = b - A\hat{x}$. We will show that the iterative methods are located in the Krylov subspace. Solving $Ax = b$, we may first solve a simplified approximate system $T x_0 = b$, where $T$ is a simple approximation of $A$; this $x_0$ is an approximate solution for $x$. We may correct the approximation $x_0$ with $\delta_0$, so that

$$A(x_0 + \delta_0) = b. \qquad (3.13)$$

This can be seen as solving

$$A\delta_0 = b - A x_0 = r_0. \qquad (3.14)$$

We may solve Equation (3.14) by a simplified approximate system

$$T\delta_0 = r_0. \qquad (3.15)$$

By setting $T = I$, Equation (3.15) gives

$$\delta_0 = r_0, \qquad (3.16)$$

so the corrected approximate solution is

$$x_1 = x_0 + r_0. \qquad (3.17)$$

Multiplying Equation (3.17) by $A$ and subtracting from $b$, we have the new residual

$$r_1 = b - A x_1 = (I - A)\, r_0. \qquad (3.18)$$

Equation (3.18) shows that the residual of the system depends on how well the polynomial $p(A) = I - A$ damps the initial error. We may then correct the approximate solution with the same process with respect to $x_1$. Therefore, we have

$$x_{i+1} = x_i + r_i, \qquad (3.19)$$

$$r_{i+1} = (I - A)\, r_i, \qquad (3.20)$$

and after $i$ iterations the residual becomes

$$r_i = (I - A)^i\, r_0 = p_i(A)\, r_0. \qquad (3.21)$$

By Equation (3.19), the $i$-th approximate solution $x_i$ can be expressed as

$$x_i = x_0 + r_0 + (I - A)\,r_0 + \cdots + (I - A)^{i-1}\,r_0 = x_0 + \sum_{k=0}^{i-1} (I - A)^k\, r_0, \qquad (3.22)$$

which shows that $x_i - x_0$ lies in the Krylov subspace $\mathcal{K}_i(A, r_0)$.

The aforementioned discussion shows that the iterative methods are located in the Krylov subspace. By choosing different constraint subspaces $L$ (see Section 3.1), different projection methods can be obtained, such as orthogonal or oblique projections, and different kinds of iterative techniques have been derived. They have different convergence rates. One should choose the best iterative method on a case-by-case basis. Usually, the characteristic of $A$ plays an important role in choosing the appropriate iterative method. Choosing an appropriate method can bring a significant improvement in the convergence rate and the complexity.

3.3 Krylov Subspace Methods

There are many kinds of Krylov subspace methods; we focus on the predecessors of the CG method and the CG method itself. We will show the evolution from the basic projection and the Arnoldi's method, and then derive other simplified methods: the symmetric Lanczos algorithm and the CG algorithm.

3.3.1 Arnoldi's Algorithm

Arnoldi's algorithm is a basic orthogonal projection method. This scheme was first introduced in 1951 by Arnoldi. It is a method that builds an orthogonal basis of the Krylov subspace and finds an approximate solution on the Krylov subspace by orthogonal projection. The basic Arnoldi's algorithm can be found in [9].

Algorithm 3.1 Arnoldi's Algorithm
1. Choose a vector $p_1$ with $\|p_1\|_2 = 1$
2. For $j = 1, 2, \ldots, m$:
       $h_{ij} = (A p_j,\ p_i)$, for $i = 1, \ldots, j$
       $w_j = A p_j - \sum_{i=1}^{j} h_{ij}\, p_i$
       $h_{j+1,j} = \|w_j\|_2$; if $h_{j+1,j} = 0$, stop
       $p_{j+1} = w_j / h_{j+1,j}$

The above process builds an orthogonal basis of the Krylov subspace by a Gram-Schmidt process. The above algorithm can be rewritten as

$$A p_j = \sum_{i=1}^{j+1} h_{ij}\, p_i, \quad j = 1, \ldots, m. \qquad (3.23)$$

We can rewrite Equation (3.23) in the matrix form as

$$A P_m = P_m H_m + w_m e_m^T, \qquad (3.24)$$

where $P_m = [p_1, \ldots, p_m]$, $e_m$ is the $m$-th column of the identity matrix, and

$$H_m = \begin{bmatrix} h_{11} & h_{12} & h_{13} & \cdots & h_{1m} \\ h_{21} & h_{22} & h_{23} & \cdots & h_{2m} \\ & h_{32} & h_{33} & \cdots & h_{3m} \\ & & \ddots & \ddots & \vdots \\ & & & h_{m,m-1} & h_{mm} \end{bmatrix} \qquad (3.25)$$

is the Hessenberg matrix obtained by deleting the last row in the $(m+1)\times m$ array of coefficients $h_{ij}$ generated by the algorithm.

The above process produces an orthogonal basis of the Krylov subspace. By Equations (3.4) and (3.5), orthogonal projection means that the subspace $L$ is the same as $K$. We have

$$P_m^T A P_m = H_m, \qquad (3.26)$$

$$P_m^T r_0 = \|r_0\|_2\, e_1. \qquad (3.27)$$

Combining Equations (3.26) and (3.27), the projected system is

$$H_m\, y_m = \|r_0\|_2\, e_1, \qquad (3.28)$$

and we have the equation for the orthogonal projection onto the Krylov subspace as

$$x_m = x_0 + P_m\, y_m = x_0 + P_m H_m^{-1}\big(\|r_0\|_2\, e_1\big). \qquad (3.29)$$
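For concreteness, here is a sketch of the Arnoldi process combined with the solution step of Equations (3.28)-(3.29); the sizes and the test matrix are arbitrary examples, not from the thesis.

```python
import numpy as np

def fom(A, b, x0, m):
    """Arnoldi basis (Algorithm 3.1) + FOM solve (Equations (3.28)-(3.29))."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    P = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    P[:, 0] = r0 / beta
    for j in range(m):
        w = A @ P[:, j]
        for i in range(j + 1):               # Gram-Schmidt orthogonalization
            H[i, j] = w @ P[:, i]
            w = w - H[i, j] * P[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # lucky breakdown: exact solution
            m = j + 1
            break
        P[:, j + 1] = w / H[j + 1, j]
    y = np.linalg.solve(H[:m, :m], beta * np.eye(m)[:, 0])   # Eq. (3.28)
    return x0 + P[:, :m] @ y                                 # Eq. (3.29)

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned example
b = rng.standard_normal(n)
print(np.linalg.norm(b - A @ fom(A, b, np.zeros(n), m=20)))
```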

3.3.2 Krylov Subspace Methods Based on Arnoldi's Algorithm

A method that searches the orthogonal basis of the Krylov subspace by Arnoldi's algorithm and finds the approximate solution by Equation (3.28) is called the full orthogonalization method (FOM). There are some modified methods that have lower complexity than the FOM method. The restarted FOM restarts the Arnoldi's algorithm periodically. The incomplete orthogonalization method (IOM) truncates the orthogonalization of the bases generated by the original Arnoldi's algorithm: the new basis vector is made orthogonal only to several basis vectors that have already been found.

Algorithm 3.2 IOM Algorithm
1. Choose a vector $p_1$ with $\|p_1\|_2 = 1$
2. For $j = 1, 2, \ldots, m$:
       $h_{ij} = (A p_j,\ p_i)$, for $i = \max(1,\ j-t+1), \ldots, j$
       $w_j = A p_j - \sum_{i=\max(1,\,j-t+1)}^{j} h_{ij}\, p_i$
       $h_{j+1,j} = \|w_j\|_2$
       $p_{j+1} = w_j / h_{j+1,j}$

The direct incomplete orthogonalization method (DIOM), derived from IOM, is a progressive method for solving the approximate solution. Based on the above algorithm, the Hessenberg matrix $H_m$ in Equation (3.24) will be a band matrix with upper bandwidth equal to $t-1$ and lower bandwidth equal to 1, which can be shown as

$$H_m = \begin{bmatrix} h_{11} & \cdots & h_{1t} & & \\ h_{21} & h_{22} & & \ddots & \\ & \ddots & \ddots & & h_{m-t+1,m} \\ & & \ddots & \ddots & \vdots \\ & & & h_{m,m-1} & h_{mm} \end{bmatrix}.$$

Take the LU factorization of this matrix,

$$H_m = L_m U_m. \qquad (3.30)$$

Because $H_m$ is a Hessenberg band matrix with total bandwidth equal to $t+1$, its LU factorization has the form in which the lower triangular matrix $L_m$ is a unit lower triangular matrix with lower bandwidth equal to 1, and the upper triangular matrix $U_m$ has upper bandwidth equal to $t-1$. These two matrices are shown below:

$$L_m = \begin{bmatrix} 1 & & & \\ l_2 & 1 & & \\ & \ddots & \ddots & \\ & & l_m & 1 \end{bmatrix}, \qquad U_m = \begin{bmatrix} u_{11} & \cdots & u_{1t} & \\ & u_{22} & \ddots & \\ & & \ddots & \vdots \\ & & & u_{mm} \end{bmatrix}.$$

Then Equation (3.28) can be written as

$$x_m = x_0 + P_m U_m^{-1} L_m^{-1}\big(\|r_0\|_2\, e_1\big). \qquad (3.31)$$

The above equation can be rewritten as

$$x_m = x_0 + G_m\, c_m, \qquad (3.32)$$

where

$$G_m \triangleq P_m U_m^{-1}, \qquad (3.33)$$

$$c_m \triangleq L_m^{-1}\big(\|r_0\|_2\, e_1\big). \qquad (3.34)$$

By the definition of $c_m$, we have

$$c_m = \begin{bmatrix} c_{m-1} \\ \gamma_m \end{bmatrix}, \qquad (3.35)$$

that is, the first $m-1$ entries of $c_m$ are exactly those of $c_{m-1}$, because $L_m$ is unit lower triangular. By Equation (3.32), we have the iterative equation as

$$x_m = x_{m-1} + \gamma_m\, g_m, \qquad (3.36)$$

where $g_m$ is the last column of $G_m$. In the FOM and IOM algorithms, we require an orthogonal basis to solve the approximate solution. By Equation (3.36), we have a progressive method to solve $x_m$, which solves the projection problem iteratively. Finally, we have the DIOM algorithm, which is mathematically identical to the IOM algorithm, but in a progressive version.

Algorithm 3.3 DIOM Algorithm
1. Choose a vector $p_1$ with $\|p_1\|_2 = 1$; set $\gamma_1 = \|r_0\|_2$
2. For $m = 1, 2, \ldots$ until convergence:
       Perform one step of the IOM recursion (Algorithm 3.2) to obtain $h_{im}$ and $p_{m+1}$
       Update the LU factors $L_m$, $U_m$ of $H_m$
       $g_m = \big(p_m - \sum_{i=m-t+1}^{m-1} u_{im}\, g_i\big) / u_{mm}$
       $\gamma_m = -l_m\, \gamma_{m-1}$ (for $m > 1$)
       $x_m = x_{m-1} + \gamma_m\, g_m$

3.3.3 Symmetric Lanczos Algorithm

The symmetric Lanczos algorithm is a simplified Arnoldi's method in which the matrix is symmetric. When solving $Ax = b$ under the assumption that $A$ is a symmetric matrix, the Hessenberg matrix $H_m$ in Equation (3.24) is also a symmetric matrix, hence it is a tridiagonal matrix:

$$H_m^T = (P_m^T A P_m)^T = P_m^T A P_m = H_m. \qquad (3.37)$$

We can reduce the computational complexity by this characteristic: a three-term recurrence equation can be found based on the Arnoldi's algorithm. The Hessenberg matrix $H_m$ in Equation (3.24) should have the structure as follows

$$H_m = \begin{bmatrix} a_1 & b_2 & & & \\ b_2 & a_2 & b_3 & & \\ & b_3 & \ddots & \ddots & \\ & & \ddots & \ddots & b_m \\ & & & b_m & a_m \end{bmatrix}. \qquad (3.38)$$

Then the Arnoldi's algorithm can be simplified to the Lanczos algorithm as in [8].

Algorithm 3.4 Lanczos Method
1. Choose a vector $p_1$ with $\|p_1\|_2 = 1$; set $b_1 = 0$, $p_0 = 0$
2. For $j = 1, 2, \ldots, m$:
       $w_j = A p_j - b_j\, p_{j-1}$
       $a_j = (w_j,\ p_j)$
       $w_j = w_j - a_j\, p_j$
       $b_{j+1} = \|w_j\|_2$; if $b_{j+1} = 0$, stop
       $p_{j+1} = w_j / b_{j+1}$

Then, if $A$ is symmetric, we can find the orthogonal basis of the Krylov subspace by the Lanczos algorithm, and find the approximate solution by Equation (3.28). This process requires fewer computations than the Arnoldi's method.
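A short sketch of the three-term recurrence follows (a hedged transcription of Algorithm 3.4 with simplified breakdown handling; the test matrix is an arbitrary example).

```python
import numpy as np

def lanczos(A, p1, m):
    """Three-term Lanczos recurrence (Algorithm 3.4) for symmetric A."""
    n = len(p1)
    P = np.zeros((n, m))
    a = np.zeros(m)
    b = np.zeros(m + 1)
    P[:, 0] = p1 / np.linalg.norm(p1)
    for j in range(m):
        w = A @ P[:, j]
        if j > 0:
            w = w - b[j] * P[:, j - 1]   # subtract previous direction only
        a[j] = w @ P[:, j]
        w = w - a[j] * P[:, j]
        b[j + 1] = np.linalg.norm(w)
        if j + 1 < m:
            P[:, j + 1] = w / b[j + 1]
    H = np.diag(a) + np.diag(b[1:m], 1) + np.diag(b[1:m], -1)
    return P, H

rng = np.random.default_rng(3)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)                  # S.P.D. test matrix
P, H = lanczos(A, rng.standard_normal(20), 8)
print(np.allclose(P.T @ A @ P, H, atol=1e-6))  # tridiagonal H_m = P^T A P
```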

3.3.4 Conjugate Gradient Method

Like the FOM algorithm, under the assumption that $A$ is symmetric, we can build an orthogonal basis based on the Lanczos algorithm. Then we can use Equation (3.28) to find the orthogonal projection onto the Krylov subspace, which is the desired approximate solution.

An algorithm similar to the DIOM algorithm can be derived; it is called the D-Lanczos algorithm. Because the Hessenberg matrix $H_m$ is a tridiagonal matrix, the LU factorization in Equation (3.30) can be written as

$$H_m = L_m U_m = \begin{bmatrix} 1 & & & \\ \lambda_2 & 1 & & \\ & \ddots & \ddots & \\ & & \lambda_m & 1 \end{bmatrix} \begin{bmatrix} u_1 & o_2 & & \\ & u_2 & \ddots & \\ & & \ddots & o_m \\ & & & u_m \end{bmatrix}, \qquad (3.39)$$

where $L_m$ is unit lower bidiagonal and $U_m$ is upper bidiagonal. By Equation (3.38), the computation of the search directions in Equation (3.32) can be simplified: writing $G_m U_m = P_m$ column by column gives

$$u_m\, g_m + o_m\, g_{m-1} = p_m. \qquad (3.40)$$

Equation (3.40) can be rewritten as

$$g_m = \frac{1}{u_m}\big(p_m - o_m\, g_{m-1}\big). \qquad (3.41)$$

Then we have the D-Lanczos algorithm by replacing the equation for computing $g_m$ in the DIOM algorithm (Algorithm 3.3) with Equation (3.41). Because the approximate solution is iteratively found by $x_m = x_{m-1} + \gamma_m g_m$, the vector $g_m$ is called the search direction vector. The CG method can be derived from the D-Lanczos algorithm by two properties: the first is that the residual vectors are orthogonal to each other, and the second is that the search direction vectors $g_m$ are A-conjugate, $(A g_i,\ g_j) = 0,\ \forall i \ne j$. The update coefficients ($\alpha_j$ and $\beta_j$ below) can be found by the aforementioned two properties. Finally, we have the CG algorithm, which is one of the best known iterative techniques for solving a symmetric positive definite (S.P.D.) system.

Algorithm 3.5 Conjugate Gradient Method
$r_0 = b - A x_0$, $g_0 = r_0$
For $j = 0, 1, \ldots$ until convergence:
    $\alpha_j = (r_j,\ r_j) / (A g_j,\ g_j) = r_j^T r_j / g_j^T A g_j$
    $x_{j+1} = x_j + \alpha_j\, g_j$
    $r_{j+1} = r_j - \alpha_j\, A g_j$
    $\beta_j = (r_{j+1},\ r_{j+1}) / (r_j,\ r_j) = r_{j+1}^T r_{j+1} / r_j^T r_j$
    $g_{j+1} = r_{j+1} + \beta_j\, g_j$
End
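Algorithm 3.5 translates directly into code. The following NumPy sketch follows the pseudocode above; the S.P.D. test matrix is an arbitrary example.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """A direct transcription of Algorithm 3.5 for an S.P.D. matrix A."""
    x = x0.copy()
    r = b - A @ x
    g = r.copy()                  # initial search direction g_0 = r_0
    rr = r @ r
    for _ in range(max_iter or len(b)):
        Ag = A @ g
        alpha = rr / (g @ Ag)     # alpha_j = (r_j, r_j) / (g_j, A g_j)
        x = x + alpha * g
        r = r - alpha * Ag
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        beta = rr_new / rr        # beta_j = (r_{j+1}, r_{j+1}) / (r_j, r_j)
        g = r + beta * g
        rr = rr_new
    return x

M = np.random.default_rng(4).standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)     # S.P.D. test matrix
b = np.ones(50)
print(np.linalg.norm(b - A @ conjugate_gradient(A, b, np.zeros(50))))
```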

3.5 Summary

In this chapter, we first introduce the concept of projection and derive the CG algorithm from basic projection theory. The CG algorithm is one of the best known iterative techniques for solving a symmetric positive definite (S.P.D.) system. We will use the PCG algorithm for solving the matrix inversion problem in the MMSE equalizer in the next chapter.

Chapter 4

Proposed Low-Complexity Frequency Domain Equalizer

The frequency domain equalizer schemes are introduced briefly in Chapter 2. In this chapter, the band channel approximation based on the previous analysis is shown in the first place. By this approximation, some techniques have been proposed to reduce the complexity of different equalizers, as introduced in Section 4.2. In addition, an MMSE equalizer based on the CG method with optimal preconditioning is proposed. Then we compare the complexity of this scheme with some other methods. Finally, performance simulations are shown in Section 4.5.

4.1 Band Channel Approximation

The magnitude of the frequency domain channel matrix is shown in Figure 4.1. The channel model is the Jakes model and the normalized Doppler spread equals 0.1. It is shown that the most significant coefficients are those on the central band and the edges of the matrix, which is similar to the analysis of the channel in Chapter 2. In order to reduce the computation complexity, the smaller coefficients are ignored and only the significant coefficients are dealt with. Although there are some losses in the BER performance, the computation complexity of mobile OFDM systems can be reduced greatly.

The frequency domain channel can be approximated as in Figure 4.2 [5], [6]. We can take account of only the coefficients in the shaded region and ignore the other coefficients. Then a frequency domain channel matrix with bandwidth Q as shown in Figure 4.2 is processed. A time-domain technique discussed in [6] can enhance this approximation.

Figure 4.1: Amplitude of frequency domain channel matrix in Jakes model

Figure 4.2: Structure of approximate frequency domain channel
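The following minimal sketch (example sizes, not thesis code) builds the banded approximation of Figure 4.2: only coefficients within cyclic distance Q of the main diagonal are kept, which also preserves the corner entries visible at the edges of the matrix in Figure 4.1.

```python
import numpy as np

def band_approx(A, Q):
    """Keep only entries within cyclic distance Q of the main diagonal."""
    N = A.shape[0]
    k = np.arange(N)
    d = np.abs(k[:, None] - k[None, :])  # index distance between row/column
    d = np.minimum(d, N - d)             # cyclic distance keeps the corners
    return np.where(d <= Q, A, 0)

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 8))
print(band_approx(A, 1))  # central band of width Q plus wrap-around corners
```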

4.2 Existing Low-Complexity Frequency Domain Equalizers  

Two important low-complexity frequency domain equalizers will be introduced, which are proposed in [4], [6]. The main idea behind both of them is also the band channel approximation. We adopt the mobile OFDM signal model introduced in Chapter 2, Equation (2.17), and ignore the superscript $(i)$, giving

$$\mathbf{y} = \mathbf{F}\mathbf{r} = \mathbf{F}\mathbf{H}\mathbf{x} + \mathbf{F}\boldsymbol{\eta} = \mathbf{F}\mathbf{H}\mathbf{F}^H\mathbf{s} + \mathbf{F}\boldsymbol{\eta} = \mathbf{A}\mathbf{s} + \mathbf{z}. \qquad (4.1)$$

A linear minimum mean square error (LMMSE) equalizer can be used to equalize the received signal. The weight computation is based on

$$\mathbf{W}_{mmse} = \arg\min_{\mathbf{W}} E\big\{\|\mathbf{W}^H\mathbf{y} - \mathbf{s}\|^2\big\}. \qquad (4.2)$$

It can be easily derived that the optimum weights in the above equation are

$$\mathbf{W}_{mmse} = \Big(\mathbf{A}\mathbf{A}^H + \frac{1}{SNR}\,\mathbf{R}_{zz}\Big)^{-1}\mathbf{A}, \qquad (4.3)$$

where $\mathbf{A} = \mathbf{F}\mathbf{H}\mathbf{F}^H$ is the equivalent channel in the frequency domain as shown in Equation (4.1), and $\mathbf{R}_{zz}$ is the autocorrelation matrix of the noise. The equalized signal can be written as

$$\mathbf{d} = \mathbf{W}_{mmse}^H\,\mathbf{y}, \qquad (4.4)$$

and the receiver then makes decisions based on this equalized signal. In Equation (4.3), an $N \times N$ matrix inversion is required, which takes $O(N^3)$ computations; this is too expensive to be realized for a large $N$. One should apply a low-complexity algorithm to solve this problem.

By the idea that the ICI only comes from the neighboring subcarriers, Xiaodong Cai and Georgios B. Giannakis proposed a low-complexity LMMSE equalizer in [4]. Assuming that $s_i$ is the desired signal to be solved, we can take only $2Q+1$ rows of the matrix $\mathbf{A}$ for computing the LMMSE weight vector. It means we are only concerned with the ICI coming from the $2Q$ neighboring subcarriers, and ignore the ICI produced by the subcarriers out of the $2Q$ neighborhood, as shown in Figure 4.3. Because the significant parts of the ICI come from the neighboring subcarriers, this assumption is meaningful.

Therefore the equation for computing the LMMSE weights from Equations (4.3) and (4.4) can be written as

$$\mathbf{w}_i = \Big(\mathbf{A}_i\mathbf{A}_i^H + \frac{1}{SNR}\,\mathbf{R}_{z,i}\Big)^{-1}\mathbf{a}_i, \qquad (4.5)$$

where $\mathbf{A}_i$ is the part of the original matrix $\mathbf{A}$ formed by the $2Q+1$ rows around subcarrier $i$, $\mathbf{a}_i$ is its column corresponding to $s_i$, and $\mathbf{R}_{z,i}$ is the corresponding part of the autocorrelation matrix $\mathbf{R}_{zz}$. This technique can be seen as partitioning a large system into several small systems, which can be easily solved. Note that the rows of the matrix near the edges wrap around cyclically, so the last $2Q$ rows of the matrix must be handled circularly.

Figure 4.3: Low-complexity LMMSE equalizer (proposed by Xiaodong Cai and Georgios B. Giannakis)

Another similar approach is proposed by Philip Schniter in [6]; we call this scheme the Partial MMSE equalizer for simplicity. This method applies the band channel approximation as described in Section 4.1. Assume that we want to retrieve the signal $s_l$. We define the $(2Q+1)\times(4Q+1)$ partial channel matrix

$$\mathbf{A}_l' = \mathbf{A}(l-Q : l+Q,\ l-2Q : l+2Q), \qquad (4.6)$$

with the indices taken modulo $N$, together with the windowed observation

$$\mathbf{y}_l' = [\,y_{l-Q}, \ldots, y_{l+Q}\,]^T, \qquad (4.7)$$

so that only the band of the channel in Figure 4.2 is applied. The computation of the LMMSE weights is similar to Equation (4.5), as follows

$$\mathbf{w}_l = \Big(\mathbf{A}_l'\mathbf{A}_l'^H + \frac{1}{SNR}\,\mathbf{R}_{z,l}\Big)^{-1}\mathbf{a}_l', \qquad (4.8)$$

where $\mathbf{a}_l'$ is the column of $\mathbf{A}_l'$ corresponding to $s_l$, and the estimate is $\hat{s}_l = \mathbf{w}_l^H\mathbf{y}_l'$. Because it only requires $O(N)$ computations to form all the small systems for a fixed $Q$, it requires $O(N)$ computations to solve the LMMSE problem.

Figure 4.4: Partial MMSE equalizer (proposed by Philip Schniter)
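As an illustration of the partitioning idea, the sketch below implements a per-subcarrier windowed LMMSE along the lines of Equations (4.6)-(4.8). The indexing convention and the white-noise assumption ($\mathbf{R}_{z,l} = \mathbf{I}$) are illustrative choices, not the thesis's exact formulation.

```python
import numpy as np

def partial_mmse(A, y, Q, snr):
    """Windowed LMMSE: one small solve per subcarrier (white noise assumed)."""
    N = A.shape[0]
    s_hat = np.zeros(N, dtype=complex)
    for l in range(N):
        rows = (l + np.arange(-Q, Q + 1)) % N          # 2Q+1 observations
        cols = (l + np.arange(-2 * Q, 2 * Q + 1)) % N  # 4Q+1 contributing symbols
        Ap = A[np.ix_(rows, cols)]                     # partial channel A_l'
        R = Ap @ Ap.conj().T + np.eye(2 * Q + 1) / snr
        w = np.linalg.solve(R, Ap[:, 2 * Q])           # column of desired s_l
        s_hat[l] = w.conj() @ y[rows]
    return s_hat

# sanity check with an ideal (identity) channel: the estimate recovers s
rng = np.random.default_rng(7)
N, Q = 16, 2
A = np.eye(N, dtype=complex)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
print(np.linalg.norm(partial_mmse(A, A @ s, Q, snr=1e4) - s))  # small
```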

4.3 Proposed Preconditioned Conjugate Gradient (PCG) MMSE Equalizer

In this section, a low-complexity LMMSE equalizer using the preconditioned conjugate gradient algorithm for solving the matrix inversion problem is proposed. It will be shown that the complexity of this method is $O(N)$, with performance similar to the Partial MMSE equalizer but fewer computations.

4.3.1 Preconditioned Conjugate Gradient (PCG) Algorithm

One of the serious defects of iterative methods is the lack of robustness. CG works regularly if the system is well conditioned. Because CG is a projection technique onto the Krylov subspace $\mathcal{K}_m$, which is a subspace of $\mathbb{R}^n$, it will converge in at most $n$ iterations. The convergence rate of CG is related to the condition number $\kappa$, which is defined as follows

$$\kappa = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}.$$
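The following is a minimal NumPy sketch of the preconditioned CG iteration. The Jacobi (diagonal) preconditioner used here is an illustrative assumption for demonstration only, not the optimal preconditioner proposed in this chapter.

```python
import numpy as np

def pcg(A, b, M_inv, x0, tol=1e-10, max_iter=None):
    """Preconditioned CG: CG applied to the system preconditioned by M."""
    x = x0.copy()
    r = b - A @ x
    z = M_inv @ r                 # preconditioned residual
    g = z.copy()
    rz = r @ z
    for _ in range(max_iter or len(b)):
        Ag = A @ g
        alpha = rz / (g @ Ag)
        x = x + alpha * g
        r = r - alpha * Ag
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        beta = rz_new / rz
        g = z + beta * g
        rz = rz_new
    return x

B = np.random.default_rng(6).standard_normal((40, 40))
A = B @ B.T + np.diag(np.linspace(1.0, 100.0, 40))  # ill-conditioned S.P.D.
b = np.ones(40)
M_inv = np.diag(1.0 / np.diag(A))                   # Jacobi preconditioner
print(np.linalg.norm(b - A @ pcg(A, b, M_inv, np.zeros(40))))
```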
