
Chapter 4 Data Detection in MIMO-OFDM System Based on GPDA Detector

4.1 GPDA-MCPDA Detector

4.1.1 Markov Chain Monte Carlo Method

4.1.1.4 Gibbs Sampler

One problem with applying Monte Carlo integration is obtaining samples from some complex probability distribution. Attempts to solve this problem are the roots of MCMC methods. In particular, they trace back to attempts by mathematical physicists to integrate very complex functions by random sampling, and to the resulting Metropolis-Hastings sampling [6]. The Gibbs sampler [6][15] (introduced in the context of image processing by Geman and Geman in 1984) is a special case of Metropolis-Hastings sampling wherein the random value is always accepted (i.e., $\alpha = 1$). The task remains to specify how to construct a Markov Chain whose values converge to the target distribution. The key to the Gibbs sampler is that we only consider the univariate conditional distributions (the distributions that arise when all of the random variables but one are assigned fixed values). Such conditional distributions are far easier to simulate than complex joint distributions and usually have simpler forms. Thus, we simulate $n$ random variables sequentially from the $n$ univariate conditionals rather than generating a single $n$-dimensional vector in a single pass using the full joint distribution.

To introduce the Gibbs sampler, consider a bivariate random variable $(x, y)$, and suppose we wish to compute one or both marginals, $p(x)$ and $p(y)$. The idea behind the sampler is that it is far easier to consider a sequence of conditional distributions, $p(x \mid y)$ and $p(y \mid x)$, than to obtain the marginal by integrating the joint density $p(x, y)$, e.g., $p(x) = \int p(x, y)\, dy$. The sampler starts with some initial value $y_0$ for $y$ and obtains $x_0$ by generating a random variable from the conditional distribution $p(x \mid y = y_0)$. We then use $x_0$ to generate a new value $y_1$, drawing from the conditional distribution $p(y \mid x = x_0)$. Repeating this process $k$ times generates a Gibbs sequence of length $k$, in which a subset of points $(x_j, y_j)$ for $m < j \le k$ is taken as our simulated draws from the full joint distribution. The first $m$ iterations of the process, called the burn-in period, let the Markov Chain converge to near its stationary distribution.
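To make the bivariate procedure concrete, here is a minimal Python sketch of a Gibbs sampler for a standard bivariate normal with correlation $\rho$, for which both conditionals are univariate normals; the function and parameter names are illustrative, not from the thesis.

```python
import numpy as np

# Minimal sketch: Gibbs sampling from a bivariate normal with correlation
# rho, where both full conditionals are univariate normals:
#   x | y ~ N(rho * y, 1 - rho^2),   y | x ~ N(rho * x, 1 - rho^2).
def bivariate_gibbs(rho=0.8, k=5000, m=500, y0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    y = y0
    samples = []
    for _ in range(k):
        x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))  # draw from p(x | y)
        y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))  # draw from p(y | x)
        samples.append((x, y))
    return np.array(samples[m:])  # discard the first m draws (burn-in)

draws = bivariate_gibbs()
print(draws.mean(axis=0))          # close to (0, 0)
print(np.corrcoef(draws.T)[0, 1])  # close to rho = 0.8
```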

When more than two variables are involved, the sampler is extended in the obvious fashion. In particular, the value of the $k$th variable is drawn from the distribution $p(\theta^{(k)} \mid \boldsymbol{\Theta}^{(-k)})$, where $\boldsymbol{\Theta}^{(-k)}$ denotes a vector containing all of the variables but $k$. Thus, during the $i$th iteration of the sampler, to obtain the value of $\theta_i^{(k)}$ we draw from the distribution

$$\theta_i^{(k)} \sim p\big(\theta^{(k)} \mid \theta^{(1)} = \theta_i^{(1)}, \ldots, \theta^{(k-1)} = \theta_i^{(k-1)}, \theta^{(k+1)} = \theta_{i-1}^{(k+1)}, \ldots, \theta^{(n)} = \theta_{i-1}^{(n)}\big).$$

For example, if there are four variables $(w, x, y, z)$, the sampler becomes

$$w_i \sim p(w \mid x = x_{i-1}, y = y_{i-1}, z = z_{i-1})$$
$$x_i \sim p(x \mid w = w_i, y = y_{i-1}, z = z_{i-1})$$
$$y_i \sim p(y \mid w = w_i, x = x_i, z = z_{i-1})$$
$$z_i \sim p(z \mid w = w_i, x = x_i, y = y_i).$$

Now, we consider equation (3.2) and use the Gibbs sampler to generate samples of the transmitted symbol vector $\mathbf{a}$:

generate initial samples $a_m^{(0)}$, $m = 1, \ldots, M$, randomly
for $i = 1, \ldots, N_b + N_s$
    for $m = 1, \ldots, M$
        sample $a_m^{(i)}$ from $p\big(a_m \mid a_1^{(i)}, \ldots, a_{m-1}^{(i)}, a_{m+1}^{(i-1)}, \ldots, a_M^{(i-1)}, \mathbf{r}\big)$

The first $N_b$ iterations of the loop, called the burn-in period, let the Markov Chain converge to near its stationary distribution. During the next $N_s$ iterations, the Gibbs sampler generates the $N_s$ samples; the last sample can be taken as the solution of the distribution, since the distribution should have converged after $N_b + N_s$ iterations.
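As an illustration of the loop above, the following is a hedged Python sketch. Since equation (3.2) is not reproduced here, it assumes a real-valued model $\mathbf{r} = \mathbf{H}\mathbf{a} + \mathbf{n}$ with BPSK symbols $a_m \in \{-1, +1\}$ and known noise variance; the model and the names (sigma2, $M$) are assumptions.

```python
import numpy as np

def gibbs_mimo(r, H, sigma2, Nb=20, Ns=30, rng=None):
    """Symbol-wise Gibbs sampler sketch for r = H a + n, BPSK symbols."""
    rng = np.random.default_rng() if rng is None else rng
    M = H.shape[1]
    a = rng.choice([-1.0, 1.0], size=M)      # random initial sample a^(0)
    for _ in range(Nb + Ns):                 # Nb burn-in + Ns sampling sweeps
        for m in range(M):                   # one sweep over all M symbols
            # p(a_m = s | other symbols, r) is proportional to
            # exp(-||r - H a||^2 / (2 sigma2)) with a_m set to s.
            log_p = np.empty(2)
            for idx, s in enumerate((-1.0, 1.0)):
                a[m] = s
                log_p[idx] = -np.sum((r - H @ a) ** 2) / (2.0 * sigma2)
            p = np.exp(log_p - log_p.max())
            p /= p.sum()
            a[m] = rng.choice([-1.0, 1.0], p=p)
    return a                                 # last sample as the solution

# Small usage example with an illustrative 4x4 channel.
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4))
a_true = rng.choice([-1.0, 1.0], size=4)
r = H @ a_true + 0.1 * rng.normal(size=4)
print(gibbs_mimo(r, H, sigma2=0.01, rng=rng), a_true)
```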

There are two problems with the Gibbs sampler:

1) How do you choose an initial point? A poor choice of initial point will greatly increase the required burn-in time, and an area of much current research is whether an optimal initial point can be found.

2) How many iterations are needed? This question does not have an exact answer; in practice, most answers are obtained from experience.

4.1.2 GPDA-MCPDA Detector

In [12], the author mentioned that at higher values of SNR, some of the transition probabilities in the underlying Markov Chain may become very small. As a result, the Markov Chain may be effectively divided into a number of nearly disjoint chains. The term nearly disjoint here means that the transition probabilities that allow movement between the disjoint chains are very low. Therefore, a Gibbs sampler that starts from a random point will remain within the set of points surrounding the initial point and thus may not get a chance to visit sufficient points to find the global solution. In [13], two solutions to this problem were proposed: (i) run a number of parallel Gibbs samplers with different initial points; (ii) while running the Gibbs sampler, assume a noise variance which is higher than the actual one. These two methods turned out to be effective for low and medium SNRs.

In the parallel MCPDA detector, we focus on these two methods mentioned above to improve the performance of the MCMC method. First, we use parallel Gibbs samplers with initial points generated randomly. Second, we compute the covariance according to (3.6) rather than (4.25); hence the name MCPDA. Since equation (3.6) takes the variance of the residual interference caused by the random samples into account, the covariance will increase.

Furthermore, after a few iterations the covariance gradually narrows. This may be regarded as an automatic Simulated Annealing method [6]. Finally, from the final iteration of the parallel samplers we pick the sample with the minimum distance, i.e., $\hat{\mathbf{a}} = \arg\min_{\mathbf{a}_i \in \mathcal{X}} \|\mathbf{r} - \mathbf{H}\mathbf{a}_i\|^2$. Thus, we obtain a solution from the parallel MCPDA detector.
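A short sketch of this parallel selection step, reusing the gibbs_mimo() sketch above: run several chains from random initial points (optionally with an inflated noise variance, the second remedy of [13]) and keep the final sample with the minimum distance. The inflation factor and the number of chains are illustrative values, not from the thesis.

```python
import numpy as np

def parallel_mcpda_pick(r, H, sigma2, P=8, inflate=2.0):
    """Run P parallel chains from random initial points and keep the final
    sample with minimum distance ||r - H a||^2 (the arg-min rule above).
    inflate > 1 mimics assuming a higher-than-actual noise variance."""
    finals = [gibbs_mimo(r, H, inflate * sigma2,
                         rng=np.random.default_rng(p)) for p in range(P)]
    dists = [np.sum((r - H @ a) ** 2) for a in finals]
    return finals[int(np.argmin(dists))]
```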

In Chapter 3, we mentioned that the GPDA detector performs well in the low SNR region, so we can use the GPDA detector alone at low SNR; however, as the SNR increases (beyond M dB), the performance of the GPDA detector gradually degrades. Thus, we need the parallel MCPDA method to assist the GPDA detector in order to reach better performance; hence the name GPDA-MCPDA detector. Moreover, since the MCPDA is similar to the GPDA, we only need to add a few blocks and the GPDA detector becomes the MCPDA detector. Therefore, it may be quite simple to implement. The block diagram of the GPDA-MCPDA detector is shown in Fig. 4.1, and the discrepancy between the GPDA detector and the MCPDA detector is shown in Fig. 4.2; a sketch of the switching rule follows the figures.


Fig. 4.1 Block diagram of the GPDA-MCPDA detector.


Fig. 4.2 The discrepancy between the GPDA detector and the MCPDA detector.
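The switching rule described above can be sketched as follows; gpda_detect() is only a crude, hypothetical stand-in for the GPDA stage of Chapter 3 (a sliced zero-forcing estimate), and the threshold M dB is left as an empirical parameter.

```python
import numpy as np

def gpda_detect(r, H, sigma2):
    """Hypothetical stand-in for the GPDA stage of Chapter 3: a
    zero-forcing estimate sliced to the BPSK alphabet. The real GPDA
    detector iterates on symbol probabilities instead."""
    return np.sign(np.linalg.pinv(H) @ r)

def gpda_mcpda_detect(r, H, sigma2, snr_db, M_dB):
    """Hybrid rule from the text: use the GPDA solution alone in the low
    SNR region; once the SNR exceeds the threshold M dB (an empirical
    value), let the parallel MCPDA stage assist the detection."""
    if snr_db <= M_dB:
        return gpda_detect(r, H, sigma2)      # GPDA suffices at low SNR
    return parallel_mcpda_pick(r, H, sigma2)  # MCPDA assists at high SNR
```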

 

4.2 GPDA-SD Detector

As we know, the GPDA detector does not have good performance in the high SNR region. In order to solve this problem, we try to find a solution which is better than the GPDA solution. Therefore, we use the Sphere Decoding (SD) algorithm to do this work; the resulting detector, named GPDA-SD, can attain better performance with lower complexity in the MIMO-OFDM spatial multiplexing system.

4.2.1 Sphere Decoding

The sphere decoding algorithm [3] is a quasi-ML detection technique. It promises to find the optimal solution with low computational cost under some conditions. The SD algorithm was first introduced by Fincke and Pohst [16] in the context of the closest point search in lattices, but it has become very popular in the digital communications literature. Its various applications include lattice codes, CDMA systems, MIMO systems, the global positioning system (GPS), etc.
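As an illustration of the closest-point search, here is a minimal depth-first sphere decoder sketch in Python for a real-valued model over a BPSK alphabet; it conveys the Fincke-Pohst idea of pruning branches outside the current radius rather than reproducing the exact algorithm of [16].

```python
import numpy as np

def sphere_decode(r, H, alphabet=(-1.0, 1.0), radius=np.inf):
    """Depth-first sphere decoder for r = H a + n over a finite alphabet.
    Uses the QR decomposition of H so partial distances accumulate from
    the last symbol upward; branches outside the radius are pruned."""
    Q, R = np.linalg.qr(H)
    y = Q.T @ r
    M = H.shape[1]
    best, best_d = None, radius

    def search(level, a, d):
        nonlocal best, best_d
        if d >= best_d:
            return                       # prune: outside current sphere
        if level < 0:
            best, best_d = a.copy(), d   # full candidate inside sphere
            return
        for s in alphabet:
            a[level] = s
            # Distance increment contributed by this level of ||y - R a||^2.
            inc = (y[level] - R[level, level:] @ a[level:]) ** 2
            search(level - 1, a, d + inc)

    search(M - 1, np.zeros(M), 0.0)
    return best

# Usage: with the initial radius at infinity, the first full path shrinks
# the sphere and the search returns the ML solution over the alphabet.
rng = np.random.default_rng(2)
H = rng.normal(size=(4, 4))
a_true = rng.choice([-1.0, 1.0], size=4)
r = H @ a_true + 0.05 * rng.normal(size=4)
print(sphere_decode(r, H), a_true)
```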
