
Chapter 1 Introduction

1.3 Purpose Of Research

In the previous section, we saw that the discrete-input and discrete-output channel model suits digital communication systems, but there was no algorithm that computes this channel capacity under an input power constraint together with a proof of convergence (in [6], the authors proposed such an algorithm, but they could not prove its convergence, and in our simulations we found cases where their algorithm fails to converge).

The purpose of this research is to propose an algorithm that computes the discrete-input and discrete-output channel capacity under an input power constraint, to prove that the algorithm converges, and to extend the algorithm to MIMO cases.

We use the algorithm to study the optimal input distribution under different input power constraints. In the end, we hope to use the algorithm to study simple relay channel cases.


Chapter 2

An Algorithm for Computing the Capacity of the Discrete-Input and Discrete-Output MIMO Channel with an Input Power Constraint

2.1 Quantized MIMO System

Let us consider the quantized MIMO system in Fig. 2.1, where n_T transmit antennas are connected to n_R receive antennas by the channel matrix H ∈ C^{n_R × n_T}, which is assumed to be completely known to the transmitter and receiver. Because the channel state is known at the transmitter, we can find the optimal input distribution satisfying the input power constraint Σ_i P_X(x_i) ‖x_i‖² = P_av that approaches the channel capacity. At every transmit antenna, the input signal (x_1, x_2, ..., x_{n_T}) is a modulated signal (such as PAM or QAM), and the received signal is perturbed by samples (v_1, v_2, ..., v_{n_R}) of complex, circularly symmetric, additive white Gaussian noise of zero mean and variance σ_v²/2 in its real and imaginary parts, respectively, yielding total noise power σ_v². The received signal is split into its real and imaginary parts and fed to the input of a bank of quantizers Q_1, Q_2, ..., Q_{2n_R}, which


output the quantized signals (y_1, y_2, ..., y_{2n_R}). Let us collect the input and output signals into vectors

X = [x_1, x_2, ..., x_{n_T}]^T ∈ M    (2.1)

Y = [y_1, y_2, ..., y_{2n_R}]^T ∈ Q    (2.2)

where M is a finite set containing all possible modulated transmit vectors X, while Q is a finite set containing all possible quantized receive vectors Y. Let us write the input-output relationship of the quantized MIMO system as

Y = Quantize(HX + V)    (2.3)

The individual quantizers are defined by their input-output relationship as

Quantize(r_i) = q   iff   l_i(q) < r_i ≤ u_i(q)    (2.4)

where q is the output of the i-th quantizer when its input r_i lies between a lower limit l_i(q) and an upper limit u_i(q); these limits define the quantization interval for which the quantizer outputs the value q. Here we use uniform quantizers in our simulations.

Fig. 2.1 Quantized MIMO system model [7].
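To make the system model concrete, the following is a minimal Python/NumPy sketch of the forward model in (2.1)-(2.4), using a mid-rise uniform quantizer; the channel matrix, clipping range, and step size in the example are illustrative assumptions rather than the settings used in our simulations.

```python
import numpy as np

def uniform_quantize(r, step=1.0, levels=8):
    """Mid-rise uniform quantizer: maps each real input to one of `levels`
    reconstruction points (interval midpoints), clipping at the outer intervals."""
    idx = np.clip(np.floor(r / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2 + 0.5) * step

def quantized_mimo(H, x, sigma_v=1.0, step=1.0, levels=8, rng=None):
    """One use of the quantized MIMO channel Y = Quantize(HX + V).
    H: (n_R, n_T) complex channel, x: (n_T,) complex transmit vector."""
    rng = np.random.default_rng() if rng is None else rng
    n_R = H.shape[0]
    # circularly symmetric complex AWGN with total power sigma_v**2 per antenna
    v = (rng.normal(size=n_R) + 1j * rng.normal(size=n_R)) * sigma_v / np.sqrt(2)
    r = H @ x + v
    # split into real and imaginary parts -> 2*n_R real quantizer inputs
    return uniform_quantize(np.concatenate([r.real, r.imag]), step, levels)

# Example: 2x2 channel with a 4-QAM symbol on each transmit antenna
H = np.array([[1.0 + 0.2j, 0.3 - 0.1j],
              [0.1 + 0.4j, 0.8 - 0.3j]])
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
print(quantized_mimo(H, x, sigma_v=0.5))
```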


2.2 Algorithm

2.2.1 Algorithm Research

Our goal is to find P_X(·) that maximizes the information rate I(X;Y|H). Here I(X;Y|H) is the mutual information between the channel input and the channel output, assuming both the transmitter and the receiver know the channel matrix H.

The discrete-input and discrete-output channel capacity computation problem is stated as follows:

Find the pmf (probability mass function) P_X*(·) that satisfies

P_X*(·) = argmax_{P_X(·)} I(X;Y|H)    (2.6)

The maximization in (2.6) is taken under the following set of constraints:

Σ_{x∈M} P_X(x) = 1,   P_X(x) ≥ 0 for all x ∈ M    (2.7)

Σ_{x∈M} P_X(x) ‖x‖² ≤ P_av    (2.8)

Finally, we wish to evaluate C (the discrete-input and discrete-output channel capacity), defined as

C = I(X;Y|H) |_{P_X(·) = P_X*(·)}    (2.9)

An algorithm for computing the discrete-input and continuous-output channel capacity has been proposed in [6]. We can regard the discrete output as a special case of the continuous output. With this idea, we propose a discrete-input and discrete-output version for computing the quantized MIMO channel capacity, which we state as follows.

For a specific channel H


Given H and X, the unquantized receive vector is distributed as N(HX, σ_v² I); thus, knowing the quantization levels and the corresponding decision regions, the complementary error function can be used to compute the transition probabilities P_{Y|X}(Y|X).
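As an illustration of this computation, the following sketch evaluates P(Y|X) for one output vector by integrating the Gaussian density of each real quantizer input over its decision interval; the standard normal CDF is used here, which is equivalent to the complementary error function up to a change of variables. The uniform quantizer boundaries and the channel matrix are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def interval_bounds(q, step=1.0, levels=8):
    """Decision boundaries l_i(q), u_i(q) of a mid-rise uniform quantizer;
    the outermost intervals extend to -inf / +inf."""
    lo = (q - levels // 2) * step
    hi = lo + step
    if q == 0:
        lo = -np.inf
    if q == levels - 1:
        hi = np.inf
    return lo, hi

def p_y_given_x(H, x, q_indices, sigma_v=1.0, step=1.0, levels=8):
    """P(Y|X) for the quantized MIMO channel: given H and x, each real
    quantizer input is Gaussian with variance sigma_v**2 / 2, and the
    2*n_R quantizer outputs are conditionally independent."""
    mean = H @ x
    means = np.concatenate([mean.real, mean.imag])
    std = sigma_v / np.sqrt(2)
    prob = 1.0
    for m, q in zip(means, q_indices):
        lo, hi = interval_bounds(q, step, levels)
        prob *= norm.cdf(hi, loc=m, scale=std) - norm.cdf(lo, loc=m, scale=std)
    return prob

# Example: probability that all four quantizers output interval index 4
H = np.array([[1.0 + 0.2j, 0.3 - 0.1j],
              [0.1 + 0.4j, 0.8 - 0.3j]])
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
print(p_y_given_x(H, x, q_indices=[4, 4, 4, 4], sigma_v=0.5))
```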

In Step 3, λ is chosen to satisfy the specified equation, and we introduce two root-finding methods for this purpose in the next two sections.
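To illustrate the structure of such an iteration, the following is a minimal sketch of a Blahut-Arimoto style update in which a Lagrange multiplier λ penalizes the average input power; it assumes the standard exponential reweighting form and a precomputed transition matrix P(Y|X), and is meant only as an outline of this kind of computation, not as the exact steps of our algorithm.

```python
import numpy as np

def blahut_arimoto_power(P_yx, power, lam, n_iter=200):
    """Blahut-Arimoto style iteration for a discrete channel P_yx[x, y]
    with a fixed Lagrange multiplier `lam` on the average input power.
    Returns the resulting input pmf and the mutual information in bits."""
    n_x, _ = P_yx.shape
    p = np.full(n_x, 1.0 / n_x)              # start from the uniform pmf
    for _ in range(n_iter):
        q_y = p @ P_yx                        # current output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(P_yx > 0, np.log(P_yx / q_y), 0.0)
        D = np.sum(P_yx * log_ratio, axis=1)  # divergence D(P(.|x) || q)
        w = p * np.exp(D - lam * power)       # reweighting with power penalty
        p = w / w.sum()
    q_y = p @ P_yx
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(P_yx > 0, np.log2(P_yx / q_y), 0.0)
    rate = np.sum(p[:, None] * P_yx * log_ratio)
    return p, rate

# Toy example: two inputs with powers 1 and 4, two outputs
P_yx = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
p_opt, rate = blahut_arimoto_power(P_yx, power=np.array([1.0, 4.0]), lam=0.1)
print(p_opt, rate)
# An outer loop would adjust lam (Sections 2.2.2 and 2.2.3) until
# sum(p_opt * power) meets the average power constraint P_av.
```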


2.2.2 Interval halving procedure

The interval halving (bisection) procedure is an efficient method for solving the equation f(x) = 0.

The requirement for using this method is that there are two values x_1 and x_2 that satisfy f(x_1) f(x_2) < 0. Since f(x_1) and f(x_2) have opposite signs, we know by the intermediate value theorem that there exists a solution x̃ with x_1 ≤ x̃ ≤ x_2, and with only n+1 function evaluations we can find a shorter interval of length

ε = |x_1 − x_2| / 2^n

that contains x̃. The procedure is described as follows.

Input: x_1, x_2, f(x), ε (tolerance)

Output: a solution to the equation f(x) = 0 that lies in an interval of length < ε
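A minimal sketch of the interval halving procedure; the example function and bracketing interval are illustrative.

```python
def bisect(f, x1, x2, eps):
    """Interval halving: requires f(x1) * f(x2) < 0 and x1 < x2.
    Returns a point inside an interval of length < eps containing a root."""
    if f(x1) * f(x2) >= 0:
        raise ValueError("f(x1) and f(x2) must have opposite signs")
    while x2 - x1 >= eps:
        mid = 0.5 * (x1 + x2)
        if f(x1) * f(mid) <= 0:     # the root lies in the left half
            x2 = mid
        else:                       # the root lies in the right half
            x1 = mid
    return 0.5 * (x1 + x2)

# Example: solve x**2 - 2 = 0 on [1, 2]
print(bisect(lambda x: x * x - 2.0, 1.0, 2.0, 1e-6))   # ~1.414214
```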


2.2.3 Newton-Raphson procedure

Newton's method is perhaps the best known method for finding the roots of a real-valued function. It can often converge quickly, especially if the iterations begin sufficiently near the desired root.

Given a function f(x) and its derivative f'(x), we make a first guess x_0, and a better approximation is

x_1 = x_0 − f(x_0) / f'(x_0)

We continue this process, x_{n+1} = x_n − f(x_n) / f'(x_n), until it converges to a root of the equation.
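A minimal sketch of the Newton-Raphson procedure; the example function, its derivative, and the starting point are illustrative.

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:         # stop when the update is negligible
            return x
    return x

# Example: root of x**2 - 2 starting from x0 = 1
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))   # ~1.41421356
```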

2.2.4 Algorithm Convergence

In this algorithm, we regard the discrete output as a special case of the continuous output, so the convergence proof in [6] applies.


Chapter 3

Simulation Results

3.1 Discrete-input and discrete-output MIMO Rayleigh flat-fading channels

In this section, we use the algorithm to simulate discrete-input and discrete-output MIMO Rayleigh flat-fading channels. Two transmit antennas and two receive antennas (2X2 MIMO) are used; at each transmit antenna a 4-QAM signal is transmitted, and at each receive antenna a 3-bit quantizer is used. One thousand channels are randomly generated, and the ergodic channel capacity is calculated by averaging. We show the simulation result in Fig. 3.1 (a).

Fig. 3.1 (a) shows a typical simulation. The capacity saturates at high SNR because of the modulation scheme. The two antennas transmit independent signals, so the maximum capacity of this scheme is 4 bits/channel use (2 · log_2(4) = 4). We show 2X1 and 2X2 MIMO performance in Fig. 3.1 (b).


Fig. 3.1 (a) 2X2 MIMO, 4-QAM Modulation, 3 bit Quantizer

Fig. 3.1 (b) 2X1 and 2X2 MIMO, 4-QAM Modulation, 3 bit Quantizer


3.2 The maximum information rate of the discrete-input and discrete-output MIMO channel

Quantizing the signal at the receiver causes a loss of information rate. If we transmit two independent 4-QAM signals on the two antennas and we can distinguish all 16 (4·4 = 16) different signal vectors (each with two elements, where each element is a 4-QAM symbol), then the information rate is 4 bits/channel use. If we quantize the received signals, we may not be able to distinguish all of these vectors at the receiver, and thus may not reach the maximum information rate. To explain this, we show all combinations of the two antennas' modulation signals in Fig. 3.2 (here we use a special 4-QAM constellation whose real part takes the values -1 and 5 and whose imaginary part takes the values -5 and 1). We generate a channel randomly; all the possible transmit signal vectors then pass through the channel and are quantized at the receiver, as shown in Fig. 3.3. In Fig. 3.3, we can only distinguish 12 different signal vectors at the receiver, so the maximum information rate is 3.58 bits/channel use (log_2(12) = 3.58). We run the algorithm and show the simulation result in Fig. 3.4, in which we see that the channel capacity does not exceed the maximum value of 3.58.
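The following sketch illustrates this counting argument: it enumerates all transmit vectors built from the special 4-QAM constellation, passes them through a fixed channel, quantizes the noiseless received signal, and counts the distinct quantized vectors; the channel matrix and quantizer settings are illustrative assumptions, so the count they produce need not equal 12.

```python
import itertools
import numpy as np

# Special 4-QAM constellation: real part in {-1, 5}, imaginary part in {-5, 1}
constellation = [re + 1j * im for re in (-1, 5) for im in (-5, 1)]

def uniform_quantize(r, step=3.0, levels=8):
    """Mid-rise uniform quantizer applied elementwise to real inputs."""
    idx = np.clip(np.floor(r / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2 + 0.5) * step

def count_distinguishable(H, step=3.0, levels=8):
    """Count distinct quantized (noiseless) receive vectors over all
    transmit vectors, and return the implied maximum information rate."""
    outputs = set()
    for x in itertools.product(constellation, repeat=H.shape[1]):
        r = H @ np.array(x)
        y = uniform_quantize(np.concatenate([r.real, r.imag]), step, levels)
        outputs.add(tuple(y))
    return len(outputs), np.log2(len(outputs))

# Example with an arbitrary 2x2 channel
H = np.array([[0.6 + 0.1j, 0.2 - 0.3j],
              [0.1 + 0.2j, 0.7 + 0.4j]])
n, rate = count_distinguishable(H)
print(n, "distinguishable vectors, maximum rate", round(rate, 2), "bits/channel use")
```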


Fig. 3.2 All possible transmit signal vectors in 2X2 quantized MIMO system.

Fig. 3.3 Quantized signal vectors at the receiver.


Fig. 3.4 Simulation result of a specific channel.


3.3 Optimal input vector distribution for different input power constraints

In this section, we analyze the optimal input vector distribution under different input power constraints in the discrete-input and discrete-output MIMO Rayleigh flat-fading channel. In our simulation, in order to observe the relationship between the power constraint and the optimal input distribution, we use the special 4-QAM constellation introduced in Section 3.2. We analyze the optimal distribution under a low power constraint in Section 3.3.1 and under a high power constraint in Section 3.3.2.

3.3.1 Low power constraint

In this section, we analyze the optimal input vector distributions under a low power constraint. We constrain the average input power to 25 and show all possible transmit vectors in Fig. 3.5(a); the result of passing these transmit vectors through a specific channel is shown in Fig. 3.5(b). In Fig. 3.5(b), we see that we can only distinguish 4 different vectors (circled in different colors), so the maximum information rate is 2 bits/channel use (log_2(4) = 2). We run the algorithm and show the simulation result in Fig. 3.5(c), where we see that the channel capacity saturates at 2 bits/channel use. This result conforms to our anticipation. We show the optimal input vector distribution at high SNR in Fig. 3.5(d). We analyze these distributions as follows.

In Fig. 3.5(b), we see that we can only distinguish 4 different signal vectors, which are (-1.5-4.5j,-1.5-4.5j), (4.5-4.5j,4.5-4.5j), (-1.5+1.5j,-1.5+1.5j), and (4.5+1.5j,4.5+1.5j).

When we receive the vector (-1.5-4.5j,-1.5-4.5j), the transmit vector may be one of the three vectors (vector1 (-1-5j,-1-5j), vector2 (-1-5j,-1+j), vector5 (-1+j,-1-5j)).

We summarize the relationship between each output vector and its possible input vectors in Table 3.1. From this table, we see that we receive the vector (-1.5+1.5j,-1.5+1.5j) only when vector6 is transmitted, and vector6 has the lowest power, 4 (1²+1²+1²+1² = 4), which is why vector6 has the highest probability in Fig. 3.5(d). When one of the three vectors vector1, vector2, and vector5 is transmitted, we receive the vector (-1.5-4.5j,-1.5-4.5j); vector2 and vector5 have equal power 28, smaller than the power 52 of vector1, so the probabilities of vector2 and vector5 are equal and larger than that of vector1. We can analyze the other distributions in Fig. 3.5(b) by the same method. Fig. 3.6 shows typical optimal distributions for low power constraints (power from low to high).
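As a quick check of these power values, the following snippet computes the power of each candidate transmit vector (the vector numbering follows Table 3.1):

```python
vectors = {
    "vector1": (-1 - 5j, -1 - 5j),   # expected power 52
    "vector2": (-1 - 5j, -1 + 1j),   # expected power 28
    "vector5": (-1 + 1j, -1 - 5j),   # expected power 28
    "vector6": (-1 + 1j, -1 + 1j),   # expected power 4
}
for name, v in vectors.items():
    power = sum(s.real ** 2 + s.imag ** 2 for s in v)   # |s1|^2 + |s2|^2
    print(name, power)
```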


Fig. 3.5 Simulation result for low input power constraint: (a) all possible transmit vectors; (b) received vectors after the channel and quantization; (c) simulated channel capacity; (d) optimal input vector distribution at high SNR.


Fig. 3.6 Low power constraint (power from low to high)


Output vector                  Possible input vectors
(-1.5-4.5j, -1.5-4.5j)         vector1 (-1-5j, -1-5j), vector2 (-1-5j, -1+1j), vector5 (-1+1j, -1-5j)
(4.5-1.5j, 4.5-1.5j)           vector3 (-1-5j, 5-5j), vector4 (-1-5j, 5+1j), vector7 (-1+1j, 5-5j), vector9 (5-5j, -1-5j), vector10 (5-5j, -1+1j), vector11 (5-5j, 5-5j), vector12 (5-5j, 5+1j), vector13 (5+1j, -1-5j), vector15 (5+1j, 5-5j)
(-1.5+1.5j, -1.5+1.5j)         vector6 (-1+1j, -1+1j)
(4.5+1.5j, 4.5+1.5j)           vector8 (-1+1j, 5+1j), vector14 (5+1j, -1+1j), vector16 (5+1j, 5+1j)

Table 3.1 Output vectors and their possible input vectors


3.3.2 High power constraint

In this section, we constrain the average input power to 70 (a high input power constraint); the other settings are the same as in Section 3.3.1. We run the algorithm and show the optimal input vector distributions in Fig. 3.7. When we transmit one of the three vectors vector8, vector14, and vector16, we receive the vector (4.5+1.5j,4.5+1.5j). Vector16 has the largest power of the three, so its probability is the highest. We can analyze the other distributions in Fig. 3.7 by the same method. Fig. 3.8 shows typical optimal distributions for high power constraints (power from low to high).

Fig. 3.7 Optimal input distribution for high power constraint.


Fig. 3.8 High power constraint (power from low to high).


3.4 AGC to achieve the channel capacity

In our simulations, we discover that when we fix the antenna configuration and modulation scheme, the AGC (automatic gain control) dominates the channel capacity. We show different AGC ranges in Fig. 3.9, and we can tune the AGC until the maximum information rate is achieved.

Fig. 3.9 Different AGC ranges: (a) from -6 to 6; (b) from -3 to 3.


Chapter 4

Simple Relay Case

4.1 Introduction

Now, we want to use our algorithm to study cooperative communication and we only consider simple cases. We start with the elementary relay channel model as shown in Fig. 4.1, in which a single relay R assists the communication between the source S and the destination D. There is no direct link between the source and the destination.

Fig. 4.1 Elementary Relay Channel

Let the transmit power at the source and the relay be P and P_R, respectively. At both the relay and the destination, the received symbol is corrupted by additive white Gaussian noise of unit power. Relay R observes r, a noisy version of the transmitted symbol x. Based on the observation r, the relay transmits a symbol f(r), which is received at the destination along with its noise n_2. The relay function f satisfies the average power constraint E[f(r)²] = P_R.

r = x + n_1    (4.1)

y = f(r) + n_2    (4.2)


4.2 Basic Memoryless Forwarding Strategies

In this section, we introduce two basic memoryless forwarding strategies: demodulate-and-forward (DF) in Section 4.2.1 and amplify-and-forward (AF) in Section 4.2.2.

4.2.1 Demodulate And Forward

In the DF protocol, demodulation of the received symbol at the relay is followed by remodulation. The relay function for DF can be expressed as

f_DF(r) = √(P_R) · sign(r)    (4.3)

where sign(r) outputs the sign of r. Due to the demodulation process, the relay-transmitted symbol does not provide any soft information to the destination.

4.2.2 Amplify And Forward

An AF relay simply forwards the received signal r after satisfying its power constraint.

The relay function for AF can be written as

f_AF(r) = √(P_R / (P + 1)) · r    (4.4)

Evidently, with AF, the relay tries to provide soft information to the destination. A disadvantage of this technique is that significant power is expended at the relay when r is large.
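A minimal sketch of the two relay functions for a BPSK source symbol; the source power P, the relay power P_R, and the AF scaling by the average received power (P + 1) follow the assumptions stated above and are illustrative.

```python
import numpy as np

def relay_df(r, P_R):
    """Demodulate-and-forward: hard decision on r, retransmit with power P_R."""
    return np.sqrt(P_R) * np.sign(r)

def relay_af(r, P, P_R):
    """Amplify-and-forward: scale r so that E[f(r)^2] = P_R,
    assuming E[x^2] = P and unit-power relay noise (so E[r^2] = P + 1)."""
    return np.sqrt(P_R / (P + 1.0)) * r

# One channel use of the elementary relay link in Fig. 4.1 with a BPSK source
rng = np.random.default_rng(0)
P, P_R = 4.0, 4.0
x = np.sqrt(P) * rng.choice([-1.0, 1.0])   # source symbol, power P
r = x + rng.normal()                       # relay observation, eq. (4.1)
y_df = relay_df(r, P_R) + rng.normal()     # destination symbol, eq. (4.2), DF
y_af = relay_af(r, P, P_R) + rng.normal()  # destination symbol, eq. (4.2), AF
print(r, y_df, y_af)
```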


4.3 Simulation Results

In this section, we consider simple relay cases over Rayleigh flat-fading channels and run our algorithm. We set the total power (source and relay) to S and the noise power to N (SNR = S/N). Fig. 4.2 shows that, in the Rayleigh flat-fading channel, more parallel relays give better performance. Fig. 4.3 shows that one relay and no relay perform similarly, while two parallel relays perform better. We compare the two relay strategies described in Section 4.2 and show the simulation results in Fig. 4.4, where we see that DF performs better at high SNR because the relay demodulates the received signals more reliably at high SNR. In Fig. 4.5, we show simulation results for one relay with different numbers of receive antennas, and we see that two receive antennas give better performance. In relay systems, we can trade off the number of relays, the number of quantization levels, and the number of antennas. We show different numbers of relays and quantization levels in Fig. 4.6, where we see that adding one antenna gives better performance than adding one bit of quantization. To achieve a specific performance, we can combine different numbers of relays, antennas, and quantization levels. We show different combinations achieving similar performance in Fig. 4.7.


Fig. 4.2 Different Parallel Relay

Fig. 4.3 Different Parallel Relay And No relay


Fig. 4.4 Different Relay Strategies

Fig. 4.5 Different Received Antenna


Fig. 4.6 Different Combination

Fig. 4.7 Similar Performance With Different Combination


Bibliography

[1] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, 1948.

[2] S. Arimoto, "An algorithm for computing the capacity of arbitrary discrete memoryless channels," IEEE Transactions on Information Theory, 1972.

[3] R. E. Blahut, "Computation of channel capacity and rate-distortion functions," IEEE Transactions on Information Theory, 1972.

[4] N. Varnica, X. Ma, and A. Kavcic, "Capacity of power constrained memoryless AWGN channels with fixed input constellations," November 2001.

[5] J. Bellorado, S. Ghassemzadeh, and A. Kavcic, "Approaching the capacity of the MIMO Rayleigh flat-fading channel with QAM constellations, independent across antennas and dimensions."

[6] O. Ndili and T. Ogunfunmi, "Achieving maximum possible speed on constrained block transmission system."

[7] J. A. Nossek and M. T. Ivrlac, "Capacity and coding for quantized MIMO systems," 2006.


About the Author

Name: 黃俞榮 (Yu-Rong Huang)

Birthplace: 高雄市 (Kaohsiung City)

Date of birth: November 21, 1983

Education:

1990.9 - 1996.6  高雄市四維國小

1996.9 - 1999.6  高雄市私立復華中學

1999.9 - 2002.6  高雄市立中正高中

2002.9 - 2006.6  B.S., Department of Electrical Engineering, 大同大學 (Tatung University)
