

2.2.4 Algorithm Convergence

In this algorithm, we regard the discrete output as a special case of the continuous output, so the convergence proof in [6] applies.


Chapter 3

Simulation Results

3.1 Discrete-input and discrete-output MIMO Rayleigh flat-fading channels

In this section, we use the algorithm to simulate discrete-input and discrete-output MIMO Rayleigh flat-fading channels. Two transmit antennas and two receive antennas (2X2 MIMO) are used; each transmit antenna sends a 4-QAM signal, and each receive antenna uses a 3-bit quantizer. One thousand channels are randomly generated, and the ergodic channel capacity is calculated by averaging over them. We show the simulation result in Fig. 3.1 (a).
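To make this setup concrete, the following minimal sketch generates one use of such a channel: independent 4-QAM symbols on the two transmit antennas, a 2X2 Rayleigh flat-fading channel, unit-power complex noise, and a 3-bit quantizer applied to each real dimension at the receiver. The uniform mid-rise quantizer, its range A, the SNR scaling, and all function names are illustrative assumptions rather than the exact settings used in our simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-QAM constellation per transmit antenna (unit average power; illustrative choice)
QAM4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def quantize(y, bits=3, A=3.0):
    """Uniform mid-rise quantizer applied to real and imaginary parts,
    clipped to [-A, A]; A plays the role of the AGC range (assumed value)."""
    levels = 2 ** bits
    step = 2 * A / levels
    q = lambda u: np.clip(np.floor(u / step) + 0.5, -(levels // 2) + 0.5, levels // 2 - 0.5) * step
    return q(y.real) + 1j * q(y.imag)

def one_channel_use(H, snr_linear):
    """One use of the 2X2 channel: independent 4-QAM symbols, Rayleigh fading,
    unit-power complex noise, then 3-bit quantization at each receive antenna."""
    x = rng.choice(QAM4, size=2)
    n = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
    y = np.sqrt(snr_linear) * (H @ x) + n
    return quantize(y)

# One random 2X2 Rayleigh flat-fading channel draw
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
print(one_channel_use(H, snr_linear=10.0))
```

The capacity of the discrete channel induced by this mapping is computed with the algorithm of Chapter 2 for each channel draw, and averaging over many draws gives the ergodic capacity plotted in Fig. 3.1.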

Fig. 3.1 (a) shows a typical simulation. The capacity saturates at high SNR because of the modulation scheme. The two antennas transmit independent signals, so the maximum capacity of this scheme is 4 bits/channel use ($2\log_2 4 = 4$). We show 2X1 and 2X2 MIMO performance in Fig. 3.1 (b).


Fig. 3.1 (a) 2X2 MIMO, 4-QAM Modulation, 3-bit Quantizer

Fig. 3.1 (b) 2X1 and 2X2 MIMO, 4-QAM Modulation, 3-bit Quantizer


3.2 The maximum information rate of the discrete-input and discrete-output MIMO channel

Quantizing the signal at the receiver causes a loss of information rate. If we transmit two independent 4-QAM signals from the two antennas and can distinguish all 16 ($4 \times 4 = 16$) different signal vectors (each vector has two elements, and each element is a 4-QAM signal), then the information rate is 4 bits/channel use. If we quantize the received signals, we may no longer be able to distinguish all of them at the receiver, and the maximum information rate drops. To explain this, we show all combinations of the two-antenna modulated signals (here we use a special 4-QAM constellation whose real part is -1 or 5 and whose imaginary part is -5 or 1) in Fig 3.2. We generate a channel randomly; all possible transmit signal vectors then pass through the channel and are quantized at the receiver, as shown in Fig 3.3. In Fig 3.3, we can only distinguish 12 different signal vectors at the receiver, so the maximum information rate is 3.58 bits/channel use ($\log_2 12 = 3.58$). We run the algorithm and show the simulation result in Fig 3.4, in which we see that the channel capacity does not exceed the maximum value of 3.58.
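This counting argument is easy to reproduce. The sketch below passes all 16 candidate transmit vectors of the special constellation through one random channel realization (noise ignored), quantizes the result, and counts the distinct receive vectors; the channel draw and the quantizer range A are assumptions, so the count, and hence the $\log_2$ bound, varies from realization to realization.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Special 4-QAM constellation from the text: real part in {-1, 5}, imaginary part in {-5, 1}
POINTS = np.array([-1 - 5j, -1 + 1j, 5 - 5j, 5 + 1j])

def quantize(y, bits=3, A=6.0):
    """Uniform 3-bit quantizer per real dimension over [-A, A] (assumed AGC range)."""
    levels, step = 2 ** bits, 2 * A / 2 ** bits
    q = lambda u: np.clip(np.floor(u / step) + 0.5, -(levels // 2) + 0.5, levels // 2 - 0.5) * step
    return q(y.real) + 1j * q(y.imag)

# One random 2X2 Rayleigh channel realization
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# Pass all 16 candidate transmit vectors through H (noise ignored), quantize, count distinct outputs
outputs = {tuple(quantize(H @ np.array(x))) for x in product(POINTS, repeat=2)}
print(len(outputs), "distinguishable vectors ->", np.log2(len(outputs)), "bits/channel use at most")
```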


Fig. 3.2 All possible transmit signal vectors in 2X2 quantized MIMO system.

Fig. 3.3 Quantized signal vectors at the receiver.


Fig. 3.4 Simulation result of a specific channel.


3.3 Optimal input vector distribution for different input power constraints

In this section, we analyze the optimal input vector distribution under different input power constraints in the discrete-input and discrete-output MIMO Rayleigh flat-fading channel. In our simulation, in order to observe the relationship between the power constraint and the optimal input distribution, we use the special 4-QAM constellation introduced in Section 3.2. We analyze the optimal distribution with a low power constraint in 3.3.1 and with a high power constraint in 3.3.2.

3.3.1 Low power constraint

In this section, we analyze the optimal input vector distributions under a low power constraint. We constrain the average input power to 25 and show all possible transmit vectors in Fig 3.5(a); the result of passing these transmit vectors through a specific channel is shown in Fig 3.5(b). In Fig 3.5(b), we see that we can only distinguish 4 different vectors (circled in different colors), so the maximum information rate is 2 bits/channel use ($\log_2 4 = 2$). We run the algorithm and show the simulation result in Fig 3.5(c), where we see that the channel capacity saturates at 2 bits/channel use. This result conforms to our expectation. We show the optimal input vector distribution at high SNR in Fig 3.5(d) and analyze these distributions as follows.

In Fig 3.5(b), we see that we can only distinguish 4 different signal vectors, which are (-1.5-4.5j, -1.5-4.5j), (4.5-4.5j, 4.5-4.5j), (-1.5+1.5j, -1.5+1.5j), and (4.5+1.5j, 4.5+1.5j).

When we receive the vector (-1.5-4.5j, -1.5-4.5j), the transmitted vector may be one of three vectors: vector1 (-1-5j,-1-5j), vector2 (-1-5j,-1+1j), or vector5 (-1+1j,-1-5j).

We summarize the relationship between each output vector and its possible input vectors in Table 3.1. From this table, we see that we receive the vector (-1.5+1.5j, -1.5+1.5j) only when vector6 is transmitted, and vector6 has the lowest power, 4 ($1^2+1^2+1^2+1^2 = 4$), so the probability of vector6 is the largest in Fig 3.5(d). When any of the three vectors vector1, vector2, and vector5 is transmitted, we receive the vector (-1.5-4.5j, -1.5-4.5j); vector2 and vector5 have equal power 28, which is smaller than the power 52 of vector1, so the probabilities of vector2 and vector5 are equal and larger than that of vector1. The other parts of the distribution can be analyzed by the same method. Fig 3.6 shows typical optimal distributions for low power constraints (power from low to high).
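The powers quoted in this argument can be checked directly; the short snippet below computes the power of each of the four transmit vectors discussed above as the sum of squared magnitudes over the two antennas.

```python
import numpy as np

# The four transmit vectors discussed above (special 4-QAM constellation)
vectors = {
    "vector1": np.array([-1 - 5j, -1 - 5j]),
    "vector2": np.array([-1 - 5j, -1 + 1j]),
    "vector5": np.array([-1 + 1j, -1 - 5j]),
    "vector6": np.array([-1 + 1j, -1 + 1j]),
}

# Power of a transmit vector = sum of squared magnitudes over the two antennas
for name, v in vectors.items():
    print(name, np.sum(v.real ** 2 + v.imag ** 2))   # 52, 28, 28, 4
```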


(a)

(b)


(c)

(d)

Fig. 3.5 Simulation result for low input power constraint.


Fig. 3.6 Low power constraint (power from low to high)


Output vector               Possible input vectors

(-1.5-4.5j, -1.5-4.5j)      vector1 (-1-5j,-1-5j), vector2 (-1-5j,-1+1j), vector5 (-1+1j,-1-5j)

(4.5-1.5j, 4.5-1.5j)        vector3 (-1-5j,5-5j), vector4 (-1-5j,5+1j), vector7 (-1+1j,5-5j), vector9 (5-5j,-1-5j), vector10 (5-5j,-1+1j), vector11 (5-5j,5-5j), vector12 (5-5j,5+1j), vector13 (5+1j,-1-5j), vector15 (5+1j,5-5j)

(-1.5+1.5j, -1.5+1.5j)      vector6 (-1+1j,-1+1j)

(4.5+1.5j, 4.5+1.5j)        vector8 (-1+1j,5+1j), vector14 (5+1j,-1+1j), vector16 (5+1j,5+1j)

Table 3.1 Output vector and its possible input vectors
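A grouping of this kind can be generated automatically for any channel realization. The sketch below maps each quantized (noise-free) receive vector to the transmit vectors that produce it, numbering the vectors 1 to 16 in the same order as in the text; the channel draw and the quantizer range A are assumptions, so the resulting grouping will generally differ from Table 3.1, which was computed for one specific channel.

```python
import numpy as np
from itertools import product
from collections import defaultdict

rng = np.random.default_rng(5)

# Special 4-QAM constellation; this ordering reproduces the vector1..vector16 numbering of the text
POINTS = np.array([-1 - 5j, -1 + 1j, 5 - 5j, 5 + 1j])

def quantize(y, bits=3, A=6.0):
    """Uniform 3-bit quantizer per real dimension over [-A, A] (assumed AGC range)."""
    levels, step = 2 ** bits, 2 * A / 2 ** bits
    q = lambda u: np.clip(np.floor(u / step) + 0.5, -(levels // 2) + 0.5, levels // 2 - 0.5) * step
    return q(y.real) + 1j * q(y.imag)

H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# Group the 16 transmit vectors by the quantized (noise-free) output they produce
table = defaultdict(list)
for idx, x in enumerate(product(POINTS, repeat=2), start=1):
    table[tuple(quantize(H @ np.array(x)))].append(f"vector{idx}")

for output, inputs in table.items():
    print(output, "<-", ", ".join(inputs))
```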


3.3.2 High power constraint

In this section, we constrain the average input power to 70 (a high input power constraint); the other settings are the same as in Section 3.3.1. We run the algorithm and show the optimal input vector distributions in Fig 3.7. When one of the three vectors vector8, vector14, and vector16 is transmitted, we receive the vector (4.5+1.5j, 4.5+1.5j). Vector16 has the largest power of the three, so we see that the probability of vector16 is the largest. The other parts of the distribution in Fig 3.7 can be analyzed by the same method. Fig. 3.8 shows typical optimal distributions for high power constraints (power from low to high).

Fig. 3.7 Optimal input distribution for high power constraint.


Fig. 3.8 High power constraint (power from low to high).


3.4 AGC to achieve the channel capacity

In our simulation, we discover that when we fix the antenna and modulation scheme, the AGC dominates the channel capacity. We show different AGC settings in Fig. 3.9, and we can tune the AGC until the maximum information rate is achieved.

(a) From -6 to 6


(b) From -3 to 3

Fig. 3.9 Different AGC
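A simple way to mimic this tuning is to sweep the quantizer range (the AGC setting) and keep the value that separates the most transmit vectors, using the number of distinguishable noise-free receive vectors from Section 3.2 as a rough proxy for the achievable information rate. The sketch below does this for one channel draw; the sweep grid, the proxy criterion, and the reuse of the special constellation are assumptions of this illustration, not the procedure used to produce Fig. 3.9.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

POINTS = np.array([-1 - 5j, -1 + 1j, 5 - 5j, 5 + 1j])    # special 4-QAM constellation
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

def quantize(y, A, bits=3):
    """Uniform 3-bit quantizer per real dimension; [-A, A] is the AGC range being tuned."""
    levels, step = 2 ** bits, 2 * A / 2 ** bits
    q = lambda u: np.clip(np.floor(u / step) + 0.5, -(levels // 2) + 0.5, levels // 2 - 0.5) * step
    return q(y.real) + 1j * q(y.imag)

def distinguishable(A):
    """Distinct quantized (noise-free) receive vectors for AGC range [-A, A]."""
    return len({tuple(quantize(H @ np.array(x), A)) for x in product(POINTS, repeat=2)})

# Sweep the AGC range and keep the setting that separates the most transmit vectors
candidates = np.linspace(1.0, 12.0, 45)
best_A = max(candidates, key=distinguishable)
print("best AGC range:", round(float(best_A), 2), "->", distinguishable(best_A), "distinct vectors")
```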


Chapter 4

Simple Relay Case

4.1 Introduction

Now we want to use our algorithm to study cooperative communication, and we consider only simple cases. We start with the elementary relay channel model shown in Fig. 4.1, in which a single relay R assists the communication between the source S and the destination D. There is no direct link between the source and the destination.

Fig. 4.1 Elementary Relay Channel

Let the transmit powers at the source and the relay be $P$ and $P_R$, respectively. At both the relay and the destination, the received symbol is corrupted by additive white Gaussian noise of unit power. The relay R observes $r$, a noisy version of the transmitted symbol $x$. Based on the observation $r$, the relay transmits a symbol $f(r)$, which is received at the destination along with its noise $n_2$. The relay function $f$ satisfies the average power constraint $E[f(r)^2] = P_R$.

$r = x + n_1$   (4.1)

$y = f(r) + n_2$   (4.2)


4.2 Basic Memoryless Forwarding Strategies

In this section, we introduce two basic memoryless forwarding strategies: demodulate-and-forward (DF) in 4.2.1 and amplify-and-forward (AF) in 4.2.2.

4.2.1 Demodulate And Forward

In the DF protocol, demodulation of the received symbol at the relay is followed by remodulation; the relay function for DF can be expressed as

$f_{DF}(r) = \sqrt{P_R}\,\mathrm{sign}(r)$   (4.3)

where sign(r) outputs the sign of r. Because of the demodulation process, the symbol transmitted by the relay does not provide any soft information to the destination.

4.2.2 Amplify And Forward

An AF relay simply forwards the received signal r after satisfying its power constraint.

The relay function for AF can be written as

$f_{AF}(r) = \sqrt{\frac{P_R}{P+1}}\, r$   (4.4)

where the scaling factor follows from the average power constraint $E[f(r)^2] = P_R$ with $E[r^2] = P + 1$. Evidently, with AF, the relay tries to provide soft information to the destination. A disadvantage of this technique is that significant power is expended at the relay when $|r|$ is large.
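To make the two strategies concrete, the sketch below simulates the elementary relay channel of (4.1) and (4.2) with a BPSK source of power P (an assumption made only for illustration, since the source alphabet is not fixed here) and checks that both relay functions meet the average power constraint $E[f(r)^2] = P_R$.

```python
import numpy as np

rng = np.random.default_rng(4)

P, P_R = 4.0, 4.0          # assumed source and relay powers for this illustration
n_uses = 200_000

def f_df(r):
    """Demodulate-and-forward: hard decision on r, remodulated with power P_R (eq. 4.3)."""
    return np.sqrt(P_R) * np.sign(r)

def f_af(r):
    """Amplify-and-forward: scale r so that E[f(r)^2] = P_R, using E[r^2] = P + 1."""
    return np.sqrt(P_R / (P + 1.0)) * r

x = np.sqrt(P) * rng.choice([-1.0, 1.0], size=n_uses)    # BPSK source symbols (assumption)
r = x + rng.standard_normal(n_uses)                       # relay observation, eq. (4.1)

for name, f in [("DF", f_df), ("AF", f_af)]:
    y = f(r) + rng.standard_normal(n_uses)                # destination observation, eq. (4.2)
    print(name,
          "| average relay power:", round(float(np.mean(f(r) ** 2)), 2),
          "| sign-detector error rate:", round(float(np.mean(np.sign(y) != np.sign(x))), 4))
```

Note that DF spends exactly $P_R$ on every symbol, whereas the instantaneous AF output power grows with $r^2$, which is the disadvantage mentioned above.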


4.3 Simulation Results

In this section, we consider the simple relay case with a Rayleigh flat-fading channel and run our algorithm. We set the total power of the source and relay to S and the noise power to N (SNR = S/N). Fig. 4.2 shows that, in the Rayleigh flat-fading channel, more parallel relays give better performance. Fig 4.3 shows that one relay and no relay perform similarly, while two parallel relays perform better. We compare the two relay strategies described in Section 4.2 and show the simulation results in Fig. 4.4, in which we see that DF performs better at high SNR, because at high SNR the relay demodulates the received signals more reliably. In Fig. 4.5, we show simulation results for one relay with different numbers of receive antennas; two receive antennas give better performance. In relay systems, we can trade off the number of relays, the number of quantization levels, and the number of antennas. We show different numbers of relays and quantization levels in Fig. 4.6, in which we see that adding one antenna gives better performance than adding one bit of quantization. To achieve a specific performance target, we can combine different numbers of relays, antennas, and quantization levels. Fig. 4.7 shows different combinations achieving similar performance.


Fig. 4.2 Different Parallel Relays

Fig. 4.3 Different Parallel Relays and No Relay


Fig. 4.4 Different Relay Strategies

Fig. 4.5 Different Receive Antennas


Fig. 4.6 Different Combinations

Fig. 4.7 Similar Performance with Different Combinations


Bibliography

[1] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., 1948.

[2] S. Arimoto, "An algorithm for computing the capacity of arbitrary discrete memoryless channels," IEEE Trans. Inf. Theory, 1972.

[3] R. E. Blahut, "Computation of channel capacity and rate distortion functions," IEEE Trans. Inf. Theory, 1972.

[4] N. Varnica, X. Ma, and A. Kavcic, "Capacity of power constrained memoryless AWGN channels with fixed input constellations," November 2001.

[5] J. Bellorado, S. Ghassemzadeh, and A. Kavcic, "Approaching the capacity of the MIMO Rayleigh flat-fading channel with QAM constellations, independent across antennas and dimensions."

[6] O. Ndili and T. Ogunfunmi, "Achieving maximum possible speed on constrained block transmission system."

[7] J. A. Nossek and M. T. Ivrlac, "Capacity and coding for quantized MIMO systems," 2006.


About the Author

Name: 黃俞榮 (Yu-Rong Huang)

Birthplace: Kaohsiung City

Date of birth: November 21, 1983

Education:

1990. 9 ~ 1996. 6  Sihwei Elementary School, Kaohsiung (高雄市四維國小)

1996. 9 ~ 1999. 6  Fu-Hua Junior High School (private), Kaohsiung (高雄市私立復華中學)

1999. 9 ~ 2002. 6  Chung-Cheng Senior High School, Kaohsiung (高雄市立中正高中)

2002. 9 ~ 2006. 6  B.S., Department of Electrical Engineering, Tatung University (大同大學 電機工程學系 學士)
