
For the given ρ, we compute Nk = Σ_{n=1}^{N} ρk,n, and determine αˆk and var(αk) from the gains αk,n, n=1,…,N, for each k=1,…,K.

Subsequently, we can set up the input (Rk, Nk, αˆk, var(αk)), feed it into the off-line trained ANN, and obtain the estimated Pk, denoted by P~k, from the output of the ANN for each k=1,…,K. Then we can compute the estimated total consumed power, denoted by P~T, for the given ρ by P~T = Σ_{k=1}^{K} P~k. Using this off-line trained ANN, the l (=3) ρ's with the smallest P~T among the s feasible ρ's obtained in Stage 1 are the subcarrier assignment patterns determined in this stage.
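As a sketch of this stage, the feature extraction and ANN-based ranking might look as follows in Python. Here `ann_predict` is a placeholder for the off-line trained ANN, and the helper name `stage2_select` is our own; only the input tuple (Rk, Nk, αˆk, var(αk)) and the selection of the l smallest P~T come from the text.

```python
import numpy as np

def stage2_select(rho_list, R, alpha, ann_predict, l=3):
    """Rank feasible subcarrier assignments rho by ANN-estimated total power.

    rho_list    : list of K x N 0/1 assignment matrices (feasible rho's from Stage 1)
    R           : length-K array of per-user rate requests R_k
    alpha       : K x N array of subcarrier gains alpha_{k,n}
    ann_predict : stand-in for the off-line trained ANN, mapping the feature
                  tuple (R_k, N_k, mean gain, gain variance) to an estimate P~_k
    Returns the l assignments with the smallest estimated total power P~_T.
    """
    est_total = []
    for rho in rho_list:
        P_T = 0.0
        for k in range(len(R)):
            gains = alpha[k][rho[k] == 1]          # gains of subcarriers given to user k
            N_k = gains.size                       # number of assigned subcarriers
            feats = (R[k], N_k, gains.mean(), gains.var())
            P_T += ann_predict(feats)              # estimated P~_k from the ANN
        est_total.append(P_T)
    order = np.argsort(est_total)[:l]              # keep the l smallest P~_T
    return [rho_list[i] for i in order]
```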

4.2.3 Stage 3: Determine the Good Enough Subcarrier Assignment and Bit Allocation

Since there are only l (=3) candidate feasible ρ's left, we can use the exact objective function of (4.4) to calculate the objective value of each ρ. That is, we solve the optimal bit allocation problem (4.5) for the given ρ using the greedy algorithm mentioned above to obtain the optimal power consumption Pk for user k=1,…,K. Then we calculate PT = Σ_{k=1}^{K} Pk for the given ρ. Consequently, the ρ associated with the optimal bit allocation corresponding to the smallest PT among the l feasible ρ's will be the good enough solution of (2.3) that we look for.
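A minimal sketch of greedy bit loading for a single user is given below. The power curve f(c) = 2^c − 1 is only a stand-in for the approximate formula (2.1) (the N0 and BER-dependent constants are dropped), and the function name is ours; the greedy rule itself is the standard one: repeatedly add a bit increment on the subcarrier with the smallest incremental power (f(c+step) − f(c))/α².

```python
import heapq

def greedy_bit_allocation(R_k, gains, f=lambda c: 2 ** c - 1, step=2, max_bits=6):
    """Greedy bit loading on the subcarriers assigned to one user.

    R_k      : rate request in bits/symbol (assumed a multiple of `step`)
    gains    : channel gains alpha_{k,n} of the assigned subcarriers
    f        : required-power curve; f(c) = 2^c - 1 is a stand-in for (2.1)
    Bits are added in steps of two (2/4/6 bits/symbol constellations),
    always where the incremental power (f(c+step)-f(c))/alpha^2 is smallest.
    Returns the per-subcarrier bit vector and the resulting power P_k.
    """
    bits = [0] * len(gains)
    # priority queue of (incremental power, subcarrier index)
    heap = [((f(step) - f(0)) / g ** 2, n) for n, g in enumerate(gains)]
    heapq.heapify(heap)
    power, loaded = 0.0, 0
    while loaded < R_k and heap:
        dp, n = heapq.heappop(heap)
        power += dp
        bits[n] += step
        loaded += step
        if bits[n] + step <= max_bits:   # next increment on this subcarrier
            g = gains[n]
            heapq.heappush(heap, ((f(bits[n] + step) - f(bits[n])) / g ** 2, n))
    return bits, power
```

For example, with gains [2.0, 1.0] and a request of 4 bits, both increments land on the stronger subcarrier.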

4.3 Test Results and Comparisons

In this section, we will demonstrate the performance of the proposed algorithm in solving the large-dimension ASABA problem (4.4), which is equivalent to (2.3), in the aspects of solution quality and computational efficiency by comparing it with other algorithms. We assume the OFDM system has 256 subcarriers (i.e., N=256), each of which can carry two, four, or six bits/symbol; therefore, in this system M=6. We adopt the approximate formula in (2.1) for the fk(c) in the transmission power fk(c)/α²k,n shown in the objective function of (2.3), and we set Pe = 10^-4 and N0 = 10^-12 watt in the following simulations.

We use a frequency-selective channel consisting of six independent Rayleigh multipaths to model the wireless transmission channel, and each multipath is modeled by Clark's flat fading model [25]. We assume that the power delay profile is exponentially decaying with e^(-2p), where p = 0, 1, 2, 3, 4 and 5 denotes the multipath index. Hence, the relative powers of the six multipath components are 0 dB, -8.69 dB, -17.37 dB, -26.06 dB, -34.74 dB, and -43.43 dB.

We also assume the average subcarrier channel gain E[α²k,n] is unity for all k and n. Based on the above assumptions, we can generate the power consumption coefficients αk,n, k=1,…,K, n=1,…,N, using MATLAB for our simulations.
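A minimal sketch of this coefficient generation (in Python rather than MATLAB, and omitting Clark's time-variation model) could be the following; the function name is ours, but the e^(-2p) profile, the six Rayleigh taps, and the unit-average-gain normalization follow the text.

```python
import numpy as np

def generate_gains(K, N=256, L=6, rng=None):
    """Generate subcarrier gains alpha_{k,n} for K users on N subcarriers.

    Channel model per the text: L = 6 independent Rayleigh multipaths with an
    exponentially decaying power delay profile e^{-2p}, p = 0..L-1, normalized
    so that the average subcarrier gain satisfies E[alpha^2] = 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    pdp = np.exp(-2.0 * np.arange(L))   # e^{-2p}: 0 dB, -8.69 dB, ... steps
    pdp /= pdp.sum()                    # normalize total tap power to 1
    # complex Gaussian taps -> Rayleigh magnitudes, one channel per user
    taps = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))
    taps *= np.sqrt(pdp / 2.0)          # each tap p has power pdp[p]
    H = np.fft.fft(taps, n=N, axis=1)   # frequency response on N subcarriers
    return np.abs(H)                    # alpha_{k,n} = |H_k(n)|
```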

We consider cases of various numbers of users, K=10, 20, 30, 40, and 50. For each K, we assume a fixed total data rate request RT = 1024 bits/symbol and randomly generate Rk, k=1,…,K, based on the constraint Σ_{k=1}^{K} Rk = RT. For each K and the associated R, we randomly generate 5000 sets of αk,n, k=1,…,K, n=1,…,N, based on the above-mentioned power consumption coefficient generation process and denote αi as the ith set of the 5000. With the above test setup, we apply our algorithm to solve (2.3) on a PC with a Pentium 2.4 GHz processor and 512 Mbytes of RAM. We also apply the more global-like mathematical programming based approaches proposed by Wong et al. and Kim et al. in [3] and [5], respectively, and the more local-like two-module scheme and two-step subcarrier assignment approaches proposed by Ergen et al. and Zhang in [6] and [8], respectively, to the same test cases on the same PC. For the purpose of comparison, we can use the average bit SNR (abSNR) to replace PT, because abSNR is defined as the ratio of the average transmit power, PT/RT, to the noise PSD level N0. As we have assumed that all the data rates per symbol are fixed at RT, and N0 is just a constant, PT is proportional to abSNR.
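As a small sketch with hypothetical numbers, the conversion from PT to abSNR (here expressed in dB, the form plotted in the figures) is:

```python
import math

def absnr_db(P_T, R_T=1024, N0=1e-12):
    """abSNR = (P_T / R_T) / N0: the average transmit power per bit over the
    noise PSD level N0, returned in dB. Defaults match the test setup in the
    text (R_T = 1024 bits/symbol, N0 = 1e-12 watt); P_T is hypothetical."""
    return 10.0 * math.log10((P_T / R_T) / N0)
```

For instance, a total power of 1.024e-8 watt gives an abSNR of exactly 10 dB under these defaults.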

Remark 4.5: As shown in (2.1), PT contains the term N0. Therefore, the magnitude of N0 employed in our tests does not affect the results of abSNR, because the term N0 is cancelled out in the definition of abSNR.

For each K with the associated vector R, we denote abSNR(αi) as the resulting abSNR when αi is used, and calculate the average abSNR = (1/5000) Σ_{i=1}^{5000} abSNR(αi), i.e., the average of the 5000 abSNR's for a given K. The resulting average abSNR for each K and each algorithm is shown in Figure 4.1.

Figure 4.1. The abSNR for K=10, 20, 30, 40 and 50 obtained by the five algorithms.

From Figure 4.1, we see that the abSNR obtained by our algorithm, marked by "U", is the smallest among all algorithms. Moreover, as can be observed from Figure 4.1, the advantage of our algorithm becomes even greater as the number of users increases.

Remark 4.6: The quality of the solution obtained by the approach proposed by Wong et al. in [3] is excellent and has been used as a comparison standard in most of the literature on ASABA problems [5], [7], [8]. Our simulations, as shown in Figure 4.1, also confirm the quality of their solution. The reason behind their solution's excellent quality is their global-like mathematical programming based approach, as indicated previously. They first employed a Lagrangian relaxation method to solve the continuous version of the ASABA problem and then rounded the optimal continuous subcarrier assignment solution off to the closest integer solution. Such arbitrary rounding off may cause infeasibility and is not theoretically guaranteed to yield a good solution, especially when the dimension of the ASABA problem is large. Unlike their approach, we handle the discrete solution space directly. In the first stage of our approach, our specially designed GA, associated with a surrogate model for fast fitness evaluation, searches through the whole feasible solution space to find some good feasible subcarrier assignment patterns. Thus, our approach is also global-like and will not cause any infeasibility problem. Then, in the second and third stages, we use the ANN and exact models, respectively, to help pinpoint a good enough subcarrier assignment pattern, with its optimal bit allocation, among the feasible solutions obtained in Stage 1.

The arbitrary rounding-off technique employed in [3] lacks theoretical support. In contrast, the foundation of our approach is OO theory, which is a theoretically sound general methodology [10] and has several successful applications to combinatorial optimization problems with huge discrete solution spaces [24], [33]-[34].

Figure 4.2. The average computation time for obtaining an abSNR by the five algorithms in cases of K=10, 20, 30, 40 and 50.

We also show the average computation time for obtaining an abSNR for each K and each algorithm in Figure 4.2. From this figure, we see that the average computation time of our algorithm, which is around 100 milliseconds as marked by "U", is also the smallest among all algorithms. These results show that our algorithm outperforms the other four in both solution quality and computational efficiency. More importantly, its advantage grows as the number of users increases. This demonstrates that our algorithm is most suitable for large-dimension ASABA problems.

Remark 4.7: The methods in [5], [6], [8] were proposed to overcome the computational complexity of the method in [3]. Indeed, the methods in [6], [8] are more computationally efficient than the methods in [3], [5], as shown in Figure 4.2, because the former are local-like heuristic methods while the latter are global-like mathematical programming based approaches. In fact, the authors of [6] and [8] did not compare the computational efficiency of their methods with the method in [3] in their papers, because they took it for granted that their methods are conceptually faster. However, since the methods in [6], [8] are local-like, the computation time of each solution adjustment step is very short, but the improvement per step is limited. Hence, their convergence rate degrades, especially when the dimension of the ASABA problem is large. In contrast, the computational complexity of our approach is less sensitive to the size of the ASABA problem, because (i) the population size and number of iterations of the GA employed in Stage 1 are fixed, (ii) the parameters s and l in Stages 1 and 2, respectively, are fixed, and (iii) the structure of the ANN is also fixed.

This is the reason why the computational efficiency of our algorithm can compete with the methods in [6], [8] in solving large-dimension ASABA problems. It is commonly understood that comparisons based on CPU times may not be objective enough; however, we can hardly obtain any analytical expression for the total number of multiplications and additions consumed by the methods in [3], [5], [6], [8]. In fact, CPU time is a commonly used measure for comparing computational efficiency in similar subjects appearing in [7], [35], [36].

Figure 4.3. Comparison of the five algorithms for various Pe in the case of K=40.

In the previous comparisons, we set the BER to Pe = 10^-4. It would be interesting to know how the Quality-of-Service (QoS) requirement, i.e., various BERs, affects the performance of our algorithm. Therefore, we have tested the five algorithms for K=10, 20, 30, 40 and 50 with various Pe ranging from 10^-2 to 10^-6, using 5000 randomly generated sets of αk,n, k=1,…,K, n=1,…,256, for each K. The conclusions on the performance of the five algorithms are similar for the various K. A typical result is shown in Figure 4.3, which corresponds to K=40.

The abSNR obtained by our algorithm is marked by "U" in Figure 4.3. We see that the performance of our algorithm is the best among the five in all cases of Pe, and when a higher QoS level is required (i.e., the value of Pe is smaller), the advantage of our algorithm is even greater (i.e., a smaller abSNR compared with the other four algorithms). This further demonstrates the superiority of the solution quality achieved by our algorithm.
