
Chapter 1. Introduction

1.4 Thesis Organization

The main scope of this thesis is to elaborate on the bandwidth allocation concept of WiMAX. Several papers implement a so-called strict priority queue that serves bandwidth requests in priority order, but in some cases this leads to unfairness, which is why subsequent research has taken a different view of bandwidth allocation. This thesis proposes a new uplink scheduling mechanism, called the two-tier scheduling algorithm, to overcome the shortcomings of recent scheduling research. Under this mechanism, the BS (Base Station) can allocate bandwidth fairly to each service according to its individual needs, where fairness follows a weighted fairness definition. After the concept is introduced, the simulation results show that the proposed method improves performance and guarantees fairness while preserving the quality-of-service features. The thesis is organized as follows. Chapter 2 introduces the two-tier scheduling algorithm, the simulation results are presented in Chapter 3, and the final chapter concludes the thesis and discusses future work on WiMAX.

Chapter 2.

Two-Tier Scheduling Algorithm

2.1 Proposed Structure

As Fig. 2-1 shows, a service flow handshakes with the BS via DSA/DSC messages during connection establishment. At this time, the connection notifies the BS of two rate parameters of its own: the Maximum Sustained Rate and the Minimum Reserved Rate. Once the connection has data to send, it notifies the BS of the amount of data by a bandwidth request (BR). After the BS collects all the BRs from the connections, it classifies each connection into one of three categories: the Unsatisfied category, for connections that have not yet reached their minimum guarantee; the Satisfied category, for connections that have reached their minimum guarantee but not yet their maximum requirement; and the Over-satisfied category, for connections that already exceed their maximum requirement. The BS then performs the queueing and scheduling operations for them, designs the MAP, and defines in which time slots each connection can transmit uplink data. Assuming the PHY transmission rate is fixed, the number of physical slots corresponding to a bandwidth request can be determined, and these rate parameters do not change over time.

The BS can therefore always derive each connection's transmission rate from the elapsed time and the allocated bandwidth. The concept of the proposed structure is illustrated in Fig. 2-1.

Figure 2-1: Proposed structure
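As a concrete illustration of the slot mapping mentioned above, the following is a minimal C++ sketch. The fixed PHY rate is expressed as a hypothetical number of bits per physical slot; the constant value and the identifiers are assumptions for illustration only, not taken from the standard or from the simulator.

#include <cstdint>
#include <iostream>

// Hypothetical mapping from a bandwidth request (in bytes) to uplink physical
// slots, assuming a fixed PHY rate so that each slot carries a known number of bits.
constexpr std::uint32_t kBitsPerSlot = 48;  // assumed value for illustration only

std::uint32_t SlotsForRequest(std::uint32_t request_bytes) {
    const std::uint64_t bits = static_cast<std::uint64_t>(request_bytes) * 8;
    // Round up: a partially filled slot still has to be granted.
    return static_cast<std::uint32_t>((bits + kBitsPerSlot - 1) / kBitsPerSlot);
}

int main() {
    // Example: a 600-byte bandwidth request maps to 4800 bits, i.e. 100 slots.
    std::cout << SlotsForRequest(600) << " physical slots\n";
    return 0;
}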

2.2 Parameter Definitions

(1) R_i^min:

The Minimum Reserved Rate of connection i.

(2) R_i^max:

The Maximum Sustained Rate of connection i.

(3) R_i^allocated:

The previously allocated bandwidth rate of connection i.

(4) PF_i:

The Priority Function (PF) of connection i; its value lies in [0, 1]. This parameter indicates a connection's satisfaction degree, so a connection with a smaller PF has higher allocation precedence than the other connections in the same class. The PF is calculated differently in each category, as shown in Section 2.4. These parameters are summarized as a data structure in the sketch below.
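To make the bookkeeping concrete, here is a minimal C++ sketch of the per-connection state used throughout this chapter; the struct and field names are illustrative, not taken from the simulator.

// Per-connection state as defined in Section 2.2 (plus QoS_Fac from Section 2.3
// and the current bandwidth request used by the scheduler).
struct Connection {
    double r_min;        // Minimum Reserved Rate R_i^min (bits/sec); zero for BE
    double r_max;        // Maximum Sustained Rate R_i^max (bits/sec)
    double r_allocated;  // previously allocated rate R_i^allocated (bits/sec)
    double qos_fac;      // QoS_Fac_i, larger for higher-QoS services
    double bw_request;   // current bandwidth request (BR)
};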

2.3 Service Category

(1) Unsatisfied:

In this class, connections have not yet reached their R_i^min. Thus they are the most urgent ones to be given a transmission opportunity. The PF represents each unsatisfied connection's satisfaction degree: a smaller value shows that the connection is still far from satisfaction and should undoubtedly be allocated first. Only rtPS and nrtPS services qualify for this category, because only these two have an R_i^min, while BE has no minimum guarantee. This concept originates from the standard, whose key point is to attain each connection's R_i^min as soon as possible. Usually, services with higher QoS requirements have a larger Minimum Reserved Rate (rtPS > nrtPS), so rtPS services are more likely to be unsatisfied under this measure. As a result, rtPS can usually be allocated earlier than nrtPS in the next frame, even if both were previously allocated the same rate.

(2) Satisfied:

In this category, connections have already reached their R_i^min but have not yet reached their R_i^max. We suggest that after attaining R_i^min, the spare bandwidth should be allocated to all the connections by a fair standard. After calculating each connection's PF_i we know its satisfaction degree: the connection with the lower satisfaction degree should be allocated first, and the connections that are already in a quite satisfied condition should wait until the others have been allocated. In this status, not only rtPS and nrtPS but also BE can take part in the contention for transmission opportunities. R_i^max can be regarded as the maximum allowable rate contracted with the BS; if a connection is still far from this rate, it is less satisfied. The maximum value of PF is 1, which stands for the most satisfied connection. Usually, services with higher QoS requirements have larger R_i^min and R_i^max. After the allocation scheduling, all connections will be allocated rates between their R_i^min and R_i^max. Since BE has no R_i^min, once it is allocated bandwidth it becomes more satisfied and may become less prior. The result shows that the allocated amount for higher-QoS services is larger than that for lower ones, which is in accordance with their features: not only is fairness guaranteed, but the QoS issue is also addressed.

(3) Over-satisfied:

In this status, we call such connections greedy connections. When bandwidth is plentiful, greedy connections can also be allowed, so after the allocation to the Unsatisfied and Satisfied connections it is time to give the greedy connections a chance. We can still evaluate the greediness of this kind of connection by calculating its PF_i: a lower PF stands for less greedy, while a higher one stands for more greedy. Out of fairness, the less greedy connections should be allocated first. As a result, a connection with a smaller Maximum Sustained Rate becomes greedy more easily and is therefore less prior to be allocated; consequently, a lower-QoS service such as Best Effort is less prior once it falls into this category. On the other hand, we also propose a parameter called QoS_Fac_i: a larger QoS_Fac_i makes PF_i smaller, so higher-QoS services can be allocated before lower ones in this category. A sketch of this three-way classification is given below.
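The following is a minimal C++ sketch of the classification step, using the illustrative Connection struct above. The thresholds follow the definitions in this section: below R_i^min is Unsatisfied, between R_i^min and R_i^max is Satisfied, and beyond R_i^max is Over-satisfied.

enum class Category { Unsatisfied, Satisfied, OverSatisfied };

// Classify a connection by comparing its previously allocated rate with its
// contracted minimum and maximum rates, as described in Section 2.3.
Category Classify(const Connection& c) {
    if (c.r_allocated < c.r_min) {
        return Category::Unsatisfied;   // minimum guarantee not yet met (rtPS, nrtPS)
    }
    if (c.r_allocated < c.r_max) {
        return Category::Satisfied;     // between R_i^min and R_i^max (rtPS, nrtPS, BE)
    }
    return Category::OverSatisfied;     // beyond R_i^max: a greedy connection
}

Note that a BE connection has no minimum guarantee (r_min of zero), so it can never fall into the Unsatisfied category, matching the description above.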

2.4 Priority Function Calculation

Based on the current service category of connection i, its PF is calculated as follows.

For a connection i in the Satisfied category, for example,

PF_i = (R_i^allocated − R_i^min) / (R_i^max − R_i^min).
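Only the Satisfied-category formula survives in the extracted text. The C++ sketch below, reusing the Connection struct and Category enum from earlier, fills in the other two categories with plausible forms that match the prose of Section 2.3 (Unsatisfied: fraction of the minimum guarantee already served; Over-satisfied: excess over R_i^max damped by QoS_Fac_i). These two forms are assumptions, not the thesis's exact equations.

// Priority Function per category. Only the Satisfied case is reconstructed from
// the thesis; the other two are assumed from Section 2.3 (smaller PF = higher
// precedence, larger QoS_Fac = smaller PF for greedy connections).
double PriorityFunction(const Connection& c, Category cat) {
    switch (cat) {
        case Category::Unsatisfied:
            // Assumed: fraction of the minimum guarantee already served, in [0, 1].
            return c.r_allocated / c.r_min;
        case Category::Satisfied:
            // From Section 2.4: progress between R_i^min and R_i^max, in [0, 1].
            return (c.r_allocated - c.r_min) / (c.r_max - c.r_min);
        case Category::OverSatisfied:
            // Assumed: greediness beyond R_i^max, reduced for higher-QoS services.
            return (c.r_allocated - c.r_max) / (c.qos_fac * c.r_max);
    }
    return 0.0;
}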

2.5 Two-Tier Scheduling Operations

Tier 1:

During the connection establishment period, each connection handshakes with the BS and notifies it of its R_i^min and R_i^max. Then, if a connection has data to send, it notifies the BS by sending a BR before the BS designs the MAP.

After collecting all the BRs in one frame, the BS calculates each connection's R_i^allocated and classifies all the connections into the three categories.

Tier 2:

In each frame, the BS allocates bandwidth (BW) following the category order (Unsatisfied → Satisfied → Over-satisfied). Within each category, it always picks the connection with the smallest PF and allocates bandwidth to it according to its BR, until all bandwidth is allocated. When the allocation within one category is finished, if there is

still available bandwidth, the BS proceeds to the next category. If all bandwidth is allocated within any category, or all connections have been allocated after the Over-satisfied category, the BS stops the process and broadcasts the MAP. In the initial frame, however, all connections' PFs are zero and no priority order can be evaluated, so the scheduling algorithm simply serves connections in first-come-first-served (FCFS) order. In the later frames, connections that have been allocated bandwidth have nonzero PF values, while connections that have not been allocated keep a PF of zero and therefore become prior. The process of the Two-Tier Scheduling Algorithm (2TSA) is illustrated by the pseudo code and the flow chart in Fig. 2-2 shown below.

Figure 2-2: Flow chart of 2TSA

Pseudo Code of 2TSA
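The pseudo code itself did not survive extraction; as a stand-in, here is a minimal C++ sketch of the Tier-2 loop described above, reusing the Connection, Category, Classify, and PriorityFunction sketches from the previous sections. The grant sizes, units, and rate bookkeeping are simplified assumptions.

#include <algorithm>
#include <vector>

// One Tier-2 scheduling pass: serve the categories in order, and within each
// category serve connections in increasing PF, granting up to each one's BR.
void ScheduleFrame(std::vector<Connection>& conns, double available_bw) {
    const Category order[] = {Category::Unsatisfied, Category::Satisfied,
                              Category::OverSatisfied};
    for (Category cat : order) {
        // Collect the connections currently in this category with pending requests.
        std::vector<Connection*> members;
        for (auto& c : conns) {
            if (Classify(c) == cat && c.bw_request > 0) members.push_back(&c);
        }
        // Smallest PF first (PF is zero for never-served connections, so they lead,
        // which also reproduces the FCFS behavior of the initial frame).
        std::sort(members.begin(), members.end(),
                  [cat](Connection* a, Connection* b) {
                      return PriorityFunction(*a, cat) < PriorityFunction(*b, cat);
                  });
        for (Connection* c : members) {
            if (available_bw <= 0) return;     // all bandwidth allocated: stop
            double grant = std::min(c->bw_request, available_bw);
            available_bw -= grant;
            c->bw_request -= grant;
            c->r_allocated += grant;           // simplified rate bookkeeping
        }
    }
    // Any bandwidth left after the Over-satisfied category stays unused this frame.
}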

Chapter 3. Performance Evaluation

3.1 Parameter and Environment Setting

The simulation program is based on a self-configured environment compiled with g++ under Linux, and is modified from part of the settings in [6]. The simulation environment deploys five UGS flows, seven rtPS flows, seven nrtPS flows, and seven Best Effort flows. For each UGS flow, we define a maximum latency of 20 ms and set both the Minimum Reserved Rate and the Maximum Sustained Rate to 60 Kbits/sec. For each rtPS flow, we define the maximum latency as 50 ms; for each nrtPS flow, as 100 ms; and for each BE flow, as 200 ms. Buffer management is used to control the buffer and decide which packets to drop: when the waiting time of a packet exceeds its maximum latency, the buffer regards it as invalid and drops it. The frame length is set to 10 ms and the simulation time is 1000 frames (10 seconds). The simulation parameters are listed in Table 3-1.
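The values stated above can be summarized in a small configuration block. The following C++ constants are a sketch of that configuration; only values given in the text are included, and the identifiers are illustrative.

// Simulation configuration taken from the values stated in Section 3.1.
constexpr int    kNumUgsFlows   = 5;
constexpr int    kNumRtpsFlows  = 7;
constexpr int    kNumNrtpsFlows = 7;
constexpr int    kNumBeFlows    = 7;

constexpr double kFrameLengthMs = 10.0;     // frame length
constexpr int    kSimFrames     = 1000;     // 1000 frames = 10 seconds

constexpr double kUgsMaxLatencyMs   = 20.0;
constexpr double kRtpsMaxLatencyMs  = 50.0;
constexpr double kNrtpsMaxLatencyMs = 100.0;
constexpr double kBeMaxLatencyMs    = 200.0;

constexpr double kUgsRateBps = 60e3;        // UGS: R_min = R_max = 60 Kbits/sec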

All packet arrivals occur at the beginning of each frame, and the packet arrival process for each connection follows a Poisson distribution with a per-connection traffic rate λ. We let each connection generate at least its Minimum Reserved Rate mapped onto one frame, and design the average amount generated in one frame to correspond to its Maximum Sustained Rate mapped onto one frame. By fixing the packet size, we can implement a packet arrival model that follows a Poisson process [10], which describes the distribution of the number of packets arriving in one frame:

P(X = k) = (λ^k e^{−λ}) / k!,

where λ is the mean number of arriving packets. λ is thus defined as:

λ = (R_i^max − R_i^min) / packet_size.

Table 3-1: Simulation parameters


Figure 3-1: Poisson distribution

Fig. 3-1 shows the Poisson distribution with a mean (λ) value of five. In each frame, the simulation model bounds a connection's generated packets from below by its Minimum Reserved Rate mapped onto one frame, and generates a variable number of packets every frame; the average traffic rate equals its Maximum Sustained Rate.
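A minimal C++ sketch of this per-frame arrival model is shown below. It adds the guaranteed minimum to a Poisson-distributed extra amount, matching the description above; the fixed packet size and the per-frame scaling by frame_sec are assumptions of the sketch.

#include <cstdint>
#include <random>

// Number of packets a connection generates in one frame: the minimum reserved
// amount plus a Poisson-distributed extra whose mean brings the average up to
// the Maximum Sustained Rate (packet_size_bits and frame_sec are assumed inputs).
std::uint64_t PacketsThisFrame(double r_min_bps, double r_max_bps,
                               double packet_size_bits, double frame_sec,
                               std::mt19937& rng) {
    const std::uint64_t min_pkts =
        static_cast<std::uint64_t>(r_min_bps * frame_sec / packet_size_bits);
    const double lambda = (r_max_bps - r_min_bps) * frame_sec / packet_size_bits;
    if (lambda <= 0.0) return min_pkts;  // e.g. UGS, where R_min equals R_max
    std::poisson_distribution<std::uint64_t> extra(lambda);
    return min_pkts + extra(rng);
}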

In addition, the simulation also models the behavior of greedy connections.

In the first five seconds, connections generate traffic and request bandwidth according to their Maximum Sustained Rate. After five seconds of simulation time, however, if a connection finds that its allocation is in good condition, it attempts to request more bandwidth for better throughput; the connection becomes greedy because its average bandwidth request keeps growing. Likewise, after another period of time, if the greedy connection detects that the condition is still good, it requests even more. The observation aims to show the fairness improvement when some connections become greedy and try to damage the other connections.
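A minimal C++ sketch of this greedy behavior follows; the growth factor and the "good condition" test (the previous request being almost fully granted) are assumptions used only for illustration.

// Greedy behavior after the first five seconds: if the previous request was
// (almost) fully granted, enlarge the next bandwidth request.
double NextBandwidthRequest(double prev_request, double prev_allocated,
                            double sim_time_sec) {
    const double kGrowth = 1.2;        // assumed growth factor
    const double kGoodRatio = 0.95;    // assumed threshold for a "good" allocation
    if (sim_time_sec > 5.0 && prev_allocated >= kGoodRatio * prev_request) {
        return prev_request * kGrowth; // condition is good: ask for more
    }
    return prev_request;               // otherwise keep requesting the same amount
}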

The simulation runs under two available-bandwidth scenarios, and the performance of the Two-Tier Scheduling Algorithm (2TSA) and Strict Priority Scheduling (SPS) is compared. In the first scenario, the available bandwidth is set between the summation of all connections' Minimum Reserved Rates and the summation of their Maximum Sustained Rates. As expected, under the proposed mechanism all connections should end up in the Satisfied category, in contrast to the SPS method, which allows greedy rtPS connections to starve nrtPS and BE. In the second scenario, the available bandwidth is set larger than the summation of all connections' Maximum Sustained Rates. In Scenario II, two cases are run: in the first, only rtPS becomes greedy after five seconds; in the second, all connections are able to become greedy. The first case examines whether overcharging rtPS starves nrtPS or BE; the second examines whether the residual bandwidth is allocated more fairly to all connections while still letting the higher-QoS connections take more than the lower ones.

3.2 Performance Metrics

The performance metrics measured in the simulation include the average throughput and fairness degree.

(1) Average throughput (Φ_j):

This parameter is defined as the average allocated bandwidth of class j. For class j with connections i,

Φ_j = ( Σ_{i ∈ class j} B_i ) / n_j,

where B_i is the allocated bandwidth of connection i and n_j is the number of connections in class j.

(2) Fairness degree (FD):

This parameter indicates how fairly the bandwidth is shared by all connections under each approach. It is computed from the connections' share degrees (SDs), where n is the number of connections; the share degree of connection i, SD_i, indicates the relation between the connection's demand and its allocated bandwidth. The FD value lies in [0, 1] and reflects the variation among the connections' SDs: the smaller the variation, the larger the FD, and the fairer the allocation. A sketch of the FD and SD computations is given after this list.

(3) Average delay:

This parameter indicates the average delay of each class of service type, measured in milliseconds (ms). We calculate it as the total delay of the sent packets divided by the number of packets sent. If a flow is starved, its average delay after starvation is regarded as zero.

(4) Throughput per connection:

We also compute each connection's throughput at the end of the simulation. This value is calculated from each connection's actually allocated bandwidth during the simulation and the simulation time. The results are shown in table format.
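The exact FD and SD formulas did not survive extraction. A Jain-style fairness index over the SDs matches the stated properties (a value in [0, 1] that grows as the SDs vary less), and taking SD as the ratio of allocated bandwidth to demand matches the SD description; both choices in the C++ sketch below are assumptions rather than the thesis's exact definitions.

#include <vector>

// Assumed share degree: how much of a connection's demand was actually allocated.
double ShareDegree(double allocated, double demand) {
    return demand > 0.0 ? allocated / demand : 1.0;
}

// Assumed Jain-style fairness degree over the connections' share degrees:
// FD = (sum SD_i)^2 / (n * sum SD_i^2), which lies in [0, 1].
double FairnessDegree(const std::vector<double>& sd) {
    double sum = 0.0, sum_sq = 0.0;
    for (double s : sd) { sum += s; sum_sq += s * s; }
    if (sum_sq == 0.0) return 1.0;  // degenerate case: no connection has any share
    return (sum * sum) / (sd.size() * sum_sq);
}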

3.3 Simulation Results

3.3.1 Scenario I: Available bandwidth 8Mbits/sec

In the first scenario, the available bandwidth is set to 8 Mbits/sec. With Strict Priority Scheduling, only rtPS can be allocated its Maximum Sustained Rate. After allocating bandwidth to the UGS and rtPS connections, the residual bandwidth is 8 − (0.06 × 5) − (0.65 × 3 + 0.8 × 2 + 0.7 × 2) = 2.75 Mbits, which is not enough for all nrtPS flows at their Maximum Sustained Rates, i.e., 2.75 < 0.55 × 3 + 0.5 × 2 + 0.43 × 2 = 3.51 Mbits. The BE connections, of course, starve. As Fig. 3-2 shows, in the first five seconds the rtPS connections are allocated their Maximum Sustained Rate, while the nrtPS connections are allocated less than this rate and BE cannot gain any bandwidth. In the later five seconds, some rtPS connections become greedy and attempt to grab the nrtPS bandwidth; as a result, nrtPS is allocated even less than its Minimum Reserved Rate. The fairness index shown in Fig. 3-3 also indicates this damage to fairness: in the first five seconds, the fairness degree converges to around 0.2~0.3 owing to BE receiving no allocation, but this stable state breaks down and the fairness degree decreases seriously when the greedy connections appear in the later five seconds. On the other hand, when using 2TSA, all connections share the bandwidth proportionally and none exceeds its Maximum Sustained Rate. In Fig. 3-3 the fairness degree remains at almost one, even when some connections become greedy. The reason is that once a connection gets more bandwidth in the current frame, it loses its priority in the following frames; thus, even when rtPS becomes greedy in the last five seconds, it can only get the same rate as before. At the very beginning the fairness degree does not reach one, but in the following seconds it stays at almost one. The throughput, by contrast, looks the same from start to end, because we evaluate the throughput interval by interval, whereas for the fairness degree we use the globally allocated bandwidth as the index.

Figure 3-2: Average throughput in Scenario I

Figure 3-3: Fairness degree in Scenario I

For the average delay in Scenario I, we can see that under SPS the delay of rtPS is shorter than that of nrtPS, and the delay of BE is zero due to its starvation. rtPS keeps a very short delay because SPS supports its QoS priority. Under 2TSA, rtPS maintains an average delay of about 20 ms and nrtPS shows a shorter delay than under SPS; on the other hand, since BE is not starved, it also shows an average delay of about 120 ms. The results are shown in Fig. 3-4 below, and all connections' throughputs are shown in Tables 3-2, 3-3, and 3-4.

(Plot axes: simulation time (sec) vs. average delay (ms); curves: 2TSA-rtPS, 2TSA-nrtPS, 2TSA-BE, SPS-rtPS, SPS-nrtPS, SPS-BE)

Figure 3-4: Average delay in Scenario I

Table 3-2: Throughput of rtPS in 2TSA Scenario I

Table 3-3: Throughput of nrtPS in 2TSA Scenario I

Table 3-4: Throughput of BE in 2TSA Scenario I

3.3.2 Scenario II: Available bandwidth 12Mbits/sec

In the second scenario, the available bandwidth is set to 12 Mbits/sec, and two cases are simulated. In the first, only the rtPS connections become greedy after five seconds; in the second, all connections are able to become greedy after five seconds, as long as they find that their previous allocation quality is good and then try to send more data for better throughput.

When using Strict Priority Scheduling, the three types of services are initially allocated their Maximum Sustained Rates (Fig. 3-5, Fig. 3-6), because the available bandwidth (12 Mbits) is larger than the summation of all connections' Maximum Sustained Rates, i.e., 12 > 0.3 (UGS) + 4.95 (rtPS) + 3.51 (nrtPS) + 1.56 (BE) = 10.32 Mbits, and the remaining bandwidth becomes residual bandwidth. In the first five seconds the fairness degree remains in a stable state (Fig. 3-7), but in the later five seconds rtPS starts grabbing bandwidth and seriously damages nrtPS and BE; sooner or later, nrtPS and BE will be starved. In Fig. 3-7 the fairness degree drops to almost zero at the end, which indicates a very unfair state; in either case, starvation of nrtPS and BE happens under SPS. With 2TSA, by contrast, in the first five seconds each service gains its Maximum Sustained Rate, and the spare bandwidth (12 − 10.32 = 1.68 Mbits) is left unused as residual bandwidth. When the simulation time goes beyond five seconds, in Case 1 (see Fig. 3-5) the rtPS connections attempt to gain the residual bandwidth and are successfully allocated; the difference from Strict Priority Scheduling is that when rtPS overcharges, nrtPS and BE are not damaged. In Case 2 (see Fig. 3-6), because all connections are able to become greedy, the residual bandwidth is fairly allocated to the connections that request more, and the residual allocation of each connection also differs according to its QoS requirement.

This results from the design of QoS_Fac: connections with higher QoS requirements are assigned a higher QoS_Fac and therefore obtain a lower PF. As a result, the residual bandwidth allocation satisfies rtPS > nrtPS > BE. In the evaluation of the fairness degree in Fig. 3-7, after five seconds the fairness in Case 1 is affected by the greedy rtPS under either SPS or 2TSA, but the damage to fairness under 2TSA is obviously much smaller than under SPS. In Case 2, since all connections are able to become greedy after five seconds, the fairness degree of 2TSA outperforms that of Case 1. No matter which case we run, the fairness degree of SPS decreases seriously once the rtPS connections starve the nrtPS and BE ones.

Fig. 3-8 shows the long-term (100-second) observation: even though the fairness degree of 2TSA decreases in the first few seconds, it converges in the end and again remains in a stable state.


Figure 3-5: Average throughput in Scenario II-Case 1


Figure 3-6: Average throughput in Scenario II-Case 2


Figure 3-7: Fairness degree in Scenario II-10 seconds


Figure 3-8: Fairness degree in Scenario II-100 seconds

The average delays of Scenario II are shown in Fig. 3-9 and Fig. 3-10. In Case 1, only the rtPS connections become greedy, and rtPS lengthens its average delay after five seconds because some rtPS connections may also be affected by other greedy rtPS, no matter in

