
1.1 An Overview of the WiMAX System

Mobile WiMAX, based on the IEEE 802.16e standard [1], has been introduced as the first broadband wireless access technology that supports both fixed and mobile environments. The IEEE 802.16e standard specifies the physical (PHY) and Medium Access Control (MAC) layers, while the WiMAX Forum ensures interoperability between different manufacturers' devices. Three different PHY layers are defined: the Single-Carrier PHY, the Orthogonal Frequency-Division Multiplexing (OFDM) PHY, and the Orthogonal Frequency-Division Multiple Access (OFDMA) PHY. Compared to the other PHY technologies, the OFDMA PHY is superior and has therefore been selected by the WiMAX Forum, owing to features such as finer granularity in the radio resource allocation mechanism, more efficient use of the available power, and a significant cell-range extension in uplink transmission [2].

In the IEEE 802.16e OFDMA system, both Frequency Division Duplexing (FDD) and Time Division Duplexing (TDD) operation modes are specified, but most initial products and deployment scenarios currently focus on TDD operation because of its flexibility in partitioning downlink (DL) and uplink (UL) resources and its better channel reciprocity, which supports closed-loop enhancing techniques [3]. In a TDD system, the time domain is divided into 5 ms frames, each of which is separated into DL and UL sub-frames [4].

As can be seen in Fig. 1.1, an OFDMA DL sub-frame begins with a DL preamble, which is used for frame synchronization, channel state estimation, received signal strength measurement, and signal-to-interference-plus-noise ratio (SINR) estimation. It is followed by the Frame Control Header (FCH), downlink map (DL-MAP), and uplink map (UL-MAP) messages, which describe the structure and composition of the frame.

Fig. 1.1 A simple OFDMA DL sub-frame structure.

According to the IEEE 802.16e specification, the units of resource allocation are slots, bursts, and permutation zones. A slot is the minimum possible data allocation unit in the time and frequency dimensions; it is a combination of one sub-channel and one or more OFDMA symbols, depending on the sub-carrier permutation mode. On the downlink, a burst denotes a rectangular region consisting of a group of contiguous logical sub-channels over a group of contiguous OFDMA symbols. Each burst is transmitted using a single Modulation and Coding Scheme (MCS) and may include MAC protocol data units (PDUs) intended for one or more users. A permutation zone denotes an allocation in time during which one particular sub-carrier permutation mode is used to map data symbols onto sub-channels. A downlink sub-frame may contain more than one permutation zone.
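To make these terms concrete, the following sketch models a PUSC-zone DL sub-frame as a grid of slots and a burst as a rectangular region inside it. The class and field names are illustrative only and are not drawn from the standard.

```python
from dataclasses import dataclass

@dataclass
class Burst:
    """A rectangular burst: contiguous logical sub-channels spanning
    contiguous slot columns, carrying MAC PDUs for one or more users."""
    channel_offset: int   # first logical sub-channel (row in the grid)
    symbol_offset: int    # first slot column in the time dimension
    height: int           # number of sub-channels spanned
    width: int            # number of slot columns spanned
    mcs: str              # the single MCS used for the whole burst

@dataclass
class DownlinkSubframe:
    """A PUSC zone of a DL sub-frame viewed as a c-by-s grid of slots."""
    num_subchannels: int   # c: rows of the grid
    num_slot_columns: int  # s: columns of the grid
    bursts: list[Burst]

    def total_slots(self) -> int:
        return self.num_subchannels * self.num_slot_columns
```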

In this thesis, we only consider the downlink sub-frame with a Partially Used Sub-Channelization (PUSC) zone, in which the sub-carriers are distributed fairly over the entire frequency band by the PUSC permutation mode. In PUSC mode, a slot is a combination of one sub-channel and two OFDMA symbols; according to the parameters specified by the WiMAX Forum, a 10 MHz channel has 30 sub-channels, each consisting of 28 sub-carriers.
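As a quick numerical illustration, the snippet below derives the slot-grid dimensions of a DL PUSC zone. The 30 sub-channels and the two-symbol slot follow from the text above; the number of DL data symbols is an assumed example value, since the actual DL/UL split is a deployment choice.

```python
# Example parameters for a 10 MHz TDD profile: the 30 sub-channels and the
# two-symbol slot follow the text; the number of DL data symbols is only an
# assumed, illustrative DL/UL split.
NUM_SUBCHANNELS = 30   # c: DL PUSC sub-channels at 10 MHz
SYMBOLS_PER_SLOT = 2   # PUSC: one slot = 1 sub-channel x 2 OFDMA symbols
dl_data_symbols = 28   # hypothetical number of symbols left for data bursts

slot_columns = dl_data_symbols // SYMBOLS_PER_SLOT   # s
total_slots = NUM_SUBCHANNELS * slot_columns         # B = s * c
print(f"s = {slot_columns} slot columns, B = {total_slots} slots")
```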

The IEEE 802.16e standard has defined 5 scheduling service classes with different QoS requirements, including bandwidth, packet loss, delay, and delay jitter: Unsolicited Grand Service (UGS), real-time Polling Service (rtPS), extended real-time Polling Service (ertPS), non-real-time Polling Service (nrtPS), and Best Effort (BE) [5][6]; each class has different QoS parameters.
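For reference, the five classes can be listed as in the sketch below; the per-class comments summarize the commonly cited intent of each class and are not QoS parameters used later in this thesis.

```python
from enum import Enum

class ServiceClass(Enum):
    UGS = "Unsolicited Grant Service"             # fixed-size periodic grants, e.g. constant-rate voice
    RTPS = "real-time Polling Service"            # variable-size real-time flows, e.g. streaming video
    ERTPS = "extended real-time Polling Service"  # UGS-like grants with rtPS-like flexibility
    NRTPS = "non-real-time Polling Service"       # delay-tolerant flows with a minimum reserved rate
    BE = "Best Effort"                            # no throughput or delay guarantees
```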

Moreover, the standard also specifies that each data burst has to be mapped into a rectangular region of the downlink sub-frame. This constraint turns DL mapping into the two-dimensional downlink burst mapping problem introduced in the next section.

1.2 Downlink Burst Mapping Problem

The two-dimensional downlink burst mapping problem in the Mobile WiMAX system is a variation of the bin packing problem, which is known to be NP-complete [7].

To define the problem, assume that the DL sub-frame is a two-dimensional matrix of slots and that the area of each burst is expressed in terms of slots. Let $c$ and $s$ be the number of sub-channels and the number of time slots in a DL sub-frame, respectively.

Thus the total resource in a frame is $B = s \times c$ slots. Given a set of $n$ items $\{b_1, b_2, \ldots, b_n\}$, each item $b_i$ has a size $A_i$, $1 \le i \le n$. All items are mapped into $B$ under the following constraints:

No overlap between any two rectangular burst regions.

$A_i \le W_i \times H_i$, where $W_i$ and $H_i$ are the width and the height of the rectangular burst assigned to the $i$-th item, respectively.

$W_i \le s$ and $H_i \le c$ for all $i$.
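Continuing the illustrative Burst and DownlinkSubframe sketch from Section 1.1, these constraints can be phrased as a simple feasibility check. This is only a sketch of the formulation above, not an algorithm from the standard.

```python
def overlaps(a: Burst, b: Burst) -> bool:
    """True if the two rectangular burst regions share at least one slot."""
    return not (a.symbol_offset + a.width <= b.symbol_offset or
                b.symbol_offset + b.width <= a.symbol_offset or
                a.channel_offset + a.height <= b.channel_offset or
                b.channel_offset + b.height <= a.channel_offset)

def mapping_is_feasible(frame: DownlinkSubframe, sizes: list[int]) -> bool:
    """Check the three constraints, with frame.bursts[i] serving an item of
    size sizes[i] slots."""
    for burst, size in zip(frame.bursts, sizes):
        if size > burst.width * burst.height:       # A_i <= W_i * H_i
            return False
        if burst.width > frame.num_slot_columns:    # W_i <= s
            return False
        if burst.height > frame.num_subchannels:    # H_i <= c
            return False
    return not any(overlaps(frame.bursts[i], frame.bursts[j])   # no overlap
                   for i in range(len(frame.bursts))
                   for j in range(i + 1, len(frame.bursts)))
```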

However, OFDMA downlink burst mapping differs from typical bin packing. First, the dimensions of the rectangles are predetermined in bin packing, while bursts can be shaped flexibly. Second, there can be resource wastage due to the size mismatch between a data request and its allocated burst, which does not exist in bin packing. When the actual data belonging to a request cannot fill the whole rectangular region of a burst, the vacant slots are counted as over-allocated slot wastage within the burst. Besides, in some cases the remaining slots in the DL sub-frame cannot form a rectangle that fits any unmapped request; these remaining slots are counted as unused slot wastage outside the bursts. Both kinds of wasted slots reduce efficiency and should be minimized. Fig. 1.2 shows an example of resource wastage in DL mapping, where over-allocated slots and unused slots are shown in black and white, respectively.

Fig. 1.2 Resource wastage in DL Mapping.
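The two wastage quantities can be computed directly from the burst dimensions and item sizes. The following self-contained sketch, with made-up numbers, illustrates the bookkeeping.

```python
def over_allocated_wastage(burst_dims: list[tuple[int, int]], sizes: list[int]) -> int:
    """Vacant slots inside bursts: sum of W_i*H_i - A_i over all mapped items."""
    return sum(w * h - a for (w, h), a in zip(burst_dims, sizes))

def unused_wastage(total_slots: int, burst_dims: list[tuple[int, int]]) -> int:
    """Slots of the DL sub-frame that lie outside every mapped burst."""
    return total_slots - sum(w * h for w, h in burst_dims)

# Toy example: a 30 x 15 slot grid (450 slots) with three mapped bursts.
dims = [(5, 10), (4, 8), (6, 6)]   # (width, height) of each burst
sizes = [46, 30, 33]               # slots actually needed by each request
print(over_allocated_wastage(dims, sizes))   # 9 over-allocated slots
print(unused_wastage(450, dims))             # 332 unused slots
```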

1.3 Objective

The problem of efficiently mapping the DL data requests into rectangular regions of the DL sub-frame, as described above, is not addressed by the IEEE 802.16e standard and is left as an implementation issue. As a result, several DL mapping algorithms have recently been proposed in the literature [8]-[11]. In particular, a mapping algorithm that handles two levels of data requests, i.e., urgent and non-urgent data, was presented in [12]. For convenience, we shall refer to this algorithm as Two-Level Requests Mapping (TLRM).

In the TLRM algorithm, each user's request can consist of urgent and non-urgent data.

The algorithm consists of two phases. Phase 1 maps all requests, with both their urgent and non-urgent parts, into the DL sub-frame, and Phase 2 returns some mapped non-urgent data parts so that more urgent data parts can then be mapped. The goal of the TLRM algorithm is to map real-time traffic effectively, but it does not focus on the urgent parts in the first phase. Moreover, if several MCSs are used, a mapping order based only on the required number of slots becomes inefficient. Suppose that there are two requests of the same size. The one with a worse MCS may be mapped before the other, which is then left unmapped when there are not enough slots for both, resulting in low spectrum efficiency.
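The ordering issue can be illustrated with two equal-sized requests served at different MCSs: sorting by required slots alone cannot distinguish them, although serving the better-MCS request carries far more payload per slot. The bits-per-slot figures below are illustrative values, not a normative table from the standard.

```python
# Illustrative bits-per-slot values for two MCS levels (assumed numbers,
# not a normative table from the standard).
BITS_PER_SLOT = {"QPSK 1/2": 48, "16QAM 3/4": 144}

requests = [
    {"user": "A", "slots": 40, "mcs": "QPSK 1/2"},
    {"user": "B", "slots": 40, "mcs": "16QAM 3/4"},
]

free_slots = 40   # only one of the two equal-sized requests can fit

# Sorting by required slots alone is a tie; if A happens to be mapped first,
# the sub-frame carries 40 * 48 bits instead of 40 * 144 bits.
by_slots_only = sorted(requests, key=lambda r: r["slots"], reverse=True)
served = by_slots_only[0]
print("served:", served["user"],
      "payload:", served["slots"] * BITS_PER_SLOT[served["mcs"]], "bits")
```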

The purpose of this thesis is to present an enhanced version of the TLRM algorithm.

In addition to the number of required slots, we also consider the achievable Modulation and Coding Scheme (MCS) of the urgent data in determining the order of data mapping. Simulation results show that, compared with the previous design, the enhanced algorithm serves more urgent data with higher throughput.
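Continuing the previous sketch, one hypothetical way to fold the MCS into the ordering is a composite sort key that breaks slot-count ties by the efficiency of the urgent data's MCS. This only illustrates the idea; the exact criterion used by E-TLRM is defined in Chapter 3.

```python
def mapping_order_key(request: dict) -> tuple:
    """Sort descending by required slots, breaking ties by the bits per slot
    of the urgent data's MCS (an illustrative rule only)."""
    return (request["slots"], BITS_PER_SLOT[request["mcs"]])

ordered = sorted(requests, key=mapping_order_key, reverse=True)
print([r["user"] for r in ordered])   # ['B', 'A']: same size, better MCS first
```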

1.4 Thesis Organization

The rest of this thesis is organized as follows. In Chapter 2, the related algorithms eOCSA and TLRM are reviewed in detail. Our proposed data mapping algorithm for WiMAX systems, called Enhanced Two-Level Requests Mapping (E-TLRM), is described in Chapter 3. The performance of E-TLRM is evaluated and compared with eOCSA and TLRM in Chapter 4.

Finally, Chapter 5 gives our conclusions.
