
Improving the Performances of Distributed Coordinated Scheduling in IEEE 802.16 Mesh Networks

Shie-Yuan Wang, Senior Member, IEEE, Chih-Che Lin, Student Member, IEEE, Han-Wei Chu, Teng-Wei Hsu, and Ku-Han Fang

Abstract—The IEEE 802.16 mesh network is a promising next-generation wireless backbone network. In such a network, setting the holdoff time for nodes is essential to achieving good performances of medium-access-control-layer scheduling. In this paper, we propose a two-phase holdoff time setting scheme to improve network utilization. Both static and dynamic approaches of this scheme are proposed, and their performances are compared against those of the original schemes. Our simulation results show that both approaches significantly increase the utilization of the control-plane bandwidth and decrease the time required to establish data schedules. In addition, both approaches provide efficient and fair scheduling for IEEE 802.16 mesh networks and generate good application performances.

Index Terms—Distributed scheduling, IEEE 802.16(d), mesh network.

I. INTRODUCTION

In recent years, the IEEE 802.16 standard (WiMAX) [1] has been attracting much attention. This next-generation broadband communication technology aims to provide high network throughput, low packet delay, and low packet loss rate. In this standard, two operational modes are defined: 1) the point-to-multipoint (PMP) mode and 2) the mesh mode. The PMP mode provides for one-hop communication between a base station (BS) and several subscriber stations (SSs). In contrast, the mesh mode constructs a multihop wireless backbone network.

In the mesh mode, packets can be transferred in a peer-to-peer manner, i.e., two SS nodes can directly communicate with each other without the aid of the BS node. In this mode, network accesses are managed in a manner much like time-division multiple access. As shown in Fig. 1, the network bandwidth is first divided into frames, each of which is subdivided into one control and one data subframe. A control subframe is further

Manuscript received May 15, 2007; revised August 16, 2007 and October 4, 2007. This work was supported in part by the Ministry of Education Program for Promoting Academic Excellence of Universities under Grant 95-2752-E-009-014-PAE and in part by the National Science Council under Grant 96-2628-E-009-073. The review of this paper was coordinated by Dr. Q. Zhang.

S.-Y. Wang, C.-C. Lin, H.-W. Chu, and T.-W. Hsu are with the Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan, R.O.C. (e-mail: shieyuan@csie.nctu.edu.tw).

K.-H. Fang is with the Cyberlink Corporation, Taipei 231, Taiwan, R.O.C. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TVT.2007.912330

divided into transmission opportunities (TxOpp), whereas a data subframe is further divided into minislots. Control messages and data packets are transferred over transmission opportunities and minislots, respectively.

To avoid conflict when using transmission opportunities, the mesh mode defines two scheduling modes: 1) the centralized mode and 2) the distributed mode. In the centralized scheduling mode, a network is partitioned into tree-based clusters. Each cluster has a BS node that is responsible for allocating network resources to the SS nodes that it services. Although the centralized scheduling mode provides collision-free transmissions for control and data packets, it has several disadvantages, which are described here.

First, the number of routes that can be utilized is unnecessarily reduced. The reason is that the centralized scheduling mode uses a tree-based topology, which cannot exploit all possible routes in a network, as compared with a mesh-based topology. For the same reason, the only route between two nodes on the tree may not be the shortest one between them if, instead, a mesh-based topology were used. In addition, the root node of the tree is likely to become the performance bottleneck, because many packets need to pass through it to reach their destination nodes. Second, it is difficult to efficiently exploit the spatial reuse property of wireless communication in the centralized scheduling mode. The message format defined in this mode only allows a BS node to notify an SS node of the bandwidth allocated to it. There is no field in the message that allows a BS node to specify the start and end minislot offsets for an allocation. As such, to avoid interference, each SS node has to take a conservative approach to derive its own data schedule. Allocating minislots in this way is collision-free but results in only one active SS node per cluster at any given time. More detailed explanations about this problem are provided in the Appendix.

In contrast, the distributed scheduling mode provides two advantages: First, the distributed scheduling mode uses a mesh topology. This allows all possible routing paths to be utilized to avoid performance bottlenecks. In addition, spatial reuse of wireless communication can be exploited to increase network capacity. Second, the distributed scheduling mode establishes data schedules on an on-demand basis; thus, network bandwidth can be more efficiently utilized.

In the IEEE 802.16 mesh network standard, the distributed scheduling mode is further divided into two operational modes: 1) the coordinated mode and 2) the uncoordinated mode.


Fig. 1. Frame structure of the IEEE 802.16 mesh mode.

Fig. 2. Node’s transmission cycle comprises the holdoff time and the contention time.

In the distributed coordinated scheduling mode, the control messages required to establish data schedules are transmitted over transmission opportunities without collisions. In contrast, in the distributed uncoordinated scheduling mode, such control messages can only be transmitted on the transmission opportunities left over from the distributed coordinated scheduling mode or on unallocated minislots. Because of this design, the distributed coordinated scheduling mode provides better quality-of-service (QoS) support than the distributed uncoordinated scheduling mode. In this paper, we focus only on the distributed coordinated scheduling mode.

In the distributed coordinated scheduling mode, the role of every network node is the same. Each node uses the same pseudo-random election algorithm to resolve contention of transmission opportunities. Control messages, such as mesh network configuration (MSH-NCFG) and mesh distributed coordinated scheduling (MSH-DSCH) messages, are transmitted on transmission opportunities determined by this algorithm. (The MSH-NCFG message is used to carry information for network initialization, and the MSH-DSCH message is used to carry information for data scheduling.) The contention resolution of MSH-DSCH message transmissions is explained here to illustrate how this pseudo-random algorithm works.

During operation, each node listens to the MSH-DSCH messages advertised by its neighboring nodes. Based on the scheduling information carried in the MSH-DSCH messages, each node knows the transmission interval (more details about this will be illustrated and explained in Fig. 9) of each of its neighboring nodes. More specifically, it knows for which transmission opportunities its neighboring nodes may contend. Each node then uses the same election algorithm to determine the winning node for a given transmission opportunity. This algorithm takes a specified transmission opportunity number and the IDs of all the nodes contending for this transmission opportunity as input. It outputs the ID of the winning node whose computed value is the largest among all the competing nodes. Since every node uses the same algorithm and the same input, every node knows which node will win a given

transmission opportunity within its two-hop neighborhood (explained in Section III-A2). As such, no collision will occur on any transmission opportunity. For a node, if it cannot win a given transmission opportunity, it repeats the aforementioned process, with the next transmission opportunity (i.e., the previous transmission opportunity number plus one) as input, until it eventually wins one transmission opportunity.
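The contention resolution just described can be pictured with a short sketch. This is illustrative only: the standard defines its own pseudo-random mixing function inside the mesh election procedure, and the SHA-256-based mix() below is merely a deterministic stand-in for it, not the normative algorithm.

```python
import hashlib

def mix(node_id: int, txopp: int) -> int:
    # Stand-in for the standard's pseudo-random mixing function: any
    # deterministic hash of (node_id, txopp) works for illustration.
    digest = hashlib.sha256(f"{node_id}:{txopp}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def election_winner(txopp: int, competing_ids: list[int]) -> int:
    # Every node evaluates the same function over the same inputs,
    # so all nodes agree on the winner without extra message exchanges.
    return max(competing_ids, key=lambda nid: mix(nid, txopp))

def next_winning_txopp(node_id: int, start_txopp: int, contenders: list[int]) -> int:
    # Advance one TxOpp at a time until node_id wins against the given
    # (assumed fixed) set of contending node IDs, node_id included.
    txopp = start_txopp
    while election_winner(txopp, contenders) != node_id:
        txopp += 1
    return txopp

print(next_winning_txopp(5, 100, [1, 3, 5, 9]))
```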

After a node wins a transmission opportunity, the IEEE 802.16 standard requires it to refrain from contending for another transmission opportunity in a certain number of consecutive transmission opportunities, which is called the holdoff time. As shown in Fig. 2, a node’s transmission cycle comprises the holdoff time and the contention time. The contention time is defined as the number of consecutive transmission opportunities in which a node should contend for access (i.e., participate in the algorithm computation) until it wins one. Although this holdoff time design may increase delays in transmitting control messages, it allows for fair accesses to transmission opportunities among competing nodes. Using this design, every winning node has to suspend its contention for its next MSH-DSCH transmission opportunity during its holdoff time. As such, a winning node cannot obtain more than one transmission opportunity in one transmission cycle, preventing a node from monopolizing a wireless channel.

The effect of the holdoff time value can be discussed from two aspects: On one hand, if the holdoff time is set to a too large value, network nodes will suffer from long delays in transmitting control messages and thus cannot fully utilize the link bandwidth. In the distributed coordinated scheduling mode, a network node can transmit a data packet only after it has established a data schedule with the next-hop node of the packet. To establish a data schedule, a node needs to perform a three-way handshake procedure, which requires three transmission opportunities to exchange three MSH-DSCH messages. Since using a large holdoff time value will increase the transmission cycle for transmitting a control message, the time required to establish a data schedule (and, thus, sending a data packet) will be increased as well. On the other hand, if the holdoff time is set to a too-small value, the number of nodes competing


for a transmission opportunity will be large. This will lead to high contention for transmission opportunities. Under such a congested condition, nodes may experience unpredictable packet delays and unfairly share transmission opportunities. In summary, the holdoff time should be set to an appropriate value that is large enough to avoid congestion but small enough to avoid large transmission delays.

In the standard, every node in a network is required to use a holdoff time value that is greater than or equal to 16. This design cannot perform optimally for all networks, because it does not consider the possibility that node density may vary at different locations of the network. Ideally, if a node is located in a high-node-density area, it should use a larger holdoff time value to avoid congestion. In contrast, if a node is located in a low-node-density area, it should use a smaller holdoff time value to reduce unnecessarily large transmission delays.

To address these shortcomings of the standard, we propose a two-phase holdoff time setting scheme in this paper. This scheme uses different holdoff time values for the network initialization phase and the data scheduling phase. As such, the success of network initialization can be guaranteed, and good scheduling performances can be achieved. In addition, the scheme allows different nodes to use different holdoff time values to match the node densities around them, which reduces unnecessary transmission delays and improves network utilization without causing network congestion. Static and dynamic approaches of this scheme are proposed, and their performances are compared with those of the original schemes under various network conditions. Our simulation results show that the dynamic approach significantly improves network performances without causing network congestion for IEEE 802.16 mesh networks.

The remainder of this paper is organized as follows: In Section II, we survey related work. In Section III, the necessity of the two-phase holdoff time setting scheme is explained. In Section IV, we present the design of the two-phase holdoff time setting scheme. We also present the details of the static and dynamic approaches. In Section V, we compare the performances of the static and dynamic approaches with those of the schemes that use a fixed holdoff time value for all nodes. Finally, we conclude this paper in Section VI.

II. RELATED WORK

So far, very few papers in the literature have studied the performances of the distributed coordinated scheduling mode of the IEEE 802.16 mesh network. Most papers (e.g., [2]–[4]) focus on the centralized scheduling mode. For the distributed coordinated scheduling mode, Cao et al. [5] studied the effect of the holdoff time value in terms of control message transmission cycles and the time required to establish data schedules. The definition of the holdoff time specified in the standard is shown in

$$\text{holdoff time} = 2^{exp+base} \qquad (1)$$

where the base should be set to 4, and the exponent value can vary. The authors vary the holdoff time exponent value from 0 to 4 to observe the effects of the resultant holdoff time value. In [6], Bayer et al. analyzed control message transmission cycles and round trip times (RTTs) using different holdoff time base values.
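As a quick check of (1), the mandated base of 4 with exponents 0 through 4 yields holdoff times of 16 to 256 transmission opportunities; the one-line helper below (illustrative only) makes this explicit.

```python
def holdoff_time(exp: int, base: int = 4) -> int:
    # Holdoff time in transmission opportunities, per (1).
    return 2 ** (exp + base)

print([holdoff_time(e) for e in range(5)])  # [16, 32, 64, 128, 256]
```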

In [7], Bayer et al. proposed a dynamic holdoff time setting scheme to improve the scheduling performance of the distributed scheduling mode. The goal of their paper is similar to that of our paper. However, there are fundamental design and implementation differences between these two papers. In the following, we present some details of their scheme to point out the differences.

The main idea of the scheme proposed in [7] is given as follows: Network nodes that are transmitting, receiving, or forwarding data packets should use smaller holdoff time values to exchange MSH-DSCH messages more quickly. In contrast, nodes that are not transmitting, receiving, or forwarding data packets should use larger holdoff time values to reduce contentions for transmission opportunities. Bayer et al. classify network nodes into four classes: 1) BS; 2) active; 3) sponsoring; and 4) inactive. The BS class comprises all BS nodes in the network; the sponsoring class consists of nodes that are sponsoring new nodes (i.e., allocating data schedules for forwarding the control messages initiated from a new node’s network registration procedure). A sponsoring node is a node that has been selected by one of its neighbors as the next-hop node toward the destination. Network nodes that are transmitting, receiving, or forwarding data packets are called active nodes; nodes that are idle are called inactive nodes. Each of these four classes has its own range of holdoff time exponent values. The upper bounds of these classes’ holdoff time exponent values are shown in the following:

$$0 \le ME_{BS} \le ME_{act} < ME_{sp} < ME_{inact} \le 7 \qquad (2)$$

where ME_BS denotes the maximum holdoff time exponent value of the BS class, ME_act denotes the maximum holdoff time exponent value of the active class, ME_sp denotes the maximum holdoff time exponent value of the sponsoring class, and ME_inact denotes the maximum holdoff time exponent value of the inactive class.

Since a smaller holdoff time exponent value results in a smaller holdoff time value, nodes using a smaller holdoff time exponent value will refrain from contending for transmission opportunities for a shorter period of time. Such nodes, therefore, can, on average, win a transmission opportunity faster than nodes using a larger holdoff time exponent value. For this reason, the nodes of the BS class, on average, can transmit MSH-DSCH messages faster than (or as fast as, if ME_BS = ME_act) the nodes belonging to the active class. Similarly, the nodes belonging to the active class, on average, can transmit MSH-DSCH messages faster than the nodes belonging to the sponsoring or inactive class.

Although this scheme provides some advantages, it has several disadvantages described here.

First, this scheme needs to rely on a collaborative routing protocol to determine if a node should belong to the active class or the inactive class. For each node, it should consult the routing protocol to check whether it is selected as a potential


forwarding node for a routing path. If so, it should switch to the active class. Otherwise, it should belong to the inactive class.

Second, when a mesh network becomes highly loaded, it is very likely that every node has data to send most of the time. In such a condition, most of the nodes will belong to the active class and thus will have the same range of holdoff time values. As such, the multi-class design of this scheme will degenerate to the original single-class design in this condition. To avoid this problem, which is caused by a heavy load, an admission control mechanism can be used.

Third, not all active nodes actually need to use smaller holdoff time values at all times. Active nodes need not establish data schedules over all transmission opportunities that they win. This is because an established data schedule can be valid for N frames, where 1 ≤ N ≤ 128. (Note that each frame can contain M transmission opportunities, where 2 ≤ M ≤ 15, depending on the network setting.) If an active node excessively wins transmission opportunities without considering whether it has data to send, it will waste many transmission opportunities that could otherwise be given to other nodes that have data to send.

Lastly, in this scheme, when switching to a higher priority class, a node should first set its holdoff time exponent value to zero and then gradually increment this value by one until its and its neighboring nodes’ Mx values are no longer above a predefined threshold. This means that, before a node can stabilize its holdoff time exponent value, it will excessively contend for transmission opportunities and thus waste the control-plane bandwidth. To understand the meaning of Mx, the standard defines that each node should use two shorter fixed-length fields exp and Mx to represent its next transmission opportunity number. The relationship between these two fields and a transmission opportunity number is given as follows:

$$2^{exp} \cdot Mx < \mathit{next\_TxOpp} \le 2^{exp} \cdot (Mx + 1) \qquad (3)$$

where next_TxOpp denotes a node’s next transmission opportunity number. The detailed information about (3) will be explained in Section IV-C. In contrast, our proposed two-phase holdoff time setting scheme does not incur the aforementioned problems. In addition to improving the scheduling performances of the distributed coordinated mode, our proposed scheme ensures the success of network initialization without wasting the control-plane bandwidth. The dynamic approach of our proposed scheme can dynamically reduce a node’s holdoff time exponent value when the node needs to establish a data schedule. As such, the dynamic approach can more efficiently use transmission opportunities than the scheme proposed in [7]. Besides, the dynamic approach need not employ a collaborative routing protocol; therefore, its design and implementation complexities are much lower than those of the scheme proposed in [7].

III. NECESSITY OF THE TWO-PHASE HOLDOFF TIME SETTING SCHEME

In this section, we explain why the proposed two-phase holdoff time setting scheme is necessary and important. The holdoff time value affects not only the efficiency of medium access

control (MAC)-layer scheduling but the success of network initialization as well. Network initialization is very important, because a node must successfully initialize and attach itself to the network before it can start transmitting and receiving packets. When designing a new holdoff time setting scheme, it is important to ensure that the network nodes under the new scheme can still successfully attach themselves to the network. When evaluating a holdoff time setting scheme, three aspects must be carefully considered: 1) the success of network initialization; 2) the efficiency of MAC-layer scheduling; and 3) the fairness of resource sharing. In the following, we first define several relevant performance metrics and explain their meanings. These metrics will be used throughout this paper. Then, we elaborate on the effect of the holdoff time value using the simulation results of four fixed-value holdoff time setting schemes. Finally, the necessity of the two-phase holdoff time setting scheme is explained.

A. Performance Metrics

In the following, several performance metrics used through-out this paper are defined.

1) SRNI: The success rate of network initialization (SRNI) is defined as

$$\mathrm{SRNI} = \frac{NC_{success}}{NC_{total}} \qquad (4)$$

where NC_success denotes the number of cases in which the network succeeds in initialization, and NC_total denotes the number of total cases.

The success of a network initialization is defined as follows: For a network case, if all of its nodes successfully initialize and attach themselves to the network, the initialization of this network case succeeds. In contrast, if any node fails to perform its initialization and attachment procedures, the initialization of this network case fails. As stated before, the success of network initialization is very important, because, before a node can start sending or receiving packets, it must successfully initialize and attach itself to the network. In [8], we show that, in dense networks, excessive control message transmissions can lead to severe message collisions due to the well-known “hidden terminal” problem. As such, if the holdoff time value is set to a value that is too small, the control messages containing information for a node to attach itself to the network will be excessively dropped. In such a condition, the node’s initialization process will fail, causing the SRNI to decrease.

2) ATOUN: The utilization of a node’s control-plane bandwidth is an important metric used to evaluate the efficiency of a holdoff time setting scheme. To define the utilization of the control-plane bandwidth from the perspective of an SS node, one should first understand the notion of the two-hop neighborhood. The two-hop neighborhood of a node is defined as the set comprising all of its one-hop and two-hop neighboring nodes (including itself), i.e.,

$$nbr(j) = \{\, i \mid \text{node } i \in \text{node } j\text{'s two-hop neighborhood} \,\}. \qquad (5)$$

For an SS node, since the hidden terminal problem can occur only with nodes that are in its two-hop neighborhood, the IEEE 802.16 standard requires that each node resolves the contention of each transmission opportunity with the nodes in its two-hop neighborhood. As such, within a node’s two-hop neighborhood, only one node can transmit a control message at any given transmission opportunity.

Fig. 3. Good case for establishing a data schedule in the distributed coordinated scheduling mode when the network is not congested.
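The set nbr(j) in (5) can be computed directly from a node's link table. The sketch below assumes the topology is available as an undirected adjacency map keyed by node ID; the map adj and the helper name are illustrative, not part of the standard.

```python
def two_hop_neighborhood(adj: dict[int, set[int]], j: int) -> set[int]:
    # nbr(j) from (5): node j itself plus its one-hop and two-hop neighbors.
    one_hop = adj[j]
    two_hop = set().union(*(adj[n] for n in one_hop))
    return {j} | one_hop | two_hop

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(two_hop_neighborhood(adj, 1))  # {1, 2, 3}
```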

The average transmission opportunity utilization viewed from node j is defined as follows:

$$\mathrm{AvgTxOpp}(j) = \frac{\sum_{i \in nbr(j)} txnum(i)}{total(j)} \qquad (6)$$

where txnum(j) denotes the number of transmission opportunities won by node j, and total(j) denotes the number of total transmission opportunities since node j has attached itself to the network. This definition indicates how well the nodes in node j’s two-hop neighborhood (including node j itself) together utilize the network’s transmission opportunities. Ideally, the average transmission opportunity utilization viewed from each node should be 100%, which indicates that, from the perspective of each node, each transmission opportunity is used by one and only one node, and no transmission opportunity is left unused.

The average transmission opportunity utilization of nodes (ATOUN) metric of a network case is defined as follows:

$$\mathrm{ATOUN} = \frac{\sum_{j=1}^{m} \mathrm{AvgTxOpp}(j)}{m} \qquad (7)$$

where m is the number of nodes in a network case. It is the average across all the nodes’ AvgTxOpp values in a network case. The ATOUN metric reflects the utilization of the control-plane bandwidth from the aggregate of the local view of each node. A higher value of this metric indicates that a network case has higher control-plane bandwidth utilization; a lower value indicates that a network case has lower control-plane bandwidth utilization.

The ATOUN metric does not measure the fairness of bandwidth sharing in a network. To solve this problem, we designed another metric, which is explained in Section III-A4, to evaluate how fairly network nodes share the control-plane bandwidth.
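Computed from per-node counters, (6) and (7) amount to the following sketch. The names txnum, total, and nbr are illustrative bookkeeping a simulator would maintain, with nbr mapping each node to its two-hop neighborhood from (5).

```python
def avg_txopp(j, nbr, txnum, total):
    # AvgTxOpp(j) from (6): TxOpps won inside node j's two-hop
    # neighborhood, divided by the TxOpps elapsed since j attached.
    return sum(txnum[i] for i in nbr[j]) / total[j]

def atoun(nodes, nbr, txnum, total):
    # ATOUN from (7): the average of AvgTxOpp over all m nodes in the case.
    return sum(avg_txopp(j, nbr, txnum, total) for j in nodes) / len(nodes)
```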

3) ATHPT: The average three-way handshake procedure time (ATHPT) metric is defined as the average time required by the three-way handshake procedure to establish a data schedule across all network nodes in a case. This metric is computed as follows: For a network case, we first use the following expression to average the times required to establish data schedules for every node:

$$\mathrm{THPT}(j) = \frac{\sum_{i=1}^{n} t_{ij}}{n} \qquad (8)$$

where t_{ij} denotes the time required to establish the ith data schedule of node j, and n is the number of node j’s data schedules. We then use the following expression to compute the case’s ATHPT value, which is the average across all the nodes’ THPT values:

$$\mathrm{ATHPT} = \frac{\sum_{j=1}^{m} \mathrm{THPT}(j)}{m} \qquad (9)$$

where m is the number of nodes in a network case. Like the ATOUN metric, for each scheme, the average and standard deviation of its ATHPT values across all simulation cases will be presented.

ATHPT is a common metric used in the literature to evaluate the effect of the holdoff time value. The three-way handshake procedure requires transmitting three MSH-DSCH messages, each of which contains the request, grant, and confirm information elements (IEs), respectively. The detailed procedure is described here.

First, the requesting node transmits a request IE to the peer node. The request IE specifies the number of requested minislots on the peer node and the available minislots on the requesting node from which the peer node can choose. On receiving the request IE, the peer node decides whether it would like to accept this request. If not, it ignores this message. Otherwise, out of its own available minislots, it allocates a data schedule from the requesting node’s available minislots. The peer node then transmits a grant IE containing the information of the allocated data schedule to the requesting node. Upon receiving the grant IE, the requesting node broadcasts a confirm IE to all of its neighboring nodes to notify them of this allocation information.

The reasons for the preceding procedure are clear. First, the minislots of the requesting and peer nodes are already synchronized over the time axis in an IEEE 802.16 mesh network. Second, when the requesting node is transmitting data to the peer node, the peer node must be able to receive the data at the same time. Therefore, the requesting node must negotiate with the peer node to find a range of minislots that is available to both of them (i.e., good for transmitting at the requesting node and good for receiving at the peer node) and can accommodate the requested number of minislots.
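The negotiation step of the handshake can be pictured with the following sketch. The message layout and the first-fit choice of a common minislot range are simplifications for illustration, not the exact IE encoding of the standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RequestIE:
    demand: int               # number of minislots requested
    free_ranges: list[range]  # minislot ranges currently free at the requester

def choose_grant(req: RequestIE, peer_free: list[range]) -> Optional[range]:
    # Peer-node side of the handshake: pick the first range of minislots
    # that is free at BOTH ends and large enough for the demand.
    for r in req.free_ranges:
        for p in peer_free:
            lo, hi = max(r.start, p.start), min(r.stop, p.stop)
            if hi - lo >= req.demand:
                return range(lo, lo + req.demand)
    return None  # no common range: the peer simply ignores the request

# The requester then broadcasts a confirm IE carrying the granted range so
# that its two-hop neighbors can mark those minislots as busy.
grant = choose_grant(RequestIE(4, [range(10, 30)]), [range(20, 40)])
print(grant)  # range(20, 24)
```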

Figs. 3 and 4 are two examples showing the effect of the holdoff time value. (Note that these two figures are for illustration purposes only, and the minimum number of slots between subsequent MSH-DSCH messages should be 16, according to the standard.) Fig. 3 shows a good case for establishing a data schedule when the network is not congested. In this case, all control messages required to establish a data schedule are exchanged within one control subframe due to the use of a small holdoff time value. As such, data packets can be quickly transmitted within the same frame in which the control messages are transmitted. In contrast, Fig. 4 shows a bad case for establishing a data schedule when the network is not congested. In this case, the three control messages are transmitted over three different frames due to the use of a large holdoff time value. In such a condition, the data packets can only be transmitted over the minislots that are at least two frames away from the transmission of the request IE. Since a node is allowed to transmit data packets to its neighboring node only after they have established a data schedule, the increased delay of the three-way handshake procedure directly degrades the network quality experienced by application programs.

Fig. 4. Bad case for establishing a data schedule in the distributed coordinated scheduling mode when the network is not congested.

4) IICMT: We define a new performance metric called the inefficiency index of control message transmissions (IICMT) to evaluate how fairly network nodes share the control-plane bandwidth. To understand IICMT, one should first realize that, viewed from a node, if every node in its two-hop neighborhood has data to send at any given time, the optimal way to schedule these nodes’ control message transmissions is to schedule them roughly in a round-robin fashion. That is, on average, a node should transmit one and only one control message every N transmission opportunities, where N is the number of nodes in its two-hop neighborhood (including itself). We call this round-robin scheme “the static optimal scheme” in this paper. This is the optimal design for a static network in which every node has data to send at all times. This is because, when a node wants to transmit a control message, the transmission must be resolved among all the nodes in its two-hop neighborhood. As such, to avoid congestion while reducing transmission delays, on average, a node can only transmit a control message every N transmission opportunities, where N is as previously defined.

For a fixed-value holdoff time setting scheme, to avoid any congestion from occurring in the network, the maximum of the N values of all nodes should be used as the fixed value for all nodes. Since, in a general network, not all nodes have the same N value, this fixed-value approach will waste the transmission opportunities of the nodes whose N values are smaller than the maximum one.

We define a node’s transmission opportunity utilization during a period as the ratio of the number of transmission opportunities that it wins during the period to the total number of transmission opportunities that are available during that period. If such ratios of all nodes under a holdoff time setting scheme closely approximate their counterparts under the static optimal scheme, this holdoff time setting scheme is considered to perform as well as the static optimal scheme.

The following explains the steps used to compute IICMT. First, we convert the actual utilization ratio of node i into the logarithmic form given as follows:

$$R_1(i) = -\log_2\left(\frac{\mathrm{NumTxOpp}_{win}(i)}{\mathrm{NumTxOpp}_{total}}\right) \qquad (10)$$

where NumTxOpp_win(i) denotes the number of transmission opportunities that node i wins in the period, and NumTxOpp_total denotes the total number of transmission opportunities that are available in the period. In addition, the logarithmic form of the static optimal utilization ratio for a node i is shown in the following:

$$R_2(i) = -\log_2\left(\frac{1}{|nbr(i)|}\right) \qquad (11)$$

where |nbr(i)| denotes the number of nodes in node i’s two-hop neighborhood. Second, the absolute value of the difference between the two logarithms is computed and given by

$$\mathit{Diff}(i) = |R_1(i) - R_2(i)|. \qquad (12)$$

One can use this value to evaluate the inefficiency and unfairness degree of a node’s control message scheduling. The best value for this difference is zero, which means that the scheduling generated by the used scheme for this node is equivalent to that generated by the static optimal scheme. If this difference value increases, it means that the scheme performs worse than the static optimal scheme.

Finally, the IICMT metric is defined as the sum of the Diff values of all the nodes in a network and given by

$$\mathrm{IICMT} = \sum_{i=1}^{m} \mathit{Diff}(i) \qquad (13)$$

where m is the number of nodes in a network case.

The rationale for this metric is that, when evaluating a holdoff time setting scheme, one should consider its impacts on all network nodes. This sum shows the degree of inefficient and unfair use of available transmission opportunities across all the nodes in a network. A zero IICMT value means that a holdoff time setting scheme schedules transmission opportunities for the whole network as if the static optimal scheme were used. In contrast, a nonzero IICMT value indicates that the scheme either inefficiently or unfairly schedules transmission opportunities when compared with the static optimal scheme. As expected, a high IICMT value indicates that the used holdoff time setting scheme deviates much from the static optimal scheme.
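Equations (10)–(13) reduce to a short routine. The argument names (nodes, nbr, wins, total_txopps) are illustrative bookkeeping a simulator would provide, and wins[i] is assumed to be nonzero so the logarithm is defined.

```python
import math

def iicmt(nodes, nbr, wins, total_txopps):
    # IICMT from (13): sum over all nodes of |R1(i) - R2(i)|, where R1 is the
    # log of the achieved TxOpp share (10) and R2 the log of the static
    # optimal share 1/|nbr(i)| (11).
    score = 0.0
    for i in nodes:
        r1 = -math.log2(wins[i] / total_txopps)   # (10)
        r2 = -math.log2(1.0 / len(nbr[i]))        # (11)
        score += abs(r1 - r2)                     # (12), summed per (13)
    return score
```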

B. Effect of the Holdoff Time Value

We use the NCTUns network simulator [9] to evaluate the effect of the holdoff time value. We use ten random connected topologies and derive average simulation results from them to eliminate the effects that may be caused by using a single specific topology. To generate such topologies, we randomly distribute one BS node and 99 SS nodes within a square area with a side length of 2500 m. We then check whether the generated topology is partitioned or not. If it is partitioned, it is discarded, and the aforementioned process is repeated until a random connected topology is generated. The whole process is repeated ten times to generate ten random connected topologies. These topologies represent different random dense wireless backbone networks. For each studied holdoff time setting scheme, we conducted its simulations on each topology five times, each time using a different random number seed. Therefore, we have 50 runs in total to derive average simulation results. The simulated time for each run is set to 1000 s. During simulation, each node runs a MAC-layer pseudo-data scheduler to periodically establish data schedules with its neighboring nodes in a round-robin manner. The frequency is chosen to be one data schedule every 3 s. It generates a moderate traffic load that allows the performances of the studied holdoff time setting schemes to be distinguished.

Fig. 5. Example of MSH-NCFG message collisions.
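The topology generation loop just described can be sketched as follows. The radio range used for the unit-disk connectivity test is an assumption (the paper does not state the value it used), and the BFS check simply discards partitioned placements.

```python
import random
from collections import deque

def random_connected_topology(n_nodes=100, side=2500.0, tx_range=500.0, seed=None):
    # Scatter nodes uniformly in a side x side square and keep the first
    # placement whose unit-disk graph is connected.
    rng = random.Random(seed)
    while True:
        pos = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n_nodes)]
        adj = {i: {j for j in range(n_nodes)
                   if j != i and (pos[i][0] - pos[j][0]) ** 2 +
                                 (pos[i][1] - pos[j][1]) ** 2 <= tx_range ** 2}
               for i in range(n_nodes)}
        seen, queue = {0}, deque([0])   # BFS connectivity check from node 0
        while queue:
            u = queue.popleft()
            for v in adj[u] - seen:
                seen.add(v)
                queue.append(v)
        if len(seen) == n_nodes:
            return pos, adj             # connected: keep this topology
```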

In [8], we pointed out two reasons the initialization of an IEEE 802.16 mesh network may fail. In this paper, we applied the revised network initialization process proposed in [8] to all studied schemes, including the original fixed-value schemes. This revised process can significantly alleviate the MSH-NCFG message collision problem.

After this revised process is applied, message collisions now only result from excessive MSH-NCFG message transmissions by a new node’s neighboring nodes. A typical example is shown in Fig. 5. Suppose that node C is a new node trying to enter the network and nodes A and B are its neighboring nodes that have attached themselves to the network. The dotted circles represent the signal coverage of nodes A and B, respectively. Before node C attaches itself to this network, nodes A and B transmit their own MSH-NCFG messages without considering whether their MSH-NCFG messages can be successfully received at the location of node C. Consequently, many MSH-NCFG messages transmitted by nodes A and B may collide at node C. However, since node C, so far, has not been a functional node in this network, such message collisions do not hinder node C’s normal operation at this moment.

However, if nodes A and B transmit their own MSH-NCFG messages very frequently (for example, these two nodes use a very small holdoff time value to schedule their MSH-NCFG message transmissions), it is very likely that node C cannot successfully receive any MSH-NCFG message transmitted by these two nodes. In such a condition, node C cannot proceed with its network initialization process, because it cannot obtain the necessary information required to start that process.

The other reason that causes the network initialization process to fail is the absence of routing paths from a new SS node to a BS node. For an SS node, the success of its network initialization process relies on the availability of a routing path from itself to the BS node. On performing the registration procedure (one of the necessary procedures in a node’s network initialization process), a new node must have a routing path to communicate with a BS node. If no available routing path exists, the new node’s network initialization process will fail.

To eliminate the effect of the aforementioned routing problem on the performance results, we adopted a design to guarantee that every new SS node has a sponsor node and every new SS node has a routing path to the BS node. To provide such a guarantee, we first generated routing paths among all nodes using Dijkstra’s shortest path algorithm. Then, we let an SS node choose the first-hop node on its routing path to the BS node as its sponsor node. Such a design guarantees that, when an SS node is performing the registration procedure, at least one routing path exists for the SS node to communicate with the BS node. With this design, the problem of network initialization processes failing due to lack of routes to the BS node no longer exists. As such, the simulation results reflect solely the effect of different holdoff time values rather than the mixed effects of different holdoff time values and the used routing protocol.
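The sponsor-selection guarantee can be reproduced with one shortest-path computation rooted at the BS. The sketch below assumes unit-weight links and a known BS node ID, and uses BFS (equivalent to Dijkstra on unit weights): the parent pointer left on each SS node is its first hop toward the BS, which is then taken as its sponsor.

```python
from collections import deque

def choose_sponsors(adj, bs=0):
    # BFS from the BS over unit-weight links.  parent[v] is v's neighbor one
    # hop closer to the BS, i.e. the first hop of v's shortest path toward
    # the BS, which the SS node uses as its sponsor.
    parent, queue = {bs: None}, deque([bs])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return {ss: hop for ss, hop in parent.items() if hop is not None}
```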

In the following, we compare four different fixed-value holdoff time setting schemes. As previously mentioned, the standard regulates that the holdoff time base value be set to 4. Therefore, here, we set the holdoff time base value used by all schemes to 4 while varying the holdoff time exponent value used by these schemes from 0 to 3. The resultant holdoff time values are thus 16, 32, 64, and 128, respectively.

Table I shows the performances of the four fixed-value holdoff time setting schemes. In total, 50 runs of simulations were conducted. SRNI, as defined in (4), measures the success rate of network initialization of these 50 runs. The ATOUN-Avg. and ATOUN-Std.dev. in the table are the average and standard deviation of the ATOUNs of the 50 runs, respectively. The ATHPT-Avg. and ATHPT-Std.dev. are the average and standard deviation of the ATHPTs of the 50 runs, respectively. Finally, the IICMT-Avg. and IICMT-Std.dev. are the average and standard deviation of the IICMTs of the 50 runs, respectively.

From the table, one sees that, when the holdoff time value decreases, the ATOUN-Avg. increases, and both the ATHPT-Avg. and the IICMT-Avg. decrease. As discussed before, all of these trends are expected and reasonable. These trends show that using a smaller holdoff time value results in better performances when the network is uncongested.

However, the SRNI results reveal a serious problem when small holdoff time values are used. One sees that using large holdoff time values (e.g., 64 and 128) results in 100% SRNI. However, using small holdoff time values (e.g., 16 and 32) results in a success rate of less than 100%, which means that some SS nodes cannot successfully initialize and attach themselves to the network.

These simulation results show that a fixed holdoff time value cannot provide good scheduling performances while guaranteeing the success of network initialization processes. Based on


TABLE I
PERFORMANCES OF THE FOUR FIXED-VALUE HOLDOFF TIME SETTING SCHEMES

this observation, we propose a new holdoff time setting scheme to achieve both of the aforementioned goals. In the following, we explain the proposed two-phase holdoff time setting scheme in detail.

IV. TWO-PHASE HOLDOFF TIME SETTING SCHEME

In Section III-B, we show that using a fixed holdoff time value cannot guarantee the success of network initialization while providing good scheduling performances. As such, we propose a two-phase holdoff time setting scheme that uses different holdoff time values for the two phases. In this scheme, after being powered on, each node initially stays in the network initialization phase. It remains in that phase until all of the nodes in its two-hop neighborhood have successfully initialized and attached themselves to the network. When this condition is met, the node then enters the data transmission phase. Because every node should succeed in initializing and attaching itself to the network, when a node is still in its network initialization phase, the proposed scheme sets its holdoff time to a large value (e.g., 64 or higher) to eliminate the potential hidden terminal problem. This design ensures that the network initialization processes of all nodes will eventually succeed. After all the nodes have successfully attached themselves to the network, they will have switched their phases to the data transmission phase. In this phase, the hidden terminal problem no longer occurs, because the neighbor relationships among all nodes have been known and stabilized. As such, a small holdoff time value can be used in this phase to improve MAC-layer scheduling performances.

In this scheme, a node uses only its local knowledge to determine when to switch from the network initialization phase to the data transmission phase. According to the standard, each node should maintain a node list to record the scheduling information of the nodes in its two-hop neighborhood. This node list is an input to the pseudo-random election algorithm for scheduling the node’s control message transmissions. Assume that, for each node, the network operator has given it the total number of nodes in its two-hop neighborhood. (This assumption can be easily met for a static network, where dynamic fading effects are not significant.) With this information, each node can locally determine when it can switch to the data transmission phase by comparing the number of nodes that are currently in its node list with that given by the network operator. When these two numbers match, it can safely enter the data transmission phase for better performances.
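The phase-switch test itself is a single local comparison. The sketch below assumes the operator-provisioned two-hop count is available as a configuration parameter; the holdoff values used in each phase are then chosen as described above (a large value such as 64 during initialization, and (14) or the dynamic rule afterward).

```python
def in_initialization_phase(node_list_size: int, provisioned_two_hop_count: int) -> bool:
    # A node stays in the network-initialization phase (large holdoff) until
    # its locally maintained node list has grown to the operator-provisioned
    # two-hop neighborhood size; it then enters the data transmission phase.
    return node_list_size < provisioned_two_hop_count

print(in_initialization_phase(12, 20))  # True: keep the large holdoff for now
```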

In the following, we propose two approaches of this two-phase holdoff time setting scheme. The first one considers the static locations of nodes, whereas the second one further considers the dynamic bandwidth needs of nodes. In the rest of this paper, we call the former the static approach and the latter the dynamic approach for brevity.

A. Static Approach

For a network node, only nodes in its two-hop neighborhood can contend for transmission opportunities with itself. As such, instead of using the same holdoff time value for all nodes, we propose an approach that allows different nodes to use different holdoff time values based on the node densities around them. The holdoff time value of a node is statically set based on its two-hop neighborhood node number, as shown in

$$\text{Holdoff time of node } j = 2^{\lfloor \log_2(|nbr(j)|) \rfloor} \qquad (14)$$

where nbr(j) is as defined in (5).
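For example, evaluating (14) for a few neighborhood sizes shows how the assigned holdoff time tracks local node density (the neighborhood sizes below are illustrative values only):

```python
import math

def static_holdoff_time(two_hop_count: int) -> int:
    # Holdoff time of node j per (14): 2 to the floor of log2(|nbr(j)|).
    return 2 ** math.floor(math.log2(two_hop_count))

print(static_holdoff_time(20), static_holdoff_time(70))  # 16 64
```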

As shown in Fig. 2, the transmission cycle of an MSH-DSCH message comprises the holdoff and contention times. In a dense network, where a node normally has a large number of nodes in its two-hop neighborhood, a node’s holdoff time set by (14) will be large. For this reason, the experienced contention time will be small, because each node now must refrain from contending for transmission opportunities for a long period of time, which reduces the probability of contention. Since, in this situation, the holdoff time represents a large portion of the transmission cycle, the resultant transmission cycle in the static approach will approach that generated in the static optimal scheme, which only considers the two-hop neighborhood node number without considering the contention time.

Regarding a node in a sparse network, the situation is reversed. In a sparse network, where a node normally has a small number of nodes in its two-hop neighborhood, a node’s holdoff time set by (14) will be small. As such, the experienced contention time will be relatively large. This is because each node now only needs to suspend itself for a small period of time, which increases the probability of contention. In this situation, the resultant transmission cycle will be longer than that generated in the static optimal scheme because it has a large contention time in the transmission cycle. As such, the static approach in a sparse network may perform worse than the static optimal scheme. However, as our simulation results will show, the static approach still performs better than any fixed-value holdoff time setting scheme. This is because each node can use a more appropriate holdoff time value in the static approach than in a fixed-value holdoff time scheme.

B. Dynamic Approach

The dynamic approach is based on the static approach and further considers the dynamic bandwidth needs of nodes. Recall that the three-way handshake procedure used to establish a data schedule requires transmitting three MSH-DSCH messages. When the static approach of the two-phase holdoff time setting scheme is used, the network statically schedules each node’s MSH-DSCH message transmissions roughly in a round-robin fashion without considering the dynamic bandwidth needs of nodes. As such, for a node, the transmission cycle between any two consecutive MSH-DSCH message transmissions is fixed, regardless of whether it has data to send. The idea of the dynamic approach is to shorten the transmission cycles of the nodes that have data to send. This will result in decreased per-hop (as well as end-to-end) data transmission delays and increased per-hop (as well as end-to-end) data transmission throughputs.

Fig. 6. Comparison between the time required for establishing a data schedule under the static and dynamic approaches of the proposed scheme.

Using the dynamic approach, a requesting node can reduce the elapsed time between transmitting a request IE and transmitting a confirm IE. On receiving a grant IE from the granting node, a requesting node tries to transmit a confirm IE as soon as possible. As shown in Fig. 6, the dynamic approach can, on average, save about half of the time required to establish a data schedule, compared with the static approach.

The detailed algorithm of the dynamic approach is presented in Fig. 7. In the dynamic approach, the holdoff time base value is set and fixed to 0 rather than the default 4. Initially, the dynamic approach uses the static approach to determine the holdoff time value used for each node. If no node has data to send, the operation of the dynamic approach degenerates to the operation of the static approach. That is, each node uses a holdoff time value determined by the static approach to regularly transmit its MSH-DSCH messages to keep the pseudo-random election algorithm operating correctly. Note that, even though there is no data to send in the network, a node still needs to regularly send out its MSH-DSCH messages to maintain the operation of the pseudo-random election algorithm. The transmitted MSH-DSCH messages are used to notify this node’s neighboring nodes of its next MSH-DSCH message transmission time. These MSH-DSCH messages, however, need not carry request, grant, or confirm IEs.

When a node wants to set up a data schedule (i.e., it needs to send out a request IE), the dynamic approach temporarily takes over the task of determining the node’s holdoff time value. After the data schedule is set up, this task will be passed back to the static approach to determine the node’s holdoff time value for its regular MSH-DSCH message transmissions. The detailed procedure is explained here.

First, after the requesting node sends out the request IE, it calculates the earliest transmission opportunity where it can transmit the confirm IE to the granting node. Transmitting the confirm IE must be performed later than receiving a grant IE from the granting node. To ensure this sequence, the requesting node’s target transmission opportunity (i.e., the next transmission opportunity used to transmit the confirm IE) is initially set to the next transmission opportunity of the granting node plus one. (Note that the requesting node knows this information, because this information is regularly exchanged among nodes via the MSH-DSCH messages.) Then, it uses the difference between its current and target transmission opportunities to calculate the maximum target holdoff exponent value using

$$\text{max target holdoff exp} = \lceil \log_2(\text{difference}) \rceil. \qquad (15)$$

This calculated exponent value is used as the initial exponent value to find a transmission opportunity that is later than the target transmission opportunity. The found transmission opportunity is the output of the pseudo-random election algorithm and will be larger than the target transmission opportunity due to the existence of the contention time. The dynamic approach then goes through an iteration to find the smallest exponent value that makes the transmission of the confirm IE as close as possible to the reception of the grant IE. During each step of the iteration, the target holdoff time exponent is decremented by one to explore whether this smaller value can still meet the requirement. On the last step of the iteration, where the requirement can no longer be met, the exponent value used in the previous step (which is stored in the optimized holdoff time exponent variable) is the exponent value that is both feasible and the smallest. This value is then returned and used to derive the transmission opportunity to transmit the confirm IE.
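The iteration just described can be sketched as follows. The full algorithm is given in Fig. 7 and is not reproduced here; this sketch is reconstructed from the prose above, and the helper wins_with_exp is an assumed hook into the pseudo-random election algorithm that returns the transmission opportunity this node would win when contending with a given holdoff exponent.

```python
import math

def confirm_holdoff_exponent(current_txopp, granting_next_txopp, wins_with_exp):
    # Sketch of the dynamic approach's exponent search (after the prose above,
    # not a transcription of Fig. 7).  wins_with_exp(exp) is an assumed hook
    # returning the TxOpp this node would win with holdoff exponent exp.
    target_txopp = granting_next_txopp + 1        # confirm must follow the grant
    difference = target_txopp - current_txopp
    exp = math.ceil(math.log2(difference))        # (15): initial feasible exponent
    optimized_exp = exp
    while True:
        if wins_with_exp(exp) < target_txopp:     # too early: previous exp was best
            return optimized_exp
        optimized_exp = exp                       # still feasible: remember it
        exp -= 1                                  # try to land closer to the grant
        if exp < 0:                               # guard for this sketch
            return optimized_exp
```

In the Fig. 8 example, this search starts from the exponent given by (15), stores each still-feasible exponent, and returns 2 once decrementing further would make the node win a transmission opportunity earlier than the target.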


Fig. 7. Algorithm of the dynamic approach of the proposed scheme.

Fig. 8 shows an example illustrating the operation of the proposed dynamic approach. Suppose that the two-hop neighborhood of node 2 comprises eight nodes, including node 2 itself. Fig. 8(a) shows the winning nodes of the transmission opportunities numbered from 1 to 26. Here, the winning node of a transmission opportunity is defined as the node that wins this transmission opportunity when all of the eight nodes contend for this transmission opportunity. (This condition will occur if the holdoff time base value is set to zero. The effect of the holdoff time base value will be explained in detail in

Section IV-C.) Assume that node 2 wants to establish a data schedule with node 8. Before transmitting the request IE to node 8 on transmission opportunity 2, node 2 should perform the algorithm depicted in Fig. 7. The detailed steps executed by this algorithm are explained here.

As shown in Fig. 8(b), the dynamic approach algorithm first sets the target transmission opportunity to 9, which is right after the transmission opportunity on which node 8 is likely to transmit its grant IE (assumed to be 8 in this example). The algorithm then calculates the transmission opportunity that node 2 can win.

Fig. 8. Example illustrating the operation of the dynamic approach.

During the first iteration, the algorithm first finds that node 2 can win transmission opportunity 22. Since the calculated transmission opportunity 22 is still larger than the target transmission opportunity, the algorithm stores the current target holdoff time exponent value into the optimized holdoff time exponent variable, decrements the target holdoff time exponent value by one, and starts the second iteration. During the second iteration, it finds that node 2 can win transmission opportunity 9. However, because the calculated transmission opportunity 9 is still larger than or equal to the target transmission opportunity, it enters the third iteration to probe further optimization. During the third iteration, the algorithm finds that the calculated transmission opportunity is 5, which is now less than the target transmission opportunity. Thus, it stops this iterative procedure and returns the value stored in the optimized holdoff time exponent variable as its output. (Note that this value is the target holdoff time exponent value calculated in the previous iteration.) As shown in Fig. 8(b), upon performing the pseudo-random election algorithm with the optimized holdoff time exponent value (2 in this example), node 2 will win transmission opportunity 9, which is the optimal transmission timing to transmit its confirm IE in this example.

C. Discussion on the Effect of Holdoff Time Base Value

As mentioned before, the IEEE 802.16 standard states that every node should set the holdoff time base value to 4. For this reason, IEEE 802.16-compliant devices can cooperate using this fixed holdoff time base value. Both the static and dynamic approaches, however, require changing the holdoff time base value to successfully operate. To make these two approaches compatible with the IEEE 802.16 standard, a mechanism is required to notify network devices of changes to the holdoff time base value. Here, we explain the problems that may occur if this system parameter is dynamically changed. In Section IV-D, we will describe several mechanisms that can be used to change this parameter without causing problems.

Fig. 9. Relationship between the holdoff time and the Tx interval.

The holdoff time base value contributes a constant amount, 2^base, to the holdoff time. Setting the holdoff time base value to 4 means that each node should suspend its contention for at least 2^4 = 16 consecutive transmission opportunities after it has won one. This lower bound limits the smallest holdoff time value that can be assigned to nodes in the static and dynamic approaches. As such, the static and dynamic approaches may not achieve their optimal performances if the holdoff time base value is not reduced to 0. For this reason, the holdoff time base value is set to 0 in the static and dynamic approaches.

Instead of using a lengthy field, the standard defines that each node should use two shorter fixed-length fields exp and Mx to represent a transmission opportunity number. The relationship between these two fields and a transmission opportunity number has been given in (3). On receiving a control message (such as an MSH-NCFG or an MSH-DSCH message), a node should use the received exp and Mx fields to derive the next transmission interval of the transmitting node using (3). Fig. 9 shows the relationship between the holdoff time and the transmission interval (Tx interval). The holdoff time comprises the Tx interval (denoted as α in the figure) and the ineligible interval (denoted as β in the figure). The Tx interval represents the duration in which a node may contend for one transmission opportunity. On the other hand, the ineligible interval is the duration for which the node is not allowed to contend for any transmission opportunity. Based on (3) defined in the standard, the length of the Tx interval is fixed at 2^exp, because a node's next transmission opportunity number ranges from (2^exp · Mx + 1) to 2^exp · (Mx + 1). Consequently, the length of the ineligible interval is (2^(exp+base) − 2^exp). If the base value is set to 0, the holdoff time and the Tx interval of each node will exactly overlap, causing the length of the ineligible interval to be zero. This means that, after winning a transmission opportunity, a node will immediately contend for another transmission opportunity. (This also means that a node will contend for transmission opportunities all the time.) As such, a node should consider that all the nodes in its two-hop neighborhood will contend for each transmission opportunity with itself. On the other hand, if the holdoff time base value is larger than 0, a node's holdoff time will be larger than its Tx interval. In such a condition, the contention time experienced by a node can be reduced due to a decreased number of contending nodes.
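A small helper makes this interval bookkeeping concrete (illustrative only): with base = 0 the ineligible interval vanishes and a node is always eligible to contend, while base = 4 reserves most of the holdoff time as ineligible.

```python
def interval_lengths(exp: int, base: int) -> tuple[int, int, int]:
    # From (3): the Tx interval spans 2**exp TxOpps, the holdoff time spans
    # 2**(exp + base), and the ineligible interval is their difference.
    tx_interval = 2 ** exp
    holdoff = 2 ** (exp + base)
    return holdoff, tx_interval, holdoff - tx_interval

print(interval_lengths(2, 4))  # (64, 4, 60)
print(interval_lengths(2, 0))  # (4, 4, 0)
```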

The choice of the holdoff time base value depends on the needs of a holdoff setting scheme. For the static approach, using a positive holdoff time base value can reduce the contention time of each node. For the dynamic approach, however, the holdoff time base value must be set to zero for the following two important reasons: First, if a positive holdoff time base value is used, the lower bound of the holdoff time value that can be assigned to nodes will be limited. As such, to give the dynamic approach the largest freedom to set the holdoff time of a node, the holdoff time base value should preferably be set to 0 at all times. Second, if the holdoff time base value is allowed to change during the operation of a network, after a node’s holdoff time has just been changed (due to the change of the holdoff time base value), MSH-DSCH control messages may collide. The reason for this phenomenon is explained in detail here.

Figs. 10 and 11 show two cases after a node’s holdoff time value has just been changed. The former shows an example in which changing the holdoff time value results in no message collisions, whereas the latter case shows an opposite example. Suppose that node A has changed its holdoff time value and broadcast the new holdoff time exponent value. In Fig. 10, node B is the next node to transmit an MSH-DSCH message. In such a case, node C will be notified of this change by node B’s MSH-DSCH message in time. Thus, node C will not schedule its MSH-DSCH message transmission to collide with node A’s MSH-DSCH message transmission. In contrast, in Fig. 11, node C has scheduled an MSH-DSCH message transmission before node B can notify it of node A’s new holdoff time value. Under such a condition, node C’s MSH-DSCH message transmission may collide with node A’s MSH-DSCH message transmission, because node A’s ineligible interval viewed by node C is now out of date. Figs. 12–14 illustrate three cases that can cause this problem.

In these figures, HTa denotes the holdoff time of node A viewed by node C (which may be out of date), and HTa′ denotes the holdoff time of node A viewed by node A itself (which is always up to date). The symbol γ denotes the vulnerable interval that results from node A's changing its holdoff time and during which the MSH-DSCH messages of nodes A and C may collide. Suppose that node C's transmission was scheduled within node A's original ineligible interval β.

Fig. 10. Example showing that control messages will not collide after node A changes its holdoff time value.

Fig. 11. Example showing that control messages will collide after node A changes its holdoff time value.

Fig. 12. Case where node A has just decreased its holdoff time base value.

Fig. 13. Case where node A has just increased its holdoff time exponent value.

Fig. 14. Case where node A has just decreased its holdoff time exponent value.


Fig. 12 depicts a case where node A has just decreased its holdoff time base value; therefore, its ineligible interval is shortened from β to β′. This operation generates the γ vulnerable interval, because node C does not know that node A will now contend for transmission opportunities in the γ interval. Fig. 13 depicts a case where node A has just increased its holdoff time exponent value; therefore, both its Tx interval and ineligible interval are lengthened. In such a condition, node A's ineligible interval will shift on the time axis, generating the vulnerable interval shown in Fig. 13. In contrast, Fig. 14 depicts a case where node A has just decreased its holdoff time exponent value; therefore, its Tx interval and ineligible interval are shortened, resulting in the shift of node A's ineligible interval on the time axis. This shift generates the vulnerable interval shown in Fig. 14.
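The sketch below makes the vulnerable interval concrete. It is a deliberately simplified model of ours: it only compares the advertised window obtained from (3) under the stale parameters with the window implied by the new parameters, and it ignores the rest of the standard's eligibility bookkeeping. A TxOpp is treated as vulnerable when node A may actually transmit in it under its new parameters while node C's stale view of A's schedule says it will not. The numbers in main() are illustrative only.

```cpp
#include <cstdint>
#include <cstdio>

// Advertised Tx interval derived from the exp/Mx encoding of (3):
// [2^exp * Mx + 1, 2^exp * (Mx + 1)], expressed in TxOpp numbers.
struct Window {
    uint64_t first;
    uint64_t last;
};

Window AdvertisedWindow(uint32_t exp, uint32_t mx) {
    return Window{((uint64_t)mx << exp) + 1, ((uint64_t)(mx + 1)) << exp};
}

bool InWindow(uint64_t txOpp, const Window& w) {
    return w.first <= txOpp && txOpp <= w.last;
}

// A TxOpp belongs to the gamma (vulnerable) interval if node A may actually
// contend in it under its new parameters while node C's stale view of A's
// schedule says A will not, so C may schedule its own MSH-DSCH there.
bool IsVulnerable(uint64_t txOpp, const Window& staleView, const Window& actualView) {
    return InWindow(txOpp, actualView) && !InWindow(txOpp, staleView);
}

int main() {
    // Illustrative numbers only.  C's stale view: A announced exp = 3, Mx = 5,
    // i.e., A's next transmission lies in TxOpps [41, 48].
    Window stale = AdvertisedWindow(3, 5);
    // A then decreases its exponent (as in the Fig. 14 case) and its actual
    // next Tx interval becomes exp = 2, Mx = 9, i.e., TxOpps [37, 40].
    Window actual = AdvertisedWindow(2, 9);
    for (uint64_t t = 33; t <= 52; ++t)
        if (IsVulnerable(t, stale, actual))
            printf("TxOpp %llu lies in the vulnerable interval\n",
                   (unsigned long long)t);
    return 0;
}
```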

To prevent the collision problem from occurring, an additional mechanism is necessary. For instance, in the case given in Fig. 11, after changing the holdoff time value, node A should defer its contention for transmission opportunities until its original ineligible interval has elapsed. Such a mechanism, however, may increase the implementation complexity of the dynamic approach and decrease the scheduling performances. To totally avoid the collision problem without wasting much network bandwidth, the dynamic approach uses 0 as the holdoff time base value for all network nodes at all times. Fixing the holdoff time base value to zero effectively eliminates every node's ineligible interval (i.e., the length of each node's ineligible interval now becomes zero), resulting in each network node considering that it should always contend for transmission opportunities with all other nodes in its two-hop neighborhood. (These two-hop neighborhood nodes are considered to be always eligible to contend for transmission opportunities.) In such a condition, if a node wants to win a transmission opportunity, it should win over all of its two-hop neighborhood nodes. This means that, for each node, the node list used as the input of its pseudo-random election algorithm will always comprise its two-hop neighborhood nodes, despite the dynamic changes of the holdoff time exponent values of its neighboring nodes. As such, packet collisions due to dynamic changes of the holdoff time exponent values can be avoided under the zero-holdoff-time-base-value condition.
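As an illustration of what contending against the whole two-hop neighborhood means, the sketch below runs a pseudo-random election among competing node IDs for a given TxOpp. The mixing function Mix() is a simple stand-in of our own and is not the exact smearing function of the standard's mesh election algorithm; the node IDs in main() are hypothetical.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Simple stand-in mixing function (NOT the smearing function defined in the
// standard): deterministically maps (TxOpp number, node ID) to a value that
// every node in the neighborhood can compute on its own.
static uint32_t Mix(uint32_t txOpp, uint16_t nodeId) {
    uint32_t x = txOpp ^ (uint32_t(nodeId) * 2654435761u);
    x ^= x >> 16;
    x *= 2246822519u;
    x ^= x >> 13;
    return x;
}

// With a zero holdoff time base value, every node of the two-hop neighborhood
// is treated as eligible for every TxOpp, so the competitor set is simply the
// full two-hop neighbor list (the node itself is handled via myId).
static bool WinsTxOpp(uint32_t txOpp, uint16_t myId,
                      const std::vector<uint16_t>& twoHopNeighbors) {
    const uint32_t myValue = Mix(txOpp, myId);
    for (uint16_t n : twoHopNeighbors) {
        const uint32_t v = Mix(txOpp, n);
        // Ties are broken by node ID so that exactly one node wins each TxOpp.
        if (v > myValue || (v == myValue && n > myId))
            return false;
    }
    return true;
}

int main() {
    const std::vector<uint16_t> twoHop = {2, 3, 5, 7, 11};  // hypothetical IDs
    for (uint32_t txOpp = 100; txOpp < 105; ++txOpp)
        printf("node 1 %s TxOpp %u\n",
               WinsTxOpp(txOpp, 1, twoHop) ? "wins" : "loses", txOpp);
    return 0;
}
```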

D. Notification of Holdoff Time Base Value Change

Here, we propose a practical protocol that can be used to notify new SS nodes of the holdoff time base value used in a network. A BS node can use this protocol to check whether a new SS node can use a holdoff time base value other than 4. If the SS node cannot do so, the BS node should reject the network registration request (REG-REQ) from such an SS node, because this SS node cannot work well with other SS nodes in such a network.

First, the reserved “vendor-specific information” field can be exploited to help SS nodes know the holdoff time base value used in a network. This field is defined in the standard for the registration procedure to exchange additional information not specified in the standard. The signaling protocol is described as follows: An SS node first adds a holdoff time base query message (carried by the “vendor-specific information” field) into the REG-REQ message destined for the BS node.

TABLE II

PARAMETER SETTING USED IN SIMULATIONS

On receiving this REG-REQ message, the BS node replies to the SS node with a registration response (REG-RSP) message, which contains the holdoff time base value used in this network (also carried by the “vendor-specific information” field). If the BS node does not find the holdoff time base query message in the SS node's REG-REQ message, it should reject this SS node's REG-REQ, because this SS node may not be able to change its holdoff time base value.

There are three ways to reject a REG-REQ: The first one is to simply ignore the REG-REQ message if the BS node decides to reject it. The second way is to utilize the “de/reregister command” (DREG-CMD) message defined in the standard. The DREG-CMD message can be used to notify the SS node of the rejection action. Finally, the third way is to return a REG-RSP message with the response code set to 1, indicating that this REG-REQ cannot be accepted, because the SS node does not support the dynamic approach.
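A possible shape of the BS-side logic for this signaling is sketched below. The message structures and field names (RegReq, hasHoldoffBaseQuery, and so on) are our own simplifications of the vendor-specific information encoding, not identifiers defined in the standard, and only the third rejection option is modeled.

```cpp
#include <cstdint>
#include <cstdio>

// Simplified stand-ins for the registration messages; in the text, the query
// and the answer are carried inside the "vendor-specific information" field.
struct RegReq {
    bool hasHoldoffBaseQuery;  // true if the SS asked for the network's base value
};

struct RegRsp {
    uint8_t responseCode;      // 0 = accept; 1 = reject (third option in the text)
    uint8_t holdoffTimeBase;   // base value used in this network (0 in our schemes)
};

// BS-side handling: answer the query with the network-wide base value, or
// reject the registration if the SS did not include the query (such an SS may
// be unable to change its holdoff time base value).
RegRsp HandleRegReq(const RegReq& req, uint8_t networkHoldoffBase) {
    RegRsp rsp{0, 0};
    if (!req.hasHoldoffBaseQuery) {
        rsp.responseCode = 1;
        return rsp;
    }
    rsp.holdoffTimeBase = networkHoldoffBase;
    return rsp;
}

int main() {
    RegRsp ok = HandleRegReq(RegReq{true}, 0);
    RegRsp rejected = HandleRegReq(RegReq{false}, 0);
    printf("query present: code=%u base=%u\n",
           (unsigned)ok.responseCode, (unsigned)ok.holdoffTimeBase);
    printf("query missing: code=%u (registration rejected)\n",
           (unsigned)rejected.responseCode);
    return 0;
}
```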

V. PERFORMANCE EVALUATION

In this section, we evaluate the performances of the static and dynamic approaches. We compare the simulation results of these approaches with those of the three fixed-value holdoff time setting schemes studied in Section III-B (the “holdoff time 16, 32, and 64” schemes). For the evaluation, we use the NCTUns network simulator and emulator [9], which was also used in Section III-B to generate the simulation results of the three fixed-value holdoff time setting schemes. The chain, grid, and random network topologies are used for these performance studies. Each reported performance is the average of five runs using different random number seeds. For each run, the simulated time is 1000 s. Table II shows the parameter setting used in our simulations. More detailed setups specific to particular network topologies are described in the following sections.

A. Chain Network Topology

The chain network topology is composed of 21 nodes. From left to right, the nodes are named BS and SS(1), SS(2), . . ., SS(20), respectively. On this chain network, each node runs a MAC-layer pseudo-data scheduler to periodically establish data schedules with its neighboring nodes in a round-robin manner. As explained before, the frequency is chosen to be one data schedule every 3 s.
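For readers who want to picture this workload, the sketch below abstracts the round-robin pseudo-data scheduler. It is our own simplification, not NCTUns code: on every timer expiration (every 3 s here) it picks the next neighbor and initiates a three-way handshake for one data schedule.

```cpp
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Our own abstraction of the MAC-layer pseudo-data scheduler used in the
// simulations: every period it requests one data schedule with the next
// neighbor in round-robin order.
class PseudoDataScheduler {
public:
    PseudoDataScheduler(std::vector<int> neighbors, double periodSec)
        : neighbors_(std::move(neighbors)), periodSec_(periodSec) {}

    // Invoked by the simulator's timer every periodSec_ seconds (3 s here).
    void OnTimer() {
        if (neighbors_.empty()) return;
        const int peer = neighbors_[next_];
        next_ = (next_ + 1) % neighbors_.size();
        StartThreeWayHandshake(peer);
    }

    double PeriodSec() const { return periodSec_; }

private:
    void StartThreeWayHandshake(int peer) {
        // In the real MAC, this queues a request IE for the peer in the next
        // MSH-DSCH message the node wins a TxOpp for; here we only log it.
        printf("request a data schedule with SS(%d)\n", peer);
    }

    std::vector<int> neighbors_;
    double periodSec_;
    std::size_t next_ = 0;
};

int main() {
    // SS(5) in the chain has neighbors SS(4) and SS(6); one handshake every 3 s.
    PseudoDataScheduler sched({4, 6}, 3.0);
    for (int tick = 0; tick < 4; ++tick)  // four timer expirations = 12 s
        sched.OnTimer();
    return 0;
}
```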

As shown in Table III(a), in the chain network, as the holdoff time value exponentially increases, the ATOUN value exponentially decreases, and the ATHPT value exponentially increases.


TABLE III

PERFORMANCES OF THE THREE FIXED-VALUE HOLDOFF TIME SETTING SCHEMES AND THE STATIC AND DYNAMIC APPROACHES OF THE PROPOSED TWO-PHASE HOLDOFF TIME SETTING SCHEME. (a) CHAIN NETWORK TOPOLOGY. (b) GRID NETWORK TOPOLOGY. (c) RANDOM NETWORK TOPOLOGY

These results show that when the holdoff time value exponentially increases, the average transmission opportunity utilization significantly decreases, and the ATHPT significantly increases. The reasons for these phenomena have been explained before. As for the static and dynamic approaches, one sees that they significantly outperform the three fixed-value holdoff time setting schemes on the ATOUN and ATHPT metrics. One also sees that the dynamic approach outperforms the static approach. This is because the former can dynamically adjust the holdoff time value to reduce the time interval between sending a request IE and sending a confirm IE. As such, the time required to complete a three-way handshake procedure (and, thus, for establishing a data schedule) can be greatly reduced. This also explains why the dynamic approach generates a higher utilization of transmission opportunity than the static approach.

For IICMT, when the holdoff time value decreases, the IICMT value decreases as well. This result shows that using a smaller holdoff time value can achieve fairer and more efficient scheduling. One sees that the static and dynamic approaches achieve much smaller IICMT values than the fixed-value holdoff time setting schemes. This result is expected because, in both approaches, the holdoff time of each node can be independently set to a different value to reflect the node density around it. In contrast, as discussed before, a fixed-value holdoff time setting scheme cannot suit the scheduling needs of all the nodes in a network.

The dynamic approach outperforms the static approach on IICMT, and the reason is explained here. To reduce the time required for the three-way handshake procedure, the dynamic approach uses an iterative algorithm to decrease a node's holdoff time value. As such, the dynamic approach eliminates a part of the contention time that the static approach cannot eliminate. This makes the dynamic approach perform more closely to the static optimal scheme than the static approach does.

Regarding application performances, on this chain network, we conduct a different set of simulations using three different types of application-layer traffic: 1) Transmission Control Protocol (TCP); 2) User Datagram Protocol (UDP); and 3) ping. For a studied holdoff time setting scheme, its performances are evaluated using 18 cases. Each case is run five times, each time using a different random number seed. In each case, a traffic flow (either TCP, UDP, or ping) is set up. The source node of the traffic flow is fixed at the SS(2) node, whereas the destination node of the traffic flow is chosen to be SS(i + 2) in the ith case.

Fig. 15 shows the relationship between the TCP throughput and the hop count, Fig. 16 shows the relationship between the UDP throughput and the hop count, and Fig. 17 shows the relationship between the end-to-end RTT measured by the ping program and the hop count. As shown in Figs. 15 and 16, the static and dynamic approaches achieve much higher throughputs than the fixed-value holdoff time setting schemes over all studied hop counts.


Fig. 15. TCP throughputs over different hop counts in chain networks.

Fig. 16. UDP throughputs over different hop counts in chain networks.

Fig. 17. RTT measured by the ping program in chain networks.

The RTT results show that the static and dynamic approaches significantly reduce the end-to-end round-trip packet delay when compared with the three fixed-value holdoff time setting schemes. The results also show that the dynamic approach generates a smaller round-trip packet delay than the static approach. This is expected, as the dynamic approach can reduce the time required to establish a data schedule further than the static approach.

Table III(a) shows the TCP and UDP throughputs and RTTs averaged across all different hop counts for each scheme. According to the average TCP and UDP throughput results, the static and dynamic approaches, on average, achieve higher TCP and UDP throughputs than the fixed-value schemes. For example, the dynamic approach outperforms the “holdoff time 16” scheme by a factor of 1.21 on the TCP throughput and by a factor of 1.16 on the UDP throughput, respectively. Regarding the RTT, the dynamic approach, on average, reduces the RTT of “ping” packets by a factor of 2.24 when compared to the “holdoff time 16” scheme.

B. Grid Network Topology

For grid network simulations, we construct a 10 × 10 grid network comprising 100 nodes, each of which is spaced 450 m apart from its vertical and horizontal neighbors. Each node runs a MAC-layer pseudo-data scheduler to periodically establish data schedules with its neighboring nodes in a round-robin manner. As previously explained, the frequency is chosen to be one data schedule every 3 s.

As shown in Table III(b), the ATOUN result shows that the dynamic approach achieves the highest utilization of transmission opportunity. The ATHPT result shows that the dynamic approach, on average, generates the shortest time required for establishing data schedules among all studied schemes. Regarding IICMT, the dynamic approach, on average, achieves 9.661, which is smaller than that of the “holdoff time 16” scheme by a factor of 7.278. The IICMT results show that the dynamic approach can both efficiently and fairly schedule transmission opportunities in the control plane. As for the static approach, it, on average, achieves a better IICMT value than the “holdoff time 16” scheme. However, its performances on ATOUN and ATHPT are close to those of the “holdoff time 16” scheme, because in a grid network the number of nodes in a node's two-hop neighborhood is, on average, more than 16, and almost every node (except the nodes on the edges of the grid) has the same neighborhood size. This condition allows the “holdoff time 16” scheme to perform almost as well as the static approach.

C. Random Network Topology

For random network simulations, we use the ten random topologies generated in Section III-B to compare the performances of all studied holdoff time setting schemes in general networks. The simulation setting used here is the same as that used in Section III-B.

As shown in Table III(c), in random topologies, both the static and dynamic approaches generate better performances than the fixed-value holdoff time setting schemes. For the static approach, it, on average, increases the ATOUN value by a factor of 1.128 and decreases the IICMT value by a factor of 1.428, when compared with the “holdoff time 16” scheme. As for the dynamic approach, it, on average, increases the ATOUN value by 49.27% and decreases the IICMT value by 640.01%, as compared with the “holdoff time 16” scheme. These results show that both approaches generate fairer and more efficient scheduling than the fixed-value holdoff time setting schemes in random network topologies.
