
4. Protocol Design

4.3 Ad-Hoc Routing Protocol

4.3.1 Mobility Issues

To support mobility in a MANET, we improve our Ad-Hoc routing scheme so that the system can quickly recover its performance after mobile nodes change the network topology. With this Ad-Hoc routing, our scheme can be applied to a moving motorcade (Figure 4.3.1). An important issue of mobility in a MANET is how to maintain the routing table correctly.

A brief description of our improved Ad-Hoc routing scheme follows. To detect RNs that disconnect from the DN, the DN periodically checks the connectivity of every RN in the MANET. Once the DN detects that it has lost connectivity with an RN, it deletes the routing entry of the disconnected RN and informs every remaining RN about the disconnection. In this way, the DN and every RN can refresh their routing tables when the Ad-Hoc network topology changes. When an RN disconnects from the DN, the trunk daemon retransmits the data packets in that RN's sliding window (Figure 3.2.1-2) and dispatches the retransmitted packets to the remaining connected RNs in a round-robin sequence.
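The DN-side recovery logic above can be sketched as follows. This is an illustrative sketch only: the class and parameter names (RoutingTable, windows, notify) are our own, not the actual daemon's API.

```python
# Hypothetical sketch of the DN's recovery step when an RN disconnects:
# delete the routing entry, inform the surviving RNs, and redistribute
# the dead RN's sliding-window packets in round-robin order.
from itertools import cycle

class RoutingTable:
    def __init__(self, rns):
        self.entries = set(rns)          # currently connected RNs

    def handle_disconnect(self, rn, windows, notify):
        """Remove a dead RN, tell the others, redistribute its packets."""
        self.entries.discard(rn)         # delete the routing entry
        for peer in self.entries:        # inform every remaining RN
            notify(peer, ("RN_DOWN", rn))
        # Retransmit the packets still in the dead RN's sliding window,
        # dispatching them to the surviving RNs round-robin.
        survivors = cycle(sorted(self.entries))
        return [(next(survivors), pkt) for pkt in windows.pop(rn, [])]
```

With three RNs and RN2 disconnecting, the packets buffered for RN2 are re-dispatched alternately to RN1 and RN3.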

Figure 4.3.1: A moving motorcade.

4.4 Reset Protocol

The reset protocol is used to inform the DN and its helping RNs that the file download is completed. Figure 4.4 shows this protocol. When the apache server (PS) finishes sending the file content to the DN, it sends an “Ending” packet to the trunk daemon through a UDP socket (step 13) to notify it of this event. The trunk daemon in turn sends an “Ending” packet to each mobile node daemon involved in this file download (step 14). It then removes the firewall rules that were previously installed in the kernel and releases all the data structures and other resources allocated for this file download. When a mobile node daemon receives an “Ending” packet, it releases the used data structures and other resources. It also resets its state to prepare for another file download. If an “Ending” packet is lost, the mobile node daemon that should receive the packet will automatically release the used data structures and other resources after a certain period of channel idle time.

Figure 4.4: The reset protocol.
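The reset protocol above can be sketched as two small UDP routines. The payload format, addresses, and the idle-timeout value are assumptions for illustration, not the thesis's actual wire format.

```python
# Minimal sketch of the reset protocol: the trunk daemon sends "Ending"
# to each mobile node daemon (step 14); a daemon releases its resources
# on receipt, or after an idle timeout if the packet is lost.
import socket

ENDING = b"Ending"
IDLE_TIMEOUT = 30.0     # seconds of channel idle time (assumed value)

def trunk_send_ending(daemon_addrs):
    """Notify every mobile node daemon that the file download ended."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for addr in daemon_addrs:
        sock.sendto(ENDING, addr)
    sock.close()

def mobile_daemon_wait(sock, release):
    """Release resources on an 'Ending' packet, or after the idle
    timeout if the packet was lost in transit."""
    sock.settimeout(IDLE_TIMEOUT)
    try:
        data, _ = sock.recvfrom(2048)
        if data == ENDING:
            release()
    except socket.timeout:
        release()       # lost 'Ending' packet: fall back to the timer
```

The timeout fallback is what makes the protocol robust: a daemon never waits forever on a lost control packet.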

5. EXPERIMENTAL SETTINGS AND RESULTS

5.1 Experimental Settings

IBM A31 notebook computers with a 1.8 GHz CPU and 512 MB of memory are used as the PS and the DN in our experiments.

The five notebook computers listed in the table below are used as RNs.

Type         IBM A31        IBM R40        IBM T30        Toshiba
CPU          Intel 1.8 GHz  Intel 1.3 GHz  Intel 2.2 GHz  AMD 0.475 GHz
Main memory  512 MB         512 MB         512 MB         64 MB
Number       1              1              1              2

The DN and each RN are equipped with either an ASUS WL-14 or a D-LINK DWL-122 IEEE 802.11(b) WLAN interface card and a Nokia D211 GPRS interface card, as shown in Figure 5.1-2. In our scheme, each RN is responsible only for forwarding data packets to the DN. Therefore, using the different machines listed above does not affect our system performance.

Because the two PCMCIA slots available on a notebook computer are too close together to hold one PCMCIA IEEE 802.11(b) card and one PCMCIA GPRS card at the same time, we used the ASUS WL-14 IEEE 802.11(b) interface card, which connects to the notebook computer through the USB interface rather than the PCMCIA interface. The Red-Hat Linux operating system with the 2.4 kernel is installed on each of these machines.

Figure 5.1-1: The trunk daemon

Figure 5.1-2: Four notebook computers are used in the experiments. One of them is used as the DN while the others are used as the DN’s helping RNs.

Figure 5.1-3: Each notebook computer is equipped with an IEEE 802.11(b) WLAN interface card and a GPRS interface card.

One desktop host is used as the PS (Figure 5.1-1). It runs the Red-Hat Linux operating system with the 2.4 kernel. This host is located in our laboratory and connected to the Internet. A modified apache web server is run on this desktop host to provide proxy services for the DN.

ChungHwa Telecom Inc. operates the GPRS network used in these experiments.

When a GPRS network interface is attached to the GPRS network, ChungHwa Telecom Inc. automatically assigns a public IP address to it. As such, an Internet host (e.g., the PS) can actively send packets to an attached GPRS user.

We also tried to evaluate our scheme on a different GPRS network, operated by FarEastone Telecom Inc. On the FarEastone GPRS network, when a GPRS interface is attached to the network, it is assigned a private IP address.

Normally, a machine assigned a private IP address is attached to a private network managed by a network address translator (NAT). Such a machine can actively exchange packets with a host on the Internet, but a host on the Internet cannot actively initiate packet exchanges with it. This restriction, however, does not cause any problem for our scheme, because before the PS sends packets to the DN, the DN and each of its RNs have sent a “NAT Mapping Installation” packet to the PS.
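The idea behind the “NAT Mapping Installation” packet can be illustrated as follows. The payload format and helper names below are our own assumptions; the point is only that one outbound UDP packet creates the NAT mapping and simultaneously tells the PS which public address/port now reaches the node.

```python
# Illustrative sketch: an RN behind a NAT sends one UDP packet to the
# PS, which (a) makes the NAT install an outbound mapping and (b) lets
# the PS learn the mapped source address to send replies to.
import socket

def rn_install_mapping(ps_addr, node_id):
    """RN side: send one packet so the NAT creates a mapping; keep the
    socket open so the mapping stays alive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"NAT_MAP " + node_id, ps_addr)
    return sock

def ps_learn_mapping(sock, table):
    """PS side: record the source address the packet arrived from;
    packets sent back to that address traverse the NAT to the RN."""
    data, addr = sock.recvfrom(2048)
    if data.startswith(b"NAT_MAP "):
        table[data[len(b"NAT_MAP "):]] = addr
    return table
```

After this exchange, the PS can "actively" send to the RN simply by addressing the recorded mapping, which is why the private addressing on FarEastone's network is not a problem for the scheme.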

However, the link quality of the GPRS network operated by FarEastone Telecom Inc. is very unstable on the NCTU campus. Sometimes our machines could not connect to the FarEastone GPRS network for as long as five minutes. Because this unstable link quality would badly degrade performance, we do not evaluate our scheme on the FarEastone GPRS network.

5.2 Experimental Results

5.2.1 Calibration Test

In the calibration tests, we want to measure how the GPRS network performs when several CBR traffic flows are active simultaneously. In these tests, our scheme was not used. We measured the CBR throughput that can be achieved over one GPRS channel when one, two, three, and four GPRS channels are used simultaneously. The channels were set up between the PS and the DN and between the PS and the three RNs. On each tested GPRS channel, we pumped UDP packets from the PS into the channel at a constant bit rate. We used UDP packets rather than TCP packets because a TCP traffic source reduces its sending rate by at least half when it encounters packet losses or reordering, while a UDP traffic source does not. Since we are concerned with the instantaneous link quality of a GPRS channel, we used UDP packets in the calibration tests.

In each CBR UDP packet stream, the packet interval time is 0.33 seconds and each UDP packet size is 1400 bytes. So, the maximum throughput of one CBR UDP packet stream is 4.2 Kbytes/sec. When UDP packets are sent at a constant bit rate over a GPRS channel, we say the channel is active. Figure 5.2.1-1 shows the throughput achieved on one channel when only one channel is active. Figure 5.2.1-2 shows the throughputs achieved on two channels when two channels are active at the same time.

Figure 5.2.1-3 shows the throughputs achieved on three channels when three channels are active at the same time. Finally, Figure 5.2.1-4 shows the throughputs achieved on four channels when four channels are active at the same time. In these figures, we also show the total achieved throughputs of all active GPRS channels.

In Figure 5.2.1-1 below, the average throughput of a CBR UDP stream is close to 3 Kbytes/sec. In a CBR UDP packet stream, the packet interval time is 0.33 seconds and each UDP packet size is 1400 bytes. Therefore, we can only get four possible throughputs, 0 Kbytes/sec, 1.4 Kbytes/sec, 2.8 Kbytes/sec, 4.2 Kbytes/sec, in the calibration tests.
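Both the 4.2 Kbytes/sec peak and the four possible per-second throughput samples follow directly from the stream parameters, as this small check shows:

```python
# The CBR stream parameters determine the peak rate and the discrete
# set of per-second throughput samples quoted in the text.
PKT_SIZE = 1400            # bytes per UDP packet
INTERVAL = 0.33            # seconds between packets

peak = PKT_SIZE / INTERVAL / 1000          # ~4.24, quoted as 4.2 KB/s
# At most 3 packets fall inside any one-second window, so a one-second
# sample carries 0, 1, 2, or 3 packets:
samples = [n * PKT_SIZE / 1000 for n in range(4)]   # 0, 1.4, 2.8, 4.2
```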


Figure 5.2.1-1: The throughput of one GPRS channel.


Figure 5.2.1-2: The throughputs of two GPRS channels.

From these figures, we have some observations. First, the maximum throughput that can be achieved over one GPRS channel is only about 4 KB/sec. Second, the quality of a GPRS channel is unstable. The throughput of a GPRS channel often drops to zero, stays at zero for a while, and then rises again. Third, when multiple GPRS channels are active, it is common that at least one channel’s throughput is very poor.

These observations suggest that achieving N throughput speedup over N GPRS channels is almost impossible on current commercial GPRS networks.


Figure 5.2.1-3: The throughputs of three GPRS channels.


Figure 5.2.1-4: The throughputs of four GPRS channels.

Since the best aggregate UDP throughput achieved over N GPRS channels is much lower than N times a GPRS channel’s native bandwidth (i.e., 36 Kbps), it is unreasonable to divide the throughput of our scheme by 36 Kbps to calculate its throughput speedup over N channels. In our web transfer throughput experiments, TCP (used by HTTP) is used to transport web files. Since TCP throughput suffers greatly from packet losses, reordering, large round-trip packet delays (3 ~ 4 seconds on a GPRS channel), and long channel blockage periods, our scheme faces a very challenging situation in which maintaining good TCP throughput over N unstable channels is difficult. It is natural that the aggregate TCP throughput achieved over N channels under our scheme is less than the aggregate UDP throughput achieved over N channels in the calibration tests.

Therefore, in the following subsection we will divide the TCP throughput of our scheme by the TCP throughput achieved over one channel (3.3 KB/sec) to calculate its speedup. This speedup represents the ratio of the performance of our N-channel scheme to the performance of a 1-channel scheme on the same commercial GPRS network. A high ratio value indicates that our scheme can better utilize the given N channels while a low value indicates that our scheme cannot.
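The speedup definition above amounts to a single division against the 3.3 KB/sec one-channel baseline, applied to the averages measured in Section 5.2.2:

```python
# Speedup = N-channel TCP throughput / one-channel TCP throughput.
# The measured averages below are taken from Section 5.2.2.
BASE = 3.3                                  # KB/s over one GPRS channel
measured = {2: 6.0, 3: 8.0, 4: 9.0}         # channels -> avg KB/s
speedup = {n: t / BASE for n, t in measured.items()}
# matches the 1.82, 2.42, and 2.72 quoted in the text
# (the last value is truncated from 2.727...)
```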

5.2.2 Evaluation Experiments

We measured the web download throughput of our scheme when one, two, three, and four GPRS channels are used. In each of these experiment suites, we measured the download throughput under different file sizes. For each file size, we repeated the experiment 10 times and report the average and standard deviation. In the experiments, the web server hosting these files resides on the same subnet as the PS.

In the first experiment suite, no RN provides additional GPRS channel bandwidth to help the DN download its requested file. Thus, the packets carrying the file’s content are transmitted on the DN’s own GPRS channel. Figure 5.2.2-1 shows that the average throughput without applying our scheme is about 3.3 KB/sec when the file size is greater than 50 KB. For each average throughput data point, the point above it is the average plus the standard deviation and the point below it is the average minus the standard deviation.


Figure 5.2.2-1: The average file download throughput with different sizes (through only one GPRS channel).

In the second experiment suite, one RN is used, and its channel and the DN’s channel are used to download the file in parallel. Figure 5.2.2-2 shows that the average throughput is about 6 KB/sec when the file size exceeds 90 KB. The throughput speedup is 1.82 (6/3.3).

In the third experiment suite, two RNs are used and in total three GPRS channels are used to download the file in parallel. Figure 5.2.2-3 shows that the average throughput is about 8 KB/sec when the file size exceeds 560 KB. The throughput speedup is 2.42 (8/3.3).

In the fourth experiment suite, three RNs are used and in total four GPRS channels are used to download the requested file in parallel. Figure 5.2.2-4 shows that the average throughput is about 9 KB/sec when the file size exceeds 550 KB. The throughput speedup is 2.72 (9/3.3).


Figure 5.2.2-2: The average file download throughput with different sizes (through two GPRS channels).


Figure 5.2.2-3: The average file download throughput with different sizes (through three GPRS channels).

Figure 5.2.2-4: The average file download throughput with different sizes (through four GPRS channels).

From Figure 5.2.2-1 to Figure 5.2.2-4, we see that the average file download throughput is low when the file size is small. This phenomenon can be explained as follows. After a TCP connection is set up, it immediately enters the TCP slow-start congestion control phase. In this phase, the TCP sender exponentially increases its sending rate (its congestion window size, i.e., the number of unacknowledged packets that can be sent out per RTT) in each subsequent RTT, where RTT is the round-trip time between the TCP sender and receiver. For example, a TCP sender sends out 1, 2, 4, 8, and 16 packets in the 1st, 2nd, 3rd, 4th, and 5th RTT, respectively. If the size of the requested file is small and thus only a few packets are needed to carry its content, the file transfer finishes within the first few RTTs. Since a TCP sender’s sending rate is its congestion window size divided by the RTT, and the GPRS channel’s RTT is very large (3 ~ 4 seconds for a standard-sized 1500-byte packet), the TCP sender’s sending rates in the first few RTTs are the lowest. As such, TCP throughputs on GPRS channels are low for small files regardless of whether our scheme is used.
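The slow-start effect above can be made concrete with a back-of-the-envelope model. The 1400-byte payload size and 3.5-second RTT are assumed values consistent with the text; the model ignores losses and delayed ACKs.

```python
# Toy model of slow start: the congestion window doubles every RTT,
# so a small file finishes entirely within the first (slowest) RTTs.
MSS = 1400        # bytes of payload per packet (assumed)
RTT = 3.5         # seconds on a GPRS channel (3 ~ 4 s per the text)

def slow_start_time(file_bytes):
    """Time to deliver file_bytes with cwnd = 1, 2, 4, ... MSS per RTT."""
    sent, cwnd, rtts = 0, 1, 0
    while sent < file_bytes:
        sent += cwnd * MSS     # one window's worth of data per RTT
        cwnd *= 2              # exponential window growth
        rtts += 1
    return rtts * RTT

# A 10 KB file fits in the first four windows (1+2+4+8 packets), so it
# takes 4 RTTs (14 s) and averages well under 1 KB/s over the transfer,
# regardless of how many GPRS channels are available.
```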

To further evaluate our scheme, we measured the web download throughput when five GPRS channels are used. In this experiment suite, because we have only four GPRS accounts from ChungHwa Telecom Inc., we used three RNs operating on the ChungHwa GPRS network and one RN operating on the FarEastone GPRS network. In total, five GPRS channels are used to download the requested file in parallel.

However, this experiment did not perform well: the throughput sometimes even dropped below 5 Kbytes/sec (less than twice the TCP GPRS throughput achieved without our scheme).

The unstable link quality of the FarEastone GPRS network on the NCTU campus decreased the performance, and more data packets arrived out of order in this experiment. Therefore, adding one more channel does not necessarily increase the performance of our scheme; the result depends on the signal strength of the new GPRS channel.

Figure 5.2.2-5 summarizes the results of Figure 5.2.2-1 to Figure 5.2.2-4. Readers can also refer to Figure A-1 in the Appendix.

Figure 5.2.2-6 (Figure A-2) shows the speedup achieved with different numbers of RNs in our experiments. Our scheme achieves almost three times the TCP GPRS throughput when three RNs forward data packets.


Figure 5.2.2-5: Experiment results in the real world.


Figure 5.2.2-6: Experiment speedup in the real world.

The results shown in these figures and the above explanation suggest that our scheme is more suitable for downloading large files than downloading small files.

Actually, when the download file size is small, there is no need to use our scheme to further reduce the already short transfer time.

6. SIMULATION SETTINGS AND RESULTS

In the simulation experiments, we evaluated issues of our scheme such as system performance as the number of RNs increases and mobility support in a MANET.

6.1 Simulation Settings and Modifications

To run our applications, such as the trunk daemon and the mobile node daemon, on NCTUns2.0, several settings and modifications are required. On NCTUns2.0, the modified kernel uses the Source-Destination-Pair IP address scheme to route packets from applications. By using the fully-integrated GUI environment, a user need not know about or use the source-destination-pair address scheme at all.

However, when an application uses raw sockets or installs firewall rules to divert packets from the IP queue, it must be modified to work correctly under the Source-Destination-Pair IP address scheme of NCTUns2.0. Therefore, we made some modifications to our trunk daemon and mobile node daemon.

In the simulation experiments, we focus on issues such as system performance as the number of RNs increases and mobility support in a MANET. Therefore, we used our modified traffic generators in place of the modified apache server in our scheme. We can then evaluate our scheme with one TCP connection or a CBR UDP packet stream. We can also evaluate the performance of our routing scheme, which is hard to evaluate in the real world.

6.2 Simulations Results

6.2.1 Calibration Tests for the GPRS package on NCTUns2.0

In these calibration tests, we want to evaluate how the GPRS system performs on NCTUns2.0 without using our scheme.

Figure 6.2.1-1 shows the topology of a typical GPRS network on NCTUns2.0. Node 1 is the Base Station, and the Base Station connects to a host (Node 5) in the Internet. Node 7 is a cellular phone. Node 6 is a mobile computer equipped with a GPRS network card and an 802.11b wireless card.

Figure 6.2.1-1: A simple GPRS network topology.

Figure 6.2.1-2 shows the protocol stack of a GPRS Base Station. Each RLC module has its own transmission queue with a limited number of slots. A packet is allowed to enter an RLC module if the transmission queue in that RLC module has enough slots to store that packet. Otherwise, the packet will be put back into the GPRS FIFO module until the RLC module has enough space for storing this incoming packet.

Figure 6.2.1-2: The protocol stack of a GPRS base station.
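The queue handoff described above can be sketched as follows. The class and field names are illustrative, and the slot accounting is simplified to one slot per packet rather than the simulator's actual sizing.

```python
# Sketch of the FIFO-to-RLC handoff: a packet enters an RLC module only
# when its transmission queue has free slots; otherwise it stays in the
# GPRS FIFO until space opens up.
from collections import deque

class RlcModule:
    def __init__(self, slots):
        self.slots = slots           # capacity of the transmission queue
        self.queue = deque()

    def try_admit(self, pkt):
        if len(self.queue) < self.slots:   # enough free slots?
            self.queue.append(pkt)
            return True
        return False                       # packet stays in the FIFO

def drain_fifo(fifo, rlc):
    """Move packets from the GPRS FIFO into the RLC queue until full."""
    while fifo and rlc.try_admit(fifo[0]):
        fifo.popleft()
```

This backpressure is what makes the GPRS FIFO grow during overload, which is exactly the queue-size behavior plotted in Figures 6.2.1-3 and 6.2.1-4.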

We present the GPRS TCP throughput in Figure 6.2.1-3 (Figure A-3) without using our scheme. The pink line represents the current queue size of GPRS FIFO, and the blue line represents the TCP throughput.

The GPRS FIFO overflows between 36 and 55 seconds, so the GPRS base station drops some data packets during that period. This triggers the congestion control of the TCP connection, so the TCP throughput drops to zero between 61 and 78 seconds. The size of the GPRS FIFO decreases after 58 seconds, so the slow start of the TCP connection can make progress, and after a while the GPRS TCP throughput climbs up again.

We present the GPRS UDP throughput in Figure 6.2.1-4 (Figure A-4) without using our scheme. We pump a CBR UDP packet stream of about 3.75 Kbytes/sec at the PS. Although the GPRS FIFO overflows after 25 seconds, the UDP throughput stays steady at 3.74 Kbytes/sec. This is because a UDP source does not decrease its sending rate even when some packets are lost on the GPRS network.


Figure 6.2.1-3: GPRS TCP Throughput


Figure 6.2.1-4: GPRS UDP Throughput

6.2.2 Evaluation Experiments with a Simple Wireless Physical Layer Module

We measured the web download throughput of our scheme on NCTUns2.0 using a simple wireless physical layer module (PHY) when different numbers of static RNs forward packets to the DN. With the simple wireless PHY on NCTUns2.0, a receiver does not suffer any packet loss within the transmission range of a sender.

In each of the simulation experiment suites below, we measured the download throughput under different file sizes. For each file size, we repeated the experiment 10 times and report their average and standard deviation.

6.2.2.1 TCP

First, we measure the TCP throughput of our scheme on NCTUns2.0.

6.2.2.1.1 Single Hop Count

In the experiments below, every RN is just one hop away from the DN, so data packets can be sent directly from each RN to the DN in the MANET (Figure 6.2.2.1.1-1). Node 7 is the DN, and the red circle represents its transmission range; every RN is within the transmission range of the DN.

Figure 6.2.2.1.1-1: Simulation topology 1.

In the first experiment suite, no RN provides additional GPRS channel bandwidth to help the DN download its requested file. Thus, the packets carrying the file’s content are transmitted on the DN’s own GPRS channel. Figure 6.2.2.1.1-2

