**Chung Hua University** **Master's Thesis**

### A Study on Maximizing Area Coverage in Directional Wireless Sensor Networks Using Mobile Sensors

### Maximize Area Coverage in Directional Sensor Networks with Mobile Sensors

### Department: Master's Program, Department of Computer Science and Information Engineering
### Student ID and Name: M09902018 李權峰
### Advisor: Dr. 梁秋國

### February 2014

### Abstract (Chinese)

A directional sensor network is composed of many directional sensors. Unlike an omnidirectional sensor network, its sensors have a restricted sensing angle due to hardware limitations or cost considerations. The coverage of a directional sensor network depends on each sensor's angular width and working direction, so the covered region differs from that of a traditional omnidirectional network. Area coverage therefore remains an important issue for directional sensor networks. In this thesis, we mount each directional sensor on a mobile platform so that it can move to positions with little or no overlap, raising the coverage ratio and thereby addressing the maximum area coverage problem.

We propose distributed self-deployment schemes for mobile sensors. After the directional sensors are randomly deployed in the monitored region, each sensor computes a new position to move to, obtaining better coverage than at its previous position. The sensors' positions are adjusted round by round, so the coverage increases gradually. We use a virtual force mechanism to improve the coverage. Simulation results show that our proposed schemes effectively improve area coverage.

Keywords: directional sensors, mobile sensors, area coverage

**Abstract**

A directional sensor network is composed of many directional sensor nodes.

Unlike conventional omni-directional sensors, which have an omni-angle of sensing range, directional sensors may have a limited angle of sensing range due to technical constraints or cost considerations. Area coverage is still an essential issue in a directional sensor network. In this thesis, we discuss how to move sensors to positions with smaller or no overlapping regions to enhance the coverage rate and maximize area coverage. We present distributed self-deployment schemes for mobile sensors. After sensors are randomly deployed, each sensor calculates its next location to move to in order to obtain better coverage than at its previous one. The locations of the sensors are adjusted each round so that the coverage is gradually improved. We also propose a virtual force scheme to increase the coverage.

Simulation results show the effectiveness of our schemes in terms of coverage improvement.

**Keywords: directional sensors, mobile sensor, area coverage **

### Acknowledgments

After graduating from university still somewhat naive, the support of my family and the encouragement of relatives and friends carried me into the next stage of my studies. Graduate research turned out to be far from easy: every claim in academic work must be made rigorously, and every experiment and result must be verified again and again. I thank Professor 梁秋國 for patiently correcting my recurring mistakes over these years and for teaching me how to conduct myself, which made me more rigorous not only in academic research but also more thoughtful in dealing with people. I also thank the senior and junior members of our laboratory for their warm assistance whenever I was confused, troubled, or worried. Special thanks go to 陳彥廷, who, even after graduating and starting work, tirelessly taught me programming and discussed the experimental procedures with me. Thanks to their support and guidance, I was able to produce complete research and good contributions in this field.

Finally, I thank my parents and family for their care, encouragement, and support, which allowed me to complete my studies and move on to the next stage of life. Although the journey was long and hard, the effort has paid off. I dedicate this thesis to all the teachers, seniors, classmates, and juniors who have helped me, and I hope that in the days ahead I can apply what I have learned, give back to society, and live up to everyone's expectations.

**Table of Contents**

List of Figures
List of Tables
Chapter 1 Introduction
Chapter 2 Related Work
&nbsp;&nbsp;2.1 Voronoi Diagram
&nbsp;&nbsp;2.2 The VORonoi-based Algorithm (VOR)
&nbsp;&nbsp;2.3 The Minimax Algorithm
&nbsp;&nbsp;2.4 The Centroid Algorithm
&nbsp;&nbsp;2.5 Voronoi-based Method with Directional Sensors
&nbsp;&nbsp;&nbsp;&nbsp;2.5.1 Circumcenter of Circumscribed Circle
&nbsp;&nbsp;&nbsp;&nbsp;2.5.2 Incenter of Inscribed Circle
&nbsp;&nbsp;&nbsp;&nbsp;2.5.3 Moving Algorithm
Chapter 3 Directional Sensing Model and Preliminaries
&nbsp;&nbsp;3.1 Directional Sensing Model
&nbsp;&nbsp;3.2 Preliminaries
&nbsp;&nbsp;&nbsp;&nbsp;3.2.1 Virtual Point
&nbsp;&nbsp;&nbsp;&nbsp;3.2.2 Calculating the Distance to Move
&nbsp;&nbsp;&nbsp;&nbsp;3.2.3 Target in Sector (TIS) Test
Chapter 4 Our Proposed Schemes
&nbsp;&nbsp;4.1 Problem Statement
&nbsp;&nbsp;4.2 Improving the Coverage of Virtual Force Algorithm
&nbsp;&nbsp;&nbsp;&nbsp;4.2.1 Centroid Push Auxiliary Point (CPA) Scheme
&nbsp;&nbsp;&nbsp;&nbsp;4.2.2 Centroid Push Centroid (CPC) Scheme
&nbsp;&nbsp;&nbsp;&nbsp;4.2.3 Voronoi Point Pull Centroid (VPPC) Scheme
&nbsp;&nbsp;&nbsp;&nbsp;4.2.4 Neighbor's Repulsion (NR) Scheme
&nbsp;&nbsp;&nbsp;&nbsp;4.2.5 Moving Algorithm
Chapter 5 Simulation Results
&nbsp;&nbsp;5.1 Simulation Environment
&nbsp;&nbsp;5.2 Performance of our Proposed Schemes
Chapter 6 Conclusion and Future Work
References

**List of Figures**

Figure 1.1 An example of a wireless sensor network
Figure 1.2 An example of three omni-directional sensors deployed to cover the target region
Figure 1.3 An example of five directional sensors deployed to cover the target region
Figure 1.4 The redundant node judgment scheme for a directional sensor network
Figure 2.1 An example of the Voronoi diagram
Figure 2.2 The Voronoi polygon
Figure 2.3 A sensor moves to cover the farthest Voronoi vertex
Figure 2.4 An example of oscillation control
Figure 2.5 A sensor moves toward the center of the smallest enclosing circle of the Voronoi polygon
Figure 2.6 A sensor moves toward the centroid point of the local Voronoi polygon
Figure 2.7 Circumscribed circle and circumcenter
Figure 2.8 Inscribed circle and incenter
Figure 2.9 The VOR moving approach
Figure 2.10 The Minimax moving approach
Figure 2.11 The Centroid moving approach
Figure 2.12 An example of different circle movements
Figure 3.1 Directional sensing model
Figure 3.2 The virtual point
Figure 3.3 The repulsive force and movement of a directional sensor
Figure 3.4 Two ways of describing directional sensors
Figure 3.5 Situations of sensor misjudgments causing an overlapping region
Figure 3.6 Comparing the centroid of a circle to the circumscribed and inscribed circles on a directional sensor
Figure 3.7 Relationship between the amount of overlap and the resulting movement vector
Figure 4.1 Example of the repulsion model in which S_j exerts a repulsive force on S_i
Figure 4.2 Example of the repulsion model in which S_j exerts a repulsive force on S_i
Figure 4.3 Two approaches to determining attracting points (ATPs): (a) using VOR; (b) using the centroid approach
Figure 4.4 Example gravitational model in which ATP_i attracts G_i
Figure 4.5 The case in which a sensor still has an overlapping area after moving
Figure 4.6 The case of a neighbor's repulsive force
Figure 4.7 Example of the move-back scheme in which a sensor (shown in (a)) moves down (shown in (b)) and right (shown in (c)) to result in location (d)
Figure 4.8 Procedures of our ICVFA algorithm
Figure 5.1 Coverage rate based on an increasing number of sensors
Figure 5.2 The total distances moved based on an increasing number of sensors
Figure 5.3 Coverage based on executed rounds using 60 sensors
Figure 5.4 Coverage based on executed rounds using 90 sensors
Figure 5.5 Total moving distances based on executed rounds using 60 sensors
Figure 5.6 Total moving distances based on executed rounds using 90 sensors
Figure 5.7 Coverage increment ratio based on differing numbers of sensors
Figure 5.8 Coverage increment ratio based on the number of sensors
Figure 6.1 Troublesome case of the Voronoi-based algorithm in which an initial configuration (a) leads to worse results in (b)

**List of Tables **

Table 5.1 Experimental parameters

**Chapter 1 ** Introduction

Wireless sensor networks (WSNs) are one of the most important technologies that will change the world [1], in that such networks can provide us with fine-grained information about the physical world in which we live. A wireless sensor network is composed of a great quantity of tiny, inexpensive sensor nodes that collect environmental information for further applications. Potential applications of wireless sensor networks span a wide range of areas, such as disaster rescue, energy management, medical monitoring, logistics and inventory management, remote monitoring of seismic activities and environmental factors (e.g., air, water, soil, wind, chemicals), precision agriculture, factory instrumentation, and military reconnaissance [2, 3]. Sensors are expected to be widely deployed for monitoring and control. Such a network can provide a fine global picture through the collaboration of many sensors, each observing a coarse local view [4, 5].

In a wireless sensor network, each sensor node is normally battery operated and equipped with the following modules [2, 6]:

- A sensing module capable of sensing some physical quantity (e.g., acoustic, seismic, temperature, or brightness) or monitoring some entity in the environment [7, 8].
- A digital unit that processes the signals from the sensors and can perform simple computations such as summation, averaging, or sorting of the collected information; this unit also performs network protocol functions [6].
- A radio module for wireless communication that transfers the collected information wirelessly to users. With this communication module, sensor nodes can communicate with other nodes, and users can send queries to the sensor nodes specifying what data they need.

Among these applications, one of the major uses of a sensor network is to collect information periodically from a remote terrain, where each node continually senses the environment and sends the data back to the base station (BS), which is usually located considerably far from the target field [9], for further analysis, as shown in Figure 1.1.

**Figure 1.1 An example of a wireless sensor network**

In recent years, wireless sensor networks have received a lot of attention due to their wide applications in military and civilian operations, such as fire detection [10], vehicle traffic monitoring [11], ocean monitoring [12], and battlefield surveillance [13]. In wireless sensor networks, area coverage is a fundamental problem and has been studied by many researchers. Most past work assumed omni-directional sensors that have an omni-angle of sensing range, as shown in Figure 1.2. However, there are many kinds of directional sensors, such as video sensors [14], ultrasonic sensors [15], and infrared sensors [16]. An omni-directional sensor node has a circular disk of sensing range, whereas a directional sensor node has a smaller, sector-like sensing area and a smaller sensing angle. Compared to isotropic sensors, the coverage region of a directional sensor is determined by both its location and its orientation, as illustrated by the example in Figure 1.3.

**Figure 1.2 An example of three omni-directional sensors deployed to cover the target region**

Area coverage is a fundamental problem in wireless sensor networks. Therefore, sensor nodes must be deployed appropriately to reach an adequate coverage level for the successful completion of the issued sensing tasks [17], [18]. However, in many potential working environments, such as remote harsh fields, disaster areas, and toxic urban regions, sensor deployment cannot be performed manually. Scattering sensors by aircraft may result in a situation where the actual landing positions cannot be controlled. Consequently, the coverage may fall short of the application requirements no matter how many sensors are dropped. In such cases, it is necessary to use mobile sensors, which can move to the correct positions to provide the required coverage.

**Figure 1.3 An example of five directional sensors deployed to cover the target region**

Most previous research on deploying mobile sensors is based on omni-directional sensor networks. For example, Howard et al. [19] present a distributed, potential-field-based approach to solve the coverage problem. In their approach, sensor nodes are treated as virtual particles subject to forces; these forces repel neighboring sensor nodes from each other and from obstacles, so that sensor nodes spread from dense to sparse areas. The concept of a potential field was first proposed in research on mobile robot route planning and obstacle avoidance by Khatib [20]. In [21], Wang et al. present a set of Voronoi-diagram-based schemes to maximize sensing coverage. After discovering a coverage hole locally, the schemes calculate a new position for each sensor to move to in the next round. They use the Voronoi diagram to discover coverage holes and design three movement-assisted sensor deployment schemes: VEC (VECtor-based), VOR (VORonoi-based), and Minimax. In [22], Lee et al. design two movement-assisted schemes, Centroid-based and Dual-Centroid-based; based on the Voronoi diagram and the centroid (geometric center), these schemes improve the sensing coverage.

In this thesis, we study the problem of coverage by directional mobile sensors under a random deployment strategy. We develop solutions that maximize the sensing coverage while minimizing the moving distance.

In [23], Tao et al. propose a redundant node judgment scheme for directional sensor networks. They employ the concept of perimeter coverage proposed by Chi-Fu Huang et al. [24] to judge redundant nodes, abstracting the fan-shaped sensing region of a directional sensor by its inscribed circle to perform the judgment approximately. When the boundary of the inscribed circle can be fully covered by its neighboring sensors, the sensor is considered redundant, as shown in Figure 1.4. Besides the inscribed circle, in this thesis we also employ the circumscribed circle. Then, based on the centers of the circumscribed and inscribed circles of the sensing region, we propose virtual force schemes in which the forces repel the sensors from each other to reduce the overlapping region. Simulation results show that our distributed algorithms are effective in terms of coverage and movement.

**Figure 1.4 The redundant node judgment scheme for a directional sensor network: (a) perimeter coverage, S_3 is a redundant sensor; (b) non-perimeter coverage, S_1, S_2, and S_3 are not redundant sensors**

The rest of this thesis is organized as follows. In Chapter 2, we introduce some moving algorithms for omni-directional sensor networks. In Chapter 3, we describe our sensing model and preliminaries. A detailed description of our proposed schemes is presented in Chapter 4. Chapter 5 shows some simulation results. Finally, we conclude this thesis and discuss future work in Chapter 6.

**Chapter 2** Related Work

In recent years, there has been research on coverage in directional sensor networks with mobile sensors. We first introduce three movement-assisted sensor deployment algorithms for wireless sensor networks composed of omni-directional sensors, all based on the Voronoi diagram: VORonoi-based and Minimax [21], and Centroid [22]. We also introduce a Voronoi-based method for directional sensor networks [34].

**2.1 Voronoi diagram **

The Voronoi diagram is a significant data structure in computational geometry [25], [26], and it can be used to solve coverage problems in wireless sensor networks [21], [27]. The Voronoi diagram of a collection of nodes separates the plane into many polygons. Figure 2.1 is an example of the Voronoi diagram, and Figure 2.2 shows an example of a Voronoi polygon. Given a set of sensors S = {s_0, s_1, ..., s_n}, the Voronoi diagram partitions the plane into polygons such that each polygon contains exactly one sensor, and any point in a given polygon is closer to its generating sensor than to any other sensor.

We define the Voronoi polygon of s_0 as G_0 = {V_0, E_0}, where V_0 is the set of Voronoi vertices of s_0, E_0 is the set of Voronoi edges, and N_0 is the set of neighbors of s_0. As shown in Figure 2.2, V_0 = {v_1, v_2, v_3, v_4, v_5}, E_0 = {v_1v_2, v_2v_3, v_3v_4, v_4v_5, v_5v_1}, and N_0 = {s_1, s_2, s_3, s_4, s_5}. The Voronoi edges of s_0 are the perpendicular bisectors of the line segments connecting s_0 to its neighbors. For example, the line segment v_1v_2 is the bisector of the line segment s_0s_2.

Based on the Voronoi diagram, we can decompose the area coverage problem into the coverage problem of each Voronoi polygon [27].

**Figure 2.1 An example of the Voronoi diagram **

**Figure 2.2 The Voronoi polygon**
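The partitioning described above can be sketched in code: each sensor's Voronoi polygon is obtained by clipping the bounding region with one perpendicular-bisector half-plane per neighbor. This is a minimal illustration assuming a rectangular deployment region (helper names such as `voronoi_cell` are ours, not the thesis's):

```python
def clip_halfplane(poly, a, b, c):
    """Keep the part of polygon `poly` where a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        in1 = a * x1 + b * y1 <= c
        in2 = a * x2 + b * y2 <= c
        if in1:
            out.append((x1, y1))
        if in1 != in2:
            # The edge crosses the bisector: add the intersection point.
            t = (c - a * x1 - b * y1) / (a * (x2 - x1) + b * (y2 - y1))
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def voronoi_cell(sensor, neighbors, bounds):
    """Voronoi polygon of `sensor` inside the rectangle bounds=(xmin,ymin,xmax,ymax)."""
    xmin, ymin, xmax, ymax = bounds
    poly = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
    sx, sy = sensor
    for nx, ny in neighbors:
        # Half-plane of points closer to the sensor than to this neighbor:
        # 2(nx-sx)x + 2(ny-sy)y <= nx^2 + ny^2 - sx^2 - sy^2
        a, b = 2 * (nx - sx), 2 * (ny - sy)
        c = nx * nx + ny * ny - sx * sx - sy * sy
        poly = clip_halfplane(poly, a, b, c)
    return poly
```

In a distributed setting each sensor would run this only over neighbors within its communication range, which is why the Voronoi construction in [21] is local.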

**2.2 The VORonoi-based Algorithm (VOR) **

The main idea of VOR is to pull sensors toward sparsely covered areas. Each sensor moves toward its farthest Voronoi vertex and stops when that vertex can be covered, as shown in Figure 2.3.

**Figure 2.3 A sensor moves to cover the farthest Voronoi vertex**

VOR limits the maximum moving distance to at most half of the communication range minus the sensing range, to reduce the risk of moving oscillation. However, oscillation may still occur if new holes are generated when a sensor leaves its position. VOR therefore adds an oscillation control that does not allow a sensor to move backward, as shown in Figure 2.4. In Figure 2.4(a), sensor S_i moves toward the new position to cover the farthest vertex. In Figure 2.4(b), S_i tends to move toward a new position to cover the farthest vertex of the hole generated by its last movement; however, since this second movement would take S_i backward, the sensor does not move toward that position. VOR also checks whether the local coverage will be increased by a movement before performing it.

**Figure 2.4 An example of oscillation control **
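The VOR rule above can be sketched as follows, given a sensor's position, its Voronoi polygon vertices, its sensing radius, and the maximum moving distance; the oscillation control and movement-adjustment checks are omitted for brevity (function names are ours):

```python
import math

def vor_move(sensor, cell_vertices, sensing_r, d_max):
    """New position under the VOR rule: move toward the farthest Voronoi
    vertex just far enough to cover it, moving at most d_max."""
    sx, sy = sensor
    far = max(cell_vertices, key=lambda v: math.hypot(v[0] - sx, v[1] - sy))
    dist = math.hypot(far[0] - sx, far[1] - sy)
    if dist <= sensing_r:
        return sensor                      # farthest vertex already covered
    step = min(dist - sensing_r, d_max)    # stop once the vertex is covered
    ux, uy = (far[0] - sx) / dist, (far[1] - sy) / dist
    return (sx + step * ux, sy + step * uy)
```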

**2.3 The Minimax Algorithm **

The main idea of Minimax is fixing holes by moving closer to the farthest Voronoi
vertex, but it does not move as far as VOR. As shown in Figure 2.5, sensor will move
toward the center of the smallest enclosing circle of the Voronoi polygon. To find the
smallest enclosing circle of polygon, the simplest algorithm considers every circle defined
*by two or three of the n points, and finds the smallest of “these” circles that contains every *
point and there are other algorithms of finding smallest enclosing circle which are
described in [28], [29], [30]. The Minimax also applies the same Maximum moving
distance to reduce the risk of moving oscillation, the Oscillation control to prevent sensors
from moving backward and the Movement-adjustment scheme to check whether the local
coverage will be increased by its movement or not which are mentioned before.

**Figure 2.5 A sensor moves toward the center of the smallest enclosing circle of its Voronoi polygon**
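The brute-force smallest-enclosing-circle algorithm described above, which checks every circle defined by two or three of the points, can be sketched as follows. It runs in O(n^4) time, which is acceptable only for the small vertex counts of a Voronoi polygon (names are ours):

```python
import itertools, math

def covers(c, r, pts, eps=1e-9):
    """True if the circle (center c, radius r) contains all points."""
    return all(math.hypot(x - c[0], y - c[1]) <= r + eps for x, y in pts)

def circle_2pts(p, q):
    """Circle with segment pq as diameter."""
    c = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return c, math.hypot(p[0] - q[0], p[1] - q[1]) / 2

def circle_3pts(p, q, s):
    """Circumcircle of three points, or None if they are collinear."""
    ax, ay = p; bx, by = q; cx, cy = s
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay) + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx) + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def smallest_enclosing_circle(pts):
    """Brute force over all 2- and 3-point circles, as described in the text."""
    best = None
    for p, q in itertools.combinations(pts, 2):
        c, r = circle_2pts(p, q)
        if covers(c, r, pts) and (best is None or r < best[1]):
            best = (c, r)
    for p, q, s in itertools.combinations(pts, 3):
        res = circle_3pts(p, q, s)
        if res and covers(*res, pts) and (best is None or res[1] < best[1]):
            best = res
    return best  # (center, radius)
```

The faster expected-linear-time alternatives cited as [28], [29], [30] would replace this exhaustive search in practice.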

**2.4 ** **The Centroid Algorithm **

The Centroid algorithm moves each sensor to the center point of its local Voronoi polygon, called the centroid point. As shown in Figure 2.6, the star marks the centroid point of the local Voronoi polygon, and the sensor moves toward it as its new position. The Centroid algorithm then applies the same movement-adjustment scheme mentioned before to check whether the local coverage will increase after the movement. If it will, the sensor moves toward the new position; otherwise, it stays where it is.

**Figure 2.6 A sensor moves toward the centroid point of its local Voronoi polygon**
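The centroid point of a Voronoi polygon can be computed with the standard shoelace-based centroid formula, sketched below (the thesis's own centroid equations are not reproduced here; names are ours):

```python
def polygon_centroid(vertices):
    """Centroid of a simple polygon given counter-clockwise vertices."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        cross = x1 * y2 - x2 * y1   # shoelace term for this edge
        a += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    a *= 0.5                        # signed polygon area
    return (cx / (6 * a), cy / (6 * a))
```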

**2.5 Voronoi-based Method with Directional Sensors **

He et al. apply the above methods to directional sensor networks [34]. They propose two ways of regarding a directional sensor as an omni-directional sensor so that Voronoi-based moving algorithms can be applied.

**2.5.1 Circumcenter of Circumscribed Circle **

The circumscribed circle of a polygon is a circle that covers all the vertices of the polygon; its center is called the circumcenter. In this thesis, we depict the circumscribed circle of a sensing sector and then calculate its circumcenter (C_0) and circumradius (C_r), as shown in Figure 2.7.

**Figure 2.7 Circumscribed circle and circumcenter**

**2.5.2 Incenter of Inscribed Circle **

The inscribed circle of a sensing sector is the circle tangent to the arc and to the two bounding radii. Figure 2.8 illustrates the center of the inscribed circle of a sensing sector. Let D denote the distance between the directional sensor and the center of the inscribed circle, and let r denote the radius of the inscribed circle, called the inradius. Since the inscribed circle is tangent to both bounding radii, r = D·sin(α/2); since it is internally tangent to the arc, D + r = R. Therefore, D = R / (1 + sin(α/2)) and r = R·sin(α/2) / (1 + sin(α/2)). Using these variables, we can compute the center of the inscribed circle, which is called the incenter. The line from the directional sensor through the incenter is the angle bisector of the sensing sector.

**Figure 2.8 Inscribed circle and incenter**
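Under the two tangency conditions above, D and r follow directly from R and α (valid for sensing angles up to π). A small sketch, with the function name ours:

```python
import math

def inscribed_circle(R, alpha):
    """Incenter distance D and inradius r of a sector with radius R and
    sensing angle alpha (radians): r = D*sin(alpha/2) and D + r = R."""
    s = math.sin(alpha / 2)
    D = R / (1 + s)
    r = R * s / (1 + s)
    return D, r
```

As a sanity check, a half-disk (α = π) yields D = r = R/2, the circle tangent to the diameter and the arc.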

**2.5.3 Moving Algorithm**

The purpose of the moving algorithms is to determine new locations for the directional sensors and to decide whether the sensors should move to them. To obtain better coverage, researchers employ three different approaches to determine the new locations: the VOR, Minimax, and Centroid approaches.

VOR approach

The VOR approach is similar to the VOR moving algorithm proposed in [21]: the new location of a directional sensor lies on the line from the sensor to the farthest Voronoi vertex (denoted V_far) of its corresponding Voronoi polygon. The directional sensor moves toward V_far and stops when V_far can be covered. If the distance the movement node would travel toward V_far exceeds the maximum moving distance (denoted d_max), the movement node instead stops once it has moved d_max. The approach also adopts the oscillation control of [21] to prevent new holes generated by sensors moving back and forth. After determining the new position, the directional sensor executes the movement-adjustment until the new position is reached. Figure 2.9 shows the procedure of the VOR approach.
VOR moving approach
**Notations:**
N_i : the movement node (the circumcenter or incenter of the directional sensor)
R_c : communication range
d_max : maximum moving distance (R_c/2 − circumradius, or R_c/2 − inradius)
m : moving vector of N_i
V_far : the farthest Voronoi vertex
v : vector from N_i to V_far
G(N_i) : Voronoi polygon of N_i
**Procedure:**
(1) Calculate V_far of G(N_i)
(2) Set m along v, shortened by the circumradius (or the inradius), so that V_far is just covered
(3) Shrink m to d_max if |m| > d_max
(4) Do oscillation control
(5) Do movement-adjustment until V_far is covered

**Figure 2.9 The VOR moving approach**

Minimax approach

[21]: the new location of a directional sensor is the center of the smallest enclosing circle of its corresponding Voronoi polygon, called the Minimax point. If the distance the movement node would travel toward the Minimax point exceeds d_max, the movement node stops once it has moved d_max instead of reaching the Minimax point. After determining the new position, the directional sensor executes the movement-adjustment until the new position is reached. Figure 2.10 shows the procedure of the Minimax approach.

Minimax moving approach
**Notations:**
N_i : the movement node (the circumcenter or incenter of the directional sensor)
d_max : maximum moving distance (R_c/2 − circumradius, or R_c/2 − inradius)
m : moving vector of N_i
Minimax point : the center of the smallest enclosing circle
v : vector from N_i to the Minimax point
G(N_i) : Voronoi polygon of N_i
**Procedure:**
(1) Calculate the center of the smallest enclosing circle of the vertices of G(N_i)
(2) Set m = v
(3) Shrink m to d_max if |m| > d_max
(4) Do oscillation control
(5) Do movement-adjustment until the Minimax point is reached

**Figure 2.10 The Minimax moving approach**

Centroid approach

In the Centroid moving approach, we calculate the centroid of the local Voronoi polygon of each directional sensor, which can be computed by equations (2) and (3). This centroid point is the new position of the sensor's movement node. After determining the new position, the directional sensor executes the movement-adjustment until the new position is reached. This approach is similar to the Centroid-based moving algorithm proposed in [22]. The procedure of the Centroid moving approach is shown in Figure 2.11.

Centroid moving approach
**Notations:**
M_i : the movement node (the circumcenter or incenter of the directional sensor)
G(M_i) : Voronoi polygon of M_i
**Procedure:**
(1) Calculate the centroid point of G(M_i)
(2) Do movement-adjustment until the centroid point is reached

**Figure 2.11 The Centroid moving approach**

The moving approach can then be stated as follows: the directional sensor determines the new location of the circle's center (for either the inscribed or the circumscribed circle) and the destination of the move, and then checks whether the local coverage is increased by moving to that destination. Figure 2.12 illustrates an example of the movement of the two different circles.

**Figure 2.12 An example of different circle movements: (a) before and (b) after movement of the circumscribed circle; (c) before and (d) after movement of the inscribed circle**

**Chapter 3 **

### Directional Sensing Model and Preliminaries

Unlike isotropic sensors, directional sensors have their own sensing model. In this chapter, we describe our directional sensing model and some preliminaries.

**3.1 Directional Sensing Model **

Compared to an omnidirectional sensor, which has a disc-shaped sensing range, a directional sensor has a smaller, sector-like sensing area and a smaller sensing angle, as illustrated in Figure 3.1. As shown in the figure, the sensing region (also called the sensing sector) of a directional sensor is a sector denoted by the 4-tuple (S, d, α, R), where S is the location of the sensor, d is the center line of sight of the field of view (the working direction), α is the sensing angle, and R is the sensing radius; further, β is defined as the direction angle of d relative to the horizontal.

**Figure 3.1 Directional sensing model**

**3.2 Preliminaries **

Our algorithm uses a virtual-force-based method [35]. In the subsections below, we define the virtual points (the centroid point, the auxiliary points, and the attracting point) and four types of virtual force operations.

**3.2.1 Virtual Point **

We assume there are four virtual points on the boundary of the sensing sector, and each sensor uses its location information to compute them, as shown in Figure 3.2. Let AP_ik, k = 1, ..., 3, denote the auxiliary points of S_i: AP_i1 is the position of the sensor, and AP_i2 and AP_i3 are the vertices on the arc of the sensing sector. The fourth virtual point is G_i, the centroid of the sensing region. Other sensors can exert a virtual force on these virtual points; Figure 3.2 shows the four virtual points on the sensing range of sensor S_i. A sensor uses its auxiliary points to calculate the virtual force [35] exerted on it by its neighboring directional sensors, so that sensors are repelled from dense to sparse areas, as shown in Figure 3.3(a). There, directional sensor S_i has an overlapping region with directional sensor S_j, and AP_i3 is covered by S_j; therefore S_j exerts a repulsive force on S_i: G_j pushes the virtual point AP_i3 of S_i, and S_i experiences the resulting repulsive force. As shown in Figure 3.3(b), after moving under this force, S_i no longer has any overlapping region with S_j. Our moving method is thus based on virtual forces that push sensors away from any overlap with their neighbors.

**Figure 3.2 The virtual points**

**Figure 3.3 (a) S_j exerts a repulsive force on S_i; (b) the directional sensor after movement**
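As an illustration of how such a repulsive force could be computed, the sketch below sums an inverse-distance push on one virtual point from the centroids of the neighboring sensors that cover it. This is only an assumed force model for illustration (the function name and the `strength` parameter are ours); the actual force definitions belong to the schemes in Chapter 4:

```python
import math

def repulsive_force(virtual_pt, neighbor_centroids, strength=1.0):
    """Net push on one virtual point from neighboring sensors' centroids.
    Each covering neighbor's centroid G_j pushes the point directly away
    from itself; `strength` is an illustrative tuning constant."""
    fx = fy = 0.0
    px, py = virtual_pt
    for gx, gy in neighbor_centroids:
        d = math.hypot(px - gx, py - gy)
        if d < 1e-12:
            continue                      # coincident point: skip
        # Inverse-distance weighting: closer centroids push harder.
        fx += strength * (px - gx) / (d * d)
        fy += strength * (py - gy) / (d * d)
    return fx, fy
```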

**3.2.2 Calculating the Distance to Move **

In previous studies, researchers likened the directional sensor (fan-shaped) to the omnidirectional sensor (disc-shaped). One common description is the circumscribed circle of a directional sensor's sensing range, which covers all vertices of that range, as shown in Figure 3.4(a). The other is the inscribed circle of the sensing range, the largest circle contained within it; the inscribed circle touches all three sides (the two edges and the arc), as shown in Figure 3.4(b). If we regard the centers of these circles as virtual points, we can use virtual forces to move the corresponding sensors.

**Figure 3.4 Two ways of describing directional sensors: (a) the circumscribed circle; (b) the inscribed circle**

We found the above approaches insufficient for describing the sensing range of a directional sensor. First, when a circumscribed circle is used, the simulated sensing range is larger than the sensor's actual sensing region. A sensor may then compute an overlap with its neighbors that does not really exist, move away from it, and thus perform redundant movements, as shown in Figure 3.5(a). Second, when an inscribed circle is used, the simulated sensing range is smaller than the actual sensing region. A sensor may then compute that it has no overlap with its neighbors and fail to move away even though, as shown in Figure 3.5(b), an overlapping region with its neighbors remains.

**Figure 3.5 Situations in which sensor misjudgments cause an overlapping region, using (a) circumscribed circles and (b) inscribed circles**

Regardless of whether we select the circumscribed or the inscribed circle to express a directional sensor's sensing range, we encounter disadvantages and misjudgments. Therefore, we aim to find a better way to represent the area of the directional sensor's sensing range. We suggest using the centroid of the sensing sector. More specifically, we define CR as the distance between the centroid of the directional sensor's sensing range and the position of the sensor; the centroid is the center of a circle drawn with radius CR. With this approach, the centroid circle falls between the circumscribed circle and the inscribed circle, as shown in Figure 3.6. We use CR as the base of the amount of movement, but we do not use the centroid circle itself to express the sensing range. Instead, we calculate a more accurate distance to reduce the occurrence of redundant movements and overlapping areas.
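Since CR is the distance from the sensor (the sector's apex) to the centroid of its sensing sector, it has a closed form for a circular sector of radius R and central angle α: 4R·sin(α/2)/(3α). A minimal Python sketch (the thesis's simulator was written in C#, so these function names are illustrative only):

```python
import math

def sector_centroid_distance(R, alpha):
    """CR: distance from the sector apex to the centroid of a circular
    sector with radius R and central angle alpha (radians)."""
    return (4.0 * R * math.sin(alpha / 2.0)) / (3.0 * alpha)

def sector_centroid(pos, direction, R, alpha):
    """Centroid of the sensing sector of a sensor at `pos` whose bisector
    points along angle `direction` (radians)."""
    cr = sector_centroid_distance(R, alpha)
    return (pos[0] + cr * math.cos(direction),
            pos[1] + cr * math.sin(direction))
```

For the parameters used later in the thesis (R = 60 m, α = 90°), CR is about 36 m, which indeed lies between the inscribed and circumscribed radii.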

(Figure legend: CR; centroid of circle; inscribed circle; circumscribed circle)

**Figure 3.6 Comparing the centroid of a circle to the circumscribed and inscribed circles on a directional sensor**

At the same time, we take the proportion of overlap into account. If a sensor has a large overlapping area, it experiences a larger force pushing it away from the overlap; conversely, if it has a small overlapping area, it experiences a smaller force. These cases are shown in Figure 3.7.

(a) A larger overlapping area produces a larger force. (b) A smaller overlapping area produces a smaller force. (Figure legend: sensors S_i, S_j; centroids G_i, G_j; auxiliary point AP_i3; region of overlap; corrected vector vs. actual vector)

**Figure 3.7 Relationship between the amount of overlap and the resulting movement vector**

**3.2.3 Target in Sector (TIS) Test **

In [14], each choice of orientation lets a directional sensor cover a certain subset of targets. The relationship between a directional sensor, its orientation, and a target can be determined by the Target in Sector (TIS) test.

First, we calculate the distance from directional sensor s to target t and verify that it is at most the sensing radius; i.e.,

d(s, t) ≤ R_s  (1)

Next, we check whether the angle between the vector from s to t and the orientation vector of sensor s is within the field of view of s; i.e.,

∠(st, d_s) ≤ α/2  (2)

where st denotes the vector from s to t and d_s denotes the orientation vector of s. Target t is said to be covered by sensor s if and only if both of the above conditions are satisfied. Further, a region is said to be covered by sensor s if and only if every point p in the region is covered by s.

We can equally divide the two-dimensional plane into numerous points and regard any such point as a target in the target region. As a result, we can use the TIS test to check whether a given point is covered by a sensor.
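The two TIS conditions translate directly into code. A self-contained Python sketch (names are illustrative; the sensor's orientation is given as an angle in radians):

```python
import math

def tis_test(sensor_pos, sensor_dir, R_s, alpha, target):
    """Target-in-Sector test: `target` is covered iff (1) its distance to
    the sensor is at most R_s and (2) the angle between the vector
    sensor->target and the sensor's orientation is at most alpha/2."""
    dx = target[0] - sensor_pos[0]
    dy = target[1] - sensor_pos[1]
    dist = math.hypot(dx, dy)
    if dist > R_s:
        return False            # condition (1) fails
    if dist == 0:
        return True             # target sits at the sensor position
    ang = math.atan2(dy, dx) - sensor_dir
    ang = (ang + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(ang) <= alpha / 2                     # condition (2)
```

Running the test over a grid of points, as described above, yields a discrete estimate of a sensor's covered area.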

**Chapter 4 **

### Our Proposed Schemes

**4.1 Problem Statement **

In this chapter, we formally define the problem to be solved and describe our proposed schemes. The problem is as follows: given N randomly deployed mobile directional sensors with sensing range R_s, communication range R_c, and sensing angle α in a given target sensing region, maximize sensor coverage with a minimal amount of total moving distance.

To address the above problem, we make the following assumptions, some of which are similar to those made in [22]:

1) All directional sensors have the same sensing range (R_s), communication range (R_c), and sensing angle α, where α < 2π. Directional sensors within R_c of a sensor are called the sensor's neighboring nodes. The communication range is equal to twice the sensing radius (R_c = 2R_s).

2) Directional sensors can move to arbitrary positions and can locate themselves (e.g., via GPS), but their sensing directions are not rotatable.

3) Each directional sensor knows its location information, which includes sensing region, sensing radius, sensing angle, and field of view. Each sensor can also obtain the location information of its neighboring sensors.

4) The target region is on a two-dimensional plane with no obstacles, but the boundary of the target region is regarded as a wall-like obstacle.

5) We do not consider the energy consumption of the directional sensors, but we view the distance moved as proportional to energy consumption—i.e., the more distance moved, the more energy consumed.
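The assumptions above can be captured in a small data structure used by the later sketches (a sketch only; the class and field names are illustrative, not the thesis's implementation):

```python
from dataclasses import dataclass
import math

@dataclass
class DirectionalSensor:
    x: float
    y: float
    direction: float             # fixed orientation (radians); not rotatable
    R_s: float = 60.0            # sensing radius (assumption 1)
    alpha: float = math.pi / 2   # sensing angle (assumption 1)

    @property
    def R_c(self):
        # assumption 1: communication range is twice the sensing radius
        return 2 * self.R_s

    def is_neighbor(self, other):
        """Sensors within R_c of each other are neighboring nodes."""
        return math.hypot(other.x - self.x, other.y - self.y) <= self.R_c
```

With R_s = 60 m this gives R_c = 120 m, matching the simulation parameters of Chapter 5.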

**4.2 Improving the Coverage of the Virtual Force Algorithm**

To maximize area coverage, we present multiple forces for directional mobile sensors.

More specifically, the resultant forces are composed of four computing schemes, namely the Centroid Push Auxiliary Point (CPA) scheme, Centroid Push Centroid (CPC) scheme, Voronoi Point Pull Centroid (VPPC) scheme, and Neighbor’s Repulsion (NR) scheme. Our proposed computing schemes are all based on Virtual Force.

**4.2.1 ** **Centroid Push Auxiliary Point (CPA) Scheme**

Each directional sensor calculates the auxiliary points of its sensing sector; these auxiliary points are regarded as virtual points by which the sensor can check whether they are covered by a neighboring sensor. We use the TIS test to determine whether an auxiliary point is covered. At the beginning of each round, each sensor computes the auxiliary points of its sensing sector. If a sensor has some overlapping area with a neighbor and some of its auxiliary points are covered by that neighbor, the neighbor's centroid pushes the covered auxiliary points. For example, as shown in Figure 4.1, if S_i overlaps with S_j and AP_i3 is covered by S_j, then S_i computes a vector from G_j to AP_i3.

(Figure legend: directional sensors S_i, S_j; auxiliary points AP_i1, AP_i2, AP_i3; centroids G_i, G_j; arrow: the direction of the virtual force S_j exerts on S_i)

**Figure 4.1 Example of the repulsion model in which S_j exerts a repulsive force on S_i**

The calculation of the sensor's repulsive force can be expressed as

FR_ij = (OAS_ij / SA) × CR × Σ_{AP_ik ∈ APC_ij} u(G_j, AP_ik)  (3)

where APC_ij is the set of auxiliary points of S_i covered by S_j, OAS_ij is the size of the overlapping area of S_i and S_j, SA is the size of the sensing sector, and u(G_j, AP_ik) denotes the unit vector from G_j to AP_ik.

Each sensor stores this force temporarily until all of its force computations are complete.
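Under the reading of Eq. (3) above (overlap ratio, times CR, times the sum of unit push vectors from the neighbor's centroid to each covered auxiliary point), the CPA force can be sketched as follows. This is a sketch under stated assumptions; `cpa_force` and its argument names are illustrative:

```python
import math

def cpa_force(G_j, covered_aps, overlap_ratio, CR):
    """CPA repulsive force sketch: the neighbor's centroid G_j pushes each
    covered auxiliary point of the sensor; the magnitude is scaled by the
    overlap ratio OAS/SA and by CR (assumed form of Eq. (3))."""
    fx = fy = 0.0
    for ax, ay in covered_aps:
        dx, dy = ax - G_j[0], ay - G_j[1]
        d = math.hypot(dx, dy)
        if d > 0:
            fx += dx / d        # unit vector from G_j toward the point
            fy += dy / d
    scale = overlap_ratio * CR
    return (scale * fx, scale * fy)
```

The larger the overlap ratio, the stronger the push, matching the behavior illustrated in Figure 3.7.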

**4.2.2 ** **Centroid Push Centroid (CPC) Scheme**

In the CPC scheme, each directional sensor calculates the centroid of its sensing sector; the centroids are regarded as the corresponding virtual points. We also use the TIS test to determine the size of the overlapping regions: the area of the sensing sector, SA, is equally divided into M small areas, which we regard as points (for a large enough M). At the beginning of each round, each sensor computes the size of its overlapping area (OAS) with each of its neighbors. If a sensor has some overlapping area with a neighbor but none of its auxiliary points are covered by that neighbor, the directional sensor computes the force of the neighbor's centroid on its own centroid. For example, as shown in Figure 4.2, if S_i overlaps with S_j but AP_i1, AP_i2, and AP_i3 are not covered by S_j, then S_i computes a vector from centroid G_j to centroid G_i.

(Figure legend: directional sensors S_i, S_j; auxiliary points; centroids G_i, G_j; arrow: the direction of the virtual force S_j exerts on S_i)

**Figure 4.2 Example of the repulsion model in which S_j exerts a repulsive force on S_i**

The calculation of the sensor's repulsive force can be expressed as

FR_ij = (OAS_ij / SA) × CR × u(G_j, G_i)  (4)

where u(G_j, G_i) denotes the unit vector from G_j to G_i.

Each sensor stores this force temporarily until all of its force computations are complete.

**4.2.3 ** **Voronoi Point Pull Centroid (VPPC) Scheme **

After implementing the above two schemes (CPA and CPC), we find the results still insufficient. To obtain more accurate forces, each sensor constructs a Voronoi polygon using its own centroid and its neighbors' centroids. We use two methods, VOR and centroid, to define the attracting point (ATP) that attracts the sensor's centroid. With the VOR method, the sensor uses the vertex of its Voronoi polygon that is farthest from the sensor's centroid as the ATP, as shown in Figure 4.3(a). With the centroid method, the sensor uses the centroid of its Voronoi polygon as the ATP, as shown in Figure 4.3(b). We pick one of these methods to compute the attracting force for each sensor. For example, S_i finds its Voronoi polygon from its neighbors' centroids and computes ATP_i from this polygon; G_i is then attracted by ATP_i, as shown in Figure 4.4.

(a) the farthest vertex of the Voronoi polygon; (b) the centroid of the Voronoi polygon

**Figure 4.3 Two approaches to determining attracting points (ATPs): (a) using VOR; (b) using the centroid approach**

(Figure legend: directional sensor S_i; centroid G_i; attracting point ATP_i; arrow: the direction of the virtual force by which ATP_i attracts S_i)

**Figure 4.4 Example gravitational model in which ATP_i attracts G_i**

The calculation of the sensor's attracting force is expressed as

FA_i = CR × u(G_i, ATP_i)  (5)

where u(G_i, ATP_i) denotes the unit vector from G_i to ATP_i.

Each sensor stores this force temporarily until all of its force computations are complete. If a sensor has no neighbors with which to construct its Voronoi polygon in the next round, it skips this step.
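The two ATP choices reduce to two small geometric computations on the Voronoi polygon: its centroid (shoelace formula) for the centroid method, and its farthest vertex for the VOR method. A self-contained sketch (function names are illustrative):

```python
def polygon_centroid(vertices):
    """Centroid of a simple polygon via the shoelace formula; this is the
    ATP for the 'centroid' variant of the VPPC scheme."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a))

def farthest_vertex(vertices, centroid):
    """ATP for the 'VOR' variant: the Voronoi-polygon vertex farthest
    from the sensor's centroid."""
    return max(vertices, key=lambda v: (v[0] - centroid[0]) ** 2
                                       + (v[1] - centroid[1]) ** 2)
```

Either returned point can then be plugged into Eq. (5) as ATP_i.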

**4.2.4 ** **Neighbor’s Repulsion (NR) Scheme **

In the NR scheme, we address the case in which a neighbor has no overlap with the sensor. With a higher density of random deployment in the target region, the uncovered area near a sensor is usually very small; in other words, the sensor is likely to have many neighbors. In the above schemes, if a sensor receives several forces, it may make a substantial movement as a result of the combined forces. In that case, the sensor moves away from its original overlapping area but may end up covering the same area as other neighbors, as shown in Figure 4.5. The NR scheme therefore lets a non-overlapping neighbor also exert a repulsive force on the sensor, as shown in Figure 4.6.

(a) initial deployment; (b) resultant of forces; (c) after moving. (Figure legend: sensor with an overlapping area; sensor without an overlapping area; sensor receiving the forces; sensor S_i)

**Figure 4.5 The case in which a sensor still has an overlapping area after moving**

(Figure legend: directional sensors S_i, S_j; centroids G_i, G_j; auxiliary points; arrow: the direction of the virtual force S_j exerts on S_i)

**Figure 4.6 The case involving a neighbor's repulsive force**

The calculation of the sensor's repulsive force is expressed as

FR_ij = CR × u(G_j, G_i)  (6)

where u(G_j, G_i) denotes the unit vector from the neighbor's centroid G_j to the sensor's centroid G_i. Each sensor stores this force temporarily until all of its force computations are complete.

**4.2.5 ** **Moving Algorithm **

In this subsection, we propose the Improving the Coverage of the Virtual Force Algorithm (ICVFA) to maximize sensor coverage. The algorithm uses the CPA, CPC, VPPC, and NR schemes to guide each directional sensor toward a new position: the virtual force scheme applies repulsive forces to the auxiliary points that surround the sensing sector of the directional sensor, using the neighboring directional sensors as the basis of movement.

The fundamental idea of our virtual force scheme is to repel sensor nodes from one another so that they spread from dense to sparse areas. We assume APC_ij is the set of auxiliary points of S_i that are covered by its neighboring sensor S_j; u(G_j, AP_ik) is the unit vector from G_j to AP_ik; SA is the size of the sensing sector; and OAS_ij is the size of the overlapping area of S_i and S_j (each sensor collects its neighbors' locations, sensing angles, and directions, and can compute OAS_ij by applying the TIS test over a two-dimensional array).

If sensor S_i has overlapping coverage with S_j (i.e., OAS_ij ≠ 0), we divide the problem into two cases: (1) some auxiliary point of S_i is covered by S_j; and (2) no auxiliary point of S_i is covered by S_j. If S_i has no overlapping coverage with S_j (i.e., OAS_ij = 0), then S_i does nothing. The virtual force exerted by S_j on S_i is denoted FR_ij, and the repulsion model is defined as

FR_ij = (OAS_ij / SA) × CR × Σ_{AP_ik ∈ APC_ij} u(G_j, AP_ik), if S_i has overlapping coverage with S_j and APC_ij ≠ Ø;

FR_ij = (OAS_ij / SA) × CR × u(G_j, G_i), if S_i has overlapping coverage with S_j and APC_ij = Ø.

Next, we consider the case of heavy overlapping coverage, as in [33]. If a sensor has too much overlapping area, it cannot move away from the overlap effectively. We therefore define a threshold θ to improve the accuracy of the movement direction: if a sensor's overlap ratio in the first case above is greater than θ, the force of the second case is substituted for that of the first case; if it is less than θ, the first-case force is unchanged. We assume the overlap ratio of S_i is θ_i, calculated as

θ_i = (Σ_{S_j ∈ N_i} OAS_ij / SA) × 100  (7)

where N_i is the set of neighboring sensors of S_i.

We also propose a move-back scheme to prevent the sensor's sensing region from falling outside the target region. If an auxiliary point of a sensor is outside the target region, the sensor moves to a new position such that the auxiliary point lies on the boundary of the target region, as shown in Figure 4.7. In Figure 4.7(a), auxiliary points A and B are outside the target region, so we bring them back inside. To do so, the directional sensor finds the auxiliary point farthest beyond each of boundaries b_1 and b_2: AP_i2 is the farthest point beyond boundary b_1, and AP_i3 is the farthest point beyond boundary b_2. The directional sensor therefore moves down by the distance between AP_i2 and boundary b_1, so that AP_i2 lies exactly on the boundary of the target region, as shown in Figure 4.7(b). Auxiliary point AP_i3 is still outside the target region, so the directional sensor moves right by the distance from AP_i3 to boundary b_2; AP_i3 then also lies on the boundary of the target region, as shown in Figure 4.7(c). In Figure 4.7(d), the sensing sector of the directional sensor falls entirely inside the target region after it has moved to the new position.

(Figure panels: (a) before the move-back scheme; (b) move virtual point A back to the target region; (c) move virtual point B back to the target region; (d) after the move-back scheme. Legend: target region; new position to move; directional sensor; moving direction; boundaries b_1, b_2; auxiliary points AP_i2, AP_i3)

**Figure 4.7 Example of the move-back scheme in which a sensor (shown in (a)) moves down (shown in (b)) and right (shown in (c)) to result in location (d)**
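The move-back step amounts to translating the sensor just far enough that each out-of-bounds auxiliary point lands on the region boundary. A sketch, assuming the sensing sector fits inside the region so a shift on each axis is only ever needed in one direction (function and argument names are illustrative):

```python
def move_back(sensor_pos, aux_points, region_w, region_h):
    """Shift the sensor so every auxiliary point lies inside the
    [0, region_w] x [0, region_h] target region (move-back scheme)."""
    shift_x = shift_y = 0.0
    for ax, ay in aux_points:
        if ax < 0:
            shift_x = max(shift_x, -ax)              # push right
        elif ax > region_w:
            shift_x = min(shift_x, region_w - ax)    # push left
        if ay < 0:
            shift_y = max(shift_y, -ay)              # push up
        elif ay > region_h:
            shift_y = min(shift_y, region_h - ay)    # push down
    return (sensor_pos[0] + shift_x, sensor_pos[1] + shift_y)
```

The farthest out-of-bounds point on each axis determines the shift, exactly as in the AP_i2 / AP_i3 example of Figure 4.7.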

The improving-the-coverage virtual force algorithm stops when it reaches the maximum number of rounds. The complete procedure of our virtual force moving algorithm is shown in Figure 4.8.

**Improving the coverage of the virtual force algorithm (ICVFA)**

**Notation:**

APC_ij, AP_ik, OAS_ij, FR_ij, CR, SA, θ, θ_i: defined above
N_i: the set of neighboring sensors of S_i
M_i: moving vector of S_i; F_i^VPPC: VPPC force of S_i; F_i^NR: NR force of S_i
Max_Round: pre-defined maximum number of rounds

**Procedure:**

(1) Enter discovery phase:
  (1.1) set timer to the discovery interval and enter the moving phase upon timeout
  (1.2) broadcast hello after a random time slot
(2) Enter moving phase:
  (2.1) set timer to the discovery interval and enter the discovery phase upon timeout
  (2.2) compute the overlap and useful neighbors
    (2.2.1) for each S_i, compute OAS_ij and θ_i
  (2.3) compute the forces of S_i with OAS_ij ≠ 0
    (2.3.1) M_i ← 0
    (2.3.2) for each S_j in N_i
      if S_i has overlapping coverage with S_j and APC_ij ≠ Ø and AP_ik ∈ APC_ij
        if θ_i ≤ θ
          FR_ij = (OAS_ij / SA) × CR × Σ_{AP_ik ∈ APC_ij} u(G_j, AP_ik); M_i = M_i + FR_ij
        else
          APC_ij ← Ø
        endif
      endif
      if S_i has overlapping coverage with S_j and APC_ij = Ø
        FR_ij = (OAS_ij / SA) × CR × u(G_j, G_i); M_i = M_i + FR_ij
      endif
    (2.3.3) for each S_i, find F_i^VPPC and F_i^NR
      M_i = M_i + F_i^VPPC + F_i^NR
    (2.3.4) perform the move-back scheme
    end for
  (2.4) Max_Round--
  (2.5) done when Max_Round = 0

**Figure 4.8 Procedure of our ICVFA algorithm**
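The round structure of Figure 4.8 can be sketched as a synchronous loop: each round, every sensor computes its resultant force before anyone moves, then all sensors move and apply the move-back scheme. This is a simplified sketch; the discovery/hello phase is abstracted away, and `compute_force` and `move_back` stand in for the combined CPA/CPC/VPPC/NR schemes described above:

```python
def icvfa(sensors, max_round, compute_force, move_back):
    """ICVFA main loop sketch. `compute_force(s, sensors)` returns the
    resultant moving vector M_i of sensor s; `move_back(s)` applies the
    move-back scheme in place. Runs for `max_round` rounds."""
    for _ in range(max_round):
        # compute all resultant forces first (synchronous rounds)
        forces = [compute_force(s, sensors) for s in sensors]
        for s, (dx, dy) in zip(sensors, forces):
            s.x += dx
            s.y += dy
            move_back(s)
```

Computing all forces before moving any sensor matches the round-based adjustment described in the abstract: positions are adjusted each round so coverage improves gradually.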

**Chapter 5 **

### Simulation Results

In this chapter, we evaluate the performance of our proposed schemes. In section 5.1, we first describe our simulation environment and performance metrics. Then, the performance of our proposed scheme is described in section 5.2.

**5.1 Simulation Environment **

We simulated and analyzed the performance of VOR and the centroid approach with different attracting points (see section 4.2.3 above) and no attracting points (i.e., Pure) from the following two aspects:

Coverage: The main goal of a directional sensor network is to monitor the target region; the better the coverage of the target region, the more precisely the network can monitor it.

Moving distance: Instead of evaluating the energy consumption of directional sensor networks directly, we employ the moving distance as the criterion for energy consumption; the less the total moving distance, the less total energy is consumed.
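The coverage metric can be estimated by sampling the target region on a grid and applying the TIS test of Chapter 3 at each point. A sketch (names are illustrative; `covers(sensor, point)` stands in for the TIS test):

```python
def coverage_rate(sensors, region_w, region_h, step, covers):
    """Fraction of grid sample points covered by at least one sensor."""
    total = hit = 0
    y = 0.0
    while y <= region_h:
        x = 0.0
        while x <= region_w:
            total += 1
            if any(covers(s, (x, y)) for s in sensors):
                hit += 1
            x += step
        y += step
    return hit / total
```

A finer `step` gives a more accurate estimate at a higher simulation cost.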

Each simulation was executed 20 times, with results averaged. Our simulation program was written in C# on the .NET platform. We deployed 90 directional sensors in a region of 500 × 500 m² in our simulation. The sensing radius was 60 m, the communication radius was 120 m, and the sensing angle was 90°.

| Parameter | Value |
| --- | --- |
| Network size | 500 × 500 m² |
| Sensing radius (R_s) | 60 m |
| Sensing angle (α) | 90° |
| Number of directional sensors | 90 |
| Number of rounds | 12 |
| Communication radius (R_c) | 120 m |
| Threshold (θ) | 30 |

**Table 5.1 Experimental parameters**

**5.2 Performance of our Proposed Schemes **

Figures 5.1 and 5.2 show the performance of the Centroid, VOR, Pure, and Random variants with different attracting points in terms of accumulated coverage and moving distance, respectively. We observe that the ICVFA (Centroid) approach achieves better coverage than the other approaches, because the directional sensors distribute more evenly by moving toward the centroids of the Voronoi polygons constructed from their neighbors' centroids.

The moving distances of ICVFA (Centroid), ICVFA (VOR), and ICVFA (Pure) change more dramatically with the number of sensors, because a higher density of directional sensors in the target region causes sensors to "push" one another to obtain more space to cover. Since we equate energy consumption with the moving distances of the directional sensors, ICVFA (Centroid) obtains a better coverage rate at the cost of a much higher moving distance.