

1.4 Organization of the Thesis

The remainder of this thesis is organized as follows. In Chapter 2, the probability-based localization method is presented first. The proposed algorithm guarantees robust localization in different environments, and the localization result can be applied to the robot as well as to a user carrying a ZigBee module. In Chapter 3, a monocular visual navigation system is presented to provide navigation ability for the robot with merely a camera. Chapter 4 describes the hardware and the other subsystems implemented to establish the intelligent environment and the mobile robot system. Experimental results of the proposed methods are reported and discussed in Chapter 5.

Chapter 6 summarizes the contributions of this work and provides recommendations for future research.

Chapter 2.

Probability-based Location Aware Design

2.1 Introduction

For an on-demand robotic system, a location-aware module is required to provide location data of objects of interest, of users, and of the mobile robot itself. This information supports various intelligent behaviors of a service robot in day-to-day scenarios. In this thesis, a novel probability-based approach to building a location-aware system is presented. In this approach, the uncertainties and inconsistencies often seen in received signal strength indicator (RSSI) measurements are handled with a minimum of calibration. By taking off-line calibration measurements of a ZigBee sensor network, the inherent signal uncertainty of to-be-localized nodes can be effectively resolved. The proposed RSSI-based algorithm allows flexible deployment of sensor nodes in various environments and has been verified in several typical environments. The rest of this chapter is organized as follows: In Section 2.2, the proposed probabilistic localization method is presented. The performance evaluation of the localization system is described and discussed in Section 2.3.

2.2 Proposed ZigBee Localization Method

To overcome the limitations posed by the uncertainties of RSSI, we suggest a novel probability-based approach that estimates location by modeling the RSSI vs. distance relationship with discrete probability density functions. This approach aims to provide a straightforward way of describing different distributions without losing generality. While most current approaches use mathematical equations to model the RSSI vs. distance relationship, the proposed method adopts the original RSSI vs. distance data to construct its own model. Furthermore, instead of using trilateration or fingerprinting techniques, the location is estimated by accumulating probability histograms observed from several reference nodes on a local probability map. This approach will be shown to have better tolerance against the fluctuations and inconsistencies inherent in the RF signal. It is also flexible with respect to the number of available RSSI measurements and the deployment of the ZigBee nodes.

We first assume that the locations of all reference nodes are known and that the RSSI between each pair of nodes can be received. Fig. 2-1 shows a flow chart of the algorithm. It contains a calibration phase, which models the RSSI vs. distance relationship, and a localization phase, which estimates the location using a filtered 2D probability map, as follows.

2.2.1 Modeling of RSSI vs. Distance Relationship

The calibration phase estimates the distance between nodes based on the RSSI measurement. Instead of using a mathematical path-loss model, the RSSI vs. distance relationship is collected and modeled with a series of probability histograms, which record the different distances measured under a fixed RSSI. Each histogram therefore represents the discrete probability density function (pdf) of distance for a given RSSI value. Let R and D denote the random variables of the RSSI reading and the distance between nodes (in meters), respectively.

Assume the model is tested from distance 0 to L meters at intervals of q meters. The pdf of the distance for an RSSI value equal to r can then be defined as:

$$P(D = d_i \mid R = r) = \frac{h_i}{\sum_{j=1}^{N} h_j}, \qquad d_i \le D < d_{i+1}, \quad 1 \le i \le N \tag{1}$$

where $d_i$ represents the distance of the $i$th interval, $N = L/q$ is the total number of bins in the histogram, and $h_i$ is the value of each bin. Fig. 2-2 shows an example of the probability histogram when RSSI = –88 dBm. This model can easily be implemented onboard the sensor node, since it only needs a single look-up table. Clearly, the number of bins in each histogram limits the resolution of the distance estimate. This is, however, not critical, since the RSSI value is also discrete in practice.
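As an illustration of this look-up-table model, the following minimal Python sketch builds the normalized histograms of (1) from calibration samples; the parameter values (L = 10.2 m, q = 0.3 m) mirror the calibration runs described in Section 2.3, and all names here are illustrative rather than taken from the sensor-node firmware.

```python
import numpy as np

L, q = 10.2, 0.3        # assumed calibration range [m] and bin width [m]
N = round(L / q)        # total number of distance bins per histogram

def build_rssi_model(samples):
    """samples: iterable of (rssi_dbm, distance_m) calibration pairs.
    Returns {rssi: pdf}, where pdf[i] approximates P(d_i <= D < d_{i+1} | R = rssi)."""
    model = {}
    for rssi, d in samples:
        hist = model.setdefault(rssi, np.zeros(N))
        i = min(int(d / q), N - 1)   # index of the distance interval containing d
        hist[i] += 1.0               # h_i: raw count for the i-th bin
    # normalize each histogram so its bins sum to 1, as in (1)
    return {r: h / h.sum() for r, h in model.items()}
```

Since each histogram is just N numbers per observed RSSI value, the model indeed reduces to a single look-up table on the node.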

2.2.2 2D Probability Map

In the next step, the location of the mobile node is estimated using the RSSI values measured from several nearby reference nodes. The concept is to apply a two-dimensional multilateration method with the estimated pdfs. A trilateration method determines the location by finding the intersections of circles given by (1) the centers of the reference nodes and (2) the distances between the mobile node and several (at least three) reference nodes. In the proposed algorithm, however, a mobile node may obtain several possible distance measurements from each reference node. The next step is therefore to find the intersection of the distance measurements with the largest probability from several reference nodes.

Fig. 2-1: The flow chart of the proposed localization algorithm.

To simplify the calculations, the proposed algorithm estimates the 2D probability distribution directly on a 2D map quantized into grids of a fixed size. Specifically, a grid of size N × M is spanned over the localization area, defined by (x, y), 1 ≤ x ≤ N and 1 ≤ y ≤ M. The next step is to derive the location of the mobile node $\mathbf{x}_m = (x_m, y_m)$ from an RSSI vector $\mathbf{R} = (R_1, R_2, \ldots, R_i, \ldots, R_K)$ containing the RSSI values received from K reference nodes with known positions $\mathbf{x}_{r,i}$, 1 ≤ i ≤ K. For each reference node $\mathbf{x}_{r,i}$ with an RSSI value r measured at the mobile node, the 2D pdf P(x, y, i) of the mobile node at position (x, y) can be given as:

$$P(x, y, i) = P\bigl(D = \lVert (x, y) - \mathbf{x}_{r,i} \rVert \,\big|\, R = r\bigr) \tag{2}$$

Fig. 2-2: Probability histogram of distance for RSSI = –88 dBm.

Since the RSSI vs. distance model is known and modeled as a 1D pdf, the 2D probability distribution is equivalent to revolving the 1D pdf P(D | R = r) in (1) around the coordinates of the reference node from 0 to 360 degrees. Fig. 2-3 shows a graphical example of the 2D pdf with one mobile node and one reference node at (0, 0). The 2D pdf indicates the probable location of the mobile node by means of a 2D probability map.
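A minimal sketch of this construction, assuming NumPy and an illustrative grid resolution: for every grid cell, the distance to the reference node selects a bin of the 1D pdf from (1), which is equivalent to revolving that pdf around the node.

```python
import numpy as np

def pdf_to_2d_map(pdf, ref_xy, grid_shape, cell=0.3, q=0.3):
    """pdf: 1D histogram for the measured RSSI; ref_xy: reference node position [m];
    grid_shape: (N, M) grid dimensions; cell/q: grid and bin resolution [m] (assumed)."""
    n, m = grid_shape
    xs = (np.arange(n) + 0.5) * cell                  # x coordinate of each cell center
    ys = (np.arange(m) + 0.5) * cell                  # y coordinate of each cell center
    dist = np.hypot(xs[:, None] - ref_xy[0],
                    ys[None, :] - ref_xy[1])          # distance of every cell to the node
    idx = np.minimum((dist / q).astype(int), len(pdf) - 1)
    return pdf[idx]                                   # P(x, y, i) over the whole grid
```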

2.2.3 Filtered 2D Probability Map

Fig. 2-3: The 2D pdf of a fixed RSSI reading between a reference node and a mobile node.

While the localization method described in the previous section works under many circumstances, some practical issues still need to be resolved. Among them, the most important is the gaps in the histogram created by the inconsistency of the RSSI values. For instance, in Fig. 2-2 (when RSSI = –88 dBm), the probability at a distance of 15 m is zero, which is unlikely in reality given the irregular nature of the radio signal. To realize the goal that "the model only needs to be built once," we need to enhance the proposed model to tolerate these circumstances. Furthermore, for some WSN applications such as searching, it would also be beneficial if the proposed method could provide alternative locations when it fails on the first try. To resolve these issues, we treat the 2D probability map as an image and apply a 2D circular averaging filter H(x, y) with radius r to it. The idea is to smooth the map by filling the gaps, and to avoid any zero probability on the map by taking the probabilities of nearby areas into account. The filter is defined as:

$$H(x, y) = \begin{cases} \dfrac{1}{\pi r^2}, & x^2 + y^2 \le r^2 \\ 0, & \text{otherwise} \end{cases} \tag{3}$$

where the value of r is adjusted according to the variance of the RSSI values. The filtered map is thus stated as:

$$\tilde{P}(x, y, i) = P(x, y, i) * H(x, y) \tag{4}$$

where $*$ denotes 2D convolution.

As a result, the filtered map provides a relatively more robust result than the raw map (see the experiments in Section 2.3). The filtered map also provides other possible locations for the robot or a human to search for the target, should they fail to find it at the location with the highest probability.
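The smoothing of (3) and (4) can be sketched as follows with NumPy and SciPy; the pillbox kernel follows the standard definition of a circular averaging filter, and the radius argument stands in for the variance-driven choice of r described above.

```python
import numpy as np
from scipy.signal import fftconvolve

def circular_kernel(radius):
    """Pillbox filter H of (3), discretized and normalized to sum to 1."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    h = (x**2 + y**2 <= radius**2).astype(float)
    return h / h.sum()

def filter_map(p_map, radius):
    """Filtered map of (4): 2D convolution of P(x, y, i) with H(x, y)."""
    return fftconvolve(p_map, circular_kernel(radius), mode="same")
```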

2.2.4 Location Estimation

Finally, the location of the current mobile node is estimated by accumulating all the 2D probability maps obtained from the reference nodes such that:

$$P(x, y) = \sum_{i=1}^{K} \tilde{P}(x, y, i) \tag{5}$$

The summation over each grid P(x, y) is analogous to the convolution of all the individual distributions. Fig. 2-4 shows an example of how the maps obtained from different reference nodes are accumulated into the final map. As a result, the mobile node is located at the peak value of the probability map, even though the upper-right map suggests a false result. However, because of the natural irregularities of the RSSI, the result may be a map with several peaks. Under those circumstances, the location of the mobile node is not determined by any single peak. Instead, the proposed algorithm determines the location by estimating the geometric center of all the peaks such that:

$$\mathbf{x}_m = \frac{1}{P} \sum_{p=1}^{P} \mathbf{x}_p \tag{6}$$

where $\mathbf{x}_p$ denotes the location of the $p$th peak on the 2D pdf, 1 ≤ p ≤ P. Note that the multilateration method can now be treated as a special case of the proposed algorithm, in which the pdf of each reference node is an impulse function.

Fig. 2-4: An example of how filtered 2D maps obtained from different reference nodes are combined into the final map.
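The final estimation step of (5) and (6) might look as follows; as a simplification, the sketch treats all cells tied at the global maximum as the peaks, whereas a full implementation would detect every local peak of the accumulated map.

```python
import numpy as np

def estimate_location(filtered_maps, cell=0.3, tol=1e-9):
    """filtered_maps: the K per-node maps from (4); cell: grid resolution [m] (assumed)."""
    total = np.sum(filtered_maps, axis=0)            # P(x, y), the accumulated map of (5)
    peaks = np.argwhere(total >= total.max() - tol)  # simplified peak set
    center = peaks.mean(axis=0)                      # geometric center of the peaks, as in (6)
    return (center + 0.5) * cell                     # grid indices -> meters
```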

2.3 Robustness Test of the Proposed Localization Algorithm

This experiment aims to test whether the proposed method can handle ZigBee localization in different environments more robustly than other methods. Three localization methods, Min-Max, ROCRSSI, and the CC2431 location engine, are compared with the proposed method.

Min-Max is a widely adopted range-based localization method [7]. The technique known as Ring Overlapping based on Comparison of Received Signal Strength Indication (ROCRSSI) is a commonly used range-free approach to localization [8]. The CC2431 location engine is embedded in a Texas Instruments CC2431 system-on-chip (SoC) solution for ZigBee/IEEE 802.15.4 wireless sensor networks. The CC2431 implements Motorola's IEEE 802.15.4 radio-location solution, which exploits a maximum likelihood estimation algorithm [6]. To compare the CC2431 location engine with the proposed method, we fitted our modules with CC2431 chips for the experiment.

Fig. 2-5 shows the data flow of the experimental system. The RSSI values of the reference nodes were first estimated by the mobile node. The values were then sent to the observation node and finally transmitted to the host PC via a serial link. The collected RSSI values were then processed with the different algorithms, and the estimates were compared with the ground truth. Three experiments were performed in three different types of environments; Table 2-1 shows their configurations. Localization using the raw 2D map as well as the filtered 2D map was tested individually to further investigate the effectiveness of the proposed method. Fig. 2-6, Fig. 2-7, and Fig. 2-8 provide photos and the beacon locations of these environments. During the calibration measurements of the proposed method, RSSI values at various distances in several different environments of the lab building were recorded. In each measurement, the RSSI values between nodes were collected by manually moving a mobile node away from a reference node along a straight line, from a distance of 0 to 10.2 m at an interval of 0.3 m. The goal was to collect RSSI data under different circumstances. The calibrations of the other methods were performed only in Experiment Environment 1 (EE1). Every ZigBee module was fully charged before each experiment, since the battery level of a ZigBee sensor node affects the RSSI and hence the location estimate.

Table 2-2 shows the overall localization results in EE1. The results show practically no difference in localization error between the proposed method and the CC2431 location engine. Both outperform the Min-Max and ROCRSSI methods, which is similar to the results reported in [7]. Notice that the parameters of the signal propagation model used in the CC2431 location engine were tuned to maximize the accuracy in this experiment, while our approach uses the sensor model established beforehand at other calibration sites.

Fig. 2-5: Data flow of the ZigBee localization experiments.

In Experiment Environments 2 (EE2) and 3 (EE3), the same RSSI models and parameters were applied as in EE1. Tables 2-3 and 2-4 show the overall localization errors of the experiments in EE2 and EE3, respectively. Since these two environments differ from EE1, the RSSI-based localization models of the other methods are no longer suitable, and those methods perform more poorly in the new environments. In contrast, the proposed method already accounts for possible changes in the environment and performed more robustly across the different environments.

2.4 Summary

This chapter presents a novel method for localizing a ZigBee node in a WSN. The probability-based approach takes the uncertainties of RSSI-based distance estimation into account. The method is independent of the sensor node distribution and the environment configuration and therefore minimizes the calibration burden in practical applications. The performance of the proposed algorithm has been evaluated by experiments in three different environments. The experimental results show that the average error of the implemented location-aware system is 1.7 m, despite the much higher range uncertainty of the raw RSSI values. This accuracy can effectively support service robotic applications, especially when properly combined with a vision system. For instance, the localization system can provide the probable area of a user calling the robot for service or of an emergency situation.

Table 2-1: Configurations of Test Environments

Environment | Type | Test points | Area (m²) | Beacons
EE1 | Room | 26 | 49 | 8
EE2 | Corridor | 21 | 30 | 6
EE3 | Hallway | 60 | 600 | 8


Fig. 2-6: (a) Experiment Environment 1 (EE1): lab 622 and the corridor; (b) the locations of the deployed ZigBee beacons (filled circles).

Table 2-2: Estimation Error Comparison: EE1

Method | Average Error (m) | Max Error (m) | Standard Deviation (m)
Min-Max | 2.29 | 3.4 | 0.77


Fig. 2-7: (a) Experiment Environment 2 (EE2): the corridor of the lab building; (b) the locations of the deployed ZigBee beacons (filled circles).

Table 2-3: Estimation Error Comparison: EE2

Method | Average Error (m) | Max Error (m) | Standard Deviation (m)
Min-Max | 2.46 | 3.2 | 1.14
ROCRSSI | 2.76 | 3.1 | 1.22
CC2431 | 1.82 | 3.5 | 1.5
Proposed (2D Map) | 1.32 | 3.0 | 0.73
Proposed (Filtered Map) | 1.21 | 3.2 | 0.92


Fig. 2-8: (a) Experiment Environment 3 (EE3): the lobby of the lab building; (b) the locations of the deployed ZigBee beacons (filled circles).

Table 2-4: Estimation Error Comparison: EE3

Method | Average Error (m) | Max Error (m) | Standard Deviation (m)
Min-Max | 3.7 | 6.3 | 1.29
ROCRSSI | 3.1 | 5.2 | 1.61
CC2431 | 2.2 | 7.58 | 1.74
Proposed (2D Map) | 1.5 | 4.2 | 0.89
Proposed (Filtered Map) | 1.36 | 3.21 | 0.73

Chapter 3.

Monocular Visual Navigation Design

3.1 Introduction

Obstacle detection and avoidance are essential for mobile robot navigation. This chapter proposes a monocular vision-based obstacle avoidance design for mobile robot navigation. A novel image processing procedure is developed to estimate the distance between the robot and obstacles based on the inverse perspective transformation (IPT) of the image plane. A robust image processing solution is proposed to detect and segment the drivable ground area within the camera view. The proposed method integrates robust feature matching with adaptive color segmentation for plane estimation and tracking, to cope with variations in illumination and camera view. After IPT and ground region segmentation, distance measurements similar to those of a laser range finder are obtained for mobile robot obstacle avoidance and navigation. The merit of this algorithm is that the mobile robot acquires path-finding and obstacle-avoidance capability using only a single monocular camera. The rest of this chapter is organized as follows: The concept of the system is described in Section 3.2. Section 3.3 details the algorithm for ground plane and obstacle detection. The calibration experiment for distance measurement is presented in Section 3.4. Section 3.5 shows simulation results using synthetic images. Experimental results using the mobile robot are reported in Chapter 5.

3.2 System Overview

In order to estimate range information for mobile robot navigation control using merely a monocular image sequence, the basic idea is to recover the world coordinates of the pixels belonging to the ground plane from the image plane by applying IPT. For common indoor environments, we can make the following assumptions: (1) the ground is a plane; (2) the robot moves parallel to the ground; and (3) the camera is fixed on the robot. The distance between the camera and the ground plane is thus fixed, and the IPT of the ground plane can be determined beforehand. With the pinhole camera model, applying IPT can be simplified by using plane-to-plane homography [38], as shown in Fig. 3-1. There exists a 3-by-3 homography matrix H such that

$$\mathbf{p} \cong H\mathbf{q},$$

where $\mathbf{p}$ is the (homogeneous) coordinate of a point on the ground plane relative to the camera and $\mathbf{q}$ is the corresponding coordinate in the image plane. The homography matrix H can be found from the relative coordinates of four points on the ground plane and their corresponding coordinates in the image. In the calibration phase, several sets of corresponding points are used to estimate a precise H.

Fig. 3-1: Homography of coplanar points between image plane and ground plane.


After calibration, the projected images can then provide the relative position of each pixel on the ground in world coordinates [33,34]. Off-ground objects, on the other hand, will be distorted by the transformation. A recalibration is necessary if the position or angle of the camera is changed.
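As an illustration, the calibration and the per-pixel IPT can be sketched with OpenCV as follows; the point correspondences are placeholders, and in practice more than four pairs are collected to estimate a precise H.

```python
import cv2
import numpy as np

# Hypothetical image <-> ground correspondences from the calibration phase.
img_pts = np.float32([[320, 470], [420, 470], [300, 360], [440, 360]])   # q, image plane [px]
gnd_pts = np.float32([[-0.3, 1.0], [0.3, 1.0], [-0.6, 2.0], [0.6, 2.0]]) # p, ground plane [m]

H, _ = cv2.findHomography(img_pts, gnd_pts)   # p ~ Hq; more pairs yield a more precise H

def pixel_to_ground(u, v):
    """Map an image pixel (u, v) to ground-plane coordinates (meters) via H."""
    q = np.float32([[[u, v]]])                # shape (1, 1, 2), as perspectiveTransform expects
    return cv2.perspectiveTransform(q, H)[0, 0]
```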

Due to these assumptions, IPT can only map pixels on the ground plane. In practical obstacle avoidance applications, the pixels belonging to the ground plane must be found on the fly in order to determine the distances to obstacles. The proposed system therefore aims to solve this problem via multiple visual cues. Fig. 3-2 illustrates the architecture of the proposed monocular visual navigation system (MonoVNS). In this design, two parallel procedures are launched simultaneously after the image preprocessing step. In the ground feature detection process, the system extracts all the features in the current image at time step t and matches them to the features observed in the previous N frames, where N is the number of frames that can be processed in one second. According to the correspondences and the pre-determined ground features, the ground homography and the ground features in the newly acquired image can be found. The system then uses the homography matrices among consecutive frames to classify all other features as on or off the ground. Meanwhile, in a parallel process, images are segmented into patches according to their color homogeneity. These segments are then classified into ground or off-ground regions by comparison with the warped images. The detailed algorithm is described in Section 3.3.
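One way the homography-based feature classification could be sketched, assuming OpenCV: RANSAC stands in for the robust homography estimation step, under the assumption that ground-plane matches dominate so that the RANSAC consensus corresponds to the ground; the threshold value is illustrative.

```python
import cv2
import numpy as np

def classify_ground_features(pts_prev, pts_cur, reproj_thresh=3.0):
    """pts_prev, pts_cur: (N, 2) float32 arrays of matched feature coordinates
    in two frames. Returns the estimated ground homography and a Boolean label
    per match (True -> consistent with the ground plane)."""
    H, _ = cv2.findHomography(pts_prev, pts_cur, cv2.RANSAC, reproj_thresh)
    proj = cv2.perspectiveTransform(pts_prev.reshape(-1, 1, 2), H).reshape(-1, 2)
    err = np.linalg.norm(proj - pts_cur, axis=1)   # reprojection error per feature
    return H, err < reproj_thresh                  # off-ground points violate the homography
```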

Once the ground and off-ground areas are segmented, the coordinates of the ground area (relative to the camera) can be estimated by applying IPT to the acquired images. The images are virtually rectified into a downward-looking pose. The range information can then be obtained by a ray-casting operation. Ray casting is a procedure that mimics a laser range finder (LRF) using the rectified image. A set of virtual rays is cast from the camera center, advancing one pixel at a time, until a pixel belonging to an off-ground area is reached, as shown in Fig. 3-3.

Fig. 3-2: The architecture of the proposed MonoVNS.


The distance of a ray to an object can thus be estimated as the two-norm of the corresponding pixel coordinates. The distances of these rays then represent the distances to obstacles, just like those of a typical LRF, but within a smaller view angle. This information can be fed to any LRF-based obstacle avoidance scheme, making it work in a similar manner.
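A minimal sketch of the ray-casting operation on the rectified image, assuming a Boolean ground mask produced by the segmentation stage; the ray count, field of view, and range limit are illustrative parameters rather than values from this design.

```python
import numpy as np

def ray_cast(ground_mask, origin, n_rays=61, fov_deg=60.0, max_range=300):
    """ground_mask[y, x] is True where the rectified pixel is drivable ground;
    origin is the camera center in pixel coordinates. Returns one range per ray [px]."""
    h, w = ground_mask.shape
    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_rays))
    ranges = np.full(n_rays, float(max_range))
    for k, a in enumerate(angles):
        for step in range(1, max_range):
            x = int(origin[0] + step * np.sin(a))   # march outward along the ray
            y = int(origin[1] - step * np.cos(a))   # rays advance away from the camera
            if not (0 <= x < w and 0 <= y < h) or not ground_mask[y, x]:
                ranges[k] = step                    # first off-ground or out-of-view pixel
                break
    return ranges
```

Multiplying each returned range by the known meters-per-pixel scale of the rectified image yields metric distances, matching the two-norm computation described above.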

3.3 Ground Plane and Obstacle Detection

3.3.1 Ground Feature Detection

In order to obtain robust matching of corresponding points, the feature extractor should be invariant to rotation, scale, illumination, and image noise. In previous related works, the Harris and FAST [39] corner detectors have been widely adopted [28-31], but the robustness of these descriptors is relatively limited in practical applications. In the last decade, there have been many advances in scale- and rotation-invariant feature extractors, such as the Scale-Invariant Feature Transform (SIFT) [40], in which an array of image gradient histograms is used as a descriptor instead of raw image patches. However, the heavy computation

Fig. 3-3: An example of ray casting in an image after IPT. The white area indicates the detected ground region.
