
4.5 Vehicle Navigation Strategy

4.5.4 Proposed navigation strategy and process

Before the vehicle starts a navigation session, it estimates the distances to the nearby walls which lie within the relevant angular scanning ranges. Based on these distances, the system decides whether the vehicle can turn to the left, turn to the right, or go forward, and then issues an appropriate control instruction to drive the vehicle safely.

The major rules for controlling the vehicle are described as follows.

1. For the imaging system to work properly, the vehicle must keep an appropriate distance from the wall.

2. The vehicle may go forward until the left wall is no longer detected or until it is blocked by the frontal wall.

Both situations are illustrated in Figure 4.10, and the algorithm for issuing a control instruction to the vehicle in each navigation cycle is described as follows.

Note that the distance DLB is assigned a value only under the condition that the vehicle has passed the corner point, as shown in Figure 4.10(b). To gather enough information, the vehicle is expected to go forward a fixed distance moveDistfix whenever it is movable, and then collect information. Refer to Figure 4.10 for the notations used in the algorithm: DL, DF, and DR are the distances between the vehicle and the left wall, the frontal wall, and the right wall, respectively, when the corresponding walls are detected. The threshold values passLengthnear and passLengthfar are used to decide whether the vehicle can move forward or not.


Algorithm 4.4: Control decision for vehicle navigation.

Input: The distances DL, DF, DR, and DLB, and pre-selected length or distance values passLengthnear, passLengthfar, moveDistfix, safeRangeF, and safeRangeLB, used as threshold values.

i. If yes, go forward for the distance of moveDistfix and then exit;

ii. Otherwise, go forward for the distance of DF − safeRangeF and then exit.

Step 3. Perform the following steps to check whether or not the vehicle should turn around.

3.1 Check if DLB ≥ safeRangeLB:

i. if yes, turn to the left for the angle of 90°, and then exit;

ii. otherwise, go forward for the distance of safeRangeLB − DLB, then turn to the left for the angle of 90°, and finally exit.

3.2 Turn to the right for the angle of 90°, and then exit.
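For concreteness, the following Python sketch implements the recoverable parts of Algorithm 4.4. Since Steps 1 and 2 were partially lost in the source, the forward-motion branch is only a plausible reading of them; the function name and the returned command tuples are illustrative, not part of the thesis.

```python
def navigation_decision(DL, DF, DR, DLB,
                        passLength_near, passLength_far,
                        moveDist_fix, safeRange_F, safeRange_LB):
    """One control decision per navigation cycle (sketch of Algorithm 4.4).

    A distance is None when the corresponding wall is not detected.
    passLength_near, passLength_far, and DR belong to the partially lost
    forward-motion steps; they are kept in the signature for fidelity
    but are unused in this sketch.
    """
    # Plausible reconstruction of Steps 1-2: advance the fixed distance
    # moveDist_fix while the frontal wall is far enough away.
    if DF is None or DF - safeRange_F >= moveDist_fix:
        return ('forward', moveDist_fix)
    if DF - safeRange_F > 0:
        return ('forward', DF - safeRange_F)       # Step 2(ii)

    # Step 3: blocked by the frontal wall, so turn around.
    if DLB is not None:                            # the corner point was passed
        if DLB >= safeRange_LB:                    # Step 3.1(i)
            return ('turn_left_deg', 90)
        # Step 3.1(ii): move clear of the corner, then turn left.
        return ('forward_then_turn_left', safeRange_LB - DLB, 90)
    return ('turn_right_deg', 90)                  # Step 3.2
```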


Chapter 5

Automatic Construction of House Layout by Autonomous Vehicle Navigation

5.1 Introduction

The goal of this study is to construct a 3-D house layout of the room space. In Chapter 4, we described how the vehicle conducts navigation and gathers environment information, including the detected mopboard edge points and the omni-images taken by the imaging system. With the mopboard edge points, we propose in this study a global optimization method to generate a floor layout that fits them; the details are described in Section 5.2. In addition, objects on walls, such as doors and windows, are also of concern. Based on the floor layout, we analyze the omni-images to detect such objects on the walls. The entire process is described in Section 5.3.

5.2 Proposed Global Optimization Method for Floor-layout Construction

5.2.1 Idea of proposed method

In order to create the floor layout, we use a straight line to fit the detected edge points which belong to the same wall by the LSE curve fitting scheme, and we do this for each wall respectively. However, due to odometer errors of the vehicle and possible position estimation errors from the imaging system, the adjacent fitted lines may not be mutually perpendicular, as shown in Figure 5.1, even though the adjacent walls actually are.

One way out, as adopted in this study, is to fit the edge points from the beginning with straight lines that are mutually perpendicular or parallel. The important task then is how to fine-tune all of the fitting lines so that adjacent fitting lines are not only mutually perpendicular but also fit the edge points with the minimum error. A global LSE optimization method is proposed to solve this problem, and the details are described in the following section.

Figure 5.1 The edge points of different walls (in different colors) and the fitting lines.

5.2.2 Details of proposed method

As mentioned, it is reasonable to use two mutually perpendicular lines to fit the edge points which belong to two adjacent, mutually perpendicular walls. This means that if we choose one fitting line and adjust its direction, the directions of the other fitting lines will change accordingly. As a result, using the fitting lines to fit the edge points will incur fitting errors. Based on this idea, we propose a global LSE optimization method to minimize the fitting error, and the entire process is described in the following algorithm.

Algorithm 5.1: Floor-layout construction.

Input: n sets of detected mopboard edge points, S1, S2, …, Sn, of n walls, W1, W2, …, Wn.

Output: A floor layout.

Steps:

Step 1. (Line fitting for each wall) Fit the points in Sk of Wk with a line Lk by the LSE curve fitting scheme and compute its mean point Mk, where 1 ≤ k ≤ n.

Step 2. (Optimal fitting with respect to a chosen fitting line) Perform the following steps to obtain globally optimal fitting with respect to a selected fitting line of a certain wall.

2.1 Choose a fitting line Lk, starting from k = 1 until k = n, and compute its direction angle θk.

2.2 Adjust θk by adding a small angle Δθ (initially adding −10° and then adding Δθ each time until +10° is reached), resulting in θk′, to generate a new line Lk′ such that Lk′ passes through the point Mk with direction angle θk′.

2.3 (Generation of other fitting lines) Generate a sequence of lines Lk+1′, Lk+2′, …, L(k+n−2) mod n′, with each Lk+i′ perpendicular to its preceding line Lk+(i−1)′ and passing through its original mean point Mk+i (i.e., every two neighboring lines are mutually perpendicular).

2.4 (Computing the sum of fitting errors of all lines) Compute the error ei of fitting all the points in Si of Wi to line Li′ obtained in the last two steps (Steps 2.2 and 2.3), and sum the errors up to get a total error ek′ for Lk′.

2.5 Repeat Steps 2.2 through 2.4 until the range of angular adjustment, (−10°, +10°), is exhausted.


2.6 Find the minimum of all the total errors ek′ and denote it as emin,k.

2.7 Repeat Steps 2.1 through 2.6 to compute emin,k for all k = 1, 2, …, n.

2.8 Find the global minimum error emin,ko as the minimum of all the emin,k.

Step 3. Take all the lines with adjusted angles corresponding to emin,ko as the desired floor layout.

An example of the results yielded by the above algorithm is shown in Figure 5.2.

Figure 5.2 Illustration of floor-layout construction. (a) Original edge point data. (b) A floor-layout of (a).

5.3 Analysis of Environment Data in Different Omni-images

5.3.1 Introduction

Creating a floor layout alone is insufficient for use as a 3-D model of the indoor room space. The objects on walls, such as doors and windows, must also be detected and drawn so as to appear in the desired 3-D room model. Therefore, it is indispensable to analyze the omni-images taken by the imaging system to extract such objects.


An omni-image covers a large range, but the farther away a point is, the larger its estimation error becomes, so retrieving the desired information accurately is an important task. For this, we propose a method to determine a scanning range, specified by two direction angles, for each pair of omni-images based on the floor-layout edge equation. The details will be described in Section 5.3.2. With the scanning region of each omni-image, we can then retrieve appropriate 3-D information from different omni-images.

Note that each object detected within the scanning region of an omni-image is regarded as an individual one. Therefore, adjacent objects which appear in consecutive omni-images taken by the same omni-camera (the upper or the lower one) must be combined into one based on, e.g., their position information.

Also, due to the configuration of the imaging system, some objects on the wall, such as windows and doors, may appear in both omni-images of a pair (the upper and the lower ones). To handle this, we propose a method to recognize doors and windows from the combined objects; the method is described in Section 5.3.3.

5.3.2 Proposed method for retrieving accurate data

As shown in Figure 5.3, for each wall Wk there are multiple navigation imaging spots N1, N2, …, Nm at which omni-images are taken as environment information. In order to determine a scanning region for each omni-image taken at N1 through Nm, we calculate the midpoint Mij of Ni and Nj for all i = 1, 2, …, m − 1 and j = i + 1. Then, we project each midpoint Mij onto the line Lk which results from the floor-layout construction process mentioned previously, yielding a projection point Mij′. Based on the positions of the projection points and the navigation imaging spots, we can determine the cover ranges. With the cover range and the direction of the vehicle at the navigation spot, we can determine the scanning range with two direction angles.


However, if we conduct object detection within each scanning range directly, the mopboard will be detected as an object on the wall. Because the objects to be detected are doors and windows, and the mopboard is regarded as a feature of the wall itself, the mopboard should not appear among the detected objects. For this reason, we define a scanning region which excludes the mopboard from the scanning range. To exclude the mopboard part, we use the pano-mapping table in a reverse way to estimate the image coordinates of a concerned space point at a known position.

More specifically, we use the distance to a detected edge point and a pre-defined height of the mopboard to calculate the elevation angle, with respect to the CCS1, of a point on the top edge of the mopboard. Then, we can look up the pano-mapping table to find its radius in the omni-image. As a result, the mopboard part can be excluded from the omni-image, forming the desired scanning region, and the object detection process can be carried out within this region. The detailed algorithm is described as follows, where Dk,θ is the distance to the detected mopboard edge point in the direction of angle θ with respect to the CCS1, and k is the index of the imaging spot of each wall.
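The reverse lookup can be sketched as follows. The camera-height parameter and the table format (elevation-angle/radius pairs produced at calibration time) are assumptions made for illustration, since the thesis describes the lookup only conceptually.

```python
import math

def mopboard_top_radius(D, H, cam_height, pano_table):
    """Image radius of the mopboard's top edge at horizontal distance D.

    pano_table: (elevation_deg, radius_px) pairs sorted by elevation angle;
    this pair-list format for the pano-mapping table is an assumption.
    """
    # Elevation angle of the mopboard top (height H above the floor)
    # relative to the camera origin of CCS1.
    alpha = math.degrees(math.atan2(H - cam_height, D))
    # Reverse lookup with linear interpolation between table entries.
    for (e0, r0), (e1, r1) in zip(pano_table, pano_table[1:]):
        if e0 <= alpha <= e1:
            t = (alpha - e0) / (e1 - e0)
            return r0 + t * (r1 - r0)
    raise ValueError("elevation angle outside calibrated range")
```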

Algorithm 5.2: Determination of scanning region of a wall.

Input: The floor-layout line Fk of Wk; omni-images I1,1, I1,2, …, I1,m and I2,1, I2,2, …, I2,m; navigation imaging spots N1, N2, …, Nm; the distances Dt,s estimated from mopboard detection, where 1 ≤ t ≤ m and 1 ≤ s ≤ 360; a constant H which is the height of the mopboard; and pre-defined scanning radii rmax and rmin.

Output: The scanning regions R1,i for I1,i and R2,i for I2,i, for all i = 1, 2, …, m.

Steps:

Step 1. (Midpoint calculation and projection) Perform the following steps to calculate the midpoints.

1.1 Calculate the midpoint Mij of Ni and Nj, where j = i + 1.

1.2 Project Mij onto the floor-layout edge Fk, and find the projection point Mij′.

1.3 Repeat Steps 1.1 through 1.2 for all i from 1 to m − 1.

Step 2. (Determination of cover range) Perform the following steps to determine the cover range.

2.1 Find the consecutive projection points which are on Fk, assuming that the indices of the consecutive points Mij′ are found to be p through q:

2.2 For all i = p, p + 1, …, q − 1, check i according to the following rules.

(A) If i = p, then form the cover range Ri by Ni, Mij′, and the endpoint of Fk which is nearest to Mij′, where j = i + 1;

(B) else, if i = q − 1, then form the cover range Ri by Nq, Msi′, and the endpoint of Fk which is nearest to Nq, where s = i − 1;

(C) otherwise, form the cover range Ri by Nj, Mij′, and Mjs′, where j = i + 1 and s = j + 1.

Step 3. (Determination of scanning region in omni-image) Perform the following steps to determine a region excluding the mopboard portion.

3.1 Choose I1,k and I2,k, starting with k = 1 until k = m, and the ranges R1,k and R2,k with two direction angles, respectively.

3.2 Choose the distance Dk,s, starting with s = 1.

3.3 Check the value of s: if s is in the interval of the two direction angles, then compute the radii r1k,s and r2k,s by using H and the pano-mapping table in a reverse way, respectively; otherwise, do nothing.

3.4 Increase s by 1 and check its value: if s ≤ 360, then go to Step 3.2; otherwise, continue.

3.5 Generate the scanning regions R1,k and R2,k, which are bounded by the radii r1k,s and r2k,s and by the pre-defined values rmax and rmin, respectively, for all s such that 1 ≤ s ≤ 360.

3.6 Repeat Steps 3.1 through 3.5 until all the scanning regions are determined.
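As an illustration of Step 1, the midpoint projection is a standard orthogonal projection onto the layout edge. The sketch below assumes the edge Fk is represented by an anchor point and a unit direction vector, a representation chosen here for illustration.

```python
import numpy as np

def project(p, anchor, direction):
    """Orthogonally project point p onto the line through `anchor`
    with unit direction vector `direction`."""
    return anchor + np.dot(p - anchor, direction) * direction

def midpoint_projections(spots, anchor, direction):
    """Project midpoints of consecutive imaging spots N1..Nm onto the
    floor-layout edge Fk (Steps 1.1-1.3 of Algorithm 5.2)."""
    spots = np.asarray(spots, dtype=float)
    mids = (spots[:-1] + spots[1:]) / 2.0        # M_ij with j = i + 1
    return [project(m, anchor, direction) for m in mids]
```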

5.3.3 Proposed method for door and window detection

With the above-mentioned scanning region, we can detect the objects in each omni-image by the proposed two-way angular scanning scheme, which extends the method in Step 5 of Algorithm 4.1. The two-way scheme is used instead of the original one described in Algorithm 4.1 because it achieves a better estimation. In the two-way angular scanning scheme, we first perform Steps 1 through 4 of Algorithm 4.1 for the pair of images; then, within the scanning regions, we traverse along the line Lθ both from the outer boundary to the inner one and from the inner boundary to the outer one, and in each direction we find the first black pixel which is followed by 10 or more consecutive black pixels. The first black pixel found in each direction is regarded as an element of the boundary of the detected object, as shown in Figure 5.4.

Figure 5.4 Illustration of the boundary (in red point) of detected object.
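A minimal sketch of the two-way scan over a single scanning line is given below; it assumes the pixels along Lθ have already been sampled between the region's inner and outer boundaries into a boolean sequence (True for a black pixel), which is our own framing of the step.

```python
def first_run_start(pixels, run_len=10):
    """Index of the first black pixel that begins a run of at least
    run_len consecutive black pixels, or None if no such run exists."""
    run, start = 0, None
    for i, black in enumerate(pixels):
        if black:
            if run == 0:
                start = i
            run += 1
            if run >= run_len:
                return start
        else:
            run = 0
    return None

def two_way_scan(pixels, run_len=10):
    """Boundary pixels found by scanning outer-to-inner and inner-to-outer."""
    outer = first_run_start(pixels, run_len)          # outer -> inner pass
    rev = first_run_start(pixels[::-1], run_len)      # inner -> outer pass
    inner = len(pixels) - 1 - rev if rev is not None else None
    return outer, inner
```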

According to the two pixels first found by traversing along the scanning line in opposite directions, we utilize the information within the segment of the scanning line bounded by them to determine whether the two pixels are boundary points of the object or not. Besides, an object occupies the omni-image over a certain continuous angular interval and is thus detected along consecutive scanning lines. We utilize this property to combine the objects detected from consecutive scanning lines into an individual one, as shown in Figure 5.5. In this way, there may be several individual objects in the scanning region. Finally, from the pixels which are boundary points of an object, and by the 3-D position estimation method described in Section 4.3.2 using the pano-mapping table, the average heights of the bottom and the top of the object can be estimated.

Figure 5.5 Illustration of the detected objects within the scanning region.

The major rules for combining the detected objects are as follows.

1. First, we traverse along the scanning line Lθ within the scanning region R from opposite directions and find the pixels pi and po of the object boundaries, respectively.

2. By counting the number nb of black pixels and the number nsum of all pixels between the two detected pixels along Lθ, if found, and by comparing the ratio nb/nsum with a threshold, we can determine whether the pixels pi and po are on the boundary of the object or not.

3. In R, if the pixels pi and po of every scanning line Lθ in an angular interval are all on the boundary of an object, then the largest such interval is used to describe the same object. Besides, in R, the combined objects may be combined again according to their positions.

With the above rules, all individual objects in the same scanning region of an omni-image can be determined.
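These rules can be sketched as a per-line ratio test followed by grouping of consecutive qualifying scanning lines. The record layout and the 0.8 threshold below are illustrative assumptions, as the thesis does not fix the threshold value.

```python
def find_objects(lines, ratio_thresh=0.8):
    """Group scanning lines whose boundary-pixel pair passes the
    black-pixel ratio test into individual objects.

    lines: per-angle records (theta, p_i, p_o, n_b, n_sum), ordered by
    angle; returns a list of (theta_start, theta_end) angular intervals.
    """
    objects, current = [], None
    for theta, p_i, p_o, n_b, n_sum in lines:
        on_boundary = (p_i is not None and p_o is not None
                       and n_sum > 0 and n_b / n_sum >= ratio_thresh)
        if on_boundary:
            # extend the open interval, or open a new one
            current = (current[0], theta) if current else (theta, theta)
        elif current:
            objects.append(current)       # close the angular interval
            current = None
    if current:
        objects.append(current)
    return objects
```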


However, an object on a wall may span two or more scanning regions of the corresponding source omni-images taken by the same (upper or lower) omni-camera.

Besides, an object may also appear in both omni-images of a pair simultaneously. For these reasons, we have to combine the detected individual objects which belong to the same physical object. Here, we denote by O1 and O2 the sets of individual objects detected from the omni-images taken by the lower and upper omni-cameras, respectively. As shown in Figure 5.6, we first conduct the combination task for all the objects in O1 and O2 according to their positions, forming two new sets O1′ and O2′, respectively. Then we conduct the reorganization task. The process is described in the following algorithm.

Algorithm 5.3: Object reorganization.

Input: Sets O1′ and O2′ including the combined objects on walls.

Output: The window object set OW and the door object set OD.

Steps:

Step 1. (Object recognition for each wall) For each floor-layout edge Fk, perform the following steps to recognize the objects on wall Wk.

1.1 Choose an object o2,i from O2′ and try to find an object in O1′ at a similar location.

(1) If such an object o1,j is found, then check whether it is connected to the mopboard by its location:

i. if yes, then recognize o1,j together with o2,i as a door, and add it to OD;

ii. otherwise, recognize o1,j together with o2,i as a window, and add it to OW.

(2) If such an object is not found, recognize o2,i as a window, and add it to OW.


1.2 Repeat Step 1.1 until the objects of O2′ are exhausted.

1.3 Recognize the remaining objects in O1′ as windows, and add them to OW.

Step 2. Repeat Step 1 until all the floor-layout edges are exhausted.
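The reorganization reduces to a matched-pair classification. The sketch below represents each combined object by its horizontal span on the wall and its bottom height; the interval-overlap test used as the "similar location" criterion and the mopboard-connection tolerance are illustrative assumptions.

```python
def overlaps(a, b):
    """True if two wall-coordinate intervals (x_start, x_end) overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

def reorganize_objects(O1, O2, mopboard_height, tol=5.0):
    """Split combined objects into doors and windows (sketch of Algorithm 5.3).

    Each object is a dict {'span': (x0, x1), 'bottom': height}.
    O1 / O2: combined objects from the lower / upper omni-camera images
    of one wall, i.e. the sets written O1' and O2' in the text.
    """
    OW, OD, matched = [], [], set()
    for o2 in O2:                                          # Step 1.1
        o1 = next((o for o in O1
                   if id(o) not in matched and overlaps(o['span'], o2['span'])),
                  None)
        if o1 is not None:
            matched.add(id(o1))
            # Connected to the mopboard (reaches the floor region) => door.
            if o1['bottom'] <= mopboard_height + tol:      # Step 1.1(1)i
                OD.append((o1, o2))
            else:                                          # Step 1.1(1)ii
                OW.append((o1, o2))
        else:                                              # Step 1.1(2)
            OW.append((o2,))
    OW.extend((o,) for o in O1 if id(o) not in matched)    # Step 1.3
    return OW, OD
```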


Figure 5.6 Illustration of object combinations. (a) Scanning regions. (b) Individual objects. (c) Combined objects for each omni-camera. (d) Reorganization of objects.


Chapter 6

Experimental Results and Discussions

6.1 Experimental Results

In this chapter, we show some experimental results of the proposed automatic house-layout construction system based on autonomous vehicle navigation by mopboard following in indoor environments. The experimental environment was an empty indoor room with mopboards at the bottoms of the walls, located in the Computer Vision Laboratory of the Department of Computer Science in the Engineering 3 Building at National Chiao Tung University, Taiwan; it is shown in Figure 6.1(a). For convenience, we denote the walls by the notations shown in Figure 6.1(b).


Figure 6.1 The experimental environment. (a) Side view. (b) Illustration of the environment.

At first, the vehicle was placed appropriately near wall W1 with its direction not precisely parallel to it. When the vehicle started a navigation cycle, it adjusted its direction to keep its navigation path parallel to each wall by utilizing the imaging system to conduct the mopboard detection process described previously.

Figure 6.2 shows an example of the resulting images of mopboard detection, in which the red dots mark the detected mopboard edge points. After this action, the imaging system gathered the environment data and conducted the mopboard detection process again to estimate the locations of the mopboard edge points. According to the estimated distances between the vehicle and the walls, the vehicle then decided to go forward or turn around. Figure 6.3(a) shows the estimated mopboard edge points covering two adjacent walls, and Figure 6.3(b) shows the result of the pattern classification procedure.

After the vehicle finished the navigation session, the floor layout was constructed with the globally optimal fitting method described previously. Figure 6.4(a) shows the estimated mopboard edge points of all walls, and Figure 6.4(b) shows the resulting floor layout.

Figure 6.2 Detected mopboard edge points.



Figure 6.3 Classification of mopboard edge points. (a) The detected mopboard points. (b) Result of the classification (the points belonging to the upper wall).

In Table 6.1, we show the percentage errors between the actual widths of the walls and the values estimated in 9 navigation sessions using the proposed system.

From the table, we see that the average error of the wall widths is 2.745%. We also computed the percentage error of the estimated total perimeter of the floor layout with respect to the real value, which is 0.23%. Both error percentages show that the precision of the proposed system is satisfactory for real applications.


Figure 6.4 Illustration of global optimization. (a) The estimated mopboard edge points of all walls. (b) A floor layout fitting the points in (a).


Table 6.1 Precision of estimated wall widths and their error percentages.



After the floor layout was constructed, the system proceeded to extract doors and windows, if any. In our experimental environment, there are a door and a simulated window. The door is tall enough to be seen in the images taken by both the lower and the upper omni-cameras. An example of the images of the door and the door detection result is shown in Figure 6.5. In addition, we simulated a window by attaching a black frame to a wall; it is mounted high enough that only the upper omni-camera can "see" it. An example of the image of the window and the window detection result is shown in Figure 6.6.


Figure 6.5 Images and door detection result. (a) Image of the door taken by the upper omni-camera.
