
Chapter 5 Automatic Construction of House Layout by Autonomous Vehicle

5.3 Analysis of Environment Data in Different Omni-images

5.3.3 Proposed method for door and window detection

With the above-mentioned scanning region, we can detect the objects in each omni-image by the proposed two-way angular scanning scheme, which extends the method in Step 5 of Algorithm 4.1. The two-way scheme is used instead of the original one described in Algorithm 4.1 because it achieves a better estimation. In the two-way angular scanning scheme, we first perform Steps 1 through 4 of Algorithm 4.1 for the pair of images, and then traverse along the line Lθ within the scanning region in two directions, from the outer boundary to the inner one and from the inner boundary to the outer one, to find in each direction the first black pixel which is followed by 10 or more consecutive black pixels. The first black pixel found in each direction is regarded as an element of the boundary of the detected object, as shown in Figure 5.4.
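To make the scanning step concrete, the following is a minimal sketch of the two-way scan along one radial scanning line. The pixel ordering helper, the black-pixel test, and the exact run-length interpretation are illustrative assumptions rather than the thesis's actual implementation.

```python
# A minimal sketch of the two-way angular scan along one scanning line L_theta.
# Assumptions (not from the thesis code): the binary omni-image is a 2-D NumPy
# array in which black object pixels have value 0, and line_pixels holds the
# (u, v) coordinates of L_theta ordered from the outer boundary to the inner one.
import numpy as np

RUN_LENGTH = 10  # required number of consecutive black pixels after the hit


def first_boundary_pixel(image, line_pixels):
    """Return the first black pixel on the line that is followed by
    RUN_LENGTH or more consecutive black pixels, or None if no such pixel."""
    values = [image[v, u] == 0 for (u, v) in line_pixels]  # True = black
    for i in range(len(values) - RUN_LENGTH):
        if values[i] and all(values[i + 1:i + 1 + RUN_LENGTH]):
            return line_pixels[i]
    return None


def two_way_scan(image, line_pixels):
    """Scan L_theta in both directions and return the boundary pixels
    found from the outer side and from the inner side, respectively."""
    p_outer = first_boundary_pixel(image, line_pixels)
    p_inner = first_boundary_pixel(image, list(reversed(line_pixels)))
    return p_outer, p_inner
```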

Figure 5.4 Illustration of the boundary (in red points) of a detected object.

According to the two pixels first scanned by traversing along the scanning line in opposite directions, we utilize the information within the segment of the scanning line bounded by these two pixels to determine whether the two pixels are boundary points of an object or not. Besides, an object occupies the omni-image, and so is detected, over a certain continuous angular interval. We utilize this property to combine the objects detected from a run of consecutive scanning lines into an individual one, as shown in Figure 5.5. In this way, there may be several individual objects in the scanning region. As a result, according to the pixels which are boundary points of an object, and by utilizing the 3-D position estimation method described in Section 4.3.2 with a lookup of the pano-mapping table, the average heights of the bottom and the top of the object can be estimated.
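As a concrete illustration of the height estimation just mentioned, the sketch below assumes a pano-mapping lookup that returns the elevation angle of an image pixel and a known horizontal distance from the camera axis to the wall obtained from the floor-layout estimation. The function names and the simple trigonometric model are illustrative assumptions, not the thesis's exact formulation.

```python
# A minimal sketch of estimating average bottom and top heights of a detected
# object from its boundary pixels. Assumptions (not from the thesis code):
# pano_map[(u, v)] gives (azimuth, elevation) in radians for pixel (u, v),
# wall_distance is the horizontal distance from the mirror center to the wall,
# and camera_height is the height of the mirror center above the floor.
import math


def pixel_height(pano_map, pixel, wall_distance, camera_height):
    """Height above the floor of the space point seen at 'pixel', assuming
    the point lies on a vertical wall at the given horizontal distance."""
    _, elevation = pano_map[pixel]
    return camera_height + wall_distance * math.tan(elevation)


def average_heights(pano_map, bottom_pixels, top_pixels, wall_distance, camera_height):
    """Average the heights estimated from the bottom and top boundary pixels."""
    bottom = [pixel_height(pano_map, p, wall_distance, camera_height) for p in bottom_pixels]
    top = [pixel_height(pano_map, p, wall_distance, camera_height) for p in top_pixels]
    return sum(bottom) / len(bottom), sum(top) / len(top)
```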

Figure 5.5 Illustration of the detected objects within the scanning region.

There are some major rules for combining the detected objects.

1. First, we traverse along the scanning line Lθ within the scanning region R from two opposite directions, and find the object boundary pixels pi and po, respectively.

2. By counting the number nb of black pixels and the total number nsum of pixels between the two detected pixels along Lθ, if they are found, and by comparing the ratio nb/nsum with a threshold, we can determine whether the pixels pi and po are on the boundary of an object or not.

3. In R, if all the pixels pi and po for each scanning line Lθ in an angular interval are all on the boundary of an object, then such a largest interval can used to describe the same object. Besides, in R, those combined objects may be combined again according to their positions.

With the above rules, all the individual objects in the same scanning region of an omni-image can be determined.
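The following is a minimal sketch of how the three rules above could be realized for one scanning region. The data layout (one record per scanning line holding the angle, the two boundary pixels, and the black-pixel ratio) and the two thresholds are illustrative assumptions.

```python
# A minimal sketch of combining per-line detections into individual objects.
# Assumptions (not from the thesis code): each scan record is a dict with the
# scanning angle in degrees, the boundary pixels p_i and p_o (or None), and
# the ratio n_b / n_sum of black pixels between them.
RATIO_THRESHOLD = 0.8   # assumed threshold on n_b / n_sum
ANGLE_STEP = 1.0        # assumed angular spacing between scanning lines


def is_object_line(scan):
    """Rule 2: both boundary pixels found and the black-pixel ratio is high."""
    return (scan["p_i"] is not None and scan["p_o"] is not None
            and scan["ratio"] >= RATIO_THRESHOLD)


def combine_objects(scans):
    """Rule 3: group consecutive object-bearing scanning lines into objects.
    'scans' is assumed to be sorted by angle; each returned object is the list
    of scan records that belong to it."""
    objects, current, previous_angle = [], [], None
    for scan in scans:
        if is_object_line(scan):
            gap = (previous_angle is not None
                   and scan["angle"] - previous_angle > ANGLE_STEP)
            if current and gap:
                objects.append(current)   # angular gap: close the current object
                current = []
            current.append(scan)
            previous_angle = scan["angle"]
        elif current:
            objects.append(current)       # a non-object line ends the interval
            current, previous_angle = [], None
    if current:
        objects.append(current)
    return objects
```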


However, an object on a wall may cross two or more scanning regions belonging to different source omni-images taken by the same (upper or lower) omni-camera.

Besides, an object may also appear in both omni-images of a pair simultaneously. For these reasons, we have to combine the detected individual objects which belong to the same physical object. Here, we denote by O1 and O2 the sets of the individual objects detected from the omni-images taken by the lower and upper omni-cameras, respectively. As shown in Figure 5.6, at first we conduct the combination task for all the objects in O1 and O2 according to their positions, to form two new sets O1 and O2, respectively. Then, we conduct the reorganization task. The process is described in the following algorithm.

Algorithm 5.3: Object reorganization.

Input: Sets O1 and O2 including the combined objects on walls.

Output: The set OW of window objects and the set OD of door objects.

Steps:

Step 1. (Object recognition for each wall) For each floor-layout edge Fk, perform the following steps to recognize the objects on wall Wk:

1.1 Choose an object o2,i from O2 and try to find an object in O1 at a similar location.

(1) If such an object o1,j is found, then check by its location whether it is connected to the mopboard:

i. if yes, then recognize o1,j together with o2,i as a door, and add it to OD;

ii. otherwise, recognize o1,j together with o2,i as a window, and add it to OW.

(2) If such an object is not found, recognize o2,i as a window, and add it to OW.


1.2 Repeat Step 1.1 until the objects of O2 are exhausted.

1.3 Recognize the remaining objects in O1 as windows, and add them to OW.

Step 2. Repeat Step 1 until all the floor-layout edges are exhausted.
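A minimal sketch of the reorganization logic of Algorithm 5.3 is given below. The object representation (a wall index, a position along the wall, and a flag indicating connection to the mopboard) and the location-matching tolerance are illustrative assumptions rather than the thesis's exact data structures.

```python
# A minimal sketch of Algorithm 5.3 (object reorganization). Assumptions (not
# from the thesis code): each object is a dict with a wall index, a horizontal
# position along the wall, and, for lower-camera objects, a flag telling
# whether it touches the mopboard; objects at similar locations are matched
# by a simple distance tolerance.
LOCATION_TOLERANCE = 0.3  # assumed matching tolerance (meters along the wall)


def reorganize_objects(o1_objects, o2_objects):
    """Classify combined wall objects into windows and doors.
    o1_objects: objects seen by the lower omni-camera (set O1).
    o2_objects: objects seen by the upper omni-camera (set O2)."""
    doors, windows = [], []
    remaining_o1 = list(o1_objects)
    for o2 in o2_objects:
        match = next((o1 for o1 in remaining_o1
                      if o1["wall"] == o2["wall"]
                      and abs(o1["position"] - o2["position"]) <= LOCATION_TOLERANCE),
                     None)
        if match is None:
            windows.append([o2])                 # seen only from above: window
            continue
        remaining_o1.remove(match)
        if match["touches_mopboard"]:
            doors.append([match, o2])            # reaches the floor: door
        else:
            windows.append([match, o2])          # floats above the floor: window
    windows.extend([o1] for o1 in remaining_o1)  # leftovers in O1 are windows
    return windows, doors
```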

Figure 5.6 Illustration of object combinations. (a) Scanning regions. (b) Individual objects. (c) Combined objects for each omni-camera. (d) Reorganization of objects.


Chapter 6

Experimental Results and Discussions

6.1 Experimental Results

In this chapter, we will show some experimental results of the proposed automatic house-layout construction system, which is based on autonomous vehicle navigation by mopboard following in indoor environments. The experimental environment was an empty indoor room with mopboards at the bottoms of the walls, located in the Computer Vision Laboratory of the Department of Computer Science in the Engineering 3 Building at National Chiao Tung University, Taiwan; it is shown in Figure 6.1(a). For convenience, we label the walls with the notations shown in Figure 6.1(b).


Figure 6.1 The experimental environment. (a) Side view. (b) Illustration of the environment.

At first, the vehicle was placed appropriately near wall W1 with its direction not precisely parallel to the wall. When the vehicle started a navigation cycle, it adjusted its direction to keep its navigation path parallel to each wall by utilizing the imaging system to conduct the mopboard detection process described previously.

Figure 6.2 shows an example of the resulting images of mopboard detection, in which the red dots are the detected mopboard edge points. After this action, the imaging system gathered the environment data and conducted the mopboard detection process again to estimate the locations of the mopboard edge points. According to the estimated distances between the vehicle and the walls, the vehicle then decided whether to go forward or to turn. Figure 6.3(a) shows the estimated mopboard edge points, which cover two adjacent walls, and Figure 6.3(b) shows the result of the pattern classification procedure.

After the vehicle finished the navigation session, the floor layout was constructed with the globally optimal fitting method described previously. Figure 6.4(a) shows the estimated mopboard edge points of all the walls, and Figure 6.4(b) shows the floor layout constructed from them by this method.

Figure 6.2 Detected mopboard edge points.



Figure 6.3 Classification of mopboard edge points. (a) The detected mopboard points. (b) Result of the classification (the points belonging to the upper wall).

In Table 6.1, we show the percentage errors between the actual widths of the walls and the widths estimated in 9 navigation sessions using the proposed system. From the table, we see that the average error of the wall widths is 2.745%. Also, we computed the percentage error of the estimated total perimeter of the floor layout with respect to the real value, which is 0.23%. Both error percentages show that the precision of the proposed system is satisfactory for real applications.
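As a small worked example of the error measure used here, the snippet below computes the percentage error of an estimated length with respect to the ground truth; the sample numbers are made up for illustration and are not the values reported in Table 6.1.

```python
# A minimal sketch of the percentage-error measure used for the wall widths
# and the floor-layout perimeter. The sample values below are illustrative
# only; they are not the measurements reported in Table 6.1.

def percentage_error(estimated, actual):
    """Absolute error of 'estimated' relative to 'actual', in percent."""
    return abs(estimated - actual) / actual * 100.0


if __name__ == "__main__":
    # Example: a wall actually 300 cm wide estimated as 308 cm -> about 2.667%.
    print(f"{percentage_error(308.0, 300.0):.3f}%")
```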


Figure 6.4 Illustration of global optimization. (a) The estimated mopboard edge points of all walls. (b) A floor layout fitting the points in (a).


Table 6.1 Precision of estimated wall widths and their error percentages.



After the floor layout was constructed, the system proceeded to extract doors and windows, if any. In our experimental environment, there are a door and a simulated window. The door is high enough to be seen in both the images taken by the lower and the upper omni-cameras. An example of the images of the door and the door detection results is shown in Figure 6.5. In addition, we simulated a window in our experimental environment by creating a black frame and attaching it to a wall. The window is placed high enough that only the upper omni-camera can "see" it. An example of the image of the window and the window detection result is shown in Figure 6.6.


Figure 6.5 Images and door detection result. (a) Image of the door taken by the upper omni-camera. (b) Image of the door taken by the lower omni-camera. (c) Door detection result of (a). (d) Door detection result of (b).

After the doors and windows were detected, they were merged into the floor-layout data to form the final house layout. An example of the constructed house layout in graphic form is shown in Figure 6.7, in which the house layout is displayed from two views, one from the top of the environment and the other from the back of the wall on which the window appears.


Figure 6.6 Images and window detection result. (a) Image of the window taken by the upper omni-camera. (b) Window detection result of (a).

6.2 Discussions

From our experiments and their results, we see that the goal of the study, automatic house-layout construction by autonomous vehicle navigation without path learning, has been achieved. An inconvenience found in this study is that the two-camera omni-directional imaging system designed for this research is not tall enough, so that a window located high on a wall does not appear clearly (its image part is smeared by the plastic top cover of the upper omni-camera) when the vehicle navigates too close to the wall root. A possible solution is to construct a more transparent, spherically shaped cover, possibly made of glass.

Due to the unavailability of an empty and sufficiently large house space for conducting the experiments, the environment we used was not totally closed, and half of it was enclosed by simulated walls. In the future, more experiments should be conducted in more realistic room spaces.


Figure 6.7 Graphic display of constructed house layout. (a) Viewing from the top (green rectangle is a door and yellow one is a window). (b) Viewing from the back of the window.


Chapter 7

Conclusions and Suggestions for Future Works

7.1 Conclusions

A system for automatic house-layout construction by vision-based autonomous vehicle navigation in an empty indoor room space has been proposed. To achieve acquisition of environment images, a new type of omni-directional camera has been designed for this study, which consists of two omni-cameras aligned coaxially and back to back, with the upper camera taking images of the upper semi-spherical space of the environment and the lower camera taking images of the lower semi-spherical space. A so-called pano-mapping table [7] is used for computing the depth data of space feature points.

The proposed automatic house layout construction process consists of three major stages: (1) vehicle navigation by mopboard following; (2) floor layout construction; and (3) 3-D house layout construction. In the first stage, a vehicle is navigated to follow the mopboards at the roots of the walls in the house. A pattern classification technique has been proposed for classifying the mopboard points detected by an image processing scheme applied directly on the omni-image. Each group of mopboard points so classified is then fitted with a line using an LSE criterion, and the line is used to represent the related wall.
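As an illustration of the LSE line fitting mentioned above, the sketch below fits a 2-D line to a group of classified mopboard points by a total-least-squares (principal-axis) fit, which handles near-vertical walls gracefully. This is an assumed formulation, since the thesis does not spell out the exact fitting equations here.

```python
# A minimal sketch of fitting a wall line to one group of classified mopboard
# edge points with a least-squares criterion. A total-least-squares (principal
# axis) fit is used here as an assumed formulation; it avoids the degenerate
# case of near-vertical walls that a plain y-on-x regression would have.
import numpy as np


def fit_wall_line(points):
    """Fit a line to Nx2 points; return (point_on_line, unit_direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The first right singular vector gives the direction of largest variance.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]


if __name__ == "__main__":
    # Example with synthetic, slightly noisy points along a wall segment.
    xs = np.linspace(0.0, 3.0, 20)
    noisy = np.c_[xs, 0.5 * xs + 1.0 + 0.01 * np.random.randn(20)]
    p0, d = fit_wall_line(noisy)
    print("point on line:", p0, "direction:", d)
```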

In the second stage, a global optimization method has been proposed to construct a floor layout from all the wall lines in the sense of minimizing the total line fitting error.


In the last stage, doors and windows are detected from the omni-images taken in the navigation session. An algorithm has been proposed to match rectangular areas appearing in the lower and upper omni-images taken by the respective cameras, in order to decide the existence of doors or windows. Then, the detected door and window data are merged into the wall-line data to get a complete 3-D data set for the house.

Finally, the data set is transformed into a graphic form for 3-D display of the house from any viewpoint.

The entire house layout construction process is fully automatic, requiring no human involvement, and so is very convenient for real applications. The experimental results show the feasibility of the proposed method.

7.2 Suggestions for Future Works

There exist several interesting topics for future research, which are listed in the following.

1. The proposed two-camera omni-directional imaging system may also be used for other applications like environment image collection and 3-D environment model construction.

2. More in-house objects, like paintings, furniture, poles, and so on, may be extracted from omni-images of the house environment for more complete construction of the house layout.

3. More applications of the proposed methods, like house dimension measuring, unknown environment exploration, automatic house cleaning, etc., may be investigated.

4. More techniques for acquiring the corner and line information from house ceilings using the proposed omni-camera system may be developed.


References

[1] Z. Zhu, "Omnidirectional stereo vision," Proceedings of Workshop on Omnidirectional Vision in the 10th IEEE ICAR, pp. 1-12, Budapest, Hungary, August 2001.

[2] A. Ohya, A. Kosaka, and A. Kak, "Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing," IEEE Transactions on Robotics and Automation, Vol. 14, No. 6, pp. 969-978, 1998.

[3] J. Gluckman, S. K. Nayar, and K. J. Thoresz, "Real-time omnidirectional and panoramic stereo," Proceedings of DARPA98, pp. 299-303, 1998.

[4] H. Ukida, N. Yamato, Y. Tanimoto, T. Sano, and H. Yamamoto, "Omni-directional 3D measurement by hyperbolic mirror cameras and pattern projection," Proceedings of 2008 IEEE Conference on Instrumentation & Measurement Technology, Victoria, BC, Canada, pp. 365-370, May 12-15, 2008.

[5] B. S. Kim, Y. M. Park, and K. W. Lee, "An experiment of 3D reconstruction using laser range finder and CCD camera," Proceedings of IEEE 2005 International Geoscience and Remote Sensing Symposium, Seoul, Korea, pp. 1442-1445, July 25-29, 2005.

[6] S. Kim and S. Y. Oh, "SLAM in indoor environments using omni-directional vertical and horizontal line features," Journal of Intelligent and Robotic Systems, Vol. 51, No. 1, pp. 31-43, January 2008.

[7] S. W. Jeng and W. H. Tsai, "Using pano-mapping tables for unwrapping of omni-images into panoramic and perspective-view images," Journal of IET Image Processing, Vol. 1, No. 2, pp. 149-155, June 2007.


[8] K. L. Chiang, "Security patrolling and danger condition monitoring in indoor environments by vision-based autonomous vehicle navigation," M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2005.

[9] J. Y. Wang and W. H. Tsai, "A study on indoor security surveillance by vision-based autonomous vehicles with omni-cameras on house ceilings," M. S. Thesis, Institute of Computer Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2009.

[10] M. C. Chen and W. H. Tsai, "Vision-based security patrolling in indoor environments using autonomous vehicles," Proceedings of 2005 Conference on Computer Vision, Graphics and Image Processing, pp. 811-818, Taipei, Taiwan, Republic of China, August 2005.

[11] C. J. Wu and W. H. Tsai, "Location estimation for indoor autonomous vehicle navigation by omni-directional vision using circular landmarks on ceilings," Robotics and Autonomous Systems, Vol. 57, No. 5, pp. 546-555, May 2009.

[12] P. Biber, S. Fleck, and T. Duckett, "3D modeling of indoor environments for a robotic security guard," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, Vol. 3, pp. 124-130, 2005.

[13] S. B. Kang and R. Szeliski, "3-D scene data recovery using omnidirectional multi-baseline stereo," International Journal of Computer Vision, pp. 167-183, October 1997.

[14] J. I. Meguro, J. I. Takiguchi, Y. Amano, and T. Hashizume, "3D reconstruction using multibaseline omnidirectional motion stereo based on GPS/dead-reckoning compound navigation system," International Journal of Robotics Research, Vol. 26, No. 6, pp. 625-636, June 2007.
