
Chapter 6 Experimental Results and Discussions

6.2 Discussions

From our experiments and their results, we see that the goal of this study, namely automatic house-layout construction by autonomous vehicle navigation without path learning, has been achieved. One inconvenience found in this study is that the two-camera omni-directional imaging system designed for this research is not tall enough, so windows high on the walls do not appear clearly (their image regions are smeared by the plastic top cover of the upper omni-camera) when the vehicle navigates too close to the wall root. A possible solution is to construct a more transparent, spherically shaped cover, possibly made of glass.

Due to the unavailability of an empty and sufficiently large house space for conducting the experiments, the environment we used was not totally closed, and half of it was enclosed by simulated walls. In the future, more experiments should be conducted in more realistic room spaces.

(a)

(b)

Figure 6.7 Graphic display of the constructed house layout. (a) View from the top (the green rectangle is a door and the yellow one is a window). (b) View from the back of the window.


Chapter 7

Conclusions and Suggestions for Future Work

7.1 Conclusions

A system for automatic house-layout construction by vision-based autonomous vehicle navigation in an empty indoor room space has been proposed. To acquire environment images, a new type of omni-directional camera has been designed for this study, consisting of two omni-cameras aligned coaxially and back to back, with the upper camera taking images of the upper semi-spherical space of the environment and the lower camera taking images of the lower semi-spherical space. A so-called pano-mapping table [7] is used for computing the depth data of space feature points.
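To illustrate how depth can be obtained with such a coaxial camera pair, the sketch below assumes the pano-mapping table yields the elevation angle of the light ray through each pixel; the function name, variable names, and the exact triangulation geometry are illustrative assumptions, not the thesis's exact formulation.

```python
import math

def depth_from_elevations(elev_upper, elev_lower, baseline):
    """Horizontal distance r from the camera axis to a feature point
    seen by both omni-cameras (hypothetical helper).

    elev_upper, elev_lower: elevation angles (radians) of the rays to
    the point, as read from each camera's pano-mapping table.
    baseline: vertical distance between the two mirror centers.

    Geometry (illustrative):
        h_lower = r * tan(elev_lower)   # point height above lower center
        h_upper = r * tan(elev_upper)   # point height above upper center
        h_lower - h_upper = baseline
    """
    return baseline / (math.tan(elev_lower) - math.tan(elev_upper))
```

For example, with a baseline of 1 unit, a point 2 units away and 3 units above the lower mirror center yields elevation angles atan(1.5) and atan(1.0), from which the distance 2 is recovered.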

The proposed automatic house-layout construction process consists of three major stages: (1) vehicle navigation by mopboard following; (2) floor layout construction; and (3) 3-D house layout construction. In the first stage, the vehicle is navigated to follow the mopboards at the roots of the walls in the house. A pattern classification technique has been proposed to classify the mopboard points detected by an image processing scheme applied directly on the omni-image. Each group of mopboard points so classified is then fitted with a line using an LSE criterion, and the line is used to represent the corresponding wall.
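A minimal sketch of this fitting step follows, assuming the LSE criterion is total least squares (minimizing squared perpendicular distances, which handles walls of any orientation); the function names are hypothetical, not the thesis's.

```python
import numpy as np

def fit_wall_line(points):
    """Fit a line to one classified group of mopboard points.

    Total least squares: the returned (centroid, unit direction) pair
    minimizes the sum of squared perpendicular distances, so vertical
    wall lines are handled as well as horizontal ones.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The first right-singular vector of the centered points is the
    # direction of maximum variance, i.e. the best-fit line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def fitting_error(points, centroid, direction):
    """Sum of squared perpendicular distances from points to the line."""
    d = np.asarray(points, dtype=float) - centroid
    perp = d - np.outer(d @ direction, direction)
    return float((perp ** 2).sum())
```

For collinear input points the fitting error is essentially zero, and the recovered direction matches the line's slope up to sign.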

In the second stage, a global optimization method has been proposed to construct a floor layout from all the wall lines in the sense of minimizing the total line-fitting error.
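The global optimization itself is not reproduced here; purely as an illustrative building block (with hypothetical helper names), the sketch below shows how the corners of a closed floor layout can be obtained by intersecting consecutive fitted wall lines, each given as a point and a direction.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection of two lines, each given by a point and a direction."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    # Solve p1 + t1 * d1 = p2 + t2 * d2 for (t1, t2).
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1

def floor_corners(walls):
    """Corners of a closed layout from an ordered list of wall lines,
    each a (point, direction) pair; corner i joins wall i and wall i+1
    (wrapping around to close the layout)."""
    n = len(walls)
    return [line_intersection(*walls[i], *walls[(i + 1) % n])
            for i in range(n)]
```

For a unit-square room described by its four wall lines, this yields the four corners of the square.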


In the last stage, doors and windows are detected from the omni-images taken during the navigation session. An algorithm has been proposed to match rectangular areas appearing in the lower and upper omni-images taken by the respective cameras, in order to decide the existence of doors or windows. The detected door and window data are then merged into the wall line data to obtain a complete 3-D data set for the house.

Finally, the data set is transformed into a graphic form for 3-D display of the house from any viewpoint.

The entire house layout construction process is fully automatic, requiring no human involvement, and so is very convenient for real applications. The experimental results show the feasibility of the proposed method.

7.2 Suggestions for Future Work

Several interesting topics for future research are listed in the following.

1. The proposed two-camera omni-directional imaging system may also be used for other applications like environment image collection and 3-D environment model construction.

2. More in-house objects, like paintings, furniture, poles, and so on, may be extracted from omni-images of the house environment for more complete construction of the house layout.

3. More applications of the proposed methods, like house dimension measuring, unknown environment exploration, automatic house cleaning, etc., may be investigated.

4. More techniques for acquiring the corner and line information from house ceilings using the proposed omni-camera system may be developed.


References

[1] Z. Zhu, “Omnidirectional stereo vision,” Proceedings of Workshop on Omnidirectional Vision in the 10th IEEE ICAR, Budapest, Hungary, pp. 1-12, Aug. 2001.

[2] A. Ohya, A. Kosaka, and A. Kak, “Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing,” IEEE Transactions on Robotics and Automation, Vol. 14, No. 6, pp. 969-978, 1998.

[3] J. Gluckman, S. K. Nayar, and K. J. Thoresz, “Real-time omnidirectional and panoramic stereo,” Proceedings of DARPA98, pp. 299-303, 1998.

[4] H. Ukida, N. Yamato, Y. Tanimoto, T. Sano, and H. Yamamoto, “Omni-directional 3D measurement by hyperbolic mirror cameras and pattern projection,” Proceedings of 2008 IEEE Conference on Instrumentation & Measurement Technology, Victoria, BC, Canada, pp. 365-370, May 12-15, 2008.

[5] B. S. Kim, Y. M. Park, and K. W. Lee, “An experiment of 3D reconstruction using laser range finder and CCD camera,” Proceedings of IEEE 2005 International Geoscience and Remote Sensing Symposium, Seoul, Korea, pp. 1442-1445, July 25-29, 2005.

[6] S. Kim and S. Y. Oh, “SLAM in indoor environments using omni-directional vertical and horizontal line features,” Journal of Intelligent and Robotic Systems, Vol. 51, No. 1, pp. 31-43, January 2008.

[7] S. W. Jeng and W. H. Tsai, “Using pano-mapping tables for unwrapping of omni-images into panoramic and perspective-view images,” Journal of IET Image Processing, Vol. 1, No. 2, pp. 149-155, June 2007.


[8] K. L. Chiang, “Security patrolling and danger condition monitoring in indoor environments by vision-based autonomous vehicle navigation,” M. S. Thesis, Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2005.

[9] J. Y. Wang and W. H. Tsai, “A study on indoor security surveillance by vision-based autonomous vehicles with omni-cameras on house ceilings,” M. S. Thesis, Institute of Computer Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan, Republic of China, June 2009.

[10] M. C. Chen and W. H. Tsai, “Vision-based security patrolling in indoor environments using autonomous vehicles,” Proceedings of 2005 Conference on Computer Vision, Graphics and Image Processing, Taipei, Taiwan, Republic of China, pp. 811-818, August 2005.

[11] C. J. Wu and W. H. Tsai, “Location estimation for indoor autonomous vehicle navigation by omni-directional vision using circular landmarks on ceilings,” Robotics and Autonomous Systems, Vol. 57, No. 5, pp. 546-555, May 2009.

[12] P. Biber, S. Fleck, and T. Duckett, “3D modeling of indoor environments for a robotic security guard,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, Vol. 3, pp. 124-130, 2005.

[13] S. B. Kang and R. Szeliski, “3-D scene data recovery using omnidirectional multi-baseline stereo,” International Journal of Computer Vision, pp. 167-183, Oct. 1997.

[14] J. I. Meguro, J. I. Takiguchi, Y. Amano, and T. Hashizume, “3D reconstruction using multibaseline omnidirectional motion stereo based on GPS/dead-reckoning compound navigation system,” International Journal of Robotics Research, Vol. 26, No. 6, pp. 625-636, June 2007.
