


6.3 Experimental Results of Distance of Deviations From Navigation Path

Because the autonomous vehicle used in this study suffers from the accumulation of mechanical errors, two top-view omni-cameras are utilized to locate and monitor it. In this experiment, the vehicle patrols our laboratory continuously, and the monitored points are selected by a user in the environment map, as shown in Figure 6.10.

Figure 6.10 The monitored points selected by a user.

We record in Table 6.3 the length of every segment of the path in centimeters and the total length of the navigation of the vehicle in meters. In our experiments, when the vehicle reached the destination of each segment of the path, the distance between the real position of the vehicle and the intended destination of that segment was calculated and recorded in the table as well. The error rates, obtained by dividing this deviation distance by the length of the segment, are also listed in the table.
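
In symbols, the error rate of the i-th path segment reported in the tables is

    e_i = \frac{d_i}{L_i} \times 100\%,

where d_i is the measured deviation between the real position of the vehicle and the destination of the segment, and L_i is the length of the segment (the symbols are ours, introduced here for clarity).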

We also performed a double-check experiment. In the first run, the vehicle position and direction were not corrected by any method, and the results are recorded in Table 6.3. In the second run, the data were corrected by the method described in Chapter 4, and the results are recorded in Table 6.4.

Table 6.3 Records of the uncorrected mechanical errors in every segment of the path.

Table 6.4 Records of the corrected mechanical errors in every segment of the path.

From Tables 6.3 and 6.4, we can see that the mechanical errors are reduced effectively using the proposed technique. The average error rate with correction is much smaller than that without any correction. Besides, from the tables above we can also see that the mechanical errors of the path segments no longer accumulate, because the error caused in every segment of the path has been eliminated by our method.

6.4 Discussions

The proposed system utilizes the vision-based autonomous vehicle to perform the security patrolling task. For this purpose, some monitored points are utilized to guide the vehicle. In addition, monitoring specific spots in indoor environments has further applications, such as providing various services in various application environments. Each monitored point can be regarded, for example, as a business service point where customers are present. If the environment is a restaurant, a menu-display apparatus can be mounted on the vehicle, and the vehicle can then move to each service point along assigned optimal paths to ask what dishes or services are needed. If the environment is a company, the vehicle can also be utilized to deliver documents or messages to each service point.

However, there are still some problems in the proposed system. If an object appears next to the vehicle suddenly, the top-view omni-cameras may fail to identify the vehicle. Furthermore, when tracking a person in the environment, if another person walks close to the tracked person, we may not be able to calculate the correct position of that person. To solve these problems, it might be necessary to add color information and sample models of the vehicle and the person being tracked.

Chapter 7

Conclusions and Suggestions for Future Works

7.1 Conclusions

In this study, we utilize a vision-based autonomous vehicle and omni-cameras affixed to the ceiling to perform indoor security surveillance. The indoor environment can be complicated, with both static and dynamic obstacles. We have proposed several techniques and designed several algorithms for indoor security surveillance, which are summarized in the following; minimal illustrative sketches of several of the items are collected after item (7).

(1) A height-adaptive space mapping method has been proposed, by which we can construct an adaptive mapping table for a camera irrespective of the height of the camera. By the mapping table, coordinates of points in the environment can be converted between the image space and the global space. Hence, we can calculate the real-world position of a vehicle or a person.

(2) An environment-information calculation method has been proposed, by which we can obtain all the ground regions in the environment; together they form the environment map of the patrolling environment. From the constructed map, we can know the positions of all the obstacles. When a vehicle navigates, the patrolling path can be checked to see whether any obstacle lies on it. If so, we can plan another path to avoid the obstacles, also by use of the environment map.

(3) A path planning method has been proposed, by which the vehicle can avoid static and dynamic obstacles in the environment. If there are obstacles on the original path, several turning points are calculated by the method to form a new path for navigation. The new path is the shortest path with the least number of turning points.

(4) A vehicle location and direction correction method has been proposed. Because the vehicles suffer from mechanical errors, we utilize the top-view omni-cameras to locate them in this study. Guided by the odometer values of the vehicles, we calculate the centroids of the vehicles in the image. After the centroids are transformed into the global space, the odometer values are corrected by the coordinates of the resulting points. Besides, the directional angles of the vehicles must also be corrected, for which two consecutive corrected position points are utilized. By the correction method, the position and the direction of the vehicle can be corrected automatically. Accordingly, the vehicle will not deviate too far from the planned path.

(5) A method for handling the camera handoff problem has been proposed, by which a vehicle can navigate under several cameras to extend the range of surveillance. Besides, when tracking a person in the environment, because the person may move among the FOVs of the cameras, the method is also applied to calculate the position of the person in the images.

(6) A method for calculating the position of a person in omni-images based on the rotational invariance property has been proposed, by which we can calculate the position from the image taken by the fisheye camera on the ceiling. Besides, only a partial image has to be processed to calculate the exact position of the person, which reduces the amount of calculation, decreases the probability of interference by other dynamic obstacles, and increases the precision of the calculated position.

(7) A position prediction method for use in omni-images has been proposed. Because we only process a partial image to calculate the position of a person, the position of the person in the next time interval has to be predicted by the prediction method. Besides, because the fisheye images are highly distorted, the predicted position of the person is also revised by the proposed method.
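
To make the above items concrete, a few minimal sketches follow; all names, data layouts, and parameters in them are our illustrative assumptions, not the exact implementations of this study. First, for item (1), a mapping table pairing image pixels with global-coordinate points might be queried as follows (the nearest-neighbor fallback is an assumption; the study may use a denser table instead):

    # Sketch for item (1): image-to-global lookup via a precomputed table.
    class MappingTable:
        def __init__(self):
            # keys: (u, v) image pixel; values: (x, y) global coordinates in cm
            self.image_to_global = {}

        def add_pair(self, u, v, x, y):
            self.image_to_global[(u, v)] = (x, y)

        def to_global(self, u, v):
            # Exact hit if the pixel was recorded, else the nearest recorded
            # pixel (a simple fallback assumed here for illustration).
            if (u, v) in self.image_to_global:
                return self.image_to_global[(u, v)]
            nearest = min(self.image_to_global,
                          key=lambda p: (p[0] - u) ** 2 + (p[1] - v) ** 2)
            return self.image_to_global[nearest]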
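
For item (2), with the environment map represented as an occupancy grid (our stand-in for the ground regions of the study), checking a planned path for obstacles is a single scan:

    # Sketch for item (2): does any cell of the planned path hit an obstacle?
    def path_blocked(grid, path):
        # grid[r][c] == 0 means free ground, nonzero means obstacle;
        # path is a list of (row, col) grid cells.
        return any(grid[r][c] != 0 for r, c in path)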
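
For item (3), one way to obtain a shortest path with the fewest turning points is a Dijkstra search over (cell, heading) states with the lexicographic cost (length, turns); the grid model and this particular algorithm are our assumptions, since the study derives turning points from the environment map:

    # Sketch for item (3): shortest grid path, ties broken by fewest turns.
    import heapq, itertools

    def plan(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        tie = itertools.count()                      # heap tiebreaker
        # entries: (steps, turns, tie, cell, heading, previous state)
        heap = [(0, 0, next(tie), start, -1, None)]  # heading -1 = not moving yet
        settled = {}                                 # (cell, heading) -> prev state
        while heap:
            steps, turns, _, cell, heading, prev = heapq.heappop(heap)
            if (cell, heading) in settled:
                continue
            settled[(cell, heading)] = prev
            if cell == goal:                         # rebuild the cell sequence
                path, state = [], (cell, heading)
                while state is not None:
                    path.append(state[0])
                    state = settled[state]
                return path[::-1]
            for d, (dr, dc) in enumerate(dirs):
                r, c = cell[0] + dr, cell[1] + dc
                if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0 \
                        and ((r, c), d) not in settled:
                    turn = 0 if heading in (-1, d) else 1
                    heapq.heappush(heap, (steps + 1, turns + turn, next(tie),
                                          (r, c), d, (cell, heading)))
        return None                                  # goal unreachable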
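
For item (4), once the vehicle centroid seen by a top-view camera has been transformed into the global space, the odometer reading can be replaced by it, and the directional angle can be recovered from two consecutive corrected positions:

    # Sketch for item (4): position and direction correction.
    import math

    def correct_position(odometer_xy, camera_xy):
        # Replace the drifting odometer estimate with the camera-derived
        # global position (full replacement assumed for illustration).
        return camera_xy

    def corrected_heading(p_prev, p_curr):
        # Directional angle in degrees from two consecutive corrected
        # position points; 0 degrees along +x, counterclockwise positive
        # (the angle convention is our assumption).
        return math.degrees(math.atan2(p_curr[1] - p_prev[1],
                                       p_curr[0] - p_prev[0]))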
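
For item (5), a simple handoff rule is to hand the target over to the camera whose field of view contains its current global position; the circular ground-plane FOV model below is our assumption:

    # Sketch for item (5): pick the camera responsible for a global point.
    def pick_camera(point, cameras):
        # Each camera: {"cx": ..., "cy": ..., "r": ...}, describing a circular
        # FOV footprint on the ground plane (an assumed model).
        inside = [c for c in cameras
                  if (point[0] - c["cx"]) ** 2 + (point[1] - c["cy"]) ** 2
                  <= c["r"] ** 2]
        if not inside:
            return None
        # Ties broken by the closest FOV center.
        return min(inside, key=lambda c: (point[0] - c["cx"]) ** 2
                                         + (point[1] - c["cy"]) ** 2)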
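
For item (6), processing only a partial image amounts to cropping a window around the expected position of the person in the omni-image; the square window and its size below are illustrative assumptions:

    # Sketch for item (6): bounds of the partial image around an expected
    # person position, clipped to the omni-image.
    def roi_around(center, image_w, image_h, half=60):
        u, v = int(center[0]), int(center[1])
        return (max(u - half, 0), max(v - half, 0),
                min(u + half, image_w - 1), min(v + half, image_h - 1))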
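
For item (7), the simplest prediction consistent with the text is a constant-velocity guess from the last two positions; the distortion-aware revision of the study is not modeled here:

    # Sketch for item (7): predict where to look in the next time interval.
    def predict_next(p_prev, p_curr):
        # Constant-velocity extrapolation (our assumption); the study further
        # revises this prediction to account for fisheye distortion.
        return (2 * p_curr[0] - p_prev[0], 2 * p_curr[1] - p_prev[1])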

The experimental results shown in the previous chapters have revealed the feasibility of the proposed system.

7.2 Suggestions for Future Works

The proposed strategies and methods, as mentioned previously, have been implemented on a vehicle system with multiple omni-cameras on the ceiling. Based on this study, in the following we make several suggestions and point out some related interesting issues, which are worth further investigation in the future:

(1) using a pan-tilt-zoom camera equipped on the vehicle to extract image features for detecting whether monitored objects still exist;

(2) increasing the ability to detect more kinds of dangerous conditions;

(3) increasing the ability to warn users immediately through cell phones or electronic mail;

(4) adding the ability of voice control for users who want to issue navigation orders to the vehicle;

(5) increasing the ability to construct the adaptive mapping table automatically for cameras whose FOVs are not perpendicular to the ground;

(6) increasing the ability to track multiple people in the environment;

(7) increasing the ability to process fragmented images taken by the camera under insufficient network bandwidth;

(8) controlling the vehicle by information gathered in the image space, to reduce the errors of converting the coordinates of the vehicle or the person between the global space and the image space;

(9) increasing the ability to patrol in a dark room via infrared cameras;

(10) increasing the ability to handle more complicated situations of the camera handoff problem by using more information, such as camera positions.
