

5.5 Other Applications

5.5.2 Calculating Walking Speed of a Person

The purpose of calculating the walking speed of a person is as follows. If a person who breaks into the environment under surveillance wears a mask, we cannot tell his/her identity even if the camera equipped on the vehicle has taken clear images of him/her. However, we can still gather other characteristics of the person, such as the walking speed, which, as mentioned previously, can help us identify the person.

Because the position of the person is computed every 400 ms, we can calculate the distance the person moves at the same frequency. Hence, we take every two consecutive positions P1 and P2 of the person and calculate the walking speed of the person by the following equation:

\[
v = \frac{\overline{P_1 P_2}}{100} \times \frac{60}{0.4} \;\;\text{meters/minute} \tag{5.4}
\]

where P1P2 denotes the distance between positions P1 and P2 in cm. We can calculate the average walking speed by collecting a set S of all the walking speeds within a certain duration of time, removing the zero values from S, and computing the average of the remaining values. The zero values must be removed because a walking speed w of zero means that the person is not moving at that moment, so w should not be included in the average walking speed.
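To make Equation (5.4) and the averaging procedure concrete, the following is a minimal Python sketch under the assumptions that positions are 2-D tuples in cm and are sampled every 0.4 s; the function names are illustrative and not part of the proposed system.

```python
# Minimal sketch of Eq. (5.4): positions are sampled every 0.4 s and measured
# in cm, so speed = (distance_cm / 100) * (60 / 0.4) meters/minute.
from math import hypot

SAMPLE_PERIOD_S = 0.4  # a position is computed every 400 ms

def walking_speed(p1, p2):
    """Speed in meters/minute between two consecutive positions (x, y) in cm."""
    distance_cm = hypot(p2[0] - p1[0], p2[1] - p1[1])
    return (distance_cm / 100.0) * (60.0 / SAMPLE_PERIOD_S)

def average_walking_speed(positions):
    """Average of the non-zero speeds collected over a duration of time."""
    speeds = [walking_speed(a, b) for a, b in zip(positions, positions[1:])]
    moving = [s for s in speeds if s > 0]  # zero speed means the person is not moving
    return sum(moving) / len(moving) if moving else 0.0
```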

Chapter 6

Experimental Results and Discussions

In this chapter, we show some experimental results of the proposed security patrolling system. The first is the results of calculating the positions Wreal of real-world points when a fisheye camera is affixed at different heights. We compare the values of Wreal calculated by the method proposed in Chapter 3 with those obtained by manual measurement.

The second is the results of calculating the positions of a person in an actual environment in the Computer Vision Laboratory, Department of Computer Science, National Chiao Tung University; the computed results are compared with the real positions of the person.

The third is the results of measuring the distances by which an autonomous vehicle deviates from the original path when it patrols in an actual environment and its position and direction are corrected by the method proposed in Chapter 4. The details will be described in Section 6.3.

6.1 Experimental Results of Calculating Positions of Real-world Points

In this experiment, we calculated the values of the real-world point positions Wreal for every camera height both by the method mentioned in Chapter 3 and by manual measurement.

We first construct a basic mapping table for the fisheye camera. We affix the camera at a height of 20 cm above the calibration board, as shown in Figure 6.1. The real-world width between every two consecutive intersection points on the board, obtained by manual measurement, is 1 cm. Then we calculate the image coordinates of the intersections of the lines in the image and construct a basic mapping table by the method proposed in Chapter 3.

Figure 6.1 The camera is affixed at 20 cm from the calibration board.

After the table is constructed, we affix the camera at heights of 10 cm, 15 cm, 30 cm, and 40 cm above the calibration board, as shown in Figures 6.2 through 6.5, respectively. We then calculate the values Wreal of the real-world points for every height by the equations derived in Chapter 3, as well as by manual measurement. The results are shown in Table 6.1, and the average error rate is 2.52%.

Figure 6.2 The camera is affixed at 10 cm from the calibration board.

Figure 6.3 The camera is affixed at 15 cm from the calibration board.

Figure 6.4 The camera is affixed at 30 cm from the calibration board.

Figure 6.5 The camera is affixed at 40 cm from the calibration board.

Table 6.1 The results of calculating the values of Wreal in two ways.

No.  Height (cm)  (1) Calculated Wreal (cm)  (2) Measured Wreal (cm)  Error (%) = |(1) - (2)| / (1) × 100

1 5 0.26 0.25 3.85

2 10 0.49 0.5 2.04

3 15 0.73 0.75 2.74

4 20 1 1 N/A

5 25 1.24 1.25 0.81

6 30 1.45 1.5 3.45

7 35 1.77 1.75 1.13

8 40 1.93 2 3.63
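As a small illustration of how the error column and the 2.52% average above are obtained, the following Python sketch recomputes the relative errors |(1) - (2)| / (1) from the values listed in Table 6.1 (the 20 cm reference row is skipped):

```python
# Recompute the error column of Table 6.1: error = |calculated - measured| / calculated.
pairs = [(0.26, 0.25), (0.49, 0.5), (0.73, 0.75), (1.24, 1.25),
         (1.45, 1.5), (1.77, 1.75), (1.93, 2.0)]      # (calculated, measured) Wreal values

errors = [abs(c - m) / c * 100 for c, m in pairs]     # 3.85, 2.04, 2.74, 0.81, 3.45, 1.13, 3.63
print(round(sum(errors) / len(errors), 2))            # average error rate, about 2.52
```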

6.2 Experimental Results of Calculating Positions of a Person

The environment for this experiment is an open space area in our laboratory.

Because of the property of imaging projection, after the region of a person is found in the image, the point in the region that is closest to the image center is taken as the position of the person, as shown in Figure 6.6. We computed several positions of a person by the method described in Chapter 5 and also obtained them by manual measurement. The positions are scattered evenly and randomly over the region under surveillance in the laboratory, as shown by the red points in Figure 6.7, in which the two images were taken by the two fisheye cameras affixed on the ceiling.

Figure 6.6 Finding the position of a person in the image.

(a) (b)

Figure 6.7 The experimental positions of the person.
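The selection of the person's position described above (the region point closest to the image center, Figure 6.6) can be sketched in a few lines. The following is a minimal illustration under the assumption that the detected person region is available as a set of pixel coordinates; the names are chosen only for illustration.

```python
# Minimal sketch: take the pixel of the detected person region that is
# closest to the image center as the position of the person (Figure 6.6).
from math import hypot

def person_position(region_pixels, image_width, image_height):
    """region_pixels: iterable of (u, v) coordinates of the detected region."""
    cu, cv = image_width / 2.0, image_height / 2.0   # image center
    return min(region_pixels, key=lambda p: hypot(p[0] - cu, p[1] - cv))
```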

In Table 6.2, we show the global coordinates of the positions of the person which were obtained both by measuring manually and by calculation from the images, and the error rates of the positions so obtained.

The average error rate of finding the position of a person is 1.26%, which is small enough for the vehicle to follow the person successfully in real applications.

Table 6.2 Calculating errors of the position of a person.

A real example of following a specific person in our laboratory is shown below.

As shown in Figure 6.8(a), the vehicle patrolled in the laboratory. Then, as shown in Figure 6.8(b), a person broke into the laboratory, and the vehicle stopped the patrolling task and started to follow the person. In Figures 6.8(c) through 6.8(f), the vehicle followed the person continuously until the person left the environment.

(a) (b)

(c) (d)

(e) (f)

Figure 6.8 A real example of following the specific person. (a) The vehicle patrolled in the laboratory. (b) through (f) The vehicle followed the person.

The images taken by the fisheye cameras on the ceiling are shown in Figure 6.9.

The black squares are the regions we processed, and the white circles in the black squares are the positions of the person we calculated.

(a) (b)

(c) (d)

(e) (f)

Figure 6.9 The images taken by the cameras on the ceiling, and the positions of the person are indicated.

6.3 Experimental Results of Distance of Deviations from Navigation Path

Because the autonomous vehicle used in this study suffers from accumulation of mechanical errors, two top-view omni-cameras are utilized to locate and monitor the vehicles. In this experiment, the vehicle patrols in our laboratory continuously and the monitored points are selected by a user in the environment map, as shown in Figure 6.10.

Figure 6.10 The monitored points selected by a user.

In Table 6.3, we record the length of every segment of the path in cm and the total length of the vehicle's navigation in meters. In our experiments, when the vehicle reached the destination of each segment of the path, the distance between the real position of the vehicle and the original destination of that segment was calculated and recorded in the table as well. The error rates, obtained by dividing this deviation distance by the length of the segment, are also listed in the table.
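A minimal sketch of the per-segment error rate just described is given below; the data representation (global coordinates in cm) is an assumption for illustration only.

```python
# Minimal sketch of the error rate in Tables 6.3 and 6.4:
# error rate = deviation from the segment destination / segment length.
from math import hypot

def segment_error_rate(real_position, destination, segment_length_cm):
    """All positions are (x, y) global coordinates in cm; returns a percentage."""
    deviation_cm = hypot(real_position[0] - destination[0],
                         real_position[1] - destination[1])
    return deviation_cm / segment_length_cm * 100.0
```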

We also performed a comparative experiment. In the first run, the vehicle position and direction were not corrected by any method, and the results are recorded in Table 6.3. In the second run, they were corrected by the method described in Chapter 4, and the results are recorded in Table 6.4.

Table 6.3 Records of the uncorrected mechanical errors in every segment of the path.

Table 6.4 Records of the corrected mechanical errors in every segment of the path.

As the two tables show, the average error rate with correction by the proposed technique is much smaller than that without any correction. Moreover, we can also see from the tables that the mechanical errors of the segments of the path no longer accumulate, because the error caused in every segment is eliminated by our method.

6.4 Discussions

The proposed system utilizes the vision-based autonomous vehicle to perform the security patrolling task. For this purpose, some monitored points are utilized to guide the vehicle. In addition, monitoring specific spots in indoor environments has further applications, such as providing various services in various application environments. Every monitored point can be regarded, for example, as a business service point at which there are some customers. If the environment is a restaurant, a menu-displaying apparatus can be equipped on the vehicle, and then the vehicle can move to each service point along assigned optimal paths to ask what dishes or services are needed. If the environment is a company, the vehicle can also be utilized to deliver documents or messages to each service point.

However, there are still some problems in the proposed system. If an object suddenly appears next to the vehicle, the top-view omni-cameras will not be able to identify the vehicle. Furthermore, when tracking a person in the environment, if another person walks close to the tracked person, we may not be able to calculate the right position of that person. To solve these problems, it might be necessary to add color information and sample models of the vehicle and the person into the system.

Chapter 7

Conclusions and Suggestions for Future Works

7.1 Conclusions

In this study, we utilize a vision-based autonomous vehicle and omni-cameras affixed on the ceiling to perform indoor security surveillance. The indoor environment can be complicated with static and dynamic obstacles. We have proposed several techniques and designed some algorithms for indoor security surveillance, which are summarized in the following.

(1) A height-adaptive space mapping method has been proposed, by which we can construct an adaptive mapping table for a camera irrespective of the height of the camera. By the mapping table, we can convert the coordinates of the points in the environment between the image coordinates and the global coordinates. Hence we can calculate the real-world position of a vehicle or a person.

(2) An environment-information calculation method has been proposed, by which we can obtain all the ground regions in the environment, which form the environment map of the patrolling environment. By the constructed map, we can know the positions of all the obstacles. When a vehicle navigates, the patrolling path can be checked to see if there is any obstacle on it. If so, we can plan another path to avoid the obstacles also by use of the environment map.

(3) A path-planning method has been proposed, by which the vehicle can avoid static and dynamic obstacles in the environment. If there are some obstacles on the original path, several turning points are calculated by the method to form a new path for navigation. The new path is the shortest path with the least number of turning points.

(4) A vehicle location and direction correction method has been proposed. Because the vehicles suffer from mechanical errors, we utilize the top-view omni-cameras to locate them in this study. By the odometer values of the vehicles, we can calculate the centroids of the vehicles in the image. After the centroids are transformed into the global space, the odometer values are corrected by the coordinates of the resulting points. Besides, the directional angles of the vehicles must also be corrected, and two consecutive corrected position points are utilized to do the job (a minimal sketch of this angle computation is given after this list). By the correction method, the position and the direction of the vehicle can be corrected automatically. Accordingly, the vehicle will not deviate too far from the planned path.

(5) A method for handling the camera handoff problem has been proposed, by which a vehicle can navigate under several cameras to expand the range of surveillance. Besides, when tracking a person in the environment, because the person may move among the FOVs of the cameras, the method is also applied to calculate the position of the person in the images.

(6) A method for calculating the position of a person in omni-images based on the rotational invariance property has been proposed, by which we can calculate the position via the image taken by the fisheye camera on the ceiling. Besides, only a partial image has to be processed to calculate the exact position of the person, which reduces the amount of computation, decreases the probability of interference by other dynamic obstacles, and increases the precision of the calculated position.

(7) A position prediction method for use in omni-images has been proposed. Because we only process a partial image to calculate the position of a person, the position of the person in the next time interval has to be predicted by the prediction method. Besides, because the fisheye images are highly distorted, the predicted position of the person is also revised by the proposed method.
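As mentioned in item (4) of the list above, the directional angle of the vehicle is corrected from two consecutive corrected position points. A minimal sketch of that angle computation is given below, assuming positions are 2-D global coordinates; the function name is illustrative only.

```python
# Minimal sketch: estimate the vehicle's directional angle from two
# consecutive corrected positions in the global coordinate system.
from math import atan2, degrees

def direction_angle(previous_point, current_point):
    """Heading (in degrees) of the movement from previous_point to current_point."""
    return degrees(atan2(current_point[1] - previous_point[1],
                         current_point[0] - previous_point[0]))

# Example: moving from (0, 0) to (100, 100) gives a heading of 45 degrees.
```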

The experimental results shown in the previous chapters have revealed the feasibility of the proposed system.

7.2 Suggestions for Future Works

The proposed strategies and methods, as mentioned previously, have been implemented on a vehicle system with multiple omni-cameras on the ceiling.

According to this study, in the following we make several suggestions and point out some related interesting issues, which are worth further investigation in the future:

(1) using a pan-tilt-zoom camera equipped on the vehicle to extract features of images to detect whether monitored objects still exist;

(2) increase the ability to detect more dangerous conditions;

(3) increase the ability of warning users immediately through cell phones or electronic mails;

(4) increase the ability of voice control when users want to issue navigation orders to the vehicle;

(5) increase the ability of constructing the adaptive mapping table automatically for cameras whose FOVs are not perpendicular to the ground;

(6) increase the ability of tracking multiple people in the environment;

(7) increase the ability of processing the fragmented images taken by the camera due to insufficient network bandwidths;

(8) control the vehicle by the information gathered in the image space to reduce the errors of converting the coordinates of the vehicle or the person between the global coordinates and the image coordinates;

(9) increase the ability of patrolling in a dark room via infrared-ray cameras;

(10) increase the ability of handling more complicated situations of the handoff problem by using more information such as camera positions.
