

6.5 Detection of Door Opening

In this section, a detection algorithm for determining the current state of a door is proposed. The algorithm is performed after the vehicle has moved itself to the front of the door, and it detects whether the door is open or not. The detection process is described by the following algorithm.

Algorithm 6.6. Detection of door opening.

Input: An image I and a set of data Di of the door.

Output: A Boolean value, true or false.

Steps:

step 1. Detect the edges of I by applying the Sobel operator.

step 2. Detect the edge Ed of the door and the edge Eb of the baseline. Choose the right or left side baseline of the door according to the learned baseline data.

step 3. Compute the slopes ad and ab of Ed and Eb, respectively, by the following line fitting equation:

a = (Σue Σve − n Σue ve) / ((Σue)² − n Σue²), (6.24)

where (ue, ve) are the coordinates of the n detected edge points.

step 4. Check whether the two slopes satisfy the following condition:

|ad − ab| ≤ th, (6.25)

where th is a threshold.

step 5. If the condition is not satisfied, the door is regarded as open; return true. Otherwise, continue with the following steps.

step 6. Compute the color data of a selected rectangular region, as shown in Figure 6.9.

step 7. Compare the color data with the learned color data. If they match, return false; otherwise, return true.

The main idea here is to utilize the edges of the door and the baseline. We detect the edges of the door's lower side and of the baseline, and compute their slopes by a line fitting technique. If the door is closed, the lower edge of the door is parallel to the baseline, so the two slopes should be nearly equal. But when the door is completely open, as shown in Figure 6.9(b), the edge of the door cannot be detected. Hence, we utilize the color conditions to decide whether the door is open or not.
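The following is a minimal Python sketch of steps 3 through 7 of Algorithm 6.6, assuming the Sobel edge extraction and the selection of the rectangular color region have already been performed. The function names, threshold values, and the Euclidean color distance are illustrative assumptions rather than the exact implementation of this study.

```python
import numpy as np

def fit_slope(points):
    """Least-squares slope of a set of edge points, as in Eq. (6.24).
    points: an (n, 2) array of (u, v) image coordinates on one edge."""
    u, v = points[:, 0].astype(float), points[:, 1].astype(float)
    n = len(points)
    denom = u.sum() ** 2 - n * (u ** 2).sum()
    if abs(denom) < 1e-9:                    # degenerate (vertical) edge
        return float("inf")
    return (u.sum() * v.sum() - n * (u * v).sum()) / denom

def door_is_open(door_edge_pts, baseline_edge_pts,
                 region_color, learned_color,
                 slope_th=0.05, color_th=30.0):  # thresholds are examples
    """Sketch of Algorithm 6.6, steps 3-7; returns True if the door is open."""
    if door_edge_pts is not None and len(door_edge_pts) > 1:
        a_d = fit_slope(door_edge_pts)       # slope of the door's lower edge
        a_b = fit_slope(baseline_edge_pts)   # slope of the learned baseline
        if abs(a_d - a_b) > slope_th:        # Eq. (6.25) not satisfied
            return True                      # edges not parallel: door open
    # Door edge missing (fully open) or parallel: fall back on color data.
    diff = np.linalg.norm(np.asarray(region_color, dtype=float)
                          - np.asarray(learned_color, dtype=float))
    return diff > color_th                   # color mismatch: door open
```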

Figure 6.9 An illustration of door detection. (a) The door is open. (b) The door is closed.

6.6 Improved Guidance Precision in Navigation by Learned Object Location

Although the line following technique and the mechanic error correction method are helpful for improving the navigation accuracy, it is still possible for the vehicle to deviate greatly from the normal path after a long distance of security patrolling. A vision-based technique is proposed in this section to correct the navigation deviation. The main idea is to correct the position coordinates of the vehicle: utilizing the known position of the monitored object, as mentioned in Chapter 5, we can correct the coordinates of the vehicle in the GCS.

When the vehicle searches for the object, an ideal situation is that the vehicle turns to face the object directly in the object detection process, as illustrated in Section 6.4.1. However, if the vehicle has turned an extra angle Φ leftward or rightward to search for the object, as illustrated in Section 6.4.3, the angle Φ has to be taken into account when correcting the coordinates of the vehicle.

Algorithm 6.7. Improved guidance precision in navigation by learned object location.

Input: A detected object region Obj, the coordinates LearnGCobj,i = {Learn_xobj,i, Learn_yobj,i} of the learned object, the direction angle θ0 from the odometer, and a turn angle Φ as described in Algorithm 6.5 in Section 6.4.3.

Output: A set of coordinates (xv, yv) and a direction angle θ1.

Steps:

step 1. Compute the coordinates (Vxobj, Vyobj) of the detected object in the VCS by using the region Obj, where the vehicle has turned by the angle Φ to detect the object.

step 2. Compute the coordinates (Vxo, Vyo) that the detected object would have in the VCS if the vehicle had not turned leftward or rightward by the angle Φ, using the following equations:

Vxo = Vxobj × sinΦ − Vyobj × cosΦ; (6.26)

Vyo = Vxobj × cosΦ + Vyobj × sinΦ. (6.27)

step 3. Compute the coordinates (Oxv, Oyv) of the vehicle in the object coordinate system by transforming the coordinates (Vxo, Vyo) by the following equation:

(Oxv, Oyv) = (−Vxo, −Vyo). (6.28)

step 4. Correct the current direction angle of the vehicle, obtaining θ1 from θ0 and the position at which the object appears in the image, by the following equation:

θ1 = … (6.29)

step 5. Compute the angle ρ between the object coordinate system and the global coordinate system by using the direction angle θ1, as follows:

ρ = θ1 − π/2 − Φ. (6.30)

step 6. Correct the coordinates of the vehicle in the GCS by using the learned object data, as follows:

xv = Oxv × sinρ − Oyv × cosρ + Learn_xobj,i; (6.31)

yv = Oxv × cosρ + Oyv × sinρ + Learn_yobj,i. (6.32)

step 7. The vehicle navigates according to the navigation strategy for straight-line sections.

The main idea of correcting the vehicle position is to utilize the GCS coordinates of the recognized monitored object to modify the odometer values of the vehicle. When the vehicle detects an object and recognizes it as a learned object, we can use the object's GCS coordinates. From the image, we can compute its VCS coordinates. The origin of the VCS is the center of the vehicle, so the coordinates Vxobj and Vyobj of the object are distances relative to the vehicle, as shown in Figure 6.10(a). By transforming the VCS into the coordinate system whose origin is the object center, we get the vehicle coordinates Oxv and Oyv, as shown in Figure 6.10(b). Then, using the coordinates of the learned object, we can compute the vehicle position in the GCS.
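A minimal Python sketch of steps 2 through 6 of Algorithm 6.7 is given below. It assumes the corrected direction angle θ1 of step 4 is already available, and it writes Equation (6.32) in the same rotation form as Equation (6.27); the function and parameter names are hypothetical.

```python
import math

def correct_vehicle_pose(vx_obj, vy_obj, phi, theta1, learn_x, learn_y):
    """Sketch of Algorithm 6.7, steps 2-6.

    vx_obj, vy_obj   -- object coordinates in the VCS (step 1)
    phi              -- extra search turn angle (radians), Algorithm 6.5
    theta1           -- corrected direction angle from step 4
    learn_x, learn_y -- learned object coordinates in the GCS
    """
    # Step 2: remove the effect of the extra turn phi, Eqs. (6.26)-(6.27).
    vx_o = vx_obj * math.sin(phi) - vy_obj * math.cos(phi)
    vy_o = vx_obj * math.cos(phi) + vy_obj * math.sin(phi)
    # Step 3: vehicle position in the object coordinate system, Eq. (6.28).
    ox_v, oy_v = -vx_o, -vy_o
    # Step 5: angle between the object and global coordinate systems, Eq. (6.30).
    rho = theta1 - math.pi / 2 - phi
    # Step 6: corrected vehicle position in the GCS, Eqs. (6.31)-(6.32).
    x_v = ox_v * math.sin(rho) - oy_v * math.cos(rho) + learn_x
    y_v = ox_v * math.cos(rho) + oy_v * math.sin(rho) + learn_y
    return x_v, y_v
```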

Both the direction angle and the coordinates are corrected. To correct the angle, we use the object detection process: when the vehicle turns to face the object, the object should appear at the image center if no mechanic error occurs. If an error does occur, we correct the direction angle of the vehicle by using the position at which the object appears in the image. The real coordinates of the vehicle in the GCS can then be obtained from Oxv, Oyv, and the corrected direction angle.

As soon as we have corrected the direction and coordinates of the vehicle, the vehicle does not return to the original node position, such as the node Ni shown in Figure 6.11; its real position is L. We use the real position L and the next node Ni+1 to compute the navigation path by the line following technique mentioned in Section 6.3, shown as the blue path in Figure 6.11. Hence, although the vehicle may not arrive at the original position Ni due to navigation deviation, we use the object to correct the vehicle position, and the vehicle can continue navigating on a precise path without returning to Ni. This is helpful for improving both the efficiency and the precision of navigation.
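As a rough stand-in for the line following computation of Section 6.3, the following hypothetical helper shows how the corrected pose L = (xv, yv, θ1) can be used to head for the next node Ni+1; it is a simplification, not the thesis's exact line following algorithm.

```python
import math

def turn_toward_next_node(x_v, y_v, theta1, node_x, node_y):
    """Turn angle the vehicle at the corrected pose should make to head
    straight for the next path node; a simplification of Section 6.3."""
    desired = math.atan2(node_y - y_v, node_x - x_v)  # heading of L -> N_{i+1}
    turn = desired - theta1
    # Normalize to (-pi, pi] so the vehicle takes the shorter turn.
    return math.atan2(math.sin(turn), math.cos(turn))
```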

There is another situation that must be considered. If the vehicle cannot detect the object after turning toward it, the vehicle turns an additional angle Φ to search for it, as described in Section 6.4.3. Hence, Φ is included in each correction computation.

Figure 6.10 Coordinate transformation between the VCS and the object coordinate system. (a) A sidelong view of the VCS. (b) A vertical view of the object coordinate system.

Figure 6.11 An illustration of navigation after correcting the position of the vehicle.

Chapter 7

Experimental Results and Discussions

7.1 Experimental Results

We will show some experimental results of the proposed security patrolling system in this section. The user interface of the system is shown in Figure 7.1.

At first, a user controls the vehicle to learn a path and some monitored objects and doors, as shown in Figure 7.2. The experimental images are displayed in the remote system. Figure 7.2(a) illustrates the result of the learning process, in which the user chose an arbitrary object to be monitored by selecting its area in the image. Figure 7.2(b) shows a door to be monitored, selected by the user.

Figure 7.1 An interface of the experiment.

Figure 7.2 The learning images.

The entire set of learned data is shown in Figure 7.3. It includes the path nodes, the saved objects, and a door. Two safes were saved in this experiment.

Figure 7.3 An illustration of learned data.

After the learning process ends, the entire vehicle navigation process is shown in Figure 7.4. Some experimental results of monitoring objects and doors are shown in Figure 7.5. There are two regions in each image: the left side is the view of the vehicle, and the right side is the image processing result. Warning messages about the monitoring results are shown in the images. Figures 7.5(a) through (d) demonstrate that our system successfully recognizes the existence of the monitored objects. Figures 7.5(e) and (f) give another successful example, in which our system distinguishes different door situations.

Figure 7.4 A navigation process.

Figure 7.5 The experimental results of security monitoring.

7.2 Discussions

By analyzing the experimental results of navigation, some problems are identified as follows.

(1) The result of detecting an object with the improved snake algorithm may become worse in front of a complex background. When the background colors are too complex, the control points of the snake do not converge to the edge of the object but stop at edges in the background. In the future, the object detection results can be made more satisfactory by improving the snake algorithm.

(2) Object matching is often degraded by varying lighting conditions. Although the lighting in an indoor environment is more stable than outdoors, an image can still be affected easily by the diaphragm of the camera, so the vehicle needs to stop for more than one second to wait for the lighting to become steady. Although we use an offsetting technique to overcome this problem, erroneous judgments sometimes still occur.

(3) A constraint of our system is that the floor has to be flat. A mechanic error correction model is used in this study, but wheel slippage cannot be totally overcome, and the navigation precision is affected by the roughness of the floor.

Chapter 8

Conclusions and Suggestions for Future Works

8.1 Conclusions

Several techniques and strategies have been proposed in this study and integrated into an autonomous vehicle system for security patrolling in indoor environments, with mechanic error correction and visual object monitoring capabilities.

At first, a setup strategy for the autonomous vehicle is proposed. Two kinds of tasks, namely location mapping calibration and mechanic error correction, are performed to set up the vehicle before its patrolling. A feasible 2D location mapping calibration method is proposed for precisely acquiring the relative positions between the vehicle and the surrounding environment. A mechanic error correction model based on a second-order curve equation is proposed to improve the navigation accuracy.
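As an illustration of the general idea only (the exact model of this study is not reproduced here), the following sketch fits a second-order curve to hypothetical calibration data relating commanded to actually measured travel distances, and inverts it to obtain a corrected command; all values and names are invented for the example.

```python
import numpy as np

# Hypothetical calibration data gathered in the setup stage (cm).
commanded = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
measured = np.array([19.4, 38.5, 57.3, 75.8, 94.1])

# Fit measured = a*c^2 + b*c + d, a second-order curve as in the model.
a, b, d = np.polyfit(commanded, measured, deg=2)

def corrected_command(target):
    """Solve a*c^2 + b*c + d = target for the command c to issue."""
    roots = np.roots([a, b, d - target])
    real = roots[np.isreal(roots)].real
    return float(real[real > 0].min())   # smallest positive solution
```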

Next, some learning strategies are proposed for the autonomous vehicle, including learning of the planned path and learning of monitored objects and doors.

The user can easily control the vehicle to navigate in the environment and select monitored objects in the image. To achieve precise navigation along a path, one method is to use the coordinates of learned objects as an auxiliary means of adjusting the position and direction of the vehicle; another is based on a line following technique. Both methods have been implemented in this study.

In addition, a computer vision process has been proposed for security monitoring along the navigation path. Several processes, namely object detection, object recognition, object searching, and door opening detection, have been proposed to determine the current situation during patrolling.

The experimental results shown in the previous chapter have revealed the feasibility of the proposed system.

8.2 Suggestions for Future Works

The proposed strategies and methods, as mentioned previously, have been implemented on a small vehicle system. Several suggestions and related interesting issues are worth further investigation in the future. They are described as follows.

(1) Improving the object detection method --- The object detection method needs to be improved so that monitored objects can be detected in more complicated images, allowing the system to be adopted in more application environments.

(2) Adding the capability of object feature extraction --- This is especially useful when the interesting image regions of an object are hollow, for example, a ring, a wheel, or a flowerpot.

(3) Adding the abilities of obstacle detection and avoidance so that the vehicle can navigate in complex and dynamic environments where objects or humans may appear suddenly on the navigation path.

(4) Adding the ability of human detection and tracking during the vehicle navigation.

(5) Adding the ability of fire detection in the building.

(6) Designing a friendlier user-machine interface and simplifying the learning strategy for object and path learning.

(7) Designing a camera system with a capability of panning, tilting, and swinging.

(8) Adding the capability of voice control in the learning process.

(9) Adding the capability of transmitting warning messages from the vehicle to the user’s cell phone by using telecommunication systems.
