


5.5 Process of Automatic Path Map Creation from Learned Data

After the learning process ends, the path data Npath, the object data LearnOobject, and the door data LearnDdoor have already been saved. We use the index number NNumber of each node and the total number of nodes NodeNumber to create a path map for later navigation sessions, as shown in Figure 5.9. By following the index numbers NNumber of the nodes, the vehicle can move along the navigation path.

Figure 5.9 An example of a navigation map.
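To make the map construction concrete, the following minimal Python sketch orders the learned nodes by their index numbers NNumber and links consecutive nodes into a path map; the field names and the dictionary layout are illustrative assumptions, not the exact data structures used by the system.

```python
# Minimal sketch of path-map creation from learned node data.
# Field names (NNumber, x, y) are assumed for illustration only.

def create_path_map(learned_nodes, node_number):
    """Order the learned nodes by index to form the navigation path."""
    # Sort nodes by their index number NNumber.
    path = sorted(learned_nodes, key=lambda n: n["NNumber"])
    assert len(path) == node_number, "NodeNumber should match the saved nodes"
    # Connect consecutive nodes with edges to form the path map.
    edges = [(path[i]["NNumber"], path[i + 1]["NNumber"])
             for i in range(len(path) - 1)]
    return {"nodes": path, "edges": edges}

# Example usage with three hypothetical learned nodes.
nodes = [{"NNumber": 1, "x": 120.0, "y": 80.0},
         {"NNumber": 0, "x": 0.0, "y": 0.0},
         {"NNumber": 2, "x": 240.0, "y": 80.0}]
path_map = create_path_map(nodes, 3)
```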

Chapter 6

Security Patrolling in Indoor Environments

6.1 Introduction

By using the learning strategies mentioned in the previous chapters, the path data and object data are saved in the PC, and the vehicle can navigate according to this information. In this chapter, we will describe the entire process of security patrolling in more detail.

In Section 6.2, we will first describe the navigation process briefly. The vehicle navigates according to the node data and checks the existence of the monitored objects by using the coordinates of the learned objects during the navigation process.

In Section 6.3, a line following technique is proposed. Since the vehicle navigates along the nodes one by one, the entire navigation path can be divided into many straight sections. The line following technique is used for correcting the mechanical errors generated in each straight section as the vehicle moves from one node to the next consecutively.

In Section 6.4, we will propose an object security monitoring process. First, the vehicle moves forward to a node near which there is a monitored object, and the object should be detected according to the data recorded previously in the learning stage.

Next, a rotation angle is computed such that the vehicle can turn accordingly toward the monitored object. The vehicle then detects the object by using the improved snake algorithm. Finally, a recognition process is conducted to compare the captured images with the learned ones to decide whether the original object still exists or not.

In Section 6.5, we will describe the proposed door situation recognition process.

The door situation will be checked to see whether the door is open or closed when the vehicle moves automatically to a suitable position recorded in the learning process.

In Section 6.6, we will describe a vehicle coordinate correction method. Since the vehicle might deviate gradually from the original path when navigating a long distance, a vision-based technique is proposed to reduce the accumulated mechanical errors and achieve precise path navigation. When the vehicle detects a previously learned object in the image, our method uses this information to correct the position errors.

6.2 Navigation Process

In the security patrolling process, the vehicle navigates along the generated path by visiting each path node consecutively through the routes specified by the node edges and checks the existence of the learned objects. A simple security patrolling process is described in the following algorithm.

Algorithm 6.1. Security patrolling.

Input: The set of nodes Npath, the object data LearnOobject, and the door data LearnDdoor.

Output: A navigation process.

Steps:

Step 1. The vehicle starts navigating from a starting node N0.

Step 2. Scan the node list Npath to read the next node data.

Step 3. Perform the line following process until the vehicle arrives at the next node.

Step 4. Check whether the vehicle has to monitor objects or doors.

Step 5. If there exists a monitored object Oj or a door Dk at the current node, take the following actions; else, continue the remaining navigation.

Step 5.1. If the learned data are object data, do the following steps.

Step 5.1.1. Turn toward the learned object according to the coordinates recorded in the learning process.

Step 5.1.2. Do the object matching process.

Step 5.2. If the learned data are door data, recognize the door situation.

Step 6. Read the next node data. If there exist remaining nodes, repeat Steps 3 to 5; else, finish the navigation.
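As a rough illustration of Algorithm 6.1, the following Python sketch walks the node list and dispatches to the monitoring routines of Sections 6.3 through 6.5; the helper functions and record fields are hypothetical placeholders rather than the system's actual interfaces.

```python
# Hedged sketch of the security patrolling loop (Algorithm 6.1).
# The helper functions are stand-ins for the processes of
# Sections 6.3 (line following), 6.4 (object matching), and 6.5 (doors).

def follow_line_to(node):
    print(f"line following toward node {node['NNumber']}")

def monitor_object(obj):
    print(f"turn toward and match object {obj['name']}")

def recognize_door(door):
    print(f"recognize situation of door {door['name']}")

def patrol(path_nodes, learned_objects, learned_doors):
    for node in path_nodes:                          # Steps 2 and 6
        follow_line_to(node)                         # Step 3
        idx = node["NNumber"]
        for obj in learned_objects:                  # Step 5.1
            if obj["NodeNumber"] == idx:
                monitor_object(obj)
        for door in learned_doors:                   # Step 5.2
            if door["NodeNumber"] == idx:
                recognize_door(door)

# Example usage with hypothetical learned data.
path = [{"NNumber": i} for i in range(3)]
objs = [{"name": "vase", "NodeNumber": 1}]
doors = [{"name": "lab door", "NodeNumber": 2}]
patrol(path, objs, doors)
```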


Figure 6.1 Flowchart of the security patrolling process.

6.3 Navigation Strategy for Straight-Line Sections

The navigation strategy of line following is adopted to reduce deviations from the route when the vehicle passes a node in its navigation path. The line following process ensures that the vehicle passes through each node while moving between two adjacent nodes. The details are described in the following algorithm, and a related illustration is shown in Figure 6.2.

Algorithm 6.2. Line following navigation.

Input: Coordinates L(xodo, yodo) and direction angle θ0 of the vehicle provided by the odometer, the next node Ni+1(xi+1, yi+1), and a unit vector i = [1, 0].

Output: A navigation path between two adjacent nodes.

Steps:

step 2. Compute the direction angle θ1 of the vehicle in the GCS after the vehicle turns toward the node Ni+1 by using the following equations:

V = (xi+1 − xodo, yi+1 − yodo); (6.1)

θ1 = tan−1((yi+1 − yodo) / (xi+1 − xodo)). (6.2)

step 3. Compute the rotation angle α = θ1 − θ0 between the current vehicle direction and the direction toward Ni+1.

step 4. Because of the mechanical error, the vehicle cannot move to the correct position, as shown in Figure 6.2. A correction angle β is computed as follows by using the curve equation y = ax2 + bx + c, as illustrated in Section 3.3:

β = a|V|2 + b|V| + c. (6.3)

step 5. Compute the real rotation angle as γ = α − β.

step 6. Turn the vehicle leftward for the angle γ if γ is larger than zero; otherwise, turn the vehicle rightward for the angle γ. The direction of the vehicle thus becomes θ0 − γ.

step 7. Move the vehicle forward. Read the odometer to obtain the current vehicle location Lv and compute how far the vehicle has moved as d1 = |Lv − L|.

step 8. End this navigation session if d1 ≥ d.

Theoretically, there are two main parameters we have to compute in a navigation session, namely, the rotation angle γ and the navigation distance d between two adjacent nodes. By using the curve built in advance, which is mentioned in Chapter 3, we compute the real distance d and angle γ. Although the vehicle cannot move perfectly straight due to mechanical errors, it can still arrive at the next position accurately by adjusting its starting direction for compensation.

As shown in Figure 6.2, the starting position of the vehicle should in theory be Ni, but the vehicle will not stop exactly at node Ni in the last line following navigation from Ni−1 to Ni due to the condition for ending a navigation session. The real position of the vehicle becomes L. Hence, we can compute the parameters of a navigation session by using L provided by the odometer instead of Ni.

Figure 6.2 Line following navigation.
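The two parameters of a line following session can be illustrated by the short Python sketch below; the angle convention and the correction-curve coefficients a, b, and c (from the calibration of Section 3.3) are assumed values for demonstration only.

```python
import math

# Illustrative computation of the line-following parameters (Algorithm 6.2).
# (a, b, c) are the coefficients of the correction curve y = ax^2 + bx + c;
# the values below are assumed, not calibration results.

def line_following_parameters(L, theta0, next_node, a, b, c):
    """Return the rotation angle gamma (radians) and distance d to the next node."""
    vx = next_node[0] - L[0]
    vy = next_node[1] - L[1]
    d = math.hypot(vx, vy)                  # navigation distance between nodes
    theta1 = math.atan2(vy, vx)             # desired heading toward the node
    alpha = theta1 - theta0                 # raw rotation angle
    beta = a * d * d + b * d + c            # mechanical-error correction (Eq. 6.3)
    gamma = alpha - beta                    # real rotation angle (step 5)
    return gamma, d

# Example usage with an assumed odometer pose and curve coefficients.
gamma, d = line_following_parameters(L=(10.0, 5.0), theta0=0.1,
                                     next_node=(110.0, 5.0),
                                     a=1e-5, b=2e-4, c=0.01)
```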

6.4 Object Security Monitoring Process

In this section, we will describe the detailed object monitoring process. In this process, the vehicle has to automatically search for the monitored object or door that was pre-selected by the user for the security patrolling. Then it checks the existence of the object or the door situation. A new parameter NodeNumber, which is stored in each object data record, is created for specifying the node at which the object should be searched for.

Another problem to be addressed is that when the vehicle moves to the neighborhood of a monitored object, there may be some unmonitored objects near the vehicle. How to distinguish the monitored object from them is a problem we have to deal with. Here, we use the data of the learned objects to distinguish the monitored object from the others. In Section 6.4.1, we will propose an object detection method, and in Section 6.4.2, an object matching method will be proposed.

Although the vehicle can detect the learned object according to the object coordinates recorded in the learning process, in practice the navigation deviation often prevents the vehicle from finding an object at once. The vehicle moves to the neighborhood of the object and tries to detect it, but it cannot detect the object if the object does not appear in the image. The last method in Section 6.4 decides the searching angle of the object to handle the navigation deviation. Also in this section, we will describe the entire object monitoring process.

6.4.1 Proposed Monitored Object Detection Method

When the vehicle moves to the neighborhood of the object Oi, the data of Oi, LearnOi = {LearnCICobj,i, LearnCobj,i, LearnSobj,i, LearnGCobj,i, LearnCfloor,i, NodeNumber}, which is illustrated in the learning process, is used to detect the object Oi. The detailed algorithm is described as follows.
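For clarity, the learned object record LearnOi may be pictured as the following simple data structure; the field names follow the text above, while the concrete types are only assumptions about how the fields are used later in this chapter.

```python
from dataclasses import dataclass
from typing import Tuple

# Assumed layout of a learned object record LearnO_i; the field names follow
# the text, while the concrete types are illustrative guesses.
@dataclass
class LearnedObject:
    LearnCICobj: Tuple[int, int]          # object coordinates in the image (ICS)
    LearnCobj: Tuple[float, ...]          # color data (means and std. deviations)
    LearnSobj: Tuple[float, float]        # shape data (ellipse axes a_obj, b_obj)
    LearnGCobj: Tuple[float, float]       # object coordinates in the GCS
    LearnCfloor: Tuple[float, ...]        # floor color data for light compensation
    NodeNumber: int                       # node at which the object is monitored
```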

Algorithm 6.3. Monitored object detection.

Input: A color image I, coordinates L(xodo, yodo) and direction angle θ0 provided by the odometer, and a learned object data LearnOi.

Output: A set of object region pixels ObjR.

Steps:

step 1. Use the coordinates of the object, LearnGCobj,i = {Learn_xobj,i, Learn_yobj,i}, and the coordinates of the vehicle L to compute a rotation angle α by the following steps, as shown in Figure 6.3.

step 1.1. Compute a vector V from the vehicle location L to the object location LearnGCobj,i by the following equation:

V = (Learn_xobj,i − xodo, Learn_yobj,i − yodo).

step 1.2. Compute the rotation angle α between the vector V and the current direction of the vehicle.

step 2. If α is larger than zero, turn the vehicle leftward for the angle α; otherwise, turn the vehicle rightward for the angle α.

Figure 6.3 Illustration of the turn angle computation.

step 3. Capture an image and decide the coordinates of the control points in the ICS of the snake algorithm by the following steps.

step 3.1. Decide the four endpoints of a rectangle in the ICS, as shown in Figure 6.4.

step 3.2. Execute the improved snake algorithm by using the four endpoints to detect an object.

step 4. Get the detected object pixels ObjR by using the coordinates of the control points in the ICS, as shown in Figure 6.4.

Figure 6.4 An experimental result of object region.

In this process, the vehicle turns itself toward the front of the object. If no mechanical error occurs, the center of the object region is located on the vertical line through the image center, as shown in Figure 6.5.

Figure 6.5 An ideal experimental result of the vehicle turning to the object.
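A small sketch of the turn-angle computation in Step 1 of Algorithm 6.3 is given below; since the original equations are not reproduced here, the vector and angle formulas are assumptions consistent with Figure 6.3 rather than the thesis's exact expressions.

```python
import math

# Illustrative turn-angle computation for Algorithm 6.3, step 1.
# The vector/angle formulas are assumed; the thesis's exact equations
# may use a different sign or axis convention.

def rotation_angle_to_object(vehicle_xy, theta0, object_gc):
    """Angle alpha the vehicle must rotate so the learned object lies ahead."""
    vx = object_gc[0] - vehicle_xy[0]      # vector V from vehicle to object
    vy = object_gc[1] - vehicle_xy[1]
    target = math.atan2(vy, vx)            # heading of the object in the GCS
    alpha = target - theta0                # rotation relative to current heading
    # Keep the turn within one half revolution.
    alpha = (alpha + math.pi) % (2 * math.pi) - math.pi
    return alpha

# Example usage; alpha > 0 means turn leftward, alpha < 0 rightward (assumed).
alpha = rotation_angle_to_object((50.0, 30.0), theta0=1.2, object_gc=(80.0, 90.0))
```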

6.4.2 Proposed Object Matching Method

After finishing the object detection process as illustrated in the last section, a matching rule is proposed to determine whether the object is exactly the same as the previously learned one. A detailed matching algorithm is described as follows.

Algorithm 6.4. Object Matching Process.

Input: An image I and a learned object data Oi.

Output: A boolean value, true or false.

Steps:

Step 1. Perform the monitored object detection process.

Step 2. If the object cannot be detected, as shown in Figure 6.6, end this process and return false.

Figure 6.6 An experimental result of detecting no object.

Step 3. Else, detect an object Obj and compute its feature data, denoted as ObjData = {Cobj, Sobj, GCobj, Cfloor}. The features are described as follows.

(1) Color data Cobj.

Cobj = {Robj, Gobj, Bobj, Robjsd, Gobjsd, Bobjsd}, (6.7)

where Robj, Gobj, and Bobj are the means of the R, G, and B values, respectively, and Robjsd, Gobjsd, and Bobjsd are the standard deviations of the R, G, and B values, respectively.

(2) Shape data Sobj.

Sobj = {aobj, bobj}. (6.8)

where aobj and bobj are the horizontal axis and the vertical axis of the ellipse used to represent the shape of the object.

(3) Global coordinate GCobj.

Step 4. Compare the learned object with this object by using the color data in the following ways.

step 4.1. Compare the means of the color data in the following way.

step 4.1.1. First compute the light difference by using the floor color.

step 4.1.2. Compare the means of the R, G, and B values with those of the learned object, compensated by the light difference, by three inequalities, where the parameter Ct1 is a threshold.

step 4.2. Compare the standard deviations of the R, G, and B values with those of the learned object by three similar inequalities, where the parameter Ct2 is a threshold.

Step 5. Compare the learned object with this object by using the shape data in the following ways.

step 5.1. Compute the ratios of the horizontal axis to the vertical axis for the learned object and for this object by using the following equations:

LearnR = (Learn_aobj,i / Learn_bobj,i) (6.20)

R = (aobj / bobj) (6.21)

step 5.2. Compare the ratios by using the following inequality:

|(LearnR / R) − 1| ≤ St, (6.22)

where the parameter St is a threshold.

Step 6. If all the inequalities in Steps 4 and 5 are satisfied, then return true; else, return false.
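The tests of Algorithm 6.4 can be summarized by the following sketch; the exact forms of the color inequalities, the light-difference compensation, and the thresholds Ct1, Ct2, and St are assumptions, since only their roles are stated above.

```python
# Illustrative object matching (Algorithm 6.4). The inequality forms and the
# light-difference compensation via the floor color are assumptions.

def match_object(learned, current, Ct1=30.0, Ct2=20.0, St=0.2):
    # Step 4.1.1: light difference estimated from the floor color (assumed form).
    d_light = [c - l for c, l in zip(current["floor_mean"], learned["floor_mean"])]
    # Step 4.1.2: compare R, G, B means, compensated for lighting.
    for cur, lrn, dl in zip(current["mean"], learned["mean"], d_light):
        if abs(cur - (lrn + dl)) > Ct1:
            return False
    # Step 4.2: compare R, G, B standard deviations.
    for cur, lrn in zip(current["std"], learned["std"]):
        if abs(cur - lrn) > Ct2:
            return False
    # Step 5: compare ellipse axis ratios (Eqs. 6.20-6.22).
    learn_r = learned["a"] / learned["b"]
    r = current["a"] / current["b"]
    if abs(learn_r / r - 1.0) > St:
        return False
    return True   # Step 6: all tests passed
```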

6.4.3 Detailed Object Monitoring Algorithm

Although the vehicle can detect the object by using the learned object coordinates and the vehicle location in the object detection process described in Section 6.4.1, there are some situations in which the vehicle cannot detect the monitored object. The reason is that mechanical errors sometimes cause the vehicle to move far away from the route, so that it cannot view the monitored objects, as shown in Figure 6.7. Therefore, a complete object monitoring process, including a searching view adjustment for improving the robustness, is described in the following algorithm, and a detailed flowchart is shown in Figure 6.8.

Figure 6.7 Situations of object detection.

Algorithm 6.5. Monitored object monitoring process.

Input: A learned object data LearnOi, coordinates L(xodo, yodo), and direction angle θ0

provided by the odometer.

Output: A warning message or nothing.

Steps:

step 1. Perform the monitored object detection process.

step 2. Perform the object matching process.

step 3. If the return value is true, then end this process. Else, continue the following steps.

step 4. Compute a rotation angle Φ by the following equation:

Φ = tan−1(10 / V), (6.23)

where the symbol V is the distance between the object and the vehicle, as illustrated in Algorithm 6.3.

step 5. Turn the vehicle leftward for the angle Φ.

step 6. Repeat Step 2.

step 7. If the return value is still false, continue the following steps; else, end this process.

step 8. Turn the vehicle rightward for the angle 2Φ.

step 9. Repeat Step 2.

step 10. If the return value is still false, a warning is announced. Else, end this process.

Figure 6.8 Flowchart of object matching.

The main goal of turning the vehicle is to search for the object. When the vehicle cannot discover the desired object, it turns left or right by the angle Φ to search for the object again. The angle Φ is determined according to the distance between the vehicle and the learned object. After finishing the searches in the three directions, if the vehicle still cannot find the desired object, then a warning message is announced.
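The three-direction search of Algorithm 6.5 can be sketched as follows; the detection routine is a placeholder for Algorithms 6.3 and 6.4, and the constant 10 in Equation (6.23) is assumed to be expressed in the same length unit as the distance V.

```python
import math

# Sketch of the three-direction object search (Algorithm 6.5).
# detect_and_match stands in for Algorithms 6.3 and 6.4.

def detect_and_match(learned_object, view_offset):
    # Placeholder: capture an image at the given view offset and run matching.
    return False

def monitor_object(learned_object, distance_to_object):
    phi = math.atan2(10.0, distance_to_object)     # Eq. (6.23); constant assumed
    if detect_and_match(learned_object, 0.0):      # steps 1-3: straight ahead
        return True
    if detect_and_match(learned_object, +phi):     # steps 5-7: turn leftward by phi
        return True
    # Steps 8-10: turn rightward by 2*phi from the left view, i.e. view offset -phi.
    if detect_and_match(learned_object, -phi):
        return True
    print("warning: monitored object not found")   # step 10
    return False
```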

6.5 Detection of Door Opening

In this section, the detection algorithm for determining the current situation of a door is proposed. The algorithm is performed after the vehicle has moved itself to the front of a door, and it detects whether the door is open or not. The detection process is described in the following algorithm.

Algorithm 6.6. Detection of door opening.

Input: An image I and the data set of the door Di.

Output: A Boolean value, true or false.

Steps:

step 1. Detect the edges of I by applying the Sobel operator.

step 2. Detect the edge of the door Ed and the edge of the baseline Eb. Choose the right or left side baseline of the door according to the learned baseline data.

step 3. Compute the slopes of Ed and Eb by the following equation:

a = (Σue Σve − n Σueve) / ((Σue)2 − n Σue2), (6.24)

where (ue, ve) are the coordinates of the edge points and n is the number of edge points.

step 4. Check whether the slope ad of the door edge and the slope ab of the baseline edge satisfy the following inequality:

|ad − ab| ≤ th, (6.25)

where th is a threshold.

step 5. If the inequality is not satisfied, it means the door is open; return true. Else, continue with the following steps.

step 6. Compute color data by selecting a rectangular region, as shown in Figure 6.9.

step 7. Compare the color data. If the comparison is satisfactory, then return false; else, return true.

The main idea here is to utilize the edges of the door and the baseline. We detect the edges of the door's bottom and of the baseline and compute their slopes by using a line fitting technique. If the door is closed, the edge of the door's bottom is parallel to the baseline, so the two slopes should be close. But when the door is completely open, as shown in Figure 6.9(b), we cannot detect the edge of the door. Hence, we utilize the color conditions to decide whether the door is open or not.

Figure 6.9 An illustration of door detection. (a) The door is opened. (b) The door is closed.
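The slope test of Algorithm 6.6 can be sketched as below; the least-squares slope follows Equation (6.24), while the edge extraction and the color fallback of steps 6 and 7 are omitted.

```python
# Sketch of the door-opening test based on edge slopes (Algorithm 6.6).
# Edge points are assumed to be lists of (u, v) pixel coordinates of
# non-vertical edges, so the denominator below stays nonzero.

def fit_slope(points):
    """Least-squares slope of a set of edge points, as in Eq. (6.24)."""
    n = len(points)
    su = sum(u for u, _ in points)
    sv = sum(v for _, v in points)
    suv = sum(u * v for u, v in points)
    suu = sum(u * u for u, _ in points)
    return (su * sv - n * suv) / (su * su - n * suu)

def door_seems_open(door_edge_points, baseline_points, th=0.1):
    """True if the door-bottom edge and the baseline are not parallel (Eq. 6.25)."""
    a_d = fit_slope(door_edge_points)
    a_b = fit_slope(baseline_points)
    return abs(a_d - a_b) > th
```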

6.6 Improved Guidance Precision in Navigation by Learned Object Location

Although the line following technique and the mechanical error correction method are helpful for improving the navigation accuracy, it is still possible that the vehicle deviates greatly from the normal path after a long distance of security patrolling. A vision-based technique is proposed in this section to correct the navigation deviation. The main idea of correcting the navigation deviation is to correct the position coordinates of the vehicle. Utilizing the known position of the monitored object, which was mentioned in Chapter 5, we can correct the coordinates of the vehicle in the GCS.

When the vehicle searches for the object, an ideal situation is that the object lies directly in front of the vehicle after the turn in the object detection process, as illustrated in Section 6.4.1. However, if the vehicle has turned an extra angle Φ leftward or rightward to search for the object, as illustrated in Section 6.4.3, the angle Φ has to be considered during the correction of the coordinates of the vehicle.

Algorithm 6.7. Improved guidance precision in navigation by learned object location.

Input: A detected object region Obj, the coordinates of the learned object LearnGCobj,i

= {Learn_xobj,i, Learn_yobj,i}, the direction angle θ0 from the odometer, and a turn angle Φ illustrated in Algorithm 6.5 in Section 6.4.3.

Output: A set of coordinates (xv, yv) and a direction angle θ1.

Steps:

step 1. Compute the coordinates of the detected object (Vxobj, Vyobj) in the VCS by using the region Obj when the vehicle has turned for the angle Φ to detect the object.

step 2. Compute the coordinates of the detected object (Vxo, Vyo) in the VCS when the vehicle does not turn leftward or rightward for the angle Φ by the following equations:

Vxo = Vxobj × sinΦ − Vyobj × cosΦ; (6.26)

Vyo = Vxobj × cosΦ + Vyobj × sinΦ. (6.27)

step 3. Compute the coordinates of the vehicle (Oxv, Oyv) in the object coordinate system by transforming the coordinates (Vxo, Vyo) by the following equation:

(Oxv, Oyv) = ((−1) × Vxo, (−1) × Vyo). (6.28)

step 4. Correct the current direction angle of the vehicle to obtain the corrected angle θ1 by using the position of the object appearing in the image.

step 5. Compute the angle ρ between the object coordinate system and the global coordinate system by using the direction angle θ1 by the following equation:

ρ = θ1 − π/2 − Φ. (6.30)

step 6. Correct the coordinates of the vehicle in the GCS by using the learned object data by the following equations:

xv = Oxv × sinρ − Oyv × cosρ + Learn_xobj,i; (6.31)

yv = Oxv × cosρ − Oyv × sinρ + Learn_yobj,i. (6.32)

step 7. The vehicle continues navigating according to the navigation strategy for straight-line sections described in Section 6.3.

The main idea of correcting the vehicle position is to utilize the recognized monitored object GCS coordinates to modify the odometer values of the vehicle.

When the vehicle detects an object and considers it to be a learned object, we can use its GCS coordinates. From the image, we can compute its VCS coordinates. The origin of the VCS is the center of the vehicle, and the coordinates of the object, Vxobj and Vyobj, are the distances relative to the vehicle, as shown in Figure 6.10(a). Since we transform the VCS into the coordinate system whose origin is the object center, we can get the vehicle coordinates Oxv and Oyv, as shown in Figure 6.10(b). Then, using the coordinates of the learned object, we can compute the vehicle position in the GCS.

Both the direction angle and the coordinates are considered. In order to correct the angle, we use the object detection process. When the vehicle turns toward the object in the object detection process, the object should appear at the image center if no mechanical error occurs. However, if an error does occur, we correct the direction angle of the vehicle by using the position of the object appearing in the image. The real coordinates of the vehicle in the GCS can then be obtained by our method by using Oxv and Oyv and the corrected direction angle.
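The coordinate correction of Algorithm 6.7 can be summarized by the short sketch below, which applies Equations (6.26) through (6.32) directly; the sign conventions are copied from those equations, and the corrected direction angle θ1 of step 4 is taken as given since its equation is not reproduced here.

```python
import math

# Sketch of vehicle-position correction from a recognized learned object
# (Algorithm 6.7, Eqs. 6.26-6.32). theta1 is assumed to be the already
# corrected direction angle of step 4.

def correct_vehicle_position(vx_obj, vy_obj, phi, theta1, learn_x, learn_y):
    # Eqs. (6.26)-(6.27): remove the extra search turn phi from the VCS coordinates.
    vxo = vx_obj * math.sin(phi) - vy_obj * math.cos(phi)
    vyo = vx_obj * math.cos(phi) + vy_obj * math.sin(phi)
    # Eq. (6.28): vehicle coordinates in the object coordinate system.
    oxv, oyv = -vxo, -vyo
    # Eq. (6.30): angle between the object coordinate system and the GCS.
    rho = theta1 - math.pi / 2 - phi
    # Eqs. (6.31)-(6.32): corrected vehicle coordinates in the GCS.
    xv = oxv * math.sin(rho) - oyv * math.cos(rho) + learn_x
    yv = oxv * math.cos(rho) - oyv * math.sin(rho) + learn_y
    return xv, yv
```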

As soon as we have corrected the direction and coordinates of the vehicle, the vehicle cannot return to the original node position, such as node Ni shown in Figure 6.11.
