
Chapter 3 Adaptive Space Mapping Method for Object Location Estimation

3.3 Using Multi-cameras to Expand The Range of Surveillance

3.3.3 Calculating Coordinates in The GCS

After the relative position and rotation angle are calculated, the adaptive mapping table of the second camera can be calculated by the following method. The black points in Figure 3.13 are represented by the global coordinates in the mapping table of camera 1, and we want to calculate the global coordinates of the blue points in the region of camera 2 to construct the adaptive mapping table of camera 2.


Figure 3.13 Calculating the adaptive mapping table of camera 2.

The coordinates (x, y), (x, y) and (x, y) in Figure 3.14 specify three blue points in Figure 3.13. The coordinates (x, y) specify any of the blue points, P1, and the coordinates (x, y) specify another blue point Ph in the horizontal direction of P1, and the coordinates (x, y) specify a third blue point Pv in the vertical direction of P1. Symbol p specifies the length between P1 and Ph, and  is the rotation angle between

the two cameras. Both p and  are measured in advance. We can obtain four formulas

Figure 3.14 Calculating the global coordinates of points in the GCS.

Horizontal:

$x_h = x_1 + p\cos\theta$, (3.21)

$y_h = y_1 + p\sin\theta$. (3.22)

Vertical:

$x_v = x_1 - p\sin\theta$, (3.23)

$y_v = y_1 + p\cos\theta$. (3.24)

These four formulas are used to calculate all the global coordinates of points in the FOV of the second camera. First, the coordinates (x1, y1) of blue point 1 are calculated by the method described in Section 3.3.1. Then we may calculate the coordinates of blue points 2, 3, and 4 in Figure 3.13 via Equations 3.23 and 3.24, because they are in the vertical direction of P1.

For example, when calculating point 2, p is 1W , and when calculating point 3,

p is 2Wreal, where Wreal is calculated in advance as described previously. Afterward, the horizontal points in every row are calculated by use of the points of the first column. For example, when calculating the coordinates of point 8, because this point is on the horizontal direction of point 2, Equations 3.21 and 3.22 are used and the values of x and y in them will be the coordinates of point 2, and p will be still 1Wreal. The calculating sequence is represented in Figure 3.14 by white numbers in blue circular shapes. In this way, all the coordinates of points in camera 2 can be calculated, and so the adaptive mapping table can be completed.

After all the mapping tables of every camera are constructed, we consider all the mapping tables as a combined one, and so complete the expansion of the range of surveillance.
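How the per-camera tables might be merged into one lookup can be illustrated by the sketch below; the dictionary layout is our own assumption, not the thesis's data structure.

```python
def combine_tables(per_camera):
    """per_camera: dict mapping camera id -> {(u, v) in ICS: (x, y) in GCS}.
    Returns a single lookup from (camera id, u, v) to global coordinates."""
    combined = {}
    for cam_id, table in per_camera.items():
        for (u, v), gxy in table.items():
            combined[(cam_id, u, v)] = gxy
    return combined
```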

Chapter 4

Construction of Environmental Maps and Patrolling in Learned Environments

4.1 Introduction

One goal of this study is to make an autonomous vehicle patrol in an indoor environment automatically. Patrolling points, through which the vehicle should navigate for security monitoring, can be selected freely by a user. In order to achieve this goal, an environment map should be constructed, and the vehicle should have the ability to avoid static and dynamic obstacles, or a crash may happen. Besides, autonomous vehicles usually suffer from mechanical errors, and such errors will cause the vehicle to deviate from the right path, so an automatic path correction process is needed. The process should correct the position and direction of the vehicle at appropriate times and spots.

4.2 Construction of Environment Maps

For an autonomous vehicle to patrol in a complicated indoor environment, the environment map should be constructed first. An environment map in this study includes a two-dimensional Boolean array, and each element in the array represents one square centimeter in the global coordinate system. If the value of an element in the array is true, that means there are some obstacles in that region, and the autonomous vehicle cannot go through it. On the other hand, if the value of an element in the array is false, that means there is no obstacle in that region, and the autonomous vehicle can go through it.
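As a concrete illustration of this representation, the short sketch below builds such a Boolean array with one element per square centimeter; the room dimensions and helper names are hypothetical.

```python
# Hypothetical 10 m x 8 m room at 1 cm resolution: True = obstacle present.
WIDTH_CM, HEIGHT_CM = 1000, 800
env_map = [[False] * WIDTH_CM for _ in range(HEIGHT_CM)]

def mark_obstacle(x_cm, y_cm):
    env_map[y_cm][x_cm] = True          # this square centimeter is blocked

def is_passable(x_cm, y_cm):
    return not env_map[y_cm][x_cm]      # vehicle may go through this cell
```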

Furthermore, in this study we use multiple fish-eye cameras affixed to the ceiling to capture images of the environment, and use these images to construct the environment map. The major steps are described below.

1. Find the region of the ground in the captured images by a region growing technique.

2. Use the combined space mapping table to transform the image coordinates of the ground region into global coordinates to construct a rough environment map.

3. Eliminate broken areas in the rough environment map to get the desired environment map.

More details are described subsequently.

4.2.1 Finding Region of Ground by Region Growing Technique

A region growing method is used in this study to find the region of the ground, as mentioned previously. First, a seed is selected by a user from the ground part in the image as the start point, and the eight neighboring points of this start point are examined to check whether they belong to the region or not. The proposed scheme for this connected-component check will be described later. Then each of the points decided to belong to the region is used as a seed again, and the connected-component check is repeated until no more region points can be found. More details of the method are described as an algorithm in the following.

Algorithm 4.1: Finding the region of the ground in an image.

Input: An image taken by a camera on the ceiling.

Output: The image coordinates of the region of the ground in the image.

Steps:

1 Select a seed P from the ground part in the image manually as the start point, and regard it as a ground point.

2 Check the eight neighboring points Ti of P to see if they belong to the region or not.

2.1 Find all the neighboring points Ni of Ti which belong to the region of the ground for each Ti.

2.2 Calculate the value of the similarity degree between Ti and each Ni.

2.3 Decide whether Ti belongs to the ground region according to the similarity values by the following steps.

2.3.1 Compare the values of similarity calculated in Step 2.2 with a threshold TH1 separately (the detail will be described later).

2.3.2 Calculate the number p of similarity values which are larger than TH1.

2.3.3 Calculate the number q of similarity values which are smaller than or equal to TH1.

2.3.4 Compare p with q, and if the value of p is larger than q, then mark the point Ti as not belonging to the region and go to Step 2 to process the next Ti; else, continue.

2.3.5 Calculate the similarity degree d between Ti and the averages of the color values of all pixels in the region of the ground (the detail will be described later).

2.3.6 Compare the similarity degree d with another threshold TH2, and if d is smaller than TH2, then mark Ti as belonging to the ground region; else, mark Ti as not.

2.4 Gather the points Bi, namely those points Ti which are decided to belong to the region of the ground.

3 If there are some points Bi which have not been examined yet, then regard each such Bi as a seed P and go to Step 2 again to check whether their neighboring points belong to the ground region or not.

In Steps 2.1 and 2.2, when a point Ti is examined, all the neighboring points Ni of Ti which have already been decided to belong to the ground region are found out first, and a similarity degree between Ti and each of these neighboring points, as shown in Figure 4.1, is computed. The similarity degree between two points A and B is the Euclidean distance between their RGB color values:

$d(A, B) = \sqrt{(R_A - R_B)^2 + (G_A - G_B)^2 + (B_A - B_B)^2}$. (4.1)

Figure 4.1 Illustration of calculation of the similarity degree between two image points.

In Step 2.3.1, after the similarity degrees are calculated, each degree is compared with a threshold TH1, whose value may be adjusted by a user. If TH1 is large, the scope of the found ground region will be enlarged; otherwise, it will be reduced.

In Steps 2.3.2 and 2.3.3, the two introduced values p and q are set to zero at first. The value of p represents the number of points whose similarity degree is larger than TH1, and the value of q represents the number of points whose similarity degree is not. Hence, if a degree is larger than TH1, we add one to p; otherwise, we add one to q. Afterward, in Step 2.3.4, if the value of p is larger than q, the point Ti is marked as not belonging to the region, and the algorithm goes to Step 2 again to check the next Ti. Otherwise, an additional process is conducted to examine Ti further.

Sometimes the boundary between the region of the ground and the obstacles is not very clear in images. So in Step 2.3.5, an average value AVR is calculated first, which contains three values, namely the averages Ravr, Gavr, and Bavr of the red, green, and blue values, respectively, of all the pixels in the ground region. We use AVR to decide whether the pixel Ti belongs to the ground region or not. The similarity degree d between the point Ti and AVR, as mentioned in the step, is then calculated according to a similar version of Equation 4.1. In Step 2.3.6, d is compared with another threshold TH2. If d is smaller than TH2, then the point Ti is marked as belonging to the region; else, as not.

In Step 2.4, the points Bi, namely those Ti decided to belong to the region of the ground, are gathered, and in Step 3, these points are regarded as seeds and Step 2 is repeated to check whether their neighboring points belong to the region or not. No matter whether a point Ti is decided to belong to the region or not, the point will be marked as scanned. An example is shown in Figure 4.2. The two images in Figure 4.2(a) were taken by two cameras, and Figure 4.2(b) shows the corresponding ground regions found by the above algorithm.


Figure 4.2 An example of finding the region of the ground.
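To illustrate the procedure, the following sketch implements a simplified version of Algorithm 4.1. It is not the thesis's original implementation: the RGB Euclidean distance reconstructed as Equation 4.1 serves as the similarity degree, TH1 and TH2 are the two thresholds, a running average of the region plays the role of AVR, and the image is assumed to be a 2-D list of (R, G, B) tuples.

```python
import math
from collections import deque

def similarity(a, b):
    # Eq. (4.1) as reconstructed: Euclidean distance in RGB space
    # (a smaller value means the two pixels are more alike).
    return math.sqrt(sum((ca - cb) ** 2 for ca, cb in zip(a, b)))

def grow_ground_region(image, seed, th1, th2):
    """image: 2-D list of (R, G, B); seed: (x, y) chosen by the user.
    Returns the set of pixel coordinates decided to be ground."""
    h, w = len(image), len(image[0])
    region = {seed}
    queue = deque([seed])
    sums = list(image[seed[1]][seed[0]])   # running color sum -> AVR
    while queue:
        px, py = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                tx, ty = px + dx, py + dy
                if (tx, ty) in region or not (0 <= tx < w and 0 <= ty < h):
                    continue
                cand = image[ty][tx]
                # neighbors Ni of the candidate already in the region
                neigh = [image[ny][nx] for nx in (tx - 1, tx, tx + 1)
                         for ny in (ty - 1, ty, ty + 1)
                         if (nx, ny) in region]
                votes_far = sum(similarity(cand, n) > th1 for n in neigh)
                if votes_far > len(neigh) - votes_far:
                    continue                        # Step 2.3.4: p > q, reject
                avr = [s / len(region) for s in sums]
                if similarity(cand, avr) < th2:     # Step 2.3.6: accept
                    region.add((tx, ty))
                    sums = [s + c for s, c in zip(sums, cand)]
                    queue.append((tx, ty))
    return region
```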

4.2.2 Using Image Coordinates of Ground Region to Construct a Rough Environment Map

After the image coordinates of the ground region are found in every image, we can utilize the coordinates to construct a rough environment map by the following method.

Because the mapping tables of the fisheye cameras are constructed in advance, coordinates can be converted between the ICS (image coordinate system) and the GCS (global coordinate system) freely by Algorithm 3.1. Moreover, the size of the environment map is defined in advance, so we can convert the global coordinates Gi of each point in the map into image coordinates Ii first.

Also, because all the image coordinates of the ground region Rg have been found by Algorithm 4.1 in Section 4.2.1, we can check them to see whether the image coordinates Ii specify a point pi belonging to Rg or not. If pi belongs to Rg, then we indicate the space point Pi with the global coordinates Gi in the map to be an open way; otherwise, to be an obstacle. Hence we can obtain the global coordinates of all the obstacles in the environment, and so can mark the positions of the obstacles in the map to construct a rough environment map.
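The whole construction can be summarized by a short sketch; here to_image_coords is a hypothetical stand-in for the mapping-table lookup of Algorithm 3.1, and ground_pixels is the set of ICS coordinates produced by Algorithm 4.1.

```python
def build_rough_map(width_cm, height_cm, to_image_coords, ground_pixels):
    """to_image_coords: maps GCS (x, y) to ICS (u, v) via the mapping
    table, or returns None if the point is outside the camera's FOV.
    ground_pixels: set of ICS coordinates found by Algorithm 4.1."""
    rough = [[True] * width_cm for _ in range(height_cm)]  # True = obstacle
    for y in range(height_cm):
        for x in range(width_cm):
            uv = to_image_coords(x, y)
            if uv is not None and uv in ground_pixels:
                rough[y][x] = False                        # open way
    return rough
```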

4.2.3 Eliminating Broken Areas in a Rough Environment Map

After a rough environment map is constructed, there will be a lot of broken areas in it, so a refining process is applied to eliminate these broken areas. The refining process includes two major operations: erosion and dilation. An erosion operation eliminates the noise in the map, and a dilation operation mends unconnected areas to make the map smooth.

The erosion operation examines every element in the rough map. If the value of an element E is true, that means there are some obstacles in its region R. A mask of size n×n is then put on R, and every element in the mask is examined to see whether its value is true or false. The values of the elements in the mask are then gathered to decide the new value of E: if the number of true values is larger than half of the number of elements in the mask, the new value of E is set to true; otherwise, it is set to false. In the other situation, if the value of E is false originally, the new value of E remains false. Because of this property, the erosion operation eliminates the noise in the map.

The dilation operation expands the region of every obstacle, so if there is a little gap between two regions of obstacles, the gap will be mended after the dilation operation. The dilation operation also scans all the elements in the map, and if the value of an element R is true, another mask of size m×m is put on R. Afterward, the value of every element in the mask is set to true; hence every obstacle in the map is expanded, and the gaps are mended. The degree of expansion depends on the size of the mask; that is, if m is large, the degree of expansion will be high. An example of the results is shown in Figure 4.3.


Figure 4.3 An example of refining a map. (a) A rough map. (b) A complete map after applying the erosion and dilation operations.
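A minimal sketch of the two refining operations is given below, assuming the map is a 2-D list of Booleans as described in Section 4.2; the majority rule in erode and the mask expansion in dilate follow the description above, while the border handling is our own choice.

```python
def erode(grid, n):
    """Majority filter: an obstacle cell survives only if more than half
    of the n x n mask around it is also marked as obstacle."""
    h, w, r = len(grid), len(grid[0]), n // 2
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue                    # a false element stays false
            cells = [grid[j][i]
                     for j in range(max(0, y - r), min(h, y + r + 1))
                     for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(cells) > len(cells) // 2
    return out

def dilate(grid, m):
    """Expand every obstacle by an m x m mask, mending small gaps."""
    h, w, r = len(grid), len(grid[0]), m // 2
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                for j in range(max(0, y - r), min(h, y + r + 1)):
                    for i in range(max(0, x - r), min(w, x + r + 1)):
                        out[j][i] = True
    return out
```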

4.3 Avoiding Static Obstacles

After constructing the complete environment map, we know exactly the positions of the obstacles in an indoor environment. When an autonomous vehicle navigates in the environment, it should avoid all of the obstacles automatically. When there are some obstacles on the original path from a starting point to a terminal point, the system should plan a new path to avoid the obstacles, and the new path should satisfy the following constraints:

1. The shortest path should be chosen.

2. The turning points should be within the open space where no obstacle exists.

3. The number of turning points should be reduced to the minimum.

The method we use in this study is to find several turning points to insert between the original starting point and the terminal point. Here, by a turning point, we mean a point at which the autonomous vehicle turns its direction. The lines connecting these points form the new path. As shown in Figure 4.4, the purple points are the original starting and terminal points, and the red line is the original path.

Figure 4.4 Illustration of computing turning points to construct a new path (composed of blue line segments) different from the original straight path (red line segment).

However, according to the environment map, there are some obstacles on the straight path from the starting point to the terminal one, so turning points should be calculated to construct a new path. The green points are the computed turning points, and the blue lines connecting the turning points compose the new path. The turning points can be calculated by the following algorithm.

Algorithm 4.2: Calculating turning points to construct a path through no obstacle.

Input: An environment map M, a starting point S, and a terminal point T.

Output: The coordinates and the number of turning points in the GCS, through which a path from S to T encounters no obstacle.

Steps:

Step 1. Calculate the equation of the line L connecting S and T using the coordinates of S and T.

Step 2. Check whether there are obstacles Oi on L, where i indexes the obstacles on L. If not, the final path is L and the algorithm is finished. Otherwise, continue.

Step 3. Calculate the overlapping regions Ri of L and the obstacles.

Step 4. Calculate the perpendicular bisector of each Ri.

Step 5. Extend each bisector in both directions until reaching another obstacle or the boundary of the map, and compute the two intersection points Oi′ and Oi″ so obtained.

Step 6. Calculate the middle point Ci′ between Ri and Oi′ for each i.

Step 7. Calculate the middle point Ci″ between Ri and Oi″ for each i.

Step 8. Compare the length of a path connecting S, the points Ci′ (for all i in sequence), and T with the length of another path connecting S, the points Ci″ (for all i in sequence), and T, and choose the shorter one as the new path.

Step 9. If there is no obstacle on the new path, then output the turning points on the new path. If there are some obstacles on some sub-paths, then recursively apply this algorithm to each sub-path with obstacles.

After the turning points are calculated, an alternative path is found. The process is illustrated in Figure 4.5. In Figure 4.5(a), the purple points are the starting and terminal points, the red line is the original path, and the green lines are the perpendicular bisectors. In Figure 4.5(b), the first alternative path is found by connecting the turning points, but there are still some obstacles on a sub-path of it, as shown in Figure 4.5(c). So the algorithm is applied again to the sub-path, and the final alternative path is calculated as shown in Figure 4.5(d).


Figure 4.5 An example of planning an alternative path. (a) The original path with some obstacles on it. (b) The calculated new path, with some obstacles still on a sub-path of it. (c) Repeating the algorithm on the sub-path. (d) The calculated final path.
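The recursive structure of Algorithm 4.2 can be sketched as follows. This is a simplified illustration, not the thesis's implementation: obstacles on a segment are detected by sampling rather than by an exact line equation, one clearance point is computed per blocked stretch, and both sides of the perpendicular direction are tried so the shorter detour is kept, in the spirit of Steps 5 to 8.

```python
import math

def free(env_map, x, y):
    h, w = len(env_map), len(env_map[0])
    return 0 <= int(x) < w and 0 <= int(y) < h and not env_map[int(y)][int(x)]

def segment_hits(env_map, s, t, step=1.0):
    """Sample the segment s-t; return the sample points falling on obstacles."""
    n = max(1, int(math.dist(s, t) / step))
    pts = [(s[0] + (t[0] - s[0]) * k / n, s[1] + (t[1] - s[1]) * k / n)
           for k in range(n + 1)]
    return [p for p in pts if not free(env_map, *p)]

def side_candidate(env_map, centre, normal, sign, max_d=500):
    """Walk from the blocked region's centre along one side of the
    perpendicular direction; return the midpoint of the free stretch
    between the obstacle and the next obstacle or the map boundary."""
    first_free = None
    for d in range(1, max_d):
        x = centre[0] + sign * normal[0] * d
        y = centre[1] + sign * normal[1] * d
        if free(env_map, x, y):
            if first_free is None:
                first_free = d
        elif first_free is not None:        # hit obstacle or boundary again
            return (centre[0] + sign * normal[0] * (first_free + d) / 2,
                    centre[1] + sign * normal[1] * (first_free + d) / 2)
    return None

def plan(env_map, s, t, depth=0):
    """Return a list of turning points from s to t avoiding obstacles."""
    hits = segment_hits(env_map, s, t)
    if not hits or depth > 8:
        return [s, t]
    centre = hits[len(hits) // 2]           # middle of the blocked stretch
    dx, dy = t[0] - s[0], t[1] - s[1]
    norm = math.hypot(dx, dy)
    normal = (-dy / norm, dx / norm)        # perpendicular direction
    best = None
    for sign in (1, -1):                    # try both sides, keep shorter
        c = side_candidate(env_map, centre, normal, sign)
        if c is not None:
            cost = math.dist(s, c) + math.dist(c, t)
            if best is None or cost < best[0]:
                best = (cost, c)
    if best is None:
        return [s, t]                       # no detour found; give up
    c = best[1]                             # recurse on both sub-paths
    return plan(env_map, s, c, depth + 1)[:-1] + plan(env_map, c, t, depth + 1)
```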

4.4 Patrolling in Indoor Environment

After an environment map is constructed and a path is planned, the autonomous vehicle can navigate in the indoor environment automatically. But after a while, the vehicle will start to deviate from its path because of accumulated mechanical errors; the farther the vehicle goes, the larger the deviation becomes. Hence we should correct the position of the vehicle. Besides, there may be some dynamic obstacles in the environment. When an autonomous vehicle patrols in an environment, it should avoid these obstacles, or a crash might happen. Furthermore, the FOV of a fisheye camera is finite. In order to expand the range of surveillance, we use multiple fisheye cameras in this study, so the solution to the hand-off problem will be described in this chapter, too. In order for an autonomous vehicle to patrol in an indoor environment automatically and continuously, we should solve the following problems.

1. Correcting the global coordinates of the vehicle automatically.

2. Avoiding dynamic obstacles automatically.

3. Patrolling under several cameras.

The solutions proposed in this study are described in the rest of this chapter.

4.4.1 Correcting Global Coordinates of Vehicle Automatically

Because an autonomous vehicle suffers from mechanical errors, correction of its position is needed. Furthermore, the error in the direction of the vehicle will be accumulated. In this study, we propose four correction strategies to correct the position and direction of a vehicle. The four strategies are:

1. position correction;

2. direction correction;

3. enforced direction correction;

4. ending path segment correction.

A. Vehicle location correction strategy 1 --- position correction

When a vehicle navigates to a turning point, it should correct its global coordinates. Because a turning point is the destination of a path segment, in order to avoid the accumulation of mechanical errors in this path segment, the position of the vehicle must be corrected at the turning point. In this study, images taken by the fisheye camera are used to calculate the exact position of a vehicle. First, when a vehicle navigates to a turning point, we can get the odometer values of the vehicle, which include the x-coordinate, the y-coordinate, and the direction angle of the vehicle. In this strategy, the x- and y-coordinates will be corrected. After getting the odometer values, we know the approximate position of the vehicle. Then we judge which camera should be used to get an image of the vehicle and calculate the center point of the vehicle shape in the image. When the center point is calculated, because the approximate position of the vehicle is known, only a partial image neighboring the vehicle position needs to be processed. The steps are shown in Figure 4.6.
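The camera selection and the cropping of the partial image can be sketched as follows; the table layout, the ROI size, and the function names are our own assumptions, with each mapping table taken as a lookup from GCS positions to image coordinates.

```python
def choose_camera(position, tables):
    """Pick the camera whose mapping table covers the given GCS position."""
    for cam_id, table in tables.items():
        if position in table:
            return cam_id
    return None

def vehicle_roi(odometer, tables, half_size=40):
    """odometer: (x, y, angle) read from the vehicle, in the GCS.
    tables: camera id -> {(x, y) in GCS: (u, v) in ICS}.
    Returns (camera id, ICS bounding box of the partial image to search)."""
    x, y, _angle = odometer
    pos = (int(x), int(y))
    cam = choose_camera(pos, tables)
    if cam is None:
        return None                          # position not covered by any FOV
    u, v = tables[cam][pos]                  # approximate image position
    return cam, (u - half_size, v - half_size, u + half_size, v + half_size)
```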

The method we propose to calculate the position of the vehicle is described in the following. Because the odometer values are expressed as global coordinates, the position values in the odometer should be converted into image coordinates Cimg using the corresponding mapping table. Then the partial image of the square