

4.3  Detailed Algorithm of the Navigation Process

In this section, we describe the detailed process of vehicle navigation. The flowchart of the entire navigation process is shown in Figure 4.8. With the learned information, the vehicle navigates along the learned path by visiting each recorded node consecutively and performs the designated tasks at specific positions until reaching the destination of the learned path. The entire navigation process is described in the following algorithm.

Algorithm 4.3 Navigation Process.

Input: a learned navigation path Npath with relevant guidance parameters, and learned data of camera calibration.

Output: Navigation process.

Steps.

Step 1. Read from Npath a navigation node Nnext and relevant guidance parameters.

Step 2. Turn the vehicle toward the next node Nnext.

Step 3. Check the illumination against the recorded environment intensity and conduct the dynamic exposure adjustment procedure if necessary, and then command the vehicle to navigate forward.

Step 4. Try to find obstacles; if an obstacle is found at a position too close to the vehicle, stop the vehicle, insert avoidance nodes (see Section 6.2 for the details) into the navigation path for the purpose of obstacle avoidance, and go to Step 1.

Step 5. If a sidewalk following mode is adopted, modify the vehicle direction after localizing the curb landmark by the curb detection method.

Step 6. Check whether the next node Nnext is reached by the two principles mentioned in Section 4.2.1; if not, go to Step 4.

Step 7. If a fixed obstacle is read from Npath, insert dodging nodes into the navigation path and go to Step 10.

Step 8. If a hydrant or light pole landmark is read from Npath, take the following steps and then go to Step 10.

8.1 Check the illumination in the relevant environment windows in the image for the appointed landmark against the recorded environment intensity, and then dynamically adjust the exposure if necessary.

8.2 Detect the appointed landmark, a light pole or a hydrant, and obtain the landmark position as described in Sections 5.3 and 5.4, respectively.

8.3 Use the landmark position to localize the vehicle position and modify the odometer position as described in Section 4.2.

Step 9. If a curb line calibration node is read from Npath, modify the orientation reading of the odometer by detecting and localizing a curb line segment, as described in Section 4.2.

Step 10. Repeat Steps 1 through 9 until no node remains in Npath.
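As an illustration of the control flow of Algorithm 4.3, a Python sketch is given below. It is only an outline under assumed interfaces: the npath and controller objects and all of their method names (read_next_node, detect_obstacle, insert_avoidance_nodes, and so on) are hypothetical placeholders for the procedures referenced in the steps above and in Sections 4.2, 5.3, 5.4, and 6.2, not the actual implementation of this study.

    # Illustrative sketch of Algorithm 4.3; every helper called here is a
    # hypothetical placeholder for a procedure referenced in the step comments.
    def navigate(npath, controller):
        while npath.has_remaining_nodes():                        # Step 10
            node = npath.read_next_node()                         # Step 1
            controller.turn_toward(node)                          # Step 2
            controller.adjust_exposure_if_needed(node.environment_intensity)  # Step 3
            controller.move_forward()
            reached = False
            while not reached:
                obstacle = controller.detect_obstacle()           # Step 4
                if obstacle is not None and controller.too_close(obstacle):
                    controller.stop()
                    npath.insert_avoidance_nodes(obstacle)        # see Section 6.2
                    break                                         # back to Step 1
                if node.sidewalk_following:                       # Step 5
                    controller.correct_direction_by_curb()
                reached = controller.node_reached(node)           # Step 6
            else:
                if node.fixed_obstacle:                           # Step 7
                    npath.insert_dodging_nodes(node.fixed_obstacle)
                elif node.landmark in ("light pole", "hydrant"):  # Step 8
                    controller.adjust_exposure_if_needed(node.environment_intensity)
                    position = controller.detect_landmark(node.landmark)  # Sections 5.3 and 5.4
                    controller.correct_odometer_by_landmark(position)     # Section 4.2
                elif node.curb_calibration:                       # Step 9
                    controller.correct_odometer_by_curb_line()    # Section 4.2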


Figure 4.9 Flowchart of the proposed detailed navigation process.

Chapter 5  Light Pole and Hydrant Detection in Images Using a New Space Line Detection Technique

5.1 Idea of Proposed Space Line Detection Technique

In this study, it is desired to develop a space line detection technique to localize each light pole or hydrant landmark on the navigation path for vehicle navigation.

However, in contrast to the case of a traditional projective camera, the projection of a space line on an omni-image acquired with an omni-camera is no longer a line but a conic-section curve [26]. Some techniques have been proposed for line detection in omni-images, among which is Wu and Tsai's method [26], which detects the lines of an H-shaped landmark for use in automatic helicopter landing, as illustrated in Figure 5.1. By the use of the parameters of a hyperboloidal mirror and some geometric relationships, they proved that the projection of a space line onto an omni-image is a conic-section curve. Then, by a simple technique using the 2D Hough transform, they extracted the conic-section curve in the omni-image and localized the boundary lines of the H shape for helicopter localization.

However, the above-mentioned method is based on the condition that the parameters of the hyperboloidal mirror are known, but in fact retrieving the parameters of a hyperboloidal mirror is not an easy task. Hence, by the use of the pano-mapping method, which is a more convenient omni-camera calibration method, we propose a new space line detection technique in this study. Instead of directly obtaining the projected conic-section curve of a space line in the omni-image, we obtain the space plane which goes through the desired space line and the mirror center.

The details of the proposed line detection method using the two-mirror omni-camera are introduced in Section 5.2.1. Furthermore, for the specific case of a space line which is perpendicular to the ground, we derive in this study a method to obtain its 3D information directly based on the results of the proposed line detection method.

Figure 5.1 The line detection method proposed by Wu and Tsai [26] for omni-images to conduct automatic helicopter landing. (a) Illustration of automatic helicopter landing on a helipad with a circled H shape. (b) An omni-image of a simulated helipad.

Finally, by the use of the proposed space line detection technique, the light pole and hydrant localization tasks can be accomplished for vehicle navigation in both the learning and the navigation processes. We introduce the proposed light pole and hydrant detection and localization techniques in Sections 5.3 and 5.4, respectively.

5.2 Proposed Technique for Space Line Detection

5.2.1 Line detection using pano-mapping table

In this section, we introduce the proposed space line detection technique for use on omni-images taken by the two-mirror omni-camera. As mentioned previously, instead of detecting the projection of a space line on the omni-image as in other methods, it is desired to detect the space plane which goes through the specified space line and the mirror center. The process is described in the following. It is emphasized that the pano-mapping table has been established in advance for use in this process.

Suppose that the space line L to be detected is projected by Mirror A onto the omni-image, and that P is an arbitrary space point on L. Firstly, we consider a way to represent a vector which goes through P and the mirror center in the camera system used in this study. As shown in Figure 5.2, a light ray going through the space point P is projected by Mirror A onto an image point I. The mirror center OA and P together form a vector V′p, denoted as (P′x, P′y, P′z) in the CCS CCSlocal. This vector V′p can be described using the elevation angle α and the azimuth angle θ by the following equations:

P′x = cos α cos θ;  P′y = cos α sin θ;  P′z = sin α. (5.1)

Next, owing to the slant-up placement of Mirror A discussed previously in Chapter 2, we rotate the camera coordinate system CCSlocal by the slant angle, denoted as β. By the use of the rotation matrix described in Equation (2.5), the transformation between the coordinates (X′, Y′, Z′) of the original CCS CCSlocal and the coordinates (X, Y, Z) of the rotated CCS can be described as follows:

X = X′;  Y = Y′cos β − Z′sin β;  Z = Y′sin β + Z′cos β. (5.2)

Figure 5.2 A space point with an elevation angle α and an azimuth angle θ.

By the coordinate transformation described by Equation (5.2), we can convert the vector V′p into a new one, Vp = (Px, Py, Pz), which represents the vector with the azimuth angle θ and the elevation angle α going through the mirror center in the rotated CCS and may be described by the following equations:

Px = cos α cos θ;  Py = cos α sin θ cos β − sin α sin β;  Pz = cos α sin θ sin β + sin α cos β. (5.3)
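For clarity, the conversion from an azimuth-elevation pair obtained from the pano-mapping table to the vector Vp of Equation (5.3) may be sketched in Python as follows; the sign convention of the slant rotation follows the reconstruction of Equation (5.2) above, and the angles are assumed to be given in radians.

    import numpy as np

    def direction_vector(theta, alpha, beta):
        """Vector Vp through the mirror center, Equations (5.1)-(5.3).
        theta: azimuth angle; alpha: elevation angle; beta: slant angle of Mirror A."""
        # Equation (5.1): the vector V'p in the original CCSlocal.
        vp_local = np.array([np.cos(alpha) * np.cos(theta),
                             np.cos(alpha) * np.sin(theta),
                             np.sin(alpha)])
        # Equation (5.2): rotation about the X-axis by the slant angle beta.
        rot_x = np.array([[1.0, 0.0, 0.0],
                          [0.0, np.cos(beta), -np.sin(beta)],
                          [0.0, np.sin(beta), np.cos(beta)]])
        # Equation (5.3): the vector Vp in the rotated CCS.
        return rot_x @ vp_local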

Next, considering the space line L projected onto the omni-image as the curve IL shown in Figure 5.3, we can find a space plane Q which goes through L and the mirror center OA. For this, suppose that the normal vector of Q is denoted as NQ = (l, m, n). Then, we can derive the following equation to describe the coordinates (X, Y, Z) of a point on the space plane Q:

lX + mY + nZ = 0. (5.4)

On the other hand, it is noted that the vector VP is perpendicular to NQ, so that the inner product between VP and NQ becomes zero, leading to the following equation:

lPx + mPy + nPz = 0. (5.5)

Figure 5.3 A space line L projected on IL in an omni-image.

By Equation (5.3), we can transform Equation (5.5) into an alternative form as follows:

l·cos α cos θ + m·(cos α sin θ cos β − sin α sin β) + n·(cos α sin θ sin β + sin α cos β) = 0. (5.6)

From Equation (5.6), it is desired to obtain the three unknown parameters l, m, and n which represent the normal of the space plane Q. For this purpose, we divide Equation (5.6) by n to get the following form:

B·cos α cos θ + A·(cos α sin θ cos β − sin α sin β) + (cos α sin θ sin β + sin α cos β) = 0 (5.7)

where A = m/n and B = l/n. We may rewrite the above equation further to obtain

B = −[A·(cos α sin θ cos β − sin α sin β) + (cos α sin θ sin β + sin α cos β)] / (cos α cos θ). (5.8)

In the above equation, we use two parameters A and B to represent the original three ones l, m, n. By this form, we can use a simple 2D Hough transform technique to obtain the parameters A and B, as described in detail in the following algorithm.

Algorithm 5.1 Space line detection.

Input: an input edge-point image Iedge which includes the points of the projection IL of a space line L, and the pano-mapping table for Mirror A.

Output: two parameters, Amax and Bmax, representing a normal vector of the space plane described by Equation (5.8).

Steps.

Step 1. Set a 2D Hough space S with the parameters A and B, and initialize all cell counts to be zero.

Step 2. For an edge point I at coordinates (u, v) in Iedge, look up the pano-mapping table and obtain the corresponding azimuth-elevation angle pair (θ, α).

Step 3. For each value of A, compute the corresponding value of B by Equation (5.8) using θ and α, and increment the count in the cell (A, B) of the Hough space S by one.

Step 4. Repeat Steps 2 and 3 until all the edge points in Iedge have been processed.

Step 5. Take the cell (Amax, Bmax) with a maximum count in S as output.

After the algorithm is conducted, we can obtain the normal vector (l, m, n) of the desired space plane Q in another form represented by the two parameters A = m/n and B = l/n.
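A minimal Python sketch of the 2D Hough voting of Algorithm 5.1 is given below. The parameter ranges of A and B, the accumulator resolution, and the representation of the pano-mapping table as a lookup from (u, v) to (θ, α) are assumptions made only for this illustration.

    import numpy as np

    def detect_space_plane(edge_points, pano_table, beta,
                           a_range=(-5.0, 5.0), b_range=(-5.0, 5.0), bins=200):
        """Sketch of Algorithm 5.1: 2D Hough voting for the plane parameters (A, B).
        edge_points: iterable of (u, v) edge pixels of the projection IL;
        pano_table: maps (u, v) to the azimuth-elevation pair (theta, alpha);
        beta: slant angle of Mirror A."""
        accumulator = np.zeros((bins, bins), dtype=int)
        a_values = np.linspace(*a_range, bins)
        for (u, v) in edge_points:
            theta, alpha = pano_table[(u, v)]
            c0 = np.cos(alpha) * np.cos(theta)
            c1 = np.cos(alpha) * np.sin(theta) * np.cos(beta) - np.sin(alpha) * np.sin(beta)
            c2 = np.cos(alpha) * np.sin(theta) * np.sin(beta) + np.sin(alpha) * np.cos(beta)
            if abs(c0) < 1e-9:
                continue                      # this edge point gives no constraint on B
            for i, a in enumerate(a_values):  # vote along the line B = -(A*c1 + c2)/c0, Eq. (5.8)
                b = -(a * c1 + c2) / c0
                j = int(round((b - b_range[0]) / (b_range[1] - b_range[0]) * (bins - 1)))
                if 0 <= j < bins:
                    accumulator[i, j] += 1
        i_max, j_max = np.unravel_index(np.argmax(accumulator), accumulator.shape)
        a_max = a_values[i_max]
        b_max = b_range[0] + j_max * (b_range[1] - b_range[0]) / (bins - 1)
        return a_max, b_max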

Furthermore, if L is a vertical space line, which means that the normal vector of the space plane Q is parallel to the ground, then it is easy to see that m is equal to zero. Thus Equation (5.8) can be reduced to the following equation:

B = −(cos α sin θ sin β + sin α cos β) / (cos α cos θ). (5.9)

In a similar way as described in Algorithm 5.1, we can use a 1D Hough transform to find the parameter B, which represents a normal vector of the specific space plane through a vertical space line and the mirror center.
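The 1D case may be sketched in the same style as the 2D voting shown earlier; each edge point now votes for a single value of B computed by Equation (5.9), again under assumed parameter range and bin count.

    import numpy as np

    def detect_vertical_space_plane(edge_points, pano_table, beta,
                                    b_range=(-5.0, 5.0), bins=400):
        """1D Hough voting for the parameter B of Equation (5.9) (vertical lines)."""
        accumulator = np.zeros(bins, dtype=int)
        for (u, v) in edge_points:
            theta, alpha = pano_table[(u, v)]
            c0 = np.cos(alpha) * np.cos(theta)
            c2 = np.cos(alpha) * np.sin(theta) * np.sin(beta) + np.sin(alpha) * np.cos(beta)
            if abs(c0) < 1e-9:
                continue
            b = -c2 / c0                                   # Equation (5.9), i.e. (5.8) with A = 0
            j = int(round((b - b_range[0]) / (b_range[1] - b_range[0]) * (bins - 1)))
            if 0 <= j < bins:
                accumulator[j] += 1
        j_max = int(np.argmax(accumulator))
        return b_range[0] + j_max * (b_range[1] - b_range[0]) / (bins - 1)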

5.2.2 3D data computation using a vertical space line

In this section, based on the proposed space line detection technique described above, we can derive the 3D data of a vertical space line (such as the boundary lines of a light pole or the vertical axis of a hydrant) from the omni-image, as described subsequently.

As shown in Figure 5.4, a vertical space line L is projected onto IL1 and IL2 in the regions of Mirrors A and B, respectively. The center OA of Mirror A is located at coordinates (0, 0, 0) in the CCS as we previously assumed. Thus, with the slant angle denoted as β and the length of the baseline denoted as b as shown in Figure 5.4, we can derive the position of the center OB of Mirror B to be at coordinates (0, b sin β, b cos β). Next, according to Equation (5.4), the equations of the two space planes Q1 and Q2 going through L and the mirror centers, OA and OB, respectively, can be described as follows:

l1X + m1Y + n1Z = 0; (5.10)

l2X + m2(Y − b sin β) + n2(Z − b cos β) = 0 (5.11)

where (l1, m1, n1) represents the normal vector of Q1 and (l2, m2, n2) represents that of Q2.

In addition, by the reason that the space line L is perpendicular to the ground, we know that m1 and m2 are both zero. Thus, the above two space plane equations can be reduced into the following forms:

l1X + n1Z = 0; (5.12)

l2X + n2(Z − b cos β) = 0 (5.13) which are equivalent to

B1X + Z = 0; (5.14)

B2X + (Z − b cos β) = 0 (5.15) where B1 = l1/n1 and B2 = l2/n2.

Figure 5.4 A space line projected onto IL1 and IL2 on two mirrors in the used two-mirror omni-camera.

By solving Equations (5.14) and (5.15), we can obtain the following equations to describe the position of the vertical space line L:

X = b cos β / (B2 − B1);  Z = −B1·b cos β / (B2 − B1). (5.16)

In conclusion, for a vertical space line projected onto the regions of both Mirrors A and B in the omni-image, after conducting the proposed line detection in these two regions and finding the pair of corresponding space planes using Algorithm 5.1, we can use Equation (5.16) to compute the location of the vertical space line directly.
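Under the notation above, Equation (5.16) amounts to the following small computation, where b1 and b2 denote the plane parameters obtained from the Mirror A and Mirror B detections, respectively (a sketch only; the argument names are illustrative):

    import math

    def vertical_line_position(b1, b2, baseline, beta):
        """Position (X, Z) of a vertical space line by Equation (5.16);
        baseline is the baseline length b and beta is the slant angle."""
        x = baseline * math.cos(beta) / (b2 - b1)   # from B1*X + Z = 0 and
        z = -b1 * x                                 # B2*X + (Z - b*cos(beta)) = 0
        return x, z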

5.3 Method of Light Pole Detection

The idea of the proposed method for light pole localization is to use two vertical boundary lines of the light pole to estimate the position of the light pole. The entire process for light pole position computation is shown in Figure 5.5. Firstly, the proposed technique to detect two boundary lines of the light pole is introduced in Section 5.3.1. Then, the computation of the light pole location is described in Section 5.3.2. Finally, some experimental results for light pole detection are given in Section 5.3.3.

5.3.1 Light pole boundary detection

In this section, we describe how to detect the two boundary lines of a light pole in an omni-image. The proposed method consists of two stages. Firstly, we conduct light pole segmentation by the Canny edge detection technique to obtain the boundary points of the light pole. Then, with the resulting edge-point image, we use the above-mentioned space line detection technique to find the two vertical boundary lines based on a 1D Hough transform technique. In this way, we can obtain two space planes which go through one of the two light pole boundary lines, as well as two other space planes which go through the other boundary line, and use these results to compute the light pole location, as described in the next section. The detailed algorithm for the just-mentioned idea of light pole detection is described as follows.

Figure 5.5 Proposed method of light pole localization.

Algorithm 5.2 Light pole boundary line detection.

Input: an input image Iinput, two pano-mapping tables for Mirrors A and B, and a set of environment windows Winlp.

Output: two parameters BA1 and BB1 of the two space planes which go through one of the two boundary lines of the light pole and through the Mirror A center and the Mirror B center, respectively; and two other parameters BA2 and BB2 of the two space planes which go through the other boundary line of the light pole and through the Mirror A center and the Mirror B center, respectively.

Steps.

Step 1. For Iinput, use the Canny edge detector to conduct edge detection to extract the feature points of the boundary lines of the light pole, and obtain an edge-point image Iedge.

Step 2. Set a 1D space S with parameter B and initialize all cell counts to be zero.

Step 3. For each edge point I at coordinates (u, v) in winB of Winlp, look up the pano-mapping table for Mirror A to obtain an azimuth angle θ and an elevation angle α.

Step 4. Compute B by Equation (5.9) using θ and α, and increment by 1 the value of the cell with parameter B in S.

Step 5. Repeat Steps 3 and 4 until all edge points in winB of Winlp have been processed.

Step 6. Find the two cells, denoted as B1 and B2, with the two maximum counts in the space S.

Step 7. If B1 > B2, set BA1 = B1 and BA2 = B2; else, set BA1 = B2 and BA2 = B1.

Step 8. Take BA1 and BA2 as outputs.

Step 9. In the same way, repeat Steps 2 through 8 in winS of Winlp for Mirror B and take the obtained two corresponding parameters BB1 and BB2 as outputs.

5.3.2 Light pole position computation

After successfully detecting the two boundary lines of a light pole, we can use them to compute the light pole location. The proposed technique for this is described in this section. At first, by the use of the corresponding space planes obtained in the previous section, we compute the locations of the two light pole boundary lines, denoted as Lin and Lout, respectively, in the CCS as illustrated in Figure 5.6. Then, the two corresponding points, Gin and Gout, on the ground can be obtained from the derived equations of Lin and Lout. Next, we check whether the distance between Gin and Gout is close to the known diameter of the light pole. If not, we decide that the two detected vertical space lines are not the boundary lines of the light pole. Finally, we compute the center position between Gin and Gout for use as the light pole position Glp. The detailed algorithm to estimate the light pole position is described in the following algorithm.

Figure 5.6 Two obtained boundary lines Lin and Lout of the light pole in the CCS.

Algorithm 5.3 Light pole position computation.

Input: two corresponding space plane parameters BA1 and BB1, and two other corresponding parameters BA2 and BB2 obtained from Algorithm 5.2, of a light pole appearing in an omni-image.

Output: a light pole position Glp in the CCS.

Steps.

Step 1. By BA1 and BB1, compute one boundary space line L1 of the light pole by Equation (5.16) and obtain its equation as follows:

X = X1;  Z = Z1. (5.17)

Step 2. By BA2 and BB2, compute the other boundary space line L2 of the light pole by Equation (5.16) and derive its equation as follows:

X = X2;  Z = Z2. (5.18)

Step 3. Compute the distance d between the two lines by the following equation:

d = √[(X2 − X1)² + (Z2 − Z1)²]. (5.19)

Step 4. If the difference between d and the known diameter of the light pole is smaller than a pre-defined threshold ThD, then go to Step 5; else, show the message that there is no light pole and exit.

Step 5. Compute the coordinates (xlp, ylp, zlp) of the light pole position Glp in the CCS as follows:

xlp = (X1 + X2)/2;  ylp = -H;  zlp = (Z1 + Z2)/2 (5.20) where H is the height of the camera center; and take Glp as the output.
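For illustration, the whole of Algorithm 5.3 may be sketched as follows; the argument names, the behavior of returning nothing when the diameter check fails, and the threshold handling are assumptions made for this sketch only.

    import math

    def light_pole_position(ba1, bb1, ba2, bb2, baseline, beta,
                            pole_diameter, threshold_d, camera_height):
        """Sketch of Algorithm 5.3: light pole position from the plane parameters
        obtained by Algorithm 5.2; returns None if the diameter check fails."""
        def line_xz(b_a, b_b):
            # Equation (5.16): position of a vertical line from its plane parameters.
            x = baseline * math.cos(beta) / (b_b - b_a)
            return x, -b_a * x
        x1, z1 = line_xz(ba1, bb1)                    # boundary line L1, Eq. (5.17)
        x2, z2 = line_xz(ba2, bb2)                    # boundary line L2, Eq. (5.18)
        d = math.hypot(x2 - x1, z2 - z1)              # distance d, Eq. (5.19)
        if abs(d - pole_diameter) > threshold_d:      # Step 4: diameter check
            return None                               # no light pole is found
        # Step 5, Equation (5.20): mid-point of the two boundary lines on the ground.
        return (x1 + x2) / 2.0, -camera_height, (z1 + z2) / 2.0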

5.3.3 Experimental results for light pole detection

An input image with the projection of a light pole on the regions of Mirrors A and B is shown in Figure 5.7(a). After conducting Canny edge detection, we obtain an edge-point image as shown in Figure 5.7(b). By this edge image, we use the proposed line detection method to extract two light pole boundary lines, and the two 1D Hough spaces of the parameter B for Mirrors A and B are shown in Figures 5.8(a) and 5.8(b), respectively. The result of light pole detection is shown in Figure 5.9 and the relative light pole position with respect to the vehicle is shown in Figure 5.10.

5.4 Method of Hydrant Detection

In this section, we introduce the proposed method to localize a hydrant. At first, we introduce the method used to describe a hydrant contour and the learning of the hydrant contour in Section 5.4.1. Next, by the use of dynamic threshold adjustment and vertical line localization techniques, we can extract a representative structural feature of the hydrant, namely, its axis, and then estimate the position of the axis, as described in Section 5.4.2. Also, some experimental results for hydrant detection by the proposed method are given in Section 5.4.3.

Figure 5.7 Two omni-images with the projection of the light pole. (a) The input image. (b) The resulting edge image after Canny edge detection.


Figure 5.8 Two 1D accumulator spaces with parameter B. (a) For Mirror A. (b) For Mirror B.

Figure 5.9 The result image of light pole detection. Two boundary lines are illustrated as the red and blue curves.

Figure 5.10 A computed light pole position, the yellow point, with respect to the vehicle position, the blue point, in the VCS. Two boundary lines are located at the blue and red positions.

5.4.1 Hydrant contour description

In hydrant detection, for the purpose of inspecting the results of hydrant segmentation in the image, we use a simple description with two specific parameters obtained by principal component analysis. Specifically, after obtaining the hydrant segmentation results, we compute the covariance matrix Cx of the feature point positions in the image. After obtaining the two eigenvalues and the two corresponding eigenvectors of the matrix Cx, we compute the length ratio of the two eigenvalues of Cx and the rotational angle between the ICS and the principal component, respectively. Then, we use these two parameters to describe the hydrant contour as shown in Figure 5.11. The details of obtaining these two parameters are described in the following algorithm.

Algorithm 5.4 Hydrant contour parameter computation.

Input: an input bi-level image Iinput which includes the feature points of a hydrant appearing in an omni-image.

Output: two hydrant contour parameters, namely a rotational angle and a length ratio.

Steps.

Step 1. Scan each feature point p with coordinates (u, v) in Iinput, compute the center mx = (ux, vx) of all the feature points using their coordinates, and calculate the covariance matrix Cx of the feature point coordinates.

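The two contour parameters of Section 5.4.1 may be computed as in the following Python sketch; taking the length ratio as the smaller eigenvalue over the larger one, and measuring the rotational angle against the u-axis of the ICS, are assumptions made for this illustration.

    import numpy as np

    def hydrant_contour_parameters(points):
        """Length ratio of the two eigenvalues of the covariance matrix Cx and the
        rotational angle of the principal component with respect to the ICS."""
        pts = np.asarray(points, dtype=float)           # feature points (u, v)
        centered = pts - pts.mean(axis=0)               # Step 1: subtract the center mx
        cov = np.cov(centered, rowvar=False)            # covariance matrix Cx
        eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        ratio = eigvals[0] / eigvals[1]                 # length ratio of the two eigenvalues
        principal = eigvecs[:, 1]                       # eigenvector of the larger eigenvalue
        angle = np.arctan2(principal[1], principal[0])  # rotational angle w.r.t. the u-axis
        return ratio, angle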