
Chapter 5  Light Pole and Hydrant Detection in Images Using a New Space Line

5.4  Method of Hydrant Detection

5.4.2  Hydrant detection and localization

By the symmetric shape of a common hydrant, the idea of the proposed method for hydrant localization is to detect the vertical axis line of the hydrant using principal component analysis, and localize the hydrant by this line. The entire process to localize a hydrant in this way is shown in Figure 5.12, and two stages of works conducted in this process are described in the following.

Figure 5.12 Proposed method of hydrant localization.

(A) Hydrant feature extraction by dynamic color thresholding

Due to the special color of the hydrant, we utilize the color information to extract the hydrant contour from an image. Specifically, by the use of the HSI color space, we use only the hue and saturation values to classify the hydrant feature in order to ignore the influence of the varying image intensity caused by the time-changing lighting condition in the outdoor environment. The conversion of color values from the RGB color space to the HSI color space is as follows:

H = cos⁻¹{ [(R − G) + (R − B)] / [2√((R − G)² + (R − B)(G − B))] }, with H replaced by 360° − H if B > G,

S = 1 − 3·min(R, G, B)/(R + G + B),

I = (R + G + B)/3, (5.22)

where R, G, and B denote the normalized red, green, and blue values of the pixel.
According to our experimental experience, we define two hue values, denoted as Hmin and Hmax, as the lower and upper hue threshold bounds for extracting the red feature of the hydrant. Similarly, we define two saturation values, denoted as Smin and Smax, as the lower and upper saturation threshold bounds for extracting the surface feature of the hydrant. These threshold values are used together to classify the hydrant feature points.

Furthermore, varying lighting conditions will influence the hue and saturation features. Based on the learned hydrant contour, we conduct dynamic color thresholding to adjust the recorded saturation threshold value of Smin in a fixed range [S0, S1], where S0 and S1 are learned in advance in different lighting conditions in the learning stage. We describe the overall method to extract the feature points of the hydrant in detail in the following algorithm.

Algorithm 5.4 Hydrant detection by dynamic thresholding.

Input: an input image Iinput including a hydrant; the four learned hydrant contour parameters θmin, θmax, ρmin, and ρmax (the bounds on the rotational angle and the length ratio, respectively); two hue threshold values Hmin and Hmax; two saturation thresholds Smin and Smax; and a set of environment windows Winhyd.

Output: a bi-level image Ibi with feature points of the hydrant, and an adjusted saturation threshold Smin.

Steps.

Step 1. Initialize an empty bi-level image Ibi for labeling feature points and set all pixel values as zero.

Step 2. Scan each pixel Iuv with coordinates (u, v) in Winhyd, compute its hue value huv and saturation value suv by Equation (5.22), and if huv is between Hmin and Hmax and suv is between Smin and Smax, then label Iuv by “1” in Ibi.

Step 3. Apply erosion and dilation operations to the bi-level image Ibi.

Step 4. Conduct image connected component labeling, and find a maximum connected component M in Ibi.

Step 5. Apply Algorithm 5.3 to M in Ibi to obtain two contour parameters of M, the rotational angle θ and the length ratio ρ.

Step 6. If min <  < max and min <  < max, then take M in Ibi and Smin as outputs;

else, adjust the threshold Smin in the range [S0, S1] and go to Step 1.
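As an illustration of the flow of Algorithm 5.4, a compact Python sketch is given below; OpenCV is used for the morphology and connected-component steps, and extract_contour_parameters stands in for Algorithm 5.3, so the function and parameter names here are placeholders rather than the thesis implementation.

import cv2
import numpy as np

def detect_hydrant(hsi_img, window, h_rng, s_rng, contour_bounds,
                   extract_contour_parameters, s_candidates):
    """Dynamic-thresholding sketch of Algorithm 5.4.

    hsi_img: H x W x 3 array of (hue, saturation, intensity) values.
    window:  boolean mask of the environment window Winhyd.
    h_rng:   (Hmin, Hmax); s_rng: (Smin, Smax).
    contour_bounds: (theta_min, theta_max, rho_min, rho_max) learned bounds.
    s_candidates: candidate Smin values inside the learned range [S0, S1].
    """
    hue, sat = hsi_img[..., 0], hsi_img[..., 1]
    h_min, h_max = h_rng
    _, s_max = s_rng
    th_min, th_max, rho_min, rho_max = contour_bounds
    kernel = np.ones((3, 3), np.uint8)

    for s_min in s_candidates:                       # Step 6: adjust Smin in [S0, S1]
        # Steps 1-2: label hydrant feature points inside the window.
        mask = (window & (hue >= h_min) & (hue <= h_max) &
                (sat >= s_min) & (sat <= s_max)).astype(np.uint8)
        # Step 3: erosion and dilation to remove noise.
        mask = cv2.dilate(cv2.erode(mask, kernel), kernel)
        # Step 4: keep the largest connected component M.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        if n < 2:
            continue
        biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        component = (labels == biggest).astype(np.uint8)
        # Step 5: contour parameters (rotational angle, length ratio) of M.
        theta, rho = extract_contour_parameters(component)
        # Step 6: accept only if the learned contour bounds are satisfied.
        if th_min < theta < th_max and rho_min < rho < rho_max:
            return component, s_min
    return None, None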

(B) Hydrant position computation by the vertical axis line of the hydrant

Using the results of the hydrant contour extraction process described above, we further find the vertical axis line of the hydrant in order to localize the hydrant. We assume that the desired vertical axis line goes through both centers of the hydrant appearing in the regions of Mirrors A and B in the omni-image. After extracting the two center positions of the hydrant in the regions of Mirrors A and B, we can further obtain the two space planes which go through the axis line and the two mirror centers, respectively, by the use of the proposed vertical line detection method.

Finally, we can obtain the hydrant position by the located axis line using the information of the two space planes. The detailed process is described as follows.

Algorithm 5.5 Hydrant location computation.

Input: an input bi-level image Ibi which includes hydrant feature points, and an environment window Winhyd.

Output: a hydrant position Ghyd in the CCS.

Steps.

Step 1. Compute the center CB with coordinates (uB, vB) of the hydrant feature points in winB of Winhyd and the center CS with coordinates (uS, vS) of the hydrant feature points in winS of Winhyd.

Step 2. Look up the pano-mapping table to obtain the corresponding elevation angle αB and azimuth angle θB of CB and the corresponding elevation angle αS and azimuth angle θS of CS.

Step 3. By Equation (5.9), compute the parameter value BB corresponding to CB using θB and αB, as well as the parameter value BS corresponding to CS using θS and αS.

Step 4. By the use of BB and BS, compute the position coordinates X and Z of the axis line L of the hydrant by Equation (5.16).

Step 5. Compute the hydrant position Ghyd with coordinates (xhyd, yhyd, zhyd) in the CCS as follows:

xhyd = X; yhyd = -H; zhyd = Z (5.23)

where H is the height of the camera center.

Step 6. Take Ghyd as output.

5.4.3 Experimental results for hydrant detection

Some experimental results for hydrant detection are shown in this section. The input image, with the hydrant appearing in the regions of both Mirrors A and B, is shown in Figure 5.13. The result of hydrant segmentation using the initial threshold values is shown in Figure 5.14(a). Next, the result of hydrant segmentation by dynamic thresholding is shown in Figure 5.14(b). We can see that the extracted contour in Figure 5.14(b) is more similar to the real shape of the hydrant. Finally, the result of detecting the vertical axis line of the hydrant and the obtained hydrant position are shown in Figure 5.15.

Figure 5.13 The input omni-image with a hydrant.

(a)

(b)

Figure 5.14 Two result images of hydrant segmentation with different threshold values. (a) The result of hydrant segmentation with original threshold values. (b) The result image of hydrant segmentation by dynamic thresholding.

(a)

(b)

Figure 5.15 The result of hydrant detection and obtained hydrant position. (a) The result image of extracting the vertical axis line of the hydrant. (b) The related hydrant position with respect to the vehicle position.

Chapter 6  Curb Line Following and Obstacle Avoidance in Navigation

6.1 Proposed Technique of Curb Line Following

To conduct vehicle navigation on sidewalks, we propose a technique to detect a curb line and compute its location with respect to the vehicle. Then, with the localized curb line, we can guide the vehicle to follow the curb and also calibrate the orientation odometer reading in the navigation process. In this system, we detect the curb line by the use of the projection of the curb line on the region of Mirror A in the omni-image.

We know that the detected curb image points lie on the ground in the real world, so the position of the curb line can be computed directly by the use of a single camera, i.e., the one with Mirror A in the proposed camera system.

In the remainder of this chapter, the proposed method to extract curb boundary points is introduced in Section 6.1.1. By the use of the method for curb boundary extraction, we conduct curb line localization by the proposed dynamic threshold adjustment technique described in Section 6.1.2. Finally, after deriving the location of the curb line, we propose a method for the vehicle to navigate by following the curb line in the navigation process as introduced in detail in Section 6.1.3. Some experimental results for curb detection are shown in Section 6.1.4.

6.1.1 Curb line boundary points extraction

For curb line detection on an omni-image, we define an environment window, denoted as Wincurb, to specify a specific region of Mirror A in the image. Given an input omni-image obtained from the omni-camera, we perform the following three steps to compute the relative position of a detected curb boundary point with respect to the vehicle.

(1) Curb feature detection by the use of color information

Because of the special color of the curb (which is red in our experimental environment), we use the color information to extract the curb feature using the HSI color model. Like the method for hydrant feature extraction discussed previously in Section 5.4.2, we classify the curb feature in the image by the use of two hue threshold values, denoted as Hmin and Hmax, and two saturation threshold values, denoted as Smin and Smax, as the lower and upper bounds for thresholding the hue and saturation values, respectively. Then, by thresholding the hue and saturation values of each point in the image, we can obtain a set of curb feature points and label their positions on a bi-level image Ibi for use in the next step.

(2) Curb boundary point detection

With the bi-level image Ibi, which includes the curb feature points, we can start to find the inner boundary points of the curb line. In image Ibi, we scan each pixel from top to bottom and from right to left in Wincurb as illustrated in Figure 6.1, and record the first found feature point in each row as a curb boundary point. After scanning all the rows, we obtain the curb boundary point positions in the image and label them as red points, as shown in Figure 6.1.

(3) Computation of curb boundary point positions

After deriving the curb boundary points in the omni-image, we compute the boundary positions in the CCS. Suppose that a curb point is found on the ground, such as the point P illustrated in Figure 6.2. By the use of Mirror A, point P at coordinates (X, Y, Z) is projected onto the omni-image with an elevation angle α and an azimuth angle θ. As described previously in Section 5.2.1, we can represent the vector from the mirror center OA to a space point P using the related elevation and azimuth angles by the use of Equation (5.3), which is repeated in the following:

(Px, Py, Pz) = (cos α cos θ, sin α, cos α sin θ). (5.3)

In other words, we can represent the position of P in the CCS in the form described by Equation (5.3). Besides, because the height H of the center of Mirror A is known in advance, we can further derive Y = −H. Hence, by the proportions among Px, Py, and Pz and the known parameter Y, the position of the ground point P can be computed by dividing Equation (5.3) by Py and then multiplying the result by −H, leading to the following equations which describe the position of P:

X = −H·(cos α cos θ)/sin α,  Y = −H,  Z = −H·(cos α sin θ)/sin α. (6.1)

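Under this reconstruction of Equations (5.3) and (6.1), the ground-point position follows directly from the elevation and azimuth angles and the known camera height H; the function below is a small sketch of that computation under the stated axis assumption, not the thesis code.

import math

def ground_point_from_angles(alpha, theta, H):
    """Position (X, -H, Z) of a ground point seen at elevation alpha and
    azimuth theta (radians) by a camera center at height H, following the
    reconstruction of Equation (6.1) above."""
    scale = -H / math.sin(alpha)              # divide Eq. (5.3) by Py, scale by -H
    X = scale * math.cos(alpha) * math.cos(theta)
    Z = scale * math.cos(alpha) * math.sin(theta)
    return X, -H, Z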
Finally, the details of the above three major steps for curb line boundary extraction are described in the following algorithm, and the obtained curb boundary point positions in the VCS will be used to estimate the location of the curb line, as introduced in the next section.

Figure 6.1 A detected curb line and the inner boundary points of the curb line on the omni-image.

Figure 6.2 A ground point P projected onto Mirror A.

Algorithm 6.1 Extraction of curb boundary points.

Input: an input image Iinput, two hue threshold values Hmin and Hmax, two saturation threshold values Smin and Smax, and an environment window Wincurb.

Output: a set Scurb of the positions of the curb boundary points in Iinput in the VCS.

Steps.

Step 1. Initialize a bi-level image Ibi.

Step 2. Scan each point Iuv at coordinates (u, v) in Wincurb in Iinput, and by Equation (5.22) compute its hue value huv and saturation value suv. If huv is between Hmin and Hmax and suv is between Smin and Smax, then label Iuv by “1” in Ibi.

Step 3. Apply erosion and dilation operations to the bi-level image Ibi.

Step 4. Scan each row from right to left in Wincurb in Ibi and find the first labeled point Bj at coordinates (u, v).

Step 5. Look up the pano-mapping table to obtain the corresponding elevation and azimuth angle pair (α, θ), and compute the boundary point position BCCS in the CCS by the use of Equation (6.1).

Step 6. Calculate the corresponding position BVCS of the point in the VCS by the coordinate transformation described by Equation (3.6) with BCCS as input, and record BVCS into the set Scurb.

Step 7. Repeat Steps 4 through 6 until all rows in Wincurb have been scanned.
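Steps 4 through 7 of Algorithm 6.1 amount to scanning each row of Wincurb from right to left and keeping the first labeled point; the snippet below sketches that loop, where image_point_to_vcs is a placeholder for the pano-mapping lookup followed by Equations (6.1) and (3.6), not the thesis implementation.

def extract_curb_boundary_points(binary_img, window_rows, window_cols,
                                 image_point_to_vcs):
    """Sketch of Algorithm 6.1, Steps 4 through 7.

    binary_img:         bi-level image Ibi (1 = curb feature point).
    window_rows/cols:   row and column indices covered by Wincurb.
    image_point_to_vcs: maps an image point (u, v) to its VCS position,
                        i.e. pano-mapping lookup + Eq. (6.1) + Eq. (3.6).
    """
    s_curb = []
    for v in window_rows:                              # scan rows top to bottom
        for u in sorted(window_cols, reverse=True):    # scan each row right to left
            if binary_img[v, u]:                       # first labeled point = boundary
                s_curb.append(image_point_to_vcs(u, v))
                break
    return s_curb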

6.1.2 Curb line localization by dynamic color thresholding

To localize a detected curb line segment, we first assume that the curb line segment in the image is a straight line. This is reasonable because the projection of the curb line on the omni-image covers only a small part of the whole curb line. Thus, we may approximate the detected curb line using a linear function by a line fitting technique.

Specifically, using the boundary point positions by the method discussed previously in the last section, we can fit the data to a line L and obtain the equation of L as follows:

y = ax + b, (6.5)

where the two parameters a and b are calculated by the following equations:

a = [N·Σ xiyi − (Σ xi)(Σ yi)] / [N·Σ xi² − (Σ xi)²],
b = (Σ yi − a·Σ xi)/N, (6.6)

with (xi, yi) being the position coordinates of a boundary point, N the number of boundary points, and all sums taken over i = 1 through N.
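For reference, the least-squares coefficients of Equations (6.6) can be computed directly as sketched below (an illustrative NumPy implementation, not the thesis code).

import numpy as np

def fit_curb_line(points):
    """Least-squares line y = a*x + b through the boundary points (Eqs. (6.6))."""
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    n = len(points)
    a = (n * np.sum(xs * ys) - np.sum(xs) * np.sum(ys)) / \
        (n * np.sum(xs ** 2) - np.sum(xs) ** 2)
    b = (np.sum(ys) - a * np.sum(xs)) / n
    return a, b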

Furthermore, by the use of this model, we can estimate a more precise position of the curb line using the proposed dynamic color thresholding technique mentioned previously. To be more specific, we conduct dynamic threshold adjustment for curb detection by adjusting the saturation threshold Smin in a pre-defined fixed range [S0, S1]. After using all possible pre-selected threshold values in this range to extract curb boundary points, we select the saturation threshold value that yields the minimum sum of errors in fitting the curb boundary points with the computed line. The entire process for curb line location computation is shown in Figure 6.3 and the detailed algorithm is described in the following.
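A possible form of this threshold sweep is sketched below; extract_boundary_points and fit_curb_line are placeholders standing in for Algorithm 6.1 run with a given Smin and for the line fit of Equations (6.6), respectively.

def select_best_threshold(s_candidates, extract_boundary_points, fit_curb_line):
    """Pick the Smin in [S0, S1] whose boundary points give the best line fit."""
    best = None
    for s_min in s_candidates:
        points = extract_boundary_points(s_min)      # Algorithm 6.1 with this Smin
        if len(points) < 2:
            continue
        a, b = fit_curb_line(points)
        # Sum of vertical fitting errors Se for this threshold.
        se = sum((y - (a * x + b)) ** 2 for x, y in points)
        if best is None or se < best[0]:
            best = (se, s_min, a, b)
    return best   # (Se, Smin, a, b) of the best-fit line Lbest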

Figure 6.3 Process of curb line location computation

Algorithm 6.2 Curb line detection by dynamic color thresholding.

Input: an input image Iinput, and an environment window Wincurb.

Output: the slope angle of the curb line, and the distance d from the vehicle to the curb line.

Steps.

Step 1. Conduct curb boundary point extraction by the use of Algorithm 6.1 with Iinput, Wincurb, two hue threshold values Hmin and Hmax and two saturation threshold values Smin and Smax as inputs to obtain a set Scurb of N boundary points, each denoted as ci with coordinates (xi, yi) in the VCS.

Step 2. Use the line regression scheme of Equations (6.6) to compute a line L with the points ci, i = 1 through N, as inputs, and derive the equation of the best-fit line L in the form y = ax + b of Equation (6.5), where the coefficients a and b are as described by Equations (6.6).

Step 3. Compute the sum of the errors Se of fitting the boundary points ci with the line L.

Step 4. Adjust the saturation threshold Smin and repeat Steps 1 through 3 until all possible pre-selected threshold values in [S0, S1] have been used.

Step 5. Select the fitting line Lbest with the minimum sum of errors from the computed fitting lines obtained in Step 4.

Step 6. Compute the slope angle of Lbest and the distance d to the vehicle by the following equation:
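One standard way to obtain these two quantities from the fitted line y = ax + b, assuming the VCS origin coincides with the vehicle position, is sketched below; this is an assumption made for illustration and not necessarily the exact equation intended in Step 6.

import math

def line_pose_relative_to_vehicle(a, b):
    """Slope angle (degrees) and perpendicular distance d from the VCS origin
    to the line y = a*x + b.  Assumes the vehicle sits at the VCS origin."""
    slope_angle = math.degrees(math.atan(a))     # orientation of the curb line
    d = abs(b) / math.sqrt(a * a + 1.0)          # point-to-line distance from origin
    return slope_angle, d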

6.1.3 Line following in navigation

By the obtained curb line location, the vehicle can conduct line following on sidewalks in the navigation process. The proposed scheme for line following aims to keep the navigation path at an appropriate distance from the curb line. As shown in Figure 6.4, we define the range [Dist1, Dist2] as the safe range of distances between the vehicle and the curb line. When the vehicle is at a position with a safe distance to the curb, we guide the vehicle to adjust its direction to be parallel to the curb. However, if the distance to the curb line is not in this range, we slow down the speed of the vehicle and steer it progressively toward the safe region. The proposed line following process for vehicle navigation is described in the following algorithm.

Figure 6.4 Illustration of line following strategy.

Algorithm 6.3 Curb Line following.

Input: an input image Iinput.

Output: none.

Steps.

Step 1. By the use of Algorithm 6.2, obtain the slope angle of the curb line and the distance d to the curb line.

Step 2. According to the distance d between the vehicle and the curb line, perform the following steps.

(1) If d > Dist2, slow down the speed of the vehicle; and if the current vehicle direction is toward the safe region, exit; else, turn to the right by an angle of 5° toward the safe region.

(2) If d < Dist1, slow down the speed of the vehicle; and if the current vehicle direction is toward the safe region, exit; else, turn to the left by an angle of 5° toward the safe region.

(3) If Dist1 ≤ d ≤ Dist2, modify the vehicle direction by the use of the slope angle to make it parallel to the curb line.
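The three distance cases of Step 2 map directly onto a small decision routine; the sketch below uses hypothetical motion commands (slow_down, turn, align_parallel, heading_toward_safe_region) purely to illustrate the control flow, not an actual vehicle API.

def follow_curb_line(slope_angle, d, dist1, dist2, vehicle):
    """Sketch of Algorithm 6.3, Step 2: keep the vehicle inside [Dist1, Dist2].

    The vehicle methods below are hypothetical placeholders."""
    if d > dist2:                                 # case (1): too far from the curb
        vehicle.slow_down()
        if not vehicle.heading_toward_safe_region():
            vehicle.turn(degrees=-5)              # turn right, toward the safe region
    elif d < dist1:                               # case (2): too close to the curb
        vehicle.slow_down()
        if not vehicle.heading_toward_safe_region():
            vehicle.turn(degrees=+5)              # turn left, toward the safe region
    else:                                         # case (3): inside the safe range
        vehicle.align_parallel(slope_angle)       # align with the curb line direction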

6.1.4 Experimental results of curb detection

Some experimental results of curb detection using the proposed method are given in this section. An input omni-image with a curb line is shown in Figure 6.5. By the proposed method, the curb segmentation result with the original threshold parameters is shown in Figure 6.6(a). In addition, a better curb segmentation result adopting the dynamic threshold adjustment technique is shown in Figure 6.6(b). Finally, the extracted curb boundary points and the computed best-fit line from Figure 6.6(b) are shown in Figure 6.7.

Figure 6.5 An input omni-image with curb line landmark.

(a) (b)

Figure 6.6 Two result images of curb segmentation with different threshold values. (a) The segmentation result with original threshold values. (b) The segmentation result image by dynamic thresholding.

Figure 6.7 Illustration of the extracted curb boundary points and the best-fit line (the yellow dots).

6.2 Proposed Technique of Obstacle Avoidance

The idea of the proposed obstacle detection technique is based on the use of the disparity resulting from the separation of Mirrors A and B. Because the two-mirror omni-camera is mounted at a fixed position on the autonomous vehicle and slanted upward at a fixed angle, a ground point P will be projected by the two mirrors onto the camera at two specific, different image positions, as shown in Figure 6.8(a). In other words, we can find the same space point P at these two image positions simultaneously. Thus, we can record in advance the relation between corresponding ground points seen in the two mirrors and use it to detect an object which does not lie flat on the ground. More specifically, if an object with a height is projected by the two mirrors onto the omni-image, we can detect it by looking up the recorded corresponding ground positions for the two mirrors. As shown in Figure 6.8(b), instead of the ground point G we find another space point F on the obstacle which is projected by Mirror A onto the image.

Simply speaking, for obstacle detection in this study, our purpose is to construct a specific table, called the ground matching table, which records the relationship between the ground points in the image region of Mirror A and the corresponding ground points in the image region of Mirror B in the omni-image, as shown in Figure 6.9. The proposed method for creating a ground matching table is introduced in Section 6.2.1. Next, by the use of the established ground matching table, we can conduct obstacle detection and localization conveniently for vehicle navigation, as described in Section 6.2.2. Finally, the procedure of obstacle avoidance is introduced in Section 6.2.3.
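As a rough illustration of how such a ground matching table might be used for detection, the sketch below looks up, for each recorded ground point in the Mirror A region, its counterpart in the Mirror B region and flags the pair when the two observations disagree; the table format and the pixels_match test are assumptions made only for this illustration.

def detect_obstacle_pixels(omni_img, ground_matching_table, pixels_match):
    """Flag Mirror-A ground pixels whose recorded Mirror-B counterparts disagree.

    ground_matching_table: dict mapping (uA, vA) -> (uB, vB) for ground points
                           (an assumed format for this sketch).
    pixels_match:          predicate comparing the two image observations.
    """
    obstacle_pixels = []
    for (u_a, v_a), (u_b, v_b) in ground_matching_table.items():
        # For a flat ground point the two mirrors see the same surface patch;
        # an object with height breaks this correspondence (Figure 6.8(b)).
        if not pixels_match(omni_img[v_a, u_a], omni_img[v_b, u_b]):
            obstacle_pixels.append((u_a, v_a))
    return obstacle_pixels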

(a) (b)

Figure 6.8 Two side views of the vehicle system and a ground point G. (a) Without obstacles. (b) With an obstacle in front of the vehicle.

Figure 6.9 Illustration of the ground matching table.

6.2.1 Calibration process for obtaining corresponding ground points in two mirrors

At the beginning of the calibration process, we specify a set of environment windows Winobs for use in the calibration process as well as in the obstacle detection
