5.3.3 Single-class classification of HSI colors for sidewalk detection

Two methods are used to classify the curbstone features on the sidewalk in this study. First, we use the special color of the curbstone of the sidewalk to detect the guide line. As illustrated in Figure 5.4, a cylinder is used to represent the HSI color model. It translates the RGB space into three channels, called hue, saturation, and intensity, forming an HSI color space with hue representing the perceived color as an angle from 0° to 360°, saturation representing the purity of the color, and intensity representing the brightness, i.e., the gray level of the image.

Figure 5.4 The HSI color model [23].
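To make the color translation concrete, the following Python sketch implements one standard geometric RGB-to-HSI conversion; the exact formulas of the model cited in [23] may differ slightly, and the function name rgb_to_hsi is our own.

import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (values in [0, 1]) to HSI.

    Returns hue in degrees [0, 360), with saturation and
    intensity in [0, 1]. A sketch of one common conversion.
    """
    eps = 1e-8                                  # guard against division by zero
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)
    # Hue is the angle between the pixel's chromatic vector and the red axis.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = theta if b <= g else 360.0 - theta
    return hue, saturation, intensity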

Method 1.

We use the hue and saturation values to describe the color of the curbstone, as shown in Figure 5.5. The ground of the sidewalk does not look red, yet when we classify the curbstone against the sidewalk using only the hue channel, the result, shown in Figure 5.5(b), is not good. Fortunately, we found that the saturation values of the ground are much lower than those of the curbstone because of the different materials. Therefore, we use both the hue and saturation values to extract the curbstone of the sidewalk. The classification rule for extracting the curbstone is designed as follows:

"if the input pattern x satisfied the following conditions:

thH ≦ h(x) ≦ thH2, thS ≦ s(x) ≦ thS2,

then assign x to the class c; else to the class g, " (5.8) where c is the class of the curbstone, and g is the class of the ground of the sidewalk; the functions h and s are used to translate the input pattern x to the hue value and saturation value respectively; thH1 and thH2 are the threshold values in the hue channel; and thS1 and thS2 are the threshold values in the saturation channel.

Method 2.

We extract the guide line using the ground features instead of the curbstone features in this case. In the learning process, we use the two-dimensional features of hue and saturation and design a single-class classification rule to extract the curbstone:

"if d(X) < thre, then assign the input pattern X to the class g; else,

to the class c," (5.9)

where X is the input pattern, c is the class of the curbstone, g is the class of the ground of the sidewalk, and d is the Mahalanobis distance, which is defined as follows:

d²(X) = (X − M)ᵀ Σ⁻¹ (X − M),

where M is the mean vector of g and Σ is the covariance matrix of g. The resulting image of Method 2 is shown in Figure 5.7.

Method 2 is more general than Method 1, but its computation cost is higher. Even if the color of the curbstone is different, the system can still extract the guide line by using Method 2. In order to adapt to real environments, Method 2 is adopted in this study.
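The following Python sketch illustrates Method 2 under our own naming: it estimates M and Σ from training pixels of the ground class g during the learning process, and then applies rule (5.9) with the Mahalanobis distance. The threshold thre and the training-sample format are illustrative assumptions.

import numpy as np

def learn_ground_model(samples):
    """Estimate the ground-class statistics from training pixels.

    samples -- an (N, 2) array of (hue, saturation) pairs sampled
    from the sidewalk ground during the learning process.
    """
    M = samples.mean(axis=0)                 # mean vector of class g
    Sigma = np.cov(samples, rowvar=False)    # covariance matrix of class g
    return M, np.linalg.inv(Sigma)

def classify_by_rule_5_9(X, M, Sigma_inv, thre):
    """Rule (5.9): assign X to the ground class g when the Mahalanobis
    distance d(X) is below the threshold; else to the curbstone class c."""
    diff = np.asarray(X) - M
    d = np.sqrt(diff @ Sigma_inv @ diff)
    return 'g' if d < thre else 'c'

Because the rule only compares a distance against a fixed threshold, the squared distance could be compared against thre² instead, saving the square root per pixel.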

5.3.4 Proposed method for guide line detection

In this section, we describe the four steps for detecting the guide line features and finding the corresponding points. We call this procedure guide line detection.

1. Defining the extraction windows to detect the features.

We define two pairs of extraction windows to detect the guide line features of the curbstone: one pair is used for the lateral direction and the other for the frontal direction. In order to increase efficiency, the extraction windows in this study are defined as rectangular shapes instead of fan shapes.

In the lateral direction, the extraction windows are defined to cover a shape extending from 223° to 256° in the ICS. The extraction window Wsmall extends from the top-left point at coordinates (701, 713) to the bottom-right point at coordinates (755, 759), with a size of 46 pixels by 54 pixels, and the other extraction window Wbig extends from the top-left point at coordinates (463, 932) to the bottom-right point at coordinates (685, 1116), with a size of 184 pixels by 222 pixels. Their shapes, appearing in the omni-image in accordance with the rotational invariance property, are illustrated in Figure 5.6(a).

Figure 5.5 Experimental results of extracting curbstone features. (a) An input image. (b) The curbstone features extracted by the hue channel. (c) The curbstone features extracted by the hue and saturation channels.

In the frontal direction, the pair of extraction windows is defined from 244° to 299° in the ICS. The extraction window Wsmall extends from the top-left point at coordinates (753, 730) to the bottom-right point at coordinates (811, 798), with a size of 68 pixels by 58 pixels, and the other extraction window Wbig extends from the top-left point at coordinates (692, 929) to the bottom-right point at coordinates (882, 1124), with a size of 195 pixels by 190 pixels, by the rotational invariance property, as illustrated in Figure 5.6(b). Each of the four corners of Wbig is aligned with the corresponding corner of Wsmall in the same radial direction.
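As a concrete illustration, each rectangular window can be represented simply by its corner coordinates. The sketch below uses the lateral-direction coordinates quoted above; the class and variable names are our own.

from dataclasses import dataclass

@dataclass
class ExtractionWindow:
    """Axis-aligned rectangular extraction window in image
    coordinates, given by its top-left and bottom-right corners."""
    left: int
    top: int
    right: int
    bottom: int

    def crop(self, image):
        """Return the sub-image covered by the window."""
        return image[self.top:self.bottom, self.left:self.right]

# Window coordinates for the lateral direction, taken from the text above.
W_small_lateral = ExtractionWindow(701, 713, 755, 759)
W_big_lateral = ExtractionWindow(463, 932, 685, 1116)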

2. Detecting the features of the curbstones.

By the proposed classification rules, we can extract the curbstone features well. We also use image processing techniques such as connected-component analysis, dilation, and erosion to eliminate noise in the image after detecting the features of the curbstone. The resulting image is shown in Figure 5.7.
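One possible OpenCV realization of this noise-elimination step is sketched below; the 3×3 structuring element and the minimum blob area are illustrative choices of ours, not values from the thesis.

import cv2
import numpy as np

def clean_feature_mask(mask, min_area=50):
    """Remove small noise blobs from a binary curbstone-feature mask
    (uint8, 0 or 255) using erosion, dilation, and connected components."""
    kernel = np.ones((3, 3), np.uint8)
    # Opening (erosion followed by dilation) removes isolated noise pixels.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Keep only connected components whose area reaches min_area.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned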

Figure 5.6 Two extraction windows used to detect the guide line. (a) The extraction windows used for detection in the lateral direction. (b) The extraction windows used for detection in the frontal direction.

Figure 5.7 The result of detecting the curbstone features in the experimental environment.

3. Extracting the guide line of the curbstones.

Many of the curbstone features are hard to use for detecting the corresponding points in Wbig, so we propose a method that detects the guide line of the curbstone instead of using all the features. After detecting the curbstone features, we scan the image in Wsmall from right to left and from top to bottom to find the guide line at the inner position of the sidewalk. When the first feature pixel is found in the right-to-left scan of a row, we label it as a guide line feature point and then proceed to the next row. An illustration is shown in Figure 5.8.

4. Finding the corresponding points.

In [10] and [23], an omni-image is transformed into two panoramic images for the two mirrors, the bilinear interpolation technique is then used to refine the panoramic images, corresponding points are found by matching algorithms, and range data are computed finally. A disadvantage is that the computation cost of this process is high.

Figure 5.8 The guide line (the red dots) extracted by the proposed method.

We propose in this study a method based on the rotational invariance property of the omni-image which does not transform the omni-image into two panoramic images, as illustrated in Figure 5.9. The corresponding point lies in the same radial direction as the feature point, so we scan the feature points along the radial direction (the red dotted line in Figure 5.9(a)) and take the corresponding point to be the last feature pixel in the image portion corresponding to the large mirror, as illustrated in Figure 5.9(a).

Algorithm 5.3. Computation of corresponding feature points.

Input: an image Iinput.

Output: an image with corresponding points marked.

Steps:

Step 1. Scan Iinput in Wsmall and Wbig to extract the curbstone features based on the proposed classification rules.

Step 2. Scan in Wsmall from right to left and from top to bottom to find the guide line at the inner sidewalk. If the first curbstone feature on a row is found, label the pixel as a guide line feature Fg, and then scan the next row until finished.

Step 3. For each Fg, compute the azimuth angle θ, and scan Iinput accordingly on the same radial direction of Fg from near to far with respect to the center in the area of Wbig.

Step 4. Label the last feature pixel found in the near-to-far scan with respect to the center of the omni-image as the corresponding point of Fg.
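To tie the steps together, the following Python sketch implements Algorithm 5.3 on a binary feature mask produced by the classification rules. The window representation, the explicit image-center parameter, and the half-pixel radial step are our own assumptions for illustration.

import numpy as np

def find_corresponding_points(mask, center, w_small, w_big):
    """A sketch of Algorithm 5.3 on a binary feature mask (nonzero =
    curbstone feature). center is the omni-image center (cx, cy);
    w_small and w_big are (left, top, right, bottom) rectangles.
    Returns (guide_point, corresponding_point) pairs."""
    cx, cy = center
    pairs = []
    # Step 2: scan Wsmall right-to-left, top-to-bottom; the first
    # feature pixel met on each row is a guide line feature Fg.
    for y in range(w_small[1], w_small[3]):
        for x in range(w_small[2] - 1, w_small[0] - 1, -1):
            if mask[y, x]:
                fg = (x, y)
                # Step 3: azimuth of Fg with respect to the image center.
                theta = np.arctan2(y - cy, x - cx)
                # Step 4: walk outward along the same radial direction and
                # keep the last feature pixel that lies inside Wbig.
                corr = None
                r = np.hypot(x - cx, y - cy)
                while True:
                    px = int(round(cx + r * np.cos(theta)))
                    py = int(round(cy + r * np.sin(theta)))
                    if not (0 <= px < mask.shape[1] and 0 <= py < mask.shape[0]):
                        break
                    if (w_big[0] <= px < w_big[2] and
                            w_big[1] <= py < w_big[3] and mask[py, px]):
                        corr = (px, py)
                    r += 0.5
                if corr is not None:
                    pairs.append((fg, corr))
                break                          # go on to the next row
    return pairs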