

5.5 Proposed Path Planning Method Using Learned Data

5.5.2 Proposed path planning process

Five types of path nodes are defined in this study, as shown in Table 5.3, and these node types are recorded automatically during the learning procedure. In the procedure of guide line detection, if the vehicle detects the guide line in front of it, the system records the current position as a turning point and puts it into a set S.

In the procedure of human hand pose detection, if the system terminates the procedure of guide line detection and executes the procedure of blind navigation, the system records the current position as a blind navigation point and puts it into the set S.

The mechanical error of the autonomous vehicle increases if the interval between two path nodes is too long. Therefore, we insert additional points into such intervals in the set S to increase the number of opportunities for correcting the location of the vehicle. A new set T of path nodes is obtained after executing the path planning process.

Figure 5.18 shows a map with the path nodes of T after the path planning process.

Table 5.3 Different types of path nodes.

Type of point            Color of points on the map
General point            Blue points in Figure 5.17
Start point / End point  Red points
Turning point            Green points
Blind navigation point   Black points
Inserted point           Yellow points

Algorithm 5.11. Path planning.

Input: a set S of path nodes learned in the learning procedure.

Output: a set T of the remaining path nodes after path planning.

Steps:

Step 1. Scan each node Ni in S. If Ni is not a general point, then mark Ni as an extreme point N1e.

Step 2. Repeat Step 1, and if a second extreme point N2e is found, then create an interval Pi, where N1e is the beginning point and N2e is the end point of Pi.

Step 3. Scan each node ni in Pi, and if d(ni, N1e) ≥ thre, then mark ni as an inserted point, where d(a, b) is the Euclidean distance between a and b, and thre is a pre-selected threshold value decided in the experiments.

Step 4. Repeat Steps 1 through 3 if Ni is not the end of S; else, scan each node Ni in S and, if Ni is not a general point, put it into the output set T.
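To make the insertion procedure concrete, the following is a minimal Python sketch of Algorithm 5.11. The node representation (a dictionary with a position and a type string) is an assumption for illustration, and Step 3 is read as inserting a point roughly every thre units along the interval, measured from the last marked point.

```python
import math

def path_planning(S, thre):
    """Sketch of Algorithm 5.11. Each node is assumed to be a dict
    of the form {"pos": (x, y), "type": "general" | "start" | "end" |
    "turning" | "blind" | "inserted"}."""
    def d(a, b):
        # Euclidean distance d(a, b) used in Step 3
        return math.hypot(a[0] - b[0], a[1] - b[1])

    first = None                       # index of the extreme point N1e
    for i, node in enumerate(S):
        if node["type"] == "general":
            continue                   # Step 1: only non-general points are extreme
        if first is None:
            first = i
            continue
        # Step 2: a second extreme point N2e closes the interval Pi.
        # Step 3: mark a node as an inserted point whenever it is at
        # least thre away from the last marked point (one reading of
        # "d(ni, N1e) >= thre" that spaces points along the interval).
        last_pos = S[first]["pos"]
        for n in S[first + 1:i]:
            if d(n["pos"], last_pos) >= thre:
                n["type"] = "inserted"
                last_pos = n["pos"]
        first = i                      # continue scanning S (Step 4)
    # Step 4 (final pass): keep every non-general node as the set T
    return [n for n in S if n["type"] != "general"]
```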

Figure 5.20 A map created with the path planning process.

Chapter 6

Vehicle Guidance on Sidewalks by Curb Following

6.1 Idea of Proposed Guidance Method

In this study, the autonomous vehicle is designed to guide blind people to walk safely in sidewalk environments. The first difficulty is how the vehicle knows of the existence of the user following it. We propose a technique using ultrasonic sensors to synchronize the speed of the autonomous vehicle with that of the blind user.

Another difficulty is that some dynamic obstacles may block the navigation path of the autonomous vehicle. We propose a method based on Chiang and Tsai’s method [21] to avoid the dynamic obstacles by using the proposed two-mirror omni-camera.

An illustration is shown in Figure 6.1 and the proposed schemes are described in the subsequent sections.

6.1.1 Proposed synchronization method of vehicle navigation and human walking speeds

In the proposed method for synchronization of the vehicle navigation and human walking speeds, the aft sonar array is used only to detect the feet of the user; it consists of the no. 9 through no. 14 sonar sensors in Figure 6.2. We define four regions, named near, middle, far, and too far, for synchronizing the vehicle speed with that of the user. An illustration is shown in Figure 6.5 and their definitions are given in Table 6.1. An algorithm for the proposed method is described in the following.

Figure 6.1 An illustration of the navigation procedure.

Figure 6.2 The illustration of sonar arrays. (a) The fore sonar array of the vehicle. (b) The fore and aft sonar arrays [19].

Algorithm 6.1. Synchronization between the vehicle speed and that of the user.

Input: none.

Output: the vehicle speed.

Steps:

Step 1. Compute the signal for each of the no. 9 through no. 14 sonar sensors by the following rule:

“if the sonar signal is in the detectable range, then label the sonar signal as detected, and record the value; else, label it as un-detected.”

Step 2. Compute the average value d of the signals which are labeled as detected.

Step 3. Decide the corresponding command by d according to Table 6.1.

Figure 6.3 The illustration of sonar arrays (cont'd). (b) The fore and aft sonar arrays [19].

Figure 6.4 The range of a sonar sensor in detecting obstacles.

Figure 6.5 An illustration of the defined regions used for synchronizing the vehicle speed with that of the user.

[Figure: layout of the sonar sensors on the vehicle, with sensors no. 0 through no. 7 across the front and the aft sensors, including no. 9 through no. 14, across the back.]

Table 6.1 Vehicle commands according to the region in which the sonar signal is detected.

Region of the sonar signal  Vehicle command
Near                        Accelerate
Middle                      Adjust the speed to the default speed set initially
Far                         Slow down
Too far                     Stop the vehicle
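As an illustration of how Algorithm 6.1 and Table 6.1 combine, the sketch below averages the aft sonar readings and maps the result to a vehicle command. The read_sonar function and the region boundaries are assumptions for illustration; the actual thresholds are decided in the experiments.

```python
def synchronize_speed(read_sonar, near=40.0, middle=80.0, far=120.0):
    """Sketch of Algorithm 6.1 combined with Table 6.1.

    read_sonar(i) is assumed to return the distance (e.g. in cm)
    measured by sonar sensor no. i, or None when the echo is out of
    the detectable range. The region boundaries near/middle/far are
    illustrative values, not the thresholds used in the experiments."""
    # Step 1: label each aft sonar signal (sensors no. 9-14) as
    # detected or un-detected, recording the detected values.
    values = [read_sonar(i) for i in range(9, 15)]
    detected = [v for v in values if v is not None]
    if not detected:
        return "stop"                  # assumption: no user detected behind
    # Step 2: average value d of the detected signals.
    d = sum(detected) / len(detected)
    # Step 3: decide the vehicle command according to Table 6.1.
    if d < near:
        return "accelerate"            # near region
    if d < middle:
        return "default_speed"         # middle region
    if d < far:
        return "slow_down"             # far region
    return "stop"                      # too-far region
```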


6.2 Proposed Obstacle Detection and Avoidance Process

6.2.1 Proposed method for computation of the vehicle position

In [15-18], the real-world coordinates of a certain spot are computed in advance and then used to decide a vehicle command, called goFront, which instructs the vehicle to move to different positions. As found in this study, using this command provided by the MobileRobot platform locks all processes until the vehicle arrives at the destination.

Because we must use the sonar sensors to compute the distance between the vehicle and the user, we cannot use this vehicle command to instruct the vehicle. Therefore, we propose an algorithm to compute the vehicle position and accordingly command the vehicle to navigate in the map.

We use a projection vector to compute the vehicle position and to decide whether the vehicle has arrived at a path node. As illustrated in Figure 6.5, there are two path nodes, Ni and Ni+1, which are the ith and (i+1)th path nodes.

The vehicle is at position Pi in Figure 6.5(a), and it moves to position Pi+1 after a time interval, as shown in Figure 6.5(b). The algorithm proposed to compute the vehicle position is described below.

Algorithm 6.2. Computation of the vehicle position.

Input: the path nodes learned in the learning procedure.

Output: none.

Steps:

Step 1. Compute the length Lv of a direction vector Vi which is equal to Ni+1 − Ni.

Step 2. Get the current position Pi of the vehicle from the odometer.

Step 3. Compute a direction vector Wi.

Figure 6.6 An illustration of the proposed algorithm. (a) The vehicle is at Pi. (b) The vehicle is at Pi+1.
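The steps of Algorithm 6.2 beyond the computation of Wi do not survive in this copy. The following sketch completes the projection-vector idea of Section 6.2.1 under the assumption that Wi = Pi − Ni is projected onto Vi and its length is compared with Lv to test arrival; this completion is a guess, not the thesis's exact procedure.

```python
import math

def has_arrived(N_i, N_next, P_i):
    """Test whether the vehicle at P_i has reached the path node
    N_next = N(i+1), starting from node N_i. Steps 1 and 2 follow the
    text of Algorithm 6.2; the projection test is an assumed
    completion of the truncated Step 3. Points are (x, y) tuples."""
    Vx, Vy = N_next[0] - N_i[0], N_next[1] - N_i[1]   # Step 1: Vi = N(i+1) - Ni
    Lv = math.hypot(Vx, Vy)                           # length Lv of Vi
    Wx, Wy = P_i[0] - N_i[0], P_i[1] - N_i[1]         # assumed Step 3: Wi = Pi - Ni
    proj = (Wx * Vx + Wy * Vy) / Lv                   # length of Wi projected onto Vi
    return proj >= Lv                                 # the node is reached or passed
```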

6.2.2 Detection of obstacles

In this section, we will introduce the proposed scheme for detection of obstacles.

Two extraction windows with rectangular shapes are defined and used to detect obstacles in front of the vehicle. The pair of windows covers the angular range from 143° to 285° in the ICS. A window Wsmall extends from the top-left point at coordinates (727, 735) to the bottom-right point at coordinates (822, 810), with a size of 95 pixels by 75 pixels. Another window Wbig extends from the top-left point at coordinates (633, 936) to the bottom-right point at coordinates (903, 1113), with a size of 270 pixels by 177 pixels, as shown in Figure 6.6.
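The two windows can be written down directly from the coordinates above. The snippet below is a minimal sketch that crops them from an omni-image stored as an array indexed by (row, column); reading the given coordinates as (column, row) pairs and the array layout itself are implementation assumptions.

```python
# The two extraction windows, taken from the coordinates in the text.
# The image coordinates (u, v) are assumed to be (column, row).
W_SMALL = ((735, 727), (810, 822))   # (top, left) to (bottom, right): 95 x 75 pixels
W_BIG   = ((936, 633), (1113, 903))  # (top, left) to (bottom, right): 270 x 177 pixels

def crop(image, window):
    """Crop a window from an omni-image stored as an array indexed
    as image[row, column] (e.g. a NumPy array)."""
    (top, left), (bottom, right) = window
    return image[top:bottom + 1, left:right + 1]
```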

Similar to the procedure described in Chapter 5, three stages are used to compute the 3D range data of obstacles.

1. Detection of obstacle features.

An obstacle is extracted using Equation (5.8) based on the HSI color model, together with some image processing techniques such as connected component labeling, dilation, and erosion. An example of the resulting image of detected obstacles is shown in Figure 6.7.

2. Detection of the outlines of obstacles.

After detecting the obstacle, we detect the outlines of the obstacles appearing in image portions corresponding to the big mirror and the small mirror, respectively. We scan the extraction window from top to bottom and from left to right to detect the outline feature points on the top of the obstacle, and from bottom to top and left to right to detect the outline feature points on the bottom of the obstacle. An example is shown in Figure 6.8.

3. Computation of the 3D range data of the obstacle.

After we get the outline feature points of the obstacle, we can find the correspondences of the feature point pairs by Algorithm 5.3 and compute the range data by Equation (3.16). Two portions of the range data are then computed: one for the outline feature points on the top of the obstacle, and the other for those on the bottom. We calculate the average values of the range data of the two portions with respect to the three axes of the CCS, and express them in vector form as [dxT, dyT, dzT]T and [dxB, dyB, dzB]T for the top portion and the bottom portion, respectively. The height distance H, the depth distance D, and the width distance W of the obstacle can then be obtained by

[H, D]T = [dyT, dzT]T − [dyB, dzB]T, (6.1)

W = max(dxT, dxB), (6.2)

where max(a, b) returns the larger of a and b.

Finally, we can use the 3D information to perform different collision avoidance actions. For example, if an obstacle is not flat and blocks the navigation path, then the vehicle will avoid it; otherwise, the vehicle will keep going.
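As a worked example of Equations (6.1) and (6.2), the following sketch computes H, D, and W from the averaged top and bottom range vectors. The flatness test at the end mirrors the decision described above; its threshold value is an illustrative assumption.

```python
def obstacle_dimensions(top_avg, bottom_avg):
    """Apply Equations (6.1) and (6.2).
    top_avg    = (dxT, dyT, dzT): averaged range data of the top outline.
    bottom_avg = (dxB, dyB, dzB): averaged range data of the bottom outline."""
    dxT, dyT, dzT = top_avg
    dxB, dyB, dzB = bottom_avg
    H = dyT - dyB           # height distance, Eq. (6.1)
    D = dzT - dzB           # depth distance,  Eq. (6.1)
    W = max(dxT, dxB)       # width distance,  Eq. (6.2)
    return H, D, W

def blocks_path(H, min_height=10.0):
    # Illustrative decision: treat the obstacle as blocking only when
    # it is tall enough; the threshold value here is an assumption.
    return H >= min_height
```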

Figure 6.7 Positions of the extraction windows.

Figure 6.8 The obstacle features.

Figure 6.9 The outline feature points of the obstacle. (a) The result image with both portions of feature points. (b) The result of the upper feature points of (a). (c) The result of the lower feature points of (a).

6.2.3 Ideas of proposed method for obstacle avoidance

As an obstacle is detected, a new path for obstacle avoidance is necessary. In Chiang and Tsai's method [21], the depth distance and the width distance of an obstacle were computed by a 2D projection method. Then, the vehicle avoided the obstacle by inserting an avoidance node into the original node sequence and going to the next node through the avoidance node. The method is shown in Figure 6.9(a).

A problem arises, as shown in Figure 6.9(b), when the obstacle is at a position that overlaps, or is very close to, the path node NODED, the destination at which the vehicle will arrive. As can be seen, the angle from the avoidance point to NODED becomes narrow, and the vehicle will hit the obstacle along the avoidance path.

Figure 6.10 An illustration of the avoidance method. (a) The avoidance method proposed in Chiang and Tsai [21]. (b) A problematic situation.

A concept of a virtual node, together with Algorithm 6.2, is proposed to solve this problem, as illustrated in Figure 6.10. The vehicle is located in the interval from the path node NODEF to the path node NODED. After detecting the obstacle, the system executes several processes, as described in the following:

• Calculate the distance D0 between the vehicle and the obstacle, and the depth distance DEPTH of the obstacle, as described in Section 6.2.2.

• Compute the position of NVD on a vector V by

V = NODED − NODEP, (6.3)

d = 2×D0 + DEPTH, (6.4)

where d is a distance in front of the vehicle (a sketch of this computation is given after the list).

• Compute the position of the vehicle by Algorithm 6.2 when the vehicle arrives at NVD.
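A small sketch of the virtual-node computation follows. Equations (6.3) and (6.4) give the direction and the distance; normalizing V and placing NVD at distance d along it is an assumption made to turn the two equations into a concrete position.

```python
import math

def virtual_node(NODE_D, NODE_P, D0, DEPTH):
    """Place the virtual node NVD at distance d = 2*D0 + DEPTH
    (Eq. (6.4)) along the direction of V = NODE_D - NODE_P
    (Eq. (6.3)). Normalizing V and measuring d along it is an
    assumption made to obtain a position from the two equations."""
    Vx, Vy = NODE_D[0] - NODE_P[0], NODE_D[1] - NODE_P[1]   # Eq. (6.3)
    L = math.hypot(Vx, Vy)
    d = 2 * D0 + DEPTH                                      # Eq. (6.4)
    return (NODE_P[0] + d * Vx / L, NODE_P[1] + d * Vy / L)
```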

A problem occurs when the vehicle is on the guide line and tries to avoid obstacles. Figure 6.11(a) shows that the right avoidance point may be located at a position out of the bound of the guide line. If the right path is chosen to avoid the obstacle, the vehicle should change the avoidance path to the left one. Figure 6.11(b) shows that NVD may be created at a position out of the bound of the guide line if NODED is a turning node. Before moving to NVD from an avoidance point, the vehicle should change NVD to NODED+1.

Before the vehicle avoids the obstacle and moves to NVD from the current position through the avoidance point, a procedure is performed to solve the problems mentioned above. The distance dnode2line from each path node to the guide line is recorded in the learning procedure, and the proposed method is described in the following.

Algorithm 6.3. Avoidance of the vehicle out of the guide line.

Input: the distance dnode2line from each path node to the guide line, and the width of the autonomous vehicle.

Output: none.
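The steps of Algorithm 6.3 are not reproduced in this copy. The sketch below is reconstructed only from the two cases described above (switching to the left avoidance path, and replacing NVD with NODED+1 at a turning node); every parameter name and the lateral-offset test are assumptions.

```python
def check_avoidance(side, lateral_offset, d_node2line,
                    NVD, NVD_out_of_bound, NODE_D_is_turning, NODE_D_plus1):
    """Hedged sketch of Algorithm 6.3, reconstructed from Section
    6.2.3; all parameter names and the offset test are assumptions.
    side           -- "left" or "right": the tentative avoidance path.
    lateral_offset -- lateral distance from the avoidance point to the path.
    d_node2line    -- learned distance from the path node to the guide line."""
    # Case of Figure 6.11(a): the right avoidance point would fall
    # outside the bound of the guide line, so switch to the left path.
    if side == "right" and lateral_offset >= d_node2line:
        side = "left"
    # Case of Figure 6.11(b): NVD was created outside the guide line
    # because NODE_D is a turning node, so head for NODE_{D+1} instead.
    target = NODE_D_plus1 if (NVD_out_of_bound and NODE_D_is_turning) else NVD
    return side, target
```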

Figure 6.11 An illustration of a virtual node in the proposed method.

Figure 6.12 Avoiding an obstacle at the guide line. (a) The obstacle is at the straight guide line. (b) The obstacle is at a corner.

6.2.4 Proposed method for obstacle avoidance

In this section, we summarize the different techniques mentioned previously into two algorithms for obstacle avoidance. Algorithm 6.4 is a decision procedure before avoidance of obstacles. Algorithm 6.5 is a procedure for avoidance of obstacles.

Algorithm 6.4. Preparation for obstacle avoidance.

Input: an input image Iinput.

Output: none.

Steps:

Step 1. Detect obstacles in Iinput as mentioned in Section 6.2.2.

Step 2. Compute the outlines in the top portion FT and the bottom portion FB of possible obstacle features, as mentioned in Section 6.2.2.

Step 3. Calculate the range data of FT and FB, respectively, as mentioned in Section 6.2.2.

Step 4. Compute D0 from FB as mentioned in Section 6.2.2.

Step 5. Go to Algorithm 6.5 if D0 is small enough, i.e., if the vehicle is close to the obstacle.

Algorithm 6.5. Avoidance of obstacle.

Input: a set of the range data R of the obstacle and the path nodes.

Output: avoidance points AP1, AP2 and a virtual node NVD for obstacle avoidance.

Steps:

Step 1. Compute the 3D information of the obstacle by subtracting FB from FT by Equations (6.1) and (6.2).

Step 2. Go to Step 4 without creating the avoidance points (Steps 5 and 6) if the obstacle is too short to block the navigation path; else, go to Step 3.

Step 3. Compute the avoidance points by the following steps:

(a) compute the vector of the obstacle perpendicular to the direction of the vehicle;

(c) compute the avoidance points by

AP1 = OL − Wvehicle×xu;  AP2 = OR + Wvehicle×xu. (6.5)

Step 4. Compute the location of the virtual node NVD by Equations (6.3) and (6.4).

Step 5. Compute the distances d1 and d2 of two avoidance paths from the current position to NVD through AP1 and AP2, respectively.

Step 6. Choose the avoidance path whose distance, d1 or d2, is smaller.

Step 7. Determine whether the avoidance path is available and whether NVD should be changed by Algorithm 6.3.
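Steps 5 and 6 of Algorithm 6.5 reduce to comparing the lengths of two polylines; a minimal sketch follows, with the point representation as an assumption.

```python
import math

def choose_avoidance_path(current, AP1, AP2, NVD):
    """Steps 5 and 6 of Algorithm 6.5: compute the lengths d1 and d2
    of the two candidate paths current -> AP -> NVD and keep the
    shorter one. Points are assumed to be (x, y) tuples."""
    def length(path):
        return sum(math.dist(p, q) for p, q in zip(path, path[1:]))
    d1 = length([current, AP1, NVD])   # Step 5: path through AP1
    d2 = length([current, AP2, NVD])   # Step 5: path through AP2
    return [current, AP1, NVD] if d1 <= d2 else [current, AP2, NVD]  # Step 6
```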

Chapter 7

Experimental Results and Discussions

7.1 Experimental Results

In this chapter, we will show some experimental results of the proposed system in an outdoor environment. Figure 7.1 shows the experimental environment.

In the learning procedure, the system follows the curbstone during navigation.

Figure 7.2(a) shows the curbstone in front of the vehicle, and Figure 7.2(b) shows the resulting image of feature extraction. Figure 7.3(a) shows the curbstone on the lateral side of the vehicle, and Figure 7.3(b) shows the resulting image of feature extraction. If the system detects a human hand in the pre-defined region of the camera enclosure, it changes from the guide line detection mode to the blind navigation mode. Figure 7.4(a) shows the user instructing the vehicle by hand poses, and Figure 7.4(b) shows the resulting image taken while the system was detecting a human hand. A map created in the learning procedure before executing the path planning process is shown in Figure 7.5(a), and Figure 7.5(b) shows the map modified after executing the path planning process.

Figure 7.1 The experimental environment.

Figure 7.2 A curbstone appearing in front of the vehicle. (a) A captured image. (b) An image obtained from processing (a) with extracted feature points.

Figure 7.3 A curbstone appearing at the lateral side of the vehicle. (a) A captured image. (b) An image obtained from processing (a) with extracted feature points.

Figure 7.4 A resulting image of hand pose detection. (a) A user instructing the vehicle by hand. (b) The human hand detected in a pre-defined region.

Figure 7.5 Two navigation maps. (a) A map created before path planning. (b) A map obtained from modifying (a) after path planning.

In the navigation procedure for guiding a blind person, the system synchronizes its speed with the person's speed. The signals captured from the six sonar sensors are shown in Figure 7.6. The synchronization method uses the sonar signals to compute the distance between the vehicle and the user. When an obstacle is detected, the system avoids it if it is not flat and blocks the navigation path. An experimental result in which the system detected an obstacle is shown in Figure 7.7.

7.2 Discussions

By analyzing the experimental results of the learning procedure and the navigation procedure, we see some problems. First, in guide line detection, we use the sidewalk color to extract the guide line. If there are too many colors on the sidewalk, the system becomes confused and cannot decide which features to use for guiding the vehicle; in this case, we have to learn all the colors as features. Also, the adjustment of the vehicle speed should be quicker and smoother, which can be achieved by using a faster CPU. Furthermore, because of the plastic camera enclosure, light reflection yielded by the enclosure creates undesired lighting spots in the omni-image, as illustrated in Figure 7.8. This may hopefully be improved in future designs of the camera system. Finally, more experiments may be conducted to test the system in different environments.

Figure 7.6 Sonar signals obtained when a person stands behind the vehicle.

Figure 7.7 An experimental result of an obstacle avoidance process. (a) An obstacle in front of the vehicle. (b) A side view of (a). (c) The vehicle avoiding the obstacle. (d) A side view of (c). (e) Obstacle detection result using a captured image.

Figure 7.8 Light pollution in the omni-image.

Chapter 8

Conclusions and Suggestions for Future Works

8.1 Conclusions

In this study, we have designed an autonomous vehicle system which is equipped with a newly-designed two-mirror omni-camera as the visual sensor and navigates on sidewalks for use as a guide dog. Several techniques for implementing such a system have been proposed.

First, we have derived a new formula to design the two-mirror omni-camera, which is composed of two reflective mirrors with hyperboloidal shapes and a traditional projective camera. The formula describes the relationship between the parameters of each mirror surface and the position of the mirror with respect to the camera. One can use the general formula to easily produce two-mirror omni-cameras with different parameters.

Next, we have proposed several new techniques for calibration of the camera and of the mechanical error of the autonomous vehicle. The camera calibration technique is based on the pano-mapping technique proposed by Jeng and Tsai [22]. A mapping table which describes the relationship between the pixels and the elevation angles with respect to the hyperbolic mirror has been created and used in object localization and 3D data computation. To calibrate the mechanical error of the odometer equipped in the vehicle, a calibration model based on the curve fitting technique has been proposed. The mechanical error is reduced by the use of the proposed calibration model.

Furthermore, we have proposed new techniques for vehicle guidance in the learning procedure and in the navigation procedure. To learn environment information, a semi-automatic method based on the line following technique has been proposed. The vehicle navigates on sidewalks by using the features of the curbstone and the