

Chapter 3  Learning Guidance Parameters and Navigation Paths

3.6  Learning Processes for Creating a Navigation Path

3.6.3  Learning procedure for navigation path creation

In this section, we describe how we establish a navigation path in the learning process. First, we define eight types of navigation nodes as listed in Table 3.1, where each navigation node includes either a set of appointed works which have to be conducted by the vehicle or a set of data representing a landmark position in the navigation path. We guide the vehicle to learn a pre-selected navigation path as well as some pre-selected landmarks by the use of these navigation nodes to construct a learned navigation path. In addition, while each navigation node is recorded, some relevant guidance parameters are recorded into the learning result as well. At the end of the learning process, a navigation path consisting of a series of navigation nodes and relevant guidance parameters has been recorded, which can then be utilized for vehicle navigation in the navigation process. A flowchart of the process for navigation path creation is shown in Figure 3.15, and the detailed algorithm to implement it is described in the following.

Table 3.1 Eight different types of navigation path nodes.

Type number    Type of node
Type 0         Start / terminal node
Type 1         Curb-following navigation node
Type 2         Blind navigation node
Type 3         Curb-line calibration node
Type 4         Localization node
Type 5         Light-pole landmark node
Type 6         Hydrant landmark node
Type 7         Fixed obstacle node

Algorithm 3.4 Creation of a navigation path.

Input: Odometer readings of vehicle poses, denoted as (Px, Py, Pth), where Px and Py represent the vehicle location and Pth represents the vehicle direction, in the WCS.

Output: A set of navigation nodes denoted as Npath.

Steps.

Step 1. Record into Npath the start node Nbegin of Type 0 with the odometer readings (Px, Py, Pth) = (0, 0, 0).

Step 2. Set the navigation mode, and guide the vehicle to navigate forward until arriving at a desired position; then stop the vehicle.

Step 3. According to the appointed navigation mode, record into Npath the current vehicle pose, denoted as Ncur = (Px, Py, Pth) and obtained from the odometer readings, in Type 1 or Type 2; and select one of the four following additional learning tasks.

(1) Learn a hydrant landmark by the method mentioned in Section 3.3, obtain a hydrant position Nhyd and the related vehicle pose Ncar, and record Ncar in Type 4 and Nhyd in Type 6 into Npath.

(2) Learn a light pole landmark by the method mentioned in Section 3.3, obtain a light pole position Nlp and the related vehicle pose Ncar, and record Ncar in Type 4 and Nlp in Type 5 into Npath.

(3) Learn a fixed obstacle Nobs using the proposed function discussed in Section 3.4, and record Nobs in Type 7 into Npath.

(4) Learn a curb line calibration node Ncali, where the vehicle can “see” a complete curb line segment without occlusion and will calibrate its pose by the “seen” curb line information in the navigation process (with the detail introduced in Section 4.2.2), and record Ncali in Type 3 into Npath.

Step 4. If the destination, whose position is selected by the trainer, is not reached yet, go to Step 2.

Step 5. Record the terminal node Nend, denoted as (Px, Py, Pth), according to the current odometer readings, in Type 0 into Npath.
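The recording loop of Algorithm 3.4 can be sketched as follows. The node class, the type constants, and the shape of the trainer-supplied `drive_segments` sequence are hypothetical stand-ins for the vehicle system described above; only the node types themselves come from Table 3.1.

```python
from dataclasses import dataclass

# Node type codes from Table 3.1.
TYPE_START_TERMINAL = 0   # Type 0: start / terminal node
TYPE_CURB_FOLLOW = 1      # Type 1: curb-following navigation node
TYPE_BLIND = 2            # Type 2: blind navigation node
TYPE_CURB_CALIB = 3       # Type 3: curb-line calibration node
TYPE_LOCALIZATION = 4     # Type 4: localization node
TYPE_LIGHT_POLE = 5       # Type 5: light-pole landmark node
TYPE_HYDRANT = 6          # Type 6: hydrant landmark node
TYPE_FIXED_OBSTACLE = 7   # Type 7: fixed obstacle node

@dataclass
class Node:
    node_type: int
    px: float   # x position in the WCS
    py: float   # y position in the WCS
    pth: float  # vehicle direction (unused for landmark nodes)

def create_navigation_path(drive_segments):
    """Record a navigation path from a sequence of trainer-guided segments.

    `drive_segments` yields (mode, pose, extra) tuples: `mode` is
    TYPE_CURB_FOLLOW or TYPE_BLIND, `pose` is the odometer reading
    (Px, Py, Pth) where the segment ends, and `extra` is an optional
    (landmark_type, position, vehicle_pose) tuple standing in for one of
    the four additional learning tasks of Step 3.
    """
    npath = [Node(TYPE_START_TERMINAL, 0.0, 0.0, 0.0)]  # Step 1: start node
    pose = (0.0, 0.0, 0.0)
    for mode, pose, extra in drive_segments:        # Steps 2-4
        npath.append(Node(mode, *pose))             # Type 1 or Type 2 node
        if extra is not None:                       # one of the four tasks
            landmark_type, position, vehicle_pose = extra
            if vehicle_pose is not None:            # hydrant / light pole case
                npath.append(Node(TYPE_LOCALIZATION, *vehicle_pose))
            npath.append(Node(landmark_type, position[0], position[1], 0.0))
    npath.append(Node(TYPE_START_TERMINAL, *pose))  # Step 5: terminal node
    return npath
```

For example, a curb-following segment followed by a blind segment with a learned hydrant yields the node sequence Type 0, 1, 2, 4, 6, 0, matching the order in which Algorithm 3.4 records them.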

Figure 3.15 The process for navigation path creation.

Chapter 4  Navigation Strategy in Outdoor Environments

4.1 Idea of Proposed Navigation Strategy

After successfully learning the navigation environment, we acquire the learned environment information including a navigation path and other guidance parameters.

In this chapter, we introduce the proposed strategies for vehicle navigation in complicated outdoor environments by use of this information. The principles proposed to conduct the navigation work are introduced in Section 4.2.1. The process for navigation is described in Section 4.2.3. In addition, the three main ideas used in this study to guide the vehicle to navigate on the learned path are described in the following.

4.1.1 Vehicle localization by along-path objects

As mentioned previously, the vehicle navigation process usually suffers from incremental mechanical errors, resulting in imprecise computations of vehicle positions, so the vehicle should be guided to localize its position constantly by the learned landmark positions. After localizing a landmark by the use of the proposed localization techniques introduced later in Chapter 5 and obtaining the relative vehicle position with respect to the landmark, we can adjust the vehicle posture by changing its position and orientation using vehicle commands and correcting the odometer readings. In addition, we also use the learned straight curb line segment on the sidewalk to calibrate the vehicle posture. These proposed techniques to adjust the vehicle posture on the navigation path are introduced in Section 4.2.2.

4.1.2 Dynamic adjustment of guidance parameters

In complicated outdoor environments with varying lighting conditions, we cannot simply adopt the fixed guidance parameters recorded in the learning process to conduct the image analysis works. Thus, we taught the vehicle in the learning process to analyze environment data and then utilize learned methods to adjust the guidance parameters. Some techniques for dynamic guidance parameter adjustment are proposed in this study. First, the learned contour of the hydrant helps the vehicle to adjust the segmentation parameters by principal component analysis (PCA). Also, by evaluating the result of curb contour extraction, we can adjust the curb segmentation parameters. The above two techniques of dynamic adjustment of thresholds for hydrant and curb detection are introduced later in Chapters 5 and 6, respectively.

In addition, we use a dynamic exposure adjustment scheme to deal with the varying lighting conditions in the outdoor environment during the vehicle navigation process. An advantage of dynamic exposure adjustment is the possibility of preserving more usable color information of the objects in the image. According to the environment intensity parameter learned in the learning process for each work, we can determine whether the current luminance of the image frame is suitable, and the adjustment technique will be enforced automatically if necessary. The proposed technique for dynamic exposure adjustment is introduced in Section 4.2.3.

4.1.3 Obstacle avoidance by 3D information

For vehicle navigation in outdoor environments, encountering obstacles is unavoidable, and they must be detected so that the vehicle can dodge them. Using the stereo camera adopted in this study, we propose a dynamic obstacle detection technique based on 3D information, which we use to conduct secure navigation. The detail about this technique is described later in Chapter 6.

4.2 Guidance Technique in Navigation Process

4.2.1 Principle of navigation process

In this section, we introduce the principles of the proposed vehicle navigation method on the learned path. At the beginning, the vehicle retrieves a navigation path and related guidance parameters which were recorded in the vehicle system in the learning process. The obtained navigation path consists of several navigation nodes labeled in a sequential order. The vehicle is guided to visit each node sequentially in the navigation process. Four principles are proposed in this study to guide the vehicle to navigate to a desired destination. They are described as follows.

(1) The vehicle always keeps its navigation safe by avoiding collisions along the navigation path. By the use of the proposed obstacle detection method, the vehicle always checks whether there is any dynamic obstacle in front and dodges it if necessary. In addition, by localizing a nearby light pole and the learned positions of fixed obstacles, the vehicle conducts a specific procedure to dodge these static obstacles.

(2) The vehicle always adjusts the guidance parameters based on the learned rules when detecting a landmark, using techniques such as dynamic thresholding and dynamic exposure adjustment.

(3) The vehicle always follows the sidewalk curb if possible. After detecting and localizing the curb line, the vehicle modifies its direction to maintain a safe distance and orientation with respect to the curb on the sidewalk.

(4) The vehicle localizes its position and corrects the odometer readings at a constant time interval along the navigation path. According to the learned landmark information, the vehicle detects and then localizes an appointed landmark by the use of the proposed techniques. Then, calibration of the vehicle pose is conducted.

As a rule, the vehicle always localizes itself by the odometer readings to conduct node-based navigation. With the learned path information, we establish two principles to judge whether the vehicle has arrived at the next node in node-based navigation.

The principles are described in the following.

(1) As shown in Figure 4.1(a), the distance distA between the current vehicle position V and the position of the next node Nodei+1 is smaller than a threshold thr1.

(2) As shown in Figure 4.1(b), the distance distB between the next node Nodei+1 and the position of the projection of the vehicle on the vector formed by Nodei and Nodei+1 is smaller than a threshold thr2.

By the mentioned navigation principles, the vehicle can be expected to navigate to the goal in the end. A flowchart illustrating the proposed node-based navigation is shown in Figure 4.2.
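The two arrival principles above can be sketched as a single test; the function name and the 2-D point representation are assumptions, and the geometry follows Figure 4.1.

```python
import math

def reached_next_node(vehicle, node_i, node_i1, thr1, thr2):
    """Arrival test combining the two principles of Figure 4.1.

    `vehicle`, `node_i`, and `node_i1` are (x, y) points; `thr1` and
    `thr2` are the two distance thresholds.
    """
    # Principle (1): straight-line distance distA to the next node.
    dist_a = math.hypot(vehicle[0] - node_i1[0], vehicle[1] - node_i1[1])
    if dist_a < thr1:
        return True
    # Principle (2): project the vehicle onto the vector from the current
    # node to the next node, then measure the remaining distance distB
    # between the projection and the next node.
    vx, vy = node_i1[0] - node_i[0], node_i1[1] - node_i[1]
    seg_len = math.hypot(vx, vy)
    if seg_len == 0.0:
        return True  # degenerate segment: both nodes coincide
    # Signed length of the vehicle's projection along the segment.
    t = ((vehicle[0] - node_i[0]) * vx + (vehicle[1] - node_i[1]) * vy) / seg_len
    dist_b = seg_len - t
    return dist_b < thr2
```

Principle (2) lets the vehicle pass a node that it has drifted slightly beside: even when distA stays above thr1, the projection onto the segment can already be within thr2 of the next node.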

4.2.2 Calibration of vehicle odometer readings by sidewalk curb and particular landmarks

Figure 4.1 Two proposed principles to judge if the vehicle arrives at the next node in the navigation process. (a) According to the distance between the vehicle position and the next node position. (b) According to the distance between the next node position and the position of the projection of the vehicle on the vector connecting the current node and the next node.

Figure 4.2 Proposed node-based navigation process.

As mentioned in Chapter 3, the odometer readings provide three values Px, Py, and Pth for the vehicle to know its position (Px, Py) and moving direction Pth. Unfortunately, all of them become imprecise owing to incremental mechanical errors after the vehicle navigates for a period of time. In this section, we describe the proposed schemes to calibrate the odometer readings. The process of odometer reading calibration is illustrated in Figure 4.3. At first, we use the recorded curb line segment information to calibrate the orientation reading Pth of the odometer. Second, by the recorded hydrant and light pole positions, we use the proposed hydrant and light pole detection methods to obtain the landmark position and then calibrate the position readings (Px, Py) of the vehicle. The reason why we have to combine a hydrant or light pole position with the curb information is that, in the odometer reading calibration method proposed in this study, the orientation odometer reading has to be calibrated in advance using the detected curb line before the computed position of the hydrant or light pole can be used to localize the vehicle position.

(A) Odometer calibration by the hydrant and the sidewalk curb line

Two different positions of the vehicle at two nodes in the navigation path and the relation between the vehicle, the curb, and the hydrant are illustrated in Figure 4.4. The calibration process consists of two steps. First, after adjusting the vehicle to the direction specified by the current odometer readings, we detect the nearby straight curb line segment seen in the omni-image and obtain its slope angle with respect to the vehicle. From the learned navigation path, we obtain the recorded slope angle of the curb line, and then analyze the two different slope angles to estimate the correct direction of the vehicle. Second, we conduct the vehicle to detect the hydrant and obtain its location. According to the recorded hydrant position from the learned navigation path, we use the correct vehicle orientation to compute the correct vehicle position by the relation between the hydrant position and the vehicle position in the GCS, as shown in Figure 4.5. We describe the proposed method to calibrate the odometer readings in detail in the following algorithm.

Figure 4.3 Proposed odometer reading calibration process.

Figure 4.4 A recorded vehicle position V and the current vehicle position V′ in the GCS.

Figure 4.5 Hydrant detection for vehicle localization at position L. (a) At coordinates (lx, ly) in the VCS. (b) At coordinates (Cx, Cy) in the GCS.

Algorithm 4.1 Odometer readings calibration by a hydrant and a curb line segment.

Input: a recorded vehicle pose VL = (PX, PY, Pth), a recorded slope angle θ of the curb line, a recorded hydrant position Lrecord, and the odometer readings of the vehicle pose.

Output: None.

Steps.

Step 1. Turn the vehicle to the recorded direction Pth, conduct the curb line detection process described in Chapter 6, and compute the slope angle θ′ of the curb line relative to the vehicle direction.

Step 2. Compute an adjustment angle θadj by the following equation:

θadj = θ′ − θ,  (4.1)

and modify the orientation odometer reading by θadj; the result is taken as the correct vehicle orientation Pth′.

Step 3. Detect the hydrant and compute its position LCCS in the CCS (using the method described in Chapter 5); and by the coordinate transformation between the CCS and the VCS as described in Equation (3.6) with LCCS as input, compute the landmark position LVCS and describe it with coordinates (lx, ly) in the VCS.

Step 4. From the learned navigation path, obtain the recorded landmark position Lrecord at coordinates (Cx, Cy) in the GCS, and use the calibrated orientation Pth′ to compute the current vehicle position (Xcali, Ycali) in the GCS by the following equations:

Xcali = Cx − lx × cos(Pth′) + ly × sin(Pth′);
Ycali = Cy − lx × sin(Pth′) − ly × cos(Pth′).  (4.2)

Step 5. Replace the imprecise position readings (PX, PY) of the odometer by the computed vehicle position (Xcali, Ycali).
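Algorithm 4.1 can be sketched as follows. This is a minimal sketch, not the exact implementation: the sign of the angular correction and the rotation convention taking the VCS offset into the GCS are assumptions (the actual conventions follow Figures 4.4 and 4.5), and all angles are taken in radians.

```python
import math

def calibrate_odometer(odo, theta_detected, theta_recorded, lm_vcs, lm_gcs):
    """Sketch of Algorithm 4.1 under an assumed coordinate convention.

    odo            -- current odometer readings (Px, Py, Pth)
    theta_detected -- curb-line slope angle detected relative to the vehicle
    theta_recorded -- curb-line slope angle recorded during learning
    lm_vcs         -- detected landmark position (lx, ly) in the VCS
    lm_gcs         -- recorded landmark position (Cx, Cy) in the GCS
    Returns the calibrated readings (Xcali, Ycali, Pth').
    """
    px, py, pth = odo
    # Step 2 (Eq. 4.1): the difference between the detected and recorded
    # curb slope angles corrects the orientation reading (sign assumed).
    theta_adj = theta_detected - theta_recorded
    pth_cali = pth - theta_adj
    # Steps 3-5: rotate the landmark's VCS offset into the GCS and
    # subtract it from the recorded landmark position.  The rotation
    # assumes the VCS x-axis points along the vehicle heading.
    lx, ly = lm_vcs
    cx, cy = lm_gcs
    x_cali = cx - (lx * math.cos(pth_cali) - ly * math.sin(pth_cali))
    y_cali = cy - (lx * math.sin(pth_cali) + ly * math.cos(pth_cali))
    return x_cali, y_cali, pth_cali
```

For instance, with a correct heading (detected and recorded curb angles equal, Pth′ = 0) and a hydrant seen one unit straight ahead in the VCS, the vehicle position is recovered as the recorded hydrant position shifted one unit back along the heading.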

(B) Odometer calibration by the light pole and the sidewalk curb line

The process of calibration by the light pole and the sidewalk curb is similar to the above-mentioned method for odometer calibration by a hydrant and a sidewalk curb line segment. First, we detect and localize a nearby curb line segment at a node V1 in the learned path to calibrate the orientation reading in a similar way as described previously. Next, we perform a slightly different task: we navigate the vehicle a step further to another node V2, which is a location recorded in the navigation path with a light pole nearby, in order to detect the light pole at a closer location. The process is shown in Figure 4.6. It is noted that the mechanical error of the orientation reading is assumed to be slight after the movement of the vehicle from node V1 to node V2. Then, after detecting and localizing the light pole position, we compute the current vehicle position and modify the position odometer readings in the same way as in the calibration work using the hydrant described previously.

4.2.3 Dynamic exposure adjustment for different tasks

In the navigation process, by the relevant environment intensity information recorded in the learned navigation path, we can adjust the luminance to an appropriate value for different works. According to the experimental result shown in Figure 4.7, we find that there exists a specific range of exposure values in which the exposure value has an approximately linear relation with the image intensity in a specific area of the image. Thus, we can estimate an appropriate exposure value Exp using the following linear function fexp:

Exp = fexp(Y) = m × Y + b,  (4.3)

where Y is the average intensity in a specific region in the image, and m and b are two parameters.

Figure 4.6 Process of odometer calibration by the light pole and curb line. The vehicle detects the curb line at V1 to calibrate the orientation reading and then navigates to V2 to calibrate the position readings by detecting the light pole.

Figure 4.7 The relation between the exposure value and the image intensity obtained in an experimental result.

However, under different light sources in outdoor environments, the specific range will be different, and so will the linear function fexp. Thus, we propose an efficient two-stage method to obtain automatically an appropriate exposure value which can be utilized to achieve an appointed illumination in an appointed region in an omni-image. First, we use a bisection scheme to adjust the exposure value to find the specific range; the goal is to obtain two approximate bounds of the exposure value between which we can get proper intensities. Next, by the two bounds, we utilize linear interpolation to adjust the exposure value and then obtain the desired illumination. An algorithm to describe the proposed method is as follows.

Algorithm 4.2 Dynamic exposure adjustment.

Input: an input image Iinput; desired environment intensity Ybase and relevant environment window Winen; and the minimum lower bound Exp1 and the maximum upper bound Exp2 of the camera exposure value.

Output: None.

Steps.

Step 1. Initialize two parameters Y1 = -1 and Y2 = -1.

Step 2. Compute an exposure value Expbi by the following equation:

Expbi = (Exp1 + Exp2) / 2.  (4.4)

Step 3. Use Expbi to acquire an image Iinput with the system camera, and compute the average intensity Ybi of Iinput in Winen.

Step 5. Compute the exposure value Explinear by the following equation:

Explinear = Exp1 + (Ybase − Y1) × (Exp2 − Exp1) / (Y2 − Y1).  (4.5)

Step 6. Use Explinear to acquire an image Iinput and compute the average intensity Ycur of Iinput in Winen. If |Ycur − Ybase| is smaller than a threshold ThrY, then the desired illumination is obtained and the adjustment is completed.
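The two stages of Algorithm 4.2 can be sketched as follows. Because parts of the algorithm's steps were lost, this is only one possible reading of the scheme: the `measure_intensity` callback, the bracketing-update rule, and the interleaving of the interpolation stage are assumptions; only the bisection midpoint (Eq. 4.4) and the linear interpolation between the two bounds come from the text.

```python
def adjust_exposure(measure_intensity, y_base, exp_lo, exp_hi,
                    thr_y=5.0, max_iter=16):
    """Two-stage exposure search: bisection to bracket the usable range,
    then linear interpolation between the bracketing bounds.

    measure_intensity(exp) -- hypothetical camera call: captures a frame
                              at exposure `exp` and returns the average
                              intensity in the environment window Winen
    y_base                 -- desired average intensity Ybase
    exp_lo, exp_hi         -- minimum and maximum exposure values
    thr_y                  -- acceptance threshold ThrY on |Ycur - Ybase|
    """
    y_lo = measure_intensity(exp_lo)
    y_hi = measure_intensity(exp_hi)
    for _ in range(max_iter):
        # Stage 1: bisection (Eq. 4.4) narrows [exp_lo, exp_hi].
        exp_bi = (exp_lo + exp_hi) / 2.0
        y_bi = measure_intensity(exp_bi)
        if abs(y_bi - y_base) < thr_y:
            return exp_bi
        if y_bi < y_base:       # image too dark: raise the lower bound
            exp_lo, y_lo = exp_bi, y_bi
        else:                   # image too bright: lower the upper bound
            exp_hi, y_hi = exp_bi, y_bi
        # Stage 2: interpolate linearly between the current bounds,
        # assuming intensity is roughly linear in exposure there.
        if y_hi != y_lo:
            exp_linear = exp_lo + (y_base - y_lo) * (exp_hi - exp_lo) / (y_hi - y_lo)
            if abs(measure_intensity(exp_linear) - y_base) < thr_y:
                return exp_linear
    return (exp_lo + exp_hi) / 2.0  # best effort if no exact match found
```

With an ideal camera whose intensity is exactly linear in the exposure value, the interpolation stage lands on the target after a single bisection step, mirroring the 50/100-bound example of Figure 4.8.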

An experimental result of dynamically adjusting the exposure in the sidewalk curb detection task in the outdoor environment is illustrated in Figure 4.8. By the use of the learned environment window for curb detection, as illustrated by a red rectangular shape on the image in each figure, we compute the average intensity in this region. In the first stage, for the purpose of finding the exposure bounds, we conduct the bisection scheme to adjust the exposure value as shown in Figures 4.8(a) through 4.8(d). After that, using the obtained exposure lower bound 50 and upper bound 100, we use the linear interpolation scheme to obtain a suitable intensity in the image, as illustrated in Figure 4.8(e).


Figure 4.8 Process of the proposed method to dynamically adjust the exposure for the sidewalk detection task. (a) With exposure value 400. (b) With exposure value 200. (c) With exposure value 100. (d) With exposure value 50. (e) A suitable illumination for sidewalk detection with exposure value 79.

4.3 Detailed Algorithm of Navigation Process

In this section, we describe the detailed process for vehicle navigation in the navigation process. The flowchart of the entire navigation process is shown in Figure 4.8. With the learned information, the vehicle navigates along the learned path by visiting each recorded node consecutively, and conducts appointed works at specific positions until reaching the destination of the learned path. The entire navigation process is described in the following algorithm.

Algorithm 4.3 Navigation Process.

Input: a learned navigation path Npath with relevant guidance parameters, and learned data of camera calibration.

Output: Navigation process.

Steps.

Step 1. Read from Npath a navigation node Nnext and relevant guidance parameters.

Step 2. Turn the vehicle toward the next node Nnext.

Step 3. Check the illumination by the recorded environment intensity and conduct the dynamic exposure adjustment procedure if necessary, and then conduct the vehicle to navigate forward.

Step 4. Try to find obstacles; if an obstacle is found at a position which is too close to the vehicle, stop the vehicle, insert avoidance nodes (see Section 6.2 for the detail) into the navigation path for the purpose of obstacle avoidance, and go to Step 1.

Step 5. If a sidewalk following mode is adopted, modify the vehicle direction after localizing the curb landmark by the curb detection method using the

Step 6. Check whether the next node Nnext is reached by the two principles mentioned in Section 4.2.1; if not, go to Step 4.

Step 7. If a fixed obstacle is read from Npath, insert dodging nodes into the navigation path and go to Step 10.

Step 8. If a hydrant or light pole landmark is read from Npath, take the following steps and then go to Step 10.

8.1 Check the illumination in the relevant environment windows in the image for the appointed landmark by the recorded environment

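The node-visiting skeleton of Algorithm 4.3 (Steps 1 through 6) can be sketched as follows. The `vehicle` controller object and every method on it are hypothetical stand-ins for the operations named in the algorithm; the landmark-handling steps (Steps 7 and 8), whose text is truncated above, are omitted.

```python
def navigate(npath, vehicle):
    """Sketch of the node-visiting loop of Algorithm 4.3.

    `npath` is a mutable list of learned navigation nodes; `vehicle` is a
    hypothetical controller exposing the operations named in the
    algorithm (turn_toward, check_exposure, detect_obstacle, ...).
    """
    i = 0
    while i < len(npath):
        node = npath[i]                   # Step 1: read the next node
        vehicle.turn_toward(node)         # Step 2: turn toward it
        vehicle.check_exposure(node)      # Step 3: dynamic exposure check
        vehicle.move_forward()
        while not vehicle.reached(node):  # Step 6: arrival test
            obstacle = vehicle.detect_obstacle()      # Step 4
            if obstacle is not None:
                # Insert avoidance nodes ahead of the current node and
                # restart from the first of them (Section 6.2).
                npath[i:i] = vehicle.plan_avoidance(obstacle)
                node = npath[i]
                vehicle.turn_toward(node)
            elif vehicle.curb_following:  # Step 5: curb-following mode
                vehicle.align_with_curb()
            vehicle.move_forward()
        i += 1                            # visit nodes consecutively
    vehicle.stop()
```

Inserting avoidance nodes in place (rather than discarding the learned path) keeps the remainder of the recorded node sequence intact, so the vehicle resumes the learned path as soon as the detour is finished.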