
Chapter 1 Introduction

1.5 Thesis Organization

The remainder of this thesis is organized as follows. In Chapter 2, the proposed ideas of the study and the system configuration are introduced. In Chapter 3, the design of the proposed two-mirror omni-camera is described. In Chapter 4, the proposed techniques for calibration of the two-mirror omni-camera and correction of the mechanical error of the autonomous vehicle are described. In Chapter 5, we describe the supervised learning method for generating the navigation path map in the outdoor environment. In Chapter 6, an adaptive method used to avoid dynamic obstacles and a technique used to synchronize the vehicle speed with the user's are described. In Chapter 7, experimental results and discussions are included. Finally, conclusions and some suggestions for future work are given in Chapter 8.

Chapter 2

Proposed Ideas and System Configuration

2.1 Introduction

In this chapter, we will introduce the major ideas proposed in the thesis study. In Section 2.1.1, we will introduce the main ideas of the proposed learning method for outdoor navigation and use some figures to illustrate our ideas. In Section 2.1.2, we will describe the main ideas of the proposed guidance method. To develop the system of this study, we designed a two-mirror omni-camera and equipped it on an autonomous vehicle. The entire hardware configuration is described in Section 2.2.1, and the software configuration and the system architecture are presented in Section 2.2.2.

2.1.1 Idea of proposed learning methods for outdoor navigation

The main purpose of the learning procedure is to create a path map and then instruct the vehicle to navigate according to the map in the outdoor environment. As mentioned in our survey, many methods have been proposed to guide autonomous vehicles in learning navigation maps. Different from those methods, which essentially conduct learning manually, we use special features in the environment to guide the autonomous vehicle to learn the path map semi-automatically.

In addition, in this study we use the color of the curbstone at the sidewalk in the outdoor environment, as in the example shown in Figure 2.1, to guide the autonomous vehicle for the purpose of learning a path map. If the curbstone color is similar to the ground color of the sidewalk, or if no special color appears on the curbstone in the environment, the autonomous vehicle will not know where to go in the learning procedure. Therefore, another method is needed to handle these conditions. The new approach we use in this study consists of several major steps, as follows.

1. Allow the user beside the vehicle to touch the transparent plastic enclosure of the omni-camera at several pre-selected positions on the enclosure, each of which represents a different command for guiding the vehicle.

2. Detect the hand touch position from the omni-image taken when the hand is put on the camera enclosure.

3. Determine the command from the detected hand position.

4. Instruct the vehicle to conduct navigation according to the command.

In addition, when navigating in outdoor environments, the vehicle often encounters varying light intensity, which has a serious influence on the image analysis work. Our general idea to solve this problem is to dynamically adjust the camera exposure value as well as the image threshold value used in the hand detection process just mentioned, as sketched below.
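The following is a minimal sketch of this adjustment idea, assuming an 8-bit grayscale image buffer; the target intensity band, the step size, and the threshold scaling factor are hypothetical placeholders, not the exact values used in this study.

#include <cstddef>

// Average intensity of an 8-bit grayscale image.
double meanIntensity(const unsigned char* pixels, std::size_t count) {
    double sum = 0.0;
    for (std::size_t i = 0; i < count; ++i)
        sum += pixels[i];
    return count > 0 ? sum / count : 0.0;
}

// Step the camera exposure value toward a target intensity band.
int adjustExposure(int exposure, double mean) {
    const double kLow = 90.0, kHigh = 160.0;   // assumed target band
    if (mean < kLow)  return exposure + 1;     // image too dark
    if (mean > kHigh) return exposure - 1;     // image too bright
    return exposure;                           // intensity acceptable
}

// Derive a binarization threshold relative to the current mean.
int thresholdFromMean(double mean) {
    return static_cast<int>(mean * 0.8);       // assumed scaling factor
}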

Figure 2.1 The color of the curbstone at a sidewalk in National Chiao Tung University.

We use seven major steps to accomplish the learning procedure as described in the following.

1. Decide whether it is necessary to adjust the camera exposure value and the image threshold values for use in image analysis.

2. Use the hue and saturation of the HSI color space to detect sidewalk curbstone features (a pixel-classification sketch is given after this list).

3. Detect the edge between the sidewalk region and the curbstone region.

4. Compute the range data of the guide line formed by the curb using the detected edge.

5. Instruct the vehicle to perform actions to follow the guide line.

6. Detect hand positions in the taken image to guide the vehicle if necessary.

7. Execute the procedure of path planning to collect the necessary path nodes and record them in the learning procedure.
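The following is a minimal sketch of the pixel classification in step 2, using the standard RGB-to-HSI conversion; the curbstone hue and saturation ranges are hypothetical placeholders, since the actual thresholds are adjusted according to the environment as described above.

#include <algorithm>
#include <cmath>

struct HSI { double h, s, i; };  // hue in degrees [0, 360), s in [0, 1]

// Standard RGB-to-HSI conversion for 8-bit channel values.
HSI rgbToHsi(double r, double g, double b) {
    const double kPi = 3.14159265358979;
    double sum = r + g + b;
    HSI out;
    out.i = sum / 3.0;
    out.s = (sum > 0.0)
        ? 1.0 - 3.0 * std::min(std::min(r, g), b) / sum
        : 0.0;
    double num = 0.5 * ((r - g) + (r - b));
    double den = std::sqrt((r - g) * (r - g) + (r - b) * (g - b));
    double ratio = (den > 0.0) ? num / den : 0.0;
    if (ratio > 1.0)  ratio = 1.0;    // guard acos against rounding
    if (ratio < -1.0) ratio = -1.0;
    out.h = std::acos(ratio) * 180.0 / kPi;
    if (b > g) out.h = 360.0 - out.h;
    return out;
}

// Classify a pixel as a curbstone candidate by hue/saturation ranges.
bool isCurbstonePixel(double r, double g, double b) {
    HSI p = rgbToHsi(r, g, b);
    const double kHueMin = 20.0, kHueMax = 60.0;  // assumed hue band
    const double kSatMin = 0.25;                  // assumed min saturation
    return p.h >= kHueMin && p.h <= kHueMax && p.s >= kSatMin;
}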

2.1.2 Idea of proposed guidance methods for outdoor navigation

In this study, the autonomous vehicle is designed to guide blind people to walk safely in the sidewalk environment learned in the learning procedure.

To accomplish this, we may encounter at least two difficulties as follows.

The first is whether the vehicle knows of the existence of the user who follows it. In several studies, developers attach a pole for the user to hold while following the guide robot; but if the user releases the pole, the robot will not know whether the user is still behind. To solve this problem, we propose a method using ultrasonic sensors to synchronize the speed of the autonomous vehicle with that of the blind user.

Another difficulty is that dynamic obstacles may block the navigation path of the autonomous vehicle. In Chiang and Tsai's method [21], a 2D projection method was used to compute the depth and width of an obstacle, and a minimum avoidance path was generated from the current position to the next path node. Because only one camera was used to capture images of the environment, the vehicle cannot know the height of an obstacle; if an obstacle is not flat and blocks the navigation path, the vehicle could go forward instead of avoiding it. In our study, we use the proposed two-mirror omni-camera to detect path obstacles, and so can get the range data of each obstacle to solve this problem.

In addition, we propose an adaptive collision avoidance technique based on Chiang and Tsai's method [21] to guide the vehicle to avoid dynamic obstacles after detecting their existence.

We use seven major steps to accomplish the procedure of navigation as described in the following.

1. Decide whether it is necessary to adjust the camera exposure value and the threshold value for use in image analysis.

2. Compute the distance from the vehicle to the user with the ultrasonic sensors equipped on the vehicle so as to synchronize their speeds (a speed-control sketch is given after this list).

3. Detect the guide line.

4. Localize the position of the vehicle with respect to the guide line.

5. Detect the existence of any dynamic obstacle.

6. Plan a collision-avoidance path.

7. Navigate according to the collision-avoidance path to avoid the obstacle if any.
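The following is a minimal sketch of the speed synchronization in step 2, implemented as a simple proportional control law; the following distance, nominal speed, gain, and the sensor-reading helper are hypothetical placeholders rather than the exact design of this study.

// Placeholder for the real ultrasonic query; returns a fixed sample here.
double readRearSonarMM() { return 950.0; }

// Return the new translational speed (mm/s) of the vehicle.
double synchronizedSpeed() {
    const double kDesiredGapMM = 800.0;  // assumed following distance
    const double kBaseSpeed = 400.0;     // assumed nominal speed (mm/s)
    const double kGain = 0.5;            // assumed proportional gain
    double gap = readRearSonarMM() - kDesiredGapMM;
    // gap > 0: the user lags behind, so slow down until he or she
    // catches up; gap < 0: the user is close, so speed back up.
    double speed = kBaseSpeed - kGain * gap;
    if (speed < 0.0) speed = 0.0;        // never back into the user
    return speed;
}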

2.2 System Configuration

2.2.1 Hardware configuration

All hardware equipment we use in this study is listed in Table 2.1: an autonomous vehicle, a laptop computer, two reflective mirrors designed by ourselves, a perspective camera, and a lens. We will introduce them in detail as follows.

Table 2.1 The hardware equipment we use in this study.

To construct an omni-camera, it is desirable to reduce the size of the perspective camera so that its projected area in the omni-image is as small as possible (note that the camera appears as a black circle in an omni-image). We use a perspective camera of the model ARCAM-200SO, produced by ARTRAY Co., to construct the proposed two-mirror omni-camera, and the lens used is produced by Sakai Co.

Pictures of the camera and the lens are shown in Figure 2.2, and their specifications are listed in Tables 2.2 and 2.3, respectively. The camera size is 33 mm × 33 mm × 50 mm with 2.0 M pixels, and the size of the CMOS sensor is 1/2 inch (6.4 mm × 4.8 mm). The specifications of the camera and the lens are important because we use their parameters to design the proposed two-mirror omni-camera; the design procedure will be introduced in detail in Chapter 3.

Before explaining the structure of the proposed two-mirror omni-camera, we illustrate the imaging system roughly in Figure 2.3. The light rays from a point P in the world reach the camera sensor after being reflected by the mirrors and passing through the center of the lens (the blue line and the red one in the figure).

Equipment                Product name     Manufacturer
Autonomous vehicle       Pioneer 3        MobileRobots Inc.
Computer                 W7J              ASUSTeK Inc.
Two reflective mirrors   (custom design)  Micro-Star Int'l Co.
Camera                   ARCAM-200SO      ARTRAY Co.
Lens                     HV3Z615T         Sakai Co.


Figure 2.2 (a) The ARCAM-200SO camera produced by ARTRAY Co. (b) The lens produced by Sakai Co.

Table 2.2 The specification of the ARCAM-200SO.

Max resolution                 2.0 M pixels
Size                           33 mm × 33 mm × 50 mm
CMOS size                      1/2" (6.4 mm × 4.8 mm)
Mount                          C-mount
Frame rate at max resolution   8 fps

Table 2.3 The specification of the lens.

Mount          C-mount
Iris           F1.4
Focal length   6-15 mm

We equipped the two-mirror omni-camera on the autonomous vehicle in such a way that the optical axis of the camera was originally parallel to the ground. Figure 2.4(a) shows the omni-image taken by the two-mirror omni-camera when it was affixed on the vehicle in this way. The regions in the omni-image drawn with red lines are the overlapping areas reflected by the two mirrors. We can see that part of the omni-image reflected by the bigger mirror is covered by that reflected by the smaller mirror, so that the overlapping area is relatively small. To compute the range data of objects in the world, each object should be captured by both mirrors, and if this overlapping area is too small, the precision of the obtained range data becomes worse.

To solve this problem, we observe that placing the two-mirror omni-camera with its optical axis parallel to the ground reduces the angle of the incoming light rays and so also reduces the overlapping area of the two omni-images taken by the camera. To enlarge this overlapping area, we made a wedge-shaped shelf with an appropriate slant angle and put it under the camera to elevate the optical axis with respect to the ground; an omni-image taken with the camera so installed is shown in Figure 2.4(b). We can see that the overlapping area becomes relatively larger. The proposed two-mirror omni-camera, with its optical axis elevated to a certain angle and affixed on the autonomous vehicle, is shown in Figure 2.5.

The autonomous vehicle we use is the Pioneer 3 produced by MobileRobots Inc.; a picture of it is shown in Figure 2.6, and its specification is listed in Table 2.4. The Pioneer 3 has a 44 cm × 38 cm × 22 cm aluminum body with two 16.5 cm wheels and a caster. The maximum speed of the vehicle is 1.6 meters per second on flat floors, the maximum rotation speed is 300 degrees per second, and the maximum climbing grade is 25°. It can carry payloads of up to 23 kg. The vehicle carries three 12 V rechargeable lead-acid batteries; if the batteries are fully charged initially, it can run for 18 to 24 hours continuously. The vehicle is also equipped with 16 ultrasonic sensors, and an embedded control system provides many functions for developers to control the vehicle.

The laptop computer we use in this study is an ASUS W7J produced by ASUSTeK Computer Inc. We use an RS-232 cable to connect the computer to the autonomous vehicle and a USB cable to connect the computer to the camera. A picture of the laptop is shown in Figure 2.7, and its hardware specification is listed in Table 2.5.

2.2.2 Software configuration

To develop our system, we use Borland C++ Builder 6 with update pack 4 on the Windows XP operating system. Borland C++ Builder is a GUI-based integrated development environment (IDE) based on the C++ programming language. ARTRAY provides a development kit for the camera, called the Capture Module Software Developer Kit (SDK), to assist developers in constructing their systems.

The SDK is an object-oriented interface for the Windows 2000 and Windows XP systems, with bindings for several programming languages such as C++, C, VB.net, C#.net, and Delphi. MobileRobots Inc. provides an application programming interface (API), called ARIA, for developers to control the vehicle. ARIA is an object-oriented interface usable under Linux or Windows in the C++ programming language. We use ARIA to communicate with the vehicle to control its velocity, heading, and some navigation settings, as in the sketch below.
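As an illustration, the following is a minimal ARIA usage sketch of the kind described above, connecting to the vehicle and setting its velocity and heading; it assumes an ARIA version that provides the ArSimpleConnector class (newer releases use a different connector class), so it should be read as a sketch rather than the exact code of our system.

#include "Aria.h"

int main(int argc, char** argv) {
    Aria::init();
    ArRobot robot;
    ArSimpleConnector connector(&argc, argv);
    connector.parseArgs();
    if (!connector.connectRobot(&robot)) {
        Aria::exit(1);                // could not reach the vehicle
    }
    robot.runAsync(true);             // process robot I/O in background
    robot.enableMotors();
    robot.lock();                     // ARIA requires locking around commands
    robot.setVel(300);                // translational speed in mm/s
    robot.setHeading(90);             // absolute heading in degrees
    robot.unlock();
    ArUtil::sleep(3000);              // let the motion run briefly
    Aria::exit(0);
    return 0;
}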

Reviewing the design process of the proposed system, there are four major steps, as described in the following.

1. We design a two-mirror omni-camera and equip it on the autonomous vehicle.

2. We propose a calibration method based on Jeng and Tsai's method [22] to calibrate the camera, and propose a correction method using a curve fitting technique to correct the odometer values available from the autonomous vehicle system (a curve-fitting sketch is given after this list).

Figure 2.3 A prototype of the proposed two-mirror omni-camera.


Figure 2.4 Omni-images taken by the two-mirror omni-camera placed at different elevation angles. (a) Image taken when the optical axis of the camera is parallel to the ground. (b) Image taken when the optical axis is at an angle of 45° with respect to the ground.


Figure 2.5 The proposed two-mirror omni-camera equipped on the autonomous vehicle. (a) A side view of the autonomous vehicle. (b) A 45° view of the autonomous vehicle.

Figure 2.6 The Pioneer 3 vehicle used in this study [19].

Table 2.4 Specification of the vehicle Pioneer 3.

Size                 44 cm × 38 cm × 22 cm
Max speed            1.6 m/sec
Max climbing grade   25°
Max load             23 kg

Figure 2.7 The ASUS W7J laptop used in this study.

Table 2.5 Specification of the ASUS W7J.

System platform   Intel Centrino Duo
CPU               T2400 dual-core, 1.84 GHz
RAM size          1.5 GB
GPU               Nvidia GeForce Go 7400
HDD size          80 GB

3. In the learning phase, we record environment information and conduct the path planning procedure to obtain the necessary path nodes. We then detect the color of the curbstone and use the result, together with a line fitting technique, to compute the direction and position of the autonomous vehicle. We also use a method to detect the user's hand positions for commanding the autonomous vehicle.

4. In the navigation phase, the autonomous vehicle guides the person through the environment learned in the learning phase. We design a method to synchronize the vehicle speed with that of the user. Then we use the proposed two-mirror omni-camera to compute the range data of obstacles and apply a method to avoid dynamic obstacles.
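The following is a minimal sketch of the curve fitting technique mentioned in step 2 of the list above, fitting a line y = a + bx by least squares, where x is a raw odometer reading and y the corresponding measured ground-truth value; the linear model is an assumption for illustration, as the actual fitting function of this study may be of a different form.

#include <cstddef>
#include <vector>

struct Line { double a, b; };  // model: y = a + b * x

// Least-squares fit of (x[i], y[i]) pairs collected in a calibration run.
Line fitLeastSquares(const std::vector<double>& x,
                     const std::vector<double>& y) {
    Line l = { 0.0, 1.0 };  // identity mapping if there is no data
    std::size_t n = x.size();
    if (n == 0) return l;
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    double denom = n * sxx - sx * sx;
    if (denom != 0.0) l.b = (n * sxy - sx * sy) / denom;
    l.a = (sy - l.b * sx) / n;
    return l;
}

// Correct a raw odometer value with the fitted model.
double correctOdometer(const Line& l, double raw) {
    return l.a + l.b * raw;
}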

We divide our system architecture into three levels, as shown in Figure 2.8. Several hardware control modules are at the base level. Image processing techniques, pattern recognition techniques, methods for computing 3D range data, and data analysis techniques are at the second level. The highest level contains our main processing modules, which are based on the techniques and knowledge of the second level.

The road detection module detects the guide line (the curb) along the sidewalk to guide the autonomous vehicle. The hand position detection module detects the user's hand positions to guide the vehicle when necessary in the learning procedure. The speed synchronization module adjusts the vehicle speed dynamically to synchronize with the human speed. The position processing module records the learned paths in the learning phase and calculates the vehicle position in the navigation phase. Finally, the obstacle avoidance module handles the avoidance process to prevent collisions with dynamic obstacles detected during navigation. We describe these techniques in more detail in the following chapters.

Figure 2.8 Architecture of the proposed system.

Chapter 3

Design of a New Type of Two-mirror Omni-camera

3.1 Review of Conventional Omni-cameras

A conventional omni-camera is composed of a reflective mirror and a perspective camera. An example is shown in Figure 3.1(a), and an omni-image captured by it is shown in Figure 3.1(b). We can see that, with the aid of a reflective mirror, the omni-image has a much larger field of view (FOV) than images taken with a traditional camera. The mirror may be of various shapes, such as hyperbolic, ellipsoidal, parabolic, or circular; an illustration is shown in Figure 3.2. The design principle of the two-mirror omni-camera we use in this study will be described in the next section.

Figure 3.1 A conventional catadioptric camera. (a) Structure of camera. (b) The acquired omni-image [7].

Figure 3.2 Possible shapes of reflective mirrors used in omni-cameras [7].

3.1.1 Derivation of Equation of Projection on Omni-image

The two-mirror omni-camera used in this study is constructed with two reflective mirrors of hyperbolic shapes and a camera with a perspective lens. Here, we first use an omni-camera with a single hyperbolic mirror to show the derivation of the equation of image projection. An important optical property of a hyperbolic mirror is illustrated in Figure 3.3: when a light ray directed toward one focus point F1 of the hyperbola hits the hyperbolic surface at a point P, it is reflected to the other focus point F2.

Figure 3.3 An optical property of the hyperbolic shape.

We use this property of the hyperbolic shape to construct the omni-camera we need in this study. The image projection of the omni-camera is shown in Figure 3.4, where two coordinate systems, namely the camera coordinate system (CCS) and the image coordinate system (ICS), are used to illustrate the imaging process. Based on the property mentioned previously, the light ray from a point P(X, Y, Z) in the CCS first goes toward the focus Om of the mirror, is then reflected by the hyperbolic mirror surface toward the center of the lens Oc, and is finally projected onto the image plane to form the image point q(u, v) in the ICS. To satisfy this property of the hyperbolic shape, the optical axes of the perspective camera and the mirror are assumed to be aligned, so that the omni-camera is of the single-viewpoint (SVP) type.

The point O, taken to be the origin of the CCS, is the middle point between the two focus points Om and Oc of the hyperbola, where Om, which is also the center of the base of the hyperbolic-shaped mirror, is located at (0, 0, -c), and Oc is located at (0, 0, c) in the CCS. The hyperbolic shape of the mirror of the omni-camera can be described by the following equation:

\frac{Z^2}{a^2} - \frac{X^2 + Y^2}{b^2} = 1, (3.1)

where a and b are the two parameters of the hyperbolic shape, and

c = \sqrt{a^2 + b^2} (3.2)

is the distance from the origin O to the focus point Om. The projection relationship between the CCS and the ICS can be described [7] as:

u = \frac{Xf(b^2 - c^2)}{(b^2 + c^2)Z - 2bc\sqrt{X^2 + Y^2 + Z^2}}, \quad v = \frac{Yf(b^2 - c^2)}{(b^2 + c^2)Z - 2bc\sqrt{X^2 + Y^2 + Z^2}}, (3.3)

where f is the focal length of the camera.
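The following is a small numeric sketch of the projection relationship above, computing the image coordinates (u, v) of a CCS point from given mirror parameters a and b and focal length f; the sample parameter values are placeholders, not the calibrated parameters of our camera.

#include <cmath>
#include <cstdio>

// Compute the image point (u, v) of a CCS point P = (X, Y, Z) by the
// projection relationship (3.3), with c = sqrt(a^2 + b^2) as in (3.2).
void projectToImage(double X, double Y, double Z,
                    double a, double b, double f,
                    double* u, double* v) {
    double c = std::sqrt(a * a + b * b);
    double R = std::sqrt(X * X + Y * Y + Z * Z);
    double denom = (b * b + c * c) * Z - 2.0 * b * c * R;
    *u = X * f * (b * b - c * c) / denom;
    *v = Y * f * (b * b - c * c) / denom;
}

int main() {
    double u, v;
    // Placeholder mirror parameters and focal length (all in mm).
    projectToImage(100.0, 50.0, 300.0, 20.0, 30.0, 6.0, &u, &v);
    std::printf("u = %.3f, v = %.3f\n", u, v);
    return 0;
}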

Figure 3.4 Relation between the camera coordinates and the image coordinates.

3.2 Proposed Design of a Two-mirror Omni-camera

3.2.1 Idea of design

Compared with a stereo camera, a conventional single camera can only take images of a single geometric plane and cannot be used to get 3D range data, including the depth, width, and height information of scene points. Moreover, when using two or more cameras to construct a stereo camera system, keeping the optical axes of the cameras mutually parallel or aligned is a difficult problem. We desire to solve