
Chapter 2 Proposed Ideas and System Configuration

2.2 System configuration

2.2.1 Hardware configuration

All the hardware equipment used in this study is listed in Table 2.1: an autonomous vehicle, a laptop computer, two reflective mirrors designed by ourselves, a perspective camera, and a lens. We introduce each item in detail as follows.

Table 2.1 The hardware equipment used in this study.

To construct an omni-camera, it is desirable to reduce the size of the perspective camera so that its projected area in the omni-image is as small as possible (note that the camera appears as a black circle in an omni-image). We use a perspective camera of the model ARCAM-200SO, produced by ARTRAY Co., to construct the proposed two-mirror omni-camera, and the lens used is produced by Sakai Co.

The pictures of the camera and the lens are shown in Figure 2.2, and their specifications are listed in Table 2.2 and Table 2.3, respectively. The camera size is 33 mm × 33 mm × 50 mm with 2.0 M pixels, and the size of the CMOS sensor is 1/2 inch (6.4 mm × 4.8 mm). The specifications of the camera and the lens are important because we use the parameters in them to design the proposed two-mirror omni-camera; the design procedure is introduced in detail in Chapter 3.

Before explaining the structure of the proposed two-mirror omni-camera, we illustrate the imaging system roughly in Figure 2.3. The light rays from a point P in the world reach the sensor of the camera after being reflected by the mirrors and passing through the center of the lens (the blue line and the red one in the figure).

Equipment                 Product name     Manufacturer
Autonomous vehicle        Pioneer 3        MobileRobots Inc.
Computer                  W7J              ASUSTeK Inc.
Two reflective mirrors    (self-designed)  Micro-Star Int'l Co.
Camera                    ARCAM-200SO      ARTRAY Co.
Lens                      HV3Z615T         Sakai Co.


Figure 2.2 (a) The ARCAM-200SO camera produced by ARTRAY Co. (b) The HV3Z615T lens produced by Sakai Co.

Table 2.2 The specification of the ARCAM-200SO camera.

Max resolution                  2.0 M pixels
Size                            33 mm × 33 mm × 50 mm
CMOS size                       1/2" (6.4 mm × 4.8 mm)
Mount                           C-mount
Frame rate at max resolution    8 fps

Table 2.3 The specification of the HV3Z615T lens.

Mount           C-mount
Iris            F1.4
Focal length    6–15 mm

We mounted the two-mirror omni-camera on the autonomous vehicle in such a way that the optical axis of the camera was originally parallel to the ground. Figure 2.4(a) shows the omni-image taken by the two-mirror omni-camera when it was affixed on the vehicle in this way. The regions outlined in red in the omni-image are the overlapping area reflected by the two mirrors. We can see that part of the omni-image reflected by the bigger mirror is covered by that reflected by the smaller mirror, and that the overlapping area is relatively small. To compute the range data of objects in the world, each object should be captured by both mirrors, and if this overlapping area is too small, the precision of the obtained range data becomes worse.

To solve this problem, we observed that placing the two-mirror omni-camera with its optical axis parallel to the ground reduces the angle of the incoming light rays, and so also reduces the overlapping area of the two omni-image regions taken by the camera. To enlarge this overlapping area, we made a wedge-shaped shelf with an appropriate slant angle and put it under the camera to elevate the optical axis with respect to the ground; an omni-image taken with the camera so installed is shown in Figure 2.4(b). We can see that the overlapping area becomes relatively larger. The proposed two-mirror omni-camera, with its optical axis elevated to a certain angle and affixed on the autonomous vehicle, is shown in Figure 2.5.

The autonomous vehicle we use is the Pioneer 3 produced by MobileRobots Inc.; a picture of it is shown in Figure 2.6, and its specification is listed in Table 2.4. The Pioneer 3 has a 44 cm × 38 cm × 22 cm aluminum body with two 16.5 cm wheels and a caster. The maximum speed of the vehicle is 1.6 meters per second on flat floors, its maximum rotation speed is 300 degrees per second, and its maximum climbing grade is 25°. It can carry payloads of up to 23 kg. The vehicle carries three 12 V rechargeable lead-acid batteries; if the batteries are fully charged initially, it can run for 18 to 24 hours continuously. The vehicle is also equipped with 16 ultrasonic sensors. A control system embedded in the vehicle provides many functions for developers to control it.

The laptop computer we use in this study is the ASUS W7J produced by ASUSTeK Computer Inc. We use an RS-232 cable to connect the computer to the autonomous vehicle and a USB cable to connect the computer to the camera. A picture of the notebook is shown in Figure 2.7, and its hardware specification is listed in Table 2.5.

2.2.2 Software configuration

To develop our system, we use Borland C++ Builder 6 with Update Pack 4 on the Windows XP operating system. Borland C++ Builder is a GUI-based integrated development environment (IDE) based on the C++ programming language. ARTRAY provides a development tool for the camera, called the Capture Module Software Developer Kit (SDK), to assist developers in constructing their systems.

The SDK is an object-oriented interface which runs on the Windows 2000 and Windows XP systems and supports several programming languages such as C++, C, VB.NET, C#.NET, and Delphi. MobileRobots Inc. provides an application programming interface (API), called ARIA, for developers to control the vehicle. ARIA is an object-oriented interface usable under Linux or Windows in the C++ programming language. We use ARIA to communicate with the vehicle and control its velocity, heading, and some navigation settings.
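As a minimal sketch of this kind of control, the following C++ fragment uses the standard ARIA connection and motion calls; the particular speed and time values are illustrative assumptions, not settings taken from our system.

```cpp
// Minimal ARIA sketch: connect to the vehicle and drive it briefly.
// The speed and time values below are illustrative assumptions.
#include "Aria.h"

int main(int argc, char **argv)
{
  Aria::init();                          // initialize the ARIA library
  ArArgumentParser parser(&argc, argv);  // lets ARIA read connection options
  ArSimpleConnector connector(&parser);
  ArRobot robot;

  if (!connector.parseArgs() ||
      !connector.connectRobot(&robot)) { // open the RS-232 connection
    ArLog::log(ArLog::Terse, "Could not connect to the vehicle.");
    Aria::exit(1);
  }

  robot.runAsync(true);   // process robot I/O in a background thread

  robot.lock();           // ArRobot is shared with the I/O thread
  robot.enableMotors();
  robot.setVel(200);      // translational speed in mm/sec
  robot.setRotVel(10);    // rotational speed in deg/sec
  robot.unlock();

  ArUtil::sleep(3000);    // let the vehicle move for three seconds

  robot.lock();
  robot.stop();
  robot.unlock();

  Aria::exit(0);
  return 0;
}
```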

Reviewing the design process of the proposed system, there are four major steps, as described in the following.

1. We design a two-mirror omni-camera and mount it on the autonomous vehicle.

2. We propose a calibration method based on Jeng and Tsai’s method [22] to calibrate the camera, and a calibration method using a curve fitting technique to correct the odometer values available from the autonomous vehicle system (a minimal curve-fitting sketch is given after Step 4 below).

Figure 2.3 A prototype of the proposed two-mirror omni-camera.


Figure 2.4 Omni-images taken by the two-mirror omni-camera placed at different elevation angles. (a) Image taken when the optical axis of the camera is parallel to the ground. (b) Image taken when the optical axis of the camera is placed at an angle of 45° with respect to the ground.


Figure 2.5 The proposed two-mirror omni-camera mounted on the autonomous vehicle. (a) A side view of the autonomous vehicle. (b) A 45° view of the autonomous vehicle.

Figure 2.6 Pioneer 3, the vehicle used in this study [19].

Table 2.4 Specification of the vehicle Pioneer 3.

Size                  44 cm × 38 cm × 22 cm
Max speed             1.6 m/sec
Max climbing grade    25°
Max load              23 kg

Figure 2.7 The notebook ASUS W7J used in this study.

Table 2.5 Specification of ASUS W7J.

System platform    Intel Centrino Duo
CPU                T2400 dual-core, 1.83 GHz
RAM size           1.5 GB
GPU                NVIDIA GeForce Go 7400
HDD size           80 GB

3. In the learning phase, we record environment information and conduct the path planning procedure to obtain the necessary path nodes. We then detect the color of the curbstone and use the result, together with a line fitting technique, to compute the direction and position of the autonomous vehicle. We also use a method to find the user’s hand positions for commanding the autonomous vehicle.

4. In the navigation phase, the autonomous vehicle guides the person in the environment learned in the learning phase. We design a method to synchronize the vehicle speed with that of the user. Then, we use the proposed two-mirror omni-camera to compute the range data of obstacles and use a method to avoid dynamic obstacles.
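To make the curve-fitting idea in Step 2 concrete, the following C++ sketch fits a least-squares line that maps raw odometer readings to measured ground-truth distances. The linear model and the sample numbers are illustrative assumptions only, not the calibration model derived in the later chapters.

```cpp
// Least-squares line fit y = m*x + k, illustrating how raw odometer
// readings x could be corrected with ground-truth distances y.
// The linear model and the sample data are illustrative assumptions.
#include <cstdio>
#include <vector>

struct Line { double m, k; };

Line fitLine(const std::vector<double> &x, const std::vector<double> &y)
{
  const size_t n = x.size();
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (size_t i = 0; i < n; ++i) {
    sx += x[i]; sy += y[i];
    sxx += x[i] * x[i]; sxy += x[i] * y[i];
  }
  const double denom = n * sxx - sx * sx;  // nonzero if x is not constant
  Line l;
  l.m = (n * sxy - sx * sy) / denom;
  l.k = (sy - l.m * sx) / n;
  return l;
}

int main()
{
  // Hypothetical samples: raw odometer value vs. measured distance (cm).
  std::vector<double> raw  = {100, 200, 300, 400};
  std::vector<double> real = {103, 207, 309, 414};
  Line l = fitLine(raw, real);
  std::printf("corrected = %.4f * raw + %.4f\n", l.m, l.k);
  return 0;
}
```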

We divide our system architecture into three levels, as shown in Figure 2.8. Several hardware control modules are in the base level. Image processing techniques, pattern recognition techniques, methods for computing 3D range data, and data analysis techniques are in the second level. The highest level contains our main processing modules, which are built on the techniques and knowledge in the second level.

The road detection module detects the guide line (the curb) along the sidewalk to guide the autonomous vehicle. The hand position detection module detects the user’s hand positions to guide the autonomous vehicle when necessary in the learning procedure. The speed synchronization module adjusts the vehicle speed dynamically to synchronize with the human speed. The position processing module records the learned paths in the learning phase and calculates the vehicle position in the navigation phase. Finally, the obstacle avoidance module handles the avoidance process to prevent collisions after the autonomous vehicle detects dynamic obstacles in the navigation procedure. We describe these techniques in more detail in the following chapters.

Figure 2.8 Architecture of the proposed system.

Chapter 3 Design of a New Type of Two-mirror Omni-camera

3.1 Review of Conventional Omni-cameras

A conventional omni-camera is composed of a reflective mirror and a perspective camera. An example is shown in Figure 3.1(a), and an omni-image captured by the omni-camera is shown in Figure 3.1(b). We can see that, with the aid of a reflective mirror, the omni-image has a larger FOV than images taken with a traditional camera. The shape of the mirror may be of various types, such as hyperbolic, ellipsoidal, parabolic, or circular, as illustrated in Figure 3.2. The design principle of the two-mirror omni-camera used in this study is described in the next section.

Figure 3.1 A conventional catadioptric camera. (a) Structure of the camera. (b) An acquired omni-image [7].

Figure 3.2 Possible shapes of reflective mirrors used in omni-cameras [7].

3.1.1 Derivation of Equation of Projection on Omni-image

The two-mirror omni-camera used in this study is constructed with two hyperbolic-shaped reflective mirrors and a camera with a perspective lens. Here, we first use an omni-camera with a single hyperbolic-shaped mirror to show the derivation of the equation of image projection. An important optical property of the hyperbolic-shaped mirror is illustrated in Figure 3.3: a light ray directed toward one focus point F1 of the hyperbolic shape is reflected at the point P where it meets the hyperbolic surface, and the reflected ray passes through the other focus point F2.

Figure 3.3 An optical property of the hyperbolic shape.
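This focal property can also be checked numerically. The following C++ sketch uses an illustrative 2D cross-section (the parameters a and b below are arbitrary assumed values, not our mirror’s): it reflects a ray aimed at one focus off the hyperbolic surface and verifies that the reflected ray passes through the other focus.

```cpp
// Numerical check of the focal property of a hyperbolic mirror.
// Cross-section: z^2/b^2 - x^2/a^2 = 1, foci F1 = (0, c), F2 = (0, -c).
// The parameters a and b are arbitrary illustrative values.
#include <cstdio>
#include <cmath>

int main()
{
  const double a = 3.0, b = 4.0;
  const double c = std::sqrt(a * a + b * b);

  // A point M on the upper sheet of the hyperbola.
  const double xm = 2.5;
  const double zm = b * std::sqrt(1.0 + xm * xm / (a * a));

  // Unit direction of a ray travelling from M toward the focus F1.
  double dx = -xm, dz = c - zm;
  const double dlen = std::hypot(dx, dz);
  dx /= dlen; dz /= dlen;

  // Unit surface normal at M: gradient of F(x,z) = z^2/b^2 - x^2/a^2 - 1.
  double nx = -2.0 * xm / (a * a), nz = 2.0 * zm / (b * b);
  const double nlen = std::hypot(nx, nz);
  nx /= nlen; nz /= nlen;

  // Mirror reflection: d' = d - 2 (d . n) n.
  const double dot = dx * nx + dz * nz;
  const double rx = dx - 2.0 * dot * nx;
  const double rz = dz - 2.0 * dot * nz;

  // The reflected ray should be collinear with the direction M -> F2,
  // so the 2D cross product below should vanish (up to rounding).
  const double cross = rx * (-c - zm) - rz * (-xm);
  std::printf("deviation from F2 (should be ~0): %.3e\n", cross);
  return 0;
}
```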

We use this property of the hyperbolic shape to construct the omni-camera we need in this study. The image projection process of the omni-camera is shown in Figure 3.4, and two coordinate systems, namely, the camera coordinate system (CCS) and the image coordinate system (ICS), are used to illustrate the principle of the imaging process. Based on the property mentioned previously, the light ray from a point P(X, Y, Z) in the CCS first goes toward the center of the mirror Om, is then reflected by the hyperbolic mirror surface toward the center of the lens Oc, and is finally projected onto the image plane to form the image point q(u, v) in the ICS. To satisfy this property of the hyperbolic shape, the optical axes of the perspective camera and the mirror are assumed to be aligned, so that the omni-camera is of the single-viewpoint (SVP) type.

The point O, taken to be the origin of the CCS, is the middle point between the two focus points Om and Oc of the hyperbola, where Om, which is also the center of the base of the hyperbolic-shaped mirror, is located at (0, 0, -c), and Oc is located at (0, 0, c) in the CCS. The hyperbolic shape of the mirror of the omni-camera can be described by the following equation:

\frac{X^2 + Y^2}{a^2} - \frac{Z^2}{b^2} = -1, (3.1)

where a and b are two parameters of the hyperbolic shape, and

c = \sqrt{a^2 + b^2} (3.2)

is the distance from the origin O to the focus point Om.
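As a quick consistency check of Equation (3.2), substituting the mirror parameters obtained later in Table 3.1 gives round focal distances for both designed mirrors:

c_1 = \sqrt{a_1^2 + b_1^2} = \sqrt{11.46^2 + 9.68^2} \approx 15.0 \text{ cm},

c_2 = \sqrt{a_2^2 + b_2^2} = \sqrt{2.41^2 + 4.38^2} \approx 5.0 \text{ cm}.

Together with Equation (3.17) of Section 3.2.2, these correspond to mirror-to-lens distances of 30 cm and 10 cm, whose difference matches the 20 cm baseline lb listed in Table 3.1.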

The projection relationship between the CCS and the ICS can be described by Equation (3.3), as derived in [7], where f is the focal length of the camera.

Figure 3.4 Relation between the camera coordinates and the image coordinates.

3.2 Proposed Design of a Two-mirror Omni-camera

3.2.1 Idea of design

Compared with a stereo camera, a conventional single camera can only capture a single 2D projection of the scene and cannot be used to get 3D range data, including the depth, width, and height information of scene points. Meanwhile, when using two or more cameras to construct a stereo camera system, keeping the optical axes of the cameras mutually parallel or aligned is a difficult problem. We desire to solve this problem by designing a new stereo camera; therefore, a two-mirror omni-camera is proposed in this study. In addition to resolving this problem, the proposed omni-camera also has a larger FOV than the conventional camera.

3.2.2 Details of design

In this section, we introduce the design procedure of the proposed two-mirror omni-camera in detail. An illustration of its structure is shown in Figure 2.3. The upper mirror, named Mirror 1, is bigger, and the lower one, named Mirror 2, is smaller; both are made to be of the hyperbolic shape. To construct the desired two-mirror omni-camera, we took two major steps, as described in the following.

Step 1: Decision of the size and the position of the mirrors.

To construct the desired two-mirror omni-camera, the first step is to decide the light injection and reflection positions on the two mirrors in the imaging process. An assumption is made in advance: on the image plane, the radial distance from the image spot of the light rays reflected by Mirror 2 to the image center Oci is half of the radial distance from the image spot of the light rays reflected by Mirror 1 to Oci.

Based on the assumption mentioned above, a simulated omni-image is shown in Figure 3.5, where the red part is the region reflected by Mirror 2 and the blue part is the region reflected by Mirror 1. The radius of the blue region is two times the radius of the red region.

Before deriving the formulas, we define some notations appearing in Figure 3.6, a simplified illustration of the two-mirror structure, as follows:

(1) x1 is the radius of Mirror 1;

(2) x2 is the radius of Mirror 2;

(3) d1 and d2 are the distances from Mirror 1 to the lens center Oc and from Mirror 2 to Oc, respectively;

(4) f is the focal length of the perspective camera;

(5) w is the width of the CMOS sensor;

(6) b1 and c1 are the parameters of Mirror 1; and

(7) b2 and c2 are the parameters of Mirror 2.

Figure 3.5 The desired omni-image.

In Figure 3.6, the blue lines represent the light rays which go toward the center of Mirror 1 with an incidence angle of 0° and are reflected onto the image plane (the CMOS sensor). The red lines, interpreted similarly, go toward the center of Mirror 2. Focusing on the marked area in Figure 3.6, we get two pairs of similar triangles, (△AO1Oc, △DOciOc) and (△BO2Oc, △COciOc), where Oci denotes the image center on the image plane. By the property of similar triangles, we can describe the relationships between them by the following equations:

\frac{x_1}{d_1} = \frac{\overline{DO_{ci}}}{f}, (3.4)

\frac{x_2}{d_2} = \frac{\overline{CO_{ci}}}{f}, (3.5)

where, by the assumption made above, \overline{CO_{ci}} = \overline{DO_{ci}}/2.

Figure 3.6 A simplified model of the two-mirror omni-camera structure.

Step 2: Calculation of the parameters of the mirrors.

The second step is to calculate the parameters of the hyperbolic shapes of the mirrors. As shown in Figure 3.4, the relation between the CCS and the ICS of an omni-camera system, as derived by Wu and Tsai [7], may be described by

\tan\alpha = \frac{(b^2 + c^2)\sin\beta - 2bc}{(b^2 - c^2)\cos\beta}, (3.6)

where α is the elevation angle of the incoming light ray with respect to the mirror center, and β is the depression angle of the reflected ray at Oc, which is related through the focal length f to the radial distance r = \sqrt{u^2 + v^2} of the image point in the ICS.

To give the omni-camera the largest FOV, let the incidence angle be 0°, which specifies a horizontal light ray going from a point P(X, Y, Z) toward the mirror center and reflected at the mirror rim onto the image plane. As illustrated in Figure 3.7, which is a simplified version of Figure 3.4, from △O1ROc we can obtain the following equations:

\sin\theta = \frac{d}{\sqrt{x^2 + d^2}}, (3.9)

\cos\theta = \frac{x}{\sqrt{x^2 + d^2}}, (3.10)

where x and d denote the radius of the mirror and its distance to the lens center, respectively. In addition, we can obtain the following equation:

\frac{(b^2 + c^2)\sin\theta - 2bc}{(b^2 - c^2)\cos\theta} = 0 (3.11)

based on Equation (3.6) with α = 0°.

Equation (3.11) may be transformed, by multiplying both sides by the denominator (b^2 - c^2)\cos\theta, into

(b^2 + c^2)\sin\theta - 2bc = 0, (3.12)

or equivalently,

b^2\sin\theta - 2cb + c^2\sin\theta = 0, (3.13)

from which we can get the parameter b by the quadratic formula:

b = \frac{2c - \sqrt{4c^2 - 4c^2\sin^2\theta}}{2\sin\theta} = \frac{c\,(1 - \cos\theta)}{\sin\theta}, (3.14)

where the smaller root is taken because b must be smaller than c (recall that c^2 = a^2 + b^2).

Substituting Equations (3.9) and (3.10) into Equation (3.14), we get

b = \frac{c\,\bigl(1 - \frac{x}{\sqrt{x^2 + d^2}}\bigr)}{\frac{d}{\sqrt{x^2 + d^2}}}, (3.15)

which can be simplified into

b = \frac{c\,(\sqrt{x^2 + d^2} - x)}{d}. (3.16)

Because d is the distance from one focus point of the hyperbolic shape (the mirror center) to the other focus point Oc (the lens center), as shown in Figure 3.6, we can obtain the parameter c of the mirror by

c = 0.5 \times d. (3.17)

Substituting Equation (3.17) into Equation (3.16), we get

b = \frac{\sqrt{x^2 + d^2} - x}{2}. (3.18)

Now we can use the general analytic equations, Equations (3.17) and (3.18), to compute the parameters of the mirrors as long as we have the values of x and d.

To calculate the parameters of Mirror 2, we have to make a second assumption, namely, that the radius x2 is a pre-selected value. Accordingly, we can obtain the distance d2 from Equation (3.5), and then use Equations (3.17) and (3.18) to calculate the parameters c2 and b2. The parameter a2 may then be computed by Equation (3.2).

Finally, we want to get the parameters of Mirror 1. It is mentioned in [12] that a longer baseline of the two mirrors (shown as the segment O1O2 in Figure 3.6, whose length is equal to d1 − d2) yields a larger disparity (the difference between the elevation angles) of the two mirrors in the image. If the baseline is too small, the range data will not be accurate. Therefore, we assume the baseline to be of an appropriate length, denoted as lb and listed in Table 3.1, so that we get d1 = d2 + lb and then the value of x1 by Equation (3.4). We can then get the parameters c1 and b1 of Mirror 1 by using the general Equations (3.17) and (3.18), and the parameter a1 may be computed accordingly by Equation (3.2). A small program implementing this procedure is sketched below.
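As a compact illustration of Steps 1 and 2, the following C++ sketch computes (c, b, a) for a mirror from a rim radius x and a focus-to-lens distance d via Equations (3.17), (3.18), and (3.2), and checks the design condition of Equation (3.12). The input values x2 = 1.33 cm and d2 = 10 cm are illustrative assumptions chosen so that the outputs approximately reproduce the Mirror 2 row of Table 3.1; feeding the Mirror 1 distance d1 = d2 + lb = 30 cm with its correspondingly larger rim radius reproduces the Mirror 1 row in the same way.

```cpp
// Mirror parameter design following Equations (3.17), (3.18), and (3.2).
// The input values below are illustrative assumptions, chosen to
// approximately reproduce the Mirror 2 parameters in Table 3.1.
#include <cstdio>
#include <cmath>

struct MirrorParams { double a, b, c; };

MirrorParams designMirror(double x, double d)
{
  MirrorParams m;
  m.c = 0.5 * d;                               // Equation (3.17)
  m.b = 0.5 * (std::sqrt(x * x + d * d) - x);  // Equation (3.18)
  m.a = std::sqrt(m.c * m.c - m.b * m.b);      // from Equation (3.2)
  return m;
}

int main()
{
  const double x2 = 1.33, d2 = 10.0;  // assumed rim radius and distance (cm)
  MirrorParams m2 = designMirror(x2, d2);
  std::printf("Mirror 2: a = %.2f cm, b = %.2f cm, c = %.2f cm\n",
              m2.a, m2.b, m2.c);      // approx. a = 2.41, b = 4.38, c = 5.00

  // Check the design condition (3.12): (b^2 + c^2) sin(theta) - 2bc = 0,
  // where theta is the angle in triangle O1-R-Oc with tan(theta) = d/x.
  const double theta = std::atan2(d2, x2);
  const double residual =
      (m2.b * m2.b + m2.c * m2.c) * std::sin(theta) - 2.0 * m2.b * m2.c;
  std::printf("design condition residual (should be ~0): %.3e\n", residual);
  return 0;
}
```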

The calculated parameters of the proposed two-mirror omni-camera are shown in Table 3.1, and some simulated images of the camera are shown in Figure 3.8. We use Equation (3.3) to create the simulated images with a focal length of 6 mm and the parameter values listed in Table 3.1. As illustrated in Figure 3.8, the top-left image is the perspective image; the top-right and bottom-right images are the simulated images formed by Mirror 1 and Mirror 2, respectively; and the bottom-left image is the simulated image composed of the top-right and bottom-right ones.

Table 3.1 The parameters of the proposed two-mirror omni-camera obtained from the design process.

            a           b          Baseline lb
Mirror 1    11.46 cm    9.68 cm
Mirror 2    2.41 cm     4.38 cm    20 cm

Figure 3.7 A light ray reflected at the largest radius of the mirror.

3.2.3 3D data acquisition

In this section, we introduce the principle of the computation of the 3D range data and its derivation in the proposed system. In Figure 3.9(a), light rays from a point P in the CCS go toward the centers of the two mirrors, and α1 and α2 are the corresponding elevation angles with respect to each mirror. The triangle △O1O2P formed by the two light rays (the blue line and the red one) is shown in Figure 3.9(b). The distance between the point P and O1 is denoted by d, and the baseline length of the two mirrors, i.e., the distance between O1 and O2, is denoted by h. Based on the geometry shown in Figure 3.9(b), we can derive the following equations by the law of sines:

\frac{d}{\sin(90^\circ - \alpha_2)} = \frac{h}{\sin(\alpha_2 - \alpha_1)}, \quad\text{i.e.,}\quad d = \frac{h\cos\alpha_2}{\sin(\alpha_2 - \alpha_1)}. (3.19)

Figure 3.9 Computation of 3D range data using the two-mirror omni-camera. (a) The ray tracing of a point P in the imaging device. (b) A detailed view of the triangle in (a).
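A minimal sketch of this triangulation, assuming the two elevation angles have already been recovered from the omni-image via the mirror geometry, is given below; the sample angles are illustrative values, while the 20 cm baseline is taken from Table 3.1.

```cpp
// Range computation by the law of sines (Equation (3.19)):
// d = h * cos(alpha2) / sin(alpha2 - alpha1).
// The elevation angles below are illustrative values only.
#include <cstdio>
#include <cmath>

// Distance from the scene point P to the upper mirror center O1,
// given the baseline h and the two elevation angles (in radians).
double rangeFromAngles(double h, double alpha1, double alpha2)
{
  return h * std::cos(alpha2) / std::sin(alpha2 - alpha1);
}

int main()
{
  const double kPi = 3.14159265358979323846;
  const double h = 20.0;                    // mirror baseline (cm), Table 3.1
  const double alpha1 = -30.0 * kPi / 180;  // elevation seen from Mirror 1
  const double alpha2 = -25.0 * kPi / 180;  // elevation seen from Mirror 2

  const double d = rangeFromAngles(h, alpha1, alpha2);
  // Horizontal distance and height of P relative to O1 follow directly.
  std::printf("d = %.2f cm, horizontal = %.2f cm, vertical = %.2f cm\n",
              d, d * std::cos(alpha1), d * std::sin(alpha1));
  return 0;
}
```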