2.4 Outline of Proposed Automatic 3-D House-layout Construction

Creating only a floor-layout map, as described in Section 2.3, is insufficient for use as a 3-D model of the indoor room space. The objects on the walls, such as windows and doors, must also be detected and drawn so that they appear in the desired 3-D room model. We have proposed methods for detecting and recognizing the doors and windows on the walls in the upper and lower omni-images. The principal steps of the methods are shown in Figure 2.6.

First, we determine a scanning range, specified by two direction angles, for each pair of omni-images based on the line equation of each floor-layout edge. Because the lower omni-camera is installed to look downward, it covers the mopboard on the wall.

We use a pano-mapping table lookup technique to get the scanning radius in the omni-image by transforming 3-D space points into the corresponding image coordinates. With the scanning region of each omni-image, we can retrieve the appropriate 3-D information from the different omni-images. Each object detected within the scanning region of an omni-image is regarded as an individual one.

Some objects on a wall, such as windows and doors, may appear in both omni-images of a pair (the upper and the lower one). Therefore, we have to merge the objects detected separately from the upper omni-image and the lower omni-image according to their positions in order to recognize the doors and windows on the walls. Then, we can locate them in 3-D space and draw them in the final 3-D room model in a graphic form.

In summary, the proposed automatic 3-D house-layout construction process includes the following major steps:

1. conduct automatic floor-layout construction by autonomous vehicle navigation and data collection, as described in Section 2.3;

2. determine a scanning region for each omni-image according to the floor-layout edges;

3. retrieve information from the scanning region of each omni-image;

4. combine those objects which are detected separately from the upper and lower omni-cameras according to their positions (see the sketch after this list);

5. recognize doors and windows from these combined objects;

6. construct the house-layout model with doors and windows on it in a graphic form.
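To make step 4 concrete, the following minimal sketch illustrates one way the position-based merging could be implemented. The WallObject structure, its fields, and the overlap tolerance are illustrative assumptions of ours, not details taken from the proposed system.

    from dataclasses import dataclass

    @dataclass
    class WallObject:
        edge_id: int    # index of the floor-layout edge on which the object lies
        start: float    # start position along the edge (cm)
        end: float      # end position along the edge (cm)
        source: str     # "upper", "lower", or "both"

    def overlaps(a, b, tol=20.0):
        # Two detections refer to the same wall object if they lie on the same
        # floor-layout edge and their intervals overlap within a tolerance
        # (the 20 cm value is an assumed, not a documented, threshold).
        return (a.edge_id == b.edge_id and
                a.start - tol <= b.end and b.start - tol <= a.end)

    def merge_objects(upper, lower):
        # Combine detections from the upper and lower omni-images (step 4).
        merged, unmatched = [], list(lower)
        for u in upper:
            match = next((l for l in unmatched if overlaps(u, l)), None)
            if match is not None:
                unmatched.remove(match)
                merged.append(WallObject(u.edge_id, min(u.start, match.start),
                                         max(u.end, match.end), "both"))
            else:
                merged.append(u)     # visible only in the upper omni-image
        merged.extend(unmatched)     # visible only in the lower omni-image
        return merged

An object appearing in both views, such as a door spanning the upper and lower image regions, thus becomes a single candidate for the door and window recognition of step 5.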


Figure 2.5 Flowchart of proposed process of automatic floor-layout construction.



Figure 2.6 Flowchart of proposed outline of automatic 3-D house-layout construction.


Chapter 3

Calibration of a Two-camera Omni-directional Imaging System and Vehicle Odometer

3.1 Introduction

The vehicle used in this study is equipped with two important devices: a two-camera imaging system and a vehicle odometer. We describe the proposed calibration methods for these two devices in this chapter. Before describing the proposed methods, we introduce the definitions of the coordinate systems used in this study in Section 3.1.1 and the relevant coordinate transformations in Section 3.1.2.

The catadioptric omni-camera used in this study is a combination of a reflective hyperboloidal-shaped mirror and a perspective CCD camera. Both the perspective CCD camera and the mirror are assumed to be properly set up so that the omni-camera satisfies the single-viewpoint (SVP) configuration. It is also assumed that the optical axis of the CCD camera coincides with the transverse axis of the hyperboloidal mirror and that the transverse axis is perpendicular to the mirror base plane.

For vehicle navigation by mopboard following, the mopboard positions are essential for vehicle guidance. Besides, the mopboard edge points are very important for 3-D house-layout construction. In the proposed system, the vehicle estimates distance information by analyzing the images captured by the imaging system. Before the imaging system is used, a camera calibration procedure is needed. For this purpose, we use a space-mapping technique proposed by Jeng and Tsai [7] to create a space-mapping table for each omni-camera by finding the relations between specific points in the 2-D omni-image and the corresponding points in 3-D space. In this way, the conventional task of calculating a projection matrix for transforming points between the 2-D omni-image and 3-D space can be omitted. The details of camera calibration are described in Section 3.2.

For vehicle navigation in indoor environments, the vehicle position is the most important piece of information; it is used not only for guiding the vehicle but also as a local reference for transforming estimated positions in the camera coordinate system (CCS) into global positions in the global coordinate system (GCS). However, the vehicle position provided by the odometer may be imprecise because of incrementally accumulated mechanical errors, which also cause deviations from the planned navigation path. Therefore, it is desired to conduct a calibration task to eliminate these errors. In Section 3.3, we review the method for vehicle position calibration proposed by Chen and Tsai [10]; a vision-based calibration method for adjusting the vehicle direction during navigation will be described in the following chapter.

3.1.1 Coordinate Systems

Four coordinate systems are utilized in this study to describe the relative locations between the vehicle and the navigation environment. The coordinate systems are illustrated in Figure 3.1. The definitions of all the coordinate systems are described in the following.

(1) Image coordinate system (ICS): denoted as (u, v). The u-v plane coincides with the image plane, and the origin I of the ICS is placed at the center of the image plane.

(2) Global coordinate system (GCS): denoted as (x, y). The origin G of the GCS is a pre-defined point on the ground. In this study, we define G as the starting position of the vehicle navigation in the mopboard-following process.

(3) Vehicle coordinate system (VCS): denoted as (Vx, Vy). The Vx-Vy plane coincides with the ground, and the origin V is placed at the middle of the line segment that connects the two contact points of the two driving wheels with the ground. The Vx-axis of the VCS is parallel to the line segment joining the two driving wheels and passes through the origin V. The Vy-axis is perpendicular to the Vx-axis and also passes through V.

(4) Camera coordinate system (CCS): denoted as (X, Y, Z). The origin Om of the CCS is a focal point of the hyperboloidal mirror. The X-Y plane coincides with the image plane, and the Z-axis passes through the optical center inside the lens of the CCD camera.

Figure 3.1 The coordinate systems used in this study. (a) The image coordinate system. (b) The vehicle coordinate system. (c) The global coordinate system. (d) The camera coordinate system.


3.1.2 Coordinate Transformation

In this study, the GCS is determined when starting a navigation session. The CCS and the VCS follow the vehicle during navigation. The relation between the GCS and the VCS is illustrated in Figure 3.2(a). We assume that (xp, yp) represents the coordinates of the vehicle in the GCS, and that the relative rotation angle, denoted as θ, is the directional angle between the positive direction of the x-axis in the GCS and the positive direction of the Vx-axis in the VCS. The coordinate transformation between the VCS and the GCS can be described by the following equations:

x = Vx cosθ − Vy sinθ + xp; (3.1)

y = Vx sinθ + Vy cosθ + yp. (3.2)

The main concept of the relation between the CCS and the ICS is illustrated in Figure 3.2(b), though the CCS in Figure 3.2(b) is a little different from the CCS in this study. The relation plays an important role in transforming the camera coordinates (X, Y, Z) of a space point P into the image coordinates (u, v) of its corresponding image point p, as expressed by Eqs. (3.3) through (3.7), where a and b are two parameters satisfying the equation of the hyperboloidal mirror:

R²/a² − Z²/b² = −1, with R² = X² + Y²,

which characterizes the single-viewpoint geometry of the omni-camera [17]. Combining this mirror equation with the above equations, we can express the image coordinates (u, v) of p in terms of (X, Y, Z), where θ is the angle of the space point P with respect to the X-axis as well as that of the corresponding image point p with respect to the u-axis, as shown in Figure 3.2(c).
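As a direct transcription of Eqs. (3.1) and (3.2), the following minimal sketch transforms a point from the VCS into the GCS; the function name and argument layout are our own.

    import math

    def vcs_to_gcs(vx, vy, xp, yp, theta):
        # (vx, vy): point in the VCS; (xp, yp): vehicle position in the GCS;
        # theta: angle between the x-axis of the GCS and the Vx-axis (radians).
        x = vx * math.cos(theta) - vy * math.sin(theta) + xp    # Eq. (3.1)
        y = vx * math.sin(theta) + vy * math.cos(theta) + yp    # Eq. (3.2)
        return x, y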


Figure 3.2 The relations between different coordinate systems in this study. (a) The relation between the GCS and the VCS. (b) Omni-camera and image coordinate systems [11]. (c) Top view of (b).

3.2 Calibration of Omni-directional Cameras

Before using the imaging system, a calibration procedure for the omni-cameras is indispensable. However, the conventional calibration method is complicated, requiring the calculation of intrinsic and extrinsic camera parameters. An alternative way is to use a space-mapping technique to estimate the relation between points in the 2-D image plane and points in 3-D space and to establish a space-mapping table for it [7]. The detailed process is reviewed in Section 3.2.2. In the process of establishing the space-mapping table, the location of the focal point of the hyperboloidal mirror is important because the focal point is taken to be the origin of the CCS. The process of finding the focal point of the hyperboloidal mirror is described in Section 3.2.1.

3.2.1 Proposed Technique for Finding Focal Point of Hyperboloidal Mirror

In order to create the space-mapping table, it is necessary to select some pairs of world space points with known positions and their corresponding points in the omni-images. Note that an image point p is formed by any of the world space points that lie on the incoming ray R, as shown in Figure 3.3, where we suppose that Om is the focal point of the hyperboloidal mirror, Ow is a point on the transverse axis of the hyperboloidal mirror, and P1 and P2 are two space points on the ray R. Besides, we assume that their common corresponding image point is p. Subsequently, we have the corresponding point pairs (P1, p) and (P2, p), which are then used to create the table.

However, if we take Ow as the focal point instead, P1 and P2 will be regarded as lying on different light rays, though their corresponding image point is still p. In this way, the incorrect pairs will result in an incorrect space-mapping table. To obtain accurate pairs, we must find the position of the focal point of the hyperboloidal mirror.


Figure 3.3 The space points and their corresponding image points.

To find the focal point of the hyperboloidal mirror, as shown in Figure 3.4, we use two different landmarks L1 and L2, with known heights and known horizontal distances from the transverse axis of the hyperboloidal mirror, which have the same corresponding image point p. We assume that Ow is at (0, 0, 0). Then, according to the geometry shown in Figure 3.4, the position of the focal point can be computed by the following equations:

tanα = (H1 − OmOw)/D1 = (H2 − H1)/(D2 − D1); (3.12)

OmOw = H1 − D1 × (H2 − H1)/(D2 − D1) = (H1D2 − H2D1)/(D2 − D1), (3.13)

where H1 and H2 are the heights of L1 and L2, D1 and D2 are their horizontal distances from the transverse axis, and α is the elevation angle of the light ray shared by the two landmarks.

Figure 3.4 Finding out the focal point Om.
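Under the reconstruction of Eqs. (3.12) and (3.13) given above, the geometry reduces to a few lines of code; this sketch assumes the two landmarks are given by their heights and horizontal distances from the transverse axis.

    def focal_point_height(h1, d1, h2, d2):
        # Landmarks L1 = (D1, H1) and L2 = (D2, H2) share one image point, so
        # they lie on one light ray through the focal point Om on the mirror
        # axis; intersecting that ray with the axis gives the height of Om.
        tan_alpha = (h2 - h1) / (d2 - d1)   # slope of the shared ray, Eq. (3.12)
        return h1 - d1 * tan_alpha          # height OmOw above Ow, Eq. (3.13)

For example, with hypothetical measurements H1 = 50 cm at D1 = 100 cm and H2 = 30 cm at D2 = 150 cm, the shared ray has slope −0.4 and the focal point lies 90 cm above Ow.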


3.2.2 Review of Adopted Camera Calibration Method

Jeng and Tsai [7] proposed a space-mapping method to estimate the relation between points in the 2-D image plane and points in 3-D space and to establish a space-mapping table. By observing Figure 3.2(b) and Eqs. (3.3) through (3.7), it is noted that there exists a one-to-one relation between the elevation angle ρ and the radial distance r. By using the space-mapping table together with the rotational invariance property, we can obtain the relative elevations and directions of the concerned targets in images.

The adopted method [7] includes three major procedures: landmark learning, estimation of coefficients of a radial stretching function, and space-mapping table creation, as described respectively in the following.

(1) Landmark learning ---

We select some landmark point pairs consisting of world space points with known positions and their corresponding pixels in a captured omni-image. More specifically, the coordinates of the landmark points are measured manually with respect to a selected origin of the CCS. In this study, the origin of the CCS is a focal point of the hyperboloidal mirror, which can be found as described in Section 3.2.1. As shown in Figure 3.5, we select n landmark points in the omni-image and record the pairs of the space coordinates (Xk, Yk, Zk) and the image coordinates (uk, vk), where k = 0, 1, …, n − 1.

(2) Estimation of coefficients of radial stretching function ---

As shown in Figure 3.6, equal elevation angles correspond to equal radial distances. Therefore, the radial distance r from an image pixel p at (u, v) in the ICS to the image center Oc at (u0, v0) may be computed by r = fr(ρ). In this study, we attempt to describe the function fr(ρ), called a radial stretching function, by the following 5th-degree polynomial:

r = fr(ρ) = a0 + a1ρ + a2ρ² + a3ρ³ + a4ρ⁴ + a5ρ⁵, (3.14)

where a0 through a5 can be estimated using the landmark point pairs, as described in the following major steps [7].


Figure 3.5 The interface for selecting landmark points.

Step 1. Elevation angle and radial distance calculation ---

Use each selected landmark point pair (Pk, pk), with coordinates (Xk, Yk, Zk) in the CCS and (uk, vk) in the ICS, to calculate the elevation angle ρk of Pk in the CCS and the radial distance rk of pk in the ICS by the following equations:

ρk = tan⁻¹(Zk/Dk); (3.15)

rk = √(uk² + vk²), (3.16)

where Dk is the horizontal distance between the landmark point Pk and the focal point Om of the mirror.

Step 2. Calculation of coefficients of the radial stretching function ---

Substitute all pairs (ρk, rk), where k = 0, 1, …, n − 1, into Eq. (3.14) to get n simultaneous equations in the six unknown coefficients a0 through a5, and solve them to obtain the coefficient values.

Figure 3.6 Illustration of the relation between the radial distance r in the ICS and the elevation angle ρ in the CCS.
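Steps 1 and 2 amount to an ordinary least-squares fit of the polynomial in Eq. (3.14). A minimal sketch using NumPy is given below, assuming the CCS origin is already at the mirror focal point Om and the image coordinates are given relative to the image center Oc; the function name is our own.

    import numpy as np

    def fit_radial_stretching(ccs_points, ics_points):
        # ccs_points: n rows of (Xk, Yk, Zk) in the CCS (origin at Om);
        # ics_points: n rows of (uk, vk) in the ICS, relative to the center Oc.
        ccs = np.asarray(ccs_points, dtype=float)
        ics = np.asarray(ics_points, dtype=float)
        dk = np.hypot(ccs[:, 0], ccs[:, 1])     # horizontal distances Dk
        rho = np.arctan2(ccs[:, 2], dk)         # elevation angles, Eq. (3.15)
        r = np.hypot(ics[:, 0], ics[:, 1])      # radial distances, Eq. (3.16)
        V = np.vander(rho, 6, increasing=True)  # columns rho^0 .. rho^5
        coeffs, *_ = np.linalg.lstsq(V, r, rcond=None)
        return coeffs                           # a0, a1, ..., a5 of Eq. (3.14)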

Step 3. Space-mapping table creation ---

The space-mapping table to be constructed is a 2-dimensional table whose horizontal and vertical axes specify, respectively, the range of the azimuth angle θ and that of the elevation angle ρ of all possible incident light rays going through the focal point of the mirror.

Table 3.1 shows an example of the pano-mapping table of size M×N. Each entry Eij with indices (i, j) in the table specifies an azimuth-elevation angle pair (θi, ρj), which represents an infinite set Sij of points in the CCS passed through by the light ray with azimuth angle θi and elevation angle ρj. Because the world space points in Sij are all projected onto the identical pixel pij in any omni-image taken by the camera, a mapping relation between Sij and pij, as shown in Figure 3.7, can be derived. The mapping table shown in Table 3.1 is constructed by filling each entry Eij with the coordinates (uij, vij) of pixel pij in the omni-image.

Table 3.1 Example of pano-mapping table of size M×N [7].

Figure 3.7 Mapping between pano-mapping table and omni-image [7] (or space-mapping table in this study).

More specifically, the process of filling entries of the pano-mapping table can be summarized by the following major steps [7].

Step 1. Divide the range 2π of the azimuth angles into M intervals, and compute θi by

θi = i × (2π/M), for i = 0, 1, …, M − 1. (3.18)


Step 2. Divide the range [ρe, ρs] of the elevation angles into N intervals, and compute ρj by

ρj = ρs + j × [(ρe − ρs)/N], for j = 0, 1, …, N − 1. (3.19)

Step 3. Fill each entry Eij with the corresponding image coordinates (uij, vij) computed by

uij = u0 + fr(ρj) × cosθi; (3.20)

vij = v0 + fr(ρj) × sinθi, (3.21)

where (u0, v0) are the coordinates of the image center Oc.

After establishing the space-mapping table, we can determine the elevation and azimuth angle of a space point by looking up the table.
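The table-filling steps above can be summarized in code as follows. This is a sketch under our reconstruction of Eqs. (3.20) and (3.21), with fr the fitted radial stretching function and (u0, v0) the image center; the function name and argument order are assumptions.

    import numpy as np

    def build_space_mapping_table(fr, rho_s, rho_e, M, N, u0, v0):
        # Entry (i, j) holds the image coordinates (uij, vij) of the pixel hit
        # by the ray with azimuth theta_i and elevation rho_j through the
        # focal point of the mirror.
        table = np.empty((M, N, 2))
        for i in range(M):
            theta = i * (2.0 * np.pi / M)                 # Eq. (3.18)
            for j in range(N):
                rho = rho_s + j * (rho_e - rho_s) / N     # Eq. (3.19)
                r = fr(rho)                               # radial stretching
                table[i, j] = (u0 + r * np.cos(theta),    # Eq. (3.20)
                               v0 + r * np.sin(theta))    # Eq. (3.21)
        return table

A reverse lookup, from a pixel back to its azimuth-elevation pair, is then a nearest-entry search over this table.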

3.3 Calibration of Vehicle Odometer

Autonomous vehicles or mobile robots usually suffer from accumulated mechanical errors during navigation. The vehicle used in this study has been found to have a defect of this kind: when it moves forward, it gradually deviates leftward from the original path, and the position and direction angle of the vehicle provided by the odometer differ from the real ones. Because the mechanical errors of an odometer usually lead to wrong control instructions and inaccurate vehicle localization information, it is desired to conduct a calibration task to eliminate such errors.

Chen and Tsai [10] proposed a mechanical-error correction method based on curve-fitting the erroneous deviations from correct paths. In this study, we use this method to correct the vehicle odometer readings. In Section 3.3.1, we describe how we find the deviation values at different distances and build an odometer calibration model from them. In Section 3.3.2, we derive an odometer reading correction equation by a curve fitting technique. Finally, the use of the error correction results is described in Section 3.3.3.

3.3.1 Odometer Calibration Model

In this section, we first describe an experiment we conducted to record the vehicle's deviations from a planned navigation path at different distances from a navigation starting point. We attached a stick to the front of the vehicle, pointing to the ground in such a way that the stick is perpendicular to both the Vy-axis of the VCS and the ground. The stick and the two contact points of the two driving wheels with the ground are used to locate the origin of the VCS. The equipment used in this experiment was a measuring tape, two laser levelers, and the autonomous vehicle. We chose a point from the crossing points formed by the boundaries of the rectangular tiles on the floor as the initial position O of the vehicle navigation, and marked the position by pasting a sticky tape on the ground. Also, we designated a straight line L along the tile boundaries starting from O.

First, we used the two laser levelers to adjust the position of the vehicle by hand such that the two contact points of the two driving wheels with the ground lay on the boundaries of the tiles and the stick on the vehicle was perpendicular to L. Second, we drove the vehicle forward for a desired distance along the straight line L. Third, we marked the terminal position T reached by the vehicle by locating the origin of the VCS. Fourth, we measured the perpendicular distance between T and L. Finally, we repeated these steps at least five times for each desired distance. An illustration of the experiment is shown in Figure 3.8, a detailed algorithm for the process is described in the following, and the experimental results are shown in Table 3.2.

Figure 3.8 Illustration of the experiment.

Algorithm 3.1. Building a calibration model of the odometer.

Input: none

Output: A calibration model of the odometer.

Steps:

Step 1. Choose an initial point O from the crossing points of the boundaries of the rectangular-shaped tiles on the floor, and designate a straight line L which starts at O and coincides with some of the tile boundaries.

Step 2. Use the laser levelers to check whether the two contact points of the two driving wheels with the ground lie on the tile boundaries; if not, adjust the vehicle pose so that it aligns with the straight line L.


Step 3. Drive the vehicle forward until it reaches a desired distance along the straight line L.

Step 4. Mark the terminal position T reached by the vehicle.

Step 5. Measure the distance between T and L.

Step 6. Repeat Steps 2 through 5 at least five times for each desired moving distance and record the results.

Table 3.2 The results of the experiment of building an odometer calibration model.

Move Distance (cm)    Average Distance of Deviation (cm)
30                    0.31

3.3.2 Derivation of an Odometer Reading Correction Equation by Curve Fitting

The distribution of the deviations measured in the experiment of Section 3.3.1 is shown in Figure 3.9. We found that the distribution of the data can be described by a curve, which serves as a mechanical-error calibration model for the vehicle. Therefore, we use the least-squares-error (LSE) curve fitting technique to obtain a third-order polynomial fitting these data. After that, we can use the derived curve equation to calibrate the vehicle odometer during navigation sessions.


Figure 3.9 The distribution of the deviations.

The principle of the curve fitting technique we adopt can be explained as follows.

We consider a polynomial function L of degree k:

L: y = a0 + a1x + … + akxᵏ. (3.22)

With the pairs of measured data (x1, y1), (x2, y2), …, (xn, yn) mentioned above, we can use the LSE principle to fit the pairs by minimizing the total squared error

E = [y1 − L(x1)]² + [y2 − L(x2)]² + … + [yn − L(xn)]², (3.23)

whose minimization with respect to the coefficients a0 through ak yields Eq. (3.24). From Eq. (3.24), we can derive a relation matrix, Eq. (3.25).

In order to solve the matrix equation, we rewrite Eq. (3.22) into the matrix form Y = XA of Eq. (3.26), where Y = [y1 y2 … yn]ᵀ, A = [a0 a1 … ak]ᵀ, and X is the n×(k+1) matrix whose i-th row is [1 xi xi² … xiᵏ];


and we multiply both sides of Eq. (3.26) by the transpose of the matrix X to obtain Eq. (3.27), which may be simplified into the matrix equation (3.28). Eqs. (3.28) and (3.25) are identical; as a result, solving Eq. (3.26) is equivalent to solving Eq. (3.25). The derivation from Eq. (3.26) to Eq. (3.28) can be rewritten in matrix notation as follows:

Y = XA; (3.29)

XᵀY = XᵀXA; (3.30)

A = (XᵀX)⁻¹XᵀY. (3.31)
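Eqs. (3.29) through (3.31) translate directly into code. The sketch below solves the normal equations with a linear solver rather than an explicit matrix inverse, which is numerically preferable but otherwise equivalent to Eq. (3.31); the function name is an assumption of ours.

    import numpy as np

    def fit_deviation_curve(x, y, k=3):
        # x: moving distances; y: measured deviations; k: polynomial degree.
        X = np.vander(np.asarray(x, dtype=float), k + 1,
                      increasing=True)          # design matrix of Eq. (3.26)
        Y = np.asarray(y, dtype=float)
        A = np.linalg.solve(X.T @ X, X.T @ Y)   # Eqs. (3.30) and (3.31)
        return A                                # coefficients a0, a1, ..., ak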

The numerical result of the curve fitting for the calibration model using the data shown in Table 3.2 is shown below and illustrated in Figure 3.10:

f(x) = 0.0000014478x³ + 0.000379467x² − 0.002033815x + 0.000081164. (3.32)


Figure 3.10 The result of curve fitting of the deviations with order k=3.

3.3.3 Correction of Vehicle Position Coordinates

The correction of the mechanical errors of the vehicle can now be carried out using the curve equation derived above. As shown in Figure 3.11, the vehicle starts moving forward from P1(x1, y1) toward the goal P2(x2, y2) in the GCS. Denote the direction of

