
Vision-based obstacle detection and avoidance for autonomous land vehicle navigation in outdoor roads

Kuang-Hsiung Chen, Wen-Hsiang Tsai *

Department of Computer and Information Science, National Chiao Tung University, 1001 Ta Hsueh Road, Hsinchu 30050, Taiwan

Accepted 23 April 1999

* Corresponding author. Tel.: 357159090; fax: +886-3-5721490.

Abstract

An effective approach to obstacle detection and avoidance for autonomous land vehicle (ALV) navigation in outdoor road environments using computer vision and image sequence techniques is proposed. To judge whether an object newly appearing in the image of the current cycle taken by the ALV is an obstacle, the object shape boundary is first extracted from the image. After the translation from the ALV location in the current cycle to that in the next cycle is estimated, the position of the object shape in the image of the next cycle is predicted, using coordinate transformation techniques based on the assumption that the height of the object is zero. The predicted object shape is then matched with the extracted shape of the object in the image of the next cycle to decide whether the object is an obstacle. A distance-based correlation measure between the two shapes is used for shape matching. Finally, a safe navigation point is determined, and a turn angle is computed to guide the ALV toward the navigation point for obstacle avoidance. Successful navigation tests show that the proposed approach is effective for obstacle detection and avoidance in outdoor road environments. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Autonomous land vehicle; Obstacle detection and avoidance; Image sequence; Computer vision; Image processing

1. Introduction

Autonomous land vehicles (ALVs) are useful for many automation applications in both indoor and outdoor environments. Vision-based obstacle detection for ALV navigation in outdoor road environments is a difficult and challenging task because of the great variety of object and road conditions, such as irregular and unstable features on objects, moving objects, changes of illumination, and even rain. Successful ALV navigation requires the integration of the techniques of environment sensing and learning, image processing and feature extraction, ALV location, path planning, wheel control, and so on. This study is mainly concerned with obstacle detection and avoidance for ALV navigation in outdoor road environments using computer vision and image sequence techniques.

Many research works have been reported for obstacle detection in outdoor roads [1–33]. Most systems, such as the CMU Navlab [1–8], the vehicle constructed by Martin Marietta Denver Aerospace [9–11], and the navigation system developed at the University of Maryland [12], use laser range sensors to detect obstacles in outdoor roads or cross-country terrain. FMC [13] uses a sonic imaging sensor and an infrared sensor for obstacle avoidance and target detection.

Vision-based approaches to obstacle detection can basically be divided into three classes. In the first class, obstacles are extracted directly from 2D images [14–21]. Only one camera and only the image in the current navigation cycle are used, with certain a priori knowledge and predefined assumptions being considered. In the second class, motion information obtained from a sequence of images is utilized to detect obstacles [22–26]. The most popular approaches in this class are based on optical flow. In the third class, obstacles are detected using stereo-vision techniques [27–32]. In addition, Xie et al. [33] used a range finder coupled with a CCD camera to acquire 3D information about obstacles. Although the first class in general takes less computing time and yields better detection results than the second and third classes, it does not really detect obstacles, because objects are extracted directly from the 2D image; shadows on roads may also be regarded as obstacles in this class of approaches. In contrast, the second and third classes use 3D computer vision techniques to really judge whether objects on roads are obstacles, although they require more computing time than the first class.

In this paper, an effective approach to obstacle detection and avoidance for autonomous land vehicle navigation in outdoor road environments using computer vision and image sequence techniques is proposed. To judge whether an object newly appearing in the image of the current cycle is an obstacle, we first extract the object shape boundary from the image. After the translation from the ALV location in the current cycle to that in the next cycle is estimated, the position of the object shape in the image of the next cycle is predicted using coordinate transformation techniques, based on the assumption that the height of the object is zero. The predicted object shape is then matched with the extracted shape of the object in the image of the next cycle to decide whether the object is an obstacle. We use the distance-weighted correlation (DWC) [34] as the similarity measure between the two shapes for shape matching. Then a navigation point is computed, and a turn angle is decided accordingly to guide the ALV toward the navigation point for obstacle avoidance. Successful navigation tests show the effectiveness of the proposed approach for obstacle detection and avoidance in outdoor road environments.

A new prototype ALV with smart, compact, and ridable characteristics, as shown in Fig. 1, was constructed as a testbed for this study; its dimensions are 118.5 cm by 58.5 cm. It has four wheels, of which the front two are the turning wheels and the rear two the driving wheels. Above the front wheels is a cross-shaped rack on which several CCD cameras are mounted, and above the rack is a platform on which two monitors, one being the computer monitor and the other the image display, are placed. Above the platform is a vertical bar on which another camera, used for obstacle detection and avoidance in this study, is mounted. The central processor is an IBM PC/AT compatible personal computer (Pentium-166) with a color image frame grabber which takes 512 × 486 RGB images, with eight bits of intensity per image pixel.

Fig. 1. The prototype ALV used in this study.

The ALV is computer-controlled with a modular architecture, as shown in Fig. 2, including four components, namely, a vision system, a central processor (Pentium-166), a motor control system, and a DC power system. The vision system consists of a camera, a TV monitor, and a Targa Plus color image frame grabber. The central processor (Pentium-166) has 16 megabytes of RAM, one floppy disk, a 1.2-gigabyte hard disk, a 3.6-gigabyte hard disk, and an EL slim display. The motor control system includes a main control board with an Intel 8085 controller, a motor driver, and two motors. The power of the system is supplied by a battery set including two 12-V power sources, each being divided into various voltages using a DC-to-DC converter set to provide power to the ALV components.

Fig. 2. System structure of the prototype ALV.

The remainder of this paper is organized as follows. In Section 2, the proposed vision-based obstacle detection method is described in detail. In Section 3, the proposed obstacle avoidance method is introduced. The descriptions of the employed image processing techniques and the experimental results are included in Section 4. Finally, some conclusions are stated in Section 5.

2. Proposed vision-based obstacle detection method

Basically, three types of objects may be extracted from the road image in this approach: (1) type-1 objects: the objects that newly appear in the road image of the current cycle, which will be judged to be obstacles or not in the next cycle; (2) type-2 objects: the objects that appeared in the road image of the previous cycle, which are judged to be obstacles or not in the current cycle; and (3) type-3 objects: the objects that have been decided to be obstacles or non-obstacles in the current or subsequent cycles.

Initially, no objects appear in the road image, and we employ the approach proposed by Chen and Tsai [35], which utilizes color information clustering and combined line and road following techniques, to guide the ALV to follow the road. When new objects appear in the road image, all of them are regarded as type-1 objects, and whether they are obstacles will be judged in the next cycle. In the next cycle, these type-1 objects become type-2 objects, and whether they are obstacles is judged in this cycle. After the judgment, the type-2 objects become type-3 objects. Type-3 objects may still appear in the images of several subsequent cycles. We then compute a navigation point and drive the ALV toward the point so that the ALV can avoid collision with the type-3 objects that have been decided to be obstacles. The entire process is repeated cycle after cycle. Note that type-1, type-2, and type-3 objects may appear in the image simultaneously, and some type-3 objects may disappear from the road image in subsequent cycles.

To judge whether an object is an obstacle or not in the next cycle, we first extract the shape boundary of the object from the road image of the current cycle. After the translation from the ALV location in the current cycle to that in the next cycle is estimated, the position of the object shape in the image of the next cycle is predicted using coordinate transformation techniques, based on the assumption that the height of the object is zero. The predicted object shape is then matched with the extracted shape boundary of the object in the image of the next cycle to decide whether the object is a non-obstacle, a static obstacle, or a moving obstacle. We use the DWC [34] as the similarity measure between the two shapes for shape matching.

In the following, we first describe the estimation of the translation between two consecutive ALV locations, then the prediction of the object shape in the image of the next cycle, followed by the shape matching process.

2.1. Estimation of the translation between two consecutive ALV locations

When the ALV keeps driving on a road, we use the approach proposed in Ref. [35] to guide the ALV to follow the road. In this approach, the ALV location on the road in cycle i is represented by two parameters (d_i, θ_i), where d_i is the distance of the ALV to the central path line on the road and θ_i is the pan angle of the ALV relative to the road direction (positive to the left). Let P_i denote the obtained ALV location (d_i, θ_i) in cycle i and P_{i+1} denote the obtained ALV location (d_{i+1}, θ_{i+1}) in cycle i+1. What we desire to know is the translation vector from P_i to P_{i+1}, denoted by T, which can be derived in terms of d_i, θ_i, d_{i+1}, and θ_{i+1}. Without loss of generality, we first assume that the ALV turns to the right from P_i to P_{i+1}, i.e., θ_{i+1} < θ_i.

i iq1 iq1 i

Then, as shown in Fig. 3, where θ_i > 0, θ_{i+1} < 0, and d_i < d_{i+1} < 0, the angle ∠CBD can be expressed as

∠CBD = π/2 − δ − ∠DBE = π/2 − δ − (π/2 − ∠DEB) = −δ + δ + γ = γ,   (1)

where δ is the turn angle of the front wheels. Alternatively, ∠CBD can be expressed as

∠CBD = ∠CBK + ∠KBD = −θ_{i+1} + θ_i.   (2)

So, angle γ can be determined by

γ = θ_i − θ_{i+1}.   (3)

Fig. 3. Illustration of how we estimate the translation vector between two consecutive ALV locations.

The length of vector T can be solved to be

l_T = (d_{i+1} − d_i) / cos(μ + θ_i).   (4)

By the basic kinematics of the ALV, the rotation radius R can be found to be

R = d / sin δ = d / sin(π/2 − μ − γ/2),   (5)

where d is the distance between the front wheels and the rear wheels, and l_T can be expressed as

l_T = R √(2(1 − cos γ))   (6)

according to Ref. [36]. From Eqs. (4) and (6), we get

(d_{i+1} − d_i) / cos(μ + θ_i) = d √(2(1 − cos γ)) / sin(π/2 − μ − γ/2).   (7)

Replacing γ by θ_i − θ_{i+1}, the direction of T is determined by the angle

μ = tan⁻¹{ [A cos θ_i − B cos((θ_i − θ_{i+1})/2)] / [A sin θ_i − B sin((θ_i − θ_{i+1})/2)] },   (8)

where

A = √(2d²(1 − cos(θ_i − θ_{i+1}))),  B = d_{i+1} − d_i.   (9)

Similarly, if the ALV turns to the left from P_i to P_{i+1}, i.e., if θ_{i+1} > θ_i, the direction of T can be determined by the angle

μ = tan⁻¹{ (−1) · [A cos θ_i + B cos((θ_i − θ_{i+1})/2)] / [A sin θ_i + B sin((θ_i − θ_{i+1})/2)] }.   (10)

Then the components of the translation vector T = (x_T, y_T) are solved to be

x_T = l_T cos μ,
y_T = l_T sin μ.   (11)
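As a concrete illustration of Eqs. (3), (4), and (8)–(11), the following sketch computes the translation vector (x_T, y_T) from the two located pairs (d_i, θ_i) and (d_{i+1}, θ_{i+1}). The function name, the use of atan2 in place of tan⁻¹, and the handling of the two turn directions are choices of this sketch rather than part of the original system, and the sign conventions may need adjustment in a particular implementation.

```python
import math

def translation_vector(d_i, th_i, d_i1, th_i1, wheelbase):
    """Vision-based kinematic model (Eqs. (3)-(11)): translation of the ALV
    between two consecutive cycles from the located (d, theta) pairs.
    Distances are positive to the right of the central path line, angles
    positive to the left of the road direction; `wheelbase` is the front-to-
    rear wheel distance d of Eq. (9).  The degenerate straight-line case
    (d_i1 == d_i, th_i1 == th_i == 0) is handled by Eq. (17) instead."""
    gamma = th_i - th_i1                                         # Eq. (3)
    A = math.sqrt(2.0 * wheelbase**2 * (1.0 - math.cos(gamma)))  # Eq. (9)
    B = d_i1 - d_i
    if th_i1 < th_i:    # right turn, Eq. (8)
        mu = math.atan2(A * math.cos(th_i) - B * math.cos(gamma / 2.0),
                        A * math.sin(th_i) - B * math.sin(gamma / 2.0))
    else:               # left turn, Eq. (10)
        mu = math.atan2(-(A * math.cos(th_i) + B * math.cos(gamma / 2.0)),
                        A * math.sin(th_i) + B * math.sin(gamma / 2.0))
    l_T = B / math.cos(mu + th_i)                                # Eq. (4)
    return l_T * math.cos(mu), l_T * math.sin(mu)                # Eq. (11)
```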

The above vision-based kinematic model is used only when no location error exists during ALV navigation. If a location error exists, the vision-based kinematic model may be unsuitable, and an additional control-based kinematic model is used to compensate for the deficiency of the original vision-based model. The location error mainly results from wheel slippage, the unflatness of the road surface, and/or the coordinate transformations, etc. Fig. 4 illustrates one case of location error, where the ALV locations in cycles i and i+1 are (0, 0) and (d, 0), respectively, for d ≠ 0. Since the ALV trajectory between two consecutive navigation cycles is assumed to be a circular arc, the ALV can never reach the location (d, 0) in cycle i+1, and a location error will exist in cycle i+1. If the two consecutive ALV locations (0, 0) and (d, 0) are used to find their translation based on the vision-based kinematic model described above, an unreasonable solution will be produced. For this reason, the following navigation checking rule is used to decide whether the vision-based kinematic model is applicable.

Fig. 4. Illustration of one case of location error.

Navigation checking rule:

if θ_i < 0, then:
    if δ < 0, then θ_{i+1} < θ_i and d_{i+1} > d_i;
    else if δ > 0, then θ_{i+1} > θ_i;
    else (δ = 0), θ_{i+1} = θ_i and d_{i+1} > d_i;
else if θ_i > 0, then:
    if δ > 0, then θ_{i+1} > θ_i and d_{i+1} < d_i;
    else if δ < 0, then θ_{i+1} < θ_i;
    else (δ = 0), θ_{i+1} = θ_i and d_{i+1} < d_i;
else (θ_i = 0):
    if δ < 0, then θ_{i+1} < θ_i and d_{i+1} > d_i;
    else if δ > 0, then θ_{i+1} > θ_i and d_{i+1} < d_i;
    else (δ = 0), θ_{i+1} = 0 and d_{i+1} = d_i,   (12)

where the distances d_i and d_{i+1} are positive to the right relative to the central path line, the pan and turn angles θ_i, θ_{i+1}, and δ are positive to the left relative to the road direction, and the value of δ can be obtained by checking the ALV control system.
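A minimal sketch of the navigation checking rule of Eq. (12) is given below; the function name and the Boolean return value are illustrative only.

```python
def navigation_is_reasonable(d_i, th_i, d_i1, th_i1, delta):
    """Navigation checking rule of Eq. (12).  Returns True when the located
    pair (d_i1, th_i1) is consistent with the pan angle th_i and the
    front-wheel turn angle delta reported by the control system, so the
    vision-based kinematic model may be applied."""
    if th_i < 0:
        if delta < 0:
            return th_i1 < th_i and d_i1 > d_i
        if delta > 0:
            return th_i1 > th_i
        return th_i1 == th_i and d_i1 > d_i
    if th_i > 0:
        if delta > 0:
            return th_i1 > th_i and d_i1 < d_i
        if delta < 0:
            return th_i1 < th_i
        return th_i1 == th_i and d_i1 < d_i
    # th_i == 0
    if delta < 0:
        return th_i1 < th_i and d_i1 > d_i
    if delta > 0:
        return th_i1 > th_i and d_i1 < d_i
    return th_i1 == 0 and d_i1 == d_i
```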

If the values of d_i, θ_i, d_{i+1}, θ_{i+1}, and δ satisfy the above navigation checking rule, they are used to compute the translation vector from P_i to P_{i+1} based on the vision-based kinematic model. Otherwise, the ALV location (d_{i+1}, θ_{i+1}) in P_{i+1} is unreasonable and a control-based kinematic model is proposed, which uses the ALV control information to compute the translation vector. The computation process based on the control-based kinematic model is described as follows. As shown in Fig. 3, let S denote the travelled distance from P_i to P_{i+1}, which can be obtained from the counter of the odometer in the ALV control system. Then, angle γ can be expressed as

γ = S / R.   (13)

Since R can be determined by Eq. (5), where δ can be obtained by checking the ALV control system, angle γ can be determined accordingly, and the length l_T of vector T expressed in Eq. (6) can thus be solved. Since the direction of T is determined by

μ = π/2 − δ − γ/2,   (14)

the components of the translation vector expressed in Eq. (11) can be solved.

The computed translation vector is then used together with the ALV location (d_i, θ_i) in P_i to estimate a reasonable ALV location in P_{i+1} when the control-based kinematic model is used. Let (d′_{i+1}, θ′_{i+1}) denote the estimated ALV location in P_{i+1}. Then, if d_i < 0, as illustrated in Fig. 3, d′_{i+1} and θ′_{i+1} are solved to be

d′_{i+1} = d_i + l_T cos(μ + θ_i),
θ′_{i+1} = θ_i − γ.   (15)

Otherwise (d_i ≥ 0), d′_{i+1} is solved by

d′_{i+1} = d_i − l_T cos(μ − θ_i),   (16)

and θ′_{i+1} is identical to that expressed in Eq. (15). Note that if d_{i+1} = d_i, θ_{i+1} = θ_i = 0, and δ = 0, we cannot derive the translation vector using the vision-based kinematic model even when the navigation from P_i to P_{i+1} is judged to be reasonable by checking the values of d_i, θ_i, d_{i+1}, θ_{i+1}, and δ in the navigation checking rule. In this case, the control-based kinematic model is used, and the components of the translation vector T from P_i to P_{i+1} are simply

x_T = 0,
y_T = S.   (17)

The vision-based kinematic model combined with the control-based kinematic model and the navigation checking rule enables the ALV to achieve reliable and fault-tolerant navigation.
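The control-based computation can be sketched as follows, transcribing Eqs. (5), (6), (13), (14), and (17) for the translation vector and Eqs. (15)–(16) for the estimated location. The variable names are ours, and the signed-quantity conventions (e.g., a negative rotation radius for a right turn) follow the formulas literally and may need adaptation in practice.

```python
import math

def control_based_update(d_i, th_i, S, delta, wheelbase):
    """Control-based kinematic model: translation (x_T, y_T) between two
    cycles from the odometer distance S and the front-wheel turn angle
    delta (Eqs. (5), (6), (13), (14), (17)), plus the estimated location
    (d'_{i+1}, theta'_{i+1}) of Eqs. (15)-(16)."""
    if delta == 0.0:                          # straight-line motion, Eq. (17)
        x_T, y_T, gamma, mu = 0.0, S, 0.0, math.pi / 2.0
    else:
        R = wheelbase / math.sin(delta)       # rotation radius, Eq. (5)
        gamma = S / R                         # Eq. (13)
        l_T = R * math.sqrt(2.0 * (1.0 - math.cos(gamma)))   # Eq. (6)
        mu = math.pi / 2.0 - delta - gamma / 2.0             # Eq. (14)
        x_T, y_T = l_T * math.cos(mu), l_T * math.sin(mu)    # Eq. (11)
    l_T = math.hypot(x_T, y_T)
    if d_i < 0:                               # Eq. (15)
        d_next = d_i + l_T * math.cos(mu + th_i)
    else:                                     # Eq. (16)
        d_next = d_i - l_T * math.cos(mu - th_i)
    th_next = th_i - gamma                    # Eq. (15)
    return (x_T, y_T), (d_next, th_next)
```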

2.2. Object shape prediction in next cycle

Several coordinate systems and coordinate transformations are used in the prediction process. The image coordinate system (ICS), denoted as u–w, is attached to the image plane of the camera mounted on the ALV. The camera coordinate system (CCS), denoted as u–v–w, is attached to the camera lens center. The vehicle coordinate system (VCS), denoted as x–y–z, is attached to the middle point of the line segment which connects the two contact points of the two front wheels of the ALV with the ground. The x-axis and the y-axis are on the ground and parallel to the short and the long sides of the vehicle body, respectively. The z-axis is vertical to the ground. The transformation between the CCS and the VCS can be written in terms of homogeneous coordinates [37,38] as

(u v w 1) = (x y z 1) ·
[  1    0    0   0 ]
[  0    1    0   0 ]
[  0    0    1   0 ]
[ −x_d −y_d −z_d 1 ]
·
[ r11 r12 r13 0 ]
[ r21 r22 r23 0 ]
[ r31 r32 r33 0 ]
[  0   0   0  1 ],   (18)

where

r11 = cos θ cos ψ + sin θ sin φ sin ψ,
r12 = −sin θ cos φ,
r13 = sin θ sin φ cos ψ − cos θ sin ψ,
r21 = sin θ cos ψ − cos θ sin φ sin ψ,
r22 = cos θ cos φ,
r23 = −cos θ sin φ cos ψ − sin θ sin ψ,
r31 = cos φ sin ψ,
r32 = sin φ,
r33 = cos φ cos ψ,

and θ is the pan angle, φ the tilt angle, and ψ the swing angle, of the camera with respect to the VCS; and (x_d, y_d, z_d) is the translation vector from the origin of the VCS to the origin of the CCS.
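A direct transcription of Eq. (18) into code is shown below; the function name and the NumPy representation are our own. It returns the combined 4×4 matrix so that a VCS row vector (x, y, z, 1) multiplied on the right gives the CCS row vector (u, v, w, 1).

```python
import numpy as np

def vcs_to_ccs_transform(pan, tilt, swing, x_d, y_d, z_d):
    """Homogeneous transformation of Eq. (18) mapping VCS row vectors
    (x, y, z, 1) to CCS row vectors (u, v, w, 1); pan/tilt/swing are the
    camera angles theta, phi, psi and (x_d, y_d, z_d) is the translation
    from the VCS origin to the lens center."""
    st, ct = np.sin(pan), np.cos(pan)
    sf, cf = np.sin(tilt), np.cos(tilt)
    sp, cp = np.sin(swing), np.cos(swing)
    R = np.array([
        [ct * cp + st * sf * sp, -st * cf,  st * sf * cp - ct * sp, 0.0],
        [st * cp - ct * sf * sp,  ct * cf, -ct * sf * cp - st * sp, 0.0],
        [cf * sp,                 sf,       cf * cp,                0.0],
        [0.0,                     0.0,      0.0,                    1.0]])
    T = np.eye(4)
    T[3, :3] = [-x_d, -y_d, -z_d]   # translation block of Eq. (18)
    return T @ R                     # row-vector convention: ccs = vcs @ T @ R

# usage: ccs = np.array([x, y, z, 1.0]) @ vcs_to_ccs_transform(pan, tilt, swing, xd, yd, zd)
```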

To predict the shape boundary of an object in the ICS in the next cycle, we first find the ICS coordinates of all the boundary points representing the shape of the object in the image of the current cycle. We then backproject each boundary point of the object in the ICS into the VCS, based on the assumption that the height of the boundary point in the VCS is zero, i.e., its z coordinate is zero, to obtain the 2D VCS coordinates (x, y) of the boundary point in the current cycle. By using the translation vector derived previously, the 2D VCS coordinates (x′, y′) of the boundary point in the next cycle can be found, as illustrated in Fig. 5, where θ_i > 0, θ_{i+1} < 0, and the components of the translation vector are (x_T, y_T).

Fig. 5. Illustration of how the VCS coordinates (x, y) of some boundary point P in the next cycle are computed.

In the figure, the VCS coordinates (x′, y′) of point P in cycle i+1 are solved to be

(x′ y′ 1) = (x y 1) ·
[  1    0   0 ]
[  0    1   0 ]
[ −x_T −y_T 1 ]
·
[ cos(θ_{i+1} − θ_i)  −sin(θ_{i+1} − θ_i)  0 ]
[ sin(θ_{i+1} − θ_i)   cos(θ_{i+1} − θ_i)  0 ]
[ 0                    0                   1 ],   (20)

or

x′ = (x − x_T) cos(θ_{i+1} − θ_i) + (y − y_T) sin(θ_{i+1} − θ_i),
y′ = −(x − x_T) sin(θ_{i+1} − θ_i) + (y − y_T) cos(θ_{i+1} − θ_i).   (21)
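The per-point prediction of Eq. (21) amounts to the following small routine (the function and variable names are illustrative):

```python
import math

def predict_vcs_position(x, y, x_T, y_T, th_i, th_i1):
    """Predicted 2D VCS coordinates of a ground point in cycle i+1 from its
    coordinates (x, y) in cycle i, using the translation (x_T, y_T) and the
    heading change theta_{i+1} - theta_i (Eq. (21))."""
    dth = th_i1 - th_i
    xs, ys = x - x_T, y - y_T
    x_next = xs * math.cos(dth) + ys * math.sin(dth)
    y_next = -xs * math.sin(dth) + ys * math.cos(dth)
    return x_next, y_next
```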

After the backprojection and translation processes, we project the backprojected and translated boundary point in the VCS into the ICS to predict the ICS coordinates (u, w) of the boundary point in the next cycle, and the prediction process is finished. The backprojection and projection principles are described as follows.

(1) Backprojection principle: As shown in Fig. 6, assume that point P in the image has the CCS coordinates (u_P, −f, w_P), where (u_P, w_P) indicate the position in the image, i.e., the ICS coordinates, and f is the focal length. After backprojecting the point P in the image into the VCS, we get a line L that passes through P and the lens center O_c. Let P′ denote the intersection point of this line L and the horizontal plane z = h. Using Eq. (18), we get the VCS coordinates (x_P, y_P, z_P) of point P in the image as

x_P = u_P (cos θ cos ψ + sin θ sin φ sin ψ) + f sin θ cos φ + w_P (sin θ sin φ cos ψ − cos θ sin ψ) + x_d,
y_P = u_P (sin θ cos ψ − cos θ sin φ sin ψ) − f cos θ cos φ − w_P (cos θ sin φ cos ψ + sin θ sin ψ) + y_d,
z_P = u_P cos φ sin ψ − f sin φ + w_P cos φ cos ψ + z_d.   (22)

Fig. 6. Illustration of the backprojection and projection processes.

Additionally, the equation of line L can be expressed as

(x − x_d)/(x_P − x_d) = (y − y_d)/(y_P − y_d) = (z − z_d)/(z_P − z_d) = k,   (23)

where k is a constant. By substituting z = h into Eq. (23), the VCS coordinates (x′_P, y′_P, z′_P) of point P′ can be solved to be

x′_P = x_d + (h − z_d)(x_P − x_d)/(z_P − z_d),
y′_P = y_d + (h − z_d)(y_P − y_d)/(z_P − z_d),
z′_P = h.   (24)

Since we backproject each boundary point in the image into the VCS using the assumption that the height of the boundary point in the VCS is zero, we substitute h = 0 into Eq. (24), and the desired 2D VCS coordinates (x′_P, y′_P) of point P′ are solved to be

x′_P = x_d − z_d (x_P − x_d)/(z_P − z_d),
y′_P = y_d − z_d (y_P − y_d)/(z_P − z_d).   (25)

(2) Projection principle: As shown in Fig. 6, assume that point P′ has the VCS coordinates (x′_P, y′_P, z′_P). After projecting the point P′ in the VCS into the ICS, we get its corresponding space point P in the ICS. Since the lens center O_c has the VCS coordinates (x_d, y_d, z_d), as given in Eq. (18), the distance between P′ and O_c, denoted as D, is calculated to be

D = √((x′_P − x_d)² + (y′_P − y_d)² + (z′_P − z_d)²).   (26)

Let (x_P, y_P, z_P) denote the VCS coordinates of the corresponding point P in the image. Then the following equation is satisfied:

(x_P − x′_P)/(x_d − x′_P) = (y_P − y′_P)/(y_d − y′_P) = (z_P − z′_P)/(z_d − z′_P) = (f + D)/D = K,   (27)

where f is the focal length and K is a constant. The VCS coordinates of point P can be solved accordingly to be

x_P = K(x_d − x′_P) + x′_P,
y_P = K(y_d − y′_P) + y′_P,
z_P = K(z_d − z′_P) + z′_P.   (28)

Since we assume that the height of the boundary point in the VCS is zero, we substitute z′_P = 0 into the above equations for further simplification. Using the transformation between the CCS and the VCS described in Eq. (18), we can get the CCS coordinates (u_P, −f, w_P) of point P, and so the desired coordinates (u_P, w_P) of point P in the image.
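The backprojection and projection principles can be sketched as below, following Eqs. (22)–(28) under the ground-plane assumption z = 0. The helper names, the NumPy formulation, and the packaging of the camera parameters are assumptions of this sketch rather than part of the original system.

```python
import numpy as np

def _rotation(pan, tilt, swing):
    """3x3 rotation block of Eq. (18) (row-vector convention)."""
    st, ct = np.sin(pan), np.cos(pan)
    sf, cf = np.sin(tilt), np.cos(tilt)
    sp, cp = np.sin(swing), np.cos(swing)
    return np.array([
        [ct * cp + st * sf * sp, -st * cf,  st * sf * cp - ct * sp],
        [st * cp - ct * sf * sp,  ct * cf, -ct * sf * cp - st * sp],
        [cf * sp,                 sf,       cf * cp]])

def backproject_to_ground(u_P, w_P, f, cam_angles, cam_offset):
    """Backprojection principle (Eqs. (22)-(25)): intersect the viewing ray
    through image point (u_P, w_P) with the ground plane z = 0 of the VCS."""
    R = _rotation(*cam_angles)
    d = np.asarray(cam_offset, dtype=float)        # (x_d, y_d, z_d)
    P = np.array([u_P, -f, w_P]) @ R.T + d         # Eq. (22): CCS -> VCS
    t = d[2] / (P[2] - d[2])                       # ground plane, Eq. (25)
    return d[0] - t * (P[0] - d[0]), d[1] - t * (P[1] - d[1])

def project_to_image(x, y, f, cam_angles, cam_offset):
    """Projection principle (Eqs. (26)-(28) and (18)): map a VCS ground
    point (x, y, 0) back to the image coordinates (u, w)."""
    R = _rotation(*cam_angles)
    d = np.asarray(cam_offset, dtype=float)
    Pp = np.array([x, y, 0.0])                     # ground point P'
    D = np.linalg.norm(d - Pp)                     # Eq. (26)
    K = (f + D) / D                                # Eq. (27)
    P = K * (d - Pp) + Pp                          # Eq. (28): point on the image ray
    u, v, w = (P - d) @ R                          # Eq. (18): VCS -> CCS (v is near -f)
    return u, w
```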

2.3. Shape matching

To judge whether an object O is an obstacle in the current cycle, we extract its shape (represented by the shape boundary points) in the image of the previous cycle, and predict its shape in the image of the current cycle using the coordinate transformation techniques based on the assumption stated and derived previously. Let P denote the predicted shape boundary and E denote the extracted shape boundary of the object in the image of the current cycle. To match P and E, we overlap them and compute the DWC measure [34] to check the similarity between them. First, the minimum distance d_b of a boundary point b in P or E is defined to be the Euclidean distance between b and its closest point in the other shape. The weight w_b^k of b is defined to be

w_b^k = 1/(d_b² + 1)   if 0 ≤ d_b ≤ k,
w_b^k = 0              otherwise,   (29)

where k is a constant that limits the distance within which the closest point of b is searched for. Then the DWC is defined to be

DWC^k(P, E) = (1/2) [ (1/N_P) Σ_{i∈P} w_i^k + (1/N_E) Σ_{j∈E} w_j^k ],   (30)

where N_P and N_E are the total numbers of the boundary points in P and E, respectively. It can easily be verified that 0 ≤ DWC^k(P, E) ≤ 1.

Fig. 7. Illustration of how to decide, using the DWC correlation measure, whether an object is an obstacle. (a) A non-obstacle is detected. (b) A static obstacle is detected. (c) A moving obstacle is detected.

The value of DWC^k(P, E) is then checked to judge whether the object O is an obstacle. If the value is greater than a certain threshold value, say TH_1, where 0 < TH_1 < 1, it is decided that O is not an obstacle because P and E are strongly similar, as illustrated in Fig. 7(a). If the value is smaller than TH_1 and greater than another threshold value, say TH_2, where 0 < TH_2 < TH_1, it is decided that O is a static obstacle because P and E are partially similar, as illustrated in Fig. 7(b). Finally, if the value is smaller than TH_2, it is decided that O is a moving obstacle because P and E are strongly dissimilar, as illustrated in Fig. 7(c).
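A compact sketch of the DWC computation of Eqs. (29)–(30) and of the threshold test follows. The brute-force nearest-point search and the function names are illustrative, and the values of k, TH_1, and TH_2 must be chosen empirically as in the paper.

```python
import math

def dwc(P, E, k):
    """Distance-weighted correlation of Eqs. (29)-(30) between two boundary
    point sets P and E (sequences of (u, w) pixel coordinates)."""
    def mean_weight(src, other):
        total = 0.0
        for b in src:
            d_b = min(math.dist(b, q) for q in other)   # minimum distance to the other shape
            if d_b <= k:
                total += 1.0 / (d_b ** 2 + 1.0)          # Eq. (29)
        return total / len(src)
    return 0.5 * (mean_weight(P, E) + mean_weight(E, P))  # Eq. (30)

def classify(P, E, k, th1, th2):
    """Obstacle decision from the DWC value and thresholds TH_1 > TH_2."""
    v = dwc(P, E, k)
    if v > th1:
        return "non-obstacle"      # shapes strongly similar, Fig. 7(a)
    if v > th2:
        return "static obstacle"   # shapes partially similar, Fig. 7(b)
    return "moving obstacle"       # shapes strongly dissimilar, Fig. 7(c)
```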

3. Proposed obstacle avoidance method

3.1. Navigation point selection

If no obstacle appears on the road ahead, we drive the ALV to follow the central path line on the road using a closeness distance measure from the ALV to the central path line proposed by Cheng and Tsai [36]. The measure is defined as

L(δ) = 1 / (1 + [DF(δ)]² + [DR(δ)]²),   (31)

where DF and DR are the corresponding distances from the front and the rear wheels of the ALV to the central path line after the ALV traverses a distance with the turn angle δ, as illustrated in Fig. 8. A larger value of L means that the ALV is closer to the path. It is easy to verify that 0 < L ≤ 1, and that L = 1 if and only if both the front wheels and the rear wheels of the ALV are located exactly on the path.

Fig. 8. Illustration of the closeness distance measure L(δ) = 1/{1 + [DF(δ)]² + [DR(δ)]²}.

To find the turn angle of the front wheels that drives the ALV as close to the central path line as possible, a range of possible turn angles is searched. An angle is hypothesized each time, and the value of L is calculated accordingly. The angle that produces the maximal value of L is then used as the desired turn angle.

It should be mentioned that allowing the ALV a larger angle to turn in a session of turn drive does not necessarily yield better navigation; it may cause serious twisting. On the other hand, a smaller range of turn angles may bring the ALV only slightly closer to the central path line. Hence, the largest angle the ALV is allowed to turn is a tradeoff between smoothness of navigation and closeness to the central path line.
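The turn-angle search described above can be sketched as follows. Since this paper does not give the formulas for DF(δ) and DR(δ) (they come from Ref. [36]), the sketch takes them as a caller-supplied function.

```python
def best_turn_angle(front_rear_offsets, candidate_angles):
    """Search over candidate front-wheel turn angles for the one maximizing
    the closeness measure L of Eq. (31).  `front_rear_offsets(delta)` is a
    caller-supplied function (from Ref. [36], not specified in this paper)
    returning the distances DF(delta) and DR(delta) of the front and rear
    wheels to the central path line after traversing with turn angle delta."""
    def L(delta):
        DF, DR = front_rear_offsets(delta)
        return 1.0 / (1.0 + DF ** 2 + DR ** 2)   # Eq. (31)
    return max(candidate_angles, key=L)
```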

If obstacles appear on the road ahead, we compute a safe navigation point and drive the ALV toward this point for obstacle avoidance. The navigation point is selected as follows. For each boundary point of an obstacle, we compute its corresponding angle, defined as the angle between the y-axis of the VCS and the line segment which connects the boundary point and the origin of the VCS. This angle is positive to the left with respect to the y-axis of the VCS. We define the left extreme point LP as the boundary point whose corresponding angle is the largest, and the right extreme point RP as the boundary point whose corresponding angle is the smallest. Fig. 9 shows the LP and the RP of an obstacle O, where the corresponding angle θ1 of the LP is the largest and the corresponding angle θ2 of the RP is the smallest.

Fig. 9. Illustration of the definitions of the LP and the RP of one object.

The proposed navigation point selection method is illustrated in Fig. 10, where there are three obstacles O1, O2, and O3 ahead of the ALV on the road, and LP_i and RP_i are the LP and the RP of obstacle O_i, respectively, for i = 1, 2, and 3. In the figure, RP_0 is the point on the left road boundary that is closest to LP_1, and LP_4 is the point on the right road boundary that is closest to RP_3. We then compute the angle between the line segment connecting RP_i and O_V (the origin of the VCS) and the line segment connecting LP_{i+1} and O_V, for i = 0, 1, 2, and 3. From all of the angles so computed, we find the largest one and let it be θ_k, k = 0, 1, 2, or 3. Then the middle point of the line segment connecting RP_k and LP_{k+1} is chosen as the navigation point. It can be seen from the figure that θ1 is the largest angle, so the navigation point is set as the middle point of the line segment connecting RP_1 and LP_2. This way of choosing the navigation point can be further applied to the case in which there are more than three obstacles on the road ahead.
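A sketch of the navigation point selection follows. It assumes the caller has already paired RP_i with LP_{i+1} across the road (including the road-boundary points RP_0 and LP_{n+1}) as described above; the gap angle is computed directly between the two rays from the VCS origin, which avoids committing to a particular sign convention for the corresponding angle.

```python
import math

def navigation_point(gaps):
    """Navigation point selection: `gaps` is a list of (RP_i, LP_{i+1}) pairs
    of 2D VCS points ordered across the road, including the road-boundary
    points.  The pair subtending the largest angle at the VCS origin O_V is
    found, and the midpoint of its two points is returned."""
    def angle_at_origin(p, q):
        cross = p[0] * q[1] - p[1] * q[0]
        dot = p[0] * q[0] + p[1] * q[1]
        return abs(math.atan2(cross, dot))
    rp, lp = max(gaps, key=lambda g: angle_at_origin(*g))
    return ((rp[0] + lp[0]) / 2.0, (rp[1] + lp[1]) / 2.0)
```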

If all of the obstacles ahead of the ALV appear in the image simultaneously in the current cycle, they are extracted from the image to find their LPs and RPs, and the navigation point in this cycle is computed using these LPs and RPs. In the next cycle, if all of the obstacles ahead of the ALV still appear in the image simultaneously, they are extracted and the navigation point in this cycle is computed in the same way as described above. But if some of these obstacles, which are still ahead of the ALV, disappear from the image due to the angle of the camera view, as shown in Fig. 11, they cannot be extracted from the image to find their LPs and RPs. At this moment, their LPs and RPs in this cycle are predicted using their extracted LPs and RPs in the previous cycle. The prediction process has been stated in Section 2.2. These predicted LPs and RPs, together with the extracted LPs and RPs of the obstacles that appear in the image in this cycle, are then used to compute the navigation point in this cycle.

Fig. 11. Illustration of why obstacles disappear from the image due to the angle of the camera view when they are still ahead of the ALV. (a) Horizontal camera view. (b) Vertical camera view.

In subsequent navigation cycles, the navigation point is computed cyclically until no obstacle is ahead of the ALV. At this moment, the ALV heads back to the central path line using the line following scheme described previously, and the obstacle avoidance process is finished. Note that the chosen navigation point may vary during ALV navigation, since new obstacles may appear in the image during navigation, as illustrated in Fig. 12. In Fig. 12(a), three obstacles are ahead of the ALV and point N1 is chosen as the navigation point. After the ALV travels a certain distance, a new obstacle appears ahead of the ALV and the chosen navigation point is changed to point N2, as shown in Fig. 12(b).

Fig. 12. Illustration of why the chosen navigation point varies during ALV navigation. (a) Point N1 is chosen as the navigation point. (b) Point N2 is chosen as the navigation point since a new obstacle appears ahead of the ALV after the ALV travels a certain distance.

3.2. Turn angle computation

After the navigation point is chosen, the ALV turns an angle to approach this point for safe navigation. The turn angle computation is illustrated in Fig. 13, where P_n: (x_n, y_n) is the navigation point, l_T is the distance between P_n and O_V, R is the rotation radius, and δ is the turn angle of the front wheels we want to compute. From Eqs. (5) and (6), we can obtain the following equation:

tan δ = cos μ / (sin μ + l_T/(2d)).   (32)

Since l_T, sin μ, and cos μ can be solved by

l_T = √(x_n² + y_n²),   (33)
sin μ = y_n / √(x_n² + y_n²),   (34)
cos μ = x_n / √(x_n² + y_n²),   (35)

the desired turn angle of the front wheels is solved accordingly to be

δ = tan⁻¹[ 2d·x_n / (x_n² + y_n² + 2d·y_n) ].   (36)
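Eq. (36) translates directly into code (the function name is illustrative; d is the wheelbase of Eq. (5)):

```python
import math

def turn_angle(x_n, y_n, wheelbase):
    """Front-wheel turn angle toward the navigation point (x_n, y_n) in the
    VCS, following Eq. (36)."""
    return math.atan2(2.0 * wheelbase * x_n,
                      x_n ** 2 + y_n ** 2 + 2.0 * wheelbase * y_n)
```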

3.3. Precise ALV location estimation

The ALV keeps driving forward after an image is taken at the beginning of each navigation cycle. After the image is processed and the corresponding algorithms are performed, the ALV location at the time instant of image taking can be found. At this moment, however, the ALV has already travelled a certain distance, and the current ALV location cannot be found from vision-based information. To overcome this difficulty, the system uses the ALV control information to estimate the current ALV location. Let P_i denote the ALV location at the time instant of image taking, and P′_i denote the current ALV location after the ALV has travelled a certain distance. Then, as described in Section 2.1, the translation vector T from P_i to P′_i can be found by using the travelled distance S and the turn angle δ of the front wheels of the ALV from P_i to P′_i, and the current ALV location P′_i can thus be estimated by using P_i and T.

Fig. 13. Illustration of how the turn angle of the front wheels of the ALV is computed.

At the end of each navigation cycle, the actual current ALV location can be estimated as described above. The ALV then drives from the actual current ALV location toward the navigation point when obstacles appear on the road ahead, or toward the central path line for line following when no obstacle appears on the road ahead.

4. Image processing techniques and experimental results

4.1. Image processing techniques

Fig. 14. A real color road image and its clustering result.

We use an ISODATA clustering algorithm based on an initial-center-choosing technique [35], which can cope with large changes of intensity during navigation, to divide the road image into three clusters according to their intensity values: (1) cluster-0: dark areas, like trees and the tested black board on the road; (2) cluster-1: gray areas, coming from the main body of the road; and (3) cluster-2: bright areas, like the sky and the tested white boards on the road. A real road image and its clustering result are shown in Fig. 14, where the tested black and white boards on the road are classified into cluster-0 and cluster-2 areas, respectively.
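The clustering step can be approximated with the following stand-in; it is not the ISODATA variant of Ref. [35] (in particular, the initial-center-choosing technique is replaced here by percentile-based seeds), but it illustrates the three-cluster intensity split into dark, road-gray, and bright regions.

```python
import numpy as np

def cluster_intensity(gray, n_iter=10):
    """Minimal k-means stand-in for the ISODATA clustering of Ref. [35]:
    splits a grayscale image into cluster-0 (dark), cluster-1 (road gray),
    and cluster-2 (bright) regions."""
    pix = gray.astype(float).ravel()
    centers = np.percentile(pix, [10, 50, 90])      # assumed initial centers
    for _ in range(n_iter):
        labels = np.argmin(np.abs(pix[:, None] - centers[None, :]), axis=1)
        for c in range(3):
            if np.any(labels == c):
                centers[c] = pix[labels == c].mean()
    centers = np.sort(centers)                       # keep dark < gray < bright order
    labels = np.argmin(np.abs(pix[:, None] - centers[None, :]), axis=1)
    return labels.reshape(gray.shape)                # 0: dark, 1: road, 2: bright
```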

We then extract the road surface, which is the cluster-1 feature, from the binary image to find the ALV location and the left and right road boundaries [35]. Next, we extract the boundary points of the objects in the area bounded by the two lines representing the two road boundaries from the binary image [39]. The process is described as follows. First, we use the Sobel operators, which have the advantage of both a differencing and a smoothing effect, to find the positions of the boundary points in the image.

Second, we scan the image to label the object boundary pixels and find 24-connected components. Each component represents one specific object shape. A 24-connected component allows single broken points on its boundary in all directions, including the horizontal, vertical, and diagonal directions, as illustrated in Fig. 15, where broken pixels exist between p1 and q1 in the horizontal direction, between p2 and q2 in the vertical direction, and between p3 and q3 in the diagonal direction, while the component is still connected.

Fig. 15. A 24-connected component, where one broken pixel exists between p1 and q1 in the horizontal direction, another exists between p2 and q2 in the vertical direction, and a third one exists between p3 and q3 in the diagonal direction.

Before the labeling procedure, we first define the candidate 24-neighbors of a point p, which have the following properties: (1) for each candidate 24-neighbor q of p, p and q are 24-connected; (2) if the u coordinate (the horizontal coordinate in the ICS) of q is smaller than that of p, then the w coordinate (the vertical coordinate in the ICS) of q is greater than or equal to that of p; and (3) if the u coordinate of q is greater than or equal to that of p, then the w coordinate of q is greater than that of p. Fig. 16 shows a point p and its candidate 24-neighbors, which are labeled with '×'.

Fig. 16. A point p and its candidate 24-neighbors, which are represented by the '×' points.

The labeling procedure for 24-connected components, which is extended from that for 8-connected components described in Ref. [39], is stated as follows.

Scan the input image pixel by pixel from left to right and from top to bottom. The nature of the scanning sequence ensures that when some pixel is examined, its candidate 24-neighbors have already been examined. Let p denote the pixel currently examined in the scanning process. If p is not a boundary point, simply move p to the next scanning position. If p is a boundary point, check its candidate 24-neighbors for labeling p. If none of the neighbors is a boundary point, decide that a new 24-connected component is encountered and assign a new label to p. If some of the neighbors are boundary points (and have thus been labeled), assign the label of any one of these boundary points to p and make a note that the labels of these boundary points are equivalent. Then move p to the next scanning position and examine it in the same way. At the end of the scan, all boundary points have been labeled, but some of these labels may be equivalent.

Finally, the algorithm of Warshall [40], which can save much computing time, is employed to find the equivalence classes of these labels, and a unique label is assigned to each class. The image is scanned again to replace each label by the label assigned to its equivalence class. This yields a set of 24-connected components, each of which represents one specific object shape composed of boundary points that are 24-connected. An accompanying figure shows a real road scene and the extracted boundary points, represented by the black circles, of two objects on the road.
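The two-pass labeling can be sketched as follows. This is a simplified stand-in: the candidate 24-neighbors are approximated by all previously scanned pixels in a 5×5 window, and union-find replaces Warshall's algorithm for resolving equivalent labels.

```python
import numpy as np

def label_boundary_points(boundary):
    """Two-pass labeling of boundary pixels into components that tolerate
    single broken pixels (a stand-in for the paper's 24-connected component
    labeling).  `boundary` is a boolean image of Sobel boundary points."""
    labels = np.zeros(boundary.shape, dtype=int)
    parent = [0]                                    # union-find forest

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    rows, cols = boundary.shape
    for r in range(rows):
        for c in range(cols):
            if not boundary[r, c]:
                continue
            # candidate neighbors: previously scanned pixels within a 5x5 window
            neigh = [labels[rr, cc]
                     for rr in range(max(r - 2, 0), r + 1)
                     for cc in range(max(c - 2, 0), min(c + 3, cols))
                     if (rr, cc) < (r, c) and labels[rr, cc] > 0]
            if not neigh:
                parent.append(len(parent))          # new component encountered
                labels[r, c] = len(parent) - 1
            else:
                labels[r, c] = neigh[0]
                for n in neigh[1:]:
                    union(neigh[0], n)              # record label equivalences
    # second pass: replace every label by its equivalence-class representative
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```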

4.2. Experimental results

Based on the proposed approach and algorithms, the prototype ALV constructed for this study is tested and found to be able to navigate safely and smoothly along part of the campus road in National Chiao Tung University. The ALV can follow the central path line when no obstacle appears on the road ahead, can decide whether the tested objects on the road ahead are obstacles during navigation, and can drive toward a safe navigation point to avoid collision with the detected obstacles. Many successful navigation tests confirm the feasibility of the approach. The average cycle time is about 1.0 s, and the average speed is 170 cm/s, or 6.2 km/h.

Fig. 17. A sequence of real road images, their clustering results, and the extracted and predicted boundary points of some tested objects on the road ahead, illustrating the obstacle detection and avoidance processes when the ALV navigates along a road. (a) Two type-1 objects newly appear in the image in this cycle, and whether they are obstacles will be judged in the next cycle. (b) The two type-1 objects in (a) become type-2 objects that are judged to be obstacles or not in this cycle, where the white board is decided to be an obstacle and the black board is decided to be a non-obstacle. (c) The two type-2 objects in (b) become type-3 objects that have been decided to be obstacles or non-obstacles in the previous cycle, and two additional type-1 objects newly appear in the image that will be judged to be obstacles or not in the next cycle. (d) The two type-1 objects in (c) become type-2 objects that are decided to be non-obstacles in this cycle. Note that in this figure the two type-3 objects in (c) disappear from the image though they are still ahead of the ALV. (e) The two type-2 objects in (d) become type-3 objects and no new object appears in the image in this cycle, while the ALV has reached the navigation point and begins to head back to the central path line. (f) One type-3 object in (e) disappears from the image, and another type-3 object in (e) remains in the image in this cycle, while the ALV keeps heading back to the central path line. (g) One type-1 object newly appears in the image when the ALV navigates on the central path line, and whether it is an obstacle will be judged in the next cycle. Note that in this figure the object …

Fig. 17 shows a sequence of real road images, their clustering results, and the extracted and predicted boundary points of some tested objects on the road ahead, which illustrate the obstacle detection and avoidance processes when the ALV navigates along a road. In the figure, the road boundaries are represented by the white lines, the extracted and predicted boundary points of objects are represented by the symbols '•' and '×', respectively, the navigation points are represented by the white circles, and type-i objects, for i = 1, 2, or 3, are labeled Ti.

In Fig. 17(a), two boards newly appear in the image; the white board is classified into a cluster-2 area and the black board into a cluster-0 area. As defined previously, they are type-1 objects in this cycle, and will be judged to be obstacles or not in the next cycle. In (b), the two boards become type-2 objects that are judged to be obstacles or not in this cycle. After the shape matching process, it is decided that the white board is an obstacle and that the black board is not. Then the detected obstacle (the white board) is used to derive the navigation point, and a turn angle is computed to drive the ALV toward this point for safe navigation.

In (c), the two boards in (b) become type-3 objects that have been decided to be obstacles or non-obstacles in the previous cycle, and two additional type-1 objects newly appear in the image that will be judged to be obstacles or not in the next cycle. Since the previous obstacle in (b) still remains in the image in this cycle, it is extracted and used to derive the navigation point in this cycle, and the ALV keeps driving toward the navigation point. In (d), the two type-3 objects in (c) disappear from the image, and the two white boards in (d) become type-2 objects that are judged and decided to be non-obstacles in this cycle. Hence, no obstacle appears in the image in this cycle. However, due to the angle of the camera view as described in Section 3.1, the previous obstacle in (c) is still ahead of the ALV even though it has disappeared from the image. At this moment, we predict the location of this hidden obstacle with respect to the ALV, which is then used to derive the navigation point, which is also invisible in the image. The ALV keeps driving toward the navigation point.

In (e), the two white boards in (d) become type-3 objects and the ALV has reached the navigation point. At this moment, no obstacle is ahead of the ALV, and the ALV begins to head back to the central path line. In (f), one type-3 object in (e) disappears from the image and another type-3 object in (e) remains in the image, and the ALV keeps heading back to the central path line. Finally, as shown in (g), the ALV navigates on the central path line after several cycles of navigation. In (g), a plastic bucket newly appears on the road ahead, which will be judged to be an obstacle or not in the next cycle. It can be seen from the cluster-1 area in the clustering result that the bucket partially blends into the road. In (h), the bucket is decided to be an obstacle after the shape matching process, where the bucket also blends into the road in this cycle. Then a navigation point is derived, and a turn angle is computed to drive the ALV toward the navigation point. The obstacle avoidance process for ALV navigation described above is performed in the same way in subsequent navigation cycles.

5. Conclusions

A vision-based approach to obstacle detection and avoidance for ALV navigation in outdoor road environments has been proposed. Several techniques have been integrated in this study to provide a reliable navigation scheme. Vision-based and control-based kinematic models have been combined so that fault-tolerant ALV navigation can be achieved. Backprojection and projection principles have been used to predict the boundary points of objects in the next cycle. The DWC correlation measure has been employed to judge whether an object is an obstacle. A connected component labeling algorithm and Warshall's algorithm have been implemented to extract the boundary points of objects on the road ahead effectively and with less computing time. When obstacles disappear from the image but are still ahead of the ALV, their positions with respect to the ALV can be predicted and used further to derive the navigation point. The safe navigation point on the road is chosen appropriately during navigation for obstacle avoidance. A sequence of real road images has been used in experiments to test the proposed obstacle detection and avoidance method along a road. Successful navigation results confirm the effectiveness of the proposed approach. Future research directions may focus on recognition and representation of general objects on roads, path planning, and environment sensing and learning, etc.

Acknowledgements

This work was supported by the National Science Council, Republic of China, under Grant NSC87-2213-E009-001.

References

[1] C. Thorpe, Outdoor visual navigation for autonomous robots, Robotics and Autonomous Systems 7 (1991) 85–98.
[2] S. Singh, P. Keller, Obstacle detection for high speed autonomous navigation, Proc. IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, April 1991, pp. 2798–2805.
[3] C. Thorpe, M. Hebert, T. Kanade, S. Shafer, Toward autonomous driving: the CMU Navlab: Part I. Perception, IEEE Expert 6 (3) (1991) 31–42.
[4] C. Thorpe, M. Hebert, T. Kanade, S. Shafer, Toward autonomous driving: the CMU Navlab: Part II. Architecture and systems, IEEE Expert 6 (3) (1991) 44–52.
[5] J.D. Crisman, C.E. Thorpe, SCARF: a color vision system that tracks roads and intersections, IEEE Transactions on Robotics and Automation 9 (1) (1993) 49–58.
[6] Y. Goto, A. Stentz, The CMU system for mobile robot navigation, Proc. IEEE International Conference on Robotics and Automation, Raleigh, NC, USA, 1987, pp. 99–105.
[7] C. Thorpe, M.H. Hebert, T. Kanade, S.A. Shafer, Vision and navigation for the Carnegie-Mellon NAVLAB, IEEE Trans. on Pattern Analysis and Machine Intelligence 10 (3) (1988) 362–373.
[8] J.D. Crisman, C.E. Thorpe, UNSCARF, a color vision system for the detection of unstructured roads, Proc. IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, April 1991, pp. 2496–2501.
[9] M.A. Turk, D.G. Morgenthaler, K.D. Gremban, M. Marra, VITS — a vision system for autonomous land vehicle navigation, IEEE Trans. on Pattern Analysis and Machine Intelligence 10 (3) (1988) 342–361.
[10] A.M. Waxman, J.J. LeMoigne, L.S. Davis, B. Srinivasan, T.R. Kushner, E. Liang, T. Siddalingaiah, A visual navigation system for autonomous land vehicles, IEEE Journal of Robotics and Automation RA-3 (2) (1987) 124–141.
[11] K.E. Olin, D.Y. Tseng, Autonomous cross-country navigation, IEEE Expert 6 (3) (1991) 16–30.
[12] L.S. Davis, Visual navigation at the University of Maryland, Robotics and Autonomous Systems 7 (1991) 99–111.
[13] D. Kuan, G. Phipps, A. Hsueh, Autonomous robotic vehicle road following, IEEE Trans. on Pattern Analysis and Machine Intelligence 10 (4) (1988) 648–658.
[14] M. Schwarzinger, T. Zielke, D. Noll, M. Brauchmann, W.V. Seelen, Vision-based car-following: detection, tracking, and identification, Proc. of the Intelligent Vehicles '92 Symposium, Detroit, USA, Jun. 1992, pp. 24–29.
[15] F. Thomanek, E.D. Dickmanns, D. Dickmanns, Multiple object recognition and scene interpretation for autonomous road vehicle guidance, Proc. of the Intelligent Vehicles '94 Symposium, Paris, France, Oct. 1994, pp. 231–236.
[16] M. Cappello, M. Campani, A. Succi, Detection of lane boundaries, intersections and obstacles, Proc. of the Intelligent Vehicles '94 Symposium, Paris, France, Oct. 1994, pp. 284–289.
[17] U. Regensburger, V. Graefe, Visual recognition of obstacles on roads, Proc. of the 1994 IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, Munich, Germany, Sep. 1994, pp. 980–987.
[18] B. Ulmer, VITA — an autonomous road vehicle (ARV) for collision avoidance in traffic, Proc. of the Intelligent Vehicles '92 Symposium, Detroit, USA, Jun. 1992, pp. 36–41.
[19] W. Efenberger, Q.H. Ta, L. Tsinas, V. Graefe, Automatic recognition of vehicles approaching from behind, Proc. of the Intelligent Vehicles '92 Symposium, Detroit, USA, Jun. 1992, pp. 57–62.
[20] S. Hirata, Y. Shirai, M. Asada, Scene interpretation using 3-D information extracted from monocular color images, Proc. of the 1992 IEEE/RSJ International Conference on Intelligent Robots and Systems, Raleigh, NC, USA, Jul. 1992, pp. 1603–1610.
[21] M. Schmid, An approach to model-based 3-D recognition of vehicles in real time by machine vision, Proc. of the 1994 IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, Munich, Germany, Sep. 1994, pp. 2064–2071.
[22] B. Roberts, B. Bhanu, Inertial navigation sensor integrated motion analysis for autonomous vehicle navigation, Journal of Robotic Systems 9 (6) (1992) 817–842.
[23] B. Heisele, W. Ritter, Obstacle detection based on color blob flow, Proc. of the Intelligent Vehicles '95 Symposium, Detroit, MI, USA, Sep. 1995, pp. 282–286.
[24] W. Enkelmann, Obstacle detection by evaluation of optical flow fields from image sequences, Image and Vision Computing 9 (3) (1991) 160–167.
[25] S.M. Smith, J.M. Brady, A scene segmenter: visual tracking of moving vehicles, Engineering Applications of Artificial Intelligence 7 (2) (1994) 191–204.
[26] Z. Zhang, R. Weiss, A.R. Hanson, Obstacle detection based on qualitative and quantitative 3D reconstruction, IEEE Trans. on Pattern Analysis and Machine Intelligence 19 (1) (1997) 15–26.
[27] E. Grosso, M. Tistarelli, Active/dynamic stereo vision, IEEE Trans. on Pattern Analysis and Machine Intelligence 17 (9) (1995) 868–879.
[28] J.L. Bruyelle, J.G. Postaire, Direct range measurement by linear stereovision for real-time obstacle detection in road traffic, Robotics and Autonomous Systems 11 (1993) 261–268.
[29] N. Kehtarnavaz, N.C. Griswold, J.S. Lee, Visual control of an autonomous vehicle (BART) — the vehicle-following problem, IEEE Trans. on Vehicular Technology 40 (3) (1991) 654–662.
[30] Y.F. Wan, F. Cabestaing, J.C. Burie, A new edge detector for obstacle detection with a linear stereo vision system, Proc. of the Intelligent Vehicles '95 Symposium, Detroit, MI, USA, Sep. 1995, pp. 130–135.
[31] M.E. Brauckmann, C. Goerick, J. Grob, T. Zielke, Towards all around automatic visual obstacle sensing for cars, Proc. of the Intelligent Vehicles '94 Symposium, Paris, France, Oct. 1994, pp. 79–84.
[32] J.L. Bruyelle, J.G. Postaire, Disparity analysis for real time obstacle detection, Proc. of the Intelligent Vehicles '92 Symposium, Detroit, USA, Jun. 1992, pp. 51–56.
[33] M. Xie, L. Trassoudaine, J. Alizon, J. Gallice, Road obstacle detection and tracking by an active and intelligent sensing strategy, Machine Vision and Applications 7 (1994) 165–177.
[34] T.J. Fan, W.H. Tsai, Automatic Chinese seal identification, Computer Vision, Graphics, and Image Processing 25 (1984) 311–330.
[35] K.H. Chen, W.H. Tsai, Vision-based autonomous land vehicle guidance in outdoor road environments using combined line and road following techniques, Journal of Robotic Systems 14 (10) (1997) 711–728.
[36] S.D. Cheng, W.H. Tsai, Model-based guidance of autonomous land vehicle in indoor environments by structured light using vertical line information, Journal of Electrical Engineering 34 (6) (1991) 441–452.
[37] L.L. Wang, P.Y. Ku, W.H. Tsai, Model-based guidance by the longest common subsequence algorithm for indoor autonomous vehicle navigation using computer vision, Automation in Construction 2 (1993) 123–137.
[38] Y.M. Su, W.H. Tsai, Autonomous land vehicle guidance for navigation in buildings by computer vision, radio, and photoelectric sensing techniques, Journal of the Chinese Institute of Engineers 17 (1) (1994) 63–73.
[39] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA, USA, 1992.
[40] S. Warshall, A theorem on Boolean matrices, Journal of the ACM 9 (1) (1962) 11–12.