
The Heterogeneous Systems Integration Design and Implementation for Lane Keeping on a Vehicle

Shinq-Jen Wu, Hsin-Han Chiang, Member, IEEE, Jau-Woei Perng, Member, IEEE, Chao-Jung Chen, Bing-Fei Wu, Senior Member, IEEE, and Tsu-Tian Lee, Fellow, IEEE

Abstract—In this paper, an intelligent automated lane-keeping system is proposed and implemented on our vehicle platform, i.e., TAIWAN iTS-1. This system takes on the challenge of integrating heterogeneous systems online, including a real-time vision system, a lateral controller, in-vehicle sensors, and a steering wheel (SW) actuating motor. The implemented vision system detects the lane markings ahead of the vehicle, regardless of variations in road appearance, and determines the desired trajectory based on the relative position of the vehicle with respect to the center of the road. To achieve more humanlike driving behavior, such as smooth turning, particularly at high speeds, a fuzzy gain scheduling (FGS) strategy is introduced to compensate the feedback controller so that the SW command is adapted appropriately. Instead of manual tuning by trial and error, the FGS methodology is designed to ensure that the closed-loop system satisfies the crossover model principle. The proposed integrated system is examined on the standard testing road at the Automotive Research and Testing Center (ARTC)1 and on extraurban highways.

Index Terms—Automated steering control, crossover model principle, fuzzy gain scheduling (FGS), lane keeping, lateral vehicle control, vision system.

NOMENCLATURE

Fyf and Fyr  Lateral forces of the front and rear tires, respectively.
Cf and Cr  Cornering stiffness of the front and rear tires, respectively.
αf and αr  Slip angles of the front and rear tires, respectively.
M  Mass of the vehicle.
Iϕ  Inertia moment around the center of gravity (CG) of the vehicle.
a and b  Distances from the front and rear tires to the CG, respectively.
vx  Forward velocity of the vehicle.
vy  Lateral velocity at the CG of the vehicle.
η  Yaw rate at the CG of the vehicle.
δ  Steering wheel (SW) angle.
δf  Front-wheel angle of the vehicle.
isr  Steering ratio between the SW and the front wheels.
ay  Lateral acceleration at the CG of the vehicle.
x, y, and z  Point in global coordinates along the X-, Y-, and Z-axes, respectively.
ui and vi  Horizontal and vertical axes on the image plane.
f  Focal length of the charge-coupled device (CCD) camera.
H  Height above the ground of the CCD camera mounted on the vehicle.
k  Curvature of the road trajectory model.
m0  Tangent of the vehicle heading angle relative to the road tangent.
b0  Lateral offset of the road trajectory model.
θ  Inclination of the road.
W  Lane width in world space.
MW  Width of lane marking in global coordinates.
mi  Width of lane marking on the image plane.
Ld  Look-ahead distance for previewed navigation.
yLd  Lateral offset to the road centerline at the look-ahead distance Ld.
εLd  Angle between the tangent to the road and the vehicle axis at the look-ahead distance Ld.
ρLd  Road curvature at the look-ahead distance Ld.
x  State vector of vehicle dynamics.
A, B, and E  Vehicle linear model matrices.
Kfb  Full-state feedback control gain.
τ  Transport lag of the controlling input.
In and Im  Identity matrices with dimensions n = 4 and m = 1, respectively.
∆i  Tuning gain of the ith fuzzy rule.
∆fg  Inferred gain of fuzzy gain scheduling (FGS).
µi  ith rule strength of FGS.

Manuscript received October 24, 2005; revised February 20, 2006, July 15, 2006, May 15, 2007, August 22, 2007, October 2, 2007, and October 27, 2007. This work was supported by the Program for Promoting Academic Excellence of Universities under Grant NSC 96-2752-E-009-012-PAE. The Associate Editor for this paper was U. Nunes.

S.-J. Wu is with the Department of Electrical Engineering, Da-Yeh University, Changhua 51591, Taiwan, R.O.C. (e-mail: jen@mail.dyu.edu.tw).

H.-H. Chiang, C.-J. Chen, and B.-F. Wu are with the Department of Electrical and Control Engineering, National Chiao Tung University, Hsinchu 300, Taiwan, R.O.C. (e-mail: hsinhan.ece90g@nctu.edu.tw; cjchen@cssp.cn.nctu.edu.tw; bwu@cssp.cn.nctu.edu.tw).

J.-W. Perng is with the Department of Mechanical and Electromechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan, R.O.C. (e-mail: jwperng@faculty.nsysu.edu.tw).

T.-T. Lee is with the Department of Electrical and Control Engineering, National Chiao Tung University, Hsinchu 300, Taiwan, R.O.C., and also with the Department of Electrical Engineering, National Taipei University of Technology, Taipei 106, Taiwan, R.O.C. (e-mail: ttlee@ntut.edu.tw).

Digital Object Identifier 10.1109/TITS.2008.922874

1 www.artc.org.tw.

I. INTRODUCTION

AUTOMATED vehicles and highways can improve transportation systems. Studies have shown that human error is responsible for over 90% of highway accidents [1]. Vehicle sensing and handling techniques have been heavily integrated into driving assistance systems to improve safety, reduce emissions and fuel consumption, and increase the traffic capacity of existing highways. Due to these potential benefits, research on vehicle automation has been ongoing for decades [2]. Among these efforts, research on automated driving control for lane keeping has been pursued since the early 1990s.

The Partners for Advanced Transit and Highways program is an infrastructure-based approach that is implemented on a reference/sensing highway system; it involves discrete magnetic markers that are embedded in the roadway, forming a predetermined path, and a magnetometer that is mounted in front of the experimental vehicles. These systems can perform some demanding tasks, for example, steering a vehicle within a small intervehicle space. Some impressive navigation systems have successfully been implemented in real cars, such as the Rapidly Adapting Lateral Position Handler (RALPH) [3], [4], which was developed for the Navlab vehicle at Carnegie Mellon University, and the Generic Obstacle and Lane Detection (GOLD) system that was implemented on the ARGO autonomous vehicle at the University of Parma, Parma, Italy [6]–[9]. The University of California at Berkeley and the Ohio State University (OSU) [10]–[12] developed navigation systems on their experimental cars, i.e., the Honda Accord LX, and the systems' capabilities to maneuver the vehicle, such as autonomous lane keeping and lane changing, have been demonstrated. A remarkable amount of work has also been produced by the University of the Bundeswehr, Munich, Germany, headed by Dickmanns et al. [13]. An intelligent vision-based road transportation system performs global navigation missions on a network of unmarked roads by integrating the Global Positioning System (GPS) [22]. Other interesting work related to autonomous vehicles by means of machine vision can be found in [5], [14], and [23]–[27].

This paper proposes an automated lane-keeping system. It involves challenges in the real-time integration of heterogeneous and complex systems: the implemented real-time vision system, a lateral controller that compensates for varying vehicle velocities and a signal transport lag in the actuator, and an SW actuating motor. Furthermore, the proposed lane-keeping system also takes human driving behavior into account. Although there is currently much research on steering automation, human driving behavior is considered less often. Human drivers look ahead along the desired path (or lane); thus, we mimic this behavior by using the real-time vision system. Accordingly, drivers base their control actions on the sensed feedback states to bring the future vehicle path in line with the future desired path. The in-vehicle sensors (i.e., an accelerometer and a yaw rate sensor) serve as the perception or feeling of human drivers. It has been found that human drivers, in fact, use feedback signals (i.e., lateral acceleration and yaw rate) other than the lateral offset to stabilize the vehicle for the lane-keeping task [31]. The transport lag from the signal processing to the controlling action represents the neural delay of humans.

With visual data grabbed from a single camera mounted behind the windshield, the real-time vision system is mainly capable of estimating the vehicle location in its lane for trajectory recognition purposes. In the approach of Taylor et al. [10], [11], the image plane coordinates are related to a 2-D ground plane (X, Y); a road surface with an angle of inclination is not considered in the lane-recognition process. In addition, the linear Hough transform is utilized to obtain the best-fit straight line from a set of candidate straight lines on the image plane, which increases the computational load in calculating the best overall score of each candidate line, particularly when the feature-extraction procedure returns irrelevant features that are not part of the lane markings. Chapuis et al. [16], [17] propose an algorithm that combines a road detection module and a 3-D reconstruction module, starting from image coordinates. Moreover, this model is extended to a multimodel vision-based framework to perform multilane detection against road singularities [18]. A probabilistic approach has also been proposed to group lane boundary hypotheses into the left- and right-lane boundaries [19]. Although these methods can improve the precision of characterizing the roadside, the probabilistic model of the roadside in the image needs to be initialized by an offline training phase. Furthermore, this model may progressively be refined with the images and adapted to different road types.

In our approach, a parabolic polynomial model is applied to obtain the recognized lane markings such that curves can be estimated more accurately. Instead of the heavy calculation due to a training phase in the preliminary recognition process, the lane tendency is primarily predicted using the first-order Taylor polynomial to quickly figure out the possible region of interest (ROI). Once the lane tendency is determined, the road model is updated in a recursive way after each lane marking detection. The main advantage of our method is that it limits the processing time, even in the presence of road-surface variations such as shadows, cracks, and on-road text [20]. Furthermore, this algorithm is intended to be implemented on a DSP-based system, and an optimized operation for DSP platforms has been proposed in [21].

Many studies have confirmed that drivers indeed exhibit adaptive behavior in steering tasks. We propose an FGS strategy with respect to this behavior for the steering control. FGS compensates for the static feedback control and adjusts the steering effort by considering the lateral offset and the instantaneous speed of the vehicle. Additionally, the rule base of FGS, which involves parameters that define the membership functions and consequent expressions, is designed to establish a closed-loop driving system based on the crossover model principle [29], eliminating the need for manual tuning by trial and error. This system was examined at the Automotive Research and Testing Center (ARTC), with a maximum velocity of 145 km/h for standard roads and 120 km/h for irregular roads. In addition, in numerous experiments on the highway, the system kept to the lane in a complicated road environment, i.e., dense traffic, poor road markings, and varieties in road appearance. Demonstration videos that reveal the intelligent integrated system's capabilities in automated lane keeping were exhibited at the 2005 IEEE Intelligent Vehicles Symposium, which was held in Las Vegas, NV [30].

This paper is organized as follows. Section II describes the architecture of our developed automated lane-keeping system and the bicycle model verification with respect to our vehicle platform. Section III briefly introduces the lane-detection algorithm. Section IV describes the lateral controller design, including the feedback control and FGS design. Section V presents the overview of the hardware and the experimental results under different environmental conditions. Finally, Section VI draws conclusions.

Fig. 1. TAIWAN iTS-1 with the zoom-in view of the CCD and motor driver.

Fig. 2. Automated lane-keeping system architecture.

II. AUTOMATED LANE-KEEPING SYSTEM

A. Systems Integration Architecture

The proposed automated lane-keeping system comprises a vision system, in-vehicle sensors, a lateral controller, a motor driver, and interface circuits. The complete system is fully implemented in TAIWAN iTS-1, as shown in Fig. 1, which is a commercial prototype vehicle, i.e., a Savrin manufactured by the Mitsubishi Motor Company. The considered feedback configuration is shown in Fig. 2. The vision system captures the real-time road scene and then determines the lateral offset from the centerline and the angle between the road tangent and the heading of the vehicle at a specified look-ahead distance. The real-time vision system equipped in TAIWAN iTS-1 is composed of a monochrome CCD camera, a frame grabber, and a PC-based central processing system. The camera module provides a 644 × 493 pixel image at 30 frames per second (FPS). The processing time of the developed vision system is less than 1 ms per frame.

Additionally, the measurements from the vision system, together with the feedback state signals, are fed into the lateral controller that is built in the real-time MicroAutoBox2 (MABX). The vision measurement is restricted to update every 40 ms. The signals of the in-vehicle sensors are in the form of analog voltages and are directly transmitted through an analog-to-digital (A/D) port to the MABX, with a sampling rate of 25 Hz. The lateral controller routes the feedback signals from the vision system and the in-vehicle sensors, and the sampling interval of the lateral controller is restricted to 40 ms according to the update rate of the data from the vision system. The steering control command is then transmitted from the lateral controller to the motor driver via the interface circuit. The driver can manually operate the emergency switch in the interface circuit to prevent accidents.

2 MicroAutoBox: a compact stand-alone prototyping unit that was manufactured by dSPACE.

Fig. 3. Bicycle model diagram of lateral vehicle dynamics.

Transmission latencies are a critical hardware property of the automated lane-keeping system. There are two stages of delay in the transfer: 1) one delay is caused by the image processing of the vision system, and 2) the other delay is caused by the action of the servomotor. Regarding the former, the CCD camera sequentially grabs images at 30 FPS and must transmit each image before capturing another one. This operation delays the information transmission by the amount of time that it takes to transmit one full image. In addition, the vision system runs the image processing program, which further delays the transmission of information to the lateral controller. In total, the delay between the time the shutter of the CCD camera closes and the time the measurements for that image are available to the lateral controller is 40 ms. The other latency, from the action of the servomotor, has also been measured, on average, as 0.52 s by comparing the time responses of the command and the measurement. Since these delays are quite substantial, this issue should explicitly be considered in the control design, as introduced in Section III. A more detailed description of the major components of our integrated system will be given in Section V-A.

B. Vehicle Lateral Dynamics Model

The detailed dynamics of a vehicle could be described by a mechanical model that naturally has a minimum of 6 degrees of freedom (DOF). Many studies [11]–[15], [26]–[28], [31] have claimed that the longitudinal and the lateral dynamics of a vehicle can be separated, provided that the moving velocity does not vary much. The bicycle model shown in Fig. 3, which dominates the lateral vehicle dynamics, is useful in designing the steering controller. The slip angle of the front and rear tires is defined as the angle between the orientation of the tire and the orientation of the velocity vector. Based on the assumption of a small steering angle and a linear tire model, the relationship between the lateral force and the slip angle is essentially linear with a constant of proportionality called the cornering stiffness; the lateral force at a given slip angle can, therefore, be determined as

$$F_{yf} = C_f \alpha_f, \qquad F_{yr} = C_r \alpha_r \tag{1}$$

with

$$\alpha_f = \delta_f - \frac{a\eta + v_y}{v_x}, \qquad \alpha_r = \frac{b\eta - v_y}{v_x}.$$

Note that the cornering stiffness of the front and rear tires Cf,r that is considered here is the slope of the side-force characteristic at the origin on a dry road. Coupling the front and rear tires yields the following state equation of the bicycle model:

$$\begin{bmatrix} \dot{v}_y \\ \dot{\eta} \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} v_y \\ \eta \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \delta_f \tag{2}$$

with

$$a_1 = -\frac{C_f + C_r}{M v_x}, \quad a_2 = \frac{b C_r - a C_f}{M v_x} - v_x, \quad a_3 = \frac{b C_r - a C_f}{I_\varphi v_x}, \quad a_4 = -\frac{a^2 C_f + b^2 C_r}{I_\varphi v_x}, \quad b_1 = \frac{C_f}{M}, \quad b_2 = \frac{a C_f}{I_\varphi}$$

where the related parameters have been defined in the Nomenclature. Here, the cornering stiffness in the bicycle model (2) is given only as a fixed value by the vehicle company for a normal road condition.
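To make the construction of (2) concrete, the following minimal sketch assembles the speed-dependent bicycle model matrices. The numerical parameter values are illustrative placeholders only and are not the TAIWAN iTS-1 data, which are not listed in the paper.

```python
import numpy as np

def bicycle_model(v_x, C_f, C_r, M, I_phi, a, b):
    """Continuous-time bicycle model (2): states [v_y, eta], input delta_f."""
    A = np.array([
        [-(C_f + C_r) / (M * v_x),            (b * C_r - a * C_f) / (M * v_x) - v_x],
        [(b * C_r - a * C_f) / (I_phi * v_x), -(a**2 * C_f + b**2 * C_r) / (I_phi * v_x)],
    ])
    B = np.array([[C_f / M], [a * C_f / I_phi]])
    return A, B

# Placeholder parameters (illustrative only, not the vehicle's measured values).
A2, B2 = bicycle_model(v_x=60 / 3.6, C_f=6.0e4, C_r=6.0e4,
                       M=1700.0, I_phi=3000.0, a=1.1, b=1.6)
```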

Validating the bicycle model against the real vehicle dynamics is critical to obtain precise tracking results in steering control. The bicycle model in (2) varies with the vehicle speed vx. Notably, the actual input to our vehicle platform is the SW angle δ rather than the front-wheel angle δf. According to vehicular steering mechanics [32], the SW angle can be expressed as the product of the steering ratio isr and the front-wheel angle δf, i.e.,

$$\delta = i_{sr} \cdot \delta_f. \tag{3}$$

Because the compliance and steering torque gradients vary with increasing steering angles, the load on the front tires, the tire pressure, the coefficient of friction, etc., isr is, in general, not a fixed value for the power steering of the vehicle. However, a constant ratio can practically be used for the control design. The steering ratio isr can then be slightly adjusted to yield a response that is closer to that of the real vehicle platform. Fig. 4 compares experimental results with the bicycle model predictions for a transient maneuver at about 60 km/h. The measured SW angle was used as the input to the model. The predicted lateral acceleration from the model can approximately be calculated as

$$a_y \cong v_x \cdot \eta - \dot{v}_y \tag{4}$$

and compares well with the experimental data in Fig. 4(a). The predicted yaw rate of the vehicle also shows good correlation in Fig. 4(b). Several other quantities were also measured and compared with the bicycle model. The level of correlation was generally similar to the results presented here.

Fig. 4. State signal for verification between the model and the vehicle (the solid line refers to the model output, whereas the dashed line refers to the measured output).


Fig. 5. Relationship between the world space and the image plane.

III. VISION ALGORITHM

There are two stages of lane detection for accuracy enhancement. The first stage is responsible for predicting the lane tendency, whereas the second stage detects the lane markings ahead of the vehicle and computes the vehicle location in its lane. As shown in Fig. 5, the points on the image plane (ui, vi) are projected into the global coordinates (x, y, z) and can be obtained as

$$u_i = e_u \cdot x / y \tag{5}$$

$$v_i = e_v \cdot (z - H) / y \tag{6}$$

where eu = f/dui, ev = f/dvi, and dui and dvi are the physical width and height of an image pixel, respectively. In the global coordinates, the road surface is modeled as a plane with an angle of inclination θ, which can be obtained as follows:

$$z = m_\theta \cdot y \tag{7}$$

where mθ = tan θ. The lane geometry can be expressed in the form of a polynomial such as

$$x = k \cdot y^2 + m_0 \cdot y + b_0. \tag{8}$$

The objective of lane detection is to determine the coefficients (k, m0, b0). By rearranging (5)–(8), one can obtain

$$u_i = \frac{k\, e_u e_v H}{e_v m_\theta - v_i} + m_0 e_u + \frac{b_0}{H}\,\frac{e_u}{e_v}\,(e_v m_\theta - v_i). \tag{9}$$

Equation (9) represents the lane model in terms of the image coordinates (ui, vi) with a road inclination mθ when the road is not flat. The points in global coordinates are defined as Pi(xi, yi, zi), i ∈ {l, m, r}, where l, m, and r denote the left, middle, and right sides, respectively. PlPr is assumed to be parallel to the x-axis, and Pm is defined as Pm = (Pl + Pr)/2. The lane width W is (xr − xl), with yl = ym = yr and zl = zm = zr. The pixels on the image plane are (ui, vi), i ∈ {r, m, l}. Therefore, xm, ym, and zm can be obtained [21] as

$$x_m = u_m \cdot W / (u_r - u_l) \tag{10}$$

$$y_m = e_u \cdot W / (u_r - u_l) \tag{11}$$

$$z_m = H + \frac{e_u\, v_m\, W}{e_v\,(u_r - u_l)}. \tag{12}$$

By substituting (10) and (11) into (8), one can obtain

$$u_m (u_r - u_l) = k e_u^2 W + m_0 e_u (u_r - u_l) + \frac{b_0}{W} (u_r - u_l)^2. \tag{13}$$

Both (9) and (13) represent the lane model in image coordinates; (9) is available if mθ is known, whereas (13) is formed based on the assumption that the lane width W is constant. However, neither mθ nor W can exactly be determined over the full range of environments. Thus, (9) and (13) are combined in the algorithm to predict the tendency of the lane and to calibrate both mθ and W. Equation (13) can be rewritten as

$$U = C_{xy0} + C_{xy1} \cdot \Delta u + C_{xy2} \cdot \Delta u^2 \tag{14}$$

with

$$C_{xy0} = k \cdot e_u^2 \cdot W, \quad C_{xy1} = m_0 \cdot e_u, \quad C_{xy2} = b_0 / W, \quad U = u_m (u_r - u_l), \quad \Delta u = (u_r - u_l).$$

Given paired data (ul, ur), the coefficients Cxy0, Cxy1, and Cxy2 can be determined by the weighted least squares (WLS) approximation, so that the unknown coefficients k, m0, and b0 in (8) are obtained from (14).

At the beginning of detection, the detected data (ul, ur) in the first frame are insufficient for acquiring the exact lane tendency but still contribute information on the lane trend. The first-order Taylor polynomial is applied to quickly figure out the possible ROI for detection, which saves unnecessary processing time in the first frame. The previous frame then provides reliable information for predicting the lane tendency of the current frame based on the updated k, m0, and b0 of (13). Recall that ui is a function of vi in (9) and, thus, can be expressed as

$$u_i = F(v_i) \tag{15}$$

and the first-order Taylor polynomial for F(vi), in powers of (vi − ve), is

$$\tilde{F}(v_i) = F(v_e) + F'(v_e)(v_i - v_e) \tag{16}$$

where the first derivative of F(vi) at ve is

$$F'(v_e) = \frac{k\, e_u e_v H}{(e_v m_\theta - v_e)^2} - \frac{b_0}{H} \cdot \frac{e_u}{e_v}. \tag{17}$$

The lane tendency around the expansion coordinate ve is obtained from (16). Subsequently, the coordinates of the left and right sides of the lane can also be predicted from (16) by replacing b0 with b0 ∓ W/2, i.e.,

$$b_i = \begin{cases} b_0 - W/2, & \text{if } i = l \\ b_0 + W/2, & \text{if } i = r. \end{cases} \tag{18}$$
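A short sketch of this prediction step, assuming the camera constants eu, ev, H, and mθ are known: it linearizes the lane model (9) around the expansion row ve and shifts b0 by ∓W/2 to predict the left- and right-side abscissas, per (15)–(18).

```python
import numpy as np

def predict_lane_sides(v_rows, v_e, k, m0, b0, W, e_u, e_v, H, m_theta):
    """Predict left/right marking abscissas on image rows v_rows by linearizing
    the lane model (9) around v_e, per (15)-(18)."""
    def F(v, b):                         # lane model (9) with offset b
        d = e_v * m_theta - v
        return k * e_u * e_v * H / d + m0 * e_u + (b / H) * (e_u / e_v) * d

    def dF(v, b):                        # first derivative (17)
        d = e_v * m_theta - v
        return k * e_u * e_v * H / d**2 - (b / H) * (e_u / e_v)

    sides = {}
    for name, b in (("left", b0 - W / 2.0), ("right", b0 + W / 2.0)):
        sides[name] = F(v_e, b) + dF(v_e, b) * (np.asarray(v_rows) - v_e)
    return sides
```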

After the lane tendency is predicted, the following stage performs lane marking detection. In this stage, the following two intrinsic factors are considered.

Fig. 6. (a) Constant marking width on the image plane and in the world space. (b) Intensity on the scanning line that is greater than that of the road surface. (c) Search for the point M that satisfies IM > IM−mi/2 and IM > IM+mi/2. (d) Two maximum gradients, GL and GR, within the intervals [M − mi/2, M) and (M, M + mi/2] are located. (e) Correct detection case. (f) Unsuccessful detection case.

1) The gray-level values of the markings exceed those of the road surface. Sharper edges between the markings and the road surface result in higher gradients. The line detection mask [33]

$$\begin{bmatrix} 1 & -2 & 1 \\ 1 & -2 & 1 \\ 1 & -2 & 1 \end{bmatrix}$$

is used in the gradient calculation, since only the vertical edges are of concern. The line detection mask responds more strongly to vertically oriented lines, and the maximum gradient pixels will be located on the darker side.

2) The width of lane markings is assumed to be constant in the global coordinates, because the markings are painted on the road surface. The width of markings ranges from 10 to 30 cm; therefore, the initial width is set to 15 cm in this process. In Fig. 6(a), MW and mi denote the width of lane markings in the global coordinates and on the image plane, respectively. The mapping equation can be obtained as

$$M_W = \frac{H}{e_v \cdot m_\theta - v_i} \cdot \frac{e_v}{e_u} \cdot m_i. \tag{19}$$

Fig. 7. Lane-detection flowchart.

Accordingly, the lane marking detection proceeds as follows. The intensity of a lane marking exceeds that of the road surface. Fig. 6(b) shows a horizontal profile of lane markings on a given scanning line, with an intensity greater than that of the surface. Due to the greater brightness, the detection process is based on the determination of horizontal dark–light–dark (DLD) intensity transitions. In Fig. 6(c), point M is located at a DLD transition if the intensity IM exceeds that of its left and right neighbors at [M − mi/2, M + mi/2]. On the scanning line, the search for a DLD transition continues until a transition is identified or the search reaches the end of the scanning line. When the transition is determined, two maximum gradients, GL and GR, at points L and R in the intervals [M − mi/2, M) and (M, M + mi/2], are determined using the line detection mask. Fig. 6(d) displays a candidate marking region (L, R). The distance LR is compared with a predefined threshold LRth. The process continues if LR > LRth; otherwise, the recent (L, R) region is erased, and M is redetermined in the step of Fig. 6(c). LRth is related to the minimum candidate width of the marking on the image plane and is generally set to half of mi. (L, R) is set as the lane marking if the average intensity I within (L, R) exceeds IL and IR. Fig. 6(e) shows a successful detection case. In the unsuccessful case, as shown in Fig. 6(f), the process returns to Fig. 6(d) to pick up a new candidate region of markings. The new maximum gradient within (L, M) is used to replace GL if the mean intensity of LM is less than that of MR; otherwise, GR is similarly replaced. Updating GL or GR yields a new candidate region of the marking. Fig. 7 presents the flowchart of the lane-detection process.

Fig. 8. (a) Marking detection area of both sides of the lane at the initial state. (b) Six zones for the markings detection, starting from the bottom to the top.

As shown in Fig. 8(a), the possible regions of lane markings on the left and right sides are modeled by (8), with the coefficients lying in the valid ranges k ∈ [−1/600, 1/600], m0 ∈ [−tan(0.09), tan(0.09)], and b0 ∈ [−3.75, 3.75]. These two regions are divided into n zones for the lane-detection process. Fig. 8(b) shows an example with six zones. The initial possible range is searched zone by zone, whereas each zone is scanned row by row, from the bottom to the top. After the initial phase of lane detection, the next zone is located. The probable regions of lane markings in that zone are predicted using (16) and (18), in which ve is determined from the vi coordinate that is near the current zone. The ROI determination process is called "Specify ROI," which narrows down the search region by applying two definitions of the pROI parameter. Lane markings are detected under four conditions. First, when both the left- and right-side markings are detected, the lane width W is updated from mi to MW according to (19). In the second and third cases, only the left- or the right-side marking is detected, and pROI is set to SUB, whereas the other side of the marking point is estimated from the detected points by adding or subtracting the current lane width. Finally, if no marking is detected, pROI is set to MAIN.

The ROI is defined as follows.

1) pROI is SUB. In this case, the ROI at the current row vi is defined as

$$\mathrm{ROI} = [u_{i-1} - \lambda_{sub} \cdot m_i,\; u_{i-1} + \lambda_{sub} \cdot m_i]$$

where ui−1 is the abscissa that was detected in the last image row vi−1, and λsub is a preselected constant. Since the abscissa ui is the target to be detected, its probable region is specified in the neighborhood of the last detected ui−1 due to the continuity of the marking.

2) pROI is MAIN. Under this condition, the ROI is defined as

$$\mathrm{ROI} = [u_i - \lambda_{main} \cdot m_i,\; u_i + \lambda_{main} \cdot m_i]$$

where ui is the abscissa that corresponds to the current image row vi, and λmain is a preselected constant. The last abscissa ui−1 is not detected; therefore, ui is obtained using (16) and (18). λmain is greater than λsub, because ui−1 of the lane side is unknown. In this case, the detected marking data will be added to the road tendency fitting. A compact sketch of the per-row search within such an ROI is given after this list.
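The following sketch combines the per-row DLD transition test and the gradient-pair localization of Fig. 6(c)–(e) inside a given ROI. It is a simplified illustration: a 1-D intensity difference stands in for the 3 × 3 line detection mask, the thresholds are assumed values, and the fallback re-selection of Fig. 6(f) is omitted.

```python
import numpy as np

def detect_marking_on_row(intensity, roi, m_i, lr_th=None):
    """Scan one image row inside roi = (start, end): find a dark-light-dark point M,
    then locate the maximum-gradient pair (L, R) around it, per Fig. 6(c)-(e).
    Returns (L, R) pixel columns or None. Thresholds here are illustrative."""
    half = int(m_i // 2)                               # expected half marking width (pixels)
    lr_th = lr_th if lr_th is not None else half       # minimum candidate width
    grad = np.abs(np.diff(intensity.astype(float)))    # 1-D gradient stand-in for the mask
    for M in range(max(roi[0], half), min(roi[1], len(intensity) - half)):
        if intensity[M] > intensity[M - half] and intensity[M] > intensity[M + half]:
            L = M - half + int(np.argmax(grad[M - half:M]))        # left edge candidate
            R = M + 1 + int(np.argmax(grad[M + 1:M + half + 1]))   # right edge candidate
            # Accept if wide enough and brighter inside than at both edges.
            if R - L > lr_th and intensity[L:R + 1].mean() > max(intensity[L], intensity[R]):
                return L, R
    return None
```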

Note that a bumpy road surface, the vibrations associated with motion, and the changing inclination of the road will induce some errors. Therefore, the parameters mθ and W must be adjusted. By substituting (11) and (12) into (7), we obtain

$$v_m = C_{zy0} - C_{zy1} \cdot m_i \tag{20}$$

where Czy0 = ev·mθ and Czy1 = ev·H/(eu·W). The WLS method is used to estimate these two coefficients, and mθ and W are then available through the inverse calculation, thus calibrating the parameters online.


Fig. 9. (a)–(e) Gradation detection results in zones 1–5. (f) Final detection result.

Fig. 10. Lane-detection results. (a) Texts on the road. (b) Vehicle appears.

The data output to the lateral controller are the vehicle lateral offset and orientation with respect to the centerline of the detected lane. According to the lane model, which is formed as a polynomial with coefficients k, m0, and b0, these two outputs result in

$$\mathrm{Offset}(y) = k \cdot y^2 + m_0 \cdot y + b_0 \tag{21}$$

$$\mathrm{Orientation}(y) = 2 \cdot k \cdot y + m_0. \tag{22}$$

Fig. 9 presents the detection results zone by zone, and the black solid curve is derived from (9). The left and right blocks are the ROI for the detection in each zone. Fig. 9(a)–(e) shows the gradual detection results in zones 0–5. The final detection result is given in Fig. 9(f).

Remark: In each frame, the lane marking may not always appear in the ROI of all zones. Even if no lane markings are detected in the ROI of the bottom zone, the estimated curve can still be determined by the detected markings in the other zones.

Fig. 10 shows that the proposed algorithm can overcome the disturbances from on-road text and from a vehicle ahead, respectively. Other results under different weather conditions, i.e., sunny, cloudy, rainy, and night, will also be presented in Section V.

Fig. 11. Vehicle lateral dynamics with respect to road geometry.

IV. VEHICLE LATERAL CONTROL

A. Lateral Controller Design

The relationship between the lateral dynamics of the vehicle and the desired previewed navigation at a look-ahead distance Ld is plotted in Fig. 11. The valid amount of Ld is determined by our implemented vision system. The evolution of the measurements, which is governed by the previewed dynamics, can be described as

$$\dot{y}_{L_d} = v_y + L_d \cdot \eta + v_x \cdot \varepsilon_{L_d} \tag{23}$$

$$\dot{\varepsilon}_{L_d} = \eta - v_x \cdot \rho_{L_d} \tag{24}$$

where the parameters have been defined in the Nomenclature. The bicycle model (2) is combined with the previewed dynamics (23) and (24) to form the following linear state-space equation:

$$\dot{x} = Ax + Bu + Ew \tag{25}$$

with

$$A = \begin{bmatrix} a_1 & a_2 & 0 & 0 \\ a_3 & a_4 & 0 & 0 \\ 1 & L_d & 0 & v_x \\ 0 & 1 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} b_1 \\ b_2 \\ 0 \\ 0 \end{bmatrix}, \quad E = \begin{bmatrix} 0 \\ 0 \\ 0 \\ -v_x \end{bmatrix}$$

where the state vector is x = [vy, η, yLd, εLd]T, the control input is u = δf, and the road curvature is viewed as an exogenous disturbance w = ρLd of the system. The linear system in (25) is parameterized by the longitudinal vehicle speed vx. As vx increases, the poles of the system move toward the imaginary axis, reducing the stability. Notably, changing the look-ahead distance Ld does not affect the stability of the system. If Ld is chosen close to the front of the vehicle, then the damping of the zeros of system (25) drastically declines, and a high-gain controller drives the closed-loop poles toward these zeros, resulting in a poorly damped closed-loop system. However, Ld cannot be chosen beyond the reliable field of the vision system, because the image resolution at a far look-ahead distance is degraded such that the collected data include more errors. As a result of numerous experimental verifications, the value of Ld is chosen as 15 m.
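A small sketch that augments the two-state bicycle model of (2) with the preview states of (23) and (24) to form (25). It reuses the illustrative bicycle_model helper and placeholder parameters from the earlier sketch, not the vehicle's actual data.

```python
import numpy as np

def preview_model(v_x, L_d, A2, B2):
    """Augment the 2-state bicycle model (A2, B2) with the preview states
    y_Ld and eps_Ld of (23)-(24), giving (A, B, E) of (25)."""
    A = np.zeros((4, 4))
    A[:2, :2] = A2
    A[2, :] = [1.0, L_d, 0.0, v_x]   # y_Ld_dot = v_y + L_d*eta + v_x*eps_Ld
    A[3, 1] = 1.0                    # eps_Ld_dot = eta - v_x*rho_Ld
    B = np.vstack([B2, np.zeros((2, 1))])
    E = np.array([[0.0], [0.0], [0.0], [-v_x]])
    return A, B, E

# Example with the illustrative parameters used earlier and Ld = 15 m.
A, B, E = preview_model(v_x=60 / 3.6, L_d=15.0, A2=A2, B2=B2)
```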

The control objective for vehicle lane keeping is to regulate the offset at the look-ahead distance, yLd, to zero. Moreover, the controller is expected to ensure that the vehicle lateral acceleration does not exceed 0.4 g (g is 9.8 m/s2) during the control process so that smooth responses and passenger comfort can be obtained. Given the vehicle model in (25), state feedback control is naturally applied and is given as u = −Kfb x, Kfb ∈ ℝ^{1×4}. Here, the pole placement design approach is adopted, considering that the required control effort is related to how far the open-loop poles are moved by the feedback. The objective of pole placement is to specifically fix the undesirable aspects of the open-loop response while avoiding large increases in either bandwidth or effort as poles are moved [34]. Moreover, it typically allows smaller gains and, thus, smaller control efforts by moving poles that are near zeros rather than arbitrarily assigning all the poles. The closed-loop poles for a high-order (> 2) system can be chosen as a desired pair of dominant second-order poles, with the rest of the poles corresponding to sufficiently damped modes, so that the system will mimic a second-order response with a reasonable balance between system errors and control effort. The closed-loop bandwidth for the look-ahead lateral offset is chosen as 5.35 rad/s to mimic human responses [29]. With regard to the comfort requirement, the corresponding closed-loop poles are chosen to ensure that lateral acceleration content above 0.5 Hz will not be amplified during steering. Furthermore, pole selection can also be specified by the bandwidth requirement on the transfer function yLd(s)/ρLd(s), with the maximal allowable yLd for reasonable step changes of ρLd [11]. From computer simulations of the closed-loop system response to a step change in curvature, it is found that complex poles with a damping ratio ζ = 0.707 will meet the constraint on lateral acceleration. Therefore, we choose a conjugate pair of dominant poles at −1 ± 1j, with a natural frequency of 1.414 rad/s, for the closed-loop system, and the other two poles are kept similar to those of the original system. Notably, increasing the speed will reduce the stability of the closed-loop system, since the poles move closer to the imaginary axis. As a result, the feedback control is designed for the highest speed of interest.
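As an illustration of this pole-placement step, the sketch below computes a candidate Kfb with SciPy for the preview model built above. The dominant pole pair follows the text; the two remaining pole locations are assumptions standing in for "kept similar to the original system," which the paper does not state numerically.

```python
import numpy as np
from scipy.signal import place_poles

# Dominant pole pair from the paper; the remaining poles are placeholders
# for the well-damped fast modes (not given explicitly in the paper).
poles = np.array([-1 + 1j, -1 - 1j, -8.0, -9.0])

K_fb = place_poles(A, B, poles).gain_matrix   # state feedback u = -K_fb @ x
```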

Furthermore, the transport lag is also emphasized in the lateral controller design. The transport lag is caused by the SW motor, which arises while the desired command is sent to drive the actuator, and also includes the image processing delay. Additional phase lag is added over the range of frequencies, severely destabilizing the overall system. Thus, a pure transport lag element e−sτ is included in the designed lateral controller. The input can be described as u = e−sτ u0, with u0 = −Kfb x. By using the first-order Padé approximation

$$e^{-s\tau} = \frac{1 - \tau s/2}{1 + \tau s/2} \tag{26}$$

the input becomes

$$\dot{u} = \frac{2}{\tau}(u^0 - u) - \dot{u}^0 = \frac{2}{\tau}(u^0 - u) + K_{fb}(Ax + Bu) = K_{fb}\left(A - \frac{2}{\tau}I_n\right)x + \left(K_{fb}B - \frac{2}{\tau}I_m\right)u. \tag{27}$$

By combining the system (25) with (27), we obtain the augmented system

$$\begin{bmatrix} \dot{x} \\ \dot{u} \end{bmatrix} = \begin{bmatrix} A & B \\ K_{fb}\left(A - \frac{2}{\tau}I_n\right) & K_{fb}B - \frac{2}{\tau}I_m \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}. \tag{28}$$

The stability of the transport-lagged system is determined by the characteristic roots of the system matrix in (28); consequently, the feedback controller Kfb is designed to guarantee the stability of the closed-loop system (28) with a transport lag at the highest velocity (145 km/h in this paper).

Remark: As described in Section II-A, the transport lag comes from two latencies: 1) 0.04 s for the complete processing of the vision system until the measurements are available to the controller and 2) 0.52 s for the average duration from the command to the reaction of the servomotor. We choose a transport lag of 0.6 s in the stable controller design.
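A minimal sketch of the stability check implied by (28): form the augmented matrix with the first-order Padé lag and verify that all its eigenvalues lie in the open left half-plane. It assumes A, B, and K_fb from the previous sketches and uses τ = 0.6 s as in the Remark.

```python
import numpy as np

def augmented_with_lag(A, B, K_fb, tau):
    """System matrix of (28): preview model plus first-order Pade
    approximation of the transport lag e^{-s*tau}."""
    n = A.shape[0]
    top = np.hstack([A, B])
    bottom = np.hstack([K_fb @ (A - (2.0 / tau) * np.eye(n)),
                        K_fb @ B - (2.0 / tau) * np.eye(1)])
    return np.vstack([top, bottom])

A_cl = augmented_with_lag(A, B, K_fb, tau=0.6)
stable = np.all(np.linalg.eigvals(A_cl).real < 0)   # Hurwitz check of (28)
```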

B. FGS

Although the static feedback control strategy suffices to meet the requirements of vehicle lateral control, it is sensitive to the parameters of the system, such as the vehicle mass, cornering stiffness, and road curvature, which are reflected in the feedback signals. The desire to steer the vehicle in a more human fashion and to provide a smooth automated steering control process motivates the adoption of a fuzzy inference scheme as part of the lateral controller design. Based on fuzzy set theory, FGS is proposed to improve the lateral controller of the vehicle.

As presented in Fig. 12, FGS is designed to autotune the lateral controller. The kernel of the proposed FGS is the inference rule base, which constitutes a natural environment where engineering judgment and human knowledge can be applied to the vehicle steering controller. FGS supports more humanlike driving behavior during the process of keeping to the lane. The system mimics human driving more aggressively at low speeds and more gently at high speeds, even when the deviation between the vehicle and the centerline of the road is large. Accordingly, the linguistic input variables are the immediate velocity of the vehicle and the lateral offset from the centerline at the look-ahead distance. FGS yields the proper tuning gain based on the following rules:

$$i\text{th rule: If } v_x \text{ is } \tilde{A} \text{ and } y_{L_d} \text{ is } \tilde{B}, \text{ then } \Delta_i \text{ is } \tilde{C}. \tag{29}$$

Here, Ã, B̃, and C̃ are the corresponding linguistic terms, i.e.,

$$\tilde{A} = \{\mathrm{LOW}, \mathrm{MED}, \mathrm{HIGH}\}, \quad \tilde{B} = \{\mathrm{NB}, \mathrm{NS}, \mathrm{ZO}, \mathrm{PS}, \mathrm{PB}\}, \quad \tilde{C} = \{\mathrm{S}, \mathrm{M}, \mathrm{L}\}$$

where NB is negative big, NS is negative small, ZO is zero, PS is positive small, PB is positive big, S is small, M is medium, and L is large.

Fig. 12. Block diagram of the proposed controller/vehicle system.

Fig. 13. (a) Rule base for fuzzy gain scheduling. (b) Surface plot of fuzzy gain scheduling.

Fig. 14. Membership functions for (a) vx, (b) yLd, and (c) ∆fg.

Fig. 13(a) and (b) shows the rule base and the surface plot associated with FGS, respectively. The parameters of the membership functions for the antecedent and consequent expressions are manually tuned to ensure satisfactory steering performance. The membership function shapes chosen in FGS are trapezoidal for vx and triangular for yLd and ∆fg, as shown in Fig. 14. In the defuzzification strategy, the center-of-area (COA) method is adopted to determine the gain as follows:

$$\Delta_{fg} = \frac{\sum_{i=1}^{15} \mu_i \times \Delta_i}{\sum_{i=1}^{15} \mu_i} \tag{30}$$

where the ith rule firing strength is

$$\mu_i = \min\left(\mu_{\tilde{A}}(v_x),\; \mu_{\tilde{B}}(y_{L_d})\right).$$

Finally, the terminal front-wheel steering quantity δf is obtained by

$$\delta_f = -\Delta_{fg} \cdot K_{fb}\, x. \tag{31}$$
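The sketch below illustrates the FGS inference of (29)–(31) with min firing strengths and the weighted-average defuzzification of (30). It is a reduced example: only a 3 × 3 rule table (the paper uses 3 speed terms × 5 offset terms), singleton consequents instead of triangular output sets, and assumed membership breakpoints, since the tuned values are not given in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(0.0, min((x - a) / (b - a) if b != a else 1.0,
                        (c - x) / (c - b) if c != b else 1.0))

def fgs_gain(v_x, y_Ld, rules, mf_v, mf_y):
    """Inference of (30): rules maps (speed term, offset term) to a singleton
    tuning gain Delta_i; firing strength is the min of (29)."""
    num = den = 0.0
    for (sv, sy), delta_i in rules.items():
        mu = min(mf_v[sv](v_x), mf_y[sy](y_Ld))
        num += mu * delta_i
        den += mu
    return num / den if den > 0 else 1.0   # fall back to unity gain (assumption)

# Illustrative membership functions and rule consequents (assumed values).
mf_v = {"LOW": lambda v: tri(v, 0, 0, 60), "MED": lambda v: tri(v, 40, 75, 110),
        "HIGH": lambda v: tri(v, 90, 145, 145)}
mf_y = {"NB": lambda y: tri(y, -2, -2, -0.5), "ZO": lambda y: tri(y, -0.7, 0, 0.7),
        "PB": lambda y: tri(y, 0.5, 2, 2)}
rules = {("LOW", "NB"): 1.3, ("LOW", "ZO"): 1.0, ("LOW", "PB"): 1.3,
         ("MED", "NB"): 1.0, ("MED", "ZO"): 0.9, ("MED", "PB"): 1.0,
         ("HIGH", "NB"): 0.8, ("HIGH", "ZO"): 0.7, ("HIGH", "PB"): 0.8}

delta_fg = fgs_gain(v_x=90.0, y_Ld=0.4, rules=rules, mf_v=mf_v, mf_y=mf_y)
# Final steering command of (31): delta_f = -delta_fg * (K_fb @ x)
```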

Fig. 15. Response of the look-ahead lateral offset to the step change in curvature ahead.

Fig. 16. Equivalent block diagram for the single-point previewed pursuit controller/vehicle system.

Fig. 17. Frequency response characteristic for the controller/vehicle system from e to yLd. (a) Low velocity (about 60 km/h). (b) High velocity (about 110 km/h).

Fig. 18. Schematic of the lateral controller in TAIWAN iTS-1.

TABLE I

TABLE II. TESTING CONDITIONS OF DIFFERENT ENVIRONMENTAL SETS

When a vehicle steers on curves, the road curvature serves as an exogenous input to the previewed dynamics (23) and (24). In the straight-road (i.e., zero-curvature) case, both the look-ahead lateral offset and the heading angle will be regulated to zero. For the case of nonzero curvature, these two states will be stabilized to steady-state values, as shown in Fig. 11. Correct information on the road curvature is difficult to obtain in practice. An additional curvature estimator has been proposed for feedforward control to improve the transient behavior as the vehicle enters and exits curves [11]. By examining (23) and (24), changes in curvature ahead (i.e., at 15 m) can be anticipated from the varying lateral offset ahead. Therefore, FGS is capable of compensating for the effect of this unknown curvature information: if the lateral offset ahead of the vehicle increases, then the steering control will be increased. Fig. 15 shows that the yLd response with respect to a step change in ρLd (i.e., 300−1 m−1 within 3–14 s) is improved by FGS as compared with the pure feedback design. FGS thus possesses performance that is comparable with the curvature feedforward approach in [11].

C. Analysis for the Lateral Controller With FGS

To substantiate the performance of the steering control, the so-called crossover model principle is applied to examine the utility of the designed lateral controller. This principle has been empirically demonstrated to be applicable to drivers' steering, and any good driver model is expected to yield results that conform to this principle [29]. Therefore, our proposed lateral controller is also expected to exhibit behavior consistent with the crossover model principle. By reducing Fig. 12 to a single-loop preview pursuit of the lateral position, as shown in Fig. 16, Gv(s) = (sIn − A)−1B represents the controlled-element transfer matrix of the vehicle dynamics, and δf(t), i.e., the current control, is related to the feedback control δf0(t) through FGS and a transport lag. The straight-line regulatory control test for the controller/vehicle system in Fig. 16 reveals the direct-loop frequency response, as shown in Fig. 17, which relates the output previewed lateral offset yLd to the error e. This system exhibits a slope of around −20 dB/dec for ω ≈ ωc at both low and high vehicle velocities, i.e., the open-loop transfer function of the controller/vehicle system can be approximated as ωc/s around the crossover frequency ωc. This result is consistent with the crossover model principle of the human operator. In addition, the included transport lag can be viewed as an "effective" time delay, similar to the inherent limitation of human sensing, processing, and actuation in steering, such that the open-loop ratio of the system can be described by the transfer function (ωc/s)e−sτ.

In Fig. 17, it is seen not only that the frequency response in the vicinity of the crossover frequency ωc is as designed but also that the crossover frequency at a high velocity is lower than that at a low velocity. This fact is also consistent with the experimental results regarding the crossover model principle [29], which accounts for the lower crossover frequency and the greater difficulty of the driving task at a higher velocity. Skilled drivers employ smoother steering behavior to avoid excessive responses, particularly at high velocities, regardless of the characteristics of the vehicle. In Fig. 17, the direct-loop response exhibits a smaller high-frequency magnitude at a high velocity than at a low velocity. Table I shows that the crossover frequency is higher, particularly at a high velocity, without FGS compensation. This result certainly meets the original intent of the FGS compensation design within the lateral controller. As the crossover frequency ωc increases, the bandwidth of the closed-loop system increases, thereby allowing unexpected noise that may destabilize the system.

Fig. 19. (a) Experimental results without the FGS on the coast-down test track (i.e., a straight lane with a flat surface). (b) Experimental results with the FGS on the coast-down test track (i.e., a straight lane with a flat surface). (c) Experimental results with the FGS on the noise, vibration, and harshness surface test track (i.e., a straight lane with an irregular surface, including a single-side lane marking segment).

V. EXPERIMENTAL RESULTS

A. Vehicle Overview

The following list of the major components completes the description of the equipment in TAIWAN iTS-1.

• MABX: a real-time system for performing fast function prototyping from scratch and operating with no need for user intervention;

• Pentium computer: a processing system that is equipped with an AMD 1.53-GHz CPU and a 512-MB random access memory (RAM);

• CCD video capturing system: a monochrome CCD camera and a video frame grabber;

• In-vehicle sensors: manifold sensors with which the platform is equipped, including a yaw rate sensor and a lateral accelerometer, which are installed at the CG of the vehicle; the vehicle speed is taken directly from the vehicle speedometer, and an SW angle sensor is added onto the steering axis;

• SW actuating motor: an AC servo motor that is added to the steering shaft, where the SW is driven through a belt;

• Power system: two inverters and several vehicular batteries that are externally equipped to provide electric power to the MABX, the programmed computer, and the interface circuit;

• Cruise controller: the controller that was developed and installed by the China Motor Company, which keeps the vehicle at a fixed speed above 40 km/h.

Remark: The master of the lateral controller is the MABX, with designated responsibilities that involve specific computations for system operation. The MABX consists of two boards (i.e., the DS1401 Base Board and the DS1501 I/O Board) in a milled aluminum box with an enclosure size of 220 × 225 × 50 mm. The MABX has an IBM PPC 750FX processor with memories for the storage of application programs and for transmission between the MABX and a personal computer (PC)/notebook. Signal generation/measurement available at the MABX can be done through the standard module of the CAN/Serial interface. With interfaces to the corresponding bus, intelligent sensors (e.g., the vision system) and actuators can be integrated during the prototyping process such that real-time systems can be developed. In addition, the I/O functionalities and signal conditioning can be adopted according to the designer's specifications. More information about applications of the MABX is discussed in [35].


Fig. 20. (a) Routes (overlaid on aerial photography) in Taiwan that were used in the evaluation of the automated lane-keeping system. Routes A and B are referred to as Highways A and B, respectively, in Table II, which was used in the evaluation. (b) History of the experimental results with the FGS on Highway B, with curvatures varying from 0 to 500−1 m−1.

The schematic of the lateral controller that is built in the MABX, with respect to the vision system, in-vehicle sensors, and the steering actuator, is illustrated in Fig. 18. The MABX runs the vehicle control and the transmission from the vision system and in-vehicle sensors and can be reprogrammed by a notebook that is temporarily connected for program download, data analysis, and calibration. Initially, the lateral controller is constructed as a Simulink model in the notebook. With the dSPACE software package, i.e., the real-time interface (RTI), the Simulink model can be transformed to build real-time code and to download and execute this code on the MABX hardware. The linkage between the MABX and the notebook is through a DS815 transmitter card (i.e., a PCMCIA type-2 card). The desired command for the SW of the vehicle is generated in the form of a PWM signal from the digital output port of the MABX to drive the SW actuating motor.

B. Road Test Results

To provide a quantitative perspective of our automated lane-keeping system, the system implemented on TAIWAN iTS-1 has been tested on several roads under different weather and lighting conditions. The missions of automated lane keeping with varying velocities on different test roads are summarized in Table II. Conditions that did not occur (e.g., rain) are denoted as N/A.

The automated lane-keeping system was initially evaluated on the test track at ARTC, and regulation experiments were undertaken to examine the stability of the overall system at various speeds. Two road conditions at ARTC were used in the regulation experiments. First, the system was tested without the FGS compensation. In this case, the automated steering control oscillated when the velocity was increased to around 70 km/h. When the vehicle continued accelerating, the steering action became untamed, further degrading the system behavior. Fig. 19(a) shows sampled experimental results under these testing conditions. With FGS compensation, the vehicle could automatically be steered in the lane while the velocity was increased to around 145 km/h, as shown in Fig. 19(b). The vehicle also steered well on a test track with a harsh, vibration-inducing surface at a velocity of around 120 km/h, as shown in Fig. 19(c). It is worth mentioning that the vision system can estimate the reference centerline even if a segment of the track has only a single-side lane marking. The performance of lane keeping is indicated by the lateral offset at the look-ahead distance (i.e., Ld = 15 m) that was measured by the vision system. Thus, the vehicle correctly tracked the centerline of the roads, despite varying vehicle velocities and the vibration-inducing road surface. In addition, a night test of the automated lane-keeping system has also been carried out with a maximum velocity of 90 km/h, without any street lamps and with only the vehicle's front lamps.

In Table II, Highways A (West and East Expressway No. 68) and B (Highway No. 3) refer to routes A and B, respectively, as shown in Fig. 20(a); the maximum road curvature is 300−1 m−1 and 500−1 m−1 for Highways A and B, respectively, and the legal velocity ranges from 60 to 100 km/h. With regard to the experiments on highway roads, the vehicle successfully tracked the road for several tens of kilometers. Fig. 20(b) depicts a sample of experimental results obtained from Highway B. It is clear that the lane-keeping task not only was accurately performed but also yielded a ride of high quality, since the lateral acceleration stayed inside the specified range (i.e., 0.4 g). The resolution of the heading angle provided by the vision system is 1°. During stable steering, the values of the heading angle did not rise very much, even though the vehicle was turning. Both correct lane detection and lane keeping were achieved under traffic congestion, which corresponds to the period of varying velocity (from 50 to 100 s) in Fig. 20(b). As compared with Fig. 19(b) and (c), which show tests on straight roads, the experimental results in Fig. 20(b) were obtained on the highway at velocities of 80–60–80 km/h with changing curvatures, such that the look-ahead lateral offset would not converge to a steady-state value.

It is interesting to note that a vehicle in front, despite being within the previewed range of the vision system, does not affect the correctness of lane detection, since the road width is reasonably larger than that of vehicles [as shown in Fig. 10(b)]. However, an erroneous detection could arise when the color of the preceding vehicle is similar to that of the lane marking in the gray-level space. Fortunately, this situation can be avoided under the safety-headway-distance strategy, since the headway is usually much longer than the previewed range (i.e., Ld = 15 m). Shadows that cross the lane marking are not serious problems for the lane-detection functionality; however, there are some limitations of the vision system with regard to strong sunshine and rather dark nights. The reason is primarily the direct incidence of sun rays onto the camera lens; the other reason is the heavy light reflection from the asphalt road surface.

Fig. 21. Experimental results under several environmental conditions on Highway B. (a) Inside a tunnel. (b) Sunny. (c) Rain. (d) Under an overpass. (e) Cloudy. (f) Night.

Typical highway conditions, such as text on the road surface, traffic congestion, etc., existed throughout these practical tests. In addition, both Highways A and B consist of solid-line and segmented-line markers as well as a mixture of segmented lines and circular reflectors. More specifically, the lane-keeping missions have also been evaluated under different environments (i.e., inside a tunnel and under an overpass) and weather conditions (i.e., sunny, cloudy, rain, and night). As exhibited in Fig. 21, the lane-recognition results are grabbed from the monitor of the vision system. The detection rate of our vision system is summarized in Table III for these various conditions. The lane-marking detection rate is more than 96%, which supports the claim that the system's functions can withstand practical circumstances.

Some of our demonstration videos that show the system’s capability in lane keeping were exhibited at the 2005 IEEE Intelligent Vehicles Symposium, which was held in Las Vegas, NV. The complete set of video files that present the exper-imental results for TAIWAN iTS-1 can be retrieved from http://140.113.150.201/New_Home/IEEE_ITS.htm.

VI. CONCLUSION

TABLE III. DETECTION RATE UNDER DIFFERENT CONDITIONS ON HIGHWAYS

This paper has proposed an automated lane-keeping system that was implemented on a commercial vehicle, i.e., TAIWAN iTS-1. It integrates heterogeneous systems, including a real-time vision system, a lateral controller, in-vehicle sensors, and an SW actuating motor. Based on gray-level image processing, the vision system can deal with complicated road environments to detect the lane markings and provide the real-time vehicle location in its lane. Additionally, to exhibit humanlike driving behavior, FGS has been proposed to provide satisfactory performance in the steering control. Numerous road experiments have demonstrated our system's effectiveness; the test results at the ARTC and on highways reveal our system's validity for the lane-keeping mission at various vehicle velocities. The achieved ride quality is also comparable to that provided by a typical human driver. Moreover, the practical results from the experimental tests support the system's feasibility under different weather conditions. The already-implemented lane-detection module can be combined with information provided by other sensors, e.g., a GPS receiver and a range finder. Furthermore, other driving tasks, such as lane changing and car following, are extended functions of our automated lane-keeping system in intelligent vehicles, and related work is ongoing.

ACKNOWLEDGMENT

The authors would like to thank the reviewers and the Asso-ciate Editor for their great help and detailed comments, which considerably improved the presentation of this paper.

REFERENCES

[1] S. Huang, W. Ren, and S. C. Chan, "Design and performance evaluation of mixed manual and automated control traffic," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 30, no. 6, pp. 661–673, Nov. 2000.

[2] A. Vahidi and A. Eskandarian, "Research advances in intelligent collision avoidance and adaptive cruise control," IEEE Trans. Intell. Transp. Syst., vol. 4, no. 3, pp. 143–153, Sep. 2003.

[3] D. Pomerleau, "RALPH: Rapidly adapting lateral position handler," in Proc. IEEE Intell. Veh. Symp., 1995, pp. 506–511.

[4] D. Pomerleau, "Visibility estimation from a moving vehicle using the RALPH vision system," in Proc. IEEE Intell. Transp. Syst., 1997, pp. 906–911.

[5] M. Lutzeler and E. D. Dickmanns, "Road recognition with MarVEye," in Proc. IEEE Intell. Veh. Symp., Stuttgart, Germany, 1998, pp. 112–117.

[6] M. Bertozzi and A. Broggi, "GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Trans. Image Process., vol. 7, no. 1, pp. 62–81, Jan. 1998.

[7] A. Broggi, M. Bertozzi, C. G. Lo Bianco, and A. Piazzi, "Visual perception of obstacles and vehicles for platooning," IEEE Trans. Intell. Transp. Syst., vol. 1, no. 3, pp. 164–176, Sep. 2000.

[8] M. Bertozzi, A. Broggi, and A. Fascioli, "Vision-based intelligent vehicles: State of the art and perspectives," Robot. Auton. Syst., vol. 32, no. 1, pp. 1–16, Jul. 2000.

[9] A. Broggi, M. Bertozzi, A. Fascioli, and G. Conte, Automatic Vehicle Guidance: The Experience of the ARGO Autonomous Vehicle. Singapore: World Scientific, 1999.

[10] C. J. Taylor, J. Malik, and J. Weber, "A real-time approach to stereopsis and lane-finding," in Proc. IEEE Intell. Veh. Symp., 1996, pp. 207–212.

[11] C. J. Taylor, J. Kosecka, R. Blasi, and J. Malik, "A comparative study of vision-based lateral control strategies for autonomous highway driving," Int. J. Rob. Res., vol. 18, no. 5, pp. 442–453, May 1999.

[12] C. Hatipoglu, Ü. Özgüner, and K. A. Redmill, "Automated lane change controller design," IEEE Trans. Intell. Transp. Syst., vol. 4, no. 1, pp. 13–22, Mar. 2003.

[13] E. D. Dickmanns and B. D. Mysliwetz, "Recursive 3-D road and relative ego-state recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 199–213, Feb. 1992.

[14] S. Tsugawa, "Vision-based vehicles in Japan: Machine vision systems and driving control systems," IEEE Trans. Ind. Electron., vol. 41, no. 4, pp. 398–405, Aug. 1994.

[15] D.-C. Liaw, H.-H. Chiang, and T.-T. Lee, "Elucidating vehicle lateral dynamics using a bifurcation analysis," IEEE Trans. Intell. Transp. Syst., vol. 8, no. 2, pp. 195–207, Jun. 2007.

[16] R. Chapuis, J. Gallice, F. Jurie, and J. Alizon, "Real time road mark following," Signal Process., vol. 24, no. 3, pp. 331–343, Dec. 1991.

[17] R. Chapuis, R. Aufrere, and F. Chausse, "Accurate road following and reconstruction by computer vision," IEEE Trans. Intell. Transp. Syst., vol. 3, no. 4, pp. 261–270, Dec. 2002.

[18] R. Labayrade, J. Douret, and D. Aubert, "A multi-model lane detector that handles road singularities," in Proc. IEEE Intell. Transp. Syst., Sep. 2006, pp. 1143–1148.

[19] Z. Kim, "Realtime lane tracking of curved local road," in Proc. IEEE Intell. Transp. Syst., Sep. 2006, pp. 1149–1155.

[20] B. F. Wu, C. J. Chen, C. C. Chiu, and T. C. Lai, "A real-time robust lane detection approach for autonomous vehicle environment," in Proc. 6th IASTED Int. Conf. Signal Image Process., Honolulu, HI, Aug. 2004, pp. 518–523.

[21] C. J. Chen, "The study of the real-time image processing approaches to intelligent transportation systems," Ph.D. dissertation, National Chiao Tung Univ., Hsinchu, Taiwan, R.O.C., Oct. 2006.

[22] M. A. Sotelo, F. J. Rodriguez, and L. Magdalena, "VIRTUOUS: Vision-based road transportation for unmanned operation on urban-like scenarios," IEEE Trans. Intell. Transp. Syst., vol. 5, no. 2, pp. 69–83, Jun. 2004.

[23] Y. U. Yim and S. Y. Oh, "Three-feature-based automatic lane detection algorithm (TFALDA) for autonomous driving," IEEE Trans. Intell. Transp. Syst., vol. 4, no. 4, pp. 219–225, Dec. 2003.

[24] A. Giachetti, M. Campani, and V. Torre, "The use of optical flow for road navigation," IEEE Trans. Robot. Autom., vol. 14, no. 1, pp. 34–48, Feb. 1998.

[25] L. Giubbolini, "A multistatic microwave radar sensor for short range anticollision warning," IEEE Trans. Veh. Technol., vol. 49, no. 6, pp. 2270–2275, Nov. 2000.

[26] T. Hessburg and M. Tomizuka, "Fuzzy logic control for lateral vehicle guidance," IEEE Control Syst. Mag., vol. 14, no. 4, pp. 55–63, Aug. 1994.

[27] A. Stentz, M. Hebert, and C. Thorpe, Intelligent Unmanned Ground Vehicles: Autonomous Navigation Research at Carnegie Mellon. Norwell, MA: Kluwer, 1997.

[28] R. Rajamani et al., "Demonstration of integrated longitudinal and lateral control for the operation of automated vehicles in platoons," IEEE Trans. Control Syst. Technol., vol. 8, no. 4, pp. 695–708, Jul. 2000.

[29] R. A. Hess and A. Modjtahedzadeh, "A control theoretic model of driver steering behavior," IEEE Control Syst. Mag., vol. 10, no. 5, pp. 3–8, Aug. 1990.

[30] S.-J. Wu, H.-H. Chiang, J.-W. Perng, T.-T. Lee, and C.-J. Chen, "The automated lane-keeping design for an intelligent vehicle," in Proc. IEEE

Intell. Veh. Symp., Las Vegas, NV, Jun. 2005, pp. 508–513.

[31] J. Ackermann, “Robust control prevents car skidding,” IEEE Control Syst.

Mag., vol. 17, no. 3, pp. 23–31, Jun. 1997.

[32] W. H. Crouse and D. L. Anglin, Automotive Mechanics. New York: McGraw-Hill, 1993.

[33] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.

[34] G. F. Franklin, J. D. Powell, and A. Emami-Naeini, Feedback Control of

Dynamic Systems, 3rd ed. Reading, MA: Addison-Wesley, 1994.

[35] [Online]. Available: http://www.dspace.com/ww/en/pub/home/products/ hw/micautob.cfm


Shinq-Jen Wu received the B.S. degree in chemical engineering from the National Taiwan University, Taipei, Taiwan, R.O.C., in 1986, the M.S. degree in chemical engineering from the National Tsing-Hua University, Hsinchu, Taiwan, in 1989, the M.S. degree in electrical engineering from the University of California, Los Angeles, in 1994, and the Ph.D. degree in electrical and control engineering from the National Chiao Tung University, Hsinchu, in 2000.

From September 1989 to July 1990, she was with the Laboratory for Simulation and Control Technology, Chemical Engineering Division, Industrial Technology Research Institute, Hsinchu. In 1991, she joined the Chemical Engineering Department, Kao-Yuan Junior College of Technology and Commerce, Kaohsiung, Taiwan. From 1995 to 1996, she was an Engineer with the Integration Engineering Department, Macronix International Company Ltd., Hsinchu. She is currently with the Department of Electrical Engineering, Da-Yeh University, Changhua, Taiwan. She is the Editor of Advances in Fuzzy Sets and Systems (Pushpa). Her research interests include ergonomics-based smart cars, advanced vehicle control and safety systems, Petri net modeling for cancer mechanisms, robust identification of genetic networks, soft sensors for online tuning, soft-computation-based protein structure prediction, VLSI process technology, optimal fuzzy control/tracking, optimal fuzzy estimation, and soft-computation modeling techniques.

Dr. Wu is a member of the Phi Tau Phi Scholastic Honor Society. Her name is included in the 2006 Asian Admirable Achievers, the 2007 Asia Pacific Who’s Who, and the 2008–2009 Marquis Who’s Who in Science and Engineering (tenth anniversary edition).

Hsin-Han Chiang (S’02–M’03) was born in Kaohsiung, Taiwan, R.O.C., in 1979. He received the B.S. and Ph.D. degrees in electrical and control engineering from the National Chiao Tung University, Hsinchu, Taiwan, in 2001 and 2008, respectively.

He is currently a Postdoctoral Researcher with the Department of Electrical Engineering, National Taipei University of Technology (NTUT), Taipei, Taiwan. His research interests include intelligent systems and control, fuzzy systems and control, automated vehicle control, vehicle-driver systems, and intelligent transportation systems.

Jau-Woei Perng (S’02–M’04) was born in Hsinchu, Taiwan, R.O.C., in 1973. He received the B.S. and M.S. degrees in electrical engineering from the Yuan Ze University, Chungli, Taiwan, in 1995 and 1997, respectively, and the Ph.D. degree in electrical and control engineering from the National Chiao Tung University (NCTU), Hsinchu, in 2003.

From 2003 to 2008, he was a Research Assistant Professor with the Department of Electrical and Control Engineering, NCTU. He is currently an Assistant Professor with the Department of Mechanical and Electromechanical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan. His research interests include robust control, nonlinear control, fuzzy logic control, neural networks, systems engineering, and intelligent vehicle control.

Chao-Jung Chen was born in Taoyuan, Taiwan, R.O.C., in 1977. He received the B.S. and M.S. degrees from Huafan University, Taipei, Taiwan, in 1999 and 2001, respectively, and the Ph.D. degree in electrical and control engineering from the National Chiao Tung University (NCTU), Hsinchu, Taiwan, in 2006.

He is currently a Research Assistant Professor with the Department of Electrical and Control Engineering, NCTU. His research interests include real-time image processing, embedded system development, and intelligent transportation systems.

Bing-Fei Wu (S’89–M’92–SM’02) was born in Taipei, Taiwan, R.O.C., in 1959. He received the B.S. and M.S. degrees in control engineering from the National Chiao Tung University (NCTU), Hsinchu, Taiwan, in 1981 and 1983, respectively, and the Ph.D. degree in electrical engineering from the University of Southern California, Los Angeles, in 1992. Since 1992, he has been with the Department of Electrical and Control Engineering, NCTU, where he is currently a Professor. He has been involved in research on intelligent transportation systems for many years and leads the research team that developed the first Taiwan smart car, i.e., TAIWAN iTS-1, with autonomous driving and an active safety system. His research interests include vision-based vehicle driving safety, intelligent vehicle control, multimedia signal analysis, embedded systems, and chip design.

Prof. Wu is the Founder and was the Chair of the IEEE Systems, Man, and Cybernetics Society Taipei Chapter in 2003. From 1999 to 2000, he was the Director of the Control Technology of the Consumer Electronics Research Group, Automatic Control Section, National Science Council (NSC), Taiwan. As an active industry consultant, he was also involved in the chip design and applications of flash memory controllers and 3C consumer electronics in multimedia systems, for which he received the Best Industry–Academics Cooperation Research Award from the Ministry of Education in 2003. He received the Xerox-Fujitsu Academic Research Award in 2007, the Distinguished Engineering Professor Award from the Chinese Institute of Engineers in 2002, the Outstanding Information Technology Elite Award from the Taiwan Government in 2003, the First Prize in the TI China–Taiwan DSP Design Contest in 2006, the Outstanding Research Award from NCTU in 2004, the Golden Acer Dragon Thesis Awards from the Acer Foundation in 1998 and 2003, the First Prize in the We Win (Win by Entrepreneurship and Work with Innovation and Networking) Competition from the Industrial Bank of Taiwan in 2003, and the Silver Award of the Technology Innovation Competition from the Advantech Foundation in 2003.

Tsu-Tian Lee (M’87–SM’89–F’97) was born in Taipei, Taiwan, R.O.C., in 1949. He received the B.S. degree in control engineering from the National Chiao Tung University (NCTU), Hsinchu, Taiwan, in 1970 and the M.S. and Ph.D. degrees in electrical engineering from the University of Oklahoma, Norman, in 1972 and 1975, respectively.

In 1975, he was an Associate Professor with the Department of Control Engineering, NCTU, where he became a Professor and the Chairman in 1978. In 1981, he became a Professor and the Director of the Institute of Control Engineering, NCTU. In 1986, he was a Visiting Professor with the University of Kentucky, Lexington, where he became a Full Professor of electrical engineering in 1987. In 1990, he was a Professor and the Chairman of the Department of Electrical Engineering, National Taiwan University of Science and Technology (NTUST), Taipei, where he became a Professor and the Dean of the Office of Research and Development in 1998. Since 2000, he has been with the Department of Electrical and Control Engineering, NCTU, where he is currently a Chair Professor. Since 2004, he has been with the Department of Electrical Engineering, National Taipei University of Technology, where he is currently the President.

Prof. Lee is a Fellow of the Institution of Electrical Engineers and the New York Academy of Sciences. He received the Distinguished Research Award from the National Science Council, R.O.C., from 1991 to 1998, the Academic Achievement Award in Engineering and Applied Science from the Ministry of Education, R.O.C., in 1997, the National Endowed Chair from the Ministry of Education, R.O.C., in 2003, and the TECO Science and Technology Award from the TECO Technology Foundation in 2003. He has been a member of the technical program and advisory committees of many IEEE-sponsored international conferences. He is currently the Vice President of Membership, a Member of the Board of Governors, and the Newsletter Editor of the IEEE Systems, Man, and Cybernetics Society.

