
The radar cross section (RCS) of a vehicle is the key information used in vehicle classification and speed estimation. Figure 3.1 shows a sample RCS signal of a car received with the installation depicted in Figure 3.1(a). The area enclosed by the dashed line is the detection area of the radar detector. The profile of a vehicle signal resembles a mountain, and different vehicles create differently shaped mountains. The vehicle classifier extracts features from these profiles and classifies vehicles accordingly.

The speed estimator likewise identifies features in the profiles and calculates the vehicle speed. Vehicle RCS is influenced by radar height and angle, radar distance from the first lane, vehicle speed, vehicle shape and vehicle distance from the radar. Most of these factors are fixed only once the radar sensor installation is complete. In other words, the vehicle profiles change completely whenever the installation environment is adjusted. This is a constraint for a supervised classifier, which must be retrained for each new installation. Generally, traffic managers want sensor setup to disrupt traffic conditions as little as possible, meaning that the setup time must be minimized. The setup time in turn limits the learning time and the amount of learning data available to a classifier.

If a trained classifier is to be provided, the learning data must be gathered during setup. A short setup time results in a skewed distribution of vehicle types: the number of cars may be large while trucks are few. This forms the second constraint: short training time and skewed training data.

Figure 3.1 (a) A photograph of a vehicle passing through the detection area of a radar detector. (b) The spectrogram of the vehicle shown in (a).

Figure 3.2 presents a flowchart of an algorithm designed for these two constraints. The rectangles enclosed by a dashed line comprise four major phases: signal processing, calibration, learning, and 'classification and speed estimation'. After the radar signal is retrieved, a high-pass filter removes background clutter signals. A fast Fourier transform then yields the range profiles of vehicles on the lanes, and a clutter-map constant false alarm rate (CFAR) threshold is used to detect the presence of vehicles. If calibration is needed, the video calibrating system is used to calibrate the virtual loop lengths. Once calibration is finished, the vehicle profiles are compensated by vehicle range so that vehicles produce the same signal gain in different lanes. The next step is to extract nine features from the compensated vehicle profile. If training has not yet been performed, these features are saved in the vehicle training database, together with the vehicle category and length output by the video recognition system. Once the number of recorded vehicles exceeds a threshold, SVM and SVR complete the learning step. After learning, the SVM uses the vehicle features to classify the vehicle category; finally, the SVR predicts the vehicle length and the algorithm outputs the vehicle speed. The details of the algorithm are presented in the following subsections. The pseudocode of the algorithm is as follows.

Figure 3.2 The flowchart of the vehicle detection algorithm.


void Vehicle_classifier_and_speed_estimation_algorithm()
begin
    while true
        Signal_processing();
        if need calibrating
            Calibrating();
        endif
        if vehicle < n
            Feature_extracting();
        endif
        if need training
            Learning();
        endif
        if training done
            Vehicle_classification_and_speed_estimation();
        endif
    endwhile
end

void Signal_processing()
begin
    retrieve signal from system;
    apply high pass filter;
    do fast Fourier transform;
    find vehicle profile;
end

void Calibrating()
begin
    for each lane of street
        check vehicle in/out by vehicle profile and clutter-map CFAR threshold
        if vehicle-in
            capture vehicle-in image from video
        endif
        if vehicle-out
            capture vehicle-out image from video
            compute virtual loop length by vehicle-in/out images
            classify vehicle category by images
            compute vehicle length by images
            compute speed
            save above results into training database
        endif
    endfor
end

void Feature_extracting()
begin
    if vehicle-out
        compute energy of vehicle profile
        compute square energy
        compute sum, maximal, mean and mean square error of vehicle magnitude profile
        compute vibration of vehicle profile
        compute square vibration
        save all features into database
    endif
end

void Learning()
begin
    retrieve vehicle features from database
    retrieve vehicle length, speed, type, and loop length from database
    do SVM training
    do SVR regression
end

void Vehicle_classification_and_speed_estimation()
begin
    do vehicle classification by SVM
    do vehicle length prediction by SVR
    estimate vehicle speed
end

Signal processing

Most of the signal processing is performed during this phase. A discrete signal frame x_t[n] is retrieved in the time domain during pulse interval t. Each discrete signal frame has 128 points (n = 1..128), and there are a total of 1500 signal frames per second (pulse repetition frequency = 1500). Since noise and background clutter disturb the vehicle echo signals, a simple high-pass filter H(z) = 1 − z^−1 is used to cancel the background clutter. The filtered signal y_t[n] is given in Eq. (3.1).

y_t[n] = x_t[n] − x_{t−1}[n]   (3.1)
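As a minimal sketch, the first-difference filter of Eq. (3.1) can be applied across consecutive pulses with NumPy; the frame shape (128 range samples per pulse, 1500 pulses per second) follows the description above, and the function name is illustrative:

```python
import numpy as np

def highpass_clutter_filter(frames):
    """Apply H(z) = 1 - z^-1 along the pulse (time) axis:
    y_t[n] = x_t[n] - x_{t-1}[n]."""
    frames = np.asarray(frames, dtype=float)
    # each output frame is the difference of two consecutive input frames,
    # so static background clutter (identical from pulse to pulse) cancels
    return frames[1:] - frames[:-1]

# one second of simulated raw frames: 1500 pulses x 128 range samples,
# containing only static clutter (the same profile every pulse)
pulses = np.tile(np.linspace(0.0, 1.0, 128), (1500, 1))
filtered = highpass_clutter_filter(pulses)
print(filtered.shape)            # (1499, 128)
print(np.allclose(filtered, 0))  # True: static clutter is removed
```

A stationary background vanishes entirely, while a moving vehicle, whose echo changes from pulse to pulse, survives the filter.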

Furthermore, the high-pass filter also emphasizes vehicle motion. Since a high magnitude at certain frequencies indicates that vehicles are present on certain lanes, a fast Fourier transform (FFT) is applied to y_t[n] to obtain the frequency-domain data Y_t[n]. That is, when a vehicle is present at a distance of 3n meters at time t, |Y_t[n]| is greater than some threshold. To avoid false alarms of vehicle presence, the clutter-map constant false alarm rate (CFAR) [44] technique is adopted. The basic characteristic of clutter-map CFAR is that the false alarm probability remains approximately constant in clutter through the use of a dynamic threshold; vehicles whose echo power exceeds the threshold can still be detected. Eq. (3.2) gives the clutter-map CFAR threshold for range bin n during pulse t.

T_t[n] = α( γ·Ȳ_{t−1}[n] + (1 − γ)·|Y_t[n]| )   (3.2)

where α = 2, γ = 0.9, and Ȳ_{t−1}[n] is the clutter-map average of range bin n accumulated up to pulse t−1 by the same exponential weighting.

The final step of signal processing is to collect the vehicle profile V_t[m], taken at the m-th range bin Y_t[m] during the interval in which a vehicle is present in the detection area. All classification methods are based on the vehicle profile, from which features are extracted. Eq. (3.3) defines the vehicle signal profile. The magnitude of the m-th range bin |Y_t[m]| is multiplied by the k-th power of the range frequency f_m to compensate for the decay of received power with range.

V_t[m] = |Y_t[m]| × f_m^k   (3.3)

where T1 < t < T2, and T1 and T2 are the first and last detection times of a vehicle passing through the radar detection area.
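The range compensation of Eq. (3.3) can be sketched as follows. The exponent k and the mapping from bin index m to range frequency f_m are not fixed in this excerpt, so the choices below (f_m proportional to m, k = 2) are illustrative assumptions:

```python
import numpy as np

def compensate_profile(Y_mag, k=2.0):
    """Y_mag: (num_pulses, num_range_bins) array of |Y_t[m]|.
    Returns V_t[m] = |Y_t[m]| * f_m**k, with f_m taken here as the bin
    index m (an assumed proportionality; the true f_m depends on the radar)."""
    num_bins = Y_mag.shape[1]
    f = np.arange(1, num_bins + 1, dtype=float)  # assumed f_m proportional to m
    return Y_mag * f ** k

# a vehicle at a far range bin is boosted more than one at a near bin,
# compensating for the power decay over range
Y = np.ones((4, 8))
V = compensate_profile(Y)
print(V[0, 0], V[0, 7])  # 1.0 64.0
```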

Feature Extraction

Nine features are extracted from the vehicle profile, most of which are based on the physical characteristics of the vehicle. First, the energy of the vehicle profile is given in Eq. (3.4). A large vehicle implies a large RCS, which in turn means high energy; the square energy is used to emphasize this characteristic. Other features are obtained from statistical parameters of the vehicle magnitude profile, including the sum, maximum, mean and mean square error of the elements of V_t[m].

Energy = Σ_{t=T1}^{T2} V_t(m)   (3.4)

Another physical phenomenon of vehicles is the vibration of the vehicle profile.

Small vehicles have low vibration while large vehicles have high vibration. Eq. (3.5) calculates the vehicle vibration. To increase the weighting of this characteristic, the square of the vibration is also used. Vibration is analogous to mathematical differentiation, while energy is analogous to mathematical integration. The features of each vehicle profile form a point in the feature space.

Vibration = Σ_{t=T1+1}^{T2} |V_t(m) − V_{t−1}(m)|   (3.5)
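Taken together, the profile features above can be sketched as follows. The text enumerates energy, square energy, sum, maximum, mean, mean square error, vibration and square vibration explicitly (the ninth feature is not named in this excerpt), and the feature names here are illustrative:

```python
import numpy as np

def extract_features(profile):
    """profile: 1-D array of V_t values for one vehicle (t = T1..T2)."""
    profile = np.asarray(profile, dtype=float)
    energy = profile.sum()                        # Eq. (3.4)
    vibration = np.abs(np.diff(profile)).sum()    # Eq. (3.5): differentiation-like
    return {
        "energy": energy,
        "square_energy": energy ** 2,             # emphasises large vehicles
        "sum": profile.sum(),                     # sum of the magnitude profile
        "max": profile.max(),
        "mean": profile.mean(),
        "mse": ((profile - profile.mean()) ** 2).mean(),
        "vibration": vibration,
        "square_vibration": vibration ** 2,       # emphasises high vibration
    }

feats = extract_features([1.0, 3.0, 2.0])
print(feats["energy"], feats["vibration"])  # 6.0 3.0
```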

This section aims to identify a classifier for effectively classifying vehicles into one of four categories: motorcycles, small, medium and large.

First, this study tries K-means clustering (denoted K-means). Here K-means is used as a partitional clustering method in which the number of clusters and random initial centers are specified before clustering starts. The number of clusters is set to four. An objective function is defined as the sum of squared distances between each point in the feature space and its nearest cluster center. The standard K-means procedure then minimizes the objective function iteratively by finding a new set of cluster centers that reduces its value at each iteration. The maximum number of iterations is set to 10.
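The K-means setup described above maps directly onto scikit-learn; the synthetic feature data here is illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((200, 9))  # 200 vehicles x 9 profile features (synthetic)

# four clusters, random initial centres, at most 10 iterations, as in the text
km = KMeans(n_clusters=4, init="random", n_init=1, max_iter=10, random_state=0)
labels = km.fit_predict(X)
print(labels.shape)  # (200,)
```

`km.inertia_` holds the final value of the sum-of-squared-distances objective that the iterations minimize.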

The next classifier is LDA, a supervised classifier. LDA measures the Mahalanobis distance between each group center and the LD-projected point of the nine vehicle features, then estimates the posterior probability of each group from that distance; the test vehicle is assigned to the group with the highest posterior probability.

The last classifier is SVM, which is also supervised. SVM is inherently a binary classifier, so the one-against-one strategy is used to support multi-class classification. For k groups, the one-against-one strategy constructs k(k−1)/2 SVMs, one to separate each pair of groups. This study tests SVM with the one-against-one approach: six SVMs are constructed, each trained on data from two different vehicle groups.

Prediction is performed by voting, where each classifier makes a prediction and the most frequently predicted class wins (“Max Wins”). In cases where two groups receive an identical number of votes, this study simply selects the one with the smallest index.
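As a sketch, scikit-learn's SVC implements exactly this one-against-one scheme internally (k(k−1)/2 pairwise classifiers with Max-Wins voting); the four synthetic vehicle groups below are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# four well-separated synthetic groups in a 9-D feature space, 30 vehicles each
X = np.vstack([rng.normal(loc=4 * g, scale=0.5, size=(30, 9)) for g in range(4)])
y = np.repeat(np.arange(4), 30)

clf = SVC(kernel="rbf", decision_function_shape="ovo")  # one-against-one
clf.fit(X, y)
# k(k-1)/2 = 6 pairwise decision values per sample for k = 4 classes
print(clf.decision_function(X[:1]).shape)  # (1, 6)
print(clf.predict(X[:1]))                  # [0]
```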

For the supervised classifiers LDA and SVM, the environmental installation problem means that the classifier must be retrained for each installation of the radar sensors. To resolve this problem, this study proposes a learning method based on a video training system, as shown in Figure 3.3. Using clutter-map CFAR, the radar system knows the times at which a vehicle enters and leaves the detection area, and sends vehicle-in and vehicle-out triggers to the video system, which immediately captures a video frame on each trigger. The image processing unit then processes the two frames to output the virtual loop length, vehicle category and vehicle length. The vehicle type and its features are saved in a training database, which is used to train the supervised classifier.

Figure 3.3. Video training and calibrating system.

Calibration and speed estimation

In general, radar speed detection is based on the Doppler principle. When a radio wave strikes a tracked object, the wave is reflected, and the frequency and amplitude of the reflected wave are influenced by the motion of the object. If the object is stationary, the frequency of the reflected wave is unchanged and no Doppler shift is produced. If the object moves toward the radar, the frequency of the reflected wave increases; if it moves away, the frequency decreases. This is the Doppler shift. However, the Doppler effect is neither pronounced nor stable for a side-fired roadside radar: it is almost zero while a vehicle passes through the detection zones, and the vehicle RCS is so complicated that Eqs. (2.2)-(2.4) cannot be applied.

Hence, the vehicle speed is instead estimated using Eq. (3.6). The detection zone of each lane forms a virtual loop, and the key to estimating the speed correctly is to calculate the three parameters of Eq. (3.6) as precisely as possible.

Speed = (L_v + L_z) / ΔT   (3.6)

where L_v denotes the length of the vehicle, L_z represents the length of the virtual loop and ΔT is the vehicle occupation time.

The vehicle occupation time ΔT is easily obtained from clutter-map CFAR. The length of the virtual loop L_z must be carefully calibrated; it too is an environmental installation problem, since the length differs between installations. Theoretically, the virtual loop length can be derived from the radar equations, antenna patterns, and the height and angle of the radar sensor. However, these

methods are imprecise and inconvenient. A more accurate method is to take measurements in the field. Figure 3.3 presents a video calibrating system for measuring the virtual loop length via image processing. Based on clutter-map CFAR, the times at which the vehicle is either in or outside of the virtual loop can be derived. The video calibrating system can obtain video frames at both in and out time. Image processing can be performed to obtain the distance of vehicle movement between the two frames.

The moving distance exactly equals the virtual loop length. SVR is used to estimate the vehicle length L_v. SVR is almost the same as SVM, the difference being that in SVR the optimal hyperplane is used to predict values, whereas in SVM it separates classes. Since SVR is still a supervised regression method, the video system is still required to measure vehicle lengths and save them in the training database.
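A sketch of this final step under the definitions above: an SVR (here scikit-learn's, trained on illustrative synthetic data) predicts the vehicle length L_v from the nine features, and Eq. (3.6) then gives the speed. The function names and the synthetic length model are assumptions:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# synthetic training set: 9 profile features -> vehicle length in meters
X_train = rng.random((100, 9))
y_train = 3.0 + 10.0 * X_train[:, 0]  # pretend length tracks the first feature

reg = SVR(kernel="rbf").fit(X_train, y_train)

def estimate_speed(features, loop_length_m, occupation_s):
    """Eq. (3.6): speed = (L_v + L_z) / dT, in meters per second."""
    vehicle_length = reg.predict(np.atleast_2d(features))[0]  # predicted L_v
    return (vehicle_length + loop_length_m) / occupation_s

v = estimate_speed(X_train[0], loop_length_m=5.0, occupation_s=0.8)
print(v > 0)  # True
```

In the deployed system, the calibrated virtual loop length and the CFAR-derived occupation time would replace the constants passed here.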
