

Chapter 3 System Architecture and Relevant Techniques

3.5. Distance (Ranging) Techniques

Ranging techniques [35][36] such as Received Signal Strength Indication (RSSI) [38], Time of Arrival (TOA), Time Difference of Arrival (TDOA), and Angle of Arrival (AOA) can be used to measure distances in vehicular environments. The RSSI technique estimates distance from the strength of the signal received by the antennas [37]. TOA and TDOA both use the signal propagation time to measure distance, given a known propagation speed. TOA directly calculates the arrival time of a signal propagating between two nodes, so its performance depends on the synchronization accuracy. TDOA, on the other hand, measures the difference in arrival times from multiple signal sources; it eliminates the clock-drift problem of TOA and does not require synchronization. The AOA technique measures the angles of the received signal to estimate the position of the desired target using directive antennas or antenna arrays. Its performance degrades as the distance between nodes increases, and its accuracy is easily affected by signal multipath.
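To make the RSSI and TOA relations concrete, the following Python sketch (our illustration, not taken from the cited works) estimates distance under a log-distance path-loss model and from a one-way propagation delay; the reference power p0_dbm and path-loss exponent n are illustrative assumptions that depend on the environment.

    SPEED_OF_LIGHT = 3.0e8  # signal propagation speed in m/s

    def distance_from_rssi(rssi_dbm, p0_dbm=-40.0, n=2.0):
        # Log-distance path-loss model: RSSI = p0 - 10*n*log10(d),
        # so d = 10 ** ((p0 - RSSI) / (10 * n)).
        return 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

    def distance_from_toa(t_send, t_receive):
        # One-way TOA: d = c * (t_receive - t_send). The two clocks must be
        # synchronized; any clock offset translates directly into ranging error.
        return (t_receive - t_send) * SPEED_OF_LIGHT

    print(distance_from_rssi(-70.0))       # about 31.6 m under the model above
    print(distance_from_toa(0.0, 167e-9))  # about 50 m for a 167 ns delay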


Chapter 4

Video-Assisted Inter-Vehicle Positioning (VIP) System

In this chapter, we present the Video-assisted Inter-Vehicle Positioning (VIP) system.

We first state the assumptions and define the symbols used in our system. Then, the detailed design of our proposed system is presented step by step.

4.1. Assumptions & Symbols

First, we assume that the GPS clocks of all vehicles are synchronized, and that the driving video logger can recognize precise information about license plates, relative positions, and lane indices within a reasonable range, e.g., 50 meters with a 90-degree angle width. The license plate recognized by the logger is used to identify different vehicles. The lane index of a GPS position can be matched against a digital map, and this information is used to decide the weight of the position. We also assume that the communication and image processing times are negligible with respect to the update interval of GPS positions, e.g., 1 second.

Table 4.1 lists the notations and symbols used throughout this thesis. For each vehicle Vi, the GPS position of Vi is denoted Pi. We match this position with a digital map to identify the lane index of Vi, called the GPS lane (GLi). At the same time, the lane of Vi sensed by the driving video logger is the video lane (VLi), and the relative coordination and lane discrimination between Vi and a sensed vehicle Vj are denoted Ci,j and Di,j, respectively. After the sensed data is exchanged, the set of Vi's neighbors is stored as Ni, and a position validation ρi is used to weight Pi. Finally, the estimated position P̂i of vehicle Vi can be calculated.

Table 4.1 : List of symbols

Symbol   Description
Vi       Vehicle with index (license plate number) i
Pi       GPS position of Vi
P̂i       Estimated position of Vi
GLi      GPS lane of Vi, obtained by matching Pi with the digital map
VLi      Video lane of Vi, recognized by its driving video logger
Ci,j     Coordination of a sensed vehicle Vj oriented from Vi
Di,j     Lane discrimination between Vi and a sensed vehicle Vj, e.g., Di,j = -2, -1, 0, 1, or 2
Ni       Set of neighboring vehicles of Vi
ρi       Validation value of Pi
M        Number of lanes on the road (M > 1)

4.2. System Flow

The block diagram of the proposed system is shown in Figure 4.1. The goal of our system is to identify the more reliable positions among neighboring vehicles for position estimation, so the algorithm compares the lane indices obtained from the GPS position and from the driving video logger. The algorithm consists of three main stages: the sensing stage, the sharing stage, and the estimation stage. At the beginning, each vehicle extracts its sensed data from the GPS receiver and the driving video logger. Then, each vehicle broadcasts its sensed data to its neighboring vehicles. After receiving this data, a vehicle uses the lane information to calculate the position validations of its neighboring vehicles. Finally, the vehicle estimates its position according to the GPS positions, relative positions, and position validations of its neighbors.

The goal of this thesis is to improve the accuracy of vehicle positioning. We want to assign a high weight to a better GPS position; otherwise, the weight should approach 0. With this weighting method, the estimated position, which is calculated mainly from high-weight positions, should be close to the real position. Our approach differentiates the reliability of positions by comparing the lane obtained from matching the GPS position with the digital map against the lane recognized by the driving video logger. In the following sections, we describe the detailed design of VIP step by step.

Figure 4-1 : Block diagram of the proposed system


4.3. Sensing Stage

In the sensing stage, each vehicle collects the sensed data used later for position estimation. Each vehicle Vi first retrieves its GPS position Pi from the GPS receiver and matches Pi with the digital map to obtain the corresponding GLi. If vehicle Vi is fully equipped, it further uses its driving video logger with image processing functions to recognize the sensed data within its scanning area. The sensed data contains the license plate number, which serves as a unique identifier for each front vehicle, as well as the relative position Ci,j, the lane discrimination Di,j, and the absolute lane index VLi. Ci,j can be represented as the vector [x, y]^T, where x and y are the vertical and horizontal coordinates, respectively, oriented from Vi to Vj; we set Cj,i = −Ci,j for later position calculation. The lane discrimination Di,j between Vi and a sensed vehicle Vj takes one of the values in [−(M − 1), …, −1, 0, 1, …, M − 1]; it allows a vehicle that is not fully equipped to calculate and update its VLj value. The absolute lane index VLi can be obtained by any of several lane recognition or tracking algorithms and is used to determine the validation values of positions. Besides, each vehicle Vi keeps track of a set Ni of neighboring vehicles that have video sensed data related to itself. In this stage, Ni contains all of the front vehicles recognized from the scanned image. If Vi is not a fully equipped vehicle, Ni is temporarily an empty set.
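To make the contents of the sensed data concrete, the following Python sketch models the record kept by each vehicle and one sensing step; the structure and field names are our own illustrative choices, not part of the system specification.

    from dataclasses import dataclass, field

    @dataclass
    class VehicleState:
        # Record kept by each vehicle V_i; fields mirror the symbols in Table 4.1.
        plate: str                              # license plate number, unique ID
        P: tuple = (0.0, 0.0)                   # GPS position P_i = (x, y)
        GL: int = 0                             # GPS lane GL_i from map matching
        VL: int = None                          # video lane VL_i; None until known
        C: dict = field(default_factory=dict)   # C_{i,j} for each sensed vehicle V_j
        D: dict = field(default_factory=dict)   # D_{i,j} in [-(M-1), ..., M-1]
        N: set = field(default_factory=set)     # neighbor set N_i
        store: dict = field(default_factory=dict)  # data received from neighbors

    def sense(me, detections):
        # detections: (plate_j, C_ij, D_ij) tuples produced by the video logger.
        for plate_j, c_ij, d_ij in detections:
            me.C[plate_j] = c_ij   # vector [x, y] oriented from V_i to V_j
            me.D[plate_j] = d_ij
            me.N.add(plate_j)      # in this stage N_i holds the recognized front vehicles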

4.4. Sharing Stage

Since the scanning area of a driving video logger is restricted to a limited angle width (typically from 90 to 150 degrees), each vehicle should share its information with other vehicles to create a more comprehensive view of the surrounding area. In this stage, each vehicle Vi broadcasts a message containing its Pi and GLi via the WAVE/DSRC module. If vehicle Vi is fully equipped, the message further contains VLi and all Ci,j's and Di,j's of the vehicles recognized in the previous stage, i.e., Vj ∈ Ni.

Once vehicle Vi receives a message from another vehicle Vj, it checks whether Cj,i is in the message or is already stored in Vi. If so, Vj is merged into the neighbor set Ni with Ni = Ni ∪ {Vj}, and all the related information in the message, i.e., Pj, GLj, VLj, and Cj,i, is stored in Vi's memory. Note that if there is no information regarding Cj,i, the message is not kept, because Vi has no clue for inferring a reference position from Pj.

On the other hand, if Vi does not yet know its video lane VLi, i.e., Vi is not fully equipped, it can indirectly obtain this value from one of the fully equipped vehicles Vj that scanned it, as follows:

VLi = VLj + Dj,i    (1)

That is, if Vi is in the scanning area of some fully equipped vehicle Vj behind it, it can infer its VLi from VLj by shifting Dj,i lane(s).
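The sketch below shows how a receiving vehicle might merge a neighbor's message and apply Eq. (1); the message layout and the store dictionary are assumptions carried over from the previous sketch.

    def on_message(me, msg):
        # Sharing stage, receiver side. msg is assumed to carry: plate, P, GL,
        # and, if the sender V_j is fully equipped, VL plus its C and D tables.
        j = msg["plate"]
        c_ji = msg.get("C", {}).get(me.plate)        # C_{j,i}, present if V_j sensed us
        if c_ji is None and j in me.N:
            c_ji = (-me.C[j][0], -me.C[j][1])        # C_{j,i} = -C_{i,j} from our own sensing
        if c_ji is None:
            return                                   # without C_{j,i}, P_j yields no reference position
        me.N.add(j)                                  # N_i = N_i ∪ {V_j}
        me.store[j] = (msg["P"], msg["GL"], msg.get("VL"), c_ji)
        if me.VL is None and msg.get("VL") is not None and me.plate in msg.get("D", {}):
            me.VL = msg["VL"] + msg["D"][me.plate]   # Eq. (1): VL_i = VL_j + D_{j,i}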

4.5. Estimation Stage

In the estimation stage, our system validates the accuracy of the received GPS positions and estimates a corrected position in accordance with the validation values. Consider a vehicle Vi. For each neighboring vehicle Vj (Vj ∈ Ni), it validates the accuracy of position Pj by the following equation:

ρj = 1 − |GLj − VLj| / (M − 1)    (2)

In this equation, M represents the number of lanes and M > 1. The equation transforms the difference between the GPS lane GLj and the video lane VLj into a validation value ρj between 0 and 1. Note that the calculation of ρj cannot be done in advance by vehicle Vj, because Vj could be a non-fully-equipped vehicle; in this case, Vj has no way to obtain its VLj before receiving a message from another vehicle, since its video lane is recognized by Vi. Let P̂i,j denote a reference position of Vi estimated from Pj, i.e., P̂i,j = Pj + Cj,i. The corrected position P̂i is calculated as follows:

P̂i = ( Σ_{Vj ∈ Ni ∪ {Vi}} ρj^α · P̂i,j ) / ( Σ_{Vj ∈ Ni ∪ {Vi}} ρj^α )    (3)

where α > 0 and P̂i,i = Pi. The corrected position P̂i is the weighted average of all reference positions estimated from the vehicles in Ni and from Vi itself. The validation value ρj gives a different weight to each reference position P̂i,j. An accurate GPS position Pj usually has a small (or no) difference between its GPS lane GLj and the lane VLj recognized by the driving video logger, and thus we give the corresponding reference position P̂i,j a larger weight, i.e., ρj, in the correction.

Besides, the parameter α is a scaling factor: a larger value of α magnifies the difference between a small and a large validation value. As shown in the next chapter, we found that adequately setting α yields an additional 5 percent improvement in accuracy.
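A minimal sketch of the estimation stage under the reconstructions of Eqs. (2) and (3) above, reusing the record from the earlier sketches; the helper names are ours.

    def validation(GL_j, VL_j, M):
        # Eq. (2): map the lane difference |GL_j - VL_j| to a value in [0, 1].
        return 1.0 - abs(GL_j - VL_j) / (M - 1)

    def estimate_position(me, M, alpha=5.0):
        # Eq. (3): weighted average of the reference positions, including V_i's
        # own GPS position, with weights rho ** alpha.
        refs = []
        if me.VL is not None:                              # V_i itself contributes P-hat_{i,i} = P_i
            refs.append((me.P, validation(me.GL, me.VL, M)))
        for j in me.N:
            if j not in me.store:                          # sensed but no message received yet
                continue
            P_j, GL_j, VL_j, c_ji = me.store[j]
            if VL_j is None or c_ji is None:
                continue
            ref = (P_j[0] + c_ji[0], P_j[1] + c_ji[1])     # P-hat_{i,j} = P_j + C_{j,i}
            refs.append((ref, validation(GL_j, VL_j, M)))
        wsum = sum(rho ** alpha for _, rho in refs)
        if wsum == 0:                                      # nothing usable: keep the raw GPS fix
            return me.P
        x = sum(p[0] * rho ** alpha for p, rho in refs) / wsum
        y = sum(p[1] * rho ** alpha for p, rho in refs) / wsum
        return (x, y)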

4.6. VIP System

In this section, we show the procedure of the VIP system in Figure 4.2.

Procedure of each vehicle Vi
Initialize: Ni = {}; VLi = null;

1  Obtain the position Pi from the GPS receiver;
2  Match Pi with the digital map to find the GPS lane GLi;
3  If Vi is a fully equipped vehicle
4    Capture an image from its driving video logger;
5    Recognize the video lane VLi from the image;
6    For each vehicle Vj recognized from the image do
7      Recognize the relative coordination Ci,j and lane discrimination Di,j;
…
13 Upon receiving a message from a neighboring vehicle Vj
14   If Cj,i is in the message or already in Vi's storage
…

Figure 4-2 : Procedure of VIP

The details are described as follows:

Line 1: Vi obtains its geographic position Pi with a GPS receiver;

Line 2: Obtain the GPS lane GLi by matching its Pi with the digital map;

Lines 3~5: If Vi is a fully equipped vehicle, it starts to capture an image from its driving video logger and recognizes the video lane VLi from the image with image processing functions;

Lines 6~10: If Vi is fully equipped, it also recognizes the front vehicles. The relative coordination Ci,j and the lane discrimination Di,j between Vi and any front vehicle Vj in its sensing area are recorded, and Vj is then added into the set Ni of Vi's neighboring vehicles;

Line 12: If Vi is fully equipped, it broadcasts its Pi, GLi, and VLi, and all Ci,j's and Di,j's, via the WAVE/DSRC module; otherwise, it just broadcasts its Pi and GLi;

Line 13: Upon receiving a message from a neighboring vehicle Vj, Vi performs the actions in lines 14~20;

Lines 14~17: If Cj,i is in the message or already in Vi's storage, it means that one of Vi and Vj can sense the other. First, Vi updates its set of neighboring vehicles by Ni = Ni ∪ {Vj}, and then stores Pj, GLj, VLj, and Cj,i for later use;

Lines 18~20: If Vi is not fully equipped, it calculates its video lane by VLi = VLj + Dj,i according to the received message;

Lines 21~22: If the set of neighboring vehicles Ni = {}, there are no vehicles nearby Vi; Vi cannot correct its Pi and sets P̂i = Pi;

Lines 23~24: If the set of neighboring vehicles Ni is not empty, each vehicle Vj ∈ Ni ∪ {Vi} performs the actions in lines 25~26;

Line 25: Calculate the position validation ρj of Vj according to Eq. (2). The validation of a vehicle's position is calculated from the lane difference between its GL and VL; a lower difference leads to a higher weight for the reference position;

Line 26: Vi estimates its position P̂i according to Eq. (3). The estimation is composed of the reference positions and the validation values calculated in line 25;

Line 29: Return the estimated position P̂i.
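Tying the stages together, the following hedged sketch runs one correction cycle (one GPS update interval) following the procedure above; gps_fix, map_match_lane, video_detect, build_message, and broadcast stand in for the GPS receiver, digital map, video logger, and WAVE/DSRC module and are assumptions of this sketch, which reuses sense, on_message, and estimate_position from the earlier ones.

    def vip_cycle(me, fully_equipped, inbox, M, alpha=5.0):
        # One VIP correction cycle for vehicle V_i (lines 1~29 of Figure 4-2).
        # inbox holds the messages received in this interval; returns P-hat_i.
        me.P = gps_fix()                         # line 1: GPS position P_i
        me.GL = map_match_lane(me.P)             # line 2: GPS lane GL_i
        if fully_equipped:                       # lines 3~10: sensing stage
            me.VL, detections = video_detect()
            sense(me, detections)
        broadcast(build_message(me))             # line 12: sharing stage
        for msg in inbox:                        # lines 13~20: merge neighbor data
            on_message(me, msg)
        if not me.N:                             # lines 21~22: no neighbor, keep P_i
            return me.P
        return estimate_position(me, M, alpha)   # lines 23~29: Eqs. (2) and (3)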

For the above procedure, we give a simple example as follows. In Figure 4.3, vehicles VB and VF are fully equipped, and the other vehicles are not. In the sensing stage of this example, vehicles VA, VB, …, VF retrieve their GPS positions PA, PB, …, PF with their GPS receivers. Then the fully equipped vehicle VB senses VA and VD and extracts VLB = 2, CB,A, CB,D, DB,A = -1, and DB,D = 1 with its driving video logger; the fully equipped vehicle VF likewise senses VB and VE.

In the sharing stage, the fully equipped vehicles VB and VF broadcast their sensed P, GL, and VL, and all C's and D's, via the WAVE/DSRC module. Vehicles VA, VC, and VD are not fully equipped, so they broadcast only their P and GL to neighboring vehicles. In this example, upon receiving the message from neighboring vehicle VB, VA finds CB,A in the message sent from VB. It updates its set of neighboring vehicles by adding VB into NA and stores PB, GLB, VLB, and CB,A. Also, since the value of VLA is null, VA calculates it according to VLA = VLB + DB,A.

In the estimation stage of this example, only VC cannot correct its position, because no vehicle can sense VC and VC senses no vehicle. Any other vehicle, such as VA, can calculate the position validations ρA and ρB by Eq. (2). VA then estimates its position P̂A by Eq. (3), averaging the reference positions P̂A,A and P̂A,B with the respective weights ρA^α and ρB^α.

Figure 4-3 : Example of VIP


Now, we summarize the video-assisted inter-vehicle positioning (VIP) algorithm as follows. Its computational complexity is linear in the number of neighboring vehicles, and it requires only one broadcast for each correction. Besides, in our algorithm, there is no dependency between any two corrected positions on the time axis, which greatly eliminates the impact of vehicle mobility.


Chapter 5

Simulation Results and Analysis

5.1. Simulation Environment

In this chapter, we conduct simulations to evaluate our system using MATLAB [39][40]. The experimental setup is shown in Table 5.1. Our scenario is a highway model: a 1000-meter straight road with 4 lanes of 3.5 m width, as shown in Figure 5.1.

Vehicles travel upstream on this road with speeds from 50 km/h to 60 km/h and switch lanes randomly. The default vehicle flow rate (density) is a medium density of 1,800 vehicles/hour; we also simulate 1,200, 1,800, and 2,400 vehicles/hour. The transmission range is set to 300 meters, so vehicles can easily communicate with their neighboring vehicles. The default sensing range of the driving video logger is 150 meters, and we show results for sensing ranges from 50 m to 150 m. The sensing angle is set to 120 degrees, a common angle in commercial products. The GPS errors are generated from a Gaussian distribution with standard deviations of 5 m and 10 m. All results are averaged over 10 simulation runs, each lasting 600 seconds, with vehicles estimating their positions once per second.


Table 5.1 : Simulation parameters

Parameter                       Value
Vehicle Flow Rate (Density)     1,800 vehicles/hour
Number of Lanes                 4
Lane Width                      3.5 m
Road Length                     1000 m
Road Type                       Freeway
WAVE/DSRC Transmission Range    300 m
Video Sensing Range             150 m (50~150 m)
Video Sensing Angle             120° (90°, 120°, 150°)
Speed Limit                     50 km/h to 60 km/h
GPS Error                       Gaussian distribution with standard deviation σ = [5, 10] m
Simulation Time                 600 seconds
Simulation Time Step            1 second
Number of Runs                  10

Figure 5-1 : Simulation roadway setup
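As a small illustration of how the GPS error model in Table 5.1 can be realized, the sketch below perturbs a true position with zero-mean Gaussian noise and map-matches it to a lane on the straight 4-lane road; the lane computation is our simplification of map matching for this particular roadway.

    import random

    def noisy_gps(true_pos, sigma=5.0):
        # Zero-mean Gaussian error (sigma = 5 or 10 m, per Table 5.1),
        # applied independently to each coordinate.
        x, y = true_pos
        return (random.gauss(x, sigma), random.gauss(y, sigma))

    def gps_lane(pos, lane_width=3.5, M=4):
        # Map matching reduced to its essence on a straight road: the lane
        # index follows from the lateral coordinate, clamped to [0, M-1].
        return min(max(int(pos[1] // lane_width), 0), M - 1)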


5.2. Simulation Results

In our experiments, we evaluate the error of the estimated positions by comparing them with the true positions of the vehicles. The performance metric is the root-mean-square error (RMSE) [26], which is expressed in our experiments as follows:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} [ (x̂i − xi*)² + (ŷi − yi*)² ] )    (4)

RMSE is a common metric for evaluating positioning accuracy. We define the real position of vehicle Vi as Pi* = (xi*, yi*) and the estimated position of Vi as P̂i = (x̂i, ŷi); the RMSE represents the average distance between Pi* and P̂i over n samples.
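A direct transcription of Eq. (4), as reconstructed above, into Python; the function name and argument layout are our own.

    import math

    def rmse(true_positions, estimated_positions):
        # Eq. (4): root-mean-square distance between the true positions P_i*
        # and the estimated positions P-hat_i over n samples.
        n = len(true_positions)
        total = sum((xt - xe) ** 2 + (yt - ye) ** 2
                    for (xt, yt), (xe, ye) in zip(true_positions, estimated_positions))
        return math.sqrt(total / n)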

First, we show the lane discrimination between the GPS lane and the video lane in Figure 5.2. With GPS errors of 5 and 10 meters, the figure shows that the vehicles with |GL − VL| = 0 exceed half of all vehicles, and these vehicles have lower position errors than the others. This result indicates that the half of the vehicles with low error can help the other vehicles correct their positions.

Figure 5-2 : Comparison of the lane discrimination between the GPS lane and the video lane


Figure 5.3(a) shows the RMSE of the estimated positions when all vehicles are fully equipped and the sensing range is set to 50 m and 150 m, respectively. With a sensing range of 50 m, the results show that the position error can be reduced by 25 to 30 percent (to around 3.6 m). With a sensing range of 150 m, a scaling factor α = 1, and a GPS error of 5 m, the position error is under 3.3 m; when α is larger than 4, the error is less than 3 m. However, we found that for α > 6 the error could grow again in some cases, so we set α = 5 as the default value of our system. Figure 5.4 shows that for α between 4 and 6 the weight is less than 1/5 when |GL − VL| = 1, which indicates that positions with |GL − VL| = 1 can still contribute some accuracy; it also shows that positions with |GL − VL| > 2 provide no help. Figure 5.3(b) shows the RMSE when the GPS error is 10 m and the sensing range is 150 m: the position error can be reduced to 5.8 m for α between 4 and 6. The improvement rate of VIP with different values of α and GPS error σ is shown in Figure 5.5. It shows that VIP has a comparable improving ability under the different GPS errors and sensing ranges, improving the accuracy by about 30 and 40 percent, respectively, when α > 4.

(a) With GPS error of 5m   (b) With GPS error of 10m

Figure 5-3 : RMSE with different values of the scaling factor and GPS error


Figure 5-4 : Distribution of position validation under different scaling factors

Figure 5-5 : Improvement rate with scaling factor and GPS error


We also evaluate the accuracy with different ratios of fully equipped vehicles in Figure 5.6. As the ratio of fully equipped vehicles increases, the improvement in positioning accuracy also rises. When the ratio of fully equipped vehicles is less than 30 percent, the average number of neighboring vehicles is not large enough, so the improvement of the estimation results is not obvious. The number of neighboring vehicles is around 7 when all vehicles are fully equipped and the sensing range is 150 m, which provides the best correction result in the VIP system. In Figure 5.7, we can see that when the ratio of fully equipped vehicles is more than 50 percent, the ratio of estimated vehicles is over 70 and 90 percent for the two sensing ranges. This shows that the VIP system can improve the positioning accuracy without requiring all vehicles to be equipped with a driving video logger.

(a) With GPS error of 5m (b) With GPS error of 10m

Figure 5-6 : RMSE with different ratio of fully equipped vehicles


(a) With sensing range of 50m   (b) With sensing range of 150m

Figure 5-7 : Ratio of corrected vehicles under different ratios of fully equipped vehicles

Figure 5.8 measures the accuracy with different vehicle flow rates. At a rate of 1,200 vehicles/hour, the improvement in positioning is lower than at 1,800 vehicles/hour, because the former has fewer neighboring vehicles, so there are not enough reference vehicles to provide a good correction. The flow rate of 2,400 vehicles/hour performs better than 1,800 vehicles/hour; when the ratio of fully equipped vehicles is less than 90 percent and the sensing range is 150 m, the RMSE of the estimated positions is less than 3 m.


Figure 5-8 : RMSE with vehicle flow rates

The comparisons of accuracy with different video sensing ranges and angles are shown in Figure 5.9 and Figure 5.10. In these results, we vary the video sensing range from 50 m to 150 m. The results show that a larger sensing range provides more neighboring vehicles and thus better positioning. The sensing angle has almost no effect on the positioning accuracy: because all of these angles cover most of the vehicles, their effect on the number of neighboring vehicles is very small.


Figure 5-9 : RMSE with sensing ranges

Figure 5-10 : RMSE with sensing angles


We also evaluate the VIP system with 2, 3, and 4 lanes. The results are shown in Figure 5-11: the improvement rates for 2 and 3 lanes are similar, and the best result is 21 percent. When all vehicles are fully equipped and the sensing range is 50 m, the improvement
