
Chapter 4 Video-Assisted Inter-Vehicle Positioning (VIP) System


where α > 0. The corrected position P̂i is the average of all inference positions estimated from the vehicles in Ni and Vi itself. The validation value ρj gives a different weight to each inference position Pi,j. An accurate GPS position Pj usually has a small (or no) difference between its GPS lane GLj and the lane VLj recognized by the driving video logger; thus, we give the corresponding reference position Pi,j a larger weight, i.e., ρj, in the correction.

In addition, the parameter α is a scaling factor: a larger α magnifies the difference between a small and a large validation value. As shown in the next chapter, we found that adequately setting α yields an additional 5 percent improvement in accuracy.
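To make the role of α concrete, the following sketch normalizes the weights ρj^α used in the correction. The validation values here are hypothetical and assume, for illustration only, the form ρ = 1/(1 + |GL − VL|); the actual form is defined by Eq. (2).

```python
def alpha_weights(validations, alpha):
    """Normalize the correction weights rho_j ** alpha."""
    powered = [rho ** alpha for rho in validations]
    total = sum(powered)
    return [p / total for p in powered]

# Hypothetical validation values for three reference positions with
# lane differences 0, 1 and 2, assuming rho = 1 / (1 + |GL - VL|).
rhos = [1.0, 0.5, 1.0 / 3.0]

print(alpha_weights(rhos, 1))  # moderate preference for |GL - VL| = 0
print(alpha_weights(rhos, 5))  # alpha = 5 almost ignores poorer references
```

With α = 1 the weight of the |GL − VL| = 0 reference is only slightly larger than the others, while with α = 5 it dominates the correction, which is the magnifying effect described above.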

4.6. VIP System

In this section, we present the procedure of the VIP system, shown in Figure 4-2.

Procedure of each vehicle Vi
Initialize: Ni = {}; VLi = null;
1  Obtain the position Pi from the GPS receiver;
2  Match Pi with the digital map to find the GPS lane GLi;
3  If Vi is a fully equipped vehicle
4    Capture an image from its driving video logger;
5    Recognize the video lane VLi from the image;
6    For each vehicle Vj recognized from the image do
7      Recognize the relative coordination Ci,j and lane discrimination Di,j;
…
13 Upon receiving a message from a neighboring vehicle Vj
14   If Cj,i is in the message or already in Vi’s storage
…

Figure 4-2 : Procedure of VIP

The details are described as follows:

Line 1: Vi obtains its geographic position Pi with a GPS receiver;

Line 2: Obtain the GPS lane GLi by matching its Pi with the digital map;

Lines 3~5: If Vi is a fully equipped vehicle, it starts to capture an image from its driving video logger and recognizes the video lane VLi from the image with image-processing functions;

Lines 6~10: If Vi is fully equipped, it also recognizes the front vehicles. The relative coordination Ci,j and lane discrimination Di,j between Vi and any front vehicle Vj in its sensing area are recorded, and Vj is then added into the set Ni of Vi’s neighboring vehicles;

Line 12: If Vi is fully equipped, it broadcasts its Pi, GLi and VLi, and all Ci,j’s and Di,j’s via the WAVE/DSRC module. Otherwise, it broadcasts only its Pi and GLi;

Line 13: Upon receiving a message from a neighboring vehicle Vj, Vi performs the actions in lines 14~20;

Lines 14~17: If Cj,i is in the message or already in Vi’s storage, it means that one of Vi and Vj can sense the other. First, Vi updates its set of neighboring vehicles by Ni = Ni ∪ {Vj}, and then stores Pj, GLj, VLj and Cj,i for later use;

Lines 18~20: If Vi is not fully equipped, it must calculate its video lane VLi by VLi = VLj + Dj,i according to the received message;

Lines 21~22: If the set of neighboring vehicles Ni = {}, there is no vehicle near Vi; Vi cannot correct its Pi and therefore sets P̂i = Pi;

Lines 23~24: If the set of neighboring vehicles Ni is not empty, then for each vehicle Vj ∈ Ni ∪ {Vi}, Vi performs the actions in lines 25~26;

Line 25: It calculates the position validation of Vj according to Eq. (2). The validation of a vehicle’s position is calculated from the lane difference between its GL and VL: a lower difference leads to a higher weight for the reference position;

Line 26: Vi estimates its position P̂i according to Eq. (3). The estimation equation combines the reference positions with the validation values calculated in line 25;

Line 29: Return the estimated position P̂i.
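The estimation stage described above can be sketched as follows. This is a minimal illustration rather than the exact implementation: the validation form is an assumed stand-in for Eq. (2), and `references` holds one reference position per vehicle in Ni ∪ {Vi}, each with its GL and VL.

```python
def correct_position(p_i, references, alpha=5):
    """Estimation stage of VIP (sketch).

    p_i        -- Vi's own GPS position (x, y)
    references -- one tuple ((x, y), GL, VL) per vehicle in Ni plus
                  Vi itself; the validation form below is an assumed
                  stand-in for Eq. (2)
    """
    if not references:            # lines 21~22: no neighbors, keep Pi
        return p_i
    xs = ys = total = 0.0
    for (x, y), gl, vl in references:        # lines 23~24
        rho = 1.0 / (1 + abs(gl - vl))       # line 25 (assumed form)
        w = rho ** alpha                     # scaling factor of Eq. (3)
        xs += w * x
        ys += w * y
        total += w
    return (xs / total, ys / total)          # lines 26, 29

# A reference with matching lanes dominates one that is two lanes off.
print(correct_position((0.0, 0.0),
                       [((0.0, 0.0), 2, 2), ((10.0, 0.0), 2, 4)]))
```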

For the above procedure, we give a simple example. In Figure 4-3, vehicles VB and VF are fully equipped, and the other vehicles are not. In the sensing stage of this example, vehicles VA, VB, …, VF retrieve their GPS positions PA, PB, …, PF with their GPS receivers. Then the fully equipped vehicle VB senses VA and VD and extracts VLB = 2, CB,A, CB,D, DB,A = -1 and DB,D = 1 with its driving video logger; the fully equipped vehicle VF likewise senses VB and VE.

In the sharing stage, the fully equipped vehicles VB and VF broadcast their sensed P, GL and VL, and all C’s and D’s via the WAVE/DSRC module. Vehicles VA, VC and VD are not fully equipped, so they broadcast only their P and GL to neighboring vehicles. In this example, upon receiving the message from neighboring vehicle VB, VA finds CB,A in the message. It updates its set of neighboring vehicles by adding VB into NA, and stores PB, GLB, VLB and CB,A. Also, since the value of VLA is null, VA calculates it according to VLA = VLB + DB,A.
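The lane inference in this example can be checked numerically with the values given above (VLB = 2, DB,A = −1):

```python
# Example values from the text: VB travels in video lane 2 and senses
# VA with a lane discrimination D_{B,A} = -1.
VL_B = 2
D_BA = -1

# A non-equipped vehicle derives its video lane from a neighbor's
# broadcast (lines 18~20 of the procedure): VL_i = VL_j + D_{j,i}.
VL_A = VL_B + D_BA
print(VL_A)  # -> 1
```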

In the estimation stage of this example, only VC cannot correct its position, because no vehicle can sense VC and VC senses no vehicle. Any other vehicle, such as VA, can calculate the position validations ρA and ρB by Eq. (2). VA then estimates its position P̂A by Eq. (3), averaging the reference positions PA,A and PA,B with the respective weights ρA^α and ρB^α.

Figure 4-3 : Example of VIP


Now, we summarize the video-assisted inter-vehicle positioning (VIP) algorithm as follows. Its computational complexity is linear in the number of neighboring vehicles, and it requires only one broadcast per correction. Besides, in our algorithm there is no dependence between any two corrected positions on the time axis, which greatly eliminates the impact of vehicle mobility.


Chapter 5

Simulation Results and Analysis

5.1. Simulation Environment

In this chapter, we conduct simulations to evaluate our system using MATLAB [39][40].

The experimental setup is shown in Table 5.1. Our scenario is a highway model: a 1000-meter straight road containing 4 lanes of 3.5 m width, as shown in Figure 5.1. Vehicles travel upstream on this road with a speed limit of 50 km/h to 60 km/h and switch lanes randomly. The default vehicle flow rate (density) is a medium density of 1800 vehicles/hour; we also simulate 1200, 1800 and 2400 vehicles/hour. The transmission range was set to 300 meters, so vehicles can easily communicate with neighboring vehicles. The default sensing range of the driving video logger is 150 meters, and we also show results for sensing ranges from 50 m to 150 m. The sensing angle was set to 120 degrees, a common angle in commercial products. GPS errors were generated from a Gaussian distribution with standard deviations of 5 m and 10 m. All results are averaged over 10 simulation runs, each lasting 600 seconds, with vehicles estimating their positions once per second.
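The GPS error model of this setup can be sketched as follows. The text does not specify whether the Gaussian error is applied per axis or to the total distance, so this sketch assumes independent per-axis noise with the stated standard deviation.

```python
import random

def noisy_gps(true_x, true_y, sigma=5.0, rng=random):
    """Return a simulated GPS fix: the true position plus independent
    Gaussian noise of standard deviation sigma meters on each axis
    (assumed interpretation of the setup in Table 5.1)."""
    return (true_x + rng.gauss(0.0, sigma),
            true_y + rng.gauss(0.0, sigma))

random.seed(1)
print(noisy_gps(500.0, 5.25))         # a fix for the sigma = 5 m setting
print(noisy_gps(500.0, 5.25, 10.0))   # the sigma = 10 m setting
```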


Table 5.1 : Simulation parameters

Parameter Value

Vehicle Flow Rate (Density) 1,800 vehicles / hour

Number of Lanes 4

Lane Width 3.5m

Road Length 1000m

Road Type Freeway

WAVE/DSRC Transmission Range 300m

Video Sensing Range 150m (50~150m)

Video Sensing Angle 120° (90°, 120°, 150°)

Speed Limit 50 km/h to 60 km/h

GPS Error Gaussian distribution with standard deviation σ = [5, 10] m

Simulation Time 600 Seconds

Simulation Time Step 1 Second

Number of Runs 10

Figure 5-1 : Simulation roadway setup


5.2. Simulation Results

In our experiments, we evaluate the error of the estimated positions by comparing them with the true positions of the vehicles. The performance metric used is the root-mean-square error (RMSE) [26], which is expressed in our experiments as follows:

\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left[ (x_i^{*} - \hat{x}_i)^2 + (y_i^{*} - \hat{y}_i)^2 \right]}  (4)

RMSE is a common metric for evaluating positioning accuracy. In our experiments, we define the real position of vehicle Vi as Pi* = (xi*, yi*) and the estimated position of Vi as P̂i = (x̂i, ŷi); the RMSE represents the average distance between Pi* and P̂i over n samples.
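A straightforward implementation of this metric, following the standard RMSE definition over 2-D positions, is:

```python
import math

def rmse(true_positions, estimated_positions):
    """Root-mean-square error over n 2-D position samples."""
    n = len(true_positions)
    squared = sum((xt - xe) ** 2 + (yt - ye) ** 2
                  for (xt, yt), (xe, ye) in zip(true_positions,
                                                estimated_positions))
    return math.sqrt(squared / n)

# One sample 5 m off, one sample exact: RMSE = sqrt((25 + 0) / 2)
print(rmse([(0.0, 0.0), (10.0, 0.0)],
           [(3.0, 4.0), (10.0, 0.0)]))  # -> about 3.54
```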

First, we show the lane discrimination between the GPS lane and the video lane in Figure 5.2. With GPS errors of 5 and 10 meters, the figure shows that the vehicles with |GL – VL| = 0 exceed half of all vehicles, and these vehicles have lower position error than the others. This result indicates that the half of the vehicles with low error can help the other vehicles correct their positions.
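How GPS error produces a nonzero |GL − VL| can be illustrated with a toy map-matching step that converts a lateral offset into a lane index. The lane layout here (lane 1 starting at y = 0, 3.5 m lanes, 4 lanes) follows Table 5.1, but the matching rule itself is an illustrative assumption:

```python
def gps_lane(lateral_y, lane_width=3.5, num_lanes=4):
    """Toy map matching: lane index (1..num_lanes) from the lateral
    offset of a GPS fix; lane 1 assumed to start at y = 0."""
    lane = int(lateral_y // lane_width) + 1
    return max(1, min(num_lanes, lane))

# A vehicle truly centered in lane 2 (y = 5.25 m); a 3 m lateral GPS
# error pushes the fix into lane 3, so |GL - VL| = 1 for this sample.
true_lane = gps_lane(5.25)         # -> 2
noisy_lane = gps_lane(5.25 + 3.0)  # -> 3
print(true_lane, noisy_lane)
```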

Figure 5-2 : Comparison of the lane discrimination between GPS lane and video lane


Figure 5.3(a) shows the RMSE of the estimated positions when all vehicles are fully equipped and the sensing range is set to 50 m and 150 m, respectively. When the sensing range is 50 m, the results show that the position error can be reduced by 25 to 30 percent (to around 3.6 m). When the sensing range is 150 m, the scaling factor α = 1, and the GPS error is 5 m, the position error is under 3.3 m; when α is larger than 4, the error can be less than 3 m. However, we found that when α > 6 the error can grow larger in some cases, so we set α = 5 as the default value of our system. Figure 5.4 shows that for α of 4~6 the weight is less than 1/5 when |GL – VL| = 1, meaning that vehicles with |GL – VL| = 1 can still improve accuracy somewhat; the figure also shows that positions with |GL – VL| ≥ 2 are of little help. Figure 5.3(b) shows the RMSE when the GPS error is 10 m and the sensing range is 150 m: the position error can be reduced to 5.8 m for α of 4~6. The improvement rate of VIP with different values of α and GPS error σ is shown in Figure 5.5. VIP improves accuracy comparably under the different GPS errors and sensing ranges, by 30 and 40 percent, respectively, when α > 4.

(a) With GPS error of 5m (b) With GPS error of 10m

Figure 5-3 : RMSE with different values of the scaling factor and GPS error


Figure 5-4 : Distribution of position validation under different scaling factor

Figure 5-5 : Improvement rate with scaling factor and GPS error


We also evaluate the accuracy with different ratios of fully equipped vehicles in Figure 5.6. As the ratio of fully equipped vehicles increases, the improvement in positioning accuracy also rises. When the ratio of fully equipped vehicles is less than 30 percent, the average number of neighbor vehicles is not sufficient, so the improvement in the estimation results is not obvious. The number of neighbor vehicles is around 7 when all vehicles are fully equipped and the sensing range is 150 m, which provides the best correction result in the VIP system. In Figure 5.7, we can see that when the ratio of fully equipped vehicles exceeds 50 percent, the ratio of estimated vehicles is over 70 and 90 percent for the two sensing ranges. This shows that the VIP system can improve positioning accuracy without requiring every vehicle to be equipped with a driving video logger.

(a) With GPS error of 5m (b) With GPS error of 10m

Figure 5-6 : RMSE with different ratio of fully equipped vehicles


(a) With sensing range of 50m (b) With sensing range of 150m

Figure 5-7 : Ratio of corrected vehicles with different ratios of fully equipped vehicles

Figure 5.8 measures the accuracy with different vehicle flow rates. At 1,200 vehicles/hour, the positioning improvement is lower than at 1,800 vehicles/hour, because the former yields fewer neighbor vehicles than the latter, so there are not enough reference vehicles to provide a good correction. The flow rate of 2,400 vehicles/hour performs better than 1,800 vehicles/hour, and when the ratio of fully equipped vehicles is less than 90 percent and the sensing range is 150 m, the RMSE of the estimated position is less than 3 m.


Figure 5-8 : RMSE with vehicle flow rates

The comparisons of accuracy with different video sensing ranges and angles are shown in Figure 5.9 and Figure 5.10. In these results, we vary the video sensing range from 50 m to 150 m. The results show that a larger sensing range can provide more neighbor vehicles and thus better positioning. The sensing angle has almost no effect on positioning accuracy: because all of these angles cover most of the vehicles, their effect on the number of neighbor vehicles is very small.


Figure 5-9 : RMSE with sensing ranges

Figure 5-10 : RMSE with sensing angles


We also evaluate the VIP system with 2, 3 and 4 lanes. The results are shown in Figure 5-11: the improvement rates for 2 and 3 lanes are similar, with a best result of 21 percent. When all vehicles are fully equipped and the sensing range is 50 m, the improvement rate is around 13 percent (4.3 m), and the rate for 4 lanes is 2 times greater than for 2 and 3 lanes. This means our system works better on 4 lanes.

Figure 5-11 : RMSE with different number of lanes

Finally, we assume that the relative positions obtained by video sensing contain errors, adding a 1~5% distance error to the relative positions. As shown in Figure 5-12, when the sensing range is 50 m the distance error ranges from 0.5 m to 2.5 m, and the results show that this error reduces the positioning accuracy. When the ratio of fully equipped vehicles is less than 30 percent, the RMSE of VIP is higher than the GPS error. If all vehicles are fully equipped, the improvement rate is 12 percent (4.35 m).


Figure 5-12 : RMSE with video sensing error (x-axis: ratio of fully equipped vehicles; y-axis: RMSE in meters; curves: VIP error with σ = 5 m, with and without video sensing error)


Chapter 6 Conclusion

In this thesis, we have proposed a Video-Assisted Inter-Vehicle Positioning (VIP) system. The system is designed to integrate sensed data extracted from driving video loggers by image processing and to share this information among vehicles to improve the accuracy of cooperative positioning. Simulation results have shown that our approach achieves 10 to 30 percent improvement in position accuracy, depending on the sensing range, when only half of the vehicles are fully equipped, and improves the accuracy by 40 percent (to within 3 m) when all vehicles are fully equipped in the best condition. In future work, it is essential to evaluate the performance with different types of driving video loggers and image processors (or software), and to consider how to achieve cooperative positioning in city environments, which can be much more complex than highway scenarios. Besides, it would be interesting to evaluate the performance when the information obtained by a driving video logger, e.g., lane index, distance, or license plate number, is not always accurate.


References

[1] M. L. Sichitiu and M. Kihl, “Inter-vehicle communication systems: a survey”, IEEE Communications Surveys & Tutorials, Vol.10, No.2, pp.88-105, 2008.

[2] IEEE Standard for Information Technology - Telecommunications and Information Exchange Between Systems - Local and Metropolitan Area Networks - Specific Requirements; Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications; Amendment 6: Wireless Access in Vehicular Environments, IEEE Std. 802.11p, July 2010.

[3] C. Campolo , A. Cortese and A. Molinaro, “CRaSCH: A cooperative scheme for service channel reservation in 802.11p/WAVE vehicular ad hoc networks”, International Conference on Ultra Modern Telecommunications & Workshops, pp.1-8, Oct. 2009.

[4] D. Jiang , V. Taliwal, A. Meier, W. Holfelder and R. Herrtwich, “Design of 5.9 ghz dsrc-based vehicular safety communication”, IEEE Wireless Communications, Vol. 13, No.5, pp.36-43, Oct. 2006.

[5] Draft Guide for Wireless Access in Vehicular Environments - Architecture, IEEE P1609.0/D3.2, Apr. 2012.

[6] Draft Standard for Wireless Access in Vehicular Environments - Security Services for Applications and Management Messages, IEEE P1609.2/D15, Apr. 2012.

[7] IEEE Standard for Wireless Access in Vehicular Environments - Networking Services, IEEE Std. 1609.3-2010, Dec. 2010.

[8] IEEE Standard for Wireless Access in Vehicular Environments - Multi-Channel Operation, IEEE Std. 1609.4-2010, Feb. 2011.

[9] Draft Standard for Wireless Access in Vehicular Environments - Remote Management Services, IEEE P1609.6/D0, Apr. 2012.

[10] IEEE Standard for Wireless Access in Vehicular Environments - Over the Air Electronic Payment Data Exchange Protocol for Intelligent Transportation Systems (ITS), IEEE Std. 1609.11-2010, Jan. 2011.

[11] Draft Standard for Wireless Access in Vehicular Environments - Identifier Allocations, IEEE P1609.12/D7, Jun. 2012.

[12] H. Moustafa and Y. Zhang, “Vehicular Networks: Techniques, Standards, and Applications”, Auerbach Publications, 2009.

[13] A. Benslimane, “Optimized Dissemination of Alarm Messages in Vehicular Ad-Hoc Networks (VANET)”, In Proc. of International Conference on High Speed Networks and Multimedia Communications, Vol. 3079/2004, pp.655-666, 2004.

[14] C. Maihöfer and R. Eberhardt, “Time-stable geocast for ad hoc networks and its application with virtual warning signs”, Computer Communications, Vol. 27, No.11, pp.1065-1075, July 2004.

[15] S. Biswas, R. Tatchikou and F. Dion, “Vehicle-to-Vehicle Wireless Communication Protocols for Enhancing Highway Traffic Safety”, IEEE Communications Magazine, Vol. 44, No.1, pp.74-82, Jan. 2006.

[16] F. Ye, M. Adams and S. Roy, “V2V Wireless Communication Protocol for Rear-End Collision Avoidance on Highways”, In Proc. of IEEE International Conference on Communications Workshops, pp.375-379, May 2008.

[17] B. Hoh, M. Gruteser, R. Herring, J. Ban, D. Work, J. Herrera, A. M. Bayen, M. Annavaram and Q. Jacobson, “Virtual trip lines for distributed privacy-preserving traffic monitoring” In Proc. of the international conference on Mobile systems, applications, and services, pp.15-28, 2008.

[18] A. Benmimoun, J.Chen and T. Suzuki, “Design and practical evaluation of an intersection assistant in real-world tests”, In Proc. of the IEEE Intelligent Vehicles Symposium, pp.606-611, June 2007.

[19] Z. H. Xiao, Z. Q. Guan and Z. H. Zheng, “The Research and Development of the Highway's Electronic Toll Collection System”, International Workshop on Knowledge Discovery and Data Mining, pp.359-362, 2008.

[20] A. Boukerche, H. Oliveira, E. Nakamura and A. Loureiro, “Vehicular Ad Hoc Networks: A New Challenge for Localization-Based Systems”, Computer Communications, Vol. 31, No.12, pp.2838-2849, July 2008.

[21] M. Grewal, L.Weill and A. Andrews, “Global Positioning Systems, Inertial Navigation and Integration”, New York: Wiley, 2001.

[22] J. A. Farrell, T. D. Givargis and M. J. Barth, “Real-Time Differential Carrier Phase GPS-Aided INS”, IEEE Transactions on Control Systems Technology, Vol. 8, No.4, pp. 709-721, July 2000.

[23] S. Rezaei and R. Sengupta, “Kalman filter based integration of DGPS and vehicle sensors for localization”, IEEE International Conference on Mechatronics and Automation, Vol. 1, pp.455-460, Aug. 2005.

[24] W. Ochieng, J. Polak, R. Noland, J. Y. Park, L. Zhao, D. Briggs, J. Gulliver, A. Crookell, R. Evans, M. Walker and W. Randolph, “Integration of GPS and dead reckoning for real-time vehicle performance and emissions monitoring”, GPS Solutions, Vol. 6, No.4, pp.229-241, Mar. 2003.

[25] M. Davy, E. Duflos and P. Vanheeghe, “Particle Filtering for Multisensor Data Fusion With Switching Observation Models: Application to Land Vehicle Positioning”, IEEE Transactions on Signal Processing, Vol. 55, No.6, pp.2703-2719, June 2007.

[26] R. Parker and S. Valaee, “Vehicle Localization in Vehicular Networks”, In Proc. of IEEE Vehicular Technology Conference, pp.1-5, Sep. 2006.

[27] N. Alam, A. Tabatabaei Balaei and A. G. Dempster, “A DSRC Doppler-Based Cooperative Positioning Enhancement for Vehicular Networks With GPS Availability”, IEEE Transactions on Vehicular Technology, Vol. 60, No.9, Nov 2011.

[28] S. Fujii, A. Fujita, T. Umedu, S. Kaneda, H. Yamaguchi, T. Higashino and M. Takai, “Cooperative Vehicle Positioning via V2V Communications and Onboard Sensors”, IEEE Vehicular Technology Conference, pp.1-5, Sep. 2011.

[29] J. Huang and H.-S. Tan, “A Low-Order DGPS-Based Vehicle Positioning System Under Urban Environment”, IEEE/ASME Transactions on Mechatronics, Vol. 11, No.5, pp.567-575, Oct. 2006.

[30] M. Matosevic, Z. Salcic and S. Berber, “A Comparison of Accuracy Using a GPS and a Low-Cost DGPS”, IEEE Transactions on Instrumentation and Measurement, Vol. 55 , No.5, pp.1677-1683, Oct. 2006.

[31] M. Aly, “Real time detection of lane markers in urban streets”, IEEE Intelligent Vehicles Symposium, pp.7-12, June 2008.

[32] J.C. McCall and M.M. Trivedi, “Video-Based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation”, IEEE Transactions on Intelligent Transportation Systems, Vol. 7, No.1, pp.20-37, Mar. 2006.

[33] M. A. Quddus and W. Y. Ochieng, Z. Lin Zhao, R. B. Noland, “A general map matching algorithm for transport telematics applications”, GPS Solutions, Vol. 7, No.3, pp.157-167, 2003.

[34] J. Du and M. J. Barth, “Next-Generation Automated Vehicle Location Systems: Positioning at the Lane Level” IEEE Transactions on Intelligent Transportation Systems, Vol. 9, No.1, pp.48-57, Mar. 2008.


[35] K. Muthukrishnan, M. Lijding and P. Havinga, “Towards Smart Surroundings: Enabling Techniques and Technologies for Localization”, In Proc. of International Workshop on Location- and Context-Awareness, Vol. 3479, pp.209-227, 2005.

[36] G. Sun, J. Chen, W. Guo and K.J.R. Liu, “Signal processing techniques in network-aided positioning: a survey of state-of-the-art positioning designs”, IEEE Signal Processing Magazine, Vol. 22, No.4, pp.12-23, July 2005.

[37] R. Parker and S. Valaee, “Vehicular Node Localization Using Received-Signal-Strength Indicator”, IEEE Transactions on Vehicular Technology, Vol. 56, No.6, pp.3371-3380, Nov. 2007.

[38] J. Xu, W. Liu, F. Lang, Y. Zhang and C. Wang, “Distance Measurement Model Based on RSSI in WSN”, Wireless Sensor Network, Vol. 2, No.8, pp.606-611, Aug. 2010.

[39] D.M. Etter, D. Kuncicky and D. Hull, “Introduction to MATLAB 7”, Prentice Hall, 2005.

[40] W.J. Palm, “Introduction to MATLAB 7 for Engineers”, McGraw Hill, 2003.

[41] C.A. Rahman, W. Badawy and A. Radmanesh, “A real time vehicle's license plate recognition system”, In Proc. of IEEE Conference on Advanced Video and Signal Based Surveillance, pp.163-166, July 2003.

[42] S.-L. Chang, L.-S. Chen, Y.-C. Chung and S.-W. Chen, “Automatic license plate recognition”, IEEE Transactions on Intelligent Transportation Systems, Vol. 5, No.1, pp.42-53, Mar. 2004.

[43] D. R. Magee, “Tracking multiple vehicles using foreground, background and motion models”, Image and Vision Computing, Vol. 22, No.2, pp.143-155, Feb. 2004.

