
Modeling Vehicle-to-Vehicle Visible Light Communication Link Duration with Empirical Data

Li-Che Wu and Hsin-Mu Tsai

Intel-NTU Connected Context Computing Center and Department of Computer Science and Information Engineering

National Taiwan University, Taipei, Taiwan {r00922133,hsinmu}@csie.ntu.edu.tw

Abstract—Visible Light Communication (VLC) is a fast-growing technology that provides free-space wireless communications using LEDs and photodiodes. As LEDs become common in automotive lighting, Vehicular VLC (V2LC) becomes a new and low-cost solution to implement vehicle-to-vehicle (V2V) communications, in order to support many new safety and infotainment applications. In this paper, we take an experimental approach to measure the distribution of V2LC link duration. A video recorder was mounted on a taxi car while it was driven around the city, and the recorded video was post-processed to identify the taillights of other vehicles. Then, the duration for which a taillight stays in the video can be used as an approximation of the duration of a V2LC link, if some vehicles were equipped with VLC-capable taillights. Our measurement results suggest that on average the V2LC link duration is on the order of several seconds, while the numbers can vary significantly in different scenarios. It was also found that the generalized Pareto distribution can be used to model the V2LC link duration. Finally, the empirical distributions of link duration reported in this paper will be useful for future system design and performance evaluation.

I. INTRODUCTION

Vehicle-to-Vehicle (V2V) communications have become an active research topic in the past decade. Researchers have envisioned that, when a vehicle is given the capability to directly exchange information with neighboring vehicles, many new safety and infotainment applications can be provided to the on-board driver and the passengers. Dedicated Short Range Communications (DSRC) [1] is often regarded, both in the research community and in industry, as the most promising RF technology and standard to be employed for this purpose, due to the fact that it was specifically designed to work in highly mobile and dynamic scenarios. However, to date there is no adoption of the technology in commercially available vehicles.

The low incentive for adoption is primarily due to the fact that these new applications require a minimum market penetration to function properly. Even if all newly purchased vehicles were equipped with DSRC starting from today, it would still take a few years to reach that market penetration rate. As a result, the benefit of equipping a vehicle with DSRC cannot sufficiently justify the high cost of a DSRC radio.

Visible Light Communications (VLC) utilizes visible optical signals to carry digital information in free space. It usually employs an LED as the transmitting source and a photodiode or a camera sensor as the receiving component. Many current commercially available vehicles have already employed LEDs for their lighting, such as in third brake lights, brake lights, turn signals, and headlamps, due to their high resistance to vibration, long average life, and short rise time. As a result, Vehicular VLC (V2LC) emerges as a new cost-effective way to implement V2V communications [2]–[5], since the main transmitting components already exist in current vehicles.

Fig. 1. Estimating V2LC link duration with VEDR video: (a) a sample captured image; (b) VEDR location.

To evaluate the performance of the applications enabled by V2V communications, it is crucial to determine the distribution of the duration of the communication link between two vehicles. In addition, the conditional distribution of the link duration, given that the vehicle is in a certain condition, such as stopping at a red light, traveling at a certain speed, or driving on a certain road type, could be very useful for the application to adjust its parameters accordingly, so that a dropped connection is less likely to happen in the middle of an active transmission and affect the end performance.

There have been many studies [6]–[9], empirical or theoretical, to determine the distribution of the link duration for V2V communications using conventional Radio Frequency (RF) radios, e.g., DSRC. However, the propagation channel characteristics of V2LC are significantly different from those of RF; the most significant difference is that a VLC link can only function in a Line-of-Sight (LOS) condition, whereas RF can function in a Non-Line-of-Sight (NLOS) condition¹. Therefore, existing works cannot be used for estimating the link duration of V2LC.

In this paper, we take an empirical approach to model the link duration for V2LC. A Video Event Data Recorder (VEDR), mounted on board a taxi at a location where a V2LC receiver would be, is employed to capture images of the surroundings while the taxi was driven around the city of Taipei (see Figure 1). The captured images are then post-processed and the taillights of other vehicles in the images are detected.

Since obviously no current vehicle on the road is equipped with VLC-capable lighting, it is assumed that all detected taillights have that capability; then, the duration for which a detected taillight stays within the images can be regarded as the duration of the link between the vehicle with the detected taillight and the host vehicle. Approximately 30 hours of video footage were processed to obtain the empirical distribution of link duration.

One great advantage of this approach is that the obtained distribution is applicable to all VLC technologies, including those that use a photodiode receiver or a camera receiver, as well as other optical wireless technologies; the approach only takes into consideration the link establishments and disruptions caused by the dynamics of the vehicles, and thus is independent of the physical layer design.

The contributions of this paper include the following:

(1) Empirical distributions of V2LC link duration are calculated and obtained for a few different scenarios, such as the type of the road, urban versus non-urban areas, etc. They are useful for the system designer to determine what transmission parameters should be used in a given scenario, and to estimate the overall system performance.

(2) We propose the use of the generalized Pareto distribution to model V2LC link duration. This can be used in theoretical studies or simulator implementations for performance evaluation of a V2LC system.

(3) The developed taillight detection algorithm, combined with the threshold values used in the algorithm, can be re-used to process videos obtained in different conditions, so that a more accurate estimation of V2LC link duration can be obtained in scenarios that our experiments did not cover.

II. EXPERIMENTAL METHOD

A. General Setting

As we use the duration for which a taillight stays within the image to approximate the duration of a VLC link between two vehicles, it is crucial to use a camera with a field-of-view angle and range performance similar to those of a VLC receiver, and the camera needs to be mounted at a location where a VLC receiver would be. In our experiments, we mounted the VEDR slightly below the center rearview mirror, at the center of the front windshield of a taxi car (see Figure 1), while it was operated normally by its driver for business. The taxi operated in the urban and suburban areas of Taipei City and New Taipei City, Taiwan. A Papago P2X VEDR was used for our experiments; its specification is summarized in Table I. Note that the VEDR is equipped with a GPS chip, and the location of the vehicle can be recorded along with the video images, so that the images can be categorized into different scenarios, such as by the type of the road. The video recorder of the VEDR is designed to capture high-quality images during both the daytime and the nighttime. The VEDR saves all recorded video files and GPS traces to the on-board SD storage card for further processing. In total, approximately 30 hours of video footage were collected and processed.

¹NLOS VLC is possible at the cost of degraded performance, either with a much lower communication range or a very high bit error rate. Since communications between two vehicles generally need to be very reliable and happen over long ranges, only LOS VLC is considered here.

TABLE I. THE SPECIFICATION OF THE VEDR USED IN EXPERIMENTS

Parameter            Value
Resolution           1920x1080 (1080p), down-sampled to 960x540 for taillight detection
Field-of-View Angle  127 degrees
Frame rate           29 frames/s

We employed image processing techniques for taillight detection that can be found in the existing literature to process the images retrieved from the recorded video. One of the biggest differences between our detection scheme and others is that we adjust the parameters so that the detection precision is as high as possible, i.e., all detected taillights are indeed taillights and there are very few false detections, while as a tradeoff the detection recall can be somewhat low, i.e., not all taillights in the images can be detected. The former ensures that the obtained statistics of the link duration are accurate, contributed mostly by real taillights rather than false detections, while the latter can be considered a way to emulate a market penetration rate of less than 100%: not all vehicles are equipped with VLC-capable taillights.

To satisfy these requirements, we choose to process only the video captured during nighttime, and to retrieve only taillights, not headlamps, from the images. The reason for the former is that most vehicles only turn on their taillights during the nighttime, and in this case it is much easier to detect a taillight by its high brightness; during the daytime, a much more complex scheme utilizing the symmetry of the car body or the detection of the license plate is needed to detect the taillight, and it would most likely have a lower detection precision. The reason for the latter is that vehicle headlamps have an appearance similar to many other light sources along the road; the detection precision for headlamps would also be much lower than that for taillights, whose unique red color can be utilized in the detection scheme.

B. Taillight Detection

In this subsection, we will briefly describe how the taillights are detected in the captured video. The video is first converted into a sequence of images, and each image goes through multiple processing stages, described as follows. Figure 2 shows the results after each processing stage.

1) High brightness area extraction: One of the key characteristics of vehicle taillights is their very high brightness compared to other objects in the images; thus it is natural to use a scheme that discards all pixels with a grayscale value below a threshold. However, as the lighting condition changes when the taxi moves to different parts of the city, employing a single threshold for all images is not feasible.


Fig. 2. An example of taillight detection after each processing stage: (a) original image; (b) after high brightness area extraction; (c) after geometry filtering; (d) after the final stage, color filtering. The blue rectangles mark all taillight candidates in the images.

[10] describes a scheme that can be used to determine the right threshold to separate the foreground, i.e., objects of interest, from the background under different lighting conditions.

The main idea of this scheme is that the pixels in each image usually have brightness values clustered in several groups with clear boundaries, and the scheme attempts to obtain all these boundaries and only retain the pixels in the brightest group.

The scheme first creates a histogram of the grayscale values of all pixels in the image. Then it goes through a few iterations, in each of which it selects a class of pixels to be split into two classes, with an optimal threshold value that maximizes the between-class variance. The splitting process stops when the between-class variance of the whole image is higher than a pre-determined threshold value². Finally, only the brightest class of pixels is treated as the foreground, i.e., the lower bound of this class is used as the threshold value, and the rest of the pixels are discarded. The original image is converted into a binary digital image, with the value of the foreground pixels set to 1, and the rest set to 0.
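The paper does not provide an implementation, but the splitting loop is compact to sketch. Below is a minimal Python sketch (the language, the function names, and the simplification of always splitting the brightest class are our assumptions, not the paper's): it recomputes Otsu's threshold over the current brightest class and stops once the normalized between-class variance of the resulting foreground/background split exceeds 0.9.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    # Otsu's method [10]: pick the histogram split that maximizes
    # the between-class variance of the two resulting classes.
    hist = np.bincount(values.ravel(), minlength=nbins).astype(float)
    levels = np.arange(nbins)
    w0, s0 = np.cumsum(hist), np.cumsum(hist * levels)
    w1, s1 = w0[-1] - w0, s0[-1] - s0
    mu0 = np.divide(s0, w0, out=np.zeros(nbins), where=w0 > 0)
    mu1 = np.divide(s1, w1, out=np.zeros(nbins), where=w1 > 0)
    return int(np.argmax(w0 * w1 * (mu0 - mu1) ** 2))

def brightest_class_mask(gray, eta_target=0.9):
    # Repeatedly split the brightest class; stop once the separability
    # (between-class variance over total variance) of the current
    # foreground/background split exceeds eta_target.
    lo, total_var, n = 0, gray.astype(float).var(), gray.size
    while total_var > 0:
        t = otsu_threshold(gray[gray >= lo])
        if t <= lo:
            break
        lo = t
        fg = gray[gray >= lo].astype(float)
        bg = gray[gray < lo].astype(float)
        if fg.size == 0 or bg.size == 0:
            break
        sigma_b = (fg.size / n) * (bg.size / n) * (fg.mean() - bg.mean()) ** 2
        if sigma_b / total_var >= eta_target:
            break
    return (gray >= lo).astype(np.uint8)   # binary image: foreground = 1
```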

2) Connected-component labeling: In this stage, pixels with value 1 are merged into connected regions, representing possible candidates of taillight detections. We assume 8-connectivity in the process, i.e., a pixel with value 1 is connected with another pixel with value 1 to its north, north-east, east, south-east, south, south-west, west, or north-west. The two-pass algorithm described in [11] is used: the first pass records equivalences and assigns temporary labels, and the second pass replaces each temporary label with the label of its equivalence class. The output of this stage is a two-dimensional array with each pixel labeled with its connected region identifier; all background pixels keep the value 0.

²0.9 is used in our experiments, as suggested in [10].
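For reference, the same labeling can be reproduced with an off-the-shelf routine; the sketch below (Python with SciPy, our choice rather than the paper's) uses an 8-connected structuring element so the result matches the two-pass algorithm of [11].

```python
import numpy as np
from scipy import ndimage

# 8-connectivity: a foreground pixel connects to all eight neighbors.
EIGHT_CONNECTED = np.ones((3, 3), dtype=int)

def label_regions(binary):
    # Returns an array of the same shape where each connected region
    # of 1-pixels carries a positive identifier; background stays 0.
    labels, num_regions = ndimage.label(binary, structure=EIGHT_CONNECTED)
    return labels, num_regions

# Two diagonally touching pixels form one region under 8-connectivity.
demo = np.array([[1, 0, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
labels, n = label_regions(demo)   # n == 1
```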

3) Geometry filtering: In this stage, the values of a few geometry metrics of the connected regions obtained from the last stage are calculated. Similar to the method described in [12], these values are then used to filter out candidates with values that a taillight is unlikely to have.

Let $L_i$ be the $i$-th connected region, i.e., the taillight candidate, and also represent the set of pixels in that region, i.e., $L_i = \{p_1, p_2, \ldots, p_n\}$, where $p_1, p_2, \ldots, p_n$ are the pixels in $L_i$. Let $X(p_i)$ and $Y(p_i)$ denote the X and Y coordinates of the pixel $p_i$. Then the width and the height of $L_i$ are defined as the width and the height of the smallest bounding box for $L_i$, respectively given by

$$W(L_i) = \max_{p_j \in L_i} X(p_j) - \min_{p_j \in L_i} X(p_j), \quad H(L_i) = \max_{p_j \in L_i} Y(p_j) - \min_{p_j \in L_i} Y(p_j). \quad (1)$$

The geometry metrics used in this stage include:

a. The width-to-height ratio of $L_i$, given by
$$R_{wh}(L_i) = \frac{W(L_i)}{H(L_i)}. \quad (2)$$

b. The area of the smallest bounding box for $L_i$, given by
$$A(L_i) = W(L_i) \cdot H(L_i). \quad (3)$$

c. The bounding box high brightness density for $L_i$, given by
$$D(L_i) = \frac{|L_i|}{A(L_i)}. \quad (4)$$

Thresholds for these metrics are pre-determined with the methodology described in II-B5. Then, taillight candidates with values that fall outside of the range determined by the thresholds are discarded.
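As an illustration, the geometry filter of Eqs. (1)-(4), with the bounds later listed in Table II, might look as follows in Python (a sketch under our naming assumptions; `None` stands for an unused bound):

```python
import numpy as np

# Bounds from Table II; None means the bound is not used.
GEOMETRY_BOUNDS = {
    "r_wh":    (0.3, 1.5),    # width-to-height ratio, Eq. (2)
    "area":    (50, 1000),    # bounding-box area, Eq. (3)
    "density": (0.6, None),   # high-brightness density, Eq. (4)
}

def passes_geometry_filter(pixels):
    # `pixels` is an (n, 2) array of (x, y) coordinates of one region.
    xs, ys = pixels[:, 0], pixels[:, 1]
    w = xs.max() - xs.min()                    # W(L_i), Eq. (1)
    h = ys.max() - ys.min()                    # H(L_i), Eq. (1)
    area = w * h                               # A(L_i), Eq. (3)
    if h == 0 or area == 0:
        return False                           # degenerate region
    metrics = {
        "r_wh": w / h,                         # Eq. (2)
        "area": area,
        "density": len(pixels) / area,         # Eq. (4)
    }
    for name, (lo, hi) in GEOMETRY_BOUNDS.items():
        v = metrics[name]
        if (lo is not None and v < lo) or (hi is not None and v > hi):
            return False
    return True
```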

4) Color filtering: To take advantage of the fact that taillights are usually red, in this stage we calculate the values of several color metrics of the taillight candidates. Similar to the last stage, candidates with values that a taillight is unlikely to have are discarded.

For this purpose, we first convert the original image from the RGB representation to the HSV representation, in order to alleviate the dependency of the threshold values on the characteristics of the individual camera sensor and the lighting conditions. Let $H(p_i)$, $S(p_i)$, and $V(p_i)$ denote the hue, saturation, and value of pixel $p_i$, respectively.

The following color metrics are used in this stage:

a. The average hue of $L_i$, given by
$$\mu_H(L_i) = \frac{\sum_{p_i \in L_i} H(p_i)}{|L_i|}. \quad (5)$$

b. The average saturation of $L_i$, given by
$$\mu_S(L_i) = \frac{\sum_{p_i \in L_i} S(p_i)}{|L_i|}. \quad (6)$$

c. The average value (intensity) of $L_i$, given by
$$\mu_V(L_i) = \frac{\sum_{p_i \in L_i} V(p_i)}{|L_i|}. \quad (7)$$

d. The ratio of red pixels in $L_i$, given by
$$R_r(L_i) = \frac{|\{p_i \in L_i \mid H(p_i) \le 30 \text{ or } H(p_i) \ge 330\}|}{|L_i|}. \quad (8)$$
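These metrics translate directly to code. The sketch below (Python; our assumptions: the HSV conversion is already done and hue is expressed in degrees) also maps hues above 300 degrees to negative values before averaging, which is our reading of why Table II lists a lower bound of -60 for μ_H; that mapping is an assumption, not stated in the paper.

```python
import numpy as np

def color_metrics(hsv_pixels):
    # `hsv_pixels` is an (n, 3) array of (hue in degrees [0, 360),
    # saturation, value) rows for the pixels of one region.
    h, s, v = hsv_pixels[:, 0], hsv_pixels[:, 1], hsv_pixels[:, 2]
    # Red hues straddle 0 degrees; shifting hues >= 300 down by 360
    # before averaging (an assumption consistent with the -60 lower
    # bound in Table II) avoids the wrap-around problem in Eq. (5).
    mu_h = np.where(h >= 300, h - 360.0, h).mean()    # Eq. (5)
    mu_s = s.mean()                                   # Eq. (6)
    mu_v = v.mean()                                   # Eq. (7)
    red_ratio = np.mean((h <= 30) | (h >= 330))       # Eq. (8)
    return {"mu_h": mu_h, "mu_s": mu_s, "mu_v": mu_v, "r_r": red_ratio}
```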

5) Determining the threshold: To determine the optimal thresholds for the filtering schemes described in II-B3 and II-B4, the empirical distributions of these geometry and color metrics need to be determined. To this end, approximately 50 frames (images) were randomly selected from the collected video, and in these images all actual taillights were manually identified and labeled. All other taillight candidates were also labeled, as false detections. Then the values of the geometry and color metrics of all taillight candidates were calculated. As an example, Figure 3 shows the distributions of two color metrics, $\mu_V(L_i)$ and $R_r(L_i)$, for all taillight candidates in the 50 randomly selected images. One can observe that the taillights can be better separated from false detections than the headlamps can, which justifies our choice of performing only taillight detection in the first place.

The detection precision is defined as the portion of actual taillights in all detected taillights. The detection recall is defined as the portion of detected taillights in all actual taillights.

To determine the threshold(s) for each metric, starting from the lowest or the highest possible value, the threshold is gradually increased or decreased to raise the detection precision until it reaches 90%. During this process, the filtering with all other metrics is not performed. This process is repeated for each geometry and color metric. Table II shows the final threshold values determined from this process and used for taillight detection in all images from the video.

TABLE II. THRESHOLDS FOR THE GEOMETRY AND COLOR FILTERING

Metric       Lower bound   Upper bound
R_wh(L_i)    0.3           1.5
A(L_i)       50            1000
D(L_i)       0.6           not used
μ_H(L_i)     -60           20
μ_S(L_i)     0.25          not used
μ_V(L_i)     0.6           not used
R_r(L_i)     0.6           not used

Fig. 3. The distribution of μ_V(L_i) and R_r(L_i) for taillights, headlamps, and false detections.

It is worth noting that special consideration needs to be taken for the lower bound used for $A(L_i)$. A taillight of a vehicle that is far away occupies only a small area in the image. To determine a proper lower bound threshold for $A(L_i)$, we recorded a video with a typical vehicle taillight placed 30 meters from the VEDR, and calculated $A(L_i)$ for that taillight, which is approximately 50. We assume that V2LC has a range of about 30 meters, slightly higher than the range of the prototype in [5]. With this assumption, the lower bound threshold for $A(L_i)$ can be safely set at 50, and only taillights that are 30 meters or farther away, i.e., out of the transmission range of V2LC, would be discarded due to this threshold.

With the 50 selected frames, the taillight detection scheme presented in this section generates a detection precision of 91.3%, while the detection recall is 32.3%.
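The single-metric sweep described in II-B5 is straightforward to reproduce. A minimal sketch (our naming; shown for a lower bound only, the upper-bound case is symmetric) over the manually labeled candidates:

```python
import numpy as np

def sweep_lower_bound(values, is_taillight, step=0.01, target=0.90):
    # Tighten one lower-bound threshold, starting from the loosest
    # possible value, until the precision over the labeled candidates
    # reaches the target; other metrics are ignored during the sweep.
    vals = np.asarray(values, dtype=float)
    truth = np.asarray(is_taillight, dtype=bool)
    t = vals.min()
    while True:
        kept = vals >= t
        if not kept.any():
            return None                    # target precision unreachable
        if truth[kept].mean() >= target:   # precision = true / kept
            return t
        t += step
```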

C. Taillight Tracking and Duration Estimation

After taillights are detected in each image, the next step is to connect detections that belong to the same taillight across different frames, so that the duration for which a taillight stays in the video can be calculated. The flow chart of the tracking algorithm that we developed for this purpose is shown in Figure 4. Table III shows the information that the algorithm saves for each taillight identified in the video, corresponding to a particular V2LC link between the host vehicle, i.e., the taxi carrying the VEDR, and a neighboring vehicle.

TABLE III. VARIABLES USED FOR SAVING THE INFORMATION OF EACH TAILLIGHT

Notation   Description
t_tla      Time to the last appearance of the taillight (in number of frames)
t_s        Time of the first appearance
t_e        Time of the last appearance
x          X coordinate of the center of mass of the last appearance of the taillight
y          Y coordinate of the center of mass of the last appearance of the taillight
a          Area of the taillight in its last appearance (in number of pixels)
r_r        The ratio of red pixels in the taillight in its last appearance
mu_v       Average value of the taillight in its last appearance

Fig. 4. The block diagram of the tracking algorithm

For each taillight detection in a particular frame, the tracking algorithm looks for the closest taillights found in previous frames, and checks whether they are sufficiently close and similar in geometry and color. If so, the algorithm considers the taillight detection in the current frame to be part of that taillight. Otherwise, the algorithm considers this a new taillight that has not been detected before, and creates an instance in the data structure to store the information of this taillight. The thresholds for deciding whether the detections are sufficiently close or similar are determined empirically using manually labeled data: detections belonging to a few taillights in different frames are manually identified and selected, the differences of the distance, geometry, and color metrics are calculated for them, and the maximum observed difference values are used as the thresholds (see Figure 4 for the used values).

In addition, the algorithm allows a taillight to go undetected, possibly due to an instantaneous change of lighting conditions, for a short period of time of up to 1 second (29 frames). If the taillight is detected again within that period, the detection can still be considered to belong to the same taillight. However, if a taillight is not detected for more than 1 second, the link is considered to have been terminated; if it appears again later, it is considered a different taillight, i.e., a different link. This approach also makes sense when considering the link duration: a link that is dropped for more than 1 second in the middle is usually considered as two separate link durations.

The tracking algorithm is executed for each video to identify all taillights, and to calculate the duration for which each of them stays in the video, using the time stamps of its first and last appearances in the video.
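A compact sketch of this per-frame association (Python; the class and function names are ours, and the matching is simplified relative to the full flow chart in Figure 4, whose exact tolerance values are not reproduced here):

```python
FPS = 29
MAX_GAP_FRAMES = 29   # a track may go undetected for up to about 1 s

class Track:
    def __init__(self, detection, frame):
        self.t_s = self.t_e = frame        # first/last appearance
        self.last = detection              # last matched detection

def step(tracks, detections, frame, is_close_and_similar):
    # Finalize tracks silent for more than MAX_GAP_FRAMES; each one
    # yields a link duration from its first and last appearance.
    done = [t for t in tracks if frame - t.t_e > MAX_GAP_FRAMES]
    live = [t for t in tracks if frame - t.t_e <= MAX_GAP_FRAMES]
    for det in detections:
        # Attach the detection to a matching live track, otherwise
        # start a new track (i.e., a new V2LC link).
        match = next((t for t in live
                      if is_close_and_similar(t.last, det)), None)
        if match is None:
            live.append(Track(det, frame))
        else:
            match.last, match.t_e = det, frame
    durations = [(t.t_e - t.t_s + 1) / FPS for t in done]   # seconds
    return live, durations
```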

III. RESULTS

In this section, we will present the V2LC link duration measurement results estimated from our recorded video. In approximately 30 hours of video footage, 79,127 taillights were detected by our algorithm. However, it was found that taillights that stay in the video for less than 1 second are usually false detections. Therefore, to prevent them from skewing the statistics, these taillights are removed. Afterwards, 19,688 taillights/links remain, and their statistics are analyzed and presented in the following.
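The CCDFs presented below are computed from these retained durations; a small sketch (Python, our naming) of the filtering and the empirical CCDF:

```python
import numpy as np

def empirical_ccdf(durations, min_duration=1.0):
    # Discard links shorter than min_duration (mostly false
    # detections) and return points for a CCDF plot: for each
    # duration x, the fraction of links strictly longer than x.
    d = np.sort(np.asarray([x for x in durations if x >= min_duration]))
    ccdf = 1.0 - np.arange(1, d.size + 1) / d.size
    return d, ccdf
```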

A. Urban versus Non-Urban

As GPS traces are recorded along with the video by the VEDR, we are able to separate the links into those that take place in urban areas and those that take place in non-urban areas. Figure 5(a) compares the complementary cumulative distribution functions (CCDF) of the V2LC link duration in urban and non-urban areas. At first sight, it seems counter-intuitive that the link duration in the non-urban area is on average longer than in the urban area, as in the urban area there are more vehicles and more opportunities to establish V2LC links. Further investigation reveals that, while this is true, it also results in a much higher number of short-duration links in the urban area, which significantly lowers the average link duration. This can be observed in Figure 5(f).

B. Number of Lanes

Figure 5(b) compares the CCDFs of the V2LC link duration on roads with different numbers of lanes in the urban area. One can observe that the CCDF for the 1-lane road drops more slowly than the other two, indicating that on average the link duration when driving on a 1-lane road is longer. This is again due to the fact that there are more short-duration links in the other two scenarios, although the opportunity of establishing V2LC links is higher in these cases.

C. Red-Light Stopping

Figure 5(c) compares the CCDF of the V2LC link duration when stopping at a red light with the normal one. The periods of time during which the host vehicle stopped at a red light were marked manually by inspecting the video. Then the V2LC link durations that overlap with the red-light periods were calculated to generate the CCDF. One can observe that the CCDF of the link duration during red-light stopping first drops more slowly than the normal one, but drops sharply after 100 seconds. This is expected, as in this case the link duration is highly correlated with the duration of the red light; it is generally on the order of tens of seconds, but rarely more than 100 seconds.

Fig. 5. CCDF and histogram of link duration in different cases: (a) CCDF, urban versus non-urban; (b) CCDF, number of lanes; (c) CCDF, red-light stopping; (d) CCDF, field-of-view angle; (e) CCDF, car versus scooter; (f) histogram, urban versus non-urban.

Fig. 6. Portion of image to be cut off with different field-of-view angles

D. Field-of-View Angle

Different VLC receivers have different field-of-view angles. Optical transmissions arriving from outside the field-of-view angle of the receiver are generally not received, as the received power is negligible. To understand the impact of different field-of-view angles on the link duration, measurements were carried out to determine the corresponding portion of pixels to be cut out for different field-of-view angles (see Figure 6).
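The paper determined the cut-off portions by measurement; under an ideal rectilinear-lens assumption (ours, for illustration only), the kept fraction of the image width for a narrower field of view can be approximated as follows:

```python
import math

def kept_width_fraction(target_fov_deg, camera_fov_deg=127.0):
    # For a rectilinear projection, the half-width of the image is
    # proportional to tan(FOV/2), so emulating a narrower receiver
    # keeps only this central fraction of each image row.
    return (math.tan(math.radians(target_fov_deg) / 2.0)
            / math.tan(math.radians(camera_fov_deg) / 2.0))

print(kept_width_fraction(90))   # roughly 0.50 of the width is kept
print(kept_width_fraction(60))   # roughly 0.29
```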

Figure 5(d) shows the CCDF of link duration with different field-of-view angles. One can observe that there is almost no difference in the CCDF when the field-of-view angle is reduced from 127 to 90 degrees, but reducing it from 90 to 60 degrees has a noticeable impact, resulting in a longer average link duration. Taillights that have a large incidence angle to the receiver usually do not have a long link duration with the host vehicle, and thus reducing the field-of-view angle could eliminate these short-duration links and increase the average link duration.

TABLE IV. AVERAGE LINK DURATION IN VARIOUS SCENARIOS

Scenario             Average Link Duration (s)
All scenarios        6.72
1-lane (urban)       8.61
2-lane (urban)       6.36
3-lane (urban)       6.14
Red-light stopping   14.74
Car only             10.45
Scooter only         5.14

E. Car versus Scooter

Figure 5(e) compares the CCDFs of the link duration of car-to-car links and car-to-scooter links. As it is difficult to develop an algorithm to classify whether a taillight belongs to a car or a scooter, 10% of the video files were randomly selected; in these video files the taillights belonging to either a car or a scooter were manually labeled and used to generate the CCDFs. One can observe that the CCDF for scooters drops much faster than that for cars. Scooters have higher mobility than cars, especially in urban areas, since they can easily navigate between cars; in Taiwan they often travel between lanes and switch lanes frequently. As a result, they have a higher probability of moving out of the field-of-view angle of the receiver, which results in a lower average link duration.

F. Discussion

Table IV shows the average link duration in various scenarios. One can observe that there are significant differences between these numbers.

It is well known that the performance of vehicular ad hoc networks (VANETs) is often poor due to the high mobility of vehicles. The performance of the application or the upper-layer protocol heavily depends on the duration of the link, or, equivalently, the probability of a link breakage. It is therefore crucial for the application or the upper-layer protocol to estimate the link duration and adjust its parameters accordingly; this is very similar to the design of an adaptive communication system, where the system adjusts its operational parameters, such as transmission power or link data rate, based on the obtained knowledge of the communication channel condition.

Fig. 7. Urban and non-urban link duration distributions and respective fitted generalized Pareto distributions (non-urban: k = 1.931, σ = 2.929, RMSE = 0.001839; urban: k = 1.686, σ = 2.620, RMSE = 0.001260)

In a V2LC system, it is possible for the system to know the status of the vehicle, such as what type of road it is currently operating on, whether it is stopping at a red light at the moment, etc. Combined with the CCDFs presented in the previous subsections, the system can use these numbers to ensure, with a certain probability, that it completes the transmission to a neighboring vehicle before the next link breakage takes place. In addition, our results also provide a tool for the system designer to easily estimate the performance of a V2LC system.

IV. MODELING LINK DURATION DISTRIBUTION

Observing the obtained empirical probability density function (PDF) of the V2LC link duration, we found that the generalized Pareto distribution seems to be a good fit. Its PDF is given by

$$f(x) = \frac{1}{\sigma} \left( 1 + \frac{k(x - \theta)}{\sigma} \right)^{-\frac{1}{k} - 1}, \quad (9)$$

where $k$ is the shape parameter, $\sigma$ is the scale parameter, and $\theta$ is the location parameter, which is set to 1 due to the removal of data points less than 1 second. Figure 7 shows the urban and non-urban link duration distributions and their respective fitted generalized Pareto distributions. To evaluate the goodness of fit, the Root Mean Square Error (RMSE) is calculated for both fitted distributions; both numbers are close to 0, indicating tight fits. Similar results can be obtained for the distributions of the other scenarios and are not shown here due to space limits. It is worth mentioning that the use of the generalized Pareto distribution to model V2LC link duration differs from what is used to model link duration in RF-based vehicular networks; for example, [7] found that the link duration can be modeled as a log-normally distributed random variable.
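For reproduction, such a fit can be obtained with standard tools; below is a sketch using SciPy (our choice of tooling; the input file name is hypothetical), fixing the location parameter θ at 1 s because sub-second links were removed:

```python
import numpy as np
from scipy import stats

durations = np.loadtxt("link_durations.txt")   # hypothetical input

# Fit the generalized Pareto distribution of Eq. (9) with theta
# fixed at 1; SciPy's shape parameter c corresponds to k.
k, theta, sigma = stats.genpareto.fit(durations, floc=1.0)

# Goodness of fit: RMSE between the empirical density (histogram)
# and the fitted PDF, evaluated at the bin centers.
dens, edges = np.histogram(durations, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
fitted = stats.genpareto.pdf(centers, k, loc=theta, scale=sigma)
rmse = np.sqrt(np.mean((dens - fitted) ** 2))
```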

V. CONCLUSION

In this paper, we utilized video footage taken from a video camera mounted in a taxi car to estimate the distribution of V2LC link duration; the duration for which the taillights of neighboring vehicles stay within the video can be considered the V2LC link duration, if neighboring vehicles were equipped with VLC-capable taillights. It is found that on average the V2LC link duration of links to neighboring vehicles is more than 5 seconds, while in certain scenarios the average link duration can be as high as 15 seconds. The obtained distributions of link duration in different scenarios can be utilized in a V2LC system, combined with obtained vehicle status information, to determine the optimal transmission parameters, in order to lower the probability of a link breakage during an active transmission. In addition, these distributions can also be used to evaluate the system performance. Finally, it was found that the generalized Pareto distribution is a good fit to the empirical distribution of link duration, and it can easily be utilized in theoretical analysis and simulator implementations.

ACKNOWLEDGMENT

This work was supported by the National Science Council, National Taiwan University, and Intel Corporation under Grants NSC-101-2911-I-002-001 and NTU-102R7501.

REFERENCES

[1] Y. Morgan, “Notes on DSRC and WAVE standards suite: Its architec- ture, design, and characteristics,” IEEE Communications Surveys and Tutorials, vol. 12, no. 4, pp. 504–518, 2010.

[2] A. Ashok, M. Gruteser, N. Mandayam, J. Silva, M. Varga, and K. Dana, "Challenge: Mobile optical networks through visual MIMO," in Proc. of the Sixteenth Annual International Conference on Mobile Computing and Networking, 2010, pp. 105–112.

[3] C. B. Liu, B. Sadeghi, and E. W. Knightly, “Enabling vehicular visible light communication (V2LC) networks,” in Proc. ACM Intl. Workshop on VehiculAr Inter-NETworking (VANET), 2011, pp. 41–50.

[4] A. Cailean, B. Cagneau, L. Chassagne, S. Topsu, Y. Alayli, and J.-M. Blosseville, "Visible light communications: Application to cooperation between vehicles and road infrastructures," in Proc. IEEE Intelligent Vehicles Symposium (IV), 2012, pp. 1055–1059.

[5] S.-H. You, S.-H. Chang, H.-M. Lin, and H.-M. Tsai, “Demo: Visible light communications for scooter safety,” in ACM International Con- ference on Mobile Systems, Applications and Services (MobiSys), June 2013.

[6] I. Ho, K. Leung, J. Polak, and R. Mangharam, “Node connectivity in vehicular ad hoc networks with structured mobility,” in Proc. IEEE Conference on Local Computer Networks, 2007, pp. 635–642.

[7] G. Yan and S. Olariu, “A probabilistic analysis of link duration in vehic- ular ad hoc networks,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 4, pp. 1227–1236, 2011.

[8] W. Viriyasitavat, F. Bai, and O. Tonguz, “Dynamics of network con- nectivity in urban vehicular networks,” IEEE Journal on Selected Areas in Communications, vol. 29, no. 3, pp. 515–533, 2011.

[9] M. Hu, Z. Zhong, H. Zhu, M. Ni, and C.-Y. Chang, “Analytical modeling of link duration for vehicular ad hoc networks in urban environment,” in Proc. ACM International Workshop on VehiculAr Inter- NETworking, Systems, and Applications, 2013.

[10] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.

[11] K. Suzuki, I. Horiba, and N. Sugie, “Linear-time connected-component labeling based on sequential local operations,” Computer Vision and Image Understanding, vol. 89, no. 1, pp. 1–23, 2003.

[12] Y.-L. Chen, Y.-H. Chen, C.-J. Chen, and B.-F. Wu, "Nighttime vehicle detection for driver assistance and autonomous vehicles," in Proc. International Conference on Pattern Recognition, vol. 1, 2006, pp. 687–690.
