
Vehicular Visible Light Communications with LED Taillight and Rolling Shutter Camera

Peng Ji*, Hsin-Mu Tsai†‡, Chao Wang*, Fuqiang Liu*

*Dept. of Electronics and Information Engineering, Tongji University, Shanghai, P.R. China Email: jipeng90@163.com, {chaowang, fuqiangliu}@tongji.edu.cn

Dept. of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan

Intel-NTU Connected Context Computing Center, Taipei, Taiwan Email: hsinmu@csie.ntu.edu.tw

Abstract—Visible light communication (VLC) has recently emerged as a promising wireless communication technology. Vehicle lights and traffic lights have started to utilize LEDs, and due to their shorter response time they can be easily modified to become VLC transmitters. In addition, cameras embedded in smartphones can be used as VLC receivers. As a result, vehicular VLC (V2LC) between vehicle lighting and smartphone cameras has the potential to enable a great number of applications at low cost. In this paper, a prototype V2LC system that utilizes undersampled frequency shift ON-OFF keying (UFSOOK) modulation is proposed. The system utilizes rolling shutter cameras as the receiver and takes advantage of their characteristics to improve the receiving performance. An off-the-shelf vehicle LED taillight is used as the transmitter. Information is transmitted in the continuous state (ON-OFF) changes of the LEDs, which are invisible to human eyes. The performance evaluation results demonstrate that the communication prototype is robust and can resist common optical interferences and noises within the image.

Keywords—Visible light communications; rolling shutter; smartphone camera; vehicular visible light communications (V2LC); undersampled frequency shift ON-OFF keying (UFSOOK)

I. INTRODUCTION

Visible light communication (VLC) is considered a promising future wireless communication technology due to the improvements in light emitting diode (LED) technology and the existing pervasive lighting infrastructure. Compared to radio communication technologies such as WiFi and cellular, VLC has the following advantages: (1) a large amount of bandwidth in the visible light spectrum is available for use and is not yet regulated; (2) VLC is more secure because light does not penetrate walls or other thick materials, and in most cases communication is only possible when the transmitter and the receiver have line-of-sight (i.e., the adversary needs to be within the visual range of the receiver to perform attacks); (3) visible light poses no harm to the human body and the eyes when the transmitted optical power is below a certain level [1].

Many existing investigations have focused on improving the data rate and transmission range of VLC. In [2], a data rate of 100 Mb/s is achieved using on-off keying and non-return-to-zero modulation. References [3]-[4] introduce the use of OFDM with higher-order modulation schemes to improve the transmission range and the data rate. However, to achieve high data rates, special transmitting and receiving devices are required. In general, these hardware modules or components are expensive, and thus may have an adverse effect on the adoption of the technology.

Over the last decade, smartphones have penetrated almost everyone’s daily life. As most smartphones have embedded cameras, providing the ability to capture pictures and videos, they can be utilized as VLC receivers. On the other hand, LEDs are widely used in vehicle lights and traffic lights, and they can be easily modified to become VLC transmitters. A number of applications based on VLC between vehicle lighting, traffic lights, and smartphone cameras (V2LC) can be implemented with this new form of communication and can help us build vehicles and transportation systems that are more intelligent. For example, instead of obtaining the diagnostic information of a vehicle from the On-Board Diagnostics (OBD) interface, we can use smartphones to obtain the information from the taillights with V2LC, which is much more convenient. Another application is collision avoidance. A smartphone can be mounted in a car or on a scooter, so that its camera can capture images of the lights of surrounding vehicles. When the vehicle ahead performs an emergency brake, this information can be broadcast with the taillight, and the smartphone can receive the warning message with its camera and alert the driver.

However, unlike indoor LED lights, which usually have an evenly illuminated surface, vehicle lights and traffic lights usually consist of many small LEDs and irregular reflective surfaces, which generate optical interference and noise in the images, creating difficulties as the receiver attempts to decode the transmission. In addition, most current smartphone cameras use rolling shutter CMOS sensors. The camera sensor has a low sampling rate, up to only tens of times per second (frames per second, fps), and exhibits a special rolling shutter effect. Thus, a specific communication protocol and receiver need to be designed and tailored to address these limitations.

A few existing works have realized VLC with LEDs and regular cameras using different methods. Casio’s PicapiCamera [5] uses flashing dots on a display or flashing colored light to convey a small amount of data. The data rate is very low, and the transmitting light needs to be a special type of light that can change its color. [6] utilizes the rolling shutter effect of a regular CMOS camera sensor and Manchester coding to increase the data rate. The camera needs to capture the image from a reflected surface to fill the entire image for decoding.

This work is supported by a grant from the National High Technology Research and Development Program of China (863 Program) (2012AA111902) and the Fundamental Research Funds for the Central Universities (0800219162).

978-1-4799-4482-8/14/$31.00 ©2014 IEEE

By taking advantage of different rows of pixels being captured at different times, their protocol and demodulation method can achieve a data rate of 3.1 kb/s. However, it is not suitable for V2LC, as vehicle lights and traffic lights have irregular and complex surfaces, and in most scenarios there is no large reflective surface for the receiver to utilize.

In order to develop a V2LC system that is suitable for the envisioned applications, we developed a receiving system that can decode transmissions using the undersampled frequency shift ON-OFF keying (UFSOOK) modulation [7] with rolling shutter cameras. Our prototype uses off-the-shelf vehicle LED taillight as the transmitter and a smartphone camera as the receiver. The performance evaluation results demonstrate that the prototype can effectively resist the interferences and the noises caused by irregular reflective surfaces, and performs well in different illumination conditions and at different distances.

II. ROLLING SHUTTER AND COMMUNICATION PROTOCOL

A. Rolling shutter

Rolling shutter is a method of image acquisition in which each frame is recorded not from a snapshot obtained at a single point in time, but rather by scanning across the frame either vertically or horizontally. The rolling shutter effect is shown in Fig. 1. The transmitting LED light switches on and off at very high frequencies according to the modulation, and the pixels of the camera sensor activate sequentially (by row) and therefore do not capture the entire image simultaneously. When the rows of pixels (scanlines) are activated, they are exposed to the light at that time and then their values are stored. After the procedure is completed, the scanlines captured at different times are merged together to form a single image [6]. Fig. 1 shows the procedure of a rolling shutter camera capturing an LED light switching on and off at a high frequency.

When the on-off frequency of the transmitting light is higher than the sampling rate (frame rate) of the camera but lower than the scanning frequency of the rolling shutter1, stripes of different light intensity (dark and bright stripes) appear in the pixels occupied by the transmitting LED in the picture (see Fig. 1). When the on-off frequency is much higher than the scanning frequency, the camera sensor can only obtain the average light intensity in the pixels occupied by the transmitting light, and the area appears as half on. Note that it is assumed that the shutter time is set to be shorter than or approximately the same as the inverse of the on-off frequency.

This is usually true as the transmitting light is very bright and reduces the camera shutter time.

1 The inverse of the readout time. The readout time is the duration for a rolling shutter sensor to complete the acquisition of one row of pixels.
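The three appearance regimes described above (visible blinking, stripes, uniform half-on) can be sketched as a rough rule of thumb. The scanning frequency used here is an illustrative value derived from the 540-row preview resolution and the 1/40 s image scan time reported later in Section III, not a measured camera parameter:

```python
# Rough rule of thumb for how a 50%-duty OOK LED appears in a rolling-shutter
# image, depending on where its frequency sits relative to the camera's rates.
FPS = 30            # camera frame rate (Hz)
SCAN_FREQ = 21600   # scanning frequency (rows/s); illustrative: 540 rows / (1/40 s)

def appearance(ook_freq_hz):
    if ook_freq_hz <= FPS:
        return "visible blinking"        # slower than the frame rate
    if ook_freq_hz < SCAN_FREQ:
        return "dark/bright stripes"     # faster than frames, slower than the scan
    return "uniform half-on"             # averaged out within each row's exposure

print(appearance(120))     # DATA-bit frequency -> stripes
print(appearance(100000))  # SFD-style frequency -> half-on
```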

Fig. 1. The rolling shutter effect

B. Communication protocol

A camera sensor consists of a large number of photodiodes, forming a two dimensional array, but can only be sampled at a rate of tens of times per second due to its limited output bandwidth. As a result, the protocol and the modulation format have to be specifically designed to include considerations of these properties. In this paper, it is assumed that the frame rate of the camera is 30 frame/s.

1) UFSOOK with rolling shutter

UFSOOK [7] is a modulation method that encodes bits using a form of DC balanced differential encoding. The modulation concept is similar to frequency shift keying (FSK), and it encodes bit ‘1’ and bit ‘0’ with different frequencies.

Appropriate mark (bit ‘1’) and space (bit ‘0’) frequencies are selected so that, when undersampled by a low-frame-rate camera, bit values are represented by different on-off pairs. For example, bit ‘0’ is indicated by two consecutive video frames in which the pixels both appear on or both appear off, and bit ‘1’ is indicated by two consecutive video frames in which one appears on while the other appears off. However, as shown in Fig. 1, when a rolling shutter camera is used, dark and bright stripes appear in the pixels occupied by the transmitting light. A more robust decoding algorithm can therefore be derived, rather than using the ON-OFF combinations of a single pixel to decode the transmissions. The details are described as follows.

The bit encoding and decoding pattern with a rolling shutter camera is shown in Fig. 2, in which the Y-axis represents whether the light is on or off. Bit ‘1’ is transmitted as 7 cycles of 105 Hz OOK (shown in blue) and bit ‘0’ is transmitted as 8 cycles of 120 Hz OOK (shown in red). Therefore, each bit is transmitted for 1/15 second. When this OOK waveform is sampled 30 times per second by a camera (30 fps), represented by the rectangular windows in the figure, there are two samples per bit and hence the bit rate is half of the sample rate. The two samples of bit ‘1’ are shown in yellow windows and the two samples of bit ‘0’ are shown in dark blue windows. The time period of the windows represents the number of rows of pixels illuminated by the light multiplied by the readout time.

During this time period, the rows of pixels captured when the light is on appear bright, and the rows of pixels captured when the light is off appear dark. When transmitting a bit ‘0’, any same rows of pixels in two consecutive image frames would be captured at the time with the same light state, and the bright and dark stripes stay in the same position in the pixels occupied by the lights in the two consecutive image frames.


Fig. 2. UFSOOK encoding of a logic “1 0” bit pattern

When transmitting a bit ‘1’, on the contrary, any same rows of pixels in two consecutive image frames would be captured at times with opposite light states, and dark and bright stripes appear in alternating positions in the pixels occupied by the transmitting light, i.e., the dark positions in one frame will change to bright in the next frame and vice versa. Therefore, bit values can be decoded by comparing the positions of the bright and dark stripes in consecutive image frames.
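The frame-to-frame behavior of the two OOK frequencies can be illustrated with a simplified point-sampling model. This is a sketch only: it ignores the rolling shutter and exposure time, and assumes the first sample lands at phase zero. Exact rational arithmetic avoids floating-point phase errors at the 105 Hz half-cycle boundary:

```python
from fractions import Fraction

FPS = 30       # camera frame rate (frames/s)
F_MARK = 105   # OOK frequency for bit '1' (Hz): 3.5 cycles elapse per frame
F_SPACE = 120  # OOK frequency for bit '0' (Hz): 4 whole cycles elapse per frame

def led_state(freq_hz, frame_idx):
    """ON/OFF state of a 50%-duty square wave at the instant frame frame_idx starts."""
    phase = Fraction(freq_hz * frame_idx, FPS) % 1
    return phase < Fraction(1, 2)   # ON during the first half of each cycle

def undersampled_pair(freq_hz, start_frame=0):
    """The two consecutive camera samples that represent one UFSOOK bit."""
    return led_state(freq_hz, start_frame), led_state(freq_hz, start_frame + 1)

print(undersampled_pair(F_SPACE))  # (True, True): same state -> bit '0'
print(undersampled_pair(F_MARK))   # (True, False): opposite states -> bit '1'
```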

2) SFD

To detect the beginning of a data packet, it is required to define a start frame delimiter (SFD) that is prepended to each data packet. The end of the packet is indicated by the appearance of another SFD, which also signals the beginning of the next packet. In order to distinguish the SFD from data bits, the SFD is represented by a much higher OOK frequency than the scanning frequency of the rolling shutter. In this case, the camera sensor extracts only the average light intensity and the image appears half on. Compared to images that are captured when a data bit is transmitted, which have bright and dark stripes, the image pixels of the light when transmitting the SFD appear all bright (while the intensity is actually half of the ON state). The high-OOK-frequency transmission of the SFD persists for one bit time (2 image frames) to signal the beginning of the packet.

III. SYSTEM DESIGN

A. Transmitter

The block diagram of our system is shown in Fig. 3. The transmitter is implemented with an Ettus Universal Software Radio Peripheral (USRP) N200 software defined radio (SDR) and the GNU Radio signal processing software. We implement the packet formulation and modulation scheme as GNU Radio blocks, which are executed in real-time on a laptop. Digital samples generated by GNU Radio are sent to the USRP, which converts them to an analog voltage-varying signal. The signal is then converted to a current-varying signal by the LED frontend to control the output intensity of the transmitting LEDs.

Fig. 3. Prototype block diagram (data → UFSOOK modulation in GNU Radio → USRP → LED frontend → LED lights → channel → rolling shutter camera → demodulation in an iOS app → received data)

Fig. 4. Data frame format. SFD followed by B1, B2, ..., and B10, which represent 10 different data bits.

Data is transmitted in the form of packets. In the protocol, each packet consists of two parts: SFD and DATA, as shown in Fig. 4. According to the communication protocol, two video frames are required to decode one bit. Thus, the bit rate is half the frame rate. For a frame rate of 30 fps, the bit rate would be 15 bits per second (bps). Considering the SFD overhead, the data rate would be slightly lower than 15 bps. Although the bit rate is low compared to most communication technologies, it is sufficient for certain application scenarios, such as transmitting the diagnostic information of the vehicle to the user (when the vehicle is not operated), or an emergency message to warn neighboring vehicles about a hazardous road condition ahead.
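The raw and effective rates above can be checked with a few lines of arithmetic, using the packet layout of Fig. 4:

```python
FPS = 30                  # camera frame rate (frames/s)
BITS_PER_PACKET = 10      # B1..B10 as in Fig. 4

raw_bit_rate = FPS / 2                        # two frames per bit -> 15 bps
frames_per_packet = 2 + 2 * BITS_PER_PACKET   # SFD (2 frames) + 2 frames per bit
effective_rate = BITS_PER_PACKET / (frames_per_packet / FPS)

print(raw_bit_rate)    # 15.0 bps
print(effective_rate)  # ~13.6 bps, i.e. slightly below 15 bps due to SFD overhead
```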

B. Receiver

The receiver is implemented on a smartphone running iOS.

The smartphone camera faces the LED light directly, and the preview mode of the camera captures a continuous stream of frames at a fixed frame rate of 30 fps. Then, a decoder running in an iOS application processes the captured video frame by frame to demodulate the data embedded in the pixels occupied by the transmitting LED. The decoding procedure includes three parts: detection of the SFD frame, detection of bit ‘0’ or bit ‘1’, and data decoding with an error handling mechanism, described in detail as follows.

1) Detection of SFD frame

The main difference between a SFD frame and a DATA frame is that a DATA frame has regular dark and bright stripes while a SFD frame appears all bright. However, due to the taillight’s numerous LEDs and the uneven illumination caused by irregular reflective surfaces, the light intensity of different parts of the image varies significantly, and even within the dark stripes there exist very bright spots. Fig. 5 shows the comparison between them.

A straightforward method for SFD detection is to determine whether there are dark stripes in the image. In order to get more accurate detection results with a less complicated algorithm, it is necessary to know the width of the stripes in advance. We propose the following method to calculate the width. First, the LED transmits a 1500 Hz OOK signal; in an image that the LED entirely occupies, there are 75 stripes, or 37.5 pairs of dark and bright stripes. The scanning time of each pair of stripes corresponds to one cycle of 1500 Hz OOK, which is 1/1500 second, and thus the scanning time of the entire image is 1/40 second, calculated by multiplying 1/1500 by 37.5. Then, when the LED transmits bit ‘0’ with an OOK frequency of 120 Hz, as the resolution of preview mode is 960 x 540, the width of each pair of stripes can be calculated as

\[
\frac{1/120}{(1/40)/540} = 180 \ \text{pixels.}
\]

When sending bit ‘1’ at an OOK frequency of 105 Hz, the width can be calculated in a similar way and is 206 pixels.
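The stripe-width calculation can be reproduced directly from the calibration numbers above:

```python
ROWS = 540             # preview-mode image height (rows of pixels)
CAL_FREQ = 1500        # calibration OOK frequency (Hz)
PAIRS_IN_IMAGE = 37.5  # stripe pairs counted in one full-height image

scan_time = PAIRS_IN_IMAGE / CAL_FREQ   # one pair per OOK cycle -> 1/40 s per image
row_time = scan_time / ROWS             # readout time per row of pixels

def stripe_pair_width(ook_freq_hz):
    """Width in rows of one dark+bright stripe pair at a given OOK frequency."""
    return (1.0 / ook_freq_hz) / row_time

print(stripe_pair_width(120))  # 180 pixels for bit '0'
print(stripe_pair_width(105))  # about 206 pixels for bit '1'
```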


Fig. 5. Images of the transmitted (a) SFD and (b) DATA portions of a packet

Fig. 6. SFD detection flow chart (sampling → noise elimination → threshold training → voting scheme; continuous errors trigger the error handling mechanism)

Based on the width of the stripes, we will introduce a simple but robust way to detect the SFD. The flow chart is shown in Fig. 6.

a) Sampling: The sampled line of pixels should cover at least one dark stripe and one bright stripe in order to detect whether there are dark stripes. To ensure the sampled pixels can meet this requirement, they are sampled every several pixels from a continuous line of pixels which goes across one dark stripe and one bright stripe, shown as the white lines in Fig. 5. The length of the line should satisfy the inequality

\[
L \ge \frac{R_{width}}{T_s}\,\max\!\left(\frac{1}{f_0},\frac{1}{f_1}\right),
\]

where \(T_s\) is the time to scan across all rows of the camera sensor, \(R_{width}\) is the height of the preview-mode resolution in pixels (for example, if the resolution is 960 x 540, \(R_{width}\) is 540), \(L\) is the length of the continuous pixel line, and \(f_0\), \(f_1\) are the OOK frequencies of bit ‘0’ and bit ‘1’.

b) Noise elimination: As the taillight consists of many LEDs and irregular reflective surfaces, the illumination is uneven and there is a lot of noise in the image. For example, the dark stripes still have some irregular bright spots. In order to avoid misjudgments caused by the bright spots, the red values of the sampled pixels are sorted in ascending order and only the few smallest values are selected for SFD detection.
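A minimal sketch of this noise-elimination step, with hypothetical red values (bright reflections near 250, dark-stripe pixels near 15):

```python
def darkest_samples(red_values, k):
    """Sort the sampled red values ascending and keep only the k smallest, so
    isolated bright reflections inside a dark stripe cannot cause a misjudgment."""
    return sorted(red_values)[:k]

# Hypothetical samples taken along one pixel line crossing a dark stripe.
samples = [12, 250, 18, 15, 240, 20, 11, 19]
print(darkest_samples(samples, 3))  # [11, 12, 15]
```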

c) Threshold training: The selected smallest values are then compared with a threshold to determine whether this frame is a SFD frame. However, the values of the pixels obtained from the camera sensor are affected by several factors: the light intensity of the transmitting LEDs, the exposure time of the camera, and the distance between the LEDs and the camera. Thus, the threshold should be set dynamically in order to deal with different conditions. In our prototype, the first ten DATA frames are used as training frames to determine a proper threshold, which is set as the maximum value of the ten smallest values of the sampled pixels.

Fig. 7. A scenario where the sampling time could result in symbol errors

Fig. 8. Patterns of bit ‘0’ and bit ‘1’: (a) two consecutive received frames when bit ‘0’ is transmitted; (b) two consecutive received frames when bit ‘1’ is transmitted

d) Voting Scheme: As there exists a slight timing difference between the transmitting LED and the receiver, due to the variance caused by the crystal oscillators in the devices, the frame rate of the camera is not exactly 30 frames per second with respect to the time reference of the receiver; the error could be as large as 0.1%. The error in frame rate causes the sampling phase of the camera to constantly change. Fig. 7 shows the problem caused by the frame rate error; the high-frequency square wave on the left represents the SFD and the low-frequency square wave on the right represents the DATA portion. The red rectangular window represents the time for the rolling shutter to scan across all rows, and the square wave inside this window corresponds to the image captured by the camera. Because of the frame rate error, the left boundary of the window constantly changes, leading to the unexpected result that the square wave in the window does not always have the same frequency. As shown in Fig. 7, in this case the image would show a LED light with half appearing bright, representing the SFD, and half with bright and dark stripes.

Fig. 9. Error handling mechanism flow chart (state transitions among SFDJustBegin, SFDWillEnd, DataJustBegin, and DataDidBegin, with buffered data dropped and the state reset on errors)

In order to mitigate this problem caused by the frame rate error, multiple lines of pixels are sampled and a simple voting scheme is used. In our prototype, three pixel lines with different start points in the rolling shutter scanning direction are used to perform SFD detection individually. If two or three of the detection results show that the frame is a SFD frame, then the frame is confirmed to be a SFD frame.
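The two-out-of-three voting rule can be sketched as:

```python
def vote_sfd(line_detections):
    """Confirm a SFD frame only if at least two of the per-line detections agree."""
    return sum(line_detections) >= 2

print(vote_sfd([True, True, False]))   # True: frame confirmed as SFD
print(vote_sfd([True, False, False]))  # False: not confirmed as SFD
```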

2) Detection of bit ‘0’ and bit ‘1’

In the DATA portion of the packet, bit ‘0’ is transmitted as 8 cycles of 120 Hz OOK and bit ‘1’ is transmitted as 7 cycles of 105 Hz OOK. Thus, the transmission time of one bit is 1/15 second, which is the duration of two video frames at a frame rate of 30 fps. The two frames of bit ‘0’ have the dark and bright stripes at the same positions, while those of bit ‘1’ have them at alternating positions, as shown in Fig. 8. Once the decoder receives two frames, it starts to decode the data bit. Pixels are sampled from the same positions in the two frames, and the detection algorithm is described as follows.

Input:
    Num: the number of sampled pixels
    Threshold: the trained threshold used for SFD detection
    frame1: array of the values of the sampled pixels in the first frame
    frame2: array of the values of the sampled pixels in the second frame
Output:
    result: whether the bit is ‘1’ or ‘0’

count = 0
for i from 1 to Num do
    if abs(frame1[i] - frame2[i]) < Threshold then
        count = count + 1
    end if
end for
if count > Num / 2 then
    result = 0
else
    result = 1
end if
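As a minimal sketch, the detection algorithm above can be written as a Python function; the pixel values and the threshold in the usage example are hypothetical illustrations, not measurements from the paper:

```python
def detect_bit(frame1, frame2, threshold):
    """Decode one UFSOOK bit from the sampled pixel values of two consecutive frames.

    If most sampled pixels keep roughly the same value, the stripes did not move
    between frames (bit '0'); otherwise the stripes alternated (bit '1')."""
    unchanged = sum(1 for a, b in zip(frame1, frame2) if abs(a - b) < threshold)
    return 0 if unchanged > len(frame1) / 2 else 1

# Hypothetical sampled red values (bright ~200, dark ~20), trained threshold 100.
print(detect_bit([200, 20, 200, 20], [200, 20, 200, 20], 100))  # 0: stripes static
print(detect_bit([200, 20, 200, 20], [20, 200, 20, 200], 100))  # 1: stripes flipped
```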

TABLE I. FOUR STATES

State          Prior Video Frame    Current Video Frame
SFDJustBegin   DATA                 SFD
SFDWillEnd     SFD                  SFD
DataJustBegin  SFD                  DATA
DataDidBegin   DATA                 DATA

3) Error handling mechanism

The carefully designed SFD detection and bit detection methods previously described are intended to ensure accurate data decoding. However, due to uncertain factors such as the distance between the LEDs and the camera and the ambient lighting conditions, errors are inevitable in the decoding procedure. In order to make the data transmission more reliable and the decoder more robust, it is necessary to design an error handling mechanism which can detect and then deal with the errors.

In our prototype, we design the error handling mechanism based on the state machine concept. Each data frame has the same structure, which begins with the SFD and is then followed by a series of data bits. The SFD and each data bit are transmitted in two video frames’ time, and each video frame can be classified as either a SFD frame or a DATA frame. The permutations and combinations of the two types of video frames form four states: SFDJustBegin, SFDWillEnd, DataJustBegin, and DataDidBegin, shown in Table I.

The four states switch from one to another under certain conditions, and the next video frame detected ought to be of a certain type. For example, if the prior video frame is a DATA frame and the current frame is a SFD frame, the state is SFDJustBegin. As the SFD consists of two frames, the next frame must also be a SFD frame; otherwise, an error must have happened and the state is reset. The flow chart of the error handling mechanism is shown in Fig. 9. When the decoder has detected another SFD, which indicates the end of this data frame, the number of video frames buffered since the last SFD is checked; if the number is odd, which is impossible as each data bit is transmitted in two video frames’ time, there must have been errors, and thus the buffered frames are dropped and the state is reset. If the number is even, the video frames are further processed to obtain the transmitted data.
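A simplified sketch of this state machine and the even-frame-count check, assuming a classifier has already labeled each video frame as “SFD” or “DATA” (function and variable names are illustrative, not from the paper’s implementation):

```python
def process_frames(labels):
    """Buffer DATA frames between SFDs; drop a buffer whose frame count is odd,
    since every data bit must occupy exactly two video frames."""
    packets, buffer, prior = [], [], None
    for cur in labels:
        if prior == "DATA" and cur == "SFD":   # SFDJustBegin: packet boundary
            if buffer and len(buffer) % 2 == 0:
                packets.append(buffer)         # even count: pass on for demodulation
            buffer = []                        # odd count: error detected, drop
        elif cur == "DATA":                    # DataJustBegin / DataDidBegin
            buffer.append(cur)
        prior = cur
    return packets

good = ["SFD", "SFD", "DATA", "DATA", "DATA", "DATA", "SFD", "SFD"]
bad = ["SFD", "SFD", "DATA", "DATA", "DATA", "SFD", "SFD"]  # odd DATA count
print(len(process_frames(good)))  # 1 packet (4 DATA frames) kept
print(len(process_frames(bad)))   # 0 packets: buffered frames dropped
```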

IV. PERFORMANCE EVALUATION

To test the performance of the prototype, an iPhone 4 with a five-megapixel camera is used as the receiver. The LED light of the transmitter is a scooter’s taillight, which consists of twenty small red LEDs and irregular reflective surfaces. The iOS application on the iPhone enables the preview mode of the camera and calls the decoder implemented in Objective-C.

The frame rate of the camera is set to 30 fps and the resolution used in the experiments is 960 x 540.


TABLE II. TEST RESULTS OF DIFFERENT DISTANCES

Distance Bit Error Rate Frame Detection Error

8cm 0/13736 = 0 99/31896 = 0.31%

20cm 0/13328 = 0 98/30942 = 0.32%

30cm 0/8400 = 0 111/19890 = 0.56%

35cm 31/8340 = 0.37% 144/19999 = 0.72%

40cm 16/2224 = 0.72% 97/5620 = 1.7%

TABLE III. TEST RESULTS OF DIFFERENT ILLUMINATION CONDITIONS

Illumination environment Bit Error Rate Frame Detection Error

Bright 0/13328 = 0 98/30942 = 0.32%

Dark 0/18400 = 0 96/42712 = 0.22%

The system is evaluated by transmitting a number from the taillight to the camera of iPhone 4. The decimal number is converted to a binary sequence and then modulated with UFSOOK. After demodulating the captured images, the iOS application converts the binary sequence to decimal numbers and displays them on the screen.

In order to test the robustness and reliability of the system, two scenarios are evaluated: different environment illumination and different distances between the transmitter and the receiver. The utilized performance metrics include bit error rate (BER) and frame detection error rate (FDER). BER is the error rate of decoded data bits. FDER is the error rate of the video frame classification as either a SFD frame or a DATA frame; each time the error handling mechanism resets the state, a frame detection error is counted. A random number from 100 to 1000 is selected to be transmitted repeatedly; the decoder on the iOS application decodes the data and the performance metrics are calculated.

For different distances between the taillight and the smartphone camera, 8 cm, 20 cm, 30 cm, 35 cm, and 40 cm are tested. The results are shown in Table II. The denominator of each metric is the total number of frames or data bits received, and the numerator is the number of errors. It can be observed that as the communication distance increases, FDER increases. When the communication distance is relatively small, no bit error is observed. For larger distances, both BER and FDER increase. This is because the camera used in the prototype can be sampled at only 30 fps and the OOK frequencies are designed based on such a low sampling rate; thus, the stripes in the image are relatively broad.

It is required that the taillight in the image covers at least one pair of dark and bright stripes, so that the dark stripes are located inside the bright taillight and can be clearly detected. In addition, when the distance increases, the size of the taillight in the image decreases, and the difference between the light intensity of dark and bright stripes becomes too small for the decoder to distinguish. A set of higher OOK frequencies could be used to resolve the problem; however, in this case, the shutter time of the camera must be reduced in order to observe the high-frequency OOK signal, and the range could also be limited due to less light exposure. Trade-offs between these options are not investigated in this paper and are left as future work.

We also conduct our experiments in two different illumination environments with a communication distance of 20 cm. First, the experiment is carried out in a dark environment where the taillight is the only light source. Second, the experiment is repeated during daytime in an indoor environment around noon. The transmitter faces a window so that sunlight is reflected by the surfaces of the taillight and acts as relatively strong interference. The test results are shown in Table III. We can observe that the FDER is higher in the bright environment due to the relatively stronger interference.

However, the FDERs in both illumination environments are acceptably low and no bit errors are observed. This demonstrates that our proposed protocols and error handling mechanism can effectively resist interferences and noises.

V. CONCLUSION

In this paper, we investigate the possibility of using existing LED lights in a vehicle or in a traffic light and the camera of a smartphone to carry out V2LC. A communication protocol is proposed based on UFSOOK, optimized for rolling shutter cameras, and a prototype is implemented. Evaluation results demonstrate that the system is robust and can resist strong interferences and noises. As for future work, in order to make the system more applicable to vehicular environments, methods to increase the communication range and to improve the robustness of the decoding algorithm in moving environments need to be investigated.

ACKNOWLEDGMENT

This work is also supported by a grant from National High Technology Research and Development Program of China (863 Program) (2011AA110401) as well as by National Science Council of Taiwan, National Taiwan University and Intel Corporation under Grants NSC-101-2911-I-002-001 and NTU-102R7501.

REFERENCES

[1] S. Haruyama, "Visible light communication using sustainable LED lights," 2013 ITU Kaleidoscope: Building Sustainable Communities (K-2013), pp. 1-6, Kyoto, Japan, 22-24 April 2013.

[2] H. L. Minh, D. O’Brien, G. Faulkner, et al. "100-Mb/s NRZ visible light communications using a postequalized white LED," IEEE Photonics Technology Letters, vol. 21, no.15, pp. 1063-1065, Aug. 2009.

[3] H. Elgala, R. Mesleh, H. Haas and B. Pricope, "OFDM visible light wireless communication based on white LEDs," IEEE 65th Vehicular Technology Conference (VTC2007-Spring), pp. 2185-2189, Dublin, Ireland, 22-25 Apr. 2007.

[4] M. Z. Afgani, H. Haas, H. Elgala and D. Knipp, "Visible light communication using OFDM," IEEE 2nd International Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities (TRIDENTCOM 2006), Barcelona, Spain, 2006.

[5] Casio Unveils Prototype of VLC System Using Smartphones at CES (Jan. 2012), http://world.casio.com/news/2012/0115_VisibleLightcomm/

[6] C. Danakis, M. Afgani, G. Povey, I. Underwood and H. Haas, "Using a CMOS camera sensor for visible light communication," IEEE Globecom Workshops, pp. 1244-1248, Anaheim, CA, USA, 3-7 Dec. 2012.

[7] R. D. Roberts, "Undersampled frequency shift ON-OFF keying (UFSOOK) for camera communications (CamCom)," 22nd Wireless and Optical Communication Conference (WOCC), pp. 645-648, Chongqing, China, 16-18 May 2013.
