
RollingLight: Enabling Line-of-Sight Light-to-Camera Communications

Hui-Yu Lee†§, Hao-Min Lin†§, Yu-Lin Wei†§, Hsin-I Wu†§, Hsin-Mu Tsai†§ and Kate Ching-Ju Lin‡§

†Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
‡Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
§Intel-NTU Connected Context Computing Center, Taipei, Taiwan

{r01922028, d00922003, r03922027, r02944020, hsinmu}@csie.ntu.edu.tw, katelin@citi.sinica.edu.tw

ABSTRACT

Recent literature has demonstrated the feasibility and applicability of light-to-camera communications. Existing systems either use this new technology to realize specific applications, e.g., localization, by sending repetitive signal patterns, or consider non-line-of-sight scenarios. We however notice that line-of-sight light-to-camera communication has great potential because it provides a natural way to enable visual association, i.e., visually associating the received information with the transmitter's identity. Such capability benefits broader applications, such as augmented reality, advertising, and driver assistance systems. Hence, this paper designs, implements, and evaluates RollingLight, a line-of-sight light-to-camera communication system that enables a light to talk to diverse off-the-shelf rolling shutter cameras. To boost the data rate and enhance reliability, RollingLight addresses the following practical challenges.

First, its demodulation algorithm allows cameras with heterogeneous sampling rates to accurately decode high-order frequency modulation in real time. Second, it incorporates a number of designs to resolve the issues caused by inherently unsynchronized light-to-camera channels. We have built a prototype of RollingLight with USRP-N200, and also implemented a real system with Arduino Mega 2560, both tested with a range of different camera receivers. We also implement a real iOS application to examine our real-time decoding capability. The experimental results show that, even to serve commodity cameras with a large variety of frame rates, RollingLight can still deliver a throughput of 11.32 bytes per second.

Categories and Subject Descriptors

C.2.1 [Network Architecture and Design]: Wireless Communication; B.4.1 [Data Communications Devices]; C.3 [Special-Purpose and Application-Based Systems]

Keywords

Visible Light Communications, Camera Communications, Rolling Shutter, Frequency Shift Keying, Smartphones

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

MobiSys'15, May 18–22, 2015, Florence, Italy.

Copyright is held by the owner/author(s). Publication rights licensed to ACM.

ACM 978-1-4503-3494-5/15/05 ...$15.00.

http://dx.doi.org/10.1145/2742647.2742651

(a) LOS light-to-camera (b) NLOS light-to-camera
Figure 1—Comparison between LOS and NLOS light-to-camera communications. Unlike non-line-of-sight light-to-camera systems, where a camera can only observe the mixed reflected light signals, as in Fig. 1(b), line-of-sight systems allow a camera to naturally associate the received information with each transmitting light, as in (a).

1. INTRODUCTION

Information received via conventional radio frequency (RF) wireless communication such as WiFi or Bluetooth is often hard to visually associate with the transmitter's identity, especially when multiple transmitters are located in close proximity. The ability to visually identify which object transmits the received information can enable a number of augmented reality applications, in which the received information associated with different transmitting objects can be combined with the perceived view and rendered at the correct locations. For example, as shown in Fig. 1(a), when multiple products are exhibited together, if the information received by a smartphone can be correctly associated with the transmitting objects (or their nearby objects), then the information can be presented to the user in an intuitive way. Although Near-Field Communication (NFC) can also identify a transmitter in proximity, it cannot receive information from multiple objects simultaneously due to its limited operational range of a few centimeters.

This paper presents RollingLight, a light-to-camera communication system that enables line-of-sight (LOS) data transmission for a wide range of commodity cameras. Unlike non-line-of-sight light-to-camera systems, where a camera can only observe the mixed reflected light signals, as shown in Fig. 1(b), line-of-sight systems allow a camera to naturally associate the received information with each transmitting light. In particular, when a camera observes a


light in LOS, in the image, the group of pixels occupied by the light displays the appearance of the light or the object illuminated by the light, which can be recognized by either humans or computer vision techniques. The very same group of pixels also carries the extra information transmitted by that light, as shown in Fig. 1(a). Hence, those pixels combine the two types of information, i.e., visual information and transmitted information, providing a very natural way to visually associate the received information with the transmitter's identity. For example, in a museum or exhibition, the light on top of each exhibited object can deliver a brief introduction of that object or a URL link connecting to the detailed guide information.

RollingLight exploits the rolling shutter mechanism that exists in most commodity cameras today. Rolling shutter allows a camera to sample the received optical signal multiple times in a single captured image. When a rolling shutter camera receives square waves of different frequencies, the captured images contain bright and dark strips with widths proportional to the inverse of the frequency.

As a result, we can use different signal frequencies to represent different data symbols. Although there have been a number of existing light-to-camera systems, they are either designed for non-LOS (NLOS) communications [19] or for specific applications (e.g., localization [15]) that only require repetitive and short transmissions. LOS light-to-camera communication, however, presents several new practical challenges that have not yet been investigated.

First, camera sensors have diverse specifications. Even if a transmitter sends a signal of the same frequency, different cameras would observe different strip widths. Existing systems that either have a low data rate or support only repetitive transmission can ignore this problem, since a small error in strip width estimation does not affect their bit error rate. However, to boost the throughput, high-order frequency modulation is needed, in which case a small error in strip width estimation would result in a high bit error rate. To address this problem, we modified a pitch detection algorithm (PDA) [8] to accurately estimate the signal frequency, and borrowed the idea of channel estimation from RF-based systems to accurately determine the camera's rolling shutter sampling rate. These designs allow RollingLight to correctly map the observed strip width to the transmitted signal frequency.

In addition, different cameras typically have different frame rates, and thus the transmitter cannot tune its symbol duration to match the frame duration of every camera. As a result, a camera could receive multiple symbols in an image or suffer symbol losses. Even worse, unlike NLOS links, where the stripe pattern is observed in the entire image, as shown in Fig. 1(b), LOS links could capture a light that occupies only a portion of the image, and might experience a higher symbol loss probability. More importantly, while the loss probability could be quite stable in NLOS scenarios, it is very dynamic in LOS scenarios, depending on the camera's frame rate, image size, exposure duration, and read-out duration. We hence derive expressions to estimate the loss probabilities for different cameras and various link conditions. This allows an application developer to prevent losses by adding an appropriate level of redundancy for the target devices and environments it wants to serve.

We have implemented RollingLight on the Arduino Mega 2560 board, and developed an iOS decoding application. These are used to evaluate the real-time decoding capability of our design. We have further built a flexible experimental prototype using the USRP N200 radio platform, connected with a special front-end to drive the LED light. This is used to characterize and understand the limitations of LOS light-to-camera channels with a variety of devices and environments. Our implementation uses high-order frequency-shift-keying-based (FSK-based) modulation and XOR-based parity coding.

!"#$%&'!((

)&'*+$,(!'!*)-$&.((

)&'*+$,(!

1! 1!

/$0(1!

/$0(2!

/$0(3!

/$0(4!

1! 1!

1! 1!

1! 1!

2! 2!

2! 2!

2! 2!

2! 2!

567!!

8"#$%&'!()&'*+$,(5!!

/!*)-$&.()&'*+$,($9(*('$0(5'!

(a) Global shutter operation (b) Image by global shutter

1! 1!

!"#$!

%&'()!

%&'(*!

%&'(+!

%&'(,!

1! 1!

1! 1!

1! 1!

2! 2!

2! 2!

2! 2!

2! 2!

3! 3!

3! 3!

3! 3!

3! 3!

-./&012$(31245&6(!$!

%$437&18(31245&6(&9(4(2&'(!2!

(c) Rolling shutter operation (d) Image by rolling shutter Figure 2—Comparison between global shutter and rolling shut- ter.A rolling-shutter camera exposes right before the read-out. The exposure duration is hence shifted by a fixed amount of read-out duration, as shown in (c), resulting in the so-called rolling shutter effect, as shown in (d).

The key findings from our measurements and experiments are as follows:

• Existing rolling shutter cameras have very different sampling rates, ranging from 39.2 to 52.4 read-outs per millisecond among our tested devices. This heterogeneity is closely related to the proper choice of usable frequencies for FSK-based modulation and, hence, to the achievable data rate.

• Unlike other demodulation schemes, which work only in either the high or the low frequency region, RollingLight's demodulation algorithm is consistently reliable across a wide range of frequencies.

• Even with the overhead of dealing with mixed-symbol frames and symbol losses to ensure reliability for diverse cameras, RollingLight can still achieve a throughput of 11.32 bytes per second. This rate is sufficient to support several indoor applications, such as localization, augmented reality, and navigation in museums, exhibitions, or shopping centers.

• The average per-image processing time of RollingLight's real-time decoder is 18.15 milliseconds, which is much smaller than the frame duration of most smartphones on today's market.

2. PRIMER & CHARACTERISTICS

Before describing our RollingLight design, we give some background about rolling shutter cameras, and use measurements to show some important characteristics.

2.1 Rolling Shutter Cameras

There are two types of cameras: 1) global shutter and 2) rolling shutter. Global shutter is commonly implemented on CCD sensors. It exposes all pixels on the sensor simultaneously, as shown in Fig. 2(a), and gathers incoming light over all pixels during the exposure duration. An example image is shown in Fig. 2(b)¹. Although some CMOS sensors use a global shutter, the majority of CMOS sensors found on the market utilize a rolling shutter.

¹Fig. 2(b) is in fact taken by an iPhone 5 when the fan is not moving, mimicking an image taken by a global shutter.

(3)

Figure 3—Receiving operation of a rolling shutter camera. The top figure shows the transmitted square wave. The middle figure illustrates the exposure and read-out processes. The bottom figure demonstrates the resulting image after a 90-degree counterclockwise rotation.

One of the key properties of a rolling shutter is its sequential read-out architecture. Since rolling shutters do not have storage to cache the accumulated charge during the exposure operation, each row of pixels has to be exposed right before its read-out, as shown in Fig. 2(c). Since the read-out procedures of different rows cannot overlap, the exposure duration of a row of pixels is hence shifted by a fixed amount of read-out duration, denoted by T_r, resulting in the so-called rolling shutter effect. An example image is shown in Fig. 2(d).

Fig. 3(a) illustrates a square wave transmitted by an LED light.

For each pixel of a particular row, the incoming intensity-modulated signal is integrated over a period of exposure duration, denoted by T_e. Let r(t) denote the received signal at time t, which includes the signal from the transmitted light and other ambient interference. The intensity of a pixel in the y-th row of the image, I[y], is then equal to the total amount of photons received during exposure, which can be expressed as follows:

I[y] = \int_{T_0+(y-1)T_r}^{T_0+(y-1)T_r+T_e} r(t)\,dt, \qquad 1 \le y \le Y_{max}, \quad (1)

where T_0 is a reference time corresponding to the start of the exposure duration of the first row in the image, and Y_max is the number of rows in the image. Since the intensity of the accumulated charge changes row by row, the light will appear as bright and dark strips in the image.² Note that, for LOS reception, the transmitting light might only illuminate a partial area of the image. For example, in Fig. 3(b), the transmitted signal is only observed in rows [Y_n, Y_n + H − 1], where H represents the height of the image area illuminated by the transmitting light. The corresponding received image, after a 90-degree counterclockwise rotation, is plotted in Fig. 3(c).

²For the special case where the exposure duration happens to be a multiple of the signal period, a rolling shutter camera would not observe strips.

(a) 8000 Hz (b) 4000 Hz (c) 2000 Hz

Figure 4—Stripe pattern. Different transmitting frequencies correspond to different strip widths in the received images.

A camera does not expose at all times within a frame duration T_f,rx, i.e., the inverse of its frame rate. There exists a time gap between the end time of the exposure of the last row, T_0 + (Y_max − 1)T_r + T_e, and the start time of the next image frame, T_0 + T_f,rx. Namely, during this idle time gap of length T_f,rx − (Y_max − 1)T_r − T_e, the camera is not performing exposure, i.e., not receiving, and hence the transmitted signal during this time is lost. This gap is especially significant when the exposure duration T_e is short. In addition, if the transmitting light only occupies part of the image, a larger portion of the transmitted signal would be lost.³ Specifically, if a light only occupies rows [Y_n, Y_n + H − 1] in the image, then the signals sent during the exposure of rows [Y_1, Y_{n−1}] and [Y_n + H, Y_max] are missed. This is very different from the NLOS case, where the camera receives the reflected light signals and the strips are always observed in the whole image. Signal losses on a NLOS link hence only occur during the aforementioned idle time gap. That is, a NLOS link experiences relatively constant losses, while a LOS link might suffer from dynamic losses, depending not only on the idle time gap but also on the size and the location of the captured light in the image. Therefore, RollingLight needs to cope with those dynamic signal losses in order to ensure reliable reception for different cameras and various link conditions.
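To make Eq. 1 concrete, the following sketch numerically integrates a square wave over each row's shifted exposure window to reproduce the stripe pattern. It is a minimal simulation; all parameter values (a 2 kHz square wave, 20 µs read-out, 100 µs exposure, 400 rows) are illustrative assumptions rather than values measured in the paper.

import numpy as np

def rolling_shutter_capture(f_tx, T_r, T_e, Y_max, T0=0.0, n_samples=200):
    # Eq. 1: row y accumulates the received signal over its own shifted
    # exposure window [T0 + (y-1)*T_r, T0 + (y-1)*T_r + T_e].
    I = np.empty(Y_max)
    for y in range(Y_max):
        start = T0 + y * T_r
        t = np.linspace(start, start + T_e, n_samples)
        r = (np.floor(2.0 * f_tx * t) % 2 == 0).astype(float)  # 50%-duty square wave
        I[y] = r.mean() * T_e  # approximate integral of r(t) over the exposure
    return I

I = rolling_shutter_capture(f_tx=2000, T_r=20e-6, T_e=100e-6, Y_max=400)
# Bright and dark strips alternate with width W = 1/(2*f_tx*T_r) = 12.5 rows.
print((I > I.mean()).astype(int)[:50])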

2.2 Rolling Shutter Frequency-Shift Keying (RS-FSK) Modulation

RollingLight uses square waves in its modulation. One of the nice properties of a square wave is that it only needs two output levels, which avoids complex driving circuitry and reduces the overall system cost. With rolling shutter sampling, a square wave of a certain frequency transmitted by the light results in a stripe pattern in the image. In addition, different frequencies correspond to different strip widths, as shown in Fig. 4, and, most importantly, the strip width does not change with the location and the orientation of the camera, or with how large the light appears in the image. In other words, the captured stripe pattern is not distorted by either perspective distortion or physical light shapes. RollingLight uses square waves of different frequencies to represent different symbols, provided their corresponding strip widths are distinguishable. This is the common FSK modulation used in many communication systems. Let F denote the set of frequencies used for modulation. Then, each symbol can represent ⌊log2 |F|⌋ data bits. By increasing the number of frequencies, we can increase the order of modulation, and hence the data rate.
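As a small illustration of this bit-to-frequency mapping, the sketch below groups bits into ⌊log2 |F|⌋-bit symbols; the eight-entry frequency set is a made-up example, not the set used in RollingLight's implementation.

import math

F = [500, 1000, 1500, 2000, 2500, 3000, 3500, 4000]  # hypothetical frequency set (Hz)
BITS_PER_SYMBOL = int(math.log2(len(F)))             # floor(log2 |F|) = 3 bits/symbol

def modulate(bits):
    # Group the bits into 3-bit symbols; each symbol index selects a frequency.
    assert len(bits) % BITS_PER_SYMBOL == 0
    symbols = []
    for i in range(0, len(bits), BITS_PER_SYMBOL):
        index = int("".join(str(b) for b in bits[i:i + BITS_PER_SYMBOL]), 2)
        symbols.append(F[index])
    return symbols

print(modulate([1, 0, 1, 0, 0, 1]))  # "101" -> F[5] = 3000 Hz, "001" -> F[1] = 1000 Hz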

The receiving camera can demodulate the signal if it can measure the strip width in the received image and convert it back to the transmitted frequency.

³Note that digital zoom does not help with this symbol loss problem. This is because it produces a closer view of a particular area without adjusting the camera's lens. Thus, the rows of pixels occupied by the light and the duration of signal reception are both unchanged.


(a) iPhone 5c (width: 108 pixels) (b) iPhone 5s (width: 164 pixels) (c) iPhone 6 Plus (width: 158 pixels)
Figure 5—Devices receive different stripe patterns. Different devices observe different strip widths even when the transmitted frequency is the same, 250 Hz.

!"

#$ #%

&!'()

&#'*)

!$ !% !+

#" ,-.

/0)

&)

1)

(a) Case 1: frame durationTf,rx> symbol durationTs,tx

!"

#"

!"#$%

!&#'%

&( &) &* &+ &,

"( ") "* "+

-.% -.%

/01

!%

2%

(b) Case 2: frame durationTf,rx< symbol durationTs,tx

Figure 6—Mixed-symbol due to unsynchronization. In unsyn- chronized light-to-camera channels, a received frame might contain a mixture of multiple symbols.

While the basic idea is simple, there are a number of practical challenges to be solved. First, for a given transmitting light, different cameras might observe different strip widths because each camera has a different read-out duration T_r. To verify this, we use different smartphones with the same resolution (2,448 rows) to detect the stripe pattern of a light transmitting a 250-Hz square wave. For a fair comparison, we adjust the locations of the smartphones such that the light occupies the whole image, i.e., it occupies the same number of rows. Fig. 5 shows that the number of strips observed by different cameras is not constant, which indicates that the strip width changes across devices. Second, the strip width in an image is always an integer, while the corresponding signal period is a real number. This difference introduces error into strip width estimation. We will discuss in §3.2 how RollingLight's demodulation copes with device diversity and enhances the accuracy of strip width estimation.

2.3 Unsynchronized Communications

Cameras on the market usually have different frame rates. Moreover, the frame rate of a camera can even fluctuate. For example, as mentioned in [11], the frame rate of the camera on the Samsung Galaxy S3 fluctuates between 21 and 29 fps with a mean of 25 fps. Thus, it is especially difficult for a light-to-camera system to synchronize the transmitting light with a wide variety of cameras. That is, the transmitted symbol rate, i.e., the inverse of the symbol duration T_s,tx, might not be equal to a camera's frame rate, i.e., the inverse of the frame duration T_f,rx. Such an unsynchronized communication channel is very likely to experience the mixed-symbol frame and symbol loss problems. To understand the causes of these problems, we classify the unsynchronized scenarios into two cases.

Figure 7—Example of a mixed-symbol frame. Different strip widths observed in a frame that contains two partial symbols.

!"

#" #$ #%

&#'')

!$ !% !! "# "$

%&

'&

%"()&

*+,

-."" -."" -.""

Figure 8—Symbol loss due to unsynchronization. Some sym- bols are lost during the idle time of discontinuous receiving. (Only occurs in Case 1.)

• Case 1: the frame duration is longer than the symbol duration, i.e., T_f,rx > T_s,tx.

• Case 2: the frame duration is shorter than the symbol duration, i.e., T_f,rx < T_s,tx.

The mixed-symbol frame problem refers to the scenario of receiving a mixture of two or more consecutive transmitted symbols in the same image frame. This problem occurs in both unsynchronized cases. As shown in Fig. 6(a), an example of Case 1, frame f3 contains two transmitted symbols s3 and s4. Similarly, as shown in Fig. 6(b), an example of Case 2, frame f3 contains two transmitted symbols s2 and s3. For an image with mixed symbols, the receiver will observe different strip widths in different parts of the image, as shown in Fig. 7. Even when the transmitted symbol rate is exactly the same as the receiving frame rate, the mixed-symbol frame problem can still occur if the boundaries of the transmitted symbols and the captured frames are not perfectly aligned.

On the other hand, the symbol loss problem means that some transmitted symbols are not detected in any received frame. This problem only occurs in Case 1, i.e., T_f,rx > T_s,tx. The root of this problem is the camera's discontinuous receiving, as mentioned in §2.1. Recall that there exists a time gap between the end of exposure in one frame and the start of exposure in the next frame. When the length of this gap is longer than a symbol duration, some symbols can be entirely dropped. For example, as shown in Fig. 8, symbols s2, s4 and s6 are lost.

To check how severe the above problems can be in unsynchronized light-to-camera channels, we mimic the measurements performed in [11]. Specifically, we configure the light to transmit square waves at several different symbol rates, each of which is used to send 100 symbols. A PointGrey Flea3 camera [2] is used to receive the signal and record 30-fps video. We then use our demodulation scheme, which will be introduced in §3.2, to decode the frames offline. Fig. 9 maps the frame index shown on the y-axis to the corresponding detected symbol indices on the x-axis. In each figure, we use blue and red circles to represent the first and second symbol detected in a frame, respectively. We can see from the figures that mixed-symbol frames exist in most of the cases, even when the frame rate is the same as the symbol rate, as shown in Fig. 9(c). In addition, different levels of unsynchronization lead to different symbol loss ratios, as shown in Fig. 9(e) and Fig. 9(f).


[Figure 9 panels: (a) tx = 20 fps, (b) tx = 29 fps, (c) tx = 30 fps (not aligned), (d) tx = 30 fps (aligned), (e) tx = 40 fps, (f) tx = 50 fps; each plots received frame index versus detected symbol index.]

Figure 9—Symbol losses and mixed symbols caused by different frame rates. Frame patterns received by a camera with a 30 fps frame rate under different transmitted symbol rates. Blue circles represent the first symbol detected in a frame, while red circles represent the second symbol detected in a frame.

We will hence describe in §3.3 how RollingLight addresses the mixed-symbol frame problem. We will also derive the loss probability of cameras with heterogeneous settings. The derivation enables RollingLight to determine a proper amount of redundancy that ensures successful decoding on most commodity cameras.

3. ROLLINGLIGHT DESIGN

3.1 Overview

RollingLight's transmitting light first maps data bits into symbols, each of which represents a number of bits. Each symbol is then represented by a square wave of one of the frequencies in F and of duration T_s,tx. Every n data symbols are used to generate a parity symbol, which is appended to the end of these n data symbols. The value n is picked based on the estimation of the symbol loss probability (see §3.3). We then add a symbol splitter (SS) to the beginning of each symbol in order to address the mixed-symbol frame problem (see §3.3). Finally, a group of symbols is formed into a packet, which is preceded by a preamble used for estimating the read-out duration and performing demodulation. The light then transmits the resultant square wave, as shown in Fig. 10.

RollingLight's receiving camera captures a series of images. It first utilizes a number of image processing techniques to crop the areas occupied by different transmitting lights from each image (see §3.4). It then detects the splitters in order to find the symbol boundaries. Each transmitted symbol might be present in a single image or across multiple consecutive images. Hence, for each symbol, we find the image that contains the longest segment of that symbol. Finally, the longest segments of all the transmitted symbols are demodulated into codeword bits (see §3.2), which are then decoded into data bits.

3.2 RollingLight’s Demodulation

In order to demodulate the symbols, we first need to know the relationship between the transmitted frequency f and the strip width W.

!"#$%&'#

()*%&+' (((,-.

)*%&+' ((((/$0$()*%&+'1

/$0$(

)*%&+'(2 !$"30*

)*%&+'(4 /$0$(

)*%&+'(4 /$0$(

)*%&+'(5

!$"30*(

)*%&+'(2 /$0$(

)*%&+'(6

)) )) )) )) )) )) ))

Figure 10—Packet Format of RollingLight. A preamble symbol is used to learn the read-out duration. Symbols are separated by a symbol splitter (SS). This example adds one parity symbol for every two data symbols. Green, red and blue symbols represent the symbols with a sequence number 1, 2 and 3, respectively.

Here, the strip width is defined as the number of pixels occupied by a bright or dark strip in a received image. Note that, for a square wave of frequency f, the duration of a complete cycle is 1/f seconds. Therefore, for every 1/f seconds a camera exposes, it should be able to read out a pair of bright and dark strips in the received image. On the other hand, recall that the time a camera spends to read out a row of pixels is its read-out duration T_r. Therefore, in theory, the strip width can be found by

W_{real} = \frac{1/(2f)}{T_r} = \frac{1}{2 f T_r}. \quad (2)

Note that the strip width derived from the above theoretical equation is a real number. However, in practice, a receiver can only measure the number of rows occupied by a strip, W, as an integer estimate of W_real, and demodulate the symbol by

f = \frac{1}{2 W T_r}. \quad (3)

Unfortunately, knowing the above relationship is still insufficient for a receiver to demodulate the signals. The reason is that the read-out duration T_r of each camera is usually an unknown parameter. To address this issue, we exploit an idea analogous to the channel estimation used in many RF-based communication systems. In particular, T_r can be considered as the channel of a light-to-camera link in our system. To estimate this channel, we let the transmitting light send a known preamble at the beginning of a packet. This preamble can be used to estimate the read-out duration T_r.⁴ Let f_p be the frequency of the preamble and W_p its measured strip width; the read-out duration can then be estimated by

T_r = \frac{1}{2 f_p W_p}. \quad (4)

Then, any camera can use its learned read-out duration T_r to demodulate a transmitted symbol by first estimating the transmitted frequency f based on Eq. 3 and then mapping f to the closest one in the frequency set F. Another thing worth noting is that this preamble can also be used to determine a proper exposure duration. Specifically, to avoid overexposure, we can start from the smallest exposure duration setting, and gradually increase the value until detection succeeds.
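A minimal sketch of this channel-estimation step and the nearest-frequency demodulation (Eqs. 3 and 4); the preamble frequency, measured widths, and frequency set below are assumed example values.

def estimate_readout_duration(W_p, f_p):
    # Eq. 4: the preamble's frequency f_p is known and its strip width W_p
    # is measured, so the camera's "channel" is T_r = 1 / (2 * f_p * W_p).
    return 1.0 / (2.0 * f_p * W_p)

def demodulate(W, T_r, freq_set):
    # Eq. 3: map the measured integer strip width back to a frequency
    # estimate, then snap it to the closest frequency in the set.
    f_est = 1.0 / (2.0 * W * T_r)
    return min(freq_set, key=lambda f: abs(f - f_est))

T_r = estimate_readout_duration(W_p=4.46, f_p=5600)                 # ~20 us read-out
print(demodulate(W=25, T_r=T_r, freq_set=[500, 1000, 1500, 2000]))  # -> 1000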

We can also observe from Eq. 2 that the set of usable frequencies F should be carefully selected according to two factors: 1) the estimation error of the strip width, and 2) the maximal and minimal strip widths that can be detected by most commodity cameras.

⁴We let the light transmit at a frequency higher than the cameras' Nyquist sampling rate when the light is idle, i.e., sending nothing. As a result, cameras do not see any strips when the light is idle, and hence can use image processing techniques to locate the start of a preamble. In addition, to prevent missing a preamble, we send the preamble for 3 symbol durations.


Say a camera can detect strip widths from W_min to W_max with a maximal estimation error ω, i.e., |W_real − W| ≤ ω. Then, we can select the frequency set as

F = \left\{ \frac{1}{2 (W_{min} + 2k\omega) T_r} : \forall k = 0, 1, \cdots, \left\lfloor \frac{W_{max} - W_{min}}{2\omega} \right\rfloor - 1 \right\}. \quad (5)

By doing this, the difference between the strip widths of two consecutive frequencies is a constant 2ω. This hence allows us to tolerate an error in strip width estimation of up to ±ω.
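The sketch below evaluates Eq. 5 directly; the read-out duration is an assumed example value, while the width bounds match the implementation choices reported later in §5.1.2.

import math

def frequency_set(W_min, W_max, omega, T_r):
    # Eq. 5: space the strip widths of consecutive frequencies by 2*omega
    # so that a width estimation error within +/- omega is tolerated.
    K = math.floor((W_max - W_min) / (2.0 * omega))
    return [1.0 / (2.0 * (W_min + 2 * k * omega) * T_r) for k in range(K)]

F = frequency_set(W_min=5, W_max=100, omega=1, T_r=20e-6)  # assumed 20 us read-out
print(len(F), round(F[0]), round(F[-1]))  # 47 frequencies, from 5000 Hz downward
# (Endpoint handling in Eq. 5 yields 47 here; the paper's implementation
# counts 48 usable frequencies for the same width range.)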

To improve the probability of correct demodulation, we need to enhance the accuracy of the estimated strip width W. To do so, we adopt a well-known pitch detection algorithm, YIN [8], to find the period of the periodic signal in the received image. YIN is commonly used to find the fundamental period of an audio signal in speech or music. Note that RollingLight modulates data with frequencies whose corresponding time-domain signal periods are separated by a constant gap (i.e., a constant strip width difference). Therefore, YIN, which performs time-domain auto-correlation, is especially suitable for our application, as compared to an FFT transformation that identifies the peak frequency. Another advantage of the YIN-based algorithm is that it naturally addresses the non-uniform brightness of a captured light. In particular, previous work [18, 11] has observed that the center of the light received in the image is usually brighter than the corners. The YIN-based algorithm [8] already incorporates several tricks to handle waveforms with a slowly changing DC component, i.e., a surface with non-uniform brightness.

YIN-based strip width estimation: We start by summing up all the pixels in each row to obtain a one-dimensional signal:

I[y] = \sum_{x=1}^{X_{max}} I[x][y], \quad (6)

where I[x][y] is the intensity (luminance)⁵ of the pixel at location (x, y) in the received image. This operation averages out the noise in different columns of pixels. We can then use the following difference function d(δ) to find the period of the periodic signal I[y], i.e., twice the strip width, 2W:

d(\delta) = \sum_{y=1}^{H/2-1} \left( I[y] - I[y+\delta] \right)^2. \quad (7)

Ideally, if I[y] is a clean square wave without noise, d(δ) = 0 when δ is a multiple of the signal period, i.e., δ = 2kW for all k = 1, 2, 3, … in our considered problem. Intuitively, to find the signal period, we can search for the smallest integer value δ in [1, H/2] such that d(δ) = 0. However, the received signals are usually disturbed by noise, and hence the difference function is very unlikely to be exactly zero at the period. This makes the task of finding the first period in the disturbed signal more challenging. To cope with this, we adopt the tricks proposed in [8] to find the first period δ, and estimate the strip width by W = δ/2.

The above search procedure only considers integer values of δ. However, as derived in Eq. 2, the actual signal period 2W_real is typically a real number, which is proportional to the inverse of the read-out duration, i.e., the sampling rate. Hence, we further use parabolic interpolation to improve the accuracy of the strip width estimation as follows:

W = \hat{\delta}/2 = \frac{\delta}{2} + \frac{y_{+1} - y_{-1}}{4\,(2 y_0 - y_{+1} - y_{-1})}, \quad (8)

where y_{-1} = d(δ−1), y_0 = d(δ), and y_{+1} = d(δ+1).

⁵If the obtained image is in RGB instead of grayscale, it needs to be converted to obtain the luminance information.
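A simplified sketch of the whole estimation chain (Eqs. 6–8). It takes the global minimum of the difference function, whereas the full YIN algorithm [8] searches for the first sufficiently deep dip and adds normalization; the synthetic stripe image is an assumed test input.

import numpy as np

def yin_strip_width(image):
    I = image.sum(axis=1).astype(float)       # Eq. 6: collapse each row
    H = len(I)
    # Eq. 7: difference function for integer lags delta = 1 .. H/2 - 1.
    d = np.array([np.sum((I[:H // 2] - I[delta:delta + H // 2]) ** 2)
                  for delta in range(1, H // 2)])
    delta = int(np.argmin(d)) + 1             # candidate period 2W (integer)
    # Eq. 8: parabolic interpolation around the minimum for sub-pixel accuracy.
    if 2 <= delta <= len(d) - 1:
        y_m, y_0, y_p = d[delta - 2], d[delta - 1], d[delta]
        denom = 4.0 * (2.0 * y_0 - y_p - y_m)
        delta_hat = delta + (y_p - y_m) / denom if denom != 0 else float(delta)
    else:
        delta_hat = float(delta)
    return delta_hat / 2.0                    # strip width W

# Synthetic stripes with a true width of 12.5 rows:
rows = (np.floor(np.arange(400) / 12.5) % 2).reshape(-1, 1)
print(yin_strip_width(np.tile(rows, (1, 64)) * 255.0))  # ~12.5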

3.3 Dealing with Unsynchronized Channels

As mentioned in §2.3, unsynchronized light-to-camera channels cause mixed-symbol frames and symbol losses. We now describe how RollingLight addresses these two issues.

Separating mixed symbols in an image: Receiving a frame with mixed symbols is a very frequent event that cannot be neglected. An image that contains a mixture of symbols with different frequencies could cause the YIN-based strip width estimation to fail. In order for a receiver to separate those mixed symbols in an image, RollingLight inserts a short symbol splitter before the beginning of each data symbol, as shown in Fig. 10. A splitter is sent for a given time period T_s and with a known frequency f_s, which is not used by the preamble and data symbols. To detect the location of the splitter in an image, the receiver first uses its read-out duration T_r, which is learned from the preamble, and the known frequency f_s to calculate the expected strip width of the splitter, W_s = 1/(2 f_s T_r). The receiver can further estimate the number of pixel rows occupied by a splitter as H_s = T_s / T_r. Given the expected strip width W_s and the size of a splitter H_s, we can locate the splitter with an approach similar to our YIN-based algorithm. Specifically, we calculate the difference function at the period 2W_s for a group of H_s pixel rows starting from the i-th row as follows:

d(2W_s, i) = \sum_{y=i}^{i+H_s/2-1} \left( I[y] - I[y + 2W_s] \right)^2.

The difference value d(2W_s, i) approximates zero if the splitter is located at the i-th row. Therefore, the most probable location of the splitter in an image can be found by

i^* = \arg\min_i d(2W_s, i),

and we determine that the i*-th row is the start of a splitter if d(2W_s, i*) is smaller than a threshold. In our measurements, we found that the splitter can be detected with a fairly high probability when it contains 6 pairs of strips. We hence use this as the default setting in our experiments.
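A sketch of this sliding-window search, assuming I is the row-sum signal of Eq. 6 and that W_s and H_s have already been computed from the learned T_r; the threshold is an assumed tuning parameter.

import numpy as np

def locate_splitter(I, W_s, H_s, threshold):
    # Evaluate the difference function at the splitter's known period 2*W_s
    # over a window of H_s rows starting at each candidate row i.
    lag = int(round(2 * W_s))
    half = int(H_s) // 2
    scores = np.array([np.sum((I[i:i + half] - I[i + lag:i + lag + half]) ** 2)
                       for i in range(len(I) - half - lag)])
    i_star = int(np.argmin(scores))
    # Declare a splitter at row i_star only if its score clears the threshold.
    return i_star if scores[i_star] < threshold else None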

Recovering from symbol losses: To ensure that a receiver can recover from symbol losses, we append a parity symbol after every n data symbols, generated by XOR-ing those n data symbols. Then, if any one of those n data symbols is lost, it can be recovered by XOR-ing the remaining (n − 1) data symbols with the parity symbol. However, there are still two open problems: i) How much redundancy is required to serve most commodity devices? ii) How do we determine the location of a lost symbol so that we can put the recovered symbol in the right location?

To answer question i), we first need to understand how many losses would be observed by different devices. We note that a symbol loss event occurs if the whole symbol is sent during the receiver's idle gap, implying that the symbol duration is shorter than the duration of the idle gap, as shown in Fig. 8. Therefore, if the symbol duration T_s,tx is fixed, the symbol loss probability that a receiver observes can be estimated by

P_{loss} = \max\!\left( \frac{T_{gap} - T_{s,tx}}{T_{f,rx}},\, 0 \right) = \max\!\left( \frac{T_{f,rx} - (H-1)T_r - T_e - T_{s,tx}}{T_{f,rx}},\, 0 \right). \quad (9)

We can see from the above equation that different cameras could experience quite different loss rates due to their heterogeneous parameters, e.g., T_r and T_e, and the uncertain size of the light captured in the image, H.


(a) Original (b) Denoise (c) Haar-like (d) Result
Figure 11—Image processing for locating RollingLight's transmitting lights. The top row shows the images taken with an exposure duration of 1/5000 sec, while the bottom row shows the images taken with an exposure duration of 1/400 sec.

Note that light-to-camera communication is a one-way communication link, via which the transmitter cannot get any information from the receivers. As a result, it is very hard for a light to dynamically adapt the level of redundancy required by a specific receiver. To trade off compatibility against achievable data rate, a system designer can find the maximal loss rate P_loss observed over a finite set of devices that the system targets to serve. To ensure that this set of devices is resilient to symbol losses, we can add one parity symbol for every ⌊1/P_loss⌋ data symbols.
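The sketch below evaluates Eq. 9 and the resulting parity spacing; the device parameters are assumed examples (a 30 fps camera receiving 50 symbols/s, i.e., Case 1), not measurements from Table 1.

def loss_probability(T_f_rx, T_r, T_e, T_s_tx, H):
    # Eq. 9: a symbol is lost when it falls entirely inside the idle gap.
    T_gap = T_f_rx - (H - 1) * T_r - T_e
    return max((T_gap - T_s_tx) / T_f_rx, 0.0)

P_loss = loss_probability(T_f_rx=1 / 30, T_r=20e-6, T_e=1e-3,
                          T_s_tx=1 / 50, H=300)
n = int(1 / P_loss) if P_loss > 0 else None   # one parity per n data symbols
print(round(P_loss, 3), n)                    # ~0.19 -> parity every 5 symbols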

To solve the second problem, we label each symbol, including the parity symbols, with one of the sequence numbers {0, 1, 2}, as shown in Fig. 10, where three different colors of symbols represent the three different sequence numbers. That is, the sequence number of the k-th symbol is k mod 3. Three sequence numbers are sufficient for RollingLight for the following reasons. First, the loss pattern in light-to-camera channels is quite regular, rather than random. Second, from our measurements performed on diverse devices (see §5.2), the maximum loss probability is about 0.5. Together, these two observations imply that we do not need to worry about consecutive losses.

The three sequence numbers can, in theory, be represented by 1.5 bits. However, if we simply added digital bits as a sequence number right before a data symbol, it would require a 2-bit overhead, which is wasteful. To efficiently utilize the available bandwidth, we exploit the following trick. We partition the set of available modulation frequencies into three equal-sized groups, F1, F2 and F3. If the sequence number of a symbol is i, we modulate the symbol using one of the frequencies in Fi. The receiver can then easily detect the sequence number by checking which group the demodulated frequency belongs to. By doing this, we improve the number of bits per symbol from log2(|F|) − 2 = log2(|F|/4) to log2(|F|/3).
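A sketch of this grouping trick under an assumed 48-frequency set: the k-th symbol draws its frequency from group k mod 3, so the receiver recovers the sequence number for free from group membership.

F = [500 + 100 * i for i in range(48)]        # assumed 48-frequency set (Hz)
GROUPS = [F[0::3], F[1::3], F[2::3]]          # F1, F2, F3: 16 frequencies each

def modulate_with_seq(value, k):
    # Encode a data value (0..15, i.e., log2(48/3) = 4 bits) in group k mod 3.
    return GROUPS[k % 3][value]

def sequence_number(f_rx):
    # The demodulated frequency's group reveals the sequence number.
    return next(i for i, g in enumerate(GROUPS) if f_rx in g)

f = modulate_with_seq(value=9, k=7)           # 8th symbol -> group 1
print(f, sequence_number(f))                  # 3300 1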

So far, we have only discussed the case where a symbol is entirely dropped. There exists another case in which a symbol is received but demodulated incorrectly, because the number of received strips is not enough for an accurate strip width estimation. In this case, we can also try to utilize the parity symbol to correct the erroneous data symbol. In theory, however, the parity can only be used to "detect" an error, rather than "correct" it, because we have no idea which data symbol is in error. Hence, when we detect an error in a chunk of data symbols, we guess that the symbol in the chunk with the smallest number of strips is in error, and use the parity symbol to correct it. Given that the parity symbols can also be used to correct low-confidence symbols, we can slightly increase the amount of redundancy by adding one parity symbol for every ⌊1/(P_loss + P_err − P_loss P_err)⌋ data symbols, where P_err is the symbol error probability we want to protect against.

(a) Prototype on USRP (b) Real system on Arduino (c) Experiment scenario
Figure 12—Experimental setup. We implemented RollingLight's transmitter on both USRP and Arduino systems, and experimented using different receiving cameras. We also implemented an iOS decoding application to check real-time decoding capability.


3.4 Image Pre-processing

Before measuring the strip width for demodulation, RollingLight's receiver performs a number of image processing steps to denoise the received images and find the boundaries of the target transmitting lights. For example, images taken with a larger exposure duration might contain background noise. In addition, the appearance of the light (e.g., a repeated lamp pattern) might also interfere with the stripe pattern in the captured image. Therefore, we first apply a noise subtraction technique to reduce the noise. In particular, we generate a background image by averaging the latest few received images, and then subtract the background from the current image to obtain the denoised image (Fig. 11(b)). To identify the lights that display the stripe pattern, we apply the Haar-like filter [23] on the denoised image. After Haar-like filtering, the pixel values of the areas occupied by conventional lights decrease, as shown in Fig. 11(c). We can finally apply Luxapose's boundary detection mechanisms [15] on the resulting image to locate the boundaries of RollingLight's transmitting lights. We refer the readers to [15] for the detailed boundary detection techniques. Fig. 11(d) illustrates the final denoised images overlaid with the boundaries of the detected transmitting lights.
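A minimal sketch of the noise-subtraction step, assuming 8-bit grayscale frames; the history length is an assumed tuning parameter, and the Haar-like filtering and Luxapose boundary detection are not reproduced here.

import numpy as np
from collections import deque

class Denoiser:
    def __init__(self, history=5):
        self.frames = deque(maxlen=history)   # the latest few received images

    def denoise(self, frame):
        # Average the recent frames into a background estimate, then
        # subtract it so that mostly the changing stripe pattern remains.
        self.frames.append(frame.astype(np.float32))
        background = np.mean(np.stack(self.frames), axis=0)
        return np.clip(frame.astype(np.float32) - background, 0, 255).astype(np.uint8)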

More sophisticated detection and tracking algorithms based on computer vision techniques can be utilized if necessary. Note that only denoising needs to be performed per frame. We will show in §5.3 that denoising can be done in real time. All the other pre-processing is a one-time job at the beginning of a transmission. Its complexity hence does not affect real-time decoding.

4. IMPLEMENTATION

We implemented two RollingLight prototypes. The first one is a flexible experimental prototype, built with the USRP N200 [1] software defined radio, as shown in Fig. 12(a). The flexibility allows us to examine RollingLight's performance in a variety of scenarios. The other one is built with the Arduino Mega 2560 board [13], as shown in Fig. 12(b).


!"

#$%!&'(( )*+,-./,0

#123 4556789-7+,

:7;9<7-3=->/*,/- %)3"9<6/

4*0?7,+

/;93'!"( :*+#/3 $$)=%3

&.57,3#,<?8'6/03"9<6/

(7*/

(7*/

=)-/*,963!+*/*

+*70;/6?)3+,%4.!""--((3 .=234**9/

!"#

!$#

(a) Transmitter components

!!"#$%&'(

)*'+, -+.'&+

/- 0'1!&2"#34 566*"1+$"!#

7.+&$68!#'9 :7;4,<=4-+>*'

;?"*2 "#4-+.'&+4 0'1!&2"#34566

/-

!'1!2"#34 /&!3&+.

7.+&$68!#'94

""#74!#*($

0'+* $".'4

!'1!2"#34566

!!"

#$"

#%"

(b) Receiver components

Figure 13—System components of the transmitter and receiver.

This is close to a real system and is used to verify RollingLight's real-time decoding ability. In the flexible prototype, the transmitting light is connected to a USRP N200 via a special front-end board, as shown in Fig. 13(a). We implement in the USRP all the transmission components of RollingLight, including modulation, preamble/splitter insertion, and parity coding. The USRP generates voltage-varying signals with a sampling rate of 1 MHz, which are then converted to current-varying signals by the front-end and used to drive the LED light. On the other hand, we also implement all the components on an Arduino Mega 2560 board, which is connected to a MOSFET component that amplifies the signal to a proper voltage level for an LED light. The Mega 2560's microcontroller has hardware support for generating pulse width modulation (PWM), and thus can be used to generate square waves of different frequencies. Unless otherwise specified, we set the default symbol rate to 30 symbols per second. We test our design using different types of LEDs, such as the Bridgelux BXRA-56C1100 LED array.

On the receiver side, we experiment using a number of devices, including an experimental USB camera, a PointGrey Flea3 camera [2], and different smartphones, such as the Apple iPhone 5s, HTC New One, and Samsung Galaxy S4, as shown in Fig. 13(b). The experimental USB camera allows us to check the performance under various settings, such as different exposure durations and frame rates. To check RollingLight's real-time decoding capability, we developed all the decoding components, including image pre-processing, splitter detection, demodulation and loss recovery, as a real application on iOS devices. For all the other devices, we record video and perform offline decoding. To speed up the decoding process in the iOS application, we implement the application using multiple threads. One thread reads the images captured by the camera and puts them into a pending queue. The other thread pulls the buffered images from the pending queue and performs demodulation and decoding.
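A language-agnostic sketch of that two-thread pipeline (the actual receiver is an iOS application); camera_read and decode_frame stand in for the real capture and decoding routines.

import queue
import threading

pending = queue.Queue(maxsize=8)              # buffered images awaiting decode

def capture_loop(camera_read):
    # Producer: read frames from the camera and enqueue them.
    while (frame := camera_read()) is not None:
        pending.put(frame)
    pending.put(None)                         # end-of-stream marker

def decode_loop(decode_frame):
    # Consumer: pull frames and run pre-processing, splitter detection,
    # demodulation and loss recovery on each one.
    while (frame := pending.get()) is not None:
        decode_frame(frame)

# Usage sketch:
# threading.Thread(target=capture_loop, args=(cam.read,)).start()
# threading.Thread(target=decode_loop, args=(decoder.process,)).start()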

5. RESULTS

In this section, we first perform extensive measurements that help us understand the characteristics and limitations of LOS light-to-camera channels. We then experimentally evaluate the performance of RollingLight. The measurements and experiments are carried out in the indoor testbed environments shown in Fig. 12(c), and are designed to answer the following questions:

Table 1—Timing Parameters of Different Devices

Device              | Image Resolution (X_max x Y_max) | Frame Rate (fps) | Measured Read-out Duration (µs) | Time Gap (ms) (% of Frame Duration)
Point Grey Flea3    | 2048x1080 | 30    | 14.73 | 17.42 (52.27%)
Apple iPhone 6 Plus | 1920x1080 | 30    | 21.42 | 10.20 (30.60%)
Apple iPhone 5s     | 1920x1080 | 29.98 | 20.65 | 11.03 (33.10%)
HTC New One         | 1920x1080 | 29.94 | 19.08 | 12.79 (38.30%)
Samsung Galaxy S4   | 1920x1080 | 29.93 | 25.53 | 5.84 (17.48%)


• How large is the strip width estimation error, and how does it affect the achievable throughput?

• How does RollingLight’s demodulation perform, compared to other schemes?

• What is RollingLight's achievable throughput in different environments for a variety of cameras?

• Can RollingLight’s decoding be done in real-time?

5.1 Characterizing LOS Light-to-Camera Links

5.1.1 Estimated Timing Parameters

We first measure the timing parameters of different devices on the market, including the frame rate, the read-out duration and the idle gap.

Measurement setting: We let the light repetitively send the preamble, i.e., a square wave of frequency 5,600 Hz; hence, there is no need to worry about the symbol duration here. We record the video captured by each device, and obtain the frame rate of each device as output by the ffmpeg video decoder. Each device then uses the received signals to learn its read-out duration based on Eq. 4. To improve the accuracy of read-out duration estimation, we try to decrease the exposure duration of a device, thereby increasing the intensity difference between bright and dark strips. To do so, we manually set the exposure duration of the Flea camera to its minimum value. However, the exposure duration of the tested smartphones cannot be configured manually. Thus, we reduce the distance of the light-to-camera link so that the camera automatically picks a low exposure duration.

Results: Table 1 summarizes the timing parameters of the tested devices. Our key findings are as follows.

• The frame rates of different cameras are slightly different, but are all close to 30 fps. We hence use a symbol rate of 30 symbols per second in all the later measurements and experiments.

• The read-out durations of different devices range from 19 µs to 25.5 µs, which verifies the need for each device to learn its "channel" in order to accurately detect the frequency-modulated signals.

• Due to heterogeneity in devices' read-out durations, devices could experience very different idle gaps, and could lose different amounts of information. In addition, when the transmitting light does not illuminate all the rows in the image, this time gap further increases. Even if the transmitter sets the symbol rate similar to that of most devices, some devices might still see symbol losses when the size of the captured light is small and the exposure duration is short. Hence, a transmitting light should add redundancy reliable enough for all the devices and target scenarios it wants to serve. For example, say we hope that any camera can reliably receive the information if it captures the light with at least H rows of pixels. Then, we need to use H and the measured timing parameters of different cameras to estimate P_loss in the worst case and add redundancy accordingly.


[Figure 14 panels: (a) strip widths observed by different cameras (strip width in pixels versus frequency in Hz, for the front and back cameras of the iPhone 5s, New One, and Galaxy S4); (b) error in strip width detection (width error in pixels versus real strip width W_real); (c) ideal achievable data rate (bytes per second versus widest strip width allowed).]

Figure 14—Available frequencies. Measurement results of different devices for frequency set selection. The set of usable frequencies is chosen according to both the received strip widths and the width detection error of diverse devices.


5.1.2 Required Frequency Spacing

Recall that the set of usable frequencies should be determined based on 1) the strip widths that can be detected by cameras, and 2) the spacing required between two consecutive frequencies, as mentioned in §3.2. Therefore, we next use measurements to identify a suitable set of frequencies that can be detected by most commodity devices on the market.

Measurement setting: The light modulates the symbols using a series of frequencies between 300 Hz and 5,600 Hz, and continuously sends the modulated signals with a symbol duration of 1/30 seconds. Each device captures the light with a size of around 1,000 rows of pixels, and uses the preamble to learn its read-out duration.

Results: Fig. 14(a) plots the strip width detected by each camera for various signal frequencies. The figure shows that, for a given frequency, different cameras have distinct read-out durations, and hence can observe very different strip widths. The difference is especially large in the low frequency region, which corresponds to wide strips. This implies that, without knowing the read-out duration, a device cannot decode the frequency-modulated signals, and channel estimation becomes necessary for realizing high-order frequency modulation. In addition, we should apply the following strategy to pick the lowest and highest frequencies.

• To pick the lowest frequency f_min, we need to know the minimal detectable frequency of each device i, denoted by f_min^i, and select the most conservative one, f_min = max_i f_min^i. Given a fixed LED size in the image, the number of strips observed by different cameras will be different. Because any camera requires at least a few pairs of dark and bright strips to detect their period, the lowest detectable frequency for each camera is hence also different. For example, if we need at least 6 strips (i.e., 3 pairs) in a captured light occupying 300 rows of pixels, the widest strip we can allow is 50 pixels. We should then pick 1/(100 T_r^min) as the lowest frequency, where T_r^min is the shortest possible read-out duration among the target devices.

• On the other hand, to pick the highest frequency f_max, we need to know the maximal detectable frequency of each device i, denoted by f_max^i, and select the most conservative one, f_max = min_i f_max^i. Say we expect that a camera should capture the light with a strip width of at least 6 pixels. Then, we should pick 1/(12 T_r^max) as the highest frequency, where T_r^max is the longest possible read-out duration among the target devices. (A small sketch of this conservative range selection follows this list.)
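A sketch of the conservative range selection across a target device set, using the read-out durations from Table 1 and the example numbers above (at least 3 strip pairs in a 300-row light, at least 6 pixels per strip).

T_r_devices = [14.73e-6, 21.42e-6, 20.65e-6, 19.08e-6, 25.53e-6]  # Table 1 (s)

H = 300                                 # rows occupied by the captured light
W_widest = H / 6                        # >= 6 strips -> widest strip is 50 px
f_min = 1 / (2 * W_widest * min(T_r_devices))   # = 1/(100 * T_r^min)
f_max = 1 / (2 * 6 * max(T_r_devices))          # = 1/(12 * T_r^max)
print(round(f_min), round(f_max))       # ~679 Hz to ~3264 Hz for these devices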

We next check what frequency spacing is resilient to the error in strip width estimation. Fig. 14(b) plots the estimation error for various strip widths. Since, in this measurement, the frequencies of the transmitted symbols are known to each device, we can use the learned channel T_r to calculate the real strip width as W_real = 1/(2 f T_r). The estimation error is defined as |W_real − W|, where W is the width measured from the received images using the YIN algorithm. The results show that the error of some devices fluctuates as the frequency changes, since the error due to discretization is periodic with respect to the signal period and hence the frequency. In addition, a wider strip suffers from a larger error. This is because, when the LED size in an image is fixed, a wider strip leads to a smaller number of strips, which decreases the accuracy of the YIN algorithm. However, the estimation error is bounded by one pixel over the entire range of tested frequencies. We can hence set the gap between the strip widths of two consecutive frequencies to two pixels, and determine the frequency set based on Eq. 5.

We finally want to check the ideal achievable data rate of each camera when we customize the optimal frequency set F for that camera. In particular, for each camera with a read-out duration T_r, f_max is set to 1/(2 T_r) and f_min is set to 1/(2 W_max T_r), where W_max is the widest strip width allowed. The optimal frequency set of each camera is then chosen based on Eq. 5 by setting ω to the maximal estimation error it experiences. The achievable data rate is then computed with log2(|F|/3) bits per symbol, because we use three sequence numbers. Fig. 14(c) shows the ideal achievable rate of different devices when the widest strip width we allow varies from 1 pixel to 100 pixels. A camera with a smaller error in strip width estimation can achieve a higher rate. In general, the achievable rate increases when we allow wider strips. It has a limit, however, because the allowed strip width is eventually constrained by the resolution of an image. Fortunately, we can observe from the figure that the data rate converges quickly as the allowed strip width increases. Hence, in our system implementation, we set the narrowest and widest strip widths to 5 pixels and 100 pixels, respectively, with a spacing of 2 pixels, which corresponds to a set of 48 available frequencies.

5.1.3 Demodulation Error Probability

We next compare the performance of our YIN width estimation algorithm with two baseline approaches: i) a DIP-based scheme, and ii) an FFT-based scheme. All the schemes sum up the pixels in each row to generate a one-dimensional signal, as shown in Eq. 6. The DIP-based scheme uses the average value as a threshold to partition the signal into two parts, dark strips and bright strips, and outputs the average width of all the strips. The FFT-based scheme uses the Fourier transform to find the spatial frequency of the stripe pattern (i.e., 1/(2W)), which equals the frequency of the transmitted signal f multiplied by the read-out duration T_r.


[Figure 15 panels plot detection error probability (0%–100%) for the FFT, DIP and YIN schemes.]

(a) Error for different exposure durations (frequency: 3,400 Hz) (b) Error for different frequencies (exposure duration: 29.28 ms)
Figure 15—Detection error of different width detection algorithms. The YIN-based algorithm can support a relatively high exposure duration and a wide range of frequencies.


Experimental setting: We compare the strip width estimation error of the three schemes in various scenarios, e.g., different exposure durations and different transmitted frequencies. Since smartphones do not allow us to fine-tune these parameters, we use the Flea camera in this experiment. The camera captures a light occupying the entire image. The estimated width is deemed incorrect if the estimation error exceeds one pixel, i.e., |W_real − W| > 1.

Results: Fig. 15(a) plots the estimation error probability for different settings of the exposure duration, ranging from 0.01473 ms to 29.28 ms. The transmitted frequency is fixed to 3,400 Hz in this experiment. There exist some periodic jumps because the camera can hardly observe the stripe pattern when the exposure duration happens to be a multiple of the period of the square wave. In general, the estimation error increases with the exposure duration, because the difference between the intensities of bright and dark strips becomes smaller when the camera accumulates charge for longer. The figure shows that both the FFT-based and DIP-based schemes fail to operate when the exposure duration exceeds 18 ms. In contrast, our YIN-based algorithm still maintains a certain level of estimation accuracy, and hence can be applied in more environments.

Fig. 15(b) plots the estimation error probability for different modulation frequencies. The tested frequencies vary from 300 Hz to 5,600 Hz, and the exposure duration is fixed to 0.01473 ms. The results show that the performance of the YIN estimation is consistently reliable over a wide range of frequencies. The FFT-based approach, however, experiences a much higher error probability in the low frequency band. This is because a unit of error in the frequency domain, i.e., |f − f̃| = 1, corresponds to a larger error in strip width estimation, i.e., |W_real − W̃| = |1/f − 1/f̃| / (2T_r) ≈ 1/(2 f² T_r), when the frequency f is small. On the other hand, the DIP-based approach performs worse in the high frequency band. When the strips become narrow, any pair of bright and dark strips disturbed by noise contributes a significant error to the average width. In contrast, both the YIN-based and FFT-based schemes are capable of filtering out the effect of burst noise, and hence perform well even when the strips are narrow.

5.1.4 SNR of LOS and NLOS Links

We next check how ambient lights affect the LOS and NLOS light-to-camera links.

Measurement setting: We measure the SNR of a link at different distances, ranging from 1 meter to 5 meters.

[Figure 16: SNR (dB) versus distance (1–5 m) for LOS and NLOS links with 0, 2, and 4 interferers.]

Figure 16—Impact of ambient noise on the SNR of LOS and NLOS links. LOS links achieve a higher SNR than NLOS links. In addition, LOS links receive direct light signals, and are hence more resilient to ambient noise.

The SNR of a link is measured in two steps. A camera first receives a number of images when the light is turned off, and then receives a number of images when the light uses RollingLight to send a pure 2,000 Hz sine wave. To check the impact of ambient light, we further deploy different numbers of interfering light sources (1,170 lm per light source) near the transmitting light. The transmitting light and all interfering lights point in the same direction. We calculate the received power of the captured stripe pattern in an image by finding the average power at the transmitted frequency. The SNR is then calculated as the average received power computed from the images received in the second step divided by the noise power, i.e., the power computed from the images received in the first step. We report the average results of the measurements collected from the Flea camera.
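A sketch of this two-step computation, assuming grayscale frames and using the power at the stripe's spatial frequency (f · T_r cycles per row) as the "power at the transmitted frequency"; the exact estimator used in the paper may differ.

import numpy as np

def power_at_frequency(image, f_tx, T_r):
    # Collapse rows (Eq. 6), remove DC, and read the FFT bin at the
    # stripe's spatial frequency f_tx * T_r (cycles per image row).
    I = image.sum(axis=1).astype(float)
    I -= I.mean()
    spectrum = np.abs(np.fft.rfft(I)) ** 2 / len(I)
    k = int(round(f_tx * T_r * len(I)))
    return spectrum[k]

def link_snr_db(off_images, on_images, f_tx, T_r):
    noise = np.mean([power_at_frequency(im, f_tx, T_r) for im in off_images])
    signal = np.mean([power_at_frequency(im, f_tx, T_r) for im in on_images])
    return 10.0 * np.log10(signal / noise)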

Result: Fig. 16 shows that LOS links allow the receiving camera to receive direct light signals, and achieve a much stronger SNR compared to NLOS links, which receive reflected light signals. This reveals that our LOS design is not only useful for realizing visual association applications, but can also improve the capacity of a light-to-camera link. Moreover, the results also demonstrate that LOS links are more resilient to ambient noise, because other light sources only interfere with the camera when they reflect off the surface of the transmitting light. As a result, ambient lights are less likely to interfere with the limited reflecting area, i.e., the light surface, and thus have very little impact on the performance of a LOS link. In contrast, NLOS links receive the reflected signals in the entire image.
