
The simplest policy is to generate the synchronization signals at a constant rate; that is, the synchronization server generates the signals with a constant interval. The synchronization period, denoted by p, is the minimal time interval within which the synchronization signal schedule does not repeat. In this section, the synchronization period is equal to the synchronization signal period. A good synchronization schedule should prolong the synchronization period and shorten the least awake time: a long synchronization period allows the system to tolerate greater clock drift error, and a short least awake time allows the sensors to consume less energy and the server to complete the data stream alignment earlier. (The server has to wait for all the sensors to sample for the least awake time.) We propose two policies for generating constant-interval synchronization signal schedules: simple constant interval and constant interval with different signals. In the following, we discuss how to determine the synchronization period and how to determine the temporal order of the data streams.

We design two period assignment algorithms: one for the case that the maximum transmission delay dt is known in advance, and the other for the case that the maximum clock drift dc is known in advance.

A. Period Assignment for Known Maximum Transmission Delay

When there is an upper bound dt on the transmission delays of all samples and the length of one synchronization signal is l, we set the synchronization period to dt + l units of time (i.e., a synchronization signal is generated every dt + l units of time). With this period, we claim that the synchronization error is bounded by the length of the synchronization signal, i.e., l.
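As a small numeric sketch of this rule (the helper name and the convention that the first signal starts at time 0 are ours, not the paper's), the schedule is simply an arithmetic progression with step dt + l:

```python
def signal_schedule(d_t, l, count):
    """Start times of the first `count` synchronization signals when the
    synchronization period is d_t + l; each signal occupies l units of
    time starting at its start time (illustrative helper, not the paper's
    algorithm)."""
    period = d_t + l
    return [i * period for i in range(count)]
```

For instance, with dt = 8 and l = 2 (the values used later in Fig. 3), the period is 10 and `signal_schedule(8, 2, 3)` returns `[0, 10, 20]`.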

The requirement for assigning the synchronization period is that the data server shall be able to determine the temporal order of the received data streams and assign an estimated sample time to every sample. When the maximum transmission delay is dt, for any sample Si,m whose arrival time is ai,m, the time at which the sampled signal was broadcast must lie in the interval (ai,m − dt − l, ai,m). The right end of the interval represents the case that the sensor samples the beginning of the signal and the message takes almost no time to transmit; the left end represents the case that the sensor samples the end of the synchronization signal and the message takes the maximum transmission delay. Because the synchronization period is dt + l, there is one and only one signal in the interval (ai,m − dt − l, ai,m).
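The uniqueness claim can be checked numerically. The sketch below (function name and the assumption that signals start at multiples of dt + l are ours) counts the signal starts falling in the open interval (ai,m − dt − l, ai,m):

```python
def signal_starts_in_window(a, d_t, l, horizon=1000):
    """Signal start times t with a - d_t - l < t < a, assuming one signal
    starts every d_t + l units of time beginning at t = 0 (illustrative
    assumption, not from the paper)."""
    period = d_t + l
    return [t for t in range(0, horizon, period) if a - d_t - l < t < a]

# With d_t = 8 and l = 2, any arrival time that does not itself coincide
# with a signal start sees exactly one signal start in its window, because
# the open window has length d_t + l, i.e., exactly one period.
for a in range(1, 200):
    if a % 10 != 0:
        assert len(signal_starts_in_window(a, 8, 2)) == 1
```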

Figure 3 shows an example in which the maximum transmission delay is eight units of time and the synchronization signal length is two units of time. In the figure, the dark circle and the gray circle represent the first and second synchronization signals received by the sensors. According to the known maximum transmission delay and the length of the synchronization signals, the first and second synchronization signals will be received in the intervals (0, 10) and (10, 20), respectively. For each stream, we choose the sample Si,k which samples the synchronization signal and whose arrival time is maximal. Suppose the arrival time of sample Si,k is ai,k; the time at which the sampled signal was broadcast must lie in the range (ai,k − dt − l, ai,k). Let tb be the broadcast time of the signal received in this range; the estimated sample time ŝti,k is set to tb. Finally, we assign estimated sample times to the other samples by adding or subtracting multiples of the sample period spi. The CLOCK FREE ALIGNMENT WITH KNOWN MAXIMUM TRANSMISSION DELAY algorithm is shown in Algorithm 1.
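Algorithm 1 itself is not reproduced here, so the following is only a minimal sketch of its idea under our own assumptions: each stream is given as its arrival times, the index of a sample that recorded a synchronization signal, and its sample period, and signals are assumed to start at multiples of dt + l. All names are ours.

```python
import math

def align_known_delay(streams, d_t, l):
    """Sketch of clock-free alignment with a known maximum transmission
    delay. Each stream is a tuple (arrivals, k, sp): the samples' arrival
    times, the index k of a sample that recorded a synchronization signal,
    and the stream's sample period sp. Signals are assumed to start at
    multiples of the synchronization period d_t + l."""
    period = d_t + l
    estimates = []
    for arrivals, k, sp in streams:
        a = arrivals[k]
        # The unique signal start t_b in the open interval (a - d_t - l, a):
        t_b = period * math.floor(a / period)
        if t_b >= a:            # a coincides with a signal start
            t_b -= period
        # Anchor the sync sample at t_b and space the rest by sp.
        estimates.append([t_b + sp * (m - k) for m in range(len(arrivals))])
    return estimates
```

For example, with dt = 8 and l = 2 as in Fig. 3, a stream whose synchronization sample arrives at time 6 is anchored at the signal start 0, and all its other samples are offset from 0 by multiples of its sample period.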

Step 2 of this algorithm takes O(ni) time in the worst case to find a sample that records a synchronization signal. Step 3 takes constant time to find the corresponding synchronization signal, and Step 5 also takes constant time. Hence, the complexity is O(n), where n is the total input size.

We claim that at the start of each iteration of the loop, estimated sample times have been assigned for D1, ..., Di−1, and 0 ≤ stj,m − ŝtj,m ≤ l for 1 ≤ j ≤ i − 1. Before the first iteration of the loop, there are no streams in D1, ..., Di−1, so the loop invariant holds trivially. If the loop invariant holds after the i-th iteration, then in the (i+1)-th iteration we assign estimated sample times for Di+1. Because we assign the time instance at the start of the synchronization signal as the estimated sample time, the estimated sample time is no later than the sample time, and the difference is bounded by l; the loop invariant therefore holds after the (i+1)-th iteration. After all iterations, for any two samples Si,m and Sj,n, both sti,m − ŝti,m and stj,n − ŝtj,n lie in [0, l], so |(sti,m − ŝti,m) − (stj,n − ŝtj,n)| ≤ l. In other words, the synchronization error is bounded by l.

We claim that, by the algorithm, the synchronization error is at most l units of time. The main idea is that the procedure above assigns estimated sample times to a stream by shifting the stream's sample times, and the displacement of each stream is in the range from 0 to l. Hence, the synchronization error is at most l. The proof of this claim follows.

Fig. 3. An example for constant interval when dt = 8 and l = 2

Theorem 1: Given a set of data streams, the maximum transmission delay, and the length of the synchronization signal, the CLOCK FREE ALIGNMENT WITH KNOWN MAXIMUM TRANSMISSION DELAY algorithm assigns an estimated sample time to each sample in the data streams such that the synchronization error is bounded by l.

Proof: Let Si,mi be the sample which records a synchronization signal in stream Di. Because the time difference between the sample time sti,mi and the beginning of the synchronization signal is at most l, the difference between the estimated sample time ŝti,mi and the sample time sti,mi is at most l (i.e., 0 ≤ sti,mi − ŝti,mi ≤ l). Furthermore, we assign the estimated sample time ŝti,m = ŝti,mi + spi × (m − mi) to sample Si,m. Since consecutive samples of a stream are spaced by spi, sti,m − ŝti,m = sti,mi − ŝti,mi ∈ [0, l] for every m. Hence, for any two samples Si,m and Sj,n, the synchronization error satisfies |(sti,m − ŝti,m) − (stj,n − ŝtj,n)| ≤ l, because both differences lie in [0, l]. We obtain the result that the synchronization error is at most l.

B. Period Assignment for Known Maximum Clock Drift

In this subsection, we are concerned with the case that the maximum clock drift dc is known. As a reminder, the maximum clock drift dc is the maximum difference between any two transmission delays; in other words, |di,m − dj,n| < dc for all i, j, m, n. When the maximum clock drift is dc, the synchronization period is set to 2(dc + l). We claim that the synchronization error is again bounded by the length of the synchronization signal, i.e., l.

Before we present the algorithm to align the data streams, we define a sequence of intervals Ti ≡ ((2i − 1)(dc + l), (2i + 1)(dc + l)). We claim that for two samples whose arrival times are ai,m and aj,n, if aj,n − ai,m is in Ti and Si,m samples the p-th signal, then Sj,n samples the (p + i)-th signal.

Corollary 4.1: Given two samples Si,m and Sj,n whose arrival times are ai,m and aj,n, if aj,n − ai,m is in Ti and Si,m samples the p-th signal, then Sj,n samples the (p + i)-th signal.

The CLOCK FREE ALIGNMENT WITH MAXIMUM CLOCK DRIFT algorithm determines the estimated sample times for the streams and is listed in Algorithm 2.

Step 8 of this algorithm also takes constant time, and the other steps are the same as the steps in Algorithm 1. Hence, the total time complexity is O(n1 + ... + nN) = O(n).
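Algorithm 2 is likewise not reproduced here; the sketch below (our names and data layout, with the same stream tuples as in the earlier sketch) captures its idea: anchor a logical clock at one stream's synchronization sample and classify every other synchronization sample by the Ti intervals.

```python
def align_known_drift(streams, d_c, l):
    """Sketch of clock-free alignment with a known maximum clock drift.
    Each stream is a tuple (arrivals, k, sp) as before. Estimated sample
    times are on a logical clock anchored at the first stream's
    synchronization sample, so they are consistent across streams up to
    one shared constant."""
    w = d_c + l
    period = 2 * w                        # synchronization period 2(d_c + l)
    ref = streams[0][0][streams[0][1]]    # arrival of the reference sync sample
    estimates = []
    for arrivals, k, sp in streams:
        delta = arrivals[k] - ref
        i = (delta + w) // period         # signal offset via the T_i intervals
        t_b = i * period                  # logical start time of that signal
        estimates.append([t_b + sp * (m - k) for m in range(len(arrivals))])
    return estimates
```

With the Fig. 4 parameters (dc = 3, l = 2), two synchronization samples arriving at times 5 and 13.5 are anchored at logical signal starts 0 and 10; their estimation offsets then differ by at most l.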

The proof of correctness is similar to the proof for Algorithm 1. There is a constant α such that for each sample Si,m, the estimated sample time satisfies α ≤ sti,m − ŝti,m ≤ l + α; the constant α is the difference between the real time and the logical time. Hence, for any two samples Si,m and Sj,n, both differences lie in [α, l + α], so |(sti,m − ŝti,m) − (stj,n − ŝtj,n)| ≤ (l + α) − α = l. In other words, the synchronization error is bounded by l.

Figure 4 shows an example in which the maximum clock drift is three units of time and the length of the synchronization signal is two units of time. Observe that no sample sensing a signal has an arrival time in the intervals (20, 25], (30, 35], etc. If we construct a logical time scale that sets the arrival time of some sample recording a signal, called sig*, to 0, then any sample recording a signal whose logical arrival time is in Ti records the signal exactly i signals away from sig*.

In summary, the synchronization error is bounded by the length of the synchronization signal, i.e., l. When the maximum transmission delay dt is known, the least awake time for the sensors is dt + l units of time; when the maximum clock drift dc is known, the least awake time is 2(dc + l) units of time.
