Chapter 2: A Digital Quantizer with Shaped Quantization Noise that Remains Well

C. Example Segmented Quantizer and Appearance of Spurious Tones
the spectral shape of the sd[n] sequences, and similarly that the running sum of the quantization noise,

$$t[n] = \sum_{d=0}^{K-1} 2^d\, t_d[n], \qquad (58)$$

is bounded.

The restriction to first-order highpass shaped quantization noise still leaves flexibility in the design of the sd[n] sequences. This flexibility is exploited in the remainder of the paper to ensure that sp[n] for p = 1, 2, …, hs, and tp[n] for p = 1, 2, …, ht are free of spurious tones, where hs and ht are positive integers. By definition, if sp[n] and tp[n] contain spurious tones at a frequency ωn, then (54) and (55), respectively, are expected to be unbounded in probability at ω = ωn as L→∞. Therefore, to establish that there are no spurious tones in either sp[n] or tp[n], it is sufficient to show that (54) and (55) are bounded in probability for all |ω| ≤ π as L→∞. A spurious tone at ω = 0 is just a dc offset, so this case is excluded from consideration. Theorems 1 and 2 in the next section present sufficient conditions on the sd[n] sequences for (54) and (55) to be bounded in probability for every L ≥ 1 and 0 < |ω| ≤ π, thereby ensuring the absence of spurious tones in sp[n] and tp[n].

$$s_d[n] = \begin{cases} 0, & x_d[n] \text{ even}, \\ r_d[n], & x_d[n] \text{ odd},\ t_d[n-1] = 0, \\ 1, & x_d[n] \text{ odd},\ t_d[n-1] = -1, \\ -1, & x_d[n] \text{ odd},\ t_d[n-1] = 1, \end{cases} \qquad (59)$$

where rd[n] is an independent random sequence that takes on the values 1 and –1 with equal probability. The results presented in [29] imply that neither sd[n] nor td[n] contain spurious tones. Therefore, s[n] and t[n] inherit these properties provided the rd[n] sequences for d = 0, …, K−1 are independent. This is demonstrated by the estimated power spectra shown in Figure 13 which correspond to a simulated segmented quantizer with K = 16, x0[n] = 2457, and quantization blocks that implement (59).
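The simulation just described can be reproduced with a short sketch (illustrative Python, not code from the paper; the function names are hypothetical). It implements the block recursion of Property 1 together with (59), accumulates each td[n] as the running sum of sd[n] per (57), and forms s[n] and t[n] as the 2^d-weighted sums of the per-block outputs, assuming the weighting of (58).

```python
import random

def block_step(xd, td_prev, rng):
    """One update of a quantization block per (59): returns (s_d[n], x_{d+1}[n])."""
    if xd % 2 == 0:
        sd = 0                       # x_d[n] even
    elif td_prev == 0:
        sd = rng.choice((-1, 1))     # s_d[n] = r_d[n]
    else:
        sd = -td_prev                # drives t_d[n] back to 0
    return sd, (xd + sd) // 2        # Property 1: exact integer division

def simulate(x0_seq, K=16, seed=0):
    """Returns s[n] and t[n], weighted as t[n] = sum_d 2^d t_d[n] per (58)."""
    rng = random.Random(seed)
    t = [0] * K                      # t_d[0] = 0 (Property 3)
    s_seq, t_seq = [], []
    for x0 in x0_seq:
        xd, s_n = x0, 0
        for d in range(K):
            sd, xd = block_step(xd, t[d], rng)
            t[d] += sd               # t_d[n] = t_d[n-1] + s_d[n]  (57)
            s_n += (2 ** d) * sd
        s_seq.append(s_n)
        t_seq.append(sum((2 ** d) * td for d, td in enumerate(t)))
    return s_seq, t_seq

# K = 16 and the constant odd input x0[n] = 2457, as in Figure 13.
s, t = simulate([2457] * 1000)
```

Since each td[n] is confined to {−1, 0, 1} for this quantizer, t[n] is bounded by 2^K − 1, in agreement with Property 2.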

[Figure: estimated power spectral density (dB) versus normalized frequency for s[n] and t[n].]

Figure 13: Estimated power spectra of the quantization noise and its running sum for the SQ presented in Section II.

However, if the quantization noise or its running sum is subjected to non-linear distortion, spurious tones can be induced. For instance, Figure 14 shows the estimated power spectrum of t^2[n] for the simulation example described above. Discrete spikes are evident in the plot, and it can be shown that the spikes grow without bound in proportion to the periodogram length. Therefore, the spikes represent spurious tones. Thus, subjecting t[n] to second-order distortion is sufficient to induce spurious tones even though t[n] itself is free of them.
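The linear growth of the spikes is easy to reproduce for the first quantization block in isolation (an illustrative check, not code from the paper). With an odd constant input, (59) forces t0[n] to alternate between 0 and ±1, so t0^2[n] is periodic regardless of the random signs, and its periodogram value at ω = π doubles whenever the record length doubles.

```python
import cmath
import random

def t0_squared(L, seed=1):
    """t_0^2[n] for block 0 of (59) driven by an odd constant input."""
    rng = random.Random(seed)
    t, out = 0, []
    for _ in range(L):
        s = rng.choice((-1, 1)) if t == 0 else -t   # x_0[n] odd at every n
        t += s
        out.append(t * t)                           # deterministic: 1, 0, 1, 0, ...
    return out

def periodogram(x, w):
    """I_{x,L}(w) = |sum_n x[n] e^{-jwn}|^2 / L."""
    return abs(sum(v * cmath.exp(-1j * w * n) for n, v in enumerate(x))) ** 2 / len(x)

I1 = periodogram(t0_squared(1024), cmath.pi)   # spike value at w = pi
I2 = periodogram(t0_squared(2048), cmath.pi)   # doubles with the record length
```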

[Figure: estimated power spectrum (dB) versus normalized frequency, showing discrete spikes.]

Figure 14: Estimated power spectra of the square of the running sum of the quantization noise for the SQ presented in Section II.

The spur generation mechanism can be understood by considering the first quantization block. Suppose the input to the segmented quantizer is an odd-valued constant and t0[n−1] = 0 for some value of n. Then (59) implies that (s0[n], s0[n+1]) is either (−1, 1) or (1, −1) depending on the polarity of r0[n]. It follows from (57) that (t0[n], t0[n+1]) is either (−1, 0) or (1, 0), and, by induction, t0[n] has the form {…, 0, ±1, 0, ±1, 0, ±1, 0, …}. Therefore, t0^2[n] has the form {…, 0, 1, 0, 1, 0, 1, 0, …}, which is periodic. A similar, but more involved, analysis can be used to show that the td^2[n] sequences for d > 0 also contain periodic components. These periodic components cause the spurious tones visible in Figure 14.

III. THEORY FOR TONE-FREE QUANTIZATION SEQUENCES

It is assumed throughout the remainder of the paper that the input to the quantizer, x0[n], is an integer-valued, deterministic sequence for n = 0, 1, …, and that the segmented quantizer is designed such that the following properties are satisfied:

Property 1: xd+1[n] = (sd[n] + xd[n])/2 is integer-valued for n = 0, 1, …, and d = 0, 1, …, K − 1.

Property 2: there exists a positive constant B such that |td[n]| < B, for n = 0, 1, 2, … .

Property 3: td[0] = 0, and

$$t_d[n] = f\bigl( t_d[n-1],\ r_d[n],\ o_d[n] \bigr) \qquad (60)$$

where f is a deterministic, memoryless function, {rd[n], d = 0, 1, …, K−1, n = 1, 2, …} is a set of independent identically distributed (iid) random variables, and

$$o_d[n] = x_d[n] \bmod 2 = \begin{cases} 1, & \text{if } x_d[n] \text{ is odd}, \\ 0, & \text{if } x_d[n] \text{ is even}, \end{cases} \qquad (61)$$

is called the parity sequence of the dth quantization block.

Property 1 and the assumption that x0[n] is integer-valued imply that sd[n] is an even integer when xd[n] is even, and an odd integer otherwise. Therefore, (57) implies that td[n] is integer-valued, and Property 2 further implies that it is restricted to a finite set of values. Let T1, T2, …, TN denote these values. Therefore, the function, f, in Property 3 takes on values restricted to the set {T1, T2, …, TN}.

It follows from Properties 1, 2, and 3 that xd+1[n], sd[n], and td[n], for d = 0, 1, …, K−1, and n = 1, 2, …, depend only on the set of iid random variables {rd[n], d = 0, 1, …, K−1, n = 0, 1, 2, …} and the deterministic segmented quantizer input sequence, {x0[n], n = 1, 2, …}. Therefore, the sample description space of the underlying probability space is the set of all possible values of the random variables {rd[n], d = 0, 1, …, K−1, n = 0, 1, 2, …}.

Equation (57) implies that

$$s_d[n] = t_d[n] - t_d[n-1]. \qquad (62)$$

Therefore, it follows from Property 1 that

$$x_d[n] = \bigl( t_{d-1}[n] - t_{d-1}[n-1] + x_{d-1}[n] \bigr)/2, \qquad (63)$$

for 1 ≤ d < K. Recursively substituting (63) into itself and applying (61) yields

$$o_d[n] = \frac{1}{2^d}\left( x_0[n] + \sum_{k=0}^{d-1} 2^k \bigl( t_k[n] - t_k[n-1] \bigr) \right) \bmod 2. \qquad (64)$$

Recursively substituting (60) into itself implies that for any integer n > 0,

$$t_d[n] = g_n\bigl( r_d[n], r_d[n-1], \ldots, r_d[1],\ o_d[n], o_d[n-1], \ldots, o_d[1] \bigr) \qquad (65)$$

where gn is a deterministic, memoryless function. Similarly, for any pair of integers n2 > n1 > 0, recursively substituting (60) into itself m = n2 − n1 − 1 times implies that

$$t_d[n_2] = h_m\bigl( t_d[n_1],\ r_d[n_1+1], r_d[n_1+2], \ldots, r_d[n_2],\ o_d[n_1+1], o_d[n_1+2], \ldots, o_d[n_2] \bigr) \qquad (66)$$

where hm is a deterministic, memoryless function.

Repeatedly substituting (64) into (65) to eliminate the variables {od[n], …, od[1]} and then recursively substituting the result into itself to eliminate the variables {tk[m], k = 0, …, d−1, m = 1, …, n} shows that td[n] is a random variable that depends only on x0[n] (which is deterministic) and the random variables {rk[m], k = 0, 1, …, d, m = 1, 2, …, n}. This in conjunction with (64) implies that od[n] is a random variable that depends only on x0[n] and the random variables {rk[m], k = 0, 1, …, d−1, m = 1, 2, …, n}. In particular, since the random sequence {od[n], n = 0, 1, 2, …} does not depend on the random sequence {rd[n], n = 0, 1, 2, …}, and since all the random variables {rk[m], k = 0, 1, …, K−1, m = 0, 1, 2, …} are statistically independent by Property 3, it follows that {od[n], n = 0, 1, 2, …} and {rd[n], n = 0, 1, 2, …} are statistically independent random sequences. By similar reasoning, the random variable td[n] is statistically independent of the random variables {rd[m], m = n+1, n+2, …}.

Hence, (66) implies that td[n2] conditioned on the random variables td[n1], od[n1+1], od[n1+2], …, od[n2] is a function only of the statistically independent random variables rd[n1], rd[n1+1], …, rd[n2]. By definition, for i ≠ j the random variables {ri[n1], ri[n1+1], …, ri[n2]} are statistically independent of the random variables {rj[n1], rj[n1+1], …, rj[n2]}. Therefore, for i ≠ j the random variables ti[n2] and tj[n2] conditioned on ti[n1], tj[n1], oi[n1+1], oi[n1+2], …, oi[n2], oj[n1+1], oj[n1+2], …, oj[n2] are statistically independent. Consequently, for any positive real numbers p0, …, pK−1,

$$\begin{aligned}
E\!\left[ \prod_{j=0}^{K-1} t_j^{p_j}[n_2] \,\middle|\, t_d[n_1], o_d[n];\ d = 0, \ldots, K-1,\ n = n_1+1, \ldots, n_2 \right]
&= \prod_{j=0}^{K-1} E\!\left[ t_j^{p_j}[n_2] \,\middle|\, t_d[n_1], o_d[n];\ d = 0, \ldots, K-1,\ n = n_1+1, \ldots, n_2 \right] \\
&= \prod_{j=0}^{K-1} E\!\left[ t_j^{p_j}[n_2] \,\middle|\, t_j[n_1], o_j[n];\ n = n_1+1, \ldots, n_2 \right], \qquad (67)
\end{aligned}$$

where the second equality follows from (60) and the independence of the {rd[n], n = 1, 2, …} sequences for d = 0, …, K − 1. This implies that the pmf of the random variable ti[n2] conditioned on ti[n1], oi[n1+1], oi[n1+2], …, oi[n2] is independent of any additional conditioning by tj[n1], oj[n1+1], oj[n1+2], …, oj[n2] for i ≠ j.

The statistical independence of od[n] and rd[n] together with (60) imply that {td[n], n = 0, 1, …} is a discrete-valued Markov random sequence conditioned on the sequence {od[n], n = 0, 1, …}. Whenever xd[n] is odd the one-step state transition matrix for td[n] is given by

$$\mathbf{A}_o = \Bigl[\, P\bigl\{ t_d[n] = T_j \mid t_d[n-1] = T_i,\ o_d[n] = 1 \bigr\} \,\Bigr]_{N \times N}. \qquad (68)$$

Similarly, whenever xd[n] is even the one-step state transition matrix for td[n] is given by

$$\mathbf{A}_e = \Bigl[\, P\bigl\{ t_d[n] = T_j \mid t_d[n-1] = T_i,\ o_d[n] = 0 \bigr\} \,\Bigr]_{N \times N}. \qquad (69)$$

The function f in Property 3 is independent of n and d, so neither matrix depends on n or d.
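For the example quantizer of (59), the state set is {T1, T2, T3} = {−1, 0, 1}, and the two transition matrices can be written down directly: an even input leaves td[n] unchanged (so Ae is the identity), while an odd input moves 0 to ±1 with probability 1/2 each and forces ±1 back to 0. The sketch below (a numerical illustration, not code from the paper) checks that Ae and Ao commute and that repeated application of Ao drives the vector of states to a constant but leaves the vector of squared states oscillating with period 2, consistent with t[n] being tone-free while t^2[n] exhibits tones.

```python
import numpy as np

# States (T1, T2, T3) = (-1, 0, 1) for the example quantizer of (59).
Ao = np.array([[0.0, 1.0, 0.0],    # from t = -1: forced to 0 (s_d[n] = 1)
               [0.5, 0.0, 0.5],    # from t =  0: +-1 with probability 1/2 each
               [0.0, 1.0, 0.0]])   # from t = +1: forced to 0 (s_d[n] = -1)
Ae = np.eye(3)                      # even x_d[n]: s_d[n] = 0, state unchanged

t1 = np.array([-1.0, 0.0, 1.0])     # vector of states T_i
t2 = t1 ** 2                        # vector of squared states T_i^2

commute = np.allclose(Ae @ Ao, Ao @ Ae)     # trivially true since Ae = I
p1_limit = np.allclose(Ao @ t1, 0.0)        # Ao^n t1 = 0 for every n >= 1
oscillates = (np.allclose(Ao @ t2, [0.0, 1.0, 0.0])
              and np.allclose(Ao @ (Ao @ t2), t2))   # period 2: no limit exists
```

The failure of Ao^n t2 to converge is exactly the convergence hypothesis that the theorems of this section require, and that the quantizer of Section IV is designed to satisfy.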

Equation (62) implies that each possible value of sd[n] is given by Tj − Ti for some pair of integers i and j, 1 ≤ i, j ≤ N, so

$$P\bigl\{ s_d[n] = T_j - T_i \mid t_d[n-1] = T_i,\ o_d[n] = 1 \bigr\} = P\bigl\{ t_d[n] = T_j \mid t_d[n-1] = T_i,\ o_d[n] = 1 \bigr\}. \qquad (70)$$

Given that td[n] is restricted to N possible values, sd[n] is restricted to N′ possible values, where N′ ≤ N^2. With identical reasoning to that used to proceed from (63) to (67), it follows that

$$E\!\left[ \prod_{j=0}^{K-1} s_j^{p_j}[n_2] \,\middle|\, t_0[n_1], \ldots, t_{K-1}[n_1], o_d[n];\ d = 0, \ldots, K-1,\ n = n_1+1, \ldots, n_2 \right] = \prod_{j=0}^{K-1} E\!\left[ s_j^{p_j}[n_2] \,\middle|\, t_j[n_1], o_j[n];\ n = n_1+1, \ldots, n_2 \right]. \qquad (71)$$

Given that {td[n], n = 0, 1, …} is a discrete-valued Markov random sequence conditioned on the sequence {od[n], n = 0, 1, …}, the conditional probability mass function (pmf) of td[n2] given td[n1] and od[n] is equal to the conditional pmf of td[n2] given td[n1], td[n1−1], and od[n]. Therefore, (62) implies that (71) is equivalent to

$$E\!\left[ \prod_{j=0}^{K-1} s_j^{p_j}[n_2] \,\middle|\, s_0[n_1], \ldots, s_{K-1}[n_1],\ t_0[n_1-1], \ldots, t_{K-1}[n_1-1],\ o_d[n];\ d = 0, \ldots, K-1,\ n = n_1+1, \ldots, n_2 \right] = \prod_{j=0}^{K-1} E\!\left[ s_j^{p_j}[n_2] \,\middle|\, t_j[n_1], o_j[n];\ n = n_1+1, \ldots, n_2 \right]. \qquad (72)$$

The following definitions are used by the theorems presented below. In analogy to the matrices Ao and Ae, let

$$\mathbf{S}_o = \Bigl[\, P\bigl\{ s_d[n] = S_j \mid t_d[n-1] = T_i,\ o_d[n] = 1 \bigr\} \,\Bigr]_{N \times N'}, \qquad (73)$$

and

$$\mathbf{S}_e = \Bigl[\, P\bigl\{ s_d[n] = S_j \mid t_d[n-1] = T_i,\ o_d[n] = 0 \bigr\} \,\Bigr]_{N \times N'}, \qquad (74)$$

where {Si, 1 ≤ i ≤ N′} is the set of all possible values of sd[n]. Property 3 ensures that neither matrix is a function of n and d. It follows from (70) that each non-zero element of So or Se is equal to an element in Ao or Ae, respectively. For example, if Sk = Tj − Ti, then the element in the ith row and kth column of So is equal to the element in the ith row and jth column of Ao. In this fashion, once Ao and Ae are known, So and Se can be deduced.

Let

$$\mathbf{1} = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}, \quad \mathbf{t}^{(p)} = \begin{bmatrix} T_1^p \\ \vdots \\ T_N^p \end{bmatrix}, \quad \text{and} \quad \mathbf{s}^{(p)} = \begin{bmatrix} S_1^p \\ \vdots \\ S_{N'}^p \end{bmatrix}. \qquad (75)$$

Suppose a sequence of vectors, b[n] = [b1[n], …, bN[n]]^T, converges to a constant vector, b·1, as n→∞. Then the convergence is said to be exponential if there exist constants C ≥ 0 and 0 ≤ α < 1 such that

$$\bigl| b_i[n] - b \bigr| \le C \alpha^n \qquad (76)$$

for all 1 ≤ i ≤ N and n ≥ 0.

Theorem 1: Suppose that the state transition matrices Ae and Ao satisfy

$$\mathbf{A}_e \mathbf{A}_o = \mathbf{A}_o \mathbf{A}_e, \qquad (77)$$

and there exists an integer ht ≥ 1 such that for each positive integer p ≤ ht

$$\lim_{n\to\infty} \mathbf{A}_e^n\, \mathbf{t}^{(p)} = b_p \mathbf{1} \quad \text{and} \quad \lim_{n\to\infty} \mathbf{A}_o^n\, \mathbf{t}^{(p)} = b_p \mathbf{1}, \qquad (78)$$

where bp is a constant and the convergence of both vectors is exponential. Then for every L ≥ 1,

$$E\bigl[ I_{t^p,L}(\omega) \bigr] \le C(\omega) < \infty \qquad (79)$$

for each 0 < |ω| ≤ π. Moreover, the bound C(ω), which is independent of L, is uniform in ω for all 0 < ε < |ω| ≤ π.

By Markov’s Inequality [30], this immediately leads to,

Corollary 1: Under the assumptions of Theorem 1, I_{t^p,L}(ω) is bounded in probability for all L ≥ 1 and for each ω satisfying 0 < |ω| ≤ π.

Proof of Theorem 1: The expectation of It Lp, ( )ω can be expressed as

1 2

1 2

1 2

1 1 2

1 2

1 1

( )

1 2

, 0 0

1 1 1

( )

2

1 2

0 0 0

1 2

[ ( )] 1 [ ] [ ]

1 1

[ ] [ ] [ ]

p

L L

j n n

p p

t L n n

L L L

j n n

p p p

n n n

n n

E I E t n t n e

L

E t n E t n t n e

L L

J J

ω

ω

ω

= =

= = =

 

=  

   

=  +  

+

∑ ∑

∑ ∑ ∑



. (80)

The notation above means that J1 and J2 are defined as the first and second terms, respectively, to the left of the ≜ symbol. Property 2 states that |td[n]| < B, so it follows from (58) that |t[n]| ≤ B1 for some finite constant B1. Therefore, J1 ≤ B1^{2p}. The crux of the proof is showing that there exist a constant C_{t^p}, positive constants D1 and D2, and a constant 0 < α < 1 such that for n1 ≠ n2

$$\Bigl| E\bigl[ t^p[n_2]\, t^p[n_1] \bigr] - C_{t^p} \Bigr| \le D_1 \alpha^{n_2 - n_1} + D_2 \alpha^{n_1}. \qquad (81)$$

The proof of (81), which is fairly lengthy, will be given later. Here (81) is used to complete the proof of the theorem. From (80), J2 can be expressed as

$$J_2 = \frac{1}{L} \sum_{n_1=0}^{L-1} \sum_{\substack{n_2=0 \\ n_2 \neq n_1}}^{L-1} \Bigl( E\bigl[ t^p[n_1]\, t^p[n_2] \bigr] - C_{t^p} \Bigr) e^{j\omega(n_1-n_2)} + \frac{C_{t^p}}{L} \sum_{n_1=0}^{L-1} \sum_{\substack{n_2=0 \\ n_2 \neq n_1}}^{L-1} e^{j\omega(n_1-n_2)} \triangleq J_{2,1} + J_{2,2}. \qquad (82)$$

From (81) it is seen that

$$|J_{2,1}| \le \frac{1}{L} \sum_{n_1=0}^{L-1} \sum_{\substack{n_2=0 \\ n_2 \neq n_1}}^{L-1} \Bigl( D_1 \alpha^{|n_2-n_1|} + D_2 \alpha^{n_1} \Bigr) \le \frac{1}{L} \sum_{n_1=0}^{L-1} \left( \frac{2 D_1}{1-\alpha} + L\, D_2 \alpha^{n_1} \right) \le \frac{2 D_1}{1-\alpha} + \frac{D_2}{1-\alpha}, \qquad (83)$$

and the bound is independent of L. Similarly, J2,2 can be bounded by

$$|J_{2,2}| = \frac{|C_{t^p}|}{L} \left|\, \left| \sum_{n=0}^{L-1} e^{j\omega n} \right|^2 - L \,\right| = \frac{|C_{t^p}|}{L} \left| \frac{\sin^2(\omega L/2)}{\sin^2(\omega/2)} - L \right| \le |C_{t^p}| \left( \frac{1}{L \sin^2(\omega/2)} + 1 \right) \le |C_{t^p}| \left( \frac{1}{\sin^2(\omega/2)} + 1 \right), \qquad (84)$$

which is finite and independent of L for each ω satisfying 0 < |ω| ≤ π; the bound is uniform for all ω satisfying 0 < ε < |ω| ≤ π since sin(ω/2) > sin(ε/2). The result of the theorem then follows from (80) through (84).
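The closed-form sum used in bounding J2,2 is the standard geometric-series identity |Σ_{n=0}^{L−1} e^{jωn}|^2 = sin^2(ωL/2)/sin^2(ω/2). A quick numerical spot check (illustrative only):

```python
import cmath
import math

def geom_sum_sq(L, w):
    """|sum_{n=0}^{L-1} e^{jwn}|^2 computed directly."""
    return abs(sum(cmath.exp(1j * w * n) for n in range(L))) ** 2

def closed_form(L, w):
    """The closed form sin^2(wL/2)/sin^2(w/2), valid for 0 < |w| < 2*pi."""
    return math.sin(w * L / 2) ** 2 / math.sin(w / 2) ** 2

# Largest relative discrepancy over a few record lengths and frequencies.
max_rel_err = max(
    abs(geom_sum_sq(L, w) - closed_form(L, w)) / (1.0 + closed_form(L, w))
    for L in (7, 64, 255)
    for w in (0.1, 1.0, 3.0)
)
```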

To establish (81), it suffices to assume that n2 > n1. Using (58), E[t^p[n2] t^p[n1]] can be expressed as

$$E\bigl[ t^p[n_2]\, t^p[n_1] \bigr] = \sum_{c_1=0}^{K-1} \cdots \sum_{c_p=0}^{K-1} \sum_{d_1=0}^{K-1} \cdots \sum_{d_p=0}^{K-1} 2^{\,c_1+\cdots+c_p+d_1+\cdots+d_p}\; E\!\left[ \prod_{i=1}^{p} t_{c_i}[n_2] \prod_{j=1}^{p} t_{d_j}[n_1] \right]. \qquad (85)$$

It is seen that the above expression is a finite sum of terms of the form

$$Q(n_1, n_2) = E\!\left[ \prod_{j=0}^{K-1} t_j^{p_j}[n_1]\; t_j^{q_j}[n_2] \right], \qquad (86)$$

where pj and qj are positive integers less than or equal to p. It thus suffices to establish a bound for Q(n1, n2) of the form

$$\bigl| Q(n_1, n_2) - C_3 \bigr| \le C_1 \alpha^{n_2 - n_1} + C_2 \alpha^{n_1}, \qquad (87)$$

for some constants C1, C2, and C3.

The right side of (86) is computed by conditional expectation as follows

$$Q(n_1, n_2) = E\!\left[ \prod_{i=0}^{K-1} t_i^{p_i}[n_1]\; E\!\left[ \prod_{j=0}^{K-1} t_j^{q_j}[n_2] \,\middle|\, t_d[n_1], o_d[n];\ d = 0, 1, \ldots, K-1,\ n = n_1+1, \ldots, n_2 \right] \right]. \qquad (88)$$

Substituting (67) into the inner conditional expectation of (88) yields

$$Q(n_1, n_2) = E\!\left[ \prod_{j=0}^{K-1} t_j^{p_j}[n_1]\; E\!\left[ t_j^{q_j}[n_2] \,\middle|\, t_j[n_1], o_j[n];\ n = n_1+1, \ldots, n_2 \right] \right]. \qquad (89)$$

Since {td[n], n = 0, 1, …} is a Markov process for any given parity sequence, {od[n] = od,n, n = 0, 1, …}, where od,n ∈ {0, 1}, it follows from (68) and (69) that the m-step state transition matrix corresponding to td[n] from time n to time n + m can be written as

$$\mathbf{A}_d[n, m] = \prod_{k=n+1}^{n+m} \Bigl( \mathbf{A}_o\, o_{d,k} + \mathbf{A}_e \bigl( 1 - o_{d,k} \bigr) \Bigr), \qquad (90)$$

where Ad[n, m] is an N×N matrix with elements of the form

$$P\bigl\{ t_d[n+m] = T_j \mid t_d[n] = T_i,\ o_d[n+1] = o_{d,n+1},\ o_d[n+2] = o_{d,n+2},\ \ldots,\ o_d[n+m] = o_{d,n+m} \bigr\}. \qquad (91)$$

Since od,n is either 1 or 0 for each n, (77) can be used to write (90) as

$$\mathbf{A}_d[n, m] = \mathbf{A}_e^{\,m-y_m}\, \mathbf{A}_o^{\,y_m} = \mathbf{A}_o^{\,y_m}\, \mathbf{A}_e^{\,m-y_m}, \quad \text{where } y_m = \sum_{k=n+1}^{n+m} o_{d,k}. \qquad (92)$$

By definition, ym ≥ m/2 or m − ym ≥ m/2 depending on the given parity sequence. It follows from the exponential convergence of (78) that there exist positive numbers Cp,e and Cp,o and positive numbers αp,e and αp,o less than unity such that each element of

$$\mathbf{A}_e^{\,y_m}\, \mathbf{t}^{(p)} - b_p \mathbf{1} \qquad (93)$$

has magnitude less than C_{p,e} α_{p,e}^{m/2} for ym ≥ m/2, and each element of

$$\mathbf{A}_o^{\,m-y_m}\, \mathbf{t}^{(p)} - b_p \mathbf{1} \qquad (94)$$

has magnitude less than C_{p,o} α_{p,o}^{m/2} for m − ym ≥ m/2.

The matrices A_o^{m−y_m} and A_e^{y_m} are stochastic matrices, so A_o^{m−y_m} 1 = 1 and A_e^{y_m} 1 = 1, and

$$\mathbf{A}_o^{\,m-y_m} \bigl( \mathbf{A}_e^{\,y_m}\, \mathbf{t}^{(p)} - b_p \mathbf{1} \bigr) = \mathbf{A}_o^{\,m-y_m} \mathbf{A}_e^{\,y_m}\, \mathbf{t}^{(p)} - b_p \mathbf{1}, \qquad (95)$$

$$\mathbf{A}_e^{\,y_m} \bigl( \mathbf{A}_o^{\,m-y_m}\, \mathbf{t}^{(p)} - b_p \mathbf{1} \bigr) = \mathbf{A}_e^{\,y_m} \mathbf{A}_o^{\,m-y_m}\, \mathbf{t}^{(p)} - b_p \mathbf{1}. \qquad (96)$$

Since the elements of the vectors in (93) and (94) are exponentially bounded, the same must be true for the vectors in (95) and (96). From (92) it follows that the right side of either (95) or (96) is equal to

$$\mathbf{A}_d[n, m]\, \mathbf{t}^{(p)} - b_p \mathbf{1}. \qquad (97)$$

Therefore, in general each element of (97) has a magnitude less than Cα^{m/2}, where C = max{Cp,e, Cp,o} and α = max{αp,e, αp,o}, which implies that

$$E\bigl[ t_d^p[n+m] \bigm| t_d[n],\ o_d[n+j] = o_{d,n+j},\ j = 1, \ldots, m \bigr] \to b_p \qquad (98)$$

as m → ∞ uniformly in n, where the convergence is also exponential. This result is

independent of the given deterministic sequence {od,n, n = 0, 1, …}, so it implies that

$$E\bigl[ t_d^p[n+m] \bigm| t_d[n],\ o_d[n+j],\ j = 1, \ldots, m \bigr] \to b_p \qquad (99)$$

almost surely as m → ∞ uniformly in n where the convergence is also exponential.

Thus, the inner conditional expectation in (89) converges exponentially to b_{q_j} as n2 − n1 → ∞ with probability one, so that

$$Q(n_1, n_2) \to \prod_{j=0}^{K-1} b_{q_j}\; E\!\left[ \prod_{i=0}^{K-1} t_i^{p_i}[n_1] \right]. \qquad (100)$$

More precisely, the exponential convergence of (100) implies that for every n2 > n1

$$\Bigl| E\bigl[ t_j^{q_j}[n_2] \bigm| t_j[n_1],\ o_j[n],\ n = n_1+1, \ldots, n_2 \bigr] - b_{q_j} \Bigr| \le C(q_j)\, \alpha^{n_2 - n_1} \qquad (101)$$

with probability one, where C(qj) is a constant that depends on qj. For every n2 > n1,

$$\begin{aligned}
\left| Q(n_1, n_2) - \prod_{j=0}^{K-1} b_{q_j}\; E\!\left[ \prod_{i=0}^{K-1} t_i^{p_i}[n_1] \right] \right|
&= \left| E\!\left[ \prod_{i=0}^{K-1} t_i^{p_i}[n_1] \left( \prod_{j=0}^{K-1} E\bigl[ t_j^{q_j}[n_2] \bigm| t_j[n_1],\ o_j[n],\ n = n_1+1, \ldots, n_2 \bigr] - \prod_{j=0}^{K-1} b_{q_j} \right) \right] \right| \\
&\le C(q_0, \ldots, q_{K-1})\, B^{Kp}\, \alpha^{n_2 - n_1} = C \alpha^{n_2 - n_1}, \qquad (102)
\end{aligned}$$

where B is the bound from Property 2. By similar reasoning, it can be established that

$$\left| E\!\left[ \prod_{j=0}^{K-1} t_j^{p_j}[n_1] \right] - \prod_{j=0}^{K-1} b_{p_j} \right| \le C \alpha^{n_1}. \qquad (103)$$

Hence, the above two bounds imply there exist positive constants C1 and C2 such that for all n2 > n1

$$\begin{aligned}
\left| Q(n_1, n_2) - \prod_{i=0}^{K-1} b_{p_i} \prod_{j=0}^{K-1} b_{q_j} \right|
&\le \left| Q(n_1, n_2) - \prod_{j=0}^{K-1} b_{q_j}\; E\!\left[ \prod_{i=0}^{K-1} t_i^{p_i}[n_1] \right] \right| + \left| \prod_{j=0}^{K-1} b_{q_j} \right| \left| E\!\left[ \prod_{i=0}^{K-1} t_i^{p_i}[n_1] \right] - \prod_{i=0}^{K-1} b_{p_i} \right| \\
&\le C_1 \alpha^{n_2 - n_1} + C_2 \alpha^{n_1}. \qquad (104)
\end{aligned}$$

Consequently, there exists a constant C3 such that

$$\bigl| Q(n_1, n_2) - C_3 \bigr| \le C_1 \alpha^{n_2 - n_1} + C_2 \alpha^{n_1}, \qquad (105)$$

which is of the required form.

∎

Theorem 2: Suppose that the state transition matrices Ae and Ao satisfy

$$\mathbf{A}_e \mathbf{A}_o = \mathbf{A}_o \mathbf{A}_e, \qquad (106)$$

and there exists an integer hs ≥ 1 such that for each positive integer p ≤ hs, the sequence transition matrices Se and So satisfy

$$\lim_{n\to\infty} \mathbf{A}_e^n \mathbf{S}_e\, \mathbf{s}^{(p)} = \lim_{n\to\infty} \mathbf{A}_e^n \mathbf{S}_o\, \mathbf{s}^{(p)} = \lim_{n\to\infty} \mathbf{A}_o^n \mathbf{S}_e\, \mathbf{s}^{(p)} = \lim_{n\to\infty} \mathbf{A}_o^n \mathbf{S}_o\, \mathbf{s}^{(p)} = c_p \mathbf{1}, \qquad (107)$$

where cp is a constant and the convergence of all four vectors is exponential. Then for every L ≥ 1,

$$E\bigl[ I_{s^p,L}(\omega) \bigr] \le D(\omega) < \infty \qquad (108)$$

for each 0 < |ω| ≤ π. Moreover, the bound D(ω), which is independent of L, is uniform in ω for all 0 < ε < |ω| ≤ π.

By Markov’s Inequality, this immediately leads to,

Corollary 2: Under the assumptions of Theorem 2, I_{s^p,L}(ω) is bounded in probability for all L ≥ 1 and for each ω satisfying 0 < |ω| ≤ π.

Proof of Theorem 2: The proof is similar to that of Theorem 1, so only the non-trivial differences with respect to the proof of Theorem 1 are presented.

Similarly to the proof of Theorem 1, it is necessary to show that

$$E\bigl[ s_d^p[n+m] \bigm| t_d[n],\ o_d[n+j],\ j = 1, \ldots, m \bigr] \to c_p \qquad (109)$$

almost surely as m → ∞ uniformly in n, where the convergence is also exponential.

With this result and sd[n], cp, and (72) playing the roles of td[n], bp, and (67) in the proof of Theorem 1, respectively, the proof of Theorem 2 is almost identical to that of Theorem 1. Therefore, it is sufficient to prove (109).

Since the random variables td[n−1] and od[n] are statistically independent, for any given parity sequence, {od[n] = od,n, n = 0, 1, …}, where od,n ∈ {0, 1}, it follows from (73), (74), and (91) that

$$\mathbf{S}_d[n, m+1] = \mathbf{A}_d[n, m] \Bigl( \mathbf{S}_o\, o_{d,n+m+1} + \mathbf{S}_e \bigl( 1 - o_{d,n+m+1} \bigr) \Bigr), \qquad (110)$$

where Sd[n, m+1] is an N×N’ matrix with elements of the form

$$P\bigl\{ s_d[n+m+1] = S_j \mid t_d[n] = T_i,\ o_d[n+1] = o_{d,n+1},\ \ldots,\ o_d[n+m+1] = o_{d,n+m+1} \bigr\}, \qquad (111)$$

where i is the row index and j is the column index. By similar reasoning to that used

in the proof of Theorem 1, (106) and (107) together imply that there exist a positive number D and a positive number β less than unity such that each element of the vector

$$\mathbf{S}_d[n, m+1]\, \mathbf{s}^{(p)} - c_p \mathbf{1} \qquad (112)$$

has a magnitude less than D β^{m/2}. Thus, (112) implies that

$$E\bigl[ s_d^p[n+m] \bigm| t_d[n],\ o_d[n+j] = o_{d,n+j},\ j = 1, \ldots, m \bigr] \to c_p \qquad (113)$$

as m → ∞ uniformly in n, where the convergence is also exponential. This result is

independent of the given deterministic sequence {od,n, n = 0, 1, …}, so it implies that (109) holds almost surely as m → ∞ uniformly in n, where the convergence is also exponential.

∎

IV. A SEGMENTED QUANTIZER THAT SATISFIES THEOREMS 1 AND 2
