
3.5 Recently Developed Methods

In the last several sections, we have introduced four frequently used methods. We also demonstrated how these methods work by working out asymptotic expressions of the mean for two shape parameters of digital trees. So, it is natural to ask whether or not the variance and higher moments can be derived by these methods as well. The answer is yes. However, the computation can become extremely complicated and tricky.

For example, assume that we try to derive an asymptotic expression for the variance of the total path length of symmetric DSTs. Let $P_n$ be the total path length of a symmetric DST built from $n$ strings. Then we can use the methods introduced above to derive asymptotic expressions for $\mathbb{E}(P_n)$ and $\mathbb{E}(P_n^2)$ and then compute $\mathbb{E}(P_n^2) - \mathbb{E}(P_n)^2$. However, the order of $\mathbb{E}(P_n)^2$ is $n^2(\log n)^2$, while the order of the variance is $n$ (see [121]). This implies that the first several terms of $\mathbb{E}(P_n)^2$ and $\mathbb{E}(P_n^2)$ cancel. If we do not know the right order of the variance (which is normally the case), we have to derive very long asymptotic expressions and face many cancellations. This is extremely difficult, not only because deriving long asymptotic expressions can be complicated, but also because handling the cancellations often requires deep knowledge of all kinds of constants, Fourier series and $q$-analysis.
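Schematically (with generic constants $c_1, c_2$; this is only our illustration of the orders just mentioned), if
$$\mathbb{E}(P_n) = c_1 n\log n + c_2 n + \cdots,$$
then
$$\mathbb{E}(P_n)^2 = c_1^2\, n^2(\log n)^2 + 2c_1c_2\, n^2\log n + \cdots,$$
and since $\mathbb{E}(P_n^2) = \mathbb{V}(P_n) + \mathbb{E}(P_n)^2$ with $\mathbb{V}(P_n) = \Theta(n)$, every term of $\mathbb{E}(P_n^2)$ of order larger than $n$ must cancel exactly against the corresponding term of $\mathbb{E}(P_n)^2$.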

To avoid these difficulties and to derive the variance more efficiently, a new set of mathematical tools was proposed by M. Fuchs, H.-K. Hwang and V. Zacharovas in [74]. In this section, we will introduce these tools.

3.5.1 Poissonized Variance with Correction

For a given random variable $X_n$, we let
$$\tilde{f}_1(z) = e^{-z}\sum_{n\ge 0}\mathbb{E}(X_n)\frac{z^n}{n!}$$
and
$$\tilde{f}_2(z) = e^{-z}\sum_{n\ge 0}\mathbb{E}(X_n^2)\frac{z^n}{n!}.$$
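In other words, if $N$ is a Poisson($z$)-distributed random variable, then $\tilde{f}_1(z) = \mathbb{E}(X_N)$ and $\tilde{f}_2(z) = \mathbb{E}(X_N^2)$; that is, these are simply the first two moments of the parameter under the Poisson model referred to below.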

Then, from the definition of the variance and analytical depoissonization, one may guess that if $\tilde{f}_1(z)$ and $\tilde{f}_2(z)$ are smooth enough, then
$$\mathbb{V}(X_n) = \mathbb{E}(X_n^2) - (\mathbb{E}(X_n))^2 \sim \tilde{f}_2(n) - \tilde{f}_1(n)^2. \tag{3.14}$$
This asymptotic equivalence holds in many cases. However, for a large class of problems, $\tilde{f}_2(n) - \tilde{f}_1(n)^2$ is not asymptotically equivalent to the variance.

One class of such problems consists of those whose mean and variance satisfy
$$\lim_{n\to\infty}\frac{\log\mathbb{E}(X_n)}{\log n} = 1 \qquad\text{and}\qquad \lim_{n\to\infty}\frac{\log\mathbb{V}(X_n)}{\log n} = 1. \tag{3.15}$$
Examples of such problems are the two shape parameters we discussed before; for instance, for the total path length of symmetric DSTs, $\mathbb{E}(P_n)\asymp n\log n$ and $\mathbb{V}(P_n)\asymp n$, so both limits in (3.15) equal $1$.

To solve this problem, M. Fuchs, H.-K. Hwang and V. Zacharovas proposed in [74] the poissonized variance with correction
$$\tilde{V}(z) := \tilde{f}_2(z) - \tilde{f}_1(z)^2 - z\tilde{f}_1'(z)^2. \tag{3.16}$$
For problems satisfying (3.15), we have that
$$\mathbb{V}(X_n) = \tilde{V}(n) + O\big((\log n)^c\big)$$
for some $c\ge 0$ under suitable assumptions. Comparing (3.14) and (3.16), the difference is the appearance of the term $z\tilde{f}_1'(z)^2$. To see why this term is necessary, we introduce the following lemma:

Lemma 3.5.1. Let $\tilde{f}(z) = e^{-z}\sum_{n\ge 0} a_n\frac{z^n}{n!}$. If $\tilde{f}(z)$ is an entire function, then
$$a_n = \sum_{j\ge 0}\frac{\tilde{f}^{(j)}(n)}{j!}\,\tau_j(n), \tag{3.17}$$
where
$$\tau_j(n) := n![z^n](z-n)^j e^z = \sum_{k=0}^{j}\binom{j}{k}(-1)^{j-k}\,\frac{n!\,n^{j-k}}{(n-k)!}.$$
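As a quick sanity check (our own illustration, not from [74]), take $a_n = n^2$, so that $\tilde{f}(z) = e^{-z}\sum_{n\ge 0} n^2 z^n/n! = z^2 + z$. Since $\tau_0(n) = 1$, $\tau_1(n) = 0$ and $\tau_2(n) = -n$, formula (3.17) gives
$$a_n = (n^2+n)\cdot 1 + (2n+1)\cdot 0 + \frac{2}{2!}\cdot(-n) = n^2,$$
as it should; note in particular that the $j=2$ term is needed to cancel the spurious contribution $n$ coming from $\tilde{f}(n) = n^2+n$.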

Now we let $\tilde{D}(z) := \tilde{f}_2(z) - \tilde{f}_1(z)^2$. By the above lemma we get that
$$\begin{aligned}
\mathbb{V}(X_n) &= \mathbb{E}(X_n^2) - (\mathbb{E}(X_n))^2\\
&= \sum_{j\ge 0}\frac{\tilde{f}_2^{(j)}(n)}{j!}\,\tau_j(n) - \Bigg(\sum_{j\ge 0}\frac{\tilde{f}_1^{(j)}(n)}{j!}\,\tau_j(n)\Bigg)^{2}\\
&= \tilde{D}(n) - n\tilde{f}_1'(n)^2 - \frac{n}{2}\tilde{D}''(n) + \text{smaller order terms}.
\end{aligned}$$

For the case $\tilde{f}_1(z)\asymp z\log z$ we have $\tilde{f}_1'(n)\asymp\log n$, so the term $n\tilde{f}_1'(n)^2\asymp n(\log n)^2$ is of larger order than the variance; hence $\tilde{D}(n)$ alone is not asymptotically equivalent to $\mathbb{V}(X_n)$ and the correction term $-z\tilde{f}_1'(z)^2$ in (3.16) cannot be dropped.
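To make this concrete, here is a small numerical sketch (our own illustration, not taken from [74]). It computes the exact first two moments of the total path length $P_n$ from the standard binomial-splitting recurrence of symmetric DSTs, evaluates the poissonized quantities by truncated Poisson sums, and compares the exact variance with the naive $\tilde{D}(n)$ and the corrected $\tilde{V}(n)$; the recurrence, the truncation, and all names below are our choices.

```python
from math import comb, exp

N = 400                  # number of exact moments to compute
mu  = [0.0] * (N + 1)    # mu[n]  = E(P_n)
mu2 = [0.0] * (N + 1)    # mu2[n] = E(P_n^2)

# Standard recurrence: P_{n+1} = n + P_B + P'_{n-B}, B ~ Binomial(n, 1/2),
# with P, P' independent copies for the two subtrees and P_0 = 0.
for n in range(N):
    p = [comb(n, k) * 2.0 ** (-n) for k in range(n + 1)]
    m1 = sum(p[k] * (mu[k] + mu[n - k]) for k in range(n + 1))
    m2 = sum(p[k] * (mu2[k] + mu2[n - k] + 2.0 * mu[k] * mu[n - k])
             for k in range(n + 1))
    mu[n + 1]  = n + m1
    mu2[n + 1] = n * n + 2 * n * m1 + m2

def poissonized(vals, z):
    """Truncated Poisson mean: e^{-z} * sum_k vals[k] * z^k / k!."""
    term, total = exp(-z), 0.0
    for k in range(len(vals)):
        total += vals[k] * term
        term *= z / (k + 1)
    return total

n0 = 200                          # evaluation point, well below N
f1  = poissonized(mu,  n0)        # poissonized mean f1(n0)
f2  = poissonized(mu2, n0)        # poissonized second moment f2(n0)
# f1'(z) = e^{-z} sum_k (E(P_{k+1}) - E(P_k)) z^k / k!
df1 = poissonized([mu[k + 1] - mu[k] for k in range(N)], n0)

print("exact variance :", mu2[n0] - mu[n0] ** 2)
print("naive  D(n)    :", f2 - f1 ** 2)                  # too large by ~ n (log n)^2
print("corrected V(n) :", f2 - f1 ** 2 - n0 * df1 ** 2)  # close to the exact value
```

The naive value overshoots the exact variance by roughly $n_0\tilde{f}_1'(n_0)^2$, while the corrected value agrees with it up to lower-order terms.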

Compared to other methods, such as the second moment approach (using $\tilde{f}_2(z)$) or the use of $\tilde{D}(z)$, one of the major advantages of the poissonized variance with correction is that the computation is largely simplified. Analytical poissonization transfers a problem from the Bernoulli model to the Poisson model. While the other approaches apply analytical depoissonization first and then have to deal with many cancellations, the poissonized variance with correction has already incorporated all the cancellations in the Poisson model. See [74] for a more detailed comparison between the poissonized variance with correction and other methods.

3.5.2 Poisson-Laplace-Mellin Method

The Poisson-Laplace-Mellin method was developed together with the poissonized variance with correction in [74] to deal with shape parameters of random symmetric digital search trees. As its name implies, this approach combines poissonization, the Laplace transform and the Mellin transform. Before we explain this method step by step, we first need an important lemma.

Lemma 3.5.2. Let $\tilde{f}(z)$ be a function whose Laplace transform exists and is analytic in $\mathbb{C}\setminus(-\infty,0]$. Assume that
$$\mathscr{L}[\tilde{f}(z);s] = \begin{cases} O\big(|s|^{-\alpha}|\log s|^{m}\big),\\[1mm] c\,s^{-\beta}\big(\log\tfrac{1}{s}\big)^{m},\end{cases}$$
uniformly as $s\to 0$ with $|\arg(s)|\le\pi-\epsilon$, where $\alpha\in\mathbb{R}$, $\beta\in\mathbb{C}$ and $m\ge 0$ is an integer. Moreover, assume that
$$\mathscr{L}[\tilde{f}(z);s] = O\big(|s|^{-1-\epsilon}\big)$$
uniformly as $s\to\infty$ with $|\arg(s)|\le\pi-\epsilon$. Then,
$$\tilde{f}(z) = \begin{cases} O\big(|z|^{\alpha-1}|\log z|^{m}\big),\\[1mm] c\,z^{\beta-1}\displaystyle\sum_{j=0}^{m}\binom{m}{j}(\log z)^{m-j}\,\partial_{\omega}^{\,j}\frac{1}{\Gamma(\omega)}\bigg|_{\omega=\beta},\end{cases}$$
uniformly as $z\to\infty$ with $|\arg(z)|\le\frac{\pi}{2}-\epsilon$.
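As an illustration of the second case (a check we add here, not part of [74]): if $\mathscr{L}[\tilde{f}(z);s]\sim c\,s^{-2}\log\frac{1}{s}$ as $s\to 0$, then $\beta = 2$, $m = 1$, and the lemma gives
$$\tilde{f}(z)\sim c\,z\Big(\frac{\log z}{\Gamma(2)} + \partial_{\omega}\frac{1}{\Gamma(\omega)}\Big|_{\omega=2}\Big) = c\,z\big(\log z + \gamma - 1\big),$$
which matches the direct computation $\mathscr{L}[z\log z;s] = s^{-2}\log\frac{1}{s} + (1-\gamma)s^{-2}$. This is precisely the kind of behaviour that arises for poissonized means of order $z\log z$.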

Now, we give a step-by-step description of how the Poisson-Laplace-Mellin method works:

1. Use the Poisson generating functions of the first and second moments. The Poisson generating functions of both moments will satisfy a differential-functional equation of the form
$$\tilde{f}(z) + \tilde{f}'(z) = 2\tilde{f}(z/2) + \tilde{t}(z), \tag{3.18}$$
where $\tilde{t}(z)$ is some suitable function.

2. Substitute the Poisson generating functions of the first and second moments into the formula of the poissonized variance with correction. The poissonized variance with correction will also satisfy a differential-functional equation of the type (3.18).

3. Now, we have to solve asymptotically two differential-functional equations of the form (3.18). We apply the Laplace transform to (3.18) to get rid of the differential operator (a short derivation is sketched after this list). The resulting functional equation will be
$$(1+s)\,\mathscr{L}[\tilde{f}(z);s] = 4\,\mathscr{L}[\tilde{f}(z);2s] + \mathscr{L}[\tilde{t}(z);s]. \tag{3.19}$$

4. We let

$$Q(s) = \prod_{k\ge 1}\Big(1 - \frac{s}{2^{k}}\Big), \qquad \bar{\mathscr{L}}[\tilde{f}(z);s] = \frac{\mathscr{L}[\tilde{f}(z);s]}{Q(-s)}, \qquad \bar{\mathscr{L}}[\tilde{t}(z);s] = \frac{\mathscr{L}[\tilde{t}(z);s]}{Q(-2s)}.$$

Dividing both sides of (3.19) by $Q(-2s)$ and using $Q(-2s) = (1+s)Q(-s)$, we will get the simplified functional equation
$$\bar{\mathscr{L}}[\tilde{f}(z);s] = 4\,\bar{\mathscr{L}}[\tilde{f}(z);2s] + \bar{\mathscr{L}}[\tilde{t}(z);s]. \tag{3.20}$$

5. Apply the Mellin transform to (3.20), which yields

$$\mathscr{M}\big[\bar{\mathscr{L}}[\tilde{f}(z);s];\omega\big] = \frac{\mathscr{M}\big[\bar{\mathscr{L}}[\tilde{t}(z);s];\omega\big]}{1 - 2^{2-\omega}}.$$

By the standard theory of the inverse Mellin transform, which we have introduced in Section 3.3, we derive an asymptotic expansion of $\bar{\mathscr{L}}[\tilde{f}(z);s]$ as $s\to 0$.

6. Applying Lemma 3.5.2 to the asymptotic expansion of $\bar{\mathscr{L}}[\tilde{f}(z);s]$ will give an asymptotic expansion of $\tilde{f}(z)$ as $z\to\infty$.

7. The last step is to apply analytical depoissonization in order to get the desired results from the asymptotic expansions of $\tilde{f}(z)$. For this, one may use the standard method from [99] or the theory of JS-admissibility, which will be explained below.
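For completeness, here is the short computation behind step 3 (a sketch we add for the reader; it only uses standard properties of the Laplace transform and assumes $\tilde{f}(0)=0$, which holds for the shape parameters considered here). Since
$$\mathscr{L}[\tilde{f}'(z);s] = s\,\mathscr{L}[\tilde{f}(z);s] - \tilde{f}(0) = s\,\mathscr{L}[\tilde{f}(z);s] \qquad\text{and}\qquad \mathscr{L}[\tilde{f}(z/2);s] = 2\,\mathscr{L}[\tilde{f}(z);2s],$$
taking the Laplace transform of (3.18) term by term gives
$$(1+s)\,\mathscr{L}[\tilde{f}(z);s] = 4\,\mathscr{L}[\tilde{f}(z);2s] + \mathscr{L}[\tilde{t}(z);s],$$
which is (3.19). Similarly, the identity $Q(-2s) = (1+s)Q(-s)$ explains why dividing by $Q(-2s)$ removes the factor $1+s$ in step 4.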

3.5.3 JS-admissibility

For analytical depoissonization, the standard theory by P. Jacquet and W. Szpankowski was introduced in Section 3.2. However, there are many complicated conditions to be checked before using the depoissonization lemmas from Section 3.2 and similar results in [99]. Thus, the authors of [74] proposed a systematic method for this, which they called JS-admissibility.

We start with the following definition, which arises from Theorem 3.2.4.

Definition 3.5.3. We let $\epsilon, \epsilon' \in (0,1)$ be arbitrarily small numbers. An entire function $\tilde{f}$ is said to be JS-admissible, denoted by $\tilde{f}\in\mathscr{JS}$, if the following two conditions hold for $|z|\ge 1$.

(I) There exist $\alpha,\beta\in\mathbb{R}$ such that, uniformly for $|\arg(z)|\le\epsilon$,
$$\tilde{f}(z) = O\big(|z|^{\alpha}(\log_{+}|z|)^{\beta}\big),$$
where $\log_{+}x := \log(1+x)$.

(O) Uniformly for $\epsilon\le|\arg(z)|\le\pi$,
$$f(z) := e^{z}\tilde{f}(z) = O\big(e^{(1-\epsilon')|z|}\big).$$

Then Theorem 3.2.4 can be reformulated as follows.

Lemma 3.5.4. Assume $\tilde{f}\in\mathscr{JS}$ and let $f(z) = e^{z}\tilde{f}(z)$. Then we have that
$$a_n := f^{(n)}(0) = n![z^n]f(z) = n![z^n]e^{z}\tilde{f}(z) = \sum_{j=0}^{2k}\frac{\tilde{f}^{(j)}(n)}{j!}\,\tau_j(n) + O\big(n^{\alpha-k}(\log n)^{\beta}\big)$$
for $k = 1, 2, \ldots$.
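For instance (simply specializing the lemma, not an additional result), taking $k=1$ and recalling $\tau_0(n)=1$, $\tau_1(n)=0$, $\tau_2(n)=-n$ gives the two-term depoissonization
$$a_n = \tilde{f}(n) - \frac{n}{2}\tilde{f}''(n) + O\big(n^{\alpha-1}(\log n)^{\beta}\big),$$
which is the expansion underlying the derivation of the poissonized variance with correction above.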

The real advantage of introducing admissibility is that it opens up the possibility of developing closure properties, as we now discuss.

Lemma 3.5.5. Let $m$ be a nonnegative integer and $\alpha\in(0,1)$. We have the following properties.

1. $z^m,\, e^{-\alpha z}\in\mathscr{JS}$.

2. If $\tilde{f}\in\mathscr{JS}$, then $\tilde{f}(\alpha z),\, z^m\tilde{f}\in\mathscr{JS}$.

3. If $\tilde{f},\tilde{g}\in\mathscr{JS}$, then $\tilde{f}+\tilde{g}\in\mathscr{JS}$.

4. If $\tilde{f}\in\mathscr{JS}$ and $\tilde{P}$ is a polynomial, then the product $\tilde{P}\tilde{f}\in\mathscr{JS}$.

5. If $\tilde{f},\tilde{g}\in\mathscr{JS}$, then $\tilde{h}(z) = \tilde{f}(\alpha z)\,\tilde{g}((1-\alpha)z)\in\mathscr{JS}$.

6. If $\tilde{f}\in\mathscr{JS}$, then $\tilde{f}'\in\mathscr{JS}$ and thus $\tilde{f}^{(m)}\in\mathscr{JS}$.

In the last step of the Poisson-Laplace-Mellin method, we mentioned that the depoissonization step can be completed via JS-admissibility. Here we use the closure properties together with the following proposition.

Proposition 3.5.6. Let $\tilde{f}$ and $\tilde{g}$ be entire functions satisfying
$$\tilde{f}(z) + \tilde{f}'(z) = 2\tilde{f}(z/2) + \tilde{g}(z)$$
with $\tilde{f}(0) = 0$. Then
$$\tilde{g}\in\mathscr{JS} \quad\text{if and only if}\quad \tilde{f}\in\mathscr{JS}.$$

With this proposition, for all differential-functional equations of the form (3.18), we only need to check whether $\tilde{t}(z)$ is JS-admissible in order to finish the depoissonization step.
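As an illustration (the equation below is the standard one for the poissonized mean of the total path length of symmetric DSTs, stated here for convenience): $\tilde{f}_1$ satisfies
$$\tilde{f}_1(z) + \tilde{f}_1'(z) = 2\tilde{f}_1(z/2) + z, \qquad \tilde{f}_1(0) = 0.$$
Since $\tilde{t}(z) = z$ is JS-admissible by Lemma 3.5.5, Proposition 3.5.6 immediately gives $\tilde{f}_1\in\mathscr{JS}$, and the expansion of Lemma 3.5.4 can then be applied in the depoissonization step.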