
High Accuracy Reconstruction from Wavelet Coefficients

Fritz Keinert* and Soon-Geol Kwon

Department of Mathematics, Iowa State University, Ames, Iowa 50011

Communicated by Gregory Beylkin

Received June 10, 1996; revised February 28, 1997

The accuracy of the wavelet approximation at resolution $h = 2^{-n}$ to a smooth function $f$ is limited by $O(h^N)$, where $N$ is the number of vanishing moments of the mother wavelet $\psi$. For any positive integer $p$, we derive a new approximation formula which allows us to recover a smooth $f$ from its wavelet coefficients with accuracy $O(h^p)$. Related formulas for recovering derivatives of $f$ are also given. © 1997 Academic Press

1. INTRODUCTION

Let $\phi$, $\tilde\phi$ be the scaling functions of a biorthogonal multiresolution approximation.

The projection
$$\tilde P_n f = \sum_{j=-\infty}^{\infty} \langle f, \phi_{n,j}\rangle\, \tilde\phi_{n,j} \qquad (1.1)$$
of a function $f \in L^2(\mathbb{R})$ onto the space $\tilde V_n$ spanned by
$$\tilde\phi_{n,k}(x) = 2^{n/2}\,\tilde\phi(2^n x - k), \quad k \in \mathbb{Z}, \qquad (1.2)$$
is interpreted as an approximation to $f$ at resolution $h = 2^{-n}$. Such projections form the starting point and/or result of any numerical method based on wavelets, from signal processing to wavelet–Galerkin methods.

In practice, the connection between point values of the smooth original function $f$ and its scaling function coefficients is often established by simple formulas such as
$$f_{n,j} = \langle f, \phi_{n,j}\rangle \approx 2^{-n/2} f((j+\tau)h),$$
where $\tau = \int x\,\phi(x)\,dx$ is the center of mass of $\phi$. For many applications, this is perfectly adequate.

* E-mail: keinert@iastate.edu.
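For concreteness, here is a minimal sketch (ours, not taken from the paper) of this one-point rule in Python. The function name and the choice of $\sin x$ as a test function are our own; $\tau$ is taken to be the first moment $M_1$ of the Daubechies scaling function with two vanishing moments, quoted in Section 6.

```python
import numpy as np

def scaling_coeffs_one_point(f, tau, n, j_range):
    """Approximate f_{n,j} = <f, phi_{n,j}> by the one-point rule
    f_{n,j} ~ 2^{-n/2} f((j + tau) h), where h = 2^{-n} and tau is the
    center of mass of the scaling function phi."""
    h = 2.0 ** (-n)
    j = np.asarray(list(j_range), dtype=float)
    return 2.0 ** (-n / 2) * f((j + tau) * h)

# Example: Daubechies wavelet with two vanishing moments, tau = (3 - sqrt(3))/2.
coeffs = scaling_coeffs_one_point(np.sin, (3 - np.sqrt(3)) / 2, n=3, j_range=range(24))
```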

Higher accuracy formulas for the computation of $f_{n,j}$ have been published (see, e.g., [6]), but the problem of recovering accurate point values of $f$ from its coefficients $f_{n,j}$ has not received much attention.

It is well known that if the wavelet $\psi$ has $N$ vanishing moments and $f$ belongs to $C^N$, then
$$|f(x) - \tilde P_n f(x)| = O(h^N).$$

In some applications, such as wavelet–Galerkin methods, we may know the coefficients of a smooth solution function to high accuracy. How do we recover point values of $f$, and possibly some derivatives of $f$, to comparable accuracy?

For any given scaling function $\phi$ and accuracy $p$, we find in this paper a function $b$ with support in an interval of length $p$ so that for smooth $f$,
$$\Big|f(x) - \sum_j \langle f, \phi_{n,j}\rangle\, b_{n,j}(x)\Big| = O(h^p), \qquad (1.3)$$
where $b_{n,j}(x) = 2^{n/2}\, b(2^n x - j)$, $j \in \mathbb{Z}$, in analogy to (1.2). Related functions $b^{[r]}$ can be constructed to recover the $r$th derivative of $f$ to accuracy $O(h^{p-r})$ for any $r < p$.

While the series in (1.3) looks similar to a scaling function expansion, $b$ and $b^{[r]}$ do not satisfy any recursion relations in general.

Numerical experiments indicate that our alternative reconstruction formulas perform significantly better than the standard scaling function series (1.1) for smooth f and are no worse for highly oscillating or nonsmooth f. In addition, the new approach allows the reconstruction of derivatives of f.

2. WAVELET BASICS

The fundamentals of wavelet theory have been described in many publications, so we refer to the literature for most of the background, and we only introduce the specific notation and results we need. A standard reference for wavelets is [3]. Biorthogonal wavelets are discussed in more detail in [2]. Throughout this paper, we will be using biorthogonal wavelets, which include orthogonal wavelets as a special case.

$L^2(\mathbb{R})$ is the space of square integrable complex-valued functions on $\mathbb{R}$ with inner product
$$\langle f, g\rangle = \int_{\mathbb{R}} f(x)\,\overline{g(x)}\,dx,$$
where the bar denotes complex conjugation. $\ell^2$ is the space of square summable complex-valued sequences with inner product
$$\langle x, y\rangle = \sum_{i=-\infty}^{\infty} x_i\,\overline{y_i}.$$

$C^r$ is the space of $r$ times continuously differentiable functions on $\mathbb{R}$.

A multiresolution analysis (MRA) [5] of $L^2(\mathbb{R})$ is a sequence $\{V_j\}_{j\in\mathbb{Z}}$ of closed subspaces of $L^2(\mathbb{R})$ with the properties

• $V_j \subset V_{j+1}$

• $\bigcup_{j=-\infty}^{\infty} V_j$ is dense in $L^2(\mathbb{R})$, and $\bigcap_{j=-\infty}^{\infty} V_j = \{0\}$

• $f(x) \in V_j \Leftrightarrow f(2x) \in V_{j+1}$

• $f(x) \in V_j \Leftrightarrow f(x - 2^{-j}k) \in V_j$ for all $k \in \mathbb{Z}$

• there exists a scaling function $\phi(x) \in V_0$ such that the set of translates $\{\phi(x-k)\}_{k\in\mathbb{Z}}$ forms a Riesz basis of $V_0$.

Let $W_0$ be a complement of $V_0$ in $V_1$, that is, $V_1 = V_0 + W_0$.

A function $\psi(x) \in W_0$ whose translates $\{\psi(x-k)\}_{k\in\mathbb{Z}}$ form a Riesz basis of $W_0$ is called a mother wavelet. We assume that such a function exists.

Since $V_0$, $W_0$ are both subspaces of $V_1$, recursion relations
$$\phi(x) = \sqrt{2}\sum_k h_k\,\phi(2x-k), \qquad \psi(x) = \sqrt{2}\sum_k g_k\,\phi(2x-k), \qquad (2.1)$$
must hold for some sequences $\{h_k\}$, $\{g_k\}$ in $\ell^2$. We denote the scaled translates of $\phi$, $\psi$ by
$$\phi_{n,k}(x) = 2^{n/2}\,\phi(2^n x - k), \qquad \psi_{n,k}(x) = 2^{n/2}\,\psi(2^n x - k). \qquad (2.2)$$

Biorthogonal wavelets are defined by two MRAs of $L^2(\mathbb{R})$, denoted by $\{V_j\}$ and $\{\tilde V_j\}$, whose basis functions satisfy the biorthogonality relations

$$\langle \phi_{n,j}, \tilde\phi_{n,k}\rangle = \delta_{jk}, \qquad \langle \psi_{n,j}, \tilde\psi_{n,k}\rangle = \delta_{jk}, \qquad \langle \phi_{n,j}, \tilde\psi_{n,k}\rangle = \langle \psi_{n,j}, \tilde\phi_{n,k}\rangle = 0. \qquad (2.3)$$

Here $\delta_{jk}$ is the Kronecker delta
$$\delta_{jk} = \begin{cases} 1 & \text{if } j = k, \\ 0 & \text{if } j \neq k. \end{cases}$$

For any function $f \in L^2(\mathbb{R})$, projections $P_n f$, $Q_n f$, $\tilde P_n f$, $\tilde Q_n f$ of $f$ onto $V_n$, $W_n$, $\tilde V_n$, $\tilde W_n$, respectively, are given by
$$P_n f = \sum_{j=-\infty}^{\infty} \langle f, \tilde\phi_{n,j}\rangle\, \phi_{n,j}, \qquad \tilde P_n f = \sum_{j=-\infty}^{\infty} \langle f, \phi_{n,j}\rangle\, \tilde\phi_{n,j},$$
$$Q_n f = \sum_{j=-\infty}^{\infty} \langle f, \tilde\psi_{n,j}\rangle\, \psi_{n,j}, \qquad \tilde Q_n f = \sum_{j=-\infty}^{\infty} \langle f, \psi_{n,j}\rangle\, \tilde\psi_{n,j}.$$

In many applications, $P_n f$ and $\tilde P_n f$ are interpreted as approximations to $f$ at resolution $h = 2^{-n}$, while $Q_n f$ and $\tilde Q_n f$ represent the fine detail in $f$ at resolution $h$.

The relationship between $\{V_n\}$, $\{\tilde V_n\}$, etc. is completely symmetric. To any definition or algorithm there is a dual, with the tildes in opposite places. We choose to work with $\tilde P_n$ instead of $P_n$ in the following sections because it keeps the notation simpler.

The moments of $\phi$, $\psi$, $\tilde\phi$, $\tilde\psi$ are defined as
$$M_i = \int x^i \phi(x)\,dx, \qquad \tilde M_i = \int x^i \tilde\phi(x)\,dx,$$
$$N_i = \int x^i \psi(x)\,dx, \qquad \tilde N_i = \int x^i \tilde\psi(x)\,dx. \qquad (2.4)$$

We assume that the scaling functions have been normalized so that $M_0 = \tilde M_0 = 1$. We say that $\psi$ has $N$ vanishing moments if $N_0 = N_1 = \cdots = N_{N-1} = 0$, $N_N \neq 0$.

For later use, we also define the shifted moments of $\phi$,
$$M_{i,j} = \int x^i \phi(x-j)\,dx = \int (x+j)^i \phi(x)\,dx = \sum_{s=0}^{i} \binom{i}{s} j^s M_{i-s}. \qquad (2.5)$$

It is easy to verify that

$$M_{k,j+1} = \sum_{l=0}^{k} \binom{k}{l} M_{l,j}. \qquad (2.6)$$
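The shifted moments are straightforward to tabulate from the ordinary moments via (2.5). The short Python sketch below (ours, not from the paper) does this; as sample input it uses the moments of the Daubechies wavelet with two vanishing moments quoted in Section 6.

```python
from math import comb
import numpy as np

def shifted_moments(M, p):
    """Shifted moments M_{i,j} = sum_{s=0}^{i} C(i,s) j^s M_{i-s}  (Eq. (2.5)),
    for i, j = 0, ..., p-1.  M[i] holds the ordinary moment M_i of phi."""
    return np.array([[sum(comb(i, s) * j ** s * M[i - s] for s in range(i + 1))
                      for j in range(p)] for i in range(p)])

# Moments of the Daubechies wavelet with two vanishing moments (Section 6):
M = [1.0, (3 - 3 ** 0.5) / 2, 3 * (2 - 3 ** 0.5) / 2]
print(shifted_moments(M, 3))
```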

3. CONSTRUCTION OF ALTERNATIVE RECONSTRUCTION FORMULAS

Assume that for some function $f$ and for some biorthogonal wavelet basis we know the projection $\tilde P_n f$ onto $\tilde V_n$,
$$\tilde P_n f = \sum_{j=-\infty}^{\infty} f_{n,j}\,\tilde\phi_{n,j}, \qquad (3.1)$$
where
$$f_{n,j} = \langle f, \phi_{n,j}\rangle.$$

How well can we recover the point values of f from this?

Assume that $\psi$ has $N$ vanishing moments. If $f \in C^N$, it is well known and easy to prove that for any point $x$,
$$|f(x) - \tilde P_n f(x)| = O(h^N),$$
where $h = 2^{-n}$. (See, e.g., [6, Eq. (1.11)]; note that the meaning of $N$, $\tilde N$ is reversed, compared to our paper.)

There are more detailed results available about the convergence rate of $\tilde P_n f$ to $f$ under various conditions on $f$ and $\phi$ (see, for example, Walter [7], Kelly et al. [4]), but they are all based on the original scaling function series (3.1).

We propose instead to use a different series,
$$B_n f = \sum_{j=-\infty}^{\infty} f_{n,j}\, b_{n,j}, \qquad (3.2)$$
where
$$b_{n,j}(x) = 2^{n/2}\, b(2^n x - j) \qquad (3.3)$$
in analogy to (2.2). For any choice of $\phi$ and for any $p \in \mathbb{N}$, we will produce a function $b(x)$ with support of length $p$ so that for $f \in C^p$,
$$|f(x) - B_n f(x)| = O(h^p). \qquad (3.4)$$
For simplicity, the dependence of $b$ on $p$ and $\phi$ is not usually expressed in the notation. If necessary we will write $b(x; p)$ and, similarly, for dependence on other parameters.

More generally, we want to find functions $b^{[r]}$ so that
$$|f^{(r)}(x) - B^{[r]}_n f(x)| = O(h^{p-r}), \qquad (3.5)$$
where
$$f^{(r)} = \frac{d^r}{dx^r} f, \qquad B^{[r]}_n f = \sum_{j=-\infty}^{\infty} f_{n,j}\, b^{[r]}_{n,j}, \qquad b^{[r]}_{n,j}(x) = 2^{n/2}\, b^{[r]}(2^n x - j). \qquad (3.6)$$

To construct $b$, fix a scaling function $\phi$ and integers $n$, $p$ and assume that we are given the scaling function coefficients $f_{n,j}$, $j = 0, \ldots, p-1$. Following a standard approach from numerical analysis, we first attempt to find coefficients $c_{n,j}(x)$, $j = 0, \ldots, p-1$, so that

$$f(x) = \sum_{j=0}^{p-1} f_{n,j}\, c_{n,j}(x) \qquad \text{for } f(x) = x^k, \; k = 0, \ldots, p-1. \qquad (3.7)$$

For $f(x) = x^k$,
$$f_{n,j} = \langle f, \phi_{n,j}\rangle = \int x^k\, 2^{n/2}\phi(2^n x - j)\,dx = h^{k+(1/2)} M_{k,j}.$$

Equation (3.7) leads to a system of linear equations,
$$H_n M c_n(x) = p(x), \qquad (3.8)$$
where
$$H_n = H_n(p) = h^{1/2}\begin{pmatrix} h^0 & & & \\ & h^1 & & \\ & & \ddots & \\ & & & h^{p-1} \end{pmatrix}, \qquad
M = M(\phi, p) = \begin{pmatrix} M_{0,0} & M_{0,1} & \cdots & M_{0,p-1} \\ M_{1,0} & M_{1,1} & \cdots & M_{1,p-1} \\ \vdots & \vdots & & \vdots \\ M_{p-1,0} & M_{p-1,1} & \cdots & M_{p-1,p-1} \end{pmatrix},$$
$$c_n(x) = c_n(x; \phi, p) = \begin{pmatrix} c_{n,0}(x) \\ c_{n,1}(x) \\ \vdots \\ c_{n,p-1}(x) \end{pmatrix}, \qquad
p(x) = p(x; p) = \begin{pmatrix} x^0 \\ x^1 \\ \vdots \\ x^{p-1} \end{pmatrix}.$$

It is easy to show that the matrix $M(\phi, p)$ is nonsingular for all choices of $\phi$, $p$.

From definition (2.5), it follows immediately that $M = B \cdot V$, where $B$ is the lower triangular matrix with entries
$$b_{is} = \binom{i}{s} M_{i-s}, \qquad 0 \le s \le i,$$
and $V$ is the Vandermonde matrix with entries $v_{sj} = j^s$. Hence,
$$\det(M) = \det(B)\cdot\det(V) = M_0^p \Big(\prod_{k=1}^{p-1} k!\Big) \neq 0.$$

FIG. 1. Construction of $b(x; p, x_0)$.

From
$$c_n(x) = M^{-1} H_n^{-1} p(x) = h^{-1/2} M^{-1} p(x/h),$$
we observe that each $c_{n,j}(x)$ is a polynomial of degree at most $(p-1)$ in $(x/h)$, and that
$$c_{n,j}(x) = h^{-1/2}\, c_{0,j}(x/h) = 2^{n/2}\, c_{0,j}(2^n x). \qquad (3.9)$$

To put this approach into a wavelet-like setting, we select a unit interval $I = [x_0, x_0+1)$ for some (as yet undetermined) point $x_0$. Scaled and shifted versions of $I$ are denoted by $I_{n,s} = [(x_0+s)h, (x_0+s+1)h)$.
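At level 0 the construction reduces to inverting the moment matrix: the monomial coefficients of the polynomials $c_{0,j}$ are simply the rows of $M^{-1}$. The sketch below (ours) illustrates this, using the Haar (box) scaling function as a stand-in because its shifted moments $M_{i,j} = ((j+1)^{i+1} - j^{i+1})/(i+1)$ are available in closed form.

```python
import numpy as np

def haar_moment_matrix(p):
    """Moment matrix M = (M_{i,j}), i, j = 0..p-1, for the Haar (box) scaling
    function: M_{i,j} = int_j^{j+1} x^i dx = ((j+1)^{i+1} - j^{i+1})/(i+1)."""
    return np.array([[((j + 1) ** (i + 1) - j ** (i + 1)) / (i + 1)
                      for j in range(p)] for i in range(p)])

def reconstruction_polys(Mij):
    """Monomial coefficients of the polynomials c_{0,j}(x), from c_0(x) = M^{-1} p(x):
    row j of the result holds the coefficients of c_{0,j}."""
    return np.linalg.inv(Mij)

p = 3
Mij = haar_moment_matrix(p)
C = reconstruction_polys(Mij)

# Sanity check of (3.7) at level 0 (h = 1): x^k = sum_j M_{k,j} c_{0,j}(x), k < p.
x = 0.7
powers = x ** np.arange(p)
assert np.allclose(Mij @ (C @ powers), powers)
```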

We restrict the use of formula (3.7) to the interval $I_{n,0}$. Values of $f$ on a shifted interval $I_{n,s}$ are recovered by using the same coefficients $c_{n,j}$ on a shifted set of scaling function coefficients,
$$f(x) \approx \sum_{j=0}^{p-1} f_{n,j+s}\, c_{n,j}(x - sh) = \sum_{j=s}^{s+p-1} f_{n,j}\, c_{n,j-s}(x - sh) \qquad \text{for } x \in I_{n,s}. \qquad (3.10)$$

We can combine (3.9) and (3.10) into the desired form (3.2), (3.3) by defining
$$b(x; p, x_0) = \begin{cases} c_{0,p-1}(x+p-1; p) & \text{if } x \in [x_0-p+1, x_0-p+2), \\ \;\;\vdots & \\ c_{0,1}(x+1; p) & \text{if } x \in [x_0-1, x_0), \\ c_{0,0}(x; p) & \text{if } x \in [x_0, x_0+1). \end{cases} \qquad (3.11)$$

By construction, each $b(x; p, x_0)$ is a piecewise polynomial of degree $p-1$ with support $[x_0-p+1, x_0+1)$ (see Fig. 1).
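The reconstruction itself is then a short computation. The self-contained sketch below (ours, not the paper's Matlab code) implements (3.10) with $x_0 = 0$ for the Haar stand-in, whose scaling coefficients of $\sin x$ are known in closed form; it only illustrates the mechanics.

```python
import numpy as np

p, n = 3, 4                                   # accuracy order and resolution level
h = 2.0 ** (-n)

# Moment matrix and c_{0,j} polynomials for the Haar (box) scaling function;
# row j of C holds the monomial coefficients of c_{0,j}.
Mij = np.array([[((j + 1) ** (i + 1) - j ** (i + 1)) / (i + 1) for j in range(p)]
                for i in range(p)])
C = np.linalg.inv(Mij)

def reconstruct(f_coeffs, x, x0=0.0):
    """Approximate f(x) from the coefficients f_{n,j} via (3.10):
    f(x) ~ 2^{n/2} sum_{j=0}^{p-1} f_{n,j+s} c_{0,j}(x/h - s) for x in I_{n,s}."""
    s = int(np.floor(x / h - x0))             # which interval I_{n,s} contains x
    y = x / h - s                             # local variable in [x0, x0 + 1)
    return 2.0 ** (n / 2) * np.dot(f_coeffs[s:s + p], C @ (y ** np.arange(p)))

# Haar coefficients <sin, phi_{n,j}> are available in closed form:
j = np.arange(60)
f_coeffs = 2.0 ** (n / 2) * (np.cos(j * h) - np.cos((j + 1) * h))
print(reconstruct(f_coeffs, 1.0), np.sin(1.0))   # agree to roughly O(h^3)
```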

Formulas for reconstructing the derivatives of $f$ can be obtained by an analogous construction. We use the notation $b^{[r]}$ for the function used to reconstruct values of the $r$th derivative; $f^{(r)}$ denotes the $r$th derivative of $f$.

We again start by looking for coefficients $c^{[r]}_{n,j}$ so that
$$f^{(r)}(x) = \sum_{j=0}^{p-1} f_{n,j}\, c^{[r]}_{n,j}(x) \qquad \text{for } f(x) = x^k, \; k = 0, \ldots, p-1.$$

This leads to the matrix equation
$$H_n M c^{[r]}_n(x) = p^{(r)}(x),$$
from which we easily obtain
$$c^{[r]}_{n,j}(x) = h^{-r}\, 2^{n/2}\, c^{(r)}_{0,j}(2^n x)$$
and, therefore,
$$b^{[r]}(x; p, x_0) = \begin{cases} c^{(r)}_{0,p-1}(x+p-1; p) & \text{if } x \in [x_0-p+1, x_0-p+2), \\ \;\;\vdots & \\ c^{(r)}_{0,1}(x+1; p) & \text{if } x \in [x_0-1, x_0), \\ c^{(r)}_{0,0}(x; p) & \text{if } x \in [x_0, x_0+1). \end{cases} \qquad (3.12)$$

Thus,
$$b^{[r]}(x; p, x_0) = b^{(r)}(x; p, x_0)$$
in the sense of $b^{[r]}$ being the $r$th derivative from the right of $b$, i.e., ignoring possible discontinuities at the knots $(x_0+j)$, $j \in \mathbb{Z}$. $b^{[r]}$ is therefore a piecewise polynomial of degree $(p-r-1)$.
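In code, the derivative formulas amount to differentiating the monomial coefficients of the $c_{0,j}$ and rescaling by $h^{-r}$, as dictated by (3.9) and the chain rule. The helper names below are ours; `C` and `f_coeffs` are assumed to be set up as in the previous sketch.

```python
import numpy as np

def monomial_derivative(C, r):
    """Monomial coefficients of the r-th derivatives c_{0,j}^{(r)}:
    row j of the result represents d^r/dx^r c_{0,j}(x)."""
    p = C.shape[1]
    D = np.zeros_like(C)
    for k in range(r, p):
        # x^k  ->  k (k-1) ... (k-r+1) x^(k-r)
        D[:, k - r] = C[:, k] * np.prod(np.arange(k, k - r, -1))
    return D

def reconstruct_derivative(f_coeffs, C, n, r, x, x0=0.0):
    """Approximate f^{(r)}(x) by differentiating the local reconstruction (3.10):
    f^{(r)}(x) ~ 2^{n/2} h^{-r} sum_j f_{n,j+s} c_{0,j}^{(r)}(x/h - s)."""
    p, h = C.shape[0], 2.0 ** (-n)
    s = int(np.floor(x / h - x0))
    y = x / h - s
    Dr = monomial_derivative(C, r)
    return 2.0 ** (n / 2) * h ** (-r) * np.dot(f_coeffs[s:s + p], Dr @ (y ** np.arange(p)))

# e.g. reconstruct_derivative(f_coeffs, C, n=4, r=1, x=1.0) should lie close to
# cos(1.0) for the sin-x example in the previous sketch.
```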

Remarks.

1. It is well known that Vandermonde matrices have condition numbers which increase rapidly with the size of the matrix. We were not able to avoid this phenomenon by alternative approaches to the derivation of b.

However, $b$ functions for larger $p$ do not appear to offer any real advantages over lower order ones. The lower order functions ($p = 3$ or 4) already give excellent reconstruction results. Some of the higher order functions are oscillatory and thus unsuitable, because of cancellation problems during the reconstruction. The others look very similar to $b$ functions for smaller $p$ (see Fig. 4), but require a wider support.

For small values of $p$, we observed no difficulties in the numerical calculation of the coefficients. Even for $p = 10$, we obtained the coefficients of the polynomials $c_{n,j}$ to 10 decimals of accuracy, using the Björck–Pereyra algorithm [1] in 15-decimal arithmetic.
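As a quick numerical illustration of this conditioning issue (ours, not taken from the paper), the condition number of the Vandermonde factor $V$ with entries $j^s$ grows rapidly with $p$:

```python
import numpy as np

# Condition number of the Vandermonde factor V (entries j^s, nodes j = 0..p-1),
# illustrating why very large p brings numerical difficulties.
for p in (3, 5, 10, 15):
    V = np.vander(np.arange(p, dtype=float), increasing=True)
    print(p, np.linalg.cond(V))
```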

2. If the dual scaling function $\tilde\phi$ is a spline, does $b$ equal $\tilde\phi$? The answer is: possibly yes, but usually not.

Assume that for some integer $p$, $\tilde\phi$ is a piecewise polynomial of degree $p-1$, has support of length $p$, and can be used to reconstruct all polynomials up to order $p-1$. These conditions are satisfied if $\tilde\phi$ is the B-spline of order $p$, for example.

Then, if we use the same $p$, and if we choose $x_0$ so that the supports of $b$ and $\tilde\phi$ match, the polynomial pieces of both $b$ and $\tilde\phi$ satisfy (3.8), and by uniqueness $\tilde\phi$ must equal $b$. For any other choice of $p$ or $x_0$, $b$ will be different from $\tilde\phi$.

Figure 3 shows the dual scaling function $\tilde\phi$ and $b(x; p)$ for $p = 2, 3, 4$, for the Daubechies–Cohen spline wavelet with $(N, \tilde N) = (3, 3)$. $b(x; 3)$ is equal to $\tilde\phi$ if we choose $x_0 = 1$, which makes $b$ continuous. $b(x; 2)$ and $b(x; 4)$ are different from $\tilde\phi$, and so is $b(x; 3)$ for other choices of $x_0$.

3. A natural question to ask is whether $b$ or $b^{[r]}$ satisfy recursion relations analogous to (2.1). This is not the case in general, as can be verified from the examples in Section 6.

4. We have not used any special properties of $\phi$ other than $M_0 = \int \phi(x)\,dx \neq 0$. Our construction is valid in any setting where data of the form $\langle f, \phi_{n,j}\rangle$ are given.

5. We will discuss the choice of $x_0$ below. At this point we would just like to point out that it is not required to choose the same $x_0$ for $b$ and $b^{[r]}$. If $x_0$ and $x^{[r]}_0$ are chosen differently, $b^{[r]}$ will be different from $b^{(r)}$ (see Figs. 2c, d).

4. ERROR ESTIMATES

Fix $p$ and $x_0$. For $x \in I = [x_0, x_0+1)$, the pointwise reconstruction error for the monomial $x^k$ at resolution $h = 2^0$ is
$$e_k(x) = x^k - \sum_{j=0}^{p-1} \langle x^k, \phi_{0,j}\rangle\, c_{0,j}(x) = x^k - \sum_{j=0}^{p-1} M_{k,j}\, c_{0,j}(x).$$

By (3.8), $e_k \equiv 0$ for $k = 0, \ldots, p-1$. For $k \ge p$, $e_k$ is a polynomial of degree $k$.

If $f \in C^p$, then for $x \in I_{n,0}$,
$$f(x) = \sum_{j=0}^{p-1} \frac{f^{(j)}(0)}{j!}\, x^j + \frac{f^{(p)}(\xi)}{p!}\, x^p \qquad \text{for some } \xi \in I_{n,0}.$$
The reconstruction error for $f$ at the point $x$ is
$$f(x) - \sum_{j=0}^{p-1} \langle f, \phi_{n,j}\rangle\, c_{n,j}(x) = \sum_{j=0}^{p-1} \frac{f^{(j)}(0)}{j!}\, h^j e_j(x/h) + \frac{f^{(p)}(\xi)}{p!}\, h^p e_p(x/h) \quad \text{for some } \xi \in I_{n,0}$$
$$= O(h^p), \qquad (4.1)$$
since $f^{(p)}$ and $e_p$ are continuous in $I_{n,0}$.

For $x \in I_{n,s}$ we obtain a corresponding result using a Taylor series expansion around the point $sh$.

If $f \in C^{p+1}$, we can take the Taylor series expansion one step further: for $x \in I_{n,s}$,
$$f(x) - \sum_{j=0}^{p-1} f_{n,j}\, c_{n,j}(x) = \frac{f^{(p)}(sh)}{p!}\, h^p e_p((x/h)-s) + O(h^{p+1}). \qquad (4.2)$$
If $\sigma$ is a zero of $e_p$ inside $I$, the local error at the scaled and translated points $(\sigma + s)h$ is $O(h^{p+1})$.

Remarks.

1. It is conceivable that the term $O(h^{p+1})$ in (4.2) could also vanish at the same point $\sigma$, but that requires specific relationships between the higher moments of $\phi$. Unless we are in a position to prescribe the moments of $\phi$, we would not expect that to happen.

2. Let $\sigma_k$, $k = 1, 2, \ldots,$ denote the real zeros of $e_p$. At these points, we potentially get higher order accuracy, but only if $\sigma_k$ lies inside $I = [x_0, x_0+1)$. See the numerical examples in Section 6 for illustration.
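Numerically, the zeros $\sigma_k$ can be located by assembling $e_p$ as a polynomial and calling a root finder. A small sketch (ours), again with the Haar box function standing in for $\phi$:

```python
import numpy as np

p = 3
# Moment matrix for the Haar (box) scaling function, plus the extra row M_{p,j}.
Mfull = np.array([[((j + 1) ** (i + 1) - j ** (i + 1)) / (i + 1) for j in range(p)]
                  for i in range(p + 1)])
C = np.linalg.inv(Mfull[:p])                    # c_{0,j} in the monomial basis

# e_p(x) = x^p - sum_j M_{p,j} c_{0,j}(x); assemble its monomial coefficients.
coeffs = np.zeros(p + 1)
coeffs[p] = 1.0
coeffs[:p] -= Mfull[p] @ C                      # subtract sum_j M_{p,j} C[j, :]
sigma = np.roots(coeffs[::-1])                  # np.roots wants highest degree first
print(np.sort(sigma[np.isreal(sigma)].real))    # candidate points of higher accuracy
```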

3. It is possible to estimate the error at a point $x \in I_{n,s}$ based on Taylor series expansions around a point different from $sh$, for example around $x$ itself. Such attempts lead back to the same results presented here, after laborious calculation.

Differentiation commutes with all the operations leading to the error estimates (4.1) and (4.2). In particular, if $f \in C^p$ and $r < p$, then for $x \in I_{n,0}$
$$f^{(r)}(x) = \sum_{j=0}^{p-r-1} \frac{f^{(j+r)}(0)}{j!}\, x^j + \frac{f^{(p)}(\xi)}{(p-r)!}\, x^{p-r} \qquad \text{for some } \xi \in I_{n,0}$$
$$= \sum_{j=0}^{p-1} \frac{f^{(j)}(0)}{j!} \frac{d^r}{dx^r} x^j + O(h^{p-r}),$$
$$f_{n,j} = \sum_{i=0}^{p-1} \frac{f^{(i)}(0)}{i!}\, h^{i+(1/2)} M_{i,j} + O(h^{p+(1/2)}) \qquad \text{as before},$$
$$c^{[r]}_{n,j}(x) = h^{-r}\, 2^{n/2}\, c^{(r)}_{0,j}(2^n x) = \frac{d^r}{dx^r}\, c_{n,j}(x),$$
which leads to an error estimate at $x \in I_{n,s}$ of
$$f^{(r)}(x) - \sum_{j=0}^{p-1} f_{n,j}\, c^{[r]}_{n,j}(x) = \sum_{j=0}^{p-1} \frac{f^{(j)}(sh)}{j!} \frac{d^r}{dx^r}\big[h^j e_j((x/h)-s)\big] + \frac{f^{(p)}(sh)}{p!} \frac{d^r}{dx^r}\big[h^p e_p((x/h)-s)\big] + O(h^{p-r+1})$$
$$= O(h^{p-r}). \qquad (4.3)$$

Again, the order of convergence is one level higher at certain points. These points $\sigma^{[r]}_k$ are the zeros of $e^{[r]}_p$ in $I_{n,s}$ and are, in general, different from the $\sigma_k$.

5. SMOOTHNESS PROPERTIES OF THE b FUNCTIONS

In Sections 3 and 4, we had to perform the calculations for an arbitrary level $n$ in order to obtain the correct powers of $h$. Since smoothness is independent of scaling, it suffices to take $n = 0$, $h = 2^0 = 1$ in this section, and we use the notation $c_j$ instead of $c_{0,j}$.

THEOREM 5.1. The function $b^{[r]}(x; x_0)$ is continuous everywhere if it is continuous at one of the endpoints of its support.

Proof. We will write out the proof for $r = 0$. As in the derivation of (4.3), the general proof is then obtained by differentiating everything, since differentiation commutes with all operations we use.

Continuity of $b$ means (see (3.11) and Fig. 1)
$$0 = c_0(x_0+1), \qquad c_j(x_0) = c_{j+1}(x_0+1), \quad j = 0, \ldots, p-2, \qquad c_{p-1}(x_0) = 0.$$

Without loss of generality, assume continuity at the left endpoint, that is, $c_{p-1}(x_0) = 0$. Since $M$ is nonsingular, it suffices to prove
$$M \begin{pmatrix} 0 \\ c_0(x_0) \\ \vdots \\ c_{p-2}(x_0) \end{pmatrix} = M \begin{pmatrix} c_0(x_0+1) \\ c_1(x_0+1) \\ \vdots \\ c_{p-1}(x_0+1) \end{pmatrix}.$$

The $k$th entry on the right-hand side is $(x_0+1)^k$ by (3.8). Using (2.6), the $k$th entry on the left is
$$\sum_{j=0}^{p-2} M_{k,j+1}\, c_j(x_0) = \sum_{j=0}^{p-1} M_{k,j+1}\, c_j(x_0) \qquad (\text{since } c_{p-1}(x_0) = 0)$$
$$= \sum_{j=0}^{p-1} \Big\{\sum_{l=0}^{k} \binom{k}{l} M_{l,j}\Big\}\, c_j(x_0) = \sum_{l=0}^{k} \binom{k}{l}\, x_0^l = (x_0+1)^k. \;\blacksquare$$

For even $p$, $c_{p-1}$ is a polynomial of odd degree, so there always exists a choice of $x_0$ which makes $b$ continuous. It is not obvious whether that is always possible for odd $p$, but we were able to find such $x_0$ in all the examples we considered.
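In practice such an $x_0$ can be found numerically as a real zero of $c_{p-1}$. A short sketch (ours), once more with the Haar box function standing in for $\phi$:

```python
import numpy as np

# Candidate starting points x0 that make b continuous: by Theorem 5.1 it suffices
# that c_{p-1}(x0) = 0.  Illustration with the Haar (box) scaling function.
p = 4
Mij = np.array([[((j + 1) ** (i + 1) - j ** (i + 1)) / (i + 1) for j in range(p)]
                for i in range(p)])
C = np.linalg.inv(Mij)
roots = np.roots(C[p - 1, ::-1])                # zeros of the polynomial c_{p-1}(x)
print(np.sort(roots[np.isreal(roots)].real))    # real roots are candidate x0 values
```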

As illustrated in the numerical examples below, $x^{[r]}_0$ should be chosen so that the leading error term $e^{[r]}_p$ is small over the interval $[x^{[r]}_0, x^{[r]}_0+1)$, whether or not this makes $b^{[r]}$ continuous. However, in all examples we tried we were able to find an $x^{[r]}_0$ which produces a continuous $b^{[r]}$ and small error simultaneously.

The following lemma, together with Theorem 5.1, states that the starting points $x^{[r]}_0$ which make $b^{[r]}(x; p+1)$ continuous are the same as the points $\sigma^{[r]}$ of (potentially) higher accuracy for $b^{[r]}(x; p)$.

LEMMA 5.2. For any $r < p$, if $c^{[r]}_p(\sigma; p+1) = 0$, then
$$c^{[r]}_j(\sigma; p+1) = c^{[r]}_j(\sigma; p), \qquad j = 0, \ldots, p-1, \qquad (5.1)$$
and
$$e^{[r]}_p(\sigma) = \frac{d^r}{d\sigma^r}\sigma^p - \sum_{j=0}^{p-1} M_{p,j}\, c^{[r]}_j(\sigma; p) = 0. \qquad (5.2)$$

Proof. We will show the proof for $r = 0$ only; the proof for general $r$ proceeds along identical lines, with derivatives inserted all over the place.

Assume $c_p(\sigma; p+1) = 0$; then
$$M(p+1)\, c(\sigma; p+1) = \begin{pmatrix} & & & M_{0,p} \\ & M(p) & & \vdots \\ & & & M_{p-1,p} \\ M_{p,0} & \cdots & M_{p,p-1} & M_{p,p} \end{pmatrix} \begin{pmatrix} c_0(\sigma; p+1) \\ \vdots \\ c_{p-1}(\sigma; p+1) \\ 0 \end{pmatrix} = \begin{pmatrix} \sigma^0 \\ \vdots \\ \sigma^{p-1} \\ \sigma^p \end{pmatrix}. \qquad (5.3)$$
Thus,
$$M(p) \begin{pmatrix} c_0(\sigma; p+1) \\ \vdots \\ c_{p-1}(\sigma; p+1) \end{pmatrix} = \begin{pmatrix} \sigma^0 \\ \vdots \\ \sigma^{p-1} \end{pmatrix},$$
which implies (5.1) by uniqueness of the solution of (3.8). Equation (5.2) is the last equation in (5.3), using (5.1). $\blacksquare$

We summarize the results of the last three sections in the following theorem.

THEOREM 5.3. For any choice of $\phi$, $p$, $r < p$, $x^{[r]}_0$, there exists a function $b^{[r]}(x) = b^{[r]}(x; \phi, p, x^{[r]}_0)$ so that for $f \in C^p$ and for any $x \in \mathbb{R}$,
$$|f^{(r)}(x) - B^{[r]}_n f(x)| = O(h^{p-r}),$$
where
$$h = 2^{-n}, \qquad B^{[r]}_n f = \sum_{j=-\infty}^{\infty} f_{n,j}\, b^{[r]}_{n,j}, \qquad b^{[r]}_{n,j}(x) = 2^{n/2}\, b^{[r]}(2^n x - j).$$
$b^{[r]}(x; \phi, p, x^{[r]}_0)$ has support $[x^{[r]}_0 - p + 1, x^{[r]}_0 + 1]$ and is a polynomial of degree at most $p - r - 1$ on each subinterval $[x^{[r]}_0 + j, x^{[r]}_0 + j + 1]$, $j \in \mathbb{Z}$.

TABLE 1

Choices of Starting Points $x^{[r]}_0$ Which Make $b^{[r]}$ Continuous and Points $\sigma^{[r]}$ of Potentially Higher Accuracy for (a) the Orthogonal Daubechies Wavelet with Two Vanishing Moments and (b) the Biorthogonal Daubechies–Cohen Spline Wavelet with $(N, \tilde N) = (3, 3)$

                      (a)                                            (b)
              x_0^[r]                σ^[r]                       x_0^[r]   σ^[r]
p = 2  r = 0  (3 − √3)/2             (3 − √3)/2, (5 − √3)/2      1/2       1
       r = 1                         (4 − √3)/2                            1
p = 3  r = 0  (3 − √3)/2, (5 − √3)/2 0.57707, 1.75968, 2.56518   1         1

FIG. 2. Examples of $b^{[r]}$ for the Daubechies wavelet with two vanishing moments: (a) scaling function $\phi$; (b) $b(x; p=2, x_0=M_1)$; (c) $b(x; p=3, x_0=M_1+1)$; (d) $b^{[1]}(x; p=3, x_0=M_1+\tfrac12)$; (e) $b(x; p=3, x_0=0)$.

For $p = 2$, the leading error term for $B_n$ vanishes if
$$e_2(\sigma) = \sigma^2 - M_{2,0}\,c_0(\sigma) - M_{2,1}\,c_1(\sigma) = 0,$$
which leads to
$$\sigma = \tfrac{1}{2}\big(2M_1 + 1 \pm \sqrt{4M_2 - 4M_1^2 + 1}\big).$$
If $M_2 = M_1^2$, which happens for all orthogonal wavelets with at least two vanishing moments, then $\sigma = M_1$ or $M_1 + 1$.

$b^{[1]}$ is always a step function and is never continuous. The leading error term for $B^{[1]}_n$ vanishes if

$$e_2'(\sigma^{[1]}) = 2\sigma^{[1]} - M_{2,0}\,c_0'(\sigma^{[1]}) - M_{2,1}\,c_1'(\sigma^{[1]}) = 0,$$
which leads to $\sigma^{[1]} = M_1 + \tfrac{1}{2}$.

FIG. 3. Examples of $b^{[r]}$ for the biorthogonal Cohen–Daubechies spline wavelet with $(N, \tilde N) = (3, 3)$: (a) dual scaling function $\tilde\phi$; (b) $b(x; p=2, x_0=\tfrac12)$; (c) $b(x; p=3, x_0=1)$; (d) $b(x; p=4, x_0=\tfrac32)$.

For $p = 3$, $b$ is continuous if
$$x_0 = \tfrac{1}{2}\big(2M_1 + 1 \pm \sqrt{4M_2 - 4M_1^2 + 1}\big),$$
and $b^{[1]}$ is continuous if $x^{[1]}_0 = M_1 + \tfrac{1}{2}$.

These are the $\sigma$, $\sigma^{[1]}$ for $p = 2$ (see the remark after Lemma 5.2). $b^{[2]}$ is always a step function and is never continuous.

The leading error term for $B_n$ vanishes if
$$e_3(\sigma) = \sigma^3 - M_{3,0}\,c_0(\sigma) - M_{3,1}\,c_1(\sigma) - M_{3,2}\,c_2(\sigma) = 0.$$
The resulting cubic equation cannot be solved in closed form in general. If $M_2 = M_1^2$, $M_3 = M_1^3$, which happens in the case of higher order coiflets, then $\sigma = M_1$, $M_1+1$, or $M_1+2$.

The leading error term for $B^{[1]}_n$ vanishes for
$$\sigma^{[1]} = 1 + M_1 \pm \sqrt{\tfrac{1}{3} - M_1^2 + M_2}.$$
The leading error term for $B^{[2]}_n$ vanishes for $\sigma^{[2]} = 1 + M_1$.

FIG. 4. $b$ for large $p$, for the Daubechies wavelet with two vanishing moments: (a) $b(x; p=5)$; (b) $b(x; p=10)$; (c) $b(x; p=15)$.

TABLE 2

Comparison of Errors at Several Points $x$ and Several Resolutions $h$, for Reconstruction of $\sin x$ Using $b(x; p=3, x_0=1.633975)$, Based on the Daubechies Wavelet with Two Vanishing Moments

x        h = 2^0         h = 2^{-1}      h = 2^{-2}
0.25     0.0020729566    0.0005046129    0.0000621160
...
0.50     0.0040526588    0.0004775991    0.0000575524
...
1.00     0.0031835356    0.0003380129    0.0000381963

Note. The underlined values show the predicted convergence behavior $O(h^3)$.

As particular examples, we consider two types of wavelets:

• The orthogonal Daubechies wavelet with 2 vanishing moments [3, p. 194 ff]. Its first few moments are
$$M_0 = 1, \quad M_1 = (3-\sqrt{3})/2 \approx 0.633975, \quad M_2 = 3(2-\sqrt{3})/2 \approx 0.40192.$$

• The biorthogonal Daubechies–Cohen spline wavelet with $(N, \tilde N) = (3, 3)$ (see [2, p. 543] or [3, p. 271 ff]). Its first few moments are
$$M_0 = 1, \quad M_1 = \tfrac{1}{2}, \quad M_2 = 0, \quad M_3 = -\tfrac{1}{4}.$$

The points $x^{[r]}_0$ and $\sigma^{[r]}$ for $p = 2$ and 3 are summarized in Table 1. The dual scaling functions and some of the functions $b$, $b^{[1]}$ are shown in Figs. 2, 3, and 4.
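The moments quoted above can be reproduced from the filter coefficients $\{h_k\}$ of (2.1) alone. The recursion below is our own sketch of a standard consequence of applying the refinement relation inside the integral, assuming the normalization $\sum_k h_k = \sqrt{2}$ (so that $M_0 = 1$); it is not spelled out in the paper. The Daubechies filter is quoted from [3].

```python
import numpy as np
from math import comb, sqrt

def scaling_moments(h, jmax):
    """Moments M_j = int x^j phi(x) dx from the filter {h_k} of (2.1),
    assuming sum_k h_k = sqrt(2), i.e. M_0 = 1."""
    M = [1.0]
    for j in range(1, jmax + 1):
        s = sum(hk * sum(comb(j, i) * k ** (j - i) * M[i] for i in range(j))
                for k, hk in enumerate(h))
        M.append(sqrt(2) * s / (2 ** (j + 1) - 2))
    return M

# Daubechies filter with two vanishing moments (see [3]):
r3 = sqrt(3)
h = np.array([1 + r3, 3 + r3, 3 - r3, 1 - r3]) / (4 * sqrt(2))
print(scaling_moments(h, 2))   # approximately [1.0, 0.633975, 0.401924]
```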

6.2. Reconstruction Examples

The numerical examples were done on a DECstation 5000/133, using Matlab 4.2c. The scaling function coefficients $f_{n,j}$ were calculated numerically to accuracy $10^{-12}$; all subsequent calculations were done by Matlab in double precision.

Most of the examples are based on the reconstruction of the function $f(x) = \sin x$ on the interval $[0, 3]$.

First, we illustrate the error estimates (4.1) through (4.3), using the Daubechies wavelet with two vanishing moments and accuracy level $p = 3$. The underlined values in Table 2 are approximately proportional to $f'''(0)h^3$ and decrease by the expected factor of 8 at each level; the values in each row (errors at a fixed point $x$ for different resolutions) are not directly comparable.

TABLE 3

Comparison of Maximum Errors at Various Resolutions for Reconstruction of $\sin x$ or Its Derivative on [0, 3]

Maximum errors

Reconstruction   Errors evaluated   h = 2^0      h = 2^{-1}    h = 2^{-2}    h = 2^{-3}
using            for
(a)  φ̃           Δx = 1/64         0.2230931    0.0597919     0.0154932     0.0038994
     b            Δx = 1/64         0.0418520    0.0054484     0.0006783     0.0000842
                  σ1 = 2.565179     0.0179483    0.0012485     0.0000801     0.0000050
                  σ2 = 1.759679     0.0072115    0.0004933     0.0000316     0.0000020
                  σ3 = 0.577066     0.0162341    0.0012062     0.0000898     0.0000079
     b^[1]        Δx = 1/64         0.1420792    0.0402152     0.0103265     0.0025981
(b)  φ̃           Δx = 1/64         0.2230931    0.0597919     0.0154932     0.0038994
     b            Δx = 1/64         0.3933545    0.0531412     0.0067300     0.0008467
(c)  φ̃           Δx = 1/64         0.0983246    0.0084478     0.0005585     0.0000354
     b            Δx = 1/64         0.0993215    0.0139191     0.0017952     0.0002273

Ratios

(a)  φ̃           Δx = 1/64         3.7311581    3.8592332     3.9732578
     b            Δx = 1/64         7.6815296    8.0325838     8.0536774
                  σ1 = 2.565179     14.3753306   15.5893594    15.9203390
                  σ2 = 1.759679     14.6201103   15.6096820    15.8281089
                  σ3 = 0.577066     13.4590786   13.4291017    11.4237320
     b^[1]        Δx = 1/64         3.5329707    3.8943582     3.9746441
(b)  φ̃           Δx = 1/64         3.7311581    3.8592332     3.9732578
     b            Δx = 1/64         7.4020651    7.8961797     7.9484083
(c)  φ̃           Δx = 1/64         11.6390266   15.1266675    15.7609234
     b            Δx = 1/64         7.1356205    7.7535307     7.8987923

Note. The errors were calculated either at equally spaced points with spacing $\Delta x$, or at all scaled and translated versions of the points $\sigma_i$. The second part of the table shows ratios between adjacent columns in the first part. (a) Using continuous $b(x; p=3, x_0=1.633975)$ and $b^{[1]}(x; p=3, x_0=1.133975)$ based on the Daubechies wavelet with two vanishing moments. (b) Using discontinuous $b(x; p=3, x_0=0)$ based on the Daubechies wavelet with two vanishing moments. (c) Using continuous $b^{[1]}(x; p=3, x_0=1.133975)$ based on the biorthogonal Daubechies–Cohen spline wavelet with $(N, \tilde N) = (3, 3)$.

Table 3(a) shows the maximum errors at equally spaced points across the interval $[0, 3]$ for the original scaling function series (1.1) and for continuous $b$ and $b^{[1]}$ with $p = 3$. Also shown are the maximum errors at scaled translates of the $\sigma_k$.

As expected, for general points the ratio of errors at different resolutions approaches 4 for $\tilde\phi$ and $b^{[1]}$ and 8 for $b$. This indicates errors of order $O(h^2)$ and $O(h^3)$, respectively.

For the points $\sigma_1$ and $\sigma_2$, which are inside $I$, convergence is of order $O(h^4)$; $\sigma_3$ lies outside $I$, and the ratio approaches the value of 8, corresponding to the error $O(h^3)$ valid for general points.
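These ratios are easy to recompute from the tabulated maxima. The sketch below (ours) does so for three rows of Table 3(a); because the table entries are rounded, the last digits differ slightly from the ratios printed in the table.

```python
import numpy as np

# Error ratios between successive resolutions, recomputed from Table 3(a);
# log2 of the ratio estimates the observed convergence order.
max_err_phi = np.array([0.2230931, 0.0597919, 0.0154932, 0.0038994])   # series (1.1)
max_err_b   = np.array([0.0418520, 0.0054484, 0.0006783, 0.0000842])   # series with b, p = 3
err_sigma1  = np.array([0.0179483, 0.0012485, 0.0000801, 0.0000050])   # at translates of sigma_1

for e in (max_err_phi, max_err_b, err_sigma1):
    print(e[:-1] / e[1:], np.log2(e[:-1] / e[1:]))
# The ratios approach 4, 8, and 16, i.e. orders O(h^2), O(h^3), O(h^4), respectively.
```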

Figure 5 shows the reconstructions themselves, at resolution $h = 2^0$. Figure 6 compares the reconstruction errors for $\tilde\phi$ and for $b$, as well as the error at scaled translates of $\sigma_1$ at resolution $h = 2^{-4}$; note the logarithmic scale on the vertical axis.

FIG. 5. Reconstruction of $\sin x$ at resolution $h = 2^0$: dotted line, original function; dashed line, series using the Daubechies wavelet with two vanishing moments; solid line, series using $b(x; p=3, x_0=M_1+1)$.

Second, we illustrate the effect of the choice of $x_0$. Part (b) of Table 3 is comparable to the first two lines of part (a), except that we have chosen $x_0 = 0$. For this choice, $b$ is not continuous (see Fig. 2e). The errors still decay at the expected rate, but they are now much larger.

To understand the influence of $x_0$ better, consider the graph of $e_3$ in Fig. 7. The error depends on the size of $e_3$ over the interval $I = [x_0, x_0+1)$. In general, we would expect $e_3$ to be small if $I$ is located somewhere in the region between the zeros of $e_3$ (such as $x_0 = 1.633975$) and larger if $I$ is farther away from the region of zeros (such as $x_0 = 0$).

FIG. 6. Pointwise reconstruction error in reconstruction of $\sin x$ at resolution $h = 2^{-4}$: dashed line, error using the Daubechies wavelet with two vanishing moments; solid line, error using $b(x; p=3, x_0=M_1+1)$; circled points, error at scaled translates of $\sigma_1 \approx 2.56518$.

FIG. 7. Leading error term $e_3(x)$ for $b(x; p=3)$, based on the Daubechies wavelet with two vanishing moments. Zeros of the function (i.e., the points $\sigma_j$) are circled.

The size of the error does not depend on whether $b$ is continuous or not, but it is esthetically more pleasing to have a continuous approximating function if an $x_0$ can be found in the region where $e_p$ is small.

Third, we demonstrate that our results work equally well for biorthogonal wavelets.

Part (c) of Table 3 shows results comparable to the first two lines of part (a), for the biorthogonal Daubechies–Cohen spline wavelet with $(N, \tilde N) = (4, 2)$ (see [3, p. 271 ff]). In this case, our reconstruction is less accurate, but faster, than the scaling function series.

FIG. 8. Reconstruction of $\sin x + 0.51\sin(2.5(x-1)) + 0.71\sin(6(x+1))$ at resolution $h = 2^0$: dotted line, original function; dashed line, series using the Daubechies wavelet with two vanishing moments; solid line, series using $b(x; p=3, x_0=M_1+1)$.

FIG. 9. Pointwise reconstruction error in reconstruction of $\sin x + 0.51\sin(2.5(x-1)) + 0.71\sin(6(x+1))$ at resolution $h = 2^0$: solid line, error using $b(x; p=3, x_0=M_1+1)$; circled points, error at scaled translates of $\sigma_1 \approx 2.56518$.

Experiments with other wavelets and with other smooth and slowly varying functions yield results similar to the ones presented so far.

Finally, Figs. 8 and 9 illustrate what happens in the case of a highly oscillating function; the results for functions with singularities are similar.

As shown in Fig. 8, the reconstruction $B_n f$ is smoother looking, but not any closer to the original curve than the scaling function series $\tilde P_n f$. However, the accuracy of $B_n f$ is not any worse than the accuracy of $\tilde P_n f$, either.

Figure 9 illustrates that the error at the scaled translates of the $\sigma_k$ is not noticeably smaller than at other points. Obviously, the error in the case of a highly oscillating function is not described all that well by the leading term.

7. SUMMARY

For situations where it is desired to recover point values of a function $f$ to high accuracy from its scaling function coefficients $f_{n,j}$, we propose to replace the standard scaling function series

$$\tilde P_n f = \sum_{j=-\infty}^{\infty} \langle f, \phi_{n,j}\rangle\, \tilde\phi_{n,j}$$
by a corresponding series
$$B_n f = \sum_{j=-\infty}^{\infty} \langle f, \phi_{n,j}\rangle\, b_{n,j},$$
or more generally,
$$f^{(r)}(x) \approx B^{[r]}_n f = \sum_{j=-\infty}^{\infty} \langle f, \phi_{n,j}\rangle\, b^{[r]}_{n,j}.$$

Theoretical estimates and numerical experiments indicate that the new series $B_n f$ has accuracy at least as good as the scaling function series for all functions $f$ and performs significantly better for smooth, slowly varying $f$.

In addition, derivatives of $f$ can be recovered by using the related series $B^{[r]}_n f$, for which there is in general no counterpart using derivatives of scaling functions.

ACKNOWLEDGMENT

The authors thank the (anonymous) referee for many helpful hints which led to improved clarity and simplified proofs.

REFERENCES

1. Å. Björck and V. Pereyra, Solution of Vandermonde systems of equations, Math. Comp. 24 (1970), 893–903.

2. A. Cohen, I. Daubechies, and J.-C. Feauveau, Biorthogonal bases of compactly supported wavelets, Comm. Pure Appl. Math. 45 (1992), 485–560.

3. I. Daubechies, "Ten Lectures on Wavelets," CBMS-NSF Regional Conference Series in Applied Mathematics, Vol. 61, SIAM, Philadelphia, 1992.

4. S. Kelly, M. Kon, and L. Raphael, Pointwise convergence of wavelet expansions, Bull. Amer. Math. Soc. 30 (1994), 87–94.

5. S. G. Mallat, Multiresolution approximations and wavelet orthonormal bases of $L^2(\mathbb{R})$, Trans. Amer. Math. Soc. 315 (1989), 69–87.

6. W. Sweldens and R. Piessens, Quadrature formulae and asymptotic error expansions for wavelet approximations of smooth functions, SIAM J. Numer. Anal. 31 (1994), 1240–1264.

7. G. G. Walter, "Wavelets and Other Orthogonal Systems with Applications," CRC Press, Boca Raton, FL, 1994.
