
Journal of Multivariate Analysis

Simple and Efficient Improvements of Multivariate Local Linear Regression

Ming-Yen Cheng1 and Liang Peng2

Abstract

This paper studies improvements of multivariate local linear regression. Two intuitively appealing variance reduction techniques are proposed. They both yield estimators that retain the same asymptotic conditional bias as the multivariate local linear estimator and have smaller asymptotic conditional variances. The estimators are further examined in terms of bandwidth selection, asymptotic relative efficiency and implementation. Their asymptotic relative efficiencies with respect to the multivariate local linear estimator are very attractive and increase exponentially as the number of covariates increases. Data-driven bandwidth selection procedures for the new estimators are straightforward given those for local linear regression. Since each of the proposed estimators has a simple form, implementation is easy and requires much less than, or about the same, effort. In addition, boundary corrections are automatic, as in the usual multivariate local linear regression.

AMS 1991 subject classification: 62G08; 62G05; 62G20

Keywords. Bandwidth selection, kernel smoothing, local linear regression, multiple regression, nonparametric regression, variance reduction.

1Department of Mathematics, National Taiwan University, Taipei 106, Taiwan. Email: cheng@math.ntu.edu.tw

2School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332-0160, USA. Email: peng@math.gatech.edu


1 Introduction

Nonparametric regression methods are useful for exploratory data analysis and for representing underlying features that cannot be well described by parametric regression models. In the past two decades, much attention has been paid to local polynomial modeling for nonparametric regression, which was first suggested by Stone (1977) and Cleveland (1979). Fan (1993) and many others investigated its theoretical and numerical properties. Ruppert and Wand (1994) established theoretical results for local polynomial regression with multiple covariates. Wand and Jones (1995), Fan and Gijbels (1996) and Simonoff (1996) provide excellent reviews. We consider reducing variance in multivariate local linear regression. This is of fundamental interest since local linear techniques are very useful and efficient in a wide range of fields, including survival analysis, longitudinal data analysis and time series modeling.

The nonparametric regression model with multiple covariates is as follows:

$$Y_i = m(X_i) + \nu^{1/2}(X_i)\,\varepsilon_i, \qquad (1.1)$$

where $\{X_i = (X_{i1}, \cdots, X_{id})^T\}_{i=1}^n$ are i.i.d. random vectors with density function $f$ and independent of $\varepsilon_1, \cdots, \varepsilon_n$, which are i.i.d. random variables with mean zero and variance one. The local linear estimator of the conditional mean function $m(\cdot)$ at $x = (x_1, \cdots, x_d)^T$ is $\widehat{\alpha}$, the solution for $\alpha$ to the following locally kernel weighted least squares problem

$$\min_{\alpha,\beta} \sum_{i=1}^n \bigl\{Y_i - \alpha - \beta^T(X_i - x)\bigr\}^2 \prod_{j=1}^d K\Bigl(\frac{X_{ij} - x_j}{b_j h}\Bigr), \qquad (1.2)$$

where $K(\cdot)$ is a one-dimensional kernel function, $h > 0$, and $b_i > 0$, $i = 1, \cdots, d$, are constants. Here $b_1, \cdots, b_d$ are tuning parameters which allow us to choose different bandwidths for each direction: in (1.2) the bandwidth for kernel smoothing along the $i$-th covariate is $b_i h$, $i = 1, \cdots, d$. The kernel weight function in (1.2) is taken as a product kernel and the bandwidth matrix $H^{1/2} = \mathrm{diag}\{hb_1, \cdots, hb_d\}$ is diagonal. From standard weighted least squares theory, the local linear estimator is given by

$$\widehat{m}(x) = e^T (X_x^T W_x X_x)^{-1} X_x^T W_x Y, \qquad (1.3)$$


where $e = (1, 0, \cdots, 0)^T$ is a $(d+1)$-vector, $Y = (Y_1, \cdots, Y_n)^T$,

$$W_x = \mathrm{diag}\Bigl\{\prod_{j=1}^d K\Bigl(\frac{X_{1j} - x_j}{b_j h}\Bigr), \cdots, \prod_{j=1}^d K\Bigl(\frac{X_{nj} - x_j}{b_j h}\Bigr)\Bigr\}, \qquad X_x = \begin{pmatrix} 1 & (X_1 - x)^T \\ \vdots & \vdots \\ 1 & (X_n - x)^T \end{pmatrix}.$$

Define

$$B = \mathrm{diag}\{b_1^2, \cdots, b_d^2\}, \qquad \mu_2(K) = \int s^2 K(s)\, ds, \qquad R(K) = \int K^2(s)\, ds,$$

$$M_2(x) = \begin{pmatrix} \dfrac{\partial^2}{\partial x_1 \partial x_1} m(x) & \cdots & \dfrac{\partial^2}{\partial x_1 \partial x_d} m(x) \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2}{\partial x_d \partial x_1} m(x) & \cdots & \dfrac{\partial^2}{\partial x_d \partial x_d} m(x) \end{pmatrix}.$$
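As a concrete reference for (1.2)–(1.3), the following is a minimal Python sketch of the pointwise multivariate local linear fit with a product kernel. The function names (`loclin`, `epanechnikov`) and the choice of the Epanechnikov kernel are illustrative assumptions, not part of the paper.

```python
import numpy as np

def epanechnikov(u):
    """One-dimensional Epanechnikov kernel K(u) = 0.75 (1 - u^2) on [-1, 1]."""
    return 0.75 * np.maximum(1.0 - u ** 2, 0.0)

def loclin(x, X, Y, h, b, kernel=epanechnikov):
    """Local linear estimate of m(x) with product kernel and bandwidths b_j * h.

    x : (d,) evaluation point
    X : (n, d) covariates, Y : (n,) responses
    h : scalar bandwidth, b : (d,) per-coordinate tuning constants
    """
    x = np.asarray(x, dtype=float)
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    # Product kernel weights: the diagonal of W_x in (1.3).
    w = np.prod(kernel((X - x) / (np.asarray(b) * h)), axis=1)
    # Design matrix X_x = [1, (X_i - x)^T].
    Xx = np.hstack([np.ones((n, 1)), X - x])
    # Weighted least squares: solve (X_x^T W X_x) beta = X_x^T W Y.
    A = Xx.T @ (w[:, None] * Xx)
    c = Xx.T @ (w * Y)
    beta = np.linalg.solve(A, c)
    return beta[0]  # e^T beta = intercept = \hat m(x)
```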

If $x$ is an interior point, Ruppert and Wand (1994) showed that, under regularity conditions,

$$E\bigl\{\widehat{m}(x) - m(x) \,\big|\, X_1, \cdots, X_n\bigr\} = \tfrac{1}{2} h^2 \mu_2(K)\, \mathrm{tr}\bigl\{B M_2(x)\bigr\} + o_p(h^2), \qquad (1.4)$$

$$\mathrm{Var}\bigl\{\widehat{m}(x) \,\big|\, X_1, \cdots, X_n\bigr\} = \frac{\nu(x)}{n h^d f(x) \prod_{i=1}^d b_i}\, R(K)^d \bigl\{1 + o_p(1)\bigr\}. \qquad (1.5)$$

Here, that $x$ is an interior point means that the set $S_{x,K} = \bigl\{(z_1, \cdots, z_d)^T : \prod_{j=1}^d K\bigl((z_j - x_j)/(b_j h)\bigr) > 0\bigr\}$, i.e. the support of the local kernel weight function in the local least squares problem (1.2), is entirely contained in the support of the design density $f$. Expressions (1.4) and (1.5) reveal the behavior of $\widehat{m}(x)$. The conditional variance has a slower rate of convergence as the number of covariates $d$ increases, while the conditional bias is of the same order $h^2$ for any value of $d$. Performance of $\widehat{m}(x)$ can be measured by the asymptotically optimal conditional mean squared error, i.e. the asymptotic conditional mean squared error minimized over all bandwidths. It has the order $n^{-4/(d+4)}$ and deteriorates for larger values of $d$. This is known as the curse of dimensionality problem. It occurs naturally because, with the same sample size, the design points $X_1, \cdots, X_n$ are much less dense in higher dimensions, so the variance inflates to a slower rate. Therefore, reducing the variance of multivariate local linear regression becomes very important, and it is investigated in the subsequent sections.
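For completeness, the rate $n^{-4/(d+4)}$ quoted above follows from balancing the squared bias in (1.4) against the variance in (1.5); a short derivation:

$$\mathrm{AMSE}(h) = c_1 h^4 + \frac{c_2}{n h^d}, \qquad c_1 = \Bigl[\tfrac{1}{2}\mu_2(K)\,\mathrm{tr}\{B M_2(x)\}\Bigr]^2, \quad c_2 = \frac{\nu(x) R(K)^d}{f(x) \prod_{i=1}^d b_i},$$

$$\frac{d}{dh}\,\mathrm{AMSE}(h) = 4 c_1 h^3 - \frac{d\, c_2}{n h^{d+1}} = 0 \;\Longrightarrow\; h_{\mathrm{opt}} = \Bigl(\frac{d\, c_2}{4 c_1 n}\Bigr)^{1/(d+4)} \asymp n^{-1/(d+4)}, \qquad \mathrm{AMSE}(h_{\mathrm{opt}}) \asymp n^{-4/(d+4)}.$$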

We propose two types of estimators of $m(x)$ that improve the multivariate local linear regression estimator $\widehat{m}(x)$ in terms of reducing the asymptotic conditional variance while keeping the same asymptotic conditional bias. The first variance reducing estimator is introduced in Section 2.1. It has the very appealing property of achieving variance reduction while requiring much less computational effort than the original local linear estimator, by a factor decreasing exponentially in $d$. Our second method, proposed in Section 2.2, is even more effective in the sense that its pointwise relative efficiency with respect to $\widehat{m}(x)$ is uniform and equals the best that the first method can achieve at only certain points. The way it is constructed is easily explained via the first method.

Section 2 introduces the variance reducing techniques and investigates the asymptotic conditional biases and variances. Bandwidth selection, the most crucial problem in nonparametric smoothing, is discussed in Section 3. Section 4 studies the asymptotic relative efficiencies and issues such as implementation and boundary corrections. A simulation study and a real application are presented in Section 5. All proofs are given in Section 6.

2 Methodology

2.1 Method I – Fixed Local Linear Constraints

Let $G = (G_1, \cdots, G_d)^T$ be a vector of odd integers, and for each $i = 1, \cdots, d$ let $\{\alpha_{i,j} : j = 1, \cdots, G_i\}$ be an equally spaced grid of points with bin width

$$\delta_i b_i h = \alpha_{i,j+1} - \alpha_{i,j} \qquad \text{for } j = 1, \cdots, G_i - 1,$$

where $\delta_i > 0$, $i = 1, \cdots, d$, are given tuning parameters. In practice, choosing $\delta_i \in [0.5, 1.5]$, $i = 1, \cdots, d$, is preferred for moderate sample sizes. Then

$$\Lambda = \bigl\{(\alpha_{1,u_1}, \cdots, \alpha_{d,u_d})^T : u_i = 1, \cdots, G_i \text{ for each } i = 1, \cdots, d\bigr\}$$

is a collection of grid points in the range $D = [\alpha_{1,1}, \alpha_{1,G_1}] \times \cdots \times [\alpha_{d,1}, \alpha_{d,G_d}] \subset \mathbb{R}^d$. Denote

$$D_v = [\alpha_{1,2v_1}, \alpha_{1,2v_1+2}] \times \cdots \times [\alpha_{d,2v_d}, \alpha_{d,2v_d+2}],$$

where $2v_i \in \{1, 3, \cdots, G_i - 2\}$ for $i = 1, \cdots, d$. Then the $D_v$'s form a partition of $D$, and for any fixed point $x = (x_1, \cdots, x_d)^T \in D$ there exist two vectors $v = (v_1, \cdots, v_d)^T$ and $r = (r_1, \cdots, r_d)^T$, where $r_i \in [-1, 1]$ for $i = 1, \cdots, d$, such that $x$ is expressed as

$$x_i = \alpha_{i,2v_i+1} + r_i \delta_i b_i h \qquad \text{for each } i = 1, \cdots, d. \qquad (2.1)$$

So the vector $v$ indicates the subset $D_v$ of $D$ that $x$ belongs to, and the vector $r$ marks the location of $x$ relative to the set of grid points that fall within $D_v$, i.e.,

$$\Lambda_v = \Lambda \cap D_v = \Lambda \cap [\alpha_{1,2v_1}, \alpha_{1,2v_1+2}] \times \cdots \times [\alpha_{d,2v_d}, \alpha_{d,2v_d+2}]$$
$$= \bigl\{x(k_1, \cdots, k_d) = (\alpha_{1,2v_1+k_1}, \cdots, \alpha_{d,2v_d+k_d})^T : (k_1, \cdots, k_d)^T \in \{0, 1, 2\}^d\bigr\}. \qquad (2.2)$$

The local linear estimator $\widehat{m}(x)$ involves an inverse operation associated with the local design matrix in which only a few design points have positive weights, see (1.2) and (1.3). That contributes much instability to $\widehat{m}(x)$. Therefore, our idea of variance reduction in local linear estimation of $m(x)$ at any $x \in D_v \subset D$ is the following. Given $\{\widehat{m}(\alpha) : \alpha \in \Lambda\}$, i.e. the local linear estimates evaluated over $\Lambda$, we form a linear combination of the values $\widehat{m}(\alpha)$, $\alpha \in \Lambda_v \subset \Lambda$, to be a new estimate of $m(x)$ instead of recomputing $\widehat{m}(x)$ at $x$ as in (1.3). In this way the resultant estimate is not allowed to differ too much from the values $\widehat{m}(\alpha)$, $\alpha \in \Lambda_v$, where $\Lambda_v$ is a degenerate subset of $D_v$, and its source of variability is restricted to their variances and covariances. In other words, our new estimator at any $x \in D_v$ is constrained by $\widehat{m}(\alpha)$, $\alpha \in \Lambda_v$, and in general will have a smaller variance than $\widehat{m}(x)$. Meanwhile, to ensure the asymptotic conditional bias is unchanged, the new estimator has to be subject to certain moment conditions. This can be accomplished by forcing the coefficients in the linear combination to fulfill the corresponding requirements.

Formally, put $\delta = (\delta_1, \cdots, \delta_d)^T$ and let

$$A_0(s) = \frac{s(s-1)}{2}, \qquad A_1(s) = 1 - s^2, \qquad A_2(s) = \frac{s(s+1)}{2}. \qquad (2.3)$$

Our first variance reduced estimator is defined as

$$\widetilde{m}(x; r, \delta) = \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\}\, \widehat{m}\bigl(x(k_1, \cdots, k_d)\bigr) = \sum_{x(k_1, \cdots, k_d) \in \Lambda_v} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\}\, \widehat{m}\bigl(x(k_1, \cdots, k_d)\bigr). \qquad (2.4)$$

That is, $\widetilde{m}(x; r, \delta)$ is a linear combination of $\widehat{m}(\alpha)$, $\alpha \in \Lambda_v \subset \Lambda$, the original local linear estimates at the $3^d$ grid points in $\Lambda_v \subset D_v$, where $x \in D_v$.
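Given the local linear estimates $\widehat{m}(\alpha)$ precomputed at the $3^d$ grid points of $\Lambda_v$, the combination (2.4) is only a few lines of code. A minimal sketch follows, in which the dictionary representation of the grid values and the function names are illustrative assumptions rather than the authors' implementation.

```python
import itertools
import numpy as np

def A(k, s):
    """A_0, A_1, A_2 from (2.3)."""
    if k == 0:
        return 0.5 * s * (s - 1.0)
    if k == 1:
        return 1.0 - s * s
    return 0.5 * s * (s + 1.0)

def method1_combine(mhat_on_grid, r):
    """m~(x; r, delta) = sum_k { prod_i A_{k_i}(r_i) } * \hat m(x(k_1,...,k_d)).

    mhat_on_grid : dict mapping (k_1,...,k_d) in {0,1,2}^d to \hat m(x(k_1,...,k_d))
    r            : (d,) location of x inside D_v, each r_i in [-1, 1], as in (2.1)
    """
    d = len(r)
    total = 0.0
    for k in itertools.product((0, 1, 2), repeat=d):
        coef = np.prod([A(k[i], r[i]) for i in range(d)])
        total += coef * mhat_on_grid[k]
    return total
```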

Since the functions $A_0(s)$, $A_1(s)$ and $A_2(s)$ satisfy $A_0(s) + A_1(s) + A_2(s) = 1$ for all $s \in [-1, 1]$, it is clear from (2.3) and (2.4) that $\widetilde{m}(x; r, \delta) = \widehat{m}(x)$ for all $x \in \Lambda$, and of course $\widetilde{m}(x; r, \delta)$ and $\widehat{m}(x)$ have exactly the same finite and large sample behaviors over $\Lambda$. So the interesting part is $D \setminus \Lambda$, where $\widetilde{m}(x; r, \delta)$ and $\widehat{m}(x)$ are not equal. The fact that $\widetilde{m}(x; r, \delta)$ has a smaller variance than $\widehat{m}(x)$ for all $x \in D \setminus \Lambda$ can be explained in two ways. First, compared to $\widehat{m}(x)$, $\widetilde{m}(x; r, \delta)$ is constructed using more data points, as the collection $\{\widehat{m}(\alpha) : \alpha \in \Lambda_v\}$ is based on observations with their $X$-values falling in a larger neighborhood of $x$. Another reason is that $\widetilde{m}(x; r, \delta)$ is constrained by the local linear estimates $\widehat{m}(\alpha)$, $\alpha \in \Lambda_v$, instead of being built from a separate local linear fitting or in any modified way, so its source of variation is restricted to the local linear fittings at $\alpha \in \Lambda_v$.

Concerning the bias of $\widetilde{m}(x; r, \delta)$ as an estimator of $m(x)$, the expected values of $\widehat{m}(\alpha)$, $\alpha \in \Lambda_v$, differ from $m(x)$ by more than the usual $h^2$-order bias of $\widehat{m}(x)$. For $\widetilde{m}(x; r, \delta)$ to have the same asymptotic bias as $\widehat{m}(x)$, noting that points in $\Lambda_v$ are all distant from $x$ at the order $h$, the coefficients in the linear combination defining $\widetilde{m}(x; r, \delta)$ have to sum up to one and cancel the extra order-$h$ and order-$h^2$ biases contributed by $\widehat{m}(\alpha)$, $\alpha \in \Lambda_v$. Since $A_0(s)$, $A_1(s)$ and $A_2(s)$ defined in (2.3) satisfy conditions (6.2), the coefficients $\prod_{i=1}^d A_{k_i}(r_i)$ in the linear combination defining $\widetilde{m}(x; r, \delta)$ fulfill these requirements.
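That the functions in (2.3) indeed satisfy the moment conditions (6.2) can be verified directly; for instance, the zeroth and first moments:

$$\sum_{j \in \{0,1,2\}} A_j(s) = \frac{s(s-1)}{2} + (1 - s^2) + \frac{s(s+1)}{2} = 1,$$

$$\sum_{j \in \{0,1,2\}} A_j(s)(j - 1 - s) = \frac{s(s-1)}{2}(-1-s) + (1 - s^2)(-s) + \frac{s(s+1)}{2}(1-s) = \frac{s - s^3}{2} - (s - s^3) + \frac{s - s^3}{2} = 0,$$

and an analogous expansion gives $\sum_{j} A_j(s)(j - 1 - s)^2 = 0$.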

Throughout this paper we assume the following regularity conditions:

(A1) The kernel $K$ is a compactly supported, bounded kernel such that $\mu_2(K) \in (0, \infty)$;

(A2) The point $x$ is in the support of $f$. At $x$, $\nu$ is continuous, $f$ is continuously differentiable and all second-order derivatives of $m$ are continuous. Also, $f(x) > 0$ and $\nu(x) > 0$;

(A3) The bandwidth $h$ satisfies $h = h(n) \to 0$ and $nh^d \to \infty$ as $n \to \infty$.

Our main result is as follows.

Theorem 1. Suppose that $x$ is any point in $D$ with the corresponding vectors $v$ and $r$ as in (2.1). Assume that every element of $\Lambda_v$ is an interior point. Then, under conditions (A1)–(A3), as $n \to \infty$,

$$E\bigl\{\widetilde{m}(x; r, \delta) - m(x) \,\big|\, X_1, \cdots, X_n\bigr\} = \tfrac{1}{2} h^2 \mu_2(K)\, \mathrm{tr}\bigl\{B M_2(x)\bigr\} + o_p(h^2), \qquad (2.5)$$

$$\mathrm{Var}\bigl\{\widetilde{m}(x; r, \delta) \,\big|\, X_1, \cdots, X_n\bigr\} = \frac{\nu(x)}{n h^d f(x) \prod_{i=1}^d b_i} \prod_{i=1}^d \bigl\{R(K) - r_i^2(1 - r_i^2) C(\delta_i)\bigr\} + o_p\bigl\{(n h^d)^{-1}\bigr\}, \qquad (2.6)$$

where

$$C(s) = \frac{3}{2} C(0, s) - 2 C\Bigl(\frac{1}{2}, s\Bigr) + \frac{1}{2} C(1, s), \qquad C(s, t) = \int K(u - st)\, K(u + st)\, du.$$
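As an illustration of the constants appearing in Theorem 1, the following sketch evaluates $R(K)$, $C(s, t)$ and $C(s)$ numerically for the Epanechnikov kernel via quadrature. The routine names are illustrative, and scipy is assumed to be available.

```python
import numpy as np
from scipy.integrate import quad

def K(u):
    """Epanechnikov kernel K(u) = 0.75 (1 - u^2) on [-1, 1]."""
    return 0.75 * max(1.0 - u * u, 0.0)

def R_K():
    """R(K) = int K(s)^2 ds (= 0.6 for the Epanechnikov kernel)."""
    return quad(lambda u: K(u) ** 2, -1.0, 1.0)[0]

def C2(s, t):
    """C(s, t) = int K(u - s t) K(u + s t) du."""
    return quad(lambda u: K(u - s * t) * K(u + s * t), -1.0 - s * t, 1.0 + s * t)[0]

def C(s):
    """C(s) = (3/2) C(0, s) - 2 C(1/2, s) + (1/2) C(1, s)."""
    return 1.5 * C2(0.0, s) - 2.0 * C2(0.5, s) + 0.5 * C2(1.0, s)

if __name__ == "__main__":
    delta, r = 1.0, 1.0 / np.sqrt(2.0)   # r_i = +-1/sqrt(2) maximizes r^2 (1 - r^2)
    print("R(K)                       =", R_K())
    print("C(delta)                   =", C(delta))
    print("variance factor reduced to =", R_K() - r ** 2 * (1.0 - r ** 2) * C(delta))
```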

Hence, from (1.4), (1.5), (2.5) and (2.6), $\widehat{m}(x)$ and $\widetilde{m}(x; r, \delta)$ have the same asymptotic conditional bias, and their asymptotic conditional variances differ only by the constant factors $\prod_{i=1}^d R(K)$ and $\prod_{i=1}^d \bigl\{R(K) - r_i^2(1 - r_i^2) C(\delta_i)\bigr\}$. Therefore, comparison between the asymptotic conditional variances depends on the vector $\delta$ of the bin widths for $\Lambda$ and the vector $r$ indicating the location of $x$ in $D_v \subset D$, but not on the vector $v$. The two quantities $\prod_{i=1}^d R(K)$ and $\prod_{i=1}^d \bigl\{R(K) - r_i^2(1 - r_i^2) C(\delta_i)\bigr\}$ are equal when $r_i^2(1 - r_i^2) C(\delta_i) = 0$ for $i = 1, \cdots, d$. Concerning $\delta$, $C(\delta_1) = \cdots = C(\delta_d) = 0$ if and only if $\delta_1 = \cdots = \delta_d = 0$, which corresponds to $\widetilde{m}(x; r, \delta) = \widehat{m}(x)$ for all $x \in D$ and is not meaningful at all. The case that some $\delta_i$ are zero is not of any interest either, since in that case $\Lambda$ is degenerate in the sense that it does not span $D$. So we are only interested in the case where all the bin widths are positive:

$$\delta_i > 0, \qquad i = 1, \cdots, d.$$

This condition is assumed throughout this paper. As for $r$, note that $r_i^2(1 - r_i^2) = 0$ if and only if $r_i \in \{-1, 0, 1\}$. Under the assumption that $\delta_i > 0$, $i = 1, \cdots, d$, the two asymptotic conditional variances are equal if and only if $r_i \in \{-1, 0, 1\}$ for all $i = 1, \cdots, d$, which corresponds to $x \in \Lambda$; this coincides with the fact that $\widetilde{m}(x; r, \delta) = \widehat{m}(x)$ for $x \in \Lambda$. On the other hand, $r_i^2(1 - r_i^2) > 0$ for all $r_i \in [-1, 1] \setminus \{-1, 0, 1\}$ and, for commonly used kernels, such as the Epanechnikov kernel $K(u) = 0.75(1 - u^2) I(-1 < u < 1)$ and the Normal kernel $K(u) = \exp(-u^2/2)/\sqrt{2\pi}$, $C(s) > 0$ for all $s > 0$; see Cheng, Peng and Wu (2005). Hence we have, for all $x \in D \setminus \Lambda$,

$$\mathrm{Var}\bigl\{\widetilde{m}(x; r, \delta) \,\big|\, X_1, \cdots, X_n\bigr\} < \mathrm{Var}\bigl\{\widehat{m}(x) \,\big|\, X_1, \cdots, X_n\bigr\}$$

asymptotically.

Given $B$ and $\delta$, the ratio of the asymptotic conditional variance of $\widehat{m}(x)$ to that of $\widetilde{m}(x; r, \delta)$,

$$\prod_{i=1}^d \frac{R(K)}{R(K) - r_i^2(1 - r_i^2) C(\delta_i)},$$

as well as the pointwise asymptotic relative efficiency of $\widetilde{m}(x; r, \delta)$ with respect to $\widehat{m}(x)$, varies with $x$ within $D_v$. Irrespective of the vector $v$, $\widetilde{m}(x; r, \delta)$ attains the most relative variance reduction when $x$ has its associated vector $r = (r_1, \cdots, r_d)^T$ taking the values $r_i = \pm\sqrt{1/2}$ for all $i = 1, \cdots, d$. That is, within each $D_v$, $\widetilde{m}(x; r, \delta)$ is asymptotically most efficient relative to $\widehat{m}(x)$ at the $2^d$ points

$$x = (\alpha_{1,2v_1}, \cdots, \alpha_{d,2v_d})^T + h\bigl((1 + r_1)\delta_1 b_1, \cdots, (1 + r_d)\delta_d b_d\bigr)^T, \qquad r \in \{-1/\sqrt{2}, 1/\sqrt{2}\}^d. \qquad (2.7)$$

This maximum is uniform across all such points and over all subsets $D_v$ of $D$. The asymptotic relative efficiency of $\widetilde{m}(x; r, \delta)$ with respect to $\widehat{m}(x)$ is investigated in further detail in Section 4.1.

2.2 Method II – Varying Local Linear Constraints

As observed in Section 2.1, our first variance reduced estimator $\widetilde{m}(x; r, \delta)$ improves $\widehat{m}(x)$ in a non-uniform manner as $x$ varies in $D$, and the same best pointwise relative variance reduction occurs at the $2^d$ points given in (2.7) in each subset $D_v$ of $D$. Our second variance reducing estimator is constructed to achieve this best relative efficiency everywhere. The approach is that, fixing any vector $r \in \{-1/\sqrt{2}, 1/\sqrt{2}\}^d$ and for each $x$, we evaluate the usual local linear estimates at $3^d$ points surrounding $x$ and then linearly combine these estimates to form a new estimator in the same way as in Section 2.1. But now these $3^d$ neighboring points are determined by $x$ and $r$, and hence differ as $x$ changes.

Suppose that both $B$ and $\delta$ are given positive vectors. Fix any $r = (r_1, \cdots, r_d)^T \in \{-1/\sqrt{2}, 1/\sqrt{2}\}^d$. Then, for every $x$ where $m(x)$ is to be estimated, let

$$\alpha_{x,r} = x - h\bigl((1 + r_1)\delta_1 b_1, \cdots, (1 + r_d)\delta_d b_d\bigr)^T,$$
$$\Lambda_{x,r} = \bigl\{x_{k_1, \cdots, k_d}(x; r) = \alpha_{x,r} + h\,(k_1 \delta_1 b_1, \cdots, k_d \delta_d b_d)^T : (k_1, \cdots, k_d)^T \in \{0, 1, 2\}^d\bigr\}.$$

Define a variance reduced estimator of $m(x)$ as

$$\widetilde{m}_r(x; \delta) = \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\}\, \widehat{m}\bigl(x_{k_1, \cdots, k_d}(x; r)\bigr) = \sum_{x_{k_1, \cdots, k_d}(x; r) \in \Lambda_{x,r}} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\}\, \widehat{m}\bigl(x_{k_1, \cdots, k_d}(x; r)\bigr).$$


Thus $\widetilde{m}_r(x; \delta)$ is a linear combination of the local linear estimates over $\Lambda_{x,r}$, just as $\widetilde{m}(x; r, \delta)$ is a linear combination of the local linear estimates at $\Lambda_v$, and the coefficients in the two linear combinations are parallel. These facts explain clearly that $\widetilde{m}_r(x; \delta)$ enjoys the same variance reducing property as $\widetilde{m}(x; r, \delta)$. The main difference between $\widetilde{m}_r(x; \delta)$ and $\widetilde{m}(x; r, \delta)$ is that the set $\Lambda_{x,r}$ in the definition of $\widetilde{m}_r(x; \delta)$ varies as $x$ changes while $r \in \{-1/\sqrt{2}, 1/\sqrt{2}\}^d$ is fixed, whereas the grid $\Lambda_v$ defining $\widetilde{m}(x; r, \delta)$ is fixed for all $x \in D_v = [\alpha_{1,2v_1}, \alpha_{1,2v_1+2}] \times \cdots \times [\alpha_{d,2v_d}, \alpha_{d,2v_d+2}]$ and $r$ depends on $x$; see also (2.7). Again, $\delta_1, \cdots, \delta_d$ are given tuning parameters and, for moderate sample sizes, choosing $\delta_i \in [0.5, 1.5]$, $i = 1, \cdots, d$, is preferred. If the support of $X$ is bounded, say $\mathrm{Supp}(X) = [0, 1]^d$, then to keep $\Lambda_{x,r}$ within $\mathrm{Supp}(X)$, in practice we take

$$\delta_i(x_i) = \min\Bigl\{\delta_i,\; \frac{x_i}{(1 + \sqrt{1/2})\,h},\; \frac{1 - x_i}{(1 + \sqrt{1/2})\,h}\Bigr\}, \qquad i = 1, \cdots, d,$$

for given $\delta_1, \cdots, \delta_d$.
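A minimal sketch of Method II at a single point: for a fixed $r \in \{-1/\sqrt{2}, 1/\sqrt{2}\}^d$, construct the moving grid $\Lambda_{x,r}$ (with the boundary-adjusted bin widths above, assuming $\mathrm{Supp}(X) = [0,1]^d$), evaluate the local linear estimator at its $3^d$ points and combine. The pointwise routine `loclin` is assumed to be something like the earlier sketch; all names are illustrative.

```python
import itertools
import numpy as np

def A(k, s):
    """A_0, A_1, A_2 from (2.3)."""
    return (0.5 * s * (s - 1.0), 1.0 - s * s, 0.5 * s * (s + 1.0))[k]

def method2_estimate(x, r, delta, X, Y, h, b, loclin):
    """m~_r(x; delta) built from the varying grid Lambda_{x,r}."""
    x, r, delta, b = map(np.asarray, (x, r, delta, b))
    d = x.size
    # Boundary-adjusted bin widths delta_i(x_i), assuming Supp(X) = [0, 1]^d.
    c = 1.0 + np.sqrt(0.5)
    delta = np.minimum(delta, np.minimum(x / (c * h), (1.0 - x) / (c * h)))
    # Lower-left corner alpha_{x,r} of the moving grid.
    alpha = x - h * (1.0 + r) * delta * b
    est = 0.0
    for k in itertools.product((0, 1, 2), repeat=d):
        point = alpha + h * np.array(k) * delta * b
        coef = np.prod([A(k[i], r[i]) for i in range(d)])
        est += coef * loclin(point, X, Y, h, b)
    return est
```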

The following theorem follows immediately from Theorem 1.

Theorem 2. Suppose that $r$ is any given vector in $\{-1/\sqrt{2}, 1/\sqrt{2}\}^d$ and every element of $\Lambda_{x,r}$ is an interior point. Then, under conditions (A1)–(A3), as $n \to \infty$,

$$E\bigl\{\widetilde{m}_r(x; \delta) - m(x) \,\big|\, X_1, \cdots, X_n\bigr\} = \tfrac{1}{2} h^2 \mu_2(K)\, \mathrm{tr}\bigl\{B M_2(x)\bigr\} + o_p(h^2), \qquad (2.8)$$

$$\mathrm{Var}\bigl\{\widetilde{m}_r(x; \delta) \,\big|\, X_1, \cdots, X_n\bigr\} = \frac{\nu(x)}{n h^d f(x) \prod_{i=1}^d b_i} \prod_{i=1}^d \bigl\{R(K) - C(\delta_i)/4\bigr\}\, \bigl\{1 + o_p(1)\bigr\}. \qquad (2.9)$$

Therefore the asymptotic conditional biases of $\widetilde{m}_r(x; \delta)$ and $\widehat{m}(x)$ are again the same, and the ratio of the asymptotic conditional variances is constant over all values of $x$ satisfying the conditions in Theorem 2.

There are $2^d$ such estimators, indexed by $r \in \{-1/\sqrt{2}, 1/\sqrt{2}\}^d$. For a particular value of $r$, since the set $\Lambda_{x,r}$ is skewed around $x$, i.e. the $3^d$ points in $\Lambda_{x,r}$ are asymmetrically distributed about $x$, the finite sample bias of $\widetilde{m}_r(x; \delta)$ may be more than the asymptotic prediction. A way to avoid potential finite sample biases arising from this skewness is to take an average of all the $2^d$ estimates. That is, given $B$ and $\delta$, the averaged variance reduced estimator is defined as

$$\bar{m}(x; \delta) = 2^{-d} \sum_{r \in \{-1/\sqrt{2}, 1/\sqrt{2}\}^d} \widetilde{m}_r(x; \delta),$$

for every $x$. The following theorem is proved in Section 6.
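A corresponding sketch of the averaged estimator, assuming the Method II routine `method2_estimate` from the previous sketch is in scope (again, an illustrative sketch rather than the authors' code):

```python
import itertools
import numpy as np

def averaged_estimate(x, delta, X, Y, h, b, loclin):
    """\bar m(x; delta): average of the 2^d Method II estimates over the sign
    choices r in {-1/sqrt(2), 1/sqrt(2)}^d."""
    d = len(x)
    vals = [method2_estimate(x, np.array(signs) / np.sqrt(2.0), delta, X, Y, h, b, loclin)
            for signs in itertools.product((-1.0, 1.0), repeat=d)]
    return float(np.mean(vals))
```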


Theorem 3. Suppose that every element of $\Lambda_{x,r}$ is an interior point for every $r \in \{-1/\sqrt{2}, 1/\sqrt{2}\}^d$. Then, under conditions (A1)–(A3), as $n \to \infty$,

$$E\bigl\{\bar{m}(x; \delta) - m(x) \,\big|\, X_1, \cdots, X_n\bigr\} = \tfrac{1}{2} h^2 \mu_2(K)\, \mathrm{tr}\bigl\{B M_2(x)\bigr\} + o_p(h^2), \qquad (2.10)$$

$$\mathrm{Var}\bigl\{\bar{m}(x; \delta) \,\big|\, X_1, \cdots, X_n\bigr\} = \frac{\nu(x)}{n h^d f(x) \prod_{i=1}^d b_i} \prod_{i=1}^d \Bigl\{R(K) - \frac{C(\delta_i)}{4} - \frac{D(\delta_i)}{2}\Bigr\} + o_p\bigl\{(n h^d)^{-1}\bigr\}, \qquad (2.11)$$

where

$$D(s) = C(0, s) - \frac{1}{4} C(s) - \frac{1 + \sqrt{2}}{4} C\Bigl(\frac{\sqrt{2} - 1}{2}, s\Bigr) - \frac{3 + 2\sqrt{2}}{16} C\Bigl(\frac{2 - \sqrt{2}}{2}, s\Bigr) - \frac{1}{8} C\Bigl(\frac{\sqrt{2}}{2}, s\Bigr) - \frac{1 - \sqrt{2}}{4} C\Bigl(\frac{\sqrt{2} + 1}{2}, s\Bigr) - \frac{3 - 2\sqrt{2}}{16} C\Bigl(\frac{\sqrt{2} + 2}{2}, s\Bigr).$$

The quantity $D(\delta_i)$ in (2.11) is always nonnegative for $\delta_i \geq 0$; see Cheng, Peng and Wu (2005). Therefore, from (2.8)–(2.11), besides being asymptotically equally biased as $\widetilde{m}_r(x; \delta)$, $\bar{m}(x; \delta)$ has an even smaller asymptotic conditional variance.
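The expression for $D(s)$ is cumbersome to evaluate by hand; a minimal numerical sketch follows, reusing a pairwise-overlap integral $C(s, t)$ such as the quadrature routine sketched after Theorem 1 (passed in here as `C2`). The names are illustrative.

```python
import numpy as np

def D(s, C2):
    """D(s) as displayed in Theorem 3, with C2(a, s) the overlap integral C(a, s)."""
    sq2 = np.sqrt(2.0)
    Cfun = lambda a: C2(a, s)                                   # shorthand for C(a, s)
    Cs = 1.5 * Cfun(0.0) - 2.0 * Cfun(0.5) + 0.5 * Cfun(1.0)    # C(s)
    return (Cfun(0.0)
            - 0.25 * Cs
            - (1.0 + sq2) / 4.0 * Cfun((sq2 - 1.0) / 2.0)
            - (3.0 + 2.0 * sq2) / 16.0 * Cfun((2.0 - sq2) / 2.0)
            - 0.125 * Cfun(sq2 / 2.0)
            - (1.0 - sq2) / 4.0 * Cfun((sq2 + 1.0) / 2.0)
            - (3.0 - 2.0 * sq2) / 16.0 * Cfun((sq2 + 2.0) / 2.0))
```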

3 Bandwidth Selection

The asymptotically optimal local bandwidth that minimizes the asymptotic conditional mean squared error of $\widehat{m}(x)$ is

$$h_0(x) = \Bigl[\frac{\nu(x) R(K)^d}{n f(x) \mu_2(K)^2\, \mathrm{tr}\{B M_2(x)\}^2 \prod_{i=1}^d b_i}\Bigr]^{1/(d+4)}, \qquad (3.1)$$

and those for $\widetilde{m}(x; r, \delta)$, $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ are respectively

$$h_1(x) = B_1(x, r; \delta)\, h_0(x), \qquad h_2(x) = B_2(\delta)\, h_0(x), \qquad h_3(x) = B_3(\delta)\, h_0(x), \qquad (3.2)$$

where

$$B_1(x, r; \delta) = \Bigl[\prod_{i=1}^d \frac{R(K) - r_i^2(1 - r_i^2) C(\delta_i)}{R(K)}\Bigr]^{1/(d+4)},$$
$$B_2(\delta) = \Bigl[\prod_{i=1}^d \frac{R(K) - C(\delta_i)/4}{R(K)}\Bigr]^{1/(d+4)}, \qquad B_3(\delta) = \Bigl[\prod_{i=1}^d \frac{R(K) - C(\delta_i)/4 - D(\delta_i)/2}{R(K)}\Bigr]^{1/(d+4)}.$$


Many popular and reliable data-driven bandwidth selection rules for kernel smoothing are constructed based on the asymptotically optimal bandwidth expressions. Note that the $C(\delta_i)$'s and $D(\delta_i)$'s in (3.2), relating the asymptotically optimal local bandwidths, are all constants determined by $\delta$ and the kernel $K$. Then an important implication of (3.2) is that data-based local bandwidth selection for any of the proposed estimators $\widetilde{m}(x; r, \delta)$, $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ is simply a matter of adjusting any local bandwidth selector for $\widehat{m}(x)$ by multiplying by the constant factors accordingly.

Asymptotically optimal global bandwidths for a kernel estimator $\bar{m}(x; h)$ of $m(x)$ based on bandwidth $h$ are usually derived from global measures of discrepancy such as

$$\mathrm{IAMSE}(\bar{m}; h) = \int_D \mathrm{AMSE}\bigl\{\bar{m}(x; h)\bigr\}\, f(x)\, w(x)\, dx, \qquad (3.3)$$

where $\mathrm{AMSE}\{\bar{m}(x; h)\}$ is the asymptotic conditional mean squared error of $\bar{m}(x; h)$ and $w(x)$ is a known weight function. Asymptotically optimal global bandwidths for $\widetilde{m}(x; r, \delta)$ and $\widehat{m}(x)$ that minimize this IAMSE measure with respect to $h$ usually do not admit the simple relation that the local counterparts have in (3.2). The reason is that the relative variance reduction achieved by $\widetilde{m}(x; r, \delta)$ is non-uniform over every $D_v$. However, suppose that the conditional variance function $\nu(x)$ has a low curvature within each $D_v$. Then, since the bin widths $\delta_i b_i h$, $i = 1, \cdots, d$, of $\Lambda$ are all of order $h$, a sensible data-driven global bandwidth for $\widetilde{m}(x; r, \delta)$ can be obtained by multiplying one for $\widehat{m}(x)$ by the constant factor

$$\Bigl[\int_D \prod_{i=1}^d \bigl\{R(K) - r_i^2(1 - r_i^2) C(\delta_i)\bigr\}\, w(x)\, dx \Big/ \int_D R(K)^d\, w(x)\, dx\Bigr]^{1/(d+4)}, \qquad (3.4)$$

see (1.4), (1.5), (2.5) and (2.6). Hence, the task of automatically choosing a global bandwidth for the proposed estimator $\widetilde{m}(x; r, \delta)$ can be done analogously, or at least with not much more difficulty, as for $\widehat{m}(x)$.

Denote by $h_0$, $h_2$ and $h_3$ the asymptotically optimal global bandwidths of $\widehat{m}(\cdot)$, $\widetilde{m}_r(\cdot; \delta)$ and $\bar{m}(\cdot; \delta)$. They are defined as the bandwidths minimizing the global measure in (3.3) for the respective estimators, with $D$ being the set of points where the estimators are all defined. Then, from (2.5), (2.6), (2.8)–(2.11),

$$h_2 = B_2(\delta)\, h_0, \qquad h_3 = B_3(\delta)\, h_0, \qquad (3.5)$$

where $B_2(\delta)$ and $B_3(\delta)$ are exactly as given in (3.2). The constants $B_2(\delta)$ and $B_3(\delta)$ depend only on the known kernel function $K$ and the given bin width vector $\delta$. Hence, automatic global bandwidth selection procedures for both $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ can be easily established by adjusting those for the usual multivariate local linear estimator $\widehat{m}(x)$ by the appropriate constants. For example, Ruppert (1997) and Yang and Tschernig (1999) established automatic bandwidth procedures for multivariate local linear regression. Thus the new estimators $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ both enjoy the appealing advantage that they achieve a great extent of variance reduction and improvement of efficiency without introducing any further unknown factors into bandwidth selection, the most important issue in nonparametric smoothing.
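In code, the adjustment in (3.5) is a one-line rescaling once the constants are available; a sketch, assuming numerical routines `R_K()`, `C(s)` and `D(s)` such as the quadrature sketches given earlier (the selector producing `h0` is whatever data-driven procedure one already uses for the local linear estimator):

```python
import numpy as np

def B2(delta, R_K, C):
    """B_2(delta) = [ prod_i {R(K) - C(delta_i)/4} / R(K) ]^{1/(d+4)}."""
    delta = np.asarray(delta, dtype=float)
    d = delta.size
    factors = (R_K() - np.array([C(s) for s in delta]) / 4.0) / R_K()
    return float(np.prod(factors) ** (1.0 / (d + 4)))

def B3(delta, R_K, C, D):
    """B_3(delta) = [ prod_i {R(K) - C(delta_i)/4 - D(delta_i)/2} / R(K) ]^{1/(d+4)}."""
    delta = np.asarray(delta, dtype=float)
    d = delta.size
    factors = (R_K()
               - np.array([C(s) for s in delta]) / 4.0
               - np.array([D(s) for s in delta]) / 2.0) / R_K()
    return float(np.prod(factors) ** (1.0 / (d + 4)))

# Usage, as in (3.5): h2 = B2(delta, R_K, C) * h0 ; h3 = B3(delta, R_K, C, D) * h0.
```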

All the above adjustments are rather easy compared to the bandwidth selection problems created by other modifications of local linear estimation, which usually change both the asymptotic conditional bias and variance. The reason for this major advantage is that the pointwise asymptotic conditional bias is unaffected everywhere, no matter which of the variance reduction methods is applied. An even more promising feature of both $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ is that each has its pointwise asymptotic variance equal to the same constant multiple of that of $\widehat{m}(x)$ at all $x$.

4 Comparisons

This section examines in detail the impact of the new estimators on several essential issues in nonparametric smoothing, including relative efficiency, implementation and boundary effects.

4.1 Relative Efficiencies

The pointwise asymptotically optimal conditional mean squared error of $\widehat{m}(x)$, achieved by $h_0(x)$ in (3.1), is

$$\mathrm{AMSE}\bigl\{\widehat{m}(x); h_0(x)\bigr\} = \frac{5}{4}\Bigl\{\frac{\nu(x) R(K)^d}{n f(x) \prod_{i=1}^d b_i}\Bigr\}^{4/(d+4)} \bigl[\mu_2(K)^2\, \mathrm{tr}\{B M_2(x)\}^2\bigr]^{d/(d+4)}. \qquad (4.1)$$


The asymptotically optimal local bandwidths in (3.2) respectively yield the pointwise asymptotically optimal mean squared errors of $\widetilde{m}(x; r, \delta)$, $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ as

$$\mathrm{AMSE}\bigl\{\widetilde{m}(x; r, \delta); h_1(x)\bigr\} = B_1(x, r; \delta)^4\, \mathrm{AMSE}\bigl\{\widehat{m}(x); h_0(x)\bigr\},$$
$$\mathrm{AMSE}\bigl\{\widetilde{m}_r(x; \delta); h_2(x)\bigr\} = B_2(\delta)^4\, \mathrm{AMSE}\bigl\{\widehat{m}(x); h_0(x)\bigr\},$$
$$\mathrm{AMSE}\bigl\{\bar{m}(x; \delta); h_3(x)\bigr\} = B_3(\delta)^4\, \mathrm{AMSE}\bigl\{\widehat{m}(x); h_0(x)\bigr\}.$$

Thus, given $B$ and $\delta$, the pointwise asymptotic relative efficiencies of the new estimators with respect to $\widehat{m}(x)$ are

$$\mathrm{Eff}\bigl\{\widetilde{m}(x; r, \delta), \widehat{m}(x)\bigr\} = B_1(x, r; \delta)^{-4}, \qquad (4.2)$$
$$\mathrm{Eff}\bigl\{\widetilde{m}_r(x; \delta), \widehat{m}(x)\bigr\} = B_2(\delta)^{-4}, \qquad \mathrm{Eff}\bigl\{\bar{m}(x; \delta), \widehat{m}(x)\bigr\} = B_3(\delta)^{-4}, \qquad (4.3)$$

for every $x$. Similarly, taking the asymptotically optimal global bandwidths in (3.5) yields the global asymptotic relative efficiencies of $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ with respect to $\widehat{m}(x)$:

$$\mathrm{Eff}\bigl\{\widetilde{m}_r(\cdot; \delta), \widehat{m}(\cdot)\bigr\} = B_2(\delta)^{-4}, \qquad \mathrm{Eff}\bigl\{\bar{m}(\cdot; \delta), \widehat{m}(\cdot)\bigr\} = B_3(\delta)^{-4}. \qquad (4.4)$$

Therefore, for each of $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$, the local and global asymptotic relative efficiencies compared to $\widehat{m}(x)$ are the same and depend only on $\delta$ and $K$. The global asymptotic relative efficiency of $\widetilde{m}(x; r, \delta)$ with respect to $\widehat{m}(x)$ has a more complex form since the pointwise relative variance reduction is non-uniform. However, it is well approximated by a simple expression, not elaborated here, arising from the constant adjustment (3.4) to the global bandwidth.

Rewrite the pointwise and global asymptotic relative efficiencies $B_2(\delta)^{-4}$ and $B_3(\delta)^{-4}$ in (4.3) and (4.4) as

$$B_2(\delta)^{-4} = \prod_{i=1}^d S(\delta_i)^{-4/(d+4)}, \qquad B_3(\delta)^{-4} = \prod_{i=1}^d T(\delta_i)^{-4/(d+4)},$$

where $S(u) = \bigl\{R(K) - C(u)/4\bigr\}/R(K)$ and $T(u) = \bigl\{R(K) - C(u)/4 - D(u)/2\bigr\}/R(K)$.

Consider the simplest case that $\delta_1 = \cdots = \delta_d = \delta_0$. Then

$$\mathrm{Eff}\bigl\{\widetilde{m}_r(x; \delta), \widehat{m}(x)\bigr\} = \mathrm{Eff}\bigl\{\widetilde{m}_r(\cdot; \delta), \widehat{m}(\cdot)\bigr\} = S(\delta_0)^{-4d/(d+4)},$$
$$\mathrm{Eff}\bigl\{\bar{m}(x; \delta), \widehat{m}(x)\bigr\} = \mathrm{Eff}\bigl\{\bar{m}(\cdot; \delta), \widehat{m}(\cdot)\bigr\} = T(\delta_0)^{-4d/(d+4)}. \qquad (4.5)$$


For commonly used kernels, $S(\delta_0)^{-1}$ is greater than one for all $\delta_0 > 0$, increases in $\delta_0$, and has an upper limit $8/5$. Also, $T(\delta_0)^{-1}$ is greater than one for all $\delta_0 > 0$, increases in $\delta_0$, and has an upper limit $16/5$. If $K$ is supported on $[-1, 1]$, the respective upper limits occur when $\delta_0 = 2$ and $\delta_0 = 2/(\sqrt{2} - 1)$, in which case the variances of the proposed estimators consist of only variances and no covariances of the local linear estimates in the linear combinations. Then, from (4.5), an important property of our variance reduction techniques is that the relative efficiency improvements increase as the dimensionality $d$ of the covariates $X = (X_1, \cdots, X_d)^T$ grows. Table 1 contains some values of $S(\delta_0)^{-4d/(d+4)}$ and $T(\delta_0)^{-4d/(d+4)}$ when $K$ is the Epanechnikov kernel.

Table 1: Relative efficiencies $\mathrm{Eff}\{\widetilde{m}_r(\cdot; \delta), \widehat{m}(\cdot)\}$ (upper half) and $\mathrm{Eff}\{\bar{m}(\cdot; \delta), \widehat{m}(\cdot)\}$ (lower half) when $\delta_1 = \cdots = \delta_d = \delta_0$ and when $K$ is the Epanechnikov kernel.

δ0           d=1     d=2     d=3     d=4     d=5
0.6          1.064   1.110   1.143   1.169   1.189
0.8          1.088   1.151   1.198   1.235   1.264
1.0          1.113   1.195   1.257   1.306   1.345
1.2          1.166   1.292   1.391   1.469   1.533
1.6          1.293   1.535   1.735   1.902   2.043
2.0          1.456   1.871   2.238   2.560   2.842

0.6          1.089   1.153   1.201   1.238   1.268
0.8          1.121   1.210   1.278   1.332   1.375
1.0          1.168   1.295   1.394   1.473   1.538
1.2          1.237   1.425   1.576   1.700   1.804
1.6          1.393   1.737   2.033   2.288   2.509
2.0          1.580   2.142   2.663   3.136   3.560
2/(√2 − 1)   2.536   4.716   7.345   10.240  13.260
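The entries of Table 1 can be reproduced numerically from the definitions of $S$ and $T$; a sketch, again assuming the quadrature routines `R_K()`, `C(s)` and `D(s)` from the earlier sketches:

```python
def table1_entry(delta0, d, R_K, C, D, averaged=False):
    """Eff = S(delta0)^{-4d/(d+4)} (upper half) or T(delta0)^{-4d/(d+4)} (lower half)."""
    R = R_K()
    if averaged:
        factor = (R - C(delta0) / 4.0 - D(delta0) / 2.0) / R   # T(delta0)
    else:
        factor = (R - C(delta0) / 4.0) / R                     # S(delta0)
    return factor ** (-4.0 * d / (d + 4.0))

# e.g. table1_entry(2.0, 1, R_K, C, D) should be close to 1.456, and
# table1_entry(2.0 / (2 ** 0.5 - 1.0), 5, R_K, C, D, averaged=True) close to 13.260.
```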

4.2 Implementation

Since there are no parametric structural assumptions on the unknown regression function $m(\cdot)$ in nonparametric smoothing, nonparametric regression estimators are usually evaluated over a range of $x$-values in practice. Consider computing estimators of $m(x)$ over a fine grid $\Lambda_0$ to provide a sensible picture of the regression function $m(\cdot)$ over a range of interest. Then the local linear estimator $\widehat{m}(x)$ requires performing the local least squares fitting (1.2) at every $x \in \Lambda_0$.

To compute our first variance reduced estimator $\widetilde{m}(x; r, \delta)$ for all $x \in \Lambda_0$, one can proceed as follows. Form $D$ and $\Lambda$ as in Section 2.1 such that $D$ covers $\Lambda_0$ and $\Lambda$ is coarser than $\Lambda_0$, and then compute the estimates $\widetilde{m}(x; r, \delta)$, $x \in \Lambda_0$, using $\widehat{m}(\alpha)$, $\alpha \in \Lambda$, as described in Section 2.1. One appealing feature of $\widetilde{m}(x; r, \delta)$ is that it is very easy to implement since the coefficients in the linear combination are just products of the simple one-dimensional functions $A_0$, $A_1$ and $A_2$. Therefore, implemented in this way, $\widetilde{m}(x; r, \delta)$ yields a considerable saving of computational time compared to $\widehat{m}(x)$. This is a particularly important advantage in the multivariate case. For example, if the number of grid points in each dimension of $\Lambda$ is a fixed proportion of that in $\Lambda_0$, then the number of evaluations of the local linear estimator is reduced exponentially as the dimension $d$ increases. Hence, the better asymptotic conditional mean squared error performance of $\widetilde{m}(x; r, \delta)$ compared to $\widehat{m}(x)$ comes at virtually no cost but rather a great saving of computational effort.
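A sketch of this evaluation strategy follows: precompute $\widehat{m}$ on the coarse grid $\Lambda$ once, then obtain $\widetilde{m}(x; r, \delta)$ at any evaluation point by the cheap combination (2.4). The zero-based grid indexing and the helper arguments `loclin` and `A` are illustrative assumptions, with `loclin` and `A` as in the earlier sketches.

```python
import itertools
import numpy as np

def precompute_on_grid(axes, X, Y, h, b, loclin):
    """axes : list of d 1-d arrays of grid coordinates (each of odd length).
    Returns a d-dimensional array of \hat m over the coarse grid Lambda."""
    shape = tuple(len(a) for a in axes)
    mhat = np.empty(shape)
    for idx in itertools.product(*[range(s) for s in shape]):
        point = np.array([axes[i][idx[i]] for i in range(len(axes))])
        mhat[idx] = loclin(point, X, Y, h, b)
    return mhat

def method1_on_fine_point(x, axes, mhat, A):
    """Combine the precomputed values as in (2.4) for one evaluation point x."""
    d = len(axes)
    left, r = [], []
    for i in range(d):
        w = axes[i][1] - axes[i][0]                  # bin width delta_i b_i h
        li = int(np.clip(2 * ((x[i] - axes[i][0]) // (2 * w)), 0, len(axes[i]) - 3))
        left.append(li)                              # left index of the cell D_v
        r.append((x[i] - axes[i][li + 1]) / w)       # location r_i as in (2.1)
    est = 0.0
    for k in itertools.product((0, 1, 2), repeat=d):
        coef = np.prod([A(k[i], r[i]) for i in range(d)])
        est += coef * mhat[tuple(left[i] + k[i] for i in range(d))]
    return est
```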

The estimators $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ are more computationally involved. For example, in order to evaluate $\widetilde{m}_r(x; \delta)$ at each $x$, one needs to calculate the $3^d$ local linear estimates $\widehat{m}(\alpha)$, $\alpha \in \Lambda_{x,r}$. Done naively, that requires $3^d$ times the effort that $\widehat{m}(x)$ needs. Fortunately, there are ways to avoid such an increase in computational effort. Suppose again that $\widetilde{m}_r(x; \delta)$ is to be evaluated over a grid $\Lambda_0$. One approach is to take the bin widths of the grid $\Lambda_0$ in proportion to the bin widths $\delta_i b_i h$, $i = 1, \cdots, d$, of $\Lambda_{x,r}$, so that $\widehat{m}(\alpha)$, $\alpha \in \Lambda_{x,r}$, can be reused for other values of $x \in \Lambda_0$. Also, the set of coefficients in the linear combination is the same for all $x \in \Lambda_0$, so it needs to be evaluated only once. Hence, in this manner, $\widetilde{m}_r(x; \delta)$ is computed with about the same amount of effort as the local linear estimator $\widehat{m}(x)$. The estimator $\bar{m}(x; \delta)$ can be handled in a similar way to alleviate the computational effort.

4.3 Behaviors at Boundary Regions

One reason that the local linear technique is very popular in practice and in many contexts is that it does boundary correction automatically. For instance, when $x$ is a boundary point, the conditional bias and variance of $\widehat{m}(x)$ are both kept at the same orders as in the interior. Theorem 2.2 of Ruppert and Wand (1994) formally defines a boundary point in multivariate local linear regression and provides asymptotic expressions for the conditional bias and variance. The theorem shows that only the constant factors involving $K$ and $x$ change, and these constant factors depend on how far, relative to the bandwidth matrix, $x$ is from the boundary.

Clearly, from the form of $\widetilde{m}(x; r, \delta)$, Theorem 1 can be extended to cover the case where $\Lambda_v$ is not entirely contained in the interior, so that asymptotic results for the conditional bias and variance of $\widetilde{m}(x; r, \delta)$ are available for every $x \in D$. This is not elaborated here because it is straightforward but the notation becomes much more complicated. One can show that the conditional bias and variance are of the same orders at all $x \in D$ as long as $\Lambda_v$ is in the support of $f$, which is of course true in general. Therefore, $\widetilde{m}(x; r, \delta)$ also achieves automatic boundary correction.

For a boundary point $x$, comparison between the conditional variances of $\widehat{m}(x)$ and $\widetilde{m}(x; r, \delta)$ becomes tedious as the constant coefficients are both very complex. However, we can argue that $\mathrm{Var}\{\widetilde{m}(x; r, \delta) \mid X_1, \cdots, X_n\}$ is again asymptotically smaller than $\mathrm{Var}\{\widehat{m}(x) \mid X_1, \cdots, X_n\}$ in boundary regions in the following way. Our estimator $\widetilde{m}(x; r, \delta)$ is a linear combination of $\widehat{m}(\alpha)$, $\alpha \in \Lambda_v$. It is well known that $\mathrm{Var}\{\widehat{m}(\alpha) \mid X_1, \cdots, X_n\}$ is much smaller than $\mathrm{Var}\{\widehat{m}(x) \mid X_1, \cdots, X_n\}$ for those $\alpha \in \Lambda_v$ that are farther away from the boundary than $x$. The weight in $\widetilde{m}(x; r, \delta)$ put on any $\widehat{m}(\alpha)$ with $\alpha$ closer to the boundary than $x$ becomes close to one only when $x$ is right next to it. Otherwise, $\widetilde{m}(x; r, \delta)$ spreads its weights on $\widehat{m}(\alpha)$ for those $\alpha \in \Lambda_v$ farther away from the boundary than $x$.

The constant multiplier in the asymptotic expression of $E^2\{\widehat{m}(x) - m(x) \mid X_1, \cdots, X_n\}$ changes and generally becomes smaller as $x$ moves from the interior to the boundary region. Therefore, $\widetilde{m}(x; r, \delta)$ no longer has the same asymptotic conditional bias as $\widehat{m}(x)$ at a boundary point $x$. However, it is generally true that, as $x$ moves toward the boundary, the decrease in $E^2\{\widehat{m}(x) - m(x) \mid X_1, \cdots, X_n\}$ is much less than the increase in $\mathrm{Var}\{\widehat{m}(x) \mid X_1, \cdots, X_n\}$. Then the difference between $E^2\{\widetilde{m}(x; r, \delta) - m(x) \mid X_1, \cdots, X_n\}$ and $E^2\{\widehat{m}(x) - m(x) \mid X_1, \cdots, X_n\}$ is small compared to the variance reduction yielded by $\widetilde{m}(x; r, \delta)$. This implies that the asymptotic conditional mean squared error of $\widetilde{m}(x; r, \delta)$ is again smaller than that of $\widehat{m}(x)$ when $x$ is a boundary point.

Consider the behavior of $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ in boundary regions. Suppose that $\Lambda_{x,r}$ is entirely contained in the support of the design density $f$. Then the above arguments can be used to demonstrate that $\widetilde{m}_r(x; \delta)$ and $\bar{m}(x; \delta)$ also have the automatic boundary correction property, and that they have smaller asymptotic mean squared errors than $\widehat{m}(x)$ in boundary regions.

The asymptotic behaviors of the conditional bias and variance of the local linear estimator $\widehat{m}(x)$ when $x$ is near the boundary are illustrated and discussed in detail by Fan and Gijbels (1992), Ruppert and Wand (1994) and Fan and Gijbels (1996), among others.

5 Simulation study and real application

5.1 Simulation Study

Consider model (1.1) with $d = 2$, $\nu(x) = 1$ and $m(x_1, x_2) = \sin(2\pi x_1) + \sin(2\pi x_2)$. Let $X_{i1}$ and $X_{i2}$ be independent, each uniformly distributed on $[0, 1]$, and let $\varepsilon_i$ have a standard Normal distribution. We drew samples from the above model with sample sizes $n = 400$ and $n = 1000$. In computing $\widehat{m}(x_1, x_2)$ and $\bar{m}(x_1, x_2; \delta)$, we employed the biweight kernel $K(u) = \frac{15}{16}(1 - u^2)^2 I(|u| \leq 1)$, $h = 0.15$ and $0.2$, $b_1 = b_2 = 1$, and $\delta_i = \min\{1, x_i/[(1 + 1/\sqrt{2})h], (1 - x_i)/[(1 + 1/\sqrt{2})h]\}$ for $i = 1, 2$. In Figures 1–4, the natural logarithm of the ratio of the mean squared error of $\bar{m}(x_1, x_2; \delta)$ to that of $\widehat{m}(x_1, x_2)$ is plotted against $x_1 = 0, 0.05, \cdots, 1$ and $x_2 = 0, 0.05, \cdots, 1$, and also against $x_2 = 0, 0.05, \cdots, 1$ with $x_1$ fixed at $0.2$, $0.5$ and $0.8$. These figures show that, in all cases considered, $\bar{m}(x_1, x_2; \delta)$ has a significantly smaller mean squared error than $\widehat{m}(x_1, x_2)$ when $(x_1, x_2)$ is an interior point. Occasionally, the mean squared error of $\bar{m}(x_1, x_2; \delta)$ is slightly larger than that of $\widehat{m}(x_1, x_2)$ at some boundary points. Boundary behavior is not a main issue in nonparametric multivariate regression since the data contain very little information about the regression surface there.
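A short sketch of the data-generating process used here; the estimator routines themselves would be the illustrative sketches given earlier rather than the authors' code:

```python
import numpy as np

def biweight(u):
    """Biweight kernel K(u) = (15/16)(1 - u^2)^2 I(|u| <= 1)."""
    return 15.0 / 16.0 * np.maximum(1.0 - u ** 2, 0.0) ** 2

def draw_sample(n, rng):
    """One sample from model (1.1) with d = 2, nu = 1 and
    m(x1, x2) = sin(2*pi*x1) + sin(2*pi*x2), X uniform on [0, 1]^2."""
    X = rng.uniform(size=(n, 2))
    m = np.sin(2.0 * np.pi * X[:, 0]) + np.sin(2.0 * np.pi * X[:, 1])
    return X, m + rng.standard_normal(n)

# Settings described above: n in {400, 1000}, h in {0.15, 0.2}, b1 = b2 = 1 and
# delta_i = min{1, x_i / [(1 + 1/sqrt(2)) h], (1 - x_i) / [(1 + 1/sqrt(2)) h]}.
# The MSE ratio in Figures 1-4 is then estimated by Monte Carlo over repeated
# samples, evaluating the two estimators over the grid {0, 0.05, ..., 1}^2.
rng = np.random.default_rng(0)
X, Y = draw_sample(400, rng)
```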


5.2 Real application

We applied the local linear estimator ˆm(·) and our variance reduced estimate ¯m(·; δ) to the Boston housing price data set. This data set consists of the median value of owner- occupied homes in 506 U.S. census tracts in the Boston area in 1970, together with several variables which might explain the variation of housing value, see Harrison and Rubinfeld (1978). Here we fit model (1.1) to the median values of homes with two covariates, x1 = LST AT (lower status of the population) and x2 = P T RAT IO (pupil-teacher ratio by town). We computed estimators ˆm(x1, x2) and ¯m(x1, x2; δ) by taking the same set of tuning parameters δi, i = 1, 2, and the same kernel as in Section 5.1 and h = 1, and b1 = 1.5, b2 = 0.5 were employed here. These estimators are plotted in Figure 5. Close to the boundary P T RAT IO = 0, mainly P T RAT IO less than 1.8, the two estimators behave quite differently since the regression surface changes drastically in that boundary region. The spikes in ¯m(x1, x2; δ) will disappear if smaller values of δi, i = 1, 2, are used.

For P T RAT IO > 3 or LST AT > 10, our estimator ¯m(x1, x2; δ) effectively smoothes out the spurious bumps produced by ˆm(x1, x2).

6 Proofs

Proof of Theorem 1. Put $\Delta = \bigl((k_1 - 1 - r_1)\delta_1 b_1, \cdots, (k_d - 1 - r_d)\delta_d b_d\bigr)^T$. Then $x(k_1, \cdots, k_d) - x = h\Delta$. Further,

$$m\bigl(x(k_1, \cdots, k_d)\bigr) - m(x) = h\Delta^T M_1(x) + \tfrac{1}{2} h^2 \Delta^T M_2(x) \Delta + o(h^2), \qquad (6.1)$$

where $M_1(x) = \bigl(\frac{\partial}{\partial x_1} m(x), \cdots, \frac{\partial}{\partial x_d} m(x)\bigr)^T$. Since, for any $s \in [-1, 1]$,

$$\sum_{j \in \{0,1,2\}} A_j(s) = 1, \qquad \sum_{j \in \{0,1,2\}} A_j(s)(j - 1 - s)^l = 0, \quad l = 1, 2, \qquad (6.2)$$

we have

$$\sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\} = 1,$$

and

$$E\bigl\{\widetilde{m}(x; r, \delta) - m(x) \mid X_1, \cdots, X_n\bigr\}$$
$$= \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\}\, E\bigl\{\widehat{m}\bigl(x(k_1, \cdots, k_d)\bigr) - m\bigl(x(k_1, \cdots, k_d)\bigr) \mid X_1, \cdots, X_n\bigr\}$$
$$\quad + \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\} \Bigl\{m\bigl(x(k_1, \cdots, k_d)\bigr) - m(x)\Bigr\}$$
$$= \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\} \Bigl[\tfrac{1}{2}\mu_2(K) h^2\, \mathrm{tr}\bigl\{B M_2(x)\bigr\} + o_p(h^2)\Bigr]$$
$$\quad + \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\} \Bigl\{h\Delta^T M_1(x) + \tfrac{1}{2} h^2 \Delta^T M_2(x) \Delta + o(h^2)\Bigr\}$$
$$= \tfrac{1}{2}\mu_2(K) h^2\, \mathrm{tr}\bigl\{B M_2(x)\bigr\} + h E_1 + \tfrac{1}{2} h^2 E_2 + o_p(h^2),$$

where

$$E_1 = \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\} \Delta^T M_1(x), \qquad E_2 = \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\} \Delta^T M_2(x) \Delta.$$

Then (2.5) follows if $E_1 = E_2 = 0$, which can be validated by showing that

$$E_1 = \sum_{(k_1, \cdots, k_d)^T \in \{0,1,2\}^d} \Bigl\{\prod_{i=1}^d A_{k_i}(r_i)\Bigr\} \sum_{j=1}^d (k_j - 1 - r_j)\delta_j b_j \frac{\partial}{\partial x_j} m(x)$$
$$= \sum_{(k_1, \cdots, k_{d-1})^T \in \{0,1,2\}^{d-1}} \Bigl\{\prod_{i=1}^{d-1} A_{k_i}(r_i)\Bigr\} \Bigl\{\sum_{k_d \in \{0,1,2\}} A_{k_d}(r_d) \sum_{j=1}^d (k_j - 1 - r_j)\delta_j b_j \frac{\partial}{\partial x_j} m(x)\Bigr\}$$
$$= \sum_{(k_1, \cdots, k_{d-1})^T \in \{0,1,2\}^{d-1}} \Bigl\{\prod_{i=1}^{d-1} A_{k_i}(r_i)\Bigr\} \Bigl\{\sum_{j=1}^{d-1} (k_j - 1 - r_j)\delta_j b_j \frac{\partial}{\partial x_j} m(x)\Bigr\},$$
