
Let the $j$th shadow input price, $W_j^{*}$, be defined as

$$W_j^{*} = H_j W_j, \qquad j = 1, \ldots, J, \tag{3-1}$$

where $H_j$ ($> 0$) denotes the allocative parameter of input $j$, measuring the extent to which the shadow and actual input prices ($W_j^{*}$ and $W_j$) differ. It thus reflects the degree of allocative inefficiency arising from, e.g., regulation or slow adjustment to changes in input prices. Here, a firm's decision is assumed to be grounded on shadow input prices. Following Atkinson and Cornwell (1994), Kumbhakar (1996b, 1997), and Huang and Wang (2004), the minimized efficiency-adjusted shadow cost, $C^{**}$, for a

firm employing input vector $X$ to produce output vector $Y$ can be expressed as:

$$C^{**} = \frac{1}{b}\, C^{*}(Y, W^{*}), \tag{3-2}$$

where $C^{*}(Y, W^{*})$ is the shadow cost function independent of the TI parameter $b$, and $Y$ is an $m$-vector of output quantities. A firm is said to be technically efficient if it has a value of $b = 1$, while a firm operating beneath the efficiency frontier has a value of $b < 1$. The larger the value of $b$, the more technically efficient the firm. Function $F(\cdot\,,\cdot)$ represents the production transformation function.

Since a cost function must satisfy the homogeneity restriction of degree one in input prices, we can only measure the $J-1$ relative allocative parameters $H_j/H_k$, $j, k = 1, \ldots, J$ and $j \neq k$. A value of $H_j/H_k$ less (greater) than unity means that the $j$th input tends to be overused (underused) relative to input $k$. Either overuse or underuse reflects the presence of AI. Using Shephard's Lemma, the shadow cost share equation of input $j$ is written as:

$$S_j^{*} = \frac{\partial \ln C^{*}(Y, W^{*})}{\partial \ln W_j^{*}}, \qquad j = 1, \ldots, J. \tag{3-3}$$

After some manipulations and taking the natural logarithm, a firm's actual expenditure ($E$) can be associated with $C^{**}$ ($C^{*}$) and $S_j^{*}$ as follows:

$$\ln E = \ln C^{**} + \ln \sum_{j} H_j^{-1} S_j^{*} = \ln C^{*}(Y, W^{*}) + \ln \sum_{j} H_j^{-1} S_j^{*} + U, \tag{3-4}$$

where $U \equiv -\ln b$ represents the additional (log) expenditure incurred by TI and is specified as a one-sided error term later, $\ln \sum_{j} H_j^{-1} S_j^{*}$ captures a partial extra cost entailed by AI, and the remaining extra cost of AI is embedded in $\ln C^{*}(Y, W^{*})$ due to $W^{*} \neq W$.
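The algebra behind (3-4) is brief and can be reconstructed from the definitions above, using (3-1), Shephard's lemma, and $C^{**} = C^{*}(Y, W^{*})/b$:

```latex
\begin{align*}
E &= \sum_j W_j X_j
   = \sum_j \frac{W_j^{*}}{H_j}\, X_j
   = C^{**} \sum_j H_j^{-1}\, \frac{W_j^{*} X_j}{C^{**}}
   = C^{**} \sum_j H_j^{-1} S_j^{*}, \\
\ln E &= \ln C^{**} + \ln \sum_j H_j^{-1} S_j^{*}
       = \ln C^{*}(Y, W^{*}) + \ln \sum_j H_j^{-1} S_j^{*} + U,
\end{align*}
```

since $W_j = W_j^{*}/H_j$ by (3-1), $S_j^{*} = W_j^{*} X_j / C^{**}$ by Shephard's lemma, and $\ln C^{**} = \ln C^{*}(Y, W^{*}) - \ln b$ with $U \equiv -\ln b$.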

Equation (3-4) becomes a regression equation after appending a two-sided random disturbance $v$ to it, where $v$ is assumed to be distributed as $N(0, \sigma_v^2)$. Term $U + v$ forms the composed error. This equation associates TI with AI systematically. To identify the allocative parameters, one has to rely on the share equations. It can be shown that the actual share equation of input $j$ ($S_j$) is formulated as

$$S_j = \frac{H_j^{-1} S_j^{*}}{\sum_{k} H_k^{-1} S_k^{*}}, \qquad j = 1, \ldots, J.$$

After appending random disturbances to these share equations, they can be used to help estimate the parameters $H_j$. When panel data are available, it is more ambitious to allow the TI term $U$ to be time-varying. The time-variant TE model of Battese and Coelli (1992) is adopted, with $U_{nt} = u_n \exp[-\eta(t - T)]$, $n = 1, \ldots, N$, $t = 1, \ldots, T$, where $u_n$ is a firm-specific TI random variable distributed as $N(0, \sigma_u^2)$ independent of $v_{nt}$, and $g(t) \equiv \exp[-\eta(t - T)]$ contains an extra parameter $\eta$ to be estimated.2
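As a concrete illustration of this decay structure, the time path $U_{nt} = u_n \exp[-\eta(t-T)]$ can be sketched as follows; the firm count, horizon, and parameter values are hypothetical, not the paper's data:

```python
import numpy as np

# Sketch of the Battese-Coelli (1992) time-varying TI term
# U_nt = u_n * exp(-eta * (t - T)); all values are illustrative only.
rng = np.random.default_rng(0)

N, T = 3, 5          # firms and periods (hypothetical)
eta = 0.2            # decay parameter; estimated in practice, fixed here
u_n = np.abs(rng.normal(0.0, 0.3, size=N))  # one-sided firm-specific TI draws

t = np.arange(1, T + 1)
g_t = np.exp(-eta * (t - T))     # g(T) = 1; g(t) > 1 for t < T when eta > 0
U = u_n[:, None] * g_t[None, :]  # N x T matrix of TI terms

# With eta > 0, each firm's inefficiency shrinks over time toward u_n at t = T.
```

A positive $\eta$ thus makes early-period inefficiency a scaled-up version of the terminal-period level, which is the sense in which the model is "time-variant" with only one extra parameter.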

We now turn to the functional form of $\ln C^{*}(Y, W^{*})$ in (3-4). It is conventionally specified as a translog form, as a Fuss functional form as in Berger et al. (1993), or as a Fourier flexible function as in Altunbas et al. (2001) and Huang and Wang (2004). In this paper $\ln C^{*}(Y, W^{*})$ is instead formulated semiparametrically: the terms involving the (log) input prices enter parametrically through $X_{nt}'\beta$, while the effect of the (log) outputs is captured by an unknown function $M(\ln Y_{nt})$, so that the expenditure equation becomes

$$\ln E_{nt} = X_{nt}'\beta + M(\ln Y_{nt}) + \ln G_{nt} + v_{nt} + U_{nt}, \tag{3-7}$$

with $G_{nt} \equiv \sum_{j} H_j^{-1} S_{jnt}^{*}$, and the corresponding share equations (3-8) carry disturbances $\xi_{nt}$, a random vector with mean zero and constant covariance matrix. $v_{nt}$ and $\xi_{nt}$ represent the usual statistical noise and are assumed to be distributed independently of each other. It can be shown that the $n$th firm's probability density function of the composed disturbance $\varepsilon_n = (\varepsilon_{n1}, \ldots, \varepsilon_{nT})'$, where $\varepsilon_{nt} = v_{nt} + U_{nt}$, is equal to:

2 Term $g(t)$ decreases at a decreasing rate if $\eta > 0$, increases at an increasing rate if $\eta < 0$, or stays constant if $\eta = 0$.

where $\Phi(\cdot)$ and $\phi(\cdot)$ denote the standard normal cumulative distribution and probability density functions, respectively. The log-likelihood function of expenditure equation (3-7) alone can easily be derived by first multiplying (3-9) over firms and then taking the natural logarithm. Combining (3-9) with the joint probability density function of the $J-1$ random disturbances of the share equations ($\xi_{nt}$), the cost function system can be estimated simultaneously by maximum likelihood if $M$ has a known form.3 Readers may refer to, e.g., Ferrier and Lovell (1990) and Kumbhakar (1991) for details.

Three difficulties deserve specific mention. First, since the log-likelihood function of the above cost function system is highly nonlinear, obtaining maximum likelihood estimates is computationally difficult, though not infeasible. Second, $M$ has an unknown functional form, which prevents the log-likelihood function of the expenditure equation from being maximized with respect to $M$ in particular. One alternative relies on nonparametric approaches to estimate $M$. However, $M$ cannot be estimated directly by existing nonparametric regression methods, because $M$ is not the conditional expectation of $\ln E_{nt} - X_{nt}'\beta - \ln G_{nt}$. This problem can be solved by substituting

3 Note that random vector $\xi_{nt}$ must now be assumed to be distributed as multivariate normal with mean vector zero and constant covariance matrix.

$$E(\ln E_{nt} \mid \ln Y_{nt}) - E(X_{nt} \mid \ln Y_{nt})'\beta - E(\ln G_{nt} \mid \ln Y_{nt}) - \mu_t,$$

with $\mu_t \equiv E(U_{nt})$, for $M(\ln Y_{nt})$ in the log-likelihood function.

$E(\ln E_{nt} \mid \ln Y_{nt})$, $E(X_{nt} \mid \ln Y_{nt})$, and $E(\ln G_{nt} \mid \ln Y_{nt})$ can now be consistently estimated by the nonparametric approach. For details, please see, e.g., Fan et al. (1996). Finally, term

$$\ln G_{nt} = \ln \sum_{j} H_j^{-1} S_{jnt}^{*}$$

is obviously a nonlinear function of the unknown parameters, rendering the kernel estimation procedure for a standard semiparametric regression model, as proposed by Robinson (1988), inapplicable. We shall discuss possible ways of getting rid of this difficulty in Subsection 4.1; these ways influence the consistency of the parameter estimates and are the core of this study.

We adopt the kernel estimation technique to estimate the conditional expectations, such as $E(\ln E_{nt} \mid \ln Y_{nt})$, since it is one of the most popular nonparametric estimation methods. Specifically, the Nadaraya-Watson kernel estimator (Nadaraya, 1964; Watson, 1964) for a scalar $\ln Y$ is given by:

$$\hat{E}(\ln E_{nt} \mid \ln Y) = \frac{\sum_{n}\sum_{t} K\!\left(\frac{\ln Y_{nt} - \ln Y}{h}\right)\ln E_{nt}}{\sum_{n}\sum_{t} K\!\left(\frac{\ln Y_{nt} - \ln Y}{h}\right)}, \tag{3-12}$$

where $K(\cdot)$ is the kernel function and $h$ is the smoothing parameter. Equation (3-12) can easily be extended to a higher-dimensional $\ln Y_{nt}$. The rest of the conditional expectations can be estimated analogously.
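A minimal sketch of the Nadaraya-Watson estimator for a scalar conditioning variable may help; a Gaussian kernel is assumed here (the text does not commit to a specific kernel), and the data and bandwidth are purely illustrative:

```python
import numpy as np

def nadaraya_watson(y, x, x0, h):
    """Nadaraya-Watson kernel regression estimate of E[y | x = x0].

    y, x: 1-D sample arrays; x0: evaluation point; h: smoothing
    (bandwidth) parameter.  Gaussian kernel; normalizing constants
    cancel in the ratio."""
    u = (x - x0) / h
    w = np.exp(-0.5 * u**2)            # kernel weights K((x - x0)/h)
    return np.sum(w * y) / np.sum(w)   # locally weighted average of y

# Toy check on made-up data (not the paper's): smooth a noisy sine.
rng = np.random.default_rng(1)
x = rng.uniform(0, np.pi, 500)
y = np.sin(x) + rng.normal(0, 0.1, 500)
est = nadaraya_watson(y, x, x0=np.pi / 2, h=0.2)
# est should be close to sin(pi/2) = 1
```

The choice of $h$ governs the usual bias-variance trade-off: a larger bandwidth averages over more distant observations, smoothing more heavily at the cost of bias.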

We now outline the estimation procedure of the semiparametric shadow cost frontier in the following five steps.

Step 1. Simultaneously estimate the $J-1$ input share equations of (3-8) by the NISUR to obtain the $J-1$ estimates of the relative allocative parameters $H_j/H_k$ ($j = 1, \ldots, J$ and $j \neq k$) and a part of the parameters involving the input prices of expenditure equation (3-7).4 These estimates can be shown to be consistent and are

4 Terms involving solely (log) outputs do not emerge in the share equations after taking the partial derivatives of the expenditure equation with respect to (log) input prices.

used to calculate $\ln G_{nt}$, denoted by $\ln \hat{G}_{nt}$.

Step 2. Estimate the conditional expectations $E(\ln E_{nt} \mid \ln Y_{nt})$, $E(X_{nt} \mid \ln Y_{nt})$, and $E(\ln \hat{G}_{nt} \mid \ln Y_{nt})$ by the kernel method of (3-12).

Step 3. Subtract from equation (3-7) its conditional expectation on $\ln Y_{nt}$ to yield

$$\ln E_{nt} - E(\ln E_{nt} \mid \ln Y_{nt}) = [X_{nt} - E(X_{nt} \mid \ln Y_{nt})]'\beta + \ln G_{nt} - E(\ln \hat{G}_{nt} \mid \ln Y_{nt}) + \varepsilon_{nt}^{*}. \tag{3-13}$$

After substituting the kernel estimates derived in Step 2 for the conditional expectations in (3-13), the parameters $\beta$ can be consistently estimated by nonlinear least squares, since the new error component $\varepsilon_{nt}^{*}$ ($= v_{nt} + U_{nt} - \mu_t$) has zero mean asymptotically. Nonlinear least squares is required owing to the nonlinearity of $\ln G_{nt}$. This distinguishes the current paper from Robinson (1988), where ordinary least squares applies.
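The logic of Step 3, linear in $\beta$ but nonlinear in the parameters inside $\ln G_{nt}$, can be mimicked on toy data by profiling: for each trial value of the nonlinear parameter, $\beta$ has a closed-form least-squares solution. This is only a schematic stand-in (a grid search rather than a full nonlinear least-squares routine), with entirely made-up variables and functional form:

```python
import numpy as np

# Toy partially linear model: y = x*beta + log(1 + theta*w) + noise,
# where x stands in for the centered X_nt and log(1 + theta*w) is a
# hypothetical stand-in for the nonlinear lnG term.
rng = np.random.default_rng(2)
n = 400
x = rng.normal(size=n)
w = rng.uniform(1, 2, size=n)
beta_true, theta_true = 1.5, 0.7
y = x * beta_true + np.log(1 + theta_true * w) + rng.normal(0, 0.05, n)

best = None
for theta in np.linspace(0.1, 1.5, 141):   # grid over the nonlinear parameter
    g = np.log(1 + theta * w)              # candidate nonlinear component
    r = y - g
    beta = (x @ r) / (x @ x)               # closed-form OLS for beta | theta
    sse = np.sum((r - x * beta) ** 2)
    if best is None or sse < best[0]:
        best = (sse, theta, beta)

_, theta_hat, beta_hat = best
# theta_hat and beta_hat should land near 0.7 and 1.5
```

In practice a gradient-based nonlinear least-squares solver replaces the grid, but the profiling idea, concentrating the linear coefficients out for each value of the nonlinear ones, is the same.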

Step 4. Maximize the log-likelihood function derived from (3-9) with $\varepsilon_{nt}$ replaced by $\hat{\varepsilon}_{nt}$ and with $\sigma^2$ replaced by its estimator $\hat{\sigma}^2$ of (3-15), which expresses $\hat{\sigma}^2$ as a function of $\gamma$, $\eta$, $g(t)$, and the $NT$ squared residuals $\hat{\varepsilon}_{it}^2$.

In (3-15) the notation "^" is added on $\varepsilon$ since the kernel and NISUR estimators of $E(\cdot \mid \ln Y_{it})$ and $\beta$ are used to replace their respective true counterparts. For details, please see Deng and Huang (2008) for a panel data setting with time-variant TI.

Because  is a function of  , ˆ  , and data, it can be concentrated out of the log-likelihood function to reduce the number of unknown parameters.

Step 5. Maximize the concentrated log-likelihood function of the expenditure equation over the remaining two unknown parameters $\gamma$ and $\eta$, where $\varepsilon_{nt}$ is replaced by $\hat{\varepsilon}_{nt}$ in Step 4. The resulting pseudolikelihood estimates are denoted by $\hat{\gamma}$ and $\hat{\eta}$. Substituting them into (3-15), we get the estimate of $\sigma^2$ and still signify it by $\hat{\sigma}^2$. Plugging the three estimates into (3-11) yields the estimate of $\mu_t$, denoted by $\hat{\mu}_t$. Finally, the nonparametric function $M(\ln Y_{nt})$ can be consistently estimated by

$$\hat{M}(\ln Y_{nt}) = \hat{E}(\ln E_{nt} \mid \ln Y_{nt}) - \hat{E}(X_{nt} \mid \ln Y_{nt})'\hat{\beta} - \hat{E}(\ln \hat{G}_{nt} \mid \ln Y_{nt}) - \hat{\mu}_t, \tag{3-16}$$

where $\hat{\beta}$ comes from the estimates of Step 3.

It is well known that the maximum likelihood estimators of $\gamma$ and $\eta$ are asymptotically unbiased and efficient if the regularity conditions hold. Although the individual kernel regression estimators of Step 2 have pointwise convergence rates slower than root-$NT$ ($(NT)^{1/2}$), where $NT$ signifies the sample size, the average quantities of the elements in (3-15) have an order of $O_p((NT)^{-1/2})$ under very weak conditions. See, for example, Härdle and Stoker (1989) and Fan and Li (1992). Fan et al. (1996) claimed that $\hat{\sigma}^2 - \sigma^2 = O_p((NT)^{-1/2})$ under quite weak conditions. As estimator $\hat{M}(\ln Y_{nt})$ of (3-16) is a function of several kernel regression estimators having convergence rates slower than $(NT)^{1/2}$, it consequently converges to $M(\ln Y_{nt})$ for each $nt$ at a rate slower than $(NT)^{1/2}$.

The foregoing five steps complete the entire estimation procedure, and the resulting estimates can be further utilized to evaluate, e.g., measures of AE and TE. In particular, the formula proposed by Battese and Coelli (1992) is adopted to gauge each firm's TE score. Based on (3-4), the (log) cost of AI, denoted by $u_{nt}^{AI}$, is defined as the difference between the (log) shadow expenditure ($\ln C^{*}(Y, W^{*}) + \ln \sum_{j} H_j^{-1} S_j^{*}$) and the (log) optimized cost ($\ln C(Y, W)$) that achieves AE, i.e.:

$$u_{nt}^{AI} = \ln C^{*}(Y_{nt}, W_{nt}^{*}) + \ln G_{nt}(Y_{nt}, W_{nt}^{*}) - \ln C(Y_{nt}, W_{nt}), \tag{3-17}$$

which is a non-negative value by definition. The measure of AE is then obtained by taking the natural exponent of minus $u_{nt}^{AI}$, which ranges from zero to unity.
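Turning the (log) cost of AI from (3-17) into an AE score is a one-liner; the $u^{AI}$ values below are made-up placeholders, not estimates:

```python
import numpy as np

# AE = exp(-u_AI), which lies in (0, 1]; a value of 1 means full
# allocative efficiency (zero extra cost from AI).
u_ai = np.array([0.0, 0.05, 0.30])   # non-negative by construction of (3-17)
ae = np.exp(-u_ai)
# ae -> approximately [1.0, 0.951, 0.741]
```

A score of, say, 0.951 reads as: the firm's cost is about 4.9 percent above the allocatively efficient level.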

There are three attributes worth noting. First, the consistent estimates of the $J-1$ relative allocative parameters $H_j/H_k$ ($j = 1, \ldots, J$ and $j \neq k$) yielded in Step 1 are treated as given throughout the remaining four steps. On the one hand, this avoids estimating the whole cost system simultaneously by maximum likelihood and the attendant difficulty in achieving convergence; on the other hand, the number of parameters to be estimated in later steps is largely decreased. Second, despite the fact that $\ln \hat{G}_{nt}$ can be computed in Step 1 and is used to obtain the kernel estimate $\hat{E}(\ln \hat{G}_{nt} \mid \ln Y_{nt})$ in Step 2, the parameters included in $\ln G_{nt}$ of (3-13) need to be estimated again along with $\beta$, even though they have already been estimated in Step 1. Conversely, Kumbhakar and Lovell (2000) suggested subtracting $\ln \hat{G}_{nt}$ directly from the dependent variable of (3-13), which may give rise to undesired estimation results. We will come back to this shortly. Third, since Step 5 aims to estimate merely $\gamma$ and $\eta$, the log-likelihood function usually converges without much difficulty.
