
This section compares the performance of the estimators discussed in the previous two sections. To compare the three models, we consider the properties of the estimators as the sample size grows large: the estimators should approach the true values as the sample size increases. A natural criterion is that the mean square error (MSE) of an estimator should approach zero as the sample size gets very large; this implies that the estimator is asymptotically unbiased and that its variance goes to zero as the sample size increases.
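For concreteness, the bias and MSE entries reported in the tables below are the usual Monte Carlo moments of an estimator across replications. The following is a minimal sketch in which the sample mean of synthetic normal draws stands in for the paper's estimators (the estimation routine itself is not reproduced here); it only illustrates how the two moments are tabulated and why both should shrink as the sample size grows.

```python
import numpy as np

def bias_and_mse(estimates, true_value):
    """Empirical bias and MSE of a batch of Monte Carlo estimates."""
    estimates = np.asarray(estimates)
    bias = estimates.mean() - true_value
    mse = np.mean((estimates - true_value) ** 2)  # MSE = bias^2 + variance
    return bias, mse

# Illustration: the sample mean estimating a true mean of 1.0. Its bias
# hovers near zero and its MSE shrinks at roughly rate 1/N, mirroring the
# pattern the tables below look for as N and T increase.
rng = np.random.default_rng(0)
for n in (50, 100, 200):
    draws = [rng.normal(1.0, 1.0, size=n).mean() for _ in range(1000)]
    print(n, bias_and_mse(draws, 1.0))
```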

Accordingly, a model can be regarded as a good one if its estimators satisfy these large-sample properties. Table 1 summarizes the simulation outcomes of the empirical moments, i.e., bias and MSE, of the estimators for the nine (N, T) bundles. We first look at the performance of the allocative parameters, estimated in the first step. One thing that is immediately noticeable is that H2/H1 and H3/H1 are well estimated even for the smallest sample size, i.e., (N, T) = (50, 6). Another desirable feature is that the bias and the MSE fall when either N or T increases, aside from the bias of H2/H1 when N = 200. Even in those exceptional cases the biases are negligible.

Table 1. The performance of the allocative parameter estimates setting M(·) = 2 ln(1+y1)

                 H2/H1                  H3/H1
(N, T)           Bias        MSE        Bias      MSE
(50, 6)          0.0006      0.0013     0.0048    0.0019
(50, 10)         0.0004      0.0007     0.0030    0.0010
(50, 20)         0.0001      0.0004     0.0026    0.0005
(100, 6)         0.0002      0.0006     0.0029    0.0009
(100, 10)        0.0001      0.0004     0.0026    0.0005
(100, 20)       -0.0001      0.0002     0.0010    0.0002
(200, 6)        -1.74E-06    0.0003     0.0021    0.0004
(200, 10)       -0.0001      0.0002     0.0010    0.0002
(200, 20)        0.0003      0.0001     0.0004    0.0001

Table 2 reveals that, in general, the MSEs of the parameter estimates of the parametric portion fall quickly when either N or T increases. For instance, fixing N = 50, the MSE of the coefficient of ln(w3/w1) shrinks swiftly from 0.2227 to 0.0579 as T increases from 6 to 20, and continues to fall to 0.0142 when (N, T) = (200, 20). The bias measures exhibit a similar pattern, although the biases of some coefficients are somewhat large relative to their true values in the case of (N, T) = (50, 6). In summary, the estimators in the first step perform quite well, as expected, in terms of their biases and MSEs, which improve when either N or T increases.

Table 2. The performance of the parameter estimates in Step 1 setting M(·) = 2 ln(1+y1)

[Only the column headers (N, T) = (200, 6), (200, 10), and (200, 20) survive from the body of Table 2; the remaining entries are not recoverable from the source.]

Table 3 presents the biases and MSEs of the parametric part for Models A and B obtained from Step 3. Generally speaking, these estimators perform poorly: their biases and MSEs are much larger than those derived from the first-stage estimation. In addition, the biases and MSEs of Model A decrease to some extent as the sample size increases, while the biases of Model B are hardly altered by an increase in the sample size. This leads us to conclude that the computationally simple first-stage estimators of the parametric part outperform the third-step estimators of Models A and B. Does this imply that Step 3 is redundant? The answer is no: it is necessary for the estimation of the distribution parameters, as we show below.

The distribution parameters of v and u are estimated in Step 5 by maximum likelihood, and Table 4 presents the results. The estimators of Model C have larger biases and MSEs than those of Models A and B in most cases. We therefore set Model C aside whenever it is not needed and focus our analysis on Models A and B. For the case of (σ², γ) = (1.88, 1.66), Model B's estimator of η has lower biases and MSEs than Model A's in almost all cases, though the differences are quite small. Model B's estimator of σ² performs slightly better than Model A's, while the reverse is true for the estimator of γ. A caveat is that Model A's estimator of σ² tends to have a larger variation when the sample size is small. As far as the estimator of the smooth function M(·) is concerned, Model A is found to be superior to Model B, since Model A yields much smaller biases and MSEs than Model B in most cases. Only for the cases with a long time period (T = 20) are Model B's biases a little smaller than Model A's. Turning to the cases of (σ², γ) = (1.35, 0.83) and (1.63, 1.24), the results are rather similar to the preceding case.
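The exact Step 5 likelihood is not reproduced here. The following is a minimal sketch, assuming a normal–half-normal composed error eps = v + u for a cost frontier and the (σ², γ) parameterization reported in Table 4, with the temporal parameter η omitted for brevity; the function names and synthetic residuals are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, eps):
    """Negative log-likelihood of normal-half-normal residuals eps = v + u
    (cost frontier), parameterized by sigma^2 = sigma_v^2 + sigma_u^2 and
    gamma = sigma_u^2 / sigma^2, both mapped to the real line."""
    sigma2 = np.exp(params[0])                  # sigma^2 > 0
    gamma = 1.0 / (1.0 + np.exp(-params[1]))    # 0 < gamma < 1
    sigma = np.sqrt(sigma2)
    lam = np.sqrt(gamma / (1.0 - gamma))        # lambda = sigma_u / sigma_v
    z = eps / sigma
    ll = np.log(2.0) - np.log(sigma) + norm.logpdf(z) + norm.logcdf(lam * z)
    return -ll.sum()

# eps would be the step-3 residuals; synthetic draws are used here, with
# true sigma^2 = 1 + 1.44 = 2.44 and gamma = 1.44 / 2.44.
rng = np.random.default_rng(1)
eps = rng.normal(0.0, 1.0, 2000) + np.abs(rng.normal(0.0, 1.2, 2000))
fit = minimize(neg_loglik, x0=np.zeros(2), args=(eps,), method="BFGS")
sigma2_hat = np.exp(fit.x[0])
gamma_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
print(sigma2_hat, gamma_hat)
```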

Table 3. The performance of the parameter estimates from the third stage setting M(·) = 2 ln(1+y1)

( 2, ) (1.88 , 1.66) (1.35 , 0.83) (1.63 , 1.24)

( N , T ) = (50, 6) Model A Model B Model A Model B Model A Model B

Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE

3 1

( 2, ) (1.88 , 1.66) (1.35 , 0.83) (1.63 , 1.24)

( N , T ) = (50, 20) Model A Model B Model A Model B Model A Model B

Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE

3 1

( 2, ) (1.88 , 1.66) (1.35 , 0.83) (1.63 , 1.24)

( N , T ) = (100, 10) Model A Model B Model A Model B Model A Model B

Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE

3 1

( 2, ) (1.88 , 1.66) (1.35 , 0.83) (1.63 , 1.24)

Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE

3 1

( 2, ) (1.88 , 1.66) (1.35 , 0.83) (1.63 , 1.24)

( N , T ) = (200, 20) Model A Model B Model A Model B Model A Model B

Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE Bias MSE

3 1

ln(w /w ) 0.4047 0.1674 -0.4005 0.1729 0.4054 0.1695 -0.4010 0.1792 0.4049 0.1680 -0.4007 0.1745

2

3 1

[ln(w /w)] -0.0007 0.0036 0.7215 0.5323 -0.0011 0.0039 0.7224 0.5395 -0.0008 0.0037 0.7218 0.5343

2 1 3 1

ln(w /w) ln(w /w) -4.81E-05 0.0037 -0.5495 0.3044 0.0004 0.0039 -0.5497 0.3056 0.0001 0.0037 -0.5496 0.3047

1 3 1

lny ln(w /w ) -0.1199 0.0147 -0.9208 0.8503 -0.1201 0.0148 -0.9209 0.8515 -0.1200 0.0147 -0.9209 0.8506

2 1

ln(w /w ) -0.4240 0.1835 0.2002 0.0424 -0.4243 0.1852 0.2003 0.0434 -0.4241 0.1840 0.2002 0.0426

2

2 1

[ln(w /w)] 0.0007 0.0039 0.7501 0.5636 0.0002 0.0042 0.7502 0.5642 0.0005 0.0040 0.7501 0.5638

1 2 1

lny ln(w /w ) 0.1258 0.0161 0.1997 0.0408 0.1258 0.0162 0.1995 0.0412 0.4049 0.1680 -0.4007 0.1745

Although both Models A and B perform reasonably well, the simulation results appear to favor Model A in general, and for the estimation of TE scores in particular (see Tables 7 and 14 below). Comparing (3-13) with (4-1), one can tell that their disparity originates from how β and lnG_nt are estimated. For Model A, they are estimated by NISUR, treating the parameters contained in lnG_nt as unknown, while for Model B lnG_nt is replaced by lnĜ_nt, leaving β to be estimated by OLS. The superiority of Model A may be explained by its allowance for the presence of lnG_nt in the expenditure equation.
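The joint-versus-plug-in contrast can be made concrete with a toy version of the two strategies. In the sketch below, the functional form of lnG, the names beta and theta, and the use of nonlinear least squares as a stand-in for NISUR are all assumptions for illustration; only the contrast itself mirrors the text.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy expenditure equation  y = beta*x + lnG(theta, z) + e  with a
# hypothetical one-parameter form  lnG(theta, z) = log(1 + theta*z^2).
rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)
beta_true, theta_true = 1.5, 0.8
lnG = np.log1p(theta_true * z**2)
y = beta_true * x + lnG + rng.normal(0.0, 0.3, n)

# Model A style: estimate beta and theta jointly, keeping the parameters
# inside lnG unknown (nonlinear least squares standing in for NISUR).
joint = least_squares(lambda p: y - p[0]*x - np.log1p(p[1]*z**2),
                      x0=[1.0, 0.5],
                      bounds=([-np.inf, 0.0], np.inf)).x

# Model B style: plug in a first-stage estimate of lnG (perturbed here to
# mimic first-stage noise), then recover beta by OLS on y - lnG_hat.
lnG_hat = lnG + rng.normal(0.0, 0.2, n)
beta_plugin = np.linalg.lstsq(x[:, None], y - lnG_hat, rcond=None)[0][0]
print(joint, beta_plugin)
```

The joint fit keeps the parameters of lnG inside the estimating equation, as Model A does, whereas the plug-in fit inherits whatever estimation error lnĜ_nt carries, as Model B does.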

It is apparent from Table 4 that Model C gives rise to undesirable estimators of (η, σ², γ). This is mainly ascribable to the fact that it skips Step 3 and proceeds from Step 2 directly to Steps 4 and 5. In doing so, the residual of (4-2) is derived indirectly using the NISUR estimates of lnĜ_nt and the associated parameters, which are obtained by simultaneously estimating the (J − 1) share equations rather than the expenditure equation. Conversely, the residuals of (3-13) and (4-1), corresponding to Models A and B respectively, are deduced directly from estimating the expenditure equation. Step 3 is thus necessary.

We have learned from Tables 2 and 3 that the parameter estimates of the parametric part of the cost function obtained in the first step outperform those obtained in the third step. These estimates are applied to compute lnG_nt. We now compare the performance of the estimated lnG_nt to gain further insight into the properties of the alternative models. Not surprisingly, G1 has smaller biases and MSEs than G2A and G2B, derived from Models A and B respectively, in almost all (N, T) and (σ², γ) combinations. The outcomes support the use of G1 as the estimate of lnG_nt.

Table 4. The performance of the estimators of (η, σ², γ) setting M(·) = 2 ln(1+y1)

( ,  2, )= (0.025 , 1.35 , 0.83)

( ,  2, )= (0.025 , 1.63 , 1.24)

Table 5. The performance of the estimated function lnG_nt setting M(·) = 2 ln(1+y1)

Table 6 shows the performance of the estimated AE measures. Again, the biases of AE1 are found to be smaller than those of both AE2A and AE2B, derived from Models A and B respectively, in most (N, T) and (σ², γ) bundles, although the biases of the AE2A and AE2B measures are already tiny. In addition, these biases and MSEs decrease as the sample size grows. One is led to conclude that AE1 is superior to AE2A and AE2B as an estimate of AE.

Given that technical efficiency often constitutes one of the primary issues in empirical studies, we investigate the performance of the TE measures based on the three models. It can be seen from Table 7 that the biases and MSEs of Model C vary dramatically, implying that the model is apt to yield erratic and implausible TE measures. By contrast, the biases and MSEs of both Models A and B are much smaller than those of Model C. The other two cases of (σ², γ) give similar implications.

Comparing the first two columns of Table 7 with the middle two columns of the same table, we see that the performance of the estimated TE scores from Model A surpasses that of Model B in most (N, T) and (σ², γ) bundles. Model B may be valid only when T is greater than or equal to 20. We also conduct simulations assuming η = −0.025, holding the remaining parameter values fixed; the results are shown in Tables I and II of the Appendix. A negative value of η implies that the TE score of a firm deteriorates over time. Tables I and II provide results similar to the foregoing. To assess the small-sample properties of the three models, we repeat the simulations for N = 30 and T = 6, 10, 20. The results are shown in Tables III to IX of the Appendix. We find that the first-step estimators of the cost share equations perform well and that Model A still works well on the whole. Since performance does not differ substantially from the larger sample sizes, the foregoing results still hold even when N = 30.
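Although the paper's exact specification is not reproduced here, the roles of (η, σ², γ) are consistent with a Battese–Coelli (1992) style time-decay model, sketched below; in that setting η < 0 indeed makes u_nt grow with t, so the TE score deteriorates over time, and the TE point estimate is the usual conditional expectation evaluated at the Step 5 estimates.

```latex
u_{nt} = e^{-\eta\,(t - T)}\, u_{n}, \qquad
\widehat{TE}_{nt}
  = E\!\left[\, e^{-u_{nt}} \mid \varepsilon_{n1},\ldots,\varepsilon_{nT} \right],
\qquad
\sigma^{2} = \sigma_{v}^{2} + \sigma_{u}^{2}, \quad
\gamma = \sigma_{u}^{2}/\sigma^{2}.
```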

Table 6. The performance of estimated AE setting M(·) = 2 ln(1+y1)

Table 7. The performance of estimated TE scores setting M(·) = 2 ln(1+y1)

To understand the effects of the dimensionality of the smooth function M(·) on the performance of the various estimators, we now re-specify M as a function of two variables, i.e., ln y1 and ln y2. The simulation results, given in Tables 8 to 14, are comparable with Tables 1 to 7. Raising the dimension of M and adding extra regressors appears to exert no significant impact on the performance of the estimators of interest.
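The paper's smoother for M is not reproduced here. As a hedged illustration, a Nadaraya–Watson local-constant estimator with a product kernel shows why adding a second conditioning variable changes only the dimension of the kernel, not the logic of the fit; the bandwidth h and the sample design below are arbitrary.

```python
import numpy as np

def nw_estimate(y_obs, X, x0, h):
    """Nadaraya-Watson estimate of M at x0 with a product Gaussian kernel.
    X is (n, 2): the two conditioning variables ln y1 and ln y2."""
    u = (X - x0) / h
    w = np.exp(-0.5 * (u**2).sum(axis=1))
    return (w * y_obs).sum() / w.sum()

# Recover M = 2*ln(1 + y1)*ln(y2) from noisy observations.
rng = np.random.default_rng(3)
y1 = rng.uniform(0.5, 2.0, 400)
y2 = rng.uniform(0.5, 2.0, 400)
X = np.column_stack([np.log(y1), np.log(y2)])
m_true = 2.0 * np.log1p(y1) * np.log(y2)
y_obs = m_true + rng.normal(0.0, 0.1, 400)
print(nw_estimate(y_obs, X, X[0], h=0.25), m_true[0])
```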

In particular, the first-step parameter estimates of the parametric part still perform well, as shown in Tables 8 and 9, in comparison with the third-step estimates in Table 10.

Table 11 displays outcomes analogous to Table 4; we argue for the employment of Model A once more. Table 12 shows that G1 has the smallest biases and MSEs in most of the (N, T) combinations and hence outperforms G2A and G2B. Moreover, Table 13 shows that AE1 performs reasonably well; the biases and MSEs of both AE2A and AE2B decrease faster than those of AE1. Finally, Table 14 reveals that the TE scores of Model A outperform those of Model B, especially when the time period is short, while the two perform equally well when the time period reaches, say, 20. Generally speaking, the performance of the estimators under consideration appears unaffected by the increase in the dimension of M and in the number of explanatory variables in the parametric part.

Note that Model C produces infeasibly large TE scores with substantial variability.

We also check the performance of our proposed estimator in the context of cross-sectional data. As different variance ratios give similar results for the three models, we only report the results for (σ², γ) = (1.88, 1.66) in the Appendix to save space. The first-step parameter estimates in Table X behave well, and better than those in Table XI, where the biases of some estimates remain large even when N = 1000.

Table XII shows mixed results on the performance of the estimators of (σ², γ) and M(·) from Models A and B, while Model C behaves badly. The measures G1 and AE1 in Tables XIII and XIV also perform satisfactorily, although in a few cases their biases and MSEs exceed those of G2 and AE2. Finally, Table XV shows that the biases and MSEs of the TE scores from Models A and B are close to each other and relatively large, yet smaller than those of Model C. It is well known in the area of efficiency and productivity analysis that the TE scores of firms cannot be consistently estimated from cross-sectional data, since the variance of the conditional mean or the conditional mode for each individual firm does not vanish as the size of the cross section increases (Schmidt and Sickles, 1984). Table XV confirms this, because the biases shrink only slowly as the sample size grows. Despite this drawback, which is avoidable in a panel data setting, evidence is found that Model A performs at least as well as Model B using cross-sectional data.

We also estimate the translog cost function for the purpose of comparison. Here M(·) is re-specified as y1 ln(1+y1), and the results are shown in the Appendix.
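For reference, a standard translog cost function with linear homogeneity in input prices imposed by normalizing on w1 takes the form below; the coefficient names are generic, and the exact specification estimated in the Appendix may differ. The regressors match those appearing in Table 3.

```latex
\ln\frac{C}{w_{1}}
  = \alpha_{0}
  + \sum_{j=2}^{3}\alpha_{j}\ln\frac{w_{j}}{w_{1}}
  + \frac{1}{2}\sum_{j=2}^{3}\sum_{k=2}^{3}\alpha_{jk}
      \ln\frac{w_{j}}{w_{1}}\ln\frac{w_{k}}{w_{1}}
  + \beta_{y}\ln y_{1}
  + \frac{1}{2}\,\beta_{yy}\,(\ln y_{1})^{2}
  + \sum_{j=2}^{3}\delta_{j}\,\ln y_{1}\ln\frac{w_{j}}{w_{1}} .
```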

Table XVI reveals that the parameter estimates in Step 1 perform well even though we re-specify M(·) as a more complicated functional form; their biases and MSEs fall when either N or T increases. Table XVII presents the biases and MSEs of the parametric part for Model A and for the translog functional form.

Although the estimators of Model A have larger biases and MSEs than those of the translog, the biases and MSEs of Model A decrease as the sample size increases. Table XVIII presents the parameter estimates of the distribution. Model A's estimators clearly have lower biases and MSEs than the translog's. In addition, one of the translog's distribution-parameter estimators performs poorly, and the others have large biases and MSEs in the case of a long time period (T = 20). Table XIX shows the performance of the estimated lnG: G1 has smaller biases and MSEs than G2A and G2B in most of the (N, T) combinations. Table XX reveals the performance of the estimated AE measures.

Again, the biases and MSEs of AE1 are found to be smaller than those of both AE2A and AE2T in almost all (N, T) bundles. As to the performance of the estimated TE measures, it can be seen from Table XXI that the biases and MSEs of Model A are much smaller than those of the translog. On the whole, the evidence tends to support the superiority of Model A, implying that using the semiparametric regression model is better than using the translog functional form when the true functional form is complicated and unknown.

Table 8. The performance of the allocative parameter estimates setting M(·) = 2 ln(1+y1) ln y2

Table 9. The performance of the parameter estimates in Step 1 setting M(·) = 2 ln(1+y1) ln y2

Table 10. The performance of the parameter estimates from the third stage setting M(·) = 2 ln(1+y1) ln y2

( 2, ) (1.88 , 1.66) (1.35 , 0.83)

(N , T ) = (50, 6) Model A Model B Model A Model B

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

( 2, ) (1.88 , 1.66) (1.35 , 0.83)

(N , T ) = (100, 6) Model A Model B Model A Model B

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

( 2, ) (1.88 , 1.66) (1.35 , 0.83)

(N , T ) = (200, 6) Model A Model B Model A Model B

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Bias MSE Bias MSE Bias MSE Bias MSE

3 1

Table 11. The performance of the estimators of (η, σ², γ) setting M(·) = 2 ln(1+y1) ln y2

( ,  2, )= (0.025 , 1.35 , 0.83)

Table 12. The performance of the estimated function lnG_nt setting M(·) = 2 ln(1+y1) ln y2

( ,  2, )= (0.025 , 1.35 , 0.83)
