
3. Methodology


regression model, but each with a different number of independent variables, the number of which is indicated in the equation by the terms $\beta_p x_{ip}$. For all four sets of regressions, $y_i$ represents the predicted or expected value of the dependent variable. The independent variables are denoted $x_{ip}$, and the contribution of each is quantified by the expression $\beta_p x_{ip}$, where each $\beta_p$ represents that independent variable's estimated coefficient. Finally, $\varepsilon_i$ represents the error term, or residual, of the multivariate linear equation.
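For reference, the expression described above corresponds to the standard multiple linear regression form; the following is a sketch of that notation rather than the thesis's own numbered equation:

$$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_p x_{ip} + \varepsilon_i $$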

3.3. Vector Auto-Regression Models

In spite of the ‘information’ that is ‘lost’ through differencing, the vector auto-regression (VAR) is the more appropriate framework for the data, given the empirical finding that the time series are not cointegrated. Accordingly, several VAR models are used to determine how the relationships between the macroeconomic and technological variables and equipment prices, in their various forms, evolve over time. Several smaller VARs are used rather than a single VAR with ten or so variables because a VAR with fewer variables tends to be more econometrically sound. Specifically, a VAR with fewer variables produces more accurate orthogonalized impulse response functions (OIRFs), which are used as de facto coefficient estimates, since the literal coefficient estimates are numerous and the directionality of the relationships between variables is difficult to interpret from them. Moreover, fewer variables means more degrees of freedom, which improves the accuracy of the parameter estimates and, in turn, of the estimated IRFs.
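As an illustration only, a minimal sketch of this strategy using statsmodels is given below; the column names and the simulated data are hypothetical stand-ins for the differenced series used in the thesis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated stand-ins for the stationary (differenced) series; in the thesis
# these would be the macroeconomic, technological, and equipment-price data.
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.standard_normal((200, 5)),
    columns=["equip_price", "interest_rate", "cpi", "gdp_growth", "rd_spending"],
)

# Hypothetical groupings: each small VAR pairs equipment prices with only a
# few other variables, rather than estimating one large ten-variable system.
var_groups = [
    ["equip_price", "interest_rate", "cpi"],
    ["equip_price", "gdp_growth", "rd_spending"],
]

for cols in var_groups:
    fitted = VAR(df[cols]).fit(6)   # six lags, as in the thesis
    oirf = fitted.irf(10)           # dynamic responses ten periods ahead
    # oirf.plot(orth=True) would draw the orthogonalized IRFs with error bands
```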

Finally, the VARs were also broken up into multiple smaller VARs because the OIRFs are sensitive to the ordering of the variables in the VAR equation. For instance, a two-variable VAR has only two possible orderings, whereas a VAR with four variables would have 24 permutations and a VAR with five variables would have 120 permutations. Such a system suffers from too much complexity and requires too many Cholesky decompositions. Because different orderings create different OIRF responses, an increased number of Cholesky decompositions would be needed to obtain the totality of OIRF responses.
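The ordering counts above are simply factorials of the number of variables; a quick check (with placeholder variable names) is:

```python
import math
from itertools import permutations

variables = ["y", "z", "w", "x"]              # a hypothetical four-variable VAR
orderings = list(permutations(variables))     # each ordering implies its own
                                              # Cholesky decomposition and OIRFs
print(len(orderings))                         # 24
print(math.factorial(2), math.factorial(5))   # 2 and 120 orderings
```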

Basic or standard VARs without constraints were used in this thesis. The autoregressive form underlying a basic VAR can be summarized as:

$$Y_t = \beta_0 + \beta_1 Y_{t-1} + \beta_2 Y_{t-2} + \dots + \beta_p Y_{t-p} + u_t \qquad (3.2)$$

where $E(u_t \mid Y_{t-1}, Y_{t-2}, \dots) = 0$. The lag number $p$ is the order, or lag length, of the autoregressive model. The multivariate autoregressive model allows for $k$ additional predictors, with $q_1$ lags of the first predictor variable and $q_2$ lags of the second, respectively. More precisely, a bivariate VAR model in its standard form is best articulated as two separate equations:

$$y_t = \beta_{10} + \beta_{12} z_t + \gamma_{11} y_{t-1} + \gamma_{12} z_{t-1} + \varepsilon_{yt} \qquad (3.3)$$
$$z_t = \beta_{20} + \beta_{21} y_t + \gamma_{21} y_{t-1} + \gamma_{22} z_{t-1} + \varepsilon_{zt} \qquad (3.4)$$

In this bivariate VAR, $y_t$ and $z_t$ are two different time series in which the current value of $y_t$ depends upon the past realizations of the sequence $z_t$; conversely, $z_t$ depends upon the past realizations of $y_t$ as well. It must be assumed that both time series are stationary and do not contain a unit root.
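For clarity, the same first-order system can be collected into matrix form (a sketch of the standard textbook notation, consistent with the equations above), with the contemporaneous terms moved to the left-hand side:

$$
\begin{bmatrix} 1 & -\beta_{12} \\ -\beta_{21} & 1 \end{bmatrix}
\begin{bmatrix} y_t \\ z_t \end{bmatrix}
=
\begin{bmatrix} \beta_{10} \\ \beta_{20} \end{bmatrix}
+
\begin{bmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{bmatrix}
\begin{bmatrix} y_{t-1} \\ z_{t-1} \end{bmatrix}
+
\begin{bmatrix} \varepsilon_{yt} \\ \varepsilon_{zt} \end{bmatrix}
$$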

Finally, $\varepsilon_{yt}$ and $\varepsilon_{zt}$ denote white-noise disturbance processes which are assumed to be uncorrelated with one another. Since the simple version of the bivariate VAR written above contains only one lag of each variable, it is a first-order VAR: its longest lag is equal to one. The model used in this thesis employs six lags in total and more than two variables, so a breakdown of a multivariate VAR is appropriate to highlight.
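Continuing the hypothetical setup from the earlier sketch, a three-variable, six-lag VAR of this kind could be estimated as follows (the simulated data and column names are placeholders, not the thesis's series):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated stand-ins for three stationary series (placeholder names).
rng = np.random.default_rng(1)
df = pd.DataFrame(
    rng.standard_normal((200, 3)),
    columns=["equip_price", "interest_rate", "cpi"],
)

fitted = VAR(df).fit(6)   # VAR(6): six lags of every variable in each equation
print(fitted.k_ar)        # 6 -> the fitted lag order
print(fitted.summary())   # one equation per variable, with all lagged coefficients
# VAR(df).fit(maxlags=8, ic="aic") would instead select the lag length by AIC
```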

One major issue with interpreting VAR output is that each lag provides a different coefficient. Moreover, each variable in the model appears as both a dependent variable and an independent variable. Because of this, VARs often have a great many coefficients: in a VAR with three variables and six lags, for instance, there are 54 lag coefficients in total. To address this, econometricians interpret VAR results with an additional tool, the impulse response function (IRF), a related variant of which is the generalized impulse response function (GIRF), or its orthogonalized version, the orthogonalized impulse response function (OIRF), which implements Cholesky decompositions. OIRFs are more accurate and provide more valid responses since they lack the correlated shocks from which the traditional IRF suffers (Ronayne, 2011, p. 3).
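The coefficient count and the OIRF computation can be illustrated with the same hypothetical three-variable, six-lag setup (statsmodels assumed; data simulated):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.standard_normal((200, 3)), columns=["a", "b", "c"])

fitted = VAR(df).fit(6)
print(fitted.coefs.shape)    # (6, 3, 3): 6 lags x 3 equations x 3 variables
print(fitted.coefs.size)     # 54 lag coefficients, excluding the intercepts

oirf = fitted.irf(10)        # impulse responses ten periods ahead
print(oirf.orth_irfs.shape)  # (11, 3, 3): orthogonalized (Cholesky-based) responses
```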

However, because OIRFs use Cholesky decompositions, as mentioned previously, different orderings of the variables (in a bivariate or otherwise multivariate model) can produce different results in the OIRFs. In spite of this, the differences are often negligible. Either way, the visual representation of an IRF or OIRF shows the impact of a one-unit ‘shock’ to the impulse (independent) variable on the response (dependent) variable. It is effectively a forecast of the response a given number of steps (time periods) into the future. The positive, negative, or horizontal movement of the data points in an IRF reflects the autoregression’s vector moving average representation and can also be fitted with confidence intervals.
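A minimal sketch of how this ordering sensitivity and the confidence bands might be inspected, again under the simulated, placeholder setup assumed above:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.standard_normal((200, 3)), columns=["a", "b", "c"])

# Two different Cholesky orderings of the same three variables.
oirf_1 = VAR(df[["a", "b", "c"]]).fit(6).irf(10).orth_irfs
oirf_2 = VAR(df[["c", "b", "a"]]).fit(6).irf(10).orth_irfs
print(np.abs(oirf_1).max(), np.abs(oirf_2).max())  # typically close, rarely identical

# Plotting the OIRFs with error bands (confidence intervals):
# VAR(df).fit(6).irf(10).plot(orth=True)
```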