
National Science Council of the Executive Yuan
Research Project Final Report

□ Interim progress report

An Empirical Analysis of Default Prediction with Different Structural Credit Risk Models

Project type: ■ Individual project □ Integrated project

Project number: NSC 96-2416-H-009-030-

Project period: October 1, 2007 to July 31, 2008

Principal investigator: Han-Hsing Lee (李漢星)

Co-principal investigators:

Project participants: 王志瑋, 邱婉茜

Report type (per the approved funding list): ■ Summary report □ Complete report

This report includes the following required attachments:

□ One report on an overseas business trip or study visit

□ One report on a business trip or study visit to mainland China

■ One report on attending an international academic conference, together with the paper presented

□ One foreign research report for an international collaborative research project

Handling: except for industry-academia collaboration projects, projects for upgrading industrial technology and cultivating talent, monitored projects, and the cases below, this report may be made publicly searchable immediately

□ Involving patents or other intellectual property rights; publicly searchable after □ one year □ two years

Host institution: Graduate Institute of Finance, National Chiao Tung University

Attachment 1

1. Introduction

This study examines whether, and by how much, generalizations of the prevailing structural credit risk models improve default prediction performance. Following the seminal works of Black and Scholes (1973) and Merton (1974), the structural credit risk modeling literature has developed into an important area of research. While most empirical studies test the performance of structural models in bond and credit derivatives pricing, few results are available for default prediction. In our study, we therefore compare various structural credit risk models in terms of their default prediction capability.

Credit risk models can be divided into two main categories: credit pricing models and portfolio credit value-at-risk (VaR) models. Credit pricing models can in turn be subdivided into two main approaches: structural-form models and reduced-form models. Portfolio credit VaR models, developed by banks and consultants, aim to measure the potential loss, at a predetermined confidence level, that a portfolio of credit exposures could suffer within a specified time horizon. These models typically employ simpler assumptions and pay less attention to the causes of an individual firm's default. Reduced-form models typically assume that an exogenous random variable drives default and do not condition default on the firm's value or other structural features, such as asset volatility and leverage; as a result, they cannot be used to predict the bankruptcy of an individual firm. Therefore, we limit our empirical analysis of default prediction to single-firm structural models.

Prior empirical studies of structural models in default prediction and default barriers, though only a handful, do not seem to reach a consensus. Chen, Hu, and Pan (2006) show that the Longstaff and Schwartz (1995) model performs poorly and is statistically no different from the flat barrier model without the random interest rate assumption. The simpler Black and Cox (1976) model outperforms the more complex Longstaff and Schwartz model, and they attribute the better performance to random recovery. In another study, Brockman and Turtle (2003) find that flat default barriers are significantly positive, while Wong and Choi (2006) find that default barriers are positive but not significant. These empirical results seem counterintuitive given the evolution of structural credit risk modeling, which motivates us to empirically test a more comprehensive set of structural models and to uncover the crucial factors in default prediction.

In our empirical study, we test various structural credit risk models extended from the Merton (1974) model. Succeeding structural models relax the restrictive assumptions originally made and seek to incorporate the most critical factors. Although these extensions introduce more realism into the model, they increase analytical complexity and implementation difficulty. The goal of this study is therefore to empirically test whether these complexities indeed improve performance in predicting corporate failure. Our focus is mainly on two aspects of these extensions: the bond safety covenant, in terms of continuous default, and the shareholders' discretion over the going-concern decision, in terms of endogenous barrier modeling. Using the Merton model as the base case, we can observe the performance enhancement, if any, from introducing continuous default, bankruptcy costs, and the tax effect.

The European option approach of Merton (1974) ignores the possibility of failure prior to debt maturity and implicitly models corporate debt and equity as path-independent securities of the underlying asset value process. Researchers therefore introduced the default barrier to remedy this deficiency. Among barrier models, we test the flat (constant) default barrier model of Brockman and Turtle (2003) and the exponential barrier model of Black and Cox (1976). An arguable assumption of these barrier models is that the default barrier is exogenously determined. Leland (1994) therefore developed an endogenous barrier model under a stationary debt structure, and we also include the endogenous barrier model in our empirical test.

Prior empirical studies indicate that structural models deliver poor empirical performance. Ericsson and Reneby (2004) argue that the inferior bond pricing performance of structural models may come from the estimation approaches traditionally used in empirical studies; the perceived advantage of reduced-form models may thus be a product of the estimation procedure rather than of the model structure. We therefore adopt a better estimation methodology, the maximum likelihood estimation (MLE) method proposed by Duan (1994) and Duan et al. (2004), which views the observed equity time series as a transformed data set of unobserved firm values, with the theoretical equity pricing formula serving as the transformation. This method has been evaluated by Ericsson and Reneby (2005) through simulation experiments, and their results show that the efficiency of MLE is superior to the volatility restriction approach commonly adopted in the literature. Another reason to employ MLE is that the main data required for this method in the context of structural models are common stock prices, which suffer from far fewer microstructure issues than bond prices.

Our paper contributes to the existing literature in two respects. First, in contrast to previous research, we adopt the theoretically superior MLE approach and empirically test the default prediction capabilities of various models under different default barrier assumptions. Second, although the default barrier has long been adopted in structural models, its validity was not empirically investigated until the studies by Brockman and Turtle (2003) and Wong and Choi (2006). One advantage of the MLE approach is that it can jointly estimate asset volatility and the default barrier; therefore, in addition to the flat barrier assumption, we can extend this investigation to the exponential barrier assumption.

Our empirical results surprisingly show that the simple Merton model has default prediction capability similar to that of the Black and Cox model. The Merton model even outperforms the Brockman and Turtle model, and the difference in predictive ability is statistically significant. These results hold for the in-sample, six-month, and one-year out-of-sample tests under both the broad definition of bankruptcy of Brockman and Turtle (2003) and a definition similar to that of Chen, Hu, and Pan (2006). In addition, we find that the inferior performance of the Brockman and Turtle model may be the result of its unreasonable flat barrier assumption. In the one-year out-of-sample test, the Leland model outperforms the Merton model in the non-financial sector, and the results hold for both definitions of default. Furthermore, these results are preserved in our robustness test when we use risk-neutral instead of physical default probabilities.

2. Literature Review: Previous Empirical Studies of Structural Credit Risk Models in Default Prediction

Brockman and Turtle (2003) investigate bankruptcy prediction performance under the down-and-out call (DOC) framework using a large cross-section of industrial firms from 1989 to 1998. They use the proxy approach, measuring the market value of a firm's assets as the book value of total liabilities plus the market value of equity as reported in Compustat. The asset volatility is measured as the square root of four times the quarterly variance, where the quarterly variance is computed from quarterly percentage changes in asset values for each firm in the sample with at least ten years of data. The promised debt payment is measured by all non-equity liabilities, computed as the total value of assets less the book value of shareholders' equity. Finally, the life span of each firm is set to ten years; a robustness test shows that barrier estimates are not particularly sensitive to this lifespan assumption.

The empirical evidence shows that the failure probabilities implied by the DOC framework never underperform the well-known accounting approach, Altman's Z-score. In logistic regressions including the implied failure probability, the Z-score, or both, the DOC approach dominates the Z-score in predicting corporate failure over one-, three-, and five-year horizons as well as in size- and book-to-market-categorized tests. In addition, in the quintile-based test, the failure probability of the DOC framework stratifies failure risk across firms and years much more effectively than the corresponding Z-score. Another empirical finding of Brockman and Turtle (2003) is that implied default barriers are statistically significant for a large cross-section of industrial firms. However, Wong and Choi (2006) argue that it is the proxy approach of Brockman and Turtle (2003) that leads to barrier levels above the value of corporate liabilities. They instead adopt the transformed-data MLE approach and find that default barriers are positive but not very significant in a large sample of industrial firms from 1993 to 2002.

Bharath and Shumway (2004) examine the default predictive ability of the KMV-Merton default probability for all non-financial firms over the period 1980 to 2003. The method they use to estimate the KMV expected default frequency (EDF) is the same iterated procedure employed by Vassalou and Xing (2004). They compare the KMV-Merton default probability with several variables (a naïve probability estimate without the iterated procedure, market equity, and past returns) and find that the KMV-Merton model does not produce a sufficient statistic for the probability of default. Implied default probabilities from CDS and corporate bond yield spreads are only weakly correlated with the KMV-Merton default probabilities after adjusting for agency ratings, bond characteristics, and their alternative predictor. Moreover, they find that their proposed naïve probability, which captures both the functional form and the basic inputs of the KMV-Merton default probability, performs slightly better as a predictor in hazard models and in out-of-sample forecasts. They conclude that the KMV-Merton probability is a marginally useful default forecaster, but not a sufficient statistic for default.

Recently, Chen, Hu, and Pan (2006) use the volatility restriction method to test five structural models, namely the Merton, Brockman and Turtle, Black and Cox, two-period Geske, and Longstaff and Schwartz models, as well as their proposed non-parametric model. The default companies in the study are those filing Chapter 11 from January 1985 to December 2002 with assets greater than $50 million. Their results indicate that the distributional characteristics of equity returns and endogenous recovery are two important assumptions. On the other hand, random interest rates, which play an important role in pricing credit derivatives, are not an important assumption in predicting default.

Another closely related study, regarding default probability estimation, is Leland (2004). Leland (2004) examines the default probabilities predicted by the Longstaff and Schwartz (1995) model, with an exogenous default boundary, and the Leland and Toft (1996) model, with an endogenous default boundary. Leland uses Moody's corporate bond default data from 1970 to 2000 and follows a calibration approach similar to Huang and Huang (2003). Rather than matching the observed default frequencies, Leland chooses common inputs across models to observe how well they match observed default statistics. The empirical results show that when costs and recovery rates are matched, the exogenous and endogenous default boundary models fit observed default frequencies equally well. The models predict longer-term default frequencies quite accurately, while shorter-term default frequencies tend to be underestimated. He therefore suggests that a jump component should be included in the asset value dynamics.

3. Empirical Methodology

Traditionally, structural credit risk models have been estimated by the volatility restriction approach or by even simpler methods such as the proxy approach. However, these approaches and their variants lack a statistical basis, and the empirical results they produce are less convincing. Newer estimation methods such as the transformed-data MLE have therefore been introduced into empirical research on structural models.

3.1 Maximum Likelihood Estimation Method

Duan (1994) developed a transformed-data MLE approach to estimate continuous-time models with unobservable variables using derivative prices. The obvious advantages are that (1) the resulting estimators are statistically efficient in large samples, and (2) the sampling distribution is readily available for computing confidence intervals or for hypothesis testing. In the context of structural credit risk models, equity prices are derivative prices written on the underlying asset value process and are readily available in large samples.

Let $X$ be an $n$-dimensional vector of unobserved variates. Assume that its density function, $f(x; \theta)$, exists and is continuously twice differentiable in both arguments. A vector of observed random variates, $Y$, results from a data transformation of the unobservable vector $X$. This transformation from $\mathbb{R}^n$ to $\mathbb{R}^n$ is a function of the unknown parameter $\theta$ and is one-to-one for every $\theta \in \Theta$, where $\Theta$ is an open subset of $\mathbb{R}^k$.

Denote this transformation by $T(\cdot\,; \theta)$, where $T(\cdot\,;\cdot)$ is continuously twice differentiable in both arguments. Accordingly, $Y = T(X; \theta)$ and $X = T^{-1}(Y; \theta)$. The log-likelihood function of the observed data $Y$ is $L(Y; \theta)$. By a change of variable, the log-likelihood function of the transformed data $Y$ can be expressed in terms of the log-likelihood function of the unobserved random vector $X$, denoted $L_X(\cdot\,; \theta)$, and the Jacobian $J$ of the transformation:

$$L(Y; \theta) = L_X\!\left(T^{-1}(Y; \theta); \theta\right) + \ln\left|J_{T^{-1}}(Y; \theta)\right| \qquad (1)$$

Implementation of the Transformed-data MLE in the Context of Structural Credit Risk Models (Duan et al. (2004)):

Step 1. Assign initial parameter values $\hat\theta^{(0)}$ and compute the implied asset value time series $\hat V_{ih}(\hat\theta^{(0)}) = T^{-1}(S_{ih}; \hat\theta^{(0)})$, where $h$ is the length of the time period between observations and $\hat\theta^{(m)}$ denotes the $m$-th iteration. Let $m = 1$.

Step 2. Maximize the log-likelihood function

$$L(S; \hat\theta^{(m)}) = L\!\left(\hat V_{ih}(\hat\theta^{(m)}),\ i = 1, \dots, n;\ \hat\theta^{(m)}\right) - \sum_{i=1}^{n} \ln \frac{dT}{dV}\!\left(\hat V_{ih}(\hat\theta^{(m)}); \hat\theta^{(m)}\right) \qquad (2)$$

to obtain the estimated parameters $\hat\theta^{(m)}$.

Step 3. Compute the implied asset value time series $\hat V_{ih}(\hat\theta^{(m)}) = T^{-1}(S_{ih}; \hat\theta^{(m)})$, let $m = m + 1$, and go back to Step 2 until the maximization criterion is met.

Step 4. Use the MLE $\hat\theta$ to compute the implied asset value $\hat V_{nh}$ and the corresponding default probability.
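To make Steps 1 to 4 concrete, the following is a minimal, illustrative Python/SciPy sketch of the transformed-data MLE for the simplest case, the Merton model, in which the transformation $T$ is the Black-Scholes call pricing formula and the Jacobian term uses $dS/dV = \Phi(d_1)$. All inputs (the equity series, face value, rate, and maturities) are hypothetical placeholders; for brevity the sketch maximizes the likelihood directly over $(\mu, \sigma)$ rather than alternating Steps 2 and 3, which is equivalent at the optimum. The study itself used Matlab, and barrier models would add the barrier to the parameter vector $\theta$.

```python
import numpy as np
from scipy.optimize import brentq, minimize
from scipy.stats import norm

def merton_equity(V, F, r, sigma, tau):
    # Equity as a Black-Scholes call on firm value V with strike F.
    d1 = (np.log(V / F) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return V * norm.cdf(d1) - F * np.exp(-r * tau) * norm.cdf(d2)

def implied_V(S, F, r, sigma, tau):
    # Steps 1 and 3: invert the equity formula, V = T^{-1}(S; theta).
    # call(V) < V and call(S + F) > S for r, tau > 0, so the root
    # is bracketed by [S, S + F].
    return brentq(lambda V: merton_equity(V, F, r, sigma, tau) - S, S, S + F)

def neg_loglik(params, S, F, r, taus, h):
    # Step 2: GBM log-likelihood of the implied asset values, adjusted by
    # the Jacobian dS/dV = N(d1), as in Equations (1) and (2).
    mu, sigma = params
    if sigma <= 0.0:
        return np.inf
    V = np.array([implied_V(s, F, r, sigma, t) for s, t in zip(S, taus)])
    ret = np.diff(np.log(V))
    ll = norm.logpdf(ret, (mu - 0.5 * sigma**2) * h, sigma * np.sqrt(h)).sum()
    ll -= np.log(V[1:]).sum()          # density of levels, not log-levels
    d1 = (np.log(V[1:] / F) + (r + 0.5 * sigma**2) * taus[1:]) \
         / (sigma * np.sqrt(taus[1:]))
    ll -= np.log(norm.cdf(d1)).sum()   # Jacobian term of the transformation
    return -ll

# Hypothetical one-year daily equity sample; F and r as in Section 3.2.
h = 1.0 / 252
rng = np.random.default_rng(0)
S = 0.4 * np.exp(np.cumsum(0.02 * rng.standard_normal(252)))  # placeholder
taus = 2.0 - h * np.arange(252)        # remaining option maturities
res = minimize(neg_loglik, x0=[0.1, 0.3], args=(S, 1.0, 0.05, taus, h),
               method="Nelder-Mead")   # Step 4 uses the resulting theta-hat
mu_hat, sigma_hat = res.x
```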

3.2 Monte Carlo Experiment

We follow Duan et al. (2004) and set the following parameter values for the simulation experiment: interest rate $r = 0.05$, asset drift $\mu_V = 0.1$, asset volatility $\sigma_V = 0.3$, initial firm value $V_0 = 1.0$, face value of debt $F = 1.0$, and option maturity $T = 2$. The sampling frequency is 252 days per year, and the $i$-th data point of the simulated time series has a remaining maturity of $(2 - i\delta)$ years, where $\delta = 1/252$. Finally, we vary the value of the default barrier in order to examine its effect on parameter estimation.
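As a rough, self-contained illustration of this design, the sketch below simulates daily-monitored GBM asset paths under these parameters and counts barrier breaches. It is not the exact sampling and survivorship scheme of Duan et al. (2004), so the estimate need not match the hitting probabilities reported in Table 1.

```python
import numpy as np

# Parameters from Section 3.2; H = 0.9 is the first barrier level in Table 1.
mu, sigma, V0, T, H = 0.1, 0.3, 1.0, 2.0, 0.9
steps = int(252 * T)                   # daily sampling, 252 days per year
delta = T / steps                      # delta = 1/252
n_paths = 10_000

rng = np.random.default_rng(0)
z = rng.standard_normal((n_paths, steps))
log_paths = np.log(V0) + np.cumsum(
    (mu - 0.5 * sigma**2) * delta + sigma * np.sqrt(delta) * z, axis=1)
hit = log_paths.min(axis=1) <= np.log(H)   # breach at any sampling date
print(f"daily-monitored barrier hitting probability: {hit.mean():.2%}")
```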

Our results in Table 1 are based on 1,000 simulated samples following the procedure of Duan et al. (2004) to mimic a daily sample of observed equity values for a surviving firm. We use the same Nelder-Mead numerical optimization algorithm (in the Matlab software package) as Wong and Choi (2006), and the initial value of the barrier is set to 0.5. The experimental results in Table 1 clearly show both the strength and the limitation of the MLE method. The method jointly estimates and recovers the true asset volatility and default barrier well when the barrier hitting probability of the asset value process is not too low. However, when the true default barrier is below 0.5 in our experiment, the barrier estimates are seriously biased. This bias in the low-hitting-probability cases is precisely what statistical theory predicts: the likelihood function is then flat and insensitive to changes in the barrier level. A barrier that is low relative to the firm value (equivalently, one with a low hitting probability) is immaterial; where exactly it is located does not materially affect equity values, so one cannot expect to pin down the barrier using the equity time series. An important consequence is that the testable hypothesis proposed by Brockman and Turtle (2003) should not be carried out using the barrier estimates. Brockman and Turtle (2003) use the nesting of the standard call option within the down-and-out barrier option model to argue that when the default barrier is zero, the down-and-out option collapses to the standard European call option. However, due to the nature of the likelihood function of the down-and-out framework, one cannot expect to pin down the barrier when it is low relative to the asset value, i.e., when the default probability is low. In that case, the low barrier estimate can vary over a wide range, since it affects neither the likelihood function nor the equity pricing results.

Fortunately, for the default prediction purpose of our empirical studies, this should present no practical difficulties: the bias in the low-barrier cases can hardly affect the default prediction results. Furthermore, a formal test should be carried out on default prediction performance using an alternative statistical test. In our study, we adopt the Receiver Operating Characteristic curve and the Accuracy Ratio for this purpose, as discussed in the following section.

3.3 Measuring the Capability of Predicting Financial Distress: Receiver Operating Characteristic Curve and Accuracy Ratio

To analyze the capability of predicting financial distress, we adopt the accuracy ratio (AR) and Receiver Operating Characteristic (ROC) methodology proposed by Moody's, which is also used by Vassalou and Xing (2004) and Chen, Hu, and Pan (2006). Stein (2002, 2005) argues that the power of a default model lies in its ability to detect true defaults, and its calibration to the data lies in its ability to detect true survivals. In the context of bankruptcy prediction, the ROC curve is a plot of the cumulative probability of the survival group against the cumulative probability of the default group. Classifying a firm as a default when its default probability exceeds a cut-off threshold, the survival sample contains true survivals and false defaults, and the default sample contains true defaults and false survivals. Thus, within the survival (default) group, the probabilities of true survival (default) and false default (survival) sum to unity.

The key statistic of the ROC methodology and of the closely related Cumulative Accuracy Profile (CAP) is the Accuracy Ratio (AR). The AR is defined as the ratio of the area $A$ of the tested model to the area $A_P$ of the perfect model, i.e., $AR = A / A_P$, where $0 \le AR \le 1$. Hence, the higher the AR, the more powerful the model.

In our study, we modify the approach of Chen, Hu, and Pan (2006) as follows (a code sketch of steps 1 to 7 appears after the list):

1. Rank all default probabilities ($P_{def}$) from largest to smallest.

2. Compute the 100 percentiles of the default probabilities ($P_{def}$).

3. Divide the sample into default and survival groups.

4. In the default group, compute the cumulative probability of exceeding each percentile of default probabilities; this is plotted on the y-axis.

5. In the survival group, compute the cumulative probability of exceeding each percentile of default probabilities; this is plotted on the x-axis.

6. Plot the ROC curve.

7. For each model, repeat steps 1 to 6 and calculate the AR.

8. Compute the test statistic z by the method of Hanley and McNeil (1983) for comparing the areas under ROC curves derived from the same cases.
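A minimal sketch of steps 1 to 7 follows, with illustrative names and simulated inputs. Under the construction above, the perfect model's area $A_P$ equals one, so the AR reduces to the trapezoidal area under the ROC curve; the z statistic of step 8 additionally requires the standard errors of the two areas and the correlation adjustment tabulated in Hanley and McNeil (1983), which is omitted here.

```python
import numpy as np

def roc_points(p_def, is_default):
    # Steps 1-5: thresholds at the 100 percentiles of the default
    # probabilities; x from the survival group, y from the default group.
    thresholds = np.percentile(p_def, np.arange(100, -1, -1))
    surv, dflt = p_def[~is_default], p_def[is_default]
    x = np.array([(surv > t).mean() for t in thresholds])
    y = np.array([(dflt > t).mean() for t in thresholds])
    return x, y

def accuracy_ratio(p_def, is_default):
    # Steps 6-7: trapezoidal area under the ROC curve (A_P = 1, AR = A/A_P).
    x, y = roc_points(p_def, is_default)
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

# Illustrative use with simulated default probabilities for one model.
rng = np.random.default_rng(1)
is_default = rng.random(1000) < 0.1
p_model = np.clip(0.4 * is_default + 0.6 * rng.random(1000), 0.0, 1.0)
print(f"AR = {accuracy_ratio(p_model, is_default):.4f}")
```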


4. Empirical Tests and Results

4.1 The Models

In our empirical study, we test three barrier structural credit risk models extended from the Merton (1974) model. We focus on two aspects of these extensions: the bond safety covenant, in terms of continuous monitoring and default (Brockman and Turtle, 2003; Black and Cox, 1976), and the shareholders' discretion over the going-concern decision, in terms of endogenous barrier modeling under the stationary debt structure assumption (Leland, 1994).

4.2 Data

In our empirical test, equity prices are collected from CRSP (the Center for Research in Security Prices) and financial statement information is retrieved from Compustat. The sampling period for the firms runs from January 1986 to December 2005, while the quarterly accounting information covers 1984 to 2005, since some firms under financial distress stop filing financial reports long before they are delisted from the stock exchanges. The accounting information used in our study comes from the quarterly reports in the CRSP/Compustat Merged (CCM) Database; this provides the most up-to-date debt level and payout information, especially for defaulted firms. We consider only ordinary common shares (CRSP share type codes with first digit 1) and exclude certificates, American trust components, and ADRs. Our final sample covers the 20-year period from 1986 to 2005 and includes 15,607 companies.

In our empirical test, we adopt two different definitions of default:

Definition I. The broad definition of bankruptcy by Brockman and Turtle (2003), which includes firms that are delisted because of bankruptcy, liquidation, or poor performance. A firm is considered performance delisted, in Brockman and Turtle's terminology, if it is given a CRSP delisting code of 400 or of 550 to 585. Note that firms delisted due to mergers, exchanges, or being dropped by the exchange for other reasons are considered survival firms.

Definition II. A definition of bankruptcy similar to that adopted by Chen, Hu, and Pan (2006): default firms are collected from the BankruptcyData.com database, which includes over 2,500 public and major company filings dating back to 1986. We match the performance-delisted samples with the companies collected from BankruptcyData.com and add back the liquidated firms (delisting code 400) to form our default group. All remaining firms are classified as survival firms. Note that, unlike Chen, Hu, and Pan (2006), we classify companies that filed bankruptcy petitions but were later acquired by or merged with other companies (delisting code 200) into the survival group.

We now describe our sample selection criteria. First, companies with more than one class of shares are excluded from our test. Second, since we also need accounting information to empirically test these models, firms without accounting information within the two quarters preceding the end of the estimation period are excluded. Third, firms that were active (delisting code 100) during our sampling period but were delisted in 2006 are excluded; this ensures that survival firms with delisting code 100 are financially healthy companies. Finally, to ensure an adequate sample size for the MLE approach, we consider only companies with more than 252 days of common share prices available.


At the end of this section, we present the key inputs for the structural models. Determining the amount of debt for our empirical study is not an obvious matter. As opposed to the simplest approach, used for example by Brockman and Turtle (2003), of setting the face value of debt equal to total liabilities, we adopt the rough formula provided by Moody's KMV: the value of current liabilities including short-term debt, plus half of the long-term debt. This formula is also adopted by other researchers such as Vassalou and Xing (2004).

Second, the payout rate $g$ captures the payouts, in the form of dividends, share repurchases, and bond coupons, to stockholders and bondholders. To estimate the payout rate, we adopt a weighted-average method similar to Eom, Helwege, and Huang (2004) and Ericsson, Reneby, and Wang (2006):

$$g = \frac{\text{Interest Expenses}}{\text{Total Liabilities}} \times \text{Leverage} + \text{Equity Payout Ratio} \times (1 - \text{Leverage}),$$

where $\text{Leverage} = \text{Total Liabilities} / (\text{Total Liabilities} + \text{Market Equity Value})$.

For the market equity value, we choose the number of shares outstanding times the market price per share closest to the financial statement date. The equity payout ratio is estimated as the total equity payout, i.e., the sum of cash dividends, preferred dividends, and purchases of common and preferred stock, divided by the total equity payout plus the market equity value. Third, since four models in our study assume a constant interest rate, one needs to feed an appropriate interest rate into the model estimation. The three-month T-bill rate from the Federal Reserve website is chosen as the risk-free rate. However, the three-month T-bill rate fluctuated from a high of 9.45% in March 1989 to a low of 0.81% in June 2003, and stood at 4.08% at the end of December 2005. Therefore, to ensure a proper discount rate for each firm across the long 20-year sampling period, the interest rate for each firm is estimated as the average of the 252 daily 3-month Constant Maturity Treasury (CMT) rates during its sampling period.

Finally, the Leland model requires debt coupons, and we follow Ericsson, Reneby, and Wang (2006) in setting the average coupon to the risk-free rate times total liabilities:

$$\text{Coupon} = \text{Total Liabilities} \times \text{Riskfree Rate}.$$

In addition, the Leland model considers tax deductibility as well as bankruptcy costs. We follow Eom, Helwege, and Huang (2004) and set the tax rate to 35% and the financial distress cost to 51.31%. Furthermore, we also follow Leland (1998) and Ericsson, Reneby, and Wang (2006) and set the tax rate to 20% as an alternative setting, to reflect the personal tax advantage of equity returns, which reduces the advantage of debt.
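The input rules above can be collected into one small sketch; the function and field names are illustrative placeholders rather than Compustat item codes.

```python
def structural_model_inputs(current_liabilities, long_term_debt,
                            total_liabilities, market_equity,
                            interest_expense, equity_payout, riskfree_rate):
    # Face value of debt: Moody's KMV rule of thumb.
    face_value = current_liabilities + 0.5 * long_term_debt
    # Weighted-average payout rate g, as defined above.
    leverage = total_liabilities / (total_liabilities + market_equity)
    equity_payout_ratio = equity_payout / (equity_payout + market_equity)
    g = (interest_expense / total_liabilities) * leverage \
        + equity_payout_ratio * (1.0 - leverage)
    # Leland model coupon: total liabilities times the risk-free rate.
    coupon = total_liabilities * riskfree_rate
    return face_value, g, coupon

# Example with hypothetical balance-sheet figures (in $ millions).
F, g, coupon = structural_model_inputs(
    current_liabilities=80.0, long_term_debt=40.0, total_liabilities=150.0,
    market_equity=200.0, interest_expense=9.0, equity_payout=6.0,
    riskfree_rate=0.04)
```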

4.3 Empirical Results

In our empirical test, we use the same Nelder-Mead numerical optimization algorithm (in the Matlab software package) as Wong and Choi (2006). The input parameters for the debt level, asset payouts, interest rates, coupons, tax rate, and financial distress cost are as described in Section 4.2, and the option time to maturity is two years. The original Merton, Brockman and Turtle, and Leland models do not assume an asset payout rate, but one can easily be added to the models. For comparison purposes, in the Black and Cox model we choose to estimate the default barrier, $H_F$, of each firm instead of its discount rate, $\gamma$; the discount rates are assumed to be the average risk-free rates for those firms during the equity time series sampling period.

The delisting date of a delisted firm is simply its very last security trading day, while the delisting date of an active firm (delisting code 100) is set to the last trading day of 2005. The equity time series for the in-sample estimation ends on the delisting date and extends back 252 trading days. The six-month (one-year) out-of-sample estimation uses the equity time series from 377 to 126 (503 to 252) trading days before the delisting date. The sample sizes of the in-sample, six-month out-of-sample, and one-year out-of-sample tests are 15,607, 14,775, and 13,750 firms, respectively. The differences in sample size come from the availability of equity trading data: as we push the estimation period backward in time, we lose some firms due to their relatively short lives. After numerical optimization, the final samples of the in-sample, six-month out-of-sample, and one-year out-of-sample tests include 15,598, 14,765, and 13,744 firms.

4.3.1 Testing Results of Default Definition I

We present in Figure 1 the in-sample ROC curves of the tested models. The six-month and one-year out-of-sample curves are similar and are thus omitted from this report to save space. Formal statistical tests are carried out using the Accuracy Ratios (ARs) and the z statistics; the z statistics of the tested models relative to the Merton model are reported in parentheses in Table 2. In accordance with the results of the decile-based analysis, the Brockman and Turtle model is clearly inferior to the Merton and the Black and Cox models. The Leland model also underperforms the Merton model in the in-sample test under both tax rate settings.

Our empirical results show that the simple Merton model surprisingly outperforms the flat barrier model in default prediction. Furthermore, the performance of the Merton model is similar to that of the Black and Cox model in all tests: the Black and Cox model has slightly higher ARs than the Merton model, but the differences are not statistically significant based on the z test. Moreover, the Merton model also performs significantly better than the Leland model in the in-sample test.

The z tests indicate that the difference in prediction capability between the Merton model and the flat barrier model is statistically significant, and this holds for both the in-sample and out-of-sample tests. Although the down-and-out option framework theoretically nests the standard call option model, in practice it may not perform better in default prediction. Several possible reasons may explain our empirical results.

One possible explanation is that the continuous monitoring assumption of the flat barrier model makes default possible before the debt maturity and thus increases the estimated default probabilities of the survival firms. One may argue that the implied default probabilities of the default firms increase as well; however, the magnitudes of the increases need not be the same, and we indeed observe this in our empirical results.

For example, Alfacell Corporation (CRSP permanent company number 35), a survival firm, clearly illustrates this issue, as shown in Figure 2. Alfacell experienced a drastic fall in its share price in 2005, yet it survived through the end of 2006. In Figure 2, we present the one-year market equity, the estimated firm value of the Merton model, the estimated firm value of the Brockman and Turtle model, the implied barrier, and the debt level from the KMV formula. Both models generate reasonable firm value estimates under their respective assumptions. The estimated firm values of the flat barrier model are higher than those of the Merton model due to the existence of the bondholders' claims, which are modeled as a down-and-in option. The implied default probability of Alfacell Corporation is merely 0.04% under the Merton model, while the default probability under the flat barrier model is as high as 61.21%. The gigantic difference comes from the implied default barrier: the debt level from the KMV formula is $1.75 million, but the implied barrier from the Brockman and Turtle model is $31.37 million! Such a high implied barrier leads to a high default probability in the flat barrier model. In contrast, default in the Merton model is related only to the debt level at debt maturity, and thus the default probability is very low. Note that to guard against a local optimum in the barrier estimate, we also used another optimization routine, the fmincon function in Matlab, to re-estimate the Alfacell case, and obtained the same implied default barrier.

One may argue that imposing constraints on the default barrier can resolve this issue. However, the high implied default barrier is a result of the return distribution of the equity value process. Imposing constraints clearly violates the fundamentals of the maximum likelihood estimation method and hinders the MLE from searching for the global optimum. In the case of Alfacell Corporation, the likelihood values of the Brockman and Turtle model and the Merton model are 566.397 and 562.288, respectively, indicating that the introduction of the barrier does improve the fit to the return distribution of the equity value process. Furthermore, the equity pricing function of the flat barrier model in Equation (A.3) does not pre-specify the location of the barrier: the flat default barrier can be higher than the debt level, as assumed in the Brockman and Turtle model. Accordingly, the fundamental issue is that the flat barrier assumption itself may be unreasonable and unrealistic. Finally, we note that such an extraordinarily high implied default barrier cannot occur in the Black and Cox model, since it assumes that the default barrier is lower than the debt level; as a result, the implied default probability of Alfacell Corporation is only 0.06% under the Black and Cox model.

Another possible explanation concerns our measure of default prediction capability. The AR preserves only the ranking information of the default probabilities in our empirical test. The flat barrier model may generate a default probability distribution closer to the true distribution than that of the Merton model, but it is the tails of the default probability distributions of the survival and default groups that truly determine the ARs. Finally, we cannot completely rule out the possibility of local optima, since high-dimensional optimization is well known to sometimes miss the global optimum. The superior default prediction capability of the Merton model may come from better estimates of the model parameters due to its simpler likelihood function and the lower dimension of its optimization problem.

We next turn to the sub-sample analysis of financial (Table 3) versus non-financial (Table 4) firms. Financial companies have industry-specific high leverage ratios and thus are typically not modeled well in the finance literature. Consistent with the findings of Chen, Hu, and Pan (2006), we find that the Brockman and Turtle model performs much better in the financial sector than in the industrial sector, while the Merton and the Black and Cox models perform better in the industrial sector. Accordingly, the difference in default prediction power between the flat barrier and Merton models is no longer significant in the financial sector.

Another important finding is that the Leland model outperforms the Merton model in the non-financial sector, and the differences are significant in the six-month and one-year out-of-sample tests. The Leland model shows a large difference in default predictability between the financial and non-financial sectors, which underlies its superior predictive power in the non-financial sector.

4.3.2 Testing Results of Default Definition II

In this section, we regroup our survival and default groups using the bankruptcy definition similar to that of Chen, Hu, and Pan (2006), i.e., Definition II described in Section 4.2. The number of default firms under this definition falls sharply: from 4,871 to 1,325 in the in-sample test, and from 4,533 to 1,260 (4,107 to 1,183) in the six-month (one-year) out-of-sample test. The accuracy ratios and z statistics are reported in Table 5.

From Table 5, our results still show that the Merton model outperforms the flat barrier model, and the difference in default prediction capability is statistically significant, as in Section 4.3.1. The prediction capabilities of the Merton and the Black and Cox models remain similar. In addition, all models perform slightly worse than under the broad definition of bankruptcy, with differences of around 2% across models and tests. The reason may be the uncertainty of bankruptcy filings among companies delisted from the stock exchange. The MLE approach can capture information from the market equity values of poorly performing, delisted firms and deliver default probabilities for them; whether those firms eventually file for bankruptcy, however, may depend on many firm-specific human and corporate factors that are not easily captured by the dynamics of market equity values alone.

Tables 6 and 7 report the financial versus non-financial sector analysis. The relative performances among the models are similar to those under the broad definition of bankruptcy in Section 4.3.1. Unlike under the broad definition, not only does the Brockman and Turtle model perform much better in the financial sector, but the Merton, the Black and Cox, and the Leland models also perform better in the financial sector. Moreover, the accuracy ratios of the flat barrier model are even higher than those of the Merton and the Black and Cox models in the financial sector, although the differences are not statistically significant. In the non-financial sector, the Leland model still performs better than the Merton model, but it no longer significantly outperforms the Merton model in the six-month out-of-sample test.

5. Summary and Conclusions

Our empirical results surprisingly show that the simple Merton model has default prediction capability similar to that of the Black and Cox model. The Merton model even outperforms the Brockman and Turtle model, and the difference in predictive ability is statistically significant. These results hold for the in-sample, six-month, and one-year out-of-sample tests under both the broad definition of bankruptcy of Brockman and Turtle (2003) and the definition similar to Chen, Hu, and Pan (2006). In addition, we find that the inferior performance of the Brockman and Turtle model may result from its unreasonable assumption that the flat barrier itself can exceed the face value of debt. In the one-year out-of-sample test, the Leland model outperforms the Merton model in the non-financial sector, and the results hold for both definitions of default. These results are preserved in our robustness test when we use risk-neutral instead of physical default probabilities.

In addition, in terms of the differences in default probabilities between the barrier models and the Merton model, our results indicate that the introduction of default barriers has little influence on the default probabilities of a large portion of the survival firms and of as many as 30% of the firms in the default group. This is consistent with the results of Wong and Choi (2006) and does not support the finding of Brockman and Turtle (2003) that default barriers are significantly positive. We note that the models investigated in our study incorporate only a net-worth covenant: firms default only when the market value of their assets falls below a certain boundary. A recent empirical study by Davydenko (2005) finds a much more complex picture of financial distress: default of distressed firms may be triggered by either low asset value or a liquidity shortage, and the importance of liquidity varies cross-sectionally with the cost of external financing. Moreover, many low-value and low-liquidity firms are able to avoid default.

In summary, our empirical results indicate that exogenous default barriers, flat or exponential, are not crucial for default prediction. In contrast, endogenous barrier modeling significantly improves long-term prediction for non-financial firms. However, the advantage of the Leland model over the Merton model weakens as the default prediction horizon shortens.


Table 1 A Monte Carlo Study of the MLE Method for the Brockman and Turtle (2003) Model

Model parameters: $F = 1$, $T = 2$; true values $\mu_V = 0.1$ and $\sigma_V = 0.3$ throughout.

| True $H$ (Barrier) | Hitting Probability | Statistic | $\hat{\mu}_V$ | $\hat{\sigma}_V$ | $\hat{H}$ |
|---|---|---|---|---|---|
| 0.9 | 67.746936% | Mean | 0.36377 | 0.30211 | 0.89479 |
| | | Median | 0.34914 | 0.29857 | 0.89837 |
| | | Std. Dev. | 0.21523 | 0.04856 | 0.07941 |
| 0.8 | 39.585685% | Mean | 0.24807 | 0.29789 | 0.79156 |
| | | Median | 0.22296 | 0.29490 | 0.80203 |
| | | Std. Dev. | 0.21503 | 0.04449 | 0.11039 |
| 0.75 | 28.074173% | Mean | 0.23082 | 0.30232 | 0.69968 |
| | | Median | 0.17726 | 0.29878 | 0.74795 |
| | | Std. Dev. | 0.24533 | 0.05624 | 0.18828 |
| 0.7 | 18.671759% | Mean | 0.19528 | 0.29924 | 0.61289 |
| | | Median | 0.17426 | 0.29643 | 0.69106 |
| | | Std. Dev. | 0.23842 | 0.03912 | 0.22313 |
| 0.6 | 6.409692% | Mean | 0.11387 | 0.29343 | 0.49035 |
| | | Median | 0.09683 | 0.29164 | 0.57849 |
| | | Std. Dev. | 0.26237 | 0.03410 | 0.24217 |
| 0.5 | 1.347824% | Mean | 0.11484 | 0.29314 | 0.41125 |
| | | Median | 0.11833 | 0.29224 | 0.35967 |
| | | Std. Dev. | 0.28141 | 0.03252 | 0.24325 |
| 0.4 | 0.127036% | Mean | 0.09522 | 0.29244 | 0.41637 |
| | | Median | 0.07599 | 0.29224 | 0.35732 |
| | | Std. Dev. | 0.29297 | 0.03222 | 0.24788 |
| 0.0000001 | 0.000000% | Mean | 0.08946 | 0.29237 | 0.40017 |
| | | Median | 0.08844 | 0.29143 | 0.29074 |
| | | Std. Dev. | 0.29598 | 0.03291 | 0.24124 |


Table 2 Accuracy Ratios and z Statistics of Physical Probabilities (Default Definition I; All Sample)

| Accuracy Ratio | Merton | Brockman and Turtle | Black and Cox | Leland (TC=0.2) | Leland (TC=0.35) |
|---|---|---|---|---|---|
| One Week (In Sample) | 0.9357 | 0.9253 (-5.8513) | 0.9365 (0.7667) | 0.9314 (-2.3810) | 0.9315 (-2.1933) |
| Six Months (Out of Sample) | 0.8749 | 0.8531 (-8.5565) | 0.8768 (1.5632) | 0.8705 (-1.6938) | 0.8711 (-1.3984) |
| One Year (Out of Sample) | 0.8422 | 0.8156 (-8.8537) | 0.8433 (0.8055) | 0.8442 (0.6621) | 0.8449 (0.8316) |

Notes: z statistics against the Merton model in parentheses. In-sample one-week: 15,598 firms (10,727 survival, 4,871 performance-delisted). Out-of-sample 6-month: 14,765 firms (10,232 survival, 4,533 performance-delisted). Out-of-sample 1-year: 13,744 firms (9,637 survival, 4,107 performance-delisted).

Table 3 Accuracy Ratios and z Statistics of Physical Probabilities (Default Definition I; Financial Firms)

| Accuracy Ratio | Merton | Brockman and Turtle | Black and Cox | Leland (TC=0.2) | Leland (TC=0.35) |
|---|---|---|---|---|---|
| One Week (In Sample) | 0.8939 | 0.8900 (-0.5698) | 0.8926 (-0.3598) | 0.8750 (-3.1532) | 0.8744 (-3.0896) |
| Six Months (Out of Sample) | 0.8496 | 0.8539 (0.5305) | 0.8520 (0.5858) | 0.8209 (-3.9062) | 0.8199 (-3.7674) |
| One Year (Out of Sample) | 0.8319 | 0.8240 (-0.8894) | 0.8333 (0.3162) | 0.8083 (-2.7714) | 0.8097 (-2.6578) |

Notes: z statistics against the Merton model in parentheses. In-sample one-week: 2,809 firms (2,409 survival, 400 performance-delisted). Out-of-sample 6-month: 2,694 firms (2,313 survival, 381 performance-delisted). Out-of-sample 1-year: 2,556 firms (2,195 survival, 361 performance-delisted).


Table 4 Accuracy Ratios and z Statistics of Physical Probabilities (Default Definition I; Non-Financial Firms)

| Accuracy Ratio | Merton | Brockman and Turtle | Black and Cox | Leland (TC=0.2) | Leland (TC=0.35) |
|---|---|---|---|---|---|
| One Week (In Sample) | 0.9371 | 0.9255 (-6.2231) | 0.9380 (0.8838) | 0.9373 (0.0707) | 0.9376 (0.2090) |
| Six Months (Out of Sample) | 0.8714 | 0.8437 (-10.0585) | 0.8729 (1.1717) | 0.8777 (2.2951) | 0.8786 (2.5352) |
| One Year (Out of Sample) | 0.8379 | 0.8054 (-9.8963) | 0.8385 (0.3975) | 0.8543 (5.1790) | 0.8555 (5.2588) |

Notes: z statistics against the Merton model in parentheses. In-sample one-week: 12,789 firms (8,318 survival, 4,471 performance-delisted). Out-of-sample 6-month: 12,071 firms (7,919 survival, 4,152 performance-delisted). Out-of-sample 1-year: 11,188 firms (7,442 survival, 3,746 performance-delisted).

Table 5 Accuracy Ratios and z Statistics of Physical Probabilities (Default Definition II; All Sample)

| Accuracy Ratio | Merton | Brockman and Turtle | Black and Cox | Leland (TC=0.2) | Leland (TC=0.35) |
|---|---|---|---|---|---|
| One Week (In Sample) | 0.9152 | 0.9006 (-5.3136)* | 0.9121 (-1.5613) | 0.9145 (-0.2014) | 0.9147 (-0.1159) |
| Six Months (Out of Sample) | 0.8574 | 0.8278 (-7.5183) | 0.8561 (-0.6197) | 0.8587 (0.2765) | 0.8589 (0.3166) |
| One Year (Out of Sample) | 0.8166 | 0.7811 (-7.6036) | 0.8147 (-0.7825) | 0.8242 (1.5007) | 0.8243 (1.4354) |

Notes: z statistics against the Merton model in parentheses. In-sample one-week: 15,598 firms (14,273 survival, 1,325 default). Out-of-sample 6-month: 14,765 firms (13,498 survival, 1,267 default). Out-of-sample 1-year: 13,744 firms (12,561 survival, 1,183 default).


Table 6 Accuracy Ratios and z Statistics of Physical Probabilities (Default Definition II; Financial Firms)

| Accuracy Ratio | Merton | Brockman and Turtle | Black and Cox | Leland (TC=0.2) | Leland (TC=0.35) |
|---|---|---|---|---|---|
| One Week (In Sample) | 0.8969 | 0.8999 (0.3203) | 0.8879 (-1.2743) | 0.8814 (-1.4651) | 0.8814 (-1.3993) |
| Six Months (Out of Sample) | 0.8669 | 0.8712 (0.3806) | 0.8695 (0.3352) | 0.8523 (-1.2010) | 0.8515 (-1.1895) |
| One Year (Out of Sample) | 0.8582 | 0.8600 (0.1276) | 0.8595 (0.1715) | 0.8460 (-0.9036) | 0.8444 (-0.9682) |

Notes: z statistics against the Merton model in parentheses. In-sample one-week: 2,809 firms (2,698 survival, 111 default). Out-of-sample 6-month: 2,694 firms (2,588 survival, 106 default). Out-of-sample 1-year: 2,556 firms (2,453 survival, 103 default).

Table 7 Accuracy Ratios and z Statistics of Physical Probabilities (Default Definition II; Non-Financial Firms)

| Accuracy Ratio | Merton | Brockman and Turtle | Black and Cox | Leland (TC=0.2) | Leland (TC=0.35) |
|---|---|---|---|---|---|
| One Week (In Sample) | 0.9117 | 0.8945 (-5.8440) | 0.9088 (-1.4072) | 0.9133 (0.4425) | 0.9137 (0.5129) |
| Six Months (Out of Sample) | 0.8482 | 0.8128 (-8.2882) | 0.8457 (-1.0880) | 0.8556 (1.5270) | 0.8561 (1.5317) |
| One Year (Out of Sample) | 0.8041 | 0.7614 (-8.4366) | 0.8012 (-1.1194) | 0.8209 (3.0848) | 0.8215 (3.0167) |

Notes: z statistics against the Merton model in parentheses. In-sample one-week: 12,789 firms (11,575 survival, 1,214 default). Out-of-sample 6-month: 12,071 firms (10,910 survival, 1,161 default). Out-of-sample 1-year: 11,188 firms (10,108 survival, 1,080 default).


Figure 1 ROC Curves - One-Week In-Sample Test (All Sample)

[Figure: in-sample ROC curves of the Merton, BT, BC, and Leland models; x-axis: default probability of the survival group ordered by percentiles; y-axis: default probability of the default group ordered by percentiles.]

Figure 2 An Illustration of the Problem of the Brockman and Turtle Model: Alfacell Corporation

[Figure: one-year time series (252 trading days, values in million dollars) of Alfacell's market equity, the estimated firm value under the Merton model, the estimated firm value under the BT model, and the debt level.]


References

Anderson, R. and S. Sundaresan, 1996, "Design and Valuation of Debt Contracts," Review of Financial Studies 9, 37-68.

Anderson, R. and S. Sundaresan, 2000, "A Comparative Study of Structural Models of Corporate Bond Yields: An Exploratory Investigation," Journal of Banking and Finance 24, 255-269.

Bharath, S. T. and T. Shumway, 2004, "Forecasting Default with the KMV-Merton Model," Working Paper, University of Michigan.

Black, F. and J. C. Cox, 1976, “Valuing Corporate Securities: Some Effects of Bond Indenture Provisions,” Journal of Finance 31, 351-367.

Black, F. and M. Scholes, 1973, "The Pricing of Options and Corporate Liabilities," Journal of Political Economy 81, 637-654.

Briys, E. and F. de Varenne, 1997, "Valuing Risky Fixed Rate Debt: An Extension," Journal of Financial and Quantitative Analysis 32, 239-248.

Brockman, P. and H. J. Turtle, 2003, “A Barrier Option Framework for Corporate Security Valuation,” Journal of Financial Economics 67, 511-529.

Bruche, M., 2005, "Estimating Structural Bond Pricing Models via Simulated Maximum Likelihood," Working Paper, London School of Economics.

Chen, R., F. J. Fabozzi, G. Pan, and R. Sverdlove, 2007, "Sources of Credit Risk: Evidence from Credit Default Swaps," Journal of Fixed Income, forthcoming.

Chen, R., S. Hu, and G. Pan, 2006, "Default Prediction of Various Structural Models," Working Paper, Rutgers University, National Taiwan University, and National Ping-Tung University of Sciences and Technologies.

Crosbie, P. and J. Bohn, 2003, “Modeling Default Risk,” Moody’s KMV Technical Document. Available at http://www.defaultrisk.com/pp_model_35.htm.

Davydenko, S. A., 2005, “When Do Firms Default? A Study of the Default Boundary,” Working Paper, University of Toronto.

Delianedis, G. and R. Geske, 2001, “The Components of Corporate Credit Spreads: Default, Recovery, Tax, Jumps, Liquidity, and Market Factors,” Working Paper, UCLA.

Duan, J. C., 1994, “Maximum Likelihood Estimation Using Pricing Data of the Derivative Contract,” Mathematical Finance 4, 155-167.

Duan, J. C., 2000, “Correction: Maximum Likelihood Estimation Using Pricing Data of the Derivative Contract,” Mathematical Finance 10, 461-462.

Duan, J. C., G. Gauthier, and J. G. Simonato, 2004, "On the Equivalence of the KMV and Maximum Likelihood Methods for Structural Credit Risk Models," Working Paper, University of Toronto.


Duan, J. C., G. Gauthier, J. G. Simonato, and S. Zaanoun, 2003, “Estimating Merton’s Model by Maximum Likelihood with Survivorship Consideration,” Working Paper, University of Toronto.

Eom, Y. H., J. Helwege, and J. Huang, 2004, "Structural Models of Corporate Bond Pricing: An Empirical Analysis," Review of Financial Studies 17, 499-544.

Ericsson, J. and J. Reneby, 2004, “An Empirical Study of Structural Credit Risk Models Using Stock and Bond Prices,” Journal of Fixed Income 13, 38-49.

Ericsson, J. and J. Reneby, 2005, "Estimating Structural Bond Pricing Models," Journal of Business 78, 707-735.

Ericsson, J., J. Reneby, and H. Wang, 2006, "Can Structural Models Price Default Risk? Evidence from Bond and Credit Derivative Markets," Working Paper, McGill University and Stockholm School of Economics.

Hanley, J. A. and B. J. McNeil, 1982, "The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve," Radiology 143, 29-36.

Hanley, J. A. and B. J. McNeil, 1983, "A Method of Comparing the Areas under Receiver Operating Characteristic Curves Derived from the Same Cases," Radiology 148, 839-843.

Hsu, J. C., J. Saà-Requejo, and P. Santa-Clara, 2003, "Bond Pricing with Default Risk," Working Paper, UCLA and Vector Asset Management.

Huang, J. and M. Huang, 2003, "How Much of the Corporate-Treasury Yield Spread Is Due to Credit Risk?" Working Paper, Penn State University and Stanford University.

Jarrow, R. A. and P. Protter, 2004, "Structural versus Reduced Form Models: A New Information Based Perspective," Journal of Investment Management 2, 1-10.

Kim, I., K. Ramaswamy, and S. Sundaresan, 1993, “Does Default Risk in Coupons Affect the Valuation of Corporate Bonds?: A Contingent Claims Model,” Financial Management 22, 117-131.

Leland, H. E., 1994, "Corporate Debt Value, Bond Covenants, and Optimal Capital Structure," Journal of Finance 49, 1213-1252.

Leland, H. E., 2004, "Prediction of Default Probabilities in Structural Models of Debt," Journal of Investment Management 2, No. 2.

Leland, H. E. and K. B. Toft, 1996, “Optimal Capital Structure, Endogenous Bankruptcy, and the Term Structure of Credit Spreads,” Journal of Finance 51, 987-1019.

Lin, H., 2006, "Valuing Corporate Securities: Some Effects of Bond Indenture Provisions: A Correction," Working Paper, National Cheng Kung University.

Longstaff, F. and E. Schwartz, 1995, "A Simple Approach to Valuing Risky Fixed and Floating Rate Debt," Journal of Finance 50, 789-819.


Security Valuation,” Bourne Lyden Capital Partners and Barclays Global Investors, San Francisco, CA.

Merton, R. C., 1974, "On the Pricing of Corporate Debt: The Risk Structure of Interest Rates," Journal of Finance 29, 449-470.

Saunders, A. and L. Allen, 2002, Credit Risk Measurement, New York: John Wiley & Sons, Inc.

Stein, R. M., 2002, "Benchmarking Default Prediction Models: Pitfalls and Remedies in Model Validation," Moody's KMV White Paper.

Stein, R. M., 2005, “The Relationship between Default Prediction and Lending Profits: Integrating ROC Analysis and Loan Pricing,” Journal of Banking and Finance 29, 1213-1236.

Vassalou, M. and Y. Xing, 2004, “Default Risk in Equity Returns,” Journal of Finance 59, 831-868.

Wei, D. and D. Guo, 1997, “Pricing Risky Debt: An Empirical Comparison of the Longstaff and Schwartz and Merton Models,” Journal of Fixed Income 7, 8-28.

Wong, H. Y. and T. W. Choi, 2006, "Estimating Default Barriers from Market Information," Working Paper, The Chinese University of Hong Kong and CITIC Ka Wah Bank.

Wong, H. Y. and K. L. Li, 2006, "On Bias of Testing Merton's Model," Working Paper, The Chinese University of Hong Kong.


Self-Evaluation of Project Results

This research proceeded according to the original project schedule and was presented on July 3, 2008 at an international conference (the Annual Conference on Pacific Basin Finance, Economics, Accounting and Management). The study compares the default prediction capabilities of four well-known structural credit risk models: Merton (1974), Brockman and Turtle (2003), Black and Cox (1976), and Leland (1994). This comparison addresses a gap in the existing empirical analysis of structural credit risk models. The empirical results indicate that, relative to the Merton model, the exogenous default barrier assumption is not the most important factor; in contrast, the endogenous default barrier assumption significantly improves long-term default prediction for non-financial firms. These results can serve as a reference for the future development of structural credit risk models. The study is currently undergoing further revision, with additional robustness tests, and should be submitted to an international academic journal in the near future.
