
A clustering time series model for the optimal hedge ratio decision making

Yu-Chia Hsu a,*, An-Pin Chen b

a Department of Sport Information and Communication, National Taiwan University of Physical Education and Sport, No. 52-16, Section 2, Xuefu Road, Puzi City, Chiayi County 613, Taiwan, ROC
b Institute of Information Management, National Chiao Tung University, No. 1001, University Road, Hsinchu City 300, Taiwan, ROC

* Corresponding author. Tel.: +886 5 3621260x5519. E-mail addresses: hyc0212@gmail.com, ychsu@ntupes.edu.tw (Y.-C. Hsu), apc@iim.nctu.edu.tw (A.-P. Chen).

Article info

Article history:
Received 21 January 2013
Received in revised form 16 April 2013
Accepted 23 January 2014
Available online 14 February 2014
Communicated by A. Abraham

Keywords:
Optimal hedge ratio
Financial time series
GHSOM
Cluster analysis

Abstract

In this study, a novel procedure combining computational intelligence and statistical methodologies is proposed to improve the accuracy of minimum-variance optimal hedge ratio (OHR) estimation over various hedging horizons. The time series of financial asset returns are clustered hierarchically using a growing hierarchical self-organizing map (GHSOM) based on the dynamic behaviors of market fluctuation, extracted by measuring the variances, covariance, price spread, and their first and second differences. Instead of using the original observations, observations with similar patterns in the same cluster, weighted by a resampling process, are collected to estimate the OHR. Four stock market indexes and related futures contracts, including the Taiwan Weighted Index (TWI), Standard & Poor's 500 Index (S&P 500), Financial Times Stock Exchange 100 Index (FTSE 100), and NIKKEI 255 Index, are adopted in empirical experiments to investigate the correlation between hedging horizon and performance. The results of the experiments demonstrate that the proposed approach can significantly improve OHR decisions for mid-term and long-term hedging compared with the traditional ordinary least squares and naïve models.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Hedging has been of interest to both academics and practitioners since the emergence of financial derivatives markets. It is carried out by establishing a position in a derivative instrument to offset exposure to price fluctuations opposite to that of the underlying assets, in order to minimize exposure to unwanted risk. To achieve this, the hedger determines a hedge ratio, i.e. the number of futures contracts to buy or sell for each unit of the underlying asset on which he bears price risk. Therefore, an investor's hedging decisions depend heavily on models that can capture the dynamic evolution of the pairwise correlations between the futures and spot markets and give appropriate estimates of these hedge ratios.

In the existing literature, the most widely used hedging strategy is to adopt the minimum-variance hedge ratio [1]. The optimal hedge ratio (OHR) is suggested to be obtained by simply regressing the spot market return on the futures market return using ordinary least squares (OLS) under the minimum-variance criterion [2]. However, these approaches yield the classical time-invariant OHR, which appears inappropriate given the time-varying nature of many financial markets. Improvements have been made to capture time-varying features, such as by adopting dynamic hedging strategies based on the bivariate generalized autoregressive conditional heteroskedasticity (GARCH) framework [45–49] or the stochastic volatility (SV) model [5,50]. Although these studies were successful in capturing time-varying features, some concerns are raised when considering long hedging horizons [3,4] and the distribution of the data [6,7].

More recently, other approaches based on non-parametric models have been proposed to avoid undue restrictions. Apart from classical statistical methodology, Markov regime switching (MRS) [8], the Kalman filter [9], wavelet analysis [10–12], and particle swarm optimization (PSO) [13] have been adopted as analysis tools or as new approaches to hedge ratio research. Although some of these models are computationally more burdensome, the accuracy of the results has been improved and better hedging performance obtained. However, rather limited research effort has been devoted to improving the classic textbook OLS-based method using interdisciplinary skills and knowledge across classical statistics and computational time series clustering techniques [59–61].

In this paper, we address the issue of bivariate time series from the OHR estimation perspective and investigate the feasibility of OHR estimation using a pattern recognition technique to collect similar data samples for variance and covariance estimation. In contrast to classical OHR estimation under the minimum-variance framework, a two-phase model, which conserves the classic minimum-variance theoretical framework but avoids the complex restrictions and assumptions of the OLS-based approach, is proposed based on the growing hierarchical self-organizing map (GHSOM). The goal of the first phase is to recognize the data samples that have similar patterns; we suggest several features that represent the dynamic behaviors of the bivariate variance–covariance structure for similarity measurement using the GHSOM. In the second phase, we suggest replacing the raw data samples of the bivariate time series with the similar ones via the proposed data resampling and weighting process in order to modify the distribution of the raw data.

This paper is organized in five main sections. A literature review of recent developments in the relevant methods of estimating the hedge ratio and in time series clustering techniques is given in Section 2. The proposed model for OHR decision making is described in Section 3. Section 4 presents the experiments and discusses the empirical test results. Concluding remarks and suggestions for future work are provided in Section 5.

2. Literature review of related work

2.1. Minimum variance hedge

The basic concept of hedging involves the elimination of fluctuations in the value of a spot position by taking a futures position opposite to the position held in the spot market. For a long position in the spot market, the return of a hedged portfolio is given by

ΔHP = ΔS - r·ΔF    (1)

where ΔHP is the change in the value of the hedged portfolio; ΔS and ΔF are the changes in the spot and futures prices, i.e. the returns, respectively; and r is the hedge ratio. The OHR is the value of r that maximizes the expected utility of the investor, which is defined in terms of the expected return and risk of the hedged portfolio. The expected return of the futures is 0 when the futures price follows a martingale; hence, the futures position does not affect the expected return of the portfolio.

The risk of the hedged portfolio is defined by its variance in the mean–variance framework. Therefore, the OHR is simply the value of r that minimizes the variance of Eq. (1), which is given by

∂Var(ΔHP)/∂r = 2r·Var(ΔF) - 2·Cov(ΔS, ΔF) = 0    (2)

where Var(ΔF) is the variance of the futures return and Cov(ΔS, ΔF) is the covariance between the spot return and the futures return. The OHR is determined by solving Eq. (2):

r* = Cov(ΔS, ΔF) / Var(ΔF)    (3)

The OHR given by Eq. (3) can be estimated by regressing the spot return on the futures return using OLS, which corresponds to the conventional or classical OHR.

Hedging performance is typically evaluated by hedging effectiveness (HE). The degree of hedging effectiveness is measured by the percentage reduction in the variance of the portfolio after hedging [3]. The variance of the hedged portfolio with the estimated OHR can be expressed as

Var_hedged = Var(ΔHP) = Var(ΔS_t - r·ΔF_t)    (4)

where r is the OHR. Therefore, HE can be expressed as

HE = (Var_unhedged - Var_hedged) / Var_unhedged × 100% = (Var(ΔS) - Var(ΔHP)) / Var(ΔS) × 100% = (1 - Var(ΔHP)/Var(ΔS)) × 100%    (5)

The value of HE can be used to evaluate the model of OHR estimation. A higher HE represents better OHR estimation, and vice versa.
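To make the classical estimator concrete, the following minimal Python sketch (not part of the original paper; the function names and simulated toy series are illustrative assumptions) computes the minimum-variance OHR of Eq. (3) and the hedging effectiveness of Eq. (5).

```python
import numpy as np

def optimal_hedge_ratio(spot_ret, fut_ret):
    """Classical minimum-variance OHR, Eq. (3): r* = Cov(dS, dF) / Var(dF)."""
    cov = np.cov(spot_ret, fut_ret, ddof=1)   # 2x2 sample variance-covariance matrix
    return cov[0, 1] / cov[1, 1]

def hedging_effectiveness(spot_ret, fut_ret, r):
    """Hedging effectiveness, Eq. (5): percentage variance reduction of the hedged portfolio."""
    hedged = spot_ret - r * fut_ret           # Eq. (1) for a long spot position
    return (1.0 - np.var(hedged, ddof=1) / np.var(spot_ret, ddof=1)) * 100.0

# Toy usage with simulated, positively correlated return series.
rng = np.random.default_rng(0)
fut = rng.normal(0.0, 1.2, size=1000)
spot = 0.95 * fut + rng.normal(0.0, 0.3, size=1000)
r_star = optimal_hedge_ratio(spot, fut)
print(r_star, hedging_effectiveness(spot, fut, r_star))
```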

2.2. Models for estimating the OHR

The simplest approach suggested to minimize portfolio risk is the naïve hedge, which sets the hedge ratio equal to 1 over the whole hedging horizon. The correlation between spot and futures prices is assumed to be perfect, but this conflicts with the fact that spot and futures prices are naturally stochastic and time-varying. In order to recognize the correlation between futures and spot prices more accurately, the static OLS hedge, which uses the OLS coefficient of a regression of the spot return on the futures return, was proposed [1]. However, it treats the joint distribution of spot and futures returns as constant and hence leads to suboptimal hedging decisions in periods of high basis volatility.

Recently, numerous works have focused on improving hedging performance using the dynamics in the joint distribution of returns and the time-varying nature of OHRs. Optimal hedge ratios are estimated using the family of GARCH models proposed by Engle [63], Engle and Kroner [64], and Bollerslev [65,66]. Various GARCH models have been studied in the literature to investigate hedge ratios and hedging performance, including the bivariate GARCH model with diagonal vech parameterization for commodity futures contracts [69], the bivariate constant-correlation GARCH (CC-GARCH) model for foreign currency futures [45] and stock index futures [67], the GARCH model with Baba–Engle–Kraft–Kroner (BEKK) parameterization for interest-rate futures [70], the augmented GARCH model for the freight futures market [71], and orthogonal GARCH and CC-GARCH for the electricity futures market [68]. Although these improved models can capture the dynamic behavior of a time series for OHR estimation, they do not work robustly when dealing with OHR decisions over different hedging horizons. Only a few studies consider different hedging horizons for hedge ratio estimation [3–6,12,47,51,52], and the relationship between hedging horizon and hedge ratio still needs to be investigated using other methods.

Hedging horizons are often crucially important for making hedge ratio decisions, because investors, such as regulators and speculative investors, as well as individuals and institutions participating in the financial markets, behave differently over different hedging horizon lengths. However, three problems arise when incorporating the hedging horizon into OHR estimation. First, the long-horizon OHR estimator, based on a handful of independent observations generated from long-horizon return series, is unreliable [3]. This is because the frequency of the data must match the hedging horizon (e.g., weekly or monthly data must be used to estimate the hedge ratio when the hedging horizon is one week or one month, respectively), and a low data frequency results in a substantial reduction in sample size [4]. Second, the assumption on the error term of the GARCH/SV model can lead to inaccurate results when estimating the multiperiod hedge ratio [5]. Third, the assumption on the underlying data-generating process, such as a unit root process, is unsuitable when the assumed condition does not hold, as evidenced in many commodity markets [6].

The weight of the observations in OHR estimation is another issue. Because the conditional distribution of most financial asset returns tends to vary over time, most OLS-based approaches adopt a rolling-window scheme to obtain recent information for estimating the variance and covariance of spot and futures returns. However, rolling-window estimators use an equally weighted moving average of past squared returns and their cross products: observations have equal weight in the variance–covariance matrix estimator within the arbitrarily defined estimation period, but zero weight beyond it. GARCH-class models are successful in capturing time-varying features for estimating conditional variance–covariance matrices, but they place too much weight on extreme observations when the distribution of the data is leptokurtic and fat-tailed [7].

2.3. Cluster analysis for time series

Clustering financial time series is a new approach to analyzing the dynamic behavior of time series and to forecasting their future tendency for decision making. Many financial problems have been studied based on cluster analysis via computational intelligence approaches instead of conventional ones. Dose and Cincotti [62] use a stochastic-optimization technique based on time series cluster analysis for index tracking and enhanced index tracking problems. Pattarin et al. [15] propose a classification algorithm for mutual fund style analysis, which combines different statistical techniques and exploits information readily available at low cost; in their analysis, time series of past returns are used to retrieve a synthetic and informative description of contexts characterized by high degrees of complexity, which is useful in identifying the styles of mutual funds. Gafiychuk et al. [16] use self-organizing methods to investigate the time series data of the Dow Jones index. Basalto et al. [17] apply a novel clustering procedure to the financial time series of the Dow Jones Industrial Average (DJIA) index to find companies that share similar behaviors; the proposed techniques can extract relevant information from raw market data and yield meaningful hints as to the mutual time evolution of stocks. Karandikar et al. [18] develop a volatility clustering model to forecast value at risk (VaR); the feasibility and benefits of the model are demonstrated on an electricity price return series. Zhu [19] proposes a new model based on cluster analysis for oil futures price forecasting; the model is demonstrated on historical data from the NYMEX market and is shown to effectively and stably improve the precision of oil futures price forecasting. Focardi and Fabozzi [20] adopt a clustering-based methodology to determine optimal tracking portfolios for index tracking. Papanastassiou [21] discusses the classification and clustering of financial time series data based on a parametric GARCH(1,1) representation to assess their riskiness.

In spite of the prevalence of numerous clustering algorithms and their success in a number of application domains, clustering remains challenging. The methods of data processing, feature extraction, similarity measurement, and cluster topology construction differ when dealing with different data. For time series data, feature extraction methods have been organized by past research into three groups [53]: working directly with the data in either the time or the frequency domain; working indirectly with features extracted from the raw data; and working indirectly with models built from the raw data. Defining an appropriate similarity measure and objective function is also an issue when choosing a clustering algorithm. Nevertheless, Jain [54] emphasizes that "there is no best clustering algorithm" when comparing the results of different clustering algorithms.

Clustering time series offers two benefits: clustering can avoid improper assumptions about and restrictions on the data, and data objects with similar dynamic behavior in their evolution over time are pooled, which helps in data modeling. Gershenfeld et al. [22] propose a cluster-weighted model for time series analysis, a simple special case of the general theory of probabilistic networks but one that can handle most of the limitations of practical data sets without unduly constraining either data or user; they show that nonlinear, non-stationary, non-Gaussian, and discontinuous signals can be described by expanding the probabilistic dependence of the future on past relationships of local models. Fruhwirth-Schnatter and Kaufmann [23] propose pooling multiple time series into several groups using finite-mixture models: within a panel of time series, only those that display "similar" dynamic properties are pooled to estimate the parameters of the generating process, the groups of time series are estimated simultaneously with group-specific model parameters using Bayesian Markov chain Monte Carlo simulation methods, and efficiency gains in estimation and forecasting are documented relative to the overall pooling of the time series. D'Urso and Maharaj [24] suggest that time series often display dynamic behavior in their evolution over time, which should be taken into account when attempting to cluster them, and propose a fuzzy clustering approach based on autocorrelation functions to determine and represent the underlying structure in the given time series.

2.4. Growing hierarchical self-organizing map

Recently, the growing hierarchical self-organizing map (GHSOM) has been used for cluster analysis and is presently the best available analysis tool in many research fields [31–33]. Extended from Kohonen's self-organizing feature map (SOM) [30], an unsupervised neural network that organizes a topological map, the GHSOM has a hierarchical architecture of multiple layers, each comprising several independent clusters represented by growing SOMs [14]. The SOM has shown the ability to perform pattern discovery [25,26] and prediction [27–29] for time series data, and the resulting map shows the natural relationships among the patterns given to the network. However, the number of clusters, which describes the topology of the SOM, needs to be determined in advance, and the topology of the SOM lacks the ability to represent hierarchical relations in the data. Unlike traditional cluster analysis techniques, the GHSOM does not require the number of clusters to be determined in advance. When the GHSOM algorithm is applied, time series data with similar patterns are clustered together; if the similarity of the data in the same cluster is below a certain threshold, the data are clustered once again in breadth or depth, thus expanding the SOM clusters. The topology of the clusters is determined automatically by the characteristics of the input data during the unsupervised training process and is related to the threshold settings for width and depth expansion. In this study, the GHSOM is used for time series analysis to estimate cluster-based variances and covariances, which have not been studied in detail.

3. Methodology

3.1. Procedure of the proposed model

The conventional approach to OHR estimation is simply to regress the spot series on the futures series. The basic operating steps are shown in Fig. 1. The first step is to collect the market prices of spot and futures as the original data. Next, the original price series is sampled so that it coincides with the hedging horizon and is then transformed into a return series by differencing. Finally, these data are used to estimate the variance and covariance using OLS to obtain the OHR.

In this study, two modifications of the conventional approach are proposed based on the philosophy that data with similar dynamic behaviors are more likely to appear in the future than dissimilar ones, as shown in the top part of Fig. 1. The original composition of the data for OHR estimation is modified using selected data with similar patterns, which is performed in two phases: Phase I clusters the time series, and Phase II modifies the probability distribution. The objective of Phase I is to identify, based on the whole data set, the data that are more likely to occur in the next hedging horizon and to ignore the less likely data. In Phase II, the data set composed of these higher-probability data is expected to be closer to the normal distribution than the original data, suggesting decreased inaccuracy caused by leptokurtic and fat-tailed distributions.

Fig. 1. The cluster-based approach.

3.2. Data transformation

The original data for OHR estimation gathered from the financial market are the daily closing (or settlement) prices of spot and futures. The price series are then transformed into return series by differencing. We consider continuously compounded returns and magnify the scale by multiplying by 100 to avoid very small values. The return series is expressed as the price change:

ΔS_t, ΔF_t = ln(P_t / P_{t-1}) × 100    (6)

where ΔS and ΔF are the price changes of spot and futures, respectively; P is the price series; and t refers to the present time.

These return series are then divided into two parts, an in-sample estimation period and an out-of-sample testing period. The hedge portfolio in this study is adjusted every hedging horizon according to the latest estimated OHR until the out-of-sample testing period ends. A rolling-window scheme is applied to achieve the dynamic hedging strategy: the OHR at time t is estimated conditioning on the data set available at time t-1, as exhibited in Fig. 2. Herein, h denotes the hedging horizon and e is the in-sample estimation period, so the length of the rolling window is e+h. The OHR is estimated from the observations in the in-sample estimation period, from t-e to t, and then used for hedging from t to t+h. Next, the window is rolled one hedging horizon ahead to re-estimate the OHR from the observations from t+h-e to t+h, and the new OHR is used for the next hedging horizon, from t+h to t+2h. The OHR is re-estimated every h days and used to adjust the hedging portfolio until the out-of-sample testing period ends.
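As an illustration of this rolling-window scheme and of the return transformation in Eq. (6), here is a minimal Python sketch; it implements only the conventional OLS variant (not the GHSOM procedure), and the helper names are assumptions.

```python
import numpy as np

def log_returns(prices):
    """Eq. (6): continuously compounded returns scaled by 100."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices)) * 100.0

def rolling_ols_hedge(spot_ret, fut_ret, e, h):
    """Rolling-window hedge of Section 3.2: every h periods, re-estimate the OHR
    from the previous e observations and hedge the next h periods with it."""
    hedged_returns = []
    t = e
    while t + h <= len(spot_ret):
        window = slice(t - e, t)
        cov = np.cov(spot_ret[window], fut_ret[window], ddof=1)
        r = cov[0, 1] / cov[1, 1]                      # Eq. (3) on the estimation window
        hedged_returns.append(spot_ret[t:t + h] - r * fut_ret[t:t + h])
        t += h                                         # roll the window one horizon ahead
    return np.concatenate(hedged_returns) if hedged_returns else np.array([])
```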

3.3. Extracting the feature of dynamic behaviors

Dynamic behaviors exist in financial time series, and they are helpful for time series forecasting [23,24]. In this study, the variance, covariance, price spread, and their first and second differences are adopted as the features representing the dynamic behavior of the time series for clustering. Although most research adopts the original price or return series and their derivative technical indices, which exhibit certain dependencies, as features for financial market prediction and obtains good performance [55], such features extracted from a single-variable time series can hardly represent the behavior patterns of a bivariate time series. Consequently, we adopt the variance and covariance in consideration of the volatility clustering behavior [56] and the joint distribution of the bivariate time series. The price spread, which critically influences the OHR [57], is also adopted. Furthermore, independent variables representing the first and second differences, analogous to the physical concepts of potential energy (price spread), momentum (first-order difference), and activation force (second-order difference) [23,58], are adopted. These features are calculated using the data in the most recent hedging horizon, denoted by h, as follows:

Var(ΔS_t) = Var[ΔS_{t-h}, ..., ΔS_t]    (7)
Var(ΔF_t) = Var[ΔF_{t-h}, ..., ΔF_t]    (8)
Cov(ΔS_t, ΔF_t) = Cov([ΔS_{t-h}, ..., ΔS_t], [ΔF_{t-h}, ..., ΔF_t])    (9)
Spread(S_t, F_t) = [F_{t-h} - S_{t-h}, ..., F_t - S_t]    (10)

The first- and second-order differences of these features are given by

X′_t = (X_t - X_{t-1}) / X_{t-1}    (11)
X″_t = (X′_t - X′_{t-1}) / X′_{t-1}    (12)

where X represents the functions Var, Cov, and Spread. Twelve variables are eligible to represent the dynamic behavior of an observation and are used as the input matrix of variables for the GHSOM (Table 1).
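A hedged Python sketch of this feature extraction follows; it assumes index-aligned return and price arrays and, because Eq. (10) defines a whole vector of spreads, it summarizes each horizon's spread by its mean, which is an illustrative simplification rather than the paper's exact choice.

```python
import numpy as np

def behaviour_features(spot_ret, fut_ret, spot_px, fut_px, t, h):
    """Twelve features of Section 3.3 (Eqs. (7)-(12)) for an observation at time t,
    built from the three most recent hedging horizons so that first and second
    relative differences are available."""
    def horizon_stats(k):
        # Statistics over the k-th most recent horizon, i.e. the h observations before t-(k-1)*h.
        lo, hi = t - k * h, t - (k - 1) * h
        s, f = spot_ret[lo:hi], fut_ret[lo:hi]
        spread = np.mean(fut_px[lo:hi] - spot_px[lo:hi])    # mean of Eq. (10)'s spread vector
        return np.array([np.var(s, ddof=1),                 # Eq. (7)
                         np.var(f, ddof=1),                 # Eq. (8)
                         np.cov(s, f, ddof=1)[0, 1],        # Eq. (9)
                         spread])
    x0, x1, x2 = horizon_stats(1), horizon_stats(2), horizon_stats(3)
    d1 = (x0 - x1) / x1                                     # Eq. (11), first relative difference
    d1_prev = (x1 - x2) / x2
    d2 = (d1 - d1_prev) / d1_prev                           # Eq. (12), second relative difference
    return np.concatenate([x0, d1, d2])                     # 12-dimensional feature vector
```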

3.4. Clustering by GHSOM

For each observation, a feature vector is extracted from the data in the previous hedging horizon, and the feature vectors of the observations in the estimation interval form the input matrix of the GHSOM for OHR estimation. The GHSOM algorithm in this study is implemented in MATLAB [35]. When the GHSOM is used, the resulting network topology is adjusted through preset parameters, including the breadth of the map, the depth of the GHSOM, and the threshold of cluster capacity, to fit the requirements of the analysis. If the group size of a cluster exceeds the threshold, the data are clustered once again in breadth or depth, thus expanding the SOM clusters. To emphasize the hierarchical relationship of the clusters and to avoid the data being too concentrated in a few clusters, the breadth, depth, and threshold parameters are set to 0.001, 0.8, and 100, respectively, after several trials based on our experimental data.


The feature vectors extracted from the historical time series are processed by min–max normalization to the range between -1 and 1 and fed into the GHSOM, which returns the hierarchically clustered data. Fig. 3 illustrates two important relations of these hierarchical clusters. First, the number of data samples in each cluster differs: clusters in the upper layers of the hierarchical architecture contain more data samples than those in the lower layers. Second, the hierarchical architecture also represents the degree of similarity. Any sample can be located in a cluster at each layer it belongs to; the host cluster in the lowest layer contains the least data but has the highest similarity with the forecasting data, and the similarity decreases in the upper-layer clusters.

To estimate the OHR for the next hedging horizon, the features of the dynamic behaviors are extracted from the data in the hedging horizon preceding each data sample in the estimation period. After the samples are hierarchically clustered, a group of similar data samples is collected from the cluster the sample belongs to in each layer.
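GHSOM training itself (breadth and depth growth governed by the preset thresholds described above) is more involved than can be shown briefly; as a simplified, self-contained stand-in for one layer of it, the sketch below normalizes the feature matrix and clusters it with a small flat self-organizing map written in NumPy. This is not the MATLAB toolbox [35] used in the paper, and all names are illustrative.

```python
import numpy as np

def minmax_scale(X):
    """Min-max normalization of the feature matrix to [-1, 1], as in Section 3.4."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return 2.0 * (X - lo) / np.where(hi > lo, hi - lo, 1.0) - 1.0

def train_som(X, rows=3, cols=3, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small flat SOM and return its codebook; a stand-in for one GHSOM layer."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, size=(rows * cols, X.shape[1]))
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                        # learning-rate decay
        sigma = sigma0 * (1.0 - epoch / epochs) + 1e-3           # neighborhood decay
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # best matching unit
            nb = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
            W += lr * nb[:, None] * (x - W)                      # pull units toward the sample
    return W

def assign_clusters(X, W):
    """Cluster label of each observation = index of its best matching unit."""
    return np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
```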

3.5. Data resampling and weighting

Fig. 2. The rolling windows scheme.

Table 1
Features of the observation.

Dynamic behaviors | Eligible input variables | Notations
Volatility clustering property | Variance of the spot/futures return series | Var(ΔS), Var(ΔF)
Momentum of volatility change | First-order differential of the variance | Var′(ΔS), Var′(ΔF)
Activation force that causes volatility change | Second-order differential of the variance | Var″(ΔS), Var″(ΔF)
Joint distribution of the spot and futures return series | Covariance of the spot and futures return series | Cov(ΔS, ΔF)
Momentum of joint-distribution change | First-order differential of the covariance | Cov′(ΔS, ΔF)
Activation force that causes joint-distribution change | Second-order differential of the covariance | Cov″(ΔS, ΔF)
Potential energy | Spread of the spot and futures price series | Spread(S, F)
Momentum of potential energy | First-order differential of Spread(S, F) | Spread′(S, F)
Activation force that causes potential-energy change | Second-order differential of Spread(S, F) | Spread″(S, F)

Data samples with similar behavior may occur more frequently in the future and should be emphasized more than dissimilar ones. However, when the data are grouped by cluster analysis, the original data are divided into several groups, each containing only part of the data, so the number of similar data samples is far smaller than that of the original data. Reducing the sample size causes inaccuracy when OLS is employed for OHR estimation [4]. To overcome this problem, we propose to adopt within-cluster resampling, which has been used to solve sample-reduction problems [36,37]. Moreover, the architecture of the hierarchical clusters is very similar to a hierarchical stratified resampling scheme, in which the observations are divided into several groups according to their properties. Consequently, we expand the sample size by randomly replicating the similar data samples in the cluster of each layer until the sample size reaches the population size of the estimation period. More similar data are replicated more frequently, thus increasing their occurrence probability in the whole population. As a result, the sample size is expanded to the original sample size of the estimation period multiplied by the number of layers in the hierarchical architecture. The pseudo code for data resampling and weighting is described in Fig. 4. The probability distribution of the original time series is modified by combining the original population data in the estimation period with the resampled similar data samples. When the conditional distribution of spot and futures returns is predictable, a more efficient estimate of the OHR can be obtained by conditioning on recent information [34].

Fig. 4. The pseudo code of data resampling and weighting.
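Since the pseudo code of Fig. 4 is not reproduced in this extract, the following NumPy sketch only approximates the idea under stated assumptions: the indices of observations sharing the target's cluster at each layer are replicated up to the estimation-period size, and the OHR of Eq. (13) is then computed on the pooled collection. The layer-list input format and the helper names are illustrative, not from the paper.

```python
import numpy as np

def resample_similar_samples(members_per_layer, population_size, seed=0):
    """Phase II sketch: for every layer of the hierarchy, replicate the indices of the
    observations in the target's cluster until that layer contributes population_size
    indices; samples that reappear in deeper layers are drawn more often and therefore
    receive a larger implicit weight."""
    rng = np.random.default_rng(seed)
    picked = [rng.choice(np.asarray(members), size=population_size, replace=True)
              for members in members_per_layer]      # e.g. [layer-1 indices, layer-2 indices, ...]
    return np.concatenate(picked)

def clustered_ohr(spot_ret, fut_ret, resampled_idx):
    """Eq. (13): OHR estimated on the weighted collection of similar observations."""
    s, f = spot_ret[resampled_idx], fut_ret[resampled_idx]
    cov = np.cov(s, f, ddof=1)
    return cov[0, 1] / cov[1, 1]
```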

3.6. OHR estimating

In this study, OHR estimation is improved by replacing the original data samples in the estimation period with the collection of unequally weighted similar data samples. The traditional OLS method for OHR estimation, expressed by Eq. (3), is modified to Eq. (13), in which Δ~S and Δ~F refer to the collections of observations derived from the spot and futures return series, respectively.

r* = Cov(Δ~S, Δ~F) / Var(Δ~F)    (13)

3.7. Model evaluation

The value of hedging effectiveness (HE) and the variance of the hedged portfolio, expressed by Eqs. (5) and (4), are used to evaluate the OHR estimation models in this study. Furthermore, White's Reality Check is adopted to compare the different OHR estimation models and to test the statistical significance of the variance reduction [38,39]. The reality check is a non-parametric test that checks whether any of the concurrent methods yields forecasts that are significantly better than a given benchmark method, while correcting for data snooping bias. Data snooping bias may occur when a given dataset is reused by one or more researchers for model selection. The null hypothesis to be tested is that the performance of the proposed hedging model has no predictive superiority over the conventional model. The hypotheses are as follows:

H0. No method is better than the benchmark.

H1. At least one method is better than the benchmark.
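For illustration, a compact sketch of computing such a reality-check p-value is given below; it uses a plain i.i.d. bootstrap of the per-period loss differentials, whereas White [38] specifies the stationary bootstrap, so it is an approximation of the procedure rather than the paper's exact implementation, and the function name and inputs are assumptions.

```python
import numpy as np

def reality_check_pvalue(bench_losses, model_losses, n_boot=1000, seed=0):
    """Simplified White (2000) Reality Check.
    bench_losses : (T,) per-period losses of the benchmark (e.g. squared hedged returns under OLS).
    model_losses : (K, T) per-period losses of the K competing models.
    Returns a bootstrap p-value for H0: no model improves on the benchmark."""
    rng = np.random.default_rng(seed)
    d = np.asarray(bench_losses)[None, :] - np.asarray(model_losses)  # > 0 means the model is better
    T = d.shape[1]
    dbar = d.mean(axis=1)
    v = np.sqrt(T) * dbar.max()                     # observed test statistic
    v_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)            # i.i.d. resample of time periods
        v_boot[b] = (np.sqrt(T) * (d[:, idx].mean(axis=1) - dbar)).max()
    return float(np.mean(v_boot >= v))
```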

4. Experimental design and results analysis

4.1. Experimental design

The experiments in this study are designed with two objectives: to examine the feasibility of the proposed GHSOM model and to compare the hedging performance of OHR decisions based on different models over various hedging horizons. Two factors are considered in the experiments: one is the selection of features for the dynamic behaviors, and the other is the length of the hedging horizon in days.

The feasibility of the proposed GHSOM model is examined using the dynamic behaviors extracted as the features of the time series. The feature-extracting process of the proposed model is tested in different settings to find the best parameters. The feature vectors that represent the dynamic behaviors of the time series for GHSOM similarity measurement are composed of variance, covariance, price spread, and their first and second order differences. We design six combinations of these parameters, which are adopted in the experimental models to verify the performance over various hedging horizons. Table 2 presents the parameter settings of these models.

The optimal hedge ratio is estimated by the proposed model for each hedging horizon, and the hedging decision is evaluated by hedging effectiveness. For each hedging horizon in the testing period, the hedged portfolio is adjusted once according to the latest OHR at the beginning of the hedging horizon and held until the beginning of the next hedging horizon. At the end of the testing period, hedging effectiveness is calculated based on the variance of the hedged portfolio in each hedging horizon. The hedging horizons in the experiments are set to 1, 7, 14, 21, and 28 days, which cover intervals from short term to long term. To compare hedging performance, the superiority of the proposed model is verified against two conventional models, the OLS and naïve models, both of which are widely used in OHR research on different hedging horizons [6,12].

4.2. Experiment data and basic statistics

This study obtained empirical trading data of daily closing prices from various stock and futures markets, including the Taiwan Weighted Index (TWI), Standard & Poor's 500 Index (S&P 500), Financial Times Stock Exchange 100 Index (FTSE 100), NIKKEI 255 Index, and their correlated futures contracts. Table 3 lists the stock market indexes and the exchanges on which their correlative futures contracts trade. All data were obtained from the Thomson Datastream database for the same period, from July 21, 1999 to July 18, 2008. The futures price series was gathered from the nearest-month contracts and rolled over to the next nearest contracts on the maturity day in consideration of liquidity and price spread risk. The return series are defined as the logarithmic first difference of the price series multiplied by 100, as in Eq. (6). The numbers of observations for each market are listed in Table 3. Among the total observations, the first 90% is considered the estimation period, and the remaining 10% is considered the testing period.

Table 4 shows some basic distributional characteristics of the spot and futures return series. All eight series show significant skewness, kurtosis, and Jarque–Bera (JB) statistics, implying non-normal distributions with fatter tails. Comparisons of the standard deviation of returns, kurtosis, and JB statistics indicate that the largest and smallest discrepancies between the spot and futures data are in TWI and FTSE 100, respectively. In other words, the correlation between spot and futures is highest in FTSE 100 and lowest in TWI. A large discrepancy between the spot and futures data indicates more extreme movements than would be predicted by a normal distribution. The F-test for equal variance between spot and futures also indicates different characteristics in each market: the null hypothesis of equal variance is rejected for TWI, but cannot be rejected for S&P 500, FTSE 100, and the NIKKEI 255 Index. Consequently, data from the same period gathered from different markets may exhibit different behaviors and cause inconsistencies in the results.

4.3. Comparisons of dynamic behaviors prediction

The variance, covariance, price spread, and first and second differences of the observations in the previous hedging horizons are suggested to capture the dynamic behavior for predicting fluctuations in the next hedging horizon. Table 5 presents the hedging effectiveness for all models. The results indicate that, on the same experimental data, the GHSOM models obtain the best performance compared with the traditional OLS and naïve models, except for short-term hedging in FTSE 100 and one-day hedging in S&P 500. A comparison of the six experimental models across all market data indicates that the best GHSOM model differs over different hedging horizons. For seven-day hedging, GHSOM_V0 is the best model in all market data. However, for 1-day and 28-day hedging, GHSOM_V2 and GHSOM_V2C0S0 are the best models in three of the four markets.

Table 3
Experiment data.

Index (Spot) | Exchange (Futures) | Observations
Taiwan Weighted Index (TWI) | Taiwan Futures Exchange (TAIFEX) | 2217
Standard & Poor's 500 Index (S&P 500) | Chicago Mercantile Exchange (CME) | 2263
Financial Times Stock Exchange 100 Index (FTSE 100) | London International Financial Futures and Options Exchange (LIFFE) | 2215
NIKKEI 255 Index | Osaka Securities Exchange (OSE) | 2275

Note: Data period is from July 21, 1999 to July 18, 2008.

Table 2
Parameter settings for testing dynamic behavior (input parameters / selected features).

Code of testing model | Variance | Covariance | Price spread
GHSOM_V0 | Var(ΔS), Var(ΔF) | – | –
GHSOM_V2 | Var(ΔS), Var(ΔF), Var′(ΔS), Var′(ΔF), Var″(ΔS), Var″(ΔF) | – | –
GHSOM_V0C0S0 | Var(ΔS), Var(ΔF) | Cov(ΔS, ΔF) | Spread(S, F)
GHSOM_V2C0S0 | Var(ΔS), Var(ΔF), Var′(ΔS), Var′(ΔF), Var″(ΔS), Var″(ΔF) | Cov(ΔS, ΔF) | Spread(S, F)
GHSOM_V0C2S2 | Var(ΔS), Var(ΔF) | Cov(ΔS, ΔF), Cov′(ΔS, ΔF), Cov″(ΔS, ΔF) | Spread(S, F), Spread′(S, F), Spread″(S, F)
GHSOM_V2C2S2 | Var(ΔS), Var(ΔF), Var′(ΔS), Var′(ΔF), Var″(ΔS), Var″(ΔF) | Cov(ΔS, ΔF), Cov′(ΔS, ΔF), Cov″(ΔS, ΔF) | Spread(S, F), Spread′(S, F), Spread″(S, F)

Table 4
Basic distributional statistics of return series.

Statistic | TWI Spot | TWI Futures | S&P 500 Spot | S&P 500 Futures | FTSE 100 Spot | FTSE 100 Futures | NIKKEI 255 Spot | NIKKEI 255 Futures
Mean | 0.0060 | 0.0068 | 0.0040 | 0.0042 | 0.0072 | 0.0073 | 0.0160 | 0.0158
Maximum | 6.1721 | 6.7659 | 5.5744 | 5.7549 | 5.9026 | 5.9506 | 7.2217 | 8.0043
Minimum | -9.9360 | -11.0795 | -6.0045 | -6.2709 | -5.8853 | -6.0625 | -7.2340 | -7.5986
Std. Dev. | 1.5931 | 1.8262 | 1.1287 | 1.1404 | 1.1657 | 1.1663 | 1.4058 | 1.4346
Kurtosis | 5.2942 | 5.8891 | 5.2732 | 5.4067 | 5.7929 | 5.7588 | 4.6042 | 4.6496
Skewness | 0.1883 | 0.1867 | 0.0600 | 0.0274 | 0.2096 | 0.1658 | 0.2075 | 0.2122
Jarque-Bera (JB) | 499.0886*** | 783.5957*** | 488.3711*** | 546.2142*** | 755.7494*** | 731.5745*** | 253.2913*** | 267.633***
F-test for equal variances (p value, spot vs. futures) | 0.0000*** (TWI) | | 0.6247 (S&P 500) | | 0.9790 (FTSE 100) | | 0.3396 (NIKKEI 255) |

Note: (1) The skewness of a normal distribution is zero. (2) The kurtosis of a normal distribution is 3. (3) The hypothesis of the F-test is that the two independent samples, spot and futures returns, come from normal distributions with the same variance.


The results imply that the ability to capture fluctuations at various timescales differs among the GHSOM models. Short-term dynamic behavior may be captured by the variance and its first and second differences, whereas the long-term tendency may need more variables for its description, obtained by adding the covariance and the price spread.

4.4. Comparison of hedging performance

For a comparison of hedging performance, Table 6 lists the best GHSOM model from the six experimental models and the two conventional models (naïve and OLS). Table 6 shows that increasing the hedging horizon increases the variance of the unhedged portfolio, but this variance is effectively reduced by the hedging models. The percentage of variance reduction, shown as hedging effectiveness in Table 5, is higher for a long hedging horizon than for a short one.

A comparison of the models using the variance of the hedged portfolio in Table 6 shows that the GHSOM model is superior to the OLS model; for TWI and NIKKEI 255, the GHSOM model obtains the minimum variance in all hedging horizons. However, the conventional OLS model cannot obtain the minimum variance in all markets. Notably, for FTSE 100 and S&P 500 in short-term hedging, the static naïve model obtains the minimum variance. A possible reason is the high correlation of the fluctuations of spot and futures for FTSE 100, which can be observed from how close the statistics of the spot and futures markets are in Table 3.

The values of hedging effectiveness differ only slightly across these models. To test the significance of these models' performance improvements, we perform White's reality check. When OLS is treated as the benchmark, the null hypothesis of no improvement of the GHSOM model over the benchmark is rejected for 28-day hedging in TWI, 21- and 28-day hedging in S&P 500, and 21-day hedging in FTSE 100 and NIKKEI 255 at the 1% significance level. The results of the reality check provide evidence that the proposed computational intelligence model can improve on the OLS model, especially in long-term hedging.

4.5. Comparison of OHR

Table 7 presents the average OHR and its standard deviation for the underlying models, including OLS and the best GHSOM model for comparison. For all market data, the average OHRs estimated using the GHSOM and OLS models are very close, though a large discrepancy exists in the standard deviation. The maximum standard deviation of the OLS model is 0.0069, for one-day hedging of FTSE 100, whereas the minimum standard deviation of the GHSOM model is 0.0070, for 28-day hedging of S&P 500. The results suggest that the OHR estimated using the GHSOM model is more variable than that of the OLS model.

Table 5
Comparisons of dynamic behaviors.

Hedging effectiveness by hedging horizon (days)
Market / model | 1 | 7 | 14 | 21 | 28
TWI
GHSOM_V0 | 93.3309%* | 97.1661%** | 99.2656% | 99.3811% | 99.3131%
GHSOM_V2 | 93.3905%** | 97.1534%* | 99.2480% | 99.3942% | 99.3556%*
GHSOM_V0C0S0 | 93.2715% | 96.9289% | 99.2751% | 99.3947% | 99.3160%
GHSOM_V2C0S0 | 93.0998% | 96.8809% | 99.2879%* | 99.4342%* | 99.3802%**
GHSOM_V0C2S2 | 93.1081% | 97.0102% | 99.3111%** | 99.4327% | 99.3431%
GHSOM_V2C2S2 | 93.1798% | 96.9169% | 99.2047% | 99.4666%** | 99.3470%
OLS | 93.3055% | 97.0244% | 99.1612% | 99.3860% | 99.3089%
Naïve | 90.6982% | 96.0278% | 98.5331% | 99.0888% | 98.8415%
S&P 500
GHSOM_V0 | 96.6140% | 99.1678%** | 99.3126% | 99.6413% | 99.6493%
GHSOM_V2 | 96.6662%* | 99.1206% | 99.3777% | 99.6010% | 99.6513%
GHSOM_V0C0S0 | 96.6401% | 99.1236% | 99.3860%** | 99.6888%** | 99.7094%
GHSOM_V2C0S0 | 96.6447% | 99.0880% | 99.3856%* | 99.6806%* | 99.7310%**
GHSOM_V0C2S2 | 96.6069% | 99.0630% | 99.3837% | 99.6774% | 99.6793%
GHSOM_V2C2S2 | 96.6221% | 99.0656% | 99.3667% | 99.6661% | 99.7263%*
OLS | 96.6510% | 99.1287%* | 99.3705% | 99.5911% | 99.6131%
Naïve | 96.7974%** | 99.0029% | 99.3045% | 99.4826% | 99.5752%
FTSE 100
GHSOM_V0 | 96.9688% | 98.5911% | 98.5323% | 99.0140% | 99.4868%**
GHSOM_V2 | 97.0511%* | 98.5522% | 98.5596% | 99.1438% | 99.4483%
GHSOM_V0C0S0 | 96.9842% | 98.5673% | 98.5690% | 99.1743% | 99.4857%*
GHSOM_V2C0S0 | 96.9828% | 98.6141%* | 98.5572% | 99.2197%* | 99.4751%
GHSOM_V0C2S2 | 96.9945% | 98.5904% | 98.5883% | 99.1774% | 99.4475%
GHSOM_V2C2S2 | 97.0278% | 98.5823% | 98.6069% | 99.2263%** | 99.4764%
OLS | 97.0130% | 98.5552% | 98.6306%* | 99.0979% | 99.4767%
Naïve | 97.1492%** | 98.6258%** | 98.8094%** | 99.1005% | 99.3223%
NIKKEI 255
GHSOM_V0 | 96.4072% | 99.4409%** | 99.5007% | 99.5261% | 99.8709%
GHSOM_V2 | 96.4909% | 99.3939% | 99.4533% | 99.5560%** | 99.9036%*
GHSOM_V0C0S0 | 96.5677%** | 99.4311%* | 99.5075%* | 99.5499%* | 99.8948%
GHSOM_V2C0S0 | 96.5026% | 99.4310% | 99.4954% | 99.5245% | 99.9051%**
GHSOM_V0C2S2 | 96.5099% | 99.4037% | 99.5107%** | 99.5274% | 99.8899%
GHSOM_V2C2S2 | 96.4615% | 99.3797% | 99.4698% | 99.5431% | 99.8893%
OLS | 96.5501%* | 99.4271% | 99.5002% | 99.5072% | 99.9023%
Naïve | 96.2222% | 99.3305% | 99.4317% | 99.4585% | 99.8711%

* Represents the second best HE among the eight models at the same hedging horizon. ** Represents the best HE among the eight models at the same hedging horizon.


Figs. 5–8 present the OHR estimated by the best GHSOM and OLS models over the 1- and 28-day hedging horizons for each market. In these figures, the OHR estimated using the traditional OLS model approximates a straight line, and its values are almost the same during the hedge period. However, the OHR estimated using the GHSOM model is time-varying, which reflects the dynamic behavior of the financial time series.

Fig. 7 also indicates how FTSE 100 differs from the other markets. The OHRs given by the OLS model become smaller as the hedging horizons become longer, whereas the contrary is observed in Figs. 5, 6, and 8. At the same time, for FTSE 100, Table 4 shows the highest p-value of the F-test for equal variances, and Tables 5 and 6 show that the best model is the naïve one. Consequently, the market behavior of FTSE 100 is a special case for which it is hard to make hedging decisions.

4.6. Discussion

The experimental results show that the model comparisons may differ in different markets. Some studies indicate that GARCH family models are superior to the OLS model in a specific market [40]. However, other studies express opposing opinions, stating that the OLS hedge ratio performs better than other popular multivariate GARCH models [41,42]. The naïve hedge ratio of 1 is suggested as the optimal hedge ratio when the hedging horizon is long [6]. The superiority of a hedging model can be evaluated using White's reality check; however, this evaluation is not significant for model comparisons in one-day hedging [39,43]. This phenomenon may be due to the dissimilar behavior of markets: the behavior of an emerging market differs from that of a mature market. For example, hedging effectiveness can be enhanced by a certain model in emerging markets such as the Hungarian BSI market, but not in developed markets such as the US S&P 500 market [44]. A similar result is observed in this study, that is, the hedging effectiveness in TWI differs from that of the UK FTSE 100.

Another issue that arises in this study is that the improvement in HE is very slight when different models are compared, e.g. from 93.30% to 93.39% (a 0.09% improvement) on the 1-day TWI dataset. This minor improvement is explained by the small scale of the variance and covariance computation and is commonly reported in the OHR literature. For example, Lee and Yoder's RS-BEKK models [39], compared with the OLS model, improve the variance reduction from 77.4732% to 78.8891% for corn and from 99.2068% to 99.2087% for nickel. Li's threshold VECM model [44], compared with the OLS model, improves the variance reduction from 96.22848% to 96.2646% for S&P 500. Moon et al. [42] report that a principal component GARCH model, compared with a rolling OLS model, improves the variance reduction from 95.45% to 95.52%. Although the improvement in this study is minor, the proposed GHSOM model is capable of discovering similar behavior in the same market and can adapt to the characteristics of a particular market. Therefore, the long-term tendency of a market can be captured easily, and statistical significance compared with the OLS model can be obtained in this study.

5. Conclusions and future works

The empirical findings in this study are consistent with the following observations. First, a longer hedging horizon increases hedging effectiveness. Second, the proposed GHSOM model can improve on the typical OLS model, especially in long-term hedging. Third, the present findings lend support to the superiority of the GHSOM model in enhancing hedging effectiveness for emerging markets, but not for developed markets such as the US S&P 500 and UK FTSE 100 markets. Finally, the OHR estimated using the GHSOM model is more volatile than the OHR estimated using the OLS model, which implies that the GHSOM model can rapidly reflect the time-variant property of financial time series and provide accurate estimates for dynamic hedging decisions.

Table 6
Variance of the portfolio.

Market / model | Hedging horizon (days): 1 | 7 | 14 | 21 | 28
TWI
Unhedged | 2.7527 | 20.5443 | 41.8060 | 35.5709 | 39.9629
Naïve | 0.2561 | 0.8161 | 0.6132 | 0.1840 | 0.1698
OLS | 0.1843 | 0.6113 | 0.3507 | 0.1454 | 0.1546
GHSOM | 0.1819 a | 0.5822 a | 0.2880 a | 0.1148 a | 0.1056 a
Reality check p value | 0.134 | 0.026* | 0.015* | 0.085 | 0.000**
S&P 500
Unhedged | 1.6688 | 7.7995 | 14.9320 | 35.5709 | 39.9629
Naïve | 0.0534 a | 0.0778 | 0.1039 | 0.1840 | 0.1698
OLS | 0.0559 | 0.0680 | 0.0940 | 0.1454 | 0.1546
GHSOM | 0.0556 | 0.0649 a | 0.0917 a | 0.1107 a | 0.1075 a
Reality check p value | 0.354 | 0.072 | 0.257 | 0.000** | 0.000**
FTSE 100
Unhedged | 2.0479 | 8.3512 | 23.6463 | 35.0068 | 57.8882
Naïve | 0.0584 a | 0.1148 a | 0.2815 a | 0.3149 | 0.3923
OLS | 0.0612 | 0.1207 | 0.3238 | 0.3158 | 0.3029
GHSOM | 0.0604 | 0.1157 | 0.3294 | 0.2709 a | 0.2971 a
Reality check p value | 0.039* | 0.022* | 1.000 | 0.002** | 0.050*
NIKKEI 255
Unhedged | 2.8548 | 17.2076 | 46.2477 | 38.9337 | 87.7568
Naïve | 0.1078 | 0.1152 | 0.2628 | 0.2108 | 0.1131
OLS | 0.0985 | 0.0986 | 0.2312 | 0.1919 | 0.0857
GHSOM | 0.0980 a | 0.0962 a | 0.2263 a | 0.1729 a | 0.0833 a
Reality check p value | 0.318 | 0.078 | 0.151 | 0.004** | 0.136

Note: The benchmark model for White's reality check is the OLS model. * Represents significance at the 5% level. ** Represents significance at the 1% level. a Represents the minimum variance among the naïve, OLS, and GHSOM hedged portfolios.

Table 7
Comparison of OHR.

Hedging horizon | Model | TWI Mean | TWI Std. Dev. | S&P 500 Mean | S&P 500 Std. Dev. | FTSE 100 Mean | FTSE 100 Std. Dev. | NIKKEI 255 Mean | NIKKEI 255 Std. Dev.
1 | OLS | 0.8189 | 0.0012 | 0.9636 | 0.0037 | 0.9819 | 0.0069 | 0.9429 | 0.0015
1 | GHSOM | 0.8263 | 0.0268 | 0.9649 | 0.0163 | 0.9833 | 0.0145 | 0.9474 | 0.0175
7 | OLS | 0.9423 | 0.0019 | 0.9746 | 0.0021 | 0.9738 | 0.0040 | 0.9799 | 0.0010
7 | GHSOM | 0.9288 | 0.0150 | 0.9751 | 0.0088 | 0.9821 | 0.0100 | 0.9826 | 0.0058
14 | OLS | 0.9570 | 0.0019 | 0.9818 | 0.0024 | 0.9746 | 0.0057 | 0.9737 | 0.0015
14 | GHSOM | 0.9493 | 0.0134 | 0.9821 | 0.0143 | 0.9851 | 0.0159 | 0.9832 | 0.0078
21 | OLS | 0.9625 | 0.0021 | 0.9872 | 0.0039 | 0.9647 | 0.0045 | 0.9851 | 0.0019
21 | GHSOM | 0.9598 | 0.0145 | 0.9847 | 0.0099 | 0.9701 | 0.0149 | 0.9867 | 0.0078
28 | OLS | 0.9647 | 0.0012 | 0.9960 | 0.0023 | 0.9589 | 0.0045 | 0.9764 | 0.0010
28 | GHSOM | 0.9624 | 0.0066 | 0.9882 | 0.0070 | 0.9676 | 0.0118 | 0.9858 | 0.0083

Although this research still has some restrictions on model parameter selection, this novel GHSOM-based approach can improve the performance of the traditional approach without too many inappropriate assumptions and restrictions. Consequently, the proposed model can also be considered a powerful tool for investigating any financial market in which the probability distribution of the data is unrestricted and need not fit any particular type of probability distribution.

Fig. 5. Comparison of OHR in different hedging horizons for TWI.

The findings, although significant, have some limitations and are expected to be investigated further. The recommendations for future work are summarized as follows. This research only conducts model and OHR estimation on stock index futures; however, the model has the potential to be applied to other futures markets, such as foreign exchange futures or commodity futures.

Fig. 7. Comparison of OHR in different hedging horizons for FTSE 100.


The proposed GHSOM model is also expected to be used as a tool for investigating related volatility issues in financial engineering, such as volatility forecasting, modifying the beta coefficient in the capital asset pricing model (CAPM), and estimating value at risk (VaR).

References

[1] L.H. Ederington, Hedging performance of the new futures markets, J. Financ. 34 (1979) 157–170.
[2] J. Hill, T. Schneeweis, A note on the hedging effectiveness of foreign-currency futures, J. Futures Mark. 1 (1981) 659–664.
[3] J.M. Geppert, A statistical model for the relationship between futures contract hedging effectiveness and investment horizon length, J. Futures Mark. 15 (1995) 507–536.
[4] D. Lien, K. Shrestha, An empirical analysis of the relationship between hedge ratio and hedging horizon using wavelet analysis, J. Futures Mark. 27 (2007) 127–150.
[5] D. Lien, B.K. Wilson, Multiperiod hedging in the presence of stochastic volatility, Int. Rev. Financ. Anal. 10 (2001) 395–406.
[6] S.S. Chen, C.F. Lee, K. Shrestha, An empirical analysis of the relationship between the hedge ratio and hedging horizon: a simultaneous estimation of the short- and long-run hedge ratios, J. Futures Mark. 24 (2004) 359–386.
[7] D.B. Nelson, D.P. Foster, Asymptotic filtering theory for univariate ARCH models, Econometrica 62 (1994) 1–41.
[8] A. Alizadeh, N. Nomikos, A Markov regime switching approach for hedging stock indices, J. Futures Mark. 24 (2004) 649–674.
[9] A. Hatemi, E. Roca, Calculating the optimal hedge ratio: constant, time varying and the Kalman Filter approach, Appl. Econ. Lett. 13 (2006) 293–299.
[10] R. Gençay, F. Selçuk, B. Whitcher, Systematic risk and timescales, Quant. Financ. 3 (2003) 108–116.
[11] F. In, S. Kim, The hedge ratio and the empirical relationship between the stock and futures markets: a new approach using wavelet analysis, J. Bus. 79 (2006) 799–820.
[12] F. In, S. Kim, Multiscale hedge ratio between the Australian stock and futures markets: evidence from wavelet analysis, J. Multinatl. Financ. Manag. 16 (2006) 411–423.
[13] F. Azevedo, Z.A. Vale, P.B.D. Oliveira, A decision-support system based on particle swarm optimization for multiperiod hedging in electricity markets, IEEE Trans. Power Syst. 22 (2007) 995–1003.
[14] A. Rauber, D. Merkl, M. Dittenbach, The growing hierarchical self-organizing map: exploratory analysis of high-dimensional data, IEEE Trans. Neural Netw. 13 (2002) 1331–1341.
[15] F. Pattarin, S. Paterlini, T. Minerva, Clustering financial time series: an application to mutual funds style analysis, Comput. Stat. Data Anal. 47 (2004) 353–372.
[16] V.V. Gafiychuk, B.Y. Datsko, J. Izmaylova, Analysis of data clusters obtained by self-organizing methods, Phys. A-Stat. Mech. Appl. 341 (2004) 547–555.
[17] N. Basalto, R. Bellotti, F. De Carlo, P. Facchi, E. Pantaleo, S. Pascazio, Hausdorff clustering of financial time series, Phys. A-Stat. Mech. Appl. 379 (2007) 635–644.
[18] R.G. Karandikar, N.R. Deshpande, S.A. Khaparde, S.V. Kulkarni, Modelling volatility clustering in electricity price return series for forecasting value at risk, Eur. Trans. Electr. Power 19 (2009) 15–38.
[19] J.R. Zhu, A new model for oil futures price forecasting based on cluster analysis, in: 4th International Conference on Wireless Communications, Networking and Mobile Computing, vols. 1–31, 2008, pp. 11456–11459.
[20] S.M. Focardi, F.J. Fabozzi, A methodology for index tracking based on time-series clustering, Quant. Financ. 4 (2004) 417–425.
[21] D. Papanastassiou, Classification and clustering of GARCH time series, in: L. Sakalauskas, C. Skiadas, E.K. Zavadskas (Eds.), Proceedings of the XIII International Conference on Applied Stochastic Models and Data Analysis, Vilnius, Lithuania, 2004.
[22] N. Gershenfeld, B. Schoner, E. Metois, Cluster-weighted modelling for time-series analysis, Nature 397 (1999) 329–332.
[23] S. Fruehwirth-Schnatter, S. Kaufmann, Model-based clustering of multiple time series, J. Bus. Econ. Stat. 26 (2008) 78–89.
[24] P. D'Urso, E.A. Maharaj, Autocorrelation-based fuzzy clustering of time series, Fuzzy Sets Syst. 160 (2009) 3565–3589.
[25] C.-Y. Tsao, S.-H. Chen, Self-organizing maps as a foundation for charting or geometric pattern recognition in financial time series, in: Proceedings of the 2002 IEEE International Conference on Computational Intelligence for Financial Engineering (Cat. no. 03TH8653), 2003, pp. 387–394.
[26] T.-C. Fu, F.-L. Chung, R.L.V. Ng, Pattern discovery from stock time series using self-organizing maps, in: KDD 2001 Workshop on Temporal Data Mining, San Francisco, 2001, pp. 27–37.
[27] M.O. Afolabi, O. Olude, Predicting stock prices using a hybrid Kohonen self organizing map (SOM), in: Proceedings of the 40th Annual Hawaii International Conference on System Sciences, CD-ROM, 2007, 8 pp.
[28] T. Senjyu, Y. Tamaki, K. Uezato, Next day load curve forecasting using self organizing map, in: Proceedings of the 2000 International Conference on Power System Technology (PowerCon 2000), 2000.
[29] G. Simon, A. Lendasse, M. Cottrell, J.C. Fort, M. Verleysen, Time series forecasting: obtaining long term trends with self-organizing maps, Pattern Recognit. Lett. 26 (2005) 1795–1808.
[30] T. Kohonen, Self-Organization and Associative Memory, 3rd edition, Springer-Verlag, New York, 1989.
[31] J.Y. Shih, Y.J. Chang, W.H. Chen, Using GHSOM to construct legal maps for Taiwan's securities and futures markets, Expert Syst. Appl. 34 (2008) 850–858.
[32] H.C. Yang, C.H. Lee, D.W. Chen, A method for multilingual text mining and retrieval using growing hierarchical self-organizing maps, J. Inf. Sci. 35 (2009) 3–23.
[33] S. Liu, L. Lu, G. Liao, J. Xuan, Pattern discovery from time series using growing hierarchical self-organizing map, in: Proceedings of the 13th International Conference on Neural Information Processing, ICONIP'06, Lecture Notes in Computer Science, vol. 4232, 2006, pp. 1030–1037.
[34] R.D.F. Harris, J. Shen, Robust estimation of the optimal hedge ratio, J. Futures Mark. 23 (2003) 799–816.
[35] A. Chan, E. Panipalk, Growing Hierarchical Self Organising Map (GHSOM) toolbox: visualisations and enhancements, in: L. Wang, J.C. Rajapakse, K. Fukushima, S.Y. Lee, X. Yao (Eds.), Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age, ICONIP'02, Singapore, 2002, pp. 2537–2541.
[36] E.B. Hoffman, P.K. Sen, C.R. Weinberg, Within-cluster resampling, Biometrika 88 (2001) 1121–1134.
[37] R.H. Rieger, C.R. Weinberg, Analysis of clustered binary outcomes using within-cluster paired resampling, Biometrics 58 (2002) 332–341.
[38] H. White, A reality check for data snooping, Econometrica 68 (2000) 1097–1126.
[39] H.T. Lee, J.K. Yoder, A bivariate Markov regime switching GARCH approach to estimate time varying minimum variance hedge ratios, Appl. Econ. 39 (2007) 1253–1265.
[40] R.J. Myers, Estimating time-varying optimal hedge ratios on futures markets (reprinted from vol. 11, pp. 39–53, 1991), J. Futures Mark. 20 (2000) 73–87.
[41] D. Lien, Y.K. Tse, A.K.C. Tsui, Evaluating the hedging performance of the constant-correlation GARCH model, Appl. Financ. Econ. 12 (2002) 791–798.
[42] G.H. Moon, W.C. Yu, C.H. Hong, Dynamic hedging performance with the evaluation of multivariate GARCH models: evidence from KOSTAR index futures, Appl. Econ. Lett. 16 (2009) 913–919.
[43] H.N.E. Bystrom, The hedging performance of electricity futures on the Nordic power exchange, Appl. Econ. 35 (2003) 1–11.
[44] M.Y.L. Li, Dynamic hedge ratio for stock index futures: application of threshold VECM, Appl. Econ. 42 (2010) 1403–1417.
[45] K.F. Kroner, J. Sultan, Time-varying distribution and dynamic hedging with foreign currency futures, J. Financ. Quant. Anal. 28 (1993) 535–551.
[46] T. Choudhry, Short-run deviations and optimal hedge ratio: evidence from stock futures, J. Multinatl. Financ. Manag. 13 (2003) 171–192.
[47] D. Lien, X.D. Luo, Multiperiod hedging in the presence of conditional heteroskedasticity, J. Futures Mark. 14 (1994) 927–955.
[48] G. Moschini, R.J. Myers, Testing for constant hedge ratios in commodity markets: a multivariate GARCH approach, J. Empir. Financ. 9 (2002) 589–603.
[49] C. Wang, S.S. Low, Hedging with foreign currency denominated stock index futures: evidence from the MSCI Taiwan index futures market, J. Multinatl. Financ. Manag. 13 (2003) 1–17.
[50] T.G. Andersen, B.E. Sorensen, GMM estimation of a stochastic volatility model: a Monte Carlo study, J. Bus. Econ. Stat. 14 (1996) 328–352.
[51] C.T. Howard, L.J. Dantonio, Multiperiod hedging using futures: a risk minimization approach in the presence of autocorrelation, J. Futures Mark. 11 (1991) 697–710.
[52] D. Lien, X.D. Luo, Estimating multiperiod hedge ratios in cointegrated markets, J. Futures Mark. 13 (1993) 909–920.
[53] T.W. Liao, Clustering of time series data—a survey, Pattern Recognit. 38 (2005) 1857–1874.
[54] A.K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognit. Lett. 31 (2010) 651–666.
[55] M.R. Hassan, K. Ramamohanarao, J. Kamruzzaman, M. Rahman, M.M. Hossain, A HMM-based adaptive fuzzy inference system for stock market forecasting, Neurocomputing 104 (2013) 10–25.
[56] B.B. Mandelbrot, The variation of certain speculative prices, J. Bus. XXXVI (1963) 392–417.
[57] D. Lien, L. Yang, Spot-futures spread, time-varying correlation, and hedging with currency futures, J. Futures Mark. 26 (2006) 1019–1038.
[58] A.P. Chen, Y.C. Hsu, Dynamic physical behavior analysis for financial trading decision support, IEEE Comput. Intell. Mag. 5 (2010) 19–23.
[59] Y.C. Hsu, A.P. Chen, Clustering time series data by SOM for the optimal hedge ratio estimation, in: Proceedings of the Third International Conference on Convergence and Hybrid Information Technology, ICCIT'08, 2008, pp. 1164–1169.
[60] Y.C. Hsu, A.P. Chen, SOM-based hedge ratio estimation with hierarchical cluster resampling, in: Proceedings of the International Conference on Computational Science and Engineering, CSE'09, 2009, pp. 368–373.
[61] Y.C. Hsu, A.P. Chen, Futures hedging using clusters with dynamic behavior of market fluctuation, in: Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), 2012, pp. 1–8.
[62] C. Dose, S. Cincotti, Clustering of financial time series with application to index and enhanced index tracking portfolio, Phys. A-Stat. Mech. Appl. 355 (2005) 145–151.
[63] R.F. Engle, Autoregressive conditional heteroscedasticity with estimates of the variance of UK inflation, Econometrica 50 (1982) 987–1007.
[64] R.F. Engle, K.F. Kroner, Multivariate simultaneous generalized ARCH, Econom. Theory 11 (1995) 122–150.
[65] T. Bollerslev, Generalized autoregressive conditional heteroscedasticity, J. Econom. 31 (1986) 307–327.
[66] T. Bollerslev, Modelling the coherence in short-run nominal exchange rates: a multivariate generalized ARCH model, Rev. Econ. Stat. 72 (1990) 498–505.
[67] T.H. Park, L.N. Switzer, Bivariate GARCH estimation of the optimal hedge ratios for stock index futures: a note, J. Futures Mark. 15 (1995) 61–67.
[68] H.N.E. Bystrom, The hedging performance of electricity futures on the Nordic power exchange, Appl. Econ. 35 (2003) 1–11.
[69] R.T. Baillie, R.J. Myers, Bivariate GARCH estimation of the optimal commodity futures hedge, J. Appl. Econom. 6 (1991) 109–124.
[70] L. Gagnon, G. Lypny, Hedging short-term interest risk under time-varying distributions, J. Futures Mark. 15 (1995) 767–783.
[71] M. Kavussanos, N. Nomikos, Hedging in the freight futures market, J. Deriv. 8 (2000) 41–58.

Yu-Chia Hsu is an assistant professor in the Department of Sports Information and Communication at National Taiwan University of Physical Education and Sport. He received his Ph.D. in Information Management from National Chiao Tung University. His research interests include decision support systems, data mining, e-commerce, and information applications in finance, sports, and related service industries.

An-Pin Chen is a professor at the Institute of Information Management, National Chiao Tung University. He received his Ph.D. in Industrial Engineering from the University of Southern California. His research interests include decision support systems, data mining, financial engineering, knowledge management, and computational intelligence applications in finance. He has published research papers in Soft Computing, Journal of Information Science, and Expert Systems with Applications, among others.
