
Input node: There are six input nodes in our proposed neural network system.

As shown in Table 7, these values represent the emotion and the volume of discussion of the three data sources, respectively.

Table 7. The definition of the six variables in the input vector

Variable  Definition
x1        Facebook emotion
x2        volume of Facebook discussion and its positive/negative emotion
x3        News emotion
x4        volume of News discussion and its positive/negative emotion
x5        emotion of the PTT ForeignEX board
x6        volume of PTT discussion and its positive/negative emotion
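For concreteness, a minimal sketch of how such an input vector could be assembled (the function and parameter names here are our own, not the thesis's):

    def build_input_vector(fb_emotion, fb_volume, news_emotion,
                           news_volume, ptt_emotion, ptt_volume):
        """Assemble the six input variables x1..x6 in the order of Table 7."""
        return [fb_emotion, fb_volume, news_emotion,
                news_volume, ptt_emotion, ptt_volume]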

Hidden node: The number of hidden nodes is one initially; it is adjusted during the learning process.

Output node: The output layer has two output nodes whose values are binary. An output of (1, 1) represents appreciation, (1, -1) or (-1, 1) represents unchanged, and (-1, -1) represents depreciation. That is, the movements of the exchange rate are divided into three groups, appreciation, unchanged, and depreciation, as shown in Table 8.

Desired output: The desired output is the real movement of the exchange rate. Appreciation is represented as (1, 1), depreciation as (-1, -1), and unchanged as (1, -1).

Table 8. Representation of the three movement groups


Category Description

(1, 1) Appreciation

(1, -1) or (-1, 1) Unchanged

(-1, -1) Depreciation
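As a minimal sketch, the mapping in Table 8 can be expressed as a small decoding function (the function name is hypothetical, not from the thesis):

    def decode_output(o1, o2):
        """Map a pair of binary node outputs (1 or -1) to a movement class."""
        if (o1, o2) == (1, 1):
            return "appreciation"
        if (o1, o2) == (-1, -1):
            return "depreciation"
        return "unchanged"  # (1, -1) or (-1, 1)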

The reasoning neural network can be used as a classifier that learns to assign each stimulus to one of two classes.

Figure 5. The flowchart of the learning process of Reasoning Neural Networks

The stopping criterion of learning is the Linearly Separating Condition (LSC). At the stage of learning the kth training case, let $K = K_1 \cup K_2$, where $K_1$ and $K_2$ are the sets of indices of training cases in class 1 and class 2, respectively. The condition $\mathrm{LSC}(k) = \text{True}$ if

$$\min_{c \in K_1(k)} O(B_c, Y, X) > \max_{c \in K_2(k)} O(B_c, Y, X)$$

and $\mathrm{LSC}(k) = \text{False}$ otherwise.
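As a rough sketch (the names here are ours), the LSC amounts to comparing the smallest class-1 output against the largest class-2 output over the cases learned so far:

    def lsc_satisfied(outputs_class1, outputs_class2):
        """Linearly Separating Condition: every class-1 output must
        exceed every class-2 output (both lists assumed non-empty)."""
        return min(outputs_class1) > max(outputs_class2)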


Figure 6. The Thinking Mechanism

Figure 7. The Cramming Mechanism


Figure 8. The Reasoning Mechanism

When $\mathrm{LSC}(k) = \text{True}$, the following value $v$ can be used for correct classification at the output node:

$$v = \frac{\min_{c \in K_1(k)} O(B_c, Y, X) + \max_{c \in K_2(k)} O(B_c, Y, X)}{2}$$

so that

$$B \in \begin{cases} \text{Class 1}, & \text{if } O(B, Y, X) \geq v \\ \text{Class 2}, & \text{if } O(B, Y, X) < v \end{cases}$$

Note that $\mathrm{LSC}(k) = \text{True}$ is a sufficient, but not a necessary, condition for the goal of 2-class learning.
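Continuing the sketch above (again with hypothetical names), the midpoint threshold $v$ and the resulting classifier could look like:

    def classify(output_value, outputs_class1, outputs_class2):
        """Threshold a network output at the midpoint v between the
        separated classes: at or above v is Class 1, below is Class 2."""
        v = (min(outputs_class1) + max(outputs_class2)) / 2
        return "Class 1" if output_value >= v else "Class 2"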

Table 9. Definition of all variables

Variable  Definition
B_c       the input vector of the cth case
X         $(X_1^T, X_2^T, \ldots, X_p^T)^T$
Y         $(Y_1^T, Y_2^T, \ldots, Y_q^T)^T$


Figure 9. The proposed neural network initial architecture

The process of data collection and processing averages six minutes per day across the three sources. In order to feed the data to the neural networks, we need to transform the fetched textual data into a numerical representation. We derive sentiment scores by counting positive and negative words according to NTUSD. First, we read all the positive and negative words of the NTUSD into a large Python dictionary. Then we read each fetched document and compare it word by word against this dictionary. The emotion value starts at zero; each match with a positive word adds one, and each match with a negative word subtracts one. In this study, we do not distinguish between weak and strong words. The result is obtained by simply counting positive and negative words for each day.
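A minimal sketch of this counting step, assuming NTUSD has been loaded from two plain-text word lists and that each document has already been segmented into words (the file paths and function names here are hypothetical):

    def load_ntusd(positive_path, negative_path):
        """Load the NTUSD positive/negative word lists into one dictionary."""
        lexicon = {}
        with open(positive_path, encoding="utf-8") as f:
            for word in f.read().split():
                lexicon[word] = 1
        with open(negative_path, encoding="utf-8") as f:
            for word in f.read().split():
                lexicon[word] = -1
        return lexicon

    def emotion_value(words, lexicon):
        """Start at zero; add one per positive word, subtract one per negative."""
        return sum(lexicon.get(w, 0) for w in words)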

Table 10. Example of the transformed data

Date  Facebook

Figure 10. Volume of discussion and their positive/negative emotion

The 120 daily transaction data points are in chronological order, so we can regard them as time series data. We believe there are concept drift and outliers in the exchange rate environment. In order to deal with the outlier problem in a concept-drifting environment, we implement the work of Huang et al. (2014) in the first part of the neural networks. First, we take the first window, where M = 1, into consideration. The training block is made up of N elements, and the testing block consists of B elements. In this study, the first training block is made up of the 1st to 90th data points, and the first testing block is composed of the 91st to 95th data points. Moreover, the initial SLFN is trained on the training block. The envelope module wraps the 95% of the data with the least residuals, treating the data far from the fitting function as potential outliers.



Figure 11. The implementation of moving windows in this experiment

Over time, M becomes 2: the first 5 data points are discarded, the training block moves to the 6th to 95th, and the testing block slides to the 96th to 100th. This mechanism repeats until there is no incoming data. In the first part, we have six windows in total.
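A rough sketch of this windowing scheme, assuming 120 chronologically ordered observations, a training block of 90, and a testing block of 5 as described above:

    def moving_windows(data, train_size=90, test_size=5):
        """Yield (M, training_block, testing_block), sliding by test_size."""
        m, start = 1, 0
        while start + train_size + test_size <= len(data):
            train = data[start : start + train_size]
            test = data[start + train_size : start + train_size + test_size]
            yield m, train, test
            start += test_size
            m += 1

With 120 observations this yields exactly six windows, matching the description above.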

After the first part of the learning process, we remove the potential outliers from the input data, and at the same time we obtain a trained SLFN. Then we apply the learning algorithm of the RN. First, the algorithm reads the data in sequence. The second step is the thinking mechanism, which checks whether the LSC is satisfied. If the LSC is satisfied, the next training case is added to the neural network; if not, the algorithm proceeds to the cramming mechanism. The cramming strategy recruits extra hidden nodes, so the total number of hidden nodes can become large if extra nodes are added frequently. To avoid a huge number of hidden nodes, the next step is the reasoning mechanism, which contains a pruning procedure designed to decrease the total number of adopted hidden nodes: each hidden node is ignored in turn and the thinking mechanism is rerun; if the Linearly Separating Condition is still satisfied, the nth hidden node is removed; otherwise, it is restored.
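A high-level sketch of this loop follows; the network object and its methods are hypothetical stand-ins for the thesis's thinking, cramming, and reasoning mechanisms, not a definitive implementation:

    def rn_learning(cases, network):
        """Sequentially learn cases with thinking, cramming, and pruning."""
        for case in cases:
            network.add_case(case)
            # thinking mechanism: check the Linearly Separating Condition
            if not network.lsc_satisfied():
                # cramming mechanism: recruit extra hidden nodes for this case
                network.recruit_hidden_nodes(case)
            # reasoning mechanism: try to prune each hidden node in turn
            for node in list(network.hidden_nodes):
                network.disable(node)
                if network.lsc_satisfied():
                    network.remove(node)   # the LSC still holds without it
                else:
                    network.enable(node)   # this node is needed; restore it
        return network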

When the learning process finishes, we feed the testing data to the reasoning neural network for classification. Then we check whether the forecast matches the actual movement. In the end, we obtain the accuracy rate of the reasoning neural network.
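The accuracy rate is simply the fraction of testing cases whose forecast matches the actual movement, e.g.:

    def accuracy_rate(forecasts, actuals):
        """Fraction of cases where the forecast equals the true movement."""
        return sum(f == a for f, a in zip(forecasts, actuals)) / len(actuals)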

We divide the data sources into seven cases for forecasting exchange rate movements.

1. Facebook

Figure 12. Facebook Forecasting Result (M = 1~3)


Figure 13. Facebook Forecasting Result (M = 4~6)

In the first part of the proposed neural networks, it took nearly 50 minutes per window to remove potential outliers. Then, in the second part, it took almost 6 hours per window to classify the output into the three conditions. Finally, using Facebook data for forecasting, we obtained a 43% forecasting accuracy rate.

Table 11. Forecasting Accuracy Rate (Facebook)
(A = actual movement, given by the rows; F = forecast, given by the ↑ / - / ↓ columns within each evaluation set)

Window            1st NN                                  2nd NN
                  inliers (85)        all (90)            all (85)            testing (5)
  A \ F           ↑     -     ↓       ↑     -     ↓       ↑     -     ↓       ↑     -     ↓
M = 1 (1 – 90)
  ↑             47/47  N/A   N/A    52/52  N/A   N/A    47/47  N/A   N/A     N/A   N/A   2/2
  -              N/A   1/1   N/A     N/A   1/1   N/A     N/A   1/1   N/A     N/A   N/A   N/A
  ↓              N/A   N/A  37/37    N/A   N/A  37/37    N/A   N/A  37/37    2/3   N/A   1/3
M = 2
  ↑             47/47  N/A   N/A    52/52  N/A   N/A    47/47  N/A   N/A     1/1   N/A   N/A
