
2006 IEEE International Conference on Systems, Man, and Cybernetics

October 8-11, 2006, Taipei, Taiwan

Network Motif Model: An Efficient Approach for Extracting Features from Relational Data

Chiung-Wei Huang, Ching-Chung Yu, Ching-Hao Mao, and Hahn-Ming Lee, Member, IEEE

Abstract—This paper proposes the Network Motif Model (NMM), a novel and efficient approach for extracting features from relational data. First, our approach constructs a data network according to the data relations. Then, significant sub-graphs are identified by extracting the basic network motifs from the data network, inspired by the motif concepts of complex networks. Finally, the first-order information of the original data is integrated with the extracted significant sub-graphs to create the network motif features of the relational data. Since basic motifs are easy to detect, the computation is efficient. Moreover, this kind of feature extraction not only preserves the relations in the data but also keeps the label information of the original data. Our experiments show that NMM achieves better classification accuracy than some inductive logic programming methods and probabilistic relational models. Thus, this model can be a potentially useful feature extraction strategy for statistical learning on multi-relational data.

Keywords: Relational data mining, network motif model, complex network, first-order logic.

I. INTRODUCTION

Statistical relational learning (SRL), which combines statistical learning with relational representations to predict properties of relational data, is an emerging research area in machine learning [1]. Several learning models are frequently used for feature extraction from relational data in SRL research, such as the first-order Bayesian classifier (1BC2) [2], probabilistic relational models (PRMs) [3], and relational Markov networks (RMNs) [4]. In general, there are two major feature extraction strategies in statistical relational learning, i.e., Inductive Logic Programming (ILP) [6] and Probabilistic Relational Models (PRMs) [5]. ILP is a deterministic classification approach that applies inductive methods to extract features. In contrast, PRMs employ probability distributions and relational logic to predict the properties of objects.

Manuscript received March 30, 2006.

C. W. Huang is a Ph.D. candidate at the Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology. He is also with the Department of Electronic Engineering, Ching Yun University (e-mail: cwhuang@mail.cyu.edu.tw).

C. C. Yu received his Master's degree from the Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology. He is currently a system analyst at KPMG, Taiwan (e-mail: M9115908@mail.ntust.edu.tw).

C. H. Mao is a Ph.D. student at the Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology (e-mail: d9415004@mail.ntust.edu.tw).

H. M. Lee is a Professor at the Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, #43, Sec. 4, Keelung Rd., Taipei 106, Taiwan. He is also with the Institute of Information Science, Academia Sinica, Taipei, Taiwan (e-mail: hmlee@mail.ntust.edu.tw).

However, inductive logic programming cannot extract multiple-label features [2], and probabilistic relational models lack the first-order information of the original data [7]. Thus, it has been shown that these kinds of approaches (ILP and PRMs) cannot achieve good performance, especially when extracting features from multi-relational data.

In the feature extraction process of PRMs, the first step is to discover sub-graphs. The probability distributions of the sub-graphs are then calculated as features. Finally, the features are fed to a Bayesian network to predict the properties of objects. Although multiple-label features for sub-graphs can be obtained this way, the interconnection characteristics of the sub-graphs are not considered.

Network motifs are one of the most active research topics in complex networks. Complex networks arise in many fields and can be used to describe structures in networking, sociology, ecology, pathology, and genomics. In general, network motifs are significant, repeating interconnection patterns in complex networks [8]; that is, extracting network motifs can be treated as identifying significant sub-graphs in a network structure. Compared with the sub-graphs identified by PRMs, the network motif strategy can identify far more sub-graphs. Furthermore, if the first-order information of the original data is integrated with the extracted sub-graph information, the extracted features become much more robust and significant than those of conventional approaches. Therefore, in this paper we propose a novel and efficient approach that introduces network motif concepts into feature extraction from relational data. The proposed approach can extract multiple-label features while retaining the data relations and the label information of the original data, improving the accuracy of later classification.

The remainder of this paper is organized as follows. The proposed system architecture is given in Section II. The evaluation approach and experimental results are presented in Section III. Finally, we conclude with the main findings in Section IV.

II. NETWORK MOTIF MODEL

Figure 1 illustrates the system architecture of the proposed model. The functions of its three major components, i.e., Network Construction, Network Motif Extraction, and Sub-graph Labeling, are briefly described below:

* Network Construction: constructs a data network according to the relationships in the data.


Figure 1. The system architecture of the network motif model.

Figure 2. An example of network construction.

Figure 3. The procedure of network motif extraction.

* Network Motif Extraction: extracts significant sub-graphs from the previous data network.

* Sub-graph Labeling: integrates first-order information into the sub-graphs of the network motifs and outputs them as the network motif features.

A. Network Construction

In statistical relational learning, most feature extraction studies focus on discovering significant sub-graphs from the relationships of relational data and then analyzing the probability distributions of the extracted sub-graphs to predict the properties of objects [9]. Our model adopts a similar idea: in the first step, we construct a data network that reveals the relationships among the relational data. Significant sub-graphs can then be discovered easily from the network structure for later processing. Figure 2 illustrates a network structure constructed from the mutagenesis dataset [15].
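To make this step concrete, the following is a minimal sketch of how such a data network might be built from relational tuples. The (source, target) tuple format and the build_network helper are our own illustrative assumptions, not the paper's implementation; real relational tables (e.g., mutagenesis atoms and bonds) would first be flattened into such pairs.

from collections import defaultdict

def build_network(relations):
    """Construct a directed data network from (source, target) relation tuples.

    Returns an adjacency map: node -> set of directly connected successors,
    i.e., the vertex sequences S(u) used later by the motif search.
    """
    adjacency = defaultdict(set)
    for source, target in relations:
        adjacency[source].add(target)
    return adjacency

# Hypothetical mutagenesis-style relations: a compound d100 linked to its
# atoms, plus bonds between atoms (names follow the paper's d100 example;
# the specific edges are invented for illustration).
relations = [
    ("d100", "d100_1"), ("d100", "d100_2"),
    ("d100_1", "d100_2"), ("d100_1", "d100_7"),
    ("d100_2", "d100_8"),
]
network = build_network(relations)
print(sorted(network["d100"]))  # ['d100_1', 'd100_2']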

B. Network Motif Extraction

In many studies, the network motif strategy is employed to deal with sub-graph discovery in complex networks [8]. Those studies state that repetitively occurring patterns, called network motifs, are meaningful and important for analyzing the structure of a complex network. We follow this concept to discover meaningful sub-graphs when extracting features from relational data. Thus, after the data network is constructed, a network motif extraction strategy is used to discover meaningful sub-graphs. Figure 3 illustrates the procedure of Network Motif Extraction.

In "Network Motif Extraction", an efficient Network-Motif search algorithm based on intersection operation is proposed to find out network motifs from data network. Then, "Significance Profile Discovery" searches the data network based on network motif patterns and significance profile to discover significant sub-graphs. After that, the discovered sub-graphs are treated as kernel features for the extraction output. In what follows, we introduce the detailed procedures of major steps in Network Motif Extraction, i.e., Network Motif Search and Significance Profile Discovery.

1) Network Motif Search

Based on well-known relations and out-degree links, 13 kinds of network motif sub-graphs have been recognized as the most representative building blocks of complex networks [8]. However, extracting all 13 types of network motifs is very time-consuming and costly [10]. According to several studies, the feed-forward loop (FFL) occurs frequently in complex networks [11-12] and is often chosen as a representative motif of complex network structure. Because the FFL is a 3-node graph, however, it cannot express some 4-node structures well [10]. Therefore, our network motif search adopts both the FFL and the bi-parallel (BP) motif as the significant sub-graph patterns. The topologies of the FFL and BP motifs are shown in Table 1.
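Since the topology drawings in Table 1 do not survive text extraction, the two motif shapes can be stated as edge sets. This reading is our reconstruction, inferred from the intersection-based search described below (FFL from S(u) ∩ S(v_i), BP from S(v_i) ∩ S(v_j)), not a figure from the paper:

# Reconstructed motif topologies (directed edges), inferred from the
# intersection-based search described in this section:
#   FFL (feed-forward loop): u -> v, u -> w, v -> w
#   BP  (bi-parallel):       u -> v1, u -> v2, v1 -> w, v2 -> w
FFL_EDGES = [("u", "v"), ("u", "w"), ("v", "w")]
BP_EDGES = [("u", "v1"), ("u", "v2"), ("v1", "w"), ("v2", "w")]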

"Network Motif Search" uses the interconnection characteristics of motifs to find out the intersections of network. Our method tries to intersect relational vertex sequences of nodes to obtain any possible intersection sets.

Then network motifs can be discovered from these intersection sets. For example, S(u): {v, w, x, y} is the set of vertex connecting to node u, S(v): {w, a, b} is the set of vertex connecting to node v, and S(x): {b, c, d} is the set of vertex connecting to node x. Note that as a hierarchical network topology, S(u) is the sets of first layer that connects to vertex

u. Also, S(v) and S(x) are the sets of second layer and third layer vertex sets connected to u, respectively. After taking an intersection operation on S(u) and S(v), the result is {w}.

Afterwards, we can easily find that the FFL network motif pattern is u-v-w. Besides, the intersection set of S(v) and S(x) is {b}; therefore, the BP network motif pattern, u-v-x-b, is found. According to this principle, we can discover all

[Figure residue: an example tree rooted at compound d100, labeled nonmutagenic(d100_1, d100_2, ..., d100_26), with atom nodes d100_1, d100_2, d100_7, and d100_8; the diagram itself is not recoverable from the extraction.]

Sequence-Motif-Search(root vertex u)
(1)  for each vertex u in the data network G(V, E)
(2)    let S(u) be the set of vertices directly connected to u
(3)    let S(v) be the set of vertices directly connected to v, for each v in S(u)
(4)    let c be the number of vertices in S(u), indexed v_1, ..., v_c
(5)    i <- 1
(6)    while i <= c
(7)      find each v_s in S(u) ∩ S(v_i) and record FFL(u, v_i, v_s)
(8)      j <- i + 1
(9)      while j <= c
(10)       find each w in S(v_i) ∩ S(v_j) and record BP(u, v_i, v_j, w); j <- j + 1
(11)     i <- i + 1
(12)  return all found FFL(u, v_i, v_s) and BP(u, v_i, v_j, w)

Figure 4. Sequence-Motif-Search algorithm.

Because the intersection operation is simple and fast, our Network Motif Search is not computationally intensive. Figure 4 depicts the Sequence-Motif-Search algorithm.
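As a concrete illustration, here is a minimal runnable sketch of the intersection-based search for a single root vertex, assuming the adjacency-map representation from the earlier network construction sketch; the function and variable names are ours, not the paper's.

def sequence_motif_search(adjacency, u):
    """Find FFL and BP motifs rooted at u via set intersections.

    adjacency: dict mapping each node to the set S(node) of its successors.
    Returns (ffls, bps), where ffls holds (u, v_i, v_s) triples and bps holds
    (u, v_i, v_j, w) quadruples, mirroring FFL(...) and BP(...) in Figure 4.
    """
    s_u = adjacency.get(u, set())
    children = sorted(s_u)                    # v_1, ..., v_c
    ffls, bps = [], []
    for i, v_i in enumerate(children):
        s_vi = adjacency.get(v_i, set())
        for v_s in s_u & s_vi:                # S(u) ∩ S(v_i) -> FFL(u, v_i, v_s)
            ffls.append((u, v_i, v_s))
        for v_j in children[i + 1:]:
            s_vj = adjacency.get(v_j, set())
            for w in s_vi & s_vj:             # S(v_i) ∩ S(v_j) -> BP(u, v_i, v_j, w)
                bps.append((u, v_i, v_j, w))
    return ffls, bps

# The paper's worked example: S(u) = {v, w, x, y}, S(v) = {w, a, b}, S(x) = {b, c, d}.
adjacency = {"u": {"v", "w", "x", "y"}, "v": {"w", "a", "b"}, "x": {"b", "c", "d"}}
print(sequence_motif_search(adjacency, "u"))
# FFL u-v-w and BP u-v-x-b are among the results, as in the text above.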

2) Significance Profile Discovery

A network motif can be further decomposed into smaller significant components, called significance profiles. Previous research has shown that significance profiles can be discovered from the feed-forward loop (FFL) and bi-parallel (BP) graphs [9]. Table 1 provides a cross-reference between network motifs and significance profiles. The first row shows that an FFL motif can be composed of backward-forward (BF), forward-backward (FB), or forward-forward (FF) sub-graphs, and the second row shows that a BP motif can likewise be composed of BF, FB, or FF sub-graphs. We can therefore conclude that "Network Motif Extraction" can also be used to search for significant sub-graphs in the data network. Figure 5 shows an example of significance profile discovery; the green-colored edges are the discovered significance profiles.

TABLE 1. THE SIGNIFICANCE PROFILE TABLE

Motif | BF        | FB        | FF
FFL   | [diagram] | [diagram] | [diagram]
BP    | [diagram] | [diagram] | [diagram]

(The cells of the original table show the corresponding sub-graph topologies as diagrams, which do not survive text extraction.)
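The paper does not spell out the BF/FB/FF definitions in the surviving text, so the sketch below encodes one plausible reading: each profile is a two-edge sub-path a-b-c of a motif, classified by the directions of its two edges (forward-forward a->b->c, forward-backward a->b<-c, backward-forward a<-b->c). Both this interpretation and the decompose helper are our assumptions for illustration only.

from itertools import permutations

def decompose(edges):
    """Classify two-edge sub-paths a-b-c of a motif by edge direction --
    an assumed reading of the BF/FB/FF significance profiles."""
    es = set(edges)
    nodes = {n for e in edges for n in e}
    out = []
    for a, b, c in permutations(nodes, 3):
        if (a, b) in es and (b, c) in es:
            out.append(("FF", a, b, c))          # a -> b -> c
        if a < c and (a, b) in es and (c, b) in es:
            out.append(("FB", a, b, c))          # a -> b <- c
        if a < c and (b, a) in es and (b, c) in es:
            out.append(("BF", a, b, c))          # a <- b -> c
    return out

# Decomposing an FFL (u -> v, u -> w, v -> w) under this reading yields
# one profile of each kind: FF u->v->w, FB u->w<-v, BF v<-u->w.
print(decompose([("u", "v"), ("u", "w"), ("v", "w")]))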

Figure 5. An example of significance profile discovery. The green-colored edges are the significance profiles discovered.

C. Sub-graph Labeling

After the significant sub-graphs are extracted from the data network, the data relations have been captured completely. However, the extracted information is still insufficient to depict the label information of the original data, i.e., the first-order information. Many kinds of multi-relational data, e.g., biological regulatory networks, carry such original label information [11-12]. Thus, in this phase, named Sub-graph Labeling, the proposed method integrates the first-order information into the extracted significant sub-graphs. After labeling, we call the output features "network motif features"; they are expected to provide richer and more complete information for multi-relational data than conventional approaches. Figure 6 illustrates an example of sub-graph labeling: the left-hand side shows an extracted significance profile and the right-hand side shows the first-order information from the original data, which are integrated into a single significant feature. Such a feature not only preserves the data relations but also keeps the important first-order information in the feature space, so it expresses multi-relational data well and is ready for the later classification task.

Figure 6. An example of sub-graph labeling: the extracted significance profile over nodes d100_1, d100_2, and c (left) is integrated with the first-order facts atomel(d100_1,c) and atomel(d100_2,c) (right).
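A minimal sketch of this labeling step under our own representation choices (the label_subgraph helper and the dict-based fact store are illustrative, not the paper's code): each node of a discovered profile is joined with its first-order facts to form one composite feature.

def label_subgraph(profile_nodes, first_order_facts):
    """Attach first-order facts to a discovered sub-graph's nodes.

    profile_nodes: nodes of one significance profile, e.g. from the motif search.
    first_order_facts: dict mapping a node to its logical facts (strings).
    Returns one composite "network motif feature" as a sorted tuple.
    """
    feature = []
    for node in profile_nodes:
        feature.append(node)
        feature.extend(first_order_facts.get(node, []))
    return tuple(sorted(feature))

# Mirroring the Figure 6 example:
facts = {"d100_1": ["atomel(d100_1,c)"], "d100_2": ["atomel(d100_2,c)"]}
print(label_subgraph(["d100_1", "d100_2", "c"], facts))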

III. EXPERIMENTAL EVALUATION

In this section, we describe our experiments on classification tasks to demonstrate the effectiveness of our approach. First, the experimental design, including the classification tools and datasets, is introduced. Afterwards, the experimental results on several relational datasets are discussed to confirm the performance of the proposed method.

A. Experimental Design

Because network motif features are very similar to the features used in text classification, we choose a well-known and robust classification model, the support vector machine (SVM), as our classifier [14]. Since LIBSVM provides linear, non-linear, and multi-class classification capabilities, it is chosen as our evaluation platform.
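For readers who want to reproduce this setup, the sketch below trains an SVM on a bag-of-features encoding of the motif features. We use scikit-learn, whose SVC is backed by LIBSVM, and 10-fold cross-validation as in the mutagenesis experiment; the toy documents and labels are invented stand-ins for the real feature files.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy stand-ins: each compound is a "document" of its network motif features.
docs = [
    "ffl(u,v,w) atomel(d100_1,c)", "bp(u,v,x,b) atomel(d100_2,c)",
    "ffl(u,v,w) atomel(d100_2,c)", "bp(u,v,x,b) atomel(d100_1,c)",
] * 5
labels = [1, 0, 1, 0] * 5                      # 1 = mutagenic, 0 = non-mutagenic

# Count each whitespace-delimited feature token, then evaluate with 10-fold CV.
X = CountVectorizer(token_pattern=r"\S+").fit_transform(docs)
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f}")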


First, datasets from the inductive logic programming literature are used to evaluate our model, because inductive logic programming is frequently applied to the data mining of relational or structured data.

Three experiments are conducted in this study, on the mutagenesis [15], Alzheimer's disease [16], and Predictive Toxicology Challenge 2000-2001 [20] datasets. The mutagenesis dataset consists of 188 regression-friendly and 42 regression-unfriendly compounds and is designed for predicting mutagenicity directly or indirectly related to aromatic and heteroaromatic nitro compounds. The Alzheimer's disease dataset is dedicated to comparing specific anti-Alzheimer drug properties: low toxicity, high acetylcholinesterase inhibition, good reversal of scopolamine-induced deficiency, and inhibition of amine re-uptake. The US National Toxicology Program (NTP) held a competition named the Predictive Toxicology Challenge 2000-2001. In the competition, a dataset consisting of 417 chemical compounds is provided as training data. Each compound was tested on male rats, female rats, male mice, and female mice to identify whether it shows equivocal evidence of a carcinogenic effect or no evidence of a carcinogenic effect.

B. Experimental Evaluation

In this sub-section, we evaluate the classification performance of the Network Motif Model on three tasks: identifying mutagenic compounds, identifying drugs against Alzheimer's disease, and the Predictive Toxicology Challenge 2000-2001.

1) Mutagenesis

The mutagenesis dataset is primarily used for distinguishing mutagenic from non-mutagenic compounds [15]. It has two versions, regression-friendly and regression-unfriendly. Because the regression-unfriendly version is the smaller one, the regression-friendly version with 188 molecules, of which 125 are positive and the rest negative, is used in our experiments.

For comparison, the experimental results of 1BC and 1BC2 in Table 2 are taken from Flach et al. [2]. In the first experiment, only the lumo and logp properties are used to distinguish mutagenic from non-mutagenic compounds. The proposed model is then applied to extract features from the atom and bond structure, and 10-fold cross-validation is used to evaluate accuracy. The experimental results show that our model performs very well (92.21% accuracy). In the tables, numbers in parentheses indicate the prediction rate over all data.

2) Alzheimer's Disease

The Alzheimer's disease dataset is designed to compare the properties of drugs in the following categories [16]: (1) inhibiting amine re-uptake, (2) low toxicity, (3) high acetylcholinesterase inhibition, and (4) good reversal of scopolamine-induced memory deficiency. The dataset is used to analyze the structure-activity relationships of drugs for treatment.

In Table 3, we compare the classification accuracy of the proposed Network Motif Model (NMM) with 1BC, 1BC2, and RAC on each of the four properties listed above. RAC is the Reconsider-And-Conquer rule learning algorithm proposed by Bostrom et al. [16]. The results of 1BC and 1BC2 are again from Flach et al. [2]. The results demonstrate that our Network Motif Model achieves excellent performance.

Table 4 compares the accuracy of using sub-graph features alone against using sub-graph features combined with first-order information. The results confirm that combining sub-graphs with first-order information improves the classification accuracy considerably.

3) Predictive Toxicology Challenge (PTC)

Mukund Deshpande et al. [20] proposed the Frequent Sub-Graph Discovery (FSG) algorithm to deal with the PTC problem, evaluating classification performance with 5-fold cross-validation on a support vector machine. In addition, the Receiver Operating Characteristic (ROC) curve is used to evaluate predictive performance in classification, so we use the PERF evaluation software to measure the ROC curves of our experimental results for comparison. Table 5 lists the area under the ROC curve (AUC) for the proposed Network Motif Model (NMM) and the Frequent Sub-Graph Discovery algorithm (FSG). The proposed model achieves better classification performance on three of the four targets, with far fewer features.
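The paper uses the PERF tool for the ROC measurement; as an equivalent check, AUC can be computed from SVM decision scores with scikit-learn (our substitution, not the authors' tooling):

# Alternative to the PERF tool (our substitution): compute AUC from scores.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # invented labels
y_score = [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1]   # invented decision values
print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")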

Finally, in Table 6, we compare the computational complexity of our model with related models. As mentioned above, "Network Motif Extraction" performs an intersection operation on the vertex sequence of each node, so in the worst case, given a set of n nodes, n^2 computations are required; "Sub-graph Labeling" then needs n additional operations. The overall worst-case computational complexity is therefore O(n^2). This confirms that our model not only works well but is also efficient.
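Spelled out, the worst-case count behind this O(n^2) claim can be written as follows (a sketch of the argument; the grouping of terms is ours):

\[
T(n) \;\le\; \underbrace{n \cdot n}_{\text{intersections}} \;+\; \underbrace{n}_{\text{labeling}} \;=\; n^{2} + n \;=\; O(n^{2}).
\]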

IV. CONCLUSION

Current major feature extraction approaches, e.g., ILP and PRMs, cannot extract enough significant features for predicting the properties of relational data to achieve a good prediction rate. Features extracted by ILP contain only a single label feature, and although PRMs can handle multiple-label features, those features are represented by probability distributions and do not take the first-order information of the original data into account. Therefore, in this paper we proposed a novel and efficient approach, named the Network Motif Model (NMM), for extracting features from relational data. First, our approach constructs a data network according to the data relations. Then, significant sub-graphs are identified by extracting the basic network motifs from the network structure, inspired by the motif concepts of complex networks. Finally, the first-order information of the original data is integrated with the extracted sub-graphs to form the network motif features of the relational data. Because basic motifs are easy to detect, the computation is efficient.


TABLE 2. ACCURACY ON THE MUTAGENESIS DATA SET

Settings              | 1BC (%) [2] | 1BC2 (%) [2] | Progol (%) [15] | Regression (%) [15] | NMM (%) (proposed)
lumo and logp         | 71.4        | 71.4         | -               | 67                  | 84.04
Atoms and bonds       | 78.6        | 76.2         | -               | -                   | 84.57
Include lumo and logp | 81.0        | 83.3         | 83              | -                   | 92.21
Include inda and ind1 | 78.6        | 83.3         | 83              | -                   | 85.64

TABLE 3. ACCURACY ON THE ALZHEIMER'S DISEASE DATA SET

Target                               | 1BC (%) [2] | 1BC2 (%) [2] | RAC (%) [16] | Majority class (%) | NMM (%) (proposed)
Inhibit amine re-uptake              | 79.4        | 78.2         | 85.6         | 74.9               | 89.19
Low toxicity                         | 74.8        | 74.8         | 81.8         | 67.6               | 94.94
High acetylcholinesterase inhibition | 68.1        | 76.4         | 75.0         | 51.6               | 87.10
Reversal of memory deficiency        | 76.1        | 69.2         | 60.2         | 76.6               | 83.02

TABLE 4. ACCURACY ON THE ALZHEIMER'S DISEASE DATA SET BY NETWORK MOTIF ANALYSIS

Target                               | NMM (%) (sub-graphs only) | NMM (%) (sub-graphs + first-order information)
Inhibit amine re-uptake              | 60.81                     | 89.19
Low toxicity                         | 57.47                     | 94.94
High acetylcholinesterase inhibition | 55.47                     | 87.10
Reversal of memory deficiency        | 61.29                     | 83.02

TABLE 5. THE AREA UNDER THE ROC CURVE (AUC) ON THE PREDICTIVE TOXICOLOGY CHALLENGE DATA SET

Target      | FSG: Number of features | FSG: AUC (%) | NMM (proposed): Number of features | NMM (proposed): AUC (%)
Male rats   | 7504                    | 62.6         | 99                                 | 77.98
Female rats | 25790                   | 63.4         | 99                                 | 55.25
Male mice   | 24510                   | 65.5         | 99                                 | 93.75
Female mice | 7875                    | 67.3         | 99                                 | 80.58

TABLE 6. THE COMPARISON OF COMPUTATIONAL COMPLEXITY

Feature extraction model            | Background       | Time complexity
Probabilistic relational models [5] | Bayesian network | O(n^2)
Relational Markov networks [4]      | Bayesian network | O(n^2 + nk + c(mM))
Network motif model                 | Network motifs   | O(n^2)

Moreover, this kind of feature extraction not only preserves the relations in the data but also keeps the label information of the original data. In our experiments, the proposed model outperformed several inductive logic programming methods and probabilistic relational models in classification accuracy. Thus, this model can be a potentially useful feature extraction strategy for statistical learning on multi-relational data.

ACKNOWLEDGMENT

This work is supported in part by the National Digital Archive Program-Research & Development of Technology Division (NDAP-R&DTD), the National Science Council of Taiwan under Contract No. NSC 94-2422-H-001-006, and by the Taiwan Information Security Center (TWISC), National Science Council, under Contract No. NSC 94-3114-P-001-001-Y.


REFERENCES

[1] Matthew Richardson and Pedro Domingos, "Markov logic networks", Machine Learning, special issue on multi-relational data mining and statistical relational learning, to appear.

[2] Peter A. Flach and Nicolas Lachiche, "Naive Bayesian classification of structured data", Machine Learning, Vol. 57, No. 3, pp. 233-269, 2004.

[3] Lise Getoor, Nir Friedman, Daphne Koller and Benjamin Taskar, "Learning probabilistic models of link structure", Journal of Machine Learning Research, Vol. 3, No. 4-5, special issue on the Eighteenth International Conference on Machine Learning (ICML2001), pp. 679-708, 2003.

[4] Nir Friedman and Daphne Koller, "Being Bayesian about network structure: a Bayesian approach to structure discovery in Bayesian networks", Machine Learning, Vol. 50, No. 1-2, pp. 95-125, 2003.

[5] Lise Getoor, Nir Friedman, Daphne Koller and Benjamin Taskar, "Learning probabilistic models of link structure", Journal of Machine Learning Research, Vol. 3, No. 4-5, special issue on the Eighteenth International Conference on Machine Learning (ICML2001), pp. 679-708, 2003.

[6] Stephen Muggleton and Luc De Raedt, "Inductive logic programming: theory and methods", Journal of Logic Programming, Vol. 19/20, pp. 629-679, 1994.

[7] Saso Dzeroski and Nada Lavrac, editors, "Relational data mining", Springer-Verlag, 2001.

[8] Ron Milo, Shai Shen-Orr, Shalev Itzkovitz, Nadav Kashtan, Dmitri Chklovskii and Uri Alon, "Network motifs: simple building blocks of complex networks", Science, Vol. 298, No. 5594, pp. 824-827, 2002.

[9] Ron Milo, Shalev Itzkovitz, Nadav Kashtan, Reuven Levitt, Shai Shen-Orr, Inbal Ayzenshtat, Michal Sheffer and Uri Alon, "Superfamilies of evolved and designed networks", Science, Vol. 303, No. 5663, pp. 1538-1542, 2004.

[10] Esti Yeger-Lotem, Shmuel Sattath, Nadav Kashtan, Shalev Itzkovitz, Ron Milo, Ron Y. Pinter, Uri Alon and Hanah Margalit, "Network motifs in integrated cellular networks of transcription-regulation and protein-protein interaction", Proceedings of the National Academy of Sciences (PNAS), Vol. 101, No. 16, pp. 5934-5939, 2004.

[11] Shmoolik Mangan, Alon Zaslaver and Uri Alon, "The coherent feedforward loop serves as a sign-sensitive delay element in transcription networks", Journal of Molecular Biology, Vol. 334, No. 2, pp. 179-347, 2003.

[12] Shiraz Kalir, Shmoolik Mangan and Uri Alon, "A coherent feed-forward loop with a SUM input function prolongs flagella expression in Escherichia coli", Molecular Systems Biology, doi:10.1038/msb4100010, 2005.

[13] Shalev Itzkovitz and Uri Alon, "Subgraphs and network motifs in geometric networks", Physical Review E, Vol. 71, 026117, pp. 1-9, 2005.

[14] Hyunsoo Kim, Peg Howland and Haesun Park, "Dimension reduction in text classification with support vector machines", Journal of Machine Learning Research, Vol. 6, pp. 37-53, 2005.

[15] Stephen Muggleton, Ashwin Srinivasan, Ross Donald King and Michael J. E. Sternberg, "Biochemical knowledge discovery using inductive logic programming", in Proceedings of the First International Conference on Discovery Science, Lecture Notes in Computer Science, Vol. 1532, pp. 326-341, 1998.

[16] Henrik Bostrom and Lars Asker, "Combining divide-and-conquer and separate-and-conquer for efficient and effective rule induction", in Proceedings of the Ninth International Workshop on Inductive Logic Programming, Lecture Notes in Computer Science, Vol. 1634, pp. 33-43, 1999.

[17] Ross Donald King, Stephen Muggleton, Richard A. Lewis and Michael J. E. Sternberg, "Drug design by machine learning: the use of inductive logic programming to model the structure-activity relationships of trimethoprim analogues binding to dihydrofolate reductase", Proceedings of the National Academy of Sciences, Vol. 89, No. 23, pp. 11322-11326, 1992.

[18] Robert Burbidge, Matthew Trotter, Bernard F. Buxton and Sean B. Holden, "Drug design by machine learning: support vector machines for pharmaceutical data analysis", Computers and Chemistry, Vol. 26, No. 1, pp. 5-14, 2001.

[19] Susanne Hoche and Stefan Wrobel, "Scaling boosting by margin-based inclusion of features and relations", in Proceedings of the 13th European Conference on Machine Learning, LNCS 2430, pp. 148-160, Springer, August 2002.

[20] Mukund Deshpande, Michihiro Kuramochi, Nikil Wale and George Karypis, "Frequent substructure-based approaches for classifying chemical compounds", IEEE Transactions on Knowledge and Data Engineering, Vol. 17, No. 8, pp. 1036-1050, 2005.

[21] Akihiro Inokuchi, Takashi Washio and Hiroshi Motoda, "Complete mining of frequent patterns from graphs: mining graph data", Machine Learning, Vol. 50, No. 3, pp. 321-354, 2003.

[22] Radu Dobrin, Qasim K. Beg, Albert-Laszlo Barabasi and Zoltan N. Oltvai, "Aggregation of topological motifs in the Escherichia coli transcriptional regulatory network", BMC Bioinformatics, Vol. 5, pp. 1-8, 2004.

[23] Xiao Fan Wang, "Complex networks: topology, dynamics and synchronization", International Journal of Bifurcation and Chaos, Vol. 12, No. 5, pp. 885-916, 2002.
