
Application of Support Vector Machines in Bioinformatics

by

Jung-Ying Wang

A dissertation submitted in partial fulfillment of the requirements for the degree of

Master of Science

(Computer Science and Information Engineering) in National Taiwan University

2002

© Jung-Ying Wang 2002. All Rights Reserved.


ABSTRACT

Recently a new learning method called support vector machines (SVM) has shown comparable or better results than neural networks on some applications. In this thesis we explore the possibility of using SVM for three important issues of bioinformatics: the prediction of protein secondary structure, multi-class protein fold recognition, and the prediction of human signal peptide cleavage sites. By using similar data, we demonstrate that SVM can easily achieve accuracy comparable to that of neural networks. Therefore, applying SVM to more bioinformatics applications is a promising direction for future work.


ACKNOWLEDGEMENTS

I would like to thank Chih-Jen Lin, my advisor, for his many suggestions and constant support during my research.

To my family I give my appreciation for their support and love over the years.

Without them this work would have never come into existence.

Taipei 106, Taiwan Jung-Ying Wang

December 3, 2001


TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES

CHAPTER

I. Introduction
1.1 Background
1.2 Protein Secondary Structure Prediction
1.3 Protein Fold Prediction
1.4 Signal Peptide Cleavage Sites

II. Support Vector Machines
2.1 Basic Concepts of SVM
2.2 Multi-class SVM
2.2.1 One-against-all Method
2.2.2 One-against-one Method
2.3 Software and Model Selection

III. Protein Secondary Structure Prediction
3.1 The Goal of Secondary Structure Prediction
3.2 Data Set Used in Protein Secondary Structure
3.3 Coding Scheme
3.4 Assessment of Prediction Accuracy

IV. Protein Fold Recognition
4.1 The Goal of Protein Fold Recognition
4.2 Data Set and Feature Vectors
4.3 Multi-class Methodologies for Protein Fold Classification
4.4 Measure for Protein Fold Recognition

V. Prediction of Human Signal Peptide Cleavage Sites
5.1 The Goal of Predicting Signal Peptide Cleavage Sites
5.2 Coding Schemes and Feature Vector Extraction
5.3 Using SVM to Combine Cleavage Sites Predictors
5.4 Measures of Cleavage Sites Prediction Accuracy

VI. Results
6.1 Comparison of Protein Secondary Structure Prediction
6.2 Comparison of Protein Fold Recognition
6.3 Comparison of Signal Peptide Cleavage Sites Prediction

VII. Conclusions and Discussions
7.1 Protein Secondary Structure Prediction
7.2 Multi-class Protein Fold Recognition
7.3 Signal Peptide Cleavage Sites Prediction

APPENDICES
BIBLIOGRAPHY


LIST OF TABLES

Table

3.1 130 protein chains used for seven-fold cross validation
4.1 Non-redundant subset of 27 SCOP folds used in training and testing
4.2 Six parameter sets extracted from protein sequence
4.3 Prediction accuracy Qi in percentage using high confidence only
5.1 Properties of amino acid residues
5.2 Relative hydrophobicity of amino acids
6.1 SVM for protein secondary structure prediction, using the seven-fold cross validation on the RS130 protein set
6.2 SVM for protein secondary structure prediction, using the quadratic penalty term and the seven-fold cross validation on the RS130 protein set
6.3 Prediction accuracy Qi for protein fold in percentage for the independent test set
6.4 (Cont'd) Prediction accuracy Qi for protein fold in percentage for the independent test set
6.5 Prediction accuracy Qi for protein fold in percentage for the ten-fold cross validation
6.6 The best parameters C and γ chosen for each subsystem and the combiner
6.7 Comparison of SVM with ACN and SignalP methods
A.1 Optimal hyperparameters for the training set by 10-fold cross validation
A.2 (Cont'd) Optimal hyperparameters for the training set by 10-fold cross validation
B.1 Data set for human signal peptide cleavage sites prediction
B.2 (Cont'd) Data set for human signal peptide cleavage sites prediction


LIST OF FIGURES

Figure

1.1 Region of SCOP hierarchy
2.1 Separating hyperplane
2.2 An example which is not linearly separable
3.1 An example of using evolutionary information to code secondary structure
4.1 Predictor for multi-class protein fold recognition
6.1 A comparison of 27 folds for the independent test set
6.2 A comparison of 27 folds for ten-fold cross validation


CHAPTER I

Introduction

1.1 Background

Bioinformatics is an emerging and rapidly growing field of science. As a consequence of the large amount of data produced in the field of molecular biology, most of the current bioinformatics projects deal with structural and functional aspects of genes and proteins. The data produced by thousands of research teams all over the world are collected and organized in databases specialized for particular subjects.

The existence of public databases with billions of data entries requires a robust analytical approach to cataloging and representing these data with respect to their biological significance. Therefore, computational tools are needed to analyze the collected data in the most efficient manner. One example is predicting the biological functions of genes and proteins (or parts of them) based on structural data.

Recently, support vector machines (SVM) have emerged as a new and promising technique for machine learning. On some applications SVM has obtained higher accuracy than neural networks (for example, [17]). SVM has also been applied to biological problems; some examples are [6, 80]. In this thesis we explore the possibility of using SVM for three important issues of bioinformatics: the prediction of protein secondary structure, multi-class protein fold recognition, and the prediction of human signal peptide cleavage sites.

1.2 Protein Secondary Structure Prediction

Recently prediction of the structure and function of proteins has become increasingly important. A step on the way to obtaining the full three-dimensional (3D) structure is to predict the local conformation of the polypeptide chain, which is called the secondary structure. The secondary structure consists of local folding regularities maintained by hydrogen bonds and is traditionally subdivided into three classes: alpha-helices, beta-sheets, and coil.

The sequence preferences and correlations involved in these structures have made secondary structure one of the classical problems in computational molecular biology, and one where machine learning approaches have been particularly successful. See [1] for a detailed review.

Many pattern recognition and machine learning methods have been proposed to solve this issue. Surveys are, for example, [63, 66]. Some typical approaches are as follows: (i) statistical information [49, 61, 53, 25, 28, 3, 26, 45, 78, 36, 19]; (ii) physico-chemical properties [59]; (iii) sequence patterns [75, 12, 62]; (iv) multi-layered (or neural) networks [4, 60, 30, 40, 74, 83, 64, 65, 46, 9]; (v) graph theory [50, 27]; (vi) multivariate statistics [38]; (vii) expert rules [51, 24, 27, 84]; and (viii) nearest-neighbor algorithms [82, 72, 68].

Among these machine learning methods, neural networks may be the most popular and effective one for secondary structure prediction. Up to now the highest accuracy has been achieved by approaches using them. In 1988, secondary structure prediction directly using neural networks first achieved about 62% accuracy [60, 30]. In 1993, using evolutionary information, a neural network system improved the prediction accuracy to over 70% [65]. Recently there have been approaches (e.g. [58, 1]) using neural networks which achieve even higher accuracy (> 75%).

In this thesis, we apply SVM to protein secondary structure prediction. We work on similar data and encoding schemes as those in Rost and Sander [65] (referred to here as RS130). The prediction accuracy is verified by a seven-fold cross validation.

Results indicate that SVM easily returns results comparable to those of neural networks. Therefore, studying other applications with SVM is a promising direction for future work.

1.3 Protein Fold Prediction

A key to understanding the function of biological macromolecules, e.g., proteins, is the determination of the three-dimensional (3D) structure. Large-scale gene-sequencing projects accumulate a massive number of putative protein sequences.

However, information about 3D structures is available for only a small fraction of known proteins. Thus, although experimental structure determination has improved, the sequence-structure gap continues to increase.

This creates a need for extracting structural information from sequence databases.

The direct prediction of a protein’s 3D structure from a sequence remains elusive.

However, considerable progress has been shown in assigning a sequence to a fold class.

There have been two general approaches to this problem. One is to use threading algorithms. The other is a taxonomic approach which presumes that the number of folds is restricted and thus focuses on structural predictions in the context of a particular classification of 3D folds. Proteins are defined as having a common fold if they have the same major secondary structures in the same arrangement and with the same topological connections. To facilitate access to this information, Hubbard et al. [32] constructed the Structural Classification of Proteins (SCOP) database.

The SCOP database is publicly accessible over the internet. It stores a set of protein sequences which have been hand-classified into a hierarchical structure based on their structure and function. The SCOP database aims to provide a detailed and comprehensive description of the structural and evolutionary relationships between all proteins whose structure is known, including all entries in the Protein Data Bank (PDB). The distinction between evolutionary relationships and those that arise from the physics and chemistry of proteins is a feature that is unique to this database. The database is freely accessible on the web with an entry point at http://scop.mrc-lmb.cam.ac.uk/scop/.

Many levels exist in the SCOP hierarchy, which are illustrated in Figure 1.1 [32].

The principal levels are family, superfamily, and fold, which will be described below.

Family: Homology is a clear indication of shared structures and frequently related functions. At the simplest level of similarity we can group proteins into families of homologous sequences with a clear evolutionary relationship.

Superfamily: Superfamilies can be loosely defined as compositions of families with a probable evolutionary relationship, supported mainly by common structural and functional features, in the absence of detectable sequence homology.

Fold: Folds can be described as representing the architecture of proteins. Two proteins will have a common fold if they have comparable elements of secondary structure with the same topology of connections.

Figure 1.1: Region of SCOP hierarchy

In this thesis a computational method based on SVM has been developed for the assignment of a protein sequence to a folding class in the SCOP. We investigated two strategies for multi-class SVM: "one-against-all" and "one-against-one". Then we combine these two methods with a voting process to classify 27 folds of data. Compared with general classification problems, this data set has very few data points. Applying our method increases the overall prediction accuracy to 63.6% when using an independent test set and 52.7% when using the ten-fold cross validation on the training set. Both improve the current prediction accuracy by more than 7%.

The experimental results reveal that model selection is an important step in SVM design.


1.4 Signal Peptide Cleavage Sites

Signal peptides target proteins for secretion in both prokaryotic and eukaryotic cells. The signal peptide of the nascent protein on a free ribosome is recognized by Signal Recognition Particle (SRP) which arrests translation. SRP then binds an SRP receptor on the endoplasmic reticulum (ER) membrane and inserts the signal peptide into the membrane. Translation resumes, and the protein is translocated through the membrane into the ER lumen as it is synthesized. Other sequence determinants on the protein then dictate whether it will remain in the ER lumen, or pass on to one of the other membrane-bound compartments, or be secreted from the cell.

Signal peptides control the entry of virtually all proteins to the secretory pathway.

They comprise the N-terminal part of the amino acid chain and are cleaved off while the protein is translocated through the membrane. The common structure of signal peptides consists of three regions: a positively charged n-region, followed by a hydrophobic h-region, and a neutral but polar c-region [79]. The cleavage site is generally characterized by neutral small side-chain amino acids at positions -1 and -3 (relative to the cleavage site) [55, 56].

Strong interest in prediction of the signal peptides and their cleavage sites has been evoked not only by the huge amount of unprocessed data available, but also by the industrial need to find more effective vehicles for production of proteins in recombinant systems.

In this thesis, we use four independent SVM coding schemes ("subsystems") to learn the mapping between amino acid sequences and signal peptide cleavage sites from the known protein structures and physico-chemical properties. Then an SVM combiner learns to combine the outputs of the four subsystems to make the final predictions. To have a fair comparison, we consider similar data and the same encoding scheme used in ACN [33] for negative patterns, and we compare with two established predictors for signal peptides, SignalP [55, 56] and ACN. We demonstrate that SVM can achieve higher accuracy than SignalP and ACN.


CHAPTER II

Support Vector Machines

2.1 Basic Concepts of SVM

The support vector machine (SVM) is a new and promising technique for data classification and regression. After its development over the past five years, it has become an important topic in machine learning and pattern recognition. Not only does it have a solid theoretical foundation; practical comparisons have also shown that it is competitive with existing methods such as neural networks and decision trees (e.g. [43, 7, 17]).

Existing surveys and books on SVM are, for example, [14, 76, 77, 8, 71, 15].

The number of applications of SVM is dramatically increasing, for example, object recognition [57], combustion engine detection [67], function estimation [73], text categorization [34], chaotic systems [52], handwritten digit recognition [47], and database marketing [2].

The SVM technique was first developed by Vapnik and his group at the former AT&T Bell Laboratories. The original idea is to use a linear separating hyperplane which maximizes the distance between two classes to create a classifier. For problems which cannot be linearly separated in the original input space, support vector machines employ two techniques to deal with this case. First, we introduce a soft margin hyperplane which adds a penalty for violation of constraints to the optimization criterion. Second, we non-linearly transform the original input space into a higher dimensional feature space. In this new feature space it is then more likely that a linear optimal separating hyperplane can be found.

Given training vectors $x_i$, $i = 1, \ldots, l$, of length $n$, and a vector $y$ defined as

$$y_i = \begin{cases} 1 & \text{if } x_i \text{ is in class 1}, \\ -1 & \text{if } x_i \text{ is in class 2}, \end{cases}$$

The support vector technique tries to find the separating hyperplane with the largest margin between two classes, measured along a line perpendicular to the hyperplane.

For example, in Figure 2.1, two classes can be fully separated by the dotted line $w^T x + b = 0$. We would like to find the line with the largest margin; in other words, intuitively the distance between the two classes of training data should be as large as possible. That means we want to find a line with parameters $w$ and $b$ such that the distance between the hyperplanes $w^T x + b = \pm 1$ is maximized. As this distance is $2/\|w\|$, and maximizing $2/\|w\|$ is equivalent to minimizing $w^T w / 2$, we have the following problem:

$$\min_{w,b} \quad \frac{1}{2} w^T w$$
$$\text{subject to} \quad y_i (w^T x_i + b) \ge 1, \quad i = 1, \ldots, l. \qquad (2.1)$$

The constraint $y_i (w^T x_i + b) \ge 1$ means

$$w^T x_i + b \ge 1 \ \text{ if } y_i = 1, \qquad w^T x_i + b \le -1 \ \text{ if } y_i = -1.$$

That is, data in class 1 must be on the right-hand side of $w^T x + b = 0$ while data in the other class must be on the left-hand side. Note that the reason for maximizing the distance between $w^T x + b = \pm 1$ is based on Vapnik's Structural Risk Minimization [77].

Figure 2.1: Separating hyperplane

Figure 2.2: An example which is not linearly separable

However, in practice problems may not be linearly separable; an example is shown in Figure 2.2. SVM uses two methods to handle this difficulty [5, 14]: first, it allows training errors; second, it non-linearly transforms the original input space into a higher dimensional feature space by a function $\phi$:

$$\min_{w,b,\xi} \quad \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \qquad (2.2)$$
$$\text{subject to} \quad y_i (w^T \phi(x_i) + b) \ge 1 - \xi_i, \qquad (2.3)$$
$$\qquad\qquad\quad \xi_i \ge 0, \quad i = 1, \ldots, l.$$

A penalty term $C \sum_{i=1}^{l} \xi_i$ is added to the objective function, and training errors are allowed. That is, constraints (2.3) allow training data not to be on the correct side of the separating hyperplane $w^T x + b = 0$ while we minimize the training error $\sum_{i=1}^{l} \xi_i$ in the objective function. Hence, if the penalty parameter $C$ is large enough and the data are linearly separable, problem (2.2) reduces to (2.1), as all $\xi_i$ will be zero [44]. Note that the training data $x$ are mapped into a (possibly infinite-dimensional) vector in a higher dimensional space:

$$\phi(x) = (\phi_1(x), \phi_2(x), \ldots).$$

In this higher dimensional space, it is more likely that the data can be linearly separated. An example mapping $x$ from $R^3$ to $R^{10}$ is

$$\phi(x) = (1, \sqrt{2}x_1, \sqrt{2}x_2, \sqrt{2}x_3, x_1^2, x_2^2, x_3^2, \sqrt{2}x_1 x_2, \sqrt{2}x_1 x_3, \sqrt{2}x_2 x_3).$$

Hence (2.2) is a problem in a possibly infinite dimensional space, which is not easy to solve directly. Currently the main procedure is to solve a dual formulation of (2.2). It needs a closed form of $K(x_i, x_j) \equiv \phi(x_i)^T \phi(x_j)$, which is usually called the kernel function. Some popular kernels are, for example, the RBF kernel $e^{-\gamma \|x_i - x_j\|^2}$ and the polynomial kernel $(x_i^T x_j / \gamma + \delta)^d$, where $\gamma$ and $\delta$ are parameters.

After the dual form is solved, the decision function is written as

$$f(x) = \operatorname{sign}(w^T \phi(x) + b).$$

In other words, for a test vector $x$, if $w^T \phi(x) + b > 0$, we classify it to be in class 1; otherwise, we classify it to be in the second class. Only some of the $x_i$, $i = 1, \ldots, l$, are used to construct $w$ and $b$; these important data are called support vectors. In general, the number of support vectors is not large. Therefore we can say that SVM finds the important data (support vectors) among the training data.
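To make the above concrete, the following minimal sketch trains a soft-margin SVM with the RBF kernel on a toy two-class problem. It uses scikit-learn rather than the LIBSVM software used in this thesis, and the toy data and parameter values (C = 10, gamma = 0.05) are placeholders only.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data: rows are feature vectors x_i, labels y_i are +1 / -1.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1, -1, 1, 1])

# Soft-margin SVM with RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2).
# C is the penalty parameter of problem (2.2); gamma is the kernel parameter.
clf = SVC(kernel="rbf", C=10, gamma=0.05)
clf.fit(X, y)

# The decision function corresponds to w^T phi(x) + b; its sign is the prediction.
print(clf.decision_function([[0.1, 0.0]]))   # negative value -> class -1
print(clf.predict([[0.95, 1.05]]))           # [1]
print(clf.support_)                          # indices of the support vectors
```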


2.2 Multi-class SVM

2.2.1 One-against-all Method

The earliest implementation used for SVM multi-class classification is probably the one-against-all method. It constructs $k$ SVM models, where $k$ is the number of classes. The $i$th SVM is trained with all of the examples in the $i$th class with positive labels and all other examples with negative labels. Thus, given $l$ training data $(x_1, y_1), \ldots, (x_l, y_l)$, where $x_i \in R^n$, $i = 1, \ldots, l$, and $y_i \in \{1, \ldots, k\}$ is the class of $x_i$, the $i$th SVM is trained by solving the following problem:

$$\min_{w^i, b^i, \xi^i} \quad \frac{1}{2} (w^i)^T w^i + C \sum_{j=1}^{l} \xi_j^i$$
$$\text{subject to} \quad (w^i)^T \phi(x_j) + b^i \ge 1 - \xi_j^i, \ \text{ if } y_j = i, \qquad (2.4)$$
$$\qquad\qquad\quad (w^i)^T \phi(x_j) + b^i \le -1 + \xi_j^i, \ \text{ if } y_j \ne i,$$
$$\qquad\qquad\quad \xi_j^i \ge 0, \quad j = 1, \ldots, l,$$

where the training data $x_j$ are mapped to a higher dimensional space by the function $\phi$ and $C$ is the penalty parameter. Then there are $k$ decision functions:

$$(w^1)^T \phi(x) + b^1, \quad \ldots, \quad (w^k)^T \phi(x) + b^k.$$

Generally, we say $x$ is in the class which has the largest value of the decision function:

$$\text{class of } x = \arg\max_{i=1,\ldots,k} \left( (w^i)^T \phi(x) + b^i \right).$$

In this thesis we will use another strategy. On the SCOP database with 27 folds, we build 27 "one-against-all" classifiers. Each protein in the test set is tested against all 27 "one-against-all" classifiers. If the result is "positive", then we assign a vote to that class. However, if the result is "negative", this indicates only that the protein belongs to one of the other 26 folds; in other words, the protein belongs to each of the other 26 folds with a probability of 1/26. Therefore, in our coding we do not assign any vote in this case.
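A minimal sketch of this voting rule is given below. The array layout and helper names are illustrative assumptions: a positive one-against-all output casts one vote for its class, while a negative output casts no vote at all.

```python
import numpy as np

def one_against_all_votes(decision_values):
    """decision_values: array of shape (n_proteins, n_folds); entry [p, f] is
    the output of the f-th one-against-all classifier on protein p."""
    n_proteins, n_folds = decision_values.shape
    votes = np.zeros((n_proteins, n_folds), dtype=int)
    for p in range(n_proteins):
        for f in range(n_folds):
            # A "positive" result casts one vote for fold f.
            if decision_values[p, f] > 0:
                votes[p, f] += 1
            # A "negative" result only says the protein is in one of the
            # other 26 folds, so no vote is cast in that case.
    return votes
```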

2.2.2 One-against-one Method

Another major method is the one-against-one method. It was first introduced in [41], and the first use of this strategy on SVM was in [23, 42]. This method constructs $k(k-1)/2$ classifiers, where each one trains data from two classes. For training data from the $i$th and the $j$th classes, we solve the following binary classification problem:

$$\min_{w^{ij}, b^{ij}, \xi^{ij}} \quad \frac{1}{2} (w^{ij})^T w^{ij} + C \sum_{t} \xi_t^{ij}$$
$$\text{subject to} \quad (w^{ij})^T \phi(x_t) + b^{ij} \ge 1 - \xi_t^{ij}, \ \text{ if } y_t = i, \qquad (2.5)$$
$$\qquad\qquad\quad (w^{ij})^T \phi(x_t) + b^{ij} \le -1 + \xi_t^{ij}, \ \text{ if } y_t = j,$$
$$\qquad\qquad\quad \xi_t^{ij} \ge 0.$$

There are different methods for testing after all $k(k-1)/2$ classifiers are constructed. In this thesis, we use the following voting strategy suggested in [23]: if $\operatorname{sign}((w^{ij})^T \phi(x) + b^{ij})$ says $x$ is in the $i$th class, then the vote for the $i$th class is increased by one; otherwise, the vote for the $j$th class is increased by one. Then we predict $x$ to be in the class with the largest vote. This voting approach is also called the "Max Wins" strategy.
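The "Max Wins" rule itself can be sketched as follows; the dictionary of pre-trained pairwise classifiers (here scikit-learn SVC objects keyed by class pairs) is an illustrative setup, not the LIBSVM code used for the experiments.

```python
import numpy as np

def max_wins_predict(x, pairwise_clfs, n_classes):
    """pairwise_clfs[(i, j)] is a binary classifier whose predict() returns
    class i or class j, for every pair (i, j) with i < j."""
    votes = np.zeros(n_classes, dtype=int)
    for (i, j), clf in pairwise_clfs.items():
        winner = clf.predict([x])[0]   # either i or j
        votes[winner] += 1
    # Predict the class that collected the largest number of votes.
    return int(np.argmax(votes))
```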

2.3 Software and Model Selection

We use the software LIBSVM [10] for experiments. LIBSVM is a general library for support vector classification and regression, which is available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

As mentioned in Section 2.1, there are different functions $\phi$ to map data to higher dimensional spaces, so in practice we need to select the kernel function $K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$. Several types of kernels are in use for all kinds of problems, and each kernel may be more suitable for some problems. For example, some well-known problems with a large number of features, such as text classification [35] and DNA problems [81], are reported to be classified more accurately with the linear kernel. In our experience, the RBF kernel is a decent choice for most problems; a learner with the RBF kernel usually performs no worse than others in terms of generalization ability.

In this thesis, we did some simple comparisons and observed that the performance using the RBF kernel is a little better than that of the linear kernel $K(x_i, x_j) = x_i^T x_j$ for all the problems we studied. Therefore, for the three data sets, a non-linear mapping to a higher dimensional space seems more useful than staying in the original space. We then use the RBF kernel for all the experiments.

Another important issue is the selection of parameters. For SVM training, a few parameters such as the penalty parameter $C$ and the kernel parameter $\gamma$ of the RBF function must be determined in advance. Choosing optimal parameters is an important step in SVM design. From the results of [20] we know that, for the formulation (2.2), cross validation may be a better estimator than others. So we use cross validation over different parameters for model selection.
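A minimal sketch of this kind of model selection, using grid search with cross validation in scikit-learn, is shown below; the candidate values for C and gamma and the synthetic data are placeholders, not the grids used in the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Placeholder training data; in the thesis these would be the encoded
# protein feature vectors and their class labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# Candidate (C, gamma) values; the best pair is chosen by cross validation
# accuracy on the training set only.
param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.05, 0.1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)       # e.g. {'C': 10, 'gamma': 0.05}
clf = search.best_estimator_     # refit on all training data with best parameters
```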


CHAPTER III

Protein Secondary Structure Prediction

3.1 The Goal of Secondary Structure Prediction

Given an amino acid sequence, the goal of secondary structure prediction is to predict a secondary structure state (α, β, coil) for each residue in the sequence.

Many different methods have been applied to tackle this problem. A good predictor must be based on knowledge learned from existing data. That is, we have to train a model using several sequences with known secondary structures. In this chapter we will show that a simple use of SVM can easily achieve accuracy as good as that of neural networks.

3.2 Data Set Used in Protein Secondary Structure

The choice of protein database for secondary structure prediction is complicated by potential homology between proteins in the training and testing sets. Homologous proteins in the database can give misleading results since learning methods in some cases can memorize the training set. Therefore protein chains without significant pairwise homology are used for our prediction task. To have a fair comparison, we consider the same 130 protein sequences used in Rost and Sander [65] for training and testing. These proteins, taken from the HSSP (Homology-derived Structures and Sequence alignments of Proteins) database [69], all have less than 25% pairwise similarity and more than 80 residues. Table 3.1 lists the 130 protein chains used in our study.

The secondary structure assignment was done according to the DSSP (Dictionary of Secondary Structures of Proteins) algorithm [37], which distinguishes eight secondary structure classes. We converted the eight types into three classes in the following way: H (α-helix), I (π-helix), and G ($3_{10}$-helix) as helix (α), E (extended strand) as β-strand (β), and all others as coil (c). Note that different conversion methods influence the prediction accuracy to some extent, as discussed by Cuff and Barton [16].

3.3 Coding Scheme

Before the work by Rost and Sander [65], one common coding for secondary structure prediction (e.g. [60, 30]) was to consider a moving window of n (typically 13-21) neighboring residues, where each position of a window has 21 possible values (20 amino acids and a null input). Hence each residue can be represented by an integer ranging from 1 to 21 or by 21 binary (i.e. value 0 or 1) indicators. If we take the latter approach, then among the 21 binary indicators only one has the value one. Therefore, the number of data points is the same as the number of residues, while each data point has 21 × n values. These encoding methods with three-state neural networks obtained about 63% accuracy.

A breakthrough on the encoding method is the use of evolutionary information [65]. We use this method in our study. The key idea is that for any training sequence, we consider its related sequences as well. These related sequences provide structural information, which is not affected by local changes of the amino acids. Instead of just feeding the base sequence, they feed the multiple alignment in the form of a profile. An alignment means aligning the protein sequences so that large chunks of the amino acid sequence align with each other. Basically the coding scheme considers a moving window of 17 (typically 13-21) neighboring residues. For each residue the frequency of occurrence of each of the 20 amino acids at one position in the alignment is computed. In our study, the alignments (profiles) are taken from the HSSP database.

The window is shifted residue by residue through the protein chain, thus yielding N data points for a chain with N residues.

Figure 3.1 is an example of using evolutionary information for encoding, where we have aligned four proteins. In the gray column the base sequence has the residue "K", while the multiple alignments in this position are "P", "G", "G" and "." (indicating a point of deletion in this sequence). Finally, the frequencies are directly used as the values of the output coding. Therefore, the coding scheme in this position will be given as G = 0.50, P = 0.25, K = 0.25.

Prediction is made for the central residue in the window. In order to allow the moving window to overlap the amino- or carboxyl-terminal end of the protein, a null input was added for each residue. Therefore, each data point contains 21 × 17 = 357 values. Hence each data point can be represented as a vector.
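For illustration, the following sketch builds such window feature vectors from a per-residue profile matrix. The random profile, window size, and padding convention are assumptions of the example; the thesis takes the real profiles from the HSSP database.

```python
import numpy as np

def window_features(profile, window=17):
    """profile: array of shape (n_residues, 21) -- frequencies of the 20 amino
    acids plus a null indicator at each position.  Returns one flattened
    window of 21 * window values per residue, padding chain ends with the
    null input."""
    n_residues, n_symbols = profile.shape
    half = window // 2
    null = np.zeros(n_symbols)
    null[-1] = 1.0                      # null input: only the 21st indicator set
    padded = np.vstack([np.tile(null, (half, 1)), profile, np.tile(null, (half, 1))])
    return np.array([padded[i:i + window].ravel() for i in range(n_residues)])

# Example: a toy chain of 5 residues gives 5 data points of 21 * 17 = 357 values.
toy_profile = np.random.default_rng(0).random((5, 21))
print(window_features(toy_profile).shape)   # (5, 357)
```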

Note that the RS130 data set consists of 24,387 data points in three classes where 47% are coil , 32% are helix, and 21% are strand.

3.4 Assessment of Prediction Accuracy

An important fact about prediction problems is that training errors are not important; only test errors (i.e. accuracy for predicting new sequences) count. Therefore, it is important to estimate the generalization performance of a learning method.


Table 3.1: 130 protein chains used for seven-fold cross validation.

256b A 2aat 8abp 6acn 1acx 8adh 3ait

set A 2ak3 A 2alp 9api A 9api B 1azu 1cyo 1bbp A

1bds 1bmv 1 1bmv 2 3blm 4bp2

2cab 7cat A 1cbh 1cc5 2ccy A 1cdh 1cdt A

set B 3cla 3cln 4cms 4cpa I 6cpa 6cpp 4cpv

1crn 1cse I 6cts 2cyp 5cyt R

1eca 6dfr 3ebx 5er2 E 1etu 1fc2 C 1fdl H

set C 1dur 1fkf 1fnd 2fxb 1fxi A 2fox 1g6n A

2gbp 1a45 1gd1 O 2gls A 2gn5

1gpl A 4gr1 1hip 6hir 3hmg A 3hmg B 2hmz A

set D 5hvp A 2i1b 3icb 7icd 1il8 A 9ins B 1l58

1lap 5ldh 1gdj 2lhb 1lmb 3

2ltn A 2ltn B 5lyz 1mcp L 2mev 4 2or1 L 1ovo A

set E 1paz 9pap 2pcy 4pfk 3pgm 2phh 1pyp

1r09 2 2pab A 2mhu 1mrt 1ppt

1rbp 1rhd 4rhv 1 4rhv 3 4rhv 4 3rnt 7rsa

set F 2rsp A 4rxn 1s01 3sdh A 4sgb I 1sh1 2sns

2sod B 2stv 2tgp I 1tgs I 3tim A

6tmn E 2tmv P 1tnf A 4ts1 A 1ubq 2utg A 9wga A set G 2wrp R 1bks A 1bks B 4xia A 2tsc A 1prc C 1prc H

1prc L 1prc M

The database of non-homologous proteins used for seven-fold cross validation. All proteins have less than 25% pairwise similarity for lengths greater than 80 residues.


[Figure 3.1 shows an SH3 sequence fragment (N S T N K D W W K) aligned with four related sequences; the resulting 20-row numeric profile gives, for the highlighted position, the coding output G = 0.50, P = 0.25, K = 0.25.]

Figure 3.1: An example of using evolutionary information to code secondary structure


Several different measures for assessing the accuracy have been suggested in the literature. The most common measure for secondary structure prediction is the overall three-state accuracy ($Q_3$). It is defined as the ratio of correctly predicted residues to the total number of residues in the database under consideration [60, 64]. $Q_3$ is calculated by

$$Q_3 = \frac{q_\alpha + q_\beta + q_{\mathrm{coil}}}{N} \times 100, \qquad (3.1)$$

where $N$ is the total number of residues in the test data sets, and $q_s$ is the number of residues of secondary structure type $s$ that are predicted correctly.
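A small sketch computing Q3 from per-residue labels is shown below; the three-letter label encoding (H, E, C) is only an illustrative choice.

```python
import numpy as np

def q3_accuracy(y_true, y_pred):
    """Overall three-state accuracy (3.1): correctly predicted residues
    divided by the total number of residues, in percent."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return 100.0 * np.mean(y_true == y_pred)

# Example: labels H (helix), E (strand), C (coil) for six residues.
print(q3_accuracy(list("HHEECC"), list("HHECCC")))   # 83.33...
```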


CHAPTER IV

Protein Fold Recognition

4.1 The Goal of Protein Fold Recognition

Since protein sequence information grows significantly faster than information on protein 3D structure, the need for predicting the folding pattern of a given protein sequence naturally arises. In this chapter a computational method based on SVM has been developed for the assignment of a protein sequence to a folding class in the SCOP. We investigated two strategies for multi-class SVM: "one-against-all" and "one-against-one". Then we combine these two methods with a voting process to classify 27 folds of data.

4.2 Data Set and Feature Vectors

Because tests based on different protein sets are hard to compare, to have a fair comparison, we consider the same data set used in Ding and Dubchak [18, 21, 22] for training and testing. The data set is available at http://www.nersc.gov/~cding/protein. The training set contains 313 proteins grouped into 27 folds, which were selected from the database built by Dubchak [22] as shown in Table 4.1. Note that the original database is divided into 128 folds. These proteins are a subset of the PDB-select sets [29], where two proteins have no more than 35% sequence identity for any aligned subsequences longer than 80 residues.

The independent test set contains 385 proteins in the same 27 folds. It is a subset of the PDB-40D set developed by the authors of the SCOP database [13], where sequences having less than 40% identity are chosen. In addition, all proteins in the PDB-40D that had more than 35% identity with proteins of the training set were excluded from the testing set.

Here for data coding we use the same six parameter sets as Ding and Dubchak [18]. Note that the six parameter sets, listed in Table 4.2, were extracted from the protein sequence independently (for details see Dubchak et al. [22]). Thus, one may apply learning methods based on a single parameter set for protein fold prediction. Therefore, in our coding schemes we will use each parameter set individually and their combinations as our input coding.

For example, the parameter set "C" considers that each protein is associated with the percentage composition of the 20 amino acids. Therefore, the number of data points is the same as the number of proteins, where each data point has 20 dimensions (values). We can also combine two parameter sets into one data set. For example, we can combine "C" and "H" into one data set "CH", so each data point has 20 + 21 = 41 dimensions.
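The following sketch illustrates this kind of feature concatenation; the per-protein arrays are placeholders whose dimensions follow Table 4.2.

```python
import numpy as np

# Hypothetical per-protein feature vectors for two parameter sets (Table 4.2):
# C = amino acid composition (20 values), H = hydrophobicity (21 values).
n_proteins = 3
features = {
    "C": np.random.default_rng(0).random((n_proteins, 20)),
    "H": np.random.default_rng(1).random((n_proteins, 21)),
}

def combine(parameter_sets, features):
    """Concatenate the chosen parameter sets into one coding scheme."""
    return np.hstack([features[s] for s in parameter_sets])

print(combine("CH", features).shape)   # (3, 41): 20 + 21 dimensions
```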

4.3 Multi-class Methodologies for Protein Fold Classification

Remember that we have 27 folds of data, so we have to solve multi-class classification problems. Currently two approaches are commonly used for combining the binary SVM classifiers to perform a multi-class prediction. One is the "one-against-one" method (see Chapter 2.2.2) where k(k−1)/2 classifiers are constructed and each


Table 4.1: Non-redundant subset of 27 SCOP folds using in training and testing

Fold Index # Training data # Test data

α

Globin-like 1 13 6

Cytochrome c 3 7 9

DNA-binding 3-helical bundle 4 12 20

4-helical up-and-down bundle 7 7 8

4-helical cytokines 9 9 9

Alpha;EF-hand 11 7 9

β

Immunoglobulin-like β-sandwich 20 30 44

Cupredoxins 23 9 12

Viral coat and capsid proteins 26 16 13

ConA-like lectins/glucanases 30 7 6

SH3-like barrel 31 8 8

OB-fold 32 13 19

Trefoil 33 8 4

Trypsin-like serine proteases 35 9 4

Lipocalins 39 9 7

α/β

(TIM)-barrel 46 29 48

FAD(also NAD)-binding motif 47 11 12

Flavodoxin-like 48 11 13

NAD(P)-binding Rossmann-fold 51 13 27

P-loop containing nucleotide 54 10 12

Thioredoxin-like 57 9 8

Ribonuclease H-like motif 59 10 14

Hydrolases 62 11 7

Periplasmic binding protein-like 69 11 4

α+β

β-grasp 72 7 8

Ferredoxin-like 87 13 27

Small inhibitors,toxins,lectins 110 14 27

Table 4.2: Six parameter sets extracted from protein sequence

Symbol parameter set Dimension

C Amino acids composition 20

S Predicted secondary structure 21

H Hydrophobicity 21

V Normalized van der Waals volume 21

P Polarity 21

Z Polarizability 21


one trains data from two different classes. Another approach for multi-class classification is the "one-against-all" method (see Chapter 2.2.1) where k SVM models are constructed and the ith SVM is trained with data in the ith class as positive and all other data as negative. A comparison of both methods for multi-class SVM is in [31].

After analyzing our data, we find that the number of proteins in each fold is quite small (7∼30 for the training set). If the "one-against-one" method is used, some binary classifiers may work on only 14 data points. This may introduce more noise because all possible binary classifier pairs are involved. In contrast, if the "one-against-all" method is used, we have more examples (the whole training set) to learn from.

Meanwhile, we observed interesting results in [80], which addresses the molecular classification of multiple tumor types. Their data set contains only 190 samples grouped into 14 classes. They found that, for both cross validation and an independent test set, "one-against-all" achieves better performance. The authors conclude that this is because each binary classifier in the "one-against-all" method sees more examples than in the "one-against-one" method. In our multi-class fold prediction problem we have the same situation: many classes but few data points. Therefore, in our implementation, we will mainly consider the "one-against-all" method to generate binary classifiers for multi-class prediction.

Note that according to Ding and Dubchak [18], using multiple parameter sets and applying a majority vote on the results leads to much better prediction accuracy. Thus, in our study we use the six parameter sets to construct 15 encoding schemes. For the first six coding schemes, each of the six parameter sets (C, S, H, V, P, Z) is used.


After doing some experiments, the combinations CS, HZ, SV, CSH, VPZ, HVP, CSHV, and CSHVPZ are chosen as another eight coding schemes. Note that they have different dimensionalities. For the combination CS, there are 41 (20+21) dimensions. Similarly, HZ and SV both have 42 (21+21) dimensions, and CSH, VPZ, HVP, CSHV, and CSHVPZ have 62, 63, 63, 83, and 125 dimensions, respectively.

As we have 27 protein folds, for each encoding scheme there are 27 binary classifiers if "one-against-all" is used. Since we have 14 such coding schemes, using the "one-against-all" strategy we train 14 × 27 binary classifiers in total. Following [22], if a protein is classified as "positive" then we assign a vote to that class. If a protein is classified as "negative", the probability that it belongs to any one of the other 26 classes is only 1/26. If we still assigned it to one of the other 26 classes, the misclassification rate could be very high. Thus, these proteins are not assigned to any class.

In our coding schemes, if any of the 14 × 27 "one-against-all" binary classifiers assigns a protein sequence to a folding class, then that class gets a vote. Therefore, for the 14 coding schemes based on the above "one-against-all" strategy, each fold (class) will have zero to 14 votes. However, we found that after the above procedure some proteins may not have any vote on any fold. For example, among the 385 proteins of the independent test set, using the parameter set "composition" only, 142 are classified as positive by some binary classifiers. If they are assigned to the corresponding folds, 126 are correctly predicted, an accuracy rate of 88.73%. The remaining 243 proteins are not assigned to any fold, so their status is still unknown. Results of using the 14 coding schemes are shown in Table 4.3. Although in the worst case a protein may be assigned to 27 folds, in practice most input proteins obtain no more than one vote.


After using the above 14 coding schemes there are still some proteins whose corresponding folds are not assigned. Since in the “one-against-one” SVM classifier we use the so-called “Max Wins” strategy (See Chapter 2.2.2), after the testing procedure each protein must be assigned to a fold (class). Therefore, we will use the best “one-against-one” method as the 15th coding scheme and combine it with the above 14 “one-against-all” results using a voting scheme to get the final prediction.

Here we used the same "one-against-one" method as Ding and Dubchak [18]. For example, a combination C+H means we separately perform the "one-against-one" method on the two parameter sets C and H, and then combine the votes obtained from the two parameter sets to decide the winner.

The best result we find is with the combined C+S+H+V parameter sets, where the average accuracy reaches 58.2%. This is slightly above the 55.5% accuracy of Ding and Dubchak and their best result of 56.5% obtained using C+S+H+P. Figure 4.1 shows the overall structure of our method.

Before constructing each SVM classifier, we first conduct some cross validation with different parameters on the training data. The best parameters C and γ selected are shown in Tables A.1 and A.2.

4.4 Measure for Protein Fold Recognition

We use the standard $Q_i$ percentage accuracy (4.1) for assessing the accuracy of protein fold recognition:

$$Q_i = \frac{c_i}{n_i} \times 100, \qquad (4.1)$$

where $n_i$ is the number of test data in class $i$, and $c_i$ of them are correctly recognized.

Here we use two ways to evaluate the performance of our protein fold recognition system. For the first one, we test the system against a data set which is independent of the training set. Note that proteins in the independent test set have less than 35% sequence identity with those used in training. Another evaluation is by cross validation. We report ten-fold cross validation accuracy using the training set.

Figure 4.1: Predictor for multi-class protein fold recognition

Table 4.3: Prediction accuracy Qi in percentage using high confidence only

Parameter set | Test set: accuracy %, correct, positive predictions | Ten-fold CV: accuracy %, correct, positive predictions

C 88.73 126 142 78.16 68 87

S 91.59 98 107 70.83 51 72

H 91.95 80 87 65.22 15 23

V 97.56 80 82 77.27 17 22

P 92.65 63 68 75.00 3 4

Z 97.67 84 86 68.75 11 16

CS 86.34 139 161 80.36 90 112

HZ 94.59 105 111 78.13 25 32

SV 89.34 109 122 75.47 80 106

CSH 88.16 134 152 77.19 88 114

VPZ 99.03 102 103 90.91 20 22

HVP 94.95 94 99 69.23 18 26

CSHV 94.87 111 117 84.88 73 86

CSHVPZ 90.65 126 139 77.45 79 102

ALL 76.83 199 259 63.48 146 230



CHAPTER V

Prediction of Human Signal Peptide Cleavage Sites

5.1 The Goal of Predicting Signal Peptide Cleavage Sites

Secretory proteins contain a leader sequence - the signal peptide - serving as a signal for translocating the protein across a membrane. During translocation, the signal peptide is cleaved from the rest of the protein.

Strong interest in prediction of the signal peptides and their cleavage sites has been evoked not only by the huge amount of unprocessed data available, but also by the industrial need to find more effective vehicles for the production of proteins in recombinant systems. For a systematic description of this area, see the comprehensive review by Nielsen et al. [54]. In this chapter we will use SVM to recognize the cleavage sites of signal peptides directly from the amino acid sequence.

5.2 Coding Schemes and Feature Vector Extraction

To have a fair comparison, we consider the data set assembled by Nielsen et al. [55, 56] encompassing 416 sequences of human secretory proteins. We use five-fold cross validation to measure the performance. The data sets are from an FTP server at ftp://virus.cbs.dtu.dk/pub/signalp.


Most data classification techniques require feature vector sets as input. That is, a sequence of amino acids should be replaced by a sequence of symbols representing local physico-chemical properties.

In our coding, protein sequence data were presented to the SVM using sparsely encoded moving windows [60, 30]. Symmetric and asymmetric windows of a size varying from 10 to 30 positions were tested. Four feature-vector sets are extracted independently from protein sequences to form four different coding schemes ("subsystems").

The coding scheme, following [33], treats a window that contains the cleavage site as a positive pattern. The ten subsequent windows following the positive pattern are considered negative. Since we have 416 sequences, there are in total 416 positive and 4,160 negative examples. After some experiments, we chose an asymmetric window of 20 amino acids including the cleavage site itself and the positions [-15, +4] relative to it for generating the positive pattern. This matches the location of cleavage site pattern information [70].
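For illustration, the sketch below generates one positive and ten negative windows per sequence in this style; the window boundaries [-15, +4] follow the text, while the sequence and cleavage-site position are placeholders.

```python
def extract_windows(sequence, cleavage_pos, left=15, right=4, n_negative=10):
    """Return (positive_window, negative_windows) for one protein.
    The positive window covers positions [cleavage_pos - left, cleavage_pos + right];
    the ten windows immediately following it are used as negatives."""
    width = left + right + 1   # 20 residues for the [-15, +4] window
    positive = sequence[cleavage_pos - left : cleavage_pos + right + 1]
    negatives = [
        sequence[start : start + width]
        for start in range(cleavage_pos - left + 1, cleavage_pos - left + 1 + n_negative)
    ]
    return positive, negatives

# Placeholder sequence with a hypothetical cleavage site at position 16.
toy_seq = "M" * 16 + "AAAAA" + "X" * 30
pos, negs = extract_windows(toy_seq, cleavage_pos=16)
print(len(pos), len(negs))   # 20 10
```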

The first subsystem considers that each position of a window has 21 possible values (20 amino acids and a null input). Hence each amino acid can be represented by an integer ranging from 1 to 21 or by 21 binary (i.e. value 0 or 1) indicators. We take the latter approach, so among the 21 binary indicators only one has the value one. Therefore, using this encoding scheme each data point (positive or negative) is a vector with 21 × 20 values.

The second subsystem considers that each amino acid is associated with ten binary indicators, representing some properties [85]. Each row of Table 5.1 shows which properties an amino acid possesses. In this encoding each data point is a vector with 10 × 20 values.


Table 5.1: Properties of amino acid residues

Amino acid 1 2 3 4 5 6 7 8 9 10

Ile y y

Leu y y

Val y y y

Cys y y

Ala y y y

Gly y y y

Met y

Phe y y

Tyr y y y

Trp y y y

His y y y y y

Lys y y y y

Arg y y y

Glu y y y

Gln y

Asp y y y y

Asn y y

Ser y y y

Thr y y y

Pro y y

Properties: 1.hydrophobic, 2.positive, 3.negative, 4.polar, 5.charged, 6.small, 7.tiny, 8.aliphatic, 9.aromatic, 10.proline. “y” means the amino acid has the property.


The third subsystem combines the above two encodings into one data set, so each data point has 31 × 20 attributes.

The last subsystem used in this study is the relative hydrophobicity of amino acids. Following [11], the amino acids are separated into three groups (Table 5.2). Three binary attributes then indicate the associated group of an amino acid, so each data point has 3 × 20 values.

Table 5.2: Relative hydrophobicity of amino acids

Amino acid Polar Neutral Hydrophobic

Ile y

Leu y

Val y

Cys y

Ala y

Gly y

Met y

Phe y

Tyr y

Trp y

His y

Lys y

Arg y

Glu y

Gln y

Asp y

Asn y

Ser y

Pro y

Thr y

5.3 Using SVM to Combine Cleavage Sites Predictors

The idea of combining models instead of selecting the best one, in order to improve performance, is well known in statistics and has a long theoretical background. In this thesis, the outputs of the above four subsystems are used as the inputs of an SVM combiner, which makes the final prediction of cleavage sites.
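A minimal sketch of such a combiner, in the spirit of stacking, is shown below. Using the subsystems' decision values as the combiner's inputs and the synthetic data are assumptions for illustration; in practice the combiner should be trained on held-out (cross-validated) subsystem outputs, and Table 6.6 lists the parameters actually chosen in the thesis.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder training data for the four subsystems (different encodings of the
# same windows) plus shared labels (1 = cleavage site, 0 = not).
y = rng.integers(0, 2, size=200)
subsystem_inputs = [rng.normal(size=(200, d)) for d in (420, 200, 620, 60)]

# Train one SVM per subsystem and collect its decision values.
subsystems = [SVC(kernel="rbf", C=10, gamma=0.01).fit(X, y) for X in subsystem_inputs]
meta_features = np.column_stack(
    [clf.decision_function(X) for clf, X in zip(subsystems, subsystem_inputs)]
)

# The combiner is another SVM trained on the four subsystem outputs.
combiner = SVC(kernel="rbf", C=10, gamma=0.1).fit(meta_features, y)
print(combiner.predict(meta_features[:5]))
```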


5.4 Measures of Cleavage Sites Prediction Accuracy

To assess the resulting predictions, test performances have been calculated by five-fold cross validation. The data set was divided into five approximately equal-sized parts, and each of the five SVM runs used one part as test data and the other four parts as training data. The cross validation accuracy is the total number of correctly identified test data divided by the total number of data. A more complete measure of accuracy is given by the correlation coefficient introduced in [48]:

$$MCC = \frac{p\,n - u\,o}{\sqrt{(p + u)(p + o)(n + u)(n + o)}},$$

where $p$, $n$, $u$, and $o$ are the numbers of true positive, true negative, false positive, and false negative locations, respectively. A higher MCC is better.
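A small sketch computing this coefficient from the four counts follows; the example counts are hypothetical.

```python
import math

def mcc(p, n, u, o):
    """Correlation coefficient (MCC) as defined above:
    p = true positives, n = true negatives, u = false positives, o = false negatives."""
    denom = math.sqrt((p + u) * (p + o) * (n + u) * (n + o))
    return (p * n - u * o) / denom if denom else 0.0

print(mcc(p=90, n=900, u=10, o=20))   # hypothetical counts
```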


CHAPTER VI

Results

6.1 Comparison of Protein Secondary Structure Prediction

We carried out some experiments to tune and evaluate the prediction system by training on 1/7 of the data set and testing the selected model on another 1/7. After this we found that the pair C = 10 and γ = 0.05 achieves the best prediction rate. Therefore, this parameter set is used for constructing the models for the subsequent testing.

Table 6.1 lists the numbers of training and testing data for the seven cross validation steps. It also reports the number of support vectors and the accuracy. Note that the numbers of training/testing data differ because our split of the training/testing sets is at the level of proteins, not amino acids. The average accuracy is 70.5%, which is competitive with the results in Rost and Sander [65]. Indeed, in [65] a direct use of neural networks on this encoding scheme achieved only 68.2% accuracy; other techniques had to be incorporated in order to attain 70% accuracy.

We would like to emphasize again that we use the same data set (including the type of alignment profiles) and the same secondary structure definition (reduction from eight to three secondary structure classes) as Rost and Sander. In addition, the same accuracy assessment as Rost and Sander is used, so the comparison is fair.
