
Improving Prediction of Protein Secondary Structure using Structured Neural Networks and Multiple Sequence Alignments

Søren Kamaric Riis

Electronics Institute, Building 349 Technical University of Denmark

DK-2800 Lyngby, Denmark Email: riis@ei.dtu.dk

Anders Krogh

NORDITA, Blegdamsvej 17 DK-2100 Copenhagen, Denmark

Email: krogh@nordita.dk Phone: +45 3532 5503

Fax: +45 3138 9157

Journal of Computational Biology, vol. 3, p. 163-183, 1996

keywords:

protein secondary structure prediction, neural networks, multiple sequence alignment.

* Corresponding author.

Present address: The Sanger Centre, Hinxton Hall, Hinxton, Cambridge CB10 1RQ, UK.

Abstract

The prediction of protein secondary structure by use of carefully structured neural networks and multiple sequence alignments has been investigated. Separate networks are used for predicting the three secondary structures α-helix, β-strand and coil. The networks are designed using a priori knowledge of amino acid properties with respect to the secondary structure and of the characteristic periodicity in α-helices. Since these single-structure networks all have less than 600 adjustable weights, over-fitting is avoided. To obtain a three-state prediction of α-helix, β-strand or coil, ensembles of single-structure networks are combined with another neural network. This method gives an overall prediction accuracy of 66.3% when using seven-fold cross-validation on a database of 126 non-homologous globular proteins. Applying the method to multiple sequence alignments of homologous proteins increases the prediction accuracy significantly to 71.3%, with corresponding Matthews correlation coefficients Cα = 0.59, Cβ = 0.52 and Cc = 0.50. More than 72% of the residues in the database are predicted with an accuracy of 80%. It is shown that the network outputs can be interpreted as estimated probabilities of correct prediction, and therefore these numbers indicate which residues are predicted with high confidence.

1 Introduction

Prediction of protein structure from the primary sequence of amino acids is a very challenging task, and the problem has been approached from several angles. A step on the way to a prediction of the full 3D structure is to predict the local conformation of the polypeptide chain, which is called the secondary structure. A lot of interesting work has been done on this problem, and over the last 10 to 20 years the methods have gradually improved in accuracy. This improvement is partly due to the increased number of reliable structures from which rules can be extracted and partly due to improvements in the methods themselves.

Most often the various secondary structures are grouped into the three main categories α-helix, β-strand and "other". We use the term coil for the last category. Usually these categories are defined on the basis of the secondary structure assignments found by the DSSP program (Kabsch & Sander, 1983). Some of the first work on secondary structure prediction was based on statistical methods in which the likelihood of each amino acid being in one of the three types of secondary structure was estimated from known protein structures. These probabilities were then averaged in some way over a small window to obtain the prediction (Chou & Fasman, 1978; Garnier et al., 1978). These methods were later extended in various ways to include correlations among amino acids in the window (Gibrat et al., 1987; Biou et al., 1988).

Around 1988 the first attempts were made to use neural networks to predict protein secondary structure (Qian & Sejnowski, 1988; Bohr et al., 1988). The accuracy of the predictions made by Qian and Sejnowski seemed better than those obtained by previous methods, although tests based on different protein sets are hard to compare. This fact started a wave of applications of neural networks to the secondary structure prediction problem (Holley & Karplus, 1989; Kneller et al., 1990; Stolorz et al., 1992), sometimes in combination with other methods (Zhang et al., 1992; Maclin & Shavlik, 1993). The types of neural network used in most of this work were essentially the same as the one used in the study of Qian and Sejnowski, namely a fully connected perceptron with at most one hidden layer. A very serious problem with these networks is the over-fitting caused by the huge number of free parameters (weights) to be estimated from the data. Over-fitting means that the performance of the network is poor on data that are not part of the training data, even though the performance is very good on the training data (Hertz et al., 1991). In most previous work the over-fitting is dealt with by stopping the training of the network before the error on the training set is at a minimum, see e.g. (Qian & Sejnowski, 1988; Rost & Sander, 1993b) and section 2.3 of this paper. A significant exception is the work of Maclin and Shavlik (Maclin & Shavlik, 1993) in which the Chou-Fasman method (Chou & Fasman, 1978) was built into a neural network before training. This procedure led to a network with much more structure than the fully connected ones.

The most successful application of neural networks to secondary structure prediction is probably the recent work by Rost and Sander (Rost & Sander, 1993a; Rost & Sander, 1993b; Rost & Sander, 1994), which has resulted in the prediction mail server called PHD (Rost et al., 1994a).

Rost and Sander use the same basic network architecture as Qian and Sejnowski, trained on the three-category secondary structure problem. Their networks have 40 hidden units and an input window of 13 amino acids, and the network is trained to predict the secondary structure of the central residue. They use two methods to overcome the problem of over-fitting. First, they use early stopping, which means that training is stopped after the training error is below some threshold. Second, an arithmetic average is computed over predictions from several networks trained independently using different input information and training procedures. Using an ensemble or committee of neural networks is known to help in suppressing noise and over-fitting (Hansen & Salamon, 1990; Krogh & Vedelsby, 1995). They also filter the predictions with a neural network which takes the predictions from the first network as input and gives a new prediction based on these. This technique was pioneered by Qian and Sejnowski, and helps in producing more realistic results by, for instance, suppressing α-helices or β-strands of length one. However, the most significant new feature in the work of Rost and Sander is the use of alignments. For each protein in the data set a set of aligned homologous proteins is found. Instead of just feeding the base sequence to the network, they feed the multiple alignment in the form of a profile, i.e., for each position an amino acid frequency vector is fed to the network. Using these and a few other "tricks", the performance of the network is reported to be above 71% correct secondary structure predictions using seven-fold cross-validation on a database of 126 non-homologous proteins.

One of the primary goals of the present work has been to carefully design neural network topologies particularly well suited for the task of secondary structure prediction. These networks contain far fewer free parameters than fully connected networks, and thereby over-fitting is avoided. We use several methods well known to the neural network community to further improve performance. One of the most interesting is a learned encoding of the amino acids in a vector of three real numbers. We use the same set of protein structures as Rost and Sander (Rost & Sander, 1994) for training and evaluation of the method, which means that the results are directly comparable. Our initial goal has been to get as good predictions from single sequences as possible. This work had three stages. First, individual networks were designed for prediction of the three structures. Next, instead of using only one network for each type of structure, an ensemble of 5 networks was used for each structure. Thirdly, these ensembles of single-structure networks were combined by another neural network to obtain a three-state prediction. This prediction from single sequences yields a result of 66-67% accuracy, which is 3-4% better than a fully connected network on the same dataset. The method is then applied to multiple alignments as follows. For each protein in the alignment the secondary structure is predicted independently. The final prediction is then found by combining these predictions via the alignment as in (Zvelebil et al., 1987; Russell & Barton, 1993; Levin et al., 1993). By this method we obtain a result of 71.3%, which is practically identical to the result of Rost and Sander (1994).

2 Materials and methods

2.1 Data set

When using neural networks for secondary structure prediction the choice of protein database is complicated by potential homology between proteins in the training and testing sets. Homologous proteins in the database can give misleading results, since neural networks can in some cases memorize the training set. Furthermore, the size of the training and testing sets can have a considerable influence on the results, because non-homologous proteins are in general very different. Using a small training set often results in poor generalization ability, while a small testing set gives a very poor estimate of the prediction accuracy. For evaluation of the method we therefore use seven-fold cross-validation on the set of 126 non-homologous globular proteins from (Rost & Sander, 1994), see Table 1. With seven-fold cross-validation approximately 1/7 of the database is left out during training, and the remaining part is used for testing. This is done cyclically seven times, and the resulting prediction is thus a mean over seven different testing sets. The division of the database into the seven subsets (set A-set G) shown in Table 1 is assumed not to have any influence on the results presented in the following sections. A more reliable estimate of the prediction accuracy could be achieved by using leave-one-out cross-validation, where one protein is left out while training on the rest, but this would lead to very large computational demands. The proteins used all satisfy the homology threshold defined by Sander and Schneider (Sander & Schneider, 1991), i.e., no proteins in the database have more than 25% pairwise sequence identity for lengths > 80 residues. The proteins are taken from the HSSP database version 1.0, release 25.0 (Sander & Schneider, 1991). The secondary structure assignments were done according to the DSSP algorithm (Kabsch & Sander, 1983), but the 8 types of structure were converted to three in the following way: H (α-helix), I (π-helix) and G (3₁₀-helix) were classified as helix (α), E (extended strand) as β-strand (β), and all others as coil (c).
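As a concrete illustration of this protocol, the following Python sketch runs the seven-fold cross-validation loop over the subsets of Table 1; the helper functions train_model and evaluate_q3 are hypothetical placeholders standing in for the training and evaluation procedures described in the remainder of this paper.

```python
# Minimal sketch of seven-fold cross-validation over the subsets A-G of
# Table 1.  train_model and evaluate_q3 are hypothetical placeholders.

def seven_fold_cross_validation(subsets, train_model, evaluate_q3):
    """subsets: dict mapping subset names ('A'..'G') to lists of proteins."""
    per_fold_q3 = []
    for held_out in sorted(subsets):
        # Train on the six remaining subsets and test on the held-out one.
        training_proteins = [protein
                             for name, proteins in subsets.items()
                             if name != held_out
                             for protein in proteins]
        model = train_model(training_proteins)
        per_fold_q3.append(evaluate_q3(model, subsets[held_out]))
    # The reported accuracy is the mean over the seven testing sets.
    return sum(per_fold_q3) / len(per_fold_q3)
```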

2.2 Measures of prediction accuracy

Several different measures of prediction accuracy have been suggested in the literature. The most common is the overall three-state prediction percentage Q3, defined as the ratio of correctly predicted residues to the total number of residues in the database under consideration (Qian & Sejnowski, 1988; Rost & Sander, 1993b).


Set A: 256b A, 2aat, 8abp, 6acn, 1acx, 8adh, 3ait, 1ak3 A, 2alp, 9api A, 9api B, 1azu, 3b5c, 1bbp A, 1bds, 1bmv 1, 1bmv 2

Set B: 3blm, 4bp2, 2cab, 7cat A, 1cbh, 1cc5, 2ccy A, 1cd4, 1cdt A, 3cla, 3cln, 4cms, 4cpa I, 6cpa, 6cpp, 4cpv, 1crn, 1cse, 6cts, 2cyp, 5cyt

Set C: 6dfr, 1eca, 3ebx, 5er2 E, 1etu, 1fc2 C, 1fdl H, 1fdx, 1fkf, 2fnr, 2fxb, 1fxi A, 4fxn, 3gap A, 2gbp, 2gcr, 1gd1 O, 2gls A, 2gn5, 1gpl A

Set D: 4gr1, 1hip, 6hir, 3hmg A, 3hmg B, 2hmz A, 5hvp A, 2i1b, 3icb, 7icd, 1il8 A, 9ins B, 1l58, 1lap, 5ldh, 2lh4, 2lhb, 1lrd 3

Set E: 2ltn A, 2ltn B, 5lyz, 1mcp L, 2mev 4, 2or1 L, 1ovo A, 1paz, 9pap, 2pcy, 4pfk, 3pgm, 2phh, 1pyp, 1r09 2, 2pab A

Set F: 2mhu, 1mrt, 1ppt, 1rbp, 1rhd, 4rhv 1, 4rhv 3, 4rhv 4, 3rnt, 7rsa, 2rsp A, 1s01, 1sdh A, 4rxn, 4sgb I

Set G: 1sh1, 2sns, 2sod B, 2stv, 2tgp I, 1tgs I, 3tim A, 6tmn E, 2tmv P, 1tnf A, 4ts1 A, 1ubq, 2utg A, 9wga A, 2wrp R, 1wsy A, 1wsy B, 4xia A, 2tsc A

Table 1: The database of non-homologous proteins used for seven-fold cross-validation. All proteins have less than 25% pairwise similarity for lengths > 80 residues, and the crystal structures are determined at a resolution better than 2.5 Å. The data set contains 24,395 residues with 32% α-helix, 21% β-strand and 47% coil.


Observed      HHHHHHHHHHCCC
Prediction 1  CHHHCHHHCHCCC
Prediction 2  CCHHHHHHHHHHC

Table 2: Predictions from two different methods.

Since our data set contains 32% α-helix, 21% β-strand and 47% coil, a random prediction yields Q3^random = 36.3% if weighted by the percentages of occurrence. For comparison, the best obtainable prediction by homology methods is about Q3^homology = 88% (Rost et al., 1994b). Q3 describes the performance of the method averaged over all residues in the database. For a single protein the expected prediction accuracy is better described by the per-chain accuracy <Q3^chain>, given by the average of the three-state prediction accuracy over all protein chains (Rost & Sander, 1993b).

A measure of the performance on secondary structure class i = α, β or coil is the percentage Qi of correctly predicted residues observed in class i. These measures can be very helpful in detecting over- and under-prediction of one or more types of secondary structure. Note that Qi differs from the two-state prediction accuracy Q2,i (Hayward & Collins, 1992) used when evaluating single-structure networks.

A complementary measure of prediction accuracy is obtained from the Matthews correlation coefficients (Matthews, 1975) for each of the three secondary structures, Cα, Cβ and Cc. The correlation coefficients are 1.0 if the predictions are all correct and −1.0 if all the predictions are false. The advantage of the correlation coefficients is seen in the case of a random or trivial prediction. A trivial prediction of helices for all residues gives Qα = 100% and Q3 = 32%, but Ci = 0.0. Similarly, Ci is close to zero for random predictions. The Matthews correlation coefficients are widely used, and the exact definitions can be found in e.g. (Qian & Sejnowski, 1988; Rost & Sander, 1993b).
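For concreteness, the overall percentage Q3 and the per-class Matthews coefficient can be computed as in the following Python sketch, which simply follows the standard definitions and does not reproduce the exact bookkeeping used in this work:

```python
import math

def q3(observed, predicted):
    """Overall three-state accuracy (percentage of correctly predicted residues)."""
    correct = sum(o == p for o, p in zip(observed, predicted))
    return 100.0 * correct / len(observed)

def matthews(observed, predicted, structure):
    """Matthews correlation coefficient for one structure class."""
    tp = sum(o == structure and p == structure for o, p in zip(observed, predicted))
    tn = sum(o != structure and p != structure for o, p in zip(observed, predicted))
    fp = sum(o != structure and p == structure for o, p in zip(observed, predicted))
    fn = sum(o == structure and p != structure for o, p in zip(observed, predicted))
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denominator if denominator > 0 else 0.0

# Example using the strings of Table 2 ('H' = helix, 'C' = coil).
observed = "HHHHHHHHHHCCC"
prediction_2 = "CCHHHHHHHHHHC"
print(q3(observed, prediction_2), matthews(observed, prediction_2, "H"))
```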

Even though the Matthews correlation coefficients give more reliable estimates of the prediction accuracy, they do not express how realistic the prediction is. Consider the two predictions in Table 2, obtained from different methods (Rost & Sander, 1993b). Prediction 1 gives a higher Q3 as well as higher correlation coefficients than prediction 2, but the latter is more realistic from a biological point of view. The first method predicts unrealistically short helices in contrast to the long helix predicted by the second method. This illustrates the need for comparing predicted and observed mean lengths Li of secondary structure segments. In addition to the mean lengths, an interesting measure is the percentage of overlapping segments of observed and predicted secondary structure used by Maclin and Shavlik (Maclin & Shavlik, 1993). The percentage of segment overlap Pi^ovl tells how good the method is at locating segments of secondary structure. This is of particular interest since the 3D structure of a given protein family is to some extent determined by the approximate location of regular secondary structure segments (Rost et al., 1994b). A trivial prediction of helices at all positions gives P^ovl = 100% if at least one observed helix segment exists. The overlap percentages should thus only be used in combination with some of the performance measures mentioned above.

2.3 Neural networks for secondary structure prediction

The networks used in this work are all feed-forward layered networks, trained using the back-propagation algorithm in on-line mode, see e.g. (Hertz et al., 1991). The main differences from previous work using these types of networks are described in this section.

In most applications of neural networks to secondary structure prediction, fully connected networks with a vast number of adjustable weights have been used. For instance, the best network found in the work of Qian and Sejnowski (Qian & Sejnowski, 1988) had more than 10,000 weights.


Figure 1: Three-state percentages (Q3) for the training and testing sets during training of the network with 40 hidden units used in (Qian & Sejnowski, 1988) (spacer unit omitted). The training set consists of sets B-G and set A is used for testing, see Table 1. The percentages are plotted against the number of training epochs, i.e., full sweeps through the training set. Because of the extreme number of weights the network develops very poor generalization on the testing set. In less than 100 training epochs the training set is learnt almost to perfection, while the performance on the testing set drops from the maximum value of approximately 62% to 57%. Qian and Sejnowski reported the best percentage obtained for the testing set as an estimate of the prediction accuracy.

When training a network with that many weights from the limited number of proteins available, one runs into the problem of over-fitting. At some point during training the network begins to learn special features in the training set, i.e., the network begins to memorize the training set. These special features can be considered as noise or atypical examples of mappings between amino acid sequence and secondary structure. Since the noise in the training and testing sets is uncorrelated, the generalization ability on the testing set deteriorates at some point during training, see Figure 1. The point at which the generalization ability deteriorates is highly dependent on the initial weights and on the dynamics of the learning rule. Hence, it is almost impossible to determine at which point the training should be stopped in order to get an optimal solution. Usually early stopping is used, where the training is stopped after some fixed number of iterations (Rost & Sander, 1993a; Rost & Sander, 1993b; Rost & Sander, 1994) or by using a validation set to monitor the generalization ability of the network during training (Maclin & Shavlik, 1993). When the performance on the validation set begins to deteriorate the training is stopped. However, sacrificing data for validation sets can be crucial for the performance of the model, since the available amount of data is limited. Another method is to choose the network achieving the best performance on the test set by always saving the best network during training, as was done by Qian and Sejnowski. In that case the performance on the test set cannot be expected to reflect the performance on independent data. The best approach, of course, is to deal with the root of the problem, namely finding the proper complexity of the network.

One of the main goals of this work has been to design networks that avoid over-fitting altogether. By avoiding over-fitting, the learning and generalization errors stay almost identical, and therefore training can be continued until the minimum training error is reached.


Adaptive encoding of amino acids

As in most of the existing methods, the secondary structure of the j'th residue R_j is predicted from a window of amino acids, R_{j-n} ... R_j ... R_{j+n}, where W = 2n + 1 is the window size.

These neural networks are often referred to as sequence-structure networks. Usually the amino acids are encoded by 21 binary numbers, such that each number corresponds to one amino acid.

The last number corresponds to a space and is used to indicate the ends of a protein. This encoding, which we will call the orthogonal encoding, has the advantage of not introducing any artificial correlations between the amino acids, but it is highly redundant, since 21 symbols can be encoded in 5 bits. This redundancy is one of the reasons why networks for secondary structure prediction tend to have a very large number of weights. However, according to Taylor (1986) the properties of the 20 amino acids with respect to the secondary structure can be expressed remarkably well by only two physical parameters: the hydrophobicity and the molecular volume.

This suggests using another encoding scheme than the orthogonal one.

By a method called weight sharing (Le Cun et al., 1989) it is possible to let the network itself choose the best encoding of the amino acids. The starting point is the above-mentioned orthogonal encoding, but we omit the spacer input unit used by Qian and Sejnowski, and instead all inputs are set to zero for the part of the window where no residues are present. For each window position the 20 inputs are connected to M hidden units by 20M weights. This set of weights (and the M thresholds) corresponding to one window position is identical to those used for all the other window positions (see Figure 2). It is like placing an exact copy of the same small neural network with 20 inputs and M outputs at each amino acid position in the window, and these networks form the first layer of the big network. More precisely, if the weight from input j to hidden unit i is called w_{ij}^k for the k'th window position, then w_{ij}^k = w_{ij}^l for all k and l. These sets of weights are forced to stay identical during training; they always share the same values. In this way the encoding of the amino acids is the same for all positions in the window. The weights are learned by a straightforward generalization of back-propagation in which weight updates are summed for weights sharing the same value (Le Cun et al., 1989). The use of weight sharing implies that the first layer only contains 21M adjustable parameters, including thresholds, no matter the size of the window. In this work M = 3 is used, and each of the 20 amino acids is thus represented by only three real numbers in the interval [0,1]. This leads to a dramatic reduction compared to the almost 11,000 weights used in the first layer of Qian and Sejnowski's fully connected network, even if an extra hidden layer is added to the network.

The adaptive encoding scheme of the amino acids is called local encoding. Since the encoding is learned along with the other weights in the network, it will be the 'optimal' encoding, in the sense that it yields the minimum error on the training set for that specific network and that specific task. The adaptive nature of the encoding also means that it depends on the initial weights (like the other weights in the network) and may differ between different runs of the learning algorithm.
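The mechanics of the shared encoding can be sketched as follows. The sizes follow the text (20 orthogonal inputs per window position, M = 3 encoding units, zero inputs outside the protein), while the weight values are random numbers purely for illustration; in the real network they are learned by back-propagation with shared updates.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues
M = 3                                   # size of the learned encoding
rng = np.random.default_rng(0)

# One shared encoding: 20 x M weights plus M thresholds, reused at every window
# position (weight sharing).  Random values are used here for illustration.
W_enc = rng.uniform(0.0, 1.0, size=(20, M))
b_enc = rng.uniform(0.0, 1.0, size=M)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode_window(window):
    """Map a window of residues to its local encoding (M numbers per position).

    Positions outside the protein are given as None and produce all-zero
    inputs, as described in the text."""
    encoded = []
    for residue in window:
        x = np.zeros(20)
        if residue is not None:
            x[AMINO_ACIDS.index(residue)] = 1.0          # orthogonal input
        encoded.append(sigmoid(W_enc.T @ x + b_enc))     # shared 20 -> M mapping
    return np.concatenate(encoded)                       # fed to the next layer

print(encode_window(list("TQASFDDPVTILT")))              # a window of 13 residues
```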

Structured networks

It is a common assumption that a network (or any other adaptive method) with some built-in knowledge about the problem performs better than more general networks, see e.g. (Maclin & Shavlik, 1993). Many existing prediction methods use the same model for predicting the three types of secondary structure (helix, strand, and coil). Since the three secondary structures are very different, it is possible that performance could be enhanced if separate networks were specifically designed for each of the three structures. We will now explain how prior knowledge about secondary structures can be used to design such single-structure networks.

The majority of the helices in the database used are α-helices. A residue in an α-helix is hydrogen bonded to the fourth residue above and the fourth residue below in the primary sequence, and it takes 3.6 amino acids to make a turn in an α-helix.


Figure 2: Network for predicting helices. The network uses the local encoding scheme and has a built-in period of 3 residues. Grey circles symbolize three hidden units and emphasized lines three weights. In the lower part of the figure, shaded triangles symbolize 20 shared weights and shaded rectangles 20 input units. The single-structure network shown has a window size of 13 residues and only one output.

It is likely that this periodic structure is essential for the characterization of an α-helix. These characteristics are all of a local nature and can therefore easily be built into a network that predicts helices from windows of the amino acid sequence. Figure 2 shows a network with local encoding (in the first hidden layer), a built-in period of 3 residues in the connections between the first and second hidden layer, and a window size of 13 residues. The second hidden layer in the network contains 10 units that are fully connected to the output layer, giving a total of 144 adjustable parameters. For comparison, a standard network with no hidden units at all, orthogonal encoding, and a window length of 13 residues has 261 adjustable parameters.

In contrast to helices, β-strands and coil do not have such a locally described periodic structure. Therefore, the strand and coil networks use only the local encoding scheme and a second hidden layer with 5-10 units fully connected to the first hidden layer as well as to the output layer. Early studies (results not shown) indicated that a window size of 15 residues was optimal for all three types of single-structure networks. Thus, a typical structured helix network contains 160 weights, while typical strand and coil networks contain about 300-530 weights.

As shown in Figure 2 the single-structure networks only have one output. If the output is larger than some decision threshold the prediction is α-helix, β-strand or coil, depending on the type of structure under consideration. For an input/output interval of [0,1] a decision threshold of 0.5 was found to be optimal.

The performance of the constrained single-structure networks is compared with the predictions obtained from perceptrons with no hidden units and window lengths of 13 amino acids. The single-structure networks are all trained balanced, i.e., for each positive example (helix) a negative example (non-helix) is chosen at random from the training set. In this way the same number of positive and negative examples is used in the training. According to Hayward and Collins (Hayward & Collins, 1992) balanced training gives only minor changes in the percentage of correctly classified residues (Q2,i), but slightly better correlation coefficients. This is in good agreement with our own experiments (results not shown).


Filtering the predictions

As described earlier, some predictions may be very unrealistic from a biological point of view.

For instance, prediction 1 in Table 2 has an α-helix of length one at the end. To obtain more realistic predictions a structure-structure network can be applied to the prediction from the previously described sequence-structure network. In the work of Qian and Sejnowski a window of 13 secondary structure predictions is used as input to a fully connected structure-structure network with 40 hidden units. Thus, this network has 3 × 13 inputs and 3 outputs, and the predicted secondary structure for the central amino acid is chosen as the largest of the three outputs. In this way the prediction becomes dependent on the surrounding structures. The structure-structure network is often called a filter network, because it is used to filter out bad predictions, although it can in principle do more than that. According to (Qian & Sejnowski, 1988; Rost & Sander, 1994) the filter network improves the three-state accuracy significantly and makes the prediction more realistic in terms of predicted mean lengths of secondary structure segments. A filter network similar to the one used by Qian and Sejnowski can also be applied when combining the predictions from the three single-structure networks. Notice that this network actually increases the size of the window used for the prediction of an amino acid. Since the first network uses a window size of 13 (or 15) amino acids, the second network receives information based on a total window of 25 (or 29) amino acids.

For the single-structure predictions a filter network can be applied in the same manner as for the three-state predictions. In this work each of the single-structure predictions is filtered with a fully connected network having 10 hidden units and a window size of 15 single-structure predictions. As will be shown, the filtering of the single-structure predictions before combining them into a three-state prediction can be omitted without loss of accuracy.

Using softmax for combining single-structure predictions

Usually the neural network outputs three values, one for each of the three structures. This type of network does not necessarily choose one of the three structures. For instance it can (and sometimes does) classify one input pattern as all three types of structure, i.e., it gives large outputs on all three output units. In practice, of course, the input is classified as the structure giving the largest output, but conceptually this type of classification is more suited for independent classes. It may be beneficial to build in the constraint that a given input belongs to only one of the three structures. This can be done by a method called Softmax (Bridle, 1990), which ensures that the three outputs always sum to one (for secondary structure prediction the same idea was used in (Stolorz et al., 1992)). Hence, the outputs can be interpreted as the conditional probabilities that a given input belongs to each of the three classes. Simulation studies by Richard and Lippmann (Richard & Lippmann, 1991) show that neural network classifiers provide good estimates of Bayesian a posteriori probabilities (conditional probabilities).

In the Softmax method the usual sigmoidal activation function

O_i = g(h_i) = \frac{1}{1 + e^{-h_i}}    (1)

in the output layer is replaced by the normalizing function

O_i = \frac{e^{h_i}}{\sum_j e^{h_j}}    (2)

where O_i is the i'th output and the sum in the denominator extends over all outputs. In these formulas h_i = \sum_j w_{ij} x_j is the net input to output unit i, w_{ij} is the weight connecting output unit i to hidden unit j, and x_j is the output of hidden unit j. A log-likelihood cost function is used instead of the usual squared error cost function. If \tau_i is the target output of the i'th output unit, then the contribution to the cost function from one training example can be written as

E(\mathbf{w}) = \sum_i \tau_i \log\frac{\tau_i}{O_i}    (3)

whereas the usual cost function is \sum_i (\tau_i - O_i)^2. The weight update formulas are easily calculated and turn out to be identical to the ones used in standard backpropagation with an entropic cost function, see e.g. (Hertz et al., 1991).
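In code, equations (2) and (3) amount to the following NumPy sketch (the surrounding network and the back-propagation machinery are omitted):

```python
import numpy as np

def softmax(h):
    """Normalized outputs O_i = exp(h_i) / sum_j exp(h_j), equation (2)."""
    e = np.exp(h - np.max(h))      # shifting h does not change the result
    return e / e.sum()

def log_likelihood_cost(targets, outputs, eps=1e-12):
    """E(w) = sum_i tau_i log(tau_i / O_i), equation (3); terms with tau_i = 0
    contribute nothing."""
    t = np.asarray(targets, dtype=float)
    o = np.asarray(outputs, dtype=float)
    mask = t > 0
    return float(np.sum(t[mask] * np.log(t[mask] / (o[mask] + eps))))

# Example: net inputs for the three classes (helix, strand, coil).
h = np.array([1.2, -0.3, 0.4])
O = softmax(h)                     # sums to one; usable as class probabilities
print(O, log_likelihood_cost([1.0, 0.0, 0.0], O))
```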

To combine and filter the single-structure predictions a single neural network is used. This network takes the outputs from the three single-structure networks as input and uses the softmax function (2) on the three output classes. The combining network takes a window of 15 consecutive predictions of helix, strand and coil as input, and the input layer is fully connected to the output layer via 10 hidden units. When using Softmax the predictions can be interpreted as estimated probabilities of correct prediction. Results on how well the outputs match probabilities will be shown.

Ensembles of single-structure networks

The solution found by a neural network after training depends on the initial weights and the sequence of training examples. Thus, training two identical networks often results in two different solutions, i.e., two different local minima of the objective function are found. Since the solutions are not completely correlated, the combination of two or more networks (an ensemble) often improves the overall accuracy (Granger, 1989; Hansen & Salamon, 1990; Wolpert, 1992; Perrone & Cooper, 1994). For complex classification tasks the use of ensembles can be thought of as a way of averaging out statistical fluctuations. The combination of several solutions can in some cases contribute valuable information, which is especially true if they disagree (Krogh & Vedelsby, 1995). Having networks that over-fit the data is one way of making the networks disagree (if they over-fit differently), and it can indeed be shown that over-fitting can sometimes be beneficial in an ensemble (Sollich & Krogh, 1995). Another obvious way to make the networks in the ensemble disagree is to use different network architectures and/or training methods. In this work ensembles of 5 different single-structure networks (for each type of secondary structure) are used. The networks all use the local encoding scheme, and the differences are introduced by using various periods in the α-network and by using different numbers of hidden units.

The usual way to combine the ensemble predictions is to sum the predictions using uniform (equal) weighting of the ensemble members. Instead, we have chosen to use a neural network for the combination. Rather than first training a filter network for each of the individual networks in the ensemble, our approach is to combine and filter the whole ensemble with only one network. This network takes a window of predictions from all the single-structure networks in the ensemble and then decides one output for the central residue. However, using a fully connected network results in considerable over-fitting, since a window length of 15 residues corresponds to 15 × 3 × N_E inputs for an ensemble of N_E networks. Here N_E = 5 is used, leading to a total of 225 inputs. One way to reduce the number of weights is to weight each of the three single-structure ensembles separately for every position in the window. In this way segments of 5 inputs, corresponding to, e.g., 5 helix network outputs, are connected to one hidden unit in the combining network, see Figure 3. Thus, for a given position in the input window each of the three ensembles is averaged using position-specific weights. This constraint gives a total of 3 × 15 = 45 hidden units that are fully connected to the output layer consisting of three units. The prediction for the central residue is chosen as the largest of the three outputs, which are normalized with softmax.

The combining network is trained unbalanced, i.e., with α, β, and coil appearing with their true frequencies in the overall training data set.
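A forward pass through such a constrained combining network can be sketched as follows; the weights are random placeholders, and only the connectivity described above (one hidden unit per window position and structure, fed by the 5 corresponding ensemble outputs, with 45 hidden units fully connected to 3 softmax outputs) is meant to be illustrated.

```python
import numpy as np

WINDOW, N_STRUCT, N_ENSEMBLE = 15, 3, 5   # window length, structures, ensemble size
rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(h):
    e = np.exp(h - np.max(h))
    return e / e.sum()

# One hidden unit per (window position, structure); it sees only the 5 outputs
# of the corresponding single-structure ensemble at that position.  The weight
# values are random placeholders; in the real network they are learned.
W_hidden = rng.normal(size=(WINDOW, N_STRUCT, N_ENSEMBLE))
b_hidden = rng.normal(size=(WINDOW, N_STRUCT))
W_out = rng.normal(size=(3, WINDOW * N_STRUCT))   # 45 hidden units -> 3 outputs
b_out = rng.normal(size=3)

def combine(ensemble_outputs):
    """ensemble_outputs: array of shape (15, 3, 5) with the single-structure
    predictions for the window centred on the residue of interest."""
    hidden = sigmoid(np.sum(W_hidden * ensemble_outputs, axis=2) + b_hidden)
    return softmax(W_out @ hidden.reshape(-1) + b_out)

probabilities = combine(rng.uniform(0.0, 1.0, size=(WINDOW, N_STRUCT, N_ENSEMBLE)))
print(probabilities, probabilities.argmax())      # 0/1/2 -> helix/strand/coil
```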


Figure 3: The ensemble method for combining and filtering ensembles of single-structure networks. The combining network (top of the figure) takes a window of 3 × 5 × 15 predictions from the ensembles of single-structure networks (3 structures, 5 networks for each structure, and a window length of 15) computed along the amino acid sequence. In the combining network the ensembles for each of the three structures are weighted separately by position-specific weights for each window position.


2.4 Using multiple alignments of homologous proteins

Multiple alignments of homologous proteins contain more information about secondary structures than single sequences alone, because the secondary structure is considerably better conserved than the amino acid sequence. The use of multiple alignments can give significant improvements in secondary structure prediction (Zvelebil et al., 1987), especially if weakly related proteins are included in the alignments. The latter only holds if the alignment of the weakly related proteins is good, i.e., resembles the structural alignment obtained by superposition of protein backbones (Levin et al., 1993).

Recently Rost and Sander have had significant success using sequence profiles from multiple alignments as input to the neural network instead of single sequences (Rost & Sander, 1993a; Rost & Sander, 1993b; Rost & Sander, 1994). A profile consists of the frequencies of the 20 amino acids in each column of the multiple alignment, and in the work of Rost and Sander these frequencies are used as inputs to the neural network, i.e., the usual representation of an amino acid by a 1 and 19 zeros is replaced by 20 real numbers. When using profiles instead of single sequences, correlations between amino acids in the window are not available to the network. Although this may not degrade performance in practice, we have chosen another approach, which conserves these correlations. It is the approach also taken in (Zvelebil et al., 1987; Russell & Barton, 1993; Levin et al., 1993), where the predictions are made from the single sequences and then combined afterwards using the alignment. This method also has the advantage of being able to use any secondary structure prediction method (based on single sequences) and any alignment method.

For the protein for which a secondary structure prediction is wanted (called the base protein), a set of homologous proteins is found. This set of proteins, including the base protein, is used for the secondary structure prediction in the following way.

1. The secondary structure of each of the homologous proteins in the set is predicted separately using its amino acid sequence. Any prediction method based on single sequences can be used at this stage, but we use the ensemble method described above.

2. The protein sequences in the set are aligned by some multiple sequence alignment method and each protein is assigned a weight (see below).

3. For each column in the alignment a consensus prediction is found from the predictions corresponding to each of the amino acids in the column (see below).

For each column the consensus is obtained either by weighted average or by weighted majority. The weighted average is calculated by first multiplying the α-helix predictions by the weights of the proteins and then summing the weighted helix predictions column-wise. Similarly the weighted sums of β-strand and coil predictions are calculated. Note that insertions in the alignment do not contribute to the column sums. The largest of the three column sums then determines the predicted secondary structure for this column. The weighted average approach is illustrated in Table 3. In the weighted majority, the prediction for each amino acid is chosen as the largest of the three outputs. Then the total sum of α-helix predictions is calculated by column-wise summing of the weights for those proteins where an α-helix is predicted. Similarly the total sums of β-strand and coil predictions are found. The secondary structure obtaining the largest column sum is chosen as the predicted one for this column. In this way, weighted majority becomes dependent on the estimated weights for each of the proteins in the alignment.
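The weighted average consensus for a single alignment column can be sketched as follows; the example values loosely resemble one row of Table 3, and the handling of gaps follows the rule above that insertions do not contribute to the column sums.

```python
import numpy as np

def consensus_column(column_predictions, weights):
    """Weighted-average consensus for one alignment column.

    column_predictions: one entry per aligned protein, either None (gap in
    this protein, contributes nothing) or a length-3 array with the
    (helix, strand, coil) predictions for the aligned residue.
    weights: one weight per protein.
    Returns 0, 1 or 2 for helix, strand or coil.
    """
    column_sums = np.zeros(3)
    for prediction, weight in zip(column_predictions, weights):
        if prediction is not None:            # gaps do not contribute
            column_sums += weight * np.asarray(prediction)
    return int(np.argmax(column_sums))

# Example loosely resembling the first row of Table 3 (third protein has a gap).
column = [np.array([0.1, 0.0, 0.9]), np.array([0.1, 0.0, 0.9]), None]
print(consensus_column(column, [0.018, 0.010, 0.053]))   # -> 2 (coil)
```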

Weighting the aligned proteins

If an alignment contains many very similar proteins and a few that differ significantly from the majority, then the minority will have almost no influence on the prediction. Therefore some form of weighting of the aligned proteins is needed, and several weighting schemes have been suggested in recent years, see e.g. (Altschul et al., 1989; Vingron & Argos, 1989; Sibbald & Argos, 1990; Gerstein et al., 1994; Henikoff & Henikoff, 1994).


Protein 1      ..  Protein 28     ..  Protein 56      Weighted    Pred.  Obs.
Weight 0.018       Weight 0.010       Weight 0.053    sum
K  .1 .0 .9        K  .1 .0 .9        *  .0 .0 .0     .1 .0 .9    c      c
E  .2 .0 .8        E  .2 .0 .8        *  .0 .0 .0     .2 .0 .8    c      c
T  .7 .0 .3        S  .4 .1 .5        *  .0 .0 .0     .4 .0 .6    c      c
A  .6 .1 .3        S  .5 .0 .5        *  .0 .0 .0     .4 .0 .6    c      α
A  .8 .0 .2        A  .7 .0 .3        *  .0 .0 .0     .6 .0 .4    α      α
A  .9 .0 .1        M  .8 .1 .1        A  .1 .0 .9     .6 .1 .3    α      α
K  .9 .0 .1        K  .8 .1 .1        K  .1 .1 .8     .6 .1 .3    α      α
F  .9 .0 .1        F  .8 .0 .2        F  .5 .1 .4     .6 .1 .3    α      α
E  .9 .0 .1        Q  .8 .0 .2        Q  .4 .1 .5     .6 .1 .3    α      α
R  .8 .1 .1        R  .8 .0 .2        E  .6 .0 .4     .6 .1 .3    α      α
Q  .8 .1 .1        Q  .8 .0 .2        K  .5 .1 .4     .6 .1 .3    α      α
H  .8 .0 .2        H  .7 .1 .2        H  .3 .1 .6     .5 .1 .4    α      α
M  .7 .0 .3        M  .4 .1 .5        I  .1 .1 .8     .3 .2 .5    c      c
D  .4 .0 .6        D  .2 .1 .7        P  .1 .0 .9     .2 .1 .7    c      c
S  .4 .0 .6        S  .2 .1 .7        N  .2 .0 .8     .2 .1 .7    c      c
S  .4 .1 .5        S  .2 .0 .8        *  .0 .0 .0     .2 .1 .7    c      c
T  .4 .1 .5        G  .1 .0 .9        *  .0 .0 .0     .2 .1 .7    c      c
S  .5 .1 .4        S  .1 .0 .9        *  .0 .0 .0     .2 .1 .7    c      c
A  .5 .1 .4        P  .1 .0 .9        *  .0 .0 .0     .2 .1 .7    c      c
A  .3 .1 .6        S  .2 .0 .8        *  .0 .0 .0     .2 .1 .7    c      c
S  .3 .1 .6        T  .2 .0 .8        T  .1 .1 .8     .2 .0 .8    c      c
S  .3 .0 .7        N  .2 .0 .8        N  .1 .1 .8     .2 .0 .8    c      c
:                  :                  :               :           :      :

Table 3: Example of a consensus prediction for the protein 7rsa obtained by weighted average. The table shows a small part of the HSSP alignment for only 3 out of the 56 proteins in the alignment. For each protein the predictions of (left to right) α-helix, β-strand and coil produced by the ensemble method are shown. Note that a "*" in the amino acid sequence corresponds to a gap in the alignment. The column "Weighted sum" is the weighted average consensus prediction for each of the three secondary structures. The last two columns show the predicted and observed structures, respectively.


Here we will use a newly developed weighting scheme based on the maximum entropy principle (Krogh & Mitchison, 1995).

For an alignment of N proteins, the entropic weights are found by maximizing the entropy of the alignment, defined by (Krogh & Mitchison, 1995):

S(w_1, ..., w_N) = \sum_{j=1}^{M} e_j = -\sum_{j=1}^{M} \sum_{x} p_j(x) \log p_j(x)    (4)

where the sum extends over all alignment columns j = 1, ..., M and over the 20 different amino acids x. p_j(x) is the weighted amino acid frequency for column j, i.e., p_j(x) is a function of the weights assigned to the aligned proteins, see (Krogh & Mitchison, 1995). The entropy is a concave function of the weights, and it is therefore easy to maximize. We have used simple gradient ascent in this work, although more efficient techniques are available. The problem with entropic weighting, and any other alignment-based weighting scheme, is that erroneously aligned proteins can be assigned very high weights, which obviously is wrong. Since aligning weakly related proteins often results in erroneous alignments (Vingron & Argos, 1989; Levin et al., 1993), the weighting schemes should be used with caution. For this reason we have also tested a combination of the uniform and the entropic weights. Thus, for protein i in the alignment the weight is given by:

w_i = \frac{\lambda}{N} + (1 - \lambda) w_i^{entropic}    (5)

where λ = 0.5 is used in this work.
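The following Python sketch illustrates the entropic weighting and the combined weights of equation (5). The parametrization is simplified compared to (Krogh & Mitchison, 1995): the weights are assumed non-negative and normalized to sum to one, the gradient is estimated numerically, and the step size and number of iterations are arbitrary choices.

```python
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"

def column_frequencies(column, weights):
    """Weighted amino acid frequencies p_j(x) for one alignment column.
    Gap symbols ('*' or '-') are ignored."""
    p = np.zeros(len(ALPHABET))
    for aa, w in zip(column, weights):
        if aa in ALPHABET:
            p[ALPHABET.index(aa)] += w
    total = p.sum()
    return p / total if total > 0 else p

def alignment_entropy(columns, weights):
    """S(w) = -sum_j sum_x p_j(x) log p_j(x), equation (4)."""
    entropy = 0.0
    for column in columns:
        p = column_frequencies(column, weights)
        nonzero = p[p > 0]
        entropy -= float(np.sum(nonzero * np.log(nonzero)))
    return entropy

def entropic_weights(columns, n_proteins, steps=200, step_size=0.01):
    """Maximize the alignment entropy by simple gradient ascent, keeping the
    weights non-negative and normalized to sum to one (numerical gradient
    used here purely for clarity)."""
    w = np.full(n_proteins, 1.0 / n_proteins)
    eps = 1e-4
    for _ in range(steps):
        base = alignment_entropy(columns, w)
        grad = np.zeros(n_proteins)
        for i in range(n_proteins):
            w_eps = w.copy()
            w_eps[i] += eps
            grad[i] = (alignment_entropy(columns, w_eps) - base) / eps
        w = np.clip(w + step_size * grad, 1e-6, None)
        w /= w.sum()
    return w

# Toy alignment of three proteins (one tuple per column); the third, most
# divergent protein ends up with the largest entropic weight.
columns = list(zip("AKEVL", "AKDVL", "GRDIM"))
w_entropic = entropic_weights(columns, 3)
lam = 0.5                                   # mixing parameter of equation (5)
w_combined = lam / 3 + (1 - lam) * w_entropic
print(w_entropic, w_combined)
```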

To improve the alignment prediction a one-hidden-layer filter network is applied to the consensus prediction. This network takes a window of 15 consecutive alignment predictions as input. In addition, the column entropy e_j (defined in (4)) and the weighted number of insertions and deletions (InDels) for each column are used as input. Thus, the filter network has a total of 15 × (3 + 3) = 90 inputs. The entropy of each alignment column indicates how well the current position is conserved. That is, if the column entropy is close to zero, then the variation of the amino acids in this column is small, i.e., this position is well conserved in the protein family. On the other hand, if the column entropy is large, then the variation of amino acids is large, i.e., this position is poorly conserved. Since regular secondary structure segments are more conserved than coil segments, a large variation of amino acids is often observed in coil regions (Rost et al., 1994b). Thus, a large column entropy often corresponds to a coil region, and a small entropy to an α-helix or β-strand region. The weighted number of InDels is the number of insertions and deletions at the considered alignment position, weighted by equation (5). InDels most often occur in coil regions. To avoid over-fitting, the number of hidden units in this filter network is 5.

The alignments used to test this method are taken from the HSSP database version 1.0, release 25.0 (Sander & Schneider, 1991). For each of the 126 non-homologous proteins the corresponding HSSP file is found. These files consist of homologous proteins that have at least 30% sequence identity for alignment lengths > 80 residues, and more for shorter proteins (Sander & Schneider, 1991). There are two minor problems in using the HSSP files for secondary structure prediction. For creating the alignments in the HSSP files, knowledge about the secondary structure of the base protein is used, since no insertions or deletions in regular secondary structure segments are allowed. Furthermore, there might be homologies between proteins in different HSSP files, although the base proteins do not have significant homologies, and this might give homology between the test and training sets. In our experience these points have insignificant influence on the results, and using the HSSP files gives us the advantage of being able to directly compare our results with those of (Rost & Sander, 1994).


Figure 4: Percentage (Q2) of residues predicted correctly by the α-network as a function of the number of training epochs (full sweeps through the training set: sets B-G). The solid curve shows the percentage of correctly learned residues in the training set and the dotted curve the prediction accuracy on the testing set (set A).

3 Results

3.1 Two-state predictions by single-structure networks

The result of training the structured α-network on sets B-G and using set A as testing set is shown in Figure 4. This figure shows two interesting features: 1) over-fitting is gone, i.e., the accuracies on the training and testing sets are approximately equal; 2) the training and testing percentages oscillate in phase. The first observation means that this network gives reliable estimates of the prediction accuracy on new proteins not in the database used for developing the method. The observed fluctuations are mostly due to the use of balanced training, where a different set of negative examples (non-helix) is used in each training epoch. Since the in-phase oscillations are observed for all of our networks, the final network weights are chosen as follows. The network is trained for 100 training epochs, and in each epoch the training error is measured. If the training error is lower than in all previous epochs the corresponding weights are saved. In this way, the set of weights corresponding to the smallest training error seen during all 100 epochs is found.
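This stop criterion can be sketched as a training loop that remembers the weights from the epoch with the lowest training error; train_one_epoch and training_error are hypothetical placeholders for the balanced on-line training and error measurement, and the weights attribute is likewise only illustrative.

```python
import copy

def train_with_best_weights(network, training_set, train_one_epoch, training_error,
                            n_epochs=100):
    """Train for a fixed number of epochs and keep the weights from the epoch
    with the lowest training error.  train_one_epoch and training_error are
    hypothetical placeholders; balanced resampling of the negative examples
    would happen inside train_one_epoch."""
    best_error = float("inf")
    best_weights = copy.deepcopy(network.weights)
    for _ in range(n_epochs):
        train_one_epoch(network, training_set)
        error = training_error(network, training_set)
        if error < best_error:                # new best epoch: remember the weights
            best_error = error
            best_weights = copy.deepcopy(network.weights)
    network.weights = best_weights            # restore the best weights found
    return network, best_error
```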

For the single-structure predictions, a fully connected network with hidden units performs only as well as a one-layer network if the training is stopped at the right time, see (Hayward & Collins, 1992). Therefore we use a one-layer network as a reference model. In Table 4 the results obtained with the single-structure networks are summarized. From the table it is seen that the structured networks predict the three secondary structures better than the reference models. Furthermore, the structured helix network learns the training set better than the reference helix network despite the fact that the latter contains more weights (reference α-network: 261 weights, structured α-network: 160 weights). This shows that the learned representation of the amino acids is considerably better than the orthogonal representation.

When comparing two-state predictions for different testing sets a considerable variation in performance is seen. For testing set B the helix network classifies Q2 = 72.5% of the residues correctly, while this number is Q2 = 77.7% for testing set F. The variation between the seven different testing sets is observed for all of the single-structure networks and is partly due to the different distributions of the three secondary structures, and partly due to the fact that non-homologous proteins in general are very different. This emphasizes the importance of using cross-validation when estimating prediction accuracies.


                 Q2,i Train (%)   Q2,i Test (%)   Ci Train   Ci Test
α-network:
  Reference      74.10            72.54           0.42       0.37
  Structured     75.59            74.98           0.42       0.39
  Filter         76.31            76.46           0.42       0.40
β-network:
  Reference      75.08            73.84           0.39       0.36
  Structured     78.10            76.48           0.41       0.37
  Filter         81.52            81.34           0.41       0.41
coil-network:
  Reference      71.91            70.78           0.43       0.41
  Structured     72.09            71.33           0.44       0.42

Table 4: Two-state predictions of α-helix, β-strand and coil found by seven-fold cross-validation. The reference networks are perceptrons with window lengths W = 13. The structured networks all use the local encoding scheme, and the α-network has a built-in period of 3 residues. The fully connected filter network takes a window of 15 predictions from the structured network as input and has 10 hidden units. The filter is trained unbalanced.


To improve the two-state prediction a filter network is applied. This network takes a window of 15 consecutive predictions from the sequence-structure network as input. The filter network is fully connected and contains 10 hidden units. As shown in Table 4 the filter improves the prediction by approximately 1.5% for helices and almost 5% for strands. Filtering the coil predictions gives only about 0.5% improvement, probably because coil is an irregular structure, i.e., it does not depend as much on the surrounding structures.

3.2 Combining single-structure networks

To obtain a three-state prediction the single-structure networks are combined with a filter network. The filter network takes a window of 15 consecutive secondary structure predictions as input and has 10 hidden units. In Table 5 the results achieved when using the non-filtered and the filtered single-structure predictions as input are shown. From this table it is seen that filtering the single-structure predictions before the combining network does not improve performance. This is because the combining network in itself acts like a filter network. For comparison, the performance of a network identical to Qian and Sejnowski's with 40 hidden units is also shown in Table 5. The performance of this network is evaluated on the same set of non-homologous proteins by seven-fold cross-validation, and it is seen that the fully connected network only obtains Q3 = 63.2%, compared to Q3 = 65.4% obtained by combining the unfiltered single-structure predictions. Note that the results obtained with the Qian and Sejnowski model are found by using the best performance on each of the seven testing sets, which over-estimates the performance. For the combining network the previously defined stop criterion is used.

The effect of the local encoding scheme is illustrated by a three-state network, which uses the adaptive encoding of amino acids in the first layer and 5 hidden units in the second layer. This network has a window size of 15 residues, leading to a total of only 311 adjustable weights compared to approximately 11,000 weights in Qian and Sejnowski's network. Despite this difference, the local encoding network gives about the same Q3 and better correlation coefficients, indicating that the amino acids are well described by only three real parameters, and that the fully connected networks are highly over-parametrized. Results after filtering are shown in Table 5. The filter network has an input window of 15 and a hidden layer consisting of 10 units.


                                      Q3 (%)   Cα     Cβ     Cc
Combined single-structure nets:
  Filtered input                      64.46    0.45   0.39   0.42
  Unfiltered input                    65.39    0.46   0.41   0.43
  Ensembles                           66.27    0.48   0.41   0.44
Alignments:
  Uniform (not filtered)              68.81    0.55   0.46   0.48
  Entropic (not filtered)             68.68    0.54   0.46   0.47
  Entropic+Uniform (not filtered)     69.20    0.55   0.46   0.48
  Entropic+Uniform (filtered)         71.32    0.59   0.52   0.50
Reference models:
  Qian and Sejnowski network          63.16    0.40   0.35   0.41
  Local encoding (not filtered)       63.10    0.42   0.36   0.41
  Local encoding (filtered)           64.20    0.44   0.37   0.41

Table 5: Cross-validated three-state predictions obtained by various methods. Ensembles refers to the combination of ensembles of 5 single-structure networks with a constrained network, and Alignments refers to using multiple alignments of homologous sequences in combination with ensembles. The alignment prediction is obtained by weighted average, and different weighting schemes are shown. The effect of filtering the alignment prediction is also shown. For comparison, the performance of a fully connected network similar to the one used by Qian and Sejnowski with 40 hidden units (input spacer units omitted) is shown. Note that the performance of the fully connected network is given by the best performance on the testing set during training, whereas the previously defined stop criterion is used for all other networks. Also shown is a three-state prediction network with local encoding in the first layer and 5 hidden units in the second layer fully connected to the output layer; results with and without filtering are shown.


The ensembles give an improvement of approximately 0.9% in the overall three-state prediction accuracy, mostly due to a better helix prediction (higher Cα), see Table 5. This is less than the improvement of more than 2% reported by Rost and Sander (Rost & Sander, 1994) when using ensembles of neural networks for secondary structure prediction. This is probably because the single-structure networks used in this work are very well adjusted and no over-fitting is observed. The networks used by Rost and Sander have a considerable tendency to over-fit, and we believe that an important role of the ensemble in their work is to "average out" the over-fitting. This is possible if the members of the ensemble over-fit differently, i.e., they make different errors, and therefore their average output is generally better than the output of any single network in the ensemble (Hansen & Salamon, 1990; Krogh & Vedelsby, 1995).

3.3 Using multiple alignments of related proteins

To improve the performance of the ensemble method, multiple alignments are applied as described previously. Since only minor differences were observed between the weighted majority scheme and the weighted average scheme, only results from the latter (which tends to be the best) will be presented in the following. As can be seen in Table 5 the difference between uniform weighting and entropic weighting is surprisingly small. Furthermore, the entropic weighting seems to be slightly inferior to the uniform weighting. As already discussed, any weighting scheme suffers from assigning large weights to erroneous alignments, and that might be one of the reasons for this. However, using the combined weighting scheme a gain of approximately 0.5% is seen compared to both uniform and entropic weighting.

Using a network to filter the alignment prediction as described in section 2.4 gives an amazing gain of more than 2% in the three-state accuracy. The filter takes a window of 15 "raw" alignment predictions, the column entropy and the weighted number of InDels as input. The network is fully connected and contains 5 hidden units. Thus, the filtered alignment prediction yields Q3 = 71.3%, and the corresponding Matthews correlation coefficients of Cα = 0.59, Cβ = 0.52, and Cc = 0.50 indicate a very good prediction. Comparing the filtered alignment prediction to the one obtained using single sequences, a gain of 5% is observed.

In order to further improve the performance, the following additional inputs were tried.

1. Normalized distance from the central residue to the ends of the protein

2. Normalized length of the protein

3. Frequency of the 20 amino acids in the base protein

The last two inputs contain global information about the protein under consideration. However, none of these attempts led to significant improvements (they all resulted in a gain of less than 0.1%).

Compared to the single-sequence method the alignment method obtains a 5% higher classification rate, and a considerable increase is seen in the Matthews correlation coefficients for all three structures. This confirms that evolutionary information is extremely important in the prediction of secondary structure from amino acid sequences. In the work of Levin et al. (Levin et al., 1993) a gain in accuracy of almost 7% is reported when applying multiple alignments to a combination of the GOR (Garnier et al., 1978) and SIMPA (Levin & Garnier, 1988) methods.

A similar gain is reported by Rost and Sander (Rost & Sander, 1994) when using profiles to train a neural network resembling the one used by Qian and Sejnowski. The smaller gain of 5% found in this work is probably due to a better single-sequence method than the ones used by the above-mentioned authors.

The increase in prediction accuracy when using multiple alignments is mostly due to a better prediction of α-helices and β-strands, as shown in Table 6.


Ensembles        α-helix   β-strand   coil
  Qi (All)       64.2%     45.7%      76.9%
  Qi (Core)      71.5%     53.5%      79.5%
  Li (Obs)       9.1       5.1        6.2
  Li (Pred)      7.5       3.8        7.8
  Pi^ovl         68%       69%        91%

Alignments       α-helix   β-strand   coil
  Qi (All)       68.9%     57.0%      79.2%
  Qi (Core)      76.6%     67.1%      81.9%
  Li (Obs)       9.1       5.1        6.2
  Li (Pred)      9.3       4.4        7.8
  Pi^ovl         83%       80%        95%

Table 6: The performance of the ensemble and alignment methods on each of the three secondary structures found by seven-fold cross-validation. "All" refers to all residues, while "Core" refers to all residues except the first and last residue in segments of secondary structure.

Thus, the increase in Qc is comparatively small; the alignments mostly contribute information about regular secondary structures, in agreement with the fact that the three-dimensional structure of a protein family is mainly determined by the approximate location of helices and strands (Rost et al., 1994b). Hence, the ends of regular secondary structure segments are less well defined than the core of regular secondary structure segments. This is verified in Table 6, where it is seen that the core of helix and strand segments is predicted considerably better than the mean for all residues. The corresponding percentages for coil show that the core and ends of coil segments are approximately equally well defined.

As discussed earlier, the performance of the prediction method should not be judged only by the percentages of correctly predicted residues. In order to see how realistic the prediction is, the predicted and observed mean lengths of secondary structure segments are shown in Table 6. It is seen that the alignment method gives a much better prediction of segment lengths for helices and strands than the single-sequence method. The predicted helix segments have nearly the same lengths as the observed helix segments, while the underprediction of β-strand lengths is slightly worse for the ensemble method. However, the overprediction of coil seems to remain unchanged when using alignments.

Since β-sheets often contain non-local interactions, the strands are poorly defined by local sequences of amino acids. This is reflected in the β-strand prediction shown in Table 6: only Qβ = 57.0% of the observed strands are correctly predicted. This should be compared to Qα = 68.9% and Qc = 79.2%. In some sense it is more interesting whether the algorithm finds segments of helix or strand at approximately correct locations. Even though the strand prediction is clearly inferior to the helix prediction (in terms of Qα and Qβ), segments of these two structures are located equally well. As shown in Table 6, an impressive 83% of all predicted helix segments overlap with at least one observed helix segment. The corresponding percentage for strand is 80%. These high overlap percentages illustrate that the alignment method is very good at locating and distinguishing segments of regular secondary structure.

For a new protein with unknown structure the performance is better described by the per-chain accuracy Q3^chain. The alignment method yields Q3^chain = 70.8% ± 9.3%. This is slightly smaller than the performance measured per residue, which means that long chains are predicted slightly better than short chains. Although the expected per-chain accuracy lies between Q3^chain = 61.5% and Q3^chain = 80.1%, the prediction can be significantly worse, as illustrated in Figure 5. For four of the chains in the data set the three-state accuracy is less than 50%. Most prediction methods are good at capturing general features contained in the database used for training. Hence, the more atypical a given protein is compared to the proteins in the training set, the more likely a poor prediction becomes.


Figure 5: Distribution of per-chain three-state accuracies (Q3 per chain [%] versus number of chains) obtained by the alignment method. The average three-state accuracy is <Q3^chain> = 70.8% with a standard deviation of σ = 9.3%.

3.4 The neural network output as estimated probabilities of correct prediction

The prediction for a certain residue is given by the output unit with the largest output. The actual output value for this unit can be interpreted as the probability that the prediction is correct. To see if this interpretation holds, one can find the actual accuracy of predictions for residues giving an output in a certain interval. Figure 6 shows the observed three-state accuracy versus the estimated accuracy for those residues producing an output in a given interval. The estimated prediction accuracy is given by the arithmetic average of the network outputs in the interval. The figure shows that a linear relationship exists between the estimated and the observed accuracy, verifying that the network outputs can indeed be interpreted as estimated probabilities of correct prediction. Note that the lowest estimated probability is 0.33, since the three outputs must sum to one and the prediction is chosen as the largest of the three outputs.

Figure 7 shows the observed accuracy plotted against the percentage of residues predicted with outputs above a certain value. This is another way of seeing that the higher the output of the filter network, the more reliable the prediction. From this figure one can see that 72% of the database is predicted with Q3 = 80% and 36% with about Q3 = 90%. Thus, for more than 36% of the database an accuracy comparable to that of homology methods is achieved. This position-specific reliability measure can be used to locate those regions of a new protein with unknown structure that are predicted with particularly high confidence, thereby making an experimental determination of the structure considerably easier. These results are very similar to those of (Rost & Sander, 1994).
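The relationship between the filter-network output and the observed accuracy (Figures 6 and 7) can be checked with a simple binning procedure, sketched below on made-up data; for each output interval the mean winning output (the estimated probability) is compared with the fraction of correctly predicted residues (the observed accuracy).

```python
import numpy as np

def reliability_curve(winning_outputs, correct, n_bins=10):
    """Compare estimated and observed accuracy per output interval.

    winning_outputs: largest of the three outputs for each residue.
    correct:         1 if the residue was predicted correctly, else 0.
    Returns (mean winning output, observed accuracy, count) per bin.
    """
    winning_outputs = np.asarray(winning_outputs, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # The winning output of a three-class softmax is always at least 1/3.
    edges = np.linspace(1.0 / 3.0, 1.0, n_bins + 1)
    rows = []
    for low, high in zip(edges[:-1], edges[1:]):
        in_bin = (winning_outputs >= low) & (winning_outputs < high)
        if in_bin.any():
            rows.append((winning_outputs[in_bin].mean(),  # estimated accuracy
                         correct[in_bin].mean(),          # observed accuracy
                         int(in_bin.sum())))
    return rows

# Made-up data: a well-calibrated predictor gives estimated ~ observed.
rng = np.random.default_rng(2)
outputs = rng.uniform(1.0 / 3.0, 1.0, size=1000)
correct = rng.uniform(size=1000) < outputs
for estimated, observed, count in reliability_curve(outputs, correct):
    print(f"{estimated:.2f}  {observed:.2f}  {count}")
```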

Since β-strands are predicted less accurately than both α-helices and coil, the estimated probability for this structure is generally smaller than the probabilities for α-helices and coil. In Figure 8 the percentages of observed helices, strands and coil predicted with outputs in the given intervals are shown. Most β-strands are predicted with an output below 0.6, corresponding to a relatively uncertain prediction. In contrast, an impressive 27% of all observed helices are predicted with outputs in the interval 0.9-1.0, corresponding to a very high reliability. Further-
