
Support Vector Machinery for Infinite Ensemble Learning

Hsuan-Tien Lin htlin@caltech.edu

Ling Li ling@caltech.edu

Department of Computer Science, California Institute of Technology, Pasadena, CA 91125, USA

Editor: Peter L. Bartlett

Abstract

Ensemble learning algorithms such as boosting can achieve better performance by averaging over the predictions of some base hypotheses. Nevertheless, most existing algorithms are limited to combining only a finite number of hypotheses, and the generated ensemble is usually sparse. Thus, it is not clear whether we should construct an ensemble classifier with a larger or even an infinite number of hypotheses. In addition, constructing an infinite ensemble itself is a challenging task. In this paper, we formulate an infinite ensemble learning framework based on the support vector machine (SVM). The framework can output an infinite and nonsparse ensemble through embedding infinitely many hypotheses into an SVM kernel. We use the framework to derive two novel kernels, the stump kernel and the perceptron kernel. The stump kernel embodies infinitely many decision stumps, and the perceptron kernel embodies infinitely many perceptrons. We also show that the Laplacian radial basis function kernel embodies infinitely many decision trees, and can thus be explained through infinite ensemble learning. Experimental results show that SVM with these kernels is superior to boosting with the same base hypothesis set. In addition, SVM with the stump kernel or the perceptron kernel performs similarly to SVM with the Gaussian radial basis function kernel, but enjoys the benefit of faster parameter selection. These properties make the novel kernels favorable choices in practice.

Keywords: ensemble learning, boosting, support vector machine, kernel

1. Introduction

Ensemble learning algorithms, such as boosting (Freund and Schapire, 1996), are successful in practice (Meir and Rätsch, 2003). They construct a classifier that averages over some base hypotheses in a set H. While the size of H can be infinite, most existing algorithms use only a finite subset of H, and the classifier is effectively a finite ensemble of hypotheses.

Some theories show that the finiteness places a restriction on the capacity of the ensemble (Freund and Schapire, 1997), and some theories suggest that the performance of boosting can be linked to its asymptotic behavior when the ensemble is allowed to be of an infinite size (Rätsch et al., 2001). Thus, it is possible that an infinite ensemble is superior for learning. Nevertheless, the possibility has not been fully explored because constructing such an ensemble is a challenging task (Vapnik, 1998).

In this paper, we conquer the task of infinite ensemble learning, and demonstrate that better performance can be achieved by going from finite ensembles to infinite ones. We formulate a framework for infinite ensemble learning based on the support vector machine (SVM) (Vapnik, 1998). The key of the framework is to embed an infinite number of hypotheses into an SVM kernel. Such a framework can be applied both to construct new kernels for SVM, and to interpret some existing ones (Lin, 2005). Furthermore, the framework allows us to compare SVM and ensemble learning algorithms in a fair manner using the same base hypothesis set.

Based on the framework, we derive two novel SVM kernels, the stump kernel and the perceptron kernel, from an ensemble learning perspective (Lin and Li, 2005a). The stump kernel embodies infinitely many decision stumps, and as a consequence measures the similarity between examples by the ℓ1-norm distance. The perceptron kernel embodies infinitely many perceptrons, and works with the ℓ2-norm distance. While there exist similar kernels in literature, our derivation from an ensemble learning perspective is nevertheless original.

Our work not only provides a feature-space view of their theoretical properties, but also broadens their use in practice. Experimental results show that SVM with these kernels is superior to successful ensemble learning algorithms with the same base hypothesis set. These results reveal some weakness in traditional ensemble learning algorithms, and help understand both SVM and ensemble learning better. In addition, SVM with these kernels shares similar performance to SVM with the popular Gaussian radial basis function (Gaussian-RBF) kernel, but enjoys the benefit of faster parameter selection. These properties make the two kernels favorable choices in practice.

We also show that the Laplacian-RBF kernel embodies infinitely many decision trees, and hence can be viewed as an instance of the framework. Experimentally, SVM with the Laplacian-RBF kernel performs better than ensemble learning algorithms with decision trees. In addition, our derivation from an ensemble learning perspective helps to explain the success of the kernel on some specific applications (Chapelle et al., 1999).

The paper is organized as follows. In Section 2, we review the connections between SVM and ensemble learning. Next in Section 3, we propose the framework for embedding an infinite number of hypotheses into a kernel. We then derive the stump kernel in Section 4, the perceptron kernel in Section 5, and the Laplacian-RBF kernel in Section 6. Finally, we show the experimental results in Section 7, and conclude in Section 8.

2. Support Vector Machine and Ensemble Learning

In this section, we first introduce the basics of SVM and ensemble learning. Then, we review some established connections between the two in literature.

2.1 Support Vector Machine

Given a training set $\{(x_i, y_i)\}_{i=1}^{N}$, which contains input vectors $x_i \in \mathcal{X} \subseteq \mathbb{R}^D$ and their corresponding labels $y_i \in \{-1, +1\}$, the soft-margin SVM (Vapnik, 1998) constructs a classifier
\[
g(x) = \operatorname{sign}\bigl(\langle w, \phi_x \rangle + b\bigr)
\]
from the optimal solution to the following problem:1
\[
\begin{aligned}
(P_1)\quad \min_{w \in \mathcal{F},\, b \in \mathbb{R},\, \xi \in \mathbb{R}^N} \quad & \tfrac{1}{2}\langle w, w \rangle + C \sum_{i=1}^{N} \xi_i \\
\text{s.t.}\quad & y_i\bigl(\langle w, \phi_{x_i} \rangle + b\bigr) \ge 1 - \xi_i, \quad \text{for } i = 1, 2, \ldots, N, \\
& \xi_i \ge 0, \quad \text{for } i = 1, 2, \ldots, N.
\end{aligned}
\]

Here $C > 0$ is the regularization parameter, and $\phi_x = \Phi(x)$ is obtained from the feature mapping $\Phi \colon \mathcal{X} \to \mathcal{F}$. We assume the feature space $\mathcal{F}$ to be a Hilbert space equipped with the inner product $\langle \cdot, \cdot \rangle$ (Schölkopf and Smola, 2002). Because $\mathcal{F}$ can be of an infinite number of dimensions, SVM solvers usually work on the dual problem:

\[
\begin{aligned}
(P_2)\quad \min_{\lambda \in \mathbb{R}^N}\quad & \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} \lambda_i \lambda_j y_i y_j K(x_i, x_j) - \sum_{i=1}^{N} \lambda_i \\
\text{s.t.}\quad & 0 \le \lambda_i \le C, \quad \text{for } i = 1, 2, \ldots, N, \\
& \sum_{i=1}^{N} y_i \lambda_i = 0.
\end{aligned}
\]

Here $K$ is the kernel function defined as $K(x, x') = \langle \phi_x, \phi_{x'} \rangle$. Then, the optimal classifier becomes
\[
g(x) = \operatorname{sign}\Bigl(\sum_{i=1}^{N} y_i \lambda_i K(x_i, x) + b\Bigr), \qquad (1)
\]
where $b$ can be computed through the primal-dual relationship (Vapnik, 1998; Schölkopf and Smola, 2002).

The use of a kernel function $K$ instead of computing the inner product directly in $\mathcal{F}$ is called the kernel trick, which works when $K(\cdot, \cdot)$ can be computed efficiently (Schölkopf and Smola, 2002). Alternatively, we can begin with an arbitrary $K$, and check whether there exists a space-mapping pair $(\mathcal{F}, \Phi)$ such that $K(\cdot, \cdot)$ is a valid inner product in $\mathcal{F}$. A key tool here is Mercer's condition, which states that a symmetric $K(\cdot, \cdot)$ is a valid inner product if and only if its Gram matrix $\mathbf{K}$, defined by $\mathbf{K}_{i,j} = K(x_i, x_j)$, is always positive semi-definite (PSD) (Vapnik, 1998; Schölkopf and Smola, 2002).
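Mercer's condition can be checked numerically on a sample by inspecting the eigenvalues of the Gram matrix. The following sketch (illustrative only, not from the paper; the sample data and tolerance are assumptions) contrasts the Gaussian-RBF kernel, which passes, with the plain negative ℓ2 distance, which has a zero diagonal and therefore cannot be PSD; the latter is only conditionally PSD, a point that becomes relevant later for the simplified kernels.

```python
# Minimal numerical sketch of Mercer's condition: form a Gram matrix and check
# that all eigenvalues are nonnegative (up to a tolerance).
import numpy as np

def gram(kernel, X):
    """Gram matrix K[i, j] = kernel(X[i], X[j])."""
    return np.array([[kernel(a, b) for b in X] for a in X])

def is_psd(K, tol=1e-10):
    """A symmetric matrix is PSD iff its eigenvalues are all >= 0 (up to tolerance)."""
    return bool(np.all(np.linalg.eigvalsh((K + K.T) / 2) >= -tol))

X = np.random.RandomState(0).rand(25, 4)
gaussian = lambda a, b: np.exp(-1.0 * np.sum((a - b) ** 2))   # Gaussian-RBF: PSD
neg_dist = lambda a, b: -np.linalg.norm(a - b)                # zero diagonal: not PSD

print(is_psd(gram(gaussian, X)))   # True
print(is_psd(gram(neg_dist, X)))   # False (only *conditionally* PSD)
```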

The soft-margin SVM originates from the hard-margin SVM, which forces the margin violations ξi to be zero. When such a solution is feasible for (P1), the corresponding dual solution can be obtained by setting C to ∞ in (P2).

2.2 Adaptive Boosting and Linear Programming Boosting

The adaptive boosting (AdaBoost) algorithm (Freund and Schapire, 1996) is perhaps the most popular and successful approach for ensemble learning. For a given integer $T$ and a hypothesis set $\mathcal{H}$, AdaBoost iteratively selects $T$ hypotheses $h_t \in \mathcal{H}$ and weights $w_t \ge 0$ to construct an ensemble classifier
\[
g_T(x) = \operatorname{sign}\Bigl(\sum_{t=1}^{T} w_t h_t(x)\Bigr).
\]

1. When $\eta$ is nonzero, $\operatorname{sign}(\eta) \equiv \frac{\eta}{|\eta|}$. We shall let $\operatorname{sign}(0) \equiv 0$ to make some mathematical setup cleaner.

The underlying algorithm for selecting $h_t \in \mathcal{H}$ is called a base learner. Under some assumptions (Rätsch et al., 2001), it is shown that when $T \to \infty$, AdaBoost asymptotically approximates an infinite ensemble classifier
\[
g(x) = \operatorname{sign}\Bigl(\sum_{t=1}^{\infty} w_t h_t(x)\Bigr), \qquad (2)
\]
such that $(w, h)$ is an optimal solution to
\[
\begin{aligned}
(P_3)\quad \min_{w_t \in \mathbb{R},\, h_t \in \mathcal{H}}\quad & \sum_{t=1}^{\infty} w_t \\
\text{s.t.}\quad & y_i \Bigl(\sum_{t=1}^{\infty} w_t h_t(x_i)\Bigr) \ge 1, \quad \text{for } i = 1, 2, \ldots, N, \\
& w_t \ge 0, \quad \text{for } t = 1, 2, \ldots, \infty.
\end{aligned}
\]

Note that there are infinitely many variables in (P3). In order to approximate the optimal solution well with a fixed and finite T , AdaBoost resorts to two related properties of some of the optimal solutions for (P3): finiteness and sparsity.

• Finiteness: When two hypotheses have the same prediction patterns on the training input vectors, they can be used interchangeably during the training time, and are thus ambiguous. Since there are at most $2^N$ prediction patterns on $N$ training input vectors, we can partition $\mathcal{H}$ into at most $2^N$ groups, each of which contains mutually ambiguous hypotheses. Some optimal solutions of (P3) only assign one or a few nonzero weights within each group (Demiriz et al., 2002). Thus, it is possible to work on a finite data-dependent subset of $\mathcal{H}$ instead of $\mathcal{H}$ itself without losing optimality.

• Sparsity: Minimizing the $\ell_1$-norm $\|w\|_1 = \sum_{t=1}^{\infty} |w_t|$ often leads to sparse solutions (Meir and Rätsch, 2003; Rosset et al., 2007). That is, for hypotheses in the finite (but possibly still large) subset of $\mathcal{H}$, only a small number of weights needs to be nonzero.

AdaBoost can be viewed as a stepwise greedy search algorithm that approximates such a finite and sparse ensemble (Rosset et al., 2004).

Another boosting approach, called the linear programming boosting (LPBoost), can solve (P3) exactly. We will introduce the soft-margin LPBoost, which constructs an ensemble classifier like (2) with the optimal solution to

\[
\begin{aligned}
(P_4)\quad \min_{w_t \in \mathbb{R},\, h_t \in \mathcal{H}}\quad & \sum_{t=1}^{\infty} w_t + C \sum_{i=1}^{N} \xi_i \\
\text{s.t.}\quad & y_i \Bigl(\sum_{t=1}^{\infty} w_t h_t(x_i)\Bigr) \ge 1 - \xi_i, \quad \text{for } i = 1, 2, \ldots, N, \\
& \xi_i \ge 0, \quad \text{for } i = 1, 2, \ldots, N, \\
& w_t \ge 0, \quad \text{for } t = 1, 2, \ldots, \infty.
\end{aligned}
\]

Demiriz et al. (2002) proposed to solve (P4) with the column generating technique.2 The algorithm works by adding one unambiguous $h_t$ to the ensemble in each iteration. Because of the finiteness property, the algorithm is guaranteed to terminate within $T \le 2^N$ iterations.

The sparsity property can sometimes help speed up the convergence of the algorithm.

Rätsch et al. (2002) worked on a variant of (P4) for regression problems, and discussed optimality conditions when $\mathcal{H}$ is of infinite size. Their results can be applied to (P4) as well.

In particular, they showed that even without the finiteness property (e.g., when $h_t$ outputs real values rather than binary values), (P4) can still be solved using a finite subset of $\mathcal{H}$ that is associated with nonzero weights. The results justify the use of the column generating technique above, as well as a barrier, AdaBoost-like, approach that they proposed.

Recently, Rosset et al. (2007) studied the existence of a sparse solution when solving a generalized form of (P4) with some $\mathcal{H}$ of infinite and possibly uncountable size. They showed that under some assumptions, there exists an optimal solution of (P4) such that at most $N + 1$ weights are nonzero. Thus, iterative algorithms that keep adding necessary hypotheses $h_t$ to the ensemble, such as the proposed path-following approach (Rosset et al., 2007) or the column generating technique (Demiriz et al., 2002; Rätsch et al., 2002), could work by aiming towards such a sparse solution.

Note that even though the findings above indicate that it is possible to design good algorithms to return an optimal solution when H is infinitely large, the resulting ensemble relies on the sparsity property, and is effectively of only finite size. Nevertheless, it is not clear whether the performance could be improved if either or both the finiteness and the sparsity restrictions are removed.

2.3 Connecting Support Vector Machine to Ensemble Learning

The connection between AdaBoost, LPBoost, and SVM is well-known in literature (Freund and Schapire, 1999; Rätsch et al., 2001; Rätsch et al., 2002; Demiriz et al., 2002). Consider the feature transform

\[
\Phi(x) = \bigl(h_1(x), h_2(x), \ldots\bigr). \qquad (3)
\]

We can see that the problem (P1) with this feature transform is similar to (P4). The elements of $\phi_x$ in SVM are similar to the hypotheses $h_t(x)$ in AdaBoost and LPBoost. They all work on linear combinations of these elements, though SVM deals with an additional intercept term $b$. SVM minimizes the $\ell_2$-norm of the weights while AdaBoost and LPBoost work on the $\ell_1$-norm. SVM and LPBoost introduce slack variables $\xi_i$ and use the parameter $C$ for regularization, while AdaBoost relies on the choice of the parameter $T$ (Rosset et al., 2004).

Note that AdaBoost and LPBoost require $w_t \ge 0$ for ensemble learning.

Several researchers developed interesting results based on the connection. For example, Rätsch et al. (2001) proposed to select the hypotheses $h_t$ by AdaBoost and to obtain the weights $w_t$ by solving an optimization problem similar to (P1) in order to improve the robustness of AdaBoost. Another work by Rätsch et al. (2002) introduced a new density estimation algorithm based on the connection. Rosset et al. (2004) applied the similarity to compare SVM with boosting algorithms. Nevertheless, being limited in the same way as AdaBoost and LPBoost, their results could use only a finite subset of $\mathcal{H}$ when constructing the feature mapping (3).

2. Demiriz et al. (2002) actually worked on an equivalent but slightly different formulation.

One reason is that the infinite number of variables $w_t$ and constraints $w_t \ge 0$ are difficult to handle. We will show the remedies for these difficulties in the next section.

3. SVM-Based Framework for Infinite Ensemble Learning

Vapnik (1998) proposed a challenging task of designing an algorithm that actually generates an infinite ensemble classifier, that is, an ensemble classifier with infinitely many nonzero $w_t$. Traditional algorithms like AdaBoost or LPBoost cannot be directly generalized to solve the task, because they select the hypotheses in an iterative manner, and only run for a finite number of iterations.

We solved the challenge via another route: the connection between SVM and ensemble learning. The connection allows us to formulate a kernel that embodies all the hypotheses in $\mathcal{H}$. Then, the classifier (1) obtained from SVM with the kernel is a linear combination over $\mathcal{H}$ (with an intercept term). Nevertheless, there are still two main obstacles. One is to actually derive the kernel, and the other is to handle the constraints $w_t \ge 0$ to make (1) an ensemble classifier. In this section, we combine several ideas to deal with these obstacles, and conquer Vapnik's task with a novel SVM-based framework for infinite ensemble learning.

3.1 Embedding Hypotheses into the Kernel

We start by embedding the infinite number of hypotheses in H into an SVM kernel. We have shown in (3) that we could construct a feature mapping from H. The idea is extended to a more general form for deriving a kernel in Definition 1.

Definition 1 Assume that $\mathcal{H} = \{h_\alpha : \alpha \in \mathcal{C}\}$, where $\mathcal{C}$ is a measure space. The kernel that embodies $\mathcal{H}$ is defined as
\[
K_{\mathcal{H}, r}(x, x') = \int_{\mathcal{C}} \phi_x(\alpha)\,\phi_{x'}(\alpha)\, d\alpha, \qquad (4)
\]
where $\phi_x(\alpha) = r(\alpha) h_\alpha(x)$, and $r \colon \mathcal{C} \to \mathbb{R}^{+}$ is chosen such that the integral exists for all $x, x' \in \mathcal{X}$.

Here $\alpha$ is the parameter of the hypothesis $h_\alpha$. Although two hypotheses with different $\alpha$ values may have the same input-output relation, we would treat them as different objects in our framework. We shall denote $K_{\mathcal{H},r}$ by $K_{\mathcal{H}}$ when $r$ is clear from the context. The validity of the definition is formalized in the following theorem.

Theorem 2 Consider the kernel $K_{\mathcal{H}}$ in Definition 1.

1. The kernel is an inner product for $\phi_x$ and $\phi_{x'}$ in the Hilbert space $\mathcal{F} = L_2(\mathcal{C})$, which contains functions $\varphi(\cdot) \colon \mathcal{C} \to \mathbb{R}$ that are square integrable.

2. For a set of input vectors $\{x_i\}_{i=1}^{N} \in \mathcal{X}^N$, the Gram matrix of $K_{\mathcal{H}}$ is PSD.

Proof The first part is known in mathematical analysis (Reed and Simon, 1980), and the second part follows from Mercer's condition.

Constructing kernels from an integral inner product is a known technique in literature (Schölkopf and Smola, 2002). The framework adopts this technique for embedding the hypotheses, and thus could handle the situation even when $\mathcal{H}$ is uncountable. Note that when $r^2(\alpha)\, d\alpha$ is a "prior" on $h_\alpha$, the kernel $K_{\mathcal{H},r}(x, x')$ can be interpreted as a covariance function commonly used in Gaussian process (GP) models (Williams, 1998; Rasmussen and Williams, 2006). Some Bayesian explanations can then be derived from the connection between SVM and GP, but are beyond the scope of this paper.

3.2 Negation Completeness and Constant Hypotheses

When we use $K_{\mathcal{H}}$ in (P2), the primal problem (P1) becomes

\[
\begin{aligned}
(P_5)\quad \min_{w \in L_2(\mathcal{C}),\, b \in \mathbb{R},\, \xi \in \mathbb{R}^N}\quad & \frac{1}{2}\int_{\mathcal{C}} w^2(\alpha)\, d\alpha + C\sum_{i=1}^{N}\xi_i \\
\text{s.t.}\quad & y_i\Bigl(\int_{\mathcal{C}} w(\alpha)\, r(\alpha)\, h_\alpha(x_i)\, d\alpha + b\Bigr) \ge 1 - \xi_i, \quad \text{for } i = 1, 2, \ldots, N, \\
& \xi_i \ge 0, \quad \text{for } i = 1, 2, \ldots, N.
\end{aligned}
\]

In particular, the classifier obtained after solving (P2) with $K_{\mathcal{H}}$ is the same as the classifier obtained after solving (P5):
\[
g(x) = \operatorname{sign}\Bigl(\int_{\mathcal{C}} w(\alpha)\, r(\alpha)\, h_\alpha(x)\, d\alpha + b\Bigr). \qquad (5)
\]

When $\mathcal{C}$ is uncountable, it is possible that each hypothesis $h_\alpha$ only takes an infinitesimal weight $w(\alpha)\, r(\alpha)\, d\alpha$ in the ensemble. Thus, the classifier (5) is very different from those obtained with traditional ensemble learning, and will be discussed further in Subsection 4.2.

Note that the classifier (5) is not an ensemble classifier yet, because we do not have the constraints w(α) ≥ 0, and we have an additional term b. Next, we would explain that such a classifier is equivalent to an ensemble classifier under some reasonable assumptions.

We start from the constraints w(α) ≥ 0, which cannot be directly considered in (P1).

Vapnik (1998) showed that even if we add a countably infinite number of constraints to (P1), infinitely many variables and constraints would be introduced to (P2). Then, the latter problem would still be difficult to solve.

One remedy is to assume that H is negation complete, that is,3 h ∈ H ⇔ (−h) ∈ H.

Then, every linear combination over H has an equivalent linear combination with only nonnegative weights. Negation completeness is usually a mild assumption for a reasonable H (Rätsch et al., 2002). Following this assumption, the classifier (5) can be interpreted as an ensemble classifier over H with an intercept term b. In fact, b can be viewed as the weight on a constant hypothesis c, which always predicts c(x) = 1 for all x ∈ X. We shall further add a mild assumption that H contains both c and (−c). Then, the classifier (5) or (1) is indeed equivalent to an ensemble classifier.

3. We use (−h) to denote the function (−h)(·) = −(h(·)).

1. Consider a training set $\{(x_i, y_i)\}_{i=1}^{N}$ and the hypothesis set $\mathcal{H}$, which is assumed to be negation complete and to contain a constant hypothesis.

2. Construct a kernel $K_{\mathcal{H}}$ according to Definition 1 with a proper embedding function $r$.

3. Choose proper parameters, such as the soft-margin parameter $C$.

4. Solve (P2) with $K_{\mathcal{H}}$ and obtain Lagrange multipliers $\lambda_i$ and the intercept term $b$.

5. Output the classifier
\[
g(x) = \operatorname{sign}\Bigl(\sum_{i=1}^{N} y_i \lambda_i K_{\mathcal{H}}(x_i, x) + b\Bigr),
\]
which is equivalent to some ensemble classifier over $\mathcal{H}$.

Algorithm 1: SVM-based framework for infinite ensemble learning

We summarize our framework in Algorithm 1. The framework shall generally inherit the profound performance of SVM. Most of the steps in the framework can be done by existing SVM implementations, and the hard part is mostly in obtaining the kernel KH. In the next sections, we derive some concrete instances using different base hypothesis sets.
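As a concrete illustration, the steps of Algorithm 1 map directly onto a standard SVM solver once the kernel can be evaluated. The sketch below (not the authors' implementation; the data are synthetic and the use of scikit-learn's SVC with a precomputed Gram matrix is an assumption) uses the simplified stump kernel of Section 4 as the embodied kernel.

```python
# A minimal sketch of Algorithm 1 with a precomputed Gram matrix.
import numpy as np
from sklearn.svm import SVC

def stump_gram(X1, X2):
    """Simplified stump kernel: K(x, x') = -||x - x'||_1 (conditionally PSD)."""
    return -np.abs(X1[:, None, :] - X2[None, :, :]).sum(axis=2)

rng = np.random.RandomState(0)
X_train, y_train = rng.rand(80, 3), rng.choice([-1, 1], size=80)
X_test = rng.rand(20, 3)

# Steps 2-4: build the kernel, choose C, and solve the dual problem (P2).
clf = SVC(kernel="precomputed", C=1.0)
clf.fit(stump_gram(X_train, X_train), y_train)

# Step 5: the decision function is an (implicit) infinite ensemble over H.
predictions = clf.predict(stump_gram(X_test, X_train))
```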

4. Stump Kernel

In this section, we present the stump kernel, which embodies infinitely many decision stumps. The decision stump $s_{q,d,\alpha}(x) = q \cdot \operatorname{sign}\bigl((x)_d - \alpha\bigr)$ works on the $d$-th element of $x$, and classifies $x$ according to $q \in \{-1, +1\}$ and the threshold $\alpha$ (Holte, 1993). It is widely used for ensemble learning because of its simplicity (Freund and Schapire, 1996).

4.1 Formulation and Properties

To construct the stump kernel, we consider the following set of decision stumps
\[
\mathcal{S} = \bigl\{ s_{q,d,\alpha_d} : q \in \{-1, +1\},\, d \in \{1, \ldots, D\},\, \alpha_d \in [L_d, R_d] \bigr\}.
\]
We also assume $\mathcal{X} \subseteq (L_1, R_1) \times (L_2, R_2) \times \cdots \times (L_D, R_D)$. Thus, the set $\mathcal{S}$ is negation complete and contains $s_{+1,1,L_1}$ as a constant hypothesis. The stump kernel $K_{\mathcal{S}}$ defined below can then be used in Algorithm 1 to obtain an infinite ensemble of decision stumps.

Definition 3 The stump kernel is $K_{\mathcal{S}}$ with $r(q, d, \alpha_d) = r_{\mathcal{S}} = \frac{1}{2}$,
\[
K_{\mathcal{S}}(x, x') = \Delta_{\mathcal{S}} - \sum_{d=1}^{D} \bigl|(x)_d - (x')_d\bigr| = \Delta_{\mathcal{S}} - \|x - x'\|_1,
\]
where $\Delta_{\mathcal{S}} = \frac{1}{2}\sum_{d=1}^{D}(R_d - L_d)$ is a constant.

Definition 3 is a concrete instance that follows Definition 1. The details of the derivation are shown in Appendix A. As we shall see further in Section 5, scaling $r_{\mathcal{S}}$ is equivalent to scaling the parameter $C$ in SVM. Thus, without loss of generality, we use $r_{\mathcal{S}} = \frac{1}{2}$ to obtain a cosmetically cleaner kernel function.

The validity of the stump kernel follows directly from Theorem 2 of the general framework. That is, the stump kernel is an inner product in a Hilbert space of some square integrable functions $\varphi(q, d, \alpha_d)$, and it produces a PSD Gram matrix for any set of input vectors $\{x_i\}_{i=1}^{N} \in \mathcal{X}^N$. Given the ranges $(L_d, R_d)$, the stump kernel is very simple to compute. Furthermore, the ranges are not even necessary in general, because dropping the constant $\Delta_{\mathcal{S}}$ does not affect the classifier obtained from SVM.
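A minimal sketch (illustrative only, not from the paper) of evaluating the stump kernel and its simplified form on a pair of input vectors, assuming the ranges $[L_d, R_d] = [0, 1]$ in each dimension:

```python
# Evaluating the stump kernel K_S and its simplified form on one pair of inputs.
import numpy as np

def stump_kernel(x, xp, L, R):
    """K_S(x, x') = Delta_S - ||x - x'||_1, with Delta_S = 0.5 * sum(R_d - L_d)."""
    delta_s = 0.5 * np.sum(R - L)
    return delta_s - np.abs(x - xp).sum()

def simplified_stump_kernel(x, xp):
    """K~_S(x, x') = -||x - x'||_1; equivalent to K_S for (P2) by Theorem 4."""
    return -np.abs(x - xp).sum()

x, xp = np.array([0.2, 0.7, 0.1]), np.array([0.5, 0.4, 0.9])
L, R = np.zeros(3), np.ones(3)
print(stump_kernel(x, xp, L, R))       # 1.5 - 1.4 = 0.1 (up to floating point)
print(simplified_stump_kernel(x, xp))  # -1.4
```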

Theorem 4 Solving (P2) with the stump kernel $K_{\mathcal{S}}$ is the same as solving (P2) with the simplified stump kernel $\tilde{K}_{\mathcal{S}}(x, x') = -\|x - x'\|_1$. That is, equivalent classifiers can be obtained from (1).

Proof We extend from the results of Berg et al. (1984) to show that $\tilde{K}_{\mathcal{S}}(x, x')$ is conditionally PSD (CPSD). In addition, because of the constraint $\sum_{i=1}^{N} y_i \lambda_i = 0$, a CPSD kernel $\tilde{K}(x, x')$ works exactly the same for (P2) as any PSD kernel of the form $\tilde{K}(x, x') + \Delta$, where $\Delta$ is a constant (Schölkopf and Smola, 2002). The proof follows with $\Delta = \Delta_{\mathcal{S}}$.

In fact, a kernel $\hat{K}(x, x') = \tilde{K}(x, x') + f(x) + f(x')$ with any mapping $f$ is equivalent to $\tilde{K}(x, x')$ for (P2) because of the constraint $\sum_{i=1}^{N} y_i \lambda_i = 0$. Now consider another kernel
\[
\hat{K}_{\mathcal{S}}(x, x') = \tilde{K}_{\mathcal{S}}(x, x') + \sum_{d=1}^{D} (x)_d + \sum_{d=1}^{D} (x')_d = 2\sum_{d=1}^{D} \min\bigl((x)_d, (x')_d\bigr).
\]
We see that $\hat{K}_{\mathcal{S}}$, $\tilde{K}_{\mathcal{S}}$, and $K_{\mathcal{S}}$ are equivalent for (P2). The former is called the histogram intersection kernel (up to a scale of 2) when the elements $(x)_d$ represent generalized histogram counts, and has been successfully used in image recognition applications (Barla et al., 2003; Boughorbel et al., 2005; Grauman and Darrell, 2005). The equivalence demonstrates the usefulness of the stump kernel on histogram-based features, which would be further discussed in Subsection 6.4. A remark here is that our proof for the PSD-ness of $K_{\mathcal{S}}$ comes directly from the framework, and hence is simpler and more straightforward than the proof of Boughorbel et al. (2005) for the PSD-ness of $\hat{K}_{\mathcal{S}}$.
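The identity behind this equivalence can be checked numerically. The following sketch (illustrative only, with random vectors standing in for histogram counts) verifies that the histogram intersection form equals the simplified stump kernel shifted by the per-example sums:

```python
# Check: 2 * sum_d min(x_d, x'_d) = -||x - x'||_1 + sum_d x_d + sum_d x'_d.
import numpy as np

rng = np.random.RandomState(1)
x, xp = rng.rand(8), rng.rand(8)   # e.g. (generalized) histogram counts

hist_intersection = 2 * np.minimum(x, xp).sum()
shifted_stump = -np.abs(x - xp).sum() + x.sum() + xp.sum()
assert np.isclose(hist_intersection, shifted_stump)
```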

The simplified stump kernel is simple to compute, yet useful in the sense of dichotomizing the training set, which comes from the following positive definite (PD) property.

Theorem 5 (Lin, 2005) Consider training input vectors $\{x_i\}_{i=1}^{N} \in \mathcal{X}^N$. If there exists a dimension $d$ such that $(x_i)_d \ne (x_j)_d$ for all $i \ne j$, the Gram matrix of $K_{\mathcal{S}}$ is PD.

The PD-ness of the Gram matrix is directly connected to the classification capacity of the SVM classifiers. Chang and Lin (2001b) showed that when the Gram matrix of the kernel is PD, a hard-margin SVM with such a kernel can always dichotomize the training set perfectly. Keerthi and Lin (2003) then applied the result to show that SVM with the popular Gaussian-RBF kernel $K(x, x') = \exp\bigl(-\gamma \|x - x'\|_2^2\bigr)$ can always dichotomize the training set when $C \to \infty$. We obtain a similar theorem for the stump kernel.

Figure 1: The XOR data set

Theorem 6 Under the assumption of Theorem 5, there exists some $C^* > 0$ such that for all $C \ge C^*$, SVM with $K_{\mathcal{S}}$ can always dichotomize the training set $\{(x_i, y_i)\}_{i=1}^{N}$.

We make two remarks here. First, although the assumption of Theorem 6 is mild in practice, there are still some data sets that do not have this property. An example is the famous XOR data set (Figure 1). We can see that every possible decision stump makes 50% of errors on the training input vectors. Thus, AdaBoost and LPBoost would terminate with one bad decision stump in the ensemble. Similarly, SVM with the stump kernel cannot dichotomize this training set perfectly, regardless of the choice of $C$. Such a problem is inherent in any ensemble model that combines decision stumps, because the model belongs to the family of generalized additive models (Hastie and Tibshirani, 1990; Hastie et al., 2001), and hence cannot approximate non-additive target functions well.

Second, although Theorem 6 indicates how the stump kernel can be used to dichotomize the training set perfectly, the classifier obtained usually overfits to noise (Keerthi and Lin, 2003). For the Gaussian-RBF kernel, it has been known that SVM with reasonable parameter selection provides suitable regularization and achieves good generalization performance even in the presence of noise (Keerthi and Lin, 2003; Hsu et al., 2003). We observe similar experimental results for the stump kernel (see Section 7).

4.2 Averaging Ambiguous Stumps

We have discussed in Subsection 2.2 that the set of hypotheses can be partitioned into groups and traditional ensemble learning algorithms can only pick a few representatives within each group. Our framework acts in a different way: the $\ell_2$-norm objective function of SVM leads to an optimal solution that combines all the predictions within each group.

This property is formalized in the following theorem.

Theorem 7 Consider two ambiguous $h_\alpha, h_\beta \in \mathcal{H}$. If the kernel $K_{\mathcal{H}}$ is used in Algorithm 1, the optimal $w$ of (P5) satisfies $\frac{w(\alpha)}{r(\alpha)} = \frac{w(\beta)}{r(\beta)}$.

Proof The optimality condition between (P1) and (P2) leads to
\[
\frac{w(\alpha)}{r(\alpha)} = \sum_{i=1}^{N} \lambda_i y_i h_\alpha(x_i) = \sum_{i=1}^{N} \lambda_i y_i h_\beta(x_i) = \frac{w(\beta)}{r(\beta)}.
\]

If w(α) is nonzero, w(β) would also be nonzero, which means both hα and hβ are included in the ensemble. As a consequence, for each group of mutually ambiguous hypotheses, our framework considers the average prediction of all hypotheses as the consensus output.

The averaging process constructs a smooth representative for each group. In the following theorem, we demonstrate this behavior with the stump kernel, and show how the decision stumps group together in the final ensemble classifier.

Theorem 8 Define $(\tilde{x})_{d,a}$ as the $a$-th smallest value in $\{(x_i)_d\}_{i=1}^{N}$, and $A_d$ as the number of different $(\tilde{x})_{d,a}$. Let $(\tilde{x})_{d,0} = L_d$, $(\tilde{x})_{d,(A_d+1)} = R_d$, and
\[
\hat{s}_{q,d,a}(x) = q \cdot
\begin{cases}
+1, & \text{when } (x)_d \ge (\tilde{x})_{d,a+1}; \\
-1, & \text{when } (x)_d \le (\tilde{x})_{d,a}; \\
\dfrac{2(x)_d - (\tilde{x})_{d,a} - (\tilde{x})_{d,a+1}}{(\tilde{x})_{d,a+1} - (\tilde{x})_{d,a}}, & \text{otherwise}.
\end{cases}
\]
Then, for $\hat{r}(q, d, a) = \frac{1}{2}\sqrt{(\tilde{x})_{d,a+1} - (\tilde{x})_{d,a}}$,
\[
K_{\mathcal{S}}(x_i, x) = \sum_{q \in \{-1,+1\}} \sum_{d=1}^{D} \sum_{a=0}^{A_d} \hat{r}^2(q, d, a)\, \hat{s}_{q,d,a}(x_i)\, \hat{s}_{q,d,a}(x).
\]

Proof First, for any fixed $q$ and $d$, a simple integration shows that
\[
\int_{(\tilde{x})_{d,a}}^{(\tilde{x})_{d,a+1}} s_{q,d,\alpha}(x)\, d\alpha = \bigl((\tilde{x})_{d,a+1} - (\tilde{x})_{d,a}\bigr)\, \hat{s}_{q,d,a}(x).
\]
In addition, note that for all $\alpha \in \bigl((\tilde{x})_{d,a}, (\tilde{x})_{d,a+1}\bigr)$, $\hat{s}_{q,d,a}(x_i) = s_{q,d,\alpha}(x_i)$. Thus,
\[
\begin{aligned}
\int_{L_d}^{R_d} \bigl(r(q,d,\alpha)\, s_{q,d,\alpha}(x_i)\bigr)\bigl(r(q,d,\alpha)\, s_{q,d,\alpha}(x)\bigr)\, d\alpha
&= \sum_{a=0}^{A_d} \int_{(\tilde{x})_{d,a}}^{(\tilde{x})_{d,a+1}} \Bigl(\tfrac{1}{2}\, s_{q,d,\alpha}(x_i)\Bigr)\Bigl(\tfrac{1}{2}\, s_{q,d,\alpha}(x)\Bigr)\, d\alpha \\
&= \sum_{a=0}^{A_d} \tfrac{1}{4}\, \hat{s}_{q,d,a}(x_i) \int_{(\tilde{x})_{d,a}}^{(\tilde{x})_{d,a+1}} s_{q,d,\alpha}(x)\, d\alpha \\
&= \sum_{a=0}^{A_d} \tfrac{1}{4} \bigl((\tilde{x})_{d,a+1} - (\tilde{x})_{d,a}\bigr)\, \hat{s}_{q,d,a}(x_i)\, \hat{s}_{q,d,a}(x).
\end{aligned}
\]
The theorem can be proved by summing over all $q$ and $d$.

As shown in Figure 2, the function $\hat{s}_{q,d,a}$ is a smoother variant of the decision stump. Theorem 8 indicates that the infinite ensemble of decision stumps produced by our framework is equivalent to a finite ensemble of data-dependent and smoother variants. Another view of $\hat{s}_{q,d,a}$ is that they are continuous piecewise linear functions (order-2 splines) with knots defined on the training features (Hastie et al., 2001). Then, Theorem 8 indicates that an infinite ensemble of decision stumps can be obtained by fitting an additive model of finite size using these special splines as the bases. Note that although the fitting problem is of finite size, the number of possible splines can grow as large as $O(ND)$, which can sometimes be too large for iterative algorithms such as backfitting (Hastie et al., 2001). On the other hand, our SVM-based framework with the stump kernel can be thought of as a route to solve this special spline fitting problem efficiently via the kernel trick.

Figure 2: The averaged stump and the middle stump: (a) a group of ambiguous decision stumps $s_{q,d,\alpha_d}$ with $\alpha_d \in \bigl((\tilde{x})_{d,a}, (\tilde{x})_{d,a+1}\bigr)$; (b) SVM-based infinite ensemble learning uses the consensus: the averaged stump $\hat{s}_{q,d,a}$; (c) base learners for AdaBoost and LPBoost usually only consider the middle stump $m_{q,d,a}$.

As shown in the proof of Theorem 8, the averaged stump $\hat{s}_{q,d,a}$ represents the group of ambiguous decision stumps with $\alpha_d \in \bigl((\tilde{x})_{d,a}, (\tilde{x})_{d,a+1}\bigr)$. When the group is larger, $\hat{s}_{q,d,a}$ becomes smoother. Traditional ensemble learning algorithms like AdaBoost or LPBoost rely on a base learner to choose one decision stump as the only representative within each group, and the base learner usually returns the middle stump $m_{q,d,a}$. As shown in Figure 2, the threshold of the middle stump is at the mean of $(\tilde{x})_{d,a}$ and $(\tilde{x})_{d,a+1}$. Our framework, on the other hand, enjoys a smoother decision by averaging over more decision stumps. Even though each decision stump only has an infinitesimal hypothesis weight, the averaged stump $\hat{s}_{q,d,a}$ has a concrete weight in the ensemble.
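The finite-sum representation in Theorem 8 can be verified numerically. The sketch below (illustrative only, one-dimensional, with hypothetical training values and an assumed range $[0, 1]$) compares the closed form $\Delta_{\mathcal{S}} - |x_i - x|$ with the sum over averaged stumps:

```python
# Numerical sanity check of Theorem 8 in one dimension.
import numpy as np

train = np.array([0.1, 0.4, 0.4, 0.8])   # training values in dimension d
L, R = 0.0, 1.0                          # assumed range [L_d, R_d]
knots = np.concatenate(([L], np.unique(train), [R]))   # (x~)_{d,0..A_d+1}

def averaged_stump(x, a, q=+1):
    lo, hi = knots[a], knots[a + 1]
    if x >= hi:
        return q * 1.0
    if x <= lo:
        return -q * 1.0
    return q * (2 * x - lo - hi) / (hi - lo)

def K_S_finite_sum(xi, x):
    total = 0.0
    for q in (-1, +1):
        for a in range(len(knots) - 1):
            r_hat_sq = 0.25 * (knots[a + 1] - knots[a])
            total += r_hat_sq * averaged_stump(xi, a, q) * averaged_stump(x, a, q)
    return total

def K_S_closed_form(xi, x):
    return 0.5 * (R - L) - abs(xi - x)   # Delta_S - ||x_i - x||_1 in 1-D

xi, x = train[0], 0.65                   # x_i must be a training value
print(K_S_finite_sum(xi, x), K_S_closed_form(xi, x))   # should match (up to floating point)
```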

5. Perceptron Kernel

In this section, we extend the stump kernel to the perceptron kernel, which embodies infinitely many perceptrons. A perceptron is a linear threshold classifier of the form
\[
p_{\theta,\alpha}(x) = \operatorname{sign}\bigl(\theta^T x - \alpha\bigr).
\]
It is a basic theoretical model for a neuron, and is very important for building neural networks (Haykin, 1999).

To construct the perceptron kernel, we consider the following set of perceptrons
\[
\mathcal{P} = \bigl\{ p_{\theta,\alpha} : \theta \in \mathbb{R}^D,\, \|\theta\|_2 = 1,\, \alpha \in [-R, R] \bigr\}.
\]
We assume that $\mathcal{X}$ is within the interior of $\mathcal{B}(R)$, where $\mathcal{B}(R)$ is a ball of radius $R$ centered at the origin in $\mathbb{R}^D$. Then, the set $\mathcal{P}$ is negation complete, and contains a constant hypothesis $p_{e_1,-R}$, where $e_1 = (1, 0, \ldots, 0)^T$. Thus, the perceptron kernel $K_{\mathcal{P}}$ defined below can be used in Algorithm 1 to obtain an infinite ensemble of perceptrons.

Definition 9 Let
\[
\Theta_D = \int_{\|\theta\|_2 = 1} d\theta, \qquad \Xi_D = \int_{\|\theta\|_2 = 1} \bigl|\cos\bigl(\operatorname{angle}\langle \theta, e_1 \rangle\bigr)\bigr|\, d\theta,
\]
where the operator $\operatorname{angle}\langle \cdot, \cdot \rangle$ is the angle between two vectors, and the integrals are calculated with uniform measure on the surface $\|\theta\|_2 = 1$. The perceptron kernel is $K_{\mathcal{P}}$ with $r(\theta, \alpha) = r_{\mathcal{P}}$,
\[
K_{\mathcal{P}}(x, x') = \Delta_{\mathcal{P}} - \|x - x'\|_2,
\]
where the constants $r_{\mathcal{P}} = (2\Xi_D)^{-\frac{1}{2}}$ and $\Delta_{\mathcal{P}} = \Theta_D \Xi_D^{-1} R$.

The details are shown in Appendix A. With the perceptron kernel, we can construct an infinite ensemble of perceptrons. Such an ensemble is equivalent to a neural network with one hidden layer, infinitely many hidden neurons, and the hard-threshold activation functions. Williams (1998) built an infinite neural network with either the sigmoidal or the Gaussian activation function through computing the corresponding covariance function for GP models. Analogously, our approach returns an infinite neural network with hard-threshold activation functions (ensemble of perceptrons) through computing the perceptron kernel for SVM. Williams (1998) mentioned that "Paradoxically, it may be easier to carry out Bayesian prediction with infinite networks rather than finite ones." Similar claims can be made with ensemble learning.

The perceptron kernel shares many similar properties to the stump kernel. First, the constant ∆P can also be dropped, as formalized below.

Theorem 10 Solving (P2) with the simplified perceptron kernel $\tilde{K}_{\mathcal{P}}(x, x') = -\|x - x'\|_2$ is the same as solving (P2) with $K_{\mathcal{P}}(x, x')$.

Second, SVM with the perceptron kernel can also dichotomize the training set perfectly, which comes from the usefulness of the simplified perceptron kernel $\tilde{K}_{\mathcal{P}}$ in interpolation.

Theorem 11 (Micchelli, 1986) Consider input vectors $\{x_i\}_{i=1}^{N} \in \mathcal{X}^N$, and the perceptron kernel $K_{\mathcal{P}}$ in Definition 9. If $x_i \ne x_j$ for all $i \ne j$, then the Gram matrix of $K_{\mathcal{P}}$ is PD.

Then, similar to Theorem 6, we get the following result.

Theorem 12 If $x_i \ne x_j$ for all $i \ne j$, there exists some $C^* > 0$ such that for all $C \ge C^*$, SVM with $K_{\mathcal{P}}$ can always dichotomize the training set $\{(x_i, y_i)\}_{i=1}^{N}$.

Another important property, called scale-invariance, accompanies the simplified perceptron kernel, which was also named the triangular kernel by Fleuret and Sahbi (2003). They proved that when the kernel is used in the hard-margin SVM, scaling all training input vectors $x_i$ by some positive $\gamma$ does not change the optimal solution.

In fact, in the soft-margin SVM, a well-known result is that scaling the Gram matrix $K$ by some $\gamma > 0$ is equivalent to scaling $C$ by $\gamma$ in (P2). Because the simplified perceptron kernel $\tilde{K}_{\mathcal{P}}$ satisfies $\gamma \tilde{K}_{\mathcal{P}}(x, x') = \tilde{K}_{\mathcal{P}}(\gamma x, \gamma x')$, the effect of scaling training examples can be equivalently performed with the parameter selection step on $C$. That is, when $C$ is selected reasonably, there is no need to explicitly have a scaling parameter $\gamma$.

Recall that we construct the perceptron kernel (and the stump kernel) with an embedding constant $r_{\mathcal{P}}$ (and $r_{\mathcal{S}}$), and from Definition 1, multiplying the constant by $\sqrt{\gamma} > 0$ is equivalent to scaling the Gram matrix $K$ by $\gamma$. Thus, when $C$ is selected reasonably, there is also no need to explicitly try different $r_{\mathcal{P}}$ or $r_{\mathcal{S}}$ for these two kernels. We will further discuss the benefits of this property in Subsection 6.4.
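The scale-invariance above can be checked directly. The following sketch (illustrative only, with synthetic data) computes the simplified perceptron kernel and confirms that scaling the inputs by $\gamma$ scales the Gram matrix by $\gamma$:

```python
# The simplified perceptron kernel is the negative l2 distance, and
# gamma * K~_P(x, x') = K~_P(gamma * x, gamma * x') for gamma > 0.
import numpy as np

def simplified_perceptron_gram(X1, X2):
    """K~_P(x, x') = -||x - x'||_2."""
    diff = X1[:, None, :] - X2[None, :, :]
    return -np.sqrt((diff ** 2).sum(axis=2))

rng = np.random.RandomState(2)
X = rng.rand(30, 4)
gamma = 3.7

K = simplified_perceptron_gram(X, X)
K_scaled = simplified_perceptron_gram(gamma * X, gamma * X)
assert np.allclose(gamma * K, K_scaled)
```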

6. Laplacian-RBF Kernel

In the previous sections, we applied Definition 1 on some simple base hypothesis sets. Next, we show how complex hypothesis sets can also be embedded in a kernel by suitably combining the kernels that embody simpler sets. We will introduce two useful tools: summation and multiplication. The tools would eventually allow us to embed infinitely many decision trees in a kernel. Interestingly, the kernel obtained is equivalent to the well-known Laplacian-RBF kernel in some parameters.

6.1 Summation: Embedding Multiple Sets of Hypotheses

Summation can be used to embed multiple sets of hypotheses altogether. For example, given kernels $K_{\mathcal{H}_1}$ and $K_{\mathcal{H}_2}$, their summation
\[
K(x, x') = K_{\mathcal{H}_1}(x, x') + K_{\mathcal{H}_2}(x, x')
\]
embodies both $\mathcal{H}_1$ and $\mathcal{H}_2$. In other words, if we use $K(x, x')$ in Algorithm 1, we could obtain an ensemble classifier over $\mathcal{H}_1 \cup \mathcal{H}_2$ when the union is negation complete and contains a constant hypothesis.

In traditional ensemble learning, when multiple sets of hypotheses are considered altogether, it is usually necessary to call a base learner for each set. On the other hand, our framework only requires a simple summation on the kernel evaluations. In fact, as shown in the next theorem, our framework can be applied to work with any countable sets of hypotheses, which may not be an easy task for traditional ensemble learning algorithms.

Theorem 13 Assume that the kernels $K_{\mathcal{H}_1}, \ldots, K_{\mathcal{H}_J}$ are defined for some $J \in \mathbb{N} \cup \{\infty\}$ with sets of hypotheses $\mathcal{H}_1, \ldots, \mathcal{H}_J$, respectively. Then, let
\[
K(x, x') = \sum_{j=1}^{J} K_{\mathcal{H}_j}(x, x').
\]
If $K(x, x')$ exists for all $x, x' \in \mathcal{X}$, and $\mathcal{H} = \bigcup_{j=1}^{J} \mathcal{H}_j$ is negation complete and contains a constant hypothesis, Algorithm 1 using $K(x, x')$ outputs an ensemble classifier over $\mathcal{H}$.

Proof The theorem comes from the following result in mathematical analysis: any countable direct sum over Hilbert spaces is a Hilbert space (Reed and Simon, 1980, Example 5). Lin (2005, Theorem 6) showed the details of the proof.

A remark on Theorem 13 is that we do not intend to define a kernel with $\mathcal{H}$ directly. Otherwise we need to choose a suitable $\mathcal{C}$ and $r$ first, which may not be an easy task for such a complex hypothesis set. Using the summation of the kernels, on the other hand, allows us to obtain an ensemble classifier over the full union with less effort.
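In practice, the summation is literally a sum of Gram matrices. The sketch below (illustrative only, with synthetic data; it uses the simplified kernels, which differ from the embodied ones only by constants) combines the stump and perceptron kernels before calling an SVM solver:

```python
# Theorem 13 in practice: summing Gram matrices yields a kernel over the union
# of the hypothesis sets, usable as a precomputed kernel.
import numpy as np
from sklearn.svm import SVC

def stump_gram(X1, X2):          # simplified stump kernel, Section 4
    return -np.abs(X1[:, None, :] - X2[None, :, :]).sum(axis=2)

def perceptron_gram(X1, X2):     # simplified perceptron kernel, Section 5
    diff = X1[:, None, :] - X2[None, :, :]
    return -np.sqrt((diff ** 2).sum(axis=2))

rng = np.random.RandomState(3)
X, y = rng.rand(60, 5), rng.choice([-1, 1], size=60)

K_sum = stump_gram(X, X) + perceptron_gram(X, X)   # embodies stumps and perceptrons
clf = SVC(kernel="precomputed", C=1.0).fit(K_sum, y)
```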

6.2 Multiplication: Performing Logical Combination of Hypotheses

It is known that we can combine two kernels by point-wise multiplication to form a new kernel (Schölkopf and Smola, 2002). When the two kernels are associated with base hypothesis sets, a natural question is: what hypothesis set is embedded in the new kernel?

Next, let output +1 represent logic TRUE and −1 represent logic FALSE. We show that multiplication can be used to perform common logical combinations on the hypotheses.

Theorem 14 For two sets of hypotheses $\mathcal{H}_1 = \{h_\alpha : \alpha \in \mathcal{C}_1\}$ and $\mathcal{H}_2 = \{h_\beta : \beta \in \mathcal{C}_2\}$, define
\[
\mathcal{H} = \bigl\{ h_{\alpha,\beta} : h_{\alpha,\beta}(x) = -h_\alpha(x) \cdot h_\beta(x),\ \alpha \in \mathcal{C}_1,\ \beta \in \mathcal{C}_2 \bigr\}.
\]
In addition, let $r(\alpha, \beta) = r_1(\alpha)\, r_2(\beta)$. Then,
\[
K_{\mathcal{H},r}(x, x') = K_{\mathcal{H}_1,r_1}(x, x') \cdot K_{\mathcal{H}_2,r_2}(x, x') \quad \text{for all } x, x' \in \mathcal{X}.
\]

The proof simply follows from Definition 1. Note that when representing logic, the combined hypothesis $h_{\alpha,\beta}$ is the XOR operation on $h_\alpha$ and $h_\beta$. More complicated results about other operations can be introduced under a mild assumption called neutrality.

Definition 15 A set of hypotheses $\mathcal{H} = \{h_\alpha : \alpha \in \mathcal{C}\}$ is neutral to $\mathcal{X}$ with a given $r$ if and only if for all $x \in \mathcal{X}$, $\int_{\alpha \in \mathcal{C}} h_\alpha(x)\, r^2(\alpha)\, d\alpha = 0$.

Note that for a negation complete set H, neutrality is usually a mild assumption (e.g., by assigning the same r for hα and −hα). We can easily verify that the set of decision stumps in Definition 3 and the set of perceptrons in Definition 9 are both neutral.

Theorem 16 For two sets of hypotheses $\mathcal{H}_1 = \{h_\alpha : \alpha \in \mathcal{C}_1\}$ and $\mathcal{H}_2 = \{h_\beta : \beta \in \mathcal{C}_2\}$, define
\[
\mathcal{H} = \bigl\{ h_{q,\alpha,\beta} : h_{q,\alpha,\beta}(x) = q \cdot \min\bigl(h_\alpha(x), h_\beta(x)\bigr),\ \alpha \in \mathcal{C}_1,\ \beta \in \mathcal{C}_2,\ q \in \{-1, +1\} \bigr\}.
\]
Assume that $\mathcal{H}_1$ and $\mathcal{H}_2$ are neutral with $r_1$ and $r_2$, respectively, and both integrals
\[
\Delta_1 = \int_{\alpha \in \mathcal{C}_1} r_1^2(\alpha)\, d\alpha, \qquad \Delta_2 = \int_{\beta \in \mathcal{C}_2} r_2^2(\beta)\, d\beta
\]
are finite. In addition, let $r(q, \alpha, \beta) = \sqrt{2}\, r_1(\alpha)\, r_2(\beta)$. Then,
\[
K_{\mathcal{H},r}(x, x') = \bigl(K_{\mathcal{H}_1,r_1}(x, x') + \Delta_1\bigr) \cdot \bigl(K_{\mathcal{H}_2,r_2}(x, x') + \Delta_2\bigr) \quad \text{for all } x, x' \in \mathcal{X}.
\]
Furthermore, $\mathcal{H}$ is neutral to $\mathcal{X}$ with $r$.

Proof Because $h_\alpha(x), h_\beta(x) \in \{-1, +1\}$,
\[
h_{+1,\alpha,\beta}(x) = \tfrac{1}{2}\bigl(h_\alpha(x)\, h_\beta(x) + h_\alpha(x) + h_\beta(x) - 1\bigr).
\]
Then,
\[
\begin{aligned}
K_{\mathcal{H},r}(x, x')
&= 2 \int\!\!\int h_{+1,\alpha,\beta}(x)\, h_{+1,\alpha,\beta}(x')\, r^2(\alpha, \beta)\, d\beta\, d\alpha \\
&= \frac{1}{2} \int\!\!\int \bigl(h_\alpha(x) h_\beta(x) + h_\alpha(x) + h_\beta(x) - 1\bigr)\bigl(h_\alpha(x') h_\beta(x') + h_\alpha(x') + h_\beta(x') - 1\bigr)\, r^2(\alpha, \beta)\, d\beta\, d\alpha \\
&= \int\!\!\int \bigl(h_\alpha(x) h_\beta(x) h_\alpha(x') h_\beta(x') + h_\alpha(x) h_\alpha(x') + h_\beta(x) h_\beta(x') + 1\bigr)\, r_1^2(\alpha)\, r_2^2(\beta)\, d\beta\, d\alpha \qquad (6) \\
&= \bigl(K_{\mathcal{H}_1,r_1}(x, x') + \Delta_1\bigr)\bigl(K_{\mathcal{H}_2,r_2}(x, x') + \Delta_2\bigr).
\end{aligned}
\]
Note that (6) comes from the neutrality assumption, which implies that during integration, the cross-terms like
\[
\int\!\!\int h_\alpha(x)\, h_\beta(x')\, r_1^2(\alpha)\, r_2^2(\beta)\, d\alpha\, d\beta
\]
are all 0. Neutrality of $\mathcal{H}$ follows from the symmetry in $q$.

The arithmetic operation (+1·min) is equivalent to the AND operation when the outputs represent logic, and hence (−1 · min) represents the NAND operation. If H1 and H2 are negation complete, the NOT operation is implicit in the original sets, and hence OR can be equivalently performed through OR(a, b) = NAND(NOT(a), NOT(b)).

6.3 Stump Region Kernel, Decision Tree Kernel, and Laplacian-RBF Kernel

Next, we use the stump kernel to demonstrate the usefulness of summation and multiplication. When $\mathcal{H}_1 = \mathcal{H}_2 = \mathcal{S}$, the resulting $K_{\mathcal{H}}$ from Theorem 16 embodies AND/OR combinations of two decision stumps in $\mathcal{S}$. Extending this concept, we get the following new kernels.

Definition 17 The $L$-level stump region kernel $K_{\mathcal{T}_L}$ is recursively defined by
\[
\begin{aligned}
K_{\mathcal{T}_1}(x, x') &= K_{\mathcal{S}}(x, x') + \Delta_{\mathcal{S}}, \qquad \Delta_1 = 2\Delta_{\mathcal{S}}, \\
K_{\mathcal{T}_{L+1}}(x, x') &= \bigl(K_{\mathcal{T}_L}(x, x') + \Delta_L\bigr)\bigl(K_{\mathcal{S}}(x, x') + \Delta_{\mathcal{S}}\bigr), \qquad \Delta_{L+1} = 2\Delta_L \Delta_{\mathcal{S}} \quad \text{for } L \in \mathbb{N}.
\end{aligned}
\]

If we construct a kernel from $\{c, -c\}$ with $r = \sqrt{\tfrac{1}{2}\Delta_{\mathcal{S}}}$ on each hypothesis, we can see that the constant $\Delta_{\mathcal{S}}$ is also a neutral kernel. Since neutrality is preserved by summation, the kernel $K_{\mathcal{T}_1}$ is neutral as well. By repeatedly applying Theorem 16 and maintaining $\Delta_L$ as the constant associated with $\mathcal{T}_L$, we see that $K_{\mathcal{T}_L}$ embodies all possible AND/OR combinations of $L$ decision stumps in $\mathcal{S}$. We call these hypotheses the $L$-level stump regions.

Note that we can solve the recurrence and get
\[
K_{\mathcal{T}_L}(x, x') = 2^L \Delta_{\mathcal{S}}^{L} \sum_{\ell=1}^{L} \left( \frac{K_{\mathcal{S}}(x, x') + \Delta_{\mathcal{S}}}{2\Delta_{\mathcal{S}}} \right)^{\ell}, \quad \text{for } L \in \mathbb{N}.
\]


Then, by applying Theorem 13, we obtain an ensemble classifier over stump regions of any level.
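The recurrence and its closed-form solution can be evaluated directly, as in the following sketch (illustrative only, with the same hypothetical inputs and assumed ranges as in the earlier stump kernel example):

```python
# The L-level stump region kernel K_{T_L}: recurrence of Definition 17 vs. the
# closed-form solution, starting from K_S = Delta_S - ||x - x'||_1.
import numpy as np

def K_TL_recursive(K_S, delta_S, L):
    K, delta = K_S + delta_S, 2 * delta_S            # K_{T_1}, Delta_1
    for _ in range(L - 1):
        K, delta = (K + delta) * (K_S + delta_S), 2 * delta * delta_S
    return K

def K_TL_closed_form(K_S, delta_S, L):
    u = (K_S + delta_S) / (2 * delta_S)
    return (2 * delta_S) ** L * sum(u ** l for l in range(1, L + 1))

x, xp = np.array([0.2, 0.7, 0.1]), np.array([0.5, 0.4, 0.9])
delta_S = 0.5 * 3                                    # ranges assumed to be [0, 1]^3
K_S = delta_S - np.abs(x - xp).sum()
for L in (1, 2, 3):
    print(K_TL_recursive(K_S, delta_S, L), K_TL_closed_form(K_S, delta_S, L))
```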

Theorem 18 For $0 < \gamma < \frac{1}{2\Delta_{\mathcal{S}}}$, the infinite stump region (decision tree) kernel
\[
K_{\mathcal{T}}(x, x') = \exp\Bigl(\gamma \cdot \bigl(K_{\mathcal{S}}(x, x') + \Delta_{\mathcal{S}}\bigr)\Bigr) - 1
\]
can be applied to Algorithm 1 to obtain an ensemble classifier over $\mathcal{T} = \bigcup_{L=1}^{\infty} \mathcal{T}_L$.

Proof By Taylor's series expansion of $\exp(\epsilon)$ near $\epsilon = 0$, we get
\[
\begin{aligned}
K_{\mathcal{T}}(x, x') &= \sum_{L=1}^{\infty} \frac{\gamma^L}{L!} \bigl(K_{\mathcal{S}}(x, x') + \Delta_{\mathcal{S}}\bigr)^L \\
&= \gamma K_{\mathcal{T}_1}(x, x') + \sum_{L=2}^{\infty} \frac{\gamma^L}{L!} \Bigl(K_{\mathcal{T}_L}(x, x') - 2\Delta_{\mathcal{S}} K_{\mathcal{T}_{L-1}}(x, x')\Bigr) \\
&= \sum_{L=1}^{\infty} \frac{\gamma^L}{L!} K_{\mathcal{T}_L}(x, x') - \sum_{L=1}^{\infty} \frac{\gamma^{L+1}}{(L+1)!}\, 2\Delta_{\mathcal{S}}\, K_{\mathcal{T}_L}(x, x') \\
&= \sum_{L=1}^{\infty} \left( \frac{\gamma^L}{L!} - \frac{\gamma^{L+1}\, 2\Delta_{\mathcal{S}}}{(L+1)!} \right) K_{\mathcal{T}_L}(x, x').
\end{aligned}
\]
Note that $\tau_L = \frac{\gamma^L}{L!} - \frac{\gamma^{L+1}\, 2\Delta_{\mathcal{S}}}{(L+1)!} > 0$ for all $L \ge 1$ when $0 < \gamma < \frac{1}{2\Delta_{\mathcal{S}}}$. The desired result simply follows from Theorem 13 by scaling the $r$ functions of each $K_{\mathcal{T}_L}$ by $\sqrt{\tau_L}$.

The set of stump regions of any level contains all AND/OR combinations of decision stumps. It is not hard to see that every stump region can be represented by recursive axis-parallel partitions that output $\{-1, +1\}$, that is, a decision tree (Quinlan, 1986; Hastie et al., 2001). In addition, we can view the nodes of a decision tree as logic operations:

tree = OR(AND(root node condition, left), AND(NOT(root node condition), right)).

By recursively replacing each root node condition with a decision stump, we see that every decision tree can be represented as a stump region hypothesis. Thus, the set T that contains stump regions of any level is the same as the set of all possible decision trees, which leads to the name decision tree kernel.4

Decision trees are popular for ensemble learning, but traditional algorithms can only deal with trees of finite levels (Breiman, 1999; Dietterich, 2000). On the other hand, when the decision tree kernel KT is plugged into our framework, it allows us to actually build an infinite ensemble over decision trees of arbitrary levels.

4. We use the name decision tree kernel for KT in Theorem 18 because the kernel embodies an infinite number of decision tree “hypotheses” and can be used in our framework to construct an infinite ensemble of decision trees. As pointed out by a reviewer, however, the kernel is derived in a particular way, which makes the metric of the underlying feature space different from the metrics associated with common decision tree “algorithms.”

Note that the decision tree kernel $K_{\mathcal{T}}(x, x')$ is of the form
\[
\kappa_1 \exp\bigl(-\kappa_2 \|x - x'\|_1\bigr) + \kappa_3,
\]
where $\kappa_1, \kappa_2, \kappa_3$ are constants and $\kappa_1, \kappa_2$ are positive. We mentioned in Section 4 that scaling the kernel with $\kappa_1$ is equivalent to scaling the soft-margin parameter $C$ in SVM, and in Theorem 4 that dropping $\kappa_3$ does not affect the solution obtained from SVM. Then, the kernel $K_{\mathcal{T}}(x, x')$ is similar to the Laplacian-RBF kernel $K_{\mathcal{L}}(x, x') = \exp\bigl(-\gamma \|x - x'\|_1\bigr)$. This result is a novel interpretation of the Laplacian-RBF kernel: under suitable parameters, SVM with the Laplacian-RBF kernel allows us to obtain an infinite ensemble classifier over decision trees of any level.5
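The relation between the decision tree kernel and the Laplacian-RBF kernel can be made explicit numerically. The sketch below (illustrative only) checks that $K_{\mathcal{T}}$ equals $\kappa_1 K_{\mathcal{L}} - 1$ with $\kappa_1 = \exp(2\gamma\Delta_{\mathcal{S}})$:

```python
# K_T differs from the Laplacian-RBF kernel only by a positive scaling and an
# additive constant, both of which leave the SVM solution essentially unchanged.
import numpy as np

def laplacian_rbf(x, xp, gamma):
    """K_L(x, x') = exp(-gamma * ||x - x'||_1)."""
    return np.exp(-gamma * np.abs(x - xp).sum())

def decision_tree_kernel(x, xp, gamma, delta_S):
    """K_T(x, x') = exp(gamma * (K_S(x, x') + Delta_S)) - 1."""
    K_S = delta_S - np.abs(x - xp).sum()
    return np.exp(gamma * (K_S + delta_S)) - 1

x, xp = np.array([0.2, 0.7, 0.1]), np.array([0.5, 0.4, 0.9])
gamma, delta_S = 0.2, 1.5   # gamma chosen below 1 / (2 * delta_S)
kappa_1 = np.exp(2 * gamma * delta_S)
assert np.isclose(decision_tree_kernel(x, xp, gamma, delta_S),
                  kappa_1 * laplacian_rbf(x, xp, gamma) - 1)
```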

Not surprisingly, when all training input vectors $x_i$ are distinct (Micchelli, 1986; Baxter, 1991), the Gram matrix of $K_{\mathcal{L}}$ (and hence $K_{\mathcal{T}}$) is PD. Then, the Laplacian-RBF kernel and the decision tree kernel could be used to dichotomize the training set perfectly.

6.4 Discussion on Radial Basis Function Kernels

Note that the stump kernel, the perceptron kernel, the Laplacian-RBF kernel, and the Gaussian-RBF kernel are all radial basis functions. They can all be used to dichotomize the training set perfectly under mild conditions, while the first three connect to explanations from an ensemble perspective. Next, we compare two properties of these kernels, and discuss their use in SVM applications.

First, we can group these kernels by the distance metrics they use. The stump kernel and the Laplacian-RBF kernel deal with the ℓ1-norm distance between input vectors, while the others work on the ℓ2-norm distance. An interesting property of using the ℓ2-norm distance is the invariance to rotations. From the construction of the perceptron kernel, we can see how the rotation invariance is obtained from an ensemble point-of-view. The transformation vectors θ in perceptrons represent the rotation, and rotation invariance comes from embedding all possible θ uniformly in the kernel.

Some applications, however, may not desire rotation invariance. For example, when representing an image with color histograms, rotation could mix up the information in each color component. Chapelle et al. (1999) showed some successful results with the Laplacian-RBF kernel on this application. In Subsection 4.1, we have also discussed some image recognition applications using the histogram intersection kernel, which is equivalent to the stump kernel, on histogram-based features. Gene expression analysis, as demonstrated by Lin and Li (2005b), is another area in which the stump kernel could be helpful.

Second, we can group kernels by whether they are scale-invariant (see also Section 5).

The simplified stump kernel and the simplified perceptron kernel are scale-invariant, which means that $C$ is the only parameter that needs to be determined. On the other hand, different combinations of $(\gamma, C)$ need to be considered for the Gaussian-RBF kernel or the Laplacian-RBF kernel during parameter selection (Keerthi and Lin, 2003). Thus, SVM with the simplified stump kernel or the simplified perceptron kernel enjoys an advantage on speed during parameter selection. As we will see in Section 7.2, experimentally they perform similarly to the Gaussian-RBF kernel on many data sets. Thus, SVM applications that consider speed as an important factor may benefit from using the simplified stump kernel or the simplified perceptron kernel.

5. Note that the techniques in Theorem 18 can be coupled with Theorem 14 to show that the Laplacian-RBF kernel with any $\gamma > 0$ embodies XOR stump regions (a special type of decision tree) of any level. We emphasize the AND-OR stump regions here to connect better to general decision trees.

7. Experiments

We first compare our SVM-based infinite ensemble learning framework with AdaBoost and LPBoost using decision stumps, perceptrons, or decision trees as the base hypothesis set.

The simplified stump kernel (SVM-Stump), the simplified perceptron kernel (SVM-Perc), and the Laplacian-RBF kernel (SVM-Dec) are plugged into Algorithm 1 respectively. We also compare SVM-Stump, SVM-Perc, and SVM-Dec with SVM-Gauss, which is SVM with the Gaussian-RBF kernel.

The deterministic decision stump algorithm (Holte, 1993), the random coordinate descent perceptron algorithm (Li and Lin, 2007), and the C4.5 decision tree algorithm (Quinlan, 1986) are taken as base learners in AdaBoost and LPBoost for the corresponding base hypothesis set. For perceptrons, we use the RCD-bias setting with 200 epochs of training; for decision trees, we take the pruned tree with the default settings of C4.5. All base learners above have been shown to work reasonably well with boosting in literature (Freund and Schapire, 1996; Li and Lin, 2007).

We discussed in Subsection 4.2 that a common implementation of AdaBoost-Stump and LPBoost-Stump only chooses the middle stumps. For further comparison, we include all the middle stumps in a set $\mathcal{M}$, and construct a kernel $K_{\mathcal{M}}$ with $r = \frac{1}{2}$ according to Definition 1. Because $\mathcal{M}$ is a finite set, the integral in (4) becomes a summation when computed with the counting measure. We test our framework with this kernel, and call it SVM-Mid.
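One possible construction of such a finite kernel is sketched below (illustrative, not the authors' code; it is a simple variant that only places one middle stump between every pair of adjacent distinct training values in each dimension):

```python
# A finite middle-stump kernel: sum of weighted products of middle stumps,
# each stump weighted by r^2 = 1/4.
import numpy as np

def middle_stump_gram(X_train, X1, X2):
    """K_M(x, x') = sum_{q,d,a} (1/4) m_{q,d,a}(x) m_{q,d,a}(x')."""
    K = np.zeros((len(X1), len(X2)))
    for d in range(X_train.shape[1]):
        values = np.unique(X_train[:, d])
        thresholds = (values[:-1] + values[1:]) / 2      # midpoints of adjacent values
        for t in thresholds:
            m1 = np.sign(X1[:, d] - t)                   # middle stump with q = +1
            m2 = np.sign(X2[:, d] - t)
            # q = -1 gives the same product, hence the factor 2 * (1/4) = 1/2
            K += 0.5 * np.outer(m1, m2)
    return K

rng = np.random.RandomState(4)
X = rng.rand(50, 3)
K = middle_stump_gram(X, X, X)   # precomputed Gram matrix for SVM-Mid
```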

LIBSVM 2.8 (Chang and Lin, 2001a) is adopted as the soft-margin SVM solver, with a suggested procedure that selects a suitable parameter with a five-fold cross validation on the training set (Hsu et al., 2003). For SVM-Stump, SVM-Mid, and SVM-Perc, the parameter $\log_2 C$ is searched within $\{-17, -15, \ldots, 3\}$, and for SVM-Dec and SVM-Gauss, the parameters $(\log_2 \gamma, \log_2 C)$ are searched within $\{-15, -13, \ldots, 3\} \times \{-5, -3, \ldots, 15\}$. We use different search ranges for $\log_2 C$ because the numerical ranges of the kernels could be quite different. After the parameter selection procedure, a new model is trained using the whole training set, and the generalization ability is evaluated on an unseen test set.
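The parameter selection step can be sketched as follows (the grid for $\log_2 C$ comes from the text; the data, kernel choice, and use of scikit-learn's cross-validation utilities are illustrative assumptions, not the original LIBSVM scripts):

```python
# Five-fold cross-validation over log2(C) for a scale-invariant kernel,
# using a precomputed Gram matrix.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def stump_gram(X1, X2):
    return -np.abs(X1[:, None, :] - X2[None, :, :]).sum(axis=2)

rng = np.random.RandomState(5)
X, y = rng.rand(100, 6), rng.choice([-1, 1], size=100)

param_grid = {"C": [2.0 ** k for k in range(-17, 4, 2)]}   # log2(C) in {-17, -15, ..., 3}
search = GridSearchCV(SVC(kernel="precomputed"), param_grid, cv=5)
search.fit(stump_gram(X, X), y)
print(search.best_params_)
```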

For boosting algorithms, we conduct the parameter selection procedure similarly. The parameter $\log_2 C$ of LPBoost is also searched within $\{-17, -15, \ldots, 3\}$. For AdaBoost, the parameter $T$ is searched within $\{10, 20, \ldots, 1500\}$. Note that because LPBoost can be slow when the ensemble size is too large (Demiriz et al., 2002), we set a stopping criterion to generate at most 1000 columns (hypotheses) in order to obtain an ensemble within a reasonable amount of time.

The three artificial data sets from Breiman (1999) (twonorm, threenorm, and ringnorm) are generated with training set size 300 and test set size 3000. We create three more data sets (twonorm-n, threenorm-n, ringnorm-n), which contain mislabeling noise on 10% of the training examples, to test the performance of the algorithms on noisy data. We also use eight real-world data sets from the UCI repository (Hettich et al., 1998): australian, breast, german, heart, ionosphere, pima, sonar, and votes84. Their feature elements are scaled to [−1, 1]. We randomly pick 60% of the examples for training, and the rest for testing.

data set      number of training examples   number of test examples   number of features
twonorm       300                           3000                      20
twonorm-n     300                           3000                      20
threenorm     300                           3000                      20
threenorm-n   300                           3000                      20
ringnorm      300                           3000                      20
ringnorm-n    300                           3000                      20
australian    414                           276                       14
breast        409                           274                       10
german        600                           400                       24
heart         162                           108                       13
ionosphere    210                           141                       34
pima          460                           308                       8
sonar         124                           84                        60
votes84       261                           174                       16
a1a           1605                          30956                     123
splice        1000                          2175                      60
svmguide1     3089                          4000                      4
w1a           2477                          47272                     300

Table 1: Summarized information of the data sets used

For the data sets above, we compute the means and the standard errors of the results over 100 runs. In addition, four larger real-world data sets are used to test the validity of the framework for large-scale learning. They are a1a (Hettich et al., 1998; Platt, 1999), splice (Hettich et al., 1998), svmguide1 (Hsu et al., 2003), and w1a (Platt, 1999).6 Each of them comes with a benchmark test set, on which we report the results. Some information of the data sets used is summarized in Table 1.

7.1 Comparison of Ensemble Learning Algorithms

Tables 2, 3, and 4 show the test performance of several ensemble learning algorithms on different base hypothesis sets.7 We can see that SVM-Stump, SVM-Perc, and SVM-Dec are usually better than AdaBoost and LPBoost with the same base hypothesis set, especially for the cases of decision stumps and perceptrons. In noisy data sets, SVM-based infinite ensemble learning always significantly outperforms AdaBoost and LPBoost. These results demonstrate that it is beneficial to go from a finite ensemble to an infinite one with suitable regularization. When comparing the two boosting approaches, LPBoost is at best comparable to AdaBoost on a small number of the data sets, which suggests that the success of AdaBoost may not be fully attributed to its connection to (P3) or (P4).

6. These data sets are downloadable on the tools page of LIBSVM (Chang and Lin, 2001a).

7. For the first 14 rows of Tables 2, 3, 4, and 5, results that are as significant as the best ones are marked in bold; for the last 4 rows, the best results are marked in bold.
