One-sided Support Vector Regression for Multiclass Cost-sensitive Classification


Han-Hsing Tu r96139@csie.ntu.edu.tw

Hsuan-Tien Lin htlin@csie.ntu.edu.tw

Department of Computer Science and Information Engineering, National Taiwan University

Abstract

We propose a novel approach that reduces cost-sensitive classification to one-sided regression. The approach stores the cost information in the regression labels and encodes the minimum-cost prediction with the one-sided loss. The simple approach is accompanied by a solid theoretical guarantee of error transformation, and can be used to cast any one-sided regression method as a cost-sensitive classification algorithm. To validate the proposed reduction approach, we design a new cost-sensitive classification algorithm by coupling the approach with a variant of the support vector machine (SVM) for one-sided regression. The proposed algorithm can be viewed as a theoretically justified extension of the popular one-versus-all SVM. Experimental results demonstrate that the algorithm is not only superior to traditional one-versus-all SVM for cost-sensitive classification, but also better than many existing SVM-based cost-sensitive classification algorithms.

1. Introduction

Regular classification, which is a traditional and primary problem in machine learning, comes with a goal of minimizing the rate of misclassification errors during prediction. Many real-world applications, however, need different costs for different types of misclassification errors. For instance, let us look at a three-class classification problem of predicting a patient as {healthy, cold-infected, H1N1-infected}. Consider three different types of misclassification errors out of the six possibilities: (A) predicting a healthy patient as cold-infected; (B) predicting a healthy patient

Appearing in Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. Copyright 2010 by the author(s)/owner(s).

as H1N1-infected; (C) predicting an H1N1-infected patient as healthy. We see that (C) >> (B) > (A) in terms of the cost that the society pays. Many other applications in medical decision making and target marketing share similar needs, which can be formalized as the cost-sensitive classification problem. The problem is able to express any finite-choice and bounded-loss supervised learning problem (Beygelzimer et al., 2005), and thus has been attracting much research attention (Abe et al., 2004; Langford & Beygelzimer, 2005; Zhou & Liu, 2006; Beygelzimer et al., 2007).

While cost-sensitive classification is well-understood for the binary case (Zadrozny et al., 2003), the counterpart for the multiclass case is more difficult to analyze (Abe et al., 2004; Zhou & Liu, 2006) and will be the main focus of this paper. Many existing approaches for multiclass cost-sensitive classification are designed by reducing (heuristically or theoretically) the problem into other well-known problems in machine learning. For instance, the early MetaCost approach (Domingos, 1999) solved cost-sensitive classification by reducing it to a conditional probability estimation problem. Abe et al. (2004) proposed several approaches that reduce cost-sensitive classification to regular multiclass classification. There are also many approaches that reduce cost-sensitive classification to regular binary classification (Beygelzimer et al., 2005; Langford & Beygelzimer, 2005; Beygelzimer et al., 2007). Reduction-based approaches not only allow us to easily extend existing methods into solving cost-sensitive classification problems, but also broaden our understanding of the connections between cost-sensitive classification and other learning problems (Beygelzimer et al., 2005).

In this paper, we propose a novel reduction-based approach for cost-sensitive classification. Unlike existing approaches, however, we reduce cost-sensitive classification to a less-encountered problem: one-sided regression. Such a reduction is very simple but comes with solid theoretical properties. In particular, the reduction allows the total one-sided loss of a regressor to upper-bound the cost of the associated classifier. In other words, if a regressor achieves small one-sided loss on the reduced problem, the associated classifier would not suffer from much cost on the original cost-sensitive classification problem.

Although one-sided regression is not often seen in machine learning, we find that its regularized (and hyper-linear) form can be easily cast as a variant of the popular support vector machine (SVM, Vapnik, 1998). The variant will be named one-sided support vector regression (OSSVR). Similar to the usual SVM, OSSVR can solve both linear and non-linear one-sided regression via the kernel trick. By coupling OSSVR with our proposed reduction approach, we obtain a novel algorithm for cost-sensitive classification. Interestingly, the algorithm takes the common one-versus-all SVM (OVA-SVM, Hsu & Lin, 2002) as a special case, and is only a few lines different from OVA-SVM. That is, our proposed algorithm can be viewed as a simple and direct extension of OVA-SVM towards cost-sensitive classification. Experimental results demonstrate that the proposed algorithm is indeed useful for general cost-sensitive settings, and outperforms OVA-SVM on many data sets. In addition, when compared with other SVM-based algorithms, the proposed algorithm can often achieve the smallest average test cost, which makes it the leading SVM-based cost-sensitive classification algorithm.

The paper is organized as follows. In Section 2, we give a formal setup of the cost-sensitive classification problem. Then, in Section 3, we reduce cost-sensitive classification to one-sided regression and demonstrate its theoretical guarantees. OSSVR and its use for cost-sensitive classification are introduced in Section 4. Finally, we present the experimental results in Section 5 and conclude in Section 6.

2. Problem Statement

We start by introducing the regular classification problem before we move to the cost-sensitive classification one. In the regular classification problem, we seek a classifier that maps the input vector x to some discrete label y, where x is within an input space X ⊆ R^D and y is within a label space Y = {1, 2, . . . , K}. We assume that there is an unknown distribution D that generates examples (x, y) ∈ X × Y. Consider a training set S = {(x_n, y_n)}_{n=1}^{N}, where each training example (x_n, y_n) is drawn i.i.d. from D. Regular classification aims at using S to find a classifier g : X → Y that comes with a small E(g), where

E(h) ≡ E_{(x,y)∼D} ⟦y ≠ h(x)⟧

is the (expected) test error of a classifier h with respect to the distribution D.^1

The cost-sensitive classification problem extends regular classification by coupling a cost vector c ∈ R^K with every example (x, y). The k-th component c[k] of the cost vector denotes the price to be paid when predicting x as class k. With the additional cost information, we now assume an unknown distribution D_c that generates cost-sensitive examples (x, y, c) ∈ X × Y × R^K. Consider a cost-sensitive training set S_c = {(x_n, y_n, c_n)}_{n=1}^{N}, where each cost-sensitive training example (x_n, y_n, c_n) is drawn i.i.d. from D_c. Cost-sensitive classification aims at using S_c to find a classifier g : X → Y that comes with a small E_c(g), where

E_c(h) ≡ E_{(x,y,c)∼D_c} c[h(x)]

is the (expected) test cost of h with respect to D_c.

Note that the label y is actually not needed for calculating E_c(h). We keep the label in our setup to help illustrate the connection between cost-sensitive classification and regular classification. Naturally, we assume that c[y] = c_min = min_{1≤ℓ≤K} c[ℓ].

We will often consider the calibrated cost vector c̄, where c̄[k] ≡ c[k] − c_min for every k ∈ Y. Thus, c̄[y] = 0. Define the calibrated test cost of h as

Ē_c(h) ≡ E_{(x,y,c)∼D_c} c̄[h(x)] = E_c(h) − E_{(x,y,c)∼D_c} c_min.

Because the second term on the right-hand side is a constant that does not depend on h, finding a classifier g that comes with a small E_c(g) is equivalent to finding a classifier that comes with a small Ē_c(g).

We put two remarks on our setup above. First, the setup is based on example-dependent cost vectors c : Y → R rather than a class-dependent cost matrix C : Y × Y → R, where each entry C(y, k) denotes the price to be paid when predicting a class-y example as class k. The class-dependent setup allows one to use a complete picture of the cost information in algorithm design (Domingos, 1999; Zhou & Liu, 2006), but is not applicable when the cost varies in a stochastic environment. On the other hand, the example-dependent setup is more general (Abe et al., 2004), and includes the class-dependent setup as a special case by defining the cost vector in (x, y, c) as c[k] ≡ C(y, k) for every k ∈ Y. In view of generality, we focus on the example-dependent setup in this paper.

Secondly, regular classification can be viewed as a special case of cost-sensitive classification by replacing the cost information in c with a naïve (insensitive) cost matrix C_e(y, k) ≡ ⟦y ≠ k⟧. Thus, when applying a regular classification algorithm directly to general cost-sensitive classification problems, it is as if we are feeding the algorithm with inaccurate cost information, which intuitively may lead to unsatisfactory performance. We will see such results in Section 5.

^1 The boolean operation ⟦·⟧ is 1 when the inner condition is true, and 0 otherwise.
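As a minimal illustration of the first remark (a sketch only; the function name and 0-indexed class labels are assumptions, and the matrix values are those of the C_rot matrix used later in Section 5.1), the cost vector of each example is simply the y-th row of the class-dependent cost matrix:

```python
import numpy as np

# A class-dependent cost matrix C for K = 3 classes (0-indexed here):
# C[y, k] is the price of predicting a class-y example as class k.
C = np.array([[  0.,   1., 100.],
              [100.,   0.,   1.],
              [  1., 100.,   0.]])

def example_dependent_costs(y_labels, C):
    """Attach the y-th row of C to every example, giving the cost vectors c_n."""
    return C[np.asarray(y_labels)]

y = [0, 2, 1]                                  # labels of three examples
cost_vectors = example_dependent_costs(y, C)   # cost_vectors[n, k] = C[y_n, k]
```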

3. One-sided Regression for Cost-sensitive Classification

From the setup above, the value of each cost component c[k] carries an important piece of information. Recent approaches that reduce cost-sensitive classification to regular classification encode the cost information in the weights (importance) of the transformed classification examples. Some of the approaches lead to more promising theoretical results, such as Sensitive Error Correcting Output Codes (SECOC, Langford & Beygelzimer, 2005), Filter Tree (FT, Beygelzimer et al., 2007) and Weighted All-Pairs (WAP, Beygelzimer et al., 2005). Nevertheless, it has been shown that a large number of weighted classification examples are often required to store the cost information accurately (Abe et al., 2004; Langford & Beygelzimer, 2005), or the encoding structure and procedure can be quite complicated (Langford & Beygelzimer, 2005; Beygelzimer et al., 2005; 2007). Because of those caveats, the practical use of those algorithms has not been fully investigated.

To avoid the caveats of encoding the cost information in the weights, we place the cost information in the labels of regression examples instead. Such an approach emerged in the derivation steps of SECOC (Langford & Beygelzimer, 2005), but its direct use has not been thoroughly studied. Regression, like regular classification, is a widely-studied problem in machine learning. Rather than predicting the discrete label y ∈ Y with a classifier g, regression aims at using a regressor r to estimate the real-valued labels Y ∈ R. We propose to train a joint regressor r(x, k) that estimates the cost values c[k] directly. Intuitively, if we can obtain a regressor r(x, k) that estimates each c[k] perfectly for any cost-sensitive example (x, y, c), we can use the estimate to choose the best prediction

g_r(x) ≡ argmin_{1≤k≤K} r(x, k). (1)

What if the estimate r(x, k) cannot match the desired value c[k] perfectly? In the real world, it is indeed the common case that r(x, k) would be somewhat different from c[k], and the inexact r(x, k) may lead to misclassification (i.e. more costly prediction) in (1).

Figure 1. intuition behind one-sided regression

We illustrate such misclassification cases by Figure 1. Without loss of generality, assume that c is ordered such that c[1] ≤ c[2] ≤ · · · ≤ c[K]. We shall further assume that c[1] < c[2], and thus the correct prediction y = 1 is unique. Now suppose g_r(x) = 2, which means r(x, 2) ≤ r(x, k) for every k ∈ Y; more specifically, r(x, 2) ≤ r(x, 1). Define ∆_1 ≡ r(x, 1) − c[1] and ∆_2 ≡ c[2] − r(x, 2). Because c[1] < c[2] and r(x, 2) ≤ r(x, 1), the terms ∆_1 and ∆_2 cannot both be negative. Then, there are three possible cases.

1. At the top of Figure 1, ∆_1 ≥ 0 and ∆_2 ≥ 0. Then, c̄[2] = c[2] − c[1] ≤ ∆_1 + ∆_2.

2. At the middle of Figure 1, ∆_1 ≤ 0 and ∆_2 ≥ 0. Then, c̄[2] ≤ ∆_2.

3. At the bottom of Figure 1, ∆_1 ≥ 0 and ∆_2 ≤ 0. Then, c̄[2] ≤ ∆_1.

In all the above cases in which a misclassification g_r(x) = 2 happens, the calibrated cost c̄[2] is no larger than max(∆_1, 0) + max(∆_2, 0). This finding holds true even when we replace the number 2 with any k between 2 and K, and will be proved in Theorem 1. A conceptual explanation is as follows. There are two different kinds of cost components c[k]. If the component c[k] is the smallest within c (i.e., c[k] = c_min), it is acceptable and demanded to have an r(x, k) that is no more than c[k], because a smaller r(x, k) can only lead to a better prediction g_r(x). On the other hand, if c[k] > c_min, it is acceptable and demanded to have an r(x, k) that is no less than c[k]. If all the demands are satisfied, no cost would be incurred by g_r; otherwise the calibrated cost would be upper-bounded by the total deviations on the wrong "side." Thus, for any cost-sensitive example (x, y, c), we can define a special regression loss

ξ_k(r) ≡ max(∆_k(r), 0), where ∆_k(r) ≡ (2⟦c[k] = c_min⟧ − 1) · (r(x, k) − c[k]). (2)


When r(x, k) is on the correct side, ξ_k(r) = 0. Otherwise, ξ_k(r) represents the deviation between the estimate r(x, k) and the desired c[k]. We shall use the definitions to prove a formal statement that connects the cost paid by g_r with the loss of r.
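As a minimal illustration (a sketch only; the function name is illustrative), the one-sided loss of (2) can be evaluated for a single cost-sensitive example as follows:

```python
import numpy as np

def one_sided_losses(r_values, c):
    """xi_k(r) for one example: r_values[k] = r(x, k), c[k] = cost of class k."""
    c = np.asarray(c, dtype=float)
    r_values = np.asarray(r_values, dtype=float)
    z = np.where(c == c.min(), 1.0, -1.0)   # +1 on the minimum-cost classes
    delta = z * (r_values - c)              # Delta_k(r) in (2)
    return np.maximum(delta, 0.0)           # xi_k(r) = max(Delta_k(r), 0)

# Example with K = 3: the minimum-cost class should be under-estimated,
# the costly classes should be over-estimated.
c = [0.0, 1.0, 100.0]
r = [0.3, 1.5, 80.0]            # r(x, 1) overshoots by 0.3; r(x, 3) falls short by 20
print(one_sided_losses(r, c))   # -> [ 0.3  0.  20. ]
```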

Theorem 1 (per-example loss bound). For any cost-sensitive example (x, y, c),

c̄[g_r(x)] ≤ Σ_{k=1}^{K} ξ_k(r). (3)

Proof. Let ℓ = g_r(x). When c[ℓ] = c_min, the left-hand side is 0 while the right-hand side is non-negative because all ξ_k(r) ≥ 0 by definition.

On the other hand, when c[ℓ] > c_min = c[y], by the definition in (2),

ξ_ℓ(r) ≥ c[ℓ] − r(x, ℓ), (4)
ξ_y(r) ≥ r(x, y) − c[y]. (5)

Because ℓ = g_r(x),

r(x, y) − r(x, ℓ) ≥ 0. (6)

Combining (4), (5), and (6), we get

c̄[ℓ] ≤ ξ_ℓ(r) + ξ_y(r) ≤ Σ_{k=1}^{K} ξ_k(r),

where the last inequality holds because ξ_k(r) ≥ 0.

Theorem 1 says that for any given cost-sensitive example (x, y, c), if a regressor r(x, k) closely estimates c[k] under the specially designed linear one-sided loss ξ_k(r), the associated classifier g_r(x) only pays a small calibrated cost c̄[g_r(x)]. We can prove a similar theorem for the quadratic one-sided loss ξ_k²(r), but the details are omitted because of page limits.

Based on Theorem 1, we could achieve the goal of finding a low-cost classifier by learning a low-one-sided-loss regressor first. We formalize the learning problem as one-sided regression, which seeks a regressor that maps the input vector X ∈ X̂ to some real label Y ∈ R with the loss evaluated by some direction Z ∈ {−1, +1}. We use Z = +1 to indicate that there is no loss at the left-hand side r(X) ≤ Y, and Z = −1 to indicate that there is no loss at the right-hand side r(X) ≥ Y. Assume that there is an unknown distribution D_o that generates one-sided examples (X, Y, Z) ∈ X̂ × R × {−1, +1}. We consider a training set S_o = {(X_n, Y_n, Z_n)}_{n=1}^{N}, where each training example (X_n, Y_n, Z_n) is drawn i.i.d. from D_o. Linear one-sided regression aims at using S_o to find a regressor r : X̂ → R that comes with a small E_o(r), where

E_o(q) ≡ E_{(X,Y,Z)∼D_o} max(Z · (q(X) − Y), 0)

is the expected linear one-sided loss of the regressor q with respect to D_o.

With the definition above, we are ready to solve the cost-sensitive classification problem by reducing it to one-sided regression, as shown in Algorithm 1.

Algorithm 1 reduction to one-sided regression

1. Construct S̃_o = {(X_{n,k}, Y_{n,k}, Z_{n,k})} from S_c, where
   X_{n,k} = (x_n, k);  Y_{n,k} = c_n[k];  Z_{n,k} = 2⟦c_n[k] = c_n[y_n]⟧ − 1.

2. Train a regressor r(x, k) : X × Y → R from S̃_o with a one-sided regression algorithm.

3. Return the classifier g_r in (1).
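A minimal sketch of step 1 (illustrative only; 0-indexed labels and the function name are assumptions) that expands a cost-sensitive training set into one-sided regression examples:

```python
import numpy as np

def reduce_to_one_sided(X, y, costs):
    """Expand each cost-sensitive example (x_n, y_n, c_n) into K one-sided
    regression examples ((x_n, k), c_n[k], Z_{n,k}), as in step 1 of Algorithm 1."""
    N, K = costs.shape
    Xo, Yo, Zo = [], [], []
    for n in range(N):
        for k in range(K):
            Xo.append((X[n], k))          # augmented input X_{n,k} = (x_n, k)
            Yo.append(costs[n, k])        # regression label Y_{n,k} = c_n[k]
            Zo.append(+1 if costs[n, k] == costs[n, y[n]] else -1)  # direction Z_{n,k}
    return Xo, np.array(Yo), np.array(Zo)
```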

Note that we can define a distribution D_o that draws an example (x, y, c) from D_c, chooses k uniformly at random, and then generates (X, Y, Z) by

X = (x, k),  Y = c[k],  Z = 2⟦c[k] = c_min⟧ − 1.

We see that S̃_o consists of (dependent) examples from D_o and contains (many) subsets S_o ∼ D_o^N. Thus, a reasonable one-sided regression algorithm should be able to use S̃_o to find a regressor r that comes with a small E_o(r). By integrating both sides of (3) with respect to D_c, we get the following theorem.

Theorem 2 (error guarantee of Algorithm 1). Consider any D_c and its associated D_o. For any regressor r,

Ē_c(g_r) ≤ K · E_o(r).
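The factor K can be traced explicitly: using the uniform choice of k in D_o (so the one-sided loss of a drawn example is exactly ξ_k(r)) together with Theorem 1,

```latex
E_o(r)
  \;=\; \mathop{\mathbb{E}}_{(x,y,c)\sim D_c}\;\frac{1}{K}\sum_{k=1}^{K}\xi_k(r)
  \;\ge\; \frac{1}{K}\,\mathop{\mathbb{E}}_{(x,y,c)\sim D_c}\;\bar{c}\,[g_r(x)]
  \;=\; \frac{1}{K}\,\bar{E}_c(g_r),
```

and rearranging gives the bound of Theorem 2.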

Thus, if we can design a good one-sided regression algorithm that learns from S̃_o and returns a regressor r with a small E_o(r), the algorithm can be cast as a cost-sensitive classification algorithm that returns a classifier g_r with a small Ē_c(g_r). That is, we have reduced the cost-sensitive classification problem to a one-sided regression problem with a solid theoretical guarantee. The remaining question is, how can we design a good one-sided regression algorithm? Next, we propose a novel, simple and useful one-sided regression algorithm that is rooted in the support vector machine (SVM, Vapnik, 1998).

4. One-sided Support Vector Regression

From Theorem 2, we intend to find a decent regressor r with respect to D_o. Nevertheless, D_o is defined from an unknown distribution D_c and hence is also unknown. We thus can only rely on the training set S̃_o on hand. Consider an empirical risk minimization paradigm (Vapnik, 1998) that finds r by minimizing an in-sample version of E_o. That is,

r = argmin_q Σ_{k=1}^{K} Σ_{n=1}^{N} ξ_{n,k},

where ξ_{n,k} denotes ξ_k(q) on the training example (x_n, y_n, c_n). We can decompose the problem of finding a joint regressor r(x, k) into K sub-problems of finding individual regressors r_k(x) ≡ r(x, k). In other words, for every given k, we can separately solve

r_k = argmin_{q_k} Σ_{n=1}^{N} ξ_{n,k}. (7)

Let us look at linear regressors q_k(x) = ⟨w_k, φ(x)⟩ + b_k in a Hilbert space H, where the transform φ : X → H, the weight w_k ∈ H, and the bias b_k ∈ R. Adding a regularization term (λ/2)⟨w_k, w_k⟩ to the objective function, each sub-problem (7) becomes

min_{w_k, b_k, ξ_{n,k}}  (λ/2)⟨w_k, w_k⟩ + Σ_{n=1}^{N} ξ_{n,k}  (8)
s.t.  ξ_{n,k} ≥ Z_{n,k} (⟨w_k, φ(x_n)⟩ + b_k − c_n[k]),  ξ_{n,k} ≥ 0,  for all n,

where Z_{n,k} is defined in Algorithm 1. Note that (8) is a simplified variant of the common support vector regression (SVR) algorithm. Thus, we will call the variant one-sided support vector regression (OSSVR). Similar to the original SVR, we can solve (8) easily in the dual domain with the kernel trick K(x_n, x_m) ≡ ⟨φ(x_n), φ(x_m)⟩. The dual problem of (8) is

min_α  (1/2) Σ_{n=1}^{N} Σ_{m=1}^{N} α_n α_m Z_{n,k} Z_{m,k} K(x_n, x_m) + Σ_{n=1}^{N} Z_{n,k} c_n[k] α_n  (9)
s.t.  Σ_{m=1}^{N} Z_{m,k} α_m = 0;  0 ≤ α_n ≤ 1/λ,  for all n.

Coupling Algorithm 1 with OSSVR, we obtain the following novel algorithm for cost-sensitive classification: RED-OSSVR, as shown in Algorithm 2.

Algorithm 2 reduction to OSSVR

1. Training: For k = 1, 2, . . . , K, solve the primal problem in (8) or the dual problem in (9). Then, obtain a regressor r_k(x) = ⟨w_k, φ(x)⟩ + b_k.

2. Prediction: Return g_r(x) = argmin_{1≤k≤K} r_k(x).

Note that the common OVA-SVM (Hsu & Lin, 2002) algorithm has exactly the same steps, except that the (Z_{n,k} c_n[k]) terms in (8) and (9) are all replaced by −1. In other words, OVA-SVM can be viewed as a special case of RED-OSSVR by considering the cost vectors c_n[k] = 2⟦y_n ≠ k⟧ − 1. Using those cost vectors is the same as considering the insensitive cost matrix C_e (see Section 2) by scaling and shifting. That is, OVA-SVM equivalently "wipes out" the original cost information and replaces it by the insensitive costs.
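As a minimal illustration of the whole pipeline, the following sketch implements a linear-kernel RED-OSSVR in which each sub-problem (8) is approximately solved by sub-gradient descent on the primal; this is an illustrative substitute for the LIBSVM-based solver used in the experiments, and the function names are only for illustration:

```python
import numpy as np

def train_ossvr_class(X, Y, Z, lam=1.0, lr=0.01, epochs=200):
    """One OSSVR sub-problem (8) for a single class k, linear kernel:
    minimize (lam/2)<w, w> + sum_n max(Z_n * (<w, x_n> + b - Y_n), 0)."""
    N, D = X.shape
    w, b = np.zeros(D), 0.0
    for _ in range(epochs):
        margins = Z * (X @ w + b - Y)           # Z_{n,k} * (<w, x_n> + b - c_n[k])
        active = margins > 0                    # examples on the wrong "side"
        grad_w = lam * w + X[active].T @ Z[active]
        grad_b = Z[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def red_ossvr_fit(X, costs, lam=1.0):
    """Training step of Algorithm 2: one one-sided regressor per class,
    with labels Y = c_n[k] and directions Z_{n,k} as in Algorithm 1."""
    N, K = costs.shape
    cmin = costs.min(axis=1)
    models = []
    for k in range(K):
        Y = costs[:, k]
        Z = np.where(Y == cmin, 1.0, -1.0)
        models.append(train_ossvr_class(X, Y, Z, lam=lam))
    return models

def red_ossvr_predict(models, X):
    """Prediction step of Algorithm 2: g_r(x) = argmin_k r_k(x)."""
    scores = np.column_stack([X @ w + b for (w, b) in models])
    return scores.argmin(axis=1)                # 0-indexed class labels
```

In this sketch, plugging in the insensitive cost vectors c_n[k] = 2⟦y_n ≠ k⟧ − 1 turns each per-class loss into the usual hinge loss on −r_k, mirroring the relation to OVA-SVM described above.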

5. Experiments

In this section, we conduct experiments to validate our proposed RED-OSSVR algorithm. In all the experiments, we use LIBSVM (Chang & Lin, 2001) as our SVM solver, adopt the perceptron kernel (Lin & Li, 2008), and choose the regularization parameter λ within {2^17, 2^15, . . . , 2^−3} by a 5-fold cross-validation procedure on only the training set (Hsu et al., 2003). Then, we report the results using a separate test set (see below). Following a common practice in regression, the labels Y_{n,k} (that is, c_n[k]) are linearly scaled to [0, 1] using the training set.

5.1. Comparison with Artificial Data Set

We first demonstrate the usefulness of RED-OSSVR using an artificial data set in R² with K = 3. Each class is generated from a Gaussian distribution of variance 1/4 with centers at (−1, 0), (1/2, √3/2), and (1/2, −√3/2), respectively. The training set consists of 500 points of each class, as shown in Figure 2. We make the data set cost-sensitive by considering a fixed cost matrix

C_rot =
  [   0    1  100 ]
  [ 100    0    1 ]
  [   1  100    0 ]

Figure 2(a) shows the Bayes optimal boundary with respect to C_rot. Because there is a big cost in C_rot(1, 2), C_rot(2, 3), and C_rot(3, 1), the optimal boundary rotates in the counter-clockwise direction to avoid the huge costs. Figure 2(b) depicts the decision boundary obtained from OVA-SVM. The boundary separates the two adjacent Gaussians almost evenly. Although such a boundary achieves a small misclassification error rate, it pays a big overall cost. On the other hand, Figure 2(c) depicts the boundary obtained from RED-OSSVR.

For cost-sensitive classification problems, it is important to respect the cost information (like RED-OSSVR) instead of dropping it (like OVA-SVM), and decent performance can be obtained by using the cost information appropriately.

5.2. Comparison with Benchmark Data Sets

Next, we compare RED-OSSVR with four existing algorithms, namely, FT-SVM (Beygelzimer et al., 2007), SECOC-SVM (Langford & Beygelzimer, 2005), WAP-SVM (Beygelzimer et al., 2005) and (cost-insensitive) OVA-SVM (Hsu & Lin, 2002). As discussed in Section 3, the first three algorithms reduce cost-sensitive classification to binary classification while carrying a strong theoretical guarantee. The algorithms not only represent the state-of-the-art cost-sensitive classification algorithms, but also cover four major multiclass-to-binary decompositions (Beygelzimer et al., 2005) that are commonly used in SVM: one-versus-all (RED-OSSVR, OVA), tournament (FT), error correcting (SECOC) and one-versus-one (WAP).

Ten benchmark data sets (iris, wine, glass, vehicle, vowel, segment, dna, satimage, usps, letter) are used for comparison. All data sets come from the UCI Machine Learning Repository (Hettich et al., 1998) except usps (Hull, 1994). We randomly separate each data set with 75% of the examples for training and the remaining 25% for testing. All the input vectors in the training set are linearly scaled to [0, 1], and then the input vectors in the test set are scaled accordingly.

The ten benchmark data sets were originally gathered for regular classification and do not contain any cost information. To make the data sets cost-sensitive, we adopt the randomized proportional setup that was used by Beygelzimer et al. (2005). In particular, we consider a cost matrix C(y, k), where the diagonal entries C(y, y) are 0, and the other entries C(y, k) are uniformly sampled from [0, 2000 · |{n : y_n = k}| / |{n : y_n = y}|]. Then, for each example (x, y), the cost vector c comes from the y-th row of C (see Section 2). Although such a setup has a long history, we acknowledge that it does not fully reflect realistic needs. The setup is taken here solely for a general comparison of the algorithms.

To test the validity of our proposed algorithm on more realistic cost-sensitive classification tasks, we take a random 40% of the huge 10%-training set of KDDCup 1999 (Hettich et al., 1998) as another data set (kdd99). We do not use the accompanying test set because of the known mismatch between the training and test distributions, but we do take its original cost matrix for evaluation. The 40% then goes through similar 75%-25% splits and scaling, as done with the other data sets.
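A small sketch of the randomized proportional setup described above (illustrative only; 0-indexed classes and the function name are assumptions, and every class is assumed to appear in the training labels):

```python
import numpy as np

def randomized_proportional_costs(y_train, K, scale=2000.0, rng=None):
    """C[y, y] = 0 and C[y, k] ~ Uniform[0, scale * |{n: y_n = k}| / |{n: y_n = y}|]."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(np.asarray(y_train), minlength=K).astype(float)
    C = np.zeros((K, K))
    for y in range(K):
        for k in range(K):
            if k != y:
                C[y, k] = rng.uniform(0.0, scale * counts[k] / counts[y])
    return C
```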

We compare the test costs of RED-OSSVR and each individual algorithm over 20 runs using a pairwise one-tailed t-test at the 0.1 significance level, as shown in Table 1. kdd99 takes longer to train and hence we only show the results over 5 runs. We then show the average test costs and their standard errors for all algorithms in Table 2. Furthermore, we list the average test error rates in Table 3.

OVA-SVM versus RED-OSSVR. We see that RED-OSSVR can often achieve lower test costs than OVA-SVM (Table 2), at the expense of higher error rates (Table 3). In particular, Table 1 shows that RED-OSSVR is significantly better on 5 data sets and significantly worse on only 2: vowel and letter. We can take a closer look at vowel. Table 3 suggests that OVA-SVM does not misclassify much on vowel. Hence, the resulting test cost is readily small. Then, it is hard for RED-OSSVR to make improvements using arbitrary cost information. On the other hand, for data sets like glass or vehicle, on which OVA-SVM suffers from large error and cost, RED-OSSVR can use the cost information appropriately to perform much better.

SECOC-SVM versus RED-OSSVR. SECOC-SVM is usually the worst among the five algorithms. Note that SECOC can be viewed as a reduction from cost-sensitive classification to regression coupled with a reduction from regression to binary classification (Langford & Beygelzimer, 2005). Nevertheless, the latter part of the reduction requires a thresholding step (for which we used the grid-based thresholding in the original paper). Theoretically, an infinite number of thresholds is needed, and hence any finite-sized threshold choice inevitably leads to loss of information. From the results, SECOC-SVM can suffer much from the loss of information. RED-OSSVR, on the other hand, only goes through the first part of the reduction, and hence could preserve the cost information accurately and achieves significantly better performance on 9 out of the 11 data sets, as shown in Table 1.

WAP-SVM versus RED-OSSVR. WAP-SVM and RED-OSSVR perform similarly well on 6 out of the 11 data sets. Nevertheless, note that WAP-SVM does pairwise comparisons, and hence needs K(K−1)/2 underlying binary SVMs. Thus, it takes much longer to train and does not scale well with the number of classes. For instance, on letter, training WAP-SVM would take about 13 times longer than training RED-OSSVR. With similar performance, RED-OSSVR can be a preferred choice.


Figure 2. boundaries learned from a 2D artificial data set: (a) Bayes optimal, (b) OVA-SVM, (c) RED-OSSVR

FT-SVM versus RED-OSSVR. FT-SVM and RED-OSSVR both need only O(K) underlying SVM classifiers/regressors and hence scale well with K. Nevertheless, from Table 1, we see that RED-OSSVR performs significantly better than FT-SVM on 7 out of the 11 data sets. Note that FT-SVM is based on letting the labels compete in a tournament, and thus the design of the tournament can affect the resulting performance. From the results we see that the simple random tournament design, as Beygelzimer et al. (2007) originally used, is not as good as RED-OSSVR. The difference makes RED-OSSVR a better choice unless there is a strong demand on the O(log_2 K) prediction complexity of FT-SVM.

In summary, RED-OSSVR enjoys three advantages: using the cost information accurately and appropriately, O(K) training time, and strong empirical performance. The advantages suggest that it shall be the leading SVM-based algorithm for cost-sensitive classification nowadays.

Note that with the kernel trick in RED-OSSVR, we can readily obtain a wide range of classifiers of different complexity and thus achieve lower test costs than existing methods that focused mainly on decision trees (Abe et al., 2004; Beygelzimer et al., 2005; Zhou & Liu, 2006). The results from those comparisons are not included here because of page limits.

6. Conclusion

We proposed a novel reduction approach from cost-sensitive classification to one-sided regression. The approach is based on estimating the components of the cost vectors directly via regression, and uses a specifically designed regression loss that is tightly connected to the cost of interest. The approach is simple, yet enjoys strong theoretical guarantees in terms of error transformation. In particular, our approach allows any decent one-sided regression method to be cast as a decent cost-sensitive classification algorithm.

We modified the popular SVR algorithm to derive a new OSSVR method that solves one-sided regression problems. Then, we coupled the reduction approach with OSSVR for cost-sensitive classification.

Table 1. comparing the test costs of RED-OSSVR and each algorithm using a pairwise one-tailed t-test of 0.1 significance level

data set  FT  SECOC  WAP  OVA
iris      ≈ ≈ ≈
wine      ≈ ≈ ≈
glass
vehicle
vowel     ≈ × ×
segment   ≈ ≈
dna       ≈
satimage
usps      ≈ ≈
letter    ≈ ×
kdd99     ≈ ≈ ≈

: RED-OSSVR significantly better
× : RED-OSSVR significantly worse
≈ : otherwise

Our novel RED-OSSVR algorithm is a theoretically justified extension of the commonly used OVA-SVM algorithm. Experimental results demonstrated that RED-OSSVR is superior to OVA-SVM for cost-sensitive classification. Furthermore, RED-OSSVR can enjoy some advantages over three major SVM-based cost-sensitive classification algorithms. The findings suggest that RED-OSSVR is the best SVM-based algorithm for cost-sensitive classification nowadays.

Acknowledgments

We thank Chih-Jen Lin, Yuh-Jye Lee, Shou-De Lin and the anonymous reviewers for valuable suggestions. The project was partially supported by the National Science Council of Taiwan via NSC 98-2221-E-002-192 and 98-2218-E-002-019. We are grateful to the NTU Computer and Information Networking Center for the support of high-performance computing facilities.

References

Abe, N., Zadrozny, B., and Langford, J. An iterative method for multi-class cost-sensitive learning. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 3–11. ACM, 2004.

Table 2. average test cost of SVM-based algorithms

data set N K RED-OSSVR FT-SVM SECOC-SVM WAP-SVM OVA-SVM

iris 150 3 23.82±5.52∗ 31.75±5.53 29.30±5.53 28.13±6.20 35.58±7.16
wine 178 3 19.43±4.65∗ 20.72±5.27 19.66±4.54 19.99±5.82 28.00±5.16
glass 214 6 228.54±15.61∗ 264.52±15.66 251.91±14.18 253.29±16.76 283.86±18.17
vehicle 846 4 178.63±20.37∗ 190.68±21.62 193.52±21.38 187.25±22.69 216.94±17.83
vowel 990 11 23.07±2.89 25.96±2.76 88.57±7.25 17.63±2.01 13.43±1.84∗
segment 2310 7 26.85±1.83∗ 31.01±2.24 61.48±9.73 27.22±2.14 27.07±2.20
dna 3186 3 42.57±2.97∗ 56.92±3.98 54.86±5.67 50.27±3.38 42.94±2.61
satimage 6435 6 68.62±4.20∗ 78.34±4.54 93.26±5.01 73.64±4.24 79.39±4.06
usps 9298 10 24.22±0.94 30.64±1.08 75.94±7.29 23.71±0.78∗ 23.76±0.96
letter 20000 26 27.34±0.57 44.04±0.90 207.89±5.64 26.61±0.51∗ 26.62±0.72
kdd99 197608 5 0.0015±0.0001∗ 0.0015±0.0001∗ 0.7976±0.0000 0.0015±0.0001∗ 0.0016±0.0001

(those with the lowest mean are marked with *; those within one standard error of the lowest one are in bold)

Table 3. average test error (%) of SVM-based algorithms

data set RED-OSSVR FT-SVM SECOC-SVM WAP-SVM OVA-SVM

iris 6.84±1.15 11.71±2.34 19.47±3.58 6.97±0.82 4.34±0.75∗
wine 3.56±0.55 3.00±0.74 7.00±2.10 2.78±0.78 2.78±0.47∗
glass 31.48±1.37 49.54±2.24 45.65±3.15 39.81±2.48 29.91±0.63∗
vehicle 26.18±2.46 29.13±2.90 29.46±2.82 29.13±2.94 20.83±0.61∗
vowel 5.26±0.49 4.09±0.52 42.64±2.89 6.92±0.79 1.27±0.17∗
segment 3.66±0.27 4.78±0.48 25.35±3.83 4.28±0.37 2.59±0.15∗
dna 7.00±0.62 10.79±1.66 13.24±3.31 7.74±0.70 4.19±0.14∗
satimage 9.50±0.30 13.05±0.97 29.94±3.85 11.06±0.60 7.26±0.10∗
usps 3.45±0.20 4.60±0.39 32.74±2.37 4.96±0.63 2.19±0.06∗
letter 3.84±0.22 9.65±0.61 76.66±1.72 7.73±0.21 2.66±0.07∗
kdd99 0.075±0.004 0.074±0.004∗ 0.796±0.000 0.078±0.004 0.077±0.004

(those with the lowest mean are marked with *; those within one standard error of the lowest one are in bold)

Beygelzimer, A., Dani, V., Hayes, T., Langford, J., and Zadrozny, B. Error limiting reductions between classification tasks. In Machine Learning: Proceedings of the 22nd International Conference, pp. 49–56. ACM, 2005.

Beygelzimer, A., Langford, J., and Ravikumar, P. Multiclass classification with filter trees. Downloaded from http://hunch.net/~jl, 2007.

Chang, C.-C. and Lin, C.-J. LIBSVM: A Library for Support Vector Machines. National Taiwan University, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

Domingos, P. MetaCost: A general method for making classifiers cost-sensitive. In Proceedings of the 5th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 155–164. ACM, 1999.

Hettich, S., Blake, C. L., and Merz, C. J. UCI repository of machine learning databases, 1998.

Hsu, C.-W. and Lin, C.-J. A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, 13(2):415–425, 2002.

Hsu, C.-W., Chang, C.-C., and Lin, C.-J. A practical guide to support vector classification. Technical report, National Taiwan University, 2003.

Hull, J. J. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, 1994.

Langford, J. and Beygelzimer, A. Sensitive error correcting output codes. In Learning Theory: 18th Annual Conference on Learning Theory, pp. 158–172. Springer-Verlag, 2005.

Lin, H.-T. and Li, L. Support vector machinery for infinite ensemble learning. Journal of Machine Learning Research, 9:285–312, 2008.

Vapnik, V. N. Statistical Learning Theory. Wiley, New York, NY, 1998.

Zadrozny, B., Langford, J., and Abe, N. Cost-sensitive learning by cost-proportionate example weighting. In Proceedings of the 3rd IEEE International Conference on Data Mining, pp. 435, 2003.

Zhou, Z.-H. and Liu, X.-Y. On multi-class cost-sensitive learning. In Proceedings of the 21st National Conference on Artificial Intelligence, pp. 567–572. AAAI Press, 2006.
