
Progressive Random k -Labelsets for Cost-Sensitive Multi-Label Classification

Yu-Ping Wu r02922167@csie.ntu.edu.tw

Hsuan-Tien Lin htlin@csie.ntu.edu.tw

Department of Computer Science and Information Engineering, National Taiwan University, Taiwan

Abstract

In multi-label classification, an instance is associated with multiple relevant labels, and the goal is to predict these labels simultaneously. Many real-world applications of multi-label classification come with different performance evaluation criteria. It is thus important to design general multi-label classification methods that can flexibly take different criteria into account. Such methods tackle the problem of cost-sensitive multi-label classification (CSMLC). Most existing CSMLC methods either suffer from high computational complexity or focus on only certain specific criteria. In this work, we propose a novel CSMLC method, named progressive random k-labelsets (PRAkEL), to resolve the two issues above. The method is extended from a popular multi-label classification method, random k-labelsets, and hence inherits its efficiency. Furthermore, the proposed method can handle arbitrary example-based evaluation criteria by progressively transforming the CSMLC problem into a series of cost-sensitive multi-class classification problems. Experimental results demonstrate that PRAkEL is competitive with existing methods under the specific criteria they can optimize, and is superior under several popular criteria.

Keywords: machine learning, multi-label classification, loss function, cost-sensitive learning, labelset, ensemble method

1. Introduction

Multi-label classification (MLC) extends traditional multi-class classification by allowing each instance to be associated with a set of relevant labels. For example, in text classification, a document (instance) can belong to several topics (labels). Given a set of instances as well as their relevant labels, the goal of an MLC method is to predict the relevant labels of a new instance. Recently, MLC has attracted much research attention with a wide range of applications including music tag annotation (Trohidis et al., 2008; Lo et al., 2011), image classification (Boutell et al., 2004), and video classification (Qi et al., 2007).

In contrast to multi-class classification, one important characteristic of MLC is the possible correlations between different labels. Many approaches have been proposed to exploit the correlations. Chaining methods learn a label by treating other labels as features (Read et al., 2011; Dembczynski et al., 2010). Labelset-based methods learn several labels jointly (Tsoumakas et al., 2010; Tsoumakas and Vlahavas, 2007; Lo et al., 2014; Lo, 2013). Other methods transform the space of labels to capture the correlations (Hsu et al., 2009; Tai and Lin, 2012; Hardoon et al., 2004).


A key challenge of MLC is to automatically adapt a method to the evaluation criterion of interest. In real-world applications, different criteria are often required to evaluate the performance of an MLC method. For example, Hamming loss measures the proportion of the misclassified labels to the total number of labels; F1 score originates from information retrieval applications and is the harmonic mean of the precision and recall; subset 0/1 loss requires all labels to be correctly predicted. Because of the different natures of those criteria, a method that performs well under one criterion may not be well-suited for other criteria. It is therefore important to design general MLC methods that take the evaluation criterion into account, either in the training or prediction stage. Since the evaluation criterion, or metric, determines the cost for misclassifying an instance, this type of problem is generally called cost-sensitive multi-label classification (CSMLC) (Lo et al., 2014; Li and Lin, 2014), which is formally defined in Section 2.

We shall explain in Section 3 that most existing MLC methods either aim for optimizing a certain evaluation metric or require extra efforts to be adapted to each metric. For example, binary relevance (BR) (Tsoumakas et al., 2010) minimizes Hamming loss by learning each label independently. Label powerset (LP) (Tsoumakas et al., 2010) minimizes subset 0/1 loss by transforming the MLC problem to a multi-class classification problem with a huge number of hyper-classes. The well-known random k-labelsets (RAkEL) (Tsoumakas and Vlahavas, 2007) method focuses on many smaller multi-class classification problems to be computationally efficient, but it is only loosely connected to subset 0/1 loss (Ferng and Lin, 2013).

There are currently a few methods for dealing with general CSMLC problems (Dembczynski et al., 2010; Tsochantaridis et al., 2005; Li and Lin, 2014; Doppa et al., 2014).

RAkEL has been extended to cost-sensitive random k-labelsets (CS-RAkEL) (Lo, 2013) and generalized k-labelsets ensemble (GLE) (Lo et al., 2014) to handle example-dependent weighted Hamming loss, but not general metrics. Probabilistic classifier chain (Dembczynski et al., 2010) requires designing an efficient inference rule with respect to the metric, and covers many, but not all, of the metrics of interest (Li and Lin, 2014). Condensed filter tree (Li and Lin, 2014) is a chaining method that takes any evaluation metric into account during the training stage, but its training time is quadratic in the number of labels. The structured support vector machine (Tsochantaridis et al., 2005) can also handle arbitrary metrics, but it relies on solving a sophisticated optimization problem depending on the metric and is thus also inefficient. To the best of our knowledge, no existing CSMLC methods are both general and efficient.

In this work, we design a general and efficient CSMLC method in Section 4. This novel method, named progressive random k-labelsets (PRAkEL), is extended from RAkEL to inherit its efficiency. In particular, PRAkEL practically enjoys linear training time in terms of the number of labels. Moreover, PRAkEL is able to optimize any example-based metric by modifying the training stage of RAkEL. More specifically, RAkEL reduces the original problem to many regular multi-class problems and ignores the original cost information; PRAkEL reduces the CSMLC problem to many cost-sensitive multi-class ones by transferring the cost information to the sub-problems. The transferring task is non-trivial, however, because each sub-problem involves only a subset of labels of the original problem. We therefore introduce the notion of reference labels to determine the costs in the sub-problems.


We carefully propose two strategies for defining the reference labels, which lead to different advantages and disadvantages in both theoretical and empirical aspects.

We conducted experiments on seven benchmark datasets with various sizes and domains. The experimental results in Section 5 show that PRAkEL is competitive with state-of-the-art MLC methods under the specific metrics associated with the methods. Furthermore, in terms of general metrics, PRAkEL usually outperforms other methods. The results demonstrate that the proposed method is indeed more general, and more suitable for solving real-world problems.

2. Problem Setup

In CSMLC, we denote an instance by a vector x ∈ X = R^d and the relevant labels of x by a set Y ⊆ {1, 2, · · · , K}, where K is the total number of labels. Equivalently, this set of labels can be represented by a bit vector y ∈ Y = {0, 1}^K, where the l-th component y[l] is 1 if and only if the l-th label is relevant, i.e., l ∈ Y. Here, X and Y are called the input space and label space, respectively; the pair (x, y) is called an example. In this work, we consider a particular CSMLC setup that allows each example to carry its own cost information. The example-based setup, which assumes example-dependent costs, is more general than the setup with label-dependent costs, in which all examples share the same cost functions. The more general setup makes it possible to express the importance of different instances easily through embedding the importance in the example-dependent cost, and has been considered in several studies of cost-sensitive learning (Fan et al., 1999; Zadrozny et al., 2003; Sun et al., 2007). Formally, given a training set {(x_n, y_n, c_n)}_{n=1}^N consisting of N examples, where c_n : Y → R_{≥0} is a non-negative cost function and each (x_n, y_n, c_n) is drawn independently from an unknown distribution D, the goal of CSMLC is to learn a classifier h : X → Y such that the expected cost E_{(x,y,c)∼D}[c(h(x))] is small.
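As a concrete illustration of this setup, the short sketch below (Python with NumPy; the variable names and the toy cost are our own and not part of the paper) converts a relevant-label set into the bit-vector representation y ∈ {0, 1}^K and evaluates an example-dependent cost function that attains its minimum 0 at the true label vector:

```python
import numpy as np

K = 5  # total number of labels (illustrative value)

def to_bit_vector(relevant, K):
    """Convert a set of relevant labels (1-indexed) into y in {0,1}^K."""
    y = np.zeros(K, dtype=int)
    for l in relevant:
        y[l - 1] = 1
    return y

y = to_bit_vector({1, 3}, K)            # y = [1, 0, 1, 0, 0]

# A toy example-dependent cost: the number of mispredicted labels,
# so c(y) = 0 at the true label vector, as assumed in the setup.
def c(y_hat, y_true=y):
    return int(np.sum(y_hat != y_true))

print(c(to_bit_vector({1, 3}, K)))      # 0
print(c(to_bit_vector({1, 2, 3}, K)))   # 1
```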

Note that the example-based setup cannot cover all popular evaluation criteria in multi-label classification. For instance, the micro-F1 and macro-F1 criteria, which are defined on a set of y rather than a single one, cannot be expressed as an example-dependent cost function. Nonetheless, as highlighted by earlier CSMLC works (Li and Lin, 2014), studying the example-based setup can be viewed as an intermediate step toward those more complicated criteria.

Two remarks about this setup are in order. First, for a classifier h, since c(h(x)) is being minimized, it is natural to assume c has a minimum of 0 at y, the true label vector of x. With this assumption, although y does not appear in the learning goal, its information is implicitly stored in the cost function. Second, we can similarly define the problem of cost-sensitive multi-class classification (CSMCC) by replacing the label space Y with {1, 2, · · · , K}, which stands for K different classes. In fact, this setup is widely adopted in many existing works (Tu and Lin, 2010; Zhou and Liu, 2010; Abe et al., 2004).

Modern CSMCC works (Zhou and Liu, 2010) allow flexibly taking any cost functions into account based on application needs. While the proposed method shares the same flexibility in its derivation, we consider a more realistic scenario of CSMLC in the experiments. In particular, many CSMLC problems are actually associated with a global, label-dependent cost L : Y × Y → R, typically called a loss function, where L(y, ŷ) is the loss when predicting y as ŷ. Those problems aim to learn a classifier h : X → Y such that E[L(y, h(x))] is small (Dembczynski et al., 2010; Li and Lin, 2014). The aim can be easily expressed in our setup by assigning

c_n(ŷ) = L(y_n, ŷ).   (1)

We focus on CSMLC with such loss functions to demonstrate the applicability of the proposed method and to make a fair comparison with existing CSMLC methods (Li and Lin, 2014; Dembczynski et al., 2010). Popular loss functions include

• Hamming loss¹:

  L_H(y, ŷ) = (1/K) ∑_{l=1}^{K} ⟦ŷ[l] ≠ y[l]⟧;

• weighted Hamming loss with respect to the weight w ∈ R_{≥0}^K:

  L_{H,w}(y, ŷ) = ∑_{l=1}^{K} w[l] · ⟦ŷ[l] ≠ y[l]⟧;

• ranking loss:

  L_r(y, ŷ) = (1/R(y)) ∑_{(k,l) : y[k] < y[l]} ( ⟦ŷ[k] > ŷ[l]⟧ + (1/2) ⟦ŷ[k] = ŷ[l]⟧ ),

  where R(y) = |{(k, l) | y[k] < y[l]}| is a normalizer;

• F1 loss²:

  L_F(y, ŷ) = 1 − 2 (y · ŷ) / (‖y‖₁ + ‖ŷ‖₁),

  which is one minus the F1 score;

• subset 0/1 loss:

  L_s(y, ŷ) = ⟦ŷ ≠ y⟧.

For those loss functions defined above, we follow the convention that when the denominator is zero, the loss is defined as zero.
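For reference, the sketch below (NumPy; the helper names are ours) implements these example-based loss functions on bit vectors, including the zero-denominator convention just stated:

```python
import numpy as np

def hamming_loss(y, y_hat):
    return float(np.mean(y != y_hat))

def weighted_hamming_loss(y, y_hat, w):
    return float(np.sum(w * (y != y_hat)))

def ranking_loss(y, y_hat):
    # pairs (k, l) with y[k] < y[l]: an irrelevant label k against a relevant label l
    pairs = [(k, l) for k in range(len(y)) for l in range(len(y)) if y[k] < y[l]]
    if not pairs:
        return 0.0                      # convention: zero when the normalizer is zero
    loss = sum((y_hat[k] > y_hat[l]) + 0.5 * (y_hat[k] == y_hat[l]) for k, l in pairs)
    return loss / len(pairs)

def f1_loss(y, y_hat):
    denom = np.sum(y) + np.sum(y_hat)
    if denom == 0:
        return 0.0                      # convention: zero when the denominator is zero
    return 1.0 - 2.0 * np.dot(y, y_hat) / denom

def subset01_loss(y, y_hat):
    return float(not np.array_equal(y, y_hat))

y     = np.array([1, 0, 1, 0])
y_hat = np.array([1, 1, 0, 0])
print(hamming_loss(y, y_hat), f1_loss(y, y_hat), subset01_loss(y, y_hat))  # 0.5 0.5 1.0
```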

To simplify the explanations of the proposed method, we further introduce some terminology. We denote the set of K labels by L_K = {1, · · · , K}. A subset S of L_K with |S| = k is called a k-labelset. If S = {s_1, · · · , s_k} is a k-labelset with s_1 < · · · < s_k, then we denote (y[s_1], · · · , y[s_k]) ∈ {0, 1}^k by y[S]. When the number of labels, K, is clear in the context, we also use the notation S^c to represent the (K − k)-labelset L_K \ S = {1 ≤ l ≤ K | l ∉ S}.
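As a small illustration of this notation (our own helper code, using 0-indexed positions), y[S] and the complement labelset S^c can be computed as follows:

```python
import numpy as np

K = 6
y = np.array([1, 0, 1, 1, 0, 0])

S = [1, 3, 4]                                  # a 3-labelset (0-indexed here)
y_S = y[np.sort(S)]                            # y[S] = (y[s_1], ..., y[s_k])
S_c = [l for l in range(K) if l not in S]      # the complement (K - k)-labelset S^c

print(y_S, S_c)                                # [0 1 0] [0, 2, 5]
```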

We summarize the main notation used throughout the paper in Table 1.

1. ⟦·⟧ is the indicator function.

2. ‖·‖₁ is the ℓ₁ norm.


Notation                    Description
N                           Number of training examples
d                           Dimension of input space (number of features)
K                           Number of labels
X = R^d                     Input space (feature space)
Y = {0, 1}^K                Output space (label space)
x ∈ X                       Instance (feature vector)
y ∈ Y                       True label vector
ŷ ∈ Y                       Predicted label vector
ỹ ∈ Y                       Reference label vector (see Section 4.2)
h : X → Y                   Multi-label classifier
c : Y → R_{≥0}              Example-dependent cost function
L : Y × Y → R               Label-dependent loss function
L_K = {1, · · · , K}        The set of K labels
S ⊆ L_K with |S| = k        k-labelset
y[S]                        The ordered set of labels of y within S
M                           Number of iterations (labelsets) for the proposed method

Table 1: Main notation used in the paper.

3. Related Work

Multi-label classification methods can be divided into two main categories, namely, algorithm adaptation and problem transformation (Tsoumakas and Katakis, 2007). Algorithm adaptation methods directly extend a specific learning algorithm to tackle MLC problems. Multi-label k-nearest neighbor (ML-kNN) (Zhang and Zhou, 2007) is adapted from the famous k-nearest neighbors algorithm. AdaBoost.MH and AdaBoost.MR (Schapire and Singer, 2000) are two multi-label extensions of the AdaBoost algorithm (Freund and Schapire, 1999). ML-C4.5 (Clare and King, 2001) is an adaptation of the popular C4.5 algorithm. BP-MLL (Zhang and Zhou, 2006) is derived from the back-propagation algorithm of neural networks.

Problem transformation methods transform MLC problems into other types of learning problems and solve them by existing algorithms. Such methods are general and can be coupled with any mature algorithms. Our proposed method in Section 4 belongs to this category.

Binary relevance (BR) (Tsoumakas et al., 2010) is arguably the simplest problem transformation method, which transforms the MLC problem into several binary classification problems by learning and predicting each label independently. Classifier chain (CC) (Read et al., 2011) iteratively learns a binary classifier to predict the l-th label using {(x_n, ŷ_n[1], · · · , ŷ_n[l − 1])} as the training set, where ŷ_n contains the previously predicted labels. Although it considers the label dependencies, the order of labels becomes crucial to the performance of CC. Many approaches have been proposed to address this issue (Read et al., 2011, 2014; Goncalves et al., 2013). In particular, the ensemble of classifier chains (ECC) (Read et al., 2011) learns several CC classifiers, each with a random ordering of labels, and it averages the predictions from all the classifiers to classify a new instance.


Instead of learning one binary classifier for each label, probabilistic classifier chain (PCC) (Dembczynski et al., 2010) learns probabilistic classifiers to estimate P(y | x) by the chain rule

P(y | x) = P(y[1] | x) · ∏_{l=2}^{K} P(y[l] | x, y[1], · · · , y[l − 1])

and then applies a Bayes optimal inference rule designed for the evaluation metric to produce the final prediction. In principle, PCC can be adapted to any metric to tackle CSMLC problems by designing proper inference rules for the metric. However, deriving efficient inference rules for different metrics is practically challenging. Inference rules for Hamming, ranking, F1 and subset 0/1 loss have been designed (Dembczynski et al., 2010, 2011), but the rules for other metrics remain an open question. Similar to ECC, the ensembled probabilistic classifier chain (EPCC) (Dembczynski et al., 2010) resolves the issue of label ordering by random orderings.

The Monte Carlo optimization for classifier chains (MCC) (Read et al., 2014) employs the Monte Carlo scheme to find a good label ordering in the training stage of PCC. A recently proposed method, the classifier trellis (CT) (Read et al., 2015), is extended from MCC to consider a trellis structure of labels rather than a chain to improve efficiency. During the prediction stage of both methods (Read et al., 2014, 2015), the Monte Carlo scheme is applied to generate samples from P(y | x). A large number of samples may be required for Monte Carlo simulation, which results in possible computational challenges during prediction. While those samples can in principle be used to make cost-sensitive predictions, the possibility has not been fully studied in either work. In fact, the original works consider only approximate inference for Hamming loss and subset 0/1 loss.

A group of methods take label dependencies into account by learning multiple labels jointly. Label powerset (LP) (Tsoumakas et al., 2010) transforms each label combination into a unique hyper-class and learns a multi-class classifier. If there are K labels in total, then the number of classes may be as large as 2^K. Hence, when the number of labels is large, LP suffers from computational issues and an insufficient number of training examples within each class.

To overcome the drawback, a method called random k-labelsets (RAkEL) (Tsoumakas and Vlahavas, 2007) focuses on one labelset at a time. Recall that a k-labelset is a size-k subset of {1, 2, · · · , K}. RAkEL iteratively selects a random k-labelset S_m and learns an LP classifier h_m for the training set restricted to the labels within S_m, i.e., {(x_n, y_n[S_m])}. Each classifier h_m predicts the k labels within S_m, and the final prediction of an instance is produced by a majority vote of all the classifiers. Because the number of classes in each LP classifier is decreased, RAkEL is more efficient than LP. In addition, it achieves better performance than LP in terms of Hamming and F1 loss.
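For concreteness, the following is a minimal RAkEL-style sketch (our own code, not the authors' implementation), assuming a scikit-learn-like multi-class base learner; it shows the random labelset sampling, the LP encoding of each sub-problem, and the per-label majority vote:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier   # any multi-class base learner works here

def rakel_train(X, Y, k=3, M=10, seed=0):
    """Train M label-powerset classifiers, each on a random k-labelset (0-indexed)."""
    rng = np.random.default_rng(seed)
    K = Y.shape[1]
    ensemble = []
    for _ in range(M):
        S = np.sort(rng.choice(K, size=k, replace=False))
        # LP encoding: the k bits restricted to S become one hyper-class index
        classes = np.array([int("".join(map(str, row)), 2) for row in Y[:, S]])
        ensemble.append((S, DecisionTreeClassifier().fit(X, classes)))
    return ensemble, K

def rakel_predict(ensemble, K, X):
    """Decode each LP prediction back to bits and take a per-label majority vote."""
    votes, counts = np.zeros((X.shape[0], K)), np.zeros(K)
    for S, clf in ensemble:
        pred = clf.predict(X)
        bits = np.array([[int(b) for b in format(int(c), f"0{len(S)}b")] for c in pred])
        votes[:, S] += bits
        counts[S] += 1
    return (votes > counts / 2).astype(int)
```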

Nonetheless, there is a noticeable issue of RAkEL. In each multi-class sub-problem, a one-bit prediction error and a two-bit error are equally penalized. That is, the LP classifiers cannot distinguish between small and big errors. Because these classifiers are learned without considering the evaluation metric, RAkEL is not a cost-sensitive method.

Two extensions of RAkEL were proposed to address the above issue, but they both consider only the example-dependent weighted Hamming loss rather than general metrics.

The cost-sensitive random k-labelsets (CS-RAkEL) (Lo, 2013) method reduces the CSMLC problem to several multi-class ones with instance weights. The weight of each instance is defined as the sum of the misclassified costs of the relevant labels. Despite the restriction, one advantage of CS-RAkEL is that it only requires re-weighting of the instances and can hence be coupled with many traditional multi-class classification algorithms.

Generalized k-labelsets ensemble (GLE) (Lo et al., 2014) learns a set of LP classifiers and determines a linear combination of them by minimizing the averaged loss of training examples. The minimization is formulated as a quadratic optimization problem without any constraints and hence can be solved efficiently. While both CS-RAkEL and GLE are pioneering works on extending RAkEL for CSMLC, they focus on specific applications of tagging. As a consequence, the two methods do not come with much theoretical guarantee, and it is non-trivial to extend them to handle example-dependent costs.

For the methods introduced above, BR and CC optimize Hamming loss; CS-RAkEL and GLE deal with weighted Hamming loss; MCC and CT minimize Hamming and subset 0/1 loss currently, with the potential of handling general metrics yet to be studied; PCC is designed to deal with general metrics, but is computationally demanding for arbitrary metrics that come without efficient inference rules. Another method that deals with general metrics is the structured support vector machine (SSVM) (Tsochantaridis et al., 2005). The SSVM optimizes a metric by re-scaling certain variables in the traditional SVM optimization problem based on the metric. However, the complexity of solving the problem depends on the metric and is usually too high for practical applications.

Condensed filter tree (CFT) (Li and Lin, 2014) is a state-of-the-art CSMLC method, extended from the well-known filter tree algorithm (Beygelzimer et al., 2009) to handle multi-label data. Similarly, the divide-and-conquer tree algorithm (Beygelzimer et al., 2009) for multi-class problems can be directly adapted to CSMLC problems to design the top-down tree (TT) method (Li and Lin, 2014). Both CFT and TT can be viewed as cost-sensitive extensions of CC. CFT suffers from its training time, which is quadratic in the number of labels; TT suffers from its weaker performance as compared with CFT (Li and Lin, 2014).

Multi-label search (MLS) (Doppa et al., 2014) optimizes a metric by adapting the HC-search framework to multi-label problems. It learns a heuristic function and estimates the evaluation metric in the training stage. Then, during the prediction stage, MLS conducts a heuristic search towards minimizing the estimated cost. Despite its generality, MLS suffers from high computational complexity. To learn the heuristic function during training, it needs to solve a ranking problem consisting of O(NK) examples, where N is the number of training examples and K is the number of labels.

In summary, many existing MLC methods are not applicable to arbitrary example-based metrics of CSMLC (BR, CC, LP, RAkEL). There are some extensions dealing with restricted metrics of CSMLC (CS-RAkEL, GLE). For general metrics, current methods suffer from computational issues (CFT, MLS, SSVM), performance issues (TT), or require elegant design of inference rules or more studies to handle different metrics (PCC, MCC, CT). In the next section, we present a general yet efficient cost-sensitive multi-label method, which is competitive with state-of-the-art CSMLC methods.


4. Proposed Method

Recall that the LP method solves an MLC problem by transforming it into a single multi-class problem. Similarly, a CSMLC problem can be transformed into a cost-sensitive multi-class classification (CSMCC) problem, as illustrated in the CFT work (Li and Lin, 2014). The resulting method, however, suffers from the same computational issue as LP, and hence is not feasible for large problems. CFT solves the computational issue by considering an efficient multi-class classification model—the filter tree.

In this work, we deal with the computational issue differently. We extend the idea of RAkEL and propose a novel labelset-based method, which iteratively transforms the CSMLC problem into a series of CSMCC problems. Different from RAkEL, the critical part of the proposed method is the transfer of the cost information to the sub-problems in the training stage. This is not a trivial task, since each sub-problem involves only a subset of labels and hence the costs in each sub-problem cannot be easily connected to those in the original problem. Therefore, we introduce the notion of reference label vectors to determine the costs in the sub-problems. While the overall idea sounds simple, it advances the study of CSMLC in several aspects:

• Compared with traditional MLC methods such as RAkEL, the proposed method is sensitive to the evaluation metric and hence is able to optimize arbitrary example-based metrics.

• Compared with CS-RAkEL and GLE, the proposed method handles more general metrics and comes with solid theoretical analysis.

• Compared with PCC, MCC and SSVMs, our method considers label dependencies through labelsets instead and requires no manual adaptation to each evaluation metric.

• Compared with existing CSMLC methods such as CFT, our method is more efficient in terms of training time complexity while reaching a similar level of performance.

We first provide the framework of the proposed method. Then, we describe it in great detail and present its analysis.

4.1. Framework

Let T = {(x_n, y_n, c_n)}_{n=1}^N be the training set and M be the number of iterations. Inspired by RAkEL, in the m-th iteration, our method selects a random k-labelset S_m and constructs a CSMCC training set T′_m = {(x_n, y_n[S_m], c′_n)}_{n=1}^N of K′ = 2^k classes, where c′_n : {0, 1}^k → R. The main difference between our method and RAkEL is that the multi-class sub-problems defined here contain the costs c′_n, and hence our method is able to carry the information of the evaluation metric. The two issues of RAkEL discussed in Section 3 can also be resolved by properly defining these c′_n. Although in our problem setup described in Section 2, the label space of a CSMCC problem should be L_{K′}, by considering a bijection between L_{K′} and {0, 1}^k, we may treat y_n[S_m] as an element of L_{K′} and assume c′_n : L_{K′} → R. Then, any CSMCC algorithm can be employed to learn a multi-class classifier h′_m : X → {0, 1}^k for T′_m.


Similar to RAkEL, the final prediction of a new instance x is produced by a majority vote of all the classifiers h′_m. More precisely, if we define h_m : X → {−1, 0, 1}^K by

h_m(x)[S_m] = 2 · h′_m(x) − 1 ∈ {−1, 1}^k,
h_m(x)[S_m^c] = 0 ∈ {0}^{K−k},   (2)

then the final prediction ŷ ∈ Y can be obtained by setting ŷ[l] = 1 if and only if ∑_{m=1}^{M} h_m(x)[l] > 0.
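To make this framework concrete, the sketch below (Python/NumPy; all names are ours and merely illustrative) shows a bijection between {0, 1}^k and the class indices of L_{K′}, the construction of h_m in Equation (2), and the majority-vote decoding of the final prediction:

```python
import numpy as np

def bits_to_class(bits):
    """Bijection {0,1}^k -> {0, ..., 2^k - 1}, used to feed a CSMCC learner."""
    return int(sum(int(b) << i for i, b in enumerate(bits)))

def class_to_bits(c, k):
    """Inverse bijection."""
    return np.array([(c >> i) & 1 for i in range(k)], dtype=int)

def vote_prediction(labelsets, subpreds, K, alphas=None):
    """Equation (2) followed by the (weighted) majority vote over the M sub-classifiers."""
    M = len(labelsets)
    alphas = np.ones(M) if alphas is None else np.asarray(alphas)
    F = np.zeros(K)
    for m in range(M):
        h_m = np.zeros(K)                                   # h_m(x)[S_m^c] = 0
        h_m[np.sort(labelsets[m])] = 2 * subpreds[m] - 1    # h_m(x)[S_m] in {-1, +1}^k
        F += alphas[m] * h_m
    return (F > 0).astype(int)

# Toy usage: K = 4 labels, two 2-labelsets, sub-classifier outputs in {0,1}^2.
labelsets = [[0, 1], [2, 3]]
subpreds = [np.array([1, 0]), np.array([0, 1])]
print(bits_to_class(subpreds[0]), class_to_bits(1, 2))      # 1 and [1 0]
print(vote_prediction(labelsets, subpreds, K=4))            # [1 0 0 1]
```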

4.2. Cost Transformation

Having described the framework, we now turn our attention to the multi-class cost functions c′_n in the sub-problems, which must be defined in each iteration. At this point, notice that if we define c′_n(ŷ′) = ⟦ŷ′ ≠ y_n[S_m]⟧, then the proposed method degenerates into RAkEL. Since this c′_n is independent of the original cost function c_n, it can also be seen from this assignment that RAkEL is not a cost-sensitive method.

To establish the connections between these two cost functions, c′_n must carry a certain amount of information of c_n. Note that the domain of c′_n is {0, 1}^k and c_n is defined on Y = {0, 1}^K. To extend c′_n to the domain of c_n, we propose considering a reference label vector ỹ_n ∈ Y and setting the value of c′_n to be the cost c_n assuming the labels outside S_m were predicted the same as ỹ_n. Mathematically,

c′_n(ŷ′) = c_n(ŷ′ ∪ ỹ_n[S_m^c]).   (3)

Here, we treat ŷ′ and ỹ_n[S_m^c] as subsets of S_m and S_m^c, respectively, and therefore, their union is considered as a subset of L_K, or equivalently a bit vector in {0, 1}^K.

It then remains to define these ỹ_n in each iteration to complete the transformation. We shall see in the next section that these reference vectors may depend on the classifiers learned in the previous iterations, and hence, the multi-class cost functions would be obtained progressively. As a consequence, the proposed method is called progressive random k-labelsets (PRAkEL). The training and prediction algorithms of PRAkEL are presented in Algorithms 1 and 2, where the weighting strategy mentioned in line 8 of Algorithm 1 is described in Section 4.4. For now we simply assume α_m = 1 for 1 ≤ m ≤ M. Another thing to note is that we do not explicitly require selecting a labelset that has not been chosen before. However, in practice we give higher priority to those labels that were selected fewer times in the previous iterations. In particular, we guarantee that all labels are selected at least once if kM ≥ K.
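A minimal sketch of the cost transfer in Equation (3) is given below (our own code and names; the 0-indexed labelsets and the use of F1 loss as the underlying metric are illustrative assumptions):

```python
import numpy as np

def transformed_cost(c, y_ref, S):
    """Build c'(.) on {0,1}^k as in Equation (3): complete the k bits predicted inside S
    with the reference labels outside S, then evaluate the original cost c."""
    S = np.sort(np.asarray(S))
    def c_prime(y_sub):
        full = y_ref.copy()      # labels outside S: taken from the reference vector
        full[S] = y_sub          # labels inside S: the sub-problem prediction
        return c(full)
    return c_prime

# Toy usage with F1 loss as the underlying metric (Section 2).
def f1_loss(y, y_hat):
    denom = np.sum(y) + np.sum(y_hat)
    return 0.0 if denom == 0 else 1.0 - 2.0 * np.dot(y, y_hat) / denom

y_true = np.array([1, 0, 1, 1])
c = lambda y_hat: f1_loss(y_true, y_hat)
c_prime = transformed_cost(c, y_ref=y_true, S=[0, 1])   # reference = true labels, as in PRAkELt
print(c_prime(np.array([1, 0])))    # 0.0: the correct bits inside S incur no cost
print(c_prime(np.array([0, 1])))    # 1/3: a wrong prediction inside S is penalized
```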

4.3. Defining Reference Label Vectors

We propose two strategies for defining the reference label vectors. The first, and also the most intuitive, is to let ỹ_n = y_n in every iteration. The proposed method with this assignment is denoted by PRAkELt to indicate the usage of the true label vectors. In this strategy, we implicitly assume that the labels outside the labelset can be perfectly predicted by the other classifiers.

In real-world situations, however, this is usually not the case. Therefore, in the second strategy, we define ỹ_n to be the predicted label vector of x_n obtained thus far. Thus, the optimization in each sub-problem no longer depends on the perfect predictions from the previous classifiers.


Algorithm 1: Training algorithm of PRAkEL

Input: Training set T = {(x_n, y_n, c_n)}_{n=1}^N, cost-sensitive multi-class training algorithm A
Parameter: Size of labelset k, number of iterations M
Output: Labelsets S_m, weights α_m and multi-class classifiers h′_m : X → {0, 1}^k for 1 ≤ m ≤ M

 1: for m ← 1 to M do
 2:   Select a random k-labelset S_m;
 3:   Define the reference label vector ỹ_n for 1 ≤ n ≤ N;
 4:   Transform each example (x_n, y_n, c_n) to (x_n, y_n[S_m], c′_n) by Equation (3);
 5:   T′ ← {(x_n, y_n[S_m], c′_n)}_{n=1}^N based on (3);
 6:   h′_m ← A(T′);
 7:   Define h_m by Equation (2);
 8:   Assign weight α_m according to the weighting strategy;
 9:   F_n ← F_n + α_m h_m(x_n) for 1 ≤ n ≤ N;
10: end

Algorithm 2: Prediction algorithm of PRAkEL

Input: New instance x, labelsets S_m, weights α_m and multi-class classifiers h′_m : X → {0, 1}^k for 1 ≤ m ≤ M
Output: Predicted label vector ŷ ∈ Y

1: Define h_m for 1 ≤ m ≤ M by (2);
2: F ← ∑_{m=1}^{M} α_m h_m(x);
3: ŷ ← 0 ∈ {0}^K;
4: for l ← 1 to K do
5:   if F[l] > 0 then
6:     ŷ[l] ← 1;
7:   end
8: end

Formally, let F_{m,n} = ∑_{p=1}^{m} h_p(x_n) for 1 ≤ n ≤ N and define H_{m,n} ∈ Y by H_{m,n}[l] = ⟦F_{m,n}[l] > 0⟧. That is, H_{m,n} is the prediction of x_n by a majority vote of the first m classifiers. We then define ỹ_n in the m-th iteration to be H_{m−1,n} for m ≥ 2, and let ỹ_n = y_n in the first iteration. Since the reference label vectors as well as the multi-class sub-problems are obtained progressively, the proposed method coupled with this strategy is denoted simply by PRAkEL.
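The progressive bookkeeping can be sketched as follows (our own skeleton code; the actual sub-classifier and weight are stubbed by a placeholder `step` function to show only the data flow):

```python
import numpy as np

def progressive_reference_updates(y_true, K, M, step):
    """Skeleton for one example: F accumulates the weighted votes, H = [F > 0] is the
    running prediction, and H from iteration m-1 serves as the reference vector in
    iteration m (the true labels are used only in iteration 1). `step(m, y_ref)` stands
    for the real training step and must return (h_m, alpha_m)."""
    F = np.zeros(K)
    y_ref = y_true.copy()                 # iteration 1: reference = true label vector
    for m in range(1, M + 1):
        h_m, alpha_m = step(m, y_ref)     # h_m(x) in {-1, 0, 1}^K, weight alpha_m
        F += alpha_m * h_m
        y_ref = (F > 0).astype(int)       # H_m: majority vote of the first m classifiers
    return y_ref

# Toy usage with a dummy step that always votes +1 on the first two labels.
dummy = lambda m, y_ref: (np.array([1, 1, 0, 0]), 1.0)
print(progressive_reference_updates(np.array([1, 0, 1, 0]), K=4, M=3, step=dummy))  # [1 1 0 0]
```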

Recall that in our problem setup we assume the minimum of each c_n is 0. Therefore, for PRAkELt we have min_{ŷ′ ∈ {0,1}^k} c′_n(ŷ′) = min_{ŷ ∈ Y} c_n(ŷ[S_m] ∪ y_n[S_m^c]) = c_n(y_n) = 0. In other words, the minimum cost for every example in each sub-problem is 0, which is a consequence of ỹ_n = y_n. For PRAkEL, however, this identity may not hold. Since the predicted labels outside S_m cannot be altered in the m-th iteration, it is natural to add a constant to each of the functions c′_n such that min_{ŷ′ ∈ {0,1}^k} c′_n(ŷ′) = 0. Therefore, the transformed cost functions for PRAkEL are all shifted to satisfy this equality by the following formula:

c′_n(ŷ′) = c_n(ŷ′ ∪ ỹ_n[S_m^c]) − c_n(y_n[S_m] ∪ ỹ_n[S_m^c]).   (4)

Interestingly, after shifting the costs, PRAkELt and PRAkEL become equivalent under Hamming loss and ranking loss. To show this, we first present two lemmas.

Lemma 1 Let L_r be the function of ranking loss and y ∈ Y = {0, 1}^K. Then, there exists a unique w ∈ R_{≥0}^K such that L_r(y, ·) = L_{H,w}(y, ·), where L_{H,w} is the function of weighted Hamming loss with respect to w.

Proof See Appendix A.

Lemma 2 Let L_{H,w} be the function of weighted Hamming loss and S be a k-labelset. For any subsets y′_0 and y′_1 of S, L_{H,w}(y, y′_0 ∪ ỹ[S^c]) − L_{H,w}(y, y′_1 ∪ ỹ[S^c]) is independent of ỹ ∈ {0, 1}^K.

Proof See Appendix A.

Theorem 3 Under Hamming loss and ranking loss, PRAkELt and PRAkEL are equivalent.

Proof Let L be the loss function of interest and consider the m-th iteration. For any instance x, let b′ and c′ be the cost functions of x in the m-th multi-class sub-problem, in the training of PRAkELt and PRAkEL, respectively. We show that b′(y′) = c′(y′) − min c′. Let ỹ be the reference label vector of x for PRAkEL. Since we are considering a single instance, by Lemma 1, we may assume L is the function of weighted Hamming loss. Let S be the k-labelset in the current iteration and y be the true label vector of x.

If y′ ⊆ S, then by definition,

c′(y′) − min c′ = c(y′ ∪ ỹ[S^c]) − min_{ŷ : ŷ[S^c] = ỹ[S^c]} c(ŷ)
                = L(y, y′ ∪ ỹ[S^c]) − min_{ŷ : ŷ[S^c] = ỹ[S^c]} L(y, ŷ)
                = L(y, y′ ∪ ỹ[S^c]) − min_{ŷ′ ⊆ S} L(y, ŷ′ ∪ ỹ[S^c])
                = max_{ŷ′ ⊆ S} ( L(y, y′ ∪ ỹ[S^c]) − L(y, ŷ′ ∪ ỹ[S^c]) ).

In addition, by Lemma 2, L(y, y′ ∪ ỹ[S^c]) − L(y, ŷ′ ∪ ỹ[S^c]) is independent of ỹ[S^c] for all ŷ′ ⊆ S. Therefore, we have

c′(y′) − min c′ = max_{ŷ′ ⊆ S} ( L(y, y′ ∪ y[S^c]) − L(y, ŷ′ ∪ y[S^c]) )
                = L(y, y′ ∪ y[S^c]) − L(y, y[S] ∪ y[S^c])
                = L(y, y′ ∪ y[S^c])
                = c(y′ ∪ y[S^c])
                = b′(y′).

Hence, in each iteration the shifted cost functions of PRAkEL coincide with the cost functions of PRAkELt, and the two variants are equivalent.


Moreover, for these two loss functions, it is easy to derive an upper bound of the training cost. Consider a training example (x, y, c). Let e_m be the training cost of x in the m-th CSMCC sub-problem. We hope to bound the overall multi-label training cost of x in terms of these e_m.

By Lemma 1, again, it suffices to consider weighted Hamming loss. Recall that K is the number of labels, k is the size of the labelsets, and M is the number of iterations. For simplicity, assume kM is a multiple of K. In addition, we assume that each label appears in exactly r = kM/K labelsets. That is, the labelsets are selected uniformly. Let h_m ∈ {−1, 0, 1}^K be the prediction of x in the m-th iteration as defined in Section 4.1 and ŷ ∈ Y be the final prediction, which is obtained by averaging these h_m. Now, focus on the l-th label. If ŷ[l] ≠ y[l], then there must be at least half of those m with l ∈ S_m such that h_m[l] is predicted incorrectly. Hence, the part of the overall training cost contributed by the l-th label cannot exceed e_m/(r/2) = 2e_m/r. As a result, by the property of weighted Hamming loss, the training cost is no more than ∑_{m=1}^{M} 2e_m/r = (2K/k)·ē, where ē = (1/M) ∑_{m=1}^{M} e_m. By the above arguments, we have the following theorem.

Theorem 4 Let E_m be the multi-class training cost of the training set in the m-th iteration. Then, under Hamming loss and ranking loss, the overall training cost of the CSMLC problem for both PRAkELt and PRAkEL is no more than (2K/k)·Ē, where Ē is the mean of the E_m.

Proof Since the statement is true for each example, the proof is straightforward.

Despite the equivalence between PRAkELt and PRAkEL for Hamming and ranking loss, they are not the same for arbitrary cost functions. In the experiment section, we demonstrate that PRAkEL is more effective under F1 loss. For now, we present an explanation, by restricting ourselves to the case where the labelsets are disjoint. In this case, K/k = M, and the upper bound in Theorem 4 can be improved to (K/k)·Ē = M·Ē because the final prediction of each label is determined by a single LP classifier. Under this restriction, we have a similar result for PRAkEL. Before stating the next theorem, we have to make some normality assumption about the cost functions. For a label vector y and its corresponding cost function c, we assume that if ŷ′ ∈ Y is one bit closer to y than ŷ′′ ∈ Y, then c(ŷ′) ≤ c(ŷ′′). That is, a more correct prediction does not result in a larger cost. In fact, this simple assumption has been implicitly made by many MLC methods such as BR, CC and RAkEL.

Theorem 5 Assume the labelsets are disjoint. Then, for any cost function satisfying the above assumption, the overall training cost for PRAkEL is no more than M·Ē.

Proof We may assume there is only one training example (x, y, c), where the subscript n is dropped here for simplicity. Recall that the reference label vector of x in the m-th iteration, denoted by ỹ^(m), is defined to be H_{m−1} for m ≥ 2. Then, for m ≥ 2,

c(H_m) = c(H_m[S_m] ∪ H_m[S_m^c])
       = c(h′_m(x) ∪ H_{m−1}[S_m^c])
       = E_m + c(y[S_m] ∪ H_{m−1}[S_m^c])
       ≤ E_m + c(H_{m−1}),

where the third equality is by the definition of E_m, and the inequality follows from the assumption we just made. Hence, by induction, the overall training cost is c(H_M) ≤ c(ỹ^(1)) + ∑_{m=1}^{M} E_m = c(y) + M·Ē = M·Ē.

Note that this bound cannot be improved since all inequalities in the proof become equalities under Hamming loss. Nonetheless, there is no analogous result for PRAkELt, as shown in the following theorem.

Theorem 6 Assume k < K. For PRAkELt, there is no constant B > 0 such that the bound B·Ē on the overall training cost holds for all cost functions.

Proof Again, assume the labelsets are disjoint and there is only one instance x. Consider the special case where the true label vector of x is y = (1, · · · , 1) ∈ Y, and assume h_m[l] = −1 for all l ∈ S_m and all m. In this case, ŷ = (0, · · · , 0) ∈ Y, and therefore, its F1 loss is L_F(y, ŷ) = 1. In addition, if we define ŷ_m = ŷ[S_m] ∪ y[S_m^c], then

E_m = L_F(y, ŷ_m)                                                            (5)
    = ∑_l ⟦y[l] ≠ ŷ_m[l]⟧ / ( ∑_l ⟦y[l] ≠ ŷ_m[l]⟧ + 2 ∑_l ⟦y[l] = ŷ_m[l] = 1⟧ )   (6)
    = k / (k + 2(K − k)).                                                     (7)

Hence, we have L_F(y, ŷ) = 1 = ((2K − k)/k)·Ē. Note that if the factor 2 in equation (7) is replaced by a larger constant, then the bound needs to be larger. Moreover, we can freely define a loss function L similar to L_F by replacing the constant 2 in (6) with an arbitrary positive one. Letting the constant tend to infinity, the proof is complete.
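As a quick numerical check of (5)-(7) (our own script, not part of the original analysis), the snippet below evaluates the F1 loss for the construction in the proof with illustrative sizes K and k:

```python
import numpy as np

def f1_loss(y, y_hat):
    denom = np.sum(y) + np.sum(y_hat)
    return 0.0 if denom == 0 else 1.0 - 2.0 * np.dot(y, y_hat) / denom

K, k = 10, 2                          # illustrative sizes with k < K
y = np.ones(K, dtype=int)             # true label vector (1, ..., 1)
y_hat = np.zeros(K, dtype=int)        # final prediction: every label predicted as 0

y_m = np.ones(K, dtype=int)           # prediction in the m-th sub-problem of PRAkELt:
y_m[:k] = 0                           # wrong inside S_m (the first k labels here), correct outside

E_m = f1_loss(y, y_m)
print(E_m, k / (k + 2 * (K - k)))                  # both equal, matching equation (7)
print(f1_loss(y, y_hat), (2 * K - k) / k * E_m)    # both equal 1, matching the claim
```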

Theorems 5 and 6 suggest we define the reference label vectors to be the predicted label vectors instead of the true ones. Empirical results in the experiment section also support this finding. In fact, a previous study on multi-target regression has already revealed the problem of treating true targets as additional input variables (Spyromitros-Xioufis et al., 2016). Moreover, the authors showed that in-sample estimates of the target variables are still problematic, and proposed an approach based on out-of-sample estimates to tackle the issue. Although we do not consider this kind of estimate in this paper, we believe that a similar approach for PRAkEL could be considered in future work.

One disadvantage of employing the predicted labels is that the sub-problems need to be learned iteratively, while the training process of the LP classifiers of RAkEL can be parallelized. In addition, the two cost-sensitive extensions of RAkEL, CS-RAkEL and GLE, as well as PRAkELt, apparently do not have this drawback. There is thus a tradeoff between performance and efficiency.


4.4. Weighting of Base Classifiers

In general, some sub-problems of PRAkEL are easier to solve, while others are more difficult. Thus, the performance of each LP classifier within PRAkEL can be different, and the majority vote of these classifiers may be sub-optimal. Inspired by GLE (Lo et al., 2014), we can further assign different weights to the LP classifiers to represent their importance. To achieve this, a linear combination of the classifiers is learned by minimizing the training cost.

Formally, given a new instance x, its prediction ŷ ∈ Y is produced by setting ŷ[l] = 1 if and only if ∑_{m=1}^{M} α_m h_m(x)[l] > 0, where these α_m > 0 are called the weights of the base classifiers. Accordingly, the assignment F_{m,n} = ∑_{p=1}^{m} h_p(x_n) in the previous section should be changed to F_{m,n} = ∑_{p=1}^{m} α_p h_p(x_n).

One approach for determining these weights is to solve an optimization problem after all the h_m are learned, just as GLE does. However, this overall optimization ignores the iterative nature of PRAkEL, where the value of F_{m,n} depends on α_p for 1 ≤ p < m in the m-th iteration. We therefore iteratively determine α_m by greedily minimizing the training cost. More precisely, let α_1 = 1 for simplicity, and for m ≥ 2, by regarding H_{m,n} as a function of α_m, we solve the following single-variable optimization problem and define α_m to be an optimal solution:

min_{α ∈ R} (1/N) ∑_{n=1}^{N} c_n(H_{m,n}(α))   (8)

It is not easy to solve this type of problem in general. Nevertheless, since the objective function is piecewise constant, the optimization problem (8) can be solved by considering only finitely many α, and the remaining task is to obtain these candidate α. It then suffices to find the discontinuities of the objective function, and therefore the zeros of each component of the function F_{m,n}(α) for all n, denoted by a set E_{m,n} ⊆ R. Since F_{m,n}(α) = F_{m−1,n} + α·h_m(x_n), we have E_{m,n} ⊆ {α | F_{m,n}(α)[l] = 0 for some l ∈ S_m} = {−F_{m−1,n}[l]/h_m(x_n)[l] | l ∈ S_m}, implying |E_{m,n}| ≤ |S_m| = k. If (∪_n E_{m,n}) ∩ R_{>0} = {a_1, · · · , a_P} with 0 < a_1 < · · · < a_P, then clearly P ≤ Nk, and the set of candidate α can be chosen to be {(a_i + a_{i+1})/2 | 1 ≤ i < P} ∪ {a_1/2, a_P + 1}. This weighting strategy is called greedy weighting (GW).

Certainly, one can simplify the process of solving (8) by minimizing it over a fixed finite set E, the candidate set of α, to ease the burden of computation and decrease the possibility of overfitting. For example, let E = {i/P | 1 ≤ i ≤ P} ∪ {ε} for some P ∈ N, where 0 < ε < 1/(PM) is a small number for tie breaking. This weighting strategy is called simple weighting (SW).
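A sketch of the two weighting strategies for one iteration is given below (our own code, assuming the per-example votes F_{m−1,n}, the new predictions h_m(x_n), and the cost functions c_n are available as arrays and callables). GW enumerates the breakpoints of the piecewise-constant objective in (8); SW scans a fixed grid:

```python
import numpy as np

def training_cost(alpha, F_prev, h_new, costs):
    """(1/N) * sum_n c_n(H_{m,n}(alpha)) with H_{m,n} = [F_{m-1,n} + alpha * h_m(x_n) > 0]."""
    total = 0.0
    for F, h, c in zip(F_prev, h_new, costs):
        H = ((F + alpha * h) > 0).astype(int)
        total += c(H)
    return total / len(costs)

def greedy_weight(F_prev, h_new, costs):
    """GW: candidate alphas from the zeros of F_{m-1,n}[l] + alpha * h_m(x_n)[l], l in S_m."""
    zeros = []
    for F, h in zip(F_prev, h_new):
        mask = h != 0                          # only labels inside S_m can change sign
        zeros.extend((-F[mask] / h[mask]).tolist())
    a = sorted(z for z in zeros if z > 0)
    if not a:
        return 1.0
    candidates = [a[0] / 2, a[-1] + 1] + [(a[i] + a[i + 1]) / 2 for i in range(len(a) - 1)]
    return min(candidates, key=lambda alpha: training_cost(alpha, F_prev, h_new, costs))

def simple_weight(F_prev, h_new, costs, P=10, eps=1e-3):
    """SW: pick the best alpha from a fixed grid {i/P} plus a small tie-breaking value."""
    candidates = [eps] + [i / P for i in range(1, P + 1)]
    return min(candidates, key=lambda alpha: training_cost(alpha, F_prev, h_new, costs))

# Toy usage with two examples, K = 2, and Hamming-like costs; prints the chosen weights.
F_prev = [np.array([0.5, -0.5]), np.array([-1.0, 1.0])]
h_new = [np.array([-1, 1]), np.array([1, -1])]
costs = [lambda H: float(np.sum(H != np.array([1, 1]))),
         lambda H: float(np.sum(H != np.array([0, 0])))]
print(greedy_weight(F_prev, h_new, costs), simple_weight(F_prev, h_new, costs))
```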

4.5. Analysis of Time Complexity

First, we analyze the training time complexity of PRAkEL without considering the weighting of the base classifiers. The trivial steps of Algorithm 1 to form the sub-problems are of time complexity at most O(N) multiplied by the time needed to calculate the reference label ỹ_n and the cost c_n. The more time-consuming step of PRAkEL, similar to RAkEL, depends on the time spent on the CSMCC base classifier, which is denoted as T′(N, d, K′) for N examples, d features, and K′ classes. The empirical results of PRAkEL in the next section demonstrate that it suffices to let each label appear in a fixed number of labelsets on average. That is, only M = O(K/k) iterations are needed, and hence, the practical training time of PRAkEL is T′(N, d, 2^k) · O(K/k), which is linear in K. In contrast, as discussed in Section 3, the training time of CFT (Li and Lin, 2014) is O(NK²) multiplied by the time needed to calculate the cost c_n, and summed with O(K) calls to the base classifier. The complexity analysis reveals the asymptotic efficiency of PRAkEL over CFT.

When considering the weighting, in each iteration, GW (which is generally more time consuming than SW) needs O(k) to determine the zeros of each F_{m,n}, and evaluating the goodness of all candidate α can be done within O(Nk), multiplied by the time needed to calculate c_n. That is, the running time of PRAkEL-GW with M = O(K/k) iterations needs an additional O(NK) multiplied by the time needed to calculate the cost c_n. The additional time of PRAkEL-GW is still asymptotically more efficient than the training time of CFT.

5. Experiment

5.1. Experimental Setup

The experiments were conducted on seven benchmark datasets (Tsoumakas et al., 2011).3 These datasets were chosen because of their diversity of domains and popularity in the multi-label research community. Their basic statistics are provided in Table 2, where N is the number of examples, d is the dimension of the input space, and K is the number of labels.

Dataset Domain N d K Cardinality Density

CAL500 music 502 68 174 26.044 0.150

emotions music 593 72 6 1.868 0.311

enron text 1702 1001 53 3.378 0.064

medical text 978 1449 45 1.245 0.028

scene image 2407 294 6 1.074 0.179

tmc2007 text 28596 500 22 2.220 0.101

yeast biology 2417 103 14 4.237 0.303

Table 2: Statistics of the datasets.

For statistical significance, all results reported in Section 5.2 were averaged over 30 independent runs. For each run, we randomly sampled 75% of the dataset for training and used the remaining data for testing. One third of the training set was reserved for validation.

We compared four variants of the proposed method, namely, PRAkELt, PRAkEL, PRAkEL-GW and PRAkEL-SW, with three types of methods: (a) labelset-related methods, including RAkEL (Tsoumakas and Vlahavas, 2007) and CS-RAkEL (Lo, 2013); (b) state-of-the-art CSMLC methods, including EPCC (Dembczynski et al., 2010, 2011, 2012) and CFT (Li and Lin, 2014); (c) a state-of-the-art cost-insensitive MLC method, ML-kNN (Zhang and Zhou, 2007). All hyper-parameters of all the compared methods and the base classifiers were selected by grid search on the validation set. For our method and the labelset-related methods, the parameter k was selected from {2, · · · , 9}, and for each k, the maximum M was fixed to 10K/k. The ensemble size of EPCC was selected from {1, · · · , 7} for efficiency, and on datasets with more than 20 labels, the Monte Carlo sampling technique was employed with a sample size of 200 (Dembczynski et al., 2012).

3. They were obtained from http://mulan.sourceforge.net/datasets-mlc.html.


For CFT, the number of internal iterations was selected from {2, · · · , 8}, as suggested by the original authors.4

For the base classifier of EPCC, we employed logistic regression implemented in LIBLINEAR (Fan et al., 2008). For the methods requiring a regular binary or multi-class classifier, we used linear one-versus-all support vector machines (SVMs) implemented in LIBLINEAR. Our method was coupled with linear RED-OSSVR (Tu and Lin, 2010).5 The regularization parameter in linear SVMs and RED-OSSVR was also selected by grid search on the validation set. The cost functions we considered in the experiments are all derived from loss functions, as explained in Section 2.

5.2. Results and Discussion

Dataset    PRAkELt          PRAkEL           PRAkEL-GW        PRAkEL-SW
CAL500     0.1370 ± 0.0004  0.1370 ± 0.0004  0.1379 ± 0.0004  0.1372 ± 0.0004
emotions   0.1951 ± 0.0026  0.1951 ± 0.0026  0.1974 ± 0.0024  0.1961 ± 0.0025
enron      0.0465 ± 0.0002  0.0465 ± 0.0002  0.0466 ± 0.0003  0.0465 ± 0.0003
medical    0.0103 ± 0.0002  0.0103 ± 0.0002  0.0103 ± 0.0002  0.0102 ± 0.0002
scene      0.0919 ± 0.0008  0.0919 ± 0.0008  0.0915 ± 0.0008  0.0915 ± 0.0008
tmc2007    0.0532 ± 0.0001  0.0532 ± 0.0001  0.0538 ± 0.0002  0.0533 ± 0.0001
yeast      0.1950 ± 0.0008  0.1950 ± 0.0008  0.1957 ± 0.0008  0.1955 ± 0.0009

Dataset    EPCC             CFT              RAkEL            ML-kNN
CAL500     0.1370 ± 0.0004  0.1371 ± 0.0004  0.1372 ± 0.0004  0.1466 ± 0.0004
emotions   0.1987 ± 0.0020  0.2012 ± 0.0025  0.2048 ± 0.0027  0.2032 ± 0.0026
enron      0.0461 ± 0.0002  0.0466 ± 0.0002  0.0466 ± 0.0002  0.0548 ± 0.0003
medical    0.0104 ± 0.0001  0.0105 ± 0.0002  0.0100 ± 0.0002  0.0157 ± 0.0002
scene      0.0923 ± 0.0009  0.0989 ± 0.0008  0.0919 ± 0.0008  0.0885 ± 0.0009
tmc2007    0.0568 ± 0.0001  0.0559 ± 0.0001  0.0546 ± 0.0001  0.0671 ± 0.0001
yeast      0.1990 ± 0.0008  0.1993 ± 0.0009  0.2160 ± 0.0012  0.1988 ± 0.0010

Table 3: Performance of each method in terms of Hamming loss (mean ± standard error).

Dataset    PRAkELt          PRAkEL           PRAkEL-GW        PRAkEL-SW
CAL500     0.2619 ± 0.0009  0.2619 ± 0.0009  0.2555 ± 0.0009  0.2579 ± 0.0009
emotions   0.2179 ± 0.0029  0.2179 ± 0.0029  0.2186 ± 0.0030  0.2182 ± 0.0031
enron      0.1424 ± 0.0010  0.1424 ± 0.0010  0.1424 ± 0.0010  0.1420 ± 0.0010
medical    0.0464 ± 0.0011  0.0464 ± 0.0011  0.0497 ± 0.0012  0.0465 ± 0.0012
scene      0.1285 ± 0.0016  0.1285 ± 0.0016  0.1258 ± 0.0015  0.1274 ± 0.0017
tmc2007    0.0856 ± 0.0002  0.0856 ± 0.0002  0.0844 ± 0.0002  0.0848 ± 0.0003
yeast      0.2290 ± 0.0014  0.2290 ± 0.0014  0.2291 ± 0.0014  0.2288 ± 0.0015

Dataset    EPCC             CFT              RAkEL            ML-kNN
CAL500     0.2501 ± 0.0007  0.2534 ± 0.0006  0.3902 ± 0.0018  0.3885 ± 0.0010
emotions   0.2121 ± 0.0030  0.2227 ± 0.0031  0.2460 ± 0.0036  0.2505 ± 0.0032
enron      0.1409 ± 0.0007  0.1415 ± 0.0008  0.2533 ± 0.0014  0.3054 ± 0.0016
medical    0.0395 ± 0.0011  0.0483 ± 0.0010  0.1067 ± 0.0019  0.2031 ± 0.0027
scene      0.1263 ± 0.0011  0.1411 ± 0.0014  0.1658 ± 0.0016  0.1651 ± 0.0019
tmc2007    0.0866 ± 0.0001  0.0844 ± 0.0002  0.1554 ± 0.0006  0.2054 ± 0.0006
yeast      0.2283 ± 0.0013  0.2322 ± 0.0013  0.2628 ± 0.0014  0.2498 ± 0.0017

Table 4: Performance of each method in terms of ranking loss (mean ± standard error).

4. Because of its efficiency issues, we restricted the maximum number of iterations to 4 on datasets with K > 20.

5. RED-OSSVR can be shown to be equivalent to one-versus-all SVMs for cost functions c(ŷ) = ⟦ŷ ≠ y⟧.


Dataset    PRAkELt          PRAkEL           PRAkEL-GW        PRAkEL-SW
CAL500     0.6498 ± 0.0015  0.5246 ± 0.0014  0.5217 ± 0.0016  0.5216 ± 0.0014
emotions   0.3347 ± 0.0046  0.3308 ± 0.0040  0.3326 ± 0.0044  0.3322 ± 0.0041
enron      0.4545 ± 0.0026  0.4143 ± 0.0028  0.4169 ± 0.0029  0.4138 ± 0.0030
medical    0.1969 ± 0.0029  0.1899 ± 0.0032  0.1921 ± 0.0037  0.1902 ± 0.0035
scene      0.2469 ± 0.0023  0.2467 ± 0.0023  0.2478 ± 0.0025  0.2466 ± 0.0025
tmc2007    0.2753 ± 0.0005  0.2671 ± 0.0005  0.2670 ± 0.0006  0.2661 ± 0.0005
yeast      0.3644 ± 0.0020  0.3455 ± 0.0022  0.3453 ± 0.0021  0.3449 ± 0.0021

Dataset    EPCC             CFT              RAkEL            ML-kNN
CAL500     0.5160 ± 0.0014  0.5248 ± 0.0013  0.6579 ± 0.0028  0.6545 ± 0.0020
emotions   0.3282 ± 0.0039  0.3330 ± 0.0044  0.3859 ± 0.0048  0.3938 ± 0.0058
enron      0.4064 ± 0.0017  0.3951 ± 0.0027  0.4648 ± 0.0028  0.5675 ± 0.0032
medical    0.2145 ± 0.0030  0.2082 ± 0.0033  0.2163 ± 0.0033  0.4028 ± 0.0052
scene      0.2481 ± 0.0024  0.2790 ± 0.0022  0.2789 ± 0.0027  0.2976 ± 0.0036
tmc2007    0.2788 ± 0.0004  0.2805 ± 0.0005  0.3020 ± 0.0009  0.3871 ± 0.0010
yeast      0.3468 ± 0.0020  0.3551 ± 0.0026  0.3936 ± 0.0025  0.3818 ± 0.0028

Table 5: Performance of each method in terms of F1 loss (mean ± standard error).

Tables 3, 4 and 5 present the results of the four variants of our method, EPCC, CFT, RAkEL and ML-kNN in terms of Hamming, ranking and F1 loss. The best results for each dataset are marked in bold.

Comparison of Variants of PRAkEL In this subsection, we draw a comparison between the four variants of the proposed method, namely, PRAkELt, PRAkEL, PRAkEL-GW and PRAkEL-SW. We first compare PRAkELt and PRAkEL to understand the difference between using the true and the predicted ones as the reference label vectors. Recall that PRAkELt and PRAkEL are theoretically equivalent under Hamming and ranking loss, and therefore, it is not a coincidence that the results of the first two variants in Tables 3 and 4 are exactly the same. Table 5 shows that on all the datasets PRAkEL has lower costs than PRAkELt in terms of F1 loss. We also present in Table 6 the results of the Student's t-test at a significance level of 0.05, on two pairs of variants. The comparison of PRAkEL and PRAkELt under F1 loss reveals that PRAkEL is significantly superior on five datasets. This demonstrates the benefits of exploiting previous predictions, and is also consistent with the theoretical results in Theorems 5 and 6. Thus, for the remaining experiments, the results of PRAkELt are not presented.

Loss function   PRAkEL vs. PRAkELt   PRAkEL-GW vs. PRAkEL   PRAkEL-SW vs. PRAkEL
Hamming loss    0/7/0                0/5/2                  1/6/0
Ranking loss    0/7/0                3/3/1                  3/4/0
F1 loss         5/2/0                1/5/1                  2/5/0
Total           5/16/0               4/13/4                 6/15/0

Table 6: Variants of PRAkEL versus other variants by the Student’s t-test at a significance level of 0.05 (superior/comparable/inferior).


Figure 1: Training costs of PRAkEL, PRAkEL-GW and PRAkEL-SW in terms of F1 loss with the standard errors.

Dataset    PRAkEL           PRAkEL-GW        PRAkEL-SW
CAL500     0.4947 ± 0.0027  0.4804 ± 0.0030  0.4866 ± 0.0029
emotions   0.2425 ± 0.0048  0.2327 ± 0.0045  0.2366 ± 0.0049
enron      0.2658 ± 0.0064  0.2493 ± 0.0072  0.2559 ± 0.0070
medical    0.0313 ± 0.0036  0.0210 ± 0.0026  0.0264 ± 0.0032
scene      0.1797 ± 0.0026  0.1747 ± 0.0024  0.1773 ± 0.0025
tmc2007    0.2170 ± 0.0008  0.2092 ± 0.0011  0.2122 ± 0.0009
yeast      0.3133 ± 0.0013  0.3050 ± 0.0015  0.3074 ± 0.0014

Table 7: Training costs of PRAkEL, PRAkEL-GW and PRAkEL-SW in terms of F1 loss (mean ± standard error).

Next, we compare the three weighting strategies, i.e., uniform, greedy and simple weighting. From Table 6, overall PRAkEL is competitive with PRAkEL-GW, although under ranking loss the performance of PRAkEL-GW is slightly better. In addition, from the last comparison we see that PRAkEL-SW is never outperformed by PRAkEL under these three loss functions. For Hamming loss, there is no significant difference between the performance of PRAkEL and PRAkEL-SW. For ranking loss and F1 loss, however, PRAkEL-SW performs slightly better than PRAkEL.

Since the last two variants greedily minimize the training costs in every iteration, it is expected that their training costs are much lower than PRAkEL's. Table 7 and Fig. 1, which show the training costs in terms of F1 loss, verify this deduction. Under other loss functions we also observe similar behavior. The reason is that, for PRAkEL-GW, the weights of the classifiers are determined from an optimization problem with no constraints, while for PRAkEL-SW, the weights are restricted to the candidate set. From a holistic point of view, the candidate set acts as a regularizer, which prevents PRAkEL-SW from excessively overfitting the training set. In conclusion, among the four variants of our method, PRAkEL-SW is the most stable.

Figure 2: Training and test costs of PRAkEL versus the number of iterations (M) on the yeast dataset: (a) Hamming loss; (b) ranking loss; (c) F1 loss.

Loss function   Significance by Nemenyi test
Hamming loss    None
Ranking loss    {PRAkEL-SW, EPCC} ≻ {RAkEL, ML-kNN}
F1 loss         {PRAkEL, PRAkEL-SW} ≻ {RAkEL, ML-kNN}, {PRAkEL-GW, EPCC} ≻ {ML-kNN}

Table 8: Significance indicated by the Nemenyi test at a significance level of 0.05 (≻ means significantly better than).

Finally, we demonstrate the effectiveness of the ensemble. Fig. 2 shows the training and test costs versus the number of iterations M on the yeast dataset. We can see that all the costs decrease as a function of M. The behavior of the costs on the other datasets is similar.

Comparison with State-of-the-art Methods We compare our method with EPCC, CFT, RAkEL and ML-kNN in terms of Hamming, ranking and F1 loss. Table 3 shows the performance of each method under Hamming loss. RAkEL and ML-kNN individually achieve the best performance on one dataset. On the other datasets, the method with the lowest cost is either PRAkEL or EPCC. Overall, all the methods perform fairly well under Hamming loss.

The results for the other two loss functions are shown in Tables 4 and 5. In terms of ranking loss, EPCC is the most stable method, which outperforms the others on five datasets, and the proposed method reaches the lowest cost on the remaining two datasets.

Under F1 loss, our method is superior to the others on half of the datasets, and EPCC has the best performance on two datasets. In addition, it can be seen that under these two loss functions, the two cost-insensitive methods, RAkEL and ML-kNN, are not comparable to any of the cost-sensitive methods. This observation also demonstrates the effectiveness of cost sensitivity.

To compare all the classifiers over multiple datasets, we conducted the Friedman test with the corresponding Nemenyi post-hoc test (Demšar, 2006). For all the three loss functions, the p-values of the Friedman test were 6.6 × 10^{−3}, 3.6 × 10^{−5} and 8.7 × 10^{−6}, respectively. Therefore, the null hypothesis was rejected at α = 0.05, and the post-hoc test was performed
