
Generalized Bradley-Terry Models and Multi-class Probability Estimates

Tzu-Kuo Huang r93002@csie.ntu.edu.tw

Department of Computer Science, National Taiwan University Taipei 106, Taiwan

Ruby C. Weng chweng@nccu.edu.tw

Department of Statistics, National Chengchi University Taipei 116, Taiwan

Chih-Jen Lin cjlin@csie.ntu.edu.tw

Department of Computer Science, National Taiwan University Taipei 106, Taiwan

Editor: Greg Ridgeway

Abstract

The Bradley-Terry model for obtaining individual skill from paired comparisons has been popular in many areas. In machine learning, this model is related to multi-class probability estimates by coupling all pairwise classification results. Error correcting output codes (ECOC) are a general framework to decompose a multi-class problem into several binary problems. To obtain probability estimates under this framework, this paper introduces a generalized Bradley-Terry model in which paired individual comparisons are extended to paired team comparisons. We propose a simple algorithm with convergence proofs to solve the model and obtain individual skill. Experiments on synthetic and real data demonstrate that the algorithm is useful for obtaining multi-class probability estimates. Moreover, we discuss four extensions of the proposed model: 1) weighted individual skill, 2) home-field advantage, 3) ties, and 4) comparisons with more than two teams.

Keywords: Bradley-Terry model, Probability estimates, Error correcting output codes, Support Vector Machines

1. Introduction

The Bradley-Terry model (Bradley and Terry, 1952) for paired comparisons has been broadly applied in many areas such as statistics, sports, and machine learning. It considers a set of k individuals for which

$$P(\text{individual } i \text{ beats individual } j) = \frac{p_i}{p_i + p_j}, \qquad (1)$$

and $p_i > 0$ is the overall skill of individual $i$. Suppose that the outcomes of all comparisons are independent and denote $r_{ij}$ as the number of times that $i$ beats $j$. Then the negative log-likelihood takes the form

$$l(\mathbf{p}) = -\sum_{i<j} \left( r_{ij}\log\frac{p_i}{p_i+p_j} + r_{ji}\log\frac{p_j}{p_i+p_j} \right). \qquad (2)$$

Since $l(\mathbf{p}) = l(\alpha\mathbf{p})$ for any $\alpha > 0$, $l(\mathbf{p})$ is scale invariant. Therefore, it is convenient to assume that $\sum_{i=1}^{k} p_i = 1$ for the sake of identifiability. One can then estimate $p_i$ by

$$\min_{\mathbf{p}} \; l(\mathbf{p}) \quad \text{subject to} \quad 0 \le p_j,\; j = 1, \dots, k, \quad \sum_{j=1}^{k} p_j = 1. \qquad (3)$$

This approach dates back to (Zermelo, 1929) and has been extended to more general settings.

For instance, in sports scenarios, extensions to account for the home-field advantage and ties have been proposed. Some reviews are, for example, (David, 1988; Davidson and Farquhar, 1976; Hunter, 2004; Simons and Yao, 1999). Problem (3) can be solved by a simple iterative procedure:

Algorithm 1

1. Start with any initial $p_j^0 > 0$, $j = 1, \dots, k$.

2. Repeat ($t = 0, 1, \dots$)

(a) Let $s = (t \bmod k) + 1$. Define

$$\mathbf{p}^{t+1} \equiv \Bigl[ p_1^t, \dots, p_{s-1}^t,\; \frac{\sum_{i: i \neq s} r_{si}}{\sum_{i: i \neq s} \frac{r_{si}+r_{is}}{p_s^t+p_i^t}},\; p_{s+1}^t, \dots, p_k^t \Bigr]^T. \qquad (4)$$

(b) Normalize $\mathbf{p}^{t+1}$.

until $\partial l(\mathbf{p}^t)/\partial p_j = 0$, $j = 1, \dots, k$, are satisfied.

Algorithm 1 is simple enough that no sophisticated optimization techniques are needed. If the $r_{ij}$ satisfy some mild conditions, Algorithm 1 globally converges to the unique minimum of (3). A systematic study of the convergence of Algorithm 1 is in (Hunter, 2004).
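As an illustration, the following is a minimal Python sketch of Algorithm 1 (a reading of update (4), not the authors' implementation). It assumes a count matrix `R` with `R[i, j]` the number of times individual `i` beats individual `j`, and stops when the gradient of (2) is close to zero.

```python
import numpy as np

def bradley_terry(R, max_iter=10000, tol=1e-6):
    """Fit the Bradley-Terry model (3) by the cyclic update (4).

    R[i, j] is the number of times individual i beats individual j.
    A rough sketch, not the authors' implementation.
    """
    k = R.shape[0]
    p = np.full(k, 1.0 / k)                    # step 1: any initial p > 0

    def grad(p):
        # partial derivatives of l(p) for the pairwise model
        g = np.empty(k)
        for s in range(k):
            i = np.arange(k) != s
            g[s] = -R[s, i].sum() / p[s] + ((R[s, i] + R[i, s]) / (p[s] + p[i])).sum()
        return g

    for t in range(max_iter):
        s = t % k                              # step 2(a): cyclic choice of s
        i = np.arange(k) != s
        p[s] = R[s, i].sum() / ((R[s, i] + R[i, s]) / (p[s] + p[i])).sum()  # update (4)
        p /= p.sum()                           # step 2(b): normalize
        if np.abs(grad(p)).max() < tol:        # approximate stopping condition
            break
    return p

# toy example: three players, player 0 clearly strongest
R = np.array([[0, 8, 9], [2, 0, 6], [1, 4, 0]], dtype=float)
print(bradley_terry(R))
```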

An earlier work (Hastie and Tibshirani, 1998) in statistics and machine learning considered the problem of obtaining multi-class probability estimates by coupling results from pairwise comparisons. Assume

$$\bar{r}_{ij} \equiv P(x \text{ in class } i \mid x \text{ in class } i \text{ or } j)$$

is known. This work estimates $p_i = P(x \text{ in class } i)$ by minimizing the (weighted) Kullback-Leibler (KL) distance between $\bar{r}_{ij}$ and $\mu_{ij} \equiv p_i/(p_i + p_j)$:

$$\begin{aligned} \min_{\mathbf{p}} \quad & \sum_{i<j} n_{ij} \left( \bar{r}_{ij}\log\frac{\bar{r}_{ij}}{\mu_{ij}} + \bar{r}_{ji}\log\frac{\bar{r}_{ji}}{\mu_{ji}} \right) \\ \text{subject to} \quad & 0 \le p_j,\; j = 1, \dots, k, \quad \sum_{j=1}^{k} p_j = 1, \end{aligned} \qquad (5)$$

where $n_{ij}$ is the number of training data in class $i$ or $j$. By defining $r_{ij} \equiv n_{ij}\bar{r}_{ij}$ and removing constant terms, (5) reduces to the same form as (2), and hence Algorithm 1 can be used to find $\mathbf{p}$. Although one might interpret this as a Bradley-Terry model by treating classes as individuals and $r_{ij}$ as the number of times class $i$ beats class $j$, it is not a true Bradley-Terry model: first, $r_{ij}$ (now defined as $n_{ij}\bar{r}_{ij}$) may not be an integer; second, the $r_{ij}$ are dependent because they share the same training set. However, the closeness between the two motivates us to propose more general models in this paper.

The above approach of comparing each pair of classes is referred to as the “one-against-one” setting in multi-class classification. It is a special case of the error correcting output codes (ECOC) framework, which decomposes a multi-class problem into a number of binary problems (Dietterich and Bakiri, 1995; Allwein et al., 2001). Some classification techniques are two-class based, so this framework extends them to multi-class scenarios.

Zadrozny (2002) generalized the results in (Hastie and Tibshirani, 1998) to obtain probability estimates under ECOC settings. The author proposed an algorithm analogous to Algorithm 1 and demonstrated some experimental results, but the convergence issue was not discussed. Though the author intended to minimize the KL distance as Hastie and Tibshirani (1998) did, in Section 4.2 we show that her algorithm may not converge to a point with the smallest KL distance.

Motivated by multi-class classification under ECOC settings, this paper presents a generalized Bradley-Terry model in which each comparison is between two teams (two disjoint subsets of subjects), and team size and membership can vary from comparison to comparison. From the outcomes of all comparisons, we fit this general model to estimate individual skill. We propose a simple iterative method to solve the generalized model, and its convergence is proved under mild conditions.

The proposed model has some potential applications. For example, in tennis or badminton, if a player participates in many singles and doubles matches, the general model can combine all outcomes to estimate the skill of every individual. More importantly, for multi-class problems solved by combining binary classification results, minimizing the KL distance leads to the same optimization problem, so the proposed iterative method can be applied directly to obtain probability estimates under ECOC settings.

This paper is organized as follows. Section 2 introduces a generalized Bradley-Terry model and a simple algorithm to maximize the log-likelihood. The convergence of the proposed algorithm is studied in Section 3. Section 4 discusses multi-class probability estimates, and experiments are in Sections 5 and 6. In Section 7 we discuss four extensions of the proposed model: 1) weighted individual skill, 2) home-field advantage, 3) ties, and 4) comparisons with more than two teams. Discussion and conclusions are in Section 8. A short, preliminary version of this paper appeared at NIPS 2004 (Huang et al., 2005).¹

2. Generalized Bradley-Terry Model

In this section we study a generalized Bradley-Terry model for estimating individual skill. Consider a group of $k$ individuals $\{1, \dots, k\}$. Each time, two disjoint subsets $I_i^+$ and $I_i^-$ form teams for a series of games, and $r_i \ge 0$ ($r_i' \ge 0$) is the number of times that $I_i^+$ beats $I_i^-$ ($I_i^-$ beats $I_i^+$). Thus, we have $I_i \subset \{1, \dots, k\}$, $i = 1, \dots, m$, such that $I_i = I_i^+ \cup I_i^-$, $I_i^+ \neq \emptyset$, $I_i^- \neq \emptyset$, and $I_i^+ \cap I_i^- = \emptyset$.

1. Programs used are at http://www.csie.ntu.edu.tw/cjlin/libsvmtools/libsvm-errorcode.

If the game is designed so that each member is equally important, we can assume that a team’s skill is the sum of all its members’. This leads to the following model:

$$P(I_i^+ \text{ beats } I_i^-) = \frac{\sum_{j \in I_i^+} p_j}{\sum_{j \in I_i} p_j}.$$

If the outcomes of all comparisons are independent, then estimated individual skill can be obtained by defining

$$q_i \equiv \sum_{j \in I_i} p_j, \qquad q_i^+ \equiv \sum_{j \in I_i^+} p_j, \qquad q_i^- \equiv \sum_{j \in I_i^-} p_j,$$

and minimizing the negative log-likelihood

$$\begin{aligned} \min_{\mathbf{p}} \quad & l(\mathbf{p}) = -\sum_{i=1}^{m} \Bigl( r_i \log(q_i^+/q_i) + r_i' \log(q_i^-/q_i) \Bigr) \\ \text{subject to} \quad & \sum_{j=1}^{k} p_j = 1, \quad 0 \le p_j,\; j = 1, \dots, k. \end{aligned} \qquad (6)$$

Note that (6) reduces to (3) in the pairwise approach, where $m = k(k-1)/2$ and $I_i$, $i = 1, \dots, m$, are as follows:

| $I_i^+$ | $I_i^-$ | $r_i$ | $r_i'$ |
|---|---|---|---|
| $\{1\}$ | $\{2\}$ | $r_{12}$ | $r_{21}$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| $\{1\}$ | $\{k\}$ | $r_{1k}$ | $r_{k1}$ |
| $\{2\}$ | $\{3\}$ | $r_{23}$ | $r_{32}$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| $\{k-1\}$ | $\{k\}$ | $r_{k-1,k}$ | $r_{k,k-1}$ |

In the rest of this section we discuss how to solve the optimization problem (6).

2.1 A Simple Procedure to Maximize the Likelihood

The difficulty of (6) compared with (3) is that $l(\mathbf{p})$ is now expressed in terms of $q_i^+$, $q_i^-$, and $q_i$, while the actual variable is $\mathbf{p}$. We propose the following algorithm to solve (6).


Algorithm 2

1. Start with initial $p_j^0 > 0$, $j = 1, \dots, k$, and obtain the corresponding $q_i^{0,+}, q_i^{0,-}, q_i^0$, $i = 1, \dots, m$.

2. Repeat ($t = 0, 1, \dots$)

(a) Let $s = (t \bmod k) + 1$. Define $\mathbf{p}^{t+1}$ by $p_j^{t+1} = p_j^t$, $\forall j \neq s$, and

$$p_s^{t+1} = \frac{\sum_{i: s \in I_i^+} \frac{r_i}{q_i^{t,+}} + \sum_{i: s \in I_i^-} \frac{r_i'}{q_i^{t,-}}}{\sum_{i: s \in I_i} \frac{r_i + r_i'}{q_i^t}}\, p_s^t. \qquad (7)$$

(b) Normalize $\mathbf{p}^{t+1}$.

(c) Update $q_i^{t,+}, q_i^{t,-}, q_i^t$ to $q_i^{t+1,+}, q_i^{t+1,-}, q_i^{t+1}$, $i = 1, \dots, m$.

until $\partial l(\mathbf{p}^t)/\partial p_j = 0$, $j = 1, \dots, k$, are satisfied.

The gradient of l(p), used in the stopping criterion, is:

$$\begin{aligned} \frac{\partial l(\mathbf{p})}{\partial p_s} &= -\sum_{i=1}^{m} \Bigl( r_i \frac{\partial \log q_i^+}{\partial p_s} + r_i' \frac{\partial \log q_i^-}{\partial p_s} - (r_i + r_i') \frac{\partial \log q_i}{\partial p_s} \Bigr) \\ &= -\sum_{i: s \in I_i^+} \frac{r_i}{q_i^+} - \sum_{i: s \in I_i^-} \frac{r_i'}{q_i^-} + \sum_{i: s \in I_i} \frac{r_i + r_i'}{q_i}, \qquad s = 1, \dots, k. \end{aligned} \qquad (8)$$

In Algorithm 2, for the multiplicative factor in (7) to be well defined (i.e., to have a non-zero denominator), we need Assumption 1, which will be discussed in Section 3. Eq. (7) is a simple fixed-point type update: in each iteration, only one component ($p_s^t$) is modified while the others remain the same. Applied to the pairwise model, the updating rule (7) becomes

$$p_s^{t+1} = \frac{\sum_{i: s<i} \frac{r_{si}}{p_s^t} + \sum_{i: i<s} \frac{r_{si}}{p_s^t}}{\sum_{i: s<i} \frac{r_{si}+r_{is}}{p_s^t+p_i^t} + \sum_{i: i<s} \frac{r_{is}+r_{si}}{p_s^t+p_i^t}}\, p_s^t = \frac{\sum_{i: i\neq s} r_{si}}{\sum_{i: i\neq s} \frac{r_{si}+r_{is}}{p_s^t+p_i^t}},$$

which reduces to (4).
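For concreteness, here is a minimal Python sketch of Algorithm 2 under a list-based game representation (the names `Iplus`, `Iminus`, `r`, `rp` are illustrative assumptions, not from the paper). It implements update (7) and uses the ratio-based stopping test described later in Section 5.

```python
import numpy as np

def generalized_bt(Iplus, Iminus, r, rp, k, max_iter=10000, tol=1e-3):
    """Solve (6) by the cyclic fixed-point update (7).

    Iplus[i], Iminus[i]: disjoint lists of individuals forming the two teams
    of game i; r[i] (rp[i]) counts wins of Iplus[i] over Iminus[i] (and the
    reverse).  A rough sketch, not the authors' implementation.
    """
    p = np.full(k, 1.0 / k)

    def factor(s, p):
        # numerator / denominator of the multiplicative factor in (7) for index s
        num = den = 0.0
        for Ip, Im, ri, rpi in zip(Iplus, Iminus, r, rp):
            qp, qm = p[Ip].sum(), p[Im].sum()
            q = qp + qm
            if s in Ip:
                num += ri / qp
                den += (ri + rpi) / q
            elif s in Im:
                num += rpi / qm
                den += (ri + rpi) / q
        return num / den

    for t in range(max_iter):
        s = t % k
        p[s] *= factor(s, p)                   # update (7)
        p /= p.sum()                           # normalize
        # stopping test of Section 5: all multiplicative factors close to 1
        if max(abs(factor(j, p) - 1.0) for j in range(k)) < tol:
            break
    return p
```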

The updating rule (7) is motivated by using a descent direction to strictly decrease $l(\mathbf{p})$: if $\partial l(\mathbf{p}^t)/\partial p_s \neq 0$ and $p_s^t > 0$, then under suitable assumptions on $r_i, r_i'$,

$$\begin{aligned} \frac{\partial l(\mathbf{p}^t)}{\partial p_s}\,(p_s^{t+1} - p_s^t) &= \frac{\partial l(\mathbf{p}^t)}{\partial p_s} \left( \frac{\sum_{i:s\in I_i^+} \frac{r_i}{q_i^{t,+}} + \sum_{i:s\in I_i^-} \frac{r_i'}{q_i^{t,-}} - \sum_{i:s\in I_i} \frac{r_i+r_i'}{q_i^t}}{\sum_{i:s\in I_i} \frac{r_i+r_i'}{q_i^t}} \right) p_s^t \\ &= -\left( \frac{\partial l(\mathbf{p}^t)}{\partial p_s} \right)^2 p_s^t \Big/ \sum_{i:s\in I_i} \frac{r_i+r_i'}{q_i^t} \;<\; 0. \end{aligned} \qquad (9)$$

Thus, $p_s^{t+1} - p_s^t$ is a descent direction in optimization terminology, since a sufficiently small step along this direction guarantees a strict decrease of the function value. As we take the whole step without searching for a step size, more effort is needed to prove the strict decrease in the following Theorem 1. However, (9) does hint that (7) is a reasonable update.

Theorem 1 Let $s$ be the index to be updated at $\mathbf{p}^t$. If

1. $p_s^t > 0$,
2. $\partial l(\mathbf{p}^t)/\partial p_s \neq 0$, and
3. $\sum_{i: s \in I_i} (r_i + r_i') > 0$,

then $l(\mathbf{p}^{t+1}) < l(\mathbf{p}^t)$.

The proof is in Appendix A. Note that $\sum_{i: s \in I_i} (r_i + r_i') > 0$ is a reasonable assumption: it means that individual $s$ participates in at least one game.

2.2 Other Methods to Maximize the Likelihood

We briefly discuss other methods to solve (6). For the original Bradley-Terry model, Hunter (2004) discussed how to transform (3) to a logistic regression form: under certain assumptions,² the optimal $p_i > 0$, $\forall i$. Using this property and the constraints $p_j \ge 0$, $\sum_{j=1}^{k} p_j = 1$ of (3), we can reparameterize the function (2) by

$$p_s = \frac{e^{\beta_s}}{\sum_{j=1}^{k} e^{\beta_j}}, \qquad (10)$$

and obtain

$$-\sum_{i<j} \left( r_{ij} \log\frac{1}{1+e^{\beta_j-\beta_i}} + r_{ji} \log\frac{e^{\beta_j-\beta_i}}{1+e^{\beta_j-\beta_i}} \right). \qquad (11)$$

This is the negative log-likelihood of a logistic regression model. Hence, methods such as iterative weighted least squares (IWLS) (McCullagh and Nelder, 1990) can be used to fit the model. In addition, $\boldsymbol{\beta}$ is now unrestricted, so (3) is transformed to an unconstrained optimization problem, and conventional optimization techniques such as Newton or quasi-Newton methods can also be applied.

Now for the generalized model, (6) can still be re-parameterized as an unconstrained problem with the variable β. However, the negative log-likelihood

$$-\sum_{i=1}^{m} \left( r_i \log\frac{\sum_{j\in I_i^+} e^{\beta_j}}{\sum_{j\in I_i} e^{\beta_j}} + r_i' \log\frac{\sum_{j\in I_i^-} e^{\beta_j}}{\sum_{j\in I_i} e^{\beta_j}} \right) \qquad (12)$$

is not of a form similar to (11), so methods for logistic regression may not be applicable. Of course, Newton or quasi-Newton methods are still applicable, but their implementations are not simpler than Algorithm 2.

2. They will be described in the next section.


3. Convergence of Algorithm 2

Though Theorem 1 has shown the strict decrease of $l(\mathbf{p})$, we must further prove that Algorithm 2 converges to a stationary point of (6); then, if $l(\mathbf{p})$ is convex, a global optimum is obtained. A vector $\mathbf{p}$ is a stationary (Karush-Kuhn-Tucker) point of (6) if and only if there are a scalar $\delta$ and two nonnegative vectors $\boldsymbol{\lambda}$ and $\boldsymbol{\xi}$ such that

$$\nabla l(\mathbf{p})_j = \delta + \lambda_j - \xi_j, \qquad \lambda_j p_j = 0, \quad \xi_j (1 - p_j) = 0, \quad j = 1, \dots, k.$$

In the following we will prove that under certain conditions Algorithm 2 converges to a point satisfying

$$0 < p_j < 1, \quad \nabla l(\mathbf{p})_j = 0, \quad j = 1, \dots, k. \qquad (13)$$

That is, $\delta = \lambda_j = \xi_j = 0$, $\forall j$. Problem (6) is quite special: through the convergence proof of Algorithm 2 we show that its optimality condition reduces to (13), the condition obtained without considering the constraints. Furthermore, an interesting side result is that from $\sum_{j=1}^{k} p_j = 1$ and (13), we obtain a point in $\mathbb{R}^k$ satisfying $(k+1)$ equations.

If Algorithm 2 stops in a finite number of iterations, then $\partial l(\mathbf{p})/\partial p_j = 0$, $j = 1, \dots, k$, which means a stationary point of (6) is already obtained. Thus, we only need to handle the case where $\{\mathbf{p}^t\}$ is an infinite sequence. As $\{\mathbf{p}^t\}_{t=0}^{\infty}$ lies in the compact set

$$\{\mathbf{p} \mid 0 \le p_s \le 1,\; \sum_{j=1}^{k} p_j = 1\},$$

there is at least one convergent subsequence. Assume that $\{\mathbf{p}^t\}$, $t \in K$, is any such subsequence and that it converges to $\mathbf{p}^*$. In the following we will show that $\partial l(\mathbf{p}^*)/\partial p_j = 0$, $j = 1, \dots, k$.

To prove the convergence of a fixed-point type algorithm (i.e., Lyapunov’s theorem), we require $p_s^* > 0$, $\forall s$. Then if $\partial l(\mathbf{p}^*)/\partial p_s \neq 0$ (i.e., $\mathbf{p}^*$ is not optimal), we can use (7) to find $\mathbf{p}^{*+1} \neq \mathbf{p}^*$ and, as a result of Theorem 1, $l(\mathbf{p}^{*+1}) < l(\mathbf{p}^*)$. This property further leads to a contradiction. To have $p_s^* > 0$, $\forall s$, for the original Bradley-Terry model, Ford (1957) and Hunter (2004) assume that for any pair of individuals $s$ and $j$, there is a “path” from $s$ to $j$; that is, $r_{s,s_1} > 0, r_{s_1,s_2} > 0, \dots, r_{s_t,j} > 0$. The idea behind this assumption is simple: since $\sum_{r=1}^{k} p_r^* = 1$, there is at least one $p_j^* > 0$. If in certain games $s$ beats $s_1$, $s_1$ beats $s_2$, $\dots$, and $s_t$ beats $j$, then $p_s^*$, the skill of individual $s$, should not be as bad as zero. For the generalized model, we make a similar assumption:

Assumption 1 For any two different individuals $s$ and $j$, there are $I_{s_0}, I_{s_1}, \dots, I_{s_t}$ such that either

1. $r_{s_0} > 0, r_{s_1} > 0, \dots, r_{s_t} > 0$,
2. $I_{s_0}^+ = \{s\}$; $I_{s_r}^+ \subset I_{s_{r-1}}^-$, $r = 1, \dots, t$; $j \in I_{s_t}^-$, or

1. $r_{s_0}' > 0, r_{s_1}' > 0, \dots, r_{s_t}' > 0$,
2. $I_{s_0}^- = \{s\}$; $I_{s_r}^- \subset I_{s_{r-1}}^+$, $r = 1, \dots, t$; $j \in I_{s_t}^+$.

The idea is that if $p_j^* > 0$, and $s$ beats $I_{s_0}^-$, a subset of $I_{s_0}^-$ beats $I_{s_1}^-$, a subset of $I_{s_1}^-$ beats $I_{s_2}^-$, $\dots$, and a subset of $I_{s_{t-1}}^-$ beats $I_{s_t}^-$, which includes $j$, then $p_s^*$ should not be zero. How exactly this assumption is used is shown in Appendix B in the proof of Lemma 2.

Assumption 1 is weaker than the one made earlier in (Huang et al., 2005). However, even with the above explanation, this assumption may seem very strong. Whether or not the generalized model satisfies Assumption 1, an easy way to fulfill it is to add the term

$$-\mu \sum_{s=1}^{k} \log\left( \frac{p_s}{\sum_{j=1}^{k} p_j} \right) \qquad (14)$$

to $l(\mathbf{p})$, where $\mu$ is a small positive number. That is, for each $s$, we add a game with $I_i = \{1, \dots, k\}$, $I_i^+ = \{s\}$, $r_i = \mu$, and $r_i' = 0$. As $\sum_{j=1}^{k} p_j = 1$ is one of the constraints, (14) reduces to $-\mu \sum_{s=1}^{k} \log p_s$, the barrier term commonly used in optimization to ensure that $p_s$ does not approach zero.
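Assuming the hypothetical list-based game representation (`Iplus`, `Iminus`, `r`, `rp`) from the sketch in Section 2.1, this construction amounts to appending one extra "game" per individual:

```python
# Barrier term (14): for each s, add a game with I_i = {1, ..., k},
# I_i^+ = {s}, r_i = mu, r_i' = 0.  Illustrative helper only.
def add_barrier(Iplus, Iminus, r, rp, k, mu=1e-3):
    for s in range(k):
        Iplus.append([s])
        Iminus.append([j for j in range(k) if j != s])
        r.append(mu)
        rp.append(0.0)
    return Iplus, Iminus, r, rp
```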

An issue left open in Section 2 is whether the multiplicative factor in (7) is well defined. With Assumption 1 and initial $p_j^0 > 0$, $j = 1, \dots, k$, one can show by induction that $p_j^t > 0$, $\forall t$, and hence the denominator of (7) is never zero: if $p_j^t > 0$, Assumption 1 implies that there is some $i$ such that $I_i^+ = \{j\}$ or $I_i^- = \{j\}$. Then either $\sum_{i: j \in I_i^+} r_i/q_i^{t,+}$ or $\sum_{i: j \in I_i^-} r_i'/q_i^{t,-}$ is positive. Thus, both the numerator and the denominator of the multiplicative factor are positive, and so is $p_j^{t+1}$.

The result $p_s^* > 0$ is proved in the following lemma.

Lemma 2 If Assumption 1 holds, then $p_s^* > 0$, $s = 1, \dots, k$.

The proof is in Appendix B.

As the convergence proof will use the strict-decrease result, we note that Assumption 1 implies the condition $\sum_{i: s \in I_i} (r_i + r_i') > 0$, $\forall s$, required by Theorem 1. Finally, the convergence is established:

Theorem 3 Under Assumption 1, any convergent point of Algorithm 2 is a stationary point of (6).

The proof is in Appendix C. Though $r_i$ in the Bradley-Terry model is an integer indicating the number of times that team $I_i^+$ beats $I_i^-$, the convergence proof does not use this property. Hence, for the multi-class probability estimates discussed later, where $r_i$ is a real number, the convergence result still holds.

Note that a stationary point may be only a saddle point. If (6) is a convex programming problem, then a stationary point is a global minimum. Unfortunately, l(p) may not be convex, so it is not clear whether Algorithm 2 converges to a global minimum or not. The following theorem states that in some cases including the original Bradley-Terry model, any convergent point is a global minimum, and hence a maximum likelihood estimator:

Theorem 4 Under Assumption 1, if

1. $|I_i^+| = |I_i^-| = 1$, $i = 1, \dots, m$, or
2. $|I_i| = k$, $i = 1, \dots, m$,

then (6) has a unique global minimum and Algorithm 2 globally converges to it.

The proof is in Appendix D. The first case corresponds to the original Bradley-Terry model. Later we will show that under Assumption 1, the second case is related to “one-against-the rest” for multi-class probability estimates. Thus, though the theorem seems rather restrictive, it corresponds to useful situations.

4. Multi-class Probability Estimates

A classification problem is to train a model from data with known class labels and then predict labels of new data. Many classification methods are two-class based approaches and there are different ways to extend them for multi-class cases. Most existing studies focus on predicting class labels but not probability estimates. In this section, we discuss how the generalized Bradley-Terry model can be applied to multi-class probability estimates.

As mentioned in Section 1, there are various ways to decompose a multi-class problem into a number of binary classification problems. Among them, the most commonly used are “one-against-one” and “one-against-the rest.” Recently Allwein et al. (2001) proposed a more general framework for the decomposition. Their idea, extended from that of Dietterich and Bakiri (1995), is to associate each class with a row of a $k \times m$ “coding matrix” with all entries from $\{-1, 0, +1\}$. Here $m$ is the number of binary classification problems to be constructed. Each column of the matrix represents a comparison between the classes with “$-1$” and “$+1$,” ignoring the classes with “0.” Note that the classes with “$-1$” and “$+1$” correspond to our $I_i^-$ and $I_i^+$, respectively. Then the binary learning method is run for each column of the matrix to obtain $m$ binary decision rules. For a given example, one predicts the class label to be $j$ if the results of the $m$ binary decision rules are “closest” to the labels of row $j$ in the coding matrix. Since this coding method can correct errors made by some individual decision rules, it is referred to as error correcting output codes (ECOC). Clearly the commonly used “one-against-one” and “one-against-the rest” settings are special cases of this framework.

Given $n_i$, the number of training data with classes in $I_i = I_i^+ \cup I_i^-$, we assume here that for any given data $x$,

$$\bar{r}_i = P(x \text{ in classes of } I_i^+ \mid x \text{ in classes of } I_i) \qquad (15)$$

is available, and the task is to estimate $P(x \text{ in class } s)$, $s = 1, \dots, k$. Similar to (Hastie and Tibshirani, 1998), we minimize the (weighted) KL distance between $\bar{r}_i$ and $q_i^+/q_i$:

$$\min_{\mathbf{p}} \; \sum_{i=1}^{m} n_i \left( \bar{r}_i \log\frac{\bar{r}_i}{q_i^+/q_i} + (1 - \bar{r}_i) \log\frac{1 - \bar{r}_i}{q_i^-/q_i} \right). \qquad (16)$$

By defining

$$r_i \equiv n_i \bar{r}_i \quad \text{and} \quad r_i' \equiv n_i(1 - \bar{r}_i), \qquad (17)$$

and removing constant terms, (16) reduces to (6), the negative log-likelihood of the generalized Bradley-Terry model. As explained in Section 1, one cannot directly interpret this setting as a generalized Bradley-Terry model; instead, we minimize the KL distance and obtain the same optimization problem.

We show in Section 5 that many practical “error correcting codes” have the same $|I_i|$, i.e., each binary problem involves the same number of classes. Thus, if the data is balanced (all classes have about the same number of instances), then $n_1 \approx \cdots \approx n_m$ and we can remove $n_i$ in (16) without affecting the minimization of $l(\mathbf{p})$. As a result, $r_i = \bar{r}_i$ and $r_i' = 1 - \bar{r}_i$.

In the rest of this section we discuss the case of “one-against-the rest” in detail and the earlier result in (Zadrozny, 2002).

4.1 Properties of the “One-against-the rest” Approach

For this approach, $m = k$ and $I_i$, $i = 1, \dots, m$, are

| $I_i^+$ | $I_i^-$ | $r_i$ | $r_i'$ |
|---|---|---|---|
| $\{1\}$ | $\{2, \dots, k\}$ | $r_1$ | $1 - r_1$ |
| $\{2\}$ | $\{1, 3, \dots, k\}$ | $r_2$ | $1 - r_2$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| $\{k\}$ | $\{1, \dots, k-1\}$ | $r_k$ | $1 - r_k$ |

Clearly, $|I_i| = k$, $\forall i$, so every game involves all classes. Then $n_1 = \cdots = n_m =$ the total number of training data, and the solution of (16) is not affected by $n_i$. This and (17) suggest that we can solve the problem by simply taking $n_i = 1$ and $r_i + r_i' = 1$, $\forall i$. Thus, (8) simplifies to

$$\frac{\partial l(\mathbf{p})}{\partial p_s} = -\frac{r_s}{p_s} - \sum_{j: j \neq s} \frac{1 - r_j}{1 - p_j} + k.$$

Setting $\partial l(\mathbf{p})/\partial p_s = 0$, $\forall s$, we have

$$\frac{r_s}{p_s} - \frac{1 - r_s}{1 - p_s} = k - \sum_{j=1}^{k} \frac{1 - r_j}{1 - p_j}. \qquad (18)$$

Since the right-hand side of (18) is the same for all $s$, we can denote it by $\delta$. If $\delta = 0$, then $p_i = r_i$, which can happen only if $\sum_{i=1}^{k} r_i = 1$. If $\delta \neq 0$, (18) implies

$$p_s = \frac{(1+\delta) - \sqrt{(1+\delta)^2 - 4 r_s \delta}}{2\delta}. \qquad (19)$$

In Appendix E we show that $p_s$ defined in (19) satisfies $0 \le p_s \le 1$. Note that $\bigl((1+\delta) + \sqrt{(1+\delta)^2 - 4 r_s \delta}\bigr)/(2\delta)$ also satisfies (18), but it is negative when $\delta < 0$ and greater than 1 when $\delta > 0$. The solution procedure is then as follows:

If $\sum_{i=1}^{k} r_i = 1$, the optimal $\mathbf{p} = [r_1, \dots, r_k]^T$. Otherwise, find the root $\delta$ of

$$\sum_{s=1}^{k} \frac{(1+\delta) - \sqrt{(1+\delta)^2 - 4 r_s \delta}}{2\delta} - 1 = 0$$

and obtain the optimal $p_s$ from (19).

If $\sum_{i=1}^{k} r_i = 1$, then $\mathbf{p} = [r_1, \dots, r_k]^T$ satisfies $\partial l(\mathbf{p})/\partial p_s = 0$, $\forall s$, and thus is the unique optimal solution in light of Theorem 4. For the other case, Appendix E proves that the above equation in $\delta$ has a unique root. Therefore, instead of using Algorithm 2, one can simply solve a one-variable nonlinear equation to obtain the optimal $\mathbf{p}$. This “one-against-the rest” setting is special in that we can directly prove the existence of a solution satisfying the $k+1$ equations $\sum_{s=1}^{k} p_s = 1$ and $\partial l(\mathbf{p})/\partial p_s = 0$, $s = 1, \dots, k$; for general models, we earlier relied on the convergence proof of Algorithm 2 to show existence (see the discussion at the beginning of Section 3).

From (19), if $\delta > 0$, a larger $p_s$ implies a smaller $(1+\delta)^2 - 4 r_s \delta$ and hence a larger $r_s$. The situation for $\delta < 0$ is similar. Therefore, the order of $p_1, \dots, p_k$ is the same as that of $r_1, \dots, r_k$:

Theorem 5 If $r_s \ge r_t$, then $p_s \ge p_t$.

This theorem indicates that results from the generalized Bradley-Terry model are reasonable estimates.

4.2 An Earlier Approach

Zadrozny (2002) was the first to address probability estimates using error-correcting codes. Considering the same optimization problem (16), she proposed the heuristic updating rule

$$p_s^{t+1} \equiv \frac{\sum_{i: s \in I_i^+} r_i + \sum_{i: s \in I_i^-} r_i'}{\sum_{i: s \in I_i^+} \frac{n_i q_i^{t,+}}{q_i^t} + \sum_{i: s \in I_i^-} \frac{n_i q_i^{t,-}}{q_i^t}}\, p_s^t, \qquad (20)$$

but did not provide a convergence proof. For the “one-against-one” setting, (20) reduces to (4) in Algorithm 1. However, we will show that under other ECOC settings, the algorithm using (20) may not converge to a point with the smallest KL distance. Taking the “one-against-the rest” approach, if $k = 3$ and $r_1 = r_2 = 3/4$, $r_3 = 1/2$, Theorem 5 implies $p_1 = p_2$ for our approach. Then (18) and $p_1 + p_2 + p_3 = 1$ give

$$\frac{3}{4p_1} - \frac{1}{4(1-p_1)} = \frac{1}{2p_3} - \frac{1}{2(1-p_3)} = \frac{1}{2(1-2p_1)} - \frac{1}{4p_1}.$$

This leads to the solution

$$\mathbf{p} = [15 - \sqrt{33},\; 15 - \sqrt{33},\; 2\sqrt{33} - 6]^T / 24, \qquad (21)$$

which is also unique according to Theorem 4. If this were a convergent point of (20), a further update from it would have to lead to the same point (after normalization); thus, the three multiplicative factors would have to be equal. Since the algorithm keeps $\sum_{i=1}^{k} p_i^t = 1$, and $r_i + r_i' = 1$, the factor in the updating rule (20) for this example is

$$\frac{r_s + \sum_{i: i \neq s} r_i'}{p_s^t + \sum_{i: i \neq s} (1 - p_i^t)} = \frac{k - 1 + 2 r_s - \sum_{i=1}^{k} r_i}{k - 2 + 2 p_s^t} = \frac{2 r_s}{1 + 2 p_s^t}. \qquad (22)$$

Clearly the $\mathbf{p}$ obtained in (21) by our approach of minimizing the KL distance does not give the same value of (22) for all $s$. Thus, in this case the approach of Zadrozny (2002) fails to converge to the unique solution of (16) and hence lacks a clear interpretation.
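The mismatch can also be checked numerically; a small sketch (assuming NumPy) evaluating the factors (22) at the solution (21):

```python
import numpy as np

# Solution (21) of the KL-minimization problem for r = (3/4, 3/4, 1/2), k = 3
p = np.array([15 - np.sqrt(33), 15 - np.sqrt(33), 2 * np.sqrt(33) - 6]) / 24
r = np.array([0.75, 0.75, 0.5])

factors = 2 * r / (1 + 2 * p)      # multiplicative factors (22) of update (20)
print(p.sum())                     # 1.0: p is a valid probability vector
print(factors)                     # roughly [0.85, 0.85, 0.69]: not all equal,
                                   # so (21) is not a fixed point of (20)
```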


5. Experiments: Simulated Examples

In the following two sections, we present experiments on multi-class probability estimates using synthetic and real-world data. In implementing Algorithm 2, we use the following stopping condition:

$$\max_{s \in \{1, \dots, k\}} \left| \frac{\sum_{i: s \in I_i^+} \frac{r_i}{q_i^{t,+}} + \sum_{i: s \in I_i^-} \frac{r_i'}{q_i^{t,-}}}{\sum_{i: s \in I_i} \frac{r_i + r_i'}{q_i^t}} - 1 \right| < 0.001,$$

which implies that $\partial l(\mathbf{p}^t)/\partial p_s$, $s = 1, \dots, k$, are all close to zero.

5.1 Data Generation

We consider the same setting as in (Hastie and Tibshirani, 1998; Wu et al., 2004) by defining three settings of class probabilities:

(a) $p_1 = 1.5/k$, $p_j = (1 - p_1)/(k-1)$, $j = 2, \dots, k$.

(b) $k_1 = k/2$ if $k$ is even, and $(k+1)/2$ if $k$ is odd; then $p_1 = 0.95 \times 1.5/k_1$, $p_i = (0.95 - p_1)/(k_1 - 1)$ for $i = 2, \dots, k_1$, and $p_i = 0.05/(k - k_1)$ for $i = k_1 + 1, \dots, k$.

(c) $p_1 = 0.95 \times 1.5/2$, $p_2 = 0.95 - p_1$, and $p_i = 0.05/(k-2)$, $i = 3, \dots, k$.

All classes are competitive in case (a), but only two dominate in (c). For given $I_i$, $i = 1, \dots, m$, we generate $r_i$ by adding noise to $q_i^+/q_i$ and then check whether the proposed method obtains good probability estimates. Since $q_i^+/q_i$ differs across the three cases, it is difficult to add noise in a completely fair way. Furthermore, various ECOC settings (described later) also result in different $q_i^+/q_i$. Though far from perfect, we try the two ways below (a code sketch of both follows the list):

1. An “absolute” amount of noise:

$$r_i = \min\Bigl(\max\bigl(\epsilon,\; \tfrac{q_i^+}{q_i} + 0.1\,N(0,1)\bigr),\; 1 - \epsilon\Bigr). \qquad (23)$$

Then $r_i' = 1 - r_i$. Here $\epsilon = 10^{-7}$ is used so that all $r_i, r_i'$ are positive. This is the setting considered in (Hastie and Tibshirani, 1998).

2. A “relative” amount of noise:

$$r_i = \min\Bigl(\max\bigl(\epsilon,\; \tfrac{q_i^+}{q_i}\,(1 + 0.1\,N(0,1))\bigr),\; 1 - \epsilon\Bigr). \qquad (24)$$

$r_i'$ and $\epsilon$ are set in the same way.
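For concreteness, a small sketch (not the authors' MATLAB scripts) of both noise rules, reusing the hypothetical game-list representation from Section 2:

```python
import numpy as np

def simulate_games(p, Iplus, Iminus, noise="absolute", sigma=0.1, eps=1e-7, rng=None):
    """Generate noisy (r_i, r_i') from true class probabilities p, cf. (23)-(24).

    Iplus[i], Iminus[i] are the index sets of game i.  A sketch only.
    """
    rng = np.random.default_rng() if rng is None else rng
    r, rp = [], []
    for Ip, Im in zip(Iplus, Iminus):
        qp, qm = p[Ip].sum(), p[Im].sum()
        true = qp / (qp + qm)                         # q_i^+ / q_i
        if noise == "absolute":                       # rule (23)
            ri = true + sigma * rng.standard_normal()
        else:                                         # rule (24)
            ri = true * (1 + sigma * rng.standard_normal())
        ri = min(max(eps, ri), 1 - eps)
        r.append(ri)
        rp.append(1 - ri)
    return np.array(r), np.array(rp)
```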


5.2 Results of Various ECOC Settings

We consider the four encodings used in (Allwein et al., 2001) to generate $I_i$:

1. “1vs1”: the pairwise approach (Eq. (5)).

2. “1vsrest”: the “One-against-the rest” approach in Section 4.1.

3. “dense”: $I_i = \{1, \dots, k\}$ for all $i$. $I_i$ is randomly split into two equally-sized sets $I_i^+$ and $I_i^-$, and $[10 \log_2 k]$ such splits³ are generated; that is, $m = [10 \log_2 k]$. Intuitively, more combinations of subjects as teams give more information and may lead to a better approximation of individual skill, so we would like to select diversified $I_i^+, I_i^-$, $i = 1, \dots, m$. Following Allwein et al. (2001), we repeat the selection 100 times: for each collection of $I_i^+, I_i^-$, $i = 1, \dots, m$, we calculate the smallest distance between any pair $(I_i^+, I_i^-)$ and $(I_j^+, I_j^-)$; a larger value indicates a better coding, so we pick the collection with the largest value (see the sketch after this list). For the distance between $(I_i^+, I_i^-)$ and $(I_j^+, I_j^-)$, Allwein et al. (2001) consider a generalized Hamming distance defined as follows:

$$\sum_{s=1}^{k} \begin{cases} 0 & \text{if } s \in I_i^+ \cap I_j^+ \text{ or } s \in I_i^- \cap I_j^-, \\ 1 & \text{if } s \in I_i^+ \cap I_j^- \text{ or } s \in I_i^- \cap I_j^+, \\ 1/2 & \text{if } s \notin I_i \text{ or } s \notin I_j. \end{cases}$$

4. “sparse”: $I_i^+, I_i^-$ are randomly drawn from $\{1, \dots, k\}$ with $E(|I_i^+|) = E(|I_i^-|) = k/4$, and $[15 \log_2 k]$ such splits are generated. As with “dense,” we repeat the procedure 100 times to find a good coding.
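A sketch of the random-coding selection for “dense” (the “sparse” case is analogous), using the generalized Hamming distance above; the $\{-1, 0, +1\}$ column representation and helper names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def generalized_hamming(ci, cj):
    """Distance between two columns of a {-1, 0, +1} coding matrix.

    ci[s] is +1 if s is in I_i^+, -1 if s is in I_i^-, 0 if s is not in I_i.
    """
    d = 0.0
    for a, b in zip(ci, cj):
        if a == 0 or b == 0:
            d += 0.5          # s is not compared in one of the two games
        elif a != b:
            d += 1.0          # s is on opposite teams
    return d

def dense_coding(k, rng, n_trials=100):
    """Among n_trials random dense codings, pick the one whose closest pair
    of columns is farthest apart (as described above).  A sketch only."""
    m = int(round(10 * np.log2(k)))
    best, best_score = None, -np.inf
    for _ in range(n_trials):
        cols = []
        for _ in range(m):
            perm = rng.permutation(k)
            col = np.zeros(k, dtype=int)
            col[perm[: k // 2]] = 1          # I_i^+
            col[perm[k // 2:]] = -1          # I_i^-
            cols.append(col)
        score = min(generalized_hamming(cols[i], cols[j])
                    for i in range(m) for j in range(i + 1, m))
        if score > best_score:
            best, best_score = cols, score
    return best

# usage: code = dense_coding(8, np.random.default_rng(0))
```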

The way of adding noise may favor some ECOC settings. Since in general

$$\frac{q_i^+}{q_i} \text{ for “1vs1”} \;\gg\; \frac{q_i^+}{q_i} \text{ for “1vsrest,”}$$

adding $0.1\,N(0,1)$ to $q_i^+/q_i$ results in a very inaccurate $r_i$ for “1vsrest.” On the other hand, with the relative rule, the noise added to $r_i$ and $r_i'$ for “1vsrest” is smaller than that for “1vs1.” This analysis indicates that using the two kinds of noise makes the experiment more complete.

Figures 1 and 2 show results of adding an “absolute” amount of noise. Two criteria are used to evaluate the obtained probability estimates: Figure 1 presents accuracy rates averaged over 500 replicates for each of the four encodings when $k = 2^2, 2^3, \dots, 2^6$, and Figure 2 gives the (relative) mean squared error (MSE):

$$\text{MSE} = \frac{1}{500} \sum_{j=1}^{500} \left( \sum_{i=1}^{k} (\hat{p}_i^j - p_i)^2 \Big/ \sum_{i=1}^{k} p_i^2 \right), \qquad (25)$$

where $\hat{\mathbf{p}}^j$ is the probability estimate obtained in the $j$th of the 500 replicates. Using the same two criteria, Figures 3 and 4 present results of adding a “relative” amount of noise.

3. We use [x] to denote the nearest integer value of x.


[Figure 1: three panels (a), (b), (c); horizontal axis $\log_2 k$ (2 to 6), vertical axis test accuracy.]

Figure 1: Accuracy of predicting the true class by four encodings and (23) for generating noise: “1vs1” (dashed line, square marked), “1vsrest” (solid line, cross marked), “dense” (dotted line, circle marked), “sparse” (dashdot line, diamond marked). Sub-figures 1(a), 1(b), and 1(c) correspond to the three settings of class probabilities in Section 5.1.

[Figure 2: three panels (a), (b), (c); horizontal axis $\log_2 k$, vertical axis MSE.]

Figure 2: MSE by four encodings and (23) for generating noise. The legend is the same as that of Figure 1.


[Figure 3: three panels (a), (b), (c); horizontal axis $\log_2 k$, vertical axis test accuracy.]

Figure 3: Accuracy of predicting the true class by four encodings and (24) for generating noise. The legend is the same as that of Figure 1.

[Figure 4: three panels (a), (b), (c); horizontal axis $\log_2 k$, vertical axis MSE.]

Figure 4: MSE by four encodings and (24) for generating noise. The legend is the same as that of Figure 1.

Clearly, following our earlier analysis of the added noise, the results of “1vsrest” in Figures 3 and 4 are much better than those in Figures 1 and 2. In all figures, “dense” and “sparse” are less competitive in cases (a) and (b) when $k$ is large: due to the large $|I_i^+|$ and $|I_i^-|$, the model is unable to single out a clear winner when probabilities are more balanced. “1vs1” is good for (a) and (b) but suffers some loss in (c), where the class probabilities are highly unbalanced. Wu et al. (2004) observed this shortcoming and proposed a quadratic model for the “1vs1” setting.


[Figure 5: two panels, (a) via proposed probability estimates and (b) via exponential-loss decoding; horizontal axis data sets (dna, waveform, satimage, segment, usps, mnist, letter), vertical axis test error.]

Figure 5: Testing error on smaller (300 training, 500 testing) data sets by four encodings: “1vs1” (dashed line, square marked), “1vsrest” (solid line, cross marked), “dense” (dotted line, circle marked), “sparse” (dashdot line, asterisk marked).

[Figure 6: two panels, (a) via proposed probability estimates and (b) via exponential-loss decoding; same axes as Figure 5.]

Figure 6: Testing error on larger (800 training, 1000 testing) data sets by four encodings. The legend is the same as that of Figure 5.

Results here indicate that the four encodings perform very differently under various conditions. Later, in the experiments on real data, we will see that in general the situation is closer to case (c), and all four encodings are practically viable.⁴

4. Experiments here are done using MATLAB (http://www.mathworks.com), and the programs are available at http://www.csie.ntu.edu.tw/cjlin/libsvmtools/libsvm-errorcode/generalBT.zip.


6. Experiments: Real Data

In this section we present experimental results on some real-world multi-class problems. There are two goals:

1. Check the viability of the proposed multi-class probability estimates. We hope that under reasonable ECOC settings, equally good probabilities are obtained.

2. Compare with the standard ECOC approach without extracting probabilities. This is less important than the first goal, as the paper focuses on probability estimates; however, since classification accuracy is one of the evaluation criteria used here, we can easily conduct such a comparison.

6.1 Data and Experimental Settings

We consider data sets used in (Wu et al., 2004): dna, satimage, segment, and letter from the Statlog collection (Michie et al., 1994), waveform from the UCI Machine Learning Repository (Blake and Merz, 1998), USPS (Hull, 1994), and MNIST (LeCun et al., 1998). Except for dna, whose attributes take the two values 0 and 1, each attribute of the other data sets is linearly scaled to $[-1, 1]$. The data set statistics are in Table 1.

Table 1: Data Set Statistics

| dataset | dna | waveform | satimage | segment | USPS | MNIST | letter |
|---|---|---|---|---|---|---|---|
| #classes | 3 | 3 | 6 | 7 | 10 | 10 | 26 |
| #attributes | 180 | 21 | 36 | 19 | 256 | 784 | 16 |

After data scaling, we randomly select smaller (300/500) and larger (800/1,000) training/testing sets from the thousands of available points. 20 such selections are generated and results are averaged.⁵

We use the same four ways as in Section 5 to generate $I_i$. All of them have $|I_1| \approx \cdots \approx |I_m|$. Since these multi-class problems are reasonably balanced, we set $n_i = 1$ in (16).

We consider support vector machines (SVM) (Boser et al., 1992; Cortes and Vapnik, 1995) with the RBF (radial basis function) kernel $e^{-\gamma\|x_i - x_j\|^2}$ as the binary classifier. An improved version (Lin et al., 2003) of (Platt, 2000) is used to obtain $r_i$ from SVM decision values. It is known that SVM may not give good probability estimates (e.g., Zhang (2004)), but Platt (2000) and Wu et al. (2004) empirically show that using decision values from cross validation yields acceptable results in practice. In addition, SVM is sometimes sensitive to parameters, so we conduct a parameter-selection procedure before testing; details can be found in Figure 4 of (Wu et al., 2004). The code is modified from LIBSVM (Chang and Lin, 2001), a library for support vector machines.

5. All training/testing sets used are at http://www.csie.ntu.edu.tw/cjlin/papers/svmprob/data.


6.2 Evaluation Criteria and Results

For these real data sets, there are no true probability values available. We consider the same three evaluation criteria used in (Wu et al., 2004):

1. Test errors. Averages of 20 errors for smaller and larger sets are in Figures 5(a) and 6(a), respectively.

2. MSE (Brier score):

$$\frac{1}{l} \sum_{j=1}^{l} \left( \sum_{i=1}^{k} (I_{\{y_j = i\}} - \hat{p}_i^j)^2 \right),$$

where $l$ is the number of test data, $\hat{\mathbf{p}}^j$ is the probability estimate for the $j$th datum, $y_j$ is the true class label, and $I_{\{y_j = i\}}$ is an indicator function (1 if $y_j = i$ and 0 otherwise). This measurement (Brier, 1950), popular in meteorology, satisfies the following property:

$$\arg\min_{\hat{\mathbf{p}}} E_Y\Bigl[ \sum_{i=1}^{k} (I_{\{Y = i\}} - \hat{p}_i)^2 \Bigr] \equiv \arg\min_{\hat{\mathbf{p}}} \sum_{i=1}^{k} (\hat{p}_i - p_i)^2,$$

where $Y$, a random variable for the class label, has the probability distribution $\mathbf{p}$. The Brier score is thus useful when the true probabilities are unknown. We present the average of the 20 Brier scores in Figure 7.

3. Log loss:

$$-\frac{1}{l} \sum_{j=1}^{l} \log \hat{p}_{y_j}^j,$$

where $\hat{\mathbf{p}}^j$ is the probability estimate of the $j$th datum and $y_j$ is its actual class label. It is another useful criterion when true probabilities are unknown:

$$\min_{\hat{\mathbf{p}}} E_Y\Bigl[ -\sum_{i=1}^{k} \log \hat{p}_i \cdot I_{\{Y = i\}} \Bigr] \equiv \min_{\hat{\mathbf{p}}} \Bigl( -\sum_{i=1}^{k} p_i \log \hat{p}_i \Bigr)$$

has its minimum at $\hat{p}_i = p_i$, $i = 1, \dots, k$. Averages over the 20 splits are presented in Figure 8. (A short computational sketch of both probability criteria follows this list.)
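For completeness, a short sketch of how the two probability-based criteria above would be computed (hypothetical helpers; `p_hat` is an l-by-k array of estimates and `y` holds true labels in 0..k-1):

```python
import numpy as np

def brier_score(p_hat, y):
    """MSE / Brier score: average over test data of sum_i (I_{y=i} - p_hat_i)^2."""
    l, k = p_hat.shape
    onehot = np.eye(k)[y]                     # indicator I_{y_j = i}
    return np.mean(np.sum((onehot - p_hat) ** 2, axis=1))

def log_loss(p_hat, y):
    """Log loss: -(1/l) sum_j log p_hat[j, y_j]."""
    l = len(y)
    return -np.mean(np.log(p_hat[np.arange(l), y]))
```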

Results under all three criteria indicate that the four encodings are quite competitive. This observation suggests that in practical problems class probabilities may resemble those of case (c) in Section 5; that is, only a few classes dominate. Wu et al. (2004) were the first to point out this resemblance. In addition, all figures show that “1vs1” is slightly worse than the others for larger $k$ (e.g., letter). Wu et al. (2004) earlier proposed a quadratic model, which gives better probability estimates than the Bradley-Terry model for “1vs1.”

In terms of computational time, because the numbers of binary problems for “dense” and “sparse” ($[10 \log_2 k]$ and $[15 \log_2 k]$, respectively) are larger than $k$, and each binary problem involves many classes of data (all classes and about half of them, respectively), their training time is longer than that of “1vs1” and “1vsrest.” “Dense” is particularly time consuming. Note that though “1vs1” solves $k(k-1)/2$ SVMs, each is small since it uses only two classes of data.


[Figure 7: two panels, (a) 300 training, 500 testing and (b) 800 training, 1000 testing; horizontal axis data sets, vertical axis MSE.]

Figure 7: MSE by four encodings. The legend is the same as that of Figure 5.

[Figure 8: two panels, (a) 300 training, 500 testing and (b) 800 training, 1000 testing; horizontal axis data sets, vertical axis log loss.]

Figure 8: Log loss by four encodings. The legend is the same as that of Figure 5.

To check the effectiveness of the proposed model in multi-class classification, we compare it with a standard ECOC-based strategy that does not produce probabilities: exponential loss-based decoding by Allwein et al. (2001). Let $\hat{f}_i$ be the decision function of the $i$th binary classifier, where $\hat{f}_i(x) > 0$ ($< 0$) predicts that $x$ is in the classes of $I_i^+$ ($I_i^-$). This approach determines the predicted label by the rule

$$\text{predicted label} = \arg\min_{s} \left( \sum_{i: s \in I_i^+} e^{-\hat{f}_i} + \sum_{i: s \in I_i^-} e^{\hat{f}_i} \right).$$

Testing errors for the smaller and larger sets are in Figures 5(b) and 6(b), respectively. Comparing them with the results of the proposed model in Figures 5(a) and 6(a), we observe that both approaches have very similar errors. Therefore, in terms of predicting class labels only, our new method is competitive.
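A small sketch of this decoding rule (assuming the $\{-1, 0, +1\}$ coding-matrix representation used earlier; not the authors' code):

```python
import numpy as np

def exp_loss_decode(F, coding):
    """Exponential loss-based decoding of Allwein et al. (2001).

    F[i] is the decision value of the i-th binary classifier on one example;
    coding[i][s] is +1 if class s is in I_i^+, -1 if in I_i^-, 0 otherwise.
    Returns the predicted class label.  A sketch only.
    """
    coding = np.asarray(coding)               # shape (m, k)
    m, k = coding.shape
    losses = np.zeros(k)
    for s in range(k):
        for i in range(m):
            if coding[i, s] == 1:             # s in I_i^+
                losses[s] += np.exp(-F[i])
            elif coding[i, s] == -1:          # s in I_i^-
                losses[s] += np.exp(F[i])
    return int(np.argmin(losses))
```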
