
Distributed Primal Coordinate Descent Method for Support Vector Machines

Academic year: 2022


Journal of Machine Learning Research () Submitted ; Published

Distributed Primal Coordinate Descent Method for Support Vector Machines

Editor:

Abstract

Keywords:

1. Introduction

2. Experiments

Dataset      l (instances)   n (features)   Density   Best C
yahoo-japan      176,203         832,026     0.016%   0.5
yahoo-korea      460,554       3,052,939     0.011%   2
url            2,396,130       3,231,961     0.004%   4
webspam          350,000      16,609,143     0.022%   32
KDD2010-a      8,407,752      20,216,830     0.000%   0.015625
KDD2010-b     19,264,097      29,890,095     0.000%   0.03125

Table 1: Data statistics. For url, webspam, KDD2010-a, and KDD2010-b, test sets are not available, so we randomly split the original data into 80%/20% as training and test sets, respectively.
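As a minimal sketch of the 80%/20% random split described in the caption (the function name and the fixed seed are illustrative, not from the paper):

```python
import numpy as np

def train_test_split_80_20(X, y, seed=0):
    """Randomly split (X, y) into an 80% training set and a 20% test set,
    as described for the url, webspam, and KDD2010 datasets."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    perm = rng.permutation(n)        # random ordering of instance indices
    cut = int(0.8 * n)               # boundary between train and test
    train_idx, test_idx = perm[:cut], perm[cut:]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```

For sparse datasets of this scale, the same index-permutation approach works on a SciPy CSR matrix without densifying the data.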

Acknowledgments

References


Figure 1: Relative function value difference versus training time. Panels: (a) yahoo-japan, (b) yahoo-korea, (c) url, (d) webspam, (e) KDD2010-a, (f) KDD2010-b.

Figure 2: Relative testing accuracy versus training time. Panels: (a) yahoo-japan, (b) yahoo-korea, (c) url, (d) webspam, (e) KDD2010-a, (f) KDD2010-b.


Figure 3: Percentage of computation and communication. Panels: (a) CDPrimal-A, (b) CDPrimal-E, (c) TRON.
