(1)

Large-Scale Convex Optimization over Matrices for Multi-task Learning

Paul Tseng

Mathematics, University of Washington, Seattle

Optima, Univ. Illinois, Urbana-Champaign, March 26, 2009

Joint work with Ting Kei Pong (UW) and Jieping Ye (ASU)

(2)

Prologue

This story began with an innocent-looking email...

(3)

A Question..

On Thu, 18 Sep 2008, Jieping Ye wrote:

Dr. Tseng,

I recently came across your interesting work on the block coordinate descent method for non-differentiable optimization. I wonder whether the convergence result will apply for the matrix case where each block is a positive definite matrix, i.e., min f(X, Y, Z), where X, Y, Z are positive definite matrices.

I will appreciate it if you can provide some relevant references on this if any. Thanks!

Best, Jieping

(4)

The Problem

min_{Q ≻ 0, W}  f(Q, W) := tr(W^T Q^{-1} W) + tr(Q) + ‖AW − B‖_F^2

where Q ∈ ℝ^{n×n}, W ∈ ℝ^{n×m} (A ∈ ℝ^{p×n} and B ∈ ℝ^{p×m} are given), ‖B‖_F = (Σ_{i,j} B_{ij}^2)^{1/2}

(5)

The Problem

min_{Q ≻ 0, W}  f(Q, W) := tr(W^T Q^{-1} W) + tr(Q) + ‖AW − B‖_F^2

where Q ∈ ℝ^{n×n}, W ∈ ℝ^{n×m} (A ∈ ℝ^{p×n} and B ∈ ℝ^{p×m} are given), ‖B‖_F = (Σ_{i,j} B_{ij}^2)^{1/2}

Note:

• f(Q, W) is diff., convex in Q for each W, convex in W for each Q.

• The min is finite, but may not be attained (e.g., when B = 0).

• If the min is attained, it's attained at a critical pt, i.e., ∇f(Q, W) = 0.

• m = # tasks. Q^{-1} = covar. matrix.
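For concreteness, here is a minimal NumPy sketch (an editorial illustration with made-up sizes and random data, not from the talk) that just evaluates f(Q, W) for a positive definite Q:

```python
import numpy as np

def f(Q, W, A, B):
    """f(Q, W) = tr(W^T Q^{-1} W) + tr(Q) + ||A W - B||_F^2."""
    QinvW = np.linalg.solve(Q, W)            # Q^{-1} W without forming Q^{-1}
    return (np.trace(W.T @ QinvW) + np.trace(Q)
            + np.linalg.norm(A @ W - B, "fro") ** 2)

# Tiny made-up instance: n features, m tasks, p samples.
rng = np.random.default_rng(0)
n, m, p = 5, 3, 8
A, B = rng.standard_normal((p, n)), rng.standard_normal((p, m))
W = rng.standard_normal((n, m))
Q = np.eye(n)                                # any Q ≻ 0
print(f(Q, W, A, B))
```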

(6)

On Wed, 24 Sep 2008, Jieping Ye wrote:

Dear Paul, ...

The multi-task learning problem comes from our biological application: Drosophila gene expression pattern analysis (funded by NSF and NIH).

...

Thanks, Jieping

(7)

First Try

∇f(Q, W) = ( −Q^{-1} W W^T Q^{-1} + I,  2 Q^{-1} W + 2 A^T(AW − B) )

So ∇f(Q, W) = 0 implies

(W^T Q^{-1})^T (W^T Q^{-1}) = I,   Q^{-1} W + M W = A^T B,   where M := A^T A.

(8)

First Try

∇f(Q, W) = ( −Q^{-1} W W^T Q^{-1} + I,  2 Q^{-1} W + 2 A^T(AW − B) )

So ∇f(Q, W) = 0 implies

(W^T Q^{-1})^T (W^T Q^{-1}) = I,   Q^{-1} W + M W = A^T B,   where M := A^T A.

So rank(W) = n, rank(A^T B) = n, ..., and

(I + MQ)(I + QM) = A^T B B^T A

(9)

First Try

∇f(Q, W) = ( −Q^{-1} W W^T Q^{-1} + I,  2 Q^{-1} W + 2 A^T(AW − B) )

So ∇f(Q, W) = 0 implies

(W^T Q^{-1})^T (W^T Q^{-1}) = I,   Q^{-1} W + M W = A^T B,   where M := A^T A.

So rank(W) = n, rank(A^T B) = n, ..., and

(I + MQ)(I + QM) = A^T B B^T A

Prop. 1: If f has a stationary pt, then rank(A^T B) = n, M ≻ 0, and

Q = (M^{-1} A^T B B^T A M^{-1})^{1/2} − M^{-1},   W = (M + Q^{-1})^{-1} A^T B
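As an illustration, a NumPy/SciPy sketch (made-up data, not from the talk) that forms the Prop. 1 candidate and checks stationarity in W; the candidate is a genuine stationary point only when the resulting Q comes out positive definite:

```python
import numpy as np
from scipy.linalg import sqrtm

def prop1_candidate(A, B):
    """Prop. 1 candidate: Q = (M^{-1} A^T B B^T A M^{-1})^{1/2} - M^{-1},
       W = (M + Q^{-1})^{-1} A^T B, assuming M = A^T A > 0 and rank(A^T B) = n."""
    M = A.T @ A
    Minv = np.linalg.inv(M)
    S = Minv @ A.T @ B @ B.T @ A @ Minv
    Q = np.real(sqrtm(S)) - Minv
    W = np.linalg.solve(M + np.linalg.inv(Q), A.T @ B)
    return Q, W

# Made-up data, built so that a stationary point is likely to exist
# (B in the range of A with sizable weights, and m >= n so rank(A^T B) = n).
rng = np.random.default_rng(1)
n, m, p = 4, 6, 10
A = rng.standard_normal((p, n))
B = A @ (3.0 * rng.standard_normal((n, m)))
Q, W = prop1_candidate(A, B)
print("Q positive definite:", np.linalg.eigvalsh((Q + Q.T) / 2).min() > 0)
grad_W = 2 * np.linalg.solve(Q, W) + 2 * A.T @ (A @ W - B)   # dF/dW at the candidate
print("||dF/dW||_F =", np.linalg.norm(grad_W))               # ~ 0
```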

(10)

But..

Date: Sat, 25 Oct 2008 16:48:20 -0700

Dear Paul,

Thanks for the writeup. Very interesting.

...

Unfortunately, M is commonly not positive definite in our applications.

...

Thanks, Jieping

(11)

Second Try

Suppose M = A^T A is singular, so r := rank(A) < n.

Use SVD or QR decomp. of A:

A = R [ Ã  0 ] S^T

with Ã ∈ ℝ^{p×r}, R^T R = I and S^T S = I. Let B̃ := R^T B.

Prop. 2:

min_{Q ≻ 0, W} f(Q, W) = min_{Q̃ ≻ 0, W̃} f̃(Q̃, W̃),   where

f̃(Q̃, W̃) := tr(W̃^T Q̃^{-1} W̃) + tr(Q̃) + ‖Ã W̃ − B̃‖_F^2.

Then recover Q, W from Q̃, W̃.
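To make the reduction concrete, here is a small NumPy sketch (my construction via a full SVD; the slide allows either SVD or QR) that builds Ã, B̃, and the numerical rank r from a rank-deficient A:

```python
import numpy as np

def reduce_problem(A, B, tol=1e-10):
    """Prop. 2 reduction: factor A = R [A~ 0] S^T (full SVD) and set B~ = R^T B,
       so the n-dimensional problem becomes an r-dimensional one, r = rank(A)."""
    R, sing, St = np.linalg.svd(A)            # A = R diag(sing) St
    r = int(np.sum(sing > tol * sing[0]))     # numerical rank of A
    S = St.T
    A_tilde = R.T @ A @ S[:, :r]              # p x r, full column rank
    B_tilde = R.T @ B                         # p x m
    return A_tilde, B_tilde, S, r

# Made-up rank-deficient instance.
rng = np.random.default_rng(2)
n, m, p, r_true = 6, 3, 8, 4
A = rng.standard_normal((p, r_true)) @ rng.standard_normal((r_true, n))
B = rng.standard_normal((p, m))
A_tilde, B_tilde, S, r = reduce_problem(A, B)
print(r, A_tilde.shape)                       # 4, (8, 4)
# One way to recover W from the reduced variable: W = S[:, :r] @ W_tilde.
```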

(12)

Moreover, f̃ has a stationary pt iff

(M̃^{-1} Ã^T B̃ B̃^T Ã M̃^{-1})^{1/2} ⪰ M̃^{-1}

where M̃ := Ã^T Ã.

Done?

(13)

But..

Date: Thu, 30 Oct 2008 10:44:56 -0700

Dear Tseng,

Thanks. I like the derivation.

It seems the condition in Eq. (4) is the key.

We need to somehow relax this condition.

Will perturbation solve this problem?

Best, Jieping

(14)

Third Try

Assume w.l.o.g. rank A = n. Let

h(Q) := inf_W f(Q, W)
      = inf_W tr(W^T Q^{-1} W) + tr(Q) + ‖AW − B‖_F^2
      = tr(Q) + tr(E^T E (Q + C)^{-1}) + const.

with C := M^{-1} ≻ 0 and E := B^T A C.  (M = A^T A)

(15)

Third Try

Assume w.l.o.g. rank A = n. Let

h(Q) := inf_W f(Q, W)
      = inf_W tr(W^T Q^{-1} W) + tr(Q) + ‖AW − B‖_F^2
      = tr(Q) + tr(E^T E (Q + C)^{-1}) + const.

with C := M^{-1} ≻ 0 and E := B^T A C.  (M = A^T A)

Then h(Q) is cont. over Q ⪰ 0 (!), so

min_{Q ⪰ 0} h(Q) = min_{Q ≻ 0, W} f(Q, W).

Moreover, (Q, W) ↦ W^T Q^{-1} W is operator-convex, so f is convex, and hence h is convex.
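A quick numerical check of this partial minimization (a sketch with made-up data; the constant, which is not written on the slide, works out to ‖B‖_F^2 − tr(B^T A C A^T B) in this derivation):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 5, 3, 9
A, B = rng.standard_normal((p, n)), rng.standard_normal((p, m))
M = A.T @ A                                   # rank A = n assumed, so M > 0
C = np.linalg.inv(M)
E = B.T @ A @ C                               # m x n

Q = np.eye(n) + np.diag(rng.random(n))        # some Q > 0

# h(Q) by minimizing over W directly: W* = (Q^{-1} + M)^{-1} A^T B plugged into f.
W_star = np.linalg.solve(np.linalg.inv(Q) + M, A.T @ B)
h_direct = (np.trace(W_star.T @ np.linalg.solve(Q, W_star)) + np.trace(Q)
            + np.linalg.norm(A @ W_star - B, "fro") ** 2)

# h(Q) by the closed form on the slide, with the constant made explicit.
const = np.linalg.norm(B, "fro") ** 2 - np.trace(B.T @ A @ C @ A.T @ B)
h_closed = np.trace(Q) + np.trace(E.T @ E @ np.linalg.inv(Q + C)) + const

print(h_direct, h_closed)                     # agree to numerical precision
```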

(16)

Prop. 3: min_{Q ⪰ 0} h(Q) is attained, and

∇h(Q) = I − (Q + C)^{-1} E^T E (Q + C)^{-1}

is Lipschitz cont. over Q ⪰ 0.

(17)

Prop. 3: min_{Q ⪰ 0} h(Q) is attained, and

∇h(Q) = I − (Q + C)^{-1} E^T E (Q + C)^{-1}

is Lipschitz cont. over Q ⪰ 0.

Moreover, using the Schur complement, min_{Q ⪰ 0} h(Q) reduces to an SDP:

min  tr(Q) + tr(U)
s.t. Q ⪰ 0,   [Q, 0; 0, U] + [C, E^T; E, 0] ⪰ 0

Recall C ≻ 0 is n × n and E is m × n. This SDP is solvable by existing IP solvers (SeDuMi, SDPT3, CSDP, ...) for around m + n ≤ 500.
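For small instances the SDP can be stated almost verbatim in a modeling tool. A minimal CVXPY sketch (CVXPY and its default conic solver are my stand-ins here; the talk used IP solvers such as SeDuMi/SDPT3/CSDP):

```python
import cvxpy as cp
import numpy as np

def solve_sdp(C, E):
    """min tr(Q) + tr(U)  s.t.  Q PSD,  [Q+C, E^T; E, U] PSD
       (the Schur-complement form of min_{Q PSD} h(Q), up to a constant)."""
    n, m = C.shape[0], E.shape[0]
    Q = cp.Variable((n, n), symmetric=True)
    U = cp.Variable((m, m), symmetric=True)
    constraints = [Q >> 0,
                   cp.bmat([[Q + C, E.T], [E, U]]) >> 0]
    prob = cp.Problem(cp.Minimize(cp.trace(Q) + cp.trace(U)), constraints)
    prob.solve()
    return Q.value, prob.value

# Tiny made-up instance, just to exercise the formulation.
rng = np.random.default_rng(4)
n, m, p = 8, 4, 12
A, B = rng.standard_normal((p, n)), rng.standard_normal((p, m))
C = np.linalg.inv(A.T @ A)
E = B.T @ A @ C
Q_opt, val = solve_sdp(C, E)
```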

(18)

But..

Date: Mon, 1 Dec 2008 09:33:13 -0700 ...

In our application, n is around 1000-2000 and m is around 50-100.

...

It contains 1000-3000 rows depending on the feature extraction scheme. In general, X is dense. However, one of our recent feature extraction schemes produces sparse X. By the way, the columns of X correspond to biological images.

Best, Jieping

For m = 100, n = 2000, (Q, W) comprises 2,201,000 variables (2000·2001/2 = 2,001,000 entries of symmetric Q plus 2000·100 = 200,000 entries of W). A ∈ ℝ^{p×n} may be dense.


(19)

Fourth Try

Lesson from my graduate student days:

“When stuck, look at the dual”

(20)

Consider the dual problem

max_{Λ ⪰ 0}  min_{Q ⪰ 0, U}  L(Q, U, Λ),   with Lagrangian (⟨W, Z⟩ = tr(W^T Z))

L(Q, U, Λ) := ⟨I, Q⟩ + ⟨I, U⟩ − ⟨ [Λ1, Λ2^T; Λ2, Λ3],  [Q, 0; 0, U] + [C, E^T; E, 0] ⟩

            = ⟨I − Λ1, Q⟩ + ⟨I − Λ3, U⟩ − ⟨Λ1, C⟩ − 2⟨Λ2, E⟩

(21)

Consider the dual problem

max_{Λ ⪰ 0}  min_{Q ⪰ 0, U}  L(Q, U, Λ),   with Lagrangian (⟨W, Z⟩ = tr(W^T Z))

L(Q, U, Λ) := ⟨I, Q⟩ + ⟨I, U⟩ − ⟨ [Λ1, Λ2^T; Λ2, Λ3],  [Q, 0; 0, U] + [C, E^T; E, 0] ⟩

            = ⟨I − Λ1, Q⟩ + ⟨I − Λ3, U⟩ − ⟨Λ1, C⟩ − 2⟨Λ2, E⟩

For dual feas., need I ⪰ Λ1, I = Λ3, Λ1 ⪰ Λ2^T Λ2. Dual problem reduces to

min_{I ⪰ Λ1 ⪰ Λ2^T Λ2}  ⟨C, Λ1⟩ + 2⟨E, Λ2⟩

Since C ≻ 0, the minimum w.r.t. Λ1 is attained at Λ1 = Λ2^T Λ2.

(22)

The dual problem reduces to (recall Λ2 ∈ ℝ^{m×n})

min_{I ⪰ Λ2^T Λ2}  d2(Λ2) := (1/2) ⟨C, Λ2^T Λ2⟩ + ⟨E, Λ2⟩.

(23)

The dual problem reduces to (recall Λ2 ∈ ℝ^{m×n})

min_{I ⪰ Λ2^T Λ2}  d2(Λ2) := (1/2) ⟨C, Λ2^T Λ2⟩ + ⟨E, Λ2⟩.

• No duality gap since the primal problem has an interior soln.

• Recovers Q as the Lagrange multiplier assoc. with I ⪰ Λ2^T Λ2.

• ∇d2(Λ2) = Λ2 C + E is Lipschitz cont. with constant L = λmax(C).

(24)

The dual problem reduces to (recall Λ2 ∈ ℝ^{m×n})

min_{I ⪰ Λ2^T Λ2}  d2(Λ2) := (1/2) ⟨C, Λ2^T Λ2⟩ + ⟨E, Λ2⟩.

• No duality gap since the primal problem has an interior soln.

• Recovers Q as the Lagrange multiplier assoc. with I ⪰ Λ2^T Λ2.

• ∇d2(Λ2) = Λ2 C + E is Lipschitz cont. with constant L = λmax(C).

What about the constraint I ⪰ Λ2^T Λ2?

Prop. 4: For any Λ2 ∈ ℝ^{m×n} (m ≤ n) with SVD Λ2 = R [D, 0] S^T,

Proj(Λ2) := arg min_{I ⪰ Ψ2^T Ψ2} ‖Λ2 − Ψ2‖_F^2 = R [min{D, I}, 0] S^T
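In code, Prop. 4 is just a clip of the singular values at 1. A NumPy sketch (made-up matrix for the check):

```python
import numpy as np

def proj_spectral_ball(L):
    """Projection onto {Lam : Lam^T Lam <= I} (the unit spectral-norm ball), Prop. 4:
       keep the singular vectors and clip the singular values at 1."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    return U @ np.diag(np.minimum(s, 1.0)) @ Vt

rng = np.random.default_rng(5)
L = 3.0 * rng.standard_normal((4, 7))          # m x n with m <= n, as in the slides
P = proj_spectral_ball(L)
print(np.linalg.norm(P, 2) <= 1 + 1e-12)       # projected point is feasible
print(np.allclose(proj_spectral_ball(P), P))   # feasible points are left unchanged
```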

(25)

Solving the reduced dual:

Coded 3 methods in Matlab: Frank-Wolfe; grad.-proj. with LS (Goldstein; Levitin-Polyak); and accel. grad.-proj. (Nesterov).

Accel. grad.-proj. seems most efficient.

0. Choose Λ2 with I ⪰ Λ2^T Λ2. Set Λ2^prev = Λ2, θ_prev = θ = 1. Fix L = λmax(C). Go to 1.

1. Set

   Λ2^ext = Λ2 + (θ/θ_prev − θ) (Λ2 − Λ2^prev).

   Update Λ2^prev ← Λ2, θ_prev ← θ, and

   Λ2 ← Proj( Λ2^ext − (1/L) ∇d2(Λ2^ext) ),    θ ← (√(θ^4 + 4θ^2) − θ^2) / 2.

   If the relative duality gap ≤ tol, stop. Else go to 1.
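A compact NumPy sketch of this recursion (my own rendering, not the original Matlab code; the stopping test below simply monitors the change between iterates instead of the relative duality gap used in the talk):

```python
import numpy as np

def accel_grad_proj(C, E, tol=1e-6, max_iter=5000):
    """Accelerated gradient projection on the reduced dual
       min { d2(Lam) = 0.5 <C, Lam^T Lam> + <E, Lam> : Lam^T Lam <= I },
       with C symmetric positive definite."""
    m, n = E.shape
    Lam = np.zeros((m, n))                      # feasible start
    Lam_prev, theta, theta_prev = Lam.copy(), 1.0, 1.0
    Lip = np.linalg.eigvalsh(C).max()           # L = lambda_max(C), Lipschitz const. of grad d2

    def proj(X):                                # Prop. 4: clip singular values at 1
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.minimum(s, 1.0)) @ Vt

    for _ in range(max_iter):
        Lam_ext = Lam + (theta / theta_prev - theta) * (Lam - Lam_prev)
        grad = Lam_ext @ C + E                  # grad d2(Lam) = Lam C + E
        Lam_prev, theta_prev = Lam, theta
        Lam = proj(Lam_ext - grad / Lip)
        theta = (np.sqrt(theta**4 + 4 * theta**2) - theta**2) / 2
        if np.linalg.norm(Lam - Lam_prev, "fro") <= tol:
            break
    return Lam

# Made-up instance (small n so the per-iteration SVD is cheap).
rng = np.random.default_rng(6)
n, m, p = 30, 5, 40
A, B = rng.standard_normal((p, n)), rng.standard_normal((p, m))
C = np.linalg.inv(A.T @ A)
E = B.T @ A @ C
Lam = accel_grad_proj(C, E)
```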

(26)

Test Results (Preliminary)

Tested on random data: A ∼ U[0, 1]^{p×n} and B ∼ U[0, 1]^{p×m}, tol = 0.001.

n = 2000  m = 100  p = 1000  tol = 0.001
reduce A to have full column rank:
done reducing A, time: 38.9895
done computing C and E, time: 4.05682
termination due to negligible change in U = 3.4469e-11
iter = 10   dobj = -96.7469   dual feas   = 8.88178e-15
            pobj = -96.7469   primal feas = 1.43293e-15
accel. grad-proj: iter = 10   total_time = 67.9021
fmin = 193.494   fval = 193.494

n = 2000  m = 100  p = 3000  tol = 0.001
done computing C and E, time: 31.5357
termination due to negligible change in U = 7.32917e-11
iter = 10   dobj = -137.14   dual feas   = 6.21725e-15
            pobj = -137.14   primal feas = 3.06165e-15
accel. grad-proj: iter = 10   total_time = 190.652
fmin = 8632.32   fval = 8632.32

(27)

However: When n = p, L is large (≈ 10^6) and #iterations is very large.

(28)

Maybe finally..

Date: Sun, 11 Jan 2009 11:09:15 -0700

Dear Paul,

Sorry for the delay.

Some preliminary results prepared by my student are attached. Overall, it performs well, especially when the number of labels is large. We will conduct more extensive experimental studies and keep you updated.

Best, Jieping

(29)

In a preliminary study of Drosophila gene expression pattern annotation, each group of images is associated with a variable number of terms from a controlled vocabulary.

k-means clustering and feature extraction are used to obtain a global histogram counting the number of features closest to each visual word in the codebook obtained from the clustering algorithm (with 3000 clusters), etc.

Hard (soft) assignment: a feature is assigned to one word (multiple words).

• n = 3000 (#clusters)

• 10 ≤ m ≤ 60 (#terms/tasks)

• 2200 ≤ p ≤ 2800 (#samples).

(30)

m    MTLhard      MTLsoft      SVMhard      SVMsoft      PMKstar      PMKclique
AUC
10   77.22±0.63   78.86±0.58   74.89±0.68   78.51±0.60   71.80±0.81   71.98±0.87
20   78.95±0.82   80.90±1.02   76.38±0.84   78.94±0.86   72.01±1.01   71.70±1.20
macro F1
10   52.57±1.19   54.89±1.24   52.25±0.98   55.64±0.69   46.20±1.18   47.06±1.16
20   33.15±1.37   37.25±1.25   35.62±0.99   39.18±1.18   28.21±1.00   28.11±1.09
micro F1
10   59.92±1.04   60.84±0.99   55.74±1.02   59.27±0.80   53.25±1.15   53.36±1.20
20   55.33±0.88   56.79±0.72   51.70±1.17   54.25±0.93   49.59±1.24   48.14±1.34

Table 1: Performance (top: AUC, middle: macro F1, bottom: micro F1) of MTL, SVM, PMK on data sets in stage range 9-10 (m = 10, 20 and p = 919, 1015).

(31)

m    MTLhard      MTLsoft      SVMhard      SVMsoft      PMKstar      PMKclique
AUC
10   84.06±0.53   86.18±0.45   83.05±0.54   84.84±0.57   78.68±0.58   78.52±0.55
30   81.83±0.46   83.85±0.36   79.18±0.51   81.31±0.48   71.85±0.61   71.13±0.53
50   80.56±0.53   82.87±0.53   76.19±0.72   78.75±0.68   69.66±0.81   68.80±0.68
macro F1
10   60.30±0.92   64.00±0.85   60.37±0.88   62.61±0.82   54.61±0.68   55.19±0.62
30   35.20±0.85   39.15±0.83   35.32±0.75   37.38±0.95   22.30±0.70   24.85±0.63
50   23.07±0.86   26.67±1.05   23.46±0.60   26.26±0.65   14.07±0.48   15.04±0.46
micro F1
10   66.89±0.79   68.92±0.68   65.67±0.60   66.73±0.74   62.06±0.54   61.84±0.51
30   55.66±0.64   56.70±0.68   48.87±0.85   51.52±0.96   47.08±0.81   44.81±0.66
50   52.92±0.78   54.54±0.70   47.18±0.84   47.97±0.90   44.25±0.65   42.49±0.70

Table 2: Performance (top: AUC, middle: macro F1, bottom: micro F1) of MTL, SVM, PMK on data sets in stage range 11-12 (10 ≤ m ≤ 50, 1622 ≤ p ≤ 2070).

(32)

m    MTLhard      MTLsoft      SVMhard      SVMsoft      PMKstar      PMKclique
AUC
10   87.38±0.36   89.43±0.31   86.66±0.35   88.42±0.35   82.07±0.41   82.53±0.62
30   82.76±0.36   85.86±0.34   81.13±0.46   83.45±0.38   73.34±0.46   73.73±0.52
60   80.17±0.40   83.32±0.45   77.18±0.46   79.75±0.47   67.15±0.57   67.11±0.64
macro F1
10   64.43±0.77   67.42±0.78   62.97±0.68   66.38±0.71   57.37±0.91   58.42±0.94
30   42.48±0.87   47.39±0.91   41.92±0.76   45.07±0.68   29.62±0.67   31.04±0.82
60   24.78±0.67   29.84±0.62   25.49±0.55   28.72±0.57   15.65±0.46   16.13±0.48
micro F1
10   67.85±0.60   70.50±0.58   66.67±0.45   68.79±0.60   60.98±0.74   61.87±0.77
30   53.74±0.45   57.04±0.69   48.11±0.90   51.19±0.83   43.50±0.70   44.14±0.78
60   48.79±0.60   51.35±0.58   42.84±0.76   44.48±0.84   37.28±0.81   38.29±0.78

Table 3: Performance (top: AUC, middle: macro F1, bottom: micro F1) of MTL, SVM, PMK on data sets in stage range 13-16 (10 ≤ m ≤ 60, 2228 ≤ p ≤ 2754).

(33)

Conclusions & Extensions

1. A seemingly nasty problem arising from an application is tamed by a mix of convex/matrix analysis and modern algorithms.

(34)

Conclusions & Extensions

1. A seemingly nasty problem arising from an application is tamed by a mix of convex/matrix analysis and modern algorithms.

2. Extension to related convex optimization problems in learning?

(35)

Conclusions & Extensions

1. A seemingly nasty problem arising from an application is tamed by a mix of convex/matrix analysis and modern algorithms.

2. Extension to related convex optimization problems in learning?

3. Better algorithms to handle the case of p ≈ n?

(36)

Conclusions & Extensions

1. A seemingly nasty problem arising from an application is tamed by a mix of convex/matrix analysis and modern algorithms.

2. Extension to related convex optimization problems in learning?

3. Better algorithms to handle the case of p ≈ n?

The End?

(37)

Nooo..

On Sun, 22 Mar 2009, Jieping Ye wrote:

Dear Paul,

Thanks for the updated version.

...

There are two tex files: introduction.tex and experiment.tex and two bib files.

...

Thanks, Jieping

(38)

The introduction shows the original problem is a reformulation of

min_W  ‖W‖ + ‖AW − B‖_F^2

with ‖W‖ = Σ_i σ_i(W) (the "trace/nuclear norm").

This can be solved by an accel. gradient method (one SVD per iter.) too.

Which is faster? Will see..
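For comparison, here is a FISTA-style sketch (my own; it assumes "accel. gradient method (one SVD per iter.)" means accelerated proximal gradient with singular-value soft-thresholding) for this trace-norm formulation:

```python
import numpy as np

def svt(W, tau):
    """Prox of tau * (nuclear norm): soft-threshold the singular values (one SVD)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_apg(A, B, max_iter=500):
    """Accelerated proximal gradient for  min_W ||W||_* + ||A W - B||_F^2."""
    n, m = A.shape[1], B.shape[1]
    Lip = 2 * np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz const. of the smooth part
    W = Y = np.zeros((n, m))
    t = 1.0
    for _ in range(max_iter):
        grad = 2 * A.T @ (A @ Y - B)
        W_new = svt(Y - grad / Lip, 1.0 / Lip)    # proximal step = one SVD
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2   # FISTA momentum update
        Y = W_new + ((t - 1) / t_new) * (W_new - W)
        W, t = W_new, t_new
    return W
```

Each iteration costs one SVD of an n × m matrix plus two multiplications by A, mirroring the per-iteration cost mentioned on the slide.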
