
# Smoothing methods for Second-Order Cone Programs/Complementarity Problems


(1)

## Smoothing Methods for Second-Order Cone Programs/Complementarity Problems

Paul Tseng

University of Washington, Seattle

SIAM Conference on Optimization, Stockholm, May 2005

Abstract

This is a talk given at SIAM Conf. Optim, Stockholm, May 2005.

(2)

### Talk Outline

• I. Second-Order Cone (SOC) Program and Complementarity Problem

• Unconstrained Diff. Min. Reformulation

• Numerical Experience

• II. SOCP from Dist. Geometry Optim

• Simulation Results

(3)

### Convex SOCP

min g(x)  s.t.  Ax = b,  x ∈ K

A ∈ ℝ^{m×n}, b ∈ ℝ^m

g : ℝ^n → ℝ convex, twice continuously differentiable

K = K^{n_1} × · · · × K^{n_p},  K^{n_i} := { x_i = (x_{1i}, x_{2i}) ∈ ℝ × ℝ^{n_i − 1} : ‖x_{2i}‖_2 ≤ x_{1i} }

Special cases? LP, SOCP, ...
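A membership test for K follows directly from the definition; a minimal numpy sketch (the block sizes and test vectors below are illustrative):

```python
import numpy as np

def in_soc(x, tol=0.0):
    """Check x = (x1, x2) ∈ K^n, i.e. ‖x2‖_2 ≤ x1."""
    return np.linalg.norm(x[1:]) <= x[0] + tol

def in_product_cone(x, block_sizes):
    """Check x ∈ K = K^{n_1} × ... × K^{n_p}, x split into consecutive blocks."""
    start = 0
    for n_i in block_sizes:
        if not in_soc(x[start:start + n_i]):
            return False
        start += n_i
    return True
```
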

(4)

[Figure: the second-order cone K^n]

(5)

### Suff. Optim. Conditions

x ∈ K, y ∈ K, x^T y = 0, Ax = b, y = ∇g(x) − A^T ζ

⇐⇒

x ∈ K, y ∈ K, x^T y = 0, x = F(ζ), y = G(ζ)

with F(ζ) = d + (I − A^T (AA^T)^{−1} A) ζ

G(ζ) = ∇g(F(ζ)) − A^T (AA^T)^{−1} A ζ  (Ad = b)

(6)

### SOCCP

Find ζ ∈ ℝ^n satisfying

x ∈ K, y ∈ K, x^T y = 0, x = F(ζ), y = G(ζ),  with F, G : ℝ^n → ℝ^n smooth

∇F(ζ), −∇G(ζ) column-monotone ∀ζ ∈ ℝ^n, i.e.,

∇F(ζ) u − ∇G(ζ) v = 0 ⇒ u^T v ≥ 0

Special cases? convex SOCP, monotone NCP, ...

(7)

### How to solve SOCCP?

For LP, simplex methods and interior-point methods.

For SOCP, interior-point methods.

For convex SOCP and column-monotone SOCCP?

Interior-point methods not amenable to warm start. Non-interior methods?

(8)

### Nonsmooth Eq. Reformulation

x_i ∘ y_i := (x_{1i}, x_{2i}) ∘ (y_{1i}, y_{2i}) = ( x_i^T y_i , x_{1i} y_{2i} + y_{1i} x_{2i} )

(Jordan product associated with K^{n_i})

φFB(x, y) := ( (x_i^2 + y_i^2)^{1/2} − x_i − y_i )_{i=1}^p

where x_i^2 := x_i ∘ x_i and z^{1/2} denotes the vector in K^{n_i} with z^{1/2} ∘ z^{1/2} = z.

Fact (Fukushima,Luo,T ’02):

φFB(x, y) = 0 ⇐⇒ x ∈ K, y ∈ K, xTy = 0

Thus, SOCCP is equivalent to

φFB(F (ζ), G(ζ)) = 0

φFB is strongly semismooth (Sun,Sun ’03)
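φFB can be evaluated block by block via the spectral decomposition of the Jordan algebra, z = λ_1 u_1 + λ_2 u_2 with λ_{1,2} = z_1 ∓ ‖z_2‖. A small numpy sketch for a single block K^n; the complementary pair in the test is an illustrative example:

```python
import numpy as np

def jprod(x, y):
    # Jordan product associated with K^n: x ∘ y = (x^T y, x1*y2 + y1*x2)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def soc_sqrt(z):
    # Square root in the Jordan algebra via the spectral decomposition
    # z = λ1·u1 + λ2·u2, λ{1,2} = z1 ∓ ‖z2‖; requires z ∈ K^n (λ1, λ2 ≥ 0).
    w = np.linalg.norm(z[1:])
    lam1, lam2 = z[0] - w, z[0] + w
    zbar = z[1:] / w if w > 0 else np.zeros_like(z[1:])
    u1 = 0.5 * np.concatenate(([1.0], -zbar))
    u2 = 0.5 * np.concatenate(([1.0], zbar))
    return np.sqrt(lam1) * u1 + np.sqrt(lam2) * u2

def phi_fb(x, y):
    # Fischer-Burmeister function for one SOC block: (x² + y²)^{1/2} − x − y
    return soc_sqrt(jprod(x, x) + jprod(y, y)) - x - y
```
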

(9)

### Unconstr. Smooth Min. Reformulation

min_{ζ ∈ ℝ^n} fFB(ζ) := ‖φFB(F(ζ), G(ζ))‖_2^2

F, G smooth and ∇F(ζ), −∇G(ζ) column-monotone ∀ζ ∈ ℝ^n (e.g., LP, SOCP, convex SOCP, monotone NCP)

For monotone NCP (K = ℝ^n_+),

fFB is smooth, and ∇fFB(ζ) = 0 ⇐⇒ ζ is a soln

(Geiger,Kanzow ’96)

The same holds for SOCCP. (J.-S. Chen,T ’04)

Advantage? Any method for unconstrained differentiable minimization (e.g., CG, BFGS, L-BFGS) can be used to find ζ with ∇fFB(ζ) = 0.
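In the NCP special case (K = ℝ^n_+) the FB function acts componentwise, so the whole approach fits in a few lines. A minimal sketch, using plain gradient descent with Armijo backtracking on a small made-up monotone LCP (here F(ζ) = ζ, G(ζ) = Mζ + q; M, q are illustrative):

```python
import numpy as np

# Illustrative monotone LCP: find z ≥ 0 with Mz + q ≥ 0 and z^T (Mz + q) = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])

def fb_merit(z):
    # fFB(z) = ‖φFB(z, Mz + q)‖², componentwise FB function
    g = M @ z + q
    phi = np.sqrt(z**2 + g**2) - z - g
    return phi @ phi

def fb_grad(z):
    g = M @ z + q
    r = np.sqrt(z**2 + g**2)
    phi = r - z - g
    r = np.where(r == 0.0, 1.0, r)   # guard the kink (there phi = 0 anyway)
    # Jacobian of phi wrt z: diag(z/r − 1) + diag(g/r − 1) @ M
    J = np.diag(z / r - 1.0) + np.diag(g / r - 1.0) @ M
    return 2.0 * J.T @ phi

z = np.zeros(2)
for _ in range(2000):
    d = -fb_grad(z)
    f0, t = fb_merit(z), 1.0
    # Armijo backtracking line search
    while fb_merit(z + t * d) > f0 - 1e-4 * t * (d @ d) and t > 1e-12:
        t *= 0.5
    z = z + t * d
```

For this M, q the LCP has the interior solution z = (1/3, 1/3) (where Mz + q = 0), and the merit value is driven to zero, as the Geiger-Kanzow result predicts for stationary points.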

(10)

### Numerical Experience on Convex SOCP

x = F(ζ) = d + (I − P)ζ,  y = G(ζ) = ∇g(F(ζ)) − Pζ

with P = A^T (AA^T)^{−1} A, Ad = b. (Solve min ‖Ax − b‖ to find d.)

• Implement CG-PR, BFGS, L-BFGS (memory = 5) in Matlab to minimize fFB(ζ), using the Armijo stepsize rule, with ζ_init = 0. Stop when

max{fFB(ζ), |x^T y|} ≤ accur.

• Let ψFB(x, y) := ‖φFB(x, y)‖_2^2. Then fFB(ζ) = ψFB(x, y) and

∇fFB(ζ) = (I − P) ∇_x ψFB(x, y) − P ∇_y ψFB(x, y)

Compute Pζ using a Cholesky factorization of AA^T or using preconditioned CG. Compute ψFB(x, y) and ∇ψFB(x, y) within Fortran MEX files.
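P need not be formed explicitly: Pv can be applied through a Cholesky factorization of AA^T, as the slide suggests. A small numpy sketch (random A, b for illustration):

```python
import numpy as np
from numpy.linalg import cholesky, solve, lstsq

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)

# d: any point with Ad = b (least-squares solve, as on the slide)
d = lstsq(A, b, rcond=None)[0]

# Apply P = A^T (A A^T)^{-1} A via the Cholesky factor L of A A^T = L L^T,
# without ever forming P.
L = cholesky(A @ A.T)

def apply_P(v):
    w = solve(L, A @ v)    # forward solve  L w = A v
    u = solve(L.T, w)      # back solve     L^T u = w
    return A.T @ u

zeta = rng.standard_normal(6)
x = d + zeta - apply_P(zeta)   # x = F(zeta) = d + (I - P) zeta
```

By construction A x = A d = b for every ζ, so the equality constraints hold at all iterates of the unconstrained minimization.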

(11)

### DIMACS Challenge SOCPs

• Problem names and statistics:

nb (m = 123, n = 2383, K = (K^3)^{793} × ℝ^4_+)

nb-L2 (m = 123, n = 4195, K = K^{1677} × (K^3)^{838} × ℝ^4_+)

nb-L2-bessel (m = 123, n = 2641, K = K^{123} × (K^3)^{838} × ℝ^4_+)

Compare iters/cpu(sec)/accuracy with Sedumi 1.05 (Sturm ’01), which implements a predictor-corrector interior-point method.

| Problem | SeDuMi (pars.eps=1e-5) iter/cpu | L-BFGS-Chol (accur=1e-5) iter/cpu |
|---|---|---|
| nb | 19/7.6 | 1042/16.5 |
| nb-L2 | 11/11.1 | 330/9.2 |
| nb-L2-bessel | 11/5.3 | 108/1.7 |

Table 1: (cpu times are in sec on an HP DL360 workstation, running Matlab 6.1)

(12)

### Regularized Sum-of-Norms Problems

min_{w ≥ 0} Σ_{i=1}^M ‖A_i w − b_i‖_2 + h(w)

A_i ∼ U[−1, 1]^{m_i × ℓ}, b_i ∼ U[−5, 5]^{m_i}, m_i ∼ U{2, 3, ..., r} (r ≥ 2).

h(w) = 1^T w + (1/3)‖w‖_3^3 (cubic regularization). Reformulate as a convex SOCP:

minimize Σ_{i=1}^M z_i + h(w)

subject to A_i w + s_i = b_i, (z_i, s_i) ∈ K^{m_i + 1}, i = 1, ..., M, w ∈ ℝ^ℓ_+.
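The reformulation rests on the epigraph identity ‖A_i w − b_i‖_2 ≤ z_i ⇔ (z_i, b_i − A_i w) ∈ K^{m_i+1}, so at optimality each z_i equals ‖A_i w − b_i‖_2. A quick numeric check (random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
ell, m_i = 4, 3
A_i = rng.uniform(-1.0, 1.0, (m_i, ell))
b_i = rng.uniform(-5.0, 5.0, m_i)
w = rng.uniform(0.0, 1.0, ell)

s_i = b_i - A_i @ w            # slack from A_i w + s_i = b_i
z_min = np.linalg.norm(s_i)    # smallest z_i with (z_i, s_i) ∈ K^{m_i+1}

# (z_min, s_i) sits on the boundary of the second-order cone:
assert np.isclose(np.linalg.norm(s_i), z_min)
```
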

| ℓ, M, r (m, n) | BFGS-Chol iter/cpu | CG-PR-Chol iter/cpu | L-BFGS-Chol iter/cpu |
|---|---|---|---|
| 500, 10, 10 (56, 566) | 352/24.6 | 1703/6.6 | 497/2.4 |
| 500, 50, 10 (283, 833) | 546/85.1 | 3173/69.0 | 700/12.4 |
| 500, 10, 50 (246, 756) | 272/36.3 | 1290/23.0 | 371/5.6 |

Table 2: (cpu times are in sec on an HP DL360 workstation, running Matlab 6.5.1, with accur=1e-3)

(13)

### Smoothing Newton Step

φFB^μ(x, y) := (x^2 + y^2 + μ^2 e)^{1/2} − x − y

with e = (1, 0, ..., 0, ..., 1, 0, ..., 0)^T (one block per cone K^{n_i}, each of length n_i and beginning with 1), μ > 0 (Fukushima,Luo,T ’02)

Given ζ, choose μ > 0 and solve

∇φFB^μ(F(ζ), G(ζ))^T ∆ζ = −φFB(F(ζ), G(ζ))

Use ∆ζ to accelerate convergence.

This requires more work per iteration. Use it judiciously.
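In the NCP case (all n_i = 1) the smoothed function reduces to the scalar formula φFB^μ(a, b) = (a² + b² + μ²)^{1/2} − a − b. A sketch checking that it is a smooth O(μ) perturbation of φFB and takes the value μ at the kink (0, 0):

```python
import numpy as np

def phi_fb(a, b):
    # nonsmooth FB function (scalar / NCP case)
    return np.hypot(a, b) - a - b

def phi_fb_mu(a, b, mu):
    # smoothed FB: sqrt(a² + b² + μ²) − a − b, smooth for μ > 0
    return np.sqrt(a * a + b * b + mu * mu) - a - b
```
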

(14)

### Observations

For our unconstrained smooth merit function approach:

• Less work/iteration, simpler matrix computation than interior-point methods.

• Applicable to convex SOCP and column-monotone SOCCP.

• Useful for warm start?

Drawback:

• Many more iters. than interior-point methods.

• Lower solution accuracy.

(15)

### SOCP from Dist. Geometry Optim

(ongoing work...)

n points in ℝ^d (d = 2, 3).

Known: x_{m+1}, ..., x_n, and Euclidean distance estimates d_ij > 0 for pairs of ‘neighboring’ points, (i, j) ∈ A ⊆ {1, ..., n} × {1, ..., n}.

Estimate x_1, ..., x_m.

Problem (nonconvex):

min_{x_1, ..., x_m} Σ_{(i,j) ∈ A} | ‖x_i − x_j‖_2^2 − d_ij^2 |

(16)

Convex relaxation:

min_{x_1, ..., x_m} Σ_{(i,j) ∈ A} max{0, ‖x_i − x_j‖_2^2 − d_ij^2}

This is an unconstrained (nonsmooth) convex program and can be reformulated as an SOCP. Alternatives?

Smooth approximation:

max{0, t} ≈ μ h(t/μ)  (μ > 0), with h smooth convex, lim_{t→−∞} h(t) = 0, lim_{t→∞} (h(t) − t) = 0.

We use h(t) = ((t^2 + 4)^{1/2} + t)/2 (CHKS).
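A quick check of the CHKS kernel: μh(t/μ) = ((t² + 4μ²)^{1/2} + t)/2, which overestimates max{0, t} by at most μ (the gap is largest at t = 0, where it equals μ):

```python
import numpy as np

def chks(t):
    # CHKS smoothing kernel h(t) = (sqrt(t² + 4) + t) / 2
    return (np.sqrt(t * t + 4.0) + t) / 2.0

def smooth_plus(t, mu):
    # μ h(t/μ) = (sqrt(t² + 4μ²) + t) / 2, a smooth upper approx of max{0, t}
    return mu * chks(t / mu)
```
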

(17)

### Smooth Approximation of Convex Relaxation

min_{x_1, ..., x_m} f_μ(x_1, ..., x_m) := Σ_{(i,j) ∈ A} μ h( (‖x_i − x_j‖_2^2 − d_ij^2)/μ )

Solve the smooth approximation using Inexact Block Coordinate Descent:

• If ‖∇_{x_i} f_μ‖ = Ω(μ), then update x_i by moving it along the Newton direction −[∇²_{x_i x_i} f_μ]^{−1} ∇_{x_i} f_μ, with the Armijo stepsize rule, and re-iterate.

• Decrease μ when ‖∇_{x_i} f_μ‖ = O(μ) ∀i.

μ_init = 1e−3, μ_end = 2e−6; decrease μ by a factor of 5. Code in Matlab.
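A toy version of the continuation scheme: one free point, three made-up anchors with exact distances, and plain gradient descent with Armijo backtracking in place of the blockwise Newton steps, so this is a sketch of the smoothing idea, not the method on the slide. Since f_μ is convex (CHKS composed with a convex function), the iterates track the relaxation's minimizer as μ shrinks:

```python
import numpy as np

def h(t):
    # CHKS kernel
    return (np.sqrt(t * t + 4.0) + t) / 2.0

def hp(t):
    # derivative of the CHKS kernel
    return (t / np.sqrt(t * t + 4.0) + 1.0) / 2.0

# One free point, three anchors surrounding it, exact distances (illustrative).
anchors = np.array([[0.3, 0.0], [-0.2, 0.3], [-0.1, -0.3]])
x_true = np.array([0.05, 0.02])
d2 = np.sum((anchors - x_true) ** 2, axis=1)   # exact squared distances

def fmu(x, mu):
    r = np.sum((x - anchors) ** 2, axis=1) - d2
    return np.sum(mu * h(r / mu))

def grad(x, mu):
    # ∇_x Σ_j μ h((‖x − a_j‖² − d_j²)/μ) = Σ_j h'(r_j/μ) · 2(x − a_j)
    r = np.sum((x - anchors) ** 2, axis=1) - d2
    return (hp(r / mu)[:, None] * 2.0 * (x - anchors)).sum(axis=0)

x, mu = np.zeros(2), 1e-1
while mu >= 1e-5:                  # continuation: shrink μ by a factor of 5
    for _ in range(300):
        g = grad(x, mu)
        f0, t = fmu(x, mu), 1.0
        # Armijo backtracking line search
        while fmu(x - t * g, mu) > f0 - 1e-4 * t * (g @ g) and t > 1e-14:
            t *= 0.5
        x = x - t * g
    mu /= 5.0
```

With exact distances and anchors surrounding the true point, the relaxation's unique minimizer is the true location, and the continuation recovers it to within O(μ_end).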

(18)

### Simulation Results

Uniformly generate x̃_1, ..., x̃_n in [−.5, .5]^2; m = 0.9n; two points are neighbors if their distance is < .06. Set d_ij = ‖x̃_i − x̃_j‖ (Biswas, Ye ’03).

| n | SOCP dim | SeDuMi cpu/Err | Inexact BCD cpu/Err |
|---|---|---|---|
| 1000 | 21472 × 33908 | 330/.48 | 373/.48 |
| 2000 | 84440 × 130060 | 12548/.57 | 2090/.52 |

Table 3: (cpu times are in secs on a Linux PC cluster, running Matlab 6.1.) Err = Σ_{i=1}^m ‖x_i − x̃_i‖_2^2.

(19)

[Figures: true solution (m = 900, n = 1000); SOCP solution found by SeDuMi; SOCP solution found by Inexact BCD]

(20)

### Observations

For our smoothing-Inexact BCD approach:

• Better cpu time than using SeDuMi.

• Add a barrier term to find the analytic center solution.

• Computation easily distributes.

• Code in Fortran (instead of Matlab) to improve time?

(21)

### Thanks, Christian, for lending the use of your laptop!
