to appear in Applied Mathematics and Computation, 2012

**A continuation approach for solving the binary quadratic program based on a class of NCP-functions**

Jein-Shan Chen ^{1}
Department of Mathematics
National Taiwan Normal University
Taipei 11677, Taiwan
E-mail: jschen@math.ntnu.edu.tw

Jing-Fan Li
Department of Mathematics
National Taiwan Normal University
Taipei 11677, Taiwan
E-mail: 697400011@ntnu.edu.tw

Jia Wu
School of Mathematical Sciences
Dalian University of Technology
Dalian 116024, China
E-mail: jwu_dora@mail.dlut.edu.cn

February 12, 2012

**Abstract.** In this paper, we consider a continuation approach for the binary quadratic program (BQP) based on a class of NCP-functions. More specifically, we recast the BQP as an equivalent minimization problem and then seek its global minimizer via a global continuation method. Such an approach was considered in [11], based on the Fischer-Burmeister function. We investigate this continuation approach again using a more general function, called the generalized Fischer-Burmeister function. However, the theoretical background for this extension cannot be easily carried over; indeed, it requires some subtle analysis.

**Keywords.** Nonlinear complementarity problem, generalized Fischer-Burmeister function, binary quadratic program.

1Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Oﬃce. The author’s work is supported by National Science Council of Taiwan.

**1** **Introduction**

In this paper, we consider the following binary quadratic program (BQP)

min x^T Q x + c^T x  subject to  x ∈ S,  (1)

where Q is an n × n symmetric matrix, c is a vector in IR^n and S is the binary discrete set {0, 1}^n. It is known that BQP is NP-hard and has a variety of applications in computer science, operations research and engineering; see [1, 3, 8, 13, 14] and the references therein.
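Since (1) ranges over a finite set, a tiny instance can be solved by brute-force enumeration; the following sketch (with made-up data Q, c) illustrates the problem being addressed and why exhaustive search does not scale, since it examines all 2^n candidates:

```python
import itertools
import numpy as np

# a small made-up instance of min x^T Q x + c^T x over x in {0,1}^n
Q = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
c = np.array([-3.0, 1.0])

def objective(x):
    return x @ Q @ x + c @ x

# exhaustive search over the 2^n binary vectors
best = min((np.array(x) for x in itertools.product([0, 1], repeat=2)),
           key=objective)
print(best, objective(best))
```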

Several continuous approaches have been proposed for solving BQP [9, 12, 15], which often need to cooperate with branch-and-bound algorithms or heuristic strategies to generate an exact or approximate solution. In [10], another type of continuous approach was proposed, which reformulates BQP as an equivalent mathematical programming problem with equilibrium constraints (MPEC) and then applies an effective algorithm to find its global solution. In this approach, NCP-functions are employed to convert the equilibrium constraints into a collection of quasi-linear equality constraints. Among others, the Fischer-Burmeister function ϕ_FB : IR² → IR defined by

ϕ_FB(a, b) = √(a² + b²) − (a + b)  (2)

is a popular one. In this paper, we investigate this continuation approach again by using a more general function ϕ_p : IR² → IR, called the generalized Fischer-Burmeister function and defined by

ϕ_p(a, b) := ‖(a, b)‖_p − (a + b),  (3)

where p > 1 is an arbitrary fixed real number and ‖(a, b)‖_p denotes the p-norm of (a, b), i.e., ‖(a, b)‖_p = (|a|^p + |b|^p)^{1/p}. In other words, in the generalized FB function ϕ_p we replace the 2-norm of (a, b) appearing in the FB function by a more general p-norm. The function ϕ_p is still an NCP-function, which naturally induces another NCP-function ψ_p : IR² → IR₊ given by

ψ_p(a, b) := (1/2)|ϕ_p(a, b)|².  (4)

For any given p > 1, the function ψ_p has been shown to possess all the favorable properties of the FB function ψ_FB.
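As a quick numerical illustration of (3) and (4) (the helper names `phi_p` and `psi_p` are ours, not from the paper):

```python
def phi_p(a, b, p=3.0):
    """Generalized Fischer-Burmeister function (3): ||(a,b)||_p - (a+b)."""
    return (abs(a)**p + abs(b)**p)**(1.0 / p) - (a + b)

def psi_p(a, b, p=3.0):
    """Induced NCP-function (4): 0.5 * |phi_p(a,b)|^2."""
    return 0.5 * phi_p(a, b, p)**2

# NCP-function property: phi_p(a,b) = 0  <=>  a >= 0, b >= 0, ab = 0
print(phi_p(0.0, 2.0))   # complementary pair, value ~ 0
print(phi_p(1.0, 1.0))   # ab != 0, value != 0
```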

Traditionally, in the continuation approach for BQP, one utilizes the fact that

x ∈ {0, 1}^n ⟺ x_i = x_i², i = 1, 2, ..., n.  (5)

In contrast, our proposed continuous optimization approach arises from the complementarity-condition formulation of the 0-1 vector x ∈ {0, 1}^n, which combines the equivalence (5) with the redundant constraints

0 ≤ x_i ≤ 1, i = 1, 2, ..., n,

so that it can generate an integer feasible solution. To find the global minimizer of our continuous optimization problem, we employ a similar approach to that in [10, 11]. In summary, the method adds a quadratic penalty term associated with the equilibrium constraints and a logarithmic barrier term associated with the box constraints −1 ≤ x_i ≤ 1, i = 1, 2, ..., n, to the objective function, and then constructs a global smoothing function.

Since the generalized Fischer-Burmeister function ψ_p is quasi-linear, the quadratic penalty for the equilibrium constraints strengthens the convexity of the global smoothing function. In particular, we show that the global smoothing function is strictly convex on the whole domain when the barrier parameter is large enough, or on a subset of its domain when the penalty parameter is large enough. Based on this feature, we use the global continuation algorithm defined in [11] via a sequence of unconstrained minimizations of this function with varying penalty and barrier parameters. Although the idea is borrowed from [11], as will be seen, the theoretical background for this extension cannot be easily carried over; indeed, extending the background materials requires some subtle analysis. Without loss of generality, in this paper we consider the case S = {−1, 1}^n. By the transformation z = (x + e)/2 of the variable x, where e is the vector of all ones in IR^n, the conclusions extend to the case S = {0, 1}^n.

**2** **Continuous formulation based on the ϕ_p function**
In this section we reformulate (1) as an equivalent continuous optimization problem based on the ϕ_p function. As will be seen, the following equivalence plays a key role; it says that a binary constraint t ∈ {a, b} with a, b ∈ IR is equivalent to a complementarity condition (or equilibrium constraint), i.e.,

t ∈ {a, b} ⟺ t − a ≥ 0, b − t ≥ 0, (t − a)(t − b) = 0.

With this, the BQP (1), written as min f(x) := x^T Q x + c^T x over x ∈ {−1, 1}^n, can be recast as a mathematical programming problem with equilibrium constraints (MPEC):

min f(x)
s.t. (1 + x_i)(1 − x_i) = 0, i = 1, 2, ..., n,  (6)
     1 + x_i ≥ 0, 1 − x_i ≥ 0, i = 1, 2, ..., n.

In fact, given any NCP-function ϕ : IR × IR → IR, the defining property of NCP-functions (see [6]) yields that the equilibrium constraint in (6) is equivalent to an equality constraint associated with ϕ:

(1 + x_i)(1 − x_i) = 0, 1 + x_i ≥ 0, 1 − x_i ≥ 0 ⟺ ϕ(1 + x_i, 1 − x_i) = 0.  (7)

Thus, together with (6) and (7), we reformulate the original BQP as the following continuous optimization problem:

min f(x)
s.t. ϕ(1 + x_i, 1 − x_i) = 0, i = 1, 2, ..., n,  (8)
     −1 ≤ x_i ≤ 1, i = 1, 2, ..., n.

Although the box constraints −1 ≤ x_i ≤ 1, i = 1, 2, ..., n in (8) are indeed redundant, we keep them on purpose. Actually, we shall see that such constraints play a crucial role in the construction of a global smoothing function for problem (8), as shown in [9, 10]. Generally speaking, most NCP-functions are non-differentiable, such as the popular Fischer-Burmeister function in (2), the generalized Fischer-Burmeister function in (3), as well as the minimum function

ϕ_min(a, b) = min{a, b}.

However, it is very interesting to observe that, when specializing ϕ in (8) to the generalized Fischer-Burmeister function, we can reach smooth constraint functions

ϕ_p(1 + x_i, 1 − x_i) = (|1 + x_i|^p + |1 − x_i|^p)^{1/p} − 2 = 0, i = 1, 2, ..., n,
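This smoothness and the location of the zeros are easy to observe numerically; a small sketch (with our own helper name `constraint`) confirms that for p = 3 the constraint function vanishes exactly at x = ±1 and is strictly negative inside (−1, 1):

```python
def constraint(x, p=3.0):
    """phi_p(1+x, 1-x) = (|1+x|^p + |1-x|^p)^(1/p) - 2."""
    return (abs(1.0 + x)**p + abs(1.0 - x)**p)**(1.0 / p) - 2.0

# zero exactly at x = -1 and x = 1, strictly negative in between
values = [constraint(x) for x in (-1.0, -0.5, 0.0, 0.5, 1.0)]
```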

and consequently some usual nonlinear programming solvers can be employed to design an effective algorithm for solving problem (8). In view of this, in this paper we pay attention to the following equivalent continuous formulation obtained from the generalized Fischer-Burmeister function:

min f(x)
s.t. ϕ_p(1 + x_i, 1 − x_i) = 0, i = 1, 2, ..., n,  (9)
     −1 ≤ x_i ≤ 1, i = 1, 2, ..., n.

We also note that the equivalence x_i ∈ {−1, 1} ⟺ x_i² = 1 gives another type of continuous optimization problem:

min f(x)
s.t. x_i² = 1, i = 1, 2, ..., n,  (10)
     −1 ≤ x_i ≤ 1, i = 1, 2, ..., n.

The formulation (10) looks simple and friendly at first glance; nonetheless, the following remarkable advantages explain why we still stick to the smooth constrained optimization problem (9):

**(i)** The quasi-linearity of the generalized Fischer-Burmeister function implies that its feasible set tends to be convex.

**(ii)** The equality constraints ϕ_p(1 + x_i, 1 − x_i) = 0, i = 1, 2, ..., n incorporate the equivalent formulation x_i² = 1, i = 1, 2, ..., n, of x ∈ {−1, 1}^n together with its relaxation −1 ≤ x_i ≤ 1, i = 1, 2, ..., n, which indicates that, when solving (9) with a penalty function method, an implicit interior-point constraint is additionally imposed.

**(iii)** From Proposition 2.1 below, the quadratic penalty function of the equality constraints is strictly convex in a very large region when the penalty parameter is large enough.

These advantages contribute greatly to the search for an optimal solution, or a favorable suboptimal solution, of (1), as will be shown later. Before proving the main proposition, we first introduce several technical lemmas that are important for building up the background materials of our extension.

**Lemma 2.1** Let f, g be real-valued functions from IR to IR₊. Suppose f, g satisfy

(i) f′(x) > 0 and g′(x) < 0 for all x ∈ (a, b),

(ii) f″(x) < 0 and g″(x) < 0 for all x ∈ (a, b),

(iii) (fg)′(a) < 0 and f(a) ≥ g(a).

Then (fg)′(x) < 0 for all x ∈ (a, b).

**Proof.** To achieve our result, we need to verify two things: (i) (fg)′(a) < 0 and (ii) (fg)′(x) is decreasing on (a, b). We verify these as follows.

(i) From the assumptions and the product rule, it is clear that

(fg)′(a) = f′(a)g(a) + f(a)g′(a) < 0.

(ii) Since (fg)′(x) = f′(x)g(x) + f(x)g′(x), to show that (fg)′(x) is decreasing on (a, b) it is enough to argue that both f′(x)g(x) and f(x)g′(x) are decreasing on (a, b). We look at the first term. Note that

(f′(x)g(x))′ = f″(x)g(x) + f′(x)g′(x) ≤ 0 ∀x ∈ (a, b)

because f″(x) < 0, g(x) ≥ 0, f′(x) > 0 and g′(x) < 0. This shows that f′(x)g(x) is decreasing on (a, b). That f(x)g′(x) is decreasing on (a, b) can be concluded similarly.

Thus, from all the above, the proof is complete. □

The conclusion of the next lemma is simple and neat; however, its proof is very tedious. Indeed, the main idea behind it is approximation.

**Lemma 2.2** Let ψ_p be defined as in (4). Then ψ_p″(1 + t, 1 − t) is positive at t = ±√(2^{1/3} − 1) for all p ≥ 2.

**Proof.** By symmetry, we only prove the case t = √(2^{1/3} − 1). First, from direct computation and simplification of the expression of ψ_p″, we have

ψ_p″(1 + t, 1 − t) = [((1 + t)^p + (1 − t)^p)^{1/p} / ((1 + t)²(1 − t)²((1 + t)^p + (1 − t)^p)²)] × F(p, t),  (11)

where F(p, t) = f_0(p, t)[f_1(p, t) + f_2(p, t) + f_3(p, t)] + f_4(p, t) with

f_0(p, t) = ((1 + t)^p + (1 − t)^p)^{1/p},
f_1(p, t) = (1 − t)²(1 + t)^{2p},
f_2(p, t) = (1 + t)²(1 − t)^{2p},
f_3(p, t) = (2t² + 4p − 6)(1 + t)^p(1 − t)^p,
f_4(p, t) = (8 − 8p)(1 − t²)^p.

Since the first term on the right side of (11) is always positive for all p ≥ 2, it suffices to show that F(p, √(2^{1/3} − 1)) > 0 for all p ≥ 2. However, it is very hard to establish this fact directly. Our strategy is to construct a function A : IR → IR such that

A(p) ≤ F(p, √(2^{1/3} − 1)) ∀p ≥ 2.  (12)

The special feature of A(p) is that it is easier to verify A(p) ≥ 0 for all p ≥ 2, so that our goal can be reached. We now proceed with the proof by carrying out these two steps.

Step (1): Construct a function A(·) satisfying (12). The function F(·, ·) is composed of f_0, f_1, f_2, f_3 and f_4, so for each f_i we construct a corresponding piecewise function a_i such that a_i(p) ≤ f_i(p, √(2^{1/3} − 1)) for i = 0, 1, 2, 3, 4, and then combine them to build up the function A(·). To make the construction easier to follow, we include some pictures during the proof.

(i) First, we explain how to set up a_0(p). Since the second derivative of f_0 with respect to p is positive at t = √(2^{1/3} − 1) for all p ≥ 2, f_0 is strictly convex at t = √(2^{1/3} − 1) for all p ≥ 2 (the detailed arguments are provided in Appendix A). Hence, we consider the piecewise function

a_0(p) = −(1/8)(p − 2) + 2^{2/3}   if 2 ≤ p ≤ −8√(2^{1/3} − 1) − 6 + 8(2^{2/3}),
a_0(p) = √(2^{1/3} − 1) + 1        if p ≥ −8√(2^{1/3} − 1) − 6 + 8(2^{2/3}).

Figure 1 depicts the relation between a_0(p) and f_0(p, √(2^{1/3} − 1)). Besides, the facts

a_0(2) = f_0(2, √(2^{1/3} − 1)),
lim_{p→2⁺} a_0′(p) < (d/dp) f_0(2, √(2^{1/3} − 1)),
a_0″(p) = 0 < (d²/dp²) f_0(p, √(2^{1/3} − 1))

indicate that the first part of a_0(p) is less than f_0(p, √(2^{1/3} − 1)) for 2 < p ≤ −8√(2^{1/3} − 1) − 6 + 8(2^{2/3}). On the other hand, the fact

lim_{p→∞} f_0(p, √(2^{1/3} − 1)) = √(2^{1/3} − 1) + 1

says that the second part of a_0(p) is less than or equal to f_0(p, √(2^{1/3} − 1)) for p ≥ −8√(2^{1/3} − 1) − 6 + 8(2^{2/3}). Thus, we conclude that

a_0(p) ≤ f_0(p, √(2^{1/3} − 1)) ∀p ≥ 2.

Figure 1: The graphs of a_0 and f_0.

(ii) Secondly, we consider the quadratic function

a_1(p) = (1 − √(2^{1/3} − 1))²(1 + √(2^{1/3} − 1))⁴ ln(1 + √(2^{1/3} − 1)) (p − 1)² + (1 − √(2^{1/3} − 1))²(1 + √(2^{1/3} − 1))⁴ [1 − ln(1 + √(2^{1/3} − 1))].

Figure 2 depicts the relation between a_1(p) and f_1(p, √(2^{1/3} − 1)). Again, using the facts

a_1(2) = f_1(2, √(2^{1/3} − 1)),
a_1′(2) = (d/dp) f_1(2, √(2^{1/3} − 1)),
a_1″(p) ≤ (d²/dp²) f_1(p, √(2^{1/3} − 1)) ∀p ≥ 2,

we immediately achieve

a_1(p) ≤ f_1(p, √(2^{1/3} − 1)) ∀p ≥ 2.

(iii) Thirdly, we consider the function

a_2(p) = −(1/5)(p − 2) + (1 − √(2^{1/3} − 1))⁴(1 + √(2^{1/3} − 1))²   if 2 ≤ p ≤ 12 + 20(2^{1/3} − 2^{2/3}) + √(2^{1/3} − 1)(40(2^{1/3}) − 40 − 10(2^{2/3})),
a_2(p) = 0   if p ≥ 12 + 20(2^{1/3} − 2^{2/3}) + √(2^{1/3} − 1)(40(2^{1/3}) − 40 − 10(2^{2/3})).

Figure 3: The graphs of a_2 and f_2.

Figure 3 depicts the relation between a_2(p) and f_2(p, √(2^{1/3} − 1)). We observe that the function f_2 is positive and convex for p ≥ 2; then the facts

a_2(2) = f_2(2, √(2^{1/3} − 1)),
lim_{p→2⁺} a_2′(p) < (d/dp) f_2(2, √(2^{1/3} − 1)),
a_2″(p) = 0 < (d²/dp²) f_2(p, √(2^{1/3} − 1)) ∀p > 2

yield a_2(p) ≤ f_2(p, √(2^{1/3} − 1)) for all p ≥ 2.

(iv) Fourthly, we consider the piecewise function

a_3(p) = [√(2 − 2^{1/3})(24 − 12(2^{2/3})) + 16(2^{2/3} − 2^{1/3}) − 8] p + √(2 − 2^{1/3})(24(2^{2/3}) − 48) + 40(2^{2/3} − 2^{1/3}) + 20   if 2 ≤ p ≤ 5/2,
a_3(p) = −((4/31)(2^{1/3}) + 4/31)(2 − 2^{1/3})^{5/2} p + ((72/31)(2^{1/3}) + 72/31)(2 − 2^{1/3})^{5/2}   if 5/2 ≤ p ≤ 18,
a_3(p) = 0   if p ≥ 18.

Figure 4 depicts the relation between a_3(p) and f_3(p, √(2^{1/3} − 1)). The relation is clear from the picture; however, we need to go through three subcases to verify it mathematically.

If 2 ≤ p ≤ 5/2, we compute f_3(p, √(2^{1/3} − 1)) = (2(2^{1/3}) − 8 + 4p)(2 − 2^{1/3})^p. Moreover, we have

(d/dp) f_3(p, √(2^{1/3} − 1)) = (2 − 2^{1/3})^p [4 + (2(2^{1/3}) − 8 + 4p) ln(2 − 2^{1/3})],
(d²/dp²) f_3(p, √(2^{1/3} − 1)) = (2 − 2^{1/3})^p ln(2 − 2^{1/3}) [8 + (2(2^{1/3}) − 8 + 4p) ln(2 − 2^{1/3})].
Then, the facts

a_3(2) = f_3(2, √(2^{1/3} − 1)),
a_3(5/2) = f_3(5/2, √(2^{1/3} − 1)),
lim_{p→2⁺} a_3′(p) ≤ (d/dp) f_3(2, √(2^{1/3} − 1)),

and f_3(p, √(2^{1/3} − 1)) being concave on [2, 5/2] imply a_3(p) ≤ f_3(p, √(2^{1/3} − 1)) in this case.

If 5/2 ≤ p ≤ 18, using the facts that

a_3(5/2) = f_3(5/2, √(2^{1/3} − 1)),
lim_{p→5/2⁺} a_3′(p) ≤ (d/dp) f_3(5/2, √(2^{1/3} − 1)),

and that the equation a_3(p) = f_3(p, √(2^{1/3} − 1)) has only one solution, at p = 5/2, we obtain a_3(p) ≤ f_3(p, √(2^{1/3} − 1)) in this case.

If p ≥ 18, knowing that f_3(p, √(2^{1/3} − 1)) > 0 for all p, it is clear that a_3(p) ≤ f_3(p, √(2^{1/3} − 1)) in this case.

(v) Finally, notice that the second derivative of f_4 with respect to p is positive at t = √(2^{1/3} − 1) for all p ≥ (−2 + ln(2 − 2^{1/3}))/ln(2 − 2^{1/3}), and negative for p ≤ (−2 + ln(2 − 2^{1/3}))/ln(2 − 2^{1/3}); hence f_4 is strictly convex at t = √(2^{1/3} − 1) for all p ≥ (−2 + ln(2 − 2^{1/3}))/ln(2 − 2^{1/3}) and strictly concave for all p ≤ (−2 + ln(2 − 2^{1/3}))/ln(2 − 2^{1/3}). We therefore consider the piecewise function

a_4(p) = −(153/50)p − 8(2 − 2^{1/3})² + 153/25   if 2 ≤ p ≤ 5/2,
a_4(p) = [−497/50 + 16(2 − 2^{1/3})²] p + 583/25 − 48(2 − 2^{1/3})²   if 5/2 ≤ p ≤ 3,
a_4(p) = −(13/10)p − 13/5   if 3 ≤ p ≤ 49/13,
a_4(p) = −15/2   if p ≥ 49/13.

Figure 5 depicts the relation between a_4(p) and f_4(p, √(2^{1/3} − 1)). Again, we discuss several subcases to prove the relation mathematically.

For 2 ≤ p ≤ 5/2, the facts

a_4(2) = f_4(2, √(2^{1/3} − 1)),
lim_{p→2⁺} a_4′(p) < (d/dp) f_4(2, √(2^{1/3} − 1)),
a_4″(p) = 0 < (d²/dp²) f_4(p, √(2^{1/3} − 1))

yield that the first part of a_4(p) is less than f_4(p, √(2^{1/3} − 1)) in this case.

For 5/2 ≤ p ≤ 3, using the facts

a_4(3) < f_4(3, √(2^{1/3} − 1)),
lim_{p→3⁻} a_4′(p) > (d/dp) f_4(3, √(2^{1/3} − 1)),  (13)
a_4″(p) = 0 < (d²/dp²) f_4(p, √(2^{1/3} − 1)),

we have that a_4(p) is less than f_4(p, √(2^{1/3} − 1)) in this case.

For 3 ≤ p ≤ 49/13, we know that

lim_{p→3⁺} a_4′(p) > (d/dp) f_4(3, √(2^{1/3} − 1)).

This together with (13) gives that a_4(p) is less than f_4(p, √(2^{1/3} − 1)) in this case.

*Figure 6: The graphs of A and F*

For p ≥ 49/13, from f_4 being strictly convex for all p ≤ (−2 + ln(2 − 2^{1/3}))/ln(2 − 2^{1/3}) and strictly concave for all p ≥ (−2 + ln(2 − 2^{1/3}))/ln(2 − 2^{1/3}), we know

(d/dp) f_4((−1 + ln(2 − 2^{1/3}))/ln(2 − 2^{1/3})) = 0 and lim_{p→∞} f_4(p) = 0,

which lead to f_4(p) > −15/2 for all p ≥ 2. Thus, a_4(p) ≤ f_4(p, √(2^{1/3} − 1)) in this case.

Now we are ready to define a function A : IR → IR satisfying (12). Following the idea above, the function is defined by

A(p) = a_0(p)[a_1(p) + a_2(p) + a_3(p)] + a_4(p).

According to our constructions of the a_i(p), it is clear that A(p) ≤ F(p, √(2^{1/3} − 1)) for all p ≥ 2. Figure 6 shows the relation between A(p) and F(p, √(2^{1/3} − 1)).

Step (2): We show that A(p) ≥ 0 for all p ≥ 2. Notice that A(p) is piecewise smooth, hence A′(p) is a piecewise function. Indeed, the expression of A′(p) is rather complicated; we provide the full expression for A′(p) in Appendix C, which helps in understanding its structure. The key point is that, from the expression of A′(p), we can verify the following facts:

A(2) = 0,
A(−8√(2^{1/3} − 1) − 6 + 8(2^{2/3})) > A(5/2) > 0,

and

A′(p) < 0 if p ∈ (5/2, −8√(2^{1/3} − 1) − 6 + 8(2^{2/3})),
A′(p) > 0 otherwise,

with the exception of points of discontinuity of A′. Thus, we conclude A(p) ≥ 0 for all p ≥ 2 and (12) is satisfied, which implies F(p, √(2^{1/3} − 1)) ≥ 0 for all p ≥ 2. The proof is then complete. □
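As a numerical sanity check of the factorization (11) (ours, not part of the original argument), one can evaluate F(p, √(2^{1/3} − 1)) directly from the formulas for f_0, ..., f_4 above; on a grid of p ≥ 2 the computed values are nonnegative, with p = 2 as the boundary case where the computed value is zero up to rounding:

```python
import numpy as np

t = np.sqrt(2.0**(1.0 / 3.0) - 1.0)   # the point considered in Lemma 2.2

def F(p):
    """F(p, t) assembled from f_0, ..., f_4 as in the proof."""
    f0 = ((1 + t)**p + (1 - t)**p)**(1.0 / p)
    f1 = (1 - t)**2 * (1 + t)**(2 * p)
    f2 = (1 + t)**2 * (1 - t)**(2 * p)
    f3 = (2 * t**2 + 4 * p - 6) * (1 + t)**p * (1 - t)**p
    f4 = (8 - 8 * p) * (1 - t**2)**p
    return f0 * (f1 + f2 + f3) + f4

for p in np.linspace(2.0, 30.0, 200):
    assert F(p) >= -1e-9       # nonnegative on the grid; p = 2 is the boundary case
```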

**Lemma 2.3** (a) Let f be a convex function defined on a convex set C in IR^n and g be a nondecreasing convex function defined on an interval I in IR. Suppose f(C) ⊆ I. Then the composite function g ∘ f defined by (g ∘ f)(x) = g(f(x)) is convex on C.

(b) Suppose ϕ_1 : U → IR is a twice continuously differentiable function on a compact set U ⊂ IR^n and ϕ_2 : X → IR is a twice continuously differentiable function such that the minimum eigenvalue of its Hessian matrix ∇²_xx ϕ_2(x) is greater than ε (> 0) for all x ∈ X, where X ⊂ U. Then there exists a constant β̂ > 0 such that ϕ_1 + βϕ_2 is a strictly convex function on X for β > β̂.

**Proof.** (a) See [2, Chap III, Lemma 1.4]. (b) See [9, Theorem 3.1]. □

**Proposition 2.1** Let ϕ_p and ψ_p be defined as in (3) and (4), respectively. Then, for any fixed p ≥ 2, the following hold.

(a) The function ϕ_p(1 + t, 1 − t) is strictly convex for all t ∈ IR.

(b) The function ψ_p(1 + t, 1 − t) is strictly convex for all t ∉ [−√(2^{1/3} − 1), √(2^{1/3} − 1)].

**Proof.** (a) It is known that ϕ_p is a convex function [4, 5, 6]. Note that ϕ_p(1 + t, 1 − t) is a composition of ϕ_p and an affine function; thus it is convex (the composition of two convex functions is not necessarily convex, but our case guarantees convexity because one of them is affine).

(b) Due to the symmetry of ψ_p(1 + t, 1 − t), it is enough to show that ψ_p(1 + t, 1 − t) is strictly convex for t ≥ √(2^{1/3} − 1). To proceed, we discuss two cases.

(i) If t ≥ 1, the function ψ_p(1 + t, 1 − t) can be regarded as a composition of ϕ_p(1 + t, 1 − t) and h(·) = (·)². Because h(·) is a nondecreasing convex function on [0, ∞) and ϕ_p(1 + t, 1 − t) is positive and strictly convex for t ≥ 1, from Lemma 2.3(a) we obtain that ψ_p(1 + t, 1 − t) is strictly convex for t ≥ 1.

(ii) If 1 > t ≥ √(2^{1/3} − 1), we know that

−ψ_p′(1 + t, 1 − t) = −ϕ_p(1 + t, 1 − t)ϕ_p′(1 + t, 1 − t),
−ψ_p″(1 + t, 1 − t) = −[ϕ_p′(1 + t, 1 − t)]² − ϕ_p(1 + t, 1 − t)ϕ_p″(1 + t, 1 − t).

Then it suffices to show that −ψ_p″(1 + t, 1 − t) < 0 for p ≥ 2. To this end, we compute the third derivative of ϕ_p(1 + t, 1 − t) with respect to t and prove that it is negative. To see this,

ϕ_p‴(1 + t, 1 − t) = [4((1 + t)^p + (1 − t)^p)^{1/p}(1 + t)^p(1 − t)^p(p − 1) / ((1 + t)³(t − 1)³((1 + t)^p + (1 − t)^p)³)] × T(p, t),  (14)

where T is the real-valued function defined by

T(p, t) = (1 + t)^p(2p − 1 − 3t) − (1 − t)^p(2p + 3t − 1).

It is not hard to verify that the first term on the right side of (14) is always negative for all p ≥ 2. Thus, we only need to show T(p, t) > 0 for all p ≥ 2, which we do by verifying T(2, t) > 0 and T(p, t) > T(2, t) for all p > 2. This is done as follows.

(i) Because T(2, t) = 6t − 6t³, it is clear that T(2, t) > 0 for 0 < t < 1.

(ii) To show that T(p, t) > T(2, t) for p > 2, we first argue that

(1 + t)^p > (1 − t)^{p−1}(2p + 3t − 1) ∀p > 2;  (15)

equivalently, (1 + t)^p / [(1 − t)^{p−1}(2p + 3t − 1)] is greater than 1 for all p > 2. Therefore, we consider the derivative of this quotient with respect to p:

(d/dp) [(1 + t)^p / ((1 − t)^{p−1}(2p + 3t − 1))] = [(1 + t)^p / ((1 − t)^{p−1}(2p + 3t − 1)²)] × [(1 − 3t − 2p) ln(1 − t) + (2p + 3t − 1) ln(1 + t) − 2].  (16)

Observing that both terms on the right side of (16) are positive for all p > 2 and using (1 + t)^p / [(2p + 3t − 1)(1 − t)^{p−1}] > 1 when p = 2, we achieve (15). Secondly, we know that

2p − 1 − 3t > 1 − t ∀p > 2.  (17)

Combining (15) and (17) yields T(p, t) > T(2, t) for all p > 2, and hence

ϕ_p‴(1 + t, 1 − t) < 0 ∀p ≥ 2.

Then, applying Lemma 2.1 with f(t) = −ϕ_p(1 + t, 1 − t) and g(t) = ϕ_p′(1 + t, 1 − t) gives the desired result. □

The result of Proposition 2.1(b) could be improved in some sense. More specifically, the interval on which ψ_p(1 + t, 1 − t) is strictly convex varies as p changes. We originally wished to determine the exact interval on which ψ_p(1 + t, 1 − t) is strictly convex for each p. However, it is very hard to find a closed form depending on p that reflects this feature (indeed, it may not be possible, in our opinion). As a compromise, we find an appropriate common interval for all p ≥ 2, as shown in Proposition 2.1(b). The following two figures (Figures 7-8) depict the geometric view of what we just mentioned.
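The common interval in Proposition 2.1(b) can also be checked numerically; the following finite-difference sketch (our own sanity check, not from the paper) confirms positive curvature of ψ_p(1 + t, 1 − t) on a grid of points outside [−√(2^{1/3} − 1), √(2^{1/3} − 1)]:

```python
import numpy as np

def psi(t, p):
    """psi_p(1+t, 1-t) for |t| < 1."""
    phi = ((1.0 + t)**p + (1.0 - t)**p)**(1.0 / p) - 2.0
    return 0.5 * phi * phi

t_star = np.sqrt(2.0**(1.0 / 3.0) - 1.0)
h = 1e-5
# central-difference curvature is positive to the right of t_star (symmetric on the left)
for p in (2.0, 3.0, 5.0, 10.0):
    for t in np.linspace(t_star + 0.05, 0.95, 10):
        curv = (psi(t + h, p) - 2.0 * psi(t, p) + psi(t - h, p)) / (h * h)
        assert curv > 0.0
```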

Figure 7: The graphs of ψ_p(1 + t, 1 − t) for different p (p = 1.1, 1.5, 2, 3, 10).

**3** **Global continuation algorithm for BQP**

Figure 8: The graph of ψ_p(1 + t, 1 − t) with a fixed p.

Due to the logarithmic barrier function being strictly convex and Proposition 2.1, we now introduce the quadratic penalty Σ_{i=1}^n ψ_p(1 + x_i, 1 − x_i) for the equality constraints and the logarithmic barrier −Σ_{i=1}^n [ln(1 + x_i) + ln(1 − x_i)] for the box constraints into (9), and construct a global smoothing function

ϕ(x, α, τ) = f(x) + α Σ_{i=1}^n ψ_p(1 + x_i, 1 − x_i) − τ Σ_{i=1}^n [ln(1 + x_i) + ln(1 − x_i)],  (18)

where τ > 0 is a barrier parameter and α > 0 is a penalty parameter. The next property indicates the strict convexity of ϕ(x, α, τ) on (−1, 1)^n when the barrier parameter is large enough, and the strict convexity of ϕ(x, α, τ) on a large subset of its domain, for all τ > 0, when the penalty parameter is large enough.
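A direct transcription of (18) can be sketched as follows (the helper name `smoothing` is ours, and the data in the example are purely illustrative):

```python
import numpy as np

def smoothing(x, alpha, tau, Q, c, p=3.0):
    """Global smoothing function (18); x must lie strictly inside (-1, 1)^n."""
    phi = (np.abs(1 + x)**p + np.abs(1 - x)**p)**(1.0 / p) - 2.0
    penalty = 0.5 * np.sum(phi**2)                      # sum_i psi_p(1+x_i, 1-x_i)
    barrier = -np.sum(np.log(1 + x) + np.log(1 - x))    # log-barrier for the box
    return x @ Q @ x + c @ x + alpha * penalty + tau * barrier

# illustrative data: at x = 0 the barrier vanishes and only the penalty remains
value = smoothing(np.zeros(2), 1.0, 1.0, np.eye(2), np.zeros(2))
```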

**Proposition 3.1** Let ϕ(x, α, τ) be the function defined by (18). Then the following hold.

(a) There exists a constant τ̂ > 0 such that if τ > τ̂ and α > 0, then ϕ(x, α, τ) is strictly convex on (−1, 1)^n.

(b) There exists a constant α̂ > 0 such that if α > α̂ and τ > 0, then ϕ(x, α, τ) is strictly convex on the set D := {x ∈ (−1, 1)^n : |x_i| > √(2^{1/3} − 1), i = 1, 2, ..., n}.

**Proof.** (a) Set

ϕ_a(x) := f(x) + α Σ_{i=1}^n ψ_p(1 + x_i, 1 − x_i),
ϕ_b(x) := −Σ_{i=1}^n [ln(1 + x_i) + ln(1 − x_i)].

Then the Hessian matrix of ϕ_b(x) at any x ∈ (−1, 1)^n is given by

∇²_xx ϕ_b(x) = diag(1/(1 − x_1)² + 1/(1 + x_1)², ..., 1/(1 − x_n)² + 1/(1 + x_n)²),

where diag(v) denotes the diagonal matrix with the components of v as its diagonal elements. Moreover, the function 1/(1 − x_i)² + 1/(1 + x_i)² attains its minimum value 2 at x_i = 0, and so every diagonal element of ∇²_xx ϕ_b(x) is at least 2. Thus, letting U = [−1, 1]^n and ε = 2 and using Lemma 2.3(b) yields the desired result.

(b) Set ϕ_a(x) := f(x) and

ϕ_b(x) := Σ_{i=1}^n ψ_p(1 + x_i, 1 − x_i) − (τ/α) Σ_{i=1}^n [ln(1 + x_i) + ln(1 − x_i)].

From the proof of Lemma 2.2, it follows that

∇²_xx (Σ_{i=1}^n ψ_p(1 + x_i, 1 − x_i)) = diag(ψ_p″(1 + x_1, 1 − x_1), ..., ψ_p″(1 + x_n, 1 − x_n)),

where ψ_p″(1 + x_i, 1 − x_i) can be found in (11). Now, taking f(t) = −ϕ_p(1 + t, 1 − t), g(t) = ϕ_p′(1 + t, 1 − t) and applying part (ii) of the proof of Lemma 2.1, we have

ψ_p″(1 + t, 1 − t) > ψ_2″(1 + t, 1 − t) ∀p > 2.

In addition, from [11, Lemma 3.1], we also have

ψ_2″(1 + t, 1 − t) = (2√((2t² + 2)³) − 8) / √((2t² + 2)³) > 0.0004 ∀|t| > 0.51.
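This closed form, and the bound 0.0004, can be checked against a central finite difference (a sanity check of ours, not part of the original proof):

```python
import numpy as np

def psi2(t):
    """psi_2(1+t, 1-t) = 0.5 * (sqrt(2t^2+2) - 2)^2."""
    return 0.5 * (np.sqrt(2.0 * t * t + 2.0) - 2.0)**2

def psi2_second(t):
    """Closed form from [11, Lemma 3.1]: 2 - 8/sqrt((2t^2+2)^3)."""
    return 2.0 - 8.0 / np.sqrt((2.0 * t * t + 2.0)**3)

h = 1e-5
for t in (0.52, 0.6, 0.8, -0.7):
    fd = (psi2(t + h) - 2.0 * psi2(t) + psi2(t - h)) / (h * h)
    assert abs(fd - psi2_second(t)) < 1e-4   # finite difference matches closed form
    assert psi2_second(t) > 0.0004           # the bound used above
```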

Therefore, the above two inequalities imply

ψ_p″(1 + t, 1 − t) > 0.0004 ∀|t| > 0.51 and ∀p ≥ 2.

This indicates that every diagonal element of ∇²_xx Σ_{i=1}^n ψ_p(1 + x_i, 1 − x_i) is at least 0.0004. Using the fact that the Hessian matrix of −(τ/α) Σ_{i=1}^n [ln(1 + x_i) + ln(1 − x_i)] is positive definite, we obtain that every diagonal element of ∇²_xx ϕ_b is at least 0.0004. Now taking

U = [−1, 1]^n, X = D and ε = 0.0004

and applying Lemma 2.3(b) gives the desired conclusion. □

As remarked in [11], the result of Proposition 3.1 offers motivation to use the function ϕ(x, α, τ) to develop a global continuation algorithm for the constrained optimization problem (9). This method generates a global optimal solution, or at least a desirable local solution, via a sequence of unconstrained minimizations

min_{x ∈ IR^n} ϕ(x, α_k, τ_k)  (19)

with an increasing penalty parameter sequence {α_k} and a decreasing barrier parameter sequence {τ_k}. Note that to ensure the strict convexity of ϕ(x, α_k, τ_k), we have to start the algorithm with a sufficiently large initial value τ_0. As the iteration goes on, the convexity of the logarithmic barrier −τ_k Σ_{i=1}^n [ln(1 + x_i) + ln(1 − x_i)] becomes weak, but the strict convexity of ϕ(x, α_k, τ_k) can still be guaranteed due to the increase of the penalty parameter α_k. This means that for each k ∈ IN, the minimization problem (19) can be easily solved if we adjust the parameters α and τ skillfully.

**Algorithm 3.1**

**Step 0** Given parameters α_0, τ_0, σ_1 > 1, σ_2 ∈ (0, 1) and ϵ > 0, select a starting point x̂⁰ and set k = 0.

**Step 1** Solve the unconstrained minimization problem (19) with the starting point x̂^k, and denote its optimal solution by x^k.

**Step 2** If √(Σ_{i=1}^n ψ_p(1 + x_i^k, 1 − x_i^k)) ≤ ϵ, terminate the algorithm; else go to Step 3.

**Step 3** Update the parameters α_{k+1} = σ_1 α_k and τ_{k+1} = σ_2 τ_k.

**Step 4** Set x̂^{k+1} = x^k, k = k + 1 and go to Step 1.
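The scheme above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the inner solver is plain gradient descent with backtracking instead of BFGS with a Wolfe-Powell line search, p is fixed at 3, the parameter values are made up, and the final iterate is rounded by taking componentwise signs:

```python
import numpy as np

def value_and_grad(x, alpha, tau, Q, c, p=3.0):
    # value and gradient of the smoothing function (18), x strictly inside (-1,1)^n
    a, b = 1.0 + x, 1.0 - x
    g = (a**p + b**p)**(1.0 / p)
    phi = g - 2.0
    dphi = g**(1.0 - p) * (a**(p - 1.0) - b**(p - 1.0))
    val = x @ Q @ x + c @ x + alpha * 0.5 * np.sum(phi**2) \
          - tau * np.sum(np.log(a) + np.log(b))
    grad = 2.0 * Q @ x + c + alpha * phi * dphi - tau * (1.0 / a - 1.0 / b)
    return val, grad

def inner_solve(x, alpha, tau, Q, c, iters=500):
    # Step 1 stand-in: gradient descent with backtracking, kept inside (-1,1)^n
    for _ in range(iters):
        val, grad = value_and_grad(x, alpha, tau, Q, c)
        step = 0.1
        while step > 1e-12:
            x_new = x - step * grad
            if np.max(np.abs(x_new)) < 1.0:
                if value_and_grad(x_new, alpha, tau, Q, c)[0] < val:
                    break
            step *= 0.5
        else:
            return x          # no descent step found: (near-)stationary point
        x = x_new
    return x

def continuation(Q, c, alpha0=1.0, tau0=1.0, sigma1=2.0, sigma2=0.5,
                 eps=1e-3, max_outer=40):
    x = 0.9 * np.ones(len(c))                        # Step 0
    alpha, tau = alpha0, tau0
    for _ in range(max_outer):
        x = inner_solve(x, alpha, tau, Q, c)         # Step 1
        phi = (np.abs(1 + x)**3 + np.abs(1 - x)**3)**(1.0 / 3.0) - 2.0
        if np.sqrt(np.sum(0.5 * phi**2)) <= eps:     # Step 2
            break
        alpha, tau = sigma1 * alpha, sigma2 * tau    # Step 3; Step 4 reuses x
    return np.sign(x)                                # round to {-1,1}^n
```

For example, with Q = 0.1·I and c = (−3, 3)^T the loop recovers x = (1, −1), which matches the brute-force optimum over {−1, 1}².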

Is Algorithm 3.1 well-defined? To answer this, we give an existence theorem for solutions of the unconstrained minimization problem (19). In fact, its proof can be found in [11, Lemma 3.2]; we give a brief proof here for completeness.

**Proposition 3.2** Let ϕ(x, α_k, τ_k) be the function defined as in (18). Then the following hold.

(a) For each k ∈ IN, the minimization problem (19) has a solution x^k.

(b) From (a), there exists a τ̂ such that the solution of problem (19) is unique when τ_k > τ̂.

**Proof.** (a) Let X_1 be a compact subset of (−1, 1)^n. Since ϕ(x, α_k, τ_k) is continuous and X_1 is compact, there exist two real numbers L_1 and U_1 such that

L_1 ≤ ϕ(x, α_k, τ_k) ≤ U_1 ∀x ∈ X_1.

On the other hand, we note that ϕ(x, α_k, τ_k) → +∞ when x_{i_0} → 1⁻ or x_{i_0} → −1⁺ for some i_0 ∈ {1, 2, ..., n}. Hence, the continuity of ϕ(x, α_k, τ_k) implies that there exists a δ with 0 < δ < 1/4 such that

ϕ(x, α_k, τ_k) ≥ U_1 ∀x ∈ ((−1, −1 + δ] ∪ [1 − δ, 1))^n.  (20)

*Let X = [−1+δ, 1−δ]*^{n}*. Again, ϕ(x, α*_{k}*, τ*_{k}*) being continuous on the compact set X implies*
that there exists an ˆ*x ∈ X such that for each k ∈ IN*

*ϕ(*ˆ*x, α*_{k}*, τ*_{k}*) ≤ ϕ(x, α*_{k}*, τ*_{k}*) ∀x ∈ X.*

*Moreover, due to X*_{1} *⊆ X, we know*

*ϕ(*ˆ*x, α*_{k}*, τ*_{k}*) ≤ U*_{1}*.* (21)

Combining (20) and (21) yields that

*ϕ(*ˆ*x, α*_{k}*, τ*_{k}*) ≤ ϕ(x, α*_{k}*, τ*_{k}*) ∀x ∈ (−1, 1)*^{n} *\ X.*

Thus, ˆ*x is exactly the desired solution x*^{k}.

*(b) From the conclusion of Proposition 3.1(a), ϕ(x, α*_{k}*, τ*_{k}*) is strictly convex on (−1, 1)*^{n}*. Hence x*^{k} *is unique.* □

**4** **Numerical experiments**

In this section, we report numerical results of Algorithm 3.1 for solving the unconstrained binary quadratic programming problem. Our numerical experiments are carried out in Matlab (version 7.8) running on a PC with an Intel Core 2 Q8200 2.33 GHz CPU and 2.00 GB of memory.

In our numerical experiments, we employ the BFGS algorithm with strong Wolfe-Powell
line search to solve the unconstrained minimization problem (19), and terminate the
*current iteration as long as x*^{k} satisﬁes the following criterion:

*∥∇*_{x}*ϕ(x*^{k}*, α*_{k}*, τ*_{k}*)∥ ≤ 5.0e− 3.*

The values for the parameters involved in Algorithm 3.1 are chosen as follows:

*α*_{0} *= 0, σ*_{1} *= 2, σ*_{2} *= 0.5, ϵ = 1.0e− 3,*

*and the initial barrier parameter τ*_{0} varies with the scale of problems (here we choose its
value the same as that in [11]). The starting point ˆ*x*^{0} *= 0.9(1, 1, . . . , 1)*^{T} *∈ IR*^{n} is used
*for all test problems. To obtain an integer solution x*^{∗} *from the ﬁnal iterate point* ˆ*x*^{∗} *of Algorithm 3.1, we let*

*x*^{∗}_{i} = { *−1 if |*ˆ*x*^{∗}_{i} + 1*| ≤ 1.0e − 2;* 1 if *|*ˆ*x*^{∗}_{i} *− 1| ≤ 1.0e − 2,* *for i = 1, 2, · · · , n.*

The test problems are all from the OR-Library and have the following formulation
*max z*^{T}*Qz*

*s.t.* *z*_{i} *∈ {0, 1}, i = 1, 2, · · · , n.*

*To solve these problems with Algorithm 3.1, we use the formula z = (x + e)/2 to transform*
them into the following formulation

*− min −*^{1}_{4}*x*^{T}*Qx −* ^{1}_{2}*x*^{T}*Qe −* ^{1}_{4}*e*^{T}*Qe*
*s.t. x*_{i} *∈ {−1, 1}, i = 1, 2, · · · , n.*
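The equivalence of the two formulations is a one-line expansion; substituting *z = (x + e)/2* and using the symmetry of *Q*,

```latex
z^{T}Qz \;=\; \tfrac{1}{4}(x+e)^{T}Q(x+e)
       \;=\; \tfrac{1}{4}x^{T}Qx \;+\; \tfrac{1}{2}x^{T}Qe \;+\; \tfrac{1}{4}e^{T}Qe,
```

*so maximizing z*^{T}*Qz over z ∈ {0, 1}*^{n} *equals the negative of minimizing the negated right-hand side over x ∈ {−1, 1}*^{n}*, since z*_{i} *∈ {0, 1} if and only if x*_{i} *∈ {−1, 1}.*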

*The optimal values generated by Algorithm 3.1 with diﬀerent p (p = 1.1, 2, 4, 5, 10, 20, 50,*
100) are listed in Tables 1-5 (see Appendix D), where '−' means that the algorithm fails to
get an optimal solution before the maximum CPU time is reached. Moreover, to present an
objective evaluation and comparison of the performance of Algorithm 3.1 with diﬀerent

*p, we adopt the performance proﬁle introduced in [7] as a means. In particular, we regard Algorithm 3.1 corresponding to a p as a solver, and assume that there are n*_{s} *solvers and n*_{j} *test problems from the OR-Library collection J . We are interested in using the optimal values calculated by Algorithm 3.1 as the performance measure for diﬀerent p. For each problem j and solver s, let*

*t*_{j,s} *:= the optimal value of problem j by solver s,* *µ*_{j,s} *:= 1/t*_{j,s}*.*

*We compare the performance on problem j by solver s with the best performance by any one of the n*_{s} *solvers on this problem, i.e., we employ the performance ratio*

*r*_{j,s} *:= µ*_{j,s} */* min*{µ*_{j,s} *: s ∈ S }* = max*{t*_{j,s} *: s ∈ S }* */ t*_{j,s}*,*

where *S is the set of eight solvers. An overall assessment of each solver is obtained from*

*ρ*_{s}*(τ ) :=* (1*/n*_{j}) size*{j ∈ J : r*_{j,s} *≤ τ},*

*which gives the probability that the performance ratio r*_{j,s} *of Algorithm 3.1 for solver s is within a factor τ of the best ratio.*
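The ratio *r*_{j,s} and profile *ρ*_{s}*(τ )* above reduce to a few array operations; the following is a minimal sketch in which the value matrix `T` is illustrative, not data from the paper's tables:

```python
import numpy as np

def performance_profile(T, taus):
    """T[j, s]: optimal value found on problem j by solver s (larger is
    better, as the test problems are maximizations with positive optima).
    Returns rho[s, t] = fraction of problems with ratio r_{j,s} <= taus[t]."""
    best = T.max(axis=1, keepdims=True)   # best value per problem
    R = best / T                          # r_{j,s} = max_s t_{j,s} / t_{j,s}
    return np.array([[np.mean(R[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# two problems, two solvers (illustrative values)
T = np.array([[10.0, 9.0],
              [ 8.0, 8.0]])
rho = performance_profile(T, [1.0, 1.2])
```

Here solver 0 attains the best value on both problems, so *ρ*_{0}*(1) = 1*, while solver 1 is best on only one problem, so *ρ*_{1}*(1) = 0.5*.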

Figure 9 shows the performance proﬁle of the reciprocals of optimal values obtained
*by Algorithm 3.1 in the range of [1, 1.04] for eight solvers on 50 test problems. The*
*eight solvers correspond to Algorithm 3.1 with p = 1.1, p = 2, p = 4, p = 5, p = 10,*
*p = 20, p = 50 and p = 100, respectively. From this ﬁgure, we see that Algorithm 3.1 is*
*considerably eﬃcient no matter which value of p is chosen. In fact, Algorithm 3.1 with*
*the aforementioned p values solves all 50 test problems, except for p = 5, 20, 100.*

*Moreover, Algorithm 3.1 with p = 4 has the best numerical performance (has the highest*
probability of being the optimal solver) and the probability of its being the winner on a
*given BQP is around 0.48. Besides, p = 1.1 and p = 2 have performance comparable with p = 4; please refer to Appendix D for more detailed numerical reports.*


Figure 9: Performance proﬁle of the reciprocals of optimal values by Algorithm 3.1 with
*diﬀerent p.*

**References**

*[1] B. Alidaee, G. Kochenberger and A. Ahmadian, 0−1 quadratic programming*
*approach for the optimal solution of two scheduling problems, International Journal*
of Systems Science, 25 (1994), 401-408.