
2.2 The Branch and Bound Principle

Solving an NP-hard discrete optimization problem is often an immense job requiring a very efficient algorithm, and the Branch and Bound (B&B) paradigm is one of the main tools used in constructing such algorithms. A B&B method searches the complete space of solutions of a given problem for the best solution. However, explicit enumeration is normally impossible due to the exponentially increasing number of potential solutions. The use of bounds for the function to be optimized, combined with the value of the current best solution, enables the algorithm to search parts of the solution space only implicitly.

At any point during the solution process, the status of the search of the solution space is described by a pool of yet unexplored subsets of this space and the best solution found so far. Initially only one subset exists, namely the complete solution space, and the value of the best solution found so far is ∞. The unexplored subspaces are represented as nodes in a dynamically generated search tree, which initially contains only the root, and each iteration of a classical B&B algorithm processes one such node. The iteration has three main components: selection of the node to process, bound calculation, and branching. In Fig. 2.1, the initial situation and the first step of the process are illustrated.

The sequence of these components may vary according to the strategy chosen for selecting the next node to process. If the selection of the next subproblem is based on the bound values of the subproblems, then the first operation of an iteration after choosing the node is branching, i.e., subdivision of the node's solution space into two or more subspaces. For each of these, it is checked whether the subspace consists of a single solution, in which case it is compared to the current best solution and the better of the two is kept. Otherwise the bounding function for the subspace is calculated and compared to the current best solution. If it can be established that the subspace cannot contain the optimal solution, the whole subspace is discarded; otherwise it is stored in the pool of unexplored nodes together with its bound. The search terminates when there are no unexplored parts of the solution space left, and the optimal solution is then the one recorded as the "current best".
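The following Python sketch captures this eager iteration scheme. It is only an illustration, not the implementation used in this thesis; the names g, branch, is_single, f and select are hypothetical callbacks standing for the bounding function, the branching rule, the leaf test, the objective evaluation and the node-selection strategy.

import math

def branch_and_bound(root, g, branch, is_single, f, select):
    # Minimal eager B&B loop with user-supplied (hypothetical) callbacks:
    #   g(node)         -> lower bound of the subspace represented by node
    #   branch(node)    -> list of child subspaces
    #   is_single(node) -> True if the subspace holds exactly one solution
    #   f(node)         -> objective value of that single solution
    #   select(live)    -> index of the next live node to process
    incumbent, best = math.inf, None          # value of the best solution so far
    live = [(g(root), root)]                  # pool of unexplored subspaces
    while live:                               # terminate when nothing is unexplored
        bound, node = live.pop(select(live))  # selection of the node to process
        if bound >= incumbent:                # bound became obsolete: discard node
            continue
        for child in branch(node):            # branching: subdivide the subspace
            if is_single(child):              # a single solution: compare with incumbent
                value = f(child)
                if value < incumbent:
                    incumbent, best = value, child
            else:
                b = g(child)                  # bound calculation for the subspace
                if b < incumbent:             # only potentially improving subspaces survive
                    live.append((b, child))
    return incumbent, best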

Figure 2.1: Illustration of the search space for a B&B procedure (a subset marked * does not contain the optimal solution and is discarded).


2.2.1 Terminology and General Description

In the following, we consider minimization problems; the case of maximization problems can be dealt with similarly. The problem is to minimize a function f(x) of the variables x = (x_1, ..., x_n) over a region of feasible solutions, S:

\min_{x \in S} f(x)

The function f is called the objective function and may be of any type. The set of feasible solutions is usually determined by general conditions on the variables, e.g. that these must be non-negative integers or binary, and by special constraints determining the structure of the feasible set. In many cases there exists a set of potential solutions, G, containing S, on which f is still well defined. A function g(x), defined on G (or S), with the property that g(x) ≤ f(x) for all x in S, often arises naturally. Both S and G are very useful in the B&B context. Fig. 2.2 illustrates the situation where S and G are intervals of real numbers.

Figure 2.2: The relation between the bounding function g and the objective function f on the sets S and G of feasible and potential solutions of a problem.
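As a concrete illustration of these sets (a standard example, not taken from this chapter), consider a 0-1 knapsack problem with profits p_i, weights w_i and capacity c, written in minimization form; the set G of potential solutions is obtained from S simply by dropping the integrality requirement:

f(x) = -\sum_{i=1}^{n} p_i x_i, \qquad
S = \Bigl\{\, x \in \{0,1\}^n : \sum_{i=1}^{n} w_i x_i \le c \,\Bigr\}, \qquad
G = \Bigl\{\, x \in [0,1]^n : \sum_{i=1}^{n} w_i x_i \le c \,\Bigr\}.

Here one may simply take g = f; the relaxation then consists entirely of enlarging S to G, and minimizing f over G (the continuous relaxation) yields a lower bound on the optimal value over S.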

2.2.2 Bounding Function

The bounding function is the key component of any B&B algorithm, in the sense that a low-quality bounding function cannot be compensated for through good choices of branching and selection strategies. Ideally the value of the bounding function for a given subproblem should equal the value of the best feasible solution to that subproblem, but since obtaining this value is usually itself NP-hard, the goal is to come as close as possible using only a limited amount of computational effort. A bounding function is called strong if it in general gives values close to the optimal value of the subproblem bounded, and weak if the values produced are far from the optimum. One often experiences a trade-off between quality and time when dealing with bounding functions: the more time spent on calculating the bound, the better the bound value usually is. It is normally considered beneficial to use as strong a bounding function as possible in order to keep the size of the search tree as small as possible.

Bounding functions arise naturally in connection with the set of potential solutions G and the function g mentioned above. Since S ⊆ G and g(x) ≤ f(x) on G, the following is easily seen to hold:

\min_{x \in G} g(x) \;\le\;
\left\{
\begin{array}{c}
\min_{x \in G} f(x) \\
\min_{x \in S} g(x)
\end{array}
\right\}
\;\le\; \min_{x \in S} f(x)
\qquad (2.8)

If both G and g exist, there is a choice between three relaxed optimization problems, each of whose optimal values provides a lower bound on the given objective function. The "skill" here is of course to choose G and/or g so that one of these problems is easy to solve and provides tight bounds.
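For the hypothetical knapsack example above, the problem of minimizing f over G is easy to solve exactly by a greedy procedure (Dantzig's bound), which gives one possible bounding function. The following sketch is purely illustrative; the function and parameter names are assumptions, not part of this thesis.

def fractional_knapsack_bound(profits, weights, capacity, fixed=None):
    # Lower bound for the minimization-form 0-1 knapsack: solve the continuous
    # relaxation over G = [0,1]^n by greedily filling the remaining capacity
    # with the best profit/weight ratios (Dantzig's bound). `fixed` is an
    # optional dict {item index: 0 or 1} of variables already fixed by earlier
    # branching decisions; weights are assumed positive.
    fixed = fixed or {}
    remaining = capacity - sum(weights[i] for i, v in fixed.items() if v == 1)
    if remaining < 0:
        return float("inf")                      # the subspace is infeasible
    bound = -sum(profits[i] for i, v in fixed.items() if v == 1)
    free = [i for i in range(len(profits)) if i not in fixed]
    free.sort(key=lambda i: profits[i] / weights[i], reverse=True)
    for i in free:
        take = min(1.0, remaining / weights[i])  # fractional amounts are allowed in G
        bound -= take * profits[i]
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return bound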

2.2.3 Branching Rule

All branching rules in the context of B&B can be seen as subdivision of a part of the search space through the addition of constraints, often in the form of assigning values to variables. Convergence of B&B is ensured if the size of each generated subproblem is smaller than the original problem, and the number of feasible solutions to the original problem is finite. Normally, the subproblems generated are disjoint - in this way the problem of the same feasible solution appearing in different subspaces of the search tree is avoided.
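As a small illustration (again a sketch with an assumed representation, not the branching rule used in this thesis), a subproblem with binary variables can be described by a partial assignment; branching then fixes one further free variable, producing two disjoint and strictly smaller children:

def branch_on_variable(fixed, index):
    # `fixed` is a dict {variable index: 0 or 1} describing the subspace;
    # fixing one more free variable splits it into two disjoint subspaces.
    child0 = dict(fixed)
    child0[index] = 0
    child1 = dict(fixed)
    child1[index] = 1
    return [child0, child1]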


2.2.4 Strategies for Selecting Next Subproblem

The strategy for selecting the next live subproblem to investigate usually reflects a trade-off between keeping the number of explored nodes in the search tree low and staying within the memory capacity of the computer used.

If one always selects among the live subproblems one with the lowest bound, the strategy is called the best-first search strategy, BeFS. Fig. 2.3 shows a small search tree; the numbers in each node correspond to the processing sequence. A subproblem P is called critical if the given bounding function, when applied to P, results in a value strictly less than the optimal value of the problem in question. Nodes in the search tree corresponding to critical subproblems have to be partitioned by the B&B algorithm no matter when the optimal solution is identified; they can never be discarded by means of the bounding function.

Since the lower bound of any subspace containing an optimal solution must be less than or equal to the optimum value, only nodes of the search tree with lower bound less than or equal to this will be explored.
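In terms of the illustrative B&B skeleton sketched earlier, BeFS corresponds to the following (hypothetical) selection callback, which always returns the index of a live node with the lowest bound:

def best_first(live):
    # BeFS: pick a live node with the smallest lower bound
    # (live stores (bound, node) pairs, as in the skeleton above).
    return min(range(len(live)), key=lambda i: live[i][0])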

Figure 2.3: Search strategies in B&B: the Best-First Search.

Even though the choice of the subproblem with the current lowest lower bound makes good sense also regarding the possibility of producing a good feasible solution, memory problems arise if the number of critical subproblems of a given problem becomes too large. The situation more or less corresponds to a breadth-first search strategy, BFS, in which all nodes at one level of the search tree are processed before any node at a deeper level. Fig. 2.4 shows the search tree with the numbers in each node corresponding to the BFS processing sequence. The number of nodes at each level of the search tree grows exponentially with the level, making it infeasible to do breadth-first search for larger problems.
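In the same illustrative skeleton, a breadth-first strategy simply processes the live nodes in the order in which they were created (FIFO), so the pool, and hence the memory use, can grow exponentially with the depth of the tree:

def breadth_first(live):
    # BFS: first-in, first-out -- always process the oldest live node.
    return 0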

Figure 2.4: Search strategies in B&B: the Breadth-First Search.

The alternative used is a depth-first search strategy, DFS. Here a live node at the deepest level of the search tree is chosen for exploration. Fig. 2.5 shows the DFS processing sequence number of the nodes. The memory requirement, in terms of the number of subproblems stored at the same time, is now bounded above by the number of levels in the search tree multiplied by the maximum number of children of any node, which is usually a quite manageable number. An advantage from the programming point of view is that the tree can be searched by recursion; this enables one to store the information about the current subproblem in an incremental way, so only the constraints added in connection with the creation of each subproblem need to be stored. The drawback is that if the incumbent is far from the optimal solution, large amounts of unnecessary bounding computations may take place. To avoid this, DFS is often combined with a selection strategy that explores the child with the smallest lower bound first, in the hope of quickly reaching a good feasible solution.
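In the illustrative skeleton, DFS amounts to a last-in, first-out selection, so that a most recently created (deepest) node is always expanded next and the live pool stays small; in a recursive implementation the pool is effectively the call stack itself:

def depth_first(live):
    # DFS: last-in, first-out -- always process the most recently created node.
    return len(live) - 1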

Figure 2.5: Search strategies in B&B: the Depth-First Search.

2.3 Dual Decomposition method for Non-convex