
Computers Math. Applic. Vol. 25, No. 5, pp. 65-71, 1993. 0097-4943/93 $6.00 + 0.00. Printed in Great Britain. All rights reserved. Copyright © 1993 Pergamon Press Ltd.

FINDING A COMPLETE MATCHING WITH THE MAXIMUM PRODUCT ON WEIGHTED BIPARTITE GRAPHS

FRANK S. C. TSENG AND WEI-PANG YANG

Department of Computer Science and Information Engineering, National Chiao Tung University Hsinchu, Taiwan 30050, Republic of China

ARBEE L. P. CHEN*

Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan 30043, Republic of China

(Received June 1992)

Abstract—The traditional bipartite weighted matching problem is to find a complete matching with the largest possible sum of weights. In this paper, we define a bipartite matching problem which seeks the largest possible product of weights, and develop an algorithm to solve it. Although this problem corresponds to a non-linear program, we show it can be easily solved by modifying the Hungarian method. Finally, we present an application of this problem.

1. INTRODUCTION

Graph theory [1] is a useful mathematical tool to model systems involving discrete objects. A matching in a graph is a set of edges, no two of which are adjacent. Given a bipartite graph, it is a well-known problem to find a matching that has as many edges as possible. Many interesting algorithms for solving such a problem have been developed. For example, in [2], Hopcroft and Karp presented an efficient algorithm to solve it. Even and Tarjan [3] pointed out that it can also be solved by employing the max-flow algorithm [4].

A more generalized version of the matching problem is to consider a weighted bipartite graph and find a matching with the largest possible sum of weights. This generalized problem has been solved by the Hungarian Method [5,6]. It can be formulated as a 0-1 integer linear program as follows.

Let $x_{ij}$, for $i = 1, \dots, n$ and $j = 1, \dots, n$, be a set of variables, where $n$ is the number of nodes in each node set of the complete bipartite graph $G = (U \cup V, E)$, $U = \{u_1, u_2, \dots, u_n\}$ and $V = \{v_1, v_2, \dots, v_n\}$. Here, $x_{ij} = 1$ means that the edge $(u_i, v_j)$ is included in the matching, whereas $x_{ij} = 0$ means that it is not. Therefore, for a set of such values to represent a complete matching, we have the following constraints:

$$\sum_{j=1}^{n} x_{ij} = 1, \qquad i = 1, \dots, n,$$

$$\sum_{i=1}^{n} x_{ij} = 1, \qquad j = 1, \dots, n,$$

$$x_{ij} = 0 \text{ or } 1.$$

The goal of this problem is to maximize $\sum_{i} \sum_{j} w_{ij} x_{ij}$, where $w_{ij}$ is the weight of the edge $(u_i, v_j)$.
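Since the constraints force $[x_{ij}]$ to be a permutation matrix, for small $n$ the program can be checked directly by enumerating all permutations. The following sketch (plain Python; the function name is ours, not from the paper) does exactly that on a small weight matrix:

```python
from itertools import permutations

def max_sum_matching(w):
    """Brute-force solution of the 0-1 program above: over all complete
    matchings (i.e., permutations pi of column indices), maximize
    sum_i w[i][pi(i)].  Exponential in n; for illustration only."""
    n = len(w)
    best = max(permutations(range(n)),
               key=lambda pi: sum(w[i][pi[i]] for i in range(n)))
    return [(i, best[i]) for i in range(n)]

# Edges (u_i, v_j) of the optimal matching, 0-indexed.
print(max_sum_matching([[6, 11], [2, 6]]))  # [(0, 1), (1, 0)], sum 13
```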

However, how to solve the matching problem with the largest possible product of weights (i.e., maximizing $\prod_{i} \sum_{j} w_{ij} x_{ij}$ under the same constraints) has not been established. The solution of maximizing $\sum_{i} \sum_{j} w_{ij} x_{ij}$ is not necessarily the same as that of maximizing $\prod_{i} \sum_{j} w_{ij} x_{ij}$, as the following example shows. Consider the following weighted bipartite graph.

*Author to whom all correspondence should be sent.

[Figure: a weighted bipartite graph on $U = \{u_1, u_2\}$ and $V = \{v_1, v_2\}$ with edge weights $w_{11} = 6$, $w_{12} = 11$, $w_{21} = 2$, $w_{22} = 6$.]

The matching with the largest possible sum of weights is $\{(u_1, v_2), (u_2, v_1)\}$, with sum of weights

$$\sum_{i} \sum_{j} w_{ij} x_{ij} = 11 + 2 = 13.$$

The other matching is $\{(u_1, v_1), (u_2, v_2)\}$, with sum of weights

$$\sum_{i} \sum_{j} w_{ij} x_{ij} = 6 + 6 = 12.$$

However, the matching with the largest possible product of weights is $\{(u_1, v_1), (u_2, v_2)\}$, with product of weights

$$\prod_{i} \sum_{j} w_{ij} x_{ij} = 6 \times 6 = 36.$$

The other matching is $\{(u_1, v_2), (u_2, v_1)\}$, with product of weights

$$\prod_{i} \sum_{j} w_{ij} x_{ij} = 11 \times 2 = 22.$$
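The divergence above is easy to verify mechanically. A minimal Python check (variable names ours) on the 2×2 weight matrix of the example:

```python
from itertools import permutations
from math import prod

# w[i][j] = weight of edge (u_{i+1}, v_{j+1}) in the example graph.
w = [[6, 11],
     [2, 6]]

perms = list(permutations(range(2)))
by_sum = max(perms, key=lambda pi: sum(w[i][pi[i]] for i in range(2)))
by_prod = max(perms, key=lambda pi: prod(w[i][pi[i]] for i in range(2)))

print(by_sum)   # (1, 0): matching {(u1, v2), (u2, v1)}, sum 13
print(by_prod)  # (0, 1): matching {(u1, v1), (u2, v2)}, product 36
```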

In this paper, we provide a solution to the matching problem with the objective function $\prod_{i} \sum_{j} w_{ij} x_{ij}$. We call such a problem the bipartite maximum product matching problem (MPM). Although the problem formulation is a non-linear program, we find the Hungarian Method can be modified to solve it.

2. THE MODIFIED HUNGARIAN METHOD FOR MPM

In such a weighted matching problem on a bipartite graph $G = (U \cup V, E)$, we can assume the underlying graph is a complete bipartite graph, $K_{n,n}$. Otherwise, without loss of generality, if $|U| < |V|$ then we can add $(|V| - |U|)$ new nodes to $U$, with edges of weight one incident from all nodes in $V$ to each of them. Furthermore, if there are missing edges in $G$, we can assign the weights of these edges to be zero.
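This padding argument can be written down directly. A sketch (Python; the function name and the dict-of-weights representation are our own, assuming $|U| \le |V|$ as in the text):

```python
def pad_to_complete(w, n_u, n_v):
    """Pad a (possibly partial) weight map {(i, j): w_ij} on node sets of
    sizes n_u <= n_v into an n x n matrix for K_{n,n}, following the text:
    dummy U-nodes get weight-1 edges to every V-node, and missing edges
    between real nodes get weight 0."""
    n = max(n_u, n_v)
    return [[w.get((i, j), 0) if i < n_u else 1 for j in range(n)]
            for i in range(n)]

# One real U-node, two V-nodes: a dummy row of ones is appended.
print(pad_to_complete({(0, 0): 5, (0, 1): 2}, 1, 2))  # [[5, 2], [1, 1]]
```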

To attack this problem, it is more convenient to consider it as a minimization problem by simply taking the weight of $(u_i, v_j)$ to be $1/w_{ij}$. That is, we want to find a matching $M$ which minimizes $\prod_{i} \sum_{j} x_{ij}/w_{ij}$ under the following constraints:

$$\sum_{j=1}^{n} x_{ij} = 1, \qquad i = 1, \dots, n, \qquad (1)$$

$$\sum_{i=1}^{n} x_{ij} = 1, \qquad j = 1, \dots, n, \qquad (2)$$

$$x_{ij} = 0 \text{ or } 1. \qquad (3)$$

We now present an example to sketch how to find such a solution. Then we will modify the Hungarian algorithm presented in [7], in which it was used to find a complete matching $M$ that maximizes $\sum_{i} \sum_{j} w_{ij} x_{ij}$. For a formal treatment of that algorithm, refer to [8].

EXAMPLE 2.1. We begin by representing the bipartite graph in matrix form, $[m_{ij}]$, where $m_{ij} = 1/w_{ij}$ is the inverse of the weight of the edge $(u_i, v_j)$. An example matrix is given below.

$$\begin{bmatrix} 2 & 5 & 4 \\ 3 & 8 & 7 \\ 1 & 4 & 6 \end{bmatrix}$$

Following Theorem 2.1 (to be discussed later), our solution remains unchanged if we divide all members of some row or some column by the smallest number of that row or column. This follows since only one entry will be selected from any row or any column. Therefore, the value of $\prod_{i} \sum_{j} x_{ij}/w_{ij}$ for any matching $M$ will be divided by the same amount.

By dividing each row by the smallest member of that row, the example matrix becomes

$$\begin{bmatrix} 1 & 5/2 & 2 \\ 1 & 8/3 & 7/3 \\ 1 & 4 & 6 \end{bmatrix}$$

Analogously, by dividing each column by the smallest member of that column, the example matrix becomes

$$\begin{bmatrix} 1 & 1 & 1 \\ 1 & 16/15 & 7/6 \\ 1 & 8/5 & 3 \end{bmatrix}$$

Our problem now is to select $n$ (here, 3) entries from the matrix, such that no two are in the same row or column (we call such entries independent), with as small a product as possible. Since the entries are all greater than or equal to one, the smallest product we could hope for is one. Therefore, if $n$ independent entries of value one can be found, then an optimal solution is obtained. Here, since all the ones are contained in the first column and the first row, such a solution is not present. This can be seen by crossing out the first row and the first column, each with a line. Notice that the entry in the upper-left corner is crossed twice. We now adjust the matrix by the following procedure.

1. Let $k$ be the smallest number that is not included in any of the crossed rows or columns. In our example, $k = 16/15$.
2. Divide all uncrossed numbers by $k$.
3. Leave entries which are crossed once unchanged.
4. Multiply all numbers which are crossed twice by $k$.

This procedure produces at least one new one in an uncrossed position of the matrix and leaves all existing ones unchanged, unless they are crossed twice. It is impossible for an optimal solution to include entries which are crossed twice.

By following this procedure until $n$ independent ones are obtained, our example matrix becomes

$$\begin{bmatrix} 16/15 & 1 & 1^* \\ 1 & 1^* & 35/32 \\ 1^* & 3/2 & 45/16 \end{bmatrix}$$

The optimal solution entries are marked with an asterisk (*). Referring to the original matrix, $\prod_{i} \sum_{j} x_{ij}/w_{ij} = 1 \times 8 \times 4 = 32$. That is, $\prod_{i} \sum_{j} w_{ij} x_{ij} = 1/32$. ∎

This algorithm can now be summarized by the following steps.

ALGORITHM 2.1. An algorithm for MPM.

Input: A matrix $[m_{ij}]$, $m_{ij} = 1/w_{ij}$, representing a weighted bipartite graph.

1. (Reduce rows and columns.) Divide each row by the smallest number of that row. Do the same for each column.

2. (Check for $n$ independent ones.) If there are $n$ independent ones, then we are done; stop. Obtain the corresponding edges as the resultant matching.

3. (Find a minimal cover to adjust the matrix.) Otherwise, find the minimum number of lines that cross all ones. Let $k$ be the smallest of the uncrossed entries. Divide the uncrossed numbers by $k$ and multiply the numbers crossed by two lines by $k$. Go to Step 2. ∎
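As a sanity check on the algorithm's objective (this is not the paper's method): when all weights are strictly positive, maximizing the product of matched weights is equivalent to maximizing the sum of their logarithms, so any max-sum assignment routine can cross-check the result. A brute-force sketch in Python, with all names ours:

```python
import math
from itertools import permutations

def mpm_bruteforce(w):
    """Directly maximize the product of matched weights."""
    n = len(w)
    return max(permutations(range(n)),
               key=lambda pi: math.prod(w[i][pi[i]] for i in range(n)))

def mpm_via_logs(w):
    """Equivalent route for positive weights: maximize the sum of
    log-weights, an ordinary max-sum assignment objective."""
    n = len(w)
    return max(permutations(range(n)),
               key=lambda pi: sum(math.log(w[i][pi[i]]) for i in range(n)))

w = [[0.6, 0.3, 0.8], [0.5, 0.9, 0.3], [0.2, 0.7, 0.8]]
print(mpm_bruteforce(w) == mpm_via_logs(w))  # True
```

The log transform only applies when no weight is zero; Algorithm 2.1's reciprocal matrix carries the same restriction.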

Now, we show that Step 1 of Algorithm 2.1 preserves the solution.

THEOREM 2.1. If $[m'_{ij}]$ is the matrix obtained from $[m_{ij}]$ by Step 1 of Algorithm 2.1, then the solution matchings of $[m'_{ij}]$ and $[m_{ij}]$ are identical.

PROOF. Suppose that $\alpha_i$ and $\beta_j$ are the smallest numbers used to divide the $i$-th row and the $j$-th column of $[m_{ij}]$, respectively, for each row $i$ and each column $j$. Then,

$$m'_{ij} = \frac{m_{ij}}{\alpha_i \beta_j}.$$

Let $X$ and $X'$ be the objective functions associated with the old MPM and the new MPM problems represented by $[m_{ij}]$ and $[m'_{ij}]$, respectively; i.e.,

$$X' = \prod_{i} \sum_{j} m'_{ij} x_{ij} \quad \text{and} \quad X = \prod_{i} \sum_{j} m_{ij} x_{ij}.$$

Since constraints (1), (2), and (3) can be regarded as: for each permutation $\pi: \{1, \dots, n\} \to \{1, \dots, n\}$,

$$x_{ij} = \begin{cases} 1, & \text{if } j = \pi(i), \\ 0, & \text{else}, \end{cases} \qquad i = 1, \dots, n \text{ and } j = 1, \dots, n, \qquad (4)$$

the new problem that maximizes $X' = \prod_{i} \sum_{j} m'_{ij} x_{ij}$ under constraints (1), (2), and (3) is equivalent to:

maximize $\prod_{i} \sum_{j} m'_{ij} x_{ij}$ under constraint (4)
= maximize $\prod_{i} m'_{i\pi(i)}$ over all possible permutations $\pi$
= maximize $\prod_{i} \dfrac{m_{i\pi(i)}}{\alpha_i \beta_{\pi(i)}}$ over all possible permutations $\pi$
= maximize $\dfrac{\prod_{i} m_{i\pi(i)}}{\prod_{i} \alpha_i \prod_{i} \beta_{\pi(i)}}$ over all possible permutations $\pi$
= maximize $\dfrac{X}{\prod_{i} \alpha_i \prod_{j} \beta_j}$ under constraint (4).

That is, $X$ and $X'$ differ only by the total amount divided, which is a constant. Therefore, their solution matchings are identical. ∎
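Theorem 2.1 can also be checked numerically. The sketch below (Python; names ours) applies Step 1 to the matrix of Example 2.1 and confirms that the minimizing permutation is unchanged:

```python
from itertools import permutations
from math import prod

def best_perm(m):
    """Permutation minimizing the product of the selected entries."""
    n = len(m)
    return min(permutations(range(n)),
               key=lambda pi: prod(m[i][pi[i]] for i in range(n)))

m = [[2, 5, 4], [3, 8, 7], [1, 4, 6]]  # matrix of Example 2.1
n = len(m)
# Step 1: divide each row, then each column, by its smallest entry.
r = [[m[i][j] / min(m[i]) for j in range(n)] for i in range(n)]
c = [[r[i][j] / min(r[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]

print(best_perm(m), best_perm(c))  # both (2, 1, 0): the same matching
```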

An efficient way of performing Step 2 in Algorithm 2.1 is needed to pronounce whether or not a set $S$ of $n$ independent ones exists, and if it does, which entries belong to it. We observe that any such $S$ has the property that its ones in $[m'_{ij}]$ can be transformed into a leading diagonal of ones by interchanging some rows. For example, suppose $[m'_{ij}]$ is

$$\begin{bmatrix} 1^* & 2 & 3 \\ 4 & 5 & 1^* \\ 6 & 1^* & 7 \end{bmatrix}$$

then, upon interchanging rows 2 and 3, it becomes

$$\begin{bmatrix} 1^* & 2 & 3 \\ 6 & 1^* & 7 \\ 4 & 5 & 1^* \end{bmatrix}$$

We need exactly $n$ (in this example, 3) lines to cross out all the marked ones; no smaller number of lines will suffice.

Because interchanging rows does not affect the minimum number of crossing lines, it is easy to see that if the minimum number of lines necessary to cross out all the ones equals $n$, a minimal $S$ can be identified. If the minimum number of lines is strictly less than $n$, a minimal $S$ is not yet at hand.

To find the smallest possible number of crossing lines, [9] provides the following rules of thumb, which can be repeatedly applied until none of them is satisfied.

1. If there is a row (column) with exactly one uncrossed one, then draw a vertical (horizontal) line through this one.
2. If all rows and columns with ones have two or more uncrossed ones, then choose the row (column) with the least number of uncrossed ones and draw a vertical (horizontal) line through one of its uncrossed ones.
3. Break ties arbitrarily.

The following example illustrates these rules.

EXAMPLE 2.2. Consider the following matrix:

$$\begin{bmatrix} 1 & 1 & 3 \\ 4 & 1 & 1 \\ 6 & 1 & 1 \end{bmatrix}$$

The first column has exactly one uncrossed one (i.e., $m_{11}$); by Rule 1, we cross out the first row. Now we must apply Rule 2: arbitrarily choosing $m_{22}$, we cross out the second row. Finally, since column 3 has only one uncrossed one left (i.e., $m_{33}$), we apply Rule 1 and cross out the third row. This requires three lines to cross out all the ones. Therefore, a minimal $S$ exists.

Each application of Step 3 in Algorithm 2.1 produces at least one new one among the uncrossed entries of the matrix and leaves all existing ones unchanged, unless they are crossed by two lines. Therefore, Step 3 will always yield a set of $n$ independent ones within a finite number of repetitions.
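For small matrices, the minimum number of crossing lines can also be found exhaustively, which gives a handy check on the rules of thumb above. A sketch (Python; names ours) that tries every subset of rows and columns:

```python
def min_line_cover(mat):
    """Minimum number of row/column lines covering every entry equal to 1.
    Exhaustive over all 2^n x 2^n row/column subsets; small n only."""
    n = len(mat)
    ones = [(i, j) for i in range(n) for j in range(n) if mat[i][j] == 1]
    best = n  # covering every row always suffices
    for rows in range(1 << n):
        for cols in range(1 << n):
            if all((rows >> i) & 1 or (cols >> j) & 1 for i, j in ones):
                cost = bin(rows).count("1") + bin(cols).count("1")
                best = min(best, cost)
    return best

print(min_line_cover([[1, 1, 3], [4, 1, 1], [6, 1, 1]]))  # 3, as in Example 2.2
```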

3. JUSTIFICATION OF OUR ALGORITHM

We justify Algorithm 2.1 by the following theorems.

THEOREM 3.1. If $c$, $c < n$, is the minimum number of lines that cover all ones, then each application of Step 3 in Algorithm 2.1 will divide the product of all entries by $k^{n^2 - cn}$.

PROOF. The net effect of Step 3 can be regarded as the following two steps.

1. Divide all entries by $k$.
2. Multiply all entries covered by a line by $k$, doing this line by line.

This leaves all entries which are crossed once unchanged and multiplies all numbers which are crossed twice by $k$. Assume the product of all entries is $P$; then after Step 3 the product of all entries will be

$$\frac{P}{k^{n^2}} \times k^{cn} = \frac{P}{k^{n^2 - cn}}. \qquad ∎$$

Note that $k$ is always greater than 1. Therefore, by $n > c$, we have $k^{n^2 - cn} > 1$, which implies $P / k^{n^2 - cn} < P$.
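The bookkeeping in Theorem 3.1 can be verified on the reduced matrix of Example 2.1, where row 1 and column 1 are the crossed lines (so $n = 3$, $c = 2$). A sketch in Python (names ours):

```python
from math import prod, isclose

m = [[1, 1, 1],
     [1, 16/15, 7/6],
     [1, 8/5, 3]]          # reduced matrix of Example 2.1
crossed_rows, crossed_cols = {0}, {0}
n, c = 3, 2

k = min(m[i][j] for i in range(n) for j in range(n)
        if i not in crossed_rows and j not in crossed_cols)  # k = 16/15

def crossings(i, j):
    return (i in crossed_rows) + (j in crossed_cols)

# Step 3: divide uncrossed entries by k, leave once-crossed entries alone,
# multiply twice-crossed entries by k -- i.e., scale by k^(crossings - 1).
adj = [[m[i][j] * k ** (crossings(i, j) - 1) for j in range(n)]
       for i in range(n)]

before = prod(prod(row) for row in m)
after = prod(prod(row) for row in adj)
print(isclose(before / after, k ** (n * n - c * n)))  # True
```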

THEOREM 3.2. Algorithm 2.1 will terminate in a finite number of steps.

PROOF. By Theorem 3.1, each application of Step 3 reduces the product of all entries by a factor of $k^{n^2 - cn} > 1$. If the algorithm looped forever, the product of all entries would be reduced toward zero, which is impossible, since all entries are kept greater than or equal to one after Step 1. Therefore, the algorithm terminates in a finite number of steps. ∎

In fact, the algorithm can be performed in $O(n^3)$ time for a complete bipartite graph with $2n$ nodes [8].

4. AN APPLICATION OF MPM

In this section, we present an application of the MPM problem.

EXAMPLE 4.1. Suppose we have a tennis team of three netters, named a, b, and c, who are to play singles matches pairwise against another team of three netters, named x, y, and z. By analyzing the past records of our netters and their opponents, we may obtain the following probabilities of each of our netters beating each opponent.

vs.    x      y      z
 a     0.6    0.3    0.8
 b     0.5    0.9    0.3
 c     0.2    0.7    0.8

This problem can be regarded as an MPM problem by transforming it to the complete bipartite graph $G = (\{a, b, c\} \cup \{x, y, z\}, E)$, with the entries in the above table as the weights of the edges. That is, we want to find a matching in $G$ with the largest product of weights.

To apply our algorithm, we regard it as a minimization problem by considering the following matrix:

$$\begin{bmatrix} 1/0.6 & 1/0.3 & 1/0.8 \\ 1/0.5 & 1/0.9 & 1/0.3 \\ 1/0.2 & 1/0.7 & 1/0.8 \end{bmatrix} = \begin{bmatrix} 5/3 & 10/3 & 5/4 \\ 2 & 10/9 & 10/3 \\ 5 & 10/7 & 5/4 \end{bmatrix}$$

By Step 1 of Algorithm 2.1, we obtain

$$\begin{bmatrix} 1 & 8/3 & 1 \\ 27/20 & 1 & 3 \\ 3 & 8/7 & 1 \end{bmatrix}$$

By Step 2, we can find 3 independent ones, marked below by asterisks:

$$\begin{bmatrix} 1^* & 8/3 & 1 \\ 27/20 & 1^* & 3 \\ 3 & 8/7 & 1^* \end{bmatrix}$$

Therefore, the solution of this example is: a versus x, b versus y, and c versus z. If we arrange the singles matches according to this solution, then the probability of beating all the opponents is $0.6 \times 0.9 \times 0.8 = 0.432$. ∎
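The arrangement can be confirmed by exhaustive search over all $3! = 6$ pairings (Python sketch; names ours):

```python
from itertools import permutations
from math import prod

# p[i][j] = probability that our netter i (a, b, c) beats opponent j (x, y, z).
p = [[0.6, 0.3, 0.8],
     [0.5, 0.9, 0.3],
     [0.2, 0.7, 0.8]]

best = max(permutations(range(3)),
           key=lambda pi: prod(p[i][pi[i]] for i in range(3)))
print(best)  # (0, 1, 2): a-x, b-y, c-z, probability 0.432 as above
```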

5. DISCUSSION

In this paper, we defined a bipartite matching problem which maximizes the product of weights. The solution of a bipartite matching which maximizes the sum of weights is not necessarily the same as that which maximizes the product of weights. Fortunately, we showed that this problem can be easily solved by modifying the Hungarian method. In addition, we presented an example of the kind usually invoked in decision making to illustrate an application of our problem.

REFERENCES

1. J.A. Bondy and U.S.R. Murty, Graph Theory with Applications, Macmillan Press, NY, (1976).
2. J.E. Hopcroft and R.M. Karp, An n^{5/2} algorithm for maximum matchings in bipartite graphs, SIAM J. Computing 2 (4), 225-231 (1973).
3. S. Even and R.E. Tarjan, Network flow and testing graph connectivity, SIAM J. Computing 4 (4), 507-512 (1975).
4. L.R. Ford and D.R. Fulkerson, Flows in Networks, Princeton Univ. Press, Princeton, NJ, (1962).
5. D. König, Graphs and matrices, Mat. Fiz. Lapok (Hungarian) 38, 116-119 (1931).
6. H.W. Kuhn, The Hungarian method for the assignment problem, Naval Research Logistics Quarterly 2, 83-97 (1955).
7. R. Gould, Graph Theory, The Benjamin/Cummings Publishing Company, (1988).
8. C.H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Englewood Cliffs, NJ, pp. 221-226, (1982).
9. L.R. Foulds, Optimization Techniques: An Introduction, Springer-Verlag, NY, pp. 87-88, (1981).
10. P. Hall, On representatives of subsets, J. London Math. Soc. 10, 26-30 (1935).
