
3.4 Determining the existence of a majorizing shape

In some problems, the goal is to find a majorizing shape, or to determine whether one exists. If Algorithm 2 given in Section 3.3 yields a single shape, then it is the majorizing shape. However, there is a much faster way of determining whether a majorizing shape exists, and of identifying it if it does. Even if our goal is to find all nonmajorized shapes, we can still use the faster algorithm as preprocessing: if it finds a majorizing shape, there is no need to go through Algorithm 2.

This procedure constructs two nonmajorized shapes in $\Gamma(L, U)$, namely the one which follows the upper-bound route as much as possible in Algorithm 2 and the one which follows the lower-bound route as much as possible. We will refer to them as the top shape and the bottom shape. Note that in constructing the top shape $s_T$ we need only adjust upper bounds, and in constructing the bottom shape $s_B$ only lower bounds.

Theorem 3.4.1. If $s_T$ and $s_B$ are equivalent, then either of them is a majorizing shape; if not, then no majorizing shape exists.

Proof. (i) $s_T = s_B$. Suppose $U_i = \max_{1 \le j \le p} U_j$. Consider the reduced problem where part $i$ is deleted and $n$ changes to $n - U_i$. Let $s'_T$, $s'_B$ be the two shapes identified by our procedure in the reduced problem. Clearly, $s'_T = s_T \setminus \{U_i\}$.

We prove $s'_B = s_B \setminus \{U_i\}$ (here we refer to shape-types as multisets). A lower bound $L_v$ will be adjusted in the reduced problem only if
$$L_v + \sum_{j \ne i, v} U_j < n - U_i,$$
or equivalently,
$$L_v + \sum_{j \ne v} U_j < n,$$
which is the criterion for adjusting $L_v$ in the original problem. Therefore, the adjustment of lower bounds in choosing $s'_B$ is the same as in choosing $s_B$, which implies $s'_B = s_B \setminus \{U_i\}$.

Next we prove by induction on $p$ that all regular shapes generated by Algorithm 2 are equivalent to $s_T$. It is trivially true for $p = 1$. Assume that it holds for $p - 1 \ge 1$; we prove it for $p$.

Suppose to the contrary that $s' \ne s_T$ is also a nonmajorized regular shape. Then $s'$ chooses $U_i$ or $L_k$. Without loss of generality, assume $s'$ chooses $U_i$. By induction, $s \setminus \{U_i\}$ majorizes $s' \setminus \{U_i\}$; hence $s$ majorizes $s'$. Finally, we prove that no E-shape can exist. Let the common regular shape contain $r$ upper bounds and $t$ lower bounds, where $r + t = p - 1$ or $p$.

Suppose to the contrary that an E-step occurs at stage $j + k$, after $j$ upper bounds and $k$ lower bounds have been selected. Among the remaining parts, the largest (in the $\prec$ ordering) effective upper bound is $U_{[j+1]}$ and the smallest effective lower bound is $L_{[k+1]}$. Necessarily, $j < r + 1$ and $k < t + 1$, or $s$ (respectively $s'$) would not agree with the common regular shape. If $U_{[j+1]}$ and $L_{[k+1]}$ are from the same part, then selecting one means not selecting the other in a shape. In particular, $L_{[k+1]}$ would not be in $s$ and $U_{[j+1]}$ not in $s'$, contradicting the common regular shape.

(ii) If $s_T \ne s_B$, then Theorem 3.4.1 assures that both $s_T$ and $s_B$ are nonmajorized shapes; in particular, no majorizing shape exists.

If we calculate $\sum_j L_j$ at the beginning, then $U'_i = \min\{U_i,\; n - (\sum_j L_j - L_i)\}$ can be computed with one subtraction. Therefore, adjusting each $U_i$ takes constant time. It takes $O(p)$ time to adjust all $U_i$ in each call of the algorithm and $O(p)$ time to select the maximum of $\{U'_i\}$. The algorithm is called $p$ times to obtain $s_T$, so the total time is $O(p(p + p)) = O(p^2)$. The time complexity of constructing $s_B$ is the same. Finally, checking $s_T = s_B$ takes $O(p)$ time.

An improvement of this algorithm is to sort $\{U_i\}$, and to sort $\{L_j\}$ among those parts with the same upper bound, at the beginning, so that we don't have to do it at every stage. But the running time is still $O(p^2)$.
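The adjust-and-select procedure can be sketched in code. The following Python fragment is only an illustrative sketch, not a transcription of Algorithm 2: it assumes the top shape is obtained by repeatedly committing the largest adjusted upper bound $U'_i = \min\{U_i, n - \sum_{j \ne i} L_j\}$, and the bottom shape symmetrically by committing the smallest adjusted lower bound $L'_i = \max\{L_i, n - \sum_{j \ne i} U_j\}$; the tie-breaking rules of the $\prec$ ordering are omitted.

```python
def top_shape(L, U, n):
    """Sketch: at each stage commit the largest adjusted upper bound."""
    parts, shape = list(range(len(L))), {}
    while parts:
        rest_L = sum(L[j] for j in parts)
        # adjusted upper bound: a part cannot exceed what the other lower bounds leave over
        adj = {i: min(U[i], n - (rest_L - L[i])) for i in parts}
        i = max(parts, key=lambda t: adj[t])   # tie-breaking of Algorithm 2 omitted
        shape[i] = adj[i]
        n -= adj[i]
        parts.remove(i)
    return shape

def bottom_shape(L, U, n):
    """Sketch: at each stage commit the smallest adjusted lower bound."""
    parts, shape = list(range(len(L))), {}
    while parts:
        rest_U = sum(U[j] for j in parts)
        # adjusted lower bound: a part must absorb whatever the other upper bounds cannot
        adj = {i: max(L[i], n - (rest_U - U[i])) for i in parts}
        i = min(parts, key=lambda t: adj[t])   # tie-breaking of Algorithm 2 omitted
        shape[i] = adj[i]
        n -= adj[i]
        parts.remove(i)
    return shape

# Data of Example 12 below, n = 228: the two shape-types coincide.
U, Lb, n = [100, 90, 60, 50, 17], [10, 70, 10, 48, 10], 228
sT, sB = top_shape(Lb, U, n), bottom_shape(Lb, U, n)
print(sorted(sT.values()), sorted(sB.values()))   # both [10, 10, 48, 70, 90]
```

On this data the sketch returns the shape-type $\{90, 70, 48, 10, 10\}$ for both routes, in agreement with Example 12.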

Example 11. $s_T = (20, 19, 3, 4, 5)$ and $s_B = (1, 2, 18, 17, 13)$. Hence no majorizing shape exists.

Example 12. $U = (100, 90, 60, 50, 17)$, $L = (10, 70, 10, 48, 10)$. If $n = 228$, we obtain $s_T = s_B = \{90, 70, 10, 48, 10\}$, which is a majorizing shape. But if $219 \le n \le 226$, then there is no majorizing shape.

Chapter 4

The Mean-partition Problem

In the mean-partition problem the goal is to partition a finite set of elements, each associated with a number, into $p$ disjoint parts so as to optimize an objective function which depends on the averages of the elements assigned to each part. A partition $\pi$ is then associated with a $p$-vector $\bar\theta_\pi = (\bar\theta_1, \bar\theta_2, \ldots, \bar\theta_p)$, where $\bar\theta_i$ is the mean of part $i$. A useful approach in studying the problem is to explore the mean-partition polytope $M^\Pi$.

When $f$ is quasi-convex, there exists an optimal partition $\pi$ with $\bar\theta_\pi$ being a vertex of the mean-partition polytope $M^\Pi$. In such a case, it is useful to study $M^\Pi$, in particular, to identify properties of partitions $\pi$ for which $\bar\theta_\pi$ is a vertex of the mean-partition polytope. In Sec. 4.1, we give a linear transformation of the mean-partition polytope to the sum-partition polytope, thus allowing the transfer of results from the latter to the former. Unfortunately, this linear transformation technique cannot be extended to the bounded-shape problem, since we cannot identify the linear transformation. We also explore the approach introduced in Sec. 1.3 for the sum-partition problem to construct mean-partition polytopes. Note that this approach depends on two things: (i) $H_\lambda \subseteq P \subseteq C_\lambda$ and (ii) $\lambda$ is supermodular. We will study the two issues separately for the single-shape mean-partition problem. In particular, we will show that (i) is not satisfied but (ii) is. Thus we cannot conclude $H_\lambda = P = C_\lambda$. However, the proof of supermodularity is mathematically interesting, and hopefully, accomplishing this challenging proof may bring some benefit in some unexpected direction in the future.

4.1 Linear transformation of mean-partition problems to sum-partition problems

We observe that the single-shape mean-partition problem with prescribed shape $(n_1, \ldots, n_p)$ and objective function given by (1.4.3) coincides with the corresponding sum-partition problem with objective function given by (1.2.4), where $f$ satisfies
$$f(x_1, \ldots, x_p) = g\!\left(\frac{x_1}{n_1}, \ldots, \frac{x_p}{n_p}\right) \quad \text{for } x \in \mathbb{R}^p. \tag{4.1.1}$$
In particular, properties of optimal solutions for single-shape mean-partition problems are deducible from established properties of optimal solutions of corresponding sum-partition problems. For example, it is known [3] that:

A real-valued function $f$ is called quasi-convex if its maximum over every line segment contained in the domain of $f$ is attained at one of the two endpoints.

Theorem 4.1.1. When the $\theta_i$'s are distinct, every single-shape sum-partition problem with $f$ quasi-convex has at least one consecutive optimal partition.

This result establishes the polynomial solvability of the single-shape sum-partition problem. Now, as a function $g$ is quasi-convex if and only if the function $f$ defined through (4.1.1) is, we conclude from Theorem 4.1.1 that when $g$ is quasi-convex, each single-shape mean-partition problem has at least one consecutive optimal solution and is solvable in polynomial time.
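As a small illustration of (4.1.1) (with a hypothetical objective, not one taken from the thesis), let $p = 2$ and fix the shape $(2, 3)$; then
$$g(y_1, y_2) = \max\{y_1, y_2\} \qquad\text{corresponds to}\qquad f(x_1, x_2) = g\!\left(\frac{x_1}{2}, \frac{x_2}{3}\right) = \max\!\left\{\frac{x_1}{2}, \frac{x_2}{3}\right\},$$
and both $g$ and $f$ are quasi-convex, each being a maximum of linear functions.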

Furthermore, by applying the one-to-one transformation
$$(x_1, \ldots, x_p) \mapsto \left(\frac{x_1}{n_1}, \ldots, \frac{x_p}{n_p}\right) \tag{4.1.2}$$
we see that the single-shape mean-partition polytope is the one-to-one linear image of the corresponding single-shape sum-partition polytope. A virtue of this transformation is that it preserves vertices.

Let $(n_1, \ldots, n_p)$ be a vector of positive integers with coordinate-sum $n$ and let $\Pi$ be the set of partitions with shape $(n_1, \ldots, n_p)$. We observe that for every partition $\pi \in \Pi$, $\bar\theta_\pi = \left(\frac{\theta_{\pi_1}}{n_1}, \ldots, \frac{\theta_{\pi_p}}{n_p}\right)$, and therefore
$$M^\Pi = \operatorname{conv}\{\bar\theta_\pi : \pi \in \Pi\} = \operatorname{conv}\left\{\left(\frac{\theta_{\pi_1}}{n_1}, \ldots, \frac{\theta_{\pi_p}}{n_p}\right) : \pi \in \Pi\right\}$$
$$= \left\{\left(\frac{x_1}{n_1}, \ldots, \frac{x_p}{n_p}\right) : (x_1, \ldots, x_p) \in \operatorname{conv}\{\theta_\pi : \pi \in \Pi\} = P^\Pi\right\}$$
$$= \{(y_1, \ldots, y_p) : (n_1 y_1, \ldots, n_p y_p) \in P^\Pi\}.$$

Using the representation of $P^\Pi$ through (1.3.1), we get the representation of $M^\Pi$ as the set of vectors $y \in \mathbb{R}^p$ that satisfy
$$\sum_{i \in I} n_i y_i \ge \lambda(I) \ \text{ for all } I \subseteq \{1, \ldots, p\} \quad\text{and}\quad \sum_{i=1}^{p} n_i y_i = \lambda(\{1, \ldots, p\}). \tag{4.1.3}$$

Thus we have Theorem 4.1.2.

Theorem 4.1.2. When the $\theta_i$'s are distinct, every single-shape mean-partition problem with $g$ quasi-convex has at least one consecutive optimal partition.

The linear transformation approach does not apply to the bounded-shape mean-partition problem, since the variation in shape prevents the transformation from being linear as in (4.1.2). Consequently, vertices are not preserved under this nonlinear transformation. Example 13 shows that a partition whose sum vector is not a vertex of the bounded-shape sum-partition polytope can have a mean vector that is a vertex of the bounded-shape mean-partition polytope.

Example 13. Let $n = 4$, $\theta_i = i$ for $i = 1, \ldots, 4$, $p = 2$, $U = (2, 3)$, $L = (1, 2)$. Then the sum-partition polytope is the line segment connecting $(1, 9)$ and $(7, 3)$, while the mean-partition polytope is the parallelogram with vertices $\{(1, 3), (4, 2), (1.5, 3.5), (3.5, 1.5)\}$. The partition $\pi = (\{1, 2\}, \{3, 4\})$ yields a point that is not a vertex of the sum-partition polytope but is a vertex of the mean-partition polytope.
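The claims of Example 13 can be checked by brute force. The following Python sketch (our own verification, not code from the thesis) enumerates all partitions with a shape in $\Gamma(L, U)$, collects the sum vectors and mean vectors, and computes their convex hulls with a standard monotone-chain routine; exact rational arithmetic is used for the means to avoid floating-point artifacts.

```python
from fractions import Fraction
from itertools import combinations

theta = [1, 2, 3, 4]                         # theta_i = i
L, U, n = (1, 2), (2, 3), 4

def hull(points):
    """Andrew's monotone chain; returns the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

sums, means = [], []
for n1 in range(L[0], U[0] + 1):             # admissible sizes of part 1
    n2 = n - n1
    if not (L[1] <= n2 <= U[1]):
        continue
    for part1 in combinations(range(n), n1):
        part2 = [j for j in range(n) if j not in part1]
        s1 = sum(theta[j] for j in part1)
        s2 = sum(theta[j] for j in part2)
        sums.append((s1, s2))
        means.append((Fraction(s1, n1), Fraction(s2, n2)))

print(hull(sums))                                        # [(1, 9), (7, 3)]: a segment
print([(float(x), float(y)) for x, y in hull(means)])    # the four parallelogram vertices
```

With these data the hull of the mean vectors comes out as $(1, 3)$, $(3.5, 1.5)$, $(4, 2)$, $(1.5, 3.5)$, matching Example 13, and the mean vector $(1.5, 3.5)$ of $(\{1, 2\}, \{3, 4\})$ is among them even though its sum vector $(3, 7)$ lies strictly inside the segment.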

Although we cannot use the linear transformation approach to obtain the bounded-shape mean-partition polytope, we still have the following result.

Theorem 4.1.3. When the $\theta_i$'s are distinct and $g$ is quasi-convex, each constrained-shape mean-partition problem has a consecutive optimal partition.

Proof. An optimal mean-partition must have some shape in the constrained-shape set, and it is then optimal for the single-shape problem with that shape. Theorem 4.1.3 now follows from Theorem 4.1.2.

Anily and Federgruen [1] studied the bounded-shape mean-partition problem under the objective function $f(\pi) = \sum_{i=1}^{p} h(\bar\theta_{\pi_i}, n_i)$. They proved that if for each $n_i$, $h(x, n_i)$ is convex and nondecreasing in $x$, then there exists a disjoint optimal partition. Their result follows from Theorem 4.1.3, since such an objective function $f(\pi)$ is a special type of quasi-convex function. We note that with stronger assumptions on $h(x, y)$, Anily and Federgruen obtained additional, tighter results which are not available from our approach.

4.2 Supermodularity of $\lambda_M$

In this section, we explore a direct approach, along the lines of Sec. 1.3, to construct the single-shape mean-partition polytope. Without loss of generality, we assume that $n_1 \le n_2 \le \cdots \le n_p$.

For $I = \{i_1, i_2, \ldots, i_k\} \subseteq \{1, \ldots, p\}$, we suppose that $i_1 < i_2 < \cdots < i_k$. Define $N_{i_k} = \sum_{x=1}^{k} n_{i_x}$ for $1 \le k \le |I|$, with $N_{i_0} = 0$. Set
$$\lambda_M(I) = \sum_{k=1}^{|I|} \left( \sum_{j = N_{i_{k-1}}+1}^{N_{i_k}} \theta_j / n_{i_k} \right). \tag{4.2.1}$$

Example 14. Let $n = 3$, $\theta_i = i$ for $i = 1, 2, 3$, $p = 2$, and consider the mean-partition problem corresponding to the set $\Pi$ of partitions with shape $(1, 2)$.

The set $\Pi$ contains the three partitions $(\{1\}, \{2, 3\})$, $(\{2\}, \{1, 3\})$ and $(\{3\}, \{1, 2\})$, whose corresponding mean vectors are, respectively, $(1, 2.5)$, $(2, 2)$ and $(3, 1.5)$. The mean-partition polytope $M^\Pi$ is then the line segment connecting $(1, 2.5)$ and $(3, 1.5)$. Finally, the two permutations $(1, 2)$ and $(2, 1)$ of $\{1, 2\}$ correspond, respectively, to the vectors $(\lambda_M)_{(1,2)} = (\lambda_M(\{1\}), \lambda_M(\{1, 2\}) - \lambda_M(\{1\})) = (1, 2.5)$ and $(\lambda_M)_{(2,1)} = (\lambda_M(\{1, 2\}) - \lambda_M(\{2\}), \lambda_M(\{2\})) = (2, 1.5)$, and $H_{\lambda_M}$ is the line segment connecting these two points.
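For concreteness, here is a small Python sketch of (4.2.1) (our own illustration, not code from the thesis); it assumes, as in this section, that the part sizes satisfy $n_1 \le \cdots \le n_p$ and that the elements are indexed so that $\theta_1 \le \cdots \le \theta_n$. It reproduces the numbers of Example 14.

```python
from fractions import Fraction

def lambda_M(I, sizes, theta):
    """lambda_M(I) as in (4.2.1): block k consists of the next n_{i_k} smallest
    theta's and contributes their sum divided by n_{i_k}. I uses 1-based part indices."""
    total, start = Fraction(0), 0
    for i in sorted(I):
        block = theta[start:start + sizes[i - 1]]      # next n_i smallest theta's
        total += Fraction(sum(block), sizes[i - 1])
        start += sizes[i - 1]
    return total

theta, sizes = [1, 2, 3], [1, 2]         # Example 14
lam1  = lambda_M({1}, sizes, theta)      # 1
lam2  = lambda_M({2}, sizes, theta)      # 3/2
lam12 = lambda_M({1, 2}, sizes, theta)   # 7/2
# corner vectors of H_{lambda_M} for the two permutations of {1, 2}
print(lam1, lam12 - lam1)                # 1 5/2   (the point (1, 2.5))
print(lam12 - lam2, lam2)                # 2 3/2   (the point (2, 1.5))
```

The two printed corner vectors are exactly $(\lambda_M)_{(1,2)} = (1, 2.5)$ and $(\lambda_M)_{(2,1)} = (2, 1.5)$ from Example 14.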

Example 14 shows that (i) $H_{\lambda_M} \subseteq M^\Pi \subseteq C_{\lambda_M}$ is not satisfied: the point $(2, 1.5) \in H_{\lambda_M}$ does not lie in $M^\Pi$. Now we show that (ii) $\lambda_M$ is supermodular. We first prove:

Lemma 4.2.1. For any partition $\pi = (\pi_1, \ldots, \pi_p)$ of shape $(n_1, \ldots, n_p)$ and any $I \subseteq \{1, \ldots, p\}$, $\sum_{i \in I} \bar\theta_{\pi_i} \ge \lambda_M(I)$.

Proof. In the sum $\sum_{i \in I} \bar\theta_{\pi_i}$, the multipliers for the $\theta_j$'s are $\frac{1}{n_{i_1}} \ge \frac{1}{n_{i_2}} \ge \cdots \ge \frac{1}{n_{i_{|I|}}}$, which are ordered from large to small. Since for any $\pi$, $\sum_{i \in I} \bar\theta_{\pi_i}$ is computed by multiplying $N_{i_{|I|}}$ of the $\theta_j$'s with this same set of multipliers, only in different pairings, $\lambda_M(I)$, which uses the $N_{i_{|I|}}$ smallest $\theta_j$'s and pairs them reversely (smallest $\theta$'s with largest multipliers), achieves the minimum.

Define $\Delta_I = \lambda_M(I) - \lambda_M(I \setminus \{i_1\})$.

Lemma 4.2.2. Suppose $I \subset J$ and $i_1 = j_1$. Then $\Delta_I \le \Delta_J$.

Proof. First assume $n_{j_1} = 1$ and argue by induction on $k$: for $k = 1$, the term in the representation (4.2.2) cancels with the components in $\theta_{\pi'_{j_k}}$ except the last. For general $n_{j_1}$, $n_{j_1}$ elements are moved out instead of one as in Figure 2.1, so the numerator of (4.2.3) would be a difference between two $n_{j_k}$-sums; but the same logic applies. A second way is to notice that $\theta_{n_{j_1}}$ gets cancelled out in $\Delta_J - \Delta_I$, so the scenario is to compare the impact on $I$ and $J$ when both move back $n_{j_1}$ elements. But this is equivalent to moving one element back $n_{j_1}$ times.

Finally, we are ready to prove the main result of this section.

Theorem 4.2.3. $\lambda_M$ as defined in (4.2.1) is supermodular.

Proof. Let $I$ and $J$ be two subsets of $\{1, \ldots, p\}$. Without loss of generality, assume $I \cup J = \{1, 2, \ldots, m\}$. We prove Theorem 4.2.3 by induction on $m$.

Theorem 4.2.3 is trivially true for m = 1. We prove the general m ≥ 2 case.

Case (1): $1 \in I \cap J$, i.e., both $I$ and $J$ contain 1. Delete $\pi_1$ and the $\theta_j$'s in it. Suppose $n_1 = k$. Then the reduced partition problem is to partition the set $\{\theta_{k+1}, \ldots, \theta_n\}$ into $p - 1$ parts, and Theorem 4.2.3 follows by induction.

Since the first difference is unchanged, and the second becomes larger by Lemma 4.2.2, i.e., $\lambda_M(I \cap J) - \lambda_M((I \cap J) \setminus \{i_1\}) = \Delta_{I \cap J} \le \Delta_J = \lambda_M(J) - \lambda_M(J \setminus \{j_1\})$, the supermodular inequality follows.
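Since the written proof above is rather compressed, a brute-force numerical check may reassure the reader. The snippet below (again only an illustrative sketch) reuses the hypothetical lambda_M function from the previous sketch and tests the supermodular inequality over all pairs of subsets for a small instance with sorted data.

```python
from itertools import combinations

def subsets(ground):
    ground = list(ground)
    return [set(c) for r in range(len(ground) + 1) for c in combinations(ground, r)]

theta = [1, 2, 4, 5, 8, 9, 11]     # arbitrary increasing test data
sizes = [1, 2, 4]                  # n_1 <= n_2 <= n_3, summing to len(theta)
parts = {1, 2, 3}
supermodular = all(
    lambda_M(I | J, sizes, theta) + lambda_M(I & J, sizes, theta)
    >= lambda_M(I, sizes, theta) + lambda_M(J, sizes, theta)
    for I in subsets(parts) for J in subsets(parts)
)
print(supermodular)                # True, consistent with Theorem 4.2.3
```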

4.3 Some new results in the mean-partition problem

Given vectors $a$ and $b$ in $\mathbb{R}^p$, we say that $a$ weakly submajorizes $b$, written $a \succ_w b$, if
$$\sum_{i=1}^{k} a_{[i]} \ge \sum_{i=1}^{k} b_{[i]} \quad \text{for } k = 1, \ldots, p, \tag{4.3.1}$$
where $a_{[1]} \ge \cdots \ge a_{[p]}$ denote the components of $a$ in decreasing order. It is also well known [17] that:

Theorem 4.3.1. Suppose $f$ is Schur convex and nondecreasing, and $a \succ_w b$. Then $f(a) \ge f(b)$.

Lemma 4.3.2. Let $(\bar\theta_{\pi^*_1}, \ldots, \bar\theta_{\pi^*_p})$ be the mean vector of the reverse size-consecutive partition $\pi^*$, and let $(\bar\theta_{\pi_1}, \ldots, \bar\theta_{\pi_p})$ denote the mean vector of an arbitrary partition $\pi$ whose shape is equivalent to the shape of $\pi^*$. Then $(\bar\theta_{\pi^*_1}, \ldots, \bar\theta_{\pi^*_p}) \succ_w (\bar\theta_{\pi_1}, \ldots, \bar\theta_{\pi_p})$.

Proof. It was proved in [5] that reverse size-consecutiveness is a 2-shape-sortable property; namely, it suffices to prove Lemma 4.3.2 for $p = 2$. Define $W$ to be the multiset consisting of $\frac{1}{n_1}$, $n_1$ of them, and $\frac{1}{n_2}$, $n_2$ of them. In the sum $\bar\theta_{\pi_1} + \bar\theta_{\pi_2}$, each $\theta_j \in \pi_1$ contributes $\frac{\theta_j}{n_1}$ and each $\theta_j \in \pi_2$ contributes $\frac{\theta_j}{n_2}$. Therefore $\bar\theta_{\pi_1} + \bar\theta_{\pi_2}$ is determined by a one-to-one mapping between $W$ and the set of $n_1 + n_2$ $\theta$'s. By the Hardy, Littlewood and Polya theorem, the sum is maximized when the mapping is monotone, with larger elements of $W$ mapped to larger $\theta$'s, which implies that the reverse size-consecutive partition $\pi^*$ achieves the maximum sum.

Next, we prove that
$$\max\{\bar\theta_{\pi^*_1}, \bar\theta_{\pi^*_2}\} \ge \max\{\bar\theta_{\pi_1}, \bar\theta_{\pi_2}\}.$$
Without loss of generality, assume $n_1 \le n_2$. It is trivial that $\bar\theta_{\pi^*_1} \ge \bar\theta_{\pi_1}$. Let $\pi'_2$ consist of the $n_2$ largest $\theta$'s. Then clearly
$$\bar\theta_{\pi_2} \le \bar\theta_{\pi'_2},$$
and the average of the $n_1$ largest $\theta$'s is at least the average of the $n_2$ largest $\theta$'s, which means
$$\bar\theta_{\pi'_2} \le \bar\theta_{\pi^*_1}.$$
So
$$\max\{\bar\theta_{\pi^*_1}, \bar\theta_{\pi^*_2}\} = \bar\theta_{\pi^*_1} \ge \max\{\bar\theta_{\pi_1}, \bar\theta_{\pi_2}\}.$$
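A quick numerical check of Lemma 4.3.2 for $p = 2$ may also be useful (a hypothetical instance of our own, with the reverse size-consecutive partition taken, as in the proof, to give the $n_1$ largest $\theta$'s to the smaller part):

```python
from itertools import combinations

def weakly_submajorizes(a, b):
    """a >=_w b: prefix sums of the decreasingly sorted vectors dominate."""
    a, b = sorted(a, reverse=True), sorted(b, reverse=True)
    return all(sum(a[:k]) >= sum(b[:k]) for k in range(1, len(a) + 1))

theta = [1, 2, 3, 5, 8, 13]        # increasing test data
n1, n2 = 2, 4                      # n_1 <= n_2
# reverse size-consecutive partition: the n_1 largest theta's go to the smaller part
star = (sum(theta[-n1:]) / n1, sum(theta[:n2]) / n2)

holds = True
for part1 in combinations(range(len(theta)), n1):
    part2 = [j for j in range(len(theta)) if j not in part1]
    mean_vec = (sum(theta[j] for j in part1) / n1, sum(theta[j] for j in part2) / n2)
    holds = holds and weakly_submajorizes(star, mean_vec)
print(holds)                       # True, as asserted by Lemma 4.3.2
```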

Using Theorem 4.3.1 and Lemma 4.3.2, we obtain

Theorem 4.3.3. There exists a reverse size-consecutive optimal partition for the single-shape mean partition problem.

Corollary 4.3.4. There exists a reverse size-consecutive optimal partition for the constrained-shape mean partition problem.

Note that for a given shape, the reverse size-consecutive partition is unique. So for the mean-partition problem with the constrained-shape set $\Gamma$, we only need to compare the $f$-values of $|\Gamma|$ partitions, one for each shape in $\Gamma$.

For bounded-shape partitions, $|\Gamma|$ is not explicit. It suffices to consider only those shapes in $\Gamma$ which are not majorized by any other shape in $\Gamma$. Further, in Chapter 3 we bounded the number of these nonmajorized shapes by $2^{p-1}$ (Sec. 3.3).

Although we don’t know how to characterize the constrained-shape mean partition polytope, we can bound its number of vertices by the sum of the number of vertices on the single-shape mean-partition polytope for each shape in Γ. Since there is a one-to-one mapping between the vertices of the single-shape mean-partition polytope and the vertices of the single-shape

sum-partition polytope, and also a one-to-one mapping is well known [18] be-tween the latter and the set of consecutive partitions, we obtain a bound of

|Γ|p!. This is indeed an upper bound as the following example shows that a consecutive partition of a shape in Γ is not a vertex of the constrained-shape polytope.

Example 15. Let $\Gamma = \{(1, 3), (2, 2), (3, 1)\}$, $n = 4$, $\theta_i = i$ for $i = 1, \ldots, 4$, $p = 2$.

We give the two points generated by the two consecutive partitions for each shape:

shape $(1, 3)$: $(1, 3)$ and $(4, 2)$
shape $(2, 2)$: $(\frac{3}{2}, \frac{7}{2})$ and $(\frac{7}{2}, \frac{3}{2})$
shape $(3, 1)$: $(3, 1)$ and $(2, 4)$

Thus the polytope has the 4 vertices $(1, 3)$, $(4, 2)$, $(3, 1)$, $(2, 4)$, while the two points yielded by shape $(2, 2)$ are not vertices (each is the midpoint of an edge).

Theorem 4.3.5. Suppose $f$ is quasi-convex. Then the constrained-shape mean-partition problem with shape set $\Gamma$ has a consecutive optimal partition, which can be found by searching a candidate set of cardinality at most $|\Gamma|\,p!$.

Chapter 5

Conclusion and remarks

In this thesis, we develop the generating function approach to count the number of bounded-shape partitions, which helps us to estimate the practicability of the brute-force method for finding an optimal partition. We extend the concept of a majorizing shape to the concept of a nonmajorized shape for the bounded-shape sum-partition problem with a Schur convex objective function, and we prove that there exists a nonmajorized shape for which the corresponding size-consecutive partition is optimal. Moreover, we prove that $2^{p-1}$ is an upper bound on the number of nonmajorized shape-types, and we develop algorithms to find all nonmajorized shapes (shape-types). In the last chapter, we study the mean-partition problem. We use the linear transformation approach to characterize the single-shape mean-partition polytope and prove that if the objective function is quasi-convex, then there exists a consecutive optimal partition.

We also give a bound on the cardinality of the candidate set for finding an optimal partition in the constrained-shape mean-partition case.

We list some topics for future research:

(i) to find a more explicit formula to count the number of bounded-shape partitions,

(ii) to give the exact value of $f(p)$,

(iii) to prove our $\binom{p-1}{\lfloor (p-1)/2 \rfloor}$ conjecture,

(iv) to develop a faster algorithm to find all nonmajorized shapes (shape-types),

(v) to characterize the bounded-shape mean-partition polytope.

References

[1] S. Anily and A. Federgruen, Structured partition problems, Oper. Res. 39 (1991) 130–149.

[2] R. A. Brualdi, Introductory Combinatorics, 3rd ed., Prentice Hall, 1999, Chapter 8.

[3] E. R. Barnes, A. J. Hoffman and U. G. Rothblum, Optimal partitions having disjoint convex and conic hulls, Mathematical Programming, Series A 54 (1992) 69–86.

[4] F. H. Chang, H. B. Chen, J. Y. Guo, F. K. Hwang and U. G. Rothblum, One-dimensional optimal bounded-shape partitions for Schur convex sum objective functions, to appear.

[5] G. J. Chang, F. L. Chen, L. L. Hwang, F. K. Hwang, S. T. Nuan, U. G. Rothblum, I-Fan Sun, J. W. Wang, and H. G. Yen, Sortabilities of partition properties, Journal of Combinatorial Optimization 2 (1999) 413–427.

[6] F. H. Chang, J. Y. Guo, F. K. Hwang and Y. C. Pan, A generating function approach to count the number of bounded-shape partitions, to appear.

[7] F. H. Chang and F. K. Hwang, Supermodularity in mean-partition problems, Journal of Global Optimization.

[8] F. H. Chang, F. K. Hwang and U. G. Rothblum, The mean-partition problem, preprint.

[9] B. Gao, F. K. Hwang, W. W.-C. Li and U. G. Rothblum, Partition polytopes over 1-dimensional points, Math. Program. 85 (1999) 335–362.

[10] F. K. Hwang, M. M. Liao and C. Y. Chen, Supermodularity of various partition problems, J. Global Optimization 18 (2000) 275–282.

[11] F. K. Hwang, J. S. Lee and U. G. Rothblum, Permutation polytopes corresponding to strongly supermodular functions, Disc. Appl. Math. 142 (2004) 52–97.

[12] F. K. Hwang, S. Onn and U. G. Rothblum, Representations and characterizations of vertices of bounded-shape partition polytopes, Linear Algebra and its Applications 278 (1998) 263–284.

[13] F. K. Hwang, S. Onn and U. G. Rothblum, Explicit solution of partition problems over a 1-dimensional parameter space, Naval Research Logistics 47 (2000) 531–540.

[14] F. K. Hwang and U. G. Rothblum, Directional quasi-convexity, asymmetric Schur-convexity and optimality of consecutive partitions, Math. Oper. Res. 21 (1996) 540–554.

[15] F. K. Hwang and U. G. Rothblum, Partition-optimization with Schur-convex sum objective functions, SIAM J. Disc. Math., to appear.

[16] F. K. Hwang and U. G. Rothblum, Partition: Optimality and clustering, World Scientific, Singapore, to appear.

[17] A. W. Marshall and I. Olkin, Inequalities: Theory of Majorization and Its Applications, Academic Press, New York, 1979.

[18] L. S. Shapley, Cores of convex games, Intern. J. Game Theory 1 (1971) 11–29.

[19] E. Sperner, Ein Satz über Untermengen einer endlichen Menge, Mathematische Zeitschrift 27 (1928) 544–548.
