Homework #1
Solution Manual
1 Manhattan Problem (programming assignment)
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define N 1000010

struct Point { int x, y, id; } p[N], q[N];
int dis[N], who[N];

/* note: the subtraction comparator is safe here only because the
 * coordinates are small enough that x_a - x_b cannot overflow */
int cmp_x(const void *a, const void *b) {
    return ((struct Point *)a)->x - ((struct Point *)b)->x;
}

/* For every point in the run p[s1..e1) find its best partner in
 * p[s2..e2), under the direction signs (cx, cy); both runs are
 * already sorted by y-coordinate, so one sweep suffices. */
void calculate(int cx, int cy, int s1, int e1, int s2, int e2) {
    int d1 = s1 < e1 ? 1 : -1, d2 = s2 < e2 ? 1 : -1, big = INT_MIN, who_big = -1;
    while (s1 != e1 && cy * p[s1].y < cy * p[s2].y) s1 += d1;
    while (s1 != e1) {
        while (s2 != e2 && cy * p[s2].y <= cy * p[s1].y) {
            int now = cx * p[s2].x + cy * p[s2].y;
            if (now > big || (now == big && p[s2].id < who_big)) {
                big = now;
                who_big = p[s2].id;
            }
            s2 += d2;
        }
        int me = p[s1].id, now = cx * p[s1].x + cy * p[s1].y - big;
        if (now < dis[me] || (now == dis[me] && who_big < who[me])) {
            dis[me] = now;
            who[me] = who_big;
        }
        s1 += d1;
    }
}

/* Merge step of the merge sort on y-coordinate. */
void merge(int l, int m, int r) {
    int i = l, j = m, k = l;
    while (i < m || j < r)
        q[k++] = p[j == r || (i < m && p[i].y < p[j].y) ? i++ : j++];
    for (i = l; i < r; i++) p[i] = q[i];
}

void solve(int l, int r) {
    if (r - l == 1) return;
    int m = (l + r) / 2;
    solve(l, m);
    solve(m, r);
    /* the four symmetric cases of the constraint x1 >= x2, y1 >= y2 */
    calculate(-1, +1, l, m, m, r);
    calculate(+1, +1, m, r, l, m);
    calculate(-1, -1, m - 1, l - 1, r - 1, m - 1);
    calculate(+1, -1, r - 1, m - 1, m - 1, l - 1);
    merge(l, m, r);
}

int main()
{
    int n, i;
    scanf("%d", &n);
    for (i = 0; i < n; i++) p[i].id = i;
    for (i = 0; i < n; i++) dis[i] = INT_MAX;
    for (i = 0; i < n; i++) scanf("%d %d", &p[i].x, &p[i].y);
    qsort(p, n, sizeof(struct Point), cmp_x);
    solve(0, n);
    for (i = 0; i < n; i++) printf("%d %d\n", who[i], dis[i]);
    return 0;
}
The key point is the function calculate. The Manhattan distance of two points (x1, y1), (x2, y2) is defined as |x1 − x2| + |y1 − y2|. If we add the constraint x1 ≥ x2 and y1 ≥ y2, the distance becomes x1 − x2 + y1 − y2 = (x1 + y1) − (x2 + y2). This equation shows that the distance to (x1, y1) gets smaller as (x2 + y2) gets larger, so we can sort the points on both sides by y-coordinate and then solve the problem in linear time under the constraint x1 ≥ x2 and y1 ≥ y2. The unconstrained problem follows as well, since the remaining cases are just the symmetric versions of this one. By merge-sorting on the y-coordinate during the divide and conquer instead of re-sorting every time, the total computation time is T(n) = 2T(n/2) + O(n) = O(n lg n).
2 Super Safe
Note that we can undo a move by pressing/releasing the same button again, since the constraint that determines whether we can press/release button k depends only on buttons 1, 2, . . . , k − 1.
1) Let si be the strategy that opens a type-i safe. To open a type-n safe:
i) If n > 2, do sn−2 to release buttons 1, 2, . . . , n − 2.
ii) Release button n.
iii) If n > 2, undo sn−2 to press buttons 1, 2, . . . , n − 2 again.
iv) If n > 1, do sn−1 to release buttons 1, 2, . . . , n − 1.
2) According to our strategy, we have
a_1 = 1
a_2 = 2
a_n = a_{n−2} + 1 + a_{n−2} + a_{n−1} = a_{n−1} + 2a_{n−2} + 1   ∀n > 2
3) We claim that the general form of a_n is
a_n = (2^{n+2} − 3 − (−1)^n) / 6.
Proof. We use mathematical induction to show that.
Basis:
n = 1: (2^3 − 3 − (−1)^1)/6 = (8 − 3 + 1)/6 = 1 = a_1
n = 2: (2^4 − 3 − (−1)^2)/6 = (16 − 3 − 1)/6 = 2 = a_2
Induction: ∀n ≥ 3,
a_n = a_{n−1} + 2a_{n−2} + 1
= (2^{n+1} − 3 − (−1)^{n−1})/6 + 2 · (2^n − 3 − (−1)^{n−2})/6 + 1
= (2^{n+1} − 3 − (−1)^{n−1})/6 + (2^{n+1} − 6 − 2 · (−1)^{n−2})/6 + 6/6
= (2^{n+2} − 3 − (−1)^{n−1} − 2 · (−1)^{n−2})/6
= (2^{n+2} − 3 − [(−1)^{n−1} + (−1)^{n−2}] − (−1)^{n−2})/6
= (2^{n+2} − 3 − (−1)^n)/6
where the last step uses (−1)^{n−1} + (−1)^{n−2} = 0 and (−1)^{n−2} = (−1)^n.
4) Proof. We use mathematical induction, arguing stage by stage.
Basis:
When n ≤ 2, a1 = 1 and a2 = 2 are obviously optimal.
Induction:
∀n ≥ 3
i) OOO...OOO → XXX...XOO
We need to release button n at some point, and the state from which button n can be released is unique. Thus we use sn−2 to reach this state, and sn−2 is optimal by the induction hypothesis.
ii) XXX...XOO → XXX...XOX
Simply releasing button n is obviously optimal.
iii) XXX...XOX → OOO...OOX
We have exactly two choices in each state of this stage: press/release button 1 or button k + 1, where k is the first pressed button. One of them is the undo move, which leads back to the previous state. But an optimal strategy should never enter the same state more than once, so at each step the optimal move is uniquely determined.
Undoing sn−2 never enters the same state twice, by the induction hypothesis, thus this stage is optimal.
iv) OOO...OOX → XXX...XXX
What we need to do now is the same as opening a type-(n − 1) safe, so simply using sn−1 is optimal.
5) What we need is the same as stages iii and iv when opening a type-(n + 1) safe, without the last button. It takes
a_{n−1} + a_n = (2^{n+1} − 3 − (−1)^{n−1})/6 + (2^{n+2} − 3 − (−1)^n)/6
= (2^{n+1} + 2^{n+2} − 3 − 3 − (−1)^{n−1} − (−1)^n)/6
= (2 · 2^n + 4 · 2^n − 6 − [(−1)^{n−1} + (−1)^n])/6
= (6 · 2^n)/6 − 1
= 2^n − 1
steps to open the safe. Thus Peter can always use this strategy of releasing the first pressed button until the safe opens, regardless of the states of the buttons at the beginning.
Another solution uses the fact that the total number of states of a type-n safe is 2^n, so this strategy visits every state before the safe is opened.
3 Read Your Textbook
1) Since n lg n = O(n^{log_3 4 − ε}) for all 0 < ε ≤ log_3 4 − 1 ≈ 0.26, by case 1 of the master theorem we have T(n) = Θ(n^{log_3 4}).
2) Note that the n-th harmonic number H_n = Σ_{i=1}^{n} 1/i = Θ(lg n). Let f(n) = T(n)/n; we have
f(n) = T(n)/n
= [3T(n/3) + n/lg n] / n
= 3T(n/3)/n + 1/lg n
= T(n/3)/(n/3) + 1/lg n
= f(n/3) + 1/lg n
= f(n/9) + 1/lg(n/3) + 1/lg n
...
= Θ( Σ_{i=0}^{⌊log_3 n⌋−1} 1/lg(n/3^i) )
= Θ( Σ_{i=0}^{⌊log_3 n⌋−1} 1/log_3(n/3^i) )
= Θ( Σ_{i=0}^{⌊log_3 n⌋−1} 1/(log_3 n − i) )
= Θ( Σ_{i=1}^{⌊log_3 n⌋} 1/i )
= Θ(lg lg n)
Thus T(n) = n f(n) = Θ(n lg lg n).
3) We use the substitution method: guess T(n) ≤ cn for some constant c > 0. Then
T(n) = T(n/2) + T(n/4) + T(n/8) + n
≤ cn/2 + cn/4 + cn/8 + n
= 7cn/8 + n
≤ cn   ∀c ≥ 8
And T(n) = T(n/2) + T(n/4) + T(n/8) + n ≥ n = Ω(n), so T(n) = Θ(n).
4) Let f(n) = T(n)/n; we have
f(n) = T(n)/n
= [√n T(√n) + n] / n
= T(√n)/√n + 1
= f(√n) + 1
= f(√(2^{lg n})) + 1
= f(2^{lg n / 2}) + 1
= f(2^{lg n / 4}) + 1 + 1
...
= Θ(lg lg n)
Thus T(n) = n f(n) = Θ(n lg lg n).
4 Dividing and Matching by Segments
1) Let point p be the intersection of aibj and axby.
(figure: segments aibj and axby crossing at the point p)
By the triangle inequality, we have
aiby < aip + pby and axbj < axp + pbj
=⇒ aiby + axbj < aip + pby + axp + pbj
=⇒ aiby + axbj < aip + pbj + axp + pby
=⇒ aiby + axbj < aibj + axby
Thus the total length becomes strictly smaller after each iteration.
Since the total number of ways to connect these points is finite, the algorithm must eventually terminate, so for any given 2n points a solution always exists.
2) Take the lowest point (if there is more than one, take any of them) as the center, and sort the remaining points by angle with respect to it. Because no three points are collinear, this order is unique.
Without loss of generality, assume that the lowest point is the blue point bn. We claim that there is an amber point am such that if we connect bn to am, the remaining points are divided into two half-planes, each containing the same number of amber and blue points. Thus we can recursively solve the two half-planes with the same algorithm, since a segment in one half-plane cannot intersect a segment in the other.
Note that the amber point am matches the definition of a lovely number in problem 5, if we order the points by angle with respect to bn. Thus we can sort the points and find am in O(n lg n) time with the same algorithm as in problem 5. Since each round connects one pair of points, repeating at most n times gives a total time complexity of O(n^2 lg n) = O(n^3).
5 A Lovely Problem (Uh, probably not)
1) Proof. Let f(x) = |{ai | ai < x}| − |{bi | bi < x}|. Then f(ak) = 0 iff ak is a lovely number, and we have f(x) ∈ Z, f(a1) ≤ 0, f(an+1) ≥ 0, and f(ai+1) ≤ f(ai) + 1. We show that
∃ ak : f(ak) = 0 under these conditions by mathematical induction.
Basis:
When n = 0, f(a1) ≤ 0 and f(an+1) = f(a1) ≥ 0 =⇒ f(a1) = 0.
Induction:
If f(a1) = 0, taking k = 1 is enough.
If f(a1) < 0, then by f(ai+1) ≤ f(ai) + 1 we have f(a2) ≤ 0, and applying the induction hypothesis to a2, a3, . . . , an+1 is enough.
2) Require: A and B are sorted
1: function lucky1(n, A, B)
2:   f ← 0
3:   j ← 1
4:   for i ← 1 to n + 1 do
5:     while j ≤ n and bj < ai do
6:       f ← f − 1
7:       j ← j + 1
8:     if f = 0 then
9:       return i
10:    f ← f + 1
This algorithm runs in O(n) time since the variables i and j only ever increase.
3) Require: A is sorted
1: function lucky2(n, A, B)
2:   f ← 0
3:   l ← 1
4:   r ← n + 1
5:   loop
6:     m ← ⌈(l + r)/2⌉
7:     B′ ← {bi | bi ∈ B, bi < am}
8:     f′ ← f + (m − l) − |B′|
9:     if f′ > 0 then
10:      r ← m − 1
11:      B ← B′
12:    else if f′ < 0 then
13:      f ← f′
14:      l ← m
15:      B ← B \ B′
16:    else
17:      return m
Note that we use m ← ⌈(l + r)/2⌉ to avoid the infinite loop that m ← ⌊(l + r)/2⌋ could cause: if l + 1 = r then ⌊(l + r)/2⌋ = l, and l ← m would change nothing. It is easy to see that f ≤ 0, and we claim that |B| ≤ f + r − l always holds in the above algorithm; thus the total computation cost T is
T = O( Σ_loop |B| )
= O( Σ_loop (f + r − l) )
= O( Σ_loop (r − l) )
= O( Σ_{i=1}^{lg n} n/2^i )
= O(n)
Proof of |B| ≤ f + r − l. We use mathematical induction on the loop iterations.
Basis:
The initial state satisfies |B| = n = 0 + (n + 1) − 1 = f + r − l.
Induction:
If f′ > 0 (so B ← B′, r ← m − 1, and f is unchanged):
|B′| = f − f′ + (m − l) < f + m − l,
and since both sides are integers, |B′| ≤ f + (m − 1) − l.
If f′ < 0 (so B ← B \ B′, l ← m, and f ← f′):
|B \ B′| = |B| − |B′|
≤ (f + r − l) − (f − f′ + m − l)
= f′ + r − m
If f′ = 0, the algorithm terminates immediately.
4) 1: function lucky3(n, A, B)
2:   f ← 0
3:   l ← 1
4:   r ← n + 1
5:   loop
6:     m ← ⌈(l + r)/2⌉
7:     run a selection algorithm so that al, al+1, . . . , am−1 < am < am+1, am+2, . . . , ar
8:     B′ ← {bi | bi ∈ B, bi < am}
9:     f′ ← f + (m − l) − |B′|
10:    if f′ > 0 then
11:      r ← m − 1
12:      B ← B′
13:    else if f′ < 0 then
14:      f ← f′
15:      l ← m
16:      B ← B \ B′
17:    else
18:      return m
This algorithm only takes an additional O(r − l) per iteration for the linear-time selection algorithm, so the total computation time is the same as in the previous problem, O(n).