
# Spring 2016


## Hung-Yuan Fan (范洪源)

Department of Mathematics, National Taiwan Normal University, Taiwan


## Root-Finding Problem (勘根問題)

One of the most basic problems in numerical analysis.

Given a real-valued function f, find a root (or solution) p of the nonlinear equation

f(x) = 0,

i.e. a number p with f(p) = 0.

The root p is also called a zero (零根) of f.

## Note: Three numerical methods will be discussed here:

Bisection method

Newton’s (or Newton-Raphson) method

Secant and False Position (or Regula Falsi) methods


### The Procedure of Bisection Method

Assume that f is well-defined on the interval [a, b].

Set a1 = a and b1 = b. Find the midpoint p1 of [a1, b1] by

p1 = a1 + (b1 − a1)/2 = (a1 + b1)/2.

If f(p1) = 0, set p = p1 and we are done.

If f(p1) ≠ 0, then we have

f(p1) · f(a1) > 0 ⇒ p ∈ (p1, b1). Set a2 = p1 and b2 = b1.

f(p1) · f(a1) < 0 ⇒ p ∈ (a1, p1). Set a2 = a1 and b2 = p1.

Continue the above process until convergence.


### Pseudocode of Bisection Method

Given f ∈ C[a, b] with f(a) · f(b) < 0.

## Algorithm 2.1: Bisection

INPUT endpoints a, b; tolerance TOL; max. no. of iter. N0.
OUTPUT an approx. sol. p.

Step 1 Set i = 1 and FA = f(a);

Step 2 While i≤ N0 do Steps 3–6

Step 3 Set p = a + (b− a)/2; FP = f(p).

Step 4 If FP = 0 or (b− a)/2 < TOL then OUTPUT(p); STOP.

Step 5 Set i = i + 1.

Step 6 If FP· FA > 0 then set a = p and FA = FP.

Else set b = p. (FA is unchanged)

Step 7 OUTPUT(‘Method failed after N0 iterations’) and STOP.
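Algorithm 2.1 translates directly into code. The following Python sketch is our own illustration (not part of the original notes; the name `bisection` is ours):

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Algorithm 2.1: Bisection on [a, b], assuming f(a) * f(b) < 0."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        p = a + (b - a) / 2            # midpoint, written to limit round-off
        fp = f(p)
        if fp == 0 or (b - a) / 2 < tol:
            return p
        if fp * fa > 0:                # root lies in (p, b)
            a, fa = p, fp
        else:                          # root lies in (a, p)
            b = p
    raise RuntimeError("Method failed after max_iter iterations")

# Example 1, p. 50: f(x) = x^3 + 4x^2 - 10 on [1, 2], tol = 10^-4
root = bisection(lambda x: x**3 + 4*x**2 - 10, 1.0, 2.0, tol=1e-4)
```

Each pass halves the bracket, so the loop mirrors Steps 3–6 exactly.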


## Stopping Criteria (停止準則)

In Step 4, other stopping criteria can be used. Let ϵ > 0 be a given tolerance and p1, p2, . . . , pN be generated by Bisection method.

(1) |pN − pN−1| < ϵ,

(2) |pN − pN−1| / |pN| < ϵ with pN ≠ 0,

(3) |f(pN)| < ϵ.


## Example 1, p. 50

(1) Show that the equation

f(x) = x^3 + 4x^2 − 10 = 0

has exactly one root in [1, 2].

(2) Use the Bisection method to determine an approx. root which is accurate to within 10^{-4}.

The root is p = 1.365230013 correct to 9 decimal places.


## Solution

(1) By IVT with f(1)f(2) = (−5)(14) < 0, ∃ p ∈ (1, 2) s.t. f(p) = 0. Since f′(x) = 3x^2 + 8x > 0 for x ∈ (1, 2), the root must be unique in [1, 2].

(2) After 13 iterations, since |a14| < |p|, we have

|p − p13| / |p| ≤ |b14 − a14| / |a14| ≤ 9.0 × 10^{-5}.

Note that |f(p9)| < |f(p13)| in Table 2.1.


## Thm 2.1 (二分法的收斂定理)

Suppose that f ∈ C[a, b] with f(a) · f(b) < 0. The Bisection method generates a sequence {pn}_{n=1}^∞ converging to a root p of f with

|p − pn| ≤ (b − a) / 2^n ∀ n ≥ 1.

The rate of convergence is O(1/2^n).

## pf:

For each n ≥ 1, p ∈ (an, bn) and bn − an = (b − a) / 2^{n−1} by induction. Hence, we have

|p − pn| ≤ (bn − an) / 2 = (b − a) / 2^n.


## Remark

Applying Thm 2.1 to Example 1, we see that

|p − p9| ≤ (2 − 1) / 2^9 ≈ 2 × 10^{-3},

but the actual absolute error is |p − p9| ≈ 4.4 × 10^{-6}. In this case, the error bound in Thm 2.1 is much larger than the actual error.


## Example 2, p. 52

As in Example 1, use Thm 2.1 to estimate the smallest number N of iterations so that |p − pN| < 10^{-3}.

## Sol: Applying Thm 2.1, it follows that

|p − pN| ≤ (2 − 1) / 2^N < 10^{-3} ⟺ 2^{-N} < 10^{-3},

or, equivalently,

(−N) log10 2 < −3 ⟺ N > 3 / log10 2 ≈ 9.96.

So, 10 iterations will ensure the required accuracy. But, in fact, we know that

|p − p9| ≈ 4.4 × 10^{-6}.
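The iteration count in Example 2 can be checked mechanically. A small Python sketch (ours, for illustration):

```python
import math

# Smallest N with (b - a)/2**N < tol (the Thm 2.1 bound), i.e.
# N > log2((b - a)/tol); for [1, 2] and tol = 10^-3 this gives N = 10.
a, b, tol = 1.0, 2.0, 1e-3
N = math.ceil(math.log2((b - a) / tol))
```

Note this is only the a priori bound; as Example 2 shows, the actual iterates can be far more accurate.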


## In Practical Computation...

To avoid round-off errors in the computations, use

pn = an + (bn − an) / 2 instead of pn = (an + bn) / 2.

To avoid overflow or underflow of f(pn) · f(an), use

sign(f(pn)) · sign(f(an)) < 0 instead of f(pn) · f(an) < 0.

## Note: The sign function is defined by

sign(x) = 1 if x > 0, 0 if x = 0, −1 if x < 0.
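The sign-based test can be demonstrated in Python (our own sketch; `sign` mirrors the definition above):

```python
def sign(x):
    """sign(x) = 1 if x > 0, 0 if x = 0, -1 if x < 0."""
    return (x > 0) - (x < 0)

# The product f(p)*f(a) can underflow even when both factors are nonzero,
# losing the sign information; comparing signs avoids this.
fp, fa = -1e-200, 1e-200
assert fp * fa == 0.0              # product underflows: the naive test fails
assert sign(fp) * sign(fa) < 0     # the sign test still detects the change
```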


## Def 2.2

The number p is called a fixed point (固定點) of a real-valued function g if g(p) = p.

## Note: A root-finding problem of the form

f(p) = 0,

where p is a root of f, can be transformed to a fixed-point form p = g(p),

for some suitable function g obtained by algebraic transposition.

(the function g(x) is obtained from the original f(x) by algebraic rearrangement)


## Thm 2.3 (固定點的存在性與唯一性)

(i) If g ∈ C[a, b] and g([a, b]) ⊆ [a, b], then g has at least one fixed point in [a, b].

(ii) If, in addition, g′(x) exists on (a, b) and ∃ 0 < k < 1 s.t.

|g′(x)| ≤ k ∀ x ∈ (a, b),

then there is exactly one fixed point in [a, b].


### Illustrative Diagram for Fixed Points

Geometrically, a fixed point p ∈ [a, b] is just a point where the curves y = g(x) and y = x intersect.


## Proof of Thm 2.3

(i) If g(a) = a or g(b) = b, we are done. If not, then g(a) > a and g(b) < b, since g([a, b])⊆ [a, b]. Note that the function h(x) = g(x)− x ∈ C[a, b] and

h(a) = g(a)− a > 0, h(b) = g(b) − b < 0.

By IVT, ∃ p ∈ (a, b) s.t. h(p) = 0 or g(p) = p.

(ii) Suppose that ∃ p ≠ q ∈ [a, b] s.t. g(p) = p and g(q) = q. By MVT, ∃ ξ between p and q s.t.

|p − q| = |g(p) − g(q)| = |g′(ξ)| |p − q| ≤ k |p − q| < |p − q|,

which is a contradiction! Hence, g must have a unique fixed point in [a, b].


## Example 3, p. 59

Although the sufficient conditions of Thm 2.3 are NOT all satisfied for g(x) = 3^{-x} on the interval [0, 1], there does exist a unique fixed point of g in [0, 1].

## Sol: Since g′(x) = −3^{-x} ln 3 < 0 ∀ x ∈ [0, 1], g is strictly decreasing on [0, 1] and hence

1/3 = g(1) ≤ g(x) ≤ g(0) = 1 ∀ x ∈ [0, 1],

i.e. g ∈ C[0, 1] and g([0, 1]) ⊆ [0, 1].


### Example 3: Condition (ii) Is NOT Satisfied (2/2)

But also note that

g′(0) = −ln 3 ≈ −1.0986,

thus |g′(x)| ≤ k < 1 fails on (0, 1) and condition (ii) of Thm 2.3 is not satisfied. Nevertheless, because g is strictly decreasing on [0, 1], its graph must intersect the graph of y = x at exactly one fixed point p ∈ (0, 1).


## Functional (or Fixed-Point) Iteration

Assume that g ∈ C[a, b] and g([a, b]) ⊆ [a, b]. The fixed-point iteration generates a sequence {pn}_{n=1}^∞, with p0 ∈ [a, b], defined by

pn = g(pn−1) ∀ n ≥ 1.

This method is also called the functional iteration. (泛函迭代)
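A minimal Python sketch of the functional iteration (ours; the name `fixed_point` is not from the notes), tried on g(x) = cos x, the equation that reappears in Example 1, p. 69:

```python
import math

def fixed_point(g, p0, tol=1e-10, max_iter=100):
    """Algorithm 2.2: iterate p = g(p0) until |p - p0| < TOL."""
    for _ in range(max_iter):
        p = g(p0)
        if abs(p - p0) < tol:
            return p
        p0 = p
    raise RuntimeError("Method failed after max_iter iterations")

# Quick trial with g(x) = cos x on [0, pi/2]
p = fixed_point(math.cos, math.pi / 4)
```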


### Illustrative Diagrams

Starting with p0 ∈ [a, b], we obtain a sequence of points

(p0, p1) → (p1, p1) → (p1, p2) → (p2, p2) → (p2, p3) → · · · → (p, p), where p = g(p).


### Pseudocode of Functional Iteration

To find a sol. p to x = g(x) given an initial approx. p0.

## Algorithm 2.2: Fixed-Point Iteration

INPUT initial approx. p0; tolerance TOL; max. no. of iter. N0.
OUTPUT approx. sol. p to x = g(x).

Step 1 Set i = 1.

Step 2 While i ≤ N0 do Steps 3–6

Step 3 Set p = g(p0).

Step 4 If|p − p0| < TOL then OUTPUT(p); STOP.

Step 5 Set i = i + 1.

Step 6 Set p0= p. (Update p0)

Step 7 OUTPUT(‘Method failed after N0 iterations’); STOP.


## 5 Possible Fixed-Point Forms

The root-finding problem

f(x) = x^3 + 4x^2 − 10 = 0

can be transformed to the following 5 fixed-point forms:

(a) x = g1(x) = x − x^3 − 4x^2 + 10

(b) x = g2(x) = (10/x − 4x)^{1/2}

(c) x = g3(x) = (1/2)(10 − x^3)^{1/2}

(d) x = g4(x) = (10/(4 + x))^{1/2}

(e) x = g5(x) = x − (x^3 + 4x^2 − 10)/(3x^2 + 8x)


## Some Questions

Under what conditions does the fixed-point iteration (FPI)

pn = g(pn−1), n = 1, 2, . . .

converge for any p0 ∈ [a, b]?

What is the error bound for the FPI?

In addition, what is the rate of convergence?


## Thm 2.4 (Fixed-Point Thm)

Suppose that g ∈ C[a, b] and g([a, b]) ⊆ [a, b]. If g′(x) exists on (a, b) and ∃ k ∈ (0, 1) s.t.

|g′(x)| ≤ k ∀ x ∈ (a, b),

then for any p0 ∈ [a, b], the sequence {pn}_{n=1}^∞ defined by

pn = g(pn−1) ∀ n ≥ 1

converges to the unique fixed point p ∈ [a, b] of g.


### Proof of Thm 2.4

Thm 2.3 ensures that ∃! p ∈ [a, b] s.t. g(p) = p.

For each n ≥ 1, it follows from MVT that ∃ ξn between pn−1 and p s.t.

|pn − p| = |g(pn−1) − g(p)| = |g′(ξn)| |pn−1 − p| ≤ k |pn−1 − p|.

By induction ⟹ |pn − p| ≤ k^n |p0 − p| for n ≥ 0.

Since 0 < k < 1, we see that

lim_{n→∞} |pn − p| = 0 ⟺ lim_{n→∞} pn = p

by the Sandwich Thm.


## Cor 2.5 (固定點迭代的誤差上界)

If g satisfies the hypotheses of Thm 2.4, then we have

(1) |pn − p| ≤ k^n max{p0 − a, b − p0} ∀ n ≥ 0,

(2) |pn − p| ≤ (k^n / (1 − k)) |p1 − p0| ∀ n ≥ 1.

## pf: Inequality (1) follows immediately from the proof of Thm 2.4.

For m > n ≥ 1, by MVT inductively, we obtain

|pm − pn| ≤ |pm − pm−1| + |pm−1 − pm−2| + · · · + |pn+1 − pn|

≤ k^{m−1} |p1 − p0| + k^{m−2} |p1 − p0| + · · · + k^n |p1 − p0|

= k^n (1 + k + k^2 + · · · + k^{m−n−1}) · |p1 − p0|.

Hence, by taking m → ∞, we have

|p − pn| = lim_{m→∞} |pm − pn| ≤ k^n (Σ_{i=0}^∞ k^i) |p1 − p0| = (k^n / (1 − k)) |p1 − p0|.
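The a priori bound (2) of Cor 2.5 can be checked numerically. A Python sketch (ours), using g(x) = cos x on [0, 1] with k = sin 1 as a derivative bound (our choice of test problem):

```python
import math

# Cor 2.5(2): |p_n - p| <= k^n / (1 - k) * |p1 - p0|,
# checked for g(x) = cos x on [0, 1], where |g'(x)| = sin x <= sin 1 < 1.
g = math.cos
k = math.sin(1.0)
p0 = 0.5
p1 = g(p0)
p_true = 0.7390851332151607

p = p0
for n in range(1, 30):
    p = g(p)                            # p now equals p_n
    bound = k**n / (1 - k) * abs(p1 - p0)
    assert abs(p - p_true) <= bound     # the bound is never violated
```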


## Remarks

The rate of convergence for the fixed-point iteration depends on k^n or k^n / (1 − k).

The smaller the value of k, the faster the convergence.

The convergence would be very slow if k≈ 1.



### Illustration

For the five fixed-point forms (a)–(e) above:

(a) g1([1, 2]) ⊄ [1, 2] and |g1′(x)| > 1 for x ∈ [1, 2].

(b) g2([1, 2]) ⊄ [1, 2] and |g2′(x)| ≤ k < 1 fails on any interval containing p ≈ 1.36523, since |g2′(p)| ≈ 3.4.

(c) For p0 = 1.5, 1 < 1.28 ≈ g3(1.5) ≤ g3(x) ≤ g3(1) = 1.5, and hence g3([1, 1.5]) ⊆ [1, 1.5]. We also note that g3 satisfies |g3′(x)| ≤ |g3′(1.5)| ≈ 0.66 for x ∈ [1, 1.5].

(d) g4([1, 2]) ⊆ [1, 2] and the derivative g4′ satisfies

|g4′(x)| = 5 / (√10 (4 + x)^{3/2}) ≤ 5 / (√10 · 5^{3/2}) ≈ 0.1414.

(e) This is Newton's method, satisfying g5′(p) = 0 theoretically!
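The effect of the derivative bound on speed can be observed numerically. A Python sketch (ours) comparing forms (c) and (d) from the same p0 = 1.5:

```python
import math

# Count iterations to reach p ~ 1.365230013; g4's smaller derivative
# bound (~0.14 vs ~0.66) should translate into fewer iterations.
def iterations_needed(g, p0, p_true, tol=1e-9, max_iter=200):
    for n in range(1, max_iter + 1):
        p0 = g(p0)
        if abs(p0 - p_true) < tol:
            return n
    return None

p_true = 1.365230013
g3 = lambda x: 0.5 * math.sqrt(10 - x**3)
g4 = lambda x: math.sqrt(10 / (4 + x))
n3 = iterations_needed(g3, 1.5, p_true)
n4 = iterations_needed(g4, 1.5, p_true)
```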


## Newton's Method and Its Extensions (牛頓法及其推廣)


### Derivation of Newton’s Method

Suppose that f(p) = 0, f′(p) ≠ 0 and f ∈ C^2[a, b].

Given an initial approximation p0 ∈ [a, b] with f′(p0) ≠ 0 s.t. |p − p0| is sufficiently small.

By Taylor's Thm, ∃ ξ(p) between p and p0 s.t.

0 = f(p) = f(p0) + f′(p0)(p − p0) + (f″(ξ(p)) / 2)(p − p0)^2.

Since |p − p0| is sufficiently small, it follows that

0 ≈ f(p0) + f′(p0)(p − p0) ⟺ p ≈ p0 − f(p0) / f′(p0).

This suggests the procedure of Newton's method:

pn = pn−1 − f(pn−1) / f′(pn−1) ∀ n ≥ 1.


## Observations

Let g be a real-valued function defined by

g(x) = x − f(x) / f′(x), x ∈ [a, b].

Newton's method can be viewed as a fixed-point iteration

pn = g(pn−1) ∀ n ≥ 1,

where |p0 − p| is sufficiently small.

If f(p) = 0, then g(p) = p, i.e., p is a fixed point of g.

g ∈ C[a, b] and its first derivative is given by

g′(x) = f(x) f″(x) / [f′(x)]^2, x ∈ [a, b].

If f(p) = 0, then g′(p) = 0 follows immediately.


## Further Questions

Under what conditions does Newton’s method converge to p?

What is the error bound for Newton's method?

How to choose a good initial guess p0?

What is the rate of convergence for Newton’s method?


### Pseudocode of Newton’s Method

To find a sol. to f(x) = 0 given an initial approx. p0.

## Algorithm 2.3: Newton’s Method

INPUT initial approx. p0; tolerance TOL; max. no. of iter. N0.
OUTPUT approx. sol. p to f(x) = 0.

Step 1 Set i = 1.

Step 2 While i ≤ N0 do Steps 3–6

Step 3 Set p = p0 − f(p0)/f′(p0).

Step 4 If|p − p0| < TOL then OUTPUT(p); STOP.

Step 5 Set i = i + 1.

Step 6 Set p0= p. (Update p0)

Step 7 OUTPUT(‘Method failed after N0 iterations’); STOP.
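Algorithm 2.3 in Python (our sketch; `newton` and `fprime` are our names), applied to f(x) = x^3 + 4x^2 − 10 from Example 1, p. 50:

```python
def newton(f, fprime, p0, tol=1e-12, max_iter=50):
    """Algorithm 2.3: Newton's method (assumes f'(p_n) stays nonzero)."""
    for _ in range(max_iter):
        p = p0 - f(p0) / fprime(p0)
        if abs(p - p0) < tol:
            return p
        p0 = p
    raise RuntimeError("Method failed after max_iter iterations")

# f(x) = x^3 + 4x^2 - 10 with f'(x) = 3x^2 + 8x
root = newton(lambda x: x**3 + 4*x**2 - 10,
              lambda x: 3*x**2 + 8*x,
              p0=1.5)
```

From p0 = 1.5, a handful of iterations suffice, in contrast with the 13 bisection steps of Example 1.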


## Example 1, p. 69

Use (a) fixed-point iteration and (b) Newton’s method to find an approximate root p of the nonlinear equation

f(x) = cos x− x = 0

with initial guess p0 = π/4. The root is p ≈ 0.739085133215161.


### Solution (1/3)

(a) Consider the fixed-point form x = g(x), where

g(x) = cos(x) ∀ x ∈ [0, π/2].

Then it is easily seen that

1. g ∈ C[0, π/2],

2. g([0, π/2]) ⊆ [0, 1] ⊆ [0, π/2],

3. |g′(x)| = |−sin(x)| < 1 ∀ x ∈ (0, π/2).

From Thm 2.4 ⟹ the fixed-point iteration

pn = g(pn−1) = cos(pn−1) ∀ n ≥ 1

must converge to the unique fixed point p ∈ (0, π/2) of g for any initial p0 ∈ [0, π/2]!


### Solution (2/3)

Applying the FPI with an initial guess p0 = π/4, we obtain the following numerical results.

The root is p ≈ 0.739085133215161.

Note that the computed approximation is accurate to only 2 significant digits!


### Solution (3/3)

(b) For the same initial approx. p0 = π/4, applying Newton's method

pn = pn−1 − (cos(pn−1) − pn−1) / (−sin(pn−1) − 1) ∀ n ≥ 1,

we obtain the following numerical results.

The actual root is p≈ 0.739085133215161.
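The comparison in Example 1 can be reproduced by counting iterations. A Python sketch (ours):

```python
import math

# Iterations needed by FPI (p_n = cos p_{n-1}) vs Newton's method
# on cos x - x = 0, both from p0 = pi/4.
p_true = 0.739085133215161

def count(step, p0, tol=1e-10, max_iter=200):
    for n in range(1, max_iter + 1):
        p0 = step(p0)
        if abs(p0 - p_true) < tol:
            return n
    return None

fpi_steps = count(math.cos, math.pi / 4)
newton_steps = count(lambda p: p - (math.cos(p) - p) / (-math.sin(p) - 1),
                     math.pi / 4)
```

Newton reaches full accuracy in a few steps, while the linearly convergent FPI needs dozens.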


## Thm 2.6 (牛頓法的收斂定理)

Let f ∈ C^2[a, b] and p ∈ (a, b). If f(p) = 0 and f′(p) ≠ 0, then ∃ δ > 0 s.t. Newton's method generates a sequence {pn}_{n=1}^∞ defined by

pn = pn−1 − f(pn−1) / f′(pn−1) ∀ n ≥ 1

converging to p for any p0 ∈ [p − δ, p + δ].

## Note: The local convergence of Newton’s method is guaranteed

in Thm 2.6, but the order of convergence is NOT discussed here!


## Sketch of Proof (1/2)

Since f′(p) ≠ 0, ∃ δ1 > 0 s.t.

f′(x) ≠ 0 ∀ x ∈ (p − δ1, p + δ1),

and hence

g(x) = x − f(x) / f′(x)

is well-defined on (p − δ1, p + δ1).

Moreover, since its derivative is given by

g′(x) = f(x) f″(x) / [f′(x)]^2 ∀ x ∈ (p − δ1, p + δ1),

it follows that g ∈ C^1(p − δ1, p + δ1) because f ∈ C^2[a, b].

Note that f(p) = 0 ⟹ g(p) = p and g′(p) = 0.


## Sketch of Proof (2/2)

Because g′ is conti. at p, for any k ∈ (0, 1), ∃ 0 < δ < δ1 s.t.

|g′(x)| < k ∀ x ∈ [p − δ, p + δ].

For x ∈ [p − δ, p + δ], from MVT ⇒ ∃ ξ between x and p s.t.

|g(x) − p| = |g(x) − g(p)| = |g′(ξ)| |x − p| < |x − p| ≤ δ.

Hence, g([p − δ, p + δ]) ⊆ [p − δ, p + δ].

From Thm 2.4 =⇒ the seq. generated by Newton’s method pn= g(pn−1) ∀ n ≥ 1

converges to p for any p0∈ [p − δ, p + δ].


## Questions

How to choose a good initial approximation p0? How to estimate the δ > 0 derived in Thm 2.6?

What is the order of convergence for Newton's method?

How to modify Newton's method if f′(x) is difficult to evaluate in practice? Use the Secant method!


### Derivation of Secant Method (割線法)

In many applications, it is often difficult to evaluate the derivative of f.

Since f′(pn−1) = lim_{x→pn−1} (f(x) − f(pn−1)) / (x − pn−1), we have the approximation

f′(pn−1) ≈ (f(pn−2) − f(pn−1)) / (pn−2 − pn−1) = (f(pn−1) − f(pn−2)) / (pn−1 − pn−2)

for any n ≥ 2.

With the above approx. for the derivative, Newton's method is rewritten as

pn = pn−1 − f(pn−1)(pn−1 − pn−2) / (f(pn−1) − f(pn−2)) ∀ n ≥ 2.

This is called the Secant method with initial approximations p0 and p1.


## Key Steps of Secant Method

Given two initial approximations p0 and p1 with q0 ← f(p0) and q1 ← f(p1), the following steps are performed repeatedly in the Secant method:

1. Compute the new approximation

p ← p1 − q1(p1 − p0) / (q1 − q0);

2. Update p0 ← p1 and q0 ← q1; p1 ← p and q1 ← f(p).


### Pseudocode of Secant Method

To find a sol. to f(x) = 0 given initial approx. p0 and p1.

## Algorithm 2.4: Secant Method

INPUT initial approx. p0, p1; tolerance TOL; max. no. of iter. N0.
OUTPUT approx. sol. p to f(x) = 0.

Step 1 Set i = 2; q0 = f(p0); q1 = f(p1).

Step 2 While i ≤ N0 do Steps 3–6

Step 3 Set p = p1 − q1(p1 − p0)/(q1 − q0).

Step 4 If |p − p1| < TOL then OUTPUT(p); STOP.

Step 5 Set i = i + 1.

Step 6 Set p0 = p1; q0 = q1; p1 = p; q1 = f(p).

Step 7 OUTPUT(‘Method failed after N0 iterations’); STOP.
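Algorithm 2.4 in Python (our sketch), applied to cos x − x = 0 as in Example 2, p. 72:

```python
import math

def secant(f, p0, p1, tol=1e-10, max_iter=100):
    """Algorithm 2.4: Secant method with initial approximations p0, p1."""
    q0, q1 = f(p0), f(p1)
    for _ in range(max_iter):
        p = p1 - q1 * (p1 - p0) / (q1 - q0)
        if abs(p - p1) < tol:
            return p
        p0, q0, p1, q1 = p1, q1, p, f(p)   # shift the two stored points
    raise RuntimeError("Method failed after max_iter iterations")

root = secant(lambda x: math.cos(x) - x, 0.5, math.pi / 4)
```

Only two function values per iterate are stored, and no derivative is needed.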


## Example 2, p. 72

Use the Secant method to find a sol. to f(x) = cos x− x = 0

with initial approx. p0 = 0.5 and p1 = π/4. Compare the results with those of Newton's method obtained in Example 1.

## Sol: Applying the Secant method

pn = pn−1 − (cos pn−1 − pn−1)(pn−1 − pn−2) / ((cos pn−1 − pn−1) − (cos pn−2 − pn−2)) ∀ n ≥ 2,

we see that its approximation p5 is accurate to 10 significant digits, whereas Newton's method produced the same accuracy after 3 iterations.


## Note: The Secant method converges faster than the Bisection method, but slower than Newton's method.


### Method of False Position (錯位法)

The method of False Position is also called the Regula Falsi method. The root is always bracketed between successive approximations.

Firstly, find p2 using the Secant method. How to determine the next approx. p3?

If f(p2) · f(p1) < 0 (or sign(f(p2)) · sign(f(p1)) < 0), then p3 is the x-intercept of the line joining (p1, f(p1)) and (p2, f(p2)).

If not, p3 is the x-intercept of the line joining (p0, f(p0)) and (p2, f(p2)), and then interchange the indices on p0 and p1.

Continue the above procedure until convergence.


## Key Steps of False Position Method

Given two initial approximations p0 and p1 with q0 ← f(p0) and q1 ← f(p1), the following steps are performed repeatedly in the False Position method:

1. Compute the new approximation

p ← p1 − q1(p1 − p0) / (q1 − q0);

2. Compute q ← f(p);

3. If q · q1 < 0, update p0 ← p1 and q0 ← q1;

4. Update p1 ← p and q1 ← q.


### Pseudocode for Method of False Position

To find a sol. to f(x) = 0 given initial approx. p0 and p1.

## Algorithm 2.5: Method of False Position

INPUT initial approx. p0, p1; tolerance TOL; max. no. of iter. N0.
OUTPUT approx. sol. p to f(x) = 0.

Step 1 Set i = 2; q0 = f(p0); q1 = f(p1).

Step 2 While i ≤ N0 do Steps 3–7

Step 3 Set p = p1 − q1(p1 − p0)/(q1 − q0).

Step 4 If |p − p1| < TOL then OUTPUT(p); STOP.

Step 5 Set i = i + 1; q = f(p).

Step 6 If q · q1 < 0 then set p0 = p1; q0 = q1.

Step 7 Set p1 = p; q1 = q.

Step 8 OUTPUT(‘Method failed after N0 iterations’); STOP.
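Algorithm 2.5 in Python (our sketch), on the same test equation cos x − x = 0:

```python
import math

def false_position(f, p0, p1, tol=1e-10, max_iter=200):
    """Algorithm 2.5: like the Secant method, but the root stays bracketed."""
    q0, q1 = f(p0), f(p1)
    for _ in range(max_iter):
        p = p1 - q1 * (p1 - p0) / (q1 - q0)
        if abs(p - p1) < tol:
            return p
        q = f(p)
        if q * q1 < 0:             # sign change: shift the bracket endpoint
            p0, q0 = p1, q1
        p1, q1 = p, q
    raise RuntimeError("Method failed after max_iter iterations")

root = false_position(lambda x: math.cos(x) - x, 0.5, math.pi / 4)
```

The only change from the Secant sketch is the sign test in Step 6, which keeps the root bracketed.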


## Example 3, p. 74

Use the method of False Position to find a sol. to f(x) = cos x− x = 0

with p0 = 0.5 and p1 = π/4. Compare the results with those obtained by Newton’s method and Secant method.


## Def 2.7 (收斂階數的定義)

A sequence {pn}_{n=0}^∞ converges to p of order α, with asymptotic error constant λ, if ∃ α ≥ 1 and λ ≥ 0 with

lim_{n→∞} |pn+1 − p| / |pn − p|^α = λ.

(i) α = 1 and 0 < λ < 1 ⟹ {pn}_{n=0}^∞ is linearly convergent.

(ii) α = 1 and λ = 0 ⟹ {pn}_{n=0}^∞ is superlinearly convergent.

(iii) α = 2 ⟹ {pn}_{n=0}^∞ is quadratically convergent.

## Note: Higher-order convergence is always desirable in practical computation!
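The order α can be estimated numerically from three consecutive errors, since |e_{n+1}| ≈ λ|e_n|^α gives α ≈ log(e_2/e_1) / log(e_1/e_0). A Python sketch (ours), applied to Newton's method on cos x − x = 0:

```python
import math

# Three consecutive errors of Newton's method on cos x - x = 0
p_true = 0.739085133215161
p = math.pi / 4
errs = [abs(p - p_true)]
for _ in range(2):
    p = p - (math.cos(p) - p) / (-math.sin(p) - 1)   # one Newton step
    errs.append(abs(p - p_true))

# alpha ~ log(e2/e1) / log(e1/e0); should come out close to 2 (quadratic)
alpha = math.log(errs[2] / errs[1]) / math.log(errs[1] / errs[0])
```

Only the first few errors are usable: once the iterates hit machine precision, the ratios become meaningless.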


## Thm 2.8 (固定點迭代的線性收斂性)

Suppose that g ∈ C[a, b] and g([a, b]) ⊆ [a, b]. If g′ ∈ C(a, b), ∃ k ∈ (0, 1) s.t. |g′(x)| ≤ k ∀ x ∈ (a, b), and g′(p) ≠ 0, then for any p0 ∈ [a, b], the sequence

pn = g(pn−1) ∀ n ≥ 1

converges only linearly to the unique fixed point p ∈ [a, b].


## Proof of Thm 2.8

Thm 2.4 (Fixed-Point Thm) ensures that the sequence {pn} converges to the unique fixed point p ∈ [a, b].

For each n ≥ 1, by MVT ⟹ ∃ ξn between pn and p s.t.

|pn+1 − p| = |g(pn) − g(p)| = |g′(ξn)| |pn − p|.

Since lim_{n→∞} pn = p, ξn → p as n → ∞. Thus,

lim_{n→∞} |pn+1 − p| / |pn − p| = lim_{n→∞} |g′(ξn)| = |g′(p)| > 0

because g′ ∈ C(a, b), i.e., the sequence {pn} converges to p only linearly!


## Thm 2.9 (固定點迭代的二次收斂性)

If g(p) = p, g′(p) = 0, and ∃ an open interval I containing p where g″ ∈ C(I) and |g″(x)| < M ∀ x ∈ I, then ∃ δ > 0 s.t. the sequence defined by

pn = g(pn−1) ∀ n ≥ 1

converges at least quadratically to p for any p0 ∈ [p − δ, p + δ]. Moreover, we have

|pn+1 − p| < (M / 2) |pn − p|^2

for sufficiently large values of n.
