
Chapter 10

Numerical Solutions of Nonlinear Systems of Equations

Hung-Yuan Fan (范洪源)

Department of Mathematics, National Taiwan Normal University, Taiwan

Spring 2016

Section 10.1
Fixed Points for Functions of Several Variables


Objective

To solve a system of nonlinear equations of the form

    f1(x1, x2, …, xn) = 0,
    f2(x1, x2, …, xn) = 0,
    ⋮
    fn(x1, x2, …, xn) = 0,    (1)

where each fi : R^n → R is a (nonlinear) function for i = 1, 2, …, n.

The unknown vector x = [x1, x2, …, xn]^T ∈ R^n is called a solution to the nonlinear system (1).

Vector-Valued Functions

Reformulation of the Nonlinear System

Consider a vector-valued function F : R^n → R^n defined by
    F(x) = [f1(x), f2(x), …, fn(x)]^T ∈ R^n for all x ∈ R^n.
The system of nonlinear equations (1) can then be represented as
    F(x) = 0,  x = [x1, x2, …, xn]^T ∈ R^n.    (2)
The functions f1, f2, …, fn are called the coordinate functions of F.


Two Vector Norms

Definition (Common Vector Norms)
Let v = [v1, v2, …, vn]^T ∈ R^n.

The l2-norm (or Euclidean norm) of v is defined by
    ∥v∥₂ = √(v^T v) = √( ∑_{i=1}^n vi² ).

The l∞-norm of v is defined by
    ∥v∥∞ = max_{1≤i≤n} |vi|.

MATLAB commands: norm(v, 2) or norm(v) computes ∥v∥₂, and norm(v, inf) computes ∥v∥∞.
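For instance, both norms can be checked directly in MATLAB (a quick sketch with an arbitrary test vector, not from the slides):

    v = [3; -4; 0];          % a test vector
    n2   = norm(v);          % l2-norm: sqrt(3^2 + 4^2 + 0^2) = 5
    ninf = norm(v, inf);     % l_inf-norm: max(|vi|) = 4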

Example 1, p. 631

Place the following nonlinear system

    3x1 − cos(x2x3) − 1/2 = 0,
    x1² − 81(x2 + 0.1)² + sin x3 + 1.06 = 0,    (3)
    e^(−x1x2) + 20x3 + (10π − 3)/3 = 0

in the form (2).


Solution

Rewrite the nonlinear system (3) as
    F(x1, x2, x3) ≡ [f1(x1, x2, x3), f2(x1, x2, x3), f3(x1, x2, x3)]^T = [0, 0, 0]^T = 0 ∈ R³,
where the coordinate functions are defined by
    f1(x1, x2, x3) = 3x1 − cos(x2x3) − 1/2,
    f2(x1, x2, x3) = x1² − 81(x2 + 0.1)² + sin x3 + 1.06,
    f3(x1, x2, x3) = e^(−x1x2) + 20x3 + (10π − 3)/3.

Fixed-Point Forms

As in Chap. 2, we shall transform the root-finding problem (2) into a fixed-point problem
    x = G(x),  x ∈ D,
where G : D ⊆ R^n → R^n is some vector-valued function with domain
    D = {[x1, x2, …, xn]^T | ai ≤ xi ≤ bi, i = 1, 2, …, n}    (4)
for some constants a1, a2, …, an and b1, b2, …, bn.

Fixed-Point Iteration (FPI) with an initial vector x^(0) ∈ D:
    x^(k) = G(x^(k−1)),  k = 1, 2, …,
provided that x^(k) ∈ D for all k ≥ 1.
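The iteration itself is independent of the particular G. Below is a minimal MATLAB sketch; the function name fpi, its interface, and the l∞ stopping test are illustrative assumptions, not from the text:

    function [x, k] = fpi(G, x0, TOL, N0)
      % Fixed-point iteration x^(k) = G(x^(k-1)) for a handle G: R^n -> R^n.
      x = x0;
      for k = 1:N0
          xnew = G(x);
          if norm(xnew - x, inf) < TOL   % stop when successive iterates agree
              x = xnew;
              return
          end
          x = xnew;
      end
      error('Maximum number of iterations exceeded');
    end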


Fixed Points in R^n

Def 10.5
The vector-valued function G : D ⊆ R^n → R^n has a fixed point at p ∈ D if G(p) = p.

Questions
Under what conditions does the sequence of vectors {x^(k)}_{k≥0} generated by FPI converge to the unique fixed point p ∈ D?
What is the error bound for the absolute error ∥x^(k) − p∥?
What is the rate of convergence of FPI?

We may write
    G(x) = [g1(x), g2(x), …, gn(x)]^T,
where each gi is the ith component function of G for i = 1, 2, …, n.

Convergence Theorem for FPI

Thm 10.6
Let G be continuous on D with G(D) ⊆ D, where the domain D is defined as in (4). Then
(1) G has at least one fixed point in D.
(2) If, in addition, there exists a constant 0 < K < 1 such that each component function gi has continuous partial derivatives with
    |∂gi(x)/∂xj| ≤ K/n whenever x ∈ D,
for i, j = 1, 2, …, n, then the sequence {x^(k)} generated by FPI with any x^(0) ∈ D converges to the unique fixed point p ∈ D, and
    ∥x^(k) − p∥∞ ≤ (K^k / (1 − K)) ∥x^(1) − x^(0)∥∞ for all k ≥ 1.


How to Check the Continuity of G?

Thm 10.4 (Continuity of Component Functions)
Let g : D ⊆ R^n → R be a function and x0 ∈ D. If there exist δ > 0 and M > 0 such that the partial derivatives of g exist on Nδ(x0) ∩ D with
    |∂g(x)/∂xj| ≤ M for all x ∈ Nδ(x0) ∩ D,
for j = 1, 2, …, n, then g is continuous at x0.

Continuity of G
G is continuous at x0 ∈ D ⟺ each component function gi is continuous at x0 for i = 1, 2, …, n.
G is continuous on D ⟺ each gi is continuous on D for i = 1, 2, …, n.

Example 2, p. 633

(a) Place the nonlinear system in Example 1,

    3x1 − cos(x2x3) − 1/2 = 0,
    x1² − 81(x2 + 0.1)² + sin x3 + 1.06 = 0,
    e^(−x1x2) + 20x3 + (10π − 3)/3 = 0,

in a fixed-point form x = G(x), x ∈ D, and show that there is a unique solution on
    D = {[x1, x2, x3]^T | −1 ≤ xi ≤ 1, i = 1, 2, 3}.

(b) Perform the FPI with x^(0) = [0.1, 0.1, −0.1]^T and the stopping criterion ∥x^(k) − x^(k−1)∥∞ < 10^−5.


Solution of (a)

Solving the ith equation of (3) for xi (i = 1, 2, 3) gives

    x1 = (1/3) cos(x2x3) + 1/6 ≡ g1(x1, x2, x3),
    x2 = (1/9) √(x1² + sin x3 + 1.06) − 0.1 ≡ g2(x1, x2, x3),    (5)
    x3 = −(1/20) e^(−x1x2) − (10π − 3)/60 ≡ g3(x1, x2, x3).

So, define a vector-valued function G : D → R³ by
    G(x1, x2, x3) = [g1(x1, x2, x3), g2(x1, x2, x3), g3(x1, x2, x3)]^T ∈ R³
for any x = [x1, x2, x3]^T ∈ D, and consider the fixed-point form
    x = G(x),  x ∈ D,
obtained from the original nonlinear system (3).

Solution of (a), Continued

First, we claim that G(D) ⊆ D. It is easily seen from (5) that for any x ∈ D, we have

    |g1(x)| ≤ (1/3)|cos(x2x3)| + 1/6 ≤ 0.50,
    |g2(x)| = |(1/9) √(x1² + sin x3 + 1.06) − 0.1| ≤ (1/9) √(1² + sin(1) + 1.06) − 0.1 < 0.09,
    |g3(x)| = (1/20) e^(−x1x2) + (10π − 3)/60 ≤ e/20 + (10π − 3)/60 < 0.61.

Hence, we know that G(D) ⊆ D.


Solution of (a), Continued

Next, simple manipulation from Calculus gives

    ∂g1/∂x2 = −(x3/3) sin(x2x3),    ∂g1/∂x3 = −(x2/3) sin(x2x3),    (6)
    ∂g2/∂x1 = x1 / (9 √(x1² + sin x3 + 1.06)),    ∂g2/∂x3 = cos x3 / (18 √(x1² + sin x3 + 1.06)),    (7)
    ∂g3/∂x1 = (x2/20) e^(−x1x2),    ∂g3/∂x2 = (x1/20) e^(−x1x2),    ∂g1/∂x1 = ∂g2/∂x2 = ∂g3/∂x3 = 0.    (8)

⟹ All first partial derivatives of g1, g2, g3 are continuous on D!

Solution of (a), Continued

Now, from (6),
    |∂g1/∂x2| = (|x3|/3)·|sin(x2x3)| ≤ (sin 1)/3 < 0.281,    |∂g1/∂x3| < 0.281.

From (7), we see that
    |∂g2/∂x1| ≤ 1 / (9 √(sin(−1) + 1.06)) < 0.238,
    |∂g2/∂x3| ≤ 1 / (18 √(sin(−1) + 1.06)) < 0.119,

and furthermore, from (8), we also have
    |∂g3/∂x1| ≤ e/20 < 0.14,    |∂g3/∂x2| ≤ e/20 < 0.14.


Solution of (a), Continued

Thus, the partial derivatives of g1, g2, g3 are bounded on D. It follows from Thm 10.4 that G is continuous on D, and
    |∂gi/∂xj| ≤ 0.281 = K/n = K/3 for all x ∈ D
and i, j = 1, 2, 3. So the sufficient conditions of Thm 10.6 are satisfied with the constant K = (0.281)(3) = 0.843 < 1.

Conclusions
G has a unique fixed point p ∈ D by Thm 10.6.
This fixed point p is one of the solutions to the original nonlinear system (3).

Solution of (b): Numerical Results

Finally, perform the FPI
    x^(k) = G(x^(k−1)),  k = 1, 2, …,
with x^(0) = [0.1, 0.1, −0.1]^T ∈ D and stopping criterion ∥x^(k) − x^(k−1)∥∞ < 10^−5.

Actual solution: p = [0.5, 0, −π/6]^T ≈ [0.5, 0, −0.5235987757]^T.


A Test for the Error Bound

With the computed solution x^(5) and the actual fixed point p ∈ D,
    ∥x^(5) − p∥∞ ≤ 2 × 10^−8.

With K = 0.843 and ∥x^(1) − x^(0)∥∞ = 0.423, the theoretical error bound would become
    ∥x^(5) − p∥∞ ≤ ((0.843)^5 / (1 − 0.843)) (0.423) < 1.15.

The error bound in Thm 10.6 might be much larger than the actual absolute error!

Accelerating Convergence

Basic Ideas
Use the latest estimates x1^(k), x2^(k), …, x_{i−1}^(k) generated by the FPI, instead of x1^(k−1), x2^(k−1), …, x_{i−1}^(k−1), to compute the ith component xi^(k).
This is the same idea as in the Gauss-Seidel method for solving linear systems (see Chapter 7).


Revisit Example 2

Reformulation as a Gauss-Seidel-Type Iteration
Consider the following Gauss-Seidel form for Example 2:

    x1^(k) = (1/3) cos( x2^(k−1) x3^(k−1) ) + 1/6,
    x2^(k) = (1/9) √( (x1^(k))² + sin x3^(k−1) + 1.06 ) − 0.1,    (9)
    x3^(k) = −(1/20) e^(−x1^(k) x2^(k)) − (10π − 3)/60,    k = 1, 2, …,

with x^(0) = [0.1, 0.1, −0.1]^T ∈ R³ and the same stopping criterion ∥x^(k) − x^(k−1)∥∞ < 10^−5.

Applying the iteration (9) with the given initial vector x^(0) produces the numerical results shown in the following table. Note: in general, this method does not always accelerate the convergence of the FPI!
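A minimal sketch of this Seidel-type sweep in MATLAB (variable names and the loop bound are illustrative):

    x = [0.1; 0.1; -0.1];
    for k = 1:100
        xold = x;
        x(1) = cos(x(2)*x(3))/3 + 1/6;                    % uses x2^(k-1), x3^(k-1)
        x(2) = sqrt(x(1)^2 + sin(x(3)) + 1.06)/9 - 0.1;   % already uses the new x1
        x(3) = -exp(-x(1)*x(2))/20 - (10*pi - 3)/60;      % uses the new x1, x2
        if norm(x - xold, inf) < 1e-5, break, end
    end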


Section 10.2

Newton’s Method


In the One-Dimensional Case

Review of Newton's Method
Newton's method for solving a nonlinear equation of one variable,
    f(x) = 0,  x ∈ R,
can be regarded as a fixed-point iteration with
    g(x) = x − (1/f′(x))·f(x) ≡ x − ϕ(x)·f(x).
Quadratic convergence of Newton's method is expected whenever the initial guess is sufficiently close to a zero of f (at which f′ is nonzero).


In the Multidimensional Case

Objectives
For solving a nonlinear system
    F(x) = [f1(x), f2(x), …, fn(x)]^T = 0 ∈ R^n,  x ∈ R^n,
try to develop an FPI with the vector-valued function
    G(x) = x − A(x)^(−1) F(x) ≡ [g1(x), g2(x), …, gn(x)]^T ∈ R^n,  x ∈ R^n,    (10)
assuming that A(x) = [aij(x)] ∈ R^(n×n) is nonsingular at the fixed point p of G.
Hopefully, quadratic convergence can be achieved under reasonable conditions.

Thm 10.7 (Sufficient Conditions for Quadratic Convergence of FPI)
Let G(p) = p. Suppose that there exists δ > 0 such that
(i) ∂gi/∂xj is continuous on Nδ(p) for i, j = 1, 2, …, n;
(ii) ∂²gi/(∂xj ∂xk) is continuous on Nδ(p), and there exists M > 0 such that
    |∂²gi(x)/(∂xj ∂xk)| ≤ M for all x ∈ Nδ(p),
for i, j, k = 1, 2, …, n;
(iii) ∂gi(p)/∂xj = 0 for i, j = 1, 2, …, n.
Then there exists δ̂ ≤ δ such that the sequence {x^(k)}_{k≥0} generated by FPI converges quadratically to p for any x^(0) ∈ N_δ̂(p). Moreover,
    ∥x^(k) − p∥∞ ≤ (n²M/2) ∥x^(k−1) − p∥∞² for all k ≥ 1.


Derivation of the Matrix A(x)

Write A(x)^(−1) = [bij(x)] ∈ R^(n×n). From (10),
    gi(x) = xi − ∑_{k=1}^n bik(x) fk(x),  i = 1, 2, …, n.
For each i, j = 1, 2, …, n, the first partial derivatives of gi are

    ∂gi(x)/∂xj = 1 − ∑_{k=1}^n [ (∂bik(x)/∂xj) fk(x) + bik(x) (∂fk(x)/∂xj) ],  i = j,
    ∂gi(x)/∂xj = − ∑_{k=1}^n [ (∂bik(x)/∂xj) fk(x) + bik(x) (∂fk(x)/∂xj) ],  i ≠ j.    (11)

Derivation of the Matrix A(x), Continued

From condition (iii) of Thm 10.7 and (11), together with fk(p) = 0 for all k (since F(p) = 0), we immediately obtain

    0 = ∂gi(p)/∂xj = 1 − ∑_{k=1}^n bik(p) (∂fk(p)/∂xj),  i = j,
    0 = ∂gi(p)/∂xj = − ∑_{k=1}^n bik(p) (∂fk(p)/∂xj),  i ≠ j.    (12)

Define the Jacobian matrix J(x) = [∂fi(x)/∂xj] ∈ R^(n×n) by

    J(x) = [ ∂f1/∂x1(x)  ∂f1/∂x2(x)  …  ∂f1/∂xn(x)
             ∂f2/∂x1(x)  ∂f2/∂x2(x)  …  ∂f2/∂xn(x)
             ⋮            ⋮               ⋮
             ∂fn/∂x1(x)  ∂fn/∂x2(x)  …  ∂fn/∂xn(x) ],  x ∈ Nδ(p).

It follows from (12) that A(p)^(−1) J(p) = I, i.e., A(p) = J(p).

Newton's Method

So it is appropriate to choose A(x) = J(x) for x ∈ Nδ(p).
Basic form of Newton's method for nonlinear systems:
    x^(k) = G(x^(k−1)) = x^(k−1) − A(x^(k−1))^(−1) F(x^(k−1))
          = x^(k−1) − J(x^(k−1))^(−1) F(x^(k−1)),  k = 1, 2, …,    (13)
where x^(0) ∈ N_δ̂(p) and J(x) is nonsingular on N_δ̂(p) with 0 < δ̂ ≤ δ.
Quadratic convergence of Newton's method is guaranteed by Thm 10.7 if the initial guess is sufficiently close to p!

Some Comments on Newton's Method (13)

We DO NOT compute J(x^(k−1))^(−1) explicitly in practical computation.
In order to save operation counts, we first solve the linear system
    J(x^(k−1)) y = −F(x^(k−1))
for the correction vector y using Gaussian elimination with partial pivoting, and then compute the next iterate via
    x^(k) = x^(k−1) + y.
Floating-point operation count: approximately (2/3)n³ per iteration.


Pseudocode of Newton's Method

To approximate the solution of the nonlinear system F(x) = 0, x ∈ R^n.

Algorithm 10.1: Newton's Method for Systems
INPUT dimension n; initial x ∈ R^n; tolerance TOL; maximum number of iterations N0.
OUTPUT an approximate solution x to the nonlinear system.
Step 1 Set k = 1.
Step 2 While (k ≤ N0) do Steps 3–7:
Step 3   Compute F(x) and the Jacobian matrix J(x).
Step 4   Solve the n × n linear system J(x) y = −F(x).
Step 5   Set x = x + y.
Step 6   If ∥y∥ < TOL then OUTPUT(x); STOP.
Step 7   Set k = k + 1.
Step 8 OUTPUT('Maximum number of iterations exceeded'); STOP.
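A minimal MATLAB transcription of Algorithm 10.1 (a sketch; the function name newton_sys and the handle interface for F and J are assumptions, not from the text):

    function x = newton_sys(F, J, x, TOL, N0)
      % Newton's method for F(x) = 0: solve J(x)y = -F(x), then update x.
      for k = 1:N0
          y = -(J(x) \ F(x));      % backslash uses LU with partial pivoting
          x = x + y;
          if norm(y, inf) < TOL
              return
          end
      end
      error('Maximum number of iterations exceeded');
    end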

Example 1, p. 641 (See also Example 2 of Sec. 10.1)

Apply Newton's method to solve the nonlinear system

    3x1 − cos(x2x3) − 1/2 = 0,
    x1² − 81(x2 + 0.1)² + sin x3 + 1.06 = 0,
    e^(−x1x2) + 20x3 + (10π − 3)/3 = 0,

with x^(0) = [0.1, 0.1, −0.1]^T and ∥x^(k) − x^(k−1)∥∞ < 10^−5.


Numerical Results of Example 1

The Jacobian matrix J(x) is easily obtained from Calculus:

    J(x1, x2, x3) = [ 3                x3 sin(x2x3)     x2 sin(x2x3)
                      2x1              −162(x2 + 0.1)   cos x3
                      −x2 e^(−x1x2)    −x1 e^(−x1x2)    20 ].

Actual solution: p = [0.5, 0, −π/6]^T ≈ [0.5, 0, −0.5235987757]^T.
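As an illustration (a sketch reusing the hypothetical newton_sys helper above), F and J for this system can be coded as MATLAB handles:

    F = @(x) [ 3*x(1) - cos(x(2)*x(3)) - 1/2;
               x(1)^2 - 81*(x(2) + 0.1)^2 + sin(x(3)) + 1.06;
               exp(-x(1)*x(2)) + 20*x(3) + (10*pi - 3)/3 ];
    J = @(x) [ 3,                      x(3)*sin(x(2)*x(3)),  x(2)*sin(x(2)*x(3));
               2*x(1),                -162*(x(2) + 0.1),     cos(x(3));
              -x(2)*exp(-x(1)*x(2)), -x(1)*exp(-x(1)*x(2)),  20 ];
    x = newton_sys(F, J, [0.1; 0.1; -0.1], 1e-5, 50);   % converges to about [0.5; 0; -pi/6]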

Section 10.3
Quasi-Newton Methods


Newton's Method vs. Broyden's Method (1/2)

For Each Iterate of Newton's Method
At least n² scalar function evaluations for the Jacobian matrix J(x^(k)), and n scalar function evaluations for F(x^(k)).
Solving a linear system involving the Jacobian requires O(n³) operations.
Self-correcting: it will generally correct for roundoff error over the successive iterations.
Quadratic convergence occurs if a good initial guess is given.

Newton's Method vs. Broyden's Method (2/2)

For Each Iterate of Broyden's Method
Only n scalar function evaluations are required!
The operation count for solving the linear system is reduced to O(n²).
It is NOT self-correcting over the successive iterations.
Only superlinear convergence occurs, even with a good initial guess, i.e.,
    lim_{k→∞} ∥x^(k+1) − p∥ / ∥x^(k) − p∥ = 0,
where p ∈ R^n is a solution of the nonlinear system F(x) = 0.


About Broyden's Method

It belongs to a class of least-change secant update methods that produce algorithms called quasi-Newton methods.
The quasi-Newton methods replace the Jacobian matrix in Newton's method with an approximating matrix that is easily updated at each iteration.

References
[Broy] C. G. Broyden, A class of methods for solving nonlinear simultaneous equations, Math. Comp., 19(92), 577–593, 1965.
[DM] J. E. Dennis, Jr. and J. J. Moré, Quasi-Newton methods, motivation and theory, SIAM Rev., 19(1), 46–89, 1977.

Derivation of Broyden's Method (1/2)

For an initial approximation x^(0) ∈ R^n, compute the Jacobian matrix A0 = J(x^(0)) ∈ R^(n×n) and the first iterate
    x^(1) = x^(0) − A0^(−1) F(x^(0)),
as in Newton's method.
Letting
    s1 = x^(1) − x^(0) and y1 = F(x^(1)) − F(x^(0)),
we want to determine a matrix A1 ≈ J(x^(1)) ∈ R^(n×n) satisfying the quasi-Newton condition, or secant condition,
    A1 (x^(1) − x^(0)) = F(x^(1)) − F(x^(0)),  i.e.,  A1 s1 = y1.    (14)


Derivation of Broyden's Method (2/2)

To determine A1 uniquely, Broyden [Broy] imposed the additional condition
    A1 z = A0 z for all z ∈ R^n with s1^T z = 0    (15)
on top of the secant condition (14). It follows from (14) and (15) that [DM]
    A1 = A0 + ((y1 − A0 s1) / ∥s1∥₂²) s1^T,
and hence x^(2) = x^(1) − A1^(−1) F(x^(1)).
In general, for k ≥ 2, we have
    Ak = A_{k−1} + ((yk − A_{k−1} sk) / ∥sk∥₂²) sk^T,    (16)
    x^(k+1) = x^(k) − Ak^(−1) F(x^(k)),
where sk = x^(k) − x^(k−1) = −A_{k−1}^(−1) F(x^(k−1)) and yk = F(x^(k)) − F(x^(k−1)).

Remarks

From (16), we see that Ak is obtained from the previous A_{k−1} plus a rank-1 update matrix.
This technique is called a least-change secant update.
In single-variable Newton's method, we may write
    f′(xk) ≈ (f(xk) − f(x_{k−1})) / (xk − x_{k−1}),  or  f′(xk)(xk − x_{k−1}) ≈ f(xk) − f(x_{k−1});
in the multidimensional case, we instead determine Ak ≈ J(x^(k)) uniquely such that
    Ak (x^(k) − x^(k−1)) = F(x^(k)) − F(x^(k−1)).


A Question
Given the special structure of Ak, how can the number of arithmetic operations needed to compute A_k^(−1) F(x^(k)) be reduced to O(n²)?

Thm 10.8 (Sherman-Morrison Formula)
If A ∈ R^(n×n) is nonsingular and x, y ∈ R^n are nonzero vectors with y^T A^(−1) x ≠ −1, then A + x y^T is nonsingular and
    (A + x y^T)^(−1) = A^(−1) − (A^(−1) x y^T A^(−1)) / (1 + y^T A^(−1) x).
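A quick numerical sanity check of the formula in MATLAB (a sketch with arbitrary test data, not from the slides):

    A = [4 1 0; 1 3 1; 0 1 2];  x = [1; 2; 0];  y = [0; 1; 1];   % test data
    Ainv = inv(A);
    lhs  = inv(A + x*y');
    rhs  = Ainv - (Ainv*x)*(y'*Ainv) / (1 + y'*Ainv*x);
    norm(lhs - rhs)    % should be on the order of machine epsilon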

Reformulation of Ak^(−1)

For each k ≥ 1, applying the Sherman-Morrison formula to (16) gives

    Ak^(−1) = ( A_{k−1} + ((yk − A_{k−1} sk)/∥sk∥₂²) sk^T )^(−1)
            = A_{k−1}^(−1) − ( A_{k−1}^(−1) ((yk − A_{k−1} sk)/∥sk∥₂²) sk^T A_{k−1}^(−1) ) / ( 1 + sk^T A_{k−1}^(−1) (yk − A_{k−1} sk)/∥sk∥₂² )
            = A_{k−1}^(−1) − ( (A_{k−1}^(−1) yk − sk)(sk^T A_{k−1}^(−1)) ) / ( ∥sk∥₂² + sk^T A_{k−1}^(−1) yk − ∥sk∥₂² )
            = A_{k−1}^(−1) + ( (sk − A_{k−1}^(−1) yk)(sk^T A_{k−1}^(−1)) ) / ( sk^T A_{k−1}^(−1) yk )
            = A_{k−1}^(−1) + ( (sk − A_{k−1}^(−1) yk)(sk^T A_{k−1}^(−1)) ) / ( −sk^T (−A_{k−1}^(−1) yk) ).    (17)


Algorithm 10.2: Broyden's Method

INPUT dimension n; initial x ∈ R^n; tolerance TOL; maximum number of iterations N0.
OUTPUT an approximate solution x of the nonlinear system F(x) = 0.
Step 1 Set A0 = J(x), the Jacobian matrix evaluated at x; v = F(x). (Note: v = F(x^(0)).)
Step 2 Set A = A0^(−1). (Use Gaussian elimination.)
Step 3 Set s = −Av; x = x + s; k = 1. (Note: s = s1 and x = x^(1).)
Step 4 While (k ≤ N0) do Steps 5–11:
Step 5   Set w = v; v = F(x); y = v − w. (Note: y = yk.)
Step 6   Set z = −Ay. (Note: z = −A_{k−1}^(−1) yk.)
Step 7   Set p = −s^T z. (Note: p = sk^T A_{k−1}^(−1) yk.)
Step 8   Set u^T = s^T A; A = A + (1/p)(s + z) u^T. (Note: A = Ak^(−1).)
Step 9   Set s = −Av; x = x + s. (Note: s = −Ak^(−1) F(x^(k)) and x = x^(k+1).)
Step 10  If ∥s∥ < TOL then OUTPUT(x); STOP.
Step 11  Set k = k + 1.
Step 12 OUTPUT('Maximum number of iterations exceeded'); STOP.
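A minimal MATLAB transcription of Algorithm 10.2 (a sketch; the function name broyden and the handle interface are assumptions, reusing the conventions of the newton_sys sketch above):

    function x = broyden(F, J, x, TOL, N0)
      % Broyden's method: maintain A ~= J(x)^(-1) via rank-1 updates as in (17).
      A = inv(J(x));                 % Step 2: one O(n^3) inversion up front
      v = F(x);
      s = -A*v;  x = x + s;          % Step 3: the first step is a Newton step
      for k = 1:N0
          w = v;  v = F(x);  y = v - w;   % y = y_k
          z = -A*y;                       % z = -A_{k-1}^(-1) y_k
          p = -s.'*z;                     % p = s_k^T A_{k-1}^(-1) y_k
          u = (s.'*A).';                  % u^T = s^T A
          A = A + ((s + z)*u.')/p;        % Sherman-Morrison update, O(n^2)
          s = -A*v;  x = x + s;           % next iterate x^(k+1)
          if norm(s, inf) < TOL, return, end
      end
      error('Maximum number of iterations exceeded');
    end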

Example 1, p. 651 (See also Example 2 of Sec. 10.1)

Use Broyden's method to solve the nonlinear system

    3x1 − cos(x2x3) − 1/2 = 0,
    x1² − 81(x2 + 0.1)² + sin x3 + 1.06 = 0,
    e^(−x1x2) + 20x3 + (10π − 3)/3 = 0,

with x^(0) = [0.1, 0.1, −0.1]^T and ∥x^(k) − x^(k−1)∥₂ < 10^−5.


Numerical Results for Example 1

The superlinear convergence of Broyden's method for Example 1 is demonstrated in the following table; the computed solutions are less accurate than those computed by Newton's method.
Actual solution: p = [0.5, 0, −π/6]^T ≈ [0.5, 0, −0.5235987757]^T.


Thank you for your attention!
