**Chapter 1**

**Mathematical Preliminaries and** **Error Analysis**

**Hung-Yuan Fan (范洪源)**

**Department of Mathematics,**
**National Taiwan Normal University, Taiwan**

**Spring 2016**

**Section 1.1**

**Review of Calculus**

### Limits

**Def 1.1**

A function $f : X \to \mathbb{R}$ has the **limit** $L$ at $x_0$, denoted by
$$\lim_{x \to x_0} f(x) = L,$$
if $\forall\, \varepsilon > 0$, $\exists\, \delta > 0$ s.t. $x \in X$, $0 < |x - x_0| < \delta \Rightarrow |f(x) - L| < \varepsilon$.

### Continuity

**Def 1.2**

**1** A function $f : X \to \mathbb{R}$ is **continuous** (abbrev. conti.) at $x_0 \in X$ if
$$\lim_{x \to x_0} f(x) = f(x_0).$$

**2** $f$ is conti. on $X$ if it is conti. at each point of $X$.

**3** $C(X) = \{ f \mid f \text{ is conti. on } X \}$ denotes the set of all conti. functions defined on $X$.

**Note:** if $X = [a, b]$, $(a, b)$, $[a, b)$ or $(a, b]$ with $a < b$, we write $C[a, b]$, $C(a, b)$, $C[a, b)$ or $C(a, b]$, respectively.

### Limits of Sequences

**Def 1.3**

A sequence (abbrev. seq.) of real numbers $\{x_n\}_{n=1}^{\infty}$ **converges** (abbrev. conv.) to the limit $x$, written
$$\lim_{n \to \infty} x_n = x, \quad \text{or} \quad x_n \to x \text{ as } n \to \infty,$$
if $\forall\, \varepsilon > 0$, $\exists\, N(\varepsilon) \in \mathbb{N}$ s.t. $n > N(\varepsilon) \Rightarrow |x_n - x| < \varepsilon$.

**Thm 1.4 (Sequences and Continuity)**

Let $f$ be a real-valued function defined on $\emptyset \neq X \subseteq \mathbb{R}$ and $x_0 \in X$. The following are equivalent:

a. $f$ is conti. at $x_0$.

b. $\forall$ seq. $\{x_n\}_{n=1}^{\infty} \subseteq X$ with $\lim_{n \to \infty} x_n = x_0$, we have $\lim_{n \to \infty} f(x_n) = f(x_0)$.

### Differentiability

**Def 1.5**

**1** A function $f : X \to \mathbb{R}$ is **differentiable** (abbrev. diff.) at $x_0 \in X$ if
$$f'(x_0) = \lim_{x \to x_0} \frac{f(x) - f(x_0)}{x - x_0} = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}.$$

**2** $f$ is diff. on $X$ if it is diff. at each point of $X$.

**3** $C^n(X)$ denotes the set of all functions having $n$ conti. derivatives on $X$.

**4** $C^{\infty}(X)$ denotes the set of functions having derivatives of all orders on $X$.
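The difference quotients in Def 1.5 are easy to probe numerically. A minimal sketch in Python (the helper name `diff_quotient` is my own, not from the text):

```python
import math

def diff_quotient(f, x0, h):
    """Forward difference quotient (f(x0 + h) - f(x0)) / h from Def 1.5."""
    return (f(x0 + h) - f(x0)) / h

# For f(x) = x^2, f'(3) = 6; the quotient approaches 6 as h shrinks.
print(abs(diff_quotient(lambda x: x * x, 3.0, 1e-6) - 6.0) < 1e-4)  # True
print(abs(diff_quotient(math.sin, 0.0, 1e-8) - 1.0) < 1e-4)         # True
```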

### Continuity vs. Differentiability

**Thm 1.6**

Let $f$ be a real-valued function defined on $X$ and $x_0 \in X$. Then
$$f \text{ is diff. at } x_0 \Longrightarrow f \text{ is conti. at } x_0.$$

### Rolle’s Theorem

**Thm 1.7 (Rolle’s Thm)**

$f \in C[a, b]$ and $f$ is diff. on $(a, b)$. If $f(a) = f(b)$, then $\exists\, c \in (a, b)$ s.t. $f'(c) = 0$.

### Generalized Rolle’s Theorem

**Thm 1.10 (Generalized Rolle’s Thm)**

$f \in C[a, b]$ is $n$ times diff. on $(a, b)$. If $f(x_i) = 0$ at $n + 1$ distinct numbers $a \leq x_0 < x_1 < \cdots < x_n \leq b$, then
$$\exists\, c \in (x_0, x_n) \subseteq [a, b] \text{ s.t. } f^{(n)}(c) = 0.$$

### Mean Value Theorem (MVT)

**Thm 1.8 (MVT)**

$f \in C[a, b]$ and $f$ is diff. on $(a, b)$. Then $\exists\, c \in (a, b)$ s.t.
$$f'(c) = \frac{f(b) - f(a)}{b - a}, \quad \text{or} \quad f(b) - f(a) = f'(c)(b - a).$$

### Extreme Value Theorem (EVT)

**Thm 1.9 (EVT)**

If $f \in C[a, b]$, then $\exists\, c_1, c_2 \in [a, b]$ s.t.
$$f(c_1) \leq f(x) \leq f(c_2) \quad \forall\, x \in [a, b].$$

### Intermediate Value Theorem (IVT)

**Thm 1.11 (IVT)**

$f \in C[a, b]$ and $K$ is any number between $f(a)$ and $f(b)$
$$\Longrightarrow \exists\, c \in (a, b) \text{ s.t. } f(c) = K.$$

### Taylor Polynomials and Series

**Thm 1.14 (Taylor's Thm)**

$f \in C^n[a, b]$, $f^{(n+1)}$ exists on $[a, b]$, and $x_0 \in [a, b]$.
$\Longrightarrow \forall\, x \in [a, b]$, $\exists\, \xi(x)$ between $x_0$ and $x$ s.t. $f(x) = P_n(x) + R_n(x)$, where
$$P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!} (x - x_0)^k \qquad \text{(the $n$th Taylor poly. for $f$)}$$
$$R_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n + 1)!} (x - x_0)^{n+1}. \qquad \text{(the remainder or truncation error associated with $P_n(x)$)}$$

### Taylor Series

**Remarks**

**1** If $\lim_{n \to \infty} R_n(x) = 0$ $\forall\, x \in I$ ($I$: an interval with $x_0 \in I$), then
$$f(x) = \lim_{n \to \infty} P_n(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(x_0)}{k!} (x - x_0)^k \quad \forall\, x \in I.$$
We say that the **Taylor series** for $f$ about $x_0$ conv. to $f$ on $I$.

**2** If $x_0 = 0$, the Taylor series is often called the **Maclaurin series**.

**Example 3, p. 11**

The second (or third) Taylor poly. for $f(x) = \cos x$ about $x_0 = 0$ is $P_2(x) = P_3(x) = 1 - \frac{1}{2} x^2$, but their truncation errors satisfy
$$|R_2(x)| \leq \frac{|\sin \xi(x)|\, |x|^3}{6} \leq \frac{|x|^4}{6} = 0.1\overline{6} \cdot |x|^4 \quad (\because\ |\sin \xi(x)| \leq |\xi(x)| \leq |x|\ \forall\, x \in \mathbb{R}),$$
$$|R_3(x)| \leq \frac{|\cos \tilde{\xi}(x)|\, |x|^4}{24} \leq \frac{|x|^4}{24} = 0.041\overline{6} \cdot |x|^4.$$
**(A sharper bound for $|x| \approx 0$!)**
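The bounds in Example 3 can be spot-checked numerically; a small sketch under the same setup (`P2` is my own name for the Taylor polynomial):

```python
import math

def P2(x):
    """Second (= third) Taylor polynomial of cos about x0 = 0."""
    return 1.0 - 0.5 * x * x

x = 0.1
err = abs(math.cos(x) - P2(x))
print(err <= abs(x)**4 / 6)    # True (the R_2 bound)
print(err <= abs(x)**4 / 24)   # True (the sharper R_3 bound)
```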

### What are the goals of numerical analysis?

**Remark**

Two objectives of numerical analysis:

**1** Find an approximation to the solution of a given problem.

**2** Determine a bound for the accuracy of the approximation.

Is this error bound tight and sharp?

### Integration (1/2)

**Def 1.12 (Definite Integral)**

**1** The **(Riemann) definite integral** of $f$ on $[a, b]$ is defined by
$$\int_a^b f(x)\, dx = \lim_{\max_{1 \leq i \leq n} \Delta x_i \to 0}\ \sum_{i=1}^{n} f(z_i)\, \Delta x_i,$$
where $P = \{a = x_0 < x_1 < \cdots < x_n = b\}$ is any partition of $[a, b]$, $z_i \in [x_{i-1}, x_i]$ and $\Delta x_i = x_i - x_{i-1}$ for $i = 1, 2, \ldots, n$.

**2** $f$ is called **(Riemann) integrable** over $[a, b]$ if the limit exists.

**Note:** $f$ is conti. on $[a, b]$ $\Rightarrow$ $f$ is integrable over $[a, b]$.

### Integration (2/2)

**Remark**

$f$ is integrable over $[a, b]$ $\Longrightarrow$
$$\int_a^b f(x)\, dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(z_i)\, \Delta x \quad \left( \Delta x = \frac{b - a}{n} \right)$$
$$\approx \sum_{i=0}^{n} w_i \cdot f(x_i)\, \Delta x, \quad (w_i: \text{weighting coeff.})$$
with $z_i = x_i$ or $x_{i-1}$ for $i = 1, 2, \ldots, n$.

These are Riemann sums with $z_i = x_i$ $\forall\, i$.
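The Riemann sum with $z_i = x_i$ and uniform spacing can be sketched as follows (`riemann_sum` is my own helper name):

```python
def riemann_sum(f, a, b, n):
    """Riemann sum with z_i = x_i and uniform Delta x = (b - a) / n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

# Integral of x^2 on [0, 1] is 1/3; the sum approaches 1/3 as n grows.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 10_000)
print(abs(approx - 1.0 / 3.0) < 1e-3)  # True
```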

### Weighted MVT for Definite Integrals

**Thm 1.13 (Weighted MVT for Integrals)**

$f \in C[a, b]$ and $g$ is an integrable function that does not change sign on $[a, b]$. Then $\exists\, c \in (a, b)$ s.t.
$$\int_a^b f(x) g(x)\, dx = f(c) \int_a^b g(x)\, dx.$$

**Note:** when $g(x) \equiv 1$, we have
$$f(c) = \frac{1}{b - a} \int_a^b f(x)\, dx \equiv f_{\text{avg}},$$
where $f_{\text{avg}}$ is the **average value** of $f$ on $[a, b]$.

### The Average Value of a Function

**Section 1.2**

**Round-off Errors and Computer** **Arithmetic**


### Binary Machine Numbers

**IEEE 754-1985 Standard (updated version: IEEE 754-2008)**

**1** Single Precision Format (32 bits)

**2** Double Precision Format (64 bits)

**3** Extended Precision Format (80 bits)
sign: 1 bit, exponent: 15 bits, significand: 64 bits (with an explicit leading bit)

### 64-bit Floating-Point Representation

A 64-bit representation is used for a real number.

Each binary **floating-point number** carries roughly 16 decimal digits of precision.

A 1-bit **sign** $s$ is followed by an 11-bit **exponent** $c$ (the characteristic, $0 \leq c \leq 2^{11} - 1 = 2047$) and a 52-bit binary **fraction** $f$ (the mantissa).

### The Normalized Form

The normalized binary floating-point form of $x \in \mathbb{R}$ is
$$fl(x) = (-1)^s\, 2^{c - 1023} (1 + f) = (-1)^s \left( 1 + \sum_{i=1}^{52} b_i 2^{-i} \right) 2^{c - 1023},$$
where $f = (0.b_1 b_2 \cdots b_{52})_2$.

$\mathbb{F} = \{ fl(y) \mid y \in \mathbb{R} \}$ is a finite (and proper) subset of $\mathbb{R}$.

The gap between two adjacent 64-bit floating-point numbers in $[1, 2)$ is $\varepsilon_M = 2^{-52} \approx 2.22 \times 10^{-16}$.

**Note:** the machine precision (or machine epsilon) is $\varepsilon_M = 2^{-23} \approx 1.19 \times 10^{-7}$ for the single precision format.

### Some Examples

**1** Since
$$27.56640625_{10} = (11011.10010001)_2 = (1.101110010001)_2 \times 2^4 \quad \text{(normalized form)},$$
we have $s = 0$, $c = 4 + 1023 = 1027_{10} = (10000000011)_2$ and mantissa $f = (0.101110010001)_2$. Using the IEEE 754 format $\Rightarrow$
$$\mathbf{0} \mid \mathbf{10000000011} \mid \mathbf{1011100100010 \cdots 0} \quad \text{(padded with 40 zeros!)}$$

**2** Note that
$$0.1_{10} = (0.0\overline{0011})_2 = (1.1\overline{0011})_2 \times 2^{-4}.$$
How do we store $0.1_{10}$ using the IEEE 754 format?
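The bit fields above can be inspected directly; a sketch using Python's `struct` module (the helper `double_bits` is my own):

```python
import struct

def double_bits(x):
    """Split a 64-bit double into (sign s, characteristic c, 52-bit fraction)."""
    (u,) = struct.unpack(">Q", struct.pack(">d", x))
    return u >> 63, (u >> 52) & 0x7FF, u & ((1 << 52) - 1)

# Example 1: s = 0, c = 1027, fraction = 101110010001 followed by 40 zeros.
s, c, f = double_bits(27.56640625)
print(s, c, format(f, "052b"))

# Example 2: 0.1 is rounded; its characteristic is -4 + 1023 = 1019.
print(double_bits(0.1)[1])  # 1019
```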

### Remarks on IEEE 754 Format

**1** The smallest positive normalized floating-point number (with $s = 0$, $c = 1$ and $f = 0$) is
$$fl_{\min} = 2^{-1022} (1 + 0) \approx 2.2 \times 10^{-308}.$$

**2** The largest one (with $s = 0$, $c = 2046$ and $f = 1 - 2^{-52}$) is
$$fl_{\max} = 2^{1023} (2 - 2^{-52}) \approx 1.8 \times 10^{308}.$$

**3** $|fl(x)| > fl_{\max} \Rightarrow$ **overflow**, and $|fl(x)| < fl_{\min} \Rightarrow$ **underflow** (typically $x$ is then reset to $0$).

**4** Two zeros exist: $+0$ (with $s = 0$, $c = 0$, $f = 0$) and $-0$ (with $s = 1$, $c = 0$, $f = 0$)!
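These limits can be confirmed against a concrete IEEE 754 implementation; a quick sketch via Python's `sys.float_info`:

```python
import math
import sys

fl_max = sys.float_info.max        # 2^1023 * (2 - 2^-52)
fl_min = sys.float_info.min        # normalized fl_min = 2^-1022
eps_M = sys.float_info.epsilon     # machine epsilon 2^-52

print(fl_max == 2.0**1023 * (2.0 - 2.0**-52))   # True
print(fl_min == 2.0**-1022)                     # True
print(eps_M == 2.0**-52)                        # True

print(fl_max * 2.0)                 # inf (overflow)
print(math.copysign(1.0, -0.0))     # -1.0, so -0 really is distinct from +0
```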

### Decimal Machine Numbers

The normalized decimal floating-point form of $y \in \mathbb{R}$ is
$$fl(y) = \pm 0.d_1 d_2 \cdots d_k \times 10^n,$$
where $1 \leq d_1 \leq 9$, $0 \leq d_i \leq 9$ ($i = 2, \ldots, k$) and $n \in \mathbb{Z}$. In this case, $fl(y)$ is called a **$k$-digit decimal machine number**.

The $k$-digit $fl(y)$ of a normalized real number
$$y = \pm 0.d_1 d_2 \cdots d_k d_{k+1} \cdots \times 10^n$$
can be obtained by terminating the mantissa of $y$ at $k$ decimal digits.

### Two Methods of Termination

**1** **Chopping:**
$$fl(y) = \pm 0.d_1 d_2 \cdots d_k \times 10^n,$$
i.e. simply chop off the digits $d_{k+1} d_{k+2} \cdots$.

**2** **Rounding:**
$$fl(y) = \begin{cases} \pm (0.d_1 d_2 \cdots d_k + 10^{-k}) \times 10^n, & d_{k+1} \geq 5 \ \text{(round up)} \\ \pm 0.d_1 d_2 \cdots d_k \times 10^n, & d_{k+1} < 5 \ \text{(round down)} \end{cases}$$
In either case the result can be written as $\pm 0.\delta_1 \delta_2 \cdots \delta_k \times 10^n$.

**Example 1, p. 20**

Determine the 5-digit (a) chopping and (b) rounding values of $\pi = 0.31415926\cdots \times 10^1$.

**Sol:**

(a) $fl(\pi) = 0.31415 \times 10^1$ by chopping.

(b) $fl(\pi) = (0.31415 + 10^{-5}) \times 10^1 = 0.31416 \times 10^1$ by rounding.
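Chopping and rounding to $k$ digits can be sketched with Python's `decimal` module (the helper name `fl` and its interface are my own):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

def fl(y, k, mode):
    """k-digit decimal machine number of y.

    mode=ROUND_DOWN gives chopping; mode=ROUND_HALF_UP gives rounding.
    """
    d = Decimal(repr(y))
    n = d.adjusted() + 1            # exponent n in fl(y) = +-0.d1...dk x 10^n
    q = Decimal(1).scaleb(n - k)    # keep k significant decimal digits
    return float(d.quantize(q, rounding=mode))

pi_approx = 3.1415926               # Example 1's value of pi
print(fl(pi_approx, 5, ROUND_DOWN))     # 3.1415  (chopping)
print(fl(pi_approx, 5, ROUND_HALF_UP))  # 3.1416  (rounding)
```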

### Absolute and Relative Errors

**Def 1.15**

If $p^*$ is an approximation to $p$, then

**1** the **absolute error** is $AE(p^*) = |p^* - p|$.

**2** the **relative error** is
$$RE(p^*) = \frac{|p^* - p|}{|p|}, \quad \text{provided that } p \neq 0.$$

**Note:** the relative error is independent of the magnitude of $p$, but the absolute error may vary widely!

### Examples of Abs. and Rel. Errors

**Example 2, p. 21**

Find the abs. and rel. errors when approximating $p$ by $p^*$.

(a) $p = 0.3000 \times 10^1$ and $p^* = 0.3100 \times 10^1$.

(b) $p = 0.3000 \times 10^{-3}$ and $p^* = 0.3100 \times 10^{-3}$.

(c) $p = 0.3000 \times 10^4$ and $p^* = 0.3100 \times 10^4$.

**Sol:**

(a) $AE(p^*) = 0.1$ and $RE(p^*) = 0.333\overline{3} \times 10^{-1}$.

(b) $AE(p^*) = 0.1 \times 10^{-4}$ and $RE(p^*) = 0.333\overline{3} \times 10^{-1}$.

(c) $AE(p^*) = 0.1 \times 10^3$ and $RE(p^*) = 0.333\overline{3} \times 10^{-1}$.

(The relative errors are all the same, but the absolute errors vary greatly!)
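A small sketch of Def 1.15 applied to Example 2 (helper names are mine):

```python
def abs_err(p_star, p):
    """Absolute error AE(p*) = |p* - p| from Def 1.15."""
    return abs(p_star - p)

def rel_err(p_star, p):
    """Relative error RE(p*) = |p* - p| / |p|, for p != 0."""
    return abs(p_star - p) / abs(p)

# Example 2: all three pairs share RE = 1/3 x 10^-1, but AE varies widely.
for p_star, p in [(0.3100e1, 0.3000e1), (0.3100e-3, 0.3000e-3), (0.3100e4, 0.3000e4)]:
    print(abs_err(p_star, p), rel_err(p_star, p))
```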

### Significant Digits

**Def 1.16**

$p^*$ approximates $p \neq 0$ to $t$ **significant digits** (or figures) if $t$ is the largest value in $\mathbb{N} \cup \{0\}$ satisfying
$$RE(p^*) = \frac{|p^* - p|}{|p|} \leq 5 \times 10^{-t}.$$

**Note:** for any normalized $y = 0.d_1 d_2 \cdots \times 10^n \in \mathbb{R}$, its $k$-digit decimal representation satisfies
$$RE(fl(y)) \leq 10^{-k+1} = 10^{-(k-1)}$$
by using chopping (see the textbook), and
$$RE(fl(y)) \leq 0.5 \times 10^{-k+1} = 5 \times 10^{-k}$$
by using rounding.

### Finite-Digit Arithmetic

**Elementary Floating-Point Arithmetic**

For floating-point representations $fl(x)$ and $fl(y)$ of real numbers $x$ and $y$, assume that
$$x \oplus y = fl(fl(x) + fl(y)), \quad x \otimes y = fl(fl(x) \times fl(y)),$$
$$x \ominus y = fl(fl(x) - fl(y)), \quad x \oslash y = fl(fl(x) \div fl(y)).$$

**Note:** in practical computation, we usually have
$$fl(x \text{ op } y) = (x \text{ op } y)(1 + \delta) \quad \text{with } |\delta| \leq \varepsilon_M,$$
where op $= +, -, \times, \div$, and $\varepsilon_M$ is the machine precision.

### Subtraction of Nearly Equal Numbers

**Cancellation of Significant Digits**

If $x, y \in \mathbb{R}$ ($x > y$) have the $k$-digit decimal representations
$$fl(x) = 0.d_1 d_2 \cdots d_p \alpha_{p+1} \alpha_{p+2} \cdots \alpha_k \times 10^n,$$
$$fl(y) = 0.d_1 d_2 \cdots d_p \beta_{p+1} \beta_{p+2} \cdots \beta_k \times 10^n,$$
then
$$fl(x) - fl(y) = (0.\alpha_{p+1} \alpha_{p+2} \cdots \alpha_k - 0.\beta_{p+1} \beta_{p+2} \cdots \beta_k) \times 10^{n-p} \equiv 0.\sigma_{p+1} \sigma_{p+2} \cdots \sigma_k \times 10^{n-p},$$
i.e. $x \ominus y = fl(fl(x) - fl(y))$ has at most $k - p$ significant digits, with the last $p$ digits being either $0$ or randomly assigned.

### Magnification of Absolute Errors

**Remark**

Suppose that $fl(z) = z + \delta$ with $|\delta|$ being the absolute error. If $\varepsilon = 10^{-n}$ with $n \in \mathbb{N}$ is a number of small magnitude, then
$$\frac{fl(z)}{fl(\varepsilon)} \approx (z + \delta) \times 10^n = \frac{z}{\varepsilon} + 10^n \delta.$$
So, the absolute error in computing $z / \varepsilon$ is
$$\left| \frac{fl(z)}{fl(\varepsilon)} - \frac{z}{\varepsilon} \right| \approx 10^n \cdot |\delta| = |\delta| / \varepsilon.$$

**Example 4, pp. 23–24 (1/2)**

Given four real numbers
$$x = \frac{5}{7} = 0.\overline{714285}, \quad u = 0.714251, \quad v = 98765.9, \quad w = 0.111111 \times 10^{-4}.$$
Find the 5-digit chopping values of $x \ominus u$, $(x \ominus u) \oslash w$, $(x \ominus u) \otimes v$ and $u \oplus v$.

**Sol:** The absolute error for $x \ominus u$ is
$$|(x - u) - (x \ominus u)| = |(x - u) - fl(fl(x) - fl(u))|$$
$$= \left| \left( \frac{5}{7} - 0.714251 \right) - fl(0.71428 \times 10^0 - 0.71425 \times 10^0) \right|$$
$$= |0.347143 \times 10^{-4} - 0.30000 \times 10^{-4}| = 0.47143 \times 10^{-5}.$$

### Example 4, pp. 23–24 (2/2)

The relative error for $x \ominus u$ is given by
$$RE(x \ominus u) = \frac{0.47143 \times 10^{-5}}{0.347143 \times 10^{-4}} \approx 0.1358 \leq 0.136.$$
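Example 4's cancellation can be reproduced with a 5-digit chopping helper; a sketch using the `decimal` and `fractions` modules (the helper name `chop5` is my own):

```python
from decimal import Decimal, ROUND_DOWN
from fractions import Fraction

def chop5(d):
    """5-digit decimal chopping of a Decimal d (a sketch; name is mine)."""
    q = Decimal(1).scaleb(d.adjusted() + 1 - 5)
    return d.quantize(q, rounding=ROUND_DOWN)

fl_x = chop5(Decimal("0.714285714285"))   # fl(x) = 0.71428
fl_u = chop5(Decimal("0.714251"))         # fl(u) = 0.71425
x_ominus_u = chop5(fl_x - fl_u)           # x (-) u = 0.30000 x 10^-4

true_diff = Fraction(5, 7) - Fraction("0.714251")   # ~ 0.347143 x 10^-4
abs_error = abs(float(true_diff) - float(x_ominus_u))
rel_error = abs_error / float(true_diff)
print(x_ominus_u)            # 0.000030000
print(round(rel_error, 4))   # 0.1358
```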

### How to avoid the loss of accuracy?

**Some Tricks**

**1** Reformulate the calculations to avoid the subtraction of two nearly equal numbers.

**2** Rearrange the calculations by nested arithmetic to reduce the number of arithmetic operations.

**The lesson: think before you compute!**

### Illustration of Trick 1

The distinct real roots of $ax^2 + bx + c = 0$ with $a \neq 0$ and $b^2 - 4ac > 0$ are
$$x_1 = \frac{-b + \sqrt{b^2 - 4ac}}{2a}, \quad x_2 = \frac{-b - \sqrt{b^2 - 4ac}}{2a}.$$

If $b > 0$ and $4ac \ll b^2$, then
$$-b + \sqrt{b^2 - 4ac} \approx 0 \Rightarrow \text{loss of accuracy in computing } x_1!$$

Rewrite the formula for $x_1$ by rationalization:
$$x_1 = \frac{-2c}{b + \sqrt{b^2 - 4ac}}. \quad \text{(The denominator is no longer a difference of nearly equal numbers!)}$$

Use $x_1 x_2 = \frac{c}{a}$ $\Rightarrow$ $x_2 = \frac{c}{a x_1} = \frac{-b - \sqrt{b^2 - 4ac}}{2a}$.
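Trick 1 can be sketched as a small root-finding helper (the name `stable_roots` is my own):

```python
import math

def stable_roots(a, b, c):
    """Real roots of ax^2 + bx + c = 0 when b > 0 and b^2 - 4ac > 0.

    x1 uses the rationalized formula -2c / (b + sqrt(b^2 - 4ac)),
    so no subtraction of nearly equal numbers occurs.
    """
    root = math.sqrt(b * b - 4.0 * a * c)
    x1 = -2.0 * c / (b + root)
    x2 = (-b - root) / (2.0 * a)
    return x1, x2

x1, x2 = stable_roots(1.0, 62.10, 1.0)   # the example from pp. 25-26
print(abs(x1 - (-0.01610723)) < 1e-6)    # True
print(abs(x2 - (-62.08390)) < 1e-4)      # True
```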

### An Example for Trick 1 (1/2)

**Example, pp. 25–26**

Use 4-digit rounding arithmetic to determine the first root $x_1$ of
$$f(x) = x^2 + 62.10x + 1 = 0.$$

**Sol:** The two real roots of $f(x) = 0$ are approximately
$$x_1 = -0.01610723, \quad x_2 = -62.08390.$$

Using 4-digit rounding $\Rightarrow$
$$fl(\sqrt{b^2 - 4ac}) = fl\left( \sqrt{(62.10)^2 - (4.000)(1.000)(1.000)} \right) = 62.06,$$
$$fl(x_1) = \frac{-62.10 + 62.06}{2.000} = -0.02000,$$
with the relative error being
$$\frac{|-0.01611 + 0.02000|}{|-0.01611|} \approx 2.4 \times 10^{-1}.$$

### An Example for Trick 1 (2/2)

In addition, if we use the reformulation for $x_1$, then
$$fl(x_1) = fl\left( \frac{fl(-2c)}{fl(b + \sqrt{b^2 - 4ac})} \right) = fl\left( \frac{-2.000}{62.10 + 62.06} \right) = -0.01610,$$
which has the small relative error $6.2 \times 10^{-4}$.

**Note:** the accuracy of the approximate root $x_1$ is improved to 3 significant digits!

### An Example of Polynomial Evaluation (1/2)

**Example 6, pp. 26–27**

Evaluate the 3-digit chopping and rounding values of the poly.
$$f(x) = x^3 - 6.1x^2 + 3.2x + 1.5 \quad \text{at } x = 4.71.$$

**Sol:** The actual value is $y = f(4.71) = -14.263899$. Using 3-digit chopping/rounding arithmetic, the 3-digit approx. values of $y$ are
$$fl(y) = fl\big( ((104. - 134.) + 15.0) + 1.5 \big) = -13.5, \quad \text{(chopping)}$$
$$fl(y) = fl\big( ((105. - 135.) + 15.1) + 1.5 \big) = -13.4. \quad \text{(rounding)}$$

### An Example of Polynomial Evaluation (2/2)

Hence, the relative errors in computing $fl(y)$ are
$$RE(fl(y)) = \left| \frac{-14.263899 + 13.5}{-14.263899} \right| \approx 5.36 \times 10^{-2}, \quad \text{(chopping)}$$
$$RE(fl(y)) = \left| \frac{-14.263899 + 13.4}{-14.263899} \right| \approx 6.06 \times 10^{-2}. \quad \text{(rounding)}$$

$\Longrightarrow$ Only **one significant digit** for both chopping and rounding values of $y = f(4.71)$!

### Nested Arithmetic

**Rearrangement of Poly. Evaluation**

**Direct computation:** (4 multiplications and 3 additions)
$$f(x) = x \cdot (x \cdot x) - 6.1 \cdot (x \cdot x) + 3.2 \cdot x + 1.5$$

**Nested computation:** (2 multiplications and 3 additions)
$$f(x) = \big( (x - 6.1) \cdot x + 3.2 \big) \cdot x + 1.5$$

Again, using 3-digit arithmetic with the nested form $\Longrightarrow$
$$RE(fl(y)) = \left| \frac{-14.263899 + 14.2}{-14.263899} \right| \approx 4.5 \times 10^{-3}, \quad \text{(chopping)}$$
$$RE(fl(y)) = \left| \frac{-14.263899 + 14.3}{-14.263899} \right| \approx 2.5 \times 10^{-3}. \quad \text{(rounding)}$$

**Useful Suggestion**

The accuracy of an approximation can be improved if we reduce the number of arithmetic operations.
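The nested form above is exactly Horner's method; a minimal sketch (`horner` is my own name):

```python
def horner(coeffs, x):
    """Nested (Horner) evaluation; coeffs from the leading coefficient down.

    Uses n multiplications and n additions for a degree-n polynomial.
    """
    result = coeffs[0]
    for c in coeffs[1:]:
        result = result * x + c
    return result

# f(x) = x^3 - 6.1 x^2 + 3.2 x + 1.5 at x = 4.71 (Example 6)
y = horner([1.0, -6.1, 3.2, 1.5], 4.71)
print(abs(y - (-14.263899)) < 1e-6)  # True
```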

**HW of Sec. 1.2:** 24, …

**Section 1.3**

**Algorithms and Convergence**

**Algorithms and Pseudocode**

An **algorithm** is a procedure that describes a finite sequence of steps to be performed in a specified order.

The objective of an algorithm is to implement a procedure for **solving a problem or approximating a solution to the problem**.

**Pseudocode** is an informal, environment-independent description of the key principles of an algorithm.

It uses the structural conventions of a programming language, but is intended for human reading rather than machine reading.

**An Example of Pseudocode**

To solve the root-finding problem
$$f(x) = ax^2 + bx + c = 0 \quad \text{with } a \neq 0.$$

INPUT coefficients $a, b, c$.

OUTPUT approximate root $x$.

Step 1 Compute the discriminant $D = b^2 - 4ac$.

Step 2 Compute the approximate root $x$ of $f(x) = 0$ using $D$.

Step 3 OUTPUT($x$); STOP.

### An Illustration of Algorithm

**Example 1, p. 33**

The $N$th Taylor poly. of $f(x) = \ln x$ about $x_0 = 1$ is
$$P_N(x) = \sum_{i=1}^{N} \frac{(-1)^{i+1}}{i} (x - 1)^i.$$
Construct an algorithm to determine the minimal value of $N$ s.t.
$$|\ln(1.5) - P_N(1.5)| < 10^{-5}.$$

**Note:** From the Alternating Series Thm $\Longrightarrow$
$$|\ln x - P_N(x)| \leq |a_{N+1}| = \frac{(x - 1)^{N+1}}{N + 1} \quad \text{for } 1 < x \leq 2.$$
So, the stopping criterion should be
$$|a_{N+1}| = \frac{(x - 1)^{N+1}}{N + 1} < TOL.$$

### Algorithm for Example 1
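The original slide presents this algorithm as a figure; below is a sketch of the same procedure in Python (the name `min_N_for_ln` is my own), accumulating $P_N(x)$ term by term and using the stopping criterion $|a_{N+1}| < TOL$:

```python
import math

def min_N_for_ln(x, tol):
    """Minimal N with the guaranteed bound |ln(x) - P_N(x)| < tol, 1 < x <= 2.

    Stops when the alternating-series bound
    |a_{N+1}| = (x - 1)^(N+1) / (N + 1) drops below tol.
    """
    N, partial = 0, 0.0
    while True:
        N += 1
        partial += (-1.0) ** (N + 1) * (x - 1.0) ** N / N   # add the Nth term
        if (x - 1.0) ** (N + 1) / (N + 1) < tol:            # |a_{N+1}| < TOL
            return N, partial

N, PN = min_N_for_ln(1.5, 1e-5)
print(N)                                # 12
print(abs(math.log(1.5) - PN) < 1e-5)   # True
```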

### Stability of Algorithms

**Definition**

An algorithm is called **stable** if small changes in the initial data produce correspondingly small changes in the final results.

Otherwise, the algorithm is called **unstable**, i.e. small changes in the initial data produce large changes in the final results.

### Growth of Errors

**Def 1.17 (Linear and Exponential Growth of Errors)**

$E_0 > 0$: the magnitude of the error at some stage in the calculations; $E_n$: the magnitude of the error after $n$ subsequent operations.

**1** The growth of error is called **linear** if $E_n \approx C n E_0$, where the constant $C > 0$ is independent of $n$.

**2** The growth of error is called **exponential** if $E_n \approx C^n E_0$ for some $C > 1$.

### Example of an Unstable Algorithm (1/2)

**An Unstable Procedure**

The sequence $\{p_n\}_{n=0}^{\infty}$ defined by
$$p_n = c_1 \left( \frac{1}{3} \right)^n + c_2\, 3^n$$
is the general solution to the recursive equation
$$p_n = \frac{10}{3} p_{n-1} - p_{n-2}, \quad n = 2, 3, \ldots.$$

$p_0 = 1$, $p_1 = \frac{1}{3}$ $\Rightarrow$ $c_1 = 1$, $c_2 = 0$. The exact solution is $p_n = \left( \frac{1}{3} \right)^n$.

Using 5-digit rounding $\Rightarrow$ $\hat{p}_0 = 1.0000$, $\hat{p}_1 = 0.33333$, and hence $\hat{c}_1 = 1.0000$, $\hat{c}_2 = -0.12500 \times 10^{-5}$. The computed solution is then
$$\hat{p}_n = 1.0000 \left( \frac{1}{3} \right)^n - 0.12500 \times 10^{-5}\, (3^n).$$

### Example of an Unstable Algorithm (2/2)

The absolute error in computing $\hat{p}_n$ is
$$AE(\hat{p}_n) = |p_n - \hat{p}_n| \approx 0.12500 \times 10^{-5}\, (3^n).$$

$\Longrightarrow$ An unstable procedure with exponential growth of errors!
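The exponential error growth can be observed by running the recurrence in double precision; a sketch (helper names mine):

```python
from fractions import Fraction

def recurrence(p0, p1, n_max):
    """Run p_n = (10/3) p_{n-1} - p_{n-2} in double precision."""
    p_prev, p_curr = p0, p1
    for _ in range(2, n_max + 1):
        p_prev, p_curr = p_curr, (10.0 / 3.0) * p_curr - p_prev
    return p_curr

n = 45
computed = recurrence(1.0, 1.0 / 3.0, n)
exact = float(Fraction(1, 3) ** n)      # true p_n = (1/3)^n, essentially 0
# The rounding error in fl(1/3) is amplified roughly like 3^n:
print(exact < 1e-14)                # True
print(abs(computed - exact) > 1.0)  # True: the computed value is useless
```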

### Rates of Convergence

**Def 1.18**

Suppose that $\{\alpha_n\}_{n=1}^{\infty}$ and $\{\beta_n\}_{n=1}^{\infty}$ are two sequences with $\lim_{n \to \infty} \alpha_n = \alpha$ and $\lim_{n \to \infty} \beta_n = 0$. If $\exists\, K > 0$ and $n_0 \in \mathbb{N}$ s.t.
$$|\alpha_n - \alpha| \leq K |\beta_n| \quad \forall\, n \geq n_0,$$
then we say that $\{\alpha_n\}_{n=1}^{\infty}$ **conv. to $\alpha$ with rate (or order) of convergence $O(\beta_n)$**, and write
$$\alpha_n = \alpha + O(\beta_n) \quad (\text{as } n \to \infty).$$

**Note:** the seq. $\{\alpha_n\}_{n=1}^{\infty}$ is often generated by some iterative method, and it is often compared with $\beta_n = \frac{1}{n^p}$ for $p > 0$.

**Example 2, p. 37**

For $n \geq 1$, consider two sequences of real numbers
$$\alpha_n = \frac{n + 1}{n^2} \quad \text{and} \quad \hat{\alpha}_n = \frac{n + 3}{n^3}.$$
Determine their rates of convergence.

**Sol:** Since
$$|\alpha_n - 0| = \frac{n + 1}{n^2} \leq \frac{n + n}{n^2} = 2 \cdot \frac{1}{n} \equiv 2 \beta_n,$$
$$|\hat{\alpha}_n - 0| = \frac{n + 3}{n^3} \leq \frac{n + 3n}{n^3} = 4 \cdot \frac{1}{n^2} \equiv 4 \hat{\beta}_n$$
for all $n \geq 1$, it follows that
$$\alpha_n = 0 + O\left( \frac{1}{n} \right), \quad \hat{\alpha}_n = 0 + O\left( \frac{1}{n^2} \right).$$
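The two bounds in Example 2 can be checked directly for many $n$; a small sketch:

```python
def alpha(n):
    return (n + 1) / n**2       # alpha_n = (n + 1) / n^2

def alpha_hat(n):
    return (n + 3) / n**3       # alpha_hat_n = (n + 3) / n^3

# Example 2's bounds: |alpha_n| <= 2 (1/n) and |alpha_hat_n| <= 4 (1/n^2).
print(all(alpha(n) <= 2.0 / n for n in range(1, 1001)))          # True
print(all(alpha_hat(n) <= 4.0 / n**2 for n in range(1, 1001)))   # True
```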

### Big-Oh Notation

**Def 1.19**

Suppose that $\lim_{h \to 0} F(h) = L$ and $\lim_{h \to 0} G(h) = 0$. If $\exists\, K > 0$ and $\delta > 0$ s.t.
$$|F(h) - L| \leq K |G(h)| \quad \text{for } 0 < |h| < \delta,$$
then we write
$$F(h) = L + O(G(h)) \quad (\text{as } h \to 0).$$

**Note:** in practice, we often choose $G(h) = h^p$ for $p > 0$, and the largest value of $p$ is expected.

**Example 3, p. 38**

Show that $\cos h + \frac{1}{2} h^2 = 1 + O(h^4)$.

**pf:** From Taylor's Thm, $\exists\, \xi(h)$ between $0$ and $h$ s.t.
$$\cos h = 1 - \frac{1}{2} h^2 + \frac{\cos \xi(h)}{24} h^4 \quad \text{for } h \neq 0.$$
Hence, we see that
$$\left| \left( \cos h + \frac{1}{2} h^2 \right) - 1 \right| = \frac{|\cos \xi(h)|}{24} |h^4| \leq \frac{1}{24} |h^4| \quad \text{for } h \neq 0,$$
which gives the desired result by Def. 1.19 (with $K = \frac{1}{24}$ and $G(h) = h^4$).