
# Chapter 5 Initial-Value Problems for Ordinary Differential Equations


## Hung-Yuan Fan (范洪源)

Department of Mathematics, National Taiwan Normal University, Taiwan

## Section 5.1 The Elementary Theory of Initial-Value Problems

Objectives

Develop numerical methods for approximating the solution to the initial-value problem (IVP)

dy/dt = f(t, y),  a ≤ t ≤ b,  y(a) = α,   (1)

where y(t) is the unique solution to IVP (1) on [a, b].

Error analysis for these numerical methods.

Note:

1. The first equation in (1) is an ordinary differential equation (ODE).

2. y(a) = α is called an initial condition (IC).

Def 5.1, p. 261

A function f(t, y) satisfies a Lipschitz condition in y on a set D ⊆ R² if ∃ a Lipschitz constant L > 0 s.t.

|f(t, y1) − f(t, y2)| ≤ L|y1 − y2|  whenever (t, y1) ∈ D and (t, y2) ∈ D.

Thm 5.4 (Uniqueness of the Solution to the IVP)

Suppose that f(t, y) is conti. on D = {(t, y) | a ≤ t ≤ b and y ∈ R}. If f satisfies a Lipschitz condition in y on D, then the IVP (1) has a unique solution y(t) for a ≤ t ≤ b.

Corollary of Thm 5.4

Suppose that f(t, y) is conti. on D = {(t, y) | a ≤ t ≤ b and y ∈ R}. If ∃ a constant L > 0 with

|∂f/∂y (t, y)| ≤ L  ∀ (t, y) ∈ D,

then f(t, y) satisfies a Lipschitz condition in y on D with Lipschitz constant L, and therefore the IVP (1) has a unique solution y(t) for a ≤ t ≤ b.

Def 5.5, p. 263 (Well-Posedness of the IVP)

The IVP

dy/dt = f(t, y),  a ≤ t ≤ b,  y(a) = α

is said to be a well-posed problem if

1. A unique solution y(t) exists on [a, b], and

2. ∃ ε0 > 0 and k > 0 s.t. for any 0 < ε < ε0, whenever δ(t) ∈ C[a, b] with |δ(t)| < ε for all t ∈ [a, b] and |δ0| < ε, the perturbed IVP

dz/dt = f(t, z) + δ(t),  a ≤ t ≤ b,  z(a) = α + δ0

has a unique solution z(t) satisfying |z(t) − y(t)| < kε for all t ∈ [a, b].

Thm 5.6 (A Sufficient Condition for Well-Posedness of the IVP)

Suppose that D = {(t, y) | a ≤ t ≤ b and −∞ < y < ∞} ⊆ R². If f ∈ C(D) satisfies a Lipschitz condition in y on D, then the IVP (1) is well-posed.

Remarks

Because any round-off error introduced in the representation of the data perturbs the original IVP (1), numerical methods are always effectively solving a perturbed IVP.

If the original IVP is well-posed, the numerical solution to a perturbed problem will accurately approximate the unique solution to the original problem!

Section 5.1 Selected Exercises

## Section 5.2 Euler's Method

### Derivation of Euler's Method

Assume that the IVP (1)

dy/dt = f(t, y),  a ≤ t ≤ b,  y(a) = α

is well-posed and that y(t) is the unique sol. to IVP (1) on [a, b].

Choose equally spaced mesh points on [a, b]:

ti = a + i·h,  i = 0, 1, 2, . . . , N,   (2)

where N ∈ N and h = (b − a)/N is the step size.

The graph of the unique solution y(t) evaluated at each mesh point is shown below.

### Derivation of Euler's Method (Conti'd)

If y(t) ∈ C²[a, b], it follows from Taylor's Thm that for each i = 0, 1, . . . , N − 1, ∃ ξi ∈ (ti, ti+1) s.t.

y(ti+1) = y(ti) + (ti+1 − ti)·y′(ti) + ((ti+1 − ti)²/2)·y″(ξi)
        = y(ti) + h·y′(ti) + (h²/2)·y″(ξi)
        = y(ti) + h·f(ti, y(ti)) + (h²/2)·y″(ξi),

where h is the step size and ti is chosen as in (2).

Deleting the remainder term, Euler's method constructs approximations wi ≈ y(ti) for i = 0, 1, . . . , N by

w0 = α,
wi+1 = wi + h·f(ti, wi)  for i = 0, 1, . . . , N − 1.   (3)

The first step of Euler's method is shown below.

After N steps of Euler's method defined as in (3), the differences between y(ti) and wi (i = 1, 2, . . . , N) are shown below.

Algorithm 5.1: Euler's Method

INPUT endpoints a, b; positive integer N; initial condition α.

OUTPUT approximation w to y at the (N + 1) values of t.

Step 1 Set h = (b − a)/N; t = a; w = α;
       OUTPUT(t, w).

Step 2 For i = 1, 2, . . . , N do Steps 3–4.

Step 3 Set w = w + h·f(t, w);  (Compute wi.)
       t = a + i·h.            (Compute ti.)

Step 4 OUTPUT(t, w).

Step 5 STOP.
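Algorithm 5.1 can be written as a short Python sketch (the function name `euler` and the returned list of (t, w) pairs are my own choices, not part of the algorithm):

```python
def euler(f, a, b, N, alpha):
    """Approximate the solution of y' = f(t, y), y(a) = alpha,
    on [a, b] by Euler's method with N steps.
    Returns the list of pairs (t_i, w_i)."""
    h = (b - a) / N          # step size
    t, w = a, alpha
    out = [(t, w)]
    for i in range(1, N + 1):
        w = w + h * f(t, w)  # w_i = w_{i-1} + h*f(t_{i-1}, w_{i-1})
        t = a + i * h        # t_i
        out.append((t, w))
    return out

# Example 1 (p. 268): y' = y - t^2 + 1, y(0) = 0.5, h = 0.2
approx = euler(lambda t, y: y - t*t + 1, 0.0, 2.0, 10, 0.5)
```

Running it on Example 1 reproduces the hand computation below: w1 = 0.8, w2 = 1.152, and so on.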

Example 1, p. 268

Consider the IVP

y′ = y − t² + 1,  0 ≤ t ≤ 2,  y(0) = 0.5.

(a) Show that y(t) = (t + 1)² − 0.5eᵗ is the unique solution to the above IVP.

(b) Apply Euler's method (Alg. 5.1) with h = 0.2 and N = 10 to obtain approximations wi, and compare these with the actual values of y(ti) for i = 1, 2, . . . , N.

Solution

(a) Note that y(0) = (0 + 1)² − 0.5e⁰ = 0.5, and it is easily seen that y(t) satisfies the given ODE by direct computation.

(b) From Euler's method in (3), we have w0 = 0.5, t0 = 0, and

wi+1 = wi + 0.2(wi − ti² + 1) = wi + 0.2[wi − (0.2i)² + 1]
     = 1.2wi − 0.008i² + 0.2,  i = 0, 1, . . . , 9,

where ti = 0.2i for i = 0, 1, 2, . . . , 10 = N.

The numerical results of Part (b) are shown in the following table.

### Error Bounds for Euler's Method

Thm 5.9 (Theoretical Error Bound)

Suppose that f is conti. and satisfies a Lipschitz condition with constant L > 0 on D = {(t, y) ∈ R² | a ≤ t ≤ b, −∞ < y < ∞}, and that ∃ M > 0 with

|y″(t)| ≤ M  ∀ t ∈ [a, b],

where y(t) is the unique sol. to the IVP (1). If w0, w1, . . . , wN are the approximations generated by Euler's method for some N ∈ N, then for each i = 0, 1, . . . , N, we have

|y(ti) − wi| ≤ (hM)/(2L)·[e^{L(ti−a)} − 1],

with ti being the mesh points and h being the step size.

In practice, it is difficult to verify the boundedness of y″(t) directly!

Instead, we may check the boundedness of

y″(t) = d/dt y′(t) = d/dt f(t, y(t)) = ∂f/∂t(t, y(t)) + ∂f/∂y(t, y(t))·f(t, y(t))

without explicitly knowing the unique solution y(t).

### A Test of the Theoretical Error Bound

Example 2, p. 272 (Verifying the Error Bound of Thm 5.9)

As in Example 1, Euler's method with h = 0.2 is applied to compute the approximations wi (i = 0, 1, . . . , N) of the unique solution y(t) = (t + 1)² − 0.5eᵗ to the IVP

y′ = y − t² + 1,  0 ≤ t ≤ 2,  y(0) = 0.5.

Compare the error bounds given by Thm 5.9 to the actual errors |y(ti) − wi| for i = 0, 1, . . . , N.

Solution (1/2)

Let f(t, y) = y − t² + 1 be a real-valued function defined on the set D = {(t, y) ∈ R² | 0 ≤ t ≤ 2, −∞ < y < ∞}.

Then f ∈ C(D) satisfies a Lipschitz condition in y on D with L = 1, since

∂f/∂y(t, y) = 1, so |∂f/∂y(t, y)| ≤ 1  ∀ (t, y) ∈ D.

Moreover, since the unique sol. is y(t) = (t + 1)² − 0.5eᵗ, we have y″(t) = 2 − 0.5eᵗ and hence

|y″(t)| ≤ 0.5e² − 2 ≡ M  ∀ t ∈ [0, 2].

Solution (2/2)

So, it follows from Thm 5.9 that the error bounds for Euler's method are given by

|y(ti) − wi| ≤ (hM)/(2L)·[e^{L(ti−a)} − 1] = (0.1)·(0.5e² − 2)·(e^{ti} − 1),

where the approx. wi computed by Euler's method are

w0 = 0.5,  wi+1 = 1.2wi − 0.008i² + 0.2  for i = 0, 1, . . . , 9,

and the mesh points are ti = 0.2i for i = 0, 1, 2, . . . , 10 = N.

The numerical comparison between actual errors and error bounds is shown in the following table.

Rate of Convergence of Euler's Method

|y(ti) − wi| = O(h)  for each i = 0, 1, . . . , N.
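The bound of Thm 5.9 can be checked numerically; here is a sketch for Example 2 at the final mesh point t = 2 (variable names are mine):

```python
import math

# Verifying Thm 5.9 on Example 2: Euler with h = 0.2 for
# y' = y - t^2 + 1, y(0) = 0.5, using L = 1 and M = 0.5e^2 - 2.
h, L = 0.2, 1.0
M = 0.5 * math.e**2 - 2
y = lambda t: (t + 1)**2 - 0.5 * math.exp(t)

w = 0.5
for i in range(10):                     # ten Euler steps to t = 2
    t = 0.2 * i
    w = w + h * (w - t*t + 1)
actual = abs(y(2.0) - w)                # actual error, about 0.4397
bound = h * M / (2 * L) * (math.exp(L * (2.0 - 0.0)) - 1)  # about 1.0827
```

The actual error at t = 2 sits well below the theoretical bound, as the theorem guarantees.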

Finite-Digit Approximations to y(ti)

If h = (b − a)/N and ti = a + ih for i = 0, 1, . . . , N, then note that Euler's method is performed

in exact arithmetic:

w0 = α,
wi+1 = wi + h·f(ti, wi)  for i = 0, 1, . . . , N − 1;

in finite-digit arithmetic:

u0 = α + δ0,
ui+1 = ui + h·f(ti, ui) + δi+1  for i = 0, 1, . . . , N − 1,   (4)

where δi denotes the round-off error associated with ui for each i = 0, 1, . . . , N.

### Practical Error Bounds for Euler's Method

Thm 5.10 (Error Bound in Finite-Digit Arithmetic)

Let y(t) be the unique solution to the IVP (1) and u0, u1, . . . , uN be the finite-digit approximations obtained from (4). If |δi| < δ for each i = 0, 1, . . . , N and the hypotheses of Thm 5.9 hold, then

|y(ti) − ui| ≤ (1/L)·(hM/2 + δ/h)·[e^{L(ti−a)} − 1] + |δ0|·e^{L(ti−a)}   (5)

for each i = 0, 1, . . . , N.

(27)

Comments on Thm 5.10 Since it is easily seen that

hlim→0+

(hM 2 +δ

h )

=∞,

h

## tends to increase the total error in theapproximation.

If we let

E(h) = hM 2 + δ

h for h> 0,

then E(h) = M2 − hδ2, and therefore, it follows from Calculus that E(h) is minimized at h =

M. In fact, we know that E(h) < 0 or E(h) is decreasing for 0 < h < h, E(h) > 0 or E(h) is increasing for h > h.
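A quick numerical check of this minimizer (a sketch; the values of M and δ are illustrative, not taken from the text):

```python
import math

# Trade-off E(h) = h*M/2 + delta/h between truncation and round-off
# error; minimized at h* = sqrt(2*delta/M), where E(h*) = sqrt(2*M*delta).
M, delta = 1.6945, 1e-6           # illustrative values
E = lambda h: h * M / 2 + delta / h
h_star = math.sqrt(2 * delta / M) # the minimizer h*
E_min = E(h_star)                 # equals sqrt(2*M*delta)
```

Evaluating E at step sizes on either side of h* confirms that the error bound grows in both directions.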

Section 5.2 Selected Exercises

## Section 5.3 Higher-Order Taylor Methods

Let y(t) be the unique solution to the IVP

y′ = f(t, y),  a ≤ t ≤ b,  y(a) = α.   (6)

Def 5.11 (Local Truncation Error)

The difference method for solving the IVP (6),

w0 = α,
wi+1 = wi + h·ϕ(ti, wi)  for i = 0, 1, . . . , N − 1,

has local truncation error (LTE)

τi+1(h) = [yi+1 − (yi + h·ϕ(ti, yi))]/h = (yi+1 − yi)/h − ϕ(ti, yi)

for each i = 0, 1, . . . , N − 1, where yi ≡ y(ti).

The LTE of Euler's Method

If y(t) ∈ C²[a, b], it follows from Taylor's Thm that for each i = 0, 1, . . . , N − 1, ∃ ξi ∈ (ti, ti+1) s.t.

yi+1 = yi + h·f(ti, yi) + (h²/2)·y″(ξi),   (7)

where h is the step size and ti is chosen as in (2).

From (7), the LTE of Euler's method at the ith step is

τi+1(h) = (yi+1 − yi)/h − ϕ(ti, yi) = (yi+1 − yi)/h − f(ti, yi) = (h/2)·y″(ξi).

So we see that

τi(h) = O(h)  for each i = 1, 2, . . . , N,

since y″ is bounded on [a, b].

### Taylor Method of Order n ∈ N

If the sol. y(t) is smooth enough, say y ∈ C^{n+1}[a, b], then ∃ ξi ∈ (ti, ti+1) s.t.

y(ti+1) = y(ti) + Σ_{k=1}^{n} (h^k/k!)·y^{(k)}(ti) + (h^{n+1}/(n+1)!)·y^{(n+1)}(ξi)   (8)

for each i = 0, 1, . . . , N − 1.

Successive differentiation gives

y′(t) = f(t, y(t)),  y″(t) = f′(t, y(t)),  . . . ,  y^{(n+1)}(t) = f^{(n)}(t, y(t)).

Then Eq. (8) can be rewritten as

y(ti+1) = y(ti) + h·Σ_{k=1}^{n} (h^{k−1}/k!)·f^{(k−1)}(ti, y(ti)) + (h^{n+1}/(n+1)!)·f^{(n)}(ξi, y(ξi)).   (9)

Taylor's Method of Order n ∈ N

The approximations wi to y(ti) (i = 0, 1, . . . , N) are computed by

w0 = α,
wi+1 = wi + h·T^{(n)}(ti, wi)  for each i = 0, 1, . . . , N − 1,   (10)

where h is the step size and

T^{(n)}(ti, wi) = f(ti, wi) + (h/2)·f′(ti, wi) + · · · + (h^{n−1}/n!)·f^{(n−1)}(ti, wi)

for each i = 0, 1, . . . , N − 1.

## Note:

Euler's method is just Taylor's method of order one!

Thm 5.12 (LTE of Higher-Order Taylor Methods)

Let y(t) be the unique solution to the IVP (6) on [a, b]. If y ∈ C^{n+1}[a, b], then the LTEs of Taylor's method of order n defined in (10) satisfy

τi(h) = O(h^n)  for each i = 1, 2, . . . , N,

where n and N are some positive integers.

Recall from Eq. (9): for each i = 0, 1, . . . , N − 1, ∃ ξi ∈ (ti, ti+1) s.t.

y(ti+1) = y(ti) + h·Σ_{k=1}^{n} (h^{k−1}/k!)·f^{(k−1)}(ti, y(ti)) + (h^{n+1}/(n+1)!)·f^{(n)}(ξi, y(ξi)).

Proof

From Taylor's Thm and (9), we obtain

(yi+1 − yi)/h − T^{(n)}(ti, yi) = (h^n/(n + 1)!)·f^{(n)}(ξi, y(ξi))

for some ξi ∈ (ti, ti+1).

Since y^{(n+1)} = f^{(n)} ∈ C[a, b], ∃ M > 0 with |f^{(n)}(t, y(t))| ≤ M ∀ t ∈ [a, b], and hence the LTE at the ith step satisfies

|τi+1(h)| ≤ (M/(n + 1)!)·h^n  for h > 0,

i.e., τi+1(h) = O(h^n) for each i = 0, 1, . . . , N − 1.

Example 1, p. 278

Apply Taylor's method of orders (a) two and (b) four with N = 10 to compute the approximations wi (i = 0, 1, . . . , N) of the unique solution y(t) = (t + 1)² − 0.5eᵗ to the IVP

y′ = y − t² + 1,  0 ≤ t ≤ 2,  y(0) = 0.5.

Let f be the real-valued function defined by

f(t, y(t)) = y(t) − t² + 1  ∀ t ∈ [0, 2].

Solution of Part (a): Taylor's Method of Order 2

Since f′(t, y) = y′ − 2t = y − t² + 1 − 2t, we have

T^{(2)}(ti, wi) = f(ti, wi) + (h/2)·f′(ti, wi) = (1 + h/2)·(wi − ti² + 1) − h·ti

for each i = 0, 1, . . . , N − 1.

Taylor's method of order 2 is defined by

w0 = 0.5,
wi+1 = wi + h·T^{(2)}(ti, wi)
     = wi + (0.2)·[(1 + 0.2/2)·(wi − (0.2i)² + 1) − (0.2)·(0.2i)]
     = 1.22wi − 0.0088i² − 0.008i + 0.22

for each i = 0, 1, . . . , 9.
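The order-2 recurrence above can be sketched in Python, coding f and its total derivative by hand (the names `f`, `df`, and `ws` are my own):

```python
# Taylor's method of order 2 for Example 1: f(t, y) = y - t^2 + 1,
# with total derivative f'(t, y) = y - t^2 + 1 - 2t.
h, N = 0.2, 10
f  = lambda t, y: y - t*t + 1
df = lambda t, y: y - t*t + 1 - 2*t   # d/dt f(t, y(t))

w, t = 0.5, 0.0
ws = [w]
for i in range(N):
    T2 = f(t, w) + (h / 2) * df(t, w)  # T^(2)(t_i, w_i)
    w = w + h * T2
    t = (i + 1) * h
    ws.append(w)
```

The first two steps give w1 = 0.83 and w2 = 1.2158, matching the closed-form recurrence 1.22wi − 0.0088i² − 0.008i + 0.22.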

The numerical results of Part (a) are shown in the following table.

Solution of Part (b): Taylor's Method of Order 4 (1/2)

Similarly, successive differentiation w.r.t. t gives

f″(t, y) = y − t² − 2t − 1 = f^{(3)}(t, y).

The function T^{(4)}(ti, wi) is defined by

T^{(4)}(ti, wi) = f(ti, wi) + (h/2)·f′(ti, wi) + (h²/6)·f″(ti, wi) + (h³/24)·f^{(3)}(ti, wi)
              = (1 + h/2 + h²/6 + h³/24)·(wi − ti²) − (1 + h/3 + h²/12)·(h·ti) + 1 + h/2 − h²/6 − h³/24.

Solution of Part (b): Taylor's Method of Order 4 (2/2)

Substituting h = 0.2 and ti = 0.2i into T^{(4)}, we thus obtain Taylor's method of order 4 as

w0 = 0.5,
wi+1 = wi + h·T^{(4)}(ti, wi) = 1.2214wi − 0.008856i² − 0.00856i + 0.2186

for each i = 0, 1, . . . , 9.

The numerical results of Part (b) are shown in the following table.

Section 5.3 Selected Exercises

## Section 5.4 Runge-Kutta Methods

Taylor's Methods vs. Runge-Kutta Methods

1. Taylor's Methods

Advantage: They have high-order LTE at each step.

Disadvantage: They require computation and evaluation of the high-order derivatives of f(t, y(t)) w.r.t. the variable t.

2. Runge-Kutta Methods

They also have high-order LTE at each step, but eliminate the need to compute and evaluate the derivatives of f(t, y).

Thm 5.13 (Taylor's Thm for Functions of Two Variables)

Suppose f(t, y) and all its partial derivatives of order ≤ n + 1 are conti. on D = {(t, y) | a ≤ t ≤ b, c ≤ y ≤ d}, and let Q0 = (t0, y0) ∈ D. Then for every (t, y) ∈ D, ∃ ξ between t and t0 and ∃ µ between y and y0 s.t.

f(t, y) = Pn(t, y) + Rn(t, y),

where

Pn(t, y) = f(Q0) + [(t − t0)·ft(Q0) + (y − y0)·fy(Q0)]
         + [((t − t0)²/2)·ftt(Q0) + (t − t0)(y − y0)·fyt(Q0) + ((y − y0)²/2)·fyy(Q0)]
         + · · · + (1/n!)·Σ_{j=0}^{n} C(n, j)·(t − t0)^{n−j}·(y − y0)^j·(∂^n f/∂t^{n−j}∂y^j)(Q0)   (11)

Thm 5.13 (Conti'd)

and

Rn(t, y) = (1/(n + 1)!)·Σ_{j=0}^{n+1} C(n + 1, j)·(t − t0)^{n+1−j}·(y − y0)^j·(∂^{n+1} f/∂t^{n+1−j}∂y^j)(ξ, µ).   (12)

In this case, Pn(t, y) is called the nth Taylor polynomial in two variables for the function f about Q0 = (t0, y0), and Rn(t, y) is the remainder term associated with Pn(t, y).

A Class of R-K Methods

1. Runge-Kutta Methods of Order 2

   Midpoint method
   Modified Euler method
   LTE at each step = O(h²)

2. Runge-Kutta Methods of Order 3

   Heun's method
   LTE at each step = O(h³)

3. Runge-Kutta Methods of Order 4, with LTE O(h⁴)

### The Midpoint Method (1/2)

For the Midpoint Method, we try to determine values for a1, α1 and β1 s.t. a1·f(t + α1, y + β1) approximates

T^{(2)}(t, y) = f(t, y) + (h/2)·f′(t, y)
             = f(t, y) + (h/2)·[ft(t, y) + fy(t, y)·y′(t)]
             = f(t, y) + (h/2)·ft(t, y) + (h/2)·fy(t, y)·f(t, y).   (13)

From Thm 5.13 with (11)–(12), ∃ ξ between t + α1 and t and ∃ µ between y + β1 and y s.t.

a1·f(t + α1, y + β1) = a1·f(t, y) + (a1α1)·ft(t, y) + (a1β1)·fy(t, y) + R1.   (14)

### The Midpoint Method (2/2)

Comparing the coefficients in (13) and (14) gives

a1 = 1,  α1 = h/2,  β1 = (h/2)·f(t, y).

So we further obtain

T^{(2)}(t, y) = f(t + h/2, y + (h/2)·f(t, y)) − R1

with

R1 = (h²/8)·ftt(ξ, µ) + (h²/4)·f(t, y)·fyt(ξ, µ) + (h²/8)·[f(t, y)]²·fyy(ξ, µ).

This means that the ith LTE of the Midpoint Method satisfies

τi+1(h) = (yi+1 − yi)/h − f(ti + h/2, yi + (h/2)·f(ti, yi))
        = (yi+1 − yi)/h − T^{(2)}(ti, yi) − R1 = O(h²),

since R1 = O(h²) if all second-order partial derivatives of f are bounded.

R-K Methods of Order Two

1. Midpoint Method:

w0 = α,
wi+1 = wi + h·f(ti + h/2, wi + (h/2)·f(ti, wi))

for each i = 0, 1, . . . , N − 1.

2. Modified Euler Method:

w0 = α,
wi+1 = wi + (h/2)·[f(ti, wi) + f(ti+1, wi + h·f(ti, wi))]

for each i = 0, 1, . . . , N − 1.
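Both order-2 methods can be sketched as follows (the function names `midpoint` and `modified_euler` are my own choices):

```python
# Second-order Runge-Kutta methods: Midpoint and Modified Euler.
def midpoint(f, a, b, N, alpha):
    h, w, out = (b - a) / N, alpha, [alpha]
    for i in range(N):
        t = a + i * h
        # evaluate f at the midpoint of the step
        w = w + h * f(t + h/2, w + (h/2) * f(t, w))
        out.append(w)
    return out

def modified_euler(f, a, b, N, alpha):
    h, w, out = (b - a) / N, alpha, [alpha]
    for i in range(N):
        t = a + i * h
        # average the slopes at both ends of the step
        w = w + (h/2) * (f(t, w) + f(t + h, w + h * f(t, w)))
        out.append(w)
    return out
```

On Example 2 below (f(t, y) = y − t² + 1, h = 0.2), the first steps are w1 = 0.828 (Midpoint) and w1 = 0.826 (Modified Euler).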

Comments on R-K Methods of Order 2

Two function evaluations of f are required per step.

For the Modified Euler Method, we use the function

a1·f(t, y) + a2·f(t + α2, y + δ2·f(t, y))

to approximate T^{(2)}(t, y). The desired parameters are given by

a1 = a2 = 1/2,  α2 = δ2 = h.

The LTE at each step is O(h²).

Example 2, p. 286

Apply the (a) Midpoint Method and (b) Modified Euler Method with N = 10 to compute the approximations wi (i = 0, 1, . . . , N) of the unique solution y(t) = (t + 1)² − 0.5eᵗ to the IVP

y′ = f(t, y) = y − t² + 1,  0 ≤ t ≤ 2,  y(0) = 0.5.

The numerical results of Parts (a) and (b) are shown in the following table.

### R-K Methods of Order Three

The function T^{(3)}(t, y) can be approximated by

f(t + α1, y + δ1·f(t + α2, y + δ2·f(t, y))),

involving 4 parameters to be determined.

Heun's Method (an R-K Method of Order 3)

w0 = α,
wi+1 = wi + (h/4)·[f(ti, wi) + 3·f(ti + 2h/3, wi + (2h/3)·f(ti + h/3, wi + (h/3)·f(ti, wi)))]

for each i = 0, 1, . . . , N − 1.

### R-K Methods of Order Three (Conti'd)

Algorithm: Heun's Method (Order Three)

w0 = α,
k1 = h·f(ti, wi),
k2 = h·f(ti + h/3, wi + (1/3)·k1),
k3 = h·f(ti + 2h/3, wi + (2/3)·k2),
wi+1 = wi + (1/4)·(k1 + 3·k3),

for each i = 0, 1, . . . , N − 1.
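A Python sketch of Heun's third-order method, following the k1, k2, k3 steps above (the name `heun3` is mine):

```python
# Heun's third-order Runge-Kutta method.
def heun3(f, a, b, N, alpha):
    h, w, out = (b - a) / N, alpha, [alpha]
    for i in range(N):
        t = a + i * h
        k1 = h * f(t, w)                       # slope at t_i
        k2 = h * f(t + h/3, w + k1/3)          # slope one third in
        k3 = h * f(t + 2*h/3, w + 2*k2/3)      # slope two thirds in
        w = w + (k1 + 3*k3) / 4
        out.append(w)
    return out
```

Applied to the running example (f(t, y) = y − t² + 1, h = 0.2), the first step gives w1 ≈ 0.8292444, already close to y(0.2) ≈ 0.8292986.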

Comments on R-K Methods of Order 3

Three function evaluations of f are required per step.

The LTE at each step is O(h³).

R-K methods of order 3 are NOT generally used in practice!

Example for Heun's Method

Apply Heun's Method with N = 10 to compute the approximations wi (i = 0, 1, . . . , N) of the unique solution y(t) = (t + 1)² − 0.5eᵗ to the IVP

y′ = f(t, y) = y − t² + 1,  0 ≤ t ≤ 2,  y(0) = 0.5.

The numerical results of Heun's method with N = 10 and h = 0.2 are shown in the following table.

### R-K Methods of Order Four

Algorithm 5.2: Runge-Kutta (Order Four)

w0 = α,
k1 = h·f(ti, wi),
k2 = h·f(ti + h/2, wi + (1/2)·k1),
k3 = h·f(ti + h/2, wi + (1/2)·k2),
k4 = h·f(ti+1, wi + k3),
wi+1 = wi + (1/6)·(k1 + 2·k2 + 2·k3 + k4),

for each i = 0, 1, . . . , N − 1.
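Algorithm 5.2, the classical fourth-order Runge-Kutta method, can be sketched in Python as follows (the function name `rk4` is mine):

```python
# Classical fourth-order Runge-Kutta method (Algorithm 5.2).
def rk4(f, a, b, N, alpha):
    h, w, out = (b - a) / N, alpha, [alpha]
    for i in range(N):
        t = a + i * h
        k1 = h * f(t, w)
        k2 = h * f(t + h/2, w + k1/2)
        k3 = h * f(t + h/2, w + k2/2)
        k4 = h * f(t + h, w + k3)
        w = w + (k1 + 2*k2 + 2*k3 + k4) / 6
        out.append(w)
    return out
```

For Example 3 below, `rk4(lambda t, y: y - t*t + 1, 0.0, 2.0, 10, 0.5)` gives w1 ≈ 0.8292933, agreeing with the exact value y(0.2) ≈ 0.8292986 to about five digits.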

Comments on R-K Methods of Order 4

Four function evaluations of f are required per step.

The LTE at each step is O(h⁴).

These methods are the most commonly used for solving IVPs in practice!

Example 3, p. 289

Apply the Runge-Kutta Method of Order 4 with N = 10 to compute the approximations wi (i = 0, 1, . . . , N) of the unique solution y(t) = (t + 1)² − 0.5eᵗ to the IVP

y′ = f(t, y) = y − t² + 1,  0 ≤ t ≤ 2,  y(0) = 0.5.

Numerical results for the R-K method of order 4 are shown in the following table.

For R-K methods, the relationship between the number of function evaluations per step and the order of the LTE is shown in the following table.

Reference

J. C. Butcher, The non-existence of ten-stage eighth-order explicit Runge-Kutta methods, BIT, Vol. 25, pp. 521–542, 1985.

Comparisons Between R-K Methods

If the R-K method of order 4 is to be superior to Euler's method, it should give more accuracy with step size h than Euler's method with step size h/4.

If the R-K method of order 4 is to be superior to the R-K method of order 2, it should give more accuracy with step size h than a second-order method with step size h/2.

### Illustration

For the same IVP as before,

y′ = f(t, y) = y − t² + 1,  0 ≤ t ≤ 2,  y(0) = 0.5,

we apply Euler's method with h = 0.1/4 = 0.025, the Modified Euler method with h = 0.1/2 = 0.05, and the R-K method of order 4 with h = 0.1, so that each method uses the same number of function evaluations. The numerical results are given as follows.
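This equal-cost comparison can be reproduced in a self-contained sketch: each method below uses 80 evaluations of f on [0, 2] (the driver `solve` and all names are my own):

```python
import math

# Equal-cost comparison at t = 2: Euler (h = 0.025), Modified Euler
# (h = 0.05), and classical RK4 (h = 0.1) each use 80 evaluations of f.
f = lambda t, y: y - t*t + 1
y_exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)

def solve(step, a, b, N, alpha):
    """Drive a one-step method: `step` maps (t, w, h) to the next w."""
    h, w = (b - a) / N, alpha
    for i in range(N):
        w = step(a + i * h, w, h)
    return w

euler_step = lambda t, w, h: w + h * f(t, w)
mod_euler_step = lambda t, w, h: w + (h/2) * (f(t, w) + f(t + h, w + h * f(t, w)))

def rk4_step(t, w, h):
    k1 = h * f(t, w)
    k2 = h * f(t + h/2, w + k1/2)
    k3 = h * f(t + h/2, w + k2/2)
    k4 = h * f(t + h, w + k3)
    return w + (k1 + 2*k2 + 2*k3 + k4) / 6

errs = {
    "Euler, h=0.025":     abs(solve(euler_step, 0.0, 2.0, 80, 0.5) - y_exact(2.0)),
    "Mod. Euler, h=0.05": abs(solve(mod_euler_step, 0.0, 2.0, 40, 0.5) - y_exact(2.0)),
    "RK4, h=0.1":         abs(solve(rk4_step, 0.0, 2.0, 20, 0.5) - y_exact(2.0)),
}
```

At the same cost, the final errors decrease sharply with the order of the method, which is the point of the illustration.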

Section 5.4 Selected Exercises

## Thank you for your attention!
