7.8 APPLICATIONS OF TAYLOR SERIES

In section 7.7, we developed the concept of a Taylor series expansion and gave many illustrations of how to compute Taylor series expansions. We also gave a few hints as to how these expansions might be used. In this section, we expand on our earlier presentation by giving a few examples of how Taylor series are used in applications. You probably recognize that the work in the preceding section was challenging. The good news is that your hard work has significant payoffs, as we illustrate with the following problems. It's worth noting that many of the early practitioners of the calculus (including Newton and Leibniz) worked actively with series.

In this section, we will use series to approximate the values of transcendental functions, evaluate limits and integrals and define important new functions. As you continue your studies in mathematics and related fields, you are likely to see far more applications of Taylor series than we can include here.

You may have wondered how calculators and computers calculate values of transcendental functions, like sin (1.234567). We can now use Taylor series to do so, using only basic arithmetic operations.

EXAMPLE 8.1 Using Taylor Polynomials to Approximate a Sine Value

Use a Taylor series to compute sin (1.234567) accurate to within $10^{-11}$.

Solution It’s not hard to find the Taylor series expansion for f (x)= sin x about x = 0.

(We left this as an exercise in section 7.7.) We have

$$\sin x = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\, x^{2k+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots,$$

where the interval of convergence is $(-\infty, \infty)$. Notice that if we take x = 1.234567, the series representation of sin 1.234567 is

$$\sin 1.234567 = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\,(1.234567)^{2k+1},$$

which is an alternating series. We can use a partial sum of this series to approximate the desired value, but just how accurate will a given partial sum be? Recall that for alternating series, the error in a partial sum is bounded by the absolute value of the first neglected term. (Note that you could also use the remainder term from Taylor's Theorem to bound the error.) To ensure that the error is less than $10^{-11}$, we must find an integer k such that

$$\frac{1.234567^{2k+1}}{(2k+1)!} < 10^{-11}.$$

By trial and error, we find that

$$\frac{1.234567^{17}}{17!} \approx 1.010836 \times 10^{-13} < 10^{-11},$$

so that k = 8 will do. Observe that this says that the first neglected term corresponds to k = 8 (the term with exponent 2k + 1 = 17) and so, we compute the partial sum

$$\sin 1.234567 \approx \sum_{k=0}^{7} \frac{(-1)^k}{(2k+1)!}\,(1.234567)^{2k+1} = 1.234567 - \frac{1.234567^3}{3!} + \frac{1.234567^5}{5!} - \frac{1.234567^7}{7!} + \cdots - \frac{1.234567^{15}}{15!} \approx 0.94400543137.$$

Check your calculator or computer to verify that this matches your calculator’s estimate. 
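The stopping rule used in example 8.1, summing until the first neglected term falls below the tolerance, can be sketched in Python. This is our own illustration (the function name and tolerance parameter are not from the text):

```python
from math import factorial, sin

def maclaurin_sin(x, tol=1e-11):
    """Sum the Maclaurin series for sin x until the first neglected
    term is smaller than tol; the alternating series estimate then
    bounds the error by that neglected term."""
    total, k = 0.0, 0
    while True:
        term = (-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
        if abs(term) < tol:      # first neglected term: stop here
            return total
        total += term
        k += 1

approx = maclaurin_sin(1.234567)
# Stops with k = 8 neglected, summing k = 0..7, just as in the text.
print(abs(approx - sin(1.234567)) < 1e-11)
```

Running this confirms the partial sum through k = 7 already agrees with the library value of sin 1.234567 to within the stated tolerance.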

If you look carefully at example 8.1, you might discover that we were a bit hasty. Certainly, we answered the question and produced an approximation with the desired accuracy, but was this the easiest way to do so? The answer is no, as we simply grabbed the most handy Taylor series expansion of f (x) = sin x. You should try to resist the impulse to automatically use the Taylor series expansion about x = 0 (i.e., the Maclaurin series), rather than making a more efficient choice. We illustrate this in example 8.2.

EXAMPLE 8.2 Choosing a More Appropriate Taylor Series Expansion

Repeat example 8.1, but this time, make a more appropriate choice of the Taylor series.

Solution Recall from our discussion in section 7.7 that Taylor series converge much faster close to the point about which you expand than they do far away. So, if we need to compute sin 1.234567, is there a handy Taylor series expansion of f (x) = sin x about some point closer to x = 1.234567? Keeping in mind that we know the value of sin x exactly at only a few points, you should quickly recognize that a series expanded about x = π/2 ≈ 1.57 is a better choice than one expanded about x = 0. (Another reasonable choice is the Taylor series expansion about x = π/3.) In example 7.5, recall that we had found that

$$\sin x = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\left(x - \frac{\pi}{2}\right)^{2k} = 1 - \frac{1}{2}\left(x - \frac{\pi}{2}\right)^2 + \frac{1}{4!}\left(x - \frac{\pi}{2}\right)^4 - \cdots,$$

where the interval of convergence is $(-\infty, \infty)$. Taking x = 1.234567 gives us

$$\sin 1.234567 = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\left(1.234567 - \frac{\pi}{2}\right)^{2k} = 1 - \frac{1}{2}\left(1.234567 - \frac{\pi}{2}\right)^2 + \frac{1}{4!}\left(1.234567 - \frac{\pi}{2}\right)^4 - \cdots,$$

which is again an alternating series. Using the remainder term from Taylor's Theorem to bound the error, we have that

$$|R_n(1.234567)| = \left|\frac{f^{(2n+2)}(z)}{(2n+2)!}\right| \left|1.234567 - \frac{\pi}{2}\right|^{2n+2} \le \frac{\left|1.234567 - \frac{\pi}{2}\right|^{2n+2}}{(2n+2)!},$$

since every derivative of f (x) = sin x is bounded by 1 in absolute value.

(Note that we might also have chosen to use Theorem 4.2.) By trial and error, you can find that

$$\frac{\left|1.234567 - \frac{\pi}{2}\right|^{2n+2}}{(2n+2)!} < 10^{-11}$$

for n = 4, so that an approximation with the required degree of accuracy is

$$\sin 1.234567 \approx \sum_{k=0}^{4} \frac{(-1)^k}{(2k)!}\left(1.234567 - \frac{\pi}{2}\right)^{2k} = 1 - \frac{1}{2}\left(1.234567 - \frac{\pi}{2}\right)^2 + \frac{1}{4!}\left(1.234567 - \frac{\pi}{2}\right)^4 - \frac{1}{6!}\left(1.234567 - \frac{\pi}{2}\right)^6 + \frac{1}{8!}\left(1.234567 - \frac{\pi}{2}\right)^8 \approx 0.94400543137.$$

Compare this result to example 8.1, where we needed to compute many more terms of the Taylor series to obtain the same degree of accuracy. 
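The savings from expanding about π/2 is easy to check directly. The Python sketch below (our own helper, not from the text) sums the series of example 8.2 through k = n and confirms that n = 4 already achieves the required accuracy:

```python
from math import factorial, pi, sin

def sin_about_half_pi(x, n):
    """Partial sum (through k = n) of the Taylor series for sin x
    expanded about pi/2: sum of (-1)^k (x - pi/2)^(2k) / (2k)!."""
    h = x - pi / 2
    return sum((-1) ** k * h ** (2 * k) / factorial(2 * k)
               for k in range(n + 1))

# With n = 4 (five terms) the error bound |h|^10/10! is below 1e-11.
approx = sin_about_half_pi(1.234567, 4)
print(abs(approx - sin(1.234567)) < 1e-11)
```

Five terms here match the accuracy that required eight terms of the Maclaurin series in example 8.1.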

We can also use Taylor series to quickly conjecture the value of a difficult limit.

Be careful, though: the theory of when these conjectures are guaranteed to be correct is beyond the level of this text. However, we can certainly obtain helpful hints about certain limits.

EXAMPLE 8.3 Using Taylor Polynomials to Conjecture the Value of a Limit

Use Taylor series to conjecture the value of

$$\lim_{x \to 0} \frac{\sin(x^3) - x^3}{x^9}.$$

Solution Again recall that the Maclaurin series for sin x is

$$\sin x = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\, x^{2k+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots,$$

where the interval of convergence is $(-\infty, \infty)$. Substituting $x^3$ for x gives us

$$\sin(x^3) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\,(x^3)^{2k+1} = x^3 - \frac{x^9}{3!} + \frac{x^{15}}{5!} - \cdots.$$

This gives us

$$\frac{\sin(x^3) - x^3}{x^9} = \frac{\left(x^3 - \dfrac{x^9}{3!} + \dfrac{x^{15}}{5!} - \cdots\right) - x^3}{x^9} = -\frac{1}{3!} + \frac{x^6}{5!} - \cdots$$

and so, we conjecture that

$$\lim_{x \to 0} \frac{\sin(x^3) - x^3}{x^9} = -\frac{1}{3!} = -\frac{1}{6}.$$

You can verify that this limit is correct using l’Hôpital’s Rule (three times, simplifying each time). 
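A quick numerical probe also supports the conjecture in example 8.3. The sketch below (ours, not the text's) evaluates the quotient at x = 0.1; much smaller values of x would suffer catastrophic cancellation in floating point, since the numerator is the difference of nearly equal quantities:

```python
from math import sin

# Probe lim_{x->0} (sin(x^3) - x^3) / x^9, conjectured to be -1/6.
# From the series, the quotient equals -1/6 + x^6/120 - ..., so at
# x = 0.1 it should differ from -1/6 by only about 8e-9.
x = 0.1
ratio = (sin(x ** 3) - x ** 3) / x ** 9
print(abs(ratio + 1 / 6) < 1e-6)   # True
```

This is only a sanity check, of course; the series computation above is what justifies the conjecture.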

So, for what else can Taylor series be used? There are many answers to this question, but this next one is quite useful. Since Taylor polynomials are used to approximate functions on a given interval and since there's nothing easier to integrate than a polynomial, we consider using a Taylor polynomial approximation to produce an approximation of a definite integral. It turns out that such an approximation is often better than that obtained from the numerical methods developed in section 4.9. We illustrate this in example 8.4.

EXAMPLE 8.4 Using Taylor Series to Approximate a Definite Integral

Use a Taylor polynomial with n = 8 to approximate

$$\int_{-1}^{1} \cos(x^2)\, dx.$$

Solution Note that you do not know an antiderivative of cos (x2) and so, have no choice but to rely on a numerical approximation of the value of the integral. Since you are integrating on the interval (−1, 1), a Maclaurin series expansion (i.e., a Taylor series expansion about x= 0) is a good choice. It’s a simple matter to show that

$$\cos x = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\, x^{2k} = 1 - \frac{1}{2}x^2 + \frac{1}{4!}x^4 - \frac{1}{6!}x^6 + \cdots,$$

which converges on all of (−∞, ∞). Replacing x by x2 gives us the Taylor series expansion

$$\cos(x^2) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\, x^{4k} = 1 - \frac{1}{2}x^4 + \frac{1}{4!}x^8 - \frac{1}{6!}x^{12} + \cdots,$$

so that

$$\cos(x^2) \approx 1 - \frac{1}{2}x^4 + \frac{1}{4!}x^8.$$

This leads us to the approximation

$$\int_{-1}^{1} \cos(x^2)\, dx \approx \int_{-1}^{1} \left(1 - \frac{1}{2}x^4 + \frac{1}{4!}x^8\right) dx = \left[x - \frac{x^5}{10} + \frac{x^9}{216}\right]_{x=-1}^{x=1} = \frac{977}{540} \approx 1.809259.$$

Our CAS gives us $\int_{-1}^{1} \cos(x^2)\, dx \approx 1.809048$, so our approximation appears to be very accurate.
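The term-by-term integration in example 8.4 can be reproduced exactly with rational arithmetic: each Taylor term $(-1)^k x^{4k}/(2k)!$ integrates over $[-1, 1]$ to $2(-1)^k/\big((4k+1)(2k)!\big)$. A brief Python sketch (the function name is ours):

```python
from fractions import Fraction
from math import factorial

def cos_sq_integral(n_terms):
    """Exact integral over [-1, 1] of the first n_terms Taylor terms
    of cos(x^2): sum of 2*(-1)^k / ((4k+1)*(2k)!) as a Fraction."""
    return sum(Fraction(2 * (-1) ** k, (4 * k + 1) * factorial(2 * k))
               for k in range(n_terms))

approx = cos_sq_integral(3)   # three terms: the degree-8 polynomial of the text
print(approx)                 # 977/540
print(float(approx))          # ≈ 1.809259
```

Using exact fractions makes it transparent where the value 977/540 comes from: $2 - \tfrac{1}{5} + \tfrac{1}{108}$.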

You might reasonably argue that we don’t need Taylor series to obtain approximations like those in example 8.4, as you could always use other, simpler numerical methods like Simpson’s Rule to do the job. That’s often true, but just try to use Simpson’s Rule on the integral in example 8.5.

EXAMPLE 8.5 Using Taylor Series to Approximate the Value of an Integral

Use a Taylor polynomial with n = 5 to approximate

$$\int_{-1}^{1} \frac{\sin x}{x}\, dx.$$

Solution Note that you do not know an antiderivative of $\frac{\sin x}{x}$. Further, observe that the integrand is discontinuous at x = 0. However, this does not need to be treated as an improper integral, since $\lim_{x \to 0} \frac{\sin x}{x} = 1$. (This says that the integrand has a removable discontinuity at x = 0.) From the first few terms of the Maclaurin series for f (x) = sin x, we have the Taylor polynomial approximation

$$\sin x \approx x - \frac{x^3}{3!} + \frac{x^5}{5!}, \quad \text{so that} \quad \frac{\sin x}{x} \approx 1 - \frac{x^2}{3!} + \frac{x^4}{5!}.$$

Notice that since this is a polynomial, it is simple to integrate. Consequently,

$$\int_{-1}^{1} \frac{\sin x}{x}\, dx \approx \int_{-1}^{1} \left(1 - \frac{x^2}{6} + \frac{x^4}{120}\right) dx = \left[x - \frac{x^3}{18} + \frac{x^5}{600}\right]_{x=-1}^{x=1} = \left(1 - \frac{1}{18} + \frac{1}{600}\right) - \left(-1 + \frac{1}{18} - \frac{1}{600}\right) = \frac{1703}{900} \approx 1.8922.$$

Our CAS gives us $\int_{-1}^{1} \frac{\sin x}{x}\, dx \approx 1.89216$, so our approximation is quite good. On the other hand, if you try to apply Simpson's Rule or the Trapezoidal Rule, the algorithm will not work, as they will attempt to evaluate $\frac{\sin x}{x}$ at x = 0. (Most graphing calculators and some computer algebra systems also fail to give an answer here, due to the division by zero at x = 0.)
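As in example 8.4, the integration in example 8.5 can be carried out exactly: each term $(-1)^k x^{2k}/(2k+1)!$ of $\frac{\sin x}{x}$ integrates over $[-1, 1]$ to $2(-1)^k/\big((2k+1)(2k+1)!\big)$. A Python sketch of our own:

```python
from fractions import Fraction
from math import factorial

def sinc_integral(n_terms):
    """Exact integral over [-1, 1] of the first n_terms Taylor terms
    of sin(x)/x: sum of 2*(-1)^k / ((2k+1)*(2k+1)!) as a Fraction."""
    return sum(Fraction(2 * (-1) ** k, (2 * k + 1) * factorial(2 * k + 1))
               for k in range(n_terms))

approx = sinc_integral(3)   # three terms: the degree-4 polynomial of the text
print(approx)               # 1703/900
```

No special handling of x = 0 is needed here, since the series form of the integrand has no division at all.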

While you have now calculated Taylor series expansions of many familiar functions, many other functions are actually defined by a power series. These include many functions in the very important class of special functions that frequently arise in physics and engineering applications. These functions cannot be written in terms of elementary functions (the algebraic, trigonometric, exponential and logarithmic functions with which you are familiar) and are only known from their series definitions. Among the more important special functions are the Bessel functions, which are used in the study of fluid motion, acoustics, wave propagation and other areas of applied mathematics. The Bessel function of order p is defined by the power series

$$J_p(x) = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+p}}{2^{2k+p}\, k!\,(k+p)!}, \qquad (8.1)$$

for nonnegative integers p. You might find it surprising that we define a function by a power series expansion, but in fact, this is very common. In particular, in the process of solving differential equations, we often derive the solution as a series. As it turns out, most of these series solutions are not elementary functions. Specifically, Bessel functions arise in the solution of the differential equation $x^2 y'' + x y' + (x^2 - p^2)y = 0$. In examples 8.6 and 8.7, we explore several interesting properties of Bessel functions.
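Definition (8.1) is straightforward to evaluate numerically by summing terms of the series. The sketch below (our own helper, not part of the text) does so in Python; for moderate x, a few dozen terms are far more than double precision requires:

```python
from math import factorial

def bessel_j(p, x, n_terms=30):
    """Partial sum of the defining series (8.1) for J_p(x):
    sum of (-1)^k x^(2k+p) / (2^(2k+p) k! (k+p)!) for k = 0..n_terms-1."""
    return sum((-1) ** k * x ** (2 * k + p)
               / (2 ** (2 * k + p) * factorial(k) * factorial(k + p))
               for k in range(n_terms))

print(bessel_j(0, 0.0))                    # 1.0: only the k = 0 term survives
print(abs(bessel_j(0, 2.404826)) < 1e-4)   # True: near the first zero of J_0
```

The check at x ≈ 2.404826 anticipates example 8.7, where the zeros of $J_0$ and $J_1$ are examined.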

EXAMPLE 8.6 The Radius of Convergence of a Bessel Function

Find the radius of convergence for the series defining the Bessel function J0(x).

Solution From equation (8.1) with p = 0, we have

$$J_0(x) = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{2^{2k}(k!)^2}.$$

Observe that the Ratio Test gives us

$$\lim_{k \to \infty} \left|\frac{a_{k+1}}{a_k}\right| = \lim_{k \to \infty} \left|\frac{x^{2k+2}}{2^{2k+2}[(k+1)!]^2} \cdot \frac{2^{2k}(k!)^2}{x^{2k}}\right| = \lim_{k \to \infty} \frac{x^2}{4(k+1)^2} = 0 < 1,$$

for all x. The series then converges absolutely for all x and so, the radius of convergence is ∞.

In example 8.7, we explore an interesting relationship between the zeros of two Bessel functions.

EXAMPLE 8.7 The Zeros of Bessel Functions

Verify graphically that on the interval [0, 10], the zeros of J0(x) and J1(x) alternate.

Solution Unless you have a CAS with these Bessel functions available as built-in functions, you will need to graph partial sums of the defining series:

$$J_0(x) \approx \sum_{k=0}^{n} \frac{(-1)^k x^{2k}}{2^{2k}(k!)^2} \quad \text{and} \quad J_1(x) \approx \sum_{k=0}^{n} \frac{(-1)^k x^{2k+1}}{2^{2k+1}\,k!\,(k+1)!}.$$

Before graphing these, you must first determine how large n should be in order to produce a reasonable graph. Notice that for each fixed x > 0, both of the defining series are alternating series. Consequently, the error in using a partial sum to approximate the function is bounded by the first neglected term. That is,

$$\left|J_0(x) - \sum_{k=0}^{n} \frac{(-1)^k x^{2k}}{2^{2k}(k!)^2}\right| \le \frac{x^{2n+2}}{2^{2n+2}[(n+1)!]^2}$$

and

$$\left|J_1(x) - \sum_{k=0}^{n} \frac{(-1)^k x^{2k+1}}{2^{2k+1}\,k!\,(k+1)!}\right| \le \frac{x^{2n+3}}{2^{2n+3}(n+1)!\,(n+2)!},$$

with the largest error in each occurring at x = 10. Notice that for n = 12, we have that

$$\left|J_0(x) - \sum_{k=0}^{12} \frac{(-1)^k x^{2k}}{2^{2k}(k!)^2}\right| \le \frac{x^{2(12)+2}}{2^{2(12)+2}[(12+1)!]^2} \le \frac{10^{26}}{2^{26}(13!)^2} < 0.04$$

and

$$\left|J_1(x) - \sum_{k=0}^{12} \frac{(-1)^k x^{2k+1}}{2^{2k+1}\,k!\,(k+1)!}\right| \le \frac{x^{2(12)+3}}{2^{2(12)+3}(12+1)!\,(12+2)!} \le \frac{10^{27}}{2^{27}(13!)(14!)} < 0.04.$$

Consequently, using a partial sum with n = 12 will result in approximations that are within 0.04 of the correct value for each x in the interval [0, 10]. This is plenty of accuracy for our present purposes. Figure 7.43 shows graphs of partial sums with n = 12 for J0(x) and J1(x).

FIGURE 7.43
y = J0(x) and y = J1(x).

Notice that J1(0) = 0 and, in the figure, you can clearly see that J0(x) = 0 at about x = 2.4, J1(x) = 0 at about x = 3.9, J0(x) = 0 at about x = 5.6, J1(x) = 0 at about x = 7.0 and J0(x) = 0 at about x = 8.8. From this, it is now apparent that the zeros of J0(x) and J1(x) do indeed alternate on the interval [0, 10].
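If no graphing tool is at hand, the alternation of zeros in example 8.7 can also be checked numerically by scanning the n = 12 partial sums for sign changes. This is a rough sketch of our own; the grid spacing and helper names are assumptions, not from the text:

```python
from math import factorial

def bessel_partial(p, x, n=12):
    """Partial sum (k = 0..n) of series (8.1); for n = 12 this is
    accurate to within 0.04 everywhere on [0, 10], as shown above."""
    return sum((-1) ** k * x ** (2 * k + p)
               / (2 ** (2 * k + p) * factorial(k) * factorial(k + p))
               for k in range(n + 1))

def sign_changes(p, lo=0.1, hi=10.0, steps=1000):
    """Approximate zeros: midpoints of grid intervals where the
    partial sum changes sign (lo = 0.1 skips the zero of J_1 at 0)."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    vals = [bessel_partial(p, x) for x in xs]
    return [round((a + b) / 2, 1)
            for a, b, u, v in zip(xs, xs[1:], vals, vals[1:]) if u * v < 0]

z0, z1 = sign_changes(0), sign_changes(1)
print(z0)   # three zeros of J_0 on (0, 10]
print(z1)   # two zeros of J_1 on (0, 10]
```

Merging the two lists and checking that consecutive entries come from different functions confirms the alternation seen in Figure 7.43.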

It turns out that the result of example 8.7 generalizes to any interval of positive numbers and any two Bessel functions of consecutive order. That is, between consecutive zeros of Jp(x) is a zero of Jp+1(x) and between consecutive zeros of Jp+1(x) is a zero of Jp(x). We explore this further in the exercises.

EXERCISES 7.8

WRITING EXERCISES

1. In example 8.2, we showed that an expansion about x = π/2 is more accurate for approximating sin (1.234567) than an expansion about x = 0 with the same number of terms. Explain why an expansion about x = 1.2 would be even more efficient, but is not practical.

2. Assuming that you don’t need to rederive the Maclaurin series
