

1.13 Fourier Series in Action

1.13.1 Hot enough for ya?

The study of how temperature varies over a region was the first use by Fourier in the 1820’s of the method of expanding a function into a series of trigonometric functions. The physical phenomenon is described, at least approximately, by a partial differential equation, and Fourier series can be used to write down solutions.

We’ll give a brief, standard derivation of the differential equation in one spatial dimension, so the configuration to think of is a one-dimensional rod. The argument involves a number of common but difficult, practically undefined terms, first among them the term “heat”, followed closely by the term “temperature”.

As it is usually stated, heat is a transfer of “energy” (another undefined term, thank you) due to temperature difference; the transfer process is called “heat”. What gets transferred is energy. Because of this, heat is usually identified as a form of energy and has units of energy. We talk of heat as a ‘transfer of energy’, and hence of ‘heat flow’, because, like so many other physical quantities, heat is only interesting if it’s associated with a change. Temperature, more properly called “thermodynamic temperature” (formerly “absolute temperature”), is a derived quantity. The temperature of a substance is proportional to the kinetic energy of the atoms in the substance.19 A substance at temperature 0 (absolute zero) cannot transfer energy — it’s not “hot”. The principle at work, essentially stated by Newton, is:

A temperature difference between two substances in contact with each other causes a transfer of energy from the substance of higher temperature to the substance of lower temperature, and that’s heat, or heat flow. No temperature difference, no heat.

Back to the rod. The temperature is a function of both the spatial variable x giving the position along the rod and of the time t. We let u(x, t) denote the temperature, and the problem is to find it. The description of heat, just above, with a little amplification, is enough to propose a partial differential equation that u(x, t) should satisfy.20 To derive it, we introduce q(x, t), the amount of heat that “flows” per second at x and t (so q(x, t) is the rate at which energy is transferred at x and t). Newton’s law of cooling says that this is proportional to the gradient of the temperature:

q(x, t) = −k u_x(x, t) ,   k > 0 .

The reason for the minus sign is that if u_x(x, t) > 0, i.e., if the temperature is increasing at x, then the rate at which heat flows at x is negative — from hotter to colder, hence back from x. The constant k can be identified with the reciprocal of “thermal resistance” of the substance. For a given temperature gradient, the higher the resistance the smaller the heat flow per second, and similarly the smaller the resistance the greater the heat flow per second.

As the heat flows from hotter to colder, the temperature rises in the colder part of the substance. The rate at which the temperature rises at x, given by u_t(x, t), is proportional to the rate at which heat “accumulates” per unit length. Now q(x, t) is already a rate — the heat flow per second — so the rate at which heat accumulates per unit length is the rate in minus the rate out per length, which is (if the heat is flowing from left to right)

(q(x, t) − q(x + ∆x, t)) / ∆x .

Thus, in the limit as ∆x → 0,

u_t(x, t) = −k′ q_x(x, t) ,   k′ > 0 .

The constant k′ can be identified with the reciprocal of the “thermal capacity” per unit length. Thermal resistance and thermal capacity are not the standard terms, but they can be related to standard terms, e.g., specific heat. They are used here because of the similarity of heat flow to electrical phenomena — see the discussion of the mathematical analysis of telegraph cables, below.

Next, differentiate the first equation with respect to x to get

q_x(x, t) = −k u_{xx}(x, t) ,

and substitute this into the second equation to obtain an equation involving u(x, t) alone:

u_t(x, t) = k k′ u_{xx}(x, t) .

This is the heat equation.
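The derivation also tells you how to simulate heat flow directly. Here is a minimal finite-difference sketch (the constants, the grid, and the spike initial condition are made-up illustrative choices, not anything from the notes):

import numpy as np

# Explicit finite-difference sketch of u_t = k*kp*u_xx on a rod.
# k = 1/(thermal resistance), kp = 1/(thermal capacity per unit length), as in the derivation;
# the numerical values here are arbitrary.
k, kp = 1.0, 1.0
N, dx = 100, 0.01                  # grid points and spacing along the rod
dt = 0.4 * dx**2 / (k * kp)        # small enough for this explicit scheme to be stable

u = np.zeros(N)
u[N // 2] = 1.0                    # initial temperature: a spike in the middle

for _ in range(500):
    uxx = np.zeros(N)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2   # second central difference
    u = u + dt * k * kp * uxx      # ends of the rod held at temperature 0

print(u.max())                     # the spike has spread out and dropped

The spike spreads and flattens, which is exactly the smoothing behavior the Fourier series solution below makes quantitative.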

To summarize, in whatever particular context it’s applied, the setup for a problem based on the heat equation involves:

19With this (partial) definition the unit of temperature is the Kelvin.

20This follows Bracewell’s presentation.


• A region in space.

• An initial distribution of temperature on that region.

It’s natural to think of fixing one of the variables and letting the other change. Then the solution u(x, t) tells you

• For each fixed time t how the temperature is distributed on the region.

• At each fixed point x how the temperature is changing over time.

We want to look at two examples of using Fourier series to solve such a problem: heat flow on a circle and, more dramatically, the temperature of the earth. These are nice examples because they show different aspects of how the methods can be applied and, as mentioned above, they exhibit forms of solutions, especially for the circle problem, of a type we’ll see frequently.

Why a circle, why the earth — and why Fourier methods? Because in each case the function u(x, t) will be periodic in one of the variables. In one case we work with periodicity in space and in the other periodicity in time.

Heating a circle

Suppose a circle is heated up, not necessarily uniformly. This provides an initial distribution of temperature. Heat then flows around the circle and the temperature changes over time. At any fixed time the temperature must be a periodic function of the position on the circle, for if we specify points on the circle by an angle θ then the temperature, as a function of θ, is the same at θ and at θ + 2π, since these are the same points.

We can imagine a circle as an interval with the endpoints identified, say the interval 0 ≤ x ≤ 1, and we let u(x, t) be the temperature as a function of position and time. Our analysis will be simplified if we choose units so the heat equation takes the form

u_t = (1/2) u_{xx} ,

that is, so the constant depending on physical attributes of the wire is 1/2. The function u(x, t) is periodic in the spatial variable x with period 1, i.e., u(x + 1, t) = u(x, t), and we can try expanding it as a Fourier series with coefficients that depend on time:

u(x, t) = Σ_{n=−∞}^{∞} c_n(t) e^{2πinx}   where   c_n(t) = ∫_0^1 e^{−2πinx} u(x, t) dx .

This representation of c_n(t) as an integral together with the heat equation for u(x, t) will allow us to find c_n(t) explicitly. Differentiate c_n(t) with respect to t by differentiating under the integral sign:

c_n′(t) = ∫_0^1 u_t(x, t) e^{−2πinx} dx .

Now using u_t = (1/2) u_{xx} we can write this as

c_n′(t) = ∫_0^1 (1/2) u_{xx}(x, t) e^{−2πinx} dx

and integrate by parts (twice) to get the derivatives off of u (the function we don’t know) and put them onto e^{−2πinx} (which we can certainly differentiate). Using the facts that e^{−2πin} = 1 and u(0, t) = u(1, t) (both of which come in when we plug in the limits of integration when integrating by parts) we get

c_n′(t) = ∫_0^1 (1/2) u(x, t) (d²/dx²) e^{−2πinx} dx
        = ∫_0^1 (1/2) u(x, t) (−4π²n²) e^{−2πinx} dx
        = −2π²n² ∫_0^1 u(x, t) e^{−2πinx} dx = −2π²n² c_n(t) .

We have found that c_n(t) satisfies a simple ordinary differential equation

c_n′(t) = −2π²n² c_n(t) ,

whose solution is

c_n(t) = c_n(0) e^{−2π²n²t} .

The solution involves the initial value c_n(0) and, in fact, this initial value should be, and will be, incorporated into the formulation of the problem in terms of the initial distribution of heat.

At time t = 0 we assume that the temperature u(x, 0) is specified by some (periodic!) function f(x):

u(x, 0) = f(x) ,   f(x + 1) = f(x) for all x.

Then using the integral representation for c_n(t),

c_n(0) = ∫_0^1 u(x, 0) e^{−2πinx} dx = ∫_0^1 f(x) e^{−2πinx} dx = f̂(n) ,

the n-th Fourier coefficient of f! Thus we can write

c_n(t) = f̂(n) e^{−2π²n²t} ,

and the general solution of the heat equation is

u(x, t) = Σ_{n=−∞}^{∞} f̂(n) e^{−2π²n²t} e^{2πinx} .
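Here is a minimal numerical sketch of this solution (not from the notes: the grid, the initial temperature f, and the use of the FFT to approximate the coefficients f̂(n) are illustrative assumptions):

import numpy as np

# Evaluate u(x, t) = sum_n fhat(n) e^{-2 pi^2 n^2 t} e^{2 pi i n x} on a grid of [0, 1).
N = 256
x = np.arange(N) / N
f = np.exp(-100.0 * (x - 0.5)**2)          # some initial temperature distribution (made up)

fhat = np.fft.fft(f) / N                   # fhat[j] ~ integral of f(y) e^{-2 pi i n y} dy
n = np.fft.fftfreq(N, d=1.0 / N)           # the integer frequency n that goes with fhat[j]

def u(t):
    """Temperature on the grid at time t: damp each coefficient, then resum the series."""
    return np.real(np.fft.ifft(N * fhat * np.exp(-2.0 * np.pi**2 * n**2 * t)))

for t in (0.0, 0.001, 0.01, 0.1):
    print(t, u(t).max())                   # the profile flattens toward the mean value fhat(0)

The factor e^{−2π²n²t} kills the high harmonics almost immediately, which is why the temperature profile smooths out so quickly.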

This is a neat way of writing the solution and we could leave it at that, but for reasons we’re about to see it’s useful to bring back the integral definition of f̂(n) and write the expression differently.

Write the formula for f̂(n) as

f̂(n) = ∫_0^1 f(y) e^{−2πiny} dy .

(Don’t use x as the variable of integration since it’s already in use in the formula for u(x, t).) Then

u(x, t) = Σ_{n=−∞}^{∞} e^{−2π²n²t} e^{2πinx} ∫_0^1 f(y) e^{−2πiny} dy
        = ∫_0^1 ( Σ_{n=−∞}^{∞} e^{−2π²n²t} e^{2πin(x−y)} ) f(y) dy ,


or, with

g(x − y, t) = Σ_{n=−∞}^{∞} e^{−2π²n²t} e^{2πin(x−y)} ,

we have

u(x, t) = ∫_0^1 g(x − y, t) f(y) dy .

The function

g(x, t) = Σ_{n=−∞}^{∞} e^{−2π²n²t} e^{2πinx}

is called Green’s function, or the fundamental solution for the heat equation for a circle. Note that g is a periodic function of period 1 in the spatial variable. The expression for the solution u(x, t) is a convolution integral, a term you have probably heard from earlier classes, but new here. In words, u(x, t) is given by the convolution of the initial temperature f (x) with Green’s function g(x, t). This is a very important fact.

In general, whether or not there is extra time dependence as in the present case, the integral

∫_0^1 g(x − y) f(y) dy

is called the convolution of f and g. Observe that the integral makes sense only if g is periodic. That is, for a given x between 0 and 1 and for y varying from 0 to 1 (as the variable of integration) x − y will assume values outside the interval [0, 1]. If g were not periodic it wouldn’t make sense to consider g(x − y), but the periodicity is just what allows us to do that.
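To make the role of periodicity concrete, here is a small computational sketch (an assumed discretization, not part of the notes) of the periodic convolution on a grid of [0, 1); the wrap-around index is exactly the place the periodicity of g gets used.

import numpy as np

def circular_convolution(f, g):
    """Riemann-sum approximation of the integral over [0,1] of g(x - y) f(y) dy,
    with f and g sampled on the grid x_j = j/N and g treated as periodic."""
    N = len(f)
    out = np.zeros(N)
    for j in range(N):                        # x_j = j / N
        for m in range(N):                    # y_m = m / N
            out[j] += g[(j - m) % N] * f[m]   # (j - m) % N: x - y wraps back into [0, 1)
    return out / N                            # dy = 1 / N

def circular_convolution_fft(f, g):
    """Same thing via the FFT: convolution corresponds to multiplying Fourier coefficients."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))) / len(f)

f = np.arange(8, dtype=float)
g = np.exp(-np.arange(8, dtype=float))
print(np.allclose(circular_convolution(f, g), circular_convolution_fft(f, g)))   # True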

To think more in EE terms, if you know the terminology coming from linear systems, the Green’s function g(x, t) is the “impulse response” associated with the linear system “heat flow on a circle”, meaning

• Inputs go in: the initial heat distribution f (x).

• Outputs come out: the temperature u(x, t).

• Outputs are given by the convolution of g with the input:

u(x, t) = ∫_0^1 g(x − y, t) f(y) dy .

Convolutions occur absolutely everywhere in Fourier analysis and we’ll be spending a lot of time with them this quarter. In fact, an important result states that convolutions must occur in relating outputs to inputs for linear time invariant systems. We’ll see this later.

In our example, as a formula for the solution, the convolution may be interpreted as saying that for each time t the temperature u(x, t) at a point x is a kind of smoothed average of the initial temperature distribution f (x). In other settings a convolution integral may have different interpretations.

Heating the earth, storing your wine

The wind blows, the rain falls, and the temperature at any particular place on earth changes over the course of a year. Let’s agree that the way the temperature varies is pretty much the same year after year, so that the temperature at any particular place on earth is roughly a periodic function of time, where the period is 1 year. What about the temperature x meters under that particular place? How does the temperature depend on x and t?21

21This example is taken from Fourier Series and Integrals by H. Dym & H. McKean, who credit Sommerfeld.

Fix a place on earth and let u(x, t) denote the temperature x meters underground at time t. We assume again that u satisfies the heat equation, u_t = (1/2) u_{xx}. This time we try a solution of the form

u(x, t) = Σ_{n=−∞}^{∞} c_n(x) e^{2πint} ,

reflecting the periodicity in time.

Again we have an integral representation of c_n(x) as a Fourier coefficient,

c_n(x) = ∫_0^1 u(x, t) e^{−2πint} dt ,

and again we want to plug into the heat equation and find a differential equation that the coefficients satisfy. The heat equation involves a second (partial) derivative with respect to the spatial variable x, so we differentiate c_n twice and differentiate u under the integral sign twice with respect to x:

c_n″(x) = ∫_0^1 u_{xx}(x, t) e^{−2πint} dt .

Using the heat equation and integrating by parts (once) gives

c_n″(x) = ∫_0^1 2 u_t(x, t) e^{−2πint} dt
        = ∫_0^1 4πin u(x, t) e^{−2πint} dt = 4πin c_n(x) .

We can solve this second-order differential equation in x easily on noting that

(4πin)^{1/2} = ±(2π|n|)^{1/2} (1 ± i) ,

where we take 1 + i when n > 0 and 1 − i when n < 0. I’ll leave it to you to decide that the root to take is −(2π|n|)^{1/2} (1 ± i), thus

c_n(x) = A_n e^{−(2π|n|)^{1/2} (1±i) x} .
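As a quick sanity check on that choice of root (a small numerical sketch, not part of the original notes): squaring −(2π|n|)^{1/2}(1 ± i) should give back 4πin, and its real part should be negative so that c_n(x) decays with depth.

import numpy as np

# Verify the root r = -(2*pi*|n|)**0.5 * (1 + 1j*sign(n)) for a few sample n.
for n in (1, 2, -3):
    r = -np.sqrt(2.0 * np.pi * abs(n)) * (1.0 + 1j * np.sign(n))
    print(n, np.allclose(r**2, 4.0 * np.pi * 1j * n), r.real < 0)   # True, True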

What is the initial value A_n = c_n(0)? Again we assume that at x = 0 there is a periodic function of t that models the temperature (at the fixed spot on earth) over the course of the year. Call this f(t). Then u(0, t) = f(t), and

c_n(0) = ∫_0^1 u(0, t) e^{−2πint} dt = f̂(n) .

Our solution is then

u(x, t) = Σ_{n=−∞}^{∞} f̂(n) e^{−(2π|n|)^{1/2} (1±i) x} e^{2πint} .

That’s not a beautiful expression, but it becomes more interesting if we rearrange the exponentials to isolate the periodic parts (the ones that have an i in them) from the nonperiodic part that remains. The latter is e^{−(2π|n|)^{1/2} x}. The terms then look like

f̂(n) e^{−(2π|n|)^{1/2} x} e^{2πint ∓ (2π|n|)^{1/2} i x} .

What’s interesting here? The dependence on the depth, x. Each term is damped by the exponential e^{−(2π|n|)^{1/2} x} and phase shifted by the amount (2π|n|)^{1/2} x.

Take a simple case. Suppose that the temperature at the surface x = 0 is given just by sin 2πt and that the mean annual temperature is 0, i.e.,

∫_0^1 f(t) dt = f̂(0) = 0 .

All Fourier coefficients other than the first (and minus first) are zero, and the solution reduces to

u(x, t) = e^{−(2π)^{1/2} x} sin(2πt − (2π)^{1/2} x) .

Take the depth x so that (2π)^{1/2} x = π. Then the temperature is damped by e^{−π} ≈ 0.04, quite a bit, and it is half a period (six months) out of phase with the temperature at the surface. The temperature x meters below stays pretty constant because of the damping, and because of the phase shift it’s cool in the summer and warm in the winter. There’s a name for a place like that. It’s called a cellar.
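Putting numbers on the cellar (a quick computational sketch; the depth is measured in the same non-dimensional units in which the heat equation was written as u_t = (1/2) u_{xx}):

import numpy as np

# Surface temperature sin(2*pi*t); x units chosen so the heat equation reads u_t = (1/2) u_xx.
def u(x, t):
    return np.exp(-np.sqrt(2.0 * np.pi) * x) * np.sin(2.0 * np.pi * t - np.sqrt(2.0 * np.pi) * x)

x_cellar = np.pi / np.sqrt(2.0 * np.pi)   # depth at which sqrt(2*pi)*x = pi
print(x_cellar)                           # about 1.25 in these units
print(np.exp(-np.pi))                     # damping factor, about 0.043
print(u(0.0, 0.25), u(x_cellar, 0.25))    # midsummer at the surface is the coldest moment below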

The first shot in the second industrial revolution

Many types of diffusion processes are similar enough in principle to the flow of heat that they are modeled by the heat equation, or a variant of the heat equation, and Fourier analysis is often used to find solutions. One celebrated example of this was the paper by William Thomson (later Lord Kelvin), “On the theory of the electric telegraph”, published in 1855 in the Proceedings of the Royal Society.

The high tech industry of the mid to late 19th century was submarine telegraphy. Sharp pulses were sent at one end, representing the dots and dashes of Morse code, and in transit, if the cable was very long and if pulses were sent in too rapid a succession, the pulses were observed to smear out and overlap to the degree that at the receiving end it was impossible to resolve them. The commercial success of telegraph transmissions between continents depended on undersea cables reliably handling a large volume of traffic.

How should cables be designed? The stakes were high and a quantitative analysis was needed.

A qualitative explanation of signal distortion was offered by Michael Faraday, who was shown the phenomenon by Latimer Clark. Clark, an official of the Electric and International Telegraph Company, had observed the blurring of signals on the Dutch-Anglo line. Faraday surmised that a cable immersed in water became in effect an enormous capacitor, consisting as it does of two conductors — the wire and the water — separated by insulating material (gutta-percha in those days). When a signal was sent, part of the energy went into charging the capacitor, which took time, and when the signal was finished the capacitor discharged and that also took time. The delay associated with both charging and discharging distorted the signal and caused signals sent too rapidly to overlap.

Thomson took up the problem in two letters to G. Stokes (of Stokes’ theorem fame), which became the published paper. We won’t follow Thomson’s analysis at this point, because, with the passage of time, it is more easily understood via Fourier transforms rather than Fourier series. However, here are some highlights. Think of the whole cable as a (flexible) cylinder with a wire of radius a along the axis and surrounded by a layer of insulation of radius b (thus of thickness b − a). To model the electrical properties of the cable, Thomson introduced the “electrostatic capacity per unit length” depending on a and b and ε, the permittivity of the insulator. His formula was

C = 2πε / ln(b/a) .

(You may have done just this calculation in an EE or physics class.) He also introduced the “resistance per unit length”, denoting it by K. Imagining the cable as a series of infinitesimal pieces, and using Kirchhoff’s circuit law and Ohm’s law on each piece, he argued that the voltage v(x, t) at a distance x from the end of the cable and at a time t must satisfy the partial differential equation

v_t = (1/KC) v_{xx} .

Thomson states: “This equation agrees with the well-known equation of the linear motion of heat in a solid conductor, and various forms of solution which Fourier has given are perfectly adapted for answering practical questions regarding the use of the telegraph wire.”

After the fact, the basis of the analogy is that charge diffusing through a cable may be described in the same way as heat through a rod, with a gradient in electric potential replacing gradient of temperature, etc. (Keep in mind, however, that the electron was not discovered till 1897.) Here we see K and C playing the role of thermal resistance and thermal capacity in the derivation of the heat equation.

The result of Thomson’s analysis that had the greatest practical consequence was his demonstration that

“. . . the time at which the maximum electrodynamic effect of connecting the battery for an instant . . . ” [sending a sharp pulse, that is] occurs for

t_max = (1/6) KC x² .

The number t_max is what’s needed to understand the delay in receiving the signal. It’s the fact that the distance from the end of the cable, x, comes in squared that’s so important. This means, for example, that the delay in a signal sent along a 1000 mile cable will be 100 times as large as the delay along a 100 mile cable, and not 10 times as large, as was thought. This was Thomson’s “Law of squares.”
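The point is easy to check in a few lines (K and C below are made-up values purely for illustration; only the x² scaling matters):

# Thomson's law of squares: t_max = (1/6) * K * C * x**2.
K = 1.0e-3   # hypothetical resistance per unit length
C = 2.0e-7   # hypothetical capacitance per unit length

def t_max(x):
    return K * C * x**2 / 6.0

print(t_max(1000.0) / t_max(100.0))   # 100.0 -- a hundredfold delay, not tenfold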

Thomson’s work has been called “The first shot in the second industrial revolution.”22 This was when electrical engineering became decidedly mathematical. His conclusions did not go unchallenged, however.

Consider this quote from Edward Whitehouse, chief electrician for the Atlantic Telegraph Company, speaking in 1856:

I believe nature knows no such application of this law [the law of squares] and I can only regard it as a fiction of the schools; a forced and violent application of a principle in Physics, good and true under other circumstances, but misapplied here.

Thomson’s analysis did not prevail and the first transatlantic cable was built without regard to his specifications. Thomson said they had to design the cable to make KC small. They thought they could just crank up the power. The continents were joined August 5, 1858, after four previous failed attempts. The first successful message was sent August 16. The cable failed three weeks later. Too high a voltage. They fried it.

Rather later, in 1876, Oliver Heaviside greatly extended Thomson’s work by including the effects of induction. He derived a more general differential equation for the voltage v(x, t) in the form

v_{xx} = KC v_t + SC v_{tt} ,

where S denotes the inductance per unit length and, as before, K and C denote the resistance and capacitance per unit length. The significance of this equation, though not realized till later still, is that it allows for solutions that represent propagating waves. Indeed, from a PDE point of view the equation looks like a mix of the heat equation and the wave equation. (We’ll study the wave equation later.) It is Heaviside’s equation that is now usually referred to as the “telegraph equation”.

22See Getting the Message: A History of Communications by L. Solymar.


The last shot in the Second World War

Speaking of high-stakes diffusion processes, in the early stages of the theoretical analysis of atomic explosives it was necessary to study the diffusion of neutrons produced by fission as they worked their way through a mass of uranium. The question: How much mass is needed so that enough uranium nuclei will fission in a short enough time to produce an explosion?23 An analysis of this problem was carried out by Robert Serber and some students at Berkeley in the summer of 1942, preceding the opening of the facilities at Los Alamos (where the bulk of the work was done and the bomb was built). They found that the so-called “critical mass” needed for an explosive chain reaction was about 60 kg of U235, arranged in a sphere of radius about 9 cm (together with a tamper surrounding the uranium). A less careful model of how the diffusion works gives a critical mass of 200 kg. As the story goes, in the development of the German atomic bomb project (which predated the American efforts), Werner Heisenberg worked with a less accurate model and obtained too high a number for the critical mass. This set their program back.

For a fascinating and accessible account of this and more, see Robert Serber’s The Los Alamos Primer.

These are the notes of the first lectures given by Serber at Los Alamos on the state of knowledge on atomic bombs, annotated by him for this edition. For a dramatized account of Heisenberg’s role in the German atomic bomb project — including the misunderstanding of diffusion — try Michael Frayn’s play Copenhagen.
