
Lecture Notes for

The Fourier Transform and its Applications

Prof. Brad Osgood, Stanford University

https://see.stanford.edu/materials/lsoftaee261/book-fall-07.pdf
http://www.coursehero.org/lecture/fourier-series-0
https://www.youtube.com/view_play_list?p=B24BC7956EE040CD
https://see.stanford.edu/Course/EE261


Contents

1 Fourier Series
1.1 Introduction and Choices to Make
1.2 Periodic Phenomena
1.3 Periodicity: Definitions, Examples, and Things to Come
1.4 It All Adds Up
1.5 Lost at c
1.6 Period, Frequencies, and Spectrum
1.7 Two Examples and a Warning
1.8 The Math, the Majesty, the End
1.9 Orthogonality
1.10 Appendix: The Cauchy-Schwarz Inequality and its Consequences
1.11 Appendix: More on the Complex Inner Product
1.12 Appendix: Best L² Approximation by Finite Fourier Series
1.13 Fourier Series in Action
1.14 Notes on Convergence of Fourier Series
1.15 Appendix: Pointwise Convergence vs. Uniform Convergence
1.16 Appendix: Studying Partial Sums via the Dirichlet Kernel: The Buzz Is Back
1.17 Appendix: The Complex Exponentials Are a Basis for L²([0, 1])
1.18 Appendix: More on the Gibbs Phenomenon

2 Fourier Transform
2.1 A First Look at the Fourier Transform
2.2 Getting to Know Your Fourier Transform

3 Convolution
3.1 A ∗ is Born
3.2 What is Convolution, Really?
3.3 Properties of Convolution: It’s a Lot like Multiplication
3.4 Convolution in Action I: A Little Bit on Filtering
3.5 Convolution in Action II: Differential Equations
3.6 Convolution in Action III: The Central Limit Theorem
3.7 The Central Limit Theorem: The Bell Curve Tolls for Thee
3.8 Fourier Transform Formulas under Different Normalizations
3.9 Appendix: The Mean and Standard Deviation for the Sum of Random Variables
3.10 More Details on the Central Limit Theorem
3.11 Appendix: Heisenberg’s Inequality

4 Distributions and Their Fourier Transforms
4.1 The Day of Reckoning
4.2 The Right Functions for Fourier Transforms: Rapidly Decreasing Functions
4.3 A Very Little on Integrals
4.4 Distributions
4.5 A Physical Analogy for Distributions
4.6 Limits of Distributions
4.7 The Fourier Transform of a Tempered Distribution
4.8 Fluxions Finis: The End of Differential Calculus
4.9 Approximations of Distributions
4.10 The Generalized Fourier Transform Includes the Classical Fourier Transform
4.11 Operations on Distributions and Fourier Transforms
4.12 Duality, Changing Signs, Evenness and Oddness
4.13 A Function Times a Distribution Makes Sense
4.14 The Derivative Theorem
4.15 Shifts and the Shift Theorem
4.16 Scaling and the Stretch Theorem
4.17 Convolutions and the Convolution Theorem
4.18 δ Hard at Work
4.19 Appendix: The Riemann-Lebesgue Lemma
4.20 Appendix: Smooth Windows
4.21 Appendix: 1/x as a Principal Value Distribution

5 III, Sampling, and Interpolation
5.1 X-Ray Diffraction: Through a Glass Darkly
5.2 The III Distribution
5.3 The Fourier Transform of III
5.4 Periodic Distributions and Fourier Series
5.5 Sampling Signals
5.6 Sampling and Interpolation for Bandlimited Signals
5.7 Interpolation a Little More Generally
5.8 Finite Sampling for a Bandlimited Periodic Signal
5.9 Troubles with Sampling
5.10 Appendix: How Special is III?
5.11 Appendix: Timelimited vs. Bandlimited Signals

6 Discrete Fourier Transform
6.1 From Continuous to Discrete
6.2 The Discrete Fourier Transform (DFT)
6.3 Two Grids, Reciprocally Related
6.4 Appendix: Gauss’s Problem
6.5 Getting to Know Your Discrete Fourier Transform
6.6 Periodicity, Indexing, and Reindexing
6.7 Inverting the DFT and Many Other Things Along the Way
6.8 Properties of the DFT
6.9 Different Definitions for the DFT
6.10 The FFT Algorithm
6.11 Zero Padding

7 Linear Time-Invariant Systems
7.1 Linear Systems
7.2 Examples
7.3 Cascading Linear Systems
7.4 The Impulse Response
7.5 Linear Time-Invariant (LTI) Systems
7.6 Appendix: The Linear Millennium
7.7 Appendix: Translating in Time and Plugging into L
7.8 The Fourier Transform and LTI Systems
7.9 Matched Filters
7.10 Causality
7.11 The Hilbert Transform
7.12 Appendix: The Hilbert Transform of sinc
7.13 Filters Finis
7.14 Appendix: Geometric Series of the Vector Complex Exponentials
7.15 Appendix: The Discrete Rect and its DFT

8 n-dimensional Fourier Transform
8.1 Space, the Final Frontier
8.2 Getting to Know Your Higher Dimensional Fourier Transform
8.3 Higher Dimensional Fourier Series
8.4 III, Lattices, Crystals, and Sampling
8.5 Crystals
8.6 Bandlimited Functions on R² and Sampling on a Lattice
8.7 Naked to the Bone
8.8 The Radon Transform
8.9 Getting to Know Your Radon Transform
8.10 Appendix: Clarity of Glass
8.11 Medical Imaging: Inverting the Radon Transform

A Mathematical Background
A.1 Complex Numbers
A.2 The Complex Exponential and Euler’s Formula
A.3 Algebra and Geometry
A.4 Further Applications of Euler’s Formula

B Some References

Index


Chapter 1

Fourier Series

1.1 Introduction and Choices to Make

Methods based on the Fourier transform are used in virtually all areas of engineering and science and by virtually all engineers and scientists. For starters:

• Circuit designers

• Spectroscopists

• Crystallographers

• Anyone working in signal processing and communications

• Anyone working in imaging

I’m expecting that many fields and many interests will be represented in the class, and this brings up an important issue for all of us to be aware of. With the diversity of interests and backgrounds present, not all examples and applications will be familiar and of relevance to all people. We’ll all have to cut each other some slack, and it’s a chance for all of us to branch out. Along the same lines, it’s also important for you to realize that this is one course on the Fourier transform among many possible courses. The richness of the subject, both mathematically and in the range of applications, means that we’ll be making choices almost constantly. Books on the subject do not look alike, nor do they look like these notes — even the notation used for basic objects and operations can vary from book to book. I’ll try to point out when a certain choice takes us along a certain path, and I’ll try to say something of what the alternate paths may be.

The very first choice is where to start, and my choice is a brief treatment of Fourier series.[1] Fourier analysis was originally concerned with representing and analyzing periodic phenomena, via Fourier series, and later with extending those insights to nonperiodic phenomena, via the Fourier transform. In fact, one way of getting from Fourier series to the Fourier transform is to consider nonperiodic phenomena (and thus just about any general function) as a limiting case of periodic phenomena as the period tends to infinity. A discrete set of frequencies in the periodic case becomes a continuum of frequencies in the nonperiodic case, the spectrum is born, and with it comes the most important principle of the subject:

Every signal has a spectrum and is determined by its spectrum. You can analyze the signal either in the time (or spatial) domain or in the frequency domain.

[1] Bracewell, for example, starts right off with the Fourier transform and picks up a little on Fourier series later.


I think this qualifies as a Major Secret of the Universe.

All of this was thoroughly grounded in physical applications. Most often the phenomena to be studied were modeled by the fundamental differential equations of physics (heat equation, wave equation, Laplace’s equation), and the solutions were usually constrained by boundary conditions. At first the idea was to use Fourier series to find explicit solutions.

This work raised hard and far-reaching questions that led in different directions. It was gradually realized that setting up Fourier series (in sines and cosines) could be recast in the more general framework of orthogonality, linear operators, and eigenfunctions. That led to the general idea of working with eigenfunction expansions of solutions of differential equations, a ubiquitous line of attack in many areas and applications.

In the modern formulation of partial differential equations, the Fourier transform has become the basis for defining the objects of study, while still remaining a tool for solving specific equations. Much of this development depends on the remarkable relation between Fourier transforms and convolution, something which was also seen earlier in the Fourier series days. In an effort to apply the methods with increasing generality, mathematicians were pushed (by engineers and physicists) to reconsider how general the notion of “function” can be, and what kinds of functions can be — and should be — admitted into the operating theater of calculus. Differentiation and integration were both generalized in the service of Fourier analysis.

Other directions combine tools from Fourier analysis with symmetries of the objects being analyzed. This might make you think of crystals and crystallography, and you’d be right, while mathematicians think of number theory and Fourier analysis on groups. Finally, I have to mention that in the purely mathematical realm the question of convergence of Fourier series, believe it or not, led G. Cantor near the turn of the 20th century to investigate and invent the theory of infinite sets, and to distinguish different sizes of infinite sets, all of which led to Cantor going insane.

1.2 Periodic Phenomena

To begin the course with Fourier series is to begin with periodic functions, those functions which exhibit a regularly repeating pattern. It shouldn’t be necessary to try to sell periodicity as an important physical (and mathematical) phenomenon — you’ve seen examples and applications of periodic behavior in probably (almost) every class you’ve taken. I would only remind you that periodicity often shows up in two varieties, sometimes related, sometimes not. Generally speaking we think about periodic phenomena according to whether they are periodic in time or periodic in space.

1.2.1 Time and space

In the case of time the phenomenon comes to you. For example, you stand at a fixed point in the ocean (or on an electrical circuit) and the waves (or the electrical current) wash over you with a regular, recurring pattern of crests and troughs. The height of the wave is a periodic function of time. Sound is another example: “sound” reaches your ear as a longitudinal pressure wave, a periodic compression and rarefaction of the air. In the case of space, you come to the phenomenon. You take a picture and you observe repeating patterns.

Temporal and spatial periodicity come together most naturally in wave motion. Take the case of one spatial dimension, and consider a single sinusoidal wave traveling along a string (for example). For such a wave the periodicity in time is measured by the frequency ν, with dimension 1/sec and units Hz (Hertz = cycles per second), and the periodicity in space is measured by the wavelength λ, with dimension length and units whatever is convenient for the particular setting. If we fix a point in space and let the time vary (take a video of the wave motion at that point) then successive crests of the wave come past that point ν times per second, and so do successive troughs. If we fix the time and examine how the wave is spread out in space (take a snapshot instead of a video) we see that the distance between successive crests is a constant λ, as is the distance between successive troughs. The frequency and wavelength are related through the equation v = λν, where v is the speed of propagation — this is nothing but the wave version of speed = distance/time. Thus the higher the frequency the shorter the wavelength, and the lower the frequency the longer the wavelength. If the speed is fixed, like the speed of electromagnetic waves in a vacuum, then the frequency determines the wavelength and vice versa; if you can measure one you can find the other. For sound we identify the physical property of frequency with the perceptual property of pitch; for light, frequency is perceived as color. Simple sinusoids are the building blocks of the most complicated wave forms — that’s what Fourier analysis is about.

1.2.2 More on spatial periodicity

Another way spatial periodicity occurs is when there is a repeating pattern or some kind of symmetry in a spatial region and physically observable quantities associated with that region have a repeating pattern that reflects this. For example, a crystal has a regular, repeating pattern of atoms in space; the arrangement of atoms is called a lattice. The electron density distribution is then a periodic function of the spatial variable (in R³) that describes the crystal. I mention this example because, in contrast to the usual one-dimensional examples you might think of, here the function, in this case the electron density distribution, has three independent periods corresponding to the three directions that describe the crystal lattice.

Here’s another example — this time in two dimensions — that is very much a natural subject for Fourier analysis. Consider these stripes of dark and light:

[Figure: several stripe patterns of different spacings and orientations.]

No doubt there’s some kind of spatially periodic behavior going on in the respective images. Furthermore, even without stating a precise definition, it’s reasonable to say that one of the patterns is “low frequency” and that the others are “high frequency”, meaning roughly that there are fewer stripes per unit length in the one than in the others. In two dimensions there’s an extra subtlety that we see in these pictures: “spatial frequency”, however we ultimately define it, must be a vector quantity, not a number. We have to say that the stripes occur with a certain spacing in a certain direction.

Such periodic stripes are the building blocks of general two-dimensional images. When there’s no color, an image is a two-dimensional array of varying shades of gray, and this can be realized as a synthesis — a Fourier synthesis — of just such alternating stripes.

There are interesting perceptual questions in constructing images this way, and color is more complicated still. Here’s a picture I got from Foundations of Vision by Brian Wandell, who is in the Psychology Department here at Stanford.

[Figure: two blue-and-yellow striped patterns, identical except for the stripe spacing.]

The shades of blue and yellow are the same in the two pictures — the only change is in the frequency. The closer spacing “mixes” the blue and yellow to give a greenish cast. Here’s a question that I know has been investigated but I don’t know the answer. Show someone blue and yellow stripes of a low frequency and increase the frequency till they just start to see green. You get a number for that. Next, start with blue and yellow stripes at a high frequency so a person sees a lot of green and then lower the frequency till they see only blue and yellow. You get a number for that. Are the two numbers the same? Does the orientation of the stripes make a difference?

1.3 Periodicity: Definitions, Examples, and Things to Come

To be certain we all know what we’re talking about, a function f(t) is periodic of period T if there is a number T > 0 such that

$$f(t + T) = f(t)$$

for all t. If there is such a T then the smallest one for which the equation holds is called the fundamental period of the function f.[2] Every integer multiple of the fundamental period is also a period:[3]

$$f(t + nT) = f(t), \qquad n = 0, \pm 1, \pm 2, \ldots$$

I’m calling the variable t here because I have to call it something, but the definition is general and is not meant to imply periodic functions of time.

[2] Sometimes when people say simply “period” they mean the smallest or fundamental period. (I usually do, for example.) Sometimes they don’t. Ask them what they mean.

[3] It’s clear from the geometric picture of a repeating graph that this is true. To show it algebraically, if n ≥ 1 then we see inductively that f(t + nT) = f(t + (n − 1)T + T) = f(t + (n − 1)T) = f(t). Then to see algebraically why negative multiples of T are also periods we have, for n ≥ 1, f(t − nT) = f(t − nT + nT) = f(t).


The graph of f over any interval of length T is one cycle. Geometrically, the periodicity condition means that the shape of one cycle (any cycle) determines the graph everywhere; the shape is repeated over and over. A homework problem asks you to turn this idea into a formula.

This is all old news to everyone, but, by way of example, there are a few more points I’d like to make.

Consider the function

$$f(t) = \cos 2\pi t + \tfrac{1}{2}\cos 4\pi t,$$

whose graph is shown below.

[Figure: graph of f(t) over 0 ≤ t ≤ 4.]

The individual terms are periodic with periods 1 and 1/2 respectively, but the sum is periodic with period 1:

$$f(t + 1) = \cos 2\pi(t+1) + \tfrac{1}{2}\cos 4\pi(t+1) = \cos(2\pi t + 2\pi) + \tfrac{1}{2}\cos(4\pi t + 4\pi) = \cos 2\pi t + \tfrac{1}{2}\cos 4\pi t = f(t).$$

There is no smaller value of T for which f (t + T ) = f (t). The overall pattern repeats every 1 second, but if this function represented some kind of wave would you say it had frequency 1 Hz? Somehow I don’t think so. It has one period but you’d probably say that it has, or contains, two frequencies, one cosine of frequency 1 Hz and one of frequency 2 Hz.
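As a quick numerical check, here is a minimal numpy sketch (the sampling grid and variable names are my own choices) that verifies f(t + 1) = f(t) and picks the two frequencies out of samples of one period:

```python
import numpy as np

# f(t) = cos(2 pi t) + (1/2) cos(4 pi t): one period (1 second), two frequencies.
def f(t):
    return np.cos(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)

t = np.linspace(0, 4, 1000, endpoint=False)
print(np.allclose(f(t), f(t + 1)))       # True: f(t + 1) = f(t)

# Sample one period; the discrete spectrum has energy at 1 Hz and 2 Hz only.
N = 64
c = np.fft.fft(f(np.arange(N) / N)) / N  # approximate Fourier coefficients c_n
print(np.round(np.abs(c[:4]), 3))        # [0.  0.5  0.25  0. ] -> |c_1|, |c_2|
```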

The subject of adding up periodic functions is worth a general question:

• Is the sum of two periodic functions periodic?

I guess the answer is no if you’re a mathematician, yes if you’re an engineer, i.e., no if you believe in irrational numbers and leave it at that, and yes if you compute things and hence work with approximations.

For example, $\cos t$ and $\cos(\sqrt{2}\, t)$ are each periodic, with periods $2\pi$ and $2\pi/\sqrt{2}$ respectively, but the sum $\cos t + \cos(\sqrt{2}\, t)$ is not periodic.

Here are plots of $f_1(t) = \cos t + \cos 1.4t$ and of $f_2(t) = \cos t + \cos(\sqrt{2}\, t)$.

[Figure: plots of f1(t) and f2(t) over −30 ≤ t ≤ 30.]
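If you want to regenerate these plots yourself, a short matplotlib sketch along the following lines (ranges chosen to match the figures) will do:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-30, 30, 4000)
f1 = np.cos(t) + np.cos(1.4 * t)          # periodic (period 10*pi, since 1.4 = 7/5)
f2 = np.cos(t) + np.cos(np.sqrt(2) * t)   # not periodic: sqrt(2) is irrational

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t, f1); ax1.set_ylabel('f1(t)')
ax2.plot(t, f2); ax2.set_ylabel('f2(t)')
ax2.set_xlabel('t')
plt.show()
```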

(I’m aware of the irony in making a big show of computer plots depending on an irrational number when the computer has to take a rational approximation to draw the picture.) How artificial an example is this?

Not artificial at all. We’ll see why, below.

1.3.1 The view from above

After years (centuries) of work, there are, in the end, relatively few mathematical ideas that underlie the study of periodic phenomena. There are many details and subtle points, certainly, but these are of less concern to us than keeping a focus on the bigger picture and using that as a guide in applications. We’ll need the following.

1. The functions that model the simplest periodic behavior, i.e., sines and cosines. In practice, both in calculations and theory, we’ll use the complex exponential instead of the sine and cosine separately.

2. The “geometry” of square integrable functions on a finite interval, i.e., functions for which

$$\int_a^b |f(t)|^2\, dt < \infty.$$

3. Eigenfunctions of linear operators (especially differential operators).

The first point has been familiar to you since you were a kid. We’ll give a few more examples of sines and cosines in action. The second point, at least as I’ve stated it, may not be so familiar — “geometry” of a space of functions? — but here’s what it means in practice:

• Least squares approximation

• Orthogonality of the complex exponentials (and of the trig functions)


I say “geometry” because what we’ll do and what we’ll say is analogous to Euclidean geometry as it is expressed (especially for computational purposes) via vectors and dot products. Analogous, not identical.

There are differences between a space of functions and a space of (geometric) vectors, but it’s almost more a difference of degree than a difference of kind, and your intuition for vectors in R² or R³ can take you quite far. Also, the idea of least squares approximation is closely related to the orthogonality of the complex exponentials.

We’ll say less about the third point, though it will figure in our discussion of linear systems.[4] Furthermore, it’s the second and third points that are still in force when one wants to work with expansions in functions other than sine and cosine.

1.3.2 The building blocks: a few more examples

The classic example of temporal periodicity is the harmonic oscillator, whether it’s a mass on a spring (no friction) or current in an LC circuit (no resistance). The harmonic oscillator is treated in exhaustive detail in just about every physics class. This is so because it is the only problem that can be treated in exhaustive detail.

The state of the system is described by a single sinusoid, say of the form

$$A \sin(2\pi\nu t + \varphi).$$

The parameters in this expression are the amplitude A, the frequency ν and the phase φ. The period of this function is 1/ν, since

$$A \sin\Bigl(2\pi\nu\Bigl(t + \frac{1}{\nu}\Bigr) + \varphi\Bigr) = A \sin\Bigl(2\pi\nu t + 2\pi\nu\frac{1}{\nu} + \varphi\Bigr) = A \sin(2\pi\nu t + 2\pi + \varphi) = A \sin(2\pi\nu t + \varphi).$$

The classic example of spatial periodicity, the example that started the whole subject, is the distribution of heat in a circular ring. A ring is heated up, somehow, and the heat then distributes itself, somehow, through the material. In the long run we expect all points on the ring to be of the same temperature, but they won’t be in the short run. At each fixed time, how does the temperature vary around the ring?

In this problem the periodicity comes from the coordinate description of the ring. Think of the ring as a circle. Then a point on the ring is determined by an angle θ and quantities which depend on position are functions of θ. Since θ and θ + 2π are the same point on the circle, any continuous function describing a physical quantity on the circle, e.g., temperature, is a periodic function of θ with period 2π.

The distribution of temperature is not given by a simple sinusoid. It was Fourier’s hot idea to consider a sum of sinusoids as a model for the temperature distribution:

$$\sum_{n=1}^{N} A_n \sin(n\theta + \varphi_n).$$

The dependence on time is in the coefficients $A_n$. We’ll study this problem more completely later, but there are a few points to mention now.

Regardless of the physical context, the individual terms in a trigonometric sum such as the one above are called harmonics, terminology that comes from the mathematical representation of musical pitch — more on this in a moment. The terms contribute to the sum in varying amplitudes and phases, and these can have any values. The frequencies of the terms, on the other hand, are integer multiples of the fundamental frequency 1/2π. Because the frequencies are integer multiples of the fundamental frequency, the sum is also periodic, and the period is 2π. The term $A_n \sin(n\theta + \varphi_n)$ has period $2\pi/n$, but the whole sum can’t have a shorter cycle than the longest cycle that occurs, and that’s 2π. We talked about just this point when we first discussed periodicity.[5]

[4] It is the role of complex exponentials as eigenfunctions that explains why you would expect to take only integer multiples of the fundamental period in forming sums of periodic functions.

1.3.3 Musical pitch and tuning

Musical pitch and the production of musical notes is a periodic phenomenon of the same general type as we’ve been considering. Notes can be produced by vibrating strings or other objects that can vibrate regularly (like lips, reeds, or the bars of a xylophone). The engineering problem is how to tune musical instruments. The subject of tuning has a fascinating history, from the “natural tuning” of the Greeks, based on ratios of integers, to the theory of the “equal tempered scale”, which is the system of tuning used today. That system is based on $2^{1/12}$.

There are 12 notes in the equal tempered scale, going from any given note to the same note an octave up, and two adjacent notes have frequencies with ratio $2^{1/12}$. If an A of frequency 440 Hz (concert A) is described by

$$A = \cos(2\pi \cdot 440\, t),$$

then 6 notes up from A in a well tempered scale is a D♯ given by

$$D\sharp = \cos(2\pi \cdot 440\sqrt{2}\, t).$$

(The notes in the scale are $\cos(2\pi \cdot 440 \cdot 2^{n/12}\, t)$ from n = 0 to n = 12.) Playing the A and the D♯ together gives essentially the signal we had earlier, $\cos t + \cos 2^{1/2} t$. I’ll withhold judgment whether or not it sounds any good.
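As a small illustration of the arithmetic, here is a sketch (the note names and layout are my own) that tabulates one octave of the equal tempered scale starting from concert A, and checks that six semitones up is exactly a factor of √2:

```python
import numpy as np

A = 440.0  # concert A
names = ['A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A']
for n, name in enumerate(names):
    # adjacent notes differ by the ratio 2**(1/12)
    print(f'{name:>2}  {A * 2 ** (n / 12):7.2f} Hz')

# Six semitones up from A is D#: 440 * 2**(6/12) = 440 * sqrt(2) ~ 622.25 Hz.
print(np.isclose(A * 2 ** (6 / 12), A * np.sqrt(2)))  # True
```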

Of course, when you tune a piano you don’t tighten the strings irrationally. The art is to make the right approximations. To read more about this, see, for example

http://www.precisionstrobe.com/

To read more about tuning in general try

http://www.wikipedia.org/wiki/Musical_tuning

Here’s a quote from the first reference describing the need for well-tempered tuning:

Two developments occurred in music technology which necessitated changes from the just toned temperament. With the development of the fretted instruments, a problem occurs when setting the frets for just tuning, that octaves played across two strings around the neck would produce impure octaves. Likewise, an organ set to a just tuning scale would reveal chords with unpleasant properties. A compromise to this situation was the development of the mean toned scale. In this system several of the intervals were adjusted to increase the number of usable keys. With the evolution of composition technique in the 18th century increasing the use of harmonic modulation a change was advocated to the equal tempered scale. Among these advocates was J. S. Bach who published two entire works entitled The Well-tempered Clavier. Each of these works contain 24 fugues written in each of twelve major and twelve minor keys and demonstrated that using an equal tempered scale, music could be written in, and shifted to any key.

[5] There is another reason that only integer multiples of the fundamental frequency come in. It has to do with the harmonics being eigenfunctions of a differential operator, and the boundary conditions that go with the problem.

1.4 It All Adds Up

From simple, single sinusoids we can build up much more complicated periodic functions by taking sums.

To highlight the essential ideas it’s convenient to standardize a little and consider functions with period 1.

This simplifies some of the writing and it will be easy to modify the formulas if the period is not 1. The basic function of period 1 is sin 2πt, and so the Fourier-type sum we considered briefly in the previous lecture looks like

$$\sum_{n=1}^{N} A_n \sin(2\pi n t + \varphi_n).$$

This form of a general trigonometric sum has the advantage of displaying explicitly the amplitude and phase of each harmonic, but it turns out to be somewhat awkward to calculate with. It’s more common to write a general trigonometric sum as

$$\sum_{n=1}^{N} \bigl(a_n \cos(2\pi n t) + b_n \sin(2\pi n t)\bigr),$$

and, if we include a constant term (n = 0), as

$$\frac{a_0}{2} + \sum_{n=1}^{N} \bigl(a_n \cos(2\pi n t) + b_n \sin(2\pi n t)\bigr).$$

The reason for writing the constant term with the fraction 1/2 is because, as you will check in the homework, it simplifies still another expression for such a sum.

In electrical engineering the constant term is often referred to as the DC component as in “direct current”.

The other terms, being periodic, “alternate”, as in AC. Aside from the DC component, the harmonics have periods 1, 1/2, 1/3, . . . , 1/N , respectively, or frequencies 1, 2, 3, . . . , N . Because the frequencies of the individual harmonics are integer multiples of the lowest frequency, the period of the sum is 1.

Algebraic work on such trigonometric sums is made incomparably easier if we use complex exponentials to represent the sine and cosine.[6] I remind you that

$$\cos t = \frac{e^{it} + e^{-it}}{2}, \qquad \sin t = \frac{e^{it} - e^{-it}}{2i}.$$

Hence

$$\cos(2\pi nt) = \frac{e^{2\pi int} + e^{-2\pi int}}{2}, \qquad \sin(2\pi nt) = \frac{e^{2\pi int} - e^{-2\pi int}}{2i}.$$

Using this, the sum

$$\frac{a_0}{2} + \sum_{n=1}^{N} \bigl(a_n \cos(2\pi nt) + b_n \sin(2\pi nt)\bigr)$$

[6] See the appendix on complex numbers where there is a discussion of complex exponentials, how they can be used without fear to represent real signals, and an answer to the question of what is meant by a “negative frequency”.


can be written as

$$\sum_{n=-N}^{N} c_n e^{2\pi int}.$$

Sorting out how the a’s, b’s, and c’s are related will be left as a problem. In particular, you’ll get $c_0 = a_0/2$, which is the reason we wrote the constant term as $a_0/2$ in the earlier expression.[7]

In this final form of the sum, the coefficients $c_n$ are complex numbers, and they satisfy

$$c_{-n} = \overline{c_n}.$$

Notice that when n = 0 we have

$$c_0 = \overline{c_0},$$

which implies that $c_0$ is a real number; this jibes with $c_0 = a_0/2$. For any value of n the magnitudes of $c_n$ and $c_{-n}$ are equal:

$$|c_n| = |c_{-n}|.$$

The (conjugate) symmetry property, $c_{-n} = \overline{c_n}$, of the coefficients is important. To be explicit: if the signal is real then the coefficients have to satisfy it, since $f(t) = \overline{f(t)}$ translates to

$$\sum_{n=-N}^{N} c_n e^{2\pi int} = \overline{\sum_{n=-N}^{N} c_n e^{2\pi int}} = \sum_{n=-N}^{N} \overline{c_n}\,\overline{e^{2\pi int}} = \sum_{n=-N}^{N} \overline{c_n}\, e^{-2\pi int},$$

and if we equate like terms we get $c_{-n} = \overline{c_n}$. Conversely, suppose the relation is satisfied. For each n we can group $c_n e^{2\pi int}$ with $c_{-n} e^{-2\pi int}$, and then

$$c_n e^{2\pi int} + c_{-n} e^{-2\pi int} = c_n e^{2\pi int} + \overline{c_n e^{2\pi int}} = 2\operatorname{Re}\bigl(c_n e^{2\pi int}\bigr).$$

Therefore the sum is real (the n = 0 term, being real, is listed separately):

$$\sum_{n=-N}^{N} c_n e^{2\pi int} = c_0 + \sum_{n=1}^{N} 2\operatorname{Re}\bigl(c_n e^{2\pi int}\bigr) = c_0 + 2\operatorname{Re}\Bigl\{\sum_{n=1}^{N} c_n e^{2\pi int}\Bigr\}.$$
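Here is a small numerical version of this converse (the random coefficients and the seed are arbitrary choices of mine): impose $c_{-n} = \overline{c_n}$ on otherwise random coefficients and check that the resulting sum has no imaginary part beyond roundoff.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
c = {0: rng.normal()}                 # c_0 must be real
for n in range(1, N + 1):
    c[n] = rng.normal() + 1j * rng.normal()
    c[-n] = np.conj(c[n])             # impose the symmetry c_{-n} = conj(c_n)

t = np.linspace(0, 1, 200)
f = sum(c[n] * np.exp(2j * np.pi * n * t) for n in range(-N, N + 1))
print(np.max(np.abs(f.imag)))         # ~1e-16: the trigonometric sum is real
```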

1.5 Lost at c

Suppose we have a complicated looking periodic signal; you can think of one varying in time but, again and always, the reasoning to follow applies to any sort of one-dimensional periodic phenomenon. We can scale time to assume that the pattern repeats every 1 second. Call the signal f (t). Can we express f (t) as a sum?

$$f(t) = \sum_{n=-N}^{N} c_n e^{2\pi int}\,?$$

In other words, the unknowns in this expression are the coefficients cn, and the question is can we solve for these coefficients?

[7] When I said that part of your general math know-how should include whipping around sums, this expression in terms of complex exponentials was one of the examples I was thinking of.


Here’s a direct approach. Let’s take the coefficient $c_k$ for some fixed k. We can isolate it by multiplying both sides by $e^{-2\pi ikt}$:

$$e^{-2\pi ikt} f(t) = e^{-2\pi ikt} \sum_{n=-N}^{N} c_n e^{2\pi int} = \cdots + e^{-2\pi ikt} c_k e^{2\pi ikt} + \cdots = \cdots + c_k + \cdots$$

Thus

$$c_k = e^{-2\pi ikt} f(t) - \sum_{\substack{n=-N \\ n \neq k}}^{N} c_n e^{-2\pi ikt} e^{2\pi int} = e^{-2\pi ikt} f(t) - \sum_{\substack{n=-N \\ n \neq k}}^{N} c_n e^{2\pi i(n-k)t}.$$

We’ve pulled out the coefficient $c_k$, but the expression on the right involves all the other unknown coefficients. Another idea is needed, and that idea is integrating both sides from 0 to 1. (We take the interval from 0 to 1 as “base” period for the function. Any interval of length 1 would work — that’s periodicity.) Just as in calculus, we can evaluate the integral of a complex exponential by

$$\int_0^1 e^{2\pi i(n-k)t}\, dt = \Bigl[\frac{1}{2\pi i(n-k)} e^{2\pi i(n-k)t}\Bigr]_{t=0}^{t=1} = \frac{1}{2\pi i(n-k)}\bigl(e^{2\pi i(n-k)} - e^0\bigr) = \frac{1}{2\pi i(n-k)}(1 - 1) = 0.$$

Note that n ≠ k is needed here.

Since the integral of the sum is the sum of the integrals, and the coefficients $c_n$ come out of each integral, all of the terms in the sum integrate to zero and we have a formula for the k-th coefficient:

$$c_k = \int_0^1 e^{-2\pi ikt} f(t)\, dt.$$

Let’s summarize and be careful to note what we’ve done here, and what we haven’t done. We’ve shown that if we can write a periodic function f(t) of period 1 as a sum

$$f(t) = \sum_{n=-N}^{N} c_n e^{2\pi int},$$

then the coefficients $c_n$ must be given by

$$c_n = \int_0^1 e^{-2\pi int} f(t)\, dt.$$

We have not shown that every periodic function can be expressed this way.
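The formula is easy to test numerically. The sketch below (my own test setup, using scipy’s quad for the integral) computes $c_n$ for the earlier example $f(t) = \cos 2\pi t + \frac{1}{2}\cos 4\pi t$, whose coefficients should come out to $c_{\pm 1} = 1/2$ and $c_{\pm 2} = 1/4$:

```python
import numpy as np
from scipy.integrate import quad

def f(t):
    return np.cos(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)

def coefficient(n):
    # c_n = int_0^1 e^{-2 pi i n t} f(t) dt, split into real and imaginary
    # parts since quad integrates real-valued functions.
    re, _ = quad(lambda t: np.cos(2 * np.pi * n * t) * f(t), 0, 1)
    im, _ = quad(lambda t: -np.sin(2 * np.pi * n * t) * f(t), 0, 1)
    return re + 1j * im

for n in range(-2, 3):
    print(n, np.round(coefficient(n), 6))   # c_{+-1} = 0.5, c_{+-2} = 0.25, c_0 = 0
```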

By the way, in none of the preceding calculations did we have to assume that f(t) is a real signal. If, however, we do assume that f(t) is real, then let’s see how the formula for the coefficients jibes with $\overline{c_n} = c_{-n}$. We have

$$\overline{c_n} = \overline{\int_0^1 e^{-2\pi int} f(t)\, dt}$$
$$= \int_0^1 e^{2\pi int} f(t)\, dt \qquad \text{(because $f(t)$ is real, as are $t$ and $dt$)}$$
$$= c_{-n} \qquad \text{(by definition of $c_n$).}$$


The $c_n$ are called the Fourier coefficients of f(t), because it was Fourier who introduced these ideas into mathematics and science (but working with the sine and cosine form of the expression). The sum

$$\sum_{n=-N}^{N} c_n e^{2\pi int}$$

is called a (finite) Fourier series.

If you want to be mathematically hip and impress your friends at cocktail parties, use the notation

$$\hat f(n) = \int_0^1 e^{-2\pi int} f(t)\, dt$$

for the Fourier coefficients. Always conscious of social status, I will use this notation.

Note in particular that the 0-th Fourier coefficient is the average value of the function:

$$\hat f(0) = \int_0^1 f(t)\, dt.$$

Also note that because of periodicity of f(t), any interval of length 1 will do to calculate $\hat f(n)$. Let’s check this. To integrate over an interval of length 1 is to integrate from a to a + 1, where a is any number. Let’s compute how this integral varies as a function of a.

$$\frac{d}{da}\Bigl(\int_a^{a+1} e^{-2\pi int} f(t)\, dt\Bigr) = e^{-2\pi in(a+1)} f(a+1) - e^{-2\pi ina} f(a) = e^{-2\pi ina} e^{-2\pi in} f(a+1) - e^{-2\pi ina} f(a) = e^{-2\pi ina} f(a) - e^{-2\pi ina} f(a) = 0,$$

using $e^{-2\pi in} = 1$ and $f(a+1) = f(a)$. In other words, the integral

$$\int_a^{a+1} e^{-2\pi int} f(t)\, dt$$

is independent of a. So in particular,

$$\int_a^{a+1} e^{-2\pi int} f(t)\, dt = \int_0^1 e^{-2\pi int} f(t)\, dt = \hat f(n).$$

A common instance of this is

$$\hat f(n) = \int_{-1/2}^{1/2} e^{-2\pi int} f(t)\, dt.$$

There are times when such a change is useful.
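A quick numerical sanity check of this independence (the shifted cosine is my own test signal; its first coefficient is $c_1 = e^{-i}/2$):

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: np.cos(2 * np.pi * t - 1.0)   # period 1, phased so c_1 is complex

def c1(a):
    # c_1 computed over the interval [a, a + 1]
    re, _ = quad(lambda t: np.cos(2 * np.pi * t) * f(t), a, a + 1)
    im, _ = quad(lambda t: -np.sin(2 * np.pi * t) * f(t), a, a + 1)
    return re + 1j * im

for a in (0.0, -0.5, 0.3):
    print(a, np.round(c1(a), 8))            # identical for every a: c_1 = e^{-i}/2
```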

Finally note that for a given function some coefficients may well be zero. More completely: There may be only a finite number of nonzero coefficients; or maybe all but a finite number of coefficients are nonzero;

or maybe none of the coefficients are zero; or there may be an infinite number of nonzero coefficients but also an infinite number of coefficients that are zero — I think that’s everything. What’s interesting, and important for some applications, is that under some general assumptions one can say something about the size of the coefficients. We’ll come back to this.


1.6 Period, Frequencies, and Spectrum

We’ll look at some examples and applications in a moment. First I want to make a few more general observations. In the preceding discussion I have more often used the more geometric term period instead of the more physical term frequency. It’s natural to talk about the period for a Fourier series representation of f(t),

$$f(t) = \sum_{n=-\infty}^{\infty} \hat f(n) e^{2\pi int}.$$

The period is 1. The function repeats according to f(t + 1) = f(t) and so do all the individual terms, though the terms for n ≠ ±1 have the strictly shorter period 1/|n|.[8] As mentioned earlier, it doesn’t seem natural to talk about “the frequency” (should it be 1 Hz?). That misses the point. Rather, being able to write f(t) as a Fourier series means that it is synthesized from many harmonics, many frequencies, positive and negative, perhaps an infinite number. The set of frequencies present in a given periodic signal is the spectrum of the signal. Note that it’s the frequencies, like ±2, ±7, ±325, that make up the spectrum, not the values of the coefficients $\hat f(\pm 2)$, $\hat f(\pm 7)$, $\hat f(\pm 325)$.

Because of the symmetry relation $\hat f(-n) = \overline{\hat f(n)}$, the coefficients $\hat f(n)$ and $\hat f(-n)$ are either both zero or both nonzero. Are numbers n where $\hat f(n) = 0$ considered to be part of the spectrum? I’d say yes, with the following gloss: if the coefficients are all zero from some point on, say $\hat f(n) = 0$ for |n| > N, then it’s common to say that the signal has no spectrum from that point, or that the spectrum of the signal is limited to the points between −N and N. One also says in this case that the bandwidth is N (or maybe 2N depending to whom you’re speaking) and that the signal is bandlimited.

Let me also point out a mistake that people sometimes make when thinking too casually about the Fourier coefficients. To represent the spectrum graphically people sometimes draw a bar graph where the heights of the bars are the coefficients. Something like:

[Figure: a bar graph over the integers −4, …, 4.]

Why is this a mistake? Because, remember, the coefficients $\hat f(0), \hat f(\pm 1), \hat f(\pm 2), \ldots$ are complex numbers — you can’t draw them as a height in a bar graph. (Except for $\hat f(0)$, which is real because it’s the average value of f(t).) What you’re supposed to draw to get a picture like the one above is a bar graph of $|\hat f(0)|^2, |\hat f(\pm 1)|^2, |\hat f(\pm 2)|^2, \ldots$, i.e., the squares of the magnitudes of the coefficients. The squared magnitude $|\hat f(n)|^2$ can be identified as the energy of the (positive and negative) harmonics $e^{\pm 2\pi int}$. (More on this later.) These sorts of plots are what you see produced by a “spectrum analyzer”. One could also draw just the magnitudes $|\hat f(0)|, |\hat f(\pm 1)|, |\hat f(\pm 2)|, \ldots$, but it’s probably more customary to consider the squares of the magnitudes.

[8] By convention, here we sort of ignore the constant term $c_0$ when talking about periods or frequencies. It’s obviously periodic of period 1, or any other period for that matter.

The sequence of squared magnitudes $|\hat f(n)|^2$ is called the power spectrum or the energy spectrum (different names in different fields). A plot of the power spectrum gives you a sense of how the coefficients stack up, die off, whatever, and it’s a way of comparing two signals. It doesn’t give you any idea of the phases of the coefficients. I point all this out only because forgetting what quantities are complex and plotting a graph anyway is an easy mistake to make (I’ve seen it, and not only in student work but in an advanced text on quantum mechanics).

The case when all the coefficients are real is when the signal is real and even. For then

$$\overline{\hat f(n)} = \hat f(-n) = \int_0^1 e^{-2\pi i(-n)t} f(t)\, dt = \int_0^1 e^{2\pi int} f(t)\, dt$$
$$= -\int_0^{-1} e^{-2\pi ins} f(-s)\, ds \qquad \text{(substituting $t = -s$ and changing limits accordingly)}$$
$$= \int_{-1}^{0} e^{-2\pi ins} f(s)\, ds \qquad \text{(flipping the limits and using that $f(t)$ is even)}$$
$$= \hat f(n) \qquad \text{(because you can integrate over any period, in this case from $-1$ to $0$).}$$

Uniting the two ends of the calculation we get

$$\overline{\hat f(n)} = \hat f(n),$$

hence $\hat f(n)$ is real. Hidden in the middle of this calculation is the interesting fact that if f is even so is $\hat f$, i.e.,

$$f(-t) = f(t) \implies \hat f(-n) = \hat f(n).$$

It’s good to be attuned to these sorts of symmetry results; we’ll see their like again for the Fourier transform.

What happens if f (t) is odd, for example?

1.6.1 What if the period isn’t 1?

Changing to a base period other than 1 does not present too stiff a challenge, and it brings up a very important phenomenon. If we’re working with functions f(t) with period T, then

$$g(t) = f(Tt)$$

has period 1. Suppose we have

$$g(t) = \sum_{n=-N}^{N} c_n e^{2\pi int},$$

or even, without yet addressing issues of convergence, an infinite series

$$g(t) = \sum_{n=-\infty}^{\infty} c_n e^{2\pi int}.$$

Write s = Tt, so that g(t) = f(s). Then

$$f(s) = g(t) = \sum_{n=-\infty}^{\infty} c_n e^{2\pi int} = \sum_{n=-\infty}^{\infty} c_n e^{2\pi ins/T}.$$


The harmonics are now $e^{2\pi ins/T}$. What about the coefficients? If

$$\hat g(n) = \int_0^1 e^{-2\pi int} g(t)\, dt,$$

then, making the same change of variable s = Tt, the integral becomes

$$\frac{1}{T} \int_0^T e^{-2\pi ins/T} f(s)\, ds.$$

To wrap up, calling the variable t again, the Fourier series for a function f(t) of period T is

$$\sum_{n=-\infty}^{\infty} c_n e^{2\pi int/T},$$

where the coefficients are given by

$$c_n = \frac{1}{T} \int_0^T e^{-2\pi int/T} f(t)\, dt.$$

As in the case of period 1, we can integrate over any interval of length T to find $c_n$. For example,

$$c_n = \frac{1}{T} \int_{-T/2}^{T/2} e^{-2\pi int/T} f(t)\, dt.$$

(I didn’t use the notation $\hat f(n)$ here because I’m reserving that for the case T = 1 to avoid any extra confusion — I’ll allow that this might be too fussy.)
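Here is a sketch checking the period-T formula numerically (T = 3 and the test signal are arbitrary choices of mine; a Riemann sum over one period stands in for the integral):

```python
import numpy as np

T = 3.0
f = lambda t: np.cos(2 * np.pi * t / T)     # period T; should give c_{+-1} = 1/2

def coefficient(n, M=20000):
    t = np.arange(M) * T / M                # M samples of one period [0, T)
    # (1/T) * int_0^T e^{-2 pi i n t / T} f(t) dt, as a Riemann sum:
    return np.mean(np.exp(-2j * np.pi * n * t / T) * f(t))

for n in range(-2, 3):
    print(n, np.round(coefficient(n), 6))   # c_{-1} = c_1 = 0.5, the rest ~ 0
```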

Remark As we’ll see later, there are reasons to consider the harmonics to be

$$\frac{1}{\sqrt{T}}\, e^{2\pi int/T}$$

and the Fourier coefficients for nonzero n then to be

$$c_n = \frac{1}{\sqrt{T}} \int_0^T e^{-2\pi int/T} f(t)\, dt.$$

This makes no difference in the final formula for the series because we have two factors of $1/\sqrt{T}$ coming in, one from the differently normalized Fourier coefficient and one from the differently normalized complex exponential.

Time domain / frequency domain reciprocity Here’s the phenomenon that this calculation illustrates. As we’ve just seen, if f(t) has period T and has a Fourier series expansion then

$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{2\pi int/T}.$$

We observe from this an important reciprocal relationship between properties of the signal in the time domain (if we think of the variable t as representing time) and properties of the signal as displayed in the frequency domain, i.e., properties of the spectrum. In the time domain the signal repeats after T seconds, while the points in the spectrum are 0, ±1/T, ±2/T, …, which are spaced 1/T apart. (Of course for period T = 1 the spacing in the spectrum is also 1.) Want an aphorism for this?


The larger the period in time the smaller the spacing of the spectrum. The smaller the period in time, the larger the spacing of the spectrum.

Thinking, loosely, of long periods as slow oscillations and short periods as fast oscillations, convince yourself that the aphorism makes intuitive sense. If you allow yourself to imagine letting T → ∞ you can allow yourself to imagine the discrete set of frequencies becoming a continuum of frequencies.

We’ll see many instances of this aphorism. We’ll also have other such aphorisms — they’re meant to help you organize your understanding and intuition for the subject and for the applications.

1.7 Two Examples and a Warning

All this is fine, but does it really work? That is, given a periodic function can we expect to write it as a sum of exponentials in the way we have described? Let’s look at an example.

Consider a square wave of period 1, such as illustrated below.

[Figure: square wave of period 1, equal to +1 on [0, 1/2) and −1 on [1/2, 1), repeated over −2 ≤ t ≤ 2.]

Let’s calculate the Fourier coefficients. The function is

$$f(t) = \begin{cases} +1 & 0 \le t < \tfrac{1}{2} \\ -1 & \tfrac{1}{2} \le t < 1 \end{cases}$$

and then extended to be periodic of period 1. The zeroth coefficient is the average value of the function on 0 ≤ t ≤ 1. Obviously this is zero. For the other coefficients we have

$$\hat f(n) = \int_0^1 e^{-2\pi int} f(t)\, dt = \int_0^{1/2} e^{-2\pi int}\, dt - \int_{1/2}^{1} e^{-2\pi int}\, dt$$
$$= \Bigl[-\frac{1}{2\pi in} e^{-2\pi int}\Bigr]_0^{1/2} - \Bigl[-\frac{1}{2\pi in} e^{-2\pi int}\Bigr]_{1/2}^{1} = \frac{1}{\pi in}\bigl(1 - e^{-\pi in}\bigr).$$

We should thus consider the infinite Fourier series

$$\sum_{n \neq 0} \frac{1}{\pi in}\bigl(1 - e^{-\pi in}\bigr)\, e^{2\pi int}.$$


We can write this in a simpler form by first noting that

$$1 - e^{-\pi in} = \begin{cases} 0 & n \text{ even} \\ 2 & n \text{ odd} \end{cases}$$

so the series becomes

$$\sum_{n \text{ odd}} \frac{2}{\pi in}\, e^{2\pi int}.$$

Now combine the positive and negative terms and use

$$e^{2\pi int} - e^{-2\pi int} = 2i \sin 2\pi nt.$$

Substituting this into the series and writing n = 2k + 1, our final answer is

$$\frac{4}{\pi} \sum_{k=0}^{\infty} \frac{1}{2k+1} \sin 2\pi(2k+1)t.$$

(Note that the function f (t) is odd and this jibes with the Fourier series having only sine terms.)

What kind of series is this? In what sense does it converge, if at all, and to what does it converge, i.e., can we represent f(t) as a Fourier series through

$$f(t) = \frac{4}{\pi} \sum_{k=0}^{\infty} \frac{1}{2k+1} \sin 2\pi(2k+1)t\,?$$

The graphs below are sums of terms up to frequencies 9 and 39, respectively.

[Figure: partial sums of the square wave series through frequencies 9 and 39, plotted for 0 ≤ t ≤ 2.]
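Here is a sketch that regenerates such partial sums (the function name and grid are my own choices); the stubborn overshoot you will see at the jumps is exactly the trouble at the corners discussed just below:

```python
import numpy as np
import matplotlib.pyplot as plt

def square_partial_sum(t, top_frequency):
    # (4/pi) * sum over odd n <= top_frequency of sin(2 pi n t) / n
    n = np.arange(1, top_frequency + 1, 2)[:, None]   # odd harmonics 1, 3, 5, ...
    return (4 / np.pi) * np.sum(np.sin(2 * np.pi * n * t) / n, axis=0)

t = np.linspace(0, 2, 2000)
for top in (9, 39):
    plt.plot(t, square_partial_sum(t, top), label=f'harmonics up to {top}')
plt.legend()
plt.show()   # the overshoot at the jumps narrows but never shrinks in height
```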

We see a strange phenomenon. We certainly see the general shape of the square wave, but there is trouble at the corners. Clearly, in retrospect, we shouldn’t expect to represent a function like the square wave by a finite sum of complex exponentials. Why? Because a finite sum of continuous functions is continuous and the square wave has jump discontinuities. Thus, for maybe the first time in your life, one of those theorems from calculus that seemed so pointless at the time makes an appearance: The sum of two (or a finite number) of continuous functions is continuous. Whatever else we may be able to conclude about a Fourier series representation for a square wave, it must contain arbitrarily high frequencies. We’ll say what else needs to be said next time.

I picked the example of a square wave because it’s easy to carry out the integrations needed to find the Fourier coefficients. However, it’s not only a discontinuity that forces high frequencies. Take a triangle wave, say defined by

$$f(t) = \begin{cases} \tfrac{1}{2} + t & -\tfrac{1}{2} \le t \le 0 \\ \tfrac{1}{2} - t & 0 \le t \le \tfrac{1}{2} \end{cases}$$

and then extended to be periodic of period 1. This is continuous. There are no jumps, though there are corners. (Draw your own graph!) A little more work than for the square wave shows that we want the infinite Fourier series

$$\frac{1}{4} + \sum_{k=0}^{\infty} \frac{2}{\pi^2(2k+1)^2} \cos(2\pi(2k+1)t).$$

I won’t reproduce the calculations in public; the calculation of the coefficients needs integration by parts.

Here, too, there are only odd harmonics and there are infinitely many. This time the series involves only cosines, a reflection of the fact that the triangle wave is an even function. Note also that for the triangle wave the coefficients decrease like $1/n^2$, while for the square wave they decrease like $1/n$. I alluded to this sort of thing, above (the size of the coefficients); it has exactly to do with the fact that the square wave is discontinuous while the triangle wave is continuous but its derivative is discontinuous. So here is yet another occurrence of one of those calculus theorems: The sines and cosines are differentiable to all orders, so any finite sum of them is also differentiable. We therefore should not expect a finite Fourier series to represent the triangle wave, which has corners.


How good a job do the finite sums do in approximating the triangle wave? I’ll let you use your favorite software to plot some approximations. You will observe something different from what happened with the square wave. We’ll come back to this, too.

[Figure: partial-sum approximations of the triangle wave, plotted for −1 ≤ t ≤ 1.]
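In that spirit, here is one such sketch (the cutoffs K are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

def triangle_partial_sum(t, K):
    # 1/4 + sum_{k=0}^{K} 2 cos(2 pi (2k+1) t) / (pi^2 (2k+1)^2)
    n = (2 * np.arange(K + 1) + 1)[:, None]           # odd harmonics
    return 0.25 + np.sum(2 * np.cos(2 * np.pi * n * t) / (np.pi * n) ** 2, axis=0)

t = np.linspace(-1, 1, 2000)
for K in (1, 4):
    plt.plot(t, triangle_partial_sum(t, K), label=f'K = {K}')
plt.legend()
plt.show()   # convergence is visibly faster than for the square wave; no overshoot
```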

One thing common to these two examples might be stated as another aphorism:

It takes high frequencies to make sharp corners.

This particular aphorism is important, for example, in questions of filtering, a topic we’ll consider in detail later:

• Filtering means cutting off.


• Cutting off means sharp corners.

• Sharp corners means high frequencies.

This comes up in computer music, for example. If you’re not careful to avoid discontinuities in filtering the signal (the music) you’ll hear clicks — symptoms of high frequencies — when the signal is played back.

A sharp cutoff will inevitably yield an unsatisfactory result, so you have to design your filters to minimize this problem.

Why do instruments sound different? More precisely, why do two instruments sound different even when they are playing the same note? It’s because the note they produce is not a single sinusoid of a single frequency, not the A at 440 Hz, for example, but a sum (literally) of many sinusoids, each contributing a different amount. The complex wave that reaches your ear is the combination of many ingredients. Two instruments sound different because of the harmonics they produce and because of the strength of the harmonics.

Shown below are approximately the waveforms (what you’d see on an oscilloscope) for a bassoon and a flute both playing the same note, and the power spectrum of the respective waves — what you’d see on a spectrum analyzer, if you’ve ever worked with one. The height of the bars corresponds to the energy of the individual harmonics, as explained above. Only the positive harmonics are displayed here. The pictures are highly simplified; in reality a spectrum analyzer would display hundreds of frequencies.

[Figure: waveforms and power spectra for a bassoon and a flute playing the same note.]

The spectral representation — the frequency domain — gives a much clearer explanation of why the instruments sound different than does the time domain signal. You can see how the ingredients differ and by how much. The spectral representation also offers many opportunities for varieties of signal processing that would not be so easy to do or even to imagine in the time domain. It’s easy to imagine pushing some bars down, pulling others up, or eliminating blocks, operations whose actions in the time domain are far from clear.


As an aside, I once asked Julius Smith, an expert in computer music here at Stanford, why orchestras tune to an oboe playing an A. I thought it might be because the oboe produces a very pure note, mostly a perfect 440 with very few other harmonics, and this would be desirable. In fact, it seems just the opposite is the case. The spectrum of the oboe is very rich, plenty of harmonics. This is good, apparently, because whatever instrument you happen to play, there’s a little bit of you in the oboe and vice versa. That helps you tune.

For a detailed discussion of the spectra of musical instruments see

http://epubs.siam.org/sam-bin/getfile/SIREV/articles/38228.pdf

1.8 The Math, the Majesty, the End

In previous sections, we worked with the building blocks of periodic functions — sines and cosines and complex exponentials — and considered general sums of such “harmonics”. We also showed that if a periodic function f(t) — period 1, as a convenient normalization — can be written as a sum

$$f(t) = \sum_{n=-N}^{N} c_n e^{2\pi int},$$

then the coefficients are given by the integral

$$c_n = \int_0^1 e^{-2\pi int} f(t)\, dt.$$

This was a pretty straightforward derivation, isolating $c_n$ and then integrating. When f(t) is real, as in many applications, one has the symmetry relation $c_{-n} = \overline{c_n}$. In a story we’ll spin out over the rest of the quarter, we think of this integral as some kind of transform of f, and use the notation

$$\hat f(n) = \int_0^1 e^{-2\pi int} f(t)\, dt$$

to indicate this relationship.[9]

At this stage, we haven’t done much. We have only demonstrated that if it is possible to write a periodic function as a sum of simple harmonics, then it must be done in the way we’ve just said. We also have some examples that indicate the possible difficulties in this sort of representation; an infinite series may be required and then convergence is certainly an issue. But we’re about to do a lot. We’re about to answer the question of how far the idea can be pushed: when can a periodic signal be written as a sum of simple harmonics?

1.8.1 Square integrable functions

There’s much more to the structure of the Fourier coefficients and to the idea of writing a periodic function as a sum of complex exponentials than might appear from our simple derivation. There are:

[9] Notice that although f(t) is defined for a continuous variable t, the transformed function $\hat f$ is defined on the integers. There are reasons for this that are much deeper than just solving for the unknown coefficients as we did last time.


• Algebraic and geometric aspects

◦ The algebraic and geometric aspects are straightforward extensions of the algebra and geometry of vectors in Euclidean space. The key ideas are the inner product (dot product), orthogonality, and norm. We can pretty much cover the whole thing. I remind you that your job here is to transfer your intuition from geometric vectors to a more general setting where the vectors are signals; at least accept that the words transfer in some kind of meaningful way even if the pictures do not.

• Analytic aspects

◦ The analytic aspects are not straightforward and require new ideas on limits and on the nature of integration. The aspect of “analysis” as a field of mathematics distinct from other fields is its systematic use of limiting processes. To define a new kind of limit, or to find new consequences of taking limits (or trying to), is to define a new area of analysis. We really can’t cover the whole thing, and it’s not appropriate to attempt to. But I’ll say a little bit here, and similar issues will come up when we define the Fourier transform.

1.8.2 The punchline revealed

Let me introduce the notation and basic terminology and state what the important results are now, so you can see the point. Then I’ll explain where these ideas come from and how they fit together.

Once again, to be definite we’re working with periodic functions of period 1. We can consider such a function already to be defined for all real numbers, and satisfying the identity f (t + 1) = f (t) for all t, or we can consider f (t) to be defined initially only on the interval from 0 to 1, say, and then extended to be periodic and defined on all of R by repeating the graph (recall the periodizing operation in the first problem set). In either case, once we know what we need to know about the function on [0, 1] we know everything. All of the action in the following discussion takes place on the interval [0, 1].

When f(t) is a signal defined on [0, 1] the energy of the signal is defined to be the integral

$$\int_0^1 |f(t)|^2\, dt.$$

This definition of energy comes up in other physical contexts also; we don’t have to be talking about functions of time. (In some areas the integral of the square is identified with power.) Thus

$$\int_0^1 |f(t)|^2\, dt < \infty$$

means that the signal has finite energy, a reasonable condition to expect or to impose.

I’m writing the definition in terms of the integral of the absolute value squared $|f(t)|^2$ rather than just $f(t)^2$ because we’ll want to consider the definition to apply to complex valued functions. For real-valued functions it doesn’t matter whether we integrate $|f(t)|^2$ or $f(t)^2$.

One further point before we go on. Although our purpose is to use the finite energy condition to work with periodic functions, and though you think of periodic functions as defined for all time, you can see why we have to restrict attention to one period (any period). An integral of a periodic function from −∞ to ∞, for example

$$\int_{-\infty}^{\infty} \sin^2 t\, dt,$$

does not exist (or is infinite).
