
# Discrete Fourier Transform



### The Modern World

According to some, the modern world began in 1965 when J. Cooley and J. Tukey published their account of an efficient method for numerical computation of the Fourier transform.¹ According to others, the method was known to Gauss in the mid-1800s; the idea that lies at the heart of the algorithm is clearly present in an unpublished paper that appeared posthumously in 1866. Take your pick.

Whatever the history, the present and future demands are that we process continuous signals by discrete methods. Computers and digital processing systems can work with finite sums only. To turn the continuous into the discrete and finite requires that a signal be both time-limited and band-limited, something we know cannot be true, and that we take a finite number of samples, something we know cannot suffice. But it works. At least such approximations work to the extent that a large fraction of the world’s economy depends upon them, and that’s not a bad measure of success.

Some would argue that one shouldn’t think in terms of “turning the continuous into the discrete”, but rather that measurements and data in the real world come to us in discrete form, and that’s how we should understand the world and work with it. Period. For some, “discrete” versus “continuous” rises to the level of a religious war, and this battle can be fought out in the different approaches to the discrete Fourier transform. I’m taking sides, at least initially, in favor of “from continuous to discrete” as a way of motivating the definition. For one thing, we have built up a lot of intuition and understanding for the Fourier transform, its properties, and its uses, and I hope some of that intuition can transfer as we now work with the discrete Fourier transform. My choice for now is to make the discrete look as similar to the continuous as possible.

### 6.1 From Continuous to Discrete

Start with a signal f (t) and its Fourier transform F f (s), both functions of a continuous variable. We want to:

• Find a discrete version of f (t) that’s a reasonable approximation of f (t).

• Find a discrete version of F f (s) that’s a reasonable approximation of F f (s).

¹ Incidentally, Tukey is also credited with coining the term “bit” as an abbreviation for “binary digit” — how about that for immortality!


• Find a way that the discrete version of F f (s) is related to the discrete version of f (t) that’s a reasonable approximation to the way F f (s) is related to f (t).

Good things to try for, but it’s not quite straightforward. Here’s the setup.

We suppose that f (t) is zero outside of 0 ≤ t ≤ L. We also suppose that the Fourier transform F f (s) is zero, or effectively zero (beyond our ability to measure, negligible energy, whatever) outside of 0 < s < 2B.

I’m taking the support of F f to be the interval from 0 to 2B instead of −B to B because it will make the initial indexing of the sample points easier; this will not be an issue in the end. We’ll also take L and B to both be integers so we don’t have to round up or down in any of the considerations that follow; you can think of that as our first concession to the discrete.

Thus we are regarding f (t) as both time-limited and band-limited, with the knowledge that this can only be approximately true. Remember, however, that we’re ultimately going to come up with a definition of a discrete Fourier transform that will make sense in and of itself regardless of these shaky initial assumptions.

After the definition is written down we could erase all that came before it, or merely cast a brief glance backwards from the discrete to the continuous with a few comments on how the former approximates the latter. Many treatments of the discrete Fourier transform that start with the discrete and stay with the discrete do just that. We’re trying not to do that.

According to the sampling theorem (misapplied here, but play along), we can reconstruct f (t) perfectly from its samples if we sample at the rate 2B samples per second. Since f (t) is defined on an interval of length L and the samples are 1/2B apart, that means that we want a total of

$$
N = \frac{L}{1/2B} = 2BL
$$

(note that $N$ is therefore even) evenly spaced samples, at the points

$$
t_0 = 0,\quad t_1 = \frac{1}{2B},\quad t_2 = \frac{2}{2B},\quad \dots,\quad t_{N-1} = \frac{N-1}{2B}\,.
$$

To know the values $f(t_k)$ is to know $f(t)$, reasonably well. Thus we state:

• The discrete version of f (t) is the list of sampled values f (t0), f (t1), . . . , f (tN −1).

Next, represent the discrete version of f (t) (the list of sampled values) “continuously” with the aid of a finite impulse train (a finite III-function) at the sample points:

$$
\sum_{n=0}^{N-1} \delta(t - t_n)
$$

that is,

$$
f_{\mathrm{discrete}}(t) = f(t) \sum_{n=0}^{N-1} \delta(t - t_n) = \sum_{n=0}^{N-1} f(t_n)\,\delta(t - t_n)\,.
$$

This is what we have considered previously as the sampled form of f (t). The Fourier transform of fdiscrete is

$$
\mathcal{F}f_{\mathrm{discrete}}(s) = \sum_{n=0}^{N-1} f(t_n)\,\mathcal{F}\delta(t - t_n) = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s t_n}\,.
$$


This is close to what we want — it’s the continuous Fourier transform of the sampled form of f (t).

Now let’s change perspective and look at things in the frequency domain. The function f (t) is limited to 0 ≤ t ≤ L, and this determines a sampling rate for reconstructing F f (s) from its samples in the frequency domain. The sampling rate is 1/L. (Not 2/L: think about how you would derive the sampling formula when the function is nonzero over an interval from 0 to p/2 rather than −p/2 to +p/2.) We sample F f (s) over the interval from 0 to 2B in the frequency domain at points spaced 1/L apart. The number of sample points is

$$
\frac{2B}{1/L} = 2BL = N\,,
$$

the same number of sample points as for f (t). The sample points for F f (s) are of the form m/L, and there are N of them:

$$
s_0 = 0,\quad s_1 = \frac{1}{L},\quad \dots,\quad s_{N-1} = \frac{N-1}{L}\,.
$$

The discrete version of F f (s) that we take is not F f (s) evaluated at these sample points sm. Rather, it is F fdiscrete(s) evaluated at the sample points. We base the approximation of F f (s) on the discrete version of f (t). To ease the notation write F (s) for F fdiscrete(s). Then:

• The discrete version of F f (s) is the list of values

$$
F(s_0) = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s_0 t_n},\quad F(s_1) = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s_1 t_n},\quad \dots,\quad F(s_{N-1}) = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s_{N-1} t_n}\,.
$$

By this definition, we now have a way of going from the discrete version of f (t) to the discrete version of F f (s), namely,

$$
F(s_m) = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s_m t_n}\,.
$$

These sums, one for each m from m = 0 to m = N − 1, are supposed to be an approximation to the Fourier transform going from f (t) to F f (s). In what sense is this a discrete approximation to the Fourier transform? Here’s one way of looking at it. Since f (t) is time-limited to 0 ≤ t ≤ L, we have

$$
\mathcal{F}f(s) = \int_0^L e^{-2\pi i s t} f(t)\,dt\,.
$$

Thus, at the sample points $s_m$,

$$
\mathcal{F}f(s_m) = \int_0^L e^{-2\pi i s_m t} f(t)\,dt\,,
$$

and to know the values F f (sm) is to know F f (s) reasonably well. Now use the sample points tk for f (t) to write a Riemann sum approximation for the integral. The spacing ∆t of the points is 1/2B, so

$$
\mathcal{F}f(s_m) = \int_0^L f(t)\,e^{-2\pi i s_m t}\,dt \;\approx\; \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s_m t_n}\,\Delta t \;=\; \frac{1}{2B} \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s_m t_n} \;=\; \frac{1}{2B}\,F(s_m)\,.
$$

This is the final point:

• Up to the factor 1/2B, the values F (sm) provide an approximation to the values F f (sm).
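This last point is easy to check numerically. The following Python sketch uses an arbitrary test signal (a Gaussian bump on [0, L], with the illustrative choices L = 4 and B = 4; these numbers and all the helper names are ours, not from the text) and compares (1/2B)·F(s_m) against a much finer Riemann sum standing in for the integral defining F f(s_m):

```python
import cmath

# Illustrative setup: L = 4, assumed band limit 2B = 8, so N = 2BL = 32 samples.
L, B = 4, 4
N = 2 * B * L
dt = 1 / (2 * B)

def f(t):
    # a smooth bump centered in [0, L]; effectively time-limited
    return cmath.exp(-4 * (t - L / 2) ** 2).real

def F_discrete(m):
    # F(s_m) = sum_n f(t_n) e^{-2 pi i s_m t_n}, with t_n = n/2B, s_m = m/L
    s_m = m / L
    return sum(f(n * dt) * cmath.exp(-2j * cmath.pi * s_m * n * dt)
               for n in range(N))

def F_integral(m, steps=4000):
    # fine Riemann sum standing in for the integral of f(t) e^{-2 pi i s_m t}
    s_m = m / L
    h = L / steps
    return sum(f(k * h) * cmath.exp(-2j * cmath.pi * s_m * k * h) * h
               for k in range(steps))

# (1/2B) F(s_m) approximates the integral, i.e. F f(s_m)
for m in range(4):
    assert abs(F_discrete(m) * dt - F_integral(m)) < 1e-3
```

For this smooth, well-localized test signal the agreement is far better than the stated tolerance; the point is only that the N-term sum times Δt really does track the integral.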


Writing a Riemann sum as an approximation to the integral defining F f (sm) essentially discretizes the integral, and this is an alternate way of getting to the expression for F (sm), up to the factor 1/2B. We short-circuited this route by working directly with F fdiscrete(s).

You may find the “up to the factor 1/2B” unfortunate in this part of the discussion, but it’s in the nature of the subject. In fact, back in Chapter 2 we encountered a similar kind of “up to the factor . . . ” phenomenon when we obtained the Fourier transform as a limit of the Fourier coefficients for a Fourier series.

We are almost ready for a definition, but there’s one final comment to clear the way for that. Use the definition of the sample points

$$
t_n = \frac{n}{2B}\,, \qquad s_m = \frac{m}{L}
$$

to write

$$
F(s_m) = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s_m t_n} = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i nm/2BL} = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i nm/N}\,.
$$

This form of the exponential, $e^{-2\pi i nm/N}$, puts more emphasis on the index of the inputs (n) and outputs (m) and on the number of points (N) and “hides” the sample points themselves. That’s the last step toward the discrete.

### 6.2 The Discrete Fourier Transform (DFT)

The development in the previous section suggests a general definition. Instead of thinking in terms of sampled values of a continuous signal and sampled values of its Fourier transform, we may think of the discrete Fourier transform as an operation that accepts as input a list of N numbers and returns as output a list of N numbers.

There are actually a number of things to say about the inputs and outputs of this operation, and we won’t try to say them all at once. For the present, suffice it to say that we’ll generally use the vector and “discrete signal” notation and write N -tuples as

**f** = (f[0], f[1], . . . , f[N − 1]) .

I’ll write vectors in boldface. If you want to use another notation, that’s fine, but pick something — for much of this chapter you really will need a notation to distinguish a vector from a scalar. Note that the indexing goes from 0 to N − 1 rather than from 1 to N . This is one of the things we’ll comment on later.

Here’s the definition of the discrete Fourier transform.

• Let **f** = (f[0], f[1], . . . , f[N − 1]) be an N-tuple. The discrete Fourier transform (DFT) of **f** is the N-tuple **F** = (F[0], F[1], . . . , F[N − 1]) defined by

$$
F[m] = \sum_{n=0}^{N-1} f[n]\,e^{-2\pi i mn/N}\,, \qquad m = 0, 1, \dots, N-1\,.
$$

It’s perfectly legitimate to let the inputs f [n] be complex numbers, though for applications to signals they’ll typically be real. The computed values F[m] are complex, being sums of complex exponentials.
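Translated directly into code, the definition reads as follows (a Python sketch; the helper name `dft` is ours):

```python
import cmath

def dft(f):
    """Direct DFT from the definition: F[m] = sum_n f[n] e^{-2 pi i m n / N}."""
    N = len(f)
    return [sum(f[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

# A quick check on an input we can transform by hand: f = (1, 0, 0, 0)
# (the discrete delta that appears later in this chapter) has DFT (1, 1, 1, 1).
F = dft([1, 0, 0, 0])
assert all(abs(c - 1) < 1e-12 for c in F)
```

This is the O(N²) sum straight from the definition, useful as a reference; the FFT computes the same thing faster.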

#### 6.2.1 Notation and conventions 1

I said that I wanted to set things up to look as much like the continuous case as possible. A little notation can go a long way here.


First off, I want to take Matlab seriously. Part of Matlab’s usefulness is to formulate operations and commands in terms of vectors and to operate componentwise on vectors with many of the ordinary rules of arithmetic. We’ll do the same here. So, for example, if

**x** = (x[0], x[1], . . . , x[N − 1]) and **y** = (y[0], y[1], . . . , y[N − 1]) then, by definition,

$$
\mathbf{x}\,\mathbf{y} = (x[0]y[0],\ x[1]y[1],\ \dots,\ x[N-1]y[N-1])
$$

(componentwise product of the vectors, not to be confused with the dot product),

$$
\frac{\mathbf{x}}{\mathbf{y}} = \Bigl(\frac{x[0]}{y[0]},\ \frac{x[1]}{y[1]},\ \dots,\ \frac{x[N-1]}{y[N-1]}\Bigr) \quad \text{when the individual quotients make sense,}
$$

$$
\mathbf{x}^p = (x[0]^p,\ x[1]^p,\ \dots,\ x[N-1]^p) \quad \text{when the individual powers make sense,}
$$

and so on. These operations are all standard in Matlab. We even allow a function of one variable (think sine or cosine, for example) to operate on a vector componentwise via

$$
f(x[0], x[1], \dots, x[N-1]) = \bigl(f(x[0]),\ f(x[1]),\ \dots,\ f(x[N-1])\bigr)\,.
$$

We also use the Matlab notation [r : s] for the tuple of numbers (r, r + 1, r + 2, . . . , s) — very useful for indexing. Finally, we’ll also write

$$
\mathbf{0} = (0, 0, \dots, 0)
$$

for the zero vector.

**Vector complex exponential** The definition of the discrete Fourier transform — like that of the continuous Fourier transform — involves a complex exponential. We’ll write

$$
\omega = e^{2\pi i/N}
$$

and occasionally we’ll decorate this to

$$
\omega_N = e^{2\pi i/N}
$$

when we want to emphasize the N in a particular formula. Note that

$$
\operatorname{Re}\omega_N = \cos\frac{2\pi}{N}\,, \qquad \operatorname{Im}\omega_N = \sin\frac{2\pi}{N}\,.
$$

We have seen $\omega_N$ in various places in our work. It’s an N-th root of unity, meaning

$$
\omega_N^N = e^{2\pi i N/N} = e^{2\pi i} = 1\,.
$$

Then for any integer n,

$$
\omega_N^{nN} = 1\,,
$$

and in general, for any integers n and k,

$$
\omega_N^{nN+k} = \omega_N^{k}\,.
$$

This comes up often enough that it’s worth pointing out. Also note that when N is even,

$$
\omega_N^{N/2} = e^{2\pi i (N/2)/N} = e^{\pi i} = -1\,, \quad \text{and hence} \quad \omega_N^{kN/2} = (-1)^k\,.
$$

Finally, some people write $\omega_N = e^{-2\pi i/N}$ (minus instead of plus in the exponential) and $W = e^{2\pi i/N}$. Be aware of that if you peruse the literature.
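These identities are easy to verify numerically. A quick Python check, with the illustrative choice N = 8 (ours, not from the text):

```python
import cmath

N = 8
w = cmath.exp(2j * cmath.pi / N)   # omega_N

# omega_N is an N-th root of unity ...
assert abs(w ** N - 1) < 1e-12
# ... so omega_N^{nN + k} = omega_N^k for any integers n and k
assert abs(w ** (3 * N + 5) - w ** 5) < 1e-12
# for even N, omega_N^{N/2} = -1, hence omega_N^{k N/2} = (-1)^k
assert abs(w ** (N // 2) + 1) < 1e-12
assert abs(w ** (3 * N // 2) - (-1) ** 3) < 1e-12
```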


For the discrete Fourier transform, where vectors are naturally involved, it’s helpful to introduce a complex exponential vector. There are N distinct N -th roots of unity, corresponding to the N powers of ωN:

$$
1 = \omega_N^0,\ \omega_N^1,\ \omega_N^2,\ \dots,\ \omega_N^{N-1}\,.
$$

We let

$$
\underline{\omega} = (1, \omega, \omega^2, \dots, \omega^{N-1})
$$

be the vector in $\mathbb{C}^N$ consisting of the N distinct powers of ω. (Drop the subscript N now — it’s understood.) Be careful to keep the underline notation here (or whatever notation you like).

The vector real and imaginary parts of $\underline{\omega}$ are

$$
\operatorname{Re}\underline{\omega} = \cos\Bigl(\frac{2\pi}{N}[0:N-1]\Bigr)\,, \qquad \operatorname{Im}\underline{\omega} = \sin\Bigl(\frac{2\pi}{N}[0:N-1]\Bigr)\,.
$$

See how that works:

$$
\cos\Bigl(\frac{2\pi}{N}[0:N-1]\Bigr)
$$

is short for

$$
\Bigl(1,\ \cos\frac{2\pi}{N},\ \cos\frac{4\pi}{N},\ \dots,\ \cos\frac{2\pi(N-1)}{N}\Bigr)\,.
$$

Also important are the powers of $\underline{\omega}$. We write

$$
\underline{\omega}^{k} = (1, \omega^{k}, \omega^{2k}, \dots, \omega^{(N-1)k})
$$

for the vector of k-th powers. Then also

$$
\underline{\omega}^{-k} = (1, \omega^{-k}, \omega^{-2k}, \dots, \omega^{-(N-1)k})\,.
$$

Note how we write the components:

$$
\underline{\omega}^{k}[m] = \omega^{km}\,, \qquad \underline{\omega}^{-k}[m] = \omega^{-km}\,.
$$

(You can see why it’s important here to use notations that, while similar, can distinguish a vector from a scalar.)

Taking powers of $\underline{\omega}$ is cyclic of order N, meaning that

$$
\underline{\omega}^{N} = (1^N,\ e^{2\pi i N/N},\ e^{4\pi i N/N},\ \dots,\ e^{2\pi i (N-1)N/N}) = (1, 1, 1, \dots, 1)\,.
$$

For shorthand, we’ll write

$$
\mathbf{1} = (1, 1, \dots, 1)
$$

for the vector of all 1’s. Then, compactly,

$$
\underline{\omega}^{N} = \mathbf{1}\,,
$$

and

$$
\underline{\omega}^{nN} = \mathbf{1} \quad \text{and} \quad \underline{\omega}^{nN+k} = \underline{\omega}^{k} \quad \text{for any integers } n \text{ and } k\,.
$$

Along with making the discrete case look like the continuous case goes making vector calculations look like scalar calculations.


**The DFT in vector form** Introducing the vector complex exponential allows us to write the formula defining the discrete Fourier transform in a way, and with a notation, that really looks like a discrete version of the continuous transform. The DFT is given by (defined by)

$$
\underline{\mathcal{F}}\mathbf{f} = \sum_{k=0}^{N-1} f[k]\,\underline{\omega}^{-k}\,.
$$

I’ve underlined the $\mathcal{F}$ to show its vector character. We’ll write $\underline{\mathcal{F}}_N$ if we need to call attention to the N. To emphasize once again its nature: the DFT of a vector is another vector. The components of $\underline{\mathcal{F}}\mathbf{f}$ are its values at the points m = 0, 1, . . . , N − 1:

$$
\underline{\mathcal{F}}\mathbf{f}[m] = \sum_{k=0}^{N-1} f[k]\,\underline{\omega}^{-k}[m] = \sum_{k=0}^{N-1} f[k]\,\omega^{-km} = \sum_{k=0}^{N-1} f[k]\,e^{-2\pi i km/N}\,.
$$

We note one special value:

$$
\underline{\mathcal{F}}\mathbf{f}[0] = \sum_{k=0}^{N-1} f[k]\,\underline{\omega}^{-k}[0] = \sum_{k=0}^{N-1} f[k]\,,
$$

the sum of the components of the input f . For this reason some people define the DFT with a 1/N in front, so that the zeroth component of the output is the average of the components of the input, like the zeroth Fourier coefficient of a periodic function is the average of the function over one period. We’re not doing this.
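A quick numerical confirmation that the zeroth output component is the plain sum of the inputs (a Python sketch; the `dft` helper and the test vector are ours):

```python
import cmath

def dft(f):
    # direct implementation of the defining sum
    N = len(f)
    return [sum(f[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N))
            for m in range(N)]

f = [3.0, -1.0, 4.0, 1.0, -5.0, 9.0]
F = dft(f)

# the zeroth component is the sum of the components of the input
assert abs(F[0] - sum(f)) < 1e-12
```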

**The DFT in matrix form** The DFT takes vectors to vectors, and it does so linearly. To state this formally as a property:

$$
\underline{\mathcal{F}}(\mathbf{f}_1 + \mathbf{f}_2) = \underline{\mathcal{F}}\mathbf{f}_1 + \underline{\mathcal{F}}\mathbf{f}_2 \qquad \text{and} \qquad \underline{\mathcal{F}}(\alpha\mathbf{f}) = \alpha\,\underline{\mathcal{F}}\mathbf{f}\,.
$$

Showing this is easy, and make sure you see why it’s easy; e.g.,

$$
\begin{aligned}
\underline{\mathcal{F}}(\mathbf{f}_1 + \mathbf{f}_2) &= \sum_{k=0}^{N-1} (\mathbf{f}_1 + \mathbf{f}_2)[k]\,\underline{\omega}^{-k} = \sum_{k=0}^{N-1} \bigl(f_1[k] + f_2[k]\bigr)\,\underline{\omega}^{-k} \\
&= \sum_{k=0}^{N-1} f_1[k]\,\underline{\omega}^{-k} + \sum_{k=0}^{N-1} f_2[k]\,\underline{\omega}^{-k} = \underline{\mathcal{F}}\mathbf{f}_1 + \underline{\mathcal{F}}\mathbf{f}_2\,.
\end{aligned}
$$

As a linear transformation from $\mathbb{C}^N$ to $\mathbb{C}^N$, the DFT, $\mathbf{F} = \underline{\mathcal{F}}\mathbf{f}$, is exactly the matrix equation

$$
\begin{pmatrix} F[0] \\ F[1] \\ F[2] \\ \vdots \\ F[N-1] \end{pmatrix}
=
\begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega^{-1\cdot 1} & \omega^{-1\cdot 2} & \cdots & \omega^{-(N-1)} \\
1 & \omega^{-2\cdot 1} & \omega^{-2\cdot 2} & \cdots & \omega^{-2(N-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{-(N-1)\cdot 1} & \omega^{-(N-1)\cdot 2} & \cdots & \omega^{-(N-1)^2}
\end{pmatrix}
\begin{pmatrix} f[0] \\ f[1] \\ f[2] \\ \vdots \\ f[N-1] \end{pmatrix}.
$$

That is, the discrete Fourier transform is the big old N × N matrix

$$
\underline{\mathcal{F}} =
\begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega^{-1} & \omega^{-2} & \cdots & \omega^{-(N-1)} \\
1 & \omega^{-2} & \omega^{-4} & \cdots & \omega^{-2(N-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{-(N-1)} & \omega^{-2(N-1)} & \cdots & \omega^{-(N-1)^2}
\end{pmatrix}.
$$


Again notice that we take the indices for f and F to go from 0 to N − 1 instead of from 1 to N . This is a standard convention in the subject, but it clashes with the conventions of matrix terminology, where for an N × N matrix we usually write the indices from 1 to N . Tough break, but it seems to be etched in stone — be careful.
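As a check that the matrix picture agrees with the defining sum, here is a Python sketch that builds the matrix entrywise as ω^(−mn), indices from 0, and compares the matrix-vector product with the formula for an arbitrary input (N = 4 is an illustrative choice):

```python
import cmath

N = 4
w = cmath.exp(2j * cmath.pi / N)

# the N x N DFT matrix: (m, n) entry is omega^{-mn}
F_matrix = [[w ** (-m * n) for n in range(N)] for m in range(N)]

def matvec(A, x):
    return [sum(A[m][n] * x[n] for n in range(N)) for m in range(N)]

f = [1.0, 2.0, 0.0, -1.0]

# the defining sum, component by component
F_sum = [sum(f[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
         for m in range(N)]
F_mat = matvec(F_matrix, f)

assert all(abs(a - b) < 1e-12 for a, b in zip(F_mat, F_sum))
```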

**Positive and negative frequencies** The heading of this section may seem a little odd, since for $\mathbf{F} = \underline{\mathcal{F}}\mathbf{f}$ we index the output $\mathbf{F} = (F[0], F[1], \dots, F[N-1])$, and hence the points in the spectrum, from 0 to N − 1 — no negative indices in sight. It will seem less odd when we discuss reindexing, but this requires a little more preparation. For now, and for use in the example to follow, there is something important to point out about the values of $\underline{\mathcal{F}}\mathbf{f}$ that is analogous to $\mathcal{F}f(-s) = \overline{\mathcal{F}f(s)}$ in the continuous case when f(t) is real.

Suppose that N is even. (This was the case in our derivation of the formula for the DFT, and it’s often assumed, though it’s not necessary for the ultimate definition of $\underline{\mathcal{F}}$.) Suppose also that we consider real inputs **f** = (f[0], f[1], . . . , f[N − 1]). Something special happens at the midpoint, N/2, of the spectrum.

We find

$$
\underline{\mathcal{F}}\mathbf{f}[N/2] = \sum_{k=0}^{N-1} f[k]\,\underline{\omega}^{-k}[N/2] = \sum_{k=0}^{N-1} f[k]\,\omega^{-kN/2} = \sum_{k=0}^{N-1} f[k]\,e^{-\pi i k} = \sum_{k=0}^{N-1} (-1)^{-k} f[k] \qquad (\text{using } \omega^{N/2} = -1)\,.
$$

The value of the transform at N/2 is thus an alternating sum of the components of the input vector **f**. In particular, $\underline{\mathcal{F}}\mathbf{f}[N/2]$ is real.

More than $\underline{\mathcal{F}}\mathbf{f}[N/2]$ being real, though, is that the spectrum “splits” at N/2. For a start, look at $\underline{\mathcal{F}}\mathbf{f}[(N/2)+1]$ and $\underline{\mathcal{F}}\mathbf{f}[(N/2)-1]$:

$$
\underline{\mathcal{F}}\mathbf{f}\bigl[\tfrac{N}{2}+1\bigr] = \sum_{k=0}^{N-1} f[k]\,\underline{\omega}^{-k}\bigl[\tfrac{N}{2}+1\bigr] = \sum_{k=0}^{N-1} f[k]\,\omega^{-k}\,\omega^{-Nk/2} = \sum_{k=0}^{N-1} f[k]\,\omega^{-k}(-1)^{-k}\,,
$$

$$
\underline{\mathcal{F}}\mathbf{f}\bigl[\tfrac{N}{2}-1\bigr] = \sum_{k=0}^{N-1} f[k]\,\underline{\omega}^{-k}\bigl[\tfrac{N}{2}-1\bigr] = \sum_{k=0}^{N-1} f[k]\,\omega^{k}\,\omega^{-Nk/2} = \sum_{k=0}^{N-1} f[k]\,\omega^{k}(-1)^{-k}\,.
$$

Comparing the two calculations (and remembering that the f[k] are real) we see that

$$
\underline{\mathcal{F}}\mathbf{f}\bigl[\tfrac{N}{2}+1\bigr] = \overline{\underline{\mathcal{F}}\mathbf{f}\bigl[\tfrac{N}{2}-1\bigr]}\,.
$$

Similarly, we get

$$
\underline{\mathcal{F}}\mathbf{f}\bigl[\tfrac{N}{2}+2\bigr] = \sum_{k=0}^{N-1} f[k]\,\omega^{-2k}(-1)^{-k}\,, \qquad \underline{\mathcal{F}}\mathbf{f}\bigl[\tfrac{N}{2}-2\bigr] = \sum_{k=0}^{N-1} f[k]\,\omega^{2k}(-1)^{-k}\,,
$$

so that

$$
\underline{\mathcal{F}}\mathbf{f}\bigl[\tfrac{N}{2}+2\bigr] = \overline{\underline{\mathcal{F}}\mathbf{f}\bigl[\tfrac{N}{2}-2\bigr]}\,.
$$

This pattern persists down to

$$
\underline{\mathcal{F}}\mathbf{f}[1] = \sum_{k=0}^{N-1} f[k]\,\omega^{-k}
$$

and

$$
\underline{\mathcal{F}}\mathbf{f}[N-1] = \sum_{k=0}^{N-1} f[k]\,\omega^{-k(N-1)} = \sum_{k=0}^{N-1} f[k]\,\omega^{k}\,\omega^{-kN} = \sum_{k=0}^{N-1} f[k]\,\omega^{k}\,,
$$


i.e., to

$$
\underline{\mathcal{F}}\mathbf{f}[1] = \overline{\underline{\mathcal{F}}\mathbf{f}[N-1]}\,.
$$

Here’s where it stops; recall that $\underline{\mathcal{F}}\mathbf{f}[0]$ is the sum of the components of **f**.

Because of this result (with an alternate explanation later in this chapter), when the spectrum is indexed from 0 to N − 1 the convention is to say that the frequencies from m = 1 to m = N/2 − 1 are the positive frequencies and those from N/2 + 1 to N − 1 are the negative frequencies. Whatever adjectives one uses, the important upshot is that, for a real input **f**, all the information in the spectrum is in the first component $\underline{\mathcal{F}}\mathbf{f}[0]$ (the “DC” component, the sum of the components of the input), the components $\underline{\mathcal{F}}\mathbf{f}[1], \underline{\mathcal{F}}\mathbf{f}[2], \dots, \underline{\mathcal{F}}\mathbf{f}[N/2-1]$, and the special value $\underline{\mathcal{F}}\mathbf{f}[N/2]$ (the alternating sum of the components of the input). The remaining components of $\underline{\mathcal{F}}\mathbf{f}$ are just the complex conjugates of those from 1 to N/2 − 1.

As we’ll see, this has practical importance.
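Here is a numerical illustration of the splitting (a Python sketch; the real test vector with N = 8 is an arbitrary choice of ours):

```python
import cmath

def dft(f):
    N = len(f)
    return [sum(f[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N))
            for m in range(N)]

f = [2.0, 7.0, -3.0, 1.0, 5.0, -2.0, 0.0, 4.0]   # real input, N = 8
F = dft(f)
N = len(f)

# F[N/2] is the alternating sum of the inputs, and in particular real
alt = sum((-1) ** k * f[k] for k in range(N))
assert abs(F[N // 2] - alt) < 1e-9

# the "negative frequencies" N/2+1, ..., N-1 are conjugates
# of the positive frequencies N/2-1, ..., 1
for m in range(1, N // 2):
    assert abs(F[N // 2 + m] - F[N // 2 - m].conjugate()) < 1e-9
```

This is exactly why, in practice, real signals only require storing about half the spectrum.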

### 6.3 Two Grids, Reciprocally Related

Refer back to our understanding that the DFT finds the sampled Fourier transform of a sampled signal.

We have a grid of points in the time domain and a grid of points in the frequency domain where the discrete version of the signal and the discrete version of its Fourier transform are known. More precisely, shifting to the discrete point of view, the values of the signal at the points in the time domain are all we know about the signal and the values we compute according to the DFT formula are all we know about its transform.

In the time domain the signal is limited to an interval of length L. In the frequency domain the transform is limited to an interval of length 2B. When you plot the discrete signal and its DFT (or rather, e.g., the magnitude of its DFT since the DFT is complex), you should (probably) plot over these intervals (but your software might not give you this for the DFT). The grid points in the time domain are spaced 1/2B apart. The grid points in the frequency domain are spaced 1/L apart, so note (again) that the spacing in one domain is determined by properties of the function in the other domain. The two grid spacings are related to the third quantity in the setup, the number of sample points, N . The equation is

$$
N = 2BL\,.
$$

Any two of these quantities — B, L, or N — determine the third via this relationship. The equation is often written another way, in terms of the grid spacings. If ∆t = 1/2B is the grid spacing in the time domain and ∆ν = 1/L is the grid spacing in the frequency domain, then

$$
\frac{1}{N} = \Delta t\,\Delta\nu\,.
$$

These two (equivalent) equations are referred to as the reciprocity relations. A thing to put into your head is that for a fixed number of sample points N making ∆t small means making ∆ν large, and vice versa.

Here’s why all this is important. For a given problem you want to solve — for a given signal you want to analyze by taking the Fourier transform — you typically either know or choose two of:

• How long the signal lasts, i.e., how long you’re willing to sit there taking measurements — that’s L.

• How many measurements you make — that’s N .

• How often you make a measurement — that’s ∆t.

Once two of these are determined everything else is set.
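A tiny worked example of the reciprocity relations (the values L = 2 and B = 128 are illustrative choices of ours):

```python
# Reciprocity relations: N = 2BL and 1/N = (delta t)(delta nu).
# Illustrative setup: record for L = 2 seconds with an assumed
# band limit of B = 128 Hz.
L, B = 2, 128
N = 2 * B * L        # number of samples
dt = 1 / (2 * B)     # time-domain grid spacing
dnu = 1 / L          # frequency-domain grid spacing

assert N == 512
assert abs(dt * dnu - 1 / N) < 1e-15

# For fixed N, shrinking dt must enlarge dnu (and vice versa),
# since their product is pinned to 1/N.
```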


### 6.4 Appendix: Gauss’s Problem

Finally, here’s the problem Gauss considered on representing the orbit of an asteroid by a finite Fourier series. Gauss was interested in astronomy, as he was in everything else, and occupied himself for a period in calculating orbits of recently discovered asteroids. This led to two great computational techniques. One was taken up right away (least squares curve fitting) and the other was forgotten (the algorithm for efficient computation of Fourier coefficients — the fast Fourier transform — as mentioned above).

It was in calculating the orbit of Pallas that Gauss introduced methods that became the FFT algorithm. He had twelve data points for the orbit, relating the “ascension” θ, measured in degrees, and the “declination” X, measured in minutes of arc. It appears from the data that X depends periodically on θ, so the problem is to interpolate a finite Fourier series based on the twelve samples. Gauss considered a sum of the form

$$
X = f(\theta) = a_0 + \sum_{k=1}^{5}\Bigl( a_k \cos\frac{2\pi k\theta}{360} + b_k \sin\frac{2\pi k\theta}{360} \Bigr) + a_6 \cos\frac{12\pi\theta}{360}\,.
$$

Here’s the data:

| θ | 0 | 30 | 60 | 90 | 120 | 150 | 180 | 210 | 240 | 270 | 300 | 330 |
|---|---|----|----|----|-----|-----|-----|-----|-----|-----|-----|-----|
| X | 408 | 89 | −66 | 10 | 338 | 807 | 1238 | 1511 | 1583 | 1462 | 1183 | 804 |

How to determine the 12 unknown coefficients based on the samples $X_n = f(\theta_n)$? Want to take a whack at it? It requires solving 12 linear equations in 12 unknowns, something Gauss could have done by hand.

Nevertheless, he was enough taken by the symmetries inherent in using sines and cosines to devise a scheme that rearranges the algebra in a way to reduce the total number of steps — essentially introducing a collection of easier subproblems whose solutions are later recombined to give the desired grand solutions.

That rearrangement is what we’ll talk about later, and I won’t attempt to reconstruct Gauss’s solution.

Here are the coefficients, as found by a modern FFT routine:

| k | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|
| a_k | 780.6 | −411.0 | 43.4 | −4.3 | −1.1 | 0.3 | 0.1 |
| b_k | | −720.2 | −2.2 | 5.5 | −1.0 | −0.3 | |
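The table can be spot-checked by computing the low-order coefficients directly from the data with the standard finite-Fourier-series formulas for twelve equally spaced samples (a Python sketch; the normalizations a_0 = mean and a_k, b_k = (1/6)·sum for 1 ≤ k ≤ 5 are the usual interpolation formulas, our phrasing rather than Gauss's):

```python
import math

# Gauss's Pallas data: ascension theta (degrees), declination X (arc-minutes)
theta = [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330]
X = [408, 89, -66, 10, 338, 807, 1238, 1511, 1583, 1462, 1183, 804]

# a_0 is the mean; for 1 <= k <= 5,
#   a_k = (1/6) sum X_n cos(2 pi k theta_n / 360), similarly b_k with sin
a0 = sum(X) / 12
a1 = sum(x * math.cos(2 * math.pi * t / 360) for t, x in zip(theta, X)) / 6
b1 = sum(x * math.sin(2 * math.pi * t / 360) for t, x in zip(theta, X)) / 6

# these reproduce the first entries of the table above
assert round(a0, 1) == 780.6
assert round(a1, 1) == -411.0
assert round(b1, 1) == -720.2
```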

More impressive is the graph, shown in the following figure.


### 6.5 Getting to Know Your Discrete Fourier Transform

We introduced the discrete Fourier transform (DFT) as a discrete approximation of the usual Fourier transform. The DFT takes an N-tuple $\mathbf{f} = (f[0], f[1], \dots, f[N-1])$ (the input) and returns an N-tuple $\mathbf{F} = (F[0], F[1], \dots, F[N-1])$ (the output) via the formula

$$
\mathbf{F} = \underline{\mathcal{F}}\mathbf{f} = \sum_{k=0}^{N-1} f[k]\,\underline{\omega}^{-k}\,,
$$

where $\underline{\omega}$ is the vector complex exponential,

$$
\underline{\omega} = (1, \omega, \omega^2, \dots, \omega^{N-1})\,, \qquad \omega = e^{2\pi i/N}\,.
$$

Evaluating $\underline{\mathcal{F}}\mathbf{f}$ at an index m gives the m-th component of the output:

$$
F[m] = \sum_{k=0}^{N-1} f[k]\,\omega^{-km} = \sum_{k=0}^{N-1} f[k]\,e^{-2\pi i km/N}\,.
$$

We also write $\underline{\mathcal{F}}$ as the N × N matrix

$$
\underline{\mathcal{F}} =
\begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega^{-1} & \omega^{-2} & \cdots & \omega^{-(N-1)} \\
1 & \omega^{-2} & \omega^{-4} & \cdots & \omega^{-2(N-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega^{-(N-1)} & \omega^{-2(N-1)} & \cdots & \omega^{-(N-1)^2}
\end{pmatrix}.
$$

The (m, n) entry is just $\omega^{-mn}$, where the rows and columns are indexed from 0 to N − 1.

We want to develop some general properties of the DFT, much as we did when we first introduced the continuous Fourier transform. Most properties of the DFT correspond pretty directly to properties of the continuous Fourier transform, though there are differences. You should try to make the most of these correspondences, if only to decide for yourself when they are close and when they are not. Use what you know!

Derivations using the DFT are often easier than those for the Fourier transform because there are no worries about convergence, but discrete derivations can have their own complications. The skills you need are in manipulating sums, particularly sums of complex exponentials, and in manipulating matrices. In both instances it’s often a good idea to first calculate a few examples, say for DFTs of size three or four, to see what the patterns are.² We’ll find formulas that are interesting and that are very important in applications.

### 6.6 Periodicity, Indexing, and Reindexing

Let me begin by highlighting a difference between the discrete and continuous cases rather than a similarity.

The definition of the DFT suggests, even compels, some additional structure to the outputs and inputs.

The output values F[m] are defined initially only for m = 0 to m = N − 1, but their definition as

$$
F[m] = \sum_{k=0}^{N-1} f[k]\,\omega^{-km}
$$

implies a periodicity property. Since

$$
\omega^{-k(m+N)} = \omega^{-km}\,,
$$

we have

$$
\sum_{k=0}^{N-1} f[k]\,\omega^{-k(m+N)} = \sum_{k=0}^{N-1} f[k]\,\omega^{-km} = F[m]\,.
$$

If we consider the left-hand side as the DFT formula producing an output, then that output would be F[m + N]. More generally, and by the same kind of calculation, we would have

$$
F[m + nN] = F[m]
$$

for any integer n. Thus, instead of just working with **F** as an N-tuple it’s natural to “extend” it to be a periodic sequence of period N. For example, if we start off with N = 4 and the values (F[0], F[1], F[2], F[3]) then, by definition, the periodic extension of **F** has F[4] = F[0], F[5] = F[1], F[6] = F[2], and so on, and going in the other direction, F[−1] = F[3], F[−2] = F[2], and so on. In general,

$$
F[p] = F[q] \quad \text{if } p - q \text{ is a multiple of } N\text{, positive or negative,}
$$

or, put another way,

$$
F[p] = F[q] \quad \text{if } p \equiv q \bmod N\,.
$$

We then have the formula

$$
F[m] = \sum_{k=0}^{N-1} f[k]\,\omega^{-km} \quad \text{for all integers } m\,.
$$

Because of these observations, and unless instructed otherwise:

² One thing to be mindful of in deriving formulas in the general case, in particular when working with sums, is what you call the “index of summation” (analogous to the “variable of integration”) and not to get it confused or in conflict with other variables that are in use. Derivations might also involve “changing the variable of summation”, analogous to “changing the variable of integration”, a procedure that seems easier in the case of integrals than in sums, maybe just because of all the practice we’ve had with integrals.


• We will always assume that F is a periodic sequence of period N .

Once we define the inverse DFT it will emerge that an input f to the DFT also extends naturally to be a periodic sequence of period N . We’ll also assume that, starting now.

• We will always assume that f is a periodic sequence of period N .

So, briefly, we assume that all our discrete signals are periodic. To take an important example, if, according to this dictum, we consider the vector complex exponential not just as a vector but as a periodic discrete signal then we can define it simply by

$$
\underline{\omega}[n] = \omega^{n}\,, \quad n \text{ an integer.}
$$

As for how this differs from the continuous case, we certainly can consider periodicity — that’s what the subject of Fourier series is all about, after all — but when working with the Fourier transform we don’t have to consider periodicity. In the discrete case we really do. Some things just don’t work (e.g., convolution) if we don’t work with periodic inputs and outputs.
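The periodicity is visible in code: evaluating the defining sum at indices outside 0, . . . , N − 1 just reproduces earlier values (a Python sketch; the input vector with N = 6 is an arbitrary choice of ours):

```python
import cmath

N = 6
w = cmath.exp(2j * cmath.pi / N)

def F_at(f, m):
    # the defining sum, evaluated at ANY integer m, not just 0..N-1
    return sum(f[k] * w ** (-k * m) for k in range(N))

f = [1.0, 2.0, 3.0, -1.0, 0.5, 4.0]

# F[m + nN] = F[m]: the output extends to a periodic sequence of period N
for m in range(N):
    assert abs(F_at(f, m + N) - F_at(f, m)) < 1e-9
    assert abs(F_at(f, m - 2 * N) - F_at(f, m)) < 1e-9
```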

If we were developing the DFT from a purely mathematical point of view, we would probably incorporate periodicity as part of the initial definition, and this is sometimes done. It would make some parts of the mathematical development a little smoother (though no different in substance), but I think on balance it’s a mistake. It’s extra baggage early on and can make the tie in with physical applications more awkward.

#### 6.6.1 Notation and conventions 2

Having extended the inputs and outputs to be periodic sequences, it’s mostly a matter of taste, or a preferred point of view, whether one then wants to think of an input to the DFT as a vector in $\mathbb{C}^N$ that is to be “extended periodically”, or as a discrete signal f : Z → C that is periodic of period N, where Z stands for the integers. Each of these viewpoints can be helpful. For example, vectors are helpful if one wants to think of the DFT as a matrix, or when inner products are involved.

When confusion arises — and it does arise — it usually comes from the vector or N -tuple stance, and from questions and conventions on how vectors are indexed, whether from 0 to N − 1 or from 1 to N , or other choices. In fact, indexing and reindexing the components of a DFT is something that just seems to come up — it certainly comes up in varied implementations of the DFT, and it’s something you have to be able to handle if you use different packages or program any of these formulas.

The definition of the DFT that we’ve given is pretty standard, and it’s the one we’ll use. One sometimes finds an alternate definition of the DFT, used especially in imaging problems, where N is assumed to be even and the index set for both the inputs f and the outputs F is taken to be [−(N/2) + 1 : N/2] = (−(N/2) + 1, −(N/2) + 2, . . . , −1, 0, 1, . . . , N/2). The definition of the DFT is then:

$$
\underline{\mathcal{F}}\mathbf{f} = \sum_{k=-N/2+1}^{N/2} f[k]\,\underline{\omega}^{-k} \qquad \text{or, in components,} \qquad F[m] = \sum_{k=-N/2+1}^{N/2} f[k]\,\omega^{-km}\,.
$$

We would be led to this indexing of the inputs and outputs if, in the sampling-based derivation we gave of the DFT in the previous lecture, we sampled on the time interval from −L/2 to L/2 and on the frequency interval from −B to B. Then, using the index set [−(N/2) + 1 : N/2], the sample points in the time domain would be of the form

$$
t_{-N/2+1} = -\frac{L}{2} + \frac{1}{2B} = \frac{-N/2+1}{2B}\,,\quad t_{-N/2+2} = \frac{-N/2+2}{2B}\,,\ \dots,\ t_{N/2} = \frac{-N/2+N}{2B} = \frac{N/2}{2B} = \frac{L}{2}\,,
$$


and in the frequency domain of the form

$$
s_{-N/2+1} = -B + \frac{1}{L} = \frac{-N/2+1}{L}\,,\quad s_{-N/2+2} = \frac{-N/2+2}{L}\,,\ \dots,\ s_{N/2} = \frac{-N/2+N}{L} = \frac{N/2}{L} = B\,.
$$

(Maybe you can see why I didn’t want to set things up this way for our first encounter.)

The “new” definition, above, of the DFT is completely equivalent to the first definition because of periodicity. There’s a phrase one sees that’s supposed to alleviate the tension over this and other similar sorts of things. It goes something like:

“The DFT can be defined over any set of N consecutive indices.”

What this means most often in practice is that we can write

$$
\underline{\mathcal{F}}\mathbf{f} = \sum_{k=p}^{p+N-1} f[k]\,\underline{\omega}^{-k}\,.
$$

We’ll explain this thoroughly in a later section. It’s tedious, but not difficult. If you think of an input (or output) **f** as a periodic discrete signal (something your software package can’t really do) then you don’t have to worry about “how it’s indexed”. It goes on forever, and any block of N consecutive values, f[p], f[p + 1], . . . , f[p + N − 1], should be as good as any other because the values of **f** repeat. You still have to establish the quoted remark, however, to be assured that computing the DFT over any such block gives the same result. This is essentially a discrete form of the statement for continuous periodic functions that the Fourier coefficients can be calculated by integrating over any period.
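Here is a numerical check of the quoted remark (a Python sketch; the input vector and the blocks tried are arbitrary choices of ours): with a periodically extended input, the defining sum over any N consecutive indices gives the same DFT.

```python
import cmath

N = 8
w = cmath.exp(2j * cmath.pi / N)

f_block = [1.0, -2.0, 3.5, 0.0, 2.0, 1.0, -1.0, 4.0]

def f(k):
    # periodic extension of the input: f[k] = f[k mod N]
    return f_block[k % N]

def F_over(p, m):
    # DFT component m computed over the index block p, p+1, ..., p+N-1
    return sum(f(k) * w ** (-k * m) for k in range(p, p + N))

# any block of N consecutive indices gives the same DFT
for m in range(N):
    reference = F_over(0, m)
    for p in (-3, 2, 5, 11):
        assert abs(F_over(p, m) - reference) < 1e-9
```

The reason it works is that the summand f[k]ω^(−km) is itself periodic in k with period N, so shifting the block of summation doesn't change the total.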

**Positive and negative frequencies, again** Let’s tie up a loose end and see how periodicity makes honest negative frequencies correspond to negative frequencies “by convention”. Suppose we have a periodic input **f** and output $\mathbf{F} = \underline{\mathcal{F}}\mathbf{f}$ indexed from −(N/2) + 1 to N/2. We would certainly say in this case that the negative frequencies go from −(N/2) + 1 to −1, with corresponding outputs F[−(N/2) + 1], F[−(N/2) + 2], . . . , F[−1]. Where do these frequencies go if we “reindex” from 0 to N − 1? Using periodicity,

$$
F[-\tfrac{N}{2}+1] = F[-\tfrac{N}{2}+1+N] = F[\tfrac{N}{2}+1],
$$
$$
F[-\tfrac{N}{2}+2] = F[-\tfrac{N}{2}+2+N] = F[\tfrac{N}{2}+2],
$$

and so on up to

$$
F[-1] = F[-1+N] = F[N-1].
$$

The “honest” negative frequencies at $-(N/2)+1, \ldots, -1$ are by periodicity the “negative frequencies by convention” at $N/2+1, \ldots, N-1$.
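Periodicity of the output is easy to confirm numerically. In this quick sketch (the input values and the helper F are mine, not from the notes) each honest negative frequency matches its conventional index:

```python
import cmath

N = 8
w = cmath.exp(2j * cmath.pi / N)
f = [1.0, -1.0, 2.0, 0.0, 3.0, 1.0, -2.0, 0.5]

def F(m):
    # DFT output at any integer index m, positive or negative
    return sum(f[k] * w ** (-k * m) for k in range(N))

# honest negative frequencies -(N/2)+1, ..., -1 reappear at index m + N
for m in range(-(N // 2) + 1, 0):
    assert abs(F(m) - F(m + N)) < 1e-9
```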

### 6.7 Inverting the DFT and Many Other Things Along the Way

By now it should be second nature to you to expect that any (useful) transform ought to have an inverse transform. The DFT is no exception. Its inverse exists, and the formula for it is quite simple and very similar to the formula for the DFT itself, (almost) just like the continuous case.

The key to inversion is the “discrete orthogonality” of the complex exponentials. We’re going to look at the problem of finding $\mathcal{F}^{-1}$ from both the vector point of view and the matrix point of view, with more emphasis on the former. You can take your pick which you prefer, but it’s helpful to know both.

#### 6.7.1 The discrete δ

Good news! No need to go through the theory of distributions to define a δ in the discrete case. We can do it directly and easily by setting

$$
\delta_0 = (1, 0, \ldots, 0).
$$

In words, there’s a 1 in the zeroth slot and 0’s in the remaining N − 1 slots. $\delta_0$ is really just the first basis vector of $\mathbb{C}^N$ under an assumed name (the way we’re indexing vectors from 0 to N − 1), but to make comparisons to the continuous case we prefer to accord it independent status. We didn’t specify N, and so, strictly speaking, there’s a $\delta_0$ for each N, but since $\delta_0$ will always arise in a context where the N is otherwise specified we’ll set aside that detail. As a periodic signal the definition of $\delta_0$ is

$$
\delta_0[m] = \begin{cases} 1 & m \equiv 0 \bmod N \\ 0 & \text{otherwise.} \end{cases}
$$

For the DFT of $\delta_0$ we have

$$
\mathcal{F}\delta_0 = \sum_{k=0}^{N-1} \delta_0[k]\,\omega^{-k} = \omega^{0} = 1.
$$

Great — just like $\mathcal{F}\delta = 1$, and no tempered distributions in sight!

If we think of applying the DFT as matrix multiplication, then F δ0 pulls out the first column, which is 1.

(We may drop the subscript 0 on δ0 if it’s clear from the context.)

The shifted discrete δ is just what you think it is,

$$
\delta_k = (0, \ldots, 0, 1, 0, \ldots, 0),
$$

with a 1 in the k-th slot and zeros elsewhere. That is, the lowly k-th natural basis vector of $\mathbb{C}^N$ is now masquerading as the important $\delta_k$, and we can write for an arbitrary vector f,

$$
f = \sum_{k=0}^{N-1} f[k]\,\delta_k.
$$

As a periodic discrete signal,

$$
\delta_k[m] = \begin{cases} 1 & m \equiv k \bmod N \\ 0 & \text{otherwise.} \end{cases}
$$

Note, then, that if f is a periodic discrete signal we can still write

$$
f = \sum_{k=0}^{N-1} f[k]\,\delta_k.
$$

The δk’s can be viewed as a basis for CN and also as a basis for the N -periodic discrete signals.

For the DFT of $\delta_k$ we have

$$
\mathcal{F}\delta_k = \sum_{n=0}^{N-1} \delta_k[n]\,\omega^{-n} = \omega^{-k}.
$$

From the matrix point of view, taking the DFT of $\delta_k$ pulls out the k-th column of the DFT matrix, and that’s $\omega^{-k}$.


These are our first explicit transforms, and if we believe that the discrete case can be made to look like the continuous case, the results are encouraging. We state them again.

• The DFTs of the discrete δ and shifted δ are:

$$
\mathcal{F}\delta_0 = 1 \quad \text{and} \quad \mathcal{F}\delta_k = \omega^{-k}.
$$

We’ll establish other properties of discrete $\delta_k$’s (convolution, sampling) later.
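Both transforms are easy to verify numerically. The following sketch (the helper names are mine, not from the notes) computes the DFT directly from the defining sum:

```python
import cmath

N = 6
w = cmath.exp(2j * cmath.pi / N)

def dft(f):
    # (F f)[m] = sum_n f[n] w^{-nm}
    return [sum(f[n] * w ** (-n * m) for n in range(N)) for m in range(N)]

delta0 = [1] + [0] * (N - 1)
assert all(abs(c - 1) < 1e-9 for c in dft(delta0))      # F delta_0 = 1

k = 2
delta_k = [1 if n == k else 0 for n in range(N)]
w_minus_k = [w ** (-k * m) for m in range(N)]           # the vector w^{-k}
assert all(abs(a - b) < 1e-9
           for a, b in zip(dft(delta_k), w_minus_k))    # F delta_k = w^{-k}
```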

To know F on a basis Notice that the linearity of $\mathcal{F}$ and the knowledge that $\mathcal{F}\delta_k = \omega^{-k}$ recover the general formula for $\mathcal{F}$. Indeed,

$$
f = \sum_{k=0}^{N-1} f[k]\,\delta_k \quad\Longrightarrow\quad
\mathcal{F} f = \sum_{k=0}^{N-1} f[k]\,\mathcal{F}\delta_k = \sum_{k=0}^{N-1} f[k]\,\omega^{-k}.
$$

#### 6.7.2 Orthogonality of the vector complex exponentials

Having introduced the vector complex exponential

$$
\omega = (1, \omega, \omega^2, \ldots, \omega^{N-1})
$$

and its k-th power

$$
\omega^k = (1, \omega^k, \omega^{2k}, \ldots, \omega^{(N-1)k}),
$$

it is easy to formulate a key property:

If k and ℓ are any integers, then

$$
\omega^k \cdot \omega^\ell = \begin{cases} 0 & k \not\equiv \ell \bmod N \\ N & k \equiv \ell \bmod N. \end{cases}
$$

Thus the powers of the vector complex exponential are “almost” orthonormal. We could make them orthonormal by considering instead

$$
\frac{1}{\sqrt{N}}\,(1, \omega^k, \omega^{2k}, \ldots, \omega^{(N-1)k}),
$$

but we won’t do this.

In the continuous case the analogous result is that the family of functions $(1/\sqrt{T})\,e^{2\pi i n t/T}$ is orthonormal with respect to the inner product on $L^2([0,T])$:

$$
\int_0^T \frac{1}{\sqrt{T}}\,e^{2\pi i m t/T}\,\frac{1}{\sqrt{T}}\,e^{-2\pi i n t/T}\,dt
= \frac{1}{T} \int_0^T e^{2\pi i (m-n) t/T}\,dt
= \begin{cases} 1 & m = n \\ 0 & m \neq n. \end{cases}
$$

Orthogonality in the continuous case is actually easier to establish than in the discrete case, because, sometimes, integration is easier than summation. However, you pretty much established the result in the discrete case on your first problem set this quarter: the problem of adding up roots of unity is exactly what’s involved, which is why I asked you to do that question way back then.

There are two things we’ll need for the derivation. The first is that $\omega^{kN} = 1$ where k is any integer. The second is the sum of a finite geometric series, something we have used repeatedly:

$$
1 + z + z^2 + \cdots + z^{N-1} =
\begin{cases} \dfrac{1 - z^N}{1 - z} & z \neq 1 \\[4pt] N & z = 1. \end{cases}
$$

We’ll use this formula for z = ω. In that case,

$$
1 + \omega + \omega^2 + \cdots + \omega^{N-1} = \frac{1 - \omega^N}{1 - \omega} = \frac{0}{1 - \omega} = 0,
$$

which is the exercise you did on the first problem set. More generally, if k is not an integer multiple of N, so that $\omega^k \neq 1$ while $\omega^{kN} = 1$, then

$$
1 + \omega^k + \omega^{2k} + \cdots + \omega^{(N-1)k} = \frac{1 - \omega^{kN}}{1 - \omega^k} = 0,
$$

while if k is an integer multiple of N then $\omega^k = 1$ and

$$
1 + \omega^k + \omega^{2k} + \cdots + \omega^{(N-1)k} = 1 + 1 + \cdots + 1 = N.
$$

Succinctly,

$$
1 + \omega^k + \omega^{2k} + \cdots + \omega^{(N-1)k} =
\begin{cases} 0 & k \not\equiv 0 \bmod N \\ N & k \equiv 0 \bmod N. \end{cases}
$$

Let’s compute the inner product $\omega^k \cdot \omega^\ell$:

$$
\begin{aligned}
\omega^k \cdot \omega^\ell
&= \sum_{n=0}^{N-1} \omega^k[n]\,\overline{\omega^\ell[n]} \\
&= \sum_{n=0}^{N-1} \omega^k[n]\,\omega^{-\ell}[n] \quad \text{(taking Matlab seriously)} \\
&= \sum_{n=0}^{N-1} \omega^{k-\ell}[n] \quad \text{(ditto)} \\
&= \sum_{n=0}^{N-1} \omega^{(k-\ell)n}
= \begin{cases} 0 & k - \ell \not\equiv 0 \bmod N \\ N & k - \ell \equiv 0 \bmod N \end{cases}
= \begin{cases} 0 & k \not\equiv \ell \bmod N \\ N & k \equiv \ell \bmod N. \end{cases}
\end{aligned}
$$

Done.
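The computation just done can be spot-checked numerically. In this sketch (the function names are mine) the inner product conjugates its second factor, exactly as in the first line of the derivation:

```python
import cmath

N = 5
w = cmath.exp(2j * cmath.pi / N)

def wvec(k):
    # the vector complex exponential w^k = (1, w^k, w^{2k}, ..., w^{(N-1)k})
    return [w ** (k * n) for n in range(N)]

def inner(u, v):
    # inner product on C^N, conjugating the second factor
    return sum(a * b.conjugate() for a, b in zip(u, v))

assert abs(inner(wvec(2), wvec(3))) < 1e-9        # 2 != 3 mod 5: orthogonal
assert abs(inner(wvec(2), wvec(2)) - N) < 1e-9    # k = l: ||w^k||^2 = N
assert abs(inner(wvec(1), wvec(6)) - N) < 1e-9    # 6 = 1 mod 5: also N
```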

Remark From this result we conclude that the N distinct vectors $1, \omega^{-1}, \omega^{-2}, \ldots, \omega^{-(N-1)}$ are a basis of $\mathbb{C}^N$ (and so are $1, \omega, \omega^2, \ldots, \omega^{N-1}$). From the earlier result that $\mathcal{F}\delta_k = \omega^{-k}$ we then know that $\mathcal{F}$ is invertible. This doesn’t tell us what the inverse is, however. We have to work a little harder for that.

The DFT of the vector complex exponential With the orthogonality of the vector complex exponentials established, a number of other important results are now within easy reach. For example, we can now find $\mathcal{F}\omega^k$.


By definition,

$$
\mathcal{F}\omega^k = \sum_{n=0}^{N-1} \omega^k[n]\,\omega^{-n},
$$

and its ℓ-th component is then

$$
\mathcal{F}\omega^k[\ell] = \sum_{n=0}^{N-1} \omega^k[n]\,\omega^{-n}[\ell]
= \sum_{n=0}^{N-1} \omega^{kn}\,\omega^{-n\ell}
= \omega^k \cdot \omega^\ell
= \begin{cases} 0 & k \not\equiv \ell \bmod N \\ N & k \equiv \ell \bmod N. \end{cases}
$$

We recognize this, and we are pleased.

• The discrete Fourier transform of $\omega^k$ is

$$
\mathcal{F}\omega^k = N\,\delta_k.
$$

Perhaps we are only almost pleased. There’s a factor of N that comes in that we don’t see, in any way, in the continuous case. Here it traces back, ultimately, to $\|\omega\|^2 = N$.

The appearance of a factor N or 1/N in various formulas, always tied somehow to $\|\omega\|^2 = N$, is one thing that makes the discrete case look different from the continuous case, and it’s a pain in the neck to keep straight. Be careful.
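A quick numerical check of $\mathcal{F}\omega^k = N\delta_k$, factor of N included (a sketch; the helper names are mine, not from the notes):

```python
import cmath

N = 8
w = cmath.exp(2j * cmath.pi / N)

def dft(f):
    # (F f)[m] = sum_n f[n] w^{-nm}
    return [sum(f[n] * w ** (-n * m) for n in range(N)) for m in range(N)]

k = 3
wk = [w ** (k * n) for n in range(N)]   # the vector w^k
Fwk = dft(wk)
for m in range(N):
    expected = N if m == k else 0       # N delta_k, note the factor of N
    assert abs(Fwk[m] - expected) < 1e-9
```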

#### 6.7.3 Reversed signals and their DFTs

For a discrete signal f defined on the integers, periodic or not, the corresponding reversed signal $f^-$ is defined by

$$
f^-[m] = f[-m].
$$

If f is periodic of period N, as we henceforth again assume, we can write it as the vector

$$
f = (f[0], f[1], \ldots, f[N-1]).
$$

Then

$$
f^- = (f[N], f[N-1], \ldots, f[1]) \quad (\text{using } f[N] = f[0]),
$$

which makes the description of $f^-$ as “reversed” even more apt (though, as in many irritating instances, the indexing is a little off). Defined directly in terms of its components this is

$$
f^-[n] = f[N-n],
$$

and this formula is good for all integers n. This description of $f^-$ is often quite convenient. Note that reversing a signal satisfies the principle of superposition (is linear as an operation on signals):

$$
(f+g)^- = f^- + g^- \quad \text{and} \quad (\alpha f)^- = \alpha f^-.
$$

It’s even more than that — we have

$$
(fg)^- = (f^-)(g^-).
$$

Let’s consider two special cases of reversed signals. First, clearly

$$
\delta_0^- = \delta_0,
$$

and though we’ll pick up more on evenness and oddness later, this says that $\delta_0$ is even. For the shifted δ,

$$
\delta_k^- = \delta_{-k}.
$$

I’ll let you verify that. With this result we can write

$$
f^- = \Bigl(\sum_{k=0}^{N-1} f[k]\,\delta_k\Bigr)^{\!-} = \sum_{k=0}^{N-1} f[k]\,\delta_{-k}.
$$

One might say that the δk are a basis for the forward signals and the δ−k are a basis for the reversed signals.

Next let’s look at $\omega^-$. First we have

$$
\omega^- = (\omega[N], \omega[N-1], \omega[N-2], \ldots, \omega[1]) = (1, \omega^{N-1}, \omega^{N-2}, \ldots, \omega).
$$

But now notice (as we could have noticed earlier) that

$$
\omega^{N-1}\,\omega = \omega^N = 1 \;\Rightarrow\; \omega^{N-1} = \omega^{-1}.
$$

Likewise,

$$
\omega^{N-2}\,\omega^2 = \omega^N = 1 \;\Rightarrow\; \omega^{N-2} = \omega^{-2}.
$$

Continuing in this way we see, very attractively,

$$
\omega^- = (1, \omega^{-1}, \omega^{-2}, \ldots, \omega^{-(N-1)}) = \omega^{-1}.
$$

In the same way we find, equally attractively,

$$
(\omega^k)^- = \omega^{-k}.
$$

Of course then also

$$
(\omega^{-k})^- = \omega^{k}.
$$
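The reversal identities can be confirmed with a few lines of code (a sketch; reverse and wvec are my own names), using the componentwise description $f^-[n] = f[N-n]$ read periodically:

```python
import cmath

N = 8
w = cmath.exp(2j * cmath.pi / N)

def wvec(k):
    # the vector complex exponential w^k
    return [w ** (k * n) for n in range(N)]

def reverse(f):
    # f^-[n] = f[N - n], reading f as N-periodic (so f[N] means f[0])
    return [f[(N - n) % N] for n in range(N)]

# (w^k)^- = w^{-k} for several values of k
for k in (1, 3, 5):
    assert all(abs(a - b) < 1e-9
               for a, b in zip(reverse(wvec(k)), wvec(-k)))
```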

This has an important consequence for the DFT — our first discrete “duality result”. (Though we haven’t yet introduced the inverse DFT, which is how one usually thinks about duality; it’s the inverse DFT that we’re pointing toward.) Let’s consider $\mathcal{F}f^-$, the DFT of the reversed signal. To work with the expression for $\mathcal{F}f^-$ we’ll need to use the periodicity of f and do a little fancy footwork changing the variable of summation in the definition of the DFT. Here’s how it goes:

$$
\begin{aligned}
\mathcal{F}f^- &= \sum_{k=0}^{N-1} f^-[k]\,\omega^{-k} \\
&= \sum_{k=0}^{N-1} f[N-k]\,\omega^{-k} \quad \text{(reversing } f) \\
&= \sum_{\ell=1}^{N} f[\ell]\,\omega^{\ell-N} \quad \text{(letting } \ell = N-k) \\
&= \sum_{\ell=1}^{N} f[\ell]\,\omega^{\ell} \quad \text{(since } \omega^{-N} = 1)
\end{aligned}
$$


But using $f[N] = f[0]$ and $\omega^N = \omega^0 = 1$ we can clearly write

$$
\sum_{\ell=1}^{N} f[\ell]\,\omega^{\ell}
= \sum_{\ell=0}^{N-1} f[\ell]\,\omega^{\ell}
= \Bigl(\sum_{\ell=0}^{N-1} f[\ell]\,\omega^{-\ell}\Bigr)^{\!-}
= (\mathcal{F}f)^-.
$$

We have shown

$$
\mathcal{F}f^- = (\mathcal{F}f)^-.
$$

Cool. A little drawn out, but cool.

This then tells us that

$$
\mathcal{F}\omega^{-k} = (\mathcal{F}\omega^{k})^- = (N\delta_k)^- = N\delta_{-k}.
$$

In turn, from here we get a second duality result. Start with

$$
\mathcal{F}f = \sum_{k=0}^{N-1} f[k]\,\omega^{-k}
$$

and apply $\mathcal{F}$ again. This produces

$$
\mathcal{F}\mathcal{F}f = \sum_{k=0}^{N-1} f[k]\,\mathcal{F}\omega^{-k} = N \sum_{k=0}^{N-1} f[k]\,\delta_{-k} = N f^-.
$$

To give the two results their own display:

• Duality relations for the DFT are:

$$
\mathcal{F}f^- = (\mathcal{F}f)^- \quad \text{and} \quad \mathcal{F}\mathcal{F}f = N f^-.
$$
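Both duality relations check out numerically; here is a sketch (the sample values and helper names are mine, not from the notes):

```python
import cmath

N = 6
w = cmath.exp(2j * cmath.pi / N)

def dft(f):
    # (F f)[m] = sum_n f[n] w^{-nm}
    return [sum(f[n] * w ** (-n * m) for n in range(N)) for m in range(N)]

def reverse(f):
    # f^-[n] = f[N - n], reading f as N-periodic
    return [f[(N - n) % N] for n in range(N)]

f = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0]

# F(f^-) = (F f)^-
lhs1, rhs1 = dft(reverse(f)), reverse(dft(f))
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs1, rhs1))

# F F f = N f^-
lhs2, rhs2 = dft(dft(f)), [N * v for v in reverse(f)]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs2, rhs2))
```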

We’ll do more on reversed signals, evenness and oddness, etc., but we’ve waited long enough for the big moment.

#### 6.7.4 The inverse DFT

We take our cue for finding $\mathcal{F}^{-1}$ from the duality results in the continuous case that say

$$
\mathcal{F}^{-1}f = \mathcal{F}f^- = (\mathcal{F}f)^-.
$$

The only thing we don’t have in the discrete case is the definition of $\mathcal{F}^{-1}$, and this equation tells us how we might try defining it. There’s actually a factor of N coming in, and because I know what’s going to happen I’ll put it in now and define

$$
\mathcal{F}^{-1}f = \frac{1}{N}\,\mathcal{F}f^-,
$$

and so equivalently

$$
\mathcal{F}^{-1}f = \frac{1}{N}\,(\mathcal{F}f)^- \quad \text{and also} \quad
\mathcal{F}^{-1}f = \frac{1}{N} \sum_{n=0}^{N-1} f[n]\,\omega^{n}.
$$

Let’s see why this really does give us an inverse of $\mathcal{F}$.
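Before the formal argument, it is reassuring to see the proposed inverse undo $\mathcal{F}$ on an example (a sketch; the helper names and the sample input are mine, not from the notes):

```python
import cmath

N = 7
w = cmath.exp(2j * cmath.pi / N)

def dft(f):
    # (F f)[m] = sum_n f[n] w^{-nm}
    return [sum(f[n] * w ** (-n * m) for n in range(N)) for m in range(N)]

def idft(F):
    # proposed inverse: F^{-1}F has m-th entry (1/N) sum_n F[n] w^{nm}
    return [sum(F[n] * w ** (n * m) for n in range(N)) / N for m in range(N)]

f = [1.0, -2.0, 0.0, 4.0, 0.5, 3.0, -1.0]
recovered = idft(dft(f))
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, f))
```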
