**Chapter 6**

**Discrete Fourier Transform**

**The Modern World**

According to some, the modern world began in 1965 when J. Cooley and J. Tukey published their account
of an efficient method for numerical computation of the Fourier transform.^{1} According to some others, the
method was known to Gauss in the mid 1800s; the idea that lies at the heart of the algorithm is clearly
present in an unpublished paper that appeared posthumously in 1866. Take your pick.

Whatever the history, the present and future demands are that we process continuous signals by discrete methods. Computers and digital processing systems can work with finite sums only. To turn the continuous into the discrete and finite requires that a signal be both time-limited and band-limited, something we know cannot be true, and that we take a finite number of samples, something we know cannot suffice. But it works. At least such approximations work to the extent that a large fraction of the world’s economy depends upon them, and that’s not a bad measure of success.

Some would argue that one shouldn’t think in terms of “turning the continuous into the discrete”, but rather that measurements and data in the real world come to us in discrete form, and that’s how we should understand the world and work with it. Period. For some, “discrete” versus “continuous” rises to the level of a religious war, and this battle can be fought out in the different approaches to the discrete Fourier transform. I’m taking sides, at least initially, in favor of “from continuous to discrete” as a way of motivating the definition. For one thing, we have built up a lot of intuition and understanding for the Fourier transform, its properties, and its uses, and I hope some of that intuition can transfer as we now work with the discrete Fourier transform. My choice for now is to make the discrete look as similar to the continuous as possible.

**6.1** **From Continuous to Discrete**

Start with a signal $f(t)$ and its Fourier transform $\mathcal{F}f(s)$, both functions of a continuous variable. We want to:

• Find a discrete version of $f(t)$ that’s a reasonable approximation of $f(t)$.

• Find a discrete version of $\mathcal{F}f(s)$ that’s a reasonable approximation of $\mathcal{F}f(s)$.

1Incidentally, Tukey is also credited with coining the term “bit” as an abbreviation for “binary digit” — how about that for immortality!

• Find a way that the discrete version of $\mathcal{F}f(s)$ is related to the discrete version of $f(t)$ that’s a reasonable approximation to the way $\mathcal{F}f(s)$ is related to $f(t)$.

Good things to try for, but it’s not quite straightforward. Here’s the setup.

We suppose that $f(t)$ is zero outside of $0 \le t \le L$. We also suppose that the Fourier transform $\mathcal{F}f(s)$ is zero, or effectively zero (beyond our ability to measure, negligible energy, whatever) outside of $0 < s < 2B$.

I’m taking the support of $\mathcal{F}f$ to be the interval from $0$ to $2B$ instead of $-B$ to $B$ because it will make the initial indexing of the sample points easier; this will not be an issue in the end. We’ll also take $L$ and $B$ to both be integers so we don’t have to round up or down in any of the considerations that follow; you can think of that as our first concession to the discrete.

*Thus we are regarding f (t) as both time-limited and band-limited, with the knowledge that this can only*
be approximately true. Remember, however, that we’re ultimately going to come up with a definition of a
discrete Fourier transform that will make sense in and of itself regardless of these shaky initial assumptions.

After the definition is written down we could erase all that came before it, or merely cast a brief glance backwards from the discrete to the continuous with a few comments on how the former approximates the latter. Many treatments of the discrete Fourier transform that start with the discrete and stay with the discrete do just that. We’re trying not to do that.

*According to the sampling theorem (misapplied here, but play along), we can reconstruct f (t) perfectly*
*from its samples if we sample at the rate 2B samples per second. Since f (t) is defined on an interval of*
*length L and the samples are 1/2B apart, that means that we want a total of*

$$N = \frac{L}{1/2B} = 2BL \qquad \text{(note that $N$ is therefore even)}$$

evenly spaced samples, at the points

$$t_0 = 0,\quad t_1 = \frac{1}{2B},\quad t_2 = \frac{2}{2B},\quad \ldots,\quad t_{N-1} = \frac{N-1}{2B}\,.$$

To know the values $f(t_k)$ is to know $f(t)$, reasonably well. Thus we state:

• The discrete version of $f(t)$ is the list of sampled values $f(t_0), f(t_1), \ldots, f(t_{N-1})$.

Next, represent the discrete version of $f(t)$ (the list of sampled values) “continuously” with the aid of a finite impulse train (a finite III-function) at the sample points:

$$\sum_{n=0}^{N-1} \delta(t - t_n)$$

that is,

$$f_{\mathrm{discrete}}(t) = f(t)\sum_{n=0}^{N-1}\delta(t - t_n) = \sum_{n=0}^{N-1} f(t_n)\,\delta(t - t_n)\,.$$

This is what we have considered previously as the sampled form of $f(t)$. The Fourier transform of $f_{\mathrm{discrete}}$ is

$$\mathcal{F}f_{\mathrm{discrete}}(s) = \sum_{n=0}^{N-1} f(t_n)\,\mathcal{F}\delta(t - t_n) = \sum_{n=0}^{N-1} f(t_n)\,e^{-2\pi i s t_n}\,.$$


*This is close to what we want — it’s the continuous Fourier transform of the sampled form of f (t).*

Now let’s change perspective and look at things in the frequency domain. The function $f(t)$ is limited to $0 \le t \le L$, and this determines a sampling rate for reconstructing $\mathcal{F}f(s)$ from its samples in the frequency domain. The sampling rate is $1/L$. (Not $2/L$: think about how you would derive the sampling formula when the function is nonzero over an interval from $0$ to $p/2$ rather than $-p/2$ to $+p/2$.) We sample $\mathcal{F}f(s)$ over the interval from $0$ to $2B$ in the frequency domain at points spaced $1/L$ apart. The number of sample points is

$$\frac{2B}{1/L} = 2BL = N\,,$$

the same number of sample points as for $f(t)$. The sample points for $\mathcal{F}f(s)$ are of the form $m/L$, and there are $N$ of them:

$$s_0 = 0,\quad s_1 = \frac{1}{L},\quad \ldots,\quad s_{N-1} = \frac{N-1}{L}\,.$$

The discrete version of $\mathcal{F}f(s)$ that we take is not $\mathcal{F}f(s)$ evaluated at these sample points $s_m$. Rather, it is $\mathcal{F}f_{\mathrm{discrete}}(s)$ evaluated at the sample points. We base the approximation of $\mathcal{F}f(s)$ on the discrete version of $f(t)$. To ease the notation, write $F(s)$ for $\mathcal{F}f_{\mathrm{discrete}}(s)$. Then:

• The discrete version of $\mathcal{F}f(s)$ is the list of values

$$F(s_0) = \sum_{n=0}^{N-1} f(t_n)e^{-2\pi i s_0 t_n},\quad F(s_1) = \sum_{n=0}^{N-1} f(t_n)e^{-2\pi i s_1 t_n},\quad \ldots,\quad F(s_{N-1}) = \sum_{n=0}^{N-1} f(t_n)e^{-2\pi i s_{N-1} t_n}\,.$$

By this definition, we now have a way of going from the discrete version of $f(t)$ to the discrete version of $\mathcal{F}f(s)$, namely,

$$F(s_m) = \sum_{n=0}^{N-1} f(t_n)e^{-2\pi i s_m t_n}\,.$$

These sums, one for each $m$ from $m = 0$ to $m = N-1$, are supposed to be an approximation to the Fourier transform going from $f(t)$ to $\mathcal{F}f(s)$. In what sense is this a discrete approximation to the Fourier transform? Here’s one way of looking at it. Since $f(t)$ is time-limited to $0 \le t \le L$, we have

$$\mathcal{F}f(s) = \int_0^L e^{-2\pi i s t} f(t)\,dt\,.$$

Thus at the sample points $s_m$,

$$\mathcal{F}f(s_m) = \int_0^L e^{-2\pi i s_m t} f(t)\,dt\,,$$

and to know the values $\mathcal{F}f(s_m)$ is to know $\mathcal{F}f(s)$ reasonably well. Now use the sample points $t_k$ for $f(t)$ to write a Riemann sum approximation for the integral. The spacing $\Delta t$ of the points is $1/2B$, so

$$\mathcal{F}f(s_m) = \int_0^L f(t)e^{-2\pi i s_m t}\,dt \approx \sum_{n=0}^{N-1} f(t_n)e^{-2\pi i s_m t_n}\,\Delta t = \frac{1}{2B}\sum_{n=0}^{N-1} f(t_n)e^{-2\pi i s_m t_n} = \frac{1}{2B}F(s_m)\,.$$

This is the final point:
This is the final point:

• Up to the factor $1/2B$, the values $F(s_m)$ provide an approximation to the values $\mathcal{F}f(s_m)$.

Writing a Riemann sum as an approximation to the integral defining $\mathcal{F}f(s_m)$ essentially discretizes the integral, and this is an alternate way of getting to the expression for $F(s_m)$, up to the factor $1/2B$. We short-circuited this route by working directly with $\mathcal{F}f_{\mathrm{discrete}}(s)$.
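To see how good the Riemann-sum approximation can be, here is a short Python sketch (Python rather than Matlab so it can stand alone). The specific numbers are ours, chosen for illustration: a Gaussian bump on $[0, L]$ with $L = 1$ and $B = 32$, so $N = 64$. It compares $\frac{1}{2B}F(s_m)$, the $N$-point sum, against a much finer Riemann sum for the integral defining $\mathcal{F}f(s_m)$.

```python
import cmath
import math

L, B = 1, 32          # time length and (nominal) bandwidth -- illustrative choices
N = 2 * B * L         # number of samples, N = 2BL = 64

def f(t):
    # a smooth bump, effectively time-limited to [0, L]
    return math.exp(-40 * (t - 0.5) ** 2)

def F_coarse(m):
    # (1/2B) * F(s_m): the N-point Riemann sum on the sample points t_n = n/2B
    return sum(f(n / (2 * B)) * cmath.exp(-2j * cmath.pi * m * n / N)
               for n in range(N)) / (2 * B)

def F_fine(m, M=8192):
    # the integral of e^{-2 pi i s_m t} f(t) over [0, L], s_m = m/L,
    # approximated with a much finer Riemann sum
    return sum(f(j * L / M) * cmath.exp(-2j * cmath.pi * (m / L) * (j * L / M))
               for j in range(M)) * (L / M)

for m in range(4):
    assert abs(F_coarse(m) - F_fine(m)) < 1e-4
```

For a smooth signal that dies off at the ends of the interval, the coarse sum already matches the integral to many decimal places, which is the practical content of the bullet point above.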

*You may find the “up to the factor 1/2B” unfortunate in this part of the discussion, but it’s in the nature of*
the subject. In fact, back in Chapter 2 we encountered a similar kind of “up to the factor . . . ” phenomenon
when we obtained the Fourier transform as a limit of the Fourier coefficients for a Fourier series.

We are almost ready for a definition, but there’s one final comment to clear the way for that. Use the definition of the sample points

$$t_n = \frac{n}{2B}\,,\qquad s_m = \frac{m}{L}$$

to write

$$F(s_m) = \sum_{n=0}^{N-1} f(t_n)e^{-2\pi i s_m t_n} = \sum_{n=0}^{N-1} f(t_n)e^{-2\pi i nm/2BL} = \sum_{n=0}^{N-1} f(t_n)e^{-2\pi i nm/N}\,.$$

This form of the exponential, $e^{-2\pi i nm/N}$, puts more emphasis on the index of the inputs ($n$) and outputs ($m$) and on the number of points ($N$), and “hides” the sample points themselves. That’s the last step
toward the discrete.

**6.2** **The Discrete Fourier Transform (DFT)**

This development in the previous section suggests a general definition. Instead of thinking in terms of sampled values of a continuous signal and sampled values of its Fourier transform, we may think of the discrete Fourier transform as an operation that accepts as input a list of $N$ numbers and returns as output a list of $N$ numbers.

There are actually a number of things to say about the inputs and outputs of this operation, and we won’t try to say them all at once. For the present, suffice it to say that we’ll generally use the vector and “discrete signal” notation and write $N$-tuples as

$$\mathbf{f} = (\mathbf{f}[0], \mathbf{f}[1], \ldots, \mathbf{f}[N-1])\,.$$

I’ll write vectors in boldface. If you want to use another notation, that’s fine, but pick something — for
much of this chapter you really will need a notation to distinguish a vector from a scalar. Note that the
*indexing goes from 0 to N − 1 rather than from 1 to N . This is one of the things we’ll comment on later.*

Here’s the definition of the discrete Fourier transform.

• Let $\mathbf{f} = (\mathbf{f}[0], \mathbf{f}[1], \ldots, \mathbf{f}[N-1])$ be an $N$-tuple. The **discrete Fourier transform (DFT)** of $\mathbf{f}$ is the $N$-tuple $\mathbf{F} = (\mathbf{F}[0], \mathbf{F}[1], \ldots, \mathbf{F}[N-1])$ defined by

$$\mathbf{F}[m] = \sum_{n=0}^{N-1} \mathbf{f}[n]\,e^{-2\pi i mn/N}\,,\qquad m = 0, 1, \ldots, N-1\,.$$

It’s perfectly legitimate to let the inputs $\mathbf{f}[n]$ be complex numbers, though for applications to signals they’ll typically be real. The computed values $\mathbf{F}[m]$ are complex, being sums of complex exponentials.
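The definition translates directly into code. Here is a minimal, unoptimized Python version, straight from the formula (it costs $O(N^2)$ operations, unlike the FFT’s $O(N\log N)$; the function name `dft` is ours, not a library routine):

```python
import cmath

def dft(f):
    """DFT of a list of (possibly complex) numbers, from the definition:
    F[m] = sum_n f[n] e^{-2 pi i m n / N}."""
    N = len(f)
    return [sum(f[n] * cmath.exp(-2j * cmath.pi * m * n / N)
                for n in range(N))
            for m in range(N)]

# A "delta" concentrated at n = 0 transforms to the all-ones vector...
F = dft([1, 0, 0, 0])
assert all(abs(Fm - 1) < 1e-12 for Fm in F)

# ...and the constant input (1,1,1,1) transforms to (N, 0, 0, 0).
G = dft([1, 1, 1, 1])
assert abs(G[0] - 4) < 1e-12 and all(abs(g) < 1e-12 for g in G[1:])
```

The two checks are discrete shadows of familiar continuous facts: the transform of a delta is flat, and the transform of a constant is concentrated at zero frequency.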

**6.2.1** **Notation and conventions 1**

I said that I wanted to set things up to look as much like the continuous case as possible. A little notation can go a long way here.


First off, I want to take Matlab seriously. Part of Matlab’s usefulness is to formulate operations and commands in terms of vectors and to operate componentwise on vectors with many of the ordinary rules of arithmetic. We’ll do the same here. So, for example, if

$$\mathbf{x} = (\mathbf{x}[0], \mathbf{x}[1], \ldots, \mathbf{x}[N-1]) \qquad\text{and}\qquad \mathbf{y} = (\mathbf{y}[0], \mathbf{y}[1], \ldots, \mathbf{y}[N-1])$$

then, by definition,

$$\mathbf{x}\,\mathbf{y} = (\mathbf{x}[0]\mathbf{y}[0],\ \mathbf{x}[1]\mathbf{y}[1],\ \ldots,\ \mathbf{x}[N-1]\mathbf{y}[N-1]) \quad\text{(componentwise product of the vectors, not to be confused with the dot product)}$$

$$\frac{\mathbf{x}}{\mathbf{y}} = \left(\frac{\mathbf{x}[0]}{\mathbf{y}[0]},\ \frac{\mathbf{x}[1]}{\mathbf{y}[1]},\ \ldots,\ \frac{\mathbf{x}[N-1]}{\mathbf{y}[N-1]}\right) \quad\text{when the individual quotients make sense}$$

$$\mathbf{x}^p = (\mathbf{x}[0]^p,\ \mathbf{x}[1]^p,\ \ldots,\ \mathbf{x}[N-1]^p) \quad\text{when the individual powers make sense}$$

and so on. These operations are all standard in Matlab. We even allow a function of one variable (think sine or cosine, for example) to operate on a vector componentwise via

$$f(\mathbf{x}[0], \mathbf{x}[1], \ldots, \mathbf{x}[N-1]) = \big(f(\mathbf{x}[0]),\ f(\mathbf{x}[1]),\ \ldots,\ f(\mathbf{x}[N-1])\big)\,.$$

We also use the Matlab notation $[r : s]$ for the tuple of numbers $(r, r+1, r+2, \ldots, s)$ — very useful for indexing. Finally, we’ll also write

$$\mathbf{0} = (0, 0, \ldots, 0)$$

for the zero vector.
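In Matlab these componentwise conventions are built in (`x.*y`, `x./y`, `x.^p`, `cos(x)`). As a rough Python analogue, using plain lists (a library like NumPy would make this look even more Matlab-like, but the sketch below needs only the standard library):

```python
import math

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 2.0, 2.0, 2.0]

prod = [a * b for a, b in zip(x, y)]   # componentwise product  x y
quot = [a / b for a, b in zip(x, y)]   # componentwise quotient x / y
powr = [a ** 2 for a in x]             # componentwise power    x^2
cosx = [math.cos(a) for a in x]        # a function applied componentwise

assert prod == [2.0, 4.0, 6.0, 8.0]
assert quot == [0.5, 1.0, 1.5, 2.0]
assert powr == [1.0, 4.0, 9.0, 16.0]
```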

**Vector complex exponential** The definition of the discrete Fourier transform — like that of the continuous Fourier transform — involves a complex exponential. We’ll write

$$\omega = e^{2\pi i/N}$$

and occasionally we’ll decorate this to

$$\omega_N = e^{2\pi i/N}$$

when we want to emphasize the $N$ in a particular formula. Note that

$$\operatorname{Re}\omega_N = \cos 2\pi/N\,,\qquad \operatorname{Im}\omega_N = \sin 2\pi/N\,.$$

We have seen $\omega_N$ in various places in our work. It’s an $N$-th root of unity, meaning

$$\omega_N^N = e^{2\pi i N/N} = e^{2\pi i} = 1\,.$$

Then for any integer $n$

$$\omega_N^{Nn} = 1$$

and in general, for any integers $n$ and $k$,

$$\omega_N^{Nn+k} = \omega_N^{k}\,.$$

This comes up often enough that it’s worth pointing out. Also note that when $N$ is even

$$\omega_N^{N/2} = e^{2\pi i N/2N} = e^{i\pi} = -1 \qquad\text{and hence}\qquad \omega_N^{kN/2} = (-1)^k\,.$$

Finally, some people write $\omega_N = e^{-2\pi i/N}$ (minus instead of plus in the exponential) and $W = e^{2\pi i/N}$. Be aware of that if you peruse the literature.
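These root-of-unity identities are easy to check numerically; a quick Python sketch (with $N = 8$, an arbitrary choice):

```python
import cmath

N = 8
w = cmath.exp(2j * cmath.pi / N)   # omega_N

assert abs(w ** N - 1) < 1e-12                      # omega_N^N = 1
assert abs(w ** (3 * N + 5) - w ** 5) < 1e-12       # omega_N^{Nn+k} = omega_N^k
assert abs(w ** (N // 2) + 1) < 1e-12               # omega_N^{N/2} = -1  (N even)
assert abs(w ** (3 * N // 2) - (-1) ** 3) < 1e-12   # omega_N^{kN/2} = (-1)^k, k = 3
```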

For the discrete Fourier transform, where vectors are naturally involved, it’s helpful to introduce a complex exponential vector. There are $N$ distinct $N$-th roots of unity, corresponding to the $N$ powers of $\omega_N$:

$$1 = \omega_N^0,\ \omega_N^1,\ \omega_N^2,\ \ldots,\ \omega_N^{N-1}\,.$$

We let

$$\boldsymbol{\omega} = (1, \omega, \omega^2, \ldots, \omega^{N-1})$$

be the vector in $\mathbb{C}^N$ consisting of the $N$ distinct powers of $\omega$. (Drop the subscript $N$ now — it’s understood.) Be careful to keep the boldface vector notation here (or whatever notation you like).

The vector real and imaginary parts of $\boldsymbol{\omega}$ are

$$\operatorname{Re}\boldsymbol{\omega} = \cos\Big(\frac{2\pi}{N}[0:N-1]\Big)\,,\qquad \operatorname{Im}\boldsymbol{\omega} = \sin\Big(\frac{2\pi}{N}[0:N-1]\Big)\,.$$

See how that works:

$$\cos\Big(\frac{2\pi}{N}[0:N-1]\Big) \quad\text{is short for}\quad \Big(1,\ \cos\frac{2\pi}{N},\ \cos\frac{4\pi}{N},\ \ldots,\ \cos\frac{2\pi(N-1)}{N}\Big)\,.$$

Also important are the powers of $\boldsymbol{\omega}$. We write

$$\boldsymbol{\omega}^k = (1, \omega^k, \omega^{2k}, \ldots, \omega^{(N-1)k})$$

for the vector of $k$-th powers. Then also

$$\boldsymbol{\omega}^{-k} = (1, \omega^{-k}, \omega^{-2k}, \ldots, \omega^{-(N-1)k})\,.$$

Note how we write the components:

$$\boldsymbol{\omega}^k[m] = \omega^{km}\,,\qquad \boldsymbol{\omega}^{-k}[m] = \omega^{-km}\,.$$

(You can see why it’s important here to use notations that, while similar, can distinguish a vector from a scalar.)

Taking powers of $\boldsymbol{\omega}$ is cyclic of order $N$, meaning that

$$\boldsymbol{\omega}^N = (1^N,\ e^{2\pi i N/N},\ e^{4\pi i N/N},\ \ldots,\ e^{2\pi i (N-1)N/N}) = (1, 1, 1, \ldots, 1)\,.$$

For shorthand, we’ll write

$$\mathbf{1} = (1, 1, \ldots, 1)$$

for the vector of all 1’s. Then, compactly,

$$\boldsymbol{\omega}^N = \mathbf{1}\,,$$

and

$$\boldsymbol{\omega}^{nN} = \mathbf{1} \qquad\text{and}\qquad \boldsymbol{\omega}^{nN+k} = \boldsymbol{\omega}^k \qquad\text{for any integers $n$ and $k$.}$$
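The vector complex exponential and its powers are one list comprehension away; a Python sketch (again with the arbitrary choice $N = 8$):

```python
import cmath

N = 8
w = cmath.exp(2j * cmath.pi / N)

def omega(k):
    # the vector omega^k = (1, w^k, w^{2k}, ..., w^{(N-1)k})
    return [w ** (k * m) for m in range(N)]

# omega^N = 1, the all-ones vector
assert all(abs(z - 1) < 1e-12 for z in omega(N))

# cyclicity: omega^{nN+k} = omega^k
assert all(abs(a - b) < 1e-12 for a, b in zip(omega(2 * N + 3), omega(3)))

# components: omega^k[m] = w^{km}
assert abs(omega(3)[5] - w ** 15) < 1e-12
```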

Along with making the discrete case look like the continuous case goes making vector calculations look like scalar calculations.


**The DFT in vector form** Introducing the vector complex exponential allows us to write the formula defining the discrete Fourier transform in a way, and with a notation, that really looks like a discrete version of the continuous transform. The DFT is given by (defined by)

$$\mathcal{F}\mathbf{f} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\boldsymbol{\omega}^{-k}\,.$$

We’ll write $\mathcal{F}_N$ if we need to call attention to the $N$. To emphasize once again its nature: the DFT of a vector is another vector. The components of $\mathcal{F}\mathbf{f}$ are the values of $\mathcal{F}\mathbf{f}$ at the points $m = 0, 1, \ldots, N-1$:

$$\mathcal{F}\mathbf{f}[m] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\boldsymbol{\omega}^{-k}[m] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-km} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,e^{-2\pi i km/N}\,.$$

We note one special value:

$$\mathcal{F}\mathbf{f}[0] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\boldsymbol{\omega}^{-k}[0] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,,$$

the sum of the components of the input $\mathbf{f}$. For this reason some people define the DFT with a $1/N$ in front, so that the zeroth component of the output is the average of the components of the input, just as the zeroth Fourier coefficient of a periodic function is the average of the function over one period. We’re not doing this.

**The DFT in matrix form** The DFT takes vectors to vectors, and it does so linearly. To state this formally as a property:

$$\mathcal{F}(\mathbf{f}_1 + \mathbf{f}_2) = \mathcal{F}\mathbf{f}_1 + \mathcal{F}\mathbf{f}_2 \qquad\text{and}\qquad \mathcal{F}(\alpha\mathbf{f}) = \alpha\,\mathcal{F}\mathbf{f}\,.$$

Showing this is easy, and make sure you see why it’s easy; e.g.,

$$\mathcal{F}(\mathbf{f}_1 + \mathbf{f}_2) = \sum_{k=0}^{N-1}(\mathbf{f}_1 + \mathbf{f}_2)[k]\,\boldsymbol{\omega}^{-k} = \sum_{k=0}^{N-1}(\mathbf{f}_1[k] + \mathbf{f}_2[k])\,\boldsymbol{\omega}^{-k} = \sum_{k=0}^{N-1}\mathbf{f}_1[k]\,\boldsymbol{\omega}^{-k} + \sum_{k=0}^{N-1}\mathbf{f}_2[k]\,\boldsymbol{\omega}^{-k} = \mathcal{F}\mathbf{f}_1 + \mathcal{F}\mathbf{f}_2\,.$$
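Linearity is also easy to sanity-check numerically. A Python sketch, using a direct-from-definition `dft` helper defined here (not a library call) and some made-up inputs:

```python
import cmath

def dft(f):
    # straight from the definition of the DFT
    N = len(f)
    return [sum(f[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N))
            for m in range(N)]

f1 = [1.0, 2.0, 0.5, -1.0]
f2 = [0.0, 3.0, -2.0, 1.5]
alpha = 2.5

# F(f1 + f2) = F f1 + F f2
lhs = dft([a + b for a, b in zip(f1, f2)])
rhs = [a + b for a, b in zip(dft(f1), dft(f2))]
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))

# F(alpha f) = alpha F f
lhs2 = dft([alpha * a for a in f1])
rhs2 = [alpha * x for x in dft(f1)]
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs2, rhs2))
```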

As a linear transformation from $\mathbb{C}^N$ to $\mathbb{C}^N$, the DFT, $\mathbf{F} = \mathcal{F}\mathbf{f}$, is exactly the matrix equation

$$\begin{pmatrix} F[0] \\ F[1] \\ F[2] \\ \vdots \\ F[N-1] \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega^{-1\cdot 1} & \omega^{-1\cdot 2} & \cdots & \omega^{-(N-1)} \\ 1 & \omega^{-2\cdot 1} & \omega^{-2\cdot 2} & \cdots & \omega^{-2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{-(N-1)\cdot 1} & \omega^{-(N-1)\cdot 2} & \cdots & \omega^{-(N-1)^2} \end{pmatrix} \begin{pmatrix} f[0] \\ f[1] \\ f[2] \\ \vdots \\ f[N-1] \end{pmatrix}\,.$$

That is, the discrete Fourier transform is the big old $N \times N$ matrix

$$\mathcal{F} = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega^{-1} & \omega^{-2} & \cdots & \omega^{-(N-1)} \\ 1 & \omega^{-2} & \omega^{-4} & \cdots & \omega^{-2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{-(N-1)} & \omega^{-2(N-1)} & \cdots & \omega^{-(N-1)^2} \end{pmatrix}\,.$$

Again notice that we take the indices for $\mathbf{f}$ and $\mathbf{F}$ to go from $0$ to $N-1$ instead of from $1$ to $N$. This is a standard convention in the subject, but it clashes with the conventions of matrix terminology, where for an $N \times N$ matrix we usually write the indices from $1$ to $N$. Tough break, but it seems to be etched in stone — be careful.
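The matrix is easy to build, and multiplying by it reproduces the sum formula. A small Python sketch with $N = 4$ (NumPy would be the natural tool for real matrix work; here plain lists keep it self-contained):

```python
import cmath

N = 4
w = cmath.exp(2j * cmath.pi / N)

# DFT matrix: entry (m, n) is w^{-mn}, rows and columns indexed from 0 to N-1
Fmat = [[w ** (-m * n) for n in range(N)] for m in range(N)]

f = [1.0, 2.0, 3.0, 4.0]

# matrix-vector product...
F_by_matrix = [sum(Fmat[m][n] * f[n] for n in range(N)) for m in range(N)]
# ...versus the defining sum
F_by_sum = [sum(f[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

assert all(abs(a - b) < 1e-12 for a, b in zip(F_by_matrix, F_by_sum))
assert abs(F_by_matrix[0] - sum(f)) < 1e-12   # top row of all 1's: DC = sum of inputs
```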

**Positive and negative frequencies** The heading of this section may seem a little odd, since for $\mathbf{F} = \mathcal{F}\mathbf{f}$ we index the output $\mathbf{F} = (\mathbf{F}[0], \mathbf{F}[1], \ldots, \mathbf{F}[N-1])$, and hence the points in the spectrum, from $0$ to $N-1$ — no negative indices in sight. It will seem less odd when we discuss reindexing, but this requires a little more preparation. For now, and for use in the example to follow, there is something important to point out about the values of $\mathcal{F}\mathbf{f}$ that is analogous to $\mathcal{F}f(-s) = \overline{\mathcal{F}f(s)}$ in the continuous case when $f(t)$ is real.

Suppose that $N$ is even. (This was the case in our derivation of the formula for the DFT, and it’s often assumed, though it’s not necessary for the ultimate definition of $\mathcal{F}$.) Suppose also that we consider real inputs $\mathbf{f} = (\mathbf{f}[0], \mathbf{f}[1], \ldots, \mathbf{f}[N-1])$. Something special happens at the midpoint, $N/2$, of the spectrum.

We find

$$\mathcal{F}\mathbf{f}[N/2] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\boldsymbol{\omega}^{-k}[N/2] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-kN/2} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,e^{-\pi i k} = \sum_{k=0}^{N-1}(-1)^{-k}\,\mathbf{f}[k] \qquad(\text{using } \omega^{N/2} = -1)\,.$$

The value of the transform at $N/2$ is an alternating sum of the components of the input vector $\mathbf{f}$. In particular, $\mathcal{F}\mathbf{f}[N/2]$ is real.

More than $\mathcal{F}\mathbf{f}[N/2]$ being real, though, is that the spectrum “splits” at $N/2$. For a start, look at $\mathcal{F}\mathbf{f}[(N/2)+1]$ and $\mathcal{F}\mathbf{f}[(N/2)-1]$:

$$\mathcal{F}\mathbf{f}\big[\tfrac{N}{2}+1\big] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\boldsymbol{\omega}^{-k}\big[\tfrac{N}{2}+1\big] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-k}\,\omega^{-Nk/2} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-k}(-1)^{-k}\,,$$

$$\mathcal{F}\mathbf{f}\big[\tfrac{N}{2}-1\big] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\boldsymbol{\omega}^{-k}\big[\tfrac{N}{2}-1\big] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{k}\,\omega^{-Nk/2} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{k}(-1)^{-k}\,.$$

Comparing the two calculations (and remembering that the $\mathbf{f}[k]$ are real), we see that

$$\mathcal{F}\mathbf{f}\big[\tfrac{N}{2}+1\big] = \overline{\mathcal{F}\mathbf{f}\big[\tfrac{N}{2}-1\big]}\,.$$

Similarly, we get

$$\mathcal{F}\mathbf{f}\big[\tfrac{N}{2}+2\big] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-2k}(-1)^{-k}\,,\qquad \mathcal{F}\mathbf{f}\big[\tfrac{N}{2}-2\big] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{2k}(-1)^{-k}\,,$$

so that

$$\mathcal{F}\mathbf{f}\big[\tfrac{N}{2}+2\big] = \overline{\mathcal{F}\mathbf{f}\big[\tfrac{N}{2}-2\big]}\,.$$

This pattern persists down to

$$\mathcal{F}\mathbf{f}[1] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-k} \qquad\text{and}\qquad \mathcal{F}\mathbf{f}[N-1] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-k(N-1)} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{k}\,\omega^{-kN} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{k}\,,$$

i.e., to

$$\mathcal{F}\mathbf{f}[1] = \overline{\mathcal{F}\mathbf{f}[N-1]}\,.$$

Here’s where it stops; recall that $\mathcal{F}\mathbf{f}[0]$ is the sum of the components of $\mathbf{f}$.

Because of this result (with an alternate explanation later in this chapter), when the spectrum is indexed from $0$ to $N-1$ the convention is to say that the frequencies from $m = 1$ to $m = N/2 - 1$ are the positive frequencies and those from $N/2 + 1$ to $N-1$ are the negative frequencies. Whatever adjectives one uses, the important upshot is that, for a real input $\mathbf{f}$, all the information in the spectrum is in the first component $\mathcal{F}\mathbf{f}[0]$ (the “DC” component, the sum of the components of the input), the components $\mathcal{F}\mathbf{f}[1], \mathcal{F}\mathbf{f}[2], \ldots, \mathcal{F}\mathbf{f}[N/2-1]$, and the special value $\mathcal{F}\mathbf{f}[N/2]$ (the alternating sum of the components of the input). The remaining components of $\mathcal{F}\mathbf{f}$ are just the complex conjugates of those from $1$ to $N/2 - 1$. As we’ll see, this has practical importance.
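The conjugate symmetry is easy to observe numerically. A Python sketch with a real (seeded random) input of length $N = 8$, using a direct-from-definition `dft` helper rather than any library routine:

```python
import cmath
import random

def dft(f):
    # straight from the definition of the DFT
    N = len(f)
    return [sum(f[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N))
            for m in range(N)]

N = 8
random.seed(1)
f = [random.uniform(-1, 1) for _ in range(N)]   # a real input
F = dft(f)

# F[0] and F[N/2] are real: the sum and the alternating sum of the inputs
assert abs(F[0].imag) < 1e-12 and abs(F[N // 2].imag) < 1e-12
assert abs(F[N // 2] - sum((-1) ** k * f[k] for k in range(N))) < 1e-12

# the "negative frequencies" are conjugates of the positive ones
for m in range(1, N // 2):
    assert abs(F[N - m] - F[m].conjugate()) < 1e-12
```

The practical importance shows up here too: for a real input of length $N$, roughly half of the output values are redundant, and real-input FFT routines exploit exactly this.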

**6.3** **Two Grids, Reciprocally Related**

Refer back to our understanding that the DFT finds the sampled Fourier transform of a sampled signal.

We have a grid of points in the time domain and a grid of points in the frequency domain where the discrete version of the signal and the discrete version of its Fourier transform are known. More precisely, shifting to the discrete point of view, the values of the signal at the points in the time domain are all we know about the signal and the values we compute according to the DFT formula are all we know about its transform.

*In the time domain the signal is limited to an interval of length L. In the frequency domain the transform*
*is limited to an interval of length 2B. When you plot the discrete signal and its DFT (or rather, e.g., the*
magnitude of its DFT since the DFT is complex), you should (probably) plot over these intervals (but
*your software might not give you this for the DFT). The grid points in the time domain are spaced 1/2B*
*apart. The grid points in the frequency domain are spaced 1/L apart, so note (again) that the spacing in*
one domain is determined by properties of the function in the other domain. The two grid spacings are
*related to the third quantity in the setup, the number of sample points, N . The equation is*

$$N = 2BL\,.$$

*Any two of these quantities — B, L, or N — determine the third via this relationship. The equation is*
*often written another way, in terms of the grid spacings. If ∆t = 1/2B is the grid spacing in the time*
*domain and ∆ν = 1/L is the grid spacing in the frequency domain, then*

$$\frac{1}{N} = \Delta t\,\Delta\nu\,.$$

*These two (equivalent) equations are referred to as the reciprocity relations. A thing to put into your head*
*is that for a fixed number of sample points N making ∆t small means making ∆ν large, and vice versa.*

Here’s why all this is important. For a given problem you want to solve — for a given signal you want to
*analyze by taking the Fourier transform — you typically either know or choose two of:*

*• How long the signal lasts, i.e., how long you’re willing to sit there taking measurements — that’s L.*

*• How many measurements you make — that’s N .*

*• How often you make a measurement — that’s ∆t.*

Once two of these are determined everything else is set.
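A tiny worked example of the reciprocity relations (the numbers are made up for illustration): say you are willing to record for $L = 2$ seconds and you take $N = 1000$ measurements.

```python
# Reciprocity relations: N = 2BL, equivalently 1/N = (Delta t)(Delta nu)
L = 2.0     # seconds of recording (chosen)
N = 1000    # number of measurements (chosen)

dt = L / N           # time between measurements: 0.002 s
B = N / (2 * L)      # implied bandwidth: 250 Hz
dnu = 1 / L          # frequency-grid spacing: 0.5 Hz

assert abs(dt - 1 / (2 * B)) < 1e-12    # dt = 1/2B
assert abs(dt * dnu - 1 / N) < 1e-12    # 1/N = dt * dnu
```

Choosing $L$ and $N$ pinned down everything else: the sampling interval, the bandwidth you can resolve, and the frequency-grid spacing.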

**6.4** **Appendix: Gauss’s Problem**

Finally, here’s the problem Gauss considered on representing the orbit of an asteroid by a finite Fourier series. Gauss was interested in astronomy, as he was in everything else, and occupied himself for a period in calculating orbits of recently discovered asteroids. This led to two great computational techniques. One was taken up right away (least squares curve fitting) and the other was forgotten (the algorithm for efficient computation of Fourier coefficients — the fast Fourier transform — as mentioned above).

It was in calculating the orbit of Pallas that Gauss introduced methods that became the FFT algorithm. He had twelve data points for the orbit, relating the “ascension” $\theta$, measured in degrees, and the “declination” $X$, measured in minutes of arc. It appears from the data that $X$ depends periodically on $\theta$, so the problem is to interpolate a finite Fourier series based on the twelve samples. Gauss considered a sum of the form

$$X = f(\theta) = a_0 + \sum_{k=1}^{5}\Big(a_k\cos\frac{2\pi k\theta}{360} + b_k\sin\frac{2\pi k\theta}{360}\Big) + a_6\cos\frac{12\pi\theta}{360}\,.$$

Here’s the data:

| $\theta$ | 0 | 30 | 60 | 90 | 120 | 150 | 180 | 210 | 240 | 270 | 300 | 330 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $X$ | 408 | 89 | −66 | 10 | 338 | 807 | 1238 | 1511 | 1583 | 1462 | 1183 | 804 |

How to determine the 12 unknown coefficients based on the samples $X_n = f(\theta_n)$? Want to take a whack at it? It requires solving 12 linear equations in 12 unknowns, something Gauss could have done by hand.

Nevertheless, he was enough taken by the symmetries inherent in using sines and cosines to devise a scheme that rearranges the algebra in a way to reduce the total number of steps — essentially introducing a collection of easier subproblems whose solutions are later recombined to give the desired grand solutions.

That rearrangement is what we’ll talk about later, and I won’t attempt to reconstruct Gauss’s solution.

Here are the coefficients, as found by a modern FFT routine:

| $k$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|
| $a_k$ | 780.6 | −411.0 | 43.4 | −4.3 | −1.1 | 0.3 | 0.1 |
| $b_k$ | | −720.2 | −2.2 | 5.5 | −1.0 | −0.3 | |

More impressive is the graph, shown in the following figure.
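You can recover these coefficients directly from the data with the standard discrete orthogonality formulas for a real trigonometric interpolant: $a_0$ is the mean, $a_k$ and $b_k$ for $1 \le k \le 5$ are doubled cosine and sine averages, and $a_6$ gets no doubling. (This is our own check via those formulas, not a reconstruction of Gauss’s actual procedure.) A Python sketch, verifying a few entries of the table:

```python
import math

theta = [30 * n for n in range(12)]
X = [408, 89, -66, 10, 338, 807, 1238, 1511, 1583, 1462, 1183, 804]

a0 = sum(X) / 12
a = {k: sum(X[n] * math.cos(2 * math.pi * k * theta[n] / 360)
            for n in range(12)) / 6 for k in range(1, 6)}
b = {k: sum(X[n] * math.sin(2 * math.pi * k * theta[n] / 360)
            for n in range(12)) / 6 for k in range(1, 6)}
a6 = sum(X[n] * math.cos(12 * math.pi * theta[n] / 360)
         for n in range(12)) / 12   # the "Nyquist" term is not doubled

assert abs(a0 - 780.6) < 0.05       # table: a_0 = 780.6
assert abs(a[1] - (-411.0)) < 0.05  # table: a_1 = -411.0
assert abs(b[1] - (-720.2)) < 0.05  # table: b_1 = -720.2
```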


**6.5** **Getting to Know Your Discrete Fourier Transform**

We introduced the discrete Fourier transform (DFT) as a discrete approximation of the usual Fourier transform. The DFT takes an $N$-tuple $\mathbf{f} = (\mathbf{f}[0], \mathbf{f}[1], \ldots, \mathbf{f}[N-1])$ (the input) and returns an $N$-tuple $\mathbf{F} = (\mathbf{F}[0], \mathbf{F}[1], \ldots, \mathbf{F}[N-1])$ (the output) via the formula

$$\mathbf{F} = \mathcal{F}\mathbf{f} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\boldsymbol{\omega}^{-k}$$

where $\boldsymbol{\omega}$ is the vector complex exponential,

$$\boldsymbol{\omega} = (1, \omega, \omega^2, \ldots, \omega^{N-1})\,,\qquad \omega = e^{2\pi i/N}\,.$$

Evaluating $\mathcal{F}\mathbf{f}$ at an index $m$ gives the $m$-th component of the output:

$$\mathbf{F}[m] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-km} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,e^{-2\pi i km/N}\,.$$

We also write $\mathcal{F}$ as the $N \times N$ matrix

$$\mathcal{F} = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega^{-1} & \omega^{-2} & \cdots & \omega^{-(N-1)} \\ 1 & \omega^{-2} & \omega^{-4} & \cdots & \omega^{-2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{-(N-1)} & \omega^{-2(N-1)} & \cdots & \omega^{-(N-1)^2} \end{pmatrix}\,.$$

The $(m, n)$ entry is just $\omega^{-mn}$, where the rows and columns are indexed from $0$ to $N-1$.

We want to develop some general properties of the DFT, much as we did when we first introduced the continuous Fourier transform. Most properties of the DFT correspond pretty directly to properties of the continuous Fourier transform, though there are differences. You should try to make the most of these correspondences, if only to decide for yourself when they are close and when they are not. Use what you know!
Derivations using the DFT are often easier than those for the Fourier transform because there are no
worries about convergence, but discrete derivations can have their own complications. The skills you need
are in manipulating sums, particularly sums of complex exponentials, and in manipulating matrices. In
both instances it’s often a good idea to first calculate a few examples, say for DFTs of size three or four,
to see what the patterns are.^{2} We’ll find formulas that are interesting and that are very important in
applications.

**6.6** **Periodicity, Indexing, and Reindexing**

*Let me begin by highlighting a difference between the discrete and continuous cases rather than a similarity.*

The definition of the DFT suggests, even compels, some additional structure to the outputs and inputs.

The output values $\mathbf{F}[m]$ are defined initially only for $m = 0$ to $m = N-1$, but their definition as

$$\mathbf{F}[m] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-km}$$

implies a periodicity property. Since

$$\omega^{-k(m+N)} = \omega^{-km}$$

we have

$$\sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-k(m+N)} = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-km} = \mathbf{F}[m]\,.$$

If we consider the left-hand side as the DFT formula producing an output, then that output would be $\mathbf{F}[m+N]$. More generally, and by the same kind of calculation, we would have

$$\mathbf{F}[m + nN] = \mathbf{F}[m]$$

for any integer $n$. Thus, instead of just working with $\mathbf{F}$ as an $N$-tuple it’s natural to “extend” it to be a periodic sequence of period $N$. For example, if we start off with $N = 4$ and the values $(\mathbf{F}[0], \mathbf{F}[1], \mathbf{F}[2], \mathbf{F}[3])$ then, by definition, the periodic extension of $\mathbf{F}$ has $\mathbf{F}[4] = \mathbf{F}[0]$, $\mathbf{F}[5] = \mathbf{F}[1]$, $\mathbf{F}[6] = \mathbf{F}[2]$, and so on, and going in the other direction, $\mathbf{F}[-1] = \mathbf{F}[3]$, $\mathbf{F}[-2] = \mathbf{F}[2]$, and so on. In general,

$$\mathbf{F}[p] = \mathbf{F}[q] \quad\text{if $p - q$ is a multiple of $N$, positive or negative,}$$

or, put another way,

$$\mathbf{F}[p] = \mathbf{F}[q] \quad\text{if } p \equiv q \bmod N\,.$$

We then have the formula

$$\mathbf{F}[m] = \sum_{k=0}^{N-1}\mathbf{f}[k]\,\omega^{-km} \quad\text{for all integers $m$.}$$
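Numerically, evaluating the DFT sum at $m$ and at $m + nN$ gives the same value; a quick Python sketch:

```python
import cmath

def dft_at(f, m):
    # the DFT sum evaluated at an arbitrary integer index m
    N = len(f)
    return sum(f[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N))

f = [1.0, -2.0, 3.0, 0.5]
N = len(f)

for m in range(N):
    assert abs(dft_at(f, m + N) - dft_at(f, m)) < 1e-9       # F[m + N] = F[m]
    assert abs(dft_at(f, m - 2 * N) - dft_at(f, m)) < 1e-9   # F[m - 2N] = F[m]
```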

Because of these observations, and unless instructed otherwise:

2One thing to be mindful of in deriving formulas in the general case, in particular when working with sums, is what you call the “index of summation” (analogous to the “variable of integration”) and not to get it confused or in conflict with other variables that are in use. Derivations might also involve “changing the variable of summation”, analogous to “changing the variable of integration”, a procedure that seems easier in the case of integrals than in sums, maybe just because of all the practice we’ve had with integrals.


**• We will always assume that F is a periodic sequence of period N .**

**Once we define the inverse DFT it will emerge that an input f to the DFT also extends naturally to be a**
*periodic sequence of period N . We’ll also assume that, starting now.*

**• We will always assume that f is a periodic sequence of period N .**

So, briefly, we assume that all our discrete signals are periodic. To take an important example, if, according to this dictum, we consider the vector complex exponential not just as a vector but as a periodic discrete signal, then we can define it simply by

$$\boldsymbol{\omega}[n] = \omega^n\,,\qquad n \text{ an integer.}$$

*As for how this differs from the continuous case, we certainly can consider periodicity — that’s what the*
*subject of Fourier series is all about, after all — but when working with the Fourier transform we don’t have*
to consider periodicity. In the discrete case we really do. Some things just don’t work (e.g., convolution)
if we don’t work with periodic inputs and outputs.

If we were developing the DFT from a purely mathematical point of view, we would probably incorporate periodicity as part of the initial definition, and this is sometimes done. It would make some parts of the mathematical development a little smoother (though no different in substance), but I think on balance it’s a mistake. It’s extra baggage early on and can make the tie in with physical applications more awkward.

**6.6.1** **Notation and conventions 2**

Having extended the inputs and outputs to be periodic sequences, it’s mostly a matter of taste, or a preferred point of view, whether one then wants to think of an input to the DFT as a vector in $\mathbb{C}^N$ that is to be “extended periodically”, or as a discrete signal $f : \mathbb{Z} \to \mathbb{C}$ that is periodic of period $N$, where $\mathbb{Z}$ stands for the integers. Each of these viewpoints can be helpful. For example, vectors are helpful if one wants to think of the DFT as a matrix, or when inner products are involved.

When confusion arises — and it does arise — it usually comes from the vector or $N$-tuple stance, and from questions and conventions on how vectors are indexed, whether from $0$ to $N-1$ or from $1$ to $N$, or other choices. In fact, indexing and reindexing the components of a DFT is something that just seems to come up — it certainly comes up in varied implementations of the DFT, and it’s something you have to be able to handle if you use different packages or program any of these formulas.

The definition of the DFT that we’ve given is pretty standard, and it’s the one we’ll use. One sometimes finds an alternate definition of the DFT, used especially in imaging problems, where $N$ is assumed to be even and the index set for both the inputs $\mathbf{f}$ and the outputs $\mathbf{F}$ is taken to be $[-(N/2)+1 : N/2] = (-(N/2)+1, -(N/2)+2, \ldots, -1, 0, 1, \ldots, N/2)$. The definition of the DFT is then:

$$\mathcal{F}\mathbf{f} = \sum_{k=-N/2+1}^{N/2}\mathbf{f}[k]\,\boldsymbol{\omega}^{-k} \qquad\text{or, in components,}\qquad \mathbf{F}[m] = \sum_{k=-N/2+1}^{N/2}\mathbf{f}[k]\,\omega^{-km}\,.$$

We would be led to this indexing of the inputs and outputs if, in the sampling-based derivation we gave of the DFT in the previous lecture, we sampled on the time interval from −L/2 to L/2 and on the frequency interval from −B to B. Then, using the index set [−(N/2) + 1 : N/2], the sample points in the time domain would be of the form

$$t_{-N/2+1} = -\frac{L}{2} + \frac{1}{2B} = \frac{-N/2+1}{2B}, \quad t_{-N/2+2} = \frac{-N/2+2}{2B}, \;\dots,\; t_{N/2} = \frac{-N/2+N}{2B} = \frac{N/2}{2B} = \frac{L}{2}\,,$$

and in the frequency domain of the form

$$s_{-N/2+1} = -B + \frac{1}{L} = \frac{-N/2+1}{L}, \quad s_{-N/2+2} = \frac{-N/2+2}{L}, \;\dots,\; s_{N/2} = \frac{-N/2+N}{L} = \frac{N/2}{L} = B\,.$$

(Maybe you can see why I didn’t want to set things up this way for our first encounter.)

The “new” definition, above, of the DFT is completely equivalent to the first definition because of periodicity. There’s a phrase one sees that’s supposed to alleviate the tension over this and other similar sorts of things. It goes something like:

*“The DFT can be defined over any set of N consecutive indices.”*

What this means most often in practice is that we can write

$$\mathcal{F}\mathbf{f} = \sum_{k=p}^{p+N-1} \mathbf{f}[k]\,\boldsymbol{\omega}^{-k}\,.$$

We’ll explain this thoroughly in a later section. It’s tedious, but not difficult. If you think of an input (or output) **f** as a periodic discrete signal (something your software package can’t really do) then you don’t have to worry about “how it’s indexed”. It goes on forever, and any block of N consecutive values, f[p], f[p + 1], . . . , f[p + N − 1], should be as good as any other because the values of f repeat. You still have to establish the quoted remark, however, to be assured that the DFT gives the same result on whichever block you use. This is essentially a discrete form of the statement for continuous periodic functions that the Fourier coefficients can be calculated by integrating over any period.
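The quoted remark can be checked numerically. Here is a hedged sketch (assuming numpy; the names `f_period` and `dft_over_block` are illustrative) that computes one DFT coefficient of a periodic signal by summing over several different blocks of N consecutive indices:

```python
import numpy as np

N = 8
omega = np.exp(2j * np.pi / N)
rng = np.random.default_rng(0)
f_period = rng.standard_normal(N)        # one period, indices 0 .. N-1

def f(k):
    # the periodic extension f : Z -> C, period N
    return f_period[k % N]

def dft_over_block(p, m):
    # F[m] computed by summing over the index block p, p+1, ..., p+N-1
    return sum(f(k) * omega**(-k * m) for k in range(p, p + N))

# any block of N consecutive indices gives the same DFT coefficient
vals = [dft_over_block(p, m=3) for p in (-5, 0, 2, 11)]
all_equal = np.allclose(vals, vals[0])
```

The key point the code exercises is that both f[k] and ω^{−km} are periodic in k with period N, so the sum over any window of length N picks up exactly the same N terms.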

**Positive and negative frequencies, again** Let’s tie up a loose end and see how periodicity makes honest negative frequencies correspond to negative frequencies “by convention”. Suppose we have a periodic input **f** and output **F** = F**f** indexed from −(N/2) + 1 to N/2. We would certainly say in this case that the negative frequencies go from −(N/2) + 1 to −1, with corresponding outputs **F**[−(N/2) + 1], **F**[−(N/2) + 2], . . . , **F**[−1]. Where do these frequencies go if we “reindex” from 0 to N − 1? Using periodicity,

$$\mathbf{F}[-\tfrac{N}{2} + 1] = \mathbf{F}[-\tfrac{N}{2} + 1 + N] = \mathbf{F}[\tfrac{N}{2} + 1]\,,$$

$$\mathbf{F}[-\tfrac{N}{2} + 2] = \mathbf{F}[-\tfrac{N}{2} + 2 + N] = \mathbf{F}[\tfrac{N}{2} + 2]\,,$$

and so on up to

$$\mathbf{F}[-1] = \mathbf{F}[-1 + N] = \mathbf{F}[N - 1]\,.$$

The “honest” negative frequencies at −(N/2) + 1, . . . , −1 are, by periodicity, the “negative frequencies by convention” at N/2 + 1, . . . , N − 1.
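This reindexing is easy to see in software. Below is a sketch (assuming numpy; `F_centered` is an illustrative helper, not a library function) that computes the DFT with the centered index set directly and confirms that an honest negative frequency m lands in slot m + N of the standard 0-to-(N − 1) output:

```python
import numpy as np

N = 8
omega = np.exp(2j * np.pi / N)
f = np.random.default_rng(1).standard_normal(N)
F = np.fft.fft(f)                        # standard output, slots 0 .. N-1

def F_centered(m):
    # DFT over the centered index set [-(N/2)+1 : N/2], input taken periodically
    return sum(f[k % N] * omega**(-k * m) for k in range(-N // 2 + 1, N // 2 + 1))

# each honest negative frequency m lands in slot m + N of the standard output
checks = [np.isclose(F_centered(m), F[m + N]) for m in range(-N // 2 + 1, 0)]
```

Related helpers such as `np.fft.fftshift` perform essentially this reordering, though numpy’s shifted convention runs from −N/2 to N/2 − 1 rather than from −N/2 + 1 to N/2 as in the text.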

**6.7** **Inverting the DFT and Many Other Things Along the Way**

By now it should be second nature to you to expect that any (useful) transform ought to have an inverse transform. The DFT is no exception. It does have an inverse, and the formula for the inverse is quite simple and very similar to the formula for the DFT itself, (almost) just like the continuous case.

The key to inversion is the “discrete orthogonality” of the complex exponentials. We’re going to look at
the problem of finding F^{−1} from both the vector point of view and the matrix point of view, with more
emphasis on the former. You can take your pick which you prefer, but it’s helpful to know both.

**6.7.1** **The discrete δ**

Good news! No need to go through the theory of distributions to define a δ in the discrete case. We can do it directly and easily by setting

$$\boldsymbol{\delta}_0 = (1, 0, \dots, 0)\,.$$

In words, there’s a 1 in the zeroth slot and 0’s in the remaining N − 1 slots. **δ**_{0} is really just the first basis vector of **C**^{N} under an assumed name (the way we’re indexing vectors from 0 to N − 1), but to make comparisons to the continuous case we prefer to accord it independent status. We didn’t specify N, and so, strictly speaking, there’s a **δ**_{0} for each N, but since **δ**_{0} will always arise in a context where the N is otherwise specified we’ll set aside that detail. As a periodic signal the definition of **δ**_{0} is

$$\boldsymbol{\delta}_0[m] = \begin{cases} 1 & m \equiv 0 \bmod N, \\ 0 & \text{otherwise.} \end{cases}$$
For the DFT of **δ**_{0} we have

$$\mathcal{F}\boldsymbol{\delta}_0 = \sum_{k=0}^{N-1} \boldsymbol{\delta}_0[k]\,\boldsymbol{\omega}^{-k} = \boldsymbol{\omega}^{0} = \mathbf{1}\,.$$

Great — just like F δ = 1, and no tempered distributions in sight!

If we think of applying the DFT as matrix multiplication, then F **δ**_{0} pulls out the first column, which is **1**. (We may drop the subscript 0 on **δ**_{0} if it’s clear from the context.)

The shifted discrete δ is just what you think it is,

$$\boldsymbol{\delta}_k = (0, \dots, 0, 1, 0, \dots, 0)$$

with a 1 in the k-th slot and zeros elsewhere. That is, the lowly k-th natural basis vector of **C**^{N} is now masquerading as the important **δ**_{k}, and we can write for an arbitrary vector **f**,

$$\mathbf{f} = \sum_{k=0}^{N-1} \mathbf{f}[k]\,\boldsymbol{\delta}_k\,.$$

As a periodic discrete signal

$$\boldsymbol{\delta}_k[m] = \begin{cases} 1 & m \equiv k \bmod N, \\ 0 & \text{otherwise.} \end{cases}$$

Note, then, that if **f** is a periodic discrete signal we can still write

$$\mathbf{f} = \sum_{k=0}^{N-1} \mathbf{f}[k]\,\boldsymbol{\delta}_k\,.$$

The **δ**_{k}’s can be viewed as a basis for **C**^{N} and also as a basis for the N-periodic discrete signals.

For the DFT of **δ**_{k} we have

$$\mathcal{F}\boldsymbol{\delta}_k = \sum_{n=0}^{N-1} \boldsymbol{\delta}_k[n]\,\boldsymbol{\omega}^{-n} = \boldsymbol{\omega}^{-k}\,.$$

From the matrix point of view, taking the DFT of **δ**_{k} pulls out the k-th column of the DFT matrix, and that’s **ω**^{−k}.

These are our first explicit transforms, and if we believe that the discrete case can be made to look like the continuous case, the results are encouraging. We state them again.

**• The DFTs of the discrete δ and shifted δ are:**

$$\mathcal{F}\boldsymbol{\delta}_0 = \mathbf{1} \quad\text{and}\quad \mathcal{F}\boldsymbol{\delta}_k = \boldsymbol{\omega}^{-k}\,.$$

We’ll establish other properties of the discrete **δ**_{k}’s (convolution, sampling) later.
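Both transforms are easy to confirm numerically. A minimal sketch, assuming numpy (variable names are illustrative): the DFT of δ₀ should be the all-ones vector, and the DFT of δ_k should be the vector ω^{−k} componentwise:

```python
import numpy as np

N = 6
omega = np.exp(2j * np.pi / N)
n = np.arange(N)

delta0 = np.zeros(N)
delta0[0] = 1.0
F_delta0 = np.fft.fft(delta0)            # should be the all-ones vector

k = 2
deltak = np.zeros(N)
deltak[k] = 1.0
F_deltak = np.fft.fft(deltak)            # should be omega^{-k} componentwise

check0 = np.allclose(F_delta0, np.ones(N))
checkk = np.allclose(F_deltak, omega**(-k * n))
```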

**To know F on a basis** Notice that the linearity of F and the knowledge that F **δ**_{k} = **ω**^{−k} recover the general formula for F. Indeed,

$$\mathbf{f} = \sum_{k=0}^{N-1} \mathbf{f}[k]\,\boldsymbol{\delta}_k \;\Rightarrow\; \mathcal{F}\mathbf{f} = \sum_{k=0}^{N-1} \mathbf{f}[k]\,\mathcal{F}\boldsymbol{\delta}_k = \sum_{k=0}^{N-1} \mathbf{f}[k]\,\boldsymbol{\omega}^{-k}\,.$$

**6.7.2** **Orthogonality of the vector complex exponentials**

Having introduced the vector complex exponential

$$\boldsymbol{\omega} = (1, \omega, \omega^2, \dots, \omega^{N-1})$$

and its k-th power

$$\boldsymbol{\omega}^k = (1, \omega^k, \omega^{2k}, \dots, \omega^{(N-1)k})$$

it is easy to formulate a key property:

If k and ℓ are any integers then

$$\boldsymbol{\omega}^k \cdot \boldsymbol{\omega}^{\ell} = \begin{cases} 0 & k \not\equiv \ell \bmod N, \\ N & k \equiv \ell \bmod N. \end{cases}$$

Thus the powers of the vector complex exponentials are “almost” orthonormal. We could make them orthonormal by considering instead

$$\frac{1}{\sqrt{N}}\,(1, \omega^k, \omega^{2k}, \dots, \omega^{(N-1)k})\,,$$

but we won’t do this.

In the continuous case the analogous result is that the family of functions (1/√T) e^{2πint/T} are orthonormal with respect to the inner product on L^{2}([0, T]):

$$\int_0^T \frac{1}{\sqrt{T}}\,e^{2\pi i m t/T}\,\frac{1}{\sqrt{T}}\,e^{-2\pi i n t/T}\,dt = \frac{1}{T}\int_0^T e^{2\pi i (m-n)t/T}\,dt = \begin{cases} 1 & m = n, \\ 0 & m \neq n. \end{cases}$$

Orthogonality in the continuous case is actually easier to establish than in the discrete case, because, sometimes, integration is easier than summation. However, you pretty much established the result in the discrete case on your first problem set this quarter: the problem of adding up roots of unity is exactly what’s involved, which is why I asked you to do that question way back then.

There are two things we’ll need for the derivation. The first is that ω^{kN} = 1 where k is any integer. The second is the sum of a finite geometric series, something we have used repeatedly:

$$1 + z + z^2 + \cdots + z^{N-1} = \begin{cases} \dfrac{1 - z^N}{1 - z} & z \neq 1, \\[4pt] N & z = 1. \end{cases}$$

We’ll use this formula for z = ω. In that case,

$$1 + \omega + \omega^2 + \cdots + \omega^{N-1} = \frac{1 - \omega^N}{1 - \omega} = 0\,,$$

which is the exercise you did on the first problem set. More generally, if k is not an integer multiple of N, so that ω^{k} ≠ 1 while ω^{kN} = 1, then

$$1 + \omega^k + \omega^{2k} + \cdots + \omega^{(N-1)k} = \frac{1 - \omega^{kN}}{1 - \omega^k} = 0\,,$$

while if k is an integer multiple of N then ω^{k} = 1 and

$$1 + \omega^k + \omega^{2k} + \cdots + \omega^{(N-1)k} = 1 + 1 + \cdots + 1 = N\,.$$

Succinctly,

$$1 + \omega^k + \omega^{2k} + \cdots + \omega^{(N-1)k} = \begin{cases} 0 & k \not\equiv 0 \bmod N, \\ N & k \equiv 0 \bmod N. \end{cases}$$
Let’s compute the inner product **ω**^{k} · **ω**^{ℓ}:

$$\begin{aligned}
\boldsymbol{\omega}^k \cdot \boldsymbol{\omega}^{\ell} &= \sum_{n=0}^{N-1} \boldsymbol{\omega}^k[n]\,\overline{\boldsymbol{\omega}^{\ell}[n]} \\
&= \sum_{n=0}^{N-1} \boldsymbol{\omega}^k[n]\,\boldsymbol{\omega}^{-\ell}[n] && \text{(taking Matlab seriously)} \\
&= \sum_{n=0}^{N-1} \boldsymbol{\omega}^{k-\ell}[n] && \text{(ditto)} \\
&= \sum_{n=0}^{N-1} \omega^{(k-\ell)n} = \begin{cases} 0 & k - \ell \not\equiv 0 \bmod N \\ N & k - \ell \equiv 0 \bmod N \end{cases}
= \begin{cases} 0 & k \not\equiv \ell \bmod N \\ N & k \equiv \ell \bmod N \end{cases}
\end{aligned}$$

Done.
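The computation above is easy to replicate numerically. A minimal sketch, assuming numpy (the helper `inner` is illustrative); note that the Hermitian inner product conjugates one factor, which is exactly the ω^{−ℓ} step in the derivation:

```python
import numpy as np

N = 5
omega_vec = np.exp(2j * np.pi * np.arange(N) / N)   # the vector (1, w, ..., w^{N-1})

def inner(k, ell):
    # Hermitian inner product of omega^k and omega^ell:
    # np.vdot conjugates its first argument, so this is sum_n w^{kn} * conj(w^{ell n})
    return np.vdot(omega_vec**ell, omega_vec**k)

orth = np.isclose(inner(3, 1), 0)        # 3 and 1 differ mod 5, so the product is 0
norm = np.isclose(inner(2, 2), N)        # 2 == 2 mod 5, so the product is N
wrap = np.isclose(inner(7, 2), N)        # 7 == 2 mod 5, so again N
```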

**Remark** From this result we conclude that the N distinct vectors **1**, **ω**^{−1}, **ω**^{−2}, . . . , **ω**^{−(N−1)} are a basis of **C**^{N} (and so are **1**, **ω**, **ω**^{2}, . . . , **ω**^{N−1}). From the earlier result that F **δ**_{k} = **ω**^{−k} we then know that F is invertible. This doesn’t tell us what the inverse is, however. We have to work a little harder for that.

**The DFT of the vector complex exponential** With the orthogonality of the vector complex exponentials established, a number of other important results are now within easy reach. For example, we can now find F **ω**^{k}.

By definition,

$$\mathcal{F}\boldsymbol{\omega}^k = \sum_{n=0}^{N-1} \boldsymbol{\omega}^k[n]\,\boldsymbol{\omega}^{-n}$$

and its ℓ-th component is then

$$\mathcal{F}\boldsymbol{\omega}^k[\ell] = \sum_{n=0}^{N-1} \boldsymbol{\omega}^k[n]\,\boldsymbol{\omega}^{-n}[\ell] = \sum_{n=0}^{N-1} \omega^{kn}\,\omega^{-n\ell} = \boldsymbol{\omega}^k \cdot \boldsymbol{\omega}^{\ell} = \begin{cases} 0 & k \not\equiv \ell \bmod N \\ N & k \equiv \ell \bmod N \end{cases}$$

We recognize this, and we are pleased.

**• The discrete Fourier transform of ω**^{k} is

$$\mathcal{F}\boldsymbol{\omega}^k = N\boldsymbol{\delta}_k\,.$$

Perhaps we are only almost pleased. There’s a factor of N that comes in that we don’t see, in any way, in the continuous case. Here it traces back, ultimately, to ‖**ω**‖^{2} = N.

The appearance of a factor N or 1/N in various formulas, always wired somehow to ‖**ω**‖^{2} = N, is one thing that makes the discrete case appear different from the continuous case, and it’s a pain in the neck to keep straight. Be careful.
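To see the factor of N appear concretely, here is a short sketch (assuming numpy; variable names are illustrative) that takes the DFT of the sampled exponential ω^{k} and checks that the result is N δ_k, not δ_k:

```python
import numpy as np

N = 8
k = 3
n = np.arange(N)
omega_k = np.exp(2j * np.pi * k * n / N)   # the vector exponential omega^k

F_omega_k = np.fft.fft(omega_k)

expected = np.zeros(N)
expected[k] = N                            # N * delta_k -- note the factor N
match = np.allclose(F_omega_k, expected)
```

This is the same factor you have to track when normalizing FFT output, e.g. when reading off the amplitude of a sinusoid from its spectrum.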

**6.7.3** **Reversed signals and their DFTs**

For a discrete signal **f**, defined on the integers, periodic or not, the corresponding reversed signal, **f**^{−}, is defined by

$$\mathbf{f}^{-}[m] = \mathbf{f}[-m]\,.$$

If **f** is periodic of period N, as we henceforth again assume, we can write it as the vector

$$\mathbf{f} = (\mathbf{f}[0], \mathbf{f}[1], \dots, \mathbf{f}[N-1])\,.$$

Then

$$\mathbf{f}^{-} = (\mathbf{f}[N], \mathbf{f}[N-1], \dots, \mathbf{f}[1]) \quad (\text{using } \mathbf{f}[N] = \mathbf{f}[0])\,,$$

which makes the description of **f**^{−} as “reversed” even more apt (though, as in many irritating instances, the indexing is a little off). Defined directly in terms of its components this is

$$\mathbf{f}^{-}[n] = \mathbf{f}[N - n]$$

and this formula is good for all integers n. This description of **f**^{−} is often quite convenient. Note that reversing a signal satisfies the principle of superposition (is linear as an operation on signals):

$$(\mathbf{f} + \mathbf{g})^{-} = \mathbf{f}^{-} + \mathbf{g}^{-} \quad\text{and}\quad (\alpha\mathbf{f})^{-} = \alpha\mathbf{f}^{-}\,.$$
It’s even more than that — we have

$$(\mathbf{f}\mathbf{g})^{-} = (\mathbf{f}^{-})(\mathbf{g}^{-})\,.$$

Let’s consider two special cases of reversed signals. First, clearly

$$\boldsymbol{\delta}_0^{-} = \boldsymbol{\delta}_0$$


and though we’ll pick up more on evenness and oddness later, this says that **δ**_{0} is even. For the shifted δ,

$$\boldsymbol{\delta}_k^{-} = \boldsymbol{\delta}_{-k}\,.$$

I’ll let you verify that. With this result we can write

$$\mathbf{f}^{-} = \Bigl(\sum_{k=0}^{N-1} \mathbf{f}[k]\,\boldsymbol{\delta}_k\Bigr)^{-} = \sum_{k=0}^{N-1} \mathbf{f}[k]\,\boldsymbol{\delta}_{-k}\,.$$

One might say that the **δ**_{k} are a basis for the forward signals and the **δ**_{−k} are a basis for the reversed signals.
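The component formula **f**^{−}[n] = **f**[N − n] translates directly into an indexing trick: the zeroth slot stays put and the rest of the vector flips. A minimal sketch, assuming numpy (the helper `reverse` is illustrative), which also checks δ_k^{−} = δ_{−k}:

```python
import numpy as np

def reverse(f):
    # f-[n] = f[N - n], indices taken mod N, so the 0th slot stays put
    N = len(f)
    return f[(N - np.arange(N)) % N]

f = np.array([10, 20, 30, 40])
f_rev = reverse(f)                       # (f[0], f[3], f[2], f[1]) = (10, 40, 30, 20)

# delta_k reversed is delta_{-k}, i.e. delta_{N-k} in 0-based slots
N = 6
k = 2
delta_k = np.zeros(N)
delta_k[k] = 1.0
delta_rev = reverse(delta_k)
is_delta_minus_k = delta_rev[(N - k) % N] == 1.0 and delta_rev.sum() == 1.0
```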

Next let’s look at **ω**. First we have

$$\boldsymbol{\omega}^{-} = (\boldsymbol{\omega}[N], \boldsymbol{\omega}[N-1], \boldsymbol{\omega}[N-2], \dots, \boldsymbol{\omega}[1]) = (1, \omega^{N-1}, \omega^{N-2}, \dots, \omega)\,.$$

But now notice (as we could have noticed earlier) that

$$\omega^{N-1}\,\omega = \omega^N = 1 \;\Rightarrow\; \omega^{N-1} = \omega^{-1}\,.$$

Likewise

$$\omega^{N-2}\,\omega^2 = \omega^N = 1 \;\Rightarrow\; \omega^{N-2} = \omega^{-2}\,.$$

Continuing in this way we see, very attractively,

$$\boldsymbol{\omega}^{-} = (1, \omega^{-1}, \omega^{-2}, \dots, \omega^{-(N-1)}) = \boldsymbol{\omega}^{-1}\,.$$
In the same way we find, equally attractively,

$$(\boldsymbol{\omega}^{k})^{-} = \boldsymbol{\omega}^{-k}\,.$$

Of course then also

$$(\boldsymbol{\omega}^{-k})^{-} = \boldsymbol{\omega}^{k}\,.$$

This has an important consequence for the DFT — our first discrete “duality result”. (Though we haven’t yet introduced the inverse DFT, which is how one usually thinks about duality. It’s the inverse DFT that we’re pointing toward.) Let’s consider F**f**^{−}, the DFT of the reversed signal. To work with the expression for F**f**^{−} we’ll need to use the periodicity of **f** and do a little fancy footwork changing the variable of summation in the definition of the DFT. Here’s how it goes:

$$\begin{aligned}
\mathcal{F}\mathbf{f}^{-} &= \sum_{k=0}^{N-1} \mathbf{f}^{-}[k]\,\boldsymbol{\omega}^{-k} \\
&= \sum_{k=0}^{N-1} \mathbf{f}[N-k]\,\boldsymbol{\omega}^{-k} && (\text{reversing } \mathbf{f}) \\
&= \sum_{\ell=1}^{N} \mathbf{f}[\ell]\,\boldsymbol{\omega}^{\ell-N} && (\text{letting } \ell = N - k) \\
&= \sum_{\ell=1}^{N} \mathbf{f}[\ell]\,\boldsymbol{\omega}^{\ell} && (\text{since } \boldsymbol{\omega}^{-N} = \mathbf{1})
\end{aligned}$$

But using **f**[N] = **f**[0] and **ω**^{N} = **ω**^{0} = **1** we can clearly write

$$\sum_{\ell=1}^{N} \mathbf{f}[\ell]\,\boldsymbol{\omega}^{\ell} = \sum_{\ell=0}^{N-1} \mathbf{f}[\ell]\,\boldsymbol{\omega}^{\ell} = \Bigl(\sum_{\ell=0}^{N-1} \mathbf{f}[\ell]\,\boldsymbol{\omega}^{-\ell}\Bigr)^{-} = (\mathcal{F}\mathbf{f})^{-}\,.$$

We have shown F**f**^{−} = (F**f**)^{−}. Cool. A little drawn out, but cool.

This then tells us that

$$\mathcal{F}\boldsymbol{\omega}^{-k} = (\mathcal{F}\boldsymbol{\omega}^{k})^{-} = (N\boldsymbol{\delta}_k)^{-} = N\boldsymbol{\delta}_{-k}\,.$$

In turn, from here we get a second duality result. Start with

$$\mathcal{F}\mathbf{f} = \sum_{k=0}^{N-1} \mathbf{f}[k]\,\boldsymbol{\omega}^{-k}$$

and apply F again. This produces

$$\mathcal{F}\mathcal{F}\mathbf{f} = \sum_{k=0}^{N-1} \mathbf{f}[k]\,\mathcal{F}\boldsymbol{\omega}^{-k} = N\sum_{k=0}^{N-1} \mathbf{f}[k]\,\boldsymbol{\delta}_{-k} = N\mathbf{f}^{-}\,.$$

To give the two results their own display:

• Duality relations for the DFT are:

$$\mathcal{F}\mathbf{f}^{-} = (\mathcal{F}\mathbf{f})^{-} \quad\text{and}\quad \mathcal{F}\mathcal{F}\mathbf{f} = N\mathbf{f}^{-}\,.$$
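Both duality relations can be spot-checked numerically on a random input. A sketch, assuming numpy (the helper `reverse` is illustrative, implementing **f**^{−}[n] = **f**[N − n]):

```python
import numpy as np

def reverse(f):
    # the reversed signal: f-[n] = f[(N - n) mod N]
    N = len(f)
    return f[(N - np.arange(N)) % N]

f = np.random.default_rng(2).standard_normal(8) + 0j

# first duality relation: F(f-) = (Ff)-
lhs1 = np.fft.fft(reverse(f))
rhs1 = reverse(np.fft.fft(f))
dual1 = np.allclose(lhs1, rhs1)

# second duality relation: FFf = N f-
lhs2 = np.fft.fft(np.fft.fft(f))
rhs2 = len(f) * reverse(f)
dual2 = np.allclose(lhs2, rhs2)
```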

We’ll do more on reversed signals, evenness and oddness, etc., but we’ve waited long enough for the big moment.

**6.7.4** **The inverse DFT**

We take our cue for finding F^{−1} from the duality results in the continuous case that say

$$\mathcal{F}^{-1}f = \mathcal{F}f^{-} = (\mathcal{F}f)^{-}\,.$$

The only thing we don’t have in the discrete case is the definition of F^{−1}, and this equation tells us how we might try defining it. There’s actually a factor of N coming in, and because I know what’s going to happen I’ll put it in now and define

$$\mathcal{F}^{-1}\mathbf{f} = \frac{1}{N}\,\mathcal{F}\mathbf{f}^{-}$$

and so equivalently

$$\mathcal{F}^{-1}\mathbf{f} = \frac{1}{N}\,(\mathcal{F}\mathbf{f})^{-} \quad\text{and also}\quad \mathcal{F}^{-1}\mathbf{f} = \frac{1}{N}\sum_{n=0}^{N-1} \mathbf{f}[n]\,\boldsymbol{\omega}^{n}\,.$$

Let’s see why this really does give us an inverse of F.
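Before the algebraic verification, a quick numerical sanity check of the component formula F^{−1}**f**[n] = (1/N) Σ_m **f**[m] ω^{mn}, written here as explicit matrix multiplications (a sketch assuming numpy; `dft` and `idft` are illustrative names, not library functions):

```python
import numpy as np

def dft(f):
    # forward DFT as a matrix: entries omega^{-km} = exp(-2*pi*i*k*m/N)
    N = len(f)
    m = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(m, m) / N)
    return W @ f

def idft(F):
    # proposed inverse: (1/N) sum_m F[m] * omega^{+mn}, note the sign flip and 1/N
    N = len(F)
    m = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(m, m) / N)
    return (W @ F) / N

f = np.random.default_rng(3).standard_normal(6) + 0j
recovered = idft(dft(f))
inverts = np.allclose(recovered, f)
```

The only differences between `dft` and `idft` are the sign of the exponent and the overall factor 1/N, which is exactly the pattern the definition above predicts.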