DFT notes
29 Jul 2021
This is a(n edited/corrected) “repost” of this Twitter thread, but I thought it would be nice to have all the notes collected here as well, so here goes.
I finally got to the punchline of “Mathematics of the DFT” and I must say, I’ve really enjoyed this book. It’s certainly dense and several parts took some rumination before they clicked, but I’m very satisfied.
I highly recommend reading the book if you’re curious about this stuff or want to fill in gaps in your knowledge. That said, here are some points that I found particularly interesting, many of which were totally new to me:
- The “add angles and multiply magnitudes” thing for complex multiplication isn’t arbitrary; it falls out of the algebra if you interpret the complex numbers as simple binomials. There’s a quick check of this in the sketches after this list. (side note: my Norwegian wife didn’t know “FOIL”, which makes sense, but I found it funnily surprising)
- All real sinusoids can be expressed as the sum of a conjugate pair of complex sinusoids. Put another way, every real signal must have matching positive- and negative-frequency components, each with half the amplitude of the corresponding real component. The conjugate pairing is important; when the pair is summed, the imaginary parts cancel and you’re left with a double-magnitude real component (also sketched below).
- Expressions describing rotations in the complex plane can be broken down into two parts: a phasor, which contains the magnitude and phase offset, and a carrier, which contains the time-dependent part.
- Continuous sinusoids at different frequencies are always orthogonal. When sampled, this only holds when the periods of the sinusoids are factors of the signal length. The two cases line up if you imagine the continuous case as the samples moving infinitely close together (also checked numerically below).
- Vectors can be “projected” onto one another. Geometrically, this corresponds to finding the point on one vector where a line dropped from the tip of the other vector meets it at a right angle.
- This projection can be expressed as a scalar times one of the vectors, where the scalar (called the “coefficient of projection”) is a scaled inner product of the two vectors.
- For a vector of dimension N, one can project the vector onto N orthogonal vectors, and then perfectly reconstruct the original vector again by multiplying the coefficients of projection by the vectors and summing the results (sketched below).
- The DFT (and many related transforms) is thus simply this: scaled coefficients of projection of a length-N signal, interpreted as an N-dimensional vector, projected onto N orthogonal basis vectors that happen to be sampled sinusoids which are N harmonics of the sampling rate divided by N. The 1/N scaling term in the IDFT accounts for this (see the sketch below).
- Lots of things follow from this understanding, e.g. how the Cooley-Tukey FFT algorithm works (the even and odd samples each project onto N/2 sinusoids described by roots of unity, with the odd results multiplied by phasors; there’s a sketch of this below too), which is very satisfying!
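To make a few of these points concrete, here are some small sketches I put together while reading. They’re my own, not from the book, so treat them as illustrations rather than anything definitive; I’m using Python (with numpy for the later ones) since that’s what I had handy.

First, the “FOIL” point: multiplying two complex numbers as binomials gives exactly the same answer as multiplying their magnitudes and adding their angles (the values here are arbitrary):

```python
import cmath

# two arbitrary complex numbers
z1 = 3 + 4j
z2 = 1 - 2j

# FOIL: (a + bi)(c + di) = (ac - bd) + (ad + bc)i
a, b = z1.real, z1.imag
c, d = z2.real, z2.imag
foil = complex(a * c - b * d, a * d + b * c)

# polar: multiply the magnitudes, add the angles
polar = cmath.rect(abs(z1) * abs(z2), cmath.phase(z1) + cmath.phase(z2))

assert cmath.isclose(foil, z1 * z2)
assert cmath.isclose(polar, z1 * z2)
```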
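Next, the conjugate-pair point: a real sinusoid of amplitude A is the sum of two complex sinusoids at +w and -w, each with amplitude A/2, and the imaginary parts cancel when they’re summed (again, arbitrary numbers):

```python
import numpy as np

# an arbitrary real sinusoid: A * cos(w*t + phi)
A, w, phi = 2.0, 3.0, 0.7
t = np.linspace(0.0, 1.0, 100)
real_sinusoid = A * np.cos(w * t + phi)

# the same signal as a conjugate pair of complex sinusoids at +w and -w,
# each with half the amplitude
pos = (A / 2) * np.exp(1j * (w * t + phi))
neg = (A / 2) * np.exp(-1j * (w * t + phi))  # the conjugate of `pos`

# the imaginary parts cancel; the real parts add up to the full amplitude
assert np.allclose(pos + neg, real_sinusoid)
```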
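The orthogonality point can also be checked directly: two sampled sinusoids at different frequencies have (numerically) zero inner product when each completes a whole number of cycles over the N-sample window, and generally don’t otherwise:

```python
import numpy as np

N = 64
n = np.arange(N)

def sampled_cos(cycles):
    # a cosine completing `cycles` periods over the N-sample window
    return np.cos(2 * np.pi * cycles * n / N)

# whole numbers of cycles: orthogonal (inner product is ~0 up to rounding)
print(np.dot(sampled_cos(3), sampled_cos(5)))    # ~1e-14

# a non-integer number of cycles: orthogonality no longer holds
print(np.dot(sampled_cos(3), sampled_cos(5.5)))  # clearly nonzero
```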
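The projection and reconstruction point, with an arbitrary orthogonal basis (I’m just using QR on a random matrix to get one here; the book builds up to the sinusoidal basis instead):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8

x = rng.standard_normal(N)                        # an arbitrary length-N vector
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # columns of Q form an orthogonal basis

# coefficient of projection of x onto s: <x, s> / <s, s>
def proj_coeff(x, s):
    return np.dot(x, s) / np.dot(s, s)

coeffs = [proj_coeff(x, Q[:, k]) for k in range(N)]

# multiply each coefficient by its basis vector and sum: perfect reconstruction
reconstruction = sum(c * Q[:, k] for k, c in enumerate(coeffs))
assert np.allclose(reconstruction, x)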
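And the punchline: the DFT is just the (unnormalized) inner products of the signal with N sampled complex sinusoids, the k-th of which completes k cycles over the window. Dividing by each basis vector’s inner product with itself (which is N) gives proper coefficients of projection, and that’s exactly the 1/N that shows up in the IDFT:

```python
import numpy as np

N = 16
n = np.arange(N)
x = np.random.default_rng(1).standard_normal(N)  # an arbitrary length-N "signal"

# N sampled complex sinusoids: row k completes k cycles over the N-sample
# window, i.e. the k-th harmonic of (sample rate / N)
k = n.reshape(-1, 1)
basis = np.exp(2j * np.pi * k * n / N)           # shape (N, N)

# DFT bin k = inner product of x with the k-th basis sinusoid
# (the basis vector is conjugated, per the usual DFT convention)
X = basis.conj() @ x
assert np.allclose(X, np.fft.fft(x))

# each basis vector's inner product with itself is N, so dividing by N turns
# the raw inner products into coefficients of projection; summing
# coefficient * basis vector then reconstructs x (this is the IDFT)
assert np.allclose(basis.T @ (X / N), x)
```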
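Finally, the Cooley-Tukey point, as a bare-bones recursive radix-2 FFT: the even and odd samples each get their own length-N/2 transform, and the odd half is rotated by per-bin phasors (the “twiddle factors”) before the two halves are combined:

```python
import numpy as np

def fft_radix2(x):
    """Minimal recursive radix-2 Cooley-Tukey FFT (len(x) must be a power of two)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    # the even and odd samples each project onto their own N/2 sinusoids...
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    # ...and the odd results get rotated by per-bin phasors ("twiddle factors")
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(2).standard_normal(16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```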
As for the rest of the book: admittedly the later chapters (7 and 8) are much drier imo; 7 isn’t particularly interesting for the most part (for me at least) and 8 seems somewhat thin. The appendices are a similar story, but I suppose they’re meant to be concise. Still, I’m glad I pushed through and carefully read the first 6 chapters; totally worth it IMO :)