In this post we will understand what it means to say “this song has a lot of high frequencies”, then hunt and capture the waveform that allows us to say this! Some Python code will be used, but understanding it is not necessary. Footnotes are more technically advanced.

## The x coordinate of the circle… *is* frequency?!

Sine waves are widely agreed to be the “most basic wave” when doing anything involving periodic signals – whether radio circuit analysis or music synthesis. Indeed, the sine wave is pretty simple as far as waves go, and quite smooth to boot, but I’ve always asked myself: why the sine? There are many other simple, basic waves that could be used as a basis for a theory of frequencies.

… and yet mathematicians and engineers, even musicians, always choose the sine[1]. What makes the sine so special? Could a theory of frequency analysis be built around a different waveform, and how different would it be? How different would music sound if we used another simple waveform as our basis?

The answer turns out to be that the concept of “frequency” is tied very strongly to sine waves – the same waves that, if you were lucky, you heard defined as the *x* coordinate of a point moving along a circle, and otherwise heard defined as “something something hypotenuse”. It is really impossible to talk about frequency without talking about sines. This is a basic fact of math that every mathematician and engineer understands intuitively, but I’ve searched and searched and haven’t seen it treated anywhere, so I’ll give it a try.

## Frequency stays in the family

So what do we mean when we say “frequency”? We mean that something repeats after some time/distance/whatever our x axis represents, and again after twice that, and so on. Of course, not only sines have frequency:

But we don’t want to limit our treatment of frequencies to noticing that things “have frequencies”. We’d like to be able to say “this sound/signal is *composed of* frequencies”:

But to do that, we need some sort of basic signal that *conserves frequency under whatever we mean by “composition”*. And by composition we mean addition:

=

+

Wait, we want our waveform to be conserved under addition to… *what*? Addition to other waveforms of the same frequency but different amplitude and/or phase, because a waveform can differ in amplitude and phase and we’d still consider it the same waveform (see picture to the right if you don’t know what those words mean).

If we knew of a waveform like that, and if it were possible to build every possible signal by adding a bunch of waveforms like that (a big *if*, but French mathematician Joseph Fourier, godfather of electrical engineers, has us covered), then we could add together all the waves in each frequency, look at the amplitude of the resulting wave (which has the same frequency), and say something like “my oh my, this singer hits them high notes straight in the face”. We could even plot a *spectrum*, a plot of how much of each frequency a wave has. But first we need to find such a waveform.

Alas, the square wave is not such a waveform. Adding two square waves with the same frequency (100), same amplitude (0.5), but different phases (0 and 33) gives something that is definitely not a square wave:
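We can check this numerically. A minimal sketch, interpreting the numbers above as samples: a period of 100 samples, amplitude 0.5, and phases of 0 and 33 samples (the helper `square` is just for this illustration):

```python
import numpy as np

period = 100         # samples per period
t = np.arange(1000)  # ten periods' worth of samples

def square(t, amplitude, phase):
    # +amplitude on the first half of each period, -amplitude on the second
    return np.where((t + phase) % period < period / 2, amplitude, -amplitude)

total = square(t, 0.5, 0) + square(t, 0.5, 33)

# a square wave takes exactly two values; the sum takes three (-1, 0, 1),
# so it cannot be a square wave of any amplitude or phase
print(np.unique(total))
```

Three distinct levels instead of two: not a square wave, no matter how we squint.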

Now usually comes the part where I show you how we can use trigonometric identities to prove that for any frequency *f* and parameters *a*, *b*, *p*, *p'* there exist *c* and *p''* so that `a * sin(f * t + p) + b * sin(f * t + p') = c * sin(f * t + p'')`, but where’s the fun in that? We’re not mathematicians, we’re researchers, explorers, discoverers, artists, cavemen of the digital revolution! Let’s go hunt for our waveform!

How *do* we hunt for a waveform with a strange property? It’s pretty easy, actually.

We take any periodic waveform, and we make it *more like the one we want*, and then we do it again, and again, hundreds of times, until we have something that looks *very* similar to what we’re looking for[2].

What could we do to make it *more like a waveform that conserves frequency under addition*? Well, one simple thing we could do is add a phase-shifted version of itself to it.

```
# figure setup etc. omitted – see the final code
import random
import numpy as np
import matplotlib.pyplot as plt

data = np.arange(100)
for _ in range(5):
    plt.plot(data)
    data += np.roll(data, random.randrange(len(data)))
```

This is a nice beginning, but we will need to run it a lot more times to get a good result. Also, notice how the waveforms pick up more and more *vertical offset*, i.e. each one sits higher overall than the previous ones. When we talk about periodic waveforms we usually ignore the vertical offset, also known as DC offset for reasons that are rooted in electrical engineering.

```
data = 1. * np.arange(100)
for _ in range(5):
    data -= np.sum(data) / len(data)
    plt.plot(data)
    data += np.roll(data, random.randrange(len(data)))
```

Hey, this is starting to look like something!

Let’s run it a bunch more times, say 100:

It’s… definitely something like a sine wave, and it clearly conserves its shape under addition to a phase-shifted version of itself, but instead of conserving our frequency it conserves some high multiple of it (am I counting… 20?).

A trick that I found to really improve the results is to only allow phase shifts of up to 1/4 of the signal length (this probably has to do with something called the Nyquist frequency, but as far as I’m concerned I just stumbled upon it while hunting – improvisation skills are very important to hunting):

```
data = 1. * np.arange(100)
for _ in range(10):
    data -= np.sum(data) / len(data)
    plt.plot(data)
    data += np.roll(data, random.randrange(len(data) // 4))
```

Very nice for 10 iterations! Really starting to look like a sine wave, so maybe that’s indeed the waveform that conserves frequency and allows us to “decompose signals into their constituent frequencies”, which would just be a different way to say “decompose the signal into no more than one sine wave at every frequency”. But I digress.

Let’s try to improve our plot a bit. Maybe normalize the amplitude so we get waves of the same size in every iteration and it’s easier to compare them?

```
data = 1. * np.arange(100)
for _ in range(10):
    data -= np.sum(data) / len(data)
    data /= np.amax(data)
    plt.plot(data)
    data += np.roll(data, random.randrange(len(data) // 4))
```

Yes! The picture is getting clearer! We can now see what’s going on in there… Wait. Clearly I’m cheating. We didn’t start from a clean slate, we started with a straight line. Maybe the sine only arises when we do that? Let’s try starting with a random signal (noise) and see what happens:

```
data = np.random.random(100)
for i in range(31):
    data -= np.sum(data) / len(data)
    data /= np.amax(data)
    if i % 5 == 0: plt.plot(data)
    data += np.roll(data, random.randrange(len(data) // 4))
```

Sine again! OK, we might be on to something. Seems like there is indeed a waveform that keeps its shape when added to similar waveforms that share its frequency but not necessarily its amplitude and phase. And we hunted it down simply by taking random noise and making it a bit more like what we wanted, again and again and again… pretty cool, huh?
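As a sanity check (peeking ahead a little at the FFT): if we fix the phase shift at one sample instead of choosing it at random, the hunt becomes deterministic, and we can ask which frequency ends up dominating. A small sketch under that assumption:

```python
import numpy as np

data = 1. * np.arange(100)
for _ in range(100):
    data -= np.sum(data) / len(data)  # remove DC offset
    data /= np.amax(data)             # normalize amplitude
    data += np.roll(data, 1)          # fixed shift of one sample

# the strongest non-DC component sits at frequency 1: one full
# cycle over the signal, i.e. a sine at the fundamental frequency
spectrum = np.abs(np.fft.rfft(data))
print(np.argmax(spectrum[1:]) + 1)  # → 1
```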

You’re welcome to dive into the Jupyter Notebook with all the code I used to write this post.

## Fourier Transform and More

We didn’t address how to decompose a song into sine waves at different frequencies, and we didn’t address the idea of a signal’s spectrum, its representation in the frequency domain:

Those are maybe for another post. I’ve kept you awake long enough, get some rest, maybe eat some chocolate.

In the meantime, there’s a nice explanation at BetterExplained, and a *great* treatment of the whole subject in Dr. Steven W. Smith’s *The Scientist and Engineer’s Guide to Digital Signal Processing*, probably the best technical book I’ve ever read, available for free online (read the PDF, it’s much nicer).

[1] well, not always: there is a thriving field called Wavelet Theory that is all about decomposing signals into other basic waveforms, but sines – more precisely called sinusoidal waveforms when we allow them to take any frequency, amplitude, and phase – are acknowledged by all as *the* most basic waveform

[2] many times when we want to find a *fixed point* of a function *f(x)* we can choose an arbitrary *x*, then compute *f(x), f(f(x)), f(f(f(x))), …* until applying *f* again stops changing the result, and we have found our prey. This technique is useful in surprisingly varied situations.
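A tiny illustration of the technique (the choice of *f(x) = cos(x)* here is mine, picked because its iteration converges from any starting point):

```python
import math

x = 1.0  # any starting point works here
for _ in range(100):
    x = math.cos(x)  # apply f again, and again, and again…

# x is now (numerically) a fixed point: applying f no longer changes it
print(abs(x - math.cos(x)) < 1e-9)  # → True
```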

For example, if you’re holding an unfolded map that includes the town you’re in, and you want to find the exact point on the map that depicts the very location on Earth where the map itself resides: start with any point on the map, find the point on the map that describes the point on Earth where it is currently located, then find the point on the map that describes the point on Earth where *that* point is currently located, and so on until you converge on the point that describes itself.

I first found this technique explained in a commentary on Larry Page and Sergey Brin’s PageRank paper, where they explain how Google works: the key to PageRank is finding an eigenvector with eigenvalue 1 for the matrix of links between all the websites on the internet, and they find an eigenvector of a matrix *A* by computing *A⋅A⋅A⋅…⋅A⋅x* for some non-degenerate vector *x*, and they can compute this really fast because for matrices, sixteen multiplications collapse into four squarings: *A⋅A⋅…⋅A = (((A²)²)²)²*. As a young student my mind was completely blown.
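The squaring trick is easy to demonstrate with a small random matrix – a sketch of the sixteen-fold product from the anecdote, computed with four squarings:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))

# four squarings give A^16 in 4 matrix multiplications instead of 15
P = A
for _ in range(4):
    P = P @ P

print(np.allclose(P, np.linalg.matrix_power(A, 16)))  # → True
```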