Something I always find pretty mind-blowing is that a physical thin lens can also perform a Fourier transform.
This (generally) only applies to coherent light, so it's not something we're used to every day, but it forms the basis of a lot of laser-based optical technologies. Combined with some other effects you can do "all-optical" signal processing [1,2], or create dynamic, "programmable" holograms [3].
Looking at the page source they seem to be using the library from Project Nayuki[0]. I'm actually quite amazed at how few lines of code are required for an FFT implementation[1].
Anyway, I might give a simple draw-on-canvas website a go, give me a few hours.
If you don't need it in real time and don't have a whole lot of points, you can do discrete Fourier transforms in even less code by just doing FT instead of FFT. Modern computers are fast enough that the most straightforward O(n^2) algorithm that is immediately obvious from the definition of the discrete Fourier transform given in Wikipedia will often be fast enough. E.g., something like this in C:
#include <math.h>

typedef struct {
    double r;  /* real part */
    double i;  /* imaginary part */
} complex;

/* naive O(np^2) DFT of x into y */
void fourier(const complex *x, complex *y, int np)
{
    int n, k;
    for (k = 0; k < np; ++k)
    {
        y[k].r = y[k].i = 0;
        for (n = 0; n < np; ++n)
        {
            /* multiply x[n] by e^(-2*pi*i*k*n/np) and accumulate */
            double c = cos(2*M_PI*k*n/np);
            double s = sin(2*M_PI*k*n/np);
            y[k].r += x[n].r * c + x[n].i * s;
            y[k].i += -x[n].r * s + x[n].i * c;
        }
    }
}
20 years ago, on a middle-of-the-road desktop, that was fast enough that I was able to build an almost-real-time guitar tuner with it.
I remember getting a very helpful tip as an undergrad Math major: once you start doing senior undergrad/early graduate level classes, expect each textbook page to take a long time to read. Upwards of an hour is not uncommon. And I found this to be very true in my own experience.
I'm always struck by this thought when I see code like this. It's not a lot of code, but if I had to actually go through and internalize it, it would certainly take me a while.
> It's not a lot of code, but if I had to actually go through and internalize it, it would certainly take me a while.
Oh yeah, I definitely didn't mean to imply that it was easy code. It has that elegance that you really only see in the kind of concise code that can only be written by people who really grok the underlying system (both programming and FFT).
For sure. As Brooks might say, code like that is high in "essential complexity". There's almost no boilerplate - there's a nearly 1:1 ratio of lines of code to important idea.
I don't think this is exactly what you want, but you might like it anyway. You draw and it breaks the drawing down into a 2D Fourier series, and you can control the number of Fourier terms, which essentially acts like a crude low-pass filter:
You can't just "draw" on the Fourier side because it actually consists of complex numbers, and only the magnitude is plotted. And for a real image the Fourier plot has to be symmetric under a 180-degree rotation (conjugate symmetry). The closest you could get is increasing or decreasing the magnitude of pixel regions.
Maybe a DCT would be more amenable to hand-editing because it's real and has no symmetry constraints.
I should have a 2D line chart where I draw the magnitude at each frequency; it then rotates that curve and filters with it. Right now, the implied line chart is 1 between two frequencies and 0 elsewhere.
Or I should have a 2D image where I draw dark/light; it reflects it appropriately to get the other half and multiplies the spectrum by it (without rotating). Right now, this image looks like a donut.
I'm drawing the filter, you see. It applies the filter, and shows me:
* The source image
* The filtered image: FFT⁻¹(what I drew × FFT(image))
* The point spread function / convolution kernel: FFT⁻¹(what I drew)
> I'd love to be able to draw on the Fourier side.
Notice that the direct and inverse transforms are essentially the same transformation. So you can draw on the image side and swap the two images in your mind.
I don't want to draw an image. I want to draw a filter of an image. Right now, I have a rock-hard cutoff at two frequencies. I'd like to draw a curve, and have it Fourier filter based on that curve.
This is an interactive demo that lets you explore how to remove certain wavelength-related (frequency) content from an image. By wavelength, I mean intensities that vary at some rate from pixel to pixel, either horizontally or vertically across the image. The low-cut slider sets a threshold below which content is removed, and the high-cut slider sets an equivalent threshold above which content is removed.
I think it is interesting because it reveals a couple of things. First, most of the content we find essential in the image lives in the low frequencies (the high-cut slider can be moved very far left before the image appears altered). Second, it demonstrates that a sharp cutoff of frequencies results in ringing (oscillations) in the image (the converse is also true: a sharp edge in the image produces oscillating magnitudes in the frequency plot).
This method of removing wavelength-related content (called Fourier filtering) is very precise about cutting out specific frequencies, but the result can be undesirable if you want images that appear smooth and artifact-free (no blemishes). To produce a smoother image with certain periodic content removed, you have to attenuate it gradually in the Fourier representation (called filter "roll-off") instead of using a sharp cut-off.
A Fourier transform maps a signal from a time domain into an equivalent [0] signal in the frequency domain.
This demo:
1. Performs a Fourier transform on some image pixels, taking the pixel data from the "time" domain into the frequency domain. (Think of time as "how far through the image we are", and the pixel's intensity as the signal's magnitude; frequency in this case becomes a slightly abstract concept.)
2. Visualises that frequency domain in various ways; by default you're seeing the magnitude of the transformed signal.
3. Applies a band-pass filter to that frequency domain, i.e. only allowing signals above a certain frequency (low-cut) and below another certain frequency (high-cut), and removing the rest. Playing with this might give you an intuitive notion of what "frequency" means here.
4. Applies the inverse transform, giving you a signal back in the time domain (i.e. a normally viewable image).
You can see what the effect of bandpassing the frequency-domain signal is on the end result. Thinking about it in terms of Information: if we band-pass half of the given frequency spectrum away, we're essentially throwing away half of the Information, and yet the image is still useful to a human (this is the principle behind lossy compression algorithms, as others have noted).
[0] In the case of a "perfect" transform; in reality most implementations are slightly lossy (floating-point rounding).
I suppose I'm really showing my DSP training by saying that a Fourier transform goes from a time domain. Really it's just a special mapping between two domains, and it just so happens to correlate with the physical analogues: time domain -> temporal frequency, space domain -> spatial frequency. In this case it's really space domain -> spatial frequency, but w/e
I won't ELI5, but maybe someone here remembers the name for chimeric pictures which contain one image in the low frequencies and a different one in the high, so one gets different impressions (smile vs frown, etc.) depending upon whether one is seated at one's monitor or viewing from across the room?
The DFF-Mono pattern is interesting because if you reduce the high-cut you can see the bands disappearing step-wise, in order of decreasing density... I assume this occurs as the Nyquist frequency is passed for those pixels?
> I assume this occurs as the Nyquist frequency is passed for those pixels?
No, the Nyquist frequency isn’t the relevant parameter here. What is happening here is that you have a low-pass filter with a very steep cutoff (or high-pass, depending on what you are sweeping). As you sweep the cutoff, because the filter is so steep, the band suddenly disappears when the content moves from one side of the cutoff to the other.
The "grid" example shows a similar effect, but the difference is even more stark. With Low Cut Frequency at 0%, slowly move the High Cut Frequency between 7% and 9%, on the "Modulus view" you should see the white dots which represent vertical bands in the "Inverse transform" get cut out, so the "Inverse transform" switches from a fuzzy grid to just horizontal bands.
I think JPEG is too. In school, we implemented something like JPEG compression using some Fourier transform stuff that reduced the bitmap size by 90%+ and was still fairly decent looking. It was an image handpicked for suitability under compression, I'm sure. A sunflower, I think?
I had a similar kind of lab in college. I ended up testing it on an image hand-picked for performing very poorly under JPEG compression: a GBA Pokemon game screenshot. Chroma subsampling alone visibly degraded the sharpness of brightly colored objects. Strangely, chroma subsampling degrades photographic imagery far less, and it helps less sophisticated algorithms (like JPEG) allocate limited bits better.
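For reference, 4:2:0-style chroma subsampling is essentially just block-averaging the two chroma planes (a toy sketch; real codecs use proper resampling filters):

```c
/* average each 2x2 block of a chroma plane down to one sample,
   quartering the chroma resolution; w and h are assumed even */
void subsample_chroma(const double *in, double *out, int w, int h)
{
    int x, y;
    for (y = 0; y < h; y += 2)
        for (x = 0; x < w; x += 2)
            out[(y / 2) * (w / 2) + x / 2] =
                (in[y * w + x]       + in[y * w + x + 1] +
                 in[(y + 1) * w + x] + in[(y + 1) * w + x + 1]) / 4.0;
}
```

Sharp, saturated pixel-art edges get smeared by exactly this averaging, while smooth photographic gradients barely change, which matches the observation above.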
The image processing community has moved away from Lena. It's a sexist nude crop from Playboy Magazine. It's uncomfortable and gross to use that in a professional setting where we're meant to be welcoming and inclusive.
[1] https://ieeexplore.ieee.org/document/686739
[2] https://ieeexplore.ieee.org/document/6648413
[3] https://en.wikipedia.org/wiki/Computer-generated_holography