Put in earbuds or a headset and (quietly) play two tones that differ slightly in frequency (by 1-10 Hz).
You will still hear a sort of ethereal beat tone between them that's different from the beat you would hear if you were listening to the same tones through speakers. I don't see how this would arise from purely frequency-domain perception, and it's too pronounced (at least in my experiment) to be attributed to bone conduction across the skull.
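If you want to try this yourself, here's a minimal sketch (the frequencies, amplitude, and file name are just placeholders) that writes a stereo WAV with a slightly different pure tone in each channel; play it quietly through earbuds so each ear only gets its own tone:

```python
# Minimal binaural-beat stimulus: one pure tone per channel, a few Hz apart.
import numpy as np
import wave

RATE = 44100                      # sample rate (Hz)
DUR = 10.0                        # seconds
F_LEFT, F_RIGHT = 440.0, 444.0    # example tones, 4 Hz apart

t = np.arange(int(RATE * DUR)) / RATE
left = 0.2 * np.sin(2 * np.pi * F_LEFT * t)     # keep it quiet
right = 0.2 * np.sin(2 * np.pi * F_RIGHT * t)

pcm = (np.stack([left, right], axis=1) * 32767).astype(np.int16)

with wave.open("binaural.wav", "wb") as w:
    w.setnchannels(2)     # stereo
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(pcm.tobytes())
```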
As far as I know, we don't know the exact mechanism by which binaural beats are produced, but they do seem to arise from frequency-domain (FFT-like) information.
The hair cells in our cochlea function essentially as a great big FFT -- this is well-established -- so that is the form our neural input takes from the start. The brain doesn't have access to the underlying waveform at all, as far as I'm aware. It is, however, incredibly sensitive to the relative timing between the two ears (just as an FFT includes phase information).
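To make that concrete, here's a rough numerical illustration (borrowing the 432/428 Hz example from elsewhere in the thread; it's not a model of the cochlea): each ear's own spectrum has no energy anywhere near 4 Hz, but the phase offset between the two ears cycles at exactly the 4 Hz difference.

```python
# Each channel alone: a single spectral peak at its own frequency, nothing at 4 Hz.
import numpy as np

RATE, DUR = 44100, 1.0
F_LEFT, F_RIGHT = 432.0, 428.0
t = np.arange(int(RATE * DUR)) / RATE

left = np.sin(2 * np.pi * F_LEFT * t)
freqs = np.fft.rfftfreq(len(t), d=1 / RATE)
print(freqs[np.argmax(np.abs(np.fft.rfft(left)))])   # -> 432.0; no 4 Hz component

# But the *interaural* phase difference of the two pure tones is
# 2*pi*(F_LEFT - F_RIGHT)*t, which wraps around once every 0.25 s,
# i.e. the between-ear timing cue itself cycles at 4 Hz.
phase_diff = 2 * np.pi * (F_LEFT - F_RIGHT) * t
print(phase_diff[-1] / (2 * np.pi))                   # -> ~4 cycles over 1 second
```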
Our brain does perform advanced signal processing to condense sets of overtones into a single fundamental frequency, which even works in the case of a "missing fundamental", and binaural beats are conceivably explained by this mechanism. Though it could be a different mechanism at work as well, related to how we process audio spatially.
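For anyone who wants to hear the missing-fundamental effect, the stimulus is easy to synthesize (the 200 Hz fundamental below is just an example): there is no energy at 200 Hz, yet most listeners still report a 200 Hz pitch.

```python
# Harmonics 2f..5f of a 200 Hz fundamental, with the fundamental itself omitted.
import numpy as np

RATE, DUR = 44100, 3.0
t = np.arange(int(RATE * DUR)) / RATE
harmonics = [400, 600, 800, 1000]   # multiples of 200 Hz, but no 200 Hz component
signal = sum(np.sin(2 * np.pi * f * t) for f in harmonics) / len(harmonics)
```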
Whoa! At least that gives me some comfort I wasn't imagining it.
I wish I understood some of the neurological explanation of how the beats are perceived, because it doesn't seem to match my understanding of how our ears work. If everything is pushed into the frequency domain based on which parts of the cochlea are stimulated, where does the time-domain beat emerge?
Part of why we evolved two ears is to be able to locate sounds within our perceptive field. I think the best illustration is to listen to recordings made with two head-mounted microphones. I like these; one of them was the closest I’ve come to believing I was standing in a pond while in my own house. It requires headphones to get the effect: https://quietamerican.org/field_vietnam.html
There’s another component at play too: beat frequencies. This happens anytime you have different frequencies playing simultaneously. This is a result of simple waveform interference. Lots of examples, but I’ll never pass up a chance to link to Julius Sumner Miller[0]: https://youtu.be/7dxkW5bsUgs
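To make the "simple waveform interference" point concrete, the beat falls straight out of a trig identity: adding two tones gives a tone at the average frequency whose amplitude is modulated at the difference frequency. A quick numerical check (example frequencies only):

```python
# sin(2*pi*f1*t) + sin(2*pi*f2*t) == 2 * cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t)
import numpy as np

f1, f2 = 432.0, 428.0
t = np.linspace(0, 1, 44100, endpoint=False)
lhs = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
rhs = 2 * np.cos(np.pi * (f1 - f2) * t) * np.sin(np.pi * (f1 + f2) * t)
print(np.allclose(lhs, rhs))   # True: the envelope pulses at |f1 - f2| = 4 Hz
```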
So the brain is doing lots of work to integrate the stereo “image,” in much the same way we can wear 3D glasses and perceive depth[1]. Binaural beats reduce things down to a more fundamental level: you’re playing with how your mind integrates the stereo field in a weird way, and it produces a beat frequency that does not exist in the pulsed air. This may be learned behavior.
[0] I’m eagerly waiting for some music producer to sample this video: “all the music fell out”, “we should have this mechanism called beats”, “beats: wonderful!”, etc.
[1] I wonder what the effect of rapidly switching the left/right components of a stereo image would be. Probably nausea.
The part that doesn't make sense to me is that the binaural beat frequency corresponds to the physical beat frequency of the sound. So if you've got 432 Hz in one ear and 428 Hz in the other, you're going to hear a 4 Hz beat between the two.
If the cochlea is effectively taste buds for sound, the only thing the brain is going to get is which part of the cochlea is being tickled. There's no time-domain information there, just some ambiguous 'pitch'.
If that's the case, though, how does the brain know to synthesize the 4 Hz differential between these two frequencies? The 432 Hz and the 428 Hz aren't making it to the brain, just the fact that both ears are getting tickled in very close but different places.
(Also, my dad absolutely LOVED watching JSM and would always call us into the room any time he was doing one of his crazy experiments on TV. I agree his stuff is very 'sampleable'.)
Edit: Just watched the video, it's actually a gold mine for hip hop lol. Just play this in the background and scrub around his videos - https://www.youtube.com/watch?v=JVISRjhXzzM
Oh that simultaneous site is neat! Doesn’t work for me on iOS but what a great idea. I’ve been using two separate devices for that kind of thing (mostly confirming mashups that I will likely never follow through with, but it’s still fun to do).
Looks like you know more than I do about how our brains process audio. I was running on the assumption that some kind of frequency analysis makes it to our higher processing centers; is that not the case? Given that what we hear all the time is incredibly chaotic (multiple pitches that we hear as chords, lyrics vs. the rest of the music, focusing on one person speaking, etc.), I thought we must at least be running some kind of internal spectrum analyzer and continuously comparing new input to previous averages or something.
There almost has to be a clock-like reference construct somewhere, right? The ability of some people to perceive perfect pitch points to it IMO.
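For what it's worth, the "internal spectrum analyzer" idea maps loosely onto a short-time Fourier transform: slice the signal into overlapping windows and FFT each one. This is only an analogy for how such an analysis could work computationally, not a claim about the neurology:

```python
# Toy "running spectrum analyzer": overlapping windowed FFTs over a signal.
import numpy as np

def running_spectrum(x, rate, win=1024, hop=256):
    """Return (times, freqs, magnitudes) for a sliding windowed FFT."""
    window = np.hanning(win)
    starts = range(0, len(x) - win, hop)
    frames = np.array([np.abs(np.fft.rfft(x[s:s + win] * window)) for s in starts])
    times = np.array([s / rate for s in starts])
    freqs = np.fft.rfftfreq(win, d=1 / rate)
    return times, freqs, frames
```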