> only at frequencies outside the hearing range of most people.
That is the idea, but the reality is that below a certain bitrate, songs begin to sound weak and metallic. That threshold depends on the listener, the equipment, the song, and the codec.
I cannot give you any nice, objective numbers here since sound quality is heavily subjective. But you cannot discount subjective experience here simply because a study found that x% of the general population cannot discern the difference between 128kbps MP3s and 320kbps MP3s. My own experience is that many songs suffer with 128kbps MP3s, particularly classical. I've used at least 192kbps MP3s since I started storing my music collection on a computer.
Also consider: if bitrate were irrelevant, why would content providers be tending toward higher bitrates? We can safely assume they'd prefer to act in their own interests and keep bandwidth costs as low as possible.
Did you do a well-controlled blind test? That's, I guess, the relevant question. I don't think you can trust your ears if you know what you're listening to.
I do actually recommend doing just that. It doesn't matter who can and cannot hear what; what matters is whether you can hear the difference in a blind test. I did just that before I started buying compressed music. (I didn't try 128kbps MP3s, so I don't know whether I can hear the difference there. I tried 256kbps AAC files – those were the ones I was planning on buying – and I most certainly couldn't hear the difference.)
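For anyone who wants to try this themselves, the usual procedure is an ABX test, and the statistics behind it are simple enough to sketch. Here's a hypothetical Python sketch of the bookkeeping (the function names are mine, not from any real ABX tool): in each trial X is secretly clip A or clip B, the listener guesses, and afterwards you ask how unlikely the score would be under pure coin-flipping.

```python
import random
from math import comb

def hidden_assignments(trials):
    """For each trial, secretly decide whether X is clip A or clip B."""
    return [random.choice("AB") for _ in range(trials)]

def guess_p_value(correct, trials):
    """Probability of getting `correct` or more right by pure guessing
    (one-sided binomial test with p = 1/2)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 14 out of 16 correct is very unlikely to be luck:
print(round(guess_p_value(14, 16), 4))   # → 0.0021
# 9 out of 16 is entirely consistent with guessing:
print(round(guess_p_value(9, 16), 4))    # → 0.4018
```

The point of the hidden assignment is exactly the one above: if you know which file you're hearing, the result is worthless.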
MP3 certainly is limited; it even has some problems that are inherent to the format, which not even a higher bitrate can fix. Short, sharp sounds (think castanets), for example, are a problem.
Because of the way human hearing works, loud sounds mask quiet sounds. MP3 (and other lossy compression algorithms) use this masking effect to hide the noise that results from compressing the audio. In order to do that, the algorithm has to figure out where the mask is, and there is, of course, a time dimension to that mask. MP3 can't have arbitrarily short masks, so it is possible for the noise that's supposed to be hidden under a loud sound to spill over into sections that are actually quiet. This happens when a short loud sound follows silence (the artifact is known as pre-echo). You know, castanets.
No bitrate, however high, can solve that problem (it can only reduce the overall noise that has to be hidden), but newer compression algorithms (like, for example, AAC) are more flexible with their masks and don't necessarily suffer from it to the same degree.
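The spill-over is easy to demonstrate numerically. This is a toy sketch, not real MP3 code (MP3 uses an MDCT filter bank with psychoacoustic bit allocation, not a plain FFT with uniform quantization), but the mechanism is the same: quantization error in the frequency domain gets spread across the whole transform block in the time domain, so a block that is silent before a sharp click ends up with noise *before* the click.

```python
import numpy as np

np.random.seed(0)
n = 1024
block = np.zeros(n)
block[900:910] = np.random.uniform(-1.0, 1.0, 10)  # short, loud "castanet" click

# Transform the whole block at once, then quantize the spectrum coarsely.
spectrum = np.fft.rfft(block)
step = 0.05
quantized = np.round(spectrum / step) * step       # rounds real and imag parts
decoded = np.fft.irfft(quantized, n=n)

noise = decoded - block
# Noise in the silent region *before* the click:
pre_click = np.max(np.abs(noise[:900]))
print(pre_click > 0)   # the error leaks backwards in time
```

Shorter transform blocks confine the noise to a shorter span around the transient, which is why window switching (which MP3 does only coarsely) matters so much for this kind of material.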
Go to http://www.hydrogenaudio.org/forums/ and you can find all the nice objective numbers you want. The reality is that they now do listening tests below 96kbps because the encoders are too good above that threshold. Subjective impression is no more useful here than it is in any other quantifiable, scientific application.
Providers push higher bitrates because customers think they're better and demand them.