This is interesting, because at every step along the way, each actor took the locally optimal step -- webmasters wanted to serve up working pages to their users, and new browser vendors wanted their users to get pages with all the supported features.
Yet, in the end, we ended up with a mess for everybody. What could have been done differently to end up at a good solution? I guess having standards that were universally defined and actually complied with would have helped, so a browser could just say "I support HTML 1.3".
>What could have been done differently to end up at a good solution?
Simple: not having a User-Agent header from the start.
Ideally a URI would always return the exact same webpage. Except it became necessary to update pages, which broke that assumption, and eventually the need for some kind of authenticated session spawned all kinds of mechanisms that definitively killed off URIs as Uniform Resource Identifiers.
Perhaps if we were to do it all over, we'd have a uniform method for authentication, and maybe even a way to refer to past versions of a page. Alas, it was not to be.
This might not surprise you, but I think serving different mobile and desktop versions of a page from the same URI is an abomination and needs to stop. Encouraging mobile users to use an app instead is downright sacrilegious.
I feel like webmasters shot themselves in the foot and created this mess. I kind of like the idea of just serving the same page to everyone and letting the market figure it out. Developers would have moved faster to make sure their stuff didn't break, and maybe the browser wars and the dark ages that followed would not have happened.
That's not really a pragmatic approach, though. Users don't care that it's the browser's fault; they'll use the service with the better user experience.
It turned out there were cases where hasFeature returned "true" while the browser's implementation of the feature did not do what authors wanted. There were various reasons for this: the feature detection being decoupled from the feature implementation, bugs in the feature that the detection could not capture, the detection not being fine-grained enough, and so on. And there were cases where hasFeature returned "false" while the feature was perfectly usable, for similar reasons.
Long story short, at this point hasFeature, both per spec and in browsers, is implemented as "return true".
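If you want to see what that means in practice, here's a minimal sketch in TypeScript. The hasFeature call and the DOM globals are real; the probe helpers are just my own illustration of the "test the thing you actually want to use" approach that replaced it:

    // Per the current DOM spec, hasFeature() ignores its arguments and always returns true.
    const claimed = document.implementation.hasFeature("Core", "3.0"); // true, no matter what you ask for

    // Practical feature detection instead: poke at the object you intend to use.
    const hasIntersectionObserver = typeof IntersectionObserver !== "undefined";

    const hasWorkingLocalStorage = (() => {
      try {
        // Some browsers expose the API but throw on use (e.g. certain private modes),
        // so actually exercise it rather than just checking that it exists.
        localStorage.setItem("__probe__", "1");
        localStorage.removeItem("__probe__");
        return true;
      } catch {
        return false;
      }
    })();

    console.log({ claimed, hasIntersectionObserver, hasWorkingLocalStorage });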
In TLS we now have two bogus version numbers you're supposed to ignore. We also have an extension (supported_versions) that signals the real version number. Clients additionally send a bunch of bogus version numbers to "train" servers to expect, and ignore, version numbers they don't recognize.
This is all because server vendors found it too complicated to implement "if I get a version higher than what I support, I answer with the highest version I do support". Instead they often implemented "if I get a version higher than what I support, I drop the connection".
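For the record, the rule those servers were supposed to implement is about three lines of logic. A sketch in TypeScript, not any real TLS library's API; the constants are the actual TLS version codepoints, the function names are mine:

    const HIGHEST_SUPPORTED = 0x0303; // say this server only speaks up to TLS 1.2
    const LOWEST_SUPPORTED = 0x0301;  // ...and down to TLS 1.0

    // What servers were supposed to do: clamp to the highest version both sides speak.
    function negotiate(clientVersion: number): number | null {
      if (clientVersion >= HIGHEST_SUPPORTED) return HIGHEST_SUPPORTED; // newer client? answer with our best
      if (clientVersion >= LOWEST_SUPPORTED) return clientVersion;      // older but acceptable? use theirs
      return null;                                                      // genuinely too old: refuse
    }

    // What too many servers actually did, which is why version numbers had to be frozen:
    function negotiateBroken(clientVersion: number): number | null {
      if (clientVersion > HIGHEST_SUPPORTED) return null; // unknown future version? drop the connection
      return clientVersion >= LOWEST_SUPPORTED ? clientVersion : null;
    }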
But all of that was not enough to make TLS 1.3 work. The handshake now also includes bogus messages that have no meaning and are simply ignored, just to make it look more like TLS 1.2 on the wire.
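To make the disguise concrete, here is roughly where the version numbers end up in a TLS 1.3 ClientHello. This is an illustration of the fields described in RFC 8446 and RFC 8701 (GREASE), not a wire-format encoder:

    // Version signalling in a TLS 1.3 ClientHello (illustrative only).
    const clientHelloVersions = {
      legacyRecordVersion: 0x0303, // bogus #1: the record layer still claims TLS 1.2
      legacyVersion: 0x0303,       // bogus #2: the handshake itself also claims TLS 1.2
      supportedVersionsExtension: [
        0x7a7a, // a reserved GREASE value: deliberately meaningless, must be ignored
        0x0304, // TLS 1.3 -- the version actually being offered
        0x0303, // TLS 1.2 as a fallback
      ],
    };

    // On top of that, a handshake in "middlebox compatibility mode" includes dummy
    // change_cipher_spec records that a real TLS 1.3 peer just ignores, so the byte
    // stream looks enough like a TLS 1.2 session to get through broken middleboxes.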