
This is interesting, because at every step along the way, each actor took the locally optimal step -- webmasters wanted to serve up working pages to their users, and new browser vendors wanted their users to get pages with all the supported features.

Yet, in the end, we end up with a mess for everybody. What could have been done differently to end up at a good solution? I guess having universally defined and complied with standards would have helped, so a browser could just say "I support HTML 1.3".




> What could have been done differently to end up at a good solution?

Simple: not having a user agent from the start.

Ideally a URI would always return the exact same webpage. But it became necessary to be able to update pages, which broke this assumption, and eventually the need for some kind of authenticated session spawned all kinds of mechanisms that definitively killed off URIs as Uniform Resource Identifiers.

Perhaps if we were to do it all over we'd have a uniform method for authentication, and maybe even the possibility to refer to past versions of a page. Alas it was not to be.


It's total wishful thinking, but I wish someone would take another stab at designing the Web with the benefit of 30 years of hindsight.

Pretty soon we're just going to be executing WebAssembly blobs and that will be that.


You can still do this today by simplifying your code to the point where you don't have to sniff user agents, etc.
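For instance, capability checks can replace UA parsing entirely. A minimal sketch (the `env` parameter, the `pickStorage` name, and the fallback chain are made up for illustration; in a browser you'd probe `window` directly):

```javascript
// Minimal sketch of feature detection instead of UA sniffing.
// `env` stands in for the browser global (window).
function pickStorage(env) {
  // Probe for the capability itself rather than parsing a UA string.
  if (env.localStorage && typeof env.localStorage.setItem === 'function') {
    return 'localStorage';
  }
  if (env.document && 'cookie' in env.document) {
    return 'cookie';
  }
  return 'memory'; // last-resort in-process fallback
}
```

The point is that the code never asks "which browser is this?", only "does this capability exist?", so it keeps working when a new browser shows up.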



Different user agents are used for desktop vs mobile pages for the same browser, so a single user agent is not a viable solution now.


This might not surprise you, but I think serving the mobile and desktop versions via the same URI is an abomination and needs to stop. Encouraging mobile users to use an app instead is downright sacrilegious.


> I guess having universally defined and complied with standards would have helped, so a browser could just say "I support HTML 1.3".

Probably not; standards on the web that don't lag behind implementation end up like XHTML 2.0.


You made me chuckle.


I feel like webmasters shot themselves in the foot and created this mess. I kind of like just serving the page and letting the market figure it out. Developers would have moved faster to make sure their stuff didn't break, and maybe the browser wars and the dark ages that followed would not have happened.


That's not really a pragmatic approach, though. Users don't care that it's the browser's fault, they'll use the service with the better user experience.


A working reference implementation of a page renderer that defined what "correct" display looked like to go along with the spec.


> What could have been done differently to end up at a good solution?

Something like a standardized API for feature-detection, possibly.


That existed: see DOMImplementation.hasFeature.

Turned out, there were cases where browsers returned "true" while their implementation of the feature did not do what authors wanted. There were various reasons for this: the feature detection being decoupled from the feature implementation, bugs in the feature that the detection could not capture, the detection not being fine-grained enough, etc. And there were cases where hasFeature returned "false" while the feature was usable, for similar reasons.

Long story short, at this point the implementation of hasFeature per spec, and in browsers, is "return true".
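The decoupling problem can be illustrated with a toy sketch (all names below are made up, not the real DOM API): a support registry maintained separately from the implementation can drift out of sync with it, while probing the object itself cannot.

```javascript
// Toy illustration of the decoupling: the registry advertises a feature
// independently of whether anything is actually implemented.
const registry = { MutationEvents: true }; // what the browser claims
const impl = {};                           // what actually exists

function hasFeatureLike(name) {
  // hasFeature-style check: ask the registry, not the implementation.
  return registry[name] === true;
}

function probe(name) {
  // Direct detection: look for the capability where you'd use it.
  return typeof impl[name] !== 'undefined';
}
```

Here `hasFeatureLike('MutationEvents')` answers true while `probe` reports that nothing is implemented, which is exactly the mismatch described above.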


The client hints draft is somewhat helping here (10 years later): http://httpwg.org/http-extensions/client-hints.html
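As a hedged sketch of the idea: the server opts in via the `Accept-CH` response header, then reads capability hints like `DPR` and `Width` from the request instead of guessing from the User-Agent string. The handler shape and defaults below are made up for illustration:

```javascript
// Sketch of server-side use of Client Hints (hint names from the draft;
// the function and its defaults are hypothetical).
function chooseImageWidth(hints) {
  const dpr = parseFloat(hints['dpr'] || '1');         // device pixel ratio hint
  const width = parseInt(hints['width'] || '960', 10); // layout width hint
  return Math.round(width * dpr); // physical pixels worth serving
}
```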


At first I thought feature detection as in Sobel, LoG, &c., which made me really confused when reading the replies.


How do other protocols handle versioning? SSL/TLS seems to do it well enough.


Absolutely not.

In TLS we now have two bogus version numbers you should ignore. We also have an extension that signals the real version number. Clients also send a bunch of bogus version numbers to "train" servers to expect and ignore unknown version numbers. This is all because server vendors found it too complicated to implement "if I get a version higher than what I support, I answer with the highest version I do support". Instead they often implement "if I get a version higher than I support, I'll drop the connection".
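The two behaviors described here can be sketched like this (0x0303 and 0x0304 are the wire encodings of TLS 1.2 and 1.3; the function names and the throw-on-unknown behavior are illustrative, not any real server's code):

```javascript
// Sketch of correct vs. broken TLS version negotiation.
const SERVER_MAX = 0x0303; // this hypothetical server speaks up to TLS 1.2

function negotiateCorrectly(clientMax) {
  // The intended rule: answer with the highest version both sides support.
  return Math.min(clientMax, SERVER_MAX);
}

function negotiateBroken(clientMax) {
  // What many deployed servers actually did with an unfamiliar version.
  if (clientMax > SERVER_MAX) throw new Error('connection dropped');
  return clientMax;
}
```

A TLS 1.3 client hitting `negotiateBroken` gets its connection dropped instead of a TLS 1.2 answer, which is why the protocol resorted to disguising itself.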

But all of that was not enough to make TLS 1.3 work. It now also includes sending a bunch of bogus messages that have no meaning and are ignored, just to make it look more like TLS 1.2.

David Benjamin summarized that recently at Real World Crypto: https://www.youtube.com/watch?v=_mE_JmwFi1Y




