
One thing which really resonated:

"But will you do the boring but necessary browser testing to figure out if what you’re describing is always true, or just most of the time? And will you repeat that testing once new versions have come out? Will you go through related pages..."

This. A thousand times this. The problem for me isn't the quantity of information any more, it's the quality. 10-15 years ago if you hit an even slightly esoteric problem you'd bottom out a search pretty quickly and be on your own. Now, you'll find dozens of blog articles, community answers, Reddit threads... and unless you're very lucky they will all be wrong, from subtle "works on my machine"-isms up to "just commit a god-rights CI token to your repository, it'll be fine" - the telltale sign often being nobody can tell you why this is the solution, merely that they bashed other random solutions to related problems together until a particular combination happened to work.

Authoritative sources like MDN are vital in this context: they give you something to refer to that tells you how things actually work, so you can verify whether the suggestion you or a co-worker found on a blog is a sensible solution or the kind of horrible mess you'd expect to find alongside world-writable S3 buckets and services that regularly time out due to being OOM killed.



This is tangential to your point, but it’s funny how sometimes the amount of information on a topic can ultimately be a detriment due to the dilution of truth over time.

I’ve been spending my free time working with an experimental library. Google stops returning relevant results for searches on this topic around the 10th result. While this is often infuriating and leads to countless hours deep in indecipherable library code, I'm equally likely to stumble upon an in-depth discussion among users about the pros and cons of various solutions. That context is rarely captured for mainstream tools, and when it is, the authors are lauded for their ability to contextualize the problem.

What is most disappointing to me is how often we document the “what” but not the “why”, when most of us NEED the context of “why” to make comparisons across different tools or approaches for our use cases.


To me this is related to the "terrarium problem" of software ecosystems: the larger the ecosystem is, the more capable it is in some sense, but the less it can actually be understood. Because web tech is all-encompassing, it has become an unsustainably large terrarium.

And you can mitigate a terrarium's impact somewhat by defining a protocol around it, but then the protocol becomes the new terrarium. And so it goes. That is why it never really gets solved by more general-purpose layers. It does get solved to some degree by documentation, which at least gives you a map of the jungle, and it gets solved for any end use by distilling things down to a smaller terrarium that can be understood by its definitions rather than its dependencies.


> The problem for me isn't the quantity of information any more, it's the quality. 10-15 years ago if you hit an even slightly esoteric problem you'd bottom out a search pretty quickly and be on your own.

It's not just technical information either. The internet has done this to everything. In comparison, the old mainstream media seemed to be more "truthful", in some sense. It was more truthful when it came to hard, citable facts: most things the MSM offered as facts had cites and turned out to be accurate. The internet, by contrast, swims in bullshit, ranging from opinion dressed up as fact to outright lies told by trolls, on to conspiracy theories from people living in some alternate universe. Worse, since these people believe (probably correctly) that the louder they yell the more people they convince, the internet seems to be mostly this stuff (outposts like Wikipedia being an exception).

And yet, I prefer this situation to what the mainstream media offered. The MSM suffered from two defects. Firstly, they were very prone to repeating accepted memes as fact, over and over again, as if there were no competing theories. The most recent example is masks, where the MSM was flooded with the prevailing western expert opinion that masks had no effect. Back in the day, before the internet, I would have just accepted that. There was no easy way (where easy is spending half an hour entering search terms, clicking and reading) to fact check, so how could you do otherwise?

The second defect is that journalists are bloody hopeless at distilling and summarising the truth from an expert. Before the internet you rarely had an opportunity to see this in action. I had been told repeatedly by elders that if you see a story in the media about an event you attended or a place you know well, you won't recognise it. That happened to me once too - it was a report about the Commonwealth Games in Time magazine. It was exactly as predicted - utterly unrecognisable. But it only happened once, and the lesson faded. Then the internet came along, and you would read an MSM report, then happen on the comments of someone who was there, or read the expert's own words, and it was like "wtf?". Now I find myself treating the output of journalists with suspicion, avoiding it where possible.

Much later it dawned on me why it was like this. The journalists' main task, the one their employers judged and remunerated them on, had nothing to do with how accurately they reported the facts. It was whether you came back for more: bought tomorrow's paper, switched on the TV news. The only relevance of the truth to that endeavour is whether sticking to it brings readers or not. The combination of repeating established memes without question, the overarching drive to deliver addictive brain candy rather than information, and the difficulty of fact checking made the output of the MSM a truly insipid product.

So yeah, the internet is swimming in endless repeats of bullshit and lies, but unlike with the old MSM, floating in that torrent of crap are the actual facts. You just have to put in the work to find them. And once you figure out how to do it, it's not even that much work. Knowing about sites like MDN, which is a reliable, complete, and up-to-date source of information, is one of those tricks. For those of us who like our facts neat, being able to go straight to MDN saves literally hours of sifting through torrents of bullshit or, god help us, reading yards of inscrutable standards to find the one thing we need to know. Losing MDN would be a real disaster in the economic sense - it would cost a lot of people a huge amount of money and time.

Still, as Wikipedia and Stack Overflow demonstrate, it's possible to crowd-source that style of site. Let's hope it doesn't come to that.



