
Friendly reminder for people on HN reading this:

I know this is actually quite interesting, but before you start worrying about the latency of the name servers of your TLD, you might want to do something about the metric ton of JavaScript on your site and the 25 different 3rd party servers from which you side load most of it. Also those 6 additional servers from which you load a bunch of TTF fonts. Especially if all your site does is just display some text and two or three pictures.



To add to this, one thing people tend to forget is that HTML has a DNS prefetch option (rel=dns-prefetch) for the offsite JS you absolutely must have.
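For reference, it's a one-line hint in the document head; the hostname here is just a placeholder:

    <link rel="dns-prefetch" href="//cdn.example.com">

The browser resolves the name early, so the lookup is already done by the time the actual request for the script goes out.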

Of course I agree with goliath, which is why I try very hard to write pure HTML5+CSS3 with no JS unless absolutely necessary. It is very rarely necessary. When it is, I very rarely need one of the crazy frameworks; plain JS works pretty well.

Beyond that, this is why Adblock Plus and uMatrix are two must-have add-ons for Firefox. Once you build up your asset rule list with only the couple of scripts needed to run a site, the same site that takes forever for the average visitor can actually be fairly speedy when none of its other JS loads.

Now, for the original topic: if you are on Windows, check out GRC's DNS Benchmark, and if on *nix, check out namebench.


Thanks for the tip. I had heard about it but never read up on it. Here is a bit of info on dns-prefetch: https://developer.mozilla.org/en-US/docs/Web/HTTP/Link_prefe...


You can go a step further and `preconnect` to a particular origin if you know it. This includes the DNS prefetch but also goes ahead and opens the TCP connection (and negotiates TLS).
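Same shape as the dns-prefetch hint; the crossorigin attribute matters when the resource will be fetched with CORS (fonts, for example), and the hostname is again a placeholder:

    <link rel="preconnect" href="https://cdn.example.com" crossorigin>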


> rel=dns-prefetch

This is amazing! How could I have missed this? How are the loading times affected when using this option, if you care to share?


It largely depends on how you use it. For example, if you are using standard Google Fonts, you can prefetch DNS for (or preconnect to) the font files' hostname so the browser has already resolved it by the time the CSS refers to the font files.
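Concretely, the stylesheet comes from fonts.googleapis.com while the font files themselves come from fonts.gstatic.com, so the usual pattern is something like this (the family in the URL is just an example):

    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
    <link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Roboto&display=swap">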


It helps more when the offsite hosts are in another geographic location, like if for some reason a page served from the US is loading something from the EU. Benefits can be marginal otherwise, so I just suggest people play around with it for any offsite requests they have.


Beware: even some text-only browsers have added prefetching; you need to patch the source to turn it off.

And then there are browsers that have an internal stub resolver. Horrible.

https://www.reddit.com/r/chrome/comments/bgh8th/chrome_73_di...

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-...

https://www.chromium.org/developers/design-documents/dns-pre...

https://www.ghacks.net/2019/04/23/missing-chromes-use-a-pred...

https://www.ghacks.net/2013/04/27/firefox-prefetching-what-y...

https://www.ghacks.net/2010/04/16/google-chrome-dns-fetching...

I have been doing "DNS prefetching" since before this term existed.

I do non-recursive lookups and store the data I need in custom zone files or a HOSTS file. I get faster lookups than any "solution" from any third party.

It is sad how much control is taken from the user, always with the stated goal of "making the web faster".

In many cases they are making it slower. The irony of this blog post by a CDN about TLD latency is that some CDNs actually cause DNS-related delay by requiring excessive numbers of queries to resolve names, e.g., Akamai.

Users have the option to choose for themselves the IP address they want to use for a given resource. If they find that the connection is slow, they can switch to another one. It's the same idea as choosing a mirror when downloading open source software. Some users might want this selection done for them automatically; others might not.


> And then there are browsers that have an internal stub resolver.

You mean internal caching resolver? Every application has an internal stub resolver, even if it's just using getaddrinfo, which builds and sends DNS packets to the recursive, caching resolvers specified by /etc/resolv.conf or equivalent system setting. But getaddrinfo is blocking, and various non-portable extensions (e.g. glibc getaddrinfo_a, OpenBSD getaddrinfo_async) are integration headaches, so it's common for many applications to include their own async stub resolver. What sucks is if an internal stub resolver doesn't obey the system settings.
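For anyone who hasn't looked at it, this is roughly the blocking path being described: the application hands a name to getaddrinfo and waits for the system's configured resolvers to answer. A minimal sketch (the hostname and port are placeholders):

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res, *p;
        char buf[INET6_ADDRSTRLEN];

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      /* ask for both A and AAAA */
        hints.ai_socktype = SOCK_STREAM;

        /* Blocks until the resolvers from /etc/resolv.conf answer or time out. */
        int err = getaddrinfo("www.example.com", "443", &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }

        for (p = res; p != NULL; p = p->ai_next) {
            const void *addr = (p->ai_family == AF_INET)
                ? (const void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                : (const void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            printf("%s\n", inet_ntop(p->ai_family, addr, buf, sizeof buf));
        }

        freeaddrinfo(res);
        return 0;
    }

Anything that wants concurrency has to either thread around this call or ship its own async stub resolver, which is where the divergence from system settings tends to creep in.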


As a user, I prefer gethostbyname to getaddrinfo. The text-only browser I use actually has --without-getaddrinfo as a compile-time option, so I know I am not alone in this preference. The best "stub resolvers" are programs like dnsq, dq, drill, etc. They do not do any "resolution"; they just send queries according to the user's specification.

As a user, I expect that the application interfacing with the resolver routines provided by the OS will respect the configuration settings I make in resolv.conf. Having to audit every application for how it handles DNS resolution is a headache.

https://www.xda-developers.com/fix-dns-ad-blocker-chrome/

I recall there were earlier experiments with internal DNS resolution in Chromium, e.g., code that was added and later removed.

Browser DNS caches are another annoyance but that is not what I meant.


> As a user, I prefer gethostbyname to getaddrinfo

On many systems (e.g. OpenBSD) they're implemented with the exact same code. glibc is something of an outlier given its insanely complex implementations interacting with Red Hat's backward compatibility promises. Many of the code paths are the same[1], but getaddrinfo permits stuff like parallel A and AAAA lookups, and minor tweaks in behavior (e.g. timing, record ordering) often break somebody's [broken] application, so I'm not surprised some people have stuck to gethostbyname, which effectively disables or short-circuits a lot of the optimization and feature code.

But, yeah, browsers in particular have done all sorts of crazy things, even before DoH, that were problematic.

[1] As a temporary hack to quickly address a glibc getaddrinfo CVE without having to upgrade (on tens of thousands of deployed systems) the ancient version of glibc in the firmware, I [shamefully] wrote a simple getaddrinfo stub that used glibc's gethostbyname interfaces, and dynamically loaded it as a shared library system-wide. It worked because, while most of the same code paths were called, the buffer overflow was only reached when using getaddrinfo directly. Hopefully that company has since upgraded the version of glibc in their firmware. But at the time it made sense, because the hack was proposed, written, tested, and queued for deployment before the people responsible for maintaining glibc could even schedule a meeting to discuss the process of patching and testing; that process wasn't normally part of firmware upgrades, and nobody could remember the last time they had pushed out rebuilt binaries. glibc was so old because everybody was focused on switching Linux distributions, which of course took years to accomplish rather than months.


As a user, I admire your work.


No kidding. I'm using a >1 Gbit fibre line and the majority of my page load time still goes to downloading recursive, pointless JavaScript dependencies, or doing a billion CORS OPTIONS requests and waiting for the responses. DNS latency doesn't even factor into the end-user experience; it's dominated entirely by front-end design decisions.

If this was a concern, it's an admission that JavaScript developers have optimized to the best of their ability. Which is just sad.


> a billion CORS OPTIONS requests

When I first saw the CORS headers that large sites send with every effin' request, I thought I had gone mad, only to learn that this is actually encouraged... I remember the days of monstrously sized cookies; this is the same, except it won't go away with the rise of server-side sessions.


Do you have a moment to talk about The Great Saver, uMatrix?


Disabling a lot of the bloat makes random pages break, which is an even worse experience.


Would I recommend uMatrix to my non technical friends? Absolutely not.

For technical people it's a matter of self-selection. It's worse for those of us who can't be bothered to check which scripts break the page and decide whether to enable them. It's great for the others. Personally I can't imagine using the web without uMatrix (and uBlock Origin). For the sites that really break no matter what, if I really need them, I either open them in a Vivaldi private window (I only have uBlock in that browser) or, if everything else fails, start Chrome and close it immediately after I've done what I had to do.


But it is compensated for by the sites that actually work better when you strip their non-local JS from them. I've lost track of the number of times I've come to the HN comments and read about how unreadable the page was due to all the popups and ads and modal dialogs, when I had just read the text without any of that. You're absolutely not wrong that some sites get worse, but it's not one-sided in that direction.


uBlock Origin is probably the better middle way for most. It should accomplish what you describe most of the time, while breakage is the exception rather than the rule.


Those random pages are not worth the visit then.


Nothing wrong with preflight checks. What’s wrong is that a SPA will make 100 calls to a web server for data that isn’t even that complex.

So much poor API design out there that most would be better off with server side rendering.


> So much poor API design out there that most would be better off

...fixing the API design.


Many inexperienced developers don't understand the difference in latency between a local in-memory call and a remote HTTP API call, so they treat the two as equal in terms of architectural concerns.


It's been a truism for many years now: "slow" is often not your side of the pipe, especially with high-bandwidth connections.

> If this was a concern, it's an admission that JavaScript developers have optimized to the best of their ability. Which is just sad.

I mean, sure, but "is this the best you can do?" requires you to know what the goals are, and I suspect the issues raised in these comments are hardly on any of those lists. New Relic and its third-party cool-usage-graphing friends are way, way higher in priority.


Absolutely agree with this. The first time I installed the uMatrix extension, half the sites were broken because they wouldn't load unless I greenlighted access to all the third-party servers. Only 30% of the sites I've come across didn't need tinkering with uMatrix to work.

PS: By default, uMatrix disables loading of resources (iframes, scripts, etc.) that are not served by the first-party domain (i.e. the domain itself).


And if they weren't using 60-second TTLs everywhere, they might actually benefit from the caching built into DNS!


Are you suggesting this might be a premature optimization? :)



