Hacker News

I've actually been struggling with a problem related to this. A first-time page load with a single ~2000-line JavaScript file plus index.html, CSS, and a favicon requests nothing but those 4 resources, which is very quick on a keep-alive HTTP/1.1 server.

I've written all of that from scratch because I got tired of maintaining Node.js.

But when splitting the JS file into pieces and using ES6 modules, say 12 different files, Chrome makes 8 TCP connections on 8 different sockets, and each connection has its own TCP handshake (and TLS handshake for HTTPS). How do you bundle things without a build system or a bundler? Import maps help, and it's not difficult to simply hash each asset and copy it to a "dist/" folder with the hash appended, but the first page load is still slow.

I'm not a web developer professionally (or network engineer), so I'm learning about web networking myself for the first time. It might be helpful to add a section about "traffic shaping"? I've smacked together a service worker that does the work of caching well-enough for now, but I'm definitely doing something rather strange and reinventing something. My page loaded significantly faster when it was just 1 JS file, no caching needed.



It's hard to bundle without a bundler, but...

It sounds like preloading modules would help your case a lot: https://web.dev/articles/modulepreload

The number of connections is a bit of a red herring here, the problem is (typically) that the browser loads one module which then tells it to load another module, and so on. Each round-trip is wasted time. Preloading gives the browser a flat list of all required dependencies right away. By the way, this applies to everything else: having all required resources declared at the top of the page makes things faster. You can even preload some less-obvious things like background images referenced by CSS files.
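Concretely, the flat list is a set of modulepreload links in the page head (the module names here are made up):

```html
<head>
  <!-- Declare the whole module graph up front so the browser can fetch
       everything in parallel instead of discovering imports one
       round-trip at a time. -->
  <link rel="modulepreload" href="/js/app.js">
  <link rel="modulepreload" href="/js/router.js">
  <link rel="modulepreload" href="/js/store.js">
  <script type="module" src="/js/app.js"></script>
</head>
```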

This might achieve the "bundling" you want, in the sense that all the preloaded resources can be multiplexed into a single connection. But again, the number of connections is almost nothing compared to the number of round-trips required.


This is my understanding of HTTP versions and when bundling is necessary:

On HTTP/1.1, Chrome will make up to 6 TCP connections to a host and serialize requests one by one over those connections. This suffers from a waterfall effect: you have to wait for the first 6 requests to complete, then the next batch, and so on. On lossy connections this can also lead to head-of-line blocking on each of those connections, where a delayed packet for one file keeps the connection busy before it "frees up" for the next request. On HTTP/1.1, bundling becomes necessary pretty quickly to guarantee good performance.
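As a rough model (ignoring bandwidth, counting one round trip per batch), 12 module files over 6 parallel connections cost at least 2 sequential batches of requests, versus 1 for a single bundle:

```javascript
// Crude lower bound on sequential request batches over HTTP/1.1,
// where the browser opens at most `connections` sockets per host.
function batches(files, connections) {
  return Math.ceil(files / connections);
}

console.log(batches(12, 6)); // 2 batches for 12 module files
console.log(batches(1, 6));  // 1 batch for a single bundle
```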

On HTTP/2 it will make 1 TCP connection to a host and multiplex requests over that connection, so files may all download in intermixed chunks. This has less connection-negotiation overhead, and it does not suffer from the waterfall effect to the same degree. However, it still has head-of-line blocking on lossy connections, and this is in fact made worse because there is only one connection: if it's blocked at the TCP level on a single packet of a single file, every request behind that packet is blocked as well. I've done some tests, and on reasonable-quality connections there is not much overhead in requesting a hundred files instead of one. The caveat is "on reasonable-quality connections".

On HTTP/3 the transport is QUIC, which runs over UDP. QUIC folds the transport and TLS handshakes into a single round trip (zero for resumed connections), and because its streams are delivered independently, a lost packet only stalls the stream it belongs to, so there is no transport-level head-of-line blocking. However, work that TCP used to do in the kernel (packet-loss recovery, congestion control) now lives in the QUIC implementation in user space, and this complicates implementation. Browsers already support it, but support is especially an issue at the web-server level, so it will take a while for this feature to roll out to servers everywhere. Once the web moves over to HTTP/3, the performance advantages of bundling should largely disappear.

A service worker and/or careful use of caching can be used as a workaround to lessen the impact of requesting many files over HTTP/1.1, but this adds implementation complexity in the application, and a bug may leave clients in a semi-permanently broken state.
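For reference, the service-worker workaround usually amounts to a cache-first fetch handler along these lines (browser-only sketch; the cache name is arbitrary, and a versioning bug here is exactly what can strand clients on stale assets):

```javascript
// sw.js -- cache-first: serve from the cache when possible, otherwise
// fetch from the network and store the response for next time.
const CACHE = "static-v1";

self.addEventListener("fetch", (event) => {
  if (event.request.method !== "GET") return; // only GETs are cacheable
  event.respondWith(
    caches.open(CACHE).then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached;
      const response = await fetch(event.request);
      cache.put(event.request, response.clone());
      return response;
    })
  );
});
```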


HTTP/2 and above will use one connection to retrieve several files. Caddy [1] can act as a static file server and defaults to HTTP/2 when all parties support it, with no configuration required.

If you allow UDP connections in your firewall, it will upgrade to HTTP 3 automagically as well.
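The whole setup can be as small as a Caddyfile like this (the domain and path are placeholders); Caddy provisions TLS and negotiates HTTP/2 and HTTP/3 on its own:

```
example.com {
    root * /var/www/site
    file_server
}
```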

I highly recommend it.

[1] https://caddyserver.com/



