If you want to have fun with this: the initial window (IW) is determined by the sender, so you can configure your server with the right number of packets for your website. It would look something like:

    ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
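To sanity-check that it took, look at the route afterwards (the gateway and interface in the example output below are made up); note the setting is not persistent, so it has to be reapplied on boot:

    # The route should now list the tweaked windows
    ip route show default
    # prints something like: default via 192.168.1.1 dev eth0 initcwnd 20 initrwnd 20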
A web search suggests CDNs are now at 30 packets for the initial window, so you get roughly 45 KB there (30 packets × 1460-byte MSS ≈ 44 KB).
We are in a strange world today because our MTU was sized for 10 Mbps Ethernet (MTU relative to bandwidth on a shared hub is what controls latency). What makes it strange is the spread: 10 Mbps is still common for end-user connections, while 10 Gbps is common for servers, and a goodly number of consumers have 1 Gbps.
Across that range the MTU goes from reasonable, where you can argue that an IW of anything from 1 to 30 packets is good, to ridiculously small relative to the link, where the IW is similarly absurd.
We would probably be better off if consumers on 1 Gbps and faster links got higher MTUs; then an IW of 10-30 packets could be reasonable everywhere. MTU inside cloud providers is already higher (AWS uses 9001 bytes), so it is very possible.
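Raising it is just a link-level setting, for what it's worth; the catch is that every hop on the path has to support the bigger frames, or you get fragmentation or blackholed packets. A sketch, with eth0 as a placeholder interface:

    # Current MTU for the interface
    ip link show dev eth0

    # Bump to jumbo frames (only sane if the whole path supports it)
    ip link set dev eth0 mtu 9001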
Be a bad citizen and just set it to 1000 packets... There isn't really any downside, apart from potentially clogging up anyone on a dial-up connection, and bufferbloat.
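Which is the same knob as above, just cranked all the way up (same placeholders):

    ip route change default via <gw> dev <if> initcwnd 1000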
Anything non-standard will kill shitty middleboxes, so I assume spamming packets faster than anticipated will get corporate networks to block you as a security threat of some kind. Mobile carriers also do some weird proxying hacks to "save bandwidth", especially on <4G, so you may break some mobile connections too. I don't have any proof, but shitty middleboxes have broken connections over much less obvious protocol features.
But in practice, I think this should work most of the time for most people. On slower connections the transfer will probably grind to a halt in retransmission hell, though. Unless you fill up the buffers on the ISP's routers first, making every other connection for that visitor slow down or get dropped too.
Loss-based TCP congestion control, and especially slow start, is a relic from the 80s, when the internet was a few dial-up links and collapsed under retransmission storms. If an ISP's links can't handle a 50 KB burst of traffic, they need to upgrade them. Congestion should be treated as the exception, not the default.
Disabling slow start and using BBR congestion control (which doesn't rely on packet loss as a congestion signal) makes a world of difference for TCP throughput.
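On Linux that's a couple of sysctls. A sketch, assuming a kernel with BBR available (4.9+); fq is the qdisc usually paired with it, and strictly speaking you can't switch off the initial slow start this way, only the restart of slow start after the connection goes idle:

    # Switch from loss-based congestion control (CUBIC by default) to BBR
    sysctl -w net.core.default_qdisc=fq
    sysctl -w net.ipv4.tcp_congestion_control=bbr

    # Don't drop back into slow start after the connection sits idle
    sysctl -w net.ipv4.tcp_slow_start_after_idle=0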
Slow start could be a great motivator for battling website obesity, though. If we could give people an easy win here (get your page weight down to 300 KB and it will load in one round trip), I think more frontend devs would be thinking about it. (Not much more, though – most still won't care, probably.)