Hacker News

Anything non-standard will kill shitty middleboxes, so I assume spamming packets faster than anticipated will get you blocked on corporate networks as some kind of security threat. Mobile carriers also do weird proxying hacks to "save bandwidth", especially on <4G, so you may break some mobile connections too. I don't have any proof, but shitty middleboxes have broken connections over much less obvious protocol features.

But in practice, I think this should work most of the time for most people. On slower connections, though, your transfer will probably crawl to a halt in retransmission hell. Or worse, you'll fill up the buffers on the ISP's routers and slow down or drop every other connection for that visitor, too.



Loss-based TCP congestion control, and especially slow start, are relics from the 80s, when the internet was a handful of dialup links and collapsed under retransmissions. If an ISP's links can't handle a 50 KB burst of traffic, they need to upgrade them. Congestion should be treated as the exception, not the default expectation.
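To put numbers on this, here's a back-of-the-envelope sketch of classic slow start (assuming the RFC 6928 default initial window of 10 segments, a 1460-byte MSS, no loss, and the window doubling every roundtrip):

```python
import math

MSS = 1460          # typical TCP max segment size in bytes (assumption)
INIT_CWND = 10      # RFC 6928 default initial window, in segments

def flights_to_send(nbytes, init_cwnd=INIT_CWND, mss=MSS):
    """Count slow-start flights (roundtrips) needed to deliver nbytes,
    assuming the window doubles each RTT and nothing is lost."""
    segments = math.ceil(nbytes / mss)
    cwnd, sent, flights = init_cwnd, 0, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        flights += 1
    return flights

print(flights_to_send(50 * 1024))    # 50 KB burst  -> 3
print(flights_to_send(300 * 1024))   # 300 KB page  -> 5
```

So even under stock slow start, a 50 KB burst is only a few roundtrips; the real cost shows up on high-latency links, where each of those roundtrips is expensive.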

Disabling slow start and using BBR congestion control (which doesn't rely on packet loss as a congestion signal) makes a world of difference for TCP throughput.
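For context, on Linux BBR can be selected system-wide (sysctl `net.ipv4.tcp_congestion_control=bbr`) or per socket. A sketch of the per-socket route, assuming a kernel with the `tcp_bbr` module loaded (`modprobe tcp_bbr`):

```python
import socket

def congestion_control(sock, algo=None):
    """Optionally switch the socket's congestion control algorithm
    via TCP_CONGESTION (Linux-only), then report what is in effect."""
    if algo is not None:
        try:
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION,
                            algo.encode())
        except OSError:
            pass  # algorithm not loaded on this kernel; default stays
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    return raw.split(b"\0", 1)[0].decode()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(congestion_control(s, "bbr"))  # "bbr" if loaded, else the default (often "cubic")
s.close()
```

The setsockopt call needs no special privileges for algorithms listed in `net.ipv4.tcp_allowed_congestion_control`; disabling slow start itself, by contrast, requires kernel-side changes rather than a socket option.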


Slow start could be a great motivator for battling website obesity, though. If we could give people an easy win here (get your page size down to 300 KB and it will load in just a few roundtrips), I think more frontend devs would be thinking about it. (Not much more, though – most still won’t care, probably.)



