
If you want to have fun with this: the initial window (IW) is determined by the sender, so you can configure your server with the right number of packets for your website. It would look something like:

    # bump the initial congestion window (sent) and receive window (advertised)
    # on the default route; <gw> and <if> are your gateway and interface
    ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
A web search suggests CDNs are now at 30 packets for the initial window, so you get ~45 KB there.
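You can sanity-check that the route actually picked the setting up:

    # the configured windows show up in the route dump
    ip route show default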


> A web search suggests CDNs are now at 30 packets for the initial window, so you get ~45 KB there.

Any reference for this?



Thanks!


I'm not going to dig it up for you, but this is in line with what I've read and observed. I set this to 20 packets on my personal site.



We are in a strange world today because our MTU was decided for 10 Mbps Ethernet (the MTU-to-bandwidth ratio on a shared hub controls latency). The strange part is that 10 Mbps is still common for end-user network connections, while 10 Gbps is common for servers, and a goodly number of consumers have 1 Gbps.

That range means the MTU varies from reasonable, where you can argue that an IW of anywhere from 1 to 30 packets is good, to ridiculously small relative to the link, where the IW is similarly absurd.

We would probably be better off if consumers on >1 Gbps links got higher MTUs; then an IW of 10-30 could be reasonable everywhere. The MTU inside cloud providers is already higher (AWS uses 9001), so it is very possible.
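For the curious: on Linux, raising the MTU is a one-liner with iproute2, though it only helps if the NIC and everything else on the layer-2 path agree (eth0 is a placeholder):

    # raise the interface MTU; every device on the L2 segment must support it
    ip link set dev eth0 mtu 9000

    # confirm the new value
    ip link show dev eth0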


TL;DR: we'd have to switch away from Ethernet (802.3) entirely. Even jumbo frames are vulnerable to silent corruption: https://en.wikipedia.org/wiki/Jumbo_frame#Error_detection

It was a good decision at the time, back when everything was far slower and more expensive.


Be a bad citizen and just set it to 1000 packets... There isn't really any downside, apart from potentially clogging up someone on a dial-up connection, and bufferbloat.
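Concretely, that's the same knob as upthread, just cranked up (same placeholder gateway and interface):

    # bad-citizen mode: ~1.5 MB in flight before a single ACK comes back
    ip route change default via <gw> dev <if> initcwnd 1000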


This sounds like a terrible idea, but can anybody pinpoint why exactly?


Anything non-standard will kill shitty middleboxes, so I assume spamming packets faster than anticipated will get you blocked by corporate networks as a security threat of some kind. Mobile carriers also do some weird proxying hacks to "save bandwidth", especially on <4G, so you may also break some mobile connections. I don't have any proof, but shitty middleboxes have broken connections over much less obvious protocol features.

But in practice, I think this should work most of the time for most people. On slower connections, though, your connection will probably slow to a crawl in retransmission hell. Unless you fill up the buffers on the ISP's routers, making every other connection for that visitor slow down or get dropped, too.


Loss-based TCP congestion control, and especially slow start, are relics from the 80s, when the internet was a few dial-up links and collapsed under retransmissions. If an ISP's links can't handle a 50 KB burst of traffic, they need to upgrade them. Expecting congestion should be the exception, not the default.

Disabling slow start and using BBR congestion control (which doesn't rely on packet loss as a congestion signal) makes a world of difference for TCP throughput.
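For anyone who wants to try that, it's roughly this on a recent Linux kernel (note there is no sysctl to switch slow start off entirely; tcp_slow_start_after_idle is the closest standard knob):

    # use BBR as the congestion control algorithm (mainline since 4.9)
    modprobe tcp_bbr
    sysctl -w net.ipv4.tcp_congestion_control=bbr

    # BBR pairs well with the fq packet scheduler
    sysctl -w net.core.default_qdisc=fq

    # keep the window instead of re-entering slow start after an idle period
    sysctl -w net.ipv4.tcp_slow_start_after_idle=0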


Slow start could be a great motivator for battling website obesity, though. If we could give people an easy win here (get your page size down to 300 KB and it will load in one round trip), I think more frontend devs would be thinking about it. (Not much more, though – most still won't care, probably.)
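To put rough numbers on that (back-of-the-envelope: 1460-byte MSS, no loss, header overhead ignored): the first flight carries IW × MSS bytes and slow start doubles the window every round trip, so with today's default IW of 10 the cumulative delivery looks like:

    # cumulative payload per round trip under slow start (IW=10, MSS=1460)
    awk 'BEGIN { cw = 10; total = 0
                 for (rtt = 1; rtt <= 5; rtt++) {
                     total += cw * 1460
                     printf "after %d RTT(s): ~%d KB\n", rtt, total / 1000
                     cw *= 2
                 } }'

That works out to ~14 KB in the first round trip, ~43 KB after two, and about five round trips for a 300 KB page; fitting 300 KB into a single flight would take an IW north of 200 packets.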


Doing that would basically disable the congestion control at the start of the connection.

Which would be kinda annoying on a slow connection.

Either you'd have buffer issues or dropped packets.



